IRC log for #gluster, 2014-04-01

All times shown according to UTC.

Time Nick Message
00:03 systemonkey JoeJulian: well... I can't tell any difference. Accessing directories hangs at random places.
00:03 systemonkey JoeJulian: but I appreciate your help regardless. :)
00:06 tdasilva joined #gluster
00:10 primechuck joined #gluster
00:12 Copez joined #gluster
00:18 andreask joined #gluster
00:56 yinyin_ joined #gluster
01:06 bala joined #gluster
01:18 jmarley joined #gluster
01:21 bharata-rao joined #gluster
01:36 tokik joined #gluster
01:42 mattapperson joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:00 chirino joined #gluster
02:11 mattapperson joined #gluster
02:22 harish_ joined #gluster
02:30 chirino joined #gluster
02:32 haomaiwa_ joined #gluster
02:43 sprachgenerator joined #gluster
02:44 mattappe_ joined #gluster
02:58 mattapperson joined #gluster
02:59 mattapperson joined #gluster
02:59 haomaiw__ joined #gluster
03:01 nightwalk joined #gluster
03:11 Durzo joined #gluster
03:12 Durzo can someone clarify this.. I understand the 3.5 release is delayed purely due to documentation.. with that in mind would it be fair to say that the latest beta/RC is considered stable? or are there still bugs to be worked out?
03:23 kanagaraj joined #gluster
03:36 RameshN joined #gluster
03:45 itisravi joined #gluster
03:52 shubhendu joined #gluster
04:07 badone joined #gluster
04:07 hagarth joined #gluster
04:13 ravindran1 joined #gluster
04:19 yinyin joined #gluster
04:20 ngoswami joined #gluster
04:23 ndarshan joined #gluster
04:25 nishanth joined #gluster
04:28 kanagaraj joined #gluster
04:31 aravindavk joined #gluster
04:42 shylesh joined #gluster
04:54 rahulcs joined #gluster
04:54 rahulcs joined #gluster
04:59 swat30 joined #gluster
04:59 RameshN joined #gluster
05:05 deepakcs joined #gluster
05:09 psharma joined #gluster
05:12 bala joined #gluster
05:17 hchiramm_ joined #gluster
05:18 prasanth_ joined #gluster
05:19 RameshN joined #gluster
05:24 bala joined #gluster
05:36 Matthaeus joined #gluster
05:41 sahina joined #gluster
05:44 RameshN joined #gluster
05:46 benjamin_____ joined #gluster
05:47 ravindran1 joined #gluster
05:51 kanagaraj joined #gluster
05:57 saurabh joined #gluster
06:02 kanagaraj_ joined #gluster
06:02 RameshN_ joined #gluster
06:08 lalatenduM joined #gluster
06:10 vimal joined #gluster
06:17 psharma joined #gluster
06:27 davinder joined #gluster
06:27 kanagaraj joined #gluster
06:31 RameshN_ joined #gluster
06:32 jtux joined #gluster
06:35 chirino joined #gluster
06:36 deepakcs joined #gluster
06:40 nshaikh joined #gluster
06:43 dusmant joined #gluster
07:01 ctria joined #gluster
07:02 rgustafs joined #gluster
07:04 chirino joined #gluster
07:04 tjikkun joined #gluster
07:06 dusmant joined #gluster
07:11 samppah hello all
07:11 samppah hi glusterbot
07:12 samppah what's the idea of quorum on replica 2? does it mean that if one server fails then the whole volume goes offline?
07:13 samppah "For a two-node trusted storage pool it is important to set this value to be greater than 50% so that two nodes separated from each other do not both believe they have quorum simultaneously. "
07:13 samppah doesn't this kill the failsafe part or does it still do some magic?
07:15 eseyman joined #gluster
07:19 kanagaraj joined #gluster
07:34 fsimonce joined #gluster
07:37 chirino joined #gluster
07:37 hybrid512 joined #gluster
07:38 kanagaraj joined #gluster
07:39 askb_ joined #gluster
07:40 RameshN_ joined #gluster
07:41 RameshN joined #gluster
07:49 glusterbot New news from newglusterbugs: [Bug 1077516] [RFE] :- Move the container for changelogs from /var/run to /var/lib/misc <https://bugzilla.redhat.com/show_bug.cgi?id=1077516>
07:50 keytab joined #gluster
07:52 fsimonce joined #gluster
07:53 RameshN joined #gluster
07:54 harish_ joined #gluster
08:06 chirino joined #gluster
08:07 DV joined #gluster
08:17 psharma joined #gluster
08:17 ndarshan joined #gluster
08:18 andreask joined #gluster
08:22 Durzo semiosis, around?
08:23 fsimonce joined #gluster
08:29 X3NQ joined #gluster
08:31 elico1 joined #gluster
08:33 elico joined #gluster
08:34 Durzo semiosis, what are the chances of fixing the gluster .deb's to include brick & geo-replication logs in the logrotate script, current one only does /var/log/glusterfs/*.log and skips subdirectories. causing us headaches :/
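For reference, a minimal sketch of a widened logrotate rule covering those subdirectories, assuming the default log layout under /var/log/glusterfs/; the retention settings are placeholders and the packaged rule may differ per distribution:

    /var/log/glusterfs/*.log /var/log/glusterfs/bricks/*.log /var/log/glusterfs/geo-replication/*/*.log {
        weekly
        rotate 4
        missingok
        notifempty
        compress
        delaycompress
        copytruncate    # sidesteps having to signal the gluster daemons to reopen their logs
    }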
08:41 basso joined #gluster
08:43 TvL2386 joined #gluster
08:43 Pavid7 joined #gluster
08:47 Ponyo joined #gluster
08:49 ngoswami joined #gluster
08:53 ndarshan joined #gluster
08:55 elico joined #gluster
08:58 elico1 joined #gluster
09:02 ndarshan joined #gluster
09:10 elico joined #gluster
09:13 spandit joined #gluster
09:20 glusterbot New news from newglusterbugs: [Bug 921215] Cannot create volumes with a . in the name <https://bugzilla.redhat.com/show_bug.cgi?id=921215>
09:20 elico joined #gluster
09:21 cyberbootje joined #gluster
09:22 elico joined #gluster
09:25 elico1 joined #gluster
09:28 RameshN joined #gluster
09:28 kanagaraj joined #gluster
09:28 vpshastry joined #gluster
09:28 ricky-ti1 joined #gluster
09:39 kanagaraj joined #gluster
09:40 elico joined #gluster
09:46 ctria joined #gluster
09:50 glusterbot New news from newglusterbugs: [Bug 973891] cp does not work from local fs to mounted gluster volume; <https://bugzilla.redhat.com/show_bug.cgi?id=973891> || [Bug 1082991] Unable to copy local file to gluster volume using libgfapi <https://bugzilla.redhat.com/show_bug.cgi?id=1082991>
10:02 Copez d
10:04 lalatenduM joined #gluster
10:07 Durzo e
10:22 ctria joined #gluster
10:23 sahina joined #gluster
10:26 nshaikh joined #gluster
10:33 kkeithley1 joined #gluster
10:33 DV joined #gluster
10:34 gdubreui joined #gluster
10:36 calum_ joined #gluster
10:38 chirino joined #gluster
10:39 Slash joined #gluster
10:39 elico1 joined #gluster
10:41 flrichar joined #gluster
10:47 kkeithley1 joined #gluster
10:54 dusmant joined #gluster
10:56 shubhendu joined #gluster
10:57 diegows joined #gluster
11:03 elico joined #gluster
11:10 shyam joined #gluster
11:21 flrichar joined #gluster
11:27 shubhendu joined #gluster
11:34 dusmant joined #gluster
12:08 itisravi joined #gluster
12:08 sprachgenerator joined #gluster
12:23 benjamin_____ joined #gluster
12:23 ctria joined #gluster
12:25 tjikkun joined #gluster
12:30 hagarth joined #gluster
12:32 nikk any guesses when 3.5 will be stable? :)
12:33 nikk i'm interested in the new compression option
12:41 Pavid7 joined #gluster
12:48 recidive joined #gluster
12:50 bet_ joined #gluster
12:52 mattapperson joined #gluster
12:57 sroy_ joined #gluster
13:04 asku joined #gluster
13:08 dbruhn joined #gluster
13:09 ninkotech_ joined #gluster
13:16 dusmant joined #gluster
13:24 lalatenduM nikk, in 3.5 there are some critical bugs in the compression xlator and it does not look like they will be fixed in 3.5.0, maybe in 3.5.1
13:27 shyam joined #gluster
13:28 mattappe_ joined #gluster
13:31 jruggiero wc
13:31 jruggiero left #gluster
13:33 doekia joined #gluster
13:35 bms joined #gluster
13:39 bms My gluster client (proxy) acts as a samba server to windows clients. Are there any secrets to keeping samba able to get locks when gluster operations are being performed like rebalance?
13:41 seapasulli joined #gluster
13:44 shyam joined #gluster
13:46 shyam joined #gluster
13:51 theron joined #gluster
13:53 glusterbot New news from newglusterbugs: [Bug 1079709] Possible error on Gluster documentation <https://bugzilla.redhat.com/show_bug.cgi?id=1079709> || [Bug 1057292] option rpc-auth-allow-insecure should default to "on" <https://bugzilla.redhat.com/show_bug.cgi?id=1057292>
13:55 mattappe_ joined #gluster
13:58 shyam joined #gluster
13:59 rpowell joined #gluster
14:03 chirino joined #gluster
14:07 tjikkun joined #gluster
14:08 jmarley joined #gluster
14:10 primechuck joined #gluster
14:15 jobewan joined #gluster
14:21 wushudoin joined #gluster
14:26 dusmant joined #gluster
14:28 dewey joined #gluster
14:36 LoudNoises joined #gluster
14:44 ndk joined #gluster
14:45 gmcwhistler joined #gluster
14:46 andreask joined #gluster
14:54 rahulcs joined #gluster
14:54 chirino joined #gluster
14:56 bala joined #gluster
15:01 rwheeler joined #gluster
15:20 chirino_m joined #gluster
15:22 shyam joined #gluster
15:22 shyam left #gluster
15:23 benjamin_____ joined #gluster
15:23 lmickh joined #gluster
15:26 sprachgenerator joined #gluster
15:29 theron joined #gluster
15:30 zaitcev joined #gluster
15:34 theron joined #gluster
15:42 theron joined #gluster
15:51 theron joined #gluster
15:53 shubhendu joined #gluster
15:56 theron joined #gluster
16:00 nikk i feel like i'm doing something wrong, i have a replica with four bricks (four servers) and it's 20x slower than the same disk without gluster
16:00 nikk and it gets even slower when i add more bricks
16:00 nikk a lot slower
16:04 samppah nikk: what replica level you are using?
16:05 nikk Type: Replicate
16:05 nikk Number of Bricks: 1 x 4 = 4
16:05 nikk Transport-type: tcp
16:08 samppah do you need to replicate data on four bricks?
16:08 shylesh joined #gluster
16:08 samppah problem is that the client connects to all bricks and writes data to all of them synchronously
16:08 samppah basically write speed = network bandwidth / replication level
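As a rough illustration of that rule of thumb (illustrative numbers, not from this conversation): with a single 1 Gbit/s NIC on the client and replica 4, every write goes to all four bricks over that one link, so usable client write throughput tops out around 1 Gbit/s / 4 = 250 Mbit/s, roughly 30 MB/s, however fast each brick's disk is.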
16:08 hagarth joined #gluster
16:09 samppah hagarth: ping
16:09 hagarth samppah: pong
16:10 nikk samppah: yeah..
16:10 samppah hagarth: about quorum.. what's the idea of quorum on replica 2? does it mean that if one server fails then the whole volume goes offline?
16:10 nikk other than changing to a distributed replica, which would defeat the purpose of what i'm trying to accomplish, do you have any ideas?
16:10 hagarth samppah: are you referring to client quorum or server quorum?
16:10 samppah "For a two-node trusted storage pool it is important to set this value to be greater than 50% so that two nodes separated from each other do not both believe they have quorum simultaneously. "
16:10 nikk samppah: it'll either go offline or go r/o, depending how you have it configured
16:11 hagarth samppah: this is server quorum
16:11 samppah hagarth: hmm.. i think that was mentioned for server quorum..
16:11 samppah hagarth: could you please explain the difference :)
16:11 hagarth samppah: I should probably write a blog post about it :)
16:11 hagarth let me copy/paste from an email that I wrote a few days back ;)
16:11 samppah that would be superb
16:12 samppah sure
16:12 hagarth Server quorum - Each glusterd in the trusted storage pool maintains a  socket connection to its peers. This network connectivity information as  seen by glusterds on the server nodes is used for determining quorum. If  a glusterd on any node sees less than 50% of its peers, it shuts down  all gluster services on that particular node. services are resumed after  the connectivity situation improves.
16:12 hagarth Client quorum - This is based on the number of bricks that a client  is talking to. In the case of a replicated volume, each client talks to  N bricks where N is the replica count for the volume. If a client does  not see N/2 + 1 bricks, it stops performing I/O and returns back an  error to the application. In the case of N=2, the first server (as  specified at the time of volume definition) is given priority. If the  first server is online, I/O happens f
16:14 samppah If the  first server is online, I/O happens fi <- it was cut there
16:14 hagarth ne. But if the first server goes  offline and the second server is online, the client ceases from  performing I/0.
16:16 samppah hmmh
16:19 samppah so if i have only 2 servers and 1 brick per server the volume would then go offline if i was using quorum?
16:19 samppah in case where one server crashes
16:21 sjoeboo joined #gluster
16:21 hagarth samppah: if the quorum is set to 51%, then it would not happen. For two server scenarios, it is recommended to have a third server which acts as an arbitrator.
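For reference, a sketch of the volume options behind the two quorum types described above, using the option names from the GlusterFS documentation of this period (VOLNAME is a placeholder; confirm the exact names with "gluster volume set help" on your version):

    # server-side quorum: glusterd stops the bricks on any node that sees
    # less than the configured ratio of its peers
    gluster volume set VOLNAME cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 51%

    # client-side quorum: the client stops I/O when it cannot see a majority
    # of the replica set (for replica 2, the first-listed brick breaks the tie)
    gluster volume set VOLNAME cluster.quorum-type auto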
16:21 _dist joined #gluster
16:23 samppah hagarth: ok, thanks, i think that it's possible to have a third server
16:24 DV joined #gluster
16:25 hagarth samppah: ok, cool.
16:27 raghug joined #gluster
16:28 daMaestro joined #gluster
16:28 Mo_ joined #gluster
16:44 kaptk2 joined #gluster
16:51 sputnik1_ joined #gluster
16:53 elico1 joined #gluster
16:54 edong23_ joined #gluster
17:04 cyberbootje joined #gluster
17:05 edong23 joined #gluster
17:10 zerick joined #gluster
17:16 lpabon joined #gluster
17:16 rwheeler joined #gluster
17:17 hagarth joined #gluster
17:18 cyberbootje joined #gluster
17:19 rpowell left #gluster
17:24 hagarth1 joined #gluster
17:29 zerick joined #gluster
17:31 ricky-ticky joined #gluster
17:45 cyberbootje joined #gluster
17:48 elico joined #gluster
17:49 nikk is there a functional async write mode that's implemented in 3.4?
17:53 lanning joined #gluster
17:57 elico1 joined #gluster
17:58 badone joined #gluster
18:00 chirino joined #gluster
18:12 nikk is geo replication stable enough for real environments? i haven't touched it yet but am not hearing good things
18:17 elico joined #gluster
18:44 Matthaeus joined #gluster
18:45 lpabon joined #gluster
18:47 lalatenduM joined #gluster
18:52 rotbeard joined #gluster
19:00 gdubreui joined #gluster
19:14 P0w3r3d joined #gluster
19:23 ricky-ti1 joined #gluster
19:44 chirino joined #gluster
19:46 ira_ joined #gluster
19:48 andreask joined #gluster
19:50 chirino joined #gluster
19:59 failshell joined #gluster
20:20 wushudoin joined #gluster
20:27 tdasilva left #gluster
20:28 sijis joined #gluster
20:28 sijis is there some doc regarding best configs for replicas? mostly everything i see just has 2. is there something wrong with having 2, 4 or 5?
20:31 LoudNoises it gets progressively slower for clients to write as you add more replicas
20:32 sijis ok. i was thinking maybe 3 tops. in fact, maybe overkill
20:32 sijis also, does the number of replicas have to match the number of peers?
20:32 Run joined #gluster
20:33 sijis if a cluster has 3 peers, do i need to setup the volume to have 3 replicas
20:33 Run hello, is there anyone available for help on a split-brain ?
20:34 sijis Run: not sure if this will help, but i literally just read this http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/
20:34 glusterbot Title: GlusterFS Split-Brain Recovery Made Easy (at joejulian.name)
20:34 LoudNoises sijis: i'm not an expert on replicas, but my understanding is you just need your number of bricks to be a multiple of your replicas
20:35 Run thanks for the link, but first I need to understand where is the issue
20:35 LoudNoises so if you have three servers with 1 brick on each server, you can't do replica 2
20:36 Run this command:
20:36 Run gluster volume heal www-volume info split-brain | head -50
20:36 sijis LoudNoises: but if i have 4 servers with 1 brick each, i can have 2 replicas?
20:36 Run returns many entries
20:36 haomaiwa_ joined #gluster
20:36 LoudNoises sijis: yes you can run replica 2 with 4 servers x 1 brick each np
20:37 Run what does this logs means? they seems all the same to me.. http://pastebin.com/1j87NbtU
20:37 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
20:37 sijis LoudNoises: does it basically boil down to even numbers?
20:37 LoudNoises if you have replica 2, then yes :)
20:38 sijis LoudNoises: 2 servers, 2 bricks, 2 replicas ok? (just trying to understand it)
20:38 LoudNoises yes that's okay too
20:38 LoudNoises just make sure when you set them up, the replicas go on the different servers
20:38 LoudNoises otherwise you have a single point of failure
20:38 Run Can someone help me? http://fpaste.org/90754/96384672/
20:38 glusterbot Title: #90754 Fedora Project Pastebin (at fpaste.org)
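The entries that heal info split-brain lists are the files (or GFIDs) the self-heal daemon has flagged; one way to see which copies actually disagree, roughly the approach the blog post linked above walks through, is to read the AFR changelog xattrs of a flagged file directly on each brick (the brick path below is a placeholder):

    getfattr -d -m . -e hex /export/brick1/path/to/flagged-file
    # non-zero trusted.afr.<volume>-client-N pending counters for the same file
    # on both bricks are what heal info reports as split-brain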
20:38 sijis LoudNoises: right. but if i have 2 servers, the replica would like on node2 (as well as node1)
20:38 sijis *live
20:39 LoudNoises yes but you have to be explicit about that when you create the volume
20:39 LoudNoises (if you have more than 1 brick per server)
20:39 sijis LoudNoises: right.. create volume replicate x node1:/// node2:/// ?
20:40 LoudNoises yes in the simple case
20:40 sijis my thought was have 2 nodes (maybe 3)... but in either case, it would have multiple bricks (even in number)
20:41 sijis but i'm only thinking, at most 4 bricks per node
20:44 LoudNoises yea that's fine, just make sure you don't do s1:/b1 s1:/b2, instead I believe you want to do s1:/b1 s2:/b1
20:44 LoudNoises but i'm having trouble finding where that is discussed for certain
20:45 LoudNoises also, most people like to have raid underneath each brick
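A sketch of what that ordering looks like for two servers with two bricks each (hypothetical hostnames and brick paths): bricks are grouped into replica sets in the order they are listed, so alternating servers keeps each pair on different machines.

    gluster volume create vol1 replica 2 \
        server1:/export/brick1 server2:/export/brick1 \
        server1:/export/brick2 server2:/export/brick2
    # replica set 1: server1:/export/brick1 + server2:/export/brick1
    # replica set 2: server1:/export/brick2 + server2:/export/brick2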
20:45 sijis Run: not having any experience, i'm guessing those are the id's of the files on the nodes.
20:45 sijis LoudNoises: sure. this is all vm though
20:45 Run thanks for trying sijis
20:46 Run any idea why all those logs are replicated every minute or so?
20:46 sijis LoudNoises: if you find that link, i've love to see it
20:47 Run and I have 150% cpu load on glusterfsd
20:50 chirino joined #gluster
20:55 sijis LoudNoises: you can specify more nodes than the replica number. '' create vol1 replica 2 node1:/data node2:/data node3:/data ' so how would i know where the data is at?
20:55 sijis i would guess one of the nodes is out of sync
20:55 Matthaeus1 joined #gluster
20:56 LoudNoises hmm yea i haven't explored that, but you can always look at the filesystems on each server to see where files live
20:57 sijis i saw that... it just made me think what the results would be
20:59 sijis LoudNoises: according to this, it would be split between the two sets. http://www.howtoforge.com/distributed-replicated-storage-across-four-storage-nodes-with-glusterfs-3.2.x-on-centos-6.3
20:59 glusterbot Title: Distributed Replicated Storage Across Four Storage Nodes With GlusterFS 3.2.x On CentOS 6.3 | HowtoForge - Linux Howtos and Tutorials (at www.howtoforge.com)
20:59 LoudNoises yes a file will get assigned to a pair of servers
20:59 LoudNoises it's just like a standard distributed setup
21:00 sijis LoudNoises: but if node1 talks to set 2 and is looking for file1 and it's not there.
21:00 sijis what would happen?
21:00 sijis could it just redirect to set1 that has the file
21:01 LoudNoises what is node1 in this case
21:01 sijis node1 = client
21:01 LoudNoises if a client mounts the volume, it will know to query for the set of servers that has the file
21:01 LoudNoises that's the DHT
21:01 sijis DHT
21:01 sijis ?
21:01 sijis glusterbot: DHT?
21:02 sijis LoudNoises: found it
21:02 sijis gotcha
21:02 sroy_ joined #gluster
21:17 seapasulli joined #gluster
21:35 Matthaeus joined #gluster
21:43 MacWinne_ joined #gluster
21:49 klaas_ joined #gluster
21:50 RyanD- joined #gluster
21:50 T0aD- joined #gluster
21:51 seapasulli_ joined #gluster
21:54 SteveCoo1ing joined #gluster
21:54 badone joined #gluster
21:55 calston joined #gluster
21:55 kkeithley1 joined #gluster
22:12 Pavid7 joined #gluster
22:13 efries joined #gluster
22:16 siel joined #gluster
22:18 rpowell joined #gluster
22:19 cyberbootje joined #gluster
22:21 rpowell1 joined #gluster
22:22 rpowell1 left #gluster
22:23 badone joined #gluster
22:25 cyberbootje1 joined #gluster
22:27 doekia joined #gluster
22:37 cyberbootje joined #gluster
22:43 Pavid7 joined #gluster
22:53 al joined #gluster
23:06 yinyin joined #gluster
23:08 cyberbootje joined #gluster
23:10 m0zes joined #gluster
23:11 mattapperson joined #gluster
23:13 seapasulli joined #gluster
23:14 aarongr_ joined #gluster
23:30 SFLimey joined #gluster
23:32 diegows joined #gluster
23:34 Durzo can someone clarify this.. I understand the 3.5 release is delayed purely due to documentation.. with that in mind would it be fair to say that the latest beta/RC is considered stable? or are there still bugs to be worked out?
23:41 nightwalk joined #gluster
23:47 fidevo joined #gluster
23:48 mattappe_ joined #gluster
23:49 jclift Durzo: There are still some minor-ish bugs that need to be worked out.
23:49 jclift eg: https://bugzilla.redhat.com/show_bug.cgi?id=1077159
23:49 glusterbot Bug 1077159: high, unspecified, ---, vshastry, ASSIGNED , Quota hard limit not being enforced consistently
23:50 Durzo jclift, im only interested in using it for libgfapi to run a kvm cluster.. would it be stable in that regard?
23:50 jclift Durzo: Dev, Test, or Production?
23:51 Durzo internal production to an IT company
23:51 Durzo not externally facing
23:51 jclift Durzo: I wouldn't run _any_ .0 release software immediately into production.
23:52 Durzo is the latest 3.4 branch going to cut it with kvm & libgfapi though?
23:52 jclift That's a good question.  I don't know the answer to it. :/
23:52 Durzo 3.5.0 it is! :P
23:52 jclift Heh
23:53 jclift Durzo: Have you asked on the gluster-users mailing list?
23:54 Durzo nope
23:54 Durzo i did not because i was told to expect 3.5.0 last week
23:54 Durzo now my manager is expecting a kvm cluster by the end of this week
23:55 Durzo as its for us and not a client, im happy to go with 3.5.0 RC or latest beta, and work through any issues as they go
23:55 Durzo things like quota wont matter
23:56 Durzo just need a basic 2 brick replica with gfapi for kvm/ovirt
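For that use case, the qemu side is just a gluster:// URI on the drive (supported in qemu 1.3 and later when built with libgfapi); the host, volume name and image path below are placeholders, and on 3.4/3.5-era servers clients on unprivileged ports typically also had to be allowed (the rpc-auth-allow-insecure bug mentioned earlier in this log is about that default):

    # on the gluster servers (commonly needed for libgfapi clients)
    gluster volume set vmvol server.allow-insecure on

    # on the hypervisor
    qemu-system-x86_64 -m 2048 \
        -drive file=gluster://storage1/vmvol/images/guest1.qcow2,if=virtio,cache=none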
