
IRC log for #gluster, 2015-01-13


All times shown according to UTC.

Time Nick Message
00:09 lpabon joined #gluster
00:13 gildub joined #gluster
00:20 wushudoin joined #gluster
00:44 side_control joined #gluster
01:21 tziom joined #gluster
01:52 schrodinger joined #gluster
01:56 DarkBidou joined #gluster
01:56 DarkBidou hi there
01:56 DarkBidou i'm having performance issue with gluster and replication. can anyone help please ?
01:58 eka joined #gluster
02:04 purpleidea DarkBidou: the volunteers here would probably be happy to help, but it's best to start by posting a lot more information. your question lacks details and is far too general.
02:05 DarkBidou i'm running drupal website, centos with apache
02:05 DarkBidou mod_fcgid, no APC
02:05 DarkBidou and mariaDB as master/master
02:05 bala joined #gluster
02:05 DarkBidou on a dev server, basic LAMP, the web page loads in about 1 sec (which is slow but...)
02:06 DarkBidou on my gluster, with a varnish in front of it, replication, it loads in 4 sec
02:06 DarkBidou i notice the CPU load is high with glusterd
02:06 eryc joined #gluster
02:06 DarkBidou APC is not enabled, because of the fcgid
02:07 DarkBidou i believe gluster is the bottleneck, because it uses a lot of CPU
02:07 DarkBidou let me post the gluster config
02:07 haomaiwa_ joined #gluster
02:08 Gill joined #gluster
02:08 DarkBidou http://pastebin.com/xUqmEr4E <-- /etc/gluster/datastore.vol
02:08 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
02:08 glusterbot DarkBidou: <'s karma is now -7
02:09 DarkBidou ./etc/glusterfs/datastore.vol       /var/www/html                glusterfs defaults,_netdev,acl      0 0
02:09 DarkBidou this is how it is mounted
02:09 DarkBidou it has GB ethernet, these are the perf numbers
02:10 DarkBidou shell> dd if=/dev/zero of=./test.txt bs=1024k count=1000 conv=sync
02:10 DarkBidou 1048576000 bytes (1.0 GB) copied, 9.79598 s, 107 MB/s
02:10 DarkBidou shell> dd if=test.txt of=/dev/zero bs=1M count=10000
02:10 DarkBidou 1048576000 bytes (1.0 GB) copied, 6.52866 s, 161 MB/s
02:14 tryggvil joined #gluster
02:19 DarkBidou what do you think?
02:24 harish joined #gluster
02:30 DarkBidou are you still around ? purpleidea
02:31 eryc joined #gluster
02:32 nangthang joined #gluster
02:38 nrcpts joined #gluster
02:40 thangnn_ joined #gluster
02:47 hagarth joined #gluster
03:02 eryc joined #gluster
03:08 bharata-rao joined #gluster
03:09 bala joined #gluster
03:12 julim joined #gluster
03:14 lalatenduM joined #gluster
03:15 tryggvil joined #gluster
03:16 DarkBidou :S
03:20 gildub joined #gluster
03:22 eryc joined #gluster
03:36 hagarth joined #gluster
03:36 soumya joined #gluster
03:37 Intensity joined #gluster
03:37 harish joined #gluster
03:39 DarkBidou i run gluster 3.5.2
03:40 suman_d joined #gluster
03:41 DarkBidou every post / forum that i find seems to claim gluster is slow, no matter what you do
03:45 rejy joined #gluster
03:53 itisravi joined #gluster
03:53 RameshN joined #gluster
03:56 shubhendu joined #gluster
04:06 side_control joined #gluster
04:06 nishanth joined #gluster
04:08 Lee- DarkBidou, you don't think there's some sort of caching involved in your test? I mean you said 1GbE and you got 161MB/s. Perhaps you should try with larger than 1GB test. Also it seems people tend to have issues with frequent file accesses (like what occurs when you have lots of small files).
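[Note: a sketch of a less cache-friendly throughput test along the lines Lee- suggests; the path, the ~8GB size (larger than the box's 4GB RAM) and the need for root are assumptions, not taken from this log:]
    dd if=/dev/zero of=/var/www/html/ddtest.img bs=1M count=8192 conv=fdatasync   # write ~8GB (> RAM) and flush before reporting
    echo 3 > /proc/sys/vm/drop_caches                                             # as root: drop the page cache so the read is not served from memory
    dd if=/var/www/html/ddtest.img of=/dev/null bs=1M                             # read it back cold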
04:09 rafi1 joined #gluster
04:13 ppai joined #gluster
04:13 sakshi joined #gluster
04:30 gem joined #gluster
04:30 nishanth joined #gluster
04:31 Manikandan joined #gluster
04:37 ndarshan joined #gluster
04:37 anoopcs joined #gluster
04:42 DarkBidou ok.. i have only 4GB RAM on this server, and it's pretty much used by mariadb
04:42 DarkBidou it's full of small files (website, drupal)
04:42 DarkBidou i found out that enabling APC gains me 1 sec
04:43 DarkBidou it's not loading in 4 sec but in 3 sec
04:44 rjoseph joined #gluster
04:44 spandit joined #gluster
04:47 DarkBidou setting the cache to 2GB didn't change the performance
04:47 jiffin joined #gluster
04:48 DarkBidou website is 78M on disk
04:49 kshlm joined #gluster
04:50 raghug joined #gluster
04:50 prasanth_ joined #gluster
04:56 kanagaraj joined #gluster
04:58 hagarth joined #gluster
04:59 DarkBidou some other changes: mariadb >= 5.5.40 now handles the query cache
04:59 DarkBidou i'm loading in 2.7 sec now
05:00 DarkBidou however, the original is about 1 sec
05:00 DarkBidou any advice you can give, please ?
05:02 ppai joined #gluster
05:04 DarkBidou joined #gluster
05:16 atinmu joined #gluster
05:16 lalatenduM joined #gluster
05:22 kumar joined #gluster
05:29 meghanam joined #gluster
05:29 smohan joined #gluster
05:31 anil joined #gluster
05:34 dusmant joined #gluster
05:36 shubhendu joined #gluster
05:42 overclk joined #gluster
05:42 vikumar joined #gluster
05:51 maveric_amitc_ joined #gluster
05:53 raghu` joined #gluster
05:56 saurabh joined #gluster
06:09 elico joined #gluster
06:10 ricky-ti1 joined #gluster
06:16 nrcpts joined #gluster
06:16 misch joined #gluster
06:16 nrcpts joined #gluster
06:17 spandit joined #gluster
06:18 dusmant joined #gluster
06:22 glusterbot News from newglusterbugs: [Bug 1179050] gluster vol clear-locks vol-name path kind all inode return IO error in a disperse volume <https://bugzilla.redhat.com/show_bug.cgi?id=1179050>
06:22 glusterbot News from newglusterbugs: [Bug 1170942] More than redundancy bricks down, leads to the persistent write return IO error, then the whole file can not be read/write any longer, even all bricks going up <https://bugzilla.redhat.com/show_bug.cgi?id=1170942>
06:22 glusterbot News from newglusterbugs: [Bug 1163561] A restarted child can not clean files/directories which were deleted while down <https://bugzilla.redhat.com/show_bug.cgi?id=1163561>
06:24 ctria joined #gluster
06:32 karnan joined #gluster
06:33 nangthang joined #gluster
06:37 soumya joined #gluster
06:38 shubhendu joined #gluster
06:52 glusterbot News from newglusterbugs: [Bug 1180015] reboot node with some glusterd glusterfsd glusterfs services. <https://bugzilla.redhat.com/show_bug.cgi?id=1180015>
06:55 bala joined #gluster
06:58 deepakcs joined #gluster
06:58 quydo joined #gluster
07:01 harish joined #gluster
07:10 shubhendu joined #gluster
07:10 dusmant joined #gluster
07:11 jtux joined #gluster
07:13 quydo joined #gluster
07:22 kshlm joined #gluster
07:23 rgustafs joined #gluster
07:37 shubhendu joined #gluster
07:38 jtux joined #gluster
07:40 nbalacha joined #gluster
07:44 ricky-ti1 joined #gluster
07:46 LebedevRI joined #gluster
07:49 ckotil joined #gluster
07:52 ricky-ticky1 joined #gluster
07:53 ppai joined #gluster
07:54 mbukatov joined #gluster
07:55 mbukatov joined #gluster
07:57 lmickh joined #gluster
07:58 partner alright, the storage is about to hit the road, then we just wait...
08:07 vikumar joined #gluster
08:08 rafi1 joined #gluster
08:12 aravindavk joined #gluster
08:13 vikumar__ joined #gluster
08:19 misch joined #gluster
08:20 itisravi joined #gluster
08:29 dusmant joined #gluster
08:31 fsimonce joined #gluster
08:33 anoopcs joined #gluster
08:35 Manikandan joined #gluster
08:46 T0aD joined #gluster
08:50 anoopcs joined #gluster
08:56 calum_ joined #gluster
09:01 Slashman joined #gluster
09:17 ndarshan joined #gluster
09:30 ppai joined #gluster
09:31 Pupeno joined #gluster
09:33 harish joined #gluster
09:35 ndarshan joined #gluster
09:38 SOLDIERz joined #gluster
09:43 Manikandan joined #gluster
09:44 chirino joined #gluster
09:45 sakshi joined #gluster
09:47 Pupeno joined #gluster
09:47 gem joined #gluster
09:48 gothos_ joined #gluster
09:55 gothos_ joined #gluster
09:56 nishanth joined #gluster
09:56 gothos_ joined #gluster
10:00 ctria joined #gluster
10:00 bala joined #gluster
10:01 [Enrico] joined #gluster
10:02 tryggvil joined #gluster
10:08 atinmu anil, 1173414
10:11 nrcpts joined #gluster
10:13 diegows joined #gluster
10:18 badone joined #gluster
10:24 hagarth joined #gluster
10:24 kanagaraj joined #gluster
10:25 TvL2386 joined #gluster
10:27 nshaikh joined #gluster
10:41 fandi joined #gluster
10:46 kanagaraj joined #gluster
10:53 ninkotech joined #gluster
10:53 glusterbot News from newglusterbugs: [Bug 1181500] Can't mount glusterfs <https://bugzilla.redhat.com/show_bug.cgi?id=1181500>
10:53 ninkotech_ joined #gluster
11:10 ppai joined #gluster
11:14 sakshi joined #gluster
11:18 eka joined #gluster
11:20 nishanth joined #gluster
11:20 bala joined #gluster
11:23 glusterbot News from newglusterbugs: [Bug 1181543] glusterd  crashed with SIGABRT  if  rpc connection is failed in debug mode <https://bugzilla.redhat.com/show_bug.cgi?id=1181543>
11:25 gem joined #gluster
11:31 rgustafs joined #gluster
11:33 Manikandan joined #gluster
11:35 maveric_amitc_ joined #gluster
11:40 soumya_ joined #gluster
11:42 kkeithley1 joined #gluster
11:47 SOLDIERz joined #gluster
11:48 SOLDIERz hey guys, I updated to 3.6 and when I print gluster volume status, one of my nodes is listening on port 49152 and all the others on 49153. is this normal
11:55 calum_ joined #gluster
11:57 SOLDIERz ?
11:58 misch joined #gluster
12:00 ndevos REMINDER: Gluster Community Bug Triage meeting starting now in #gluster-meeting
12:01 atinmu anil, ping
12:01 glusterbot atinmu: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
12:02 misch joined #gluster
12:06 Sunghost joined #gluster
12:12 Sunghost Hello, how can i solve this: i have a 2-brick distributed volume. 1 brick is totally lost and i moved the remaining data from brick2 directly to a backup disk, but the space is not cleared.
12:13 Sunghost as far as i understand this happens due to the hardlinks in the .glusterfs dir. i also found this: find .glusterfs -type f -links -2 -exec rm {} \;
12:13 Sunghost so this would clean up all files which have fewer than 2 hardlinks; that's ok if both bricks are online, so can i simply exchange -2 -> -1 ??
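[Note on the find invocation quoted above, as plain GNU find semantics rather than anything gluster-specific: -links -2 means "fewer than 2 hard links", i.e. gfid files under .glusterfs whose companion link on the brick is gone; -links -1 would mean "fewer than 1 link", which no existing file can satisfy, so it would match nothing. A dry-run sketch, with a hypothetical brick root:]
    cd /bricks/brick2                            # hypothetical brick root, run on the brick itself
    find .glusterfs -type f -links -2 -print     # list the orphaned gfid hard links before deleting anything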
12:16 anil atinmu, pong
12:16 atinmu anil, u r not using that setup right?
12:17 anil atinmu, no
12:20 fandi joined #gluster
12:22 morse joined #gluster
12:23 calum__ joined #gluster
12:26 Sunghost any idea how i can delete all files from missing bricks out of .glusterfs?
12:28 misch joined #gluster
12:28 kanagaraj joined #gluster
12:29 RameshN joined #gluster
12:36 misch joined #gluster
12:40 DV joined #gluster
12:41 elico joined #gluster
12:43 ppai joined #gluster
12:46 nishanth joined #gluster
12:48 ira joined #gluster
12:49 fandi Sunghost: what's the problem .. and why do you need to delete files from the missing brick
12:49 fandi ?
12:49 misch joined #gluster
12:50 tih joined #gluster
12:50 B21956 joined #gluster
12:50 nbalacha joined #gluster
12:50 Sunghost oh hello and thanks for responding. the problem was: i had 2 servers in a distributed volume - one server died with 2 hdd failures at once, so the volume was half lost
12:51 Sunghost i wanted to copy all remaining files over the client, but got a lot of read errors, i think due to the lost brick
12:51 Sunghost so i stopped the volume and copied directly from the running brick, and in the meantime created a new volume with the old server and 2 new disks
12:51 RameshN joined #gluster
12:52 Sunghost so i copied from the old vol1 brick to the 2 new bricks of vol2, but the .glusterfs on vol1 didn't get smaller
12:52 tih Simple question, I hope: what updates atime when files are read? Specifically, will atime still be updated if the client FUSE implementation blocks it (in effect having a noatime option on the client)?
12:52 Sunghost i think of hardlinks, my idea is now to delete all hardlinks/files/data which belongs to the lost brick
12:53 glusterbot News from newglusterbugs: [Bug 1181588] RFE: Rebalance shouldn't start when clients older than glusterfs-3.6 are connected <https://bugzilla.redhat.com/show_bug.cgi?id=1181588>
12:55 fandi Sunghost: I'm not sure about this, but I think you need to use another method
12:55 fandi Sunghost: as far as I know, if you are running 2 servers it's better to use replicate
12:55 misch joined #gluster
12:56 Sunghost i use raid6 on both because i don't lose much diskspace ;)
12:59 Sunghost i think i have a logic failure in this. so think about it: in distributed, each brick holds its own hardlinks in .glusterfs and a kind of link to the other brick for the directory, right?
13:00 Sunghost so the size of .glusterfs is nearly the same as the data on the brick, right? so deleting those files/links from the lost distributed brick gains nothing
13:00 Sunghost therefore i have to delete those files which i have already manually moved from the running brick, which gluster didn't notice, right?
13:01 misch joined #gluster
13:04 ndevos tih: I think you also need to mount the brick filesystems with noatime
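[Note: a minimal sketch of what "noatime on the brick filesystems" could look like in fstab; the device, mount point and the inode64 option are assumptions, not taken from this log:]
    /dev/sdb1   /bricks/brick1   xfs   defaults,noatime,inode64   0 0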
13:07 Sunghost mh ok - i mounted it manually like mount -t glusterfs server:volume /mnt/vol , where should i use noatime, or is that only possible in fstab?
13:09 Gill joined #gluster
13:10 partner whee, first boxes up in new home and without damage from transportation
13:12 bennyturns joined #gluster
13:13 Sunghost sorry ndevos - understood - i use xfs; is that not already done by default? i mount with relatime
13:16 Gill good job partner!!
13:16 Gill how many more to go?
13:17 ndevos Sunghost: not sure how it is related, I only answered tih's question...
13:17 Sunghost ok, theoretically: i change the mount to noatime, start the old vol1 and move again, which will now work? gluster recognizes this and cleans up .glusterfs automatically?
13:17 Sunghost oh sorry, thought it was for me ;)
13:18 _shaps_ joined #gluster
13:18 ndevos :)
13:22 elico joined #gluster
13:24 partner Gill: 50% done at this point. another batch goes out in couple of days
13:24 Gill oh so all done for today?
13:24 partner no, need to verify the rest will be ok also, 50% done for today
13:25 Gill oh wow
13:25 Gill good luck!
13:26 DV joined #gluster
13:32 suman_d joined #gluster
13:35 Sunghost any ideas about my problem?
13:37 Arminder joined #gluster
13:39 haakon_ joined #gluster
13:39 chirino joined #gluster
13:42 _Bryan_ joined #gluster
13:42 edwardm61 joined #gluster
13:43 fandi hmm i have a problem also :
13:43 fandi volume replace-brick: failed: Incorrect source or destination brick
13:43 fandi why is this happening, and i cannot abort it :(
13:43 rgustafs joined #gluster
13:44 misch joined #gluster
13:46 Sunghost sorry, i would help, but i've never had this
13:46 partner fandi: so it did start succesfully and failed at some point?
13:47 partner or did not start at all? if its not running there's not much to abort
13:47 misch joined #gluster
13:48 fandi it didn't start successfully
13:48 fandi partner: I force-stopped the vol and it's working
13:48 fandi it's still a new gluster vol
13:49 R0ok_ joined #gluster
13:50 nishanth joined #gluster
13:51 Slashman joined #gluster
13:52 bala joined #gluster
13:52 fandi so it saved my life
13:52 fandi :)
13:52 misch joined #gluster
13:53 partner fandi: what version do you have there? to my understanding the strong community recommendation is not to use replace-brick
13:54 partner a fresh enough version should warn that it's dangerous
13:54 elico joined #gluster
13:54 partner https://bugzilla.redhat.com/show_bug.cgi?id=1039954
13:54 glusterbot Bug 1039954: medium, unspecified, ---, kaushal, CLOSED CURRENTRELEASE, replace-brick command should warn it is broken
13:55 misch joined #gluster
13:58 fandi hmm ok.. so what's a better solution for replacing a brick ?
13:58 fandi partner: because we used the wrong brick
13:58 polychrise joined #gluster
13:59 fandi partner: *we didn't give the right brick
14:00 fandi partner: i need to change the brick /mnt/1 to /mnt/brick1 ..
14:01 hagarth joined #gluster
14:01 doekia hi, looking for some guru to help optimize a 6 node cluster (3 bricks on 3 servers) ... latency on small files stats/read
14:07 fandi hi doekia : what's you problem :)
14:07 fandi ?
14:07 misch joined #gluster
14:08 doekia the 3 server that are not brick replication are ... slow (3x slower) on a read/stat operation
14:09 ricky-ticky joined #gluster
14:10 doekia my conf: http://pastie.org/9829580
14:11 doekia performance from any of 60086-88 is 3x faster than from 601xx
14:12 doekia my version is actually glusterfs 3.4.2 built on Jan 24 2014 02:32:32 - and I feel uncomfortable migrating to the latest version since this is a production system I cannot afford to have downtime on
14:17 tih left #gluster
14:18 rjoseph joined #gluster
14:19 dusmant joined #gluster
14:19 misch joined #gluster
14:21 calisto joined #gluster
14:23 crashmag joined #gluster
14:23 misch joined #gluster
14:24 coredump joined #gluster
14:25 n-st joined #gluster
14:25 DV joined #gluster
14:29 misch joined #gluster
14:31 fandi doekia: i'm not sure how you calculate the read/stat operation
14:32 doekia well I have similar code (web site) a similar page takes 3x more time when browsing on the non-brick server
14:33 doekia 500ms on 60086-88 1.7s on the other
14:34 misch joined #gluster
14:36 tdasilva joined #gluster
14:38 jdarcy joined #gluster
14:38 kdc joined #gluster
14:39 calum_ joined #gluster
14:40 kdc hey guys, i've got a split brain issue with a 1x2x2 Striped-Replicate volume.  The intention, I believe, was to create a distribute-replicate volume to begin with.  Is it possible to transform this (once the healing has been performed)?
14:40 eka left #gluster
14:40 kdc it seems the folder itself is failing to self-heal due to missing gfid - how can I go about fixing that?
14:41 fandi doekia: not sure about that, but i only ever try using for gluster performance
14:41 fandi keyword :)
14:41 fandi doekia: and i'm sure you should compare with nfs
14:42 fandi because you cannot compare gluster with your direct attached disk
14:43 rjoseph joined #gluster
14:43 doekia I use gluster because of scalability/ha .. fandi, I would like to be able to achieve similar thruput from non brick server.
14:44 deniszh joined #gluster
14:44 doekia I'm comparing gluster w/ gluster here ... the brick-based servers use the gluster mount ... the direct-attached disk is the volume "surface"
14:45 doekia the apps are delivered from the gluster mount (I lie: from an nfs-based mount hitting the gluster volume)
14:45 theron joined #gluster
14:46 theron_ joined #gluster
14:47 elico1 joined #gluster
14:48 bene joined #gluster
14:50 fandi doekia: maybe the expert one can help you :)
14:51 misch joined #gluster
14:55 DV joined #gluster
14:57 johnnytran joined #gluster
15:11 wushudoin joined #gluster
15:19 jkroon joined #gluster
15:24 glusterbot News from newglusterbugs: [Bug 1181669] File replicas differ in content even as heal info lists 0 entries in replica 2 setup <https://bugzilla.redhat.com/show_bug.cgi?id=1181669>
15:24 calum_ joined #gluster
15:25 _Bryan_ joined #gluster
15:28 misch joined #gluster
15:30 rtalur_ joined #gluster
15:37 bene2 joined #gluster
15:38 misch joined #gluster
15:46 Arminder joined #gluster
15:46 PatNarciso joined #gluster
15:47 neofob left #gluster
15:49 dgandhi joined #gluster
15:49 dgandhi greetings all,
15:49 virusuy joined #gluster
15:49 virusuy joined #gluster
15:50 misch joined #gluster
15:53 dgandhi I have 3 nodes with 10 bricks each. I have a directory with files with the same name + 1 digit; they are all ending up on the same brick. is there some way to better randomize the mapping ? (3.16 kernel, from deb sid)
15:55 calum_ joined #gluster
15:58 smohan_ joined #gluster
15:59 theron joined #gluster
15:59 Arminder joined #gluster
16:00 theron joined #gluster
16:00 PatNarciso dgandhi, have you tried a rebalance?
16:00 PatNarciso also - out of the box: it should distribute the files... if you set it up as a distribution volume...
16:02 PatNarciso so 1) confirm it's a distribution volume; 2) try a rebalance. 3) make sure the connecting clients have access to all the machines (ie: are all hosts defined in the clients' /etc/hosts file)
16:02 Sunghost you have to rebalance, i use distributed too
16:02 PatNarciso 4) ask the people on IRC; they're a helpful bunch.
16:02 Sunghost and you have to use it each time you add a brick to the vol
16:03 neofob joined #gluster
16:05 dgandhi PatNarciso: I have not tried rebalance; the volume is distributed, and other data is spread out as I would expect. but with two copies over 30 bricks, that's 15 "logical bricks" - having these 7 files all land on the same one suggests I'm hitting some fluke of the mapping algo. Will rebalance do anything without adding bricks?
16:05 Arminder joined #gluster
16:05 dgandhi note: I only noticed because the offending files are 500G each
16:06 Arminder joined #gluster
16:08 Arminder joined #gluster
16:08 fubada purpleidea: will try not
16:08 misch joined #gluster
16:09 Arminder joined #gluster
16:09 calisto joined #gluster
16:11 bennyturns joined #gluster
16:11 PatNarciso dgandhi, I'd focus on 'why' they all landed on the same brick.  thus, I'm asking if the connecting client can only see that single brick.
16:12 Arminder joined #gluster
16:12 PatNarciso dgandhi, depending on the setup, the quickest responding brick will become the destination.
16:13 Arminder joined #gluster
16:13 dgandhi it sees all 30 bricks
16:13 roost joined #gluster
16:14 PatNarciso hmm.  if this gluster isn't in production state; i'd yank the working bricks and see where writes go then.
16:15 Arminder joined #gluster
16:15 fubada purpleidea: i tried bug/folder-purges branch, no luck, https://gist.github.com/aamerik/7c81111b6a106559b253
16:16 dgandhi as in unmount the brick from under gluster? I presume this is likely to break everything ?
16:17 dgandhi I presume I would have to umount the replica as well
16:18 dgandhi but then the data just disappears, and then I have to re-gen it, which would be messy, sounds like I'll try rebalance first.
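[Note: a sketch of the rebalance commands being discussed, with a placeholder volume name:]
    gluster volume info myvol                          # confirm the volume type (Distribute / Distributed-Replicate)
    gluster volume rebalance myvol fix-layout start    # recompute directory layouts only
    gluster volume rebalance myvol start               # migrate data according to the new layouts
    gluster volume rebalance myvol status              # watch progress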
16:18 Arminder joined #gluster
16:21 neofob left #gluster
16:21 Arminder joined #gluster
16:22 Arminder joined #gluster
16:24 plarsen joined #gluster
16:24 Arminder joined #gluster
16:26 jmarley joined #gluster
16:26 dgandhi does the mapping algo use base name only? The extra digit is on a subdirectory, but the big files have identical base names.
16:27 neofob joined #gluster
16:29 Arminder joined #gluster
16:32 jdarcy dgandhi: The mapping should be different for same name in different directories, but in a weird sort of way.
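[Note: a sketch of how to see where DHT actually placed a file and what hash layout a directory carries; paths are placeholders. trusted.glusterfs.pathinfo is queried on the FUSE mount, trusted.glusterfs.dht on a brick backend:]
    getfattr -n trusted.glusterfs.pathinfo /mnt/glustervol/dir1/bigfile   # which brick(s) hold this file
    getfattr -n trusted.glusterfs.dht -e hex /bricks/brick1/dir1          # this directory's hash-range assignment on one brick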
16:32 rwheeler joined #gluster
16:32 jdarcy Hi boss.
16:34 misch joined #gluster
16:34 firemanxbr joined #gluster
16:35 firemanxbr hi guys, I have one problem with my gluster
16:35 firemanxbr my gluster does not mount my nfs system
16:35 firemanxbr systemctl restart glusterd
16:35 firemanxbr error:
16:35 firemanxbr Jan 13 14:34:38 ped-dc02.datacom nfs[20679]: backtrace 1
16:36 firemanxbr ...
16:36 firemanxbr Jan 13 14:34:38 ped-dc02.datacom nfs[20679]: package-string: glusterfs 3.6.1
16:36 Arminder joined #gluster
16:37 Arminder joined #gluster
16:37 ndevos firemanxbr: looks like a crash in the gluster/nfs process, can you fpaste a little more of that log?
16:38 Arminder joined #gluster
16:38 firemanxbr ndevos: my error is: http://ur1.ca/jfa6q
16:39 misch joined #gluster
16:39 firemanxbr ndevos: but my ovirt system won't start, look: http://ur1.ca/jfa7a
16:40 jobewan joined #gluster
16:41 ndevos firemanxbr: hmm, looks like there is a process registered in rpcbind already, the gluster/nfs server fails to register
16:41 firemanxbr ndevos: can I stop rpcbind, or not ?
16:41 Arminder joined #gluster
16:42 ndevos firemanxbr: does your /usr/lib/systemd/system/rpcbind.service list 'ExecStart=.../rpcbind -w ..' - the -w option is awkward
16:43 firemanxbr ndevos: in my systemd unit it is: ExecStart=/sbin/rpcbind -w ${RPCBIND_ARGS}
16:43 firemanxbr ndevos: I'm using CentOS 7
16:43 Arminder joined #gluster
16:44 misch joined #gluster
16:44 ndevos firemanxbr: I think you can 'sed "s/ -w//" /usr/lib/systemd/system/rpcbind.service > /etc/systemd/system/rpcbind.service ; systemctl daemon-reload ; systemctl restart rpcbind ; systemctl restart glusterd'
16:44 ndevos but check what it is doing, I did not test that command :)
16:45 firemanxbr ndevos: humm looks good
16:45 firemanxbr ndevos: i'm rebooting my ovirt management
16:47 ndevos firemanxbr: you have two bugs there: 1) rpcbind should not start with -w, 2) gluster/nfs should not crash when registering at rpcbind fails
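[Note: ndevos's one-liner above, unpacked into steps with comments; this assumes the stock CentOS 7 unit path and that -w is rpcbind's warm-start flag:]
    sed 's/ -w//' /usr/lib/systemd/system/rpcbind.service > /etc/systemd/system/rpcbind.service   # copy the unit without -w; /etc overrides /usr/lib
    systemctl daemon-reload      # pick up the overriding unit
    systemctl restart rpcbind    # restart rpcbind without reloading saved registrations
    systemctl restart glusterd   # let the gluster/nfs process re-register with rpcbind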
16:47 Arminder joined #gluster
16:48 firemanxbr ndevos: trying to start ovirt after this update
16:50 purpleidea fubada: what didn't work?
16:50 fubada the vrrp fixes
16:50 fubada your bug/folder-purges branch
16:50 purpleidea fubada: really?? didn't work?
16:50 fubada yah i pasted a gist.
16:50 fubada purpleidea: i tried bug/folder-purges branch, no luck, https://gist.github.com/aamerik/7c81111b6a106559b253
16:51 Arminder joined #gluster
16:51 purpleidea fubada: woops didn't see that
16:51 fubada np :)
16:51 misch joined #gluster
16:52 purpleidea fubada: oh, wait hang on, might be a new issue! (or rather, i didn't complete the first issue enough...)
16:52 fubada +1
16:53 purpleidea fubada: can you confirm if it's fixed/not with the mount only case?
16:56 dusmant joined #gluster
16:56 fubada one sec
16:56 fubada so this was only checked on the mounted case
16:56 fubada let me check the actual gluster box
16:57 fubada purpleidea: same inboth cases
16:58 misch joined #gluster
16:58 Arminder joined #gluster
16:58 purpleidea hm okay patch todo
16:58 Arminder joined #gluster
16:59 nueces joined #gluster
16:59 Arminder joined #gluster
17:02 firemanxbr ndevos: thanks, my oVirt is up again :D you saved my day :)
17:06 ndevos firemanxbr: cool, maybe you could file two bugs for that problem?
17:06 misch joined #gluster
17:09 Kawal joined #gluster
17:20 elico joined #gluster
17:20 firemanxbr ndevos: yes, I believe my problem is the option in rpcbind. I can open this bug in bz, if you wish?
17:23 ndevos firemanxbr: yes please, against rpcbind in rhel7 would be good
17:24 ndevos firemanxbr: but also one against gluster/nfs, it should not crash
17:24 ndevos file a bug against glusterfs/nfs
17:24 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
17:24 glusterbot News from newglusterbugs: [Bug 1154599] Create a document on how "heal" commands work <https://bugzilla.redhat.com/show_bug.cgi?id=1154599>
17:25 ndevos firemanxbr: https://bugzilla.redhat.com/enter_bug.cgi?product=Red%20Hat%20Enterprise%20Linux%207&amp;component=rpcbind :)
17:27 rwheeler joined #gluster
17:29 firemanxbr ndevos: I'll register this bug, thanks bro :D
17:29 theron joined #gluster
17:35 bene2 joined #gluster
17:42 ndevos firemanxbr: thanks, let me know where the bug is, I'll try to follow it too then
17:47 CyrilPeponnet Hey guys, one quick question regarding georeplication.
17:48 CyrilPeponnet Let's say I push a new vm image to my master, considered as LATEST; to make it easier to get later, there is a symlink pointing to this image when I push it to the master.
17:48 CyrilPeponnet Now, on the georeplicated node, how do I update the symlink ONLY when the full image has been replicated ?
17:48 CyrilPeponnet To avoid pointing to an incomplete image during the georep transfer
17:50 CyrilPeponnet I used to rsync the image and then rsync the symlink, but I'd like to use the georeplication process for this.
17:53 XpineX joined #gluster
17:59 lalatenduM joined #gluster
18:07 firemanxbr ndevos: bug created: https://bugzilla.redhat.com/show_bug.cgi?id=1181779
18:07 glusterbot Bug 1181779: unspecified, unspecified, rc, steved, NEW , RPCBIND crash GlusterFS service (NFS)
18:08 ndevos firemanxbr: thanks!
18:20 plarsen joined #gluster
18:20 ira joined #gluster
18:30 theron joined #gluster
18:30 lalatenduM joined #gluster
18:35 msmith_ joined #gluster
18:41 lanning joined #gluster
18:47 theron joined #gluster
18:53 m0ellemeister joined #gluster
18:56 lmickh joined #gluster
19:15 misch joined #gluster
19:17 PatNarciso JoeJulian - do you recall if there is validity to the claim that /etc/glusterfs/glusterd.vol needs to be edited for rpc-auth-allow-insecure to take full effect?
19:18 PatNarciso re: https://bugzilla.redhat.com/show_bug.cgi?id=1057292
19:18 glusterbot Bug 1057292: high, high, 3.4.2, bugs, NEW , option rpc-auth-allow-insecure should default to "on"
19:19 PatNarciso glusterbot, we should grab a beer sometime.
19:22 theron_ joined #gluster
19:35 elico joined #gluster
19:43 JoeJulian PatNarciso: It does if you have a non-root client trying to fetch volume definitions.
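[Note: a sketch of both places the insecure-ports setting is commonly applied, with a placeholder volume name; glusterd needs a restart after editing the file:]
    # in /etc/glusterfs/glusterd.vol, inside the "volume management" block:
    #     option rpc-auth-allow-insecure on
    gluster volume set myvol server.allow-insecure on   # per-volume counterpart, so bricks accept clients on unprivileged ports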
19:52 gehaxelt left #gluster
20:01 B21956 joined #gluster
20:02 bene2 joined #gluster
20:04 Pupeno joined #gluster
20:04 Pupeno joined #gluster
20:10 PatNarciso No non-root.   actually, this is the first time I've run into this issue...
20:11 Pupeno joined #gluster
20:12 PatNarciso only thing I can think of (at this moment) that's different is that... the mounting client is behind two masquerading firewalls.
20:13 JoeJulian Oh, or that.
20:13 PatNarciso nice.
20:13 JoeJulian I never consider that as a possibility.
20:15 PatNarciso inexpensive bandwidth isp <-- modem(masq) <-- KVMHypervisor <-- KVMGuest <-- gluster-client.
20:15 glusterbot PatNarciso: <'s karma is now -8
20:15 glusterbot PatNarciso: <'s karma is now -9
20:15 glusterbot PatNarciso: <'s karma is now -10
20:15 glusterbot PatNarciso: <'s karma is now -11
20:16 PatNarciso that's right... your karma is so low... you're double-digits below.
20:35 msmith_ joined #gluster
20:42 JoeJulian PatNarciso: I would use a vpn if it were me.
20:42 PatNarciso openvpn to the rescue.
20:43 PatNarciso speaking of openvpn -- what's the ideal setup when using gluster over openvpn?  tun?  no encryption?
20:56 semiosis ideal for whom?
21:02 lapthorn joined #gluster
21:07 lapthorn joined #gluster
21:12 lapthorn joined #gluster
21:14 lapthorn joined #gluster
21:14 lapthorn left #gluster
21:22 lapthorn joined #gluster
21:27 badone joined #gluster
21:29 Staples84 joined #gluster
21:31 lapthorn joined #gluster
21:34 sac`away joined #gluster
21:34 abyss^ joined #gluster
21:34 Ramereth joined #gluster
21:34 y4m4 joined #gluster
21:34 partner joined #gluster
21:34 tessier_ joined #gluster
21:34 churnd joined #gluster
21:34 Guest75764 joined #gluster
21:34 and` joined #gluster
21:34 Lee- joined #gluster
21:34 yoavz joined #gluster
21:34 ccha joined #gluster
21:34 d-fence joined #gluster
21:34 hflai joined #gluster
21:34 kkeithley joined #gluster
21:34 mikedep333 joined #gluster
21:34 verboese|sleep joined #gluster
21:34 semiosis joined #gluster
21:34 kke joined #gluster
21:34 basso joined #gluster
21:34 pdrakeweb joined #gluster
21:34 shaunm joined #gluster
21:34 samsaffron___ joined #gluster
21:34 primusinterpares joined #gluster
21:34 JustinClift joined #gluster
21:34 javi404 joined #gluster
21:34 wgao joined #gluster
21:34 Micromus joined #gluster
21:34 eightyeight joined #gluster
21:34 jbrooks joined #gluster
21:34 tziom joined #gluster
21:34 Intensity joined #gluster
21:34 haakon_ joined #gluster
21:34 coredump joined #gluster
21:34 tdasilva joined #gluster
21:34 PatNarciso joined #gluster
21:34 virusuy joined #gluster
21:34 XpineX joined #gluster
21:34 ira joined #gluster
21:34 lanning joined #gluster
21:34 Staples84 joined #gluster
21:34 blkperl joined #gluster
21:36 prg3 joined #gluster
21:37 fandi joined #gluster
21:37 lmickh joined #gluster
21:37 plarsen joined #gluster
21:37 polychrise joined #gluster
21:37 morse joined #gluster
21:37 kkeithley_ joined #gluster
21:37 cfeller joined #gluster
21:37 necrogami joined #gluster
21:37 afics joined #gluster
21:37 lkoranda joined #gluster
21:37 Gugge joined #gluster
21:39 misch joined #gluster
21:40 tom[] joined #gluster
21:40 badone joined #gluster
21:40 msmith_ joined #gluster
21:40 Gill joined #gluster
21:40 julim joined #gluster
21:40 m0zes joined #gluster
21:40 marcoceppi joined #gluster
21:40 daddmac joined #gluster
21:40 ron-slc joined #gluster
21:40 devilspgd joined #gluster
21:40 fubada joined #gluster
21:40 mtanner joined #gluster
21:40 codex joined #gluster
21:40 guntha_ joined #gluster
21:40 juhaj joined #gluster
21:40 tru_tru joined #gluster
21:43 bene2 joined #gluster
21:43 neofob joined #gluster
21:43 edwardm61 joined #gluster
21:43 edong23 joined #gluster
21:43 stickyboy joined #gluster
21:43 rastar_afk joined #gluster
21:43 mrEriksson joined #gluster
21:43 jvandewege joined #gluster
21:43 kaii joined #gluster
21:43 32NAAW3WY joined #gluster
21:43 mdavidson joined #gluster
21:43 eclectic joined #gluster
21:43 neoice joined #gluster
21:43 tobias-_ joined #gluster
21:43 the-me joined #gluster
21:43 ttkg joined #gluster
21:43 CyrilPeponnet joined #gluster
21:46 T0aD joined #gluster
21:47 primusinterpares joined #gluster
21:50 johnnytran joined #gluster
21:50 DV joined #gluster
21:50 ubungu joined #gluster
21:50 side_control joined #gluster
21:50 eryc joined #gluster
21:50 Bosse joined #gluster
21:50 sadbox joined #gluster
21:50 dblack joined #gluster
21:50 hchiramm_ joined #gluster
21:50 siel joined #gluster
21:50 klaas joined #gluster
21:50 kalzz joined #gluster
21:50 purpleidea joined #gluster
21:50 saltsa joined #gluster
21:50 capri joined #gluster
21:50 sage_ joined #gluster
21:50 NuxRo joined #gluster
21:50 ndevos joined #gluster
21:50 [o__o] joined #gluster
21:50 strata joined #gluster
21:50 RobertLaptop joined #gluster
21:50 Gorian joined #gluster
21:50 dockbram joined #gluster
21:50 vincent_vdk joined #gluster
21:52 badone joined #gluster
21:52 HuleB joined #gluster
21:53 jmarley joined #gluster
21:53 R0ok_ joined #gluster
21:53 ninkotech joined #gluster
21:53 mbukatov joined #gluster
21:53 LebedevRI joined #gluster
21:53 schrodinger joined #gluster
21:53 JordanHackworth joined #gluster
21:53 owlbot joined #gluster
21:53 masterzen joined #gluster
21:53 y4m4_ joined #gluster
21:53 pcaruana joined #gluster
21:53 msciciel joined #gluster
21:53 _br_ joined #gluster
21:53 bfoster joined #gluster
21:53 AaronGr joined #gluster
21:53 JonathanD joined #gluster
21:53 xrsa_ joined #gluster
21:53 nocturn joined #gluster
21:53 malevolent joined #gluster
21:53 suliba joined #gluster
21:53 sickness joined #gluster
21:53 ws2k3 joined #gluster
21:54 ckotil_ joined #gluster
21:54 ws2k3 joined #gluster
21:54 sickness joined #gluster
21:54 suliba joined #gluster
21:54 malevolent joined #gluster
21:54 nocturn joined #gluster
21:54 xrsa_ joined #gluster
21:54 JonathanD joined #gluster
21:54 AaronGr joined #gluster
21:54 bfoster joined #gluster
21:54 _br_ joined #gluster
21:54 msciciel joined #gluster
21:54 pcaruana joined #gluster
21:54 y4m4_ joined #gluster
21:54 masterzen joined #gluster
21:54 owlbot joined #gluster
21:54 JordanHackworth joined #gluster
21:54 schrodinger joined #gluster
21:54 LebedevRI joined #gluster
21:54 mbukatov joined #gluster
21:54 ninkotech joined #gluster
21:54 R0ok_ joined #gluster
21:54 jmarley joined #gluster
21:54 HuleB joined #gluster
21:54 badone joined #gluster
21:54 vincent_vdk joined #gluster
21:54 dockbram joined #gluster
21:54 Gorian joined #gluster
21:54 RobertLaptop joined #gluster
21:54 strata joined #gluster
21:54 [o__o] joined #gluster
21:54 ndevos joined #gluster
21:54 NuxRo joined #gluster
21:54 sage_ joined #gluster
21:54 capri joined #gluster
21:54 saltsa joined #gluster
21:54 purpleidea joined #gluster
21:54 kalzz joined #gluster
21:54 klaas joined #gluster
21:54 siel joined #gluster
21:54 hchiramm_ joined #gluster
21:54 dblack joined #gluster
21:54 sadbox joined #gluster
21:54 Bosse joined #gluster
21:54 eryc joined #gluster
21:54 side_control joined #gluster
21:54 ubungu joined #gluster
21:54 DV joined #gluster
21:54 johnnytran joined #gluster
21:54 primusinterpares joined #gluster
21:54 T0aD joined #gluster
21:54 CyrilPeponnet joined #gluster
21:54 ttkg joined #gluster
21:54 the-me joined #gluster
21:54 tobias-_ joined #gluster
21:54 neoice joined #gluster
21:54 eclectic joined #gluster
21:54 mdavidson joined #gluster
21:54 32NAAW3WY joined #gluster
21:54 kaii joined #gluster
21:54 jvandewege joined #gluster
21:54 mrEriksson joined #gluster
21:54 rastar_afk joined #gluster
21:54 stickyboy joined #gluster
21:54 edong23 joined #gluster
21:54 edwardm61 joined #gluster
21:54 neofob joined #gluster
21:54 bene2 joined #gluster
21:54 tru_tru joined #gluster
21:54 juhaj joined #gluster
21:54 guntha_ joined #gluster
21:54 codex joined #gluster
21:54 mtanner joined #gluster
21:54 fubada joined #gluster
21:54 devilspgd joined #gluster
21:54 ron-slc joined #gluster
21:54 daddmac joined #gluster
21:54 marcoceppi joined #gluster
21:54 m0zes joined #gluster
21:54 julim joined #gluster
21:54 Gill joined #gluster
21:54 msmith_ joined #gluster
21:54 misch joined #gluster
21:54 Gugge joined #gluster
21:54 lkoranda joined #gluster
21:54 afics joined #gluster
21:54 necrogami joined #gluster
21:54 cfeller joined #gluster
21:54 kkeithley_ joined #gluster
21:54 morse joined #gluster
21:54 polychrise joined #gluster
21:54 plarsen joined #gluster
21:54 lmickh joined #gluster
21:54 fandi joined #gluster
21:54 prg3 joined #gluster
21:54 blkperl joined #gluster
21:54 Staples84 joined #gluster
21:54 lanning joined #gluster
21:54 ira joined #gluster
21:54 XpineX joined #gluster
21:54 virusuy joined #gluster
21:54 PatNarciso joined #gluster
21:54 tdasilva joined #gluster
21:54 coredump joined #gluster
21:54 haakon_ joined #gluster
21:54 Intensity joined #gluster
21:54 tziom joined #gluster
21:54 jbrooks joined #gluster
21:54 eightyeight joined #gluster
21:54 Micromus joined #gluster
21:54 wgao joined #gluster
21:54 javi404 joined #gluster
21:54 JustinClift joined #gluster
21:54 samsaffron___ joined #gluster
21:54 shaunm joined #gluster
21:54 pdrakeweb joined #gluster
21:54 basso joined #gluster
21:54 kke joined #gluster
21:54 semiosis joined #gluster
21:54 verboese|sleep joined #gluster
21:54 mikedep333 joined #gluster
21:54 kkeithley joined #gluster
21:54 hflai joined #gluster
21:54 d-fence joined #gluster
21:54 ccha joined #gluster
21:54 yoavz joined #gluster
21:54 Lee- joined #gluster
21:54 and` joined #gluster
21:54 Guest75764 joined #gluster
21:54 churnd joined #gluster
21:54 tessier_ joined #gluster
21:54 partner joined #gluster
21:54 y4m4 joined #gluster
21:54 Ramereth joined #gluster
21:54 abyss^ joined #gluster
21:54 sac`away joined #gluster
21:54 Pupeno joined #gluster
21:54 elico joined #gluster
21:54 Arminder joined #gluster
21:54 nueces joined #gluster
21:54 wushudoin joined #gluster
21:54 ricky-ticky joined #gluster
21:54 harish joined #gluster
21:54 fsimonce joined #gluster
21:54 a1 joined #gluster
21:54 jamesc joined #gluster
21:54 chen joined #gluster
21:54 nixpanic joined #gluster
21:54 doekia joined #gluster
21:54 ekman joined #gluster
21:54 dastar joined #gluster
21:54 msvbhat joined #gluster
21:54 hchiramm joined #gluster
21:54 georgeh joined #gluster
21:54 eljrax joined #gluster
21:54 hybrid512 joined #gluster
21:54 atrius joined #gluster
21:54 DJClean joined #gluster
21:54 JamesG joined #gluster
21:54 Telsin joined #gluster
21:54 cyberbootje joined #gluster
21:54 aulait joined #gluster
21:54 nage joined #gluster
21:54 scuttlemonkey joined #gluster
21:54 ndk joined #gluster
21:54 Champi joined #gluster
21:54 Bardack joined #gluster
21:54 zutto joined #gluster
21:54 Dave2 joined #gluster
21:54 foster joined #gluster
21:54 xavih joined #gluster
21:54 samppah joined #gluster
21:54 tomased joined #gluster
21:54 ur_ joined #gluster
21:54 asku joined #gluster
21:54 ryao joined #gluster
21:54 johnmark joined #gluster
21:54 Slasheri joined #gluster
21:54 ackjewt joined #gluster
21:54 edualbus joined #gluster
21:54 gomikemike joined #gluster
21:54 coreping joined #gluster
21:54 nhayashi joined #gluster
21:54 oxidane joined #gluster
21:54 osiekhan1 joined #gluster
21:54 twx joined #gluster
21:54 mibby joined #gluster
21:54 JoeJulian joined #gluster
21:55 tom[] joined #gluster
21:55 B21956 joined #gluster
21:55 _Bryan_ joined #gluster
21:55 maveric_amitc_ joined #gluster
21:55 ninkotech_ joined #gluster
21:55 ckotil joined #gluster
21:55 ghenry joined #gluster
21:55 SmithyUK joined #gluster
21:55 glusterbot joined #gluster
21:55 misko_ joined #gluster
21:55 atrius` joined #gluster
21:55 huleboer joined #gluster
21:55 tg2 joined #gluster
21:58 atrius` joined #gluster
21:59 nueces joined #gluster
21:59 Arminder joined #gluster
21:59 wushudoin joined #gluster
21:59 fsimonce joined #gluster
21:59 jamesc joined #gluster
21:59 doekia joined #gluster
21:59 dastar joined #gluster
21:59 hchiramm joined #gluster
21:59 eljrax joined #gluster
21:59 DJClean joined #gluster
21:59 JamesG joined #gluster
21:59 ndk joined #gluster
21:59 Bardack joined #gluster
21:59 foster joined #gluster
21:59 xavih joined #gluster
21:59 samppah joined #gluster
21:59 asku joined #gluster
21:59 ryao joined #gluster
21:59 edualbus joined #gluster
21:59 gomikemike joined #gluster
21:59 nhayashi joined #gluster
21:59 oxidane joined #gluster
21:59 JoeJulian joined #gluster
21:59 mibby joined #gluster
22:01 ryao joined #gluster
22:02 saltsa joined #gluster
22:06 johnnytran joined #gluster
22:06 DV joined #gluster
22:06 ubungu joined #gluster
22:06 side_control joined #gluster
22:06 eryc joined #gluster
22:06 Bosse joined #gluster
22:06 sadbox joined #gluster
22:06 dblack joined #gluster
22:06 hchiramm_ joined #gluster
22:06 siel joined #gluster
22:06 klaas joined #gluster
22:06 kalzz joined #gluster
22:06 purpleidea joined #gluster
22:06 capri joined #gluster
22:06 sage_ joined #gluster
22:06 NuxRo joined #gluster
22:06 ndevos joined #gluster
22:06 [o__o] joined #gluster
22:06 strata joined #gluster
22:06 RobertLaptop joined #gluster
22:06 Gorian joined #gluster
22:06 dockbram joined #gluster
22:06 vincent_vdk joined #gluster
22:13 PatNarciso cluster.min-free-disk 15% vs cluster.min-free-disk 15 (no percent).  are there any known reported issues with this?
22:13 PatNarciso cmd seems to accept both.
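[Note: the option can at least be set with the unit made explicit, which sidesteps the ambiguity PatNarciso is asking about; the volume name is a placeholder, and how a bare "15" is interpreted is not settled in this log:]
    gluster volume set myvol cluster.min-free-disk 15%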
22:26 purpleidea testing...
22:26 purpleidea yo
22:26 purpleidea hi
22:26 glusterbot purpleidea: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
22:26 purpleidea glusterbot: you need to learn about 'yo' aswell! cc JoeJulian
22:30 bene_wfh joined #gluster
22:33 Gill joined #gluster
22:34 Gill joined #gluster
22:46 fandi joined #gluster
22:53 PeterA joined #gluster
23:00 MugginsM joined #gluster
23:02 gildub joined #gluster
23:30 fandi joined #gluster
23:32 jmarley joined #gluster
23:36 fandi joined #gluster
23:39 Durzo joined #gluster
23:40 Durzo hey guys, brand new empty gluster 3.6 2-node replica set up with geo-replication, dumped about 40GB of files into it and now geo-repl is faulty, looping on "OSError: [Errno 16] Device or resource busy" - any ideas what's going on?
23:40 tryggvil joined #gluster
23:50 RicardoSSP joined #gluster
23:50 RicardoSSP joined #gluster
