
IRC log for #gluster, 2015-05-21


All times shown according to UTC.

Time Nick Message
00:02 CyrilPeponnet @ndevos looks good to me, I updated the ticket. In which release of 3.5.x will this fix be merged, and when?
00:11 gildub joined #gluster
00:24 ppai joined #gluster
00:55 cyberbootje joined #gluster
00:58 aaronott joined #gluster
01:09 chirino joined #gluster
01:15 DV joined #gluster
01:17 sadbox joined #gluster
01:32 badone_ joined #gluster
02:20 haomaiwa_ joined #gluster
02:26 harish joined #gluster
02:32 kdhananjay joined #gluster
02:32 nangthang joined #gluster
03:00 RameshN joined #gluster
03:10 pdrakeweb joined #gluster
03:16 DV joined #gluster
03:20 haomaiwa_ joined #gluster
03:35 msvbhat joined #gluster
03:37 shubhendu joined #gluster
03:38 TheSeven joined #gluster
03:38 kdhananjay joined #gluster
03:39 overclk joined #gluster
03:40 itisravi joined #gluster
03:40 hagarth joined #gluster
03:41 nishanth joined #gluster
03:48 dusmant joined #gluster
03:49 bharata-rao joined #gluster
03:54 atinmu joined #gluster
04:15 kanagaraj joined #gluster
04:18 sakshi joined #gluster
04:23 yazhini joined #gluster
04:26 spandit joined #gluster
04:26 ndarshan joined #gluster
04:37 rjoseph joined #gluster
04:41 kshlm joined #gluster
04:42 ashiq joined #gluster
04:46 sakshi joined #gluster
04:49 rafi joined #gluster
04:51 RameshN joined #gluster
04:54 glusterbot News from newglusterbugs: [Bug 1223625] rebalance : output of rebalance status should show ' run time ' in proper format (day,hour:min:sec) <https://bugzilla.redhat.com/show_bug.cgi?id=1223625>
05:02 hgowtham joined #gluster
05:05 schandra joined #gluster
05:12 gem joined #gluster
05:19 pppp joined #gluster
05:27 nishanth joined #gluster
05:27 karnan joined #gluster
05:29 haomaiwa_ joined #gluster
05:30 meghanam joined #gluster
05:33 21WAB7UVY joined #gluster
05:39 shubhendu joined #gluster
05:40 raghu joined #gluster
05:42 sakshi joined #gluster
05:42 anil joined #gluster
05:45 sakshi joined #gluster
05:57 kumar joined #gluster
06:00 pdrakeweb joined #gluster
06:01 hagarth joined #gluster
06:02 pdrakeweb joined #gluster
06:02 gem joined #gluster
06:04 jiffin joined #gluster
06:04 pdrakeweb joined #gluster
06:05 DV joined #gluster
06:06 pdrakeweb joined #gluster
06:07 atinmu joined #gluster
06:08 pdrakeweb joined #gluster
06:09 liquidat joined #gluster
06:10 pdrakeweb joined #gluster
06:12 pdrakeweb joined #gluster
06:13 haomaiwa_ joined #gluster
06:14 pdrakewe_ joined #gluster
06:16 pdrakeweb joined #gluster
06:17 pdrakewe_ joined #gluster
06:18 spalai joined #gluster
06:19 Anjana joined #gluster
06:19 pdrakeweb joined #gluster
06:21 pdrakeweb joined #gluster
06:22 spalai joined #gluster
06:23 nangthang joined #gluster
06:23 pdrakeweb joined #gluster
06:25 glusterbot News from newglusterbugs: [Bug 1223644] [geo-rep]: With tarssh the file is created at slave but it doesnt get sync <https://bugzilla.redhat.com/show_bug.cgi?id=1223644>
06:26 pdrakeweb joined #gluster
06:27 pdrakewe_ joined #gluster
06:27 haomaiwa_ joined #gluster
06:29 pdrakewe_ joined #gluster
06:31 pdrakeweb joined #gluster
06:33 pdrakeweb joined #gluster
06:35 glusterbot News from resolvedglusterbugs: [Bug 1219782] Regression failures in tests/bugs/snapshot/bug-1112559.t <https://bugzilla.redhat.com/show_bug.cgi?id=1219782>
06:37 pdrakeweb joined #gluster
06:39 pdrakeweb joined #gluster
06:41 pdrakeweb joined #gluster
06:43 pdrakeweb joined #gluster
06:43 Rydekull joined #gluster
06:43 xrsanet joined #gluster
06:43 ndevos joined #gluster
06:43 tuxcrafter joined #gluster
06:43 atrius joined #gluster
06:43 swebb joined #gluster
06:44 T0aD joined #gluster
06:44 edong23 joined #gluster
06:44 haomaiwa_ joined #gluster
06:44 milkyline joined #gluster
06:45 pdrakeweb joined #gluster
06:45 dusmant joined #gluster
06:46 pdrakeweb joined #gluster
06:46 schandra joined #gluster
06:47 owlbot joined #gluster
06:48 nishanth joined #gluster
06:48 hagarth joined #gluster
06:49 pdrakeweb joined #gluster
06:50 pdrakeweb joined #gluster
06:50 spandit joined #gluster
06:52 pdrakeweb joined #gluster
06:54 pdrakeweb joined #gluster
06:55 kdhananjay joined #gluster
06:56 pdrakeweb joined #gluster
06:58 pdrakeweb joined #gluster
07:00 pdrakeweb joined #gluster
07:01 prasanth_ joined #gluster
07:02 pdrakeweb joined #gluster
07:04 pdrakeweb joined #gluster
07:05 dusmant joined #gluster
07:06 pdrakeweb joined #gluster
07:08 pdrakeweb joined #gluster
07:09 atinmu joined #gluster
07:10 pdrakeweb joined #gluster
07:11 pdrakewe_ joined #gluster
07:14 pdrakeweb joined #gluster
07:15 rgustafs joined #gluster
07:16 pdrakeweb joined #gluster
07:17 pdrakeweb joined #gluster
07:19 pdrakewe_ joined #gluster
07:21 pdrakeweb joined #gluster
07:22 haomaiwa_ joined #gluster
07:23 kshlm joined #gluster
07:23 pdrakeweb joined #gluster
07:25 pdrakeweb joined #gluster
07:27 pdrakewe_ joined #gluster
07:29 pdrakeweb joined #gluster
07:31 pdrakeweb joined #gluster
07:31 kdhananjay joined #gluster
07:34 pdrakeweb joined #gluster
07:36 fsimonce joined #gluster
07:37 spalai joined #gluster
07:37 pdrakeweb joined #gluster
07:37 rafi1 joined #gluster
07:38 rafi1 joined #gluster
07:38 pdrakeweb joined #gluster
07:41 pdrakeweb joined #gluster
07:44 pdrakeweb joined #gluster
07:44 Slashman joined #gluster
07:46 pdrakeweb joined #gluster
07:48 pdrakewe_ joined #gluster
07:49 tessier joined #gluster
07:49 ctria joined #gluster
07:50 pdrakewe_ joined #gluster
07:52 pdrakeweb joined #gluster
07:54 pdrakeweb joined #gluster
07:55 glusterbot News from newglusterbugs: [Bug 1212762] [HC] - gluster volume info api is broken with 3.6.2 client vs. 3.5.3 server <https://bugzilla.redhat.com/show_bug.cgi?id=1212762>
07:56 pdrakeweb joined #gluster
07:58 hpekdemir if I have a replicated cluster of two peers (srvA and srvB), and one client is writing to a mount of srvA, do I only get the latency and I/O load of the client-to-srvA operation, or is the write only complete once srvA has replicated to srvB?
07:58 pdrakeweb joined #gluster
07:58 hpekdemir or put another way: does the write wait for replication between the two peers, or is only the communication from the client to srvA involved in the glusterfs operation?
07:59 pdrakeweb joined #gluster
08:01 spandit joined #gluster
08:01 pdrakeweb joined #gluster
08:02 haomaiwang joined #gluster
08:03 pdrakeweb joined #gluster
08:05 pdrakeweb joined #gluster
08:06 hagarth joined #gluster
08:07 pdrakeweb joined #gluster
08:08 ju5t joined #gluster
08:09 pdrakeweb joined #gluster
08:11 pdrakeweb joined #gluster
08:11 hchiramm_ joined #gluster
08:13 pdrakeweb joined #gluster
08:13 akay01 joined #gluster
08:15 pdrakewe_ joined #gluster
08:17 pdrakeweb joined #gluster
08:17 al joined #gluster
08:19 pdrakewe_ joined #gluster
08:19 deniszh joined #gluster
08:19 autoditac joined #gluster
08:20 pdrakeweb joined #gluster
08:22 pdrakeweb joined #gluster
08:24 pdrakeweb joined #gluster
08:26 pdrakeweb joined #gluster
08:28 pdrakewe_ joined #gluster
08:30 pdrakewe_ joined #gluster
08:31 [Enrico] joined #gluster
08:37 atinmu joined #gluster
08:38 andras joined #gluster
08:38 Philambdo joined #gluster
08:39 gem joined #gluster
08:41 andras hello Gluster community! I have a problem with rebalance fix-layout which was stopped and now I cannot continue. Gluster keeps saying it is already started, but it is not. Any way to reset this status?
08:43 hpekdemir does anybody know anything about this type of error message: rsync: get_xattr_names: llistxattr("/mnt/exports/etc/xml/.xml-core.xml.old.QQeIb3",1024) failed: No data available (61)
08:43 hpekdemir I'm rsyncing some files and it all went well except for some error messages like the one above.
08:43 hpekdemir for several different files
08:44 hpekdemir the file is there though. so it seems to just be a warning. not a severe error
08:44 _shaps_ joined #gluster
08:45 hpekdemir beside that the rsync operation is really slow.
08:53 ashiq joined #gluster
08:54 hgowtham joined #gluster
09:04 dusmant joined #gluster
09:05 nmathew joined #gluster
09:10 spalai joined #gluster
09:13 kshlm joined #gluster
09:17 Manikandan gem++
09:17 glusterbot Manikandan: gem's karma is now 2
09:18 ashiq Manikandan++
09:18 gem Manikandan, :)
09:18 glusterbot ashiq: Manikandan's karma is now 2
09:19 Manikandan ashiq, thanks:)
09:21 autoditac joined #gluster
09:24 nishanth joined #gluster
09:24 yazhini joined #gluster
09:28 hagarth joined #gluster
09:30 sakshi joined #gluster
09:31 sakshi joined #gluster
09:40 Anjana joined #gluster
09:41 anrao joined #gluster
09:43 autoditac joined #gluster
09:44 nmathew left #gluster
09:46 s19n joined #gluster
09:48 Manikandan joined #gluster
10:01 autoditac joined #gluster
10:11 Manikandan joined #gluster
10:13 shubhendu joined #gluster
10:18 LebedevRI joined #gluster
10:24 vincent_vdk joined #gluster
10:27 anrao joined #gluster
10:29 andras joined #gluster
10:30 maveric_amitc_ joined #gluster
10:30 autoditac joined #gluster
10:32 andras hi! anyone have experience with gluster volume rebalance start-stop-start?  After stop, it won't start again. gluster keeps saying: rebalance already started.
10:35 haomaiwa_ joined #gluster
10:35 atalur joined #gluster
10:36 atinmu joined #gluster
10:37 msvbhat andras: What does the status say? rebalance process running?
10:39 andras msvbhat:  rebalance status says:   on 9 servers: fix-layout stopped , on two servers fix-layout completed
10:42 andras msvbhat:  i see no glusterfs processes with rebalance running. have also checked directory /var/lib/glusterd/vols/gluster0/rebalance/   which has no files
10:42 kshlm joined #gluster
10:44 andras msvbhat:  also gluster volume status says:  "There are no active volume tasks"
10:45 anrao joined #gluster
10:46 msvbhat andras: Looks like it's in an inconsistent state.
10:47 andras msvbhat:  I believe so. Wondering how to reset the state?
10:53 kshlm andras, can you check rebalance status and volume status on all peers. Any inconsistencies will help us identify the culprit peer.
10:54 andras kshlm: will check this. moment
10:54 aravindavk joined #gluster
10:55 msvbhat andras: Which version of glusterfs are you using?
10:56 andras 3.5.1
10:59 andras WOW!!  Thank you guys for help!  I checked all servers and there was glusterfs rebalance process running on one of them......killed... now:  "volume rebalance: gluster0: success: Starting rebalance on volume gluster0 has been successful."
11:00 andras It was so trivial!  Sometimes I just need some little trigger to find out what is wrong...
11:01 badone_ joined #gluster
11:01 andras Maybe I am not alone.  Quite often troubleshooting hints can work miracles
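For reference, a rough sketch of the troubleshooting sequence used above, assuming the volume is named gluster0 as in this log; the process-matching pattern and the PID placeholder are illustrative and may need adjusting:

    # what gluster itself reports
    gluster volume rebalance gluster0 status
    gluster volume status gluster0
    ls /var/lib/glusterd/vols/gluster0/rebalance/

    # on EVERY peer, look for a leftover rebalance daemon (a glusterfs process)
    ps aux | grep '[r]ebalance'

    # kill the stray process on the peer that still has one, then start again
    kill <pid>
    gluster volume rebalance gluster0 fix-layout start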
11:01 nsoffer joined #gluster
11:05 andras msvbhat: I was thinking of upgrading. Maybe after rebalance next week or so
11:06 andras is 3.5  -> 3.6 upgrade smooth?   experiences?
11:08 msvbhat andras: Should be. It also depends on what other features you are using. Some of the features have some extra steps to be run.
11:08 msvbhat andras: But in any case upgrade document should be of help
11:09 hagarth joined #gluster
11:09 rgustafs joined #gluster
11:10 andras msvbhat: nothing unusual in my setup. distributed-replicated volume.
11:10 andras installed gluster in 2012, had quite a few upgrades already
11:13 msvbhat andras: Then it should be a smooth upgrade. Hopefully... http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.6#GlusterFS_upgrade_from_3.5.x_to_3.6.X
11:14 msvbhat andras: If you use quota or geo-rep, some extra steps need to be taken care of
11:16 andras msvbhat: will see the upgrade next week.  Since 2012 there was no shutdown. Gluster works really great.
11:17 andras I think my setup is quite small compared to big players
11:19 andras msvbhat:  1.4TB volume we have.  I guess you have Petas or more
11:21 msvbhat andras: Yeah, I think 1.4 TB is quite small for gluster. It sure can serve PBs of data
11:23 andras msvbhat:  Also big players started off small.  It can be grown when needed on the fly.
11:23 andras :-)
11:24 msvbhat andras: :)
11:27 andras I must leave. Thank you once more msvbhat: and kshlm:  ! have a nice day!
11:30 msvbhat andras: you're welcome. Have a nice day :)
11:37 poornimag joined #gluster
11:44 Anjana joined #gluster
11:45 spalai joined #gluster
11:47 siel joined #gluster
11:47 sankarshan_away joined #gluster
11:47 sage joined #gluster
11:47 hflai joined #gluster
11:48 kanagaraj joined #gluster
11:56 glusterbot News from newglusterbugs: [Bug 1218565] `gluster volume heal <vol-name> split-brain' shows wrong usage <https://bugzilla.redhat.com/show_bug.cgi?id=1218565>
12:01 chirino joined #gluster
12:04 itisravi_ joined #gluster
12:06 DV joined #gluster
12:18 rafi joined #gluster
12:26 glusterbot News from newglusterbugs: [Bug 1212842] tar on a glusterfs mount displays "file changed as we read it" even though the file was not changed <https://bugzilla.redhat.com/show_bug.cgi?id=1212842>
12:27 atalur joined #gluster
12:28 aaronott joined #gluster
12:31 Anjana joined #gluster
12:34 aravindavk joined #gluster
12:35 rgustafs_ joined #gluster
12:35 kkeithley pekdemir: yes,  http://bits.gluster.org/pub/gluster/glusterfs/src/
12:39 kanagaraj joined #gluster
12:43 Peppard joined #gluster
12:45 mat1010 joined #gluster
12:54 atalur joined #gluster
12:55 pdrakeweb joined #gluster
13:01 spalai left #gluster
13:06 squizzi joined #gluster
13:10 harish joined #gluster
13:17 dgandhi joined #gluster
13:23 aravindavk joined #gluster
13:26 hamiller joined #gluster
13:26 georgeh-LT2 joined #gluster
13:30 m0zes joined #gluster
13:39 rjoseph joined #gluster
13:41 klaxa|work joined #gluster
13:42 Twistedgrim joined #gluster
13:45 its_pete joined #gluster
13:49 shubhendu joined #gluster
13:56 gothos joined #gluster
13:59 vimal joined #gluster
14:03 vimal joined #gluster
14:03 marbu joined #gluster
14:06 vimal joined #gluster
14:06 julim joined #gluster
14:10 vimal joined #gluster
14:14 jayunit1000 joined #gluster
14:14 hagarth joined #gluster
14:15 jayunit1000 #bigtop
14:15 jayunit1000 sorry, meant to type /join
14:15 jayunit1000 Is anyone curating a vagrant recipe for Gluster on Fedora 21/22 [for macs]
14:15 jayunit1000 i will  probably be creating one if not.
14:16 hpekdemir do it
14:16 jayunit1000 :)
14:17 ndevos jayunit1000: obnox was doing some Vagrant stuff
14:18 jayunit1000 i assume there's someone who's got a 2-node gluster vagrant recipe they are using floating around.  would hate to duplicate work.
14:18 jayunit1000 https://forge.gluster.org/vagrant/fedora19-gluster/blobs/master/vagrant-gluster-examples/ Is the original stuff i did, need to update it though
14:20 vimal joined #gluster
14:39 pdrakewe_ joined #gluster
14:40 mbukatov joined #gluster
15:00 p8952 joined #gluster
15:06 poornimag joined #gluster
15:07 glusterbot News from resolvedglusterbugs: [Bug 1210137] [HC] qcow2 image creation using qemu-img hits segmentation fault <https://bugzilla.redhat.com/show_bug.cgi?id=1210137>
15:20 kshlm joined #gluster
15:27 aravindavk joined #gluster
15:27 dusmant joined #gluster
15:30 vimal joined #gluster
15:32 Pupeno joined #gluster
15:33 nangthang joined #gluster
15:42 CyrilPeponnet gluster vol set vol changelog.changelog off
15:42 CyrilPeponnet volume set: failed: Staging failed on mvdcgluster01.us.alcatel-lucent.com. Error: One or more connected clients cannot support the feature being set. These clients need to be upgraded or disconnected before running this command again
15:42 CyrilPeponnet any idea ?
15:53 poornimag which version of Gluster client and server are you running?
15:54 CyrilPeponnet 3.5.2
15:54 CyrilPeponnet server
15:54 CyrilPeponnet and for client 3.5.2 and some 3.6
15:54 CyrilPeponnet should I use 3.5.2 for everyone ?
15:58 poornimag no, 3.5.2 version has that option supported and so does 3.6
15:58 poornimag This command should work for these versions
15:58 poornimag can you also check the operating-version on the gluster nodes
15:58 poornimag can be found in /var/lib/glusterd/glusterd.info
16:00 p8952 joined #gluster
16:00 CyrilPeponnet 3051 on the 3 nodes
16:00 CyrilPeponnet 30501
16:04 poornimag Can you check one other file to confirm, /var/lib/glusterd/vols/VOLNAME/info
16:05 poornimag op-version and client-op-version value
16:06 CyrilPeponnet version=14, op-version=3, client-op-version=2 on the 3 nodes
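A quick sketch of the checks poornimag walks through above, using the default paths mentioned in this conversation (VOLNAME is a placeholder):

    # cluster operating version, run on each node
    grep operating-version /var/lib/glusterd/glusterd.info

    # per-volume op-versions
    grep -E '^(op-version|client-op-version)' /var/lib/glusterd/vols/VOLNAME/info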
16:11 CyrilPeponnet I don't understand why...
16:12 CyrilPeponnet most of the clients are using nfs fronted, only let say 20 are using gfs fuse
16:14 squizzi joined #gluster
16:14 msvbhat CyrilPeponnet: In any case, do you really want to have a heterogeneous cluster? Why not move all servers to same version?
16:15 CyrilPeponnet msvbhat all server are on 3.5.2
16:16 jayunit1000 https://forge.gluster.org/vagrant/fedora19-gluster/ <-- okay, i added a fedora 21 recipe in there.  need to update dirnames and stuff, but it will work for vbox
16:16 glusterbot jayunit1000: <'s karma is now -14
16:16 jayunit1000 uhoh, why did i lose karma
16:17 ndevos jayunit1000: not you, but <++ did
16:17 glusterbot ndevos: <'s karma is now -13
16:17 msvbhat CyrilPeponnet: Hm, I thought you said some are in 3.6 and some are in 3.5
16:17 CyrilPeponnet @msvbhat clients
16:17 jayunit1000 oh hahaha
16:19 msvbhat CyrilPeponnet: Oh, Okay. My bad
16:19 CyrilPeponnet no pb :)
16:21 CyrilPeponnet @poornimag any clue or debugging I can do ?
16:24 msvbhat CyrilPeponnet: Can you check the log file? It might have clue about which client is rejecting the volume set option
16:25 CyrilPeponnet one of the node reply with
16:25 CyrilPeponnet [2015-05-21 16:25:09.360953] E [glusterd-op-sm.c:357:glusterd_check_client_op_version_support] 0-management: One or more clients don't support the required op-version
16:25 CyrilPeponnet [2015-05-21 16:25:09.360991] E [glusterd-op-sm.c:3886:glusterd_op_ac_stage_op] 0-management: Stage failed on operation 'Volume Set', Status : -1
16:25 glusterbot CyrilPeponnet: set the desired op-version using ''gluster volume set all cluster.op-version $desired_op_version''.
17:25 poornimag the client op-version is 2, which could be the reason for this... I'm not sure if this command will work on 3.5.X, you could try gluster volume set all cluster.op-version
16:25 poornimag gluster volume set all cluster.op-version  <opversion>
16:26 CyrilPeponnet not sure if this command works on 3.5.2
16:26 CyrilPeponnet nope
16:27 CyrilPeponnet and most of gfs client are connected to this node (message above)
17:27 CyrilPeponnet but to another volume, not the volume on which I try to set the changelog.changelog option
16:28 wushudoin joined #gluster
16:28 Gill joined #gluster
16:40 kshlm CyrilPeponnet, what you are facing is a bug.
16:41 CyrilPeponnet @kshlm sure ? some details ?
16:41 kshlm I've known this for a long time, and have had a fix lying in limbo for a year and a half.
16:41 kshlm https://review.gluster.org/5786
16:41 coredump joined #gluster
16:41 kshlm This is the fix.
16:42 kshlm The problem is glusterd is using the incorrect op-version to do the clients check. Instead of using the volume's client-op-version, we are using the volume's server op-version
16:43 kshlm Do you have any 3.4 client's mounting the volume?
16:44 JoeJulian wtf? reviewed and verified but no +2?
16:44 CyrilPeponnet No I only have 3.5 and 3.6 clients
16:45 JoeJulian Maybe there needs to be a similar group to the bug triage group that focuses on reviews.
16:45 kshlm Hmm, that seems a little strange. What version is your server.
16:45 Norky joined #gluster
16:46 CyrilPeponnet 3.5.2
16:46 CyrilPeponnet operating-version 30501 across all the nodes
16:47 kshlm JoeJulian, It is a verified and reviewed -1. The change still had some more issues, which prevented it from being merged at the time.
16:47 kshlm And I forgot about it, as the bug wasn't a really pressing one.
16:48 kshlm CyrilPeponnet, Give me a couple of minutes. I want to refresh myself on what actually happens when doing the failing check.
16:49 CyrilPeponnet Sure :)
16:52 JoeJulian Ah, whew. Glad it's just me. :D
16:55 Prilly joined #gluster
16:56 kshlm CyrilPeponnet, Are you using any other options on the volume?
16:56 kshlm I don't think you are but just double checking.
16:57 wkf joined #gluster
16:58 kshlm Ah, nevermind.
16:58 jayunit1000 joined #gluster
17:00 kshlm CyrilPeponnet, do you still have any 3.5.0 clients? They could be the only problem.
17:01 kshlm You've got 2 options now to get the command to work.
17:01 kshlm 1. If you've got any 3.5.0 clients, unmount them. You can remount after upgrading them to 3.5.1 or above.
17:01 kshlm or,
17:02 hpekdemir does anybody have any benchmarks of using rsync via glusterfs?
17:02 prilly_ joined #gluster
17:03 kshlm 2. You could downgrade the cluster op-version (the one in glusterd.info) to '3' the op-version of 3.5.0. You'd need to stop glusterds on all peers, edit glusterd.info on all and restart.
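A minimal sketch of option 2 as described, assuming glusterd is managed by systemd and that the current value is 30501 as reported earlier; the sed one-liner is only an illustration, editing the file by hand works just as well:

    # stop glusterd on ALL peers first, edit the file on all of them, then restart
    # (with the stock KillMode=process unit discussed later in this log, bricks keep running)
    systemctl stop glusterd
    sed -i 's/^operating-version=.*/operating-version=3/' /var/lib/glusterd/glusterd.info
    systemctl start glusterd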
17:03 CyrilPeponnet @kshlm afaik only 3.5.2 on el6 and el7 and some 3.6 on el7
17:04 CyrilPeponnet @kshlm to be sure I list all clients, is 'gluster vol status all clients' reliable ?
17:05 kshlm You need to be concerned only with the clients that have mounted this one particular volume.
17:05 kshlm 'gluster vol status <vol> clients' is reliable.
17:06 CyrilPeponnet let me triple check all this clients
17:06 kshlm It will dump the client list of each brick, so even if some clients aren't connected to all bricks, a union of the dumped lists should include all of them.
17:06 CyrilPeponnet @kshlm I use other options
17:06 hagarth joined #gluster
17:07 CyrilPeponnet yea I used to do 'gluster vol status all clients | grep -E "^1" | cut -d ":" -f 1 | sort -u'
17:07 glusterbot News from resolvedglusterbugs: [Bug 1215550] glusterfsd crashed after directory was removed from the mount point, while self-heal and rebalance were running on the volume <https://bugzilla.redhat.com/show_bug.cgi?id=1215550>
17:08 Rapture joined #gluster
17:15 JoeJulian @ping timeout
17:15 glusterbot pong
17:16 JoeJulian @ping-timeout
17:16 glusterbot JoeJulian: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
17:16 CyrilPeponnet @kshlm only 3.6.0.29 and 3.5.2 clients
17:19 kshlm CyrilPeponnet, :-/
17:19 kshlm Let's try something else.
17:20 kshlm One of the glusterd's might be logging something along the lines of 'One or more clients don't support the required op-version'
17:20 glusterbot kshlm: set the desired op-version using ''gluster volume set all cluster.op-version $desired_op_version''.
17:20 CyrilPeponnet https://paste.fedoraproject.org/224346/32228830/
17:20 CyrilPeponnet for all vol and options
17:21 CyrilPeponnet @kshlm yea the node 2
17:21 kshlm Okay. Would you be okay restarting glusterd on that node?
17:22 CyrilPeponnet will it restart the volumes ?
17:22 kshlm Nope.
17:22 CyrilPeponnet okay
17:22 kshlm You could just kill (glusterd pid)
17:22 kshlm and start glusterd
17:22 JoeJulian aka pkill glusterd
17:23 kshlm that works too.
17:23 CyrilPeponnet systemctl restart glusterd could work ?
17:23 kshlm Should work. I don't know how glusterd's unit file is setup though.
17:24 JoeJulian I think that still works. There's been controversy over whether systemd should stop the bricks as well.
17:24 CyrilPeponnet hmm
17:24 CyrilPeponnet ExecStart=/usr/sbin/glusterd -p /run/glusterd.pid
17:24 CyrilPeponnet KillMode=process
17:25 kshlm with killmode=process only glusterd should be restarted.
17:25 CyrilPeponnet yeah that the point right ?
17:25 * CyrilPeponnet praying while restarting glusterd
17:26 kshlm Yes. I didn't know if glusterd's unit file had that option set.
17:26 CyrilPeponnet done
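For reference, a short sketch of the check and restart described here, assuming a systemd host like the CentOS 7 nodes in this conversation:

    # confirm the unit only manages the glusterd process itself, not the bricks
    systemctl cat glusterd | grep -E 'ExecStart|KillMode'

    # restart just the management daemon; bricks and client connections stay up
    systemctl restart glusterd

    # manual equivalent mentioned above
    pkill glusterd
    /usr/sbin/glusterd -p /run/glusterd.pid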
17:27 kshlm Can you test the command now?
17:27 glusterbot News from newglusterbugs: [Bug 1214169] glusterfsd crashed while rebalance and self-heal were in progress <https://bugzilla.redhat.com/show_bug.cgi?id=1214169>
17:27 CyrilPeponnet same thing
17:28 kshlm I thought that should work.
17:28 kshlm BTW which volume are you running the command on?
17:28 CyrilPeponnet usr_global
17:29 CyrilPeponnet I can try to another volume :)
17:29 CyrilPeponnet it passed
17:29 CyrilPeponnet on another vol
17:29 CyrilPeponnet not mounted at all with gfs
17:30 CyrilPeponnet it passes on archive vol
17:31 CyrilPeponnet (you can check the vol in the fpaste above)
17:31 CyrilPeponnet archive vol as only 3.5.2 clients
17:32 CyrilPeponnet I will connect a 3.6 an try the same thing
17:32 kshlm usr_global too has the same I assume.
17:32 CyrilPeponnet usr_global has 3.6 and 3.5.2 clients
17:32 CyrilPeponnet archive only 3.5.2
17:32 CyrilPeponnet I will add a 3.6 now and try to set an option
17:34 CyrilPeponnet ok it failed
17:34 CyrilPeponnet on archive after I mount using 3.6 client
17:35 CyrilPeponnet after unmounting on the client I can pass the options
17:36 CyrilPeponnet So I should downgrade all my 3.6 clients
17:36 CyrilPeponnet and this should fix it
17:36 kshlm That is strange.
17:37 kshlm 3.6 clients are op-version 30600
17:37 CyrilPeponnet If you knew all the strangeness we've had with gluster since we started using it :p
17:37 kshlm We are checking if the clients can support 30501.
17:38 kshlm https://github.com/gluster/glusterfs/blob/v3.5.2/xlators/mgmt/glusterd/src/glusterd-op-sm.c#L347 is the check
17:38 CyrilPeponnet Looks like it doesn't work as expected
17:39 kshlm I need to investigate further. Seems to be reproducible.
17:39 CyrilPeponnet Well in our env yes
17:40 CyrilPeponnet But it simple env
17:40 CyrilPeponnet using James puppet classes to setup 3 nodes in centos7 using 3.5.2
17:40 CyrilPeponnet that all
17:40 kshlm CyrilPeponnet, one more question.
17:40 CyrilPeponnet sure
17:40 kshlm Are you using rhs clients by any chance?
17:41 kshlm The 3.6 clients shipped in centos are based on the rhs client bits.
17:41 CyrilPeponnet only centos
17:41 kshlm They have a different op-version compared to the upstream community shipped glusterfs-3.6 bits.
17:41 CyrilPeponnet how can I check that
17:42 kshlm For rhs clients, the version string of the rpms should end with rhs
17:42 kumar joined #gluster
17:43 CyrilPeponnet no, it's always elX
17:43 kshlm oh wait that's wrong.
17:44 kshlm The rhs packages end with rhs.
17:44 kshlm The client packages don't.
17:44 kshlm Is the rpm version 3.6.0.x?
17:44 CyrilPeponnet .29
17:44 CyrilPeponnet yeah
17:45 CyrilPeponnet Name        : glusterfs
17:45 CyrilPeponnet Version     : 3.6.0.29
17:45 CyrilPeponnet Release     : 2.el7
17:45 CyrilPeponnet Architecture: x86_64
17:45 kshlm That's a rhs client package.
17:45 kshlm So we've found the problem!
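Based on the version scheme discussed here, a quick way to tell the two flavours apart on a client box; the community example version and release below are illustrative, the RHS-derived one matches the rpm output pasted above:

    rpm -q glusterfs glusterfs-fuse --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n'
    # community build:    e.g. glusterfs-3.6.3-1.el7.x86_64     (three-part version)
    # RHS-derived build:  e.g. glusterfs-3.6.0.29-2.el7.x86_64  (the 3.6.0.x client seen here)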
17:46 CyrilPeponnet So
17:46 CyrilPeponnet If I want to use the latest glusterfs client
17:46 CyrilPeponnet for centos7
17:46 kshlm There are known incompatibilities between the community-supplied glusterfs packages and the ones that come by default with centos, which are based on the redhat storage versions.
17:46 CyrilPeponnet and make it works
17:46 CyrilPeponnet what repo should I use
17:46 kshlm We provide an epel based repo I think.
17:46 kshlm @packages
17:46 CyrilPeponnet makes sense
17:46 kshlm @repos
17:46 CyrilPeponnet right
17:46 glusterbot kshlm: See @yum, @ppa or @git repo
17:46 kshlm @yum
17:46 glusterbot kshlm: The official community glusterfs packages for RHEL (including CentOS, SL, etc) are available at http://download.gluster.org/pub/gluster/glusterfs/. The official community glusterfs packages for Fedora 19 and later are in the Fedora yum updates (or updates-testing) repository.
17:47 kshlm CyrilPeponnet, ^
17:47 kshlm this should help you.
17:48 CyrilPeponnet our servers are installed using puppet classes and the glusterfs packages are fetched from the glusterfs repo
17:48 CyrilPeponnet for the clients most of them are managed through puppet
17:48 CyrilPeponnet so they have the same source for client
17:48 CyrilPeponnet only some stateless images are using 3.6 from el7 repo
17:48 kshlm Doesn't seem like the correct packages were picked though.
17:48 CyrilPeponnet This is the issue so
17:48 kshlm Oh, okay.
17:48 CyrilPeponnet pfiou
17:49 kshlm Those were the problematic clients.
17:49 CyrilPeponnet yep
17:49 CyrilPeponnet So I will update my image using the glusterfs repo
17:49 CyrilPeponnet Should I stick to 3.5.2 or using 3.6 client is better with 3.5.2 servers
17:50 CyrilPeponnet (the stateless image is an ivm hypervisor mounting a vol for qcow2 hosting)
17:50 CyrilPeponnet s/ivm/kvm
17:50 kshlm The general recommendation has been to use older clients with newer servers.
17:50 CyrilPeponnet hmm
17:50 kshlm But I'm not sure how valid that statement is any longer.
17:50 JoeJulian (3.5.3 servers are better)
17:51 kshlm hagarth, you have any ideas w.r.t this?
17:51 CyrilPeponnet @JoeJulian I know
17:51 CyrilPeponnet but too afraid to update
17:52 hagarth CyrilPeponnet: sticking to 3.5.2 clients with 3.5.2 servers would be better.
17:52 CyrilPeponnet If the update from 3.5.2 to 3.5.3 can be done online without losing the connection state for the clients, then it could be planned easily
17:52 kshlm I think so too.
17:52 CyrilPeponnet @hagarth thanks
17:53 kshlm 3.6 has some differences with afr and dht compared to 3.5, which could lead to problems.
17:53 CyrilPeponnet but the last time I updated, I screwed up the cluster
17:53 kshlm Thanks hagarth
17:53 CyrilPeponnet So, time to rebuild my images
17:53 CyrilPeponnet :)
17:55 CyrilPeponnet @kshlm @JoeJulian @hagarth Thank you so much for your time guys, I hope this will solve my geo-replication issue
17:56 JoeJulian ME too
17:56 CyrilPeponnet :p
17:56 kshlm Me too :)
17:57 * kshlm is off to bed.
17:57 CyrilPeponnet @kshlm gn !
18:01 nsoffer joined #gluster
18:03 gnudna joined #gluster
18:13 jiku joined #gluster
18:23 plarsen joined #gluster
18:27 JoeJulian Son of a .... I just scrolled back to see what the solution was. F' Red Hat!
18:27 JoeJulian Just makes me angry.
18:27 CyrilPeponnet :p
18:27 glusterbot News from newglusterbugs: [Bug 1223935] 3.7.0 introduced spelling errors <https://bugzilla.redhat.com/show_bug.cgi?id=1223935>
18:27 glusterbot News from newglusterbugs: [Bug 1223937] Outdated autotools helper config.* files <https://bugzilla.redhat.com/show_bug.cgi?id=1223937>
18:27 glusterbot News from newglusterbugs: [Bug 1215418] Manpage: hyphen used as minus sign <https://bugzilla.redhat.com/show_bug.cgi?id=1215418>
18:28 spot joined #gluster
18:42 deniszh joined #gluster
18:46 sage joined #gluster
18:49 ppai joined #gluster
18:58 glusterbot News from newglusterbugs: [Bug 1223938] Source files are deleted after building glusterfs <https://bugzilla.redhat.com/show_bug.cgi?id=1223938>
18:58 glusterbot News from newglusterbugs: [Bug 1223942] Source files are modified after building glusterfs <https://bugzilla.redhat.com/show_bug.cgi?id=1223942>
18:58 glusterbot News from newglusterbugs: [Bug 1223945] Scripts/Binaries are not installed with +x bit <https://bugzilla.redhat.com/show_bug.cgi?id=1223945>
18:58 glusterbot News from newglusterbugs: [Bug 1223947] Syntax errors in shell scripts <https://bugzilla.redhat.com/show_bug.cgi?id=1223947>
18:58 glusterbot News from newglusterbugs: [Bug 1223949] Missing man pages <https://bugzilla.redhat.com/show_bug.cgi?id=1223949>
19:01 dman_d joined #gluster
19:04 dman_d hey guys, sorry for being lame. I'm trying to figure out how to find the documentation that I need. I have a replicated volume, that I need to convert into a distributed replicated volume so that it's easier to add storage. First, is this possible, and second, can someone point me to the docs where I can read up on this? Thanks!
19:07 hchiramm joined #gluster
19:11 plarsen joined #gluster
19:13 JoeJulian dman_d: Just add more bricks in a multiple of the replica count.
19:13 JoeJulian Then rebalance.
19:13 JoeJulian That's it.
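A minimal sketch of what JoeJulian describes, with hypothetical volume, host and brick names; on an existing replica-2 volume the new bricks go in pairs, and the volume type changes from Replicate to Distributed-Replicate:

    # add one new replica pair to the existing volume
    gluster volume add-brick myvol server3:/data/brick1 server4:/data/brick1

    # spread existing data onto the new bricks
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status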
19:17 Gill joined #gluster
19:17 dman_d so it's not the volume create command, it's the gluster volume add-brick command? And just add a multiple of the replica count? This will work if my cluster is already setup?
19:17 dman_d Thanks JoeJulian!
19:18 JoeJulian Yep, and you're welcome.
19:18 dman_d :)
19:19 tessier joined #gluster
19:24 T0aD- joined #gluster
19:27 mkzero_ joined #gluster
19:28 sage joined #gluster
19:28 soumya joined #gluster
19:36 dman_d joined #gluster
19:42 Gill joined #gluster
20:00 rotbeard joined #gluster
20:02 prilly_ joined #gluster
20:03 dman_d joined #gluster
20:15 CyrilPeponnet Oh and for the record, with centos-7 if you use epel-7, libvirt-daemon-driver-storage has a require on gluster-3.6
20:15 CyrilPeponnet :(
20:23 CyrilPeponnet this is really bad....
20:28 verdurin joined #gluster
20:29 JoeJulian I think we could add the "provides" to the 3.5 spec to allow it to satisfy that requirement.
20:29 gildub joined #gluster
20:29 jayunit1000 joined #gluster
20:30 CyrilPeponnet @JoeJulian it could be great !
20:31 JoeJulian In the mean time, you can "rpm -q -a 'glusterfs*' | xargs rpm -e --nodeps"
20:31 JoeJulian Then you should be able to install.
20:31 CyrilPeponnet http://fpaste.org/224448/22403111/
20:32 JoeJulian Alternatively you could just try the upstream repo for 3.6 and see if it works.
20:32 CyrilPeponnet I use kickstart file to provision machines and it fails when I try to fix the version...
20:32 JoeJulian Ah, well, yeah....
20:32 CyrilPeponnet well I need to fix the version of glisters to 3.5.2
20:33 JoeJulian Just try 3.6.3 from the gluster repo.
20:33 CyrilPeponnet (fu*** auto correction)
20:34 JoeJulian I think the problem is that 3.6.0.29 from RHS is actually 3.5.0 with a bunch of patches.
20:34 CyrilPeponnet 3.6.3 will work fine with 3.5.2 server ?
20:35 JoeJulian It *should*. It won't work any worse than the rhs client you were using. It's certainly worth trying.
20:35 CyrilPeponnet ok it seems to resolve the dependencies better
20:36 CyrilPeponnet I will try
20:36 CyrilPeponnet thanks for the hint
20:36 rwheeler joined #gluster
20:39 CyrilPeponnet in fact just adding the 3.6.3 repo makes yum install 3.6.3 instead of the epel-7 one
20:39 CyrilPeponnet (when installing libvirt)
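A sketch of pinning clients to the community packages instead of the 3.6.0.x builds pulled in through epel-7. Only the top-level download.gluster.org URL comes from the @yum factoid above; the repo id, file name and exact subdirectory layout are assumptions, so check the directory listing before using it, and consider enabling gpgcheck with the repo's signing key:

    # contents of /etc/yum.repos.d/glusterfs-community.repo (hypothetical file name)
    [glusterfs-community]
    name=GlusterFS community packages
    baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.3/CentOS/epel-7/$basearch/
    enabled=1
    gpgcheck=0

    # then install the client bits from it
    yum install glusterfs glusterfs-fuse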
20:40 gnudna i used epel-7 with no issues a few days ago when i updated
20:40 CyrilPeponnet @gnudna yeah but I'm trying to pin gluster to 3.5.2 and that does not work
20:42 CyrilPeponnet @JoeJulian I just test if 3.6.3 allow me to pass option to vol using set and it works
20:43 CyrilPeponnet so at least it's a semi-victory
20:45 gnudna CyrilPeponnet I believe you have to change the repo file
20:46 gnudna assuming 3.5.2 is available
20:46 gnudna but i would not hold my breath that it would
20:47 CyrilPeponnet @gnudna No, the point is that libvirt from epel-7 requires glusterfs and the resolution is only done for gluster > 3.6
20:48 CyrilPeponnet @JoeJulian At least my image is building now :)
20:49 CyrilPeponnet @JoeJulian oh and by the way, as you seem to be around: on my 3-node setup one of the nodes shows high CPU spikes (like 600% cpu) from time to time, and as a consequence all nfs clients experience a "hang" for 0.5 to 5s
20:50 CyrilPeponnet glusterfsd holding nfs seems to be the culprit
20:50 JoeJulian I haven't seen that, but I also don't use nfs.
20:50 CyrilPeponnet would your recommendation be to move from nfs to gfs ?
20:50 CyrilPeponnet that's what I have planned for most of the clients
20:51 CyrilPeponnet I have ~4k clients...
20:51 JoeJulian Have you considered not using a mount?
20:51 JoeJulian https://github.com/sahlberg/libnfs
20:52 CyrilPeponnet well basically it hosts home dir and qcow2 files for vms so...
20:52 gnudna left #gluster
20:54 CyrilPeponnet but this is interesting
20:55 JoeJulian https://docs.google.com/document/d/15IiPVIPMzgGwkt1sKuIusRE2l3pQY6NnA4WmWaMsFJE/edit
20:56 tessier joined #gluster
20:56 JoeJulian It's just a concept document, but fwiw...
20:59 CyrilPeponnet Interesting
21:00 ppai joined #gluster
21:17 zerick joined #gluster
21:18 bturner_ joined #gluster
21:23 ppai joined #gluster
21:26 dgandhi joined #gluster
21:33 chirino_m joined #gluster
21:40 wkf joined #gluster
21:43 Gill_ joined #gluster
21:45 Gill joined #gluster
21:54 codex joined #gluster
22:01 mike25de joined #gluster
22:09 daMaestro joined #gluster
22:20 ppai joined #gluster
23:10 Rapture joined #gluster
23:29 dman_d joined #gluster
23:32 ppai joined #gluster
23:52 CyrilPeponnet @JoeJulian @kshlm Good news: after rebuilding a hypervisor image based on 3.6.3 I can now change vol settings :)
23:55 JoeJulian Excellent
23:56 CyrilPeponnet and even better, the changelog socket is now present
23:56 CyrilPeponnet for my georeplication
23:56 CyrilPeponnet Thanks to you guys
23:56 lexi2 joined #gluster
23:56 JoeJulian Always happy to help.
23:57 CyrilPeponnet especially when it works !
