
IRC log for #gluster, 2015-08-10


All times shown according to UTC.

Time Nick Message
00:08 haomaiwang joined #gluster
00:19 nangthang joined #gluster
00:20 kbyrne joined #gluster
00:29 kokopelli joined #gluster
00:38 badone_ joined #gluster
00:44 kokopellifd joined #gluster
00:54 aaronott joined #gluster
01:04 theron joined #gluster
01:09 theron joined #gluster
01:12 theron_ joined #gluster
01:28 gildub joined #gluster
01:32 Lee1092 joined #gluster
01:50 m0zes joined #gluster
02:01 dgandhi joined #gluster
02:01 suliba joined #gluster
02:11 calavera joined #gluster
02:28 calavera joined #gluster
02:48 nangthang joined #gluster
02:58 dewey joined #gluster
03:07 cholcombe joined #gluster
03:16 TheCthulhu1 joined #gluster
03:18 magamo Heya folks, anyone have any idea on how to force an index resync of a geo-replication session in Gluster 3.7?
03:20 ppai joined #gluster
03:34 TheCthulhu3 joined #gluster
03:43 atinm joined #gluster
03:49 itisravi joined #gluster
03:50 sakshi joined #gluster
03:51 TheSeven joined #gluster
04:01 rafi joined #gluster
04:17 nbalacha joined #gluster
04:19 ramky joined #gluster
04:28 jwd joined #gluster
04:31 jwaibel joined #gluster
04:35 * JoeJulian grumbles about the completely useless nfs-ganesha information available.
04:36 elico joined #gluster
04:39 anil joined #gluster
04:44 kotreshhr joined #gluster
04:45 ndarshan joined #gluster
04:47 ramteid joined #gluster
04:52 meghanam joined #gluster
04:58 RameshN joined #gluster
05:04 kdhananjay joined #gluster
05:04 kanagaraj joined #gluster
05:08 ira joined #gluster
05:11 gem joined #gluster
05:12 deepakcs joined #gluster
05:17 mikemol joined #gluster
05:21 nbalacha joined #gluster
05:24 aravindavk joined #gluster
05:24 vimal joined #gluster
05:24 ppai joined #gluster
05:26 anil joined #gluster
05:29 rwheeler joined #gluster
05:31 jiffin joined #gluster
05:35 overclk joined #gluster
05:39 kovshenin joined #gluster
05:47 dusmant joined #gluster
05:50 maveric_amitc_ joined #gluster
05:51 vmallika joined #gluster
05:53 raghu joined #gluster
06:00 bennyturns joined #gluster
06:03 jwd joined #gluster
06:19 bennyturns joined #gluster
06:26 mikemol joined #gluster
06:31 ramky joined #gluster
06:31 elico joined #gluster
06:34 cuqa_ joined #gluster
06:34 Philambdo joined #gluster
06:35 LebedevRI joined #gluster
06:40 Pupeno joined #gluster
06:42 nishanth joined #gluster
06:44 mikemol joined #gluster
06:45 rjoseph joined #gluster
06:54 spalai joined #gluster
06:58 mikemol joined #gluster
07:00 Slashman joined #gluster
07:14 cuqa_ joined #gluster
07:24 shaunm joined #gluster
07:38 kdhananjay joined #gluster
07:41 mbukatov joined #gluster
07:45 aravindavk joined #gluster
07:45 fsimonce joined #gluster
07:45 nangthang joined #gluster
07:47 ppai joined #gluster
07:51 ajames-41678 joined #gluster
07:52 kotreshhr joined #gluster
07:58 kdhananjay joined #gluster
07:58 rtalur joined #gluster
08:03 Trefex joined #gluster
08:07 ctria joined #gluster
08:26 muneerse joined #gluster
08:30 brascon joined #gluster
08:31 brascon good morning all
08:31 brascon i am using gluster 3.7.3. when i type this command in the gluster shell: gluster> gluster volume geo-replication glustvol centos2:/slave start, i got the following error: unrecognized word: gluster (position 0). i searched all over but could not find updated geo-replication docs.
08:32 brascon01 joined #gluster
08:35 ndevos brascon: when you started the gluster "shell", you do not need to specify the 1st gluster keyword
08:35 ndevos brascon: you can execute that full command outside of the gluster shell, that is mostly easier
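The distinction ndevos is pointing out: inside the interactive gluster shell the leading "gluster" keyword is implicit, while from a normal shell prompt it is required. A minimal sketch, reusing the volume and slave names from the conversation as placeholders:

    # inside the gluster shell, drop the leading "gluster"
    gluster> volume geo-replication glustvol centos2:/slave start

    # the same command run directly from the system shell
    gluster volume geo-replication glustvol centos2:/slave start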
08:37 atalur joined #gluster
08:43 Romeor ndevos, its nice to see you
08:44 ndevos welcome back Romeor :)
08:45 anil joined #gluster
08:49 overclk joined #gluster
08:57 ramky joined #gluster
08:59 beardyjay joined #gluster
09:02 beardyjay joined #gluster
09:07 ramky_ joined #gluster
09:15 ajames41678 joined #gluster
09:16 ajames_41678 joined #gluster
09:34 ppai joined #gluster
09:44 autoditac joined #gluster
09:46 brascon ndevos many thanks so you suggest
09:46 brascon i will type:  volume geo-replication glustvol centos2:/slave start  but it still did not work
09:48 brascon unrecognized word gro-replication  (position 1)
09:50 nishanth joined #gluster
09:54 ndevos brascon: you have once "geo-replication" and once "gro-replication"
09:55 Norky joined #gluster
10:02 brascon sorry it is like so volume geo-replication glustvol centos2:/slave start
10:02 brascon i just mistype it now
10:02 brascon on the chat
10:02 brascon should be geo-replication
10:02 brascon i am using gluster 3.7
10:02 brascon has the config been changed?
10:27 txomon|fon joined #gluster
10:33 jcastill1 joined #gluster
10:38 jcastillo joined #gluster
10:43 anil joined #gluster
10:47 chirino joined #gluster
10:50 txomon|fon anyone knows how to stop a brick? I need to trigger a health check...
10:54 papamoose joined #gluster
11:13 and` did locking change between the 3.5.X and 3.7.X series? I was looking at https://www.gluster.org/pipermail/gluster-users/2015-June/022351.html as the Nagios check was complaining that it couldn't acquire a lock anymore, and I see the e-mail mentions that bumping the op-version (which in my case was still 3, from the 3.5.X series) should resolve the issue
11:13 glusterbot Title: [Gluster-users] Unable to get lock for uuid peers (at www.gluster.org)
11:13 and` is there any documentation about cluster vs volume locking?
11:14 jwd joined #gluster
11:16 ira joined #gluster
11:20 dewey joined #gluster
11:22 ajames-41678 joined #gluster
11:34 firemanxbr joined #gluster
11:34 bfoster joined #gluster
11:44 ppai joined #gluster
11:48 julim joined #gluster
11:51 unclemarc joined #gluster
11:54 semajnz joined #gluster
11:58 ToMiles joined #gluster
12:10 itisravi_ joined #gluster
12:13 dijuremo @and` Your problem might be the insecure ports. By default previous versions of gluster did not allow that. 3.7.3 does, so changes are needed on the volumes.
12:15 dijuremo @and` read the release notes:  https://www.mail-archive.com/gluster-users@gluster.org/msg21362.html
12:16 dijuremo @and` I had a similar problem, could not do a rolling upgrade from 3.7.2 to 3.7.3. @JoeJulian pointed me to this change, so I understand why it did not work.
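The 3.7.3 change dijuremo refers to (clients binding to unprivileged ports by default) is usually handled by allowing insecure ports on the servers. A hedged sketch of the commonly documented settings, to be checked against the release notes linked above:

    # per volume, on any server
    gluster volume set <volname> server.allow-insecure on

    # in /etc/glusterfs/glusterd.vol on every server, followed by a glusterd restart
    option rpc-auth-allow-insecure on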
12:16 ppai joined #gluster
12:19 atinm and`, cluster lock implies a lock on the entire gluster cluster, which means only one glusterd transaction can run at a time, whereas from 3.6 onwards we have a volume-wide lock, which means the lock is applied per volume and multiple transactions on different volumes can run simultaneously
12:19 atinm and`, does it make sense now?
12:20 bennyturns joined #gluster
12:20 atinm and`, and yes if you are upgrading from 3.5 to 3.6 onwards  locking changes
12:20 atinm and`, you would need to explicitly bump up the op-version as well
12:21 and` dijuremo, not sure that's my case; atinm's explanation definitely made it clearer, as it was surely a locking issue that appeared after upgrading from 3.5 to 3.7
12:22 and` atinm, the nagios check was locking glusterd with a query for each of the volumes and that explains why having 3 checks at the same time locked the whole cluster out
12:22 and` 3 checks, as there are actually 3 bricks
12:22 atinm and`, yes
12:22 and` bumping the op-version from 3 to 30703 seems to have fixed it
12:23 and` atinm, keeping the op-version at 3 was telling glusterd to keep using the old cluster-wide lock?
12:23 gletessier joined #gluster
12:24 and` 3 is the actual op-version for the 3.5 series
12:24 atinm and`, it will, as the code checks whether the op-version is less than 30600 and if so goes for the cluster lock
12:24 and` that totally explains it
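For reference, the op-version bump and` describes is a single cluster-wide setting; a minimal sketch of how it can be checked and raised (the target value matches the 3.7.3 number mentioned above):

    # current operating version of the cluster
    grep operating-version /var/lib/glusterd/glusterd.info

    # raise it so the volume-wide lock (op-version >= 30600) is used
    gluster volume set all cluster.op-version 30703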
12:25 and` atinm, one more question if you don't mind, were 3.5 nodes compatible with 3.7 clients?
12:25 and` seems we weren't able to mount anything when the clients were upgraded to 3.7
12:26 and` and the nodes were still at 3.5 (long story short there was a yum exclude for the package as an earlier release of gluster had a broken init script which took down our entire storage)
12:27 atinm and`, I don't think you would be able to mount a 3.5 server with 3.7 clients, as there are certain options which are not supported by the clients
12:27 atinm and`, we do not encourage the clients to be running with higher version than servers
12:27 atinm and`, that sounds odd as well, doesn't it
12:28 and` atinm, definitely, since that time we actually forgot to remove the yum exclude and the clients version started to diverge
12:28 and` atinm, but having the whole storage down has been awful
12:29 atinm and`, you would not need to bring down your whole storage, just upgrade the servers one by one
12:31 and` atinm, the "having the whole storage down" was related to the nightly RHN yum run which upgraded both the nodes with a glusterfs package having a broken init script
12:31 and` atinm, the nightly run happens at the same time so the whole set of nodes went down unexpectedly in the middle of the night
12:32 and` when we do mass reboots and so on we usually shut down one node, wait for it to reboot and go with the other as expected :)
12:35 txomon|fon joined #gluster
12:35 txomon|fon joined #gluster
12:38 theron joined #gluster
12:40 _Bryan_ joined #gluster
12:41 XpineX joined #gluster
12:46 shaunm joined #gluster
12:50 chirino joined #gluster
12:51 kovshenin joined #gluster
12:58 B21956 joined #gluster
13:22 julim joined #gluster
13:26 aaronott joined #gluster
13:33 ekuric joined #gluster
13:37 calavera joined #gluster
13:41 dgandhi joined #gluster
13:45 Twistedgrim joined #gluster
13:46 plarsen joined #gluster
13:46 hamiller joined #gluster
13:51 spalai1 joined #gluster
13:52 kotreshhr1 joined #gluster
14:03 hagarth joined #gluster
14:06 aravindavk joined #gluster
14:10 overclk joined #gluster
14:15 wushudoin joined #gluster
14:16 rwheeler joined #gluster
14:17 nage joined #gluster
14:26 RedW joined #gluster
14:28 magamo Anyone have any idea of how to do a full resync of my geo-replicated volume?  I see one system in status coming up as 'Faulty', and the larger system in the master volume still as 'Hybrid Crawl', but I've not seen any data movement in several days and it never fully synced the volume in the first place.  Any ideas?
14:29 magamo It's taken weeks to get this much synced over, so I'm not particularly relishing the idea of restarting it from scratch.
14:37 kotreshhr1 magamo: Did check the log files for why the session is Faulty?
14:37 kotreshhr1 magamo: *Did you
14:38 drue joined #gluster
14:39 cyberswat joined #gluster
14:40 magamo kotreshhr1: That would be on the node that is faulty, or the node I've been using to control the process? (Short answer: No.)
14:41 kotreshhr1 magamo: ok, please check the log file. It would be on the master gluster volume node which is Faulty.
14:42 magamo Looks like the faulty node has:
14:42 magamo ChangelogException: [Errno 61] No data available
14:44 kotreshhr1 magamo: Is the same error repeating?
14:44 magamo Yep.
14:44 magamo Is there any way to force a complete resync, wouldn't that have the changelogs start from a new point?
14:44 kotreshhr1 magamo: Which version of the gluster is it?
14:44 magamo 3.7.3
14:47 kotreshhr1 kotreshhr1: For complete resync, stime on bricks needs to be removed and geo-rep should be restarted.
14:47 kotreshhr1 magamo: *magamo :)
14:48 magamo stime, that's within the brick directory itself, right?
14:48 kotreshhr1 magamo: stime is an extended attribute stored on each brick root of the master volume.
14:49 kotreshhr1 magamo: How many bricks are there in master volume?
14:49 magamo 4.
14:50 bennyturns joined #gluster
14:51 kotreshhr1 magamo: stime extended attribute should be removed on all the four bricks. give me a sec, will give you the command.
14:52 magamo Thanks, kotreshhr1.
14:54 kotreshhr1 magamo: setfattr -x trusted.glusterfs.<MASTER_VOL_UUID>.<SLAVE_VOL_UUID>.stime <brick-root>
14:55 kotreshhr1 magamo: Above mentioned stime key can be got as follows:
14:55 kotreshhr1 Using 'gluster volume info <mastervol>', get all brick paths and dump all the
14:55 kotreshhr1 extended attributes, using 'getfattr -d -m . -e hex <brick-path>', which will
14:55 kotreshhr1 dump stime key which should be removed.
14:56 kotreshhr1 magamo: If AFR is setup, do this for all replicated set
14:58 calavera joined #gluster
15:00 kotreshhr1 magamo: Before removing the stime xattr, Please stop the geo-rep session
15:00 kotreshhr1 magamo: Then start geo-rep session after removing
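Pulling kotreshhr1's steps together, a rough sketch of the full-resync procedure; the bracketed names are placeholders that have to be filled in from 'gluster volume info' and the getfattr dump:

    # 1. stop the geo-replication session
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> stop

    # 2. on each master brick root (all replica copies included), find and remove the stime xattr
    getfattr -d -m . -e hex <brick-root> | grep stime
    setfattr -x trusted.glusterfs.<MASTER_VOL_UUID>.<SLAVE_VOL_UUID>.stime <brick-root>

    # 3. restart the session; geo-rep should re-crawl and resync
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start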
15:02 kotreshhr1 magamo: Could you pass on the current log files for us to debug?
15:02 calisto joined #gluster
15:05 victori joined #gluster
15:10 timotheus1 joined #gluster
15:12 doekia joined #gluster
15:12 overclk joined #gluster
15:13 plarsen joined #gluster
15:16 aravindavk joined #gluster
15:17 Gill joined #gluster
15:17 shyam joined #gluster
15:18 kdhananjay joined #gluster
15:19 _Bryan_ joined #gluster
15:23 magamo Thanks.  I think this arose due to the root partition on one of the bricks filling up.
15:23 magamo So I don't think it's a bug in gluster.  However, I've got another question while I have you.
15:24 kotreshhr1 magamo: Oh ok. Please go ahead.
15:24 magamo I'm seeing a gfid in split-brain, and according to the documentation I've found online, I just need to delete the corresponding file from one of the bricks, and run a heal.
15:24 magamo But, I've done that, and it's still in split-brain.
15:25 kotreshhr1 magamo: I am not very familiar with AFR, but looking into self heal logs should help you.
15:27 magamo Thanks.  I'll give it a shot.
15:27 kotreshhr1 magamo: Welc!
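One likely reason magamo's gfid is still listed in split-brain: when a file is deleted from a brick by hand, the matching gfid hard link under the brick's .glusterfs directory has to be removed as well, otherwise the heal keeps reporting it. A hedged sketch with placeholder paths, not something verified in this log:

    # on the brick holding the copy chosen to discard
    rm /bricks/brick1/path/to/file
    # the gfid hard link lives at .glusterfs/<first-2-hex>/<next-2-hex>/<full-gfid>
    rm /bricks/brick1/.glusterfs/ab/cd/abcd1234-<rest-of-gfid>

    # then trigger a heal
    gluster volume heal <volname>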
15:29 aaronott joined #gluster
15:34 autoditac joined #gluster
15:51 cholcombe joined #gluster
15:58 theron_ joined #gluster
15:58 autoditac joined #gluster
16:02 autoditac_ joined #gluster
16:02 aaronott joined #gluster
16:09 dewey joined #gluster
16:26 autoditac_ joined #gluster
16:49 aaronott joined #gluster
16:51 cc3 joined #gluster
16:52 Slashman joined #gluster
16:53 Rapture joined #gluster
16:57 cc3 Is there any way to do load balancing of two bricks without using a load balancer? i.e. drop 2 brick FQDNs in the client config and have it take whichever one is alive?
17:02 JoeJulian cc3: That's an incorrect view. The client connects to all the bricks and uses read strategies that are generally optimal by default. It writes directly to the servers that contain the replica bricks. Load balancing is inherent in a distributed volume.
17:05 cc3 JoeJulian: I'm setting up replicated volumes. What I'd like to do is mount the volume so that if one brick dies, the other picks it up.
17:06 cc3 JoeJulian: From Docs "mount -t glusterfs HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR", basically trying to setup in a way if that host dies, I don't lose the filesystem mount.
17:11 cc3 Scratch all that, I see the "backupvolfile-server" option now.
17:14 aaronott joined #gluster
17:14 JoeJulian @mount server
17:14 glusterbot JoeJulian: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns, or (#2) One caveat is that the clients never learn of any other management peers. If the client cannot communicate with the mount server, that client will not learn of any volume changes.
17:15 JoeJulian cc3: ^
17:18 cc3 JoeJulian: Thank you, as always
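For the mount itself, the option cc3 found only affects volfile retrieval, as the factoid says; once mounted, the client talks to all bricks anyway. A minimal fstab-style sketch with placeholder hostnames (the option is spelled backupvolfile-server in older releases and backup-volfile-servers in newer ones):

    # /etc/fstab -- server2/server3 are fallbacks for fetching the volfile only
    server1:/VOLNAME  /mnt/gluster  glusterfs  defaults,backup-volfile-servers=server2:server3  0 0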
17:29 calavera joined #gluster
17:36 magamo Can anyone tell me what the different fields in a 'geo-replication status detail' mean?  Specifically: Data, Entry and Meta?
17:40 ttkg joined #gluster
17:46 cc2 joined #gluster
17:51 cc3 joined #gluster
17:53 cc2 joined #gluster
17:55 neofob joined #gluster
18:11 cc1 joined #gluster
18:12 vimal joined #gluster
18:14 aaronott joined #gluster
18:14 cc1 left #gluster
18:20 RedW joined #gluster
18:20 CyrilPeponnet joined #gluster
18:21 CyrilPeponnet joined #gluster
18:21 CyrilPeponnet joined #gluster
18:21 CyrilPeponnet joined #gluster
18:22 CyrilPeponnet joined #gluster
18:22 theron joined #gluster
18:22 CyrilPeponnet joined #gluster
18:22 calavera joined #gluster
18:22 CyrilPeponnet joined #gluster
18:22 CyrilPeponnet joined #gluster
18:23 CyrilPeponnet joined #gluster
18:23 CyrilPeponnet joined #gluster
18:24 CyrilPeponnet joined #gluster
18:24 CyrilPeponnet joined #gluster
18:27 CP|AFK joined #gluster
18:37 nzero joined #gluster
18:54 ghenry joined #gluster
18:54 ghenry joined #gluster
18:55 ToMiles joined #gluster
18:56 ToMiles joined #gluster
19:11 calavera joined #gluster
19:15 LebedevRI joined #gluster
19:24 aaronott joined #gluster
19:37 CyrilPeponnet joined #gluster
19:37 CyrilPeponnet joined #gluster
19:40 CyrilPeponnet hey guys, any clue why I can't pass noatime or remount as an option for a glusterfs mount using fuse? it was working fine on 3.5.2 but not in 3.6.4
19:40 CyrilPeponnet how can I remount a mount point with new options if remount is not working?
19:40 theron_ joined #gluster
19:41 CyrilPeponnet (Invalid option remount)
19:45 CyrilPeponnet @purpleidea I have this issue with puppet-gluster since I updated to 3.6.x
19:45 purpleidea CyrilPeponnet: what is it ? :)
19:45 CyrilPeponnet Invalid option remount
19:46 CyrilPeponnet remount is no longer working with fuse gluster
19:46 purpleidea oh i see
19:46 CyrilPeponnet (or was not meant to)
19:46 purpleidea CyrilPeponnet: that is definitely a gluster issue, once it's figured out, please let me know and send me a puppet-gluster patch to fix it :)
19:46 CyrilPeponnet :p
19:47 purpleidea CyrilPeponnet: just to confirm, I get you're having an issue, but i don't know what it is... ie: i'm not confirming the issue, i haven't hit that yet, but i am not testing 3.6 regularly
19:47 CyrilPeponnet @JoeJulian @ndevos ? any idea ?
19:48 CyrilPeponnet @purpleidea I got it. try to mount some things with mount -t gluster server:/vol /mnt and issue mount -o remount /mnt
19:49 purpleidea CyrilPeponnet: ?
19:52 CyrilPeponnet https://github.com/gluster/glusterfs/blob/release-3.6/xlators/mount/fuse/utils/mount.glusterfs.in#L467
19:52 glusterbot Title: glusterfs/mount.glusterfs.in at release-3.6 · gluster/glusterfs · GitHub (at github.com)
19:53 CyrilPeponnet mount.glusterfs only takes a few options and remount is not one of them.
19:53 RedW joined #gluster
19:54 JoeJulian remount has never worked.
19:54 CyrilPeponnet that was my assumption
19:54 ghenry joined #gluster
19:54 CyrilPeponnet so how do you remount a mount point with new options?
19:54 CyrilPeponnet umount and mount again ?
19:55 JoeJulian And noatime is unnecessary. Gluster wouldn't update atime anyway. That's up to the brick filesystem.
19:55 CyrilPeponnet (got it too but for example valid gluster option)
19:55 JoeJulian Yeah, we've always had to unmount and mount again.
19:55 CyrilPeponnet :9
19:55 CyrilPeponnet :(
19:56 CyrilPeponnet Thanks @JoeJulian for the info.
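Since mount.glusterfs rejects -o remount, the only way to change options is the unmount/mount cycle JoeJulian describes; a minimal sketch with placeholder names:

    # remount is not in mount.glusterfs' option handling, so:
    umount /mnt/gluster
    mount -t glusterfs -o <new-options> server:/vol /mnt/gluster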
19:56 CyrilPeponnet @purpleidea I will try something and make a PR
19:57 purpleidea CyrilPeponnet: awesome :)
19:58 CyrilPeponnet I'm preparing our migration from 3.5 to 3.6, hoping this will help us with our issues
20:00 aaronott joined #gluster
20:01 CyrilPeponnet only caveat is https://docs.puppetlabs.com/references/latest/type.html#mount-attribute-remounts
20:01 glusterbot Title: Type Reference — Documentation — Puppet Labs (at docs.puppetlabs.com)
20:01 CyrilPeponnet unmount then remount is prone to failure
20:05 theron joined #gluster
20:12 CyrilPeponnet @purpleidea testing `remounts => false if $valid_type == "glusterfs",` in mount.pp I'll give you some feedback
20:17 vimal joined #gluster
20:18 CyrilPeponnet looks like my inline is not good :p I'll keep you in touch
20:19 ctria joined #gluster
20:21 aaronott joined #gluster
20:27 shyam joined #gluster
20:38 purpleidea sounds good! :)
20:40 badone_ joined #gluster
20:44 CyrilPeponnet `remounts => "${valid_type}" ? { "glusterfs" => false, default => true },` seems to work
21:01 ctria joined #gluster
21:06 RedW joined #gluster
21:12 jwd joined #gluster
21:12 theron joined #gluster
21:19 victori joined #gluster
21:25 calavera joined #gluster
21:35 abyss^ joined #gluster
21:38 autoditac joined #gluster
21:40 swebb joined #gluster
21:42 autoditac joined #gluster
21:51 RedW joined #gluster
22:04 autoditac joined #gluster
22:10 nzero joined #gluster
22:15 autoditac joined #gluster
22:31 cyberswat joined #gluster
22:43 julim joined #gluster
22:43 plarsen joined #gluster
22:54 Pupeno joined #gluster
22:54 Pupeno joined #gluster
23:04 bennyturns joined #gluster
23:14 cyberswat joined #gluster
23:43 nzero joined #gluster
23:48 gildub joined #gluster
23:59 cyberswat joined #gluster
