
IRC log for #gluster, 2014-12-01


All times shown according to UTC.

Time Nick Message
00:02 andreask joined #gluster
00:11 diegows joined #gluster
00:29 harish joined #gluster
00:39 gildub joined #gluster
01:01 chirino joined #gluster
01:02 chirino joined #gluster
01:06 calisto joined #gluster
01:38 bala joined #gluster
01:56 calisto joined #gluster
01:56 mojibake joined #gluster
02:09 calisto joined #gluster
02:11 haomaiwang joined #gluster
02:17 sputnik13 joined #gluster
02:24 kshlm joined #gluster
02:27 haomaiwa_ joined #gluster
02:31 sputnik13 joined #gluster
02:38 sputnik13 joined #gluster
02:38 haomaiwa_ joined #gluster
02:38 bharata-rao joined #gluster
02:45 haomaiwa_ joined #gluster
02:47 chirino joined #gluster
03:05 haomaiw__ joined #gluster
03:06 chirino joined #gluster
03:08 haomaiwang joined #gluster
03:09 haomaiwang joined #gluster
03:20 glusterbot News from newglusterbugs: [Bug 1153610] libgfapi crashes in glfs_fini for RDMA type volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1153610>
03:23 haomaiw__ joined #gluster
03:24 zerick joined #gluster
03:27 hagarth joined #gluster
03:29 spandit joined #gluster
03:31 haomaiwa_ joined #gluster
03:44 haomaiw__ joined #gluster
03:50 haomaiwa_ joined #gluster
03:55 nbalachandran joined #gluster
03:56 haomaiwa_ joined #gluster
04:03 itisravi joined #gluster
04:10 nbalachandran joined #gluster
04:10 sputnik13 joined #gluster
04:11 unwastable joined #gluster
04:12 unwastable hello
04:12 glusterbot unwastable: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
04:12 haomaiwa_ joined #gluster
04:14 haomaiw__ joined #gluster
04:16 unwastable can anyone help? is it possible the self-heal entries have been stored as a file? (from heal fs info)
04:17 kanagaraj joined #gluster
04:18 sputnik13 joined #gluster
04:18 kumar joined #gluster
04:19 atinmu joined #gluster
04:20 unwastable how are the self-heal entries stored?
04:20 unwastable anyone?
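On replica volumes the pending self-heal entries are tracked as gfid-named index files under each brick's .glusterfs/indices/ directory, and the heal-info output is built from those indexes. A minimal way to look at this (a sketch only; /data/brick1 and myvol are placeholder names):

    # list the per-brick heal index (entries are gfid-named link files)
    ls /data/brick1/.glusterfs/indices/xattrop/
    # summarise what still needs healing, per brick
    gluster volume heal myvol info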
04:21 shubhendu joined #gluster
04:25 haomaiwa_ joined #gluster
04:29 ppai joined #gluster
04:30 haomaiwa_ joined #gluster
04:30 soumya_ joined #gluster
04:32 kdhananjay joined #gluster
04:42 haomaiwang joined #gluster
04:51 haomaiw__ joined #gluster
04:54 haomaiwa_ joined #gluster
04:59 deepakcs joined #gluster
04:59 rafi1 joined #gluster
04:59 jiffin joined #gluster
04:59 ArminderS joined #gluster
05:01 haomaiwang joined #gluster
05:16 haomaiwa_ joined #gluster
05:17 haomaiwang joined #gluster
05:17 meghanam joined #gluster
05:17 meghanam_ joined #gluster
05:19 haomaiw__ joined #gluster
05:22 hagarth joined #gluster
05:29 Humble joined #gluster
05:33 smohan joined #gluster
05:37 bala joined #gluster
05:38 DV joined #gluster
05:39 ArminderS- joined #gluster
05:40 gildub joined #gluster
05:41 ArminderS joined #gluster
05:43 haomaiwang joined #gluster
05:46 aravindavk joined #gluster
05:48 ArminderS- joined #gluster
05:50 haomaiwa_ joined #gluster
05:50 ramteid joined #gluster
05:51 dusmant joined #gluster
05:53 saurabh joined #gluster
05:54 overclk joined #gluster
06:01 sahina joined #gluster
06:01 haomaiwa_ joined #gluster
06:03 poornimag joined #gluster
06:06 haomaiwang joined #gluster
06:07 haomaiw__ joined #gluster
06:16 haomaiwang joined #gluster
06:17 soumya_ joined #gluster
06:18 haomaiwa_ joined #gluster
06:21 glusterbot News from newglusterbugs: [Bug 1169236] Fio (ioengine=gfapi) FS_cached_4k_random_reads fails on gluster v3.6.1 <https://bugzilla.redhat.com/show_bug.cgi?id=1169236>
06:23 dusmant joined #gluster
06:27 haomaiw__ joined #gluster
06:30 haomaiwang joined #gluster
06:36 saurabh joined #gluster
06:36 rgustafs joined #gluster
06:42 msvbhat joined #gluster
06:49 LebedevRI joined #gluster
06:50 dusmant joined #gluster
06:51 kumar joined #gluster
06:51 nbalachandran joined #gluster
06:52 anil joined #gluster
06:59 nishanth joined #gluster
07:03 haomaiwa_ joined #gluster
07:05 dusmant joined #gluster
07:11 ctria joined #gluster
07:21 glusterbot News from newglusterbugs: [Bug 1167580] [USS]: Non root user who has no access to a directory, from NFS mount, is able to access the files under .snaps under that directory <https://bugzilla.redhat.com/show_bug.cgi?id=1167580>
07:21 glusterbot News from resolvedglusterbugs: [Bug 1159248] glupy compilation issue <https://bugzilla.redhat.com/show_bug.cgi?id=1159248>
07:22 nshaikh joined #gluster
07:29 spandit joined #gluster
07:35 ArminderS joined #gluster
07:39 maveric_amitc_ joined #gluster
07:45 ghenry joined #gluster
07:50 dusmant joined #gluster
07:59 rgustafs joined #gluster
08:06 [Enrico] joined #gluster
08:10 mbukatov joined #gluster
08:11 T0aD joined #gluster
08:28 Philambdo joined #gluster
08:32 deniszh joined #gluster
08:36 hybrid512 joined #gluster
08:39 karnan joined #gluster
08:43 fsimonce joined #gluster
08:45 vimal joined #gluster
08:47 sahina joined #gluster
08:47 poornimag joined #gluster
08:50 anil joined #gluster
08:51 dusmant joined #gluster
08:52 shubhendu joined #gluster
08:54 bala joined #gluster
08:56 zerick joined #gluster
09:00 Slashman joined #gluster
09:08 ppai joined #gluster
09:14 marcus_ left #gluster
09:21 shubhendu joined #gluster
09:24 bala joined #gluster
09:25 zerick_ joined #gluster
09:26 ricky-ti1 joined #gluster
09:26 sahina joined #gluster
09:29 Slashman joined #gluster
09:32 zutto left #gluster
09:36 anil joined #gluster
09:37 bjornar joined #gluster
09:46 johndescs_ joined #gluster
09:47 poornimag joined #gluster
09:51 atinmu joined #gluster
09:51 glusterbot News from newglusterbugs: [Bug 1169302] Unable to take Statedump for gfapi applications <https://bugzilla.redhat.com/show_bug.cgi?id=1169302>
09:55 warci joined #gluster
09:56 spandit joined #gluster
10:09 ctria joined #gluster
10:21 glusterbot News from newglusterbugs: [Bug 1094119] Remove replace-brick with data migration support from gluster cli <https://bugzilla.redhat.com/show_bug.cgi?id=1094119>
10:21 glusterbot News from newglusterbugs: [Bug 1065620] in 3.5 hostname resuluation issues <https://bugzilla.redhat.com/show_bug.cgi?id=1065620>
10:23 deepakcs joined #gluster
10:28 dusmant joined #gluster
10:34 warci hi all, how stable is 3.6? I'm running 3.4 now, what's the recommended upgrade path? stick to 3.4? 3.5 / 3.6?
10:43 atinmu joined #gluster
10:51 glusterbot News from newglusterbugs: [Bug 1169320] rmtab file is a bottleneck when lot of clients are accessing a volume through NFS <https://bugzilla.redhat.com/show_bug.cgi?id=1169320>
10:52 glusterbot News from newglusterbugs: [Bug 1166862] rmtab file is a bottleneck when lot of clients are accessing a volume through NFS <https://bugzilla.redhat.com/show_bug.cgi?id=1166862>
10:56 ndevos warci: this question was recently asked on the mailinglist, see http://thread.gmane.org/gmane.comp.file-systems.gluster.user/18213
10:56 Anand_ joined #gluster
11:05 nbalachandran joined #gluster
11:07 karnan joined #gluster
11:07 warci thanks ndevos
11:17 Humble warci, http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.5 http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.6
11:18 Humble warci, keep those links with u :)
11:18 ctria joined #gluster
11:21 spandit joined #gluster
11:21 warci Humble: yeah, those pages I luckily already found :) Upgrade looks nice & easy!
11:21 soumya_ joined #gluster
11:21 Humble warci, \o/
11:22 glusterbot News from newglusterbugs: [Bug 1169331] Geo-replication slave fills up inodes <https://bugzilla.redhat.com/show_bug.cgi?id=1169331>
11:22 Humble warci++
11:22 glusterbot Humble: warci's karma is now 1
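For a replicated volume, the pages Humble links describe a rolling, one-server-at-a-time upgrade; on an RPM-based system the core of it looks roughly like this (a sketch only, assuming yum packaging and a volume named myvol; see the linked guides for release-specific steps such as quota or geo-replication handling):

    # on each server in turn:
    service glusterd stop              # stop the management daemon
    killall glusterfsd glusterfs       # stop brick and self-heal processes
    yum update glusterfs glusterfs-server glusterfs-fuse
    service glusterd start
    gluster volume heal myvol          # let self-heal catch this node up before the next one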
11:28 diegows joined #gluster
11:41 calisto joined #gluster
11:43 harish joined #gluster
12:02 meghanam_ joined #gluster
12:04 meghanam joined #gluster
12:05 DV joined #gluster
12:06 hagarth joined #gluster
12:20 edward1 joined #gluster
12:33 poornimag joined #gluster
12:40 spandit joined #gluster
12:45 dusmant joined #gluster
12:46 ppai joined #gluster
12:50 _Bryan_ joined #gluster
12:52 glusterbot News from newglusterbugs: [Bug 1057295] glusterfs doesn't include firewalld rules <https://bugzilla.redhat.com/show_bug.cgi?id=1057295>
12:56 ricky-ticky1 joined #gluster
12:56 chirino joined #gluster
12:57 smohan_ joined #gluster
12:57 harish joined #gluster
12:57 calisto joined #gluster
13:00 ira joined #gluster
13:04 poornimag joined #gluster
13:17 social joined #gluster
13:22 glusterbot News from newglusterbugs: [Bug 1093217] [RFE] Gluster module (purpleidea) to support HA installations using Pacemaker <https://bugzilla.redhat.com/show_bug.cgi?id=1093217>
13:29 B21956 joined #gluster
13:33 rjoseph joined #gluster
13:36 rgustafs joined #gluster
13:36 nbalachandran joined #gluster
13:44 haomaiwa_ joined #gluster
13:52 plarsen joined #gluster
13:57 mojibake joined #gluster
14:03 rtbt joined #gluster
14:04 rtbt I keep getting the error "Setting extended attributes failed, reason: Operation not permitted." whilst creating a volume inside an OpenVZ container; the host has enabled fuse for me, so I'm unsure what it's moaning about
14:09 virusuy joined #gluster
14:16 rwheeler joined #gluster
14:30 bennyturns joined #gluster
14:31 bala joined #gluster
14:45 _dist joined #gluster
14:47 Fen1 joined #gluster
14:49 UnwashedMeme rtbt: (random guess) Gluster stores metadata in extended attributes on the files; make sure whatever filesystem you are trying to create the brick on is mounted with extended attributes enabled
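A quick way to test the point UnwashedMeme is making is to set and remove a throw-away extended attribute on the intended brick directory before running the volume create (a sketch; /data/brick1 is a placeholder, run as root):

    setfattr -n trusted.test -v works /data/brick1 && echo "trusted xattrs OK"
    getfattr -d -m . -e hex /data/brick1    # dump everything visible, including trusted.*
    setfattr -x trusted.test /data/brick1   # clean up

Gluster writes trusted.* attributes, which need root privileges; a container that filters the relevant capability can refuse them even when ordinary user.* attributes work, which would also show up as "Operation not permitted".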
14:50 julim joined #gluster
14:52 glusterbot News from newglusterbugs: [Bug 1070539] Very slow Samba Directory Listing when many files or sub-directories <https://bugzilla.redhat.com/show_bug.cgi?id=1070539>
14:58 meghanam_ joined #gluster
14:58 plarsen joined #gluster
14:58 meghanam joined #gluster
15:03 plarsen joined #gluster
15:11 coredump joined #gluster
15:20 wushudoin joined #gluster
15:24 soumya_ joined #gluster
15:28 jobewan joined #gluster
15:33 wushudoin joined #gluster
15:35 Peanut left #gluster
15:54 spandit joined #gluster
15:56 Slashman joined #gluster
15:56 soumya_ joined #gluster
16:01 julim joined #gluster
16:03 jbrooks joined #gluster
16:05 lpabon joined #gluster
16:09 PatNarciso Good morning all.
16:31 ArminderS joined #gluster
16:40 rjoseph joined #gluster
16:43 lmickh joined #gluster
16:53 nishanth joined #gluster
16:56 CyrilPeponnet Hi Guys, one quick question: in a replicated mode using three nodes, I'd like to prevent clients from fetching data from one specific node (and give priority to the other two). Is this possible?
16:58 CyrilPeponnet I mean all write operations are replicated, but for read operations only use the two faster nodes
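As far as the stock AFR options go there is no simple "never read from this brick" switch, but reads can be biased; one possibility (a sketch, assuming a volume named myvol whose first brick is one of the two fast nodes, and option availability varies by release) is to pin the read subvolume:

    gluster volume set myvol cluster.read-subvolume myvol-client-0
    # list the read-related AFR options your release actually supports:
    gluster volume set help | grep -i read

Note that this pins reads to a single brick per client rather than spreading them across the two fast ones.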
17:06 bennyturns joined #gluster
17:07 PeterA joined #gluster
17:20 pkoro joined #gluster
17:23 ArminderS what's the best way to sync 2 gluster volumes that sit on different gluster nodes (different geographical locations) but in the same cluster?
17:26 ArminderS I had nodes from both locations in the volume as replicas earlier, but removed that later since it was slowing things down
17:26 ArminderS I can use rsync to keep them synced, but is there any other (better) option available?
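The built-in alternative to rsync for volumes in different locations is geo-replication: an asynchronous, changelog-driven sync from a master volume to a slave volume (normally a slave in a separate trusted pool). A minimal setup looks roughly like this (a sketch; mastervol, slavehost and slavevol are placeholders, and the slave volume must already exist):

    # from a node of the master volume:
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status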
17:39 skippy joined #gluster
17:56 andreask joined #gluster
18:13 rafi1 joined #gluster
18:14 DoctorWedgeworth joined #gluster
18:17 DoctorWedgeworth I'm trying to update from gluster 3.2 to 3.4 on a test server which used to be a brick in a replicated volume. I've removed /etc/gluster* and installed gluster 3.4, then tried to gluster volume create on the old directory and another directory on the same server. It says the first path (or a prefix of it) is already part of a volume. getfattr /exported/files shows nothing, and /exported/files/.glusterfs doesn't exist. What do I need to do?
18:17 glusterbot DoctorWedgeworth: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
18:18 DoctorWedgeworth awesome bot :) but in this case I've just come from the first page and the advice didn't work
18:21 ttkg joined #gluster
18:25 DoctorWedgeworth ah there are some attributes on the directory, just not those ones
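For reference, the attributes that trigger the "path or a prefix of it is already part of a volume" error are trusted.* ones, which a bare getfattr does not print; the cleanup from the page glusterbot linked is roughly (run as root on the old brick, using DoctorWedgeworth's /exported/files path):

    getfattr -d -m . -e hex /exported/files   # shows trusted.* attributes, unlike plain getfattr
    setfattr -x trusted.glusterfs.volume-id /exported/files
    setfattr -x trusted.gfid /exported/files
    rm -rf /exported/files/.glusterfs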
18:30 ildefonso joined #gluster
18:31 XpineX_ joined #gluster
18:36 DoctorWedgeworth the second test brick was an empty directory, how do I make it re-duplicate the content?
18:44 DoctorWedgeworth ah got it
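The usual way to get an empty replica brick repopulated is to kick off a full self-heal and watch the pending entries drain (a sketch, assuming a volume named testvol):

    gluster volume heal testvol full    # crawl the volume and heal everything to the new brick
    gluster volume heal testvol info    # pending entries should shrink as files are copied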
18:53 _Bryan_ joined #gluster
18:56 b0e1 joined #gluster
19:21 rotbeard joined #gluster
19:34 xavih joined #gluster
19:41 sputnik13 joined #gluster
19:42 sputnik13 left #gluster
19:43 coredump I am getting this on my logs:  0-fuse: xlator does not implement release_cbk
19:43 coredump and I get a permission denied error at the same time
19:43 coredump it's a small file (probably 0 size) that is used to check if Cinder (openstack) can write to the share
19:43 coredump but it sometimes fails
19:43 coredump randomly
19:54 _dist joined #gluster
19:54 fghaas joined #gluster
19:55 elico joined #gluster
20:09 mikemol joined #gluster
20:10 tyrok_laptop joined #gluster
20:10 deniszh joined #gluster
20:11 mikemol So, bizarre problem. Context: two bricks on CentOS 6.5 running 3.5.2, one CentOS 7 machine running 3.5.3. When volumes are mounted at boot, an 'ls' on the volume locks up, no errors in the log. When volumes are mounted via a login session, 'ls' works fine.
20:12 mikemol The Cent7 machine is the only one exhibiting this behavior.
20:16 skippy we're getting gluster process locks, such that `gluster volume status ...` fails.
20:16 skippy this in turn causes our Puppet runs to fail. :(
20:16 _dist mikemol: What does a mount say, does it look like it mounted correctly at boot time ?
20:17 SOLDIERz_ joined #gluster
20:18 mikemol Sec
20:19 skippy [2014-12-01 20:19:40.645882] W [socket.c:611:__socket_rwv] 0-management: readv on /var/run/3714f2b1aabf9be7087fc323824b74dd.socket failed (Invalid argument)
20:20 skippy what causes that?
20:20 mikemol And...like an idiot, I hung a terminal session by daring to autocomplete into the mounted volume in question while trying to unmount. :-)
20:21 mikemol fstab: (hostname):/(mountname) on /mnt/(mountpath) type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
20:23 mikemol And while attempting an unmount, I did see this pop up in syslog: Dec  1 15:22:08 compute3 glusterfs: [2014-12-01 20:22:08.904164] C [client-handshake.c:127:rpc_client_ping_timer_expired] 0-extimages-client-1: server (brick ip):49153 has not responded in the last 42 seconds, disconnecting.
20:23 John_HPC joined #gluster
20:23 mikemol Dec  1 15:22:08 compute3 journal: cannot read header '/mnt/extimages/MailStor1.img': Transport endpoint is not connected
20:24 John_HPC I'm trying to yum update and am getting "primary.sqlite.bz2: [Errno -3] Error performing checksum"
20:25 John_HPC nm. fixed by installing python-hashlib
20:25 mikemol Still trying to poke the thing.
20:29 Slashman joined #gluster
20:31 John_HPC blah, I'm getting "error: glusterfs-libs-3.6.1-4.el5: Header V4 RSA/SHA1 signature: BAD, key ID 4ab22bb3" again
20:31 mikemol _dist: So, after finally unmounting the mount point, I was able to try mounting it again. Mount command emitted no error, exit code 0. And mount point works fine.
20:32 mikemol Comes back to this only breaking when mounted at boot. And when mounted at boot, any listing of the mount point hangs.
20:33 _dist mikemol: you're certain your fstab options are identical to your CLI one that works?
20:34 mikemol _dist: We wrote our own unit file after the _netdev approach didn't work at all for us. Our unit file runs the same command as the CLI command, after verifying it can ping at least one brick.
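A hand-rolled unit along the lines mikemol describes might look something like this (purely illustrative; the hostname, volume and mount point are placeholders, and the ExecStartPre ping mirrors the reachability check mentioned above):

    [Unit]
    Description=GlusterFS mount for extimages
    Wants=network-online.target
    After=network-online.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStartPre=/usr/bin/ping -c1 -W5 brick1.example.com
    ExecStart=/usr/bin/mount -t glusterfs brick1.example.com:/extimages /mnt/extimages
    ExecStop=/usr/bin/umount /mnt/extimages

    [Install]
    WantedBy=multi-user.target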
20:39 calum_ joined #gluster
20:53 _dist mikemol: Is the client on one of the servers?
20:54 mikemol _dist: The brick is one physical box, running Cent6. The client is a different physical box, a vm host running Cent7.
20:55 JoeJulian mikemol: Did you check the client log when it fails?
20:56 mikemol Yeah, pasted the log result earlier. Ang on, I'll pastebin a more recent one.
20:56 _dist mikemol: Sorry, just to be clear here. Your CentOS 7 box is the one having trouble; it mounts at boot but that mount isn't usable? What does the /var/log/glusterfs client log show at the time of the ls?
20:57 zerick joined #gluster
20:57 mikemol JoeJulian: https://p.6core.net/p/ABqrQAsHKP77BT7S75twq5CR <-- what is shown when the mountpoint hangs on a listing.
20:58 glusterbot mikemol: <'s karma is now -6
20:58 mikemol _dist: Yes, yes, and paste forthcoming. Hang on.
20:58 JoeJulian single brick volume?
20:59 mikemol Found something that might be the issue, and doing a reboot test. Have to wait for the machine to come back up. systemd might be great for init speeds, but it doesn't help much at all if the hardware takes three minutes to POST...
20:59 fghaas while we are at (somewhat) bizarre problems: if you had a 2x2 distributed/replicated volume that was mistakenly configured as server1:/dir1 server2:/dir1 server2:/dir2 server1:/dir2 (note flipped order of replicas on second distribution bucket), the volume didn't use quorum and ran into split brain and got written to from both sides, would you agree that one should expect really Bad Things To Happen™? (btw JoeJulian: your splitmount utility is wonderful
20:59 mikemol JoeJulian: Two bricks. Both running the same version of gluster, 3.5.2
21:00 JoeJulian Odd. It critically fails when only one brick stops responding. I would have to assume, therefore, that the other brick never connected.
21:00 mikemol OK, found the problem.
21:01 mikemol So, we're not using firewalld. We're using sanewall/firehol. It happens that the sanewall/firehol service came up after firewalld, so once the box was up, we didn't notice anything except a failed mount at boot.
21:01 JoeJulian fghaas: Thanks. I do need to spend some time on it one of these days soon... The method of pulling the volume data was all I could do at the time, but there's a more reliable way now.
21:01 mikemol firewalld, being unconfigured, killed all network traffic until sanewall opened it up. And killing the traffic killed the gluster mounts.
21:02 JoeJulian That would do it.
21:02 JoeJulian Good find.
21:02 mikemol sanewall has been moved to a better place in the init/dependency order, and firewalld has been disabled.
21:02 fghaas JoeJulian: yes, but reliable or not, I'm afraid it won't really help with the split brain / flipped replica issue :)
21:03 mikemol Also, because firewalld doesn't log blocked packets by default (as sanewall does), we were lulled into thinking it wasn't a firewall issue since there wasn't anything in our logs...
21:04 JoeJulian fghaas: No, I would not expect anything worse in that scenario than if they were the way you were expecting. The same two bricks are still part of the replica subvolume, and the split-brain will still only write to the network partition that was reachable. Even if your second pair were in the reverse order, the results would be exactly the same.
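The detail behind JoeJulian's answer is that gluster groups bricks into replica sets strictly in the order they are listed at create time, so swapping server order inside a pair does not change which bricks mirror each other. For example (placeholder paths):

    # bricks 1+2 form the first replica set, bricks 3+4 the second,
    # regardless of which server appears first inside each pair
    gluster volume create myvol replica 2 \
        server1:/bricks/dir1 server2:/bricks/dir1 \
        server2:/bricks/dir2 server1:/bricks/dir2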
21:05 ikke- joined #gluster
21:05 JoeJulian ... and splitmount will still do the job correctly.
21:06 fghaas JoeJulian: I'm seeing a 3.6 volume where there are identically-named files in two distribution brick sets, and the volume mount shows two files of the same name in the same directory, post split brain
21:06 JoeJulian Identically named and same size? Or is one dht subset 0 sized?
21:07 JoeJulian mode 1000
21:07 mikemol And with firewalld disabled, _netdev works. Awesome.
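Where a box manages its own firewall rules, the straightforward fix is the one mikemol describes: stop firewalld from racing the real firewall and make sure that ruleset opens what gluster needs (a sketch; the brick port range depends on how many bricks each server carries):

    systemctl stop firewalld
    systemctl disable firewalld
    # ports a gluster 3.4+ fuse client must reach on each server:
    #   24007/tcp   glusterd management
    #   49152+/tcp  one port per brick (49153 appears in the log above)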
21:07 fghaas identical name, non-identical non-zero size.
21:07 fghaas mode 0600.
21:08 ckotil joined #gluster
21:08 ckotil joined #gluster
21:11 fghaas JoeJulian: so you'd say the above situation is entirely unexpected?
21:12 nage joined #gluster
21:12 JoeJulian unexpected, yes.
21:14 fghaas JoeJulian: woot. :)
21:15 JoeJulian Just a sec... I've got 5 conversations going at once right now and 4 is my limit...
21:15 bene2 joined #gluster
21:17 georgeh joined #gluster
21:18 fghaas JoeJulian: no worries at all. :) The woot was more about my tendency to uncover rather nasty issues in both Ceph *and* Gluster lately :)
21:20 JoeJulian Yeah, I'm doing the same in ceph, and we're still just doing the salt states for deployment. Haven't even gotten in to actually using it yet.
21:24 mikemol_ joined #gluster
21:27 andreask joined #gluster
21:27 tg2 joined #gluster
21:28 poornima joined #gluster
21:29 masterzen_ joined #gluster
21:30 JonathanD joined #gluster
21:32 Bosse__ joined #gluster
21:34 mikemol_ joined #gluster
21:35 failshell joined #gluster
21:36 delhage joined #gluster
22:00 nage joined #gluster
22:03 tyrok_laptop left #gluster
22:04 badone joined #gluster
22:09 zerick joined #gluster
22:13 nage joined #gluster
22:15 _dist joined #gluster
22:16 buhman joined #gluster
22:21 badone joined #gluster
22:29 uebera|| joined #gluster
22:30 rtbt joined #gluster
22:31 DV joined #gluster
22:31 rotbeard joined #gluster
22:35 siel joined #gluster
22:35 meghanam joined #gluster
22:35 meghanam_ joined #gluster
22:36 fghaas left #gluster
22:36 johnnytran joined #gluster
22:40 Telsin joined #gluster
23:34 harish joined #gluster
23:53 gildub joined #gluster
