
IRC log for #gluster, 2016-05-17


All times shown according to UTC.

Time Nick Message
00:00 haomaiwang joined #gluster
00:26 F2Knight joined #gluster
00:39 badone_ joined #gluster
00:44 fubada joined #gluster
00:54 EinstCrazy joined #gluster
01:25 F2Knight joined #gluster
01:26 atrius joined #gluster
01:28 Lee1092 joined #gluster
01:29 haomaiwang joined #gluster
01:59 luizcpg joined #gluster
02:01 haomaiwang joined #gluster
02:13 haomaiwang joined #gluster
02:13 harish joined #gluster
02:24 csaba joined #gluster
03:01 haomaiwang joined #gluster
03:03 luizcpg joined #gluster
03:10 EinstCra_ joined #gluster
03:16 raghug joined #gluster
03:24 ramteid joined #gluster
03:26 aspandey joined #gluster
03:29 Gnomethrower joined #gluster
03:34 amye joined #gluster
03:45 itisravi joined #gluster
03:47 nbalacha joined #gluster
03:51 atinm joined #gluster
04:01 haomaiwang joined #gluster
04:11 nehar joined #gluster
04:11 RameshN joined #gluster
04:12 Anonimo_UK joined #gluster
04:15 shubhendu joined #gluster
04:18 hagarth joined #gluster
04:24 vshankar joined #gluster
04:28 sakshi joined #gluster
04:29 amye joined #gluster
04:33 jiffin joined #gluster
04:37 mchangir joined #gluster
04:39 aravindavk joined #gluster
04:41 nishanth joined #gluster
04:42 Apeksha joined #gluster
04:42 gem joined #gluster
04:45 kdhananjay joined #gluster
04:45 jiffin joined #gluster
04:47 karthik___ joined #gluster
04:49 hgowtham joined #gluster
04:56 jiffin joined #gluster
04:57 ramteid joined #gluster
05:01 haomaiwang joined #gluster
05:01 prasanth joined #gluster
05:04 ndarshan joined #gluster
05:09 jiffin joined #gluster
05:12 poornimag joined #gluster
05:14 jiffin1 joined #gluster
05:15 jiffin joined #gluster
05:17 hchiramm joined #gluster
05:29 jiffin joined #gluster
05:32 Bhaskarakiran joined #gluster
05:33 ppai joined #gluster
05:34 aravindavk joined #gluster
05:39 ramky joined #gluster
05:42 kotreshhr joined #gluster
05:45 jiffin joined #gluster
05:47 aspandey joined #gluster
05:50 jiffin joined #gluster
05:51 spalai joined #gluster
05:58 nbalacha joined #gluster
06:01 skoduri joined #gluster
06:01 haomaiwang joined #gluster
06:05 kovshenin joined #gluster
06:05 robb_nl joined #gluster
06:12 Saravanakmr joined #gluster
06:17 d0nn1e joined #gluster
06:18 jtux joined #gluster
06:25 jiffin joined #gluster
06:31 nishanth joined #gluster
06:33 ppai joined #gluster
06:33 nbalacha joined #gluster
06:35 RameshN joined #gluster
06:45 atalur joined #gluster
06:45 karnan joined #gluster
06:47 nishanth joined #gluster
06:57 karnan joined #gluster
06:58 level7 joined #gluster
06:58 anil joined #gluster
07:01 haomaiwang joined #gluster
07:05 tessier Ah..my glusterfsd has been segfaulting and generating core dumps for months and I never noticed. That's why the volumes sometimes go offline.
07:06 karnan joined #gluster
07:13 pur joined #gluster
07:17 jtux joined #gluster
07:18 fsimonce joined #gluster
07:21 ivan_rossi joined #gluster
07:33 MikeLupe joined #gluster
07:34 jiffin joined #gluster
07:34 haomaiwang joined #gluster
07:35 ctria joined #gluster
07:42 ctria joined #gluster
07:44 rastar joined #gluster
07:50 TvL2386 joined #gluster
07:53 micke joined #gluster
07:55 ahino joined #gluster
08:01 haomaiwang joined #gluster
08:09 Biopandemic joined #gluster
08:12 harish_ joined #gluster
08:16 hackman joined #gluster
08:16 rauchrob joined #gluster
08:23 Slashman joined #gluster
08:34 level7 joined #gluster
08:39 post-factum tessier: what gluster version do you use?
08:39 aspandey joined #gluster
08:41 robb_nl joined #gluster
08:41 Debloper joined #gluster
08:42 haomaiwang joined #gluster
08:43 atinm joined #gluster
08:44 nishanth joined #gluster
08:46 atalur joined #gluster
09:01 haomaiwang joined #gluster
09:20 karthik___ joined #gluster
09:24 DV_ joined #gluster
09:29 aspandey_ joined #gluster
09:31 itisravi joined #gluster
09:32 kdhananjay joined #gluster
09:32 hchiramm amye++ Thanks a lot!!
09:32 glusterbot hchiramm: amye's karma is now 4
09:32 amye Yay! Progress on docs!!
09:35 hchiramm amye, Awesome, the build passed
09:35 hchiramm you rock :)
09:36 amye Awesome! let's see if people agree, but the build is good.
09:36 hchiramm indeed .
09:37 rastar joined #gluster
09:42 amye joined #gluster
09:50 atalur joined #gluster
09:51 ndarshan joined #gluster
09:54 ws2k3 joined #gluster
09:57 level7 joined #gluster
10:00 Bhaskarakiran joined #gluster
10:01 haomaiwang joined #gluster
10:07 Gnomethrower joined #gluster
10:12 Debloper1 joined #gluster
10:16 nbalacha joined #gluster
10:26 yosafbridge joined #gluster
10:29 rauchrob When running `gluster volume set <my-volume> nfs.rpc-auth-allow 192.168.0.0/24`, I get the following error: "volume set: failed: One of the client 192.168.0.10:1019 is running with op-version 30703 and doesn't support the required op-version 30707. This client needs to be upgraded or disconnected before running this command again".
10:29 glusterbot rauchrob: set the desired op-version using ''gluster volume set all cluster.op-version $desired_op_version''.
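A minimal sketch of checking and raising the op-version along the lines glusterbot suggests; the target value 30707 is taken from the error above, and every server and client must already support it before it is raised:
    # operating version of the local glusterd (3.7.x records it here)
    grep operating-version /var/lib/glusterd/glusterd.info
    # raise the cluster-wide op-version once all peers and clients support it
    gluster volume set all cluster.op-version 30707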
10:29 Debloper joined #gluster
10:30 B21956 joined #gluster
10:44 hagarth joined #gluster
10:46 atinm joined #gluster
10:49 nbalacha joined #gluster
10:50 Debloper joined #gluster
10:51 ira joined #gluster
10:53 johnmilton joined #gluster
10:56 rastar joined #gluster
10:56 yosafbridge joined #gluster
11:01 arcolife joined #gluster
11:13 kovshenin joined #gluster
11:15 rauchrob joined #gluster
11:16 aravindavk joined #gluster
11:23 chirino joined #gluster
11:26 rauchrob joined #gluster
11:26 rauchrob @glusterbot: $desired_op_version would be 30703 in my case?
11:28 dlambrig joined #gluster
11:40 luizcpg joined #gluster
11:46 DV__ joined #gluster
11:59 kotreshhr left #gluster
12:05 julim joined #gluster
12:05 raghug joined #gluster
12:07 atalur rauchrob, was your problem resolved?
12:07 atalur rauchrob, the one with op version
12:18 uosiu joined #gluster
12:19 uosiu Hi all, I'm looking for info: can I create a replicated volume with repl-factor=2 and a single brick at the beginning?
12:20 uosiu the quick start guide shows all the steps with two nodes in the pool
12:21 nbalacha joined #gluster
12:22 Ulrar So I read about bitrot detection, that sounds cool but I can't find a way to enable it. Has it actually been developed, or is it just planned?
12:26 itisravi uosiu: no, if replica count=2, you will need 2 bricks for the volume creation command to be successful.
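As a sketch of what itisravi describes (hostnames and brick paths here are made up), a replica 2 volume is created with both bricks up front:
    gluster peer probe server2
    gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1
    gluster volume start myvol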
12:36 itisravi Ulrar: The feature is already developed. http://gluster-documentations.readthedocs.io/en/latest/Features/bitrot-docs/ has details, but I see that the RHS admin guide https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/pdf/Administration_Guide/Red_Hat_Storage-3.1-Administration_Guide-en-US.pdf has details that a user might be interested in.
12:36 glusterbot Title: bitrot docs - Gluster Docs (at gluster-documentations.readthedocs.io)
12:36 itisravi Ulrar: See section 19 in the pdf.
12:37 ppai joined #gluster
12:39 DV joined #gluster
12:45 EinstCrazy joined #gluster
12:45 Ulrar thanks !
12:48 rauchrob @atalur: I have not applied the suggested workaround yet, because it's not perfectly clear to me what's going on here
12:49 rauchrob @atalur: the gluster installation in question is backing an oVirt instance with quite a few production VMs
12:53 rauchrob @atalur: The point is, I want to update the `nfs.rpc-auth-allow` volume option without disturbing the current NFS clients
12:56 Ulrar So should I enable the bitrot daemon ? It says scrubbing has a performance hit, but does the daemon have one too ? I'm guessing signing the files doesn't come free
12:56 Ulrar I am interested because the raid card of a server died and I had corruption of the VM files before that
12:56 hi11111 joined #gluster
12:56 Ulrar I thought it was glusterFS's fault but a few days later the raid card just died completely
12:56 Ulrar Would have been nice to prevent the vm files from getting corrupted
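A rough sketch of the bitrot commands covered by the docs itisravi links above (volume name assumed); the scrub throttle and frequency are the main knobs for the performance impact Ulrar asks about:
    gluster volume bitrot myvol enable               # starts the signing (bitd) and scrub daemons
    gluster volume bitrot myvol scrub-throttle lazy  # lazy | normal | aggressive
    gluster volume bitrot myvol scrub-frequency weekly
    gluster volume bitrot myvol scrub status         # scrub report, including corrupted objects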
13:00 shaunm joined #gluster
13:04 johnmilton joined #gluster
13:05 plarsen joined #gluster
13:09 luizcpg joined #gluster
13:11 mchangir joined #gluster
13:17 skoduri joined #gluster
13:28 skoduri_ joined #gluster
13:32 jobewan joined #gluster
13:32 RameshN joined #gluster
13:34 Philambdo1 joined #gluster
13:34 hagarth joined #gluster
13:44 plarsen joined #gluster
13:48 mpietersen joined #gluster
13:48 johnmilton joined #gluster
13:48 chirino joined #gluster
13:53 johnmilton joined #gluster
13:57 amye joined #gluster
13:58 shyam joined #gluster
14:03 fsimonce joined #gluster
14:07 kovshenin joined #gluster
14:20 arcolife joined #gluster
14:23 dlambrig joined #gluster
14:33 mchangir joined #gluster
15:06 wushudoin joined #gluster
15:09 fsimonce joined #gluster
15:10 wushudoin| joined #gluster
15:11 kpease joined #gluster
15:21 klaxa joined #gluster
15:21 haomaiwang joined #gluster
15:24 vshankar joined #gluster
15:31 skylar joined #gluster
15:33 ppai joined #gluster
15:34 haomaiwang joined #gluster
15:38 tyler274 So I'm getting a "Remote I/O error" whenever I attempt to write to a NFS mounted directory
15:38 tyler274 like to https://bugzilla.redhat.com/show_bug.cgi?id=1238318
15:38 glusterbot Bug 1238318: medium, unspecified, ---, ndevos, ASSIGNED , NFS mount throws Remote I/O error
15:38 tyler274 except it persists even if I'm not mounting over localhost
15:39 JoeJulian Check logs for clues.
15:39 tyler274 @JoeJulian /var/log/gluster/nfs.log?
15:39 JoeJulian jinx
15:39 JoeJulian gah
15:39 tyler274 nothing on the client end :/
15:39 JoeJulian stupid starting slash... :)
15:40 tyler274 https://www.irccloud.com/pastebin/oXz3fngC/
15:40 glusterbot Title: Pastebin | IRCCloud (at www.irccloud.com)
15:40 tyler274 the client has that, but several minutes old
15:41 JoeJulian Any split-brain listed in "gluster volume heal $vol info"?
15:41 JoeJulian ndevos ^
15:43 tyler274 just a bunch of gfids
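For reference, the heal queries being discussed, with the volume name assumed:
    gluster volume heal myvol info              # all entries pending heal (paths or gfids)
    gluster volume heal myvol info split-brain  # only the entries actually in split-brain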
15:43 tyler274 although if I point the mount to a different peer it works
15:43 tyler274 both peers have the same iptables configs
15:44 tyler274 although the one having issues, as well as if i point to localhost, is running RHEL7 while the working one is running Arch
15:44 JoeJulian What if you mount the rhel 7 nfs on arch?
15:45 tyler274 Remote I/O error
15:45 JoeJulian Maybe you should just run all arch ;)
15:46 tyler274 @JoeJulian I'm actually planning to switch off of arch as I had a kernel upgrade cause all kinds of headaches
15:46 JoeJulian hmm
15:46 JoeJulian Well, that's what CI testing is for. What kind of headaches?
15:47 tyler274 nothing related to gluster specifically, SSSD/LDAP issues, bash segfaulting like a mofo, etc.
15:48 JoeJulian Try a `netstat -tlnp` and make sure it's just gluster listening for nfs.
15:48 tyler274 https://www.irccloud.com/pastebin/IOXWPjvO/
15:48 glusterbot Title: Pastebin | IRCCloud (at www.irccloud.com)
15:48 JoeJulian I'm not an sssd fan anyway. The bash crashes were kernel related?
15:49 JoeJulian @ports
15:49 glusterbot JoeJulian: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up. All ports must be reachable by both servers and clients. Additionally it will listen on 38465-38468/tcp for NFS. NFS also depends on rpcbind/portmap ports 111 and 2049.
15:50 JoeJulian yep
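If the firewall were the problem, a hedged example of opening the ports glusterbot lists, using firewalld syntax as on RHEL 7 (the brick range is an assumption: one port per brick from 49152 up):
    firewall-cmd --permanent --add-port=24007-24008/tcp   # glusterd management (+ rdma)
    firewall-cmd --permanent --add-port=49152-49251/tcp   # bricks (glusterfsd)
    firewall-cmd --permanent --add-port=38465-38468/tcp   # gluster NFS
    firewall-cmd --permanent --add-port=111/tcp --add-port=111/udp --add-port=2049/tcp   # rpcbind/portmap + NFS
    firewall-cmd --reload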
15:50 tyler274 @JoeJulian I believe they are related, although the nfs issue is affecting the other arch peers which are having the same problems if they attempt to hit my main arch box or the rhel box for the nfs mount
15:51 tyler274 but not the upgrade issues because I made sure not to upgrade them
15:51 JoeJulian We need to finish getting our arch archive up and public. Daily snapshots of the repo for versioned deployments.
15:52 tyler274 hmm interesting, if I tell one of my other arch peers to connect to itself for the nfs mount it works as expected
15:53 tyler274 but if it hits the not-storing-anything arch head node, or the node running rhel I get the i/o error
15:53 skoduri_ joined #gluster
15:53 JoeJulian oh, wait... rhel... selinux?
15:53 tyler274 ....
15:53 tyler274 that could be it
15:54 tyler274 does gluster require selinux disables?
15:54 JoeJulian It /shouldn't/ but I imagine you would have to tag the brick directory appropriately.
15:55 tyler274 explain what you mean by tagging, I haven't interacted with selinux much
15:55 JoeJulian If I ever suggest "setenforce 0" my friend Major Hayden would probably ensure I never get one of his T-Shirts.
15:58 JoeJulian https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/SELinux_Users_and_Administrators_Guide/sect-Managing_Confined_Services-glusterFS-Configuration_Examples.html
15:58 glusterbot Title: 27.4. Configuration Examples (at access.redhat.com)
15:59 tyler274 ah the zpool is unlabeled
15:59 mayqui joined #gluster
16:00 mayqui Hi people,
16:01 JoeJulian @learn selinux as Be sure to set your brick context, ie. semanage fcontext -a -t glusterd_brick_t $brick_root; restorecon -Rv $brick_root
16:01 mayqui I am receiving a message "Server and Client lk-version numbers are not same, reopening the fds", and I am using the same version on both servers.
16:01 glusterbot JoeJulian: The operation succeeded.
16:01 glusterbot mayqui: This is normal behavior and can safely be ignored.
16:02 JoeJulian Think about the context of that version. "Server and client lock version"... that's the version of the lock tables between them that are (expectedly) out of sync.
16:03 mayqui I am using glusterfsd 3.6.9 in ubuntu 14.04 and debian 8
16:04 mayqui the ubuntu 14.04 servers are the gluster servers and the debian 8 server is the gluster client
16:07 tyler274 and now I wait for the contexts to reload
16:10 haomaiwang joined #gluster
16:12 atinm joined #gluster
16:18 tyler274 @JoeJulian atm I have a zfs pool at /aleph-pool which contains the bricks for my volumes [important, vital] as /aleph-pool/important/brick1, /aleph-pool/important/brick2, /aleph-pool/vital/brick1, etc. After I finish the restorecon on /aleph-pool will I need to do anything else regarding selinux for the bricks in the future?
16:19 mchangir joined #gluster
16:19 tyler274 Or will the context persist to the new files going forward
16:19 JoeJulian It's persistent.
16:20 JoeJulian The semanage command added it to the managed context table and restorecon is applying it from that table to the filesystem.
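Applied to the /aleph-pool layout tyler274 describes, JoeJulian's factoid works out to roughly the following (the "(/.*)?" suffix is the usual semanage regex for covering everything under the path):
    semanage fcontext -a -t glusterd_brick_t "/aleph-pool(/.*)?"
    restorecon -Rv /aleph-pool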
16:21 tyler274 any comments on https://wiki.centos.org/HowTos/GlusterFSonCentOS#head-9f8a6dad9353bbab9dad177633526a5f15a6dccf section 6.6, it seems to imply a different fix
16:21 glusterbot Title: HowTos/GlusterFSonCentOS - CentOS Wiki (at wiki.centos.org)
16:22 cornfed78 joined #gluster
16:23 cornfed78 Hi all.. Noticing a weird du/df issue with glusterfs-3.7.11-1.el7.x86_64
16:23 cornfed78 If I mount a volume over NFS, and create a 4GB file:
16:23 cornfed78 time dd if=/dev/zero of=testfile bs=1M count=4096
16:23 cornfed78 du reports 4GB used, but df shows 8GB
16:23 JoeJulian tyler274: That's the braindead solution. Disable selinux.
16:23 cornfed78 If I mount it using the gluster client, it works correctly
16:24 cornfed78 the brick itself is also reporting 8GB used
16:24 cornfed78 when created with NFS
16:24 JoeJulian What's du --apparent show?
16:24 cornfed78 the same, ~4GB
16:24 cornfed78 it's df that seems to be overreporting
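For reference, a sketch of the comparison being made (mount points are assumptions):
    cd /mnt/nfs                        # the volume mounted over NFS
    du -sh --apparent-size testfile    # logical size of the file
    du -sh testfile                    # blocks actually allocated
    df -h .                            # usage as seen through the NFS mount
    df -h /mnt/fuse                    # same volume via the fuse client, for comparison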
16:24 JoeJulian @pasteinfo
16:25 glusterbot JoeJulian: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
16:26 cornfed78 https://paste.fedoraproject.org/367640/63502366/
16:26 glusterbot Title: #367640 Fedora Project Pastebin (at paste.fedoraproject.org)
16:26 cornfed78 though, now it looks like it went down to 4gb..
16:26 cornfed78 is there a delay when using replicated servers?
16:26 JoeJulian Shouldn't be, no.
16:27 cornfed78 https://paste.fedoraproject.org/367641/35024711/
16:27 glusterbot Title: #367641 Fedora Project Pastebin (at paste.fedoraproject.org)
16:28 arcolife joined #gluster
16:28 luizcpg joined #gluster
16:29 JoeJulian Oh, I didn't notice this was nfs mounted. Do you see the same thing with a fuse mount?
16:29 cornfed78 no
16:30 cornfed78 works as expected with a fuse mount
16:30 cornfed78 I am, though, seeing a discrepancy between the two servers
16:30 JoeJulian Please file a bug report. Looks like an nfs bug, hopefully it's a kernel bug instead of a gluster bug. ;)
16:30 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
16:31 JoeJulian I wonder if you can get nfs to show more allocated than there is available.
16:31 JoeJulian That would be cool!
16:31 cornfed78 https://paste.fedoraproject.org/367642/46350268/
16:31 glusterbot Title: #367642 Fedora Project Pastebin (at paste.fedoraproject.org)
16:31 cornfed78 OK
16:32 cornfed78 I'll file one :)
16:32 cornfed78 (after lunch) ;)
16:32 JoeJulian Wait, that's not right.
16:33 JoeJulian du -ah --apparent .glusterfs/91/63 on both
16:37 ivan_rossi left #gluster
16:39 kpease joined #gluster
16:47 ahino joined #gluster
16:56 hackman joined #gluster
17:00 plarsen joined #gluster
17:01 haomaiwang joined #gluster
17:13 Wojtek joined #gluster
17:14 rjoseph joined #gluster
17:20 F2Knight joined #gluster
17:23 Wojtek joined #gluster
17:32 cornfed78 both report 4.1 G
17:32 cornfed78 but the df now also reports 4.1G
17:34 JoeJulian Maybe something was deleted but still open? Not enough information to speculate.
17:34 cornfed78 yeah.. seems like there's a delay, either on the rm or on the replication
17:34 cornfed78 well, the rm seems to be immediate
17:35 JoeJulian Well, there's no delay in gluster. fops are synchronous.
17:35 dlambrig joined #gluster
17:35 cornfed78 i'll poke around more..
17:35 cornfed78 both machines are VMs on the same server, and I have a dedicated disk image for each brick, but they're on the same filesystem on the host
17:36 cornfed78 i wouldn't imagine that would make a difference, but I can't rule it out
17:37 JoeJulian No, the vm doesn't even know about the host's filesystem.
17:37 cornfed78 right..
17:37 cornfed78 unless something's blocking the disk image on the host or something?
17:37 nathwill joined #gluster
17:38 JoeJulian You'd probably get a hang (or timeout and drop to read-only) in the guest.
17:38 cornfed78 good point..
17:38 cornfed78 hm.. weird that it reports 2x space use for a while, then "fixes itself"
17:41 unclemarc joined #gluster
17:45 karnan joined #gluster
17:50 Wojtek joined #gluster
17:56 rwheeler joined #gluster
17:57 shubhendu joined #gluster
18:01 haomaiwang joined #gluster
18:08 ilbot3 joined #gluster
18:08 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
18:11 plarsen joined #gluster
18:29 tyler274 @JoeJulian should the selinux change be spending so much time going over the files in the  .glusterfs folder in each brick?
18:29 tyler274 it's been going for about 2 hours now
18:33 rafi joined #gluster
18:45 squizzi_ joined #gluster
18:53 XpineX joined #gluster
18:54 chirino joined #gluster
18:57 nage joined #gluster
19:01 haomaiwang joined #gluster
19:16 d0nn1e joined #gluster
19:21 Slashman joined #gluster
19:32 klaxa joined #gluster
19:35 dlambrig joined #gluster
20:00 Slashman joined #gluster
20:01 haomaiwang joined #gluster
20:08 tom[] i have 3.4 servers replicating over 3 nodes on ubuntu 14.04. can i add a couple of hosts with 3.7 to the cluster?
20:29 jbrooks joined #gluster
20:53 rwheeler joined #gluster
20:59 kovshenin joined #gluster
21:01 haomaiwang joined #gluster
21:27 nathwill joined #gluster
21:39 Slashman joined #gluster
21:55 johnmilton joined #gluster
21:58 hybrid512 joined #gluster
22:01 johnmilton joined #gluster
22:01 haomaiwang joined #gluster
22:18 johnmilton joined #gluster
22:36 ira joined #gluster
22:52 plarsen joined #gluster
23:01 haomaiwang joined #gluster
23:08 bluenemo joined #gluster
23:34 mpietersen joined #gluster
23:36 cliluw joined #gluster
23:52 kovshenin joined #gluster
