
IRC log for #gluster, 2014-10-16


All times shown according to UTC.

Time Nick Message
00:07 JoeJulian that's what she said...
00:11 Telsin it sure was. and now I can't even think of what that typo was for in the first place ;)
00:12 xavih joined #gluster
00:13 calum_ joined #gluster
00:14 _Bryan_ joined #gluster
00:16 B21956 joined #gluster
00:23 * semiosis starts making deps
00:23 semiosis finally, after much delay :|
00:23 semiosis s/deps/debs/
00:23 glusterbot What semiosis meant to say was: * semiosis starts making debs
00:36 semiosis joined #gluster
00:36 semiosis joined #gluster
00:39 xavih joined #gluster
00:44 calisto joined #gluster
00:52 justinmburrous joined #gluster
00:53 SOLDIERz_ joined #gluster
00:58 _dist joined #gluster
01:01 _dist I experienced something troubling last week with libgfapi. It looks like when I took a brick offline, libgfapi must have still been connected to that brick, so it caused the 30 second outage to happen
01:02 _dist I'm looking to test this, but I'd like to know how it's supposed to work. Does the init.d shutdown tell the other servers "hey guys, I'm leaving" and maybe that didn't work correctly? Or does libgfapi renegotiate at a fixed interval, like every 10 min or something
01:07 semiosis _dist: afaik, should be the same with libgfapi as with a regular fuse client, which is that the client will hang for ping-timeout if a brick disappears abruptly (no response to packets) such as when a power cord or ethernet cable is pulled. however, if the brick sends the client a TCP RST then the client doesn't wait for ping-timeout, just marks the brick gone & continues immediately -- this happens when the brick daemon is killed with SIGTERM (kill cmd default)
01:07 semiosis during system shutdown procs get a TERM, but if the network interface has already been stopped then that wouldn't matter
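A quick way to inspect and tune the timeout semiosis is describing (a sketch; VOLNAME is a placeholder, and 42 seconds is the GlusterFS default):
    gluster volume info VOLNAME                           # reconfigured options are listed here if ping-timeout was ever changed
    gluster volume set VOLNAME network.ping-timeout 10    # shorten the hang window while testing; setting it too low risks spurious disconnects
    gluster volume set VOLNAME network.ping-timeout 42    # back to the default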
01:07 _dist semiosis: I did a graceful shutdown, debian
01:08 semiosis hmm
01:08 JoeJulian If your shutdown order is wrong, the network may be stopped before gluster...
01:09 semiosis in my experience, with ubuntu precise, a graceful restart of a server (ec2 vm) does not cause clients to hang for ping-timeout, fwiw
01:09 _dist JoeJulian: that would explain it, perhaps the requirements are set wrong; I'll take a look now
01:09 _dist semiosis: debian wheezy
01:09 semiosis idk if debian, or perhaps your hardware, would be different
01:10 _dist "# Required-Stop:     $local_fs $remote_fs $network zfs-mount"
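For context, Debian's dependency-based boot derives shutdown order from LSB headers like the one _dist quotes: listing $network in Required-Stop tells insserv the network must still be up when the service is stopped, so the brick daemons receive their SIGTERM (and send clients a TCP RST) before the interface goes down. A minimal sketch of such a header, with illustrative values rather than the stock Debian glusterfs-server script:
    ### BEGIN INIT INFO
    # Provides:          glusterfs-server
    # Required-Start:    $local_fs $network
    # Required-Stop:     $local_fs $network
    # Default-Start:     2 3 4 5
    # Default-Stop:      0 1 6
    # Short-Description: GlusterFS brick and management daemons
    ### END INIT INFO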
01:10 semiosis JoeJulian: now that you're on ubuntu, what's your experience with this?  ever had a graceful shutdown cause a ping-timeout?  or do you take precautions to prevent that?
01:11 JoeJulian I never shut down.
01:11 _dist JoeJulian: how would you do a kernel upgrade? :)
01:11 JoeJulian We've literally never shut down a storage server since I started in June.
01:12 JoeJulian kernels are pinned.
01:12 JoeJulian I've brought several online...
01:12 semiosis JoeJulian: well, let me know how that goes, if you think of it
01:12 _dist JoeJulian: I think that solution is pretty extreme but I can see how it would work
01:12 JoeJulian The thing is... they're only going to be shut down once I've moved bricks off of them.
01:13 JoeJulian I always picture her when I think of debian wheezy... http://cdn.stripersonline.com/a/ad/1000x1000px-LL-ad861ee6_65DA7FEB472B142B42F508-Large.jpeg
01:13 JoeJulian (and yes, it's safe for work)
01:14 _dist well, I can test this, watch it with iftop after verifying only my test VM is running off it
01:15 _dist :)
01:16 JoeJulian Weezy Jefferson ... but my parents used to watch it when I was growing up so that's what I think of when I hear Wheezy. In fact, it wasn't until I just started searching for that picture that I realized it wasn't spelled that way.
01:17 semiosis okay, thx for that
01:17 JoeJulian I always wondered why he called her Wheezy. I thought it had to do with smoking.
01:17 JoeJulian :P
01:18 semiosis woo 3.4.6beta1 & 3.5.3beta1 for ubuntu trusty built successfully (in new qa PPAs)
01:20 _dist can anyone think of a simple one-line cli to match netstat output to the pids of VMs? the only things I've come up with are convoluted
01:21 semiosis grrr 3.4.6beta1 build failed on wheezy... collect2: error: ld returned 1 exit status
01:21 semiosis ugh
01:21 semiosis _dist: i wouldn't expect a VM to show up in netstat on the hypervisor, if that's what you mean
01:22 semiosis different kernel
01:23 _dist semiosis: on my hypervisor it should; libgfapi will have an outbound connection, but figuring out to what host/port is a two-step process
01:24 semiosis ok, then i'm confused
01:25 _dist see, what I think is happening is: when I do compute migration, the stop/c doesn't renegotiate libgfapi (why would it, right), but then if I take that gluster server down... I think I'm just going to have to test it :)
01:25 harish joined #gluster
01:28 _dist semiosis: yeah, most setups aren't simple. I think the problem will be something close to what JoeJulian guessed: either the network going down first, or the service going down without telling the other servers
01:31 dist_ joined #gluster
01:35 _dist it looks like I don't know enough about libgfapi's connections yet to figure this out
01:36 semiosis should be same as fuse client
01:37 semiosis woo, recreated pbuilder base and the build error disappeared \o/
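Recreating a pbuilder base roughly looks like this (a sketch; the basetgz path and .dsc filename are placeholders, not semiosis' actual build setup):
    sudo pbuilder --create --distribution wheezy --basetgz /var/cache/pbuilder/wheezy-base.tgz     # fresh chroot tarball for wheezy
    sudo pbuilder --build --basetgz /var/cache/pbuilder/wheezy-base.tgz glusterfs_3.4.6beta1-1.dsc  # rebuild the package inside it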
01:37 sputnik13 joined #gluster
01:38 _dist semiosis: sure, but a single VM is using 5 connections, if one isn't "the boss" then a server going down should never interrupt disk IO, but it did
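One rough answer to _dist's earlier netstat question, assuming qemu/KVM guests and the iproute2 ss tool (the port pattern reflects GlusterFS defaults, 24007 for glusterd and 49152+ for bricks; run as root so pids are visible):
    ss -tnp | grep qemu    # every established TCP connection owned by a qemu process, with pid and peer host:port
    for pid in $(pgrep qemu); do echo "== VM pid $pid =="; ss -tnp | grep "pid=$pid," | grep -E ':(24007|4915[0-9])'; done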
01:46 haomaiwang joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/ | GlusterFS 3.6 test week - https://public.pad.fsfe.org/p/GlusterFS-3.6-test-doc
01:54 semiosis hmm, looks like glfsheal is missing in 3.6.0
02:00 diegows joined #gluster
02:00 haomai___ joined #gluster
02:08 semiosis @later tell kkeithley did you make any changes to the debian/ folder besides version bumps in the QA builds you made recently?
02:09 glusterbot semiosis: The operation succeeded.
02:09 semiosis woo.  packages published for trusty & wheezy of 3.4.6beta1, 3.5.3beta1, and 3.6.0beta3!
02:10 semiosis @ppa
02:10 glusterbot semiosis: The official glusterfs packages for Ubuntu are available here: 3.4 stable: http://goo.gl/M9CXF8 -- 3.5 stable: http://goo.gl/6HBwKh -- QEMU with GlusterFS support: http://goo.gl/e8IHnQ (3.4) & http://goo.gl/tIJziO (3.5)
02:13 semiosis @forget ppa
02:13 glusterbot semiosis: The operation succeeded.
02:13 semiosis @learn ppa as The official glusterfs packages for Ubuntu are available here: STABLE: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh -- QA: 3.4: http://goo.gl/B2x59y 3.5: http://goo.gl/RJgJvV 3.6: http://goo.gl/ncyln5 -- QEMU with GlusterFS support: http://goo.gl/e8IHnQ (3.4) & http://goo.gl/tIJziO (3.5)
02:13 glusterbot semiosis: The operation succeeded.
02:13 semiosis @ppa
02:13 glusterbot semiosis: The official glusterfs packages for Ubuntu are available here: STABLE: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh -- QA: 3.4: http://goo.gl/B2x59y 3.5: http://goo.gl/RJgJvV 3.6: http://goo.gl/ncyln5 -- QEMU with GlusterFS support: http://goo.gl/e8IHnQ (3.4) & http://goo.gl/tIJziO (3.5)
02:15 * semiosis out
02:23 justinmburrous joined #gluster
02:28 kshlm joined #gluster
02:33 calisto1 joined #gluster
02:41 SOLDIERz_ joined #gluster
03:08 xavih joined #gluster
03:17 justinmburrous joined #gluster
03:23 ppai joined #gluster
03:27 rejy joined #gluster
03:33 SOLDIERz_ joined #gluster
03:33 shubhendu_ joined #gluster
03:38 kshlm joined #gluster
03:47 justinmburrous joined #gluster
03:51 bharata-rao joined #gluster
03:52 itisravi joined #gluster
03:52 kdhananjay joined #gluster
03:56 kumar joined #gluster
04:01 rwheeler joined #gluster
04:16 saurabh joined #gluster
04:26 spandit joined #gluster
04:26 soumya joined #gluster
04:28 atinmu joined #gluster
04:33 nishanth joined #gluster
04:35 rafi1 joined #gluster
04:35 Rafi_kc joined #gluster
04:41 lalatenduM joined #gluster
04:48 coredump joined #gluster
05:02 rjoseph joined #gluster
05:08 dusmant joined #gluster
05:16 prasanth_ joined #gluster
05:21 SOLDIERz_ joined #gluster
05:25 deepakcs joined #gluster
05:31 nshaikh joined #gluster
05:41 ndarshan joined #gluster
05:42 harish joined #gluster
05:47 overclk joined #gluster
05:51 kdhananjay joined #gluster
06:02 atalur joined #gluster
06:03 RaSTar joined #gluster
06:04 nbalachandran joined #gluster
06:06 RameshN joined #gluster
06:19 soumya joined #gluster
06:21 atinmu joined #gluster
06:22 SOLDIERz_ joined #gluster
06:22 ricky-ticky1 joined #gluster
06:26 anoopcs joined #gluster
06:31 dusmant joined #gluster
06:33 Fen2 joined #gluster
06:34 rolfb joined #gluster
06:38 pkoro_ joined #gluster
06:58 ctria joined #gluster
06:58 jiffin joined #gluster
07:04 rgustafs joined #gluster
07:04 nbalachandran joined #gluster
07:08 ntt joined #gluster
07:15 atinmu joined #gluster
07:23 SOLDIERz_ joined #gluster
07:31 Gorian joined #gluster
07:33 Gorian joined #gluster
07:33 ppai joined #gluster
07:36 anands joined #gluster
07:43 Slydder joined #gluster
07:44 Slydder morning all
07:46 deepakcs joined #gluster
07:52 xavih joined #gluster
07:54 ppai joined #gluster
07:57 giannello joined #gluster
08:00 Pupeno joined #gluster
08:04 aravindavk joined #gluster
08:05 rolfb joined #gluster
08:11 sputnik13 joined #gluster
08:12 Pupeno_ joined #gluster
08:20 justinmburrous joined #gluster
08:24 SOLDIERz_ joined #gluster
08:35 vimal joined #gluster
08:39 Fen2 Slydder: Hi! :)
08:43 Fen2 Slydder: Is NFS-Ganesha working well yet?
08:47 xavih joined #gluster
08:48 Slydder yeah. the kernel stack problem is, of course, still an issue. however, instead of the system deadlocking, ganesha just caps the connection, which frees up the stack. so better than it was before.
08:49 pkoro_ joined #gluster
08:52 spandit joined #gluster
09:04 ppai joined #gluster
09:08 kanagaraj joined #gluster
09:10 bharata_ joined #gluster
09:10 RaSTar joined #gluster
09:11 Fen2 do you think it will work with proxmox? (for virtualization?)
09:16 Gorian joined #gluster
09:16 Slashman joined #gluster
09:23 sputnik13 joined #gluster
09:28 sputnik13 joined #gluster
09:28 glusterbot New news from newglusterbugs: [Bug 1153569] client connection establishment takes more time for rdma only volume <https://bugzilla.redhat.com/show_bug.cgi?id=1153569>
09:30 verdurin joined #gluster
09:32 harish joined #gluster
09:40 pkoro_ joined #gluster
09:41 RaSTar joined #gluster
09:41 topshare joined #gluster
09:57 ndarshan joined #gluster
09:58 liquidat joined #gluster
10:00 eshy joined #gluster
10:02 samsaffron___ joined #gluster
10:03 sputnik13 joined #gluster
10:12 SOLDIERz_ joined #gluster
10:13 sputnik13 joined #gluster
10:20 ndarshan joined #gluster
10:22 justinmburrous joined #gluster
10:22 hybrid512 joined #gluster
10:28 glusterbot New news from newglusterbugs: [Bug 1153610] libgfapi crashes in glfs_fini for RDMA type volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1153610>
10:31 karnan joined #gluster
10:36 sputnik13 joined #gluster
10:50 sputnik13 joined #gluster
10:51 glusterbot New news from resolvedglusterbugs: [Bug 998967] gluster 3.4.0 ACL returning different results with entity-timeout=0 and without <https://bugzilla.redhat.com/show_bug.cgi?id=998967>
10:56 kkeithley1 joined #gluster
10:59 sputnik13 joined #gluster
11:08 social joined #gluster
11:19 LebedevRI joined #gluster
11:20 ira joined #gluster
11:22 justinmburrous joined #gluster
11:25 edward1 joined #gluster
11:29 virusuy joined #gluster
11:29 virusuy joined #gluster
11:35 davemc john_locke, semiosis could you give me a hint as to what was sent. i don't recall the contact about your set up
11:39 calisto joined #gluster
11:40 calum_ joined #gluster
11:47 diegows joined #gluster
11:52 nshaikh joined #gluster
11:55 sputnik13 joined #gluster
11:59 xandrea joined #gluster
12:00 xandrea Hi everyone
12:00 Fen1 joined #gluster
12:00 xandrea I’m getting an error while I try to mount a gluster volume locally
12:01 ctria joined #gluster
12:01 SOLDIERz_ joined #gluster
12:01 xandrea I’m using centos 7 with gluster 3.5.2; I have connected two servers and it seems to work
12:01 RicardoSSP joined #gluster
12:02 xandrea but when I try to mount locally with native client it doesn’t work
12:02 Slashman joined #gluster
12:03 xandrea in the log there is this error:
12:03 xandrea “failed to get the 'volume file' from server”
12:03 soumya joined #gluster
12:03 xandrea what can I do ?
12:05 anands joined #gluster
12:12 mkasa joined #gluster
12:17 itisravi_ joined #gluster
12:19 Guest59488 joined #gluster
12:23 justinmburrous joined #gluster
12:37 Fen1 xandrea: can you reach your server from the client?
12:39 xandrea Fen1: yes... I understand what my error was... hihihi
12:39 sputnik13 joined #gluster
12:40 Fen1 xandrea: what was it?
12:40 xandrea I tried to connect using the path, the way I would with an nfs mount command
12:40 Fen1 ok :)
12:40 xandrea instead I needed to use the volume name
12:40 xandrea :D
12:40 xandrea sorry
12:40 Fen1 np ;)
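The distinction xandrea ran into, in command form (hostname, brick path, and volume name are placeholders): the native client takes the volume name after the colon, not a filesystem path the way an NFS mount does.
    mount -t glusterfs server1:/export/brick1/gv0 /mnt/gv0   # NFS-style path -- fails and logs "failed to get the 'volume file' from server"
    mount -t glusterfs server1:/gv0 /mnt/gv0                 # correct: use the volume name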
12:41 soumya joined #gluster
12:41 theron joined #gluster
12:42 xandrea is it true that 3.4 and later need to open a different port for each brick??
12:42 xandrea http://www.jamescoyle.net/how-to/457-glusterfs-firewall-rules
12:42 anands joined #gluster
12:42 xandrea I’m still using the 24009 port with 3.5.2 and it works properly
12:42 Fen1 i don't know, i just disable the firewall in my case
12:42 doo joined #gluster
12:42 xandrea hihihihi
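On the port question: GlusterFS 3.4 moved brick ports from the old 24009-and-up range to 49152 and up (one port per brick), while 24007 stays reserved for glusterd and 24008 for RDMA management. A hedged iptables sketch, with the brick ranges sized for a handful of bricks and both ranges opened in case older volumes are still on the old ports:
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management / RDMA port mapper
    iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT   # brick ports on 3.4 and later
    iptables -A INPUT -p tcp --dport 24009:24017 -j ACCEPT   # legacy brick ports, if a volume still uses them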
12:43 theron joined #gluster
12:44 nshaikh joined #gluster
12:44 theron joined #gluster
12:46 xandrea how can I monitor the performance between the two servers??
12:49 Fen1 you can use Nagios to monitor :)
12:49 Fen1 https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/chap-Monitoring_Red_Hat_Storage.html
12:49 glusterbot Title: Chapter 13. Monitoring Red Hat Storage (at access.redhat.com)
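Besides Nagios, GlusterFS ships built-in counters that answer the monitoring question directly (VOLNAME is a placeholder):
    gluster volume profile VOLNAME start    # begin collecting per-brick I/O statistics
    gluster volume profile VOLNAME info     # per-operation latency and data volumes for each brick
    gluster volume top VOLNAME read         # most-read files per brick
    gluster volume top VOLNAME write        # most-written files per brick
    gluster volume profile VOLNAME stop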
12:54 sputnik13 joined #gluster
12:59 davemc joined #gluster
13:04 sputnik13 joined #gluster
13:05 soumya Slydder, hi
13:09 DV joined #gluster
13:13 lyang0 joined #gluster
13:16 Gorian joined #gluster
13:19 DV joined #gluster
13:21 bennyturns joined #gluster
13:21 julim joined #gluster
13:22 bene joined #gluster
13:24 justinmburrous joined #gluster
13:24 kshlm joined #gluster
13:26 xandrea thanks guys… I’ll try
13:26 xandrea nagios seems hard to set up
13:26 anands joined #gluster
13:29 brettnem joined #gluster
13:31 sputnik13 joined #gluster
13:43 rgustafs joined #gluster
13:46 sputnik13 joined #gluster
13:50 SOLDIERz_ joined #gluster
13:53 mojibake joined #gluster
13:55 Gorian joined #gluster
13:58 _Bryan_ joined #gluster
14:01 firemanxbr joined #gluster
14:04 nullck joined #gluster
14:04 nullck_ joined #gluster
14:05 nullck joined #gluster
14:07 nullck joined #gluster
14:08 nullck_ joined #gluster
14:09 nullck joined #gluster
14:09 karnan joined #gluster
14:09 karnan joined #gluster
14:13 jbautista- joined #gluster
14:20 firemanxbr joined #gluster
14:22 nullck_ joined #gluster
14:25 justinmburrous joined #gluster
14:27 nullck joined #gluster
14:31 anands joined #gluster
14:35 calisto joined #gluster
14:40 kaushal_ joined #gluster
14:44 lmickh joined #gluster
14:46 tdasilva joined #gluster
14:50 deepakcs joined #gluster
14:56 ctria joined #gluster
14:56 mojibake joined #gluster
14:57 msmith_ joined #gluster
15:04 lpabon joined #gluster
15:06 SOLDIERz_ joined #gluster
15:10 coredump joined #gluster
15:15 nshaikh joined #gluster
15:20 jobewan joined #gluster
15:22 daMaestro joined #gluster
15:22 semiosis glusterbot: whoami
15:22 glusterbot semiosis: semiosis
15:23 semiosis glusterbot: op
15:25 davemc joined #gluster
15:29 kumar joined #gluster
15:29 msmith_ joined #gluster
15:36 mojibake Question about ACLs. From what I read, ACLs can only be set using an NFS mount. Is that to say the ACLs are effective while mounted as a glusterfs mount, but just cannot be set? That is what the following document seems to indicate, with the mount command mounting -t glusterfs -o acl. https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_ACLs.md
15:36 glusterbot Title: glusterfs/admin_ACLs.md at master · gluster/glusterfs · GitHub (at github.com)
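For what it's worth, the document mojibake links appears to show ACLs being set through the native client as well, as long as the volume is mounted with the acl option (server, volume, user, and path below are placeholders):
    mount -t glusterfs -o acl server1:/gv0 /mnt/gv0
    setfacl -m u:alice:rw /mnt/gv0/shared/report.txt   # grant an extra user read/write via a POSIX ACL
    getfacl /mnt/gv0/shared/report.txt                 # confirm the entry is visible through the glusterfs mount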
15:45 _dist joined #gluster
15:46 soumya joined #gluster
15:48 anands joined #gluster
15:50 sputnik13 joined #gluster
15:50 davemc joined #gluster
15:54 glusterbot New news from resolvedglusterbugs: [Bug 988946] 'systemd stop glusterd' doesn't stop all started gluster daemons <https://bugzilla.redhat.com/show_bug.cgi?id=988946> || [Bug 1023191] glusterfs consuming a large amount of system memory <https://bugzilla.redhat.com/show_bug.cgi?id=1023191> || [Bug 1049616] Error: Package: glusterfs-ufo-3.4.0-8.fc19.noarch <https://bugzilla.redhat.com/show_bug.cgi?id=1049616>
15:59 theron joined #gluster
16:00 glusterbot New news from newglusterbugs: [Bug 987624] Feature Request: Color code replica or distribute pairings output from gluster volume info <https://bugzilla.redhat.com/show_bug.cgi?id=987624> || [Bug 1002313] request for ip based access control <https://bugzilla.redhat.com/show_bug.cgi?id=1002313> || [Bug 1067733] Rename temp file created in /var/lib/glusterd/peers/ during peer probe <https://bugzilla.redhat.com/show_bug.cgi?id
16:01 xavih joined #gluster
16:08 rotbeard joined #gluster
16:10 davemc joined #gluster
16:19 rwheeler joined #gluster
16:20 daMaestro joined #gluster
16:24 glusterbot New news from resolvedglusterbugs: [Bug 1067756] GlusterFS client mount fails when using Oracle Linux Unbreakable Enterprise Kernel <https://bugzilla.redhat.com/show_bug.cgi?id=1067756> || [Bug 1084721] Gluster RPMs for CentOS/etc from download.gluster.org are not consistently signed <https://bugzilla.redhat.com/show_bug.cgi?id=1084721>
16:26 justinmburrous joined #gluster
16:37 dtrainor joined #gluster
16:50 theron joined #gluster
16:56 dtrainor joined #gluster
16:56 skippy does the Gluster FUSE client have any kind of inactivity timeout for the mount?
16:57 semiosis not that i know of
16:59 skippy I'm seeing irregular "transport endpoint is not connected" on a few read and write operations.  Not seeing any alerts about system outages, but client Gluster logs clearly state "has not responded in the last 42 seconds, disconnecting."
17:01 soumya joined #gluster
17:02 charta joined #gluster
17:04 B21956 joined #gluster
17:05 firemanxbr joined #gluster
17:05 B21956 joined #gluster
17:05 dist_ joined #gluster
17:07 soumya joined #gluster
17:07 PeterA joined #gluster
17:16 john_locke Good Morning Guys! OK, finished installing my GlusterFS with 3 bricks, each of 60TB in raid 5, on only 1 server. iozone shows 4gb/s on reads and big numbers in general, but on writes it shows 54mb/s. I work with small files, 10mb each. Any suggestions?
17:17 john_locke One question: I know that speed is aggregated with more nodes, but do more bricks also help? what if instead of 3 bricks I make 6?
17:29 fattaneh joined #gluster
17:33 jskinner_ joined #gluster
17:34 mojibake john_locke: I do not know the details of IOZONE, but if it is only reading and writing a single file, you will not see much activity on the other two servers. If you set it up as a distributed volume, only multiple files would get written across the bricks.
17:34 mojibake A single large file is not split across bricks..
17:36 mojibake I am still a newbie myself, so if it is set up as an NFS mount, could that also cause it to write to one server? The question is maybe answered in the docs.
17:36 SOLDIERz_ joined #gluster
17:37 john_locke hello mojibake, it's a bunch of small files, 10mb each; also it's one server with 3 bricks, no other nodes.
17:37 john_locke the reads are very high, but the writes are way too slow
17:38 ekuric joined #gluster
17:38 mojibake Ahh, OK. Was thinking 3 nodes... Yeah, I noticed the writes are slow. Especially if it is an overwrite, because it first zeros the previous file and then writes it...
17:39 mojibake You would definitely get better performance adding extra nodes to split the write across.
17:39 mojibake Because it is actually the client that is writing to all the nodes at the same time.
17:43 davemc joined #gluster
17:45 brettnem joined #gluster
17:46 rturk joined #gluster
17:48 nshaikh joined #gluster
17:49 JoeJulian replication originates at the client, so like mojibake says, writes can occur at bandwidth/replicas. More distribute subvolumes means more servers to simultaneously serve files if the clients are using different files. If they're all using the same file, that's where replicas *can* help (with very specific workloads and tweaks). This is the first I've heard of "zeroing out" a file, though. Where'd that come from?
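A hedged back-of-the-envelope for john_locke's numbers, assuming the three bricks form a replica-3 volume, the client writes over a single gigabit link, and the reported 54mb/s means megabytes per second: because the client sends every byte to all three replicas, usable write throughput is roughly the client's bandwidth divided by three (about 125 MB/s / 3 ≈ 41 MB/s on gigabit), so write figures in the tens of MB/s are expected even before disk speed enters the picture, while reads come from a single replica (and from cache on re-reads), which is why the read numbers look so much larger. If the bricks are distributed rather than replicated, per-file round trips on small files and the RAID 5 write penalty on the bricks are likelier culprits.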
17:52 SOLDIERz joined #gluster
17:55 calum_ joined #gluster
17:56 RameshN joined #gluster
17:57 plarsen joined #gluster
18:00 theron joined #gluster
18:17 dtrainor joined #gluster
18:22 churnd joined #gluster
18:22 lalatenduM joined #gluster
18:28 justinmburrous joined #gluster
18:42 zerick joined #gluster
18:45 diegows joined #gluster
18:53 ilbot3 joined #gluster
18:53 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/ | GlusterFS 3.6 test week - https://public.pad.fsfe.org/p/GlusterFS-3.6-test-doc
18:54 mojibake JoeJulian: Well, I don't know about zeroing out the file, but when copying a file into a Gluster volume, a "fresh" file has a faster write speed than if I copy and overwrite a file with the exact same name (generally the exact same file, because of testing).
18:55 dtrainor joined #gluster
19:02 theron joined #gluster
19:02 skippy for folks using Ganesha NFS, are y'all using a VIP for HA purposes, pointing clients at a single Ganesha NFS server, or using automount's backup-server feature?
19:03 Slashman joined #gluster
19:09 theron joined #gluster
19:11 fattaneh1 joined #gluster
19:12 theron joined #gluster
19:45 _dist joined #gluster
19:46 uebera|| joined #gluster
19:46 uebera|| joined #gluster
19:48 skippy or does Ganesha NFS 4.1 reduce the single-point-of-failure issues with a single NFS server?
20:20 davemc joined #gluster
20:31 glusterbot New news from newglusterbugs: [Bug 1049946] Possible problems with some strtok_r() calls. <https://bugzilla.redhat.com/show_bug.cgi?id=1049946>
20:42 theron joined #gluster
20:44 skippy what exactly does "Server and Client lk-version numbers are not same, reopening the fds" mean in a Gluster client log?
20:44 glusterbot skippy: This is normal behavior and can safely be ignored.
20:44 skippy well okay!
20:48 skippy this pretty closely matches what I'm seeing: http://gluster.org/pipermail/gluster-users.old/2013-June/013266.html
20:48 glusterbot Title: [Gluster-users] bailing out frame type(GlusterFS 3.1) op(FINODELK(30)) ... timeout = 1800 (at gluster.org)
20:53 _dist joined #gluster
21:18 failshell joined #gluster
21:24 badone joined #gluster
21:30 justinmburrous joined #gluster
21:44 badone joined #gluster
21:45 badone joined #gluster
21:50 semiosis skippy: how's your backend storage looking? is it heavily loaded? getting io errors from it?
21:58 ctria joined #gluster
22:04 coredump joined #gluster
22:17 systemonkey joined #gluster
22:20 avati joined #gluster
22:22 _dist left #gluster
22:24 nage joined #gluster
22:25 _NiC joined #gluster
22:25 nixpanic joined #gluster
22:25 SteveCooling joined #gluster
22:26 ryao joined #gluster
22:27 tty00 joined #gluster
22:31 klaxa joined #gluster
22:32 ryao joined #gluster
22:37 doekia joined #gluster
22:53 gildub joined #gluster
22:55 ntt joined #gluster
23:08 PeterA joined #gluster
23:34 DougBishop joined #gluster
23:52 klaxa joined #gluster
23:52 plarsen joined #gluster
