
IRC log for #gluster, 2015-03-31


All times shown according to UTC.

Time Nick Message
00:00 bala joined #gluster
00:14 penglish2 joined #gluster
00:19 vijaykumar joined #gluster
00:27 brianw Having trouble starting glusterd on CentOS 7. I am getting this: Connection failed. Please check if gluster daemon is operational. And from cli.log: [socket.c:3004:socket_connect] 0-glusterfs: connection attempt on  failed, (Connection refused)
00:29 brianw selinux is set to permissive, and firewalld is disabled
01:16 ildefonso brianw, paste, somewhere, the output of "iptables -L -nv ; iptables -t mangle -L -nv ;  iptables -t raw -L -nv"
01:17 ildefonso also, "netstat -npl"
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:58 harish joined #gluster
01:59 T3 joined #gluster
02:03 wkf joined #gluster
02:12 bharata-rao joined #gluster
02:12 T3 joined #gluster
02:18 nangthang joined #gluster
02:19 haomaiwa_ joined #gluster
02:54 rjoseph joined #gluster
03:11 vipulnayyar joined #gluster
03:22 Pupeno joined #gluster
03:27 gildub joined #gluster
03:35 kdhananjay joined #gluster
03:38 atinmu joined #gluster
03:41 itisravi joined #gluster
03:44 shubhendu joined #gluster
03:51 penglish1 joined #gluster
03:55 brianw ildefonso: I first had to rm /var/run/glusterd.socket on all nodes to get glusterd to start. Odd
03:55 brianw ildefonso: up and running now. :)
03:55 ildefonso brianw, great!
03:55 ildefonso and yes, it is a bit weird that the existing socket file was preventing it from starting.
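A minimal sketch of the workaround brianw describes, assuming CentOS 7 with systemd and run on each node where glusterd refuses to start (the socket path is the one from the chat; everything else is illustrative):

    systemctl stop glusterd
    rm -f /var/run/glusterd.socket        # stale socket left behind by an unclean shutdown
    systemctl start glusterd
    systemctl status glusterd -l          # confirm the daemon stayed up
    gluster peer status                   # confirm the node rejoined its peers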
03:56 plarsen joined #gluster
04:00 nbalacha joined #gluster
04:08 overclk joined #gluster
04:11 poornimag joined #gluster
04:12 kanagaraj joined #gluster
04:12 sage joined #gluster
04:17 RameshN joined #gluster
04:21 T3 joined #gluster
04:34 anoopcs joined #gluster
04:35 spandit joined #gluster
04:35 PaulCuzner joined #gluster
04:39 RameshN joined #gluster
04:41 ppai joined #gluster
04:42 schandra joined #gluster
04:45 jiffin joined #gluster
04:49 kasturi joined #gluster
04:49 schandra joined #gluster
04:52 bala joined #gluster
04:52 kotreshhr joined #gluster
04:58 soumya joined #gluster
04:59 glusterbot News from resolvedglusterbugs: [Bug 1109613] gluster volume create fails with ambiguous error <https://bugzilla.redhat.com/show_bug.cgi?id=1109613>
05:02 poornimag joined #gluster
05:03 Bhaskarakiran joined #gluster
05:05 dusmant joined #gluster
05:12 rafi joined #gluster
05:15 kshlm joined #gluster
05:17 anil joined #gluster
05:20 Manikandan joined #gluster
05:21 T3 joined #gluster
05:23 aravindavk joined #gluster
05:24 kdhananjay joined #gluster
05:30 kotreshhr joined #gluster
05:32 ndarshan joined #gluster
05:34 vimal joined #gluster
05:36 karnan joined #gluster
05:36 tetreis joined #gluster
05:37 lalatenduM joined #gluster
05:42 ashiq joined #gluster
05:45 kumar joined #gluster
05:52 PaulCuzner joined #gluster
05:52 nshaikh joined #gluster
05:53 gem joined #gluster
06:03 maveric_amitc_ joined #gluster
06:05 R0ok_ joined #gluster
06:09 mbukatov joined #gluster
06:13 hagarth joined #gluster
06:14 nishanth joined #gluster
06:14 kotreshhr1 joined #gluster
06:19 dusmant joined #gluster
06:28 soumya joined #gluster
06:41 nangthang joined #gluster
06:45 deepakcs joined #gluster
06:47 dusmant joined #gluster
06:52 meghanam joined #gluster
06:59 pelox joined #gluster
07:00 glusterbot News from newglusterbugs: [Bug 1207532] BitRot :- gluster volume help gives insufficient and ambiguous information for bitrot <https://bugzilla.redhat.com/show_bug.cgi?id=1207532>
07:00 glusterbot News from newglusterbugs: [Bug 1207534] glusterd : unable to start glusterd after hard reboot as one of the peer info file is truncated to 0 byte <https://bugzilla.redhat.com/show_bug.cgi?id=1207534>
07:04 atinmu joined #gluster
07:05 kotreshhr joined #gluster
07:08 hagarth joined #gluster
07:09 TvL2386 joined #gluster
07:13 chirino joined #gluster
07:17 raghu joined #gluster
07:19 poornimag joined #gluster
07:20 atinmu joined #gluster
07:22 ricky-ti1 joined #gluster
07:23 ghenry joined #gluster
07:30 glusterbot News from newglusterbugs: [Bug 1207547] BitRot :- If bitrot is not enabled for given volume then scrubber should not crawl bricks of that volume and should not update vol file for that volume <https://bugzilla.redhat.com/show_bug.cgi?id=1207547>
07:36 lyang0 joined #gluster
07:46 hchiramm joined #gluster
07:46 kovshenin joined #gluster
07:47 hagarth joined #gluster
07:51 chirino joined #gluster
07:55 ktosiek joined #gluster
07:56 PaulCuzner joined #gluster
08:01 fsimonce joined #gluster
08:02 liquidat joined #gluster
08:03 Norky joined #gluster
08:07 T3 joined #gluster
08:10 smohan joined #gluster
08:23 ctria joined #gluster
08:30 m0ellemeister joined #gluster
08:31 anrao joined #gluster
08:31 harish joined #gluster
08:42 anrao joined #gluster
08:42 Slashman joined #gluster
08:54 atalur joined #gluster
08:56 ashiq joined #gluster
09:03 siel joined #gluster
09:04 chirino joined #gluster
09:06 chirino joined #gluster
09:08 T3 joined #gluster
09:14 bala joined #gluster
09:20 T0aD joined #gluster
09:25 atalur joined #gluster
09:44 schandra joined #gluster
09:49 lalatenduM joined #gluster
09:55 rjoseph joined #gluster
09:55 rafi1 joined #gluster
09:58 deniszh joined #gluster
10:01 anil_ joined #gluster
10:07 ashiq joined #gluster
10:11 smohan_ joined #gluster
10:14 nhayashi joined #gluster
10:15 rjoseph joined #gluster
10:17 schandra joined #gluster
10:18 ashiq joined #gluster
10:19 dusmant joined #gluster
10:20 hgowtham joined #gluster
10:21 aravindavk joined #gluster
10:30 glusterbot News from newglusterbugs: [Bug 1207611] peer probe with additional network address fails <https://bugzilla.redhat.com/show_bug.cgi?id=1207611>
10:40 schandra joined #gluster
10:41 lalatenduM joined #gluster
10:41 rwheeler joined #gluster
10:49 ira joined #gluster
10:52 prilly joined #gluster
10:54 haomaiwa_ joined #gluster
10:56 T3 joined #gluster
11:00 glusterbot News from newglusterbugs: [Bug 1207624] BitRot :- scrubber is not detecting rotten data and not marking file as 'BAD' file <https://bugzilla.redhat.com/show_bug.cgi?id=1207624>
11:00 glusterbot News from newglusterbugs: [Bug 1207627] BitRot :- Data scrubbing status is not available <https://bugzilla.redhat.com/show_bug.cgi?id=1207627>
11:01 kkeithley1 joined #gluster
11:13 kkeithley_ Gluster Bug Triage in #gluster-meeting in 45 minutes
11:16 LebedevRI joined #gluster
11:18 Manikandan joined #gluster
11:20 nangthang joined #gluster
11:22 bene2 joined #gluster
11:24 itisravi joined #gluster
11:31 glusterbot News from resolvedglusterbugs: [Bug 1067059] Support for unit tests in GlusterFS <https://bugzilla.redhat.com/show_bug.cgi?id=1067059>
11:31 nshaikh joined #gluster
11:34 ashiq joined #gluster
11:35 prilly joined #gluster
11:45 rafi joined #gluster
11:47 firemanxbr joined #gluster
11:50 poornimag joined #gluster
11:54 kkeithley_ Gluster Bug Triage in #gluster-meeting in 5 minutes
11:57 anoopcs joined #gluster
11:58 rjoseph joined #gluster
12:00 soumya joined #gluster
12:00 meghanam joined #gluster
12:05 itisravi_ joined #gluster
12:11 meghanam joined #gluster
12:16 bennyturns joined #gluster
12:17 DV joined #gluster
12:17 ricky-ticky1 joined #gluster
12:19 rjoseph joined #gluster
12:19 prilly_ joined #gluster
12:20 T0aD joined #gluster
12:20 hagarth joined #gluster
12:21 kanagaraj joined #gluster
12:21 dgandhi joined #gluster
12:22 anoopcs joined #gluster
12:26 haomaiwa_ joined #gluster
12:27 T3 joined #gluster
12:28 T3 joined #gluster
12:31 glusterbot News from newglusterbugs: [Bug 1207643] [geo-rep]: starting the geo-rep causes "Segmentation fault" and core is generated by "gsyncd.py" <https://bugzilla.redhat.com/show_bug.cgi?id=1207643>
12:31 glusterbot News from newglusterbugs: [Bug 1191176] Since 3.6.2: failed to get the 'volume file' from server <https://bugzilla.redhat.com/show_bug.cgi?id=1191176>
12:31 glusterbot News from resolvedglusterbugs: [Bug 1205970] can glusterfs be running on cygwin platform ? <https://bugzilla.redhat.com/show_bug.cgi?id=1205970>
12:32 B21956 joined #gluster
12:36 pppp joined #gluster
12:39 rjoseph joined #gluster
12:40 nbalacha joined #gluster
12:40 kovshenin joined #gluster
12:44 wkf joined #gluster
12:45 kdhananjay joined #gluster
12:49 atalur joined #gluster
12:50 RameshN joined #gluster
12:59 smohan joined #gluster
13:17 rafi joined #gluster
13:19 dusmant joined #gluster
13:20 gem joined #gluster
13:29 ricky-ticky joined #gluster
13:32 georgeh-LT2 joined #gluster
13:35 Gill joined #gluster
13:37 plarsen joined #gluster
13:39 hamiller joined #gluster
13:47 Philambdo joined #gluster
13:54 kumar joined #gluster
13:58 shubhendu joined #gluster
14:07 nishanth joined #gluster
14:08 bala1 joined #gluster
14:09 jmarley joined #gluster
14:10 bennyturns joined #gluster
14:10 bene2 joined #gluster
14:16 nbalacha joined #gluster
14:17 Gill joined #gluster
14:30 _Bryan_ joined #gluster
14:31 glusterbot News from newglusterbugs: [Bug 1207709] trash: remove_trash_path broken in the internal case <https://bugzilla.redhat.com/show_bug.cgi?id=1207709>
14:31 glusterbot News from newglusterbugs: [Bug 1207712] Input/Output error with disperse volume when geo-replication is started <https://bugzilla.redhat.com/show_bug.cgi?id=1207712>
14:37 atinmu joined #gluster
14:37 soumya joined #gluster
14:42 meghanam joined #gluster
14:45 wushudoin joined #gluster
14:49 xlz joined #gluster
14:52 coredump joined #gluster
14:54 coredump joined #gluster
15:00 Gill joined #gluster
15:04 hamiller joined #gluster
15:06 jmarley joined #gluster
15:11 bene2 joined #gluster
15:12 tg2 gluster 3.4, distributed volume, did remove-brick start 8 hours ago, status still shows "not-started"
15:12 tg2 any hints?
15:12 bala joined #gluster
15:13 hamiller how big is the volume? How big are the bricks? How much free space?
15:13 tg2 bricks are at 50% capacity each
15:13 tg2 volume is 300TB
15:13 tg2 bricks are ~40TB each
15:14 tg2 before it would start scanning almost right away
15:15 Creeture joined #gluster
15:15 hamiller Hmmm, so it needs to move about 20TB, but you seem to have plenty of space.
15:15 tg2 yeah
15:15 tg2 it's a bit less, 19TB
15:16 hamiller any 'D' or 'Z' processes?
15:16 tg2 # /dev/sdd1          40T   19T   22T  46% /storage2
15:16 Creeture I'm seeing a "Gfid mismatch detected" in my nfs.log for the same Gfid repeated a lot. gluster volume heal info split-brain shows nothing. Where do I start debugging?
15:16 tg2 all are 'S'
15:16 tg2 cpu usage seems about normal
15:17 hamiller tg2, 'S' sleeping, waiting on HW. Could be disk, could be network.
15:17 Gill joined #gluster
15:17 Creeture It's a replicate volume with 2 bricks. Otherwise working well.
15:17 tg2 you mean in the linux process list, or do you mean inside gluster
15:18 hamiller tg2 ps aux | grep glust*   <- that's how I would check  :)
15:18 tg2 all my fsd processes are alternating between S and R
15:19 tg2 http://i.imgur.com/CwOH3AA.png
15:19 hamiller tg2 R is running, S is interruptible sleep (waiting for an event to complete)
15:20 tg2 i can't see which process is doing the rebalance, but there might be a way to see what it's waiting on
15:22 tg2 in top open I see Current open fds: 67, Max open fds: 130, Max openfd time: 2015-03-30 18:16:45.733649
15:22 hamiller ya, check the logs as well. seems like each glusterfsd is consuming about 1.5GB ram...
15:22 tg2 for that brick
15:22 tg2 that is shared memory included
15:22 hamiller ya
15:22 tg2 the root is using 1.49GB
15:22 magamo left #gluster
15:22 tg2 which is about right on a box with 24G
15:24 hamiller well, it seems like something is stuck, and the rest are 'S'leeping behind the blockage
15:24 hamiller but I'm not sure what to do next
15:24 tg2 network is at <1% of line rate
15:24 hamiller ya, everyone is waiting
15:24 tg2 the brick is still accepting files
15:24 tg2 despite being in remove-brick state
15:25 tg2 where should I look for more debugging
15:25 tg2 hmm
15:25 tg2 in the rebalance log
15:25 tg2 getting transport endpoint is not connected
15:26 tg2 [2014-09-09 01:21:10.547906] W [client3_1-fops.c:2546:client3_1_opendir_cbk] 0-storage-client-5: remote operation failed: Transport endpoint is not connected. Path: / (00000000-0000-0000-0000-000000000001)
15:26 tg2 ah nvm thats old
15:27 hamiller Was a good clue  :)
15:27 tg2 haha
15:27 tg2 ls -ltr fail ;D
15:27 tg2 yeah nothing in the main storage log either, all green
15:29 hamiller let me dig around a bit....but I'm outta guesses
15:29 tg2 ah I found it
15:29 tg2 was user error
15:29 tg2 was checking the wrong brick :|
15:30 tg2 perfect, running already
15:30 tg2 sigh
15:30 tg2 thanks for the help though
15:30 tg2 now just have to wait for 19TB to copy ;D
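tg2's slip was checking status against the wrong brick; the status call has to name the exact brick(s) that were passed to 'remove-brick ... start'. A minimal sketch for gluster 3.4, assuming the volume is called storage and the brick being drained is server2:/storage2 (hostname and path illustrative):

    gluster volume remove-brick storage server2:/storage2 status
    # repeat (or watch) until the migration shows completed, then:
    # gluster volume remove-brick storage server2:/storage2 commit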
15:31 glusterbot News from newglusterbugs: [Bug 1207735] Disperse volume: Huge memory leak of glusterfsd process <https://bugzilla.redhat.com/show_bug.cgi?id=1207735>
15:32 hamiller tg2 lol
15:32 hamiller tg2++  Good job......
15:32 glusterbot hamiller: tg2's karma is now 1
15:34 Creeture Now that tg2 has his problem fixed, it looks like my 'Gfid mismatch detected' is due to a "No such file or directory"
15:34 Creeture How do I track this thing down?
15:36 kotreshhr left #gluster
15:37 hamiller joined #gluster
15:38 T3 joined #gluster
15:39 tg2 check on your underlying bricks for the directory or file that is showing as not found
15:40 Creeture It's found on both bricks.
15:40 tg2 if you do: getfattr -m . -d -e hex [filepath]
15:40 tg2 from the underlying brick
15:40 tg2 what does it show
15:40 tg2 (on both bricks)
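A minimal sketch of the check tg2 is asking for, assuming the replica bricks live at brick1:/data/brick and brick2:/data/brick and the suspect file is the .lck file from the nfs.log (hostnames and paths illustrative). A trusted.gfid value that differs between the two outputs is the mismatch being logged:

    # run against the brick paths, not the client mount
    ssh brick1 "getfattr -m . -d -e hex /data/brick/path/to/file.lck"
    ssh brick2 "getfattr -m . -d -e hex /data/brick/path/to/file.lck"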
15:42 Creeture https://gist.github.com/ccosby/c5179b7ff6543625ef98
15:42 Creeture Looks like 2 different ESXi hosts trying to use the same .lck file.
15:43 tg2 if you look at that file through the mount you don't see it?
15:44 tg2 is it a replicated volume?
15:44 Creeture yes
15:44 tg2 its possible that each esx host had a file handle open on each brick
15:44 tg2 and they both released it at the same time, and tried to delete it
15:44 lalatenduM joined #gluster
15:45 tg2 and they are still somehow holding the lock handle open since the unlink removed the file from the volume but not the brick
15:45 tg2 the lock file is for a vm disk or something?
15:45 Creeture Digging to see what it is now. Probably.
15:45 tg2 I'd just remove the .lck if no longer needed
15:46 tg2 or find what is holding it on the esx client
15:46 tg2 the bricks should remove the underlying files once the handle is closed
15:46 tg2 next
15:46 tg2 to see if it is still being held open
15:46 Creeture Looks like it's in the .iorm.sf directory and from there I'm getting an i/o error from the mount
15:47 purpleidea joined #gluster
15:47 Creeture The contents of that file on the bricks is different. Shows 2 different hosts as its contents. I'll start there.
15:47 tg2 gluster volume top storage open brick server:/brick
15:47 tg2 on each brick
15:47 tg2 pipe into grep for that file
15:48 tg2 see if anything still has it open
15:49 tg2 if nothing has it open, then you could copy it OUT from the brick into a /tmp dir for example
15:49 tg2 then remove from both bricks with rm
15:49 Creeture yep. 2 somethings it looks like. are those pids in the 1st column?
15:49 tg2 then copy it back in through the mount
15:49 tg2 the first column is the count
15:49 Creeture Yeah. I just took away the grep to see that. :)
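A minimal sketch of the sequence tg2 outlines, assuming the volume is storage, the bricks are brick1:/data/brick and brick2:/data/brick, the client mount is /mnt/storage, and the stuck lock file is .iorm.sf/foo.lck (hostnames, paths, and the file name are illustrative):

    # per brick: is anything still holding the file open?
    gluster volume top storage open brick brick1:/data/brick | grep foo.lck
    gluster volume top storage open brick brick2:/data/brick | grep foo.lck

    # if nothing has it open: keep a copy, remove it from both bricks directly,
    # then copy it back in through the client mount so it gets a fresh, consistent gfid
    cp /data/brick/.iorm.sf/foo.lck /tmp/foo.lck      # on brick1
    rm /data/brick/.iorm.sf/foo.lck                   # on brick1 and brick2
    # the matching .glusterfs/<gfid> hardlink on each brick may also need removing
    cp /tmp/foo.lck /mnt/storage/.iorm.sf/foo.lck     # on a client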
15:50 tg2 I'm not sure how to see which client has which file open, but I think with your esx hosts tagged to the lock file you might be able to cat it and see which host ID has the file held
15:50 Creeture yeah. I can see which one has it.
15:51 tg2 it might be that gluster is sensing the discrepancy between the two files and simply showing not available
15:51 Creeture Now I just have to figure out which VM it's associated with. Might just be able to shutdown the vm and cleanup from there.
15:51 tg2 or that the file was "unlinked" from one host which removed it from the volume but the file handles are still open to both files
15:51 tg2 not sure how exactly they got TWO separate files open on two replicate bricks
15:51 tg2 are you mounting by nfs from your esxi nodes by any chance ;)
15:53 tg2 yeah if you shut down the vm it should release any locks
15:53 tg2 the only locks i'm aware of are the vmdk locks (if your vm storage is configured by default)
15:53 Creeture Yes I am mounting by NFS.
15:53 tg2 and the host lock
15:53 tg2 I stopped using gluster via nfs for my vms
15:53 tg2 with esxi
15:53 Creeture What are you using now? iSCSI?
15:54 tg2 still using nfs, but not gluster's built-in nfs
15:54 Creeture nfs-kernel-nfs to a mounted directory?
15:54 tg2 I mount the volume on the gluster host
15:54 tg2 yep
15:54 Creeture works mo betta?
15:54 tg2 i can only tell you from my experience
15:55 tg2 but I had a TON of issues running esxi -> nfs onto my gluster instance
15:55 tg2 then when I changed to native nfs on the box INTO the mount
15:55 tg2 none
15:55 tg2 and you can use nfsv4
15:55 corretico joined #gluster
15:55 tg2 even if your version of gluster doesn't support it
15:55 tg2 the speeds are slower
15:55 tg2 but consistency is better
15:55 Creeture but it's tcp and consistent.
15:55 tg2 yep
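A minimal sketch of the layout tg2 describes: mount the volume with the FUSE client on the gluster server itself, then re-export that mount with the kernel NFS server so ESXi talks to kernel nfsd instead of gluster's built-in NFS. Volume name, paths, and the export network are illustrative:

    # on the gluster server
    gluster volume set vmstore nfs.disable on           # keep gluster's NFS off the NFS ports
    mkdir -p /export/vmstore
    mount -t glusterfs localhost:/vmstore /export/vmstore

    # /etc/exports -- fsid= is needed when re-exporting a FUSE mount
    # /export/vmstore 10.0.0.0/24(rw,no_root_squash,fsid=1)
    exportfs -ra

    # ESXi side: add an NFS datastore pointing at <gluster-server>:/export/vmstore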
15:55 purpleidea joined #gluster
15:56 tg2 it's that or move to a kvm/qemu setup and use gluster native with libgfapi
15:56 Creeture yeah. doing that already on my openstack stuff and it works kinda flawlessly
15:56 tg2 you /could/ also export a single block file INSIDE your volume via iscsi
15:57 tg2 with the new iscsitarget stuff it works very well
15:57 tg2 but esxi and iscsi is to be avoided unless you're using partner-approved iscsi stuff
15:57 tg2 or hardware iscsi
15:57 tg2 lost 2 vms that way
15:57 tg2 nfs fails gracefully (most of the time unless you're screwing with nfs protocol case in point)
15:58 tg2 iscsi path interruption is like pulling your harddrive out
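For the block-file option tg2 mentions, a minimal sketch using the LIO/targetcli stack (one reading of "the new iscsitarget stuff"); the gluster mount path, image size, IQNs, and names are all illustrative, and per the warning above this is not a recommended path for ESXi:

    # back an iSCSI LUN with a single file that lives inside the gluster mount
    targetcli /backstores/fileio create name=vmlun file_or_dev=/mnt/storage/vmlun.img size=500G
    targetcli /iscsi create iqn.2015-03.org.example:vmlun
    targetcli /iscsi/iqn.2015-03.org.example:vmlun/tpg1/luns create /backstores/fileio/vmlun
    targetcli /iscsi/iqn.2015-03.org.example:vmlun/tpg1/acls create iqn.1998-01.com.vmware:esxhost
    targetcli saveconfig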
15:58 Creeture Well...I just turned off SIOC on that storage volume and it cleared up the problem. Now if I cleanup those paths and enable it again, might be happy.
15:59 tg2 i haven't really found a need for sioc yet, but if you're doing resource pools it could be useful
15:59 tg2 disable sioc -> remove lockfiles from underlying bricks -> reenable
15:59 tg2 see if it creates two copies
15:59 Creeture workin on it now
16:01 tg2 when gluster nfsv4 stuff (ganesha?) is in production it should fix this issue
16:01 glusterbot News from newglusterbugs: [Bug 1195415] glusterfsd core dumps when cleanup and socket disconnect routines race <https://bugzilla.redhat.com/show_bug.cgi?id=1195415>
16:02 tg2 and make nfs a viable mount
16:02 tg2 i had issues with gluster via nfs from other things than esxi too
16:02 tg2 any time where there is contention for the same file with two nodes
16:02 tg2 and unlinks
16:02 tg2 it tends to break
16:06 Creeture Looks like disable SIOC, delete .iorm.sf directory from both bricks, reenable SIOC makes the errors go away, but now I'm watching SIOC go braindead trying to create the same file over and over. I know what its problem is now though.
16:07 tg2 +
16:07 tg2 with nfsv4 it will probably get better
16:07 Creeture too much failover in my plan. turning off sioc is the order of the day. :)
16:08 tg2 in the meantime if you want to get adventurous
16:08 tg2 http://blog.gluster.org/2014/09/glusterfs-and-nfs-ganesha-integration/
16:08 tg2 compile that into a test cluster
16:08 tg2 and try with a few vms see if it works better
16:08 tg2 it should
16:09 tg2 then you can even use 4.1 pnfs
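A minimal sketch of what the linked NFS-Ganesha setup looks like once ganesha is built with the GLUSTER FSAL, assuming a volume named vmstore served from localhost (export id, paths, and options illustrative):

    cat > /etc/ganesha/ganesha.conf <<'EOF'
    EXPORT {
        Export_Id = 1;
        Path = "/vmstore";
        Pseudo = "/vmstore";
        Access_Type = RW;
        Squash = No_Root_Squash;
        FSAL {
            Name = GLUSTER;
            Hostname = "localhost";
            Volume = "vmstore";
        }
    }
    EOF
    gluster volume set vmstore nfs.disable on      # free the NFS ports for ganesha
    ganesha.nfsd -f /etc/ganesha/ganesha.conf -L /var/log/ganesha.log -N NIV_EVENT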
16:09 Creeture I tried that at some point and it was an unhappy mess.
16:11 Creeture I have 281 VMs on that mount at the moment. I'm not screwing with it during the day, that's for sure.
16:12 Creeture But hey, thanks for the help. I understand a little more what I'm dealing with now.
16:19 penglish1 joined #gluster
16:32 bala joined #gluster
16:33 nangthang joined #gluster
16:39 T3 joined #gluster
16:41 rjoseph joined #gluster
16:51 plarsen joined #gluster
16:51 Gill joined #gluster
17:01 vipulnayyar joined #gluster
17:12 Rapture joined #gluster
17:20 T3 joined #gluster
17:21 nangthang joined #gluster
17:46 purpleid1a joined #gluster
17:47 lalatenduM joined #gluster
17:55 penglish2 joined #gluster
18:14 purpleidea joined #gluster
18:29 o5k joined #gluster
18:30 Gill joined #gluster
18:32 loctong joined #gluster
18:38 Gill joined #gluster
18:53 kovshenin joined #gluster
18:57 cmtime joined #gluster
19:00 ricky-ticky1 joined #gluster
19:02 glusterbot News from newglusterbugs: [Bug 906763] SSL code does not use OpenSSL multi-threading interface <https://bugzilla.redhat.com/show_bug.cgi?id=906763>
19:08 penglish1 joined #gluster
19:11 diegows joined #gluster
19:13 penglish2 joined #gluster
19:23 T0aD joined #gluster
19:39 PaulCuzner left #gluster
19:59 eugenewrayburn joined #gluster
20:01 eugenewrayburn Having trouble mounting a gluster volume.  It appears to mount, then immediately unmounts.
20:01 eugenewrayburn log:  +------------------------------------------------------------------------------+
20:01 eugenewrayburn [2015-03-31 19:53:55.827866] I [rpc-clnt.c:1761:rpc_clnt_reconfig] 0-isisbackup-client-0: changing port to 49153 (from 0)
20:01 eugenewrayburn [2015-03-31 19:53:55.832444] I [rpc-clnt.c:1761:rpc_clnt_reconfig] 0-isisbackup-client-1: changing port to 49153 (from 0)
20:01 eugenewrayburn [2015-03-31 19:53:55.836618] I [client-handshake.c:1413:select_server_supported_programs] 0-isisbackup-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
20:01 eugenewrayburn [2015-03-31 19:53:55.836844] I [client-handshake.c:1413:select_server_supported_programs] 0-isisbackup-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
20:01 glusterbot eugenewrayburn: +----------------------------------------------------------------------------'s karma is now -2
20:01 eugenewrayburn [2015-03-31 19:53:55.836966] I [client-handshake.c:1200:client_setvolume_cbk] 0-isisbackup-client-0: Connected to isisbackup-client-0, attached to remote volume '/data/isisbackup'.
20:01 eugenewrayburn [2015-03-31 19:53:55.836988] I [client-handshake.c:1210:client_setvolume_cbk] 0-isisbackup-client-0: Server and Client lk-version numbers are not same, reopening the fds
20:01 eugenewrayburn [2015-03-31 19:53:55.837053] I [MSGID: 108005] [afr-common.c:3552:afr_notify] 0-isisbackup-replicate-0: Subvolume 'isisbackup-client-0' came back up; going online.
20:01 eugenewrayburn [2015-03-31 19:53:55.837093] I [client-handshake.c:1200:client_setvolume_cbk] 0-isisbackup-client-1: Connected to isisbackup-client-1, attached to remote volume '/data/isisbackup'.
20:01 glusterbot eugenewrayburn: This is normal behavior and can safely be ignored.
20:01 eugenewrayburn [2015-03-31 19:53:55.837118] I [client-handshake.c:1210:client_setvolume_cbk] 0-isisbackup-client-1: Server and Client lk-version numbers are not same, reopening the fds
20:01 eugenewrayburn [2015-03-31 19:53:55.842295] I [fuse-bridge.c:5080:fuse_graph_setup] 0-fuse: switched to graph 0
20:01 glusterbot eugenewrayburn: This is normal behavior and can safely be ignored.
20:01 eugenewrayburn [2015-03-31 19:53:55.842443] I [client-handshake.c:188:client_set_lk_version_cbk] 0-isisbackup-client-1: Server lk version = 1
20:01 eugenewrayburn [2015-03-31 19:53:55.842447] I [fuse-bridge.c:4921:fuse_thread_proc] 0-fuse: unmounting /mnt
20:01 eugenewrayburn [2015-03-31 19:53:55.842514] I [client-handshake.c:188:client_set_lk_version_cbk] 0-isisbackup-client-0: Server lk version = 1
20:16 eugenewrayburn Since there are no E or C messages in the mnt.log file, I'm confused what the error is.  Is there a way to get more debug information on the failure?
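A minimal sketch of ways to get more detail out of a failing FUSE mount; the volume name isisbackup comes from the log above, the server name is illustrative:

    # raise the client log level for one mount attempt
    mount -t glusterfs -o log-level=DEBUG,log-file=/var/log/glusterfs/mnt-debug.log server1:/isisbackup /mnt

    # or run the client in the foreground with debug output
    glusterfs --debug --volfile-server=server1 --volfile-id=isisbackup /mnt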
20:26 roost joined #gluster
20:29 deniszh joined #gluster
20:34 penglish1 joined #gluster
20:34 anrao joined #gluster
20:38 JoeJulian eugenewrayburn: When sharing more than 3 lines information to IRC, please use a service like fpaste.org or dpaste.org. Not only is reading logs in a chat window difficult, but it also scrolls off any other conversations that might need attention (or even the point of the question you're trying to ask).
20:40 JoeJulian I don't immediately see a problem that needs answering.
20:42 JoeJulian Ah, found the question. That log doesn't show what you're describing. It's still mounted as of 2015-03-31 19:53:55.842514
21:13 badone_ joined #gluster
21:34 penglish1 joined #gluster
21:45 eugenewrayburn left #gluster
22:11 purpleidea joined #gluster
22:11 purpleidea joined #gluster
22:13 o5k joined #gluster
22:24 penglish1 joined #gluster
22:33 glusterbot News from newglusterbugs: [Bug 1207867] Are not distinguishing internal vs external FOPs in tiering <https://bugzilla.redhat.com/show_bug.cgi?id=1207867>
22:33 rotbeard joined #gluster
22:39 swebb joined #gluster
23:01 jmarley joined #gluster
23:16 penglish joined #gluster
23:29 khanku joined #gluster
23:45 wkf joined #gluster
