IRC log for #gluster, 2013-07-29


All times shown according to UTC.

Time Nick Message
00:17 lpabon joined #gluster
00:21 yinyin joined #gluster
00:23 recidive joined #gluster
00:25 codex joined #gluster
01:00 raghug joined #gluster
01:05 asias joined #gluster
01:28 vpshastry joined #gluster
01:38 badone_ joined #gluster
01:42 badone__ joined #gluster
01:45 kevein joined #gluster
01:51 bala joined #gluster
01:54 bharata-rao joined #gluster
02:00 m0zes joined #gluster
02:28 harish joined #gluster
02:28 raghug joined #gluster
02:36 lalatenduM joined #gluster
02:43 lalatenduM joined #gluster
02:59 saurabh joined #gluster
03:18 shubhendu joined #gluster
03:24 sgowda joined #gluster
03:49 badone joined #gluster
03:54 bulde joined #gluster
04:15 shylesh joined #gluster
04:40 vpshastry joined #gluster
04:54 asias joined #gluster
04:54 hagarth JoeJulian: we need to knock off community.gluster.org from the topic, we do not have a Q&A forum as of now ;)
04:59 kshlm joined #gluster
05:10 dusmant joined #gluster
05:11 CheRi_ joined #gluster
05:21 kanagaraj joined #gluster
05:22 itisravi joined #gluster
05:26 bulde joined #gluster
05:26 asias joined #gluster
05:30 Humble joined #gluster
05:37 vijaykumar joined #gluster
05:38 bala joined #gluster
05:39 raghu joined #gluster
05:45 kshlm joined #gluster
05:49 rastar joined #gluster
05:58 vshankar joined #gluster
06:01 satheesh joined #gluster
06:04 vpshastry joined #gluster
06:14 guigui3 joined #gluster
06:16 lalatenduM joined #gluster
06:20 lalatenduM joined #gluster
06:28 Recruiter joined #gluster
06:39 Humble joined #gluster
06:41 ricky-ticky joined #gluster
06:46 raghu joined #gluster
06:47 ngoswami joined #gluster
06:57 psharma joined #gluster
06:59 satheesh joined #gluster
07:00 shylesh joined #gluster
07:01 aravindavk joined #gluster
07:01 mohankumar joined #gluster
07:01 ctria joined #gluster
07:02 dobber joined #gluster
07:10 hybrid512 joined #gluster
07:13 ekuric joined #gluster
07:14 vpshastry1 joined #gluster
07:25 glusterbot New news from resolvedglusterbugs: [Bug 822791] Probe failed messages logged even though the peer probe is successful. <http://goo.gl/Yf1dpf>
07:30 SynchroM joined #gluster
07:35 mooperd joined #gluster
07:44 shireesh joined #gluster
08:18 ricky-ticky1 joined #gluster
08:32 deepakcs joined #gluster
08:37 satheesh joined #gluster
08:44 ujjain joined #gluster
08:45 sgowda joined #gluster
08:58 lalatenduM joined #gluster
08:58 kanagaraj joined #gluster
09:04 sgowda joined #gluster
09:09 vshankar joined #gluster
09:13 CheRi_ joined #gluster
09:17 ipalaus joined #gluster
09:18 ipalaus joined #gluster
09:30 ngoswami joined #gluster
09:38 shireesh joined #gluster
09:41 avati joined #gluster
09:42 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
09:46 puebele joined #gluster
09:47 Dga joined #gluster
09:50 lalatenduM joined #gluster
09:54 ricky-ticky joined #gluster
09:59 shylesh joined #gluster
10:02 harish joined #gluster
10:03 Dga Hello, has anyone tried Zimbra with /opt mounted on a glusterfs volume?
10:04 pkoro joined #gluster
10:05 16WAASZ6C joined #gluster
10:07 lalatenduM joined #gluster
10:17 vshankar joined #gluster
10:25 glusterbot New news from resolvedglusterbugs: [Bug 980838] nufa xlator's algorithm for locating local brick doesn't work if client xlators are not its immediate children <http://goo.gl/tmhqS>
10:36 mooperd joined #gluster
10:43 abyss^_ what does performance.cache-max-file-size default = 0 mean? Does it mean no cache, or no limit? ;)
10:56 mooperd joined #gluster
11:03 edward1 joined #gluster
11:05 spider_fingers joined #gluster
11:06 Shahar joined #gluster
11:10 kkeithley joined #gluster
11:14 hagarth abyss^_: no cache :)
11:15 hagarth j/k .. it means no limit  .. files of all sizes are cached
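For reference, a hedged example of capping that cache instead (the volume name and size here are made up):
    gluster volume set myvol performance.cache-max-file-size 10MB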
11:18 mooperd joined #gluster
11:24 mooperd_ joined #gluster
11:35 rcheleguini joined #gluster
11:46 mooperd joined #gluster
11:49 Dga How can I get the gluster web interface? http://s3.amazonaws.com/crunchbase_prod_assets/assets/images/original/0011/3823/113823v2.png
11:49 glusterbot <http://goo.gl/MqBV72> (at s3.amazonaws.com)
11:53 atrius joined #gluster
11:54 CheRi_ joined #gluster
11:56 recidive joined #gluster
11:57 mooperd joined #gluster
11:59 aliguori joined #gluster
12:06 ctria joined #gluster
12:10 abyss^_ hagarth: |* *|! Thx:)
12:10 hagarth abyss^_: yw! :)
12:17 jdarcy joined #gluster
12:18 ngoswami joined #gluster
12:29 chirino joined #gluster
12:55 chirino joined #gluster
12:56 awheeler_ joined #gluster
12:58 bennyturns joined #gluster
13:14 dhsmith joined #gluster
13:16 bradfirj joined #gluster
13:20 plarsen joined #gluster
13:24 __Bryan__ joined #gluster
13:28 glusterbot New news from resolvedglusterbugs: [Bug 962619] glusterd crashes on volume-stop <http://goo.gl/XXzSY>
13:34 yinyin joined #gluster
13:34 msciciel joined #gluster
13:35 shireesh joined #gluster
13:40 satheesh joined #gluster
13:46 vpshastry joined #gluster
14:00 vpshastry2 joined #gluster
14:01 duerF joined #gluster
14:15 ctria joined #gluster
14:15 vpshastry2 left #gluster
14:18 rwheeler joined #gluster
14:19 wushudoin joined #gluster
14:20 ipalaus joined #gluster
14:20 ipalaus joined #gluster
14:21 theron joined #gluster
14:25 lalatenduM joined #gluster
14:27 chirino joined #gluster
14:31 bennyturns joined #gluster
14:36 bennyturns joined #gluster
14:41 jdarcy_ joined #gluster
14:50 kaptk2 joined #gluster
14:51 sjoeboo joined #gluster
15:03 ekuric left #gluster
15:08 spider_fingers left #gluster
15:09 sprachgenerator joined #gluster
15:12 daMaestro joined #gluster
15:16 recidive joined #gluster
15:17 jclift joined #gluster
15:27 zaitcev joined #gluster
15:31 jbrooks joined #gluster
15:32 plarsen joined #gluster
15:33 Technicool joined #gluster
15:41 jiku joined #gluster
15:45 raghug joined #gluster
15:47 _pol joined #gluster
15:48 _pol_ joined #gluster
15:53 karthik joined #gluster
15:54 Salsinhasta joined #gluster
15:54 Salsinhasta left #gluster
15:55 psharma joined #gluster
16:00 rotbeard joined #gluster
16:03 aknapp joined #gluster
16:17 bala1 joined #gluster
16:22 TuxedoMan joined #gluster
16:35 bulde joined #gluster
16:43 Mo__ joined #gluster
16:49 zombiejebus joined #gluster
16:56 dscastro joined #gluster
16:58 _pol joined #gluster
17:01 aknapp_ joined #gluster
17:02 CheRi_ joined #gluster
17:13 jdarcy joined #gluster
17:21 _pol joined #gluster
17:28 daMaestro joined #gluster
17:29 maxamillion joined #gluster
17:31 maxamillion if I 'mount -t glusterfs node01.example.com:/gv0 /mnt/foo' and node01.example.com goes down, how does gluster handle that? does my client have to do anything to get failed over to another brick?
17:32 haidz basically to do failover you can set it up with DNS-RR
17:32 haidz the mount option "fetch-attempts" will do the failover if a node is not available for the mount
17:32 haidz ex.
17:32 haidz gluster:/exporter  /appdata/exporter  glusterfs  fetch-attempts=12,_netdev  0 0
17:33 haidz i have 6 nodes in my cluster
17:33 haidz so i do 12 attempts so it iterates over the cluster twice, and also leaves room if i expand
17:33 haidz JoeJulian, has a great doc on this
17:34 maxamillion ah, perfect
17:34 kkeithley Gluster handles failover on replica nodes when you use glusterfs native mounts
17:34 ricky-ticky joined #gluster
17:35 maxamillion oh, there's talk of it here in the "Note" https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage_Software_Appliance/3.2/html/User_Guide/chap-Administration_Guide-GlusterFS_Client.html#sect-Administration_Guide-Test_Chapter-GlusterFS_Client-Manuall
17:35 glusterbot <http://goo.gl/frQE9K> (at access.redhat.com)
17:35 maxamillion perfect, thanks! :D
17:35 haidz yeah so basically to avoid a SPOF you don't want to just mount "server1"
17:36 haidz if server1 is down you won't be able to mount from your fstab
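A minimal fstab sketch of the DNS-RR setup haidz describes above (the round-robin hostname, volume name, and mount point are hypothetical):
    gluster.example.com:/appdata  /mnt/appdata  glusterfs  fetch-attempts=12,_netdev  0 0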
17:39 jame joined #gluster
17:40 jame Today I upgraded gluster 3.3 to gluster 3.4. I have 3 gluster servers/bricks in a replica block. I did rolling upgrades on each of the servers. The servers are working fine. I have two clients: my 3.3 client is connected to my 3.4 server and working properly, however my 3.4 client is unable to maintain a connection. It will connect to my node long enough to download the volfile, but is unable to connect to the actual bricks.
17:42 lalatenduM joined #gluster
17:42 jame It constantly returns error message """ returning as transport is already disconnected OR there are no frames """ and """ failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running. """, gluster volume status shows everything working properly.
17:42 devopsman joined #gluster
17:42 jame And I know it is working properly because the 3.3 client works.
17:43 _pol joined #gluster
17:45 devopsman I have gluster version 3.3.  However, how come I don't have volume rename?
17:45 JoeJulian jame: Anything in the glusterd log?
17:46 JoeJulian devopsman: hmm... never had anyone ask to do that before... file a bug report as a feature request.
17:46 glusterbot http://goo.gl/UUuCq
17:46 JoeJulian (unless that's in 3.4 and I haven't noticed yet)
17:47 dhsmith joined #gluster
17:48 jame JoeJulian: What should I look for specifically?
17:49 jame I see a ton of """ readv failed (No data available) """ and then the client disconnected
17:50 JoeJulian Not sure yet... 3.4's new. Use fpaste.org if you'd like me to take a look.
17:51 JoeJulian @op
17:52 jame Stupid question, did 3.4 change the port numbers by chance, could it be a firewall rule blocking things?
17:52 JoeJulian @deop
17:52 JoeJulian jame: it did... not quite sure what they are yet, but gluster volume status should tell.
17:52 jame I have 24007 - 24010 open, and 49152 - 49060 open
17:53 jame gluster volume shows N/A for port
17:53 jame And some of the log messages are saying unable to determine port for brick
17:53 JoeJulian @3.4
17:53 glusterbot JoeJulian: 3.4 sources and packages are available at http://goo.gl/zO0Fa Also see @3.4 release notes and @3.4 upgrade notes
17:53 JoeJulian @3.4 upgrade notes
17:53 glusterbot JoeJulian: http://goo.gl/SXX7P
17:53 JoeJulian (I haven't read these yet)
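A hedged iptables sketch for the 3.4 port layout being discussed (glusterd on 24007, bricks from 49152 upward, one port per brick; the upper bound below is an arbitrary assumption):
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT
    iptables -A INPUT -p tcp --dport 49152:49251 -j ACCEPT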
17:54 devopsman I am looking at a man page online and I see volume rename ???
17:54 raghug joined #gluster
17:55 devopsman Was rename a feature that was on deck but never transpired?
17:55 JoeJulian devopsman: link?
17:55 devopsman http://linux.die.net/man/8/gluster
17:55 glusterbot Title: gluster(8): Gluster Console Manager - Linux man page (at linux.die.net)
17:56 devopsman hummm
17:56 JoeJulian lol... sure is...
17:56 devopsman don't know
17:57 JoeJulian Yeah, that command doesn't exist.
17:57 devopsman ok, at least I'm not going crazy
17:57 JoeJulian well... I wouldn't go that far... ;)
17:57 devopsman thanks for checking
17:57 devopsman lol
17:58 JoeJulian devopsman: It's in the source but commented out. I suppose that means it's broken.
17:59 devopsman ahhh
17:59 devopsman interesting
18:00 cao left #gluster
18:02 devopsman So the business wanted to change the mount point name so I figured I'd do the same for the volume, maybe bricks.  What's the easiest method to do this?
18:02 JoeJulian jame: My only guess would be that not all glusterfsd were stopped when you did the upgrade. That could, I imagine, cause the management daemon to be unable to determine the brick port address.
18:03 devopsman Not a production environment
18:03 JoeJulian easy != safest... As long as you have that definition, you can mess with the vol directories (and files) under /var/lib/glusterd/vols. As long as the volume's not running and you're thorough, it should work.
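A rough, untested sketch of the manual rename JoeJulian outlines (volume names are hypothetical; both the filenames and the contents under the vol directory embed the old name, and every server needs the same change while glusterd is stopped):
    gluster volume stop oldvol
    service glusterd stop                                  # on every server
    cd /var/lib/glusterd/vols && mv oldvol newvol
    grep -rl oldvol newvol | xargs sed -i 's/oldvol/newvol/g'
    for f in newvol/*oldvol*; do mv "$f" "${f//oldvol/newvol}"; done
    service glusterd start                                 # on every server
    gluster volume start newvol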
18:05 Humble joined #gluster
18:06 jame JoeJulian: http://fpaste.org/28722/21176137/
18:06 glusterbot Title: #28722 Fedora Project Pastebin (at fpaste.org)
18:07 devopsman thanks joeJul
18:10 thomaslee joined #gluster
18:12 devopsman I am running rhel 6.4.  Has anyone had any issues with fstab auto mounting using _netdev and backupvolfile-server?  mount spits out _netdev as an unknown option and ps does not show glusterfs with the backupvolfile option.
18:13 JoeJulian jame: Ok, let's also see /var/log/glusterfs/etc-glusterfs-glusterd.vol.log between 17:59:31 and 17:59:44
18:13 _pol joined #gluster
18:13 chirino joined #gluster
18:14 jame JoeJulian: does it matter which server?
18:14 JoeJulian devopsman: which version is that? The noise surrounding the _netdev init option should have been silenced.
18:14 JoeJulian jame: server2
18:15 jmalm joined #gluster
18:16 JoeJulian devopsman: Does it have multiple "--volfile-server=" ?
18:17 devopsman JoeJulian, do you mean Linux Kernel or Gluster RHEL Storage Server vs?
18:17 jame http://fpaste.org/28726/13751218/
18:17 glusterbot Title: #28726 Fedora Project Pastebin (at fpaste.org)
18:17 devopsman yes on the multiple
18:17 _pol_ joined #gluster
18:17 JoeJulian devopsman: rpm -q glusterfs
18:17 jame JoeJulian: the same thing just repeats over and over again
18:17 JoeJulian devopsman: That's how it does the backupvolfileserver thing.
18:19 devopsman glusterfs-3.3.1-1.el6.x86_64
18:19 devopsman and fuse
18:19 JoeJulian devopsman: That's why.
18:19 JoeJulian @yum repo
18:19 glusterbot JoeJulian: The official community glusterfs packages for RHEL (including CentOS, SL, etc), Fedora 17 and earlier, and Fedora 18 arm/armhfp are available at http://goo.gl/s077x. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates repository.
18:20 JoeJulian It doesn't harm anything though. It's just the shell script saying it doesn't know what to do with that option. That's okay because it's an init option anyway.
18:20 JoeJulian You should still upgrade to 3.3.2 or 3.4.0 though.
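A hedged fstab sketch of the backupvolfile-server usage devopsman describes (hostnames, volume, and mount point are hypothetical):
    server1:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,backupvolfile-server=server2  0 0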
18:23 mooperd joined #gluster
18:23 maxamillion left #gluster
18:25 jmalm I have files getting i/o errors in the mount, and can't find them on any of the bricks.  Any ideas what to do about this?
18:25 theron joined #gluster
18:26 devopsman ahh, my repo is pointing to */glusterfs/3.3/* not */glusterfs/repos/*
18:27 JoeJulian jmalm: If you're running a version older than 3.3.1-6, you might just need to remount.
18:27 ricky-ticky joined #gluster
18:27 JoeJulian jmalm: If not, check "gluster volume heal $vol info split-brain"
18:30 _pol joined #gluster
18:32 _pol__ joined #gluster
18:35 _pol joined #gluster
18:39 _pol joined #gluster
18:41 JoeJulian jame: selinux?
18:45 _pol_ joined #gluster
18:46 JoeJulian jame: Otherwise, the brick opens a socket file that's based on an md5 hash of something. If that something has changed, perhaps the brick and the management daemon don't agree on what that path should be. Try to kill -HUP your bricks.
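A hedged one-liner for the HUP JoeJulian suggests, assuming you want to signal every brick process on the box and pgrep is available:
    kill -HUP $(pgrep glusterfsd)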
18:47 jame ubuntu, AWS
18:48 dscastro wondering why i can't change selinux labels on gluster mounted volume
18:49 dscastro even passeing --selinux option
18:49 jmalm JoeJulian: Does that "gluster volume heal…." that you suggested affect the data at all?
18:49 JoeJulian dscastro: Not sure. I know the selinux extended attributes are passed through to the filesystem.
18:50 JoeJulian jmalm: No, that's a reporting command. Shows the most recent 1024 split-brain errors from the self-heal daemons.
18:50 dscastro JoeJulian: i tried passing --selinux and even --user-xattr
18:50 _pol joined #gluster
18:52 dscastro JoeJulian: chcon: failed to change context of `yum.log' to `system_u:object_r:openshift_var_lib_t:s0': Operation not supported
18:54 JoeJulian Searching... I see an email from May of 2011 saying that fuse doesn't support passing the selinux labels to the security server. There was also a deadlock issue at that time. Still digging...
18:56 JoeJulian Everything I can see shows setting the mount context of any fuse based filesystem but not labeling things directly...
18:57 JoeJulian but there's a lot of code dedicated to supporting selinux so that can't be it...
18:58 jame JoeJulian: I had to kill -9 the processes, they would not respond to a -HUP, -15, or service glusterfs-server stop
18:58 JoeJulian jame: That would explain why they were refusing connections... :/
18:58 jame JoeJulian: Working now that everything is restarted!
18:59 jame JoeJulian: I was just doing the service stop.... assuming it would kill itself like apache does if apache is not responding.
18:59 JoeJulian dscastro: There's a --selinux option to glusterfsd that's required to "Enable SELinux label (extended attributes) support on inodes"
19:00 dscastro JoeJulian: i guess it's supposed to be passed on the client, right?
19:00 JoeJulian volume set $vol selinux on
19:00 JoeJulian Just found it. :D
19:01 dscastro ohhhh
19:01 dscastro JoeJulian: guessing if default is off
19:01 dscastro let me spinup machines
19:01 dscastro i guess i tried it
19:01 dscastro let me confirm
19:02 Shahar joined #gluster
19:03 Shahar joined #gluster
19:03 JoeJulian dscastro: btw... I'm browsing 3.4.0 source. Haven't even looked at 3.3 for that
19:03 jame JoeJulian: thank you for helping point me in the right direction! The 3.4 upgrade notes mentioned stopping the service one server at a time (rolling restart), however, I never checked ps to see if they were actually stopped..... upgraded while it was still running, which of course caused it to get very confused. Would be nice if the blog post had mentioned that.
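A hedged sketch of the check jame wishes he had done between stopping and upgrading each server (init script name as used earlier in the log):
    service glusterfs-server stop
    ps ax | grep [g]luster        # should show no glusterd/glusterfsd processes before upgrading packages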
19:03 dscastro JoeJulian: that what i'm using
19:03 dscastro 34
19:04 JoeJulian jame: Cool. Glad I could help.
19:04 Shahar joined #gluster
19:06 Shahar joined #gluster
19:06 jame Now for another issue: how do you report memory leaks? What are the steps required for the bug report? (I was upgrading to 3.4 to see if it was fixed.) After about 3 weeks of running, glusterfs-client is consuming 4GB of memory, triggering OOM, while the client only ever accesses at most a 100MB file at a time. If it still does it on 3.4 I would like to submit a bug report.
19:07 JoeJulian The best (and most painful) is to use valgrind and attach its report.
19:07 Shahar joined #gluster
19:08 JoeJulian One of kkeithley's bugfix releases includes a patch for a client memory leak. I haven't had a problem with 3.3.1-15 but I did with the initial 3.3.1 release.
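A rough sketch of running the client under valgrind as suggested (server, volume, and paths are hypothetical; -N keeps glusterfs in the foreground so valgrind can follow it):
    valgrind --leak-check=full --log-file=/tmp/glusterfs-valgrind.log \
        glusterfs --volfile-server=server1 --volfile-id=gv0 -N /mnt/gv0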
19:11 Shahar joined #gluster
19:13 jame Ok... valgrind, yuk, thanks!
19:13 Shahar joined #gluster
19:13 dscastro [root@ip-10-232-20-215 ~]# gluster volume set gv0 selinux on
19:13 dscastro volume set: failed: option : selinux does not exist
19:13 dscastro JoeJulian:
19:14 Shahar joined #gluster
19:15 Shahar joined #gluster
19:18 _pol joined #gluster
19:18 Shahar joined #gluster
19:19 Shahar joined #gluster
19:21 ShaharL joined #gluster
19:22 Shahar joined #gluster
19:23 ShaharL joined #gluster
19:27 mooperd joined #gluster
19:27 Humble joined #gluster
19:29 glusterbot New news from resolvedglusterbugs: [Bug 989702] A non-empty file created on glusterfs with ecryptfs reports as a file of size zero <http://goo.gl/fa1xiw>
19:32 the-me_ joined #gluster
19:33 edward2 joined #gluster
19:33 jmalm JoeJulian: I have the gfid output from that 'gluster volume heal' command, what do I do from there to access these files that aren't showing up in the bricks?
19:33 georgeh|workstat joined #gluster
19:33 ultrabizweb joined #gluster
19:36 dscastro_ joined #gluster
19:56 Humble joined #gluster
19:57 jmalm After running split-brain info it returns gfids, how do I look up what those gfids correspond to?
20:00 JoeJulian @split-brain
20:00 glusterbot JoeJulian: To heal split-brain in 3.3+, see http://goo.gl/FPFUX .
20:01 JoeJulian Those gfids are hardlinks in the .glusterfs directory. First, check to see if they're hardlinked on any of your bricks. If they're not, they can be safely deleted.
20:02 JoeJulian To find the file they're hardlinked to, I stat the gfid file in .glusterfs. That gives me the inode number. Then 'find $brick_path -inum $inode_number'
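A hedged shell sketch of the stat/find approach JoeJulian describes (brick path and gfid are placeholders; the -not -path filter skips the .glusterfs hardlink itself):
    BRICK=/var/brick1                                   # hypothetical brick path
    GFID=d623c976-52df-42cd-aa3f-77ffee37c8b5           # gfid reported by heal info
    INUM=$(stat -c %i "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID")
    find "$BRICK" -inum "$INUM" -not -path '*/.glusterfs/*'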
20:11 Shahar joined #gluster
20:17 semiosis ,,(gfid resolver)
20:17 glusterbot https://gist.github.com/4392640
20:17 semiosis jmalm: ^
20:21 aliguori joined #gluster
20:21 jmalm semiosis: Thank you
20:21 jmalm JoeJulian: Thanks
20:24 semiosis yw
20:32 mooperd joined #gluster
20:33 jmalm Hey, I am having a problem getting this to run, sorry; I'm getting "cannot access …. no such file or directory" and "find: invalid argument '!' to '-inum' "
20:35 semiosis can you run it with 'bash -x' and ,,(paste) the whole output please?
20:35 glusterbot For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
20:37 Gilbs1 joined #gluster
20:43 Gilbs1 Any suggestions for troubleshooting high glusterfsd CPU %?  I'm up to 232% on one of my servers.
20:44 jmalm semiosis: http://fpaste.org/28793/13060913/
20:44 glusterbot Title: #28793 Fedora Project Pastebin (at fpaste.org)
20:45 semiosis jmalm: what is this?
20:46 semiosis jmalm: either edit the script so the hashbang at the top (first line, #!) says bash -x or run it as bash -x gfid-resolver.sh ...
20:48 jiku joined #gluster
20:50 jmalm semiosis: http://fpaste.org/28795/37513101/
20:50 glusterbot Title: #28795 Fedora Project Pastebin (at fpaste.org)
20:53 semiosis jmalm: wrong brick?
20:53 semiosis you need to give the path to a brick which actually contains .glusterfs/d6/23/d623c976-52df-42cd-aa3f-77ffee37c8b5
20:54 semiosis try ls /var/brick*/.glusterfs/d6/23/d623c976-52df-42cd-aa3f-77ffee37c8b5
20:54 semiosis 'ls /var/brick*/.glusterfs/d6/23/d623c976-52df-42cd-aa3f-77ffee37c8b5'
21:00 semiosis it's coffee-o-clock
21:01 jmalm semiosis: indeed
21:04 ipalaus joined #gluster
21:04 ipalaus joined #gluster
21:05 JoeJulian Gilbs1: First, look in the logs. If there's nothing abnormal there, I'd probably start looking at a tcpdump and see if it's an external cause.
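A hedged sketch of the two checks JoeJulian suggests (interface, log paths, and port ranges are assumptions):
    grep ' E ' /var/log/glusterfs/bricks/*.log | tail -n 50     # recent brick-side errors
    tcpdump -i eth0 -w /tmp/glusterfsd.pcap portrange 24007-24011 or portrange 49152-49251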
21:06 _pol joined #gluster
21:09 Gilbs1 Ah...  Lots and lots of:   failed on child my-volume-client-5 (No such file or directory) & remote operation failed: No such file or directory
21:10 Gilbs1 glusterfs 3.3.1
21:11 JoeJulian sounds like your brick path isn't mounted?
21:12 Gilbs1 checked, both mounted
21:14 _pol_ joined #gluster
21:15 Gilbs1 checked/tested all clients, all working.
21:19 efries joined #gluster
21:20 _pol joined #gluster
21:21 JoeJulian Actually, re-reading that, that client log is just saying that it's trying to do something to a file or directory that doesn't exist. That's possible in a healthy volume.
21:25 Gilbs1 That is what started me looking. I was out last week, but some users complained of a handful of missing files over the span of a few months. They said they were able to find one, but saw it as a duplicate in samba and were only able to delete one copy but not the other. I'm still waiting on the file name so I can see what happened. Then I decided to try top and see if there was anything else going on.
21:28 _pol_ joined #gluster
21:30 thomaslee joined #gluster
21:33 zombiejebus_ joined #gluster
21:35 _pol joined #gluster
21:36 duerF joined #gluster
21:47 mooperd joined #gluster
22:04 _pol_ joined #gluster
22:10 _pol joined #gluster
22:12 JonnyNomad joined #gluster
22:16 _pol_ joined #gluster
22:38 _pol joined #gluster
23:09 _pol_ joined #gluster
23:14 recidive joined #gluster
23:19 plarsen joined #gluster
23:19 NeatBasis joined #gluster
23:21 theron joined #gluster
23:29 fidevo joined #gluster
23:30 NeatBasis joined #gluster
23:35 NeatBasis joined #gluster
23:54 _pol joined #gluster
23:55 chirino joined #gluster
