
IRC log for #gluster, 2017-01-27


All times shown according to UTC.

Time Nick Message
00:01 nh2_ JoeJulian: ah, it seems to be mentioned in a side note on https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/: Also, the order in which bricks are specified has a great effect on data protection. Each replica_count consecutive bricks in the list you give will form a replica set, with all replica sets combined into a volume-wide distribute set. To make sure that replica-set members are not placed on the same
00:01 nh2_ JoeJulian: great, this solves my problem!
00:02 JoeJulian excellent
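
The note nh2_ quotes means bricks are grouped into replica sets in the order they appear on the command line. A minimal sketch of a replica 2 volume laid out so that no replica pair lands on a single host (the hostnames, volume name and brick paths here are made up for illustration):

    # bricks are grouped left to right: b1 on server1 + b1 on server2 form the
    # first replica set, b2 on server2 + b2 on server3 form the second
    gluster volume create gv0 replica 2 \
        server1:/bricks/b1 server2:/bricks/b1 \
        server2:/bricks/b2 server3:/bricks/b2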
00:35 squeakyneb joined #gluster
00:35 Wizek_ joined #gluster
00:36 jbrooks joined #gluster
00:47 jdossey joined #gluster
00:50 shruti` joined #gluster
00:50 lalatend1M joined #gluster
00:51 sac` joined #gluster
00:56 victori joined #gluster
01:14 shutupsquare joined #gluster
01:15 plarsen joined #gluster
01:53 DV__ joined #gluster
02:00 shyam joined #gluster
02:00 Gambit15 joined #gluster
02:12 shyam left #gluster
02:15 derjohn_mobi joined #gluster
02:32 jkroon joined #gluster
02:35 gyadav joined #gluster
03:01 skumar joined #gluster
03:02 victori joined #gluster
03:05 sbulage joined #gluster
03:20 sbulage joined #gluster
03:21 shruti joined #gluster
03:21 sac joined #gluster
03:22 nishanth joined #gluster
03:23 lalatenduM joined #gluster
03:33 Shu6h3ndu joined #gluster
03:33 magrawal joined #gluster
03:33 DV__ joined #gluster
03:35 shyam joined #gluster
03:39 Prasad joined #gluster
03:47 riyas joined #gluster
03:49 gyadav joined #gluster
03:50 itisravi joined #gluster
03:52 apandey joined #gluster
03:52 sbulage joined #gluster
03:54 ksandha_ joined #gluster
03:58 atinm joined #gluster
04:02 rjoseph joined #gluster
04:14 sanoj joined #gluster
04:20 gyadav joined #gluster
04:21 rjoseph joined #gluster
04:24 Saravanakmr joined #gluster
04:25 susant joined #gluster
04:26 victori joined #gluster
04:28 poornima_ joined #gluster
04:28 RameshN joined #gluster
04:28 ppai joined #gluster
04:34 Shu6h3ndu joined #gluster
04:48 Saravanakmr joined #gluster
04:58 ndarshan joined #gluster
04:58 victori joined #gluster
04:58 gyadav joined #gluster
05:00 gyadav joined #gluster
05:01 ppai joined #gluster
05:03 susant joined #gluster
05:03 mb_ joined #gluster
05:05 ppai joined #gluster
05:08 msvbhat joined #gluster
05:11 kdhananjay joined #gluster
05:18 Prasad joined #gluster
05:19 kotreshhr joined #gluster
05:19 mb_ joined #gluster
05:20 buvanesh_kumar joined #gluster
05:22 armyriad joined #gluster
05:23 bbooth joined #gluster
05:27 victori joined #gluster
05:35 karthik_us joined #gluster
05:38 msvbhat joined #gluster
05:39 Prasad_ joined #gluster
05:41 Prasad_ joined #gluster
05:43 riyas joined #gluster
05:46 sbulage joined #gluster
05:50 Prasad__ joined #gluster
05:53 apandey joined #gluster
05:56 zoyvind joined #gluster
06:03 victori joined #gluster
06:04 jkroon joined #gluster
06:05 Prasad joined #gluster
06:15 Prasad joined #gluster
06:19 Prasad joined #gluster
06:21 gem joined #gluster
06:32 Prasad joined #gluster
06:34 Prasad_ joined #gluster
06:34 rastar joined #gluster
06:35 ankit_ joined #gluster
06:37 sona joined #gluster
06:43 msvbhat joined #gluster
06:52 Humble joined #gluster
06:54 buvanesh_kumar joined #gluster
06:55 buvanesh_kumar joined #gluster
07:08 [diablo] joined #gluster
07:12 mhulsman joined #gluster
07:14 saali joined #gluster
07:15 ankit_ joined #gluster
07:21 vbellur joined #gluster
07:22 jtux joined #gluster
07:24 msvbhat joined #gluster
07:36 kdhananjay joined #gluster
07:57 musa22 joined #gluster
08:04 shutupsquare joined #gluster
08:08 fsimonce joined #gluster
08:11 victori joined #gluster
08:13 msvbhat joined #gluster
08:15 ivan_rossi joined #gluster
08:20 Guest89004 joined #gluster
08:25 ndarshan joined #gluster
08:25 ivan_rossi left #gluster
08:27 jri joined #gluster
08:28 ivan_rossi joined #gluster
08:30 victori joined #gluster
08:32 jkroon joined #gluster
08:32 jiffin joined #gluster
08:35 ksandha_ joined #gluster
08:38 sanoj joined #gluster
08:45 sanoj joined #gluster
08:49 Humble joined #gluster
08:54 gyadav_ joined #gluster
08:55 flying joined #gluster
08:57 TvL2386 joined #gluster
09:07 musa22 joined #gluster
09:13 flying joined #gluster
09:16 gyadav joined #gluster
09:22 itisravi joined #gluster
09:23 buvanesh_kumar_ joined #gluster
09:24 karthik_us joined #gluster
09:43 derjohn_mob joined #gluster
09:43 Wizek_ joined #gluster
09:54 sona joined #gluster
10:00 victori joined #gluster
10:02 jkroon joined #gluster
10:17 pulli joined #gluster
10:22 musa22 joined #gluster
10:25 Guest89004 joined #gluster
10:28 victori joined #gluster
10:29 jiffin joined #gluster
10:31 cacasmacas joined #gluster
10:34 ashiq joined #gluster
10:42 jkroon joined #gluster
10:57 buvanesh_kumar joined #gluster
11:02 Seth_Karlo joined #gluster
11:03 Seth_Karlo joined #gluster
11:10 jiffin joined #gluster
11:17 mhulsman joined #gluster
11:19 zakharovvi[m] joined #gluster
11:19 jiffin1 joined #gluster
11:20 cacasmacas joined #gluster
11:35 kotreshhr left #gluster
11:56 jkroon joined #gluster
11:57 ndarshan|lunch joined #gluster
11:58 shyam joined #gluster
12:04 susant left #gluster
12:28 kettlewell joined #gluster
12:30 ndarshan joined #gluster
12:31 kettlewell joined #gluster
12:32 jkroon joined #gluster
12:46 shyam joined #gluster
12:51 pulli joined #gluster
12:53 rastar joined #gluster
13:05 vimja joined #gluster
13:12 musa22 joined #gluster
13:20 vimja Hi. I have a nice Gluster setup up and running. Most things are working nicely. However, I've spent the better part of the last two days looking for "that one stupid mistake" in my configuration. I was hoping that you guys might be able to help me out. My setup is as follows: Storage is a 2+1 Gluster setup (2 replicating hosts + 1 arbiter) with a volume for virtual machines. There are two virtualization
13:20 vimja hosts running libvirt / qemu / kvm. Most things, as mentioned before, run fine. Live migration of a VM from one host to another and even shutting down one of the storage nodes works (tested for the arbiter and also the full storage nodes). However, when I pull the power plug on one of the storage nodes, everything goes to shits. From my understanding, the VMs are supposed to "freeze" when the hypervisor loses
13:20 vimja connection to the Gluster backend. But instead the VMs keep running, unable to perform reads or writes, crashing all processes and sometimes even corrupting their filesystems.
13:20 vimja The gluster volume is configured as suggested by Red Hat https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Configuring_Red_Hat_Enterprise_Virtualization_with_Red_Hat_Gluster_Storage/chap-Hosting_Virtual_Machine_Images_on_Red_Hat_Storage_volumes.html except I don't do sharding
13:21 glusterbot Title: Chapter 4. Hosting Virtual Machine Images on Red Hat Gluster Storage volumes (at access.redhat.com)
13:21 vimja glusterbot: yeah, done that
13:22 vimja I'd imagine that I may be missing a setting in libvirt / qemu?
13:25 vimja Maybe I should add: Gluster is totally fine with the situation. When I bring the storage node back up, it starts healing and then continues running as before
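
The Red Hat guide vimja links applies a group of volume options tuned for VM images, and the length of the hang when a storage node disappears ungracefully is governed by network.ping-timeout (42 seconds by default), during which in-flight I/O blocks. A hedged sketch of those options, with vmvol as a placeholder volume name rather than a claim about vimja's actual configuration:

    gluster volume set vmvol cluster.quorum-type auto
    gluster volume set vmvol cluster.server-quorum-type server
    gluster volume set vmvol cluster.eager-lock enable
    gluster volume set vmvol network.remote-dio enable
    gluster volume set vmvol performance.quick-read off
    gluster volume set vmvol performance.read-ahead off
    gluster volume set vmvol performance.io-cache off
    gluster volume set vmvol performance.stat-prefetch off
    # how long a client blocks before giving up on a server that vanished without closing its connections
    gluster volume set vmvol network.ping-timeout 42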
13:30 ic0n_ joined #gluster
13:30 victori joined #gluster
13:41 rastar joined #gluster
13:44 nh2_ joined #gluster
13:44 shyam joined #gluster
13:46 musa22 joined #gluster
13:47 susant joined #gluster
13:54 ira joined #gluster
14:01 unclemarc joined #gluster
14:03 sbulage joined #gluster
14:08 victori joined #gluster
14:21 shutupsquare joined #gluster
14:23 Asako joined #gluster
14:23 jdossey joined #gluster
14:23 Asako good morning.  I'm seeing a lot of geo-replication errors on my master node.
14:23 Asako [2017-01-27 13:52:35.926632] E [repce(/var/mnt/gluster/brick2):207:__call__] RepceClient: call 11047:139708535072576:1485525152.76 (entry_ops) failed on peer with OSError
14:23 Asako does anybody know what would cause this?
14:24 Asako [2017-01-27 13:53:26.641827] E [master(/var/mnt/gluster/brick2):783:log_failures] _GMaster: ENTRY FAILED: ({'gfid': '54500697-bd55-4c1a-a8e3-099e89238bcf', 'entry': '.gfid/edc4e980-5fe3-450a-96cd-e6c31affef0f/releases', 'stat': {'atime': 1485467874.964314, 'gid': 0, 'mtime': 1483727014.0494654, 'mode': 41471, 'uid': 0}, 'link': '.', 'op': 'SYMLINK'}, 2)
14:24 Asako also that
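
A sketch of how one might narrow down which session and brick is reporting these failures, assuming placeholder names mastervol and slavehost::slavevol for the geo-replication session:

    gluster volume geo-replication mastervol slavehost::slavevol status detail
    # per-brick worker logs live under /var/log/glusterfs/geo-replication/ on the master side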
14:30 cloph joined #gluster
14:31 B21956 joined #gluster
14:31 skylar joined #gluster
14:37 mpiet_cloud joined #gluster
14:37 haomaiwang joined #gluster
14:38 Asako OSError: [Errno 5] Input/output error: '.gfid/
14:40 victori joined #gluster
14:51 BitByteNybble110 joined #gluster
14:52 squizzi joined #gluster
14:58 msvbhat joined #gluster
15:01 kpease joined #gluster
15:04 victori joined #gluster
15:07 ndarshan joined #gluster
15:13 raghu joined #gluster
15:21 kraynor5b__ joined #gluster
15:32 hamburml joined #gluster
15:32 ankit joined #gluster
15:34 hamburml Hello :) I hope someone can help me. I have two vservers from Hetzner and I installed Ubuntu 16.04 (before, Debian 8.7) and installed GlusterFS 3.9. I was able to peer them correctly and I created a volume which was also mounted. Everything works. Now I tried the rdma transport mode instead of tcp for performance reasons. But when I try to start the volume I get an error that the commit failed on localhost.
15:35 hamburml Is it because I use vserver?
15:35 hamburml Same with debian 8.7 but thought it could be the 3.6 kernel so I installed ubuntu which has 4.4 kernel
15:37 ksandha_ joined #gluster
15:38 rwheeler joined #gluster
15:44 hamburml I simply don't know where I could ask. I just checked channel logs on botbot.me and there isn't any board I can use to ask that question.
15:50 raghu hamburml: Did you create the volume as an rdma volume? I think for rdma you will have to do one of two things when creating the volume. 1) "gluster volume create <volume name> transport=rdma <bricks>" (for RDMA-only access) OR 2) "gluster volume create <volume name> transport=tcp,rdma <bricks>" (for both TCP and RDMA access)
15:51 raghu hamburml: Also Make sure you have got infiniband in your systems which are needed for RDMA access
15:54 hamburml raghu: I need extra hardware for RDMA? Didn't know that, I thought RDMA bypasses the os and the clients have direct access (as direct as it can get) to the hdd/ssd.
15:55 hamburml Looks like that's the reason why it's not working, since I'd need extra hardware
15:55 hamburml And yeah, I set the volume for rdma via gluster volume set volname config.transport rdma
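
For reference, a hedged sketch of the two create forms raghu describes plus the set command hamburml used, with placeholder volume and host names; the CLI takes transport as a keyword rather than transport=, and changing config.transport on an existing volume requires the volume to be stopped first:

    # RDMA-only volume
    gluster volume create rdmavol transport rdma server1:/bricks/b1 server2:/bricks/b1
    # volume reachable over both TCP and RDMA
    gluster volume create mixedvol transport tcp,rdma server1:/bricks/b2 server2:/bricks/b2
    # switching an existing (stopped) volume, as hamburml did
    gluster volume set volname config.transport rdma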
15:56 ron-slc joined #gluster
15:59 snehring pretty sure rdma requires infiniband
15:59 snehring and I don't think it'll really do what you want wrt 'bypass the os'
16:03 hamburml snehring: Yeah, looks like I was misinformed. Thanks anyways :) Love to test GlusterFS and mount the Gluster Volume via NFS on my docker hosts and try stateful services. All servers are inside the same data center so the latency shouldn't be that high. Let's see how that works :)
16:05 snehring if your docker hosts are linux based you may get better performance via the native gluster client
16:06 plarsen joined #gluster
16:07 raghu joined #gluster
16:08 wushudoin joined #gluster
16:10 hamburml You are right, will install glusterfs-client on the hosts and mount the volume.
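
A minimal sketch of a native FUSE mount on such a host, with placeholder server and volume names:

    mount -t glusterfs storage1:/gv0 /mnt/gv0
    # or persistently in /etc/fstab:
    # storage1:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,backupvolfile-server=storage2  0 0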
16:15 plarsen joined #gluster
16:17 bowhunter joined #gluster
16:21 raghu joined #gluster
16:28 msvbhat joined #gluster
16:33 annettec joined #gluster
16:33 Gambit15 Hey guys, I'm trying to understand a weird problem I've been noticing occasionally popping up. I'm using a (2+1)x2 setup with default server quorum options to store my VM images. Being VMs, there is only ever one client accessing a file at a time.
16:35 Gambit15 Now every now and then, when I see gluster automatically healing a particular file, that file is usually listed as being simultaneously healed on 2 bricks.
16:35 Gambit15 How can that be so?
16:36 Gambit15 Surely if the file is different on more than one brick, that's a split-brain scenario?
16:40 Gambit15 ...and my understanding was that "sync()" is only completed once all active replication nodes in the cluster have confirmed the data has been written. If that's the case, how would I ever get a scenario where more than 1 replicate node is out of sync?
16:41 JoeJulian Gambit15: It's just marked "dirty" due to inflight fops.
16:42 Gambit15 JoeJulian, could you elaborate a bit?
16:44 Gambit15 I'll add that whilst the files might often appear in "heal info", they're never listed as being in split-brain
16:45 JoeJulian A write operation is bracketed by metadata ops that mark a file showing pending changes for the other brick. When that write finishes, the client then clears that metadata. If something happens in between, the metadata remains and the client can tell which server is clean and which is dirty. Your query happens to be hitting right in the middle.
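
The pending markers JoeJulian describes live as extended attributes on each brick copy, so they can be inspected directly on the storage nodes. A hedged sketch (the brick path, file name and volume name are made up):

    # run against the brick path on a server, not the client mount
    getfattr -d -m . -e hex /bricks/b1/images/vm01.img
    # trusted.afr.<volname>-client-N encodes pending data/metadata/entry
    # counts for brick N; all zeroes means nothing is pending against it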
16:51 Gambit15 JoeJulian: On the occasions I've noticed this, the files usually stay in this "healing" state for a fair amount of time, perhaps an hour or more
16:52 JoeJulian If you think it's a legitimate heal, check the client logs.
16:52 Gambit15 If it was never more than a couple of minutes, I wouldn't be so concerned
16:52 JoeJulian The client would know if a fop failed.
16:53 Gambit15 /var/log/glusterfs/glfsheal-<brick>.log?
16:55 Gambit15 In fact, when I noticed this last night, one of the pair of healing bricks was an arbiter. Surely healing the metadata on an arbiter should never take more than a couple of seconds?
16:57 nishanth joined #gluster
16:58 rjoseph joined #gluster
16:59 B21956 joined #gluster
17:02 jdossey joined #gluster
17:08 JoeJulian True
17:09 Gambit15 So, the next time this happens, which are the key logs I should be looking at?
17:09 Gambit15 /var/log/glusterfs/glfsheal-<brick>.log on the affected nodes/servers?
17:09 JoeJulian The file is only open on one client, so that client log.
17:09 JoeJulian not a heal log, a client log.
17:10 Gambit15 glusterfs 3.8.5
17:11 Gambit15 The client would be the host the VM is running on? (gfapi)
17:11 JoeJulian Assuming you're using libgfapi, I forget where the log should be. If you're mounting via fuse, it's /var/log/glusterfs/${mountpoint/\//-}.log
17:11 JoeJulian yes
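
As a concrete example of that substitution, assuming a hypothetical FUSE mount at /mnt/vmstore:

    # slashes in the mount point become dashes in the log file name
    less /var/log/glusterfs/mnt-vmstore.log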
17:15 shaunm joined #gluster
17:16 Gambit15 Hmm, ok, cheers. Will see if I can get a confirmation of the exact log from the ovirt guys
17:16 Gambit15 Is there any telltale data that'd only be present in that log which I could grep for?
17:21 JoeJulian If I knew of something to look for, I would have already suggested you look for it. ;)
17:27 Gambit15 Cool, cheers for the advice then :)
17:30 Humble joined #gluster
17:34 Gambit15 JoeJulian, ooh, actually, one other thing - my gluster logs are all using UTC instead of the host's configured timezone, and it's making cross-referencing stuff a PITA. Know where I can fix that?
17:34 JoeJulian In your cross-referencing tools.
17:34 JoeJulian imho, servers should always be in utc.
17:35 JoeJulian Only human-facing environments should have local timezones.
17:36 Gambit15 I used to preach the same, however now I live UTC-4/5, it can get a bit confusing in practice!
17:38 JoeJulian I just do a "date -u" to know what time it is if I don't already know (doing this long enough I know by looking at the clock what it is UTC).
17:39 Gambit15 And with some applications not allowing for a separation of frontend & backend time, it can complicate things.
17:39 JoeJulian But anway, no. there's no way to change the log timestamps from UTC.
17:40 JoeJulian This is especially important if you have servers and clients in different timezones and need to match up logs.
17:42 Gambit15 Oh well
17:58 ivan_rossi left #gluster
18:04 susant left #gluster
18:06 nh2_ left #gluster
18:22 ankit joined #gluster
18:29 jdossey joined #gluster
18:32 ankit joined #gluster
18:52 mhulsman joined #gluster
18:54 Seth_Karlo joined #gluster
19:04 Asako OSError: [Errno 5] Input/output error: '.gfid/54500697-bd55-4c1a-a8e3-099e89238bcf'
19:04 Asako I keep seeing this error, any idea how to fix it?
19:05 Asako it's always on the same gfid too
19:06 JoeJulian error 5 is EIO
19:06 JoeJulian That's an I/O error on your brick filesystem.
19:06 JoeJulian (failing drive?)
19:09 derjohn_mob joined #gluster
19:17 shutupsq_ joined #gluster
19:21 Asako possibly, it's a drive on a VM
19:21 Asako don't see any I/O errors in the journal though
19:27 bowhunter joined #gluster
19:28 Asako and now it's Faulty again
19:33 Asako File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 820, in entry_ops
19:33 Asako [ESTALE, EINVAL])
19:33 Asako File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 478, in errno_wrap
19:33 Asako return call(*arg)
19:33 Asako OSError: [Errno 95] Operation not supported: '.gfid/54500697-bd55-4c1a-a8e3-099e89238bcf'
19:33 Asako must be a network issue or something but I thought geo-replication was built to handle high latency
19:41 Asako hmm, that gfid is a symlink which is a link to the directory itself
19:41 Asako lrwxrwxrwx. 1 root root 1 Jan 26 17:00 54500697-bd55-4c1a-a8e3-099e89238bcf -> .
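
For context on what Asako is looking at: each brick keeps a gfid-indexed tree under <brick>/.glusterfs, where the first two pairs of hex digits of the gfid form the directory path, so a gfid from the logs can be mapped back to its on-brick object roughly like this (the brick path is taken from Asako's earlier log lines, not verified):

    GFID=54500697-bd55-4c1a-a8e3-099e89238bcf
    ls -l /var/mnt/gluster/brick2/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID
    # for regular files this entry is a hard link to the real file;
    # for directories it is a symlink back into the parent tree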
19:43 PatNarciso has anyone done performance benchmarks on 12 bricks in JBOD vs RAID6?  I'm curious what has better response time for negative lookups.
19:43 Asako so is the issue with xfs or gluster?
19:54 quest`` joined #gluster
19:54 quest`` Hello all
19:55 quest`` Getting a crash from gluster (signal received: 11
19:55 quest`` when updating from 3.7.8 -> 3.8.8
19:56 quest`` Have tried bumping up debug logs, but I don't see any descriptive error message =\
19:56 musa22 joined #gluster
19:56 quest`` When I deploy and setup gluster 3.8.8 from scratch, everything works perfectly
19:56 quest`` but if I setup on 3.7.8 (exact same configs), then try to update to 3.8.8 on running clusters, it crashes and burns. =\
19:57 quest`` Hoping someone here can point me in the right direction
19:57 PatNarciso hmm, I never upgraded from 3.7.8 to 3.8.8.   Is there... *something* ya may have skipped in the subversion updates?
19:58 quest`` I am using the trusty packages on the PPA that were recently added
19:58 quest`` and all the dependencies are right
19:58 quest`` as fresh installs with the new packages work perfecto
19:58 * PatNarciso rolls ubuntu ppa also.
19:58 mhulsman joined #gluster
19:59 quest`` yeah I am following the upgrade guide posted for 3.9
19:59 quest`` as nothing was ever posted to update to 3.8
19:59 PatNarciso altho, I'm 3.9 now...
19:59 quest`` Using cluster op version 30707
19:59 quest`` is 3.9 posted on the PPA? It wasn't last I checked... going to look now
19:59 quest`` we mainly picked 3.8.8 because it is LTS
20:00 PatNarciso unsure honestly.  I needed functionality related to tier that was unresolved in 3.8.
20:00 quest`` ahh ok
20:00 quest`` let me try just doing 3.9.1
20:01 quest`` but the PPA doesn't have trusty packages for those
20:01 quest`` fuck
20:01 quest`` unless I shouldn't be looking at https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.9
20:01 glusterbot Title: glusterfs-3.9 : “Gluster” team (at launchpad.net)
20:01 quest`` I only see zesty, yakkety and xenial
20:03 PatNarciso I'm xenial.  Forgot the reason I went to 16.x... I think it had something to do with the kernel / lsi raid controller farts.
20:04 quest`` gotcha, if I could I would. Xenial support is on our roadmap, but not done yet =P
20:04 Asako crap, I think I hosed my volume
20:15 Asako don't mess around with the .glusterfs dir :D
20:15 PatNarciso *nods*
20:15 Asako but it looks like gluster doesn't like circular symlinks
20:18 Asako releases -> .
20:19 Asako moved things around and replication is running again
20:27 Asako are slave volumes read-only?
20:31 MikeLupe joined #gluster
20:37 Asako I know replication is one-way but what happens when I make changes on the slave volume?
20:44 JoeJulian Asako: yes, geo-replication is unidirectional.
20:44 JoeJulian Changes made to the slave may be overwritten
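
If accidental writes to the slave are a concern, one option (assuming geo-replication in this version does not already enforce it) is to mark the slave volume read-only for clients, with slavevol as a placeholder name:

    gluster volume set slavevol features.read-only on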
20:46 PatNarciso JoeJulian, while I see ya in the room, have you done performance testing with N bricks JBOD vs single brick on RAID6/5 ?
20:47 * PatNarciso catches JoeJulian at the water cooler.   andddd I'm gonna need you to come in on Sunday too... yeahhh...
20:47 PatNarciso ... or was it mmmkay.
20:52 JoeJulian "yeah"
20:53 JoeJulian mmkay is South Park
20:54 PatNarciso whoah - its been years since I've watched South Park.
20:55 JoeJulian No, I have not. Really, though, it comes down to network bandwidth and underlying storage performance. Look for the bottlenecks.
20:55 JoeJulian Also consider context switches/cpu bottlenecks.
20:57 jiffin joined #gluster
20:57 PatNarciso Years ago I tested 6 bricks on an 8-disk RAID6.  Once the volume got over 40% capacity, performance suffered so very very much.
20:59 PatNarciso thanks, I appreciate the feedback.
21:00 pulli joined #gluster
21:02 JoeJulian That shouldn't have happened. Were you using lvm, too? What filesystem?
21:02 PatNarciso no lvm.  xfs.
21:03 JoeJulian Odd
21:03 PatNarciso (correctly formatted)
21:04 JoeJulian Only time I've seen something like that was a disk that had a misaligned read head on one platter.
21:08 jdossey joined #gluster
21:09 PatNarciso regarding alignment... does alignment still matter on a thin LVM logical volume?  In my mind, I see 'thin' as not-fully-allocated space, and it does this by writing very compactly.
21:10 JoeJulian Right, no. filesystem alignment should be based on the extent size.
21:11 JoeJulian A thin lv is allocated one extent at a time.
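
A hedged sketch of a thin-provisioned brick along those lines (the device, volume group and sizes are placeholders; the chunk size would normally be picked to match the underlying RAID stripe, and an XFS inode size of 512 is what the Gluster docs suggest for bricks):

    pvcreate /dev/sdb
    vgcreate vg_bricks /dev/sdb
    # thin pool; chunks are the unit in which a thin LV gets allocated
    lvcreate --type thin-pool -L 900G --chunksize 256K -n tp_brick1 vg_bricks
    lvcreate -V 1T --thin -n brick1 vg_bricks/tp_brick1
    mkfs.xfs -i size=512 /dev/vg_bricks/brick1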
21:12 mhulsman joined #gluster
21:29 Gambit15 JoeJulian> Only time I've seen something like that was a disk that had a misaligned read head on one platter.
21:29 Gambit15 JoeJulian: Out of curiosity, how'd you find that out?
21:31 JoeJulian A friend at the manufacturer (who shall remain nameless) had tools that could interact with the firmware. We found that once we got to that one platter, it started doing a bunch of retries. When we got past that platter, performance returned to normal.
21:31 JoeJulian He said it wasn't uncommon for that particular production run.
21:32 timotheus1 joined #gluster
21:32 Gambit15 Ah, so not something in the range of a normal techie then...
21:32 JoeJulian Unfortunately, no.
21:33 Gambit15 That said, if you removed that one drive from the array, I presume you should see a performance boost?
21:34 Gambit15 So just a case of yanking out one disk at a time & retesting I/O
21:39 JoeJulian Which, of course, you can do with software.
21:39 JoeJulian I wonder if there's any way to get kernel-level write-op timings.
21:40 Asako dtrace
21:41 Gambit15 Isn't that only for Solaris?
21:42 Gambit15 ...huh, ok: "Solaris, Mac OS X and FreeBSD. A Linux port is in development."
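
Short of a tracer, per-device latency statistics are often enough to spot one misbehaving member of an array; a minimal sketch:

    # extended per-device stats every 5 seconds; an outlier in await/%util
    # among the array members points at the slow disk
    iostat -dx 5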
22:11 mhulsman joined #gluster
22:21 jbrooks joined #gluster
22:22 musa22 joined #gluster
22:28 musa22 joined #gluster
22:30 plarsen joined #gluster
22:31 raghu joined #gluster
22:37 PatNarciso am I correct that gluster only detects bitrot, and that active resolution (by copying the file from another brick, perhaps) is not a thing?
22:38 JoeJulian no active resolution yet
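
For reference, detection is handled by the bitrot daemon and scrubber; repair is still manual, typically by removing the bad copy so self-heal can recreate it from a good replica. A minimal sketch with a placeholder volume name:

    gluster volume bitrot gv0 enable
    gluster volume bitrot gv0 scrub-frequency weekly
    gluster volume bitrot gv0 scrub status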
22:40 PatNarciso on this topic, ZFS is looking better than XFS :\
22:40 JoeJulian I think that's what people like about it.
22:42 PatNarciso I'm also interested in its caching; but have yet to test it... only because of horror stories in the past.  I'll be carving out time on my calendar to test this (hopefully in the near future).
22:58 Seth_Karlo joined #gluster
23:33 cacasmacas joined #gluster
