
IRC log for #gluster, 2016-03-17


All times shown according to UTC.

Time Nick Message
00:01 haomaiwa_ joined #gluster
00:21 d4n13L joined #gluster
00:39 ctria joined #gluster
00:45 bennyturns joined #gluster
00:47 jvandewege_ joined #gluster
00:48 BogdanR joined #gluster
01:01 haomaiwang joined #gluster
01:08 anmol joined #gluster
01:18 EinstCrazy joined #gluster
01:26 puiterwijk joined #gluster
01:27 dlambrig_ joined #gluster
01:34 harish joined #gluster
01:39 nishanth joined #gluster
01:44 amye joined #gluster
01:48 DV joined #gluster
01:50 aravindavk joined #gluster
01:58 calavera joined #gluster
02:03 calavera joined #gluster
02:06 jjafulle_ joined #gluster
02:11 baojg joined #gluster
02:15 hgichon0 joined #gluster
02:18 Lee1092 joined #gluster
02:25 jjafulle_ joined #gluster
02:25 kbyrne joined #gluster
02:25 Champi joined #gluster
02:25 devilspgd joined #gluster
02:25 lh joined #gluster
02:25 tyler274 joined #gluster
02:25 k-ma joined #gluster
02:25 purpleidea joined #gluster
02:27 haomaiwa_ joined #gluster
02:28 kbyrne joined #gluster
02:31 cpetersen_ joined #gluster
02:34 EinstCra_ joined #gluster
02:39 dlambrig_ joined #gluster
02:46 nathwill joined #gluster
02:52 sakshi joined #gluster
02:55 amye joined #gluster
03:00 baojg joined #gluster
03:01 hgichon joined #gluster
03:01 haomaiwa_ joined #gluster
03:10 overclk joined #gluster
03:12 Intensity joined #gluster
03:23 nbalacha joined #gluster
03:27 rastar joined #gluster
03:33 anmol joined #gluster
03:41 dthrvr joined #gluster
03:49 rjoseph joined #gluster
03:51 nathwill joined #gluster
03:54 kkeithley1 joined #gluster
04:01 haomaiwang joined #gluster
04:05 itisravi joined #gluster
04:09 hgichon0 joined #gluster
04:10 kanagaraj joined #gluster
04:20 atinm joined #gluster
04:21 rjoseph_ joined #gluster
04:29 RameshN joined #gluster
04:31 dlambrig_ joined #gluster
04:50 pur joined #gluster
04:51 prasanth joined #gluster
04:51 nehar joined #gluster
04:52 Gnomethrower joined #gluster
04:58 RameshN joined #gluster
05:00 spalai joined #gluster
05:00 karthik___ joined #gluster
05:01 haomaiwa_ joined #gluster
05:03 ppai joined #gluster
05:04 anmol joined #gluster
05:07 gem joined #gluster
05:10 dlambrig_ joined #gluster
05:12 ramky joined #gluster
05:18 ahino joined #gluster
05:19 hgowtham joined #gluster
05:24 karnan joined #gluster
05:28 jiffin joined #gluster
05:33 rastar joined #gluster
05:33 ndarshan joined #gluster
05:36 rastar joined #gluster
05:38 decay joined #gluster
05:38 Apeksha joined #gluster
05:42 sakshi joined #gluster
05:47 ovaistariq joined #gluster
05:49 ovaistar_ joined #gluster
05:51 atalur joined #gluster
05:55 skoduri joined #gluster
05:59 Manikandan joined #gluster
06:01 haomaiwa_ joined #gluster
06:03 Manikandan joined #gluster
06:04 ashiq joined #gluster
06:05 kdhananjay joined #gluster
06:05 prasanth joined #gluster
06:15 baojg joined #gluster
06:15 shubhendu joined #gluster
06:16 gowtham joined #gluster
06:20 Manikandan joined #gluster
06:27 ppai_ joined #gluster
06:29 spalai joined #gluster
06:30 suliba joined #gluster
06:31 Wizek joined #gluster
06:37 poornimag joined #gluster
06:39 ppai_ left #gluster
06:41 beeradb_ joined #gluster
06:41 suliba joined #gluster
06:42 Manikandan joined #gluster
06:46 baojg joined #gluster
06:49 suliba joined #gluster
06:53 anil_ joined #gluster
06:56 robb_nl joined #gluster
06:56 unlaudable joined #gluster
06:57 samsaffron___ joined #gluster
07:01 nishanth joined #gluster
07:01 haomaiwa_ joined #gluster
07:02 geniusoftime joined #gluster
07:02 geniusoftime hello gluster gurus. anyone know which interface ganesha HA selects to attach the VIPs?
07:04 mbukatov joined #gluster
07:04 Wizek joined #gluster
07:06 anoopcs jiffin, ^^
07:06 skoduri geniusoftime, I do not know how or which nic it picks up
07:07 skoduri geniusoftime, but you can update the resource  to select another nic I guess
07:08 skoduri geniusoftime, pcs resource update <VIP-resource-name> nic=<nic-interface>
07:08 geniusoftime cool, that's helpful
07:08 geniusoftime skoduri, thanks!
07:08 skoduri geniusoftime, welcome...let us know if that works :
07:08 skoduri :)
07:09 deniszh joined #gluster
07:10 gbox joined #gluster
07:11 skoduri geniusoftime, and if you would like to assign a particular nic to these VIPs while bringing up nfs-ganesha cluster .. you need to modify '/usr/libexec/ganesha/ganesha.sh' script before setup
07:11 skoduri pcs -f ${cibfile} resource create ${1}-cluster_ip-1 ocf:heartbeat:IPaddr ip=${ipaddr} cidr_netmask=32 op monitor interval=15s
07:12 skoduri edit above line to take 'nic=<interface>' option
07:13 skoduri geniusoftime, 'man pcs' has examples
07:13 geniusoftime excellent
07:13 geniusoftime thankyou
07:14 ppai joined #gluster
07:14 ndevos skoduri and geniusoftime: see also some more details in #ganesha :)
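
Pulling skoduri's two suggestions together, a minimal sketch (the VIP resource name and the interface name eth1 are placeholders, not values taken from this log):

    # switch an existing VIP resource onto a specific interface at runtime
    pcs resource update <VIP-resource-name> nic=eth1

    # or, before bringing the cluster up, extend the resource-create line in the
    # ganesha HA script so every VIP is pinned to that interface from the start
    pcs -f ${cibfile} resource create ${1}-cluster_ip-1 ocf:heartbeat:IPaddr \
        ip=${ipaddr} cidr_netmask=32 nic=eth1 op monitor interval=15s
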
07:15 deniszh joined #gluster
07:18 deniszh1 joined #gluster
07:18 gbox Hello how can I determine the client name to use in " setfattr -n replica.split-brain-choice -v "choiceX" <path-to-file>"?
07:19 gbox choiceX could be volume-client-2 for example.  Where is that client name stored?
07:22 lanning joined #gluster
07:23 atinm joined #gluster
07:25 jbrooks joined #gluster
07:26 mhulsman joined #gluster
07:28 Manikandan joined #gluster
07:29 jtux joined #gluster
07:31 itisravi gbox: The mount log will print the volume graph, that maps 'volname-client-x' to a particular brick.
07:32 [Enrico] joined #gluster
07:33 itisravi You can also check the volfile in the server at /var/lib/glusterd/vols/volname/trusted-volname.tcp-fuse.vol
07:34 kovshenin joined #gluster
07:35 gbox itisravi: Hi, thanks, that worked.  I found it in /var/lib/glusterd/vols/volname/trusted-volname.tcp-fuse.vol
07:36 itisravi gbox: cool.
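
A minimal sketch of the lookup itisravi describes, with volname as a placeholder volume name; the remote-host and remote-subvolume options in the matching block name the server and brick directory behind that client translator:

    # map 'volname-client-2' to a brick using the fuse volfile on a server
    grep -A8 'volume volname-client-2' \
        /var/lib/glusterd/vols/volname/trusted-volname.tcp-fuse.vol
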
07:36 gbox The "setfattr -n replica.split-brian-choice" fix for input/output error does not work however
07:37 gbox Should it take the gluster volume path or the brick path as the argument?
07:38 gbox It will only work against the gluster volume path but that still results in an input/output error
07:39 gbox itisravi:  Regarding mount log and the volume graph, where can i find that?  /var/log/glusterfs?
07:39 gbox Ah yes it's there at the beginning of the log file
07:42 rafi joined #gluster
07:43 rafi1 joined #gluster
07:45 gbox All of the docs say "setfattr -n replica.split-brain-choice -v test-client-2 file1" will fix split-brain input/output errors.  But it fails due to the input/output error.
07:46 gbox Does "Skipping conservative merge on the file" mean self-heal won't work?
07:47 arcolife joined #gluster
08:01 haomaiwa_ joined #gluster
08:15 DV joined #gluster
08:19 ivan_rossi joined #gluster
08:25 ggarg joined #gluster
08:26 ahino joined #gluster
08:29 jri joined #gluster
08:42 hackman joined #gluster
08:45 hchiramm joined #gluster
08:48 Debloper joined #gluster
08:49 liibert joined #gluster
08:52 Ulrar Ha, geo replication doesn't seem to be doing what I want to do. Is there a way to tell glusterfs "this server is in datacenter A, and those ones in datacenter B, make sure I have at least one replica of each file on each side" ?
08:52 d0nn1e joined #gluster
08:55 haomaiw__ joined #gluster
09:01 haomaiwa_ joined #gluster
09:04 vmallika joined #gluster
09:05 kanagaraj joined #gluster
09:06 harish_ joined #gluster
09:12 fsimonce joined #gluster
09:24 rafi joined #gluster
09:25 muneerse joined #gluster
09:27 rafi1 joined #gluster
09:30 rafi joined #gluster
09:31 ctria joined #gluster
09:45 atalur joined #gluster
09:47 sakshi joined #gluster
09:48 hchiramm joined #gluster
09:51 baojg joined #gluster
09:51 Slashman joined #gluster
09:51 [diablo] joined #gluster
09:53 gem joined #gluster
09:53 Norky joined #gluster
09:55 baojg joined #gluster
10:01 haomaiwang joined #gluster
10:03 mhulsman joined #gluster
10:06 baojg joined #gluster
10:13 drankis joined #gluster
10:14 rafi joined #gluster
10:18 Norky joined #gluster
10:30 arcolife joined #gluster
10:31 arcolife joined #gluster
10:38 ovaistariq joined #gluster
10:41 Manikandan joined #gluster
10:45 kotreshhr joined #gluster
10:53 ira joined #gluster
10:54 kotreshhr left #gluster
10:58 deniszh2 joined #gluster
10:59 Wizek joined #gluster
11:01 haomaiwa_ joined #gluster
11:13 kotreshhr joined #gluster
11:14 Manikandan joined #gluster
11:16 sakshi joined #gluster
11:23 sakshi joined #gluster
11:28 wushudoin joined #gluster
11:31 Manikandan joined #gluster
11:34 deniszh joined #gluster
11:39 johnmilton joined #gluster
11:39 Wizek joined #gluster
11:42 ramky joined #gluster
11:43 Manikandan joined #gluster
12:00 plarsen joined #gluster
12:13 chirino_m joined #gluster
12:16 rastar joined #gluster
12:16 nbalacha joined #gluster
12:16 poornimag joined #gluster
12:20 chirino joined #gluster
12:27 haomaiwa_ joined #gluster
12:27 Wizek joined #gluster
12:27 fsimonce joined #gluster
12:28 hgowtham joined #gluster
12:34 samppah is anyone here using glusterfs with zfs?
12:34 samppah zfs on linux to be exact
12:34 pjrebollo joined #gluster
12:35 samppah i see quite low performance especially with random io and i'm suspecting that network.remote-dio is somewhat ineffective as i don't see glusterfs caching anything on server side
12:39 ovaistariq joined #gluster
12:48 jiffin samppah: there was a performance regression for v3.7.8
12:49 unclemarc joined #gluster
12:50 mpietersen joined #gluster
12:51 jiffin https://bugzilla.redhat.com/show_bug.cgi?id=1309462
12:51 glusterbot Bug 1309462: low, unspecified, ---, ravishankar, MODIFIED , Upgrade from 3.7.6 to 3.7.8 causes massive drop in write performance.  Fresh install of 3.7.8 also has low write performance
12:53 chirino joined #gluster
12:57 samppah jiffin: thanks, i'll take a look into it
12:57 samppah this is fresh system with version 3.7.8
12:57 bennyturns joined #gluster
12:57 overclk joined #gluster
13:01 haomaiwa_ joined #gluster
13:05 mwilson_ joined #gluster
13:05 pjreboll_ joined #gluster
13:15 Apeksha joined #gluster
13:24 mpietersen joined #gluster
13:26 rwheeler joined #gluster
13:32 kdhananjay joined #gluster
13:36 Apeksha joined #gluster
13:36 anmol joined #gluster
13:42 ggarg joined #gluster
13:45 hamiller joined #gluster
13:46 skylar joined #gluster
13:48 EinstCrazy joined #gluster
14:01 7GHAAJT61 joined #gluster
14:18 hackman joined #gluster
14:27 spalai joined #gluster
14:40 ovaistariq joined #gluster
14:40 nbalacha joined #gluster
14:42 Apeksha joined #gluster
14:43 rastar joined #gluster
14:46 nbalacha joined #gluster
14:46 bennyturns joined #gluster
14:46 harish_ joined #gluster
14:49 farhorizon joined #gluster
14:56 overclk joined #gluster
14:58 [Enrico] joined #gluster
15:01 B21956 joined #gluster
15:01 haomaiwa_ joined #gluster
15:03 92AAAKT33 joined #gluster
15:04 pjrebollo joined #gluster
15:09 shaunm joined #gluster
15:11 post-factum JoeJulian: check ML, I've posted arbiter size estimation for 1k inodes
15:20 pjreboll_ joined #gluster
15:23 jiffin joined #gluster
15:23 baojg joined #gluster
15:27 TvL2386 joined #gluster
15:30 amye joined #gluster
15:37 nbalacha joined #gluster
15:44 spalai left #gluster
15:45 nbalacha joined #gluster
15:49 ashiq joined #gluster
15:49 fsimonce joined #gluster
15:51 Manikandan joined #gluster
15:59 chirino joined #gluster
16:01 haomaiwa_ joined #gluster
16:12 gbox joined #gluster
16:21 squizzi joined #gluster
16:22 calavera joined #gluster
16:23 gbox I have an interesting situation.  A peer went rogue yesterday and hosed 700,000+ files.  @JoeJulian helped diagnose and fix the problem.  The 700,000 files though are still in limbo.  They're all gfid mismatches but the self-heal daemon simply logs "Skipping conservative merge on the file."  The good peer shows "trusted.afr.gv0-client-2=0x000000020000000100000000" (client-2 is the rogue peer) while the rogue brick copy has no trusted.afr xattrs at all
16:24 gbox I'd greatly appreciate any insight on this.  There are similar issues on the gluster-users mailing list but no clear resolution.  I'm leaning toward erasing all copies on the rogue brick.
16:27 hackman joined #gluster
16:33 coredump joined #gluster
16:40 gbox https://gluster.readthedocs.org/en/latest/Troubleshooting/split-brain/ is very helpful but does not explain my scenario
16:40 glusterbot Title: Split Brain - Gluster Docs (at gluster.readthedocs.org)
16:41 liibert joined #gluster
16:44 d0nn1e joined #gluster
16:46 gbox That explains split-brain, which is different than gfid mismatch.  OK I found some @JoeJulian advice from 2014 that confirms deleting the rogue copy should fix it.
16:46 RameshN joined #gluster
16:50 JoeJulian I love it when I can help people before I've even gotten out of bed in the morning. :)
16:50 gbox What happens in each brick's .glusterfs directory to resolve gfid mismatch when the file is deleted on one brick?
16:50 JoeJulian If the file is deleted, its ,,(extended attributes) are, too. This deletes the gfid assignment for that filename.
16:50 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://pl.atyp.us/hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
16:50 gbox JoeJulian:  It is kind of a super power
16:51 hchiramm joined #gluster
16:53 gbox But there isn't a hardlink in .glusterfs matching the new gfid
16:55 gbox OK now there is
16:59 gbox Does a file only have trusted.afr.<volume>-client-# if there are unresolved changes?
17:01 haomaiwa_ joined #gluster
17:07 spalai joined #gluster
17:18 gbox Is there a way to test for those input/output errors?  stat?
17:28 pjrebollo joined #gluster
17:29 gbox So "gluster volume heal <volume> info" gives the gfid mismatches.  Can I just delete those in .glusterfs on the bad brick?  Then I can find orphaned (hardlinks==1) files in the directories and delete those.
17:31 calavera_ joined #gluster
17:32 karnan joined #gluster
17:33 pjrebollo joined #gluster
17:35 Manikandan joined #gluster
17:44 shubhendu joined #gluster
17:45 tessier peer probe: failed: Probe returned with Transport endpoint is not connected
17:45 tessier Looks like I still have work to do in unhosting this cluster.
17:46 tessier Still have some volumes to manually restore too but I know what to do there, it's just tedious.
17:49 JoeJulian _gbox: I don't think that's what you want. You need to delete the files *not* in .glusterfs whose gfid doesn't match that of the replica. The cleanup of links==1 files will happen in .glusterfs/[0-9a-f][0-9a-f]/
17:50 gbox joined #gluster
17:50 JoeJulian How to fix that with 600k files? Not sure. If it was me, I'd evaluate the value of punting and wiping the bad brick entirely.
17:52 JoeJulian tessier: Your issue gives me another chance to, once again, recommend to the world that they stop putting /var/log on the root partition. Make a partition for it so if it gets full it doesn't break anything.
17:54 _gbox JoeJulian:  Thanks I'll consider it but this seems like a not uncommon gluster problem so there needs to be a better way.
17:54 JoeJulian And if it's too hard to repartition (aws or whatever), just create a loop image and mount that.
17:55 JoeJulian Runaway logs happen with all sorts of software.
17:58 coredump joined #gluster
17:58 _gbox Would "gluster volume heal <volume> info" also list the gfid of the erroneous files on the bad brick?  Or does gluster not see them and that's why they don't have a trusted.afr.<volume>-client-# xattr?
17:59 _gbox Actually the trusted.afr.<volume>-client-# xattr on the good brick has the client # of the bad brick.
17:59 tessier JoeJulian: I agree completely. In my case, /var/log isn't on root. /var is a separate partition which just happened to fill with logs due to other gluster issues generating lots of logs.
17:59 JoeJulian heal info only shows the entries in .glusterfs/indices/*
17:59 calavera joined #gluster
18:00 JoeJulian hehe, yeah, /var is application state. If that fills you have the potential to lose application state.
18:01 haomaiwang joined #gluster
18:02 JoeJulian If you're running a modern distro, you can up the diagnostics.{brick,client}-log-level to critical and lower the diagnostics.*-sys-log-level to INFO. Then journald will ensure the logs don't overflow.
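
A sketch of the settings JoeJulian describes, with <volname> as a placeholder (the option names are stock GlusterFS volume options):

    gluster volume set <volname> diagnostics.brick-log-level CRITICAL
    gluster volume set <volname> diagnostics.client-log-level CRITICAL
    gluster volume set <volname> diagnostics.brick-sys-log-level INFO
    gluster volume set <volname> diagnostics.client-sys-log-level INFO
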
18:03 ovaistariq joined #gluster
18:06 portante JoeJulian: fyi, I think systemd will start issuing fsync's if the log level is high enough
18:06 portante I am going to guess at EMERG (was named PANIC), but it might be at CRIT or ALERT
18:06 portante just to be aware
18:06 JoeJulian Makes some sense.
18:08 atalur joined #gluster
18:08 _gbox JoeJulian:  By .glusterfs/indices/* you mean the normal .glusterfs/01/23/0123456... gfid indices?
18:09 JoeJulian Nope, the directory named "indices"
18:10 JoeJulian It used to only be "indices/xattrop" but I see there's a new "dirty" directory there, too. Not sure what that one does yet.
18:11 _gbox JoeJulian:   My indices dir does not have a dirty subdir
18:12 _gbox For gfid mismatches though wouldn't both copies of a file (with different gfid) show up in the heal list?
18:13 JoeJulian No, that list is created when a change is done to a file. The entry is created, xattrs are incremented, the fop is completed, the xattrs are decremented. If the xattrs are null, the entry is removed.
18:13 JoeJulian If the brick wasn't reachable, it would have no way of doing any of that.
18:14 _gbox But there is a copy of the file on the brick, with another gfid.
18:14 JoeJulian That part puzzles me.
18:14 _gbox Isn't that essentially what a gfid mismatch is?
18:15 JoeJulian No, check the extended attributes. trusted.gfid specifically.
18:16 _gbox Wow my 'good' brick has 620262 files in .glusterfs/indices/xattrop
18:16 JoeJulian So since the good brick gfid doesn't match the bad brick, any gfid entries in .glusterfs/indices/xattrop probably won't even exist on the bad brick.
18:17 JoeJulian You would have to resolve the gfid on the good brick to filenames and operate on those filenames on the bad brick.
18:18 JoeJulian With 620k files, that'll take a bit of time.
18:18 _gbox Got it, and those are the gfid that show up in "gluster volume heal"
18:18 JoeJulian Right
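
To confirm a gfid mismatch on a single file, compare trusted.gfid on the two copies as per glusterbot's factoid above; a minimal sketch with placeholder brick paths:

    getfattr -n trusted.gfid -e hex /bricks/good/path/to/file
    getfattr -n trusted.gfid -e hex /bricks/bad/path/to/file
    # different values confirm a gfid mismatch; identical values mean an
    # ordinary pending-heal entry rather than a gfid split
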
18:19 _gbox The hashing function should have an inverse function although that's not usually the goal with hashing
18:19 ivan_rossi left #gluster
18:20 JoeJulian Yeah, hashing is lossy and can't be reversed.
18:20 _gbox Thanks again Joe
18:21 JoeJulian It's like facial recognition software picks points on the face and measures the distance between them. With those measurements, you can fairly accurately determine identity, but you couldn't draw the original face from those measurements.
18:26 JoeJulian gbox: /usr/lib/glusterfs/glusterfs/gfind_missing_files/gfid_to_path.py
18:27 JoeJulian @forget gfid resolver
18:27 glusterbot JoeJulian: The operation succeeded.
18:27 JoeJulian @learn gfid resolver as Use /usr/lib/glusterfs/glusterfs/gfind_missing_files/gfid_to_path.py to resolve a gfid to a filename.
18:27 glusterbot JoeJulian: The operation succeeded.
18:29 JoeJulian Can someone with a centos box see if that's an accurate path? I suspect it'll be in /usr/lib64 instead.
18:46 dbruhn joined #gluster
18:47 coredump|br joined #gluster
18:51 ahino joined #gluster
19:01 7YUAAKM95 joined #gluster
19:07 mhulsman joined #gluster
19:19 atalur joined #gluster
19:23 gbox On Centos it's in /usr/libexec/glusterfs/gfind_missing_files/gfid_to_path.py
19:24 JoeJulian @forget gfid resolver
19:24 glusterbot JoeJulian: The operation succeeded.
19:24 JoeJulian @learn gfid resolver as Use /usr/lib*/glusterfs/glusterfs/gfind_missing_files/gfid_to_path.py to resolve a gfid to a filename.
19:24 glusterbot JoeJulian: The operation succeeded.
19:24 JoeJulian Thanks
19:24 gbox Thank you
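
If the resolver script is unavailable, a gfid belonging to a regular file can also be mapped back to its path by following the hard link under .glusterfs (directories appear there as symlinks, so this does not cover them); a minimal sketch, with BRICK and GFID as placeholders:

    BRICK=/bricks/good                              # placeholder brick root
    GFID=01234567-89ab-cdef-0123-456789abcdef       # placeholder gfid
    GLINK="$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
    find "$BRICK" -path "$BRICK/.glusterfs" -prune -o -samefile "$GLINK" -print
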
19:29 gbox What does it mean when a gfid entry has a <gfid:...>/# at the end?
19:30 gbox In 'gluster volume heal' output
19:31 chirino joined #gluster
19:35 post-factum JoeJulian: arbiter test for 256b inode is also ready
19:36 post-factum JoeJulian: inode count remains constant despite the inode size changing
19:36 post-factum but with 256b the arbiter takes half the space it does with 1k
19:37 JoeJulian Cool
19:39 JoeJulian I wonder why it's still 530 bytes. If the xattrs exceed the size of a single inode, it's just supposed to use two, but then your inode usage should change. Odd.
19:41 post-factum then, it doesn't exceed, eh?
19:47 JoeJulian But then why isn't it 256 bytes / inode?
19:47 JoeJulian Are the files all 0 size?
19:48 post-factum nope
19:48 post-factum random number between 1 and 3768
19:48 post-factum *32768
19:49 post-factum on arbiter, surely, they have zero size
19:57 spalai joined #gluster
20:01 jeroen__ joined #gluster
20:01 haomaiwa_ joined #gluster
20:04 calavera joined #gluster
20:13 JoeJulian Yeah, arbiter was what I was asking about.
20:14 deniszh joined #gluster
20:15 calavera joined #gluster
20:18 post-factum JoeJulian: dunno :/
20:18 post-factum any ideas are welcome
20:30 deniszh joined #gluster
20:32 farhorizon joined #gluster
20:32 pjreboll_ joined #gluster
20:39 farhorizon joined #gluster
20:46 spalai left #gluster
20:46 farhoriz_ joined #gluster
21:01 haomaiwa_ joined #gluster
21:02 amye joined #gluster
21:13 kovsheni_ joined #gluster
21:30 fcami joined #gluster
21:33 squizzi joined #gluster
21:38 calavera joined #gluster
21:47 klfwip joined #gluster
21:49 klfwip I've got a linux cluster using NFS and ZFS that I will probably be scaling up with gluster soon and there's something I'm not certain about. Would there be much difference between running two ZFS pools (on two servers) each hosting one gluster brick and creating a 2-way replicated gluster volume with these two servers, vs create a 4-way replicated gluster volume on the bare disks?
21:51 klfwip More generally: Do you think that local raid / ZFS pools make any sense in any kind of gluster cluster, or would you just let gluster manage all your disks directly?
22:01 haomaiwa_ joined #gluster
22:04 JoeJulian I gave up on zfs as just extra overhead that didn't get me anything of any value. For production, I prefer raid-0 (enough to max out my sas controller) bricks and replica 3. More than replica 3 and you're just wasting money, imho.
22:05 JoeJulian @lucky the do's and don'ts of replication
22:05 glusterbot JoeJulian: http://replicateministries.org/2013/10/30/dos-and-donts-of-family-discipleship/
22:05 JoeJulian Guess I wasn't lucky.
22:05 misc ah ah
22:05 JoeJulian https://joejulian.name/blog/glusterfs-replication-dos-and-donts/
22:05 glusterbot Title: GlusterFS replication do's and don'ts (at joejulian.name)
22:36 post-factum replica 4 is very expensive
22:36 post-factum tried it already
22:37 post-factum with 2 DC it is better to use raid-1 pairs
22:37 post-factum and replica 2
22:37 karl_ joined #gluster
22:38 JoeJulian Yeah, like any advice, it depends on the use case.
22:43 post-factum sure
22:45 farhorizon joined #gluster
22:53 post-factum summary of arbiter brick size estimation: https://gist.github.com/e8265ca07f7b19f30bb3
22:53 glusterbot Title: glfs_arbiter.md · GitHub (at gist.github.com)
23:01 johnmilton joined #gluster
23:01 haomaiwa_ joined #gluster
23:15 scubacuda joined #gluster
23:25 farhorizon joined #gluster
23:28 shyam left #gluster
23:41 hackman joined #gluster
23:49 johnmilton joined #gluster
