
IRC log for #gluster, 2017-04-26


All times shown according to UTC.

Time Nick Message
00:03 cmdpancakes joined #gluster
00:05 cmdpancakes what is the easiest way to have multiple glusterd services serve independent volumes from different IPs on the same server?
00:06 cmdpancakes i have different DNS names and IPs pointing to this host but the same UUID is served obviously and mounting on the other IP fails
00:10 cmdpancakes i assume i'd have to start another glusterfsd process to only bind on the first interface, but i'm not sure how to not muddy the waters with the configs to just produce a separate uuid
00:19 cmdpancakes it seems i might be able to replicate the glusterd.vol file with a different working directory and start from there
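A rough sketch of what cmdpancakes is describing — a second glusterd instance with its own volfile, working directory and bind address. Untested; the file names, paths and IP below are placeholders, and the flags assume glusterd accepts the usual glusterfsd-style options:

    cp /etc/glusterfs/glusterd.vol /etc/glusterfs/glusterd2.vol
    # in glusterd2.vol, inside the "volume management" block, change:
    #   option working-directory /var/lib/glusterd2
    #   option transport.socket.bind-address 192.0.2.20   # the second IP
    glusterd -f /etc/glusterfs/glusterd2.vol \
             -p /var/run/glusterd2.pid \
             -l /var/log/glusterfs/glusterd2.log

With a separate working directory the second instance generates its own UUID, and each daemon can bind port 24007 on its own address.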
00:24 moneylotion joined #gluster
00:29 ahino joined #gluster
00:45 armin joined #gluster
00:45 moneylotion joined #gluster
00:46 arif-ali joined #gluster
00:57 zerick joined #gluster
00:57 wushudoin joined #gluster
01:01 JoeJulian joined #gluster
01:02 jesk joined #gluster
01:02 ItsMe` joined #gluster
01:03 daMaestro joined #gluster
01:13 _KaszpiR__ joined #gluster
01:17 shdeng joined #gluster
01:25 tdasilva joined #gluster
01:26 moneylotion joined #gluster
01:29 amarts joined #gluster
01:31 tz-zeejay joined #gluster
01:31 tz-zeejay left #gluster
01:35 Philambdo joined #gluster
01:47 derjohn_mob joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:27 ankitr joined #gluster
02:49 snixor joined #gluster
02:57 kraynor5b joined #gluster
02:59 P0w3r3d joined #gluster
03:01 oajs_ joined #gluster
03:03 prasanth joined #gluster
03:05 Gambit15 joined #gluster
03:19 kramdoss_ joined #gluster
03:23 Philambdo joined #gluster
03:27 nbalacha joined #gluster
03:42 itisravi joined #gluster
03:44 magrawal joined #gluster
03:47 atinm joined #gluster
03:49 riyas joined #gluster
03:58 scobanx joined #gluster
04:01 gem joined #gluster
04:02 ashiq joined #gluster
04:03 vbellur joined #gluster
04:23 vbellur joined #gluster
04:26 Shu6h3ndu joined #gluster
04:27 amarts joined #gluster
04:28 ankitr joined #gluster
04:36 dominicpg joined #gluster
04:39 vbellur joined #gluster
04:39 buvanesh_kumar joined #gluster
05:02 gyadav joined #gluster
05:08 skoduri joined #gluster
05:11 ndarshan joined #gluster
05:14 apandey joined #gluster
05:14 kdhananjay joined #gluster
05:15 skumar joined #gluster
05:16 ankitr joined #gluster
05:24 armyriad joined #gluster
05:25 kramdoss_ joined #gluster
05:33 poornima_ joined #gluster
05:34 DV_ joined #gluster
05:41 hgowtham joined #gluster
05:42 DV__ joined #gluster
05:43 ppai joined #gluster
05:44 apandey_ joined #gluster
05:45 kramdoss_ joined #gluster
05:46 karthik_us joined #gluster
05:47 jiffin joined #gluster
05:48 apandey joined #gluster
05:55 msvbhat joined #gluster
05:59 prasanth joined #gluster
05:59 Karan joined #gluster
06:04 Guest99109 joined #gluster
06:04 apandey joined #gluster
06:06 Saravanakmr joined #gluster
06:09 gem joined #gluster
06:18 rastar joined #gluster
06:19 rwheeler joined #gluster
06:19 rafi joined #gluster
06:20 sanoj joined #gluster
06:21 sona joined #gluster
06:23 nbalacha joined #gluster
06:24 sbulage joined #gluster
06:25 derjohn_mob joined #gluster
06:30 ayaz joined #gluster
06:34 oajs_ joined #gluster
06:40 kramdoss_ joined #gluster
06:41 karthik_us joined #gluster
06:43 aravindavk joined #gluster
06:51 vbellur joined #gluster
06:51 Saravanakmr joined #gluster
06:57 vbellur joined #gluster
06:58 ivan_rossi joined #gluster
07:01 gyadav_ joined #gluster
07:01 rastar joined #gluster
07:03 amarts joined #gluster
07:06 vbellur joined #gluster
07:09 mbukatov joined #gluster
07:21 fsimonce joined #gluster
07:22 kotreshhr joined #gluster
07:25 nbalacha joined #gluster
07:26 vbellur joined #gluster
07:26 kramdoss_ joined #gluster
07:26 ankitr joined #gluster
07:31 Saravanakmr joined #gluster
07:33 itisravi joined #gluster
07:34 gyadav_ joined #gluster
07:37 gyadav__ joined #gluster
07:41 skoduri joined #gluster
07:42 _KaszpiR_ joined #gluster
07:49 nbalacha joined #gluster
08:00 Peppard joined #gluster
08:05 mb joined #gluster
08:07 vbellur joined #gluster
08:14 karthik_us joined #gluster
08:28 apandey joined #gluster
08:36 vbellur joined #gluster
08:38 susant joined #gluster
08:43 jesk left #gluster
08:48 flying joined #gluster
08:52 ankitr joined #gluster
08:52 sbulage joined #gluster
08:53 kdhananjay joined #gluster
08:54 kdhananjay joined #gluster
08:55 omega joined #gluster
08:57 omega12 Hey, anyone here? I've got a quick question regarding an attribute heal problem quite similar to this one http://lists.gluster.org/pipermail/gluster-users.old/2016-September/028416.html as in the heal daemon can't repair files due to the xattr not being set. My question is: how do i reproduce this problem? I tried many ways like killing the daemon during sync etc. but nothing seems to get my cluster into that state.
08:57 glusterbot Title: [Gluster-users] gluster 3.7 healing errors (no data available, buf->ia_gfid is null) (at lists.gluster.org)
08:57 omega12 my set up is also the same (2 node, full replica)
08:58 jiffin itisravi: ^^
09:01 jkroon joined #gluster
09:02 itisravi omega12: Does the file in question not have the trusted.gfid xattr?
09:06 omega12 itisravi: that's part of the problem, i do not have direct access to the system, only the log files. I am assuming it does not though, since that's the only way this can occur apparently.
09:06 jiffin1 joined #gluster
09:06 omega12 (according to other posts i found when researching this)
09:12 itisravi omega12:  well one way to reproduce would be to put a breakpoint in posix_create/posix_mknod, step through the function and kill the brick process after the file creation is done but before setting the gfid xattr etc. happens.
09:12 itisravi using gdb, that is.
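The gdb route itisravi mentions would look roughly like this (the brick PID comes from gluster volume status; posix_create is the function named above, and nothing here is verified against a specific release):

    gluster volume status VOL            # note the PID of the brick's glusterfsd
    gdb -p <brick-pid>
    (gdb) break posix_create
    (gdb) continue
    # from a client mount, create a new file on the volume; when the breakpoint
    # hits, step past the on-disk create but stop before the gfid xattr is set, then:
    (gdb) kill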
09:14 omega12 itisravi: ah alright, that makes sense. would it also work to create a file, let it sync, kill a node and then remove the xattr manually on one node and restart? or is that going to produce a different state? my gdb-fu isn't that good
09:14 itisravi omega12: that would work too I guess :)
09:15 omega12 itisravi: alright, I'll try that first, thanks
09:15 itisravi setfattr -x is much easier than gdb.
09:15 itisravi cool.
09:17 itisravi omega12:  but to exercise self-heal, you would need to write to the file (i.e. modify it) after you kill one replica, then remove xattr + restart etc.
09:23 sbulage joined #gluster
09:29 _KaszpiR_ joined #gluster
09:34 apandey joined #gluster
09:35 jiffin1 joined #gluster
09:43 kdhananjay joined #gluster
09:47 buvanesh_kumar joined #gluster
09:50 nbalacha joined #gluster
09:52 MrAbaddon joined #gluster
09:55 vbellur joined #gluster
10:01 omega12 itisravi: i managed to reproduce it! did it like this: kill first node, mount on second node, write to file, remove the attribute, restart first node (so basically just like you summed it up)
10:02 itisravi omega12: okay.
10:03 msvbhat joined #gluster
10:04 omega12 itisravi: also it only seems to work when creating the file, just modifying didn't do the trick for me
10:04 plarsen joined #gluster
10:05 itisravi But in order for you to confirm that the brick crash was indeed the actual cause for the missing gfid, you would need to check the brick log and see if it went down.
10:10 vbellur joined #gluster
10:12 omega12 itisravi: well the problem was that the heal failed messages were becoming so frequent on the bad cluster that the logging caused problems on the system. now i have the same messages reoccurring as it attempts to heal the file, and heal info says that it is having problems healing that gfid entry, so all is "well"
10:12 nishanth joined #gluster
10:13 om2 joined #gluster
10:15 itisravi omega12:  sorry I do not follow... Are you saying all is well because you managed to get the same kind of logs with your reproducer?
10:16 omega12 itisravi: yeah, i'm just trying to create a routine to fix this kind of problem for files that are in conflict
10:16 omega12 anyway thanks for your help :)
10:17 itisravi omega12:  ok, but if you for some reason are able to hit a 'missing gfid xattr on a file' without manually deleting it or killing a brick, please file a bug.
10:17 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
10:18 omega12 itisravi: will do
10:18 itisravi omega12: great :)
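omega12's reproducer from 10:01, spelled out as commands on a 2-node replica (volume name, mount point and brick paths are placeholders):

    # on node1: kill its brick process
    gluster volume status VOL            # note node1's brick PID
    kill <node1-brick-pid>
    # on a client mounted via node2: write to a file so it needs healing
    echo changed > /mnt/VOL/testfile
    # on node2, directly on the brick: strip the gfid xattr
    setfattr -x trusted.gfid /bricks/brick1/testfile
    # bring node1's brick back
    gluster volume start VOL force
    # the self-heal daemon should now log the failures; compare with
    gluster volume heal VOL info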
10:19 Wizek_ joined #gluster
10:23 nbalacha joined #gluster
10:25 skoduri joined #gluster
10:39 amarts joined #gluster
10:43 vbellur joined #gluster
10:46 caitnop joined #gluster
10:48 hybrid512 joined #gluster
10:57 plarsen joined #gluster
10:59 skoduri joined #gluster
10:59 vbellur joined #gluster
11:19 skumar joined #gluster
11:26 XpineX joined #gluster
11:29 ankitr joined #gluster
11:32 _Bryan_ joined #gluster
11:33 XpineX joined #gluster
11:35 kotreshhr left #gluster
11:37 msvbhat joined #gluster
12:00 MikiL joined #gluster
12:02 arpu joined #gluster
12:05 MikiL Hello, I have two servers with a gluster volume working in Replicate mode. Yesterday I installed an Apache server and put a Joomla web page on it. Unfortunately, gluster is working very, very, very slowly with Joomla. I don't know if there is any special option I could activate on the volume to do some fine tuning for Joomla web pages on gluster. Please help!
12:05 Shu6h3ndu joined #gluster
12:14 pdrakeweb joined #gluster
12:15 gestahlt joined #gluster
12:15 gestahlt Hi
12:15 glusterbot gestahlt: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer a
12:16 gestahlt I made a small mistake and would like to fix it: I created a volume without having the brick on my 2nd node. Now i cannot delete the volume or recreate it. What should i do?
12:17 gestahlt I always get the error: staging on node2 failed, volume foo does not exist
12:18 gestahlt when i create its the other way around: volume foo exists
12:23 brayo joined #gluster
12:25 ira_ joined #gluster
12:26 hgowtham gestahlt, Without the brick the second node won't be a part of the volume. you should have used only the other nodes and staging shouldn't fail. can you send the output of "gluster volume info" ?
12:32 gestahlt https://pastebin.com/iASig7N8
12:32 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
12:32 gestahlt i hate the glusterbot
12:32 vbellur joined #gluster
12:33 steveeJ left #gluster
12:33 gestahlt https://paste.fedoraproject.org/paste/TeZoUHIbhjxQj9vlFSpwF15M1UNdIGYhyRLivL9gydE=
12:33 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
12:37 hgowtham gestahlt, you have only one node as of now and you need your second node with one more brick as a part of this volume?
12:39 gestahlt hgowtham: I am also fine with just getting rid of that volume
12:39 hgowtham if that is what you need you can add a brick to the existing volume with the add-brick command, or delete the volume and create a new one, which isn't a good idea
12:39 hgowtham you haven't started the volume, so it's better to add a brick from the second node
12:40 hgowtham before that make sure you have the second node peer probed. "gluster peer probe <ip_of_second_node>"
12:43 hgowtham and then do a "gluster volume add-brick mnet_generic <brick on the second node>"
12:43 hgowtham gestahlt, that should help
12:44 gestahlt did not work: volume add-brick: failed: Staging failed
12:45 hgowtham gestahlt, need your gluster logs and the command you actually used
12:46 gestahlt hgowtham: Im currently checking the logs.. i have no idea where it writes what went wrong. I was checking cli.log and etc-glusterfs-glusterd.log (which just repeats the message)
12:47 gestahlt and glustershd
12:47 gestahlt nothing about the reason why it failed
12:47 hgowtham check for the glusterd log
12:47 hgowtham can you copy paste the command you used?
12:49 gestahlt just the one you stated
12:49 gestahlt gluster volume add-brick mnet_generic node2:/srv/mnet_generic/data
12:51 hgowtham okie. can you share the glusterd.log with me?
12:51 gestahlt I only found this:
12:51 gestahlt https://paste.fedoraproject.org/paste/GEY2ywqVCaGzdfRRB7ZA-15M1UNdIGYhyRLivL9gydE=
12:51 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
12:52 * hgowtham checking
12:53 hgowtham gestahlt, it says it is unable to find the volume "mnet_generic"
12:54 gestahlt exactly
12:54 hgowtham can you paste the whole glusterd.log and the "gluster peer status" output?
12:54 gestahlt i have like 5 other volumes already in prod and working
12:54 gestahlt That was the whole glusterd
12:55 hgowtham oh okie. so you are giving all these commands from the first node? I feel that the node you probed doesnt have the mnet_generic volume in it
12:56 gestahlt https://paste.fedoraproject.org/paste/fOFLEDdnVQBb0c1KEROBJF5M1UNdIGYhyRLivL9gydE=
12:56 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
12:56 gestahlt Yeah, i think that with the volume is messed. The 2nd node does not have it at all
12:56 gestahlt i gave it from the first node
12:57 gem joined #gluster
12:57 gestahlt when i try to do it from the 2nd node then i get this
12:57 hgowtham if the volume is messed up and deleting the volume is not an issue you can go ahead with that
12:57 gestahlt volume add-brick mnet_generic 192.168.112.20:/srv/mnet_generic/data
12:57 gestahlt volume add-brick: failed: Volume mnet_generic does not exist
12:58 gestahlt I cannot delete it
12:58 gestahlt volume delete mnet_generic does not work
12:58 hgowtham what does the gluster v info say on the second node?
12:58 shyam joined #gluster
12:58 gestahlt Volume mnet_generic does not exist
12:59 gestahlt and this is now new: volume create: mnet_generic: failed: /srv/mnet_generic/data is already part of a volume
12:59 hgowtham gluster v info on the second node can't throw the output "Volume mnet_generic does not exist"
12:59 gestahlt when i try to create it from the 2nd node (again)
12:59 gestahlt It does
13:01 hgowtham /srv/mnet_generic/data this is your first brick. you cant use it on an add-brick again
13:01 hgowtham you need to specify a new brick
13:01 hgowtham else you will face this output "volume create: mnet_generic: failed: /srv/mnet_generic/data is already part of a volume"
13:03 gestahlt im a bit confused
13:03 gestahlt on the 2nd node i create just the same directory with same size
13:03 gestahlt to add it
13:03 hgowtham oh okie
13:04 hgowtham and this directory is not used by anything else right?
13:04 gestahlt exactly
13:07 tjelinek1 joined #gluster
13:08 hgowtham gestahlt, so the "gluster v status" on the second node does it show this mnet_generic volume?
13:08 gestahlt Nope
13:08 hgowtham it should show it as it has been probed and it has become a single cluster
13:08 pdrakeweb joined #gluster
13:09 gestahlt It says: Volume mnet_generic does not exist
13:09 gestahlt i f***ed it up badly somehow
13:09 hgowtham so there is some issue with the peer probe. but from the output of your peer status from first node, it shows the second node is connected
13:09 gestahlt yeah, the other volumes just work fine
13:10 gestahlt i have just messed up this one somehow (i assume because i forgot to add the second brick as param while creating)
13:10 hgowtham can you check what node2 stands for in the first node?
13:10 gestahlt ?
13:10 hgowtham not adding it as a param during creation is not an issue
13:10 gestahlt But thats how it went wrong
13:10 gestahlt as said, the other volumes i have just work fine
13:11 gestahlt for months? even years?
13:11 hgowtham yes. it wont cause any issue
13:11 hgowtham the whole purpose of add-brick command is to expand the volume when need arises
13:11 gestahlt somehow it did in my case.. or i didn't have the directory on the other node when i created the volume
13:12 gestahlt creation was volume create mnet_generic node1:/srv/mnet_generic/data
13:12 gestahlt nothing passed
13:12 hgowtham oh that explains it
13:13 hgowtham creation wasnt successful?
13:13 hgowtham hmm then its better to delete the volume
13:13 Klas which seems to be refused, according to earlier comments
13:14 gestahlt it was "partly" refused
13:14 gestahlt it said "failed on node2" and i was like "ah damn!"
13:14 gestahlt Then i wanted to delete it with volume delete and then i couldnt delete it because it didnt exist on node2
13:15 gestahlt and then after google and fails i ended up here :)
13:17 vbellur joined #gluster
13:19 hgowtham gestahlt, when the staging fails gluster can create the volume as you said
13:20 hgowtham but for you something else has happened and you have come to this situation
13:27 ankitr joined #gluster
13:28 hgowtham deleting the /var/lib/glusterd/vols/ directory and restarting on both the nodes will help you get out of this situation.
13:28 gestahlt ok
13:28 gestahlt then i have to make a note and waiit until i get downtime...
13:29 hgowtham gestahlt, this directory followed by your volume name
13:29 hgowtham /var/lib/glusterd/vols/mnet_generic alone
13:29 hgowtham don't delete the whole /var/lib/glusterd/vols/ if it has other volumes in it. that will wipe them off too
13:30 gestahlt yeah i imagined so
13:31 hgowtham hope it helped.
13:34 gestahlt Yeah thanks!
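hgowtham's cleanup, as commands. Run on both nodes during a maintenance window; only the stale volume's directory is removed, so the other volumes are untouched (the service name may be glusterfs-server on Debian-based distributions):

    systemctl stop glusterd
    rm -rf /var/lib/glusterd/vols/mnet_generic
    systemctl start glusterd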
13:36 jockek hi, long time lurker, and a while since I tested/used gluster -- how's the ipv6 support going along? (i.e. does it support IPv6-only environments yet?)
13:49 alezzandro joined #gluster
13:49 jiffin joined #gluster
13:49 buvanesh_kumar joined #gluster
13:53 Saravanakmr joined #gluster
13:56 atinm jockek, we should get it supported soon! We have a WIP patch https://review.gluster.org/#/c/16228/ which will get into gluster 3.12 release
13:56 glusterbot Title: Gerrit Code Review (at review.gluster.org)
13:59 jockek atinm: ah, nice!
14:01 shyam joined #gluster
14:14 pdrakeweb joined #gluster
14:15 skylar joined #gluster
14:17 KoSoVaR gut reactions to raid6 erasure coding vs. jbod erasure?
14:25 ankitr joined #gluster
14:26 pdrakeweb joined #gluster
14:31 tz-zeejay joined #gluster
14:37 ahino joined #gluster
14:43 vbellur joined #gluster
14:49 cornfed78 joined #gluster
14:50 Karan joined #gluster
14:52 farhorizon joined #gluster
14:58 ivan_rossi left #gluster
15:00 kshlm Community meeting is starting now in #gluster-meeting
15:05 Shu6h3ndu joined #gluster
15:10 pdrakeweb joined #gluster
15:18 nbalacha joined #gluster
15:21 oajs_ joined #gluster
15:27 kramdoss_ joined #gluster
15:31 JoeJulian left #gluster
15:31 JoeJulian joined #gluster
15:31 gem joined #gluster
15:40 nbalacha joined #gluster
15:48 farhorizon joined #gluster
15:51 pdrakeweb joined #gluster
15:56 MrAbaddon joined #gluster
15:59 guhcampos joined #gluster
16:03 MrAbaddon joined #gluster
16:04 cloph_away joined #gluster
16:19 Karan joined #gluster
16:29 pdrakeweb joined #gluster
16:34 Saravanakmr joined #gluster
16:36 ankitr joined #gluster
16:53 pdrakeweb joined #gluster
16:56 pdrakeweb joined #gluster
16:57 gyadav__ joined #gluster
16:57 pdrakeweb joined #gluster
16:58 Vapez joined #gluster
17:22 sbulage joined #gluster
17:31 Karan joined #gluster
17:39 ksj joined #gluster
17:41 ksj is it possible to combine two distributed volumes into a replicated volume?
17:42 MrAbaddon joined #gluster
17:48 Asako joined #gluster
17:50 kpease joined #gluster
17:55 Asako is there a way to speed up gluster?  Running a simple ls -l takes over 2 seconds on my client node.
17:55 Asako There's only 632 files in the directory
17:58 v12aml joined #gluster
18:01 pdrakeweb joined #gluster
18:01 MrAbaddon joined #gluster
18:11 pdrakeweb joined #gluster
18:12 MrAbaddon joined #gluster
18:12 pdrakeweb joined #gluster
18:20 pdrakeweb joined #gluster
18:29 baber joined #gluster
18:36 pdrakeweb joined #gluster
18:37 farhorizon joined #gluster
18:53 JoeJulian Asako: Decrease the latency between your replicas?
18:54 derjohn_mob joined #gluster
19:19 pdrakeweb joined #gluster
19:20 jkroon joined #gluster
19:21 tz-zeejay Is there a reason why RAID6 backed bricks shouldn't be used for dispersed volumes? RedHat documentation seems to frown on it.
19:35 farhoriz_ joined #gluster
19:36 Asako JoeJulian: that's one possibility but we have gigabit interfaces
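Not suggested in the conversation above, but a commonly documented complement to lowering latency for stat-heavy workloads like ls -l is the md-cache/upcall tuning from the Gluster 3.9+ docs; option names should be double-checked against the installed version:

    gluster volume set VOL features.cache-invalidation on
    gluster volume set VOL features.cache-invalidation-timeout 600
    gluster volume set VOL performance.stat-prefetch on
    gluster volume set VOL performance.cache-invalidation on
    gluster volume set VOL performance.md-cache-timeout 600
    gluster volume set VOL network.inode-lru-limit 50000
    # 3.10 and later, helps readdir on distributed volumes:
    gluster volume set VOL performance.parallel-readdir on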
19:42 MrAbaddon joined #gluster
19:50 KoSoVaR joined #gluster
19:59 derjohn_mob joined #gluster
20:32 JoeJulian tz-zeejay: I would imagine they would recommend against it just because it's wasteful.
20:37 oajs_ joined #gluster
20:43 tz-zeejay JoeJulian: Wasteful in which sense?
20:46 tz-zeejay JoeJulian: A 2x replicated volume would yield less usable storage (regardless of what underlying hardware supported the bricks). Yet, replicated backed by RAID6 is recommended.
20:49 JoeJulian tz-zeejay: Myself, I use gluster for fault tolerance and I use raid for performance only.
20:52 Asako hmm
21:11 tz-zeejay JoeJulian: I see. I intend to build a 1.5PB gluster volume. My initial draft was a Dispersed volume (redundancy 2) with 10 Nodes, 1 brick per node (backed by RAID6). This creates a volume comprised of bricks that are unlikely to fail (as long as we maintain the raid). But, the volume can still tolerate 2 bricks/nodes going offline in the off chance of a system crash or RAID failure.
21:14 Wizek_ joined #gluster
21:15 tz-zeejay JoeJulian: Per the redhat documentation, raid with a replicated volume is common. But, they seem to recommend JBOD for dispersed. Trying to understand why that's the case.
21:16 tz-zeejay Alternatively, I can do a full-on JBOD configuration. That would result in quite a lot of bricks (24 per node - so potentially 240 total).
21:30 snehring tz-zeejay: completely anecdotal but we had a lot of trouble with a jbod based volume
21:30 snehring more bricks per node (36) than you, but performance was pretty bad
21:31 snehring ended up falling back to a single brick per node
21:31 tz-zeejay snehring: I see, and that provided better performance?
21:31 snehring very much so
21:32 tz-zeejay okay. Cool thanks. Out of curiosity, what type of volume do you use?
21:33 snehring 4+2 distributed dispersed
21:39 tz-zeejay Okay. So, each node has one brick. Presumably, each brick is backed by some kind of RAID? That would be similar to what I'm looking to do.
21:40 snehring yeah in our case it's ZFS
21:41 tz-zeejay I see. Cool. Do you know what expansion looks like with 4+2 distrib-dispersed? Would it require 6 additional nodes/bricks
21:41 tz-zeejay ?
21:45 snehring That's my understanding
21:49 tz-zeejay joined #gluster
21:50 tz-zeejay thanks snehring.
21:50 snehring np
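The 4+2 layout snehring describes and the expansion tz-zeejay asks about, roughly (hostnames and brick paths are placeholders):

    # create: 6 bricks form one 4+2 disperse set
    gluster volume create bigvol disperse-data 4 redundancy 2 \
        node{1..6}:/bricks/b1/brick
    # expand: bricks are added in multiples of the set size (6 here),
    # giving a distributed-dispersed volume with a second subvolume
    gluster volume add-brick bigvol node{1..6}:/bricks/b2/brick

The extra 6 bricks can live on the existing nodes rather than on 6 new ones; what matters is adding a whole multiple of the disperse set.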
21:57 oajs_ joined #gluster
22:07 marlinc joined #gluster
22:12 derjohn_mob joined #gluster
22:15 kraynor5b1 joined #gluster
22:29 MrAbaddon joined #gluster
22:38 JoeJulian I'm back (meeting).
22:39 JoeJulian tz-zeejay, snehring: I, too, have done 36 bricks per server and was consistently able to saturate the network; performance met my personal metric.
22:42 JoeJulian tz-zeejay: my volumes have always been distributed replica 3. One volume I inherited was replica 2 on raid 6. Because of a hardware issue, it was not uncommon to lose a raid6 volume during rebuild <grumble> leaving me very uncomfortable until everything was fully recovered.
22:44 JoeJulian I prefer to avoid super-large bricks because MTTR for them is significantly longer than smaller bricks.
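For comparison, the distributed replica 3 shape JoeJulian describes, with several smaller bricks per server, would be created along these lines (placeholders again; consecutive groups of 3 bricks form the replica sets):

    gluster volume create vol replica 3 \
        server1:/bricks/b1/brick server2:/bricks/b1/brick server3:/bricks/b1/brick \
        server1:/bricks/b2/brick server2:/bricks/b2/brick server3:/bricks/b2/brick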
23:04 farhorizon joined #gluster
23:22 moneylotion joined #gluster
23:47 flying joined #gluster
23:58 shyam joined #gluster
