
IRC log for #gluster, 2016-03-22


All times are shown in UTC.

Time Nick Message
00:00 Doyle joined #gluster
00:07 geniusoftime any ganesha pros here? enabling of nfs-ganesha fails for me :(
00:29 alghost joined #gluster
00:31 calavera joined #gluster
00:43 plarsen joined #gluster
00:56 renout_away joined #gluster
00:56 nathwill joined #gluster
00:56 juhaj joined #gluster
01:06 anmol joined #gluster
01:20 XpineX joined #gluster
01:20 johnmilton joined #gluster
01:22 Lee1092 joined #gluster
01:28 EinstCrazy joined #gluster
01:34 skoduri joined #gluster
01:41 EinstCra_ joined #gluster
01:42 EinstCr__ joined #gluster
01:42 XpineX joined #gluster
01:43 baojg joined #gluster
01:58 XpineX joined #gluster
02:07 luizcpg_ joined #gluster
02:11 XpineX joined #gluster
02:22 dlambrig_ joined #gluster
02:26 harish joined #gluster
02:28 harish joined #gluster
02:55 dlambrig_ joined #gluster
03:00 kshlm joined #gluster
03:01 deniszh joined #gluster
03:06 XpineX joined #gluster
03:17 7F1AAKCMS joined #gluster
03:29 jiffin joined #gluster
03:30 haomaiwa_ joined #gluster
03:34 nbalacha joined #gluster
03:37 poornimag joined #gluster
03:38 overclk joined #gluster
03:40 atalur joined #gluster
03:44 DV joined #gluster
03:55 shubhendu joined #gluster
03:58 kkeithley1 joined #gluster
03:59 anmol joined #gluster
04:00 DV__ joined #gluster
04:00 itisravi joined #gluster
04:01 haomaiwa_ joined #gluster
04:02 dlambrig_ joined #gluster
04:04 nishanth joined #gluster
04:11 gem joined #gluster
04:12 RameshN joined #gluster
04:13 kanagaraj joined #gluster
04:15 sakshi joined #gluster
04:18 calavera joined #gluster
04:18 atinm joined #gluster
04:20 dlambrig_ joined #gluster
04:23 tessier I think I'm going to have to wipe this entire gluster cluster and start over. :|
04:23 tessier Been messing with it for weeks and it just won't behave.
04:24 tessier But that means I have to find some other hardware to migrate onto.
04:25 ashiq_ joined #gluster
04:26 kshlm joined #gluster
04:30 jiffin joined #gluster
04:34 gem joined #gluster
04:42 nehar joined #gluster
04:43 tessier Just found and corrected a few more errors.
04:45 XpineX joined #gluster
04:54 Guest86611 joined #gluster
04:55 prasanth joined #gluster
04:58 kdhananjay joined #gluster
05:00 JesperA_ joined #gluster
05:01 haomaiwa_ joined #gluster
05:02 tessier Wow...I think I may have finally fixed everything!
05:05 gowtham joined #gluster
05:07 XpineX joined #gluster
05:08 hgowtham joined #gluster
05:09 ndarshan joined #gluster
05:10 aravindavk joined #gluster
05:15 Manikandan joined #gluster
05:18 Bhaskarakiran joined #gluster
05:21 XpineX joined #gluster
05:22 ramteid joined #gluster
05:22 skoduri joined #gluster
05:23 atalur joined #gluster
05:23 mowntan joined #gluster
05:23 mowntan joined #gluster
05:26 Apeksha joined #gluster
05:30 rafi joined #gluster
05:34 karthik___ joined #gluster
05:36 kshlm tessier, Good to hear!
05:36 kshlm Did you face any other problems after the last time we spoke?
05:40 XpineX joined #gluster
05:44 ovaistariq joined #gluster
05:47 gem_ joined #gluster
05:50 atalur joined #gluster
05:53 spalai joined #gluster
05:54 tswartz joined #gluster
05:57 baojg joined #gluster
05:57 skoduri joined #gluster
05:58 gem joined #gluster
05:59 XpineX joined #gluster
06:01 haomaiwa_ joined #gluster
06:02 ggarg joined #gluster
06:02 gem_ joined #gluster
06:03 ramky joined #gluster
06:04 hchiramm joined #gluster
06:04 rastar joined #gluster
06:04 DV joined #gluster
06:04 deniszh1 joined #gluster
06:06 pur joined #gluster
06:10 Saravanakmr joined #gluster
06:14 XpineX joined #gluster
06:18 atinm joined #gluster
06:20 karnan joined #gluster
06:26 Bhaskarakiran joined #gluster
06:28 liibert joined #gluster
06:32 dlambrig__ joined #gluster
06:32 social joined #gluster
06:32 spalai1 joined #gluster
06:33 spalai1 left #gluster
06:41 tessier joined #gluster
06:41 kshlm joined #gluster
06:43 vmallika joined #gluster
06:44 atalur joined #gluster
06:47 kdhananjay joined #gluster
06:47 nehar joined #gluster
06:58 gem_ joined #gluster
07:00 atinm joined #gluster
07:01 haomaiwa_ joined #gluster
07:04 arcolife joined #gluster
07:08 gem joined #gluster
07:08 anil joined #gluster
07:08 sakshi joined #gluster
07:09 spalai joined #gluster
07:13 mhulsman joined #gluster
07:31 Gnomethrower joined #gluster
07:36 rastar joined #gluster
07:37 Wizek joined #gluster
07:39 jtux joined #gluster
07:44 [Enrico] joined #gluster
08:01 haomaiwa_ joined #gluster
08:06 drankis joined #gluster
08:06 Wizek joined #gluster
08:07 jri joined #gluster
08:17 ivan_rossi joined #gluster
08:18 harish_ joined #gluster
08:22 nishanth joined #gluster
08:28 hchiramm joined #gluster
08:32 fsimonce joined #gluster
08:35 jiffin joined #gluster
08:39 ovaistariq joined #gluster
08:43 spalai joined #gluster
08:45 Manikandan joined #gluster
08:46 Bhaskarakiran joined #gluster
08:46 atinm joined #gluster
08:51 ahino joined #gluster
08:56 kshlm joined #gluster
08:58 ctria joined #gluster
08:58 arcolife joined #gluster
08:58 Manikandan joined #gluster
09:00 sakshi joined #gluster
09:01 64MAANEGE joined #gluster
09:03 atalur joined #gluster
09:06 farblue joined #gluster
09:08 kovshenin joined #gluster
09:16 Slashman joined #gluster
09:20 muneerse joined #gluster
09:26 atinm joined #gluster
09:31 dlambrig_ joined #gluster
09:32 rafi1 joined #gluster
09:38 Manikandan joined #gluster
09:43 hackman joined #gluster
09:47 ovaistariq joined #gluster
09:48 atalur joined #gluster
09:48 nishanth joined #gluster
09:49 ramky joined #gluster
09:49 nbalacha joined #gluster
09:50 itisravi joined #gluster
10:00 post-factum with 3.7.9 i get the following messages in brick log: http://termbin.com/1s8z
10:01 post-factum what is that?
10:01 64MAANEVR joined #gluster
10:01 post-factum it is distributed-replicated volume, 2×2
10:02 hgowtham joined #gluster
10:05 rafi joined #gluster
10:09 hchiramm joined #gluster
10:12 dlambrig_ joined #gluster
10:19 nbalacha joined #gluster
10:23 DV joined #gluster
10:33 kshlm joined #gluster
10:36 64MAANE4M joined #gluster
10:36 haomaiwa_ joined #gluster
10:46 kshlm joined #gluster
10:48 ovaistariq joined #gluster
10:51 nishanth joined #gluster
10:54 bluenemo joined #gluster
10:56 atinm joined #gluster
10:59 Javezim joined #gluster
10:59 Javezim Hey All
11:00 shyam joined #gluster
11:00 Javezim So we have this issue where running the split-brain command on our gluster cluster now returns 3000 results :/
11:00 Javezim Now obviously this is too many to go through and fix single-handedly
11:00 Javezim Are there any programs out there that can do it? Say, clear the files based on whether the timestamps differ, or anything like that?
11:00 luizcpg_ joined #gluster
11:06 dlambrig_ joined #gluster
11:15 DV joined #gluster
11:15 robb_nl joined #gluster
11:16 ramky joined #gluster
11:22 nishanth joined #gluster
11:23 kshlm joined #gluster
11:25 DV joined #gluster
11:34 Bhaskarakiran joined #gluster
11:35 Javezim joined #gluster
11:35 kdhananjay joined #gluster
11:36 Javezim So we have this issue where running the split-brain command on our gluster cluster now returns 3000 results :/
11:36 Javezim Now obviously this is too many to go through and fix single-handedly
11:36 Javezim Are there any programs out there that can do it? Say, clear the files based on whether the timestamps differ, or anything like that?
11:37 itisravi Javezim: what version of gluster are you using?
11:38 karnan joined #gluster
11:39 Javezim 3.7.8
11:40 alghost joined #gluster
11:42 Saravanakmr joined #gluster
11:43 Saravanakmr REMINDER: Gluster community bug triage meeting 20 minutes from now in #gluster-meeting
11:45 itisravi Javezim: you can use the split-brain resolution CLI. You can either choose the bigger file or a particular file as the source.
11:45 itisravi https://github.com/gluster/glusterfs-specs/blob/master/done/Features/heal-info-and-split-brain-resolution.md has examples.
11:45 glusterbot Title: glusterfs-specs/heal-info-and-split-brain-resolution.md at master · gluster/glusterfs-specs · GitHub (at github.com)
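For reference, the per-file commands described in that spec take either the bigger copy or a named brick as the heal source; roughly, with placeholder volume, brick, and file names:

    # list files currently in split-brain
    gluster volume heal VOLNAME info split-brain

    # resolve one file, keeping the larger copy
    gluster volume heal VOLNAME split-brain bigger-file /path/as/seen/from/volume/root

    # resolve one file, keeping the copy on a chosen brick
    gluster volume heal VOLNAME split-brain source-brick HOSTNAME:/brick/path /path/as/seen/from/volume/root

    # resolve every split-brain file in favour of one brick
    gluster volume heal VOLNAME split-brain source-brick HOSTNAME:/brick/path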
11:46 atalur joined #gluster
11:48 Bhaskarakiran_ joined #gluster
11:50 ovaistariq joined #gluster
11:53 plarsen joined #gluster
11:55 ramteid joined #gluster
11:59 johnmilton joined #gluster
12:00 Manikandan joined #gluster
12:01 nbalacha joined #gluster
12:03 EinstCrazy joined #gluster
12:06 kdhananjay joined #gluster
12:11 sakshi joined #gluster
12:17 atalur joined #gluster
12:18 nigelb joined #gluster
12:33 karnan joined #gluster
12:35 kdhananjay1 joined #gluster
12:38 DV joined #gluster
12:38 unclemarc joined #gluster
12:52 ovaistariq joined #gluster
12:56 plarsen joined #gluster
12:58 anmol joined #gluster
13:04 Javezim joined #gluster
13:04 nehar joined #gluster
13:06 dlambrig_ joined #gluster
13:07 sakshi joined #gluster
13:07 luizcpg joined #gluster
13:10 rwheeler joined #gluster
13:16 squizzi joined #gluster
13:18 dlambrig_ joined #gluster
13:21 nbalacha joined #gluster
13:24 spalai left #gluster
13:29 skoduri_ joined #gluster
13:30 Javezim joined #gluster
13:31 Javezim Does anyone know of a way to heal split-brain based off the timestamp of the file?
13:31 Javezim ie. if on Brick1 Machine1 there is a file dated 22nd March
13:31 Javezim And on Brick1 Machine2 there is a file dated 21st March
13:31 Javezim It always uses the latest file when healing
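The per-file CLI mentioned earlier only offers bigger-file and source-brick as policies, so newest-mtime selection has to be scripted around it. A rough, untested sketch, assuming a replica-2 volume gv0 with hypothetical bricks machine1:/brick1 and machine2:/brick1 and passwordless ssh between the hosts:

    #!/bin/bash
    # pick-newer.sh FILE  -- FILE is the path as seen from the volume root, e.g. /dir/file
    VOL=gv0
    B1_HOST=machine1; B1_PATH=/brick1
    B2_HOST=machine2; B2_PATH=/brick1
    FILE="$1"

    # compare modification times of the two brick copies (0 if unreadable)
    m1=$(ssh "$B1_HOST" stat -c %Y "$B1_PATH$FILE" 2>/dev/null || echo 0)
    m2=$(ssh "$B2_HOST" stat -c %Y "$B2_PATH$FILE" 2>/dev/null || echo 0)

    if [ "$m1" -ge "$m2" ]; then
        SRC="$B1_HOST:$B1_PATH"
    else
        SRC="$B2_HOST:$B2_PATH"
    fi

    # documented per-file resolution command from the spec linked earlier
    gluster volume heal "$VOL" split-brain source-brick "$SRC" "$FILE"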
13:40 shyam joined #gluster
13:43 dlambrig_ joined #gluster
13:46 skylar joined #gluster
13:51 plarsen joined #gluster
13:52 dlambrig_ joined #gluster
13:53 ovaistariq joined #gluster
13:53 mpietersen joined #gluster
13:53 DV joined #gluster
13:59 overclk joined #gluster
14:00 hamiller joined #gluster
14:05 baojg joined #gluster
14:05 coredump joined #gluster
14:06 ramky joined #gluster
14:06 Brandon joined #gluster
14:08 Guest93762 anyone have great success with gluster? I'm having huge issues where my 3 webserver nodes are getting high load. I started a gluster heal and it had like 5000 files healing
14:09 Guest93762 but load spikes are killing my sites while the heal is going on.
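A common way to keep a large heal from starving clients is to leave the healing to the self-heal daemon only, and to pause that daemon during peak hours; a hedged sketch with an illustrative volume name:

    # stop heals being triggered through the client mounts
    gluster volume set myvol cluster.data-self-heal off
    gluster volume set myvol cluster.metadata-self-heal off
    gluster volume set myvol cluster.entry-self-heal off

    # pause the self-heal daemon while the sites are busy...
    gluster volume set myvol cluster.self-heal-daemon off
    # ...and re-enable it (and kick off a heal) off-peak
    gluster volume set myvol cluster.self-heal-daemon on
    gluster volume heal myvol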
14:17 squizzi joined #gluster
14:18 squizzi joined #gluster
14:20 ivan_rossi left #gluster
14:22 shaunm joined #gluster
14:22 ivan_rossi joined #gluster
14:23 pur joined #gluster
14:24 ivan_rossi left #gluster
14:35 baojg joined #gluster
14:37 drankis joined #gluster
14:38 ivan_rossi joined #gluster
14:40 cpetersen JoeJulian: well I've been contemplating naming standards.  It's hard to google for consistent information about how IT professionals name their systems.  Every company has their own way, but I need somewhere to start!
14:44 kdhananjay joined #gluster
14:49 skylar joined #gluster
14:50 jiffin joined #gluster
14:53 baojg joined #gluster
14:54 ovaistariq joined #gluster
14:57 unclemarc joined #gluster
15:02 nathwill joined #gluster
15:10 hagarth joined #gluster
15:10 hagarth joined #gluster
15:11 dlambrig_ joined #gluster
15:16 Saravanakmr joined #gluster
15:17 bennyturns joined #gluster
15:19 bennyturns joined #gluster
15:26 nbalacha joined #gluster
15:31 m0zes you pick something that works for you. but for huge setups, this is a decent start http://blog.device42.com/wp-content/uploads/2014/05/wpid2980-anatomy-device-of-a-name.png
15:34 BrandonEmerge joined #gluster
15:35 wushudoin joined #gluster
15:38 calavera joined #gluster
15:40 DV__ joined #gluster
15:54 ovaistariq joined #gluster
15:55 hchiramm joined #gluster
15:56 Gnomethrower joined #gluster
15:58 rafi joined #gluster
16:03 itisravi joined #gluster
16:10 mhulsman joined #gluster
16:16 robb_nl joined #gluster
16:22 d0nn1e joined #gluster
16:23 JoeJulian cpetersen: We use r01c01.r16.d304.az1.{org unit}.{business unit}.io.com where that's row/column (open compute does 3 nodes wide per 2u), rack, module (see https://io.com to see what a module is), datacenter, etc.
16:29 wushudoin joined #gluster
16:38 drankis joined #gluster
16:41 brandon joined #gluster
16:42 wushudoin joined #gluster
16:43 Guest29960 anyone know why, when I check the heal info (gluster volume heal persistence info), it has not healed the rest of my files? server gluster01 shows number of entries 4 and gluster02 shows number of heals 387
16:43 Guest29960 has been like this for ~24 hours
16:44 Guest29960 i need all the files to heal :(
16:46 itisravi Guest29960: do the glustershd.log files of the 2 servers give any indication?
16:47 siel joined #gluster
16:50 haomaiwa_ joined #gluster
16:55 ovaistariq joined #gluster
17:01 mhulsman joined #gluster
17:03 liibert joined #gluster
17:03 ira joined #gluster
17:06 bfoster joined #gluster
17:10 Guest29960 itisravi: nothing really  the last few entries show:
17:10 Guest29960 [2016-03-22 17:03:38.890593] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-persistence-client-1: remote operation failed. Path: <gfid:33215b03-5a95-45a9-93c1-8db0b49b5228> (33215b03-5a95-45a9-93c1-8db0b49b5228) [No such file or directory] [2016-03-22 17:03:39.607588] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-persistence-client-1: remote operation failed. Path: <gfid:c332ab9b-af7a-4f34-
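A few standard checks while a heal appears stuck, using the volume name from the question above; whether a full sweep is appropriate depends on the situation:

    gluster volume heal persistence info                    # entries still pending per brick
    gluster volume heal persistence info split-brain        # anything wedged in split-brain?
    gluster volume heal persistence statistics heal-count   # running totals per brick
    gluster volume heal persistence full                    # force a full sweep if the index heal stalls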
17:13 Saravanakmr joined #gluster
17:25 ivan_rossi left #gluster
17:28 mlhess joined #gluster
17:34 hagarth joined #gluster
17:37 fcami joined #gluster
17:42 tessier joined #gluster
17:44 arcolife joined #gluster
17:44 mhulsman1 joined #gluster
17:44 EinstCrazy joined #gluster
17:46 EinstCrazy joined #gluster
17:48 EinstCrazy joined #gluster
17:49 dblack joined #gluster
17:50 EinstCrazy joined #gluster
17:53 EinstCrazy joined #gluster
17:55 EinstCrazy joined #gluster
17:56 ovaistariq joined #gluster
17:59 papamoose joined #gluster
18:05 dtrainor joined #gluster
18:09 muneerse2 joined #gluster
18:10 poornimag joined #gluster
18:10 EinstCrazy joined #gluster
18:10 dtrainor If I had a distributed striped volume that I wanted to expand by two bricks (assuming my replica count was 2), would I give both the new bricks as arguments to 'gluster volume add-brick VolName srv1:/newbrick srv2:/newbrick' ?  The docs don't explicitly say if I'd need two bricks as part of the command.
18:11 shubhendu joined #gluster
18:16 EinstCrazy joined #gluster
18:18 mhulsman joined #gluster
18:18 klfwip joined #gluster
18:23 JoeJulian dtrainor: yes, you would have to give your volume a new set of replica*stripe bricks all at once on the add-brick command.
18:25 dtrainor Thank you for the confirmation.
18:26 luizcpg joined #gluster
18:26 JoeJulian ie. if you have replica 2 stripe 2 you have to add 4 bricks.
18:26 dtrainor are you talking initially, or are you talking after the fact when I want to expand the size of the volume?
18:27 JoeJulian both
18:27 JoeJulian Unless you're just changing the replica count.
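In concrete terms, for a replica 2 stripe 2 volume the expansion has to come as a full set of four bricks, followed by a rebalance; a hedged example with made-up brick paths:

    gluster volume add-brick VolName \
        srv1:/bricks/new1 srv2:/bricks/new1 \
        srv1:/bricks/new2 srv2:/bricks/new2

    # spread existing data onto the new distribute subvolume
    gluster volume rebalance VolName start
    gluster volume rebalance VolName status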
18:30 jri joined #gluster
18:39 dtrainor hrm maybe i've got it wrong.  if i have a 2x2 distributed stripe, i want to add two more bricks to increase the size of each stripe
18:48 JoeJulian I don't think you can change stripe size.
18:49 JoeJulian I assumed it was also replicated because you stated "assuming my replica count was 2".
18:49 haomaiwa_ joined #gluster
18:49 JoeJulian If you're not sure, share "gluster volume info" through fpaste.org
18:54 F2Knight joined #gluster
18:56 dtrainor i have the benefit of starting over this time... hah
18:56 dtrainor the replica count can remain the same while also increasing the stripe size, can't it?
18:57 ovaistariq joined #gluster
18:57 JoeJulian I didn't think stripe could be resized.
18:57 JoeJulian I never have, nor will I ever likely need to, use the stripe translator.
18:58 JoeJulian I'm pretty sure you can't change it though. The data migration necessary would be overwhelming.
19:05 dtrainor ok.  i'll experiment a little bit.
19:05 dtrainor thank you
19:21 Brandon_ joined #gluster
19:26 ahino joined #gluster
19:33 JoeJulian dtrainor: since you have the benefit of starting over, I would avoid stripe. If your file sizes exceed your brick size, use disperse instead.
18:39 dtrainor oh? that's a new term to me, I'll look into it
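For orientation, a dispersed (erasure-coded) volume is created with a disperse count and a redundancy count; a minimal hedged example that tolerates one brick failure, with placeholder hostnames and paths:

    gluster volume create dispvol disperse 3 redundancy 1 \
        server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1
    gluster volume start dispvol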
19:51 amye joined #gluster
19:53 ahino joined #gluster
19:57 mzink_gone joined #gluster
19:57 ovaistariq joined #gluster
19:58 _nixpanic joined #gluster
19:58 _nixpanic joined #gluster
19:59 btpier joined #gluster
20:00 inodb joined #gluster
20:05 cuqa_ joined #gluster
20:06 calavera joined #gluster
20:11 DV joined #gluster
20:13 jlp1 joined #gluster
20:15 gbox joined #gluster
20:20 post-factum or sharding
20:24 shaunm joined #gluster
20:27 JoeJulian Oh, right, or sharding. That's actually what I was thinking of in the first place.
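Sharding, mentioned here, is turned on per volume roughly like this (the shard size shown is illustrative); it only applies to files created after it is enabled:

    gluster volume set VolName features.shard on
    gluster volume set VolName features.shard-block-size 64MB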
20:28 JoeJulian Personally, I hate spreading files across multiple devices.
20:29 * JoeJulian says while using raid0 bricks....
20:31 post-factum lel
20:32 post-factum yep, spreading does not feel reliable in case of crash
20:32 post-factum anyway, the file could be restored manually even in sharded volume
20:33 post-factum (i believe so)
20:33 post-factum unlike raid0
20:33 JoeJulian There's a mathematically greater chance of recovering from a replicated volume with raid-0 bricks than there is from a sharded volume.
20:34 post-factum and if that is distributed-replicated volume with sharding enabled and raid10 for bricks?
20:34 post-factum (i really use softare raid10 with f2 layout of 2 devices — acts like raid1 with slightly greater throughput)
20:35 post-factum *software
20:35 JoeJulian Replica 3 with raid 0 has better uptime than raid10 replica 2.
20:35 JoeJulian (for less money)
20:35 post-factum to have replica 3 one needs to have 3 DCs, if one wants to build the "right" architecture
20:36 JoeJulian Depends on the DC.
20:36 post-factum "it depends" is always a great answer
20:36 JoeJulian right?
20:36 post-factum like "42" for network timeout
20:37 post-factum "right". you know, what is the sense of having 2 nodes of 3 total in one dc
20:38 post-factum yep, it protects you from node power failure. i prefer reserving that with 2 power adapters per 1 node
20:38 JoeJulian If you can have unique power and cooling feeds within a single DC.
20:38 post-factum again, recent example
20:38 post-factum we have a DC here
20:39 JoeJulian I'm used to dealing with three of the 5 largest DC's in the world.
20:39 post-factum it is owned by cloud provider, and it provides vps, colocation and other stuff like that
20:39 post-factum and they were doing power maintenance in the middle of week in working hours
20:40 post-factum they have 2 separate power suppliers, UPSes, diesel generators and other cool crap
20:40 JoeJulian not uncommon
20:40 post-factum and you know what?
20:40 post-factum the whole damned DC went offline!
20:40 JoeJulian Been there.
20:40 post-factum fortunately, we had only 1 ceph mon there
20:41 post-factum and other were located in 2 other dcs
20:41 JoeJulian The difference in our DC is that it's so big that the only thing the different sections share is the roof.
20:42 post-factum you are lucky man
20:42 JoeJulian That's what they tell me. :)
20:43 post-factum but i hope that local dc has learnt something from the damned outage
20:43 post-factum 5 hours. you know, even support was unavailable because the damned pbx was hosted in the same dc
20:44 post-factum the only channel they were communicating with customers was. guess what?
20:44 post-factum facebook!
20:44 post-factum not even cellular phones, because they had damned gsm gateways in that dc too!
20:45 post-factum i believe, they had facebook access because of 3G on chief's smartphone
20:45 post-factum look, that is the real diversity
20:45 post-factum or diversification. how should i call it properly
20:46 post-factum so, having only 2 dcs... it depends, yep
20:47 post-factum but no replica withing same provider. not anymore. everything could happen
20:47 post-factum *within
20:47 JoeJulian There's nothing like experience, eh?
20:48 post-factum eh :)
20:49 post-factum btw
20:50 post-factum JoeJulian: do you have some experience with proprietary raid?
20:50 post-factum like this:
20:50 post-factum flexraid.com
20:51 post-factum http://flexraid.com
20:51 post-factum hey, bot
20:51 post-factum #link http://flexraid.com/
20:51 post-factum :(
20:51 post-factum JoeJulian: anyway, ^^
20:52 post-factum glusterbot--
20:52 glusterbot post-factum: glusterbot's karma is now 8
20:52 JoeJulian As little as I can. I've had nothing but bad experiences with proprietary storage.
20:53 post-factum any examples?
20:53 post-factum vsan?
20:53 post-factum veritas?
20:55 JoeJulian I can't point fingers publicly, but they're in the business of making sales, not supporting customers.
20:56 post-factum that is what proprietary companies always do
20:56 post-factum nothing new here
20:56 JoeJulian If I was going to pay a company for storage, I'd rather it be Red Hat. They're in the business of selling support.
20:56 dblack joined #gluster
20:57 post-factum that is because there is no other company with red hat's business scheme that succeeds. except red hat
20:57 post-factum but anyway, red hat builds storage stack on top of open source
20:57 post-factum i was wandering about closed ones
20:58 post-factum *wondering
20:58 ovaistariq joined #gluster
21:02 Philambdo joined #gluster
21:08 haomaiwang joined #gluster
21:18 gbox glfsheal-gv0.log: [options.c:166:xlator_option_validate_sizet] 0-gv0-write-behind: Cache size 2147483648 is out of range [524288 - 1073741824]
21:18 gbox I haven't been able to figure this out.  Changing write-behind window and such does not seem to do it
21:23 post-factum gbox: what about cache size?
21:31 gbox post-factum: same
21:31 gbox It seems inconsequential enough
21:31 gbox Using the cli to change the value (for the parameter) should take effect immediately, yes?
21:33 gbox What is the unit even?  kilobytes?
21:35 gbox It's bytes
21:36 gbox performance.write-behind-window-size has a 1GB limit
21:39 gbox I guess for the raspberry pi users having a minimum of ½ megabyte makes sense
21:41 gbox OK it's per file so that does make sense
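Given the range in that log message, the fix is presumably to bring the override back under 1GB, or drop it entirely; a hedged sketch for the gv0 volume named in the log:

    # see the current value
    gluster volume get gv0 performance.write-behind-window-size
    # set it to the documented maximum (1073741824 bytes)
    gluster volume set gv0 performance.write-behind-window-size 1GB
    # or remove the override and fall back to the default
    gluster volume reset gv0 performance.write-behind-window-size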
21:43 harish_ joined #gluster
21:44 plarsen joined #gluster
21:45 shyam joined #gluster
21:46 bennyturns joined #gluster
21:52 abyss^ joined #gluster
21:52 ramas joined #gluster
21:59 ovaistariq joined #gluster
22:00 abyss^ joined #gluster
22:07 DV joined #gluster
22:14 ramas joined #gluster
22:35 bet_ joined #gluster
22:35 gbox Can a file in a gluster filesystem ever NOT have a trusted.gfid xattr on the brick?
23:00 ovaistariq joined #gluster
23:04 gbox getfattr won't read the trusted.gfid xattr if the file is in a symlinked directory!
23:05 gbox security.selinux shows up but not the trusted.* xattrs
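Worth noting: by default getfattr only dumps the user.* namespace, so the trusted.* attributes need an explicit match pattern (run as root on the brick); -h controls whether a symlink itself or its target is examined:

    # dump all xattrs, including trusted.*, in hex
    getfattr -d -m . -e hex /path/on/brick/to/file
    # or ask for the gfid directly
    getfattr -n trusted.gfid -e hex /path/on/brick/to/file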
23:17 calavera joined #gluster
23:32 bennyturns joined #gluster
23:35 Wojtek [2016-03-12 18:40:55.838485] W [MSGID: 108015] [afr-self-heal-entry.c:60:afr_selfheal_entry_delete] 0-gv0-replicate-0: expunging dir 00000000-0000-0000-0000-000000000001/files (372c1e11-3347-49f0-9631-44a6a0c497bd) on gv0-client-0
23:35 Wojtek We had an instance of gluster wiping out all the data after a hardware brick swap on a second node of a 2 server cluster
23:35 Wojtek expunging dir shows up a few times,
23:36 Wojtek what could have triggered it?
23:36 JoeJulian 1st, did you file a bug report? That's a critical problem that should never happen.
23:36 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
23:37 Wojtek Will do so now, wasn't sure if this was a bug or something I did
23:38 JoeJulian There should never be a way to "accidentally" lose data that isn't done from a mounted client. (rm -rf happens to everyone once)
23:40 Wojtek joined #gluster
23:43 Wojtek I will sanitize my logs and upload them along with the bug report. I'll post back tomorrow morning with the bug #
23:50 johnmilton joined #gluster
