IRC log for #gluster, 2016-09-08


All times shown according to UTC.

Time Nick Message
01:02 om joined #gluster
01:06 shdeng joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:04 Lee1092 joined #gluster
02:04 om joined #gluster
02:19 d0nn1e joined #gluster
02:47 rastar joined #gluster
02:48 kukulogy joined #gluster
02:54 Gnomethrower joined #gluster
03:03 nbalacha joined #gluster
03:08 Gambit15 joined #gluster
03:08 aspandey joined #gluster
03:20 magrawal joined #gluster
03:27 kramdoss_ joined #gluster
03:30 skoduri joined #gluster
03:35 RameshN joined #gluster
03:36 nbalacha joined #gluster
03:39 shortdudey123 joined #gluster
03:44 atinm joined #gluster
03:58 harish joined #gluster
04:10 riyas joined #gluster
04:10 karnan joined #gluster
04:16 sanoj joined #gluster
04:18 itisravi joined #gluster
04:24 hgichon joined #gluster
04:27 k4n0 joined #gluster
04:33 Philambdo joined #gluster
04:34 Philambdo joined #gluster
04:39 poornima joined #gluster
04:42 hackman joined #gluster
04:47 shubhendu joined #gluster
04:51 jkroon JoeJulian, ok, so here is the theory: kernel 4.6.4 left things in a bad state for me because certain data never made it to disk. as long as no reboots happened it was "ok", but after a reboot things are obviously in a bad state. after clearing all the split brains, loads are once more at normal levels, and much lower than anything we ever saw with the 4.6.4 kernel.
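
A sketch of the split-brain cleanup jkroon describes, using the gluster CLI's split-brain resolution commands; the volume name and file path are placeholders:

# gluster volume heal myvol info split-brain                       # list entries still in split-brain
# gluster volume heal myvol split-brain latest-mtime /dir1/file1   # resolve one entry, keeping the newer copy
# gluster volume heal myvol info                                   # the heal queue should drain afterwards
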
04:56 jiffin joined #gluster
04:57 kotreshhr joined #gluster
04:57 om joined #gluster
05:10 aravindavk joined #gluster
05:15 gem joined #gluster
05:19 deniszh joined #gluster
05:20 karthik_ joined #gluster
05:22 aspandey joined #gluster
05:22 derjohn_mob joined #gluster
05:24 kramdoss_ joined #gluster
05:36 raghug joined #gluster
05:36 Saravanakmr joined #gluster
05:45 ppai joined #gluster
05:46 ankitraj joined #gluster
05:48 ndarshan joined #gluster
05:48 deniszh1 joined #gluster
05:56 hgowtham joined #gluster
05:59 prasanth joined #gluster
06:01 satya4ever joined #gluster
06:03 eightyeight hmm. i'm following the syntax exactly, but `gluster volume replace-brick' isn't working for me. keeps giving me a usage dump
06:04 eightyeight Usage: volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force}
06:04 eightyeight `start' also doesn't work, and i'm assuming `status' won't either (dunno, haven't made it that far yet)
06:04 eightyeight # gluster volume replace-brick pool station22:/data/glusterfs/pool/sdb1/brick station23:/data/glusterfs/pool/sdb1/brick
06:05 eightyeight # gluster volume replace-brick pool station22:/data/glusterfs/pool/sdb1/brick station23:/data/glusterfs/pool/sdb1/brick start
06:05 eightyeight both give usage dumps, including `commit' and `force'
06:05 eightyeight # rpm -q glusterfs
06:05 eightyeight glusterfs-3.8.3-1.el7.x86_64
06:10 Bhaskarakiran joined #gluster
06:10 anoopcs eightyeight, Usage clearly shows we need to have either commit or force
06:11 ehermes joined #gluster
06:13 jiffin eightyeight: IMO it is not recommended to replace a brick directly
06:13 rafi joined #gluster
06:14 jiffin eightyeight: doing an add-brick and then a remove-brick is safer than the replace command
06:14 anoopcs My bad.. Even if you use replace-brick, both commit and force are required.
06:15 suliba joined #gluster
06:15 anoopcs eightyeight, As jiffin mentioned it may not be recommended.
06:15 eightyeight doesn't `replace-brick' migrate data from the old brick to the new?
06:16 eightyeight where `remove-brick' does not?
06:16 Klas now I'm curious as well, since I'm running 2+1 and it's tricky to add-brick
06:16 Klas eightyeight: it definitely migrates data
06:16 eightyeight yeah. i definitely don't want data loss
06:17 eightyeight interesting about `commit force' though. that seems to be the ticket i needed
06:18 Klas yup, that does indeed work
06:18 jiffin eightyeight: I am sure that remove-brick will migrate existing data to the remaining bricks
06:18 jiffin https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Managing%20Volumes/
06:18 glusterbot Title: Managing Volumes - Gluster Docs (at gluster.readthedocs.io)
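
Combining the usage string above with itisravi's answer further down, the working form of the command eightyeight was after looks like this (brick paths copied from his example; an untested sketch):

# gluster volume replace-brick pool station22:/data/glusterfs/pool/sdb1/brick \
      station23:/data/glusterfs/pool/sdb1/brick commit force
# gluster volume heal pool info    # the new brick is then repopulated by self-heal
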
06:19 Klas don't you need to add bricks evenly according to replica count, btw?
06:19 Klas meaning, if you want to add one brick and are running replica 3, don't you need to add 3 bricks?
06:20 jiffin Klas: correct
06:20 derjohn_mob joined #gluster
06:20 jiffin addition and removal should be performed in pairs
06:20 Klas so, what, to replace one brick, you should add X new ones, then remove all three =P?
06:21 Klas if replica three, that is
06:21 Klas I assume most people are running replica three
06:21 Klas or replica 3 arb 1 ;)
06:22 devyani7 joined #gluster
06:23 deniszh joined #gluster
06:24 jiffin Klas: it is mentioned in the above doc in "Replacing brick in Replicate/Distributed Replicate volumes"
06:24 Klas yup, currently reading it
06:24 jiffin but I am not sure how we can do it for arbiter
06:25 jiffin Klas: you can check with itisravi
06:25 Klas I think someone said it's not even possible until 3.8 actually
06:25 Klas might even been itisravi actually
06:25 itisravi Klas: sorry what is the question?
06:26 Klas what is the recommended way of replacing a brick in a replicate 3 arbiter 1?
06:26 Klas since it seems kinda clunky adding bricks, if even at all possible
06:26 Klas replace-brick is currently in my documentation, btw
06:26 itisravi Klas: ah, gluster volume replace-brick volname oldbrick newbrick commit force.
06:27 Klas ah, ok, good, someone said that is generally a worse option than add/remove brick
06:27 Klas we've only tried it with small data sets so far (10G or so)
06:28 itisravi add/remove brick is usually used to change distribute count.
06:29 Klas distribute count, that is not replica, but how many bricks the replica is across, right?
06:29 Klas replica 2 host1 host2 host3 host4
06:29 itisravi joined #gluster
06:30 Klas there distributed count is host1-4?
06:30 Klas to simplify?
06:30 Klas (I'm still bad at the terminology)
06:30 aravindavk_ joined #gluster
06:31 jiffin1 joined #gluster
06:31 jtux joined #gluster
06:33 kdhananjay joined #gluster
06:34 aspandey_ joined #gluster
06:41 msvbhat joined #gluster
06:45 ppai joined #gluster
06:47 atinm joined #gluster
06:52 nbalacha joined #gluster
06:53 rafi joined #gluster
06:57 kshlm joined #gluster
06:59 JoeJulian Klas, itisravi: of course replace-brick...commit force is a "worse" option. Unfortunately, nothing better actually exists at this point. It takes your 99.9999% uptime configuration and turns it into a 99.99% configuration until the self-heal finishes.
07:00 JoeJulian Which leaves you in a state of liability if you have SLA agreements that assure more than 4 nines.
07:03 raghug joined #gluster
07:04 Klas ah
07:04 jiffin1 joined #gluster
07:06 JoeJulian The only way to replace a brick and maintain SLA assurances is (with an arbiter volume anyway) to add 3 new servers to the volume (two replica and an arbiter) and then remove-brick the original three.
07:07 aravindavk joined #gluster
07:07 JoeJulian Not very cost effective.
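
A sketch of the add-then-remove sequence JoeJulian describes, for a replica 3 arbiter 1 volume; the server names and brick paths are made up, and whether add-brick places the arbiter correctly is exactly the uncertainty raised above, so check against the linked docs first:

# gluster volume add-brick myvol new1:/bricks/b1 new2:/bricks/b1 newarb:/bricks/b1
# gluster volume remove-brick myvol old1:/bricks/b1 old2:/bricks/b1 oldarb:/bricks/b1 start
# gluster volume remove-brick myvol old1:/bricks/b1 old2:/bricks/b1 oldarb:/bricks/b1 status   # wait for "completed"
# gluster volume remove-brick myvol old1:/bricks/b1 old2:/bricks/b1 oldarb:/bricks/b1 commit
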
07:07 itisravi joined #gluster
07:09 itisravi JoeJulian: yes agreed. FWIW, I started working on having some sort of throttling for just self-heal traffic using the existing token bucket filter framework in gluster  but there are asks from pranith and hagarth to make it more general purpose so that this 'throttling xlator' can also regulate client side traffic. I need to think more about this.
07:10 JoeJulian Yeah, I've been keeping my eye on that.
07:10 karnan joined #gluster
07:11 jiffin joined #gluster
07:14 jri joined #gluster
07:16 nishanth joined #gluster
07:17 devyani7 joined #gluster
07:18 harish joined #gluster
07:20 ppai joined #gluster
07:24 atinm joined #gluster
07:26 ivan_rossi joined #gluster
07:32 prasanth joined #gluster
07:32 ahino joined #gluster
07:33 JoeJulian Oh, itisravi, I just realized you read that as a complaint against self-heal using up resources. It's not. It's a mathematical fact that if you have a working replica you can have a 6 nines configuration. If you remove one of those replica, your data now all lives on just one drive until healed to the other drive. That leaves the volume in a 4 nines state until the heal is complete. Often we've sold our reliability at 5 nines and if we knowingly
07:33 JoeJulian drop to 4 nines, we may be liable for negligence.
07:34 JoeJulian Which is precisely why I don't do that.
07:36 itisravi Ah makes sense. I did take it as a (genuine) complaint :)
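
The nines argument can be made concrete with a bit of independence arithmetic; the per-brick availability figure below is an illustrative assumption, not something taken from this discussion:

    A_{\text{healthy}} = 1 - (1 - p)^{2}, \qquad A_{\text{degraded}} = p

With p = 99.99% per brick, the healthy pair is roughly 99.999999% available, while the window where only one copy is live falls straight back to 99.99%, which is the drop to four nines JoeJulian is describing.
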
07:41 prasanth joined #gluster
07:43 deniszh joined #gluster
07:44 derjohn_mob joined #gluster
07:50 karnan joined #gluster
07:51 fsimonce joined #gluster
07:52 Vaizki joined #gluster
07:55 gem joined #gluster
08:02 k4n0_away joined #gluster
08:04 Slashman joined #gluster
08:07 kramdoss_ joined #gluster
08:25 rastar joined #gluster
08:30 devyani7 joined #gluster
08:31 karnan joined #gluster
08:32 derjohn_mob joined #gluster
08:38 nbalacha joined #gluster
08:42 Gnomethrower joined #gluster
08:47 karthik_ joined #gluster
08:51 ashiq joined #gluster
09:11 hackman joined #gluster
09:13 harish joined #gluster
09:29 atinm joined #gluster
09:37 jwd joined #gluster
09:38 aravindavk joined #gluster
09:42 Mattias joined #gluster
09:44 robb_nl joined #gluster
09:46 hchiramm joined #gluster
09:47 Mattias I've set up three servers all synced with glusterfs. The only issue I have, which makes the servers super slow, is that glusterfsd takes around 50-80% cpu and glusterfs takes around 20-40% cpu. I can't seem to figure it out: no errors or fails in the logs, peer status is good, and I'm not sure how to read the profile dumps? 60 second dump: https://gist.github.com/mattias/7b5e8d333abd51fde28afadbf7531bed  and 120s dump: https://gist.github.com/m
09:47 glusterbot Title: gluster 60s profile · GitHub (at gist.github.com)
09:47 Mattias I can see RELEASE and RELEASEDIR is high, not sure how to fix that or if that is a problem?
09:48 Mattias There is a Joomla installation on this volume if that helps
09:48 Mattias The server is however not in use currently, so I'm not sure why glusterfs even shows so much activity
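
For anyone reproducing this, the gists Mattias links look like output of gluster's built-in profiler; a minimal sketch of collecting the same data, with the volume name as a placeholder:

# gluster volume profile gv0 start
# (let the workload run for 60-120 seconds)
# gluster volume profile gv0 info     # per-brick FOP counts and latencies since the last info call
# gluster volume profile gv0 stop
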
09:56 rastar joined #gluster
10:01 Mattias On a server identical in configuration but without glusterfs, TTFB is in the milliseconds. But on a server with glusterfs and this problem (all three servers in this case), TTFB is in seconds; it takes 5-7 seconds to load a page.
10:02 atinm joined #gluster
10:19 beemobile joined #gluster
10:20 jri_ joined #gluster
10:24 jri joined #gluster
10:28 rwheeler joined #gluster
10:36 Sebbo2 joined #gluster
10:37 rastar joined #gluster
10:49 Jacob843 joined #gluster
10:51 ndarshan joined #gluster
10:54 beemobile joined #gluster
11:03 aravindavk joined #gluster
11:03 msvbhat joined #gluster
11:04 robb_nl joined #gluster
11:07 aravindavk_ joined #gluster
11:07 arcolife joined #gluster
11:11 bluenemo joined #gluster
11:14 jtux joined #gluster
11:16 ndarshan joined #gluster
11:21 Mattias Just tried turning off glusterfs on the servers and it is all fast again. But then there's no filesync between the servers. I wonder what could cause glusterfs to be so slow...
11:24 rastar joined #gluster
11:29 Sebbo3 joined #gluster
11:32 aravindavk joined #gluster
11:34 kdhananjay joined #gluster
11:34 derjohn_mob joined #gluster
11:42 devyani7 joined #gluster
12:01 Sebbo4 joined #gluster
12:03 aspandey_ joined #gluster
12:11 johnmilton joined #gluster
12:12 johnmilton joined #gluster
12:12 Mattias I should note that I did try this server setup on vagrant first with 3 servers, but now on digital ocean I get this cpu problem, using three of the $20 servers.
12:13 Mattias Is it the particular kind of VPS DO offers that causes glusterfs to perform so badly?
12:21 poornima joined #gluster
12:32 shyam joined #gluster
12:36 ira joined #gluster
12:37 k4n0 joined #gluster
12:42 Klas cloud vpses are generally a bad idea when you want to guarantee low-latency
12:42 Klas and gluster requires low latency
12:43 Klas the problem is that with cloud vpses, they can end up literally anywhere, especially with large providers
12:43 Klas if one server is in china, one in brazil and one in europe
12:43 Klas well, that would be quite horrible for gluster
12:46 Sebbo4 Is there a way to see the rebalance status in percent?
12:47 jri_ joined #gluster
12:47 unclemarc joined #gluster
12:53 ahino1 joined #gluster
12:55 kdhananjay joined #gluster
13:10 ahino joined #gluster
13:14 jdarcy joined #gluster
13:15 Mattias Klas: I've created the servers in the same data center with private ip's (same network)
13:19 legreffier my gluster is logging weird things to daemon.log > http://pastebin.com/uzWCGTsq
13:19 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
13:19 legreffier https://paste.fedoraproject.org/423895/14733407/
13:19 glusterbot Title: #423895 • Fedora Project Pastebin (at paste.fedoraproject.org)
13:20 legreffier im cool with the log, but it definitely shouldn't have "var-opt-hosting-data-data-01" as programname/tag :)
13:20 B21956 joined #gluster
13:22 Klas Mattias: Ah, well planned then =). Have you checked your latency?
13:22 legreffier it did drive the supervising team crazy here : "omg you broke the master syslog serv" < ah ah ah ah
13:22 Klas a normal ping will be mostly good enough in most cases
13:22 legreffier (told him to stick away from le Gibson)
13:23 Mattias Klas: The problem is, the files are already synced and when running the webserver everything is super slow, because glusterfs and glusterfsd is taking 100% cpu together. Shutting glusterfs down and then using the webserver and everything is fine again
13:23 Mattias I'm not sure why glusterfs is even using cpu when everything is already synced...
13:23 Klas Mattias: I know, I read what you wrote, and that is why I'm asking latency.
13:23 Klas every read on every file requires a stat on all nodes
13:23 Mattias Klas: I'll try a simple ping and see what I get
13:24 Klas and a sync of said stats
13:24 Klas so, well, every 0.1 ms counts =P
13:24 Mattias Klas: I get around 0.2 to 0.5ms
13:24 Klas (exaggerating a bit, but you catch my drift)
13:24 Klas that should be fine
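
A quick way to sanity-check the inter-node latency Klas is asking about (the private IP is a placeholder):

# ping -c 20 10.0.0.2    # same-datacenter private networking should average well under a millisecond
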
13:25 Klas what version are you running?
13:25 Klas I seem to recall seeing something about poor performance in 3.7.14 that was fixed in 3.7.15 in certain workloads (might be wrong though)
13:25 Mattias 3.8.3 on centos 7.2
13:26 Klas ok, then I probably got nothing, just trying to rule out obvious issues
13:26 Mattias Nothing weird with the profiles I posted?
13:26 Mattias RELEASE and RELEASEDIR looks suspicious to me
13:27 Klas I'm not that knowledgeable, sorry =(
13:28 Klas btw, 120s dump link is broken due to irc not allowing long lines
13:28 Mattias ah
13:28 Klas end of line is: and 120s dump:https://gist.github.com/ma
13:28 glusterbot Title: ma’s gists · GitHub (at gist.github.com)
13:28 kpease joined #gluster
13:28 Mattias https://gist.github.com/mattias/26e8b4d88b4c0f7d3eb1cc8e8279702b (120s)  and  https://gist.github.com/mattias/7b5e8d333abd51fde28afadbf7531bed (60s)
13:28 glusterbot Title: gluster 120s profile · GitHub (at gist.github.com)
13:29 Klas Mattias: btw, in general, most good glusterfs techies are online a bit later than this, or earlier, from what I've seen
13:30 Mattias I'll try to be on earlier tomorrow. My bouncer died (raspberry pi) a few weeks ago so now I'm just using the freenode webchat :)
13:31 Mattias Can't believe the webchat overrides the cloak I get first with some special cloak containing ip...
13:35 nbalacha joined #gluster
13:40 plarsen joined #gluster
13:41 Klas oh, the horror of not having an irc-shell!
13:42 skylar joined #gluster
13:43 B21956 joined #gluster
13:46 gem joined #gluster
13:47 Lee1092 joined #gluster
13:49 shaunm joined #gluster
13:50 squizzi joined #gluster
13:55 kdhananjay joined #gluster
14:04 hagarth joined #gluster
14:04 arcolife joined #gluster
14:12 baojg joined #gluster
14:13 kotreshhr left #gluster
14:15 shubhendu joined #gluster
14:17 shyam joined #gluster
14:20 JoeJulian Sebbo4: No, you cannot see the rebalance status as a percentage. There is no way for the shd to know what's left to be healed, especially for files that might be changing during the heal.
14:24 ankitraj joined #gluster
14:27 Sebbo4 JoeJulian: Ok, thanks for your feedback.
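
What the CLI does report, instead of a percentage, is per-node counters; a sketch with a placeholder volume name (the exact columns vary a little between releases):

# gluster volume rebalance gv0 status    # rebalanced files, size, scanned, failures and run time per node
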
14:29 derjohn_mob joined #gluster
14:34 bluenemo joined #gluster
14:36 shyam joined #gluster
14:43 ahino joined #gluster
14:46 plarsen joined #gluster
14:50 jdarcy joined #gluster
14:57 Gnomethrower joined #gluster
15:04 arcolife joined #gluster
15:07 kramdoss_ joined #gluster
15:07 bluenemo joined #gluster
15:10 kdhananjay joined #gluster
15:15 derjohn_mob joined #gluster
15:24 kramdoss_ joined #gluster
15:28 om joined #gluster
15:28 jiffin joined #gluster
15:31 ju5t joined #gluster
15:31 ChrisHolcombe joined #gluster
15:36 kdhananjay joined #gluster
15:44 gem joined #gluster
15:46 derjohn_mob joined #gluster
15:48 JoeJulian legreffier: Please file a bug report on the log issue, thanks.
15:48 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
15:49 arcolife joined #gluster
15:51 jiffin joined #gluster
15:55 robb_nl joined #gluster
16:00 aravindavk joined #gluster
16:10 hagarth joined #gluster
16:19 d0nn1e joined #gluster
16:21 squizzi joined #gluster
16:23 Gambit15 joined #gluster
16:32 glustin joined #gluster
16:32 gem joined #gluster
16:38 ankitraj joined #gluster
16:38 aspandey joined #gluster
16:52 arcolife joined #gluster
16:53 ivan_rossi left #gluster
16:56 nbalacha joined #gluster
17:00 kramdoss_ joined #gluster
17:03 legreffier JoeJulian: as an ubuntu-member, im not sure i can log to redhat bug tracker
17:03 legreffier JoeJulian: strict rules ya know :P
17:04 legreffier jk , i'll sign off now, but i'll do it tomorrow morning :] have a nice day/evening
17:04 jwd joined #gluster
17:05 gem joined #gluster
17:06 ieth0 joined #gluster
17:07 derjohn_mob joined #gluster
17:07 ieth0 Hello everyone, I'm using glusterfs 3.8.3; I'm getting many "resolution failed" messages on unlinking a file
17:08 ieth0 storage is mounted using fuse.
17:09 ieth0 http://pastebin.com/x1syPgkM
17:09 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
17:09 ieth0 any idea ?
17:10 rafi joined #gluster
17:10 msvbhat joined #gluster
17:18 hagarth joined #gluster
17:24 bluenemo joined #gluster
17:26 syadnom joined #gluster
17:28 mreamy joined #gluster
17:28 hchiramm joined #gluster
17:32 gnufied joined #gluster
17:32 gnufied the gluster docker image seems to be broken - https://github.com/gluster/docker ? https://gist.github.com/gnufied/c13cb8c1b7535219c5d3abfc08ccd452
17:32 glusterbot Title: GitHub - gluster/docker: Dockerfiles (CentOS, Fedora, Red Hat) for GlusterFS (at github.com)
17:37 aravindavk joined #gluster
17:42 virusuy joined #gluster
17:42 virusuy joined #gluster
17:43 hackman joined #gluster
17:49 shortdudey123 joined #gluster
17:57 TZaman joined #gluster
18:08 syadnom joined #gluster
18:15 emitor_uy joined #gluster
18:23 ju5t joined #gluster
18:29 eightyeight getting `volume add-brick: failed: Commit failed on localhost. Please check log file for details.' when i try to add bricks
18:29 eightyeight log is showing trouble communicating with the other peer (there are only 2 in the trusted pool)
18:29 eightyeight but `gluster peer status' shows connected on both ends
18:30 eightyeight interestingly enough, `gluster volume info pool' shows the new bricks where the command was executed, but not on the other peer
18:32 eightyeight pastebin of what i'm seeing on `station21' and `station22': https://ae7.st/p/870
18:32 glusterbot Title: Pastebin on ae7.st » 870 (at ae7.st)
18:34 eightyeight trying to remove the bricks off station21:
18:34 eightyeight volume remove-brick commit force: failed: Staging failed on station22. Error: Deleting all the bricks of the volume is not allowed
18:35 eightyeight o.0
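
A few things worth checking when add-brick fails with "Commit failed on localhost" like this; the log path below is the usual default for this era of glusterfs and may differ per install:

# gluster peer status            # run on both nodes; each should show the other as Peer in Cluster (Connected)
# gluster volume status pool
# less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log    # glusterd's own log, where the commit error lands
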
18:49 ahino joined #gluster
19:27 johnmilton joined #gluster
19:29 ieth0 joined #gluster
19:34 Slashman joined #gluster
19:42 ju5t joined #gluster
20:04 roost joined #gluster
20:09 roost Hello, we currently have cluster.min-free-disk set to 1%. Is it possible to lower that percentage below 1%, or turn it off?
20:09 hackman joined #gluster
20:11 TZaman joined #gluster
20:11 JoeJulian roost: Are the existing files ever going to grow?
20:12 roost JoeJulian, no. We have millions of files and i'm thinking of the future right now. E.g. 1% could mean we still have like 5 TB of free space left
20:13 sage_ joined #gluster
20:13 JoeJulian Then, sure, you can set it to 0
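
The option being discussed, as a sketch with a placeholder volume name:

# gluster volume set gv0 cluster.min-free-disk 0
# gluster volume info gv0       # the reconfigured option shows up under "Options Reconfigured"
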
20:34 shyam joined #gluster
20:35 skylar joined #gluster
20:43 jiffin joined #gluster
20:44 derjohn_mob joined #gluster
20:50 hagarth joined #gluster
20:57 bowhunter joined #gluster
20:58 jkroon joined #gluster
21:04 roost JoeJulian, thank you
21:08 jkroon joined #gluster
21:16 jkroon joined #gluster
21:20 Javezim joined #gluster
21:33 jkroon joined #gluster
22:03 eMBee joined #gluster
22:14 Wizek joined #gluster
22:16 mbukatov joined #gluster
22:22 jkroon joined #gluster
22:26 johnmilton joined #gluster
22:42 congpine joined #gluster
22:45 david_ joined #gluster
22:47 ttkg joined #gluster
22:47 david_ hi all, is creating and removing a hardlink an expensive operation in gluster? our workflow involves copying files to gluster (/gluster/dir1/file1), creating a hardlink /gluster/dir2/file2 and then removing file1
22:48 david_ thanks in advance
22:50 JoeJulian david_: No, that's pretty simple. Accessing the file later will be less than optimally efficient, unless you rebalance, because the distribute translator looks up the brick based on a hash of the filename. With a different filename, a different hash will be used and two lookup() will have to occur to find the new filename.
22:52 Jacob843 joined #gluster
22:59 david_ Sorry, the filename would be the same, but just different path
22:59 david_ so hardlink /gluster/dir1/file1 and /gluster/dir2/file1
23:00 david_ if the filename is the same, it won't cause any lookup performance issues ?
23:01 JoeJulian dht hashes are assigned per-directory... I'm not quite sure. Interesting...
23:02 JoeJulian s/hashes/hash mappings/
23:02 glusterbot What JoeJulian meant to say was: dht hash mappings are assigned per-directory... I'm not quite sure. Interesting...
23:02 david_ hmm, yeah, I thought it would do two lookups as well.
23:03 JoeJulian I mean it's not a HUGE hit. It'll probably be just fine.
23:03 JoeJulian Depends on your use case, of course.
23:04 david_ we have about 3m files. they have been running with that workflow for over a year, and it has now started showing some performance issues.
23:04 david_ sometimes when I run rebalancing, I can see lots of files from that gluster directory being rebalanced or healed.
23:05 david_ i thought the hardlinks may have contributed to this
23:06 JoeJulian rebalanced, yes. Healed, no. If files are being healed, I'd say you had another issue.
23:11 david_ thanks, @joeJulian. Do you have any documentation regarding to hardlink operation?
23:14 JoeJulian Not that I can think of. In a nutshell, when you create a file on a gluster volume, it assigns a uuid (gfid) that acts like an inode number. The file is hardlinked to that gfid under the .glusterfs directory. Any additional hardlinks are literally hardlinks on the bricks with the gfid filename being a permanent reference which prevents hardlinks from being broken due to self-heals and such.
23:15 JoeJulian There's this: https://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
23:15 glusterbot Title: What is this new .glusterfs directory in 3.3? (at joejulian.name)
23:22 david_ ok thank you
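
JoeJulian's description can be checked directly on a brick; a sketch with made-up brick paths (getfattr comes from the attr package):

# getfattr -n trusted.gfid -e hex /bricks/brick1/dir1/file1   # prints the file's gfid as hex
# stat -c %h /bricks/brick1/dir1/file1                        # link count >= 2: the visible name plus the .glusterfs/xx/yy/<gfid> entry
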
23:25 Klas joined #gluster
23:43 kukulogy joined #gluster
23:57 arcolife joined #gluster
