
IRC log for #gluster, 2016-09-09


All times shown according to UTC.

Time Nick Message
00:09 johnmilton joined #gluster
00:42 BitByteNybble110 joined #gluster
00:57 shdeng joined #gluster
00:59 plarsen joined #gluster
01:35 derjohn_mobi joined #gluster
01:46 Lee1092 joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:25 squizzi joined #gluster
03:12 ira joined #gluster
03:18 squizzi joined #gluster
03:20 magrawal joined #gluster
03:22 shubhendu joined #gluster
03:44 david_ joined #gluster
03:45 david_ hi yeah, does anyone know: if I set cluster.self-heal-daemon off, will files still be healed when I access them from a FUSE mount point? Or only when I execute the heal command?
03:46 david_ one of our servers is currently under high load as it is healing metadata, and the client couldn't access the volume. as soon as I stop the heal, it comes back to life
03:57 eightyeight if i create a distributed volume, then enable sharding, FUSE-mount the volume, and write 100MB, i only see 16MB (my shard size) on one brick
03:57 eightyeight where is the rest of the data?
04:04 squizzi joined #gluster
04:05 riyas joined #gluster
04:07 sanoj joined #gluster
04:07 eightyeight and if i create a distributed replicate volume, with a 4MB shard size, it cuts off writing at the FUSE mount at 4MB. i can't write the full file
04:07 eightyeight what am i missing with sharding?
04:10 eightyeight ah. /.shard/ contains the rest of the data
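
A minimal sketch of how that plays out, with the volume name (gv0), brick path, and shard size as placeholders:

    # enable sharding and pick a shard size before writing data (hypothetical volume "gv0")
    gluster volume set gv0 features.shard on
    gluster volume set gv0 features.shard-block-size 16MB
    # after a write through the FUSE mount, only the first shard sits at the file's own
    # path on a brick; the remaining shards live under the hidden .shard directory
    ls -lh /data/brick1/gv0/.shard/
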
04:12 eightyeight https://ae7.st/p/2fg is the error i'm seeing
04:12 glusterbot Title: Pastebin on ae7.st » 2fg (at ae7.st)
04:13 prasanth joined #gluster
04:14 aspandey joined #gluster
04:16 atinm joined #gluster
04:19 kdhananjay joined #gluster
04:19 atinmu joined #gluster
04:35 karthik_ joined #gluster
04:37 d4n13L joined #gluster
04:38 rafi joined #gluster
04:47 rastar joined #gluster
04:47 Gnomethrower joined #gluster
04:50 ashiq joined #gluster
04:53 kramdoss_ joined #gluster
04:58 kshlm joined #gluster
04:59 hchiramm joined #gluster
04:59 JoeJulian david_: I recommend turning on client self-heal and leaving the daemon on.
04:59 ndarshan joined #gluster
05:00 JoeJulian eightyeight: Check the client log.
05:01 david_ hey JoeJulian, the problem is 4 of the replica bricks were offline for a few days. I've been syncing them back, but today it seems to be doing metadata self heal
05:02 david_ while it is running, we see poor performance on gluster. as soon as I stop self healing, it is back to normal
05:03 david_ the other replica server is under load, too. not sure if there is a way to manually run this process
05:05 kotreshhr joined #gluster
05:06 JoeJulian Try disabling client-side heals and see if that resolves it. I've had good luck.
05:06 david_ how would I do that? just kill self heal daemon on clients ?
05:08 david_ i found this:
05:08 david_ "gluster volume set <volname> cluster.entry-self-heal off "gluster volume set <volname> cluster.data-self-heal off "gluster volume set <volname> cluster.metadata-self-heal off
05:08 david_ cluster.data-self-heal off
05:09 JoeJulian set each of "cluster.{metadata,data,entry}-self-heal" off
05:09 JoeJulian right
05:09 david_ ok, then server will do self-healing if it needed to, is that right ?
05:09 JoeJulian right
05:10 JoeJulian and if a client encounters a file that needs healed, it creates the entry the server uses to trigger the heal.
05:11 david_ Ok, I will try that. thank you very much
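
A minimal sketch of the settings discussed above, assuming a volume named myvol; the self-heal daemon stays on while the three client-side heal types are switched off:

    gluster volume set myvol cluster.self-heal-daemon on      # keep the server-side daemon running
    gluster volume set myvol cluster.metadata-self-heal off   # disable client-side metadata heal
    gluster volume set myvol cluster.data-self-heal off       # disable client-side data heal
    gluster volume set myvol cluster.entry-self-heal off      # disable client-side entry heal
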
05:13 aravindavk joined #gluster
05:18 jiffin joined #gluster
05:21 ndarshan joined #gluster
05:29 raghug joined #gluster
05:36 karnan joined #gluster
05:38 Philambdo joined #gluster
05:41 Saravanakmr joined #gluster
05:44 kramdoss_ joined #gluster
05:47 ppai joined #gluster
05:49 ju5t joined #gluster
05:49 ieth0 JoeJulian: any thoughts on  http://pastebin.com/x1syPgkM ?
05:49 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
05:50 Mattias Having issues with glusterfs using 100% cpu, can anyone read these profiles below?
05:50 Mattias https://gist.github.com/mattias/26e8b4d88b4c0f7d3eb1cc8e8279702b (120s)  and  https://gist.github.com/mattias/7b5e8d333abd51fde28afadbf7531bed (60s)
05:50 glusterbot Title: gluster 120s profile · GitHub (at gist.github.com)
05:51 JoeJulian Mattias: same as above. Disable client-side self-heal.
05:55 Mattias Thanks, I'll try that!
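
Profiles like the two gists above are usually collected with gluster's built-in profiler; a minimal sketch, assuming a volume named myvol:

    gluster volume profile myvol start   # begin collecting per-brick FOP statistics
    sleep 120                            # let the workload run for the sample window
    gluster volume profile myvol info    # print cumulative and interval statistics
    gluster volume profile myvol stop    # stop collecting when done
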
05:57 Bhaskarakiran joined #gluster
05:59 mhulsman joined #gluster
05:59 JoeJulian ieth0: not really. Check your brick logs for a corresponding error.
05:59 ankitraj joined #gluster
06:00 JoeJulian I'm off to bed.
06:00 masuberu joined #gluster
06:01 ieth0 JoeJulian: oh sorry, thanks for reply btw. have a good night
06:03 ramky joined #gluster
06:05 nishanth joined #gluster
06:06 jiffin joined #gluster
06:07 atinm joined #gluster
06:11 skoduri joined #gluster
06:17 raghug joined #gluster
06:26 ju5t joined #gluster
06:26 d0nn1e joined #gluster
06:27 hgowtham joined #gluster
06:31 Mattias self-heal off did not fix the cpu issue :(
06:33 Mattias This is what it looks like 24/7: https://gist.github.com/mattias/ddfaf54ef584a9a30c0b6e257f490c11  If only I could find out what it was doing...
06:33 glusterbot Title: cpu glusterfsd · GitHub (at gist.github.com)
06:34 jtux joined #gluster
06:39 Mattias Could it be ansible? I have a task which chowns the whole directory to a certain user:group when I run it. does that cause glusterfs to go through all files again? Takes a long time though... Ansible runs the chown on all three servers at once.
06:45 aspandey joined #gluster
06:47 satya4ever joined #gluster
06:57 mattb joined #gluster
06:58 deniszh joined #gluster
06:59 derjohn_mobi joined #gluster
07:01 jri joined #gluster
07:06 social joined #gluster
07:09 jkroon joined #gluster
07:09 Mattias Just deleted all files and cpu usage is gone...
07:09 Mattias I'll try to put them back and see what happens
07:17 aspandey joined #gluster
07:19 Klas I highly suggest you use some other method to verify ownership somehow, cause, yeah, that is quite a heavy operation
07:23 Mattias Klas: I just unzipped all the files again on the first node. high cpu usage again... all files are already synced.
07:24 Mattias Klas: I also commented things modifying the mounted directory in ansible so it won't change stuff on all three servers at once.
07:25 Klas as I said, I highly recommend you use some other method, any stat action, even just checking, will be costly
07:25 Mattias Yeah, not doing anything right now, just put the files there on node1
07:26 Mattias it's about 500mb of files (15852 files approx) shouldn't be too big of a problem for glusterfs?
07:26 Klas oh, large bunch of files
07:26 Klas still, you are correct, imo
07:27 Klas (I mean large bunch of small files, not a large quantity in general)
07:27 Mattias Should it really keep using cpu after all files are synced across all servers?
07:27 Klas every stat action on a file costs performance, but, well, no
07:27 Mattias I use replication, a replica of 3
07:28 Mattias Don't know what more I can check/try... should I wait it out perhaps? I'll check back on Monday I guess if it's something gluster has to do with all the files which takes a lot of time...
07:28 Klas my idle volume, replica 3 arb 1, with an excess of 500k files with a size of 1.7 TB, is, well, idle
07:28 Mattias Interesting...
07:28 Klas it's not currently mounted, though
07:29 Klas so it's really idle
07:29 Mattias Do you generally get high cpu after having put a lot of small files on the filesystem?
07:29 robb_nl joined #gluster
07:30 Klas haven't done the benchmarking bit very thoroughly yet, we are just about to start the pilot actually
07:30 Klas we won't have proper data until april-may or so, unfortunately (that's when the big consumers will have been connected for a month or so)
07:31 Mattias node3 is using the least cpu, 40-50% combining all glusterfs processes
07:31 Mattias Not sure why that is... all three servers are identical
07:31 derjohn_mobi joined #gluster
07:32 Mattias Well, I'll wait it out until Monday. If it is not idle by then something must be wrong.
07:33 jri_ joined #gluster
07:34 nbalacha joined #gluster
07:40 jwd joined #gluster
07:40 tomaz__ joined #gluster
07:40 k4n0 joined #gluster
07:43 masber joined #gluster
07:44 sandersr joined #gluster
07:47 robb_nl joined #gluster
07:52 fsimonce joined #gluster
08:01 TZaman joined #gluster
08:05 nbalacha joined #gluster
08:09 nbalacha joined #gluster
08:09 devyani7 joined #gluster
08:19 Slashman joined #gluster
08:28 ju5t joined #gluster
08:40 tomaz__ joined #gluster
08:45 jtux joined #gluster
08:49 jri joined #gluster
08:52 Saravanakmr joined #gluster
08:52 philiph joined #gluster
08:56 philiph I have a problem with a replace-brick operation: 'filling' of the new brick seems to have stopped for some reason. Brick size 23TB, filled to 21TB on the source, but the new brick stopped at 16TB. Any ideas how to resolve this or check what is wrong?
09:04 Wizek joined #gluster
09:09 TZaman joined #gluster
09:14 ankitraj left #gluster
09:23 philiph joined #gluster
09:23 philiph I followed this tutorial https://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
09:23 glusterbot Title: How to expand GlusterFS replicated clusters by one server (at joejulian.name)
09:24 prasanth joined #gluster
09:24 mhulsman joined #gluster
09:25 philiph I did the replace-brick command but the process stopped before the bricks are equal in size
09:25 philiph Tutorial says to check the log. Which gluster log should I look into? On the gluster server or the new node?
09:25 philiph Thanks for the help
09:29 philiph Is there a way to check if the replace-brick is done?
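
A minimal sketch of ways to watch heal progress after a replace-brick, assuming a volume named myvol; the self-heal daemon log on each server is usually the most telling:

    gluster volume heal myvol info                     # entries still pending heal, per brick
    gluster volume heal myvol statistics heal-count    # quick per-brick pending counts
    gluster volume status myvol                        # confirm the new brick and shd processes are online
    tail -f /var/log/glusterfs/glustershd.log          # self-heal daemon log on the servers
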
09:35 philiph joined #gluster
09:42 rafi joined #gluster
10:10 Wizek_ joined #gluster
10:12 aspandey joined #gluster
10:16 nbalacha joined #gluster
10:19 ppai_ joined #gluster
10:23 ira joined #gluster
10:25 nbalacha joined #gluster
10:26 rastar joined #gluster
10:34 ira joined #gluster
10:38 jiffin joined #gluster
10:48 aspandey joined #gluster
10:48 ju5t joined #gluster
10:51 philiph joined #gluster
10:58 Wizek_ joined #gluster
10:58 philiph joined #gluster
11:06 ashiq joined #gluster
11:09 arcolife joined #gluster
11:11 karnan joined #gluster
11:12 ju5t joined #gluster
11:17 aspandey joined #gluster
11:17 lkoranda joined #gluster
11:18 Gnomethrower joined #gluster
11:22 Gnomethrower joined #gluster
11:23 philiph joined #gluster
11:26 ju5t joined #gluster
11:26 eMBee joined #gluster
11:33 philiph joined #gluster
11:37 ira_ joined #gluster
11:41 nishanth joined #gluster
11:46 atinm joined #gluster
12:02 raghug joined #gluster
12:03 pdrakeweb joined #gluster
12:05 pdrakeweb joined #gluster
12:05 jtux joined #gluster
12:12 B21956 joined #gluster
12:16 pdrakeweb joined #gluster
12:20 jri_ joined #gluster
12:23 jri__ joined #gluster
12:27 hagarth joined #gluster
12:28 kdhananjay joined #gluster
12:46 rafi joined #gluster
12:51 shyam joined #gluster
12:52 unclemarc joined #gluster
12:56 bkolden joined #gluster
13:00 UnchainedUnicorn joined #gluster
13:01 philiph joined #gluster
13:01 UnchainedUnicorn Hi, can I ask if anyone has used mdtest for benchmarking their system?
13:04 amud joined #gluster
13:05 amud help
13:05 amud left #gluster
13:09 johnmilton joined #gluster
13:11 devyani7 joined #gluster
13:12 ju5t joined #gluster
13:16 ashiq joined #gluster
13:16 squizzi joined #gluster
13:19 philiph joined #gluster
13:23 plarsen joined #gluster
13:24 shyam joined #gluster
13:26 david_ joined #gluster
13:27 david_ Hi JoeJulian, I just saw one of your msg on Gluster mailing-list regarding "remove orphaned linkfile"
13:27 david_ https://www.gluster.org/pipermail/gluster-users/2014-December/019989.html
13:27 glusterbot Title: [Gluster-users] Hundreds of duplicate files (at www.gluster.org)
13:28 david_ where you would run:
13:28 david_ find $brick_root -type f -size 0 -perm 1000 -exec /bin/rm {} \;
13:28 Muthu_ joined #gluster
13:28 david_ which finds files which are 0 bytes and have 1000 permissions.
13:29 david_ However, I found another script on github, which removes files with 1000 permissions only
13:29 david_ find "${export_directory}" -type f -perm +01000 -exec rm -v '{}' \;
13:29 david_ I've been using this to delete link files: find . -perm 1000 -size +0c
13:29 david_ which searches for files with size greater than 0 bytes and 1000 permissions.
13:30 david_ I'm wondering which one is the correct one ?
13:30 david_ is a linkfile supposed to be 0 bytes or non-0 bytes?
13:31 ivan_rossi joined #gluster
13:31 david_ someone else recommends to remove files with 2 hardlinks: find .glusterfs -type f -links -2 -exec rm {} \;
13:32 david_ sry, less than 2 hardlinks
13:35 plarsen joined #gluster
13:39 david_ according to this: http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/rebalance/
13:39 glusterbot Title: Rebalance - Gluster Docs (at staged-gluster-docs.readthedocs.io)
13:39 david_ A link file is an empty file, which has an extended attribute set that points to the subvolume on which the actual file exists
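
That attribute can be inspected directly on the brick; a minimal sketch, with the brick path as a placeholder:

    # a DHT linkfile is 0 bytes, has the sticky bit as its only mode bit (---------T),
    # and carries a linkto xattr naming the subvolume that holds the real file
    stat -c '%s %A' /data/brick1/gv0/path/to/file
    getfattr -n trusted.glusterfs.dht.linkto -e text /data/brick1/gv0/path/to/file
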
13:39 devyani7 joined #gluster
13:40 mreamy joined #gluster
13:41 raghug joined #gluster
13:48 harish joined #gluster
13:55 baojg joined #gluster
13:57 jkroon david_, i've seen other files under .glusterfs with link count 1 that I don't think are safe to remove.
13:57 jkroon anyway, the link count == 1 achieves removing orphaned gfid files.  I've filed a bug about an improved way of handling them recently.
13:58 mhulsman joined #gluster
14:00 mhulsman joined #gluster
14:02 mhulsman joined #gluster
14:11 guhcampos joined #gluster
14:17 skylar joined #gluster
14:18 ramky joined #gluster
14:25 bowhunter joined #gluster
14:25 bluenemo joined #gluster
14:26 deniszh joined #gluster
14:27 Wizek_ joined #gluster
14:32 ju5t joined #gluster
14:46 hagarth joined #gluster
14:48 wushudoin joined #gluster
15:00 jobewan joined #gluster
15:05 sysanthrope joined #gluster
15:11 Lee1092 joined #gluster
15:13 arcolife joined #gluster
15:17 guhcampos joined #gluster
15:18 hagarth joined #gluster
15:20 cholcombe joined #gluster
15:20 prasanth joined #gluster
15:23 shyam joined #gluster
15:29 shaunm joined #gluster
15:33 david_ hey jkroon, how about removing linkfiles using 0 bytes and permission 1000? which one is the correct way to deal with it? 0 bytes or greater than 0 bytes?
15:39 jkroon i've not run into that before so I haven't read up on that just yet.
15:40 jkroon just noticed your -links 1 and thought I'd give you a heads up.
15:46 JoeJulian david_: linkfile is supposed to be 0 bytes.
15:47 JoeJulian david_: it's possible (however it's very unlikely) that you could have a real file on your volume that you've set mode 1000 for some non-gluster-related reason. That's why I only selected 0 sized files. Even that carries a small element of risk that you'd created an empty mode 1000 file on your volume.
15:48 JoeJulian If you wanted to be pedantic, you could further check those files to ensure they have the "dht.linkto" extended attribute.
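
A minimal sketch of that extra check, assuming the brick root is /data/brick1/gv0 (run against the brick, not the mount, and review the list before deleting anything):

    # list 0-byte mode-1000 files that also carry the DHT linkto xattr
    find /data/brick1/gv0 -path '*/.glusterfs' -prune -o -type f -size 0 -perm 1000 -print0 |
      while IFS= read -r -d '' f; do
        getfattr -n trusted.glusterfs.dht.linkto "$f" >/dev/null 2>&1 && printf '%s\n' "$f"
      done > linkfile-candidates.txt
    # after reviewing the list:  xargs -d '\n' rm -v < linkfile-candidates.txt
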
16:00 david_ @jkroon: ah thanks. the -links 1 is someone else's. I don't use it yet.
16:00 david_ @JoeJulian: yeah, that makes sense.
16:00 jwd joined #gluster
16:03 jkroon yea - short of the developers, JoeJulian is the single person I suspect knows the most about glusterfs.  i'd trust his judgement.
16:04 jkroon btw JoeJulian you did see my theory on the  f-up from yesterday?
16:04 jkroon (comes back to that bad md raid stuff from >4.0 kernels).
16:06 JoeJulian Yeah, I sympathize. Wish there was more I could offer than that.
16:06 jkroon no need.
16:06 jwaibel joined #gluster
16:06 jkroon you've given me LOADS of help.
16:07 hagarth joined #gluster
16:11 kpease joined #gluster
16:19 kukulogy joined #gluster
16:23 Gambit15 joined #gluster
16:32 kukulogy joined #gluster
16:33 devyani7 joined #gluster
16:34 ivan_rossi left #gluster
16:43 shaunm joined #gluster
16:54 slunatecqo joined #gluster
16:54 kukulogy joined #gluster
16:55 slunatecqo Hi - I just wanted to ask - anyone of you used gluster for sharing docker volumes between nodes in your cluster? Is that a good idea even to production?
16:55 JoeJulian I've recently seen several blog articles about people doing just that.
16:56 slunatecqo So you belive it is?
16:56 JoeJulian like anything, it depends on the use case. I don't think it's a good idea to run docker, but that's why we have a wealth of alternatives.
16:57 d0nn1e joined #gluster
16:58 slunatecqo Well, docker is one of the few things I can not change :D . So if we stay with docker, do you think there is a better solution than gluster?
16:58 JoeJulian I think gluster's an excellent general solution to having a single filesystem that's accessible from multiple clients simultaneously.
16:59 slunatecqo OK. Thanks
16:59 slunatecqo And what do you have against docker anyway?
17:00 JoeJulian It uses cgroups (not cgroups v2) which is insecure and doesn't isolate resources.
17:01 slunatecqo oh... Good to know...
17:01 JoeJulian s/insecure/not secure/
17:01 glusterbot What JoeJulian meant to say was: It uses cgroups (not cgroups v2) which is not secure and doesn't isolate resources.
17:01 JoeJulian There is a slight difference.
17:02 slunatecqo ok
17:05 markd_ joined #gluster
17:05 JoeJulian My view of docker: https://twitter.com/JoeCyberGuru/status/728733009067679744
17:07 d0nn1e joined #gluster
17:11 post-factum JoeJulian: you are too cynical ;)
17:12 JoeJulian If I was wrong more often it would help prevent that.
17:14 post-factum JoeJulian: but it is ok, be proud of it
17:19 TZaman joined #gluster
17:42 slunatecqo left #gluster
17:43 kotreshhr left #gluster
18:04 jwd joined #gluster
18:08 mreamy joined #gluster
18:15 unclemarc joined #gluster
18:17 aspandey joined #gluster
18:23 unclemarc joined #gluster
19:05 hagarth joined #gluster
19:24 slunatecqo joined #gluster
19:25 slunatecqo Hi - I want to run the gluster server in docker. My question is, what ports do I need to enable for the container? (what ports does gluster need to work)
19:27 hagarth @ports
19:27 glusterbot hagarth: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up. All ports must be reachable by both servers and clients. Additionally it will listen on 38465-38468/tcp for NFS. NFS also depends on rpcbind/portmap ports 111 and 2049.
19:27 hagarth slunatecqo: ^^
19:28 slunatecqo hagarth: yeah. I am reading this right now https://support.rackspace.com/how-to/getting-started-with-glusterfs-considerations-and-installation/
19:28 glusterbot Title: Get started with GlusterFS - considerations and installation (at support.rackspace.com)
19:28 slunatecqo it seems I need to find another way than allowing one port after another
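
A minimal sketch of publishing those ports on a server container; the image name, brick path, and brick count are placeholders (host networking is the other common approach):

    # 24007 = glusterd, 24008 = rdma, 49152+ = one port per brick,
    # 111/2049/38465-38468 only if the built-in NFS server is used
    docker run -d --name gluster-server \
      -v /data/brick1:/data/brick1 \
      -p 24007:24007 -p 24008:24008 \
      -p 49152-49156:49152-49156 \
      -p 111:111 -p 2049:2049 -p 38465-38468:38465-38468 \
      gluster/gluster-centos
    # or skip port mapping entirely with host networking:
    # docker run -d --net=host -v /data/brick1:/data/brick1 gluster/gluster-centos
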
19:29 slunatecqo and the second question: Is there any way to have a peer, that is not in local network?
19:37 derjohn_mobi joined #gluster
19:39 kramdoss_ joined #gluster
20:08 JoeJulian slunatecqo: Sure, as long as it can be routed to without nat. If you can't do that, consider vxlan or ipsex.
20:08 JoeJulian sec
20:08 JoeJulian Those keys are too close together...
20:08 JoeJulian ipsec.
20:09 slunatecqo JoeJulian: Thank you very much :-)
20:10 slunatecqo left #gluster
21:05 david__ joined #gluster
21:06 david__ Hi, I have a question regarding healing a replica brick. If one of the bricks (10TB) crashes and I need to re-sync it, what is the quickest way to do it? Can I copy the data from the other brick (excluding the .glusterfs dir), and then re-add it to the cluster?
21:07 david__ will Gluster be able to check the data already on there, and only fix metadata or link it to .glusterfs?
21:07 shyam joined #gluster
21:08 JoeJulian I've never quite understood why people ask that. Why not just let gluster heal it? Is scp supposed to somehow be faster than a tool that was actually designed to move data across networks?
21:08 JoeJulian (scp was not, btw)
21:09 david__ yeah, I would like to let gluster fix itself. However, it's been running for ages and I haven't seen much data being copied. And we also hit a performance issue when it happens.
21:09 JoeJulian I mean sure, if you want to attach the storage locally to the machine that has it - that I would get (I would still let gluster heal then I would replace-brick to the new hostname).
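
A minimal sketch of the path JoeJulian describes - let the surviving replica heal the data, then repoint the brick at its new home - with volume, host, and path names as placeholders:

    gluster volume heal myvol full        # queue a full crawl rather than waiting on the heal index
    # once healed, move the brick definition to the new hostname
    gluster volume replace-brick myvol oldserver:/data/brick1/myvol newserver:/data/brick1/myvol commit force
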
21:10 david__ ( remember I asked you about this self-healing in the last day or so )
21:10 JoeJulian did I have you disable client-side heals?
21:10 david__ yes, I did
21:10 JoeJulian still not performing adequately?
21:11 david__ performance is ok, but it seems to have synced only about 30GB of data in the last 12h
21:11 JoeJulian I've repaired 60TB inside a week without my clients noticing, so that's why I ask.
21:12 JoeJulian Is that 30GB of *new* data? Was the brick empty when the heal started?
21:13 david__ so I started the self-healing a few days ago; it did around 3TB before hitting the performance issue.
21:13 david__ then I followed your advice to disable client self-healing
21:13 david__ and it did another 30GB since then
21:14 david__ cluster.metadata-self-heal: off cluster.data-self-heal: off cluster.entry-self-heal: off
21:14 JoeJulian But it was empty when the whole process started?
21:14 david__ I don't know if 30GB are new data as I haven't checked.
21:14 david__ yes, it was empty
21:15 JoeJulian Asking because if it was not, it could be checking data to see if it needs healed. If it doesn't, there's no data transfer.
21:16 JoeJulian It locks a chunk of file, creates a hash, compares that hash. If it matches, it moves on to the next chunk.
21:16 JoeJulian Lots of cpu but very little data transfer.
21:16 david__ yeah, I saw CPU usage went up to 60
21:16 david__ I meant Load Average
21:17 JoeJulian You can actually see which file self-heal is crawling and where in that file it is in the state dump data.
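
A minimal sketch of pulling that state dump, assuming a volume named myvol; the dump files land under /var/run/gluster on the server by default:

    gluster volume statedump myvol   # ask each brick process to write a state dump
    ls /var/run/gluster/             # files are named <brick-path>.<pid>.dump.<timestamp>
    less /var/run/gluster/*.dump.*   # the lock and inode sections show what self-heal is working on
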
21:17 david__ do you have instruction on your website to do that ?
21:17 david__ I'm thinking about setting up: cluster.background-self-heal-count: 1
21:18 david__ then re-enable client-side self-healing and ls files on that brick
21:18 JoeJulian Not yet... I'm moving my blog to a new host (and upgrading software, salting it, etc) then I'll be writing some new articles.
21:18 david__ ah that's cool
21:18 JoeJulian I wouldn't. I've heard the client-side self-heal is going away anyway.
21:19 JoeJulian I think the performance issue is some sort of lock contention between clients.
21:19 david__ yeah, I couldn't run ls on the volume from the client
21:19 JoeJulian But since turning it off fixes it, and learning that it's going to be gone eventually anyway, I figured I'll just stick with the daemon.
21:20 david__ you mean let gluster server (shd daemon) crawl and fix itself ?
21:20 david__ I am just afraid it may take weeks to complete it
21:21 david__ I did heal another brick last month, and it was ok, running it 24/7 and no performance issue
21:22 JoeJulian So... what is your concern if it _did_ take weeks?
21:23 david__ I have to upgrade other storage servers + networking as well. I can't really turn off the other server to do such maintenance
21:24 david__ so it's all dependencies.
21:24 JoeJulian makes sense.
21:27 david__ and I can't sleep well with the feeling that there is only 1 brick up :)
21:27 david__ self-healing seems to focus on metadata healing at the moment, lots of messages like that in the log:
21:27 david__ metadata self heal  is successfully completed, foreground data self heal  is successfully completed,  data self heal from vol-client-5  to sinks  vol-client-4, with 82481 bytes on lvol-client-4, 82481 bytes on vol-client-5
21:35 JoeJulian Well.... if you think it would be faster to actually move all the data across the network than it is to check the data to see if it needs moved (maybe it is?) you could change the self-heal type to full (default is auto). (see gluster volume set help. search for full)
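
A minimal sketch of the change being described, assuming a volume named myvol:

    gluster volume set help | grep -A4 data-self-heal-algorithm   # read the option description first
    gluster volume set myvol cluster.data-self-heal-algorithm full
    # "full" copies whole files from the good replica instead of checksumming chunks ("diff");
    # it can be reverted later with: gluster volume reset myvol cluster.data-self-heal-algorithm
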
21:42 david__ ok, so that is cluster.data-self-heal-algorithm - does that apply to Server side healing ?
21:43 JoeJulian any, yes.
21:43 david__ do I need to re-enable client-side
21:43 david__ healing that I set earlier
21:43 JoeJulian No, you don't need to re-enable client heals.
21:44 JoeJulian I mean, feel free to experiment. I'm not trying to tell you what to do. I just had to get help from a developer to solve that performance problem and that's what she had me do.
21:45 david__ yes, thank you
21:45 david__ I've learned more from you in the last few days than I did on my own in the last few months
21:46 JoeJulian I'm glad I've been helpful.
21:46 JoeJulian I tried to do this in the dark when I started and it was way more complicated to figure out back then.
21:47 david__ so with that "full" option, the data will be copied from brick 2 to brick 1 (the failed brick - it already has 3.5TB/10TB of data), and even files that already exist on br1 will be copied and overwritten?
21:47 JoeJulian right
21:48 david__ yeah, I only have experience with other SAN/XSAN technology; I find that there is still a lack of technical explanation for glusterfs.
21:49 david__ but your website fills that gap, i hope you can keep it updated
21:49 JoeJulian https://github.com/gluster/glusterdocs pull requests accepted. ;)
21:49 glusterbot Title: GitHub - gluster/glusterdocs: This repo had it's git history re-written on 19 May 2016. Please create a fresh fork or clone if you have an older local clone. (at github.com)
22:04 d0nn1e joined #gluster
22:30 Peppard joined #gluster
22:45 plarsen joined #gluster
23:04 wushudoin joined #gluster
23:10 snehring joined #gluster
