
IRC log for #gluster, 2017-03-14


All times shown according to UTC.

Time Nick Message
00:01 kramdoss_ joined #gluster
00:15 farhorizon joined #gluster
00:44 daMaestro joined #gluster
01:05 baber joined #gluster
01:08 jkroon joined #gluster
01:10 shdeng joined #gluster
01:17 farhorizon joined #gluster
02:05 skoduri joined #gluster
02:21 MrAbaddon joined #gluster
02:24 derjohn_mob joined #gluster
02:46 jkroon joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:50 kramdoss_ joined #gluster
02:51 masber joined #gluster
03:03 Gambit15 joined #gluster
03:08 buvanesh_kumar joined #gluster
03:18 farhorizon joined #gluster
03:43 magrawal joined #gluster
03:44 riyas joined #gluster
03:47 dominicpg joined #gluster
03:48 rafi joined #gluster
03:49 prasanth joined #gluster
03:53 nbalacha joined #gluster
03:54 rafi joined #gluster
03:56 susant joined #gluster
03:56 msvbhat joined #gluster
03:56 atinm joined #gluster
03:59 major anyone understand the internal mechanics of 'snapshot restore' ?
04:01 ppai joined #gluster
04:05 rafi joined #gluster
04:06 gyadav joined #gluster
04:10 rafi joined #gluster
04:11 itisravi joined #gluster
04:22 rejy joined #gluster
04:26 atinm joined #gluster
04:27 Shu6h3ndu joined #gluster
04:28 RameshN joined #gluster
04:30 MrAbaddon joined #gluster
04:33 raghu joined #gluster
04:33 ankitr joined #gluster
04:35 sanoj joined #gluster
04:37 jiffin joined #gluster
04:38 rafi joined #gluster
04:47 karthik_us joined #gluster
04:52 kdhananjay joined #gluster
04:53 itisravi joined #gluster
04:56 sanoj joined #gluster
04:56 apandey joined #gluster
04:57 skumar joined #gluster
05:04 msvbhat joined #gluster
05:05 k4n0 joined #gluster
05:06 skoduri joined #gluster
05:06 ankitr joined #gluster
05:07 ppai joined #gluster
05:11 Prasad joined #gluster
05:12 skumar_ joined #gluster
05:15 ndarshan joined #gluster
05:18 apandey joined #gluster
05:20 farhorizon joined #gluster
05:23 ankitr joined #gluster
05:29 k4n0 joined #gluster
05:30 hgowtham joined #gluster
05:35 ppai joined #gluster
05:37 Saravanakmr joined #gluster
05:38 rastar joined #gluster
05:39 ira joined #gluster
05:47 riyas joined #gluster
05:47 hgowtham joined #gluster
05:50 d0nn1e joined #gluster
05:53 msvbhat joined #gluster
05:54 armyriad joined #gluster
06:02 Shu6h3ndu joined #gluster
06:08 ankush joined #gluster
06:12 raghu joined #gluster
06:21 msvbhat joined #gluster
06:21 ahino joined #gluster
06:23 susant joined #gluster
06:23 Wizek_ joined #gluster
06:28 skumar joined #gluster
06:29 jkroon joined #gluster
06:30 itisravi joined #gluster
06:36 k4n0 joined #gluster
06:37 buvanesh_kumar joined #gluster
06:45 nbalacha joined #gluster
06:45 apandey joined #gluster
06:53 ashiq joined #gluster
06:55 kdhananjay joined #gluster
06:55 kharloss joined #gluster
06:56 aardbolreiziger joined #gluster
06:57 aardbolreiziger joined #gluster
07:00 k4n0 joined #gluster
07:02 sona joined #gluster
07:12 mhulsman joined #gluster
07:15 susant joined #gluster
07:16 mhulsman1 joined #gluster
07:17 [diablo] joined #gluster
07:19 kharloss joined #gluster
07:21 farhorizon joined #gluster
07:25 jwd joined #gluster
07:25 jtux joined #gluster
07:27 aardbolreiziger joined #gluster
07:30 kharloss_ joined #gluster
07:32 aardbolreiziger joined #gluster
07:34 kharloss joined #gluster
07:35 chawlanikhil24 joined #gluster
07:39 kdhananjay joined #gluster
07:40 skoduri joined #gluster
07:43 m0zes joined #gluster
07:55 Humble joined #gluster
07:55 sona joined #gluster
07:56 rafi1 joined #gluster
08:05 kharloss joined #gluster
08:10 skoduri joined #gluster
08:12 mbukatov joined #gluster
08:18 level7 joined #gluster
08:33 level7 joined #gluster
08:35 sanoj joined #gluster
08:43 mhulsman joined #gluster
08:44 OtaKAR joined #gluster
08:51 karthik_us joined #gluster
08:55 hybrid512 joined #gluster
08:56 mhulsman joined #gluster
08:57 ankitr joined #gluster
09:01 chawlanikhil24 joined #gluster
09:02 OtaKAR joined #gluster
09:07 nbalacha joined #gluster
09:14 ivan_rossi joined #gluster
09:14 ivan_rossi left #gluster
09:14 derjohn_mob joined #gluster
09:17 sanoj joined #gluster
09:18 sona joined #gluster
09:19 rastar joined #gluster
09:23 farhorizon joined #gluster
09:25 hybrid512 joined #gluster
09:26 jiffin joined #gluster
09:29 nbalacha joined #gluster
09:31 fsimonce joined #gluster
09:32 kdhananjay joined #gluster
09:41 msvbhat joined #gluster
09:45 hybrid512 joined #gluster
09:49 hgowtham joined #gluster
09:50 sona joined #gluster
09:51 RameshN joined #gluster
09:52 hybrid512 joined #gluster
09:58 hybrid512 joined #gluster
10:00 kotreshhr joined #gluster
10:01 ashiq joined #gluster
10:02 kpease joined #gluster
10:04 skoduri joined #gluster
10:04 Seth_Karlo joined #gluster
10:05 Seth_Kar_ joined #gluster
10:06 ahino joined #gluster
10:06 Seth_Kar_ joined #gluster
10:07 ankitr joined #gluster
10:08 Ashutto joined #gluster
10:08 Ashutto Hello
10:08 glusterbot Ashutto: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:08 Seth_Kar_ joined #gluster
10:14 MrAbaddon joined #gluster
10:15 ahino joined #gluster
10:21 ahino joined #gluster
10:22 k4n0 joined #gluster
10:25 mhulsman1 joined #gluster
10:33 nishanth joined #gluster
10:37 RameshN joined #gluster
10:38 rafi1 joined #gluster
10:45 itisravi Ashutto: Hi, just FYI, I have created BZ 1431908 to keep track of the duplicate file bug.
10:46 mhulsman2 joined #gluster
10:46 msvbhat joined #gluster
10:49 Seth_Karlo joined #gluster
10:53 Ashutto Thanks itisravi
10:59 Ashutto CC-ed myself :)
11:11 rafi joined #gluster
11:14 rafi joined #gluster
11:16 mb_ joined #gluster
11:41 skoduri joined #gluster
11:43 gyadav joined #gluster
11:44 arpu joined #gluster
11:45 derjohn_mob joined #gluster
12:01 fcoelho joined #gluster
12:06 pcammarata joined #gluster
12:10 nishanth joined #gluster
12:16 ankitr joined #gluster
12:17 ankitr joined #gluster
12:19 MadPsy joined #gluster
12:19 MadPsy joined #gluster
12:32 Philambdo joined #gluster
12:34 hgowtham joined #gluster
12:44 ashiq joined #gluster
12:48 mhulsman1 joined #gluster
12:50 unclemarc joined #gluster
12:59 sona joined #gluster
12:59 baber joined #gluster
12:59 bfoster joined #gluster
13:04 jiffin1 joined #gluster
13:07 rafi1 joined #gluster
13:08 Seth_Karlo joined #gluster
13:12 atinm joined #gluster
13:14 MadPsy joined #gluster
13:14 MadPsy joined #gluster
13:16 mhulsman joined #gluster
13:17 pioto joined #gluster
13:18 ankush joined #gluster
13:18 kharloss joined #gluster
13:19 rafi joined #gluster
13:22 ankush joined #gluster
13:25 oajs joined #gluster
13:29 ira joined #gluster
13:31 rafi joined #gluster
13:32 plarsen joined #gluster
13:34 susant left #gluster
13:40 rafi joined #gluster
13:45 nishanth joined #gluster
13:46 mhulsman joined #gluster
13:48 squizzi joined #gluster
13:48 rafi1 joined #gluster
13:51 MadPsy joined #gluster
13:51 MadPsy joined #gluster
13:56 Jacob8432 joined #gluster
13:58 buvanesh_kumar joined #gluster
14:01 kramdoss_ joined #gluster
14:02 Gambit15 joined #gluster
14:02 skylar joined #gluster
14:06 Jacob843 joined #gluster
14:21 kharloss joined #gluster
14:26 farhorizon joined #gluster
14:36 niknakpaddywak joined #gluster
14:45 pcammarata joined #gluster
14:47 farhorizon joined #gluster
14:49 niknakpaddywak joined #gluster
14:54 niknakpaddywak joined #gluster
14:58 ashiq joined #gluster
15:00 susant joined #gluster
15:04 mhulsman1 joined #gluster
15:05 susant left #gluster
15:11 wushudoin joined #gluster
15:16 rafi joined #gluster
15:19 jkroon joined #gluster
15:23 Philambdo joined #gluster
15:31 msvbhat joined #gluster
15:36 ankush joined #gluster
15:39 derjohn_mob joined #gluster
15:40 Tanner__ Hey guys, I was in here yesterday asking about how to copy ~4 TB worth of small files onto a gluster volume. I've tried a few different things, but not much seems to be working. Right now I have the volumes I'm copying the data from (EBS volumes) mounted right on the gluster server (1 of 4), but it is still very slow
15:44 Clone Tanner__: have you tried running copies in parallel?
15:45 Clone so different directories and such?
15:45 Clone also depends on the number of procs, as gluster seems to handle this in serial, which is a performance bottleneck.
15:46 ivan_rossi joined #gluster
15:46 ivan_rossi left #gluster
15:54 Shu6h3ndu joined #gluster
16:02 Tanner__ Clone, I've tried doing it using parallel and splitting the file list. I've also tried using parsync, although I get some strange error with it, possibly due to the generated file list being 1.8GB. I have the file list saved and was going to try running rsync in parallel using --files-from and splitting the file list now
16:05 oajs joined #gluster
16:11 Jacob843 joined #gluster
16:13 Jacob843 joined #gluster
16:15 baber joined #gluster
16:28 Gambit15 joined #gluster
16:36 level7 joined #gluster
16:37 gyadav joined #gluster
16:39 gyadav_ joined #gluster
16:42 Wizek_ joined #gluster
16:44 mhulsman joined #gluster
16:45 gyadav joined #gluster
16:58 gyadav_ joined #gluster
17:05 shaunm joined #gluster
17:07 shaunm joined #gluster
17:07 baber joined #gluster
17:08 gyadav_ joined #gluster
17:10 gyadav__ joined #gluster
17:14 gyadav__ joined #gluster
17:16 gyadav_ joined #gluster
17:19 gyadav__ joined #gluster
17:21 gyadav_ joined #gluster
17:22 mhulsman joined #gluster
17:24 cholcombe joined #gluster
17:27 gyadav__ joined #gluster
17:29 gyadav_ joined #gluster
17:34 cliluw joined #gluster
17:38 gyadav__ joined #gluster
17:41 kramdoss_ joined #gluster
17:44 gyadav__ joined #gluster
17:46 sona joined #gluster
17:49 gyadav_ joined #gluster
17:49 ira joined #gluster
17:52 gyadav__ joined #gluster
17:52 mhulsman joined #gluster
17:53 ira joined #gluster
17:54 gyadav_ joined #gluster
17:58 gyadav__ joined #gluster
18:07 gyadav_ joined #gluster
18:10 gyadav__ joined #gluster
18:11 oajs joined #gluster
18:18 f0rpaxe joined #gluster
18:23 gyadav_ joined #gluster
18:24 k4n0 joined #gluster
18:25 gyadav__ joined #gluster
18:31 gyadav_ joined #gluster
18:33 gyadav_ joined #gluster
18:36 ankitr joined #gluster
18:36 gyadav__ joined #gluster
18:44 mhulsman joined #gluster
19:02 Tanner__ Clone, so I've got it going decently fast. I ended up dumping a list of the files I wanted to copy, running that through split into chunks of 200 lines, and then running rsync through parallel
19:03 Tanner__ and I'm getting roughly 15-20MB/s
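A minimal sketch of the copy approach Tanner__ describes above, assuming GNU parallel and rsync's --files-from; the paths, chunk size, and job count are illustrative only:

    # Dump a file list, split it into 200-line chunks, run one rsync per chunk.
    cd /mnt/ebs-src                                   # source mount (assumed path)
    find . -type f > /tmp/filelist                    # list of files to copy
    split -l 200 /tmp/filelist /tmp/chunk.            # /tmp/chunk.aa, /tmp/chunk.ab, ...
    ls /tmp/chunk.* | parallel -j 8 \
        rsync -a --files-from={} . /mnt/gluster-dst/  # 8 rsyncs at a time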
19:05 sanoj joined #gluster
19:06 raghu joined #gluster
19:18 rafi joined #gluster
19:19 baber joined #gluster
19:23 rafi joined #gluster
19:25 shyam left #gluster
19:34 mhulsman joined #gluster
19:35 mhulsman1 joined #gluster
19:39 raghu joined #gluster
19:46 JoeJulian rsync's slow. I'd just use tar or cpio.
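JoeJulian spells out the tar pipeline later in the log (22:05). For the cpio alternative he mentions, a minimal sketch of a pass-through copy, with purely illustrative paths, would be:

    # Copy the tree under /dir1 into /dir2 with cpio in pass-through mode.
    cd /dir1
    find . -depth -print0 | cpio --null -pdm /dir2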
19:48 mhulsman joined #gluster
19:49 derjohn_mob joined #gluster
19:57 level7 joined #gluster
20:01 saali joined #gluster
20:13 Seth_Karlo joined #gluster
20:31 foster joined #gluster
20:40 Vapez joined #gluster
20:40 Vapez joined #gluster
20:48 Philambdo joined #gluster
20:53 msvbhat joined #gluster
21:14 wushudoin joined #gluster
21:15 mhulsman joined #gluster
21:17 d0nn1e joined #gluster
21:24 dtrainor joined #gluster
21:28 dtrainor Hi.  I recently replaced a brick with 'replace-brick' and it worked out really smoothly, I liked that.  The new brick however was mounted, of course, in a different place than the replaced brick.  Now, gluster always expects this brick to be located in the same new name (I mounted the new brick in slow_v00_b02-new).  I couldn't find an actual suggestion for how to fix this but in poking around I see some references to this new mount
21:28 dtrainor point name under /var/lib/gluster, but I'd rather not mess with that unless it was suggested to do so.  How do you clean this up?
21:30 rastar dtrainor: I believe you used replace-brick to replace a brick in a replica volume
21:30 rastar dtrainor: what do you mean by gluster expects this to be located in same new name?
21:31 dtrainor in a distributed-replicate volume yep.  the gluster volume info now expects a brick at /gluster/bricks/slow/slow_v00_b02-new/data instead of /gluster/bricks/slow/slow_v00_b02/data
21:32 dtrainor this has zero technical implications at all, everything is working well.  i just want uniform names.  i'm nitpicking.
21:34 rastar dtrainor: Got it. I have not attempted it and don't know of any Gluster cli way to do it.
21:34 rastar dtrainor: It could probably be done by some magic. Would let someone like JoeJulian answer this.
21:35 dtrainor part of me just wants to mow over the files and their contents with sed....
21:35 dtrainor and then bounce gluster
21:40 dtrainor does healing generate a total list of files that need healing before actually performing the self-heal?  i'm watching the number climb via 'gluster volume heal slow_gv00 info' and I'm afraid it's going to take a while.
21:41 dtrainor glfsheal and glusterfsd certainly are busy
21:41 rastar dtrainor: it does so by traversing the whole brick path
21:41 rastar dtrainor: it does take time
21:42 dtrainor cool np just trying to get a better idea of what i'm looking at, thank you
21:42 rastar dtrainor: and yes, please don't perform another replace-brick till the heal count comes to 0.
21:43 dtrainor i understand that replace-brick with 'commit force' automatically starts a self heal
21:44 dtrainor cool, the number is going down now.  that's neat to see.
21:44 dtrainor i think i made it mad by deleting a few large directories from the volume while the self heal was running, adding to the queue
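Two ways to watch the heal backlog dtrainor is monitoring; the volume name is the one from the log, and `statistics heal-count` is assumed to be available in this gluster version as a lighter-weight alternative to the full info listing:

    gluster volume heal slow_gv00 info                     # list entries still pending heal
    gluster volume heal slow_gv00 statistics heal-count    # per-brick pending-entry counts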
21:46 farhorizon joined #gluster
21:59 Tanner__ JoeJulian, something like tar cf - /dir1 | (cd /dir2 && tar xBf -) ?
22:03 JoeJulian dtrainor: sed works. You can also kill the brick, replace-brick to the old location, kill the brick again, move the mount (or mv the directory or whatever) and then start...force the volume to start the brick again.
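Roughly, the sequence JoeJulian outlines could look like the sketch below; the brick paths and volume name come from the discussion above, while the hostname gfs1 and the device/mount handling are assumptions:

    # 1. Kill the brick process (PID from 'gluster volume status slow_gv00').
    kill <brick-pid>
    # 2. Point the volume back at the old brick path.
    gluster volume replace-brick slow_gv00 \
        gfs1:/gluster/bricks/slow/slow_v00_b02-new/data \
        gfs1:/gluster/bricks/slow/slow_v00_b02/data commit force
    # 3. Kill the brick again, then move the mount (or mv the directory) to the old path.
    kill <brick-pid>
    umount /gluster/bricks/slow/slow_v00_b02-new
    mount /dev/<device> /gluster/bricks/slow/slow_v00_b02
    # 4. Force-start the volume so the brick comes back up at the old path.
    gluster volume start slow_gv00 force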
22:03 dtrainor ah that's a good idea
22:05 JoeJulian Tanner__: See `tar -C` to avoid the `cd && tar`, but more or less right: tar cf - -C /dir1 | tar xBf - -C /dir2
22:05 JoeJulian Tanner__: See `tar -C` to avoid the `cd && tar`, but more or less right: tar cf - -C /dir1 . | tar xBf - -C /dir2
22:06 JoeJulian forgot to specify the current directory the first time.
22:07 baber joined #gluster
22:09 dtrainor thanks for the suggestion JoeJulian
22:21 Tanner__ JoeJulian, do you know why my rsync processes would go into uninterruptible sleep? Or why the tar I was running in the fg is now in interruptible sleep?
22:21 JoeJulian dtrainor: you're welcome
22:22 JoeJulian Tanner__: Check the client log
22:22 JoeJulian If there's nothing obvious, see if you can get a state dump.
22:22 Tanner__ that would be /var/log/glusterfs/mnt-<name>?
22:23 JoeJulian Assuming it's mounted under /mnt/<name>, yes.
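For reference: a fuse client's log file is named after its mount point with slashes replaced by dashes, and a client-side state dump can be requested with SIGUSR1. The mount point below is the one that shows up in Tanner__'s next message; the dump location is the usual default and otherwise an assumption:

    tail -f /var/log/glusterfs/mnt-fcdata.log   # client log for a mount at /mnt/fcdata
    kill -USR1 <glusterfs-client-pid>           # state dump, typically under /var/run/gluster/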
22:24 Tanner__ Lots of this: [2017-03-14 22:02:43.121365] I [MSGID: 109036] [dht-common.c:9016:dht_log_new_layout_for_dir_selfheal] 0-farmcommand-dht: Setting layout of /mnt/fcdata/blobstore/204701 with [Subvol_name: farmcommand-client-0, Err: -1 , Start: 2147483647 , Stop: 4294967295 , Hash: 1 ], [Subvol_name: farmcommand-client-1, Err: -1 , Start: 0 , Stop: 2147483646 , Hash: 1 ],
22:24 Tanner__ not much else
22:26 JoeJulian farmcommand sounds like it could be a fun game. :D RTS farmville with armies!
22:26 Tanner__ :D we are in agriculture
22:26 JoeJulian But anyway... That looks like a rebalance which I wouldn't have expected.
22:27 JoeJulian Must have to do with the directory self-heal.
22:27 JoeJulian I suspect the client's blocking on pending heals.
22:28 Tanner__ I'm not sure if it makes a difference, but I have these volumes (EBS) mounted right on the gluster server
22:28 JoeJulian To test my theory, turn off client-side self-heals. "gluster volume set cluster.data-self-heal off" Also for the other two cluster.*-self-heal. See "gluster volume set help | grep self-heal"
22:29 JoeJulian The self-heal daemon will still perform the heals, but the clients you're trying to actually use will not actively participate in healing the volume.
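Written out, the three options JoeJulian refers to look like this, with the volume name left as a placeholder; as the next message shows, they only apply to replicated volumes:

    gluster volume set <volname> cluster.data-self-heal off
    gluster volume set <volname> cluster.metadata-self-heal off
    gluster volume set <volname> cluster.entry-self-heal off
    gluster volume set help | grep self-heal    # list the related options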
22:30 Tanner__ volume set: failed: Cannot set cluster.data-self-heal for a non-replicate volume.
22:31 Tanner__ http://pastebin.com/rG1NxL07
22:31 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
22:32 Tanner__ https://paste.fedoraproject.org/paste/rwRv~vmFVx7nH325dhj8315M1UNdIGYhyRLivL9gydE=
22:32 JoeJulian Oh, well that's different...
22:32 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
22:32 Tanner__ I feel like I've done something really wrong
22:33 Tanner__ I managed to copy about 150GB of data before things started going wrong, then got another 80GB over today
22:33 JoeJulian I'm not sure why the directory's self-healing. That all should have been done the moment the directory was created.
22:34 Tanner__ could it be because there would be some overlap in what I've copied? i.e the tar just now would have started from byte 0
22:35 JoeJulian Nope
22:35 JoeJulian That's something like a brick wasn't available when it was writing something earlier.
22:36 Tanner__ I do have 24 rsync processes with status D right now
22:41 JoeJulian I wonder if they're fighting over creating a directory?
22:42 Tanner__ when I caught the first one, I tried running strace on it but there was no output
22:42 JoeJulian If it's waiting on a lock, that would make sense.
22:44 Tanner__ that does make sense. As well I have some sitting in S, and they are running on the same input file list as the ones in D, so clearly when it tries to process some line in that file it is getting caught on the same thing
22:45 Tanner__ Trying to umount it gives target is busy as well, maybe I need a reboot on this node
22:46 Tanner__ I need to update to 3.10 anyways, I will nuke the whole cluster
23:00 Klas joined #gluster
23:04 JoeJulian You can always just kill the glusterfs client. That'll kill any IO on the fuse mount.
23:04 JoeJulian saves from having to reboot
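A sketch of that for the mount discussed above; the mount point and volume name are taken from Tanner__'s log output, the server name is a placeholder:

    pgrep -af 'glusterfs.*fcdata'                          # find the fuse client process
    kill <pid>                                             # ends the client and any I/O on the mount
    umount -l /mnt/fcdata                                  # lazy-unmount the dead mount
    mount -t glusterfs <server>:/farmcommand /mnt/fcdata   # remount when ready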
23:09 farhorizon joined #gluster
23:13 f0rpaxe joined #gluster
23:22 Tanner__ oh, thanks, good to know
