
IRC log for #gluster, 2016-04-26


All times are shown in UTC.

Time Nick Message
00:07 russoisraeli joined #gluster
00:11 shaunm joined #gluster
00:48 ttkg joined #gluster
01:03 F2Knight joined #gluster
01:16 harish_ joined #gluster
01:21 brandon_ joined #gluster
01:28 Lee1092 joined #gluster
01:31 EinstCrazy joined #gluster
01:53 DV joined #gluster
02:12 julim joined #gluster
02:12 russoisraeli joined #gluster
02:18 mowntan joined #gluster
02:28 EinstCrazy joined #gluster
02:37 ramteid joined #gluster
02:41 EinstCrazy joined #gluster
02:44 EinstCrazy joined #gluster
02:44 julim joined #gluster
02:46 auzty joined #gluster
02:47 harish_ joined #gluster
03:03 RameshN joined #gluster
03:23 DV joined #gluster
03:35 aspandey joined #gluster
03:35 kshlm joined #gluster
03:37 nehar joined #gluster
03:44 overclk joined #gluster
03:47 DV joined #gluster
03:51 skoduri joined #gluster
04:01 shubhendu joined #gluster
04:03 nbalacha joined #gluster
04:04 rafi joined #gluster
04:08 itisravi joined #gluster
04:12 DV joined #gluster
04:20 kdhananjay joined #gluster
04:20 brandon joined #gluster
04:21 shubhendu joined #gluster
04:22 gem joined #gluster
04:22 harish_ joined #gluster
04:24 atinm joined #gluster
04:27 harish_ joined #gluster
04:30 shubhendu joined #gluster
04:32 jiffin joined #gluster
04:35 rafi1 joined #gluster
04:35 EinstCra_ joined #gluster
04:50 rafi joined #gluster
04:53 DV joined #gluster
04:57 karthik___ joined #gluster
05:01 jiffin1 joined #gluster
05:02 brandon joined #gluster
05:03 arcolife joined #gluster
05:03 Javezim joined #gluster
05:03 Ryan___ joined #gluster
05:03 Ryan___ left #gluster
05:03 Ryan___ joined #gluster
05:03 Javezim Hey All
05:03 Javezim Trying to run a script to remove Folders from being in split brain
05:03 Javezim for d in $(find $brick_root -type d | xargs getfattr -m trusted.afr. | egrep -v '^#|^$' | sort -u); do find $brick_root -type d -exec setfattr -x $d {} \;; done
05:03 Javezim However it fails on files that have a space in the name
05:03 Javezim Anyone have any idea how to resolve?
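
(A whitespace-safe variant of that loop, as an untested sketch: the breakage comes from the plain "find | xargs" stage splitting paths on spaces, so collect the attribute names with NUL-delimited output instead -- the same -print0/xargs -0 form that JoeJulian's version later in this log uses. $brick_root is assumed to be set:)

    # Pass 1: gather trusted.afr.* attribute names from all directories,
    # NUL-delimiting paths so names with spaces survive the pipeline.
    attrs=$(find "$brick_root" -type d -print0 \
        | xargs -0 getfattr --absolute-names -m trusted.afr. \
        | egrep -v '^#|^$' | sort -u)
    # Pass 2: strip each attribute everywhere; find hands {} to setfattr
    # as a single argument, so spaces are already safe in this stage.
    for d in $attrs; do
        find "$brick_root" -type d -exec setfattr -x "$d" {} \;
    done
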
05:06 ndarshan joined #gluster
05:16 Pupeno joined #gluster
05:19 rafi1 joined #gluster
05:24 rafi joined #gluster
05:25 gowtham joined #gluster
05:27 harish_ joined #gluster
05:27 hgowtham joined #gluster
05:37 prasanth joined #gluster
05:38 shubhendu joined #gluster
05:39 Apeksha joined #gluster
05:39 RameshN joined #gluster
05:45 rafi1 joined #gluster
05:47 Pupeno joined #gluster
05:47 aravindavk joined #gluster
05:49 atalur joined #gluster
05:50 level7 joined #gluster
05:52 anil joined #gluster
05:54 brandon joined #gluster
06:04 Bhaskarakiran joined #gluster
06:05 mhulsman joined #gluster
06:05 karnan joined #gluster
06:07 ramky joined #gluster
06:07 poornimag joined #gluster
06:08 spalai joined #gluster
06:08 mhulsman1 joined #gluster
06:09 Pupeno joined #gluster
06:14 post-factum vfs_glusterfs for samba eats more memory than fuse mountpoints :(
06:16 spalai left #gluster
06:18 kdhananjay joined #gluster
06:25 harish_ joined #gluster
06:26 jtux joined #gluster
06:26 post-factum had to revert to fuse mountpoints
06:27 post-factum the fun thing is that i'm damned proud that fuse mountpoint now eats less memory than gfapi :D
06:27 rafi joined #gluster
06:28 vmallika joined #gluster
06:29 ppai joined #gluster
06:29 post-factum where could one find the vfs_glusterfs developer?
06:29 post-factum i would like to ask him a couple of questions
06:30 Siavash joined #gluster
06:31 Siavash Hi guys
06:31 Siavash I have a 2 x 2 volume and I want to remove 2 bricks.
06:31 Siavash is it possible to do this without data loss?
06:32 hchiramm joined #gluster
06:35 Manikandan joined #gluster
06:35 mhulsman joined #gluster
06:36 rafi1 joined #gluster
06:38 post-factum Siavash: would you like to remove replica?
06:38 Siavash post-factum: yes
06:38 post-factum Siavash: so just remove it with "remove-brick force"
06:39 mhulsman1 joined #gluster
06:40 post-factum in case you would like to remove non-replicated bricks, you would want to "remove-brick start"
06:44 DV joined #gluster
06:47 atinm joined #gluster
06:47 nbalacha joined #gluster
06:53 Siavash post-factum: this is the current list of bricks:
06:53 Siavash Brick1: jaguar:/gluster/brick1/gv0
06:53 Siavash Brick2: cougar:/gluster/brick1/gv0
06:53 Siavash Brick3: jaguar:/gluster/brick2/gv0
06:53 Siavash Brick4: panter:/gluster/brick1/gv0
06:53 Siavash I want to remove 2 and 4
06:56 [Enrico] joined #gluster
06:56 post-factum it seems 2 and 4 are replicas of 1 and 3
06:56 post-factum so just use force
06:59 Siavash ok
06:59 ashiq joined #gluster
07:00 pg joined #gluster
07:01 Siavash I just tried it without force, to be safe
07:01 Siavash gluster> volume remove-brick gv0 cougar:/gluster/brick1/gv0 panter:/gluster/brick1/gv0 start
07:01 Siavash volume remove-brick start: failed: Bricks not from same subvol for replica
07:01 Siavash post-factum: so should I remove them one by one? or just use force?
07:02 post-factum nope, you must remove both of them in one command
07:02 post-factum like this:
07:03 post-factum gluster volume remove-brick VOLUME brick2 brick4
07:03 post-factum gluster volume remove-brick VOLUME brick2 brick4 force
07:03 post-factum umm no
07:03 post-factum gluster volume remove-brick VOLUME replica 1 brick2 brick4 force
07:03 Siavash I think I also need to use the replica option, right?
07:03 post-factum yes, see above
07:03 Siavash ok
07:04 Siavash worked
07:04 Siavash thanks
07:07 post-factum np
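
(Assembled from the exchange above, the command Siavash presumably ran would be the following -- dropping from 2 x 2 to a plain two-brick distribute, hence the new replica count of 1:)

    gluster volume remove-brick gv0 replica 1 \
        cougar:/gluster/brick1/gv0 panter:/gluster/brick1/gv0 force
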
07:08 kshlm joined #gluster
07:08 nehar joined #gluster
07:09 [Enrico] joined #gluster
07:10 Pupeno joined #gluster
07:11 jri joined #gluster
07:15 Debloper joined #gluster
07:19 robb_nl joined #gluster
07:19 ivan_rossi joined #gluster
07:19 Pupeno joined #gluster
07:19 Saravanakmr joined #gluster
07:25 kovshenin joined #gluster
07:25 rastar joined #gluster
07:25 arcolife joined #gluster
07:31 kassav joined #gluster
07:32 ctria joined #gluster
07:37 kovshenin joined #gluster
07:39 nbalacha joined #gluster
07:39 fsimonce joined #gluster
07:39 Pupeno joined #gluster
07:43 hackman joined #gluster
07:43 atinm joined #gluster
07:49 brandon joined #gluster
07:56 Pupeno joined #gluster
07:58 spalai joined #gluster
08:02 Slashman joined #gluster
08:12 itisravi joined #gluster
08:22 TvL2386 joined #gluster
08:23 TvL2386 joined #gluster
08:31 twisted` hi, I have a gluster mount that once disconnected doesn't auto-reconnect...
08:31 twisted` and then the service that reads/writes to gluster of course writes to the actual disk instead of the gluster mount
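
(A common guard for this failure mode, sketched here: have the service, or a wrapper script, confirm the path is really a mountpoint before writing, so a dropped FUSE mount can't silently fall through to the local disk. mountpoint(1) is from util-linux; /mnt/gluster is a hypothetical path:)

    # refuse to write to the underlying directory when the mount is gone
    mountpoint -q /mnt/gluster || { echo "gluster mount missing" >&2; exit 1; }
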
08:33 aravindavk joined #gluster
08:37 mhulsman joined #gluster
08:37 harish_ joined #gluster
08:39 mhulsman1 joined #gluster
08:41 post-factum twisted`: logs, please
08:48 DV joined #gluster
08:52 kdhananjay joined #gluster
09:27 _nex_ joined #gluster
09:28 DV joined #gluster
09:50 _nex_ Hi guys, i would like to know which tool you prefer to benchmark your gluster volumes? Thanks
09:51 post-factum _nex_: real workload, of course :)
09:54 atinm joined #gluster
09:57 _nex_ lol :) thanks
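
(If a synthetic benchmark is wanted alongside the real workload, fio is a common choice; a minimal sketch, assuming fio is installed and /mnt/gluster is the mounted volume:)

    # small random writes through the mount, 4 parallel jobs
    fio --name=glusterbench --directory=/mnt/gluster \
        --rw=randwrite --bs=4k --size=1G --numjobs=4 --group_reporting
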
09:57 robb_nl joined #gluster
10:00 level7_ joined #gluster
10:07 kkeithley1 joined #gluster
10:07 kkeithley1 left #gluster
10:09 kkeithley1 joined #gluster
10:11 level7 joined #gluster
10:23 Peppard joined #gluster
10:53 rastar joined #gluster
10:54 poornimag joined #gluster
10:56 johnmilton joined #gluster
10:56 pg joined #gluster
10:57 atinm joined #gluster
11:12 rastar joined #gluster
11:14 gem joined #gluster
11:16 ira joined #gluster
11:23 Pupeno joined #gluster
11:24 russoisraeli joined #gluster
11:26 morse joined #gluster
11:35 DV joined #gluster
11:37 kshlm joined #gluster
11:38 uebera|| Hi. Looking at https://www.gluster.org/community/roadmap/3.8/, I don't see the terms "bitrot" (https://www.gluster.org/community/documentation/index.php/Features/BitRot); as a number of "subfeatures", i.e., Repair/Recover strategies, were marked as "not in scope of 3.7", does this mean we have to wait for v3.9 until these are addressed?
11:38 uebera|| s /terms/term/
11:45 RameshN joined #gluster
11:45 Manikandan joined #gluster
11:55 akay hey guys, as per a command provided by JoeJulian (for d in $(find $brick_root -type d -print0 | xargs -0 getfattr -m trusted.afr. | egrep -v '^#|^$' | sort -u); do find $brick_root -type d -print0 -exec setfattr -x $d {} \;; done) I'm resetting attributes on directories to clean up some split brains but I'm seeing a lot of "No such attribute" messages. I've run this on every brick on a small subset of folders - is it OK to run this on the entire bricks?
11:55 scobanx joined #gluster
11:56 scobanx Hi, should I remount the fuse client after increasing client.event-threads?
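
(For reference, the option itself is set volume-wide as below; whether running FUSE clients pick the change up without a remount is exactly the open question here. "myvol" and the thread count are hypothetical:)

    gluster volume set myvol client.event-threads 4
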
11:58 Pupeno joined #gluster
12:04 gem joined #gluster
12:09 mhulsman joined #gluster
12:09 jiffin Sorry for the late notice: the Gluster bug triage meeting will start on the gluster-meeting channel
12:12 mhulsman1 joined #gluster
12:12 overclk joined #gluster
12:14 julim joined #gluster
12:14 Pupeno joined #gluster
12:16 rastar joined #gluster
12:17 DV joined #gluster
12:18 XpineX joined #gluster
12:31 bluenemo joined #gluster
12:37 hagarth joined #gluster
12:37 burn_ joined #gluster
12:38 RameshN joined #gluster
12:39 DV joined #gluster
12:43 nehar joined #gluster
12:44 bennyturns joined #gluster
12:52 javi404 joined #gluster
12:53 DV joined #gluster
12:53 Debloper joined #gluster
12:55 mpietersen joined #gluster
12:57 burn joined #gluster
12:59 EinstCrazy joined #gluster
13:10 XpineX joined #gluster
13:21 _nex_ joined #gluster
13:26 ahino joined #gluster
13:49 skylar joined #gluster
13:56 spalai left #gluster
14:01 sloop joined #gluster
14:11 mhulsman joined #gluster
14:12 DV joined #gluster
14:28 julim joined #gluster
14:30 shaunm joined #gluster
14:33 gem joined #gluster
14:36 bennyturns joined #gluster
14:36 bennyturns sry got dc
14:38 kpease joined #gluster
14:40 bennyturns joined #gluster
14:43 rastar joined #gluster
14:50 sakshi joined #gluster
15:03 overclk joined #gluster
15:06 uebera|| joined #gluster
15:08 uebera|| joined #gluster
15:08 uebera|| joined #gluster
15:09 uebera|| joined #gluster
15:09 uebera|| joined #gluster
15:10 ueberall joined #gluster
15:10 ira joined #gluster
15:11 wnlx joined #gluster
15:12 ueberall joined #gluster
15:12 ueberall joined #gluster
15:16 ueberall joined #gluster
15:16 ueberall joined #gluster
15:17 amye joined #gluster
15:20 level7 joined #gluster
15:26 Peppard joined #gluster
15:29 nbalacha joined #gluster
15:29 level7_ joined #gluster
15:31 jri joined #gluster
15:31 wushudoin joined #gluster
15:36 robb_nl joined #gluster
15:43 bennyturns joined #gluster
15:43 Pupeno joined #gluster
15:46 wnlx joined #gluster
15:51 plarsen joined #gluster
15:51 atinm joined #gluster
15:57 brandon joined #gluster
15:58 kpease joined #gluster
16:02 armyriad joined #gluster
16:06 Manikandan joined #gluster
16:07 overclk joined #gluster
16:18 ivan_rossi left #gluster
16:21 nathwill joined #gluster
16:33 ueberall joined #gluster
16:38 DV joined #gluster
16:38 armyriad joined #gluster
16:40 ueberall joined #gluster
16:41 chirino_m joined #gluster
16:45 armyriad joined #gluster
16:48 armyriad joined #gluster
16:49 brandon joined #gluster
16:51 jiffin joined #gluster
16:51 rafi joined #gluster
17:00 F2Knight joined #gluster
17:07 rafi joined #gluster
17:20 jri joined #gluster
17:22 RameshN joined #gluster
17:25 muneerse2 joined #gluster
17:31 rafi1 joined #gluster
17:41 mhulsman joined #gluster
17:47 spalai joined #gluster
17:52 spalai1 joined #gluster
18:09 chirino_m joined #gluster
18:39 jiffin joined #gluster
18:41 rastar joined #gluster
18:44 jiffin joined #gluster
18:47 jiffin joined #gluster
18:52 robb_nl joined #gluster
19:06 tswartz joined #gluster
19:16 sakshi joined #gluster
19:27 luizcpg joined #gluster
19:36 cyberbootje joined #gluster
19:46 sagarhani left #gluster
20:01 hackman joined #gluster
20:04 m0zes joined #gluster
20:11 MikeLupe joined #gluster
20:22 cholcombe joined #gluster
20:38 Siavash joined #gluster
20:38 Siavash joined #gluster
20:53 cholcombe joined #gluster
21:32 cholcombe joined #gluster
21:37 MugginsM joined #gluster
22:13 russoisraeli joined #gluster
22:56 Logos01 joined #gluster
22:58 Logos01 Howdy. I'm trying to set up a replica-3 volume across three brick hosts, and I'm seeing 60-80 KB/s on writes testing w/ "dd if=/dev/zero of=zero.img bs=1M count=100". (That command takes on average 5 minutes to complete.) I'm wondering if anyone could help me work out what is causing this slowness.
22:59 Logos01 The glusterfs brick-hosts are all VMware VMs residing on the same ESXi host; network traffic for them never leaves the host. The backing store for each brick is a ZFS dataset, which can see something like 400-500MB/s write speed on the same tests when not using glusterfs.
23:00 Logos01 I'm seeing something like 1/800th the native write speed when going over the glusterfs transport -- that seems wrong.
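
(Two checks worth making in a case like this, as a sketch: force dd to flush so cached and synchronous writes are compared fairly, and let gluster's own profiler show where the time goes. VOLNAME is hypothetical:)

    # time the same write with an explicit flush at the end
    dd if=/dev/zero of=zero.img bs=1M count=100 conv=fdatasync
    # per-brick FOP latency breakdown
    gluster volume profile VOLNAME start
    gluster volume profile VOLNAME info
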
23:18 plarsen joined #gluster
23:18 luizcpg joined #gluster
23:20 akay hey guys, as per a command provided by JoeJulian (for d in $(find $brick_root -type d -print0 | xargs -0 getfattr -m trusted.afr. | egrep -v '^#|^$' | sort -u); do find $brick_root -type d -print0 -exec setfattr -x $d {} \;; done) I'm resetting attributes on directories to clean up some split brains but I'm seeing a lot of "No such attribute" messages. I've run this on every brick on a small subset of folders - is it OK to run this on the entire bricks?
23:22 JoeJulian It should be. The "no such attribute" is safe, it just means that particular directory doesn't have any of the attributes we're trying to remove. No big deal.
23:26 MugginsM joined #gluster
23:37 johnmilton joined #gluster
23:38 brandon joined #gluster
23:40 Logos01 JoeJulian: I've returned... <_<   Got my company to hand me a shiny new staging environment to try to work out what's causing my environment to be dumb.
23:40 Logos01 Challenge is, discovered that amongst other things I'm getting 60KB/s write over a replica-3 volume whose backing stores get ~450MB/s write.
23:40 Logos01 That ... can't be right...
23:41 MugginsM could be all sorts of reasons :-/
23:41 Logos01 Any ideas on where I could look to tune something or else see what else might be going on?
23:41 Logos01 MugginsM: Oh, yeah, this is on an essentially unlimited network stack; all clients and brick hosts are on the same ESXi host -- none of the network traffic ever goes outside the host.
23:42 Logos01 But still, I'm seeing less than 1/1,000,000th the write performance as single-host direct writes.
23:42 Logos01 Err, too many commas
23:42 Logos01 but still
23:44 MugginsM last time I saw that was when one of my bricks was a bit too full and I think it was spending a lot of time redirecting to an emptier brick
23:45 MugginsM was my take on it at least
23:45 Logos01 This is pure replica
23:45 MugginsM dunno, I'm still struggling a bit with gluster :)
23:45 Logos01 And I'm at 1% filesystem usage.
23:46 mpingu joined #gluster
23:47 Logos01 Latency on ping tests between bricks (and clients) is ~0.3ms
23:49 Trefex joined #gluster
23:50 Logos01 Just set the backing store's zfs filesystems to a forced sync=disabled ... (meaning it's using async writes even when the client calls for synchronous) ... no gain.
23:50 Logos01 ... haha that actually made it *worse*
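
(For reference, that ZFS toggle looks like the following -- "tank/brick" is a hypothetical dataset; sync=disabled acknowledges synchronous writes before they reach stable storage, so it is a diagnostic step, not a fix:)

    zfs set sync=disabled tank/brick
    # revert afterwards:
    zfs set sync=standard tank/brick
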
23:54 johnmilton joined #gluster
23:57 * Logos01 tries adding the brick hosts' names to /etc/hosts on all machines involved and seeing if that has an impact
23:58 s-hell joined #gluster
