
IRC log for #gluster, 2015-11-30


All times shown according to UTC.

Time Nick Message
00:05 janegil joined #gluster
00:06 EinstCrazy joined #gluster
00:09 jermudgeon joined #gluster
00:09 owlbot joined #gluster
00:10 frankS2 joined #gluster
00:16 owlbot joined #gluster
00:22 msvbhat joined #gluster
00:22 janegil joined #gluster
00:30 hagarth_ joined #gluster
00:33 Pintomatic joined #gluster
00:42 rideh joined #gluster
00:44 janegil joined #gluster
00:45 Telsin joined #gluster
00:47 owlbot` joined #gluster
00:47 Telsin left #gluster
00:50 atrius joined #gluster
00:50 devilspgd joined #gluster
00:50 Vaizki joined #gluster
00:55 frankS2 joined #gluster
01:01 EinstCrazy joined #gluster
01:02 zhangjn joined #gluster
01:02 atrius` joined #gluster
01:07 fyxim joined #gluster
01:10 felicity joined #gluster
01:12 janegil joined #gluster
01:15 RedW joined #gluster
01:16 Pintomatic joined #gluster
01:20 xMopxShell joined #gluster
01:21 frankS2 joined #gluster
01:30 gothos joined #gluster
01:31 fyxim joined #gluster
01:32 jermudgeon joined #gluster
01:34 Chinorro joined #gluster
01:38 Pintomatic joined #gluster
01:43 janegil joined #gluster
01:49 shortdudey123 joined #gluster
01:49 Lee1092 joined #gluster
01:49 newdave joined #gluster
02:08 newdave joined #gluster
02:11 Lee1092 joined #gluster
02:13 sghatty_ joined #gluster
02:14 owlbot joined #gluster
02:19 Larsen joined #gluster
02:19 crashmag joined #gluster
02:25 janegil joined #gluster
02:27 newdave joined #gluster
02:28 harish joined #gluster
02:31 newdave_ joined #gluster
02:36 newdave joined #gluster
02:40 newdave joined #gluster
02:41 Lee1092 joined #gluster
04:44 ilbot3 joined #gluster
04:44 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
04:45 janegil joined #gluster
04:45 RedW joined #gluster
04:46 kshlm joined #gluster
04:51 jermudgeon joined #gluster
04:53 n-st joined #gluster
04:54 virusuy joined #gluster
04:59 n-st joined #gluster
05:01 virusuy joined #gluster
05:07 virusuy joined #gluster
05:07 JPaul joined #gluster
05:08 Pintomatic joined #gluster
05:17 overclk joined #gluster
05:19 virusuy joined #gluster
05:19 n-st joined #gluster
05:25 RedW joined #gluster
05:28 janegil joined #gluster
05:28 Pintomatic joined #gluster
05:34 Iouns joined #gluster
05:37 owlbot` joined #gluster
05:38 RedW joined #gluster
05:40 atrius joined #gluster
05:42 gothos joined #gluster
05:42 zhangjn joined #gluster
06:00 janegil joined #gluster
06:01 anil joined #gluster
06:05 fyxim joined #gluster
06:10 kanagaraj joined #gluster
06:19 fyxim_ joined #gluster
06:20 Chinorro joined #gluster
06:32 felicity joined #gluster
06:33 ccha4 joined #gluster
06:33 crashmag joined #gluster
06:33 k-ma joined #gluster
06:33 ccha4 joined #gluster
06:33 gothos joined #gluster
06:33 frankS2 joined #gluster
06:36 Chinorro joined #gluster
06:43 Chinorro joined #gluster
06:47 atalur joined #gluster
06:49 auzty_ joined #gluster
06:50 auzty_ joined #gluster
06:50 nangthang joined #gluster
06:54 janegil joined #gluster
06:54 Saravana_ joined #gluster
06:55 kotreshhr joined #gluster
06:57 doekia joined #gluster
06:57 vmallika joined #gluster
06:57 gothos joined #gluster
06:58 rjoseph joined #gluster
07:01 ccha4 joined #gluster
07:02 [Enrico] joined #gluster
07:03 Dasiel joined #gluster
07:06 frankS2 joined #gluster
07:06 wistof joined #gluster
07:06 RedW joined #gluster
07:07 n-st joined #gluster
07:07 owlbot joined #gluster
07:09 Dasiel joined #gluster
07:13 mhulsman joined #gluster
07:31 RedW joined #gluster
07:32 janegil joined #gluster
07:32 Chinorro joined #gluster
07:35 fyxim_ joined #gluster
07:39 ackjewt joined #gluster
07:40 n-st joined #gluster
07:42 masterzen joined #gluster
07:43 Dasiel joined #gluster
07:49 Park2 joined #gluster
07:51 fyxim joined #gluster
07:55 janegil joined #gluster
07:57 Philambdo joined #gluster
08:04 masterzen joined #gluster
08:04 wistof joined #gluster
08:04 Dasiel joined #gluster
08:05 deniszh joined #gluster
08:06 RedW joined #gluster
08:14 fyxim_ joined #gluster
08:15 R0ok_ joined #gluster
08:15 Norky joined #gluster
08:15 Vaizki joined #gluster
08:16 itisravi joined #gluster
08:16 cuqa_ joined #gluster
08:17 d0nn1e joined #gluster
08:17 atrius` joined #gluster
08:19 Humble joined #gluster
08:20 mobaer joined #gluster
08:20 lord4163 joined #gluster
08:20 RedW joined #gluster
08:20 rafi joined #gluster
08:21 newdave joined #gluster
08:25 jtux joined #gluster
08:25 kotreshhr joined #gluster
08:27 ivan_rossi joined #gluster
08:29 ryan_ joined #gluster
08:29 lh_ joined #gluster
08:29 sc0 joined #gluster
08:29 badone joined #gluster
08:29 sac joined #gluster
08:29 portante joined #gluster
08:29 csim joined #gluster
08:29 rp_ joined #gluster
08:30 kkeithley joined #gluster
08:30 shruti joined #gluster
08:30 ndk joined #gluster
08:30 bfoster joined #gluster
08:30 csaba1 joined #gluster
08:30 twisted` joined #gluster
08:30 dblack joined #gluster
08:30 scuttle` joined #gluster
08:30 msvbhat joined #gluster
08:30 sghatty_ joined #gluster
08:30 harish joined #gluster
08:30 ppai joined #gluster
08:30 sakshi joined #gluster
08:30 Lee1092 joined #gluster
08:30 hagarth_ joined #gluster
08:30 nbalacha joined #gluster
08:30 samsaffron___ joined #gluster
08:30 kshlm joined #gluster
08:30 jermudgeon joined #gluster
08:30 Pintomatic joined #gluster
08:30 anil joined #gluster
08:30 kanagaraj joined #gluster
08:30 atalur joined #gluster
08:30 vmallika joined #gluster
08:30 frankS2 joined #gluster
08:30 fyxim joined #gluster
08:30 itisravi joined #gluster
08:30 Humble joined #gluster
08:31 rafi joined #gluster
08:31 kotreshhr joined #gluster
08:31 jiffin joined #gluster
08:31 jiffin joined #gluster
08:36 Pintomatic joined #gluster
08:36 zhangjn joined #gluster
08:37 Saravana_ joined #gluster
08:38 ashka joined #gluster
08:39 Iouns joined #gluster
08:44 RedW joined #gluster
08:45 shubhendu joined #gluster
08:48 atrius joined #gluster
08:48 fsimonce joined #gluster
08:49 shubhendu joined #gluster
08:50 fyxim joined #gluster
08:52 kovshenin joined #gluster
08:55 Slashman joined #gluster
08:56 RameshN joined #gluster
08:57 frankS2 joined #gluster
08:57 Manikandan joined #gluster
08:58 ahino joined #gluster
09:00 ctria joined #gluster
09:01 haomaiwang joined #gluster
09:02 kdhananjay joined #gluster
09:08 spalai joined #gluster
09:08 sakshi joined #gluster
09:13 kovshenin joined #gluster
09:13 Pintomatic joined #gluster
09:13 janegil joined #gluster
09:15 uebera|| joined #gluster
09:15 uebera|| joined #gluster
09:19 Chinorro joined #gluster
09:21 zhangjn joined #gluster
09:22 samsaffron___ joined #gluster
09:22 kshlm joined #gluster
09:22 nbalacha joined #gluster
09:25 RedW joined #gluster
09:26 jermudgeon joined #gluster
09:26 skoduri joined #gluster
09:32 ivan_rossi joined #gluster
09:33 Pintomatic joined #gluster
09:39 crashmag joined #gluster
09:39 kdhananjay joined #gluster
09:41 Lee1092 joined #gluster
09:41 uebera|| joined #gluster
09:41 uebera|| joined #gluster
09:41 nbalacha joined #gluster
09:42 csaba joined #gluster
09:44 samsaffron___ joined #gluster
09:48 arcolife joined #gluster
09:50 nangthang joined #gluster
09:51 tru_tru joined #gluster
09:53 gem joined #gluster
09:55 lkoranda joined #gluster
09:58 linagee joined #gluster
09:59 DV joined #gluster
10:00 kdhananjay joined #gluster
10:01 zhangjn joined #gluster
10:01 haomaiwang joined #gluster
10:07 hajile joined #gluster
10:14 marlinc joined #gluster
10:14 rp_ joined #gluster
10:15 zhangjn joined #gluster
10:18 Manikandan joined #gluster
10:18 Chr1st1an joined #gluster
10:21 devilspgd joined #gluster
10:22 spalai joined #gluster
10:24 zhangjn joined #gluster
10:25 kdhananjay joined #gluster
10:27 zhangjn joined #gluster
10:30 pppp joined #gluster
10:36 kdhananjay joined #gluster
10:36 Dasiel joined #gluster
10:36 CP|AFK joined #gluster
10:37 masterzen joined #gluster
10:38 cuqa_ joined #gluster
10:44 javi404 joined #gluster
10:45 deepakcs joined #gluster
10:47 CP|AFK joined #gluster
10:51 DV joined #gluster
10:52 Bhaskarakiran joined #gluster
10:54 ndarshan joined #gluster
10:57 Dasiel joined #gluster
10:58 masterzen joined #gluster
10:58 ackjewt joined #gluster
10:59 wistof joined #gluster
11:01 haomaiwang joined #gluster
11:02 skoduri joined #gluster
11:04 spalai left #gluster
11:06 Slashman joined #gluster
11:07 masterzen joined #gluster
11:08 zhangjn joined #gluster
11:13 masterzen joined #gluster
11:19 kdhananjay joined #gluster
11:19 Manikandan joined #gluster
11:23 d0nn1e joined #gluster
11:25 Dasiel joined #gluster
11:25 haomaiwang joined #gluster
11:25 lord4163 joined #gluster
11:30 MessedUpHare joined #gluster
11:36 RedW joined #gluster
11:36 ppai joined #gluster
11:37 wistof joined #gluster
11:38 Iouns joined #gluster
11:38 Manikandan joined #gluster
11:39 masterzen joined #gluster
11:40 atalur joined #gluster
11:40 Dasiel joined #gluster
11:43 haomaiwang joined #gluster
11:43 DV joined #gluster
11:44 night joined #gluster
11:46 Park2 JoeJulian, I know what happened now, no surprise.  The high traffic from storage node to fuse client is caused by fuse read-ahead, which reads 128KB after each "read" or "lseek" invocation.   See https://pastebin.mozilla.org/8853415
11:46 glusterbot Title: Mozilla Pastebin - collaborative debugging tool (at pastebin.mozilla.org)
11:56 morse joined #gluster
11:57 owlbot joined #gluster
11:59 Norky joined #gluster
12:01 haomaiwa_ joined #gluster
12:03 masterzen joined #gluster
12:05 rafi1 joined #gluster
12:08 kotreshhr left #gluster
12:11 bfoster joined #gluster
12:14 owlbot joined #gluster
12:14 dlambrig joined #gluster
12:15 kovshenin joined #gluster
12:17 Dasiel joined #gluster
12:24 Dasiel joined #gluster
12:24 masterzen joined #gluster
12:24 wistof joined #gluster
12:25 Dasiel left #gluster
12:25 Dasiel joined #gluster
12:25 owlbot` joined #gluster
12:27 Norky joined #gluster
12:28 owlbot joined #gluster
12:28 wistof joined #gluster
12:29 masterzen joined #gluster
12:33 DV joined #gluster
12:39 owlbot joined #gluster
12:43 owlbot joined #gluster
12:49 DV joined #gluster
12:55 kshlm joined #gluster
12:55 Dasiel joined #gluster
12:56 owlbot joined #gluster
12:59 mobaer joined #gluster
13:02 EinstCrazy joined #gluster
13:03 zhangjn joined #gluster
13:03 owlbot joined #gluster
13:07 plarsen joined #gluster
13:09 rafi joined #gluster
13:11 masterzen joined #gluster
13:12 bitpushr joined #gluster
13:24 unclemarc joined #gluster
13:31 theron_ joined #gluster
13:32 theron_ joined #gluster
13:45 Chr1st1an joined #gluster
13:54 Dasiel joined #gluster
13:59 masterzen joined #gluster
14:06 shaunm joined #gluster
14:12 shaunm joined #gluster
14:16 wistof joined #gluster
14:16 masterzen joined #gluster
14:26 Dasiel joined #gluster
14:26 masterzen joined #gluster
14:27 deepakcs joined #gluster
14:32 Dasiel joined #gluster
14:32 wistof joined #gluster
14:36 haomaiwang joined #gluster
14:40 skylar joined #gluster
14:42 bitpushr joined #gluster
14:42 mobaer joined #gluster
14:44 cuqa_ joined #gluster
14:47 CP|AFK joined #gluster
14:48 crashmag joined #gluster
14:59 cuqa_ joined #gluster
15:02 crashmag joined #gluster
15:05 bluenemo joined #gluster
15:06 k-ma joined #gluster
15:06 csaba joined #gluster
15:06 janegil joined #gluster
15:09 wnlx joined #gluster
15:15 csaba joined #gluster
15:23 CP|AFK joined #gluster
15:30 csaba joined #gluster
15:32 theron joined #gluster
15:33 Slashman joined #gluster
15:35 wnlx joined #gluster
15:35 atrius joined #gluster
15:36 CP|AFK joined #gluster
15:36 wushudoin joined #gluster
15:37 rwheeler joined #gluster
15:37 devilspgd joined #gluster
15:38 Slashman joined #gluster
15:38 shyam joined #gluster
15:38 Park2 joined #gluster
15:38 night joined #gluster
15:38 zhangjn joined #gluster
15:39 skoduri joined #gluster
15:39 julim joined #gluster
15:39 EinstCrazy joined #gluster
15:40 ccha4 joined #gluster
15:40 k-ma joined #gluster
15:41 dgandhi joined #gluster
15:43 ahino joined #gluster
15:49 d0nn1e joined #gluster
15:50 tru_tru joined #gluster
15:52 corretico joined #gluster
15:55 CP|AFK joined #gluster
15:57 rafi joined #gluster
15:58 dlambrig joined #gluster
15:59 B21956 joined #gluster
15:59 Dasiel joined #gluster
16:00 rp_ joined #gluster
16:05 CyrilPeponnet joined #gluster
16:06 saltsa joined #gluster
16:07 aravindavk joined #gluster
16:07 chirino joined #gluster
16:07 nhayashi joined #gluster
16:10 atrius` joined #gluster
16:12 CyrilPeponnet joined #gluster
16:12 bowhunter joined #gluster
16:13 tru_tru joined #gluster
16:13 maserati joined #gluster
16:14 mattmcc joined #gluster
16:15 linagee joined #gluster
16:18 _feller joined #gluster
16:21 maserati|work joined #gluster
16:22 kdhananjay joined #gluster
16:31 sac joined #gluster
16:38 JoeJulian Park2: Huh... I thought you said you disabled that.
16:39 Dasiel joined #gluster
16:40 Ramereth joined #gluster
16:40 masterzen joined #gluster
16:42 wistof joined #gluster
16:45 Park2 JoeJulian, no, actually I tried to disable it after I found the read-ahead cause, but no luck; it looks to me like glusterfsd.c would need to be patched to accept "max_readahead".   However, I tested with libgfapi too, and it makes little difference, i.e. lots of traffic from storage node to client there too.  I'm still confused; libgfapi shouldn't have read-ahead like the kernel fuse module, should it?
16:48 skoduri joined #gluster
16:48 Akee joined #gluster
16:54 Park2 What I disabled is performance.read-ahead on the volume, not fuse read-ahead on the client side.
16:55 mhulsman joined #gluster
16:59 kotreshhr joined #gluster
17:04 jmarley joined #gluster
17:14 EinstCrazy joined #gluster
17:18 Humble joined #gluster
17:21 JoeJulian Park2: Right, "gluster volume set $vol performance.read-ahead off" removes the read-ahead translator from the fuse vol file.
17:22 JoeJulian You can confirm that by looking in /var/lib/glusterd/vols/$vol/$vol.tcp-fuse.vol
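A minimal shell sketch of what JoeJulian describes here, with "myvol" standing in for the volume name; the volfile path follows his example and can vary by Gluster version:

    gluster volume set myvol performance.read-ahead off   # removes the read-ahead translator from the generated client graph
    # should print nothing once the translator is gone from the fuse volfile
    grep -A3 read-ahead /var/lib/glusterd/vols/myvol/myvol.tcp-fuse.vol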
17:22 jobewan joined #gluster
17:23 Gugge joined #gluster
17:23 rwheeler joined #gluster
17:27 calavera joined #gluster
17:27 Telsin joined #gluster
17:29 Norky joined #gluster
17:29 swebb joined #gluster
17:34 ivan_rossi left #gluster
17:34 mhulsman joined #gluster
17:38 Park2 JoeJulian, I tested libgfapi some more and found the traffic backing up was coming from performance.io-cache; when I turned it off, the file-upload process worked like a charm.. :-)
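A sketch of the change Park2 describes, again with "myvol" as a placeholder volume name:

    gluster volume set myvol performance.io-cache off   # drops the io-cache translator from the client graph
    gluster volume info myvol                            # the change shows up under "Options Reconfigured"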
17:40 F2Knight joined #gluster
17:46 JoeJulian Park2: Are you documenting all these findings somewhere? I'd love to see what you're coming up with.
17:49 Rapture joined #gluster
17:51 Park2 JoeJulian, not yet, but I'd like to do it; I need more time to dig. E.g. for the fuse mode, I turned off both performance.read-ahead and performance.io-cache, but the traffic is still backing up.  I do believe it's caused by the fuse kernel module's read-ahead, which reads 128KB after each read and seek.  It should be here:
17:52 Park2 xlators/mount/fuse/src/fuse-bridge.c:        fino.max_readahead = 1 << 17;
17:52 Park2 I'm not quite sure yet though, need to look around.
17:53 mhulsman joined #gluster
17:53 newdave joined #gluster
17:55 JoeJulian No, that's just the max allowed readahead(2 http://linux.die.net/man/2/readahead ) size.
17:55 glusterbot Title: readahead(2) - Linux man page (at linux.die.net)
17:56 JoeJulian (unless I'm just completely wrong)
17:57 chirino joined #gluster
17:57 jwd joined #gluster
17:58 Park2 yeah, it's the max allowed size, but doesn't it try to read as much as possible?  From the strace output I pasted previously, it indeed reads 128K after a seek (if reading 128K after a previous read is not obvious enough).
18:01 mlncn joined #gluster
18:07 OregonGunslinger joined #gluster
18:13 monotek joined #gluster
18:14 JoeJulian Park2: Ok, so seek doesn't actually do anything. The file op is actually the read or write that then uses the offset to which you seeked. You should see a "READ (size=\d+, offset=\d+)" in the trace level log.
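A hedged sketch of how to get such a trace-level client log; diagnostics.client-log-level is a standard volume option, and the client log file name is derived from the mount point (e.g. a mount at /mnt/myvol logs to mnt-myvol.log):

    gluster volume set myvol diagnostics.client-log-level TRACE   # very verbose: logs individual fops, including READ(size=..., offset=...)
    tail -f /var/log/glusterfs/mnt-myvol.log | grep READ
    gluster volume set myvol diagnostics.client-log-level INFO    # set it back when finished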
18:18 newdave joined #gluster
18:21 EinstCrazy joined #gluster
18:22 VeggieMeat_ joined #gluster
18:25 Park2 JoeJulian, yeah, but the read after the seek is not actually what I want, i.e. it's useless for the program logic.  The whole logic is to merge small files: a few seeks to set flags (e.g. a lock), and then another seek to the write position to do the appending.
18:28 JPaul joined #gluster
18:29 bfoster joined #gluster
18:29 siel joined #gluster
18:32 calavera joined #gluster
18:40 atrius joined #gluster
18:42 mmckeen joined #gluster
18:42 devilspgd joined #gluster
18:43 swebb joined #gluster
18:44 jermudgeon joined #gluster
18:45 ahino joined #gluster
18:47 night joined #gluster
18:47 amye joined #gluster
18:50 mobaer joined #gluster
18:56 jermudgeon joined #gluster
18:57 virusuy joined #gluster
19:06 hagarth_ joined #gluster
19:09 ira joined #gluster
19:10 dlambrig joined #gluster
19:14 cholcombe joined #gluster
19:35 dlambrig_ joined #gluster
19:38 B21956 joined #gluster
20:02 sigkillbr joined #gluster
20:06 calavera joined #gluster
20:06 josh joined #gluster
20:08 kotreshhr left #gluster
20:15 calavera joined #gluster
20:25 theron_ joined #gluster
20:30 ahino joined #gluster
20:42 EinstCrazy joined #gluster
20:47 Humble joined #gluster
20:55 RedW joined #gluster
21:00 lpabon joined #gluster
21:04 hagarth_ joined #gluster
21:05 calavera joined #gluster
21:08 jwaibel joined #gluster
21:09 sigkillbr left #gluster
21:19 mlncn joined #gluster
21:31 Ericle_ joined #gluster
21:32 lapy joined #gluster
21:33 lapy hi all
21:33 lapy i believe joining the #gluster irc channel is the best idea i had today
21:34 lapy anyone interested in helping a new gluster user
21:34 lapy ?
21:35 Jmainguy prolly best just to ask your real question
21:35 lapy Jmainguy: :)
21:35 Jmainguy =)
21:35 lapy I've a platform with +/- 20 servers
21:36 lapy and 1 storage server
21:36 lapy my objective is to start many virtual machines in parallel
21:36 Ericle_ I'm new to gluster and want to know how to best handle my current situation. I have hundreds of terabytes of data that I would like to distribute across multiple servers with gluster. Is it possible to create a brick on top of the existing data or is that a bad idea?
21:36 lapy but i'm stuck with low read speed
21:37 lapy so here is my question: are read accesses load-balanced across replicas?
21:38 lapy my current arch is a "replica 3 stripe 2" (distributed striped replica)
21:39 lapy If a client reads something on my volume, network traffic increases between the client and a single server
21:40 lapy so if multiple clients read data from my volume, my server slows down and sometimes even crashes with a "glusterfs blocked for more than 120 seconds"
21:40 Ericle_ I don't exactly have enough storage space to free up a couple hundred terabytes to create the new bricks and then copy all the data to them.
21:41 Jmainguy Ericle_: no
21:41 Jmainguy Ericle_: a new brick should be an empty partition
21:41 kovshenin joined #gluster
21:41 Jmainguy lapy: not sure
21:42 Jmainguy lapy: that is a pretty good question
21:42 lapy Jmainguy: :(
21:43 Ericle_ Jmainguy: Okay, that's what I was afraid of. I tried it in a test environment and it seemed to work, but did have some oddities.
21:43 lapy Here : http://www.gluster.org/community/documentation/index.php/Gluster_3.1:_Understanding_Load_Balancing
21:44 lapy it says the client can "distribute" its accesses across various glusterfs servers
21:44 Jmainguy Ericle_: well maybe I am a liar if it worked for you
21:44 Jmainguy Ericle_: I havent really tried it that way, seems dangerous
21:45 Jmainguy Ericle_: it creates some metadata stuff when you create a new brick, I guess it would work in an existing partition
21:45 Jmainguy lapy: yeaaaaaaa
21:45 Jmainguy lapy: so, if it's distributed, that makes sense
21:45 Jmainguy lapy: if it's replica distributed, I am not sure how it decides which replica to read from
21:45 Jmainguy lapy: hopefully it round-robins it or something
21:47 lapy Jmainguy: do you mean it actually works but I just cannot see it?
21:47 Jmainguy lapy: yeah, *should be
21:47 Jmainguy gluster volume set help
21:47 lapy or should I configure something to "control"
21:47 Jmainguy has like a million options you can set, but, I imagine by default
21:47 Jmainguy it will be reading from each of your replicas at least some
21:48 lapy Jmainguy: perhaps something related with option 'disperse.read-policy'
21:49 Jmainguy lapy: https://www.gluster.org/pipermail/gluster-users/2015-June/022321.html
21:49 glusterbot Title: [Gluster-users] reading from local replica? (at www.gluster.org)
21:49 Jmainguy looks like it reads fastest initial response
21:49 Jmainguy so if one server always answers faster than the others, he gets hit
21:49 lapy Jmainguy: YEAHH
21:49 Ericle_ Jmainguy: I was just creating a distributed volume with no replication. I created one brick on top of existing data on server1 and another freshly formatted brick on server2. I mounted the distributed volume on server2 and saw all the data from server1 and new data I created was evenly distributed across the nodes. Where it got weird was when I tried to
21:49 Ericle_ rebalance the nodes and it copied some of the original data from server1 to server2 but it just replicated it instead of distributing it. Maybe I'm not using the rebalance correctly though.
21:50 lapy Jmainguy: my storage server has a 10Gb link while others 'only' have a 1Gb
21:50 Jmainguy Ericle_: I bet it will remove the duplicates eventually
21:50 Jmainguy Ericle_: copies stuff to be safe, removes dupes later, for the rebalance
21:51 Jmainguy so like in an hour, or less, depends on data, should be fairly distributed
21:52 Jmainguy Ericle_: that being said, unless you add a third brick later, you shouldn't really need to rebalance again
21:52 hagarth_ joined #gluster
21:52 Jmainguy unless you just like rebalancing, cuz it is a fun command to run
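For reference, the rebalance commands being discussed, with the same placeholder volume name:

    gluster volume rebalance myvol start    # redistribute existing files after adding bricks
    gluster volume rebalance myvol status   # per-node progress until it reports completed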
21:52 Ericle_ Jmainguy: oh okay, so I guess it's possible it could work that way? I'm just trying to figure out if that method is something that is supported or not. I don't want to risk losing data or getting gluster confused because I set it up wrong.
21:53 Jmainguy I think best practice is a new partition
21:53 Jmainguy per brick
21:53 Jmainguy https://www.gluster.org/community/documentation/index.php/QuickStart
21:53 Jmainguy in the training I took, thats what we always did
21:53 Jmainguy xfs, inode of 512, new partition per brick
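A sketch of the brick layout Jmainguy describes, assuming a spare partition /dev/sdb1 and two servers named server1 and server2 (all placeholders); it mirrors the linked QuickStart rather than anything specific to Ericle_'s setup:

    mkfs.xfs -i size=512 /dev/sdb1                     # XFS with 512-byte inodes on a dedicated partition
    mkdir -p /data/brick1 && mount /dev/sdb1 /data/brick1
    mkdir -p /data/brick1/gv0                          # use a subdirectory of the mount as the brick
    gluster volume create gv0 server1:/data/brick1/gv0 server2:/data/brick1/gv0   # plain distribute, no replica
    gluster volume start gv0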
21:54 Ericle_ Okay, thanks for the help. I guess I'll have to figure out how to juggle the data around to make the new bricks. It's not easy with this much data.
21:55 Jmainguy Ericle_: looks like what I did when I made mine
21:55 Jmainguy was: created a partition /export/BrickBackup, and then did a dir per brick
21:55 Jmainguy but that's cuz I had limited space; if I were to do it again, I would likely do a partition per brick
21:56 Jmainguy Ericle_: I mean, if you're in a pinch, looks like your method works
21:57 Jmainguy afk, relocating home
22:00 Ericle_ Jmainguy: I guess I'll do some more testing. I guess I'm not really in danger of losing data. It seems like the worst case scenario is it doesn't work and I still have the underlying data.
22:02 RedW joined #gluster
22:08 lapy Jmainguy: bad luck, I believed 'cluster.eager-lock' was the right one :)
22:08 DV__ joined #gluster
22:18 nathwill joined #gluster
22:19 jrm16020 joined #gluster
22:35 dlambrig joined #gluster
22:48 lkoranda_ joined #gluster
22:58 lkoranda joined #gluster
23:04 Jmainguy Ericle_: yup
23:04 Jmainguy lapy: lol nice
23:06 shyam joined #gluster
23:25 delhage joined #gluster
23:26 newdave joined #gluster
23:32 lapy Jmainguy: ok so I've tested the read-hash-mode option set to 2
23:32 Jmainguy lapy: any better?
23:32 lapy it seems to provided the kind of read access distribution I was looking for
23:32 Jmainguy nice
23:32 lapy provide*
23:33 lapy BUT it's not perfect
23:33 lapy with read-hash-mode set to 0 : client1 : 21sec, client2 : 21sec
23:34 lapy with read-hash-mode set to 2 : client1 : 12sec, client2 : 37sec
23:34 Jmainguy =/
23:34 lapy to download the same file at the same moment
23:34 Jmainguy sacrificing one client for the other
23:34 lapy Jmainguy: yep
23:35 lapy but its on a very small platform
23:35 lapy tomorrow I will test this option on the real platform
23:35 delhage joined #gluster
23:35 lapy where files are replicated and striped
23:37 Jmainguy yeah
23:37 lapy I will also set the "choose-local" and increase the "cache-size"
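The three options lapy mentions, sketched as volume-set commands; the volume name and cache size are placeholders, and exact value semantics are best checked with "gluster volume set help" as Jmainguy suggested:

    gluster volume set myvol cluster.read-hash-mode 2      # changes how AFR picks which replica serves a given read
    gluster volume set myvol cluster.choose-local on       # prefer a local replica when the client is also a brick server
    gluster volume set myvol performance.cache-size 256MB  # enlarge the io-cache translator's cache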
23:37 Jmainguy let me know what you find out, I am pretty curious on your results
23:37 lapy but I may have chosen the wrong architecture to organize my data
23:37 lapy Jmainguy: sure
23:38 lapy thx for your help Jmainguy
23:38 Jmainguy yeah np
23:39 Jmainguy I hope gluster is the right solution, would love to see it take over
23:45 delhage joined #gluster
