
IRC log for #gluster, 2015-05-05


All times shown according to UTC.

Time Nick Message
00:24 smohan_ joined #gluster
00:34 Pupeno joined #gluster
00:37 Pupeno joined #gluster
00:39 Pupeno joined #gluster
00:41 shubhendu_ joined #gluster
00:45 Pupeno joined #gluster
00:47 Rapture joined #gluster
01:05 Le22S joined #gluster
01:19 shubhendu_ joined #gluster
01:29 Pupeno joined #gluster
01:29 Pupeno joined #gluster
01:30 fattaneh joined #gluster
01:32 fattaneh left #gluster
01:34 kdhananjay joined #gluster
01:35 Pupeno joined #gluster
01:45 kripper joined #gluster
01:45 kripper hi Joe, how is it possible to get 200 MBps performance with just a 1GbE?
01:45 kripper is it because they are considering local bricks reads?
01:56 nangthang joined #gluster
02:00 Pupeno joined #gluster
02:03 julim joined #gluster
02:08 stickyboy joined #gluster
02:09 smohan joined #gluster
02:16 Pupeno_ joined #gluster
02:35 soumya joined #gluster
02:37 gildub joined #gluster
02:41 lalatenduM joined #gluster
02:42 Peppaq joined #gluster
02:42 kdhananjay joined #gluster
03:02 ku joined #gluster
03:02 yosafbridge joined #gluster
03:02 ku hello.. i encounter a permission denied error on my glusterfs setup intermittently
03:07 Pupeno joined #gluster
03:07 raghug joined #gluster
03:17 kdhananjay joined #gluster
03:17 ashiq joined #gluster
03:18 harish joined #gluster
03:32 rcschool joined #gluster
03:45 itisravi joined #gluster
03:49 kumar joined #gluster
03:52 kotreshhr joined #gluster
03:52 Pupeno_ joined #gluster
03:56 TheSeven joined #gluster
03:58 glusterbot News from newglusterbugs: [Bug 1218479] Gluster NFS Mount Permission Denied Error (Occur Intermittent) <https://bugzilla.redhat.com/show_bug.cgi?id=1218479>
03:59 RameshN joined #gluster
03:59 karnan joined #gluster
04:03 atinm joined #gluster
04:06 theron joined #gluster
04:12 shubhendu_ joined #gluster
04:17 schandra1 joined #gluster
04:20 kanagaraj joined #gluster
04:27 TheSeven joined #gluster
04:28 lalatenduM joined #gluster
04:38 nbalacha joined #gluster
04:39 kshlm joined #gluster
04:41 anoopcs joined #gluster
04:42 rafi joined #gluster
04:48 nishanth joined #gluster
04:53 poornimag joined #gluster
04:53 sakshi joined #gluster
04:56 ndarshan joined #gluster
04:58 rafi joined #gluster
04:58 karnan joined #gluster
04:58 glusterbot News from newglusterbugs: [Bug 1218485] spurious failure bug-908146.t <https://bugzilla.redhat.com/show_bug.cgi?id=1218485>
05:04 Apeksha joined #gluster
05:08 ramteid joined #gluster
05:09 jiffin joined #gluster
05:17 hagarth joined #gluster
05:20 anil joined #gluster
05:21 aravindavk joined #gluster
05:21 Manikandan joined #gluster
05:22 fattaneh joined #gluster
05:23 meghanam joined #gluster
05:29 glusterbot News from newglusterbugs: [Bug 1218488] Brick and nfs processes gets killed with OOM <https://bugzilla.redhat.com/show_bug.cgi?id=1218488>
05:29 glusterbot News from newglusterbugs: [Bug 1207735] Disperse volume: Huge memory leak of glusterfsd process <https://bugzilla.redhat.com/show_bug.cgi?id=1207735>
05:36 schandra2 joined #gluster
05:37 schandra2 left #gluster
05:37 gem joined #gluster
05:38 cp3glfs joined #gluster
05:39 cp3glfs joined #gluster
05:40 gem joined #gluster
05:40 dusmantkp_ joined #gluster
05:46 MF1 joined #gluster
05:46 MF1 HI, does anyone work with glusterfs and uWSGI?
05:49 badone_ joined #gluster
05:55 gem_ joined #gluster
05:56 kdhananjay joined #gluster
05:59 rjoseph joined #gluster
06:00 saurabh joined #gluster
06:01 schandra1|WFH joined #gluster
06:02 ashiq joined #gluster
06:02 MF1 i use this document, at this url http://uwsgi-docs.readthedocs.org/en/latest/GlusterFS.html, to configure glusterfs, but i don't mount it, and after that i configure uwsgi and build it with the glusterfs plugin, but when i use ./uwsgi --http-socket=:9090 --http-socket-modifier1=27 --glusterfs-mount=mountpoint=/,volume=uwsgi,server=x.x.x.x:0 --threads=30 and curl x.x.x.x:9090/ the output is: Not Found
06:02 MF1 please help me
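(A hedged guess at a next step, assuming the uWSGI glusterfs plugin maps the request path to an object name in the volume; the file name test.html and the mount path below are hypothetical:)
    # put a test file into the volume through a temporary fuse mount
    mount -t glusterfs x.x.x.x:/uwsgi /mnt/uwsgi
    echo hello > /mnt/uwsgi/test.html
    # then request that file explicitly instead of "/"
    curl http://x.x.x.x:9090/test.html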
06:02 stickyboy joined #gluster
06:03 Anjana joined #gluster
06:04 soumya joined #gluster
06:04 hchiramm joined #gluster
06:08 raghu joined #gluster
06:10 gem_ joined #gluster
06:11 kanagaraj joined #gluster
06:14 Bhaskarakiran joined #gluster
06:24 karnan joined #gluster
06:24 jtux joined #gluster
06:25 kripper1 joined #gluster
06:25 gem_ joined #gluster
06:26 ndarshan joined #gluster
06:32 atalur joined #gluster
06:32 soumya joined #gluster
06:33 poornimag joined #gluster
06:34 kripper1 can I have split brains with a replica-1 volume?
06:34 kripper1 and having multiple peers?
06:34 kripper1 and clients
06:38 schandra|WFH left #gluster
06:39 Manikandan joined #gluster
06:39 ppai joined #gluster
06:42 kotreshhr joined #gluster
06:47 ashiq joined #gluster
06:48 spalai joined #gluster
06:49 dusmantkp_ joined #gluster
06:55 fattaneh joined #gluster
06:59 glusterbot News from newglusterbugs: [Bug 1218506] gf_msg not giving output to STDOUT. <https://bugzilla.redhat.com/show_bug.cgi?id=1218506>
07:01 overclk joined #gluster
07:09 SOLDIERz joined #gluster
07:11 _Bryan_ joined #gluster
07:12 dusmantkp_ joined #gluster
07:18 [Enrico] joined #gluster
07:18 [Enrico] joined #gluster
07:32 Arminder joined #gluster
07:32 lyang0 joined #gluster
07:34 fsimonce joined #gluster
07:36 Arminder joined #gluster
07:42 cp3glfs does anyone know the relationship between gfid and local filesystem inode?
07:44 lalatenduM joined #gluster
07:45 cp3glfs ??
07:52 LebedevRI joined #gluster
07:53 kotreshhr joined #gluster
07:55 liquidat joined #gluster
08:01 ndevos cp3glfs: the gfid is like a volume-wide inode, but a gfid does not really care what an inode (or its value) is
08:02 ndevos cp3glfs: the gfid is an extended attribute of a directory-entry, so all hardlinks will have the same gfid (and inode)
08:06 nangthang joined #gluster
08:09 cp3glfs thanks
08:12 cp3glfs does that indicate the inode is meaningless for glusterfs? when a client reads a file it does not
08:12 soumya joined #gluster
08:12 cp3glfs care about the inode
08:13 poornimag joined #gluster
08:13 nsoffer joined #gluster
08:14 aravindavk joined #gluster
08:14 cp3glfs when i use stat to check a file, it shows me the inode of this file. what's the inode for?
08:15 cp3glfs check in glusterfs client
08:18 SOLDIERz joined #gluster
08:18 ndevos right, so the inode a glusterfs client sees is (I think) the last 64 bits of the gfid
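(A quick way to see this relationship on a test setup, assuming a hypothetical brick at /data/brick1 and a file foo under it:)
    # server side: the gfid stored as an extended attribute on the brick copy
    getfattr -n trusted.gfid -e hex /data/brick1/foo
    # client side: the inode number reported on the glusterfs mount
    stat --format='%i' /mnt/gluster/foo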
08:19 fattaneh joined #gluster
08:20 cp3glfs o
08:28 spalai left #gluster
08:30 spalai1 joined #gluster
08:32 Norky joined #gluster
08:42 ctria joined #gluster
08:45 nbalacha joined #gluster
08:45 Slashman joined #gluster
08:49 ndarshan joined #gluster
08:51 Philambdo joined #gluster
08:53 spandit joined #gluster
08:56 gem_ joined #gluster
08:59 deepakcs joined #gluster
09:02 glusterbot News from resolvedglusterbugs: [Bug 1212400] Attach tier failing and messing up vol info <https://bugzilla.redhat.com/show_bug.cgi?id=1212400>
09:11 jiffin ndevos++
09:11 glusterbot jiffin: ndevos's karma is now 13
09:13 jcastill1 joined #gluster
09:18 soumya joined #gluster
09:19 jcastillo joined #gluster
09:20 harish_ joined #gluster
09:23 overclk joined #gluster
09:25 ashiq joined #gluster
09:26 gem__ joined #gluster
09:30 glusterbot News from newglusterbugs: [Bug 1218553] [Bitrot]: glusterd crashed when node was rebooted <https://bugzilla.redhat.com/show_bug.cgi?id=1218553>
09:30 glusterbot News from newglusterbugs: [Bug 1218562] Fix memory leak while using scandir <https://bugzilla.redhat.com/show_bug.cgi?id=1218562>
09:30 glusterbot News from newglusterbugs: [Bug 1218565] `gluster volume heal <vol-name> split-brain' shows wrong usage <https://bugzilla.redhat.com/show_bug.cgi?id=1218565>
09:30 glusterbot News from newglusterbugs: [Bug 1218567] Upcall: Cleanup the expired upcall entries <https://bugzilla.redhat.com/show_bug.cgi?id=1218567>
09:30 glusterbot News from newglusterbugs: [Bug 1218566] upcall: polling is done for a invalid file <https://bugzilla.redhat.com/show_bug.cgi?id=1218566>
09:31 lalatenduM joined #gluster
09:43 ndarshan joined #gluster
09:52 yosafbridge joined #gluster
09:53 R0ok_ joined #gluster
09:53 nsoffer joined #gluster
09:56 kovshenin joined #gluster
10:00 glusterbot News from newglusterbugs: [Bug 1218570] `gluster volume heal <vol-name> split-brain' tries to heal even with insufficient arguments <https://bugzilla.redhat.com/show_bug.cgi?id=1218570>
10:00 glusterbot News from newglusterbugs: [Bug 1218573] [Snapshot] Scheduled job is not processed when one of the node of shared storage volume is down <https://bugzilla.redhat.com/show_bug.cgi?id=1218573>
10:00 glusterbot News from newglusterbugs: [Bug 1218575] Snapshot-scheduling helper script errors out while running "snap_scheduler.py init" <https://bugzilla.redhat.com/show_bug.cgi?id=1218575>
10:00 glusterbot News from newglusterbugs: [Bug 1218576] Regression failures in tests/bugs/snapshot/bug-1162498.t <https://bugzilla.redhat.com/show_bug.cgi?id=1218576>
10:02 kovsheni_ joined #gluster
10:05 kovshenin joined #gluster
10:12 kovshenin joined #gluster
10:15 yosafbridge joined #gluster
10:19 kovsheni_ joined #gluster
10:20 kbyrne joined #gluster
10:21 Anjana joined #gluster
10:22 Anjana left #gluster
10:23 Anjana joined #gluster
10:25 shubhendu joined #gluster
10:28 ira joined #gluster
10:28 kovshenin joined #gluster
10:30 glusterbot News from newglusterbugs: [Bug 1218585] [Snapshot] Snapshot scheduler show status disable even when it is enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1218585>
10:30 glusterbot News from newglusterbugs: [Bug 1218587] directory become root ownership when the directories are created in parallel on serveral different mounts <https://bugzilla.redhat.com/show_bug.cgi?id=1218587>
10:30 glusterbot News from newglusterbugs: [Bug 1218584] RFE: Clone of a snapshot <https://bugzilla.redhat.com/show_bug.cgi?id=1218584>
10:31 kovsheni_ joined #gluster
10:34 kkeithley1 joined #gluster
10:39 karnan joined #gluster
10:40 kdhananjay joined #gluster
10:40 kripper1 left #gluster
10:44 kovshenin joined #gluster
10:48 aravindavk joined #gluster
10:48 SOLDIERz joined #gluster
10:49 kovshenin joined #gluster
10:51 mbukatov joined #gluster
10:57 kovsheni_ joined #gluster
10:59 kovshenin joined #gluster
11:00 firemanxbr joined #gluster
11:00 glusterbot News from newglusterbugs: [Bug 1218589] Snapshots failing on tiered volumes (with EC) <https://bugzilla.redhat.com/show_bug.cgi?id=1218589>
11:00 glusterbot News from newglusterbugs: [Bug 1218593] ec test spurious failures <https://bugzilla.redhat.com/show_bug.cgi?id=1218593>
11:01 kovsheni_ joined #gluster
11:03 lpabon joined #gluster
11:04 kotreshhr joined #gluster
11:08 kovshenin joined #gluster
11:08 ppai joined #gluster
11:10 kovshenin joined #gluster
11:20 yosafbridge joined #gluster
11:22 kovshenin joined #gluster
11:23 fattaneh joined #gluster
11:27 theron joined #gluster
11:30 glusterbot News from newglusterbugs: [Bug 1218596] BitRot :- scrub pause/resume should give proper error message if scrubber is already paused/resumed and Admin tries to perform same operation <https://bugzilla.redhat.com/show_bug.cgi?id=1218596>
11:35 overclk joined #gluster
11:36 bennyturns joined #gluster
11:40 ndk joined #gluster
11:41 yosafbridge joined #gluster
11:46 itisravi_ joined #gluster
11:47 deniszh joined #gluster
11:53 anoopcs joined #gluster
11:56 soumya joined #gluster
11:59 lalatenduM joined #gluster
12:01 overclk joined #gluster
12:01 schandra joined #gluster
12:02 rafi1 joined #gluster
12:08 rafi joined #gluster
12:09 rafi joined #gluster
12:10 atalur joined #gluster
12:12 theron joined #gluster
12:14 rjoseph joined #gluster
12:15 poornimag joined #gluster
12:17 shubhendu joined #gluster
12:25 yosafbridge joined #gluster
12:29 anoopcs_ joined #gluster
12:32 glusterbot News from resolvedglusterbugs: [Bug 1217589] glusterd crashed while schdeuler was creating snapshots when bit rot was enabled on the volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1217589>
12:40 rafi1 joined #gluster
12:40 nangthang joined #gluster
12:42 smohan_ joined #gluster
12:43 anoopcs_ joined #gluster
12:44 yosafbridge joined #gluster
12:45 kotreshhr joined #gluster
12:52 shaunm_ joined #gluster
12:55 kotreshhr1 joined #gluster
12:56 nbalacha joined #gluster
12:57 DV__ joined #gluster
13:05 B21956 joined #gluster
13:07 hamiller joined #gluster
13:09 Anjana joined #gluster
13:14 SOLDIERz joined #gluster
13:20 sblanton joined #gluster
13:20 soumya joined #gluster
13:23 premera joined #gluster
13:23 bennyturns joined #gluster
13:24 dgandhi joined #gluster
13:24 georgeh-LT2 joined #gluster
13:24 hagarth joined #gluster
13:26 sblanton /msg NickServ VERIFY REGISTER sblanton dyhwtefvimlg
13:26 hchiramm joined #gluster
13:26 lalatenduM joined #gluster
13:30 bene2 joined #gluster
13:31 ppai joined #gluster
13:31 glusterbot News from newglusterbugs: [Bug 1211863] RFE: Support in md-cache to use upcall notifications to invalidate its cache <https://bugzilla.redhat.com/show_bug.cgi?id=1211863>
13:39 spalai1 left #gluster
13:42 rafi joined #gluster
13:44 lalatenduM joined #gluster
14:00 Gill joined #gluster
14:01 glusterbot News from newglusterbugs: [Bug 1218653] rdma: properly handle memory registration during network interruption <https://bugzilla.redhat.com/show_bug.cgi?id=1218653>
14:06 yosafbridge joined #gluster
14:11 coredump joined #gluster
14:12 kotreshhr1 left #gluster
14:16 Gill joined #gluster
14:35 Larsen joined #gluster
14:40 Philambdo joined #gluster
14:43 atinm joined #gluster
14:54 nbalacha joined #gluster
15:03 lalatenduM joined #gluster
15:05 mike25de joined #gluster
15:05 lexi2 joined #gluster
15:06 mike25de hi guys... in order to add a 3rd server to a replica volume .. i used : gluster volume add-brick replica 3 vol1 vm3:/gluster-storage-replica BUT i have an error ... wrong brick type: 3, use <HOSTNAME>:<export-dir-abs-path>
15:06 mike25de i want to have 3 hosts in a replicated environment
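(For what it's worth, on releases that support changing the replica count the count goes after the volume name; a minimal sketch reusing the names above:)
    gluster volume add-brick vol1 replica 3 vm3:/gluster-storage-replica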
15:10 cholcombe joined #gluster
15:10 jobewan joined #gluster
15:13 kkeithley_ vm3 is missing in /etc/hosts or DNS ?
15:21 mike25de kkeithley i think is due to the fact that i am using an older version... 3.2.7
15:21 mike25de is there  a way to install the latest version offline?
15:26 kdhananjay joined #gluster
15:39 kovshenin joined #gluster
15:40 jmarley joined #gluster
15:41 bennyturns joined #gluster
15:42 jobewan joined #gluster
15:46 soumya joined #gluster
16:10 raghug joined #gluster
16:17 meghanam joined #gluster
16:24 ashiq joined #gluster
16:26 mike25de guys.. is it possible to add a 3rd brick to a 2 bricks replicated volume?
16:30 nbalacha joined #gluster
16:30 hagarth joined #gluster
16:36 Prilly joined #gluster
16:36 kkeithley_ mike25de: it is. Although with 3.2.x I think you're skating on thin ice generally.
16:36 mike25de i have moved to 3.6.3
16:36 mike25de and i added 2 bricks... and tried to add the 3rd one.. but it doesn't work.
16:37 mike25de I can create 3 bricks with replica 3 from scratch... but it seems it is not possible to increase a replica 2 to replica 3
16:37 mike25de or ... i am an idiot - which may be the case
16:38 mike25de kkeithley_ thanks anyway for your time man.
16:38 kkeithley_ add brick _should_ work. I've done it in the past.
16:38 mike25de with a replica 2 ?
16:38 kkeithley_ yes, starting with replica 2, adding a brick to make replica 3 should work
16:38 mike25de ok...
16:38 kkeithley_ I dunno, maybe we broke something in 3.6.3, but it ought to work
16:39 mike25de gluster volume add-brick vol2   server:/gluster-rep   this is what i have used initially ...
16:39 mike25de error: volume add-brick: failed: Incorrect number of bricks supplied 1 with count 2
16:39 mike25de then i tried:
16:40 mike25de volume add-brick: failed: One or more nodes do not support the required op-version. Cluster op-version must atleast be 30600.   when i tried :  gluster volume add-brick vol2  replica 3  server:/gluster-rep
16:40 glusterbot mike25de: set the desired op-version using ''gluster volume set all cluster.op-version $desired_op_version''.
16:40 mike25de what is that ? op_version?
16:40 kkeithley_ I'm trying
16:40 kkeithley_ the original two bricks were created with 3.2.7 maybe?
16:41 mike25de no no ... with the latest one
16:43 mike25de gluster volume set all cluster.op-version 30600  FIXED IT... now can you guys let me know what the heck is this op-version?!
16:43 mike25de after setting the op-version i could add the brick replica 3
16:44 kkeithley_ (,,op-version)
16:45 kkeithley_ http://www.gluster.org/documentation/architecture/features/Opversion/
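(In short, the op-version is the cluster-wide operating version that gates newer features until every peer supports them; a hedged way to inspect and raise it, the file path may vary by distribution:)
    # operating version recorded by glusterd on each server
    grep operating-version /var/lib/glusterd/glusterd.info
    # raise it cluster-wide, as glusterbot suggested above
    gluster volume set all cluster.op-version 30600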
16:46 ndarshan joined #gluster
16:47 mike25de thanks man :)
16:48 kkeithley_ I don't think you should have needed to set the op-version like that, but maybe there was old stuff hanging around somewhere
16:48 mike25de it might... although all servers are 3.6.3
16:48 kkeithley_ new clean installs, or upgraded from 3.2.x?
16:49 kkeithley_ anyway, you're working now
16:49 mike25de removed 3.2... and got the rpms
16:49 nbalacha joined #gluster
16:49 mike25de yeah man thanks
16:49 kkeithley_ yw
16:49 mike25de have an awesome day :)
16:50 kkeithley_ thanks
16:50 kkeithley_ u2
16:53 DV joined #gluster
16:57 Rapture joined #gluster
16:59 deepakcs joined #gluster
17:01 theron joined #gluster
17:10 bene2 joined #gluster
17:12 squizzi joined #gluster
17:14 kumar joined #gluster
17:19 Gill joined #gluster
17:28 jiffin joined #gluster
17:30 jbrooks joined #gluster
17:31 glusterbot News from newglusterbugs: [Bug 1218732] gluster volume status --xml gives back unexpected non xml output <https://bugzilla.redhat.com/show_bug.cgi?id=1218732>
17:39 DV joined #gluster
17:41 alwayscurious joined #gluster
17:44 bfoster joined #gluster
17:51 plarsen joined #gluster
17:57 iPancreas joined #gluster
17:58 theron joined #gluster
18:01 squizzi_ joined #gluster
18:03 spot joined #gluster
18:04 glusterbot News from resolvedglusterbugs: [Bug 1218399] glfs.h:46:21: fatal error: sys/acl.h: No such file or directory <https://bugzilla.redhat.com/show_bug.cgi?id=1218399>
18:05 anoopcs joined #gluster
18:06 Gill joined #gluster
18:19 bfoster joined #gluster
18:19 Gill joined #gluster
18:33 jcastill1 joined #gluster
18:37 jbrooks joined #gluster
18:38 jobewan joined #gluster
18:38 jcastillo joined #gluster
18:39 ekuric joined #gluster
18:39 squizzi joined #gluster
18:40 kripper joined #gluster
18:40 kripper Is it possible to get split-brains with replica-1, many peers and georeplication?
18:45 jobewan joined #gluster
18:48 jobewan joined #gluster
18:56 shaunm_ joined #gluster
19:00 jobewan joined #gluster
19:05 jbrooks joined #gluster
19:08 squizzi_ joined #gluster
19:12 alwayscurious semiosis: Any news on when gluster 3.6.3 will be out for Ubuntu?
19:13 jobewan joined #gluster
19:13 redbeard joined #gluster
19:27 Rapture joined #gluster
19:28 Alpinist joined #gluster
19:28 theron joined #gluster
19:43 kripper joined #gluster
19:50 gpmidi joined #gluster
19:51 gpmidi joined #gluster
19:52 gpmidi I have a distributed volume with four bricks. Is there any way to change the replica value from one to two without removing two bricks and then adding them back in?
19:57 kripper gpmidi: yes, gluster vol add-brick <vol> replica n ...
19:57 kripper gpmidi: please check the command
19:57 kripper gpmidi: the brick order is very important
19:58 kripper gpmidi: since it defines what the replica sets
19:58 kripper gpmidi: since it defines the replica sets
19:58 kripper gpmidi: please check the doc
20:00 gpmidi kripper: All of the docs I've found indicate that it's only changeable when adding a brick. I'll try add-brick with only the replica args as in your first message.
20:01 gpmidi kripper: It failed saying that the brick type is wrong
20:03 gpmidi The exact command I used is "gluster volume add-brick media replica 2". I'm assuming that's what your first comment was about.
20:08 squizzi joined #gluster
20:08 ricky-ticky joined #gluster
20:19 gpmidi kripper: Was there a specific doc you were referring to? The only ones I've been able to find only cover using add-brick to change it. Since all of the bricks I have are already in the volume add-brick doesn't seem to work.
20:24 nsoffer joined #gluster
20:26 victori joined #gluster
20:32 gpmidi I gotta go for a while. I'll be back later tonight.
20:32 gpmidi left #gluster
20:38 jvandewege joined #gluster
20:40 lezo joined #gluster
20:42 kripper gpmidi: sorry, I didn't notice you already had them added. I'm not sure, but I would just remove the bricks (requires redistributing data) and then add replica bricks
20:52 JoeJulian kripper: you're right.
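(A hedged sketch of the other usual route, assuming four spare bricks on hypothetical hosts: adding one new brick per existing brick while raising the count converts the 4-brick distribute volume into a 4x2 distributed-replicate, and the order of the new bricks determines which existing brick each one mirrors:)
    gluster volume add-brick media replica 2 \
        server5:/brick server6:/brick server7:/brick server8:/brick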
21:01 kripper JoeJulian: Hi Joe, about the no-split brains?
21:02 kripper JoeJulian: I have two questions
21:02 kripper JoeJulian: 1) Is it possible to get split-brains with replica-1, many peers and georeplication?
21:03 JoeJulian no
21:04 kripper JoeJulian: 2) I've read reports about gluster performing 200 MBps with a 1 GbE link....since network only gives 100 MBps, I guess the other 100 MBps are from local reads, right?
21:06 kripper JoeJulian: IMO, Gluster starts to make sense in terms of performance with 10 GbE... otherwise, the value is in scale-out capacity (storage space) and availability (replicas)
21:16 JoeJulian Yeah, if your client is a server and you're reading from localhost, you should be able to read at disk speed minus a bit for cpu/memory performance in context switches.
21:17 JoeJulian wrt performance, you need to look at your workload as a whole and use the tools that satisfy the entire job within your specs. Yes, gluster is a great tool for scale-out and redundancy. It's also great if you have a thousand clients all needing to use the same data set.
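(A rough way to see where read bandwidth comes from, assuming a large test file and the hypothetical paths below; drop the page cache between runs so the second read is not served from memory:)
    # sequential read through the glusterfs mount
    dd if=/mnt/gluster/bigfile of=/dev/null bs=1M count=1024
    echo 3 > /proc/sys/vm/drop_caches
    # same data read straight from a local brick, for comparison (read-only, server side)
    dd if=/data/brick1/bigfile of=/dev/null bs=1M count=1024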
22:01 gildub joined #gluster
22:02 uxbod joined #gluster
22:03 Pupeno joined #gluster
22:03 Pupeno joined #gluster
22:10 badone_ joined #gluster
22:28 papamoose1 joined #gluster
22:29 kkeithley1 joined #gluster
22:29 redbeard joined #gluster
22:31 malevolent joined #gluster
22:41 Gill joined #gluster
22:54 scooby2 does geo-repl only go from master -> slave or do changes on slaves get synced back?
23:01 kripper JoeJulian: thanks Joe...we are going to experiment with geo-rep now
23:02 kripper scooby2: It's only one way
23:02 scooby2 k
23:03 kripper scooby2: But you can probably do the inverse geo-rep after
23:03 kripper scooby2: Just make sure to write only on one of the volumes
23:03 kripper scooby2: that's as far as I know
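(A minimal sketch of a one-way master-to-slave session on 3.5+, assuming a master volume mastervol, a slave volume slavevol on slavehost, and working SSH from master to slave; all names hypothetical:)
    # generate and distribute the geo-rep ssh keys, then create, start and check the session
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status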
23:04 scooby2 we have 4 nodes - 2 at each datacenter and even reads are pretty slow
23:04 kripper scooby2: what is slow?
23:04 kripper scooby2: may the network be the bottleneck?
23:04 scooby2 ls of a directory with 500 files takes about 15 minutes
23:04 scooby2 ls -al
23:05 kripper scooby2: how much MBps are you getting?
23:05 scooby2 we have 50Mbps
23:05 kripper scooby2: MBps or Mbps?
23:05 scooby2 Mbps
23:06 scooby2 that our issue?
23:06 kripper scooby2: it's a 100 Mbps network link?
23:06 scooby2 yes capped at 50 for WAN
23:06 kripper scooby2: that's probably the problem
23:07 kripper scooby2: and your client on that 50 Mbps link is probably not holding a replica for local reads, is it?
23:08 kripper scooby2: I mean, all data is received via network
23:08 scooby2 yes
23:08 kripper scooby2: makes sense
23:09 scooby2 even for reads is there a way to prefer local nodes and not do anything over the WAN?
23:12 kripper scooby2: I guess gluster should handle this internally
23:13 kripper scooby2: do you have local nodes that are gluster is not prioritizing as expected?
23:13 kripper scooby2: *that gluster
23:13 scooby2 i just inheritted this setup and was asked to speed it up so I'm trying to learn gluster.
23:13 scooby2 it seems a little slow
23:14 scooby2 10-15 seconds to open files off this gluster
23:14 kripper scooby2: ok, a local replica node will probably help
23:14 scooby2 i setup a test gluster with 2 nodes at the local site and its instant to open files
23:14 scooby2 local replica as in being on the webserver it self?
23:15 kripper scooby2: on the same host or on the same LAN
23:15 scooby2 ok
23:15 kripper scooby2: to avoid network bottlenecks
23:16 kripper scooby2: but remember that writes are synchronous = as slow as the slowest node
23:16 scooby2 correct
23:17 scooby2 we are fine with the slow writes. Its just the slow reads that I'm trying to get around
23:17 kripper scooby2: if you will be only reading, I believe geo-rep would be better, so the write performance on the remote site is not impacted
23:18 tdasilva joined #gluster
23:19 scooby2 what's weird is NFS directly to one of the gluster nodes is much faster
23:20 scooby2 i wonder if NFS writes would be propagated to all nodes or break things
23:20 scooby2 something to play with in the lab
23:22 wushudoin joined #gluster
23:22 kripper scooby2: kernel-nfs or gluster-nfs?
23:23 kripper scooby2: I'm not sure, but I guess kernel-nfs is caching writes (asynchronously), while gluster is replicating synchronously to all replica-nodes
23:23 kripper JoeJulian: what IRC client are you using?
23:24 harish_ joined #gluster
23:30 JoeJulian kripper: I use XChat
23:31 JoeJulian but it all goes through znc so I can be on from every machine I have simultaneously without having to have multiple nicks.
23:35 kripper JoeJulian: ok, its good to have the logs
23:40 scooby2 kripper: kernel nfs
23:40 scooby2 just using it in readonly mode
23:41 Rapture joined #gluster
23:45 kripper scooby2: how much faster is kernel-nfs reading from the remote node compared to gluster or gluster-nfs reading from the gluster-servers?
23:45 kripper scooby2: maybe it's because you have many small files
23:46 scooby2 ls in that same directory that takes 15 minutes takes 37 seconds the first time (instant due to caching after that)
23:46 kripper JoeJulian: "ls"?
23:47 kripper scooby2: you mean listing files? not reading them?
23:47 scooby2 yes
23:47 scooby2 ls -al
23:47 kripper scooby2: that's insane
23:47 tdasilva joined #gluster
23:47 scooby2 yes
23:48 scooby2 it sounds like its due to the stat that ls does.
23:48 scooby2 that causes gluster to check replication of the files on all nodes
23:49 kripper scooby2: could you listing with some command or argument that doesn't touch the files?
23:49 kripper scooby2: *try listing
23:50 scooby2 know of any unix commands that do that?
23:50 scooby2 echo * ?
23:56 scooby2 echo * is fast
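(For reference, a couple of comparisons on a hypothetical mount: the first two only read the directory entries, while ls -al stats every file and each stat is checked against all replicas over the WAN:)
    # names only: readdir, no per-file stat
    echo /mnt/gluster/dir/*
    ls -1 --color=never /mnt/gluster/dir
    # names plus metadata: a stat per entry, hence the 15-minute listing
    time ls -al /mnt/gluster/dir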
