IRC log for #gluster, 2014-03-21

All times shown according to UTC.

Time Nick Message
00:01 xymox joined #gluster
00:04 gmcwhistler joined #gluster
00:07 kam270 joined #gluster
00:11 xymox joined #gluster
00:12 discretestates joined #gluster
00:19 kam270 joined #gluster
00:20 xymox joined #gluster
00:22 pdrakeweb joined #gluster
00:25 recidive joined #gluster
00:27 xymox joined #gluster
00:30 gmcwhist_ joined #gluster
00:34 kam270 joined #gluster
00:36 xymox joined #gluster
00:44 xymox joined #gluster
00:53 xymox joined #gluster
00:57 nightwalk joined #gluster
01:02 sroy joined #gluster
01:04 xymox joined #gluster
01:09 robo joined #gluster
01:14 xymox joined #gluster
01:14 robo joined #gluster
01:21 xymox joined #gluster
01:26 discretestates joined #gluster
01:30 xymox joined #gluster
01:39 xymox joined #gluster
01:43 robo joined #gluster
01:48 xymox joined #gluster
01:51 yinyin joined #gluster
01:51 jmarley joined #gluster
01:51 jmarley joined #gluster
01:55 xymox joined #gluster
02:04 robo joined #gluster
02:04 XpineX_ joined #gluster
02:04 harish joined #gluster
02:06 xymox joined #gluster
02:13 sprachgenerator joined #gluster
02:15 xymox joined #gluster
02:22 nightwalk joined #gluster
02:24 xymox joined #gluster
02:33 nightwalk joined #gluster
02:33 xymox joined #gluster
02:40 mattapperson joined #gluster
02:44 discretestates joined #gluster
02:44 xymox joined #gluster
02:51 xymox joined #gluster
03:00 xymox joined #gluster
03:08 xymox joined #gluster
03:11 coredump joined #gluster
03:14 nueces joined #gluster
03:14 coredum__ joined #gluster
03:17 xymox joined #gluster
03:24 xymox joined #gluster
03:27 RameshN joined #gluster
03:27 mattappe_ joined #gluster
03:28 lyang0 joined #gluster
03:31 hagarth joined #gluster
03:33 xymox joined #gluster
03:38 nightwalk joined #gluster
03:42 xymox joined #gluster
03:42 recidive joined #gluster
03:42 raghug joined #gluster
03:45 itisravi joined #gluster
03:52 shubhendu joined #gluster
03:53 xymox joined #gluster
03:54 shubhendu joined #gluster
03:55 bharata-rao joined #gluster
03:57 chirino joined #gluster
04:00 xymox joined #gluster
04:09 discretestates joined #gluster
04:09 xymox joined #gluster
04:12 aravindavk joined #gluster
04:15 deepakcs joined #gluster
04:16 ndarshan joined #gluster
04:19 shylesh joined #gluster
04:21 xymox joined #gluster
04:28 chirino_m joined #gluster
04:29 xymox joined #gluster
04:32 haomaiwang joined #gluster
04:33 pdrakeweb joined #gluster
04:36 xymox joined #gluster
04:38 vpshastry joined #gluster
04:43 xymox joined #gluster
04:43 cjanbanan joined #gluster
04:44 raghug y4m4: ping
04:44 atinm joined #gluster
04:47 prasanthp joined #gluster
04:53 xymox joined #gluster
04:55 sks joined #gluster
04:57 vpshastry joined #gluster
05:01 nightwalk joined #gluster
05:02 kdhananjay joined #gluster
05:02 y4m4 raghug: pong
05:03 y4m4 raghug: sorry wasn't available yesterday
05:03 y4m4 raghug: yeah the patch doesn't seem to fix the issue, there is a regression which it causes
05:03 ravindran joined #gluster
05:03 y4m4 raghug: http://build.gluster.org/job/regression/3697/consoleFull - look at the test case failure
05:03 glusterbot Title: regression #3697 Console [Jenkins] (at build.gluster.org)
05:04 y4m4 raghug: seems like directory self-heal issues after an "add-brick"
05:04 xymox joined #gluster
05:04 dusmant joined #gluster
05:06 ppai joined #gluster
05:07 ravindran left #gluster
05:08 sahina joined #gluster
05:10 raghug y4m4: setting aside regression failures, what about gfid mismatch of directories on different subvolumes?
05:10 raghug is it still observed?
05:11 haomaiwang joined #gluster
05:12 aravindavk joined #gluster
05:13 nightwalk joined #gluster
05:14 xymox joined #gluster
05:22 haomai___ joined #gluster
05:22 xymox joined #gluster
05:22 y4m4 raghug: i haven't reproduced it
05:24 y4m4 raghug: let me think, may be i did - i couldn't get a testcase since it isn't easy to reproduce on a single client
05:24 y4m4 raghug: the patch indeed fixes the problem
05:25 y4m4 raghug: but there could be other regressions it causes, are you taking up the task of finishing it?
05:25 yinyin joined #gluster
05:26 nshaikh joined #gluster
05:31 xymox joined #gluster
05:34 sputnik13 joined #gluster
05:40 lalatenduM joined #gluster
05:41 xymox joined #gluster
05:43 vkoppad joined #gluster
05:44 rahulcs joined #gluster
05:47 harish joined #gluster
05:51 xymox joined #gluster
05:54 shubhendu joined #gluster
05:56 benjamin_____ joined #gluster
05:57 kanagaraj joined #gluster
05:58 discretestates joined #gluster
05:59 xymox joined #gluster
06:09 xymox joined #gluster
06:10 raghu joined #gluster
06:12 vpshastry1 joined #gluster
06:15 harish joined #gluster
06:15 raghug joined #gluster
06:18 atinm joined #gluster
06:20 psharma joined #gluster
06:23 vimal joined #gluster
06:24 dusmant joined #gluster
06:30 pjschmitt joined #gluster
06:31 rahulcs_ joined #gluster
06:31 pjschmitt hi, so I was wondering, looking at all the clients - unless I misread, is the nfs client faster than the gluster client?
06:32 cjanbanan joined #gluster
06:33 pjschmitt also can the gluster nfs server do rdma?
06:34 pdrakeweb joined #gluster
06:34 Philambdo joined #gluster
06:37 nightwalk joined #gluster
06:41 aravindavk joined #gluster
06:53 raghug joined #gluster
07:02 davinder joined #gluster
07:10 dusmant joined #gluster
07:10 aravindavk joined #gluster
07:11 itisravi joined #gluster
07:11 raghug joined #gluster
07:12 Gaurav_ joined #gluster
07:21 itisravi joined #gluster
07:24 ekuric joined #gluster
07:30 glusterbot New news from resolvedglusterbugs: [Bug 847624] [FEAT] Support for duplicate reply cache <https://bugzilla.redhat.com/show_bug.cgi?id=847624>
07:32 thotz joined #gluster
07:34 coredump joined #gluster
07:35 coredump joined #gluster
07:38 kanagaraj joined #gluster
07:38 vkoppad joined #gluster
07:38 deepakcs joined #gluster
07:47 fraggeln I have a small question.
07:47 fraggeln I did start out with 2 nodes in my lab-cluster
07:48 fraggeln then I added a 3rd brick, but it seems like only new files are added to the new replica
07:48 fraggeln is there anyway I can force a sync?
07:48 fraggeln volume rebalance didnt work.
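
A hedged aside for readers: rebalance only redistributes files across distribute subvolumes and does not copy existing data onto a brick added to raise the replica count. On GlusterFS 3.3 and later a full self-heal can be triggered by hand; a minimal sketch, assuming the lab volume is named labvol:

    gluster volume heal labvol full    # crawl the volume and sync missing files onto the new brick
    gluster volume heal labvol info    # list entries still pending heal
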
07:52 ricky-ti1 joined #gluster
07:52 slayer192 joined #gluster
07:55 eseyman joined #gluster
07:56 cjanbanan joined #gluster
07:56 ctria joined #gluster
08:00 ngoswami joined #gluster
08:00 sputnik13 joined #gluster
08:02 andreask joined #gluster
08:02 nightwalk joined #gluster
08:13 ngoswami joined #gluster
08:21 cjanbanan joined #gluster
08:22 Elico joined #gluster
08:23 keytab joined #gluster
08:25 aravindavk joined #gluster
08:25 monotek joined #gluster
08:28 ngoswami joined #gluster
08:32 monotek joined #gluster
08:34 pdrakeweb joined #gluster
08:36 Elico joined #gluster
08:40 Elico1 joined #gluster
08:40 ngoswami joined #gluster
08:45 fsimonce joined #gluster
08:46 92AAALF9F joined #gluster
08:50 vpshastry1 joined #gluster
08:52 Elico joined #gluster
08:53 pk1 joined #gluster
08:54 saravanakumar1 left #gluster
08:55 saravanakumar1 joined #gluster
08:56 liquidat joined #gluster
08:59 yinyin joined #gluster
09:11 aravindavk joined #gluster
09:18 jbustos joined #gluster
09:22 Norky joined #gluster
09:26 nightwalk joined #gluster
09:32 kdhananjay joined #gluster
09:36 wgao joined #gluster
09:37 pk1 left #gluster
09:41 raghug joined #gluster
09:50 Psi-Jack joined #gluster
09:51 vpshastry1 joined #gluster
09:53 mattapperson joined #gluster
10:03 X3NQ joined #gluster
10:07 glusterbot New news from newglusterbugs: [Bug 969461] RFE: Quota fixes <https://bugzilla.redhat.com/show_bug.cgi?id=969461>
10:13 Pavid7 joined #gluster
10:35 pdrakeweb joined #gluster
10:47 elgonzo joined #gluster
10:48 hagarth joined #gluster
10:51 trendzettter joined #gluster
11:00 nightwalk joined #gluster
11:03 hagarth joined #gluster
11:04 vpshastry1 joined #gluster
11:04 prasanth_ joined #gluster
11:04 itisravi_ joined #gluster
11:14 pdrakeweb joined #gluster
11:16 raghug joined #gluster
11:25 thotz joined #gluster
11:30 diegows joined #gluster
11:33 harish joined #gluster
11:34 dusmant joined #gluster
11:39 mattapperson joined #gluster
11:45 foster joined #gluster
11:50 monotek joined #gluster
11:50 pdrakeweb joined #gluster
11:55 trendzettter joined #gluster
11:57 vpshastry joined #gluster
11:57 robo joined #gluster
11:59 kam270_ joined #gluster
12:08 kam270_ joined #gluster
12:10 Peanut Does glusterfs support subnet notation for 'gluster volume set <volname> auth.allow'? E.g. auth.allow 10.88.4.0/31 ?
12:10 B21956 joined #gluster
12:13 fraggeln Peanut: test it? :D
12:14 recidive joined #gluster
12:14 Peanut fraggeln: volume set: failed: option auth.addr./export/brick1/sda3.allow 10.88.4.0/31 is not a valid internet-address-list.
12:14 Peanut I'll take that as a 'No' then.
12:16 fraggeln auth.allow: 172.16.99.*
12:16 fraggeln that works
12:16 fraggeln but, that is just a /24
12:17 Peanut Yes, exactly - I wanted to make it a bit tighter than a /24, without listing each address seperately.
12:17 fraggeln but, /31 is only 1 host
12:17 fraggeln maybe try with /29 or so instead
12:18 Peanut No, that's two hosts, without broadcast or network address. See RFC-3021. And anyway, for access lists, you just want to look at the bitmask.
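
For readers following the auth.allow thread: CIDR notation is rejected here, but wildcards and comma-separated address lists are accepted, so a tighter-than-/24 rule can be spelled out host by host. A sketch, assuming a volume named myvol and illustrative addresses:

    gluster volume set myvol auth.allow "10.88.4.0,10.88.4.1"   # explicit list instead of 10.88.4.*
    gluster volume info myvol                                   # "Options Reconfigured" shows the active value
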
12:22 hagarth1 joined #gluster
12:24 nshaikh joined #gluster
12:24 itisravi_ joined #gluster
12:26 kam270_ joined #gluster
12:28 dusmant joined #gluster
12:28 fraggeln is there any way to speedup the timeout when a node is offline?
12:29 saurabh joined #gluster
12:29 fraggeln my client gets like a 30-40 sec timeout waiting before I can access the filesystem again
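
The 30-40 second hang described above matches the client-side network.ping-timeout, which defaults to 42 seconds. It can be lowered per volume, though very small values risk spurious disconnects and needless self-heals; a sketch, assuming the volume is named labvol:

    gluster volume set labvol network.ping-timeout 10    # seconds before an unresponsive brick is abandoned
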
12:30 DV joined #gluster
12:30 davinder2 joined #gluster
12:37 nightwalk joined #gluster
12:39 sks joined #gluster
12:42 kam270_ joined #gluster
12:49 edward1 joined #gluster
12:50 nightwalk joined #gluster
12:51 kam270_ joined #gluster
12:59 nightwalk joined #gluster
13:00 kam270_ joined #gluster
13:02 benjamin_____ joined #gluster
13:02 elgonzo http://rudradevbasak.github.io/16384_hex/
13:03 glusterbot Title: 16384 Hex (at rudradevbasak.github.io)
13:03 jmarley joined #gluster
13:03 jmarley joined #gluster
13:03 Copez joined #gluster
13:05 japuzzo joined #gluster
13:10 Arrfab joined #gluster
13:10 nightwalk joined #gluster
13:10 Arrfab Hi folks. Trying to find the admin guide for 3.4 but unable to find it. Any pointer please ?
13:10 bennyturns joined #gluster
13:11 fraggeln Arrfab: its hidden, I havent found it either.
13:12 Arrfab fraggeln: someone pointed me to https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/ but the one-to-one mapping of gluster release vs RHS release isn't obvious either :-)
13:12 glusterbot Title: Red Hat Storage (at access.redhat.com)
13:15 tdasilva joined #gluster
13:18 kam270_ joined #gluster
13:21 rfortier1 joined #gluster
13:22 robo joined #gluster
13:22 nightwalk joined #gluster
13:22 saravanakumar1 joined #gluster
13:24 robo joined #gluster
13:34 theron joined #gluster
13:36 theron joined #gluster
13:38 nightwalk joined #gluster
13:39 discretestates joined #gluster
13:45 dusmant joined #gluster
13:51 jobewan joined #gluster
13:53 sroy_ joined #gluster
13:56 nightwalk joined #gluster
13:58 XpineX_ joined #gluster
14:00 kaptk2 joined #gluster
14:05 primechuck joined #gluster
14:07 aravindavk joined #gluster
14:07 pdrakeweb joined #gluster
14:12 rpowell joined #gluster
14:14 nightwalk joined #gluster
14:15 primechuck joined #gluster
14:15 lmickh joined #gluster
14:20 robo joined #gluster
14:22 vkoppad joined #gluster
14:24 bstromski joined #gluster
14:25 sputnik13 joined #gluster
14:28 muhh left #gluster
14:29 nightwalk joined #gluster
14:32 seapasulli joined #gluster
14:37 aravindavk joined #gluster
14:38 chirino joined #gluster
14:39 discretestates joined #gluster
14:39 nightwalk joined #gluster
14:41 sijis left #gluster
14:43 samppah any idea if data-classification/tiering is planned for 3.6?
14:43 samppah it's listed in feature proposals
14:46 bstromski hey guys, i have a production 2 node cluster with gluster shared storage (/slaps back of head for not having quorum), and it looks like one of the bricks on the primary node went offline. I dont see any big issues with these boxes at this immediate time, and both filesystems are currently mounted. What would be the safest way to go about resolving this? Would unmounting the filesystem and restarting the gluster service on the primary host with the dow
14:52 ndk joined #gluster
14:55 hagarth1 samppah: yes, planned for 3.6 and hopefully we will get that in
14:55 primechuck joined #gluster
15:00 nightwalk joined #gluster
15:08 glusterbot New news from newglusterbugs: [Bug 958781] KVM guest I/O errors with xfs backed gluster volumes <https://bugzilla.redhat.com/show_bug.cgi?id=958781>
15:11 jag3773 joined #gluster
15:13 chirino joined #gluster
15:15 nightwalk joined #gluster
15:16 benjamin_____ joined #gluster
15:18 DV joined #gluster
15:19 hagarth joined #gluster
15:19 samppah hagarth: thanks :)
15:21 discretestates joined #gluster
15:25 vpshastry joined #gluster
15:28 vpshastry left #gluster
15:28 nightwalk joined #gluster
15:28 jag3773 joined #gluster
15:31 liquidat Say, I have a two node replication setup. One of them wasn't reachable anymore for at least an hour, so we restarted it . Upon restart, the log on the failed node now spits out error logs:
15:31 liquidat E [rpc-clnt.c:207:call_bail] 0-management: bailing out frame type(glusterd mgmt) op(--(2)) xid = 0x40135348x sent = 2014-03-21 15:14:35.872742. timeout = 600
15:31 liquidat Does that mean that the healing is not working properly?
15:31 coredump joined #gluster
15:31 kam270_ joined #gluster
15:34 coredum__ joined #gluster
15:42 nightwalk joined #gluster
15:42 kam270_ joined #gluster
15:45 vpshastry1 joined #gluster
15:48 jag3773 joined #gluster
15:50 diegows joined #gluster
15:51 kam270_ joined #gluster
15:56 recidive joined #gluster
16:07 daMaestro joined #gluster
16:07 kam270_ joined #gluster
16:11 nightwalk joined #gluster
16:14 nikk i have a question about gluster architecture in general
16:14 nikk haven't been able to find an answer online
16:15 semiosis hi
16:15 glusterbot semiosis: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:15 semiosis nikk: ^^
16:15 nikk does gluster support weighted nodes in reference to quorum?  for example, if i have four physical servers and i want to be marked as "ok" if i can access one particular server
16:15 nikk but if i can't access that one mark it as "not ok" even if the others are up
16:16 nikk kind of brainstorming a wan architecture
16:17 nikk and i can't use the built-in geo functionality
16:17 semiosis AFR (replication) doesn't tolerate high latency well.  you're probably not going to like the performance
16:17 nikk four datacenters, two of them are on the other side of the world so high latency.. i want each side of the world to talk to eachother *first* and be marked as ok if they can reach eachother
16:17 nikk yeahhhh
16:18 nikk that's what i've run into so far
16:18 nikk right now with full replica.. when i write to one node, gluster has to make sure all other nodes write as well before returning "ok"
16:18 semiosis one day gluster will have multi-master geo-replication.
16:18 nikk ^--- this
16:19 nikk that's why we didn't buy redhat's storage solution.. because they rely on gluster's geo master/slave
16:19 kam270_ joined #gluster
16:20 nikk cross-site high-latency replication is a problem i haven't been able to solve
16:20 semiosis it's a hard problem
16:21 nikk distributed replica would be better for the actual replication process but worse for file reads
16:21 nikk by a lot
16:21 nikk which is why i haven't done that
16:23 nightwalk joined #gluster
16:23 nikk <-- does not want to use rsync :]
16:26 sputnik13 joined #gluster
16:26 nikk other than create different volumes in different physically closer locations (us, eu, asia, etc) has anyone had any experience with globally distributed file systems?
16:28 semiosis does such a thing exist?
16:29 nikk yeah it's called dropbox haha
16:29 kam270_ joined #gluster
16:29 nikk but sadly i haven't run into it
16:29 semiosis i wonder how they deal with split brain
16:30 nikk i wonder what's underneath dropbox
16:30 semiosis S3 afaik
16:32 nikk but yeah i'm not about to use dropbox/google drive/s3 + fuse connector to create a posix file system
16:32 * nikk throws up in mouth
16:32 HeisSpiter joined #gluster
16:33 semiosis i've had bad experiences with the S3/fuse
16:34 nikk i'm having a bad experience thinking about it :]
16:34 semiosis ha
16:34 nightwalk joined #gluster
16:34 semiosis the nice thing about dropbox is that it's personal, so most of the time you only have a single writer
16:35 semiosis so split brain is very unlikely
16:35 semiosis however, i'd like to see a test where one takes two disconnected dropboxes, edits the same file on both, then reconnects them
16:35 semiosis what will dropbox do?  the file is split brained
16:36 semiosis merging lines would be a sophisticated solution for text files, but for binary?  forget it
16:36 kdhananjay joined #gluster
16:40 dkorzhevin joined #gluster
16:42 vipulnayyar joined #gluster
16:45 vpshastry joined #gluster
16:46 Mo_ joined #gluster
16:47 kam270_ joined #gluster
16:49 B21956 joined #gluster
16:50 sprachgenerator joined #gluster
16:50 recidive joined #gluster
16:51 larsks semiosis: dropbox will pick one of the copies as "current", and create a new file with the contents of the conflicting file.
17:00 zaitcev joined #gluster
17:05 kam270_ joined #gluster
17:05 vpshastry left #gluster
17:06 discretestates joined #gluster
17:08 nightwalk joined #gluster
17:08 robo joined #gluster
17:09 FarbrorLeon joined #gluster
17:10 semiosis larsks: cool!  thanks for the info
17:11 FarbrorLeon joined #gluster
17:11 jmarley joined #gluster
17:11 jmarley joined #gluster
17:13 sputnik13 joined #gluster
17:17 shubhendu joined #gluster
17:19 SFLimey joined #gluster
17:27 nightwalk joined #gluster
17:32 rfortier1 joined #gluster
17:42 nightwalk joined #gluster
17:45 robo joined #gluster
17:50 rotbeard joined #gluster
17:51 nightwalk joined #gluster
17:53 hagarth joined #gluster
17:55 Pavid7 joined #gluster
17:57 recidive joined #gluster
17:57 jag3773 joined #gluster
18:00 jonathanpoon joined #gluster
18:01 jonathanpoon hi everyone, I'm looking to setup glusterfs storage.  Does the glusterfs client require a lot of cpu resources when performing fs operations onto a gluster mount?
18:02 robo joined #gluster
18:03 nikk semiosis: dropbox does revisioning, so whoever edits first gets a revision, not sure how the index is decided, probably a timestamp of some sort
18:04 nightwalk joined #gluster
18:19 nightwalk joined #gluster
18:21 FarbrorLeon joined #gluster
18:24 redyaffle joined #gluster
18:24 thotz joined #gluster
18:26 lpabon joined #gluster
18:28 msvbhat jonathanpoon: Ideally, It should not...
18:28 nightwalk joined #gluster
18:29 jonathanpoon okay, I'm looking to build a webserver that will read and write data to the glusterfs to host a storage service
18:29 jonathanpoon I am hoping that the client server running apache will only need a low powered dual core cpu
18:30 nikk if you have the memory install varnish
18:31 nikk it's a caching proxy that sits in front of your web server
18:32 failshell joined #gluster
18:32 msvbhat jonathanpoon: Someone in the community has already hosted webserver using glusterfs. atm I don't recall who.
18:33 jonathanpoon nikk, thanks for the tip
18:35 nikk if you use php mod apc that will also reduce stress on the file system
18:35 nikk the more you cache in memory = the less you rely on the file system = lower resource usage and higher responsiveness all around :)
18:36 nikk hope that helps
18:36 nikk if you don't have very much memory, however, that won't help much at all
18:40 nightwalk joined #gluster
18:40 rpowell joined #gluster
18:41 rshade98 semiosis, which repo you want to me use for gluster for debian?
18:41 zerick joined #gluster
18:42 semiosis nikk: the question re: dropbox was about a conflict, not normal versions, but larsks gave the answer
18:42 semiosis jonathanpoon, msvbhat_afk: lots of us run web servers off glusterfs
18:43 semiosis rshade98: there's a debian repo on the ,,(latest) site
18:43 glusterbot rshade98: The latest version is available at http://download.gluster.org/pub/gluster/glusterfs/LATEST/ . There is a .repo file for yum or see @ppa for ubuntu.
18:43 nikk aah
18:43 jonathanpoon semiosis, do you notice your webservers using a lot of cpu/memory to access the glusterfs?
18:44 semiosis jonathanpoon: iirc our web servers have between 2 & 8 GB of memory, which i don't consider to be "a lot".
18:44 jonathanpoon what kind of cpus do you use?
18:45 semiosis it's all EC2
18:45 jonathanpoon oh I see...
18:46 pdrakeweb joined #gluster
18:46 semiosis consider separating code from data.  can you store your web app code on local disk, but have that web app access data on glusterfs?  that would probably be better
18:46 jonathanpoon thats the plan
18:46 semiosis then you should be fine
18:46 jonathanpoon I'm looking to host GitLab privately...
18:47 semiosis the hard part is optimizing apache, php, and the php app to run fast from glusterfs.  it can be done, but it's work.
18:47 jonathanpoon and seafile as well.  I want to use Gluster to host the data, while the apache server will host the websites
18:47 jonathanpoon and also do the data transfer between client --> apache server --> gluster
18:47 Pavid7 joined #gluster
18:47 semiosis i have a private gitlab.  i tried running it out of glusterfs, it was too slow.  my solution was to run gitlab off the SSD and every 10 minutes rsync it to an EBS volume, then snapshot the ebs volume
18:48 jonathanpoon interesting...was it a bandwidth issue?
18:48 semiosis latency
18:48 semiosis on a replicated glusterfs volume opening & writing require several RTT between client & bricks.  for small files this overhead dominates
18:49 semiosis for larger files it's not so noticable
18:49 semiosis and of course code repos are tons of small files
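
A minimal sketch of the rsync-and-snapshot routine semiosis describes; the paths, volume ID and the use of the aws CLI are assumptions for illustration, not details from the log:

    #!/bin/sh
    # copy the gitlab data from the local SSD onto the EBS-backed directory
    rsync -a --delete /var/opt/gitlab/ /mnt/ebs-gitlab/
    # snapshot the EBS volume so point-in-time copies accumulate
    aws ec2 create-snapshot --volume-id vol-12345678 --description "gitlab backup $(date +%F-%H%M)"
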
18:49 rwheeler joined #gluster
18:49 jonathanpoon I see.  So you essentially rsync'd a snapshot of the GitLab data to a GlusterFS backend to store as a backup?
18:50 semiosis no, glusterfs is not involved in my gitlab
18:50 semiosis it's just a single server, with local SSD & an EBS volume (like iscsi)
18:50 jonathanpoon how much storage do you require?
18:50 jonathanpoon or have I should say for gitlab
18:50 semiosis the local SSD can't be snapshotted by EC2, but EBS can, so i rsync to the EBS volume and snapshot it
18:50 semiosis iirc my gitlab is around 6 GB
18:51 jonathanpoon okay
18:51 semiosis we have 100s of projects
18:51 jonathanpoon would gluster be useful for backing up the gitlab data?
18:52 jonathanpoon taking daily snapshots of those repositories?
18:52 semiosis i tried rsyncing from the SSD to gluster but that was too slow
18:52 semiosis for me
18:52 semiosis ymmv
18:53 jonathanpoon when copying a 100Mb file...what kind of bandwidth should I expect when copying to gluster from a client?
18:53 jonathanpoon assuming 1gb network speed
18:53 jonathanpoon so you get about 10-20 mb/sec?
18:53 failshell jonathanpoon: roughly 50MB/s
18:53 failshell that's what i get in our VMware environment
18:53 jonathanpoon does network bonding help?
18:53 semiosis if it's replicated, roughly your available throughput divided by the replica count
18:54 jonathanpoon ethernet bonding I should say
18:55 jonathanpoon semiosis, gotcha
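
A rough worked example of that rule of thumb: a 1 GbE link carries roughly 117 MB/s of payload, and with replica 2 the FUSE client sends each write to both bricks over that same link, so about 117 / 2 ≈ 58 MB/s is the practical ceiling - in the same range as the ~50 MB/s failshell reports.
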
18:55 nikk use apc and set stat=0, put gluster in front, boom :)
18:55 nikk semiosis is right though, code and data should be separate
18:55 chalkfiles joined #gluster
18:56 failshell jonathanpoon: the smaller the files, the slower gluster gets
18:58 jonathanpoon failshell, so when hosting data for websites using gluster
18:59 failshell jonathanpoon: we do it, for static content fronted by our CDN
18:59 jonathanpoon failshell, there can be many small files...
18:59 failshell we're about to put our main news site (very dynamic) on it too, but to serve it over NFS
18:59 jonathanpoon failshell, using NFS, how do you scale the storage?
19:00 failshell not NFS, but serving a gluster volume over NFS
19:00 failshell that's builtin
19:00 semiosis ,,(nfs)
19:00 glusterbot To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
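
A sketch of the corresponding mount command, assuming a server named server1 exporting a volume named myvol:

    mount -t nfs -o vers=3,tcp server1:/myvol /mnt/myvol
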
19:00 jonathanpoon gotcha...
19:00 jonathanpoon is that faster?
19:01 semiosis there's tradeoffs
19:01 semiosis nfs clients do some caching, which weakens consistency, but can improve performance
19:01 isaacabo joined #gluster
19:01 semiosis of course you can disable that caching if you want better consistency, such as for multiple writers
19:01 nightwalk joined #gluster
19:01 failshell https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/ch09s05.html
19:01 glusterbot Title: 9.5. Configuring Automated IP Failover for NFS and SMB (at access.redhat.com)
19:01 failshell we use that for HA
19:02 failshell i dunno if that works on OSS though. we use RHS.
19:02 semiosis also nfs clients do all ops through one server, so you dont get HA for free like you do with a FUSE client
19:02 semiosis you need to set up your own VIP for NFS HA
19:03 isaacabo Hello guys
19:03 semiosis it works well for a remote client though, since you can have a slow or high latency link to the client, and fast links between the servers
19:03 semiosis hi
19:03 glusterbot semiosis: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
19:03 semiosis isaacabo: ^^
19:03 isaacabo how are you all?
19:03 isaacabo hey semiosis
19:03 isaacabo looking more to improve my gluster-fu
19:04 isaacabo it's normal to have duplicate files after a rebalance operation?
19:05 failshell bottom line jonathanpoon, test it with your intended workload, compare before and after gluster
19:05 jonathanpoon failshell, I'm shopping for hardware
19:06 jonathanpoon failshell, so I wanted to make an educated guess to what I needed =D.  Sounds like i'll be a bit unsatisfied with my setup after round 1.
19:06 jonathanpoon failshell, I'm looking at gluster to host the data for a few users.  I'm looking to run seafile with gluster as the backend so I can have scalable storage (also redundancy).
19:07 isaacabo http://ur1.ca/gwb4u
19:07 glusterbot Title: #87505 Fedora Project Pastebin (at ur1.ca)
19:07 failshell jonathanpoon: if you have the $$, get a 10GbE network
19:07 failshell its not that expensive these days and you're going to get a much better experience
19:07 semiosis seafile looks interesting - http://seafile.com/en/product/private_server/
19:07 glusterbot Title: Seafile (at seafile.com)
19:08 jonathanpoon failshell, I'm looking to use ethernet bonding first to see how it goes
19:08 isaacabo if you can see that fpaste link, i got duplicates
19:08 jonathanpoon if I need 10GB, I'll look into it =D
19:08 jonathanpoon its for 5-10 users right now
19:08 semiosis jonathanpoon: infiniband, IPoIB, is the best performing afaik
19:08 semiosis microsecond latency
19:10 jonathanpoon so people can backup their desktop data onto the hosted seafile server.  I expect people to have up to 1TB of data individually
19:10 failshell jonathanpoon: it all depends what your needs are, if it's redundancy over speed, then you'll be fine
19:10 jonathanpoon failshell, how often do you backup your glusterfs data?
19:11 failshell nightly VM snapshot
19:11 jonathanpoon onto another glusterfs server?
19:11 nightwalk joined #gluster
19:12 failshell no, we use Veeam to snapshot our VMs
19:14 robos joined #gluster
19:16 Oneiroi joined #gluster
19:18 isaacabo no one?
19:18 isaacabo no duplicate files after rebalance?
19:19 jonathanpoon failshell, do the apache webservers also need 10gb?
19:20 jonathanpoon failshell, or could they run on 1GB ethernet, while the glusterfs servers run on 10gb ethernet?
19:20 systemonkey joined #gluster
19:21 systemonkey Hi. Can someone help me with performance tuning with glusterfs?
19:21 systemonkey read,write operations are really painfully slow.
19:21 isaacabo systemonkey: hello
19:21 systemonkey hi isaacabo
19:22 isaacabo how many nodes do you have?
19:22 systemonkey I got three systems in distributed mode. local read,write is fast and normal but when accessing through gluster mount point, it crawls.
19:22 robo joined #gluster
19:23 isaacabo did you test your network?
19:23 systemonkey nothing tweaked at this point. default configuration.
19:23 semiosis systemonkey: what kind of network connects the systems?
19:23 failshell jonathanpoon: every machine accessing the gluster cluster would benefit from using a 10GbE
19:23 semiosis also what are you using to test speed?  is it dd?
19:23 systemonkey currently just 1GB nics. iperf is fast between the machines
19:23 jonathanpoon okay
19:23 jonathanpoon brb
19:24 isaacabo just one nic or bonding?
19:24 semiosis systemonkey: set the block size in your tests to something large.  1MB?
19:24 systemonkey failshell: yes, I'm waiting for the 10GbE patch cables. until then, I have 1Gb Nic.
19:24 systemonkey isaacabo, one nic atm.
19:26 isaacabo cpu use?
19:26 systemonkey semiosis: if I set the block size to 1MB, would it affect the small files? I read small files suffer with gluster.
19:26 systemonkey isaacabo: 0.7% usages
19:28 systemonkey I'll upgrade to 10GbE first then if It's slow, I guess I'll revisit.
19:29 semiosis systemonkey: i'm just suggesting that you set your test to use large block writes.... it won't have any effect beyond your test results
19:29 semiosis systemonkey: but benchmarks are of limited use anyway.  test your app, thats the only way to know if it will work
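
A sketch of the kind of large-block test being suggested, assuming the volume is mounted at /mnt/glustervol; plain coreutils dd, nothing gluster-specific:

    dd if=/dev/zero of=/mnt/glustervol/ddtest bs=1M count=1024 conv=fsync   # 1 GB sequential write, flushed at the end
    dd if=/mnt/glustervol/ddtest of=/dev/null bs=1M                         # sequential read back (drop caches first for a fair number)
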
19:30 systemonkey Quick question: Is there a way to dedicate a management network with gluster nodes separate from client connections?
19:30 systemonkey semiosis: ok. I'll try that later today.
19:31 semiosis systemonkey: you can use split-horizon DNS, or get clever with hosts files
19:31 semiosis gluster processes listen on all addresses, and make connections using the normal system name resolution facilities
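
A sketch of the hosts-file variant of that idea, with hypothetical names and addresses: bricks and peers are addressed by hostname, the servers resolve those names to the management NICs, and the clients resolve the same names to the storage NICs.

    # /etc/hosts on the gluster servers (management network)
    192.168.10.1  gluster1
    192.168.10.2  gluster2

    # /etc/hosts on the clients (storage network)
    10.0.10.1     gluster1
    10.0.10.2     gluster2
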
19:32 jonathanpoon joined #gluster
19:35 systemonkey semiosis: Thanks for the info. question: If I left the 1Gb NICs as management network for the glusterfs to communicate with other nodes and 10GbE for the client access, would this speed things up? or is this just too much work with no real benefits.
19:36 semiosis afaik most people use a single network for clients & servers
19:37 semiosis if that's any indication
19:37 semiosis if you want to speed things up, use infiniband & ssd
19:39 systemonkey alright. thanks for the infos. I think i'll switch over to 10GbE first.
19:39 Pavid7 joined #gluster
19:42 nightwalk joined #gluster
19:43 discretestates joined #gluster
19:48 bennyturns joined #gluster
19:50 semiosis yw
20:03 pdrakewe_ joined #gluster
20:07 cfeller Has anyone here seen a situation where the gluster volume disappears from a client?  I don't think anything happened server side, as this only happened to one of my clients.  My other two were unaffected.
20:07 cfeller The gluster mount log from the affectd client is here: http://ur1.ca/gwbh8
20:07 glusterbot Title: #87523 Fedora Project Pastebin (at ur1.ca)
20:09 cfeller After I noticed everything was down, I remounted the volumes and everything was fine.
20:09 cfeller I'm on gluster 3.4.2.  This client is Fedora 20.  The unaffected clients were on RHEL 6.
20:09 cfeller All clients are on the same  switch, so I don't think it was a networking issue.
20:17 nightwalk joined #gluster
20:23 tdasilva left #gluster
20:29 svenwiesner joined #gluster
20:30 semiosis cfeller: [2014-03-21 12:49:09.802775] E [socket.c:2157:socket_connect_finish] 0-gv0-client-2: connection to 134.197.20.196:24007 failed (No route to host)
20:30 nightwalk joined #gluster
20:30 semiosis clearly a network issue
20:31 semiosis maybe an IP conflict or some kind of arp spoofing
20:31 JoeJulian Or iptables
20:31 semiosis perhaps
20:31 svenwiesner Good evening. Does anyone know where to find a complete list of glusterd.vol options? I am looking for a way to bind gluster to a specific ipv6 interface.
20:31 semiosis no route to host, that involves an icmp in addition to a tcp rst, right?
20:32 nikk svenwiesner: http://gluster.org/community/documentation/index.php/Gluster_3.2:_Tuning_Volume_Options
20:32 glusterbot Title: Gluster 3.2: Tuning Volume Options - GlusterDocumentation (at gluster.org)
20:32 nikk might be there
20:32 semiosis svenwiesner: gluster binds to all addresses.  you can use iptables to restrict it
20:32 JoeJulian svenwiesner: There's no option to do that. It's going to listen on ::0. Just use iptables if you don't want other interfaces to be able to connect.
20:32 JoeJulian jinx
20:33 semiosis ha
20:33 semiosis i should gbtw
20:33 JoeJulian me too
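
A sketch of the iptables-based restriction suggested above, assuming the interface to keep is eth1 and the default ports of the 3.4 series (24007-24008 for glusterd/management, bricks from 49152 upward); the port range is from memory, so verify it against the running install:

    ip6tables -A INPUT -p tcp -m multiport --dports 24007,24008 ! -i eth1 -j REJECT
    ip6tables -A INPUT -p tcp --dport 49152:49251 ! -i eth1 -j REJECT
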
20:34 svenwiesner ok, thank you for this information. The link pasted does not provide any options like *socket* or *transport*. Are those options documented somewhere else apart from the source code?
20:35 cfeller semiosis: I saw that, but was confused as I was clearly able to SSH into the box and remount the volumes w/out addressing anything network wise. Additionally, there is nothing in the other logs on that machine indicating that the ethernet device went down.  Also, the fact that the other clients are on the same switch. It is a bit confusing. This box has been acting as a gluster client for months and I've never seen this.
20:36 cfeller Anyway, I guess just keep an eye on it?
20:37 svenwiesner regarding listening: if "option transport.address-family inet6" is not given, gluster refuses to listen on ipv6, so I am somewhat unsure what gluster is doing exactly ;). A list providing possible options would be a clear simplification of the setup process...
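
For context, the option under discussion lives in glusterd.vol; a sketch of the stock layout with the line added, to be checked against the packaged file rather than taken as a documented recipe:

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport-type socket,rdma
        option transport.address-family inet6
    end-volume
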
20:37 semiosis cfeller: would be interesting to try to reproduce this by sending an arp spoof directly to the client
20:41 cfeller semiosis: I was reading your initial comments regarding that again after I sent that reply. I work at a university, and this wouldn't be the first time that someone would have tried to put a machine on the network with a wrong IP. I'll play with your arp theory on my test gluster cluster and let you know if I find anything interesting.
20:42 Pavid7 joined #gluster
20:42 cfeller Thanks for pointing me in that direction.  That begins to make sense the more I think about it.
20:42 semiosis yw, good luck
20:42 semiosis arp spoofing seems really unlikely, an ip conflict might produce the same result
20:42 semiosis but if it was the server's ip, other machines would've noticed, probably
20:43 semiosis and if it was the client's ip, idk if we'd see this behavior
20:44 semiosis for the client to get a rst, then a host unreachable, seems odd
20:44 JoeJulian I do wish it would report where the EHOSTUNREACH game from.
20:44 semiosis but strongly suggests something at the arp layer
20:44 semiosis to me
20:44 JoeJulian came from...
20:45 JoeJulian I think I'm starting to develop android autocorrect syndrome...
20:46 JoeJulian ... that's where I just start typing gibberish since I know it's going to eff it up anyway.
20:50 JoeJulian huh... I never noticed that about ipv6 support. Where'd you find that tidbit, svenwiesner?
20:50 svenwiesner http://www.gluster.org/pipermail/gluster-users/2013-November/037824.html
20:50 glusterbot Title: [Gluster-users] Hi all! Glusterfs Ipv6 support (at www.gluster.org)
20:52 JoeJulian Seriously! Only in an EMAIL!?!
20:52 JoeJulian @meh
20:52 glusterbot JoeJulian: I'm not happy about it either
20:52 svenwiesner :)
20:53 JoeJulian And why isn't this the default?!?!
20:54 svenwiesner the default is inet which has been discussed here: http://review.gluster.com/#/c/3319/1/rpc/rpc-lib/src/rpc-transport.c
20:54 glusterbot Title: Gerrit Code Review (at review.gluster.com)
20:54 johnmark fwiw, there's a discussion about ipv6 friendliness for 3.6
20:55 svenwiesner However, parsing the source code is quite exhausting, especially if not into gluster. That's why I thought someone here maybe running an ipv6 gluster already
20:55 JoeJulian Actually, finding options is relatively simple.
20:56 JoeJulian But since the whole idea of the CLI is that you don't mess with the vol files... That makes ipv6 "unusable".
20:57 JoeJulian ... I could have sworn it listened on ::0 back in 3.1...
20:58 discretestates joined #gluster
21:01 svenwiesner I guess the address-family has been changed / utilised after this discussion: http://lists.gnu.org/archive/html/gluster-devel/2012-05/msg00016.html which was in 2012, after release of 3.2
21:01 glusterbot Title: Re: [Gluster-devel] Fixing Address family mess (at lists.gnu.org)
21:03 nightwalk joined #gluster
21:04 jbrooks left #gluster
21:07 svenwiesner johnmark: My question is if ipv6z really is working in 3.4.2? I don't care about setup nightmares as long as there is light at the end of the tunnel
21:07 svenwiesner *ipv6
21:08 JoeJulian I would test that myself, but I need to get these routers shipped out in under 2 hours...
21:09 svenwiesner No worries. I will try to figure it out myself, if not today than ... well, sometimes :)
21:10 jclift _Proper_ IPv6 support is on the "Nice to have" list for 3.6.
21:10 jclift Kind of thinking it'd be dangerous to rely on it at this stage, even if you get it working
21:10 jclift But, your call :)
21:11 jclift Kind of thinking that a more likely sequence for support is "Better Peer Identification" (feature) goes into 3.6 (pretty likely).  Proper IPv6 support could probably leverage that, and make it into 3.7
21:13 Arrfab jclift: decent ipv6 would be good to have as I'm currently evaluating gluster between two DCs where I have zillions of ipv6 and mostly no ipv4
21:15 mattappe_ joined #gluster
21:16 FarbrorLeon joined #gluster
21:19 sticky_afk joined #gluster
21:20 stickyboy joined #gluster
21:20 jbrooks joined #gluster
21:21 bennyturns joined #gluster
21:25 nightwalk joined #gluster
21:25 FarbrorLeon joined #gluster
21:26 sputnik13 joined #gluster
21:26 sputnik13 joined #gluster
21:28 jbrooks joined #gluster
21:29 gmcwhistler joined #gluster
21:31 FarbrorLeon joined #gluster
21:34 gmcwhist_ joined #gluster
21:37 svenwiesner jclift, thank you for this information. Someone should clarify that on gluster.org and save hundreds of hours of administration time :)
21:38 andreask joined #gluster
21:38 mattappe_ joined #gluster
21:46 nightwalk joined #gluster
22:02 sputnik13 how can I tell whether a volume has been healed?
22:02 sputnik13 I have an 8 node cluster in a distribute/replicate configuration with 2 replicas
22:02 sputnik13 I took one of the servers down for maintenance for about 10 minutes
22:03 sputnik13 I don't know how to tell whether the node has finished repairing
22:03 andreask gluster volume heal <volname> info
22:05 sputnik13 and if it's empty there's no heal required?
22:06 andreask then all is healed
22:07 sputnik13 is a heal only required if there has been writes?
22:07 sputnik13 I guess by definition that would be the case unless it's a striped volume
22:08 andreask yes, If there have been no changes no heal is needed for a replicated setup
22:10 sputnik13 andreask: that's what I expected, thank you for the confirmation
22:11 andreask yw
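
For completeness, the heal-status views available in the 3.4 series (volume name assumed to be myvol):

    gluster volume heal myvol info               # entries still pending heal
    gluster volume heal myvol info healed        # recently healed entries
    gluster volume heal myvol info heal-failed   # entries the self-heal daemon gave up on
    gluster volume heal myvol info split-brain   # entries in split-brain
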
22:16 coredump joined #gluster
22:18 btreeinfinity joined #gluster
22:26 discretestates joined #gluster
22:27 sputnik13 if a node that clients are pointing to goes down, does their existing volume connection go away?
22:27 sputnik13 or is that node only required for the initial mount?
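
A hedged note on that question: with the native FUSE client the server named in the mount command is only used to fetch the volume file; the client then connects to every brick directly, so an established mount survives that server going down, while new mounts need a reachable volfile server. A sketch with hypothetical hostnames - the exact option spelling (backupvolfile-server vs backup-volfile-servers) varies between releases:

    mount -t glusterfs -o backupvolfile-server=server2 server1:/myvol /mnt/myvol
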
22:33 glusterbot New news from resolvedglusterbugs: [Bug 964059] gluster volume status fails <https://bugzilla.redhat.com/show_bug.cgi?id=964059>
22:34 sputnik13 joined #gluster
22:41 badone_ joined #gluster
23:03 nightwalk joined #gluster
23:17 coredump joined #gluster
23:27 coredump joined #gluster
23:42 m0zes_ joined #gluster
23:43 kam270_ joined #gluster
23:45 sputnik13 joined #gluster
23:46 cjanbanan joined #gluster
23:48 chirino_m joined #gluster
23:55 andreask joined #gluster
