IRC log for #gluster, 2015-06-16


All times shown according to UTC.

Time Nick Message
00:22 ProT-0-TypE joined #gluster
00:36 suliba joined #gluster
00:52 Pupeno_ joined #gluster
01:01 gildub joined #gluster
01:04 haomaiwa_ joined #gluster
01:09 gem joined #gluster
01:19 haomaiwa_ joined #gluster
01:22 jbautista- joined #gluster
01:26 kdhananjay joined #gluster
01:28 jbautista- joined #gluster
01:54 nangthang joined #gluster
01:56 craigcabrey joined #gluster
02:03 aaronott joined #gluster
02:04 harish joined #gluster
02:39 kdhananjay joined #gluster
02:47 kdhananjay joined #gluster
02:56 tessier Man....I have confirmed the interfaces are up at 1000Mb/s full duplex. Yet a simple dd over nc between machines (so it's RAM to RAM, not involving disks or gluster or anything) maxes out at 10MB/s transfer speed.
02:57 tessier Weirdest thing I've seen in quite some time.
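A minimal sketch of the RAM-to-RAM test tessier describes, assuming two hosts and an arbitrary TCP port; the hostname and port are placeholders:

    # receiving host: listen and discard everything
    # (some netcat variants want "nc -l -p 9999" instead)
    nc -l 9999 > /dev/null
    # sending host: stream 1 GiB of zeros from memory across the wire
    dd if=/dev/zero bs=1M count=1024 | nc receiver-host 9999

dd reports the achieved throughput when it finishes; a healthy 1000Mb/s link should land somewhere around 100-115MB/s, so a 10MB/s ceiling points at the network path rather than at gluster or the disks.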
03:01 wushudoin| joined #gluster
03:03 craigcabrey joined #gluster
03:04 maveric_amitc_ joined #gluster
03:06 wushudoin| joined #gluster
03:11 overclk joined #gluster
03:32 gem joined #gluster
03:33 kdhananjay joined #gluster
03:33 shubhendu joined #gluster
03:42 ppai joined #gluster
03:42 craigcabrey joined #gluster
03:49 aaronott joined #gluster
03:50 TheSeven joined #gluster
04:02 autoditac joined #gluster
04:04 RameshN joined #gluster
04:04 atinm joined #gluster
04:08 nbalacha joined #gluster
04:30 atinm joined #gluster
04:41 anil joined #gluster
04:42 deepakcs joined #gluster
04:48 spandit joined #gluster
04:52 zeittunnel joined #gluster
04:53 sakshi joined #gluster
04:54 vimal joined #gluster
04:59 soumya joined #gluster
05:05 autoditac joined #gluster
05:07 ashiq joined #gluster
05:07 Manikandan joined #gluster
05:14 Bhaskarakiran joined #gluster
05:17 ndarshan joined #gluster
05:17 pppp joined #gluster
05:24 rjoseph joined #gluster
05:41 karnan joined #gluster
05:44 kdhananjay joined #gluster
05:44 hgowtham joined #gluster
05:48 schandra joined #gluster
05:48 m0zes joined #gluster
05:49 Bhaskarakiran joined #gluster
05:50 Philambdo joined #gluster
05:51 zeittunnel joined #gluster
05:52 jiffin joined #gluster
05:55 anrao joined #gluster
05:59 bharata-rao joined #gluster
06:01 atalur joined #gluster
06:19 vimal joined #gluster
06:19 scubacuda joined #gluster
06:22 raghu joined #gluster
06:23 maveric_amitc_ joined #gluster
06:26 jtux joined #gluster
06:32 jtux joined #gluster
06:34 Bhaskarakiran joined #gluster
06:34 tessier joined #gluster
06:39 soumya joined #gluster
06:44 kshlm joined #gluster
06:45 nangthang joined #gluster
06:48 RameshN_ joined #gluster
06:49 RameshN_ joined #gluster
06:51 RameshN_ joined #gluster
06:53 RameshN_ joined #gluster
06:54 RameshN_ joined #gluster
06:54 RameshN_ joined #gluster
06:58 [Enrico] joined #gluster
06:59 klaas joined #gluster
07:02 rgustafs joined #gluster
07:15 RameshN_ joined #gluster
07:15 jcastill1 joined #gluster
07:16 atinm joined #gluster
07:20 jcastillo joined #gluster
07:21 schandra sakshi++ thanks
07:21 glusterbot schandra: sakshi's karma is now 2
07:21 rjoseph joined #gluster
07:23 XpineX joined #gluster
07:33 Bhaskarakiran joined #gluster
07:38 fsimonce joined #gluster
07:40 haomaiw__ joined #gluster
07:49 XpineX joined #gluster
07:52 jcastill1 joined #gluster
07:57 Pupeno joined #gluster
07:57 Pupeno joined #gluster
07:57 al joined #gluster
07:57 jcastillo joined #gluster
07:58 m0zes joined #gluster
07:59 rgustafs joined #gluster
08:00 atinm joined #gluster
08:03 glusterbot News from newglusterbugs: [Bug 1232155] Not able to export volume using nfs-ganesha <https://bugzilla.redhat.com/show_bug.cgi?id=1232155>
08:05 nsoffer joined #gluster
08:06 RameshN_ joined #gluster
08:11 liquidat joined #gluster
08:12 ProT-0-TypE joined #gluster
08:20 Slashman joined #gluster
08:25 arcolife joined #gluster
08:32 hgowtham joined #gluster
08:36 harish_ joined #gluster
08:43 rjoseph joined #gluster
09:01 meghanam joined #gluster
09:04 glusterbot News from newglusterbugs: [Bug 1232172] Disperse volume : 'ls -ltrh' doesn't list correct size of the files every time <https://bugzilla.redhat.com/show_bug.cgi?id=1232172>
09:10 legreffier joined #gluster
09:10 legreffier hello
09:10 glusterbot legreffier: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:13 legreffier on the current glusterfs PPA, glusterfind is broken: it searches for main.py in /usr/local/lib/libexec/glusterfs/glusterfind , but main.py gets installed in /usr/lib/x86_64-linux-gnu/glusterfs/glusterfind
09:13 legreffier there are some more references to /usr/local/ in tool.conf
09:15 legreffier solved it Q&D by sedding the relevant files in postinst , I don't get why all the GLUSTERFS_LIBEXECDIR references aren't automagically set to /usr/lib/x86_64-linux-gnu/glusterfs/glusterfind
09:17 legreffier as it's a package-specific problem, i'm not reporting it to upstream bug tracking, and the ubuntu bugtracker doesn't seem to be used. it might interest semiosis though :D
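A rough sketch of the quick-and-dirty postinst workaround legreffier describes; both path prefixes come from the messages above, but the exact set of files to patch (tool.conf here) is an assumption:

    # rewrite the hard-coded /usr/local prefix to the packaged multiarch path
    # (tool.conf location is assumed; adjust to wherever the package installs it)
    sed -i 's|/usr/local/lib/libexec/glusterfs|/usr/lib/x86_64-linux-gnu/glusterfs|g' \
        /usr/lib/x86_64-linux-gnu/glusterfs/glusterfind/tool.conf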
09:22 anrao joined #gluster
09:30 overclk joined #gluster
09:32 jcastill1 joined #gluster
09:34 glusterbot News from newglusterbugs: [Bug 1232199] Skip zero byte files when triggering signing <https://bugzilla.redhat.com/show_bug.cgi?id=1232199>
09:37 jcastillo joined #gluster
09:39 raghu joined #gluster
09:40 legreffier joined #gluster
09:47 ndarshan joined #gluster
09:58 autoditac joined #gluster
09:59 jcastill1 joined #gluster
10:00 RameshN joined #gluster
10:01 maveric_amitc_ joined #gluster
10:03 gem joined #gluster
10:04 jcastillo joined #gluster
10:06 autoditac joined #gluster
10:10 kdhananjay1 joined #gluster
10:17 Bhaskarakiran joined #gluster
10:17 kovshenin joined #gluster
10:18 RameshN_ joined #gluster
10:20 Debloper joined #gluster
10:26 k-ma was there any other way to resolve gfid -> filename in 3.4.x than find with -inum?
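For reference, a hedged sketch of the find -inum approach k-ma mentions, run directly against one brick (the brick path and gfid below are placeholders):

    BRICK=/export/brick1
    GFID=01234567-89ab-cdef-0123-456789abcdef
    # for regular files the gfid entry under .glusterfs is a hard link to the
    # real file (directories use symlinks instead), so its inode number can be
    # fed back to find to recover the path
    INUM=$(stat -c %i "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID")
    find "$BRICK" -inum "$INUM" -not -path "*/.glusterfs/*"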
10:28 haomaiwa_ joined #gluster
10:28 k-ma if not, is it safe to upgrade gluster with unresolved split-brains?
10:28 haomaiw__ joined #gluster
10:33 kovshenin joined #gluster
10:41 nsoffer joined #gluster
10:47 kshlm joined #gluster
11:00 jcastill1 joined #gluster
11:02 atinm joined #gluster
11:02 spalai joined #gluster
11:05 jcastillo joined #gluster
11:08 atinm REMINDER: Gluster Community Bug Triage meeting today at 12:00 UTC (~in 55 minutes)
11:08 kkeithley1 joined #gluster
11:08 kkeithley1 left #gluster
11:09 kkeithley1 joined #gluster
11:13 LebedevRI joined #gluster
11:15 kovshenin joined #gluster
11:23 nsoffer joined #gluster
11:28 firemanxbr joined #gluster
11:30 DV joined #gluster
11:36 Bhaskarakiran joined #gluster
11:42 dusmantkp_ joined #gluster
11:53 milkyline joined #gluster
11:53 ndevos REMINDER: in ~5 minutes from now, the Gluster Bug Triage meeting will start in #gluster-meeting
11:57 atinm joined #gluster
11:59 zeittunnel joined #gluster
12:02 soumya joined #gluster
12:09 theron joined #gluster
12:10 soumya joined #gluster
12:13 Trefex joined #gluster
12:21 DV joined #gluster
12:26 shaunm joined #gluster
12:27 nsoffer joined #gluster
12:29 shubhendu joined #gluster
12:38 smohan joined #gluster
12:39 dusmant joined #gluster
12:42 nbalacha joined #gluster
12:45 kovshenin joined #gluster
12:45 bene-at-car-repa joined #gluster
12:54 nangthang joined #gluster
12:58 kanagaraj joined #gluster
13:00 overclk joined #gluster
13:05 glusterbot News from newglusterbugs: [Bug 1232304] libglusterfs: delete duplicate code in libglusterfs/src/dict.c <https://bugzilla.redhat.com/show_bug.cgi?id=1232304>
13:08 aravindavk joined #gluster
13:12 pppp joined #gluster
13:13 jcastill1 joined #gluster
13:15 rwheeler joined #gluster
13:17 overclk joined #gluster
13:18 jcastillo joined #gluster
13:21 julim joined #gluster
13:24 Twistedgrim joined #gluster
13:30 B21956 joined #gluster
13:33 arcolife joined #gluster
13:34 georgeh-LT2 joined #gluster
13:41 dgandhi joined #gluster
13:46 aaronott joined #gluster
13:46 nishanth joined #gluster
13:49 hamiller joined #gluster
13:49 theron joined #gluster
13:49 bennyturns joined #gluster
13:52 Trefex ndevos: small update, i mounted the gluster volume using direct-io-mode=enable instead of disable
13:52 Trefex no crash for almost 24 hours, and about 35 TB transferred
13:52 Trefex ndevos: any thoughts?
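For context, a fuse mount with that option might look like the following; server, volume, and mountpoint are placeholders:

    mount -t glusterfs -o direct-io-mode=enable server1:/bigvol /mnt/bigvol
    # or the /etc/fstab equivalent:
    # server1:/bigvol  /mnt/bigvol  glusterfs  direct-io-mode=enable,_netdev  0 0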
13:53 nbalacha joined #gluster
13:54 ndevos Trefex: hmm, sounds like some memory issue, direct-io disables caching where it can, so there might be some cache growing without boundaries
13:54 Trefex ndevos: is it bad to have it on?
13:54 Trefex i mean caching disabled?
13:55 Trefex spot performance is slightly lower, but with a non-crashing system, the global performance is obviously much better :)
13:56 zeittunnel joined #gluster
13:56 ndevos Trefex: not bad really, performance may get a hit when you have a workload that does many repeated reads (no caching), or many small writes that could be cached/combined
13:56 squizzi joined #gluster
13:57 Trefex i see, ndevos, unless i have a way of bounding the growing cache, i guess i should stay on the safe side (any ideas on that?)
13:59 ndevos Trefex: I dont know which caching is the issue, we would need to investigate that somehow
14:00 ndevos Trefex: what kind of volume is that? distribute, replicate, disperse, ...?
14:00 Trefex ndevos: http://ur1.ca/mubc6
14:02 ndevos Trefex: was there a need to change those performance settings? did you have the problem with the default values too?
14:03 Trefex ndevos: this was set up for maximum performance by a previous employee, who left before it was put in prod
14:03 Trefex i'm now taking over (as a n00b), so i couldn't say the reasons for this
14:03 miroslav_ joined #gluster
14:04 ndevos Trefex: hmm, I'm never sure what those settings exactly do, but I think some of them could be causing a lot of memory usage (some of them are per thread or file?)
14:06 ndevos bennyturns: the last 15 minutes is about glusterfs-fuse segfaulting (after memory abuse?), maybe you have seen something like that before?
14:07 Trefex ndevos: good point, perhaps i should revisit those one by one, at least i'm happy we seem to "home in" on the issue, seemingly related to memory abuse
14:09 dgbaley joined #gluster
14:09 ndevos Trefex: seems the problem is identified, and you have a workaround, now we need an explanation and maybe a permanent fix or documentation update
14:09 Trefex ndevos: indeed, i'm happy to test anything i can
14:10 dgbaley Hey. I think my upgrade on Ubuntu to 3.6.3 has broken my ability to mount. "Failed to get the 'volume file' from server"
14:10 ndevos Trefex: do you have a test environment where you can run a similar workload?
14:10 Trefex not really a realistic one, however, i'm currently transferring data to this one in an attempt to go to production
14:11 Trefex so it can itself be used to do some tests, eg loss of data could be acceptable, still
14:12 ndevos hmm, not sure how to judge that... it would be good to know if you hit the issue with the default settings though
14:16 Trefex ndevos: where can i find the defaults for a .vol file?
14:17 ndevos Trefex: "gluster volume set help" should display them
14:17 ndevos or, maybe even "gluster volume get $VOLNAME all"
14:17 Trefex and by default it uses "default" except what is overwritten in volume.vol ?
14:18 Trefex ndevos: isn't it actually the options i pasted you earlier? That's the ones overwritten right?
14:19 ndevos Trefex: the options you pasted are set on the volume, they could match the default values
14:20 ndevos Trefex: if you want to unset an option, you can use "gluster volume reset $VOLNAME $option"
14:22 Trefex that will be fun to find the culprit...
14:22 Trefex ndevos:  ^^
14:25 ndevos Trefex: yes, fun indeed :D
14:26 ndevos Trefex: I'm not sure if those options are applied on the running fuse-mount, so I would suggest to unmount and mount again too...
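Putting those suggestions together, one possible round-trip looks like this; the volume name, mountpoint, and performance.io-cache stand in for whichever volume and option are actually being tested:

    gluster volume set help                            # documented options and defaults
    gluster volume get bigvol all                      # current values, if this release has the get subcommand
    gluster volume reset bigvol performance.io-cache   # drop one override back to its default
    # remount so the fuse client picks up the regenerated volfile
    umount /mnt/bigvol
    mount -t glusterfs server1:/bigvol /mnt/bigvol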
14:35 chirino joined #gluster
14:38 dgbaley Is there any difference between mounting with server1,server2,server3:/volume /mountpoint  VS /path/to/volfile /mountpoint with volumefile having one server#:/volume per line?
14:38 dgbaley Or even VS the backup-volfile-servers options?
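For comparison, the first form and the backup-volfile-servers form might look roughly like this (hostnames, volume, and mountpoint are placeholders, and the comma-separated spec assumes the installed mount.glusterfs helper accepts it); in both cases the extra servers are only consulted when fetching the volfile at mount time, while actual I/O goes to the bricks listed in that volfile:

    mount -t glusterfs server1,server2,server3:/bigvol /mnt/bigvol
    mount -t glusterfs -o backup-volfile-servers=server2:server3 server1:/bigvol /mnt/bigvol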
14:59 Gill joined #gluster
15:01 nage joined #gluster
15:01 hamiller joined #gluster
15:05 ekuric joined #gluster
15:07 chirino joined #gluster
15:07 miroslav_ Hi, I'm new to GlusterFS. I'm thinking about using it for storing lots of small files. Is it a good idea? How does it perform on writing lots of small files (25kB on average)? Is it possible to reach say 30MB/s when sending to 10 servers with replication factor 2?
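Back-of-the-envelope, assuming a fuse client and replica 2 (where the client writes both copies itself): 30MB/s of application writes means roughly 60MB/s leaving the client NIC, which fits comfortably within a single 1GbE link (~110MB/s usable). With 25kB files the limiting factor is usually per-file lookup and metadata latency rather than raw bandwidth.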
15:08 lpabon joined #gluster
15:12 dgbaley miroslav_: one thing that's nice about gluster (as opposed to ceph) is that it's extremely easy to bootstrap and get running in minutes
15:18 sdsd joined #gluster
15:21 kdhananjay joined #gluster
15:21 spalai left #gluster
15:22 bennyturns joined #gluster
15:24 miroslav_ I'm asking to find out whether it's worth trying. I've read here: http://www-conf.slac.stanford.edu/xldb2015/Talks2015/6_Wed_1115_Holcombe_XldbTalk.pdf that it's not optimal for files <128kB, but I don't understand the details in the presentation, so I'm trying to find out more here.
15:33 soumya joined #gluster
15:35 glusterbot News from newglusterbugs: [Bug 1232378] [remove-brick]: Creation of file from NFS writes to the decommissioned subvolume and subsequent lookup from fuse creates a link <https://bugzilla.redhat.com/show_bug.cgi?id=1232378>
15:38 aravindavk joined #gluster
15:49 cholcombe joined #gluster
15:51 MontyCarleau joined #gluster
15:53 ppai joined #gluster
15:55 Leildin miroslav_, gluster can be a bit slow on reading loads of small files but storing should be fine. I use mine for a php website with many small files and don't struggle too much.
15:55 Leildin I would just say, try it out ! it's pretty easy to use, even I managed to make a volume easily
15:56 Leildin and I'm what's referred to in the industry as "an idiot"
16:01 Trefex ndevos: can these options be set while running?
16:01 Trefex ndevos: or should i rather restart the transfers?
16:01 Trefex ah well, if i have to unmount, then the question is answered by itself :D
16:02 bennyturns joined #gluster
16:03 nangthang joined #gluster
16:04 miroslav_ thanks, Leildin! How many files do you store? How big are they? How many servers do you use?
16:05 Leildin I have various servers but our biggest one is 30TB in 5T * 6 bricks
16:05 Leildin there are approx 2.5 million files on them
16:05 Leildin half are smaller than 5 MB
16:06 miroslav_ I'd store up to 1TB a day in millions of files.
16:06 Leildin we've only been using distributed volumes though as our storage is replicated by the SAN itself
16:07 Leildin I haven't encountered problems with FUSE mounts
16:07 Leildin we do have apache accessing gluster through a samba file share
16:07 Leildin and THAT is a massive problem
16:07 miroslav_ gluster+samba doesn't play well?
16:07 Leildin we're porting our apache servers to centos to be able to use fuse mounts as it's cut our video loading times by 1000000%
16:08 Leildin for our use, no
16:08 Leildin it brought a load of problems of caching and
16:08 Leildin small files were not available to our watchfolders within 20 ms of creation
16:08 Leildin it was all samba, not gluster
16:09 Leildin so we're moving to centos and normal gluster mounts
16:09 Leildin it's so blatantly better it hurts
16:09 Leildin but there were always windows web servers when I got to this company
16:10 miroslav_ I see, I'll also be writing from a Windows server.
16:10 Leildin took me 2 years to get them to use a reverse proxy so using the full potential of gluster is going to be a lot of trial and error and convincing people it works fine (because it really does)
16:11 Leildin do you need to use the files instantly after writing ?
16:11 miroslav_ what if you share by samba the fuse mount?
16:11 Leildin that's what we do
16:11 miroslav_ almost instantly, yes
16:11 Leildin I wouldn't recommend it
16:12 Leildin we had files not being seen by encoders and other appliances because samba was doing stuff
16:12 Leildin or the cache wasn't being refreshed and bla and bla ...
16:12 Leildin try it !
16:13 Leildin we have a very special case here and going full linux is our best solution
16:13 Leildin you might have a better time with it :)
16:13 miroslav_ ok, thanks for responses. I'll see when I try.
16:14 Leildin I'm also the kind of guy who crashes his 30T volume 20 minutes before leaving work
16:14 Leildin not the best with gluster but I do know it really works well
16:15 miroslav_ I'm actually leaving now so I'll sleep on it :-)
16:15 miroslav_ thanks, bye.
16:27 Philambdo joined #gluster
16:37 hagarth joined #gluster
16:37 Philambdo joined #gluster
16:41 malevolent joined #gluster
16:42 xavih joined #gluster
16:44 craigcabrey joined #gluster
17:02 smohan joined #gluster
17:04 haomaiwang joined #gluster
17:07 Philambdo joined #gluster
17:15 tessier joined #gluster
17:15 bennyturns joined #gluster
17:21 Leildin joined #gluster
17:24 Rapture joined #gluster
17:34 papamoose1 joined #gluster
17:35 glusterbot News from newglusterbugs: [Bug 1232420] Disperse volume : fuse mount hung on renames on a distributed disperse volume <https://bugzilla.redhat.com/show_bug.cgi?id=1232420>
17:37 jbautista- joined #gluster
17:38 B21956 joined #gluster
17:38 chirino joined #gluster
17:42 jbautista- joined #gluster
17:54 rwheeler joined #gluster
17:58 dusmant joined #gluster
18:07 Philambdo joined #gluster
18:12 nsoffer joined #gluster
18:17 soumya joined #gluster
18:28 woakes070048 joined #gluster
18:41 scubacuda joined #gluster
18:42 Vortac joined #gluster
18:44 rotbeard joined #gluster
18:46 haomaiwang joined #gluster
19:13 spot joined #gluster
19:20 autoditac joined #gluster
19:25 aravindavk joined #gluster
19:34 ira joined #gluster
19:40 ProT-0-TypE joined #gluster
19:49 jbrooks joined #gluster
19:49 victori joined #gluster
19:50 klaas joined #gluster
19:57 ProT-0-TypE left #gluster
20:10 jrdn joined #gluster
20:12 jcastill1 joined #gluster
20:15 chirino joined #gluster
20:17 lexi2 joined #gluster
20:17 jcastillo joined #gluster
20:18 julim joined #gluster
20:38 Rapture having a little issue with glusterfs. Using the gluster volume heal gv0 info I'm getting "Possibly undergoing heal" under one of the bricks. It's been like that for a while now and I've tried to manually heal using gluster volume heal my-volume
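A few standard checks for a heal that looks stuck, using the volume name from the message above:

    gluster volume heal gv0 info                 # entries still pending heal, per brick
    gluster volume heal gv0 info split-brain     # entries that need manual resolution
    gluster volume heal gv0 full                 # trigger a full sweep instead of the index heal
    gluster volume status gv0                    # confirm the self-heal daemons are online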
20:52 wkf joined #gluster
21:17 badone joined #gluster
21:29 hagarth joined #gluster
21:33 Philambdo joined #gluster
21:33 rwheeler joined #gluster
21:36 glusterbot News from newglusterbugs: [Bug 914874] Enhancement suggestions for BitRot hash computation <https://bugzilla.redhat.com/show_bug.cgi?id=914874>
21:42 scooby2 left #gluster
22:19 jbrooks joined #gluster
22:31 Gill joined #gluster
23:31 gildub joined #gluster
23:43 gsaadi left #gluster
