IRC log for #gluster, 2013-10-24


All times are shown in UTC.

Time Nick Message
00:04 StarBeast joined #gluster
00:13 XpineX_ joined #gluster
00:21 masterzen joined #gluster
00:32 _pol joined #gluster
00:37 _pol joined #gluster
00:37 glusterbot New news from newglusterbugs: [Bug 1022759] subvols-per-directory floods client logs with "disk layout missing" messages <http://goo.gl/boi1Wu>
00:49 cnfourt joined #gluster
00:58 bstr joined #gluster
01:04 theguidry joined #gluster
01:30 mohankumar joined #gluster
01:38 bala joined #gluster
01:45 bstr joined #gluster
01:45 micu2 joined #gluster
01:58 johnbot11 joined #gluster
02:26 kevein joined #gluster
02:32 johnbot11 joined #gluster
02:52 kaushal_ joined #gluster
02:53 mohankumar joined #gluster
02:56 johnbot11 joined #gluster
03:13 dusmant joined #gluster
03:18 shyam joined #gluster
03:33 bharata-rao joined #gluster
03:36 sprdd joined #gluster
03:37 shubhendu joined #gluster
03:38 kanagaraj joined #gluster
03:40 robo joined #gluster
03:44 mohankumar joined #gluster
03:50 robo left #gluster
03:50 glusterbot New news from resolvedglusterbugs: [Bug 953694] Requirements of Samba VFS plugin for glusterfs <http://goo.gl/v7g29>
03:53 itisravi joined #gluster
03:57 RameshN joined #gluster
04:15 _pol joined #gluster
04:20 bulde joined #gluster
04:27 ndarshan joined #gluster
04:31 satheesh joined #gluster
04:34 spandit joined #gluster
04:40 kshlm joined #gluster
04:45 bala joined #gluster
04:45 ababu joined #gluster
04:45 ppai joined #gluster
04:56 _pol_ joined #gluster
04:58 bala joined #gluster
05:07 ziiin joined #gluster
05:09 rjoseph joined #gluster
05:14 nshaikh joined #gluster
05:17 raghu joined #gluster
05:27 sgowda joined #gluster
05:29 vpshastry joined #gluster
05:36 CheRi__ joined #gluster
05:39 anands joined #gluster
05:40 lalatenduM joined #gluster
05:44 tryggvil joined #gluster
05:45 shylesh joined #gluster
05:47 meghanam joined #gluster
05:47 meghanam_ joined #gluster
05:51 ndarshan joined #gluster
05:57 psharma joined #gluster
06:11 _pol joined #gluster
06:15 sgowda joined #gluster
06:24 marcoceppi joined #gluster
06:25 mohankumar joined #gluster
06:25 ngoswami joined #gluster
06:38 sgowda joined #gluster
06:44 gluslog_ joined #gluster
06:47 ctria joined #gluster
06:47 vpshastry joined #gluster
06:49 keytab joined #gluster
06:58 bulde joined #gluster
06:59 eseyman joined #gluster
07:05 vshankar joined #gluster
07:06 ekuric joined #gluster
07:08 ricky-ticky joined #gluster
07:10 hybrid512 joined #gluster
07:15 jbrooks joined #gluster
07:18 franc joined #gluster
07:18 franc joined #gluster
07:20 satheesh joined #gluster
07:21 glusterbot New news from resolvedglusterbugs: [Bug 961892] Compilation chain isn't honouring CFLAGS environment variable <http://goo.gl/xy5LX> || [Bug 781285] [faf9099bb50d4d2c1a9fe8d3232d541b3f68bc58] improve replace-brick cli outputs. <http://goo.gl/6mwh7> || [Bug 893795] Gluster 3.3.1 won't compile on Freebsd <http://goo.gl/U8QFF>
07:32 bulde joined #gluster
07:35 shubhendu joined #gluster
07:39 glusterbot New news from newglusterbugs: [Bug 960867] failover doesn't work when a hdd part of hardware raid massive becomes broken <http://goo.gl/6usIi> || [Bug 961668] gfid links inside .glusterfs are not recreated when missing, even after a heal <http://goo.gl/4vuYc> || [Bug 962875] Entire volume DOSes itself when a node reboots and runs fsck on its bricks while network is up <http://goo.gl/uG0yY> || [Bug 962878] A downed node d
07:43 kPb_in joined #gluster
07:54 mbukatov joined #gluster
07:55 harish joined #gluster
07:58 harish joined #gluster
08:00 StarBeast joined #gluster
08:04 jbrooks joined #gluster
08:04 aravindavk joined #gluster
08:04 anands joined #gluster
08:05 mgebbe_ joined #gluster
08:12 vpshastry1 joined #gluster
08:15 satheesh1 joined #gluster
08:16 shylesh joined #gluster
08:19 StarBeast joined #gluster
08:20 saurabh joined #gluster
08:25 tryggvil joined #gluster
08:28 sgowda joined #gluster
08:31 T0aD hi guys
08:32 T0aD so gluster is ready with the new quota management ? :P
08:35 aravindavk joined #gluster
08:35 DV__ joined #gluster
08:44 vpshastry1 joined #gluster
08:49 tryggvil joined #gluster
08:52 bala joined #gluster
08:54 satheesh1 joined #gluster
08:54 dusmant joined #gluster
08:59 sgowda joined #gluster
09:00 vimal joined #gluster
09:03 aravindavk joined #gluster
09:04 kshlm joined #gluster
09:04 pkoro joined #gluster
09:05 bulde joined #gluster
09:05 hagarth joined #gluster
09:06 StarBeast joined #gluster
09:09 glusterbot New news from newglusterbugs: [Bug 1022905] quota build: reset command fails <http://goo.gl/ti1J9z>
09:12 manik joined #gluster
09:22 F^nor joined #gluster
09:24 hngkr_ joined #gluster
09:32 dusmant joined #gluster
09:33 bala joined #gluster
09:37 tryggvil joined #gluster
09:39 satheesh joined #gluster
09:40 shilpa_ joined #gluster
09:43 srsc left #gluster
09:46 ndarshan joined #gluster
09:49 harish joined #gluster
10:16 cnfourt joined #gluster
10:19 jord-eye Question: is there any way to restart a glusterfsd process? I mean: I have a replica 2 volume, and I want to restart glusterfsd of that volume in one brick only (it's a critical system and I can't do "volume stop" since it would make the whole volume inaccessible).
10:19 jord-eye It would be like restarting the server, but only for one of the bricks of one volume. I know I can kill it, but I don't know how to bring it back up again.
10:19 kshlm use 'volume start <name> force' to start the killed bricks
10:20 jord-eye ok. Is the "force" necessary?
10:20 jord-eye what's the difference with or without "force"?
10:21 dneary joined #gluster
10:22 kshlm without force the command will error out saying volume already started. with force that check is not done.
10:26 satheesh joined #gluster
10:32 jord-eye perfect. Thanks kshlm
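A minimal sketch of the brick restart kshlm describes above, assuming a hypothetical volume named myvol; run it on the server whose brick you want to bounce:

    gluster volume status myvol          # note the PID of the brick's glusterfsd process
    kill <brick-glusterfsd-pid>          # stop just that brick process
    gluster volume start myvol force     # "force" skips the "already started" check and respawns the killed brick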
10:41 manik joined #gluster
10:49 hflai joined #gluster
10:50 satheesh joined #gluster
10:56 hflai joined #gluster
11:01 harish joined #gluster
11:13 hngkr joined #gluster
11:21 JordanHackworth joined #gluster
11:22 ppai joined #gluster
11:22 CheRi__ joined #gluster
11:24 uebera|| joined #gluster
11:30 manik joined #gluster
11:31 aravindavk joined #gluster
11:32 vpshastry1 joined #gluster
11:34 hngkr_ joined #gluster
11:35 tryggvil joined #gluster
11:40 rastar joined #gluster
11:53 DV__ joined #gluster
11:54 B21956 joined #gluster
11:56 calum_ joined #gluster
12:00 itisravi joined #gluster
12:07 rcheleguini joined #gluster
12:08 jbrooks joined #gluster
12:15 CheRi__ joined #gluster
12:17 ndarshan joined #gluster
12:19 tryggvil joined #gluster
12:23 DV__ joined #gluster
12:26 hngkr_ joined #gluster
12:33 jclift joined #gluster
12:33 edward1 joined #gluster
12:33 shubhendu joined #gluster
12:33 dneary joined #gluster
12:36 ngoswami joined #gluster
12:38 nixpanic joined #gluster
12:39 nixpanic joined #gluster
12:40 glusterbot New news from newglusterbugs: [Bug 1005526] All null pending matrix <http://goo.gl/tu7Eh0> || [Bug 990028] enable gfid to path conversion <http://goo.gl/1HwiQc> || [Bug 1022995] quota: moving files between directories does not update the size field properly <http://goo.gl/pPljfG>
12:58 kkeithley1 joined #gluster
13:03 shubhendu joined #gluster
13:06 bennyturns joined #gluster
13:09 kshlm joined #gluster
13:11 plarsen joined #gluster
13:25 roidelapluie joined #gluster
13:25 roidelapluie joined #gluster
13:28 chirino joined #gluster
13:33 ngoswami joined #gluster
13:33 ndarshan joined #gluster
13:33 squizzi joined #gluster
13:35 shilpa_ joined #gluster
13:37 manik joined #gluster
13:45 kaptk2 joined #gluster
13:52 bulde joined #gluster
14:03 tryggvil joined #gluster
14:05 rastar joined #gluster
14:17 bugs_ joined #gluster
14:18 wushudoin joined #gluster
14:19 shylesh joined #gluster
14:20 jclift joined #gluster
14:30 TDJACR joined #gluster
14:30 Raymii joined #gluster
14:31 Raymii joined #gluster
14:33 Raymii joined #gluster
14:35 cnfourt left #gluster
14:36 Raymii joined #gluster
14:40 jag3773 joined #gluster
14:41 msciciel joined #gluster
14:42 joshcarter joined #gluster
14:42 grisu joined #gluster
14:42 jclift left #gluster
14:43 grisu Hey guys, I have a problem using glusterfs 3.4.1 on debian that google cannot help me with at the moment
14:44 grisu I created a stripe-distributed volume with stride 2 over 4 bricks. If I want to change the stripe-block-size now I get every time: # gluster volume set datas cluster.stripe-block-size 256k  <\n> volume set: failed: option block-size 256k: '256k' is not a valid size-list
14:46 Raymii left #gluster
14:49 zaitcev joined #gluster
14:51 ricky-ticky joined #gluster
14:54 brosner joined #gluster
14:54 hagarth joined #gluster
14:56 ricky-ticky joined #gluster
14:59 ekuric1 joined #gluster
15:00 ekuric joined #gluster
15:02 bala joined #gluster
15:03 ekuric joined #gluster
15:08 pkoro can you try 256KB (instead of 256k)?
15:09 ricky-ticky joined #gluster
15:09 pkoro grisu: can you try 256KB (instead of 256k)?
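A sketch of pkoro's suggestion to retry with a KB unit suffix, reusing the volume name datas from grisu's paste (whether this particular build accepts the value is untested):

    gluster volume set datas cluster.stripe-block-size 256KB
    gluster volume info datas            # the new value should show up under "Options Reconfigured"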
15:10 glusterbot New news from newglusterbugs: [Bug 996047] volume-replace-brick issues <http://goo.gl/OjqWJW>
15:18 LoudNoises joined #gluster
15:20 joshcarter joined #gluster
15:20 daMaestro joined #gluster
15:23 glusterbot New news from resolvedglusterbugs: [Bug 951549] license: xlators/protocol/server dual license GPLv2 and LGPLv3+ <http://goo.gl/WqDBI5>
15:38 Technicool joined #gluster
15:52 anands joined #gluster
15:52 failshell joined #gluster
16:04 johnbot11 joined #gluster
16:05 johnbot11 joined #gluster
16:06 hybrid5121 joined #gluster
16:07 _pol joined #gluster
16:10 vpshastry1 joined #gluster
16:19 ababu joined #gluster
16:19 bulde joined #gluster
16:23 lpabon joined #gluster
16:27 mohankumar joined #gluster
16:29 Mo_ joined #gluster
16:30 SpeeR joined #gluster
16:33 vpshastry1 left #gluster
16:33 _Bryan_ Is anyone here using Gluster for the backend to ElasticSearch?
16:35 torrancew _Bryan_: no, that sounds like it may be a bad idea
16:35 torrancew ES has its own replication strategies
16:36 torrancew and generally writes should be super high throughput for it
16:36 torrancew (it also does its own sharding and such)
16:36 _Bryan_ torrancew: I am looking for an easy way to keep older indexes online but be able to store them...and I will need about 300TB....so I thought gluster since I am using it elsewhere already
16:36 _Bryan_ I would still have the ES nodes...but they would have a mounted gluster volume for the storage
16:36 torrancew For /all/ storage?
16:37 _Bryan_ yes on these nodes...
16:37 _Bryan_ would look like when you run multiple ES nodes on one physical server
16:37 torrancew Yeah, I would imagine that would have a huge perf hit
16:37 _Bryan_ so as an index ages I can move it to different groups of nodes...and the older the index, the slower and bigger the storage
16:37 _Bryan_ I am ok with the perf hit..these would be seldom used indexes..but need them online just in case
16:38 _Bryan_ the primary last few days would all be on SSD arrays local to the servers
16:38 torrancew Ok, I keep getting misled as to whether /all/ your ES nodes would /only/ use gluster, or not
16:38 torrancew If not, it's probably acceptable (ie if it's an archival solution)
16:38 torrancew But measure that stuff
16:38 _Bryan_ No....4 ES nodes on Fio SSD Arrays, 4 nodes on Raid 10 Arrays and then 4 nodes on Gluster storage
16:38 torrancew got it
16:39 _Bryan_ so as the index gets older it gets moved to the group of ES nodes that has slower but larger storage....to keep costs under control
16:39 torrancew ya
16:39 hagarth joined #gluster
16:39 _Bryan_ since they want me to keep a LOT of logs for 13 months
16:43 cfeller Fuse client memory question: I've noticed that, after a lot of file transfers (say rsync for instance), the Gluster Fuse client will suck up about 41 - 42% of the memory on the system, consistently. It doesn't matter if the client machine has 4GB of RAM or 12GB of RAM. After a certain amount of time it is always sucking up a ton of RAM.
16:43 cfeller Since the fuse client will gobble up roughly the same percentage of memory on the system (eventually after heavy use), is there an advantage (performance or otherwise) to having more memory on the client machine?
16:44 cfeller by fuse client, I mean the "glusterfs" process on the client machine (I'm using the fuse mount).
16:44 cfeller I'm using 3.4.1
17:02 vpshastry joined #gluster
17:03 ricky-ticky joined #gluster
17:07 ndk joined #gluster
17:08 bala joined #gluster
17:09 shylesh joined #gluster
17:09 bala joined #gluster
17:11 glusterbot New news from newglusterbugs: [Bug 1023134] Used disk size reported by quota and du mismatch <http://goo.gl/T2fAzM>
17:21 vpshastry left #gluster
17:27 JoeJulian cfeller: Sounds odd. Are you keeping fds open? On my desktop machine, I have 13 volumes mounted, one used heavily. The most heavily used one with 447 open files is only using 149Mb and has been mounted for over a month.
17:33 jkroon joined #gluster
17:33 jkroon hi guys, I've got a situation where the .glusterfs folder on my bricks is consuming 100% of my storage.
17:34 JoeJulian @lucky what is this .glusterfs directory
17:34 glusterbot JoeJulian: http://goo.gl/j981n
17:35 wahly i'm having a hard time determining (from google searches) the "best practice" for brick sizes. we are looking at deploying gluster now with something like 1.8TB bricks, but we aren't sure that future bricks will be that size (maybe bigger, maybe smaller). should we take the 1.8TB of disk and break it into multiple bricks so that each one is smaller, thus making it easier to match in the future?
17:35 JoeJulian That link can explain what that directory is. Why it's filling your drive may be another question.
17:35 wahly or is matching brick sizes not that important?
17:35 JoeJulian Matching brick sizes is not critical, but it is best to do so.
17:36 JoeJulian Most people will partition (physically or through lvm) their drives down and use them as multiple bricks if they mismatch in size.
17:37 JoeJulian Be aware, of course, that with replication ,,(brick order) matters.
17:37 glusterbot Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
17:37 cfeller JoeJulian: how would I know that? What I'm doing right now is migrating a few terabytes of data off of some old Sun boxes via rsync.  Even after rsync completes, the memory consumption remains
17:37 cfeller JoeJulian: here is what I'm seeing, unless I'm interpreting this wrong: http://ur1.ca/fxn4q
17:37 glusterbot Title: #49268 Fedora Project Pastebin (at ur1.ca)
17:38 JimR_ joined #gluster
17:38 JoeJulian cfeller: The way to know if fd's are open: lsof. But it sounds like that should not be the case...
17:39 vpagan joined #gluster
17:41 cfeller JoeJulian: doesn't look like anything on the mounted glusterfs filesystem: http://ur1.ca/fxn5a
17:41 glusterbot Title: #49270 Fedora Project Pastebin (at ur1.ca)
17:42 JoeJulian cfeller: hmm, I just checked my backup server that does the same thing, rsync's everything to a removable drive, and that's using 1.8% on a 2Gb box...
17:43 JoeJulian ~pasteinfo | cfeller
17:43 glusterbot cfeller: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
17:43 JoeJulian Oh!
17:44 JoeJulian I almost forgot, because I do this for the servers since I host 60 bricks each.... I set performance.cache-size to 8Mb.
17:44 wahly JoeJulian: thanks for the info!
17:45 cfeller JoeJulian: gluster volume info: http://ur1.ca/fxn5t
17:45 glusterbot Title: #49272 Fedora Project Pastebin (at ur1.ca)
17:45 JoeJulian hmm, that defaults to 32Mb and 128Mb (not sure why there's 2 defaults) anyway, so it still shouldn't be that big...
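For reference, a sketch of the cache-size tuning JoeJulian mentions, against a hypothetical volume named myvol:

    gluster volume set myvol performance.cache-size 8MB
    gluster volume reset myvol performance.cache-size    # revert to the default later if needed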
17:46 dbruhn joined #gluster
17:47 dusmant joined #gluster
17:47 JoeJulian cfeller: echo 3 >/proc/sys/vm/drop_caches and see if that changes anything.
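A small sketch of that check, comparing the client's resident memory before and after dropping the kernel caches (run as root on the client):

    ps -C glusterfs -o pid,rss,vsz,args    # note the RSS of the mount's glusterfs process
    echo 3 > /proc/sys/vm/drop_caches
    ps -C glusterfs -o pid,rss,vsz,args    # if RSS barely moves, the memory is held by the process, not the page cache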
17:50 cfeller JoeJulian: on the box with 12GB (osgiliath) that the rsync has completed on, it dropped the glusterfs memory consumption marginally, to 40.9%.
17:51 cfeller (clients and servers are both RHEL6, BTW.)
17:51 JoeJulian ok, that's the same then...
17:52 JoeJulian well, technically it's not the same... I'm running CentOS, but for all intents and purposes...
17:53 JoeJulian Oh, wait... no... that's a fedora box...
17:54 JoeJulian Well that's interesting... not 50% like yours, but 8%, perhaps because it hasn't opened/read as many files...
17:57 JoeJulian cfeller: The answer to your original question about more memory, it's cheap and I always like to add more when I can, but I don't actually know if it's beneficial to client performance per-se.
17:57 JoeJulian I'd wonder if it was a memory leak, except that it seems to max out and stop.
17:58 JoeJulian I would still file a bug report. Having the client use up half the ram seems a bit excessive.
17:58 glusterbot http://goo.gl/UUuCq
17:58 cfeller OK.  Maybe I'll do some more testing.  The 4GB box is actually a VM that I'm setting up for general SSH client access to the gluster cluster, and other boxes that I have to rsync backups with.  Since there are other VM clients on the VM host box, I don't want to give it unlimited access, but I do value performance...
17:59 mbukatov joined #gluster
17:59 harish joined #gluster
17:59 vpagan I have a question about volume and brick practices
17:59 cfeller sure, I'll file a bug report.  should I include those fpaste.org contents that I shared with you?  anything else relevant?
17:59 glusterbot http://goo.gl/UUuCq
17:59 JoeJulian I have an answer. Let's see if they match.
18:00 JoeJulian cfeller: probably a status dump, and the output from pmap
18:00 cfeller OK
18:00 JoeJulian for the status dump, kill -USR1 the glusterfs process.
18:00 [o__o] left #gluster
18:01 JoeJulian It used to put it in /tmp but I'm not sure with 3.4. They moved it under /var/run I think.
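A sketch of the statedump step, assuming a hypothetical FUSE mount at /mnt/gluster; the dump location differs between releases, so check both /var/run/gluster and /tmp:

    ps -C glusterfs -o pid,args | grep /mnt/gluster    # find the client process for this mount
    kill -USR1 <glusterfs-pid>
    ls /var/run/gluster/ /tmp/glusterdump.* 2>/dev/null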
18:02 vpagan I've seen that there are examples with several bricks on a single server, which I can only assume are single disks. Is there any advantage to that over just a single raid brick per server?
18:02 [o__o] joined #gluster
18:03 JoeJulian I've grown averse to files being split into stripes after trying to piece together chunks of files where something failed.
18:04 JoeJulian By having lvm lv's per-disk and using distribute to make use of them, if a disk is failing I can pvmove the brick temporarily, too.
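A rough sketch of the pvmove approach JoeJulian describes, assuming a hypothetical volume group vg_bricks where /dev/sdc is the failing disk and /dev/sdd is a freshly added spare:

    pvcreate /dev/sdd
    vgextend vg_bricks /dev/sdd
    pvmove /dev/sdc /dev/sdd       # migrate the brick LV's extents off the failing disk while the brick stays online
    vgreduce vg_bricks /dev/sdc    # remove the bad disk once it holds no extents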
18:06 vpagan Well, right now I have two servers with zfs bricks of essentially raid 6. Both servers are connected by one volume as a distribute
18:06 JoeJulian There's nothing "bad" about that.
18:06 vpagan my thought is that I want to replicate this, of course, but I would need two more servers in this scenario
18:06 jbrooks joined #gluster
18:06 vpagan I was debating whether to keep the zfs or
18:06 JoeJulian Right. That's considered best practice.
18:07 vpagan just split the drives as bricks and replicate between two servers instead of 4.
18:07 vpagan I'm just curious about "standard practice"
18:08 JoeJulian Depends on use case and what suits it best. :/
18:08 vpagan Ok, I'm really just looking out for performance pitfalls
18:08 JoeJulian In my scenario, I have a lot of disparate files that are accessed more-or-less randomly. There's a very even distribution of access so it lends itself well to dht load balancing.
18:10 vpagan I see, I don't have enough data about my system yet to make solid assumptions, but most of the files I do believe will be accessed consistently with new files created over time
18:11 vpagan Since duplication was an issue, zfs was a choice here. Gluster was chosen to continually grow space seamlessly
18:11 JoeJulian One thing to consider is read-ahead caching. If you have relatively few files that will be frequently read, raid and fewer bricks will suit you better.
18:12 vpagan I'll note that
18:12 JoeJulian The more simultaneous unique files that are read, the more bricks.
18:14 vpagan Well, in the future, I'm probably going to have to split my datacenter across different sites, so keeping to the distribute-replicate model with raid bricks, I'll have to use geo replication for this?
18:14 JoeJulian yes
18:15 vpagan Ok, I'm not entirely sure how the geo replication works when I can have a distribute-replication with the same hostnames
18:15 vpagan or rather what advantages geo replication gives me
18:16 JoeJulian replication is very latency dependent. Doing afr across high-latency connections is very problematic. geo-replication allows eventually consistent replication of your volume to a remote location
18:17 JoeJulian or locations
18:17 harish joined #gluster
18:17 kPb_in_ joined #gluster
18:17 JoeJulian I've got to run. Be back in a little while...
18:17 vpagan So does it stagger the replication process across locations?
18:17 vpagan alright, thanks for your help
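A sketch of how geo-replication is typically wired up on 3.4, assuming a hypothetical master volume myvol and a slave volume myvol-dr on remote-host (passwordless SSH between the sites is a prerequisite, and exact syntax varies between releases):

    gluster volume geo-replication myvol remote-host::myvol-dr start
    gluster volume geo-replication myvol remote-host::myvol-dr status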
18:20 _pol joined #gluster
18:33 kPb_in joined #gluster
18:52 tryggvil joined #gluster
18:57 _pol joined #gluster
19:06 JonnyNomad joined #gluster
19:09 ira joined #gluster
19:19 zerick joined #gluster
19:27 _pol_ joined #gluster
19:43 _pol joined #gluster
20:13 hagarth joined #gluster
20:21 nueces joined #gluster
20:42 glusterbot New news from newglusterbugs: [Bug 1023191] glusterfs consuming a large amount of system memory <http://goo.gl/OkQlS3>
20:42 cfeller JoeJulian: that is the bug from our chat earlier...
20:54 tryggvil joined #gluster
21:31 failshel_ joined #gluster
21:51 jesse joined #gluster
21:52 askb joined #gluster
22:01 _pol joined #gluster
22:10 premera joined #gluster
22:14 a2 joined #gluster
22:21 verdurin joined #gluster
22:22 khushildep joined #gluster
22:24 baoboa joined #gluster
22:26 RobertLaptop joined #gluster
22:28 fidevo joined #gluster
22:59 johnbot11 joined #gluster
23:03 _pol joined #gluster
23:23 maxburk joined #gluster
23:26 maxburk hi everyone!
23:26 maxburk when i run `gluster volume heal <volname> info split-brain`, i get a lot of entries with a path of '/'
23:27 maxburk does that mean my only choice is to rebuild the whole volume?
23:27 rm left #gluster
23:28 JoeJulian No... Check to see if you've encountered this strange bug that I haven't been able to figure out how to repro. See if $brick/.glusterfs/00/00/0000*0001 is a directory instead of a symlink.
23:29 maxburk it's a symlink to ../../..
23:29 JoeJulian On all your bricks?
23:29 maxburk yep, looks like it
23:31 maxburk there was a period in which the two servers in my setup were rebooted a lot in a short span, so it's possible that they did manage to enter a full conflicted state
23:31 JoeJulian meh, ok... "getfattr -m trusted.afr $brick" to get a list of afr attributes, then remove them with "setfattr -x $attribute_name $brick"
23:32 JoeJulian I don't like how directories can end up with conflicting afr states and never resolve... I should probably file a bug report about that.
23:32 glusterbot http://goo.gl/UUuCq
23:33 maxburk JoeJulian: that worked perfectly. thank you!
23:33 JoeJulian You're welcome.
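A cautious sketch of the cleanup JoeJulian walked maxburk through, with a hypothetical brick path /export/brick1 and volume name myvol; run it on each brick and remove only the trusted.afr attributes actually reported on the brick root:

    getfattr -m trusted.afr -d -e hex /export/brick1      # list the afr changelog attributes on the brick root
    setfattr -x trusted.afr.myvol-client-0 /export/brick1
    setfattr -x trusted.afr.myvol-client-1 /export/brick1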
