
IRC log for #gluster, 2013-06-10


All times shown according to UTC.

Time Nick Message
00:00 RicardoSSP joined #gluster
00:00 RicardoSSP joined #gluster
00:09 StarBeast joined #gluster
00:15 mtanner_ joined #gluster
00:33 AndrewX192 joined #gluster
00:34 AndrewX192 Why would "gluster volume create".. never complete with two nodes (i.e.: never return a result)? (I've turned off my firewall, and confirmed that the peers can communicate with each other.)
00:36 AndrewX192 I should clarify, when it does complete (which takes a few minutes), "gluster volume info" shows no volumes
00:59 majeff joined #gluster
01:09 hjmangalam2 joined #gluster
01:14 edong23 joined #gluster
01:24 daMaestro joined #gluster
01:31 yinyin joined #gluster
02:18 vrturbo joined #gluster
02:45 aravindavk joined #gluster
03:04 bharata joined #gluster
03:27 mohankumar joined #gluster
04:01 sgowda joined #gluster
04:13 saurabh joined #gluster
04:18 hjmangalam1 joined #gluster
04:23 vshankar joined #gluster
04:30 shylesh joined #gluster
04:44 vpshastry joined #gluster
04:59 glusterbot New news from newglusterbugs: [Bug 952029] Allow an auxiliary mount which lets users access files using only gfids <http://goo.gl/x5z1R>
05:07 ferringb curious; is there a way to force clients to reload the volume definition without manually going and doing a umount + mount?
05:08 ferringb this being beyond just a gluster volume set invocation
05:15 1JTAAARWV joined #gluster
05:19 psharma joined #gluster
05:32 vimal joined #gluster
05:32 bala1 joined #gluster
05:40 vpshastry joined #gluster
05:59 lalatenduM joined #gluster
06:05 rastar joined #gluster
06:10 rgustafs joined #gluster
06:16 lalatenduM joined #gluster
06:17 JoeJulian ferringb: You can try a kill -HUP, but I don't expect success.
06:20 jtux joined #gluster
06:21 ollivera joined #gluster
06:21 satheesh joined #gluster
06:31 goerk_ joined #gluster
06:41 satheesh joined #gluster
06:41 ricky-ticky joined #gluster
06:45 rastar1 joined #gluster
06:47 StarBeast joined #gluster
06:54 mooperd joined #gluster
06:55 ctria joined #gluster
06:56 chirino joined #gluster
06:57 raghu joined #gluster
06:59 ekuric joined #gluster
07:01 guigui joined #gluster
07:08 rotbeard joined #gluster
07:08 harish joined #gluster
07:11 sohoo joined #gluster
07:13 sohoo hi, will a normal rsync from brick A node1 --> brick A node2 on a replicated volume be OK (what is the best rsync)? I put up monitoring to check the diff between bricks and noticed 1 pair of bricks is 1GB out of sync
07:14 ferringb JoeJulian: is that a guess based on the unix norm, or experience? ;)
07:15 dobber_ joined #gluster
07:15 sohoo just a note on that, I've noticed that on heavy writes a normal diff is 100MB but it becomes fully in sync after a while (makes sense, client-side replication etc..) but this pair is 1GB out of sync all the time
07:16 rb2k joined #gluster
07:17 ngoswami joined #gluster
07:19 sohoo is it possible to stop writing to a pair of bricks while you run rsync on them? this is an important question
07:19 sohoo in 10 bricks cluster
07:20 sohoo :)
07:24 sohoo do you need to exclude the metadata directory(the symlinks directory) from rsync?
07:37 JoeJulian ferringb: A HUP causes the client to re-query the server for the volume configuration, and reload if the config has changed. I'm just not sure if hand-modifying a volume and doing the same hup will do what you're expecting.
07:38 JoeJulian ferringb: I'd have a higher expectation of success if you used the filter though: http://www.gluster.org/community/documentation/index.php/Glusterfs-filter
07:38 glusterbot <http://goo.gl/dMhlL> (at www.gluster.org)
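
A minimal sketch of the SIGHUP approach JoeJulian describes, assuming a FUSE client with a single glusterfs process and a hypothetical mount point /mnt/myvol:

    # find the glusterfs client process serving the mount (path is made up)
    ps aux | grep '[g]lusterfs' | grep /mnt/myvol
    # SIGHUP makes the client re-query the server for the volume configuration
    kill -HUP <pid-of-that-glusterfs-process>

Whether a hand-edited volfile survives that reload is exactly the open question here; the filter mechanism in the linked page is the supported way to persist such edits.
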
07:39 JoeJulian sohoo: I wouldn't rsync the bricks. Do a "gluster volume heal $vol full" instead.
07:42 sohoo joe tnx, I'd rather control the workflow of the process; also I just need to sync 1 pair, not the entire volume
07:48 spider_fingers joined #gluster
07:48 spider_fingers left #gluster
07:49 ferringb JoeJulian: yeah, I was looking at filter to enforce some settings that aren't exposed through `gluster volume set`
07:49 koubas joined #gluster
07:49 ferringb JoeJulian: long story short, in a situation where re-addition of a failed brick results in the volume effectively hanging, w/ cpu pegged on the brick that's being re-added
07:50 ferringb thus I'm thinking about ways to try and shove as much load off that brick as possible, ie, read-subvolume. ;)
07:53 ferringb notions/solutions obviously welcome; at this point it's easy to trigger, thus I've got some fairly complete logs from it
07:54 rastar joined #gluster
07:58 ujjain joined #gluster
08:13 vpshastry joined #gluster
08:21 tziOm joined #gluster
08:24 Norky joined #gluster
08:31 mooperd joined #gluster
08:36 rastar joined #gluster
08:36 majeff joined #gluster
08:41 puebele joined #gluster
08:46 spider_fingers joined #gluster
08:49 rb2k joined #gluster
09:00 hybrid5121 joined #gluster
09:07 Staples84 joined #gluster
09:19 bulde joined #gluster
09:22 Shdwdrgn joined #gluster
09:29 ricky-ticky joined #gluster
09:43 ccha joined #gluster
09:47 deepakcs joined #gluster
10:03 rotbeard joined #gluster
10:15 manik joined #gluster
10:19 Shdwdrgn joined #gluster
10:19 a2_ joined #gluster
10:20 pkoro joined #gluster
10:22 harish joined #gluster
10:23 sohoo joined #gluster
10:28 tziOm How can I make gluster quiet down the debugging?
10:38 edward1 joined #gluster
10:38 manik joined #gluster
10:43 andreask joined #gluster
10:54 jbrooks joined #gluster
11:03 Shdwdrgn joined #gluster
11:05 xavih joined #gluster
11:15 matiz joined #gluster
11:15 matiz hi
11:15 glusterbot matiz: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
11:21 aravindavk joined #gluster
11:33 rwheeler joined #gluster
11:44 aliguori joined #gluster
11:48 matiz I have a problem described here - https://bugzilla.redhat.com/show_bug.cgi?id=853895 (can not mount gluster as read-only) - is there any way to get this patch working with version 3.3.0 or 3.3.1?
11:48 glusterbot <http://goo.gl/xCkfr> (at bugzilla.redhat.com)
11:48 glusterbot Bug 853895: medium, medium, ---, csaba, ON_QA , CLI: read only glusterfs mount fails
11:52 hagarth joined #gluster
11:55 matiz I can not find glusterfs packages in version 3.3.0/3.3.1 with this patch applied.
11:55 morse joined #gluster
11:55 pkoro Hi matiz, we have had the same issue on 3.3.0 and once we updated to 3.3.1 a couple of weeks ago I tried to see if it would work but unfortunately it does not work on 3.3.1 either.
11:56 pkoro (community builds)
11:56 pkoro so we are still using an nfs mount for this volume on the client where we need to have it in read-only mode.
12:01 matiz yes, versions glusterfs-3.3.0-2.el6.x86_64 and glusterfs-3.3.1-15.el6.x86_64 do not work - the solution with nfs is not good for me :(
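
For context, a rough sketch of the failing case from Bug 853895 and the NFS workaround pkoro mentions, with hypothetical server and volume names:

    # native (FUSE) mount with -o ro -- this is what fails on 3.3.0/3.3.1 per the bug
    mount -t glusterfs -o ro server1:/myvol /mnt/myvol
    # workaround: read-only mount over Gluster's built-in NFSv3 server
    mount -t nfs -o ro,vers=3 server1:/myvol /mnt/myvol
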
12:02 kkeithley So far the fix is in 3.4.0. I'm checking to see if it will also be in 3.3.2
12:09 kkeithley It does not appear that it will be in 3.3.2; at least not as of right now.
12:11 ccha has 3.3.2 already been released?
12:11 kkeithley no, 3.3.2 has not been released yet
12:12 anands joined #gluster
12:12 hagarth joined #gluster
12:12 aliguori joined #gluster
12:12 kkeithley AFAIK it's going through QE release testing.
12:13 nueces joined #gluster
12:13 matiz I see that the RHSA fixed this - http://rhn.redhat.com/errata/RHSA-2013-0691.html
12:13 glusterbot <http://goo.gl/erZio> (at rhn.redhat.com)
12:13 matiz "Various fixes in the FUSE module to ensure the 'read-only' (-o ro) mount option works. (BZ#858499)"
12:15 kkeithley Yes, that's for RHS.
12:15 matiz RHSA uses version 3.3.0 from what I know - maybe it is a good idea to merge the source into the community builds? :)
12:17 kkeithley It is merged in the community source. It's on the master and release-3.4 branches and is fixed in the forthcoming 3.4.0 release.
12:18 kkeithley (And I'm not the one who decided it should or shouldn't be in 3.3.2.)
12:18 matiz client 3.4.0 will be compatible with 3.3.0 server?
12:20 kkeithley Generally we don't suggest mixing versions between client and server, but it might work. You can try it and let us know
12:21 guigui joined #gluster
12:21 kkeithley And now that I'm aware of it, if it's not in 3.3.2 proper, I might backport the patch into the Fedora and community packages for 3.3.2.
12:22 kkeithley s/I might/I might be persuaded to/
12:22 glusterbot kkeithley: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
12:22 kkeithley meh
12:27 matiz yes, it is a good idea to backport these fixes to 3.3.2 - it would be nice to have the read-only option in version 3.3.x :)
12:34 ricky-ticky joined #gluster
12:43 ctria joined #gluster
12:53 satheesh joined #gluster
12:58 aravindavk joined #gluster
13:00 bulde joined #gluster
13:01 bulde1 joined #gluster
13:01 rastar joined #gluster
13:02 bennyturns joined #gluster
13:02 robo joined #gluster
13:02 plarsen joined #gluster
13:04 H__ what is gluster's lightest/fastest self-heal method?
13:05 ndevos it depends... 'gluster volume heal ... full' is supposed to have a better performance than a ,,(targeted self-heal)
13:05 glusterbot I do not know about 'targeted self-heal', but I do know about these similar topics: 'targeted self heal'
13:06 ndevos but, if you know the subdir where the heal needs to happen, the ,,(targeted self heal) will only try to heal that subdir
13:06 glusterbot http://goo.gl/E3b2r
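
A sketch of the two approaches ndevos is contrasting, assuming a volume named myvol mounted at /mnt/myvol (both names hypothetical):

    # full self-heal: the self-heal daemon crawls the entire volume
    gluster volume heal myvol full
    # targeted self-heal: stat everything under the affected subdirectory through a
    # client mount, which triggers healing only for the files that are touched
    find /mnt/myvol/path/to/subdir -noleaf -print0 | xargs -0 stat > /dev/null
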
13:07 vpshastry1 joined #gluster
13:11 ekuric joined #gluster
13:16 ricky-ticky joined #gluster
13:20 vpshastry1 joined #gluster
13:23 mohankumar joined #gluster
13:25 sjoeboo joined #gluster
13:27 bennyturns joined #gluster
13:31 glusterbot New news from newglusterbugs: [Bug 969461] RFE: Quota fixes <http://goo.gl/XFSM4>
13:36 shylesh joined #gluster
13:36 bdperkin joined #gluster
13:47 deepakcs joined #gluster
13:49 ollivera joined #gluster
13:51 lalatenduM joined #gluster
13:51 manik joined #gluster
13:55 vpshastry1 joined #gluster
13:59 failshell joined #gluster
14:00 MrNaviPa_ joined #gluster
14:06 ekuric1 joined #gluster
14:08 failshell i know i keep asking about this, but i still dont understand why that's happening. ive set cluster.min-free-disk: 20 and yet, some bricks are filled at 93%
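
For reference, a sketch of how that option is usually set and checked, assuming a hypothetical volume named myvol; note that cluster.min-free-disk generally only steers where DHT places new files, so existing files and files that keep growing can still push a brick well past the threshold:

    # an explicit percent sign avoids any ambiguity between a percentage and a byte count
    gluster volume set myvol cluster.min-free-disk 20%
    # confirm it was applied
    gluster volume info myvol | grep min-free-disk
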
14:11 rwheeler joined #gluster
14:13 lpabon joined #gluster
14:15 anands joined #gluster
14:18 portante joined #gluster
14:30 hjmangalam1 joined #gluster
14:32 stopbit joined #gluster
14:37 bugs_ joined #gluster
14:49 aravindavk joined #gluster
14:49 majeff joined #gluster
14:50 neofob glusterfs 3.4beta2 on AMD A10-5800K with 3 bricks; one client transferring files (rsync)
14:50 neofob http://bit.ly/13tFNTd
14:50 neofob 90-101 MB/s with very low cpu utilization on server side, left top screenshot
14:51 neofob the right screenshot is the client system monitor, ivy bridge i7-3770S
14:57 portante_ joined #gluster
14:57 neofob has anyone run bonnie++ or fio on glusterfs?
14:57 failshell its a pointless test honestly
14:58 MrNaviPacho joined #gluster
15:03 harish joined #gluster
15:07 daMaestro joined #gluster
15:08 hjmangalam joined #gluster
15:09 Norky I have run various benchmarks
15:09 bambi23 joined #gluster
15:11 bulde joined #gluster
15:12 neofob i dont take benchmarks wholeheartedly, but it is a good way to generate some load for io/cpu; hopefully we dont see any problems
15:12 neofob either in kernel or user-space
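
If someone does want to generate that kind of load, a minimal fio run against a GlusterFS mount might look like the following; the mount point and sizes are made up:

    # sequential 1MB writes through the FUSE mount at a hypothetical /mnt/gv0
    fio --name=seqwrite --directory=/mnt/gv0 --rw=write --bs=1M --size=2G --numjobs=4 --group_reporting
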
15:21 bulde1 joined #gluster
15:25 MrNaviPacho joined #gluster
15:27 hchiramm__ joined #gluster
15:27 jthorne joined #gluster
15:31 vpshastry joined #gluster
15:31 theron joined #gluster
15:35 manik joined #gluster
15:38 aravindavk joined #gluster
15:42 sjoeboo so, we've got a brick remove running, the rebalance part, and it's pretty slow going... any pointers on speeding it up? replica=2, 10GB connected nodes, minimal io going on, and it's only moved ~300GB in 5 days (granted, it's millions of files).
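
A sketch of how the progress of that migration can be watched, assuming 3.3-era syntax and hypothetical volume/brick names:

    # show how much data and how many files the remove-brick rebalance has moved so far
    gluster volume remove-brick myvol server3:/export/brick1 status
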
16:01 bala joined #gluster
16:07 vpshastry left #gluster
16:09 aravindavk joined #gluster
16:10 hjmangalam1 joined #gluster
16:15 vpshastry joined #gluster
16:19 MrNaviPa_ joined #gluster
16:23 chirino joined #gluster
16:40 marmoset I'm trying to do a replace-brick (3.3.1) and it seems to give up after a random amount of data (one time after ~1.8TB, the other after only 42G) with the receiving brick having a glusterfs process using 100% cpu on one core, but not doing anything.  status says it completed, but it didn't.  Any ideas?
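
For reference, the 3.3.1 replace-brick workflow being described is roughly the following, with hypothetical names; status is the step reporting completion prematurely here, and abort or commit are the usual ways to end the operation:

    gluster volume replace-brick myvol oldsrv:/export/brick newsrv:/export/brick start
    gluster volume replace-brick myvol oldsrv:/export/brick newsrv:/export/brick status
    # abandon the migration instead of committing it
    gluster volume replace-brick myvol oldsrv:/export/brick newsrv:/export/brick abort
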
16:50 chirino joined #gluster
16:53 chirino joined #gluster
17:07 soukihei joined #gluster
17:11 lalatenduM joined #gluster
17:27 portante joined #gluster
17:29 thisisdave joined #gluster
17:30 [1]lbalbalba joined #gluster
17:30 [1]lbalbalba left #gluster
17:31 lbalbalba joined #gluster
17:31 manik joined #gluster
17:35 Nuxr0 joined #gluster
17:36 edoceo_ joined #gluster
17:36 stigchri1tian joined #gluster
17:36 abyss^_ joined #gluster
17:36 GLHMarmo1 joined #gluster
17:36 xymox joined #gluster
17:39 joelwallis joined #gluster
17:40 ehg_ joined #gluster
17:42 the-me joined #gluster
17:46 MrNaviPa_ joined #gluster
17:58 bennyturns joined #gluster
18:04 bivak joined #gluster
18:07 joelwallis joined #gluster
18:17 Skunnyk joined #gluster
18:21 raghu joined #gluster
18:25 MrNaviPa_ joined #gluster
18:49 edward1 joined #gluster
18:55 MrNaviPacho joined #gluster
19:00 nightwalk joined #gluster
19:01 isomorphic joined #gluster
19:11 JonnyNomad joined #gluster
19:13 JusHal joined #gluster
19:17 JusHal Using 3.3.1; even if the cluster.min-free-disk limit of a brick in a volume of multiple bricks is reached, it keeps on being written to until it is out of space, causing errors and/or frozen clients. Any idea?
19:18 edoceo joined #gluster
19:20 chirino joined #gluster
19:25 rwheeler joined #gluster
19:33 chirino joined #gluster
19:44 hchiramm__ joined #gluster
19:46 realdannys1 joined #gluster
19:46 realdannys1 Hi
19:46 glusterbot realdannys1: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
19:49 realdannys1 I'm trying to get my first use of Gluster up and running on EC2, I picked two 64-bit versions of Amazon's Linux and I followed the instructions at both http://www.gluster.org/community/documentation/index.php/Getting_started_configure and http://serverfault.com/questions/479576/installing-glusterfs-on-amazon-ec2 which offered some EC2 help. I managed to probe the other instance, but now when I try to create the volume the terminal sits for a while. I get no
19:49 realdannys1 error message but when I type to check volumes it says "no volumes present"
19:49 glusterbot <http://goo.gl/BsK02> (at www.gluster.org)
19:52 andreask joined #gluster
19:57 realdannys1 Not sure if my question just got through or was blocked by the bot?
20:02 kkeithley the bot doesn't block anything
20:05 realdannys1 ah ok, thanks
20:06 hagarth joined #gluster
20:09 hchiramm__ joined #gluster
20:13 ctria joined #gluster
20:15 ricky-ticky joined #gluster
20:19 rb2k joined #gluster
20:28 realdannys1 Hmmm, I've tried everything but it just pauses, no error message and doesn't create the volume - I can't understand why
20:33 realdannys1 looking at cli.log
20:33 realdannys1 Im getting
20:33 realdannys1 Unable to parse create volume CLI
20:33 hchiramm__ joined #gluster
20:34 kkeithley firewall ports open? selinux disabled?
20:34 realdannys1 kkeithley I'm running EC2 so I've just opened the ports glusterfs needed
20:34 realdannys1 selinux I'm not sure if that's on Amazon Linux - how could I check and then disable it?
20:37 semiosis realdannys1: can you share your volume create command here?
20:38 realdannys1 sure its
20:38 realdannys1 gluster volume create gv0 replica 2  23.21.65.252:/export/sdb1/brick 23.21.247.73:/export/sdb1/brick
20:39 semiosis can you paste the output of 'gluster peer status' from both servers to pastie.org & give the link here?
20:41 realdannys1 Here you go - http://pastebin.com/hdi2YRmU
20:41 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
20:42 realdannys1 sorry - http://pastie.org/8031926
20:42 glusterbot Title: #8031926 - Pastie (at pastie.org)
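
For anyone following along, the usual two-node sequence looks roughly like this, assuming glusterd is running on both instances, the bricks are already formatted and mounted under /export/sdb1, and the security group allows the gluster ports between the nodes:

    # from the first node only
    gluster peer probe 23.21.247.73
    gluster peer status
    gluster volume create gv0 replica 2 23.21.65.252:/export/sdb1/brick 23.21.247.73:/export/sdb1/brick
    gluster volume start gv0
    gluster volume info
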
20:46 jag3773 joined #gluster
20:52 atrius joined #gluster
20:53 rb2k joined #gluster
20:55 JusHal left #gluster
20:58 _NiC joined #gluster
20:58 irk joined #gluster
20:58 gluslog joined #gluster
20:59 tru_tru joined #gluster
21:01 realdannys1 I've turned off selinux for now
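
The SELinux check kkeithley asked about can be done along these lines on most RHEL-derived images (whether Amazon Linux enforces it by default is another question):

    # show the current mode: Enforcing, Permissive, or Disabled
    getenforce
    # switch to permissive for the current boot
    setenforce 0
    # to persist across reboots, set SELINUX=permissive (or disabled) in /etc/selinux/config
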
21:03 hchiramm__ joined #gluster
21:04 nueces joined #gluster
21:06 chirino joined #gluster
21:08 chirino joined #gluster
21:10 rb2k joined #gluster
21:11 bulde joined #gluster
21:39 portante joined #gluster
21:42 hchiramm__ joined #gluster
22:02 barking|spiders joined #gluster
22:06 barking|spiders hi yall, I am in need of some help. wondering if someone here can shed some light for me. here is my problem:
22:07 barking|spiders I am saving a video stream to the cluster (20 servers). Nothing fancy, just a 3Mbit/s stream. I am trying to play the stream while it's being saved
22:07 barking|spiders and the playback is choking -- pauses every few seconds
22:09 barking|spiders I was able to do it just fine when I had a bunch of samba servers (and was manually managing which stream is recorded where)
22:10 barking|spiders I wanted to simplify and replaced sambas with one glusterfs cluster, which made things very simple, but not sure what to do with performance
22:12 barking|spiders I am using gluster 3.1 btw
22:14 hchiramm__ joined #gluster
22:15 barking|spiders is io-cache translator something I should be looking at?
22:20 barking|spiders anyone?
22:33 hchiramm__ joined #gluster
22:35 a2_ barking|spiders, how is the playback detecting the "growth" of files?
22:36 a2_ barking|spiders, is the playback and writing happening from the same mount point on the same machine?
22:36 barking|spiders a2, the file is written by one (of many) host and is played on another host
22:37 barking|spiders using patched mplayer which stats file in a loop until new data arrives
22:37 barking|spiders a2_ ^
22:37 a2_ how often does mplayer stat the file?
22:38 a2_ my suspicion is that this is a caching problem, where file size is not reflected fast enough, rather than a performance problem
22:39 a2_ you can test that by mounting glusterfs with the attribute-timeout=0 command line parameter, disabling the md-cache/stat-prefetch performance translator and re-testing with mplayer
22:39 barking|spiders every 5 seconds
22:39 a2_ oh ok
22:40 barking|spiders I tried disabling fstat and it helped a bit, but didn't resolve completely
22:42 barking|spiders hmm, gonna try the above
22:42 barking|spiders thanks a2_
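
A sketch of the two changes a2_ suggests, assuming a volume named myvol and a FUSE client mount (names hypothetical):

    # remount the playback client with attribute caching disabled
    mount -t glusterfs -o attribute-timeout=0 server1:/myvol /mnt/myvol
    # turn off the md-cache/stat-prefetch translator for the volume
    gluster volume set myvol performance.stat-prefetch off
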
22:45 jag3773 joined #gluster
22:45 mooperd joined #gluster
23:06 sohoo joined #gluster
23:07 sohoo is there any rsync recommendation for putting bricks back in sync? we have 2 bricks with a 1GB diff.. between them and self heal didn't help
23:10 sohoo 1 brick (in replicated mode) is 87GB, the other is 88GB; any recommendations on fixing that would be great
23:11 sohoo rsync -avz?
23:25 sohoo nobody?
23:29 StarBeast joined #gluster
23:32 majeff joined #gluster
23:45 sohoo no ideas? really strange
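
JoeJulian's advice earlier in the day was not to rsync the bricks at all and to run "gluster volume heal $vol full" instead. If bricks are synced by hand anyway, the usual shape of the command excludes the brick's private .glusterfs metadata tree (the symlink directory sohoo asked about) and preserves hardlinks and extended attributes; a rough sketch only, with hypothetical paths:

    # healing through gluster is the safer route; this is a last-resort sketch
    rsync -aHAX --exclude=.glusterfs /export/brickA/ node2:/export/brickA/
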
