
IRC log for #gluster, 2013-09-23


All times shown in UTC.

Time Nick Message
00:13 glusterbot New news from newglusterbugs: [Bug 1010747] cp of large file from local disk to nfs mount fails with "Unknown error 527" <http://goo.gl/yjrChU>
00:28 RichiH joined #gluster
02:03 davinder joined #gluster
02:17 tjikkun_work joined #gluster
02:17 bivak joined #gluster
02:20 zapotah joined #gluster
02:20 zapotah joined #gluster
02:26 kevein joined #gluster
02:31 harish joined #gluster
02:37 zapotah joined #gluster
03:16 kshlm joined #gluster
03:22 elyograg left #gluster
03:24 bulde joined #gluster
03:35 shubhendu joined #gluster
03:44 kanagaraj joined #gluster
03:47 sgowda joined #gluster
03:55 itisravi joined #gluster
04:03 saurabh joined #gluster
04:16 rcoup joined #gluster
04:26 ndarshan joined #gluster
04:29 satheesh joined #gluster
04:29 vpshastry joined #gluster
04:31 dusmant joined #gluster
04:34 shruti joined #gluster
04:40 DV joined #gluster
04:47 nshaikh joined #gluster
04:48 Durzo joined #gluster
04:49 Durzo Hi team, is there any update on the ETA for 3.4.1? I remember there was talk of it being 21/08 but that has long passed... I'm waiting on some geo-replication bug fixes - currently our geo-repl causes a huge memory leak and gsyncd needs to be restarted daily
04:59 aravindavk joined #gluster
04:59 kanagaraj joined #gluster
05:00 JoeJulian 3.4.1qa2 is available for testing.
05:04 Durzo JoeJulian, are you able to point me in the right direction? I have looked on the website but cannot find it - is it git only?
05:05 Durzo nvm i found it.. thx
05:05 JoeJulian You're welcome
05:05 Durzo actually http://download.gluster.org/pub/gluster/glusterfs/qa-releases/ only has qa1
05:05 glusterbot <http://goo.gl/LGV5s> (at download.gluster.org)
05:05 shylesh joined #gluster
05:06 JoeJulian Oh, I missed qa3...
05:07 JoeJulian http://bits.gluster.org/pub/gluster/glusterfs/3.4.1qa3/
05:07 glusterbot <http://goo.gl/GLDlgU> (at bits.gluster.org)
05:07 Durzo yikes rpms... any source or .debs ?
05:08 Durzo i guess semiosis is still on holidays ?
05:08 JoeJulian http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.1qa3.tar.gz
05:08 glusterbot <http://goo.gl/fTHV3k> (at bits.gluster.org)
05:09 Durzo thanks
05:09 JoeJulian More or less. He's still in New Orleans. He extended LinuxCon an extra few days.
05:09 Durzo fair enough
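For anyone building that qa tarball from source rather than waiting on packages, a minimal sketch using the standard autotools flow (the --prefix and the dependency list are illustrative assumptions, not official packaging instructions):

    wget http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.1qa3.tar.gz
    tar xzf glusterfs-3.4.1qa3.tar.gz && cd glusterfs-3.4.1qa3
    # assumes build deps (gcc, make, flex, bison, python, openssl/libxml2 headers) are installed
    ./configure --prefix=/usr
    make && sudo make install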
05:09 Durzo JoeJulian, would you agree that geo-repl could trigger this? http://review.gluster.org/#/c/5392/
05:09 glusterbot Title: Gerrit Code Review (at review.gluster.org)
05:10 JoeJulian I wish I'd had time to do that. Not a bad place to eat. :D
05:10 JoeJulian Seems possible.
05:11 dusmant joined #gluster
05:11 JoeJulian Well, even though it's still early, I'm trying to start adjusting my sleep pattern to US Central time so I'm heading to bed. Let us know how your tests turn out.
05:12 Durzo ok thanks, good night
05:17 davinder joined #gluster
05:17 lalatenduM joined #gluster
05:19 bala joined #gluster
05:19 raghu joined #gluster
05:20 zapotah joined #gluster
05:21 _pol joined #gluster
05:22 bulde joined #gluster
05:31 hagarth joined #gluster
05:32 _pol joined #gluster
05:38 lalatenduM joined #gluster
05:50 anands joined #gluster
05:52 ndarshan joined #gluster
05:52 mohankumar joined #gluster
05:55 hagarth joined #gluster
06:00 ppai joined #gluster
06:03 glusterbot New news from resolvedglusterbugs: [Bug 983362] Metadata locks, data locks in replicate should be in different domains <http://goo.gl/f4RdSl>
06:03 jglo joined #gluster
06:06 rastar joined #gluster
06:12 ndarshan joined #gluster
06:14 vshankar joined #gluster
06:15 CheRi joined #gluster
06:17 satheesh joined #gluster
06:29 meghanam joined #gluster
06:30 meghanam_ joined #gluster
06:30 hagarth joined #gluster
06:34 jtux joined #gluster
06:34 an joined #gluster
06:35 rcoup joined #gluster
06:43 shyam joined #gluster
07:00 shruti joined #gluster
07:00 ekuric joined #gluster
07:01 ricky-ticky joined #gluster
07:01 ngoswami joined #gluster
07:03 ctria joined #gluster
07:13 aib_007 joined #gluster
07:14 vpshastry1 joined #gluster
07:14 dusmant joined #gluster
07:15 glusterbot New news from newglusterbugs: [Bug 1010834] No error is reported when files are in extended-attribute split-brain state. <http://goo.gl/HlfVxX>
07:20 shruti joined #gluster
07:29 psharma joined #gluster
07:33 harish joined #gluster
07:36 hagarth joined #gluster
07:46 risibusy joined #gluster
07:48 odata joined #gluster
07:56 odata Hi, I have a setup of 4 bricks providing 5 replica volumes to one client: bricks 1 and 2 provide volumes 1-3, bricks 3 and 4 provide volumes 4-5. Now I have to change the IP setup of the bricks - I'm changing the VLAN and the IP network of the bricks. Is this possible without losing any data?
08:17 vpshastry joined #gluster
08:22 Norky yes. I believe you'll need to restart gluster on all of the servers after the changes have been made
08:23 Norky when you say losing data, do you mean "permanently losing data" or "access to data"?
08:23 Norky there will be an interruption in service
08:25 Norky joined #gluster
08:29 meghanam joined #gluster
08:29 meghanam_ joined #gluster
08:29 CheRi joined #gluster
08:36 X3NQ joined #gluster
08:39 vpshastry2 joined #gluster
08:44 hybrid5122 joined #gluster
08:47 mooperd joined #gluster
08:49 rastar joined #gluster
08:51 ThatGraemeGuy joined #gluster
08:53 Oneiroi joined #gluster
08:54 odata norky: I mean permanent loss of data; downtime is no problem. I was wondering if the replicas will still work after the IP settings of the bricks have changed
08:59 saurabh joined #gluster
09:04 shruti joined #gluster
09:15 glusterbot New news from newglusterbugs: [Bug 1010874] Dist-geo-rep : geo-rep config log-level option takes invalid values and makes geo-rep status defunct <http://goo.gl/om4qdi>
09:21 hagarth joined #gluster
09:31 ngoswami joined #gluster
09:34 Norky odata, yes they will. Make sure that everything (clients and servers) can correctly resolve hostnames to the new IP addresses.
09:34 odata norky: ok thanks
09:35 Norky I've got a couple of test servers which use DHCP. Everything works fine after an address change
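Norky's advice holds because gluster records peers and bricks by the names they were probed and created with, so a re-IP is transparent as long as those names resolve to the new addresses. A minimal sketch of how one might verify this before and after the change (volume name is hypothetical):

    # confirm peers and bricks are recorded by hostname, not raw IP
    gluster peer status
    gluster volume info vol1
    # after the VLAN/IP change: update DNS or /etc/hosts on every server and client,
    # then restart gluster on each server
    service glusterd restart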
09:39 meghanam joined #gluster
09:40 vshankar joined #gluster
09:41 meghanam_ joined #gluster
09:43 morse joined #gluster
09:43 bnh2 joined #gluster
09:44 bnh2 Hi All,
09:44 bnh2 I need help configuring volumes on glusterfs
09:45 sgowda joined #gluster
09:46 bnh2 My Question is "I have glusterfs set up on three servers and one of the servers has the client installed and mounted. Now I have another server which has a share called public_html and this share contains a lot of data. I would like to add this server to the glusterfs setup but have the data in public_html migrated and copied to the glusterfs servers."
09:46 bnh2 every time I try and mount public_html to the glusterfs server it only shows me the data I had on gluster previously, not what's in public_html
09:46 bnh2 any assistance would be much appreciated.
09:56 pkoro joined #gluster
09:58 Norky what exactly do you mean by "mount the public_html to glusterfs server"?
09:59 meghanam_ joined #gluster
10:01 meghanam joined #gluster
10:13 dusmant joined #gluster
10:14 sgowda joined #gluster
10:16 zapotah joined #gluster
10:16 zapotah joined #gluster
10:18 aib_007 joined #gluster
10:20 shubhendu joined #gluster
10:21 ndarshan joined #gluster
10:21 meghanam joined #gluster
10:22 kanagaraj joined #gluster
10:23 hagarth joined #gluster
10:25 aravindavk joined #gluster
10:31 eseyman joined #gluster
10:33 wgao joined #gluster
10:34 rwheeler joined #gluster
10:35 vshankar joined #gluster
10:40 ngoswami joined #gluster
10:46 saurabh joined #gluster
10:46 vpshastry1 joined #gluster
10:56 jtux joined #gluster
10:57 davinder joined #gluster
11:06 manik joined #gluster
11:12 y4m4 joined #gluster
11:21 Norky joined #gluster
11:21 bnh2 1
11:21 bnh2 geWorks SAN.
11:21 CheRi joined #gluster
11:21 bnh2 what I mean by mounting public_html is "my current glusterfs setup has three nodes and one of the nodes is set up as both server and client, so files are replicating fine. now I have a new server, clus4, that I would like to add to the volume I currently have. clus4 has a share called public_html which contains a large amount of data and I would like to replicate this data to the current glusterfs volume. is there a way of doing so?"
11:22 ppai joined #gluster
11:22 Norky well, yes, copy the contents of "public_html" to the mounted gluster volume
11:22 meghanam_ joined #gluster
11:23 bnh2 that will take days if not weeks as it's about 40+ TB of data
11:27 bnh2 any other suggestions?
11:28 Norky well if you want replication this 40+ TiB will have to be copied
11:28 meghanam_ joined #gluster
11:28 Norky were you thinking of using the existing public_html as a brick to make a new gluster volume?
11:28 bnh2 Thanks Norky, so that's the only way - I will have to copy this data to the new mounted share
11:29 bnh2 yes Norky that's correct
11:29 bnh2 Basically the data in the public_html folder is what I want replicated on all servers, even if I have to recreate the volume or build a new one.
11:29 harish joined #gluster
11:30 Norky I don't know what the expected behaviour is when you have a new, non-empty brick
11:30 Norky it might work to create a new volume
11:31 bnh2 let me try that and I will let you know.
11:31 bnh2 Thanks
11:31 Norky of course, if you want replication, you'll need at least one more 40+ TiB brick on another machine
11:31 Remco And keep in mind that the metadata will take up significant space too
11:32 Norky and if it does work, gluster will be copying that data to the other brick(s)
11:32 bnh2 yep the other three machines are bigger and have at least 20TB more space
11:32 bnh2 Will do this now and let you know here if it worked or not
11:32 bnh2 many thanks for your support
11:33 Norky I think you need the advice of someone more knowledgeable than me
11:33 Remco And most of all, test it in a separate setup
11:33 Remco Don't just try and hope it doesn't destroy your data
11:34 ndevos Remco: huh? metadata should not take much space at all, most is set in the xattrs of the files
11:35 Remco It doesn't always fit, depends on the backend
11:35 Remco You don't notice it, but it's there
11:35 Remco That's what I've seen anyway
11:35 ndevos sure, but the space it requires should be minimal
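For the route Norky and Remco recommend - a fresh replicated volume, with the data copied in through a client mount so gluster handles replication and xattr metadata itself - a minimal sketch with hypothetical host and brick names:

    # on one server: create and start a 2-way replicated volume
    gluster volume create webdata replica 2 clus1:/bricks/b1 clus2:/bricks/b1
    gluster volume start webdata
    # on clus4: mount the volume and copy public_html through the mount point
    mount -t glusterfs clus1:/webdata /mnt/webdata
    rsync -a --progress /path/to/public_html/ /mnt/webdata/

Copying through the mount is slow for 40+ TB, but it avoids the undefined behaviour of pre-populated bricks mentioned above.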
11:42 ppai joined #gluster
11:42 CheRi joined #gluster
11:45 bnh2 It has worked on my test network; even though it was just a few folders and some empty txt files, the replication seems to work if you create the volume from scratch
11:46 bnh2 I have one more question for you experts
11:49 bnh2 Can glusterfs do a read check? so when a user opens up a file and reads/adds to it, is there an option to enable that gets glusterfs to scan all nodes for changes, or check whether the file exists or not, and if not, copy it over
11:59 shruti joined #gluster
12:03 kanagaraj joined #gluster
12:04 shubhendu joined #gluster
12:04 ndarshan joined #gluster
12:05 aravindavk joined #gluster
12:11 bulde1 joined #gluster
12:22 eseyman joined #gluster
12:22 ctria joined #gluster
12:29 nasso joined #gluster
12:29 emil joined #gluster
12:30 vshankar_ joined #gluster
12:30 rc10 joined #gluster
12:31 rc10 hi, which is the preferred RAID level for gluster?
12:31 rc10 i have 3 servers each with  5 disks
12:35 Alpinist joined #gluster
12:38 bala joined #gluster
12:46 glusterbot New news from newglusterbugs: [Bug 957917] gluster create volume doesn't cleanup after its self if the create fails. <http://goo.gl/0AMjI>
12:47 itisravi joined #gluster
12:47 Remco rc10: I'd say one brick per disk and then a replicated distributed setup
12:48 Remco Can't really say what the best way is, haven't tested all of it
12:48 rc10 Remco: so each disk as a JBOD and have replication of 1?
12:49 bnh2 Can glusterfs do a read check? so when a user opens up a file and reads/adds to it, is there an option to enable that gets glusterfs to scan all nodes for changes, or check whether the file exists or not, and if not, copy it over
12:50 Remco rc10: That can work too
12:50 Remco I'm talking about 5 bricks per server
12:50 bnh2 also can I set one to be a master so changes made on the share don't affect the master, but changes done to the master affect all?
12:50 rc10 Remco: thanks
12:50 Remco Test to see what works for you, that's a lot more setup than jbod per server or a raid per server
12:52 rc10 I have millions of small files - and having RAID per server + gluster is going to slow down reads/writes
12:53 rc10 as gluster takes care of replication, I can avoid RAID controller ops
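rc10's layout (3 servers, 5 disks each, one brick per disk) gives 15 bricks; a replica count of 3 - one copy per server - divides evenly into 5 distribute subvolumes, since gluster groups each consecutive run of `replica` bricks into one replica set. A sketch with hypothetical hostnames and mount points, assuming each disk is formatted and mounted separately:

    gluster volume create smallfiles replica 3 \
      srv1:/bricks/d1 srv2:/bricks/d1 srv3:/bricks/d1 \
      srv1:/bricks/d2 srv2:/bricks/d2 srv3:/bricks/d2 \
      srv1:/bricks/d3 srv2:/bricks/d3 srv3:/bricks/d3 \
      srv1:/bricks/d4 srv2:/bricks/d4 srv3:/bricks/d4 \
      srv1:/bricks/d5 srv2:/bricks/d5 srv3:/bricks/d5
    gluster volume start smallfiles

Replica 2 would also work in principle, but 15 bricks do not divide evenly by 2, so a third copy per set (or an extra disk) keeps the layout clean.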
12:56 mohankumar joined #gluster
13:00 hagarth joined #gluster
13:02 bennyturns joined #gluster
13:11 jag3773 joined #gluster
13:18 jag3773 joined #gluster
13:18 ctria joined #gluster
13:23 ndk joined #gluster
13:27 jag3773 joined #gluster
13:31 _pol joined #gluster
13:31 _pol joined #gluster
13:31 anands joined #gluster
13:37 chirino joined #gluster
13:41 eseyman joined #gluster
13:43 jag3773 joined #gluster
13:47 meghanam joined #gluster
13:51 meghanam_ joined #gluster
13:52 vshankar joined #gluster
13:52 bulde joined #gluster
13:57 rcheleguini joined #gluster
14:02 bugs_ joined #gluster
14:03 meghanam joined #gluster
14:04 bnh2 Can glusterfs do a read check? so when a user opens up a file and reads/adds to it, is there an option to enable that gets glusterfs to scan all nodes for changes, or check whether the file exists or not, and if not, copy it over
14:04 bnh2 also can I set one to be a master so changes made on the share don't affect the master, but changes done to the master affect all?
14:08 dustin1 joined #gluster
14:14 meghanam_ joined #gluster
14:15 wushudoin joined #gluster
14:16 shylesh joined #gluster
14:18 jporterfield left #gluster
14:21 jag3773 joined #gluster
14:27 ekuric joined #gluster
14:29 ekuric joined #gluster
14:31 XpineX joined #gluster
14:31 meghanam joined #gluster
14:32 manik joined #gluster
14:34 jag3773 joined #gluster
14:43 jag3773 joined #gluster
14:46 chirino joined #gluster
14:48 jag3773 joined #gluster
14:55 meghanam_ joined #gluster
14:57 ababu joined #gluster
15:04 ekuric left #gluster
15:04 _pol joined #gluster
15:06 LoudNoises joined #gluster
15:08 sprachgenerator joined #gluster
15:09 jag3773 joined #gluster
15:14 GLHMarmot joined #gluster
15:20 Alpinist joined #gluster
15:21 lyang0 joined #gluster
15:22 anands joined #gluster
15:24 zerick joined #gluster
15:25 bnh2 Can glusterfs do a read check? so when a user opens up a file and reads/adds to it, is there an option to enable that gets glusterfs to scan all nodes for changes, or check whether the file exists or not, and if not, copy it over
15:25 bnh2 also can I set one to be a master so changes made on the share don't affect the master, but changes done to the master affect all?
15:26 rc10 joined #gluster
15:30 mohankumar joined #gluster
15:31 failshell joined #gluster
15:32 wushudoin joined #gluster
15:37 rc10 joined #gluster
15:38 kaptk2 joined #gluster
15:51 DataBeaver joined #gluster
15:51 jag3773 joined #gluster
16:00 zaitcev joined #gluster
16:05 lala joined #gluster
16:08 dneary joined #gluster
16:13 aravindavk joined #gluster
16:21 eseyman joined #gluster
16:22 semiosis :O
16:22 semiosis Durzo: you were looking for me?
16:25 hagarth :O
16:26 Mo_ joined #gluster
16:28 XpineX joined #gluster
16:29 meghanam joined #gluster
16:30 meghanam_ joined #gluster
16:38 l0uis semiosis: any eta on that glusterfs rebuild for 3.4 ?
16:41 SpeeR joined #gluster
16:43 SpeeR is there any benefit running raid1 SSDs for the boot disks on my gluster setup? Will offloading the cache to them help?
16:45 johnbot11 joined #gluster
16:45 jag3773 joined #gluster
16:51 kkeithley avati_: ping
16:51 aliguori joined #gluster
16:52 manik joined #gluster
17:05 shylesh joined #gluster
17:16 rwheeler joined #gluster
17:22 an joined #gluster
17:22 mgalkiewicz joined #gluster
17:23 mgalkiewicz hi, is it possible to downgrade glusterfs from 3.4.0 to 3.3.2?
17:23 shruti joined #gluster
17:24 nightwalk joined #gluster
17:26 meghanam joined #gluster
17:27 tryggvil joined #gluster
17:35 chirino joined #gluster
17:44 johnbot11 joined #gluster
17:47 edward1 joined #gluster
17:50 meghanam joined #gluster
17:53 manik joined #gluster
17:53 meghanam_ joined #gluster
17:57 [o__o] left #gluster
17:59 [o__o] joined #gluster
18:02 mooperd joined #gluster
18:04 pdrakewe_ hi, I'm running gluster in a 1x2 configuration (2 servers, each with 1 brick) and benchmarking 3.0 vs 3.3 with iozone.  so far, 3.0 seems faster (up to 2-3x faster) in most of the tests, is this expected?
18:05 pdrakewe_ dd seems to display roughly the same performance difference between 3.0 and 3.3 (with 3.0 2x faster than 3.3)
18:07 pdrakewe_ network tests between the servers show the bandwidth to be the same.  I (attempted to) set the performance tuning parameters the same between versions
18:10 an joined #gluster
18:12 JoeJulian SpeeR: Depends on your use case.
18:13 JoeJulian mgalkiewicz: yes
18:13 JoeJulian pdrakewe_: no. 3.0 is much slower than 3.3 and 3.0 has a lot of unfixed bugs that will never get fixed.
18:13 meghanam joined #gluster
18:14 SpeeR we will primarily be using it for 16MB wal files for postgres
18:14 SpeeR and logs files for systems
18:15 JoeJulian Seems likely that it would have a positive effect then. As with anything, test for your requirements and see if it meets them.
18:16 pdrakewe_ JoeJulian: ty for the confirmation.  given that iozone and dd both show the same performance drop, I'm presuming that my 3.3 setup is not ideally configured
18:17 JoeJulian I would make some other presumption, I just don't know what it would be. ;)
18:18 JoeJulian I don't believe in benchmarking tools unless they accurately reflect your use case.
18:18 edong23 joined #gluster
18:18 meghanam_ joined #gluster
18:18 rcheleguini joined #gluster
18:20 meghanam joined #gluster
18:22 pdrakewe_ agreed, some sort of benchmark that reflects our I/O profile would be ideal.  initially, I'm looking to get a general feeling of whether we have things properly configured for a fair comparison.  either dd and iozone both operate in a manner such that 3.0 falsely appears faster than 3.3, or something is misconfigured.
18:24 JoeJulian It would be interesting to get avati_'s take on that though.
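For a quick like-for-like number between the 3.0 and 3.3 mounts, a minimal dd sketch (mount path is hypothetical; conv=fsync forces the write to actually reach the bricks, and dropping caches keeps the read honest - run as root):

    # sequential write through the gluster mount
    dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=1024 conv=fsync
    # flush the client page cache, then read back
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/gluster/ddtest of=/dev/null bs=1M

As JoeJulian says, this only means something if large sequential I/O resembles the real workload.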
18:26 rotbeard joined #gluster
18:30 _pol_ joined #gluster
18:52 davinder joined #gluster
18:54 mooperd joined #gluster
18:58 daMaestro joined #gluster
18:59 _pol joined #gluster
19:07 rcheleguini joined #gluster
19:11 rcheleguini joined #gluster
19:15 semiosis l0uis: Real Soon Now
19:15 JoeJulian OH: "Documentation is like sex." "There's never enough?"
19:23 _pol joined #gluster
19:36 ricky-ticky joined #gluster
19:56 anands joined #gluster
19:58 tryggvil joined #gluster
20:02 _pol joined #gluster
20:17 zapotah joined #gluster
20:45 badone joined #gluster
20:54 jskinner_ joined #gluster
20:56 mooperd joined #gluster
21:06 anands joined #gluster
21:41 _pol joined #gluster
21:41 SpeeR joined #gluster
21:46 mooperd joined #gluster
21:55 rwheeler joined #gluster
21:56 RichiH what's the schedule for the glusterfs day after linuxcon?
21:57 JoeJulian I presume you're referring to the one in Scotland?
21:58 RichiH yes
21:58 RichiH sorry
21:58 JoeJulian No prob... Just making sure we're not in some time paradox since I just got back from one. :D
21:59 RichiH ;)
21:59 RichiH is there a guesstimate how long they usually take?
22:00 l0uis semiosis: k, thanks!
22:00 JoeJulian The one we just had went from 10:00 to 4:00
22:02 RichiH k
22:09 chirino joined #gluster
22:18 torrancew joined #gluster
22:21 torrancew Is there any notion of an "API" for gluster? I'm looking for a way to remotely query a node for its own UUID (before it's been joined to a cluster)
22:24 JoeJulian Hmm... Not really. There is an RPC that you could use for that though.
22:24 torrancew oh?
22:24 rcoup joined #gluster
22:24 JoeJulian Don't get too excited. The only documentation is by reading the source code or analyzing with wireshark.
22:25 torrancew Is it internally stable-ish, or something I can count on to change between releases?
22:25 JoeJulian ndevos: figured it out for wireshark, but I don't think he actually made any documentation about it.
22:25 JoeJulian It's stable and versioned.
22:25 torrancew oh cool
22:25 torrancew Well, let me step back for a minute
22:26 torrancew What I'm trying to solve, is to have puppet be able to automatically "recover" a node's bricks if needed
22:26 torrancew following http://gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server
22:26 glusterbot <http://goo.gl/hFwCcB> (at gluster.org)
22:26 JoeJulian ... and I'm only guessing you can get the uuid through the rpc as part of the peer probe handshaking.
22:26 torrancew But automating the UUID bit is tricky to say the least
22:26 JoeJulian puppet?
22:27 torrancew ya
22:27 torrancew The idea being that if a node has downtime, the next puppet run should be able to detect and perform the needed steps
22:27 JoeJulian https://forge.gluster.org/puppet-gluster
22:27 glusterbot Title: puppet-gluster - Gluster Community Forge (at forge.gluster.org)
22:27 JoeJulian Might want to start there.
22:28 torrancew So what we're caught up on is handling failure
22:28 torrancew yeah, we've studied that pretty heavily
22:28 torrancew but we can spin things up smoothly, we just want to automate the healing process, which is where I'm a bit stuck
22:28 JoeJulian James just did a lot of updates this last week.
22:29 torrancew (to clarify: spin up is already well captured in puppet)
22:29 JoeJulian One of the things he was working on was uuid handling.
22:29 torrancew oh
22:29 torrancew cool
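The UUID step from the linked Brick Restoration doc amounts to giving the replacement server the dead server's identity before it rejoins - the part torrancew's puppet run would need to automate. A sketch of the manual flow (the UUID value and volume name are illustrative):

    # on a surviving peer: note the failed server's UUID
    gluster peer status
    # on the rebuilt server: stop glusterd, then set that UUID in glusterd.info,
    # keeping any other lines in the file intact
    service glusterd stop
    sed -i 's/^UUID=.*/UUID=4e7d2c8a-1111-2222-3333-444444444444/' /var/lib/glusterd/glusterd.info
    service glusterd start
    # probe a surviving peer so the volume configs sync, then trigger a full heal
    gluster peer probe server1
    gluster volume heal myvol full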
22:41 _pol joined #gluster
22:52 nueces joined #gluster
22:59 nightwalk joined #gluster
23:09 chirino joined #gluster
23:16 Durzo semiosis, was just searching for a qa3 deb, but i can wait for final...
23:28 chirino joined #gluster
23:28 johnbot11 joined #gluster
23:30 kbsingh joined #gluster
23:38 mooperd joined #gluster
