
IRC log for #gluster, 2014-01-15


All times shown according to UTC.

Time Nick Message
00:04 dan_ joined #gluster
00:11 dan____ joined #gluster
00:12 DanMons joined #gluster
00:26 robo joined #gluster
00:39 mattappe_ joined #gluster
00:43 mattappe_ joined #gluster
00:56 jporterfield joined #gluster
00:58 mattappe_ joined #gluster
01:09 hagarth joined #gluster
01:12 mattapperson joined #gluster
01:16 mattappe_ joined #gluster
01:19 jporterfield joined #gluster
01:25 shyam joined #gluster
01:26 sprachgenerator joined #gluster
01:33 TrDS left #gluster
01:52 harish joined #gluster
02:06 _pol joined #gluster
02:26 gmcwhistler joined #gluster
02:29 trmpet1 joined #gluster
02:29 gmcwhistler joined #gluster
02:32 parad1se_ joined #gluster
02:33 bharata-rao joined #gluster
02:38 harish joined #gluster
02:45 bala joined #gluster
06:32 ilbot3 joined #gluster
06:32 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
06:39 glusterbot New news from resolvedglusterbugs: [Bug 1024369] Unable to shrink volumes without dataloss <https://bugzilla.redhat.com/show_bug.cgi?id=1024369>
06:39 MrNaviPacho joined #gluster
06:40 bala joined #gluster
06:41 glusterbot New news from newglusterbugs: [Bug 1053362] Fix bug-858488-min-free-disk.t <https://bugzilla.redhat.com/show_bug.cgi?id=1053362>
06:43 pk joined #gluster
06:45 vimal joined #gluster
06:57 raghug joined #gluster
06:59 bala joined #gluster
07:05 Philambdo joined #gluster
07:15 raghug joined #gluster
07:18 ngoswami joined #gluster
07:19 jtux joined #gluster
07:33 ctria joined #gluster
08:05 mbukatov joined #gluster
08:10 raghug joined #gluster
08:10 wd_ joined #gluster
08:10 wd_ hello
08:10 glusterbot wd_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:10 eseyman joined #gluster
08:12 keytab joined #gluster
08:14 wd_ I am trying to set up a 2 node mirror with glusterfs 3.2.7. So far this works, however, i have some existing data on one of the bricks, which i would like to import
08:14 wd_ i've searched the mailing list, and it came up with a link: http://gluster.org/docs/index.php/Setting_up_AFR_on_two_servers_with_pre-existing_data
08:14 glusterbot Title: GlusterFS (at gluster.org)
08:14 wd_ this is however not accessible any more
08:15 wd_ is importing data possible with my version of glusterfs?
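[The dead wiki link above described the classic "AFR with pre-existing data" recipe. A hedged sketch of that approach, per the 3.2-era documentation: create the replica volume listing the populated brick, mount it, and walk the mount to trigger self-heal. Hostnames, volume name, and brick paths below are hypothetical; verify the exact syntax against the docs for your GlusterFS version.]

```shell
# server1:/export/brick already holds the existing data;
# server2:/export/brick is empty.
gluster volume create myvol replica 2 transport tcp \
    server1:/export/brick server2:/export/brick
gluster volume start myvol

# Mount the volume and stat every file so the 3.2-era self-heal
# copies the pre-existing files onto the empty replica.
mount -t glusterfs server1:/myvol /mnt/myvol
find /mnt/myvol -print0 | xargs -0 stat > /dev/null
```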
08:16 blook joined #gluster
08:22 clag_ joined #gluster
08:22 hagarth joined #gluster
08:26 64MABA0RW joined #gluster
08:26 20WAAVO7V joined #gluster
08:31 ProT-0-TypE joined #gluster
08:33 andreask joined #gluster
08:43 NeatBasis joined #gluster
08:44 shyam joined #gluster
08:46 mgebbe_ joined #gluster
08:46 hagarth joined #gluster
08:46 45PAA24O6 joined #gluster
08:50 cyberbootje joined #gluster
08:51 TrDS joined #gluster
08:55 benjamin__ joined #gluster
08:56 pk wd_: 3.2.7 is old and most probably not supported....
08:56 pk wd_: why not try 3.4.x or at least 3.3.x
08:58 Rocky__ left #gluster
09:00 shylesh joined #gluster
09:06 raghu joined #gluster
09:09 clag_ left #gluster
09:19 navid__ joined #gluster
09:21 khushildep joined #gluster
09:22 ngoswami joined #gluster
09:25 KORG joined #gluster
09:27 tryggvil joined #gluster
09:27 dusmant joined #gluster
09:28 kanagaraj joined #gluster
09:38 rjoseph1 joined #gluster
09:45 KORG joined #gluster
09:46 KORG joined #gluster
09:47 keytab joined #gluster
09:48 NeatBasis joined #gluster
09:54 TrDS left #gluster
09:54 overclk joined #gluster
09:54 complexmind joined #gluster
09:58 NeatBasis joined #gluster
10:00 bala joined #gluster
10:06 NeatBasis joined #gluster
10:09 nshaikh joined #gluster
10:14 NeatBasis joined #gluster
10:14 overclk joined #gluster
10:22 sac joined #gluster
10:24 rjoseph joined #gluster
10:33 dusmant joined #gluster
10:36 raghug joined #gluster
10:46 eightyeight joined #gluster
10:59 xavih joined #gluster
11:02 kshlm joined #gluster
11:05 harish joined #gluster
11:10 pkoro joined #gluster
11:21 khushildep joined #gluster
11:25 TonySplitBrain joined #gluster
11:28 hybrid512 joined #gluster
11:35 calum_ joined #gluster
11:39 ProT-0-TypE joined #gluster
11:46 khushildep_ joined #gluster
11:52 diegows joined #gluster
11:53 shyam joined #gluster
11:53 diegows joined #gluster
11:55 theron joined #gluster
11:57 qdk joined #gluster
11:59 itisravi_ joined #gluster
12:03 edward2 joined #gluster
12:09 KORG|2 joined #gluster
12:09 ababu joined #gluster
12:14 khushildep_ joined #gluster
12:18 CheRi joined #gluster
12:20 ppai joined #gluster
12:24 khushildep_ joined #gluster
12:25 anands joined #gluster
12:29 micu2 joined #gluster
12:29 micu2 left #gluster
12:31 micu2 joined #gluster
12:34 pk left #gluster
12:40 qdk joined #gluster
12:41 dusmant joined #gluster
12:42 glusterbot New news from newglusterbugs: [Bug 1049727] Dist-geo-rep : volume won't be able to stop untill the geo-rep session is deleted. <https://bugzilla.redhat.com/show_bug.cgi?id=1049727>
12:54 hagarth joined #gluster
12:56 boholazzler joined #gluster
12:58 khushildep_ joined #gluster
12:59 qdk joined #gluster
13:01 boholazzler anyone had problems with georeplication to non gluster mount? get error: line 25, in raise_oserr
13:01 boholazzler raise OSError(errn, os.strerror(errn))
13:04 benjamin__ joined #gluster
13:05 flrichar joined #gluster
13:08 theron joined #gluster
13:10 B21956 joined #gluster
13:16 khushildep_ joined #gluster
13:17 boomertsfx joined #gluster
13:19 rjoseph joined #gluster
13:20 qdk joined #gluster
13:26 khushildep_ joined #gluster
13:29 ira joined #gluster
13:30 ira joined #gluster
13:37 qdk joined #gluster
13:41 boholazzler so does geo replication to a non gluster volume require xattrs on filesystem?
13:44 tryggvil joined #gluster
13:44 sroy_ joined #gluster
13:47 tryggvil joined #gluster
13:54 robo joined #gluster
13:56 ndevos boholazzler: yes
13:56 bennyturns joined #gluster
14:02 mohankumar joined #gluster
14:04 boholazzler thanks ndevos
14:05 boholazzler the slave is actually a gluster volume, but have been trying to mount as nfs to improve performance
14:08 boholazzler was originally replicating to raw brick ( was a fair bit quicker) rather than the fuse.glusterfs mount
14:08 boholazzler but apparently that is bad practice
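[Since geo-replication needs extended-attribute support on the slave filesystem (per ndevos above), a quick hedged probe: try writing and reading back a user xattr. Requires the setfattr/getfattr tools from the "attr" package; /path/to/slave is a hypothetical stand-in for the geo-rep target.]

```shell
# Probe a target filesystem for extended-attribute support.
touch /path/to/slave/.xattr-probe
if setfattr -n user.glustertest -v ok /path/to/slave/.xattr-probe 2>/dev/null \
   && getfattr -n user.glustertest /path/to/slave/.xattr-probe >/dev/null 2>&1; then
    echo "xattrs supported"
else
    echo "xattrs NOT supported"
fi
rm -f /path/to/slave/.xattr-probe
```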
14:10 tor left #gluster
14:13 plarsen joined #gluster
14:15 khushildep_ joined #gluster
14:16 qdk joined #gluster
14:17 psyl0n joined #gluster
14:17 jskinner_ joined #gluster
14:17 psyl0n joined #gluster
14:20 r0b joined #gluster
14:22 boomertsfx semiosis, you around?
14:28 jruggiero joined #gluster
14:31 aixsyd https://i.imgur.com/zkkGCZm.jpg  <-- hate
14:32 psyl0n joined #gluster
14:32 psyl0n joined #gluster
14:36 dbruhn joined #gluster
14:42 boomertsfx aixsyd, I blame Obama
14:43 dbruhn uh oh....
14:43 dbruhn aixsyd, whats going on this morning?
14:43 raghug joined #gluster
14:44 aixsyd dbruhn: same stuff every day
14:44 aixsyd i'm more or less struggling with one of my servers - non-gluster related
14:44 dbruhn Ahh ok
14:44 aixsyd well, semi-gluster related
14:44 theron joined #gluster
14:44 boomertsfx dbruhn, that issue I had from yesterday about the split brain directories... tried to fix it with the gfid stuff and couldn't... ended up just mkdir xx.new; mv old/* xx.new; rmdir old; mv xx.new old  and it went away...
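[boomertsfx's directory-rename workaround can be reproduced with plain filesystem operations. A minimal sketch with stand-in paths: copy the contents aside, remove the directory that carries the stale gfid xattr, then rename the copy back into place.]

```shell
set -e
cd "$(mktemp -d)"
mkdir broken && touch broken/file1 broken/file2   # stand-in for the split-brain dir
mkdir broken.new
mv broken/* broken.new/
rmdir broken          # drops the directory (and, on a real brick, its gfid xattr)
mv broken.new broken
ls broken             # the contents survive under the original name
```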
14:44 dbruhn Did you figure out what nuked your filesystems
14:45 aixsyd dbruhn: nupe. But I have a good feeling its this server
14:45 aixsyd read and writes are super duper slow. like, sub megabyte per second slow. 4.5 hour windows 7 VM install on it.
14:45 dbruhn :/ gross
14:46 aixsyd tell me about it. i've eliminated network, hard drive, and glusterfs as the source of the issue - so its either mem, cpu, mobo, or something else
14:47 ells joined #gluster
14:47 aixsyd a cpu benchmark on this server is 2x as slow as the other 4
14:48 aixsyd then, i get this error: https://i.imgur.com/zkkGCZm.jpg and the OS wont boot or even show up as a bootable drive
14:49 dbruhn sounds like something corrupting your file systems
14:49 aixsyd that error happens on the problem server, not glusterfs
14:49 aixsyd but yes
14:50 aixsyd its very possible that its a RAM issue - but memtesting 32gigs...
14:50 dbruhn That's a trip to the coffee shop on the other side of town and a three course lunch'
14:51 boomertsfx memory wouldn't slow down the cpu benchmark I wouldn't think.. maybe thermal issue
14:51 aixsyd boomertsfx: temps seem in line with the other 4
14:52 aixsyd thankfully, i'm within a 90 day warranty period
14:52 boomertsfx wow, it's new?
14:52 aixsyd no, second hand ebay
14:52 aixsyd 90 day warranty though ;)
14:53 japuzzo joined #gluster
14:53 boomertsfx good times
14:53 boomertsfx what server is it
14:53 aixsyd Dell PowerEdge 2950
14:53 boomertsfx i got a cheap dell 1u on the ebayz
14:53 dbruhn I've had good luck with grey market ebay vendors, I actually found one of them local to me, that dude take great care of me
14:53 aixsyd problem is, this vendor is out of the same server/same specs
14:54 boomertsfx http://ft.trillian.im/602a98daf274474e31226e8c26373e1e14229141/6mEamkkQ8vi22n7JbCsZO8wfn7nkW.jpg  cheeep
14:54 aixsyd i might look into the R700's they have. 4x quad core xeons with 64gb ram =O
14:55 dbruhn how much ram/proc/ and storage do you need?
14:55 aixsyd its running VM's, so as many as we can get
14:55 aixsyd er, as much
14:56 aixsyd http://www.ebay.com/itm/DELL-POWEREDGE-R900-QUAD-4-2-4-E7330-QC-32GB-PERC-6I-DRAC-RPS-BEZEL-/201019477459?pt=COMP_EN_Servers&hash=item2ecdb1cdd3   <-- FAP
14:56 glusterbot Title: Dell PowerEdge R900 Quad 4 2 4 E7330 QC 32GB PERC 6i DRAC RPS Bezel | eBay (at www.ebay.com)
14:56 dbruhn As much as you can get can get expensive ;) I know where some 64 core machines, with 1/2tb of ram are in a single u.
14:57 kdhananjay joined #gluster
14:58 dbruhn othardware.com, sales guy's name is Mike Meshbesher, he always takes care of me, and gives me a three year warranty on anything I buy from him. He might be able to find what you need.
14:58 pk joined #gluster
14:58 tdasilva joined #gluster
14:59 aixsyd hmm
15:00 aixsyd might give him a call. we'll need more than this down the line in a quarter or two - but i figure RMA this problem server and get something comparable or better at this ebay vendors cost
15:01 zaitcev joined #gluster
15:01 dbruhn Yep, just giving you a vendor from ebay that i've had good luck with.
15:01 aixsyd :) :)
15:01 dbruhn jclift has directed me to a few along the way too
15:02 aixsyd jclift is a bro
15:03 dbruhn Good dude for sure
15:05 gmcwhistler joined #gluster
15:06 rjoseph joined #gluster
15:06 rjoseph1 joined #gluster
15:15 raghug joined #gluster
15:15 blook joined #gluster
15:16 jbrooks joined #gluster
15:17 jclift dbruhn aixsyd: tx. :)
15:18 aixsyd jclift: so im 99% sure i have faulty server hardware - causing the IB and many other issues
15:18 aixsyd a kernel bug just nuked my partition table
15:19 aixsyd sum MB/s ethernet reads and wrtied
15:19 rwheeler joined #gluster
15:19 aixsyd *writes
15:19 aixsyd **sub - i fail
15:24 jclift aixsyd: Damn servers.  If we weren't paid to use them... :)
15:25 kaptk2 joined #gluster
15:25 dbruhn I keep threatening to hop on my bicycle, rent out my house, and throw my phone and computer off the first bridge I come to!
15:26 dbruhn Then I ride my bike 10 miles to work in the snow, and realize it's 0 degrees out today, and computers are warm.
15:28 bala joined #gluster
15:29 andreask joined #gluster
15:31 rwheeler joined #gluster
15:36 rastar joined #gluster
15:37 jbrooks jclift: Is there another gluster test day scheduled before 3.5 comes out
15:37 jobewan joined #gluster
15:38 hagarth jbrooks: this weekend
15:38 jclift jbrooks: I only know about the one for this weekend.
15:38 jclift jbrooks: There is a timeline here: http://www.gluster.org/community/documentation/index.php/Planning35
15:38 glusterbot Title: Planning35 - GlusterDocumentation (at www.gluster.org)
15:39 jclift jbrooks: More could be slotted in, if they're needed and there's enough interest/manpower
15:39 jbrooks Ah, thanks
15:41 rjoseph1 joined #gluster
15:42 kkeithley we'll have 3.5.0beta1 packages for at least Fedora and RHEL/CentOS available on download.gluster.org soon.
15:42 jbrooks Cool, I want to write a post about it for our community.redhat.com blog
15:45 hagarth1 joined #gluster
15:45 kkeithley I'll announce the RPMs here when they're ready
15:46 kkeithley and on the gluster-users and gluster-devel mailing lists
15:50 bugs_ joined #gluster
15:58 pk left #gluster
16:06 hybrid512 joined #gluster
16:12 pk1 joined #gluster
16:12 pk1 left #gluster
16:13 glusterbot New news from newglusterbugs: [Bug 1053670] "compress" option name for over-wire-compression is extremely misleading and should be changed <https://bugzilla.redhat.com/show_bug.cgi?id=1053670> || [Bug 1049981] 3.5.0 Tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1049981>
16:15 skered joined #gluster
16:16 raghug joined #gluster
16:17 hagarth joined #gluster
16:19 daMaestro joined #gluster
16:24 raghug joined #gluster
16:25 khushildep joined #gluster
16:28 aixsyd dbruhn: wow - i sysbench 5 identical servers for CPU power - same tests results in 2.7s on 4, 16s on the 5th
16:29 aixsyd each time i run it, it varys between 12s and 20s. the other 4 are a consistent 2.7s
16:30 shyam joined #gluster
16:35 TonySplitBrain_ joined #gluster
16:35 davinder joined #gluster
16:42 dbruhn wow, is that the same server that was having the same slow speeds in your IB testing?
16:45 shyam1 joined #gluster
16:46 harish joined #gluster
16:52 aixsyd dbruhn: yep
16:53 aixsyd i just changed the thread count for sysbench from 8 to 16 - all 4 test again at exactly 2.72s each time its ran, problem server #5 - now at 30s
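[The comparison aixsyd describes matches sysbench's CPU test (0.4-era syntax, current in early 2014). The prime limit below is illustrative; run the identical command on each server and compare the reported "total time".]

```shell
# CPU benchmark of the kind being compared across the five servers.
sysbench --test=cpu --cpu-max-prime=20000 --num-threads=16 run
```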
16:55 dbruhn Sounds like you've found what was plaguing you all this time
16:56 aixsyd however - post doesnt catch any CPU errors
16:56 aixsyd gotta be a mobo issue
16:59 theron joined #gluster
17:00 aixsyd gonna run prime95 on it a while
17:01 ndk joined #gluster
17:20 mohankumar joined #gluster
17:23 lpabon_ joined #gluster
17:25 zerick joined #gluster
17:26 theron joined #gluster
17:26 aixsyd dbruhn: semiosis JoeJulian jclift  - i know one of ya'll will have an opinion on this - im planning to run a file server VM with Glusterfs for storage - would it be better for me to directly mount glusterfs for direct file writing (lots of small to medium sized files) or create a large single VM disk for the VM to mount/write files to? The concern mainly is file safety, and then performance
17:27 aixsyd from what i understand, gluster likes large files more than lots of small ones
17:32 Mo_ joined #gluster
17:32 diegows joined #gluster
17:33 SFLimey joined #gluster
17:36 jruggiero left #gluster
17:37 lpabon joined #gluster
17:40 jbrooks joined #gluster
17:43 theron joined #gluster
17:48 SpeeR joined #gluster
17:53 sghosh joined #gluster
18:10 aixsyd huh. no ones got an opinion
18:11 jbrooks left #gluster
18:13 nage joined #gluster
18:16 ProT-0-TypE joined #gluster
18:18 jbrooks joined #gluster
18:21 tdasilva joined #gluster
18:21 skered left #gluster
18:27 glusterbot` joined #gluster
18:27 rotbeard joined #gluster
18:27 tdasilva joined #gluster
18:31 erik49__ joined #gluster
18:32 TrDS joined #gluster
18:38 RedShift joined #gluster
18:42 tdasilva left #gluster
18:43 complexmind joined #gluster
18:45 avati joined #gluster
18:49 jbrooks joined #gluster
19:00 Mo___ joined #gluster
19:01 jbrooks left #gluster
19:05 jbrooks joined #gluster
19:12 jbrooks left #gluster
19:12 jbrooks joined #gluster
19:23 tdasilva joined #gluster
19:52 kkeithley 3.5.0beta1 RPMs available for testing. RPMs are in the YUM repos at http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.0beta1/
19:52 glusterbot Title: Index of /pub/gluster/glusterfs/qa-releases/3.5.0beta1 (at download.gluster.org)
19:53 kkeithley Watch for the announcement of the "Gluster Test Weekend" coming this  weekend, and remember, you heard it here first, unless you heard it here second. And feel free to test it  without waiting for the announcement.
19:54 kkeithley That's RPMs for Fedora, RHEL6, and CentOS6.
19:57 aixsyd NO DEB LOVE :(
19:57 aixsyd caps
19:57 kkeithley semiosis is pedaling as fast as he can.
19:57 kkeithley But yeah, we're working on taking some of the load off semiosis.
19:58 dbruhn aixsyd, sorry I've been AFK. Since you are in the middle of testing try both, but a lot of people use it as a file server, what kind of clients?
20:01 semiosis ?!
20:02 primusinterpares joined #gluster
20:04 semiosis aixsyd: i'll make the debs today/tonight
20:04 semiosis but not immediately
20:15 jbrooks left #gluster
20:19 mattappe_ joined #gluster
20:29 complexmind joined #gluster
20:29 jbrooks joined #gluster
20:33 _pol joined #gluster
20:40 Philambdo joined #gluster
20:45 rotbeard joined #gluster
20:59 psyl0n joined #gluster
21:03 JoeJulian aixsyd: I do that myself. My preference is a volume for the file server that's mounted within the VM.
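[JoeJulian's preference, in sketch form: a dedicated volume mounted with the native client inside the file-server VM. Hostnames and paths are hypothetical; the backupvolfile-server option lets the client fetch the volfile from the other replica if the first server is down.]

```shell
mount -t glusterfs -o backupvolfile-server=server2 server1:/filesvol /srv/files

# Or persistently via /etc/fstab:
# server1:/filesvol  /srv/files  glusterfs  defaults,_netdev,backupvolfile-server=server2  0 0
```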
21:12 mattappe_ joined #gluster
21:26 neofob joined #gluster
21:28 Philambdo joined #gluster
21:30 andreask joined #gluster
21:31 tdasilva left #gluster
21:33 Philambdo joined #gluster
21:41 Philambdo joined #gluster
21:47 Philambdo joined #gluster
21:50 nage joined #gluster
21:50 nage joined #gluster
21:51 nage joined #gluster
21:53 theron joined #gluster
22:09 sroy_ joined #gluster
22:27 geewiz joined #gluster
22:28 nage joined #gluster
22:28 geewiz Hi there! Is there something special I need to look after when upgrading from 3.3.2 to 3.4.2?
22:33 nage joined #gluster
22:33 nage joined #gluster
22:33 dbruhn You need to restart all of the gluster processes, the brick processes do not restart automatically on upgrade
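[dbruhn's point as a hedged per-server sketch, assuming Debian/Ubuntu packaging (geewiz is on the Ubuntu PPA packages). On a replica volume, upgrade one node at a time and let self-heal finish before moving on; exact service names can vary by distro.]

```shell
service glusterfs-server stop        # stops glusterd
killall glusterfsd glusterfs || true # brick/client processes don't restart on upgrade
apt-get update && apt-get install glusterfs-server glusterfs-client
service glusterfs-server start
gluster volume heal myvol info       # "myvol" is a placeholder; wait for heals to finish
```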
22:48 tryggvil joined #gluster
22:49 nueces joined #gluster
22:57 _pol joined #gluster
23:03 sulky joined #gluster
23:18 ilbot3 joined #gluster
23:18 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
23:18 geewiz semiosis: But I'm upgrading _from_ 3.3.2 to 3.4.2!?
23:18 semiosis joined #gluster
23:18 lanning joined #gluster
23:19 geewiz I've already read http://vbellur.wordpress.com/2013/07/15/upgrading-to-glusterfs-3-4/
23:19 geewiz Now I wonder if it's really that easy and what could go wrong.
23:22 harish joined #gluster
23:24 semiosis joined #gluster
23:27 semiosis_ joined #gluster
23:27 semiosis_ geewiz: see ,,(3.3 upgrade notes)
23:27 glusterbot geewiz: http://vbellur.wordpress.com/2012/05/31/upgrading-to-glusterfs-3-3/
23:27 semiosis_ geewiz: see also ,,(3.4 upgrade notes)
23:27 glusterbot geewiz: http://vbellur.wordpress.com/2013/07/15/upgrading-to-glusterfs-3-4/
23:28 geewiz So I should be ok if I shut down Gluster on both nodes, do the upgrade and start the new version, right?
23:28 semiosis_ read the docs
23:29 semiosis_ iirc there are one or two more steps involved
23:29 semiosis joined #gluster
23:29 semiosis possibly depending on your distro
23:29 geewiz I did read the docs. We're using your Ubuntu packages.
23:29 semiosis neat
23:30 geewiz Thanks Louis, BTW. Your PPAs are a great resource.
23:30 semiosis glad to hear it :)
23:30 geewiz I'm just a bit anxious because it's an important file server we're going to upgrade.
23:31 semiosis set up a machine or vm to test the upgrade
23:31 semiosis go through the upgrade once or twice on a test setup before upgrading prod
23:33 geewiz Makes sense.
23:45 TrDS left #gluster
23:46 _pol joined #gluster
23:49 complexmind joined #gluster
