
IRC log for #gluster, 2012-11-15


All times shown according to UTC.

Time Nick Message
00:14 simcen Hi guys. Is there a way to identify all files affected by a split-brain?
00:15 semiosis @check script
00:16 semiosis where did i put that link
00:16 semiosis never tried this but i think it would work...
00:16 semiosis http://joejulian.name/blog/quick-and-dirty-python-script-to-check-the-dirty-status-of-files-in-a-glusterfs-brick/
00:16 glusterbot <http://goo.gl/grHFn> (at joejulian.name)
00:17 JoeJulian gluster volume heal $vol info split-brain
00:17 semiosis that should show you all the files with unsynced changes on a brick... so if you run that on both replica bricks
00:17 semiosis then any files appearing dirty on both are split brain
00:17 semiosis JoeJulian: or that
00:17 semiosis <-- old sckool
00:18 simcen unfortunately I'm affected by the crash bug in 3.3.0 so gluster volume heal $vol info split-brain doesn't help me
00:18 simcen in this case I'm thankful for the script, semiosis
00:18 JoeJulian That was fixed in 3.3.1 btw
00:18 simcen I know
00:18 simcen but I'm afraid of upgrading before solving the split-brain
00:18 JoeJulian And upgrading won't make anything worse. I'm certain of that.
00:18 simcen oh
00:19 simcen good to know, thanks
00:19 JoeJulian In fact, I personally would upgrade before going any further. There were some bugs that make fixing split-brain in 3.3.0 really frustrating.
00:20 simcen ok
00:20 JoeJulian For instance, once you've fixed the split brain, the 3.3.0 client has to be re-mounted to be able to access the file again.
00:21 simcen yes that's true. I already ran into that
00:22 simcen actually, it doesn't matter, we had to stop our environment anyway, but we're thinking about upgrading right now
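The brick-comparison approach semiosis describes (run the dirty-file check on each replica brick, then intersect the results) can be sketched in shell. The list file names below are illustrative; each list would be produced per brick, e.g. by the linked check script:

```shell
# Files that appear dirty on BOTH replica bricks are split-brain
# candidates. comm -12 prints only lines common to both sorted lists.
find_split_brain() {
    local list1=$1 list2=$2
    comm -12 <(sort -u "$list1") <(sort -u "$list2")
}
```

On a working 3.3.1 install the simpler route is JoeJulian's `gluster volume heal $vol info split-brain`.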
01:00 inodb_ left #gluster
01:15 redsolar joined #gluster
01:16 redsolar_office joined #gluster
01:17 simcen At the risk of being marked a noob, is there a smart way to upgrade to 3.3.1 on debian wheezy? we tried adding the official repo but it fails since it cannot find the i386 packages
01:20 redsolar joined #gluster
01:28 redsolar_office joined #gluster
01:31 lng joined #gluster
01:34 lng Hi! When I `/etc/init.d/glusterfs-server restart` or `service glusterfs-server restart`, I get 'stop: Unknown instance: start: Job failed to start'. Why?
01:38 robo joined #gluster
01:42 redsolar joined #gluster
01:44 JoeJulian @ppa
01:44 glusterbot JoeJulian: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
01:46 JoeJulian lng: Why do you have upstart?
01:51 lng JoeJulian: why not?
01:52 lng http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/Ubuntu/Ubuntu.README
01:52 glusterbot <http://goo.gl/AvrSq> (at download.gluster.org)
01:52 lng I have taken it from here
01:55 lng so I installed the same thing
01:56 lng I used `add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.3`
01:57 * semiosis ducks
01:58 semiosis lng: check your glusterd log, usually /var/log/glusterfs/etc-glusterfs-glusterd.log iirc
01:59 lng /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
01:59 lng semiosis: what should I look for?
01:59 semiosis problems
02:00 semiosis if it failed to start, log might say why
02:00 lng semiosis: what is the bottleneck for Gluster on EC2? I can see CPU utilization is about 20% but the load average might be 7
02:00 semiosis you can pastie logs
02:00 redsolar_office joined #gluster
02:00 lng semiosis: disk io?
02:00 semiosis could be many things
02:00 lng ok, let me check
02:01 lng semiosis: do you have any guide on that?
02:01 semiosis no
02:01 semiosis @canned ebs rant
02:01 glusterbot semiosis: http://goo.gl/GJzYu
02:01 semiosis closest thing i guess
02:01 semiosis not reall though
02:01 semiosis really*
02:01 lng thanks!
02:02 semiosis yw
02:05 lng http://paste.ubuntu.com/1359392/
02:05 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
02:05 lng I have an issue there
02:07 lng but maybe these errors appeared because I executed this command? `killall glusterfsd; killall -9 glusterfsd; killall glusterd; glusterd`
02:08 nightwalk joined #gluster
02:09 JoeJulian probably.
02:10 JoeJulian the script can't kill glusterd because it doesn't have the pid. It can't start it because it's already running...
02:11 lng then why can I successfully execute `/etc/init.d/glusterd restart`?
02:11 semiosis lng: you should not even have /etc/init.d/glusterd
02:12 lng semiosis: it might be left over from a previous install
02:12 lng -rwxr-xr-x 1 root root 2132 Jun  4 06:10 /etc/init.d/glusterd
02:12 lng it is very old
02:12 semiosis could you pastie the last 100 lines of the log, not just the E lines, please
02:12 sunus joined #gluster
02:12 lng sure
02:12 lng one sec
02:13 esm_ joined #gluster
02:13 semiosis looks like your old glusterd is running so your new glusterd (glusterfs-server) can't start
02:13 semiosis see ,,(3.3 upgrade notes) if you need to move config from /etc/glusterd (old) to the new /var/lib/glusterd
02:13 glusterbot http://goo.gl/qOiO7
02:14 kevein joined #gluster
02:14 lng i have upgraded from 3.3.0 to 3.3.1
02:14 semiosis hm
02:14 lng minor version
02:14 semiosis but diff packages :/
02:14 lng ah, I see
02:19 redsolar joined #gluster
02:20 lng http://paste.ubuntu.com/1359409/
02:20 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
02:23 lng what would you suggest I do?
02:23 semiosis well that looks successful... but was that from when you started the old version?
02:24 semiosis make a backup of your /var/lib/glusterd directory, just in case... then remove the old glusterd package
02:24 semiosis check if removing that package wiped out /var/lib/glusterd, and if it did, replace the missing directory from the backup you just made
02:24 semiosis then the new one should work, i hope
02:25 lng semiosis: I upgraded it a few days ago
02:25 lng this log is new
02:26 semiosis so are you able to do service glusterfs-server start?
02:27 lng start: Job failed to start
02:27 lng ...then remove the old glusterd package
02:27 semiosis yeah, after making a backup copy of /var/lib/glusterd
02:27 lng I think I removed it with dpkg -r
02:28 lng yes, it was removed by `dpkg -r glusterfs`
02:28 lng then I executed `apt-get install glusterfs-server`
02:29 lng semiosis: do you think I still need to remove something from older install?
02:29 semiosis idk anymore
02:30 lng oh
02:30 semiosis if you still have /etc/init.d/glusterd, then yeah that needs to be removed
02:30 semiosis but it should have gone away when you removed the glusterd package
02:30 lng maybe I should reinstall it?
02:31 lng purge existing installation
02:31 semiosis starting over should work :)
02:31 lng semiosis: what do you mean by starting over?
02:32 semiosis backup your /var/lib/glusterd, remove all gluster packages, kill all gluster processes, reinstall glusterfs-server 3.3.1, stop the service, restore /var/lib/glusterd, start service
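semiosis's start-over sequence could be sketched as a script. This is a hedged outline, not a tested procedure: the package names match the Ubuntu PPA packages mentioned in the channel, the backup path is arbitrary, and it should be reviewed before running on a real server.

```shell
# Start-over procedure for glusterfs-server 3.3.1 (sketch; review first).
reinstall_gluster_33() {
    cp -a /var/lib/glusterd /root/glusterd.backup    # 1. backup config
    apt-get -y remove glusterfs-client glusterfs-common glusterfs-server
    pkill -f gluster                                 # 2. kill leftover processes
    apt-get -y install glusterfs-server              # 3. reinstall 3.3.1
    service glusterfs-server stop                    # 4. stop the service
    cp -a /root/glusterd.backup/. /var/lib/glusterd/ # 5. restore config
    service glusterfs-server start                   # 6. start the service
}
```

The function is defined rather than run so the steps can be inspected or cherry-picked first.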
02:32 JoeJulian format!
02:32 lng semiosis: I see
02:32 semiosis my brain is being formatted
02:32 lng :-)
02:32 semiosis learning spring framework & spring security... THE HARD WAY
02:32 lng JoeJulian: format?
02:33 JoeJulian lng: I'm being sarcastic.
02:34 lanning format c:
02:34 lanning :)
02:34 JoeJulian dd if=/dev/zero of=/dev/sda
02:35 lng lanning: windows user, please leave the premises
02:35 lng :)
02:35 JoeJulian windows? That's good ol' DOS!
02:35 lanning I was just copying his "format"
02:36 lanning unix, it's like DOS, but better.... :)
02:36 lng JoeJulian: this command is very sufficient
02:36 * JoeJulian started with DOS 2.0
02:36 lanning I started with BASIC on a Commodore 64
02:36 semiosis wooo commodore
02:37 lng JoeJulian: fuser -km /home
02:37 JoeJulian The first computer I got my hands on was a VIC20
02:37 semiosis oh no, my eclipse is turning self-aware... get the EMP
02:38 lanning LOAD "*",8,1
02:38 lanning "Press play on tape."
02:38 JoeJulian 10 CALL CLEAR
02:39 lanning I used to write self modifying BASIC code on the C64
02:43 lng I was able to service glusterfs-server start
02:43 lng finally
02:45 lng I just `pkill -f gluster && rm /etc/init.d/glusterd && apt-get remove glusterfs-client glusterfs-common glusterfs-dbg glusterfs-examples glusterfs-server && apt-get install glusterfs-server`
02:45 lng btw, /var/lib/glusterd was not deleted
02:52 semiosis sweet
02:53 semiosis now if i could just get maven to do my bidding
02:54 semiosis any enterprise java wizards in the house ;)
02:54 semiosis ?
02:56 lng semiosis: thank you so much!
02:56 semiosis you're welcome
03:01 sunus why can i suddenly not disable iptables? fedora 17
03:01 sunus systemctl stop iptables.service ; systemctl disable iptables.service     but after reboot, the iptables starts as well..
03:06 lng semiosis: how can I find out where the bottleneck is when load average is bumping up?
03:07 bharata joined #gluster
03:08 semiosis lng: that's an EXCELLENT interview question!
03:08 semiosis :)
03:08 semiosis why am I still at work at 10 PM?
03:08 semiosis do I love java this much?
03:11 lng semiosis: haha
03:15 bulde joined #gluster
03:16 semiosis lng: sorry now's not a good time for me to help you with benchmarking
03:16 semiosis too big of a topic and i'm kinda busy
03:26 kevein joined #gluster
03:27 Psi-Jack_ joined #gluster
03:28 quillo joined #gluster
03:33 nightwalk joined #gluster
03:39 Psi-Jack_ joined #gluster
03:42 Psi-Jack_ joined #gluster
03:46 sripathi joined #gluster
04:01 quillo joined #gluster
04:03 bulde joined #gluster
04:05 shylesh joined #gluster
04:05 shylesh_ joined #gluster
04:18 sripathi joined #gluster
04:45 lng is there anything abnormal in these stats? http://pastie.org/5380681
04:45 glusterbot Title: #5380681 - Pastie (at pastie.org)
04:53 shireesh joined #gluster
04:59 vpshastry joined #gluster
05:10 mohankumar joined #gluster
05:12 ramkrsna joined #gluster
05:12 ramkrsna joined #gluster
05:28 jiku joined #gluster
05:28 jiku left #gluster
05:31 raghu joined #gluster
05:42 bulde joined #gluster
05:44 themadcanudist joined #gluster
05:44 themadcanudist left #gluster
05:44 sripathi joined #gluster
05:45 hagarth joined #gluster
05:53 bala1 joined #gluster
05:53 bharata joined #gluster
05:55 18VAAE11C joined #gluster
05:56 glusterbot New news from resolvedglusterbugs: [Bug 842549] getattr command from NFS xlator does not make hard link file in .glusterfs directory <http://goo.gl/6EcUF>
06:02 hagarth joined #gluster
06:10 tryggvil joined #gluster
06:12 sunus joined #gluster
06:25 raghu joined #gluster
06:39 deepakcs joined #gluster
06:48 kevein joined #gluster
06:57 bharata joined #gluster
06:59 hagarth joined #gluster
07:09 ngoswami joined #gluster
07:10 Nr18 joined #gluster
07:11 dobber joined #gluster
07:35 sunus joined #gluster
07:36 sunus joined #gluster
07:42 rabbit7 still working on that split-brain issue, is it possible to just delete the offending files through the glusterfs mount to resolve the conflict?
07:42 Tekni joined #gluster
07:52 ctria joined #gluster
07:52 20WABJ52Z joined #gluster
07:57 xavih joined #gluster
07:57 Azrael808 joined #gluster
07:58 14WAAOVQ4 joined #gluster
07:58 kevein joined #gluster
08:00 sunus joined #gluster
08:01 14WAAOVQ4 left #gluster
08:09 chaseh joined #gluster
08:10 StevenLiu joined #gluster
08:16 tjikkun_work joined #gluster
08:18 joeto joined #gluster
08:18 ndevos rabbit7: not if you are on 3.3.x, you'll need to delete the gfid file/link as well (under the .glusterfs directory on the brick)
08:24 ekuric joined #gluster
08:24 lkoranda joined #gluster
08:29 sripathi joined #gluster
08:30 Nr18 joined #gluster
08:31 JoeJulian @split-brain
08:31 glusterbot JoeJulian: (#1) learn how to cause split-brain here: http://goo.gl/nywzC, or (#2) To heal split-brain in 3.3, see http://goo.gl/FPFUX .
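As ndevos notes above, on 3.3.x removing a split-brain copy directly on a brick must also remove its gfid hard link under the brick's .glusterfs directory. A hedged sketch follows; the getfattr output parsing is an assumption about its format, and this should only ever be run on the brick whose copy you are discarding, never through the client mount:

```shell
# Convert getfattr's hex value (0x + 32 hex chars) to uuid form.
gfid_hex_to_uuid() {
    echo "$1" | sed 's/^0x//; s/\(........\)\(....\)\(....\)\(....\)/\1-\2-\3-\4-/'
}

# Remove FILE (relative to BRICK) plus its gfid hard link. Sketch only.
remove_split_brain_copy() {
    local brick=$1 file=$2
    local hex gfid
    hex=$(getfattr -n trusted.gfid -e hex "$brick/$file" | awk -F= '/trusted.gfid/{print $2}')
    gfid=$(gfid_hex_to_uuid "$hex")
    rm -f "$brick/.glusterfs/${gfid:0:2}/${gfid:2:2}/$gfid"
    rm -f "$brick/$file"
}
```

After the bad copy is gone, self-heal repopulates the brick from the surviving replica.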
08:35 andreask joined #gluster
08:37 hagarth joined #gluster
08:49 pkoro joined #gluster
08:51 simcen joined #gluster
08:52 simcen Hi all. Can I mix 3.3.0 and 3.3.1 nodes?
08:52 simcen or at least do a rolling upgrade
08:54 JoeJulian You can do the rolling upgrade, yes.
08:55 simcen great, thanks
08:56 bulde joined #gluster
08:56 gbrand_ joined #gluster
08:57 joeto joined #gluster
09:06 grzany joined #gluster
09:21 z00dax JoeJulian: to complete the story, i ended up with 1.7 TiB of used space, migrating 1.1TiB, even after 2 rebalances
09:21 z00dax i ended up rsyncing the data away. blowing away everything, rsyncing stuff back to 1.1TiB
09:21 z00dax mystified enough to try and create a reproducer
09:22 JoeJulian My guess would be sparse files.
09:32 z00dax not sure if we have any of those, but its worth checking.
09:33 Triade joined #gluster
09:36 lh joined #gluster
09:44 DaveS_ joined #gluster
09:48 sshaaf joined #gluster
09:55 carrar joined #gluster
09:55 carrar left #gluster
09:56 deepakcs joined #gluster
09:56 simcen "gluster volume heal vol0 info split-brain" only shows gfids. Is there a way to get the concrete file or directory behind this id?
09:56 lng just after a clean install I have these errors: http://paste.ubuntu.com/1359956/
09:57 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
09:57 carrar joined #gluster
09:57 lng but I can read / write
09:58 JoeJulian lng: Those are all normal.
09:58 lng JoeJulian: thanks
09:58 gbrand_ joined #gluster
09:58 lng JoeJulian: trying to benchmark it
09:59 lng so far I was not able to simulate a load similar to live
10:00 JoeJulian simcen: http://irclog.perlgeek.de/gluster/2012-11-14#i_6151482
10:00 glusterbot <http://goo.gl/eqDJa> (at irclog.perlgeek.de)
10:04 lng is it okay to have 'performance.stat-prefetch: off'?
10:04 tryggvil joined #gluster
10:04 simcen thanks, JoeJulian
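The log JoeJulian links resolves a gfid by following its hard link under the brick's .glusterfs directory: the first four hex digits of the gfid give the two directory levels. A small sketch of the path math (the brick path in the comment is illustrative):

```shell
# A gfid like 01234567-89ab-... has its hard link at
#   <brick>/.glusterfs/01/23/01234567-89ab-...
gfid_to_glusterfs_path() {
    local gfid=$1
    printf '.glusterfs/%s/%s/%s\n' "${gfid:0:2}" "${gfid:2:2}" "$gfid"
}

# For a regular file that entry is a hard link, so the real path can
# be recovered by inode, e.g. (brick path illustrative):
#   find /data/brick1 -samefile "/data/brick1/$(gfid_to_glusterfs_path "$gfid")"
```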
10:05 JoeJulian lng: Sure, it's okay. Were you having some specific problem?
10:06 Staples84 joined #gluster
10:06 vpshastry joined #gluster
10:07 lng JoeJulian: well... when CPU utilization is ~40%, Load Average might be ~8 sometimes. We have a monitoring trigger set to 3 for 5 min
10:08 lng I keep getting alert emails
10:08 lng I want to find the bottleneck
10:09 lng and change monitoring settings accordingly
10:10 ramkrsna joined #gluster
10:13 lng JoeJulian: could you advise something?
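For lng's question, a generic first pass is to check whether the load is CPU bound or I/O bound; on EC2, high load with modest CPU utilization usually means processes blocked on EBS I/O. A sketch of the commands to run while the load is high (iostat assumes the sysstat package is installed):

```shell
# First-pass load triage (sketch; adjust intervals to taste).
triage_load() {
    uptime                  # the load averages themselves
    vmstat 1 5              # r = runnable (CPU bound), b = blocked on I/O
    iostat -x 1 5           # high %util / await = disk (EBS) saturation
    top -b -n 1 | head -20  # which processes are actually busy
}
```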
10:41 lh joined #gluster
10:41 lh joined #gluster
10:46 rudimeyer joined #gluster
11:24 sripathi joined #gluster
11:30 quillo joined #gluster
11:33 ctria joined #gluster
11:45 sripathi joined #gluster
11:48 shireesh joined #gluster
11:50 ika2810 joined #gluster
12:15 ika2810 left #gluster
12:17 ctria joined #gluster
12:25 kkeithley1 joined #gluster
12:29 silopolis joined #gluster
12:31 duerF joined #gluster
12:32 nightwalk joined #gluster
12:33 shireesh joined #gluster
12:41 puebele joined #gluster
12:45 edward1 joined #gluster
12:50 quillo joined #gluster
12:50 esm_ joined #gluster
12:52 rudimeyer I would like to talk to someone with experience in running Gluster on EC2 EBS disks, anyone?
12:58 nightwalk joined #gluster
13:01 H__ rudimeyer: I've set it up as a test site there on 4 instances, and run a simple one now but only on 1 machine. What do you want to know ?
13:03 rudimeyer H__:  I'm testing out a potential failure of an EBS disk (by force-detaching it through the API), and this locks up the Gluster volume, making the high-availability feature not so HA. I've read of people who overcome this issue but can't see how? :)
13:03 rudimeyer H__:  There are some bug reports mentioning this, but they have been untouched for a while; I don't know if people handle it another way
13:07 Nr18 joined #gluster
13:11 H__ rudimeyer: well, for that i cannot help much i'm afraid, I assume some devs here can so stay awhile
13:11 rudimeyer H__: All right, thanks
13:11 shylesh joined #gluster
13:11 shylesh_ joined #gluster
13:16 NuxRo interesting http://www.h-online.com/open/news/item/Distributed-filesystem-XtreemFS-1-4-with-Hadoop-support-1750564.html
13:16 glusterbot <http://goo.gl/Zvo8I> (at www.h-online.com)
13:27 bfoster joined #gluster
13:32 mohankumar joined #gluster
13:32 Nr18 joined #gluster
13:33 joeto1 joined #gluster
13:33 vpshastry joined #gluster
13:49 abyss^ JoeJulian: Did you have a problem with gluster + php? I have gluster 3.2 64bit and a symbolic link to a mounted gluster volume (shared > /mnt/gluster/shared). I see files properly from the shell, but from php (www-data on debian) I can only do operations on directories; no files are visible (from php)...
13:51 * m0zes has been talking up glusterfs as a reliable hpc filesystem at sc12. i think i convinced 'advanced clustering systems' to start testing it
13:52 jdarcy m0zes: Cool!
13:54 ramkrsna joined #gluster
13:58 robo joined #gluster
14:16 johnmark m0zes: cool! I was wondering if we should do anything at sc12
14:16 johnmark but I wasn't sure if we had a good story thetre
14:17 johnmark m0zes: if you find interesting people that should be involved here, do pass them on to me
14:23 m0zes i've been telling people about my good experiences, and found a few others using it as well.
14:25 m0zes i know ceph, orangefs, fhgfs, and a few other filesystems are represented here.
14:26 ekuric left #gluster
14:27 rwheeler joined #gluster
14:47 johnmark m0zes: good to know
14:48 jdarcy I wonder if Joe wore my "Your distributed filesystem sucks" shirt.
14:50 ngoswami joined #gluster
14:50 H__ heh :)
14:50 ika2810 joined #gluster
14:51 jdarcy He really seems to like FhGFS.  Too bad it's proprietary.
14:51 ika2810 joined #gluster
14:52 H__ have an image online of said shirt ?
14:53 jdarcy http://www.customink.com/designs/fssucks/jjm0-000r-j108/hotlink?pc=HL-46120&cm_mmc=hotlink-_-control_4-_-Header_txt-_-prehead1
14:53 glusterbot <http://goo.gl/RKkE9> (at www.customink.com)
14:53 jdarcy Any similarity to the official Red Hat color is purely coincidental.  This is not a Red Hat shirt.
14:54 ika2810 joined #gluster
14:54 lh joined #gluster
14:54 lh joined #gluster
14:54 H__ just red :)
14:55 sshaaf joined #gluster
14:55 H__ small text, it should splash more off that shirt /me thinks
14:55 jdarcy Submit a patch.  ;)
14:55 ika2810 joined #gluster
14:57 ika2810 joined #gluster
14:59 hagarth joined #gluster
15:06 robo joined #gluster
15:11 stopbit joined #gluster
15:33 _Bryan_ So out of curiosity..before I reinvent the wheel.....Does anyone have a nagios monitor for gluster mounts?
15:37 johnmark m0zes: what I'm hearing from you and others is that we should definitely have a larger presence there next year
15:38 johnmark _Bryan_: I know there have been attempts to do that, don't know how well they work
15:46 _Bryan_ johnmark: thanks....just thought I would check first.. 8-)
15:46 m0zes johnmark: i think that would be a good idea. plus next year it is in denver, co. if that is your thing ;-)
15:46 m0zes _Bryan_: i just check that the requisite daemons are in the process list
15:54 18VAAE59E joined #gluster
16:00 raghu joined #gluster
16:01 kkeithley_wfh where do I file a BZ against your shirt? s/filesystem sucks/file system sucks/
16:02 johnmark _Bryan_: there is this, although it looks pretty simple: http://exchange.nagios.org/directory/Plugins/System-Metrics/File-System/Check_Gluster/details
16:02 glusterbot <http://goo.gl/3ZirD> (at exchange.nagios.org)
16:03 daMaestro joined #gluster
16:03 puebele joined #gluster
16:05 bulde joined #gluster
16:06 johnmark _Bryan_: and this one was made with 3.3 in mind: http://www.johnbertrand.com/code/check_gluster_pl.html
16:06 glusterbot <http://goo.gl/Bndxw> (at www.johnbertrand.com)
16:16 wushudoin joined #gluster
16:21 joeto joined #gluster
16:25 abyss^ JoeJulian: ok, this is not gluster, only our programmers, sorry for disturb you
16:27 andreask joined #gluster
16:29 nightwalk joined #gluster
16:29 chandank joined #gluster
16:31 Humble joined #gluster
16:35 aliguori joined #gluster
16:44 joeto1 joined #gluster
17:08 nightwalk joined #gluster
17:39 bulde joined #gluster
17:48 Nr18 joined #gluster
17:55 Nr18 joined #gluster
17:55 plarsen joined #gluster
17:56 chandank What is the main purpose of the glusterfs-swift packages?
17:57 semiosis providing the UFO (Unified File and Object) feature, which is an OpenStack Swift compatible object storage over HTTP api
17:57 semiosis ,,(ufo)
17:57 glusterbot I do not know about 'ufo', but I do know about these similar topics: 'ufopilot'
17:58 Mo__ joined #gluster
17:58 chandank Thanks
17:59 chandank Moreover, when yum installing the gluster packages from the kkeithle repo, there doesn't seem to be any gpgcheck available for them.
18:00 semiosis kkeithley_wfh: ^^
18:00 chandank I learned from JoeJulian that this is a kind of official package and I can use it for production as well.
18:02 sjoeboo joined #gluster
18:03 Nr18 joined #gluster
18:31 esm_ joined #gluster
18:41 tryggvil_ joined #gluster
19:08 samppah joined #gluster
19:09 wN joined #gluster
19:09 nueces joined #gluster
19:17 DaveS_ joined #gluster
19:17 kkeithley_wfh my rpms are signed, you need to enable gpgcheck=1 in the repo file (/etc/yum.repos.d/*-glusterfs.repo)
19:18 kkeithley_wfh my rpms are signed, it even says so in the README
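A repo stanza with signature checking enabled might look like the following. This is illustrative only: the `[glusterfs-epel]` section name, baseurl, and gpgkey URL are guesses at the fedorapeople.org layout, so check the repo's README (and the pub.key location kkeithley mentions below) for the real values:

```ini
# /etc/yum.repos.d/glusterfs-epel.repo -- illustrative stanza
[glusterfs-epel]
name=GlusterFS 3.3 packages (kkeithley's fedorapeople repo)
baseurl=http://repos.fedorapeople.org/repos/kkeithle/glusterfs/epel-$releasever/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repos.fedorapeople.org/repos/kkeithle/glusterfs/pub.key
```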
19:18 balunasj joined #gluster
19:18 chandank thanks.
19:18 kkeithley_wfh They're as official as I am, whatever that might be.
19:21 DaveS__ joined #gluster
19:21 semiosis bug 856341
19:21 glusterbot Bug http://goo.gl/9cGAC low, low, ---, security-response-team, NEW , CVE-2012-4417 GlusterFS: insecure temporary file creation
19:23 chandank where do I get the public key? http://fpaste.org/hbL5/
19:23 glusterbot Title: Viewing Paste #252636 (at fpaste.org)
19:25 puebele joined #gluster
19:27 JoeJulian chandank: Last time I looked there wasn't one. Did you enable gpgcheck in the .repo?
19:28 chandank Yes I did.
19:28 kkeithley_wfh my public key is in a couple popular key repos, but hang on and I'll put it on the fedorapeople repo
19:29 chandank http://fpaste.org/pUOO/
19:29 chandank That will be awesome!
19:30 DaveS_ joined #gluster
19:33 kkeithley_wfh it's there now
19:33 kkeithley_wfh pub.key
19:33 gbrand_ joined #gluster
19:33 chandank Thanks
19:35 wN joined #gluster
19:36 DaveS_ joined #gluster
19:56 nick5 joined #gluster
20:08 Humble joined #gluster
20:08 nightwalk joined #gluster
20:32 TSM joined #gluster
20:49 pithagorians joined #gluster
20:57 tryggvil joined #gluster
21:03 TSM2 joined #gluster
21:03 nightwalk joined #gluster
21:08 tryggvil_ joined #gluster
21:08 aliguori joined #gluster
21:19 robo joined #gluster
21:22 sshaaf joined #gluster
21:27 y4m4 joined #gluster
21:27 dstywho left #gluster
21:28 tryggvil_ joined #gluster
21:32 tryggvil joined #gluster
21:41 rwheeler joined #gluster
22:00 esm_ joined #gluster
22:02 aricg_ joined #gluster
22:02 aricg_ what is raid 10 called in gluster speak?
22:04 m0zes aricg_: distributed+replicated with replica 2, more or less. i am assuming you don't want @stripe
22:05 m0zes @stripe
22:05 glusterbot m0zes: Please see http://goo.gl/5ohqd about stripe volumes.
22:06 aricg_ just messing around right now, so ill probably try both.
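For reference, the distributed-replicated ("raid 10"-like) layout m0zes describes is created by listing bricks in replica-sized groups. The server and brick names below are illustrative; the command is wrapped in a function so nothing runs on inspection:

```shell
# Distributed-replicated volume, replica 2 across four bricks.
# Bricks pair up in the order listed: (server1,server2) mirror each
# other, (server3,server4) mirror each other, and files are
# distributed across the two pairs.
create_dist_repl_volume() {
    gluster volume create myvol replica 2 \
        server1:/export/brick1 server2:/export/brick1 \
        server3:/export/brick1 server4:/export/brick1
}
```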
22:07 rags joined #gluster
22:07 TSM2 what raid are peeps generally running on the underlying hardware, R10 5 6?
22:08 * semiosis uses ec2/ebs
22:08 semiosis with no additional lvm/md raid
22:09 m0zes TSM2: some don't use any, although i use raid50. 18 5-disk raid5s, striped, per storage node.
22:09 TSM2 50 wtw thats mad
22:09 TSM2 uber critical + speed required?
22:10 semiosis see thats why it's hard to say what people use under glusterfs "generally" -- use cases vary widely
22:11 m0zes not horribly fast requirements, i just wanted large underlying bricks.
22:12 m0zes raid10 would be faster, but i couldn't justify losing half my space.
22:12 TSM2 what is the largest EBS vol, the calculator only allows me to go to 999GB
22:13 semiosis ebs is limited to 1TB/vol
22:13 semiosis so i have lots of ebs vols per server
22:14 semiosis and i dont even use 1TB ebs vols, i prefer 500GB
22:15 TSM2 how big is the cluster you have
22:15 semiosis not as big as m0zes'
22:15 semiosis (guessing)
22:15 TSM2 you still have not said
22:16 semiosis not going to either
22:16 TSM2 nor he
22:16 TSM2 i need about 10TB to start
22:16 * m0zes has ~400TB formatted. will probably be increasing it to the 2-5PB range
22:19 m0zes glusterfs needs tiering at some point in time... ;-)
22:20 TSM2 the amount you put in one box sounds like backblaze units :p
22:20 hattenator joined #gluster
22:21 TSM2 i doubt we will get anywhere near 100TB unless someone in our office starts to store hidef video
22:21 TSM2 ebs costings just dont make sense when the sizes get large
22:25 semiosis how large is large? #deepquestions
22:26 m0zes depends on what it is/where it is going ;)
22:26 TSM2 well at 12x1TB EBS vols so as to have N+1 gluster setup, 100 IOPS guaranteed, $1700pm
22:27 TSM2 thats only 6TB at N+1
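TSM2's arithmetic checks out: with replica 2 (what the chat calls N+1 is really 2x replication in space terms), usable capacity is half the raw total. A trivial sketch of the sizing math:

```shell
# usable = (volumes * size_per_volume) / replica_count
capacity_tb() {
    local vols=$1 size_tb=$2 replica=$3
    echo $(( vols * size_tb / replica ))
}
echo "12 x 1TB at replica 2 = $(capacity_tb 12 1 2) TB usable"
```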
22:28 davdunc joined #gluster
22:29 twx_ semiosis: more like: how large is large? #couldvebeenaporno
22:29 neofob left #gluster
22:33 Daxxial_ joined #gluster
22:38 aliguori joined #gluster
22:43 pkoro joined #gluster
23:08 zaitcev joined #gluster
23:11 hchiramm_ joined #gluster
23:36 badone joined #gluster
23:38 robo joined #gluster
23:40 nightwalk joined #gluster
