
IRC log for #gluster, 2013-08-05


All times shown according to UTC.

Time Nick Message
00:36 yinyin joined #gluster
00:38 recidive joined #gluster
01:15 itisravi joined #gluster
01:16 chirino joined #gluster
01:23 vpshastry joined #gluster
01:26 MugginsM has anyone made gluster 3.4 packages for Ubuntu Lucid yet?
01:31 vpshastry1 joined #gluster
01:32 kevein joined #gluster
01:34 chirino joined #gluster
01:35 yinyin joined #gluster
02:25 asias joined #gluster
02:30 CheRi joined #gluster
02:36 aravindavk joined #gluster
02:37 aravindavk joined #gluster
02:38 bulde joined #gluster
02:51 lpabon joined #gluster
02:51 aravindavk joined #gluster
03:08 vpshastry joined #gluster
03:09 vpshastry left #gluster
03:12 kshlm joined #gluster
03:15 shubhendu joined #gluster
03:15 lalatenduM joined #gluster
03:19 ajha joined #gluster
03:44 jebba joined #gluster
03:46 shylesh joined #gluster
03:49 itisravi joined #gluster
03:51 bharata joined #gluster
03:52 _pol joined #gluster
04:02 dusmant joined #gluster
04:30 shruti joined #gluster
04:31 bulde joined #gluster
04:33 sgowda joined #gluster
04:39 satheesh joined #gluster
04:40 yosafbridge joined #gluster
04:42 rcoup joined #gluster
04:50 bala joined #gluster
04:59 vijaykumar joined #gluster
05:30 bala joined #gluster
05:33 deepakcs joined #gluster
05:33 RameshN joined #gluster
05:34 raghu joined #gluster
05:38 rastar joined #gluster
05:42 kanagaraj joined #gluster
05:47 lalatenduM joined #gluster
05:48 lalatenduM joined #gluster
05:53 dusmant joined #gluster
06:05 rjoseph joined #gluster
06:05 hagarth joined #gluster
06:11 RameshN joined #gluster
06:13 dusmant joined #gluster
06:17 vimal joined #gluster
06:21 thomaslee joined #gluster
06:54 ekuric joined #gluster
07:01 dobber_ joined #gluster
07:01 SynchroM joined #gluster
07:03 ngoswami joined #gluster
07:10 sas_ joined #gluster
07:11 satheesh joined #gluster
07:13 rcoup joined #gluster
07:13 Durzo joined #gluster
07:14 Durzo joined #gluster
07:15 Durzo hi all, i was wondering where i can find the gluster management gateway source/rpm files? i can see the docs and build specs at https://github.com/gluster/gmc but it looks like the crucial source has been removed
07:15 glusterbot Title: gluster/gmc · GitHub (at github.com)
07:16 andreask joined #gluster
07:17 m0zes joined #gluster
07:18 furkaboo joined #gluster
07:20 harish joined #gluster
07:20 psharma joined #gluster
07:23 ndarshan joined #gluster
07:26 kanagaraj Durzo, gluster management support is available in oVirt, http://www.ovirt.org/Features/Gluster_Support
07:26 glusterbot Title: Features/Gluster Support (at www.ovirt.org)
07:28 shubhendu joined #gluster
07:28 kanagaraj Durzo, here are the steps to install oVirt http://www.ovirt.org/Download
07:28 glusterbot Title: Download (at www.ovirt.org)
07:33 mooperd joined #gluster
07:36 andreask joined #gluster
07:37 Durzo kanagaraj, our gluster has nothing to do with virtual machines...
07:43 kanagaraj Durzo, yes, oVirt supports both storage and vm management, you can choose what you want during the installation
07:43 Durzo ok
07:47 Durzo kanagaraj, doesnt run on ubuntu :/
07:49 shireesh joined #gluster
08:06 puebele joined #gluster
08:07 rjoseph joined #gluster
08:15 lyang0 joined #gluster
08:24 puebele joined #gluster
08:26 Norky joined #gluster
08:27 aravindavk joined #gluster
08:33 tziOm joined #gluster
08:40 ziiin joined #gluster
08:48 ppai joined #gluster
08:49 harish joined #gluster
08:56 psharma joined #gluster
09:10 satheesh joined #gluster
09:11 shubhendu joined #gluster
09:18 puebele1 joined #gluster
09:18 saurabh joined #gluster
09:18 saurabh joined #gluster
09:28 ndarshan joined #gluster
09:37 root_____ joined #gluster
09:50 ngoswami joined #gluster
09:56 ndarshan joined #gluster
09:59 ngoswami joined #gluster
10:13 RameshN joined #gluster
10:17 pk_ joined #gluster
10:18 pk_ NuxRo: are you nux! on gluster-users?
10:23 NuxRo pk_: hi, yes
10:23 NuxRo i just finished the instructions you sent me via email
10:24 NuxRo so far so good
10:24 NuxRo let me pack the logs
10:24 NuxRo do you want them from both nodes or just one?
10:30 manik joined #gluster
10:34 piotrektt joined #gluster
10:34 piotrektt joined #gluster
10:41 rjoseph joined #gluster
10:45 ziiin joined #gluster
10:46 saurabh joined #gluster
10:47 vpshastry joined #gluster
10:56 pk_ joined #gluster
10:56 pk_ NuxRo: ping
10:57 pk_ NuxRo: There was some network problem... Yes I need the logs from all the machines.
11:04 NuxRo pk_: roger, will prepare them shortly and email to you
11:06 pk_ NuxRo: Just for my satisfaction, could you check that the trusted.afr.488_1152-client-0/1 xattrs are all zeros now.
11:07 harish joined #gluster
11:08 rastar joined #gluster
11:18 NuxRo pk_: yes, on both nodes, for both files I get:
11:18 NuxRo trusted.afr.488_1152-client-0=0x000000000000000000000000
11:18 NuxRo trusted.afr.488_1152-client-1=0x000000000000000000000000
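
A minimal sketch of the xattr check being discussed, run against the file as stored on each brick (the brick path and file name here are hypothetical):

    # on each brick server, inspect the AFR changelog xattrs of the brick copy
    getfattr -d -m trusted.afr -e hex /bricks/b1/path/to/file
    # all-zero values such as 0x000000000000000000000000 mean no pending self-heal
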
11:20 Maskul joined #gluster
11:21 NuxRo pk_: and thanks a lot for your help :)
11:23 ajha joined #gluster
11:23 ziiin joined #gluster
11:25 pk_ NuxRo: Is it ok if I include gluster-users again on the mail you sent? The logs won't be attached in my reply so there should be no problem. I will ask some more questions there; it will be helpful for users in the future....
11:32 NuxRo of course
11:33 NuxRo pk_: sure, go ahead
11:36 pk_ NuxRo: thanks. I am in a release crunch at the moment, I will probably take a look at the logs in 2-3 days time and get back to you if I have any doubts
11:37 NuxRo pk_: sure
11:37 andreask joined #gluster
11:37 NuxRo thanks
11:38 pk_ NuxRo: no probs. See ya around. I gotta leave now...
11:38 edward1 joined #gluster
11:39 ricky-ticky joined #gluster
11:57 satheesh joined #gluster
12:10 chirino joined #gluster
12:17 rwheeler joined #gluster
12:23 vpshastry left #gluster
12:29 al joined #gluster
12:32 B21956 joined #gluster
12:37 aliguori joined #gluster
12:43 bala joined #gluster
12:46 chirino joined #gluster
12:50 ngoswami joined #gluster
12:54 CheRi joined #gluster
12:55 jdarcy joined #gluster
13:01 ziiin joined #gluster
13:01 ajha joined #gluster
13:07 deepakcs joined #gluster
13:08 shubhendu joined #gluster
13:09 bennyturns joined #gluster
13:26 dusmant joined #gluster
13:40 ug joined #gluster
13:42 plarsen joined #gluster
13:47 chirino joined #gluster
13:49 The_Ugster Possibly silly question, but I've been banging my head against this for a few days now. I have a nine node setup running on Debian Wheezy, each brick is an SSD, 50GB XFS formatted. I'm unable to write any single file larger than 50 gigs to this cluster. I've attempted Stripe, Distribute, and various in-betweens. I am able to write a 50gig file, then another 50gig file after that, but no single file larger than that.
13:54 ndevos The_Ugster: You should be able to if you use stripe. Distribute and replicate will place the whole file on the bricks.
13:55 ndevos The_Ugster: note that for certain xfs versions, you will need to disable speculative (?) allocation, xfs may not allocate the sparse parts of the striped files on the bricks
13:55 The_Ugster So in that case I'd use 'gluster volume create SomeVolumeName stripe 9 server:/brick'? (Also, thanks for the quick response!)
13:56 ndevos yes, '... stripe 9 server1:/brick server<N>:/brick ... server9:/brick'
13:57 The_Ugster Alright. I'll reformat all bricks and try it again. Thanks so much for the help!
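
A sketch of the create command ndevos describes, assuming nine hypothetical servers server1..server9, each with a brick at /brick:

    gluster volume create bigfiles stripe 9 \
        server1:/brick server2:/brick server3:/brick \
        server4:/brick server5:/brick server6:/brick \
        server7:/brick server8:/brick server9:/brick
    gluster volume start bigfiles
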
14:00 ngoswami joined #gluster
14:18 robinr joined #gluster
14:22 robinr What is the limitation on GID size in Gluster? I created a group with GID 200,090,480, assigned it group-writable rights, and the permissions did not take effect with Gluster. Assigning group-writable rights for groups with a GID like 101 seems to work fine. I've filed a bug in Bugzilla.
14:23 robinr Gluster Version 3.3.0
14:23 ShaharL joined #gluster
14:28 plarsen joined #gluster
14:32 ShaharL joined #gluster
14:35 The_Ugster Dang it. So I reformatted the bricks, created the stripe, set dd to write to the cluster. It stopped again at 50 gigs saying it was out of space. df tells me that the volume is full, however looking at the file that was written on the mount point, it's only 50 gigs.
14:37 rcheleguini joined #gluster
14:40 bfoster The_Ugster: have you tried with stripe-coalesce enabled? only a subset of the file should end up on a single brick of a striped volume
14:40 jbrooks joined #gluster
14:41 The_Ugster @ bfoster Is there documentation on that somewhere?
14:43 shylesh joined #gluster
14:44 bfoster The_Ugster: not sure tbh, 'gluster volume set <vol> stripe-coalesce on' should enable it iirc
14:47 jdarcy joined #gluster
14:47 The_Ugster Ahh, okey dokey. Just tried that, and the option isn't found. Thanks for version 3.2.7 of GlusterFS, Debian :p
14:48 dusmant joined #gluster
14:48 bfoster oh, I guess that's too old. The alternative would be to disable speculative prealloc on your brick mount point...
14:48 bfoster i.e., use the 'allocsize=4k' mount option
14:48 bfoster (to xfs)
14:50 The_Ugster Alright. I'll give that a shot, but I think I'll first update to the latest version.
14:51 bfoster that would be preferable :)
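
The two suggestions above, spelled out; the volume name, device and mount point are hypothetical, and stripe-coalesce needs a GlusterFS release that ships the option:

    # on a recent enough GlusterFS, enable coalesced striping
    gluster volume set bigfiles cluster.stripe-coalesce on
    # otherwise, cap xfs speculative preallocation on each brick filesystem,
    # e.g. via an /etc/fstab entry like:
    # /dev/sdb1  /bricks/b1  xfs  defaults,allocsize=4k  0 0
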
14:53 hagarth joined #gluster
14:55 mynameisdeleted joined #gluster
14:58 recidive joined #gluster
15:01 bala joined #gluster
15:04 al joined #gluster
15:06 ShaharL joined #gluster
15:09 zaitcev joined #gluster
15:12 jclift_ joined #gluster
15:15 sprachgenerator joined #gluster
15:17 al joined #gluster
15:20 sanG joined #gluster
15:21 eseyman joined #gluster
15:21 sanG Can anyone help me with the error " 0-management: Unable to start brick" on gluster 3.4.0 ?
15:23 rjoseph joined #gluster
15:28 sanG Can anyone help me with the error " 0-management: Unable to start brick" on gluster 3.4.0 ?
15:30 Technicool joined #gluster
15:33 mynameisdeleted so.. moosefs vs gluster vs gpfs  vs lustre
15:37 satheesh joined #gluster
15:41 vpshastry joined #gluster
15:42 eseyman I'm creating /exports/audio on my glusterfs server and mounting it on 4 clients
15:42 eseyman when 1 client creates a file, the 3 others can see it but it doesn't appear in /exports/audio on the server
15:43 eseyman is this normal or do I have the mount the volume on the server as well ?
15:45 eseyman glusterfs 3.2.7, btw
15:48 vpshastry left #gluster
15:53 shylesh joined #gluster
15:54 jdarcy_ joined #gluster
16:03 mooperd_ joined #gluster
16:04 _pol joined #gluster
16:11 The_Ugster bfoster: ndevos: I upgraded to 3.4.0, set cluster.stripe-coalesce, and can now write larger-than-brick files. Thanks so much for your help!
16:19 CheRi joined #gluster
16:24 recidive joined #gluster
16:27 cfeller joined #gluster
16:27 satheesh joined #gluster
16:31 redragon_ joined #gluster
16:35 ShaharL joined #gluster
16:40 Mo_ joined #gluster
16:40 daMaestro joined #gluster
16:55 sanG Can anyone help me with the error " 0-management: Unable to start brick" on gluster 3.4.0 ?
16:56 lanning joined #gluster
16:59 vimal joined #gluster
17:00 zombiejebus joined #gluster
17:05 thomasle_ joined #gluster
17:13 vpshastry joined #gluster
17:17 vpshastry left #gluster
17:22 recidive joined #gluster
17:29 bulde joined #gluster
17:31 sanG How do I delete volume ?
17:31 sanG I see error "start: Job failed to start"
17:34 The_Ugster sanG: gluster volume stop 'yourvolumename' and then gluster volume delete 'yourvolumename'
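
The same steps with a placeholder volume name; deleting the volume does not remove the data left on the bricks:

    gluster volume stop yourvolumename
    gluster volume delete yourvolumename
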
17:37 DWSR joined #gluster
17:37 DWSR joined #gluster
17:39 bdperkin_gone joined #gluster
17:42 msciciel_ joined #gluster
17:42 bdperkin joined #gluster
17:45 lalatenduM joined #gluster
17:45 sanG I see error " Failed to initialize IB Device"
17:45 sanG anyone can help ?
17:51 sanG any idea on rdma_cm event channel creation failed (No such file or directory)
17:58 mooperd joined #gluster
18:13 SynchroM joined #gluster
18:16 mooperd left #gluster
18:21 al joined #gluster
18:22 chirino joined #gluster
18:25 B21956 left #gluster
18:31 kkeithley New sets of RPMs are on http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/.  Please note that there are two new RPMs in the set: glusterfs-libs and glusterfs-cli. These are provided to satisfy dependencies for qemu-kvm and vdsm.
18:31 glusterbot <http://goo.gl/gHuOpO> (at download.gluster.org)
18:32 rbrown joined #gluster
18:32 kkeithley Anyone who has automated YUM update scripts and the like may like to check them for any requisite changes as the contents of the RPMs has changed.
18:33 kkeithley s/has changed/have changed/
18:33 glusterbot kkeithley: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
18:33 kkeithley glusterbot say my brain hurts
18:33 glusterbot kkeithley: Error: You must be registered to use this command. If you are already registered, you must either identify (using the identify command) or add a hostmask matching your current hostmask (using the "hostmask add" command).
18:35 a2 :O
18:41 kaptk2 joined #gluster
18:44 semiosis :O
18:44 rbrown guys this might sound a bit silly but i'm looking to create a 4 node web cluster: 2 instances of apache, 2 of tomcat
18:44 rbrown should I go with NFS over Gluster?
18:44 rbrown this is mostly read intensive data (forums)
18:45 semiosis hard to say without more information
18:48 rbrown like what :-)
18:48 rbrown i really want to use a shared filesystem for ease of deployment of my web applications
18:48 rbrown and the ability to easily scale up/down as needed
18:48 rbrown not sure if gluster is the right choice; I've set up gluster before with no issues
18:49 rbrown but don't know how it would work in a web environment
18:49 semiosis works well enough for some people (myself included)
18:49 rbrown how many servers?
18:51 NuxRo rbrown: this could help http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/
18:51 glusterbot <http://goo.gl/uDFgg> (at joejulian.name)
18:52 NuxRo rbrown: if you do mostly reads, you can read directly from brick, btw
18:52 NuxRo just mount it --bind -ro somewhere to make sure you're not messing it up
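
A sketch of the read-only bind mount NuxRo suggests, with a hypothetical brick path; a plain bind mount followed by a read-only remount is the usual way to make it read-only:

    mkdir -p /var/www/static
    mount --bind /bricks/web /var/www/static
    mount -o remount,ro,bind /var/www/static
    # serve reads from /var/www/static; never write to the brick directly
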
18:55 kkeithley If you've got a lot of perl scripts, perl makes a lot of stat calls that will kill performance. Python too, to a lesser extent. YMMV
18:56 semiosis caching is helpful for static assets.  apache mod_cache is real easy to use.  varnish is not as easy but more capable and probably faster
18:56 JoeJulian The nice thing about perl and python, in a general sense over php, is that they're more often run as fcgi so they're not loading from scratch on every page access.
18:56 kkeithley Re: new RPMs on download.gluster.org. The same sets of RPMs will be showing up in your Fedora and EPEL (RHEL, CentOS, Scientific Linux) updates in about a week.
18:56 JoeJulian php can, of course, be run as fcgi but even then it reads every file every time (not sure why).
18:57 kkeithley Oh, it
18:57 kkeithley Oh, it's PHP that I was really thinking about. Too damn many P* scripting languages
18:57 JoeJulian hehe
19:06 dusmant joined #gluster
19:06 tziOm joined #gluster
19:19 redragon_ JoeJulian, just an update, i deployed gluster in one config and working like a champ, working on getting the drives installed in the other configs
19:19 redragon_ I will be celebrating the day I take my last GFS offline
19:19 bulde joined #gluster
19:23 JoeJulian :)
19:32 SpacewalkN00b joined #gluster
19:32 SpacewalkN00b hello
19:32 glusterbot SpacewalkN00b: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
19:33 SpacewalkN00b anybody else on?
19:35 B21956 joined #gluster
19:44 trifonov joined #gluster
19:47 Technicool joined #gluster
19:49 GLHMarmot joined #gluster
19:54 andreask joined #gluster
19:54 JoeJulian SpacewalkN00b: Did you read what glusterbot told you?
19:58 daMaestro joined #gluster
20:06 _pol joined #gluster
20:15 manik joined #gluster
20:29 plarsen joined #gluster
20:32 rastar joined #gluster
20:34 jebba joined #gluster
20:35 _pol joined #gluster
20:35 elyograg joined #gluster
20:35 elyograg good afternoon, all.
20:36 elyograg is there any way to get a quota report that's machine readable rather than human readable?  "8.3GB" isn't all that useful for a program.
20:38 elyograg the xml format is easier to parse, but still doesn't give me a number that can be used in a programmatic calculation.
20:41 elyograg additionally, is there a value (like 0) that I can use to say "don't enforce a limit, just track usage" ?
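
For reference, a sketch of the quota listing elyograg is parsing, with a hypothetical volume name; the --xml flag produces the XML form where the installed version supports it:

    gluster volume quota myvol list
    gluster volume quota myvol list --xml
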
20:56 chirino joined #gluster
21:00 spligak When I set an option like "performance.io-thread-count" from the gluster cli, does it automatically introduce the translator for that volume?
21:02 SynchroM joined #gluster
21:10 spligak Or would I wrap the *-fuse.vol configuration toward the bottom with an "iothreads" volume declaration?
21:13 badone joined #gluster
21:20 elyograg joined #gluster
21:23 JoeJulian spligak: the cli manages building the volume configurations for you, and updating them dynamically. No manual .vol file editing required.
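
As an example of that, setting a hypothetical option value on a hypothetical volume and checking the result:

    gluster volume set myvol performance.io-thread-count 32
    # glusterd regenerates the .vol files and the running processes pick up the change
    gluster volume info myvol    # lists the option under "Options Reconfigured"
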
21:35 MugginsM joined #gluster
21:36 Mo__ joined #gluster
21:44 mynameisdeleted so... does infiniband help gluster a lot?
21:44 SpacewalkN00b I saw that when setting up a brick, you can use  lvms
21:44 SpacewalkN00b I just read that infiniband is recommended for your backbone
21:45 mynameisdeleted lustre people like it a lot too
21:45 mynameisdeleted ##hardware people dont know what it is...haha
21:46 SpacewalkN00b question: if I set up a main server glustersrv-00, and set up a brick with an lvm below it, formatted xfs, and I add another server, glustersrv-01, do I need to repeat the process and mirror that brick?
21:46 SpacewalkN00b i would imagine so, just confirming?
21:47 JoeJulian yes
21:48 SpacewalkN00b ok, so then for each server i want to add, this needs to be done for the volumes i want replicated?
21:48 JoeJulian And infiniband has much lower latency than ethernet. On transactions where latency is noticeable, then infiniband helps. This includes things like self-heal checks when you're doing a lot of them.
21:49 SpacewalkN00b what if I set up the bricks to do distribution? Do the underlying lvms need to be the same then?
21:49 SpacewalkN00b across all servers?
21:49 JoeJulian SpacewalkN00b: Each brick requires a filesystem. So yes. Each brick you add to the volume will need a filesystem which may reside on a logical volume.
21:49 JoeJulian s/logical/lvm logical/
21:50 glusterbot What JoeJulian meant to say was: SpacewalkN00b: Each brick requires a filesystem. So yes. Each brick you add to the volume will need a filesystem which may reside on a lvm logical volume.
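
A sketch of preparing one such brick and building a 2-way replicated volume across the two servers mentioned above; the VG, LV and mount-point names are hypothetical:

    # on each server
    lvcreate -L 50G -n brick1 vg_gluster
    mkfs.xfs -i size=512 /dev/vg_gluster/brick1
    mkdir -p /bricks/brick1
    mount /dev/vg_gluster/brick1 /bricks/brick1
    # from glustersrv-00, once both servers are prepared
    gluster peer probe glustersrv-01
    gluster volume create myvol replica 2 glustersrv-00:/bricks/brick1 glustersrv-01:/bricks/brick1
    gluster volume start myvol
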
21:50 SpacewalkN00b but again to confirm, this needs to be mirrored on all servers i wish to replicate/distribute that brick to?
21:50 elyograg SpacewalkN00b: what you're asking seems a little confused.  Gluster takes care of all the mirroring.
21:51 semiosis SpacewalkN00b: the underlying structure of your bricks does not need to be the same
21:51 semiosis ,,(glossary)
21:51 glusterbot A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
21:51 SpacewalkN00b ok, thats what i was thinking, but is it recommended the underlying structure is the same?
21:52 JoeJulian Probably recommended, yes.
21:52 semiosis probably will make your life easier at some point
21:52 JoeJulian At least it's recommended that the bricks be the same size.
21:52 SpacewalkN00b yes, for sake of troubleshooting and standardization. I would probably take that route
21:54 SpacewalkN00b This also means that each brick requires its own disk/lvm, correct?
21:54 semiosis not required, but that's how people usually do it
21:54 kkeithley1 joined #gluster
21:54 semiosis I call it a best practice
21:55 SpacewalkN00b ok, thanks for clearing that up.
21:55 semiosis in small scale tests though i often make a brick just a dir on the root mount, but that's a bad idea for production
21:55 semiosis all depends on what you're trying to accomplish
21:55 SpacewalkN00b i was also thinking whether you could do that; you answered my question
21:56 SpacewalkN00b just mount a brick on the rootfs of a system
21:56 SpacewalkN00b next question, thanks guys btw
21:56 semiosis filling up a brick is never a good thing, but when thr brick is your root partition, it can be catastrophic
21:56 semiosis s/thr/the/
21:56 glusterbot What semiosis meant to say was: filling up a brick is never a good thing, but when the brick is your root partition, it can be catastrophic
21:57 elyograg I like to use a subdir, not the root of the filesystem.  That way if the filesystem isn't mounted, it can't fill up the root fs because the actual brick path won't exist.
21:57 semiosis +1
21:57 JoeJulian Doesn't matter anymore with 3.4
21:57 semiosis orly?
21:57 JoeJulian glusterfsd checks for the volume-id on the directory
21:57 elyograg i haven't gotten beyond 3.3.1 yet.
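
elyograg's subdirectory convention, sketched with hypothetical paths:

    mount /dev/vg_gluster/brick1 /bricks/brick1
    mkdir -p /bricks/brick1/data
    # give gluster the subdirectory, not the mount point, as the brick path:
    #   server1:/bricks/brick1/data
    # if the filesystem is not mounted, the brick path does not exist,
    # so a missing mount cannot silently fill the root filesystem
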
21:57 SpacewalkN00b replication vs distribution, which is best for redundancy?
21:58 kkeithley_ there is no redundancy with distribution
21:58 semiosis there is no redundancy without replication
21:58 semiosis !
21:58 SpacewalkN00b really?
21:58 elyograg for redundancy, you need replication, regardless of what else you do.  if you also want to go eyond the limits of one filesystem, you need distribution as well.
21:58 SpacewalkN00b so the redundancy would be at the hardware level, with raid etc..
21:58 SpacewalkN00b when using distribution?
21:59 kkeithley_ correct
21:59 JoeJulian of course. When you distribute a thing across multiple somethings, that doesn't provide redundancy.
21:59 elyograg SpacewalkN00b: what happens if an entire server dies?  that happens even if the server has redundancy features.  a chip on the motherboard may blow.
21:59 semiosis a kernel may need to be upgraded
21:59 elyograg or a capacitor (which is probably more likely)
21:59 SpacewalkN00b thats what im worried about. Redundancy would be important to me
21:59 JoeJulian I disagree with kkeithley's "correct". RAID will not provide redundancy. It provides a degree of fault-tolerance.
22:00 SpacewalkN00b semiosis, i thought glusterfs was independent of kernel upgrades?
22:00 SpacewalkN00b JoeJulian: that is true, fault tolerance
22:01 kkeithley_ JoeJulian is correct
22:01 elyograg SpacewalkN00b: that's true, but to upgrade your kernel, you usually have to reboot.  if you don't have replication, part of your data just disappeared.
22:01 JoeJulian SpacewalkN00b: And it's not an either/or. See ,,(afr)
22:01 glusterbot SpacewalkN00b: For some do's and don'ts about replication, see http://goo.gl/B8xEB
22:01 SpacewalkN00b so what setup does Pandora have for data redundancy and scaling?
22:02 * JoeJulian loves Pandora's scaling model....
22:02 kkeithley_ probably distribution+replication
22:02 SpacewalkN00b how is that setup, minimum servers etc... what is replicated to what?
22:02 semiosis i heard a rumor long ago (no idea if this is true) that they turned off self-heal checks & used a large replica count to get killer read speeds
22:02 JoeJulian Most of Pandora's needs are distribution. They don't need to scale single-file simultaneous reads all that much because they never give you the song you're asking for.
22:03 kkeithley_ with four bricks you can do 2+2
22:03 kkeithley_ or 2x2 if you prefer
22:03 JoeJulian 2^2
22:04 kkeithley_ six bricks doesn't give you 3^2
22:04 JoeJulian No, but it works with 4. :D
22:04 kkeithley_ ;-)
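
kkeithley's 2x2 layout as a command, with hypothetical server and brick names; with replica 2 the bricks are paired in the order given, so server1/server2 form one replica pair and server3/server4 the other:

    gluster volume create myvol replica 2 \
        server1:/bricks/b1 server2:/bricks/b1 \
        server3:/bricks/b1 server4:/bricks/b1
    gluster volume start myvol
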
22:06 SpacewalkN00b so what is the minimum amount of servers for distribution+replication? 4 or 3
22:06 SpacewalkN00b im going to build a lab and test with 4 boxes
22:06 semiosis practically only 2 servers
22:06 semiosis you can have many bricks per server & replicate between the servers
22:06 JoeJulian You could do it with 1, really....
22:06 SpacewalkN00b but if i only have 2, then distribution is not needed imo
22:07 SpacewalkN00b only replication
22:07 JoeJulian Not "needed" but that's not what you asked.
22:07 SpacewalkN00b that is true JoeJulian
22:07 elyograg I have a setup with only two servers that is distributed+replicated.  There are 8 bricks per server, each of which is 5TB.
22:07 elyograg We're gearing up to add two more servers, same setup.
22:08 semiosis an advantage of having more, smaller, bricks per server is that the time to heal for a brick replacement will be less
22:08 SpacewalkN00b elyograg; so in your setup, you would just add the additional servers and tell gluster you want to distribute+replicate and it will take care of it?
22:08 JoeJulian I have just 3 servers, 4 bricks per server with 3-way replication.
22:10 elyograg I'll use add-brick to add the additional bricks to the volume. my replica count is 2 and will remain 2.
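
elyograg's expansion step in command form, with hypothetical names; bricks are added in multiples of the replica count, and a rebalance spreads existing data across the new bricks:

    gluster volume add-brick myvol server3:/bricks/b1 server4:/bricks/b1
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status
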
22:12 chirino joined #gluster
22:13 SpacewalkN00b if Im using gluster to store vm disks, and on some of those vms I require a fault tolerant share, maybe using nfs, what is the recommendation for the share's storage? Store it on the same glusterfs or a different glusterfs cluster? does it make a difference?
22:14 SpacewalkN00b example, im hosting a zimbra server; the vm disk will be on 1 glusterfs cluster.  /opt/zimbra is where the mail server app resides and I want this to be fault tolerant on its own
22:15 SpacewalkN00b does it matter if I put that share on the same glusterfs or should it reside on another? The reason i ask is mostly performance
22:15 JoeJulian If you're worried about performance, use qemu-kvm and the native interface.
22:16 SpacewalkN00b i also wanted to be able to re-image that vm and just re-mount the nfs share, for restore purposes; i think that would be quick? Any issue with this setup?
22:16 JoeJulian Try not to engineer for local/remote because you want the flexibility to have remote satisfy your sla/ola.
22:17 JoeJulian Just nfs.
22:17 SpacewalkN00b the only thing that really changes on that mail server is the directory where it lives
22:18 SpacewalkN00b so then this setup would work? If so, would keeping both the vm disk and nfs share on the same gluster be fine, performance wise?
22:20 elyograg if you use the gluster nfs mounting capability, the IP address in the mount request is a single point of failure.  You can avoid that if you have a shared IP using a cluster program like heartbeat or corosync/pacemaker.
22:21 JoeJulian It's still TCP so you still see the connection loss from your hypervisor. I'm not sure how/if they recover from that gracefully.
22:24 SpacewalkN00b Im not looking for failover capabilities, more like restore
22:24 SpacewalkN00b but thanks for that advice on the heartbeat
22:37 SpacewalkN00b If i have two servers replicating and one goes down, will the client automatically go to the secondary server for data?
22:38 elyograg if it's the native (fuse) client, yes.  if it's NFS, no.  that's why you need something that can switch a shared IP address to the other server.
22:39 SpacewalkN00b ok, i see, so in the case of the mail server directory, i could just use the fuse client and point that particular directory to another brick
22:39 elyograg the qemu-kvm native interface that Joe mentioned will also fail over with no shared IP, because (as I understand it) it incorporates the fuse client into the hypervisor.
22:39 JoeJulian SpacewalkN00b: You don't point to bricks. You point to volumes.
22:39 elyograg you don't use bricks on the client.  you use volumes.  volumes are built using bricks.
22:39 SpacewalkN00b yes, thats what I meant.
22:39 JoeJulian The brick is ONLY for GlusterFS' use.
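
A sketch of a native-client mount of a volume (not a brick), with hypothetical names; the named server is only used to fetch the volume description, after which the client talks to all bricks directly and keeps working if one replica goes down. The backupvolfile-server mount option, where the packaged mount script provides it, only covers that initial fetch:

    mount -t glusterfs -o backupvolfile-server=glustersrv-01 glustersrv-00:/myvol /mnt/myvol
    # /etc/fstab equivalent:
    # glustersrv-00:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,backupvolfile-server=glustersrv-01  0 0
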
22:40 SpacewalkN00b thank you
22:40 SpacewalkN00b ok guys,  thank you for answering my questions.  Time to get to testing.
22:43 elyograg JoeJulian: did you see my questions earlier?  I got disconnected after I asked them, so if you did answer I didn't see it.
22:45 duerF joined #gluster
22:47 JoeJulian elyograg: I did but unfortunately I don't have that answer.
22:47 elyograg JoeJulian: cool.  thanks anyway.  I suspect that the answer is no, so I think I need to file a bug.
22:47 glusterbot http://goo.gl/UUuCq
22:48 elyograg glusterbot: thanks.
22:48 glusterbot elyograg: I do not know about 'thanks.', but I do know about these similar topics: 'thanks'
22:48 elyograg heh.
22:48 JoeJulian lol
22:49 chirino joined #gluster
22:57 elyograg not waiting for glusterbot to mention the bug I just filed.  Need to go afk so I can head home.
23:04 roidelapluie joined #gluster
23:04 roidelapluie joined #gluster
23:04 mriv joined #gluster
23:04 rnts joined #gluster
23:08 _pol joined #gluster
23:09 ninkotech joined #gluster
23:28 ninkotech joined #gluster
23:29 maco_ joined #gluster
