
IRC log for #gluster, 2012-12-01


All times shown according to UTC.

Time Nick Message
00:08 redsolar joined #gluster
00:13 JoeJulian jiffe98: Completely off topic of your issue, but wouldn't you rather keep those in memcache?
00:28 TSM2 joined #gluster
00:33 nick5 anyone have experience using the nfs mount for a gluster volume, specifically if in a 3x replica setup, one brick goes away and the other 2 are still up, shouldn't everything continue to work?
00:35 nick5 with the same setup (3x replica) and using fuse, it's just a 42s timeout before the writes continue on the other 2 nodes.
00:36 nick5 on nfs, both the nfs (gluster server) and the client seem to get into a funky state.
00:58 semiosis nick5: is your nfs client mounting from the server that dies?
00:58 semiosis or from a server that survives?
01:04 nick5 from a server that survives.
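
The 42-second pause nick5 describes is GlusterFS's default network.ping-timeout. A minimal sketch of inspecting and, cautiously, tuning it; the volume name gv0 is a placeholder, not anything from this channel:

    # show any non-default options set on the volume
    gluster volume info gv0
    # the timeout defaults to 42 seconds; lowering it speeds up failover
    # but risks spurious disconnects when a server is merely busy
    gluster volume set gv0 network.ping-timeout 20
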
01:06 genewitch JoeJulian: Last night i was asking about NAT... i am trying to run gluster volumes on Amazon cloud and connect to it using fuse client from our local datacenter. Should i just look into doing VPN to get around the subnet issues?
01:12 genewitch or am i better off just mounting on every gluster node the gluster volume and then rsyncing to that from the DC?
01:12 genewitch i was hoping to have something local so i could use lsyncd and other helpers
01:16 dalekurt joined #gluster
01:20 pdurbin left #gluster
01:47 plarsen joined #gluster
02:12 genewitch what's the command to see all the ports in use?
02:19 semiosis i use netstat -anp
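
To narrow that output down to gluster's own listeners, something like the following works; as an aside, in the 3.3 era glusterd listened on 24007 and each brick took a port from 24009 upward, though that varies by version:

    # every socket plus the owning process (run as root for -p)
    netstat -anp | grep gluster
    # only the listening TCP ports of the gluster daemons
    netstat -lntp | egrep 'glusterd|glusterfsd|glusterfs'
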
02:21 nightwalk joined #gluster
02:25 JoeJulian genewitch: Yes, I woud use a vpn for that.
02:25 JoeJulian would
02:28 genewitch i am trying to figure out how to do it through SSH
02:28 genewitch via http://hekafs.org/index.php/2011/05/building-your-own-dropbox-equivalent/
02:28 glusterbot <http://goo.gl/fYpUR> (at hekafs.org)
02:33 genewitch and i just requested a temporary VPN account to see if that will work, cuz ssh isn't
02:38 yinyin_ joined #gluster
02:41 bala joined #gluster
02:41 nightwalk joined #gluster
02:50 Oneiroi joined #gluster
02:50 Oneiroi joined #gluster
03:10 nightwalk joined #gluster
03:12 JoeJulian genewitch: I just use openswan to do ipsec vpns
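
A bare-bones sketch of the site-to-site openswan setup JoeJulian is referring to; every name, address, and subnet below is a placeholder, not taken from this discussion:

    # /etc/ipsec.conf fragment
    conn dc-to-ec2
        left=203.0.113.10         # datacenter gateway public IP (example)
        leftsubnet=10.10.0.0/24   # subnet the fuse clients live on
        right=198.51.100.20       # cloud instance public IP (example)
        rightsubnet=10.20.0.0/24  # subnet the gluster servers live on
        authby=secret             # pre-shared key kept in /etc/ipsec.secrets
        auto=start
    # load and bring the tunnel up by hand while testing
    ipsec auto --add dc-to-ec2
    ipsec auto --up dc-to-ec2

With a cloud instance behind NAT on one end you would typically also need NAT traversal enabled, but the shape of the configuration stays the same.
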
03:15 tc00per left #gluster
03:28 mohankumar joined #gluster
03:32 yinyin joined #gluster
03:50 dalekurt joined #gluster
03:52 nightwalk joined #gluster
04:07 bwasher joined #gluster
04:08 bwasher ...
04:15 nightwalk joined #gluster
04:20 mooperd_ joined #gluster
04:26 bwasher left #gluster
04:27 bwasher joined #gluster
04:30 rudimeyer_ joined #gluster
04:30 jayeffkay joined #gluster
04:30 bwasher left #gluster
04:31 bwasher joined #gluster
04:31 bwasher joined #gluster
04:39 nightwalk joined #gluster
04:48 yinyin joined #gluster
04:48 RobertLaptop joined #gluster
05:41 yinyin joined #gluster
05:49 n8whnp joined #gluster
06:46 yinyin_ joined #gluster
07:06 yinyin joined #gluster
07:35 yinyin joined #gluster
07:51 yinyin joined #gluster
08:40 mohankumar joined #gluster
08:40 atlee_ joined #gluster
08:41 atlee_ maybe someone can help me with my gluster test?
08:42 atlee_ i have 3 servers running glusterfs server 3.3. the first server probed servers 2 and 3 and all are working. i make a volume and start it and all 3 see it. i touch five text files on server 1 but when i do an ls of the dir on servers 2 and 3 the files do not replicate???
08:42 rudimeyer joined #gluster
08:43 atlee_ what am i missing?
08:44 atlee_ i have even mounted the volume on my client and doing an ls on the dir i cannot see the files?
08:46 atlee_ dont worry figured it out
08:47 atlee_ client write a file it all replicates
08:47 atlee_ i touched five files server side and it did not replicate
08:47 JoeJulian Yep, the bricks are off limits.
08:47 atlee_ off limits how?
08:47 atlee_ server cannot write files to replicate? only mounted clients>
08:47 atlee_ ?
08:48 JoeJulian They're used by gluster, not by users. If you want to write to the volume from the server, mount the client there as well.
08:48 atlee_ sweet yeah figured it out now, had me pondering for a while :(
08:48 atlee_ tks
08:49 JoeJulian Any time. :D
08:52 atlee_ sweet works good
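
For anyone who trips over the same thing: files must be written through a client mount, never directly into the brick directory. A sketch of mounting the volume on the server itself; myvol and the paths are placeholders:

    # mount the volume through the native client on the server
    mkdir -p /mnt/myvol
    mount -t glusterfs localhost:/myvol /mnt/myvol
    # writes made here go through gluster and replicate normally
    touch /mnt/myvol/file{1..5}.txt
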
09:47 mjrosenb joined #gluster
09:47 circut joined #gluster
09:47 wica joined #gluster
09:47 gluslog joined #gluster
09:47 raghavendrabhat joined #gluster
10:05 atlee_ Joe in aust?
10:07 JoeJulian Nope, Seattle.
10:07 JoeJulian I know some people in Melbourne though.
10:09 atlee_ just reading your blog about why replication is hard
10:10 atlee_ i'm trying to work on a gluster prototype where 2-3 servers in different locations are bridged into the same network, though 80+ people will be working on the same files, nteowkr shares
10:10 atlee_ and obviously we do not want data loss or split brain
10:11 JoeJulian use posix locks and you won't get "nteowkr" ;)
10:11 atlee_ we use locks at the moment with our nfs/samba
10:12 JoeJulian Just making a joke about out-of-order edits.
10:12 atlee_ i read it as network :D
10:12 JoeJulian I'd tell you a UDP joke, but you probably wouldn't get it.
10:14 atlee_ how would posix locks work with 80+ people?
10:14 atlee_ maybe 4-5 people open same file at any given time
10:14 atlee_ maybe of course
10:14 atlee_ i'm in Aust NSW
10:15 JoeJulian That's a programming issue. Typically you would lock a file, or a portion of the file prior to writing that portion. I'm not sure how the other clients know that the edit happened though. Probably an ESTALE or something.
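
As a rough shell-level illustration of coordinating writers: flock(1) takes a whole-file BSD-style lock rather than the byte-range fcntl locks JoeJulian is describing, but the pattern is the same, and the paths here are hypothetical:

    (
      flock -x 9 || exit 1                     # block until we hold the exclusive lock
      echo "my edit" >> /mnt/myvol/shared.txt  # safe to modify while holding it
    ) 9>/mnt/myvol/shared.txt.lock             # fd 9 is tied to the lock file
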
10:18 atlee_ lets say a word doc, same file name, all three servers replicate, 3 people add different text to same file at 3 different times and then server 1/2 or 3 go down
10:18 atlee_ when missing servers come back up, how would someone pick what version to lose or keep
10:19 JoeJulian If the server went down, then nobody's connected to it. That's one of the easy ones.
10:19 atlee_ can glusterfs differentiate between the 3 diff versions and keep all and join them back to one file?
10:20 robo joined #gluster
10:20 atlee_ if you have 3 servers then the client's mount still exists right?
10:20 JoeJulian There would only be one current version of the file. The versions residing on the downed server(s) would be stale and would be updated.
10:20 JoeJulian right, the mount would be able to stay up with three replicated servers if you lose up to 2.
10:21 atlee_ so 3 peoples edits a 3 diff time intervals are still kept?
10:21 JoeJulian Yes and no.
10:21 JoeJulian MS Office isn't capable of that.
10:22 JoeJulian Microsoft will just let one user edit a document and anyone else that tries to open it will open it read-only.
10:22 atlee_ like a ms template it spawns another version of the file
10:24 JoeJulian Right, but that would still be only one person with that file open. You can't, for instance, collaboratively edit a spreadsheet where 5 people are all editing cells.
10:24 JoeJulian I don't know of any software that does that other than Google Docs.
10:25 atlee_ so in theory running a gluster over multiple sites, multiple servers using same mounted brick would work maybe?
10:26 atlee_ im talking about a gluster with 2 servers in one suburb and 2 servers at another site in another suburb, with latency less than 100 probably
10:26 atlee_ all within the same network because they will be joined together of course
10:28 JoeJulian The latency will be amplified but yes. That will be able to work.
10:28 atlee_ im only new with this stuff
10:29 atlee_ but ive setup 3 servers in vm's and me as a client and i'm trying to test for fail points
10:29 atlee_ works so far
10:33 JoeJulian I would think that your biggest worry should be network splits.
10:34 atlee_ what if site one network disconnects from site 2 network, 4 servers in total, 2 each side
10:34 atlee_ would data be safe?
10:34 atlee_ or mega split brain?
10:35 JoeJulian It's only split-brain if both sides are able to edit the same files while disconnected.
10:37 atlee_ so basically if this happens it's up to me as an it admin to choose which version is right and ditch the others?
10:38 atlee_ you have a glusterfs setup between london and melbourne?
10:39 JoeJulian Heh, no... just wanted the most outrageous example I could think of.
10:40 JoeJulian And yes, it's up to you as an admin.... It's all your fault. ;)
10:41 atlee_ lol
10:41 er|c joined #gluster
10:42 atlee_ thanks Joe, really enjoy reading your blog examples
10:43 atlee_ and speaking with you
10:43 atlee_ got my mind going to diff corners
10:43 atlee_ we run our main site off wifi NBN
10:44 atlee_ and of course we have had major carrier outages
10:44 atlee_ but if we create a glusterfs cloud based file storage, carrier will cause probs but is it still workable and a smart choice
10:45 atlee_ and we also have cloud based servers, and even those instances can go offline due to storage problems
10:45 atlee_ this can also be another point of failure
10:45 JoeJulian It's all about disaster planning.
10:46 atlee_ more servers for glusterfs = more directions for success yes?
10:46 atlee_ assuming running off diff internet connections of course
10:47 JoeJulian Mostly, yeah. If you can get a 3 or 5 way replica going and use quorum, you might have a pretty good chance of avoiding split-brain.
10:47 JoeJulian Those Microsoft products are still going to suck though. ;)
10:48 atlee_ hahaha lol
10:48 atlee_ i hate ms products really but people at work that dont know any better will not change :D:D:D
10:49 JoeJulian I've been slowly forcing change on the last of our windows users.
10:50 atlee_ to linux diskless?
10:50 JoeJulian "Here's the system that will work. You *may* continue to use that MS product, but it's no longer going to be supported. If it breaks, you're on your own." - Me
10:51 JoeJulian No, I don't do diskless. It's more of a pain than it's worth.
10:51 atlee_ it is a pain
10:51 atlee_ wasted a lot of hours with it
10:54 atlee_ how long have you been using glusterfs?
10:54 JoeJulian nearly 3 years now I think...
10:55 atlee_ and out of 3 years, how many split brains
10:55 atlee_ not physically of course from bashing your head against the walls :P
10:55 JoeJulian Lots... but none of them as painful as the ones I had with drbd.
10:56 JoeJulian Some of mine though were my own damned fault.
10:56 JoeJulian Plus the software has improved immensely over that time.
10:56 atlee_ drbd another cluster type yes?
10:56 atlee_ same as ocfs2?
10:59 JoeJulian drbd is a block level replication device that was designed specifically to send me to an early grave due to heart failure and a simultaneous aneurysm.
10:59 atlee_ so it's a bad method of replication?
11:01 JoeJulian In my opinion.
11:01 JoeJulian I'm sure there are people that use it successfully and could tell me I was doing something wrong though. But when it comes right down to it, I like having actual files that are accessible in case of a disaster instead of blocks.
11:02 atlee_ i probably would agree
11:03 atlee_ block level you would need to restore, which takes time, then run it, then search for files, and if a block is damaged then you prob couldn't restore it right
11:04 JoeJulian Not without a lot of free time and knowledge of what you're recovering.
11:06 atlee_ im glad i spoke with you as you seem to have a lot of experience with glusterfs, im learning heaps actually since mucking around
11:07 JoeJulian Thanks. I try to help.
11:08 atlee_ if i wanted to enable quorum is it easy option?
11:12 JoeJulian yep, gluster volume set help will show you the list of things that are normally configurable. I think quorum is in there....
11:12 JoeJulian nope...
11:13 JoeJulian But it's on the ,,(options) page for 3.2. It hasn't changed for 3.3 though.
11:13 glusterbot http://goo.gl/dPFAf
11:16 atlee_ sweet awesome
11:16 atlee_ tks
11:18 atlee_ ok so if i set cluster.quorum-type "Fixed" then what would my cluster.quorum-count be? does it depend on number of servers?
11:18 JoeJulian yeah, you want at least half.
11:18 JoeJulian Er, half +1
11:18 atlee_ ok so 4 servers = 3
11:19 atlee_ 2+1
11:19 JoeJulian right
11:20 atlee_ ok cool all good.
11:20 atlee_ really appreciate your help
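
Pulling the quorum thread together, a sketch of the options discussed above; the volume name myvol is a placeholder and the option names are the ones atlee_ asked about:

    # let gluster work out the majority itself
    gluster volume set myvol cluster.quorum-type auto
    # or pin it explicitly: with 4 replicas, half + 1 = 3
    gluster volume set myvol cluster.quorum-type fixed
    gluster volume set myvol cluster.quorum-count 3
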
12:34 tjikkun joined #gluster
12:48 gbrand_ joined #gluster
12:55 chirino joined #gluster
13:08 lkoranda joined #gluster
13:31 lkoranda joined #gluster
14:24 lkoranda joined #gluster
14:31 tjikkun joined #gluster
14:38 hagarth joined #gluster
15:04 plarsen joined #gluster
15:17 mooperd joined #gluster
16:11 thekev joined #gluster
16:12 johnmark joined #gluster
16:14 arusso joined #gluster
16:15 lkoranda joined #gluster
16:41 noob21 joined #gluster
16:42 noob21 left #gluster
16:46 robo joined #gluster
17:36 JoeJulian joined #gluster
17:39 lkoranda joined #gluster
17:43 AK6L left #gluster
17:50 Bullardo joined #gluster
18:06 jonathanpoon joined #gluster
18:17 jonathanpoon Hi Everyone.  I am currently implementing a GlusterFS storage unit.  I have 3 Gluster servers, with each one having 2 bricks.  I would like to implement a distribute replicate system.  Would it be incorrect to set the number of replicates to 2?  Is it possible that both replicates would reside on the same server since each server hosts 2 bricks?
18:41 m0zes jonathanpoon: replica pairs are done in the order of bricks. host1:/brick1 host2:/brick1 host3:/brick1 host1:/brick2 host3:/brick2 host2:/brick2
18:41 m0zes host1:/brick1 host2:/brick1 are paired, host3:/brick1 host1:/brick2 are paired, host3:/brick2 host2:/brick2 are paired
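
A sketch of a create command laid out in the order m0zes describes, so that no replica pair lands on a single server; the volume name gv0 and the brick paths are placeholders:

    gluster volume create gv0 replica 2 \
        host1:/export/brick1 host2:/export/brick1 \
        host3:/export/brick1 host1:/export/brick2 \
        host3:/export/brick2 host2:/export/brick2
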
18:44 jonathanpoon thats great...
18:45 jonathanpoon from a practical standpoint, how many replicas are necessary?
18:45 jonathanpoon is there a general rule?
18:46 m0zes generally I wouldn't go over replica 3. When I replicate I use replica 2.
18:46 jonathanpoon another question I had was from a hardware perspective.  Does the master server need more CPU/RAM?
18:46 jonathanpoon gotcha..
18:49 m0zes CPU isn't a big deal in my experience. depending on how many bricks per host, I tend to have 8-16GB per brick for read caching server side.
18:49 gbrand_ joined #gluster
18:50 m0zes I wouldn't use mismatched servers. I had two machines, one with pcix backplane and one with a pcie backplane. the pcix one couldn't keep up and would slow down the other when dealing with replicated volumes
18:50 m0zes same mem, same number of cores, same size/number of bricks...
18:51 jonathanpoon okay
18:52 jonathanpoon I was planning on having 16GB of RAM with a quad core CPU.
18:52 m0zes that should be reasonable
18:52 jonathanpoon I was also planning on using a 4-nic bonded ethernet
18:52 jonathanpoon for each server...is that sufficient for network bandwidth purposes in your experience?
18:54 m0zes I'm using 10GbE currently. 4 bonded nics have a downside where each tcp session resides on 1 nic (to avoid out of order packets).
18:54 m0zes http://ganglia.beocat.cis.ksu.edu/?r=hour&c=Beocat&h=echo http://ganglia.beocat.cis.ksu.edu/?r=hour&c=Beocat&h=electra
18:54 glusterbot <http://goo.gl/Ovur9> (at ganglia.beocat.cis.ksu.edu)
18:54 m0zes those are my fileservers currently.
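
If you do bond NICs as discussed, hashing on layer 3+4 at least spreads different TCP sessions across the slaves; a RHEL-style fragment, purely illustrative, and note that any single gluster connection is still one TCP session capped at one NIC's line rate:

    # /etc/sysconfig/network-scripts/ifcfg-bond0 (fragment)
    DEVICE=bond0
    BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
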
18:55 jonathanpoon yeah
18:56 jonathanpoon for each brick, do you employ a raid5 or raid6?
18:56 m0zes I use raid50 to make a large lvm volume to carve out bricks for different volumes.
18:57 * m0zes has large storage arrays. 200TB per server.
18:57 jonathanpoon how many hard drives do you have per server?
18:58 jonathanpoon thats huge...!
18:58 jonathanpoon is it a 10U server?
18:59 m0zes I have these http://www-03.ibm.com/systems/xbc/cog/storagesystems/ds3512aag.html that attach via Fiber channel to my 2U servers.
18:59 glusterbot <http://goo.gl/jZNmp> (at www-03.ibm.com)
19:00 m0zes 1 head controller, 7 slaves. each with 12 3TB drives. I have 1 of those per server.
19:00 m0zes (18 5 disk raid5 arrays, striped. 6 hot spares.) per server
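
A rough sketch of the "raid50 into LVM, carve out bricks" layout m0zes describes; the device names, sizes, and mount points are invented for illustration:

    pvcreate /dev/mapper/bigarray          # the striped raid5 (raid50) device
    vgcreate gluster_vg /dev/mapper/bigarray
    lvcreate -L 20T -n brick1 gluster_vg   # one logical volume per brick
    mkfs.xfs -i size=512 /dev/gluster_vg/brick1   # larger inodes leave room for gluster xattrs
    mkdir -p /export/brick1
    mount /dev/gluster_vg/brick1 /export/brick1
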
19:02 jonathanpoon how do you make many servers look like one volume before carving out bricks?
19:02 jonathanpoon sorry for my ignorance!
19:03 odavid joined #gluster
19:04 odavid left #gluster
19:05 m0zes The head storage array controller connects to the slaves and handles the raid on all the slaves. I just connect to the head controller via FC on my glusterfs servers.
19:05 jonathanpoon oh I see, its a hardware configuration
19:08 m0zes ideally I had more distribution than that, but this is what was in the budget.
19:08 jonathanpoon is the head storage array controller a raid card?
19:09 jonathanpoon or a fibre channel card?
19:10 m0zes the head storage controller has two raid controllers in it. and connects to the slaves via Multipathed SAS. The head controller sees the individual disks and raids the disks.
19:10 jonathanpoon gotcha
19:12 lh joined #gluster
19:12 lh joined #gluster
19:17 dalekurt joined #gluster
19:22 jonathanpoon m0zes:  is multipathed SAS more scalable than using gluster over the network?
19:25 m0zes ~24Gbit bidirectional, and it allows access to all the data even if a link is broken.
19:25 jonathanpoon m0zes:  did you need a SAS switch?
19:26 m0zes no it is part of the hardware I use.
19:27 jonathanpoon so on your head storage node, each card has 4 external sas ports?
19:29 m0zes The head storage node has two cards that have 2 external sas ports. all the slaves have the same. You just daisy chain the connections following the wiring diagram in the manual
19:30 m0zes the head storage node can handle up to 192 disks if you want to go that route, but it requires the purchase of a feature key from IBM.
19:31 jonathanpoon m0zes: so when you mount the storage, you only access the master server
19:31 jonathanpoon meaning, the OS is installed on the master server
19:31 jonathanpoon all the slave computers are left on without any software?
19:33 m0zes all storage controllers have their own firmware. You need a server to connect to the master storage controller via fiber channel (or SAS/iSCSI with the right controllers).
19:35 jonathanpoon so when you have glusterfs setup
19:36 jonathanpoon you installed an OS on the head storage node, setup glusterfs, and allow other computers to access that node through TCP/IP to see the storage right?
19:36 m0zes yes
19:36 jonathanpoon so I'm a bit confused...you still need fibre channel between the head storage node and the slave storage nodes?
19:37 jonathanpoon or do you have fibre channel connected between the head storage node and the computers outside the storage units?
19:40 m0zes I guess I've muddied the waters with the various terms I've been using. for each of my GlusterFS servers I have 1 IBM System x3650 M3 that acts as the linux host for glusterfs. it connects to an IBM DS3512 via FC. Then the IBM DS3512 connects via daisy chained SAS to 7 IBM DS3512 Express units.
19:41 jonathanpoon gotcha
19:41 jonathanpoon in your case, what does glusterfs add?
19:42 jonathanpoon since you already have redundancy through the raid50...
19:43 m0zes Since I have 2 GlusterFS servers with the same stack it provides a single unified filesystem, and replication/redundancy in case I need to update/reboot a fileserver
19:43 jonathanpoon ooh I see...
19:44 jonathanpoon thanks for your help!
19:44 jonathanpoon I've learned quite a bit
19:45 m0zes no problem. the other nice thing about glusterfs is that I can add more machines with this stack and increase my capacity/performance.
19:45 jonathanpoon yeah
19:45 jonathanpoon definitely...
19:46 jonathanpoon the multipath sas solves the problem of bandwidth between the storage servers
19:46 jonathanpoon so I definitely need to look into that
19:47 m0zes I also multipath the FiberChannel connections between the DS3512 and the x3650
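
On the Linux host, dm-multipath is the usual way those duplicate FC paths get presented as a single device; a quick way to check, with nothing here specific to m0zes's hardware:

    # list the multipath maps and the state of each underlying path
    multipath -ll
    # the daemon that monitors the paths (config lives in /etc/multipath.conf)
    service multipathd status
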
19:47 jonathanpoon do your slave nodes have a motherboard in them?
19:48 jonathanpoon or are they just nodes with hard drives, a power supply, and a sas pass through card?
19:48 jonathanpoon something like this:  http://www.koutech.com/proddetail.asp?linenumber=588
19:48 glusterbot <http://goo.gl/t6JsG> (at www.koutech.com)
19:49 m0zes They have some sort of management (not really user usable), but mostly hard drives with SAS passthrough.
19:49 jonathanpoon gotcha
19:51 m0zes The thing I like about this setup is that almost every component has some sort of redundancy.
19:51 jonathanpoon yeah, definitely
19:51 jonathanpoon when you have 2 raid cards in the master node...
19:52 jonathanpoon each one is daisy chained to 3 different servers?
19:53 m0zes two links daisy chained to the one above and the one below.
19:53 m0zes two sas cables up, two sas cables down
19:54 jonathanpoon each slave only has 2 SAS ports...
19:54 jonathanpoon so each slave is connected to another slave
19:55 m0zes two per card. there are two cards in each storage node.
19:55 jonathanpoon oh I see
20:15 gbrand_ joined #gluster
20:55 ron-slc joined #gluster
21:19 saz_ joined #gluster
21:43 _Bryan_ joined #gluster
22:09 jonathanpoon joined #gluster
22:20 duerF joined #gluster
22:29 noob2 joined #gluster
22:43 noob21 joined #gluster
22:43 noob21 left #gluster
23:48 TSM2 joined #gluster
