IRC log for #gluster, 2013-10-21


All times shown according to UTC.

Time Nick Message
00:27 failshell joined #gluster
00:32 MrNaviPa_ joined #gluster
00:52 ozbitty_ joined #gluster
00:59 ozbitty_ Morning/Evening.. I'm hoping someone here can help me with a gluster/proxmox issue I'm trying to resolve... after extensive searching I'm no closer to solving the mystery of "0-mgmt: failed to fetch volume file (key:/test-volume)"
01:13 bharata-rao joined #gluster
01:21 harish joined #gluster
02:26 purpleidea ,,(hi) | mibby
02:26 glusterbot purpleidea: Error: No factoid matches that key.
02:26 purpleidea hi
02:26 glusterbot purpleidea: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
02:26 purpleidea hi | mibby
02:26 purpleidea mibby: just ask your question :)
02:28 purpleidea @teach hi Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
02:28 harish joined #gluster
02:28 purpleidea @teach glusterbot hi Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
02:29 purpleidea @learn hi Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
02:29 glusterbot purpleidea: (learn [<channel>] <key> as <value>) -- Associates <key> with <value>. <channel> is only necessary if the message isn't sent on the channel itself. The word 'as' is necessary to separate the key from the value. It can be changed to another word via the learnSeparator registry value.
02:29 purpleidea @learn hi as Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
02:29 glusterbot purpleidea: The operation succeeded.
02:30 purpleidea ,,(hi)
02:30 glusterbot Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
02:30 purpleidea ,,(hi) | mibby
02:30 glusterbot Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
02:30 purpleidea ,,(hi) ~ mibby
02:30 glusterbot Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
02:31 purpleidea ~hi | mibby
02:31 glusterbot mibby: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
02:31 purpleidea ,,(thanks) glusterbot!
02:31 glusterbot you're welcome
02:32 mohankumar joined #gluster
03:11 sgowda joined #gluster
03:15 shubhendu joined #gluster
03:25 cyberbootje joined #gluster
03:25 lalatenduM joined #gluster
03:26 gGer joined #gluster
03:26 _br_ joined #gluster
03:36 sgowda left #gluster
03:39 kr1ss joined #gluster
03:47 ajha joined #gluster
03:47 itisravi joined #gluster
03:49 kanagaraj joined #gluster
03:53 fidevo joined #gluster
04:04 mohankumar joined #gluster
04:05 davinder joined #gluster
04:27 shyam joined #gluster
04:38 vpshastry joined #gluster
04:50 theguidry joined #gluster
04:50 ThatGraemeGuy joined #gluster
04:53 aravindavk joined #gluster
05:01 rjoseph joined #gluster
05:03 ppai joined #gluster
05:11 dusmant joined #gluster
05:29 ababu joined #gluster
05:42 lalatenduM joined #gluster
05:49 CheRi joined #gluster
05:55 anands joined #gluster
06:00 shruti joined #gluster
06:00 vshankar joined #gluster
06:11 davinder joined #gluster
06:12 ngoswami joined #gluster
06:17 dusmant joined #gluster
06:26 psharma joined #gluster
06:33 saurabh joined #gluster
06:36 kPb_in joined #gluster
06:44 vpshastry left #gluster
06:45 ThatGraemeGuy joined #gluster
06:47 MrNaviPa_ joined #gluster
06:52 bala joined #gluster
06:55 nshaikh joined #gluster
06:58 abyss__ joined #gluster
07:01 RameshN joined #gluster
07:06 ctria joined #gluster
07:07 eseyman joined #gluster
07:09 ricky-ticky joined #gluster
07:12 dusmant joined #gluster
07:18 vpshastry joined #gluster
07:19 keytab joined #gluster
07:22 davinder joined #gluster
07:23 shubhendu joined #gluster
07:33 andreask joined #gluster
07:42 mohankumar joined #gluster
07:51 harish joined #gluster
07:52 mbukatov joined #gluster
07:53 _pll_ joined #gluster
08:00 DV joined #gluster
08:03 shubhendu joined #gluster
08:20 dusmant joined #gluster
08:34 aliguori joined #gluster
08:35 harish joined #gluster
08:36 ProT-0-TypE joined #gluster
08:44 RichiH-linuxcon long shot, but is there someone at linuxcon and able to lend me a UK to two-prong cable or an intl adapter into which EU two-prong fits?
08:46 tziOm joined #gluster
08:50 mohankumar joined #gluster
08:52 hagarth joined #gluster
08:54 jord-eye joined #gluster
08:54 jord-eye hello
08:54 glusterbot jord-eye: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:54 jord-eye I have a 2-replica glusterfs, and on Friday I rebooted one of the servers (let's call it FS1). Now all the traffic is going to FS2, and to "balance" the traffic I have to unmount and remount the resources. Is there another way to do that?
09:05 dusmant joined #gluster
09:11 theguidry joined #gluster
09:13 ndarshan joined #gluster
09:21 Dga joined #gluster
09:23 glusterbot New news from newglusterbugs: [Bug 892808] [FEAT] Bring subdirectory mount option with native client <http://goo.gl/wpcU0>
09:31 hagarth @newbug
09:31 hagarth @bug
09:31 glusterbot hagarth: (bug <bug_id> [<bug_ids>]) -- Reports the details of the bugs with the listed ids to this channel. Accepts bug aliases as well as numeric ids. Your list can be separated by spaces, commas, and the word "and" if you want.
09:32 hagarth @file a bug
09:33 hagarth "file a bug"
09:33 glusterbot http://goo.gl/UUuCq
09:33 hagarth glusterbot: thanks!
09:33 glusterbot hagarth: I do not know about 'thanks!', but I do know about these similar topics: 'thanks'
09:39 ngoswami joined #gluster
09:40 vpshastry joined #gluster
09:54 hybrid512 joined #gluster
09:56 ndarshan joined #gluster
09:56 psharma joined #gluster
10:15 ndarshan joined #gluster
10:21 aliguori joined #gluster
10:22 theguidry_ joined #gluster
10:22 theguidry joined #gluster
10:23 theguidry joined #gluster
10:26 ppai joined #gluster
10:29 harish_ joined #gluster
10:31 lpabon joined #gluster
10:31 asias joined #gluster
10:31 hagarth joined #gluster
10:33 chirino joined #gluster
10:42 ntt_ joined #gluster
10:43 ntt_ Hi. Can i use backupvolfile option with nfs on gluster?
10:44 ngoswami_ joined #gluster
10:44 verdurin joined #gluster
10:45 ndevos ntt_: nfs clients do not use volume files
10:47 ntt_ ndevos: How can I manage the case where one storage node goes down (in a replication config)? I would like it to automatically remount from the alternative storage node.
10:47 ndevos ntt_: the most common way to make NFS high-available is to give the storage servers a virtual IP and use that for mounting; when a storage server is offline, the VIP should relocate to a different storage server
10:47 ntt_ ndevos: ok
10:48 ndevos ntt_: you can use pacemaker or CTDB for that, those are pretty common
10:48 ntt_ pacemaker is ok
10:53 ntt_ ndevos: suppose that I have a VIP 172.16.1.1 where my client mounts with NFS. If the physical node where the VIP runs goes down, there is a short interval where the VIP is unavailable (the failover time in pacemaker). If my client writes something to the partition during that time, it fails. But when the VIP becomes available again, does my client need a new mount?
10:55 ndevos ntt_: the nfs-client should handle that gracefully, reads/writes will be on-hold for a few seconds
10:55 ntt_ ndevos: ok. Thank You!
10:55 ndevos ntt_: in order to get some distribution of nfs-clients, it is common to have a VIP for each storage server, and DNS returning them all in a random/rotating order
10:55 ntt_ Important thing is that i don't have to remount the nfs partition
10:56 ndevos no, that should not be needed
10:56 ntt_ ndevos: good advice
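
A minimal sketch of the VIP approach ndevos describes above, assuming a pacemaker cluster managed with the pcs tool; the IP, volume name, and mount point are illustrative:

    # floating IP that pacemaker relocates to a surviving storage server on failure
    pcs resource create nfs_vip ocf:heartbeat:IPaddr2 ip=172.16.1.1 cidr_netmask=24 op monitor interval=10s

    # clients mount Gluster's built-in NFS (NFSv3) through the VIP rather than a specific server
    mount -t nfs -o vers=3 172.16.1.1:/myvol /mnt/myvol
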
10:58 mohankumar joined #gluster
11:10 3JTAAIQ6P joined #gluster
11:12 3JTAAIQ6P Hi.. I am seeing self-healing in the logs, but as far as I can tell everything is operating normally.  Here is a log excerpt:  http://pastebin.ca/2469429
11:12 glusterbot Title: pastebin - Mine - post number 2469429 (at pastebin.ca)
11:12 3JTAAIQ6P How can I trace this down?
11:15 psharma joined #gluster
11:21 chirino joined #gluster
11:22 ppai joined #gluster
11:23 CheRi joined #gluster
11:34 chirino joined #gluster
11:35 fracky joined #gluster
11:36 rcheleguini joined #gluster
11:38 rcheleguini joined #gluster
11:46 rcheleguini joined #gluster
11:47 hagarth joined #gluster
11:50 andreask joined #gluster
11:50 edward1 joined #gluster
11:54 itisravi joined #gluster
12:01 ctria joined #gluster
12:21 dusmant joined #gluster
12:35 Amanda joined #gluster
12:48 davidbierce joined #gluster
12:54 shubhendu joined #gluster
12:59 theguidry joined #gluster
13:04 DV_ joined #gluster
13:12 ctria joined #gluster
13:23 hagarth joined #gluster
13:25 hagarth joined #gluster
13:32 bennyturns joined #gluster
13:33 jruggiero joined #gluster
13:43 shylesh joined #gluster
13:48 asias joined #gluster
13:49 Dga joined #gluster
13:58 jurrien_ joined #gluster
14:02 Perihelion joined #gluster
14:02 jruggiero left #gluster
14:03 bugs_ joined #gluster
14:03 kaptk2 joined #gluster
14:09 askb joined #gluster
14:13 kanagaraj joined #gluster
14:22 wushudoin joined #gluster
14:22 hagarth1 joined #gluster
14:22 failshell joined #gluster
14:29 tryggvil joined #gluster
14:34 Amanda joined #gluster
14:41 [o__o] left #gluster
14:43 [o__o] joined #gluster
14:45 [o__o] left #gluster
14:47 [o__o] joined #gluster
14:50 [o__o] left #gluster
14:51 jag3773 joined #gluster
14:52 [o__o] joined #gluster
14:53 bala joined #gluster
14:54 [o__o] left #gluster
14:56 [o__o] joined #gluster
14:56 davidbierce joined #gluster
14:58 [o__o] left #gluster
15:00 [o__o] joined #gluster
15:03 fuzzy_id joined #gluster
15:03 fuzzy_id i'd like to have three nodes in a replica=2 cluster
15:05 fuzzy_id is it possible to specify a brick a second time when adding the third brick?
15:13 lpabon joined #gluster
15:15 Skaag joined #gluster
15:17 dusmant joined #gluster
15:18 vpshastry1 joined #gluster
15:18 [o__o] joined #gluster
15:21 LoudNoises joined #gluster
15:23 daMaestro joined #gluster
15:23 ninkotech_ joined #gluster
15:23 ninkotech joined #gluster
15:25 vpshastry1 left #gluster
15:25 glusterbot New news from newglusterbugs: [Bug 1018178] Glusterfs ports conflict with qemu live migration <http://goo.gl/oDNTL3>
15:26 vpshastry joined #gluster
15:27 vpshastry left #gluster
15:27 [o__o] left #gluster
15:27 harish joined #gluster
15:29 [o__o] joined #gluster
15:33 bennyturns joined #gluster
15:44 tryggvil joined #gluster
15:45 dmueller joined #gluster
15:47 Maxence joined #gluster
15:50 ThatGraemeGuy joined #gluster
16:05 rjoseph joined #gluster
16:19 vpshastry joined #gluster
16:29 xymox joined #gluster
16:29 cfeller joined #gluster
16:29 zaitcev joined #gluster
16:30 Mo__ joined #gluster
16:35 plarsen joined #gluster
16:38 LoudNoises joined #gluster
16:43 vpshastry left #gluster
16:50 bfrank joined #gluster
16:55 chouchins joined #gluster
17:02 johnbot11 joined #gluster
17:16 asias joined #gluster
17:18 bala joined #gluster
17:20 dmueller joined #gluster
17:24 QbY joined #gluster
17:25 QbY How long on average would it take to set up a single folder (directory) and replicate between three machines?
17:25 QbY brand new installation
17:34 purpleidea QbY: under 10 minutes using ,,(puppet)-gluster
17:34 glusterbot QbY: (#1) https://github.com/purpleidea/puppet-gluster, or (#2) semiosis' unmaintained puppet module: https://github.com/semiosis/puppet-gluster
17:34 QbY purpleidea: that's what i thought....  i got a slacker on the team trying to say its going to be a few days
17:35 purpleidea QbY: well getting a test setup going is one thing, working out all the little details and getting comfortable with gluster before production is another.
17:36 purpleidea and testing your app on it is yet another...
17:36 QbY understood.  but this guy also took three days to write a bash script to delete old log files.
17:36 QbY and sold it upstairs as a major accomplishment
17:36 purpleidea QbY: i can't help you with HR issues, but I can help you with puppet-gluster.
17:37 purpleidea also, it sounds like i want the source of this bash script :P
17:37 QbY hehehe
17:38 JoeJulian QbY: I've demo'ed setting up a 4 server volume with fresh centos installs in under 4 minutes by hand.
17:38 QbY Hehe.. JoeJulian.. I feel like getting Gluster up during my lunch break.
17:38 purpleidea JoeJulian: by hand? you mean, manually? *scoff* ,,(puppet)   just kidding Joe's a pro
17:38 glusterbot JoeJulian: (#1) https://github.com/purpleidea/puppet-gluster, or (#2) semiosis' unmaintained puppet module: https://github.com/semiosis/puppet-gluster
17:39 JoeJulian Yeah, but then all the magic is hidden. Doesn't make a very interesting demo to say, ok... here I'm spinning up new servers. ... ... There, done.
17:40 purpleidea Good point! It's really beneficial to do it manually if you want to learn how gluster works, and with puppet if you just want it to work.
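
For reference, the manual route JoeJulian mentions is roughly the following; hostnames, brick paths, and the volume name are illustrative:

    # from server1, with glusterd already running on both servers
    gluster peer probe server2

    # two-brick replica 2 volume, one brick per server
    gluster volume create testvol replica 2 server1:/bricks/testvol server2:/bricks/testvol
    gluster volume start testvol

    # native (FUSE) client mount from any machine that can reach the servers
    mount -t glusterfs server1:/testvol /mnt/testvol
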
17:40 JoeJulian I think everyone should learn how to write vol files... ;)
17:41 purpleidea JoeJulian: hey, as long as you're doing support for it :) I thought that early gluster which did it this way was awesome. I didn't know it was even supported these days. Is it?
17:41 JoeJulian It can be done, but "supported" is rather subjective.
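
For the curious, a hand-written client vol file is a stack of translators; a heavily trimmed sketch (hostnames and brick paths illustrative, and a real generated vol file carries several more translators):

    volume myvol-client-0
        type protocol/client
        option remote-host server1
        option remote-subvolume /bricks/myvol
        option transport-type tcp
    end-volume

    volume myvol-client-1
        type protocol/client
        option remote-host server2
        option remote-subvolume /bricks/myvol
        option transport-type tcp
    end-volume

    volume myvol-replicate-0
        type cluster/replicate
        subvolumes myvol-client-0 myvol-client-1
    end-volume
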
17:41 davinder joined #gluster
17:41 purpleidea ps: my mom doesn't want to learn how to write .vol files.
17:41 purpleidea idk why though
17:42 * JoeJulian refrains from "your mom" jokes...
17:42 purpleidea oh :(
17:42 purpleidea smoke if you've got 'em
17:43 purpleidea there's got to be a "your mom is so big, it makes gluster look small" sort of joke somewhere...
17:43 _pol joined #gluster
17:43 JoeJulian hehe
17:44 fuzzy_id i'd like to create a distributed replicated volume, with a replication_count of 2 over three nodes
17:45 purpleidea fuzzy_id: ok
17:45 purpleidea sounds like you'll have an asymmetrical volume which isn't a good idea
17:45 fuzzy_id do i have to create two bricks on each node, or can i give a brick a second time?
17:45 purpleidea or maybe unbalanced is the right word?
17:45 fuzzy_id ?
17:46 fuzzy_id why?
17:46 purpleidea fuzzy_id: keep things X mod count == 0
17:46 purpleidea so that each host has the same amount of "stuff" on it
17:46 fuzzy_id hmm
17:47 purpleidea if you have different things on each host, then your HA is harder to reason about too...
17:47 purpleidea for example, in your scenario how would you do it?
17:47 JoeJulian You can do it by making 2 partitions on each of the three servers...
17:47 JoeJulian That gives you 6 bricks so you can replica 2...
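
What JoeJulian describes maps onto the order bricks are listed: consecutive bricks form the replica pairs. A sketch of such a chained layout for three servers with two bricks each (names illustrative):

    # pairs end up as (server1/b1,server2/b1) (server2/b2,server3/b1) (server3/b2,server1/b2)
    gluster volume create myvol replica 2 \
        server1:/bricks/b1 server2:/bricks/b1 \
        server2:/bricks/b2 server3:/bricks/b1 \
        server3:/bricks/b2 server1:/bricks/b2
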
17:47 purpleidea JoeJulian: okay, he's yours
17:48 fuzzy_id argh
17:48 * purpleidea tags joe in
17:48 fuzzy_id that simply doesn't sound right
17:48 fuzzy_id what if i want to add an additional node?
17:48 JoeJulian http://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
17:48 glusterbot <http://goo.gl/BM1qD> (at joejulian.name)
17:48 fuzzy_id i'll have to add another brick on all the previous nodes, won't I?
17:48 JoeJulian replication is not arbitrary.
17:49 purpleidea JoeJulian: i thought we're trying to get away from this type of setup :P
17:49 davidbierce joined #gluster
17:49 fuzzy_id purpleidea: but, why? ;)
17:50 purpleidea fuzzy_id: why what?
17:50 JoeJulian purpleidea: I believe in tools. What's right for one use case may not be right for another. Although I spout "rules", I do not discourage breaking them. :D
17:50 purpleidea JoeJulian: there's nothing wrong with the chained setup, it's just going to be a pita to manage
17:50 fuzzy_id purpleidea: what's so bad in a num_nodes=3, replicate_count=2 setup?
17:51 purpleidea fuzzy_id: depending on your # of bricks, it might be a pita to manage
17:51 JoeJulian fuzzy_id: It's just more complicated.
17:51 purpleidea yeah
17:51 JoeJulian See that link I posted.
17:52 purpleidea fuzzy_id: joe's link has a nice drawing you should look at.
17:52 purpleidea that's a chained setup. you don't want that if you're a beginner
17:52 JoeJulian The logic gets messy. When you start to manage changes after that, it takes some really good planning and your guy that takes a week to write a bash script (instead of using logrotate) just isn't going to get it.
17:53 purpleidea Joe's allowed because he's not a beginner. Joe is at level 4+ afaict
17:53 fuzzy_id well that actually seems quite straight-forward :)
17:53 purpleidea fuzzy_id: it's not
17:54 fuzzy_id ok
17:54 purpleidea or, build it yourself and get comfortable managing it
17:54 JoeJulian Depends on the individual.
17:55 JoeJulian purpleidea's probably complicating the thought process because he maintains the puppet module. Automating that would be ugly.
17:55 purpleidea JoeJulian: it's true.
17:55 purpleidea but i'm working on it. but it's hard. for the moment, i'm not supporting automatic changes when the volume is chained
17:56 purpleidea but i'd like to, i just don't have the algorithm written yet.
17:56 JoeJulian native ruby...
17:56 purpleidea JoeJulian: it's not a native versus non-native question. The unchained stuff i wrote is in ruby.
17:56 JoeJulian ooh
17:57 JoeJulian I haven't looked in a while. I'm busy trying to get Microfocus Cobol to run under an EL6 kernel... :/
17:58 purpleidea yikes!
17:58 JoeJulian I'm a level 1 kernel debugger...
17:58 purpleidea how's new $JOB?
18:00 Skaag joined #gluster
18:01 davidbierce joined #gluster
18:01 fuzzy_id hmm, ok just read that blog entry carefully; good job JoeJulian
18:01 JoeJulian Thanks.
18:05 fuzzy_id i'm just wondering if there is going to be a big performance decrease when adding further nodes and increasing replicate_count every time
18:05 fuzzy_id that's why i'd actually prefer the chained solution
18:08 anands joined #gluster
18:09 SpeeR joined #gluster
18:09 Debolaz joined #gluster
18:10 purpleidea fuzzy_id: try both and see how easy it is to manage or not.
18:12 fuzzy_id purpleidea: good point
18:12 fuzzy_id just not 'nuff time for testing around here :(
18:12 fuzzy_id but I just commented it to my boss
18:13 fuzzy_id we'll stick with the easy replicate_count=3-solution :)
18:13 purpleidea fuzzy_id: or count=3 and use two servers, and then keep adding them in pairs.
18:13 purpleidea if you use count=3 you should add in 3's.
18:14 JoeJulian http://joejulian.name/blog/glusterfs-replication-dos-and-donts/
18:14 glusterbot <http://goo.gl/B8xEB> (at joejulian.name)
18:14 fuzzy_id … or increase the replicate_count?
18:14 JoeJulian Why would you increase the replica count?
18:15 JoeJulian If 2 satisfies your fault tolerance, then when you need more space, add servers in pairs.
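
Both options map onto add-brick; a sketch with illustrative names (adding a pair keeps replica 2, while passing a new replica count converts the volume):

    # grow a replica 2 volume by another pair of bricks, then spread existing data onto them
    gluster volume add-brick myvol server3:/bricks/b1 server4:/bricks/b1
    gluster volume rebalance myvol start

    # or raise the replica count to 3 by adding one new brick per existing replica pair
    gluster volume add-brick myvol replica 3 server3:/bricks/b1
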
18:15 fuzzy_id to have all the data on all nodes available?
18:15 fuzzy_id hmm
18:15 JoeJulian Yeah, read the do's and don'ts.
18:15 fuzzy_id yeah, i already read these a few days ago :)
18:16 purpleidea fuzzy_id: what Joe is saying is that count==2 is probably already twice as good as your existing RAID-ed fileserver. So if you really need count=3, okay, but you probably don't. It depends on your problem.
18:17 fuzzy_id purpleidea: no raid!
18:17 fuzzy_id ah ok, sorry
18:17 fuzzy_id got that wrong
18:19 fuzzy_id ok, so i guess i'll go with 3, and when adding another node i'll chain around
18:19 purpleidea no chain
18:21 fuzzy_id i thought chaining would be necessary then. at least if i want fault tolerance on the machine level
18:25 lanning joined #gluster
18:27 Skaag joined #gluster
18:35 glusterbot New news from resolvedglusterbugs: [Bug 987555] Glusterfs ports conflict with qemu live migration <http://goo.gl/SbL8x> || [Bug 1018178] Glusterfs ports conflict with qemu live migration <http://goo.gl/oDNTL3>
18:38 lpabon joined #gluster
18:40 ctria joined #gluster
18:55 tryggvil joined #gluster
19:02 harish joined #gluster
19:15 rubbs joined #gluster
19:47 SpeeR joined #gluster
19:56 glusterbot New news from newglusterbugs: [Bug 1021686] refactor AFR module <http://goo.gl/Wn5nCD>
19:58 B21956 joined #gluster
19:59 hagarth joined #gluster
20:08 nasso joined #gluster
20:14 davidbierce joined #gluster
20:24 cfeller when using 3.4.1, why do I see this: "[2013-10-21 20:20:15.157245] I [client-handshake.c:1658:select_server_supported_programs] 0-gv0-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)" in the logs... specifically that it is using version 3.3?
20:25 hagarth cfeller: that is a rpc program version. that has no co-relation with the glusterfs version.
20:27 cfeller OK. thanks.
20:28 cfeller (I had upgraded from 3.3, so I was concerned that something didn't upgrade properly.)
20:29 hagarth cfeller: you can see a "Using glusterfs version " or equivalent in log message to figure out the software version in use.
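
A quick way to check what hagarth refers to, assuming the default log location:

    # the running daemon logs its version at startup
    grep -i "glusterfs.*version" /var/log/glusterfs/*.log
    # or ask the installed binary directly
    glusterfs --version
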
20:35 diegows joined #gluster
20:50 F^nor joined #gluster
21:24 ricky-ticky joined #gluster
21:44 joshcarter joined #gluster
21:53 DV__ joined #gluster
21:58 johnbot1_ joined #gluster
22:08 jag3773 joined #gluster
22:13 fidevo joined #gluster
22:15 nasso joined #gluster
22:23 DV_ joined #gluster
22:30 johnbot11 joined #gluster
22:50 calum_ joined #gluster
22:52 johnbot11 joined #gluster
22:52 calum_ First, I'm really new to gluster. In fact, I have not even installed it yet.... I am interested in running a gluster system for my client. What should I do to get started... Also, how can I back up???
23:01 RicardoSSP joined #gluster
23:01 RicardoSSP joined #gluster
23:01 JoeJulian calum_: The second part first... depends on what you want to back up, how you want it backed up, etc... I back up through client mounts - meaning I'm creating a full backup of everything on the volume. Some use geo-replicate to back up to a remote volume or some other remote storage.
23:02 JoeJulian Some have multiple petabytes and just can't feasibly back it up so they rely on redundancy.
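
A minimal sketch of the client-mount style of backup JoeJulian describes, with illustrative paths and names; geo-replication is configured separately per volume and isn't shown here:

    # mount the volume like any other client, then copy it off with your usual backup tool
    mount -t glusterfs server1:/myvol /mnt/myvol
    rsync -a /mnt/myvol/ /backup/myvol/
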
23:03 JoeJulian As for how to get started, I would use http://www.gluster.org/community/documentation/index.php/QuickStart to quickly get something to play with.
23:03 glusterbot <http://goo.gl/OEzZn> (at www.gluster.org)
23:04 DV__ joined #gluster
23:06 calum_ JoeJulian: Thanks. Out of interest how do you use gluster??
23:07 JoeJulian How *don't* I use it... :D
23:07 JoeJulian I primarily use it for fault tolerance. I have 3-way replicated volumes to provide 6 nines availability.
23:09 Xunil really, 31 seconds of downtime per year?  impressive.
23:09 JoeJulian I host business files, vm images, web sites, repo mirrors, home directories, samba shares, mysql data....
23:10 m0zes_ joined #gluster
23:10 m0zes_ /quit
23:10 JoeJulian Xunil: Well, that's how the math calculates out. It's probably accurate. I haven't had a loss of access to the filesystem in well over a year.
23:10 JoeJulian No, m0zes, don't go!!!
23:12 nonsenso joined #gluster
23:13 * m0zes just screwed up screen session is all.
23:31 calum_ joined #gluster
23:48 nueces joined #gluster
