
IRC log for #gluster, 2014-04-27


All times shown according to UTC.

Time Nick Message
00:00 elico So just to make sure I understand right:
00:00 elico The glusterfs client does the actual striping and replication?
00:02 hagarth joined #gluster
00:09 elico I also want to make sure I understood something right: a Striped Replicated Volume uses both the replica and the stripe option and automatically does them both??
00:09 elico Can I decide which servers it will do the replication and striping on?
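Both counts are given at volume-create time and gluster composes them itself; a minimal sketch, assuming hypothetical hosts server1..server4 each exporting a /exp brick:

    gluster volume create striped-repl stripe 2 replica 2 transport tcp \
        server1:/exp server2:/exp server3:/exp server4:/exp
    # adjacent bricks in the list should pair up as replicas (server1+server2,
    # server3+server4), with the stripe spanning the replicated pairs, so the
    # order bricks are listed is how you pick which servers replicate together.

(And yes, with a fuse mount the client side performs the striping and replication.)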
00:37 vpshastry joined #gluster
00:57 diegows joined #gluster
01:01 DV joined #gluster
01:08 ira joined #gluster
01:57 DV joined #gluster
01:57 Ark joined #gluster
02:00 XpineX_ joined #gluster
02:50 bala joined #gluster
04:38 vpshastry joined #gluster
04:41 bala joined #gluster
05:10 sadbox joined #gluster
05:15 badone__ joined #gluster
05:15 sadbox I'm trying to evaluate gluster for an upcoming project and I've got a local cluster running. I got to the point where I was able to mount and access the volume remotely. I wanted to see what would happen if I bounced a server, so I hard-rebooted node#3 and since it's come back I don't see the brick when using "gluster volume status" from node#1 or 2
05:15 sadbox Is there some command I need to run to re-sync it or something?
05:17 purpleidea sadbox: what version?
05:17 purpleidea os/gluster/etc
05:17 sadbox 3.5
05:17 sadbox er
05:17 sadbox centos 6.5 / gluster 3.5
05:18 purpleidea gluster volume info | fpaste
05:19 sadbox http://ur1.ca/h6yhm
05:19 glusterbot Title: #97232 Fedora Project Pastebin (at ur1.ca)
05:21 purpleidea sadbox: so all your bricks look like they exist...
05:23 sadbox purpleidea: So, should it be normal to get this ( http://ur1.ca/h6yhx ) from node1 and this ( http://ur1.ca/h6yi3 ) from node3?
05:23 glusterbot Title: #97233 Fedora Project Pastebin (at ur1.ca)
05:24 purpleidea sadbox: run iptables -L on all hosts... is there a firewall running?
05:24 haomaiwang joined #gluster
05:25 sadbox oh god, I'm retarded
05:25 sadbox yes
05:25 sadbox that was it
05:25 sadbox ty good sir
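For anyone landing here with the same symptom: a stock CentOS 6 firewall blocks gluster's ports, so a rebooted brick host comes back unreachable. A rough sketch of rules to open, per the 3.4+/3.5 port scheme (bricks take one port each starting at 49152, so widen that range to match your brick count):

    iptables -I INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management
    iptables -I INPUT -p tcp --dport 49152:49160 -j ACCEPT   # brick ports, one per brick
    service iptables save                                    # persist across reboots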
05:26 purpleidea sadbox: no worries. plug btw: @puppet
05:27 purpleidea sadbox: err ,,(puppet)
05:27 glusterbot sadbox: https://github.com/purpleidea/puppet-gluster
05:27 purpleidea is an easy way to set up and try glusterfs... and to help avoid silly mistakes
05:28 purpleidea sadbox: one example: people often get performance wrong because you have to align disks, and things like that. true for all storage systems. if puppet-gluster does it for you, no mistakes! (or it's a bug)
05:47 samppah purpleidea: oh, puppet-gluster even aligns disks correctly? that's cool!
05:51 purpleidea samppah: yup! https://github.com/purpleidea/puppet-gluster/blob/master/manifests/brick.pp # i used ben england's work to build in the math for all his performance tweaks. that's one of them of course. so in theory you get a tuned gluster out of the box :)
05:51 glusterbot Title: puppet-gluster/manifests/brick.pp at master · purpleidea/puppet-gluster · GitHub (at github.com)
05:55 samppah purpleidea: that's neat :)
05:55 purpleidea samppah: thanks! (i think so, but i'm biased :P)
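The alignment math in brick.pp boils down to telling XFS the RAID geometry at mkfs time; a hand-rolled sketch of the same idea, assuming a hypothetical 12-disk RAID 6 (10 data disks) with a 256 KiB stripe unit on /dev/sdb:

    mkfs.xfs -i size=512 -d su=256k,sw=10 /dev/sdb    # su = stripe unit, sw = number of data disks
    mount -o noatime,inode64 /dev/sdb /export/brick1  # inode64 spreads inodes across the filesystem

The larger inode size (512) leaves room for gluster's extended attributes inside the inode itself.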
06:05 TvL2386 joined #gluster
06:51 ekuric joined #gluster
06:53 vpshastry left #gluster
07:09 ricky-ti1 joined #gluster
07:11 giannello joined #gluster
07:14 vpshastry joined #gluster
07:35 sadbox When deploying gluster in a "real" cluster, would you normally put a brick on its own dedicated disk, or use a raid card with a pile of disks on it and then stick the brick in there?
07:51 purpleidea sadbox: it depends on a lot of things, but typically you'd put 1 brick on each virtual drive that you make with a RAID. you typically split a large pool of disks into a few RAID 6's instead of one huge RAID device. depends on drive count and lots of other things.
07:54 sadbox purpleidea: Sorry for all the questions! If I did something like set replica to 2 for a volume
07:55 sadbox and had multiple bricks on each host, would it properly distribute it amongst hosts still?
08:00 Humble joined #gluster
08:02 purpleidea sadbox: did you try it?
08:03 sadbox I haven't =P
08:03 sadbox But I suppose I could
08:03 purpleidea sadbox: you have a common problem, but i'm going to help!
08:03 purpleidea sadbox: you come with a lot of questions, without having tried anything :) hehe try it out, and 80% of your questions will be answered. then come back with the hard questions!
08:06 purpleidea sadbox: btw the answer is it would warn you, but you can force it with 'force' (not recommended of course)
08:06 sadbox Snazzy
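The detail behind that warning: bricks pair up in the order they are listed, so with two bricks per host you interleave hosts to keep each replica pair on separate machines; a sketch with hypothetical hosts and paths:

    gluster volume create myvol replica 2 \
        host1:/bricks/b1 host2:/bricks/b1 \
        host1:/bricks/b2 host2:/bricks/b2
    # listing host1's two bricks consecutively instead would put a replica
    # pair on one machine; gluster warns, and 'force' overrides the warning.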
08:11 badone__ joined #gluster
08:12 edward2 joined #gluster
08:48 glusterbot New news from newglusterbugs: [Bug 1091677] Issues reported by Cppcheck static analysis tool <https://bugzilla.redhat.com/show_bug.cgi?id=1091677>
09:12 rahulcs joined #gluster
09:31 ctria joined #gluster
09:32 Pavid7 joined #gluster
09:35 hagarth joined #gluster
09:49 rahulcs joined #gluster
10:10 LessSeen_ joined #gluster
10:35 Ylann joined #gluster
10:41 edward2 joined #gluster
10:55 seddrone joined #gluster
11:04 92AAAU2PV joined #gluster
11:17 suliba joined #gluster
11:21 seddrone joined #gluster
11:49 sroy joined #gluster
12:02 hchiramm_ joined #gluster
12:26 qdk_ joined #gluster
12:28 sroy joined #gluster
12:47 Ark joined #gluster
13:14 DV joined #gluster
13:51 edward3 joined #gluster
13:58 mohan_ joined #gluster
14:06 ricky-ticky joined #gluster
14:17 davinder joined #gluster
14:22 hagarth joined #gluster
14:39 nixpanic joined #gluster
14:39 nixpanic joined #gluster
14:52 bala1 joined #gluster
15:31 bala joined #gluster
15:49 ira joined #gluster
16:11 Intensity joined #gluster
16:57 scuttle_ joined #gluster
17:02 Mneumonik joined #gluster
17:03 Mneumonik hey guys, I noticed NFS Server on localhost Online = N in my volume status, nfs is running though. Any idea?
17:08 diegows joined #gluster
17:28 vpshastry joined #gluster
17:30 vpshastry left #gluster
17:34 qdk_ joined #gluster
17:39 edward1 joined #gluster
17:57 Joe630 gluster runs its own nfs
18:09 davinder joined #gluster
18:13 chirino joined #gluster
18:22 Ylann joined #gluster
19:15 rypervenche joined #gluster
19:33 Ark joined #gluster
19:59 Mneumonik Yeah I realized the stock nfs was running, and now it says nfs is good to go, but I get mount.nfs: requested NFS version or transport protocol is not supported even if i specify version 3 and tcp... dafuq
19:59 glusterbot Mneumonik: make sure your volume is started. If you changed nfs.disable, restarting your volume is known to work.
20:01 Mneumonik i restarted the volume, now getting mount.nfs: mount system call failed
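The usual recovery sequence when the kernel NFS server has grabbed the ports first (CentOS 6 service names, volume name hypothetical):

    service nfs stop && chkconfig nfs off    # kernel NFS conflicts with gluster's built-in server
    service rpcbind restart                  # gluster's NFS registers itself with rpcbind
    gluster volume stop myvol && gluster volume start myvol
    mount -t nfs -o vers=3,tcp server1:/myvol /mnt   # gluster NFS speaks v3 over tcp only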
20:03 Ylannj joined #gluster
20:10 Ylannj can someone help me find a how-to on configuring the Gluster Virtual Storage Appliance for VMware
20:18 systemonkey joined #gluster
20:21 systemonkey joined #gluster
20:26 jag3773 joined #gluster
20:31 badone joined #gluster
20:36 ghenry joined #gluster
20:37 ghenry Hi all, on a 2-node replicated cluster setup, do you *need* to mount the brick partition via glusterfs fuse or NFS, or can the two servers just write to the mounted brick folders?
20:38 DV joined #gluster
20:40 ghenry all the examples just show clients mounting the volume remotely, but the servers the bricks live on don't need to mount the volume themselves, right?
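The standard answer: never write straight into the brick directories, because gluster tracks replication state in extended attributes that are only maintained through a client mount. A server hosting bricks that also needs to read or write the data mounts the volume like any other client (volume name and mountpoint here are placeholders):

    mount -t glusterfs localhost:/myvol /mnt/myvol
    # or in /etc/fstab:
    # localhost:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0 0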
21:39 qdk_ joined #gluster
21:39 txmoose joined #gluster
21:39 txmoose howdy all
21:41 txmoose anyone around today?
21:47 sputnik1_ joined #gluster
21:49 JoeJulian Usually is...
21:51 Ark joined #gluster
21:59 DV joined #gluster
22:07 txmoose JoeJulian: Got a few minutes to talk?  I'm new to Gluster and having some trouble conceptualizing some stuff
22:07 txmoose PS: sorry for long delays... cooking and server-admin'ing at the same time :P
22:08 fidevo joined #gluster
22:12 sputnik1_ joined #gluster
22:15 MrAbaddon joined #gluster
22:19 badone_ joined #gluster
22:20 txmoose win 5
22:20 txmoose oops
22:20 DV joined #gluster
22:25 cyberbootje joined #gluster
22:26 purpleidea txmoose: just ask your question, don't ask to ask. whoever can answer will
22:26 purpleidea @justask
22:39 sputnik1_ joined #gluster
22:53 DV joined #gluster
23:01 Ark joined #gluster
23:06 DV joined #gluster
23:20 DV joined #gluster
23:27 hagarth joined #gluster
23:36 sroy_ joined #gluster
23:40 velladecin joined #gluster
23:48 elico Hey there. What are the options for accessing glusterfs via a REST API, like Swift, but through only a single node?
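gluster-swift (formerly UFO, Unified File and Object) is the usual route here: it fronts a volume with the OpenStack Swift REST API via a proxy that can run on a single node. A rough sketch of the resulting requests, assuming a hypothetical proxy on node1:8080, with auth omitted and placeholder account/container/object names:

    curl -i -X PUT http://node1:8080/v1/AUTH_myvol/mycontainer                        # create a container
    curl -i -X PUT -T file.txt http://node1:8080/v1/AUTH_myvol/mycontainer/file.txt   # upload an object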
