IRC log for #gluster, 2014-05-16

All times shown according to UTC.

Time Nick Message
00:07 sjm left #gluster
00:08 nueces joined #gluster
00:09 sputnik13 joined #gluster
00:10 jbd1 joined #gluster
00:16 gdubreui joined #gluster
00:30 jag3773 joined #gluster
00:38 yinyin joined #gluster
00:45 chirino joined #gluster
01:04 primechuck joined #gluster
01:06 Ark joined #gluster
01:14 sjm joined #gluster
01:14 nueces joined #gluster
01:18 mjsmith2 joined #gluster
01:35 harish joined #gluster
01:38 yinyin_ joined #gluster
01:41 lyang0 joined #gluster
01:53 aviksil joined #gluster
02:06 Ark joined #gluster
02:10 theron joined #gluster
02:31 mattapperson joined #gluster
02:39 yinyin_ joined #gluster
02:40 an joined #gluster
02:46 sputnik13 joined #gluster
02:49 chirino joined #gluster
03:00 ceiphas_ joined #gluster
03:06 velladecin joined #gluster
03:16 chirino joined #gluster
03:26 rjoseph joined #gluster
03:27 bharata-rao joined #gluster
03:27 an joined #gluster
03:30 sputnik13 joined #gluster
03:35 kanagaraj joined #gluster
03:36 kshlm joined #gluster
03:36 RameshN joined #gluster
03:36 RameshN_ joined #gluster
03:38 shubhendu joined #gluster
03:38 ppai joined #gluster
03:58 harish joined #gluster
03:58 itisravi joined #gluster
04:01 bala joined #gluster
04:02 RameshN_ joined #gluster
04:03 dusmant joined #gluster
04:11 aravindavk joined #gluster
04:13 glusterbot New news from newglusterbugs: [Bug 1091777] Puppet module gluster (purpleidea/puppet-gluster) to support RHEL7/Fedora20 <https://bugzilla.redhat.com/show_bug.cgi?id=1091777>
04:16 kumar joined #gluster
04:23 ndarshan joined #gluster
04:24 prasanthp joined #gluster
04:26 haomaiwang joined #gluster
04:28 kdhananjay joined #gluster
04:35 silky joined #gluster
04:37 hagarth joined #gluster
04:41 decimoe joined #gluster
04:42 silky joined #gluster
04:46 meghanam joined #gluster
04:47 silky joined #gluster
04:47 decimoe joined #gluster
04:49 chirino joined #gluster
04:55 sahina joined #gluster
04:56 vimal joined #gluster
05:00 ravindran1 joined #gluster
05:03 davinder joined #gluster
05:22 nshaikh joined #gluster
05:27 ppai joined #gluster
05:28 bala joined #gluster
05:29 raghu` joined #gluster
05:50 chirino joined #gluster
06:00 saurabh joined #gluster
06:01 rejy joined #gluster
06:05 aviksil joined #gluster
06:08 DV joined #gluster
06:14 RameshN joined #gluster
06:14 RameshN_ joined #gluster
06:17 nishanth joined #gluster
06:23 psharma joined #gluster
06:24 rjoseph joined #gluster
06:25 aviksil joined #gluster
06:29 rahulcs joined #gluster
06:31 zerick joined #gluster
06:36 kanagaraj joined #gluster
06:37 ricky-ti1 joined #gluster
06:38 sputnik13 joined #gluster
06:47 edward2 joined #gluster
06:49 rgustafs joined #gluster
06:55 rjoseph joined #gluster
06:55 rahulcs joined #gluster
06:59 fsimonce joined #gluster
07:04 keytab joined #gluster
07:06 ctria joined #gluster
07:15 eseyman joined #gluster
07:16 ngoswami joined #gluster
07:20 aviksil joined #gluster
07:26 cppking joined #gluster
07:26 crashmag_ joined #gluster
07:27 cppking hi guys, I got a big question: if I use striping in GlusterFS and all my storage nodes are powered off, will all my data be lost?
07:27 samppah any thoughts about the future of glusterfs regarding Red Hat Storage and using it to store VM images?
07:27 samppah there still seem to be problems with libgfapi, and Ceph probably has better support for block devices?
07:27 samppah no offense, just thinking out loud :)
07:27 samppah I'm currently using GlusterFS with RHEV and I have to make some kind of decision: do I keep using it and invest more money into it, or should I look into other options (Ceph, mostly)?
07:32 vpshastry joined #gluster
07:33 samppah cppking: no data should be lost but you can't access it until they are all back again
07:34 liquidat joined #gluster
07:35 cppking samppah: You mean the data is still stored on my nodes, but can I access it like before?
07:36 cppking I know that all my data is still stored on my nodes, but if I power them on, can I access it like before?
07:36 samppah cppking: if you are using striping then it's not possible, since one file is spread across several nodes
07:37 cppking If I don't use striping but replication, can I still access them?
07:37 samppah yes
07:38 cppking but replica doesn't divide files
07:38 samppah as long as there is one server that holds the data
07:38 samppah cppking: how many servers do you have?
07:38 cppking 8
07:39 cppking can I use stripe and replica at the same time?
07:39 samppah that should be fine
07:39 samppah also a distributed replicated setup is good
07:40 samppah i'm not very familiar with striping
07:40 glusterbot samppah: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
07:40 samppah glusterbot: my humble apologies
07:40 cppking If I use replica and stripe at the same time, can I still access my data after nodes power off?
07:41 samppah cppking: as long as there is one node up with replica data
07:42 cppking is DHT replica + stripe?
07:44 samppah DHT is distributed
07:44 samppah distributed saves one file to one brick
07:44 haomaiwa_ joined #gluster
07:44 davinder joined #gluster
07:45 cppking thx a lot
07:45 samppah no problem.. i hope it helps :)
07:46 cppking how does glusterfs decide to divide a file?
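
[Aside: only the stripe translator divides files; plain replicate and distribute always keep each file whole on its brick(s). A striped volume splits files into fixed-size blocks (cluster.stripe-block-size, 128KB by default) handed round-robin to the stripe set, which is why every stripe member must be up to read a file. A minimal sketch of the stripe-plus-replica setup discussed above, with hypothetical hostnames server1..server8 and brick path /export/brick1:

    # stripe 2 + replica 2 over 8 bricks = distributed striped-replicated:
    # each file is chunked across a 2-brick stripe set and every chunk is mirrored
    gluster volume create vmvol stripe 2 replica 2 server{1..8}:/export/brick1
    gluster volume start vmvol
    # the chunk size is tunable per volume
    gluster volume set vmvol cluster.stripe-block-size 256KB

]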
07:48 aviksil joined #gluster
07:52 chirino joined #gluster
07:56 haomaiwang joined #gluster
07:59 ktosiek joined #gluster
08:20 hybrid512 joined #gluster
08:20 Ark joined #gluster
08:23 vpshastry1 joined #gluster
08:28 VerboEse Hi. I have a question regarding HA: I found it mentioned that it should be possible to use a server list for volfiles (http://www.gluster.org/category/rrdns/). Did these patches find their way into a release? And if so, which version do I need (at minimum) to use this?
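
[Aside: the usual answer to this question is the backupvolfile-server mount option of mount.glusterfs (exact availability depends on version; newer releases also accept a plural backup-volfile-servers list), or a round-robin DNS name as in the linked post. A sketch with hypothetical hosts gl1/gl2:

    # if gl1 is unreachable at mount time, the volfile is fetched from gl2 instead;
    # after mounting, failover between bricks is handled by the FUSE client itself
    mount -t glusterfs -o backupvolfile-server=gl2 gl1:/myvol /mnt/myvol

]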
08:30 andreask joined #gluster
08:32 LiRul joined #gluster
08:32 LiRul hi
08:32 glusterbot LiRul: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:34 LiRul i made a test cluster of 3 nodes, each node with 3 bricks, so 9 bricks in total. each brick is a 4 TB WD RE disk. i've got 193 GB of data, lots of (millions of) small files (50-100 KB). when I test a rebalance i get a really, really slow speed
08:34 LiRul only 2 MB/s
08:34 LiRul and 15 objects/sec
08:35 LiRul i'm using the latest 3.5.0 gluster with centos 6.5 x86_64
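
[Aside: for anyone chasing the same symptom, rebalance progress and its log can be watched per node; the volume name 'testvol' is hypothetical:

    # scanned files, rebalanced size and failures, broken down per node
    gluster volume rebalance testvol status
    # the rebalance daemon logs to its own file on each node
    tail -f /var/log/glusterfs/testvol-rebalance.log

]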
08:41 ppai joined #gluster
08:43 ravindran1 joined #gluster
08:43 olisch joined #gluster
08:44 DV__ joined #gluster
08:47 rjoseph joined #gluster
08:51 ninkotech__ joined #gluster
08:53 chirino joined #gluster
08:58 aviksil joined #gluster
09:03 edward3 joined #gluster
09:09 keytab joined #gluster
09:23 vpshastry joined #gluster
09:31 meghanam joined #gluster
09:44 glusterbot New news from newglusterbugs: [Bug 1075611] [FEAT] log: enhance gluster log format with message ID and standardize errno reporting <https://bugzilla.redhat.com/show_bug.cgi?id=1075611>
09:45 meghanam joined #gluster
09:50 int-0x21 joined #gluster
09:50 int-0x21 Hello
09:50 glusterbot int-0x21: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:51 an joined #gluster
09:53 int-0x21 Is there any best-practice documentation for using GlusterFS as a VMware datastore? Considering distributed striped replicated, given the size of the VMs and the availability of quite a lot of 72/146G 15k SAS disks
10:02 ppai joined #gluster
10:14 glusterbot New news from newglusterbugs: [Bug 1086782] Add documentation for the Feature: glusterfs and oVirt integration <https://bugzilla.redhat.com/show_bug.cgi?id=1086782>
10:14 ravindran1 left #gluster
10:15 an joined #gluster
10:16 edward3 left #gluster
10:18 rjoseph joined #gluster
10:19 tryggvil joined #gluster
10:20 qdk joined #gluster
10:38 tryggvil joined #gluster
10:39 tryggvil joined #gluster
10:42 Slashman joined #gluster
10:47 primechuck joined #gluster
10:59 vpshastry joined #gluster
11:02 ngoswami joined #gluster
11:07 calum_ joined #gluster
11:10 firemanxbr joined #gluster
11:12 ngoswami joined #gluster
11:14 int-0x21 Is there any issue with running replicated distributed striped bricks with different brick sizes?
11:19 ngoswami joined #gluster
11:19 RameshN_ joined #gluster
11:20 shubhendu joined #gluster
11:22 dusmant joined #gluster
11:23 sahina joined #gluster
11:23 ndarshan joined #gluster
11:23 RameshN joined #gluster
11:24 LiRul int-0x21: yes of course. http://www.gluster.org/community/documentation/index.php/Features/heterogeneous-bricks
11:24 glusterbot Title: Features/heterogeneous-bricks - GlusterDocumentation (at www.gluster.org)
11:24 LiRul not yet implemented
11:27 vpshastry joined #gluster
11:29 sman joined #gluster
11:29 andreask joined #gluster
11:30 sman Hi. I get a number of these in my etc-glusterfs-glusterd.vol.log:
11:30 sman W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (127.0.0.1:1019)
11:30 glusterbot sman: That's just a spurious message which can be safely ignored.
11:31 sman glusterbot: Thanks! :-)
11:39 FeanorKnD joined #gluster
11:40 FeanorKnD Hello... is there a way to bring up a glusterfs cluster with an existing 150 GB of files from the beginning, instead of needing to "rsync" this data from a client to a mounted glusterfs share?
11:49 John_HPC joined #gluster
11:50 FeanorKnD it is a great question for people migrating to glusterfs from other network filesystems... I think it may be in the documentation... is it possible?
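
[Aside: the commonly recommended route is still the one being asked about: copy the data in through a client mount, so gluster assigns its layout xattrs as files land on the bricks. A sketch, assuming a new volume 'newvol' on host gl1 and the legacy data in /srv/old:

    mount -t glusterfs gl1:/newvol /mnt/newvol
    # --inplace avoids rsync's write-to-temp-then-rename, which can leave DHT
    # link files behind because the temp name often hashes to a different brick
    rsync -a --inplace /srv/old/ /mnt/newvol/

]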
11:51 sman In the log for my brick I saw a number of "fd cleanup on" messages and then suddenly I got "reading from socket failed. Error (Transport endpoint is not connected), peer (...)"
11:51 sman The connection never came back
11:51 sman (until after a restart).
11:52 sman In the log I can see that this has happened before during "fd cleanup", but the disconnection usually lasts only a few seconds
11:52 sman Any idea what the problem is?
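
[Aside: two starting points for a disconnect like this; the volume name 'myvol' is hypothetical:

    # see which clients each brick currently holds connections from
    gluster volume status myvol clients
    # the brick logs usually show the matching disconnect and its reason
    grep -i disconnect /var/log/glusterfs/bricks/*.log

]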
11:52 diegows joined #gluster
11:56 vpshastry left #gluster
11:59 int-0x21 LiRul: Thanks. That puts a spanner in my plans. I was thinking of using a bunch of 72, 146 and 300G 15k disks that are lying around to make a VMware datastore for the developers.
12:06 ProT-0-TypE joined #gluster
12:07 shubhendu joined #gluster
12:08 LiRul int-0x21: you can split the large disks (make partitions) and use more bricks (for example, 1 brick on a 73G disk and 4 bricks on a 300G disk)
12:08 LiRul yes i know this is not the ideal solution
12:10 chirino joined #gluster
12:16 int-0x21 It's either that, then, or RAIDing the smaller disks up to 300G so I don't run into some overhead problem with too many bricks
12:16 int-0x21 Or well, "many bricks" might mean a lot more than 100 bricks
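
[Aside: one way to realize LiRul's partitioning suggestion, with a hypothetical 300G disk at /dev/sdb:

    # four ~75G partitions so each brick roughly matches the 72G disks
    parted -s /dev/sdb mklabel gpt
    parted -s /dev/sdb mkpart brick1 0% 25%
    parted -s /dev/sdb mkpart brick2 25% 50%
    parted -s /dev/sdb mkpart brick3 50% 75%
    parted -s /dev/sdb mkpart brick4 75% 100%
    mkfs.xfs -i size=512 /dev/sdb1   # repeat for sdb2..sdb4, then mount each as a brick

]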
12:18 rgustafs joined #gluster
12:20 Necrophagos joined #gluster
12:20 Pupeno joined #gluster
12:21 Pupeno Is GlusterFS a file system in the Linux sense? That is, can I mount it and write to it in the traditional way?
12:22 Necrophagos any idea why my gluster mount isn't mounted at boot on one node (the "main" node), although all 3 nodes are configured the same and _netdev is set in fstab?
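
[Aside: a frequent cause when a node mounts its own volume at boot is that the mount is attempted before the local glusterd has finished starting. An illustrative fstab line (hostname and volume are made up); the fetch-attempts option existed in mount.glusterfs of this era and retries the volfile fetch:

    gl1:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,fetch-attempts=5  0 0

A cruder fallback some admins add to /etc/rc.local is simply: mount -a -t glusterfs]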
12:24 Necrophagos Pupeno: I suppose so, but there are some differences, like you can't mount it with ACLs if the filesystem on the server side doesn't support ACLs
12:24 Necrophagos maybe there are other differences too
12:24 Pupeno That wouldn't be an issue for me.
12:24 Necrophagos but its basically another filesystem
12:25 Pupeno I'm searching for solutions to start running multiple instances of my web app. Obviously the database is not a problem, but uploaded and generated files are.
12:25 Pupeno Would GlusterFS be a possible solution for that?
12:26 Necrophagos probably, if you just want to replicate files to other servers
12:26 Pupeno Yup.
12:26 Necrophagos Yeah, go for it, it's super easy to do
12:26 mjsmith2 joined #gluster
12:27 Necrophagos I used this tut
12:27 Necrophagos http://www.howtoforge.com/high-availability-storage-with-glusterfs-3.2.x-on-debian-wheezy-automatic-file-replication-mirror-across-two-storage-servers
12:27 glusterbot Title: High-Availability Storage With GlusterFS 3.2.x On Debian Wheezy - Automatic File Replication (Mirror) Across Two Storage Servers | HowtoForge - Linux Howtos and Tutorials (at www.howtoforge.com)
12:27 Necrophagos worked right away
12:28 Pupeno Thanks.
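
[Aside: condensed, the linked tutorial's setup looks roughly like this; hostnames web1/web2 and paths are hypothetical:

    # on web1:
    gluster peer probe web2
    gluster volume create shared replica 2 web1:/export/shared web2:/export/shared
    gluster volume start shared
    # on each web server, mount where the app expects its uploads:
    mount -t glusterfs localhost:/shared /var/www/uploads

]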
12:31 Ark joined #gluster
12:32 japuzzo joined #gluster
12:39 partner hmm, i can turn a distributed volume into distributed-replicated; does it work the other way around, i.e. dist-repl back to plain distributed? haven't ever tested..
12:40 Philambdo joined #gluster
12:46 sroy joined #gluster
12:53 Ark joined #gluster
13:02 prasanthp joined #gluster
13:04 aviksil joined #gluster
13:16 plarsen joined #gluster
13:19 dusmant joined #gluster
13:19 bennyturns joined #gluster
13:20 ctria joined #gluster
13:25 mjsmith2 joined #gluster
13:27 jskinner_ joined #gluster
13:28 chirino joined #gluster
13:31 ngoswami joined #gluster
13:37 silky joined #gluster
13:38 decimoe joined #gluster
13:38 scuttle_ joined #gluster
13:46 mshadle joined #gluster
13:46 saltsa joined #gluster
13:46 Slasheri joined #gluster
13:46 Slasheri joined #gluster
13:46 NuxRo joined #gluster
13:46 JoeJulian joined #gluster
13:46 Lookcrabs joined #gluster
13:46 Lookcrabs joined #gluster
13:46 jvandewege joined #gluster
13:47 ccha2 joined #gluster
13:47 osiekhan1 joined #gluster
13:48 T0aD joined #gluster
13:50 ctria joined #gluster
13:53 theron joined #gluster
13:54 Necrophagos I have peer probed 2 nodes from one server via their hostnames; on the other nodes, peer status displays the initial node by its IP instead of its hostname
13:54 Necrophagos can I change this?
13:57 sjm joined #gluster
13:58 kaptk2 joined #gluster
13:58 chirino joined #gluster
14:04 lmickh joined #gluster
14:04 yinyin_ joined #gluster
14:05 jbd1 joined #gluster
14:06 basso joined #gluster
14:06 davinder joined #gluster
14:14 primechuck joined #gluster
14:16 coredump joined #gluster
14:18 plarsen joined #gluster
14:19 LoudNoises joined #gluster
14:21 LoudNoises joined #gluster
14:29 davinder joined #gluster
14:30 nishanth joined #gluster
14:31 sprachgenerator joined #gluster
14:32 LiRul left #gluster
14:42 gmcwhist_ joined #gluster
14:47 andreask joined #gluster
14:52 davinder joined #gluster
14:57 Necrophagos yes I can, in /etc/glusterd/peers/<UUID>
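
[Aside: the peer file meant here lives under /var/lib/glusterd/peers/<UUID> on most packagings (/etc/glusterd/peers on older ones) and holds a few key=value lines; the values below are illustrative:

    uuid=f4b0c2aa-0000-0000-0000-example
    state=3
    hostname1=gl1.example.com

The usual advice is to stop glusterd on all nodes before editing, and to make the same change everywhere.]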
14:59 haomaiwa_ joined #gluster
15:00 gmcwhistler joined #gluster
15:02 liquidat_ joined #gluster
15:04 chirino joined #gluster
15:06 plarsen joined #gluster
15:11 Philambdo joined #gluster
15:16 daMaestro joined #gluster
15:23 vpshastry joined #gluster
15:25 Nostalgeek joined #gluster
15:28 Nostalgeek Hi, I'm new to Gluster. My idea is to use Gluster as a XenServer / XenCenter datastore. My Gluster servers/bricks are separate from my hypervisor hosts. I want full HA in case of a Gluster server failure (so, from what I understand, NFS failover is an issue). I haven't tested yet, but I see 2 options: either mount the Gluster volume through FUSE on each hypervisor, or somehow configure gluster on each hypervisor but don't mount through FUSE and have
15:28 Nostalgeek gluster do the NFS proxying to the actual Gluster store. Does that make any sense?
15:36 JoeJulian Nostalgeek: My preference for that would be fuse. Qemu had libgfapi support added. If that carries over to qemu-xen then I would use that.
15:37 Nostalgeek JoeJulian, thanks. Yeah, I've read about qemu and libgfapi, but it seems to work only for KVM right now, and I'm not sure what the progress is on getting support into qemu-xen
15:37 marcoceppi joined #gluster
15:37 marcoceppi joined #gluster
15:38 JoeJulian I'm sure they would accept a pull request. ;)
15:38 Nostalgeek Now I'm reading about ganesha, maybe that's an option too.
15:39 JoeJulian The problem is with the tcp nfs connection. I don't know of any way to maintain the image connection through a tcp rst.
15:40 chirino joined #gluster
15:42 Nostalgeek JoeJulian, yeah, I understand the NFS HA issue. But how about having ganesha running on each of my hypervisors, each hypervisor having 127.0.0.1 as its NFS target, with ganesha tunneling to gluster through libgfapi? I can't find any benchmarks though
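
[Aside: a sketch of what such an export looks like in NFS-Ganesha's config using the Gluster FSAL; all names are hypothetical, and option spellings have varied across ganesha versions:

    EXPORT {
        Export_Id = 1;
        Path = "/vmvol";
        Pseudo = "/vmvol";
        Access_Type = RW;
        FSAL {
            Name = GLUSTER;
            Hostname = "gl1";     # a gluster volfile server, reached via libgfapi
            Volume = "vmvol";
        }
    }

]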
15:42 liquidat joined #gluster
15:45 aviksil joined #gluster
15:50 swebb joined #gluster
15:54 jag3773 joined #gluster
16:02 sroy_ joined #gluster
16:18 VerboEse joined #gluster
16:23 bennyturns joined #gluster
16:23 systemonkey joined #gluster
16:28 Philambdo joined #gluster
16:31 jbd1 joined #gluster
16:33 MacWinner joined #gluster
16:33 diegows joined #gluster
16:36 ndk` joined #gluster
16:44 swebb joined #gluster
16:44 swebb left #gluster
16:46 hagarth joined #gluster
16:50 sroy_ joined #gluster
16:52 ninkotech__ joined #gluster
16:54 ktosiek joined #gluster
16:55 vpshastry left #gluster
16:59 siel joined #gluster
17:08 vipulnayyar joined #gluster
17:09 sroy_ joined #gluster
17:16 ramteid joined #gluster
17:17 dusmant joined #gluster
17:36 ninkotech_ joined #gluster
17:59 kmai007 joined #gluster
18:00 ninkotech__ joined #gluster
18:05 Ark joined #gluster
18:07 ninkotech_ joined #gluster
18:08 ProT-0-TypE joined #gluster
18:11 ninkotech joined #gluster
18:14 ninkotech__ joined #gluster
18:18 jobewan joined #gluster
18:21 ninkotech_ joined #gluster
18:27 ninkotech_ joined #gluster
18:28 ninkotech joined #gluster
18:32 [o__o] joined #gluster
18:33 ninkotech_ joined #gluster
18:34 ninkotech joined #gluster
18:34 an joined #gluster
18:36 cvdyoung left #gluster
18:37 theron joined #gluster
18:41 ninkotech__ joined #gluster
18:43 ninkotech_ joined #gluster
19:01 ninkotech joined #gluster
19:04 ninkotech__ joined #gluster
19:16 GoJkOrS_ joined #gluster
19:18 GoJkOrS_ Hey all, I need some help. We are using GlusterFS with Cinder. When we try to attach a volume that has a snapshot, AppArmor blocks it. Does anyone know the proper fix for this?
19:30 Nostalgeek joined #gluster
19:43 rahulcs joined #gluster
19:46 edong23 joined #gluster
20:19 Ark joined #gluster
20:22 jbd1 GoJkOrS_: sounds like you need to add a rule to apparmor to allow it
20:22 jbd1 GoJkOrS_: dmesg will tell you details, as you know-- use those details to figure out which rule you need
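
[Aside: the usual AppArmor workflow for jbd1's advice, hedged since the offending profile depends on the distro's libvirt/qemu packaging:

    # show recent denials
    dmesg | grep -i apparmor
    # let aa-logprof (from apparmor-utils) propose matching allow rules interactively
    aa-logprof
    # or, while testing, switch the suspect profile to complain mode
    aa-complain /etc/apparmor.d/usr.sbin.libvirtd

]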
20:28 kmai007 joined #gluster
20:37 ctria joined #gluster
20:49 sputnik13 joined #gluster
20:49 ninkotech joined #gluster
21:05 partner repeating an earlier question - i can turn a distributed volume into distributed-replicated; does it work the other way around, i.e. dist-repl back to plain distributed?
21:06 partner i could of course try it out, but asking first is a lot easier than setting it all up, and i might fail anyway just because i don't know the proper approach
21:08 kmai007 i believe partner you cannot. you will have to create a new volume on that brick and migrate the full data over
21:10 kmai007 nevermind
21:10 kmai007 you said dist.-rep. to distr.
21:11 kmai007 you should send that inquiry to gluster-users@gluster.org, somebody will answer
21:13 badone joined #gluster
21:13 sjm left #gluster
21:15 Nostalgeek Simple question regarding the new File-Snapshot feature: is it possible to "goto" (restore) a snapshot to a different file name?
21:30 JoeJulian partner: yes
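
[Aside: the mechanism behind this "yes" is remove-brick with a lowered replica count; hostnames and paths are hypothetical:

    # turn a 2x2 distributed-replicated volume back into plain distribute by
    # dropping one brick from each replica pair
    gluster volume remove-brick myvol replica 1 \
        gl2:/export/brick1 gl4:/export/brick1 force

]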
21:44 tryggvil joined #gluster
21:50 JoeJulian Nostalgeek: Not sure yet.
22:25 Nostalgeek JoeJulian, thanks
22:32 Philambdo joined #gluster
22:35 siel joined #gluster
22:35 marcoceppi joined #gluster
22:35 glusterbot joined #gluster
22:35 \malex\ joined #gluster
22:35 ultrabizweb joined #gluster
22:35 nixpanic_ joined #gluster
22:35 yosafbridge joined #gluster
22:35 eightyeight joined #gluster
22:35 ackjewt joined #gluster
22:36 bfoster joined #gluster
22:36 necrogami joined #gluster
22:36 hflai joined #gluster
22:36 doekia joined #gluster
22:36 anotheral joined #gluster
22:36 Dave2 joined #gluster
22:36 purpleidea joined #gluster
22:36 ernetas joined #gluster
22:36 Licenser joined #gluster
22:36 semiosis joined #gluster
22:36 eryc joined #gluster
22:36 brosner joined #gluster
22:36 SteveCooling joined #gluster
22:36 stigchristian joined #gluster
22:36 64MAAAH5Y joined #gluster
22:36 georgeh|workstat joined #gluster
22:37 yosafbridge joined #gluster
23:03 tryggvil joined #gluster
23:30 foster joined #gluster
23:32 badone joined #gluster
23:34 xavih joined #gluster
23:42 theron joined #gluster
23:43 JoeJulian file a bug
23:43 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
