IRC log for #gluster, 2014-01-30

All times shown according to UTC.

Time Nick Message
23:03 ells joined #gluster
23:28 khushildep joined #gluster
23:30 ells_ joined #gluster
23:34 tdasilva left #gluster
23:47 askb joined #gluster
23:49 gdubreui joined #gluster
23:51 overclk joined #gluster
23:59 jbrooks joined #gluster
00:13 jbrooks joined #gluster
00:38 DV__ joined #gluster
00:53 sarkis joined #gluster
01:01 diegows joined #gluster
01:26 dalax joined #gluster
01:28 tokik joined #gluster
01:28 sprachgenerator joined #gluster
01:38 a2 joined #gluster
01:48 zaitcev joined #gluster
01:52 shyam joined #gluster
01:54 dalax left #gluster
01:55 vpshastry joined #gluster
01:57 gmcwhistler joined #gluster
02:05 shyam joined #gluster
02:05 dbruhn joined #gluster
02:20 shyam joined #gluster
02:28 shyam joined #gluster
02:30 shyam joined #gluster
02:30 Gluster joined #gluster
02:32 sarkis joined #gluster
03:13 bharata-rao joined #gluster
03:21 jag3773 joined #gluster
03:34 nshaikh joined #gluster
03:38 jmarley joined #gluster
03:42 kdhananjay joined #gluster
03:44 itisravi joined #gluster
03:45 RameshN joined #gluster
03:50 mohankumar__ joined #gluster
03:53 shubhendu joined #gluster
03:56 lalatenduM joined #gluster
04:01 shylesh joined #gluster
04:04 vpshastry joined #gluster
04:08 dbruhn joined #gluster
04:18 mattappe_ joined #gluster
04:30 vpshastry joined #gluster
04:31 shyam joined #gluster
04:46 sks joined #gluster
04:49 ndarshan joined #gluster
04:52 iksik_ joined #gluster
05:00 DV joined #gluster
05:02 glusterbot New news from newglusterbugs: [Bug 969461] RFE: Quota fixes <https://bugzilla.redhat.com/show_bug.cgi?id=969461>
05:11 lalatenduM joined #gluster
05:12 davinder joined #gluster
05:13 bala joined #gluster
05:14 jporterfield joined #gluster
05:19 aravindavk joined #gluster
05:21 mohankumar__ joined #gluster
05:22 hagarth joined #gluster
05:23 dusmantkp_ joined #gluster
05:26 ppai joined #gluster
05:27 spandit joined #gluster
05:29 meghanam joined #gluster
05:29 meghanam_ joined #gluster
05:37 CamoYoshi joined #gluster
05:38 CamoYoshi Hi everyone, bit lost on my Gluster set up here... it's working but it's not exactly behaving as I expected.
05:39 vpshastry joined #gluster
05:40 CamoYoshi So what I want to do is take three machines with a handful of "bricks" if you will, and have them pooled into one. So I think the Distribute type volume would be what I want to pick, right? However, I've got mdadm doing JBOD on all the drives on each of the boxes. One of the boxes has 1.1TB, the other has 680GB, and the last one has about 300GB. So my total should be about 2TB, right?
05:41 CamoYoshi However the gv0 volume shows as having a capacity of 80GB according to df. So I'm not sure what I'm doing wrong. Proud that I got it working though, but the capacity is a little smaller than what I want :)
05:44 shubhendu joined #gluster
05:47 samppah CamoYoshi: can you send output of gluster vol info to pastie.org?
05:47 CamoYoshi Sure. Just a sec.
05:49 CamoYoshi http://pastie.org/private/p1d3pss2qjz1g1wllgweg
05:49 glusterbot Title: Private Paste - Pastie (at pastie.org)
05:50 CamoYoshi Haven't added the second brick yet, just a FYI.
05:50 CamoYoshi *third
05:51 samppah ok, what's mounted on /gv0?
05:52 CamoYoshi Just /dev/md0, which is a linear (JBOD) mdadm raid setup.
05:53 samppah does df show correct size for it?
05:53 CamoYoshi Yep, that's correct
05:54 CamoYoshi df shows 1.1T, while gv0 shows 74GB.
05:54 mohankumar__ joined #gluster
05:54 samppah odd
05:55 CamoYoshi My thoughts exactly :)
05:55 samppah same thing on both servers?
05:56 CamoYoshi Effectively, yes, though the amount of physical storage on the two servers is different, simply because there are different hard drives in each.
05:56 samppah is root or any other filesystem 74GB?
05:56 CamoYoshi Though I would imagine that shouldn't matter. (I hope)
05:57 CamoYoshi root on one of the servers is 71GB, but I don't think there is a correlation... the other server's root is about 4GB
05:57 samppah hmm!
05:59 samppah can you pastie the output of the df and mount commands from both servers?
05:59 CamoYoshi Sure. Just a sec...
06:00 psharma joined #gluster
06:01 hagarth joined #gluster
06:06 CamoYoshi Here's for the server with the 680GB of drive space http://pastie.org/private/upbldrwtufruhsisldpg
06:06 glusterbot Title: Private Paste - Pastie (at pastie.org)
06:06 CamoYoshi And the 1.1TB one: http://pastie.org/private/wmlz74mcf2k6l9igmhdvzg
06:06 glusterbot Title: Private Paste - Pastie (at pastie.org)
06:09 samppah ah, you must point gluster to /export/md0 or mount md0 to /gv0
06:09 CamoYoshi Ok. Let's try that then. :P
06:10 CamoYoshi Must be the lack of coffee
06:10 samppah ie. gluster creste volName server:/export/md0 ...
06:10 samppah *create
06:11 CheRi joined #gluster
06:11 CamoYoshi So that would be "gluster create volName server1:/export/md0 server2:/export/md0" (Obviously adding servers to that as needed)?
06:11 CamoYoshi Or just one server?
06:13 samppah you can add all servers at once or add more afterwards with gluster volume add-brick command
06:13 CamoYoshi Ok, let's give this a shot...
06:13 samppah and sorry, exact command to create volume is of course gluster volume create ... :)
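A sketch of the sequence being described above, with placeholder hostnames; on a distribute volume the client-side df should then report roughly the sum of the brick filesystems:

    # create a distribute volume from the md0 mount points (hostnames are placeholders)
    gluster volume create gv0 server1:/export/md0 server2:/export/md0
    gluster volume start gv0
    # a third machine can be added later with add-brick, as mentioned above
    gluster volume add-brick gv0 server3:/export/md0
    # mount on a client and check the pooled capacity
    mount -t glusterfs server1:/gv0 /mnt/gv0
    df -h /mnt/gv0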
06:14 CamoYoshi No worries, you've been a huge help so far
06:14 kanagaraj joined #gluster
06:17 CamoYoshi WOOHOO! It works!
06:18 samppah great!
06:20 raghu joined #gluster
06:20 CamoYoshi Now I just need to finish building my other server and I'll be in business. Spent $0 on a 2.2TB SAN for my home :)
06:21 Philambdo joined #gluster
06:22 micu1 joined #gluster
06:25 dusmantkp_ joined #gluster
06:28 harish joined #gluster
06:43 rastar joined #gluster
06:50 shubhendu joined #gluster
07:21 jtux joined #gluster
07:34 Cenbe joined #gluster
07:36 flrichar joined #gluster
07:37 ells joined #gluster
07:39 benjamin_____ joined #gluster
07:41 XpineX joined #gluster
08:02 micu1 joined #gluster
08:04 dusmantkp_ joined #gluster
08:07 hagarth joined #gluster
08:15 eshy hello. i'm having an issue bringing up a brand new volume. when i issue a create (gluster volume create gfs_data replica 2 node1:/mnt/gfs node2:/mnt/gfs) i receive an error saying /mnt/gfs or a prefix of it is already a volume
08:15 glusterbot eshy: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
08:15 eshy wow, nice.
08:15 eshy well, thanks i guess?
08:17 eshy to confirm: it worked
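For reference, the fix behind glusterbot's link boils down to clearing the leftover gluster metadata from the brick directory before retrying the create — a sketch using the /mnt/gfs path from eshy's command; double-check it against the linked post for your version:

    # on each node, for the brick path that triggered the error
    setfattr -x trusted.glusterfs.volume-id /mnt/gfs
    setfattr -x trusted.gfid /mnt/gfs
    rm -rf /mnt/gfs/.glusterfs
    # then restart glusterd and retry the volume create
    service glusterd restart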
08:18 plarsen joined #gluster
08:21 ngoswami joined #gluster
08:24 keytab joined #gluster
08:29 kdhananjay joined #gluster
08:31 hagarth joined #gluster
08:33 glusterbot New news from newglusterbugs: [Bug 1058227] AFR: glusterfs client invoked oom-killer <https://bugzilla.redhat.com/show_bug.cgi?id=1058227>
08:34 saurabh joined #gluster
08:35 aravindavk joined #gluster
08:38 ProT-0-TypE joined #gluster
08:50 mick27 joined #gluster
09:04 blook joined #gluster
09:05 lalatenduM joined #gluster
09:06 rastar joined #gluster
09:07 meghanam joined #gluster
09:08 meghanam_ joined #gluster
09:20 khushildep joined #gluster
09:27 hagarth left #gluster
09:32 sprachgenerator_ joined #gluster
09:35 meghanam joined #gluster
09:35 meghanam_ joined #gluster
09:43 aravindavk joined #gluster
09:43 ells joined #gluster
09:52 olisch joined #gluster
09:54 ppai joined #gluster
10:03 franc joined #gluster
10:04 benjamin_____ joined #gluster
10:09 bala joined #gluster
10:32 jikz joined #gluster
10:32 jikz joined #gluster
10:38 StarBeas_ joined #gluster
10:48 RameshN joined #gluster
10:48 hybrid512 joined #gluster
10:59 d-fence_ joined #gluster
11:00 bala joined #gluster
11:02 siebenburgen joined #gluster
11:05 itisravi joined #gluster
11:09 tokik joined #gluster
11:15 d-fence joined #gluster
11:18 mgebbe_ joined #gluster
11:20 ppai joined #gluster
11:38 glusterbot New news from resolvedglusterbugs: [Bug 1043737] NFS directory access is too slow with ACL <https://bugzilla.redhat.com/show_bug.cgi?id=1043737>
11:42 recidive joined #gluster
11:45 diegows joined #gluster
11:47 d-fence joined #gluster
12:03 tdasilva joined #gluster
12:04 ira joined #gluster
12:06 Peanut What version of qemu/virsh would I need for it to support libgfapi ?
12:10 kkeithley1 joined #gluster
12:20 CheRi joined #gluster
12:21 siebenburgen left #gluster
12:23 edward2 joined #gluster
12:24 ppai joined #gluster
12:26 mohankumar__ joined #gluster
12:33 Philambdo joined #gluster
12:36 Philambdo joined #gluster
12:37 meghanam_ joined #gluster
12:37 meghanam joined #gluster
12:42 b0e joined #gluster
12:42 Philambdo joined #gluster
12:47 sks joined #gluster
12:52 ndevos Peanut: see http://www.ovirt.org/Features/GlusterFS_Storage_Domain#Important_Pre-requisites
12:52 glusterbot Title: Features/GlusterFS Storage Domain (at www.ovirt.org)
12:52 Philambdo joined #gluster
12:54 recidive joined #gluster
12:54 Philambdo joined #gluster
12:55 sroy joined #gluster
12:56 Philambdo joined #gluster
12:57 Philambdo joined #gluster
12:59 Philambdo joined #gluster
12:59 Philambdo joined #gluster
13:01 Rydekull aaa
13:06 delhage Rydekull: oh yeah?
13:06 vpshastry left #gluster
13:16 rastar joined #gluster
13:20 mick27 do you guys change the inode size depending on the average size of files ?
13:25 dusmantkp_ joined #gluster
13:31 itisravi joined #gluster
13:33 Rydekull delhage: yeah, exactly that
13:34 mattappe_ joined #gluster
13:34 ira joined #gluster
13:34 olisch joined #gluster
13:35 Peanut ndevos: thanks, that has the numbers I needed
13:36 Peanut ndevos: though I have libvirt-1.0.2 and qemu 1.4.0, yet libgfapi doesn't seem to work for me, I can't seem to open a gluster:// file from qemu/libvirt.
13:42 Peanut Hmm.. or do you need to run your virtual machine as root in order to be able to use libgfapi?
13:44 bennyturns joined #gluster
13:49 ells joined #gluster
13:50 davinder joined #gluster
13:50 ndevos Peanut: that link also explains how to configure /etc/glusterfs/glusterd.vol and some volume options
13:50 ndevos Peanut: after changing the volume options, you seem to need to stop + start the volume :-/
13:51 Peanut ndevos: oh, looks like downtime then? :-(
13:53 Peanut Ah, I was missing the storage.owner-uid and rpc-auth-allow-insecure, server.allow-insecure. How dangerous are those options?
13:55 ndevos Peanut: you may not need the storage.owner-{g,u}id one, the others allow tcp-ports > 1024 to access the volume
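Roughly what that amounts to on the gluster side — the volume name and the qemu uid/gid (107 here) are placeholders for this setup:

    # volume options (the volume seems to need a stop/start afterwards, per ndevos)
    gluster volume set gv0 server.allow-insecure on
    gluster volume set gv0 storage.owner-uid 107   # uid the qemu process runs as; may not be needed
    gluster volume set gv0 storage.owner-gid 107
    # plus, in /etc/glusterfs/glusterd.vol on each server:
    #   option rpc-auth-allow-insecure on
    # followed by a glusterd restart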
13:56 japuzzo joined #gluster
13:58 hagarth joined #gluster
14:01 Peanut The protection offered by low port numbers is fairly small anyway, I wouldn't mind enabling that.
14:10 benjamin_____ joined #gluster
14:12 pkliczew joined #gluster
14:13 primechuck joined #gluster
14:22 mattappe_ joined #gluster
14:24 _br_ joined #gluster
14:24 sulky joined #gluster
14:26 juhaj joined #gluster
14:31 jmarley joined #gluster
14:35 benjamin_____ joined #gluster
14:37 diegows joined #gluster
14:37 jmarley joined #gluster
14:39 RameshN joined #gluster
14:42 mattapperson joined #gluster
14:42 jmarley__ joined #gluster
14:43 jmarley joined #gluster
14:48 ells joined #gluster
14:49 ells joined #gluster
14:51 ells joined #gluster
14:56 thefiguras joined #gluster
14:57 failshell joined #gluster
15:01 ells joined #gluster
15:03 social hagarth: do you have 5minutes?
15:03 theron joined #gluster
15:04 social or kkeithley_ a2 ^^ 5minutes?
15:04 hagarth social: I think so, am attending another meeting in IRC too atm
15:04 ells left #gluster
15:04 social hagarth: ah, I wanted to ask whether I understand it correctly about inode_unref and loc_wipe
15:05 ells joined #gluster
15:05 hagarth social: go ahead
15:05 social hagarth: I can reproduce the memleak with mkdir/touch/ls in the gluster fuse client, but it is not leaking 100%, it leaks only ~50%
15:06 theron_ joined #gluster
15:06 hagarth social: do you see the leak if you were to do "echo 3 > /proc/sys/vm/drop_caches" and then kill glusterfs
15:06 social I'll check
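A quick way to run that check on the client (a sketch; glusterfs is the fuse client process, and the dump path can vary by build):

    # release the kernel's dentry/inode references so the fuse client can free its inode table
    echo 3 > /proc/sys/vm/drop_caches
    # take a statedump of the client to inspect its memory accounting
    kill -USR1 $(pidof glusterfs)
    # the dump normally lands under /var/run/gluster/ as glusterdump.<pid>.dump.*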
15:09 social hagarth: good I was guessing that I'm chasing my tail and it's something like this :)
15:10 bala joined #gluster
15:10 bugs_ joined #gluster
15:11 social hagarth: ok that means I just need this patch, but I don't understand why it failed http://review.gluster.org/#/c/6850/
15:11 glusterbot Title: Gerrit Code Review (at review.gluster.org)
15:12 hagarth social: there is a regression failure affecting more patches. will debug that after I am done with this IRC meeting.
15:13 social hagarth: ok thanks
15:13 ericsean joined #gluster
15:13 ericsean left #gluster
15:14 kkeithley_ the release-3.5 branch isn't broken. If you wanted to, you could submit your patch against that and we can see if it passes.
15:14 social ook I want it in 3.4 also, should I send it there too?
15:14 kkeithley_ yup
15:18 Peanut Does 'rpc-auth-allow-insecure' no longer exist in 3.4.1 ?
15:20 Peanut Ugh, never mind, I cannot read. Please ignore.
15:20 bala joined #gluster
15:21 jobewan joined #gluster
15:26 Gluster1 joined #gluster
15:27 shylesh joined #gluster
15:36 shyam joined #gluster
15:39 Peanut I've set all required values (storage.owner-{u,g}id, server.allow-insecure on, option rpc-auth-allow-insecure on) but I still cannot start a KVM with <source file="gluster://localhost/gv0/kvmtest.raw" - what could I still be missing to get libgfapi to work?
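One thing that may be worth checking (an aside, not something confirmed in the channel): gluster:// URLs are what qemu and qemu-img accept on the command line, while libvirt domain XML usually addresses a libgfapi disk with the network-disk form, along these lines:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='gluster' name='gv0/kvmtest.raw'>
        <host name='localhost' port='24007'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>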
15:46 rcaskey hey all, reading through the docs and I have a question. Why do you give a server name when mounting a gluster volume? Is it actually sending traffic/requests to that server, or does it get configuration information for the entire cluster, or...
15:47 rcaskey in "real life" would you typically give the hostname of a load balancer instead?
15:49 samppah rcaskey: it only fetches volume information from that server
15:49 samppah you can use virtual ip if you like or do round robin with dns or...
15:50 samppah Peanut: are you able to use qemu-img to create images using gluster:// etc?
15:55 rcaskey samppah, so suppose I have a VM stored on gluster and host_a goes down and round robin is now sending out host_a and host_b, will gluster go down the list and retry a few times before telling the filesystem level that OMG YOUR BLOCK DEVICE IS GONE
15:55 rcaskey or will it just be like 'welp, your block device disappeared, hope you know how to wait for a hotplug'
15:56 mik3 is xfs still the preferred filesystem for gluster?
15:56 mik3 i read a few results about ext4 issues and i'm wondering if anyone has current data on the state
15:57 Peanut samppah: No, but I understood that qemu-img got updated much later?
15:57 purpleidea mik3: xfs is the prefered fs
15:57 samppah rcaskey: if you use mount -t glusterfs serverA:vol /mnt/point to mount glusterfs, then it connects to serverA to get volume information and what servers to connect to etc.. serverA doesn't necessarily have to be part of the volume
15:57 mik3 purpleidea: roger
15:58 purpleidea mik3: my name
15:58 purpleidea 's  not roger
15:58 B21956 joined #gluster
15:58 samppah rcaskey: now if you have replication between serverA and serverB and serverA dies
15:58 mik3 ;)
15:58 kkeithley_ but fyi the ext4 issues are ancient history. Use ext4 if you want, but, as purpleidea indicated, xfs is considered 'best practice'
15:58 samppah rcaskey: there is an option called network.ping-timeout; the glusterfs client waits for that period before it considers a server dead.. during that timeout all IO is blocked and it may cause problems for VMs
15:59 samppah rcaskey: your options are to either lower the network.ping-timeout value or set a longer timeout period in the VMs
15:59 rcaskey yeah i'd expect that, that sounds good
15:59 rcaskey so serverA is a gluster endpoint in the cluster that may or may ot be serving the volume that informs the client of all the servers that _are_ serving the volume
15:59 vpshastry joined #gluster
16:00 samppah Peanut: if your qemu support libgfapi then qemu-img should work with it too
16:00 samppah Peanut: that's a good way to test that basic settings are fine
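For example, something like this (assuming the volume is called gv0 and the local qemu-img was built with gluster support):

    qemu-img create -f raw gluster://localhost/gv0/test.raw 1G
    qemu-img info gluster://localhost/gv0/test.raw

If qemu-img rejects the gluster:// URL, the qemu build is the problem rather than the volume settings.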
16:00 samppah rcaskey: yes, exactly like that
16:01 samppah rcaskey: you can also use mount option -o backupvolfile-server=serverB to define another server where to ask for volume information
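Put together, something along these lines — server and volume names are placeholders, and 42 seconds is the usual default for ping-timeout:

    # client: fetch the volfile from serverA, falling back to serverB if it is down at mount time
    mount -t glusterfs -o backupvolfile-server=serverB serverA:/myvol /mnt/myvol
    # server: shorten how long clients block before declaring a brick server dead
    gluster volume set myvol network.ping-timeout 10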
16:01 purpleidea rcaskey: fwiw, vrrp integration (VIP's) for gluster is integrated into ,,(puppet)
16:01 glusterbot rcaskey: (#1) https://github.com/purpleidea/puppet-gluster, or (#2) semiosis' unmaintained puppet module: https://github.com/semiosis/puppet-gluster
16:01 rcaskey k, this is fitting into my worldview nicely :P
16:02 rcaskey back to reading
16:02 pixelgremlins hey... if I have /home/folder and var/www/folder is a symbolic link to /home/folder... and var/www/folder is replicated... will that make all the files in /home/folder replicate across servers?
16:05 abyss^ I have a lot of that message on one of my bricks: http://fpaste.org/72699/09980901/ Any explanation of this strange behavior? The log of that brick grows about 1MB per minute:/
16:05 glusterbot Title: #72699 Fedora Project Pastebin (at fpaste.org)
16:09 jag3773 joined #gluster
16:17 mohankumar__ joined #gluster
16:22 primechuck joined #gluster
16:30 mattappe_ joined #gluster
16:32 daMaestro joined #gluster
16:33 Peanut samppah: my qemu-img is not linked against libgfapi.so. On second thought, that makes a lot of sense: the qemu-img comes from Ubuntu, and the Gluster from semiosis' PPA. So this makes perfect sense and I could have saved myself a lot of trouble.
16:35 semiosis Peanut: ubuntu people are looking into this... https://bugs.launchpad.net/cloud-archive/+bug/1246924 & https://bugs.launchpad.net/ubuntu/+source/glusterfs/+bug/1274247
16:35 glusterbot Title: Bug #1246924 “qemu not built with GlusterFS support” : Bugs : ubuntu-cloud-archive (at bugs.launchpad.net)
16:36 rcaskey does georeplication play nice with apps that are bypassing FUSE (particularly qemu)
16:38 Peanut semiosis: thanks, would be great if 3.4.2 and qemu+libgfapi would land in Thrusty.
16:39 semiosis Peanut: glusterfs 3.4.2 *is* in trusty already, i saw to that
16:39 semiosis problem is that it's in universe, so qemu won't build against it
16:39 semiosis glusterfs needs to be in main for that to happen, which is what they're looking into
16:40 mattappe_ joined #gluster
16:40 Peanut What does 'MIR' stand for in that context?
16:40 semiosis Main Inclusion Request
16:40 semiosis https://help.ubuntu.com/community/Repositories/Ubuntu
16:40 glusterbot Title: Repositories/Ubuntu - Community Ubuntu Documentation (at help.ubuntu.com)
16:43 Peanut How likely is that to get a decision in time for Thrusty, which is the new LTS?
16:43 ira joined #gluster
16:47 semiosis Peanut: i'm asking in #ubuntu-server now
16:47 semiosis and it's Trusty btw
16:48 Peanut Err.. baaad typo, I'm getting the first and second part of the new name mixed up.
16:48 semiosis Thrusty is funnier though, i thought maybe you were joking ;)
16:52 Peanut puppetca --list
16:52 Peanut Ugh.. typo day today, sorry.
16:54 rotbeard joined #gluster
16:55 rcaskey So for getting started with gluster with the goal being geo-replicated qemu vm storage ... precise + compile from src?
16:55 vpshastry left #gluster
17:02 Jayunit100 hi purple idea, I'm trying to get it running on vagrant-gluster-puppet on VBox .  will let you know.
17:03 rcaskey semiosis, what do you recommend for folks wanting to do gluster + qemu and are ubuntu-server users?
17:04 rcaskey semiosis, is it a matter of uncommenting some --enables and rebuilding via dpkg-buildpackage, are there 3rd party repos of interest, or do I need to be building libvirt, qemu, and gluster from src?
17:06 asku joined #gluster
17:06 semiosis my internets dropped
17:06 semiosis rcaskey: you should generally use the ,,(ppa) packages instead of glusterfs packages in ubuntu universe
17:06 glusterbot rcaskey: The official glusterfs packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.4 stable: http://goo.gl/u33hy -- 3.5 QA: http://goo.gl/Odj95k
17:07 rcaskey semiosis, so what about qemu?
17:08 semiosis rcaskey: you'll probably have to compile it
17:08 rcaskey Something tells me the version in precise will feel a bit old
17:09 hybrid512 joined #gluster
17:09 semiosis rcaskey: use the ppa, 3.4.2 is the current version of glusterfs
17:10 semiosis i update the ppas whenever there is a new glusterfs release
17:10 rcaskey semiosis, yeah, i'm more worried about qemu
17:10 semiosis well depending on your timetable maybe you'd want to start out on trusty
17:11 rcaskey semiosis, i'm not terribly pushed so i might very well if it gets built with all the bells and whistles enabled as you alluded to earlier
17:11 semiosis rcaskey: we're all hoping for that but dont hold your breath
17:11 rcaskey I won't, i can be a big boy if I have to
17:12 semiosis if it doesnt make it i'll consider making a qemu-glusterfs ppa
17:12 rcaskey my biggest concern is with geo replication + slow link + churn
17:12 rcaskey i've got a 10M up connection and churn is bursty
17:12 semiosis since all the hard work of packaging qemu is already done, it should be (in theory) pretty easy to just add a dependency on glusterfs & shove it in a ppa
17:12 rcaskey so will georep keep a cache and not resend dirty blocks and ... blahdy blahdy....magically work?
17:12 semiosis ubuntu just wont put a dependency from main to universe, but i can do that in a ppa
17:13 rcaskey if nothing else ask em to put the deps in the rules file commented out
17:13 semiosis lol
17:14 rcaskey or is it gonna be resending X meg chunks every time the least little bit is flipped in a vm image and result in the entire pipe being continually full and never catching up
17:15 semiosis rcaskey: afaik geo-rep uses some gluster internals to track which files are dirty, then uses librsync to sync the dirty files
17:15 semiosis idk how that's going to play out in practice with your setup though, best is to test it
17:15 rcaskey semiosis, rsyncing a hundred gig file over 10 meg would get ugly methinks
17:20 rcaskey maybe knowing the exact changed boundaries and passing that through to rsync makes it a much less ugly proposition?
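For anyone following along, the basic geo-replication commands in the 3.4 series look roughly like this (master volume, slave host and slave volume are placeholders; the exact syntax differs between releases, so check gluster volume help on your version):

    gluster volume geo-replication myvol slavehost::slavevol start
    gluster volume geo-replication myvol slavehost::slavevol status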
17:23 leochill joined #gluster
17:27 Mo_ joined #gluster
17:32 KyleG joined #gluster
17:32 KyleG joined #gluster
17:33 KyleG left #gluster
17:34 sarkis joined #gluster
17:35 zaitcev joined #gluster
17:39 SFLimey_ joined #gluster
17:54 iksik joined #gluster
17:58 dusmantkp_ joined #gluster
18:01 mattappe_ joined #gluster
18:04 burn420 joined #gluster
18:08 burn420 Is there anywhere I can still get UFO package for the newest version of gluster and the swift package as well?
18:12 sarkis joined #gluster
18:26 theron joined #gluster
18:26 mattapperson joined #gluster
18:27 theron joined #gluster
18:31 japuzzo joined #gluster
18:33 burn420 I just read they changed the name to Gluster for Openstack but not seeing g4o rpms... Anyone know where to get those rpms for 3.4 ?
18:35 samppah burn420: is this any helpful https://github.com/gluster/gluster-swift/blob/havana/doc/markdown/quick_start_guide.md ?
18:35 glusterbot Title: gluster-swift/doc/markdown/quick_start_guide.md at havana · gluster/gluster-swift · GitHub (at github.com)
18:36 KyleG1 joined #gluster
18:38 burn420 it might be I am checking it out now thanks
18:40 burnalot joined #gluster
18:46 hybrid512 joined #gluster
18:46 burn420 yes that helps a lot  samppah
18:46 burn420 thanks
18:47 burn420 we set up a service for our customers using bluster, now they want s3 access to their stuff, I don't know if I can do it but I am going to try lol
18:47 burn420 gluster
18:47 samppah i hope it goes well :)
18:47 samppah i should try it out too
18:51 burn420 thanks...
18:51 burn420 me too customers could use it a million ways if it works
18:51 burn420 eel a lot of ways ...
18:51 burn420 can't type with this new keyboard
18:51 burn420 to small!@
18:52 samppah may i ask what kind of service you set up with gluster?
18:58 mick272 joined #gluster
19:07 glusterbot New news from newglusterbugs: [Bug 1059833] Crash in quota <https://bugzilla.redhat.com/show_bug.cgi?id=1059833>
19:28 m0zes joined #gluster
19:28 wushudoin joined #gluster
19:29 KyleG joined #gluster
19:29 KyleG joined #gluster
19:34 pixelgremlins gluster is using mega resources -- 142% cpu + 45% ram... I'm running a small linode vps 1MB ram. Are there some tweaks that can be made?
19:37 divbell joined #gluster
19:38 pixelgremlins * 1GB lol it's not 1993
19:38 marcoceppi joined #gluster
19:39 cfeller joined #gluster
19:41 burn420 @samppah we sell a cloud storage solution to our dedicated server customers
19:41 burn420 its basically a distributed replicated bluster setup
19:41 burn420 mount at home
19:41 burn420 on some servers that customers have access to
19:41 burn420 our billing system, automatically adds new unix accounts with quota on the bluster side
19:41 burn420 so customer can access via ftp sftp and sshmount at the moment
19:42 burn420 in the future want to have nfs and samba and s3 access would be nice but will have an object storage solution soon if we can't make it work how we want it
19:44 samppah burn420: sounds cool
19:45 burn420 it works !
19:45 samppah burn420: does each customer has it's own volume or one large and using gluster quotas?
19:45 burn420 if we could get the s3 for it would be so much better
19:45 burn420 cause they could pretty much mount their storage to anything
19:45 burn420 its just one big volume with quotas
19:45 samppah err.. i think that my question is somewhat senseless :)
19:45 samppah okay :)
19:46 burn420 'I backed up all my servers to it...
19:46 burn420 and I have been told some people use it for web storage
19:46 burn420 but would be nice if you could mount it to windows mac os linux etc with a program instead of sshmount
19:47 burn420 I don't know there is tons of stuff for s3
19:47 burn420 so we shall see
19:47 burn420 I will let you know how it works out
19:47 burn420 I keep getting pulled into meetings so no way ill finish it today
19:47 samppah great! :)
19:47 samppah i didn't know that user quotas work with gluster
19:47 burn420 yeah they are built in
19:48 burn420 we tested them they work
19:48 burn420 you set them on a directory within the volume
19:48 burn420 I think you can set them another way but it was six months ago when I finished my side
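The directory-level quota commands being referred to look roughly like this (volume name and path are placeholders):

    gluster volume quota myvol enable
    gluster volume quota myvol limit-usage /customer1 50GB
    gluster volume quota myvol list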
19:48 samppah ah right
19:49 samppah i thought you were using filesystem quota :)
19:49 burn420 nah
19:49 burn420 that was an important feature..
19:49 burn420 to us...
19:49 burn420 I have tested a lot of solutions
19:50 burn420 I would like to see a hybrid of gluster and ifs
19:50 burn420 or zfs  and ceph
19:50 burn420 zfs is fast
19:50 burn420 this damn client keeps correcting me and its wrong!
19:51 burn420 keeps change gluster to bluster
19:51 burn420 wtf
19:51 samppah :D
19:52 samppah gluster with zfs would be nice indeed
19:52 samppah too bad it's not supported so i think that's not an option for us
19:53 khushildep joined #gluster
19:53 samppah would be superb to use zfs snapshots for offsite backups
19:54 blook joined #gluster
19:56 samppah burn420: this might be intresting for you if you are looking into s3 https://bitbucket.org/nikratio/s3ql/overview
19:56 glusterbot Title: nikratio / S3QL Bitbucket (at bitbucket.org)
20:02 sarkis joined #gluster
20:09 ricky-ti1 joined #gluster
20:09 sprachgenerator having some issues with havana/nova/gluster using qemu 1.7.x on 13.10 with libgfapi/qemu support built in: http://pastebin.com/03DmsmGL
20:09 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
20:24 theron joined #gluster
20:36 rwheeler joined #gluster
20:40 kl4m joined #gluster
20:50 nocturn left #gluster
20:52 andreask joined #gluster
20:54 6JTAA4KVG joined #gluster
20:54 mattappe_ joined #gluster
20:58 mattap___ joined #gluster
20:59 theron joined #gluster
21:02 theron_ joined #gluster
21:03 KyleG joined #gluster
21:03 KyleG joined #gluster
21:25 jporterfield joined #gluster
21:34 theron joined #gluster
21:34 badone joined #gluster
21:36 NeatBasis joined #gluster
21:47 theron joined #gluster
21:57 xymox joined #gluster
21:59 B21956 joined #gluster
22:24 failshel_ joined #gluster
22:27 khushildep_ joined #gluster
22:32 ivymike joined #gluster
22:32 gdubreui joined #gluster
22:33 spechal joined #gluster
22:34 burn420 joined #gluster
22:34 spechal I have a volume (one of three) on my gluster cluster at /gluster/legacy ... when I mount from a client with mount -t glusterfs glusterfs:/gluster/legacy /mnt/legacy ... I get the error that it failed to get the volume file ... am I doing something wrong (obviously)?
22:35 spechal I have /gluster as a regular directory on the file system and three volumes within it
22:35 burn420 @samppah we were looking at the other day ... It sounds very interesting...
22:36 spechal the volume is started according to gluster volume info all
22:36 burn420 is it the same version of the client that is on the server? seems like I had that problem before, a long time ago
22:37 burn420 I don't know if it was the version
22:37 burn420 check if the client and server are same versions though
22:38 burn420 It could have been missing a library too. If I remember correctly it's been 6 months to a year since I had that problem
22:38 spechal burn420: yes, all are 3.4.1-3.el6
22:38 burn420 what os you using ?
22:38 spechal I have bluster-fuse and the OS fuse packages installed
22:38 spechal CentOS 6.5
22:39 burn420 can you see it hitting the logs on server?
22:42 spechal Yes, Received get vol req
22:43 burn420 is the volume called legacy?
22:43 spechal yes
22:43 burn420 probably only need glusterfs:/legacy
22:43 burn420 mount -t glusterfs glusterfs:/legacy /mnt/legacy
22:45 burn420 is glisters the server name?
22:45 spechal no, that was autocorrect
22:45 burn420 spell check don't like the word gluster
22:45 burn420 keeps correcting me
22:46 spechal That is my problem ... using the path instead of the volume name
22:46 spechal Thank you
22:46 burn420 np
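For anyone hitting the same error: the client mounts a volume by its name as shown in gluster volume info, not by the brick or directory path backing it — e.g. (server name is a placeholder):

    # wrong: the on-disk path that holds the volume
    mount -t glusterfs server1:/gluster/legacy /mnt/legacy
    # right: the volume name
    mount -t glusterfs server1:/legacy /mnt/legacy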
22:49 diegows joined #gluster
