
IRC log for #gluster, 2014-01-18


All times shown according to UTC.

Time Nick Message
00:40 jporterfield joined #gluster
01:31 tryggvil joined #gluster
01:43 TrDS left #gluster
02:00 raghug joined #gluster
02:01 dbruhn johnmark or kkeithley, I saw someone mentioned tiered storage the other day as a potential 3.6 feature... is that going to be a reality?
02:56 jporterfield joined #gluster
03:03 jporterfield joined #gluster
03:11 chirino joined #gluster
03:22 purpleidea dbruhn: i'd also like the answer to this question. someone ping me in the replies :)
03:37 jporterfield joined #gluster
03:41 jobewan joined #gluster
03:42 shyam joined #gluster
03:49 jporterfield joined #gluster
03:53 [o__o] joined #gluster
03:56 psyl0n joined #gluster
03:59 jporterfield joined #gluster
03:59 jag3773 joined #gluster
04:29 jporterfield joined #gluster
04:32 davinder joined #gluster
04:40 jporterfield joined #gluster
04:52 mattappe_ joined #gluster
04:57 shyam joined #gluster
05:08 harish joined #gluster
05:16 jporterfield joined #gluster
05:19 davinder joined #gluster
05:23 raghug joined #gluster
05:29 shyam joined #gluster
05:34 shyam joined #gluster
05:39 hagarth joined #gluster
05:40 jporterfield joined #gluster
05:41 shyam joined #gluster
05:45 shyam joined #gluster
05:51 davinder joined #gluster
05:52 ababu joined #gluster
06:00 benjamin__ joined #gluster
06:07 jporterfield joined #gluster
06:19 TheDingy joined #gluster
06:27 mohankumar__ joined #gluster
06:29 jporterfield joined #gluster
06:29 davinder joined #gluster
06:31 raghug joined #gluster
06:32 glusterbot New news from newglusterbugs: [Bug 1055037] Add-brick causing exclusive lock missing on a file on nfs mount <https://bugzilla.redhat.co​m/show_bug.cgi?id=1055037>
06:58 jporterfield joined #gluster
07:07 sticky_afk joined #gluster
07:07 stickyboy joined #gluster
07:08 shyam joined #gluster
07:08 lalatenduM joined #gluster
07:12 ababu joined #gluster
07:27 DV joined #gluster
07:34 shyam joined #gluster
07:36 dbruhn joined #gluster
07:39 shyam1 joined #gluster
07:42 GLHMarmot joined #gluster
07:45 raghug joined #gluster
07:51 iksik joined #gluster
07:53 ekuric joined #gluster
07:56 Humble joined #gluster
08:00 Guest13596 joined #gluster
08:00 iksik joined #gluster
08:00 tbaror hello
08:00 glusterbot tbaror: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:01 samppah howdy
08:02 tbaror We would like to build our own scale-out/scale-up storage, something Isilon-like, or a distributed file system.
08:02 tbaror We reached that decision because building such storage ourselves seems to fit our needs performance- and space-wise, and of course price-wise, using low-cost commodity hardware such as Supermicro chassis etc.
08:02 tbaror Our current thought is GlusterFS, so I'm asking those with knowledge and, ideally, hands-on experience with the technology to help me decide which option is worth investing in.
08:02 tbaror The storage scenario is as follows: we will write (over CIFS) mostly video, from scratch, in HD/DV formats with 2 additional audio channels; there will be video seeks on large files and conversion of high-resolution video (HD, DV) to lower-resolution formats such as H.264; we may also mount VM guests over NFS.
08:02 tbaror My main considerations are:
08:02 tbaror High availability on node failure
08:02 tbaror Performance for the scenario described above
08:02 tbaror Ease of management
08:02 tbaror Please advise
08:02 tbaror Thanks
08:05 samppah tbaror: this may be interesting for you to read http://forums.overclockers.com.au/showthread.php?t=1078674
08:05 tbaror In addition is there option with Gluster to create like Raid5 or 6 stripes accross nodes ?
08:06 samppah there was also a hangout session with Dan Mons http://www.gluster.org/2014/01/gluster-hangout-with-daniel-mons-from-cutting-edge/
08:08 samppah tbaror: currently it's not possible to create a raid5-like setup with glusterfs, however there are plans for something like that https://forge.gluster.org/disperse
08:08 glusterbot Title: Dispersed Volume - Gluster Community Forge (at forge.gluster.org)
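(Dispersed volumes did eventually land in GlusterFS 3.6. A minimal sketch of the creation syntax from those later releases, with hypothetical volume, server and brick names:)

    # 3 bricks, 1 of which is redundancy: survives the loss of any one brick,
    # roughly analogous to RAID 5 across nodes
    gluster volume create videovol disperse 3 redundancy 1 \
        server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1
    gluster volume start videovol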
08:08 tbaror Thanks :-)for the info
08:10 iksik tbaror: i'm trying to build a similar solution (being totally new to clusters), and i'm wondering how i should connect the nodes in my cluster to achieve the best performance - since nodes (i think) can't serve files directly to the clients
08:11 iksik where by 'client' i mean someone who is downloading the file
08:11 tbaror Is there a roadmap for RAID-like functionality in a near release?
08:12 tbaror i am planning a 10G Ethernet network for both inter-node communication and clients
08:20 samppah tbaror: glusterfs 3.5 is going to be released "soon" and developers are planning 3.6 .. afaik neither of those include raid5 like functionality
08:21 samppah iksik: what os you are using on clients?
08:23 tbaror mostly Windows 7, Win2k8R2 & a few Macs using Final Cut Pro
08:24 tbaror Thanks for the info about the RAID progress; as a matter of fact this is a really important factor.
08:26 iksik samppah: i haven't set up any clients yet; it would be nice to have freebsd there, but it seems it won't happen
08:50 ricky-ti1 joined #gluster
08:55 Zylon joined #gluster
09:11 sputnik13 joined #gluster
09:13 sputnik13 trying to evaluate whether I should be using gluster...  I need a distributed redundant filesystem primarily to provide an active/passive redundancy configuration for an openstack controller and secondarily to potentially provide the backing store for an iscsi SAN
09:14 sputnik13 when I search on google about gluster, I'm finding posts about small-file performance being really bad, which is a bit concerning, but at the same time those posts are from a few years ago... have those problems been addressed?
09:17 benjamin__ joined #gluster
09:21 rotbeard joined #gluster
09:28 ZhangHuan joined #gluster
09:50 purpleidea @joe's performance metric
09:50 glusterbot purpleidea: nobody complains.
09:50 purpleidea sputnik13: ^^^
09:51 purpleidea ~joe's performance metric | sputnik13
09:51 glusterbot sputnik13: nobody complains.
09:51 purpleidea you have to test out your actual use case and extrapolate if it will perform to your satisfaction. there are too many variables to reliably predict everything
09:52 purpleidea setup a small test cluster and see how well it works for you and then think about scaling up and so on...
09:52 purpleidea i use vagrant and puppet-gluster
09:52 purpleidea ~vagrant | sputnik13
09:52 glusterbot sputnik13: (#1) Part 1 @ https://ttboj.wordpress.com/2013/12​/09/vagrant-on-fedora-with-libvirt/, or (#2) Part 2 @ https://ttboj.wordpress.com/2013/12​/21/vagrant-vsftp-and-other-tricks/, or (#3) Part 3 @ https://ttboj.wordpress.com/2014/01/​02/vagrant-clustered-ssh-and-screen/, or (#4) Part 4 @
09:52 glusterbot https://ttboj.wordpress.com/2014/0​1/08/automatically-deploying-glust​erfs-with-puppet-gluster-vagrant/, or (#5) https://ttboj.wordpress.com/2014/01/16​/testing-glusterfs-during-glusterfest/
09:53 sputnik13 purpleidea: fair point, it's just that I'm on a rather short timeline and hoping to narrow down my options without going through full trials
09:58 erik49__ joined #gluster
09:59 wanye joined #gluster
10:00 hflai joined #gluster
10:02 sputnik13 joined #gluster
10:03 RobertLaptop joined #gluster
10:03 jporterfield joined #gluster
10:03 fidevo joined #gluster
10:04 purpleidea sputnik13: i hear you, but you're either doing engineering or guessing. it's up to you!
10:04 cyberbootje joined #gluster
10:04 purpleidea you can probably knock down the puppet-gluster+vagrant testing with my above notes in less than a day, probably in a few hours if you know what you're doing
10:05 purpleidea if you have questions, you're welcome to ping me
10:06 sputnik13 http://forums.overclockers.co​m.au/showthread.php?t=1078674
10:06 sputnik13 this is encouraging
10:07 sputnik13 purpleidea: I will keep that in mind, thank you
10:07 purpleidea sputnik13: yeah he gave a youtube talk thing the other day
10:07 sputnik13 it so happens we do use puppet for configuration management so it's likely I'll be using your module :)
10:08 sputnik13 purpleidea: do you have the link handy?
10:08 purpleidea sputnik13: well please let me know if you have any issues. it's _the_ module to use :) if there's something it can't do for you, let me know as it's probably just undocumented.
10:08 purpleidea sputnik13: http://www.gluster.org/2014/01/gluster-ha​ngout-with-daniel-mons-from-cutting-edge/
10:11 sputnik13 purpleidea: thank you for that, watching now :)
10:12 purpleidea sputnik13: no worries! it's cool to hear about his setup. i think about half way through he shows his topology, and the forum post is also very useful.
10:14 jporterfield joined #gluster
10:15 TrDS joined #gluster
10:21 harish joined #gluster
10:25 sputnik13 it seems like gluster wouldn't work well or won't work at all if you were dealing with file sizes that go beyond the size of a node
10:27 mohankumar joined #gluster
10:28 purpleidea sputnik13: how big are your files?
10:29 Worf joined #gluster
10:30 sputnik13 depends on how we make the storage accessible...  we're hosting repositories that can easily go multiple terabytes
10:30 purpleidea sputnik13: no, a single file, how big (max)
10:30 sputnik13 I'd rather not expose the underlying storage to the tenant vms
10:32 sputnik13 that's what I mean, it depends on how we're exposing the underlying storage, so if the repository is accessing a virtual volume for storage, which will be multiple terabytes, I can see it going beyond a single storage node's storage
10:33 sputnik13 sorry, "thinking out loud" :)  for my current problem this is a non-issue, but for the longer term problem I need to solve this could be an issue, have to think through it more
10:34 purpleidea it's not entirely clear to me what you're trying to do. i think either i'm missing information, or perhaps you've mis-read something about glusterfs. in any case, if you _really_ do want to "stripe" individual files, gluster supports this but it's very uncommon and i wouldn't do it without an official redhat support contract.
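(The feature being referred to is the stripe translator; a minimal sketch with hypothetical host and brick names. Stripe volumes were uncommon and were later deprecated in favour of sharding:)

    # split each file into chunks (128 KB by default) spread across four bricks
    gluster volume create bigfiles stripe 4 \
        node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1 node4:/bricks/b1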
10:35 iksik finally, vagrant-libvirt is working under osx, ugh
10:35 ZhangHuan joined #gluster
10:35 purpleidea iksik: why bother with osx when you can run a real gnu/linux box :P
10:36 iksik well, i really like to test smaller bits on my desktop
10:37 sputnik13 purpleidea: I'm not quite set on whether I really do want to stripe or leave it to my customers to stripe across multiple virtual volumes, but the important bit of information I needed about gluster with respect to this topic is whether #1 it supports distribution/striping a file across multiple nodes (which you've said it does) and how well it works (which from what you said is not terribly recommended)
10:38 purpleidea sputnik13: what are you talking about "virtual volumes"? do you understand how gluster stores say a directory of 50 files across say four hosts?
10:39 sputnik13 purpleidea: yes I do understand, perhaps I wasn't clear that I'm running openstack :)
10:39 purpleidea sputnik13: ah
10:39 sputnik13 so... my customers/tenants would interface with the underlying storage through virtual volumes, or at least I'd prefer they do
10:40 sputnik13 as trying to mediate/control direct access to a glusterfs from multiple tenants is an exercise I don't want to go through
10:41 purpleidea sputnik13: so tbh, i'm not very familiar with all the openstack details... i think you might be going about this in a more complex than necessary or indirect way
10:42 sputnik13 purpleidea: I think so too, I could just say we won't give customers single contiguous volumes larger than x TB and let them tie them all up in an lvm or something just as well
10:42 purpleidea sputnik13: tell me if this makes sense and if you've overlooked this approach... (first a question though)
10:42 purpleidea you're providing vm's to your customers that run on openstack, yes? and you want them to have storage available, right?
10:43 sputnik13 purpleidea: yes, and they need to be able to scale their storage out to X terabytes
10:44 sputnik13 we haven't set a ceiling on the X
10:44 purpleidea sputnik13: okay, here's your mistake (and the solution...)
10:44 purpleidea 1) build a gluster pool of size N hosts.
10:44 purpleidea 2) you have some openstack cluster
10:45 purpleidea 2b) you can optionally host the os images for openstack on glusterfs, but it doesn't matter. typically an os image will be say 40G+/-
10:45 purpleidea 3) inside each guest vm running on openstack you _mount_ glusterfs volumes as needed...
10:45 purpleidea you don't want to have to pass through the vm layer to hold the storage of course...
10:45 purpleidea get it?
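(Step 3 is just an ordinary client mount inside each guest; a minimal sketch with hypothetical server and volume names:)

    # one-off mount from inside the guest
    mount -t glusterfs gluster1.example.com:/tenant-vol /mnt/data

    # or persistently via /etc/fstab; backupvolfile-server names a second
    # server to fetch the volume definition from if the first is down
    gluster1.example.com:/tenant-vol /mnt/data glusterfs defaults,_netdev,backupvolfile-server=gluster2.example.com 0 0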
10:46 sputnik13 right, so that's the approach I want to avoid
10:46 sputnik13 :)
10:46 purpleidea sputnik13: you want to avoid my 1.2.3 solution? why?
10:47 sputnik13 the reason being, we have multiple tenants that need to be confined to their own areas, and exposing the underlying glusterfs is not an attractive option to us for the same reasons we don't use LXC or straight shell accounts to provide access
10:49 sputnik13 we also need to be able to account for their usage and tie their access control to the same openstack credentials
10:49 purpleidea well, doing it the other way (having TB size vm images) is probably a failure waiting to happen... As for the multi-tenancy concerns, this is legitimate, hekafs addressed some of these, and i think glusterfs will be doing better in this area in the future
10:49 sputnik13 the customers are storing potentially sensitive data as well, so if we told them that there's "shared storage", the whole thing would be a non-starter
10:50 purpleidea sputnik13: do they have root on the vm machines
10:50 sputnik13 purpleidea: I agree, with respect to having TB size vm images being a "bad idea" and yes they have root to the vm
10:51 sputnik13 like I said, it's a longer term problem I need to solve :)
10:51 purpleidea sputnik13: you can have separate gluster volumes per customer, but there isn't a strong mechanism to prevent access... there is auth.allow/deny which you can use to restrict by ip address if that's enough
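(auth.allow/auth.reject are ordinary volume options; a minimal sketch with a hypothetical volume name and addresses:)

    # restrict mounting of a per-tenant volume to that tenant's addresses
    gluster volume set tenant-a-vol auth.allow '10.10.1.11,10.10.1.12'
    # check what is currently set
    gluster volume info tenant-a-vol | grep auth.allow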
10:54 sputnik13 restrict by IP might be enough but that also puts more burden on us for maintaining the access list...  we're not really a fully staffed infrastructure provider (at least not yet), just trying to provide a value add service to the industry and I'm the one guy that half way knows his way around cloud infrastructure (and that's being generous ;) )
10:54 sputnik13 so the thing I like about gluster so far vs ceph is it just seems much simpler
10:54 sputnik13 for the same reason this daniel mons guy liked the simplicity, I really like the simplicity vs ceph
10:55 purpleidea sputnik13: there's no maintenance burden for me... eg: https://github.com/purpleidea/puppe​t-gluster/blob/master/examples/file​system-backed-bricks-example.pp#L49
10:55 glusterbot Title: puppet-gluster/examples/filesy​stem-backed-bricks-example.pp at master · purpleidea/puppet-gluster · GitHub (at github.com)
10:55 purpleidea sputnik13: you can use the puppet module with the ::property to manage access lists automatically but with gluster::simple https://github.com/purpleidea/puppet-gluster/b​lob/master/examples/gluster-simple-example.pp
10:55 glusterbot Title: puppet-gluster/examples/gluster-simple-example.pp at master · purpleidea/puppet-gluster · GitHub (at github.com)
10:56 sputnik13 right, but the vms are self service, the customer turns up VMs at will, so IPs will be random
10:56 purpleidea sputnik13: so?
10:56 sputnik13 so...  someone on our side needs to populate the right access control list with the right IPs
10:56 purpleidea sputnik13: nope
10:56 sputnik13 before they'll have access to storage
10:57 sputnik13 ok, I'm not getting something, please edumacate me if you don't mind :)
10:58 samppah we are also using gluster to store vm images and selling VPS to our customers.. even if it would be good in some cases it's not always possible to say to customers that you have to mount nfs/gluster volume to be able to store data
10:58 purpleidea you know, automate it... it's almost trivial...
10:58 purpleidea sputnik13: familiar with puppet? you can export the ip fact as a resource... or when you provision your machine, have it update the auth.allow property automatically...
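(A rough sketch of the "update auth.allow when a guest is provisioned" idea as a shell hook; the volume name and the way the new IP arrives are assumptions:)

    #!/bin/sh
    # hypothetical post-provisioning hook: append the new guest's IP to the tenant volume ACL
    VOL=tenant-a-vol
    NEWIP=$1    # handed over by whatever provisions the guest
    CUR=$(gluster volume info "$VOL" | awk -F': ' '/auth.allow/ {print $2}')
    gluster volume set "$VOL" auth.allow "${CUR:+$CUR,}$NEWIP"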
10:59 samppah also by security means i don't want customers to be able to access storage network directly
11:00 purpleidea samppah: glusterfs needs kerberos support. i've mentioned this many times :(
11:00 samppah although i have been thinking about creating a self-service portal where it would be possible to create object storage/nfs/iscsi/glusterfs shares (probably from different gluster clusters) :)
11:00 sputnik13 purpleidea: right, but what you're saying requires that either openstack provide those exported facts or the vm images be preprovisioned to talk with our puppet master/db
11:01 sputnik13 purpleidea: both of these are not really options for tenant vms not under our control...  we're just providing an infrastructure not a full saas
11:01 purpleidea sputnik13: you can watch arp, or you can run a script from the vm (if you trust it enough to)... i mean, at some layer either you have no control over what the machine does - and it could just eat any ip it wants, including other tenants' - or you have some management of it....
11:02 purpleidea sputnik13: anyways i think i've given you all the info i can. goodluck with your setup.
11:02 sputnik13 purpleidea: right, but you watch arp, have custom code sniffing IPs, then go query openstack to see whether the tenant should have access to gluster and if so where, then setup the access...
11:03 purpleidea sputnik13: tbh, it sounds like you have zero-access control over the guests, which if that's so, you can't really guarantee anyone else's storage is safe whether kerberos worked or not for example...
11:04 sputnik13 purpleidea: it's an interesting idea, but it seems a bit complicated to me
11:04 sputnik13 purpleidea: the guests have full control, yes
11:04 purpleidea sputnik13: i don't think arp is the right solution
11:05 purpleidea sounds like you have big problems. what company is this?
11:05 sputnik13 yeah, I was hoping you wouldn't think so ;)
11:05 sputnik13 we're in the genomics field
11:05 purpleidea sputnik13: sweet. i'm a physiologist by training
11:06 sputnik13 so the repositories do indeed get large
11:06 purpleidea sputnik13: security 101, if the guests have full control, in the environment you're describing, i think your network can get p0wned fairly easily...
11:07 sputnik13 purpleidea: I agree, the good thing is this is not a general public cloud that joe blow can sign up for and get access to :)
11:07 purpleidea sputnik13: then just mount gluster volumes 1.2.3 like i mentioned, and use auth.allow
11:08 purpleidea if you want to write the automatic ip address setting up stuff i mentioned, it's easy or buy me some $beverages and i'll do it for you
11:08 purpleidea in the future when glusterfs gets better acl type support (maybe kerberos) you'll add in extra security i guess
11:09 purpleidea because atm, you're working in a "trusted" environment anyways...
11:09 sputnik13 purpleidea: I appreciate the help and the offer, but I think we need to dig through this more
11:09 purpleidea sputnik13: cool, good luck!
11:09 sputnik13 and yes we "trust" our customers, but the problem is, they don't trust each other :)
11:10 DV joined #gluster
11:10 sputnik13 anyway, I've taken up enough of your time, thanks so much for spending the time to talk :)
11:10 purpleidea yw
11:10 sputnik13 I'll definitely be in touch about the puppet script ;)
11:17 jporterfield joined #gluster
11:19 ells joined #gluster
11:20 Worf i'm reading about GlusterFS (and others), trying to figure out if it could be the right thing for me:
11:20 Worf I'm looking for some setup that would allow me running virtual machines (with KVM) on a small number of hosts, and the ability to migrate them from one host to another. So i kinda need some suitable storage.
11:22 purpleidea Worf: test it out with ,,(vagrant) first
11:22 glusterbot Worf: (#1) Part 1 @ https://ttboj.wordpress.com/2013/12​/09/vagrant-on-fedora-with-libvirt/, or (#2) Part 2 @ https://ttboj.wordpress.com/2013/12​/21/vagrant-vsftp-and-other-tricks/, or (#3) Part 3 @ https://ttboj.wordpress.com/2014/01/​02/vagrant-clustered-ssh-and-screen/, or (#4) Part 4 @
11:22 glusterbot https://ttboj.wordpress.com/2014/0​1/08/automatically-deploying-glust​erfs-with-puppet-gluster-vagrant/, or (#5) https://ttboj.wordpress.com/2014/01/16​/testing-glusterfs-during-glusterfest/
11:24 psyl0n joined #gluster
11:24 ells_ joined #gluster
11:25 purpleidea Worf: and this should work. qemu has native glusterfs integration...
11:25 Worf purpleidea: reading ... lots to read ...
11:26 purpleidea Worf: sorry!
11:26 purpleidea 1 day of homework, and weeks of engineering saved :)
11:26 Worf purpleidea: hehe - no, i asked for information, you give me information :)
11:26 purpleidea Worf: well i told you that 1) it should work, and 2) gave you an easy way to test and re-test things.
11:26 psyl0n joined #gluster
11:27 purpleidea Worf: also this link: http://libvirt.org/storage.​html#StorageBackendGluster
11:27 glusterbot Title: libvirt: Storage Management (at libvirt.org)
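(A minimal sketch of the libvirt gluster storage pool that page describes, assuming libvirt >= 1.2.0 and hypothetical host/volume names:)

    # define and start a libvirt storage pool backed directly by a gluster volume
    virsh pool-define /dev/stdin <<'EOF'
    <pool type='gluster'>
      <name>vmpool</name>
      <source>
        <host name='gluster1.example.com'/>
        <dir path='/'/>
        <name>vmvol</name>
      </source>
    </pool>
    EOF
    virsh pool-start vmpool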
11:29 purpleidea ~next
11:29 purpleidea @next
11:29 glusterbot purpleidea: Error: You must be registered to use this command. If you are already registered, you must either identify (using the identify command) or add a hostmask matching your current hostmask (using the "hostmask add" command).
11:30 purpleidea ,,(next)
11:30 glusterbot Another happy customer... NEXT!
11:31 tryggvil joined #gluster
11:31 Worf ... hmm ... Vagrant homepage is a bit ... discouraging tough ...
11:31 purpleidea Worf: how come ?
11:34 Worf well, i think i've read now more than 10 times that it will make my life better and how brilliant it is, how much time and cost it will save me, that it "just works", etc, etc ... but i can only guess what it actually does.
11:34 purpleidea i don't understand your comment. vagrant is pretty handy, yes.
11:35 sputnik13 Worf: you can only guess until you use it, at which point you're very likely to become an unabashed fanboi
11:36 * sputnik13 loves vagrant
11:36 * purpleidea ++
11:36 Worf well, the about page only states "a tool for building complete development environments" ... that's kinda vague
11:36 sputnik13 although, if there's one nit it's that compiling from a box on a synced folder doesn't work
11:36 purpleidea Worf: read my articles
11:36 sputnik13 but that's not vagrant's fault
11:37 Worf purpleidea: i'll read :)
11:37 purpleidea i don't hack on puppet-gluster or glusterfs without vagrant anymore since i started using it
11:38 sputnik13 Worf: it's a tool for defining VMs and how they should be built, which if you think about it is pretty damn powerful
11:38 sputnik13 single cli command to build your environment, 2 commands to tear it down and rebuild it
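(Those commands, for reference:)

    vagrant up            # build every VM defined in the Vagrantfile
    vagrant destroy -f    # tear the whole environment down
    vagrant up            # ...and rebuild it from scratch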
11:38 purpleidea sputnik13: well sort of. pretty useless without some puppet glue...
11:39 sputnik13 puppet, chef, shell, all work, but they all fail to be as useful for throw away environments without a vm bootstrapper like vagrant
11:39 purpleidea indeed!
11:39 sputnik13 preaching to the choir here :)
11:40 purpleidea sputnik13: i guess Worf's got his/her homework to do :)
11:40 sputnik13 indeed
11:40 Worf define "homework" :)
11:41 purpleidea sputnik13: i'm curious to hear more about your genomics work if you have stuff you can talk about. you can pm me if it's semi-private. i'm under frie-nda
11:41 purpleidea Worf: homework == reading
11:41 purpleidea (and hacking)
11:41 Worf then yes :)
11:41 Worf and i start to grasp what vagrant is about, but it doesn't seem to be of any use to my case :)
11:42 purpleidea Worf: i bet you're wrong about that :P
11:42 purpleidea Worf: are you on a Fedora 19-20 machine?
11:42 ells Worf: start here: http://docs.vagrantup.com/v2​/getting-started/index.html
11:42 glusterbot Title: Getting Started - Vagrant Documentation (at docs.vagrantup.com)
11:43 Worf no, i'm on debian. but from what i understand the main purpose of vagrant is to automate the creation of new VMs.
11:43 purpleidea yep and then re-create
11:43 purpleidea and so on
11:44 ells you don't have to use it - you can install your OS manually every time in VMware or VirtualBox on your laptop - and take snapshots to start off from scratch every time you want to test a change to your code
11:44 ells or u can just use vagrant.
11:45 ells i can test my whole puppet manifests for any given role on my laptop
11:45 ells destroy and start again all in a matter of minutes
11:45 ells all on the command line, no mouse ;-)
11:45 diegows joined #gluster
11:45 purpleidea ells: thank goodness no mouse :)
11:46 ells or touchpad ;)
11:47 iksik vagrant-pristine or similar is also a really handy thingy ;]
11:47 purpleidea i've poisoned #gluster with vagrant talk :P
11:47 Worf well, i don't have a lot of VMs, creating them is a task that doesn't happen frequently, and from what i've read so far only very few of them could be prepared by vagrant :)
11:47 purpleidea ells: have you tried out my puppet-gluster+vagrant work?
11:48 purpleidea Worf: what can't you build with vagrant ?
11:49 ells https://ttboj.wordpress.com/ - im new to GlusterFS - so started lurking here now and then - your blog made it onto my reading list ;-)
11:49 Worf purpleidea: i've not yet read enough documentation, but from the features i've read i somehow doubt that it can create a Windows VM
11:49 purpleidea Worf: haha yeah nobody here does windows i think :P
11:49 purpleidea (sorry)
11:49 iksik but there are ready to use vagrant boxes with windows ;-)
11:50 purpleidea ells: cool, ping me if you have any questions or suggestions to make something clearer, etc...
11:50 ells google says: https://github.com/WinRb/vagrant-windows
11:50 glusterbot Title: WinRb/vagrant-windows · GitHub (at github.com)
11:50 Worf anyway - actually i wanted to talk about suitable storage solutions for running VMs
11:50 ells purpleidea: ta ;-)
11:50 purpleidea Worf: did you look at the libvirt link i sent ya?
11:50 ells going from a NetApp in a colo to GlusterFS in AWS :-0
11:51 Worf purpleidea: yes. not long enough tough :)
11:51 * purpleidea didn't know about vagrant+windows ... til!
11:51 purpleidea tough? you mean though?
11:52 Worf ah yes my spelling is bad
11:52 purpleidea Worf: no worries... so what information is missing from that then?
11:53 Worf my problem is: i have to mostly work with the hardware i have available, and the network infrastructure isn't exactly suitable for having VMs and storage all separated.
11:53 purpleidea Worf: no reason you need to separate them
11:54 purpleidea what i mean is, there's no rule that forces you to. but it's often common practice
11:55 Worf i know. and we have some ESX hardware at work where it is practice ...
11:56 Worf so ... lets assume i set up libvirt + glusterfs on a couple of hosts. i guess i could live-migrate a vm from one host to another
11:56 purpleidea yep
11:56 Worf could i live-migrate the vm image as well, keeping vm + image on the same host?
11:56 purpleidea should be. although i haven't personally tested that myself
11:57 Worf ( i'm not exactly talking in the right terms here i assume )
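(For the first part - moving the running guest while its disk image stays on the shared gluster volume - the usual libvirt invocation looks roughly like this, with hypothetical guest and host names:)

    # live-migrate guest1 to host2; only memory state moves, the disk image
    # stays on the gluster volume that both hosts can reach
    virsh migrate --live guest1 qemu+ssh://host2.example.com/system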
11:57 purpleidea Worf: i think you might be confused about the topology/architecture
11:57 Worf well, maybe not confused, but not familiar with the fine-tuning possibilities
11:58 purpleidea Worf: http://awwapp.com/draw.html#a5f05d73 let's try this
11:58 glusterbot Title: A Web Whiteboard (at awwapp.com)
11:59 Worf i guess even in glusterfs, at a given point of time, one node will have the read-write version of a file/block/whatever
11:59 purpleidea Worf: okay i'll type here, watch there, okay?
11:59 Worf ay :)
12:01 purpleidea Worf: storage for vm (red dot) os, is host "on the gluster pool somewhere"
12:01 Worf ay
12:02 purpleidea storage for vm is always in the pool...
12:02 Worf i think replication can be configured a bit ... so for example fsync is considered "done" as soon as at last one node has written the to disk?
12:03 purpleidea Worf: no
12:03 purpleidea i don't think so.
12:03 purpleidea you meant least* ?
12:03 Worf ... yeah ... as usual ...
12:03 purpleidea it's all synchronous
12:04 purpleidea do you get the topology now?
12:04 Worf yes
12:04 purpleidea Worf: there's one cool thing...
12:05 purpleidea if you use the qemu storage thing, it directly talks to gluster, avoiding the overhead of mounting... it uses libgfapi to do this. you can read about this if you're interested
12:05 Worf i already found that ... that's one of the reasons why i am here and read up on gluster in more detail
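(A minimal sketch of that libgfapi path, assuming a qemu build with gluster support and hypothetical names:)

    # create the guest image directly on the gluster volume, no FUSE mount involved
    qemu-img create -f qcow2 gluster://gluster1.example.com/vmvol/guest1.qcow2 40G
    # boot the guest straight off the volume via libgfapi
    qemu-system-x86_64 -enable-kvm -m 2048 \
        -drive file=gluster://gluster1.example.com/vmvol/guest1.qcow2,if=virtio,format=qcow2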
12:06 Worf back to the topology:
12:06 purpleidea ay
12:06 RedShift joined #gluster
12:06 Worf nothing keeps me from putting a glusterfs node and a vm host on the same physical machine.
12:07 purpleidea Worf: correct, except remember that the vm's still have to mount the storage from the "cluster" you don't "access it directly". your files are in the pool. not necessarily on that same host
12:07 Worf so if it is possible to ... control where replicas go, reading could stay on the same physical machine ...
12:09 purpleidea Worf: there are actually some tricks you'd need to do to get this type of behaviour, but it's possible mostly
12:09 Worf now those tricks are what would interest me :)
12:09 purpleidea i wouldn't bother trying to do this unless you're really doing something specialized. first make it work, then optimize later. seriously. you probably don't want to do the tricks.
12:10 Worf i'm not sure if you would say that if you knew what i did last week :)
12:11 purpleidea Worf: what i'm saying is, i'm not going to tell you advanced stuff until you've at least set up a dozen or so gluster clusters first.
12:11 purpleidea walk, and then run, you know?
12:12 Worf ok, that's a valid point
12:12 purpleidea Worf: back to your homework now :P
12:13 purpleidea good luck and ping me if you're stuck on puppet or vagrant things.
12:13 purpleidea ,,(next)
12:13 glusterbot Another happy customer... NEXT!
12:14 purpleidea @forget next
12:14 glusterbot purpleidea: The operation succeeded.
12:14 purpleidea @learn next as Another satisfied customer... NEXT!
12:14 glusterbot purpleidea: The operation succeeded.
12:14 purpleidea ,,(next)
12:14 glusterbot Another satisfied customer... NEXT!
12:14 purpleidea maybe it sounds better that way :P
12:15 Worf however, you now understand what i'm interested in. just one last question: would you say that "while not exactly the goal, glusterfs is probably the right way to go", or should i look in a totally different direction?
12:17 purpleidea Worf: afaict it's the right tool. i don't know of a distributed storage solution that will serve you better. i'm excluding proprietary systems, which cost 10x more anyways, and by definition suck. also i'm biased of course because i'm in glusterfs, but i'm quite happy with the benefits of it.
12:17 Worf thanks :)
12:17 purpleidea yw
12:23 jporterfield joined #gluster
12:30 jporterfield joined #gluster
12:37 hagarth joined #gluster
12:55 jporterfield joined #gluster
12:57 mattappe_ joined #gluster
12:58 Cenbe joined #gluster
13:07 ricky-ticky1 joined #gluster
13:07 jporterfield joined #gluster
13:27 jporterfield joined #gluster
13:37 jporterfield joined #gluster
13:54 jporterfield joined #gluster
13:57 psyl0n joined #gluster
14:04 jporterfield joined #gluster
14:07 ells joined #gluster
14:17 ells_ joined #gluster
14:18 ells__ joined #gluster
14:25 zaitcev joined #gluster
14:58 psyl0n joined #gluster
15:00 mattappe_ joined #gluster
15:01 tbaror Hi again, i did some googling
15:01 tbaror and there are a few mentions of issues with the ability to handle recursive ops like du, ls, find
15:01 tbaror and with the ability to handle many tiny files; this worries me - any improvement in recent versions?
15:01 tbaror
15:07 mattapp__ joined #gluster
15:14 edward1 joined #gluster
15:38 robo joined #gluster
15:51 ZhangHuan joined #gluster
15:52 TrDS joined #gluster
15:52 Humble joined #gluster
15:52 [o__o] joined #gluster
15:52 KORG joined #gluster
15:52 rwheeler joined #gluster
15:52 qdk joined #gluster
15:52 eightyeight joined #gluster
15:52 Slasheri joined #gluster
15:52 Gugge joined #gluster
15:52 JonnyNomad joined #gluster
15:58 TrDS joined #gluster
15:58 Humble joined #gluster
15:58 [o__o] joined #gluster
15:58 KORG joined #gluster
15:58 rwheeler joined #gluster
15:58 qdk joined #gluster
15:58 eightyeight joined #gluster
15:58 Slasheri joined #gluster
15:58 Gugge joined #gluster
15:58 JonnyNomad joined #gluster
16:00 raghug joined #gluster
16:29 tryggvil joined #gluster
16:36 dbruhn joined #gluster
16:52 glusterbot New news from resolvedglusterbugs: [Bug 862082] build cleanup <https://bugzilla.redhat.com/show_bug.cgi?id=862082>
17:28 leochill joined #gluster
17:51 psyl0n joined #gluster
18:18 raghug joined #gluster
18:28 F^nor joined #gluster
18:39 qdk joined #gluster
18:43 gmcwhistler joined #gluster
18:44 gmcwhistler joined #gluster
18:56 mattappe_ joined #gluster
19:06 _pol joined #gluster
19:22 mattappe_ joined #gluster
19:24 mattapperson joined #gluster
20:09 _pol joined #gluster
21:15 DV joined #gluster
21:20 tryggvil joined #gluster
21:20 twx joined #gluster
21:20 nocturn joined #gluster
21:20 GabrieleV joined #gluster
21:20 johnmark joined #gluster
21:20 neofob joined #gluster
21:20 yosafbridge joined #gluster
21:20 klaas joined #gluster
21:20 Dave2 joined #gluster
21:20 asku joined #gluster
21:21 yosafbridge joined #gluster
21:27 TrDS left #gluster
21:27 TrDS joined #gluster
21:53 mattappe_ joined #gluster
22:02 jporterfield joined #gluster
22:04 raghug joined #gluster
22:06 daMaestro joined #gluster
22:15 jobewan joined #gluster
22:29 jporterfield joined #gluster
22:47 sputnik13 is it a bad idea to run applications/services on a gluster server node?
22:57 theron joined #gluster
23:02 TrDS left #gluster
23:02 TrDS joined #gluster
23:12 mattappe_ joined #gluster
23:16 ZhangHuan joined #gluster
23:17 mattapperson joined #gluster
23:24 mattappe_ joined #gluster
23:25 theron joined #gluster
23:35 mattapperson joined #gluster
23:39 ZhangHuan joined #gluster
23:42 JoeJulian sputnik13: Not necessarily, though there have been occasions where kernel bugs have caused race conditions trying to lock memory when using the nfs client locally.
23:44 cyberbootje hi, anyone got ZFS and glusterFS working?
23:48 purpleidea cyberbootje: it has been done, but it's not supported, and not recommended
23:48 cyberbootje hmm ok
