
IRC log for #gluster, 2014-10-12


All times shown according to UTC.

Time Nick Message
00:06 fubada joined #gluster
00:34 Telsin joined #gluster
01:00 justinmburrous joined #gluster
01:21 msmith_ joined #gluster
01:41 rjoseph joined #gluster
01:46 kiwnix joined #gluster
01:50 kiwnix- joined #gluster
02:13 ACiDGRiM joined #gluster
02:14 bala joined #gluster
02:20 rshott joined #gluster
02:35 bigred15 joined #gluster
02:58 justinmburrous joined #gluster
03:10 msmith_ joined #gluster
03:32 bala joined #gluster
03:40 justinmburrous joined #gluster
03:40 justinmburrous joined #gluster
03:46 harish joined #gluster
04:24 lyang0 joined #gluster
04:43 jobewan joined #gluster
04:58 bala joined #gluster
05:31 jobewan joined #gluster
05:41 bala joined #gluster
05:43 jiffin joined #gluster
06:01 ricky-ticky1 joined #gluster
06:05 jiffin joined #gluster
06:07 justinmb_ joined #gluster
06:08 rshott joined #gluster
06:11 ctria joined #gluster
06:12 XpineX joined #gluster
07:12 rafi1 joined #gluster
07:13 davemc joined #gluster
07:49 ekuric joined #gluster
08:26 vimal joined #gluster
09:17 rafi1 joined #gluster
09:27 rafi1 joined #gluster
09:43 rshott joined #gluster
10:02 LebedevRI joined #gluster
10:21 soumya joined #gluster
10:45 shubhendu joined #gluster
11:12 prasanth_ joined #gluster
11:29 mketo joined #gluster
11:31 bala joined #gluster
11:35 mketo Hi everyone! I have a question: when I create a gluster volume in a directory (i.e. /gluster-data), am I allowed to read/write in that directory?
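
(The usual answer here: the directory passed to gluster volume create becomes a brick, and data should be read and written through a client mount of the volume, not directly inside the brick path, otherwise gluster's internal metadata gets out of sync with what is on disk. A minimal sketch, assuming a single-brick volume and hypothetical host/volume names:)

    # create and start a single-brick volume whose brick lives under /gluster-data
    gluster volume create myvol server1:/gluster-data/brick1
    gluster volume start myvol
    # then do all reads/writes through a client mount, not in /gluster-data itself
    mount -t glusterfs server1:/myvol /mnt/myvol
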
11:54 koodough left #gluster
11:57 bala joined #gluster
12:22 kshlm joined #gluster
12:39 rafi1 joined #gluster
13:12 PsionTheory joined #gluster
13:39 DV_ joined #gluster
13:39 rjoseph joined #gluster
13:42 rajesh joined #gluster
14:11 XpineX joined #gluster
14:31 edwardm61 joined #gluster
14:45 n-st joined #gluster
14:52 firemanxbr joined #gluster
15:19 bennyturns joined #gluster
15:23 SOLDIERz joined #gluster
15:23 SOLDIERz joined #gluster
15:35 SOLDIERz joined #gluster
15:56 xoritor joined #gluster
16:02 bala joined #gluster
16:22 soumya joined #gluster
16:26 xoritor is there any other object storage access for gluster other than swift?
16:26 xoritor has anyone setup glusterfs as a docker registry storage backend?
16:27 xoritor ceph can do it with ceph-s3 and swift can do it... but i do not want to run swift for sure
16:28 xoritor well... i do not want to run openstack... not sure if i can _only_ run swift with no other part of the stack
16:34 xoritor even then... its just well... swift... nuff said
16:35 xoritor so is there any other choice than swift for object storage?
16:48 TheFlyingCorpse can I use gluster to have failover between two racks? I'd like to have redundant storage (thinking haproxy for the client connections via nfs/iscsi)
16:50 xoritor TheFlyingCorpse, you can replicate the storage, but the rest would be something else...
16:51 TheFlyingCorpse xoritor, how good can I get the replication? <1s?
16:51 xoritor thats pretty ambiguous
16:51 xoritor what are you looking for?
16:52 xoritor glusterfs is what you make it...
16:52 TheFlyingCorpse I'm looking for an alternative to buying synchronous SANs that cost around 200,000 EUR for hosting a set of WLCs (critical to us). The need for the solution is "just" for the next 8 months.
16:52 xoritor it is either distributed, distributed-replicated, distributed-striped-replicated, or something in between
16:53 TheFlyingCorpse ah. I want replicated.
16:53 xoritor so i have used it for just that kind of thing...
16:53 xoritor you probably really want distributed replicated
16:53 xoritor figure out how many "copies" you need
16:54 TheFlyingCorpse yup. I have 2 racks (the datacenters are <1ms apart) that I want complete storage redundancy across.
16:54 xoritor make several "bricks"; i tend to use 1 drive per brick
16:54 TheFlyingCorpse ideal solution is two storage nodes (right term?) that are distributed-replicated between themselves (10Gbit backbone).
16:54 morsik xoritor: you can use swift alone (you just need keystone for authorization), we have this setup
16:54 morsik you don't need whole openstack
16:54 morsik also, you can use glusterfs api
16:55 xoritor morsik, yea... but i do NOT want to use swift at all...
16:55 xoritor you can?
16:55 xoritor hmm
16:56 morsik i don't think there's something more currently
16:56 xoritor i will have to look into using the api
16:56 morsik if you want to serve static files (only GET requests) you can use uwsgi with glusterfs plugin
16:56 xoritor no... i want to use it as a docker registry
16:57 xoritor https://github.com/docker/docker-registry/blob/master/ADVANCED.md  look down at the ceph part
16:57 glusterbot Title: docker-registry/ADVANCED.md at master · docker/docker-registry · GitHub (at github.com)
16:57 xoritor i was wondering if glusterfs can do that
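
(One commonly used alternative at the time, rather than an S3-style backend: mount the gluster volume with the FUSE client on the registry host and point the registry's plain file-system storage at it. A rough sketch, assuming the old python docker-registry image and hypothetical host/volume names; the container path shown is that image's default local storage path and may differ by version:)

    # back the docker-registry's local storage with a FUSE-mounted gluster volume
    mount -t glusterfs gfs1:/registryvol /mnt/registryvol
    docker run -d -p 5000:5000 -v /mnt/registryvol:/tmp/registry registry
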
16:58 xoritor TheFlyingCorpse, so what i have is usually a few systems with bricks in them, again i tend to use 1 whole drive per brick because it makes replacement easy for me
16:58 morsik oh…
16:58 xoritor TheFlyingCorpse, you do not have to do that though
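
(The layout xoritor is describing, roughly: a distributed-replicated volume with replica 2, one brick per drive, with the bricks ordered so each replica pair straddles the two racks, since with replica 2 gluster groups the bricks into pairs in the order they are listed. A sketch with hypothetical host, brick and volume names; with only two copies across two racks, quorum/split-brain handling still needs separate thought:)

    # distributed-replicated, 2 copies, one brick per drive, replica pairs spanning the racks
    gluster peer probe rack2-node1
    gluster volume create wlcvol replica 2 \
        rack1-node1:/bricks/sdb/brick rack2-node1:/bricks/sdb/brick \
        rack1-node1:/bricks/sdc/brick rack2-node1:/bricks/sdc/brick
    gluster volume start wlcvol
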
16:59 morsik the S3-compliant one is Swift afaik.
16:59 xoritor :-/
16:59 xoritor bleh
16:59 morsik for sure, it's not glusterfs api
16:59 xoritor ok thanks
16:59 morsik api is designed for deep coding
16:59 xoritor sigh
17:00 TheFlyingCorpse xoritor, nice. I'll look into that model, thanks! I barely need 2TB of storage, might go all out on 15Ks to have the speed (SSDs are a waste in this iteration)
17:00 morsik xoritor: setting up keystone + swift is not difficult
17:00 cliluw joined #gluster
17:00 xoritor morsik, its a PITA to setup and maintain... not to mention it is just horrible
17:00 xoritor i have used it i do not like it
17:01 xoritor it is resource intensive
17:01 morsik we've had a test setup like that for half a year, and now we've started setting up the whole openstack
17:01 xoritor its a bottleneck for performance
17:01 xoritor openstack as a whole sucks
17:01 morsik not as hard as we thought, but getting into it is… well… difficult — true
17:01 xoritor the worst is neutron
17:01 morsik xoritor: not much choice in software like that :P
17:02 xoritor yes... dont use it
17:02 xoritor kubernetes+mesos on libvirt
17:02 xoritor or something else
17:02 xoritor coreos
17:03 xoritor projectatomic if it ever comes out
17:03 xoritor something but NOT opencrack
17:03 morsik xoritor: unfortunately it won't be only for internal use, so it needs an openstack-compliant api ;\
17:03 morsik and nobody will like something like coreos for now
17:03 morsik cause "it's not enterprise" :F
17:03 xoritor heh
17:04 xoritor well then opensift
17:04 xoritor :-P
17:04 xoritor hahahaha
17:04 morsik since we started using CentOS (we previously used Gentoo :D) we use almost everything RedHat gives
17:04 xoritor sorry
17:04 morsik xoritor: yeah, i installed it for test
17:04 xoritor s/sift/shift/
17:04 glusterbot What xoritor meant to say was: well then openshift
17:05 xoritor look at projectatomic
17:05 xoritor its not ready for prime time
17:06 xoritor but it will use kubernetes (probably mesos too) and geard will most likely be rolled into kubernetes
17:06 xoritor if you do not need VM isolation containers are fast
17:06 morsik for myself i'm trying to use uwsgi for small dev cloud purposes :P
17:06 morsik it's quite nice, has many features, still missing some things
17:06 morsik but i think it's worth a try
17:07 xoritor i will look into it
17:07 xoritor thanks
17:08 morsik xoritor: it's a whole hosting stack; originally it was just created for running python webapps
17:08 morsik it's not cloud software, but using it you can build a cloud quite easily from the ground up
17:08 xoritor looks like i am going to have to setup ceph though
17:09 xoritor well... may not be what i am looking for
17:09 xoritor https://mesosphere.com/
17:09 xoritor seen that?
17:09 morsik nope
17:09 xoritor basically you can do what google does
17:10 xoritor https://mesosphere.com/docs/
17:12 xoritor it took me all of 15-20 min to have kubernetes and mesos installed, clustered and running from bare metal
17:13 morsik not bad…
17:13 morsik we set up openstack in 24hrs at a local hackathon lol :D
17:13 xoritor on 3 nodes, with kickstarts to a minimal install, adding in openvswitch with lacp bonding
17:13 morsik (and the network didn't work)
17:13 xoritor and it does not die every time it is upgraded
17:14 xoritor and load shifts automatically
17:14 xoritor i was really hoping glusterfs would do the s3 stuff without swift, or have some other interface i could use
17:14 xoritor sigh
17:15 xoritor ceph will work though
17:15 xoritor its just such a royal PITA to setup and maintain
17:15 xoritor glusterfs is so much easier and faster to set up... and maintaining it is much more straightforward
17:16 morsik yeah, and works
17:16 xoritor yep
17:17 morsik xoritor: for my purposes, i thought about using aufs with glusterfs and local fs. writing to gluster, reading from local
17:17 morsik + inotify that copies files <100kB from gluster to local fs
17:17 morsik why? for fast php files reading
17:17 xoritor LOL
17:17 xoritor yea i can see that
17:17 morsik i must try this, it's just a theory in my head right now
17:17 morsik but i checked php running from gluster…
17:17 morsik it's HELL
17:17 xoritor but if its just a read... read it from the local
17:17 morsik fucking hell
17:18 morsik wordpress loads in about 900ms from gluster, versus <300ms from local
17:18 morsik xoritor: yeah, but i need HA storage, and i want gluster :P
17:18 xoritor is the node local?
17:18 xoritor oh so its not JUST read
17:19 xoritor its mostly read, with some write
17:19 morsik yes
17:19 morsik imagine:  multiple front servers (php/python/ruby/etc)
17:19 morsik connected to storage
17:19 morsik which will be cluster too (glusterfs)
17:19 morsik i want fast reads of code, that's why i'll copy small files from gluster to local dir
17:20 morsik aufs will be configured to always read first from local, but always write to remote
17:20 morsik i hope you know what aufs is (-;
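
(The inotify half of morsik's idea can be sketched with inotify-tools; the union-mount layering on top, whether aufs or overlayfs, is a separate piece. Hypothetical paths, with the 100kB threshold mentioned above:)

    # watch the gluster mount and mirror small files into the local read copy
    SRC=/mnt/gluster/code
    DST=/srv/local/code
    inotifywait -m -r -e close_write --format '%w%f' "$SRC" |
    while read -r f; do
        # copy only files under 100kB, preserving the relative path
        if [ "$(stat -c %s "$f")" -lt 102400 ]; then
            install -D -m 0644 "$f" "$DST/${f#$SRC/}"
        fi
    done
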
17:20 xoritor aufs is about to be deprecated
17:20 morsik yeah damn. but you know what i mean :D
17:20 xoritor you may want to look into overlayfs
17:20 xoritor or some other setup
17:20 xoritor but yea
17:21 morsik wait what
17:21 xoritor maybe a tiered cache storage
17:21 morsik why deprecated?
17:21 xoritor not maintained
17:21 xoritor old
17:21 xoritor crufty
17:21 morsik http://aufs.sourceforge.net/
17:21 xoritor docker has an issue: they were built on it
17:21 xoritor and its about to go AWAY
17:21 morsik i have 3.16 kernel on my archlinux…
17:21 morsik aufs supports… 3.17 even
17:21 morsik so what are you talking about? ;o
17:22 xoritor yea, but it will not be around for much longer
17:22 morsik i know one of these kinds of tools is deprecated, but i don't think it's aufs
17:23 xoritor http://opensource.com/business/13/11/docker-fedora-red-hat-collaboration
17:23 glusterbot Title: Docker for the Fedora Project: an interview with Alexander Larsson | Opensource.com (at opensource.com)
17:23 xoritor thats part of an article... google about it
17:23 xoritor its going away from almost everywhere
17:24 morsik yeah, just reading on some forums
17:24 xoritor overlayfs is trying to get in
17:24 morsik thanks for info
17:24 xoritor but they are not wanting it
17:25 xoritor no problem... didn't want to see you get saddled with supporting a great solution that suddenly stopped working because aufs was not there
17:25 xoritor sounds like a real solution to the issue though
17:26 xoritor you may be able to use some sort of 'cache' ie... bcache or dm-cache
17:26 xoritor or memcached
17:27 xoritor hell... you may be able to setup a redis master/slave and use that
17:28 morsik :D
17:28 morsik redis for filesystem?
17:28 morsik huh…
17:29 morsik either i did not understand you correctly, or this is crazy
17:29 xoritor just as a temp space
17:29 xoritor hell redis can do lots of stuff
17:29 morsik i could put small files even in ram :P
17:30 morsik but well, anyway localfs will be fast enough
17:30 xoritor depending on how much ram you have, you can put big files in it too
17:30 morsik when a file is not in the local fs it'll be fetched from remote gluster (for files >100kB it doesn't matter)
17:30 xoritor seriously you may want to look into something like dm-cache
17:30 morsik xoritor: yeah sure, in real hosting it's almost impossible
17:30 morsik xoritor: gluster will not work with dmcache i think…
17:31 morsik (or are you thinking about using dm-cache on the gluster server side?)
17:32 xoritor why would the client side not work?
17:32 xoritor due to fuse?
17:32 morsik yep
17:32 xoritor are you using nfs?
17:32 xoritor tried using nfs?
17:32 morsik fuse.glusterfs
17:33 xoritor nfs for php/web is MUCH faster
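
(For reference, the two mount forms being compared; the speed difference for php/web workloads is usually attributed to the kernel NFS client's attribute and page caching versus the FUSE client. Gluster's built-in NFS server speaks NFSv3, hence vers=3. Hypothetical names:)

    # native FUSE client mount
    mount -t glusterfs server1:/webvol /var/www/site
    # gluster's built-in NFSv3 server instead
    mount -t nfs -o vers=3,tcp server1:/webvol /var/www/site
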
17:33 morsik xoritor: important thing: NEVER use nfs in gluster :D
17:33 morsik we used nfs-endpoint in gluster at work… it's bad by design
17:33 xoritor i have never had issues with the nfs use in gluster
17:33 morsik nfs connects to only 1 node…
17:33 morsik and you shut down that one node…
17:33 xoritor well yea
17:33 morsik imagine what happens now :P
17:33 xoritor true
17:34 xoritor ok... but there is a ganesha pNFS out there
17:34 xoritor nfs-ganesha-fsal-gluster... pNFS 4.1 i think
17:35 xoritor https://github.com/nfs-ganesha/nfs-ganesha/wiki
17:35 glusterbot Title: Home · nfs-ganesha/nfs-ganesha Wiki · GitHub (at github.com)
17:35 xoritor you have to setup CTDB but that is easy
17:37 xoritor http://download.gluster.org/pub/gluster/glusterfs/doc/HA%20and%20Load%20Balancing%20for%20NFS%20and%20SMB.html
17:37 xoritor the only thing i do not like about that solution is that you use round robin DNS
17:39 xoritor but that works VERY well on my setup
17:40 xoritor just dont do both nfs and cifs at the same time
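
(For the nfs-ganesha route xoritor describes, the export is defined in ganesha.conf with the gluster FSAL; a rough sketch with a hypothetical volume name, noting that exact parameter names vary between ganesha versions, and that the CTDB / round-robin-DNS failover pieces from the linked doc are configured separately:)

    # rough /etc/ganesha/ganesha.conf export block for a gluster volume
    EXPORT {
        Export_Id = 1;
        Path = "/myvol";
        Pseudo = "/myvol";
        Access_Type = RW;
        FSAL {
            Name = GLUSTER;
            Hostname = "localhost";
            Volume = "myvol";
        }
    }
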
17:40 morsik we use roundrobin dns even for gluster, for easier connection (and nfs was connected to this)
17:41 xoritor eek
17:41 morsik but until nfs discovered that the endpoint didn't work anymore, everything was fucked up anyway for a few minutes
17:41 xoritor i can imagine
17:41 xoritor running pNFS on all the nodes fixes that issue
17:42 xoritor and really increases performance too
17:42 xoritor i mean a LOT
17:42 xoritor pNFS is FAST
17:42 xoritor well... in my setup it is
17:42 xoritor ;-)
17:42 morsik but if i remember, pNFS can write only to one node, right?
17:42 xoritor and that can definitely be cached...
17:43 xoritor yea, but it does not matter
17:43 xoritor the write will go to the server and the server with share glusterize it
17:43 xoritor s/share//
17:43 glusterbot What xoritor meant to say was: the write will go to the server and the server with  glusterize it
17:44 xoritor man i messed that whole statement up
17:44 xoritor so the write goes to the server and then gets glusterized out to the other nodes
17:44 xoritor so write 1 goes to one node write 2 may go to some other node
17:45 xoritor ie.. round robin dns
17:48 xoritor does not matter where due to using the ganesha nfs server instead of the kernel nfs server
17:48 xoritor i think the issue you refer to is only with the kernel pNFS support
17:49 xoritor the ganesha nfs server is in userspace
17:50 johndescs_ joined #gluster
17:50 xoritor https://forge.gluster.org/nfs-ganesha-and-glusterfs-integration
17:50 glusterbot Title: nfs-ganesha and glusterfs integration - Gluster Community Forge (at forge.gluster.org)
17:51 xoritor anyway... if that works for you, great... if not sorry for blabbering on
17:51 cliluw joined #gluster
17:52 xoritor now i have to break my foot off in a ceph cluster
18:00 SOLDIERz joined #gluster
18:10 tom[] joined #gluster
18:14 justinmburrous joined #gluster
18:29 rshott joined #gluster
18:34 diegows joined #gluster
19:07 mlilenium_ joined #gluster
19:07 mlilenium_ left #gluster
19:13 fubada joined #gluster
19:28 fubada joined #gluster
19:30 RioS2 joined #gluster
19:32 justinmburrous joined #gluster
20:00 sman joined #gluster
20:00 charta joined #gluster
20:00 sac`away joined #gluster
20:00 JustinClift joined #gluster
20:01 ccha joined #gluster
20:01 Slasheri joined #gluster
20:01 Slasheri joined #gluster
20:01 suliba joined #gluster
20:01 bennyturns joined #gluster
20:01 tdasilva joined #gluster
20:01 cyberbootje joined #gluster
20:01 necrogami joined #gluster
20:01 atrius` joined #gluster
20:01 huleboer joined #gluster
20:01 osiekhan1 joined #gluster
20:01 frb joined #gluster
20:01 verboese joined #gluster
20:02 capri joined #gluster
20:02 tty00 joined #gluster
20:03 RioS2 joined #gluster
20:05 kke joined #gluster
20:05 gomikemike joined #gluster
20:10 justinmburrous joined #gluster
20:11 jbrooks joined #gluster
20:11 sac`away joined #gluster
20:11 JustinClift joined #gluster
20:35 calisto joined #gluster
21:26 and` joined #gluster
21:27 justinmburrous joined #gluster
21:29 xoritor left #gluster
21:35 David_H__ joined #gluster
22:01 XpineX_ joined #gluster
22:23 calisto joined #gluster
22:46 msmith_ joined #gluster
22:47 msmith_ joined #gluster
22:48 glusterbot New news from resolvedglusterbugs: [Bug 765571] Test bug <https://bugzilla.redhat.com/show_bug.cgi?id=765571>
22:56 jbrooks joined #gluster
23:15 David_H_Smith joined #gluster
23:33 ilbot3 joined #gluster
23:33 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/ | GlusterFS 3.6 test week - https://public.pad.fsfe.org/p/GlusterFS-3.6-test-doc
23:45 ilbot3 joined #gluster
23:45 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/ | GlusterFS 3.6 test week - https://public.pad.fsfe.org/p/GlusterFS-3.6-test-doc
23:49 jobewan joined #gluster
23:52 David_H_Smith joined #gluster
