
IRC log for #gluster, 2014-01-17


All times shown according to UTC.

Time Nick Message
00:00 spstarr ok so well...
00:00 spstarr I just abused Route 53 to do internal DNS....
00:01 spstarr so I thought AWS *DID* use Route 53 for internal DNS
00:01 spstarr I do not know if GlusterFS will freak out however
00:05 jobewan joined #gluster
00:08 fidevo joined #gluster
00:25 diegows joined #gluster
00:33 plarsen joined #gluster
00:46 ccope joined #gluster
00:47 mattappe_ joined #gluster
00:50 gtobon joined #gluster
01:08 mattapperson joined #gluster
01:10 theron joined #gluster
01:11 mattappe_ joined #gluster
01:16 raghug joined #gluster
01:18 bala joined #gluster
01:20 harish joined #gluster
01:21 RameshN joined #gluster
01:36 purpleidea @vagrant
01:36 glusterbot purpleidea: (#1) Part 1 @ https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/, or (#2) Part 2 @ https://ttboj.wordpress.com/2013/12/21/vagrant-vsftp-and-other-tricks/, or (#3) Part 3 @ https://ttboj.wordpress.com/2014/01/02/vagrant-clustered-ssh-and-screen/, or (#4) Part 4 @ https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/
01:37 SFLimey joined #gluster
01:37 purpleidea @learn
01:37 glusterbot purpleidea: (learn [<channel>] <key> as <value>) -- Associates <key> with <value>. <channel> is only necessary if the message isn't sent on the channel itself. The word 'as' is necessary to separate the key from the value. It can be changed to another word via the learnSeparator registry value.
01:37 purpleidea @learn vagrant as https://ttboj.wordpress.com/2014/01/16/testing-glusterfs-during-glusterfest/
01:37 glusterbot purpleidea: The operation succeeded.
01:37 purpleidea @vagrant
01:37 glusterbot purpleidea: (#1) Part 1 @ https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/, or (#2) Part 2 @ https://ttboj.wordpress.com/2013/12/21/vagrant-vsftp-and-other-tricks/, or (#3) Part 3 @ https://ttboj.wordpress.com/2014/01/02/vagrant-clustered-ssh-and-screen/, or (#4) Part 4 @ https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/,
01:37 glusterbot purpleidea: or (#5) https://ttboj.wordpress.com/2014/01/16/testing-glusterfs-during-glusterfest/
01:55 bharata-rao joined #gluster
01:58 jporterfield joined #gluster
02:11 Bullardo joined #gluster
02:12 Bullardo joined #gluster
02:18 jporterfield joined #gluster
02:32 RameshN joined #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:51 ccope joined #gluster
02:53 mattappe_ joined #gluster
02:54 jag3773 joined #gluster
03:07 jporterfield joined #gluster
03:14 kanagaraj joined #gluster
03:16 mattapperson joined #gluster
03:17 jporterfield joined #gluster
03:17 bala joined #gluster
03:21 _pol joined #gluster
03:23 dbruhn joined #gluster
03:24 _pol_ joined #gluster
03:29 saurabh joined #gluster
03:32 shubhendu joined #gluster
03:41 mattappe_ joined #gluster
03:45 itisravi joined #gluster
03:46 mattappe_ joined #gluster
03:50 ppai joined #gluster
03:54 [o__o]2 joined #gluster
04:03 * spstarr trips bot for url
04:03 spstarr prefix of it is already part of a volume
04:03 spstarr hmm
04:03 spstarr @prefix
04:03 glusterbot spstarr: I do not know about 'prefix', but I do know about these similar topics: 'path or prefix', 'path-or-prefix'
04:03 spstarr @path-or-prefix
04:03 glusterbot spstarr: http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
04:10 mattappe_ joined #gluster
04:11 kanagaraj joined #gluster
04:18 kanagaraj joined #gluster
04:24 DV__ joined #gluster
04:25 theron_ joined #gluster
04:27 jporterfield joined #gluster
04:28 shylesh joined #gluster
04:47 davinder joined #gluster
04:48 bala joined #gluster
04:49 rastar joined #gluster
04:50 gmcwhistler joined #gluster
04:51 spandit joined #gluster
05:00 robo joined #gluster
05:01 dusmant joined #gluster
05:04 MiteshShah joined #gluster
05:06 meghanam joined #gluster
05:06 kdhananjay joined #gluster
05:06 meghanam_ joined #gluster
05:07 lalatenduM joined #gluster
05:14 CheRi joined #gluster
05:21 CheRi joined #gluster
05:27 rjoseph joined #gluster
05:30 prasanth joined #gluster
05:32 pk1 joined #gluster
05:33 dusmantkp_ joined #gluster
05:33 psharma joined #gluster
05:39 satheesh1 joined #gluster
05:39 ira joined #gluster
05:46 shyam joined #gluster
05:50 mohankumar joined #gluster
05:51 ndarshan joined #gluster
05:53 benjamin__ joined #gluster
05:54 raghu joined #gluster
05:58 vimal joined #gluster
06:12 shylesh joined #gluster
06:13 prasanth joined #gluster
06:24 bala joined #gluster
06:26 rastar joined #gluster
06:34 TonySplitBrain joined #gluster
06:41 bulde joined #gluster
06:45 hagarth joined #gluster
06:51 bulde joined #gluster
06:55 davinder joined #gluster
07:03 RameshN_ joined #gluster
07:03 MiteshShah joined #gluster
07:07 anands joined #gluster
07:10 rastar joined #gluster
07:17 aravindavk joined #gluster
07:17 geewiz joined #gluster
07:22 jtux joined #gluster
07:22 bala joined #gluster
07:22 kanagaraj_ joined #gluster
07:26 kanagaraj joined #gluster
07:30 dusmantkp_ joined #gluster
07:34 kanagaraj joined #gluster
08:02 RameshN_ joined #gluster
08:02 jtux joined #gluster
08:09 kanagaraj joined #gluster
08:13 ekuric joined #gluster
08:14 eseyman joined #gluster
08:17 GabrieleV joined #gluster
08:24 iksik joined #gluster
08:24 ababu joined #gluster
08:26 ctria joined #gluster
08:31 blook joined #gluster
08:37 RameshN_ joined #gluster
08:40 kanagaraj joined #gluster
08:43 benjamin__ joined #gluster
08:47 ngoswami joined #gluster
08:49 ababu joined #gluster
08:53 gflow joined #gluster
08:53 aravindavk joined #gluster
08:53 shubhendu joined #gluster
08:54 ndarshan joined #gluster
08:55 dusmantkp_ joined #gluster
08:58 glusterbot New news from newglusterbugs: [Bug 1054668] One brick always has alot of entries on `gluster volume heal gv0 info` <https://bugzilla.redhat.com/show_bug.cgi?id=1054668>
08:59 _pol joined #gluster
09:00 shyam joined #gluster
09:06 d-fence joined #gluster
09:10 bala joined #gluster
09:25 mgebbe_ joined #gluster
09:32 iksik good morning
09:44 purpleidea hi
09:44 glusterbot purpleidea: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:44 ells joined #gluster
09:44 purpleidea glusterbot: :P i was saying hi back to iksik ;)
09:46 iksik ;-DD
09:46 mohankumar__ joined #gluster
09:47 * purpleidea slaps glusterbot
09:47 iksik i have a question, but it's kind of big (as i'm a real noob with clusters), so i'm writing it on pastebin -_-
09:47 purpleidea ~hi | iksik
09:47 glusterbot iksik: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:49 purpleidea iksik: also: 'vipe | fpaste' can be helpful!
09:51 iksik https://gist.github.com/krzysztofantczak/4297dc3331fe85b80771/raw/3bb6060e11ff1805924a1625016c378445417061/gistfile1.txt
09:51 iksik sorry if it's tldr
09:51 purpleidea iksik: it's too long to click on :P
09:51 iksik oh damn ;P
09:52 purpleidea its ok
09:52 iksik + i'm sorry if some info is missing there or some things are described wrong
09:52 purpleidea 1sec
09:55 purpleidea iksik: okay
09:56 purpleidea so it was a bit confusing to understand exactly all the things you want to do or understand, but how about i offer a few thoughts, and then you ask again for what you're missing, okay?
09:56 iksik sure :)
09:57 purpleidea 1) best way to learn about glusterfs is to try it out. it's easy to setup yourself to test on some vm's. i'm the puppet-gluster+vagrant guy, so if you want to have lots of dev environments, it's a good way to test/re-test different things.
09:57 purpleidea ~vagrant | iksik
09:57 glusterbot iksik: (#1) Part 1 @ https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/, or (#2) Part 2 @ https://ttboj.wordpress.com/2013/12/21/vagrant-vsftp-and-other-tricks/, or (#3) Part 3 @ https://ttboj.wordpress.com/2014/01/02/vagrant-clustered-ssh-and-screen/, or (#4) Part 4 @
09:57 glusterbot https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/, or (#5) https://ttboj.wordpress.com/2014/01/16/testing-glusterfs-during-glusterfest/
09:57 iksik purpleidea: yeah i'm vagrant and puppet as well ;)
09:57 purpleidea (also my internet might die soon, not sure, so sorry if i go afk)
09:57 iksik i'm using*
09:57 purpleidea carrying on
09:59 purpleidea 2) glusterfs replicates whole files N times... so you get the same file on the 2 hosts you chose to replicate across... if more than 2 hosts are needed for added capacity it can also distribute across pairs of replicas. think a raid 10, but instead of striping it uses whole files
10:00 purpleidea iksik: if i talk about "distribution" and "replication" do you understand the two concepts (preferably in terms of glusterfs?)
10:00 iksik i'm not sure about that
10:01 purpleidea iksik: okay
10:01 purpleidea in your question, it seems your worry is load. in truth, you probably want a caching layer in your application if it's a serious issue, but based on your small scale, i doubt you really need it at this point. here's how performance works/scales in glusterfs:
10:03 tryggvil joined #gluster
10:03 purpleidea if you have N clients accessing different files, they can come from the M different servers you have, therefore splitting up the total load. the more servers you add (to get more storage) the more your total volume is split up across them, so the more M hosts there are to offer files and split up the load. You don't get any benefit if you only ever have 1 client accessing 1 file, but that use case basically doesn't exist.
10:04 iksik sure
10:05 purpleidea iksik: what else do you need answering or clearing up ?
10:06 iksik what i'm really concerned about is N clients accessing a single file which exists only on one node (ie. when i have 3 nodes setup) or.. it exists on 2 nodes but total traffic is larger than those 2 can handle by themselves
10:07 purpleidea iksik: do you understand that the "entire" file exists on X nodes if you have a replication count of X ?
10:09 iksik hum, ok, so ... if i have that replication on 3 nodes, my total storage will be equal to the size of hdd on each of those nodes?
10:09 purpleidea iksik: exactly
10:10 purpleidea iksik: did i understand that the "original" content can be generated to the gluster pool and you're not interested in using it for reliability of your data?
10:13 kanagaraj joined #gluster
10:13 iksik yes, i don't care so much about reliability - when we're talking about those particular files... but i really care about expanding my storage and bandwidth capacity
10:14 purpleidea iksik: you can expand your cluster in two ways:
10:14 purpleidea 1) adding distribution: this adds total capacity, and improves distributed performance. think like raid 0 (vaguely)
10:15 purpleidea 2) adding replication: you pick this "count" at volume creation. this adds redundancy and data duplication. think raid 1 (vaguely)
10:15 purpleidea iksik: in practice, you pick replica=2
10:16 purpleidea as you add more hosts (distribute) to your cluster, more concurrent clients will be able to access different videos, and you'll also have more storage...
10:16 iksik so, it's not really possible to start with two machines
10:16 iksik btw. replication over all nodes?
10:17 purpleidea iksik: if you have a lot of hits for the same video, the os has a certain amount of iocaching built in i think, but i'm really not an expert on this subject. if it's a network bottleneck, you can use 10gE or bonded 10gE, etc...
10:17 iksik or only for particular sets of them?
10:17 purpleidea iksik: of course you can start with two machines. make sure you choose N=2. very normal setup.
10:17 purpleidea when you add hosts, you need to add them in multiples of N to add distribution in pairs
10:17 iksik so it will be 1,5TB (size of single node) in total storage for whole cluster, right?
10:17 purpleidea yes
10:18 iksik next step will be to add 2 more... with replica between them (+distribute)
10:19 purpleidea when you add the two more, it always does replica between them. it's not optional to not do that.
10:19 iksik oh
10:19 iksik ok
10:19 purpleidea when you add the two, it expands the volume so now it's bigger (based on the size of the two new nodes)
10:19 purpleidea so if the two new nodes have 3tb of storage each, then your total volume is 3tb bigger
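A minimal sketch of the create-then-expand workflow being described here, with hypothetical hostnames node1..node4 and a made-up brick path /bricks/b1:

    # replica 2 across two servers
    gluster volume create myvol replica 2 node1:/bricks/b1 node2:/bricks/b1
    gluster volume start myvol
    # later: grow capacity by adding another replica pair; files distribute across the pairs
    gluster volume add-brick myvol node3:/bricks/b1 node4:/bricks/b1
    gluster volume rebalance myvol start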
10:19 iksik btw. will i need some frontend machine for that cluster? or can i directly connect customers to cluster nodes?
10:20 purpleidea of course you can have multiple volumes, and split it up however you want, or even share the whole space among all volumes, but it's up to you
10:20 purpleidea iksik: normally "clients" connect through a glusterfs fuse mount client...
10:21 purpleidea if you want to mount the volume on the servers, and have the clients connect directly to that you could. it's not usually done that way, but it's probably fine for small setups
10:21 purpleidea would the customers access files via http or something?
10:21 iksik yeah, but that thing with accessing only a few files by all of the customers is the only pain in the ass i can think of right now - which is unpredictable... i was also starting to wonder about some dynamic way of detecting the most popular files and moving them around servers ;P
10:21 Rocky____ joined #gluster
10:22 iksik yes, over http (streaming mp4)
10:23 purpleidea iksik: right, so in the future, glusterfs will probably support some sort of automatic tiering whereby most commonly accessed files get moved to ssd's, and others move to spinning disk or so on, but at the moment this doesn't exist. tbh, i'm not sure you need this type of feature with only 2 servers
10:23 satheesh2 joined #gluster
10:23 purpleidea you're better off to get more (even smaller-ish) servers to distribute load that way. it'll be cheaper probably
10:24 iksik well, my first machine is out of storage atm... the second one will be soon + i can't handle more traffic anymore, so i need to think about more nodes atm, but first i would like to know which clustering solution would be the best
10:24 purpleidea or if you really think io performance is the bottleneck, have the disks be in a raid 10 or faster or be ssds or similar....
10:24 iksik i'm single dev/admin in that project, my budget is not so big, so i'm learning and counting everything - i'm trying at least -_-
10:25 purpleidea since you have 2*1.5tb, you're basically at the low end of gluster usage. typically you can probably put 10-30 times more storage per host... if you want more performance, get more disks and do a raid 10. normally people do a raid6
10:25 iksik purpleidea: well, each node (from those two) can use all of its available bandwidth along with maximizing IO for disk - so IO is not that big a bottleneck here since the NIC can't handle more traffic anyway
10:25 purpleidea and some people host in the cloud which has different normals
10:26 purpleidea iksik: so bond your nics or get faster ones
10:28 glusterbot New news from newglusterbugs: [Bug 1054694] A replicated volume takes too much to come online when one server is down <https://bugzilla.redhat.com/show_bug.cgi?id=1054694> || [Bug 1054696] Got debug message in terminal while qemu-img creating qcow2 image <https://bugzilla.redhat.com/show_bug.cgi?id=1054696>
10:29 iksik btw. raid6.. do You think putting ZFS (with raidz2) there wouldn't be much overhead for my setup?
10:29 purpleidea iksik: different people have different opinions, but if you're asking me: just forget zfs.
10:30 hagarth joined #gluster
10:31 dusmantkp_ joined #gluster
10:31 ndarshan joined #gluster
10:31 shubhendu joined #gluster
10:31 aravindavk joined #gluster
10:32 iksik ok, so i have one more question... it's not that much related with current project, but wondering about next one more related with typical clusters/clouds... is there any advantage with using VMs on my bare metal machines when building cluster?
10:32 bala joined #gluster
10:33 purpleidea iksik: can you rephrase?
10:35 iksik when googling some materials about clusters, i often read about people using virtual machines to build their clusters, but didn't find any explanation of that
10:36 purpleidea iksik: you mean you want to use vm's for the individual gluster hosts?
10:36 diegows joined #gluster
10:37 purpleidea (running on your own iron?)
10:38 iksik not that i want to use vms, just wondering why people are using them for clusters
10:38 purpleidea either:
10:38 purpleidea 1) you're running gluster server in vm's because you're using some cloud server
10:38 purpleidea s/server/service/
10:38 glusterbot What purpleidea meant to say was: 1) you're running gluster service in vm's because you're using some cloud server
10:38 purpleidea 2) you're running on iron
10:39 purpleidea 3) you're running vm's on your own iron for testing
10:39 iksik iron?
10:39 purpleidea 4) you're using vm's on your own iron because of your infrastructure and the need to split it up this way
10:39 ells joined #gluster
10:39 purpleidea iron == physical hardware
10:39 iksik oh ;]
10:40 purpleidea does that answer your vm question?
10:40 purpleidea the problem with running vm's on iron for gluster hosts is if the storage is in the vm's you're just adding a layer to accessing it... unnecessary overhead
10:41 iksik yes, it does ;-)
10:41 purpleidea @next
10:41 glusterbot purpleidea: Error: You must be registered to use this command. If you are already registered, you must either identify (using the identify command) or add a hostmask matching your current hostmask (using the "hostmask add" command).
10:41 iksik so VM only as a gluster client is fine?
10:41 purpleidea @teach next as Another happy customer... NEXT!
10:41 purpleidea @learn next as Another happy customer... NEXT!
10:41 glusterbot purpleidea: The operation succeeded.
10:41 iksik haha ;-)
10:42 purpleidea @next
10:42 glusterbot purpleidea: Error: You must be registered to use this command. If you are already registered, you must either identify (using the identify command) or add a hostmask matching your current hostmask (using the "hostmask add" command).
10:42 purpleidea hm
10:42 purpleidea ~next | purpleidea
10:42 glusterbot purpleidea: Another happy customer... NEXT!
10:43 purpleidea iksik: mounting your gluster volume(s) inside of vm's is quite normal if that's your question
10:43 iksik yup
10:43 iksik ok, so now i have some picture about how it should look like
10:45 purpleidea great!
10:46 iksik purpleidea: thank You ;)
10:46 purpleidea iksik: test with ,,(vagrant) first!
10:46 glusterbot iksik: (#1) Part 1 @ https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/, or (#2) Part 2 @ https://ttboj.wordpress.com/2013/12/21/vagrant-vsftp-and-other-tricks/, or (#3) Part 3 @ https://ttboj.wordpress.com/2014/01/02/vagrant-clustered-ssh-and-screen/, or (#4) Part 4 @
10:46 glusterbot https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/, or (#5) https://ttboj.wordpress.com/2014/01/16/testing-glusterfs-during-glusterfest/
10:46 iksik oh, right, vagrant
10:47 iksik so one more question... any tools which are good to use when testing cluster setup?
10:47 purpleidea iksik: and puppet-gluster ,,(puppet)
10:47 glusterbot iksik: (#1) https://github.com/purpleidea/puppet-gluster, or (#2) semiosis' unmaintained puppet module: https://github.com/semiosis/puppet-gluster
10:47 purpleidea iksik: test test test everything to see if the solution works and performs for you
10:50 ppai joined #gluster
10:51 CheRi joined #gluster
10:53 sticky_afk joined #gluster
10:53 stickyboy joined #gluster
10:55 tziOm joined #gluster
10:55 RameshN_ joined #gluster
10:59 _pol joined #gluster
11:11 franc joined #gluster
11:13 benjamin__ joined #gluster
11:15 RameshN_ joined #gluster
11:15 psyl0n joined #gluster
11:35 kanagaraj joined #gluster
11:36 theron joined #gluster
11:38 mattappe_ joined #gluster
11:40 nullck joined #gluster
11:47 ira joined #gluster
11:57 psyl0n joined #gluster
11:59 shylesh joined #gluster
12:04 itisravi joined #gluster
12:06 edward1 joined #gluster
12:08 theron_ joined #gluster
12:14 monotek Hello Gluster community,
12:14 monotek i'm trying to move my OTRS filesystem from DRBD to Glusterfs on Ubuntu 12.04 Server using Gluster 3.4.2 via the deb files of the semiosis ppa.
12:14 monotek I already run my samba and kvm on glusterfs which are running fine.
12:14 monotek I created a new gfs volume and rsynced all OTRS files to it without problems.
12:14 monotek All files are user/group otrs with uid 1001 which exists on server and client.
12:14 monotek Webserver (which is running as otrs user too) is restarting without problems but when i try to access the otrs site i get a 500 because of permission problems.
12:14 monotek Accessing the files as root or otrs user from a shell works.
12:14 monotek Here is the Gluster client log: http://pastie.org/8639756
12:14 monotek Here is the Gluster server log: http://pastie.org/8639760
12:14 monotek Any hints?
12:14 glusterbot Title: #8639756 - Pastie (at pastie.org)
12:14 glusterbot Title: #8639760 - Pastie (at pastie.org)
12:29 ndevos monotek: the server logs show that there is a problem with setting extended attributes on the filesystem on the brick, what filesystem do you use for /glusterfs/otrs?
12:29 monotek its ext4.
12:30 monotek /dev/mapper/storage1-otrs on /glusterfs/otrs type ext4 (rw,user_xattr,acl,errors=remount-ro)
12:33 ndevos that should be fine then, I'm not sure what would prevent it - maybe double check that all processes are running as uid+gid 1001?
12:35 dneary joined #gluster
12:40 monotek apache runs as 1001/otrs. should be ok.... what i just saw was, that the directory which is mentioned in the error message is created with:
12:40 monotek d----wS---  2 otrs otrs 4096 Jan 16 18:55 check_permissions_25235
12:41 monotek so maybe some umask proplem?
12:42 spstarr_work hmmm
12:42 spstarr_work im confused
12:42 spstarr_work I have 3 VM instances with /dev/xvdf2 partitioned for 20GB. How do I replicate the content of this? Can each Gluster server BE a client also?
12:45 F^nor joined #gluster
12:50 monotek yes, a glusterfs server can also be a client....
12:51 spstarr_work ok so then im confused :)
12:59 ProT-0-TypE joined #gluster
13:00 _pol joined #gluster
13:01 Cenbe joined #gluster
13:03 uebera|| joined #gluster
13:03 harish joined #gluster
13:04 benjamin__ joined #gluster
13:04 kkeithley glusterfsd is the gluster file server.  The glusterfs process/daemon on your server machines is the NFS server and is, in turn, a client of (all) the glusterfsds across your gluster cluster. The glusterfs process/daemon on client machines is the fuse-bridge that marshalls I/O through the FUSE layer.
13:06 kkeithley And if you aren't completely confused, /usr/sbin/glusterfs is just a symlink to /usr/sbin/glusterfsd. They're the same program; what they do is determined by the .vol file they load and how they are invoked.
13:07 * kkeithley wonders why we use a symlink and not a hard link.
13:08 * kkeithley hopes spstarr_work is now slightly less confused.
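A quick way to see all of this on a running server (a sketch; the processes present depend on what the node is doing):

    ls -l /usr/sbin/glusterfs      # -> glusterfsd, as kkeithley describes
    ps ax | grep '[g]luster'       # glusterd, per-brick glusterfsd, and glusterfs (nfs/shd/fuse) processes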
13:09 spstarr_work no need for NFS, I get it now.. we DONT mount anything as every node is a server
13:11 kkeithley Okay, so ignore what I said about glusterfs as the NFS server. When you mount, mount.glusterfs starts the glusterfs process — it's the fuse-bridge.
13:12 uebera|| joined #gluster
13:12 uebera|| joined #gluster
13:14 spstarr_work doing this:   gluster volume create content replica 3 transport tcp 192.168.1.75:/var/www 192.168.0.140:/var/www 192.168.2.245:/var/www force
13:14 spstarr_work ok so now each VM is supposed to replicate /var/www correct?
13:14 spstarr_work no mounting as no client
13:18 kkeithley no mounting? no client?   Gluster replication occurs on the client.  When you mount that volume somewhere and write to the mounted volume, it's the client-side glusterfs fuse-bridge process that writes, in turn, to each of the replicas.
13:19 kkeithley just creating a replica volume with three bricks isn't going to magically do anything wrt replication.
13:19 spstarr_work this is the confusion
13:20 spstarr_work i have /var/www mounted across all 3 VMs
13:20 spstarr_work each /var/www has a .glusterfs in them
13:20 spstarr_work so now.. *how* do I add content to /var/www and see it replicate ?
13:20 spstarr_work do I have to mount /var/www to say /mnt/foo on each and then write to /mnt/foo ?
13:20 spstarr_work then its replicated?
13:22 spstarr_work kkeithley: then how do we know which node is primary for replication?
13:23 kkeithley gluster doesn't have any notion of primary. They're all equals.
13:23 spstarr_work if each VM has a /dev/xvdf2 20GB partition mounted in say /mnt/content <---- I want each of them to replicate the same content to each box to /var/www as the mount point for clients..
13:23 spstarr_work so then it should be:
13:23 kkeithley mount  -t glusterfs 192.168.1.175:content on /mnt/foo.
13:23 yinyin joined #gluster
13:23 spstarr_work gluster volume create content replica 3 transport tcp 192.168.1.75:/mnt/content 192.168.0.140:/mnt/conten 192.168.2.245:/mnt/content force
13:23 kkeithley okay, let's back up
13:24 spstarr_work then on each box mount -t glusterfs [rrdns-name:content /var/www ?
13:24 kkeithley yes
13:24 spstarr_work !
13:27 kkeithley "content" is your gluster volume. It consists of three "bricks": 192.168.1.75:/mnt/content, 192.168.0.140:/mnt/content, and 192.168.2.245:/mnt/content.
13:28 theron joined #gluster
13:28 kkeithley `mount -t glusterfs $rrdnsname:content /var/www`
13:29 spstarr_work sec.. doing   setfattr options...
13:30 kkeithley now if you write content to /var/www, it will be replicated to all three bricks. You can peek under the hood at, e.g., 192.168.1.75:/mnt/content and see what was written.
13:30 kkeithley Or if you speak british english, you can peek under the bonnet.
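As a tiny illustration of that (same mount as above, hypothetical file name):

    echo hello > /var/www/test.txt     # write through the glusterfs fuse mount
    ls -l /mnt/content/                # the same file shows up inside each brick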
13:33 spstarr_work /dev/xvdf2 on /mnt/content type ext4 (rw,noatime,nodiratime) glusterfs.content.local:content on /var/www type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
13:33 spstarr_work interesting.. so now I need to do that mount on each VM
13:33 spstarr_work even though rrdnsname == 192.168.1.75 / 192.168.0.140 / 192.168.2.245
13:34 spstarr_work since we cant do round robin in /etc/hosts files :/
13:34 kkeithley yes, you need to mount /dev/xvdf2 on /mnt/content on each vm.
13:34 spstarr_work did
13:34 spstarr_work now i need to do the glusterfs mount on each also
13:35 kkeithley each node where you want to see the data in /var/www
13:35 spstarr_work so even if it is RRDNS, it may connect to the same node twice, that's ok?
13:35 spstarr_work mind you abusing Route 53 in AWS for local DNS is... hacky
13:35 spstarr_work it seems to work
13:36 kkeithley once the client connects to one server, the handshake shares the names of every brick in the cluster. After that all reads and writes go to every brick in the cluster
13:36 kkeithley you only need rrdns for the very first mount.
13:37 spstarr_work very first mount?
13:37 spstarr_work it doesnt seem to be sync'd
13:37 spstarr_work glusterfs.content.local:content   20G   44M   19G   1% /var/www
13:37 spstarr_work glusterfs.content.local:content     7.9G  1.9G  5.7G  26% /var/www
13:37 spstarr_work !?
13:38 kkeithley The fact that one volume is 20G and the other is 7.9G suggests that you didn't do the mount
13:39 spstarr_work i did?
13:39 kkeithley They should all be 20G?
13:39 spstarr_work mount -t glusterfs glusterfs.content.local:content /var/www
13:39 spstarr_work on each box... im remounting that one
13:40 spstarr_work one box is showing corruption errors
13:40 spstarr_work ls: cannot access lost+found: Input/output error
13:40 B21956 joined #gluster
13:41 spstarr_work the other shows Mount failed. Please check the log file for more details. now
13:41 spstarr_work ugugh
13:41 spstarr_work rrdns seems to be breaking the mounts
13:41 spstarr_work d?????????  ? ?    ?       ?            ? lost+found
13:41 * spstarr_work is confused
13:42 kkeithley yeah, dunno. Someone like semiosis or JoeJulian know more about using rrdns with gluster.
13:42 spstarr_work oh sec i think i see why
13:43 bala joined #gluster
13:46 spstarr_work redid the volume.. mounting now...
13:46 spstarr_work fixed something i saw typo
13:46 spstarr_work /mnt/conten != /mnt/content
13:47 spstarr_work no not working, i touched file in one node the other one sees nothing and the other shows i/o error
13:48 spstarr_work checking gluster logs..
13:48 B21956 joined #gluster
13:49 spstarr_work [2014-01-17 13:45:57.952708] I [client-handshake.c:1468:client_setvolume_cbk] 0-content-client-1: Server and Client lk-version numbers are not same, reopening the fds
13:49 glusterbot spstarr_work: This is normal behavior and can safely be ignored.
13:49 spstarr_work erm..
13:49 spstarr_work lol
13:49 spstarr_work :)
13:49 spstarr_work i like this bot
13:49 spstarr_work hmmm 49152.. port connection timed out
13:49 * spstarr_work examines AWS security group
13:52 hagarth joined #gluster
13:53 spstarr_work gluster ports
13:53 ndevos @ports
13:53 glusterbot ndevos: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
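For a 3.4 deployment like spstarr's, the corresponding firewall openings would look roughly like this (a sketch; adjust the brick-port upper bound to the number of bricks per server, and on AWS put the same ranges into the security group):

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (+ rdma)
    iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT   # brick ports, one per brick, 3.4+
    # only needed if the built-in NFS server is used:
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT
    iptables -A INPUT -p tcp --dport 2049 -j ACCEPT
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT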
13:53 spstarr_work oh @
13:53 spstarr_work AND 2049.. adding
13:53 eseyman joined #gluster
13:53 spstarr_work wait if using NFS.. im not..
13:54 kkeithley right
13:54 spstarr_work oh it BEGINS at 49152
13:54 spstarr_work ah im using wrong set of ports
13:54 spstarr_work so the bricks aren't seeing one another
13:55 spstarr_work 49152, 49153, 49154, (49155)
13:56 spstarr_work now it mounts quickly
13:56 spstarr_work haha
13:56 kkeithley \0/
13:56 spstarr_work i didnt realize ports changed...
13:57 spstarr_work i had it setup pre-3.4.. when i first setup gluster on ubuntu
13:57 spstarr_work doh!
13:58 spstarr_work it works!
13:58 spstarr_work THANK YOU :D
13:58 kkeithley yw
13:58 spstarr_work \o/
13:58 spstarr_work :)
14:00 * spstarr_work extracts the www content and sees it replicating..
14:01 kkeithley excellent
14:01 spstarr_work MariaDB galera clustering and GlusterFS for the www content
14:07 japuzzo joined #gluster
14:18 robo joined #gluster
14:20 rnathuji joined #gluster
14:25 _pol joined #gluster
14:26 Alpinist joined #gluster
14:28 mattappe_ joined #gluster
14:29 mattapperson joined #gluster
14:29 glusterbot New news from newglusterbugs: [Bug 1054816] Rebalance reporting failure for file migrations <https://bugzilla.redhat.com/show_bug.cgi?id=1054816>
14:30 lalatenduM Any idea how to attach logs to a bug when they are more than 30 MB?
14:33 ctria joined #gluster
14:35 kkeithley put them on a web site somewhere and link to it.
14:36 kkeithley or compress them (I'd bet they compress well)
14:36 lalatenduM kkeithley, will try to compress it
14:37 lalatenduM kkeithley, it came down to 1.5 MB :)
14:38 kkeithley ;-)
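For reference, compressing the default log directory is just something along these lines:

    tar czf glusterfs-logs.tar.gz /var/log/glusterfs/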
14:39 dbruhn joined #gluster
14:40 lalatenduM kkeithley, I just came across bz 1054816 using 3.5.0-0.1.beta1, should I send an email on gluster-users?
14:40 jobewan joined #gluster
14:41 kkeithley I don't see why not
14:42 mattappe_ joined #gluster
14:42 lalatenduM cool, will send an email
14:46 mattapp__ joined #gluster
14:48 robos joined #gluster
14:48 raghug joined #gluster
14:50 mattappe_ joined #gluster
14:53 mattapp__ joined #gluster
14:57 nocturn joined #gluster
14:59 ells joined #gluster
15:01 kaptk2 joined #gluster
15:02 rotbeard joined #gluster
15:04 mattappe_ joined #gluster
15:07 bennyturns joined #gluster
15:07 B21956 joined #gluster
15:11 ctria joined #gluster
15:25 plarsen joined #gluster
15:27 _pol joined #gluster
15:29 glusterbot New news from newglusterbugs: [Bug 1054847] rpm doesnt create all directories in /var/lib/glusterd <https://bugzilla.redhat.com/show_bug.cgi?id=1054847>
15:29 _pol_ joined #gluster
15:32 MiteshShah joined #gluster
15:36 semiosis spstarr_work: http://pastie.org/8642613
15:36 glusterbot Title: #8642613 - Pastie (at pastie.org)
15:37 semiosis example of how to set up rr dns
15:37 semiosis only use the rrdns for the mount server, not the brick addresses
15:37 semiosis bbiab
15:41 monotek it seems i got my problem solved :-)
15:41 monotek The problem is caused by this OTRS file:
15:41 monotek https://github.com/OTRS/otrs/blob/rel-2_4/Kernel/System/Ticket/ArticleStorageFS.pm
15:41 monotek It works if i uncomment this part: http://pastie.org/8642626
15:41 monotek After that everything file-related works fine so far. No errors in any logs...
15:41 monotek Can somebody explain whats causing the error?
15:41 glusterbot Title: otrs/Kernel/System/Ticket/ArticleStorageFS.pm at rel-2_4 · OTRS/otrs · GitHub (at github.com)
15:41 glusterbot Title: #8642626 - Pastie (at pastie.org)
15:42 lalatenduM joined #gluster
15:43 jag3773 joined #gluster
15:45 japuzzo joined #gluster
15:49 bugs_ joined #gluster
15:49 yinyin joined #gluster
15:53 glusterbot New news from resolvedglusterbugs: [Bug 1054847] rpm doesnt create all directories in /var/lib/glusterd <https://bugzilla.redhat.com/show_bug.cgi?id=1054847>
15:56 spstarr_work semiosis: all's ok, i had wrong ports :)
15:56 semiosis ok
15:56 spstarr_work its working perfectly :))
15:56 spstarr_work thank you
15:56 * spstarr_work happy
16:17 tryggvil joined #gluster
16:18 gkleiman joined #gluster
16:18 mattappe_ joined #gluster
16:19 daMaestro joined #gluster
16:20 mattappe_ joined #gluster
16:22 aixsyd dbruhn: morning bro - im testing Gluster + ZFS this morning =O
16:22 robo joined #gluster
16:23 lpabon joined #gluster
16:25 dbruhn aixsyd = Glutton for punishment
16:26 dbruhn did you find a good writeup to work from? from what I've seen out of the box it doesn't work
16:28 andreask joined #gluster
16:30 mattapperson joined #gluster
16:30 aixsyd dbruhn: i found my exact use case
16:30 dbruhn Sweet
16:30 aixsyd ZFS on Proxmox with gluster on top
16:30 aixsyd and it seems to work brilliantly
16:30 dbruhn Onions!
16:31 aixsyd and much better IOPs
16:31 dbruhn Good to hear
16:31 mattappe_ joined #gluster
16:31 aixsyd now im experimenting with raidz2 vs mirror pairs for speed
16:31 glusterbot New news from resolvedglusterbugs: [Bug 849630] client_t implementation <https://bugzilla.redhat.com/show_bug.cgi?id=849630>
16:31 dbruhn You should do a writeup on the performance between the two with statistical data
16:32 aixsyd might have to
16:32 aixsyd im gonna write up a blog post about this whole endevour when i'm done for sure
16:32 dbruhn Awesome
16:33 aixsyd and yes, i do seem to love paid
16:33 aixsyd >)
16:33 aixsyd *pain
16:34 dbruhn Comes with the gig
16:34 aixsyd it just seems like zfs, being block level, and GFS, being file level, should complement each other perfectly
16:35 spechal joined #gluster
16:35 dbruhn makes sense
16:36 aixsyd what stops corrupted data from getting to both nodes in gluster?
16:37 aixsyd like, corruption from disk
16:37 aixsyd in say, a rebuild
16:37 aixsyd (bit rot, for example)
16:38 JoeJulian spstarr_work: One other thing. I wouldn't use the root of an ext filesystem. The lost+found directory is almost always going to end up split-brain since it's modified by the filesystem itself rather than by GlusterFS. Make the brick a directory under that, ie. /mnt/content/brick where /mnt/content is your mounted ext4 filesystem.
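A sketch of that layout with the addresses from earlier (the "brick" subdirectory name is arbitrary):

    # on each server: ext4 mounted at /mnt/content, brick one level down
    mkdir /mnt/content/brick
    gluster volume create content replica 3 transport tcp \
        192.168.1.75:/mnt/content/brick 192.168.0.140:/mnt/content/brick 192.168.2.245:/mnt/content/brick
    # clients (or the servers themselves) then mount the volume, not the brick
    mount -t glusterfs glusterfs.content.local:content /var/www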
16:39 dbruhn Off the top of my head, I think the files would become split-brain, but I could be wrong about that.
16:39 dbruhn JoeJulian, am I right/wrong about that assumption?
16:41 JoeJulian lost+found on two replicated bricks with different attributes. I'm not completely certain, but I thought I remembered that being a problem. That was forever ago though since I both stopped using the filesystem root and then stopped using ext.
16:42 spstarr_work even if these are AWS volumes
16:44 JoeJulian I don't see any reason why that would make a difference. The lost+found is modified whenever the fsck happens, regardless of what's underneath.
16:44 spstarr_work I have to hope, never :)
16:46 JoeJulian ext is checked by default every so many mounts or after some number of days, so as long as you never need to remount....
16:46 JoeJulian ... or you've disabled those checks.
16:48 zerick joined #gluster
16:48 dbruhn spstarr, I am with JoeJulian on this even with xfs you end up with lost+found split-brain errors, always better to make your brick a directory deeper.
16:50 aixsyd and gluster 3.4.2 requires it.
16:50 aixsyd 3.4.1-2 allows it
16:50 * JoeJulian raises an eyebrow quizzically...
16:51 aixsyd gluster 3.4.2 allows you to "force" that behavior if youd like, but recommends not doing that
16:54 dbruhn interesting, have you tried upgrading a system from 3.3 that was set up that way to 3.4.2?
16:55 aixsyd no, but i found out the hard way that if you have gluster-common 3.4.2 and server 3.4.1-2 and your brick is the root of a file system, volume create hangs forever expecting "force" at the end of your create command - and if you put force at the end, 3.4.1-2 doesnt recognize that syntax and wont allow you to create a volume
16:56 zerick joined #gluster
17:02 glusterbot New news from resolvedglusterbugs: [Bug 851092] Add support for --enable-debug configure option <https://bugzilla.redhat.com/show_bug.cgi?id=851092> || [Bug 985565] Wrong source macro referenced for el5 build <https://bugzilla.redhat.com/show_bug.cgi?id=985565> || [Bug 1002207] remove unused parameter and correctly handle mem alloc failure <https://bugzilla.redhat.com/show_bug.cgi?id=1002207> || [Bug 1004751] glusterfs-a
17:02 raghug joined #gluster
17:05 jrizzo joined #gluster
17:05 jrizzo hey guys
17:06 jrizzo I keep getting this when trying to create a vol
17:06 jrizzo volume create: datastore: failed: /mnt/disk1/EXT4-SSD-GFS or a prefix of it is already part of a volume
17:06 glusterbot jrizzo: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
17:07 JoeJulian "volume create: foo: failed: The brick questor:/mnt/brick1 is a mount point. Please create a sub-directory under the mount point and use that as the brick directory. Or use 'force' at the end of the command if you want to override this behavior.", interesting. I thought they'd specifically made some of the changes they made in order to allow that safely.
17:07 jag3773 joined #gluster
17:08 jrizzo http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ this will not work for me since this is a new setup and the dirs are empty - I suspect this has to do with DNS resolution or
17:08 jrizzo 2014-01-17 16:28:48.412386] D [rpc-transport.c:249:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/3.4.2/rpc-transport/rdma.so
17:08 jrizzo librdmacm: Warning: couldn't read ABI version.
17:08 jrizzo librdmacm: Warning: assuming: 4
17:08 jrizzo librdmacm: Fatal: unable to get RDMA device list
17:08 glusterbot Title: GlusterFS: {path} or a prefix of it is already part of a volume (at joejulian.name)
17:09 jrizzo I'm willing to pay if anyone could help with this situation -- already have a production cluster but building a backup on new hardware --- same configuration settings but can't create a vol
17:09 JoeJulian The only code path that produces that error is when the extended attribute trusted.glusterfs.volume-id is set or the .glusterfs directory exists.
17:10 jrizzo [root@hv1 disk1]# pwd
17:10 jrizzo [root@hv1 disk1]# ls -al EXT4-SSD-GFS/
17:10 jrizzo total 12
17:10 jrizzo drwxr-xr-x  2 vdsm kvm  4096 Jan 16 03:19 .
17:10 jrizzo drwxr-xr-x. 4 root root 4096 Jan 17 10:21 ..
17:10 jrizzo [root@hv1 disk1]#
17:10 jrizzo both data dirs are empty
17:10 _pol joined #gluster
17:11 JoeJulian If your argument is that you've made it into that subroutine and triggered that error message without the logical conditions being met to reach that, then I cannot argue. You've found magic.
17:11 dbruhn Check the extended attributes on the containing directories all the way to the root, this came up the other day.
17:12 JoeJulian @extended attributes
17:12 glusterbot JoeJulian: (#1) To read the extended attributes on the server: getfattr -m . -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
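Putting dbruhn's suggestion and the factoid together, a rough check for jrizzo's brick path might look like this (the cleanup lines are destructive and assume the brick is empty and not part of any volume):

    # inspect xattrs on the brick path and every parent directory
    for d in /mnt/disk1/EXT4-SSD-GFS /mnt/disk1 /mnt; do getfattr -m . -d -e hex "$d"; done
    # if a stale volume-id or a leftover .glusterfs directory turns up, clear it
    setfattr -x trusted.glusterfs.volume-id /mnt/disk1/EXT4-SSD-GFS
    rm -rf /mnt/disk1/EXT4-SSD-GFS/.glusterfs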
17:12 dbruhn aixsyd was that you that found that error?
17:12 aixsyd which
17:13 aixsyd OH
17:13 aixsyd not sure. might have been.
17:14 JoeJulian like it says. The path /or a prefix of it/...
17:14 glusterbot JoeJulian: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
17:14 JoeJulian lol
17:14 JoeJulian glusterbot: thanks
17:14 glusterbot JoeJulian: you're welcome
17:14 jrizzo thanks
17:14 jrizzo you are correct
17:14 jrizzo it looks like at some point I had selinux enabled
17:14 jrizzo and this created a conflict
17:14 jrizzo before
17:15 jrizzo changing the dir path fixes the prob
17:15 jrizzo thanks again!
17:15 JoeJulian You're welcome.
17:15 * aixsyd sends JoeJulian an e-beer. He deserves it.
17:15 * JoeJulian is disappointed, though. He'd hoped for magic.
17:16 aixsyd JoeJulian: http://24.media.tumblr.com/0b51b9ac044e062857f4d4a26effca1a/tumblr_mkxkvhe54v1rc113po1_250.gif
17:16 JoeJulian hehe
17:17 aixsyd that might be the best gif ever. and very topical.
17:18 mattappe_ joined #gluster
17:20 twx joined #gluster
17:21 smellis joined #gluster
17:31 Mo__ joined #gluster
17:33 wgao joined #gluster
17:41 andreask joined #gluster
17:44 zaitcev joined #gluster
17:56 ells_ joined #gluster
17:59 LoudNoises joined #gluster
18:00 mattapperson joined #gluster
18:20 robo joined #gluster
18:35 Technicool joined #gluster
18:44 guntha__ joined #gluster
18:47 andreask joined #gluster
18:51 _pol joined #gluster
18:55 aixsyd joined #gluster
18:56 aixsyd joined #gluster
19:05 jezier_ joined #gluster
19:08 smellis anyone know how to default --remote-host for the gluster cli command?
19:08 smellis I am using transport.socket.bind-address and I have to use --remote-host for gluster cli commands
19:16 semiosis smellis: iptables dnat 127.0.0.1:24007 -> remote host
19:16 semiosis i would try that
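A sketch of such a rule, assuming the bind-address is a made-up 10.0.0.5:

    iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 24007 -j DNAT --to-destination 10.0.0.5:24007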
19:16 aixsyd 10240000000 bytes (10 GB) copied, 2.04241 s, 5.0 GB/s   O_O
19:19 jezier_ hi... is there any solution to speedup initial data import to glusterfs? rsync takes forever :|
19:19 aixsyd ^^^ i'd be interested to know this too
19:20 aixsyd besides trucking an external HDD and populating it that way
19:20 aixsyd i'm not sure of any other ways
19:20 aixsyd (that are quicker)
19:22 jezier_ well it's not a bandwidth problem but fstat... I have about 8mil files, 6TB to rsync..
19:22 aixsyd about the same here
19:22 aixsyd 4tb
19:23 smellis ah
19:23 smellis semiosis: thanks!
19:23 smellis semiosis: i think an alias is probably also possible
19:23 smellis going to try it
19:25 jobewan joined #gluster
19:25 jobewan joined #gluster
19:39 smellis alias works well for my purposes
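e.g. something like (same hypothetical bind-address as above):

    alias gluster='gluster --remote-host=10.0.0.5'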
19:40 smellis after setting the bind-address, gluster volume status reports the self heal daemons as not online
19:42 smellis ah crap, glustershd is attempting to connect to localhost, guess I have to have the DNAT
19:44 smellis I wonder if I recreate the volume, if the self heal daemon will connect to the bind-address i specified
19:57 spstarr_work semiosis: your nickname reminds me of osmosis :)
19:57 spstarr_work i dont know why it just does
20:03 ells left #gluster
20:03 jbrooks I'm trying out https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/, I haven't used vagrant before. It's complaining about no VirtualBox -- do you have to have vbox installed even if you're using libvirt/kvm?
20:07 iksik i was trying the same project jbrooks, but i have some really weird problems with installing vagrant plugin for libvirt under osx :|
20:09 P0w3r3d joined #gluster
20:09 purpleidea jbrooks: hey i wrote that
20:10 purpleidea iksik: i haven't tried it on osx. you'd probably have to patch the vagrantfile to work with virtualbox to get it working
20:10 jbrooks purpleidea: Any idea why I'm hitting this issue -- I didn't fully complete this one: https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/
20:10 purpleidea jbrooks: what os are you using?
20:10 jbrooks I got an error w/ mutate
20:11 jbrooks F20
20:11 purpleidea jbrooks: yeah known issue skip over that
20:11 purpleidea jbrooks: https://github.com/sciurus/vagrant-mutate/issues/37
20:11 glusterbot Title: Conversion errors on Fedora 20 · Issue #37 · sciurus/vagrant-mutate · GitHub (at github.com)
20:11 purpleidea jbrooks: i'll edit the article to reflect that
20:15 iksik purpleidea: i'm getting the same issue https://github.com/mitchellh/vagrant/issues/2449
20:15 glusterbot Title: Vagrant 1.3.5 fails to install vagrant-libvirt on OS X · Issue #2449 · mitchellh/vagrant · GitHub (at github.com)
20:17 jbrooks purpleidea: I keep getting the could not detect VirtualBox
20:22 purpleidea jbrooks: are you using --provider : https://ttboj.wordpress.com/2013/12/21/vagrant-vsftp-and-other-tricks/
20:22 purpleidea i forgot to mention this in my first article :P sorry the first article needs fixing. i updated the mutate issue (refresh the article)
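i.e. roughly:

    vagrant plugin install vagrant-libvirt
    vagrant up --provider=libvirt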
20:22 JoeJulian jezier_, aixsyd: Would be interesting to see if tar or cpio are any quicker... Maybe a java version of cp that uses gfapi.
20:24 jbrooks purpleidea: ah, no, I'll try that
20:25 jezier_ JoeJulian: the problem is that I need to do initial sync and then just before migration rsync again to keepup with recent changes..
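A couple of things worth trying for the bulk copy (sketches, paths made up): stream the first pass with tar to avoid rsync's per-file stat overhead, or run several rsyncs in parallel, then let the final pass only pick up the delta.

    # first pass: stream with tar through the fuse mount
    (cd /source/data && tar cf - .) | (cd /mnt/glustervol && tar xf -)
    # or: parallel rsync, one process per top-level entry
    ls /source/data | xargs -P4 -I{} rsync -a /source/data/{} /mnt/glustervol/
    # final cut-over: a normal rsync only transfers what changed
    rsync -a --delete /source/data/ /mnt/glustervol/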
20:26 purpleidea jbrooks: article updated again, let me know if that's clearer and if you have any new issues!
20:28 purpleidea iksik: i have not tested vagrant on osx. i'd recommend you try my setup using a Fedora 20 machine. I've talked to some people who use osx and they tell me their vagrant setup breaks often everytime there's a virtualbox update. this doesn't mean there aren't problems on fedora, but i've tried to document all of them, and their workarounds.
20:31 aixsyd jezier_: JoeJulian - exact same situation for me
20:31 JoeJulian http://jarsync.sourceforge.net/ + https://forge.gluster.org/libgfapi-jni
20:31 glusterbot Title: Jarsync - A Java rsync implementation (at jarsync.sourceforge.net)
20:31 JoeJulian semiosis: What do you think of that idea? ^
20:32 * semiosis looks
20:32 * aixsyd is surprised that the words fast + java are thought/spoken in the same idea/sentence
20:32 aixsyd :P
20:32 JoeJulian lol
20:32 semiosis gut reaction is "Sourceforge.net?  This isn't going to end well"
20:32 JoeJulian me too
20:34 semiosis JoeJulian: here's the deal... since java7 there's now support for pluggable filesystem providers -- that's glusterfs-java-filesystem
20:35 semiosis so the integration should be trivial
20:35 JoeJulian Nice. I'm still not going to touch java... ;)
20:35 semiosis if jarsync supported the java7 filesystem api, and the glusterfs-java-filesystem library was in the classpath, then all you'd have to do is tell jarsync to use "gluster://server:volname/path"
20:36 semiosis and then you'd have the magic you were looking for earlier
20:37 JoeJulian lol
20:37 semiosis no one should have to use libgfapi-jni in their project
20:37 semiosis would be preferable to upgrade their project to run on jdk7 & use the new filesystem api
20:38 semiosis then just drop in glusterfs-java-filesystem, or any other fs provider
20:40 johnmark woah
20:40 johnmark semiosis: that sounds awesome
20:41 semiosis johnmark: thats why i'm doing this thing :)
20:41 johnmark w00t
20:42 purpleidea while i don't use java, the thing semiosis is working on, is of course, awesome.
20:42 purpleidea i wish there was a python equivalent... i'm sure someone could write one
20:44 semiosis purpleidea: jython, jruby, scala, clojure, groovy, any JVM language will be able to use this
20:44 purpleidea that's cool!
20:44 semiosis \m/
20:47 bcompton joined #gluster
20:47 _pol joined #gluster
20:50 semiosis bbl
20:53 jbrooks purpleidea: I'm getting the error * Unknown configuration section 'cache'. now
20:54 purpleidea jbrooks: can you fpaste your vagrantfile
20:54 johnmark semiosis: how does that work?
20:55 * johnmark doesn't understand how any scripting language would plug in
20:55 purpleidea jbrooks: you probably didn't do the vagrant-cachier part in: https://ttboj.wordpress.com/2013/12/21/vagrant-vsftp-and-other-tricks/ but i don't know how you would get the cache error without getting there in sequence...
20:56 _pol_ joined #gluster
20:57 jbrooks purpleidea: Yeah, I'm sort of jumping around -- I want to do the gluster, vagrant & puppet one
20:58 purpleidea jbrooks: i think the cachier part is the last thing you'll need *crosses fingers*
20:58 purpleidea sorry that there are a lot of steps... blame fedora for not having this already packaged :P
20:58 purpleidea but i can tell you that going through all the steps are useful, as there are good tricks to make your vagrant hacking more awesome!
20:59 purpleidea jbrooks: in your defence, the puppet-gluster+vagrant is the best one ;)
21:00 smellis if i use transport.socket.bind-address, must I DNAT localhost 24007 to the bind-address i specified in order to allow shd to work?
21:00 jbrooks :)
21:00 jbrooks purpleidea: my Vagrantfile is unchanged from https://github.com/purpleidea/puppet-gluster/blob/master/vagrant/gluster/Vagrantfile
21:00 glusterbot Title: puppet-gluster/vagrant/gluster/Vagrantfile at master · purpleidea/puppet-gluster · GitHub (at github.com)
21:01 purpleidea jbrooks: is the cache stuff working now? you can just comment it out from the vagrant file as it's optional
21:01 jbrooks purpleidea: The funny thing is I'm coming at this like, oh, yeah, here's a way to speed my testing
21:02 purpleidea jbrooks: ... it's funny because it's taking longer than you expected?
21:03 jbrooks It's been a similar story w/ docker & vagrant -- every so often I sit down to fiddle with these super easy popular timesaver apps, and I never get anywhere w/ them -- back to the slow ways :)
21:04 purpleidea jbrooks: well, sadly there's an overhead to using any new tools... that's why i wrote 4+ articles about it so that all the answers were already discovered... once you get through it, the time savings will add up if you test glusterfs more than a few times...
21:04 purpleidea and you can use the newly learned knowledge for other things too!
21:05 jbrooks purpleidea: Yeah, I think it'll be worthwhile -- commenting out the cache bit earned me some NFS-related errors. I think I'll go back to the start and do all the howtos -- maybe when I have a bit more time
21:05 purpleidea jbrooks: if you really want to make it easier, you can hire me to come and give puppet-gluster+vagrant+etc lectures or mini courses
21:07 purpleidea jbrooks: can you post the errors
21:13 TonySplitBrain joined #gluster
21:13 jbrooks purpleidea: this is the nfs error: http://fpaste.org/69455/99931751/
21:13 glusterbot Title: #69455 Fedora Project Pastebin (at fpaste.org)
21:14 jbrooks purpleidea: I've retreated to the first in the series, though, and I'm hitting an error about manifests path specified for Puppet does not exist: -- you mention in your post, toward the bottom, that there's good docs elsewhere for using puppet + vagrant, maybe that's what I'm missing?
21:15 purpleidea jbrooks: so i tested this with vagrant 1.3.5, you're using 1.4.3  -- i heard 1.4.x made some major changes. you should report bugs you find! i'd mention this to the vagrant-libvirt project, they're very good at giving good feedback if you report a good bug
21:15 purpleidea https://github.com/pradels/vagrant-libvirt/issues/new
21:15 glusterbot Title: Sign in · GitHub (at github.com)
21:16 jbrooks purpleidea: OK, I think I'll just roll back for now -- I've got to see it working before I can be helpful w/ bugs
21:17 purpleidea jbrooks: puppet docs for vagrant are: https://docs.vagrantup.com/v2/provisioning/index.html (select again or apply)
21:17 glusterbot Title: Provisioning - Vagrant Documentation (at docs.vagrantup.com)
21:18 gmcwhistler joined #gluster
21:19 purpleidea jbrooks: sorry again that it's a lot of steps. if you put this in perspective though, it took me a few weeks to do all the vagrant setup and integration, and puppet-gluster has been going for much longer than that... the good news is that you can probably get through most of it in 1 day.
21:22 jbrooks :)
21:22 purpleidea jbrooks: report that nfs bug to vagrant-libvirt it's legit
21:42 mattap___ joined #gluster
22:01 mattappe_ joined #gluster
22:05 TrDS joined #gluster
22:28 tryggvil joined #gluster
22:29 mattapperson joined #gluster
22:31 glusterbot New news from newglusterbugs: [Bug 1021686] refactor AFR module <https://bugzilla.redhat.co​m/show_bug.cgi?id=1021686>
22:37 TonySplitBrain joined #gluster
22:52 mattf left #gluster
23:23 semiosis johnmark: magic
23:23 semiosis johnmark: somehow the people who built the language interpreters on the jvm figured out a way to bridge between the "scripting" language and java
23:24 semiosis i've done a little bit of that in jruby and it's very natural, since ruby itself is object oriented.
23:25 semiosis https://github.com/logstash/logstash/blob/master/lib/logstash/filters/date.rb#L25
23:25 glusterbot Title: logstash/lib/logstash/filters/date.rb at master · logstash/logstash · GitHub (at github.com)
23:26 semiosis for example, that is a call from ruby to a static method of a java object
23:31 jobewan joined #gluster
23:42 jporterfield joined #gluster
23:53 dbruhn joined #gluster
23:58 uebera|| joined #gluster
