IRC log for #gluster, 2013-02-28

All times shown according to UTC.

Time Nick Message
00:13 Troy joined #gluster
00:14 Troy hi there
00:14 Troy can I use rsync 3.0.6 for geo-replication ?
00:14 Troy don't want to upgrade to 3.0.7
00:17 Troy "/tmp/gsyncd-aux-ssh-otwy5T/gsycnd-ssh-%r@%h:%p"
00:17 Troy any help ?
00:22 raven-np joined #gluster
00:23 hagarth joined #gluster
00:27 yinyin joined #gluster
00:30 glusterbot New news from newglusterbugs: [Bug 916390] NLM acquires lock before confirming callback path to client <http://goo.gl/5eoJv>
00:47 plarsen joined #gluster
01:00 glusterbot New news from newglusterbugs: [Bug 916392] dht_hash_compute() called against path instead of basename <http://goo.gl/5lbU6>
01:27 yinyin joined #gluster
01:45 kevein joined #gluster
01:51 _pol joined #gluster
02:00 glusterbot New news from newglusterbugs: [Bug 916406] NLM failure against Solaris NFS client <http://goo.gl/uGTJA>
02:18 __Bryan__ joined #gluster
02:34 JoeJulian Troy: Yes. I thought I filed a bug about that... Anyway, 3.0.6 that ships with EL works just fine.
02:37 raven-np joined #gluster
02:42 yinyin joined #gluster
02:45 jdarcy joined #gluster
02:48 hagarth joined #gluster
02:49 stopbit joined #gluster
02:52 kevein joined #gluster
02:57 bharata joined #gluster
03:02 vshankar joined #gluster
03:02 rcheleguini_ joined #gluster
03:03 rcheleguini__ joined #gluster
03:03 vshankar joined #gluster
03:19 rcheleguini_ joined #gluster
03:21 rcheleguini joined #gluster
03:25 satheesh joined #gluster
03:30 bulde joined #gluster
03:53 lpabon joined #gluster
04:09 pai joined #gluster
04:24 sripathi joined #gluster
04:26 bala1 joined #gluster
04:31 lala joined #gluster
04:31 yinyin joined #gluster
04:37 vpshastry joined #gluster
04:48 shylesh joined #gluster
04:48 zwu joined #gluster
04:51 sahina joined #gluster
04:58 hagarth joined #gluster
05:01 glusterbot New news from newglusterbugs: [Bug 911489] Georeplication causing Virtual Machines to be put into Read Only mode. <http://goo.gl/rlqUc>
05:09 mohankumar joined #gluster
05:15 satheesh joined #gluster
05:17 yinyin joined #gluster
05:28 Humble joined #gluster
05:30 hateya joined #gluster
05:33 sripathi joined #gluster
05:41 deepakcs joined #gluster
05:42 sgowda joined #gluster
05:47 aravindavk joined #gluster
05:53 jtux joined #gluster
05:53 bulde1 joined #gluster
05:55 bulde2 joined #gluster
05:57 raghu` joined #gluster
05:58 pithagorians joined #gluster
06:10 ramkrsna joined #gluster
06:10 timothy joined #gluster
06:14 rastar joined #gluster
06:25 hagarth joined #gluster
06:30 atrius joined #gluster
06:34 ngoswami joined #gluster
06:38 pai_ joined #gluster
06:45 vpshastry joined #gluster
06:46 vpshastry1 joined #gluster
06:49 raven-np joined #gluster
06:51 sripathi joined #gluster
06:52 sripathi joined #gluster
06:52 Nevan joined #gluster
06:52 pai joined #gluster
06:54 sripathi1 joined #gluster
07:03 vimal joined #gluster
07:10 shireesh joined #gluster
07:10 guigui1 joined #gluster
07:13 bulde joined #gluster
07:15 tjstansell left #gluster
07:22 hagarth joined #gluster
07:23 vpshastry joined #gluster
07:28 ThatGraemeGuy joined #gluster
07:35 rotbeard joined #gluster
07:36 puebele joined #gluster
07:46 yinyin joined #gluster
07:47 ctria joined #gluster
07:50 raven-np joined #gluster
08:02 jtux joined #gluster
08:13 deepakcs joined #gluster
08:15 raven-np joined #gluster
08:21 puebele1 joined #gluster
08:21 andreask joined #gluster
08:22 rgustafs joined #gluster
08:26 pithagorians joined #gluster
08:30 tjikkun_work joined #gluster
08:31 tjikkun_work joined #gluster
08:35 Staples84 joined #gluster
08:42 cw joined #gluster
08:45 guigui3 joined #gluster
08:52 puebele1 joined #gluster
08:52 RobertLaptop joined #gluster
08:54 sripathi joined #gluster
08:58 duerF joined #gluster
09:00 hybrid512 joined #gluster
09:02 tryggvil joined #gluster
09:09 ramkrsna joined #gluster
09:10 Humble joined #gluster
09:12 ekuric joined #gluster
09:15 tryggvil joined #gluster
09:18 glusterbot New news from resolvedglusterbugs: [Bug 908708] DeprecationWarning: BaseException.message has been deprecated as of Python 2.6 (glusterfs/python/syncdaemon/syncdutils.py:175) <http://goo.gl/U3Og9> || [Bug 799716] Glusterd crashed while performing geo-replication start operation. <http://goo.gl/JCNug>
09:25 Norky joined #gluster
09:27 bharata joined #gluster
09:31 deepakcs joined #gluster
09:43 ngoswami joined #gluster
09:55 dobber_ joined #gluster
10:07 aravindavk joined #gluster
10:17 yinyin joined #gluster
10:17 cw joined #gluster
10:18 gbrand_ joined #gluster
10:21 rotbeard hi there, to replace a healthy brick with another one, a simple volume replace-brick VOL old:/foo new:/foo start followed by a volume replace-brick VOL old:/foo new:/foo commit should do, shouldn't it?
10:22 rotbeard gluster 3.2 btw
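
A minimal sketch of the replace-brick sequence being described, assuming a volume named VOL and placeholder bricks old:/foo and new:/foo; on 3.2 a status check between start and commit is the usual safeguard:

    # begin migrating data from the old brick to the new one
    gluster volume replace-brick VOL old:/foo new:/foo start
    # poll until the migration reports completed
    gluster volume replace-brick VOL old:/foo new:/foo status
    # once migration has finished, make the new brick permanent
    gluster volume replace-brick VOL old:/foo new:/foo commit
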
10:24 satheesh joined #gluster
10:30 sripathi joined #gluster
10:37 Joda joined #gluster
10:55 gbrand__ joined #gluster
11:06 tryggvil joined #gluster
11:06 abyss^_ I'd like to turn off one gluster server (in replica mode) and move to another Xen then start gluster. I'd like to ask about safe of this operation? Gluster should synchronize without problem, yes? (No ip change etc just only move to another xen)
11:18 _benoit_ joined #gluster
11:21 gbrand_ joined #gluster
11:27 ramkrsna joined #gluster
11:31 Staples84 joined #gluster
11:36 ndevos_ joined #gluster
11:40 yinyin joined #gluster
11:48 jdarcy joined #gluster
11:52 jdarcy joined #gluster
12:02 glusterbot New news from newglusterbugs: [Bug 916577] gluster volume status all --xml output doesn't have parameters of tasks <http://goo.gl/LQ9g7>
12:05 ndevos joined #gluster
12:14 kkeithley joined #gluster
12:16 gbrand_ joined #gluster
12:26 shapemaker joined #gluster
12:30 m0zes_ joined #gluster
12:31 the-me_ joined #gluster
12:31 larsks_ joined #gluster
12:37 andreask joined #gluster
12:49 x4rlos joined #gluster
12:50 dustint joined #gluster
12:51 bronaugh joined #gluster
12:54 yinyin joined #gluster
12:55 Staples84 joined #gluster
12:57 aliguori joined #gluster
13:01 ngoswami joined #gluster
13:02 sahina joined #gluster
13:04 shireesh joined #gluster
13:06 hybrid5121 joined #gluster
13:08 fleducquede joined #gluster
13:13 edward1 joined #gluster
13:18 zetheroo joined #gluster
13:20 zetheroo when preparing to setup a GlusterFS system are there any pointers on which HDD's to use/not use !?
13:20 hagarth joined #gluster
13:32 NuxRo zetheroo: glusterfs is agnostic from this point of view, but it requires an xattr-capable filesystem, XFS being the recommended one
13:32 rotbeard joined #gluster
13:38 stopbit joined #gluster
13:43 16WAAIST9 joined #gluster
13:46 zetheroo what about ext4 ...
13:46 zetheroo we have 3 Ubuntu servers using ext4 and we want to build a glusterFS with them
13:47 Slydder joined #gluster
13:48 Slydder hey all
13:48 zetheroo currently we have 3 servers, each with 2 x 1TB HDD's
13:48 zetheroo configured on RAID 0
13:48 Slydder just heard the gui will not be available in 3.3. where could one get a copy of the old gui? I want to continue development and make it available on IT Admins.
13:49 zetheroo we want to add 2 x 3TB HDD's to each of the 3 servers and have that setup with GlusterFS
13:54 rotbeard zetheroo, according to this -> http://www.gluster.org/2012/08/glusterfs-bit-by-ext4-structure-change/ you maybe want to use xfs
13:54 glusterbot <http://goo.gl/86YVi> (at www.gluster.org)
13:55 rotbeard I use ext4 too so far, but before the next kernel upate I will migrate to xfs
13:56 rotbeard with default mkfs.xfs my bricks are not 'that' fast as with ext4 so far, but I think I can handle this with some xfs tuning
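
A minimal sketch of the kind of XFS brick preparation being discussed, assuming /dev/sdb1 is the brick device and /export/brick1 the mount point (both placeholders); the 512-byte inode size matches the recommendation that comes up later in this log, and the mount options are common tuning choices rather than requirements:

    # larger inodes leave room for the extended attributes GlusterFS stores on each file
    mkfs.xfs -i size=512 /dev/sdb1
    # noatime avoids a metadata write on every read; inode64 suits large filesystems
    mount -o noatime,inode64 /dev/sdb1 /export/brick1
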
13:57 zetheroo we need a solution that will be up and running with the least amount of tweaking possible :)
13:58 zetheroo do the system HDD's have to be XFS or just the gluster HDD's?
13:58 rotbeard only the bricks
13:58 rotbeard the blockdevices that you want to use for glusterfs
13:58 zetheroo ok - so I don't have to redo all my work on the system drives - whew! :P
13:59 rotbeard nope ;)
13:59 rotbeard I have a similar setting here, 2 servers, each with a dedicated blockdevice for glusterfs
14:00 rotbeard root fs is still ext4
14:00 zetheroo ok
14:00 puebele joined #gluster
14:02 zetheroo we would like to have all our VM images running from the gluster drives ...
14:03 rotbeard your idea sounds good, but I don't have any experience with that ;)
14:03 rotbeard have a nice day guys
14:17 vshankar joined #gluster
14:18 guigui joined #gluster
14:24 Staples84 joined #gluster
14:29 bennyturns joined #gluster
14:31 zetheroo is GlusterFS primarily for pooled storage or for redundancy?
14:32 andreask yes ;-)
14:34 rwheeler joined #gluster
14:35 timothy joined #gluster
14:37 zetheroo heh
14:38 zetheroo what happens is a drive fails?
14:38 plarsen joined #gluster
14:39 zetheroo if I have 6 x 3TB HDD's in a gluster and there are VM's running from the images stored on these gluster'ed drives ... and one of the six drives fail -- will the VM's continue to run?!
14:45 Norky zetheroo, it depends...
14:45 jack_ joined #gluster
14:45 Norky if you have a replica volume, then that gives you redundancy
14:45 Norky if you have some form of RAID 'underneath' your bricks, that gives you redundancy
14:46 jbrooks joined #gluster
14:46 Norky many people do both
14:46 zetheroo hmm
14:47 bugs_ joined #gluster
14:47 Norky if you have JBODs, in a gluster volume without replica, then of course a drive failure will lose data
14:47 zetheroo JBOD's?
14:47 Norky just a bunch of disk
14:47 Norky c.f RAID
14:49 Norky http://en.wikipedia.org/wiki/RAID#Non-RAID_drive_architectures
14:49 glusterbot <http://goo.gl/cSy2Q> (at en.wikipedia.org)
14:54 zetheroo this is what we are dealing with here:  http://paste.ubuntu.com/5573491/
14:54 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
14:55 zetheroo Norky: not sure if this makes sense to you or if I am leaving something out of the picture ...
14:55 Norky that would work
14:56 Norky are you intending to use the qemu ligluster interface?
14:56 zetheroo never heard of it ... what is it?
14:56 Norky actually, the feature is only alpha at present
14:57 zetheroo ok, then probably not :)
14:58 zetheroo but do I make 3 glusters and then RAID one to the other two?
14:58 Norky make three gluster servers, join them up
14:59 hagarth joined #gluster
14:59 Norky "RAID" is the wrong term wrt gluster itself
14:59 zetheroo ok
14:59 zetheroo :P
14:59 Norky RAID only applies to the disks underlying the Gluster "bricks"
15:00 Norky you have the option of RAID0 on the two drives in each server
15:00 zetheroo so if I make each pair of 2 x 3TB HDD's a gluster ... how do I get one gluster to sync with the next pair on the other server? Is that something done through gluster itself?
15:01 Norky otherwise you would have a brick on each drive, for a total of 6 bricks
15:02 Norky yes. you would create a volume with "replica 3" from the 3 or 6 bricks. You would then mount the gluster volume on each server (yes, each server is itself a client)
15:02 Norky any data then written to the FUSE-mounted directory is replicated to all three machines
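
A minimal sketch of the setup Norky outlines, assuming three servers named server1..server3, each exposing a brick at /export/brick1, and a volume called gvol (all names are placeholders):

    # one brick per server; every file is replicated to all three
    gluster volume create gvol replica 3 server1:/export/brick1 server2:/export/brick1 server3:/export/brick1
    gluster volume start gvol
    # each server also mounts the volume as a client, via FUSE
    mount -t glusterfs server1:/gvol /mnt/gvol
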
15:03 zetheroo would you know of any online how-tos that show this :)
15:04 zetheroo seems like this page is messed up a bit  ... http://www.gluster.org/community/documentation/index.php/GlusterFS_Concepts
15:04 glusterbot <http://goo.gl/cmDTp> (at www.gluster.org)
15:04 Norky can't think of any off the top of my head, there are probably some at the other end of google
15:04 zetheroo oh right ... editing in progress ... heh
15:05 zetheroo found this but it's a bit dated http://www.howtoforge.com/high-availability-storage-with-glusterfs-3.2.x-on-ubuntu-11.10-automatic-file-replication-across-two-storage-servers
15:05 glusterbot <http://goo.gl/kxfdQ> (at www.howtoforge.com)
15:07 zetheroo would each of the three servers we have need to have gluster server and client installed? - or just one with server and the other two with client?
15:09 Norky the three servers would be.... servers
15:09 Norky and also clients
15:09 zetheroo so all three need to have server and client installed ..
15:09 Norky yes.
15:13 Staples84 joined #gluster
15:14 zetheroo I am guessing the package names are glusterfs-server and glusterfs-client ... since 'apt-get install glusterd' did not work for me :P
15:15 zetheroo is there a gui for GlusterFS? :)
15:15 timothy hi everyone; anybody knows the VDSM repo for ubuntu/debian ?
15:20 martoss joined #gluster
15:22 sripathi joined #gluster
15:24 georgeh|workstat does anyone have an experience with making/testing a worm volume?  not sure if this is a future feature or if it is supported in 3.3.1
15:24 Norky zetheroo, not really. People are working on making OpenStack (which has a web UI) handle gluster
15:25 zetheroo ok
15:25 Norky georgeh|workstat, I think WORM features are slated for 3.4, which is currently in alpha
15:25 hagarth joined #gluster
15:25 georgeh|workstat Norky, thanks!
15:26 zetheroo one other thing ... I get how with 2 gluster servers there is this replication going from server A to B ... but if you have A, B and C ... where does server C get 'fed' from? ... is it best to have it replicate from server A as well!?
15:26 Norky http://www.gluster.org/community/documentation/index.php/Features34
15:26 glusterbot <http://goo.gl/4MvOh> (at www.gluster.org)
15:27 Norky zetheroo, you misunderstand how the replication works
15:27 zetheroo ok :(
15:28 Norky replication in Gluster, that is. When the client writes data, it writes it synchronously to all the replicas, i.e. in your case, the client will be talking to A, B & C at the same time.
15:29 zetheroo oh wow
15:29 m0zes bandwidth = total/N replicas btw :)
15:30 zetheroo so if I have VM1 running on server B and data is written to that VM's image - it will be written on all three servers A, B and C simultaneously ?
15:30 flakrat I performed a "gluster volume remove-brick my-vol srv-01:/export/brick1/scratch start", after a while it showed as completed however it shows failures "28", how do I find out what the failures were?
15:32 m0zes zetheroo: correct.
15:32 flakrat ah, I see in the rebalance log file
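
A hedged sketch of how that check can be done, reusing the volume and brick names above; the exact log file name and location vary by distribution and release:

    # summary of migrated files, failures and skipped entries for the running remove-brick
    gluster volume remove-brick my-vol srv-01:/export/brick1/scratch status
    # per-file errors end up in the rebalance log on the server that owns the brick
    less /var/log/glusterfs/my-vol-rebalance.log
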
15:33 ramkrsna joined #gluster
15:33 ramkrsna joined #gluster
15:33 ramkrsna joined #gluster
15:34 vpshastry joined #gluster
15:45 theron joined #gluster
15:47 aliguori joined #gluster
15:55 zetheroo is it easy to change from replica 2 to replica 3 ?
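
Changing the replica count is done by adding bricks while passing a new replica value; a hedged sketch, assuming GlusterFS 3.3 or later, a 1x2 volume named gvol, and a placeholder third server:

    # raise the replica count from 2 to 3 by supplying one new brick per existing replica set
    gluster volume add-brick gvol replica 3 server3:/export/brick1
    # then trigger a full self-heal so existing files are copied onto the new brick
    gluster volume heal gvol full
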
15:58 puebele joined #gluster
15:59 johnmark Norky: WORM features were included in 3.3
16:00 johnmark georgeh|workstat: ^^^
16:00 johnmark although I don't know if you have to do anything specific to activate them
16:00 Norky oh? Sorry, I got that wrong then
16:00 johnmark Norky: yeah, it wasn't exactly well-publicized :)
16:01 johnmark and we don't have any docs for it, so it might as well not exist :/
16:01 Norky http://www.gluster.org/community/documentation/index.php/Features/worm has this "Currently, GlusterFS codebase has an implementation of WORM, but it is not usable by users as there is no option to add that volume into our volume specification files through CLI. One of the easiest way to implement it without disturbing much of the code in glusterfs is using 'volume set' interface.
16:01 Norky So, we are providing a new key 'features.worm' which takes boolean values (enable/disable) for volume set."
16:01 glusterbot <http://goo.gl/HP8it> (at www.gluster.org)
16:01 johnmark but we're including it as a feature for 3.4
16:01 johnmark specifically because we didn't say anything about it when we released 3.3 :)
16:01 johnmark try it with 3.3 - it should work
16:02 johnmark ooooh... so we're adding a glusterd option. Ok - never mind
16:02 johnmark so to use it with 3.3, you have to edit the volume conf file directly
16:03 Norky call me ignorant but personally I don't quite "get" WORM :)
16:03 johnmark Norky: I don
16:03 johnmark 't know exactly what it does
16:03 Norky certainly I have files which are written once and only read there after, but that's just a use pattern, surely?
16:04 Norky what does making it a filesystem feature actually do? :)
16:04 shireesh joined #gluster
16:04 kkeithley johnmark: gluster docs say you can enable WORM with `gluster volume set ...` Why do you think you have to edit .vol files? (I know you're as jet lagged as I am ;-))
16:06 kkeithley As a file system feature, WORM would enforce not over-writing an existing file
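
Going by the wiki text quoted above, WORM is meant to be a simple volume-set toggle; a hedged sketch (whether the key is actually accepted on 3.3 is exactly what is in question in this discussion):

    # enable the write-once-read-many translator on a volume
    gluster volume set VOL features.worm enable
    # and turn it back off
    gluster volume set VOL features.worm disable
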
16:07 vpshastry joined #gluster
16:08 ndevos and prevent deleting?
16:11 lpabon joined #gluster
16:11 zetheroo left #gluster
16:13 daMaestro joined #gluster
16:14 jbrooks joined #gluster
16:19 johnmark kkeithley: well there ya go. all I know is what the wiki page says "not usable by users"
16:20 johnmark using the CLI
16:22 sripathi1 joined #gluster
16:29 aliguori_ joined #gluster
16:29 hossman99999 joined #gluster
16:31 semiosis :O
16:35 samppah :O
16:36 samppah https://bugzilla.redhat.com/show_bug.cgi?id=858850 this seems kinda nice patch
16:36 glusterbot <http://goo.gl/3Q7yG> (at bugzilla.redhat.com)
16:36 glusterbot Bug 858850: high, urgent, rc, bfoster, CLOSED ERRATA, fuse: backport scatter-gather direct IO
16:38 _br_ joined #gluster
16:39 georgeh|workstat sorry, was afk there, I tried the feature(s).worm and says it doesn't exist when I try to set it with the cli
16:40 georgeh|workstat kkeithley, so, you're saying I need to edit the vol file directly?
16:40 johnmark georgeh|workstat: I think this is the change that's going into 3.4
16:41 johnmark kkeithley: ^^^^
16:41 johnmark but I'd need ot try it myself to be sure
16:42 georgeh|workstat johnmark, so adding it to the glusterd.vol file...just a line like 'option worm enable' in the volume management section?
16:42 * Norky is having all sorts of problems when writing to a Samba export of a volume from Windows clients
16:43 johnmark georgeh|workstat: at this point, it's pure specualtion on my part, and I don't want to lead you into anything damaging
16:43 johnmark just promise you you're not doing this in production :)
16:43 georgeh|workstat johnmark, not currently no :)
16:43 johnmark georgeh|workstat: *whew*
16:44 Norky nothing untoward recorded in samba logs, but written files are sometimes truncated in one application, an auto .docx->.doc converter which writes a temporary file never works on Gluster/CIFS
16:44 johnmark bbiab
16:45 semiosis Norky: have you disabled all the locking features in samba for this share?  i've heard that's helpful, may be worth a try
16:46 _br_ joined #gluster
16:49 Norky semiosis, I've not, I will give that a try, thank you
16:49 semiosis yw, hth
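
A hedged sketch of what "disabling the locking features" for the share could look like in smb.conf; these are standard Samba options, but treat them as an experiment rather than a recommendation, and the share name and path are placeholders:

    [glustershare]
        path = /mnt/gvol
        # oplocks and POSIX locking can interact badly with a FUSE-backed share
        oplocks = no
        level2 oplocks = no
        posix locking = no
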
16:51 Norky bother this Windows nonsense, 'tis the bane of my life ;)
16:52 hossman99999 hey all, new here
16:52 hossman99999 i have a question, i want someone to tell me how bad of an idea this is
16:52 Norky terrible ;)
16:52 hossman99999 doing gluster in EC2, have a script to update CNAMEs in route53 to the 2 instances i am testing for now
16:52 hossman99999 everything works great on bootup after restart (mounts etc) - ubuntu 12.10 with the ppa btw
16:53 semiosis hossman99999: yay \o/
16:53 hossman99999 but there's a delay in the cname
16:53 semiosis yep, r53 sync time
16:53 hossman99999 takes a couple minutes obviously for the record to come back with updated ip
16:53 hossman99999 so i can simply restart the glusterd service and it works magically
16:54 hossman99999 i think it would work nicely if simply delay started the gluster service, say 5 minutes or something
16:54 hossman99999 having a terrible time figuring out the best way to do that with ubuntu/upstart, figured i could just put a wait in the gluster upstart script but that doesn't seem to be an option
16:54 semiosis hossman99999: what exactly is the problem you're having?
16:55 hossman99999 just really need the glusterd service to not start for about 5 minutes after boot, to give time for the CNAME records to update
16:55 semiosis that is a solution, but what is the problem?
16:55 hossman99999 no problem, glusterfs is the best thing i've ever found
16:56 hossman99999 maybe someone else did something silly like i'm doing and found the best way to delay start without adding a bunch of hacks like i'd probably do
16:56 semiosis if there's no problem then why do you need to delay glusterd startup?
16:56 hossman99999 oh yes, ok, the problem is when glusterd starts early it can't resolve the new CNAME yet, since the cname gets updated on boot to amazon route53
16:56 semiosis its own CNAME?
16:57 semiosis add that hostname to /etc/hosts as an alias of 127.0.0.1
16:57 semiosis so the machine can always resolve itself
16:57 semiosis others will just keep trying until the r53 syncs up
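
A minimal sketch of the /etc/hosts trick semiosis suggests, assuming the server's own Route 53 CNAME is gluster1.example.com (a placeholder):

    # /etc/hosts on the gluster server itself:
    # the machine can resolve its own name immediately, before Route 53 has synced
    127.0.0.1   localhost gluster1.example.com
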
16:57 hossman99999 ya know, for some reasons i'm trying to be so "dynamic" that i've been trying to avoid hardcoding things like that
16:58 hossman99999 but.. that will work find i'm sure
17:01 semiosis hossman99999: if you insist on blocking glusterd until r53 is synced that's going to be complicated.  you could probably achieve it most easily by 1) making your r53 update script poll & wait until the record is in sync, and 2) calling that script from a pre-start script clause in /etc/init/glusterd.conf
17:01 semiosis hossman99999: fwiw, there's a nice python based CLI for r53 called cli53 which can do the polling & waiting.  i use that as a basis for my own r53 cname update script
17:01 hossman99999 yah that's what i'm using so i'll have to look at that option
17:02 semiosis although i just map the server's own gluster CNAME to localhost
17:02 semiosis the --wait option
17:03 semiosis s/glusterd.conf/glusterfs-server.conf/
17:03 glusterbot What semiosis meant to say was: hossman99999: if you insist on blocking glusterd until r53 is synced that's going to be complicated.  you could probably achieve it most easily by 1) making your r53 update script poll & wait until the record is in sync, and 2) calling that script from a pre-start script clause in /etc/init/glusterfs-server.conf
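
A hedged sketch of the upstart side of that idea, assuming Ubuntu's glusterfs-server job and a wrapper script (the script name is a placeholder) that calls cli53 with its --wait flag so it only returns once Route 53 reports the change as in sync:

    # /etc/init/glusterfs-server.override (or a pre-start clause added to glusterfs-server.conf)
    pre-start script
        # update this instance's CNAME and block until the record change has propagated
        /usr/local/bin/update-r53-cname.sh
    end script
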
17:24 bulde joined #gluster
17:29 jclift_ joined #gluster
17:33 nueces joined #gluster
17:34 __Bryan__ joined #gluster
17:35 vpshastry left #gluster
17:35 mohankumar joined #gluster
17:38 hossman99999 semiosis, i think it's magically working
17:38 hossman99999 did the pre-start and --wait idea
17:39 hossman99999 only odd thing is i had to open up permissions to the route53 IAM user
17:39 semiosis well, not magic :)
17:39 hossman99999 otherwise amazon tossed back a not authorized
17:39 hossman99999 so i just made full access and it worked, so i'll want to lock that back down methinks
17:39 hossman99999 computer magic box machines
17:40 semiosis interesting, makes sense if there's a different API endpoint used to check the sync status
17:40 semiosis http://docs.aws.amazon.com/Route53/latest/APIReference/API_GetChange.html
17:40 glusterbot <http://goo.gl/YFlaa> (at docs.aws.amazon.com)
17:40 semiosis should be an IAM permission for that endpoint
17:40 hossman99999 now what happens if i start both instances from stopped at the same time :0
17:41 semiosis try it and let me know :)
17:41 _pol joined #gluster
17:41 semiosis use multiple availability zones so that doesnt happen too often
17:42 hossman99999 zone schmone
17:42 _pol joined #gluster
17:43 Humble joined #gluster
17:43 Mo__ joined #gluster
17:44 _br_ joined #gluster
17:44 hossman99999 the dumb thing about me is that these 2 servers normally will always be up, and new servers in the cluster will simply be clients when we need more servers for load
17:44 _pol joined #gluster
17:45 hossman99999 but i like never having to ssh into an instance and do stuff
17:45 _pol joined #gluster
17:46 semiosis +1
17:48 hagarth joined #gluster
17:50 _br_ joined #gluster
17:52 codex joined #gluster
17:52 codex semiosis: hey
17:52 semiosis welcome :)
17:52 codex thanks
17:52 codex I setup 4 nodes, over 2 datacenters (layer2 adjacent w/ the same vlan)
17:52 codex let's call them 'gfs01..04'
17:53 semiosis codex: could you ,,(pasteinfo) please
17:53 aliguori_ joined #gluster
17:53 glusterbot codex: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
17:53 codex using this: http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf, section 5.5 - "Creating Distributed Replicated Volumes"
17:53 glusterbot <http://goo.gl/yqPZn> (at www.gluster.org)
17:53 hossman99999 semiosis: it worked! thanks for your help mang
17:53 semiosis hossman99999: yw, glad to hear it
17:54 codex semiosis: http://fpaste.org/YpqW/
17:54 glusterbot Title: Viewing Paste #281043 by V (at fpaste.org)
17:55 semiosis codex: ok first of all i'd strongly recommend following the ,,(brick naming) convention
17:55 glusterbot codex: http://goo.gl/l3iIj
17:55 codex ah, I used this: http://www.gluster.org/community/documentation/index.php/HowTos:Brick_naming_conventions
17:55 glusterbot <http://goo.gl/7evbQ> (at www.gluster.org)
17:57 semiosis odd, idk how JoeJulian could have written that!
17:58 codex semiosis: is that newer or older (the one you linked)
17:59 codex it seems the other one was modified ~4 or so months ago
17:59 semiosis the one i linked is older but i wrote it so it's better :)
17:59 codex heh
18:00 semiosis here's the difference, i want replicas named the same, so if you have a replica 2 volume then every replica pair should have the same path, although on different servers, and different replica pairs should have different paths
18:00 semiosis simplest example is a 2x2 d-r volume: gluster volume create replica 2 server1:/brick1 server2:/brick1 server3:/brick2 server4:/brick2
18:01 codex yea that makes sense to me too
18:02 semiosis i will bug JoeJulian about that wiki page next time i see him.  we'll get these consolidated
18:02 codex let me change it around - i'll keep /data for now, but i'll rename to this method
18:03 rotbeard joined #gluster
18:03 semiosis he makes some good points there, such as making the glusterfs brick path a subdir of the disk's mount point.  this improves reliability by preventing glusterfs from writing to your root partition when a brick disk isn't mounted
18:04 codex yea, that's whawt I am going to do now
18:04 codex I didn't even see that part "having /../../../brick1/brick" for example
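
A minimal sketch of that subdirectory layout, with placeholder paths: if the brick disk ever fails to mount, the subdirectory is absent and the brick cannot start, instead of GlusterFS silently writing into the root partition:

    # mount the brick filesystem, then create a subdirectory to use as the actual brick
    mount /dev/sdb1 /export/brick1
    mkdir -p /export/brick1/brick
    # the path handed to gluster is the subdirectory, not the mount point itself
    gluster volume create gvol replica 2 gfs01:/export/brick1/brick gfs02:/export/brick1/brick
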
18:05 codex so on 4 servers (gfs01 and gfs02) being in the same datacenter - I am guessing I want the replication accross gfs01 and gfs03
18:05 semiosis codex: ok now to your original question... files only appearing on bricks 1 & 2 (the first of two replica sets)
18:06 semiosis oh hm
18:06 disarone joined #gluster
18:06 codex my real question/problem is I am not understanding how the volume create ends up creating the distribution/replication pairs
18:06 codex does it just choose them in order, and if so, which happens first- distribution or replication pair
18:07 codex ex: gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
18:08 semiosis yes, in order, by replica sets, so for replica 2 it would be (brick1 = brick2) + (brick3 = brick4) where = is replica and + is distribute
18:08 codex that creates a 4node distributed (replicated)  with a 2 way mirror
18:08 codex ok
18:08 codex so if I wanted to replicated 1 and 3, it's simply switched during creation
18:08 codex got it :)
18:09 ctria joined #gluster
18:09 semiosis i like convenient hostnames too... mine are basically left-N and right-N with left & right being different datacenters
18:10 codex for these 2, we usually do odds and evens, so i'll move 1+3 in the "left" and 2+4 in the "right"
18:11 codex btw, i am absolutely in love with this filesystem
18:11 codex i can't believe how easy and flexible it is
18:11 semiosis yeah it's been a huge win for me too
18:13 codex is the idea behind "bricks" that each one is an external mount basically?
18:14 hossman99999 ditto on the in love thing here
18:14 semiosis a brick is just a directory path on a server, though for most nontrivial deployments that means dedicated block devices of some kind (disks, raid, EBS, ...)
18:14 hossman99999 last time i looked at it maybe a year ago i kind of didn't believe that it was that easy
18:15 semiosis codex: btw glusterfs recommends XFS with inode size 512 for brick filesystems
18:15 semiosis hossman99999 also: ^^^
18:15 codex funny that you mention that - i saw that, but i wasn't sure about xfs
18:15 codex isn't it meant for very large FS? (>16TB or something like that)
18:16 semiosis sadly theres a bug affecting glusterfs over ext on most recent linux kernels, see ,,(ext4)
18:16 glusterbot Read about the ext4 problem at http://goo.gl/PEBQU
18:22 rotbeard joined #gluster
18:23 codex ok, converted to xfs
18:26 codex is there to auth.allow a /25?
18:26 codex is there a way*
18:27 tyl0r joined #gluster
18:28 semiosis i think a string glob is all that's allowed
18:28 semiosis :(
18:29 semiosis there is probably an open bug about that
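
For reference, a hedged sketch of what auth.allow does accept today: comma-separated address globs rather than CIDR prefixes, so a /25 can only be approximated (addresses are placeholders):

    # a whole /24 can be expressed with a wildcard; a /25 cannot be written exactly this way
    gluster volume set VOL auth.allow '192.168.10.*'
    # individual client addresses can also be listed, comma separated
    gluster volume set VOL auth.allow '192.168.10.5,192.168.10.6'
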
18:32 codex semiosis: updated: http://fpaste.org/O5S4/
18:32 glusterbot Title: Viewing updated by V (at fpaste.org)
18:32 codex (with xfs)
18:33 codex so same thing, wrote a 'test' file, it's on gfs01 and gfs03, but not 2 and 4
18:34 codex from gfs01, 3 peers show up (gfs02..04)
18:36 codex if i go on another system, create a file, it writes to gfs02 and gfs04, but it does not sync to gfs01,03 THAT file
18:37 codex however, in the mount, all the files are visible...
18:37 semiosis yeah, thats what it means to distribute files over two replica sets
18:37 codex ohh
18:38 codex so if I lose something in DC2, i am screwed (with let's say 50% of the files)
18:38 codex i guess what I want is strip then replicate :)
18:38 semiosis no stripe
18:38 codex distribute*
18:38 codex :)
18:39 semiosis i'm confused by the gfs{1..4} names
18:39 NcA^ joined #gluster
18:39 mohankumar joined #gluster
18:39 codex 2 VMs in one data center (1, 3), and 2 VMs in the other (2,4)
18:40 semiosis oh ok great, so you have an "odds" and an "evens" datacenter
18:40 codex yea
18:40 codex this is just for the testing phase
18:40 semiosis then you'd want to alternate odd,even,odd,even in your volume layout
18:40 codex right
18:40 codex so i have 1 copy of each in each DC
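
A minimal sketch of the layout semiosis describes, assuming the odd-numbered hosts sit in one datacenter and the even-numbered hosts in the other, with placeholder brick paths; alternating odd,even makes each replica pair span both sites:

    # replica pairs are (gfs01,gfs02) and (gfs03,gfs04): each pair has one member per datacenter
    gluster volume create test-volume replica 2 transport tcp \
        gfs01:/export/brick1/brick gfs02:/export/brick1/brick \
        gfs03:/export/brick2/brick gfs04:/export/brick2/brick
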
18:40 mattr01 codex: how did you setup your volumes, how many bricks per host?
18:41 codex it's starting to make sense
18:41 codex mattr01: just one additional volume currently:
18:41 codex a /dev/sdb1 w/ 50GBs on each VM for testing
18:41 semiosis yeah you can certainly have more than one brick per server, which makes sense if your disks are much slower than your network
18:42 mattr01 semiosis: I am new to this but I am using two per server
18:42 mattr01 To keep an even multiple if I grow
18:42 mattr01 I currently have 2 vms with 30gb each
18:43 mooperd joined #gluster
18:43 mattr01 and I created a 2 replica set like server1:/brick1 server1:/brick2 server2:/brick1 server2:/brick2
18:43 mattr01 that makes 1 volume
18:44 mattr01 what I am trying to understand if there is really a need for two bricks per host
18:44 semiosis you are replicating between directories on the same server?  glusterfs should have warned you about that
18:44 semiosis mattr01: please ,,(pasteinfo)
18:44 glusterbot mattr01: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
18:45 mattr01 sure 1 moment
18:45 codex semiosis: thanks for the help. Getting the hang of this
18:45 semiosis yw
18:48 mattr01 http://dpaste.org/9pshY/
18:48 glusterbot Title: dpaste.de: Snippet #220258 (at dpaste.org)
18:49 codex so the option I need is "replicated and distributed" instead of "distributed and replicated"
18:49 codex :)
18:51 mattr01 This is my use case.  I will have a jboss instance on each host that will access via the hadoop file system API.  I want the file to replicate to each host.  The jboss instances will access and write the files to the local server
18:55 codex actually, this might be better
18:55 codex i take it back - this is better
18:57 kr4d10 joined #gluster
18:57 semiosis codex: you dont get a choice between distribution over replication or replication over distribution.  it's always distribution over replication... i.e. files are distributed over replica sets
18:58 JoeJulian ~brick order | mattr01
18:58 glusterbot mattr01: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
18:59 semiosis ooh
18:59 semiosis so JoeJulian wouldn't you agree those latter two bricks should be .../brick2 ?
18:59 mattr01 I had a hunch thatis how it worked
19:00 JoeJulian semiosis: nah, they're different servers in that example.
19:00 semiosis yes but they are a different replica set from the first two bricks, thus a different distribution subvol
19:01 JoeJulian Hmm... how about....
19:02 mattr01 I am starting to feel like using 2 bricks per host to only have 1 volume creates more overhead
19:03 JoeJulian Eww... that just looked ugly what I was thinking, and wouldn't have fit.
19:04 semiosis imho it's plain & simple... replicas have same brick path, non-replicas have different brick paths... regardless of server (no this doesn't support the case of two replicas on the same server, because that's just silly)
19:04 JoeJulian mattr01: I have 1 brick per each of 4 hard drives per volume (and I have 15 volumes) giving me 60 bricks per server. It's just a design choice.
19:05 semiosis mattr01: overhead?  sounds like premature optimization at this point
19:06 mattr01 could be, my only next concern is when I shutdown server1 then do a ls on the mounted volume there is a delay ... is this to be expected
19:06 mattr01 mind you when server1 shuts down I lose %50 of my volume
19:06 semiosis @ping timeout
19:06 glusterbot pong
19:06 semiosis haha
19:06 semiosis ,,(ping timeout)
19:06 glusterbot I do not know about 'ping timeout', but I do know about these similar topics: 'ping-timeout'
19:06 semiosis ,,(ping-timeout)
19:06 glusterbot The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
19:06 semiosis mattr01: ^
19:07 mattr01 semiosis: thanks!
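
The delay mattr01 sees is governed by that per-volume ping timeout; a hedged sketch of changing it (lowering it is generally discouraged, for the reason glusterbot gives):

    # the failover delay is the client's ping timeout, 42 seconds by default;
    # shorten it only if servers really do die frequently
    gluster volume set VOL network.ping-timeout 20
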
19:08 mattr01 semiosis: are you have been a good help, I hope to stick around and help others.. are you farmiliar with the hadoop interface?
19:09 semiosis great!
19:09 semiosis no sorry not familiar with the hadoop/gluster stuff
19:13 Mo___ joined #gluster
19:13 mattr01 okay, ill play around with it now, ill keep you updated on my exeriance if you like
19:13 semiosis sure
19:14 JoeJulian I considered forking hadoop and making a very illogical interface. Was going to call it haderp... :)
19:15 semiosis oh wow
19:15 semiosis @later tell jdarcy JoeJulian made a haderp joke!
19:15 glusterbot semiosis: The operation succeeded.
19:15 semiosis lol
19:15 hossman99999 farmiliar
19:17 _pol_ joined #gluster
19:17 _pol joined #gluster
19:17 ramkrsna joined #gluster
19:21 mattr01 semiosis: is there any advanced gluster documentation?
19:22 mattr01 .. beyond the administrative guide
19:22 semiosis there's some good blog posts & articles around
19:31 kkeithley joined #gluster
19:33 lpabon joined #gluster
19:39 kkeithley joined #gluster
19:52 gbrand_ joined #gluster
20:10 y4m4 joined #gluster
20:17 theron joined #gluster
20:19 jdarcy joined #gluster
20:19 jdarcy_ joined #gluster
20:39 Humble joined #gluster
20:42 cw joined #gluster
20:46 andreask joined #gluster
21:10 Humble joined #gluster
21:20 zaitcev joined #gluster
21:20 hagarth joined #gluster
21:27 GLHMarmot joined #gluster
21:30 fidevo joined #gluster
21:36 rwheeler joined #gluster
21:59 Jason_Sage joined #gluster
22:05 johnmark hagarth_: ping
22:05 johnmark hagarth: ping
22:15 xian1 joined #gluster
22:21 johnmark joined #gluster
22:21 theron joined #gluster
22:22 johnmark theron: ^5
22:31 semiosis what is this ^5?
22:34 johnmark semiosis: we were just at CERN 2 days ago, and Theron presented
22:35 johnmark plus I think we want to revive the community office hours
22:35 johnmark semiosis: oh, it means "high five"
22:35 semiosis oh ok
22:35 * semiosis was scratching his head
22:35 semiosis to the fifth power?
22:36 johnmark ha... :)
22:36 johnmark that too
22:47 johnmark this gives new meaning to the term "split brain" - http://www.nature.com/news/intercontinental-mind-meld-unites-two-rats-1.12522
22:47 glusterbot <http://goo.gl/duPlh> (at www.nature.com)
22:58 kkeithley I was waiting for someone to bring that up ;-)
23:00 semiosis interesting article
23:00 xian1 I thought it was more like "entangled brain."  Sure would be nice if gluster could learn something from these rats and "do the right thing" when encountering split-brain issues, however.
23:02 Troy joined #gluster
23:08 xian1 ok perhaps that was less than charitable…I understand it's complicated enough that there is no one right thing for everyone, even if I could say what is right for me in all cases.
23:18 ultrabizweb joined #gluster
23:19 Troy gsyncd initializaion failed
23:19 Troy any help ?
23:19 Troy please!
23:20 Troy connection to peer is broken
23:23 Troy ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
23:24 Troy ssh without password works just fine
23:24 Troy don't understand this parameter =auto -S /tmp/gsyncd-aux-ssh-Cv6RBu/gsycnd-ssh-%r@%h:%p
23:32 CROS joined #gluster
23:34 hattenator joined #gluster
23:40 Troy why gluster rewrite gsyncd file every time I restart it !!!!
23:41 Troy gsyncd.conf
23:45 CROS Hey guys, quick question that I'm sure has been answered many times before. I have two gluster servers replicated together. Clients on both mounted using the glusterfs client. It's a file server so it serves out pretty big files 10-100 MB. The problem is that it seems like the server2 is always serving that file out from server1, so lots of private network traffic. is there a way to set it up so that it sends out directly from server2 if it has
23:45 CROS the data on that machine?
23:47 mattr01 on the clients how are you mounting the volumes??
23:48 CROS glusterfs mount
23:48 CROS not nfs
23:48 CROS want the fstab entry?
23:48 mattr01 yes
23:48 CROS data-02.priv:/data /mnt/data glusterfs defaults,_netdev 0 0
23:49 CROS on server 2
23:49 CROS data-01.priv:/data /mnt/data glusterfs defaults,_netdev 0 0
23:49 CROS on server 1
23:50 mattr01 Well the eliminates the obvious
23:51 y4m4 CROS: 'glusterfs client' has no local data affinity it used to be there historically as 'NUFA' but its been almost 2-3yrs it doesn't exist
23:51 glusterbot New news from resolvedglusterbugs: [Bug 765473] [glusterfs-3.2.5qa1] glusterfs client process crashed <http://goo.gl/4fZUW>
23:51 mattr01 CROS: you might want ot look at using NFS it will use the vfs layer to do data caching
23:52 mattr01 with FUSE you sorta by pass all that
23:52 CROS got'cha
23:52 CROS So, NFS w/ caching is the only current way to get around this?
23:52 mattr01 what type of data are you pulling?
23:52 CROS games
23:52 CROS @mattr01: zips, exes, rars, lots of stuff.
23:53 y4m4 CROS: not really NFS caching - the question you are asking is 'data' affinity for server2 and server1
23:53 CROS y4m4: yes, is NFS the right thing to do in this case?
23:53 y4m4 GlusterFS client would already know which data exists where, now it might be that the application you are using
23:54 y4m4 is picking up files which are with random chances are residing on server1
23:54 y4m4 is your configuration 'replicated' ?
23:54 y4m4 or purely distributed
23:54 CROS completely replicated
23:54 CROS only 2 servers, and replicate is set to 2
23:54 y4m4 CROS: so now that comes into a different question
23:54 y4m4 CROS: right now the read-subvolume is randomly picked
23:54 y4m4 it should be ideally
23:56 y4m4 But NFS might not generally help NFS would create more traffic - its the replicate which is not providing local data access.. but relying on read-subvolume picking up 'server1' for all the files
23:56 CROS I feel like the only network traffic that should happen is to do the self-healing check if the data exists locally...
23:56 CROS what is read-subvolume?
23:56 y4m4 read-subvolume used to be an option in replicate, which is basically which node should be picked
23:56 y4m4 for reading the file
23:56 CROS got'cha
23:56 y4m4 requested by the client
23:57 y4m4 right now it should be doing 'rr' between both the servers
23:57 y4m4 rr -> round robin
23:57 y4m4 first file server1
23:57 y4m4 second file server2
23:57 y4m4 so on
23:57 y4m4 if you are not seeing that then things might have changed a bit, to default to 'server1'
23:57 y4m4 if 'server1' is down
23:57 y4m4 then 'server2'
23:57 CROS oh, I think it is doing RR
23:58 CROS It's just more network traffic than is optimal. =\
23:58 Troy :|
23:58 y4m4 well that is a design thing, replicate individually AFAIK cannot do local data affinity when client is mounted on server
23:58 CROS Technically, these files could be cached forever, too. Any file edits results in a new filename. Should I hook up some kind of caching layer before accessing the mount?
23:58 y4m4 atleast it doesn't know
23:58 y4m4 that is mounted on 'server'
23:59 CROS It doesn't do some sort of ping to see latency?
23:59 CROS Why not a smarter load balancing solution?
23:59 y4m4 could be, i haven't personally explored any of that
23:59 semiosis y4m4: i thought the fuse client polled replicas when doing lookup on a file and used whichever responded first to serve reads
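
For completeness: replicate does expose a read-subvolume knob that can pin reads to one replica; a hedged sketch, assuming a volume named data whose first brick's client translator is data-client-0 (option names and availability vary by release, so check gluster volume set help first). Note this is a volume-wide setting, so it points every client at the same replica rather than giving each server affinity for its own local brick:

    # prefer the first replica (data-client-0) for reads on this volume
    gluster volume set data cluster.read-subvolume data-client-0
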
