IRC log for #gluster, 2013-05-02

All times shown according to UTC.

Time Nick Message
00:21 bchilds joined #gluster
00:38 kevein joined #gluster
00:50 yinyin joined #gluster
01:13 nickw joined #gluster
01:22 bchilds joined #gluster
01:38 portante|ltp joined #gluster
01:42 bchilds joined #gluster
02:34 rb2k joined #gluster
02:35 vshankar joined #gluster
02:37 bchilds joined #gluster
02:57 bchilds joined #gluster
03:14 saurabh joined #gluster
03:39 nickw joined #gluster
03:57 bchilds joined #gluster
04:27 bchilds joined #gluster
04:28 shylesh joined #gluster
04:37 pai joined #gluster
04:47 bchilds joined #gluster
04:53 mohankumar joined #gluster
04:53 bulde joined #gluster
04:56 sgowda joined #gluster
05:00 vpshastry joined #gluster
05:02 bala joined #gluster
05:13 pithagorians joined #gluster
05:16 isomorphic joined #gluster
05:21 mohankumar joined #gluster
05:26 glusterbot New news from newglusterbugs: [Bug 953694] Requirements of Samba VFS plugin for glusterfs <http://goo.gl/v7g29>
05:28 bchilds joined #gluster
05:41 krutika joined #gluster
05:43 rastar joined #gluster
05:45 bala2 joined #gluster
06:00 lalatenduM joined #gluster
06:06 hagarth joined #gluster
06:08 bchilds joined #gluster
06:15 ctria joined #gluster
06:26 jtux joined #gluster
06:29 rgustafs joined #gluster
06:32 haakon_ joined #gluster
06:56 nickw joined #gluster
06:58 gluslog_ joined #gluster
07:02 ilbot_bck joined #gluster
07:02 Topic for #gluster is now  Gluster Community - http://gluster.org | Q&A - http://community.gluster.org/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - http://irclog.perlgeek.de/gluster/
07:03 45PAAIUIM joined #gluster
07:08 bchilds joined #gluster
07:24 shireesh joined #gluster
07:24 Susant joined #gluster
07:24 rastar joined #gluster
07:27 Susant left #gluster
07:28 Susant joined #gluster
07:41 vimal joined #gluster
07:47 ngoswami joined #gluster
08:00 ctria joined #gluster
08:01 edong23 joined #gluster
08:08 bchilds joined #gluster
08:12 guigui1 joined #gluster
08:24 gbrand_ joined #gluster
08:33 tshm_ joined #gluster
08:51 bulde joined #gluster
08:55 bulde1 joined #gluster
08:57 glusterbot New news from newglusterbugs: [Bug 958691] nfs-root-squash: rename creates a file on a file residing inside a sticky bit set directory <http://goo.gl/sCok0>
08:57 bulde joined #gluster
09:08 bchilds joined #gluster
09:12 ngoswami joined #gluster
09:18 bchilds joined #gluster
09:27 Rocky__ joined #gluster
09:43 bulde joined #gluster
09:58 badone joined #gluster
09:58 bchilds joined #gluster
10:05 lcligny joined #gluster
10:05 lcligny Hi there !
10:10 lcligny May I ask about a problem I'm facing with my Gluster 3.3.1 installation?
10:11 lcligny My logs are reporting "[posix.c:1755:posix_create] 0-Record-posix: setting xattrs on /data/record/test-ro.hlNgv failed (Operation not supported)"
10:12 lcligny I remounted the underlying FS with user_xattr as suggested by another log entry: W [posix-helpers.c:681:posix_handle_pair] 0-Record-posix: Extended attributes not supported (try remounting brick with 'user_xattr' flag)
10:12 lcligny but I still have the error
10:13 lcligny So I wonder where I missed something
10:18 bchilds joined #gluster
10:27 glusterbot New news from newglusterbugs: [Bug 958739] More descriptive logging when there is a checksum mismatch in volume <http://goo.gl/fXrPv>
10:41 yinyin joined #gluster
10:48 vpshastry1 joined #gluster
10:48 sgowda joined #gluster
10:48 bchilds joined #gluster
10:50 ramkrsna joined #gluster
10:55 jtux joined #gluster
11:05 rotbeard joined #gluster
11:12 clag_ joined #gluster
11:20 guigui3 joined #gluster
11:20 ujjain joined #gluster
11:21 sgowda joined #gluster
11:28 bchilds joined #gluster
11:36 edward1 joined #gluster
11:45 guigui3 left #gluster
11:48 bchilds joined #gluster
11:58 guigui3 joined #gluster
12:01 dustint joined #gluster
12:12 rastar joined #gluster
12:12 nicolasw joined #gluster
12:12 vpshastry joined #gluster
12:13 lcligny Well mounting the glusterfs in client with user_xattr as well seems to solve the problem in logs. but now I have another problem.
12:16 lcligny When I want to remount a glusterfs volume on the clients with user_xattr, with /etc/fstab having the information, I get "unknown option user_xattr (ignored)"
12:18 lcligny but if I mount, or -o remount,user_xattr, without any fstab entry related to the volume, it mounts without problem and I can see it mounted with the proper options
12:18 hagarth joined #gluster
12:18 lcligny 192.168.0.1:/volume on /mnt type fuse.glusterfs (rw,allow_other,max_read=131072,user_xattr,acl,user_xattr)
12:21 Spamchecker20 joined #gluster
12:24 yps joined #gluster
12:25 andrewjsledge joined #gluster
12:25 H__ lcligny: i found out that noauto in fstab breaks auto gluster mount . your issue might be related
12:26 ramkrsna joined #gluster
12:28 glusterbot New news from newglusterbugs: [Bug 958781] KVM guest I/O errors with xfs backed gluster volumes <http://goo.gl/4Goa9>
12:28 rcheleguini joined #gluster
12:29 bchilds joined #gluster
12:31 Spamchecker20 left #gluster
12:33 lcligny H__: Unfortunately I don't have the noauto option on my gluster volumes, neither on the server nor the clients
12:45 dustint joined #gluster
12:48 lcligny I'm wondering if mount -t glusterfs -o user_xattr is mandatory or if I only need to set user_xattr on the server's underlying ext3 FS
12:49 H__ it's for the bricks
12:49 lcligny so only on the server side right ?
12:49 H__ yes
12:49 semiosis lcligny: glusterfs doesnt use user.* xattrs
12:49 H__ curious : why ext3 and not 4 ?
12:49 semiosis only trusted.* xattrs
12:50 semiosis you shouldnt need user_xattr unless your application uses them
12:50 lcligny H__: because this system has been in production for some time and at that time ext3 was the choice
12:51 H__ ok, good reason :)
12:51 lcligny semiosis: thanks for the clarification
12:52 semiosis also xfs is recommended for gluster bricks (inode size 512) when you're ready to change from ext3 to something else
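
A minimal sketch of the xfs recommendation above (the device name is a placeholder, not from the log):

    mkfs.xfs -i size=512 /dev/sdb1
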
12:52 lcligny but I still don't understand why I still have things like this in my logs: [2013-05-02 14:48:40.663667] E [posix.c:1755:posix_create] 0-Record-posix: setting xattrs on /data/record/test-ro.SQs4i failed (Operation not supported)
12:52 semiosis even with user_xattr?
12:53 semiosis selinux maybe?
12:53 lcligny cat /etc/fstab
12:53 lcligny UUID=eba65037-8eaa-4d9b-af2b-abb6121bda93       /               ext3    acl,user_xattr,errors=remount-ro 0       1
12:54 lcligny here lies my gluster volume but I changed it with -o remount
12:54 lcligny -o remount,user_xattr
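
A sketch of the remount lcligny describes, plus a quick way to check that user xattrs actually work on the brick filesystem (the brick path is from the log; the test attribute name is made up):

    mount -o remount,user_xattr /
    setfattr -n user.test -v 1 /data/record && getfattr -n user.test /data/record
    # glusterfs itself only stores trusted.* xattrs, which need no special
    # mount option on ext3/ext4/xfs
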
12:54 plarsen joined #gluster
13:00 nicolasw hello, is 'perlgeek.de' down? I can't access the irclogs for days already
13:01 semiosis nicolasw: i've heard it's down for a server migration, hopefully will be back online today
13:02 nicolasw semiosis: thx for the info :)
13:02 semiosis yw
13:13 lcligny The fact is I have a log entry with this error for each file created on my gluster volume
13:14 lcligny Maybe enabling xattr after installing gluster is doing something nasty
13:14 yinyin joined #gluster
13:17 wN joined #gluster
13:18 chirino joined #gluster
13:20 lcligny Well thanks for your support anyway, I'll continue searching for a solution and will maybe ask 1 or 2 more questions when they arise ;)
13:22 vshankar joined #gluster
13:23 deepakcs joined #gluster
13:29 mohankumar joined #gluster
13:34 plarsen joined #gluster
13:49 TakumoKatekari joined #gluster
13:49 TakumoKatekari ~ports | TakumoKatekari
13:49 glusterbot TakumoKatekari: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
13:50 TakumoKatekari This is annoying, I can't seem to mount my volume via NFS
13:50 TakumoKatekari I've got all those ports open and it times out
13:50 semiosis ~nfs | TakumoKatekari
13:50 glusterbot TakumoKatekari: To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
13:51 semiosis you probably need those mount opts
13:51 TakumoKatekari vers=3 probably needs to be set, thanks
13:52 TakumoKatekari and this all applies to the ubuntu nfs-client package?
13:53 Lui1 joined #gluster
13:53 semiosis yep
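
A sketch of an NFS mount using the options from the factoid above (server name and paths are placeholders):

    mount -t nfs -o tcp,vers=3 server1:/myvolume /mnt/myvolume
    # or the equivalent fstab entry:
    # server1:/myvolume  /mnt/myvolume  nfs  tcp,vers=3,_netdev  0 0
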
13:53 hagarth joined #gluster
13:54 vpshastry left #gluster
13:54 TakumoKatekari is there a list of options gluster supports and does not support?
13:54 TakumoKatekari such as r/wsize intr and hard
13:55 semiosis options are configured on the cluster & volume usually, you can run 'gluster volume set help' to see a list of options
13:56 semiosis there's not many client options, 'ro' is the only one that comes to mind, and i just found out yesterday it's broken in 3.3 :(
13:56 semiosis well, there are client options, but they're configured on the volume or cluster, and automatically pushed to clients, not set individually on the clients themselves
13:57 TakumoKatekari ah
13:59 guigui3 joined #gluster
14:02 nueces joined #gluster
14:03 guigui5 joined #gluster
14:05 lcligny Ok, so I just finished some test and it turns out that the errors on setting xattr are only shown when I write a file from a gluster client _having_ the volume mounted with the "-o acl" mount option
14:05 lcligny without acl mount option, no error
14:07 JoeJulian Aren't acls stored in user_xattr? Perhaps mounting your bricks with user_xattr?
14:07 bala joined #gluster
14:07 JoeJulian Nah, I think I'm wrong about that...
14:07 lcligny my brick is already mounted with user_xattr on ext3 FS
14:08 TakumoKatekari This is odd, I've got the line in my fstab saying "server:/volume /mnt defaults,_netdev,tcp,vers=3 0 0" but it times out D:
14:08 JoeJulian ~ext4 | lcligny
14:08 glusterbot lcligny: (#1) Read about the ext4 problem at http://goo.gl/xPEYQ or (#2) Track the ext4 bugzilla report at http://goo.gl/CO1VZ
14:08 JoeJulian And ext3 is affected the same.
14:09 lcligny oh
14:10 lcligny I assumed it didn't apply to me since I have ext3, so I didn't read it before
14:11 TakumoKatekari JoeJulian: Does that mean there are issues trying to mount on ubuntu 12.04 via NFS with ext4 bricks?
14:15 lcligny WTF ! thanks JoeJulian
14:19 JoeJulian TakumoKatekari: Check your kernel version against that article I wrote. I'm all xfs so I've allowed myself to forget what versions that is.
14:20 portante|ltp joined #gluster
14:23 bugs_ joined #gluster
14:26 TakumoKatekari 3.2.0-36-virtual
14:27 TakumoKatekari yeah so this shouldn't be an issue
14:27 TakumoKatekari I just can't get it to mount
14:39 TakumoKatekari Which is faster NFS or Gluster-client ?
14:41 Sven3b1 joined #gluster
14:41 sjoeboo_ joined #gluster
14:42 sjoeboo_ anyone have tips on healing split brain when the gfid is the only thing listed?
14:42 sjoeboo_ ie:
14:42 sjoeboo_ at                    path on brick
14:42 sjoeboo_ -----------------------------------
14:42 sjoeboo_ 2013-05-02 10:26:56 <gfid:cbb09eb4-f068-4f6d-9127-95bfb6f53f58>
14:44 Sven3b1 Hello all. From what I understand of Gluster self-heal: if node #2 of a 2-node replicated cluster goes down, and is down long enough that files are deleted from node #1, then when node #2 is restored, the files deleted from node #1 will be restored on node #1 because they are still present on node #2? Is this correct?
14:44 Sven3b1 Sorry sjoeboo, I haven't had to deal with split brain yet so I can't help you on that one.
14:48 JoeJulian sjoeboo_: The file needing healed is .glusterfs/cb/b0/cbb09eb4-f068-4f6d-9127-95bfb6f53f58 on the bricks. stat that file. If its link count < 2, just delete it.
14:49 JoeJulian Sven3b1: Not correct. That's part of what the uuid tree is good for.
14:49 sjoeboo_ and what if its > 2 ?
14:50 sjoeboo_ or  = 2 ?
14:51 lcligny joined #gluster
14:51 lcligny re
14:51 JoeJulian If it's >= 2, then it's hardlinked to an actual file. You can use the ,,(gfid resolver) to figure out which file, then use normal split-brain resolution.
14:51 glusterbot https://gist.github.com/4392640
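
A sketch of the check JoeJulian describes, using a placeholder brick path and the gfid from sjoeboo_'s log entry:

    stat /bricks/brick1/.glusterfs/cb/b0/cbb09eb4-f068-4f6d-9127-95bfb6f53f58
    # Links: 1   -> orphaned gfid file, safe to delete
    # Links: >=2 -> hardlinked to a real file; one way to find it:
    find /bricks/brick1 -samefile \
        /bricks/brick1/.glusterfs/cb/b0/cbb09eb4-f068-4f6d-9127-95bfb6f53f58
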
14:53 lcligny JoeJulian: I've a Debina 6.0.2 with 2.6.32-5-amd64 kernel so I don't think I'm affected by the ext3/4 bug, but thanks anyway
14:53 lcligny s/Debina/Debian/
14:53 glusterbot lcligny: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
14:54 lcligny It seems glusterbot don't like sed :p
14:55 aliguori joined #gluster
15:06 wushudoin joined #gluster
15:07 daMaestro joined #gluster
15:08 Sven3b1 JoeJulian: Thank you for the info. Of course I can't find in the documentation the section that gave me my original understanding. Perhaps I will do a test scenario on my centos vms and see how gluster behaves.
15:09 TakumoKatekari Anyone know of any tweaks to make stating files faster? Currently stating files on a volume is killing performance
15:10 JoeJulian don't :)
15:10 TakumoKatekari And when "don't " isn't an option?
15:12 karoshi is there a way to have gluster follow the symlink it finds in the brick?
15:12 JoeJulian Then it's pretty much down to latency improvement.
15:12 semiosis TakumoKatekari: ,,(php)
15:12 glusterbot TakumoKatekari: php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://goo.gl/uDFgg for details.
15:12 JoeJulian karoshi: no
15:12 karoshi JoeJulian: ok thanks
15:13 geewiz joined #gluster
15:13 TakumoKatekari So is there a way to make PHP fast on Gluster or should I just find another solution like prue NFS?
15:13 semiosis TakumoKatekari: see glusterbot's last message
15:14 semiosis there are optimizations you can do for php, both at the web server config level (APC), and at the code level (autoloading)
15:14 semiosis static files should be cached in front of glusterfs... apache's mod_cache is easy, varnish cache is powerful
15:14 geewiz Hi! I'm a bit confused that "cat /proc/mounts" shows "relatime" even when I mount our Gluster 3.2 volume with "noatime". Is there something wrong?
15:14 TakumoKatekari well code modification isn't an option because its a joomla app
15:14 wushudoin left #gluster
15:15 TakumoKatekari I'm already using varnish and nginx but joomla just includes too many files
15:15 semiosis TakumoKatekari: http://developer.joomla.org/manual/ch01s04.html -- looks like joomla uses (or can use) autoloading
15:15 glusterbot Title: 1.4. Class Auto-loading (at developer.joomla.org)
15:16 semiosis TakumoKatekari: you can use APC to eliminate stat calls
15:17 semiosis TakumoKatekari: also optimize your include path to have directories in order of where files are most likely to be found (most commonly included dir should be first...)
15:17 TakumoKatekari can APC totally remove stat calls?
15:18 JoeJulian mostly, yes.
15:18 ladd joined #gluster
15:18 geewiz TakumoKatekari: Yes. You'll have to restart your web server after code modifications, though.
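
A sketch of the APC approach discussed above; apc.stat=0 is the directive that stops APC from stat()ing cached files on every include (the config file path varies by distribution):

    ; e.g. in /etc/php5/conf.d/apc.ini
    apc.enabled=1
    apc.stat=0
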
15:18 Sven3b1 When bringing an entire cluster back online, does the node boot order matter? I just brought my 2 node cluster back online and the gluster volume was not able to mount because bricks were not connected. I shut down both nodes and then brought them back online one at a time and now all is working well.
15:19 JoeJulian Sven3b1: Yes, servers must be started before clients.
15:19 Sven3b1 In this case both are on the same system
15:19 JoeJulian @glossary
15:19 glusterbot JoeJulian: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
15:20 JoeJulian Ah, that's probably semiosis' fault.
15:20 Sven3b1 oh?
15:20 * JoeJulian waits for it....
15:21 deepakcs joined #gluster
15:21 JoeJulian damn.. I hate when I try to tease someone and they're afk.
15:22 Sven3b1 lol
15:22 semiosis glusterbot: meh
15:22 glusterbot semiosis: I'm not happy about it either
15:22 Sven3b1 hahah
15:24 Sven3b1 So am I to assume that when bringing all nodes of a cluster back online I should be doing it one node at a time?
15:25 shylesh joined #gluster
15:26 semiosis Sven3b1: imho you shouldnt make any assumptions when dealing with failure recovery.  test test test!
15:26 geewiz Finally found https://bugzilla.redhat.com/show_bug.cgi?id=825569 where it became clear to me that there is no such mount option.
15:26 glusterbot <http://goo.gl/6Z8Po> (at bugzilla.redhat.com)
15:26 glusterbot Bug 825569: high, medium, ---, kaushal, ASSIGNED , [enhancement]: Add support for noatime, nodiratime
15:29 Sven3b1 I am testing it at the moment and have been testing for about 6 months. I was just curious if there was a recommended procedure for bringing server nodes back online because I have always brought my test cluster back up by powering all nodes at the same time and today was the first time I had an issue. So was just looking for clarification if this was an isolated issue or if I should be bringing
15:29 Sven3b1 the nodes back up in a certain order or fashion.
15:30 tshm_ Hi all. I'm trying out a Gluster v3.3 setup, where I'm expanding from one pair of replicated bricks to two pairs. Problem being, once the second pair is online the range is messed up:
15:30 tshm_ trusted.glusterfs.pathinfo="((<DISTRIBUTE:dht> (<REPLICATE:replicate-0> <POSIX(/mnt/brick_a0/storage):bench2:/mnt/brick_a0/storage/00042-store1> <POSIX(/mnt/brick_a1/storage):bench5.p2.b1.local:/mnt/brick_a1/storage/00042-store1>)) (dht-layout (replicate-0 0 4294967295) (replicate-1 0 0)))"
15:30 semiosis like JoeJulian said, bring up servers before clients.
15:30 tshm_ ... which effectively makes sure that all new files end up only on the old pair of bricks
15:31 semiosis Sven3b1: to clarify JoeJulian's remark, he means bring up all your server *processes* before bringing up your client *processes*.  doesnt matter if those run on the same machine or different machines
15:31 tshm_ I'm expecting replicate-1 to take care of half of the range
15:32 Sven3b1 semiosis: Ah ok. Thank you. Can you advise how I can ensure that happens on CentOS x64 6.4?
15:32 lcligny Bye
15:32 lcligny left #gluster
15:32 semiosis tshm_: you need to rebalance after you add bricks/sets to distribution
15:34 semiosis Sven3b1: i'm kinda like the "debuntu guy" around here
15:34 tshm_ Ok, I admit, I'm a beginner to Gluster... so how do I do that? A little quirk is that in the setup we're running here, we don't use the daemon or whatever that's called...
15:34 semiosis tshm_: in all kindness, ,,(rtfm) :)
15:34 glusterbot tshm_: Read the fairly-adequate manual at http://goo.gl/E3Jis
15:34 Supermathie Some beginning results with Oracle on Gluster (glusterfs native client) overnight: 1423 aggregate TPS
15:34 Supermathie Not as nice as with the DNFS client, but it's a start.
15:35 semiosis tshm_: you don't use glusterd & the gluster CLI?  i dont think I can help you then.  maybe someone else can but that's either a very old setup, or a very custom setup.
15:35 tshm_ I see your point, and in all kindness, don't think I didn't spend quite a few hours googling before asking here.
15:35 tshm_ Somehow it worked fine in v3.1, but unfortunately I don't know how that was done.
15:35 semiosis tshm_: there's a rebalance command in the CLI, the manual (or interactive help) should get you going with that
15:35 tshm_ My bet is it's a "very custom" setup :-)
15:36 tshm_ all right, thanks anyway
15:36 semiosis stick around, maybe someone else will have more helpful advice
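
For completeness, the CLI flow semiosis refers to (volume and brick names are placeholders; tshm_'s custom setup doesn't use glusterd, so this only shows what the CLI would normally do):

    gluster volume add-brick myvol server3:/mnt/brick_b0 server4:/mnt/brick_b1
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status
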
15:36 Sven3b1 semiosis: Fair enough.
15:36 inodb joined #gluster
15:37 tshm_ Will do! Still some time remaining of the workday
15:38 Supermathie tshm_: *WHY* don't you use the daemon? Do you just not use it for setup of bricks? Have you found it to cause a problem? Have you just "always done it this way"?
15:39 tshm_ Very valid questions! I will pass them on to whoever made that decision once upon a time. It has got something to do with automated provisioning, though... large setup
15:39 Supermathie tshm_: Also, cl
15:39 Sven3b1 tshm_: I am no expert but form my understanding a rebalance and then trigger a self heal would do the trick. Although I have never seen replcate-1 before.
15:39 Supermathie tshm_: Also, clarify what you mean by "the daemon".
15:40 tshm_ I mean the CLI thingie where you can run such commands as "gluster volume rebalance"
15:41 Supermathie So the problem is that you don't use the cli and you want to know how to submit the 'volume rebalance' command? :D That's essentially all the CLI is doing, telling the gluster daemon (glusterd) to trigger a rebalance.
15:41 tshm_ there's supposed to be an xattr called trusted.distribute.fix.layout involved in the process of rebalancing - no idea where that originates from, though
15:42 tshm_ I know, if I had the option of using the CLI, I would be done yesterday already. ;-)
15:42 tshm_ ... and not have to ask you guys
15:42 Supermathie tshm_: Yeah, I think you need to ask the question "Why aren't we using the CLI?"
15:45 andreask joined #gluster
15:53 luckybambu joined #gluster
15:53 dialt0ne joined #gluster
15:54 dialt0ne what's the best way to import a brick from one gluster pool to a separate gluster pool
15:54 JoeJulian Sven3b1: Oh, I thought you were using that other distro.... :O The init scripts are scheduled such that the servers should start before the client mount if _netdev is in the mount options.
15:55 dialt0ne e.g. i have an EBS snapshot of a prod gluster setup that i want to make available on a regular basis to a separate stage gluster setup
15:55 dialt0ne i have ec2-consistent-snapshot working, i can mount the filesystem on the stage system and see the files in the brick
15:55 Sven3b1 JoeJulian: So this would work: host1:/gvtest /gvtest glusterfs defaults,_netdev 0 0
15:56 dialt0ne only it's 1.6T in 14 million files
15:56 dialt0ne which makes something like rsync a problem
15:56 JoeJulian tshm_: That xattr can be set to any value through any client to fix the layout of one directory. You would have to walk the directory tree and set that xattr to have the new bricks be used.
15:56 dialt0ne filesystem is XFS
15:56 JoeJulian Or, better yet, use the cli as it was designed. :P
15:57 JoeJulian Sven3b1: Should, yes. If not, it's log checking time.
15:57 Sven3b1 14 million files? oh dear. I thought my 1.1 million files was a lot to keep track of.
15:58 dialt0ne well, that's a guesstimate. df -i says 14642705 inodes
15:58 dialt0ne so that's dirs, files, links, etc.
15:58 tshm_ JoeJulian: Cool... I have a script available doing something similar, but it only seems to want to /read/ the attribute ... for some reason
15:58 Sven3b1 still, hella lot of stuff.
15:58 JoeJulian @targeted fix-layout
15:58 JoeJulian damn
15:58 dialt0ne gluster is working "ok" for all these media files
15:59 tshm_ Anyway, good pointer! Thanks.
15:59 dialt0ne they just need a system to duplicate them into the QA/staging env
16:00 dialt0ne so i'm not sure what's the best way... delete the old volume, nuke the old brick, put the new brick in place, clear extended attributes, re-create volume?
16:00 dialt0ne it's not a _live_ system so sufficient automation and doing this at o-dark-thirty is ok
16:01 JoeJulian @learn targeted fix-layout You can trigger a fix-layout for a single directory by setting the extended attribute "trusted.distribute.fix.layout" to any value for that directory. This is done through a fuse client mount.
16:01 glusterbot JoeJulian: (learn [<channel>] <key> as <value>) -- Associates <key> with <value>. <channel> is only necessary if the message isn't sent on the channel itself. The word 'as' is necessary to separate the key from the value. It can be changed to another word via the learnSeparator registry value.
16:01 JoeJulian @learn targeted fix-layout as You can trigger a fix-layout for a single directory by setting the extended attribute "trusted.distribute.fix.layout" to any value for that directory. This is done through a fuse client mount.
16:01 glusterbot JoeJulian: The operation succeeded.
16:01 JoeJulian @targeted fix-layout
16:01 glusterbot JoeJulian: You can trigger a fix-layout for a single directory by setting the extended attribute trusted.distribute.fix.layout to any value for that directory. This is done through a fuse client mount.
16:01 JoeJulian @forget targeted fix-layout
16:01 glusterbot JoeJulian: The operation succeeded.
16:02 JoeJulian @learn targeted fix-layout as You can trigger a fix-layout for a single directory by setting the extended attribute \"trusted.distribute.fix.layout\" to any value for that directory. This is done through a fuse client mount.
16:02 glusterbot JoeJulian: The operation succeeded.
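
A sketch of the targeted fix-layout from the factoid above, run from a FUSE client mount (the mount path and directory are placeholders):

    setfattr -n trusted.distribute.fix.layout -v "yes" /mnt/myvol/some/directory
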
16:02 Sven3b1 diatone: I am not export but from what I understand if you tell gluster to remove the brick from the old volume it will migrate the data off it, so I know that is not a solution, as your brick will be empty now.
16:02 Sven3b1 I apparently can't spell today either. Meant to say "I am no expert" not "I am not export"
16:03 nicolasw joined #gluster
16:03 tshm_ I'm sure most people read it as "I am no expert" anyway, instead of scratching their heads wondering what you meant ;-)
16:04 dialt0ne well, the key bit is that i want the volume name to stay the same so the clients can reconnect when it comes back but i want to completely replace the underlying brick
16:04 dialt0ne discard the old contents because it's too tedious to sync from new filesystem to old
16:05 dialt0ne i guess that's part of what i'm not sure of either
16:06 dialt0ne will the clients re-connect to a volume of the same name if it's been completely replaced on the server side?
16:06 dialt0ne client is fuse on ubuntu talking to amazon linux server
16:06 JoeJulian dialt0ne: If you're doing this with all the bricks, you could create the test volume but not start it. When you want to use the snapshot, mount all the bricks and start the volume.
16:07 Sven3b1 diantone: what if you treated it as a rolling upgrade. Add new bricks one at a time and then remove the old bricks one at time. If that makes sense and if I understand what you are trying to do.
16:07 dialt0ne no, i don't think i want a rolling upgrade
16:07 Sven3b1 ok
16:07 dialt0ne i kind-of want to have a clean break from the old brick and start with new bricks
16:08 dialt0ne but use the same volume name without restarts from the clients (if it can be helped)
16:08 JoeJulian Do you mean: "gluster volume replace-brick $vol $brick_old $brick_new commit force"
16:09 dialt0ne hm
16:09 Sven3b1 what I suggested should give you new bricks but keep the volume name without causing conflicts or restarts, because the volume would never go offline.
16:09 dialt0ne hm
16:09 dialt0ne i think that could work
16:09 sjoeboo_ so, had a xfs brick that needed cleaning, now in the brick logs I'm getting TONS of:
16:10 sjoeboo_ https://gist.github.com/anonymous/a3baea538f2b96133d88
16:10 glusterbot <http://goo.gl/HMN9R> (at gist.github.com)
16:10 dialt0ne sorry to fill-in more details late in the game, but it's a replica volume with 1 brick on each server
16:10 Sven3b1 and you wouldn't have to worry about having two volumes online with the same name during the transition, or have downtime when bringing down one volume and then starting the new one
16:10 dialt0ne can replica be 0?
16:11 dialt0ne or 1 i guess
16:11 dialt0ne gluster volume remove-brick $vol replica 1 $server2:$brick_old
16:11 JoeJulian Yes, turning it to replica 1 will turn it into a distribute-only volume.
16:11 dialt0ne then gluster volume replace-brick $vol $server1:$brick_old $server1:$brick_new commit force
16:11 Sven3b1 dialtone: sorry that I don't know. I know replica can be changed but I don't know if 1 or 0 are valid values.
16:12 dialt0ne then gluster volume add-brick $vol replica 2 $server2:$brick_new
16:12 dialt0ne well, i guess it wouldn't be "brick" per-se
16:12 dialt0ne brick == server:/dir
16:13 dialt0ne hm ok
16:13 JoeJulian Wait... are your new bricks just blank?
16:14 dialt0ne no
16:14 JoeJulian What's the goal?
16:14 dialt0ne in prod they add a few thousand files a day
16:14 Sven3b1 sjoeboo: I only know of one xfs bug when running on xen servers, but JoeJulian will be better suited to help you with that when he has a moment.
16:15 dialt0ne so when they QA in stage, they have the latest db from prod but all the media is missing from the filesystem
16:15 JoeJulian So they want to mount a snapshot in qa?
16:15 dialt0ne right
16:15 Supermathie So, I have a crazy idea. If the bottleneck in gluster seems to be the glusterfs/nfs daemon, can I add more NFS daemons to the same server to do more processing? (obviously not registered with portmapper)
16:16 dialt0ne rsync from prod to stage isn't doable because of the # of inodes
16:16 dialt0ne but i do have snapshots
16:16 JoeJulian dialt0ne: Then, make a qa volume. Doesn't matter what you use for bricks. Don't start it 'till you're ready to qa. When you're ready to qa, mount the snapshots in the brick locations for the qa volume. Start it. Mount it. qa test.
16:17 dialt0ne well, the volume is running most of the time
16:17 dialt0ne because developers don't care about broken images, they're using it to bug squash
16:18 JoeJulian point being. Stop the volume, mount the snapshots, start the volume.
16:18 dialt0ne sounds good to me
16:18 JoeJulian The xattrs will all work as long as the bricks are in the same order and uses the same translators.
16:18 dialt0ne ding ding ding ding
16:19 dialt0ne thank you very much sir!
16:19 JoeJulian You're welcome
16:19 dialt0ne i will try this out and report back
16:19 dialt0ne it... might take a few days :-)
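
A sketch of the snapshot-swap workflow JoeJulian describes (volume name, brick path and EBS device are placeholders):

    gluster volume stop stagevol
    umount /bricks/stagevol              # on each server
    mount /dev/xvdf /bricks/stagevol     # attach and mount the fresh snapshot
    gluster volume start stagevol
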
16:19 JoeJulian Supermathie: I can't think of how the data path could work to take advantage of more nfs services.
16:21 Supermathie JoeJulian: the bottleneck in my system seems to be whichever daemon is responsible for handling distribution of writes, either the fuse-mount daemon or the NFS server.
16:21 duerF joined #gluster
16:21 JoeJulian Distribution as in dht translator?
16:21 Supermathie Yeah
16:23 JoeJulian Really? That's such a very fast hashing algorithm I'm really surprised that could be the bottleneck. Have you profiled it to see which function?
16:24 Supermathie I can't say that it's *dht* specifically, only that it's that daemon chewing 100% of CPU while running
16:25 JoeJulian Ah, each daemon runs several translators. iirc, it's glusterfsd that was at 100%, right?
16:27 JoeJulian Aack, I have a conference call I need to participate in. bbl.
16:27 satheesh joined #gluster
16:28 Sven3b1 dam, was going to remind him about sjoeboo_'s xfs issue.
16:30 Mo___ joined #gluster
16:30 aliguori joined #gluster
16:35 brooshevski joined #gluster
16:39 aliguori_ joined #gluster
16:46 jclift_ joined #gluster
16:48 chirino joined #gluster
16:53 y4m4 joined #gluster
16:54 sjoeboo_ so, i feel like i'm reading conflicting things....if i remove a brick (this is 3.3.1), is the data migrated/rebalanced before the brick remove is "successful" ?
16:54 sjoeboo_ or is it really just removed from the layout, and it's up to me to move the data from the brick to the newly-smaller volume ?
16:55 bulde joined #gluster
17:02 Sven3b1 The data will be migrated off the brick. This is my understanding from the 3.3.1 admin pdf
17:03 JoeJulian sjoeboo_: Still in my call, but it's boring... Starting with 3.3, if you remove a brick, a new layout is written to account for the brick being removed and a rebalance is performed. This /should/ migrate everything off. If the remove-brick status shows success, then it theoretically did. Trust but verify.
17:03 Sven3b1 you could always double check and view your remaining bricks to make sure the data has been moved and will be available
17:03 sjoeboo_ okay, that's good...and since this is a dist-replica i would remove the replica pair of bricks.
17:04 JoeJulian right
17:04 Sven3b1 yes
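
A sketch of the 3.3 remove-brick flow described above (volume and brick names are placeholders); data migrates during "start", and it is worth verifying before "commit":

    gluster volume remove-brick myvol server3:/bricks/b3 server4:/bricks/b4 start
    gluster volume remove-brick myvol server3:/bricks/b3 server4:/bricks/b4 status
    gluster volume remove-brick myvol server3:/bricks/b3 server4:/bricks/b4 commit
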
17:05 thomasle_ joined #gluster
17:05 Sven3b1 curious question. Would gluster even let you remove a single brick and not all bricks from the replica pair? I assume this would make creating a new layout and rebalance rather difficult if not impossible.
17:06 JoeJulian it would not
17:06 semiosis Sven3b1: gluster CLI should not let you do that (if it does, its a bug)
17:07 Sven3b1 good to know.
17:08 fubada joined #gluster
17:09 fubada hi, can someone tell me if this is a really bad idea? I have two kvm machines and I want to use gluster to share storage between them for storing my vm images
17:09 dustint joined #gluster
17:09 fubada 2 gluster servers, also clients of each other
17:09 fubada is this going to cause tons of sync events in gluster
17:11 Sven3b1 I'm planning something similar at the moment using oVirt with gluster as the shared storage, and from what I understand there is some integration and native support for KVM on gluster volumes. 3.4 is supposed to add support for qemu thin provisioned volumes.
17:11 fubada yes this is for ovirt
17:12 fubada Sven3b1: the issue is, if your gluster clients are your 2 gluster servers i think theres some issues with that
17:12 fubada someone in here explained it to me awhile back
17:12 the-me joined #gluster
17:12 fubada but basically the IO will be very high due to sync
17:13 luis_silva joined #gluster
17:13 Sven3b1 I haven't moved my test to oVirt yet on real hardware. I am still testing gluster in vmware at the moment. However from what I read in the oVirt documentation, running kvm on gluster is well supported. The oVirt GUI actually has gluster management built into it.
17:13 Sven3b1 Oh i see.
17:13 fubada ya but your KVM hypervisor box should ideally not be your gluster server
17:14 fubada if you have another separate physical box for gluster that the kvm box mounts as a gluster client, thats fine
17:14 fubada but in that case id just do iscsi
17:15 Sven3b1 True, that is the ideal scenario, however budget-constrained deployments sometimes demand non-ideal deployments and sacrifice speed.
17:15 fubada ya
17:15 JoeJulian Not ideal, but a lot of people do it.
17:15 Sven3b1 My testing reveals that the performance loss is not that great. I don't have hard numbers for you in hardware.
17:15 Sven3b1 But..
17:15 fubada JoeJulian: two kvm boxes that are also gluster servers in master-master and mount the volume from themselves
17:15 soukihei I am getting host is not a friend errors when I try to create my first volume. Both peers can see one another. I've pasted some information here: http://pastebin.com/p87SERds
17:15 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
17:15 fubada ?
17:16 JoeJulian If you can put your working data into its own glusterfs volume and mount that inside the vm, that's often advisable. See what suits your use case.
17:16 fubada so one gluster volume per vm?
17:16 fubada thats sounds interesting
17:16 JoeJulian No, like....
17:17 soukihei Sorry, here is a URL to fpaste.org with the error. http://ur1.ca/dnhx1
17:17 glusterbot Title: #10065 Fedora Project Pastebin (at ur1.ca)
17:17 fubada right now I have two kvm boxes and they each have /var/lib/libvirt/images with *img files in there which are my virtual machine disks
17:17 JoeJulian I have a bunch of VMs. One is an intranet server running drupal. I have an intranet volume with the drupal install on it.
17:17 fubada i need to share that directory between kvm1 and kvm2
17:18 Sven3b1 fubada: My test environment is two CentOS x64 vms running on VMware Workstation 9 with host OS Win7 x64, and they are both server and client (mount their own volumes), and I can get 30-40MBps writing.
17:18 fubada ok cool
17:18 JoeJulian With my method, the VM image has almost no I/O most of the time. Some logs, but I don't care if I lose those.
17:18 fubada and your images are now redundant across two bare metal machines?
17:19 soukihei do AWS EC2 instances have a problem connecting volumes when you use hostnames versus IP addresses?
17:19 JoeJulian ~hostnames | soukihei
17:19 glusterbot soukihei: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
17:20 JoeJulian soukihei: Last phrase ^^
17:20 JoeJulian soukihei: btw... not sure if you noticed, but that paste had the peer status from the same server twice.
17:20 soukihei JoeJulian: okay, thanks. I'll double-check it again
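
A sketch of the probe-by-name sequence in the factoid above (hostnames are placeholders):

    # on server1:
    gluster peer probe server2.example.com
    # then, from server2, probe the first server back by name:
    gluster peer probe server1.example.com
    gluster peer status
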
17:22 Rapture joined #gluster
17:23 Rapture hi all, experiencing very high CPU load with glusterfsd on server side. Gluster info is as follows
17:23 Rapture Type: Replicate
17:24 Rapture Status: Started
17:24 Rapture Number of Bricks: 1 x 2 = 2
17:24 Rapture Transport-type: tcp
17:24 Supermathie Rapture: doing synchronous NFS writes?
17:24 Rapture no activity on clients and cpu load is between 50-80% non-stop
17:24 JoeJulian @ext4
17:24 glusterbot JoeJulian: (#1) Read about the ext4 problem at http://goo.gl/xPEYQ or (#2) Track the ext4 bugzilla report at http://goo.gl/CO1VZ
17:24 JoeJulian Maybe...
17:25 Rapture @super — pardon my noobness I will have to check this as I'm not sure
17:26 Rapture @Joe — Thanks for the links, will check them out
17:27 Sven3b1 I am out. Gotta sort out a new cell phone. Thanks to all who helped me with my questions and I hope my novice suggestions/advice was helpful.
17:27 JoeJulian +1
17:27 Rapture @joe — our servers are all ext3 and client using fuse
17:27 Rapture client not experiencing any abnormal cpu loads
17:28 Rapture servers: centOS 5.4 // clients: centOS 5.8
17:28 JoeJulian ext3 is the same thing as ext4 with a little less resiliency. Uses the same code and is subjected to the same bug.
17:29 JoeJulian I don't know if the bug was backported to those kernels though.
17:30 JoeJulian Well, if there's no client activity then I would expect a self-heal crawl. Check your logs for activity.
17:32 fubada so JoeJulian just to confirm theres nothign wrong with 2 node cluster where both are master, master?
17:32 fubada and both have my virtual machines inside
17:33 JoeJulian fubada: It works.
17:33 fubada thanks
17:33 Rapture gluster volume heal vault info
17:33 Rapture Gathering Heal info on volume vault has been successful
17:33 Rapture Brick ....com:/mnt/gluster1
17:33 Rapture Number of entries: 0
17:33 Rapture Brick ....com:/mnt/gluster2
17:33 Rapture Number of entries: 0
17:34 Supermathie Rapture: Anything in the logs? /var/log/gluster/*.log, /var/log/gluster/bricks/*.log
17:37 Rapture a few of these: [2013-05-02 04:30:01.206778] I [server3_1-fops.c:1085:server_unlink_cbk] 0-vault-server: 34909349: UNLINK /CB1/ctoken/11d504481e781.json (--) ==> -1 (No such file or directory) in the bricks log
17:38 Rapture [2013-05-02 17:33:24.618866] W [rpc-transport.c:174:rpc_transport_load] 0-rpc-transport: missing 'option transport-type'. defaulting to "socket"
17:38 Rapture [2013-05-02 17:33:24.735082] I [cli-rpc-ops.c:5928:gf_cli3_1_heal_volume_cbk] 0-cli: Received resp to heal volume
17:38 Rapture [2013-05-02 17:33:24.735943] I [input.c:46:cli_batch] 0-: Exiting with: 0
17:38 coredumb Hi folks
17:38 Rapture nothing else really, nothing that stands out
17:39 Rapture gluster v 3.3.0 btw
17:39 coredumb performance wise, to run VMs, is it better to use the "new" libvirt block support, or would directly storing them on a mountpoint as raw/qcow2 achieve the same?
17:42 fubada im chosing the second option
17:42 fubada unless its easy to migrate existing raw images to libvirt block
17:42 fubada whatever that means
17:43 coredumb fubada: just storing your VM images on a glusterfs mount point then
17:43 fubada yea
17:44 coredumb i was saying to myself that it would be easier :)
17:44 coredumb was just wondering if there was some perf diff
17:44 fubada would also be easier to debug
17:44 fubada qemu-img and such stuff
17:44 coredumb yeah
17:45 fubada i really like using lvm volumes for vms
17:45 coredumb any story of users using that kind of setup ?
17:45 fubada lvm over iscsi
17:47 coredumb fubada: with cluster lvm stack ?
17:47 fubada i dunno
17:47 fubada im a newb
17:48 coredumb hehe ok
17:48 fubada storage newb
17:49 Supermathie fubada: lvm over iscsi over lvm over disk?
17:49 JoeJulian Rapture: upgrade to 3.3.1
17:52 fubada Supermathie: lvm over disk on the storage server, then attach the lv over iscsi on the clinet
17:52 fubada client
17:52 fubada thats how id do it
17:53 Supermathie fubada: Exactly, one LUN per virtual disk
17:53 fubada ya
17:53 y4m4 joined #gluster
17:54 fubada the client is my kvm hypervisor bare metal
17:58 johnmark @stats
17:58 glusterbot johnmark: I have 3 registered users with 0 registered hostmasks; 1 owner and 1 admin.
17:58 johnmark @chanstats
17:58 * JoeJulian off-topic grumbles about google screwing up their dependencies for chrome on fedora...
17:58 johnmark JoeJulian: srsly???
17:59 JoeJulian Yeah, google-chrome-beta requires libudev.so.0 which doesn't exist.
17:59 johnmark haha... that's awesome
18:00 johnmark for f18?
18:00 JoeJulian yes
18:02 Supermathie It's *fedora* :p
18:02 mypetdonkey joined #gluster
18:04 semiosis ~channelstats | johnmark
18:04 glusterbot semiosis: Error: No factoid matches that key.
18:04 johnmark semiosis: lol... thanks :)
18:04 johnmark semiosis: I finally just did "/list #gluster"
18:04 johnmark 192 nicks!
18:05 johnmark @channelstats
18:05 glusterbot johnmark: On #gluster there have been 120599 messages, containing 5202244 characters, 873238 words, 3626 smileys, and 446 frowns; 777 of those messages were ACTIONs. There have been 44210 joins, 1417 parts, 42823 quits, 19 kicks, 119 mode changes, and 5 topic changes. There are currently 192 users and the channel has peaked at 217 users.
18:05 johnmark there we go
18:05 JoeJulian Wow... 19 kicks... that's gone up a lot.
18:05 mypetdonkey left #gluster
18:06 johnmark heh
18:06 JoeJulian I'm sure most of them were automated for pasting in channel.
18:06 semiosis yep
18:07 johnmark JoeJulian: off with their heads
18:07 * JoeJulian has spring fever today... I just want to go out and play with my daughter in the sun and can't really get into the groove.
18:07 johnmark it's nice and warm here, too
18:07 johnmark *sigh*
18:09 johnmark JoeJulian: huh, I'm also running f18 and didn't see that error
18:09 johnmark I'm also running the unstable build
18:09 bennyturns joined #gluster
18:10 JoeJulian Oh, right... speaking of sunny in Boston... I have to call someone in Concord...
18:10 JoeJulian 28.0.1490.2-196983
18:10 semiosis heavy rains here all week... great weather for coding!
18:10 JoeJulian Yeah, I've seen some pictures... Great if you own a rowboat.
18:12 fubada JoeJulian: is it possible to gluster initializa a brick with existing files in there?
18:12 fubada something like /var/lib/libvirt/images
18:12 fubada instead of me having to copy files from there to my brick later
18:25 JoeJulian Yes. Make sure, if it's a replicated volume, that the existing files are on the left-hand brick.
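
A hedged sketch of how that might look when creating the volume (hostnames and brick paths are placeholders): the brick that already holds the files is listed first, making it the "left-hand" brick of the replica pair.

    gluster volume create dev_kvm_images replica 2 \
        kvm3.example.com:/gluster/bricks/dev_kvm_images \
        kvm4.example.com:/gluster/bricks/dev_kvm_images
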
18:31 Rapture left #gluster
18:41 sjoeboo_ so, question: removing 2 (replica pair) bricks, data started migrating. one of them failed, the other is going strong looks like.....now, they SHOULD be perfect replicas, so just one finishing cleanly should be okay, right? and...any way to "re-try" before i do the commit at the end? just start the removal again?
18:44 JoeJulian Should be, yeah. To retry? I suspect that would be to "stop" and "start" again.
18:45 ThatGraemeGuy joined #gluster
18:46 brunoleon_ joined #gluster
18:54 portante` joined #gluster
18:59 fubada JoeJulian: before using the bricks, is it necessary to mount them?
18:59 fubada if my gluster server and client are the same machine
19:00 Supermathie Does this look right? http://www.websequencediagrams.com/files/render?link=ItCE9JbOq0yw8HmrQrGW
19:00 glusterbot <http://goo.gl/tNpKO> (at www.websequencediagrams.com)
19:00 fubada do i need to: localhost:/dev_kvm_images /opt/dev_kvm_images glusterfs defaults,_netdev,backupvolfile-server=kvm4.us1.foo.com 0 0
19:00 fubada or can I use /gluster/bricks/dev_kvm_images
19:00 fubada directly
19:01 Supermathie oh god no don't use the underlying bricks directly
19:01 fubada ok
19:01 fubada i didnt think that it was ok
19:01 fubada just curious
19:09 Supermathie JoeJulian: can I attach a profiler to an already-running glusterfs-fuse daemon?
19:12 JoeJulian Not a profiler expert, sorry. I've submitted one bug once with profiler data and I'm still not sure how useful it was...
19:12 rotbeard joined #gluster
19:23 fubada its taking a long time to move a 100gb file into gluster
19:23 fubada even when its mounted local to glusterd
19:27 JoeJulian replicated, right?
19:28 fubada yea
19:28 fubada sooo srow
19:28 JoeJulian So it's copying to both replicas.
19:30 fubada 7mb/sec
19:31 JoeJulian cp?
19:31 fubada mv
19:32 JoeJulian Not sure how efficient mv is wrt block size.
19:32 JoeJulian dd bs=1M might be more efficient.
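
A sketch of that suggestion (source and destination paths are placeholders based on the discussion above):

    dd if=/var/lib/libvirt/images/vm1.img of=/opt/dev_kvm_images/vm1.img bs=1M
    # then remove the source once the copy is verified
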
19:36 johnmark semiosis: can you help out this guy? http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=706625
19:36 glusterbot <http://goo.gl/372qT> (at bugs.debian.org)
19:36 johnmark the-me: ^^^
19:37 johnmark he also posted here: https://twitter.com/mikagrml/status/329996862360793090
19:37 glusterbot <http://goo.gl/PsYwG> (at twitter.com)
19:37 fubada why doesnt he just get his ass on irc
19:37 fubada and chat in here
19:40 semiosis johnmark: no one can help him.  he's trying to do a hands-free upgrade from 3.0 to 3.2
19:41 semiosis fubada: +1
19:43 cyberbootje joined #gluster
19:45 semiosis the-me: ping re: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=706625
19:45 glusterbot <http://goo.gl/372qT> (at bugs.debian.org)
19:45 semiosis johnmark: this seems like more of a debian policy issue than a matter of actual technical support
19:45 JoeJulian I replied.
19:47 JoeJulian "Debian Developer" He must be familiar with debian policies then...
19:47 semiosis +1
19:47 semiosis thanks JoeJulian!
19:47 semiosis JoeJulian: the-me is also a DD & the official lead maintainer of debian's glusterfs packages
19:47 semiosis probably the best person to ask about this
19:50 nueces joined #gluster
19:55 soukihei I'm trying to setup gluster inside AWS and I keep getting 'host is not a friend' when I go to create the volume. I can see the peer is connected and I am not having much luck finding an answer via google. Does anyone know why this error is being generated?
19:55 semiosis soukihei: you need to probe first
19:56 semiosis soukihei: ,,(rtfm) about creating a trusted storage pool
19:56 glusterbot soukihei: Read the fairly-adequate manual at http://goo.gl/E3Jis
19:56 soukihei I did that. Like I said, the peer says it is connected
19:56 semiosis oh oops
19:56 semiosis are you using ,,(hostnames) ?
19:56 glusterbot Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
19:56 semiosis you should be
19:56 semiosis and every peer should know every other peer (incl the first) by name, not ip
19:56 soukihei yes, I am using hostnames. I've setup both DNS and local /etc/hosts entries for the two gluster servers
19:57 fubada why by name
19:57 fubada what if dns shis
19:57 fubada shits
19:57 fubada i use IP
19:57 semiosis fubada: are you in ec2?
19:57 fubada no
19:57 semiosis ok then
19:57 fubada ok
19:57 soukihei I am in EC2
19:57 fubada i get it
19:57 fubada yea ec2 is crap
19:57 semiosis soukihei: use hostnames
19:57 soukihei I am
19:57 fubada ever changing eeps
19:57 semiosis fubada: meh
19:57 soukihei I also have EIPs assigned to the two nodes
19:57 fubada why use gluster in ec2
19:57 fubada use their shit
19:58 semiosis fubada: you're borderline trolling here
19:58 fubada unless just learning
19:58 soukihei I am trying to setup gluster across multiple availability zones
19:58 johnmark semiosis: JoeJulian: thanks guys
19:58 soukihei semiosis, I am following the quick-start guide. I got gluster working at home on some KVMs, just not able to replicate the same setup in EC2
19:59 semiosis soukihei: try mapping the hostname of each server to 127.0.0.1 in its own hosts file
19:59 soukihei ok
19:59 semiosis thats the only thing i use a hosts file for
20:00 soukihei bingo, that was it
20:00 soukihei thank you
20:00 semiosis great, you're welcome
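
A sketch of the hosts-file trick that solved it (hostname is a placeholder): on each server, map that server's own gluster hostname to loopback.

    # in /etc/hosts on gluster1.example.com:
    127.0.0.1   gluster1.example.com
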
20:00 semiosis and despite some people's opinions, imo glusterfs on ec2 is full of win
20:00 soukihei I am hoping that is the case
20:00 semiosis i've had great success with it myself over the last ~2 years
20:01 soukihei have you set it up across availability zones?
20:01 semiosis yes, definitely
20:01 soukihei do you have recommendations for documentation or things to look out for when I start trying to do that?
20:02 semiosis no that part was trivial.  the machines act like they're all on the same lan
20:03 semiosis i do have a ,,(canned ebs rant) about when to combine ebs vols vs. when to use them individually as bricks
20:03 glusterbot http://goo.gl/GJzYu
20:05 sjoeboo_ joined #gluster
20:05 semiosis meetings, bbiab
20:16 soukihei thanks
20:21 fleducquede joined #gluster
20:38 fubada max_read=131072
20:38 fubada folks what does this mean
20:38 fubada its on my fstab line for gluster
20:38 fubada is that kb? bytes?
20:43 sjoeboo_ so, if i'm removing a brick (and its replica pair), and the rebalance that runs keeps failing....why would it? Not seeing anything specific to this in logs...but i could be missing it
20:50 semiosis fubada: looks like 128KiB to me
20:50 semiosis fubada: idk what max_read signifies tho
20:50 fubada the default is infinite
20:50 fubada not sure why i have it set
20:57 jag3773 joined #gluster
21:05 andreask joined #gluster
21:19 dialt0ne left #gluster
21:44 gbrand_ joined #gluster
21:52 fidevo joined #gluster
22:07 sjoeboo_ joined #gluster
22:10 zaitcev joined #gluster
22:11 harold[MTV] joined #gluster
22:19 sjoeboo_ joined #gluster
22:40 sjoeboo_ joined #gluster
22:56 luis_silva joined #gluster
23:23 sjoeboo_ joined #gluster
23:23 jthorne joined #gluster
23:57 badone joined #gluster
