IRC log for #gluster, 2013-05-23


All times shown according to UTC.

Time Nick Message
00:03 isomorphic joined #gluster
00:14 jbrooks joined #gluster
00:28 hagarth joined #gluster
00:32 hjmangalam1 joined #gluster
01:27 sysconfig joined #gluster
01:42 hjmangalam1 joined #gluster
01:43 portante joined #gluster
01:48 nightwalk joined #gluster
02:14 wgao joined #gluster
02:22 lalatenduM joined #gluster
02:40 vshankar joined #gluster
02:40 sprachgenerator joined #gluster
02:46 anands joined #gluster
02:46 majeff joined #gluster
03:21 jim` joined #gluster
03:28 nightwalk joined #gluster
03:30 bharata joined #gluster
03:48 atrius_ joined #gluster
04:01 rastar joined #gluster
04:14 shylesh joined #gluster
04:22 majeff joined #gluster
04:28 NeatBasis joined #gluster
04:33 majeff joined #gluster
04:41 sgowda joined #gluster
04:49 kshlm joined #gluster
04:49 satheesh joined #gluster
04:50 satheesh1 joined #gluster
04:57 thomaslee joined #gluster
05:04 vpshastry joined #gluster
05:08 kaushal_ joined #gluster
05:10 thomasle_ joined #gluster
05:15 vpshastry1 joined #gluster
05:22 saurabh joined #gluster
05:23 mohankumar joined #gluster
05:29 koubas joined #gluster
05:29 psharma joined #gluster
05:30 aravindavk joined #gluster
05:31 majeff joined #gluster
05:44 vimal joined #gluster
05:52 ricky-ticky joined #gluster
05:57 rastar joined #gluster
05:59 rastar joined #gluster
05:59 rgustafs joined #gluster
06:12 edong23 joined #gluster
06:12 flrichar joined #gluster
06:13 portante|afk joined #gluster
06:13 lalatenduM joined #gluster
06:14 foster joined #gluster
06:14 ngoswami joined #gluster
06:20 rastar1 joined #gluster
06:21 jtux joined #gluster
06:24 vpshastry1 joined #gluster
06:34 guigui3 joined #gluster
06:35 lalatenduM joined #gluster
06:41 ujjain joined #gluster
06:47 ollivera joined #gluster
06:48 atrius_ joined #gluster
06:57 shireesh joined #gluster
06:58 dobber_ joined #gluster
07:00 kaushal_ joined #gluster
07:02 edong23 joined #gluster
07:09 tjikkun_work joined #gluster
07:09 thomaslee joined #gluster
07:13 badone joined #gluster
07:29 tziOm joined #gluster
07:31 rotbeard joined #gluster
07:33 andreask joined #gluster
07:48 rastar joined #gluster
07:50 edong23 joined #gluster
07:52 ekuric joined #gluster
08:03 duerF joined #gluster
08:11 guigui3 joined #gluster
08:15 gbrand_ joined #gluster
08:16 spider_fingers joined #gluster
08:16 nightwalk joined #gluster
08:17 saurabh joined #gluster
08:28 Airbear joined #gluster
08:29 anands joined #gluster
08:30 bala joined #gluster
08:32 stickyboy joined #gluster
08:32 rb2k joined #gluster
08:41 satheesh joined #gluster
08:53 Norky joined #gluster
09:05 lalatenduM joined #gluster
09:09 duerF joined #gluster
09:12 partner i wonder what this might mean: [2013-05-21 19:51:35.273608] E [dht-common.c:1372:dht_lookup] 0-dfs-dht: Failed to get hashed subvol for /
09:13 partner there was a 10 min network outage and during/after that the client mount went into "state" and logged such for a long time until i did a umount & mount for it again
09:15 partner its a simple two brick distributed volume. identical client next to this box didn't do any such thing. debian squeeze and 3.3.1 gluster
09:16 icemax left #gluster
09:16 glusterbot New news from newglusterbugs: [Bug 962226] 'prove' tests failures <http://goo.gl/J2qCz>
09:29 partner some log entries: http://dpaste.com/1196276/
09:29 glusterbot Title: dpaste: #1196276 (at dpaste.com)
09:32 partner not sure if its a gluster or fuse issue or what, but the reason and a possible fix/cure for such a scenario would be nice to know to avoid future issues, as part of the production was down for 14+ hours
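The workaround partner describes is simply remounting the FUSE client once the network is back; a minimal sketch, assuming the dfs volume from the log line above, with the server name and mount point as placeholders:

    # remount a native (FUSE) client that got stuck after the outage
    umount /mnt/dfs                          # add -l for a lazy unmount if it hangs
    mount -t glusterfs server1:/dfs /mnt/dfs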
09:40 satheesh joined #gluster
09:43 rotbeard joined #gluster
09:58 bharata joined #gluster
10:07 spider_fingers joined #gluster
10:10 manik joined #gluster
10:17 glusterbot New news from newglusterbugs: [Bug 961668] gfid links inside .glusterfs are not recreated when missing, even after a heal <http://goo.gl/4vuYc>
10:38 jclift joined #gluster
10:41 manik joined #gluster
10:44 edward1 joined #gluster
10:45 rastar joined #gluster
10:47 glusterbot New news from newglusterbugs: [Bug 953694] Requirements of Samba VFS plugin for glusterfs <http://goo.gl/v7g29>
10:51 shylesh joined #gluster
10:53 thomaslee joined #gluster
11:04 vpshastry1 joined #gluster
11:05 kkeithley1 joined #gluster
11:07 hchiramm__ joined #gluster
11:14 SteveCoo1ing Guys, are there any dirty tricks that can be used to speed up directory listings in GlusterFS?
11:14 SteveCoo1ing (GlusterFS 3.3.1)
11:15 Norky we have found it to be slow, minutes for a directory of 50K files
11:15 Norky 3.4 is much better
11:16 Norky there may be things you can do to improve the performance of 3.3 a little bit, but without more details about your setup I can't begin to offer any ideas
11:16 SteveCoo1ing I have a 4 node cluster, the volume is replica 2
11:16 SteveCoo1ing total of 16 bricks
11:17 jclift SteveCoo1ing: As a thought, are you able to restructure stuff so there are less entries per directory?
11:17 SteveCoo1ing xfs on bricks
11:17 piotrektt joined #gluster
11:17 jclift SteveCoo1ing: The thing that causes delays, is the lookup of extended info "per directory entry".
11:18 jclift SteveCoo1ing: i.e. when a person does  "ls -la" vs plain "ls", Gluster queries each node, in turn, for every entry
11:18 Norky an ls (without colouration) will be a lot faster than ls -l
11:18 jclift (not super great)
11:18 Norky err, what jclift said :)
11:18 SteveCoo1ing well.. that is what rsync does too then :)
11:18 jclift SteveCoo1ing: So, if you can wangle things so there's less entries in a directory you're looking at, you get the results back faster
11:19 SteveCoo1ing i am rsyncing fresh business logic onto frontend servers from the volume.
11:19 jclift SteveCoo1ing: Yeah... rsync is awesome tool, but super non-optimal for this now I think about it... rsync being fully recursive and all... ;)
11:20 SteveCoo1ing also, am planning to move mail storage onto the cluster, but looks like i will have to move to dovecot first :)
11:20 jclift SteveCoo1ing: There is code introduced with GlusterFS 3.4 to improve the way this directory listing stuff is done.  Speeds things up a bunch (if not using NFS share).
11:20 SteveCoo1ing the current imap setup has no indexing and i suspect it will be ultra slow :)
11:21 SteveCoo1ing jclift: no, all native clients so far
11:21 jclift Yeah, NON-OPTIMAL (with super caps!) for individual 1-file-per-email emails setups
11:21 SteveCoo1ing i have noticed that nfs client seems to have a stat cache, so subsequent scans are faster on nfs
11:22 jclift Apparently nfs also has the "sped up version" of the directory reading code in it already, which has been added to the native client with 3.4
11:22 jclift So, if people are having crap dir lookup speed problems with nfs with 3.3, then 3.4 isn't going to help them
11:22 jclift But, with the native client there is improvement for 3.3 -> 3.4
11:23 jclift SteveCoo1ing: Personally, I don't know of any special tricks though
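A quick way to see the readdir-versus-stat difference Norky and jclift describe is to time both forms on the same directory; a rough sketch (the path is a placeholder, and \ls bypasses any colourizing alias):

    cd /mnt/myvol/bigdir
    time \ls > /dev/null                     # plain readdir, no per-entry stat
    time ls -l --color=never > /dev/null     # -l stats every entry, and each stat hits the bricks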
11:23 SteveCoo1ing that sounds sweet
11:23 SteveCoo1ing but, about email, jclift
11:23 ninkotech joined #gluster
11:23 ninkotech_ joined #gluster
11:24 jclift SteveCoo1ing: If you're using an EL6 based linux distro (ie RHEL/CentOS/SL), then there are performance tuning profiles that can be tweaked
11:24 SteveCoo1ing my plan is to continue to use Maildir, but with the dovecot indexing on top. MDA will update index. any idea if that will be ok?
11:24 SteveCoo1ing clients are EL5, cluster is CentOS6.
11:24 jclift SteveCoo1ing: Unfortunately, I have no idea.
11:24 jclift :(
11:25 SteveCoo1ing will need to do testing anyways :) thanks though
11:25 jclift With the perf tuning stuff mentioned above too, that's also not something I've yet looked into, so no clue how to (thus far).  Googling might help you though. :)
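For the EL6 tuning profiles jclift mentions, tuned-adm is the usual entry point; a minimal sketch (the profile shown is just an example, not a GlusterFS-specific recommendation):

    yum install -y tuned
    tuned-adm list                              # show the available profiles
    tuned-adm profile throughput-performance    # e.g. favour throughput over latency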
11:25 jclift SteveCoo1ing: No worries. :)
11:27 SteveCoo1ing btw, jclift, we rolled our own nss-mdns lib with patches to do reverse resolving for our internal IPs, and solved the slow reverse DNS weirdness i mentioned that way
11:28 SteveCoo1ing that works fine since we already have a yum repo for our other custom stuff
11:29 vpshastry joined #gluster
11:31 jclift SteveCooling: Heh, interesting approach.  Hadn't even thought of that.
11:32 jclift SteveCooling: Wonder if your patches for it, to enable reverse resolving that way, would be useful to others?  Maybe worth chucking up on GitHub or something, or asking its upstream if they'd be interested in them?
11:33 SteveCooling we thought about it. right now we just changed the 169.254.x.x to 10.x.x.x hardcoded. but we could make provisions for a config file
11:35 SteveCooling hmm.. could upgrade our 2-node staging-gluster to the 3.4 beta..
11:37 shylesh joined #gluster
11:38 Norky so far as I can tell, it is the FUSE client side which got faster
11:39 Norky at least, I was able to get much faster metadata access just by upgrading a client to 3.4 and leaving the server on 3.3
11:40 Norky mixing versions did not work in previous versions, but they appear to be making efforts to resolve that
11:40 Norky I'd suggest keeping client and server at the same version anyway, but if you can easily upgrade one client to test, that might make it less work for you
11:40 kkeithley| I'm pretty sure 3.4 has READDIRPLUS. That's probably why it's faster
11:40 Norky I am assuming so
11:42 Norky SteveCooling, are the two nodes in your staging cluster both server and client (to themselves)?
11:43 SteveCooling no, they are just servers that provide for a set of vm's that replicate the frontend functions
11:46 lbalbalba joined #gluster
11:55 hchiramm__ joined #gluster
11:56 efries joined #gluster
12:13 aliguori joined #gluster
12:19 Norky joined #gluster
12:30 Brian_TS joined #gluster
12:31 Brian_TS Has anyone compiled/used GlusterFS on IBM S390x zLinux (Red Hat )??
12:38 plarsen joined #gluster
12:45 kkeithley| Brian_TS: there's some #ifarch s390 stuff in the glusterfs.spec file, which sorta suggests that it was done at some point in time.  I haven't heard about anyone doing it recently though
12:45 Brian_TS kkeithley:  Thanks, I'll give the compile a try on my system.
12:47 ndevos Brian_TS: be aware that there are possibly some endianness assumptions in the code, mixing S390x and x86_64 might result in weird behaviour
12:47 vpshastry joined #gluster
12:47 kkeithley| ndevos: eek, really? where?
12:48 ndevos kkeithley|: like bug 951903
12:50 bennyturns joined #gluster
12:53 * kkeithley| doesn't have a ppc64 box to put rhel6 on. :-(
12:56 JoeJulian Nice one kkeithley... That pipe is really messing with glusterbot. ;P
12:56 JoeJulian but 951903
12:56 JoeJulian bug 951903
12:56 * JoeJulian pokes glusterbot
12:56 kkeithley| oh
12:56 glusterbot joined #gluster
12:56 JoeJulian bug 951903
12:57 JoeJulian Or maybe bugzilla is down...
12:57 kkeithley_ bugzilla has "issues" after being upgraded
13:00 mohankumar joined #gluster
13:02 * kkeithley_ should find a ppc distro and see if the G4 Mac he rescued from ewaste recycling runs.
13:03 * kkeithley_ but suspects there's a reason why it was in ewaste
13:07 ndevos kkeithley_: yum install qemu-system-ppc ? or use some of the systems in our RH labs/beaker
13:09 kkeithley_ ndevos: interesting
13:20 satheesh joined #gluster
13:20 kkeithley_ or qemu-system-s390x!
13:20 wN joined #gluster
13:21 xavih @bug 951903
13:21 glusterbot xavih: An error has occurred and has been logged. Please contact this bot's administrator for more information.
13:26 dustint joined #gluster
13:33 manik joined #gluster
13:36 atrius_ joined #gluster
13:38 semiosis bug 951903
13:38 semiosis no @
13:40 kaptk2 joined #gluster
13:45 hjmangalam1 joined #gluster
13:49 xavih semiosis: I saw the other day another user using this with the '@' and glusterbot answered, without '@' it seems it does nothing
13:49 ndevos @volunteer
13:49 glusterbot ndevos: A person who voluntarily undertakes or expresses a willingness to undertake a service: as one who renders a service or takes part in a transaction while having no legal concern or interest or receiving valuable consideration.
13:50 semiosis xavih: the glusterbot-bugzilla integration is down at the moment
13:50 ndevos I think the integration works fine, its just that bugzilla is down
13:50 semiosis truth
13:51 theron joined #gluster
13:51 lbalbalba joined #gluster
13:52 glusterbot New news from newglusterbugs: [Bug 966555] [G4S]: gluster-swift does not properly handle the transport endpoint not connected state <http://goo.gl/2WYRU> || [Bug 955588] Gluster volume info should return server uuid along with server ip and brick details. <http://goo.gl/ir68N>
13:53 ndevos maybe it's up again? bug 951903
13:54 hagarth joined #gluster
13:58 lbalbalba bugzilla is up ... just very very slow ;)
13:59 pkoro joined #gluster
14:07 bennyturns joined #gluster
14:20 dxd828 joined #gluster
14:22 plarsen joined #gluster
14:23 swaT30 joined #gluster
14:24 swaT30 hey all
14:24 Chocobo Hrmm, nothing I try seems to work.  Can't get gluster to mount at boot.   Maybe adding it to rc.local would work?
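For the mount-at-boot problem Chocobo describes, the usual fix is an fstab entry marked as a network filesystem so it is mounted only after networking (and glusterd) are up; a sketch with server, volume and mount point as placeholders (older Ubuntu releases may also need extra options such as nobootwait):

    # /etc/fstab
    server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0 0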
14:24 swaT30 experiencing some really low IOPS with my GlusterFS cluster
14:24 stickyboy swaT30: Are you replicated?
14:24 swaT30 running on three hosts with six 15k SAS disks each
14:24 swaT30 stickyboy: yes
14:25 stickyboy swaT30: Using FUSE client?
14:25 swaT30 stickyboy: yup
14:25 swaT30 recommend using another method?
14:25 stickyboy swaT30: What interconnect are you using?
14:25 swaT30 I've seen mention of NFS, but not sure that's ideal
14:25 stickyboy Gigabit Ethernet?
14:26 swaT30 stickyboy: 10GbE
14:26 swaT30 definitely not hitting my max
14:26 stickyboy swaT30: Ah ok.
14:26 stickyboy Well with the FUSE client + replication, your client throughput is essentially halved.
14:27 stickyboy As it writes simultaneously to both replicas.
14:27 swaT30 stickyboy: understood. but seeing approx 140 IOPS
14:27 stickyboy With NFS you will get closer to 10GbE, but replication will be offloaded to the storage servers.
14:27 stickyboy I don't have answers... only discussion :P
14:28 swaT30 stickyboy: :) anything is appreciated. could you explain more on the replication offload?
14:28 stickyboy swaT30: Nothing to explain really.  If you use NFS, the servers do the replication. :)
14:28 stickyboy Whereas with FUSE the logic is in the clients.
14:28 swaT30 ahhh, gotcha
14:28 swaT30 in my case the servers are the clients
14:29 stickyboy We use 1GbE here and used NFS for our volumes... but found out that GlusterFS NFS doesn't support NFS locking... and some of our apps needed that.
14:29 JoeJulian stickyboy: Still going to see a decrease with nfs. It's still got to replicate before it tells you that it's done.
14:29 stickyboy So back to FUSE!
14:29 swaT30 is there any way to do async?
14:30 stickyboy JoeJulian: True, the "total" time is probably the same (SOMEONE has to replicate it)?
14:30 JoeJulian nfs -> glusterfs client -> 2 servers (one of them _may_ be local) -- still gives you at least two network traversals before your operation is confirmed.
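For reference, the two client types being compared mount like this (server and volume names are placeholders); Gluster 3.3's built-in NFS server speaks NFSv3 over TCP:

    # native (FUSE) client -- the client writes to every replica itself
    mount -t glusterfs server1:/myvol /mnt/myvol
    # Gluster NFS -- replication happens between the servers
    mount -t nfs -o vers=3,tcp server1:/myvol /mnt/myvol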
14:31 swaT30 so is there any performance tuning that I could do?
14:32 * JoeJulian wonders what iops he hits regularly...
14:33 tziOm Are there two different gluster samba vfs implementations?
14:33 JoeJulian I need to file a bug asking for the default settings to be changed to provide the least possible throughput so there can be an easy answer when people ask about "tuning".
14:33 glusterbot http://goo.gl/UUuCq
14:34 ricky-ticky joined #gluster
14:34 JoeJulian tziOm: possibly, but avati's is most likely going to be the one that's implemented.
14:34 portante joined #gluster
14:34 tziOm JoeJulian, ok..
14:34 tziOm JoeJulian, where is the repo for that?
14:34 swaT30 JoeJulian: haha, I'm just thinking there may be settings which may be adjusted to increase performance in specific environments
14:35 stickyboy JoeJulian: "premature optimization is the root of all evil" :P
14:35 JoeJulian https://github.com/avati/samba/branches
14:35 glusterbot Title: Branches · avati/samba · GitHub (at github.com)
14:36 JoeJulian swaT30: There may be, but nobody's reported them.
14:36 bugs_ joined #gluster
14:37 JoeJulian how are you measuring iops?
14:38 jtux joined #gluster
14:38 swaT30 JoeJulian: just using dd and /dev/zero
14:38 swaT30 pretty rudimentary, but does the trick
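Worth noting: dd from /dev/zero mostly measures streaming throughput rather than IOPS; something like the following (file name and sizes are arbitrary) gets closer to an IOPS-style number by forcing many small synchronous writes:

    dd if=/dev/zero of=/mnt/myvol/ddtest bs=4k count=5000 oflag=dsync      # small synced writes, IOPS-ish
    dd if=/dev/zero of=/mnt/myvol/ddtest bs=1M count=1000 conv=fdatasync   # streaming throughput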
14:39 tziOm JoeJulian, have you tried the vfs implementation?
14:39 JoeJulian tziOm: I have not. I really don't want my windows users getting any better performance.
14:39 JoeJulian @wonka
14:40 JoeJulian Damn...  semiosis we need that link.
14:40 zaitcev joined #gluster
14:40 tziOm JoeJulian, hehe..
14:41 plarsen joined #gluster
14:47 aliguori joined #gluster
14:51 * m0zes agrees. windows servers should be given as few resources as possible.
14:52 daMaestro joined #gluster
14:52 m0zes punish them for making my day-to-day routine harder to replace with a shell script
14:52 JoeJulian I've made sure they all know that they have the option of running a linux desktop.
14:55 * m0zes was asked for a windows vm for iis so someone can write an mvc app in asp.net for a new *simple* website. it is at the bottom of my todo list.
14:56 JoeJulian lol
14:57 m0zes now back to hadoop. :(
14:59 rastar joined #gluster
15:03 portante_ joined #gluster
15:04 nueces joined #gluster
15:06 twx joined #gluster
15:15 sprachgenerator joined #gluster
15:18 jthorne joined #gluster
15:18 majeff joined #gluster
15:20 devoid joined #gluster
15:21 ABAJosh joined #gluster
15:22 devoid1 joined #gluster
15:26 recidive joined #gluster
15:31 recidive hello, can I use GlusterFS to share a filesystem with 2 webservers?
15:32 sprachgenerator joined #gluster
15:33 ABAJosh not a lot of activity on this channel
15:33 ABAJosh yep, you can use Gluster to share an FS between 2 web servers
15:33 JoeJulian ABAJosh: what???
15:34 ABAJosh JoeJulian: I was replying to recidive
15:34 JoeJulian ABAJosh: Which channel have you been on?
15:34 ABAJosh I'm on a web client - maybe it's screwy :(
15:34 recidive ok, are there any docs on this use case?
15:35 dustint joined #gluster
15:35 JoeJulian recidive: The standard quick-start guide works for that.
15:35 ABAJosh recidive: What you're talking about is the most basic configuration. check out the getting started guide: http://www.gluster.org/community/documentation/index.php/Getting_started_overview
15:36 glusterbot <http://goo.gl/Lq4vp> (at www.gluster.org)
15:36 * JoeJulian is shamed by ABAJosh's much more detailed answer....
15:37 ABAJosh JoeJulian: has there been conversation I've missed in the past 30min or so?
15:37 JoeJulian Nah, they all kind-of stalled right about then.
15:37 ABAJosh Ah gotcha
15:37 ABAJosh didn't mean to cast aspersions about the channel in general ;)
15:37 JoeJulian But there's almost always someone around answering questions.
15:38 ABAJosh Awesome. I'm sure I'll be asking plenty in the near future
15:39 JoeJulian I give lots of free advice, too.... some of it's even topical.
15:40 recidive JoeJulian, ABAJosh: thanks, will take a look at this again, have done this late at night and thought this wouldn't work for my case
15:40 JoeJulian recidive: if you're running ,,(php) apps, you might want to read the following page:
15:40 glusterbot recidive: php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://goo.gl/uDFgg for details.
15:41 lbalbalba hrm. my nfs mount fails as soon as I 'volume set nfs.addr-namelookup on' and 'volume set nfs.rpc-auth-allow <hostname>'; 'volume set nfs.rpc-auth-allow <ip>' works...
15:41 lbalbalba must be fuzzy hostname resolution ...
15:42 JoeJulian lbalbalba: Last I knew, hostnames still didn't work in nfs.rpc-auth-allow, but I haven't been keeping an eye out for commits in that regard since I don't use nfs and I prefer iptables for that task.
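The options lbalbalba is toggling, plus JoeJulian's iptables alternative, look roughly like this (volume name, hostname and addresses are placeholders):

    gluster volume set myvol nfs.addr-namelookup on
    gluster volume set myvol nfs.rpc-auth-allow client1.example.com   # the hostname form is what fails here
    gluster volume set myvol nfs.rpc-auth-allow 10.0.0.10             # IP-based rules work
    # or leave gluster-level auth alone and restrict the NFS port with iptables instead
    # (Gluster NFS also uses portmapper and mountd ports, so a real rule set needs more than this)
    iptables -A INPUT -p tcp --dport 2049 -s 10.0.0.0/24 -j ACCEPT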
15:43 ABAJosh JoeJulian - that is a most excellent link. Thanks
15:43 JoeJulian You're welcome
15:43 lbalbalba JoeJulian: ah. so i found a genuine bug ? yippiekayeah ! :(
15:43 lbalbalba should i file a bug report ?
15:43 glusterbot http://goo.gl/UUuCq
15:43 hjmangalam1 joined #gluster
15:43 JoeJulian Sure
15:46 spider_fingers left #gluster
15:51 devoid joined #gluster
15:51 lbalbalba done.... https://bugzilla.redhat.com/show_bug.cgi?id=966659
15:51 glusterbot <http://goo.gl/cVLHx> (at bugzilla.redhat.com)
15:51 glusterbot Bug 966659: unspecified, unspecified, ---, vraman, NEW , hostname based nfs mounts fail when setting 'volume set nfs.addr-namelookup on'
15:53 glusterbot New news from newglusterbugs: [Bug 966659] hostname based nfs mounts fail when setting 'volume set nfs.addr-namelookup on' <http://goo.gl/cVLHx>
16:13 recidive JoeJulian: got it, it's the same problem for NFS, I thought Gluster would handle this better, but this may be just unfeasible
16:14 recidive JoeJulian: Can I rsync my www folder and have Gluster mounted in www/files?
16:14 recidive I have 70GB files, that's what I want to share
16:14 recidive the code I can rsync
16:16 manik joined #gluster
16:16 JoeJulian yep
16:17 JoeJulian btw... I take /most/ of my own advice (I don't have a varnish server set up) and my php sites are quite snappy.
16:17 devoid joined #gluster
16:20 lbalbalba can anyone verify what 'uname -i' (hardware-platform) prints out for 64 bit Fedora or Red Hat ? Is that 'x86_64' ?
16:20 ABAJosh 'x86_64 on CentOS 6, fwiw
16:20 lbalbalba close enough thanks
16:21 ABAJosh no worries
16:26 jag3773 joined #gluster
16:27 Mo_ joined #gluster
16:28 thomaslee joined #gluster
16:32 jdarcy joined #gluster
16:32 vpshastry joined #gluster
16:36 devoid1 joined #gluster
16:39 brunoleon_ joined #gluster
16:42 recidive JoeJulian: so if I got this correctly, Gluster only works getting the entire storage device? Can't it be just a folder? The brick itself needs to be in fstab?
16:45 balunasj joined #gluster
16:48 jdarcy recidive: GlusterFS bricks are just directories on servers, and in fact it's preferable for them *not* to be at local-FS boundaries.
16:49 jdarcy recidive: If a brick is defined at a mountpoint, it's too easy to start up a brick with the underlying FS not mounted.  Better for it to be a subdirectory, which won't exist if the local FS isn't mounted.
16:50 jdarcy BTW, this isn't a theoretical thing.  I've actually been in the room with users who've been burned this way.
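A minimal sketch of the layout jdarcy recommends, with the brick one directory below the XFS mountpoint (device, paths and volume name are placeholders):

    mkfs.xfs -i size=512 /dev/sdb1               # larger inodes leave room for gluster's xattrs
    mkdir -p /export/brick1
    mount /dev/sdb1 /export/brick1
    mkdir /export/brick1/data                    # only exists when the local FS is actually mounted
    gluster volume create myvol server1:/export/brick1/data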
16:50 sjoeboo yes...
16:51 recidive jdarcy: ok, let me say I have a. a file server, b. web server 1, and c. web server 2
16:51 jdarcy sjoeboo: O HAI.  ;)
16:52 recidive I need to mount the brick in 'a' on servers 'b' and 'c'
16:52 recidive is this a common scenario?
16:53 jdarcy recidive: Single brick isn't common, but it's a degenerate case of what we're all about.
16:54 jdarcy recidive: Seriously, if you don't expect to grow beyond a single brick, might as well use NFS.  It's when you start combining bricks that things get interesting.
16:54 lbalbalba nfs is not fault tolerant glusterfs is
16:55 jdarcy lbalbalba: Not with a single brick it's not.
16:55 mynameisbruce joined #gluster
16:55 mynameisbruce_ joined #gluster
16:56 lbalbalba jdarcy: just stripe replicate whatnot it. cant do that with standard nfs
16:56 recidive jdarcy: ok, can I maybe get rid of a and have a "two way" between 'b' and 'c'?
16:57 recidive 'b' and 'c' are read/write, and files uploaded to 'c' should be read in 'b' and vice-versa
16:57 jdarcy recidive: Yes, you can, but there's a very important caveat.  Once you devote a directory on B and a directory on C to be bricks, you must not modify them directly.  Instead, you must access them even on B and C through the GlusterFS volume which is the combination of the two.
16:58 jdarcy recidive: So, for example: gluster volume create myvol replica 2 a:/some/directory b:/some/directory
16:58 mynameisbruce_ left #gluster
16:58 lbalbalba so i guess you want 'a' ;)
16:59 jdarcy recidive: Then, to mount on each host: mount -t glusterfs a:myvol /some/other/directory
16:59 jdarcy recidive: So the files are actually present in /some/directory but you access them through /some/other/directory to ensure replication and consistency.
16:59 edong23 joined #gluster
17:02 recidive jdarcy: got it, and what could I use for taking snapshots/backups of the merged files?
17:03 plarsen joined #gluster
17:03 jdarcy recidive: We don't have snapshots that are coordinated across all bricks yet, but you can pretty close by using local (filesystem or LVM level) snapshots yourself.
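A sketch of the brick-level LVM snapshot jdarcy is referring to, assuming each brick sits on its own logical volume (volume group, LV names and size are made up):

    # snapshot the LV under one brick; repeat per brick, ideally at a quiet moment
    lvcreate --snapshot --name brick1-snap --size 5G /dev/vg_bricks/brick1
    mount -o ro,nouuid /dev/vg_bricks/brick1-snap /mnt/brick1-snap   # nouuid is needed for XFS snapshots
    # back it up from /mnt/brick1-snap, then umount and lvremove the snapshot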
17:06 recidive ok, and what would be stored in /some/directory on each server/brick would be just the files that would be copied to that brick, or all files?
17:08 jdarcy recidive: That would contain all of the files, plus extra metadata that we use to keep track of everything.
17:08 recidive or maybe I can rsync the /some/other/directory to a "backup" server and take snapshots from there?
17:08 jdarcy You could certainly do that too.
17:09 recidive jdarcy: cool so that could be a point for taking a snapshot, I mean the /some/directory on either server
17:09 recidive ?
17:10 recidive jdarcy: so in the case I lost 'b' or 'c' all the data would still be safe in the other server?
17:10 jdarcy recidive: Correct.
17:11 recidive jdarcy: nice, that's better than I thought
17:12 recidive jdarcy: actually I still keep getting excited and frustrated fast, but I'm starting to understand how all that stuff works
17:13 recidive I believe it's because it's a paradigm shift for me
17:15 recidive jdarcy: alright, thanks for your help
17:15 recidive jdarcy: I'll start setting this up and see how that goes
17:15 jdarcy recidive: Excellent.  Please let us know the result.  :)
17:15 recidive and maybe share my finds somewhere eventually
17:16 recidive jdarcy: ok, I will
17:16 rsherman joined #gluster
17:17 rsherman Hello Gluster folks
17:18 rsherman I have a question about increasing replica count in 3.3 and was looking for some guidance
17:20 jdarcy I can try, but TBH I can't even remember if 3.3 supported changing replica count via the CLI.
17:20 semiosis it should
17:20 semiosis using add/remove brick :(
17:21 jdarcy Oh yeah.  Ick.
17:21 mohankumar joined #gluster
17:22 rsherman Yeah, I didn't see anything documented, but some old forum posts said it was slated for 3.3
17:28 hchiramm__ joined #gluster
17:30 rb2k joined #gluster
17:34 recidive jdarcy: I have 3 extra large EC2 instances for implementing this architecture, plus the RDS for the database, the 2 webserver ones are for processor intensive application to run (CMS) and the other was initially planned to be a file server. I was planning to put memcached on the 2 webserver instances to take the RAM available, but now I'm changing a little the architecture. Are there performance tuning settings to make use of that
17:34 recidive available on the webservers for Gluster FS performance?
17:35 recidive jdarcy: also since it's a 'two way' approach, is there any network i/o, even when the bricks are local to each server?
17:36 recidive like on the example you outlined
17:37 jdarcy recidive: There's definitely network I/O to replicate between the servers.
17:37 recidive I mean for reads
17:37 jdarcy recidive: It's supposed to be smart about choosing the local replica to read from, if there is one.
17:37 recidive the question is if the stat checks would be still costly or not
17:38 recidive jdarcy: cool
17:38 jdarcy Stat/readdir calls will still be expensive, because they have to check consistency state on both servers.
17:39 recidive jdarcy: hmm, I'll have to do some benchmarks with php code in and out the Gluster brick I believe
17:39 lbalbalba crap. cant get 'prove ./tests/basic/rpm.t' working for 'non-x86_64' platforms. sucks.
17:39 lbalbalba if [ `uname -i` = 'x86_64' ]; then foo; fi doesnt work :(
17:40 jdarcy PHP can be a bit problematic, because of looking for stuff where it's not there in long include paths.
17:40 lbalbalba egrep -e "epel-[0-9]+-`uname -i`.cfg$" doesnt work
17:41 jdarcy recidive: Using a PHP cache is almost imperative.  There are also some "negative lookup" settings that can be tweaked in 3.4, but it's still a bit of a danger zone.
17:41 devoid joined #gluster
17:42 recidive jdarcy: ok, I'm using apc but the auto reload is a must for me (apc.stat=1)
17:42 nightwalk joined #gluster
17:44 andreask joined #gluster
17:45 jdarcy recidive: I don't know enough about APC to know how that would be affected.  The *stat* will still have to go over the network, but only once.  It's the failed *lookup* calls across many include directories that are usually the problem.
17:45 jdarcy recidive: If we're doing stat on a file that is in fact there, just to see if it has changed, that's probably not too bad.
17:45 recidive jdarcy: I see
17:46 semiosis ,,(php)
17:46 glusterbot php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://goo.gl/uDFgg for details.
17:47 semiosis recidive: "two-way" approach?
17:47 semiosis recidive: do you mean you're going to have two servers replicating & also clients on those servers reading/writing?
17:50 thomasle_ joined #gluster
17:51 lh joined #gluster
17:51 lh joined #gluster
17:51 recidive semiosis: something like that
17:52 thoma____ joined #gluster
17:53 semiosis beware the split-brain
17:53 semiosis http://gluster.helpshiftcrm.com/q/what-is-split-brain-in-glusterfs-and-how-can-i-cause-it/
17:53 glusterbot <http://goo.gl/Oi3AA> (at gluster.helpshiftcrm.com)
17:54 semiosis @split brain
17:54 glusterbot semiosis: I do not know about 'split brain', but I do know about these similar topics: 'split-brain'
17:54 semiosis @split-brain
17:54 glusterbot semiosis: (#1) learn how to cause split-brain here: http://goo.gl/nywzC, or (#2) To heal split-brain in 3.3, see http://goo.gl/FPFUX .
17:54 semiosis @forget split-brain 1
17:54 glusterbot semiosis: The operation succeeded.
17:54 semiosis @learn split-brain as learn how to cause split-brain here: http://gluster.helpshiftcrm.com/q/what-is-split-brain-in-glusterfs-and-how-can-i-cause-it/
17:54 glusterbot semiosis: The operation succeeded.
17:54 semiosis @split-brain
17:54 glusterbot semiosis: (#1) To heal split-brain in 3.3, see http://goo.gl/FPFUX ., or (#2) learn how to cause split-brain here: http://goo.gl/Oi3AA
17:54 semiosis \o/
17:55 semiosis all our helpfist links are broken, i'm fixing them one by one
17:55 semiosis s/helpfist/helpshift/
17:55 glusterbot What semiosis meant to say was: all our helpshift links are broken, i'm fixing them one by one
17:55 recidive semiosis: thanks
17:55 ABAJosh lbalbalba: you need to use == instead of = in your if statement
17:56 * jdarcy chuckles at "helpfist"  - that's why we got rid of it.
17:56 ABAJosh lbalbalba: this works: if [ `uname -i` == 'x86_64' ]; then echo 64-bit biatch; else echo '32 lousy bits :('; fi
17:57 semiosis ;)
17:57 recidive joined #gluster
17:58 lbalbalba ABAJosh: thanx. but i still get :   Bad plan.  You planned 5 tests but ran 0.   :(
17:59 recidive semiosis: would it potentially cause split-brain even if I don't access the bricks folders directly?
17:59 vpshastry left #gluster
17:59 lbalbalba ABAJosh: tp://fpaste.org/14066/31971136/
18:00 lbalbalba ABAJosh: http://fpaste.org/14066/31971136/
18:00 glusterbot Title: #14066 Fedora Project Pastebin (at fpaste.org)
18:00 semiosis recidive: you shouldn't ever access bricks directly.  and yes you can get a split brain, in a network partition, if you write through clients on your servers, as described in that article.
18:00 meunierd1 joined #gluster
18:01 semiosis recidive: i recommend using quorum to prevent that
18:01 semiosis recidive: or using read-only clients on your servers, if you can do that (although read-only client mounts don't work in 3.3, that's a bug, so this option will have to wait)
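The quorum semiosis recommends is a 3.3 volume option; a minimal sketch (volume name is a placeholder):

    # with quorum-type auto, writes are refused unless more than half of each
    # replica set (or exactly half, including the first brick) is reachable
    gluster volume set myvol cluster.quorum-type auto
    # or require a fixed number of live bricks per replica set:
    # gluster volume set myvol cluster.quorum-count 2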
18:01 ABAJosh lbalbalba: sorry man - I don't have time to dig through that code ATM - hopefully someone else on the channel can help out
18:02 lbalbalba ABAJosh: cool, no worries :)
18:08 devoid1 joined #gluster
18:09 recidive semiosis: can't do read only, can I set some folders to have the synchronization ignored, similar to .gitignore?
18:10 recidive I think the thumbnail generation thing is a potential for split-brain
18:10 recidive but they can just be ignored
18:11 recidive some other stuff like css/js min/aggregation uses hashes for files
18:11 recidive but image thumbnails use the same filename but put it on a different folder, so potential for conflicts
18:17 lalatenduM joined #gluster
18:18 recidive semiosis: or can I just ignore the split-brain in the case I don't care having two versions of the same file?
18:19 recidive and also don't care if they get lost at some point (thumbnails cache)
18:22 devoid joined #gluster
18:22 lbalbalba recidive: write cache files to a diff /foo on each host from the read /bar ?
18:23 lbalbalba recidive: nevermind
18:35 lpabon joined #gluster
18:41 kkeithley_ @ppa
18:41 glusterbot kkeithley_: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
18:46 edward1 joined #gluster
18:50 semiosis recidive: doubt it, but you should try it and see for yourself
18:50 semiosis imho everyone should cause & resolve a split brain before going to production with glusterfs
18:57 recidive semiosis: got it, thanks
18:57 Airbear joined #gluster
19:10 kkeithley_ @yum
19:10 glusterbot kkeithley_: I do not know about 'yum', but I do know about these similar topics: 'yum repo', 'yum repository', 'yum3.3 repo', 'yum33 repo'
19:10 kkeithley_ @yum repo
19:10 glusterbot kkeithley_: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
19:13 kkeithley_ JoeJulian, semiosis: please change yum repo, etc., to refer to the new repo at http://download.gluster.org/pub/gluster/glusterfs/repos/YUM/
19:13 glusterbot <http://goo.gl/s077x> (at download.gluster.org)
19:31 y4m4 joined #gluster
19:37 devoid1 joined #gluster
19:42 devoid joined #gluster
19:43 semiosis kkeithley_: happy to help but fyi anyone can @forget & @learn
19:43 semiosis @forget yum repo
19:43 glusterbot semiosis: The operation succeeded.
19:44 semiosis @learn yum repo as 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://download.gluster.org/pub/gluster/glusterfs/repos/YUM/
19:44 glusterbot semiosis: The operation succeeded.
19:44 semiosis @yum repo
19:44 glusterbot semiosis: 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/s077x
19:50 stickyboy :)
19:55 devoid joined #gluster
19:56 recidive semiosis: does your ubuntu repo packages have the same layout of official ones?
19:56 recidive I mean settings files, filesystem structure
19:57 recidive by official I mean the ones in the ubuntu official repos
19:57 recidive sorry
19:58 kkeithley_ semiosis: oh, sorry, didn't realize
19:59 semiosis recidive: yes
19:59 semiosis the ubuntu-glusterfs-* ppas anyway
20:00 semiosis others dont
20:00 semiosis the ,,(ppa) ppas
20:00 glusterbot The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
20:00 semiosis @forget ppa
20:00 glusterbot semiosis: The operation succeeded.
20:00 semiosis @learn ppa as The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY -- and 3.4 packages are here: https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.4
20:00 glusterbot semiosis: The operation succeeded.
20:00 semiosis @ppa
20:00 glusterbot semiosis: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY -- and 3.4 packages are here: http://goo.gl/u33hy
20:07 devoid joined #gluster
20:09 devoid1 joined #gluster
20:13 recidive semiosis: do you recommend 3.4 for production?
20:13 recidive or just stick with 3.3?
20:14 semiosis ehhhh
20:14 semiosis tbh i'm still running 3.1.7 in prod :)
20:15 recidive semiosis: should I just stick with the 3.2 from ubuntu repos?
20:15 semiosis no
20:15 semiosis there is at least a newer 3.2 than what's in ubuntu precise universe
20:15 semiosis if you want 3.2
20:15 semiosis idk what you should do
20:18 recidive ok, just wondering out loud, not really expecting a verdict from you on it
20:18 semiosis i spent weeks hammering my setup before going live, i would need to put that kind of time into an upgrade
20:18 semiosis and right now i have other priorities
20:18 recidive I'm not native english speaker, so my vocabulary is a little limited
20:18 semiosis it's all good :)
20:20 cjh_ joined #gluster
20:21 cjh_ hello everyone.  i'm having trouble with gluster peer probe saying nodes are already in the cluster.  I rm -rf /var/lib/glusterd; /etc/init.d/gluster restart and it says the same thing.  I seem to be missing something
20:27 JoeJulian @forget yum repo
20:27 glusterbot JoeJulian: The operation succeeded.
20:28 JoeJulian @learn yum as The official glusterfs packages for RHEL/CentOS/SL are available here: http://download.gluster.org/pub/gluster/glusterfs/repos/YUM/
20:28 glusterbot JoeJulian: The operation succeeded.
20:28 JoeJulian @alias yum yum repo
20:28 glusterbot JoeJulian: (alias [<channel>] <oldkey> <newkey> [<number>]) -- Adds a new key <newkey> for factoid associated with <oldkey>. <number> is only necessary if there's more than one factoid associated with <oldkey>. The same action can be accomplished by using the 'learn' function with a new key but an existing (verbatim) factoid content.
20:29 JoeJulian @alias "yum" "yum repo"
20:29 glusterbot JoeJulian: The operation succeeded.
20:33 JoeJulian ~cloned servers | cjh_
20:33 glusterbot cjh_: Check that your peers have different UUIDs ('gluster peer status' on both). The uuid is saved in /var/lib/glusterd/glusterd.info - that file should not exist before starting glusterd the first time. It's a common issue when servers are cloned. You can delete the /var/lib/glusterd/peers/<uuid> file and /var/lib/glusterd/glusterd.info, restart glusterd and peer-probe again.
20:33 cjh_ glusterbot: thanks.  i'll try that
20:34 JoeJulian hmm, then again you did say you rm -rf'd that directory...
20:34 cjh_ yeah
20:34 JoeJulian hostname resolution?
20:34 cjh_ yeah they resolve properly
20:36 badone joined #gluster
20:36 cjh_ JoeJulian: ill shut down gluster, smoke that directory and then try starting everything up again
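A sketch of the cleanup glusterbot describes for cloned nodes, using the 3.3 paths (the hostnames and the stale peer entry are placeholders):

    service glusterd stop
    cat /var/lib/glusterd/glusterd.info          # the UUID here must differ on every peer
    rm -f /var/lib/glusterd/glusterd.info        # a fresh UUID is generated on the next start
    rm -f /var/lib/glusterd/peers/<stale-uuid>
    service glusterd start
    gluster peer probe <other-server>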
20:57 hagarth joined #gluster
21:11 devoid joined #gluster
21:23 devoid joined #gluster
21:36 majeff joined #gluster
21:36 devoid1 joined #gluster
22:10 duerF joined #gluster
22:23 rb2k joined #gluster
22:23 lh joined #gluster
22:23 lh joined #gluster
22:39 nightwalk Anyone have any idea why gluster would choke on a filesystem having millions of small to moderately-sized files (ex: tons of kernel headers + modules for numerous distros/kernels/versions) when it works just fine (mostly) for a mail store containing (very) roughly the same number of files?
22:41 nightwalk I've destroyed the volume, wiped all xattrs (which took forever on that many files), and re-created it several times, but never have I got beyond the point that it throws an "Input/Output Error" when I try to list the contents of one of the root directory's subdirectories
22:43 nightwalk It's running in a pair of kvm vms that have 3G of ram apiece and 8G of swap, but I wouldn't *think* it'd be ram related unless gluster was trying to malloc a huge amount of ram at the beginning (which had to all stay resident for a short time)...?
22:44 nightwalk The base is CentOS 6.3 x86_64, but I've tried with the stock 3.2 rpms as well as the 3.3 rpms from the third party repo, and I get the same results with both
22:47 nightwalk Just to re-iterate, this seems strange to me mainly because I would've expected the mail store to be the one to cause the most issues since it gets written to all the time from both bricks. This particular store is just a bunch of files read by the kernel and compiler chains to build new modules on netbooted installations and what-not, so the usual cluster write issues don't apply
22:47 edong23 joined #gluster
22:50 nightwalk Iirc, it's after-hours for most of you, and this isn't urgent in any case. I've since fallen back to rsyncing periodically since it works fairly well for this particular use case.
22:51 lhawthor_ joined #gluster
22:51 duerF joined #gluster
22:52 nightwalk I'd be interested in any theories you might have when you get in in the morning since the logs don't show much and are as clear as mud (in typical python fashion?) in any case
22:55 portante joined #gluster
23:00 semiosis nightwalk: client logs?
23:08 edong23 joined #gluster
23:12 hagarth joined #gluster
23:16 semiosis nm, outta here
23:17 andrewjsledge joined #gluster
23:18 JoeJulian Technically, the 3.3 rpms are stock and the 3.2 are from the 3rd party repo. ;)
23:21 nightwalk http://paste.ubuntu.com/5695384/
23:21 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
23:21 nightwalk JoeJulian: regardless, neither one works for this particular collection of files :)
23:21 nightwalk and btw...I thought you were one of the ones who were usually gone long before now
23:22 lnxsix joined #gluster
23:22 JoeJulian I'm often attached to this channel from the time I wake up 'till I go to bed. This week I've been away a bit, but that's an anomaly.
23:23 JoeJulian ... but I'm in Seattle, so figure GMT-7 or 8, depending on whether the government's imposed some artificial saving of daylight upon me....
23:24 nightwalk let's see...that makes it... 4:24PM or 5:24PM there? that's still within regular work hours I guess :)
23:24 JoeJulian When you deleted the attributes, did you delete the .glusterfs tree too?
23:25 lh joined #gluster
23:25 lh joined #gluster
23:25 JoeJulian Yes, 4:25 now... probably should get ready to head into town to an openstack meetup I'm going to.
23:25 nightwalk yes. in fact, I ended up using a script snippet I'd found in the gluster forums
23:25 nightwalk I had already written my own too, but people were already swearing by the other one, so I tried it too
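For reference, the commonly circulated recipe for wiping a brick directory so it can be reused goes roughly like this (paths are placeholders, and only the usual pair of xattrs is shown):

    setfattr -x trusted.glusterfs.volume-id /export/brick1/data
    setfattr -x trusted.gfid /export/brick1/data
    rm -rf /export/brick1/data/.glusterfs
    # for a full wipe, also strip trusted.* xattrs from everything underneath, e.g.:
    # find /export/brick1/data -exec setfattr -x trusted.gfid {} \; 2>/dev/null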
23:26 JoeJulian Hehe
23:26 * nightwalk is more interested in OpenDaylight, but openstack is good too
23:26 JoeJulian I do make people swear sometimes...
23:26 nightwalk vms and virtual networking and oss networking (oh my)
23:27 JoeJulian Oh, this is the server log... client log would probably be more useful
23:27 nightwalk sorry. one sec...
23:28 nightwalk the brick/ vs. /var/log/glusterfs/ thing confuses me sometimes (esp. when I haven't fiddled with it in a while)
23:29 JoeJulian client log is /var/log/glusterfs/{mount path with / replaced by -}.log
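So, for example, a client mounted at a hypothetical /mnt/dfs would log to:

    tail -f /var/log/glusterfs/mnt-dfs.log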
23:29 nightwalk oh...now I remember why I opted for the other one
23:29 nightwalk client log is prohibitively large (41MB)
23:30 JoeJulian Ah, yes... SDN. Just saw a talk about that a few months ago at the local lopsa affiliated meeting. I'm strongly interested in that too. Just wish my existing (freshly purchased) hardware supported it.
23:31 nightwalk don't we all? :)
23:31 JoeJulian Just grab the last 100 lines or so.
23:32 nightwalk http://paste.ubuntu.com/5695410/
23:32 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
23:33 nightwalk there's the last 500 lines
23:34 recidive_ joined #gluster
23:34 JoeJulian So it's trying to self-heal every file. "no active sinks" suggests that the client is unable to connect to all the bricks in the volume. Check "gluster volume status" (assuming 3.3) and iptables.
23:34 nightwalk no iptables in use (nor arptables, ebtables, etc)
23:34 nightwalk gluster volume status always showed it as being online
23:35 lhawthor_ joined #gluster
23:35 JoeJulian grep that log for " E " maybe?
23:35 JoeJulian though, "no active sinks" has always turned out to be a connection issue when I've encountered it.
23:36 nightwalk In fact, the only thing I ever had a problem with was something about the multi-homed nature of the vms. It'd pick up one of the hosts by ip address for some strange reason when I first associated the bricks, so I'd have to shut everything down and manually change it to the corresponding hostname in the configs
23:36 JoeJulian @hostname
23:36 glusterbot JoeJulian: I do not know about 'hostname', but I do know about these similar topics: 'hostnames'
23:36 JoeJulian @hostnames
23:36 glusterbot JoeJulian: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
23:36 JoeJulian That last bit....
23:36 JoeJulian It's in the documentation, but frequently overlooked.
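The re-probe sequence from the factoid, spelled out for two hypothetical peers server1 and server2:

    # from server2 (or any other peer): update server1's entry to its hostname
    gluster peer probe server1
    # then once, from server1, probe any other peer by name so server1 itself
    # is also recorded by hostname rather than IP
    gluster peer probe server2
    gluster peer status                          # both entries should now show hostnames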
23:36 nightwalk I believe I tried that and it didn't work
23:37 JoeJulian Hey there lh
23:37 nightwalk or rather, I believe the others wouldn't allow me to do it since it already existed as a peer
23:37 JoeJulian It would complain about that, but it would still update the hostname.
23:38 JoeJulian I still consider that a bug, but the developers have never cared so much about it. It just feels unpolished to have to do it that way.
23:39 edong23 joined #gluster
23:40 nightwalk it looks like ubuntu's pastebin allows at least 256K, so here's 1000 lines when grepping for " E ", and it's hopefully a little more informative: http://paste.ubuntu.com/5695418/
23:40 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
23:40 JoeJulian Ok, here's what I would do. Since this is obviously a replica volume, I would wipe the right-hand brick(s) and do a heal...full
23:40 JoeJulian Once again though: Not all children are up
23:41 JoeJulian def something wrong there.
23:41 JoeJulian Check netstat -t and look at the client's connections. Should be one for each brick.
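A rough version of that check, assuming 3.3's default ports (glusterd on 24007, bricks from 24009 up) and a hypothetical volume name:

    netstat -tn | grep ':240'        # expect one ESTABLISHED connection per brick, plus glusterd
    gluster volume status myvol      # on a server: every brick should show Online = Y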
23:41 nightwalk well, there *would* be one child not up, because the third vm borked itself and won't come back until I've fixed it (it's the one I'm using to test ZoL+gluster)
23:42 nightwalk that was added after I started having issues though. Doesn't seem to affect the other volumes either
23:42 JoeJulian Ah. That makes sense then.
23:42 nightwalk Though, it WAS kind of a pain since I had to re-create the volumes to ADD the third child initially
23:42 nightwalk err.../child/brick/
23:42 JoeJulian Without the xattrs it can't start the self-heal unless all the replicas are available.
23:44 nightwalk actually...I think I confused the issue with something I shouldn't have. The *other* volumes existed during the same lifetime as third replica vm, *but* I'd recreated the problem volume a couple times since it went kaput
23:44 JoeJulian Several options. I'd either fix the borked one ;) or remove it and decrease the replica count if I had to to make that happen. Again, when re-adding it I'd make sure the brick(s) are empty so the self-heal doesn't end up split-brain.
23:44 nightwalk ('it' being the third vm)
23:45 JoeJulian Today was a work-from-home day and if I'm going to be seen in public I need to change out of my sweats and probably hop in the shower...
23:45 nightwalk aaand, I finally have an exact number of files. 2,374,188 total, so not particularly huge
23:46 nightwalk I guess I can try it one more time just to say I did, but I don't think it's going to help
23:47 nightwalk To tell you the truth, I'm kind of wondering if it's something to do with the volumes being backed by btrfs
23:48 nightwalk But there again, so is the mail store (same compression even), and it works mostly fine. I get a duplicated message on rare occasion, but that's about it
23:49 nightwalk JoeJulian: anyway, thanks for the input. will report back once I get a chance to re-test
23:50 nightwalk and have fun at the meetup :)
