
IRC log for #gluster, 2014-06-27


All times shown according to UTC.

Time Nick Message
00:09 koguma joined #gluster
00:09 koguma Hey guys, any NFS guru on here?
00:10 koguma I'm having the weirdest problem...
00:10 koguma I can't seem to get fastcgi to read off of an nfs mounted volume.
00:13 Peter1 permission?
00:15 koguma It's definitely not file permissions..
00:15 koguma Could be some NFS related option though.
00:15 koguma I confirmed the issue by rsyncing the whole tree over to a non-nfs volume, and that worked fine.
00:16 Peter1 what is your mount option?
00:16 koguma and here's the thing... lighttpd itself can serve static files just fine off that tree, but fastcgi can't see them.
00:16 koguma defaults,_netdev
00:17 Peter1 r u mounting as glusterfs or nfs
00:17 koguma glusterfs mounted via nfs
00:18 koguma sfdev1:/devwww          /mnt        nfs     defaults,_netdev 0 0
00:19 bala joined #gluster
00:20 koguma are just the defaults ok for nfs, or is there something specific I should use?
00:21 Peter1 what user u run lighttpd and fastcgi as?
00:21 koguma I had a similar problem with running GitLab over nfs.  It just wouldn't work.
00:21 koguma fastcgi is running via lighttpd which is apache/apache user/group.  :P
00:22 koguma It's not a file permission issue.  Without nfs, everything works fine.
00:22 Peter1 maybe it's apache cgi exec opts?
00:22 koguma So it must be either acl or permission within nfs itself....
00:23 Peter1 can u write to the nfs?
00:24 koguma yeah, I can write to nfs, lighty can service static files off of it.
00:25 koguma even if I chmod everything to 777, fastcgi doesn't work.. gives the unable to open file error...
00:26 purpleidea joined #gluster
00:26 purpleidea joined #gluster
00:26 Peter1 can u read write files on nfs?
00:27 Peter1 if you read write files on unix, then next thing would be httpd.conf
00:27 sputnik13 joined #gluster
00:29 koguma The issue is the nfs.
00:29 koguma If I rsync the whole tree to a non-nfs volume everything works fine.  So there's something in the nfs that's messing things up.
00:30 koguma It had to be acls of some sort....
00:30 koguma or maybe the way fastcgi accesses files...
00:30 Peter1 can u read and write to nfs from OS?
00:30 koguma of course
00:31 koguma as I said, lighttpd can serve static (non-fastcgi/php) files off that same directory tree.
00:31 koguma only fastcgi takes a crap...
00:31 Peter1 try mount with hard,intr
00:31 koguma k, lemme try that brb
00:32 JoeJulian JustinClift: What should I look at for http://fpaste.org/113650/03829092/
00:32 glusterbot Title: #113650 Fedora Project Pastebin (at fpaste.org)
00:33 koguma Peter1: No go...
00:36 Peter1 what is the error from fastcgi?
00:37 koguma 'No input file specified.'
00:38 Peter1 any http error?
00:38 koguma from the debug, lighty finds the file, passes it to fastcgi and fastcgi just comes back with that.
00:38 koguma The actual error code is 404
00:47 Peter1 need to make sure the apache user/group can read write exec the files
00:49 gildub joined #gluster
00:50 koguma Yep, they can.
00:50 koguma I'm trying every nfs option I can find....
00:52 Peter1 i'm out of ideas on nfs if apache user can rwx
00:59 koguma it's really really weird...
00:59 Peter1 yes it is
01:02 koguma I did have similar problems with GitLab btw.  Ended up just not using nfs for it..
01:04 Peter1 which flavor of linux?
01:04 Peter1 might want to chk the parent dir's permission
01:04 Peter1 also the permission of the mount point
01:09 koguma CentOS 5.10
01:10 koguma For shits and giggles, I tried running lighty directly off the brick and it worked.
01:10 Peter1 no good to run data direct from brick
01:10 koguma So it's definitely something with nfs... I've just tried like every mount option though....
01:10 koguma I know, it was just a quick test
01:10 koguma If I can run off the brick, no reason I can't off the nfs mount...
01:11 Peter1 ur nfs share option is default ?
01:12 koguma right now I tried with this:  defaults,_netdev,noacl,tcp,nolock,nfsvers=3,sec=sys
01:12 koguma nothing works.. let me remove defaults
01:12 Peter1 try nfs rw,rsize=32768,hard,intr,vers=3,proto=tcp
01:13 koguma oh shit... removing defaults worked
01:13 Peter1 congrats
01:14 koguma oh man... I spent all day narrowing it down to nfs...
01:14 koguma Thanks Peter1! :)
01:14 Peter1 u got it man :)
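For reference, the /etc/fstab line under discussion, with "defaults" dropped and the explicit options koguma ends up using, would look roughly like this (a sketch only; koguma later reports the fastcgi problem came back, so the option set was still not settled):

    sfdev1:/devwww    /mnt    nfs    _netdev,noacl,tcp,nolock,nfsvers=3,sec=sys    0 0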
01:46 Peter1 Error: One or more connected clients cannot support the feature being set. These clients need to be upgraded or disconnected before running this command again
01:47 Peter1 when i tried to  volume set <vol> nfs.export-dir
01:47 Peter1 I have one gfs client mounting that vol when i set it and both run on 3.5.1
01:47 Peter1 is that another bug?
01:48 Ark joined #gluster
01:48 sputnik13 joined #gluster
02:12 harish_ joined #gluster
02:15 jcsp joined #gluster
02:27 dusmant joined #gluster
02:28 Peter1 nvm, it was an old client …. all good :)
02:36 jag3773 joined #gluster
02:44 badone joined #gluster
02:47 bharata-rao joined #gluster
02:59 rjoseph joined #gluster
02:59 sjm joined #gluster
03:11 bala joined #gluster
03:14 davinder15 joined #gluster
03:18 jcsp1 joined #gluster
03:37 jcsp joined #gluster
03:39 atinmu joined #gluster
03:41 dtrainor joined #gluster
03:42 saurabh joined #gluster
03:42 kshlm joined #gluster
03:43 JustinClift JoeJulian: This line is likely the problem: [2014-06-26 20:48:04.315546] E [glupy.c:2363:init] 0-data-glusterflow: Python import failed
03:43 JoeJulian Well yeah, but why?
03:44 JustinClift Have you tried Glupy with something like the "helloworld" translator yet?
03:44 JoeJulian Nope
03:45 JoeJulian This is actually someone else at the company asking me why it failed 'cause, you know, I know EVERYTHING about anything gluster.
03:45 JustinClift JoeJulian: k, that'll be the better translator to try it with initially.  The helloworld translator, and start up glusterfs in foreground debug mode, so it spits out errors to the screen
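What "foreground debug mode" amounts to here is roughly the following invocation (a sketch; the volfile path and mount point are stand-ins, and --debug keeps glusterfs in the foreground with DEBUG-level logging to the terminal):

    glusterfs --debug -f /path/to/helloworld.vol /mnt/glupy-test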
03:45 gmcwhistler joined #gluster
03:46 JustinClift JoeJulian: I'm kind of inclined to recommend "don't use Glupy" atm, as I haven't had a chance to look at it in months... and barely remember the code. :(
03:46 JustinClift It's on my ToDo list to get working again, after this regression stuff is sorted.  Which is hopefully not too far off.
03:47 JoeJulian oh... that's a possibility. I don't know what version they were testing with.
03:47 JoeJulian Ah, 3.5.1... there you have it.
03:48 JoeJulian Thanks.
03:48 JustinClift I find that pretty much every time I go looking at this stuff, I have to start with the helloworld translator, get that working, then move onto the more complicated stuff.
03:48 JoeJulian When you wrote that it was under 3.4, iirc.
03:48 JustinClift Yeah.
03:49 JustinClift Actually, I think Gerald was looking at it not long ago.  He was probably running 3.5 version.
03:49 JustinClift Hmmm.
03:49 * JustinClift checks something
03:50 JustinClift Ok, both GitHub and the Forge have the same version in there.
03:50 JustinClift Was just thinking that maybe one was older than the others or something.
03:50 JoeJulian right
03:51 JustinClift Hmmm, I don't think I can be much help with it atm.  It's been so long since I looked at it. :(
03:51 lmickh joined #gluster
03:51 JoeJulian No worries. I told him to try downgrading to 3.4 and see if that works for now.
03:51 JustinClift :)
03:51 JoeJulian If it does, I'll see if I can get it working with 3.5.
03:53 kanagaraj joined #gluster
03:54 shubhendu joined #gluster
03:54 RameshN joined #gluster
04:06 prasanthp joined #gluster
04:09 dtrainor joined #gluster
04:17 ndarshan joined #gluster
04:38 kumar joined #gluster
04:41 hchiramm_ joined #gluster
04:45 psharma joined #gluster
04:46 glusterbot New news from newglusterbugs: [Bug 1113842] Incorrect diagrams in the admin guide documentation <https://bugzilla.redhat.com/show_bug.cgi?id=1113842>
04:49 ramteid joined #gluster
04:50 RameshN joined #gluster
04:50 nishanth joined #gluster
04:52 kdhananjay joined #gluster
04:53 lalatenduM joined #gluster
04:56 MacWinner joined #gluster
05:00 RameshN joined #gluster
05:01 kdhananjay joined #gluster
05:03 rastar joined #gluster
05:13 _polto_ joined #gluster
05:15 karnan joined #gluster
05:16 vpshastry joined #gluster
05:18 gmcwhistler joined #gluster
05:32 meghanam joined #gluster
05:40 aravindavk joined #gluster
05:46 kdhananjay joined #gluster
05:48 ppai joined #gluster
05:51 hagarth joined #gluster
05:51 rjoseph joined #gluster
06:05 nshaikh joined #gluster
06:08 deepakcs joined #gluster
06:08 hchiramm__ joined #gluster
06:09 qdk joined #gluster
06:12 Philambdo joined #gluster
06:13 raghu joined #gluster
06:16 dusmant joined #gluster
06:16 bala joined #gluster
06:19 kshlm joined #gluster
06:24 mbukatov joined #gluster
06:29 meghanam joined #gluster
06:29 meghanam_ joined #gluster
06:30 jcsp joined #gluster
06:32 vimal joined #gluster
06:38 Neeraj joined #gluster
06:38 rgustafs joined #gluster
06:38 Neeraj hi
06:38 aravindavk joined #gluster
06:38 glusterbot Neeraj: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
06:40 prasanthp joined #gluster
06:42 bala joined #gluster
06:44 lalatenduM joined #gluster
06:45 Ark joined #gluster
06:46 rjoseph joined #gluster
06:49 confusedp3rms hi guys, having an issue with geo replication
06:49 confusedp3rms http://pastebin.com/FCmr5q8v
06:49 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
06:49 confusedp3rms [2014-06-27 06:46:43.032789] E [glusterd-geo-rep.c:4083:glusterd_get_slave_info] 0-: Invalid slave name
06:49 hchiramm__ joined #gluster
06:50 confusedp3rms I don't get it
06:55 Neeraj I installed gluster 3.3 via apt-get (gluster-server) on debian 6 amd64, but found out that the geo-replication command is not available in it. So I ran apt-get purge gluster-server, but it didn't remove all the files, so I built 3.5.1 from source and it's not working
06:55 confusedp3rms hmmm
06:55 confusedp3rms I'm using gluster 3.5.0 on ubuntu 14.04
06:55 koguma Hey guys, so looks like I didn't actually solve my problem with the nfs mounts.  I'm unable to serve fastcgi data off of gluster nfs mounts...
06:56 romero joined #gluster
06:56 Neeraj I am getting "0-glusterfs: ERROR: parsing the volfile failed" when I run "/usr/local/sbin/glusterfs"
06:57 JoeJulian ~ppa | Neeraj
06:57 glusterbot Neeraj: The official glusterfs packages for Ubuntu are available here: 3.4 stable: http://goo.gl/u33hy -- 3.5 stable: http://goo.gl/cVPqEH -- introducing QEMU with GlusterFS 3.4 support: http://goo.gl/7I8WN4
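On an Ubuntu release the PPA supports, enabling the 3.5 packages would be something like this (a sketch; these are Ubuntu PPAs, which is why the Debian 6 box runs into 404s further down):

    sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.5
    sudo apt-get update
    sudo apt-get install glusterfs-server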
07:00 ekuric joined #gluster
07:01 JoeJulian confusedp3rms: https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_geo-replication.md
07:01 glusterbot Title: glusterfs/doc/admin-guide/en-US/markdown/admin_geo-replication.md at master · gluster/glusterfs · GitHub (at github.com)
07:02 JoeJulian your slave syntax is incorrect
07:05 confusedp3rms JoeJulian: I'm reading that, I see "A local directory which can be represented as file URL like file:///path/to/dir. You can use shortened form, for example, /path/to/dir."
07:05 confusedp3rms which is what I have, I tried file:/// too
07:06 confusedp3rms originally started with ssh, but wanted to make it as simple as possible
07:06 JoeJulian pfft... maybe I should overcome my ADHD and read past the first sentence.
07:06 confusedp3rms haha, I appreciate the second set of eyes :)
07:07 itisravi joined #gluster
07:09 calum_ joined #gluster
07:10 Neeraj JoeJulian: ppa 3.5 for ubuntu does not exist at the path apt-get is requesting
07:11 confusedp3rms Neeraj: apt-get update ?
07:11 Neeraj yes, not working
07:11 JoeJulian wtf?
07:11 JoeJulian tmp = strtok_r (linearr[0], "/", &save_ptr);
07:11 JoeJulian tmp = strtok_r (NULL, "/", &save_ptr);
07:11 confusedp3rms slave name stuff?
07:12 JoeJulian does that modify save_ptr?
07:12 JoeJulian otherwise, tmp= twice so the first one is lost.
07:14 JoeJulian hmm, weird function.
07:14 JoeJulian I guess it's not wrong.
07:14 confusedp3rms re-entrant fun :(
07:16 confusedp3rms sounds like it thinks it's syncing to another gluster instead of to the simple file, no?
07:17 glusterbot New news from newglusterbugs: [Bug 1113894] AFR : self-heal of few files not happening when a AWS EC2 Instance is back online after a restart <https://bugzilla.redhat.com/show_bug.cgi?id=1113894>
07:19 fraggeln [2014-06-27 07:01:12.488106] E [xlator.c:390:xlator_init] 0-blogstage01-dht: Initialization of volume 'blogstage01-dht' failed, review your volfile again <-- where should shit volfile be located?
07:20 JoeJulian wow. that's an old error message...
07:20 JoeJulian The volfile is in /var/lib/glusterd/vols/$volname/$volname_fuse.vol . But it's irrelevant because it's built by glusterd.
07:21 JoeJulian Did you downgrade?
07:21 ktosiek joined #gluster
07:21 fraggeln JoeJulian: I did downgrade to 3.4.3
07:22 fraggeln since there was some scary "features" in 3.5 ;)
07:22 confusedp3rms geo-rep?
07:22 JoeJulian You like how I guessed that one... ;)
07:22 JoeJulian Just a sec... I need to find something for you.
07:23 koguma hey guys, what are the usual mount options for nfs mounting glusterfs?
07:23 fraggeln or downgrade and downgrade, we did start out from scratch with volumes and stuff on 3.4.3
07:23 haomaiwang joined #gluster
07:25 JoeJulian fraggeln: Edit the /var/lib/glusterd/glusterd.info files on your servers, and set operating-version to 2. Then regenerate the volfiles by running 'glusterd --xlator-option *.upgrade=on -N'.
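A minimal sketch of the sequence JoeJulian describes, run on each server (assumes glusterd is stopped while the volfiles are regenerated; the quoting of the xlator option is just to keep the shell from globbing it):

    service glusterd stop
    # pin the cluster compatibility level back to the 3.4 value
    sed -i 's/^operating-version=.*/operating-version=2/' /var/lib/glusterd/glusterd.info
    # regenerate the volfiles in the foreground, then exit
    glusterd --xlator-option '*.upgrade=on' -N
    service glusterd start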
07:25 JoeJulian ~nfs | koguma
07:25 glusterbot koguma: To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
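A sketch of the corresponding client mount and the server-side checks the factoid implies, using EL-style service commands (CentOS 5 in koguma's case; Gluster's built-in NFS server speaks NFSv3 over TCP only):

    # client side
    mount -t nfs -o tcp,vers=3 sfdev1:/devwww /mnt
    # server side: the RPC portmapper must be running ...
    rpcinfo -p
    # ... and the kernel NFS server must not be, or it grabs the NFS ports first
    service nfs status
    chkconfig nfs off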
07:26 fsimonce joined #gluster
07:26 ThatGraemeGuy joined #gluster
07:26 koguma Thanks.  Thing is, I can mount.  But I can't get it working with fastcgi.  I thought I had it working before, but I was testing running off the brick, which works.
07:26 fraggeln JoeJulian: its already set to 2
07:27 Neeraj confusedp3rms: apt-get update not working
07:27 JoeJulian ~ppa | Neeraj
07:27 glusterbot Neeraj: The official glusterfs packages for Ubuntu are available here: 3.4 stable: http://goo.gl/u33hy -- 3.5 stable: http://goo.gl/cVPqEH -- introducing QEMU with GlusterFS 3.4 support: http://goo.gl/7I8WN4
07:27 JoeJulian ... again...
07:28 koguma does gluster support nfsv4?
07:29 Neeraj JoeJulian: done that but I am getting "Failed to fetch http://ppa.launchpad.net/semiosis/ubuntu-glusterfs-3.5/ubuntu/dists/wheezy/main/source/Sources 404 Not Found " for 3.5 on apt-get update
07:30 koguma at this point I think I might just switch to fuse..
07:30 confusedp3rms JoeJulian: I'm running gluster 3.5, does that mean the older style doesn't work? I should be using https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md instead?
07:31 glusterbot Title: glusterfs/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md at master · gluster/glusterfs · GitHub (at github.com)
07:31 koguma nfs is just broken for me.
07:31 confusedp3rms which has a different naming scheme
07:31 JoeJulian @ports
07:31 glusterbot JoeJulian: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
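By way of example, that port list translates into iptables rules along these lines for a 3.4+ server (a sketch; widen the brick range to cover however many bricks the host actually carries):

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (+rdma)
    iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT   # brick daemons, 3.4 and later
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # Gluster NFS and NLM
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # portmapper
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    iptables -A INPUT -p tcp --dport 2049 -j ACCEPT          # NFS, 3.4 and later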
07:31 JoeJulian maybe?
07:35 Neeraj JoeJulian: ppa doesn't have package for wheezy
07:35 fraggeln JoeJulian: do you think there is any perfromance gains in using either redhat/centos over debian as gluster-servers?
07:36 fraggeln or ofc stability
07:36 JoeJulian Yes. I work much more efficiently in rpm based distros. :D
07:36 ctria joined #gluster
07:37 JoeJulian Like, why the F do I NEED to have a headless server ask for input when a raid is failed? It's a brick and isn't required for the OS to boot. <eyeroll>
07:39 JoeJulian semiosis: another debian request
07:39 JoeJulian I thought he was building debian packages yesterday.
07:39 fraggeln JoeJulian: ofc you do :)
07:40 fraggeln there is no .deb for version 3.4.4 as well ;)
07:40 * fraggeln hides
07:40 JoeJulian there is under qa
07:41 JoeJulian There was a critical bug a patch release was made for.
07:41 ricky-ti1 joined #gluster
07:43 fraggeln note to self, do not run rebalance when doing a rsync.
07:45 JoeJulian note to fraggeln, use --inplace when doing an rsync to a gluster volume
07:46 fraggeln ohh.
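The reason for --inplace: by default rsync writes each file under a temporary name and renames it at the end, and on a distributed volume the final name usually hashes to a different brick than the temporary one, leaving link files behind - which gets much worse while a rebalance is moving data around. A sketch of the invocation JoeJulian means:

    rsync -a --inplace /source/dir/ /mnt/glustervolume/dir/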
07:47 glusterbot New news from newglusterbugs: [Bug 1073616] Distributed volume rebalance errors due to hardlinks to .glusterfs/... <https://bugzilla.redhat.com/show_bug.cgi?id=1073616> || [Bug 1113907] AFR: Inconsistent GlusterNFS behavior v/s GlusterFUSE during metadata split brain on directories <https://bugzilla.redhat.com/show_bug.cgi?id=1113907>
07:48 Thilam Hi, do you have an idea when the 3.5.1 packages will be released ?
07:50 JoeJulian yesterday
07:51 JoeJulian @yum repo
07:51 glusterbot JoeJulian: The official community glusterfs packages for RHEL (including CentOS, SL, etc) are available at http://download.gluster.org/pub/gluster/glusterfs/. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates (or updates-testing) repository.
07:53 JoeJulian huh, not built for quantal. I wonder what happened.
08:01 keytab joined #gluster
08:06 capri im testing geo-replication at the moment, but when i start the geo-rep i always get the following error output: "Staging failed on localhost." "Unable to store slave volume name." "Unable to fetch slave or confpath details."
08:06 capri what im doing wrong?
08:06 haomaiw__ joined #gluster
08:08 bala joined #gluster
08:10 confusedp3rms JoeJulian: I was able to get it closer to working by following the other doc
08:10 confusedp3rms thanks for your help
08:11 JoeJulian confusedp3rms: I'm still plugging away at understanding it...
08:11 confusedp3rms Understanding what?
08:12 JoeJulian how the glusterd_get_slave function works...
08:12 confusedp3rms oh, ha, it's all changed with v3.5
08:12 JoeJulian Doesn't look like it should have, according to the source.
08:12 confusedp3rms https://github.com/gluster/glusterfs/blob/master/doc/features/geo-replication/distributed-geo-rep.md is the correct document for geo-rep v3.5+
08:12 glusterbot Title: glusterfs/doc/features/geo-replication/distributed-geo-rep.md at master · gluster/glusterfs · GitHub (at github.com)
08:13 JoeJulian but that's not supposed to break the former way of doing it.
08:14 confusedp3rms errr, that's wrong doc
08:14 confusedp3rms https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md
08:14 glusterbot Title: glusterfs/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md at master · gluster/glusterfs · GitHub (at github.com)
08:15 confusedp3rms "Unlike previous version, slave must be a gluster volume. Slave can not be a directory. And both the master and slave volumes should have been created and started before creating geo-rep session"
08:15 confusedp3rms and that doc has some typos: s/<slave_volume>::<slave_volume>/<slave_host>::<slave_volume>/
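With the placeholders corrected, the 3.5 distributed geo-rep commands look roughly like this (a sketch based on that admin guide; "mastervol", "slavehost" and "slavevol" are stand-ins, and passwordless SSH from master to slave is assumed to be set up first):

    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status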
08:15 JoeJulian @hack
08:15 glusterbot JoeJulian: The Development Work Flow is at http://www.gluster.org/community/documentation/index.php/Development_Work_Flow
08:15 confusedp3rms I'd submit a pull request, but signing up for the review process + gerrit stuff is too involved for me
08:16 confusedp3rms yeah
08:16 confusedp3rms for tonight, it's too late
08:16 JoeJulian Well, that I understand. I'm up way too late myself.
08:16 confusedp3rms haha, I'm out
08:16 confusedp3rms have a good night, thanks again!
08:16 JoeJulian goodnight
08:20 capri anyone knows what im doing wrong?
08:20 capri im using gluster 3.5.1 packages
08:26 Thilam [09:48] <Thilam> Hi, do you have an idea when the 3.5.1 packages will be released ? <- was talking about the debian packages sorry :/
08:27 JoeJulian Thilam: semiosis said he was going to try to do that yesterday. I presume he got busy.
08:27 Thilam ok
08:27 Thilam thx for answer :)
08:32 nbalachandran joined #gluster
08:33 dusmant joined #gluster
08:33 aravindavk joined #gluster
08:47 hagarth joined #gluster
08:48 shubhendu joined #gluster
08:57 dusmant joined #gluster
08:57 xavih joined #gluster
09:05 lalatenduM joined #gluster
09:16 kumar joined #gluster
09:23 hagarth joined #gluster
09:24 Paul-C joined #gluster
09:43 xavih joined #gluster
10:15 capri is it possible to geo-replicate a snapshot of the glustervol?
10:18 crashmag joined #gluster
10:20 dusmant joined #gluster
10:21 fraggeln 11GB, 200k files, is that an unusual workload?
10:27 kkeithley1 joined #gluster
10:30 deepakcs joined #gluster
10:31 ndevos fraggeln: that is a description of the contents, I think it is not unusual for something like a dropbox use-case - workload depends on what you do with the data
10:31 Paul-C joined #gluster
10:39 shubhendu joined #gluster
10:47 kanagaraj joined #gluster
10:52 glusterbot New news from newglusterbugs: [Bug 1113959] Spec %post server does not wait for the old glusterd to exit <https://bugzilla.redhat.com/show_bug.cgi?id=1113959>
11:04 LebedevRI joined #gluster
11:12 haomaiwa_ joined #gluster
11:13 haomai___ joined #gluster
11:19 dusmant joined #gluster
11:20 julim joined #gluster
11:21 edward1 joined #gluster
11:22 glusterbot New news from newglusterbugs: [Bug 1113960] brick process crashed when rebalance and rename was in progress <https://bugzilla.redhat.com/show_bug.cgi?id=1113960>
11:32 fraggeln ndevos: pictures for a "small" website ;)
11:34 ndevos fraggeln: yes, something like that would do, as long as you do not need to do directory browsing - pictures are mostly write-once-read-many and that's a nice use-case
11:36 hagarth joined #gluster
11:41 sputnik13 joined #gluster
11:42 bene2 joined #gluster
11:45 dusmant joined #gluster
11:47 sputnik13 joined #gluster
11:49 ppai joined #gluster
12:09 haomaiwang joined #gluster
12:20 dusmant joined #gluster
12:29 kanagaraj joined #gluster
12:49 ndarshan joined #gluster
12:54 edwardm61 joined #gluster
12:55 Andreas-IPO joined #gluster
12:56 theron joined #gluster
12:59 hagarth joined #gluster
13:05 gildub joined #gluster
13:14 bala joined #gluster
13:17 sjm left #gluster
13:17 bennyturns joined #gluster
13:20 chirino joined #gluster
13:21 Andreas-IPO joined #gluster
13:25 RameshN joined #gluster
13:25 dusmant joined #gluster
13:26 rwheeler joined #gluster
13:33 gmcwhistler joined #gluster
13:33 gildub joined #gluster
13:35 B21956 joined #gluster
13:43 Ark joined #gluster
13:48 txmoose left #gluster
13:50 gildub joined #gluster
14:09 theron joined #gluster
14:12 theron_ joined #gluster
14:13 theron joined #gluster
14:14 theron_ joined #gluster
14:20 pasqd i can't start the volume. Error: Volume id mismatch for brick
14:20 pasqd any ideas? :D
14:22 itisravi joined #gluster
14:27 dusmant joined #gluster
14:28 theron joined #gluster
14:30 sjm joined #gluster
14:38 theron joined #gluster
14:39 theron joined #gluster
14:41 coredump joined #gluster
14:47 wushudoin joined #gluster
14:50 jobewan joined #gluster
14:54 steveg_away joined #gluster
14:54 sgordon joined #gluster
14:59 ndk joined #gluster
15:02 ndk joined #gluster
15:22 itisravi joined #gluster
15:33 hchiramm__ joined #gluster
15:48 lpabon_test joined #gluster
15:51 lpabon_test joined #gluster
15:52 lmickh joined #gluster
16:01 haomaiwa_ joined #gluster
16:07 ramteid joined #gluster
16:15 chirino joined #gluster
16:23 glusterbot New news from newglusterbugs: [Bug 1023191] glusterfs consuming a large amount of system memory <https://bugzilla.redhat.com/show_bug.cgi?id=1023191>
16:26 Mo_ joined #gluster
16:30 semiosis pasqd: check the ,,(extended attributes) on the top directory of all the bricks in the volume
16:31 glusterbot pasqd: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
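Concretely, for pasqd's "Volume id mismatch" error that means comparing the volume-id xattr on each brick root with the id glusterd expects (a sketch; "myvol" and the brick path are stand-ins):

    getfattr -m . -d -e hex /bricks/myvol-brick1 | grep volume-id
    # trusted.glusterfs.volume-id=0x<32 hex digits>
    grep volume-id /var/lib/glusterd/vols/myvol/info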
16:41 zerick joined #gluster
16:44 dusmant joined #gluster
16:54 cmtime JoeJulian, I just wanted to let you know you solved my problem yesterday thank you.
17:03 zerick joined #gluster
17:08 JoeJulian cmtime: awesome, thanks
17:09 cmtime now my only problem is trying to get samba to be faster with gluster.  But that is going to take some time.
17:09 dtrainor joined #gluster
17:13 kanagaraj joined #gluster
17:18 JoeJulian I turned off anything to do with locks/oplocks in samba.
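For context, "anything to do with locks/oplocks" in smb.conf comes down to per-share settings like these (a sketch of that approach, not a general recommendation, since it trades the safety of concurrent access for speed):

    [glustershare]
        path = /mnt/glustervolume
        oplocks = no
        level2 oplocks = no
        kernel oplocks = no
        posix locking = no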
17:19 ramteid joined #gluster
17:43 Peter1 joined #gluster
17:43 Peter1 every time a file is written, the brick logs entries like these:
17:43 Peter1 0-sas03-quota: quota context is not present in inode (gfid:00000000-0000-0000-0000-000000000001)
17:43 Peter1 [2014-06-27 17:42:56.027675] W [quota.c:3669:quota_statfs_validate_cbk] 0-sas03-quota: quota context is not present in inode (gfid:00000000-0000-0000-0000-000000000001)
17:43 Peter1 [2014-06-27 17:42:57.016152] W [quota.c:3669:quota_statfs_validate_cbk] 0-sas03-quota: quota context is not present in inode (gfid:00000000-0000-0000-0000-000000000001)
17:44 Peter1 is that a valid warning?
17:45 saurabh joined #gluster
17:47 _polto_ joined #gluster
17:51 vpshastry joined #gluster
17:57 kanagaraj joined #gluster
17:57 daMaestro joined #gluster
18:00 dusmant joined #gluster
18:06 ctria joined #gluster
18:15 Matthaeus joined #gluster
18:27 jcsp joined #gluster
18:47 tdasilva joined #gluster
18:49 ThatGraemeGuy joined #gluster
18:50 Peter2 joined #gluster
19:05 _dist joined #gluster
19:09 mortuar joined #gluster
19:15 ThatGraemeGuy joined #gluster
19:39 shapemaker joined #gluster
19:41 n0de__ joined #gluster
19:41 hchiramm__ joined #gluster
19:42 DV__ joined #gluster
19:42 Alex____1 joined #gluster
19:43 uebera|| joined #gluster
19:43 d-fence joined #gluster
19:43 Andreas-IPO joined #gluster
19:43 n0de__ I just noticed one of my bricks does not have glusterfsd running, that is clearly not good
19:44 verdurin joined #gluster
19:44 elico joined #gluster
19:44 semiosis you can check that on 3.4 or newer with 'gluster volume status'
19:44 semiosis (maybe 3.3?)
19:44 n0de__ It is 3.2.4
19:45 semiosis ohh well then
19:45 n0de__ yea :/
19:45 semiosis you can either restart glusterd on that host, or run 'gluster volume start $vol force'
19:45 semiosis to respawn the missing process
19:45 semiosis if it still doesnt come back, check the log for that brick, it probably tried to start & logged an error
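A sketch of that recovery sequence on the affected server ($vol as in semiosis's message; the brick log name is derived from the brick path and lives in the usual /var/log/glusterfs location):

    service glusterd restart
    # or, equivalently, force-start the volume to respawn any missing brick daemons
    gluster volume start $vol force
    # if the brick still does not come up, its log says why
    less /var/log/glusterfs/bricks/<brick-path>.log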
19:45 capri joined #gluster
19:46 n0de__ gotcha
19:46 n0de__ I will first try to just restart glusterd
19:46 n0de__ I use force as a last resort :)
19:46 semiosis should be the same either way
19:47 n0de__ Explains why the host that doesn't have glusterfsd running shows 50TB free
19:48 n0de__ err, 34TB free, while the others only have like 8.7TB free
19:54 semiosis if this is a distributed-replicated volume (as opposed to pure replicate) then this is a good opportunity to use ,,(targeted self heal)
19:54 glusterbot https://web.archive.org/web/20130314122636/http://community.gluster.org/a/howto-targeted-self-heal-repairing-less-than-the-whole-volume/
19:54 semiosis also a great time to recommend planning for an upgrade to 3.4, or maybe 3.5 if it's going to be a ways out
19:55 n0de__ yep :) it is a distributed-replica
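On a pre-3.3 release like this 3.2.4 cluster there is no self-heal daemon, so healing is triggered from a client mount by stat()ing files; the "targeted" variant from that article simply limits the walk to the subtree that needs repair. A sketch:

    find /mnt/glustervolume/path/needing/repair -noleaf -print0 | xargs --null stat > /dev/null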
20:27 B21956 joined #gluster
20:39 coredump joined #gluster
20:47 Ramereth|home joined #gluster
20:47 the-me_ joined #gluster
20:47 oxidane_ joined #gluster
20:47 social joined #gluster
20:48 Andreas-IPO_ joined #gluster
20:48 Paul-C joined #gluster
20:53 hchiramm_ joined #gluster
20:54 JoeJulian @meh
20:54 glusterbot JoeJulian: I'm not happy about it either
20:54 JoeJulian I need to change cell phone providers. The one I'm currently using doesn't let me connect to vpns.
20:55 lpabon_test joined #gluster
20:57 uebera|| joined #gluster
21:01 lpabon_test joined #gluster
21:02 capri joined #gluster
21:02 uebera|| joined #gluster
21:11 stigchristian joined #gluster
21:22 n0de joined #gluster
21:23 Ark joined #gluster
21:24 Peter3 joined #gluster
21:25 oxidane joined #gluster
21:25 stigchri1tian joined #gluster
21:25 m0zes_ joined #gluster
21:25 bet_ joined #gluster
21:26 coredump|br joined #gluster
21:26 suliba joined #gluster
21:26 nixpanic joined #gluster
21:26 SteveCoo1ing joined #gluster
21:26 Dave2_ joined #gluster
21:27 nixpanic joined #gluster
21:28 eryc_ joined #gluster
21:32 semiosis joined #gluster
21:32 semiosis joined #gluster
21:36 BradLsys joined #gluster
22:00 tg2 joined #gluster
22:44 mjsmith2 joined #gluster
23:01 ThatGraemeGuy joined #gluster
23:13 sonicrose joined #gluster
23:15 sonicrose hi #gluster! i'm looking for a quick recommendation on laying out bricks for a new volume. i'm doing a striped replica (no distribution). I have 6 gluster VMs with two disks each, so replica 2 stripe 6 (yes, performance is a requirement: vhd file storage for virtual machines). i had a layout in mind... to follow...
23:24 sonicrose here's a quick diagram http://gyazo.com/64df28cee70e9b91fa64bc7f62030054
23:24 sonicrose so to keep the replicas in line, i thought about creating a giant 'chain' and i'm wondering if this is best practice or not...
23:29 sonicrose so i was going to do something like this:
23:29 sonicrose s11:/bra s21:/brb s21:/bra s31:/brb s31:/bra s12:/brb s12:/bra s22:/brb s22:/bra s32:/brb s32:/bra s11:/brb
23:31 sonicrose i believe this ensures the replicas are all on separate physical boxes, but am i going to be creating any kind of weird performance bottleneck by using this layout?
23:31 sonicrose ah... i also see that i would be out of luck with this layout if one of the switches failed...  ok i guess that really only leaves me one option
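For what it's worth, the chained layout sonicrose listed above would translate into a create command along these lines (a sketch; "san2" is the volume name he uses below, and with replica 2 each adjacent pair of bricks in the list becomes a replica set):

    gluster volume create san2 stripe 6 replica 2 \
        s11:/bra s21:/brb  s21:/bra s31:/brb  s31:/bra s12:/brb \
        s12:/bra s22:/brb  s22:/bra s32:/brb  s32:/bra s11:/brb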
23:44 theron joined #gluster
23:44 stickyboy joined #gluster
23:45 sonicrose volume start: san2: success ... so i guess im in business thx anyway!
23:46 sonicrose final config http://pastebin.com/va5B1R85
