IRC log for #gluster, 2013-05-24


All times shown according to UTC.

Time Nick Message
00:04 yinyin joined #gluster
00:17 recidive_ joined #gluster
00:44 cyberbootje1 joined #gluster
01:12 cyberbootje joined #gluster
01:16 majeff joined #gluster
01:27 majeff1 joined #gluster
01:31 majeff joined #gluster
01:40 hagarth joined #gluster
01:56 sprachgenerator joined #gluster
02:25 hjmangalam1 joined #gluster
02:37 cfeller joined #gluster
02:48 lpabon joined #gluster
02:54 anands joined #gluster
03:03 majeff joined #gluster
03:03 majeff1 joined #gluster
03:05 majeff joined #gluster
03:06 bharata joined #gluster
03:15 vigia joined #gluster
03:37 hjmangalam1 joined #gluster
03:39 recidive joined #gluster
04:02 vpshastry joined #gluster
04:03 shylesh joined #gluster
04:05 sgowda joined #gluster
04:07 saurabh joined #gluster
04:14 edong23 joined #gluster
04:21 vpshastry joined #gluster
04:45 zhashuyu joined #gluster
04:56 kshlm joined #gluster
04:56 glusterbot New news from newglusterbugs: [Bug 960752] Update to 3.4-beta1 kills glusterd <http://goo.gl/69M5f>
05:03 bulde joined #gluster
05:05 y4m4 joined #gluster
05:07 hagarth joined #gluster
05:20 bala joined #gluster
05:21 36DAANRZ7 joined #gluster
05:22 ashimame joined #gluster
05:43 lalatenduM joined #gluster
05:46 guigui1 joined #gluster
05:49 koubas joined #gluster
05:54 psharma joined #gluster
05:56 rastar joined #gluster
05:58 aravindavk joined #gluster
06:03 vimal joined #gluster
06:07 andreask joined #gluster
06:12 satheesh joined #gluster
06:21 rgustafs joined #gluster
06:27 ricky-ticky joined #gluster
06:34 vshankar joined #gluster
06:35 ollivera joined #gluster
06:46 rotbeard joined #gluster
06:55 ekuric joined #gluster
06:56 glusterbot New news from resolvedglusterbugs: [Bug 966855] nfs: add-brick, rebalance when locks are held nfs-server crashes <http://goo.gl/EfUxC>
06:58 mohankumar joined #gluster
07:06 isomorphic joined #gluster
07:07 NeatBasis joined #gluster
07:08 ctria joined #gluster
07:13 kshlm joined #gluster
07:17 Cotolez joined #gluster
07:45 zoldar joined #gluster
07:51 guigui3 joined #gluster
07:57 spider_fingers joined #gluster
08:16 ngoswami joined #gluster
08:37 Airbear joined #gluster
08:40 tjikkun_work joined #gluster
08:45 77CAA4YDW joined #gluster
09:20 rb2k joined #gluster
09:27 anands joined #gluster
09:28 guigui1 joined #gluster
09:36 ujjain joined #gluster
10:16 rosco joined #gluster
10:38 jtux joined #gluster
10:41 manik joined #gluster
10:49 ProT-0-TypE joined #gluster
10:50 guigui1 joined #gluster
11:00 lpabon joined #gluster
11:02 xavih joined #gluster
11:19 yinyin_ joined #gluster
11:23 edward1 joined #gluster
11:23 jtux joined #gluster
11:25 hchiramm__ joined #gluster
11:28 pkoro joined #gluster
11:30 ekuric joined #gluster
11:35 dustint joined #gluster
11:43 recidive_ joined #gluster
11:52 billythekid joined #gluster
11:52 billythekid Hi!
11:52 glusterbot billythekid: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
11:52 billythekid I am using GlusterFS from the EPEL repository. I am trying to configure the volumes using the "gluster volume set" command. The manual mentions that the "quick-read" translator has several options; nevertheless, I am not able to find how to change them via that command. Does anyone know if this is possible, OR if I should edit the .vol files directly?
11:54 billythekid * I forgot to mention I am using the 3.2 version
11:55 pkoro joined #gluster
11:59 puebele1 joined #gluster
11:59 hagarth joined #gluster
12:02 xavih joined #gluster
12:05 chirino joined #gluster
12:11 billythekid does anyone know how to modify the quick-read.max-file-size option?
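
A sketch of what billythekid is after: in 3.2, translator options not exposed through "gluster volume set" could be changed by editing the translator stanza in the generated client .vol file (the volume name "myvol" and the values here are illustrative; "max-file-size" and "cache-timeout" are quick-read's documented tunables):

    volume myvol-quick-read
        type performance/quick-read
        option max-file-size 64KB    # largest file served from the quick-read cache
        option cache-timeout 1       # seconds before cached data is revalidated
        subvolumes myvol-io-cache
    end-volume

Beware that glusterd regenerates .vol files whenever the volume configuration changes, so hand edits can be silently overwritten.
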
12:22 hchiramm__ joined #gluster
12:32 rb2k any recommendations for fstab mounting in ubuntu?
12:32 rb2k I think it wants to mount before the network is up
12:32 rb2k which doesn't make it particularly happy
12:53 manik joined #gluster
13:13 meunierd1 joined #gluster
13:16 sprachgenerator joined #gluster
13:16 semiosis rb2k: could you please pastie the client log file showing the failed mount attempt at boot time?
13:17 semiosis i'm working on a fix for one cause of that situation, and want to see if your problem may be the same one i'm having
13:17 rb2k I think I have that problem: unix-heaven.org/glusterfs-fails-to-mount-after-reboot
13:17 manik joined #gluster
13:17 rb2k I just solved it the same way as that guy did
13:17 rb2k But I'll try to get you the logs
13:18 rb2k I hope those instances are still up
13:18 semiosis what guy?
13:18 rb2k the unix-heaven.org/glusterfs-fails-to-mount-after-reboot guy
13:18 semiosis the article author or one of the commenters?
13:18 rb2k yes
13:18 rb2k the author
13:18 rb2k I'm using something similar to rc.local
13:19 rb2k "nobootwait" as an argument in fstab
13:19 rb2k and a mount -a later on
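
A sketch of the workaround rb2k describes (the server "server1" and volume "myvol" are hypothetical):

    # /etc/fstab -- "nobootwait" tells Ubuntu's mountall not to block boot on this mount
    server1:/myvol  /mnt/gluster  glusterfs  defaults,_netdev,nobootwait  0  0

    # /etc/rc.local -- retry glusterfs mounts once the network is up
    mount -a -t glusterfs
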
13:19 manik joined #gluster
13:19 lpabon joined #gluster
13:19 ctria joined #gluster
13:19 semiosis that's not a good solution, but if it works for you, great
13:19 rb2k I know :)
13:19 rb2k what would be the alternative though?
13:19 rb2k _netdev doesn't seem to work
13:20 semiosis _netdev doesn't do anything on ubuntu
13:20 rb2k exactly
13:20 semiosis there's many causes of this problem, and i've fixed most of them, but seems like a new one came up with ubuntu precise, and i'm working on a fix for that
13:20 semiosis but your problem may be different (even though the symptom is the same) so i need your log file
13:21 rb2k ok, new instances are booting
13:21 rb2k what would be your current way to do things on ubuntu?
13:21 mohankumar joined #gluster
13:22 hchiramm__ joined #gluster
13:22 semiosis i am trying this out on my systems: http://pastie.org/7952859
13:22 glusterbot Title: #7952859 - Pastie (at pastie.org)
13:22 semiosis works for me
13:22 rb2k ok, so upstart
13:22 semiosis the right fix has to be an upstart job
13:23 semiosis but the key is what to put in the job
13:23 semiosis if you could 1) get me your log, and 2) test out that upstart job, that would be really helpful
13:24 rb2k I should have the log once these instances are up and I reboot one
13:24 semiosis that upstart job is a work in progress
13:24 semiosis i'm rolling it out to servers one at a time, so far so good, but i still have not tried it on the servers with localhost client mounts
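
The pastie contents are not preserved in this log; what follows is only a hypothetical sketch of an upstart job along the lines being discussed, holding each glusterfs "mounting" event until the network is up via Ubuntu's wait-for-state helper:

    # /etc/init/mounting-glusterfs.conf (sketch, not semiosis's actual job)
    start on mounting TYPE=glusterfs
    task
    exec start wait-for-state WAIT_FOR=networking WAITER=mounting-glusterfs

As semiosis notes later in the log, the hard part is making such a job block every glusterfs mount event rather than only the first.
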
13:34 Norky joined #gluster
13:34 theron joined #gluster
13:42 recidive_ joined #gluster
13:46 ctria joined #gluster
13:54 hchiramm__ joined #gluster
13:55 rb2k semiosis: *sigh*, now it doesn't fail anymore
13:56 rb2k retrying
14:00 vpshastry joined #gluster
14:00 manik joined #gluster
14:02 premera joined #gluster
14:03 portante joined #gluster
14:15 theron joined #gluster
14:16 lpabon joined #gluster
14:21 kaptk2 joined #gluster
14:22 spider_fingers left #gluster
14:30 NeatBasis joined #gluster
14:37 jtux joined #gluster
14:40 red_solar joined #gluster
14:41 mohankumar joined #gluster
14:42 piotrektt_ joined #gluster
14:49 hagarth joined #gluster
14:51 recidive_ good morning guys
14:53 recidive_ is 3.2 still maintained? Can I use it, or are there some issues with it?
14:53 Brian_TS joined #gluster
14:54 recidive_ I played with 3.2 on a ubuntu 12.04 machine yesterday, and it worked fine
14:54 puebele1 left #gluster
14:55 Brian_TS I'm trying to install Gluster from source on IBM Z series s390x zLinux (Red Hat 6.5). Since Gluster requires some packages that aren't available as binary RPMs, I also have to compile those packages. I'm trying to compile the EPEL6 source RPMs for the IBM s390x zLinux Red Hat 6 environment. I've run into a chicken/egg issue with two SRPMs: I can't rpmbuild one because of a dependency on an RPM that I haven't built
14:55 Brian_TS http://pastebin.com/seqrC8Ap
14:55 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
14:56 Brian_TS http://dpaste.org/dNL7D/
14:56 glusterbot Title: dpaste.de: Snippet #228662 (at dpaste.org)
14:56 guigui1 left #gluster
14:57 JoeJulian recidive_: 3.2 is no longer receiving backports.
14:57 duerF joined #gluster
14:59 JoeJulian jinja and sphinx? How did those work their way into the dependencies for GlusterFS?
14:59 recidive_ JoeJulian: ok, but what about security issues with the official ubuntu package, and docs on the process of fixing the split-brain issue (the article on your website seems specific to 3.3)?
15:00 JoeJulian @ppa
15:00 glusterbot JoeJulian: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY -- and 3.4 packages are here: http://goo.gl/u33hy
15:00 JoeJulian That's what we consider official.
15:00 recidive_ JoeJulian: great, thanks, so 3.3+ is the way to go
15:00 JoeJulian However, fixing split-brain on 3.2 is to simply delete one copy.
15:00 ndevos Brian_TS: maybe you can build the rpms without the ufo/swift subpackages? I think there is a switch for that
15:01 JoeJulian recidive_: Yes.
15:02 Brian_TS I'm completely new to Gluster so I don't know what the ufo/swift are yet...
15:02 recidive_ JoeJulian: I believe I can put this split-brain fix in a script, but was wondering if you already have one somewhere
15:02 kkeithley jinja? Where do you see that? Probably a dependency of one of the UFO/Python dependencies. Sphinx is used to build docs for UFO. (And it's all going away, slowly.)
15:02 recidive_ JoeJulian: is the process the same in 3.4?
15:03 JoeJulian /probably/ though I haven't had time to even install 3.4 yet.
15:03 Brian_TS kkeithley:  I'm trying to build the binary RPMS from source RPMs.  IT's a chicken/egg issue.
15:04 Brian_TS python-eventlet and python-sphinx10 from Source
15:04 Brian_TS the rpmbuild of gluster requires them.
15:04 Brian_TS the binary RPMs of python-eventlet and python-sphinx10 are not available for s390x architecture.
15:05 kkeithley rpmbuild --without ufo isn't in 3.3.x, just 3.4.0
15:05 ndevos Brian_TS: you could try this: fedpkg clone -a glusterfs ; cd glusterfs ; fedpkg switch-branch el6 ; fedpkg srpm ; rpmbuild --rebuild --without ufo glusterfs*.src.rpm
15:06 JoeJulian I don't suppose koji supports that arch?
15:06 64MAABLNL joined #gluster
15:06 Brian_TS I have had NO problems rpmbuilding any of the SRPMs so far.
15:06 ndevos yes it does, s390-koji build $srpm
15:06 kkeithley yeah, building SRPMs is not the hard part. ;-)
15:07 ndevos or rather: s390-koji build --scratch $srpm
15:07 sprachgenerator joined #gluster
15:09 ndevos JoeJulian, Brian_TS: s390-koji would work, but there is no epel-6 target to build against :-/
15:09 JoeJulian Ah
15:09 JoeJulian Run fedora?
15:10 kkeithley on an s390?
15:10 Brian_TS I'm running Red Hat on s390
15:10 ndevos Brian_TS: with the fedpkg commands I just gave you, you should get the packages without ufo
15:10 JoeJulian Just looking at the package builds for an s390. Looks like stuff's getting built for fedora... <shrug>
15:11 ndevos and, even in the latest el6 glusterfs-3.4-beta2 release
15:11 ndevos copy & paste & press return :)
15:11 ndevos at least on my x86_64 laptop...
15:12 Brian_TS I just built koji-1.8.0-1.el6.s390x successfully.
15:13 Brian_TS but I don't know how koji comes into play here.
15:13 JoeJulian I wonder, if there's a RHEL for s390, why there's no EPEL target in koji...
15:13 kkeithley el6-candidate IIRC
15:13 Brian_TS I'm trying to compile gluster 3.3.1 BTW
15:13 JoeJulian koji is a build system. It's used by the fedora project for building the packages for epel and, obviously, fedora.
15:14 ndevos Brian_TS: ah, no need for koji, maybe fedpkg would depend on it, but you can use plain git without that too
15:14 JoeJulian According to http://s390.koji.fedoraproject.org/koji/buildtargets there's just fedora targets.
15:14 glusterbot <http://goo.gl/PVpSm> (at s390.koji.fedoraproject.org)
15:14 Brian_TS maybe this is more a problem for the fedora EPEL folks....
15:15 ndevos Brian_TS: is there a EPEL for s390?
15:15 Brian_TS no
15:15 Brian_TS but I'm building ALL the packages successfully so far.
15:15 ndevos Brian_TS: okay, how about building 3.4.x instead? thats way easier
15:16 Brian_TS it's just the chicken/egg issue: Can't build package 1 until I have package 2 installed. Can't install package 2 until I can build package 2. Can't build package 2 because package 1 isn't installed.
15:16 Brian_TS There's only a tarball....not an SRPM....I'm barely adequate at rpmbuild, let alone compiling from scratch.
15:16 Brian_TS Unless I'm missing a SRPM somewhere.
15:17 JoeJulian @yum
15:17 glusterbot JoeJulian: The official glusterfs packages for RHEL/CentOS/SL are available here: http://goo.gl/s077x
15:18 ndevos Brian_TS: it's easy: wget http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0beta1/glusterfs-3.4.0beta1.tar.gz ; rpmbuild -ta --without ufo glusterfs-3.4.0beta1.tar.gz
15:18 glusterbot <http://goo.gl/ZwgQg> (at download.gluster.org)
15:19 JoeJulian Hell... after paying that much money, you should be able to tell IBM to build it for you. :/
15:20 Brian_TS JoeJulian:  I agree.  GPFS isn't even available on zLinux.
15:20 Brian_TS Go figure.
15:21 Brian_TS JoeJulian:  Wow, your recommendation on rpmbuild seems to be working!  I always assumed one needed a SRPM w/ spec file to build an RPM.
15:21 JoeJulian ndevos: ^
15:22 JoeJulian But yeah, if there's a spec in the tar file it'll build from that.
15:22 Brian_TS JoeJulian:  It's chugging away!
15:22 ndevos oh, good!
15:23 kkeithley btw, I just (this very minute) put 3.4.0beta2 bits on download.gluster.org
15:23 JoeJulian ctrl-c
15:23 Brian_TS Is Gluster interested in providing S390x RPMs if I am successful?
15:23 kkeithley Brian_TS: once you've finished building 3.4.0beta1 you can do it all over with beta2
15:24 ndevos that cant take long...
15:24 Brian_TS uhoh.....build failed
15:24 JoeJulian kkeithley: Where'd you put them? They're not under http://download.gluster.org/pub/gluster/glusterfs/3.4/ ?
15:24 glusterbot <http://goo.gl/fe1TG> (at download.gluster.org)
15:26 kkeithley I dunno who put them in .../3.4/3.4.0beta1/...   I've been putting them in .../qa-releases/3.4.0*
15:27 Brian_TS If anyone is interested in the failure of the s390x build, it's here:  http://fpaste.org/14312/
15:27 glusterbot Title: #14312 Fedora Project Pastebin (at fpaste.org)
15:27 * JoeJulian loves the randomness of placement that's always plagued download.gluster.org
15:27 puebele1 joined #gluster
15:27 13WAADK4A joined #gluster
15:27 daMaestro joined #gluster
15:28 semiosis @qa releases
15:28 glusterbot semiosis: The QA releases are available at http://bits.gluster.com/pub/gluster/glusterfs/ -- RPMs in the version folders and source archives for all versions under src/
15:28 semiosis for example
15:28 kkeithley well, http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0beta1 is a symlink to .../qa-releases/3.4.0beta1/
15:29 glusterbot <http://goo.gl/2UYPA> (at download.gluster.org)
15:29 kkeithley Maybe I did it after all
15:30 kkeithley So now there's a symlink for 3.4.0beta2 as well
15:30 jthorne joined #gluster
15:33 Brian_TS It's possible I ran out of HDD space during my build.  Adding space and retrying the build now.
15:36 hchiramm__ joined #gluster
15:36 isomorphic joined #gluster
15:38 JoeJulian Brian_TS: I don't suppose you've ever seen how fast that thing could generate hashes for bitcoin mining?
15:39 Brian_TS kkeithley:  The build failed again.
15:39 Brian_TS JoeJulian:  Nope....but we're getting a brand new mainframe later this year with all the hardware based encryption goodies.
15:40 Brian_TS I can tell you that the servers that I have built boot in about six seconds.
15:40 Brian_TS well, nine seconds if you count the five second wait at the kernel menu
15:40 JoeJulian Hehe
15:40 Brian_TS symlinked /usr/lib/debug/usr/sbin/glusterfsd.debug to /usr/lib/debug/usr/sbin/glusterd.debug cpio: glusterfs-3.4.0beta1/libglusterfs/src/<stdout>: Cannot stat: No such file or directory
15:46 ndevos Brian_TS: do you have redhat-rpm-config installed?
15:48 Brian_TS ndevos: yes
15:48 Brian_TS redhat-rpm-config-9.0.3-42.el6 (noarch)
15:49 ndevos hmm, that package contains macros for the debuginfo generation... so that should not be the problem
15:49 ndevos Brian_TS: can you fpaste the buildlog?
15:49 Brian_TS I did already..
15:49 Brian_TS it's up there somewhere
15:50 Brian_TS ndevos:  http://fpaste.org/14312/
15:50 glusterbot Title: #14312 Fedora Project Pastebin (at fpaste.org)
15:51 Brian_TS ndevos:  about line 1366
15:52 kkeithley I've never seen stuff like that.   /home/.../epel/*,     cpio: glusterfs-3.4.0beta1/libglusterfs/src/<stdout>: Cannot stat: No such file or directory
15:52 kkeithley something is wack
15:53 ndevos Brian_TS: I do not think that is fatal, I agree it looks weird, but the build continues after that
15:53 Brian_TS kkeithley:  With literally MINUTES of experience in compiling GlusterFS, I would agree.
15:53 Brian_TS :-)
15:53 kkeithley lol
15:54 Brian_TS I have 12G free on the build filesystem.
15:54 kkeithley that's my Keen Eye For The Obvious®
15:54 ndevos Brian_TS: I guess /home/MyHome/epel/glusterfsd.init does really not exist?
15:54 Brian_TS hehehe
15:54 Brian_TS it does not
15:54 Brian_TS unless it got cleaned up in the build.
15:55 Brian_TS any chance it's writing to /tmp?  I'm a bit short on space there....only 520 M
15:56 mohankumar joined #gluster
15:56 Brian_TS I'm rebuilding now and watching the filesystems.  /tmp is growing but not much so.
15:56 ndevos Brian_TS: those missing files are actually from the Fedora package... the tar.gz does not contain them :-/
15:57 hchiramm__ joined #gluster
15:57 Brian_TS So does that tell you something?
15:57 Brian_TS Perhaps an issue with the package?
15:58 ndevos Brian_TS: yes, you should get the files manually and place them in that directory
15:58 Brian_TS Pardon my ignorance, but what files do I need?
15:58 ndevos like: git clone git://pkgs.fedoraproject.org/glusterfs ; cd glusterfs ; git checkout el6
15:59 kkeithley wait, are you building from the tarball from bits.gluster.org?
15:59 ndevos Brian_TS: the files that are listed as missing in that fpaste, the fedora git contains these and you can use those
16:00 Brian_TS Here's what I ran:   rpmbuild -ta --without ufo glusterfs-3.4.0beta1.tar.gz
16:00 ndevos kkeithley: yeah, and it seems that the glusterfs.spec.in refers to those files
16:00 kkeithley right, that doesn't work.
16:01 ndevos no, it is broken
16:01 kkeithley untar that. do `autogen.sh; configure; make dist; cd extras/LinuxRPM; make glusterrpmswithoutufo`
16:02 JoeJulian ... It shouldn't be. Can we get that stuff checked in upstream so the tar builds?
16:02 Brian_TS kkeithley:  doing that now
16:02 kkeithley Or get http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.0beta2/EPEL.repo/epel-6/SRPMS/glusterfs-3.4.0-0.5.beta2.el6.src.rpm
16:02 glusterbot <http://goo.gl/BxkmY> (at download.gluster.org)
16:03 kkeithley install the src rpm; cd rpmbuild; rpmbuild --without ufo -bb SPECS/glusterfs.spec
16:03 ndevos Brian_TS: I suggest to use that ^ src.rpm and "rpmbuild --rebuild --without ufo ....src.rpm"
16:03 ndevos or what kkeithley said
16:04 ndevos kkeithley: but still, the 'rpmbuild -ta ...' should be made to work too I think
16:05 Brian_TS rpmbuilding now
16:06 harold_ joined #gluster
16:06 Brian_TS Oh, and I untarred the source but it failed on "make dist"
16:08 Brian_TS Here's the failure of "make dist"   http://fpaste.org/14330/
16:08 glusterbot Title: #14330 Fedora Project Pastebin (at fpaste.org)
16:08 recidive_ after upgrading from 3.2 to 3.3 following the directions, I get this when trying to start my test volume "Volume myvolume does not exist"
16:09 recidive_ never mind, forgot the path changes in 4
16:09 ricky-ticky joined #gluster
16:09 Brian_TS kkeithley:  It looks like the rpmbuild succeeded!
16:10 recidive_ ok, my vol isn't in /var/lib/glusterd/vols
16:10 recidive_ what can I do to fix it?
16:11 recidive_ just create the volume again, or copy files from old location?
16:15 recidive_ tried creating the volume again, got this "/path/to/brick-1 or a prefix of it is already part of a volume" but gluster volume list says "No volumes present in cluster" interesting
16:15 glusterbot recidive_: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
16:16 JoeJulian recidive_: I don't think the ppa moves the old /etc/glusterd stuff.
16:16 rwheeler joined #gluster
16:16 JoeJulian You should be able to mv /etc/glusterd /var/lib/glusterd
16:17 recidive_ ok, is it safe, should I copy all files?
16:18 JoeJulian It's safe.
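
For the "already part of a volume" error above, the instructions glusterbot links amount to clearing the gluster metadata left on the brick; a sketch, run against each affected brick path:

    setfattr -x trusted.glusterfs.volume-id /path/to/brick-1
    setfattr -x trusted.gfid /path/to/brick-1
    rm -rf /path/to/brick-1/.glusterfs    # the .glusterfs directory exists on 3.3+ bricks only
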
16:23 devoid joined #gluster
16:23 recidive_ ok, another dummy question, how can I restart glusterd?
16:24 recidive_ it still says my volume doesn't exist
16:24 JoeJulian service glusterd restart
16:24 dblack joined #gluster
16:24 manik joined #gluster
16:25 recidive_ no "glusterd: unrecognized service"
16:25 recidive_ "/etc/init.d/glusterfs-server restart" worked
16:25 hjmangalam joined #gluster
16:25 recidive_ alright service glusterfs-server restart
16:26 recidive_ that's the command
16:26 JoeJulian @meh
16:26 glusterbot JoeJulian: I'm not happy about it either
16:26 JoeJulian And that's why I don't use ubuntu.
16:27 JoeJulian ... because I know where everything is in Fedora/EL distros.
16:28 semiosis i think it's a debian convention (policy?) that things be called -server
16:28 JoeJulian Not saying it's wrong, just unfamiliar.
16:29 semiosis when i got involved i suggested s/glusterfs-server/glusterd/ but patrick & I decided it wasn't worth the effort
16:29 semiosis distros are complicated
16:30 JoeJulian heh, that's an understatement.
16:30 semiosis i am in upstart hell right now
16:31 JoeJulian upstart = hell
16:31 semiosis event driven init is painful
16:31 semiosis pretty sure i'm going to have to escalate to the ubuntu devs for help
16:32 semiosis every mount in fstab triggers a mounting event, and we need to block mounting glusterfs until the network is up
16:32 JoeJulian I'm not sure where I stand on systemd, but it does seem to just work.
16:32 semiosis problem is when i make a blocker task it only blocks the first glusterfs mount, the others proceed immediately, before network is up
16:33 Mo_ joined #gluster
16:33 JoeJulian D'oh!
16:34 recidive_ I think the debian/ubuntu convention is to just use the software name, like nginx (service nginx start)
16:35 kkeithley ndevos: rpmbuild -ta $tarfile will work again once we finish getting UFO out of the tree.
16:38 thomasle_ joined #gluster
16:44 Brian_TS kkeithley:  Just FYI 'rpmbuild -bb glusterfs.spec' worked great!
16:44 Brian_TS Thanks for all your help!
16:45 kkeithley yw
16:46 kkeithley that was with --without ufo, correct?
16:46 Brian_TS correct
16:47 Brian_TS It worked both ways
16:47 kkeithley really? interesting
16:47 Brian_TS rpmbuild --without ufo _AND_ rpmbuild
16:47 kkeithley well, it doesn't build openstack-swift, so it doesn't need sphinx
16:48 kkeithley was there another one that was causing heartburn?
16:48 Brian_TS the non beta SRPM package that I downloaded had all kinds of prereq packages that I had to install.
16:49 Brian_TS I couldn't get these: http://pastebin.com/seqrC8Ap  built in order to satisfy gluster.spec.
16:49 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
16:51 semiosis @later tell recidive_ there's no nginx client, but with glusterfs we have -client & -server, hence the convention
16:51 glusterbot semiosis: The operation succeeded.
17:06 devoid joined #gluster
17:06 sprachgenerator joined #gluster
17:09 devoid joined #gluster
17:13 hjmangalam joined #gluster
17:24 edong23 joined #gluster
17:41 lpabon joined #gluster
17:45 rotbeard joined #gluster
17:50 kkeithley a2: is .../api/examples/glfsxmp.c still accurate? Is there a platform where its autogen.sh works? (It doesn't work on f18)
17:51 isomorphic joined #gluster
17:53 Scott2 joined #gluster
18:08 samppah @latest
18:08 glusterbot samppah: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
18:09 rb2k joined #gluster
18:12 Brian_TS kkeithley:  are you interested in posting the IBM s390x binary RPMs that I built?
18:14 JoeJulian Will you be able to maintain keeping builds current?
18:15 Brian_TS As long as I still have my job where the mainframe is!  :-)
18:15 Brian_TS Please be advised: I am a complete novice at building.
18:15 kkeithley joined #gluster
18:15 johnmark Brian_TS: woah, cool!
18:16 johnmark Brian_TS: but yes, the question that comes to mind is the same as JoeJulian's
18:16 Brian_TS If you send it to me, I will build it.
18:17 Brian_TS But again, I've never done anything like this before.   So I might need some hand-holding...
18:17 JoeJulian As long as that commitment's there, I'd say give him a place to upload.
18:18 JoeJulian Either a place to upload or access and instructions on running createrepo.
18:19 Brian_TS I just started doing RPM Builds last month and I know very little about compiling outside of following instructions.
18:19 JoeJulian That's about all there is to it.
18:19 JoeJulian kkeithley does all the heavy lifting, making sure the spec's right and all.
18:20 Brian_TS Then I'm willing.  I've made my living off the backs of others' work.  It's about time I start giving back.
18:20 JoeJulian Though it would be good if we could figure out that circular dependency thing so we could have the complete package built.
18:21 Brian_TS That was me trying to build the EPEL packages from source....
18:21 Brian_TS I don't know if that's related or not.
18:26 kkeithley JoeJulian: circular dependency?
18:28 recidive joined #gluster
18:28 Brian_TS kkeithley:   http://pastebin.com/seqrC8Ap
18:28 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
18:29 HPS joined #gluster
18:30 kkeithley oh that. That's not a problem in 3.4.0
18:30 kkeithley you only saw that trying to build 3.3.1
18:30 Brian_TS That's correct
18:30 Brian_TS I have all the RPMs built if you want me to send them somewhere
18:31 * kkeithley isn't going to solve that
18:31 kkeithley :-)
18:31 JoeJulian Fair enough.
18:32 thomaslee joined #gluster
18:32 kkeithley Well, it can be solved by adding --without ufo to the 3.3.2 spec file.
18:34 kkeithley Since we didn't get the 3.3.2 release this week and I'm heading off into the twilight world of trains, planes, automobiles, and LinuxCon for the next week-to-nine-days; well, we'll see what happens and when. ;-)
18:35 Brian_TS Ok, after bugging you guys for all this help (thanks again!) I'm going to ask a really silly question.  Can I use GlusterFS to share a file system between two Linux servers?  Much like GFS2 or Oracle ASM or OCFS2?
18:35 kkeithley Or NFS? Yes
18:35 Brian_TS I don't want to share OUT disks, I want to share one filesystem between ten web servers.
18:36 Brian_TS Kinda like NFS, but using raw luns.
18:36 Brian_TS I don't want a "master" server (think NFS server) so to speak.  I want all ten servers to have the same LUN attached via VMware and have Gluster take care of the 'cluster stuff'
18:37 Brian_TS So I don't want TCP sharing at all....all local (and shared) LUNs.
18:37 Brian_TS my HDD storage is much faster than my network, so NFS (or any NAS) storage actually makes things slower.
18:39 kkeithley I'm not quite sure what you're asking. gluster doesn't have raw luns. It's not at all like OCFS2 or GFS. It's really closer to NFS, with a better scale out, scale up model than NFS has
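
In the gluster model each server contributes local storage as a "brick" and clients mount the aggregated volume over the network, rather than every server attaching the same LUN. A sketch with two hypothetical servers, each with a brick at /data/brick:

    gluster volume create webvol replica 2 web1:/data/brick web2:/data/brick
    gluster volume start webvol
    # on each web server:
    mount -t glusterfs web1:/webvol /var/www/shared
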
18:39 Brian_TS Hmm.....My network is my bottleneck unfortunately.
18:39 stickyboy Brian_TS: Join the club :)
18:40 Brian_TS I was hoping to attach the same 800G LUN to ten of my web servers (via VMware) and have gluster manage the read/writes to the shared LUN.
18:40 Brian_TS Currently I do this with IBM's GPFS -- and it works -- but we're trying to move to zLinux and GPFS isn't offered for zLinux.
18:49 Brian_TS http://community.gluster.org/a/linux-kernel-tuning-for-glusterfs/ is broken
18:49 hchiramm__ joined #gluster
18:49 glusterbot <http://goo.gl/URHmU> (at community.gluster.org)
18:50 Brian_TS Ok, I lied
18:50 Brian_TS didn't work twice in a row then it did.
18:50 HPS http://gluster.org/community/documentation/index.php/Getting_started_wrap_up "performance tuning link" seems to be broken.
18:50 glusterbot <http://goo.gl/s26Ff> (at gluster.org)
19:29 glusterbot New news from newglusterbugs: [Bug 959887] clang static src analysis of glusterfs <http://goo.gl/gf6Vy>
19:49 andreask joined #gluster
19:51 Airbear joined #gluster
20:06 hchiramm__ joined #gluster
20:09 tg2 joined #gluster
20:10 tg2 What is the best way to do ssd write caching with a glusterfs brick?  raid controller (cachecade), ext4 ssd journaling, or is there an option in glusterfs configs for write caching?
20:14 JoeJulian There's no built-in option, no.
20:16 RobertLaptop joined #gluster
20:19 tg2 the client-side write buffer setting
20:20 tg2 is it only used in the event that one thread is locking writes?
20:20 tg2 what scenarios can arise where one client writes a file and it is not immediately visible to another client, and how can this be remedied?
20:25 JoeJulian fuse mount?
20:25 semiosis write-behind
20:25 semiosis can be disabled
20:25 semiosis iirc
20:26 tg2 is that a client setting?
20:26 semiosis it's a volume setting.
20:27 tg2 so only on the glusterfs server
20:27 tg2 servers*
20:27 semiosis well you configure volume settings on the servers
20:28 semiosis but the results of those configs are applied to bricks & clients as appropriate
20:28 semiosis in the case of write behind, i'd expect it to affect clients
20:28 JoeJulian I wonder about attribute-timeout=0 (mount option)
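
A sketch of the two knobs just mentioned, assuming a volume "myvol" (performance.write-behind is a volume option; attribute-timeout and entry-timeout are mount options passed through to FUSE):

    gluster volume set myvol performance.write-behind off
    mount -t glusterfs -o attribute-timeout=0,entry-timeout=0 server1:/myvol /mnt/gluster

Disabling write-behind and attribute caching trades throughput for stricter read-after-write visibility between clients.
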
20:31 brunoleon joined #gluster
20:39 devoid joined #gluster
20:51 Scott2 Hey All, anyone seen errors like "Non Blocking entrylks failed for" in the logs (3.3.1)? Am seeing a ton of them, as well as split-brain issues after adding new bricks to a volume.
20:52 JoeJulian Were the new bricks not empty?
20:53 Scott2 they were empty
20:53 Scott2 I stopped a rebalance after about 2 days
20:54 tg2 what underlying fs
20:54 Scott2 ext4
20:54 tg2 what kernel version
20:54 Scott2 its a 3.2.23 grsec patched kernel
20:55 semiosis tg2: distro version is a better indicator, since the ext4 code has now been ported to many kernels
20:55 semiosis although that one might be safe
20:55 tg2 I think that kernel has it
20:55 semiosis hmm
20:55 semiosis mainline kernel introduced the problematic ext4 code with version 3.3.0
20:55 tg2 wasn't it before that?
20:55 tg2 the ext4 was backported though too
20:56 Scott2 it has run pretty smoothly for about 6 months, but after adding the bricks cpu usage has spiked ( for the processes on the old bricks )
20:57 Scott2 seems the "best" way to fix has been to move files off to tmp, remove the dirs and copy back
20:57 tg2 if you can do that it might be a good way to rule out some issues
20:58 Scott2 yeah, it seems to fix it, just going to take a while...
20:58 tg2 maybe ask in dev
20:59 tg2 any other errors other than that?
20:59 Scott2 just wasn't sure if "Non Blocking entrylks failed for" was a known state with an easier fix that you guys knew about. Thanks guys, at least I ruled out a well-known fix that I somehow missed with 2 days of googling :)
21:00 Scott2 yeah, also some No such file or directory
21:00 Scott2 a decent amount of Unable to self-heal contents of
21:00 tg2 is it just a distributed share or distributed/replicated?
21:01 Scott2 distributed/replicated
21:05 Scott2 For the most part the clients can still view everything (though rather slowly). I have seen a couple of cases where it would list directories twice when doing an ls
21:25 Airbear joined #gluster
21:34 lbalbalba joined #gluster
21:34 tg2 maybe the directory identifiers got mixed up
21:35 tg2 i've seen double listings only when a new node was added that had existing files on it
21:35 tg2 and it hadn't been in the volume yet
21:40 lbalbalba joined #gluster
21:55 isomorphic joined #gluster
21:56 ctria joined #gluster
22:10 leaky joined #gluster
22:10 leaky hello
22:10 glusterbot leaky: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
22:11 leaky hello, i'm unable to set permissions with setfattr from terminal on my gluster mount, even though it is mounted with -o acl - i can use setfattr on the volume backend filesystem though...
22:18 JoeJulian What's your setfattr command?
22:26 leaky i just tried setfattr -n security.test -v test2 test.txt
22:26 leaky on a random test file,
22:26 leaky as per an old samba wiki i found
22:30 JoeJulian Odd... I can set trusted.foo but not security.foo...
22:31 badone joined #gluster
22:32 leaky i wouldn't normally mind, but i'm using this gluster mount with samba xattr,
22:32 leaky trying to drag my colleagues down the linux route, show them gluster
22:33 JoeJulian Hmm, I see that security.* except security.selinux is specifically blocked in fuse-bridge.c
22:34 leaky interesting
22:34 leaky i'm not running selinux on this box, ubuntu 12.04
22:34 leaky which has apparmor? somewhere...
22:35 JoeJulian Also blocked are system.posix_acl_{access,default}
22:37 JoeJulian Looks like you should file a bug report requesting that security.NTACL be allowed.
22:37 glusterbot http://goo.gl/UUuCq
22:37 JoeJulian I can't think of any reason why it shouldn't be.
22:37 leaky is there a way i can enable it for the time being?
22:37 JoeJulian In fact, the only reason I can think of for security.capabilities being blocked is so you don't inadvertently block gluster from accessing the file.
22:39 leaky i'm only using this machine for samba shares to that directory, do you think i can get away without security.NTACL?
22:39 JoeJulian For that matter, it would be trivial to translate security.* to trusted.glusterfs.security.*
22:40 JoeJulian iirc, there's some way of storing samba acls in a user_xattr...
22:41 leaky you would think so,
22:41 leaky at the very least i can store them in a legacy database elsewhere
22:42 leaky seems odd though...
22:44 JoeJulian Hmm, nope. posix acls and you /can/ add the ntacl. posix acls should work though.
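
A sketch of the POSIX-ACL route being suggested, assuming a volume "myvol" re-exported through Samba (the user name is hypothetical):

    mount -t glusterfs -o acl server1:/myvol /mnt/gluster
    setfacl -m u:webuser:rwx /mnt/gluster/share
    getfacl /mnt/gluster/share
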
22:46 leaky ok
22:51 leaky i've got the cluster virtualised, will try a couple now
22:52 yinyin joined #gluster
22:57 devoid joined #gluster
23:17 meunierd1 joined #gluster
23:44 leaky http://gluster.helpshiftcrm.com/q/is-it-not-possible-to-set-security-xattrs-on-gluster-mounts/
23:44 glusterbot <http://goo.gl/geg2q> (at gluster.helpshiftcrm.com)
23:58 hjmangalam1 joined #gluster
