
IRC log for #gluster, 2014-01-16


All times shown according to UTC.

Time Nick Message
00:06 KORG|2 joined #gluster
00:11 _pol joined #gluster
00:13 complexmind joined #gluster
00:16 complexmind joined #gluster
00:17 _pol joined #gluster
00:18 andreask joined #gluster
00:21 DanMons joined #gluster
00:34 complexmind joined #gluster
00:38 diegows joined #gluster
00:44 bala joined #gluster
00:46 hflai joined #gluster
00:47 _pol joined #gluster
00:56 firegs joined #gluster
01:06 jporterfield joined #gluster
01:07 dbruhn joined #gluster
01:11 harish joined #gluster
01:12 robo joined #gluster
01:14 raghug joined #gluster
01:19 plarsen joined #gluster
01:26 _pol joined #gluster
01:33 shyam joined #gluster
01:34 _pol_ joined #gluster
01:49 harish joined #gluster
02:10 aknapp joined #gluster
02:17 theron joined #gluster
02:17 raghug joined #gluster
02:26 harish joined #gluster
02:28 RameshN joined #gluster
02:31 parad1se joined #gluster
02:36 gmcwhistler joined #gluster
02:37 DV joined #gluster
02:39 mattappe_ joined #gluster
02:41 gmcwhistler joined #gluster
02:50 diegows joined #gluster
02:56 kshlm joined #gluster
03:08 atrius joined #gluster
03:08 mattappe_ joined #gluster
03:09 RameshN joined #gluster
03:10 bharata-rao joined #gluster
03:16 firegs anyone alive?
03:16 mattappe_ joined #gluster
03:31 rastar joined #gluster
03:36 mattappe_ joined #gluster
03:40 theron joined #gluster
03:49 itisravi joined #gluster
03:49 glusterbot New news from newglusterbugs: [Bug 1047416] Feature request (CLI): Add options to the CLI that let the user control the reset of stats <https://bugzilla.redhat.com/show_bug.cgi?id=1047416>
03:54 shyam joined #gluster
03:55 shubhendu joined #gluster
03:55 complexmind_ joined #gluster
03:58 atrius joined #gluster
04:01 saurabh joined #gluster
04:04 kanagaraj joined #gluster
04:08 wushudoin joined #gluster
04:08 RameshN joined #gluster
04:11 kanagaraj_ joined #gluster
04:19 glusterbot New news from newglusterbugs: [Bug 969461] RFE: Quota fixes <https://bugzilla.redhat.com/show_bug.cgi?id=969461>
04:21 hagarth joined #gluster
04:26 shylesh joined #gluster
04:28 DV joined #gluster
04:34 semiosis @later tell aixsyd glusterfs 3.5.0beta1 packages for debian wheezy are now published.  http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.0beta1/Debian/
04:34 glusterbot semiosis: The operation succeeded.
04:40 jporterfield joined #gluster
04:44 rjoseph1 joined #gluster
04:44 ppai joined #gluster
04:46 bala joined #gluster
04:48 MiteshShah joined #gluster
04:54 anands joined #gluster
04:56 aravindavk joined #gluster
05:00 fyxim_ joined #gluster
05:01 kdhananjay joined #gluster
05:05 psharma joined #gluster
05:07 mattapperson joined #gluster
05:10 DV joined #gluster
05:14 lawrie joined #gluster
05:17 lalatenduM joined #gluster
05:18 MiteshShah joined #gluster
05:18 ndarshan joined #gluster
05:19 prasanth joined #gluster
05:23 mohankumar joined #gluster
05:23 KORG|2 joined #gluster
05:26 nshaikh joined #gluster
05:31 KORG joined #gluster
05:31 atrius joined #gluster
05:34 kanagaraj joined #gluster
05:37 ekool joined #gluster
05:38 ekool question: Can you create a gluster "brick" on an existing and in use filesystem? Ie / (this Howto doesn't mention using a block device specifically for it, but it didnt work) and every other howto says it needs one. http://www.howtoforge.com/creating-an-nfs-like-standalone-storage-server-with-glusterfs-3.2.x-on-ubuntu-12.10
05:38 glusterbot Title: Creating An NFS-Like Standalone Storage Server With GlusterFS 3.2.x On Ubuntu 12.10 | HowtoForge - Linux Howtos and Tutorials (at www.howtoforge.com)
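A brick is just a directory on an existing filesystem, so a dedicated block device is not strictly required. A minimal sketch, with hostname and paths as placeholders; on 3.4 and later you may have to append force when the brick directory lives on the root filesystem:

    mkdir -p /srv/gluster/brick1
    gluster volume create testvol server1:/srv/gluster/brick1
    gluster volume start testvol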
05:40 anands joined #gluster
05:44 raghu joined #gluster
05:45 RameshN joined #gluster
05:45 dusmant joined #gluster
05:50 rastar joined #gluster
05:50 anands1 joined #gluster
05:52 shyam joined #gluster
05:58 satheesh1 joined #gluster
06:02 benjamin__ joined #gluster
06:04 jporterfield joined #gluster
06:07 ababu joined #gluster
06:09 jporterfield joined #gluster
06:19 ababu joined #gluster
06:23 davinder joined #gluster
06:24 eclectic joined #gluster
06:24 Peanut joined #gluster
06:26 spandit joined #gluster
06:29 ndarshan joined #gluster
06:32 dusmant joined #gluster
06:35 mikedep333 joined #gluster
06:36 shyam joined #gluster
06:39 Philambdo joined #gluster
06:41 mattappe_ joined #gluster
06:45 micu joined #gluster
06:48 ndarshan joined #gluster
06:50 dusmant joined #gluster
06:51 vimal joined #gluster
06:55 satheesh1 joined #gluster
06:56 ppai joined #gluster
07:04 ndarshan joined #gluster
07:20 _pol joined #gluster
07:22 jtux joined #gluster
07:32 tryggvil joined #gluster
07:37 samppah https://bitbucket.org/nikratio/s3ql/overview FUSE filesystem that uses OpenStack Swift as storage
07:37 glusterbot Title: nikratio / S3QL Bitbucket (at bitbucket.org)
07:38 samppah may be useful for some GlusterFS users as well :)
07:41 ngoswami joined #gluster
07:44 CheRi joined #gluster
07:44 ekuric joined #gluster
07:44 mattappe_ joined #gluster
07:48 ctria joined #gluster
07:57 ProT-0-TypE joined #gluster
08:00 satheesh2 joined #gluster
08:06 anands joined #gluster
08:07 eseyman joined #gluster
08:26 hagarth joined #gluster
08:26 Rocky_ joined #gluster
08:28 keytab joined #gluster
08:29 klaxa|work joined #gluster
08:29 blook joined #gluster
08:44 benjamin__ joined #gluster
08:45 andreask joined #gluster
08:46 inodb joined #gluster
08:49 satheesh1 joined #gluster
08:50 klaxa|work just yesterday i enabled profiling on our production systems and i'm seeing high latency for all write calls (90%+). here are gluster volume info and gluster volume profile $vol info: https://gist.github.com/anonymous/c589aca90960d5ef7c4f
08:50 glusterbot Title: gist:c589aca90960d5ef7c4f (at gist.github.com)
08:51 klaxa|work the storage is mostly used for virtual machines, the slow write performance is very noticeable on them (high CPU wait time, just like on the host, nfs mount from virtual machine on physical client blocks a lot)
08:52 hagarth joined #gluster
08:55 satheesh1 joined #gluster
08:56 samppah klaxa|work: what hypervisor you are using?
08:57 samppah and what gluster version?
08:57 klaxa|work gluster 3.3.2, libvirt with kvm
08:57 samppah i need to leave for lunch and datacenter duties but i'll check this later
08:57 samppah klaxa|work: is it possible to upgrade to 3.4.x?
09:00 klaxa|work satheesh1: not really, for three reasons: a) we haven't tested it yet, b) according to the docs, rdma support is not fully stable yet c) we would have to update our libc version
09:00 klaxa|work thanks for looking into this, if you need more information i'll try to provide as much as i can
09:03 satheesh1 joined #gluster
09:05 klaxa|work (10:00:10) klaxa|work: satheesh1: not really, for three reasons: a) we haven't tested it yet, b) according to the docs, rdma support is not fully stable yet c) we would have to update our libc version
09:05 klaxa|work (10:00:47) klaxa|work: thanks for looking into this, if you need more information i'll try to provide as much as i can
09:08 andreask joined #gluster
09:10 satheesh2 joined #gluster
09:12 andreask joined #gluster
09:15 satheesh3 joined #gluster
09:17 jbrooks joined #gluster
09:22 _pol joined #gluster
09:22 glusterbot New news from newglusterbugs: [Bug 1005526] All null pending matrix <https://bugzilla.redhat.com/show_bug.cgi?id=1005526>
09:22 mgebbe_ joined #gluster
09:33 satheesh2 joined #gluster
09:33 ricky-ti1 joined #gluster
09:35 bala1 joined #gluster
09:38 glusterbot New news from resolvedglusterbugs: [Bug 811250] self heal - dirty afr flags after successfull stat: the empty file two equally dirty servers case <https://bugzilla.redhat.com/show_bug.cgi?id=811250>
09:39 ngoswami joined #gluster
09:47 mattappe_ joined #gluster
09:48 Varun` joined #gluster
09:48 Varun` Hello Everyone
09:54 shubhendu joined #gluster
09:55 dusmant joined #gluster
09:55 ndarshan joined #gluster
09:55 hagarth Varun`: hello
09:59 satheesh2 joined #gluster
10:00 satheesh3 joined #gluster
10:02 Varun` Hey
10:03 harish joined #gluster
10:09 glusterbot New news from resolvedglusterbugs: [Bug 922292] writes fail with invalid argument <https://bugzilla.redhat.com/show_bug.cgi?id=922292>
10:10 satheesh1 joined #gluster
10:10 fidevo joined #gluster
10:11 DV joined #gluster
10:17 rastar joined #gluster
10:21 ells joined #gluster
10:23 glusterbot New news from newglusterbugs: [Bug 1041109] structure needs cleaning <https://bugzilla.redhat.com/show_bug.cgi?id=1041109> || [Bug 1054133] Slow write speed when using small blocksize <https://bugzilla.redhat.com/show_bug.cgi?id=1054133>
10:26 hagarth joined #gluster
10:28 Guest98067 joined #gluster
10:29 DV__ joined #gluster
10:40 MiteshShah joined #gluster
10:40 anands joined #gluster
10:49 Philambdo joined #gluster
10:50 mattappe_ joined #gluster
10:52 harish joined #gluster
10:52 muhh joined #gluster
10:52 diegows joined #gluster
10:54 geewiz joined #gluster
10:56 kshlm joined #gluster
11:02 blook joined #gluster
11:04 ccha4 hello, I add-bricks on replica 2 to replica 3
11:04 ccha4 what next? I need to rebalance ?
11:06 RameshN joined #gluster
11:08 ccha4 hum rebalance is only for distribute volumes and not replicate
11:11 ira joined #gluster
11:11 ccha4 or I need to trigger heal
11:13 20WAAV2N6 joined #gluster
11:13 dusmant joined #gluster
11:13 purpleidea ccha4: your question isn't clear, if you're asking about what to do after adding bricks to go from r=2 to r=3, then yes, a rebalance is fine
11:13 ndarshan joined #gluster
11:14 purpleidea it will re-distribute the distribute part of the volume...
11:14 ccha4 purpleidea: yes replica 2 to replica 3
11:14 shubhendu joined #gluster
11:15 bala joined #gluster
11:15 ccha4 # gluster volume rebalance DATA start
11:15 ccha4 Volume DATA is not a distribute volume or contains only 1 brick.
11:16 ccha4 Not performing rebalance
11:17 purpleidea you don't need to rebalance if there's no distribute
11:17 ccha4 oh, self-heal do the job ?
11:17 purpleidea ?
11:18 ccha4 how data will be on new added replicate server ?
11:18 purpleidea ccha4: oh, i see your question
11:18 davinder2 joined #gluster
11:20 ccha4 I have a cluster of 2 replicated servers. I added a new replicated server with gluster volume add-brick <my VOL> replica 3 <new server brick>
11:20 ccha4 then what should I do next step ?
11:21 purpleidea ccha4: what version are you using?
11:21 ccha4 3.3.2
11:22 purpleidea ccha4: i would have imagined this would happen automatically with 3.4, but i don't really know for sure. i haven't done this operation with large amounts of data
11:22 purpleidea is the glusterfsd process doing lots of work?
11:24 ccha4 glusterfsd has write io
11:24 ccha4 there are a few files on the new replicated server
11:26 NeatBasis_ joined #gluster
11:26 ccha4 what triggers the copy of data to the newly added replicated server ? selfheal ? because self heal status got a lot of entries
11:27 purpleidea it makes sense that the selfheal would do this... it triggers automatically in 3.4 maybe 3.3 but i don't remember
11:27 purpleidea it might be good to test this a bunch, and maybe ask on the mailing list for clarification. sorry i don't have more information
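For reference, the usual sequence for growing a pure replica from 2 to 3 looks roughly like this, using the volume name DATA from above and a placeholder brick path. The self-heal daemon should eventually copy the data on its own, but a full heal forces it and heal info shows the backlog draining:

    gluster volume add-brick DATA replica 3 server3:/export/brick1
    gluster volume heal DATA full
    gluster volume heal DATA info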
11:28 ccha4 it is possible to purge the self-heal (healed,failed,...) lists ?
11:37 ndarshan joined #gluster
11:37 rjoseph1 joined #gluster
11:38 saurabh joined #gluster
11:40 rjoseph2 joined #gluster
11:45 rastar joined #gluster
11:47 calum_ joined #gluster
11:52 edward2 joined #gluster
11:58 zwu joined #gluster
12:03 dusmant joined #gluster
12:04 d-fence joined #gluster
12:05 kkeithley1 joined #gluster
12:06 Rocky_ left #gluster
12:08 itisravi joined #gluster
12:17 CheRi joined #gluster
12:18 kshlm joined #gluster
12:19 ells joined #gluster
12:23 ppai joined #gluster
12:28 dkorzhevin joined #gluster
12:29 psyl0n joined #gluster
12:30 DV__ joined #gluster
12:33 klaxa|work left #gluster
12:39 Rydekull joined #gluster
12:43 blook joined #gluster
12:50 rjoseph1 joined #gluster
12:53 glusterbot New news from newglusterbugs: [Bug 1054199] gfid-acces not functional during creates under sub-directories. <https://bugzilla.redhat.com/show_bug.cgi?id=1054199>
12:53 ujjain1 joined #gluster
12:54 ujjain1 what's the glusterfs-server called these days? it changed since 3.2? I use centos with normal+epel repo.
12:59 ujjain1 gluster changed a lot since version 3.2?
13:02 CheRi joined #gluster
13:06 kkeithley_ There's a lot of new features to be sure, but the main glusterfs-server is still glusterfsd. Clients — viz. the fuse bridge and the NFSv3 server — are still glusterfs. The management agent is glusterd, and the CLI is gluster; just like they were in 3.2.
13:07 mattapperson joined #gluster
13:15 mattappe_ joined #gluster
13:17 mattapp__ joined #gluster
13:18 jurrien_ joined #gluster
13:21 mattappe_ joined #gluster
13:22 theron joined #gluster
13:26 ells joined #gluster
13:26 benjamin__ joined #gluster
13:27 mattappe_ joined #gluster
13:30 mohankumar joined #gluster
13:32 rastar joined #gluster
13:46 aixsyd semiosis: thx broseph :P
13:49 lalatenduM joined #gluster
13:54 jtickle joined #gluster
13:55 jtickle hey folks, is anyone else here beta testing RHEL7?
13:59 tryggvil joined #gluster
14:01 dbruhn joined #gluster
14:02 plarsen joined #gluster
14:03 bennyturns joined #gluster
14:04 anands joined #gluster
14:07 japuzzo joined #gluster
14:14 theron joined #gluster
14:15 blook joined #gluster
14:18 ujjain1 I don't have a gluster executable, which package do I miss?
14:19 mattappe_ joined #gluster
14:21 ujjain1 http://www.howtoforge.com/high-availability-storage-with-glusterfs-3.2.x-on-centos-6.3-automatic-file-replication-mirror-across-two-storage-servers - these instructions seem outdated, there is no package glusterfs-server, there is no glusterd service, there is no gluster binary, not in epel.
14:21 glusterbot Title: High-Availability Storage With GlusterFS 3.2.x On CentOS 6.3 - Automatic File Replication (Mirror) Across Two Storage Servers | HowtoForge - Linux Howtos and Tutorials (at www.howtoforge.com)
14:22 boomertsfx get it straight from gluster's site...
14:24 ujjain1 thanks, http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/EPEL.repo/glusterfs-epel.repo worked :)
14:25 kkeithley_ 3.4.0? Why didn't you get 3.4.2?
14:26 r0b joined #gluster
14:26 morsik or just LATEST.
14:26 kkeithley_ Same thing.
14:26 harish joined #gluster
14:27 morsik well... LATEST is latest. not 3.4.0. now it's the same, but when 3.4.3 will be released - it won't be the same :P
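For anyone following along, the whole setup on CentOS boils down to something like the following; package names are the community ones and the LATEST symlink points at the newest stable release (exact repo path may differ slightly between releases):

    wget -O /etc/yum.repos.d/glusterfs-epel.repo \
        http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
    yum install glusterfs-server glusterfs-fuse
    service glusterd start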
14:28 kkeithley_ And yes, there are no glusterfs packages in EPEL because Fedora/EPEL policy is that EPEL must not have packages that conflict with what's in RHEL (and thus CentOS). Since glusterfs-client was added in RHEL (and CentOS) in 6.5, glusterfs was retired from EPEL.
14:28 robo joined #gluster
14:28 kkeithley_ That's then. this is now.
14:28 kkeithley_ But if you've got any good stock tips, I'm all ears
14:28 morsik yeah... only client ;f
14:29 morsik and centos now did conflicts with our installation because of that :D
14:29 morsik because we compiled RHS sources with server
14:29 morsik and centos has missing packages (so we've got dependency problems)
14:29 mattapperson joined #gluster
14:31 kkeithley_ Really? What's missing? I haven't had any problems building or installing gluster community bits on CentOS.
14:33 morsik kkeithley_: server part is missing :>
14:33 ira joined #gluster
14:33 morsik and if we have georeplication for example - upgrade didn't work because the centos versions were newer than what we had
14:34 ira joined #gluster
14:35 ujjain1 mount.glusterfs centos01:/testvol /mnt/glusterfs < what's wrong with this mount?
14:36 ujjain1 my gluster was updated to 3.4.2, even though I used the above repo link.
14:36 NeatBasis joined #gluster
14:36 kkeithley_ It's magick. ;-)
14:37 ujjain1 oh, I thought the mountpoint existed, but it didn't, the error message was not clear at all
14:37 ujjain1 works after creating mountpoint
14:38 ujjain1 df -h doesn't show it mounted, it just returns an empty line after typing mount.glusterfs centos01:/testvol /mnt/glusterfs
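When mount.glusterfs returns silently but nothing shows up in df, two quick checks are the kernel mount table and the FUSE client log; the log file name is derived from the mount point, so for /mnt/glusterfs it would be roughly:

    mount | grep glusterfs
    tail -n 50 /var/log/glusterfs/mnt-glusterfs.log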
14:39 kkeithley_ anyway, you can't get the glusterfs-server RPMs in RHEL or CentOS. Those would be the RHS -server bits and they aren't available. Use the community bits from download.gluster.org. Make sure you uninstall the RHS -client bits first.
14:39 ccha4 about this bug https://bugzilla.redhat.com/show_bug.cgi?id=843003 , there is nothing to do and just need to wait to try the command later ?
14:39 glusterbot Bug 843003: high, urgent, ---, kaushal, CLOSED CURRENTRELEASE, call_bail of a frame in glusterd might lead to stale locks in the cluster
14:39 mattappe_ joined #gluster
14:41 mattap___ joined #gluster
14:41 ujjain1 maybe glusterfs-client 3.2.7-3ubuntu2 is not compatible with glusterfs 3.4.2 (centos/official)?
14:42 boomertsfx you should probably match the versions
14:42 kkeithley_ official. 3.2.x is !compatible with 3.3.x and 3.4.x
14:43 ctria joined #gluster
14:45 mattapperson joined #gluster
14:47 mattapp__ joined #gluster
14:50 kshlm joined #gluster
14:51 ujjain1 add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.4 < this doesn't seem to be working though, Cannot add PPA: 'ppa:semiosis/ubuntu-glusterfs-3.4'.
14:51 ujjain1 / Please check that the PPA name or format is correct.
14:53 ndevos @ppa
14:53 glusterbot ndevos: The official glusterfs packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.4 stable: http://goo.gl/u33hy -- 3.5 QA: http://goo.gl/Odj95k
14:54 ujjain1 Yeah, that's where I get the error.
14:54 ujjain1 To install, add the PPA using command "add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.4" ... < this command gives the error above
14:54 johnmark ujjain1: hrm...
14:54 JMWbot johnmark: @3 purpleidea reminded you to: thank purpleidea for an awesome JMWbot (please report any bugs) [2751218 sec(s) ago]
14:54 JMWbot johnmark: @5 purpleidea reminded you to: remind purpleidea to implement a @harass action for JMWbot  [2679982 sec(s) ago]
14:54 JMWbot johnmark: @6 purpleidea reminded you to: get semiosis article updated from irc.gnu.org to freenode [2584512 sec(s) ago]
14:54 JMWbot johnmark: @8 purpleidea reminded you to: git.gluster.org does not have a valid https certificate [655819 sec(s) ago]
14:54 JMWbot johnmark: Use: JMWbot: @done <id> to set task as done.
14:54 * johnmark checks on his ubuntu server
14:55 johnmark ujjain1: I just added it with no problem: sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.4
14:56 semiosis i'm here for a few minutes
14:56 semiosis whats the problem?
14:56 ujjain1 ahhhh, maybe it´s a https thing!
14:56 johnmark ujjain1: what ubuntu version are you using?
14:56 ujjain1 i am behind triple-megadruple firewalls
14:56 johnmark ujjain1: I have no idea :(
14:56 ujjain1 saucy, 13.04
14:56 johnmark aha... ok
14:56 johnmark ujjain1: hmm should work. I'd try another PPA just to make sure that's the issue
14:57 gmcwhistler joined #gluster
14:57 johnmark ujjain1: also, you were root, right?
14:57 ujjain1 Yes.
14:57 ujjain1 I am root.
14:57 johnmark ok
14:58 ujjain1 all PPAs have the issue for me, not just this one.
15:00 semiosis ujjain1: looks like add-apt-repository is broken, although i've never seen that happen before.  maybe try reinstalling the software-properties-common package which provides it?
15:01 ujjain1 well, I need to use a proxy, I have configured http_proxy and apt to use a proxy, but maybe somewhere it still tries to make a connection without the proxy,
15:01 monotek joined #gluster
15:02 semiosis ujjain1: you can skip add-apt-repository and do the work manually.  you need to create a .list file in /etc/apt/sources.list.d and also add the signing key
15:02 semiosis you can find help on how to do this on the PPA page, https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.4, by expanding the "Technical details about this PPA" section
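Doing it by hand also sidesteps the Launchpad lookup that add-apt-repository performs, which may be what the proxy is blocking. A sketch for 12.04 (precise); substitute your release codename, and take the real signing key ID from the PPA page, the value below is only a placeholder:

    echo "deb http://ppa.launchpad.net/semiosis/ubuntu-glusterfs-3.4/ubuntu precise main" \
        > /etc/apt/sources.list.d/semiosis-glusterfs-3.4.list
    apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <SIGNING_KEY_ID>
    apt-get update && apt-get install glusterfs-client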
15:03 lpabon joined #gluster
15:03 mattappe_ joined #gluster
15:03 NeatBasis joined #gluster
15:03 sroy_ joined #gluster
15:04 gmcwhistler joined #gluster
15:05 diegows joined #gluster
15:05 semiosis gotta run, bbl
15:05 gmcwhistler joined #gluster
15:06 aixsyd JoeJulian: for your VM File Server - what fstype do you use? ext3/4? xfs?
15:08 mattappe_ joined #gluster
15:09 monotek Hello gluster Community,
15:09 monotek i'm trying to move my OTRS filesystem from drbd to glusterfs on Ubuntu 12.04 Server using gluster 3.4.2 via the deb files of the semiosis ppa.
15:09 monotek I already run my samba on glusterfs which runs fine.
15:09 monotek Now i just rsynced all OTRS files to a newly created gluster volume.
15:09 monotek All files are user/group otrs.
15:09 monotek Webserver (which is running as otrs user too) is restarting without problems but when i try to access the otrs site i get a 500 because of permission problems.
15:09 monotek Here is the Gluster client log: http://paste.ubuntu.com/6762446/
15:09 monotek Any hints?
15:09 monotek Cheers from Dresden
15:09 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
15:09 monotek André
15:10 dbruhn joined #gluster
15:10 ujjain1 it works!! :)
15:11 bugs_ joined #gluster
15:11 shyam joined #gluster
15:14 dbruhn 3.5 File Snapshot, anyone used it yet?
15:14 aixsyd newp
15:15 dbruhn How about brick failure detection?
15:15 dbruhn lol
15:17 aixsyd are these all things now? i should look at the release notes..
15:17 satheesh2 joined #gluster
15:17 dbruhn 3.5 marquee features
15:17 dbruhn File Snapshot
15:17 dbruhn Quota Scalability
15:17 dbruhn Brick Failure Detection
15:17 dbruhn On-wire Compression and Decompression
15:17 dbruhn Disk Encryption
15:19 monotek forgot server log: http://paste.ubuntu.com/6762512/
15:19 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
15:20 dbruhn monotek, problems?
15:21 monotek yes, posted question some lines above ;-)
15:21 monotek 16:09
15:21 dbruhn Ahh sorry just came online a few min ago, looks like you are having a permissions issue from that log you just posted.
15:22 monotek yes, seems so. problem is i dont understand where....
15:22 ctria joined #gluster
15:22 dbruhn I'm not super familiar with app armor but is it inhibiting the systems ability to write the extended attributes?
15:22 monotek i can copy the files as user otrs to the gfs volume but apache running as user otrs cant write?
15:23 monotek ok, thanks :-) i'll have a look at this....
15:23 dbruhn I'm assuming the log you posted is the client side log?
15:23 monotek samba is running fine with non root users by the way....
15:24 monotek Here is the Gluster client log: http://paste.ubuntu.com/6762446/
15:24 monotek Her is the Gluster server log: http://paste.ubuntu.com/6762512/
15:24 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
15:24 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
15:25 dbruhn The permissions it seems to be failing on are the extended attributes, from the logs.
15:26 dbruhn You see it in both logs
15:27 dbruhn gluster uses the extended attributes as part of its system.
15:29 dbruhn I am assuming you are using ubuntu
15:29 monotek my vm images and samba files in 2 different gluster volumes are running fine. no differences in volume config (same creation process). filesystem is ext4. already tried xfs but that didnt work either....
15:29 atrius joined #gluster
15:30 monotek yes ubuntu 12.04. glusterfs 3.4.2 / semiosis ppa
15:32 satheesh1 joined #gluster
15:35 jag3773 joined #gluster
15:35 dbruhn So the mount is working fine for everything except for access from apache?
15:36 theron joined #gluster
15:38 monotek read / write as root works fine...
15:40 monotek also logged in as otrs user, read/write isnt a problem....
15:40 dbruhn for giggles, have you tried changing apache to run under root to see if the problem follows?
15:41 monotek doesnt work, because of:
15:41 monotek Syntax error on line 144 of /etc/apache2/apache2.conf:
15:41 monotek Error:\tApache has not been designed to serve pages while\n\trunning as root.  There are known race conditions that\n\twill allow any local user to read any file on the system.\n\tIf you still desire to serve pages as root then\n\tadd -DBIG_SECURITY_HOLE to the CFLAGS env variable\n\tand then rebuild the server.\n\tIt is strongly suggested that you instead modify the User\n\tdirective in your httpd.conf file to list a non-root\n\tuser.\n
15:41 monotek Action 'configtest' failed.
15:41 monotek The Apache error log may have more information.
15:41 monotek ...fail!
15:42 rwheeler joined #gluster
15:44 sghosh joined #gluster
15:47 monotek hmmm... also did not find any apparmor related logs on client or server :-(
15:48 jobewan joined #gluster
15:49 dbruhn I honestly am lost when it comes to app armor, all my stuff is RHEL, so we have SELinux on this side.
15:50 monotek from the ubuntu docs: "IMPORTANT: If you do not have any 'audit' entries in  /var/log/kern.log at the time the application had a problem, then this  is not an apparmor bug.  Please see DebuggingProcedures for more information on filing a bug. "
15:50 monotek should be ok...
15:51 tryggvil joined #gluster
15:53 tdasilva joined #gluster
15:59 MrNaviPacho joined #gluster
15:59 mattappe_ joined #gluster
16:06 anands joined #gluster
16:06 sticky_afk joined #gluster
16:07 stickyboy joined #gluster
16:11 mattapperson joined #gluster
16:12 raghug joined #gluster
16:12 raghug bfoster: ping
16:12 bfoster raghug: pong
16:14 raghug bfoster: what is the default behaviour of negative entry caching in fuse kernel module? If a lookup fails with ENOENT, does it cache it?
16:14 aixsyd dbruhn: got a question about IOPS - is it normal for gluster to have VERY LOW IOPS?
16:14 raghug bfoster: I am seeing posix compliance tests failing for link
16:15 bfoster raghug: I think that is the case for all fs'
16:15 dbruhn aixsyd, in what context?
16:15 aixsyd just normal file-copy
16:15 aixsyd im getting about 100 IOPS
16:15 aixsyd o.O
16:15 dbruhn what do you get when writing directly to the disk?
16:16 dbruhn and I am assuming this is from one of your client side machines on the bonded 1GB ports?
16:16 aixsyd yessir
16:16 aixsyd havent tried to test IOPS directly from the gluster machines
16:18 dbruhn what is your storage on the systems?
16:18 ctria joined #gluster
16:18 dbruhn such as raid config, disk info, etc.
16:19 aixsyd 4x 2TB in RAID10
16:19 zerick joined #gluster
16:19 aixsyd consumer-level drives, 7200rpm
16:19 dbruhn is your 100 i/o's read, write. or?
16:21 aixsyd about the same on both
16:22 dbruhn Well in my personal experience, enterprise grade 7200 rpm drives are usually good for about 70i/o's on random read per disk, until they are about 70/80% full
16:23 aixsyd hm - so then im actually okay in that regard
16:23 dbruhn Gluster really is just a product of what you build it on top of
16:23 aixsyd im used to testing IOPs on windows - how do you get that metric on linux?
16:24 dbruhn i've used iostat in the past
16:24 dbruhn it's been more than a few years since I've had to do i/o testing though
16:26 anands joined #gluster
16:27 TonySplitBrain joined #gluster
16:28 bala joined #gluster
16:28 _pol joined #gluster
16:30 MrNaviPacho left #gluster
16:31 bala1 joined #gluster
16:32 ndk joined #gluster
16:35 aixsyd dbruhn: gonna try bonnie++
16:36 semiosis aixsyd: get the message about 3.5.0beta1 debs?
16:36 aixsyd sure did - get my message?
16:37 aixsyd [08:46] <aixsyd> semiosis: thx broseph :P
16:37 aixsyd but now that I think about it - youre more of a bosideon, king of the brocean
16:37 aixsyd *brosideon
16:40 semiosis el duderino if you're not into the whole brevity thing
16:41 spstarr_work joined #gluster
16:41 ricky-ti1 joined #gluster
16:41 monotek do i need xattr support from ext4 mount on the client to?
16:41 mattappe_ joined #gluster
16:42 semiosis monotek: glusterfs uses trusted.* xattrs, which are always enabled on ext4.  the user_xattr option allows user.* xattrs, which gluster supports but doesnt require.
16:42 semiosis monotek: so, no
16:43 dbruhn semiosis, monotek is having an issue where he is getting attended attribute errors on ubuntu, but only when the user apache is running under try's to access the mount, or so it looks.
16:43 dbruhn s/attended/extended/
16:43 glusterbot What dbruhn meant to say was: semiosis, monotek is having an issue where he is getting extended attribute errors on ubuntu, but only when the user apache is running under try's to access the mount, or so it looks.
16:43 * semiosis scrolls back
16:43 monotek thanks guys :-)
16:44 monotek question was from 16:09
16:46 semiosis monotek: from your client log:  [2014-01-16 14:36:38.148056] W [client-rpc-fops.c:1044:client3_3_setxattr_cbk] 0-gv7-client-1: remote operation failed: Permission denied
16:47 semiosis any time you see "remote operation failed" in a client log you should look for a corresponding entry in the *brick* log file, which is the other side of the remote operation
16:47 semiosis that might shed light on the situation
16:47 semiosis unless you have configured apparmor yourself there's nothing i know of that would make it break apache on glusterfs
16:47 monotek http://paste.ubuntu.com/6762512/
16:47 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
16:48 semiosis apache on glusterfs works "out of the box" for just about everyone on all distros
16:48 monotek this is the server log...
16:49 semiosis could you please pastie 'sudo getfattr -m . -d -e hex /var/article/check_permissions_19606'
16:49 semiosis you may need to install the attr package
16:50 monotek does this mean i need the same users on client and server? my otrs user has id 1001 which is not available on the gluster servers...
16:50 semiosis also please include 'ls -la /var/article/check_permissions_19606'
16:50 semiosis these commands need to be run on the backend brick filesystem
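On a healthy replica brick the interesting part of that getfattr output is gluster's own trusted.* attributes, which follow the trusted.afr.<volume>-client-<N> pattern; something along these lines, with brick path and values purely illustrative rather than taken from monotek's system:

    getfattr -m . -d -e hex /bricks/gv7/var/article/check_permissions_19606
    # file: bricks/gv7/var/article/check_permissions_19606
    # trusted.afr.gv7-client-0=0x000000000000000000000000
    # trusted.afr.gv7-client-1=0x000000000000000000000000
    # trusted.gfid=0x...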
16:51 semiosis monotek: oh yeah, there you go
16:51 semiosis you need the same users on the clients & servers
16:51 semiosis ha that was easy
16:52 dbruhn hmm, why was it allowing him to write from the command line not through apache with that same user?
16:52 monotek thanks for the hint. will try that...
16:52 semiosis dbruhn: that's... unlikely
16:52 shyam joined #gluster
16:54 monotek i also have another gfs volume for a samba/ftp server. samba dirs has user dirs which arent present on the gluster server but it works....
16:54 semiosis monotek: permissions
16:54 monotek but i'll try anyway....
16:54 dbruhn that's what tripped me up
16:54 glusterbot New news from newglusterbugs: [Bug 1054351] Failure of posix compliance tests for link on fuse mount <https://bugzilla.redhat.com/show_bug.cgi?id=1054351>
16:55 semiosis monotek: fwiw i use puppet to manage users for apps on my cluster
16:55 semiosis works fine for a small number of app/system user account
16:55 semiosis s
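For just a couple of accounts the manual equivalent is creating them on each server with the same numeric ids as on the client, e.g. with the uid/gid 1001 mentioned above:

    groupadd -g 1001 otrs
    useradd -u 1001 -g 1001 otrs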
16:57 bennyturns joined #gluster
17:03 monotek just added the user with -uid 1001. remounted client. didnt work. same permission errors in log :-(
17:03 semiosis monotek: you should probably stop & start the volume, maybe also unmount/remount the client
17:04 monotek is it normal that gluster logs have wrong time by the way? time is 2 hours back in gluster logs....
17:04 monotek ok, try that...
17:04 semiosis gluster logs are UTC
17:04 monotek imho would be better to us system time....
17:04 monotek use
17:04 dbruhn the log time stamp has been a topic of conversation many times
17:05 monotek ahh... ok... :-)
17:05 semiosis monotek: set your system to UTC, problem solved :)
17:05 monotek *g*
17:05 monotek but no ;-)
17:05 ndevos many users have servers in multiple timezones, use UTC and the logs make much more sense
17:06 JoeJulian +1
17:06 dbruhn Or join the project adding a feature to select what time stamp you want on logs.
17:06 dbruhn but I am cool with utc
17:06 ndevos ah, I've seen a patch or something for that, I think
17:06 semiosis seems like it would be easy enough (relatively speaking) to add a UTC offset config param
17:06 monotek i would like this option but unfortunately im not a programmer :-(
17:07 daMaestro joined #gluster
17:07 JoeJulian The patches I saw were for moving the logs into syslog I think.
17:08 hybrid512 joined #gluster
17:09 ndevos well, anyway, I cant find the patch now...
17:10 ricky-ticky1 joined #gluster
17:10 khushildep joined #gluster
17:11 semiosis uh oh, problems building 3.5.0beta1 on ubuntu precise :(
17:12 semiosis this should be interesting
17:12 wrale joined #gluster
17:13 monotek stop/start my volume and umount/mount client didnt help either :-(
17:14 ndevos semiosis: I'm not sure of there are new deps, got the build logs somewhere?
17:15 wrale is gluster a flat topology like cassandra, where there is no particular "head node" for me to "manually" make highly available?   as in: http://stackoverflow.com/questions/11735957/how-cassandra-ensures-high-availability ?
17:15 glusterbot Title: jdbc - How Cassandra ensures high availability? - Stack Overflow (at stackoverflow.com)
17:15 semiosis ndevos: http://pastie.org/8639632 -- end of configure & beginning of make
17:15 glusterbot Title: #8639632 - Pastie (at pastie.org)
17:15 kkeithley_ semiosis: yeah, I had problems on Fedora and RHEL too.  Are you using the tarball from bits.gluster.org? I copped out and respun it (directory glusterfs-3.5.0beta1 instead of gluster-3.5beta1) and tweaked the configure.ac line 9 from 3.5beta1 to 3.5.0beta1.
17:15 pdrakeweb joined #gluster
17:15 semiosis kkeithley_: i always use the tarballs
17:16 kkeithley_ semiosis: right, usually I do too, but this time
17:16 semiosis i am running a build against ubuntu trusty (unreleased 14.04 LTS) and it is going ok
17:16 kkeithley_ this time I couldn't get rpmbuild to cooperate
17:16 semiosis something in precise is too old i think
17:17 semiosis hmm well it's not going ok, but it's going
17:17 semiosis lots & lots of warning: function declaration isn't a prototype [-Wstrict-prototypes]
17:17 semiosis but i'm ok with that
17:18 SFLimey_ joined #gluster
17:19 ndevos semiosis: does it help if you re-run ./autogen.sh?
17:19 wrale i guess the answer to my question is no:  http://www.gluster.org/community/documentation/index.php/Features/thousand-node-glusterd  ("Benefit to GlusterFS: Scaling of our management plane to 1000+ nodes, enabling competition with other projects such as HDFS or Ceph which already have or claim such scalability. "
17:19 glusterbot Title: Features/thousand-node-glusterd - GlusterDocumentation (at www.gluster.org)
17:19 kkeithley_ it actually built fine using the bits.gluster.org tarball, but at the end, after compiling, there were discrepancies between files installed in /usr/lib64/gluster/3.5beta1/... but the spec expected them to be in /usr/lib64/gluster/3.5.0beta/...
17:21 semiosis kkeithley_: interesting.  i'll check that out but i think the deb build tools find that sort of thing automatically
17:21 semiosis the build on trusty completed
17:22 semiosis might have to wait until this evening to get deeper into the libtool problem on precise
17:22 dbruhn wrale, there is no "head node" in gluster
17:22 LoudNoises joined #gluster
17:23 dbruhn it's a peer group configuration
17:24 wrale dbruhn: thank you..  so on a fencing event on a ovirt hypervisor + co-located gluster cluster, there would be no service interruption, provided there are enough peers, correct?
17:24 wrale (no wait for leader election)
17:25 semiosis well there is a ,,(ping-timeout)
17:25 glusterbot The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
17:25 dbruhn In theory. There are a couple things like timeouts that might need to be tweaked to get exactly what you are looking for, and some of those settings don't play well with latency issues.
17:27 monotek thanks guys :-)
17:27 monotek maybe i try to repost my problem tomorrow....
17:27 Mo_ joined #gluster
17:28 psyl0n joined #gluster
17:28 * spstarr_work got approval
17:28 spstarr_work I am setting up RRDNS w/ ROute 53 and 3 AWS zones
17:28 spstarr_work :))
17:30 semiosis spstarr_work: welcome to the future!
17:30 spstarr_work lol :D
17:30 dbruhn I expect a lengthy blog article, with diagrams, and coupons for free redbull
17:30 semiosis curious what the powers that be had to consider for R53?  it should only cost you about $1/mo
17:31 zaitcev joined #gluster
17:31 spstarr_work when they were told that.. it went quickly ;)
17:32 semiosis hahaha
17:33 dbruhn Did you hand them $12 and said it's worth the hell it will save you?
17:33 semiosis we dropped a substantial dns bill by 90% moving from ultradns to r53
17:33 semiosis i'm a big fan
17:35 semiosis ndevos: kkeithley_: i added libtool to the build-depends package list & now the precise build is running :D
17:36 ndevos semiosis: haha, I doubt that dependency was introduced only recently!
17:36 semiosis srsly
17:37 semiosis now it's failing here: http://pastie.org/8639690
17:37 glusterbot Title: #8639690 - Pastie (at pastie.org)
17:37 LoudNois_ joined #gluster
17:38 ndevos hmpf
17:39 TrDS joined #gluster
17:39 semiosis maybe some debhelper/python thing i need to add
17:40 monotek @ semiosis how many dns zones do you need? i'm using https://dns.he.net/ where you get 50 for free....
17:40 _pol joined #gluster
17:40 semiosis monotek:  a lot
17:41 monotek ok *g*
17:41 semiosis and we need a robust API, and we need high availability
17:41 semiosis and there's integration between R53 and EC2 that only AWS can provide
17:41 semiosis which it would be a pain to live without
17:42 ndevos semiosis: that build error puzzles me, the needed file just got build a few lines above the error and you're running 'make -j1' - no idea
17:42 * semiosis tries again
17:44 _pol joined #gluster
17:45 failshell joined #gluster
17:45 dbruhn monotek, you can also post to the users group and see if you can get an answer there
17:46 _Bryan_ joined #gluster
17:46 verdurin After the recent RPM upgrades, my 2 peers aren't working together (Status: Peer Rejected: Connected)
17:46 bfoster joined #gluster
17:46 monotek thanks for the tip :-) will try that tomorrow....
17:46 dbruhn monotek http://www.gluster.org/mailman/listinfo/gluster-users
17:46 glusterbot Title: Gluster-users Info Page (at www.gluster.org)
17:46 failshell im trying to configure georepl. ive created a secret.pem and secret.pem.pub file under /var/lib/gluster/geo-replication. added the pub to /root/.ssh/authorized_keys, can ssh fine with that key. but when i try to setup georepl. i get : Passwordless ssh login has not been setup with foo
17:46 johnmark joined #gluster
17:46 failshell any idea what im doing wrong?
17:46 verdurin I've looked at the checksums of the vol files and the 'info' file was different, so I corrected that
17:47 verdurin Didn't make any difference
17:47 verdurin This is 3.4.2 on CentOS 6.5
17:47 hagarth joined #gluster
17:49 mgebbe_ joined #gluster
17:50 77CAA3I37 joined #gluster
17:52 semiosis ndevos: something in ubuntu raring & older breaks the build... it went fine on saucy & trusty
17:52 semiosis i'll get into this later, gotta gbtw
17:57 NuxRo joined #gluster
17:58 ndevos semiosis: I'm sorry to hear that, I've also seen failures on EPEL-5, but have not been able to check that yet
17:59 ndevos and, I have no idea about ubuntu, so I cant really help with that either :-/
17:59 semiosis thx anyway
18:00 geewiz_ joined #gluster
18:00 ira joined #gluster
18:00 aurigus joined #gluster
18:00 aurigus joined #gluster
18:00 ira joined #gluster
18:06 SFLimey joined #gluster
18:10 johnmark semiosis: which one is Raring?
18:10 johnmark we should support LTS releases at the very least
18:10 theron joined #gluster
18:10 * johnmark is running johnmark.org on 12.04
18:10 johnmark theron: theron!
18:10 * johnmark will try to build
18:11 semiosis raring - 13.04
18:11 semiosis 12.04 is precise
18:12 ira joined #gluster
18:13 dusmant joined #gluster
18:14 TonySplitBrain joined #gluster
18:15 johnmark huh ok
18:15 johnmark semiosis: and this is for 3.5?
18:15 semiosis 3.5.0beta1
18:16 semiosis https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.5qa/+build/5469753
18:16 glusterbot Title: amd64 build of glusterfs 3.5.0beta1-ubuntu1~raring2 : ubuntu-glusterfs-3.5qa : semiosis (at launchpad.net)
18:16 semiosis you can see the build log there
18:16 johnmark huh, ok
18:16 failshell Passwordless ssh login has not been setup with remote-server. but  i can ssh in with the /var/lib/glusterd/geo-replication/server.pem ... i just cant figure what's wrong with my setup
18:18 mattappe_ joined #gluster
18:22 anands joined #gluster
18:23 andreask joined #gluster
18:23 theron johnmark: johnmark!
18:24 parad1se_ joined #gluster
18:28 ira joined #gluster
18:29 johnmark lol
18:29 ira joined #gluster
18:29 Peanut joined #gluster
18:29 ira joined #gluster
18:30 NuxRo joined #gluster
18:35 Kins joined #gluster
18:41 dneary joined #gluster
18:42 lalatenduM joined #gluster
18:44 Technicool joined #gluster
19:08 Peanut Hi JoeJulian, you recommended I run strace on my glusterd a while back - could you give me the command line again that you recommended?
19:14 semiosis Peanut: you might find what you're looking for quicker by checking the channel logs, links are in the /topic
19:16 Peanut semiosis: ah, thanks.
19:16 Peanut Still debugging my live-migration issue - has anyone else complained?
19:17 Peanut Oh, good bot, I found it right away.
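For anyone who lands here later, a generic way to trace a running glusterd looks something like the line below (ordinary strace flags, not necessarily the exact command JoeJulian suggested):

    strace -f -tt -T -s 256 -o /tmp/glusterd.strace -p $(pidof glusterd)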
19:30 mattappe_ joined #gluster
19:31 dbruhn joined #gluster
19:31 NeatBasis joined #gluster
19:31 failshell Unable to fetch slave or confpath details
19:31 failshell anyone ever seen that error?
19:42 Peanut semiosis: I see there's new Ubuntu glusterfs packages available? What's new?
19:43 semiosis new version 3.4.2, also updated upstart job to block mounting glusterfs until network is ready
19:43 semiosis now supports multiple glusterfs mounts, previously only blocked one mount
19:44 Peanut Ah, ok, thanks for packaging those. You've had nobody else ask about live migration issues?
19:44 semiosis i haven't backported the new upstart job to 3.3 though.
19:44 semiosis no idea about migration issues, sorry
19:46 ira_ joined #gluster
19:47 Peanut Pity, I was really enjoying the ability to simply kick a VM to the other host, without anyone noticing.
19:50 divbell http://changelogs.ubuntu.com/changelogs/pool/universe/g/glusterfs/glusterfs_3.4.2-1ubuntu1/changelog
19:51 JoeJulian Peanut: Where can I look at that strace?
19:53 semiosis divbell: yep
19:54 quique joined #gluster
19:55 quique i have a server that was a samba share, i'm going to make it a gluster node.  How should I copy all the files over initially?
19:55 quique i think i read somewhere it's bad to copy them directly to the node
19:56 semiosis quique: ideally you would set up your gluster volume & copy the data in through a client mount point (fuse or nfs)
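In practice that means mounting the new volume and rsyncing the old share into it, something like the following, with hostnames and paths as placeholders:

    mount -t glusterfs gluster1:/sambavol /mnt/sambavol
    rsync -aH --progress /srv/old-samba-share/ /mnt/sambavol/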
19:56 divbell semiosis, now if the samba package maintainers would just add the glusterfs vfs module...
19:56 samppah Peanut: there is thread on mailing list about live migration problems with 3.4.2
19:56 davinder joined #gluster
19:56 semiosis divbell: send them a patch :D
19:57 semiosis divbell: probably should contribute that patch to debian
19:58 divbell i may! :)
19:59 semiosis \o/
19:59 semiosis feel free to bounce questions & ideas off me, though i can't promise i'll be much help
19:59 _pol_ joined #gluster
20:00 semiosis s/much/any/
20:00 glusterbot What semiosis meant to say was: feel free to bounce questions & ideas off me, though i can't promise i'll be any help
20:01 _pol joined #gluster
20:05 NeatBasis joined #gluster
20:05 sac`away joined #gluster
20:08 psyl0n joined #gluster
20:15 mattapperson joined #gluster
20:21 diegows joined #gluster
20:22 quique joined #gluster
20:39 lpabon joined #gluster
20:46 lalatenduM kkeithley, ping
20:47 kkeithley_ pong
20:48 lalatenduM kkeithley, can I update from glusterfs-3.5.0qa3-1.el6.x86_64 to beta1
20:49 lalatenduM kkeithley, I tried using http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.0beta1/RHEL/glusterfs-350beta-epel.repo and yum update glusterfs
20:50 lalatenduM but it didn't work for me
20:50 kkeithley_ are you asking whether a yum update should update qa3 to beta1? It likely won't because 3.5.0-0.1.qa3 is lexicographically "higher" than 3.5.0-0.1.beta1
20:50 khushildep_ joined #gluster
20:50 lalatenduM kkeithley, yeah I remember we had a discussion on last community meeting
20:50 lalatenduM on this
20:50 kkeithley_ We (hagarth, ndevos, and I) discussed this issue in the community planning meeting yesterday
20:51 kkeithley_ yup
20:51 lalatenduM kkeithley, whats the work around?
20:51 kkeithley_ yum erase glusterfs*, then yum install
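Spelled out, that workaround is roughly the following (package list trimmed to the common ones); the downgrade variant JoeJulian wonders about just below turns out to work as well:

    yum erase 'glusterfs*'
    yum install glusterfs glusterfs-server glusterfs-fuse
    # or, keeping the installed set and config in place:
    yum downgrade 'glusterfs*'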
20:51 zapotah joined #gluster
20:51 zapotah joined #gluster
20:51 lalatenduM kkeithley, ok
20:51 * JoeJulian wonders if yum downgrade would work
20:52 lalatenduM kkeithley, are we planning to fix this issue for later beta releases?
20:52 lalatenduM JoeJulian, will try that :)
20:52 kkeithley_ yes,
20:52 jbrooks joined #gluster
20:52 lalatenduM kkeithley, cool thanks
20:55 khushildep joined #gluster
20:55 lalatenduM kkeithley, also with http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.0beta1/RHEL/glusterfs-350beta-epel.repo, during yum update I got http://fpaste.org/69098/
20:56 glusterbot Title: #69098 Fedora Project Pastebin (at fpaste.org)
20:56 kkeithley_ er, that's not right...
20:56 * lalatenduM hopes that he is not disturbing kkeithley too much
20:56 kkeithley_ nope, not a problem
20:57 JoeJulian kkeithley_ was already disturbed.
20:57 lalatenduM JoeJulian, kkeithley :)
20:57 kkeithley_ lol, if you only knew
20:58 kkeithley_ hang on
20:59 johnmark lulz
20:59 smellis anyway to disable nfs by default? as in before I create a volume
20:59 kkeithley_ fixed
21:00 lalatenduM smellis, do u want to disable kernel nfs or gluster nfs?
21:00 lalatenduM kkeithley, will check
21:00 johnmark ***ANNOUNCE*** GlusterFest starts in T-3hrs - http://gluster.org/gfest
21:00 glusterbot Title: GlusterFest - GlusterDocumentation (at gluster.org)
21:00 smellis gluster nfs, it interferes with normal nfs mounts
21:00 smellis after I disable it and restart its ok
21:01 kkeithley_ same prob with the 3.5.0qa3 repo files. I guess that tells us how many people tried qa3
21:01 smellis but before i create a volume and disable it in a volume, it prevents normal nfs mounts
21:02 lalatenduM kkeithley, I am still seeing the issue :(, did a yum clean all; yum update glusterfs
21:02 kkeithley_ reinstall the repo file
21:02 lalatenduM kkeithley, will do
21:02 kkeithley_ rc-release != qa-release
21:04 lalatenduM kkeithley, it is working now , thanks :)
21:06 kkeithley_ glad we caught that before the GlusterFest
21:06 smellis poop, i am having trouble finding docs for what's available in /etc/glusterfs/glusterd.vol
21:09 lalatenduM smellis, yeah thats an issue, I also know that we can disable it for a volume, but not sure if we can completely do it for a gluster node
21:09 lalatenduM kkeithley, yup :)
21:10 lalatenduM smellis, kkeithley might know if it is possible
21:12 semiosis smellis: here's what to do, create your volume(s) but do not start.  then on the volume(s) set the nfs-disable option to on (see gluster volume set help).  then you can start the volume(s)
21:12 semiosis if nfs-disable is on for all volumes then the nfs server should not start up
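Roughly, with the option spelled nfs.disable in the CLI (volume name and brick paths are placeholders):

    gluster volume create vpool1 replica 2 node1:/bricks/b1 node2:/bricks/b1
    gluster volume set vpool1 nfs.disable on
    gluster volume start vpool1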
21:13 smellis that's what I do, see this https://github.com/stephanellis/centos6-roles/blob/master/extras/setup-vpool1.sh
21:13 glusterbot Title: centos6-roles/extras/setup-vpool1.sh at master · stephanellis/centos6-roles · GitHub (at github.com)
21:13 mattappe_ joined #gluster
21:13 smellis it's not until i restart the server that the problem goes away
21:16 semiosis well then idk
21:17 smellis haha, thanks anyway
21:17 smellis it's a very minor annoyance anyway
21:18 lalatenduM JoeJulian, kkeithley yum downgrade worked  from qa3 to beta1..:P
21:20 ccope joined #gluster
21:20 ccope anyone remember how to check the status of a cluster in gluster 2.0? :)
21:20 JoeJulian ps
21:21 JoeJulian ps ax combined with netstat -tlnp, netstat -t, and reading through all the log files.
21:22 mattappe_ joined #gluster
21:25 ccope different clients are having trouble writing to different directories, and restarting the clients seems to resolve some of the errors
21:26 ccope in the client logs, in debug mode, i see this: [2014-01-16 13:23:33] D [dht-selfheal.c:435:dht_selfheal_directory] cluster: 1 subvolumes down -- not fixing
21:27 ccope there are two subvolumes in the volume, both of which appear to be available (glusterfsd is running, the mounts are accessible)
21:32 kkeithley_ lalatenduM: that's funny
21:52 B21956 joined #gluster
22:02 B21956 joined #gluster
22:06 mattapperson joined #gluster
22:11 tziOm joined #gluster
22:16 khushildep_ joined #gluster
22:32 aknapp joined #gluster
22:39 theron joined #gluster
22:51 rwheeler joined #gluster
22:52 KORG joined #gluster
22:55 jporterfield joined #gluster
22:55 gork4life joined #gluster
22:55 gork4life Hello
22:55 glusterbot gork4life: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
22:56 gork4life I would like to know if storage tiering is possible with gluster
22:57 gork4life And if so what could I use to support that feature?
22:59 spstarr joined #gluster
22:59 spstarr semiosis: ah thats your ppa ;)
22:59 semiosis mine all mine
23:00 spstarr you're a gluster dev or just a packager for buntu?
23:00 semiosis i'm just this guy, y'know?
23:00 gork4life semiosis: I'm used your package a few times
23:00 JoeJulian gork4life: Not yet, but I do believe that's a proposed feature.
23:00 JoeJulian That's what she said.
23:00 semiosis i help maintain deb packages (for debian & ubuntu) for the gluster community
23:01 spstarr semiosis: so i can trust no backdoors :D
23:01 semiosis JoeJulian: ^^
23:01 purpleidea JoeJulian: lol
23:02 gork4life has anyone ever used zfs with gluster and what was the performance with it
23:04 semiosis gork4life: yes and not too bad
23:04 semiosis you should be able to find some articles about ZFS & gluster on the web, i think there's even a HOWTO on gluster.org
23:05 gork4life semiosis:  ok that's I'll look into it
23:05 gork4life semiosis:  I meant thanks
23:05 semiosis yw
23:07 * spstarr looks at Route 53 and configures
23:07 wrale joined #gluster
23:21 mattappe_ joined #gluster
23:26 mattapperson joined #gluster
23:28 TonySplitBrain joined #gluster
23:30 mattappe_ joined #gluster
23:50 ccope joined #gluster
23:58 TrDS left #gluster
