
IRC log for #gluster, 2014-01-08


All times shown according to UTC.

Time Nick Message
00:11 nueces joined #gluster
00:32 Shdwdrgn joined #gluster
00:35 Shdwdrgn left #gluster
00:47 mkzero joined #gluster
01:01 tyl0r Does anybody have any experience with translate-uid?
01:14 johnbot11 joined #gluster
01:32 JoeJulian tyl0r: the filter translator hasn't seen any attention in a very long time and is not currently configured or configurable through the cli. You can, however, configure server.root-squash.
01:33 ira joined #gluster
01:34 tyl0r I don't think root-squash helps me because I'm not root on the client-side...
01:35 dbruhn joined #gluster
01:35 tyl0r Hmm, okay, I'll have to come up with a workaround. Thanks for answering my question
01:37 mkzero joined #gluster
01:38 cfeller joined #gluster
01:40 theron joined #gluster
01:40 JoeJulian You're welcome.
01:43 JoeJulian Wow... http://www.zdnet.com/red-hat-incorporates-free-red-hat-clone-centos-7000024907/
01:43 glusterbot Title: Red Hat incorporates free Red Hat clone CentOS | ZDNet (at www.zdnet.com)
01:44 dbruhn I read that earlier, sad.
01:44 JoeJulian no. now we won't have to wait so long for releases.
01:45 dbruhn True
01:45 dbruhn I just hope it doesn't eventually lead to it becoming a pay for version down the road.
01:45 JoeJulian It wont.
01:46 dbruhn I see it mostly as RedHat trying to beat Oracle to the punch on releases
01:46 dbruhn edging them out on market share
01:46 JoeJulian Yep
01:47 JoeJulian Plus, it's a much easier sell. You're already using the product and struggling. Wanna pay for support and training?
01:47 dbruhn I read this earlier
01:47 dbruhn http://lists.centos.org/pipermail/centos-announce/2014-January/020100.html
01:47 glusterbot Title: [CentOS-announce] CentOS Project joins forces with Red Hat (at lists.centos.org)
01:47 dbruhn Little better information on what's actually going on
01:51 JoeJulian Nice! "...constraints that resulted in efforts like CentOS-QA being behind
01:51 JoeJulian closed doors, now go away and we hope to have the entire build, test, and delivery chain open to anyone who wishes to come and join the effort."
02:00 failshell joined #gluster
02:01 sticky_afk joined #gluster
02:01 stickyboy joined #gluster
02:07 psyl0n joined #gluster
02:24 harish joined #gluster
02:30 bharata-rao joined #gluster
02:32 theron_ joined #gluster
02:33 mattappe_ joined #gluster
02:38 quillo joined #gluster
02:40 sticky_afk joined #gluster
02:40 stickyboy joined #gluster
02:50 _Bryan_ joined #gluster
03:12 kshlm joined #gluster
03:18 mattappe_ joined #gluster
03:28 glusterbot New news from newglusterbugs: [Bug 1021998] nfs mount via symbolic link does not work <https://bugzilla.redhat.com/show_bug.cgi?id=1021998>
03:29 bala joined #gluster
03:30 [o__o] left #gluster
03:31 tyl0r joined #gluster
03:33 guntha__ joined #gluster
03:36 JMWbot joined #gluster
03:36 JMWbot I am JMWbot, I try to help remind johnmark about his todo list.
03:36 JMWbot Use: JMWbot: @remind <msg> and I will remind johnmark when I see him.
03:36 JMWbot /msg JMWbot @remind <msg> and I will remind johnmark _privately_ when I see him.
03:36 JMWbot The @list command will list all queued reminders for johnmark.
03:36 JMWbot The @about command will tell you about JMWbot.
03:38 [o__o] joined #gluster
03:38 shubhendu joined #gluster
03:40 [o__o] left #gluster
03:40 shyam joined #gluster
03:43 [o__o] joined #gluster
03:43 overclk joined #gluster
03:44 itisravi joined #gluster
03:44 jbrooks joined #gluster
03:45 r0b joined #gluster
03:46 dalekurt joined #gluster
03:49 kanagaraj joined #gluster
03:54 DV joined #gluster
03:54 primechuck joined #gluster
03:57 dylan_ joined #gluster
03:57 davinder joined #gluster
04:03 shylesh joined #gluster
04:23 ndarshan joined #gluster
04:30 primechuck joined #gluster
04:33 ppai joined #gluster
04:39 RameshN joined #gluster
04:41 kdhananjay joined #gluster
04:41 saurabh joined #gluster
04:57 davinder2 joined #gluster
04:59 aixsyd joined #gluster
05:00 bala joined #gluster
05:02 davinder joined #gluster
05:03 ajha joined #gluster
05:04 MiteshShah joined #gluster
05:12 prasanth joined #gluster
05:15 psharma joined #gluster
05:19 hagarth joined #gluster
05:19 aravindavk joined #gluster
05:25 hchiramm__ joined #gluster
05:29 glusterbot New news from newglusterbugs: [Bug 1049727] Dist-geo-rep : volume won't be able to stop untill the geo-rep session is deleted. <https://bugzilla.redhat.com/show_bug.cgi?id=1049727>
05:33 aixsyd anyone alive?
05:44 vpshastry joined #gluster
05:46 prasanth joined #gluster
05:48 wanye_ joined #gluster
05:48 guntha__ joined #gluster
05:50 dylan_ joined #gluster
05:52 rastar joined #gluster
05:53 DV joined #gluster
06:05 tor joined #gluster
06:06 dylan_ joined #gluster
06:13 raghu` joined #gluster
06:18 JonnyNomad joined #gluster
06:19 lalatenduM joined #gluster
06:20 mohankumar joined #gluster
06:21 CheRi joined #gluster
06:28 meghanam joined #gluster
06:28 meghanam_ joined #gluster
06:29 glusterbot New news from newglusterbugs: [Bug 1049735] Avoid NULL referencing of auth->authops->request_init during RPC init <https://bugzilla.redhat.com/show_bug.cgi?id=1049735>
06:29 benjamin_______ joined #gluster
06:39 koodough joined #gluster
06:39 koodough joined #gluster
06:44 hagarth joined #gluster
06:45 guntha__ joined #gluster
06:45 ababu joined #gluster
06:52 nshaikh joined #gluster
07:12 ppai joined #gluster
07:16 satheesh joined #gluster
07:20 jtux joined #gluster
07:32 guntha__ joined #gluster
07:35 ngoswami joined #gluster
07:39 hagarth joined #gluster
07:44 ctria joined #gluster
07:46 sticky_afk joined #gluster
07:46 stickyboy joined #gluster
07:59 keytab joined #gluster
08:00 ekuric joined #gluster
08:06 FarbrorLeon joined #gluster
08:06 eseyman joined #gluster
08:14 keytab joined #gluster
08:22 KaZeR_ joined #gluster
08:26 davinder joined #gluster
08:38 jclift joined #gluster
08:47 andreask joined #gluster
08:51 benjamin_______ joined #gluster
08:55 ababu joined #gluster
09:23 mgebbe_ joined #gluster
09:24 keytab joined #gluster
09:31 tryggvil joined #gluster
09:31 tryggvil_ joined #gluster
09:33 shyam joined #gluster
09:38 overclk joined #gluster
09:41 quillo joined #gluster
09:51 overclk joined #gluster
09:53 s2r2 joined #gluster
09:58 aravindavk joined #gluster
09:59 quillo_ joined #gluster
10:00 morse joined #gluster
10:07 RameshN joined #gluster
10:10 bala1 joined #gluster
10:13 hagarth joined #gluster
10:14 shubhendu joined #gluster
10:15 kdhananjay joined #gluster
10:26 kanagaraj joined #gluster
10:29 shyam joined #gluster
10:43 XATRIX joined #gluster
10:44 XATRIX Hi guys, what if i have a splitbrain situation, and i end up with slightly different file permissions, files, or whatever
10:44 XATRIX How can i fix it or find which files i have corrupted ?
10:47 kdhananjay joined #gluster
10:49 ira joined #gluster
10:51 TDJACR joined #gluster
10:56 Oneiroi joined #gluster
11:03 jclift joined #gluster
11:03 jclift left #gluster
11:03 jclift joined #gluster
11:07 shyam joined #gluster
11:13 ndarshan joined #gluster
11:13 mbukatov joined #gluster
11:18 kdhananjay joined #gluster
11:25 bala joined #gluster
11:25 hagarth joined #gluster
11:30 aravindavk joined #gluster
11:31 shubhendu joined #gluster
11:31 ababu joined #gluster
11:32 psyl0n joined #gluster
11:34 RameshN joined #gluster
11:42 satheesh joined #gluster
11:46 glusterbot joined #gluster
11:47 sticky_afk joined #gluster
11:47 stickyboy joined #gluster
11:50 MiteshShah joined #gluster
12:01 itisravi_ joined #gluster
12:05 prasanth joined #gluster
12:08 theron joined #gluster
12:09 vpshastry left #gluster
12:09 andreask joined #gluster
12:09 ajha joined #gluster
12:16 CheRi joined #gluster
12:18 ira joined #gluster
12:18 glusterbot New news from newglusterbugs: [Bug 1049727] Dist-geo-rep : volume won't be able to stop untill the geo-rep session is deleted. <https://bugzilla.redhat.com/show_bug.cgi?id=1049727> || [Bug 1049470] Gluster could do with a useful cli utility for updating host definitions <https://bugzilla.redhat.com/show_bug.cgi?id=1049470> || [Bug 1049481] Need better GlusterFS log message string when updating host definition <https://bugz
12:18 glusterbot New news from resolvedglusterbugs: [Bug 950083] Merge in the Fedora spec changes to build one single unified spec <https://bugzilla.redhat.com/show_bug.cgi?id=950083>
12:18 ira joined #gluster
12:19 harish joined #gluster
12:27 ppai joined #gluster
12:27 jbrooks joined #gluster
12:38 edward1 joined #gluster
12:41 psyl0n joined #gluster
12:41 psyl0n joined #gluster
12:43 TonySplitBrain joined #gluster
12:48 XATRIX Can anyone advise any tools/ways to find/fix files that are corrupt/different between a 2-node cluster ?
12:55 ppai joined #gluster
12:55 warci joined #gluster
12:58 FarbrorLeon joined #gluster
13:08 CheRi joined #gluster
13:13 wildfire1 joined #gluster
13:14 wildfire1 Hi, I've got gluster 3.4.2 setup, and mounted via the native client. However I would like to export it via NFS. My problem is that the machine running the gluster volume *also* does a native kernel NFS exports.
13:14 wildfire1 Is it possible to have *both* the native kernel NFS server and gluster's NFS server running at the same time?
13:15 kkeithley no
13:15 ira I'd think that they'd conflict...
13:15 ndevos you can only have one NFS-server registered at portmap/rpcbind...
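The constraint ndevos describes can be seen from rpcbind itself. A minimal check -- assumes `rpcinfo` is installed; the service name is the fifth column of `rpcinfo -p` output:

```shell
# Only one NFS server can register the "nfs" program with
# rpcbind/portmap at a time; see who (if anyone) holds it now.
if command -v rpcinfo >/dev/null 2>&1; then
    NFS_REG=$(rpcinfo -p localhost 2>/dev/null | awk '$5 == "nfs"')
else
    NFS_REG=""
fi

if [ -n "$NFS_REG" ]; then
    echo "an nfs server is registered:"
    echo "$NFS_REG"
else
    echo "no nfs registration visible here"
fi
```

If the kernel NFS server shows up there, gluster's built-in NFS server on the same host cannot also register.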
13:20 kkeithley kbsingh, johnmark: ping. We should figure out gluster (and nfs-ganesha) in CentOS.
13:21 kkeithley s/figure out/sort out/
13:21 glusterbot What kkeithley meant to say was: kbsingh, johnmark: ping. We should sort out gluster (and nfs-ganesha) in CentOS.
13:21 diegows joined #gluster
13:21 benjamin_______ joined #gluster
13:22 wildfire1 hmm, what if I have one of the other machines in the gluster cluster export things via NFS?
13:23 wildfire (it isn't running any kernel NFS services)
13:25 wildfire how do you specify things you would normally put in /etc/exports ? (e.g. no_root_squash)
13:25 wildfire i.e. where is the gluster nfs config?
13:26 kkeithley the gluster cli to set the options. options are persistent so you don't need to set them each time the gluster nfs server is restarted
13:26 chirino joined #gluster
13:27 kkeithley generally speaking you don't want to edit the gluster vol file, which is where, among other things, the nfs options are persisted
13:28 kkeithley e.g. `gluster volume set $volname server.root-squash false`
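Put together, the workflow kkeithley describes looks like this -- a sketch only, with a hypothetical volume name `gv0`; the gluster calls are guarded so the script degrades to printing the command on a box without glusterd:

```shell
# Hypothetical volume name -- substitute your own.
VOLNAME="gv0"

# Options are set through the CLI; glusterd persists them under
# /var/lib/glusterd/vols/, so they survive daemon restarts.
SET_CMD="gluster volume set $VOLNAME server.root-squash false"

if command -v gluster >/dev/null 2>&1; then
    # Ignore failures if the volume doesn't exist on this box.
    $SET_CMD || true
    # Verify the option is now recorded on the volume:
    gluster volume info "$VOLNAME" | grep -i root-squash || true
else
    # No glusterd here -- just show what would be run.
    echo "$SET_CMD"
fi
```

Setting the option on any one peer is enough; glusterd replicates volume configuration to the other servers, as confirmed a few lines below.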
13:28 wildfire if options are set on one machine, will each of the other servers also replicate the changes?
13:28 kkeithley yes
13:28 wildfire (the gluster vol file is kept in /etc, so I wasn't sure)
13:30 kkeithley it's in /var/lib/glusterd/vols/
13:31 kkeithley if you installed RPMs or DPKGs (from download.gluster.org, among other places)
13:32 wildfire Hmm, I have: /etc/glusterfs/glusterd.vol
13:33 wildfire I am using the gluster 3.4.2 Ubuntu PPA
13:33 wildfire I also have /var/lib/glusterd/vols too
13:34 Retenodus joined #gluster
13:34 Retenodus Hello
13:34 wildfire oh, I see - the /var has the replicated state as well
13:34 glusterbot Retenodus: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:34 kkeithley @later tell semiosis (08:32:59 AM) wildfire: Hmm, I have: /etc/glusterfs/glusterd.vol (08:33:08 AM) wildfire: I am using the gluster 3.4.2 Ubuntu PPA
13:34 glusterbot kkeithley: The operation succeeded.
13:35 wildfire is there a list of various NFS related options I can set anywhere?
13:35 theron joined #gluster
13:36 Retenodus I use glusterfs with ovirt with 2 hosts with 1Gbit Ethernet cards. When I try to create a VM with a preallocated disk (10 GB), glusterfs uses all the bandwidth for the creation of the VM and consumes a lot of CPU (about 2 CPUs - I have 10 on my host), which takes my host down
13:36 kkeithley `gluster volume set help` will tell you everything, grep out the nfs-related ones
13:37 Retenodus Is there a way to limit the bandwidth allocated for creation of VM in gluster Volume ?
13:44 DV joined #gluster
13:50 sroy joined #gluster
13:53 vpshastry joined #gluster
13:55 pk1 joined #gluster
13:56 pk1 cfeller: ping
13:57 pk1 cfeller: I am working on a blocker issue today, so didn't come online
13:57 pk1 cfeller: Do you think we can debug the issue today, as I need to send this patch today?
13:57 pk1 cfeller: sorry, debug it tomorrow
14:00 pk1 cfeller: You are probably afk, I joined too late I think :-(
14:02 aixsyd_ joined #gluster
14:02 dalekurt_ joined #gluster
14:02 psyl0n joined #gluster
14:03 aixsyd Hey guys - anyone here have experience running Glusterfs on top of ZFS?
14:04 ababu joined #gluster
14:04 mattapperson joined #gluster
14:06 primechuck joined #gluster
14:07 kkeithley check this: http://comments.gmane.org/gmane.comp.file-systems.gluster.user/14142
14:07 glusterbot Title: Gluster filesystem users () (at comments.gmane.org)
14:08 ProT-0-TypE joined #gluster
14:09 cfeller pk1: sorry, I'm here.
14:10 cfeller pk1: we can do tomorrow, too.
14:11 aixsyd kkeithley: Interesting. good to know
14:11 pk1 cfeller: Thanks a lot man, I am extremely sorry again... :-(
14:18 TonySplitBrain aixsyd: i thought once about using Gluster on ZFS, saved some links.. https://github.com/zfsonlinux/zfs/issues/1648
14:18 TonySplitBrain aixsyd: http://ispire.me/glusterfs-on-top-of-linux-zfs/
14:18 * kkeithley thinks RMS cries a little every time someone uses ZFS on Linux
14:19 TonySplitBrain aixsyd: http://jamesCoyle.net/how-to/543-m
14:19 TonySplitBrain aixsyd: http://serverfault.com/questions/535847/
14:19 aixsyd TonySplitBrain: i love you <3
14:20 kkeithley s/cries/dies/
14:20 TonySplitBrain aixsyd: http://gluster.org/community/documentation/index.php/GlusterOnZFS
14:21 aixsyd perfect
14:21 TonySplitBrain aixsyd: thanks, but you better try to make good relationships with some internet search engine... ;-)
14:22 aixsyd Google and I had a bad breakup. We dont talk anymore. Google changed her number and got a restraining order against me :'(
14:22 social why not btrfs? especialy if one uses it as backend for gluster?
14:23 aixsyd social: i know nothing about it
14:23 robo joined #gluster
14:23 kkeithley If your data is important to you, I wouldn't use btrfs just yet
14:23 aixsyd it is important to me
14:24 pk1 left #gluster
14:24 social kkeithley: I use it for quite long time anything more specifix?
14:24 glusterbot Title: xattr=sa causes symlinks with large enough xattr data (like glusterfs uses) will corrupt symlink · Issue #1648 · zfsonlinux/zfs · GitHub (at github.com)
14:24 social *specific
14:24 glusterbot Title: GlusterFs + zfs xattr - Server Fault (at serverfault.com)
14:24 glusterbot What kkeithley meant to say was: * kkeithley thinks RMS dies a little every time someone uses ZFS on Linux
14:24 glusterbot Title: GlusterOnZFS - GlusterDocumentation (at gluster.org)
14:24 hagarth joined #gluster
14:25 kkeithley just that there's no fsck.btrfs. And the kernel fs guys here at Redhat anyway don't think it's ready for production systems.
14:26 kkeithley mainly, I think, because of the lack of fsck.btrfs
14:26 social kkeithley: everyone who likes his data probably doesn't care about the first FS where it lands but DOES backups
14:28 kkeithley Sure. If I was going to run gluster on top of btrfs I'd do replica 2 or geo-rep and use a different fs on the replica
14:28 TonySplitBrain aixsyd: If there are no special reasons, use XFS (and maybe LVM). It's recommended, it's probably the most widely used combo, RedHat uses it.
14:28 kkeithley But I trust the kernel fs guys here and if they don't trust it, I'm not going to recommend it.
14:29 aixsyd TonySplitBrain: I just worry about error. whats doign error checking in an xfs + gluster scenario?
14:29 TonySplitBrain aixsyd: or you should know some reasons why XFS is not good for your case.
14:30 aixsyd *doing
14:30 B21956 joined #gluster
14:30 aixsyd bitrot, etc
14:32 wildfire kkeithley: thanks
14:32 TonySplitBrain aixsyd: I never met that beast, but if you fear it, yes, ZFS with content checksumming is probably your only option.
14:32 aixsyd TonySplitBrain: Thats what I figured
14:32 aixsyd TonySplitBrain: someone asked a good question: How much data have you lost that you don't even know you've lost?
14:35 vpshastry left #gluster
14:43 sroy joined #gluster
14:43 glusterbot joined #gluster
14:43 TDJACR joined #gluster
14:43 tryggvil joined #gluster
14:44 Retenodus_ joined #gluster
14:47 failshell joined #gluster
14:49 glusterbot New news from newglusterbugs: [Bug 1049946] Possible problems with some strdup_r() calls. <https://bugzilla.redhat.com/show_bug.cgi?id=1049946>
14:50 jbrooks joined #gluster
14:55 zaitcev joined #gluster
14:55 dbruhn joined #gluster
14:56 kaptk2 joined #gluster
14:57 japuzzo joined #gluster
14:57 pseudo_ joined #gluster
14:58 ccha4 hum I got these Warning "0-glusterfs: Not able to get reserved ports, hence there is a possibility that glusterfs may consume reserved port"
15:00 kanagaraj joined #gluster
15:01 hagarth gluster community meeting starting in #gluster-meeting
15:01 ndk joined #gluster
15:03 kbsingh kkeithley: we really should. maybe lets try and setup a timeslot when people can be around and just thrash it out over an hour
15:04 kkeithley kbsingh: my calendar is generally pretty open. Suggest a time that works for you.
15:09 TonySplitBrain I need some help: my nearest goal is to setup CTDB on Gluster (v.3.4.1) volume (replica 2) on Debian Wheezy.
15:09 TonySplitBrain First step is to check clustered FS with ping_pong test
15:10 TonySplitBrain I'm using this manual: https://wiki.samba.org/index.php/Ping_pong
15:10 glusterbot Title: Ping pong - SambaWiki (at wiki.samba.org)
15:11 TonySplitBrain step 2, with '-m' arg, fails.
15:13 TonySplitBrain And no pointers, how it could be fixed... :-(
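For reference, the three steps from the linked Samba wiki page look roughly like this (flags as given on the wiki; `/mnt/gv0` is a hypothetical client mount point, and the commands are printed rather than executed so the sketch is safe to run anywhere):

```shell
# Number of nodes in the cluster; the wiki says to pass NODES+1
# as the lock count.
NODES=2
COUNT=$((NODES + 1))
MNT=/mnt/gv0   # hypothetical client mount point -- adjust

# Run the same command on every node at once and compare locks/sec:
STEP1="ping_pong $MNT/test.dat $COUNT"        # byte-range lock coherence
STEP2="ping_pong -rw $MNT/test.dat $COUNT"    # I/O coherence
STEP3="ping_pong -rwm $MNT/test.dat $COUNT"   # mmap coherence
printf '%s\n' "$STEP1" "$STEP2" "$STEP3"
```

The mmap step (`-m`) is the one TonySplitBrain reports failing; FUSE mounts are a common place for that step to break down.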
15:16 Technicool joined #gluster
15:19 glusterbot New news from newglusterbugs: [Bug 1049946] Possible problems with some strtok_r() calls. <https://bugzilla.redhat.co​m/show_bug.cgi?id=1049946> || [Bug 1049981] 3.5.0 Tracker <https://bugzilla.redhat.co​m/show_bug.cgi?id=1049981>
15:20 wushudoin joined #gluster
15:25 bugs_ joined #gluster
15:25 ccha4 hum on my server there is no /proc/sys/net/ipv4/ip_local_reserved_ports
15:25 ccha4 how can I fix that ?
15:28 TonySplitBrain ccha4: what OS, what kernel?
15:28 aixsyd TonySplitBrain: https://github.com/zfsonlinux/zfs/issues/1648  <-- so theres a fix for this - any idea where? Cant seem to find it.
15:28 glusterbot Title: xattr=sa causes symlinks with large enough xattr data (like glusterfs uses) will corrupt symlink · Issue #1648 · zfsonlinux/zfs · GitHub (at github.com)
15:28 wildfire TonySplitBrain: http://download.gluster.org/pub/gluster/glusterfs/doc/HA%20and%20Load%20Balancing%20for%20NFS%20and%20SMB.html - perhaps it'll help?
15:29 peem joined #gluster
15:31 ccha4 TonySplitBrain: Lucid
15:32 ccha4 2.6.32-51-server
15:33 ccha4 on precise there is the file
15:33 TonySplitBrain wildfire: no, nothing about ping_pong and locking
15:34 chirino_m joined #gluster
15:34 TonySplitBrain ccha4: can you try more recent kernel?
15:35 TonySplitBrain ccha4: i look on couple of my hosts, file is always in place
15:36 ccha4 even on lucid ?
15:37 TonySplitBrain aixsyd: sorry, i don't know, i decide to go without ZFS, so didn't look into the issue; maybe it's solved already..
15:37 ccha4 and same or older kernel ?
15:38 TonySplitBrain ccha4: my oldest one is 3.2.0-56 from Ubuntu
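The check being done by hand above can be scripted. As far as I recall the sysctl was added around kernel 2.6.35 (treat that version as an assumption), which would explain why Lucid's 2.6.32 lacks it while 3.2 has it:

```shell
# The knob glusterfs probes for when avoiding reserved ports.
F=/proc/sys/net/ipv4/ip_local_reserved_ports

if [ -e "$F" ]; then
    PRESENT=yes
    echo "knob present, current value: '$(cat "$F")'"
else
    PRESENT=no
    echo "knob absent -- kernel predates it (appeared around 2.6.35)"
fi
```

When the file is absent the glusterfs warning ccha4 quotes is cosmetic: gluster simply cannot check the reserved list, so it warns that it might pick a reserved port.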
15:39 social wq
15:39 CheRi joined #gluster
15:39 vpshastry joined #gluster
15:47 dewey joined #gluster
15:51 _Bryan_ joined #gluster
15:51 _Bryan_ Anyone seen an issue where at 127 days your gluster servers just hang?  These are RHEL6 and I am not saying the issue is in gluster..but it has only happened on my gluster servers.
15:54 Alpinist joined #gluster
15:55 nega0 joined #gluster
15:56 nega0 can't gluster 3.3 clients talk to 3.4 servers? or have i totally misinterpreted what i've read?
15:57 theron joined #gluster
16:04 daMaestro joined #gluster
16:07 juhaj joined #gluster
16:07 jobewan joined #gluster
16:08 jclift joined #gluster
16:08 pingitypong joined #gluster
16:09 jag3773 joined #gluster
16:10 P0w3r3d joined #gluster
16:14 TonySplitBrain joined #gluster
16:15 jag3773 joined #gluster
16:16 Ritter joined #gluster
16:16 Ritter greetings
16:17 Ritter I'm getting a lot of log entries for:
16:17 Ritter [2014-01-08 16:16:44.481497] W [socket.c:514:__socket_rwv] 0-media-client-17: readv failed (No data available)
16:17 Ritter [2014-01-08 16:16:44.481548] I [client.c:2097:client_rpc_notify] 0-media-client-17: disconnected
16:17 Ritter every 3 seconds on both servers, even though both show peer status as connected
16:23 zerick joined #gluster
16:35 tryggvil joined #gluster
16:35 TonySplitBrain joined #gluster
16:41 Mo_ joined #gluster
16:57 LoudNoises joined #gluster
17:00 johnbot11 joined #gluster
17:24 lpabon joined #gluster
17:24 dylan_ joined #gluster
17:28 ctria joined #gluster
17:30 vpshastry left #gluster
17:36 semiosis kkeithley: wildfire: i got the message.  whats the issue?
17:37 semiosis i'd expect there to be /etc/glusterfs/glusterd.vol, but the only reason i can think of for having /etc/glusterd is artifacts from an old version.  /etc/glusterd was replaced by /var/lib/glusterd and my packages dont have anything to do with that, should be the same on all distros
17:40 bennyturns joined #gluster
17:41 SFLimey joined #gluster
17:45 Staples84 joined #gluster
17:45 Crabby joined #gluster
17:46 Crabby I'm a bit confused about the getting started guide for gluster
17:47 Crabby I was following http://www.gluster.org/community/document​ation/index.php/Getting_started_configure, and I finished the configuration, but I'm confused as to what the final result is supposed to be
17:50 Crabby Are files suppsed to replicate between the two brick directories?
17:50 Crabby *supposed
17:51 wanye hi, did an upgrade from 3.2.5 to 3.4.2 on ubuntu 12.10lts (uninstall 325 then an install of 342 then copy the files from my backup of /etc/glusterd over to /var/lib/glusterd and start it), but during the *.upgrade=on run i get CMA: unable to get RDMA device list - any ideas where to start?
17:51 semiosis Crabby: yes but you're not supposed to write to the bricks directly.  you're supposed to mount the volume and do all operations through the client mount point
17:52 semiosis wanye: there is no 12.10lts
17:53 wanye sorry. 12.04lts
17:54 semiosis wanye: are you using infiniband?
17:54 wanye no, its just a hp micro server with a bunch of data disks
17:54 wanye sata even
17:55 semiosis hmm, not sure what to say, i havent encountered that issue (or even tried that upgrade before)
17:57 redbeard joined #gluster
17:57 wanye volume status lists my bricks, but under port is says n/a and in etc-glusterfs-glusterd.vol.log it keeps throwing out errors along the lines of:  E [glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown key: brick-0
17:58 semiosis did you restart after upgrading the packages?
17:58 Crabby semiosis: Thank you.  That worked.
17:58 semiosis Crabby: yw
18:00 wanye (a mate) did a kernel upgrade on his box, which broke gluster. so after doing some digging, i stopped gluster, uninstalled the old version using dpkg -r, then installed the new version using a repo and apt-get… once upgraded, it started automatically, so i killed it, then ran the upgrade=on command
18:03 Ritter_ is there any good docs on enabling SSL for server <-> server replication?
18:03 bala joined #gluster
18:04 semiosis wanye: there are other processes that aren't controlled by the glusterfs-server upstart job.  you'd need to either kill them all & restart glusterfs-server (which will respawn the missing procs) or just reboot the server, which I recommend if you're able to do that
18:04 semiosis ,,(processes)
18:04 glusterbot The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/F6jqx for more information.
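A small sketch to see which of the daemons from the factoid are up before killing them or rebooting (the three core process names are as listed above; note glustershd and the NFS daemon both run as `glusterfs` processes):

```shell
# Report running/not-running for each core gluster process name.
OUT=""
for name in glusterd glusterfsd glusterfs; do
    pids=$(pgrep -x "$name" 2>/dev/null | tr '\n' ' ')
    if [ -n "$pids" ]; then
        line="$name running (pids: $pids)"
    else
        line="$name not running"
    fi
    OUT="$OUT$line
"
done
printf '%s' "$OUT"
```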
18:04 wanye i *think* I've tried rebooting already, but ill do it again, just to make sure
18:05 jbrooks joined #gluster
18:06 semiosis wanye: are your bricks ext4 or xfs?
18:06 semiosis you might have encountered the ,,(ext4) bug
18:06 glusterbot The ext4 bug has been fixed in 3.3.2 and 3.4.0. Read about the ext4 problem at http://goo.gl/Jytba or follow the bug report here http://goo.gl/CO1VZ
18:06 wanye ext4
18:06 wanye (i think, ill double check when its rebooted)
18:06 jclift left #gluster
18:06 semiosis afaik ubuntu kernels <3.3.0 weren't affected
18:07 semiosis if your kernel upgrade was to >= 3.3.0, while you were running glusterfs 3.2.x, you may have been affected
18:07 wanye that sounds likely
18:07 semiosis however, afaik ubuntu precise uses kernel 3.2, not 3.3.0
18:07 semiosis but i guess anything is possible
18:09 wanye its currently: Linux microserver 3.2.0-58-generic #88-Ubuntu SMP Tue Dec 3 17:37:58 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
18:09 wanye upgraded from 3.2.0-40-generic
18:09 semiosis well, it's possible the ext4 change was backported, i really dont know
18:10 semiosis but i'd guess that's unlikely
18:10 semiosis it's just the only thing that comes to mind when you said a kernel upgrade broke gluster
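The kernel comparison being reasoned through above can be scripted; a sketch using the version string from wanye's box (swap in `$(uname -r)` locally):

```shell
# Kernel string from the log; locally you would use "$(uname -r)".
KVER_FULL="3.2.0-58-generic"
KVER=${KVER_FULL%%-*}           # -> 3.2.0

# sort -V orders version strings; whichever sorts first is older.
OLDEST=$(printf '%s\n' "$KVER" "3.3.0" | sort -V | head -n1)

if [ "$OLDEST" = "3.3.0" ]; then
    echo "kernel $KVER is >= 3.3.0: ext4 d_off change present"
else
    echo "kernel $KVER is < 3.3.0: predates the ext4 change"
    # -> this is what the script prints for 3.2.0
fi
```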
18:11 wanye after the upgrade, it was maxing out the cpu constantly (did it for ~48 hours before i got involved)
18:11 wanye so at that point i hit google, and figured it was (probably) that
18:13 wanye dunno if these log snippets (and the order things were done) is any use… http://nottsdj.co.uk/gubbins/gluster-errors.rtf
18:14 semiosis did you just reboot the server?  try the upgrade xlator command again
18:15 semiosis just guessing here btw
18:15 wanye the processes are all running, should i kill them and then run it?
18:16 semiosis dont kill, just run
18:17 wanye oh. that seems to have run ok (no errors output to screen)
18:21 pingitypong joined #gluster
18:22 dannyroberts joined #gluster
18:24 wanye http://pastebin.com/raw.php?i=FwTLGHDx  is the results of running xlator
18:24 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
18:24 wanye after that the log is just full of E [socket.c:2788:socket_connect] 0-management: connection attempt failed (Connection refused)
18:24 ira joined #gluster
18:32 ira joined #gluster
18:45 robos joined #gluster
19:06 TonySplitBrain wanye: 3.4 uses new TCP port
19:07 semiosis ,,(ports)
19:07 glusterbot glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
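As a sketch, the factoid translates into firewall rules along these lines for a 3.4 server (brick ports counted up from 49152; the rules are only printed here, not applied -- adjust `BRICKS` to the number of bricks on the server and add any NLM/UDP details your setup needs):

```shell
# How many bricks this server hosts; 3.4 brick ports start at 49152.
BRICKS=2
BRICK_LAST=$((49152 + BRICKS - 1))

# glusterd (24007, +24008 for rdma), bricks, gluster nfs + NLM,
# portmap/rpcbind, and nfs itself -- per the port factoid above.
RULES="-A INPUT -p tcp --dport 24007:24008 -j ACCEPT
-A INPUT -p tcp --dport 49152:$BRICK_LAST -j ACCEPT
-A INPUT -p tcp --dport 38465:38468 -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp --dport 2049 -j ACCEPT"
printf '%s\n' "$RULES"
```

Note deleted volumes do not reset the brick-port counter, so long-lived servers may need a wider brick range than `BRICKS` suggests.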
19:08 Ritter_ semiosis: any suggestions on docs or guide to follow setting up SSL for gluster?
19:08 semiosis Ritter_: no idea, sorry
19:12 nega0 semiosis: any chance of lucid debs 3.4 popping up in your ppa?
19:12 semiosis i really hope not
19:12 semiosis fwiw, precise is very nice
19:12 semiosis you should look into it
19:13 TonySplitBrain Ritter_afk: https://lists.gnu.org/archive/html/gluster-devel/2012-11/msg00014.html is probably all docs out there..
19:13 glusterbot Title: Re: [Gluster-devel] Questions on SSL in 3.4.0qa2 (at lists.gnu.org)
19:14 JoeJulian Ritter_afk: There are no docs for server to server replication in part because it's the clients that do the replicating.
19:14 nega0 yeah, but ive got a cluster that's going to remain on lucid for the forseeable future
19:15 JoeJulian nega0: I've had 3.3 clients talking to 3.4 servers. What problem are you having with that?
19:16 Ritter_ TonySplitBrain: been reading that, and have my certs, but the brick logs are showing relentless complaining
19:16 Ritter_ so its not too happy about my certs apparently
19:17 nega0 JoeJulian: client x.x.x.x doesnt support required op-version (2) rejecting volfile reequest
19:17 TonySplitBrain Ritter_: are you sure you need this (unstable) SSL?
19:18 Ritter_ I have 2 AWS/EC2 instances in seperate availability zones, I'd like to know the replication is encrypted for traffic between them
19:18 TonySplitBrain Ritter_: can you use VPN?
19:19 Ritter_ perhaps
19:19 Ritter_ or ssh tunnel etc
19:20 TonySplitBrain Ritter_: or some multipoint-capable VPN, lice TINC
19:20 TonySplitBrain s/lice/like/
19:20 glusterbot What TonySplitBrain meant to say was: Ritter_: or some multipoint-capable VPN, like TINC
19:20 JoeJulian nega0: I was wondering about that. I'm not sure how the servers determine when it's safe to upgrade the op-version. You can set it back to 1 in the volume's info file, ie. "/var/lib/glusterd/vols/myvol/info". There's op-version and client-op-version. That's all I know about that subject though. Maybe I'll look in to it more later.
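A cautious way to try what JoeJulian describes -- shown here against a local sample file, not the live `/var/lib/glusterd/vols/<vol>/info` (edit the real file only with glusterd stopped; the sample's other fields are illustrative, the two op-version field names are as quoted above):

```shell
# Work on a throwaway copy of the volume info file format.
cat > info.sample <<'EOF'
type=2
count=2
op-version=2
client-op-version=2
EOF

# Drop both fields back to 1 so older (3.3) clients are accepted:
sed -i 's/^op-version=.*/op-version=1/; s/^client-op-version=.*/client-op-version=1/' info.sample
grep 'op-version' info.sample
```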
19:20 aixsyd JoeJulian: ever run zfs with gluster?
19:21 JoeJulian aixsyd: nope. zfs isn't considered production ready from what I've heard.
19:21 aixsyd wat? zfs in general or with glusterfs?
19:21 JoeJulian Linux based zfs.
19:22 JoeJulian I guess it's great on solaris, but then you're running solaris...
19:22 aixsyd "2013-03-27: 0.6.1 ZFSOnLinux (ZoL) is now ready for wide scale deployment on everything from desktops to super computers."
19:22 JoeJulian Plus, it's way easier for me to get more storage than to worry about it.
19:23 TonySplitBrain aixsyd: you know, it's the ZFS-on-Linux porters who say that it's production ready..
19:23 JoeJulian Maybe it's just me being old, but major version numbers of 0 have always meant unstable to me.
19:23 nega0 agreed
19:24 TonySplitBrain aixsyd: but you know, Gluster team also declares Gluster as production ready, and now we all come here to ask questions...
19:25 aixsyd burrrrn
19:25 JoeJulian Also, in March of last year I think, ZoL didn't yet allow you to set xattrs on symlinks. That's since been fixed but that's still a posix fault at the time they announced that.
19:26 JoeJulian ... and, certainly, I'm not trying to make the claim that it's not ready. I'm just not ready to, or have any need to, try it out.
19:27 aixsyd gotcha
19:28 wanye back again… so the port issue - is that something i need to change in the config to point to the new ports, or does iptables need some new config setting up? (as it was all local, i didn't think the firewall would affect things)
19:29 JoeJulian ~pasteinfo | wanye
19:29 glusterbot wanye: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
19:30 rcoup joined #gluster
19:30 wanye http://ur1.ca/gdcdm
19:30 glusterbot Title: #66809 Fedora Project Pastebin (at ur1.ca)
19:31 aixsyd JoeJulian: I honestly think I'm going to skip zfs. it feels like a layer of complication for something that i dont is a really big issue
19:31 aixsyd *i dont think
19:31 rcoup left #gluster
19:31 aixsyd but maybe someone here would know - how does gluster handle file corruption?
19:32 JoeJulian wanye: Ok, now does "ping microserver" ping 127.0.0.1?
19:32 wanye it does, yes
19:33 JoeJulian aixsyd: http://lists.gnu.org/archive/html/g​luster-devel/2013-12/msg00153.html
19:33 glusterbot Title: [Gluster-devel] bit rot support for glusterfs (at lists.gnu.org)
19:33 aixsyd oh shit - nice. thats pretty recent though. looks like it wont be ready for a while?
19:33 JoeJulian wanye: Well okay then... I think you're right and unless your iptables is completely fubar, it shouldn't be that.
19:34 JoeJulian aixsyd: maybe 3.6
19:34 aixsyd any eta for 3.5?
19:34 JoeJulian yes
19:34 JoeJulian http://www.gluster.org/community/documentation/index.php/Planning35
19:35 glusterbot Title: Planning35 - GlusterDocumentation (at www.gluster.org)
19:36 aixsyd jeez - thats right around the corner
19:36 JoeJulian Beta testing starts friday.
19:37 aixsyd JoeJulian: how often have you encountered bitrot?
19:37 JoeJulian Never that I've noticed. :D
19:37 aixsyd ;)
19:38 JoeJulian We have so many backups upon backups, and really all our historical files are for would-be audits, so I'm not all that worried.
19:39 aixsyd nice
19:42 pingitypong .
19:43 TonySplitBrain JoeJulian: i saw you once give some advice to someone trying to set up CTDB (here http://irclog.perlgeek.de/gluster/2012-10-15#i_6064193 )
19:43 glusterbot Title: IRC log for #gluster, 2012-10-15 (at irclog.perlgeek.de)
19:44 TonySplitBrain JoeJulian: i've got the very same problem, asked here today, no one answered..
19:47 semiosis Ritter_: openvpn
19:47 semiosis you could use that to encrypt traffic between gluster machines
19:48 semiosis might be kinda complicated tho
19:51 TonySplitBrain semiosis: why complicated?
19:54 wanye just rebooted again to check something else, and after it was back up, i tried to remount but got "mount failed. please check the logfile" … and now I'm seeing this in gluster.log: http://fpaste.org/66818/21063913/
19:54 glusterbot Title: #66818 Fedora Project Pastebin (at fpaste.org)
19:55 _Bryan_ joined #gluster
19:59 JoeJulian TonySplitBrain: I guess I didn't scroll back far enough...
20:00 JoeJulian Ritter_: I use libreswan or strongswan for encrypting traffic between sites.
20:01 JoeJulian wanye: "gluster volume status" maybe?
20:03 wanye http://fpaste.org/66822/11378138/
20:03 glusterbot Title: #66822 Fedora Project Pastebin (at fpaste.org)
20:04 JoeJulian Well that would do it.
20:05 JoeJulian Your bricks aren't started. Try stopping and starting your volume. If that doesn't fix it, you'll need to look at your brick logs.
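The diagnosis above comes straight from the Online column of `gluster volume status`. A small Python sketch of that check, which flags any brick not marked online (the sample output and column layout below are assumptions approximating 3.4-era status output, not copied from wanye's paste):

```python
def offline_bricks(status_output):
    """Given the text of `gluster volume status <vol>`, return the brick
    paths whose Online column is not 'Y'.

    Assumed column layout (whitespace-separated, 3.4-era output):
    'Brick <host:/path> <port> <Y|N> <pid>'.
    """
    offline = []
    for line in status_output.splitlines():
        parts = line.split()
        if len(parts) >= 4 and parts[0] == "Brick" and parts[3] != "Y":
            offline.append(parts[1])
    return offline

# Hypothetical status output for illustration:
sample = """Status of volume: gv0
Gluster process                              Port    Online  Pid
Brick server1:/export/brick1                 49152   Y       1234
Brick server2:/export/brick1                 N/A     N       N/A
NFS Server on localhost                      2049    Y       1240"""

print(offline_bricks(sample))  # → ['server2:/export/brick1']
```

Each brick reported here maps to its own log under /var/log/glusterfs/bricks/, which is where JoeJulian suggests looking next if a stop/start doesn't bring it up.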
20:06 semiosis TonySplitBrain: potentially lots of configs & connections to manage
20:08 TonySplitBrain semiosis: why lots of config? Ritter_ has just 2 nodes
20:08 semiosis well then i suppose not much config for Ritter_
20:09 wanye http://ur1.ca/gdcl0
20:09 glusterbot Title: #66825 Fedora Project Pastebin (at ur1.ca)
20:09 wanye brb
20:10 TonySplitBrain semiosis: and there is a better VPN tool for multi-node setups: tinc
20:10 semiosis neat
20:10 JoeJulian wanye: Looks like either your bricks aren't mounted, or you formatted them.
20:17 samppah does performance.flush-behind affect server or client process?
20:18 _pol joined #gluster
20:24 Technicool joined #gluster
20:24 TonySplitBrain any ideas how to get fcntl locks working?
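Before digging into gluster-side lock options, it's worth confirming whether the mount supports POSIX (fcntl) locks at all; on a FUSE mount without lock support, the lock call itself fails. A minimal probe, assuming only that the path given is a writable directory on the mount in question:

```python
import fcntl
import os
import tempfile

def fcntl_lock_works(path):
    """Try to take and release an exclusive POSIX (fcntl) lock on a
    scratch file under `path`.  On a mount without lock support the
    lockf call raises OSError (often ENOLCK)."""
    fd, name = tempfile.mkstemp(dir=path)
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        fcntl.lockf(fd, fcntl.LOCK_UN)
        return True
    except OSError:
        return False
    finally:
        os.close(fd)
        os.unlink(name)

# e.g. fcntl_lock_works("/mnt/gluster")  # the mount path is an assumption
```

If this returns False on the gluster mount but True on a local filesystem, the problem is in the mount/translator stack rather than in the application taking the locks.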
20:35 andreask joined #gluster
21:06 plarsen joined #gluster
21:06 psyl0n joined #gluster
21:18 chirino joined #gluster
21:28 ira joined #gluster
22:03 diegows joined #gluster
22:05 tryggvil joined #gluster
22:16 KaZeR joined #gluster
22:16 jobewan joined #gluster
22:16 qdk joined #gluster
22:17 robos joined #gluster
22:24 _pol joined #gluster
22:51 psyl0n joined #gluster
22:52 _pol joined #gluster
23:00 Ritter_ does gluster work with any encrypted filesystems?
23:08 cfeller joined #gluster
23:13 bala joined #gluster
23:17 KaZeR joined #gluster
23:17 purpleidea Ritter_: sure, why not. it has to support xattrs of course....
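purpleidea's caveat is the key requirement: the brick filesystem must support extended attributes, since gluster stores its metadata in trusted.* xattrs. A quick Linux-only probe (testing user.* xattrs as a proxy you can run without root; the attribute name is made up for the test):

```python
import os
import tempfile

def supports_user_xattrs(path):
    """Check whether the filesystem holding `path` accepts user extended
    attributes.  Gluster bricks need xattr support (it keeps metadata in
    trusted.* attributes); user.* support is a reasonable unprivileged
    proxy, though some filesystems treat the two namespaces differently."""
    if not hasattr(os, "setxattr"):
        return False  # os.setxattr is Linux-only
    fd, name = tempfile.mkstemp(dir=path)
    os.close(fd)
    try:
        os.setxattr(name, b"user.gluster-probe", b"1")
        return os.getxattr(name, b"user.gluster-probe") == b"1"
    except OSError:
        return False  # e.g. ENOTSUP on filesystems without xattr support
    finally:
        os.unlink(name)
```

The symlink wrinkle JoeJulian mentioned earlier (ZoL not allowing xattrs on symlinks) can be probed the same way with `os.setxattr(..., follow_symlinks=False)` on a symlink instead of a regular file.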
23:19 clyons joined #gluster
23:21 ndk joined #gluster
23:25 wildfire joined #gluster
23:26 badone joined #gluster
23:28 purpleidea What does 'Accepted peer request', state == 4 mean?
23:29 purpleidea I've seen this on host1, about host2, whereas host2 sees host1 just fine...
23:29 purpleidea restarting glusterd seems to fix this...
23:36 wildfire joined #gluster
23:43 cfeller joined #gluster
23:48 robo joined #gluster
23:54 ctria joined #gluster
23:55 wildfire joined #gluster
