
IRC log for #gluster, 2013-10-22


All times are shown in UTC.

Time Nick Message
00:03 johnbot11 joined #gluster
00:08 vpshastry joined #gluster
00:19 vpshastry left #gluster
00:20 davidbierce joined #gluster
00:29 jbrooks Hey guys, when I run gluster volume heal $volname info for a replicated gluster volume, any files returned are files that need to be replicated -- is that right?
00:32 JoeJulian yes*
00:32 jbrooks oooooh
00:33 jbrooks intrigue
00:33 JoeJulian Though it may be just metadata.
00:33 JoeJulian It's showing a state where the xattrs are marked for update. That could be a transient condition.
00:33 jbrooks I have a volume across two nodes, one brick in each, and two files are listed under each brick
00:34 jbrooks and they aren't going away
00:34 jbrooks the same two files for each
00:34 JoeJulian That's when I do a volume heal $vol info {heal-failed,split-brain}
00:35 _pol_ joined #gluster
00:36 jbrooks gluster volume heal $vol info heal-failed lists zero entries
00:36 jbrooks same for split-brain
00:36 JoeJulian have you done a "heal $vol" without the info?
00:37 jbrooks Yeah, a few times
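For reference, a minimal sketch of the heal-inspection commands being discussed, using a placeholder volume name "myvol" in place of $volname/$vol:

    gluster volume heal myvol info               # files whose xattrs are marked for self-heal (may be transient)
    gluster volume heal myvol info heal-failed   # files the self-heal daemon tried and failed to heal
    gluster volume heal myvol info split-brain   # files with conflicting copies on the replicas
    gluster volume heal myvol                    # trigger a heal of the files currently marked
    gluster volume heal myvol full               # crawl the whole volume rather than only marked files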
00:37 JoeJulian Ah, crap... I just stayed too late and missed my train. I hate when I lose track of time like that.
00:37 torrancew JoeJulian: :/
00:37 jbrooks yeah, that's a dangout
00:37 torrancew You in the bay area? if so, that was a terrible mistake, given the BART strike
00:38 JoeJulian No, Seattle. I can take a bus but the train looks like http://t.co/LceFF68ric
00:38 glusterbot Title: Twitter / JoeCyberGuru: I do love this view on my train ... (at t.co)
00:38 torrancew Ah, yes
00:39 torrancew that is quite nice
00:42 JoeJulian http://www.urbandictionary.com/define.php?term=Dangout
00:42 glusterbot <http://goo.gl/81TNvj> (at www.urbandictionary.com)
00:43 bayamo joined #gluster
00:43 jbrooks JoeJulian: heh
00:43 jbrooks I guess mine is an extremely local usage
00:44 jbrooks like in my house
00:44 jbrooks like a dang-a-palooza
00:51 yinyin joined #gluster
00:51 calum_ joined #gluster
01:10 vpshastry joined #gluster
01:17 Skaag joined #gluster
01:25 _pol joined #gluster
01:39 kevein joined #gluster
01:52 harish joined #gluster
02:23 bala joined #gluster
02:45 johnbot11 joined #gluster
02:52 bharata-rao joined #gluster
03:16 kshlm joined #gluster
03:20 _pol joined #gluster
03:23 RameshN joined #gluster
03:24 kanagaraj joined #gluster
03:26 johnbot11 joined #gluster
03:32 shubhendu joined #gluster
03:38 _pol joined #gluster
03:39 johnbot11 joined #gluster
03:40 dusmant joined #gluster
03:44 ppai joined #gluster
03:44 spandit joined #gluster
03:51 sgowda joined #gluster
04:00 itisravi joined #gluster
04:00 shruti joined #gluster
04:02 mohankumar joined #gluster
04:04 sgowda joined #gluster
04:10 meghanam joined #gluster
04:10 meghanam_ joined #gluster
04:11 dusmant joined #gluster
04:13 asias joined #gluster
04:29 ndarshan joined #gluster
04:34 vpshastry joined #gluster
04:34 ngoswami joined #gluster
04:35 dusmant joined #gluster
04:37 shylesh joined #gluster
04:38 _pol joined #gluster
04:38 rjoseph joined #gluster
04:42 Shdwdrgn joined #gluster
04:45 spandit joined #gluster
04:49 johnbot11 joined #gluster
04:54 nshaikh joined #gluster
05:00 ababu joined #gluster
05:10 hagarth1 joined #gluster
05:13 lalatenduM joined #gluster
05:23 spandit joined #gluster
05:23 bala joined #gluster
05:29 kevein joined #gluster
05:30 raghu joined #gluster
05:39 rastar joined #gluster
05:40 aravindavk joined #gluster
05:42 CheRi joined #gluster
05:47 shruti joined #gluster
05:47 JoeJulian jbrooks: Are you still around, and want to look at those files?
05:52 ajha joined #gluster
05:52 ppai joined #gluster
05:58 glusterbot New news from newglusterbugs: [Bug 969461] RFE: Quota fixes <http://goo.gl/XFSM4>
05:59 johnbot11 joined #gluster
06:04 ppai joined #gluster
06:08 anands joined #gluster
06:09 harish joined #gluster
06:16 manik joined #gluster
06:18 atrius joined #gluster
06:18 sgowda joined #gluster
06:22 kshlm joined #gluster
06:39 kshlm joined #gluster
06:39 psharma joined #gluster
06:53 VerboEse joined #gluster
06:54 sgowda joined #gluster
06:55 vshankar joined #gluster
06:55 hateya joined #gluster
06:56 keytab joined #gluster
06:58 atrius joined #gluster
06:58 eseyman joined #gluster
06:59 ekuric joined #gluster
07:09 ricky-ticky joined #gluster
07:12 Dga joined #gluster
07:12 ctria joined #gluster
07:17 ngoswami joined #gluster
07:28 glusterbot New news from newglusterbugs: [Bug 1020848] Enable per client logging for gluster shares served by Samba <http://goo.gl/i7zH1R>
07:36 andreask joined #gluster
07:36 edward1 joined #gluster
07:38 bkrram joined #gluster
07:39 bkrram Hi, I'm getting an "Another transaction is in progress. Please try again after sometime." error (the CLI's <opErrstr>) when I try to initiate a replace-brick. I suspect this may be from a zombied operation. How do I get around this?
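bkrram's question went unanswered before he left; as a hedged aside (an assumption, not an answer given in the channel), that error usually means another peer still holds the cluster-wide management lock, and a common first step is:

    gluster volume status            # check whether another operation really is still running
    service glusterd restart         # on the peer holding the stale lock; glusterd is only the management daemon, brick processes keep serving data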
07:44 _pll_ joined #gluster
07:44 bkrram left #gluster
07:44 bkrram joined #gluster
07:45 harish joined #gluster
07:46 92AAAONRS joined #gluster
07:48 blook joined #gluster
07:50 _pll_ joined #gluster
07:56 giannello joined #gluster
07:57 bkrram left #gluster
07:59 andreask joined #gluster
08:07 giannello hey all, I have a problem with replace-brick on gluster 3.4.1
08:08 giannello after executing the replace-brick command, I can see some files in the .glusterfs folder in the destination brick
08:08 giannello but the migration never completes, and I have "transport.address-family not specified" errors in the brick logs
08:09 giannello if I check the volume status, I can see task "Replace brick" with status "completed"
08:10 dneary joined #gluster
08:10 giannello and executing an abort command gives "commit failed on localhost. Please check the log file for more details"
08:11 Skaag joined #gluster
08:23 shireesh joined #gluster
08:32 dneary joined #gluster
08:43 giannello oh, looks like using "replace-brick [volume] [brick] [brick] start" is deprecated. using "commit force" worked.
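A sketch of what giannello appears to have ended up doing, with placeholder volume and brick names (assumptions, not taken from the log):

    # deprecated data-migrating form in 3.4:
    gluster volume replace-brick myvol oldsrv:/bricks/b1 newsrv:/bricks/b1 start
    # what worked here: swap the brick in immediately and let self-heal repopulate it
    gluster volume replace-brick myvol oldsrv:/bricks/b1 newsrv:/bricks/b1 commit force
    gluster volume heal myvol full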
08:50 ndarshan joined #gluster
08:55 vimal joined #gluster
08:55 hagarth joined #gluster
08:55 sgowda joined #gluster
09:01 shireesh joined #gluster
09:07 tryggvil joined #gluster
09:13 kanagaraj_ joined #gluster
09:22 DV__ joined #gluster
09:33 dusmant joined #gluster
09:36 manik joined #gluster
09:40 shyam joined #gluster
09:41 yinyin joined #gluster
09:43 manik joined #gluster
09:45 manik joined #gluster
09:52 calum_ joined #gluster
09:53 ziiin joined #gluster
09:56 kanagaraj joined #gluster
10:03 meghanam joined #gluster
10:03 stickyboy joined #gluster
10:03 stickyboy joined #gluster
10:03 meghanam_ joined #gluster
10:05 nshaikh joined #gluster
10:07 sgowda joined #gluster
10:14 manik joined #gluster
10:20 dneary joined #gluster
10:22 kanagaraj joined #gluster
10:23 dusmant joined #gluster
10:25 manik joined #gluster
10:39 F^nor joined #gluster
10:40 ekuric joined #gluster
10:41 manik joined #gluster
10:54 ababu joined #gluster
10:54 dusmant joined #gluster
10:55 kanagaraj joined #gluster
10:57 vpshastry1 joined #gluster
11:00 rastar joined #gluster
11:10 diegows joined #gluster
11:10 andreask joined #gluster
11:11 rm joined #gluster
11:25 manik joined #gluster
11:25 manik joined #gluster
11:29 samsamm joined #gluster
11:29 shubhendu joined #gluster
11:37 mohankumar joined #gluster
11:39 fracky joined #gluster
11:39 dusmant joined #gluster
11:40 kevein joined #gluster
11:42 kanagaraj joined #gluster
11:44 shubhendu joined #gluster
11:47 hagarth joined #gluster
11:48 Alpinist joined #gluster
11:48 ndarshan joined #gluster
11:57 itisravi joined #gluster
11:57 rastar joined #gluster
12:05 hateya joined #gluster
12:10 rcheleguini joined #gluster
12:11 hateya joined #gluster
12:17 dusmant joined #gluster
12:17 vpshastry1 joined #gluster
12:20 manik joined #gluster
12:21 vpshastry joined #gluster
12:24 fracky left #gluster
12:24 ababu joined #gluster
12:27 onny1 joined #gluster
12:33 sgowda joined #gluster
12:34 yinyin joined #gluster
12:37 hngkr joined #gluster
12:39 hngkr joined #gluster
12:41 JoeJulian @ping-timeout
12:41 glusterbot JoeJulian: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
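The timeout glusterbot describes corresponds to the network.ping-timeout volume option; a minimal sketch of tuning it, with "myvol" as a placeholder volume name (42 seconds is the default):

    gluster volume set myvol network.ping-timeout 42
    gluster volume info myvol      # the option shows up under "Options Reconfigured" once set explicitly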
12:43 B21956 joined #gluster
12:46 kanagaraj_ joined #gluster
12:50 shireesh joined #gluster
12:59 eseyman joined #gluster
13:07 davidbierce joined #gluster
13:08 partner any chance to get the dsc and tar.gz files for 3.3.2 made available similarly to 3.3.1 (http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/Debian/)? it would ease up building the package for squeeze
13:08 glusterbot <http://goo.gl/AwJsw> (at download.gluster.org)
13:09 F^nor joined #gluster
13:10 partner (it does no harm to make them available for upcoming releases either, 3.4 series etc)
13:10 meghanam joined #gluster
13:10 meghanam_ joined #gluster
13:11 DV__ joined #gluster
13:12 shylesh joined #gluster
13:14 JoeJulian hmm...
13:14 bennyturns joined #gluster
13:14 dneary joined #gluster
13:14 JoeJulian @qa releases
13:14 glusterbot JoeJulian: The QA releases are available at http://bits.gluster.com/pub/gluster/glusterfs/ -- RPMs in the version folders and source archives for all versions under src/
13:15 JoeJulian interesting... I'll ask hagarth.
13:16 JoeJulian partner: Ah, no I won't... :D It's http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.2/glusterfs-3.3.2.tar.gz
13:16 glusterbot <http://goo.gl/9bU5fu> (at download.gluster.org)
13:17 JoeJulian Oh, you mean the dsc too... ok, maybe I will...
13:18 rm hello
13:18 glusterbot rm: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:19 rm I am trying out GlusterFS for the first time
13:19 rm made a two-node volume as described on http://www.gluster.org/community/documentation/index.php/QuickStart
13:19 glusterbot <http://goo.gl/OEzZn> (at www.gluster.org)
13:20 rm then simulated a crash of one node by killing the glusterfs daemon on it, making some changes on the FS from a mount on the other node, then restarting the daemon on the first one
13:20 rm aaand it's been 40 minutes after that but those changes are not synchronized to the first node's storage
13:20 rm how is it supposed to work?
13:21 JoeJulian Are you modifying the brick directly, or making changes through a client mountpoint?
13:21 tryggvil joined #gluster
13:22 rm through a client mountpoint
13:23 JoeJulian Does "gluster volume heal $vol info" show any pending heals?
13:26 shireesh joined #gluster
13:26 rm I don't seem to have "volume heal" as a recognized command
13:26 JoeJulian @latest
13:26 glusterbot JoeJulian: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
13:26 JoeJulian You must be using an old version.
13:27 rm actually the problem seems to be that the daemon now fails to start
13:27 rm I was under a wrong impression that it's running
13:27 rm there are no errors on the console whatsoever
13:27 rm only in the log file
13:27 rm thanks, will keep digging
13:28 JoeJulian partner: You can get the dsc from semiosis' launchpad stuff, like from https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.3/+files/glusterfs_3.3.1-ubuntu1%7Elucid9.dsc
13:28 glusterbot <http://goo.gl/RL7i9S> (at launchpad.net)
13:28 JoeJulian rm: Let me know if I can help with that.
13:29 JoeJulian /var/log/glusterfs/etc-glusterfs-glusterd.vol.log /should/ tell you what its problem is. Feel free to fpaste it if you need another pair of eyes.
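A short sketch of the checks implied above when glusterd refuses to start; the log path is the one JoeJulian names, while the service name and the "gv0" volume name are assumptions that vary by distro and setup:

    tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log   # usually states why startup failed
    service glusterd status          # "glusterfs-server" on Debian/Ubuntu
    gluster peer status              # once glusterd is back, confirm the peers see each other
    gluster volume heal gv0 info     # then check for pending heals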
13:30 glusterbot New news from newglusterbugs: [Bug 1021998] nfs mount via symbolic link does not work <http://goo.gl/H3C8W2>
13:30 DV__ joined #gluster
13:36 F^nor joined #gluster
13:38 vpshastry joined #gluster
13:43 rjoseph joined #gluster
13:45 chirino joined #gluster
13:46 ndk joined #gluster
14:00 hagarth joined #gluster
14:10 DV_ joined #gluster
14:13 DV__ joined #gluster
14:13 bala joined #gluster
14:15 F^nor joined #gluster
14:16 daMaestro joined #gluster
14:16 partner JoeJulian: thanks, but the dsc depends on the files around it (it has hashes) so only the originals qualify. i'm just trying to avoid the effort of starting from the very beginning of it all, as those files do exist somewhere, just not visibly
14:17 partner i would simply need to build 3.3.2 for Squeeze as i can't upgrade the server to wheezy right now but it depends on a gluster mount point.. but as the remote is 3.4.0 it does not mount with a 3.3.1 client
14:17 partner not sure of the 3.3.2 either but at least i could try it out. and 3.4.1, too
14:18 bala joined #gluster
14:19 JoeJulian I'm pretty sure the dscs don't exist for squeeze. semiosis?
14:19 partner they do, i asked for them last time and someone was kind enough to drop them onto the webserver (as can be seen from my link)
14:20 partner http://goo.gl/AwJsw
14:20 glusterbot Title: Index of /pub/gluster/glusterfs/3.3/3.3.1/Debian (at goo.gl)
14:20 JoeJulian Oh, ok. I thought that was from back when they were auto-building debs
14:20 JoeJulian btw... the 3.3 client /should/ be able to mount 3.4 volumes.
14:20 partner it does absolutely no harm to provide those too, it makes people's lives so much easier, we are basically just repeating now what you've done already
14:21 partner i know, that's what it says in many places, well, it doesn't
14:21 JoeJulian I thought semiosis was maintaining those.
14:21 partner at least if you have client: debian squeeze 3.3.1 and server debian wheezy 3.4.0
14:22 JoeJulian In fact, I had to mount 3.4 volumes with a 3.3 client in EL5 due to some bug (which I've already forgotten)
14:22 partner i trust it works on some combinations but mine doesn't. i have some paste somewhere from the client and server..
14:22 partner found it: http://pastie.org/8393039
14:22 glusterbot Title: #8393039 - Pastie (at pastie.org)
14:23 partner let me know if it seems i'm doing something wrong here.. though that does not reveal much..
14:23 partner a "simple" volume, 4 servers with two bricks on each, replica 2
14:23 JoeJulian Interesting.
14:24 partner it was that "op-version" thingy i read about somewhere but i already forgot the details of it
14:24 franc joined #gluster
14:25 JoeJulian Now I have to find out what I get from changing that. My /var/lib/glusterd/glusterd.info has "operating-version=1"
14:25 partner oh, let me check mine
14:25 JoeJulian Which I suspect you could also set if you want to use your 3.3 clients.
14:25 kkeithley1 joined #gluster
14:26 partner operating-version=2
14:26 partner that's what i have on the server by default
14:26 JoeJulian Makes sense. New server installs would have that set. Mine was an upgrade.
14:28 wushudoin joined #gluster
14:28 partner i'm not using any new special features, just a plain simple volume here, so i guess i could lower that down to 1 to make it compatible with a 3.3.n client?
14:28 partner all defaults and stuff so i assume i am not using anything new.
14:28 JoeJulian Reading through source now, but yeah. That's the way it seems to me.
14:29 partner http://www.gluster.org/community/documentation/index.php/Features/Opversion found that one
14:29 glusterbot <http://goo.gl/L29fA> (at www.gluster.org)
14:33 JoeJulian Ok, so operating-version is a feature in progress.
14:33 partner and in use obviously..
14:34 JoeJulian So for now if you want interoperability with 3.3, you need to set it to 1. In future releases, it will handshake and choose the opver that is compatible (afaict).
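A sketch of the change being discussed, on the assumption that no 3.4-only features are in use; this reflects the conversation above, not an official downgrade procedure:

    # on every server in the trusted pool
    service glusterfs-server stop     # Debian/Ubuntu service name; plain "glusterd" on EL
    sed -i 's/^operating-version=2$/operating-version=1/' /var/lib/glusterd/glusterd.info
    service glusterfs-server start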
14:35 ababu joined #gluster
14:37 uebera|| joined #gluster
14:38 partner hmm ok thanks
14:39 partner need to try it out to see if it would work. currently i have an intermediate box in between the computing cluster and the client servers that turns the glusterfs volume mount into an nfs mount to be able to mount it for the clients, that is quite a glue IMHO :)
14:40 JoeJulian If you're going to mount via nfs, why not just go directly to a server?
14:41 partner i'm going to mount with glusterfs as soon as i get it working and drop the nfs which is just temporarily and (given its nature) a bit hardcoded solution currently
14:41 JoeJulian iirc, there was some sort of deadlock due to kernel memory locking when sharing a fuse filesystem via nfs.
14:41 ndevos partner: you're exporting the glusterfs mountpoint over nfs? you should be aware of /usr/share/doc/fuse-*/README.NFS in case you use the kernel-nfs-server
14:42 partner it works but as said i'm getting rid of it as soon as i can either build a proper debian package for my system or try setting the op-version
14:43 JoeJulian Sure, I understand, I was just wondering why you didn't use the native nfs server.
14:43 ndevos partner: but, I guess you can add your border nfs/gateway as a brickless member of the trusted pool, that way it can still export nfs, but it uses the glusterfs nfs-server
14:43 partner thanks for the warning though
14:44 shubhendu joined #gluster
14:44 JoeJulian Weeks late, I'm finally looking at emc's vipr announcement... lol!
14:44 JoeJulian "open"
14:44 ndevos JoeJulian: what is the 'native nfs server' to you?
14:45 JoeJulian Yeah, that wasn't very descriptive. Gluster's native nfs server as opposed to the kernel nfs server.
14:46 ndevos ah :)
14:46 joshcarter joined #gluster
14:48 partner thanks for the tips, i guess one doesn't always figure out all the options when shoveling in a hurry, i did "confirm" this setup should work and when it didn't i had to quickly come up with something so i threw in a bucket of glue and got it working somehow well enough
14:48 JoeJulian I hear that!
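A sketch of ndevos' brickless-gateway suggestion, with hypothetical host names ("gw" is the gateway, "server1" an existing pool member, "myvol" the volume):

    # from server1: add the gateway to the trusted pool (it runs glusterd but holds no bricks)
    gluster peer probe gw
    # clients mount over NFSv3 from the gateway, which re-exports via gluster's built-in NFS server
    showmount -e gw
    mount -t nfs -o vers=3 gw:/myvol /mnt/myvol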
14:52 dneary joined #gluster
14:55 jord-eye hello. does anybody know why I can't mount a gluster volume as read-only? I get this in the log: http://pastebin.com/zBuywKzE
14:55 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
14:56 jord-eye ok, fpaste then: http://fpaste.org/48589/
14:56 glusterbot Title: #48589 Fedora Project Pastebin (at fpaste.org)
14:56 JoeJulian what's your mount command/mount options
14:57 JoeJulian Also, what version?
14:57 jord-eye mount -t glusterfs -o ro glustermount.atr.bcn:/web-cache /mnt/web-cache
14:57 jord-eye without '-o ro' it mounts perfectly
14:58 bugs_ joined #gluster
14:58 jord-eye I'm using ver. 3.3.1
15:00 JoeJulian hmm, I don't have any 3.3.1 test machines left...
15:00 jord-eye on debian squeeze, using the packages from gluster repo
15:00 jord-eye the problem is I can't upgrade to 3.4.1, because there's no compiled package for debian squeeze, only wheezy
15:00 jord-eye in case upgrading is the solution...
15:01 partner see, there is a need for the packages :)
15:01 jord-eye upgrading OS to wheezy is not an option right now
15:01 JoeJulian Yeah. Nobody's stepped up to offer squeeze packages.
15:01 jord-eye haha, I see
15:01 partner i was told "nobody asked for it" was the reason for not building it
15:01 jord-eye ok, then I ask for it! :D
15:02 JoeJulian semiosis: ^
15:02 partner if i just had the two files it would take a few minutes for me to build it, now it's a lot more work to start cloning the repos and figuring out if they contain any debian stuff or if it even exists anywhere public, of course i could grab 3.3.1 and try to apply the 3.3.2 stuff onto it
15:03 jord-eye so you think it is because of the version?
15:03 partner i recall 3.3.1 had that bug of not being able to ro-mount
15:03 jord-eye or just guessing?
15:03 jord-eye ok
15:04 partner possibly this one, my browser seems to remember the url: https://bugzilla.redhat.com/show_bug.cgi?id=853895
15:04 jord-eye are the sources for the debian package available anywhere? I mean debianized
15:04 glusterbot <http://goo.gl/xCkfr> (at bugzilla.redhat.com)
15:04 glusterbot Bug 853895: medium, medium, ---, csaba, CLOSED CURRENTRELEASE, CLI: read only glusterfs mount fails
15:04 partner fixed in version 3.4.0 it reads
15:05 jord-eye I can build the package, but I'm afraid I don't have the ./configure options used in the official package.
15:05 jord-eye If you can give me that, I can build my own package
15:05 jord-eye for squeeze
15:05 LoudNoises joined #gluster
15:06 JoeJulian I don't see that bug reference in the release-3.3 branch so I guess it wasn't backported.
15:06 JoeJulian @ppa
15:07 glusterbot JoeJulian: The official glusterfs packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.4 stable: http://goo.gl/u33hy
15:08 partner at least the glusterfs-3.4.1.tar.gz package does not contain anything much related to any particular flavor of linux
15:08 jag3773 joined #gluster
15:09 jord-eye no, official sources are not debianized
15:10 jord-eye I can try to download ppa sources and compile in debian squeeze
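A rough sketch of what a rebuild on squeeze could look like once the source package files (.dsc, .orig.tar.gz, debian.tar.gz) are published; the URL and version below are placeholders, not real locations:

    # needs devscripts, build-essential and the build-dependencies from debian/control
    dget http://example.org/glusterfs_3.4.1-1.dsc   # fetches the .dsc plus the tarballs next to it
    dpkg-source -x glusterfs_3.4.1-1.dsc
    cd glusterfs-3.4.1
    dpkg-buildpackage -us -uc -b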
15:11 onny1 hey, whats the difference between <volname>.log and cli.log?
15:12 partner if i'm not totally remembering incorrectly kkeithley was able to provide me with the files last time.. found his name from inside the package..
15:13 JoeJulian One's for the cli and there isn't one for <volname>.log ;) (unless you mounted the volume to the <volname> directory)
15:13 jord-eye who did the debian packages?
15:13 JoeJulian semiosis:
15:13 jord-eye ok, I'll ask him
15:13 jord-eye thnks
15:13 * semiosis just got here
15:13 semiosis ??
15:14 JoeJulian They want packages for that really old distro
15:14 semiosis [11:14] <jord-eye> I'm looking for the debian sources of the latest gluster. I need to compile them for squeeze. Can you provide me with the debianized source package?
15:15 jord-eye squeeze is not so old
15:15 jord-eye the problem is that the system where I have gluster installed cannot be upgraded right now
15:15 jord-eye it is not so strange that something like this happens
15:16 jord-eye I'm not asking for the package, I can compile it myself
15:16 semiosis need to go to the office.  i'll send you the debian.tar.gz when i get there... 45 min ok?
15:16 jord-eye of course, for the community it would be good to have it available
15:16 jord-eye ok
15:16 jord-eye thanks semiosis
15:16 semiosis yw
15:16 * semiosis afk
15:17 hagarth joined #gluster
15:20 DV__ joined #gluster
15:22 JoeJulian Just out of curiosity, do you guys plan on upgrading beyond squeeze before May 2014?
15:22 onny1 JoeJulian: thank you for the reply. what is more important to monitor, the cli.log or the <mountpoint>.log if I want to ensure proper functionality?
15:23 JoeJulian onny1: mount-point.log. I would (do) also monitor the brick logs and etc-glusterfs-glusterd.vol.log
15:23 onny1 I just want to be sure that I won't miss any critical error message
15:23 onny1 why?
15:23 onny1 isn't there a general client log?
15:24 JoeJulian brick logs so I know if a brick has a problem. Can also be monitored with gluster volume status, but I already had log monitoring in place when that feature became available.
15:25 JoeJulian glusterd.vol.log so I watch for glusterd problems. If glusterd dies, puppet will restart it, but I still would want to know why and file any appropriate bug reports.
15:26 onny1 okay I see
15:26 onny1 and what is the content of cli.log?
15:26 JoeJulian Client log monitoring can immediately inform me of self-heal problems, should any arise.
15:27 JoeJulian cli log is just the command line interface, so any command you perform with the "gluster" cli.
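A minimal sketch of watching the logs JoeJulian lists, assuming default log locations and a client mount at /mnt/myvol (whose log file would be mnt-myvol.log):

    tail -F /var/log/glusterfs/mnt-myvol.log \
            /var/log/glusterfs/bricks/*.log \
            /var/log/glusterfs/etc-glusterfs-glusterd.vol.log \
      | grep --line-buffered ' [EWC] '   # error/warning/critical entries carry these severity letters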
15:27 harish joined #gluster
15:31 onny1 thank you JoeJulian
15:32 vpshastry joined #gluster
15:32 partner JoeJulian: there are some cases where we are unfortunately stuck on squeeze due to other dependencies and limitations currently. i would go and dist-upgrade it immediately if i could but that would break stuff and people would be unable to do certain parts of their work, which again costs money for the company
15:33 partner it's difficult to drive the needs forward, to get developers to spend weeks on code just to be able to upgrade; the software itself is legacy and no longer developed, i pretty much always get turned down when asking for it
15:35 partner trust me, i don't like the situation
15:37 jord-eye JoeJulian: I agree with partner. In fact that's our situation. We've been upgrading some less important systems, but the core can't be upgraded so easily
15:38 partner heck, we just got rid of a sarge server last week :D
15:39 jord-eye and yes, we are planning upgrades of our systems, and hopefully it will be done before May '14
15:40 cjohnston_work joined #gluster
15:40 _pol joined #gluster
15:43 hagarth1 joined #gluster
15:45 hagarth joined #gluster
15:45 dusmant joined #gluster
15:52 zaitcev joined #gluster
15:55 calum_ joined #gluster
15:58 MrNaviPa_ joined #gluster
16:00 _pol_ joined #gluster
16:11 vpshastry left #gluster
16:14 harish joined #gluster
16:14 shylesh joined #gluster
16:15 jord-eye semiosis: sorry I need to go. Let's talk tomorrow. cheers
16:16 semiosis ok i will try to get the source package files into the repo on download.gluster.org
16:16 jord-eye perfect
16:18 phox joined #gluster
16:22 dneary joined #gluster
16:23 sprachgenerator joined #gluster
16:27 Mo_ joined #gluster
16:29 partner that would be awesome, thanks in advance. not sure if the version was discussed but i could use 3.3.2 AND the 3.4 series if it's not too much effort, that opens up some options for upgrading stuff rather than having a major maintenance
16:43 glusterbot New news from resolvedglusterbugs: [Bug 953694] Requirements of Samba VFS plugin for glusterfs <http://goo.gl/v7g29>
16:48 chirino joined #gluster
17:02 Slashman joined #gluster
17:03 rotbeard joined #gluster
17:04 Slashman hello, I'm trying to delete and then recreate a glusterfs volume with glusterfs 3.4 but it doesn't remove the nfs configuration and that prevents me from recreating the volume... should I just remove the /var/lib/glusterd/nfs/nfs-server.vol file ?
17:07 Slashman ok, did that, still getting the message "volume create: kvmpool1: failed: /data/glusterfs/kvmpool1/brick1 or a prefix of it is already part of a volume", but the command "gluster volume info" returns "No volumes present"
17:07 glusterbot Slashman: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
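For context, the fix behind glusterbot's first link boils down to clearing the gluster extended attributes left on the previously used brick directory (a sketch only; double-check the linked article, and only do this if the old volume is really gone):

    setfattr -x trusted.glusterfs.volume-id /data/glusterfs/kvmpool1/brick1
    setfattr -x trusted.gfid /data/glusterfs/kvmpool1/brick1
    rm -rf /data/glusterfs/kvmpool1/brick1/.glusterfs
    service glusterd restart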
17:08 XpineX_ joined #gluster
17:09 chouchins joined #gluster
17:13 badone joined #gluster
17:18 tryggvil joined #gluster
17:18 rastar joined #gluster
17:19 phox heh cute, the bot detects the error message.  clever.
17:20 rastar joined #gluster
17:31 calum_ joined #gluster
17:31 johnbot11 joined #gluster
17:46 johnbot11 joined #gluster
17:51 cyberbootje joined #gluster
17:53 DV_ joined #gluster
17:57 chirino joined #gluster
17:59 ninkotech joined #gluster
17:59 ninkotech_ joined #gluster
18:06 _pol joined #gluster
18:23 _pol joined #gluster
18:24 chouchins joined #gluster
18:36 _pol joined #gluster
18:45 hateya joined #gluster
19:02 chirino joined #gluster
19:33 Skaag joined #gluster
19:41 anands joined #gluster
19:49 DV__ joined #gluster
19:57 failshell joined #gluster
20:00 chouchins joined #gluster
20:00 hagarth joined #gluster
20:01 chouchins joined #gluster
20:01 kaptk2 joined #gluster
20:20 jag3773 joined #gluster
20:22 anands joined #gluster
20:32 _pol_ joined #gluster
20:36 _pol joined #gluster
20:37 pdrakeweb joined #gluster
20:38 diegows joined #gluster
21:10 andreask joined #gluster
21:15 soukihei joined #gluster
21:19 _pol joined #gluster
21:23 Skaag joined #gluster
21:46 failshel_ joined #gluster
21:51 _pol joined #gluster
21:52 badone_gone joined #gluster
21:55 _pol joined #gluster
22:01 fidevo joined #gluster
22:03 jbrooks joined #gluster
22:10 _pol_ joined #gluster
22:14 ncjohnsto joined #gluster
22:17 SpeeR joined #gluster
22:19 johnbot11 joined #gluster
22:47 nueces joined #gluster
22:48 chouchins joined #gluster
22:56 a2 joined #gluster
23:06 badone_gone joined #gluster
23:30 Xunil hi #gluster - what's the consensus on using LVM-backed filesystems as bricks?  good idea, bad idea?
23:31 Xunil how about XFS versus ext3 or even ext4?
23:34 StarBeast joined #gluster
23:34 torrancew xfs is generally recommended, not sure about lvm vs not, but don't see why it would be a huge problem
23:37 JoeJulian I use lvm and I have different volumes for different purposes. This allows me to allot space to my bricks as needed.
23:38 torrancew ++
23:40 Xunil cool
23:40 Xunil just wanted to make sure there wasn't some big ugly performance hit or something
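A minimal sketch of the LVM-backed XFS brick layout being described, with hypothetical names (volume group "vg_bricks", 500G logical volume):

    lvcreate -L 500G -n brick1 vg_bricks
    mkfs.xfs -i size=512 /dev/vg_bricks/brick1    # 512-byte inodes leave room for gluster's xattrs
    mkdir -p /bricks/brick1
    mount /dev/vg_bricks/brick1 /bricks/brick1    # add an fstab entry to make it persistent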
23:45 semiosis ,,(joe's performance metric)
23:45 glusterbot nobody complains.
23:50 SpeeR has anyone tested io schedulers on CentOS with gluster and vmware? I'm seeing my IO go up when doing a storage vmotion with only a couple of VMs on the brick
23:50 JoeJulian typically deadline or noop are best.
23:51 SpeeR excellent, thank you
23:51 SpeeR I shall try them, and see which works best
23:51 JoeJulian let us know for your use case.
23:51 SpeeR ok will do
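A sketch of switching schedulers per block device for that comparison, assuming the brick sits on /dev/sdb (a placeholder):

    cat /sys/block/sdb/queue/scheduler             # the bracketed entry is the active scheduler
    echo deadline > /sys/block/sdb/queue/scheduler
    echo noop > /sys/block/sdb/queue/scheduler
    # to persist across reboots, set elevator=deadline on the kernel command line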
