
IRC log for #gluster, 2013-02-06


All times are shown in UTC.

Time Nick Message
00:00 JoeJulian andrei: no
00:03 andrei thanks
00:14 sashko joined #gluster
00:14 abyss^ joined #gluster
00:16 raven-np joined #gluster
00:23 Humble joined #gluster
00:31 _br_ joined #gluster
00:54 mnaser joined #gluster
00:55 glusterbot New news from newglusterbugs: [Bug 908128] READDIRPLUS support in glusterfs-fuse <http://goo.gl/DqTlR>
01:03 bala joined #gluster
01:07 RicardoSSP joined #gluster
01:07 RicardoSSP joined #gluster
01:07 C2 left #gluster
01:29 a2 andrei, server is async by default. sync behavior needs to be driven by the client. root-squashing option is being worked on
01:29 andrei thanks
01:30 JoeJulian a2: bug 907695 is just ugly. :(
01:30 glusterbot Bug http://goo.gl/W9Pmw unspecified, unspecified, ---, rgowdapp, NEW , rdma does not select the correct port when mounting volume
01:31 JoeJulian andrei: had to roll back to 3.3.0 because of it and had other issues that are fixed in 3.3.1 which is why I was trying to get him upgraded.
01:32 Humble joined #gluster
01:32 andrei ))
01:32 andrei is this the bug that we've found?
01:32 JoeJulian right
01:33 andrei i had a black tuesday day today
01:33 andrei )))
01:33 andrei actually (((
01:34 andrei after glusterfsd crashed and brought down about 28 of my vms
01:34 andrei followed by horrible software failures (((
01:34 andrei just managed to recover from it
01:34 andrei had to go back to nfs without gluster at the moment
01:34 JoeJulian glusterfsd crashed? or was unresponsive?
01:35 andrei fully crashed
01:35 andrei gave a bunch of oops
01:35 JoeJulian Oh, that's the kernel oops you were referring to.
01:35 a2 JoeJulian, opening the bug
01:35 JoeJulian So the kernel crashed, not glusterfsd.
01:35 andrei hold on a sec
01:35 andrei let me check
01:36 andrei perhaps I am mixing the terms here
01:36 andrei i will check the logs once again
01:37 a2 JoeJulian, what port did you have to set to manually?
01:38 andrei a2: it was trying to default to 24008
01:38 andrei but i was using 24009-24011 for my volumes
01:38 andrei over rdma
01:39 a2 andrei, can you apply http://review.gluster.org/4384 and see if it helps?
01:39 glusterbot Title: Gerrit Code Review (at review.gluster.org)
01:40 * a2 kicks off tests to merge that patch
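For anyone reproducing this later: a minimal sketch of one way to apply a Gerrit change such as 4384 to a glusterfs source tree. The patchset number ("/1") and the fetch URL are assumptions, not taken from the log; check the change page on review.gluster.org for the exact ref before using it.

    # inside an existing git checkout of the glusterfs source
    cd ~/src/glusterfs
    # "84" is the last two digits of change 4384; "/1" assumes patchset 1
    git fetch http://review.gluster.org/glusterfs refs/changes/84/4384/1
    git cherry-pick FETCH_HEAD    # then rebuild and reinstall as usual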
01:40 andrei a2: i had this today: http://dpaste.org/Zz6IT/
01:40 glusterbot Title: dpaste.de: Snippet #218569 (at dpaste.org)
01:41 andrei not sure if this is caused by glusterfsd or not
01:41 andrei i will check the url, one sec
01:42 a2 andrei, that looks like a possible zfs issue
01:43 a2 or maybe a layer below that (your hardware)
01:44 andrei looking at the call trace, indeed, i can see a lot of references to zfs
01:44 andrei a2: i can try the patch over the weekend as I need to shutdown the infrastructure
01:45 andrei a2: perhaps you know how to address the second issue i had with glusterfs.
01:45 andrei i've got 2 servers
01:45 andrei the main server and the other server that I want to use for the replica
01:45 a2 ..
01:45 andrei when i add a new brick to the existing main server
01:46 andrei all seems fine, but
01:46 andrei the client mountpoint just freezes when i initiate the heal process
01:46 andrei and the whole volume becomes unusable until i shut down the newly added server
01:47 andrei once i do that the mountpoint becomes active once again
01:47 a2 please take a statedump of the client (mount) process with 'kill -USR1 <pid>'
01:47 kevein joined #gluster
01:47 a2 the statedump needs to be taken when the mount is hung, and the dumpfile will be placed in /tmp (ls -ltr | tail -1)
01:47 andrei i had a bunch of debug logs that we've generated with JoeJulian
01:47 andrei would that help?
01:48 a2 debug logs wouldn't give a list of pending syscalls which might be hung
01:48 andrei how do I take the statedump?
01:48 andrei using the glusterfs option?
01:48 a2 as I just mentioned, kill -USR1 <pid>
01:48 andrei ah, sorry
01:49 semiosis <pid> of the fuse client, see ,,(processes) for details
01:49 glusterbot the GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/hJBvL for more information.
01:49 andrei it's getting rather late and i've not been sleeping well for the past 3 days trying to sort out glusterfs issue )))
01:49 lanning joined #gluster
01:49 a2 it might be a good idea to get state dumps from all the involved processes - pkill -USR1 glusterfs; pkill -USR1 glusterfsd
01:50 a2 andrei, sorry about that! get some good sleep and give us statedumps :)
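A compact sketch of the statedump procedure a2 outlines above, to be run while the mount is actually hung. The dump location (/tmp, as a2 says for this version) and the process names come from the conversation; everything else is illustrative.

    # take statedumps of all involved gluster processes during the hang
    pkill -USR1 glusterfs     # FUSE client(s) and the gluster NFS server
    pkill -USR1 glusterfsd    # brick export daemons
    # the dump files are written to /tmp; the newest ones are the ones just taken
    ls -ltr /tmp | tail -5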
01:50 andrei a2: are you going to be around this weekend? I can come online and we can do that. My env is live and i can't shut down the infrastructure during the week
01:51 andrei our users have been in pain already ;-)
01:51 andrei i can schedule a weekend downtime to do some testing
01:52 a2 i can try, but i'm sorry i cannot confirm just yet
01:52 andrei if you are not online, it would be very helpful if i can follow some instructions by myself
01:53 andrei if you could send them to andrei@arhont.com
01:53 andrei i would follow them and send you the output
01:53 andrei as well as try the patch
01:55 a2 ok
01:58 killermike joined #gluster
02:04 andrei a2: many thanks!
02:04 andrei i am off to sleep )))
02:04 andrei have a good evening/night
02:09 raghug joined #gluster
02:15 Humble joined #gluster
02:17 isomorphic joined #gluster
02:29 bharata joined #gluster
02:36 amccloud joined #gluster
02:43 niv joined #gluster
02:49 raven-np joined #gluster
03:04 Crypticf1rtune joined #gluster
03:04 Nuxr0 joined #gluster
03:05 romero joined #gluster
03:06 jiffe98 joined #gluster
03:07 overclk joined #gluster
03:08 bulde joined #gluster
03:10 Humble joined #gluster
03:17 shylesh joined #gluster
03:18 sashko joined #gluster
03:22 glusterbot New news from resolvedglusterbugs: [Bug 907695] rdma does not select the correct port when mounting volume <http://goo.gl/W9Pmw>
03:34 hateya joined #gluster
03:35 nca_ joined #gluster
03:43 Humble joined #gluster
03:46 hagarth joined #gluster
03:54 pai joined #gluster
03:58 sgowda joined #gluster
04:05 overclk joined #gluster
04:05 Humble joined #gluster
04:14 lala joined #gluster
04:15 lala__ joined #gluster
04:17 sripathi joined #gluster
04:17 vpshastry joined #gluster
04:33 bala joined #gluster
04:36 Humble joined #gluster
04:47 badone joined #gluster
04:54 mohankumar joined #gluster
04:58 sgowda joined #gluster
04:59 jjnash left #gluster
05:03 srhudli joined #gluster
05:03 sripathi1 joined #gluster
05:10 pai joined #gluster
05:10 sripathi joined #gluster
05:12 Humble joined #gluster
05:22 glusterbot New news from resolvedglusterbugs: [Bug 878652] Enchancement: Replication Information in gluster volume info <http://goo.gl/dWQnM>
05:29 raghu joined #gluster
05:34 deepakcs joined #gluster
05:48 vpshastry joined #gluster
05:51 nca_ joined #gluster
05:56 hagarth joined #gluster
05:58 mohankumar joined #gluster
06:11 pai joined #gluster
06:28 Nevan joined #gluster
06:32 ramkrsna joined #gluster
06:32 ramkrsna joined #gluster
06:40 bharata_ joined #gluster
06:47 vimal joined #gluster
07:05 sripathi1 joined #gluster
07:09 sripathi joined #gluster
07:17 sgowda joined #gluster
07:39 raven-np1 joined #gluster
07:47 ctria joined #gluster
07:54 hagarth joined #gluster
08:01 ekuric joined #gluster
08:04 guigui1 joined #gluster
08:06 vpshastry joined #gluster
08:14 kevein joined #gluster
08:21 Humble joined #gluster
08:31 rastar joined #gluster
08:31 sripathi joined #gluster
08:32 Joda joined #gluster
08:48 Staples84 joined #gluster
09:22 Norky joined #gluster
09:28 torbjorn__ joined #gluster
09:29 morse joined #gluster
09:31 Staples84 joined #gluster
09:32 sripathi joined #gluster
09:33 bauruine joined #gluster
09:46 Staples84 joined #gluster
09:50 sripathi joined #gluster
10:00 puebele joined #gluster
10:02 puebele1 joined #gluster
10:16 vpshastry1 joined #gluster
10:16 manik joined #gluster
10:19 hybrid5121 joined #gluster
10:20 puebele joined #gluster
10:24 kleind joined #gluster
10:26 kleind Hi there. I'm currently testing qemu's new glusterfs backend capability and have a question concerning the gluster connection. There are 2 ways of specifying the connection. One points to a tcp socket, the other one to a unix socket. If I use the unix socket connection, will it be possible to (live-)migrate the VM to another host? Will the unix socket communication carry over on the new host to the new gluster daemon or will it silently transform into a tcp connection? Can someone elaborate?
10:27 glusterbot New news from newglusterbugs: [Bug 908277] Poor performance with gluster volume set command execution <http://goo.gl/lC62E>
10:28 dobber joined #gluster
10:29 bharata_ kleind, not sure with unix socket (never tried that), but with tcp socket, its possible to migrate VM from one host to another when both hosts have access to a GlusterFS volume
10:29 kleind bharata_: tried that (successfully), too. then found the unix socket thing and wondered how it would work.
10:30 bharata_ kleind, nice to know that migration has worked successfully for you with qemu-glusterfs :)
10:31 kleind what I wondered with the tcp socket ... since there is only one socket to specify, how am i supposed to start the vm if that host is not available? I'm not talking about the gluster node going down while the vm is running, i guess i understood it is "gluster magic" doing its work here, but if the machine supposed to handle the initial connection is gone, that wouldn't work, would it?
10:31 puebele2 joined #gluster
10:31 kleind so i would have to edit the configuration and have it connect to another machine
10:31 kleind which would get the thing running, but would require administrative intervention
10:31 bulde joined #gluster
10:35 raven-np joined #gluster
10:35 ekuric joined #gluster
10:46 bharata_ kleind, you don't specify a socket, do you ? We specify volfile-server, volume and image from QEMU
10:46 bharata_ ah
10:47 mynameisbruce_ joined #gluster
10:48 kleind joined #gluster
10:50 bharata_ kleind, iiuc, any machine that contributes bricks to a volume can be contacted for initial connection and yes if that isn't available, you would have to change your QEMU command line I suppose
10:50 duerF joined #gluster
10:51 kleind bharata_: that's what I tried to say. it would be great if it supported a url with multiple hosts that would be contacted in a row if #1 is not available
10:52 bharata_ kleind, don't think that's possible in gluster currently, bulde ^ ?
10:53 kleind iirc, the fuse client allows for it.
10:53 kleind it takes some mount option "backup host" or sth like that and contacts that one if it can't reach the first node
10:53 samppah kleind: nice, may i ask that what distro you are using for testing?
10:53 kleind i guess it's time for a qemu feature request then
10:54 kleind samppah: i'm testing on debian wheezy
10:54 samppah kleind: ah okay, does it already have gluster support in qemu or does it have to be recompiled?
10:55 kleind samppah: i compiled qemu, gluster and libvirt manually
10:55 samppah ok, thanks :)
10:55 kleind there is an experimental gluster package, but other than that, you're on your own
10:55 kleind so I figured I'd just compile everything
10:56 ndevos bharata_, kleind: when mounting normally, you can pass --backupvolfile-server=$server_nr_two - not sure how that works with qemu/libgfapi
10:56 samppah i think i'm most used to centos/rhel and luckily there are packages available of qa release of gluster
10:57 kleind ndevos: i could not find a way to configure that. libvirt explicitly forbids configuration of two <host/> tags
10:57 kleind actually, the rng schema allows for it, but the code does not
10:58 ndevos kleind: I really would not know how libvirt/qemu would handle that - using a virtual-ip (failover) is probably easiest to guarantee successful mounting
10:59 kleind i'll now try to configure <host=a/> on node a <host=b/> on node b.
10:59 kleind (in my case, gluster nodes are also vm hosts)
10:59 ndevos when mounting, the volume file is retrieved from that IP, after that, the servers in the volfile will be contacted directly - so there is hardly any additional load on the server that has the virtual-ip
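A sketch of the two approaches discussed here for surviving the loss of the initial volfile server on a native mount. Hostnames, the volume name and the mount point are placeholders; the virtual-IP variant assumes the floating address is managed by something external such as keepalived or pacemaker.

    # native mount with a fallback volfile server (option spelling as in 3.3.x mount.glusterfs)
    mount -t glusterfs -o backupvolfile-server=gluster2 gluster1:/myvol /mnt/myvol
    # or: point clients at a virtual IP that fails over between the servers
    mount -t glusterfs 192.0.2.10:/myvol /mnt/myvol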
10:59 hagarth joined #gluster
11:00 kleind and glusterd (or whichever part of gluster it is that answers this connection) will just bind to an ip that is configured onto an interface _after_ it started?
11:00 Norky joined #gluster
11:01 ndevos glusterd binds to all IPs
11:02 ndevos that is to say, it binds to 0.0.0.0, any IP, including IPs that get added later
11:03 andrei joined #gluster
11:05 kleind ndevos: ok, so that might be a solution. i kind of like my idea, too, though. will try both.
11:05 ndevos good luck!
11:05 bharata_ ndevos, I am not sure how that (specifying backup server) would fit in the current semantics (gluster://server/volume/image), maybe it could be some optional parameter (?backup=)
11:06 deepakcs bharata_, ya.. but then that means new backup child node under host or source tags too in libvirt xml
11:07 bharata_ deepakcs, there is more fundamental problem, I don't see a way to specify that in libgfapi (glfs_set_volfile_server)
11:07 ndevos bharata_: maybe, yes, /sbin/mount.glusterfs just adds more --volfile-server=... options to the glusterfs binary, not sure how it is handled in the C code
11:07 deepakcs bharata_, ya that was my next Q.. does libgfapi take the backup server as a param or inside any struct ?
11:08 rcheleguini joined #gluster
11:08 bharata_ deepakcs, glfs_set_volfile_server(glfs_t *, transport, host, port)
11:09 deepakcs bharata_, can that be called twice ;-) once each with diff host ?
11:09 bharata_ deepakcs, :)
11:10 vpshastry joined #gluster
11:12 kleind am i really the first one to ask this question?
11:13 deepakcs kleind, :)
11:13 deepakcs bharata_, so if there is a way to set addnl host via some option api, it can be done
11:15 bharata_ deepakcs, Thankfully with URI spec in QEMU, such extensions can be supported w/o much problems, but first libgfapi should support it I suppose
11:15 bharata_ deepakcs, afaics, it doesn't, may be avati a2 knows better
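For reference, the URI form bharata_ describes, written out as a qemu command-line sketch; note that only a single volfile server can be named, which is exactly the limitation under discussion. Hostname, port, volume and image path are placeholders.

    # boot a VM image stored on a gluster volume via libgfapi (one volfile server only)
    qemu-system-x86_64 -m 1024 \
        -drive file=gluster://gluster1:24007/myvol/vm1.img,if=virtio,cache=none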
11:16 RicardoSSP joined #gluster
11:16 RicardoSSP joined #gluster
11:21 andrei avati: hello are you online?
11:23 ndevos that is unlikely, I think he lives in San Francisco
11:26 bulde joined #gluster
11:28 vpshastry left #gluster
11:28 edward1 joined #gluster
11:37 bauruine joined #gluster
11:39 rgustafs joined #gluster
11:40 venkat_ joined #gluster
11:56 andrei thanks ))
11:56 andrei i will check later with him
11:57 glusterbot New news from newglusterbugs: [Bug 908302] eager-lock option change while a transaction in progress can yield stale locks <http://goo.gl/nwp5z>
12:12 kanagaraj joined #gluster
12:30 raven-np joined #gluster
12:33 pai left #gluster
12:36 vimal joined #gluster
12:41 lala joined #gluster
12:48 dustint joined #gluster
13:01 sjoeboo joined #gluster
13:20 Norky joined #gluster
13:23 shireesh joined #gluster
13:27 rotbeard joined #gluster
13:29 rotbeard hi there, I have glusterfs 3.0 on 2 nodes, both nodes are server and client at the same time (I know, I will change this in future). I want to find old files in a directory that contains about 5 million files, and delete them. what is recommended: firing up my commands on the gluster mountpoint or directly on the local storage?
13:35 aliguori joined #gluster
13:35 Staples84 joined #gluster
13:35 Norky joined #gluster
13:41 flrichar joined #gluster
13:50 kkeithley As a rule we tell people _not_ to do anything "directly on the local storage."
13:53 rotbeard thanks kkeithley ;) that will help
13:55 * kkeithley thinks you should upgrade to 3.3.x too
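Following kkeithley's rule, a sketch of doing rotbeard's cleanup through the gluster mount point rather than on the local bricks. The path and the 90-day cutoff are invented for illustration; with millions of files a dry run first is prudent.

    # dry run: list regular files older than 90 days, via the gluster mount point
    find /mnt/myvol/somedir -type f -mtime +90 | head
    # then delete them -- still through the mount point, never directly on a brick
    find /mnt/myvol/somedir -type f -mtime +90 -delete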
13:56 x4rlos Is gluster intelligent enough to point a client to a more appropriate (local/faster) gluster peer?
13:58 x4rlos So if i have 3 sites, i have just pointed each site to the gluster serverA at site A. Maybe i should point them to their local gluster server :-/
13:58 kkeithley native mounts will read (automatically) from the fastest peer (first to respond)
13:58 x4rlos really? that's friggin magic.
13:58 x4rlos Can i tell which peer gluster is currently connect to via the cli or anything?
13:59 kkeithley it's not really magic. The client will fire off the read request to all the servers. The first response is taken, the other response(s) is/are dropped on the floor.
14:00 x4rlos guess that's how it provides continuity in case other servers are down.
14:00 kkeithley clients are always connected to all the servers when you use native/fuse.
14:00 kkeithley yes
14:01 x4rlos im just thinking when i try and copy a file from a gluster server, whether it will choose the fastest first.
14:01 x4rlos Its a few hundred MB.
14:01 x4rlos but according to what you said, it will. And that makes me happy :-)
14:02 lala joined #gluster
14:02 kkeithley Well, that's the theory. We know that sometimes theory and reality are at odds with each other. ;-)
14:03 x4rlos hehehe. That's why i was curious if there's a gluster ping command or something that will let me know where it chose by default :-)
14:04 theron joined #gluster
14:04 rotbeard kkeithley, upgrading to 3.3 is on my to-do list, thanks for your advice
14:06 rotbeard have a nice day folks
14:06 theron_ joined #gluster
14:11 raven-np joined #gluster
14:19 ehg joined #gluster
14:20 hagarth joined #gluster
14:33 jbrooks joined #gluster
14:39 stopbit joined #gluster
14:41 Staples84 joined #gluster
14:52 kkeithley the-me: http://review.gluster.org/#change,4479
14:52 glusterbot Title: Gerrit Code Review (at review.gluster.org)
15:00 rwheeler joined #gluster
15:13 Staples84 joined #gluster
15:16 wushudoin joined #gluster
15:29 bugs_ joined #gluster
15:30 chouchins joined #gluster
15:41 raven-np joined #gluster
15:42 Humble joined #gluster
15:47 luckybambu joined #gluster
15:52 Ryan_Lane1 joined #gluster
16:09 jaugros joined #gluster
16:09 jaugros Hello everyone.  What would I need to do to mount Gluster on a Windows machine?
16:11 NuxRo jaugros: there is no native client for windows, you can use NFS though
16:13 flrichar how does gluster's nfs offering differ from normal nfs?
16:13 kkeithley What NuxRo said, you can use the Windows NFS client, or you can run Samba over a Gluster client mount.
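A minimal sketch of the "Samba over a Gluster client mount" approach kkeithley mentions: mount the volume with the native client, then export the mount point as an ordinary Samba share. The share name, paths and hostname are placeholders.

    mount -t glusterfs gluster1:/myvol /mnt/myvol
    cat >> /etc/samba/smb.conf <<'EOF'
    [glustershare]
        path = /mnt/myvol
        read only = no
        browseable = yes
    EOF
    service smb restart    # smbd/samba on Debian-based systems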
16:14 jaugros ok thanks, i'll start looking into that.  these are stable solutions?
16:14 kkeithley flrichar: the gluster nfs server runs in user space versus the kernel nfs server which, as you probably guess, runs in the kernel.
16:15 NuxRo talking of clients, kkeithley is there an rpm repo for 3.4 builds? I want to upgrade my current installation of 3.3 and see if it still kills the volumes
16:16 kkeithley No, I haven't built any of the 3.4qaX releases into a yum repo.
16:16 * kkeithley says that's a lot of work
16:17 NuxRo createrepo is not for everyone :)
16:17 NuxRo do you have the url with the rpms at hand?
16:18 Humble joined #gluster
16:19 NuxRo found it, seems like latest is http://bits.gluster.org/pub/gluster/glusterfs/3.4.0qa8/
16:19 glusterbot <http://goo.gl/zRvPL> (at bits.gluster.org)
16:22 sashko joined #gluster
16:23 NuxRo kkeithley: interesting, why does 3.4 require device-mapper?
16:25 kkeithley don't know off the top of my head why dev-mapper is required.
16:25 kkeithley where do you see that?
16:27 NuxRo I'm trying to "yum update glusterfs-3.4X.rpm" stuff, it's pulling in the device-mapper
16:28 ndevos NuxRo: thats because it contains the bd-xlator (uses lvm internally)
16:28 bitsweat joined #gluster
16:29 tylerflint joined #gluster
16:29 ndevos http://www.gluster.org/community/documentation/index.php/Planning34/BD contains a description
16:29 glusterbot <http://goo.gl/LglqL> (at www.gluster.org)
16:30 NuxRo cool
16:30 NuxRo thanks
16:30 tylerflint has anybody ever seen a linux process enter "D" (uninterruptible) state while making a fs call through the gluster native (fuse) client?
16:31 ndevos tylerflint: any process does that while its doing I/O
16:32 tylerflint right, I should've clarified that the processes have been in that state for over 48 hours
16:33 ndevos that is pretty long, you would need to check if there is any I/O happening - maybe the logs or a statedump of the glusterfs process shows anything
16:35 tylerflint well, oddly the logs are showing this: https://gist.github.com/tylerflint/ad0e3fd316cefe516a3d
16:35 glusterbot <http://goo.gl/JDdZ1> (at gist.github.com)
16:35 tylerflint over and over
16:36 tylerflint I haven't forced a statedump yet
16:36 tylerflint remind me of the signal to send to force that?
16:36 ndevos tylerflint: thats weird, something returns ESTALE (errno 116), you should check the logs from the bricks as well
16:37 NuxRo kkeithley: yup, after yum update to 3.4 my volumes are now down, this is what i see in the logs, let me know if you need more logs http://fpaste.org/TAMC/. on the other machine in my setup that i haven't touched the glusterfsd processes are still running
16:37 glusterbot Title: Viewing Paste #274586 (at fpaste.org)
16:37 ndevos that's USR1 that you can send, but I have no idea how to read state dumps :)
16:39 ndevos NuxRo: are the gluster{d,fsd} processes running? maybe listening on different ports?
16:40 NuxRo ndevos: seems like /usr/sbin/glusterfs is still running for one of the volumes, but not for the rest, /usr/sbin/glusterfs is still running, too
16:41 NuxRo this happens every time i update glusterfs, the only way to fix it is to stop and start the volumes
16:41 Ryan_Lane joined #gluster
16:41 NuxRo needless to say it's quite disruptive, luckily this is still in testing..
16:42 ndevos NuxRo: glusterfs is a client process... is it a fuse-mount or the nfs-server?
16:45 elyograg kkeithley: ping! still stuck on trying to get the swift-1.7.4-based UFO working.
16:45 amccloud joined #gluster
16:46 NuxRo ndevos: yes fuse-mount, on this server i also export a volume via samba, of course, listing the mountpoint results in "endpoint not connected". here's the processes I'm running
16:46 NuxRo http://fpaste.org/61VA/
16:46 glusterbot Title: Viewing Paste #274587 (at fpaste.org)
16:47 ndevos NuxRo: hmm, looks like glusterd has not been started after you updated, glusterd would start the glusterfsd processes for the bricks
16:48 ndevos NuxRo: maybe there is an error in the /var/log/glusterfs/etc-gluster..log
16:49 kkeithley elyograg: hmmm. Not sure what to tell you. Over the weekend at FOSDEM I helped theron fix some of his mistakes and things were starting to work for him until his laptop was stolen.
16:50 kkeithley If I get what I'm working on finished I can spend a little time later today trying to get you working.
16:50 NuxRo ndevos: http://fpaste.org/iLIt/ does it tell you anything?
16:50 glusterbot Title: Viewing Paste #274590 (at fpaste.org)
16:50 bitsweat left #gluster
16:50 ramkrsna joined #gluster
16:50 ramkrsna joined #gluster
16:51 * theron mumbles about stolen laptops
16:51 elyograg kkeithley: ok.  the basic problem seems to be that the .ring.gz files don't exist.
16:51 kkeithley did you run  Step 9 on my howto: `/usr/bin/gluster-swift-gen-builders $myvolname`
16:52 kkeithley (That was new with the switch to 1.7.4)
16:52 elyograg kkeithley: I did not see that step.  thanks for the pointer.  I'll let you know if it works.
16:52 ndevos NuxRo: glusterd was started on line 76, but geo-replication was not updated yet and could not be started (line 84)
16:53 NuxRo hm
16:53 ndevos NuxRo: that happened on line 89 and 96 too
16:54 NuxRo georeplication was never installed
16:54 NuxRo is it an absolute requirement?
16:55 ndevos NuxRo: not that I am aware of, and I hope not - but I tend to have it installed on my test-systems anyway
16:55 NuxRo anyway, i restarted glusterd and i can now see glusterfs processes for all my volumes
16:55 hagarth joined #gluster
16:55 NuxRo but `gluster volume blah status` just hangs
16:59 chandank joined #gluster
17:00 johnmark theron: sigh :(
17:04 ndevos NuxRo: does any gluster command  work?
17:05 NuxRo ndevos: after i rebooted both nodes, yes :)
17:05 NuxRo BUT
17:05 NuxRo apparently glusterfs-server-3.4.0qa8 does not obsolete glusterfs-server-3.3.1
17:06 NuxRo now i have both glusterfs-server rpms installed ... can't be good
17:06 ndevos ai, no, that sounds horribly wrong
17:07 NuxRo rpm -e them, rebooting once more to start fresh
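A sketch of the cleanup NuxRo describes; the version-release strings are placeholders and will differ per system.

    # see which glusterfs packages ended up installed side by side
    rpm -qa 'glusterfs*' | sort
    # remove both copies of the server package, then reinstall the 3.4 rpm and restart glusterd
    rpm -e glusterfs-server-3.3.1-1.el6 glusterfs-server-3.4.0qa8-1.el6
    yum install ./glusterfs-server-3.4.0qa8-1.el6.x86_64.rpm
    service glusterd restart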
17:13 Norky joined #gluster
17:14 elyograg kkeithley: just getting to step 9.  It includes the volume name.  What if I want to use more than one volume?
17:15 chandank I have a dumb and quick question. In the replication mode (2 nodes) if I mount the fuse on 3rd server and if one of the nodes goes down and comes back after sometime, will it be accessible from the client's fuse mount point?
17:15 kkeithley we're working on the 'more than one volume' issue
17:15 kkeithley for now, you get one volume
17:15 kkeithley :-(
17:15 chandank As of now it looks like the client fences it and never talks to it unless I remount the volume again.
17:42 JoeJulian chandank: 3.3.0?
17:42 chandank yes
17:42 JoeJulian upgrade
17:42 chandank 3.3.1
17:42 chandank ok does the new one solves this problem?
17:42 amccloud joined #gluster
17:42 JoeJulian Ok, 3.3.1 solved that in my experience.
17:43 chandank ok, another thing: currently the self heal daemon fires every 10 minutes. Can we reconfigure it?
17:43 chandank say something like every 1 minute or so.
17:46 JoeJulian I don't see any setting in the source for that. You could run "gluster volume heal $vol" from cron though.
17:46 chandank ok
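A sketch of JoeJulian's cron suggestion; the volume name and the one-minute interval are illustrative, and as he points out just below, this only supplements the self-heal daemon and heal-on-lookup, it does not replace them.

    # /etc/cron.d/gluster-heal -- trigger a heal pass every minute (volume name is a placeholder)
    * * * * *  root  /usr/sbin/gluster volume heal myvol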
17:47 chandank this client mount stuff appears to be bottle neck for production deployment.
17:47 chandank so is it not a good idea to keep the fuse mount point on 3rd machine?
17:48 JoeJulian You do realize that self-heal is also triggered on lookup, right? So actively used files will be healed right away, and all files will be healed upon access so there's never a worry about stale data during that <=10 min period.
17:48 chandank ok.
17:49 JoeJulian "client mount stuff appears to be bottle neck"? I'm not following you. The network is usually the bottleneck.
17:50 chandank sorry for my wrong usage of words..
17:50 edward1 joined #gluster
17:50 chandank I cant go production without that stuff working
17:50 phox heh.  in our case Gluster is the bottleneck for various things.
17:50 phox speaking of which I need to file a bug on that to somewhat improve the situation
17:50 JoeJulian No need to apologize, I just want to help you as best I can so I want to make sure I'm with you.
17:50 glusterbot http://goo.gl/UUuCq
17:52 JoeJulian Nearly the only thing glusterfs does that adds overhead is the self-heal check. I imagine everyone wants that to happen though.
17:53 chandank ok, so let me put it this way: what would be the best-practice way of using/accessing/mounting gluster? From what I understand, I would keep g1/g2 as gluster and mount a highly available fuse on client machine C1. Is there a better way to do this?
17:54 JoeJulian That's common. It's also normal to have the servers be clients. It depends on your needs and systems.
17:55 chandank Also, is this highly available gluster client fuse mount dependent on kernel/OS version?
17:55 JoeJulian Not in any reasonably up to date installation.
17:56 JoeJulian Though 3.4 is adding features that, if enabled at mount time, will depend on new features in fuse.
17:58 chandank oh! so that high availability of the client fuse mount depends more on features of fuse rather than glusterfs itself?
17:58 chandank I have Centos 5.8 client and servers on 6.3
17:59 JoeJulian No, it's a glusterfs design.
18:00 JoeJulian Userspace -> FUSE -> GlusterFS -> Network * N -> GlusterFS -> Filesystem
18:00 JoeJulian That works fine.
18:00 JoeJulian I do the same.
18:01 JoeJulian Though once I finish getting my OpenStack puppet configuration complete I'll finally get the opportunity to upgrade those Centos5 systems.
18:01 awickham joined #gluster
18:02 hagarth joined #gluster
18:03 rwheeler joined #gluster
18:11 disarone joined #gluster
18:12 chandank Thanks for all quick answers :-)
18:14 JoeJulian You're welcome
18:15 sjoeboo joined #gluster
18:16 zaitcev joined #gluster
18:20 awickham Hi, new to gluster here, I created a Striped-Replicated volume from 2 nodes/4 bricks and I'm running into a couple issues: I have the volume mounted to another machine and when I copy a large file to the directory it fails the first time; then when I copy the file again and accept the overwrite it works.
18:21 awickham then when I run a df -h it shows the volume has double the used space from the size of the file
18:22 raghu joined #gluster
18:22 kkeithley elyograg: and?
18:23 JoeJulian awickham: Check your log files. I suspect it's the bug where it says something about being unable to determine stripe size.
18:24 JoeJulian awickham: And, though you didn't ask for it, you also get this free advice! ,,(stripe)
18:24 glusterbot awickham: Please see http://goo.gl/5ohqd about stripe volumes.
18:27 awickham got it. thank you much
18:29 JoeJulian You're welcome. :)
18:31 chouchins just sent you a linkedin invite JoeJulian
18:32 JoeJulian Oh, cool. I don't suppose you made it to my "intro to gluster" talk in EE1 a couple months ago?
18:33 JoeJulian Oh, nevermind... haven't had coffee yet and read that backwards.
18:34 JoeJulian I saw "Washington" and "University" on the same page and read "University of Washington"
18:34 chouchins I haven't seen you since the redhat summit :)
18:35 chouchins Was looking for a paypal button to send you a donation since you are so awesome with the gluster community heh.
18:37 JoeJulian :D Thank you.
18:38 JoeJulian I, of course, don't have a paypal donate button. I've been playing with bitcoin, just for fun, and have that link, but it's as much for the lols as anything.
18:41 chouchins yeah I know nothing about bitcoin, but yeah
18:41 chouchins I'll buy you a drink in Boston or something next summit
18:44 JoeJulian +1
18:48 johnmark +1000
18:49 elyograg kkeithley: it started OK and I was able to get an auth token.  Someone else will be trying it out, when they get some time.
18:51 kkeithley good, that's progress.
18:51 hateya joined #gluster
18:54 nueces joined #gluster
19:05 Humble joined #gluster
19:11 rwheeler_ joined #gluster
19:14 jaugros joined #gluster
19:18 jaugros so if i re-export a GlusterFS mount as a Samba share for Windows clients, that means I lose the auto-failover feature for those Windows clients.  is that correct?
19:18 tylerflint left #gluster
19:20 amccloud joined #gluster
19:20 jaugros at this point i'm wondering whether there is any added benefit of glusterfs if I have to have Windows clients.  Maybe Samba and DRBD would be just as good?
19:21 JoeJulian That was quick
19:21 sashko hey guys, any of you guys know what liquid web's hobs (high performance block storage) is based on?
19:21 * JoeJulian shrugs
19:22 Humble joined #gluster
19:22 hateya joined #gluster
19:22 puebele1 joined #gluster
19:22 sashko i feel like they may be using  a modified version of gluster?
19:23 jag3773 joined #gluster
19:24 JoeJulian It's possible, but I would guess unlikely. The block device translator was only recently developed.
19:25 sashko surprises me how these companies just pop out with these kind of storage solutions out of nowhere, it requires a big amount of work
19:26 hateya joined #gluster
19:31 flrichar watch it just be aoe or nbd to an ssd
19:31 flrichar ;oD
19:31 flrichar that would be simple
19:31 hateya joined #gluster
19:37 amccloud joined #gluster
19:45 sashko that would be funny flrichar
19:46 JoeJulian I wonder if jaugros will change his nick when he comes back in 6 months...
19:46 flrichar ok ok a raid set of ssds
19:47 tru_tru joined #gluster
19:52 eightyeight joined #gluster
19:54 chouchin_ joined #gluster
19:56 sashko why JoeJulian ?
19:58 sashko flrichar: not sure if nbd or aoe though, why not give them the disks in raid then?
19:58 JoeJulian Because that's when he'll come back after his horror of drbd. ;)
19:58 sashko lol
19:58 sashko what is he trying to do?
19:59 JoeJulian http://irclog.perlgeek.de/gluster/2013-02-06#i_6420791
19:59 glusterbot <http://goo.gl/JGVXp> (at irclog.perlgeek.de)
20:03 Staples84 joined #gluster
20:04 amccloud joined #gluster
20:10 Humble joined #gluster
20:13 y4m4 joined #gluster
20:20 hateya joined #gluster
20:22 bauruine joined #gluster
20:22 sjoeboo joined #gluster
20:27 johnmark y4m4: ping
20:27 amccloud joined #gluster
20:37 y4m4 johnmark: pong
20:38 Humble joined #gluster
21:00 johnmark y4m4: can you post that NFS tip somewhere publicly?
21:06 chouchins joined #gluster
21:07 chouchins joined #gluster
21:09 NuxRo anyone played with clvm on gluster?
21:11 y4m4 johnmark: yeah we can post that
21:11 y4m4 johnmark: the eager-lock ?
21:16 hagarth joined #gluster
21:19 nightwalk joined #gluster
21:23 jjnash joined #gluster
21:23 hattenator joined #gluster
21:26 Humble joined #gluster
21:26 johnmark y4m4: yup.
21:27 y4m4 johnmark: yes it should be
21:28 y4m4 johnmark: in future that would be default
21:28 y4m4 so i don't see a reason not to
21:28 y4m4 but we can mention one particular race
21:28 y4m4 which was mentioned by avati
21:29 glusterbot New news from newglusterbugs: [Bug 908500] Gluster/Swift etag (md5sum) calculations on files need to cooperatively yield the co-routine in the object-server to avoid creating starvation <http://goo.gl/oKgvY> || [Bug 908507] Investigate using one native mount point per object server worker <http://goo.gl/bImnV>
21:33 shawns|work from a client (using the fuse module) is there a way of checking peer status (like checking /proc/something)? I can see connection timed out for peers in the client logs, but not peer rejoins (with warning level logs, info is too verbose).
21:37 JoeJulian netstat?
21:37 polenta joined #gluster
21:37 shawns|work do like a netstat -an |grep 24007 |grep ESTAB ?
21:37 JoeJulian @processes
21:37 glusterbot JoeJulian: the GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/hJBvL for more information.
21:38 JoeJulian 24007 is just the management daemon.
21:38 JoeJulian You're interested in the brick ports.
21:40 shawns|work okay, cool. thanks.
21:40 JoeJulian I think there might be a way to pull the brick port info... checking...
21:52 JoeJulian Hmm... there isn't. gluster volume status would show it on a peer, but unless you make your clients peers, that won't work. You might want to file a bug report asking for that feature.
21:52 glusterbot http://goo.gl/UUuCq
21:53 shawns|work Thank you for looking, I appreciate the effort. I will ask for the feature.
21:54 Humble joined #gluster
21:55 JoeJulian You /could/ parse an lsof of the client pid for that mount.
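A sketch of the lsof approach JoeJulian suggests, run on the client; the mount point is a placeholder. The established connections should be to the brick ports (24009 and up on 3.3.x), not just to 24007.

    # find the glusterfs client process for a given mount point
    pid=$(pgrep -f 'glusterfs.*/mnt/myvol')
    # list its established TCP connections -- roughly one per brick it can currently reach
    lsof -Pan -p "$pid" -i TCP -s TCP:ESTABLISHED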
21:55 shawns|work heh, 3.3.1 isn't on the version list in bugzilla (yet)
21:55 * JoeJulian grumbles
21:56 shawns|work or mebe i use mainline?
21:56 JoeJulian mainline's fine
22:03 elyograg glusterbot observation - the response to ' file a bug ' should indicate that's what the URL is for.  it might look to a newbie like a random url.
22:05 JoeJulian Yeah, you're right... After I figure out why puppet's being a @#$%^ I'll change that.
22:06 elyograg glusterbot: thanks!
22:06 glusterbot elyograg: I do not know about 'thanks!', but I do know about these similar topics: 'thanks'
22:06 elyograg ... in the "do what i mean" category... :)
22:06 JoeJulian heh, maybe we should adopt a turing bot...
22:07 JoeJulian s/turing/Turing/
22:07 glusterbot JoeJulian: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
22:07 JoeJulian Ok, glusterbot... wtf?
22:08 JoeJulian Would be kind-of cool to have a factoid system that could parse natural language.
22:10 * phox wonders.
22:10 phox glusterbot: thanks
22:10 glusterbot phox: you're welcome
22:10 phox ha.
22:30 glusterbot New news from newglusterbugs: [Bug 908518] There is no ability to check peer status from the fuse client (or maybe I don't know how to do it) <http://goo.gl/TnrZT>
22:32 raven-np joined #gluster
22:34 Humble joined #gluster
23:04 eightyeight joined #gluster
23:13 Humble joined #gluster
23:29 themadcanudist joined #gluster
23:29 themadcanudist left #gluster
23:35 bauruine joined #gluster
23:45 Humble joined #gluster
