IRC log for #gluster, 2013-06-28


All times shown according to UTC.

Time Nick Message
00:33 bala joined #gluster
00:56 bala joined #gluster
01:04 semiosis looks like rpc-auth-allow-insecure is not available in 3.4.  anyone know if there's an alternative?
01:06 semiosis allow-insecure, perhaps
01:07 itisravi joined #gluster
01:10 kevein joined #gluster
01:18 semiosis code looks like server.allow-insecure should do what i want, but it doesnt seem to do anything at all
01:28 jclift semiosis: Time to file BZ?
01:28 semiosis trying to gather a bit more info first
01:29 jclift semiosis: Yeah.  Maybe grep -ri on the release-3.4 branch will turn up something useful.
01:29 semiosis already searched BZ though, sadly found an open bug in ON_QA status from ~3.1 days
01:29 jclift Ugh, I keep on looking at IRC and getting distracted.  Never going to get the end of the RDMA testing done if I keep doing that. :D
01:29 semiosis yeah i found server.allow-insecure by grep
01:30 semiosis get back to work, you!
01:30 semiosis hehe
01:30 jclift :)
01:33 vpshastry joined #gluster
01:40 rcoup joined #gluster
01:43 semiosis i'd like to file a bug glusterbot
01:43 glusterbot http://goo.gl/UUuCq
01:54 hagarth joined #gluster
02:16 hagarth semiosis: you need to turn on allow-insecure in glusterd (edit glusterd.vol and restart) too for your unprivileged applications to function properly.
02:16 semiosis i suspected something like that
02:17 semiosis could you give me the exact line?
02:17 hagarth there's ongoing work to provide a CLI interface for that.
02:17 semiosis option rpc-auth-allow-insecure on
02:17 semiosis or is it "option server.allow-insecure on:
02:17 semiosis s/:/"/
02:17 glusterbot semiosis: Error: u's/:/ /" or is it "option server.allow-insecure on:' is not a valid regular expression.
02:18 hagarth option rpc-auth-allow-insecure on
02:18 kedmison joined #gluster
02:18 semiosis thx
02:19 semiosis i started looking through glusterd.c & other files for evidence that i could put that option in glusterd.vol -- I should have just tried it!
02:20 hagarth once the CLI is available, let us document this procedure well (in our brand new markdown based documentation) :).
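A minimal sketch of the procedure discussed above, assuming the stock packaging paths of the 3.3/3.4 era (glusterd.vol typically lives at /etc/glusterfs/glusterd.vol) and a hypothetical volume named "myvol":

    # 1. let unprivileged (non-reserved-port) clients talk to glusterd itself:
    #    add this line inside the "volume management" block of glusterd.vol,
    #    then restart glusterd (e.g. "service glusterd restart" on RHEL/CentOS)
    option rpc-auth-allow-insecure on

    # 2. let insecure clients talk to the bricks of a given volume:
    gluster volume set myvol server.allow-insecure on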
02:21 kedmison I'm having a problem with my 2-node gluster distribute setup; the client is trying a rm -rf <dir> and getting 'Directory not empty', but the client cannot see any files.  However, I can see them on the bricks themselves.
02:22 kedmison What is the best way to resolve this?  (I'm running gluster 3.3.1)
02:24 kedmison Am I safe to remove the files directly from the bricks themselves?  or do I need to somehow update the brick's .glusterfs directory contents too?
02:24 bharata joined #gluster
02:25 semiosis kedmison: check client mount's log file for more information, then check the brick log files (both of them) for more information
02:25 semiosis if you find anything there, put it in a pastie for us
02:25 semiosis hagarth: indeed
02:26 kedmison on it…
02:28 semiosis hagarth: thanks insecure clients are working now
02:33 jclift Found another potential bug in 3.4.0beta3.
02:33 hagarth semiosis: cool
02:33 hagarth jclift: that being?
02:34 glusterbot New news from newglusterbugs: [Bug 979225] server.allow-insecure aka rpc-auth-allow-insecure option does not work <http://goo.gl/FSWVp>
02:34 jclift I've got the 2 node gluster storage set up, with the rdma test volume from the rdma test day.
02:34 jclift And I've peer probed a third box, that doesn't have any bricks on it.
02:34 jclift The peer probe is fine, it's all connected.
02:35 jclift So, I reboot that third box, and it comes up fine.  Glusterd starts up fine, and also starts up the NFS and self heal server (on this third node with no bricks).
02:36 jclift It seems to want to serve the volumes it doesn't have bricks for via NFS, so I think "interesting", and promptly mount one of the volumes from another box completely over NFS.
02:37 jclift The mount succeeds.  However, I can't do a file listing (ls) or file creation (sudo touch foo) on this weirdly mounted NFS volume.
02:37 harish joined #gluster
02:37 jclift hagarth: I'm just trying to figure out if the bug is in the "can't do stuff with this mounted volume" or if the bug is in the third box attempting to serve volumes via NFS when it doesn't have any bricks itself.
02:37 hagarth NFS server gets started on all peers.
02:38 hagarth and files should be served off the 3rd server too.
02:38 jclift hagarth: Cool.  So that's expected behaviour then.  The bug is in the "can't do jack to the mounted volume".
02:38 hagarth jclift: yes, that seems to be the case.
02:38 jclift Note, I haven't actually tried writing data to an NFS volume from the "proper" servers yet either now I think about it.
02:39 jclift I should probably try that first, in case the problem exists there too and not just in this edge case.
02:39 * jclift goes and tries it out
02:40 semiosis hagarth: ok this is interesting
02:41 semiosis my insecure client now can fetch the volfile, attempting to create a file silently does nothing, and then trying to write to that file crashes the app
02:42 semiosis re-setting server.allow-insecure, just to be sure
02:42 semiosis it was set to "ON", now i'm setting it to "on"
02:43 semiosis oh and another question for you... it seems like 3.4 takes much longer to complete gluster commands
02:43 semiosis is that common, or maybe something strange about my test server & volume?
02:44 hagarth semiosis: yeah, we have introduced a safer configuration write mechanism.
02:44 * jclift kind of wishes Gluster had copied PostgreSQL's "2 phase commit" code
02:45 hagarth that induces slowness. (maybe there could be an option to have old behavior for testing purposes)
02:45 semiosis good to know
02:45 jclift It amazes me that creating a new volume can error out with things like "volume foo can't be used due to XXX" and yet it has already written a bunch of xattrs to all of the bricks before it figured that out.  Like... wow. :(
02:45 hagarth jclift: don't you like glusterd's "2 phase commit" code?
02:45 semiosis now back to the issue with the insecure client not being able to write to the volume
02:45 jclift hagarth: From the way it does volume creation, I'd call it a steaming pile
02:46 jclift But, that could just be the one edge case where it goes wrong. :D
02:46 jclift No idea. :)
02:47 semiosis i am going to update bug 979225 with this write problem
02:47 hagarth jclift: we do not have rollback semantics if the staging passes. maybe we should harden validations during staging so that commit doesn't fail.
02:47 glusterbot Bug http://goo.gl/FSWVp unspecified, unspecified, ---, kparthas, NEW , server.allow-insecure aka rpc-auth-allow-insecure option does not work
02:47 hagarth semiosis: ok
02:47 portante joined #gluster
02:48 jclift hagarth: Maybe.  I think it's not so much the rollback, but the "lets check that everything would succeed before we start modifying the bricks".  So, there shouldn't be any "rollback" as such.
02:48 jclift hagarth: Ahhh, I guess that's what you mean by hardening validations so the commit doesn't fail
02:48 hagarth jclift: yes.
02:49 jclift Yeah.  That would be the thing to do.
02:49 kedmison This is the log information I could find from the client.  There are no relevant logs on the server side(s) for the time in question.  http://pastie.org/8089950
02:49 glusterbot Title: #8089950 - Pastie (at pastie.org)
02:54 semiosis kedmison: whenever a client says something like "remote operation failed" go check the brick logs, /var/log/glusterfs/bricks -- did you look there for corresponding log messages?
02:54 kevein joined #gluster
02:55 * semiosis goes to log level TRACE
02:56 kedmison apologies; I did not check the brick logs.  looking now.
02:57 nueces joined #gluster
02:58 semiosis hagarth: wow i feel dumb right now
02:58 semiosis hagarth: the insecure client couldn't write to the volume because the perms didn't allow others to write
02:58 semiosis :O
02:59 semiosis chmod 0777 the brick and now it's all good :D
02:59 hagarth was the application crash due to libgfapi?
03:00 semiosis yes
03:01 semiosis i mean, it's a very simple application
03:01 hagarth ok, that's a bug in libgfapi.
03:01 semiosis https://github.com/semiosis/libgfapi-jni/blob/master/glfsjni/src/test/java/org/fusesource/glfsjni/internal/GLFSTest.java
03:01 glusterbot <http://goo.gl/CscnQ> (at github.com)
03:01 semiosis all it does is wrap a few libgfapi functions with JNI and call them
03:02 semiosis writes hello world to a file
03:02 hagarth yeah, saw that now.
03:02 semiosis maybe i'm doing something wrong not catching an error
03:02 semiosis haven't got that far with libgfapi yet
03:02 semiosis maybe the int val returned by create indicated the create failed, but i tried to write to it anyway
03:03 semiosis s/create/creat/
03:03 glusterbot What semiosis meant to say was: maybe the int val returned by creat indicated the create failed, but i tried to write to it anyway
03:04 hagarth semiosis: quite possible
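A sketch of the check being discussed, written against the libgfapi-jni wrapper linked above. The GLFS method names, the long handles, and the O_* constants are assumptions modeled on the underlying C API (glfs_creat returns NULL on failure, glfs_write returns a negative value on error); the real binding may differ:

    // assumed HawtJNI-style binding: C pointers surface as long handles, 0 == NULL
    // "fs" is the glfs_t handle obtained earlier from glfs_new()/glfs_init()
    long fd = GLFS.glfs_creat(fs, "/hello.txt", O_CREAT | O_WRONLY, 0644);
    if (fd == 0) {
        // creat failed -- bail out rather than writing through a bad handle
        throw new RuntimeException("glfs_creat failed");
    }
    byte[] data = "hello world".getBytes();
    int rc = GLFS.glfs_write(fd, data, data.length, 0);
    if (rc < 0) {
        throw new RuntimeException("glfs_write failed: " + rc);
    }
    GLFS.glfs_close(fd);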
03:04 vshankar joined #gluster
03:10 kedmison updated with snippets from the brick logs: http://pastie.org/8089977; I'm trying to keep the log snippets minimal.
03:13 semiosis kedmison: are you doing rm -rf through the client?  gotta ask
03:13 kedmison yes, I am doing it through the client; glusterfs fuse client.
03:16 kedmison I have had some problems with this  server in question; had some hangs and reboots while trying to put the data into gluster.  (Turned out to be a bad controller card; once I sorted that out, things have been solid)
03:17 semiosis well i'm not sure what else to do
03:18 semiosis haven't encountered this myself
03:18 saurabh joined #gluster
03:19 jclift hagarth__: Still around?
03:19 kedmison I am wondering if there's a command to cause the server to re-scan the bricks and figure out that these files are actually there… or if it's OK to remove the directories manually from the bricks since I actually want the directories deleted anyway.
03:20 jclift kedmison: The volume heal command maybe?
03:21 jclift kedmison: Note, I mostly do stuff in my dev/test environment here, so I've barely touched the healing and rebalancing commands.  Thus, I'm not really sure.
03:21 jclift kedmison: semiosis may know better.
03:22 kedmison jclift: I'm definitely looking at the rebalance commands.  heal seems to be for replicated-configs only.
03:22 jclift Cool. :)
03:23 kedmison jclift: but I've got so many files in there that I think I need a fix in 3.3.2 to not leak FDs during a rebalance operation.
03:23 kedmison left #gluster
03:23 kedmison joined #gluster
03:24 mooperd joined #gluster
03:26 itisravi joined #gluster
03:29 semiosis glusterfs doesnt keep much state apart from whats actually on disk, but there is some
03:29 itisravi joined #gluster
03:29 semiosis in that case, stop & start the volume, which kills & respawns all the brick ,,(processes)
03:29 glusterbot The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/F6jqx for more information.
03:31 kedmison semiosis: ok, I'll try that.  I guess I have to unmount all the clients first before stopping and starting the volume?
03:31 semiosis well you dont have to
03:31 semiosis they'll just hang until they can reconnect
03:32 semiosis hard to say what consequences that might have for your apps & such
03:34 kedmison I'll unmount and remount the clients; just to ensure that the apps are quiesced.
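For reference, the cycle being suggested is just the standard CLI sequence (the volume name here is a placeholder):

    gluster volume stop myvol      # clients hang, or should be unmounted first
    gluster volume start myvol     # respawns the glusterfsd brick daemons

    gluster volume status myvol    # 3.3+: confirm the bricks came back online
    ps -C glusterfsd -o pid,args   # or check the brick processes directly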
03:34 glusterbot New news from resolvedglusterbugs: [Bug 979225] server.allow-insecure aka rpc-auth-allow-insecure option does not work <http://goo.gl/FSWVp>
03:40 lalatenduM joined #gluster
03:41 aravindavk joined #gluster
03:42 kedmison I unmounted the clients, volume stop, volume start, mounted the clients again, and still the client cannot delete the directory.
03:45 mmalesa joined #gluster
03:55 rjoseph joined #gluster
04:00 mohankumar__ joined #gluster
04:01 mohankumar__ joined #gluster
04:05 bala joined #gluster
04:10 vpshastry joined #gluster
04:12 kevein_ joined #gluster
04:34 vpshastry joined #gluster
04:37 vpshastry left #gluster
04:47 deepakcs joined #gluster
04:54 rcoup joined #gluster
04:55 rcoup joined #gluster
04:56 anands joined #gluster
05:03 hagarth joined #gluster
05:15 shireesh joined #gluster
05:16 JoeJulian ~pasteinfo | kedmison
05:16 glusterbot kedmison: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
05:16 JoeJulian ... if you're still around...
05:17 kedmison joejulian: yep, still here…
05:21 redbeard joined #gluster
05:21 kedmison joejulian: here it is.  http://fpaste.org/21584/39682613/
05:21 glusterbot Title: #21584 Fedora Project Pastebin (at fpaste.org)
05:28 kedmison I did experiment with some of the options at one point, including turning the flush-behind on, to try to improve small-file performance.  Flush-behind set to on corresponded with the disk controller-induced crashes I had, so I did wonder if some writes weren't committed to disk.  I was rsync-ing my data to the cluster, and it seemed like all of the data was there, so I didn't worry too much about it.
05:28 kedmison joejulian: is there anything out of the ordinary that you see in that fpaste output?
05:30 psharma joined #gluster
05:36 satheesh joined #gluster
05:37 mooperd joined #gluster
05:45 vpshastry1 joined #gluster
05:55 sgowda joined #gluster
06:04 glusterbot New news from newglusterbugs: [Bug 978030] Qemu libgfapi support broken for GlusterBD integration <http://goo.gl/8VZjy>
06:05 pkoro joined #gluster
06:09 satheesh joined #gluster
06:11 JonnyNomad_ joined #gluster
06:22 bulde joined #gluster
06:30 ollivera joined #gluster
06:32 JonnyNomad joined #gluster
06:35 satheesh joined #gluster
06:37 vimal joined #gluster
06:38 puebele1 joined #gluster
06:48 puebele joined #gluster
06:48 andreask joined #gluster
06:53 krokar joined #gluster
06:55 ngoswami joined #gluster
06:57 satheesh joined #gluster
07:01 ekuric joined #gluster
07:02 ctria joined #gluster
07:08 puebele joined #gluster
07:09 ramkrsna joined #gluster
07:09 ramkrsna joined #gluster
07:10 rgustafs joined #gluster
07:15 pkoro joined #gluster
07:55 rastar joined #gluster
07:59 ricky-ticky joined #gluster
08:11 X3NQ joined #gluster
08:16 piotrektt joined #gluster
08:16 piotrektt joined #gluster
08:31 pkoro joined #gluster
08:35 mmalesa joined #gluster
08:37 shireesh_ joined #gluster
08:46 al joined #gluster
08:46 spider_fingers joined #gluster
09:08 mmalesa joined #gluster
09:09 ccha Can I geo-replicate 1 gluster volume as master to another volume as slave through nfs ? gluster volume geo-replication masterVol nfs://slaveHost:slaveVol ?
09:10 ccha gluster volume geo-replication masterVol glusterfs://slaveHost:slaveVol <-- this one works
09:28 social_ if I have a split brain on a gfid and I know I can lose the data, is the best option just to find out where the gfid is pointing and delete the content?
09:33 mooperd joined #gluster
09:46 manik joined #gluster
09:55 andreask joined #gluster
10:35 glusterbot New news from newglusterbugs: [Bug 979365] Provide option to disable afr durability <http://goo.gl/CAVAY>
10:39 mooperd joined #gluster
10:52 ollivera joined #gluster
10:54 yinyin joined #gluster
10:56 puebele3 joined #gluster
11:06 duerF joined #gluster
11:14 puebele joined #gluster
11:16 andreask joined #gluster
11:26 yinyin joined #gluster
11:33 puebele1 joined #gluster
11:40 vpshastry1 left #gluster
11:42 ngoswami joined #gluster
11:46 mooperd joined #gluster
12:24 hybrid5123 joined #gluster
12:27 chirino semiosis
12:27 chirino get my last message?
12:28 manik joined #gluster
12:30 mmalesa_ joined #gluster
12:31 mmalesa joined #gluster
12:36 duerF joined #gluster
12:36 plarsen joined #gluster
12:40 ngoswami joined #gluster
12:43 champtar joined #gluster
12:48 an joined #gluster
12:49 andreask joined #gluster
12:50 krokarion joined #gluster
13:04 edward1 joined #gluster
13:04 harish joined #gluster
13:07 NcA^__ joined #gluster
13:08 aliguori joined #gluster
13:12 X3NQ joined #gluster
13:12 krokar joined #gluster
13:26 krokarion joined #gluster
13:32 krokar joined #gluster
13:32 rwheeler joined #gluster
13:34 theron joined #gluster
13:39 kedmison joined #gluster
13:40 anands joined #gluster
13:41 hybrid5121 joined #gluster
13:41 bennyturns joined #gluster
13:46 lpabon joined #gluster
13:48 hagarth joined #gluster
13:49 failshell joined #gluster
13:52 joelwallis joined #gluster
13:56 semiosis chirino: i updated the readme last night with instructions on how to run the tests without root privileges
13:56 chirino coolio
13:56 semiosis thanks to hagarth's help with that
13:57 kaptk2 joined #gluster
13:57 semiosis also tried the search & replace method of generating the hawtjni annotations, but after I pushed i realized i had made many mistakes, so i undid that this morning
13:57 chirino k
13:57 semiosis going to try again tonight with a more TDD approach
13:57 bsaggy joined #gluster
13:59 hagarth it's time for beta4!
14:00 manik joined #gluster
14:00 semiosis excellent!
14:02 semiosis chirino: would you agree that byte[] is a good replacement for void*?
14:02 semiosis or is there a better java type?
14:02 chirino yeah..
14:03 chirino for stuff like (void *, site_t len)
14:03 semiosis right
14:03 robo joined #gluster
14:03 chirino I noticed you had mapped to String
14:04 chirino which probably is not the most general purpose choice.
14:04 semiosis yes that's temporary only because i was too lazy to call getBytes on the string :/
14:04 semiosis will fix that tonight
14:05 chirino also this will apply: http://hawtjni.fusesource.org/documentation/developer-guide.html#optimizations
14:05 glusterbot <http://goo.gl/7k7TY> (at hawtjni.fusesource.org)
14:06 chirino since those byte[] are not updated by the C code, you will want to apply the NO_IN flag
14:06 semiosis makes sense, thanks for pointing that out
14:06 chirino otherwise, on the way out of the method, it will try updating the Java byte[] with any changes that happened to the C array
14:06 semiosis i feel like i still need to read that document 50 more times to fully comprehend it
14:07 chirino :)
14:07 semiosis there's a lot of info packed in there
14:08 semiosis ah yes there's the void * to byte[] example in that section, missed that before
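A sketch of how such a binding might look with HawtJNI annotations, following the pattern in the developer guide linked above; the type mappings are assumptions (size_t/ssize_t rendered as long), and the flag choice simply follows chirino's suggestion:

    import org.fusesource.hawtjni.runtime.JniArg;
    import org.fusesource.hawtjni.runtime.JniClass;
    import org.fusesource.hawtjni.runtime.JniMethod;
    import static org.fusesource.hawtjni.runtime.ArgFlag.NO_IN;

    @JniClass
    public class GLFS {
        // C: ssize_t glfs_write(glfs_fd_t *fd, const void *buf, size_t count, int flags);
        // the void* buffer is exposed as byte[]; NO_IN is applied per the advice above,
        // since glusterfs never modifies this buffer
        @JniMethod
        public static final native long glfs_write(
                long fd,
                @JniArg(flags = { NO_IN }) byte[] buf,
                long count,
                int flags);
    }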
14:10 bugs_ joined #gluster
14:11 andreask joined #gluster
14:13 kkeithley 3.4.0beta4 just released
14:15 semiosis sweet, it has the lvm patch for ubuntu precise
14:20 mmalesa joined #gluster
14:20 semiosis java has an AtomicMoveNotSupportedException :)
14:25 spider_fingers left #gluster
14:26 harish joined #gluster
14:28 hagarth semiosis: sounds pretty cool :).
14:47 lpabon joined #gluster
15:00 manik joined #gluster
15:07 samppah @latest
15:07 glusterbot samppah: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
15:11 samppah nam
15:14 alias_willsmith joined #gluster
15:16 vimal joined #gluster
15:24 kedmison joined #gluster
15:30 mmalesa joined #gluster
15:34 kushnir-nlm Hey guys. I have a problem on a 2-node replica cluster with 4 bricks on each node. Each node has 4 CPUs and 16GB of RAM, running RHEL 6 with Gluster 3.3.1-15 from kkeithley. I had a distributed volume that I added a second node to in order to make it distributed-replicated. During the replication, I started getting page allocation errors on the destination server.
15:34 kushnir-nlm I know that "echo 1 > /proc/sys/vm/zone_reclaim_mode" makes the page allocation errors go away.... But are the page allocation errors fatal? Do they mean that I have garbage on my second node now and that I need to wipe it and start replication all over?
15:38 tjikkun_work joined #gluster
15:44 NcA^_ joined #gluster
15:54 semiosis kushnir-nlm: you can inspect the files on the bricks to see if they're garbage.  should be safe to read them, just don't modify
15:55 semiosis ,,(extended attributes) may be helpful
15:55 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://goo.gl/Bf9Er
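An illustrative run of that command against a file on a brick (the brick path is a placeholder); the trusted.* keys listed are the ones GlusterFS normally sets:

    # run on the server, against the file's path inside the brick, not the client mount
    getfattr -m . -d -e hex /export/brick1/path/to/file

    # keys you will typically see:
    #   trusted.gfid                     - the file's volume-wide identity
    #   trusted.afr.<volname>-client-N   - AFR (replication) pending-operation counters
    #   trusted.glusterfs.dht            - DHT layout ranges (on directories)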
15:57 semiosis wow no queue on launchpad this morning.  beta4 building already
15:59 jthorne joined #gluster
16:00 samppah :)
16:01 samppah waiting for epel rpm.. had some problems with rebalance on beta3
16:02 kedmison joined #gluster
16:06 dbruhn joined #gluster
16:07 kushnir-nlm semiosis: thanks for the response... Examining 1.6 million files isn't really going to work for me. What I was asking was whether page allocation errors like that can be ignored or whether they generally result in bad data.
16:08 kushnir-nlm Also, if they result in bad data, will I see split brains, or will it just be silently bad data?
16:08 semiosis i was giving you an option in case you don't get a response here... i've never heard of that problem
16:08 semiosis hopefully someone else has, who sees your question, and answers
16:11 kushnir-nlm :/ Thanks for trying :)
16:11 kushnir-nlm What OS do you run your gluster on?
16:11 dbruhn joined #gluster
16:11 semiosis ubuntu of course
16:11 semiosis ,,(ppa)
16:11 glusterbot The official glusterfs 3.3 packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.3 QA: http://goo.gl/5fnXN -- and 3.4 QA: http://goo.gl/u33hy
16:12 kushnir-nlm Ahh, we don't allow that... It's "unsafe" because NIST doesn't provide hardening scripts for it ;)
16:12 * semiosis living dangerously
16:12 semiosis :)
16:14 semiosis i'm not too familiar with this zone_reclaim_mode, but reading up on it
16:14 semiosis how can we make a test to see if the page allocation errors corrupt data?
16:14 semiosis can we cause them to happen on just one file, then look at that file?
16:14 kushnir-nlm http://www.gluster.org/pipermail/gluster-users/2012-September/034328.html
16:14 glusterbot <http://goo.gl/ypM65> (at www.gluster.org)
16:15 semiosis also https://www.kernel.org/doc/Documentation/sysctl/vm.txt - at the end
16:15 glusterbot <http://goo.gl/U8T22> (at www.kernel.org)
16:15 kushnir-nlm Yeah, that's the general explanation
16:16 semiosis which suggests it will affect performance, not surprising
16:16 semiosis this doesnt sound like something that would corrupt data, but it would be good to test
16:16 kushnir-nlm So RHEL 5 default was 1 (on) , RHEL 6 it's 0 (off).
16:17 kushnir-nlm In a prior test, I got a lot of these errors... I replicated 6M files, all of which were consistent except for 3 files... I did md5 hashes and compared between servers
16:18 kushnir-nlm The funny thing was that I would delete those 3 bad files on the bricks and do a stat on my gluster mount to trigger replication, and they would be replaced with the same bad files.
16:19 ndevos ~split brain | kushnir-nlm
16:19 glusterbot kushnir-nlm: I do not know about 'split brain', but I do know about these similar topics: 'split-brain'
16:19 ndevos ~split-brain | kushnir-nlm
16:19 glusterbot kushnir-nlm: (#1) To heal split-brain in 3.3, see http://goo.gl/FPFUX ., or (#2) learn how to cause split-brain here: http://goo.gl/Oi3AA
16:19 semiosis #2 is broken :(
16:19 semiosis johnmark: can we get a static export of C.G.O?
16:21 ndevos kushnir-nlm: anyway, deleting the files is not sufficient, you need to delete the .glusterfs/AB/CD/ABCDEFG...gfid too
16:23 ndevos kushnir-nlm: if the .glusterfs/AB/CD/ABCD...gfid file exists, the file will just be hard-linked back; the contents were kept on the local disk
16:24 ndevos kushnir-nlm: http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/ contains a more verbose description
16:24 glusterbot <http://goo.gl/FPFUX> (at joejulian.name)
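A sketch of the cleanup ndevos describes, with placeholder brick paths; the .glusterfs location is derived from the first two and next two hex digits of the file's gfid:

    # on the brick holding the bad copy, read the file's gfid
    getfattr -n trusted.gfid -e hex /export/brick1/path/to/bad-file

    # a gfid beginning "abcd..." maps to .glusterfs/ab/cd/<full-gfid-as-uuid>

    # remove BOTH the file and its gfid hard link, otherwise the old
    # contents can simply be linked back by self-heal
    rm /export/brick1/path/to/bad-file
    rm /export/brick1/.glusterfs/ab/cd/<full-gfid-as-uuid>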
16:30 l0uis left #gluster
16:36 aravindavk joined #gluster
16:37 glusterbot New news from newglusterbugs: [Bug 953694] Requirements of Samba VFS plugin for glusterfs <http://goo.gl/v7g29>
16:43 zaitcev joined #gluster
16:55 robos joined #gluster
16:57 kedmison joined #gluster
17:12 rwheeler joined #gluster
17:33 manik joined #gluster
17:37 kkeithley 3.4.0beta4 RPMs are available at http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.0beta4/
17:37 glusterbot <http://goo.gl/964uJ> (at download.gluster.org)
17:39 andreask joined #gluster
17:55 alias_willsmith joined #gluster
17:56 samppah kkeithley: whee!
17:59 kkeithley :-)
18:08 dbruhn How did the 3.4 RDMA test day go?
18:13 tg2 joined #gluster
18:17 kkeithley They're doing it again this weekend. Apparently everyone got sidetracked by other important things they needed to do instead.
18:19 Dean Ahh ok, I just got my testing servers, I still need to order IB hardware though
18:22 tg2 joined #gluster
18:23 samppah hmm, adding a new brick and rebalancing after that causes VMs running on oVirt to pause "due to unknown storage error"
18:25 samppah this is on Gluster 3.4 beta4 and oVirt 3.3 (nightly builds)..
18:25 alias_willsmith joined #gluster
18:26 samppah any idea if i'm hitting a known bug here?
18:29 manik joined #gluster
18:32 tg2 joined #gluster
18:35 tg3 joined #gluster
18:37 rwheeler joined #gluster
19:02 robo joined #gluster
19:06 lpabon joined #gluster
19:10 nueces joined #gluster
19:29 tg3 @samppah - the vm is stored on the gluster share?
19:29 samppah tg3: yes
19:29 samppah mounted with fuse client
19:29 tg3 had the same issue with vmware mounted via nfs
19:29 tg3 so I'm guessing its a rebalance bug
19:29 portante joined #gluster
19:29 samppah tg3: was this with 3.4 alpha/beta?
19:29 tg3 3.3.2
19:29 andreask joined #gluster
19:29 samppah oh okay
19:29 tg3 does it work for you on 3.3?
19:30 iatecake joined #gluster
19:30 samppah i did some testing with red hat storage some weeks ago and i don't remember hitting this issue back then
19:34 drpal joined #gluster
19:34 drpal joined #gluster
19:35 drpal left #gluster
19:43 portante_ joined #gluster
19:44 kedmison joined #gluster
19:46 portante joined #gluster
20:12 samppah https://bugzilla.redhat.com/show_bug.cgi?id=953887
20:13 glusterbot <http://goo.gl/tw8oW> (at bugzilla.redhat.com)
20:13 glusterbot Bug 953887: high, high, ---, pkarampu, MODIFIED , [RHEV-RHS]: VM moved to paused status due to unknown storage error while self heal and rebalance was in progress
20:25 samppah although that patch is included in 3.4 beta
20:31 samppah bug 922183
20:31 glusterbot Bug http://goo.gl/ZD3FO is not accessible.
20:34 Nagilum_ joined #gluster
20:34 Nagilum_ left #gluster
20:38 Nagilum joined #gluster
20:42 Nagilum kkeithley: do you know if you'll do any rpms of v3.3.2qa3 ?
21:06 manik joined #gluster
21:09 mooperd joined #gluster
21:19 semiosis @qa releases
21:19 glusterbot semiosis: The QA releases are available at http://bits.gluster.com/pub/gluster/glusterfs/ -- RPMs in the version folders and source archives for all versions under src/
21:21 Nagilum semiosis: hmm, good point! Thanks
21:37 manik joined #gluster
21:59 tg2 joined #gluster
22:10 badone joined #gluster
22:12 semiosis oh yeah there are rpms there too
22:12 semiosis i was just looking to see if i had the latest 3.3.2qa myself
22:12 semiosis http://bits.gluster.com/pub/gluster/glusterfs/3.3.2qa3/ has rpms
22:12 glusterbot <http://goo.gl/miEUH> (at bits.gluster.com)
22:14 cyberbootje joined #gluster
22:21 gluslog joined #gluster
22:30 georgeh|workstat joined #gluster
23:36 heitor joined #gluster
23:41 heitor joined #gluster
23:44 heitor left #gluster
23:45 joelwallis joined #gluster
