IRC log for #gluster-dev, 2012-12-13

All times shown according to UTC.

Time Nick Message
01:21 _ilbot joined #gluster-dev
02:12 hagarth joined #gluster-dev
02:27 mohankumar joined #gluster-dev
02:40 bharata joined #gluster-dev
03:30 mdarade joined #gluster-dev
03:32 mdarade left #gluster-dev
03:41 sgowda joined #gluster-dev
03:57 xavih joined #gluster-dev
04:17 sripathi joined #gluster-dev
04:20 mdarade joined #gluster-dev
04:27 deepakcs joined #gluster-dev
04:30 mdarade left #gluster-dev
04:54 vpshastry joined #gluster-dev
05:20 hagarth joined #gluster-dev
05:21 vpshastry joined #gluster-dev
05:29 bulde joined #gluster-dev
05:30 sripathi joined #gluster-dev
05:43 bulde bharata: ping
05:43 bulde around?
05:43 raghu joined #gluster-dev
05:44 bharata bulde, yes
05:45 bulde bharata: i mostly got the reason for the wb crash
05:47 bulde bharata: please try below patch, and see if it works
05:47 bulde diff --git a/xlators/performance/write-behind/src/write-behind.c b/xlators/performance/write-behind/src/write-behind.c
05:47 bulde index 232e6c0..c2d30e7 100644
05:47 bulde --- a/xlators/performance/write-behind/src/write-behind.c
05:47 bulde +++ b/xlators/performance/write-behind/src/write-behind.c
05:47 bulde @@ -917,13 +917,13 @@ void
05:47 bulde __wb_preprocess_winds (wb_inode_t *wb_inode)
05:47 bulde {
05:47 bulde off_t         offset_expected = 0;
05:47 bulde -        size_t        space_left      = 0;
05:47 bulde +        ssize_t       space_left      = 0;
05:47 bulde wb_request_t *req             = NULL;
05:47 bulde wb_request_t *tmp             = NULL;
05:47 bulde wb_request_t *holder          = NULL;
05:47 bulde wb_conf_t    *conf            = NULL;
05:47 bulde int           ret             = 0;
05:47 bulde -       size_t        page_size       = 0;
05:48 bharata bulde, so just s/size_t/ssize_t ?
05:48 bulde the problem here seems to be bigger write sizes... with fuse mount we get max of 128kB, but here, we got write size of 8MB, which is much bigger than default iobuf size (or page_size) which is 128kB
05:48 bulde yes, in that one function, thats what i suspect
05:49 bulde mostly should fix it right away :-) if not, it may be much bigger problem :-)
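(A minimal standalone C sketch, not the actual write-behind code, of the wraparound bulde suspects above: with an unsigned size_t, "page_size - write_size" cannot go negative, so an 8MB write against a 128kB iobuf looks as if it still has room. The 128kB/8MB numbers come from the discussion; everything else is illustrative.)

    /* sketch: unsigned vs. signed "space left" computation */
    #include <stdio.h>
    #include <stddef.h>
    #include <sys/types.h>

    int
    main (void)
    {
            size_t  page_size  = 128 * 1024;        /* default iobuf size: 128kB */
            size_t  write_size = 8 * 1024 * 1024;   /* write size seen via qemu/gfapi */

            size_t  space_left_unsigned = page_size - write_size;  /* wraps to a huge value */
            ssize_t space_left_signed   = (ssize_t) page_size - (ssize_t) write_size;

            printf ("size_t  space_left = %zu (looks like plenty of room)\n",
                    space_left_unsigned);
            printf ("ssize_t space_left = %zd (correctly negative)\n",
                    space_left_signed);

            /* Note: if a later comparison still mixes this value with an
             * unsigned operand, it is converted back to unsigned, so a type
             * change alone is not necessarily enough. */
            return 0;
    }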
05:49 bharata bulde, ok will try and let you know
05:50 bulde sure, no hurry, do it when you run qemu next time :-)
05:50 bharata bulde, sure will do and get back
05:50 bharata bulde, but curious, nobody there is using qemu+glusterfs yet ? :)
05:51 raghu joined #gluster-dev
05:53 johnmark bharata: I'm trying to get some time to do it soon :)
05:53 johnmark bharata: in preparation for the beta, we need to come up with some test scripts
05:53 johnmark ie. steps to follow to test basic functionality
05:53 bharata johnmark, ok
05:55 bharata johnmark, yes that's important to ensure that the qemu-glusterfs combination is broken in subsequent releases
05:55 johnmark so that we can provide an easy way for people to run the same tests and allow us to compare results
05:55 bharata johnmark, s/is broken/isn't broken
05:55 johnmark in some quantitative way
05:55 bharata makes sense
05:55 johnmark bharata: yes, I knew what you meant :)
05:55 bharata :)
05:56 johnmark heh
05:56 johnmark bharata: so if you can help us write some test scripts, that would be very helpful
05:58 bharata johnmark, I want to have one test case within glusterfs that would create an image (using qemu-img), install a distro on it using an iso image, then start the VM
05:58 johnmark ok
05:58 bharata johnmark, that should help us automatically test if qemu-glusterfs is working
05:58 johnmark excellent, ok
05:59 johnmark bharata: as soon as I get a date that we'll have the first beta, I'll schedule a test day
05:59 bharata johnmark, let me work on that, but it means that the system that runs the tests should also host a working VM image or a distro iso image
05:59 johnmark to get as much community action around beta testing as possible
05:59 johnmark bharata: indeed
06:00 johnmark bharata: as long as you clearly lay out the steps so that someone else can reproduce your procedures, that's all that is needed
06:01 bharata johnmark, right
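(A rough sketch of the test case bharata outlines above, assuming QEMU's gluster:// URI support; the host, volume and image names are placeholders and the exact qemu options are illustrative.)

    # create an image for the test VM directly on the gluster volume
    qemu-img create -f qcow2 gluster://localhost/testvol/test.qcow2 10G

    # install a distro from an ISO into that image and boot the VM
    qemu-system-x86_64 -m 1024 -cdrom distro.iso \
        -drive file=gluster://localhost/testvol/test.qcow2,if=virtio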
06:18 bala joined #gluster-dev
06:41 sripathi1 joined #gluster-dev
06:43 bharata bulde, s/size_t/ssize_t not helping, crash at the same place
06:45 bulde bharata: thanks for feedback... let me do some more homework :-/
06:46 bharata bulde, size_t is long unsigned int and ssize_t is long int here
06:49 bulde yeah, that should have worked IMO, let me debug a little more, earlier patch was just a code review fix, which I thought was enough... give me some time, will get back to you
06:50 bharata bulde, ok
06:52 shireesh joined #gluster-dev
06:56 sripathi joined #gluster-dev
07:08 puebele joined #gluster-dev
07:16 deepakcs hagarth, [2012-12-13 07:12:32.946988] E [glusterd-op-sm.c:535:glusterd_op_stage_set_volume] 0-management: Option with name: rpc-auth-allow-insecure does not exist
07:16 deepakcs hagarth,  i still see that E msg for rpc-auth... in glusterd log
07:17 deepakcs is that ok?
07:23 hagarth deepakcs: where have you set that option? in glusterd.vol?
07:24 deepakcs hagarth, yes in /etc/glusterfs/glusterd.vol
07:25 deepakcs hagarth, but this time i was able to create a domain (VM) from virsh (but my host hung!), but it looks like irrespective of the E msg, that option helped get past the volfile error
07:25 deepakcs hagarth, strange tho' it complains of that option not existing.
07:25 deepakcs will try again, rebooting my VM ( my host is a VM in itself)
07:34 hagarth deepakcs: do you see that option listed in gluster volume info?
07:40 mohankumar joined #gluster-dev
08:00 sripathi joined #gluster-dev
08:07 xavih left #gluster-dev
08:12 deepakcs hagarth, i had manually set server.allow-insecure and nfs.rpc-auth-allow to on, using volume set .. so i see them in volume info.
08:14 deepakcs server.allow-insecure: on
08:14 deepakcs nfs.rpc-auth-allow: on
08:14 deepakcs hagarth, ^^
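(For reference, the settings being combined here, as mentioned in the discussion above; the volume name is a placeholder, and whether all three are actually required is exactly what is being debugged.)

    # /etc/glusterfs/glusterd.vol, inside the "volume management" block
    option rpc-auth-allow-insecure on

    # per-volume options set via the CLI
    gluster volume set <volname> server.allow-insecure on
    gluster volume set <volname> nfs.rpc-auth-allow on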
08:15 ndevos deepakcs: whenever you get that working (or give up), could you send an email with the options to the mailinglist?
08:17 deepakcs ndevos, :) sure, will reply to the current mail thread. i think setting the option  rpc-auth-allow-insecure in glusterd.vol did make it work.. but for some reason my VM (where i am running virsh) hangs
08:17 ndevos deepakcs: yeah, a reply on that would be much appreciated :)
08:18 deepakcs ndevos, sure
08:20 ndevos thanks!
08:34 mdarade1 joined #gluster-dev
08:54 gbrand_ joined #gluster-dev
09:02 sgowda joined #gluster-dev
09:18 mohankumar joined #gluster-dev
09:40 deepakcs hagarth, ndevos bharata has a similar setup (virsh->libvirt->qemu (using gluster backend)) and for him virsh VM creation works __without__ having rpc-auth-allow-insecure in glusterd.vol
09:40 deepakcs hagarth, ndevos wondering if you can throw some light on why my setup needs it and bharata's doesn't ?
09:40 deepakcs I am curious to understand what in the setup/config makes the difference. any ideas ?
09:43 ndevos deepakcs: does bharatas setup run as user vdsm? If you run as "root" all should be fine
09:43 deepakcs ndevos, for me running as root also saw the problem and adding that config option resolved it.
09:44 deepakcs ndevos, i saw bharata's libvirt/qemu.conf, the user & group option lines are commented, which means by default libvirt uses user 'qemu'
09:44 ndevos deepakcs: hmm, and you run libvirt on the same host as glusterd? does it connect to localhost?
09:44 deepakcs ndevos, just to clarify, i was seeing this issue when running as root, qemu and vdsm users.. it just didn't work for me. First i thought it was due to 12-12-12.. but today is 13 :)
09:45 ndevos hehee
09:45 deepakcs ndevos, yes.. bharata and me both run glusterd on the same host
09:45 deepakcs ndevos,  i edited .xml of virsh and tried hostname, localhost, 127.0.0.1 and IP of my host.. all hit the same error
09:46 deepakcs everything on 1 host.. this is a single machine setup
09:47 ndevos deepakcs: that's pretty strange, can you stop glusterd, empty the /var/log/glusterfs/etc-glusterfs-glusterd.log (or similar), start glusterd with --log-level=DEBUG, reproduce and post the log?
09:48 deepakcs ndevos, the logs i posted in the mail were from the qemu-glusterfs log, which i enabled inside qemu. glusterd log only says readv failed... something like that. i didn't find that useful.
09:48 deepakcs ndevos, Currently i am trying to see if using the option works for me in all cases. Then will reproduce the prob and post to the mail thread.
09:51 mdarade joined #gluster-dev
09:52 ndevos deepakcs: enabling DEBUG (or TRACE) for glusterd may help in explaining why it denies the connection, the rpc-readv is a result of that denial
09:52 bharata deepakcs, did my xml work for you ? Trying to see if xml generated by vdsm vs xml hand edited makes any difference
09:52 ndevos at least, thats what I have seen for other similar issues
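(The debug procedure ndevos suggests above, spelled out; the log file name and the way glusterd is stopped/started vary by distribution.)

    # stop glusterd, clear its log, restart with debug logging, then reproduce
    service glusterd stop
    > /var/log/glusterfs/etc-glusterfs-glusterd.log   # path may differ
    glusterd --log-level=DEBUG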
09:56 deepakcs bharata, the vdsm generated xml was hand edited by me and is good enough :) it works, so yours also should work
09:59 hagarth1 joined #gluster-dev
10:10 puebele joined #gluster-dev
10:11 shireesh joined #gluster-dev
10:53 harshpb joined #gluster-dev
11:08 harshpb joined #gluster-dev
11:15 deepakcs ndevos, replied to the mail thread, with the soln that works for me.
11:15 deepakcs fyi
11:16 ndevos deepakcs: great, thanks! I'll read that later and let you know if I have questions about it ;)
11:16 deepakcs ndevos, sure.
11:19 edward1 joined #gluster-dev
11:19 harshpb joined #gluster-dev
11:20 bfoster_ joined #gluster-dev
11:21 jdarcy_ joined #gluster-dev
11:21 kkeithley1 joined #gluster-dev
11:23 harshpb joined #gluster-dev
11:25 bfoster joined #gluster-dev
11:30 harshpb joined #gluster-dev
11:32 mohankumar joined #gluster-dev
11:39 harshpb joined #gluster-dev
11:42 harshpb joined #gluster-dev
11:48 harshpb joined #gluster-dev
11:48 shireesh joined #gluster-dev
11:50 hagarth joined #gluster-dev
11:59 puebele joined #gluster-dev
12:23 hagarth joined #gluster-dev
12:28 kkeithley joined #gluster-dev
12:39 mdarade joined #gluster-dev
12:40 mdarade left #gluster-dev
12:40 66MAACXPK joined #gluster-dev
12:53 maxiz joined #gluster-dev
13:07 sunus joined #gluster-dev
13:21 harshpb joined #gluster-dev
13:47 vpshastry left #gluster-dev
14:28 mdarade joined #gluster-dev
14:59 puebele joined #gluster-dev
15:17 puebele joined #gluster-dev
15:25 harshpb joined #gluster-dev
15:26 harshpb joined #gluster-dev
15:28 wushudoin joined #gluster-dev
15:32 ron-slc joined #gluster-dev
15:35 harshpb joined #gluster-dev
15:44 harshpb joined #gluster-dev
15:44 jbrooks joined #gluster-dev
15:50 harshpb joined #gluster-dev
15:50 mdarade1 joined #gluster-dev
15:59 harshpb joined #gluster-dev
15:59 mdarade1 joined #gluster-dev
16:05 harshpb joined #gluster-dev
16:09 harshpb joined #gluster-dev
16:10 harshpb joined #gluster-dev
16:18 harshpb joined #gluster-dev
16:26 mdarade joined #gluster-dev
16:56 UnixDev joined #gluster-dev
17:01 raghu joined #gluster-dev
17:07 mdarade left #gluster-dev
17:24 hagarth joined #gluster-dev
18:58 Technicool joined #gluster-dev
20:40 kkeithley left #gluster-dev
20:50 badone joined #gluster-dev
20:51 ron-slc joined #gluster-dev
20:56 badone joined #gluster-dev
21:40 gbrand_ joined #gluster-dev
23:32 gbrand_ joined #gluster-dev
