
IRC log for #gluster-dev, 2015-06-18


All times shown according to UTC.

Time Nick Message
01:37 kdhananjay joined #gluster-dev
02:10 pranithk joined #gluster-dev
02:46 dlambrig joined #gluster-dev
02:54 morphkurt joined #gluster-dev
02:55 Gaurav__ joined #gluster-dev
03:23 dlambrig joined #gluster-dev
03:55 atinm joined #gluster-dev
03:58 itisravi joined #gluster-dev
04:13 shaunm joined #gluster-dev
04:14 shubhendu joined #gluster-dev
04:20 nkhare joined #gluster-dev
04:21 suliba joined #gluster-dev
04:23 sakshi joined #gluster-dev
04:26 soumya_ joined #gluster-dev
04:35 ppai joined #gluster-dev
04:40 Manikandan joined #gluster-dev
04:40 Manikandan_ joined #gluster-dev
04:41 ashiq joined #gluster-dev
04:42 Manikandan__ joined #gluster-dev
04:43 ndarshan joined #gluster-dev
04:48 hgowtham joined #gluster-dev
04:56 spandit joined #gluster-dev
05:00 jiffin joined #gluster-dev
05:03 pppp joined #gluster-dev
05:08 pranithk joined #gluster-dev
05:16 gem joined #gluster-dev
05:18 nbalacha joined #gluster-dev
05:20 deepakcs joined #gluster-dev
05:24 Bhaskarakiran joined #gluster-dev
05:26 badone_ joined #gluster-dev
05:29 ashishpandey joined #gluster-dev
05:30 badone__ joined #gluster-dev
05:31 vimal joined #gluster-dev
05:32 schandra joined #gluster-dev
05:35 Gaurav__ joined #gluster-dev
05:41 soumya_ joined #gluster-dev
05:41 pranithk ndevos: pm
05:44 kdhananjay joined #gluster-dev
05:56 rjoseph joined #gluster-dev
06:03 kdhananjay joined #gluster-dev
06:05 kotreshhr joined #gluster-dev
06:08 raghu joined #gluster-dev
06:11 ndarshan joined #gluster-dev
06:14 vimal joined #gluster-dev
06:14 ndarshan joined #gluster-dev
06:15 overclk joined #gluster-dev
06:17 ndarshan joined #gluster-dev
06:17 Humble_ joined #gluster-dev
06:17 ndarshan joined #gluster-dev
06:25 atalur joined #gluster-dev
06:28 spalai joined #gluster-dev
06:30 spandit joined #gluster-dev
06:31 overclk raghu, can you take a look at http://review.gluster.org/#/c/11300/
06:34 anrao joined #gluster-dev
06:36 anekkunt joined #gluster-dev
06:37 kshlm joined #gluster-dev
06:43 spalai1 joined #gluster-dev
06:49 asengupt joined #gluster-dev
07:01 rgustafs joined #gluster-dev
07:05 kdhananjay raghu: Would you be able to accept http://review.gluster.org/#/c/11309/ for 3.6.4?
07:06 saurabh_ joined #gluster-dev
07:07 anekkunt joined #gluster-dev
07:07 raghu overclk: sure.
07:08 raghu kdhananjay: Let me take a look at it. I have already made 3.6.4beta2, which I plan to G.A.
07:08 raghu kdhananjay: anyway I will take a look at it once.
07:09 kdhananjay raghu: Okay.
07:14 ppai joined #gluster-dev
07:14 pranithk xavih: I was trying to address comments on http://review.gluster.org/11246, need a small discussion :-). Let me know when we can have it
07:24 xavih pranithk: I was answering some emails. We can talk about that now if you want
07:29 pranithk xavih: hey!
07:30 pranithk xavih: sorry, was in Bhaskar's cube :-)
07:31 xavih pranithk: hehe
07:31 pranithk xavih: My question is mainly about introducing EC_STATE_PREPARE_ANSWER in the manager_readdir/access
07:33 ndarshan joined #gluster-dev
07:33 pranithk xavih: We will need to set fop->answer to NULL every time we don't like the answer (so that ec_complete() resumes the fop), and because the answers are combined, we will need to go over the list to check if there is an answer with success. Is that fine?
07:33 pranithk xavih: If there is a cleaner way, tell me :-)
07:34 pranithk xavih: I feel the approach I told is not as clean :-(
07:34 xavih pranithk: I don't like it very much either... let me think...
07:38 pranithk xavih: Hey, I will go for quick lunch and come back. Leave the messages here. Should be back in ~20?
07:38 pranithk xavih: Is that okay?
07:38 xavih pranithk: sure :)
07:39 pranithk xavih: cool, cya
07:40 kdhananjay joined #gluster-dev
07:44 morphkurt joined #gluster-dev
07:44 xavih pranithk: we could rewrite ec_dispatch_one_retry() so that it calls ec_dispatch_start() and ec_dispatch_next(), and use that function in EC_STATE_PREPARE_ANSWER when needed
07:46 xavih pranithk: and ec_dispatch_start() should be modified to call ec_fop_cleanup() (and remove the call to this function from ec_child_select())
07:49 xavih pranithk: another possibility is to make ec_fop_cleanup() also change the fields that ec_dispatch_start() currently clears, and convert ec_dispatch_start() into a call to ec_fop_cleanup() plus the lock owner initialization
07:50 xavih pranithk: in that case, ec_dispatch_one_retry() should call ec_fop_cleanup() before ec_dispatch_next()
07:51 xavih pranithk: I'm not sure if I have explained it well enough... I hope you understand me :P
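[Editor's note: a minimal sketch of the retry flow xavih outlines above. The function names come from the conversation; the bodies are illustrative only and are not the actual ec xlator code.]

    /* Retry a single-subvolume fop: discard the answers collected by the
     * previous attempt, then dispatch the fop to the next child. */
    void ec_dispatch_one_retry(ec_fop_data_t *fop)
    {
        ec_fop_cleanup(fop);   /* clear the combined answer list and the
                                  fields ec_dispatch_start() used to reset */
        ec_dispatch_next(fop, fop->first);
    }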
08:00 rgustafs joined #gluster-dev
08:15 pppp joined #gluster-dev
08:20 anrao joined #gluster-dev
08:25 pppp joined #gluster-dev
08:26 pranithk xavih: ec_dispatch_inc() will have a problem if we put ec_fop_cleanup() in it IMO?
08:30 pranithk xavih: no no, wrong code reading :-)
08:46 rjoseph hciramm: ping, Can you please review http://review.gluster.org/#/c/11229?
08:48 atinm ndevos, pm
08:55 * ndevos gets a coffee, and will be right back, atinm
08:55 atinm ndevos, sure
08:57 poornimag joined #gluster-dev
09:02 anrao joined #gluster-dev
09:10 kaushal_ joined #gluster-dev
09:15 krishnan_p joined #gluster-dev
09:22 anrao joined #gluster-dev
09:24 poornimag joined #gluster-dev
09:26 rjoseph joined #gluster-dev
09:26 hagarth joined #gluster-dev
09:38 Humble_ rjoseph, yes, doing
09:39 rjoseph humble_: Finally, welcome back :)
09:40 Humble_ rjoseph, missed ur ping  in another nick.. :)
09:40 Humble_ rjoseph, done
09:40 Humble_ sorry for the delay
09:41 Bhaskarakiran joined #gluster-dev
09:41 Humble_ anrao, can u update the dht patch by addressing nithyas comments
09:42 Humble_ spandit++ thanks for reviewing
09:42 glusterbot Humble_: spandit's karma is now 4
09:42 Humble_ gem, may be atinm can merge it now
09:42 anrao Humble_, yes working on it
09:43 Humble_ ok
09:43 gem Humble_, NetBSD build still pending
09:43 Humble_ This link shows that it has successfully passed NetBSD builds as well
09:43 Humble_ gem, ^^ >
09:45 gem Humble_, That's NetBSD smoke. The regression log is: http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/6835/console
09:45 gem Humble_, I don't think it ran properly
09:46 Humble_ gem, checking
09:46 Humble_ + VERDICT=SUCCESS
09:46 Humble_ gem, it did
09:47 gem Humble_, Oh, okay.
09:47 Humble_ u can consider it as passed
09:49 rjoseph humble: thanks
09:49 rjoseph Humble_++
09:49 glusterbot rjoseph: Humble_'s karma is now 1
09:52 kdhananjay joined #gluster-dev
09:59 atinm gem, which one?
09:59 gem atinm, http://review.gluster.org/10473
10:00 atinm gem, what's the respective 3.7 link for it?
10:00 csim ndevos: quick question, when it come to UI, what are the options regarding gluster ?
10:00 gem atinm, 3.7 -> http://review.gluster.org/11222
10:04 gem atinm, downstream link https://code.engineering.redhat.com/gerrit/50910 :)
10:05 atinm gem, no downstream discussion here please
10:05 gem atinm, oops. sorry.
10:07 ndevos csim: oVirt offfers a UI for Gluster
10:08 ndevos csim: management UI that is, and for monitoring there are some Nagios plugins and Zabbix templates
10:12 csim ndevos: yeah, but that's the only project we have ?
10:12 csim because ovirt is a bit more than gluster :)
10:12 ndevos csim: oVirt has a storage-only mode
10:13 ndevos csim: there is a project in the works "Unifies Storage Management UI" or something, no idea on what upstream that is based
10:13 csim ndevos: let's say i can produce a paper from my doctor about my java allergy :)
10:14 ndevos csim: what functionality are you looking for?
10:14 ndevos there is also gluster-deploy in case you want easy installations
10:15 morphkurt joined #gluster-dev
10:15 atinm spalai1, pm
10:16 Humble_ atinm++
10:16 glusterbot Humble_: atinm's karma is now 8
10:16 csim ndevos: I was discussing with solution architect in the kitchen, and basically, what people want is something to admin shares, and UI client side for snapshot, etc, etc
10:16 csim ndevos: and I realised that I had no idea of what we offer for this
10:16 csim I also wondered if that's maybe something we need to push on the website
10:16 nkhare joined #gluster-dev
10:16 atinm Humble_, :)
10:17 Humble_ :)
10:17 csim (and then, we were out of cake, so I left the kitchen to go back on salt and ci )
10:18 ndevos csim: oVirt is the base for the Red Hat Gluster Storage Console, there is quite some active development going on with that
10:19 csim ndevos: ok, so good to know
10:19 csim let's see if we can get some win-win synergy with a paradigmatic change with a neighbouring project
10:30 kdhananjay joined #gluster-dev
10:31 ndarshan joined #gluster-dev
10:45 poornimag joined #gluster-dev
10:48 ashiq anoopcs ++
10:48 ashiq anoopcs++
10:48 glusterbot ashiq: anoopcs's karma is now 8
10:49 anrao joined #gluster-dev
11:01 kanagaraj joined #gluster-dev
11:06 kotreshhr1 joined #gluster-dev
11:06 dlambrig joined #gluster-dev
11:17 atalur joined #gluster-dev
11:18 atinm joined #gluster-dev
11:20 atinm joined #gluster-dev
11:21 overclk joined #gluster-dev
11:41 gem joined #gluster-dev
11:43 ndevos aaah! CTRL+w for word-backspace, but also Firefox-close-tab...
11:44 csim I feel your pain
11:44 csim I can recommend lazarus firefox extension for the future
11:44 msvbhat ndevos: ctrl+shift+T is to reopen the last tab :P
11:45 ndevos msvbhat: yes, and like csim says lazarus
11:45 ndevos but still, it's extremely annoying
11:48 msvbhat ndevos: I personally use ctrl+w to close the tab (in browsers) and alt+backspace for word backspace
11:54 overclk joined #gluster-dev
11:56 atalur joined #gluster-dev
11:59 ndevos msvbhat: hmm, maybe I should re-educate myself to use ALT+backspace...
12:02 msvbhat ndevos: I think that would be quite irritating. Once you get used to something, it's hard to {un|re}learn the same.
12:02 msvbhat That BTW was my statement for editor wars (vim or emacs) :P
12:05 pranithk xavih: Did you find anything on that data corruption bug I was talking to you about yesterday? I have a meeting today in 2 hours for 45 minutes. Been busy with that. Once that is over I will resend the patch we talked about today and start looking into the data corruption bug...
12:06 poornimag joined #gluster-dev
12:06 kshlm joined #gluster-dev
12:07 xavih pranithk: I was just going to talk with you
12:07 pranithk xavih: oh :-). Tell me sir
12:08 gem atalur++
12:08 xavih pranithk: I'm not sure if this can be the cause of the corruption, but surely it's not good...
12:08 glusterbot gem: atalur's karma is now 2
12:09 xavih pranithk: I think there is a problem similar to what you solved in ec_unlock()
12:09 pranithk xavih: ah!
12:09 pranithk xavih: interesting
12:10 xavih pranithk: what happens if some ec_manager function launches a fop and it finishes before returning from the ec_manager function ?
12:10 xavih pranithk: the callback will call ec_resume(), which will continue execution of the manager
12:10 xavih pranithk: but when the first manager returns it won't see any pending job, so it will also continue executing the state machine
12:11 pranithk xavih: So it will do the operation twice is it?
12:11 xavih pranithk: could be, but the effects could be very weird...
12:12 xavih pranithk: however this should be very difficult to happen
12:12 xavih pranithk: the first thread should be blocked for a considerable amount of time
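[Editor's note: a hedged sketch of the race xavih describes, using hypothetical names rather than the actual ec code. If a fop launched from inside a manager function completes before the manager returns, the callback's ec_resume() and the manager's own return path would both advance the state machine. Holding an extra job reference across the manager call prevents that; locking is omitted for brevity.]

    void ec_manager_run(ec_fop_data_t *fop)
    {
        fop->jobs++;                        /* reference held by this thread */
        fop->state = fop->manager(fop, fop->state); /* may launch sub-fops
                                                       that finish instantly */
        if (--fop->jobs == 0) {
            /* resume only when every launched sub-fop has already called
             * back and dropped its own reference */
            ec_resume(fop, 0);
        }
    }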
12:12 pranithk xavih: Well, difficult things are happening at will on his setup :-D
12:12 xavih pranithk: Do I send a patch to solve this ?
12:12 pranithk xavih: It doesn't happen always... sometimes it does...
12:13 pranithk xavih: Feel free to. I will try to re-create the bug on his setup and verify our theory...
12:13 xavih pranithk: I haven't been able to reproduce it...
12:13 pranithk xavih: Same here. On his setup he created it for me twice :-)
12:14 pranithk xavih: I will anyway take his help again in re-creating the bug.
12:15 firemanxbr joined #gluster-dev
12:16 itisravi joined #gluster-dev
12:16 xavih pranithk: I'll try to write the patch as soon as possible to check it (probably this night or tomorrow)
12:22 pranithk xavih: sure take your time. I am also gonna take a look at it tomorrow...
12:23 xavih pranithk: :)
12:24 pranithk xavih: I thought we will be done stabilizing healing by this time :-). He said he will start testing healing parts tomorrow and next week. How is your availability? I could use a hand along with ashishpandey :-)
12:24 pranithk xavih: I am very confident about I/O path now though :-)
12:26 xavih pranithk: I'll try to spend as much time as possible on healing in the following days...
12:27 pranithk xavih: Do you want to merge http://review.gluster.com/#/c/11128/?
12:28 pranithk xavih: It is backport
12:30 xavih pranithk: it seems r.g.o is not responding... :(
12:31 xavih pranithk: now it's working... slow though...
12:34 kotreshhr joined #gluster-dev
12:34 atinm joined #gluster-dev
12:35 xavih pranithk: I've +2 it
12:39 pranithk xavih: You have merge permissions right? merge it :-)
12:41 kotreshhr left #gluster-dev
12:44 dlambrig joined #gluster-dev
12:44 aravindavk joined #gluster-dev
12:47 asengupt joined #gluster-dev
12:47 pranithk xavih: I submitted it for now...
12:48 hagarth joined #gluster-dev
12:57 aravindavk joined #gluster-dev
13:03 shaunm joined #gluster-dev
13:07 pppp joined #gluster-dev
13:11 kanagaraj joined #gluster-dev
13:17 shyam joined #gluster-dev
13:18 pousley_ joined #gluster-dev
13:20 ashiq joined #gluster-dev
13:21 Manikandan joined #gluster-dev
13:26 aravindavk joined #gluster-dev
13:30 jrm16020 joined #gluster-dev
13:32 rjoseph joined #gluster-dev
13:39 xavih pranithk: yes. I'm not used to it yet :P
13:40 pranithk xavih: :-D, there are more patches, if you see some ec patch that looks good and got all regression results as success, go ahead and merge it :-)
13:40 overclk joined #gluster-dev
13:55 anrao joined #gluster-dev
13:55 overclk joined #gluster-dev
14:02 hagarth ndevos: http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/6971/console does this seem hung to you?
14:11 josferna joined #gluster-dev
14:14 ndevos hagarth: yes
14:14 ndevos hagarth: I'll reboot the VM and retrigger the test
14:15 hagarth ndevos: thanks, I think we need to fix the hanging mounts for good
14:15 hagarth just dropped a note on gluster-devel
14:15 ndevos hagarth: review the refcount patch :D
14:16 hagarth ndevos: we need both :)
14:16 ndevos hagarth: when http://review.gluster.org/11022 is in I can merge the gluster/nfs fixes and things should be more stable
14:17 ndevos xavih++ gave a +1 on it, after quite some rounds of improvements
14:17 glusterbot ndevos: xavih's karma is now 16
14:22 xavih ndevos: I gave comments because I think it wasn't working properly. It's not a personal issue :P
14:22 xavih ndevos: I've also reviewed 11023 and it has a bug
14:24 ndevos xavih: yeah, I value your reviews, please keep on pointing out issues :)
14:24 hagarth ndevos: we still need acks from gluster and netbsd regressions for 11022?
14:25 ndevos hagarth: http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/6981/console is another issue that hangs a lot... I think we can blame Gerrit for that
14:26 ndevos hagarth: yeah, Verified +1 from Jenkins would be good for 11022 - although there are no users in that patch, it only adds the functions
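[Editor's note: review 11022 adds reference-counting helpers. A minimal sketch of the general refcounting idiom, with hypothetical names; it is not the API from the patch, and locking/atomics are omitted for brevity.]

    typedef struct {
        int ref;                       /* number of active users */
        void (*release)(void *data);   /* called when the count hits zero */
        void *data;
    } ref_sketch_t;

    static void ref_get(ref_sketch_t *r) { r->ref++; }

    static void ref_put(ref_sketch_t *r)
    {
        if (--r->ref == 0)
            r->release(r->data);       /* last user tears the object down */
    }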
14:27 hagarth ndevos: /opt/qa/build.sh hangs due to stale nfs mounts
14:27 hagarth which cannot be umounted as part of cleanup()
14:29 ndevos ah, hmm
14:42 soumya joined #gluster-dev
14:44 spalai joined #gluster-dev
14:44 kdhananjay joined #gluster-dev
14:46 ashishpandey joined #gluster-dev
14:49 pranithk joined #gluster-dev
14:51 ndevos hagarth: care to share your opinion on only starting regression tests after a +1 or +2?
14:58 aravindavk joined #gluster-dev
15:01 vimal joined #gluster-dev
15:01 hagarth ndevos: yes, I intend to.
15:01 shyam joined #gluster-dev
15:13 ndevos aravindavk: did you see my note on the glusterfind packaging change? how do you want to proceed? http://review.gluster.org/11298
15:26 spalai joined #gluster-dev
15:34 overclk joined #gluster-dev
15:43 soumya joined #gluster-dev
15:45 spalai joined #gluster-dev
16:05 spalai joined #gluster-dev
16:08 dlambrig joined #gluster-dev
16:12 shyam joined #gluster-dev
16:12 ndevos wohoo, git-review is *so* cool, it lets you post changes on top of another review, which ./rfc.sh prevents
16:14 spalai joined #gluster-dev
16:15 krink joined #gluster-dev
16:16 gsaadi joined #gluster-dev
16:17 gsaadi What's the best way to configure unix socket for gluster instead of using tcp/ip?
16:19 jiffin joined #gluster-dev
16:22 shyam joined #gluster-dev
16:30 ira joined #gluster-dev
16:30 dlambrig joined #gluster-dev
16:34 krink i’m getting Snapshot list : failed: Cluster operating version is lesser than the supported version for a snapshot.  when in fact I have gluster version glusterfs-server-3.7.1-1.el7.x86_64.  is setting cluster.op-version the best solution, as described in http://www.gluster.org/pipermail/gluster-users/2014-November/019659.html ?
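[Editor's note: the fix suggested in that thread is to raise the cluster operating version with the gluster CLI; the number must match the op-version of the installed release, e.g. for a 3.7.x cluster something like:]

    gluster volume set all cluster.op-version 30700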
16:36 spalai joined #gluster-dev
16:46 Gaurav__ joined #gluster-dev
16:56 ndevos interesting hell scripting with variable variables and magic substitutions, kkeithley!
16:56 * ndevos tries to wrap his head around it, whew!
16:58 ndevos csim: the freebsd jenkins slave seems to be not reporting back to jenkins? does it need a reboot or something?
16:58 ndevos csim: or, did the IP change, and does it need updating in /etc/hosts on build.gluster.org?
17:06 csim ndevos: no idea, I do not think the ip changed
17:07 csim ndevos: I can reboot however
17:07 csim ndevos: seems the jenkins slave is dead
17:08 csim no idea how it is started
17:10 csim ndevos: who did set the instance up ?
17:10 ndevos csim: I dont know, mat y4m4 or JustinClift
17:11 ndevos csim: OH!!!! kkeithley is a FreeBSD fan
17:12 shyam joined #gluster-dev
17:13 kkeithley ndevos: re: bash variables and substitution, hell is indeed the right word for it
17:13 ndevos kkeithley: it hurts my eyes and brain when reviewing
17:14 ndevos oh, this is a most useful comment: # ignore any comment lines
17:15 csim meetup start, so going offline
17:15 ndevos bye csim!
17:32 ira joined #gluster-dev
17:45 Manikandan joined #gluster-dev
17:45 Manikandan_ joined #gluster-dev
17:47 ashiq joined #gluster-dev
18:06 dlambrig joined #gluster-dev
18:08 krink I’m trying to configure kvm/qemu for unix+socket /var/run/glusterd.socket.  i can’t seem to get the syntax correct.  this is what I have for tcp on localhost…
18:08 krink <disk type='network' device='disk'>
18:08 krink <driver name='qemu' type='raw' cache='none'/>
18:08 krink <source protocol='gluster' name='gvol01/vda.raw'>
18:08 krink <host name='127.0.0.1' port='24007'/>
18:08 krink </source>
18:08 krink <target dev='vda' bus='virtio'/>
18:08 krink </disk>
18:08 krink <disk type='network' device='disk'>
18:09 shyam joined #gluster-dev
18:10 ndevos krink: what is the reason you try to do this? the data will go to the bricks over tcp anyway...
18:11 ndevos the glusterd.socket would only pass the volume layout to qemu, after that, qemu will talk to the bricks
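[Editor's note: what ndevos describes matches how gfapi consumers such as qemu connect: the management socket is only used to fetch the volume layout (volfile), after which I/O goes directly to the bricks. A minimal gfapi sketch, assuming a volume named gvol01:]

    #include <stdio.h>
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
        glfs_t *fs = glfs_new("gvol01");
        /* "unix" transport: the host argument is the socket path, port is 0 */
        glfs_set_volfile_server(fs, "unix", "/var/run/glusterd.socket", 0);
        if (glfs_init(fs) != 0) {   /* fetch the volfile, connect to bricks */
            perror("glfs_init");
            return 1;
        }
        glfs_fini(fs);
        return 0;
    }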
18:15 krink ndevos.  that would be fine.  I’d still like to config the libvirt/qemu interface to use unix socket.   i found an example like: gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket   But, can’t seem to translate its syntax properly to the xml
18:16 ndevos krink: hmm, I dont know if libvirt support the syntax for it
18:18 krink perhaps not…  but i’m still looking.  i’ve found a few threads here and there that suggest it may be possible
18:19 ndevos krink: http://libvirt.org/formatdomain.html#elementsDisks contains the options, it really says "network" + "gluster" requires a "host" = "a server running glusterd daemon"
18:19 shyam joined #gluster-dev
18:20 ndevos krink: the final answer would be in the libvirt sources, good luck!
18:21 krink ndevos i’ll keep looking.  thanks for your help.
18:22 krink http://www.gluster.org/pipermail/gluster-devel/2008-May/032972.html
18:22 ndevos krink: the libvirt sources are not too bad, mostly easy to understand
18:23 krink still digging.  i’m thinking it is just a syntax thing i can’t get right yet…
18:23 ndevos lol, that is from 2008, and not related to libvirt :)
18:24 ndevos krink: maybe libvirt has the option to specify a "qemu -disk ..." option directly? that would make it possible
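[Editor's note: a hedged example of passing the disk straight to qemu, using qemu's documented gluster+unix URI syntax; the volume and image names are taken from krink's earlier snippet:]

    qemu-system-x86_64 -m 1024 \
      -drive file=gluster+unix:///gvol01/vda.raw?socket=/var/run/glusterd.socket,format=raw,if=virtio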
18:25 krink i’ll try that
18:25 Humble_ krink, http://www.gluster.org/community/documentation/index.php/Libgfapi_with_qemu_libvirt
18:26 ndevos hey Humble_, dont we have a new doc site? :P
18:26 Humble_ ndevos, I will take that offline :P
18:26 Humble_ yeah, we need to port this doc as well
18:26 Humble_ krink, u may need to adjust the host name to proper IP In your setup
18:27 Humble_ if this gives any error , can u check the libvirt logs
18:27 * ndevos will be going now, cya!
18:27 krink yes, yes.  thank you.  i do have that syntax working, the <host name='10.70.37.106' port='24007'/>.  but i was also trying to see if i could set the transport=unix or something along those lines to get unix socket configured instead of tcp
18:28 Humble_ I doubt that
18:29 krink this looks interesting…  http://libvirt.org/git/?p=libvirt.git;a=commitdiff;h=a2d2b80fbd29dec4da31ec4072b6b555fe93d2c0
18:29 * Humble_ checking
18:29 atinm joined #gluster-dev
18:29 krink <host transport='unix' socket='/path/to/sock'/>
18:29 atinm ndevos, hi
18:29 krink i have a gluster daemon socket at /var/run/glusterd.socket
18:30 Humble_ its the same example or libvirt schema
18:30 kkeithley but why? a "network" tcp socket to _this_ host, i.e. a loopback, performs just as well as a unix socket. It really does.
18:30 Humble_ which mentioned in above doc
18:30 kkeithley you're not going to get better throughput with a unix socket.
18:31 kkeithley tcp socket in the linux kernel has been very extensively optimized over the last 10 years.
18:31 krink the communication between libvirtd and the gluster volume would lose the tcp overhead.  yes, i do agree that the backend gluster daemons will still communicate via tcp.  but wouldn’t that at least lose one more layer of tcp overhead
18:32 krink as far as comparing tcp overhead to a unix socket, the unix socket is typically regarded as faster/better for performance
18:33 Humble_ kkeithley, I havent tried to connect via unix socket
18:34 krink i’d like to gather and compare the performance numbers.  config 1) tcp , config 2) unix socket.  and see which performance numbers are better
18:35 Humble_ good thought
18:35 Humble_ :)
18:43 krink darn.  no go.  getting error: failed to initialize gluster connection to server: '/var/run/glusterd.socket': Transport endpoint is not connected
18:43 krink <disk type='network' device='disk'>
18:43 krink <driver name='qemu' type='qcow2' cache='none'/>
18:43 krink <source protocol='gluster' name='test123/root.qcow2'>
18:43 krink <host transport='unix' socket='/var/run/glusterd.socket'/>
18:43 krink </source>
18:43 krink <target dev='vda' bus='virtio'/>
18:43 krink </disk>
18:45 krink ah.  perhaps i need to point it at one of the sockets available in the /var/run/gluster/ directory ?
18:46 spalai joined #gluster-dev
18:50 spalai1 joined #gluster-dev
19:10 krink looking like my xml syntax is proper… at least the same as here  https://bugzilla.redhat.com/show_bug.cgi?id=1115809
19:10 glusterbot Bug 1115809: low, low, rc, libvirt-maint, CLOSED NOTABUG, Error messages are not clearly enough during start a guest with source protocol='gluster'
19:12 krink perhaps i’m missing a glusterd or volume specific config or setting…  error: failed to initialize gluster connection to server: '/var/run/glusterd.socket': Transport endpoint is not connected
19:16 shyam joined #gluster-dev
19:28 atalur joined #gluster-dev
19:52 atalur joined #gluster-dev
20:08 spalai joined #gluster-dev
20:13 shyam joined #gluster-dev
21:08 badone__ joined #gluster-dev
23:06 badone__ joined #gluster-dev
23:06 suliba joined #gluster-dev
23:06 kkeithley joined #gluster-dev
23:06 xrsanet joined #gluster-dev
23:33 dlambrig joined #gluster-dev
