
IRC log for #gluster-dev, 2013-06-26


All times shown according to UTC.

Time Nick Message
00:30 portante joined #gluster-dev
00:40 bala joined #gluster-dev
01:25 lalatenduM joined #gluster-dev
01:45 badone joined #gluster-dev
02:30 portante_ joined #gluster-dev
02:56 bharata joined #gluster-dev
03:02 bulde joined #gluster-dev
03:53 bala joined #gluster-dev
03:55 bala joined #gluster-dev
03:56 bala joined #gluster-dev
04:03 itisravi joined #gluster-dev
04:14 hagarth joined #gluster-dev
04:29 bulde joined #gluster-dev
04:56 aravindavk joined #gluster-dev
05:16 rastar joined #gluster-dev
05:26 raghu joined #gluster-dev
05:53 hagarth joined #gluster-dev
05:53 johnmark ugh. forgot.
05:53 johnmark hagarth: heya
05:54 johnmark I'm getting everyone set for another round of testing this weekend
05:56 hagarth johnmark: great
05:56 hagarth sorry about today's meeting.. couldn't make myself available on time.
05:56 hagarth update on 3.4 - beta4 to be out this week
05:57 hagarth documentation in markdown now -- will be updating part of it this week. more help there would be welcome!
06:01 jclift_ hagarth: Were there any known NFS problems with beta3?
06:01 hagarth johnmark: we probably should get all tests going this weekend (rdma and non-rdma).
06:01 hagarth jclift_: like?
06:02 hagarth jclift_: am not aware of known problems.
06:02 jclift_ hagarth: Only asking as I'm writing up a BZ at the moment.  Don't want to waste the time if it's already fixed.
06:02 jclift_ hagarth: k, it's probably a new unknown one then
06:02 jclift_ NFS mounts are only working for the last volume
06:02 jclift_ (and only the last one is showing up in showmount -e [host])
06:02 * jclift_ goes back to writing up the BZ
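
For reference, a minimal way to reproduce the symptom jclift_ describes above; the volume names, brick paths, and hostname here are hypothetical, not taken from the log:

    # Create and start two plain volumes (hypothetical names/paths)
    gluster volume create vol1 host1:/bricks/vol1
    gluster volume create vol2 host1:/bricks/vol2
    gluster volume start vol1
    gluster volume start vol2

    # Expected: both exports listed. Reported symptom: only the
    # last-created volume (vol2) shows up.
    showmount -e host1

    # Gluster's built-in NFS server is v3-only, hence vers=3.
    # Symptom: the earlier volume fails to mount, the last one works.
    mount -t nfs -o vers=3,nolock host1:/vol1 /mnt/vol1
    mount -t nfs -o vers=3,nolock host1:/vol2 /mnt/vol2
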
06:03 hagarth ok..
06:14 puebele joined #gluster-dev
06:32 jclift_ johnmark: btw, 3 of the 4 QA infiniband machines are up and ready for Eco to use for the next round of testing
06:33 jclift_ johnmark: We've had issues with Beaker not being able to control them remotely properly, so getting them rebuilt and online with a useful OS has been a pain
06:33 jclift_ johnmark: The fourth box is still out of commission until (prob) Kaleb has time to manually get it rebuilt or something
06:33 puebele joined #gluster-dev
06:33 hagarth jclift_: is the nfs problem seen only with rdma?
06:34 jclift_ hagarth: I haven't run these tests on 3.4.0 beta 3 without it
06:34 jclift_ So, I'm not sure
06:34 hagarth jclift_: I see, I haven't seen this behavior with ethernet.
06:34 jclift_ Sure
06:35 jclift_ hagarth: I'd prefer to continue testing these rdma volumes through the rest of the test steps as far as I can get first.  After that I'll go back and try again using tcp
06:35 hagarth jclift_: sounds good.
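
To isolate whether the export problem is RDMA-specific, the same volume could later be re-created with a tcp transport, as jclift_ suggests; a sketch with hypothetical names:

    # Transport is fixed at create time; this is the only difference
    # between the rdma and tcp runs.
    gluster volume create vol1 transport rdma host1:/bricks/vol1
    # ...versus the tcp control case:
    gluster volume create vol1 transport tcp host1:/bricks/vol1
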
06:36 jclift_ This is the BZ if that's useful: https://bugzilla.redhat.com/show_bug.cgi?id=978205
06:36 glusterbot Bug 978205: medium, unspecified, ---, vraman, NEW , NFS mount failing for several volumes with 3.4.0 beta3.  Only last one created can be mounted with NFS.
06:37 hagarth jclift_: read through, will get some attention for this.
06:37 jclift_ thx
06:39 jclift_ hagarth: Actually, if someone in RH wants to replicate this themselves to take a look, those 3 QA boxes I was just mentioning have RHEL 6.4 installed with Infiniband.  They'd probably behave the same.
06:40 * jclift_ goes back to RDMA testing
06:40 hagarth jclift_: thanks, that would be helpful.
06:42 kshlm joined #gluster-dev
06:45 bharata-rao hagarth, you might want to check this discussion about qemu-glusterfs http://lists.nongnu.org/archive/html/qemu-devel/2013-06/msg04423.html
06:47 bharata-rao hagarth, it looks like it will be useful to have a capability to determine the glusterfs backend (posix or bd) from qemu/gfapi
06:48 hagarth bharata-rao: thanks, will do.
06:55 bharata-rao hagarth, A user reported some issues with qemu-glusterfs, and while trying to reproduce that I found out that qemu-glusterfs is broken when used from the 3.4beta3-el6 rpms but works when built from 3.4beta3 source. Maybe I will wait for beta4 before finding out why the same version behaves differently when compiled and when using rpms
06:56 hagarth bharata-rao: ok, I am hoping to have a separate libgfapi rpm for beta4.
06:57 bharata-rao hagarth, isn't it already separate (glusterfs-api) ?
06:59 hagarth bharata-rao: not yet. this patch needs to be merged: http://review.gluster.org/#/c/5230/
07:00 bharata-rao hagarth, oh ok thanks
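
For context, the qemu-glusterfs integration being tested here drives a gluster volume directly through libgfapi via a gluster:// URI; a sketch with hypothetical host, volume, and image names:

    # Create a qcow2 image directly on a gluster volume via libgfapi
    qemu-img create -f qcow2 gluster://host1/vol1/vm.qcow2 10G

    # Boot a guest from the same image
    qemu-system-x86_64 -drive file=gluster://host1/vol1/vm.qcow2,if=virtio
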
07:16 hagarth rpm -i glusterfs-3.4.0-0.6.beta3.el6.x86_64.rpm
07:16 hagarth error: Failed dependencies:
07:16 hagarth libgfapi.so.0()(64bit) is needed by glusterfs-3.4.0-0.6.beta3.el6.x86_64
07:17 hagarth ndevos: any idea on the above?
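
A quick way to narrow down a failed dependency like the one above, assuming the full set of beta3 rpms is sitting in the current directory:

    # What exactly does the glusterfs package require?
    rpm -qpR glusterfs-3.4.0-0.6.beta3.el6.x86_64.rpm | grep gfapi

    # Which of the downloaded packages provides that soname?
    for p in *.rpm; do
        rpm -qp --provides "$p" | grep -q 'libgfapi.so.0' && echo "$p"
    done

    # And which files ship in the base glusterfs package that might link it?
    rpm -qpl glusterfs-3.4.0-0.6.beta3.el6.x86_64.rpm
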
07:25 shubhendu joined #gluster-dev
08:28 jclift_ left #gluster-dev
08:29 hagarth joined #gluster-dev
08:37 deepakcs joined #gluster-dev
09:11 bulde joined #gluster-dev
09:22 bulde joined #gluster-dev
09:29 vshankar joined #gluster-dev
09:35 ndevos hagarth: uh... something in the glusterfs package uses libgfapi?! definitely looks weird
09:36 ndevos hagarth: where did you get that package from? And does it come with a src.rpm?
10:22 rastar joined #gluster-dev
10:32 hagarth ndevos: got that from download.gluster.org
10:36 bulde joined #gluster-dev
10:36 edward1 joined #gluster-dev
10:58 hagarth joined #gluster-dev
10:59 hagarth ndevos: am wondering if this is due to the fedora spec being used for building rpms?
11:01 msvbhat joined #gluster-dev
11:06 ndevos hagarth: it is pulled in because of this:
11:06 ndevos [root@vm122-104 glusterfs-3.4.0-0.6.beta3.el6.x86_64.d]# find . -type f | while read F ; do ( ldd $F 2>/dev/null | grep -q gfapi ) && echo $F ; done
11:06 ndevos ./usr/lib64/glusterfs/3.4.0beta3/xlator/mount/api.so
11:06 ndevos hagarth: should that api.so move to a different package?
11:10 ndevos hmm, I wonder where that api.so comes from...
11:11 * kkeithley is looking
11:15 ndevos kkeithley: seems to be from api/src/Makefile.am
11:17 ndevos I don't know why there is a need for a mount/api xlator, but I think it should be part of the glusterfs-api package?
11:18 hagarth ndevos: think it needs to be part of glusterfs-api.
11:19 kkeithley seems right
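
Once api.so moves into glusterfs-api, the fix is easy to verify on an installed system; the xlator path is taken from the ldd output above:

    # Should report a glusterfs-api package, not the base glusterfs one
    rpm -qf /usr/lib64/glusterfs/3.4.0beta3/xlator/mount/api.so
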
11:20 ndevos hagarth: got a Bug for that?
11:21 hagarth ndevos: not yet, I observed this while trying to install today.
11:21 kkeithley we could just reuse 819130 (release-3.4) and 950083 (master)
11:21 hagarth agree
11:31 ndevos kkeithley: release-3.4: http://review.gluster.org/5254
11:32 ndevos kkeithley: I cannot file the one for master yet, the change needs to be merged first
11:36 hagarth ndevos: i will merge the master patch now
11:38 jclift_ joined #gluster-dev
11:40 ndevos hagarth, kkeithley: http://review.gluster.org/5255 for master
11:40 ndevos I have already scheduled regression tests against both, too :)
11:41 hagarth ndevos: great!
11:41 ndevos now, lunch!
11:41 hagarth will push once the regression tests pass.
12:02 bulde joined #gluster-dev
12:09 jclift_ My main problem (so far) with my "connection groups" idea, is that we have two main err.. levels of connections being used.
12:09 jclift_ Peer connections (so far) are for _all volumes_ of a host.  Client connections can be per volume.
12:09 jclift_ Not sure how to nicely resolve that.
12:09 jclift_ .... though I really do need to get some sleep soon.  Not exactly completely awake atm.
12:11 * jclift_ kind of suspects we could just use peer probe to identify a server, not a specific IP/interface on it
12:11 jclift_ And we then use "connection groups" to handle what volumes get to be connected to from where
12:11 jclift_ Plus something to tell Gluster which connection group to use for replication traffic for each volume
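
Purely as a strawman for the "connection groups" idea sketched above, and emphatically not existing gluster syntax, the CLI might look something like this:

    # HYPOTHETICAL commands -- nothing below exists in the gluster CLI.
    # Probe identifies the server itself, not a specific IP/interface:
    gluster peer probe server1

    # A connection group names which interfaces/subnets may be used:
    gluster connection-group create frontend 192.168.1.0/24
    gluster connection-group create backend 10.0.0.0/24

    # Per volume, choose the group clients connect through and the
    # group that carries replication traffic:
    gluster volume set vol1 client.connection-group frontend
    gluster volume set vol1 replication.connection-group backend
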
12:50 hagarth joined #gluster-dev
13:02 deepakcs joined #gluster-dev
13:33 bala joined #gluster-dev
13:34 kshlm joined #gluster-dev
13:50 portante joined #gluster-dev
14:08 JoeJulian joined #gluster-dev
14:22 jclift_ kshlm: ping
14:23 jclift_ kshlm: I tried thinking through the answer to what you're asking from a "how would I want to use this as a SysAdmin" viewpoint
14:24 jclift_ kshlm: I think my answer mostly makes sense
14:24 jclift_ But, am happy to have gaping holes in my concept of it pointed out as needed, etc
14:24 jclift_ ;)
14:38 kshlm jclift_: Just skimmed through your reply. What you've described is really close to what I was thinking of.
14:38 jclift_ kshlm: Cool.  I personally have no skills for coding any of it though.  Hoping you do? :D
14:39 jclift_ kshlm: We should probably also consider how this hostname / UUID / alias thing could affect the use of SSL certificates.
14:39 jclift_ At first glance I don't think it would be a problem for SSL certs... but we'd want to think it through to be sure.
14:42 kshlm I had forgotten we have ssl to support as well. But that shouldn't worry us much, as it'll be at the transport layer and we'll be working at a higher layer.
14:43 kshlm We'll need to think this through for sure.
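
The SSL angle mostly comes down to what name ends up in the certificate; a sketch using gluster's stock SSL file locations (the CN value is hypothetical, and per-volume client.ssl/server.ssl toggles are a later-release option):

    # Gluster's transport-layer SSL reads these fixed paths:
    #   /etc/ssl/glusterfs.key  /etc/ssl/glusterfs.pem  /etc/ssl/glusterfs.ca
    openssl genrsa -out /etc/ssl/glusterfs.key 2048
    openssl req -new -x509 -key /etc/ssl/glusterfs.key \
        -subj "/CN=server1" -out /etc/ssl/glusterfs.pem

    # If peers are later identified by UUID or alias, the CN above is
    # what has to track whichever name the transport actually dials.
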
14:44 * jclift_ agrees
14:44 kshlm :)
14:44 * kshlm is going out for dinner
14:45 jclift_ Have a good night dude. :)
14:45 kshlm will catch up with you later
14:45 kshlm good night
15:04 wushudoin joined #gluster-dev
15:18 awheeler_ joined #gluster-dev
15:41 bulde joined #gluster-dev
15:41 bala joined #gluster-dev
16:06 hagarth joined #gluster-dev
16:20 bulde joined #gluster-dev
16:45 tg2 joined #gluster-dev
16:48 lpabon joined #gluster-dev
16:49 deepakcs joined #gluster-dev
18:52 portante joined #gluster-dev
19:18 bulde joined #gluster-dev
20:02 lpabon_ joined #gluster-dev
20:09 lpabon joined #gluster-dev
23:42 nixpanic joined #gluster-dev
23:42 nixpanic joined #gluster-dev
