
IRC log for #gluster-dev, 2013-04-23


All times shown according to UTC.

Time Nick Message
00:01 yinyin joined #gluster-dev
00:57 bala joined #gluster-dev
01:30 lpabon joined #gluster-dev
03:22 mohankumar joined #gluster-dev
03:47 itisravi joined #gluster-dev
03:51 nickw joined #gluster-dev
03:54 sgowda joined #gluster-dev
04:23 itisravi joined #gluster-dev
04:24 itisravi_ joined #gluster-dev
04:27 bharata joined #gluster-dev
04:30 itisravi joined #gluster-dev
04:36 hagarth joined #gluster-dev
04:56 aravindavk joined #gluster-dev
04:57 bala1 joined #gluster-dev
05:04 bulde joined #gluster-dev
05:17 sgowda joined #gluster-dev
05:19 lalatenduM joined #gluster-dev
05:31 mohankumar joined #gluster-dev
05:50 bharata joined #gluster-dev
05:50 hagarth joined #gluster-dev
05:52 vshankar joined #gluster-dev
06:35 mohankumar__ joined #gluster-dev
06:43 hagarth joined #gluster-dev
06:44 deepakcs joined #gluster-dev
06:59 raghu joined #gluster-dev
07:51 rastar joined #gluster-dev
08:42 gbrand_ joined #gluster-dev
08:44 rastar joined #gluster-dev
08:47 bharata joined #gluster-dev
09:52 rastar joined #gluster-dev
10:15 puebele joined #gluster-dev
10:22 hagarth joined #gluster-dev
10:23 nixpanic joined #gluster-dev
10:23 nixpanic joined #gluster-dev
10:37 _ilbot joined #gluster-dev
10:51 sgowda joined #gluster-dev
10:51 kkeithley1 joined #gluster-dev
11:05 edward1 joined #gluster-dev
11:12 jdarcy joined #gluster-dev
11:15 portante|ltp joined #gluster-dev
11:25 sgowda joined #gluster-dev
11:35 jclift joined #gluster-dev
12:04 hagarth joined #gluster-dev
12:10 yinyin_ joined #gluster-dev
12:16 nickw joined #gluster-dev
12:43 jclift Trying to access the Gluster translator api docs here: http://hekafs.org/dist/xlator_api_2.html
12:44 jclift But, page isn't loading
12:44 jclift Seems like hekafs.org site is offline?
12:44 jclift http://www.downforeveryoneorjustme.com/hekafs.org
12:44 jclift ^^^ Says it's down too. :(
12:55 rastar1 joined #gluster-dev
13:00 rastar joined #gluster-dev
13:02 mohankumar joined #gluster-dev
13:02 kkeithley| ???
13:03 kkeithley| are we having a meeting?
13:04 jdarcy joined #gluster-dev
13:09 kkeithley| johnmark: ^^^
13:09 jdarcy Seems a bit quiet.
13:10 kkeithley| sigh
13:10 jdarcy Do we actually have anything to talk about re: 3.4 readiness?
13:11 kkeithley| dunno. I may have a couple gluster.spec.in tweaks that I discovered while packaging alpha3
13:11 jdarcy Would that be an alpha3something, alpha4, or wait for beta1?
13:12 kkeithley| beta1 would suffice I think
13:12 jdarcy OK.  Anything to update on the tracker?
13:13 kkeithley| I think the BZ is still there, I'll double check
13:13 jdarcy BTW, I kind of wouldn't mind if someone else were to wrangle the backports for beta1.
13:13 kkeithley| how many are left?
13:14 jclift jdarcy: hekafs.org offline?
13:16 jclift Maybe I should have raised the priority of the email abt it. ;)
13:17 jdarcy jclift: Temporarily.  Had to resize the instance.  Should be back up now.
13:18 jclift thx
13:19 jclift jdarcy: Do you have the slides for your GlusterFS translator talk at LinuxCon last year?
13:19 jclift jdarcy: Assuming this is the right page for all the slides, yours aren't on there. :(  http://events.linuxfoundation.org/archive/2012/linuxcon/slides
13:20 kkeithley| Linuxcon Japan?
13:21 jclift kkeithley: jdarcy mentioned he hasn't been to Japan
13:21 kkeithley| yes
13:21 jclift So, trying to figure out which/what/where place to find the slides. :)
13:22 jclift Hmmm, the schedule for LinuxCon 2012 doesn't list jdarcy either.  Guessing that's the wrong conf then. http://events.linuxfoundation.org/archive/2012/linuxcon/schedule
13:23 jdarcy There's a Sheraton logo on the lectern, and the only Sheraton I've been to lately is San Diego.  That would have been LCNA, but not the main conference.  There was a Gluster miniconf the day before.
13:24 jclift Ahhh.
13:25 jdarcy And of course LF fumbles the program page for the miniconf.  ;)
13:25 jclift jdarcy: Don't suppose you have the slides handy in your email archives or something?
13:26 jclift That's all I'm trying to get, so I can get something working well enough that I can write some good newbie friendly intro docs.
13:26 jclift Might be able to manage it anyway, but figure I'll ask first. :)
13:27 jdarcy My main problem is remembering *which* slides those were.
13:28 johnmark jclift: howdy
13:28 johnmark which slides do you need again? and for what?
13:29 kkeithley| johnmark: you're here! ;-)
13:29 jclift johnmark: Looking for jdarcy's slides to match his talk at the Gluster mini conf in San Diego last year
13:29 johnmark jclift: you can find presos here: http://www.gluster.org/community/documentation/index.php/Presentations
13:29 jclift johnmark: Oh come on.
13:29 jclift Like I didn't look there first. :p
13:29 jclift To match this talk: http://video.linux.com/videos/glusterfs-translators-conceptual-overview
13:29 johnmark jclift: lol
13:29 johnmark looks like jdarcy never uploaded his preso *sigh*
13:30 johnmark kkeithley|: yeah... bleh.
13:30 kkeithley| Nice of you to drop by. ;-)
13:31 johnmark yeah. with all the forge shit and community launch, I'm pressed for time
13:40 sandeen joined #gluster-dev
13:44 jclift jdarcy: Just saw your post about "Split and Secure Networks for GlusterFS" in Jan.  Was thinking about similar thing a few weeks ago, as most corp places I've worked have multiple network interfaces on their storage boxes, each for specific purpose (ie backup interface, client interface, general connectivity interface, etc).
13:45 jclift jdarcy: The best way I've been able to think of, for addressing this is to have "client groups" or some other named thing.
13:47 jclift jdarcy: Whereby we define a "client group" such as "gluster-storage-boxes" and list interfaces that belong to that group.  Such as "gluster1-int1" (ie interface1 of gluster1), "gluster2-int1" (similar, but different host).
13:48 jclift jdarcy: Then, when a client connects, it gets given back just the node connection details of the interfaces for its group.
13:49 jclift jdarcy: So, we could define several groups (ie client-group, storage-interface-group, backup-group) and the clients/peers that connect would be given the interfaces that belong in their matching group.
13:49 jclift I could be explaining that like crap tho.
13:49 jclift The concept in my head sounds ok.
13:49 * jclift should write it up
14:03 bulde joined #gluster-dev
14:18 wushudoin joined #gluster-dev
14:19 bala joined #gluster-dev
14:28 nickw joined #gluster-dev
14:38 jbrooks joined #gluster-dev
14:46 nickw joined #gluster-dev
15:05 ndevos jclift: so, glusterd would provide the mounting client with a .vol file that contains hostnames (or ip-addresses?) that resolve to the interface(s) where it connects on?
15:05 nickw joined #gluster-dev
15:07 ndevos jclift: how would the matching of a mounting client and the client-group be done? what kind of management interface would it need?
15:08 jclift ndevos: Well, it's the general concept thought I had.  Where exactly the ip addresses of hosts would be stored I'm less sure of.
15:09 jclift ndevos: For the matching of mounting client and client group... I think we'll need to figure out what options are possible even.  Not sure how flexible the params are that can be passed as mount options.
15:09 jclift ndevos: So, think of this as first concept thought... details might turn out to make it non-possible. :/
15:12 ndevos jclift: the general concept sounds good, but I'd like to know how users would configure it - without that knowledge a proposal will be more difficult to accept
15:12 jclift ndevos: Oh yeah, I completely fully agree.
15:12 jclift ndevos: I often have 1/2 baked ideas that sound good in theory.
15:13 jclift ndevos: But I won't actually propose anything until I've learned more of the code in depth so have more clue as to how it could actually be implemented.
15:13 jclift i.e. don't need the full answer.  But do need more clue first. ;)
15:14 jclift (and I'm not selfish with ideas.  If someone hears one of my 1/2 baked ideas and then has a better one, I *hope* they go and do it)
15:14 ndevos well, sharing a proposal for some functionality is always good, others can then ponder on it and suggest stuff :)
15:15 jclift sudo gluster client-group create client-group1  gluster1-1 gluster2-1 gluster3-1 gluster4-1
15:15 jclift (being first interface on 4 gluster boxes)
15:15 jclift Though, I'm the kind of person that names interfaces specifically first
15:16 jclift sudo gluster client-group default client-group1
15:16 jclift Hmmm... there's probably a logical structure in here that can be teased out. :)
15:16 nickw left #gluster-dev
15:17 ndevos why not use: gluster client-group create glusterfs-internal $hostname:bond0 ...
15:17 jclift ndevos: Yeah, that makes sense too.  Have host:ip
15:17 ndevos explicitly mention the NIC that is used for the client group?
15:17 jclift Oh, the nic
15:17 jclift Hmmm... that might be better
15:17 * jclift would need to actually think that through more
15:17 jclift :)
15:18 ndevos hehe
15:18 jclift host:nic might be better because it might work for rdma (non IP) based systems too
15:18 ndevos when I get some thoughts about it, I'll let you know
15:18 jclift ie  myhost:ib0
15:18 ndevos yeah
15:18 jclift Yeah
15:19 ndevos but how does a mounting client get ordered to a specific group?
15:19 jclift I think having a default group makes sense for the backup
15:19 jclift For the peer probe thing, there'd probably be a need to add options to that command
15:19 jclift (unsure, would need to think it through)
15:20 jclift I don't know what is possible to be passed on mount command line tho
15:20 jclift i.e.
15:20 jclift sudo mount -t glusterfs -o client-group=XXX gluster1:somevol /my/mount/point
15:21 ndevos thats relatively easy to do, there are loads of mount-options that can work as example
15:21 jclift Cool.
15:21 jclift This might actually be possible then.
15:21 jclift i.e. not a completely dumb idea :)
15:21 ndevos definitely
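[Editor's note: the workflow sketched in the exchange above could be written out as below. This is purely illustrative: no client-group subcommand exists in the gluster CLI, and the group, host, and NIC names are made up.]

```shell
# Hypothetical "client group" CLI, per the discussion above.
# None of these gluster subcommands exist; all names are illustrative.

# Define a group by listing host:NIC pairs, as ndevos suggested:
gluster client-group create glusterfs-internal gluster1:bond0 gluster2:bond0

# Make it the group handed to clients that don't specify one:
gluster client-group default glusterfs-internal

# A client selects its group via a mount option:
mount -t glusterfs -o client-group=glusterfs-internal gluster1:/somevol /mnt/gv0
```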
15:26 jclift ndevos: Have you done rdma coding before?
* jclift is wanting to know if it's possible to make a service listen on a "port" or port equivalent for rdma interface
15:27 jclift i.e. for tcp/ip we have ip:port.  "Is there rdma equiv?" is something I'm interested in finding out. ;)
15:36 ndevos jclift: no, never looked at rdma
15:36 jclift ndevos: No worries. :)
15:37 ndevos jclift: glusterd accepts connections on rdma port 24008 - searching the source for that and finding the related defines might point you to some examples
15:38 jclift ndevos: I *think* it's been coded at some point to rely on IPoIB
15:38 jclift ndevos: Which is what I want to find out if it's actually needed
15:39 ndevos hmm, I don't think so, IPoIB is considered a workaround for the broken rdma transport
15:39 jclift ndevos: Cool.
15:39 jclift ndevos: I just read in a change on review.gluster.org that IPoIB was being made mandatory
15:39 ndevos IPoIB should work fine, it just has some overhead pure rdma would not have
15:39 jclift ndevos: Change isn't approved though, and I queried it
15:40 jclift ndevos: Yeah, I'd rather not force IPoIB usage if we don't actually have to :)
15:40 ndevos oh, thats news to me, people are interested in rdma without IPoOB
15:40 ndevos uh, IPoIB
15:42 jclift ndevos: http://review.gluster.org/#/c/4600/1/doc/rdma-cm-in-3.4.0.txt
15:42 jclift ndevos: That's a very short couple of lines there.
15:42 jclift ndevos: It says IPoIB would be a requirement
15:43 * jclift thinks (without knowing the in depth options) that's not a great idea
15:43 jclift <etc>
15:43 jclift :)
15:44 hagarth joined #gluster-dev
15:49 bala joined #gluster-dev
15:51 itisravi joined #gluster-dev
16:05 ndevos jclift: I agree with you, requiring IPoIB does not make sense to me - but I'm not an ib/rdma expert
16:07 kkeithley| FWIW, there does seem to be an initial handshake when (fuse) mounting an rdma volume. E.g. you create a volume with rdma as the only transport. On the client you still mount with the hostname-or-IP-addr.
16:07 * kkeithley| was just playing with this last week to fix a bug in 3.3.1
16:07 kkeithley| to fix a bug in rdma on 3.3.1
16:10 jclift kkeithley: Yeah, that's kind of what I was thinking.
16:10 jclift kkeithley: In "pure ib" land, without IPoIB, then the hosts are referred to via lid's (first choice) or guid's (2nd choice).
16:11 jclift kkeithley: There doesn't seem to be an exact equiv for dns in ib land.  There's something called acm (ie openacm) which is supposed to do something with naming.
16:11 jclift But, I don't know any details
16:12 jclift kkeithley: So doing ibping in ib land, it's "ibping [host lid]"  instead of "ibping [remote ip address]"
16:12 jclift So, some ib stuff I know, but obviously a lot more needed before I can be intelligent in my conversation abt it. ;(
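[Editor's note: the LID-based addressing jclift describes looks roughly like this with the standard infiniband-diags tools. These are real commands, but they obviously require an IB HCA and the OFED stack; output fields vary by version.]

```shell
# Find the local port's LID and GUID (infiniband-diags package);
# look for the "Base lid:" and "Port GUID:" lines per port:
ibstat

# Start an ibping responder on one host:
ibping -S

# Ping it from another host by LID - no IP address or DNS involved:
ibping <remote-lid>
```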
16:16 kkeithley| makes sense. When I was playing I only used tcp/ip for the mount. But a couple weeks ago Avati added the librdmacm rpm on the build/gerrit/jenkins machine and referenced http://review.gluster.org/#/c/149/
16:17 kkeithley| which says rpc-transport/rdma: use rdma-cm for connection establishment. Till now gluster used tcp/ip based communication channel with gluster specific protocol to exchange infiniband addresses.
16:17 kkeithley| from which I would infer that we can use IB addresses.
16:17 kkeithley| from which I would infer that we can use IB addresses too.
16:19 kkeithley| well, without studying the reviews a lot harder, it's not clear that the IB exchange was ever merged.
16:19 kkeithley| some parts were
16:43 jbrooks joined #gluster-dev
16:53 mohankumar joined #gluster-dev
16:54 mohankumar joined #gluster-dev
16:54 mohankumar avati: ping
16:57 hagarth mohankumar: hey
16:57 mohankumar hi hagarth
16:57 hagarth mohankumar: will reply to your comments tomorrow
16:57 __Bryan__ joined #gluster-dev
16:58 mohankumar hagarth: thanks!
16:58 mohankumar wanted to have some discussion on finalizing the design part for bd xlator
16:58 hagarth mohankumar: sure, now or after we discuss further on gerrit?
16:59 mohankumar hagarth: if you are ok now?
17:00 mohankumar i mean have time for discussion now?
17:01 hagarth mohankumar: am on the phone right now, give me a few minutes or else it might be getting late for you too.
17:01 mohankumar hagarth: if someone else also agrees for my current design bd -> posix, i can start working on the remaining pieces of bd xlator
17:01 mohankumar thats why pinging avati also
17:02 hagarth mohankumar: i am ok with the current design. Do talk to avati as well.
17:03 mohankumar hagarth: any idea when avati will be online?
17:04 hagarth mohankumar: he should be around in a bit
17:07 mohankumar hagarth: thanks!
17:10 a2 mohankumar, pong
17:10 mohankumar a2: i posted bd multi brick support patches to gerrit and also a design rfc to gluster-devel
17:11 mohankumar hagarth already reviewed patches on the gerrit and gave some comments
17:11 mohankumar if you are ok with the design of multi brick bd xlator, i can proceed with lots of 'ToDos'
17:11 mohankumar a2: ^^
17:15 a2 mohankumar, i will comment by tonight my time?
17:15 a2 sorry for the delay!
17:16 mohankumar a2: np
17:16 mohankumar a2: thanks in advance!
18:22 johnmark portante: ping
18:22 johnmark portante: is luis on here?
18:22 johnmark portante: just wondering if we had a solution in place for the forge
18:37 lpabon joined #gluster-dev
18:48 puebele3 joined #gluster-dev
19:18 sandeen joined #gluster-dev
19:21 kkeithley| sigh, centos6's mock epel5-x86_64 builds don't like a noarch package
19:24 kkeithley| ndevos: why is building rpms in mock for epel-5 interesting as a regression test at this point?
19:25 kkeithley| a2,avati,hagarth_: ^^^
19:31 a2 kkeithley, probably not interesting
19:31 a2 epel6 and fedora should be sufficient?
19:50 kkeithley| Apparently mock builds for fedora don't work so well on CentOS 6? ndevos knows more perhaps?
20:17 a2 i guess epel6 should be sufficient.. the part we want to regression-test (if newly added or removed headers/files are getting packaged or not, etc.) is probably not dependent on the distro
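[Editor's note: the regression test under discussion drives mock; a minimal manual equivalent, assuming mock is installed and a src.rpm has already been built (filename is an example), would be:]

```shell
# Rebuild the source rpm in a clean epel-6 chroot; packaging errors
# (unpackaged or missing files/headers) make rpmbuild, and hence mock, fail.
mock -r epel-6-x86_64 --rebuild glusterfs-3.4.0*.src.rpm
```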
20:26 JoeJulian Speaking of epel5... The packages in your fedorapeople repo, kkeithley, are sha1 signed instead of md5 so they don't work.
20:26 kkeithley| seems sufficient for now, I agree.
20:27 kkeithley| If we want to test that headers and other files are packaged, then maybe we ought to make the test actually do that?
20:27 kkeithley| JoeJulian: ugh
20:27 JoeJulian ... I'm stuck needing an el5 build for a really long time.
20:28 JoeJulian Unless I can get someone to rewrite our company's old cobol app into something that'll run on newer kernels. :/
20:29 kkeithley| wait, el5 are are supposed to be signed with sha1.
20:29 kkeithley| because RHEL5/CentOS5 rpm doesn't grok md5 sigs
20:31 kkeithley| oh, nm, wait. crap
20:31 kkeithley| I'm looking at the repo sig, not the rpm sig
20:39 hagarth joined #gluster-dev
20:40 johnmark kkeithley|: doh
20:42 kkeithley| el5 rpm doesn't grok 2048-bit sigs, so I've never signed el5 rpms. Except this time. New rpms will be up momentarily
20:48 kkeithley| I've got blisters on my fingers
20:53 kkeithley| JoeJulian: give it a whirl
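[Editor's note: a quick way to see which digest/signature algorithms a given rpm carries (the package filename below is just an example) is rpm's verify mode:]

```shell
# -K verifies signatures/digests; -v shows the algorithms in use,
# e.g. "RSA/SHA1" vs "RSA/SHA256" on the Header lines - which is
# what matters for el5's older rpm.
rpm -Kv glusterfs-3.4.0-1.el5.x86_64.rpm
```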
21:00 a2 kkeithley, i thought a combination of make dist and rpmbuild would fail if we either missed or specified non-existent files/headers? do we need any more explicit checks?
21:02 kkeithley| yes and no. the rpmbuild will fail if you attempt to package files that aren't listed, and vice versa, but... many of the files are globbed in the spec. e.g. in the -devel rpm. We automagically got the gfapi headers, but not because we did anything special
21:06 kkeithley| And the test would not fail if somehow it failed to get the gfapi header.
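[Editor's note: the globbing kkeithley describes means new headers are picked up implicitly. A -devel %files section along these lines (a sketch, not the actual glusterfs.spec contents) silently includes any header added to the directory, so rpmbuild never notices one going missing:]

```spec
%files devel
# Globs match whatever is installed - a dropped header doesn't fail the build:
%{_includedir}/glusterfs/*.h
%{_libdir}/*.so
```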
22:07 JoeJulian kkeithley: worked, thanks.
22:14 kkeithley| yw
22:18 kkeithley| scheduled five tests but only ran four? I don't see that in tests/basic/rpm.t. Where is that coming from?
22:19 kkeithley| er, planned 5 but ran 4?
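[Editor's note: "planned 5 but ran 4" is the TAP harness noticing a plan/ran mismatch: the script declared a plan of 5 tests but emitted only 4 results. A minimal reproduction in plain shell, unrelated to tests/basic/rpm.t itself:]

```shell
#!/bin/sh
# Declare a TAP plan of 5 tests...
echo "1..5"
# ...but only report 4 results; a TAP harness such as prove
# will then flag that 5 tests were planned but only 4 ran.
for i in 1 2 3 4; do
  echo "ok $i"
done
```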
23:49 yinyin joined #gluster-dev
