IRC log for #gluster-dev, 2013-04-09


All times are shown in UTC.

Time Nick Message
00:05 yinyin joined #gluster-dev
01:03 yinyin joined #gluster-dev
01:52 kkeithley1 joined #gluster-dev
01:59 portante` joined #gluster-dev
02:55 hagarth joined #gluster-dev
03:12 lalatenduM joined #gluster-dev
03:30 vshankar joined #gluster-dev
03:50 bala joined #gluster-dev
03:59 anmol joined #gluster-dev
03:59 sgowda joined #gluster-dev
04:04 bulde joined #gluster-dev
04:05 hagarth joined #gluster-dev
04:40 rastar joined #gluster-dev
04:43 aravindavk joined #gluster-dev
04:45 bharata joined #gluster-dev
04:47 sgowda joined #gluster-dev
04:48 aravindavk joined #gluster-dev
05:38 bala joined #gluster-dev
05:46 raghu joined #gluster-dev
05:48 spai joined #gluster-dev
05:49 nkhare joined #gluster-dev
05:58 deepakcs joined #gluster-dev
06:08 deepakcs joined #gluster-dev
06:14 mohankumar joined #gluster-dev
06:21 mohankumar joined #gluster-dev
06:26 bharata joined #gluster-dev
06:33 ollivera joined #gluster-dev
06:42 deepakcs joined #gluster-dev
06:44 hagarth joined #gluster-dev
07:00 rastar joined #gluster-dev
07:34 hagarth joined #gluster-dev
07:39 ollivera ndevos, thank you for the link
07:40 ndevos you're welcome, ollivera
07:47 ollivera JoeJulian, "Even better than reverse-engineering the dht hash function in order to calculate the hashes, you can just use the library function directly like I do"
07:54 ollivera so, I could take a temporary filename (assigned by my script), calculate its hash by just calling gf_dm_hashfn(), and assign brickX a range that contains the hash for that temporary filename
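
A minimal C sketch of what ollivera describes, assuming gf_dm_hashfn() with the prototype from libglusterfs/src/hashfn.h; the hash range and the "brickX" label are made up for illustration:

    /* link against libglusterfs; prototype per libglusterfs/src/hashfn.h */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    uint32_t gf_dm_hashfn (const char *msg, int len);

    int main (int argc, char **argv)
    {
        /* hypothetical layout: brickX owns hashes 0x00000000-0x3fffffff */
        const uint32_t start = 0x00000000, end = 0x3fffffff;
        uint32_t hash;

        if (argc < 2)
            return 1;
        hash = gf_dm_hashfn (argv[1], strlen (argv[1]));
        printf ("hash(%s) = 0x%08x -> %s\n", argv[1], hash,
                (hash >= start && hash <= end) ? "brickX" : "another brick");
        return 0;
    }
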
08:43 nkhare joined #gluster-dev
08:47 puebele joined #gluster-dev
08:51 bharata joined #gluster-dev
09:18 nkhare joined #gluster-dev
09:21 sgowda joined #gluster-dev
09:34 rastar joined #gluster-dev
09:46 rastar joined #gluster-dev
10:16 hagarth joined #gluster-dev
10:29 sgowda joined #gluster-dev
11:11 H__ On replace-brick start, where in the code tree is the initial directory traversal of the source brick? (It downs the volume for 20 minutes, then dies.)
11:17 hagarth joined #gluster-dev
12:12 kkeithley1 joined #gluster-dev
12:59 hagarth joined #gluster-dev
13:01 jdarcy joined #gluster-dev
13:02 puebele1 joined #gluster-dev
13:02 jdarcy Anybody else here for the 3.4 readiness meeting?
13:02 * hagarth is around
13:03 kkeithley1 yes
13:03 jdarcy Beta1 dependency tracker is at https://bugzilla.redhat.com/showdependencytree.cgi?id=918917&hide_resolved=1
13:04 jdarcy I'm pretty sure it's missing a few things that we'll want to have.  Everyone is encouraged to review the latest merged and posted patches on Gerrit to see if they recognize anything.
13:05 hagarth I think we would need the readdir consistency fix with nfs; I'll add that to the tracker.
13:05 jdarcy Also, I think most of the bugs that are in MODIFIED are only so in master - still need backports to 3.4
13:06 jdarcy I'll volunteer to wrangle the backports, unless someone else is so eager that they want to push me aside.  ;)
13:06 jdarcy (crickets)
13:07 hagarth let me know if you need any help in reminding folks over here for that.
13:07 jdarcy OK, thanks.
13:07 jdarcy I'll try to compile the list after this.  Probably take an hour or so.
13:07 hagarth sure.
13:08 jdarcy Are we still committed to getting the glusterd stuff in for beta1?
13:08 hagarth maybe we will get more opinions on that in tomorrow's meeting.
13:08 hagarth or later tonight in your case.
13:09 jdarcy OK.  Apologies for last night.
13:09 hagarth no problem. KP mentioned a wakeup-related problem in synctasks which is being tracked.
13:10 jdarcy I saw Pranith backported a change related to that.  Just a sec.
13:11 jdarcy No, that was into downstream, not into 3.4.  Nevermind.
13:11 hagarth OK. A few hours ago when I spoke to KP, he was still trying to establish a root cause.
13:11 jdarcy Yeah, we'll surely need all of the synctask and synclock patches to support the actual glusterd changes.
13:12 jdarcy What about RDMA?
13:12 hagarth I think we can take in RDMA - I will take up that AI, review and get it going.
13:13 jdarcy You have the hardware you need for that, right?  ISTRC seeing it in BLR.
13:13 hagarth yeah, have a 4 node testbed.
13:13 jdarcy Also, we might be getting access to some IB equipment at Argonne.  That will allow us to do some testing on different hardware/drivers.
13:14 hagarth right, that would be helpful.
13:14 jdarcy Is anybody (other than John Mark) up to date on what's planned for a test day?
13:15 hagarth no, I just know that we need a release for that.
13:15 kkeithley1 If a fix has been merged, shouldn't the BZ be updated to ON_QA?
13:15 hagarth kkeithley1: you could leave it in MODIFIED and move it to ON_QA when a build with the fix is available
13:15 kkeithley1 okay
13:16 jdarcy kkeithley: merged goes to MODIFIED, when we actually tag and build the RPMs it goes to ON_QA (AFAIK)
13:16 hagarth we have had a few patches go in since alpha2, so probably time for one more alpha release?
13:17 jdarcy We really kind of need a state for "merged in master but not in a release branch"
13:17 jdarcy hagarth: I'd say that's kind of up to you and John Mark.
13:18 jdarcy If we do another alpha, then we only get one beta in before Summit.
13:18 hagarth jdarcy: I am a bit hesitant to go beta without the glusterd patches
13:19 hagarth maybe alpha3 this week and beta1 as soon as we are good with glusterd? (keeping my fingers crossed on that one).
13:19 kkeithley1 and I'd like to see Swift 1.8.0 in, whether it's alpha3 or beta1
13:19 gbrand_ joined #gluster-dev
13:20 jdarcy Could we do an alpha3 with what we have (plus Swift 1.8.0?) and then do beta1 from the longer list?
13:20 hagarth jdarcy: sounds good to me. If there are easy pickings for release-3.4, I will wait till end of tomorrow.
13:20 jdarcy kkeithley: You've already submitted the 3.4 patch, but it isn't merged, right?
13:21 kkeithley1 correct
13:21 jdarcy Oh, we *must* get the ext4 fix in.
13:22 hagarth jdarcy: yes. Once we have your list, let us start getting things in.
13:22 jdarcy OK, I'll send out the short list for alpha3, and a longer one for beta1.
13:23 jdarcy That's it for me.  Anyone else?
13:24 johnmark gah. picked a stupid day to drive in
13:24 jdarcy O hai.
13:24 johnmark howdy
13:24 * johnmark reads the backlog
13:24 kkeithley1 It's probably a bad day to quit smoking too
13:24 * jdarcy gives JMW a chance to read scrollback.
13:24 johnmark thanks
13:24 hagarth jdarcy: that should be it.
13:24 johnmark so for testing day
13:25 johnmark I need to know when the beta release is
13:25 hagarth I will fire a 3.3.2qa release as well this week.
13:25 johnmark so we can test it :)
13:25 johnmark hagarth: bless you
13:25 johnmark that will be awesome
13:25 hagarth johnmark: beta contingent on glusterd issues getting mitigated. We are close but not there as yet. More on the call in 13 hours.
13:25 kkeithley1 has anything been merged to the 3.3.x tree since we did 3.3.1? (I know, I could look at git)
13:25 johnmark hagarth: got it. thanks
13:25 johnmark so alpha 3
13:26 johnmark hagarth: if you can pop it by Friday, then that will be the test day
13:26 johnmark kkeithley1: I picked a bad day to stop shooting heroin
13:27 jdarcy I think we can have a coherent and useful release by Friday, but it'll have a much shorter list of changes than what's on the beta1 tracker.
13:27 kkeithley1 are you sure you want to admit that here?
13:27 johnmark hahaha
13:27 johnmark jdarcy: right
13:27 jdarcy As long as he admitted that he *stopped*.  ;)
13:27 johnmark *ideally* betas aren't released until all new code has been checked in
13:27 hagarth kkeithley1: a few fixes have been pushed since 3.3.1.
13:28 johnmark so if we're going with alpha 3, so be it
13:28 johnmark and then we'll target a beta by May 1
13:28 johnmark and perhaps a beta per week afterwards?
13:28 johnmark maybe?
13:29 hagarth I am inclined to do a beta as soon as we knock the dependencies off the blocker bug.
13:29 jdarcy git log --oneline origin/release-3.3 --not v3.3.1
13:30 jdarcy I'd say beta1 as soon as the glusterd and fsync changes are done and backported.
13:30 hagarth jdarcy: sounds good to me
13:31 jdarcy Then beta2 when the rest of the list is done.
13:31 johnmark hagarth: cool
13:31 johnmark alrighty then
13:31 johnmark we should have two test days - one for alpha3 and one for beta1
13:32 johnmark and possibly one more
13:32 * johnmark looks at the calendar
13:32 H__ Is the meeting over? (I'd like to discuss replace-brick behaviour)
13:32 jdarcy H__: Very close, I think.
13:32 jclift_ joined #gluster-dev
13:33 jdarcy I'll go compile those lists.
13:33 * jdarcy (gavel)
13:34 jdarcy H__: What's up with replace-brick?
13:35 H__ it worked fine in 2 test setups yet kills my production environment. It downs the entire volume immediately after startup for 20 minutes, then dies
13:36 H__ And I'm trying to find where it does that in the code ;-)
13:37 jdarcy H__: Did it generate any cores in /var/log?
13:37 H__ none. The recipient glusterfs is still running. Spinning at 100%
13:37 jdarcy H__: Can you attach to it with gdb and get a backtrace?
13:37 H__ I filed bugs 950024 and 950006 for this issue
13:38 H__ yes, bt is in 950006
13:38 H__ and gdb is now in: Run till exit from #0  __inode_find (table=0x23cf660, gfid=0x2687410 "\003\267\245~\246\321@\024\274\372\230\310\002\365\306,") at inode.c:765
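
For reference, grabbing a backtrace like the one attached to 950006 from a live, spinning process looks roughly like this (the process name here is an assumption):

    gdb -p $(pidof glusterfs)    # attach to the spinning daemon
    (gdb) thread apply all bt    # backtrace of every thread
    (gdb) detach                 # leave the process running
    (gdb) quit
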
13:38 * jdarcy looks.
13:40 jdarcy Why two bugs?
13:40 H__ I think they're separate issues
13:41 jdarcy So 950006 is about the hang, 950024 is about the I/O saturation?
13:41 H__ yes
13:42 jdarcy I don't think we can do much about the I/O saturation in the near term, except maybe to experiment with cgroups.
13:42 jdarcy On 950006, it seems like there must be an infinite loop in the inode table.  Scary.
13:43 jdarcy Since you seem comfortable with gdb...
13:44 H__ it's rusty ;-) I hope it all comes back soon
13:44 jdarcy Would it help to have a macro for walking the inode list, so we can verify that there's a loop, or should we just treat that as an operating assumption?
13:44 jdarcy Even if we verify that there's a loop, that doesn't tell us how it happened.
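
A hedged sketch of the loop check being floated: Floyd cycle detection over a doubly linked list_head chain of the kind the inode table's lists use (struct layout per libglusterfs/src/list.h; this is an illustration, not an existing macro in the tree):

    struct list_head { struct list_head *next; struct list_head *prev; };

    /* On a healthy circular list the fast cursor walks back around to
     * the head; if the two cursors meet first, the chain has been
     * corrupted into a cycle that never returns to the head. */
    static int list_has_cycle (struct list_head *head)
    {
        struct list_head *slow = head->next, *fast = head->next;

        while (fast != head && fast->next != head) {
            slow = slow->next;          /* one step  */
            fast = fast->next->next;    /* two steps */
            if (slow == fast)
                return 1;               /* rogue loop detected */
        }
        return 0;                       /* reached head again: no loop */
    }
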
13:45 H__ the target filesystem was made fresh, replace-brick ran 20 minutes on it and since then it has been stuck in this mode for 24h now
13:46 H__ I tried to peek at some variables, but got this: (gdb) p uu1  $2 = <optimized out>
13:47 jdarcy The most likely cause of such a loop is racy concurrent access.  That's an awful kind of bug to track down because it's not likely to happen consistently.
13:48 H__ right. yet how does this happen? there's only 1 source brick reading thread and 1 destination brick writing thread afaik
13:49 lalatenduM joined #gluster-dev
13:49 jdarcy If I knew how it could happen, it wouldn't happen.  ;)
13:49 H__ You mentioned cgroups. I find no reference for that in the source tree. Do you have a pointer for me?
13:50 jdarcy H__: That's a Linux kernel feature.  Not sure if the tools are fully packaged except in the RHEL and Fedora families.
13:51 H__ I'll read up on that. btw, this is on ubuntu server 11.10
13:51 jdarcy H__: It lets you set per-process limits on all sorts of resources - CPU, memory, disk and network utilization, etc.
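
A hedged illustration of the mechanics jdarcy describes, via the cgroup v1 filesystem interface; the mount point, group name, device numbers, and limit are all site-specific assumptions:

    #include <stdio.h>

    /* Move a pid into a pre-created blkio cgroup and cap its reads at
     * ~20 MB/s.  Assumes /sys/fs/cgroup/blkio is mounted, a "gluster"
     * group was mkdir'd there, and the brick disk is device 8:16. */
    static int throttle_pid (int pid)
    {
        FILE *f = fopen ("/sys/fs/cgroup/blkio/gluster/tasks", "w");
        if (!f)
            return -1;
        fprintf (f, "%d\n", pid);
        fclose (f);

        f = fopen ("/sys/fs/cgroup/blkio/gluster/"
                   "blkio.throttle.read_bps_device", "w");
        if (!f)
            return -1;
        fprintf (f, "8:16 20971520\n");   /* major:minor bytes-per-sec */
        fclose (f);
        return 0;
    }
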
13:51 jdarcy Not sure if cgroups are enabled in the Ubuntu kernels.  :(
13:52 jdarcy It's probably worth doing a bit of research, but (eventually) the solution should be within GlusterFS IMO.
13:52 H__ agreed
13:53 H__ where's the actual work code in the source tree btw? I've been searching for replace-brick etc. but did not yet find the actual copying pieces
13:53 jdarcy I'm afraid that's all I can do right now.  Amar (bulde) needs to take a look at this first, I think.
13:54 H__ 11.10 has cgroup-bin tools, so i'll look into those.
13:55 H__ ok thanks for your help. I see no Amar (bulde) here.
13:56 jdarcy Yeah, it's evening in BLR, so it's kind of hit or miss when he'll see it.
13:56 jdarcy The code to migrate files is mostly in xlators/cluster/dht/src/dht-helper.c BTW.
13:58 jdarcy Start with dht_migrate_file, then it fans out from there.
14:01 vshankar joined #gluster-dev
14:04 jdarcy I'm going to start using ON_DEV to mean a patch has been backported to a release branch but the release hasn't been made yet.
14:06 H__ dht_migrate_file() lives in dht-rebalance.c. Is that what is actually being used by a replace-brick?
14:06 jdarcy I believe so.
14:07 jdarcy That's not quite my area of expertise, though.
14:07 H__ thanks, I'll work from there to find the todo-list generation code (I suspect the saturation issue lives there)
14:22 wushudoin joined #gluster-dev
14:56 sgowda joined #gluster-dev
15:07 JoeJulian kkeithley1: Aack. Just discovered that glusterfsd is "WantedBy=multi-user.target" so stopping glusterd also stops all the bricks. Perhaps it should just be "WantedBy=shutdown.target" and remove glusterd.service from "After"??
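
For reference, the untested change JoeJulian is floating would amount to roughly this in the Fedora glusterfsd.service (a sketch of his suggestion, not a verified fix):

    [Unit]
    # per the suggestion: drop glusterd.service from After=

    [Install]
    WantedBy=shutdown.target    # was: multi-user.target
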
15:08 kkeithley1 okay
15:09 JoeJulian Not sure if that works or not, I don't have time to try it right this moment...
15:13 kkeithley1 I'm hip deep trying to make my IB gear do glusterfs.rdma, so... want to file a bug? I guess it should be against fedora (not glusterfs) because all the .service files are only in the fedora packaging.
15:28 JoeJulian ok, after I get to the office I'll spin up a test vm and make sure it'll even work.
15:59 rastar joined #gluster-dev
16:17 bala joined #gluster-dev
16:22 bulde joined #gluster-dev
16:25 kkeithley1 I need some rdma help.
16:26 kkeithley1 I've got a server and a client with mlx cards and a mlx switch. I'm running opensm. modules loaded can be seen here at http://paste.fedoraproject.org/7007/24717136
16:28 kkeithley1 I've got a single brick defined with transport rdma. Mounts just hang forever. I can fpaste more stuff about my ib setup if needed.
16:28 kkeithley1 but first I need some caffeine. biab
16:51 raghu joined #gluster-dev
16:56 lalatenduM joined #gluster-dev
17:08 gbrand_ joined #gluster-dev
17:37 jclift_ kkeithley: Are you able to ping the server and client from each other?
17:38 kkeithley1 left #gluster-dev
17:38 jclift_ Heh
17:38 kkeithley1 joined #gluster-dev
17:39 jclift_ kkeithley1: Today's meetings are now finished, and it looks like I've missed the docs guys to help them with Cinder stuff.  So, I should be ok to help you now. :)
17:39 kkeithley1 yes, I can ping both ways. (using the ipaddrs of the ib0 interface of course)
17:40 jclift_ kkeithley1: k.  There's a tool called "ibping" that should be installed.
17:40 jclift_ kkeithley1: That pings stuff using native IB, rather than using tcp
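
Typical ibping usage, for reference (the tool ships in infiniband-diags; the LID value below is made up):

    # server side: run ibping as a responder
    ibping -S

    # client side: find the server port's LID with ibstat, then ping it
    ibping -L 4          # the address given is a LID
    ibping -c 5 -L 4     # stop after five pings
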
17:41 kkeithley1 don't seem to have it. what rpm ?
17:41 jclift_ kkeithley1: Just to point out, I remember the head f**k from years ago when I started to learn this stuff.  There was almost no docs around at the time then tho.
17:41 jclift_ 1 sec
17:41 kkeithley1 I've got things like ibv_*_pingpong?
17:42 jclift_ infiniband-diags
17:42 jclift_ That's the rpm
17:42 jclift_ kkeithley1: You've installed the "Infiniband Support" yum group, yeah?
17:43 kkeithley1 not explicitly
17:43 jclift_ kkeithley1: k, do that now.
17:43 jclift_ sudo yum groupinstall "Infiniband Support"
17:44 jclift_ Then do a sudo yum groupinfo "Infiniband Support", so it shows you all of the packages that are mandatory/default/optional.
17:44 jclift_ Make a note of the optional ones, as you'll see stuff in there that's useful. ;)
17:44 kkeithley1 yum says: No packages in any requested group available to install or update
17:44 jclift_ You're not running RHEL are you?
17:44 kkeithley1 fedora18
17:44 jclift_ That's not going to work.
17:45 jclift_ Well... that's an over-statement.  You *might* get it to work.
17:45 kkeithley1 ugh
17:45 jclift_ But some of the packages for Infiniband are explicitly _not_ included in Fedora.  Only in RHEL.
17:45 jclift_ And yeah, I think that's a completely friggin bogus idea too.
17:46 kkeithley1 huh?
17:46 kkeithley1 oh, that packages are not in Fedora
17:46 jclift_ Yeah.
17:47 jclift_ Anyway, try to install at least:
17:47 jclift_ libibcommon mstflint perftest qperf
17:47 jclift_ kkeithley1: You'll be much better off running on RHEL 6.x or CentOS 6.x though.
17:47 jclift_ Just saying.
17:48 kkeithley1 yeah, I'm not in the office today, so it'd be a bit hard to fix that atm. In theory I can reprovision with beaker but...
17:49 jclift_ kkeithley1: k.
17:49 jclift_ kkeithley1: Does this help? http://fpaste.org/1Zsi/
17:49 jclift_ That's from the RHEL 6.4 box here.
17:49 jclift_ Shows what's in the group.  You should try to install as much of the "Default Packages" as it will let you.
17:50 jclift_ kkeithley1: Technically you should be able to get 99.9% of stuff to work.
17:50 jclift_ kkeithley1: I think it's just the srptools or something that doesn't.
17:50 kkeithley1 two secs
17:50 jclift_ kkeithley1: np
17:50 * jclift_ gets coffee
17:57 kkeithley1 okay, of all those rpms, I've installed all of them that are available in f18
17:57 kkeithley1 and I've got ibping now
17:57 jclift_ Cool.
17:58 jclift_ As a note, you might as well su to the root user
17:58 kkeithley1 yup, been there, done that
17:58 jclift_ Cool. The perms on the IB executables are way tight, so tab completion isn't so good through sudo. ;)
17:59 jclift_ Gah.
17:59 jclift_ kkeithley1: Give me a few mins.  I switched my cards into "10GbE" mode a while ago, and need to switch them back to IB mode.
18:00 jclift_ kkeithley1: In 10GbE mode they show up as standard ethX adapters, none of this IPoIB stuff.
18:00 kkeithley1 np. I might have to step away for a few minutes to sign for a delivery
18:00 jclift_ kkeithley1: Go for it. :)
18:00 jclift_ kkeithley1: In 10GbE mode, the IB tooling doesn't see them. o_O
18:00 * jclift_ gets it done
18:00 kkeithley1 I've got ib0 and ib1 interfaces and the Chelsio 10g cards aren't connected to anything
18:01 jclift_ Yep, that's right.
18:02 jclift_ As a note, if you want to muck around with 10GbE mode later on at some point, it's trivial to change. 1 line adjustment in /etc/rdma/mlx4.conf (but best to do a reboot as well)
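
The one-line change in question: each line of /etc/rdma/mlx4.conf maps a PCI device to its port types (the PCI address below is hypothetical):

    # <pci-device> <port1-type> [<port2-type>]
    0000:05:00.0 eth eth    # 10GbE mode; set "ib ib" for InfiniBand
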
18:03 kkeithley1 hmm. Okay, I'll keep that in mind, but if I do 10Gb I expect I'm more likely to just use the Chelsio 10Gb interfaces.
18:03 jclift_ Sure, your choice, etc. ;)
18:04 jclift_ You seem to have ConnectX 3 cards, from you model number you mentioned in email before.
18:04 kkeithley1 that's assuming they work. One of my mlx cards was dead
18:04 * jclift_ suspects they might do 40GbE, but isn't sure
18:04 jclift_ kkeithley1: How dead?
18:05 kkeithley1 are there degrees of dead? ;-) it showed up in the dmesg, but it didn't initialize and didn't show up in ib_devinfo or ib_devices
18:06 kkeithley1 I swapped a different one in and it's working
18:06 jclift_ kkeithley1: Could be number of things.  Firmware is one possibility.
18:07 jclift_ kkeithley1: Feel free to post it to me.  I'll be happy to revive it. :D
18:07 kkeithley1 all of them report the same fw version
18:07 kkeithley1 okay, I'll drop it "in the post" soon
18:07 jclift_ Interesting.  Showing up in dmesg is a good sign.
18:08 jclift_ If this stuff is still in warranty from mlnx tho, you could do that.
18:08 jclift_ kkeithley1: As a thought, how urgent is this stuff for you?  Just thinking that I need to get this stuff sorted today/tomorrow anyway.
18:08 kkeithley1 dunno. this stuff is at least two years old
18:08 jclift_ kkeithley1: Post it to me. :D
18:09 jclift_ kkeithley1: If it's not completely urgent for you, then you'll probably have instructions by tomorrow to get everything running.
18:09 jclift_ Whereas at the moment I'm going to have to re-remember stuff and experiment for a bit.
18:09 jclift_ Up to you. :D
18:09 kkeithley1 not urgent for me per se. We have a community gluster user who's suffering some pain.
18:10 kkeithley1 take your time, I'll tinker on other things in the mean time
18:10 jclift_ k.  Hopefully they can put up with a few hours' wait.
18:10 jclift_ kkeithley1: Any idea which version of Gluster they're using?
18:13 kkeithley1 3.3.1
18:13 jclift_ tx
18:14 kkeithley1 https://bugzilla.redhat.com/show_bug.cgi?id=920332 if that helps
18:14 glusterbot Bug 920332: unspecified, unspecified, ---, rgowdapp, NEW , Mounting issues
18:14 jclift_ k
18:40 JoeJulian kkeithley: bug 878883 maybe?
18:40 glusterbot Bug http://goo.gl/CXce2 unspecified, medium, ---, rgowdapp, ON_QA , Fuse mount hangs for a volume with RDMA transport
19:02 kkeithley1 could be the same thing
19:02 johnmark kkeithley1: was just reading about AoE
19:03 johnmark and all its fanboys. is there any significance for glusterfs?
19:03 johnmark as in, can glusterfs take advantage of an aoe storage array?
19:05 jclift_ johnmark: When you say "take advantage of", are you meaning can it simply use it, or are you asking if we can "do it better" in some way?  i.e. use it++
19:06 johnmark use it
19:06 johnmark was reading that it's udp-based
19:06 johnmark I think...
19:07 jclift_ johnmark: Does it present as a block device to the system, that people can normally chuck a filesystem on top of?
19:07 johnmark jclift_: yes
19:07 johnmark iiuc
19:07 jclift_ Well, then regardless of underlying transport, that sounds promising.
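
Concretely, the promising case would look like any other block device (a hypothetical sketch; the aoe kernel driver exposes LUNs as /dev/etherd/eX.Y):

    mkfs.xfs /dev/etherd/e0.0            # filesystem on the AoE LUN
    mount /dev/etherd/e0.0 /bricks/b0    # mount it like a local disk
    gluster volume create vol0 server1:/bricks/b0   # then use it as a brick
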
19:07 johnmark that's what I'm hoping
19:08 johnmark or actually I don't know - was just trying to understand it
19:08 jclift_ np
19:10 kkeithley1 Why didn't they call it iSATA?
19:11 kkeithley1 I guess because it doesn't use IP
19:11 kkeithley1 like iSCSI
19:11 kkeithley1 just ethernet frames
19:12 kkeithley1 wikipedia says it's closer to FCoE
19:12 kkeithley1 UDP still uses IP
19:13 kkeithley1 Regardless, seems like it's just another way to connect to a disk. All by itself it doesn't seem to do anything for scale out.
19:29 johnmark kkeithley1: right. that's what I was wondering
19:30 johnmark kkeithley1: so I was kind of confused by this article - http://www.computerweekly.com/news/2240181109/Iceland-media-firm-opts-for-ATA-over-Ethernet-archive
19:30 johnmark as if using AoE was somehow mutually exclusive with storage software
19:30 johnmark I realize that's just a PR puff piece masquerading as news, but still...
19:42 jclift_ kkeithley1: Reading over this quickly, it sounds wrong: https://bugzilla.redhat.com/show_bug.cgi?id=890502
19:42 glusterbot Bug 890502: unspecified, medium, ---, kparthas, ASSIGNED , glusterd fails to identify peer while creating a new volume
19:42 jdarcy joined #gluster-dev
19:42 jclift_ kkeithley1: With that bug, it sounds like gluster is taking the wrong approach.  Isn't there a Gluster "host id" UUID type of thing that could be used instead?
19:43 jclift_ kkeithley1: Asking about that bug, because it directly has follow on effect on the rdma review here: http://review.gluster.org/#/c/4600
19:46 jclift_ Just seems like the wrong direction, but I'm not clueful with Gluster's initial peer probe exchange stuff.
19:46 kkeithley1 I think you have a valid point though.
19:49 jclift_ k, I'll update the BZ directly with similar general question. ;)
19:50 JoeJulian Sounds similar in theory to bug 765437
19:50 glusterbot Bug http://goo.gl/YORlt low, medium, ---, kparthas, ASSIGNED , [FEAT] Use uuid in volume info file for servers instead of hostname or ip address
19:50 jclift_ JoeJulian: You're good with those goo.gl addresses. :)
19:50 JoeJulian hehe, the bot does it for me.
19:50 jclift_ :)
19:51 JoeJulian But I did write the plugin.
19:51 jclift_ JoeJulian++
19:54 jclift_ left #gluster-dev
19:55 jclift_ joined #gluster-dev
20:22 __Bryan__ joined #gluster-dev
21:28 msvbhat_ joined #gluster-dev
21:38 gbrand_ joined #gluster-dev
21:38 inodb_ joined #gluster-dev
21:40 kkeithley joined #gluster-dev
