
IRC log for #gluster-dev, 2014-11-12


All times are shown in UTC.

Time Nick Message
01:01 topshare joined #gluster-dev
01:42 pranithk joined #gluster-dev
02:04 shyam joined #gluster-dev
02:12 bala joined #gluster-dev
02:13 nishanth joined #gluster-dev
02:47 bharata-rao joined #gluster-dev
03:04 hagarth joined #gluster-dev
03:37 pranithk left #gluster-dev
03:38 topshare joined #gluster-dev
03:39 topshare joined #gluster-dev
03:43 kanagaraj joined #gluster-dev
03:46 shubhendu joined #gluster-dev
03:48 shubhendu_ joined #gluster-dev
03:59 badone joined #gluster-dev
04:08 ndarshan joined #gluster-dev
04:12 topshare joined #gluster-dev
04:13 topshare joined #gluster-dev
04:34 soumya_ joined #gluster-dev
04:38 rafi joined #gluster-dev
04:38 Rafi_kc joined #gluster-dev
04:40 lalatenduM joined #gluster-dev
04:40 nishanth joined #gluster-dev
04:40 nkhare joined #gluster-dev
04:49 bala joined #gluster-dev
04:52 jiffin joined #gluster-dev
04:53 raghug joined #gluster-dev
04:59 atinmu joined #gluster-dev
05:03 _Bryan_ joined #gluster-dev
05:04 spandit joined #gluster-dev
05:09 kshlm joined #gluster-dev
05:15 ppai joined #gluster-dev
05:26 hagarth joined #gluster-dev
05:37 kdhananjay joined #gluster-dev
05:38 kdhananjay left #gluster-dev
06:10 atalur joined #gluster-dev
06:17 anoopcs joined #gluster-dev
06:35 soumya_ joined #gluster-dev
07:04 deepakcs joined #gluster-dev
07:05 ppai joined #gluster-dev
07:12 bala joined #gluster-dev
07:15 Humble joined #gluster-dev
07:20 bala joined #gluster-dev
07:53 krishnan_p joined #gluster-dev
07:54 ppai joined #gluster-dev
08:21 ndevos JustinClift: did you fix http://build.gluster.org/computer/slave21.cloud.gluster.org/ , or not yet?
08:43 vimal joined #gluster-dev
08:46 shubhendu_ joined #gluster-dev
08:49 ndarshan joined #gluster-dev
08:52 bala joined #gluster-dev
09:12 atalur joined #gluster-dev
09:13 kdhananjay joined #gluster-dev
09:15 ndevos any reviewers for a build fix? http://review.gluster.org/9033
09:16 ppai joined #gluster-dev
09:16 ndevos or a code-duplication cleanup? http://review.gluster.org/9035
09:17 rgustafs joined #gluster-dev
09:25 ndarshan joined #gluster-dev
09:25 shubhendu_ joined #gluster-dev
09:33 bala joined #gluster-dev
09:43 aravindavk joined #gluster-dev
10:18 lkoranda joined #gluster-dev
10:20 kdhananjay joined #gluster-dev
10:23 kdhananjay left #gluster-dev
10:30 atalur joined #gluster-dev
10:35 ppai joined #gluster-dev
10:53 lalatenduM joined #gluster-dev
11:05 JustinClift ndevos: Ahh, not yet
11:05 JustinClift ndevos: And slave20 seems busted too atm
11:06 JustinClift ndevos: I'll have to rebuild slave20 later I think
11:06 ndevos oh, I tried to start some of the offline ones that did not have a note, did not check if it worked out
11:07 ndevos JustinClift, Humble, lalatenduM: want to take care of Bug 1163071 ?
11:07 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1163071 high, unspecified, ---, bugs, NEW , RHEL 5 noarch repo broken/missing
11:07 JustinClift ndevos: Yeah, that's what I do, then go back to see what worked or not (then investigate)
11:07 * ndevos fixed the symlinks for EPEL-5-$VARIANT yesterday, and is happy to see someone else taking care of this new one
11:08 JustinClift slave20 seems truly busted.  Or at least it did this morning.  Tried rebooting it, but it didn't seem to come back (even hard reboot)
11:08 * JustinClift looks again
11:08 ndevos JustinClift: I mostly check it too, unless I get distracted
11:08 ndevos which happens a lot...
11:08 * JustinClift nods
11:11 JustinClift ndevos: I just cheated and created that path on slave21, then chowned it to jenkins:jenkins
11:11 JustinClift Not the right solution, but it should work for the moment
11:12 ndevos JustinClift: whatever, maybe http://review.gluster.org/9033 would fix that too?
11:12 JustinClift NFI.  That would be nice
11:12 * JustinClift doesn't have time to really look atm
11:23 Gaurav_ joined #gluster-dev
11:26 krishnan_p JustinClift, hi
11:26 krishnan_p JustinClift, I just read your reply to the mail on trusted and non-trusted volfile.
11:28 soumya joined #gluster-dev
11:28 lalatenduM ndevos, regarding 1163071, it is interesting because the dir structure is pretty old, maybe no one was using el5 :)
11:28 krishnan_p JustinClift, I am not really sure how you want to seek -devel or -user feedback on this?
11:28 JustinClift krishnan_p: I'm thinking you could email gluster-devel, saying something like:
11:29 krishnan_p JustinClift, The content for the doc I have handy is context-dependent, and we need to build the document that this one depends on before including this.
11:29 JustinClift krishnan_p: Ok.  It was just an idea.  If that makes it more difficult instead of easier/better, ignore it. :)
11:31 krishnan_p JustinClift, I like the idea, which is why I want your suggestion how we can close this one :)
11:32 krishnan_p JustinClift, I can lift the relevant doc for why the trusted volfile approach was used in the first place. I just want your opinion on where we should place it in the admin-guide
11:35 JustinClift krishnan_p: Hmmm, I was kind of thinking it sounded like info for developers
11:35 JustinClift Is it more suitable as info for admin staff?
11:35 JustinClift krishnan_p: (sorry for the delayed response btw, my vpn dropped out)
11:37 JustinClift krishnan_p: Hmmm, I guess we could start a new directory under the docs area.  Something specific to developers
11:37 * JustinClift swears at his internet connection
11:37 JustinClift It's very very slow atm
11:37 krishnan_p JustinClift, No problem. hmm. The way I think of this doc, it is information that we should make available for our users
11:38 JustinClift krishnan_p: k.  Focus on the admin aspect then
11:38 krishnan_p if it sounds too developer centric, I am willing to work on make it user-friendly
11:38 krishnan_p s/make/making it
11:38 JustinClift "developer-centric" isn't really a problem (unless you're not targetting them)
11:39 * JustinClift points out that developers are definitely an audience we want to target better anyway :D
11:39 JustinClift But, it's also not going to hurt to have admin focused info on this topic
11:39 JustinClift Admins will like it for sure. :)
11:39 JustinClift krishnan_p: I'm not really sure how to close this one tho
11:39 krishnan_p "Why am I seeing two volfiles that are nearly identical?" is a valid question. So, my target is the end-user. Developers would also benefit from this information, but they might have figured it out themselves :)
11:40 JustinClift Better not to assume that :)
11:40 JustinClift eg devs figure it out for themselves
11:40 krishnan_p JustinClift, I am tempted to start a did_you_know.md to cover all these often unexplained 'features' of glusterfs
11:40 JustinClift Do it
11:40 JustinClift Really.
11:40 krishnan_p JustinClift, I was kidding. I wouldn't assume that.
11:40 JustinClift That's a good idea
11:41 JustinClift As in "PLEASE do that" :)
11:41 krishnan_p JustinClift, Really? I thought you would call me lazy and uncreative
11:41 JustinClift The did you know concept?
11:41 krishnan_p JustinClift, "Do it" works in this case :)
11:41 JustinClift It's a good concept
11:41 krishnan_p JustinClift, Yes the did you know concept.
11:41 JustinClift It's like giving people info on the unexplored corners they've probably wondered about but never looked up
11:42 JustinClift Helps round out people's knowledge
11:42 JustinClift And no, that's not a lazy/uncreative thing
11:42 krishnan_p JustinClift, Well, thanks. Let me send a patch for adding a did_you_know.md under admin-guide with this content
11:42 krishnan_p JustinClift, Yep.
11:42 JustinClift Send it on the mailing list.  You'll probably get a lot of positive feedback (and ideas of what to include) :D
11:43 JustinClift Do both mailing lists. :)
11:43 krishnan_p I will add content in the form of a patch and send the patch URL and the idea to both mailing lists. Does that make sense?
11:44 JustinClift Yep, that'll work.  In the email, encourage people to post follow up patches with their own "did you know" stuff too.
11:45 hagarth JustinClift: have you slept at all today?
11:47 JustinClift hagarth: Not yet
11:47 krishnan_p JustinClift, thanks for helping with this :)
11:47 JustinClift I'm waiting for an engineer from BT to turn up.  He's supposed to be installing Fibre here today
11:47 JustinClift Between 8am-1pm
11:48 JustinClift It's 11:48, and no sign so far. :(
11:48 hagarth ah, that explains your availability during the day here
11:49 hagarth much faster bandwidth with Fibre?
11:51 davemc community meeting in 10 minutes on #gluster-meeting.
11:52 ppai joined #gluster-dev
11:54 JustinClift hagarth: Extremely so
11:55 JustinClift (vpn just dropped out again)
11:55 hagarth JustinClift: cool
11:56 JustinClift hagarth: It may be practical to set up a box or two here and give them dedicated IPv6 addresses for hooking into Jenkins
11:56 JustinClift (probably VM's on them tho)
11:56 JustinClift That reminds me, I need to see if OSAS's "lets get the glusterfs gear into the dmz" task has moved along
11:56 hagarth JustinClift: that would be neat, yeah VMs would be better.
11:59 tdasilva joined #gluster-dev
12:00 * JustinClift just sent reminder email to ppl
12:04 pranithk joined #gluster-dev
12:05 JustinClift ndevos: We don't support EL5 any more do we?
12:06 * JustinClift kinda remembers asking about this a while back, and thinks the answer was "no"
12:07 krishnan_p JustinClift, patch for did-you-know.md ready :)
12:07 JustinClift :)
12:07 hagarth krishnan_p++
12:07 glusterbot hagarth: krishnan_p's karma is now 4
12:08 ndevos JustinClift: yes we do, just not for geo-rep, I think
12:09 JustinClift Ahh cool
12:09 JustinClift Yeah, that might have been it
12:09 JustinClift ndevos: Tx :)
12:17 krishnan_p JustinClift, email sent to -users and -devel.
12:23 edward1 joined #gluster-dev
12:25 soumya joined #gluster-dev
12:43 deepakcs left #gluster-dev
13:08 ndevos Humble, kkeithley: could you glance over http://review.gluster.org/8903 and +1 it?
13:15 kkeithley successful (not successfull)
13:15 kkeithley that's on line 76
13:31 ndevos kkeithley++ thanks, updated version ./rfc.sh'd
13:31 glusterbot ndevos: kkeithley's karma is now 39
13:32 bala joined #gluster-dev
13:32 kkeithley +1'd it
13:36 ndevos thanks
13:50 hagarth joined #gluster-dev
14:22 pranithk joined #gluster-dev
14:22 pranithk left #gluster-dev
14:23 lalatenduM joined #gluster-dev
14:27 shyam joined #gluster-dev
14:36 rgustafs joined #gluster-dev
14:38 lalatenduM joined #gluster-dev
14:40 JustinClift k, the BT engineer has finished setting up the Fibre stuff.
14:40 JustinClift I'm going to do some brief testing, then hit the sack.
14:40 JustinClift See everyone tomorrow. :)
15:25 soumya joined #gluster-dev
15:25 jobewan joined #gluster-dev
15:25 xavih joined #gluster-dev
15:26 wushudoin joined #gluster-dev
15:27 nkhare joined #gluster-dev
15:47 aravindavk joined #gluster-dev
15:54 bala joined #gluster-dev
16:30 davemc joined #gluster-dev
16:50 _Bryan_ joined #gluster-dev
16:56 hagarth joined #gluster-dev
17:01 soumya joined #gluster-dev
17:12 davemc joined #gluster-dev
17:20 _shaps_ joined #gluster-dev
17:21 _shaps_ Hi guys, I've got a quick question
17:22 _shaps_ I've got a geo-rep setup which copies data across 3 DCs, it works fine, but I've just spotted that both the master and slave servers (the main ones) ran out of inodes
17:22 _shaps_ looking through the directories which have lots of files, "/var/lib/misc/glusterfsd/[volname]/[connection]/.processed" has got thousands of xsync changelog files
17:22 _shaps_ I'm using glusterfs 3.6.0beta3, as I ran into a bug while using 3.5.2 which was preventing geo-rep from working properly
17:22 _shaps_ Any idea if it's ok to delete what's in the .processed directory?
17:25 hagarth _shaps_: if you don't get an answer here, please send out an email on gluster-devel mailing list
17:52 dlambrig joined #gluster-dev
17:56 pranithk joined #gluster-dev
17:58 raghug joined #gluster-dev
18:53 _shaps_ hagarth: Ok, thanks
18:54 dlambrig joined #gluster-dev
19:06 _Bryan_ joined #gluster-dev
20:28 jobewan joined #gluster-dev
20:54 dlambrig joined #gluster-dev
21:40 _Bryan_ joined #gluster-dev
21:44 hchiramm joined #gluster-dev
22:06 dlambrig joined #gluster-dev
22:47 dlambrig joined #gluster-dev
23:01 shyam joined #gluster-dev
23:23 dlambrig joined #gluster-dev
23:40 dlambrig left #gluster-dev
23:54 _Bryan_ joined #gluster-dev
