
IRC log for #gluster-dev, 2015-03-26


All times shown according to UTC.

Time Nick Message
01:27 bala joined #gluster-dev
02:40 lalatenduM joined #gluster-dev
02:47 ilbot3 joined #gluster-dev
02:47 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
03:15 spandit joined #gluster-dev
03:35 soumya_ joined #gluster-dev
03:42 itisravi joined #gluster-dev
03:48 lalatenduM joined #gluster-dev
04:00 nishanth joined #gluster-dev
04:07 rjoseph joined #gluster-dev
04:07 itisravi_ joined #gluster-dev
04:13 shubhendu joined #gluster-dev
04:16 kanagaraj joined #gluster-dev
04:23 kshlm joined #gluster-dev
04:39 nkhare joined #gluster-dev
04:41 anoopcs joined #gluster-dev
04:41 ndarshan joined #gluster-dev
04:41 jiffin joined #gluster-dev
04:48 rafi joined #gluster-dev
05:09 ashiq joined #gluster-dev
05:10 Manikandan joined #gluster-dev
05:12 vimal joined #gluster-dev
05:26 vimal joined #gluster-dev
05:36 kdhananjay joined #gluster-dev
05:45 aravindavk joined #gluster-dev
05:47 hgowtham joined #gluster-dev
05:50 ndarshan joined #gluster-dev
05:50 gem joined #gluster-dev
05:52 shubhendu joined #gluster-dev
05:52 lalatenduM joined #gluster-dev
06:06 raghu joined #gluster-dev
06:07 spandit joined #gluster-dev
06:14 overclk joined #gluster-dev
06:22 soumya_ joined #gluster-dev
06:24 soumya joined #gluster-dev
06:36 pranithk joined #gluster-dev
06:36 pranithk left #gluster-dev
06:37 shubhendu joined #gluster-dev
06:38 ndarshan joined #gluster-dev
07:17 suliba joined #gluster-dev
07:32 spandit joined #gluster-dev
07:36 pranithk joined #gluster-dev
08:51 pranithk joined #gluster-dev
08:53 spandit joined #gluster-dev
09:01 anoopcs joined #gluster-dev
09:13 soumya joined #gluster-dev
09:32 soumya joined #gluster-dev
09:44 ndarshan joined #gluster-dev
09:48 shubhendu joined #gluster-dev
09:59 ira joined #gluster-dev
10:02 ira joined #gluster-dev
10:09 bala joined #gluster-dev
10:18 ndarshan joined #gluster-dev
10:18 shubhendu joined #gluster-dev
10:48 kkeithley1 joined #gluster-dev
10:56 firemanxbr joined #gluster-dev
11:16 nkhare joined #gluster-dev
11:18 Manikandan_ joined #gluster-dev
11:21 sachin_ joined #gluster-dev
11:21 nixpanic_ joined #gluster-dev
11:21 nixpanic_ joined #gluster-dev
11:33 jiffin1 joined #gluster-dev
11:33 tg2 joined #gluster-dev
11:33 ndk joined #gluster-dev
11:34 jiffin1 joined #gluster-dev
11:34 pranithk joined #gluster-dev
11:34 ndevos kkeithley_: http://review.gluster.org/8092 has a .sh file instead of .t because glfs_fini is not really stable and we dont want to introduce more regression test failures - I thought I left that in a comment too?
11:35 kkeithley_ oh, okay. I didn't read all comments. I didn't give a score, it was just a question.
11:36 ndevos soumya: maybe include a note about it in the commit message? ^
11:36 soumya sure will do that..thanks
11:37 ndevos kkeithley_: oh, and also, compiling libgfapi tests is not really functional in the regression test framework, that needs some fixing too :-/
11:38 kkeithley_ okay. I'm still waiting for the unit test fairy to put a bunch of tests under my pillow. Or in the tree.
11:39 sachin_ joined #gluster-dev
11:41 kkeithley_ Looks like hchiramm just shot it down though. :-/
11:44 ira kkeithley_: When you find that fairy... ship her my way. ;)
11:45 kkeithley_ I think it's him actually. I'm not going to name names but his initials are l
11:45 kkeithley_ left #gluster-dev
11:46 kkeithley1 joined #gluster-dev
11:47 soumya_ joined #gluster-dev
12:01 anoopcs joined #gluster-dev
12:02 nkhare joined #gluster-dev
12:03 lalatenduM kkeithley, glusterfs-ganesha-3.7dev-0.803.gitf64666f.el6 have a dependency on pcs , how can centos users get pcs , epel?
12:04 lalatenduM kkeithley_, ^^
12:04 ndevos lalatenduM: isnt that in the standard CentOS repository?
12:05 ndevos lalatenduM: on RHEL it would be in the High-Availability channel, I think that is included in CentOS?
12:05 lalatenduM ndevos, hmm let me check
12:05 lalatenduM ndevos, centos only builds mainline of rhel
12:05 ndevos lalatenduM: on CentOS-7 it is in the base centos channel (and in updates)
12:07 ndevos lalatenduM: the current difficulty is with userspace-rcu, that is only available in epel :-/
12:09 lalatenduM ndevos, is it the pkg http://mirror.centos.org/centos-6/6.6/os/x86_64/Packages/pcs-0.9.123-9.el6.centos.x86_64.rpm
12:15 kkeithley_ looking
12:16 kkeithley_ ugh, no, there's not pcs in epel
12:16 kkeithley_ yes, that's it
12:18 rjoseph joined #gluster-dev
12:23 kshlm joined #gluster-dev
12:28 pranithk left #gluster-dev
12:29 lalatenduM kkeithley, is thr any spec file change between lusterfs-3.4.7beta2 and lusterfs-3.4.7beta4?
12:30 lalatenduM ahh kkeithley_ , the alter ego :)
12:30 kanagaraj joined #gluster-dev
12:30 JustinClift Not GNU any more... ;)
12:30 lalatenduM JustinClift, ??
12:30 JustinClift lusterfs
12:30 JustinClift Well, I just woke up.
12:30 * JustinClift gets coffee
12:30 JustinClift :)
12:32 lalatenduM JustinClift, ahh its a typo :)
12:32 lalatenduM GlusterFS it is :)
12:32 JustinClift :)
12:32 itisravi_ joined #gluster-dev
12:35 lalatenduM kkeithley_, ndevos , I dont see any mail about the glusterfs-ganesha pkg info in gluster MLs or packagers ML :(
12:36 ndevos lalatenduM: we still need to populate the packagers ML :-/
12:36 ndevos there are no subscribers on it yet, or at least not sufficient
12:36 lalatenduM ndevos, what abt gluster-devel :)
12:38 ndevos lalatenduM: what would be the interest of the developers of that sub-package?
12:38 ndevos or, for that sub-package?
12:40 lalatenduM ndevos, we definitely want to communicate this , it would at least help people who are doing upstream testing , guys like me who want to take these to CentOS Storage SIG
12:40 kkeithley_ lalatenduM: no, no spec file change between beta1-4
12:40 lalatenduM kkeithley, ok cool
12:40 lalatenduM ah kkeithley_
12:41 kkeithley_ I answer to all, when I see lt
12:41 hchiramm__ kkeithley, ndevos yeah, it was me did -1 to make sure we are on right path and we will be surely moving back to .t once the glfs_fini() is stable..
12:41 kkeithley_ What's the issue with glusterfs-ganesha?
12:41 ndevos surprise?
12:41 hchiramm__ 0_0 :(
12:42 rjoseph joined #gluster-dev
12:43 hchiramm__ Soumya has agreed to do that once we are ready  with glfs_fini :) so ideally +1 from me :)
12:45 ndevos hchiramm__: yes, she'd better watch that!
12:45 hchiramm__ yep ..  !!
12:46 hchiramm__ lalatenduM, its better to avoid that packaging traffic from devel and keep it in a seperate ML as we discussed earlier
12:46 kkeithley_ hopefully we all watch it. Those of us who care about it that is
12:46 hchiramm__ kkeithley, yeah..  I am sure we can miss it later :)
12:46 hchiramm__ typo-- can/will
12:46 glusterbot hchiramm__: typo's karma is now -3
12:47 hchiramm__ ah..
12:47 hchiramm__ later/''
12:51 hchiramm__ lalatenduM, Is ur coverity jenkins up and running against upstream ?
12:52 lalatenduM hchiramm, nope, JustinClift is supposed to give me a VM to run the job
12:52 lalatenduM the master Jenkins is heavily loaded as of now
12:53 kkeithley_ You can have an internal VM to run coverity, but I expect you want an external one
12:54 JustinClift lalatenduM: Is the next Storage SIG meeting this week?
12:54 hchiramm__ yeah, its better to have an external one
12:54 hchiramm__ yes, tomorrow .
12:54 hchiramm__ JustinClift, ^^
12:54 JustinClift "hchiramm, nope, JustinClift is supposed to give me a VM to run the job"
12:54 hchiramm__ Storage SIG Meeting 27-March-2015 15:30 UTC JustinClift
12:55 JustinClift hchiramm__: Cool.  Has the reminder email for the Storage SIG been sent out?
12:55 JustinClift Just so people don't miss it
12:55 hchiramm__ yes
12:55 JustinClift Cool
12:55 * JustinClift looked quickly but missed it then ;)
12:56 JustinClift lalatenduM: For that VM, I think it's probably better if we ensure the job can run on any of the Rackspace slave vms
12:56 shyam joined #gluster-dev
12:56 hchiramm__ JustinClift, its in centos-devel@centos.org
12:56 JustinClift hchiramm__: Tx
12:57 hchiramm__ JustinClift, Yw ..
12:57 JustinClift lalatenduM kkeithley_: Oh, I forgot to ask... is the long running build gluster cluster thing suitable for running the coverity scan on?
12:57 kkeithley_ long running build gluster cluster thing    ???
12:58 JustinClift the box/boxes you have online for long running gluster
12:58 * JustinClift isn't remembering the right phrase/name atm :(
12:58 kkeithley_ oh, the longevity cluster?
12:58 JustinClift Yeah, that's the thing :)
12:58 kkeithley_ not really. For one thing it's internal
12:59 JustinClift No worries.  It was just a thought. :)
12:59 hchiramm__ in a different thought we dont need an external one ..
12:59 kkeithley_ Besides lalatenduM's coverity runs, I run one internally on all the branches and push the results to download.gluster.org.  I also run other things like cppcheck, and clang compiles
12:59 hchiramm__ Isnt it ? because nobody want to analyse anything from Coverity run results as long as we publish it
12:59 hchiramm__ lalatenduM, kkeithley JustinClift ^^^
13:00 kkeithley_ hchiramm__: I don't follow.  Why does nobody want to analyze anything?
13:03 kkeithley_ and I want to get compiles with Intel's and AMD's compilers. Those spit out all kinds of fun errors and warnings
13:03 JustinClift Ahhhh
13:03 JustinClift Yeah, that's not a bad idea
13:03 JustinClift I wish we could clone Eric Blake and put his clone on our team :)
13:04 kkeithley_ As does clang, but clang tries too hard to be gcc compatible
13:04 JustinClift He's *awesome* for that kind of stuff. :)
13:04 hchiramm__ kkeithley, I think I made a typo again .. what I meant was , nobody want to analyse the jenkins server or where the job is running for coverity ..
13:05 hchiramm__ yes, they should be worried about the "CIDs"
13:05 lalatenduM JustinClift, any slave is fine with me , just give me one :)
13:05 kkeithley_ They don't want to run the analyze on the jenkins server?  Is that what you mean?
13:05 hchiramm__ yes.
13:05 JustinClift Yeah, don't run anything on the master
13:05 lalatenduM kkeithley, yes bcz the master is heavily loaded
13:05 kkeithley_ yes, understood.
13:05 hchiramm__ so its ok to keep an internal server for the same.
13:06 kkeithley_ But on a slave is okay
13:06 kkeithley_ ?
13:06 kkeithley_ On a slave is okay?
13:06 JustinClift Yep
13:06 hchiramm__ should be fine..
13:06 lalatenduM yup
13:06 JustinClift The Jenkins master has every cpu pegged (by Jenkins) 24/7 atm
13:06 JustinClift I don't want to break it ;)
13:06 kkeithley_ Why does it need to be a jenkins slave? Can't it just be a VM on rackspace?
13:06 lalatenduM folks gotta go
13:06 lalatenduM ttyl :)
13:06 JustinClift Sure
13:06 kkeithley_ ttfn
13:07 JustinClift It's just that the slaves are already there in rotation, so adding more stuff to them's pretty easy
13:07 JustinClift eg make sure the script/executables/whatever use the right paths and don't screw up other scripts
13:08 * kkeithley_ still doesn't get it. What does being in rotation get us?  Are we creating a jenkins job to run a covscan, e.g. on every commit?
13:08 JustinClift kkeithley_: You know... that might be an interesting idea
13:09 hchiramm__ it should be configured like  a weekly/monthly run ..
13:09 JustinClift Like a smoke test, but a cov scan
13:09 hchiramm__ not against every patch set
13:09 hchiramm__ the option is available in jenkins afaict
13:09 JustinClift hchiramm__: How long does it take to run?
13:09 JustinClift Or would it be more false positives than it's worth?
13:10 hchiramm__ JustinClift, "how long" , I am not having a clear picture, need to run and see..
13:10 JustinClift If it's only a couple of minutes... that' not hard
13:11 hchiramm__ "Reg# false positive than its worth" -> I dont think so.. its good to run in a period and getting the results announced frequently with devel
13:11 hchiramm__ so that atleast some CIDs will get fixed..
13:11 JustinClift Would doing it on every CR - triggered like the smoke tests - be useful?
13:12 JustinClift It would only be practical if it's not a long running task though
13:12 hchiramm__ Ideally it should not vote
13:12 JustinClift Yeah, it could be set either way
13:12 hchiramm__ JustinClift, no. its not required.
13:12 JustinClift k
13:13 hchiramm__ if there is an existing server without much load , we can think about configuring a different job for coverity
13:13 hchiramm__ becasuse we can set the time for run in a non peak time of a week
13:14 JustinClift hchiramm__: Luis Pabon's VM's in Rackspace are probably the thing then
13:14 hchiramm__ oh..ok .. any way we need to give the access to lmohanty
13:15 hchiramm__ he has the job ready for the coverity  run .
13:15 hchiramm__ if u can send a mail about the details of the specified VM to lala and me , we can give a try
13:15 kkeithley_ I think a covscan takes more than a few minutes
13:16 hchiramm__ then its not a worry at all !!
13:16 hchiramm__ because setting side , we just need to get the coverity binaries in the standard path
13:16 hchiramm__ rest is almost straightforward..
13:17 hchiramm__ we dont need to misuse a server/VM for it..
13:17 hchiramm__ http://review.gluster.org/#/c/10013/ not sure who will merge this. :)
13:18 kkeithley_ me?
13:19 * JustinClift hopes it passes the regression run
13:19 kkeithley_ hmmm.  my internal covscan on master branch has fallen down
13:19 hchiramm__ no objection from me :)
13:19 kkeithley_ once it passes regression and gets +2
13:20 kkeithley_ hmmm.  my internal covscan on master branch has fallen down.  release-3.6 branch is still working
13:21 hchiramm__ kkeithley++ thanks ..
13:21 glusterbot hchiramm__: kkeithley's karma is now 59
13:21 hchiramm__ JustinClift++ thanks
13:21 glusterbot hchiramm__: JustinClift's karma is now 37
13:25 dlambrig__ joined #gluster-dev
13:29 hchiramm__ anoopcs, ^^^
13:29 hchiramm__ :)
13:29 anoopcs hchiramm__++
13:29 glusterbot anoopcs: hchiramm__'s karma is now 1
13:30 kkeithley_ And what lovely prize do we have for our winner, Jeff?
13:31 kkeithley_ jdarcy: ^^^
13:32 anoopcs kkeithley_: What about change #9999?
13:33 kkeithley_ meh, jdarcy isn't here
13:36 kkeithley_ anoopcs: ask jdarcy, he's our MC
13:37 anoopcs kkeithley_: I would give +1 for CR #9999 :)
13:39 kasturi joined #gluster-dev
13:39 kkeithley_ too bad it failed regression ;-)
13:41 * JustinClift runs it again
13:42 anoopcs JustinClift++
13:42 glusterbot anoopcs: JustinClift's karma is now 38
13:44 hchiramm__ :)
13:44 * anoopcs hopes that it passes
13:45 JustinClift k, I really need to unfocus on the Jenkins stuff, and get back to focusing on Gerrit stuff
13:45 JustinClift If something blows up with Jenkins, please email me.  I'll be ignoring IRC again for the next X hours
13:45 kkeithley_ hmmm.  my internal covscan on master branch has fallen down. Because the box is missing libacl-devel
13:45 JustinClift Ahhh
13:46 JustinClift Does it have sqlite3-devel too?
13:46 JustinClift That was another recent dep addition
13:46 kkeithley_ was just thinking the same
13:46 ndevos anyone knows the reason to pass char* around instead of void* for memory allocations?
13:47 * ndevos is looking at mem-pool.c and __gf_realloc for example
13:47 kkeithley_ not really.
13:48 kkeithley_ and userspace-rcu-devel
13:50 * ndevos goes for a lunch break
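[Aside on ndevos's char* vs void* question above: a minimal, hypothetical C sketch of the two allocator signature conventions. This is not the actual mem-pool.c code; the real __gf_realloc takes additional accounting arguments. The usual argument for void* is that it converts implicitly to and from any object pointer type, matching realloc(3), so callers need no casts.]

    #include <stdlib.h>

    /* Hypothetical simplified allocators illustrating the two conventions;
     * names and signatures here are illustrative only. */

    /* void * convention: the result converts implicitly to any object
     * pointer type, so callers need no casts -- same as realloc(3). */
    static void *
    demo_realloc_void(void *ptr, size_t size)
    {
            return realloc(ptr, size);
    }

    /* char * convention: byte-wise arithmetic on the pointer is convenient
     * inside the allocator, but every caller holding a non-char pointer
     * must cast, which is the usual argument against it. */
    static char *
    demo_realloc_char(char *ptr, size_t size)
    {
            return realloc(ptr, size);
    }

    int
    main(void)
    {
            int *a = demo_realloc_void(NULL, 4 * sizeof(*a));        /* no cast */
            int *b = (int *)demo_realloc_char(NULL, 4 * sizeof(*b)); /* cast needed */
            free(a);
            free(b);
            return 0;
    }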
14:00 shyam joined #gluster-dev
14:06 shyam joined #gluster-dev
14:10 ndevos any ununtu users here? interested in fixing bug 1201484 ?
14:10 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1201484 high, high, ---, bugs, NEW , glusterfs-3.6.2 fails to build on Ubuntu Precise: 'RDMA_OPTION_ID_REUSEADDR' undeclared
14:10 ndevos *ubuntu even
14:11 anrao joined #gluster-dev
14:50 lpabon joined #gluster-dev
15:00 nishanth joined #gluster-dev
15:16 deZillium joined #gluster-dev
15:20 shyam joined #gluster-dev
15:21 _Bryan_ joined #gluster-dev
15:38 deZillium joined #gluster-dev
16:08 soumya_ joined #gluster-dev
16:27 kkeithley_ wrt to discussion yesterday in the Community Meeting about not supporting 3.7 on el5, maybe we should say the same for Ubuntu Precise LTS ?
16:27 ndevos probably
16:28 ndevos oh, wait, did I not get an action item for that?
16:29 * ndevos looks for the logs
16:29 kkeithley_ which?
16:30 kkeithley_ ACTION: ndevos to poll the community about continued el5 support (hagarth, 12:29:22)
16:31 ndevos yeah, that onw
16:31 ndevos -w +e
16:31 kkeithley_ interesting, precise has python2.7 though
16:32 kkeithley_ but crufty old rdma support
16:32 ndevos you dont happen to have a glusterfs-3.2 around, do you?
16:33 kkeithley_ packages?
16:33 ndevos installation?
16:33 kkeithley_ oh, installed somewhere?
16:34 ndevos yeah, I need to know what happens when you do: dd if=/dev/zero bs=64 count=1 | nc $HOSTNAME 24007
16:35 ndevos dont do that on your always-needs-to-be-up-and-running production system though
16:36 ndevos well, with 3.2 it might not be an issue, that would be nice to hear
16:37 kkeithley_ I've got 3.3 installed on an f17vm
16:41 kkeithley_ @later tell lalatenduM you're raspberry pi arrived
16:41 glusterbot kkeithley_: The operation succeeded.
16:43 kanagaraj joined #gluster-dev
16:45 kkeithley_ bad fingers
16:45 kkeithley_ @later tell lalatenduM your raspberry pi arrived
16:45 glusterbot kkeithley_: The operation succeeded.
16:49 shyam joined #gluster-dev
17:17 soumya_ joined #gluster-dev
17:19 lalatenduM joined #gluster-dev
17:33 hchiramm__ lalatenduM,
17:33 hchiramm__ <kkeithley_> @later tell lalatenduM your raspberry pi arrived
17:33 hchiramm__ <glusterbot> kkeithley_: The operation succeeded.
17:33 hchiramm__ :)
17:33 hchiramm__ lalatenduM, did u get 2 ? :)
17:33 lalatenduM hchiramm, yes :)
17:34 lalatenduM kkeithley_, kkeithley awesome news :)
17:34 lalatenduM hchiramm, I mean rasberry pi 2 (2nd version)
17:35 lalatenduM hchiramm, pm
17:44 hchiramm__ lalatenduM, sure
17:47 kkeithley_ I didn't open it. Do you want me to?
17:49 jobewan joined #gluster-dev
17:52 lalatenduM kkeithley_, I will not mind if you open it :)
17:55 kkeithley_ oh, I misread the above, thought there was a question about whether you bought one or two.
17:55 kkeithley_ I don't need to open it.
17:58 hchiramm__ kkeithley, actually I meant the same : )
17:58 hchiramm__ did he buy one or two :)
17:58 hchiramm__ if its 2,  I can pay for one :)
18:00 lpabon joined #gluster-dev
18:00 lalatenduM hchiramm, ohh, I bought just one. wanted to buy two, but realized it is available in India too
18:00 lalatenduM hchiramm__, ah another alter ego:)
18:01 hchiramm__ lalatenduM, u like the one available in India ? :)
18:02 lalatenduM hchiramm__, flipkart has the latest :)
18:02 lalatenduM and the price is similar
18:03 hchiramm__ so the question remains :)
18:04 lalatenduM hchiramm__, yes, I like it
18:04 lalatenduM :)
18:04 hchiramm__ which one ? :)
18:04 shyam joined #gluster-dev
18:06 kkeithley_ You don't want Model A.
18:08 kkeithley_ flipkart's price is pretty good.
18:08 lalatenduM kkeithley, yes
18:09 hchiramm__ USB charger is missing in FKart ?
18:09 lalatenduM kkeithley, effects of globalization :)
18:09 lalatenduM hchiramm__, charger should not be a problem , we need a usb charger with 2.5A current
18:09 lalatenduM rating
18:09 kkeithley_ but what would a USB charger cost, Rs 500?
18:10 lalatenduM yeah
18:10 kkeithley_ If that much?
18:10 lalatenduM may be 600rs
18:15 lalatenduM kkeithley, I was wrong , may be we just need 1A current rating usb charger for raspberry pi ?, refer: http://www.amazon.com/CanaKit-Raspberry-Supply-Adapter-Charger/dp/B00GF9T3I0
18:15 lalatenduM hchiramm__, u too
18:15 hchiramm__ let me do some more research and get one
18:15 kkeithley_ 500mw should be good actually. That's what should be included with yours.
18:16 lalatenduM if thats correct, we should be able to use most of the cell usb chargers
18:16 kkeithley_ And that's what I use with mine.  The one that came with mine (Model B+)
18:16 kkeithley_ yup
18:16 lalatenduM cool
18:17 lalatenduM kkeithley_, btw r u carrying ur lego bricks this time to india
18:17 lalatenduM I think u should
18:18 anrao joined #gluster-dev
18:18 hchiramm__ :)
18:18 kkeithley_ I'll bring them
18:22 kkeithley_ I better start packing now so I don't forget anything
18:23 ndevos kkeithley_: ah, you have an active reviewer for libntirpc, I'll stay away from it for now, but let me know if the process stalls and I can pick it up from there
18:24 kkeithley_ right. (I did mention that earlier.)
18:25 ndevos yes, you mentioned it, but I thought that was the bits the 1st guy responded, I did not notice the bz was assigned already
18:25 kkeithley_ yeah, he took it and changed it to assigned after the first or second set of comments
18:26 ndevos fine with me
18:27 kkeithley_ less work for you ;-)
18:28 ndevos indeed!
18:42 lalatenduM kkeithley_, ndevos hchiramm__ do u guys knw abt readthedocs project?
18:42 lalatenduM https://readthedocs.org/
18:49 ndevos lalatenduM: I've read some docs there, but thats ot
18:49 ndevos *it
18:50 lalatenduM ndevos, I want to give it a shot for glusterfs, 1st attempt http://glusterfs-docs.readthedocs.org/en/latest/
18:50 lalatenduM may be someday it would be readable
18:51 ndevos lalatenduM: sure, why not, I guess it should be possible to integrate that in the gluster.org site?
18:51 ndevos lalatenduM: wasnt Debloper going to show something about the new site-design today?
18:51 * ndevos missed it
18:52 lalatenduM ndevos, yeah thats easy, readthedocs already provides hooks for this
18:52 ndevos lalatenduM: sounds good to me :)
18:52 lalatenduM ndevos, was not aware, but he showed me the design on his laptop :)
18:53 ndevos hey, this does not look too bad: http://glusterfs-docs.readthedocs.org/en/latest/features/brick-failure-detection/
18:53 ndevos lalatenduM++ :D
18:53 glusterbot ndevos: lalatenduM's karma is now 77
18:53 lalatenduM :)
18:53 lalatenduM pages got rendered correctly , just need to work on linking and arranging them
18:55 ndevos it surely looks like a good start to me
18:55 lalatenduM I was talking to Debloper today abt this and he introduced me to readthedocs
18:55 lalatenduM Debloper++
18:55 glusterbot lalatenduM: Debloper's karma is now 4
18:55 * ndevos wonders how many people actually want to use 3.7 on old distributions
18:55 ndevos lalatenduM: sounds like you guys have a plan!
18:56 lalatenduM ndevos, u will be surprised to know
18:56 lalatenduM people still use RHEL 4 :)
18:56 ndevos lalatenduM: let them speak up!
18:56 ndevos lalatenduM: oh, yes, I know, but would they need the client and server on rhel4? I doubt that
18:56 lalatenduM some even rhel3 , lol
18:56 lalatenduM agree
18:57 lalatenduM for the same reason , we choose rhel 6 and rhel 7 for storage sig
18:57 ndevos yeah, 2 years ago I still got the occasional support request for rhel3... happy to have changed roles :D
18:57 lalatenduM haha
18:57 hchiramm__ lalatenduM, sorry for the delay to suport
18:57 hchiramm__ suport/respond
18:57 hchiramm__ yes, I know abt readthedcos
18:58 hchiramm__ more or less, we are planning to deploy something in house rather than pushing to third party site..
18:59 lalatenduM hchiramm__, understand
18:59 lalatenduM hchiramm__, I think ur plan is right from a longer term point of view
18:59 lalatenduM I just want to learn readthedocs
19:00 lalatenduM and using glusterfs to learn it
19:00 hchiramm__ whatever we are progressing wrt documentation will land on something like above..
19:00 * ndevos steps out for a bit, might be back later
19:00 hchiramm__ lalatenduM, yeah, I see ur requirement :)
19:01 lalatenduM hchiramm__, :)
19:06 shyam joined #gluster-dev
21:47 shaunm joined #gluster-dev
22:15 badone joined #gluster-dev
