
IRC log for #gluster-dev, 2016-08-10


All times shown according to UTC.

Time Nick Message
01:47 ilbot3 joined #gluster-dev
01:47 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:01 nishanth joined #gluster-dev
02:02 lpabon joined #gluster-dev
03:09 nigelb post-factum: hey, I just recently reduced that to 300 mins, it used to be 500 mins ;)
03:10 nigelb post-factum: I started that email thread on gluster-devel about netbsd failures.
03:19 magrawal joined #gluster-dev
03:23 hagarth joined #gluster-dev
03:31 nbalacha joined #gluster-dev
04:01 itisravi joined #gluster-dev
04:06 rastar joined #gluster-dev
04:13 lpabon joined #gluster-dev
04:15 rafi joined #gluster-dev
04:19 shubhendu joined #gluster-dev
04:29 aspandey joined #gluster-dev
04:35 poornimag joined #gluster-dev
04:35 shubhendu joined #gluster-dev
04:45 karthik_ joined #gluster-dev
04:51 sanoj joined #gluster-dev
04:52 asengupt joined #gluster-dev
05:07 shubhendu joined #gluster-dev
05:11 shubhendu joined #gluster-dev
05:11 shubhendu joined #gluster-dev
05:12 shubhendu joined #gluster-dev
05:15 rastar_ joined #gluster-dev
05:15 pkalever1 joined #gluster-dev
05:19 jiffin joined #gluster-dev
05:26 Manikandan joined #gluster-dev
05:31 asengupt joined #gluster-dev
05:31 aravindavk joined #gluster-dev
05:32 skoduri joined #gluster-dev
05:36 mchangir joined #gluster-dev
05:41 ashiq joined #gluster-dev
05:44 rafi joined #gluster-dev
05:46 poornimag joined #gluster-dev
05:49 kotreshhr joined #gluster-dev
05:51 Bhaskarakiran joined #gluster-dev
05:51 atinm joined #gluster-dev
05:55 _ndevos joined #gluster-dev
05:55 _ndevos joined #gluster-dev
05:56 prasanth joined #gluster-dev
06:02 nishanth joined #gluster-dev
06:03 spalai joined #gluster-dev
06:03 aspandey joined #gluster-dev
06:03 ramky joined #gluster-dev
06:07 hgowtham joined #gluster-dev
06:07 ankitraj joined #gluster-dev
06:13 kdhananjay joined #gluster-dev
06:13 msvbhat joined #gluster-dev
06:14 karthik_ joined #gluster-dev
06:16 prasanth joined #gluster-dev
06:18 ashiq joined #gluster-dev
06:25 atalur joined #gluster-dev
06:29 ashiq joined #gluster-dev
06:34 ppai joined #gluster-dev
06:36 shubhendu_ joined #gluster-dev
06:36 Muthu_ joined #gluster-dev
06:49 devyani7_ joined #gluster-dev
06:49 devyani7_ joined #gluster-dev
06:50 gem joined #gluster-dev
06:57 karthik__ joined #gluster-dev
07:16 ppai joined #gluster-dev
07:21 rastar joined #gluster-dev
07:26 spalai joined #gluster-dev
07:38 poornimag joined #gluster-dev
07:38 nigelb I'm deploying the new smoke jobs based on JJB configuration.
07:38 nigelb So I may create some extraneous noise in reviews for a while.
07:49 msvbhat joined #gluster-dev
07:50 sanoj joined #gluster-dev
08:05 gem joined #gluster-dev
08:06 karthik_ joined #gluster-dev
08:23 pur joined #gluster-dev
08:40 atalur joined #gluster-dev
08:41 itisravi joined #gluster-dev
08:48 itisravi joined #gluster-dev
08:51 itisravi joined #gluster-dev
08:51 mchangir joined #gluster-dev
08:52 asengupt joined #gluster-dev
08:53 ira joined #gluster-dev
08:57 Muthu_ joined #gluster-dev
09:00 poornimag joined #gluster-dev
09:02 ashiq Manikandan++, thanks
09:02 glusterbot ashiq: Manikandan's karma is now 61
09:19 rafi joined #gluster-dev
09:19 jiffin1 joined #gluster-dev
09:22 rafi1 joined #gluster-dev
09:32 rafi1 joined #gluster-dev
09:44 msvbhat joined #gluster-dev
09:45 nigelb Oh boy
09:45 nigelb we had a bug in smoke.sh that went undiscovered until now.
09:51 misc a big bug ?
10:02 nigelb Not a very big one.
10:03 nigelb If a test fails, the log creation will also fail.
10:03 msvbhat joined #gluster-dev
10:07 post-factum ndevos: http://review.gluster.org/#/c/15119/ feel free to merge it
10:07 post-factum it seems today netbsd progresses okay
10:11 post-factum ndevos: also, http://review.gluster.org/#/c/13488/
10:14 jiffin1 joined #gluster-dev
10:22 ndevos post-factum: I'm doing a 3.8 release today, other branches will get attention when I'm done with it :)
10:22 post-factum ndevos: oh, okay. good luck with that :)
10:23 ndevos it's a shame that there are so many patches needing review/testing http://review.gluster.org/#/q/project:glusterfs+branch:release-3.8+status:open
10:23 ndevos but well, I guess that'll make the next 3.8 release worthwhile too :D
10:24 ndevos rafi: fix the topic of http://review.gluster.org/15123 , and if you tested it, mark it Verified=+1 too
10:25 ndevos kkeithley: fwiw, el6 builds of ganesha in the storage sig (3.8) can be downloaded from https://cbs.centos.org/koji/taskinfo?taskID=104574
10:26 kkeithley ndevos: okay. thanks
10:26 ndevos kkeithley: do you test el6 or el7?
10:27 kkeithley the common-HA failover? I'm doing it on rhel7 vms
10:29 kkeithley at least to start with
10:31 ndevos ok, you found the links to the scratch build for el7 builds in your history?
10:31 prasanth joined #gluster-dev
10:35 kkeithley you lost me
10:37 kkeithley I'm using gluster-3.8.1 from the Storage SIG repo, and nfs-ganesha-2.3.2 from the Storage SIG test repo.
10:37 ndevos but nfs-ganesha from the test repo crashes on you
10:38 ndevos there is a scratch build that should fix it
10:38 kkeithley because I hadn't disabled the invalidation.
10:38 kkeithley pfft.
10:38 kkeithley cache invalidation
10:39 ndevos https://cbs.centos.org/koji/taskinfo?taskID=104551 has the __gf_free() fix
10:39 kkeithley right.
10:39 kkeithley all roads lead to Rome
10:40 ndevos some take the tourist route
10:40 kkeithley ndevos++ for building nfs-ganesha with the __gf_free fix
10:40 glusterbot kkeithley: ndevos's karma is now 300
10:41 kkeithley I'll send you the scripts in a bit. You can look at them after you finish the release
10:41 ndevos I'll fixup the patches that need to be included in 2.3.3 and get them to malahal, not sure if __gf_free should be in the nfs-ganesha tree
10:41 ndevos opinions?
10:42 kkeithley I thought you and/or jiffin was working on fixing the upcall so that that isn't needed?
10:43 ndevos yes, but that is not ready yet, and will need a bigger change in ganesha
10:43 kkeithley indeed.
10:43 kkeithley I'm not in love with short term hacks, but....
10:43 ndevos jiffin: btw, I forgot to mention in our call, that you could start to modify ganesha to use the new glfs_upcall_*() API :)
10:44 ndevos I can add it to the storage sig package as a patch, that is what I currently plan to do
10:45 kkeithley yeah, that's a reasonable strategy
10:46 asengupt joined #gluster-dev
10:46 kkeithley I take it you found the patches I used for ganesha on el6 in the el6 src rpm? Or did you just roll your own?
10:47 ndevos yeah, I found those
10:47 ndevos there is something else needed too, because of the newer/older libntirpc with vsock :-/
10:48 mchangir joined #gluster-dev
10:48 ndevos also, with my current travel planning for September, I seem to sleep < 10 nights in .nl
10:48 kkeithley lol
10:49 ndevos not sure if I want to continue that in October with bakeathon...
10:50 kkeithley up to you
10:53 msvbhat joined #gluster-dev
10:53 * ndevos goes for lunch, will be back later
10:57 * misc didn't slept in .fr since the start of the month
11:03 pur joined #gluster-dev
11:13 prasanth joined #gluster-dev
11:17 kshlm joined #gluster-dev
11:17 jiffin ndevos: sure
11:19 jiffin I will start digging in, there is no point in making ganesha changes until those merge in gluster (I am not sure whether we can target that for ganesha 2.3.3)
11:20 asengupt joined #gluster-dev
11:20 kkeithley probably not 2.3.3. Definitely for 2.4 though
11:20 kkeithley and 2.3.4 if there is one
11:22 poornimag joined #gluster-dev
11:22 jiffin kkeithley: yes :)
11:23 kkeithley jiffin: what's the cache invalidation option?  If you don't know off the top of your head I'll go look it up
11:23 jiffin cache-invalidation
11:23 jiffin on/off
11:23 jiffin gluster v set <volname> cache-invalidation <on/off>
11:23 jiffin :P
11:24 jiffin it was easy to guess
11:26 kkeithley jiffin++
11:26 glusterbot kkeithley: jiffin's karma is now 48
11:26 jiffin kkeithley: :)
11:30 mchangir joined #gluster-dev
11:30 kshlm Weekly community meeting starts in 30 minutes in #gluster-meeting
11:50 pkalever1 left #gluster-dev
11:51 jiffin joined #gluster-dev
11:52 asengupt joined #gluster-dev
12:00 karthik_ joined #gluster-dev
12:06 kkeithley rafi++
12:06 glusterbot kkeithley: rafi's karma is now 51
12:06 rafi kkeithley: :)
12:13 shyam joined #gluster-dev
12:29 misc obnox: I was wondering if storhaug was a place in World of Warcraft, but that's in Norway, so I guess that's the same
12:31 obnox misc: according to the author, jose, storhaug is a norwegian term . means pile . iirc
12:33 skoduri post-factum++
12:33 glusterbot skoduri: post-factum's karma is now 26
12:33 obnox misc: but it also seems to be a place in norway, indeed: https://en.wikipedia.org/wiki/Storhaug
12:33 post-factum obnox: that is what i googled first for storhaug word and had some surprise
12:40 justinclift joined #gluster-dev
12:45 nigelb Yeah, it was very confusing googling that word :)
12:46 Manikandan joined #gluster-dev
12:50 Manikandan joined #gluster-dev
12:50 nigelb kshlm: You haven't forgotten infra today, have you? :)
12:51 karthik_ joined #gluster-dev
12:51 kshlm nigelb, Nope. Didn't see you guys around. And the ganesha/samba/storhaug discussion had already hijacked some other topics. So decided to get them over first.
12:51 aravindavk joined #gluster-dev
12:51 nigelb kshlm: ah, no worries. I've been watching. To be fair, I got back just as the meeting started.
12:53 nigelb misc: https://github.com/gluster/glusterfs-patch-acceptance-tests/pull/51
12:53 justinclift misc: Just in case you haven't seen it: http://www.gluster.org/pipermail/gluster-infra/2016-August/002593.html
12:53 nigelb I don't know how very similar code works for regression tests.
12:53 justinclift I'm here for yelling at for a bit, if needed. ;)
12:53 nigelb justinclift: I did! I'm unsure if that breaks anything wrt github authentication against gerrit.
12:53 misc justinclift: I did see
12:53 nigelb That's the only thing I'm worried about.
12:54 misc I didn't know you could do that for github, because indeed, it makes me nervous that stuff gets so much access to my account
12:54 misc (one more reason I do not like github)
12:54 justinclift Yeah.  The GitHub permissions model is ... pretty fucked. :)
12:54 justinclift nigelb: I suspect that since Gerrit has already received permissions, it shouldn't be a problem.
12:54 misc well, i rather say that this is not done for the scale on which we use it
12:55 justinclift misc: I'm not sure what scale it would work at
12:55 misc justinclift: simple project :)
12:56 misc like, when you are doing your stuff alone
12:56 justinclift In another popular project I'm on, we have people who can't do stuff with the project specifically because they'd have to ask their employer to disable this setting (org wide) just to be able to do stuff with the project after hours.
12:56 justinclift It's not a model which scales :(
12:56 justinclift s/model/permissions model/
12:57 misc yeah
12:57 justinclift I've emailed GitHub staff with a suggested fix (eg tickboxes that the user can choose), but I don't expect they'll give a crap :/
12:57 justinclift GitHub doesn't have a reputation for listening any more
12:58 justinclift Anyway. If something breaks, grab me. ;)
12:58 misc well, they do listen
12:58 misc not just you :p
12:58 justinclift ;)
13:03 misc the topic "open floor" remind me of me playing Quake long ago
13:03 misc and opening the floor in a lava pit
13:04 ira misc: Trap door?
13:08 misc ira: yeah :)
13:09 shaunm joined #gluster-dev
13:17 nigelb misc: what's the easiest way to trigger an update of /opt/qa everywhere?
13:19 misc nigelb: run it with ansible from your workstation
13:20 nigelb aha. Thank you.
13:20 nigelb I need to get the host list
13:20 misc in fact no, the easiest is "wait"
13:20 nigelb is that on the salt master?
13:20 misc nigelb: it is in github
13:20 misc https://github.com/gluster/gluster.org_ansible_configuration/blob/master/hosts
13:20 nigelb aha
13:20 nigelb brilliant.
13:21 misc just make sure to use -e 'ansible_connection=ssh' when run from your workstation
13:22 misc nigelb: let me write the complete command line in doc
13:23 nigelb thanks!
13:24 julim joined #gluster-dev
13:28 mchangir joined #gluster-dev
13:30 misc nigelb: https://github.com/gluster/infra-docs/pull/15 tell me if that's clear or not
13:36 jiffin1 joined #gluster-dev
13:40 asengupt joined #gluster-dev
13:44 prasanth joined #gluster-dev
13:45 ndevos really, we dont use pthread_rwlock_* in our glusterfs sources?
13:47 kkeithley why is that so surprising?
13:48 ndevos I was hoping to use it for http://review.gluster.org/13649 , so it'll take more work to try it out
13:51 aravindavk joined #gluster-dev
13:53 justinclift Hmmm, searching the Gluster docs on Read the Docs is failing.
13:53 justinclift Does that happen often?
13:53 misc IIR? there is a bug opened on it
13:54 justinclift "IIR"?
13:55 misc IIRC
13:55 justinclift :)
14:01 gus_ joined #gluster-dev
14:01 gus_ how can I run a statedump in the client?
14:02 jiffin gus_: did u mean gluster native client?
14:03 justinclift misc: https://github.com/Kickball/awesome-selfhosted/pull/647
14:03 justinclift Just because I was there ;)
14:03 jiffin gus_: https://github.com/gluster/glusterfs/b​lob/master/doc/debugging/statedump.md
14:03 shyam joined #gluster-dev
14:04 hagarth joined #gluster-dev
14:06 gus_ the commands for statedump are for the server, but my mem leak is in the client
14:09 misc justinclift: thanks :)
14:09 justinclift ;)
14:12 mchangir joined #gluster-dev
14:26 kkeithley gus_: you can just `kill -USR1 $pid-of-client-glusterfs-process`
14:27 kkeithley IIRC
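kkeithley's suggestion can be sketched like this; the pgrep pattern and the dump directory are assumptions to adapt to your setup, while the second half demonstrates the signal mechanism itself with the current shell:

```shell
# On the client machine (hypothetical process match -- adjust for your mount):
#   pid=$(pgrep -f 'glusterfs.*fuse' | head -n1)
#   kill -USR1 "$pid"
#   ls /var/run/gluster/   # statedump files are typically written here

# The SIGUSR1 mechanism, demonstrated with a stand-in (this shell):
trap 'echo statedump-triggered' USR1
kill -USR1 $$
```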
14:37 nbalacha joined #gluster-dev
14:37 nigelb justinclift: It's the problem with using RTD and Mkdocs.
14:37 nigelb RTD works great with Sphinx. With everything else, it's a bit hacky.
14:37 misc mhhh
14:37 misc what about getting a docs.gluster.org vhost
14:37 misc and use that everywhere
14:38 misc so if we move away from rtd, we keep url ?
14:39 nigelb yeah, but then if we break urls, we'll have a *long* list of redirects :)
14:40 misc true
14:40 ndevos is this becoming a yearly thing? move the docs to a different location?
14:40 misc but at least, people would find the docs and search again
14:41 misc ndevos: you can't be prepared enough :)
14:41 nigelb ndevos: There's been talk of an in-tree developer documentation.
14:41 misc but more in the case we want to switch to self hosted mkdocs, or if there is a issues on rtd, since the founder is trying to get the site working without him or something
14:41 ndevos misc: indeed, and it may offer us a way to have our own design/style for the docs too :D
14:41 nigelb And infra docs is a separate repo
14:41 misc (at least in my mind)
14:42 nigelb so, at some point, we'd want them all in the same place.
14:42 nigelb combined from 3 sources.
14:42 ndevos and gedeploy.readthedocs.io (I think), and other projects :)
14:42 ndevos uh, gdeploy
14:42 nigelb yeah, so misc is talking about something like docs.openstack.org
14:43 nigelb all the documentation for sub projects is collected in one place.
14:43 nigelb sort of an index page to all of it.
14:43 skoduri joined #gluster-dev
14:43 spalai joined #gluster-dev
14:43 nigelb (or at least that's my understanding, correct me I'm wrong)
14:43 misc you went further than my idea :)
14:43 ndevos hmm, search for gdeploy on RTD works
14:43 spalai left #gluster-dev
14:43 misc me, it was just "mhh, so rtd do not work with mkdocs, so why not self host"
14:44 justinclift nigelb: :(
14:44 nigelb Probably because it's using sphinx
14:45 justinclift Any chance of gettig it fixed today?
14:45 kkeithley I thought we had all the docs for subprojects collected in one place when they were on www.gluster.org/docs
14:45 justinclift Users are unlikely to care why it's broken, just that it is ;)
14:45 kkeithley then we moved to rtd
14:46 ndevos whatever is done, just make sure the doc maintainers are ok with it
14:46 nigelb The doc maintainers have an issue about it and they're working on it, I believe.
14:47 nigelb justinclift: Just tested github auth. Seems to work without issues :)
14:47 kkeithley Pacemaker has the same docs for both upstream and downstream: clusterlabs.org and redhat.
14:47 * kkeithley wonders why we can't do something similar
14:48 misc short answered: "it is complicated"
14:48 misc long answer is the same
14:48 nigelb long answer: downstream uses asiidocs
14:48 nigelb *asciidocs
14:49 kkeithley so we continue to muddle along with crappy, incomplete upstream docs.
14:50 kkeithley Users biggest complaint when I go to conferences is the documentation.
14:50 misc yeah
14:50 nigelb I've been driving some conversations around this. Maybe there'll be something to report by summit.
14:50 nigelb As a person not-a-gluster-developer-per-se, it's complicated for me to drive it.
14:51 ndevos oh, definitely a topic we should discuss during the summit, it is 2+ year waiting on progress already
14:51 nigelb I'm kind of thinking of submitting a talk I did at fudcon in Pune for the summit.
14:51 nigelb Short sort of talk about what we did to clean up our docs a few years ago in CKAN.
14:52 misc isn't it too late to submit talk for the summit ?
14:53 ndevos ah
14:53 nigelb no, the cfp just opened.
14:53 ndevos the 3.8.2 release notes mention 3.8.1 in the title :-(
14:54 misc mhh, we speak of which summit ?
14:54 ndevos if we only knew who is going, and what they proposed in their request for an invite...
14:55 nigelb gluster summit.
14:55 misc ok
14:55 misc (cause "openstack summit", and "RH summit"...)
14:55 misc people should use fully qualified name for events !
14:56 nigelb indeed, sorry.
14:56 atinm joined #gluster-dev
15:02 lpabon joined #gluster-dev
15:07 kkeithley yuck. New glibc in fedora-26 is spitting out new warnings about makedev, major, minor. Maybe other things.
15:10 kkeithley no, looks like it's limited to that. But "that" is in iatt.h, and every file in our source includes iatt.h. sigh
15:11 wushudoin joined #gluster-dev
15:13 ndevos kkeithley: did you send me a gpg encrypted email on purpose?
15:14 kkeithley I sent the mail on purpose. I didn't intend to encrypt it
15:14 ndevos :) well, at least I got to test my setup again
15:16 kkeithley I don't know why t'bird decided to encrypt it. (and I didn't notice it)
15:17 ppai nigelb, misc tried out sphinx and it seems to work well. A sample: http://glusterdocs-beta.readthedocs.io/
15:18 ndevos ppai++ very cool!
15:18 glusterbot ndevos: ppai's karma is now 16
15:19 ppai just a preview, the admin guide among other stuff isn't in there yet. It's a ton of manual work
15:20 ppai pandoc can do a conversion from .md to .rst but the result needs manual inspection
15:20 lpabon joined #gluster-dev
15:22 ppai If enough people including each component maintainers volunteer, the move from mkdocs to sphinx can be done.
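The bulk conversion ppai describes could be scripted roughly like this (the `docs/` layout is hypothetical and pandoc must be installed):

```shell
# Convert every Markdown page to reStructuredText for Sphinx,
# keeping the directory layout intact:
find docs -name '*.md' | while read -r f; do
    pandoc -f markdown -t rst "$f" -o "${f%.md}.rst"
done
```

As ppai notes, the generated .rst still needs manual inspection before it builds cleanly.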
15:35 sankarshan joined #gluster-dev
15:36 hagarth joined #gluster-dev
15:45 lpabon_ joined #gluster-dev
15:49 nigelb oh neat.
15:51 nbalacha joined #gluster-dev
16:00 pranithk1 joined #gluster-dev
16:07 jiffin joined #gluster-dev
16:10 baojg joined #gluster-dev
16:22 penguinRaider joined #gluster-dev
16:25 rastar joined #gluster-dev
16:32 gem joined #gluster-dev
16:33 jiffin joined #gluster-dev
16:37 msvbhat joined #gluster-dev
16:41 shyam joined #gluster-dev
16:45 justinclift nigelb: Good that it works.  I'll drop off here then.  If something does explode, email me
16:45 justinclift :)
16:45 justinclift left #gluster-dev
17:08 lpabon_ joined #gluster-dev
17:14 penguinRaider joined #gluster-dev
17:17 jiffin joined #gluster-dev
17:20 rafi joined #gluster-dev
17:34 msvbhat joined #gluster-dev
17:36 spalai joined #gluster-dev
17:37 spalai left #gluster-dev
17:39 ashiq joined #gluster-dev
17:43 shyam joined #gluster-dev
18:09 mchangir joined #gluster-dev
18:21 jiffin joined #gluster-dev
18:33 dlambrig left #gluster-dev
18:38 lpabon_ joined #gluster-dev
19:02 rastar joined #gluster-dev
19:16 post-factum anyone with op permissions on #gluster here?
19:20 amye hmmmm no
19:20 amye That's weird, hagarth should have, at least
19:20 post-factum amye: I wonder why u still have no op here :)
19:20 hagarth ban or kick?
19:20 amye johnmark didn't pass the torch? :D No idea.
19:21 post-factum hagarth: hmm. both :)
19:22 post-factum amye: I guess community lead should have op on community channels
19:24 hagarth amye: are you on #gluster atm?
19:24 amye Hmm no
19:25 amye Fixed
19:25 amye thanks hagarth
19:55 msvbhat joined #gluster-dev
20:34 shyam joined #gluster-dev
21:40 penguinRaider joined #gluster-dev
22:47 hagarth joined #gluster-dev
