
IRC log for #gluster-dev, 2016-07-21


All times shown according to UTC.

Time Nick Message
01:13 hchiramm_ joined #gluster-dev
02:14 hchiramm_ joined #gluster-dev
03:14 hchiramm_ joined #gluster-dev
03:15 ashiq joined #gluster-dev
03:33 magrawal joined #gluster-dev
03:41 lpabon joined #gluster-dev
03:44 julim joined #gluster-dev
03:49 atinm joined #gluster-dev
04:03 mchangir joined #gluster-dev
04:04 itisravi joined #gluster-dev
04:07 ppai joined #gluster-dev
04:12 spalai joined #gluster-dev
04:14 hchiramm_ joined #gluster-dev
04:15 rafi joined #gluster-dev
04:22 poornimag joined #gluster-dev
04:24 nbalacha joined #gluster-dev
04:24 kshlm joined #gluster-dev
04:30 ndarshan joined #gluster-dev
04:31 atinm joined #gluster-dev
04:35 gem joined #gluster-dev
04:35 asengupt joined #gluster-dev
04:36 penguinRaider joined #gluster-dev
04:38 jiffin joined #gluster-dev
04:49 atalur joined #gluster-dev
04:52 spalai joined #gluster-dev
04:59 aravindavk joined #gluster-dev
05:02 penguinRaider joined #gluster-dev
05:03 karthik_ joined #gluster-dev
05:08 hchiramm joined #gluster-dev
05:12 ankitraj joined #gluster-dev
05:15 aspandey joined #gluster-dev
05:20 shubhendu joined #gluster-dev
05:22 shubhendu joined #gluster-dev
05:24 msvbhat joined #gluster-dev
05:26 kdhananjay joined #gluster-dev
05:29 hgowtham joined #gluster-dev
05:30 sakshi joined #gluster-dev
05:30 jiffin1 joined #gluster-dev
05:31 kotreshhr joined #gluster-dev
05:31 Muthu_ joined #gluster-dev
05:32 ramky joined #gluster-dev
05:33 prasanth joined #gluster-dev
05:42 kdhananjay joined #gluster-dev
05:42 jiffin1 joined #gluster-dev
05:47 ashiq joined #gluster-dev
05:47 atalur joined #gluster-dev
05:55 Manikandan joined #gluster-dev
05:58 pur_ joined #gluster-dev
05:58 pkalever joined #gluster-dev
06:01 ppai joined #gluster-dev
06:07 mchangir joined #gluster-dev
06:08 aravindavk joined #gluster-dev
06:10 spalai joined #gluster-dev
06:10 spalai joined #gluster-dev
06:11 spalai joined #gluster-dev
06:18 Bhaskarakiran joined #gluster-dev
06:19 Bhaskarakiran joined #gluster-dev
06:21 spalai left #gluster-dev
06:22 devyani7_ joined #gluster-dev
06:22 spalai joined #gluster-dev
06:27 itisravi joined #gluster-dev
06:36 msvbhat joined #gluster-dev
06:44 poornimag joined #gluster-dev
06:52 aravindavk joined #gluster-dev
06:54 itisravi joined #gluster-dev
07:05 kshlm joined #gluster-dev
07:10 aspandey xavih, ping
07:10 glusterbot aspandey: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
07:16 xavih aspandey: hi
07:18 aspandey xavih: want to discuss about http://review.gluster.org/#/c/14761/
07:19 ndevos nigelb: nbslave7g has a problem - https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/18224/consoleFull
07:19 ndevos nigelb: shall I disable it?
07:20 xavih aspandey: sure. Let me refresh my memory...
07:20 * ndevos assumes that is a yes
07:21 aspandey xavih: In this if we do not match inodelk key at all and just get the max value of inodelk of all the cbk, will it work? I think this is what you want to say in your comment.
07:21 aspandey xavih: sure, take your time..
07:22 * nigelb looks
07:22 nigelb ndevos: yes, please.
07:22 ndevos nigelb: done, and just sent an email too :)
07:23 nigelb what does that error mean though?
07:23 nigelb that /d/backends/patchy2/ is a folder that needs to exist for the tests to run?
07:23 nigelb the tests don't create it?
07:23 Saravanakmr joined #gluster-dev
07:27 xavih aspandey: yes, if we take the maximum of all answers I think it should work fine. You can see how GLUSTERFS_OPEN_FD_COUNT is managed
07:27 xavih aspandey: you should do exactly the same for the inodelk key
07:31 xavih aspandey: you are referring to GLUSTERFS_INODELK_COUNT key, right ?
07:32 xavih aspandey: in this case you only need to add it in ec_xattr_match(), since all the other code is already in place
07:32 aspandey xavih: I think it is already doing for inodelk also. While combining these dict in ec_dict_combine, ec_dict_data_combine will get the max of all inodelk using ec_dict_data_max32. I just need to put it in ec_xattr_match to not match it..
07:32 aspandey xavih, :) yes
07:32 xavih aspandey: you should also add GLUSTERFS_ENTRYLK_COUNT to be consistent
07:32 aspandey xavih: right..
07:33 nigelb ndevos: do you know how I can fix that?
07:33 aspandey xavih: Ok. I will see if any other key also require such thing or not and then will send this modified patch...
07:34 xavih aspandey: great :)
07:34 aspandey xavih: Thanks for confirmation ..
07:34 xavih aspandey: yw :)
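The combining behaviour xavih and aspandey agree on above (take the maximum of the per-brick counts, the way ec_dict_data_max32 already does for GLUSTERFS_OPEN_FD_COUNT, rather than requiring the counts to match) can be sketched as follows. This is a simplified illustration, not the actual ec-combine.c code; the function name is invented for the example:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch: when several bricks answer with their own
 * GLUSTERFS_INODELK_COUNT (or ENTRYLK/OPEN_FD count), the combined
 * value reported upward is the maximum across all answers, so the
 * per-brick counts are not required to match exactly. */
static uint32_t combine_max32(const uint32_t *answers, size_t count)
{
    uint32_t max = 0;

    for (size_t i = 0; i < count; i++) {
        if (answers[i] > max)
            max = answers[i];
    }
    return max;
}
```

Excluding the key in ec_xattr_match() (so mismatching counts do not fail the match) while combining with a max, as discussed above, keeps the reported lock counts meaningful without false mismatches.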
07:42 pur_ joined #gluster-dev
07:42 ndevos nigelb: I would expect the scripts create that directory... but maybe /d/backends needs to exist before?
07:45 * nigelb looks
07:45 nigelb /d/backends exists
07:47 misc yeah, /d/backends is created by ansible
07:50 poornimag joined #gluster-dev
07:51 shubhendu_ joined #gluster-dev
08:27 asengupt joined #gluster-dev
08:29 rraja joined #gluster-dev
08:33 pkalever joined #gluster-dev
08:53 shubhendu joined #gluster-dev
08:57 penguinRaider joined #gluster-dev
08:58 pkalever joined #gluster-dev
09:01 asengupt joined #gluster-dev
09:06 Bhaskarakiran joined #gluster-dev
09:07 Bhaskarakiran joined #gluster-dev
09:12 karthik_ joined #gluster-dev
09:17 nigelb misc: that still doesn't explain why that job fails on that particular box :(
09:21 misc nigelb: permissions ?
09:22 nigelb good point
09:22 * nigelb checks
09:23 nigelb Nope.
09:23 nigelb Same permissions as other machines which work fine.
09:24 misc acl, selinux ?
09:25 nigelb Ah, found it.
09:25 nigelb selinux exists on netbsd?
09:26 nigelb > volume create: patchy: failed: Host nbslave70.cloud.gluster.org is not in ' Peer in Cluster' state
09:26 nigelb Yeah, 7g is a clone of 70.
09:26 nigelb Now I need to find out how to make gluster think it's the right host
09:28 gem joined #gluster-dev
09:40 mchangir nigelb, iptables -F ?
09:41 nigelb mchangir: No iptables on BSD :)
09:41 Manikandan joined #gluster-dev
09:46 msvbhat joined #gluster-dev
09:46 ndevos nigelb: oh, what does $HOSTNAME have on that VM?
09:51 penguinRaider joined #gluster-dev
09:52 nigelb ndevos: It had nbslave70.cloud.gluster.org, it should have had nbslave7g.gluster.org.
09:52 nigelb I have just fixed that.
09:52 nigelb A lot of the netbsd machines have system files set as immutable.
09:52 nigelb So, the last time I made the change, it didn't stick.
09:52 nigelb Should work now.
09:53 ndevos nigelb: yeah, there was an issue where some system files got randomly overwritten, the immutable flag prevented that
09:57 nigelb I'm not complaining. I know the context. Just forgot about it when I set the hostname just after cloning the machine.
09:57 nigelb A restart undid the hostname change.
10:00 itisravi joined #gluster-dev
10:04 kdhananjay joined #gluster-dev
10:12 Bhaskarakiran joined #gluster-dev
10:21 Muthu_ joined #gluster-dev
10:26 kdhananjay nbalacha++
10:26 glusterbot kdhananjay: nbalacha's karma is now 7
10:32 kaushal_ joined #gluster-dev
10:32 rraja joined #gluster-dev
10:33 asengupt joined #gluster-dev
10:37 atalur joined #gluster-dev
11:30 anoopcs ndevos, Regarding https://review.gluster.org/#/c/11177/20/api/src/glfs-fops.c@4229... I couldn't find a place where errno will be set within dict_new(). What made you to assume so?
11:42 post-factum https://github.com/billziss-gh/winfsp glusterfs client for windows possible?
11:43 atinm joined #gluster-dev
11:46 ppai joined #gluster-dev
11:53 mchangir joined #gluster-dev
11:56 rastar joined #gluster-dev
12:08 kotreshhr joined #gluster-dev
12:19 poornimag joined #gluster-dev
12:29 ndevos anoopcs: GF_CALLOC should do that?
12:29 anoopcs ndevos, If so, isn't that the case with all dict_new() calls in API?
12:30 anoopcs I could see that errno being explicitly set to ENOMEM adjacent to dict_new() calls.
12:30 jiffin joined #gluster-dev
12:33 anoopcs The  UNIX 98 standard requires malloc(), calloc(), and realloc() to set errno to ENOMEM upon failure.  Glibc assumes that this is done (and the glibc versions of these routines do
12:33 anoopcs this); if you use a private malloc implementation that does not set errno, then certain library routines may fail without having a reason in errno.
12:33 anoopcs ndevos, ^^ from calloc man page.
12:33 anoopcs Hm.
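The man-page caveat anoopcs quotes is why callers often set errno explicitly beside the NULL check instead of relying on the allocator. A minimal sketch of that defensive pattern (the function name here is invented for illustration, not a Gluster API):

```c
#include <errno.h>
#include <stdlib.h>

/* Illustrative pattern: UNIX 98 requires calloc() to set errno to
 * ENOMEM on failure, and glibc does, but a private allocator might
 * not. Setting errno explicitly on the failure path keeps the
 * caller's error reporting correct either way. */
static void *alloc_table(size_t n)
{
    void *p = calloc(n, sizeof(int));

    if (p == NULL) {
        errno = ENOMEM;  /* defensive: don't rely on the allocator */
        return NULL;
    }
    return p;
}
```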
12:36 jiffin joined #gluster-dev
12:39 rastar joined #gluster-dev
12:39 Manikandan joined #gluster-dev
12:40 spalai left #gluster-dev
12:41 ppai joined #gluster-dev
12:41 ira_ joined #gluster-dev
12:42 aravindavk joined #gluster-dev
12:43 jiffin1 joined #gluster-dev
12:46 atinm joined #gluster-dev
12:57 atalur_ joined #gluster-dev
12:58 devyani7 joined #gluster-dev
13:01 julim joined #gluster-dev
13:02 jiffin joined #gluster-dev
13:03 penguinRaider joined #gluster-dev
13:48 sanoj joined #gluster-dev
13:51 shyam joined #gluster-dev
14:03 kshlm joined #gluster-dev
14:04 shyam joined #gluster-dev
14:06 msvbhat joined #gluster-dev
14:06 jiffin1 joined #gluster-dev
14:22 kkeithley anyone know anything about "aio support removed in 3.7.13" ?
14:23 lpabon joined #gluster-dev
14:26 jiffin joined #gluster-dev
14:27 ira joined #gluster-dev
14:28 hagarth joined #gluster-dev
14:28 kkeithley Looking at the Fedora and Ubuntu PPA build logs, `configure` reports that Linux-AIO is enabled in the build.
14:31 msvbhat joined #gluster-dev
14:31 kkeithley @bug
14:31 glusterbot kkeithley: (bug <bug_id> [<bug_ids>]) -- Reports the details of the bugs with the listed ids to this channel. Accepts bug aliases as well as numeric ids. Your list can be separated by spaces, commas, and the word "and" if you want.
14:31 kkeithley @bugs
14:32 kkeithley @fileabug
14:32 hagarth kkeithley: even if aio is enabled, i don't think it would be in effect as the aio behavior is disabled by default
14:36 kkeithley was that new in 3.7.13? Is it documented in the release notes?  David Gossage on gluster-dev mailing list seems to think it is not
14:36 ndevos kkeithley: there is a storage.linux-aio volume option, or something like that
14:37 ndevos and I *think* that always was off by default...
14:38 hagarth kkeithley: don't think so .. configure output lists aio as enabled, it would be due to the presence of libaio-devel in the build system
14:39 pur_ joined #gluster-dev
14:40 hagarth kkeithley: i am a bit confused about where the aio problem is stemming from .. gluster or qemu/kvm/ovirt ?
14:43 pur joined #gluster-dev
14:49 kkeithley Me too.  Maybe check out David Gossage's recent email to gluster-dev list
14:50 ndevos kkeithley: yeah, I'm just responding to Lindsay now, not sure what to make from Davids message
14:54 shyam joined #gluster-dev
14:55 pkalever left #gluster-dev
15:01 atalur joined #gluster-dev
15:03 gem joined #gluster-dev
15:05 lpabon joined #gluster-dev
15:07 hagarth joined #gluster-dev
15:08 shyam joined #gluster-dev
15:35 shubhendu joined #gluster-dev
15:37 shubhendu joined #gluster-dev
16:05 jiffin1 joined #gluster-dev
16:10 jiffin joined #gluster-dev
16:13 jiffin joined #gluster-dev
16:25 poornimag joined #gluster-dev
16:28 jiffin joined #gluster-dev
16:41 poornimag joined #gluster-dev
16:53 jiffin joined #gluster-dev
17:03 poornimag joined #gluster-dev
17:23 msvbhat joined #gluster-dev
17:25 shyam joined #gluster-dev
17:26 ankitraj joined #gluster-dev
17:32 mchangir joined #gluster-dev
17:34 julim joined #gluster-dev
17:45 kkeithley Seems I have an account on www.gluster.org and have sudo.  I've archived the old community documentation and placed a redirect to our readthedocs.io documentation
17:46 kkeithley archive is in /root/old-community-documentation.tgz.
17:46 kkeithley nigelb, misc, ndevos: ^^^
17:47 kkeithley s/placed/set up/
17:54 ankitraj kkeithley: are you looking on git?
17:55 kkeithley Am I looking on git?
17:55 ankitraj m asking?
17:55 ankitraj where you are looking?
17:55 kkeithley but I don't understand the question?
17:55 kkeithley looking for what?
17:56 kkeithley I'm not looking for anything.
17:57 ankitraj sorry i joined late here, didn't get what you are talking?
17:58 kkeithley I'm talking about having removed the old community documentation at www.gluster.org/community/documentation and setting up a redirect to our documentation at gluster.readthedocs.io
17:59 kkeithley Because people keep finding out-of-date docs on www.gluster.org, because that's the top hit in google search
17:59 ankitraj correct gluster.readthedocs.io is the latest
17:59 kkeithley I know
18:00 ankitraj http://www.gluster.org/ is also working
18:00 kkeithley yes, it is.  I'm not getting your point
18:01 julim_ joined #gluster-dev
18:01 kkeithley It works, but the documentation is out of date.
18:01 ankitraj yes
18:01 kkeithley so I have set up a redirect that automatically sends people to readthedocs
18:02 ankitraj correct
18:03 shyam1 joined #gluster-dev
18:13 pkalever joined #gluster-dev
18:14 pkalever left #gluster-dev
18:25 msvbhat joined #gluster-dev
18:25 ankitraj joined #gluster-dev
18:28 ndevos kkeithley++ so much for graceful migration, but yeah, anything that helps getting it to move again is great!
18:28 glusterbot ndevos: kkeithley's karma is now 136
18:28 ndevos kkeithley: although, I guess the redirecting could use some tweaks, http://www.gluster.org/community/documentation/index.php/Features/disk-encryption doesnt redirect
18:31 [o__o] joined #gluster-dev
18:35 misc kkeithley: mhh, how did you set the redirect ?
18:36 misc (just to make sure ansible do not erase it or anything, but I think it doesn't fiddle too much with that server yet)
18:45 shyam1 left #gluster-dev
18:49 rastar joined #gluster-dev
18:56 ashiq joined #gluster-dev
19:29 kkeithley misc, ndevos: I replaced .../community/documentation/index.php with index.html with a meta refresh.  All subdirs above .../community/documentation are gone. (But I could duplicate some or all of the directories and add redirects.)
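An index.html with a meta refresh of the kind kkeithley describes would look roughly like this. The exact file placed on www.gluster.org is not shown in the log, so this is an illustration:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Redirect the old wiki documentation entry point to readthedocs -->
    <meta http-equiv="refresh" content="0; url=https://gluster.readthedocs.io/">
    <link rel="canonical" href="https://gluster.readthedocs.io/">
  </head>
  <body>
    <p>This documentation has moved to
       <a href="https://gluster.readthedocs.io/">gluster.readthedocs.io</a>.</p>
  </body>
</html>
```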
20:01 misc mhh ok, so I guess that would work
20:01 misc kkeithley: but that do not mean the whole wiki is gone ?
20:03 penguinRaider joined #gluster-dev
20:05 kkeithley no, only .../community/documentation/...
20:06 kkeithley and I archived those bits in /root/old-community-documentation.tgz
20:10 misc yeah, that part I did see :)
20:16 ndevos kkeithley: ah, ok, I was expecting a redirect in the httpd/nginx/.../$webserver config, so that all existing bookmarks and search results also point to the new docs
20:36 shaunm joined #gluster-dev
20:55 kkeithley ndevos: I'm not (at all) familiar with nginx, so I did the bare minimum that I know how to do. something better than nothing
20:58 shyam joined #gluster-dev
21:05 misc unless we map url by url, I am not sure of the value :/
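The webserver-level redirect ndevos was expecting (one that also catches deep links like the Features/disk-encryption page, without mapping url by url) could be done with a prefix match in the server config. Assuming nginx, a sketch might look like this; the location prefix is taken from the paths discussed above:

```nginx
# Illustrative only: send every old wiki documentation URL,
# including deep links, to the new readthedocs site.
location ^~ /community/documentation/ {
    return 301 https://gluster.readthedocs.io/;
}
```

A prefix redirect loses the per-page mapping misc mentions, but it preserves existing bookmarks and search results at the cost of landing everyone on the docs front page.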
22:46 penguinRaider joined #gluster-dev
