
IRC log for #gluster-dev, 2015-01-21


All times shown according to UTC.

Time Nick Message
01:04 shyam joined #gluster-dev
01:35 bala joined #gluster-dev
01:48 _Bryan_ joined #gluster-dev
01:50 shyam joined #gluster-dev
01:55 bala joined #gluster-dev
02:45 hagarth joined #gluster-dev
03:14 badone joined #gluster-dev
03:34 rjoseph joined #gluster-dev
03:52 bala joined #gluster-dev
04:10 shubhendu joined #gluster-dev
04:16 Manikandan joined #gluster-dev
04:36 anoopcs joined #gluster-dev
04:40 spandit joined #gluster-dev
04:54 anoopcs joined #gluster-dev
04:56 rafi joined #gluster-dev
04:59 Gaurav_ joined #gluster-dev
05:00 nishanth joined #gluster-dev
05:00 nkhare joined #gluster-dev
05:07 gem joined #gluster-dev
05:07 lalatenduM joined #gluster-dev
05:10 anoopcs left #gluster-dev
05:11 nkhare joined #gluster-dev
05:12 soumya joined #gluster-dev
05:13 ndarshan joined #gluster-dev
05:18 anoopcs joined #gluster-dev
05:21 anoopcs joined #gluster-dev
05:22 atinmu joined #gluster-dev
05:25 anoopcs joined #gluster-dev
05:26 kanagaraj joined #gluster-dev
05:29 hagarth joined #gluster-dev
05:33 anoopcs joined #gluster-dev
05:34 kdhananjay joined #gluster-dev
05:37 bala joined #gluster-dev
05:43 flu_ joined #gluster-dev
05:47 pranithk joined #gluster-dev
05:49 pp joined #gluster-dev
05:50 raghu joined #gluster-dev
05:50 jiffin joined #gluster-dev
06:03 ndarshan joined #gluster-dev
06:13 aravindavk joined #gluster-dev
06:19 kshlm joined #gluster-dev
06:37 hagarth joined #gluster-dev
07:08 purpleidea joined #gluster-dev
07:08 purpleidea joined #gluster-dev
07:13 deepakcs joined #gluster-dev
07:15 ndarshan joined #gluster-dev
07:29 atalur joined #gluster-dev
08:01 aravindavk joined #gluster-dev
08:09 rafi joined #gluster-dev
08:15 nishanth joined #gluster-dev
08:16 hagarth joined #gluster-dev
08:24 ppai joined #gluster-dev
08:36 atalur joined #gluster-dev
08:45 ndarshan joined #gluster-dev
08:46 rtalur_ joined #gluster-dev
08:48 shubhendu joined #gluster-dev
08:50 rafi1 joined #gluster-dev
08:51 jiffin1 joined #gluster-dev
08:58 rgustafs joined #gluster-dev
09:06 ppai joined #gluster-dev
09:25 rafi joined #gluster-dev
09:32 lalatenduM ndevos, I got some questions for https://copr.fedoraproject.org/coprs/devos/glusterfs/
09:42 hagarth joined #gluster-dev
09:53 ndevos lalatenduM: ask away!
09:59 lalatenduM ndevos, check https://copr-be.cloud.fedoraproject.org/results/devos/glusterfs/epel-7-x86_64/glusterfs-3.7dev-0.510.git8beaf16.autobuild/
10:00 lalatenduM ndevos, I can see a CentOS string in the rpm name, which is not present for el6 rpms
10:00 ndevos lalatenduM: is the centos string included in other epel-7 rpms?
10:02 lalatenduM ndevos, in copr el7 , yes
10:02 lalatenduM ndevos, not in d.g.o http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-7/x86_64/
10:03 ndevos lalatenduM: hmm, maybe that is a recent change in copr?
10:04 lalatenduM ndevos, can we remove the string?
10:05 ppai joined #gluster-dev
10:05 ndevos lalatenduM: not really, I guess it comes from an rpm-macro that gets installed in the COPR/mock environment
10:05 lalatenduM ndevos, RE: <ndevos> lalatenduM: sometimes build.gluster.org does weird things to dns or inserts a web-proxy, or something - thats an ongoing issue :-/
10:06 lalatenduM ndevos, I dont understand the issue, can you plz explain it to me
10:06 ndevos lalatenduM: if the .centos bit bothers you, you could file a bug against COPR, or ask about it in their irc-channel
10:07 lalatenduM ndevos, are you sure it is coming for all el7 builds?
10:07 ndevos lalatenduM: I assume it would, there is no specific centos config for the nightly builds
10:08 lalatenduM ndevos, ok
10:08 lalatenduM do you know the irc channel name ? #fedora-copr?
10:08 ndevos I think #copr-dev, but not sure, otherwise #fedora-buildsys
10:10 Debloper joined #gluster-dev
10:10 vikumar joined #gluster-dev
10:11 ndevos lalatenduM: build.gluster.org (has a cron job for the nightly builds) does not always connect reliably to the outside world, sometimes ssh connections to download.gluster.org fail, sometimes connecting to http servers give a 404 error from a local(?) proxy
10:11 ndevos the issue is that it happens 'sometimes', other times it seems to work well...
10:13 lalatenduM ndevos, I am a little confused, the nightly builds are done in copr or build.gluster.org?
10:15 ndevos lalatenduM: build.gluster.org generates the src.rpm and COPR takes that and builds it for different distributions
10:15 ndarshan joined #gluster-dev
10:16 lalatenduM ndevos, ok
10:17 lalatenduM ndevos, what is the job name on build.gluster.org
10:19 ndevos lalatenduM: its a cron job that runs as my user
10:19 ndevos lalatenduM: https://forge.gluster.org/bugzappers/nightly-builds/blobs/master/README contains the details
10:23 flu_ joined #gluster-dev
10:28 lalatenduM ndevos, cool, lets say if I want to build rpms at a certain time, I can log in to the system and run the scripts right? so the rest of the things get done automatically and copr will do the builds
10:29 lalatenduM ndevos, can I force copr to re-try to fetch src rpm
10:30 ndevos lalatenduM: not really, a COPR is tied to my Fedora account, there is no option for group permissions (yet)
10:34 lalatenduM ndevos, a jenkins slave with mock for building these rpms can solve these issues
10:36 lalatenduM brb
10:38 lalatenduM joined #gluster-dev
10:52 ndarshan joined #gluster-dev
10:55 soumya joined #gluster-dev
11:01 nishanth joined #gluster-dev
11:03 atinmu xavih, hi
11:03 xavih atinmu: hi
11:04 atinmu xavih, it seems like node-uuid implementation is not working
11:04 xavih atinmu: what fails ?
11:05 atinmu xavih, getxattr returns both the sub volumes
11:05 atinmu xavih, in case of 2 x (2 + 1) = 6 setup
11:06 xavih atinmu: in a distributed-dispersed volume the getxattr returns two uuid ?
11:06 atinmu xavih, yes
11:06 xavih atinmu: one from each ec subvolume ?
11:07 atinmu xavih, there were two nodes each of which hosting 3 bricks
11:09 xavih atinmu: I mean, the uuid's returned belong to bricks of different ec disperse sets ?
11:10 atinmu xavih, yes
11:11 xavih atinmu: then wouldn't this be a problem in DHT that does not choose from the two answers it receives ?
11:11 xavih atinmu: it seems that ec filters all answers and returns a single uuid (otherwise you would receive 6 uuid)
11:11 xavih atinmu: I'll take a look at DHT
11:12 atinmu xavih, one question, u mentioned DHT would receive two answers
11:12 atinmu xavih, how?
11:12 xavih atinmu: yes, one from each ec subvolume
11:16 xavih atinmu: I'm looking at DHT code and it seems it only sends the request to one of its subvolumes, so I don't understand how you are receiving data from both of them...
11:16 atinmu xavih, dht will send it getxattr request to one subvolume
11:17 xavih :q
11:18 xavih sorry
11:20 xavih atinmu: I'll create a volume to test it. This xattr can be requested using a getfattr command ?
11:21 atinmu xavih, geo-rep has a use case for it
11:21 rafi1 joined #gluster-dev
11:22 pp joined #gluster-dev
11:24 atinmu xavih, yes
11:24 atinmu xavih, on the mount point u can do a getfattr
11:24 xavih atinmu: ok, I'll try it
11:24 atinmu xavih, thanks
11:24 atinmu xavih, let me know if u find some discrepancy
11:24 xavih atinmu: yw :)
11:24 xavih atinmu: sure
11:36 gem joined #gluster-dev
11:42 xavih atinmu: it seems there's a bug in DHT
11:43 xavih atinmu: DHT only sends getxattr requests to one volume for files, but not for directories
11:43 soumya joined #gluster-dev
11:43 xavih atinmu: if you do a getfattr -n trusted.glusterfs.node-uuid <mount point> you get two uuid
11:44 xavih atinmu: if you do touch <mount point>/a and then getfattr -n trusted.glusterfs.node-uuid <mount point>/a, you get only one uuid
11:45 xavih atinmu: in dht you can see that for directories, the getxattr is sent to all subvolumes (dht-common, lines 2804-2816)
11:46 xavih atinmu: then, on dht_vgetxattr_dir_cbk() it uses dht_vgetxattr_alloc_and_fill() for each answer, which concatenates all of them
11:47 xavih atinmu: this causes the effect you have seen
11:49 xavih atinmu: I've repeated the test on a replica 2 volume and the same happens
11:49 xavih atinmu: even a pure distributed volume has this problem
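
A minimal C version of the reproduction xavih describes above (not part of the log; the path handling and buffer size are illustrative). It issues the same query as "getfattr -n trusted.glusterfs.node-uuid <path>" via the standard getxattr(2) call:

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/xattr.h>

    int main(int argc, char **argv)
    {
        char value[4096];
        ssize_t len;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <path-on-gluster-mount>\n", argv[0]);
            return 1;
        }

        /* Same query as: getfattr -n trusted.glusterfs.node-uuid <path> */
        len = getxattr(argv[1], "trusted.glusterfs.node-uuid", value,
                       sizeof(value) - 1);
        if (len < 0) {
            perror("getxattr");
            return 1;
        }
        value[len] = '\0';

        /* On an affected volume, a directory path prints multiple UUIDs
           (one per subvolume, concatenated by DHT); a file path prints
           exactly one. */
        printf("node-uuid: %s\n", value);
        return 0;
    }
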
11:52 overclk joined #gluster-dev
11:53 hagarth joined #gluster-dev
11:53 soumya_ joined #gluster-dev
12:01 jdarcy joined #gluster-dev
12:05 JustinClift Hmmm, does this load properly for anyone? https://github.com/gluster/glusterfs/pull/20
12:06 JustinClift I'm getting weird layout for GitHub.  Seems to be missing stylesheets?
12:08 kkeithley looks okay to me
12:10 Manikandan joined #gluster-dev
12:12 JustinClift kkeithley: k, it'll be something on my end then.  Thx :)
12:13 ppai joined #gluster-dev
12:19 atinmu xavih, awesome, thanks, I will inform dht team about it
12:26 kdhananjay bfoster: Brian, we just found that fsetxattr is not changing the ctime of the file immediately. The backend fs is xfs. Wonder if you have any pointers about where to look next.
12:26 Gaurav_ joined #gluster-dev
12:27 kdhananjay foster: ^^
12:30 nkhare joined #gluster-dev
12:30 foster kdhananjay: quick test with setfattr/upstream kernel shows it change
12:31 foster we probably need to work back from there
12:31 kdhananjay foster: we did it with fsetxattr and it didn't
12:31 ira joined #gluster-dev
12:31 kdhananjay foster: We are just writing a test c program which does both setxattr and fsetxattr and prints the times. Give us 10 minutes and we will post the results
12:32 foster what kernel, and do you get different results with setfattr?
12:32 foster ok
12:33 kdhananjay foster: yes we got different results with setfattr
12:34 ppai joined #gluster-dev
12:42 soumya joined #gluster-dev
12:42 soumya joined #gluster-dev
12:46 badone joined #gluster-dev
12:53 jiffin joined #gluster-dev
12:59 anoopcs joined #gluster-dev
13:01 lalatenduM joined #gluster-dev
13:03 kkeithley hagarth: sorry I phased out on the cmockery question. Was wrapping up my other meeting and couldn't get away. Looks like ndevos handled it though.
13:03 kkeithley ndevos++
13:03 glusterbot kkeithley: ndevos's karma is now 79
13:03 hagarth kkeithley: no problems!
13:03 hagarth ndevos++ nevertheless!
13:03 glusterbot hagarth: ndevos's karma is now 80
13:03 ndevos kkeithley: sure, np!
13:04 ndevos hagarth: ah, I'll start with writing the presentation now, did you have anything to add?
13:04 shubhendu joined #gluster-dev
13:04 hagarth ndevos: not yet, I might find some time after I head back home
13:05 kdhananjay foster: We re-created the problem, with the following c program. http://paste.fedoraproject.org/172445/84536614/, output: http://paste.fedoraproject.org/172447/45416142/
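
(The fedoraproject pastes above may no longer be available. A minimal sketch of a test along the lines kdhananjay describes - not the original program; the file path and xattr names are illustrative:)

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/xattr.h>
    #include <unistd.h>

    static void print_ctime(int fd, const char *label)
    {
        struct stat st;

        if (fstat(fd, &st) == 0)
            printf("%-16s ctime = %ld.%09ld\n", label,
                   (long)st.st_ctim.tv_sec, (long)st.st_ctim.tv_nsec);
    }

    int main(void)
    {
        int fd = open("/mnt/test/ctime-test", O_CREAT | O_RDWR, 0644);

        if (fd < 0) {
            perror("open");
            return 1;
        }

        print_ctime(fd, "initial");

        /* fsetxattr(2) should bump ctime... */
        if (fsetxattr(fd, "user.a", "1", 1, 0) < 0)
            perror("fsetxattr");
        print_ctime(fd, "after fsetxattr");

        /* ...and so should a path-based setxattr(2) right after it.
           If both land within the same clock tick, the two ctimes can
           come out identical - the effect discussed in this thread. */
        if (setxattr("/mnt/test/ctime-test", "user.b", "1", 1, 0) < 0)
            perror("setxattr");
        print_ctime(fd, "after setxattr");

        close(fd);
        return 0;
    }
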
13:05 ndevos hagarth: okay, I will be travelling tomorrow very early, and plan to work on the plane too - so anything you have before that is appreciated
13:06 kkeithley lalatenduM: do you still want to chat about the presentation?  Today I saw your PM to me/kkeithley (was that yesterday?) but yesterday I was kkeithley_. and didn't see it. Sorry about that.
13:06 hagarth ndevos: certainly
13:06 lalatenduM ndevos, btw I was checking old mails for nightly builds , saw your mail where we are pointing to http://download.gluster.org/pub/gluster/glusterfs/nightly/
13:06 ndevos hagarth: I'll get started with the 3.x-stable releases, NFS changes and features for 3.7 - beyond that, its a little misty for me
13:07 lalatenduM ndevos, but in my mail I will point to https://copr.fedoraproject.org/coprs/devos/glusterfs/
13:08 foster kdhananjay: i'll try to run it..
13:08 kdhananjay foster: Sure.
13:08 ndevos lalatenduM: the builds on download.gluster.org will get used when a dgo-nightly package is used for the .repo file - the copr is on my name, so others can not update it
13:08 hagarth ndevos: will definitely send out something tonight to aid there.
13:08 ndevos hagarth++ awesome!
13:08 glusterbot ndevos: hagarth's karma is now 37
13:08 lalatenduM kkeithley, yes, lets sync up, may be little late in the evening around 11PM my time :)
13:08 lalatenduM kkeithley, will that work for you ?
13:09 * kkeithley has to do some TZ math....   <thinking...>
13:09 kdhananjay foster: Works correctly when file is opened with O_DIRECT though.
13:10 lalatenduM kkeithley, lets talk in utc :) , what abt 17:30 UTC?
13:10 lalatenduM ndevos, yeah kind of confusing for others isn't it?
13:10 kkeithley 11pm IST is 12:30pm EST.  yeah, for about 1/2 an hour?
13:11 lalatenduM kkeithley, sure , we can deplay it for 30 mins or 1 hour
13:11 lalatenduM if you want
13:11 kkeithley deplay?
13:11 lalatenduM kkeithley, delay
13:12 ndevos lalatenduM: confusing yes, and in case someone else wants to run copr (or mock?) builds, it will get even more confusing - so download.gluster.org is a single point for the packages
13:12 kkeithley I have a match in our TT tournament at 1:00pm so delay would not be good. I just meant I can chat for 1/2hour at 11pm IST
13:12 lalatenduM ndevos, ok, can you plz send the mail abt nightly rpms :)
13:13 lalatenduM kkeithley, cool
13:13 lalatenduM kkeithley, works for me
13:13 lalatenduM ndevos, I tried though :)
13:13 kkeithley okay, good let's do that
13:13 ndevos lalatenduM: a reminder based on the previous email I sent? or did you send something now already?
13:14 lalatenduM ndevos, I haven't sent anything , yeah similar to the old one, which clearly says where others can get the RPMs
13:15 ndevos lalatenduM: sure
13:15 lalatenduM ndevos, basically RH qe team is waiting for these RPMs
13:15 kdhananjay foster: Nope. Not always working with O_DIRECT.
13:16 kdhananjay foster: Ok we're heading home now. If you find anything, could you please send a mail to kdhananj@redhat.com & pkarampu@redhat.com?
13:17 kdhananjay foster: We were able to hit the bug on ext4 as well.
13:17 foster kdhananjay: looks like it's just using the same time as the previous setxattr ?
13:17 foster kdhananjay: have you tried introducing a delay?
13:20 kdhananjay foster: Introducing a delay of 1 second between the first stat and the second (with a setxattr in between) did work.
13:23 ndevos kdhananjay, foster: maybe related to bug 1164506 ?
13:23 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1164506 medium, medium, ---, ndevos, MODIFIED , md-cache checks for modification using whole seconds only
13:23 kdhananjay ndevos: This is directly on xfs
13:23 ndevos kdhananjay: ah, ok, definitely something else then :D
13:24 kdhananjay foster: If we change the order of fsetxattr and setxattr in the program then setxattr is not changing the time. Does this mean the syscall completed in less than a nanosecond?
13:25 foster kdhananjay: i think what you're testing in this program is the granularity used for stamping time on the inode
13:25 foster whether it comes from the fs or somewhere else I'm not sure off hand
13:26 foster but for example, just comment out the setxattr completely and afaics the fsetxattr ctime update works fine
13:27 kdhananjay foster: So how do we figure out the granularity?
13:28 foster kdhananjay: probably just dig into the code, is there a larger problem to be solved here?
13:31 kdhananjay foster: Yes, there is. http://review.gluster.org/#/c/9418/1 is failing because gluster does not perceive any ctime change even on an fsetxattr.
13:33 kdhananjay foster: Check this out: http://paste.fedoraproject.org/172467/21847138/
13:33 kdhananjay foster: The timestamps associated with the individual log messages are changing but the ctimes are not.
13:37 bala joined #gluster-dev
13:37 foster kdhananjay: if i introduce a 1ns delay, ctime_nsec updates most of the time
13:40 kdhananjay foster: The time difference between the logs in the link above are more than a nano second apart. right?
13:43 foster appears so, but I'm not sure the point ?
13:45 kdhananjay foster: Each log represents a stat, a setxattr and another stat whose ctimes are printed.
13:45 kdhananjay foster: This means that between any two logs of the same kind, the timestamps of the log messages themselves are > 1 ns apart.
13:46 kdhananjay foster: Which could imply that even any two consecutive setxattrs happened > 1ns apart.
13:47 pranithk left #gluster-dev
13:48 kdhananjay foster: It's already 7:15 pm and we need to leave now. Could you let us know if you find anything over mail?
13:48 lalatenduM ndevos++
13:48 glusterbot lalatenduM: ndevos's karma is now 81
13:48 foster perhaps, it's probably more relevant to look at the time as seen by the fs than log messages
13:48 foster sure
13:48 ppai joined #gluster-dev
13:49 kdhananjay foster: Thanks a lot, Brian.
13:49 kdhananjay foster: Logging out now.
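
(An aside on foster's "granularity" point - an editorial inference, not something confirmed in the log: Linux typically stamps inode ctime/mtime from a coarse, tick-resolution clock rather than a fine-grained one, so two syscalls landing in the same tick can produce byte-identical ctimes even though they run more than a nanosecond apart. The two resolutions can be compared directly:)

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec fine, coarse;

        clock_getres(CLOCK_REALTIME, &fine);
        clock_getres(CLOCK_REALTIME_COARSE, &coarse);

        /* The fine clock usually reports 1 ns resolution; the coarse
           clock reports the tick length (often 1-10 ms), which matches
           the order of granularity seen in the ctime discussion above. */
        printf("CLOCK_REALTIME:        %ld.%09ld s\n",
               (long)fine.tv_sec, (long)fine.tv_nsec);
        printf("CLOCK_REALTIME_COARSE: %ld.%09ld s\n",
               (long)coarse.tv_sec, (long)coarse.tv_nsec);
        return 0;
    }
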
13:50 hagarth joined #gluster-dev
13:56 shyam joined #gluster-dev
13:59 shubhendu joined #gluster-dev
14:13 atalur joined #gluster-dev
14:14 gothos I'm currently giving beta 2 a whirl, but like in the previous release I'm getting a lot of: marker.c:2562:marker_removexattr_cbk] 0-data-marker: No data available occurred while creating symlinks
14:51 lalatenduM joined #gluster-dev
14:52 kkeithley joined #gluster-dev
14:53 bala joined #gluster-dev
15:01 nkhare joined #gluster-dev
15:07 kanagaraj joined #gluster-dev
15:13 lpabon joined #gluster-dev
15:13 tdasilva joined #gluster-dev
15:17 wushudoin joined #gluster-dev
15:29 bala joined #gluster-dev
15:49 gothos but otherwise everything seems to be working fine :)
15:56 vimal joined #gluster-dev
16:04 hagarth joined #gluster-dev
16:05 bala joined #gluster-dev
16:22 anoopcs joined #gluster-dev
16:22 shubhendu joined #gluster-dev
17:10 lalatenduM joined #gluster-dev
17:25 badone joined #gluster-dev
18:05 badone joined #gluster-dev
18:15 hagarth ndevos: ping,  are you looking for slides on 4.0 from me?
18:27 JoeJulian Is there a dev familiar with the quota code that can help with PeterA's questions in #gluster? I have no idea how to solve his problem, and I'm of the mind to just tell everyone that quota's unusable at this point.
18:28 jobewan joined #gluster-dev
18:31 hagarth JoeJulian: if an email with problem description is sent out on gluster-devel, I can check with one of the quota developers tomorrow (or rather later today in Bangalore)
18:55 vikumar joined #gluster-dev
18:57 ndevos hagarth: yes indeed, something about 4.0 would be nice - if not, well, FOSDEM or DevConf.cz are coming up too
18:57 hagarth ndevos: ok
18:58 ndevos hagarth: I have a stop tomorrow around your lunch time, and should be able to check email then - in case you would be able to send something
18:58 hagarth ndevos: intend sending something out soon
18:58 ndevos hagarth: cool, much appreciated!
19:14 hagarth ndevos: inboxed
19:14 ndevos hagarth++ thanks!
19:14 glusterbot ndevos: hagarth's karma is now 38
19:40 shyam joined #gluster-dev
21:03 ilbot3 joined #gluster-dev
21:03 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
21:20 tdasilva joined #gluster-dev
22:15 vimal joined #gluster-dev
22:34 vimal joined #gluster-dev
23:23 tdasilva joined #gluster-dev
