
IRC log for #gluster-dev, 2014-12-19


All times shown according to UTC.

Time Nick Message
00:40 shyam joined #gluster-dev
01:24 _Bryan_ joined #gluster-dev
01:41 nishanth joined #gluster-dev
02:03 bala joined #gluster-dev
02:20 lalatenduM joined #gluster-dev
02:32 tg2 http://www.gluster.org/documentation/use_cases/GlusterOnZFS/  <3
02:34 tg2 anybody using disperse in production ?
02:35 tg2 oops my bad didn't realize I was in dev chat
02:47 badone joined #gluster-dev
03:49 shubhendu joined #gluster-dev
03:58 hagarth joined #gluster-dev
04:04 ppai joined #gluster-dev
04:12 itisravi joined #gluster-dev
04:12 spandit joined #gluster-dev
04:23 kanagaraj joined #gluster-dev
04:33 lalatenduM joined #gluster-dev
04:35 anoopcs joined #gluster-dev
04:36 hagarth joined #gluster-dev
04:38 bala joined #gluster-dev
04:40 kshlm joined #gluster-dev
04:48 jiffin joined #gluster-dev
04:49 anoopcs joined #gluster-dev
05:10 rafi_kc joined #gluster-dev
05:15 rafi1 joined #gluster-dev
05:16 rafi_kc joined #gluster-dev
05:16 soumya joined #gluster-dev
05:37 bala joined #gluster-dev
05:37 kdhananjay joined #gluster-dev
05:42 rafi1 joined #gluster-dev
05:47 pranithk joined #gluster-dev
05:47 raghu joined #gluster-dev
06:10 soumya joined #gluster-dev
06:49 pcaruana joined #gluster-dev
06:49 rjoseph joined #gluster-dev
07:06 atalur joined #gluster-dev
07:06 rjoseph joined #gluster-dev
07:21 aravindavk joined #gluster-dev
07:41 lalatenduM joined #gluster-dev
07:49 overclk joined #gluster-dev
07:51 bala joined #gluster-dev
08:03 atalur joined #gluster-dev
08:31 atalur joined #gluster-dev
08:42 lalatenduM hchiramm, Humble ping pm
08:44 hchiramm lalatenduM, pong
09:17 lalatenduM ndevos, regarding http://review.gluster.org/#/c/9169/2, we need to revert your commit 858ffb7e452aad1aae4005cf9d22b30546c0864c in fedora dist-git
09:19 ndevos lalatenduM: well, yes, not revert, just undo the changes (except for the %changelog part)
09:23 an joined #gluster-dev
09:24 aravindavk joined #gluster-dev
09:41 lalatenduM ndevos, hmm so I should just edit the file and  remove the changes?
09:42 ndevos lalatenduM: yes, just undo them
09:42 lalatenduM ndevos, ok
09:47 an_ joined #gluster-dev
10:09 Humble joined #gluster-dev
10:13 divyamn joined #gluster-dev
10:18 divyamn left #gluster-dev
10:23 ppai joined #gluster-dev
10:24 lalatenduM ndevos, we have also incorporate http://review.gluster.org/#/c/9272/2 too
10:24 lalatenduM s/have/have to/
10:25 lalatenduM ndevos, also need help to understand http://review.gluster.org/#/c/9248/2/configure.ac
10:27 ndevos lalatenduM: http://review.gluster.org/#/c/9272/2 <- yes, it is cleaner when that is included
10:27 ndevos lalatenduM: http://review.gluster.org/#/c/9248/2/configure.ac - when geo-replication is not enabled (like on RHEL5), the geo-replication scripts should not get installed with 'make install'
10:29 an joined #gluster-dev
10:34 lalatenduM ndevos, RE: 9248/2/configure.ac -> right, I understand you are creating a different dir GEOREP_EXTRAS_SUBDIR=geo-rep only when geo-rep is enabled
10:35 lalatenduM ndevos, but I am not getting how during install the geo-rep scripts will not get installed
10:35 lalatenduM ndevos, does spec file has a understanding of the new directory?
10:35 lalatenduM an*
10:37 ndevos lalatenduM: when geo-replication is not enabled the GEOREP_EXTRAS_SUBDIR variable is empty, so the Makefile.am in extras/ skips the extras/geo-rep/ subdirectory
10:38 lalatenduM ndevos, the above logic is in configure.ac file ?
10:40 ndevos lalatenduM: configure.ac sets the GEOREP_EXTRAS_SUBDIR variable to "geo-rep" or an empty string "", the extras/Makefile.am has a list of subdirectories it needs to process, and $(GEOREP_EXTRAS_SUBDIR) is in that list
10:42 ndevos lalatenduM: when extras/Makefile.am goes through the list, it will include the geo-rep subdir in case GEOREP_EXTRAS_SUBDIR="geo-rep", otherwise the list of subdirs just contains an empty value (skipped)
10:45 lalatenduM ndevos, I understand this, but I am not able to correlate, as the script location in the spec file is %{_datadir}/glusterfs/scripts/get-gfid.sh
10:46 lalatenduM ndevos, in the %files geo-replication, the script location
10:46 ndevos lalatenduM: ah, the actual installation of the scripts in case geo-rep is enabled, is done by extras/geo-rep/Makefile.am
10:46 lalatenduM ndevos, ah, checking
10:47 an_ joined #gluster-dev
10:48 an joined #gluster-dev
10:50 lalatenduM ndevos, at the end the scripts should be in %{_datadir}/glusterfs/scripts/get-gfid.sh when geo-rep is enabled, right?
10:50 ndevos lalatenduM: yes, I think so
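A minimal sketch of the mechanism described above, assuming a simple enable/disable switch (GEOREP_EXTRAS_SUBDIR, get-gfid.sh, and the install location are from the chat; the autoconf test and the other SUBDIRS entries are illustrative, not the verbatim change from review 9248):

    # configure.ac (illustrative): name the subdir only when geo-rep is enabled
    if test "x$enable_georeplication" != "xno"; then
        GEOREP_EXTRAS_SUBDIR="geo-rep"
    fi
    AC_SUBST(GEOREP_EXTRAS_SUBDIR)

    # extras/Makefile.am: an empty $(GEOREP_EXTRAS_SUBDIR) simply drops out
    # of SUBDIRS, so 'make install' never descends into extras/geo-rep/
    SUBDIRS = $(GEOREP_EXTRAS_SUBDIR) hook-scripts

    # extras/geo-rep/Makefile.am: installs the scripts when the subdir is built
    scriptsdir = $(datadir)/glusterfs/scripts
    scripts_SCRIPTS = get-gfid.sh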
10:51 lalatenduM ndevos, pm
10:54 an joined #gluster-dev
11:02 an_ joined #gluster-dev
11:11 an joined #gluster-dev
11:24 kkeithley1 joined #gluster-dev
11:33 ira joined #gluster-dev
11:41 an joined #gluster-dev
11:44 lalatenduM ndevos++ :)
11:44 glusterbot lalatenduM: ndevos's karma is now 70
11:45 ndevos lalatenduM++ !
11:45 glusterbot ndevos: lalatenduM's karma is now 54
11:46 Humble joined #gluster-dev
11:49 atalur joined #gluster-dev
12:11 itisravi joined #gluster-dev
12:14 an joined #gluster-dev
12:19 edward1 joined #gluster-dev
12:30 atalur joined #gluster-dev
12:31 an joined #gluster-dev
12:32 an_ joined #gluster-dev
12:38 an joined #gluster-dev
12:41 krishnan_p joined #gluster-dev
12:43 an_ joined #gluster-dev
12:44 an joined #gluster-dev
12:46 tdasilva joined #gluster-dev
12:55 hagarth_ joined #gluster-dev
12:57 anoopcs joined #gluster-dev
12:58 jdarcy joined #gluster-dev
13:40 rjoseph joined #gluster-dev
13:57 pranithk joined #gluster-dev
14:29 shyam joined #gluster-dev
14:32 lpabon joined #gluster-dev
14:38 lalatenduM ndevos, http://review.gluster.org/#/c/9272/ is not backported to the 3.6 branch, do you have a plan to send a backport
14:39 atalur joined #gluster-dev
14:48 hagarth_ joined #gluster-dev
14:51 shyam joined #gluster-dev
15:03 lalatenduM_ joined #gluster-dev
15:13 wushudoin joined #gluster-dev
15:16 _Bryan_ joined #gluster-dev
15:30 soumya joined #gluster-dev
15:41 shyam joined #gluster-dev
15:54 an joined #gluster-dev
15:59 vimal joined #gluster-dev
16:06 lalatenduM_ ndevos, kkeithley abt http://review.gluster.org/#/c/9272/2/glusterfs.spec.in, how does "%dir %{_datadir}/glusterfs/scripts" help?
16:13 ndevos lalatenduM_: on http://review.gluster.org/#/c/9272/2/glusterfs.spec.in - the directory is automatically created, but it does not belong to a package without that %dir statement
16:14 ndevos lalatenduM_: when an RPM is uninstalled, all contents (files *and* directories) should be removed from the system (logs and config files being an exception)
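As a sketch, the corresponding part of the %files list would look something like this (abbreviated; only get-gfid.sh is named in this conversation, the rest of the real list is omitted):

    %files geo-replication
    # %dir makes the package own the directory itself; without it the
    # directory is left behind on 'rpm -e'
    %dir %{_datadir}/glusterfs/scripts
    %{_datadir}/glusterfs/scripts/get-gfid.sh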
16:15 lalatenduM_ ndevos, cool, got it thanks ndevos++
16:15 glusterbot lalatenduM_: ndevos's karma is now 71
16:15 ndevos lalatenduM_: you should be able to  verify with: rpm -qf /usr/share/http://review.gluster.org/#/c/9272/2/glusterfs.spec.in
16:15 ndevos yugh
16:15 ndevos that should be: rpm -qf /usr/share//glusterfs/scripts
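With the %dir line in place, that check should attribute the directory to the sub-package, along the lines of (the exact package name and version shown here are hypothetical):

    $ rpm -qf /usr/share/glusterfs/scripts
    glusterfs-geo-replication-3.6.1-1.fc20.x86_64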
16:15 lalatenduM_ yeah
16:15 lalatenduM_ cool thanks
16:16 ndevos lalatenduM_: there probably are other directories that do not get removed upon uninstalling the rpms - you can file bugs+patches for those :D
16:17 lalatenduM_ ndevos, yeah :)
16:18 ndevos lalatenduM_: and I did not send a backport for http://review.gluster.org/#/c/9272/ - it is a minor issue, but you could backport it if you want
* lalatenduM_ now believes packaging is black magic, thought people were exaggerating it :)
16:18 lalatenduM_ before
16:22 ndevos lalatenduM_: packaging can do magic! I'm looking at automatically re-building nfs-ganesha from the upstream sources, and have just modified their .spec: https://github.com/nixpanic/nfs-ganesha/commit/a2085ef33c3db4b3d66f97fda6fd3f93a7274714#diff-84e88d6934792966f34a1171c3a5b709L51
16:23 lalatenduM_ ndevos, kewl :)
16:24 lalatenduM_ I bet you and kkeithley_ went to Hogwarts :)
16:24 ndevos lol!
16:24 kkeithley_ I was in Slytherin
16:25 kkeithley_ No, wait, Gryffendor
16:25 ndevos you're so evil!
16:25 lalatenduM_ kkeithley_, heheh
16:25 lalatenduM_ kkeithley_, I think it suits you better
16:25 lalatenduM_ :)
16:25 kkeithley_ Gryffindor even
16:25 lalatenduM_ I mean Slytherin
16:27 kkeithley_ I need a good anagram of my name. I am Admiral Ackbar. Hmmm, needs another 'k'
16:28 lalatenduM_ :)
16:28 kkeithley_ I like good magic better
16:29 kkeithley_ I just play evil on TV
16:30 lalatenduM_ haha
16:30 kkeithley_ Sorry for the semi-American cultural reference
16:30 lalatenduM_ np
16:30 kkeithley_ "I'm not a doctor, but I play one on TV"
16:31 lalatenduM_ lol
16:31 kkeithley_ but let me give you some medical advice anyway
16:31 kkeithley_ ads on TV
16:32 lalatenduM_ yeah saw the ad on youtube :)
16:33 lalatenduM_ ttyl, gotta go home before I get locked outside of home :)
16:33 kkeithley_ don't get locked out
16:55 vimal joined #gluster-dev
17:08 soumya joined #gluster-dev
17:18 ndevos hey shyam, got a minute to look at pub_glfs_h_creat() in api/src/glfs-handleops.c?
17:19 ndevos shyam: I'm trying to understand how the fd that gets created there, gets closed/destroyed
17:20 ndevos shyam: could it be that glfs_h_creat() returns the glfs_object, but the fd that is used to generate that, never gets closed?
* ndevos has bricks running out of file descriptors, and this seems like one possible cause
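A minimal gfapi loop of the kind being discussed, usable to watch the brick's open-fd count (the volume name and server are placeholders, error handling is trimmed, and the four-argument glfs_h_lookupat() of the 3.x-era API is assumed):

    /* fd-leak reproducer sketch: create files via glfs_h_creat() and watch
     * 'ls /proc/<brick-pid>/fd | wc -l' grow if the fds leak. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <glusterfs/api/glfs.h>
    #include <glusterfs/api/glfs-handles.h>

    int main (void)
    {
            struct stat sb;
            char name[64];
            struct glfs *fs = glfs_new ("vol0");   /* placeholder volume */

            glfs_set_volfile_server (fs, "tcp", "localhost", 24007);
            if (glfs_init (fs) != 0)
                    return 1;

            struct glfs_object *root = glfs_h_lookupat (fs, NULL, "/", &sb);

            for (int i = 0; i < 100000; i++) {
                    snprintf (name, sizeof (name), "f-%06d", i);
                    struct glfs_object *obj =
                            glfs_h_creat (fs, root, name, O_CREAT, 0644, &sb);
                    if (obj != NULL)
                            glfs_h_close (obj);  /* frees the handle only; the
                                                  * internal fd taken for the
                                                  * create is what leaks */
            }

            glfs_fini (fs);
            return 0;
    }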
17:40 jobewan joined #gluster-dev
17:55 lalatenduM joined #gluster-dev
18:08 soumya ndevos, agree..
18:09 soumya ndevos, in addition looks like there is no reference maintained for the fd created in glfs_h_creat to close it later.
18:09 ndevos soumya: I'm not sure if it is kept open on purpose... and lru_cleanup() in ganesha is not called either, so nothing seems to close it :-/
18:10 ndevos soumya: yeah, thats one thing I'm not sure about, it gets bound to an inode, so it may get closed through that - but *how*?
18:10 soumya ndevos, may be it was implemented to make it similar to glfs_creat..
18:10 soumya but the difference is that glfs_creat returns the fd to the application
18:10 soumya which can do close later..
18:11 soumya but as you have mentioned glfs_h_creat returns the handle
18:11 ndevos yeah...
18:11 ndevos I'm going to run a test where glfs_h_creat() always does a fd_destroy() before returning, lets see
18:12 soumya yupp....good catch :)
18:14 ndevos soumya, shyam: this is what I'm testing now: http://paste.fedoraproject.org/161407/90128711
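The paste above is ephemeral; as a reconstruction from the surrounding discussion (not the verbatim patch), the change under test amounts to dropping the fd reference held by pub_glfs_h_creat() before the handle is returned, roughly (the chat mentions fd_destroy(), shown here as the usual fd_unref()):

    /* api/src/glfs-handleops.c, tail of pub_glfs_h_creat() (sketch) */
    out:
            if (fd != NULL)
                    fd_unref (fd);  /* drop the ref taken for the create;
                                     * at refcount 0 a RELEASE reaches the
                                     * brick, which closes the real fd */

            glfs_subvol_done (fs, subvol);
            return object;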
18:16 ndevos and, I think that does the trick...
18:17 soumya in addition dont we need to syncop_flush(..)
18:17 soumya to let brick process flush the fd
18:17 soumya ?
18:17 lalatenduM kkeithley_, Need some help , need to re-run regression for http://review.gluster.org/#/c/9103/
18:18 ndevos soumya: oh, yes, maybe that makes the creation of 1000000 files so slow!
18:18 ndevos lalatenduM: got a Jenkins login?
18:18 soumya and just found that in glfs_closedir, we are not making syncop_flush() like in glfs_close()
18:18 soumya *call
18:18 lalatenduM ndevos, yes
18:19 ndevos lalatenduM: then go to http://build.gluster.org/job/rackspace-regression-2GB-triggered/2598/consoleFull , login and click on 'retrigger' in the left menu :)
18:19 lalatenduM ndevos, on it
18:19 ndevos soumya: hmm, flush() a directory?
18:19 * ndevos wonders what that does, and if that is needed...
18:20 soumya flush the 'fd' we got via glfs_opendir/glfs_h_opendir
18:21 ndevos soumya: the 'f' in that 'fd' stands for 'directory'? it is not a file?
18:21 ndevos wait - fd -> dh -> directory handle
18:22 ndevos I do not think you can flush a directory... but well, I would need to look that up
18:22 ndevos oh, wow, my glfs_h_creat() loop does not lose any fd's now, thats nice!
18:23 ndevos lalatenduM: found it?
18:23 lalatenduM ndevos, yes, build with parameters right?
18:24 ndevos lalatenduM: no, 'retrigger'
18:24 ndevos lalatenduM: you can build with parameters, but that is different
18:24 ndevos lalatenduM: maybe you need to go to that failed-console-link again, after you logged in
18:24 lalatenduM ndevos, got it
18:26 soumya ndevos, :)
18:26 soumya ndevos, looking at posix_open(..) -> does pfd->fd=open(..) -> fd_ctx_set (fd, pfd)...
18:27 soumya ndevos, close (pfd->fd) in case of errors...
18:27 soumya ndevos, but in posix_flush(..), there is no close (pfd->fd)
18:27 soumya :-/
18:27 lalatenduM ndevos, cool, ndevos++
18:27 glusterbot lalatenduM: ndevos's karma is now 72
18:30 ndevos soumya: I do not think there is a problem there, I am checking for the open file descriptors in the brick process, they stay 0 while I run my tests
18:30 ndevos lalatenduM++ and now you can retrigger any failed regression tests :D
18:30 glusterbot ndevos: lalatenduM's karma is now 55
18:30 ndevos soumya++ too, btw
18:30 glusterbot ndevos: soumya's karma is now 3
18:31 soumya ndevos, hmm okay..I just wanted to know what flush() exactly does as you had pointed out
18:33 ndevos soumya: where did you see syncop_flush() in the glfs_h_creat() path>
18:33 ndevos ?
18:33 soumya glfs_close
18:33 lalatenduM ndevos, yeah, I never did that bcz I never had to cherry pick patches :)
18:34 soumya not in glfs_h_creat but in glfs_close
18:34 lalatenduM maybe we should let doc patches get merged without a regression test run
18:34 ndevos lalatenduM: sure, but in case you see patches fail, you can now retrigger the test, others appreciate that :)
18:35 soumya ndevos, I thought we need to let brick process know when to flush the fds it may have opened/stored using syncop_flush()
18:35 ndevos soumya: ah, right, yes, close() should not need a flush()
18:35 lalatenduM ndevos, will do , only way to return the favor :)
18:36 lalatenduM ndevos, soumya good night :)
18:36 ndevos lalatenduM: not everyone has the power (or knowledge, but knowledge *is* power) to retrigger regression tests ;)
18:36 soumya lalatenduM, good night :)
18:36 ndevos lalatenduM: good night!
18:36 lalatenduM yup agree
18:36 lalatenduM :)
18:36 lalatenduM bye
18:36 ndevos cya
18:37 soumya ndevos, I am not sure..I am now confused.. I dont see posix xlator making close(fd) system calls except for error conditions..maybe I am missing something
18:37 ndevos soumya: uhh, I dont know?
18:38 * ndevos opens the sources
18:41 ndevos soumya: I think the close() is done by posix_janitor_thread_proc()
18:42 soumya its a lru_cleanup?
18:42 ndevos soumya: and, in that case, syncop_close() would not be passed on to the posix xlator, and syncop_flush() would be needed
18:43 ndevos yeah, something like lru_cleanup
18:43 * ndevos now wonders if there is a CLOSE rpc call
18:44 soumya syncop_close() seems to be doing just fd_unref and is used only during dht_migrate_file
18:45 ndevos ah, probably there is one fd on the posix xlator, but many fd-structs/objects in other xlators, in the end referencing the same posix-fd
18:46 ndevos soumya: and, it is an interesting topic, but I think I should have dinner now, and you also have better things to do on a Friday evening/night?
18:46 soumya hehe :P.. sleep :)
18:46 ndevos hehe
18:47 soumya ok then..ttyl ..have a good weekend..
18:47 soumya bbyw
18:47 ndevos have a good weekend too, talk to you soon again!
19:21 shaunm joined #gluster-dev
19:34 lpabon joined #gluster-dev
19:44 shyam ndevos: there is no close from client to brick, only release, on fd ref going to 0
19:44 shyam ndevos: And the above-mentioned glfs_h_creat issue seems to be right; need to go back to check what the assumption was
19:45 tdasilva joined #gluster-dev
19:56 ndevos shyam: hmm, okay, but I think glfs_h_creat() always keeps one fd ref open... I do not see how that glfs_fd can get closed
19:57 ndevos shyam: patch http://paste.fedoraproject.org/161407/90128711 prevents the fd-leak for me, but there surely could be a better approach
19:58 shyam ndevos: close having a flush is because the fd is open till a release is done (i.e. the fd ref goes to 0), so close does a flush (or there is another very good reason that I seem to recollect Avati mentioning to me a while back)
19:59 ndevos shyam: yeah, the flush on close looks needed
19:59 shyam ndevos: glfs_h_creat should close the fd, i.e. destroy it, as it gets rid of the ref that it holds on it, which should eventually trigger the release and hence the brick should also close its fd, that is correct
19:59 ndevos shyam: http://review.gluster.org/9318 is the patch that I have tested, comments very much welcome :)
20:00 ndevos shyam: ah, okay, so that patch should be good? I'll file a bug for it then
20:00 shyam ndevos: I guess initially while writing the code, this is a remnant from glfs_creat :-/
20:00 shyam ndevos: I think the patch is good
20:00 ndevos shyam++ okay, thanks!
20:00 glusterbot ndevos: shyam's karma is now 1
20:01 shyam ndevos: I was playing around with fd release for another patch, so I am reasonably sure of this, just need to connect the dots once again (i.e. release on client ending up in a release on the brick process)
20:01 shyam ndevos: Otherwise that looks fine... (will take up the rest in Gerrit :) )
20:03 shyam ndevos: Just stating, it may be argued if we then need the fd_bind (or even the fd to be passed to syncop_creat) etc., but need to read more code to validate the same, jFYI
20:04 ndevos shyam: yeah, I wanted to pass NULL to syncop_creat(), but when looking at it, that would need (much) more work
20:05 ndevos well, not sure how 'much' that would be, I just took the easy way out
21:06 an joined #gluster-dev
21:06 an joined #gluster-dev
22:20 an joined #gluster-dev
23:09 badone joined #gluster-dev
23:26 badone joined #gluster-dev
