
IRC log for #gluster-dev, 2015-06-02


All times are shown in UTC.

Time Nick Message
00:38 shyam joined #gluster-dev
01:23 shyam joined #gluster-dev
01:52 wushudoin| joined #gluster-dev
01:57 wushudoin| joined #gluster-dev
03:09 overclk joined #gluster-dev
03:41 kanagaraj joined #gluster-dev
03:50 shaunm_ joined #gluster-dev
03:50 atinmu joined #gluster-dev
03:53 rjoseph joined #gluster-dev
03:57 shubhendu joined #gluster-dev
04:14 kshlm joined #gluster-dev
04:23 ppai joined #gluster-dev
04:27 spandit joined #gluster-dev
04:31 deepakcs joined #gluster-dev
04:31 soumya joined #gluster-dev
04:34 rjoseph joined #gluster-dev
04:38 nbalacha joined #gluster-dev
04:42 sakshi joined #gluster-dev
04:53 rafi joined #gluster-dev
04:57 rafi joined #gluster-dev
05:01 jiffin joined #gluster-dev
05:01 spalai joined #gluster-dev
05:03 ashishpandey joined #gluster-dev
05:08 gem joined #gluster-dev
05:12 anekkunt joined #gluster-dev
05:13 kaushal_ joined #gluster-dev
05:18 lalatenduM joined #gluster-dev
05:18 kotreshhr joined #gluster-dev
05:19 kshlm joined #gluster-dev
05:20 pppp joined #gluster-dev
05:25 schandra joined #gluster-dev
05:25 ashiq joined #gluster-dev
05:25 hgowtham joined #gluster-dev
05:31 spalai joined #gluster-dev
05:32 nkhare joined #gluster-dev
05:32 vimal joined #gluster-dev
05:34 nkhare_ joined #gluster-dev
05:38 spalai joined #gluster-dev
05:39 hagarth joined #gluster-dev
05:44 poornimag joined #gluster-dev
05:53 Gaurav_ joined #gluster-dev
05:55 spalai joined #gluster-dev
05:56 atalur joined #gluster-dev
06:18 spalai joined #gluster-dev
06:28 Joe_f joined #gluster-dev
06:36 saurabh_ joined #gluster-dev
06:43 sas_ joined #gluster-dev
06:44 rgustafs joined #gluster-dev
06:45 ppai joined #gluster-dev
06:47 pranithk joined #gluster-dev
06:48 atinm joined #gluster-dev
06:48 raghu joined #gluster-dev
06:54 msvbhat pranithk: I have sent the details in the mail. For any further inspection
06:56 pranithk msvbhat: Yes, I will take a look, looking into one OOM killer issue now, after that I will take a look at that.
06:57 msvbhat pranithk: Sure, Thanks
07:23 pranithk joined #gluster-dev
07:29 Manikandan joined #gluster-dev
07:55 Manikandan joined #gluster-dev
08:15 Manikandan joined #gluster-dev
08:41 atinmu joined #gluster-dev
08:45 aravindavk joined #gluster-dev
08:45 Joe_f joined #gluster-dev
08:47 rgustafs joined #gluster-dev
08:53 kbyrne joined #gluster-dev
09:10 rafi ndevos: hi
09:11 atalur joined #gluster-dev
09:12 ndevos hello rafi!
09:12 rafi ndevos: I was looking into your comments,  http://review.gluster.org/#/c/11032/
09:12 rafi ndevos: agreed,
09:13 atinmu rafi, could you host today's triage?
09:14 atinmu rafi, I might have to leave bit early as my kid is not well
09:14 rafi ndevos: I believe it is more likely a trade-off between object size and execution time :)
09:15 ndevos rafi: indeed it is, and IMHO that should be decided by the compiler flags given by the package builder, not by the developers :)
09:15 rafi atinmu: our scrum meeting is also scheduled at the same time ;-)
09:16 atinmu ndevos, can you volunteer it for today?
09:16 rafi atinmu: I can join bit later
09:18 ndevos rafi: you really should have your meeting time fixed, Joe_f wanted to join the bug triage meetings too
09:18 rafi atinmu: ndevos : I will talk with the team :)
09:19 atinmu ndevos,
09:20 ndevos atinmu, rafi: I can do the meeting today, thats fine
09:21 atinmu ndevos, thanks :)
09:21 atinmu ndevos++
09:21 glusterbot atinmu: ndevos's karma is now 142
09:21 rafi ndevos++
09:21 glusterbot rafi: ndevos's karma is now 143
09:21 rafi ndevos: that is decided :) . What about the bug ;-)
09:21 rafi ndevos: i can send a new patch set
09:23 hchiramm joined #gluster-dev
09:23 ndevos rafi: I do not care too much about it, the patch fixes an issue, thats good :)
09:24 hchiramm ndevos, lalatenduM kkeithley1 the 3.7.1 rpms are available at http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.1/
09:29 rafi ndevos: Ok :) Next time onward, i will give that decision to compiler  ;)
09:32 ndevos hchiramm++ thank you!
09:32 glusterbot ndevos: hchiramm's karma is now 37
09:32 hchiramm ndevos++ , Yw !
09:32 glusterbot hchiramm: ndevos's karma is now 144
09:33 ndevos hchiramm: will you send out a note to the lists about it as well, or do you pass that on to KP?
09:34 hchiramm ndevos, checking for 3.7.1 release mail
09:35 hchiramm looks like there is no separate mail for this release ..
09:36 hchiramm can I reply to the same thread "[Gluster-devel] Will be pushing 3.7.1 tag shortly" saying rpms are available ?
09:36 hchiramm ndevos,^^
09:37 ndevos hchiramm: we try to do announcements of the release when packages are available, so there is no email yet
09:37 hchiramm it was violated on 3.7.0 release :)
09:38 itisravi joined #gluster-dev
09:38 hchiramm manu had sent a mail saying
09:38 hchiramm I updated NetBSD glusterfs package to 3.7.1 plus .......
09:38 ndevos yes, and on many others, that does not mean it is right :)
09:38 hchiramm true :)
09:39 ndevos I do not think manu sent it to the users list?
09:39 hchiramm ah.. yes. only devel
09:39 ndevos at the moment, there has not been an announcement for users yet?
09:39 hchiramm true..
09:39 hchiramm ok.. I will send a mail to KP saying packages are ready
09:39 hchiramm let him announce.
09:41 ndevos sure, works for me :)
09:41 hchiramm ndevos, notified him .. :)
09:43 pranithk ndevos: itisravi and I have some questions about invalidating nfs client cache...
09:45 ndevos pranithk: go look for the answers!
09:45 itisravi ndevos: we looked and  found you :D
09:46 ndevos itisravi: oh, hah!
09:46 pranithk ndevos: So in fuse Du helped us find the answers. Wondering if you know any pointers in case of nfs
09:46 pranithk ndevos: is there a way to invalidate nfs cache?
09:47 itisravi In fuse there is setting attribute and entry time out to zero and calling inode_invalidate() which does the job.
09:48 pranithk itisravi: could you give him the link to patch. ndevos is good with fuse as well. So he will have better idea about what we are looking for.
09:48 itisravi pranithk: sure.
09:48 hagarth joined #gluster-dev
09:48 itisravi ndevos: http://review.gluster.org/#/c/10905/
09:49 ndevos pranithk, itisravi: invalidate the cache, or changing the time objects can stay cached?
09:49 pranithk ndevos: we don't want caching on files in split-brain so that afr can give EIO when they are accessed
09:49 itisravi I think invalidating the cache is what we are after.
09:50 pranithk ndevos: Any suggestion so that the fop would come till afr would help
09:50 ndevos pranithk, itisravi: on Linux you can do "echo 3 > /proc/sys/vm/drop_caches"
09:51 ndevos pranithk, itisravi: on NetBSD you can: cd $MOUNTPOINT ; umount $MOUNTPOINT ; cd -
09:51 pranithk ndevos: That would remove cache of all the entries which is bad...
09:51 pranithk ndevos: We want this to happen only for the names in gfid-split-brain and files which are in data/metadata split-brain
09:52 itisravi ndevos: that worked (https://bugzilla.redhat.com/show_bug.cgi?id=1220347#c2) but some thing that can be controlled in gluster would help.
09:52 glusterbot Bug 1220347: high, unspecified, ---, bugs, NEW , Read operation on a file which is in split-brain condition is successful
09:52 pranithk ndevos: Can we do selective cache invalidation like we do for fuse?
09:52 ndevos pranithk: you only want to flush the cache for data?
09:52 pranithk ndevos: that would help, because readv will come till afr and then afr can fail it with EIO
09:53 ndevos pranithk: check "man 5 proc" and search for drop_caches
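A minimal sketch of the drop_caches approach ndevos describes above (Linux only, root required; the non-root fallback below is an addition for illustration, not from the log):

```shell
# Flush kernel caches on the NFS client, per proc(5):
#   1 = page cache, 2 = dentries and inodes, 3 = both.
# Dirty pages are not dropped, so write them back first.
sync
# Writing the sysctl needs root; print a notice instead of failing.
echo 3 > /proc/sys/vm/drop_caches 2>/dev/null \
    || echo "need root to write /proc/sys/vm/drop_caches" >&2
```

As pranithk notes right after, this flushes *everything*, not just the split-brained entries.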
09:55 itisravi ndevos: is there something on  a 'per file' basis that can be done?
09:55 pranithk ndevos: but that is by executing shell commands, is there something we can do in nfs xlator etc?
09:56 ndevos itisravi: no, not that I know of, but maybe some NFS-clients have options for that
09:56 ndevos pranithk: is it not something that gets done on the NFS-client side?
10:00 ndevos pranithk, itisravi: the gluster/nfs server does not have a special cache for all I know, or thats something I've never needed to touch before?
10:02 * itisravi thinks so too but is not sure
10:04 pranithk ndevos: hmm... so there is no way to invalidate nfs kernel client cache? Do you know who may know more about this?
10:05 badone_ joined #gluster-dev
10:05 csim_ joined #gluster-dev
10:07 anrao joined #gluster-dev
10:11 csim joined #gluster-dev
10:14 ndevos pranithk: the NFS protocol does not provide cache-invalidation notifications from server -> client
10:16 ndevos pranithk: there are mount options for NFS-clients that can set more strict timeouts, but that will always be NFS-client dependent
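The client-side mount options ndevos mentions would look something like this on a Linux client (illustrative; server path and mount point are hypothetical, and option support varies by NFS client):

```shell
# Tighten or disable caching on the NFS client at mount time, per nfs(5):
#   actimeo=0        - attribute cache timeout of zero
#   lookupcache=none - never cache directory lookup results
# (noac is a stricter alternative to actimeo=0, but also forces
# synchronous writes)
mount -t nfs -o vers=3,actimeo=0,lookupcache=none \
    server:/gv0 /mnt/gluster
```

This only shortens how long stale data can be served; as ndevos says, the NFS protocol itself has no server-initiated invalidation.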
10:20 spalai joined #gluster-dev
10:24 pranithk ndevos: thanks ndevos :-)
10:30 ndevos pranithk: we can do cache-invalidation in NFS-ganesha though, I'm not sure how afr could trigger that, soumya might have an idea
10:30 ndevos pranithk: you would need something like upcall, but triggered from the client-xlator stack, right?
10:31 nbalacha joined #gluster-dev
10:39 soumya ndevos, right we could do it for nfs-ganesha...but  pranithk and itisravi just spoke to me..seems like they need it to be communicated to nfs-client as well
10:39 soumya and as you have mentioned NFS server doesn't have such facility..
10:40 soumya one of the suggestions rjoseph provided is to change mtime/ctime of the file so that nfs-client will not read from cache next time it does getattr...
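rjoseph's mtime suggestion can be sketched locally (temp file is a stand-in; on a real setup the mtime bump would happen on the server, so the client's next GETATTR sees a change and drops its cached pages for that file):

```shell
# Demonstrate that bumping mtime changes what GETATTR would return.
f=$(mktemp)
echo data > "$f"
before=$(stat -c %Y "$f")    # mtime, seconds since the epoch
sleep 1
touch -m "$f"                # bump mtime only; an NFS client comparing
                             # attributes would now invalidate its cache
after=$(stat -c %Y "$f")
echo "mtime advanced by $((after - before))s"
rm -f "$f"
```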
10:40 kkeithley joined #gluster-dev
10:45 nbalacha joined #gluster-dev
10:47 ira joined #gluster-dev
10:48 RajeshReddy joined #gluster-dev
10:49 soumya ndevos, RajeshReddy  from QE has seen a strange issue with mounts using gNFS..
10:50 soumya copy of a file to the mount point give Remote I/O error ..though its successful at the backend
10:50 ira joined #gluster-dev
10:50 soumya any idea if this issue seen before?
10:50 kshlm joined #gluster-dev
10:54 ndevos soumya: I have seen that before, and it was fixed pretty quickly too
10:56 ndevos soumya: bug 1210338 was what I was thinking of
10:56 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1210338 unspecified, unspecified, ---, rabhat, CLOSED CURRENTRELEASE, file copy operation fails on nfs
10:57 * ndevos will have lunch now, will be back later
11:06 kkeithley joined #gluster-dev
11:09 hagarth joined #gluster-dev
11:23 nkhare joined #gluster-dev
11:33 soumya ndevos, thanks for the link..but RajeshReddy has run into the issue with the latest 3.7 release
11:36 spalai joined #gluster-dev
11:40 hagarth joined #gluster-dev
11:41 kkeithley joined #gluster-dev
11:46 kkeithley ndevos: scrum?
11:51 tigert hey
11:51 tigert who broke the download LATEST link?
11:51 * tigert merged the change to fix it on a merge req on the site,
11:51 tigert but
11:51 tigert shouldnt http://download.gluster.org/pub/gluster/glusterfs/LATEST/ always work?
11:51 tigert http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/ < this is wrong IMHO
11:52 tigert it will work in a moment once the site is updated in a hour,
11:52 tigert but I think this is wrong way to do this
11:52 tigert shouldnt LATEST always point to the latest no matter what version it is
11:52 tigert ?
11:53 tigert or is there a reason why it is like that?
11:55 soumya joined #gluster-dev
11:58 rafi1 joined #gluster-dev
11:58 ndevos REMINDER: Gluster Bug Triage meeting starts in a few minutes in #gluster-meeting
12:01 krishnan_p joined #gluster-dev
12:02 csim tigert: I fixed it
12:08 tigert csim: ok
12:08 * tigert re-fixes the web
12:09 tigert thanks
12:10 csim but maybe it should be automated somehow
12:10 tigert so what happened? did someone change the download server paths
12:10 tigert or what?
12:10 csim no idea, the symlink was wrong
12:10 soumya joined #gluster-dev
12:10 tigert yeah, I guess when one does a release the path should be updated
12:11 tigert the source code is kinda secondary link anyway, but it should still work though
12:11 tigert since the primary thing is the packaged gluster
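The release-time fix tigert and csim are circling around amounts to repointing a single symlink; a sketch with a throwaway directory standing in for the download server layout (paths hypothetical):

```shell
# Recreate the download.gluster.org layout and repoint LATEST
# whenever a release is published, so .../glusterfs/LATEST/
# never goes stale.
repo=$(mktemp -d)                 # stand-in for pub/gluster/glusterfs
mkdir -p "$repo/3.7/3.7.1"
ln -sfn 3.7/3.7.1 "$repo/LATEST"  # -n replaces the link itself,
                                  # not a file inside its old target
readlink "$repo/LATEST"
```

As csim says, wiring this into the release process would avoid the manual step being forgotten.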
12:12 nkhare joined #gluster-dev
12:15 atalur joined #gluster-dev
12:16 poornimag joined #gluster-dev
12:22 kkeithley joined #gluster-dev
12:28 shubhendu joined #gluster-dev
12:33 ndevos overclk: you are still the maintainer for geo-replication in the MAINTAINERS file, should Aravinda not get added there too?
12:34 ndevos overclk: hagarth is working on updating the MAINTAINERS file, you can tell him if you need  it changed
12:47 overclk ndevos, yeh sure. I spoke to hagarth. Once Aravinda confirms, the necessary changes would be done.
12:48 itisravi_ joined #gluster-dev
12:48 ndevos overclk: ok :)
12:48 kkeithley joined #gluster-dev
12:55 rafi joined #gluster-dev
12:59 dlambrig joined #gluster-dev
13:02 atalur joined #gluster-dev
13:04 shyam joined #gluster-dev
13:05 pppp joined #gluster-dev
13:07 shubhendu joined #gluster-dev
13:11 ndevos hagarth: would your overlay xlator be able to read from a read-only (NFS) filesystem and store changes on bricks?
13:12 kdhananjay joined #gluster-dev
13:12 hagarth ndevos: overlay would be a translator above dht. so we would need a nfs client translator to read from a ro NFS export.
13:18 mikemol joined #gluster-dev
13:18 rafi joined #gluster-dev
13:19 ndevos hagarth: hmm, okay, sounds reasonable
13:25 spalai joined #gluster-dev
13:29 hagarth ndevos: do you plan to rebase this - http://review.gluster.org/#/c/10803 ?
13:30 hagarth damn, stripe.t nuked regression run for this.
13:44 pousley joined #gluster-dev
13:49 ndevos hagarth: hmm, I want that change to get in yes, but does that need a rebase?
13:49 ndevos oh, and someone posted a patch for stripe... /me checks
13:49 hagarth ndevos: actually it might be good to rebase after Jeff's stripe patch gets picked up.
13:50 ndevos hagarth: would http://review.gluster.org/11037 be the fix for that?
13:50 hagarth ndevos: indeed
13:50 hagarth I also noticed that we moved from 10k to 11k in gerrit in 60 days or so :)
13:51 ndevos yes, the number of patches that get sent seems to grow *fast*
13:52 hagarth yeah! by the time I review one patch and come back, I can no longer see the same patch in the top 20 or so.
14:06 jdarcy joined #gluster-dev
14:08 ashiq joined #gluster-dev
14:24 nkhare joined #gluster-dev
14:41 shubhendu joined #gluster-dev
14:54 rafi joined #gluster-dev
14:55 ira joined #gluster-dev
14:58 RajeshReddy joined #gluster-dev
15:06 RajeshReddy joined #gluster-dev
15:10 soumya joined #gluster-dev
15:16 nbalacha joined #gluster-dev
15:19 jiffin joined #gluster-dev
15:26 RajeshReddy joined #gluster-dev
15:29 spalai left #gluster-dev
15:39 shubhendu joined #gluster-dev
15:43 krishnan_p joined #gluster-dev
16:03 rafi joined #gluster-dev
16:04 rafi joined #gluster-dev
17:06 Gaurav__ joined #gluster-dev
17:06 hagarth joined #gluster-dev
17:25 jbautista- joined #gluster-dev
17:31 jbautista- joined #gluster-dev
17:54 jdarcy joined #gluster-dev
18:19 jiffin joined #gluster-dev
18:23 soumya joined #gluster-dev
18:57 anrao joined #gluster-dev
19:16 pousley joined #gluster-dev
20:22 badone_ joined #gluster-dev
21:10 kkeithley_bat semiosis: to build a gluster in/for the ppa, I would build a glusterfs-3.x.y-1.dsc file (same as building for debian, up to the debuild -S -sa), then use ` dput ppa:gluster/glusterfs-3.6 glusterfs_3.x.y-1.dsc` to have launchpad build?
21:13 kkeithley_bat Or do you just dput the debian-built .dsc file?
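For reference, the Launchpad flow kkeithley_bat is asking about usually looks like this (version and PPA name taken from his message, `3.x.y` left as-is; not runnable here). One detail worth noting: dput is conventionally given the `_source.changes` file produced by debuild, which in turn references the .dsc:

```shell
# Build a source-only upload from the debianized tree, then push it
# to the Launchpad PPA; Launchpad performs the binary builds itself.
cd glusterfs-3.x.y
debuild -S -sa    # -S: source package only, -sa: include orig tarball
cd ..
dput ppa:gluster/glusterfs-3.6 glusterfs_3.x.y-1_source.changes
```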
21:38 badone_ joined #gluster-dev
21:41 jdarcy left #gluster-dev
22:23 dlambrig left #gluster-dev
22:25 purpleidea joined #gluster-dev
22:25 purpleidea joined #gluster-dev
