IRC log for #gluster-dev, 2016-05-19

All times shown according to UTC.

Time Nick Message
01:06 penguinRaider joined #gluster-dev
01:26 EinstCrazy joined #gluster-dev
01:48 ilbot3 joined #gluster-dev
01:48 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:00 EinstCra_ joined #gluster-dev
02:01 luizcpg joined #gluster-dev
02:33 ira joined #gluster-dev
02:42 nishanth joined #gluster-dev
02:59 EinstCrazy joined #gluster-dev
03:35 hagarth joined #gluster-dev
03:39 jobewan joined #gluster-dev
03:40 aravindavk joined #gluster-dev
03:44 luizcpg_ joined #gluster-dev
03:52 atinm joined #gluster-dev
03:53 gem joined #gluster-dev
03:54 shubhendu joined #gluster-dev
03:59 penguinRaider joined #gluster-dev
04:02 vshankar joined #gluster-dev
04:03 poornimag joined #gluster-dev
04:15 nbalacha joined #gluster-dev
04:25 raghug joined #gluster-dev
04:29 luizcpg joined #gluster-dev
04:30 rastar joined #gluster-dev
04:33 itisravi joined #gluster-dev
04:38 nbalacha joined #gluster-dev
04:39 luizcpg joined #gluster-dev
04:40 jiffin joined #gluster-dev
04:47 luizcpg joined #gluster-dev
04:53 sakshi joined #gluster-dev
04:58 Apeksha joined #gluster-dev
04:59 jobewan joined #gluster-dev
05:01 aspandey joined #gluster-dev
05:03 skoduri joined #gluster-dev
05:03 nishanth joined #gluster-dev
05:03 ndarshan joined #gluster-dev
05:05 prasanth joined #gluster-dev
05:12 ppai joined #gluster-dev
05:13 hchiramm joined #gluster-dev
05:15 kshlm joined #gluster-dev
05:15 mchangir joined #gluster-dev
05:21 pkalever joined #gluster-dev
05:22 Saravanakmr joined #gluster-dev
05:22 pkalever left #gluster-dev
05:23 pkalever joined #gluster-dev
05:24 nishanth joined #gluster-dev
05:27 kdhananjay joined #gluster-dev
05:29 kotreshhr joined #gluster-dev
05:36 Bhaskarakiran joined #gluster-dev
05:41 hgowtham joined #gluster-dev
05:42 jobewan joined #gluster-dev
05:48 atinm joined #gluster-dev
05:58 aravindavk joined #gluster-dev
06:06 sakshi joined #gluster-dev
06:12 asengupt joined #gluster-dev
06:12 vimal joined #gluster-dev
06:12 spalai joined #gluster-dev
06:13 itisravi joined #gluster-dev
06:15 nishanth joined #gluster-dev
06:18 pkalever joined #gluster-dev
06:21 atinm joined #gluster-dev
06:26 aravindavk joined #gluster-dev
06:40 ndarshan joined #gluster-dev
06:43 hagarth noticing these logs in 3.8rc0
06:43 hagarth [2016-05-19 06:37:46.002488] E [MSGID: 113091] [posix.c:179:posix_lookup] 0-repl-posix: null gfid for path (null)
06:43 hagarth [2016-05-19 06:37:46.002508] W [MSGID: 113018] [posix.c:197:posix_lookup] 0-repl-posix: lstat on null failed [Invalid argument]
06:45 hagarth any idea why we are seeing this?
06:52 EinstCrazy joined #gluster-dev
06:56 EinstCrazy joined #gluster-dev
06:58 EinstCrazy joined #gluster-dev
07:01 EinstCrazy joined #gluster-dev
07:09 mchangir joined #gluster-dev
07:15 gvandeweyer joined #gluster-dev
07:16 gvandeweyer small question, could I do harm by exporting replicated, non-distributed bricks directly through native NFS, in read-only mode, instead of accessing them through gluster/glusterNFS ? It's for access on some low-memory machines that can't handle the 5G caching we have...
07:26 rraja joined #gluster-dev
07:30 jiffin gvandeweyer: it is not recommended, there will be a lot of context switches, which reduces performance
07:30 jiffin and also potential memory deadlocks
07:36 gvandeweyer hmm. too bad.
07:36 gvandeweyer jiffin: and would the GlusterNFS client support the -nolocks mount option?
07:43 mchangir joined #gluster-dev
07:43 nishanth joined #gluster-dev
07:57 EinstCra_ joined #gluster-dev
08:03 penguinRaider joined #gluster-dev
08:03 rastar_ joined #gluster-dev
08:04 k4n0 joined #gluster-dev
08:05 rjoseph Can someone merge these patches? http://review.gluster.org/#/c/14109/  and http://review.gluster.org/#/c/14098/
08:07 kshlm joined #gluster-dev
08:19 mchangir joined #gluster-dev
08:26 skoduri joined #gluster-dev
08:28 atalur joined #gluster-dev
08:38 EinstCrazy joined #gluster-dev
08:39 atinm joined #gluster-dev
08:43 penguinRaider joined #gluster-dev
08:47 shubhendu joined #gluster-dev
08:52 nishanth joined #gluster-dev
08:53 ndarshan joined #gluster-dev
08:54 kshlm joined #gluster-dev
09:00 jiffin gvandeweyer: it worked for me
09:00 jiffin mount -t nfs -o nolock 10.70.1.14:/dis /mnt/nfs/1/
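jiffin's one-liner can be extended for gvandeweyer's read-only, low-memory use case; the host address and export path below are carried over from jiffin's example and would be placeholders in practice:

```shell
# Mount a Gluster/NFS (NFSv3) export without the NLM lock protocol;
# 'ro' additionally keeps the mount read-only on the client side.
# Host and export path are placeholders.
mount -t nfs -o ro,nolock,vers=3 10.70.1.14:/dis /mnt/nfs/1/

# Equivalent /etc/fstab entry:
# 10.70.1.14:/dis  /mnt/nfs/1  nfs  ro,nolock,vers=3  0 0
```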
09:07 skoduri joined #gluster-dev
09:10 rastar_ joined #gluster-dev
09:18 atinm joined #gluster-dev
09:18 penguinRaider joined #gluster-dev
09:22 rastar joined #gluster-dev
09:27 nishanth joined #gluster-dev
09:31 jiffin1 joined #gluster-dev
09:36 EinstCrazy joined #gluster-dev
09:44 mchangir joined #gluster-dev
09:45 aravindavk joined #gluster-dev
09:52 aravindavk joined #gluster-dev
09:56 pkalever1 joined #gluster-dev
09:56 penguinRaider joined #gluster-dev
09:56 pkalever1 left #gluster-dev
09:59 jiffin1 joined #gluster-dev
10:02 mchangir joined #gluster-dev
10:11 shubhendu_ joined #gluster-dev
10:15 rafi joined #gluster-dev
10:19 rafi joined #gluster-dev
10:23 bfoster joined #gluster-dev
10:23 ndarshan joined #gluster-dev
10:24 nishanth joined #gluster-dev
10:26 aravindavk joined #gluster-dev
10:27 rafi joined #gluster-dev
10:48 hgowtham joined #gluster-dev
10:52 mchangir joined #gluster-dev
10:58 ndevos jiffin: I just hit a Gluster/NFS bug on 3.8 while trying to install a VM for testing FSAL_VFS on Fedora 23...
10:59 ndevos http://review.gluster.org/14421 in case you care about it, but I'll probably fix that later today
10:59 ndevos and FSAL_VFS will need to wait a little
10:59 kotreshhr joined #gluster-dev
11:02 darshan joined #gluster-dev
11:07 kotreshhr left #gluster-dev
11:12 aravindavk joined #gluster-dev
11:20 jiffin ndevos: K
11:21 Debloper joined #gluster-dev
11:28 Debloper joined #gluster-dev
11:37 jiffin joined #gluster-dev
11:39 ira joined #gluster-dev
11:48 jiffin ndevos: do you have any steps to reproduce the FSAL_VFS issue?
11:49 jiffin i ran pynfs suite for open calls in FSAL_VFS, it worked without crash
11:53 mchangir joined #gluster-dev
12:11 prasanth joined #gluster-dev
12:20 gvandeweyer left #gluster-dev
12:20 poornimag joined #gluster-dev
12:23 ndevos jiffin: no, I do not know the steps to reproduce it; if mounting and using the export works, well, that's fine then :)
12:24 jiffin ndevos: it works fine
12:25 ndevos jiffin: ok, then you can just push the packages to stable, I guess
12:25 jiffin ndevos: sure will do
12:26 jiffin ndevos: can we get further details, like ganesha.log, core file etc?
12:27 ndevos jiffin: not sure, I do not think there was a bug filed, and the coredump was gathered anonymously
12:27 jiffin ndevos: hmm
12:30 spalai left #gluster-dev
12:48 luizcpg joined #gluster-dev
12:53 ppai ndevos, Hi. Do you have permissions to edit repo description on github ?
12:53 ndevos ppai: yes, I think so
12:53 ndevos ppai: what repo?
12:54 ppai ndevos, https://github.com/gluster/glusterdocs
12:54 ndevos ppai: yes, I can edit that
12:55 ppai ndevos, Could you add this to the description - "This repo had its git history re-written recently. Please create a fresh fork or clone"
12:55 ppai ndevos: If you could word it better, that'd be good too
12:56 ndevos ppai: done, just copy/pasted it
12:56 ppai ndevos: thanks
12:56 ppai ndevos++
12:56 glusterbot ppai: ndevos's karma is now 264
12:56 ndevos ppai: maybe mention the date, instead of 'recently'?
12:57 ppai ndevos, yes! "19 May 2016 12:30 PM UTC"
12:57 ndevos ppai: check again, hows that?
12:58 ppai ndevos, lgtm
12:58 ndevos ppai: ok, great :)
13:00 Apeksha joined #gluster-dev
13:17 shyam joined #gluster-dev
13:33 hgowtham joined #gluster-dev
13:36 pranithk1 joined #gluster-dev
13:38 hagarth joined #gluster-dev
13:50 jiffin joined #gluster-dev
13:57 itisravi joined #gluster-dev
13:57 kkeithley ndevos: I wonder if autoconf, etc., tools were updated on build.gluster.org recently.  The configure in 3.7.11.tar.gz does not try to invoke config.sub while (as we know) the one in 3.8rc1.tar.gz does.
13:58 kkeithley The respective configure.ac files are exactly the same in the area around where the config.sub invocation appears in the 3.8rc1 configure script
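The discrepancy kkeithley describes can be checked directly from the two release tarballs; a rough sketch (the tarball and directory names are assumptions, not taken from the log):

```shell
# Extract both release tarballs and compare where the generated
# configure scripts invoke config.sub (filenames are assumptions).
tar xzf glusterfs-3.7.11.tar.gz
tar xzf glusterfs-3.8rc1.tar.gz
grep -n 'config\.sub' glusterfs-3.7.11/configure
grep -n 'config\.sub' glusterfs-3.8rc1/configure

# A different autoconf on the build host would also show up in the
# header of the generated script:
grep -i 'generated by gnu autoconf' glusterfs-3.7.11/configure
grep -i 'generated by gnu autoconf' glusterfs-3.8rc1/configure
```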
13:59 kkeithley misc, nigelb:  were there any updates to build.gluster.org recently?  Particularly the autoconf, automake, libtool or pkgconfig?
14:00 * misc check
14:00 misc kkeithley: on the server itself, not by rpm
14:00 misc no new rpm since the 15 of march
14:01 misc kkeithley: but is the build done on the master itself ?
14:01 kkeithley wherever 'release' jobs run. Isn't that on the master?
14:03 * nigelb hasn't touched a thing.
14:04 misc kkeithley: yep
14:04 misc and so no, this wasn't touched
14:04 kkeithley okay, thanks. we can rule that out
14:04 dlambrig joined #gluster-dev
14:06 misc now, I also assume that's run in mock
14:06 misc and maybe the mock chroot was changed, since it is recreated from scratch
14:17 kkeithley oh, good point
14:21 kkeithley oh, good point about running in mock
14:24 ndevos kkeithley: I don't know where it comes from... it's weird
14:24 ndevos kkeithley: we don't run 'make dist' in mock, mock is for RPMs
14:25 kkeithley the tarball that comes out of a 'release' job is the dist tarball IIRC
14:25 ndevos yes, correct
14:26 kkeithley which is the same dist tarball used to build RPMs with `rpmbuild -ta $tarfile`
14:26 ndevos the git checkout of the tag is done on build.gluster.org, then it runs ./autogen.sh && ./configure && make dist
14:26 kkeithley in mock?
14:26 kkeithley oh, no.  nm
14:26 ndevos no, mock is used for building rpms...
14:28 ndevos we could probably copy the git repository into a mock chroot, create the tarball there, and copy the tarball out of the chroot again... but that'll take a while to get done correctly
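ndevos's idea of producing the dist tarball inside a mock chroot could look roughly like this; the chroot config name and in-chroot paths are assumptions, not an agreed procedure:

```shell
# Sketch: run autogen/configure/make dist inside a mock chroot so the
# autotools versions are pinned by the chroot, not by the build host.
# The chroot config name and paths are assumptions.
mock -r epel-7-x86_64 --init
mock -r epel-7-x86_64 --copyin ./glusterfs /builddir/glusterfs
mock -r epel-7-x86_64 --chroot \
    'cd /builddir/glusterfs && ./autogen.sh && ./configure && make dist'
mock -r epel-7-x86_64 --copyout /builddir/glusterfs/glusterfs-3.8rc1.tar.gz .
```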
14:38 gem joined #gluster-dev
14:50 prasanth joined #gluster-dev
14:55 skoduri joined #gluster-dev
15:12 hagarth joined #gluster-dev
15:20 josferna joined #gluster-dev
15:45 luizcpg joined #gluster-dev
15:58 josferna joined #gluster-dev
16:15 josferna joined #gluster-dev
16:17 jiffin joined #gluster-dev
16:19 atalur joined #gluster-dev
16:21 xenthree3 joined #gluster-dev
16:21 xenthree3 left #gluster-dev
16:36 shubhendu_ joined #gluster-dev
16:37 spalai joined #gluster-dev
16:58 jiffin1 joined #gluster-dev
17:15 nishanth joined #gluster-dev
17:19 jiffin joined #gluster-dev
17:22 jiffin1 joined #gluster-dev
17:29 hagarth1 joined #gluster-dev
17:31 mchangir joined #gluster-dev
17:43 luizcpg joined #gluster-dev
18:03 jiffin1 joined #gluster-dev
18:09 shubhendu joined #gluster-dev
18:26 purpleidea joined #gluster-dev
18:26 purpleidea joined #gluster-dev
18:26 kkeithley joined #gluster-dev
18:26 ndevos joined #gluster-dev
18:26 ndevos joined #gluster-dev
18:28 jtc joined #gluster-dev
18:31 shubhendu joined #gluster-dev
18:34 dlambrig left #gluster-dev
19:10 shubhendu joined #gluster-dev
19:32 shaunm joined #gluster-dev
19:51 luizcpg joined #gluster-dev
20:06 penguinRaider joined #gluster-dev
20:07 pranithk1 joined #gluster-dev
20:10 luizcpg_ joined #gluster-dev
20:32 shyam joined #gluster-dev
20:57 luizcpg joined #gluster-dev
21:24 fcp14 joined #gluster-dev
21:24 fcp14 hi
21:25 fcp14 please, I have a Debian VM with only the base system installed, and I created /mnt/cluster/files without mkfs.xfs because the virtual machine only has one disk. Can I mount the remote GlusterFS server with my glusterfs-client?
21:26 fcp14 or can I only do that with a volume created with mkfs.xfs?
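For fcp14's question: mkfs.xfs is only needed on the servers' brick filesystems; a client mounting a remote volume needs no local XFS at all. A minimal sketch (server hostname and volume name are placeholders):

```shell
# On the Debian client: no mkfs needed, just the client package and a
# mount point. Server hostname and volume name are placeholders.
apt-get install glusterfs-client
mkdir -p /mnt/cluster/files
mount -t glusterfs server1:/myvolume /mnt/cluster/files
```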
21:26 fcp14 thanks for the help
21:35 luizcpg joined #gluster-dev
22:51 gtobon joined #gluster-dev