IRC log for #gluster-dev, 2014-07-31


All times shown according to UTC.

Time Nick Message
00:14 shyam joined #gluster-dev
00:17 MacWinner joined #gluster-dev
00:41 bala joined #gluster-dev
00:49 pranithk joined #gluster-dev
00:50 pranithk JoeJulian: Did you get a chance to look at the comment on the bug?
01:19 awheeler joined #gluster-dev
01:51 bala joined #gluster-dev
01:55 hagarth joined #gluster-dev
02:03 pranithk joined #gluster-dev
02:03 pranithk left #gluster-dev
02:56 bharata-rao joined #gluster-dev
03:09 pranithk joined #gluster-dev
03:09 pranithk left #gluster-dev
03:16 nishanth joined #gluster-dev
03:24 spandit joined #gluster-dev
03:37 lalatenduM joined #gluster-dev
03:44 shubhendu joined #gluster-dev
03:44 jobewan joined #gluster-dev
03:47 Humble joined #gluster-dev
03:48 itisravi joined #gluster-dev
03:52 ndk joined #gluster-dev
03:53 kanagaraj joined #gluster-dev
04:07 kdhananjay joined #gluster-dev
04:22 atalur joined #gluster-dev
04:24 Rafi_kc joined #gluster-dev
04:24 anoopcs joined #gluster-dev
04:39 jiffin joined #gluster-dev
04:40 aravindavk joined #gluster-dev
04:47 ppai joined #gluster-dev
04:49 atinmu joined #gluster-dev
04:59 ndarshan joined #gluster-dev
05:01 JoeJulian [2014-07-24 05:23:44.931310] W [fuse-bridge.c:3932:fuse_migrate_fd_open] 0-glusterfs-fuse: name-less lookup of gfid (b7e794e9-67c0-4d47-9ee5-2b41d1de792f) failed (Invalid argument)(old-subvolume:gv-nova-3 new-subvolume:gv-nova-4)
05:02 JoeJulian Does that not start with 0 like everything else? Are gv-nova-3 and gv-nova-4 actually Brick3 and Brick4?
05:05 bala joined #gluster-dev
05:12 lalatenduM joined #gluster-dev
05:16 pranithk joined #gluster-dev
05:19 pranithk JoeJulian: fuse never switches graphs, so it always has graph number 0. Other things change graphs. gv-nova-3 and gv-nova-4 are graphs with ids 3 and 4 respectively
05:21 JoeJulian Ah, ok. I see. So it's not about where the fd is pointing to, it's about which "version" of the graph it's associated with. I presume when everything is migrated off gv-nova-3, it would be deleted?
05:22 pranithk JoeJulian: it is supposed to, but I am not sure the deletion code is present in 3.4.
05:22 JoeJulian Oh, that explains a lot!
05:22 pranithk JoeJulian: It is not disabled properly according to the logs you attached. There shouldn't be any connects to old bricks otherwise
05:22 JoeJulian pretty cool, actually.
05:23 JoeJulian right
05:23 pranithk JoeJulian: Any chance you will update brick logs today?
05:24 pranithk s/update/upload
05:24 JoeJulian Sorry, yes. I was trying to hunt down those gfids, but they don't exist anymore.
05:25 pranithk JoeJulian: Interesting... hmm... :-(
05:29 JoeJulian I did at least find a corresponding do_fd_cleanup. It's uploading now.
05:34 pranithk JoeJulian: I am interested in any lookup failures in brick logs which could have failed with EINVAL
05:37 JoeJulian It's going to have the string version of that error in the logs, isn't it? I'm trying to build a regex that I can grep all the logs for because apparently we don't ever rotate brick logs. :/
05:40 * JoeJulian grumbles about ubuntu again...
05:46 aravindavk joined #gluster-dev
05:46 pranithk JoeJulian: I don't see any invalid argument errors on the log attached :-(
05:47 JoeJulian I'm grepping all of the brick logs for "lookup.*Invalid argument"
05:49 pranithk JoeJulian: grep -i invalid <log-file>
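A minimal sketch of the search being discussed, combining JoeJulian's pattern with pranithk's case-insensitive suggestion; the /var/log/glusterfs/bricks/ path and log name are taken from the lines JoeJulian pastes a little further down:

    # case-insensitive sweep of all brick logs for lookups failing with EINVAL
    grep -iE 'lookup.*invalid argument' /var/log/glusterfs/bricks/*.log
    # pranithk's broader net, catching any "invalid" in a single log
    grep -i invalid /var/log/glusterfs/bricks/gluster-brick04-nova.log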
06:00 lalatenduM joined #gluster-dev
06:07 JoeJulian Only one invalid argument in server_alloc_frame and it's the wrong timeframe. Callback looks like it's for server3_3_getxattr. Any value?
06:11 pranithk JoeJulian: nope :-(
06:14 JoeJulian pranithk: Can you add the repro to the bug? Maybe I can dig something more out of that.
06:15 vpshastry joined #gluster-dev
06:19 JoeJulian There's a number of EPERM within seconds of the graph change: /var/log/glusterfs/bricks/gluster-brick04-nova.log:[2014-07-24 05:23:48.596396] E [marker.c:2080:marker_setattr_cbk] 0-gv-nova-marker: Operation not permitted occurred during setattr of <nul>
06:19 JoeJulian /var/log/glusterfs/bricks/gluster-brick04-nova.log:[2014-07-24 05:23:48.596473] I [server-rpc-fops.c:1778:server_setattr_cbk] 0-gv-nova-server: 678: SETATTR /instances/c9252e6e-3e35-48c9-8388-5e58df6d258f/disk.local (c8d7f5bc-e626-40a6-966e-d5b760735a10) ==> (Operation not permitted)
06:20 pranithk JoeJulian: The way I re-created the crash is by changing the code to fail fd-migration always...
06:20 JoeJulian Heh, ok.
06:20 pranithk JoeJulian: I was not able to re-create fd-migration failure in any sane way :-)
06:20 pranithk JoeJulian: That is why I was curious as to why fd-migrations failed...
06:20 aravindavk joined #gluster-dev
06:20 Humble joined #gluster-dev
06:23 pranithk JoeJulian: According to the log you attached I figured out the fd-migration failed because of lookup failure with Invalid Argument....
06:24 pranithk JoeJulian: So was wondering when that could have happened, but so far no clue :-(
06:28 kanagaraj_ joined #gluster-dev
06:30 Humble joined #gluster-dev
06:43 kanagaraj joined #gluster-dev
06:52 JoeJulian pranithk: The only fds that failed were the fds without a path.
06:54 pranithk JoeJulian: You mean the files don't exist on the backend anymore?
06:55 JoeJulian They don't now at least, but every one that failed on the client looks like "I [server-helpers.c:463:do_fd_cleanup] 0-gv-nova-server: fd cleanup on <gfid:..."
06:55 JoeJulian And only the ones that reference gfid failed.
06:56 pranithk JoeJulian: interesting... for now I will fix the crash then.
06:56 pranithk JoeJulian: I will check once to see what will happen if we unlink the file and do fd-migration...
06:57 pranithk JoeJulian: Anything more I can help with the issue?
06:57 JoeJulian Not that I know of. I'm just looking for patterns.
06:57 JoeJulian I should probably go to bed though. It's midnight.
06:57 pranithk JoeJulian: yes :-)
06:58 JoeJulian Thanks for all your help.
06:58 pranithk JoeJulian: Thanks for all the help Joe.
06:58 pranithk JoeJulian: :-)
07:57 Humble joined #gluster-dev
07:58 ndevos hey lalatenduM, Humble: do you have any objections for me releasing 3.5.2?
07:58 ndevos http://review.gluster.org/8339 could use a review, but it should be OK as is
07:58 lalatenduM ndevos, nope, let me check if the libgfapi patch is in or not
07:59 lalatenduM ndevos, cool, you have merged gluster-nagios-addons
07:59 lalatenduM oops
07:59 lalatenduM copy paste error
08:00 ndevos lalatenduM: just merged it, see bug 1124728
08:00 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1124728 urgent, unspecified, ---, pgurusid, POST , SMB: CIFS mount fails with the latest glusterfs rpm's
08:00 lalatenduM ndevos, yup
08:00 lalatenduM ndevos, but the cppcheck fixes are not in yet
08:00 ndevos lalatenduM: yeah, they have not been merged in master
08:00 lalatenduM ndevos, I think they have sufficient +1s
08:01 lalatenduM hagarth, ^^ need ur help merging the cppcheck fixes by kaleb
08:01 lalatenduM ndevos, it is stopping gluster packages from getting into Ubuntu
08:01 ndevos lalatenduM: I'd like to include them, but will likely get that done in 3.5.3
08:02 lalatenduM ndevos, looks like that
08:02 ndevos lalatenduM: ubuntu can ship 3.4.5, that should include the fixes
08:04 lalatenduM ndevos, right
08:05 ndevos anyone know the bug where the ec-xlator requires MMX? It causes the nightly builds for Fedora/Rawhide/i386 to fail :-/
08:05 ndevos xavih: ^
08:06 lalatenduM ndevos, did not get the link between MMX and ec-xlator + fedora, is the mail in fedora-devel?
08:07 lalatenduM ndevos, we haven't built 3.6 for rawhide yet
08:07 lalatenduM and I think ec is in 3.6
08:08 ndevos lalatenduM: the nightly builds get done on EPEL-5,6,7 and Fedora 19,20,21,Rawhide for x86_64 and i386
08:09 ndevos and the different release-* branches, and master
08:09 lalatenduM ndevos, got it, you mean Gluster Jenkins job
08:09 lalatenduM I was thinking abt koji
08:09 ndevos lalatenduM: no, COPR - master branch here: http://copr.fedoraproject.org/coprs/devos/glusterfs/builds/
08:10 lalatenduM ndevos, which spec file you use for this?
08:10 ndevos lalatenduM: http://download.gluster.org/pub/gluster/glusterfs/nightly/ for the results
08:11 ndevos lalatenduM: the one from the gluster sources, not the fedora one
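As a rough sketch of what such a build from the gluster sources looks like — the repo URL (the GitHub mirror) is an assumption, and it presumes the generated glusterfs.spec is bundled into the dist tarball so rpmbuild -ta can find it:

    git clone https://github.com/gluster/glusterfs.git && cd glusterfs
    ./autogen.sh && ./configure && make dist   # produces glusterfs-*.tar.gz with the spec inside
    rpmbuild -ta glusterfs-*.tar.gz            # -ta builds from the spec found in the tarball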
08:11 lalatenduM ndevos, cool
08:12 ndevos hmm, and I seem to be missing automated builds for 3.6... I thought I added those :-/
08:12 lalatenduM ndevos, yup saw that 3.6 directory is empty
08:13 ndevos lalatenduM: I plan to post the scripts that do the builds on the BugZappers project in the forge
08:13 lalatenduM ndevos, cool, I was going to ask you abt that
08:13 ndevos lalatenduM: or did we think of a different name?
08:14 lalatenduM ndevos, check https://github.com/LalatenduMohanty/gluster-rpm-packager-utils
08:14 lalatenduM ndevos, Humble and me are working on some scripts
08:14 ndevos lalatenduM: ah, cool
08:15 lalatenduM ndevos, but I am nit sure what name would be common to bugZappers and build scripts :(
08:15 ndevos lalatenduM: the main issue between the fedora and upstream .spec is that the regression-tests subpackage isn't acceptable in its current form for fedora
08:15 lalatenduM s/nit/not/
08:16 lalatenduM ndevos, didn't get it, what is Fedora's expectation?
08:16 ndevos lalatenduM: I'd like to have multiple repositories for different tasks in the BugZappers project
08:17 ndevos like: triaging scripts, release/on_qa/close scripts, complex automated tests, ...
08:18 ndevos lalatenduM: the regression-tests subpackage does not cleanly uninstall after tests have been run
08:18 ndevos lalatenduM: oh, and that it destroys the current config - that might not be very user friendly
08:19 lalatenduM ndevos, it seems it is broken in a way
08:19 xavih ndevos: I'm aware of it. I'm currently working to make it independent of intel's SSE2 extensions
08:19 ndevos xavih: yes, thats great, do you have a bug I can follow for that?
08:19 xavih ndevos: however there is a patch to temporarily fix this problem, not sure if it's already merged
08:20 xavih ndevos: no, I'll create one :P
08:20 ndevos xavih: thanks!
08:24 xavih ndevos: http://review.gluster.org/8366/ and http://review.gluster.org/8381/ should fix the problem until I finish the patch
08:24 xavih ndevos: both are already merged
08:26 ndevos xavih: hmm, the master branch errors out with this (scroll to the end): http://copr-be.cloud.fedoraproject.org/results/devos/glusterfs/fedora-rawhide-i386/glusterfs-3.7dev-0.44.git60f2e23.autobuild/build.log
08:27 ndevos or, search for ec_method_encode
08:27 xavih ndevos: is this a 32 bit build ?
08:28 ndevos xavih: yes
08:28 xavih ndevos: that's also a problem :-/
08:28 ndevos xavih: and for whatever reason, it only fails on fedora/rawhide, fedora/20 works
08:29 * ndevos guesses the compiler flags for rawhide are more strict
08:30 xavih ndevos: it seems that the compiler version on rawhide doesn't know the XMM registers
08:31 ndevos xavih: yeah, thats how far I got :D
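A hypothetical repro of the kind of construct at issue, assuming the diagnosis above is right (an asm clobber list naming an SSE register); the error text matches the bug title filed later, but compiler behaviour varies by gcc version, which would explain f20 passing while rawhide fails:

    printf 'void f(void) { __asm__ volatile ("" ::: "xmm7"); }\n' > xmm.c
    gcc -m32 -c xmm.c          # SSE not enabled: unknown register name 'xmm7' in 'asm'
    gcc -m32 -msse -c xmm.c    # compiles once the compiler knows the XMM registers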
08:32 ndevos xavih: do you have an eta for the c implementation?
08:33 xavih ndevos: no, this is another problem because ec-gf.c seems to have been compiled successfully and it also references XMM registers...
08:33 xavih ndevos: I expected to have it working at the beginning of this week, however things got more difficult than expected
08:34 ndevos xavih: I've never heard of XMM registers before, so I'll leave that up to you :)
08:34 xavih ndevos: it should be a matter of a few days. Next monday at worst
08:34 ndevos xavih: you want me to file an other bug for this?
08:35 ndevos xavih: wow, sounds good!
08:35 xavih ndevos: yes, please
08:36 xavih ndevos: I'll try a quick test. If it works I can push it in a few hours to solve the compilation problem until the C patch is ready
08:36 xavih ndevos: however I'll need to do testing in 32-bit environments, because I've never tried that and I'm not sure if all those warnings are safe
08:37 ndevos xavih: ok, currently its only the nightly builds that error out, preventing automated tests from running against the master branch - so a compile fix will be sufficient for now
08:39 xavih ndevos: Bug to track Intel's SSE2 dependencies: https://bugzilla.redhat.com/show_bug.cgi?id=1125166
08:39 glusterbot Bug 1125166: high, unspecified, ---, gluster-bugs, NEW , Current implementation depends on Intel's SSE2 extensions
08:40 Yuan_ joined #gluster-dev
08:44 ndevos xavih: filed bug 1125168
08:44 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1125168 low, high, ---, gluster-bugs, NEW , ec-method.c fails to compile in function 'ec_method_encode' due to unknown register name 'xmm7'
08:44 xavih ndevos: ok, thanks :)
08:45 ndevos xavih: I guess I'll assign it to you :)
08:45 xavih ndevos: yes, please :)
08:45 ndevos xavih: done, thanks!
08:54 deepakcs joined #gluster-dev
08:56 xavih ndevos: can you run a quick test? is it easy to run a build on fedora/rawhide with the -msse compiler option enabled?
08:57 ndevos xavih: I dont have a system setup atm...
08:58 xavih ndevos: I tried it on fedora 20 and it compiles successfully (but with warnings)
08:58 ndevos xavih: yeah, only rawhide seems to fail :-/
08:58 xavih ndevos: without it, it says that SSE is not enabled
08:58 xavih ndevos: yes, this doesn't seem the same problem... :-/
08:59 ndevos xavih: you can use 'mock' to setup a rawhide chroot, try: mock -r fedora-rawhide-i386 shell
08:59 xavih ndevos: ok, will try :)
08:59 xavih ndevos: thanks
08:59 ndevos xavih: and you can install additional packages: mock -r fedora-rawhide-i386 install gcc
09:00 ndevos xavih: but I guess the mock --help shows you all you need ;)
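Pulling the mock commands together into one minimal workflow; the --init step is an addition (a standard mock flag), the other two are as given above:

    mock -r fedora-rawhide-i386 --init         # create or refresh the rawhide i386 chroot
    mock -r fedora-rawhide-i386 install gcc    # install extra packages into the chroot
    mock -r fedora-rawhide-i386 shell          # open an interactive shell inside it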
09:00 xavih ndevos: thanks :)
09:01 ndevos xavih: its not very urgent, but we'd like to see the automated tests running again :)
09:01 xavih ndevos: yes, I'll try to see if there is a fast way to solve it or I'll concentrate on the "good" patch
09:02 ndevos xavih: works for me, thanks
09:02 xavih ndevos: yw
09:10 vimal joined #gluster-dev
09:17 ira joined #gluster-dev
09:17 pranithk ndevos: Are you going to release 3.5.2?
09:21 hchiramm ndevos, no objection  from me :)
09:22 hchiramm ndevos, but if we can include as many important fixes as we can, its better ..
09:45 ndevos pranithk: yeah, I think so
09:54 hchiramm joined #gluster-dev
09:55 pranithk ndevos: Good :-).
09:55 xavih ndevos: I've tried with mock and I've been able to reproduce the exact same problem. It's solved with the -msse option
09:55 xavih ndevos: I'll push a patch for review in a few minutes
09:58 sickness xavih: I did solve it that way too and posted here, do you remember? =_)
09:59 xavih sickness: sorry, I don't remember... you could have pushed a patch ;)
10:00 sickness xavih: I'm not a programmer, unfortunately =_)
10:01 sickness but I did pull 3.6 the first night it appeared and tried to compile it on a 32-bit linux and got that problem
10:02 sickness https://botbot.me/freenode/gluster-dev/2014-07-21/?page=1
10:02 sickness 19th of july =_)
10:02 sickness /usr/lib/gcc/i686-linux-gnu/4.6/include/xmmintrin.h:32:3: error: #error "SSE instruction set not enabled"
10:02 sickness hey I solved that with CFLAGS=-msse2 ./configure and then CFLAGS=-msse2 make ;)
10:02 sickness (because on x86 gcc doesn't enable sse2 by default)
10:02 sickness this was a known "behavior" of gcc and affected other projects too, so I found other examples on google
10:03 sickness now it compiled fine but doesn't run, but I suppose that's another problem I'll investigate further =_)
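sickness's workaround spelled out as a sketch; the /proc/cpuinfo check is an addition to confirm the build host really has SSE2 before forcing the flag:

    grep -q sse2 /proc/cpuinfo && echo "host cpu has sse2"
    CFLAGS=-msse2 ./configure    # gcc on 32-bit x86 does not enable sse2 by default
    CFLAGS=-msse2 make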
10:06 ndevos xavih: okay, so with -msse it will stay broken on non-intel architectures, but there ec is disabled anyway?
10:07 xavih ndevos: on non-intel architectures the other patch will disable the ec xlator, so it should work
10:07 xavih ndevos: I can't add you as a reviewer (I get an error). The patch is http://review.gluster.org/#/c/8395/
10:07 ndevos xavih: okay, then I understand it :)
10:08 ndevos xavih: add me by starting to type "Niels"?
10:09 xavih ndevos: yes, I've done that, but then it says: 422 Group Not Found: Niels de Vos <ndevos@redhat.com>
10:09 ndevos pranithk, Humble: a review of http://review.gluster.org/8339 would be nice
10:09 xavih sickness: sorry, I didn't see that message on irc...
10:10 ndevos xavih: hmm... I'll add myself then, hchiramm will look into that account issue one day
10:10 sickness no prob, 2 independent checks on the same problem are better than 1 ;)
10:11 ndevos xavih: oh, I think you can add the niels@nixpanic.net account
10:11 xavih sickness: but I think it will be better to use -msse2 than -msse... I'll change that
10:11 xavih ndevos: ok
10:12 xavih ndevos: it worked :)
10:12 ndevos xavih: \o/
10:12 sickness xavih: oh, ok, so I'll retry, or will you add that to git and I just need to redownload?
10:13 hchiramm ndevos, sure..
10:13 ndevos hchiramm: thanks!
10:13 xavih sickness: I'm pushing this change for review. Once reviewed and accepted it will be merged into master and you will be able to get the change with a git pull
10:15 sickness tnx
10:15 sickness so I'll just wait for the next time :)
10:19 xavih ndevos: Do I change the bug status to POST ?
10:22 ndevos xavih: yes, POST = patch in gerrit, MODIFIED = patch merged
10:24 ndevos xavih: can you edit the commit message a little in the webui? -msse -> -msse2 and mention that ec is currently disabled for non-intel architectures
10:25 xavih ndevos: oops :( true...
10:25 ndevos xavih: no problem, just trying to prevent confusion to others :)
10:27 ndevos lalatenduM, hchiramm: thanks for the review comments, but those came from the 3.5.1 release?!
10:28 lalatenduM ndevos, ohh :)
10:30 ndevos lalatenduM: I can add the 'vfs glusterfs plugin' for you, if you really like that
10:30 ndevos hchiramm: I'm not sure atm what the issue with open-behind is... we can extend that text for 3.5.3, ok?
10:31 xavih ndevos: done :)
10:31 lalatenduM ndevos, I think we should add it as the libgfapi integration is marketed as samba vfs plugin integration
10:32 ndevos oh, and we all missed the error in the title :-/
10:32 ndevos lalatenduM: okay, doing the update now
10:32 lalatenduM ndevos, thanks :)
10:32 lalatenduM ndevos, pm
10:32 hchiramm ndevos, as a user, if there is information on how to disable it, it would be good.
10:32 hchiramm thats what I meant
10:33 ndevos hchiramm: hmm, thats surely easier than figuring out /why/ its needed
10:34 hchiramm its my view .. but I can be flexible :)
10:35 ndevos hchiramm: I'll see if I can easily figure out how to disable it, if not, well :)
10:37 hchiramm :)
10:50 ndevos hchiramm, lalatenduM: can you check again?
10:50 hchiramm sure..
10:50 lalatenduM ndevos, checking
10:52 lalatenduM ndevos, done
10:52 hchiramm ndevos, done :)
10:54 lalatenduM ndevos++
10:54 glusterbot lalatenduM: ndevos's karma is now 9
10:56 edward1 joined #gluster-dev
10:56 ndevos lalatenduM++ Humble++ thanks!
10:56 glusterbot ndevos: lalatenduM's karma is now 14
10:56 glusterbot ndevos: Humble's karma is now 4
10:57 hchiramm ndevos++ lalatenduM++ np!!
10:57 glusterbot hchiramm: ndevos's karma is now 10
10:57 glusterbot hchiramm: lalatenduM's karma is now 15
11:03 ndk joined #gluster-dev
11:06 kkeithley joined #gluster-dev
11:06 ndevos hchiramm, lalatenduM: I guess one of you will build the 3.5.2 rpms? (tar.gz is available now)
11:08 lalatenduM ndevos, yup, it will be done
11:51 kanagaraj joined #gluster-dev
12:04 kkeithley Humble, lalatenduM: I'm having bugzilla updated with new version, target milestone, and components for community glusterfs.  E.g. I'm adding erasure and gfapi components. Can you think of any others we ought to have? Humble, do you want your python bindings for gfapi to have a component, or should they just land in gfapi?
12:05 kkeithley Anyone/Everyone: ^^^
12:05 hchiramm kkeithley, I am not sure I am missing something here.. The "ec" component is already added to bugzilla..
12:05 kkeithley so far I can think of: erasure, gfapi, snapshot, nagios
12:05 kkeithley anything else?
12:06 lalatenduM kkeithley, do we have snapshot already?
12:06 lalatenduM kkeithley, also we have compression , trash translator
12:07 hchiramm kkeithley, I think its better to have "python bindings of gfapi" as a separate component.
12:07 kkeithley I don't see either ec or snapshot for Community GlusterFS
12:07 hchiramm kkeithley, its there as "disperse"
12:07 hchiramm I have added it on "Xavi's" request
12:07 kkeithley ah
12:07 kkeithley okay
12:07 hchiramm the description should say it as "Erasure coding"
12:08 kkeithley you don't see that in the drop-down menu
12:09 hchiramm https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
12:09 hchiramm there I can see "disperse " in component..
12:10 kkeithley yes, hence the "okay" up above
12:10 kkeithley and there's already a libgfapi
12:10 kkeithley that
12:10 hchiramm kkeithley, may be we could request for libgfapi-python
12:11 kkeithley that's okay, I have situational blindness. I just have to look for the right thing. ;-)
12:11 lalatenduM hchiramm, I think you have created a list of components in gluster wiki right?
12:11 hchiramm lalatenduM, its abt the existing components..
12:11 hchiramm kkeithley is working on new components..
12:11 kkeithley new components to add to bugzilla
12:11 lalatenduM hchiramm, right, I am not sure if bugzilla is updated with existing one also
12:11 hchiramm lalatenduM, means ?
12:12 hchiramm kkeithley, it would be appreciated if u can add "libgfapi-python"
12:12 kkeithley yup, got it.
12:12 hchiramm kkeithley, thanks ..
12:12 kkeithley If you think of any more, send them along and I'll add them to the list
12:12 hchiramm kkeithley, sure..
12:13 hchiramm kkeithley++
12:13 glusterbot hchiramm: kkeithley's karma is now 7
12:13 lalatenduM hchiramm, it would be good if we can compare the list with bugzilla's existing components
12:13 itisravi_ joined #gluster-dev
12:14 hchiramm kkeithley, the default assignee for all components have been set to gluster-bugs ..
12:14 kkeithley do you know a way to query bz to get that? Otherwise getting the list is a bit painful
12:14 hchiramm kkeithley, u need admin privilege..
12:14 hchiramm or afaict ..^^
12:14 kkeithley I'm just sending the updates to someone who has admin privs
12:14 hchiramm the current list is available with me
12:15 hchiramm kkeithley, do u want me to forward that ?
12:16 kkeithley I've already got the email written, I just have to press send. I started asking for new Version and Target Milestones and realized we probably need some additional components too while we're at it
12:17 kkeithley I wonder why ndevos has component set to tests in the 3.5.3 tracker
12:18 hchiramm may be by mistake ? :)
12:18 hchiramm kkeithley, check ur inbox
12:19 kkeithley ah, excellent
12:19 hchiramm np!
12:19 kkeithley hchiramm++
12:19 glusterbot kkeithley: hchiramm's karma is now 2
12:19 kkeithley Humble++
12:19 glusterbot kkeithley: Humble's karma is now 5
12:20 hchiramm the only change is on "default assignee" .. when requesting the default assignee, please set it to "gluster-bugs" for now.
12:20 hchiramm we are working on setting the component owner list .. once its done, we will change it in bugzilla.
12:22 kkeithley excellent
12:22 hchiramm kkeithley, some suggestions
12:22 hchiramm nfs-ganesha,
12:23 kkeithley there's already a whole separate nfs-ganesha project under Community in bugzilla
12:23 hchiramm gluster-nagios..etc can be components .. Isn't it ?
12:23 kkeithley I asked for a nagios component in the mail I sent
12:24 hchiramm kkeithley, are u sure nfs-ganesha is available ?
12:24 hchiramm :)
12:24 kkeithley anyway, the nfs-ganesha gluster FSAL lives in the nfs-ganesha source tree. Bugs against that should be filed there, yes?
12:24 hchiramm may be I am blind now :)
12:24 kkeithley https://bugzilla.redhat.com/enter_bug.cgi?product=nfs-ganesha
12:25 hchiramm oh.. its a separate product ..
12:25 hchiramm kkeithley, may be "trash" ?
12:26 kkeithley yup, I asked for trash-xlator, compression-xlator, and encryption-xlator
12:26 kkeithley If you don't like those names we can change them
12:26 hchiramm oh.. ok.. :) then u should show the prepared list :)
12:27 hchiramm I thought u are only proposing "erasure, gfapi, snapshot, nagios"
12:27 hchiramm :)
12:27 kkeithley Component: snapshot, nagios, compression-xlator, trash-xlator,  encryption-xlator, libgfapi-python
12:27 lalatenduM kkeithley++
12:27 glusterbot lalatenduM: kkeithley's karma is now 8
12:29 kkeithley Anyway, that's what I've asked for. It seems you know someone with admin privs too.
12:29 kkeithley So I don't necessarily have to be in the loop
12:30 hchiramm kkeithley, if u want I can work on this task .
12:31 kkeithley sure. I've sent off my email already asking for the Version, Target Milestone, and those components to be added. But there's no reason to stop there. ;-)
12:31 hchiramm :)
12:32 hchiramm kkeithley, do u think glusterfs-swift component have to be added?
12:33 kkeithley wouldn't that be object-storage?
12:33 ndevos kkeithley: the 3.5.3 tracker is against the 'core' component, but feel free to change if you know of something more suitable
12:34 hchiramm kkeithley, ah.. yes.. missed it
12:35 kkeithley ndevos: indeed. Someone cast a Confundus spell on me
12:38 kkeithley and AFAIK core is the right component for trackers
12:38 kkeithley Unless we want to add tracker to the list of components?
12:47 ndevos kkeithley: no, no tracker component please
12:47 kkeithley yeah, I didn't think so
12:48 ndevos kkeithley: maybe 'build' would work for the tracker too, or if we had something like rel-eng
12:48 deepakcs joined #gluster-dev
12:48 kkeithley build or core, either one works for me
12:49 kkeithley rel-eng seems redundant to build
12:49 hchiramm joined #gluster-dev
12:50 ndevos yeah, I think build can be used for any rel-eng tasks that we (might) need
13:01 cristov joined #gluster-dev
13:03 awheeler joined #gluster-dev
13:14 bala joined #gluster-dev
13:14 dlambrig_ joined #gluster-dev
13:15 Yuan_ joined #gluster-dev
13:15 xavih joined #gluster-dev
13:16 purpleidea joined #gluster-dev
13:16 JoeJulian joined #gluster-dev
13:19 shyam joined #gluster-dev
13:35 tdasilva joined #gluster-dev
13:46 ndevos Humble, lalatenduM: scripts that do the nightly build are here: https://forge.gluster.org/bugzappers/nightly-builds/trees/master
13:47 ndevos Humble, lalatenduM: you both are in the 'BugZappers' group and can add more repositories or other things to the project
13:48 lalatenduM ndevos, thanks, will take a look
13:55 hchiramm joined #gluster-dev
13:56 edward1 joined #gluster-dev
13:58 deepakcs joined #gluster-dev
14:03 ndevos lalatenduM: I'll add a README too, but I have to write it first :)
14:10 lalatenduM ndevos, I have intention to help you, but dont want to give u false commit too :)
14:10 lalatenduM s/commit/commitment/
14:15 ppai joined #gluster-dev
14:16 ndevos lalatenduM: oh, dont worry about it, its all working and running ;)
14:22 anoopcs joined #gluster-dev
14:23 wushudoin joined #gluster-dev
14:23 hagarth joined #gluster-dev
14:35 dlambrig left #gluster-dev
14:40 ndk joined #gluster-dev
15:11 Humble joined #gluster-dev
15:13 tdasilva joined #gluster-dev
15:14 bala joined #gluster-dev
15:20 ira joined #gluster-dev
15:21 ndk` joined #gluster-dev
15:50 scuttle__ joined #gluster-dev
16:09 kkeithley where's the blankety blank sign in page for the gluster blog
16:13 johnmark oh dear
16:13 johnmark kkeithley: blog.gluster.org/wp-admin
16:27 kkeithley oh, now it works. I got a 404 error before
16:52 Humble joined #gluster-dev
17:22 tdasilva joined #gluster-dev
17:22 shyam joined #gluster-dev
20:22 ira joined #gluster-dev
21:56 bala joined #gluster-dev
21:59 shyam joined #gluster-dev
22:18 awheeler joined #gluster-dev
