
IRC log for #gluster-dev, 2014-03-12


All times shown according to UTC.

Time Nick Message
00:28 kkeithley1 joined #gluster-dev
00:56 sac`away joined #gluster-dev
01:09 kkeithley joined #gluster-dev
01:09 bala joined #gluster-dev
01:10 bfoster joined #gluster-dev
01:55 ira joined #gluster-dev
02:52 hagarth joined #gluster-dev
02:56 bharata-rao joined #gluster-dev
03:42 itisravi joined #gluster-dev
03:44 shubhendu joined #gluster-dev
04:14 ajha joined #gluster-dev
04:19 ndarshan joined #gluster-dev
04:24 kdhananjay joined #gluster-dev
04:32 deepakcs joined #gluster-dev
04:50 bala joined #gluster-dev
04:58 spandit joined #gluster-dev
05:00 ppai joined #gluster-dev
05:01 mohankumar joined #gluster-dev
05:03 hagarth joined #gluster-dev
05:26 Yuan_ joined #gluster-dev
05:30 pk1 joined #gluster-dev
05:30 itisravi joined #gluster-dev
05:38 mohankumar joined #gluster-dev
05:52 raghu joined #gluster-dev
07:17 lalatenduM joined #gluster-dev
08:54 ndarshan joined #gluster-dev
09:15 chandan_kumar hello
09:16 chandan_kumar I want to integrate OpenStack with GlusterFS
09:16 chandan_kumar to test Swift API calls
09:16 chandan_kumar I am not finding the docs for it.
09:26 lalatenduM chandan_kumar, did you check the RDO docs?
09:26 lalatenduM chandan_kumar, I don't know where the docs are, but just a guess that they might be in the RDO docs
09:27 mohankumar joined #gluster-dev
09:30 lalatenduM chandan_kumar, are you looking for this https://github.com/gluster/gluster-swift/blob/master/doc/markdown/user_guide.md?
09:31 lalatenduM chandan_kumar, ppai might know
09:31 ppai chandan_kumar, lalatenduM , yep, that link is the right one
09:32 lalatenduM ppai, thanks :)
09:32 ppai chandan_kumar, https://github.com/gluster/gluster-swift/blob/master/doc/markdown/quick_start_guide.md
09:32 chandan_kumar ppai, lalatenduM thanks :)
09:33 ppai chandan_kumar, the quick start guide (link above) has the steps to get you started
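[For context, the quick start guide linked above covers roughly the following flow. This is only an illustrative sketch: the volume name "test" and the /mnt/gluster-object mount point are assumptions based on gluster-swift's conventions, and exact commands may differ between releases.]

    # create and start a gluster volume to back the Swift account
    gluster volume create test <hostname>:/export/brick1
    gluster volume start test

    # gluster-swift expects the volume mounted under /mnt/gluster-object/<volname>
    mkdir -p /mnt/gluster-object/test
    mount -t glusterfs <hostname>:test /mnt/gluster-object/test

    # generate the Swift ring files for the volume and start the Swift services
    gluster-swift-gen-builders test
    swift-init main start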
09:33 mohankumar joined #gluster-dev
09:49 ndarshan joined #gluster-dev
10:27 ajha joined #gluster-dev
10:52 ajha joined #gluster-dev
11:02 lpabon joined #gluster-dev
11:06 tdasilva joined #gluster-dev
11:26 mohankumar joined #gluster-dev
11:29 vpshastry1 joined #gluster-dev
11:34 mohankumar joined #gluster-dev
11:55 ppai joined #gluster-dev
12:02 mohankumar joined #gluster-dev
12:15 kdhananjay joined #gluster-dev
12:22 deepakcs joined #gluster-dev
12:28 yinyin joined #gluster-dev
12:38 ppai joined #gluster-dev
12:50 social_ hmm are the regression tests in jenkins OK? they seem to be broken
12:51 social_ ton of loop: can't delete device /dev/loop10: Device or resource busy
12:51 social_ and I don't think my commit caused it
12:52 pk1 joined #gluster-dev
13:00 hagarth social_: let me check
13:01 hagarth social_: will get back to you on this
13:04 ndevos hagarth: I see these:
13:04 ndevos /dev/loop9: [fd02]:135955260 (/d/backends/1/patchy_snap_vhd)
13:04 ndevos /dev/loop10: [fd02]:1529230 (/d/backends/2/patchy_snap_vhd)
13:04 ndevos /dev/loop11: [fd02]:135955264 (/d/backends/3/patchy_snap_vhd)
13:04 ndevos /d/backends/1/patchy_snap_vhd does not seem to exist, but loop9 cannot be freed either - still in use :-/
13:06 mohankumar__ joined #gluster-dev
13:06 ndevos maybe these loop devices are used as PV for some of the LV snapshots... but I'm not sure from a quick check
13:07 hagarth ndevos: wonder if the snapshot tests introduced this
13:07 hagarth I observe a failed regression test for volume snapshots
13:08 ndevos hagarth: I think it is likely, but I have not found a testcase that could be blamed yet
13:10 social_ anyway I'd love to get some feedback on patches for 1063832, I'm not 100% sure about the solution.
13:10 hagarth ndevos: right
13:10 hagarth social_: sure
13:10 ndevos hagarth: ah, losetup is done in http://review.gluster.org/#/c/7128/8/tests/snapshot.rc
13:10 hagarth ndevos: I see!
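[For reference, stale loop devices like the ones listed above can be inspected and detached by hand. A minimal sketch; the device names come from the output above, and the LVM step is an assumption about why the devices stay busy, since the snapshot tests build volume groups on top of them.]

    # list loop devices and their backing files
    losetup -a

    # if an LVM volume group was created on the loop device, deactivate and
    # remove it first, otherwise detaching fails with "Device or resource busy"
    vgchange -an <vgname>
    vgremove -f <vgname>

    # then detach the loop device
    losetup -d /dev/loop10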
13:11 hagarth social_: are you interested in adding/seeing any new features in glusterfs 3.6?
13:13 social_ hagarth: well I seem to be quitting the company where I work now so I don't have future plans like this :) But what was discussed for quite a long time is TTLs for files/directories
13:13 ndevos social_: those patches are missing a Signed-off-by line...
13:14 hagarth social_: ok :)
13:14 social_ to explain: we have a ton of uploads from customers and so on, and we have garbage collection on that, but it would be nice to have something similar to S3 TTLs
13:14 hagarth social_: ok..
13:16 hagarth I need to run now so that I will be in time for the planning meeting, ttyl then
13:19 social_ ndevos: I seem to be unable to amend and force-push to my repo :/
13:20 ndevos social_: hmm, that should normally 'just work'... but I think you can amend the commit message in the gerrit webui too
13:21 shyam joined #gluster-dev
13:21 shyam left #gluster-dev
13:21 social_ ndevos: thanks, that seems to have worked fine
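[For anyone hitting the same missing Signed-off-by issue, the usual command-line fix looks roughly like the sketch below. The refs/for/master target assumes the change is against master; glusterfs also ships an rfc.sh script that wraps the push step.]

    # add a Signed-off-by line to the last commit without changing the patch
    git commit --amend --signoff

    # resubmit to Gerrit as a new patchset; the Change-Id in the commit
    # message keeps it attached to the same review
    git push origin HEAD:refs/for/master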
13:21 shyam joined #gluster-dev
13:22 ndevos social_: why remove the brick-uid/gid options? is the problem not solved with the patch from bug 1040275?
13:22 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1040275 high, high, ---, vbellur, NEW , Stopping/Starting a Gluster volume resets ownership
13:23 ndevos social_: advantage of those settings is that ownership can be managed without mounting (for non-root users like libgfapi)
13:25 social_ ndevos: because the posix translator then touches the brick while it should not do so, and it means you have permissions in the volume file and options but you don't have ACLs, xattrs and so on
13:26 social_ ndevos: and when you think about it further, it should not be there, as you would have to update the volume option from chown, chmod, setfacl...
13:27 social_ and from libgfapi you could just send a chown on the brick root, couldn't you?
13:27 ndevos social_: no, libgfapi processes run as non-root :-/
13:29 social_ hmm then the gluster set option should end up in a chown call?
13:29 social_ but the volume has to be running :(
13:29 ndevos social_: I'm not convinced yet that we need to drop the storage.owner-uid/gid options, but, I am not a big fan of them either
13:31 social_ ndevos: well, the strongest argument is the need to update them on chown in the posix translator, which seems at least hacky, as the primary user is the mount, and root there is changing permissions on his mount path
13:31 ndevos social_: yeah, the option should end up in a chown/chgrp, but if you do not use/set those options, there should be no changes when setting ownership manually IMHO
13:32 social_ ndevos: and the chown to something managed to break permissions as dht self heal was checking mod time and it was newer :/
13:33 ndevos social_: from my point of view, there are 2 ways to set ownership of the root-directory of the volume, storage.owner-uid/gid *or* a chown() by root through a mountpoint
13:34 ndevos social_: does dht also break the ownership with the patch from 1040275?
13:34 social_ yes, but when you do the second one, which is probably more likely, the code in the posix translator will break those permissions by calling posix_do_chown in reconfigure
13:35 ndevos hmm
13:35 social_ And there is a second issue which is completely standalone - ACLs and DHT
13:40 ndevos I agree that acls should work without any volume option, normal uid/gid should work if storage.owner-uid/gid is not set
13:41 ndevos storage.owner-uid/gid has a valid use-case, and dropping that would require changes in oVirt and other projects that can set volume options, and do not need to mount the volume
13:42 social_ ook, I'll change that commit in order to find out how to fix this
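[To make the two approaches ndevos describes concrete, a short illustrative example follows; the volume name "myvol" and uid/gid 36 (the oVirt vdsm ids) are assumptions, not from the discussion.]

    # option 1: set ownership through volume options, no mount required
    gluster volume set myvol storage.owner-uid 36
    gluster volume set myvol storage.owner-gid 36

    # option 2: chown the root of the volume through a root mount
    mount -t glusterfs server:/myvol /mnt/myvol
    chown 36:36 /mnt/myvol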
13:43 aravindavk joined #gluster-dev
13:45 jobewan joined #gluster-dev
13:49 mohankumar__ joined #gluster-dev
13:56 itisravi joined #gluster-dev
14:04 yinyin joined #gluster-dev
14:05 chandan_kumar lalatenduM, is the rpm for icehouse release available for glusterFS?
14:07 lalatenduM chandan_kumar, you mean glusterfs rpms?
14:07 chandan_kumar yes
14:08 chandan_kumar lalatenduM, i have to integrate with RDO icehouse.
14:09 lalatenduM chandan_kumar, glusterfs RPMs are not dependent on Openstack release, so latest 3.4.2 RPMs should work for you. or am I missing anything?
14:10 lalatenduM kkeithley, johnmark ^^
14:11 lalatenduM @rpms
14:11 lalatenduM @rpm
14:12 kkeithley_ @yum
14:13 kkeithley_ The official community glusterfs packages for RHEL (including CentOS, SL, etc) are available at http://download.gluster.org/pub/gluster/glusterfs/. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates (or updates-testing) repository.
14:13 kkeithley_ @learn yum as The official community glusterfs packages for RHEL (including CentOS, SL, etc) are available at http://download.gluster.org/pub/gluster/glusterfs/. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates (or updates-testing) repository.
14:13 glusterbot kkeithley_: The operation succeeded.
14:14 lalatenduM kkeithley, thanks :)
14:14 kkeithley_ and yes, 3.4.2 has everything that there is to have
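[In practice, pulling the community packages from the location in the factoid above looks something like this; the LATEST/EPEL.repo path reflects how download.gluster.org was laid out at the time and may differ for other versions.]

    # RHEL/CentOS: add the community repo and install
    wget -O /etc/yum.repos.d/glusterfs-epel.repo \
        http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
    yum install glusterfs glusterfs-server glusterfs-fuse

    # Fedora 18 and later: packages are already in the updates repository
    yum install glusterfs-server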
14:14 chandan_kumar lalatenduM,  i am asking for this "Download the latest Havana release RPMs from launchpad.net downloads:"?
14:15 chandan_kumar launchpad shows the Havana release, so I asked
14:15 chandan_kumar *.
14:15 wushudoin joined #gluster-dev
14:16 lalatenduM chandan_kumar, can you please pass me the launchpad web URL where this info is mentioned; actually I am not aware of it
14:16 chandan_kumar lalatenduM, https://launchpad.net/gluster-swift/havana/1.10.0-2
14:16 chandan_kumar lalatenduM, https://github.com/gluster/gluster-swift/blob/master/doc/markdown/quick_start_guide.md under download section
14:19 tdasilva joined #gluster-dev
14:19 lalatenduM chandan_kumar, Icehouse is scheduled to release on April 17th, so the documentation mentions the released version of OpenStack. Not sure what's the plan for Icehouse, but lpabon can give you the correct info
14:20 lalatenduM chandan_kumar, lpabon is your guy :)
14:20 chandan_kumar lpabon, hello
14:26 social_ ndevos: hmm strange, I reverted the uid/gid option removal and my tests still pass
14:29 ndevos social_: that means we can keep the storage.owner-uid option? or you happen to have it set for your volume?
14:30 social_ ndevos: nay, I don't have control over the volumes, I'll probably drop the whole patch and just extend the tests in the next commit so it'll also test the storage.owner-uid setting and so on
14:31 ndevos social_: I'm a little confused (doing multiple things at once), but I think that sounds god
14:31 ndevos *good
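[A rough idea of what such a test extension could look like, written in the style of the existing tests/ harness: the TEST/EXPECT helpers and the $CLI/$V0/$H0/$B0 variables come from include.rc, while the exact checks are an assumption about what needs covering.]

    #!/bin/bash
    . $(dirname $0)/../include.rc
    cleanup;

    TEST glusterd
    TEST $CLI volume create $V0 $H0:$B0/brick0
    TEST $CLI volume start $V0

    # setting storage.owner-uid/gid should change the brick root ownership
    TEST $CLI volume set $V0 storage.owner-uid 100
    TEST $CLI volume set $V0 storage.owner-gid 100
    EXPECT "100" stat -c %u $B0/brick0

    cleanup;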
14:32 mohankumar__ joined #gluster-dev
14:41 jobewan joined #gluster-dev
14:43 hagarth joined #gluster-dev
14:56 kkeithley_ Gluster community meeting in five minutes over in #gluster-meeting on freenode
14:56 ndevos j #gluster-meeting
14:56 ndevos ah!
14:56 hagarth joined #gluster-dev
15:00 shubhendu joined #gluster-dev
15:08 jobewan joined #gluster-dev
15:09 ndk joined #gluster-dev
15:13 ndk` joined #gluster-dev
15:16 ndk`` joined #gluster-dev
15:20 lpabon how do i send  a message to someone that is not here using glusterbot?
15:20 kkeithley_ use @later
15:21 kkeithley_ @later tell lpabon hey, how's the gluster-swift packaging going?
15:21 glusterbot kkeithley_: The operation succeeded.
15:21 kkeithley_ ;-)
15:22 lpabon :-D
15:23 lpabon @later tell chandan_kumar  Hi, if I am not available on #gluster-dev or #gluster-swift, you can leave your question at https://answers.launchpad.net/gluster-swift/+addquestion and I will get back to you as soon as I can
15:23 glusterbot lpabon: The operation succeeded.
15:57 bala joined #gluster-dev
16:07 avati joined #gluster-dev
16:33 kkeithley1 joined #gluster-dev
16:38 kkeithley1 joined #gluster-dev
16:51 pk1 left #gluster-dev
17:23 kkeithley_ purpleidea: we need to make an RPM of puppet-gluster no matter what. (Whether we put that into Fedora is something else entirely.)
17:28 kkeithley_ so, is there something like puppet-gluster that already exists? I.e. something that I can copy the spec file from? Otherwise we'll have to roll our own from scratch. No big deal either way.
17:28 kkeithley_ purpleidea: ping
17:35 kkeithley_ purpleidea: er, no we don't have to make an RPM no matter what.
17:49 purpleidea kkeithley_: hey
17:49 purpleidea agreed. let's make an rpm
17:49 purpleidea kkeithley_: i don't know of where to get a base spec file for a puppet module
17:50 purpleidea just googling....
17:51 purpleidea nothing catches my eye
18:07 kkeithley_ purpleidea: okay, from scratch.
18:08 purpleidea kkeithley_: i guess so. the good thing is you'll be a pioneer!
18:08 kkeithley_ yeah, good. \o/
18:09 kkeithley_ I'll make a .spec file.  From the untarred tarball, how does it get installed? There's no configure or "build" step, right?
18:10 purpleidea kkeithley_: correct, nothing "compiles"...
18:10 ndevos kkeithley_, purpleidea: how about https://bugzilla.redhat.com/show_bug.cgi?id=1005320 ?
18:10 purpleidea kkeithley_: i would propose two options for "installing"
18:10 ndevos "openstack-puppet-modules - Puppet modules used to install OpenStack"
18:10 glusterbot Bug 1005320: medium, medium, ---, pbrady, CLOSED NEXTRELEASE, Review Request: openstack-puppet-modules - Puppet modules used to install OpenStack
18:10 ndevos it's a little ugly, but points to http://rohara.fedorapeople.org/puppet/openstack-puppet.spec
18:11 kkeithley_ ah, good
18:11 purpleidea ndevos: cool thanks... there's one thing that's different here...
18:11 tdasilva joined #gluster-dev
18:11 purpleidea i'll go back to my "options for installing comment":
18:11 ndevos "fedpkg clone -a openstack-puppet-modules" to get the spec ;)
18:12 purpleidea 1) package installs module to /etc/puppet/modules/gluster/ it pulls in related rpm's (that would be almost identical) to install dependencies into /etc/puppet/modules/dep{1..N}
18:12 purpleidea a different option would be:
18:13 purpleidea 2) install code into /usr/share/something/something/gluster/ and the user does a cp -a to "install" it on their box. some people might put code in /etc/puppet/something_different/modules/ for example...
18:13 ndevos purpleidea: /etc sounds incorrect to me, static files should be under /usr and /etc contains real configuration files
18:13 kkeithley_ related rpms are commonly pulled in by listing them as  Requires: dependencies
18:14 purpleidea ndevos: hm. well, puppet looks for the modules in /etc/puppet/modules/; you can configure this, but let me check the default... i actually agree with you that it shouldn't be in /etc
18:14 purpleidea ndevos: unless you think of puppet code like a big fancy config file ;)
18:15 ndevos purpleidea: I don't know how puppet in fedora (or anywhere else) works, but if you install the package, it should just work, add config files, execute some commands...
18:15 purpleidea ndevos: kkeithley_ http://docs.puppetlabs.com/references/latest/configuration.html#modulepath
18:15 purpleidea maybe it shouldn't be /etc/ then...
18:16 purpleidea /usr/share/puppet/modules/gluster/ it is!
18:16 ndevos nice!
18:17 * ndevos leaves for the day, ttyl!
18:19 purpleidea ndevos: bye!
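[For reference, the modulepath setting discussed above is a colon-separated list, so a package installing under /usr/share can coexist with locally managed modules. An illustrative puppet.conf fragment follows; actual defaults vary by distribution and Puppet version.]

    # /etc/puppet/puppet.conf
    [main]
    modulepath = /etc/puppet/modules:/usr/share/puppet/modules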
18:25 kkeithley_ seems we've got a convention: glusterfs-puppet-modules.rpm? Do you want a version like 2014.1, like openstack-puppet-modules, or a more conventional x.y.z version? And where's the canonical src tarball?
18:26 kkeithley_ and what license are you using? AGPL IIRC
18:55 JoeJulian I prefer x.y.z
18:57 JoeJulian What I really don't like is how everyone, including the documentation, references "essex", "grizzly", "havana", while the packages have some obscure number that has no connection.
18:58 ndk joined #gluster-dev
19:00 kkeithley_ duly noted, i.e. x.y.z.   Yeah, OpenStack drives me a little crazy with that sh*t
19:00 awheeler_ joined #gluster-dev
19:01 tdasilva joined #gluster-dev
20:19 kkeithley_ purpleidea: ping
20:19 purpleidea kkeithley_: brb ~10 min
20:30 lalatenduM purpleidea, kkeithley I think you guys should be in #centos-devel for the storage SIG stuff
20:32 lalatenduM purpleidea, kkeithley I was just talking with kbsingh about it, for the storage SIG we need to provide repos, which will give correct packages for glusterfs
20:33 lalatenduM I mean all related RPMs, i.e. gluster (storage) + qemu (virt or cloud) + opennebula (cloud) on CentOS-6 (from Core SIG)
20:43 purpleidea kkeithley_: back sorry i am late :(
20:43 purpleidea kkeithley_: versioning would be good.
20:44 purpleidea it is agpl
20:44 purpleidea and it should be 'puppet-gluster' not glusterfs
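[Putting the pieces from this discussion together (noarch, no build step, AGPL, install under /usr/share/puppet/modules), a minimal spec could look roughly like the sketch below; the version number, Source0 name, and URL are placeholders, not the final packaging.]

    Name:           puppet-gluster
    Version:        0.0.1
    Release:        1%{?dist}
    Summary:        Puppet module to manage GlusterFS
    License:        AGPLv3+
    URL:            https://github.com/purpleidea/puppet-gluster
    Source0:        %{name}-%{version}.tar.gz
    BuildArch:      noarch
    Requires:       puppet

    %description
    A Puppet module for deploying and managing GlusterFS.

    %prep
    %setup -q

    %build
    # nothing to compile

    %install
    mkdir -p %{buildroot}%{_datadir}/puppet/modules/gluster
    cp -a * %{buildroot}%{_datadir}/puppet/modules/gluster/

    %files
    %{_datadir}/puppet/modules/gluster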
21:32 yinyin joined #gluster-dev
21:58 avati joined #gluster-dev
23:38 tg2 joined #gluster-dev
23:44 yinyin joined #gluster-dev
