
IRC log for #puppet-openstack, 2013-10-18


All times shown according to UTC.

Time Nick Message
00:08 ryanycoleman joined #puppet-openstack
00:39 ryanycoleman joined #puppet-openstack
00:40 bodepd_ mgagne: https://github.com/bodepd/scenario_node_terminus/issues/4
00:40 bodepd_ mgagne: this feature is based on your feedback :)
00:42 ryanycol_ joined #puppet-openstack
01:06 prad joined #puppet-openstack
01:42 ari joined #puppet-openstack
01:47 _ilbot joined #puppet-openstack
01:47 Topic for #puppet-openstack is now Place to collaborate on Puppet/OpenStack tools: logs at http://irclog.perlgeek.de/puppet-openstack/today
02:02 ryanycoleman joined #puppet-openstack
02:03 xarses joined #puppet-openstack
02:07 ryanycoleman joined #puppet-openstack
02:12 ari joined #puppet-openstack
02:17 xingchao joined #puppet-openstack
02:23 pabelanger joined #puppet-openstack
02:49 ari_ joined #puppet-openstack
03:36 tnoor1 joined #puppet-openstack
03:38 tnoor2 joined #puppet-openstack
03:50 tnoor1 joined #puppet-openstack
05:01 tnoor2 joined #puppet-openstack
05:42 tnoor2 joined #puppet-openstack
05:46 bcrochet joined #puppet-openstack
06:07 openstackgerrit Xingchao Yu proposed a change to stackforge/puppet-cinder: Add cinder::ceilometer class  https://review.openstack.org/52292
06:07 dachary bodepd_: how would that work ?
06:07 * dachary interested :-)
06:25 tnoor1 joined #puppet-openstack
06:41 gabriel-bezerra joined #puppet-openstack
06:57 bodepd_ if you scroll up, you can see the whole discussion
06:57 bodepd_ https://github.com/stackforge/puppet-tempest/blob/master/lib/puppet/provider/tempest_glance_id_setter/ruby.rb#L16
06:57 bodepd_ that is the best example. the gist of it is that you can query resource properties as long as the resources are in the same catalog
06:58 bodepd_ in that example, the glance uuid needs to be set in the tempest file, and we want to create the image on the same run where we configure tempest
06:58 bodepd_ basically, providers can call model.catalog.resource(<some_resource>).provider.<some_getter_method>
06:59 bodepd_ whether or not that fully resolves the issue is something else :)
06:59 bodepd_ dachary: ^^^^
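
A minimal Ruby sketch of the catalog-lookup pattern bodepd_ describes; the type, resource, and getter names here are hypothetical, only the resource.catalog.resource(...).provider chain mirrors the linked tempest provider:

    # Hypothetical provider: reads a value from another resource's provider
    # in the same catalog (names are illustrative, not from the real module).
    Puppet::Type.type(:my_config_setter).provide(:ruby) do
      def upstream_id
        upstream = resource.catalog.resource('My_image[base]')
        raise Puppet::Error, 'My_image[base] not in catalog' if upstream.nil?
        upstream.provider.id   # only works if both are in the same catalog
      end
    end
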
07:32 * dachary reading
07:33 dachary ok thanks bodepd_
07:37 dachary bodepd_: this is above my paygrade to be honest, but I think I get the general idea
07:38 dachary bodepd_: however, I think it is possible to get rid of the osd id entirely, as far as puppet is concerned
07:39 dalgaaf joined #puppet-openstack
07:39 bodepd_ like I said before, I'll leave that decision up to you ceph guys.
07:40 bodepd_ I just want to make sure you know that this is an option (although perhaps not the cleanest implementation)
07:40 bodepd_ and I can help show someone how to implement it. I've already helped redhat implement something similar for some of the neutron native types
07:41 dachary bodepd_: cool. Let's see what other ceph users have to say about it. I'll write something down in the blueprint to open the discussion.
07:41 bodepd_ I'll be really busy tomorrow, but I can at least find some time to review the blueprint
07:43 dachary a) the manifest contains a uuid computed on the puppetmaster, b) the mon picks this and runs "osd create uuid", c) the target osd picks this too and runs "ceph-disk prepare --osd-uuid".
07:44 bodepd_ if it's possible, that could be easier
07:44 dachary (warning : uuid != id in the context of osd )
07:44 bodepd_ the question may become about uniqueness
07:44 bodepd_ ah, so it doesn't have to be the same past the first run?
07:44 bodepd_ as in, it can be generated, used once, then thrown away?
07:44 bodepd_ or it has to be persisted?
07:45 dachary it's definitely possible, the only question is to make sure the id won't ever be needed by the puppet master. From the discussion with other ceph developers yesterday, it seems there is no case where it would be needed. But better make sure before going in this direction.
07:45 dachary the uuid is a genuuid() therefore unique
07:45 bodepd_ I gotcha
07:45 dachary and it is persisted on the disk
07:45 bodepd_ but does it have to persist
07:45 bodepd_ the next time that puppet runs?
07:45 dachary so that it finds its place when it comes back
07:46 dachary no, persisting it is part of the osd disk formatting done by ceph-disk prepare
07:46 dachary it resides on the disk assigned to the osd
07:46 bodepd_ I may have to see the proposed implementation
07:46 dachary and in the mon
07:46 bodepd_ but the next time that Puppet runs
07:46 bodepd_ and it asks:
07:46 bodepd_ is this osd disk already configured
07:47 dalgaaf In the current situation there's no problem with the uuid, we get this from the blkid of the disk ... there was no need to get this from the puppet master
07:47 bodepd_ does it need that uuid?
07:47 dalgaaf the only problem was the ID
07:47 dachary dalgaaf: hi !
07:47 bodepd_ ah. see. no one should expect that I know what I'm talking about
07:47 dalgaaf and this was done in the first run currently
07:48 bodepd_ I'll go quiet, this is probably a discussion better had by y'all
07:48 dachary dalgaaf: I'm very curious to hear your thoughts on the "let's use uuid and not id at all" topic ;-)
07:49 dalgaaf the problem is the ID of the OSD which is returned by 'ceph osd create' which is needed for the config on the system for starting the single service
07:49 dachary say you ceph osd create <uuid>
07:49 dalgaaf via a Service call ....
07:49 dalgaaf yes we use this call ...
07:50 dachary for what purpose would you need to get the id returned by osd create ?
07:51 dalgaaf ceph osd create ${uuid} returns the ID of the OSD that was created
07:51 dachary yes. Let's say you don't bother to get it back. Why would it be a problem exactly ?
07:51 dalgaaf and this ID is needed by other calls ... and we need it for the ceph.conf on the OSD node to be able to run e.g. rcceph stop
07:52 dalgaaf or rcceph osd.1 start
07:53 dachary or start ceph-osd id=1
07:53 dachary you are correct
07:53 * dachary checking
07:53 dalgaaf yes, most of the current exec calls we could maybe replace with a ceph-disk prepare call, but I have to check if this takes some flexibility away
07:54 dalgaaf IIRC these services need the config ... with at least the osds of this node
07:55 dachary the discussion with josh yesterday night on #ceph-devel led us to the conclusion that there is no post-creation configuration that would require an osd id ( i.e. no osd specific dynamic configuration )
07:55 dalgaaf okay ... how would this work? or is it a misunderstanding?
07:56 dachary the service doesn't need the config to start, I'm sure of that. It will be discovered by walking /var/lib/ceph/osd
07:56 dalgaaf I have to check that
07:56 dachary dalgaaf: say the puppetmaster creates a uuid (which it can)
07:56 dachary it then says that this osd with this uuid must exist
07:57 dalgaaf currently I use the blkid of the filesystem to be able to easily match osds to disks
07:57 dachary the mon picks this and does "osd create uuid" ( which is idempotent, can be done each time )
07:58 dachary the osd picks this and does ceph-disk prepare --osd-uuid which joins the cluster
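
A hedged Puppet sketch of that three-step flow; only the two ceph commands come from the discussion, the resource names and variables are illustrative:

    # on the mon: register the osd uuid ("idempotent, can be done each time")
    exec { "ceph-osd-create-${uuid}":
      command => "ceph osd create ${uuid}",
      path    => ['/usr/bin', '/bin'],
    }
    # on the osd node: prepare the disk with the same uuid so it joins the
    # cluster; a real module would need an idempotence guard here (detect
    # an already-prepared disk), omitted in this sketch
    exec { "ceph-disk-prepare-${device}":
      command => "ceph-disk prepare --osd-uuid ${uuid} ${device}",
      path    => ['/usr/sbin', '/usr/bin', '/sbin', '/bin'],
    }
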
07:58 dachary dalgaaf: you're not using the ceph udev logic ?
07:59 dalgaaf not at the moment
07:59 dalgaaf but i guess we could ... i need to test it
07:59 dachary it's available since cuttlefish
08:00 dalgaaf yes I know ... but had no time to test it completely
08:00 dalgaaf yet
08:00 dachary and it appears that it would greatly simplify the making of the puppet module
08:01 dalgaaf I'm not sure atm if ceph-disk prepare fits my needs ... especially for encryption and encryption keys ... let me check
08:02 dachary dalgaaf: do you mean dmcrypt ?
08:02 dachary or keyring ?
08:02 dachary ceph keyring that is ;-)
08:04 dalgaaf I know that ceph-disk prepare is able to use dmcrypt
08:05 dalgaaf but it would be hard to preshare one key for all osds on the node
08:06 dalgaaf which I need since I need to have this key available in  a central place for all cluster machines for OPS
08:06 dalgaaf how is the plan to get this working via ceph-disk and uuid and udev ? what would be the workflow?
08:07 dalgaaf if we ignore the key problem for dmcrypt for now
08:13 dalgaaf If the mon needs to call the "ceph osd create <uuid>" ... you would need to do that on the mon node ...
08:14 dachary here is an idea
08:14 dalgaaf that would mean you have to predefine all osd uuids and let the mon call the command as soon as the mon(s) are running
08:15 dalgaaf and then you have to call ceph::osd::device with this uuid and the path/name of the related /dev/ device and e.g. the dmcrypt key
08:16 dalgaaf correct?
08:16 dachary ceph::osd { /dev/sdb } => checks if the ceph magic uuids ( 4fbd7e29-9d25-41b8-afd0-062c0ceff05d ) are in place and set them if not. the udev logic does the rest, including creating the osd on the mon. That requires that the osd has a key to be able to talk to the mon to create the osd.
08:16 derekh joined #puppet-openstack
08:16 dachary (that's a simplification of what I proposed above )
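
A sketch of how that ceph::osd idea could look in manifest form, assuming the magic-GUID check is done with sgdisk (an assumption; ceph-disk prepare does the tagging itself):

    # Hypothetical defined type: prepare a disk unless it already carries
    # the ceph data partition type GUID quoted above, then let udev do the
    # rest (create the osd on the mon, mount, start the daemon).
    define ceph::osd {
      exec { "ceph-disk-prepare-${name}":
        command => "ceph-disk prepare ${name}",
        path    => ['/usr/sbin', '/usr/bin', '/sbin', '/bin'],
        unless  => "sgdisk --info=1 ${name} | grep -qi 4fbd7e29-9d25-41b8-afd0-062c0ceff05d",
      }
    }
    # usage: ceph::osd { '/dev/sdb': }
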
08:17 dachary reading & thinking about what you just wrote dalgaaf
08:19 dachary dalgaaf: IIRC the osd machines have permissions to create osd in your setup, right ?
08:19 dachary permissions to do ceph osd create that is
08:20 dalgaaf yes, for the osd machines ... they have to
08:20 dachary ok
08:20 openstackgerrit Florian Haas proposed a change to stackforge/puppet-openstack: repo.pp: Switch default release to Havana  https://review.openstack.org/52589
08:20 dalgaaf ... sorry, I have to leave for 30 minutes ... are you available later to finish the discussion?
08:20 dachary regarding dmcrypt my level of understanding is still not good enough to answer your question. I know it's transparently handled by the udev logic but I did not take a look at how the dmcrypt keys are managed.
08:21 dachary dalgaaf: yes, I want to see the bottom of this :-)
08:21 dachary I feel we have a simple way out and this is pretty exciting
08:23 dalgaaf okay ... I guess it could be solved ... I just need to fully understand your idea ;-) ... I'll ping you as soon as I'm back
08:25 dachary cool
08:27 dachary it is entirely possible to leave the osd management to ceph, with the sole exception of tagging the disks to be used as osds with sgdisk --partition-guid via ceph-disk prepare, and let the udev logic do the rest ( which is what ceph-disk does, waiting for udevadm to settle after doing this )
08:28 dachary ceph-disk also has various checks to verify that a disk is not in use to prevent accidents such as wiping out the root disk
08:30 dachary the conf file would contain things such as what file system is needed
09:06 openstackgerrit Florian Haas proposed a change to stackforge/puppet-neutron: params.pp: Rename client_package_name to client_package, and correctly define it  https://review.openstack.org/52365
09:08 dachary there are a number of undocumented configuration parameters in ceph-disk to control what file system is going to be set ( osd mkfs type etc. )
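
The kind of ceph.conf fragment dachary means; the option names are the standard osd mkfs settings, the values are illustrative:

    [osd]
    osd mkfs type = xfs
    osd mkfs options xfs = -f
    osd mount options xfs = rw,noatime
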
09:14 bauzas joined #puppet-openstack
09:46 xingchao_ joined #puppet-openstack
09:50 dalgaaf dachary: back ... took a little more time than I expected
09:50 dachary :-)
09:52 dalgaaf If I get it right: we would need a list of UUIDs, osdnodes and devices to do that (since you may not want to add all disks of an OSD node to the cluster (e.g. keep some as spare parts))
09:52 dalgaaf then we would start the MONs
09:53 dalgaaf one of these MONs would take the list and call 'ceph osd create <uuid>'
09:53 dalgaaf then we would start the OSDs and would give each OSD node the list and the config
09:54 dalgaaf the OSD node would pick the osds they need and would call ceph-disk prepare for each of the disks ... right?
09:54 dalgaaf this may include some more steps if you need some special handling for dmcrypt keys or maybe other things, but let's leave this out
09:54 dalgaaf for now
09:55 dalgaaf then UDEV would pick up the prepared disk and do everything including starting the osd?
09:58 dalgaaf The only thing I see is the creation of the ceph.conf (which includes the list of OSDs) since this may be needed for tools like start-scripts
09:58 dalgaaf and this could be done by calling facter on startup to get this from ceph
09:59 dalgaaf what do you think?
10:00 dachary the startup script does not need the ceph.conf, it walks /var/lib/ceph/osd to discover what's there
10:00 dachary the rest of what you describe makes total sense to me
10:00 dachary dalgaaf: ^
10:01 dachary https://github.com/ceph/ceph/blob/master/src/upstart/ceph-osd-all-starter.conf
10:02 dalgaaf that may be true for upstart but not for the init script for e.g. suse/redhat ... but let me check
10:08 dachary even if it is not true, puppet could do the same until it's upstream. The key here is that /var/lib/ceph/osd/ contains the osd id and can be relied on for discovery
10:08 dalgaaf the rc script depends on ceph-conf which needs the config
10:09 dalgaaf at least on cuttlefish
10:09 dachary URL ?
10:09 dalgaaf give me second
10:09 dalgaaf need to find a github link
10:10 dachary puppet can create the conf file by walking /var/lib/ceph/osd/, if it's rpm and cuttlefish.
10:10 dalgaaf https://github.com/ceph/ceph/blob/cuttlefish/src/ceph_common.sh#L3
10:10 dachary create => add the required [osd.X]
10:11 dalgaaf that is what the script needs
10:11 dalgaaf and uses to get config options
10:12 dachary got it
10:13 dachary what do you think about walking /var/lib/ceph/osd/ to make sure ceph.conf is up to date in this context ? that will be deprecated over time but does not require any architectural change when that happens. It just can go away. And even if it does not go away, it won't hurt.
10:13 dalgaaf It could maybe walk through that directory ... but I'm not completely sure if this would work for all tasks ... have to think about the usecases for that ... but it doesn't change the fact that this is how it works currently with cuttlefish
10:15 dalgaaf I would propose to simply use facter to get a list of uuids and osd ids from ceph on startup (use ceph osd dump or whatever) and provide this list to ceph::osd to write the config down ... this would maybe be optional but it wouldn't hurt I think
10:16 dalgaaf or to provide it to ceph::conf::osd to write the config ... whatever would be nicer
10:16 dachary that's sensible also
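
A hedged facter sketch of dalgaaf's idea (the fact name and parsing are assumptions; it could equally shell out to ceph osd dump):

    # Hypothetical custom fact: list the local osd ids by walking
    # /var/lib/ceph/osd, where directories are named like "ceph-<id>".
    Facter.add(:ceph_osd_ids) do
      setcode do
        Dir.glob('/var/lib/ceph/osd/*-*').map { |d|
          File.basename(d).split('-').last
        }.sort.join(',')
      end
    end
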
10:17 dalgaaf but the one issue I see with the udev approach: how do we detect if everything is up and running before the puppet run ends?
10:17 dachary udevadm settle + check that all is in place ?
10:17 dalgaaf Somehow I would like to see the puppet run to fail if there is a problem with the setup
10:17 dalgaaf okay ...
10:18 dachary ceph-disk uses this to wait for things to happen instead of returning immediately while the async task is doing its work
10:19 dachary unless I'm mistaken, if puppet calls ceph-disk, things will happen synchronously and the result can be checked when it returns
10:19 dalgaaf have to check that ... I guess there will be a way to do it ...
10:20 dachary https://github.com/ceph/ceph/blob/master/src/ceph-disk#L867
10:20 dachary https://github.com/ceph/ceph/blob/master/src/ceph-disk#L1042
10:21 dachary how would all this work when an osd must be decommissioned...
10:22 dachary dalgaaf: how do you deal with removing osds currently ?
10:23 dalgaaf that isn't implemented yet ... if an osd fails I see no problem ... that works currently
10:24 dalgaaf but to decommission, meaning to remove it from the cluster and maybe clean it up or replace it with a new disk ... this is a complicated task
10:25 dalgaaf Currently this is for me an admin task anyway since this may affect customer data
10:26 dalgaaf and if e.g. a HDD fails due to hardware failure it needs to get replaced ... then puppet would provision this disk as soon as the disk is available to the system on the next run
10:26 dalgaaf the new disk
10:26 dalgaaf or do you see another task?
10:27 dalgaaf I didn't handle a 'shrink the cluster by removing some OSDs' yet
10:27 dalgaaf not sure if this is even a task that would happen in a production system
10:28 dalgaaf without manual admin steps
10:28 dalgaaf involved
10:30 dalgaaf In the future we could think about some kind of puppet solution for that, but I guess in the first steps this wouldn't be managed by puppet-ceph since it is very critical and user/customer data is involved
10:33 dalgaaf The only thing I have to think about currently is: how to share and process this list of uuid|osdhost|device between the MONs and the OSDs in the site.pp, but this is an implementation detail
10:33 dalgaaf dachary: any idea?
10:34 * dachary reading ( I zapped to another window sorry )
10:34 dalgaaf can puppet handle looping over some array to call classes at compile time?
10:35 dachary My puppet skills are lower than yours, I suspect, so I don't know the answer to the question ;-)
10:35 beddari no, don't try looping, what are you trying to do?
10:36 dalgaaf no problem ... will find a solution for that
10:36 beddari (in the future there will be a looping construct, yes .. the "future parser")
10:37 dachary if ceph::osd { /dev/sdc, ensure => present } then there should be a ceph::osd { /dev/sdc, ensure => absent } to deal with the case of a failed osd, until it's replaced ?
10:37 dalgaaf we have to share the list of: [osdhost1, /dev/sdb, uuid ] between the MONs and the OSDs
10:38 * dachary feels he is massively confused when thinking about removing osds...
10:38 dalgaaf since the MON would generate the osds and the OSD node would handle the rest but needs the uuid and the device to run what's needed
10:39 dalgaaf dachary: yes it would be possible to do some ensure => absent  ... but IMO this should only stop the osd, nothing more for now
10:39 dachary dalgaaf: interesting
10:40 dalgaaf but this is something you have to change for the special osd then in your site.pp if you need it
10:40 dachary well, if you remove the disk manually but don't tell puppet ... it will try to set it up again, won't it ?
10:41 dachary lunch calls ... I'll discuss this with ccourtaut :-D bbl.
10:41 dalgaaf if the disk is gone puppet can't do anything
10:41 dalgaaf I wouldn't dare to do something like: pick up 10 disks from the system and make OSDs out of them ... or what do you think
10:42 dalgaaf this would maybe lead to strange situations if a disk fails
10:44 dalgaaf beddari: do you know what happens on Line 46-48 on http://paste.openstack.org/show/48654/ ... is there some kind of looping (like: for each do) involved?
10:45 dalgaaf beddari: since the ceph::osd::device class takes normally only one device to process it
10:50 beddari reject and prefix are functions from puppetlabs-stdlib, they do array operations
10:50 beddari dalgaaf: so no looping, only passing an array
10:51 beddari dalgaaf: for stdlib the code is often the best docu, https://github.com/puppetlabs/puppetlabs-stdlib/tree/master/lib/puppet/parser/functions
10:52 fc__ joined #puppet-openstack
10:53 beddari dalgaaf: so $::blockdevices is a fact (string) I think that is split into an array, which finally ceph::osd::device gets as a param … one param yeah, but probably it handles both string and arrays? haven't looked at that code
10:54 dalgaaf no it doesn't .... that's what's confusing me
10:54 dalgaaf it handles only a string with one dev name
10:55 dalgaaf the rest was clear to me ... but the array confuses me
10:55 beddari bug? :)
10:55 beddari crossing intents
10:56 dalgaaf I have to check back with the author of the site.pp ... maybe he did some changes or he has only /dev/sda and sdb ;-)
10:56 dalgaaf which then would work
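
For reference, the likely explanation of the paste: when an array is used as a resource title, Puppet declares one resource per element, so ceph::osd::device still receives a single device string each time. A hedged reconstruction (the fact name comes from the discussion, the filter arguments are illustrative):

    $devs  = split($::blockdevices, ',')   # the fact is a comma-separated string
    $disks = reject($devs, 'sda')          # stdlib: e.g. keep the root disk out
    $paths = prefix($disks, '/dev/')       # stdlib: ['/dev/sdb', '/dev/sdc', ...]
    ceph::osd::device { $paths: }          # expands to one resource per element
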
10:57 dalgaaf then I have to use a provider to process the list and do the needed calls ... that would be possible in ruby
10:58 dalgaaf if there is currently no type of 'for/foreach/while' in puppet
10:58 beddari most of the time when people think they need to do that they don't ;-)
10:58 beddari but I don't know your problem
11:00 beddari (trying to read the above ;-)
11:00 dalgaaf I have an array with entries like [osdhost1, /dev/sdb, uuid ] and need to do some action for each of these elements of the array
11:04 xingchao joined #puppet-openstack
11:05 beddari so you might need a resource/provider to be able to express this as a state in Puppet DSL?
11:06 beddari I didn't see what that array expanded to, just more elements the same way?
11:07 dalgaaf yes all the same kind of entries but different values ... it's maybe more an array of elements which are either a list or an array
11:14 otherwiseguy joined #puppet-openstack
11:15 otherwiseguy joined #puppet-openstack
11:52 prad joined #puppet-openstack
12:07 bauzas joined #puppet-openstack
12:12 e1mer joined #puppet-openstack
12:18 dachary dalgaaf: I have to run http://redmine.the.re/projects/there/wiki/HOWTO_setup_OpenStack this afternoon, I'll reconnect later today
12:20 dalgaaf okay ... I will be online till 16:30 and then very late maybe in the night, but tomorrow again ...
12:20 dalgaaf I will write down later or tomorrow what we discussed .. is this okay for you?
12:20 dalgaaf discussed so far
12:21 dalgaaf dachary: ^^^ ... to the blueprint
12:21 dachary excellent idea !
12:22 dalgaaf dachary: great .. then see you later or tomorrow
12:28 mjblack joined #puppet-openstack
12:49 xingchao joined #puppet-openstack
12:49 dalgaaf_m joined #puppet-openstack
13:00 dprince joined #puppet-openstack
13:13 xingchao_ joined #puppet-openstack
13:26 mjblack joined #puppet-openstack
13:41 mjblack joined #puppet-openstack
13:43 xingchao joined #puppet-openstack
13:54 prad joined #puppet-openstack
14:04 dalgaaf joined #puppet-openstack
14:07 badiane_ka joined #puppet-openstack
14:08 dmsimard joined #puppet-openstack
14:14 blentz joined #puppet-openstack
14:25 badiane_ka joined #puppet-openstack
14:38 xingchao joined #puppet-openstack
15:31 dalgaaf joined #puppet-openstack
15:34 dtalton joined #puppet-openstack
15:37 ryanycoleman joined #puppet-openstack
15:39 ryanycoleman joined #puppet-openstack
16:02 ryanycoleman joined #puppet-openstack
16:04 ryanycol_ joined #puppet-openstack
16:15 ryanycoleman joined #puppet-openstack
16:22 ryanycoleman joined #puppet-openstack
16:26 xarses joined #puppet-openstack
16:29 tnoor1 joined #puppet-openstack
16:32 hogepodge joined #puppet-openstack
16:38 tnoor2 joined #puppet-openstack
16:39 tnoor2 joined #puppet-openstack
16:45 dmsimard dachary: ping
16:46 dachary dmsimard: pong
16:47 dmsimard dachary: was looking at the blueprint and at what still needs to be fleshed out
16:48 dachary dmsimard: what would be great at this point is to have a procedure of how things could be done in this context. A step by step explanation of how you see things working with the current design. That will greatly help figure out what's missing.
16:48 dmsimard Specifically ceph::krbd and ceph::client, we should probably mention that they are related/dependent. Also, let's pretend I want to mount a RBD image - should that be part of the module ?
16:49 tnoor1 joined #puppet-openstack
16:49 ryanycoleman joined #puppet-openstack
16:50 dmsimard I ask because there's a part talking about mounting a cephfs - to be consistent we should probably allow mounting of a rbd image
16:54 ryanycoleman joined #puppet-openstack
16:55 xarses joined #puppet-openstack
16:57 bodepd_ dalgaaf: there is an experimental foreach in the future parser in 3.3
16:57 bodepd_ dalgaaf: it will likely be supported in 4.0
16:57 bodepd_ ah. 3.2, not 3.3
16:58 bodepd_ http://puppetlabs.com/blog/puppet-3-2-introduces-an-experimental-parser-and-new-iteration-features
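
A hedged sketch of that experimental iteration (requires parser = future in puppet.conf; the 3.2 name was foreach, later renamed each, and the exact syntax shifted between releases, so treat the form as approximate):

    $osds = [ ['osdhost1', '/dev/sdb', 'uuid-1'],
              ['osdhost2', '/dev/sdc', 'uuid-2'] ]
    foreach($osds) |$osd| {
      notice("host=${osd[0]} dev=${osd[1]} uuid=${osd[2]}")
    }
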
16:59 hogepodge joined #puppet-openstack
17:01 dwt1 joined #puppet-openstack
17:10 openstackgerrit Mathieu Gagné proposed a change to stackforge/puppet-keystone: Fix duplicated keystone endpoints  https://review.openstack.org/52675
17:10 mgagne bodepd_: ^
17:12 mgagne damn
17:13 ari joined #puppet-openstack
17:14 hogepodge joined #puppet-openstack
17:22 xarses joined #puppet-openstack
17:23 tnoor1 joined #puppet-openstack
17:25 tnoor1 joined #puppet-openstack
17:29 xarses joined #puppet-openstack
17:41 xingchao joined #puppet-openstack
17:43 xarses joined #puppet-openstack
17:45 xarses joined #puppet-openstack
17:46 xarses joined #puppet-openstack
17:47 tnoor1 joined #puppet-openstack
17:50 newptone joined #puppet-openstack
17:51 yuxcer joined #puppet-openstack
17:55 xingchao joined #puppet-openstack
17:58 tnoor1 joined #puppet-openstack
18:00 mgagne bodepd_: having a hard time unit testing keystone_endpoint =)
18:06 mjblack joined #puppet-openstack
18:06 bodepd_ did you use flush?
18:06 mgagne bodepd_: yes
18:06 bodepd_ are you familiar with expects, stubs?
18:06 mgagne bodepd_: learning
18:07 bodepd_ flush is a little harder
18:07 mjblack joined #puppet-openstack
18:07 bodepd_ b/c you have to figure out how to have the system trigger it
18:07 mgagne bodepd_: flush(:hard => true) ?
18:07 bodepd_ I would have to dig into the Puppet source
18:07 mjblack interesting conversation I walked into
18:08 mgagne bodepd_: I'm already flusing but struggling with expectations and stubs
18:08 bodepd_ basically, access an instance of the provider and verify what happens when I call flush
18:08 bodepd_ ah. what about them?
18:08 tnoor1 joined #puppet-openstack
18:08 mgagne bodepd_: I think my resources aren't properly prefetched
18:08 bodepd_ yeah, it's all tangled
18:09 mgagne bodepd_: I mocked the instances part but I think I did it wrong
18:11 tnoor1 joined #puppet-openstack
18:12 mgagne bodepd_: so I think my endpoints aren't loaded correctly before the test. I expect destroy to be called once and it's never called
18:16 xarses joined #puppet-openstack
18:23 mgagne bodepd_: http://paste.openstack.org/show/48725/
18:25 mgagne bodepd_: results: http://paste.openstack.org/show/48726/
18:26 mjblack_ joined #puppet-openstack
18:32 mgagne bodepd_: flushing twice helped...
18:32 mgagne bodepd_: or not, dammit
18:37 mjblack joined #puppet-openstack
18:42 mgagne bodepd_: got to go in a meeting, new version: http://paste.openstack.org/show/48729/
18:42 mgagne bodepd_: auth_keystone is never called :-/
18:44 mgagne bodepd_: ok, partially found it: http://gofreerange.com/mocha/docs/Mocha/ObjectMethods.html#expects-instance_method
18:44 mgagne bodepd_: "The original implementation of the method is replaced during the test and then restored at the end of the test."
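
In hedged rspec/mocha shape (the method names flush, destroy, and auth_keystone come from the discussion; the fixture and arguments are illustrative, not the actual puppet-keystone spec):

    # a stub supplies canned data without asserting the call; an
    # expectation replaces the method AND fails the test if never invoked
    provider_class.stubs(:auth_keystone)
                  .with('endpoint-list')
                  .returns(endpoint_list_fixture)   # hypothetical fixture
    provider.expects(:destroy).once   # assert the duplicate gets removed
    provider.flush
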
18:52 mjblack_ joined #puppet-openstack
19:34 dachary dmsimard: that seems sensible
19:38 dachary bodepd_: mgagne is there something to do for https://review.openstack.org/#/c/52215/ to happen or just wait patiently ?
19:38 mgagne dachary: ask #openstack-infra for review
19:38 mgagne dachary: only them can approve, I cannot approve it myself
19:38 dmsimard dachary, mgagne: I asked already
19:38 dmsimard dachary, mgagne: They're just really busy.
19:39 dmsimard Which I believe is sort of understandable with havana and all
19:39 dachary ok. In your expert opinion mgagne, should I kindly ask although dmsimard already did ? Or give it a few more days ?
19:39 bodepd_ I just did
19:39 dachary bodepd_: :)
19:40 bodepd_ pabelanger: can you give us one of the +2s we need?
19:40 ryanycoleman joined #puppet-openstack
19:40 ryanycoleman joined #puppet-openstack
19:41 dachary I'll update the blueprint after dalgaaf this weekend. I feel we have a good understanding for the first class ( mon ) and the second ( conf ) and almost good for the third ( osd ). The key / auth is still unclear and probably has pitfalls.
19:43 dachary a first version with just one class ( mon ) is probably best to bootstrap everything. It does something useful and does not involve any complexity, nor does it orientate the design.
19:43 dachary orientate is probably a frenchism in this context ;-)
19:47 pabelanger bodepd_, can't sorry, don't have the permission. But, -infra could
19:48 dmsimard dachary: lots of frenchies here :p
19:48 dmsimard dachary: more than you think, probably
19:48 dachary :-)
19:48 mgagne connais pas
19:48 dachary moi non plus
19:49 dmsimard dachary: back to my earlier question about ceph::krbd/ceph::client/ceph::cephfs
19:50 dmsimard dachary: ceph::krbd and ceph::cephfs are to be used to install the required dependencies and mount a resource - correct ?
19:52 bodepd_ pabelanger: I was mostly hoping b/c I know you have helped them lots!
19:53 dachary dmsimard: that's my understanding, yes
19:55 dmsimard Okay, I haven't toyed with cephfs (yet) so I don't know exactly how it works but i'll update ceph::krbd in this direction (I think ceph::rbd might be a better name)
19:55 pabelanger bodepd_, I'd +2 if I could :D
19:55 ryanycoleman joined #puppet-openstack
19:55 dmsimard clarkb from -infra says he's looking
19:59 dmsimard bodepd_: Have we confirmed how we will go about finding and approving reviewers for puppet-ceph ?
20:00 ryanycoleman joined #puppet-openstack
20:01 bodepd_ dmsimard: I have an email drafted that I will send to puppet-openstack (I could include another ceph list)
20:01 bodepd_ it encourages people to request to be core, and then people can +1
20:01 bodepd_ I was waiting for that infra patch to get merged
20:02 dmsimard k
20:02 mgagne dachary: requirement on CLA removed
20:03 dachary mgagne: cool. We'll never know if that helps or not but better be safe ;-)
20:04 mgagne dachary: CLA is strongly recommended if the project hopes to be part of openstack one day, it isn't required (enforced) for stackforge projects.
20:15 openstackgerrit Francois Deppierraz proposed a change to stackforge/puppet-horizon: Set LOGOUT_URL properly in horizon settings  https://review.openstack.org/52699
20:22 qba73 joined #puppet-openstack
20:25 bodepd_ looks like it should be good for getting merged today
20:39 tnoor1 joined #puppet-openstack
20:42 bodepd_ does ceilometer require a mysql db and a mongo db?
20:43 tnoor1 joined #puppet-openstack
20:45 mgagne bodepd_: only one of them
20:45 mgagne bodepd_: although you would be crazy to store all those metrics in mysql IMO
20:47 mgagne bodepd_: still trying to debug my rspec test
20:47 mgagne bodepd_: I can't get @property_hash to be populated even if I call prefetch
20:54 dmsimard dachary: I think I found something missing - there's a class for creating pools but not images (in these pools)
20:54 dmsimard ceph::image ?
21:06 ryanycol_ joined #puppet-openstack
21:10 ryanyco__ joined #puppet-openstack
21:15 tnoor1 joined #puppet-openstack
21:24 ari joined #puppet-openstack
21:32 ryanycoleman joined #puppet-openstack
21:32 ryanycoleman joined #puppet-openstack
21:35 xarses dmsimard: eh?
21:35 xarses don't you just create the pool named images and/or volumes?
21:36 xarses also, should probably be a function so it can be reused? see https://github.com/Mirantis/fuel/blob/master/deployment/puppet/ceph/manifests/pool.pp
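
A hedged sketch of a pool defined type in the spirit of the linked pool.pp (the command and guard are assumptions, not the Mirantis code):

    define ceph::pool ($pg_num = 128) {
      exec { "ceph-pool-${name}":
        command => "ceph osd pool create ${name} ${pg_num}",
        path    => ['/usr/bin', '/bin'],
        # guard is illustrative: 'ceph osd lspools' lists existing pools
        unless  => "ceph osd lspools | grep -qw ${name}",
      }
    }
    # usage: ceph::pool { ['images', 'volumes']: }
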
21:43 tnoor1 joined #puppet-openstack
21:44 openstackgerrit Mathieu Gagné proposed a change to stackforge/puppet-keystone: Fix duplicated keystone endpoints  https://review.openstack.org/52675
22:00 dmsimard xarses: let's pretend you want to mount a rbd, you need to create the image in a pool first
22:01 xarses dmsimard: ahh i probably wouldn't need/want puppet to do that, but if you see a need
22:03 dmsimard The blueprint is essentially just representing what can/could be done to the extent of what is possible with ceph
22:04 dalgaaf xarses: the current puppet-ceph (from DTAG) already has a class to manage pools (ceph::pool) and the user management would be done via ceph::key
22:05 xarses dalgaaf, again I hope for functions, so they can be re-used, but sure
22:05 dalgaaf I wouldn't add image import via puppet-ceph ... this is something that could be managed by e.g. glance or cinder
22:05 xarses i think dmsimard is thinking outside the confines of openstack
22:06 dmsimard xarses: Yes ;)
22:06 xarses which means he can implement it :P
22:06 dmsimard xarses: we're very biased in the direction of openstack and thus it should suit our needs but puppet-ceph should be generic enough to be used elsewhere IMO
22:07 dmsimard otherwise no point in mirroring over to github.com/ceph and we should call this puppet-openstack-ceph instead :)
22:07 xarses dmsimard: don't disagree, just people around here wont be likely to support / use it
22:08 dmsimard I'm signing off, you guys have a nice weekend - some great work this week so far
22:09 dalgaaf btw. what do you think about relicensing the puppet-ceph module to Apache v2 and reusing code from there?
22:10 dmsimard dalgaaf: need to ask the people from enovance for that
22:10 dmsimard I can check with leseb next week, he's one of the authors iirc
22:11 dmsimard he's probably asleep at this hour :)
22:11 leseb dmsimard: I'm not!
22:11 dalgaaf i guess we could speak with them also, we have some contacts to their management AFAIK
22:11 dmsimard leseb:  see dalgaaf's question about licensing ^
22:12 * leseb is reading
22:13 dmsimard leseb: in the context where puppet-ceph is currently licensed agpl, and projects on openstack are apache v2 there is a clash
22:13 leseb dmsimard: well if this goes to the community, this looks pretty normal to me
22:13 leseb dmsimard: yes yes, let me just look at our other modules
22:14 dalgaaf On the ceph list some people said: AGPL is a no go
22:14 xarses me
22:16 dalgaaf I would agree to relicensing for my contributions ... and would do the same for what we have in our fork.
22:17 leseb I believe eNovance will agree on that as well, but I just need to ask first and will confirm that as soon as I get the go
22:18 dalgaaf should i send an official request to the authors or the repo maintainers ... or is there a special contact to ask?
22:18 dalgaaf leseb: ok
22:19 dmsimard leseb: puppet-ceilometer and puppet-heat are Apache v2 :D
22:19 leseb dalgaaf: well I'm just writing an email to my manager, if I get the go I'll change the LICENSE file right away
22:19 leseb dmsimard: this should not be a problem then :)
22:20 xarses leseb, for proprietary we need all non eNovance contributors to agree to the change also
22:20 dmsimard Okay, I'm off for real now - have a nice weekend !
22:21 dalgaaf oh ... you need to send a mail to all contributors and ask if they agree before relicensing it
22:21 xarses leseb: assuming eNovance has a rights assignment agreement for its employees / contractors
22:21 leseb humm
22:21 dalgaaf correct
22:21 ryanycoleman joined #puppet-openstack
22:22 dalgaaf but i guess this will not be a problem
22:22 leseb ok then as soon as I have the go from eNovance I'll ask all the contributors
22:22 dalgaaf but may take dome days
22:22 leseb yeah but you're right, it's better to be transparent I guess
22:22 dalgaaf some days
22:23 openstackgerrit Mathieu Gagné proposed a change to stackforge/puppet-keystone: Fix duplicated keystone endpoints  https://review.openstack.org/52675
22:23 xarses leseb, dalgaaf: we can define a reasonable date to respond by and update the repo with the information to consider it public notice. The worst case is that the contributions are removed if they don't agree
22:24 leseb dalgaaf: well I can parallelize this request
22:24 dalgaaf It's not only for transparency ... it's more a legal issue.
22:24 leseb dalgaaf: correct
22:25 dalgaaf ok ... sounds like a plan
22:25 * leseb is writing the email
22:25 ryanycol_ joined #puppet-openstack
22:25 dalgaaf great ... thanks!
22:28 leseb dalgaaf: could please give me your email address? can't find it on github
22:28 leseb dalgaaf: well you're not the only one... :/
22:30 leseb dalgaaf: thanks
22:32 xarses leseb, you can usually find it hiding in commits
22:32 xarses let me know if you want help finding some
22:32 dalgaaf check git log
22:33 xarses that's what I'm saying, it took me 2 sec to find yours dalgaaf
22:35 leseb xarses: yeah, was looking directly on github...
22:35 dalgaaf i can easily send you the full list if you need it
22:37 dachary dalgaaf: here we go :-) https://review.openstack.org/#/c/52215/
22:38 leseb dalgaaf: well I'm just missing arnewiebalck
22:39 dalgaaf dachary: great
22:39 dachary dalgaaf: bodepd_ do we have an agreement to set the goal of the first useable puppet-ceph module to just deploy a single ceph ? One class at a time :-)
22:40 dalgaaf what's for you a 'single ceph'?
22:40 dachary if so I'll clean up the blueprint accordingly. Most of it can remain fuzzy but we can be precise about this first goal.
22:40 dachary single mon, sorry typo :-)
22:41 dachary dalgaaf: ^
22:41 dachary dalgaaf: bodepd_ do we have an agreement to set the goal of the first useable puppet-ceph module to just deploy a single MON ? One class at a time :-)
22:43 dachary a) mon class part 1 : single mon, b) conf class ( should be very non controversial ), c) mon class part 2 : multiple mon, d) osd class ( we're in good shape but there remain details to figure out )
22:43 dalgaaf hm ... I could also imagine to add more than only the MON class in the first step
22:43 dachary each iteration will lead to a useable / testable / integrable module
22:44 dalgaaf from my point there is no difference between step on
22:44 dalgaaf step a) and c)
22:44 dachary step c requires a conf file, right ?
22:45 dalgaaf I would
22:45 dachary I'm a strong believer in KISS and maximizing the amount of work not done for a single iteration. Can you think of a smaller step than just one MON ?
22:45 dalgaaf sorry ... typing on my mobile
22:45 dachary :-D
22:46 openstackgerrit joined #puppet-openstack
22:47 dachary actually, there is a step 0) : a module with the right name that does nothing at all, it just is a valid puppet module...
22:48 dalgaaf no there is probably no smaller step but from my point you can always setup multiple MONs in parallel ... I have to check tomorrow
22:48 dachary dalgaaf: can you do that without writing a configuration file ?
22:48 dalgaaf if there is a problem with this
22:49 dalgaaf i have to check this ... but not tod
22:49 dachary ok, let's talk tomorrow, have a good night !
22:50 dalgaaf today ... I'll leave now ... lets discuss tomorrow/later ;-)
22:50 dalgaaf cu
22:56 ryanycoleman joined #puppet-openstack
22:59 bodepd_ how goes the ceph work?
23:00 bodepd_ great to see some Mirantis involvement :)
23:00 bodepd_ you guys should have somewhere to commit code to now :)
23:02 ryanycoleman joined #puppet-openstack
23:02 xarses bodepd_: sounds like they don't like ours
23:02 xarses ;)
23:03 bodepd_ xarses: as long as everyone can have an honest discussion about the pros and cons of each approach, I think the end product will suit everyone
23:03 bodepd_ xarses: module consolidation is hard :)
23:03 xarses of course
23:04 bodepd_ xarses: just really glad to see you involved, so you can hopefully agree with and use the final work
23:04 bodepd_ if it makes you feel any better, people have specific issues with all of the implementations out there
23:04 bodepd_ hence all of the fragmentation
23:05 xarses bodepd_ when we started there was only one other reasonable puppet source besides the eNovance one, and it looked unmaintained; eNovance was AGPL so we had to do our own thing
23:06 bodepd_ ah, for licensing reasons.
23:06 bodepd_ I know Cisco started with the enovance module, but had issues with all of the requirements for orchestration
23:06 bodepd_ (ie: too many runs required for installing osd's)
23:06 xarses my understanding was that and AGPL
23:07 bodepd_ ah. I guess I mostly heard about the implementation as opposed to licensing concerns :)
23:08 xarses imho AGPL is like smog, everyone contributes, but no one likes having it around, and it gets into everything
23:09 bodepd_ it's a shame: it's a good license, intended to make things sharable, but the result is that it actually limits who can use things
23:09 xarses it's a great license, don't get me wrong, but it hampers adoption due to several vagaries
23:09 bodepd_ yep. I was at PuppetLabs when they moved from GPL to Apache
23:10 bodepd_ huge effort, but totally required for commercialization/corporate adoption
23:11 bodepd_ xarses: I'll try to catch up with the Puppet guys at Mirantis in Hong Kong to see if any of the other stuff I've been working on may be useful for you
23:11 bodepd_ xarses: not sure where you sit in that world
23:12 xarses there is no clarity in AGPL as to how JIT compiled languages consider referencing other "modules" as static, dynamic, or an API
23:12 xarses as such, it's commonly considered too risky
23:13 xarses i wont be at Hong Kong, a number of the Russian guys will bw
23:13 xarses be
23:13 bodepd_ ah. too bad. It should be fun :)
23:13 xarses as well as a bit of mgmt and sales from the US
23:21 openstackgerrit William Van Hevelingen proposed a change to stackforge/puppet-cinder: Clean up errors the README examples  https://review.openstack.org/52717
23:25 openstackgerrit William Van Hevelingen proposed a change to stackforge/puppet-cinder: Clean up errors the README examples  https://review.openstack.org/52720
23:32 ryanycoleman joined #puppet-openstack
23:34 blkperl well shoot, that's not a grammatically correct sentence
23:35 openstackgerrit William Van Hevelingen proposed a change to stackforge/puppet-cinder: Clean up errors in the README examples  https://review.openstack.org/52717
23:37 openstackgerrit William Van Hevelingen proposed a change to stackforge/puppet-cinder: Clean up errors in the README examples  https://review.openstack.org/52720
