
IRC log for #fuel, 2014-10-31


All times shown according to UTC.

Time Nick Message
00:04 mattgrif_ joined #fuel
00:12 mattgriffin joined #fuel
00:26 rmoe joined #fuel
00:26 xarses joined #fuel
00:47 teran joined #fuel
01:19 mattgriffin joined #fuel
02:49 mattgriffin joined #fuel
04:23 mattgriffin joined #fuel
04:41 ArminderS joined #fuel
04:59 ArminderS- joined #fuel
05:05 anand_ts joined #fuel
05:56 kakaouette joined #fuel
06:31 syt joined #fuel
06:48 stamak joined #fuel
06:52 dklepikov joined #fuel
07:05 syt joined #fuel
07:11 monester_laptop joined #fuel
07:14 syt joined #fuel
07:14 syt1 joined #fuel
07:34 adanin joined #fuel
07:44 robklg joined #fuel
07:54 dancn joined #fuel
08:01 pasquier-s joined #fuel
08:08 CybDev joined #fuel
08:45 DaveJ__ joined #fuel
08:45 DaveJ__ Hi guys - does anyone know if it's possible to run the fuel demo on an OpenStack VM instead of VirtualBox?  I know there are issues with getting OpenStack to allow VMs to act as a DHCP server / and PXE boot, but if anyone has solved it I'd like to hear
08:48 adanin joined #fuel
09:25 kaliya DaveJ__: you can set up the Fuel Master to run on any appliance you like. KVM, VMware and bare metal.
09:26 kaliya DaveJ__: so I guess yes, but I have no idea, never tried to simulate a TripleO with Fuel :)
09:28 DaveJ__ kaliya: Thanks - I have no problem deploying the fuel appliance, but I can't get VMs to PXE boot; OpenStack seems to block DHCP offers from VMs by default.  So I was wondering if there is a guide for messing with the iptables rules
09:34 kaliya DaveJ__: could be a security option issue, you should enable ports for DHCP internally I guess
09:35 DaveJ__ No it's not part of the security groups
09:35 DaveJ__ it's similar to the anti-spoofing rules, I guess, that were added in Icehouse
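
For context on those anti-spoofing rules: neutron's iptables firewall driver adds a per-port drop for DHCP-server traffic that is independent of security groups, which is consistent with what DaveJ__ is seeing. A rough sketch of inspecting and temporarily removing it on the compute host follows; the chain name is a placeholder, and neutron re-adds the rule whenever it rebuilds the chains:

    # look for the DHCP-server drop (UDP sport 67 -> dport 68) on the compute host
    iptables -S | grep -- '--sport 67'
    # expected shape (per-port chain name shown here as a placeholder):
    #   -A neutron-openvswi-oXXXXXXXX -p udp -m udp --sport 67 --dport 68 -j DROP
    # deleting it lets the VM answer DHCP, until neutron rebuilds its rules
    iptables -D neutron-openvswi-oXXXXXXXX -p udp -m udp --sport 67 --dport 68 -j DROP
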
09:46 boris-42 joined #fuel
09:54 stamak joined #fuel
09:58 ddmitriev joined #fuel
10:16 e0ne joined #fuel
10:23 hyperbaba Kupo24z: how big is the risk of enabling caching in ceph? And what are the advantages in terms of speed?
10:27 syt joined #fuel
10:43 teran joined #fuel
10:43 syt joined #fuel
10:48 syt joined #fuel
10:57 monester_laptop joined #fuel
11:02 vtzan joined #fuel
11:12 pasquier-s_ joined #fuel
11:32 monester_laptop joined #fuel
11:39 pasquier-s joined #fuel
11:40 alvinstarr left #fuel
12:27 syt1 joined #fuel
12:43 alvinstarr joined #fuel
12:43 alvinstarr left #fuel
12:44 teran joined #fuel
12:44 teran_ joined #fuel
12:50 jaypipes joined #fuel
12:54 monester_laptop joined #fuel
13:09 Dr_Drache hmm, figured out how to download images via glance and APIs, just no way to get volumes out yet
13:09 merdoc Dr_Drache: and how?
13:10 Dr_Drache merdoc, glance allows image-download, and you can also get it via a GET in the image API
13:10 Dr_Drache but for backups that's near meaningless, since images are static.
13:11 Dr_Drache (normally create new volumes from images for instances)
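
The two download paths Dr_Drache mentions, sketched with Juno-era syntax; the UUID, host, and token are placeholders:

    # glance CLI: pull an image down to a local file
    glance image-download --file backup.img <image-uuid>
    # image API: the same thing as an authenticated GET against the v2 endpoint
    curl -H "X-Auth-Token: $TOKEN" -o backup.img \
        http://<glance-host>:9292/v2/images/<image-uuid>/file
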
13:15 merdoc Dr_Drache: if you find out how to back up an instance - let me know please
13:15 Dr_Drache merdoc, I'm looking, it's a real thorn, and a ding to the openstack devs.
13:15 Dr_Drache IMO, how can you have 3 ways to upload, but ZERO ways to download?
13:16 Dr_Drache merdoc, there is a backup command, but it seems like what it backs up is useless for migrations or DR.
13:19 merdoc sad
13:20 Dr_Drache and so far the communication I've gotten is: add more hardware - that's horrible advice.
13:21 merdoc yep. and what should I do if I need to migrate between clouds?
13:23 Dr_Drache that you can do.
13:23 Dr_Drache but the other storage needs to match (ceph to ceph, etc)
13:30 teran joined #fuel
13:34 merdoc Dr_Drache: how?
13:35 Dr_Drache merdoc: didn't look into that very far.
13:38 boris-42 joined #fuel
13:38 Dr_Drache actually, I might have found a way
13:40 Dr_Drache create snapshots of the running instances with "nova image-create"; then publish your /var/lib/glance/images via http, OR use glance image-download to copy the snapshots somewhere and publish that directory via http
13:40 Dr_Drache import those snaps in the new environment by using "glance image-create --copy-from"
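
A sketch of that recipe end to end; image names, paths, and the qcow2 disk format are assumptions, and the glance v1 --copy-from import is the piece doing the cross-cloud copy:

    # 1. on the source cloud: snapshot the running instance into glance
    nova image-create <instance> my-snap
    # 2. either publish /var/lib/glance/images via http, or pull the snapshot
    #    out first and publish that directory instead
    glance image-download --file /var/www/snaps/my-snap.img <snapshot-uuid>
    # 3. on the destination cloud: import by URL
    glance image-create --name my-snap --disk-format qcow2 \
        --container-format bare --copy-from http://<source-host>/snaps/my-snap.img
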
13:41 merdoc hm. I gotta try it. thx
13:45 getup joined #fuel
13:48 e0ne joined #fuel
13:51 Dr_Drache merdoc, I'm having customer issues; if that works (even to restore to the old cluster) I'd be happy
13:57 Dr_Drache and I mean, i'm having customer issues, meaning I might not get time to test right away
14:09 mattgriffin joined #fuel
14:23 Dr_Drache merdoc, my snapshot downloads are of size 0
14:27 Dr_Drache merdoc, may just be me
14:28 merdoc I have no time to try it right now. maybe next week I'll have something useful
14:28 Dr_Drache yea, i'm working on it.
14:28 monester_laptop joined #fuel
14:29 Dr_Drache but I'm not getting snapshots of any size
14:41 e0ne joined #fuel
14:54 boris-42 joined #fuel
14:56 blahRus joined #fuel
15:05 teran_ joined #fuel
15:07 Dr_Drache can anyone help me sort out why my nova image-create snapshots are size 0 when downloaded with glance?
15:08 mpetason joined #fuel
15:10 vt102 joined #fuel
15:25 vt102 I'd like to push a horizon skin out to my controllers.  It seems to me that, rather than manually doing so, it would be good to use a puppet module and have fuel push it out.  It seems I can trigger a puppet run across my existing cluster for some changes with the command: fuel --env 1 deploy-changes
15:26 vt102 It appears those are changes that fuel has been notified of, like node assignment.  I'm not sure a puppet module change or addition would count.
15:26 vt102 Would it?  Or is there a different way to do that?
15:27 e0ne joined #fuel
15:29 Dr_Drache fancy, horizon skins
15:29 mattgriffin joined #fuel
15:31 vt102 the logo and such
15:32 vt102 it seems like a pretty non-intrusive way to approach pushing changes out via fuel
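
For comparison, the manual push vt102 wants to replace would look roughly like this; the node names are placeholders and the logo path assumes Ubuntu's openstack-dashboard package layout:

    # copy the skin to each controller and reload apache (the manual baseline)
    for node in node-1 node-2 node-3; do
        scp logo.png $node:/usr/share/openstack-dashboard/static/dashboard/img/logo.png
        ssh $node service apache2 reload
    done
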
15:44 Dr_Drache yea
15:53 bhi joined #fuel
16:02 kozhukalov joined #fuel
16:02 e0ne joined #fuel
16:03 rmoe joined #fuel
16:06 angdraug joined #fuel
16:18 akupko joined #fuel
16:31 jobewan joined #fuel
16:55 xarses joined #fuel
17:10 ArminderS joined #fuel
17:14 robklg joined #fuel
17:15 warpc__ joined #fuel
17:16 robklg joined #fuel
17:19 robklg left #fuel
17:30 syt joined #fuel
18:00 championofcyrodi left #fuel
18:00 championofcyrod1 joined #fuel
18:36 kupo24z joined #fuel
18:58 nathharp joined #fuel
19:01 nathharp Hello!  I have been playing with Fuel for a few days now, having previously used RDO plus hacking to deploy Openstack+Ceph.
19:02 nathharp I’ve got a test Virtualbox deployment, but I’ve got a query and I’ve not been able to find the answer in any docs
19:03 nathharp if I deploy a compute node, it insists on creating ‘Virtual Storage’ on the disks - however, if I’m using Ceph for everything, I don’t know what this is used for?
19:16 Dr_Drache nathharp, it's a holdover
19:16 Dr_Drache it has to be 5120MB
19:16 wayneeseguin joined #fuel
19:17 nathharp ok - is it actually used for anything?
19:17 Dr_Drache on the controller, yes.
19:18 Dr_Drache but with full ceph backing, it's not supposed to be, no
19:19 robklg joined #fuel
19:21 robklg joined #fuel
19:23 robklg joined #fuel
19:27 nathharp ok, thank you for the info
19:31 stamak joined #fuel
19:32 ArminderS- joined #fuel
19:38 xarses Dr_Drache: nathharp: it's needed by nova for some things still, so the min value of 5G remains. Mostly it's used to store the XML describing the instances
19:38 xarses In cases where the images are not in raw format, or glance is not using ceph, the images will be pulled to that location prior to creating the instance
19:39 xarses in the case of a current bug in nova, ceph snapshots can end up being placed there
19:40 Dr_Drache xarses, when I nova image-create a snapshot
19:40 Dr_Drache then glance image-download
19:40 Dr_Drache the download is 0 bytes
19:41 xarses Dr_Drache: you'd have to talk to angdraug about the nova - ceph - snapshot bug, I don't know all the details other than there is a bug in some part of that process that can cause it to use the instance storage
19:42 Dr_Drache xarses, ok.
19:42 Dr_Drache I figured that would be a way to do my offsite backups.
19:42 Dr_Drache lol
19:47 xarses Dr_Drache: nathharp we also discussed that it's good to keep the volume, so that if something like the nova ceph snapshot bug were to cause the system to start consuming storage abnormally, it would break that volume instead of the rootfs
19:48 Dr_Drache xarses, I keep mine @ 5120.
19:48 Dr_Drache should it be bigger?
19:49 xarses Dr_Drache: no, with ceph it will provision the 5G
19:49 xarses that should be enough
19:57 * angdraug perks up
19:59 angdraug virtual storage size should be double the size of your largest glance image
20:00 angdraug when you create snapshots for glance, nova on compute will use this space to download the image from ceph, then run qemu-img convert on it and upload it to glance
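
Roughly the manual equivalent of the path angdraug describes, to show why both copies exist at once on the compute node; the object name, paths, and formats here are assumptions:

    # copy 1: raw download from ceph into the "virtual storage" area
    rbd export compute/<instance-uuid>_disk /var/lib/nova/instances/snap.raw
    # copy 2: the converted image sits alongside the raw one
    qemu-img convert -f raw -O qcow2 /var/lib/nova/instances/snap.raw snap.qcow2
    # upload to glance; only then can both local copies be cleaned up
    glance image-create --name my-snap --disk-format qcow2 \
        --container-format bare --file snap.qcow2
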
20:00 Dr_Drache ....
20:00 Dr_Drache damn
20:00 Dr_Drache that sucks a lot
20:00 angdraug CoW snapshots only work from glance, not to glance
20:01 Dr_Drache have to take my cluster down then.
20:01 Dr_Drache but...
20:01 angdraug https://github.com/angdraug/beamer-fuel-ceph/blob/master/fuel-ceph.tex#L376
20:01 Dr_Drache that said... is that process correct?
20:02 angdraug well, it's a bug: https://bugs.launchpad.net/nova/+bug/1346525
20:02 angdraug likely won't be fixed until kilo
20:03 Dr_Drache I mean, say I'm working around that bug: nova image-create, then download that snapshot with glance image-download -f
20:03 angdraug why do you want to have glance involved at all? why not just use cinder volume snapshots?
20:04 Dr_Drache I want to pull a image OUT of openstack.
20:04 angdraug or are you just concerned about your existing vms?
20:04 Dr_Drache for legacy backups
20:04 angdraug you don't have to use openstack for that at all, rbd cli can pull an image for you
20:05 Dr_Drache really?
20:05 Dr_Drache damn. I haven't seen that anywhere.
20:05 angdraug rbd export
20:05 angdraug http://ceph.com/docs/master/man/8/rbd/
20:05 angdraug that will bypass all that moving data around between ceph, compute, and glance
20:05 Dr_Drache yea, seemed like a lot of work
20:06 angdraug rbd export will even create a sparse file for you, if I'm not mistaken
20:06 angdraug I've been meaning to try that for a while, never got around to it
20:06 Dr_Drache then, say I want that back into an openstack?
20:06 Dr_Drache standard glance upload?
20:06 angdraug upload that as a raw image
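
The round trip being suggested, as a sketch; pool and image names are placeholders, and the raw disk format on re-upload follows angdraug's advice above:

    # pull the disk straight out of ceph, bypassing nova/glance entirely
    rbd export <pool>/<image-name> /backups/myvm.raw
    # later, push it back into an openstack as a raw glance image
    glance image-create --name myvm-restored --disk-format raw \
        --container-format bare --file /backups/myvm.raw
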
20:06 Dr_Drache nice
20:08 Dr_Drache i'm going to test here in 5 min
20:08 xarses Dr_Drache: the items in rbd are named with the UUID of the object in openstack
20:09 xarses that is, the object's name will equal the UUID in openstack, so they are easy to sort out
20:09 Dr_Drache so, nova list will get me what I need
20:10 xarses for the compute pool
20:10 Dr_Drache right.
20:10 xarses for the cinder pool it will match cinder list in the images pool
20:10 xarses erm volumes
20:10 xarses not images
20:10 Dr_Drache sweet
20:10 xarses for glance it will match glance list in the images pool
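
The mapping xarses lays out, as commands; pool names are as in this deployment, and the volume- prefix and _disk suffix reflect how these objects are typically named (the prefix comes up again below):

    nova list            # instance UUIDs -> compute pool (typically <uuid>_disk)
    rbd ls compute
    cinder list          # volume UUIDs   -> volumes pool, named volume-<uuid>
    rbd ls volumes
    glance image-list    # image UUIDs    -> images pool, named by bare <uuid>
    rbd ls images
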
20:11 angdraug xarses: have you seen my IM about that radosgw/hadoop bug?
20:11 Dr_Drache so, I decide where I want it to come from
20:13 Dr_Drache angdraug xarses http://paste.openstack.org/show/127761/
20:13 xarses angdraug: yes, deploying new packages now
20:16 Dr_Drache I think I have the syntax wrong
20:23 Dr_Drache damn syntax
20:23 Dr_Drache lol
20:26 brad[] If I deploy openstack and ceph on the same nodes with fuel
20:26 brad[] can I remove/redeploy the openstack bits without annihilating my ceph filesystem or do I have to stage all the data elsewhere?
20:26 brad[] using fuel
20:27 kupo24z xarses: It looks like my 5.1.1 deployment is missing network.incoming.bytes and other metrics, is there something that needs to be enabled for those?
20:28 Dr_Drache angdraug, sadly either there is something wrong with my deployment, or I'm an idiot.
20:33 angdraug brad[]: first of all, it's not a good idea to combine node roles; that's only useful in PoC testing, not in production
20:33 angdraug brad[]: when you delete a node from an environment, all its disks are nuked
20:36 angdraug Dr_Drache: ?
20:37 Dr_Drache http://paste.openstack.org/show/127761/
20:37 Dr_Drache even adding --pool
20:43 angdraug what does "rbd ls -l compute" show?
20:44 Dr_Drache blank
20:45 Dr_Drache http://paste.openstack.org/show/127762/
20:48 Dr_Drache adding the volume- prefix fixed that
20:48 angdraug so your vms are in cinder, not in nova ephemeral
20:49 Dr_Drache I guess?
20:49 Dr_Drache it's a fuel deploy with all ceph options enabled
20:49 angdraug yeah, that's why their disks are in the volumes pool
20:49 Dr_Drache that something I need to change?
20:49 angdraug no, that's a better way
20:50 angdraug so you should use cinder list, not nova list, to get the list of images you need to export
20:51 Dr_Drache root@node-9:/mnt/img# rbd --pool=volumes export volume-fce68938-9cfe-468a-bd12-1f01e5b222c2 raw.img
20:51 Dr_Drache Exporting image: 49% complete...
20:51 Dr_Drache guess I could have named the file better
20:51 kupo24z Can someone with a 5.1.x deployment run ceilometer meter-list | grep network and see if incoming or outgoing bytes is listed?
20:53 Dr_Drache root@node-9:/mnt/img# ceilometer meter-list | grep network
20:53 Dr_Drache publicURL endpoint for metering not found
20:53 Dr_Drache kupo24z
20:55 Dr_Drache angdraug, thanks... looks like i gotta go
21:17 kupo24z joined #fuel
21:32 kupo24z should i not have 100% packet loss from pinging the public VIP from a compute node?
21:33 kupo24z Getting "From 23.80.0.2 icmp_seq=2 Redirect Host" / "From 23.80.0.2: icmp_seq=3 Redirect Host (New nexthop: 23.80.0.1)"
21:34 kupo24z then repeats
21:34 nathharp left #fuel
21:46 kupo24z xarses: Looks like ceilometer-agent is trying to connect using the public VIP, which causes issues on compute nodes with the external network disabled
21:47 kupo24z Is there a way to change the IP it's connecting to, possibly the keystone endpoint?
21:52 kupo24z Ah, may have found it, can set internalURL vs publicURL in ceilometer.conf on the compute nodes
21:55 boris-42 joined #fuel
21:55 kupo24z os_endpoint_type=internalURL is the change
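
As it would look in the config file, per the Juno config reference kupo24z links below; the section name is taken from that reference, and the agent needs a restart afterwards:

    # /etc/ceilometer/ceilometer.conf on each compute node (sketch)
    [service_credentials]
    # use the internal endpoint from the keystone catalog, not the public VIP
    os_endpoint_type = internalURL
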
21:58 kupo24z should make a report about that..
22:21 adanin joined #fuel
22:22 teran joined #fuel
22:24 teran_ joined #fuel
22:25 vtzan joined #fuel
22:28 xarses kupo24z: please do and drop me the #
22:30 kupo24z xarses: getting Auth fail after i changed it to InternalURL, http://pastebin.mozilla.org/7033447
22:30 kupo24z should i use AdminURL or do i need to change some additional parameters in ceilometer.conf?
22:30 kupo24z http://docs.openstack.org/trunk/config-reference/content/section_ceilometer.conf.html
22:30 kupo24z see line 972
23:14 kupo24z1 joined #fuel
23:21 xarses joined #fuel
23:24 kupo24z1 xarses: fixed that previous error, was a human issue. I'll make a bug report about it in a min
23:31 xarses ok
23:32 rmoe joined #fuel
23:39 kupo24z1 xarses: https://bugs.launchpad.net/fuel/+bug/1388284
