
IRC log for #fuel, 2015-02-11


All times shown according to UTC.

Time Nick Message
00:11 codybum hi Rich
00:19 claflico joined #fuel
00:21 codybum is anyone experiencing lots of "Failed to publish message to topic" errors under Neutron?
00:24 claflico joined #fuel
00:33 codybum hello?
00:35 rbowen joined #fuel
00:35 docaedo @codybum: I'm pinging a few people, see if I can find someone who can answer your questions
00:35 docaedo I know changing swappiness has helped out with some rabbitmq issues but I don't have any links at hand that would help with more info
00:37 docaedo (but I haven't seen anything about failed to publish issues being resolved by that specific change)
00:39 codybum hi docaedo: I would be surprised if I am alone in this.  I have experienced the same issue with all fuel 6 releases.
00:40 codybum it is very strange.  I even cabled the admin network separate from the other interfaces to try and rule it out.
00:40 codybum someone mentioned this bug might be related: https://bugs.launchpad.net/fuel/+bug/1403687
00:42 docaedo interesting, I wouldn't think keepalives (even if way too frequent) would cause that. I assume you've already seen https://bugs.launchpad.net/fuel/+bug/1413702
00:45 codybum Yep.  That setting really helped.  Basically, failure rates dropped from 1/5 to 1/30
00:49 docaedo that's better, but still seems really high, clearly something else is happening
00:50 docaedo sorry I'm not any help at the moment - but in a few hours the European crew starts to come online, and they check the scrollback, so hopefully there's going to be some thoughts on this here soon
00:50 codybum Cool!
00:51 xarses codybum: you can try setting the kernel keepalive setting back to the defaults, I'm not sure that they are necessary any longer; you will want to go through the ha destructive tests to ensure that it doesn't cause a regression elsewhere
00:52 codybum @xarses: What is the process for setting the kernel keepalive settings?
00:53 xarses https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/openstack/manifests/keepalive.pp
00:53 xarses you can use sysctl to set the values like net.ipv4.tcp_keepalive_time
00:54 xarses or you can use puppet to run that class and it will reset it to the defaults
00:54 codybum Oh those
00:54 codybum ok
00:54 xarses puppet apply -e 'class {"openstack::keepalive":}'
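For reference, a rough sketch of putting those keepalive sysctls back by hand; the values below are the stock Linux kernel defaults, which is an assumption here — the openstack::keepalive manifest linked above is the authoritative source for what Fuel actually sets:

    # reset the TCP keepalive tunables to the (assumed) kernel defaults
    sysctl -w net.ipv4.tcp_keepalive_time=7200    # idle seconds before the first probe
    sysctl -w net.ipv4.tcp_keepalive_intvl=75     # seconds between probes
    sysctl -w net.ipv4.tcp_keepalive_probes=9     # unanswered probes before the connection is dropped
    # persist the values in /etc/sysctl.conf (or a drop-in under /etc/sysctl.d/) if they work out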
00:55 xarses this describes the HA destruction tests we run https://github.com/stackforge/fuel-main/blob/master/fuelweb_test/tests/tests_strength/test_failover.py
00:56 xarses you will want to look at the text blocks labeled "scenario" to see plain-English descriptions of what each test does
00:57 codybum Ok.  I will take a look
01:01 xarses I'll be back online in an hour or so.
02:04 rmoe joined #fuel
02:36 xarses joined #fuel
02:53 xarses codybum: any progress?
03:17 champion_mobile joined #fuel
03:32 ahg joined #fuel
03:42 champion_mobile https://blueprints.launchpad.net/fuel/+spec/fuel-smtp-notification-service
05:41 adanin joined #fuel
05:46 codybum hi fellers
06:15 wiza joined #fuel
06:22 ahg joined #fuel
06:31 Longgeek joined #fuel
06:39 saibarspeis joined #fuel
06:49 stamak joined #fuel
06:50 xarses hi codybum
06:51 xarses still a bit early for some of the EU folks, but they should be along in an hour or two
06:51 xarses were you able to get anywhere?
07:09 dklepikov joined #fuel
07:14 sambork joined #fuel
07:21 saibarspeis joined #fuel
07:24 monester_laptop joined #fuel
07:40 Miouge joined #fuel
07:48 wiza joined #fuel
07:58 stamak joined #fuel
08:10 e0ne joined #fuel
08:33 adanin joined #fuel
08:40 alecv joined #fuel
08:51 devstok joined #fuel
08:51 devstok HI all
08:52 devstok still a pending issue with glance image-create
08:53 devstok I have a cluster using ceph + nova
08:53 devstok I changed my keystone endpoint
08:53 devstok and now glance image-create starts but remains in status 'queued'
09:04 samuelbartel joined #fuel
09:21 pal_bth joined #fuel
09:28 stamak joined #fuel
09:28 devstok I don't get any errors from the logs
09:37 sambork joined #fuel
09:45 e0ne joined #fuel
10:36 devstok anyone?
11:09 e0ne_ joined #fuel
11:10 okosse joined #fuel
11:10 avlasov joined #fuel
11:13 t_dmitry_ joined #fuel
11:13 t_dmitry joined #fuel
11:13 alecv joined #fuel
11:29 ilbot3 joined #fuel
11:29 Topic for #fuel is now Fuel 5.1.1 (Icehouse) and Fuel 6.0 (Juno) https://software.mirantis.com | Fuel for Openstack: https://wiki.openstack.org/wiki/Fuel | Paste here http://paste.openstack.org/ | IRC logs http://irclog.perlgeek.de/fuel/
11:30 sambork joined #fuel
11:43 f13o joined #fuel
11:48 Miouge_ joined #fuel
12:24 Miouge joined #fuel
12:36 pal_bth joined #fuel
12:46 devstok joined #fuel
12:49 Miouge joined #fuel
13:01 HeOS joined #fuel
13:06 mattgriffin joined #fuel
13:07 rbowen joined #fuel
13:10 omolchanov joined #fuel
13:14 devstok joined #fuel
13:14 devstok glance api v2 is bugged
13:15 devstok it doesn't start the write to ceph storage
13:20 pal_bth_ joined #fuel
13:20 t_dmitry joined #fuel
13:27 devstok I want to solve this issue when trying to create an image
13:27 devstok Error communicating with http://x.x.x.x:9292 [Errno 32] Broken pipe
13:41 devstok 2015-02-11 14:21:56.924 13885 TRACE glance.api.v1.upload_utils ObjectNotFound: error opening ioctx '4bfe64f0-8bcc-432d-b78c-8004c83d0d01'
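That ObjectNotFound / "error opening ioctx" trace usually means glance-api cannot open the RBD pool it is configured for with its cephx user. A rough cross-check, assuming the Fuel-style 'images' pool and cephx user (adjust both to whatever glance-api.conf actually says):

    grep -E 'default_store|rbd_store' /etc/glance/glance-api.conf   # which pool/user glance thinks it should use
    ceph osd lspools                                                # confirm that pool really exists
    rbd -p images --id images ls                                    # can the (assumed) 'images' user actually open the pool?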
13:42 championofcyrodi devstok:  Hi.  I've seen you struggling with glance and ceph.  Are you trying to import an image or export one?
13:42 championofcyrodi looks like 'create'
13:43 championofcyrodi where is your image currently?  Is it in a file or is it on ceph?
13:44 e0ne joined #fuel
13:47 devstok hey
13:47 devstok is a file
13:48 devstok downloaded from cloud image
13:48 devstok a simple ubuntu
13:48 championofcyrodi okay.... when you try using the image-create command with glance, does it upload the file and then error? or does it just hang there?
13:48 championofcyrodi I think you said it was stuck on 'saving' ?
13:48 devstok two ways
13:49 devstok v1 creates a record in ceph ... : rbd -p images ls
13:49 championofcyrodi I use, "rbd ls --long images"
13:49 devstok but the command waits for an answer and hangs until it gives a broken pipe
13:50 championofcyrodi okay, so it sounds like the request is not making it through the message queue...
13:50 championofcyrodi how many controllers?
13:50 devstok 3 HA
13:50 devstok 3 OBJ Storage
13:50 championofcyrodi Centos or Ubuntu?
13:50 devstok ceph
13:50 devstok ubu
13:51 devstok using the v2 api I get an OK message, but it seems openstack writes a record only in the db, because I don't see any record in RBD
13:51 devstok so
13:51 championofcyrodi do you know which disk (/dev/sdX) rabbitmq uses on your controller?
13:52 devstok a colleague who used v1 got the image created
13:52 mattgriffin joined #fuel
13:52 championofcyrodi It sounds like there is a performance issue with rabbitmq if you're seeing broken pipe
13:52 devstok how can i check rabbit's disk?
13:52 championofcyrodi df -h
13:53 devstok Filesystem      Size  Used Avail Use% Mounted on
13:53 devstok /dev/vda3        15G  7.8G  6.4G  56% /
13:53 devstok udev            7.9G   12K  7.9G   1% /dev
13:53 devstok tmpfs           1.6G  320K  1.6G   1% /run
13:53 devstok none            5.0M     0  5.0M   0% /run/lock
13:53 devstok none            7.9G   43M  7.8G   1% /run/shm
13:53 devstok /dev/vda2       185M   58M  119M  33% /boot
13:53 devstok sorry
13:53 devstok udev            7.9G   12K  7.9G   1% /dev
13:53 devstok tmpfs           1.6G  320K  1.6G   1% /run
13:53 championofcyrodi that's okay... i got it
13:54 championofcyrodi okay... do ps -ef | grep rabbitmq-server
13:55 championofcyrodi get the PId
13:55 championofcyrodi PID
13:55 championofcyrodi and then do lsof | grep <PID>
13:55 championofcyrodi and you should be able to find the DIR where RabbitMQ is storing data
13:55 championofcyrodi for me it is /var/lib/rabbitmq
13:55 championofcyrodi [root@node-10 ~]# lsof | grep 16598
13:55 championofcyrodi rabbitmq- 16598       root  cwd       DIR              253,0       4096    1442303 /var/lib/rabbitmq
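A condensed sketch of that same check (paths and patterns assume the stock packaging; adjust if lsof points somewhere else):

    PID=$(pgrep -f rabbitmq-server | head -1)           # any of the rabbitmq-server PIDs will do
    ls -l /proc/$PID/cwd                                # working dir, normally /var/lib/rabbitmq
    df -h /var/lib/rabbitmq                             # free space on the partition rabbit writes to
    rabbitmqctl status | grep -E 'disk_free|vm_memory'  # rabbit's own view of disk and memory headroom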
13:56 devstok i got 3 pid of rabbit
13:56 championofcyrodi rabbitmq-server 3 times?
13:56 devstok root     18352     1  0 11:40 ?        00:00:00 /bin/sh /usr/sbin/rabbitmq-server
13:56 devstok rabbitmq 18365 18352  0 11:40 ?        00:00:00 su rabbitmq -s /bin/sh -c /usr/lib/rabbitmq/bin/rabbitmq-server
13:56 devstok rabbitmq 18367 18365  0 11:40 ?        00:00:00 sh -c /usr/lib/rabbitmq/bin/rabbitmq-server
13:57 championofcyrodi it looks like your root filesystem (/) is mounted on /dev/vda3, which means you're using a virtual machine.
13:58 championofcyrodi sudo hdparm -Tt /dev/vda3
13:58 championofcyrodi will test your hard disk i/o speed
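hdparm only exercises reads; for a rough look at the write side as well, a throwaway dd run can be used (the file path here is just an example, and oflag=direct keeps the page cache from flattering the number):

    dd if=/dev/zero of=/var/lib/ddtest.img bs=1M count=512 oflag=direct
    rm -f /var/lib/ddtest.img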
13:58 devstok yes
13:58 devstok is virtual
13:58 championofcyrodi if it is too slow, and/or you are running low on RAM, rabbitmq will start swapping.
13:58 championofcyrodi then it will not respond and thus, 'broken pipe'
13:58 devstok i gave 16 gb of ram
13:58 championofcyrodi free -m
13:58 championofcyrodi ^ run that
13:59 championofcyrodi http://superuser.com/questions/793192/what-is-using-up-all-my-memory-ubuntu-14-04-lts-server
13:59 championofcyrodi 16GB is probably enough
13:59 dkusidlo joined #fuel
13:59 devstok              total       used       free     shared    buffers     cached
13:59 devstok Mem:         16047       5236      10811          0          2        990
13:59 devstok -/+ buffers/cache:       4243      11804
13:59 devstok Swap:         7650          0       7650
14:00 championofcyrodi okay that looks all okay.
14:00 devstok also for speed?
14:00 championofcyrodi did you do the hdparm test? (http://ubuntuforums.org/showthread.php?t=2239308)
14:00 devstok Timing cached reads:   1000 MB in  2.00 seconds = 500.00 MB/sec
14:01 devstok Timing buffered disk reads: 146 MB in  3.00 seconds =  48.61 MB/sec
14:01 championofcyrodi okay, the first is cached reads from RAM... second is buffered reads from virtual disk.
14:01 championofcyrodi if you run that command on your 'bare-metal' desktop, you'll see 2x-3x those speeds...
14:02 devstok ok
14:02 championofcyrodi the concern about 48 MB/sec is... what else is your hypervisor doing right now?
14:02 devstok but the resources seem ok
14:02 devstok maybe ceph doesnt works well?
14:03 championofcyrodi if nothing, my concern is that when glance starts processing, your bandwidth is eaten up
14:03 devstok proxmox kvm
14:03 championofcyrodi i feel as though a disk of 48 MB/sec for a controller is too slow
14:03 championofcyrodi how many virtual disks are on the 1 physical disk?
14:04 devstok i think 4
14:04 devstok raid 5 i think
14:04 championofcyrodi so you have ~4 Virtual machines, reading/writing to 1 physical disk.  is that right?
14:04 devstok let me show an example
14:05 championofcyrodi I'm trying to understand why your /dev/vda3 disk i/o is so low on your openstack controller.
14:06 devstok i have a physical machine with 4 disks in raid 5. on that there are 3 virtual machines
14:06 devstok 3 controllers
14:06 devstok but on the first installation without changing keystone endpoint
14:06 devstok all works
14:06 devstok then i changed the keystone for authentication
14:07 devstok and i got this issue
14:07 championofcyrodi interesting.
14:07 devstok nova list , glance image-list etc work
14:07 championofcyrodi fyi, I am using an HP desktop w/ 16GB of RAM as a single controller...
14:07 championofcyrodi Timing cached reads:   27464 MB in  2.00 seconds = 13763.33 MB/sec
14:07 championofcyrodi Timing buffered disk reads: 434 MB in  3.01 seconds = 144.31 MB/sec
14:08 championofcyrodi and I also was getting broken pipe until i disabled swap
14:08 championofcyrodi i still have concerns with using such restricted disk i/o
14:09 championofcyrodi if your machines are currently doing 'nothing', i would expect faster reads than 40 MB/sec.
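For what it's worth, the swap change mentioned above amounts to something like this; it is a sketch of one workaround, not a Fuel recommendation, and on a box that genuinely needs swap the gentler option is just lowering swappiness:

    swapoff -a                     # take swap out of the picture entirely, or ...
    sysctl -w vm.swappiness=10     # ... just make the kernel very reluctant to swap
    # make either change permanent via /etc/fstab (swap entry) or /etc/sysctl.conf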
14:09 devstok I ran the command on another phy controller
14:09 devstok Timing cached reads:   22878 MB in  2.00 seconds = 11462.23 MB/sec
14:09 championofcyrodi when keystone+glance start sending messages to/from rabbitmq and coordinating calls w/ ceph,rbd, etc... You'll have nothing left
14:09 devstok Timing buffered disk reads: 236 MB in  3.22 seconds =  73.33 MB/sec
14:10 championofcyrodi that seems better.  does 'phy' mean physical controller? (not virtualized)
14:10 devstok my machines suck
14:10 devstok :)
14:10 championofcyrodi :) it's okay
14:11 devstok yes physical
14:11 devstok old cluster with grizzly
14:11 championofcyrodi do you have other services running you don't use? murano or sahara perhaps?
14:11 championofcyrodi (i started w/ icehouse, so not familiar w/ grizzly)
14:12 devstok ok
14:12 devstok no
14:13 devstok the other services weren't installed
14:13 championofcyrodi i would first try to eliminate the rabbitmq errors.  sometimes network related, sometimes disk i/o related... sometimes RAM related.
14:13 devstok ok
14:13 devstok I have a 1gb network
14:14 championofcyrodi Then you can be sure message delivery is working.
14:14 championofcyrodi so do we... and ceph crushes it.
14:14 championofcyrodi we are planning to upgrade to 10Gbps soon.
14:15 championofcyrodi there are many subnets just for fuel and environment deployment... and this is before you begin to create new virtual networks for projects
14:15 championofcyrodi in openstack
14:15 devstok yes, it is necessary
14:15 championofcyrodi i think ceph is too much for virtualized environment
14:15 championofcyrodi you likely need 3 physical machines for controllers.
14:15 devstok to restart RabbitMQ, is it right to use: service rabbit... restart
14:16 devstok or do I have to go into crm?
14:16 championofcyrodi yes, that is what i use
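On a Fuel HA controller RabbitMQ is normally under Pacemaker's control, so restarting it through crm keeps the cluster state consistent; the resource name below is the one Fuel typically uses, but it is an assumption — check crm status first:

    crm status | grep -i rabbit                     # find the real pacemaker resource name
    crm resource restart master_p_rabbitmq-server   # assumed name; substitute whatever the line above shows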
14:16 championofcyrodi does your fuel server have a ui accessible via the web?
14:17 championofcyrodi in 5.0 there was a Logs tab, i closely monitor the rabbitmq logs
14:17 devstok not from external
14:19 championofcyrodi interesting
14:19 devstok if you want we can chat in skype
14:20 championofcyrodi Unfortunately I don't really have the time to get that engaged.  I am dealing w/ my own migration and am actually at work.
14:20 championofcyrodi plus i have developers hounding me for VMs ;p
14:20 mattgriffin joined #fuel
14:21 devstok ahahahha
14:21 devstok thanks for your help
14:21 championofcyrodi sorry we couldn't get it fixed... but don't give up!
14:22 devstok yes I can't
14:22 championofcyrodi likely you just have some config tweaks... but also be advised, controllers do need almost as many resources as your compute nodes.
14:23 championofcyrodi depending on your compute nodes of course
14:23 devstok compute : 32 cores 64gb ram
14:23 devstok controllers physical : 4 core 16 ram
14:27 sambork joined #fuel
14:28 championofcyrodi if your controllers are physical... why is the device /dev/vdX?
14:28 championofcyrodi and not /dev/sdX
14:29 championofcyrodi (SATA vs. Virtual)
14:37 devstok new strategy: migrate the old cluster to the new cluster
14:37 devstok I have few machines, and by virtualizing them I gain in number
14:41 championofcyrodi that might be a good idea.  also using something like 6.0 will provide a lot of bug fixes and reduce the number of differences between your configuration, and what the fuel developers can support.
14:41 claflico joined #fuel
14:42 championofcyrodi when i first started in here, almost all of my issues w/ 5.0/5.1 could have been related to a bug report that was patched in 5.1.1 or 6.0
14:42 championofcyrodi in regard to ceph,glance,cinder,apis
15:03 Longgeek joined #fuel
15:08 DaveJ__ joined #fuel
15:09 DaveJ__ Hi wondering if anyone can offer me some advice about Fuel, VMware and OVS
15:09 DaveJ__ I wanted to try out Ceilometer, but didn't have enough physical machines
15:09 DaveJ__ so I created a deployment on bare-metal, but three Virtual Machine nodes running on VMWare for the mongodb
15:09 DaveJ__ The OS install worked fine
15:09 DaveJ__ but once the Openstack install started, I lose connectivity.
15:10 DaveJ__ I confirmed that by removing the OVS bridges and reconfiguring eth0 I could get access again
15:10 DaveJ__ is there an issue with the VMWare vswitch and guests with ovs running ?
15:21 blahRus joined #fuel
15:28 championofcyrodi so this time i tried converting my RAW volume to qcow2 and importing it...  but Error: Failed to launch instance "cmdbuild-inventory": Please try again later [Error: Build of instance 1e685b3c-0a26-4f6e-95fd-ffde0b51adf6 aborted: Failure prepping block device.].
15:28 championofcyrodi also instead of using ephem, i specified to create a new volume...
15:29 championofcyrodi so it spent about 4 minutes trying to create the 50GB volume "Mapping Block Device..."
15:29 championofcyrodi and then it quit... however watching ceph osd pool stats volumes, i can see that there is still a lot of data being written to the volume
15:29 championofcyrodi as though it is still 'mapping'
15:30 e0ne joined #fuel
15:32 championofcyrodi it seems a better approach would be to just import the volume, and boot the instance... instead of importing as an image
15:34 championofcyrodi yup... sure enough... the volume finished mapping on its own even though the UI timed out.
15:35 championofcyrodi once the volume was ready, i relaunched a VM from the volume that was imported and it worked.
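The dashboard giving up while the RBD copy keeps running can be watched from the command line instead; a small sketch, assuming the Fuel-default 'volumes' pool name:

    watch -n 5 'ceph osd pool stats volumes'   # write throughput drops back to ~0 when the copy is done
    cinder list                                # the volume should flip from 'creating' to 'available'
    rbd -p volumes ls | grep volume-           # cinder names its RBD images volume-<uuid>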
15:38 saibarspeis joined #fuel
15:39 mattgriffin joined #fuel
15:44 emagana joined #fuel
15:52 denis_makogon joined #fuel
15:53 jobewan joined #fuel
15:56 denis_makogon Hello to all, I've got question about packaging and publishing fuel-ostf onto PyPI. Is there someone who i can speak to ?
16:03 adanin joined #fuel
16:04 Longgeek joined #fuel
16:10 mattymo denis_makogon, speak to a_teem
16:10 mattymo he's in #fuel-dev
16:10 denis_makogon mattymo, thanks
16:12 samuelbartel joined #fuel
16:14 martineg_ joined #fuel
16:20 jobewan joined #fuel
16:54 MiroslavAnashkin DaveJ__: You probably configured vSwitch incorrectly. Please check this for vSwitch settings reference and data flow separation settings: http://vbyron.com/blog/deploy-openstack-on-vsphere-with-fuel/
16:56 daniel3_ joined #fuel
16:59 MiroslavAnashkin DaveJ__: Please check that you permitted promiscuous traffic in vSwitch; it is mandatory for Neutron
17:04 DaveJ__ MiroslavAnashkin: Thanks I'll try that
17:04 championofcyrodi okay... i think i finally got everything worked out.  I have updated this post extensively to include commands, outputs, and screenshots: http://championofcyrodiil.blogspot.com/2015/01/upgrading-openstack-with-fuel.html
17:05 championofcyrodi please feel free to review and let me know if I missed anything.  Also if someone would like me to add it as documentation elsewhere, or use the information elsewhere, feel free.
17:05 championofcyrodi it logs my experience going from 5.x to 6.0 with VMs.  I had a lot of problems with glance doing an export, so i skipped all that and went straight to rbd/ceph for the RAW disk.
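The rbd-level move described there boils down to an export/import pair; a sketch with placeholder pool and image names:

    rbd export images/<image-uuid> /tmp/image.raw    # on the old (Icehouse) cluster
    rbd import /tmp/image.raw images/<image-uuid>    # on the new (Juno) cluster
    rbd -p images info <image-uuid>                  # sanity-check size and format after the import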
17:07 championofcyrodi also, we have some scripts that use smtp+nova cli api to notify us about disk usage that can be added to the feature request here: https://blueprints.launchpad.net/fuel/+spec/fuel-smtp-notification-service
17:08 championofcyrodi but only if anyone cares to have a look at them :p
17:09 mattymo championofcyrodi, cool writeup!
17:10 championofcyrodi thanks mattymo, I do a lot of different things, so I'm not really an expert in this arena.  But I gave it my best and tried to use the correct terminology in regard to volumes,instances,snapshots,ceph,rbd,cinder,glance,etc.
17:16 championofcyrodi now that i look over it... it still needs a lot of additional information, and some of the console output stuff is hard to follow because of the font/color i chose.  Oh well, i'll update it more as I finish the rest of my migrations and cleanup.
17:21 mattymo championofcyrodi, looks like you did a manual version of a product Mirantis is also working on, called Pumphouse
17:21 * championofcyrodi googles pumphouse...
17:21 mattymo but in most scenarios we would prefer to deploy new ceph nodes
17:22 MiroslavAnashkin https://www.mirantis.com/blog/upgrading-openstack-workloads-introducing-pumphouse/
17:22 championofcyrodi ^im there :)
17:22 mattymo it's not tied to Fuel. It could be done with any OpenStack cloud deployed however you like
17:23 championofcyrodi yes.  if i had a better understanding, i would have liked to 'upgrade' ceph/rbd on the old environment, and add new ceph nodes to the new, replicate, and decommission old nodes...
17:24 rmoe joined #fuel
17:24 xarses joined #fuel
17:24 championofcyrodi HDFS uses a mapreduce called distributed-copy to migrate from one filesystem to another: http://hadoop.apache.org/docs/r0.19.0/distcp.html
17:25 championofcyrodi also WANdisco uses paxos for WAN replication where distcp is too 'dumb' to handle dropped connections.
17:25 championofcyrodi https://www.wandisco.com/get?f=documentation%2Fwhitepapers%2FWANdisco_DConE_White_Paper.pdf
17:26 championofcyrodi in a nutshell, they implement their own 'proxy' between the namenodes (ceph monitors in this case) across networks.
17:26 championofcyrodi and use paxos to ensure delivery
17:27 championofcyrodi but it's pretty complex and way beyond me.
17:28 championofcyrodi ooo... Taskflow looks neat.
17:29 championofcyrodi better get my environment more stable before i start trying to chain together api calls though :p
17:29 mattymo it's mainly geared at letting others extend functionality and add their own hooks
17:54 Longgeek joined #fuel
18:07 xarses joined #fuel
18:07 championofcyrodi hmmm interesting.  I'm trying to Create a Volume from a RAW 60.0GB image.  This worked great for qcow2, but it says 'creating' for the RAW Image->Volume... but ceph osd stats for volumes pool is showing like, 20 KB/s
18:08 championofcyrodi when I was creating the QCOW2 -> Volume... ceph stats were in the 20-30 MB/s speeds.
18:09 championofcyrodi essentially it appears as though it just moved to 'creating volume' and err'd out somewhere
18:11 championofcyrodi yea, as i figured.. the uuid doesn't exist in the volumes pool.. so the request never made it.
18:11 championofcyrodi from UI i cannot update status or delete the volume while it is 'Creating...' either.
18:12 championofcyrodi well, i can update status, but it just says creating.
18:12 championofcyrodi let me see what cinder says...
18:12 championofcyrodi yup, def. in cinder as 'creating'
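A volume wedged in 'creating' can usually only be cleared from the CLI; a sketch (the volume id is a placeholder):

    cinder reset-state --state error <volume-id>   # let the API treat it as failed
    cinder force-delete <volume-id>                # then drop the record
    rbd -p volumes ls | grep <volume-id>           # confirm nothing was ever written on the ceph side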
18:16 ahg joined #fuel
18:19 championofcyrodi well, cli does the same thing.  I guess i'll try converting this RAW into qcow2
18:30 emagana joined #fuel
18:31 emagana joined #fuel
18:59 emagana joined #fuel
19:03 emagana joined #fuel
19:03 ddmitriev joined #fuel
19:10 HeOS joined #fuel
19:14 codybum joined #fuel
19:14 codybum Hi @xarses.
19:15 xarses hi
19:17 clauded joined #fuel
19:21 codybum I just pushed out changes related to keepalives and am currently resetting everything
19:26 clauded Hi. In Fuel 6.0, with Nova networks - VLAN manager, it seems like I can't bond multiple interface in the network configuration. Is it normal behaviour?
19:42 codybum @xarses:  I changed swappiness and kernel keepalives, but I am still getting "oslo.messaging._drivers.impl_rabbit [-] Failed to publish message to topic" errors very often.
19:42 codybum I actually created a separate physical untagged network to rule out issues on the admin network.
19:42 Longgeek joined #fuel
19:46 xarses codybum: messages will use the management network not the fuel admin / pxe
19:48 codybum Sorry. wrong name.  The admin/pxe was already a separate interface and management network was a VLAN on another interface.
19:49 codybum I changed the management network to a separate untagged interface
19:49 championofcyrodi hmmm snapshot finished, i see the image pool is no longer writing data, and the full image is there via rbd -p images info <uuid>
19:49 championofcyrodi but the image state never got updated... so it still thinks it is 'saving'
19:50 codybum For instance here is a log from the neutron-server: oslo.messaging._drivers.impl_rabbit [-] Failed to publish message to topic 'reply_cc5c1ef7a01d4f09ab8991a277251807': Socket closed
19:58 codybum Commonly metadata services and things like that break when I experience rabbitmq disconnects
20:04 MarkDude joined #fuel
20:09 emagana joined #fuel
20:12 e0ne joined #fuel
20:12 adanin joined #fuel
20:13 ChrisNBlum joined #fuel
20:22 emagana joined #fuel
20:38 stamak joined #fuel
20:39 xarses championofcyrodi: are you using ceph for glance and ephemeral?
20:39 championofcyrodi yes
20:40 championofcyrodi qcow2 images seem to work fine for creating volumes.   but RAW images don't make an entry to rbd/ceph
20:40 xarses it should just copy on write the raw image and start the instancee quickly
20:41 xarses volumes via cinder?
20:41 championofcyrodi yes.  there is a feature in horizon that allows me to Launch a Volume from an Image.
20:42 xarses correct, and cinder is also set up to use RBD?
20:42 championofcyrodi screenshot: http://3.bp.blogspot.com/-9qkfOO-UEWQ/VNuK47hR1uI/AAAAAAAAAko/WGyXo-S_zHI/s1600/image-to-volume.png
20:42 championofcyrodi and with qcow2, it seems to copy the whole image to a compute disk under /var/lib/nova/... then uploads it as RAW to ceph.
20:42 xarses you should see the cinder volume's uuid in the volumes pool, not images
20:43 xarses championofcyrodi: that is correct
20:43 xarses its supposed to bypass that when they are part of the same fsid and in RAW already
20:43 championofcyrodi yea, i'm not seeing the uuid in the volume pool at all
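A quick way to check both sides of the expected copy-on-write chain (all UUIDs below are placeholders):

    rbd -p images info <glance-image-uuid>        # format should be 2 for cloning to work
    rbd snap ls images/<glance-image-uuid>        # glance normally leaves a protected 'snap' snapshot here
    rbd -p volumes ls | grep volume-<cinder-id>   # the clone cinder should have created, if it ever got that far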
20:43 xarses version of fuel?
20:43 championofcyrodi 6.0
20:44 championofcyrodi and cinder just has an entry stuck on 'creating'
20:44 championofcyrodi i'm also running watch on 'rados df' and i'm not seeing anything more than a few KB on my volumes pool from the instances currently running
20:45 championofcyrodi so... i tried launching an instance from the image, and creating a new volume.  got this error on the instance: Build of instance 37872d80-279d-4c47-8e31-0979137d88df aborted: Failure prepping block device.
20:47 emagana joined #fuel
20:48 emagana joined #fuel
20:49 championofcyrodi xarses: what is fsid?
20:49 championofcyrodi file-system id?
20:50 codybum joined #fuel
20:50 xarses championofcyrodi: its the uuid of the ceph cluster
20:50 xarses its in /etc/ceph/ceph.conf
20:50 championofcyrodi ah.. gotcha.  yea they should be the same, since it's the same ceph cluster.
20:50 championofcyrodi just diff pools
20:51 championofcyrodi images -> volumes
20:51 xarses unless you hacked fuel to do something else, they are the same cluster =)
20:51 xarses give me ~1hr and I'll have a cluster running to compare
20:51 xarses which os and network mode?
20:51 championofcyrodi centos
20:52 xarses neutron vlan, gre?
20:52 championofcyrodi neutron vlan
20:53 xarses ceph for volumes, images, and ephemeral?
20:53 championofcyrodi yes
20:54 championofcyrodi 1st thing i did was rbd export the RAW image from Icehouse, and had 80GB RAW file.
20:54 championofcyrodi 2nd, glance-import of the RAW file.  everything made it in to ceph, but i think the auth token timed out and glance was not notified about it being finished and it was stuck on saving.
20:55 championofcyrodi so as a test, i did a glance image-create with the --location flag, and pointed it at the rbd images pool and the RAW image that did make it in.
20:56 championofcyrodi this resulted in 2 images in glance pointing to the same rbd image.  the original 'saving...' and the new one as 'active' (the new one added with --location)
20:56 championofcyrodi both type RAW
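The --location trick described above takes an RBD URL of roughly this shape (all IDs are placeholders, and the trailing 'snap' assumes glance's usual protected snapshot exists on the image):

    glance image-create --name imported-raw \
        --disk-format raw --container-format bare --is-public True \
        --location rbd://<ceph-fsid>/images/<rbd-image-name>/snap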
20:56 championofcyrodi i then booted an ephemeral instance from the 'active' RAW image.  it took about 47 minutes while copying the RAW data to ceph, and then finally booted w/o issue.
20:57 championofcyrodi copying the RAW data from the images pool to the compute pool, since it was ephem.
20:58 championofcyrodi this left me with an ephem VM booted and working from the RAW image, a RAW image active and another RAW image 'saving...' both pointing to the same rbd image.
20:58 championofcyrodi so i wanted to clean up... switch the ephem instance over to a volume backed instance, and then just delete the images.
20:58 championofcyrodi so that is where i am stuck... booting a volume backed instance from a RAW image
20:59 championofcyrodi or creating a New Volume from a RAW image, same result for both.
20:59 championofcyrodi cinder volume 'creating', rbd doing nothing w/ no sign of the uuid
21:00 championofcyrodi and since it is stuck 'creating' i can only force delete via CLI
21:03 championofcyrodi also i did a snapshot of the ephem... it ended up being a RAW images pool image, and i got the same result trying to create a volume or boot from the RAW snapshot
21:04 championofcyrodi and of course the snapshot took ~40min.-1hr. copying the ceph data from 'compute' pool to 'images' pool.
21:49 Longgeek joined #fuel
21:50 daniel3_ joined #fuel
22:01 daniel3__ joined #fuel
22:15 angdraug joined #fuel
22:21 youellet joined #fuel
22:22 youellet Anyone have a prep/practice exam for the MCA100 test, or a guide of sections/questions?
23:00 emagana joined #fuel
23:19 championofcyrodi xarses: I'm done for the day.  I'll be back Feb 12 ~13:30 UTC.  Thanks for the help.
23:38 codybum @xarses: any more ideas on the issue I am having with rabbitmq?
23:42 codybum is anyone using fuel 6.0.1 in HA successfully?
23:56 Longgeek joined #fuel
