
IRC log for #fuel, 2014-02-05


All times shown according to UTC.

Time Nick Message
01:50 rmoe joined #fuel
02:47 ilbot3 joined #fuel
02:47 Topic for #fuel is now Fuel for Openstack: http://fuel.mirantis.com/ | Paste here http://paste.openstack.org/ | IRC logs http://irclog.perlgeek.de/fuel/
03:34 ArminderS joined #fuel
03:39 ArminderS joined #fuel
03:56 xarses joined #fuel
04:28 IlyaE joined #fuel
04:36 ArminderS joined #fuel
05:23 IlyaE joined #fuel
05:49 mihgen joined #fuel
06:04 e0ne joined #fuel
06:06 meow-nofer joined #fuel
06:07 meow-nofer__ joined #fuel
08:01 kpimenova__ joined #fuel
08:09 mrasskazov joined #fuel
08:09 kpimenova__ joined #fuel
08:24 miguitas joined #fuel
08:56 e0ne joined #fuel
08:58 vk joined #fuel
09:33 rvyalov joined #fuel
09:38 e0ne_ joined #fuel
09:58 tatyana joined #fuel
10:12 mihgen joined #fuel
10:35 e0ne joined #fuel
10:41 bookwar joined #fuel
11:04 MACscr why doesn't the fuel deployment tool allow me to use ceph for image storage?
11:05 MACscr I thought the settings said I could do it, but when setting up the disks, it seemed as if I had to set up a partition for images
11:16 vk joined #fuel
11:59 ruhe joined #fuel
12:08 vk joined #fuel
13:11 Dr_Drache joined #fuel
13:17 Dr_Drache MiroslavAnashkin, I know you're not around yet, but BIOS mode doesn't help. Even tried UEFI mode for giggles.
13:17 Dr_Drache going to try CentOS here in a few min
13:22 ruhe joined #fuel
14:13 TVR__ joined #fuel
14:22 MiroslavAnashkin MACscr: Sorry, what do you mean by "does not allow me to use Ceph"? You select Ceph as the Glance backend during new environment creation and get image storage on Ceph. The partitions for images are used as the Glance image cache. Glance cannot serve an image directly from storage.
14:25 MiroslavAnashkin MACscr: Glance's workflow is as follows: 1. It downloads the image from storage to a cache on the controller. If it is a RAW image, it needs a cache bigger than the filesystem inside the image, even if that filesystem is empty.
14:26 MiroslavAnashkin MACscr: 2. It copies the cached image from the controller's cache to the cache on the compute node.
14:27 MiroslavAnashkin MACscr: 3. It uploads the image from the compute node's cache to block storage as a new volume.
14:28 MiroslavAnashkin MACscr: So, OpenStack needs a big cache for Glance, at least until the Glance code is refactored.
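
For context, the controller-side cache Miroslav describes is tuned in glance-api.conf; a minimal sketch of the Havana-era settings (the path and the 10 GiB cap are illustrative, not Fuel's defaults):

    [DEFAULT]
    # directory on the controller where downloaded images are cached
    image_cache_dir = /var/lib/glance/image-cache/
    # ceiling enforced by the cache pruner, in bytes (10 GiB here)
    image_cache_max_size = 10737418240

    [paste_deploy]
    # enable the caching middleware in the API pipeline
    flavor = keystone+cachemanagement
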
14:29 MACscr MiroslavAnashkin: ah, my problem is that I only have 8 GB of flash storage on all the systems. I have 4 storage servers that were going to be used for VM and image storage. The OS disk on those systems is still only 8 GB as well, though
14:33 Dr_Drache MiroslavAnashkin, I thought Ceph for ephemeral volumes took care of that?
14:33 MiroslavAnashkin MACscr: Yes, there is a minimum disk size requirement for OpenStack nodes. The disk should be at least 50 GiB, and that is the bare minimum, allowing you to deploy OpenStack and use only small qcow2 images.
14:35 MACscr why waste power and disks on a node that shouldn't need it? that stinks. Eh, it was worth a shot
14:35 ruhe joined #fuel
14:36 Dr_Drache MACscr, because it DID need it.
14:36 Dr_Drache as soon as icehouse and firefly are out, it shouldn't.
14:36 MiroslavAnashkin Dr_Drache: Yes, that is the reason we switched from Swift to Ceph. We are going to remove Swift entirely in future versions, since Ceph works faster and offers everything Swift does.
14:37 MiroslavAnashkin Ceph ephemeral volumes reduce the necessary disk size.
14:38 MiroslavAnashkin Glance was designed in a far from optimal way and needs a lot of additional space, just in case.
14:38 MACscr icehouse? is that the new openstack release?
14:39 MiroslavAnashkin Yes, Icehouse is the upcoming OpenStack release, expected in Spring.
14:40 MACscr can't wait for Firefly, going to be an amazing release
14:43 Dr_Drache MiroslavAnashkin, CentOS installs perfectly
14:44 Dr_Drache MiroslavAnashkin, so, the issue is in the install process for Ubuntu
14:45 Dr_Drache MiroslavAnashkin, interested in the diag log from this install?
14:45 MiroslavAnashkin Dr_Drache: Hmm, I'll pass your last diagnostic snapshot to vkozhukalov.
14:47 Dr_Drache MiroslavAnashkin, thanks. I just find Ubuntu runs better than CentOS in 99% of my cases.
14:47 MACscr and you can actually upgrade it =P
14:47 MACscr plus its KVM support is a bit better
14:47 MACscr oh, and Ubuntu is used by the Ceph and OpenStack devs =P
14:55 mrasskazov2 joined #fuel
14:58 Dr_Drache MACscr, when the main funding comes from Red Hat....
14:58 Dr_Drache you can't fault the devs.
14:59 MACscr Red Hat makes a great stable product
14:59 MACscr but with that comes a major lag in improvements and updates
14:59 MACscr and I personally just hate not being able to upgrade from one major version to another
14:59 Dr_Drache how is it any more stable than, well, anything?
14:59 MACscr and I like apt over yum =P
15:00 Dr_Drache that part I never understood.
15:00 MACscr just typically has more thorough testing
15:01 MACscr or versions are supported longer
15:01 MACscr like Ubuntu's LTS
15:01 Dr_Drache from whom? since no one uses the non-stable version much.
15:01 Dr_Drache who actually tests it?
15:01 Dr_Drache not a large number, that's for sure.
15:02 Dr_Drache Ubuntu has a much larger userbase that tests the non-LTS, or even the LTS, which shows in higher overall stability and bug fixes.
15:24 IlyaE joined #fuel
15:56 mihgen joined #fuel
15:59 richardkiene joined #fuel
16:17 xarses joined #fuel
16:18 jouston_ joined #fuel
16:25 richardkiene joined #fuel
16:41 Dr_Drache TVR__,
16:45 TVR__ yes?
16:45 TVR__ I think my fuel node just died...
16:45 TVR__ I can't even drac in
16:45 TVR__ can't touch it until tomorrow due to the snow
16:47 Dr_Drache damn
16:47 Dr_Drache TVR__, I need support!
16:48 TVR__ I can still connect to the cluster and such... so it shouldn't be that big a deal
16:48 Dr_Drache TVR__, I'm still having issues with IPs showing up on instances and with routing out
16:49 richardkiene_ joined #fuel
16:51 TVR__ VLANs or GRE?
16:51 Dr_Drache GRE
16:52 Dr_Drache I can get it to get a floating IP.
16:52 TVR__ my setup is... 6 nodes.. (3x controller + ceph and 3x compute + ceph)
16:52 TVR__ neutron with gre
16:52 TVR__ and image and volumes on ceph
16:52 Dr_Drache then I have to set up routing out... which may just be the router it's behind.
16:53 TVR__ did you set the external network to be shared?
16:53 rmoe joined #fuel
16:53 Dr_Drache ....I must have missed that
16:58 angdraug joined #fuel
17:01 TVR__ admin => networks => edit network (on external network) and be sure it's shared..... project => routers => set gateway ... then you can associate a floating IP and have it work
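
For reference, the same dashboard steps can be done with the neutron CLI; a sketch, assuming Fuel's usual default names net04_ext and router04 (check yours with `neutron net-list`):

    # make the external network visible to other tenants
    neutron net-update net04_ext --shared
    # set it as the gateway of the project router
    neutron router-gateway-set router04 net04_ext
    # allocate a floating IP from it, then bind it to the instance's port
    neutron floatingip-create net04_ext
    neutron floatingip-associate <floatingip-id> <instance-port-id>
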
17:19 Dr_Drache TVR__, got it, at least a floating IP.
17:20 TVR__ cool
17:38 rmoe joined #fuel
17:43 Dr_Drache TVR__, in HA, can I make certain nodes "backup nodes" so nothing gets put on them until I'm full or have a node go down?
17:44 TVR__ outside my scope of knowledge man
18:00 MiroslavAnashkin Dr_Drache: Which nodes do you mean? Controllers, computes, or..?
18:02 Dr_Drache computes.
18:03 MiroslavAnashkin There is such a feature built into the OpenStack scheduler.
18:04 MiroslavAnashkin http://docs.openstack.org/admin-guide-cloud/content/ch_introduction-to-openstack-compute.html#section_instance-scheduling-constraints
18:05 MiroslavAnashkin And here http://docs.openstack.org/havana/config-reference/content/section_compute-scheduler.html
18:07 Dr_Drache so, basically, regular nodes get a weight of 1, backup nodes a weight of like 70
18:09 MiroslavAnashkin Or use GroupAffinityFilter
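
For reference, the scheduler filter chain is set in nova.conf on the controllers; a minimal sketch, assuming the Havana defaults with GroupAffinityFilter appended (instances would then opt in at boot with a hint such as `--hint group=backup`):

    [DEFAULT]
    scheduler_available_filters = nova.scheduler.filters.all_filters
    # Havana's default chain plus the group filter from the docs above
    scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,GroupAffinityFilter
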
18:10 Dr_Drache i'll get back to that once I figure out about my DR networking
18:13 Dr_Drache from what I am reading, setting up groups and affinity is easy, but what if the member(s) of the group aren't there?
18:24 MiroslavAnashkin In case of any errors with filtering, the scheduler uses the default filter.
18:31 Dr_Drache MiroslavAnashkin, this is a stupid question.
18:31 Dr_Drache "terminate instance" that's like delete?
18:33 TVR__ it deletes the VM (think virsh stop; virsh undefine), but it DOES NOT delete the volume, so you can 'boot from volume' later and the VM is back if needed
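
A sketch of bringing such an instance back from its surviving volume (the flavor and instance name are illustrative):

    # boot a new instance whose root disk is the existing volume;
    # in vda=<volume-id>:::0 the trailing 0 keeps the volume on terminate
    nova boot --flavor m1.small \
      --block-device-mapping vda=<volume-id>:::0 \
      restored-vm
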
18:38 IlyaE joined #fuel
18:48 IlyaE joined #fuel
19:02 Dr_Drache MiroslavAnashkin,
19:02 Dr_Drache "Check that VM is accessible via floating IP address
19:02 Dr_Drache Floating IP can not be created. Please refer to OpenStack logs for more details."
19:02 Dr_Drache I'm not seeing where the logs are showing much
19:05 MiroslavAnashkin Logs are on the master node, in /var/log/remote
19:08 Dr_Drache no, I mean I don't see much in the log that really helps me.
19:08 MiroslavAnashkin Please start with the neutron-agent-dhcp log
19:09 MiroslavAnashkin There are a lot of logs to search through. BTW, do you have free floating IP addresses in your floating address pool?
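
A quick way to scan those from the master node; a sketch, assuming the per-node directories under /var/log/remote (exact file names vary by release):

    # recent errors across every node's Neutron logs
    grep -i error /var/log/remote/*/neutron*.log | tail -n 50
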
19:10 Dr_Drache sure do
19:10 Dr_Drache about 50 or so
19:11 Dr_Drache nothing else fails but that test.
19:11 Dr_Drache (and the one directly after it, of course)
19:15 dhblaz joined #fuel
19:15 dhblaz Anyone know the best way to enable block live migration?
19:16 albionandrew joined #fuel
19:17 MiroslavAnashkin Dr_Drache: Do you mean built in test in Fuel UI?
19:17 Dr_Drache MiroslavAnashkin, yes sir.
19:17 MiroslavAnashkin Aah, it is not reliable.
19:17 Dr_Drache just doing my due diligence.
19:18 MiroslavAnashkin And some built in tests require preparation steps
19:18 Dr_Drache well, should they be runnable then?
19:18 Dr_Drache maybe a box of information that needs to be filled out before that test is attempted?
19:19 MiroslavAnashkin We are trying to improve these tests, but they are still buggy
19:19 Dr_Drache I'm just saying, for tests that need information filled in, don't allow the test group to run until it is, like the pre-deploy network tests
19:23 MiroslavAnashkin dhblaz: Do you mean block storage backend migration or instance migration?
19:25 dhblaz (lvm) block live migration as described here: http://docs.openstack.org/grizzly/openstack-compute/admin/content//configuring-migrations.html
19:26 TVR__ my network team completed the VLANs I asked for... so that explains my fuel node going offline... even DRAC
19:26 Dr_Drache TVR__, oops.
19:28 TVR__ no, it's all good... I can now try a true production-quality install... and test what will happen... migrating VMs is also on my list... live migration, that is...
19:29 Dr_Drache right now, I have my GRE network behind a router on its own L2.
19:30 Dr_Drache have about 6 devs pounding the crap out of it
19:30 mutex hi
19:31 mutex any ideas about this l3 agent crash I keep getting ? http://paste.openstack.org/show/62641/
19:32 MiroslavAnashkin dhblaz: I guess block live migration is supported with Xen only. But I may be wrong.
19:33 Dr_Drache MiroslavAnashkin, that's not a concern with Ceph, am I correct?
19:33 MiroslavAnashkin Dr_Drache: Yes
19:33 dhblaz MiroslavAnashkin: Thanks, what about "Volume-backed live migration" as described here: http://docs.openstack.org/havana/config-reference/content//configuring-openstack-compute-basics.html#true-live-migration-kvm-libvirt
19:37 mutex actually this may be related to something strange on the ha setup: http://paste.openstack.org/show/62643/
19:37 mutex because the result is the network layer disappears for 14 seconds
19:37 mutex which is just about the time for a process to restart I would think
19:37 MiroslavAnashkin mutex: Please check that the RabbitMQ server is alive on any controller with `rabbitmqctl status`, or with `rabbitmqctl cluster_status` for HA mode
19:40 albionandrew joined #fuel
19:40 mutex I am running HA mode, just not quite sure what I should be seeing ?
19:41 MiroslavAnashkin mutex: All rabbit nodes should be reported as online after the `rabbitmqctl cluster_status`
19:41 mutex yeah I guess it looks that way
19:41 mutex http://paste.openstack.org/show/62649/
19:42 MiroslavAnashkin dhblaz: What is your Cinder backend?
19:42 dhblaz ceph
19:43 mutex I mean I am supposed to have 5 in the cluster, and I have 5 in the cluster_status
19:43 mutex so
19:43 mutex I guess that means it is working
19:44 MiroslavAnashkin dhblaz: Hmm, in 4.0 with Ceph+RBD+ephemeral drives, live migration should be enabled out of the box.
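
For the KVM/libvirt path dhblaz linked, the referenced guide boils down to a nova.conf flag set on the compute nodes plus a per-instance command; a sketch under that guide's assumptions:

    # nova.conf on each compute node, per the configuring-migrations doc:
    #   [DEFAULT]
    #   live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE

    # then migrate an instance between hosts without shared storage:
    nova live-migration --block-migrate <instance-id> <target-host>
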
19:45 dhblaz I didn't use that feature
19:45 dhblaz because it increases latency and was labeled as experimental
19:46 Dr_Drache ....openstack is experimental
19:47 dhblaz Thank you Dr_Drache
19:48 Dr_Drache no problem, will be here all week.
19:53 angdraug the reason ephemeral rbd was labeled experimental is a segv bug in ceph that has since been fixed
19:53 angdraug if you look at the Fuel LP bug referenced in the 4.0 notes, it will have a link to fixed ceph packages
19:55 e0ne joined #fuel
20:00 mutex so what is the VLAN splinters option I see in the Fuel installer?
20:01 dhblaz joined #fuel
20:15 MiroslavAnashkin mutex: Keep VLAN splinters turned off for good. It is a workaround for specific hardware.
20:18 MiroslavAnashkin mutex: OK, here are the possible reasons: 1. One of the controllers has less than 1 GB free on the root partition and RabbitMQ blocked itself automatically.
20:20 MiroslavAnashkin mutex: 2. RabbitMQ simply hung. Please try `service rabbitmq-server restart` on any controller, to see if it starts successfully on the first attempt
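
A quick check for the first case, as a sketch (`rabbitmqctl status` reports both the free space it sees and its blocking threshold):

    # free space on each controller's root partition
    df -h /
    # RabbitMQ's view: disk_free vs. disk_free_limit
    rabbitmqctl status | grep disk_free
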
20:32 vk joined #fuel
20:41 albionandrew joined #fuel
20:43 MiroslavAnashkin joined #fuel
21:19 xarses joined #fuel
21:28 IlyaE joined #fuel
21:31 dhblaz joined #fuel
21:43 mutex MiroslavAnashkin: yeah there is plenty of space so I suspect it is not a root partition problem
21:47 miroslav_ joined #fuel
21:51 mutex MiroslavAnashkin: so if I restart the rabbitmq server I definitely get the same behavior in the l3-agent.log
22:15 xarses joined #fuel
22:37 miroslav_ Well, this error is actually a graceful shutdown message from RabbitMQ. So it looks like something with the Neutron L3 agent.
22:40 miroslav_ mutex: Please try `crm resource restart clone_p_neutron-plugin-openvswitch-agent`
22:45 miroslav_ mutex: it would be enough to run the command above on one of the controllers only; it affects the whole cluster
22:58 vk joined #fuel
23:15 mutex interesting
23:15 mutex what are you hoping to achieve when issuing that restart?
23:21 dhblaz joined #fuel
23:21 dhblaz When I try to take an instance snapshot or make a volume I get this:
23:21 dhblaz 2014-02-05T23:14:35.091011+00:00 debug:  2014-02-05 23:14:30 ERROR cinder.openstack.common.rpc.common [req-5e9d0a3e-0c62-4ed0-bfef-5517995544f6 e649d971f45f448f96d0a15b11e9f690 63adb3d6a61243e78ad91b3831bd1f0b]  Failed to publish message to topic 'cinder-volume:node-17.mumms.com': [Errno 32] Broken pipe
23:31 jhurlbert joined #fuel
23:40 e0ne joined #fuel
