IRC log for #fuel, 2014-03-20


All times shown according to UTC.

Time Nick Message
00:05 justif joined #fuel
00:26 designate joined #fuel
00:27 designate I've read through http://docs.mirantis.com/fuel-dev/develop/nailgun/partitions.html#adding-an-lv-to-an-existing-volume-group but I'm still not entirely sure how to create a software raid for os volume.  I want to load the OS on a software raid across two physical disks, any assistance would be greatly appreciated.
01:03 justif2 joined #fuel
01:33 jobewan joined #fuel
02:19 xarses joined #fuel
04:12 saju_m joined #fuel
04:21 IlyaE joined #fuel
04:28 mihgen joined #fuel
05:24 dburmistrov joined #fuel
05:32 crandquist joined #fuel
05:33 vkozhukalov_ joined #fuel
05:38 IlyaE joined #fuel
06:03 crandquist joined #fuel
06:04 saju_m joined #fuel
06:08 saju_m joined #fuel
06:27 Ch00k joined #fuel
06:33 crandquist joined #fuel
07:22 Ch00k joined #fuel
07:26 dburmistrov joined #fuel
07:33 crandquist joined #fuel
07:44 Ch00k joined #fuel
07:49 vkozhukalov_ joined #fuel
07:56 saju_m joined #fuel
08:20 Ch00k joined #fuel
08:29 Nikolay_St joined #fuel
08:29 Nikolay_St hi all
08:30 Nikolay_St I have a problem like the one described in this bug https://bugs.launchpad.net/fuel/+bug/1271129 - is there any fix for fuel 4.1?
08:31 mihgen Nikolay_St: hi
08:31 mihgen hmm, can't you simply backport the patch?
08:32 mihgen you are running Fake UI, right?
08:33 crandquist joined #fuel
08:35 kobier joined #fuel
08:37 nmarkov joined #fuel
08:51 topochan joined #fuel
08:55 DaveJ__ joined #fuel
09:09 rvyalov joined #fuel
09:12 Ch00k_ joined #fuel
09:28 acca joined #fuel
09:33 crandquist joined #fuel
09:35 acca left #fuel
09:36 designated joined #fuel
09:47 mihgen joined #fuel
09:48 e0ne joined #fuel
10:23 akasatkin joined #fuel
10:32 e0ne joined #fuel
10:33 crandquist joined #fuel
11:09 Ch00k joined #fuel
11:20 e0ne_ joined #fuel
11:27 tatyana joined #fuel
11:33 crandquist joined #fuel
11:34 Axam joined #fuel
11:38 Ch00k joined #fuel
11:44 anotchenko joined #fuel
12:17 justif joined #fuel
12:21 anotchenko joined #fuel
12:27 saju_m joined #fuel
12:33 crandquist joined #fuel
12:52 anotchenko joined #fuel
12:58 vkozhukalov joined #fuel
12:59 derek joined #fuel
13:00 rvyalov joined #fuel
13:07 saju_m joined #fuel
13:14 derek I've installed an HA deployment using three nodes. Now I would like to remove a node and use only two. Can I preserve HA without using the garbd arbitrator while avoiding the split-brain problem? Can I reconfigure Galera appropriately to accomplish this? Thanks in advance.
13:16 dpyzhov joined #fuel
13:31 MiroslavAnashkin derek: Yes, you may simply remove the third controller with the Fuel UI or via the command line. After that, please check the cluster with the 'crm status' command to see what the quorum number is.
13:32 MiroslavAnashkin derek: And please note - a quorum of 2 nodes is unstable. In case of any issue these 2 nodes cannot determine which one has the correct replica.
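A minimal sketch of the check MiroslavAnashkin describes, run on one of the remaining controllers (exact output varies by Pacemaker version):

    crm status    # look for "partition with quorum" and the configured node count
    crm_mon -1    # one-shot snapshot of cluster nodes and running resources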
13:33 crandquist joined #fuel
13:39 derek <MiroslavAnashkin>: exactly... I am aware of this problem. I have done some experiments. If I take down the network interface of a node, the mysql cluster doesn't accept any queries (both read and write operations). Can I change the galera configuration in order to use one db instance as primary and the second as secondary (used only for replication)?
13:39 saju_m joined #fuel
13:40 Dr_Drache joined #fuel
13:40 MiroslavAnashkin By default Galera is already configured this way on Pacemaker level
13:41 MiroslavAnashkin You may additionally set any Galera node as slave - Galera supports such nodes.
13:43 crandquist joined #fuel
13:47 derek <MiroslavAnashkin>: Correct me if I'm wrong. You suggest I should set one node as master and the remaining node as slave. Therefore, I should look at the MySQL configuration.
13:53 MiroslavAnashkin Yes. Please check Galera related parameters here: http://www.codership.com/wiki/doku.php?id=galera_parameters
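For context, the Codership page above documents the provider options people sometimes use to keep a two-node cluster writable when one member drops out; a rough, illustrative my.cnf excerpt only, since both options trade away split-brain protection:

    # on the node that should keep serving queries alone (illustrative)
    wsrep_provider_options = "pc.ignore_sb=true"
    # or, alternatively, give the preferred node extra weight in quorum votes
    wsrep_provider_options = "pc.weight=2"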
13:53 mattymo ogelbukh, can you run /kick meow-nofer meow-nofer_ meow-nofer__ for me?
13:53 nmarkov and for me
13:54 derek <MiroslavAnashkin>: ok, I will see it. Thanks a lot. D.
13:56 Dr_Drache hmmm
13:58 Dr_Drache well this sucks
13:58 Dr_Drache booted my fuel master up.
13:58 Dr_Drache and... the cd was still in the drive.
13:58 Dr_Drache guess what is reinstalling?
13:58 Dr_Drache LOL
14:03 MiroslavAnashkin Yes, you are right. We should make boot from HDD the default option in the boot menu. I'll file a bug.
14:03 dpyzhov joined #fuel
14:03 crandquist joined #fuel
14:04 Dr_Drache heh, it was really my mistake, but damn, I guess i can try to redeploy with the settings xarses wanted me to try
14:08 e0ne joined #fuel
14:26 anotchenko joined #fuel
14:26 IlyaE joined #fuel
14:28 e0ne_ joined #fuel
14:28 xarses joined #fuel
15:08 obcecado hi guys
15:09 obcecado what is the 'proper way' to get help, are there any guidelines on how to compile the relevant information?
15:12 strictlyb joined #fuel
15:14 MiroslavAnashkin obcecado: To get help on what?
15:15 obcecado i'm trying to use fuel to deploy openstack on 4 baremetal nodes
15:15 obcecado each node (hp blades) has similar hw, cpu, ram, hdd and two nics
15:16 obcecado i'm using three nodes with the controller and storage roles
15:16 obcecado one node for compute role
15:16 obcecado the network deployment model is via gre tunnels, all networks validate successfully
15:17 obcecado i had an hp smart array doing raid on the storage, but i've removed that raid setup
15:17 obcecado simple hdds
15:18 obcecado bootstrap works, centos is installed
15:18 obcecado but openstack fails to install
15:18 obcecado when all these components are selected
15:18 TVR_ what node fails?
15:18 TVR_ one of the controller + OSD?
15:18 obcecado all nodes have a similar error
15:19 TVR_ which is?
15:19 obcecado https://bugs.launchpad.net/fuel/+bug/1268961
15:19 obcecado this one
15:19 obcecado sorry was looking for the link
15:19 TVR_ go to logs, select orchestration, set level error and what do you see?
15:19 obcecado yet, this report states that the deploy does not stop
15:20 obcecado in my env the deploy ends up with a timeout
15:20 obcecado let me check
15:20 xarses joined #fuel
15:20 Dr_Drache xarses,
15:20 obcecado let me pastebin it
15:21 Dr_Drache just the man I was looking to talk to
15:21 MiroslavAnashkin Oh, it is a false error and we still cannot find where it comes from
15:21 MiroslavAnashkin xarses is on vacation
15:21 xarses Dr_Drache: on vacation
15:21 obcecado http://paste.openstack.org/show/73915/
15:21 xarses =)
15:22 Dr_Drache xarses, ohh, just wanted to say "nofb" worked.
15:22 xarses nice
15:22 Dr_Drache MiroslavAnashkin, xarses http://paste.openstack.org/show/73916/
15:22 Dr_Drache that's my "final" version, lol
15:23 obcecado no line breaks, that's weird
15:24 MiroslavAnashkin obcecado: Please check the cinder/glance backends - your second error comes from a timeout, and I bet that if you try to upload this image manually from the command line it will succeed
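A hedged example of the manual upload MiroslavAnashkin suggests, run from a controller (the credentials file path and the test image file are placeholders typical of a Fuel deployment of that era):

    source /root/openrc
    glance image-create --name test-cirros --disk-format qcow2 \
        --container-format bare --file cirros-0.3.1-x86_64-disk.img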
15:24 TVR_ no worries.. the first line says it all... seems the storage is broken, I suspect... you chose 3 controllers as controller+ ceph?
15:24 obcecado that's correct i did
15:25 obcecado any recommendations on redistributing the roles?
15:25 mattymo joined #fuel
15:26 Dr_Drache MiroslavAnashkin, is there a chance something like my issue will be addressed in an official patch? like an extra checkbox for "affected" devices?
15:28 TVR_ when my backend would not accept the image, I looked at the following.... I pinged the nodes on the storage network and the management network.... ... then if all was good there, I would find the server that caused the issue, reboot it.. wait till it came back up and redeploy.... I have seen with 4.1 I sometimes may need a clean reboot if it was previously used as a node...
15:29 MiroslavAnashkin obcecado: Hmm. I don't think it is the best role distribution to have only 1 compute and 3 controllers, if you don't plan to add more computes in the future..
15:29 obcecado well, let me put it this way
15:29 obcecado i have more blades i can use
15:29 obcecado this scenario would be up for a demo
15:30 obcecado if management is happy, another redeploy with more blades will be done
15:30 Dr_Drache well, do you need to demo HA controllers?
15:30 obcecado yes
15:30 MiroslavAnashkin for demo - yes, single compute and controllers + storage is the best.
15:30 Dr_Drache ok, still, more than one compute
15:30 obcecado if ha is not an option, openstack is not an option
15:31 obcecado the demo must contemplate it
15:31 Dr_Drache obcecado, not like there are very many other HA options out there.
15:33 obcecado but lets get back to the role distribution
15:34 obcecado as a minimal ha environment, would you recommend against this role distribution?
15:34 obcecado should another compute node be added?
15:35 MiroslavAnashkin No, 1 compute is enough, if you don't want to show migration
15:35 TVR_ are you going to down the main controller and see how well it survives?
15:35 obcecado my plan is demo controller ha
15:36 obcecado then demo the ease of adding another compute node
15:36 obcecado then show how ha works on the compute nodes
15:37 MiroslavAnashkin Hmm, the only HA-like feature I may suggest for computes is Ceph ephemeral volume.
15:38 mihgen joined #fuel
15:39 davideaster joined #fuel
15:39 obcecado ok
15:39 obcecado thank you for your input
15:40 TVR_ ok... well the controller HA seems to be fine now and is quite snappy, so you will be happy with that... adding a compute works well and it will start issuing instances to the new compute until it balances...
15:40 TVR_ to demo that.. have like 4 instances already created.. then add the compute, and add 4 more instances
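TVR_'s demo flow boils down to booting a few instances before and after adding the compute node; roughly, with flavor, image and network IDs as placeholders:

    nova boot --flavor m1.small --image <image-id> --nic net-id=<net-id> demo-1
    # after the new compute is deployed, confirm the scheduler is placing instances on it
    nova hypervisor-servers <new-compute-hostname>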
15:40 obcecado i see
15:40 mihgen left #fuel
15:41 TVR_ as for adding a controller + ceph... that did not work for me.... so let me know how well that works please
15:41 aglarendil joined #fuel
15:41 TVR_ however, the HA on the controllers is now fantastic...
15:43 TVR_ the takeover of the vip is VERY fast....
15:44 obcecado let me boot some more blades, to separate controller from storage role
15:45 TVR_ also, setting the ceph journal on a faster disk is noticeably faster, if you have the SSD's lying around....
15:46 Dr_Drache blades with local storage? nice.
15:46 TVR_ if it's HP, they have like 2 disks
15:46 obcecado yes, that's precisely the case
15:47 obcecado i have a full enclosure waiting to be (ab)used
15:47 TVR_ nice...
15:47 Dr_Drache lol
15:47 Dr_Drache I need to switch to OS on my c42U of c6100s
15:49 obcecado these enclosures got the cisco 3020 integrated
15:49 obcecado it works quite well
15:50 TVR_ ok.. so you're not using flexfab bays then
15:50 obcecado we also got some of those
15:50 obcecado with flexconnect
15:50 obcecado we're not really happy with it
15:51 TVR_ cool.. I always like the 4 chassis stacking ability with those using the flexfab
15:51 TVR_ all 64 blades using the backplane
15:51 TVR_ redundant 10G interconnects
15:57 miroslav_ joined #fuel
16:00 mihgen joined #fuel
16:01 rmoe joined #fuel
16:35 tatyana joined #fuel
16:36 tatyana joined #fuel
16:45 e0ne joined #fuel
16:49 e0ne_ joined #fuel
16:50 Dr_Drache joined #fuel
17:06 anotchenko joined #fuel
17:09 alex_didenko joined #fuel
17:18 jobewan joined #fuel
17:28 abubyr joined #fuel
17:33 anotchenko joined #fuel
17:43 Ch00k joined #fuel
18:17 mutex so I am seeing the following problem with my fuel deployment
18:17 mutex occasionally an instance cannot get a DHCP address
18:17 mutex I login to the compute node, restart the neutron-openvswitch
18:17 mutex also restart the p_neutron-l3-agent
18:17 mutex then the instance can get a dhcp address
18:18 mutex but I don't see anything obvious in the logs that would show where the problem is
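For reference, the restarts mutex describes map roughly to these commands on a Fuel HA deployment of that era (service and Pacemaker resource names are assumptions and may differ):

    # on the affected compute node
    service neutron-openvswitch-agent restart
    # on a controller, the l3/dhcp agents are Pacemaker-managed (p_ prefix)
    crm resource restart p_neutron-l3-agent
    crm resource restart p_neutron-dhcp-agent
    # then check agent health
    neutron agent-list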
18:36 jhurlbert joined #fuel
18:36 angdraug joined #fuel
18:36 jhurlbert is there a way to do an in place upgrade from fuel 4.0 to 4.1?
18:37 Dr_Drache jhurlbert, no
18:40 jhurlbert cool, thank you.
18:40 jhurlbert is there a way to login to psql on the fuel controller?
18:40 mutex jhurlbert: my understanding is that one must do a migration, rather than an in place upgrade
18:40 Dr_Drache mutex, a manual migration of the instances, not the cluster.
18:41 mutex yeah
18:42 jhurlbert ah ok, are there steps documented anywhere for this migration?
18:42 Dr_Drache nope.
18:42 Dr_Drache short version.
18:42 Dr_Drache setup new cluster...
18:42 Dr_Drache download from one, upload to the other.
18:43 Dr_Drache a few of us had it down better, but I'm forgetting a bit.
18:43 jhurlbert thats fine
18:43 IlyaE joined #fuel
18:44 jhurlbert what do you mean by "download from one
18:44 jhurlbert "
18:45 Dr_Drache download instance images from glance/ceph
18:45 MiroslavAnashkin jhurlbert: On master node: `su - postgres; psql nailgun;`
18:45 Dr_Drache jhurlbert, I am sure MiroslavAnashkin can clarify.
18:45 jhurlbert MiroslavAnashkin: thank you
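Building on MiroslavAnashkin's command, a hedged sketch of inspecting stuck deployments in the nailgun database (table and column names are assumptions based on the nailgun schema of that release):

    su - postgres
    psql nailgun
    -- inside psql: recent deployment tasks and their state
    select id, name, status, progress from tasks order by id desc limit 10;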
18:46 mutex glance image-download img; glance image-create --file img;
18:46 mutex i'm simplifying
18:46 mutex but that is the basic premise
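Expanded slightly, the premise mutex sketches looks roughly like this (image name, format and file are placeholders):

    # on the old cloud
    glance image-download --file myimage.qcow2 <image-id>
    # on the new cloud
    glance image-create --name myimage --disk-format qcow2 --container-format bare --file myimage.qcow2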
18:46 jhurlbert Ah, so a brand new OpenStack cluster
18:47 jhurlbert yeah, we can't do that, we have 30+ servers in production
18:48 Dr_Drache I don't think there is a reason for you to go after 4.1 then
18:48 jhurlbert Dr_Drache: I was thinking of upgrading because we sometimes have an issue where we try to provision a new node, but it will never finish, and we can't cancel the deployment process
18:49 Dr_Drache ahhh
18:49 Dr_Drache yea, sadly, you'd have to start from a small cluster and add a few nodes at a time
18:50 jhurlbert yeah, we may do that method when fuel moves to icehouse if needed
18:51 jhurlbert thanks everyone for your help
19:11 tatyana joined #fuel
19:11 designated I've read through http://docs.mirantis.com/fuel-dev/develop/nailgun/partitions.html#adding-an-lv-to-an-existing-volume-group but I'm still not entirely sure how to create a software raid for os volume.  I want to load the OS on a software raid across two physical disks, any assistance would be greatly appreciated.
19:16 vkozhukalov joined #fuel
19:33 sb- joined #fuel
19:50 dburmistrov joined #fuel
19:52 mutex designated: are you familiar with how kickstart and preseed work ?
19:52 MiroslavAnashkin designated: it is probably a question for #fuel-dev, I also cannot find mandatory raid properties inside the volume manager source code
19:56 designated MiroslavAnashkin, thank you
19:57 designated mutex, not entirely no
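mutex's hint points at the provisioning layer: outside of Fuel, a plain CentOS kickstart would express an OS-on-software-RAID layout along these lines (sizes and disk names are placeholders; Fuel's provisioning serializer would have to emit something equivalent, which is why MiroslavAnashkin points to #fuel-dev):

    part raid.01 --size=51200 --ondisk=sda
    part raid.02 --size=51200 --ondisk=sdb
    raid / --device=md0 --level=1 --fstype=ext4 raid.01 raid.02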
20:06 e0ne joined #fuel
20:08 aleksandr_null joined #fuel
20:21 mutex has anyone done nested virtualization with fuel ?
20:27 designated mutex, isn't that essentially what the documentation shows when using virtualbox?
20:29 e0ne joined #fuel
20:29 mutex nah, that is software... I'm talking about hardware
20:29 mutex I found a bug I think with nested virt on AMD with G5 Opterons
20:29 Dr_Drache umm
20:29 Dr_Drache how is nested virt hardware?
20:29 mutex virtualbox doesn't pass through the hardware nested virt flags AFAIK
20:30 mutex with kvm_intel you can passthrough the hardware virt flags with a kernel option
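The kernel option mutex refers to is the KVM module parameter; a minimal sketch for an Intel host (kvm_amd accepts the same nested=1 parameter):

    echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
    modprobe -r kvm_intel && modprobe kvm_intel    # reload with no VMs running
    cat /sys/module/kvm_intel/parameters/nested    # should now print Y (or 1)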
20:30 Dr_Drache virtualbox does that fine on my amd systems.
20:30 mutex interesting, what platform ?
20:31 Dr_Drache ubuntu and arch linuxs, and windows.
20:31 mutex interesting
20:31 mutex anyway on my production fuel deployment
20:31 Dr_Drache the only intel boxes we have are actually old p4's and a few workstations
20:31 mutex when I create a VM I get this from libvirt
20:31 mutex warning : x86Decode:1346 : Preferred CPU model Opteron_G5 not allowed by hypervisor; closest supported model will be used
20:32 mutex and the G4 it creates does not have the virtualization flags passed to the instance CPU
20:32 Dr_Drache that virtualbox?
20:32 mutex I double checked the libvirt package, and it has the G5 patch
20:32 mutex no, production actual fuel deployments
20:32 Dr_Drache ohhh.
20:32 mutex onto hardware
20:32 Dr_Drache yes yes,
20:32 Dr_Drache it might be the qemu version.
20:33 Dr_Drache (I am totally confused about the qemu/kvm choices in fuel)
20:33 Dr_Drache they are the same thing.
20:33 Dr_Drache just called differently
20:33 mutex well I just ran some tests, and it looks like a bug
20:34 mutex if you use host-model in the nova config, one gets the above error
20:34 mutex however, if you use host-passthrough
20:34 mutex it all works fine
20:34 mutex hooray
20:34 Dr_Drache ahh, that's much simpler to fix.
20:34 mutex yeah
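For reference, the workaround mutex lands on corresponds to the libvirt CPU mode setting in nova.conf; the option name and section vary by release (Havana-era nova reads libvirt_cpu_mode from [DEFAULT], later releases use cpu_mode under [libvirt]), so treat this as a sketch:

    # /etc/nova/nova.conf on each compute node
    libvirt_cpu_mode = host-passthrough
    # then restart the compute service
    service openstack-nova-compute restart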
20:41 MiroslavAnashkin joined #fuel
20:53 tzabal joined #fuel
21:14 jaypipes joined #fuel
22:01 tzabal left #fuel
22:05 designated fuel-dev isn't nearly as active as this channel haha
22:09 mutex heh
22:39 e0ne joined #fuel
23:36 xarses joined #fuel
23:57 IlyaE joined #fuel
