
IRC log for #fuel, 2014-10-15


All times shown according to UTC.

Time Nick Message
00:00 jetole Ok... I should have said this earlier, but it had the same error before I deployed initially, and I assumed, based on the NIC and what it was used for, that the error was false
00:00 jetole and it did pxe boot the nodes and deploy them
00:01 jetole I just clicked deploy. I have iDRAC on these so I'm going to pull up an IP/KVM on one of the machines and verify
00:03 jetole yeah I see one controller pxe booting fine
00:04 jetole hey xarses. What does the "Base MAC address" under Neutron L2 mean?
00:04 xarses it will use that to generate MAC addresses from
00:05 jetole I assume the OUI is safe
00:05 xarses you would likely only need to change it if you have another cluster on the same L2 segments
00:06 jetole figured
00:06 jetole cool
00:07 kupo24z xarses: I am using the nova/neutron clients in a cron script to limit instance iops/throughput usage
00:07 kupo24z on each compute node
00:08 kupo24z I'm sure in normal use cases it's probably not needed
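For reference, libvirt disk quotas can also be applied declaratively through flavor extra specs rather than a cron script; a sketch, assuming the Icehouse-era quota:* extra-spec keys fit this use case (flavor name and values are illustrative):

    # Cap disk IOPS/throughput for every instance built from this flavor;
    # libvirt enforces the limits at the hypervisor
    nova flavor-key m1.small set quota:disk_read_iops_sec=500
    nova flavor-key m1.small set quota:disk_write_iops_sec=500
    nova flavor-key m1.small set quota:disk_read_bytes_sec=10485760
    nova flavor-key m1.small set quota:disk_write_bytes_sec=10485760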
00:09 jetole xarses: so I have verified all nodes PXE booted, are installing centos, etc. Is it safe to assume that the eth0 untagged error actually is a false error?
00:10 xarses yes, as long as that is the PXE interface
00:17 jetole Cool. Thank you
00:17 jetole xarses: thank you for all the help
00:21 xarses jetole: np
00:24 mattgriffin joined #fuel
00:35 rmoe joined #fuel
00:44 Rajbir joined #fuel
00:55 mattgriffin joined #fuel
00:56 harybahh joined #fuel
01:00 teran joined #fuel
01:16 xarses joined #fuel
01:31 justif_ joined #fuel
01:31 obcecado_ joined #fuel
01:31 TomekG_ joined #fuel
01:33 alex_didenko joined #fuel
01:41 dmitryme2 joined #fuel
01:45 Rajbir joined #fuel
01:45 mattgriffin joined #fuel
01:45 Rajbir left #fuel
01:45 alex_didenko joined #fuel
01:45 Rajbir joined #fuel
01:45 xarses joined #fuel
02:01 jpf_ joined #fuel
02:01 mattgriffin joined #fuel
02:08 Kupo24z1 joined #fuel
02:49 jpf joined #fuel
02:49 Longgeek joined #fuel
02:56 harybahh joined #fuel
02:58 dhblaz joined #fuel
03:05 jetole joined #fuel
04:47 jpf joined #fuel
04:57 harybahh joined #fuel
04:58 AKirilochkin joined #fuel
05:17 artem_panchenko left #fuel
05:17 artem_panchenko joined #fuel
05:22 syt joined #fuel
06:45 ArminderS joined #fuel
06:49 saibarspeis joined #fuel
06:53 dnikishov joined #fuel
06:57 harybahh joined #fuel
07:20 azemlyanov joined #fuel
07:21 hyperbaba joined #fuel
07:24 pasquier-s joined #fuel
07:30 harybahh joined #fuel
07:32 dancn joined #fuel
07:37 e0ne joined #fuel
07:37 HeOS joined #fuel
07:56 vtzan joined #fuel
08:08 syt joined #fuel
08:10 stamak joined #fuel
08:15 Alremovi4 joined #fuel
08:16 merdoc I found a non-critical issue with logrotate scripts - http://paste.openstack.org/show/121185/
08:19 kaliya joined #fuel
08:36 ArminderS- joined #fuel
08:55 sc-rm kaliya: Now I retried redeploying the complete env, and after deploying zabbix, just like the first time, zabbix-server was not started. I had to do service zabbix-server restart.
09:07 stamak joined #fuel
09:14 ArminderS joined #fuel
09:15 ArminderS- joined #fuel
09:17 t_dmitry joined #fuel
09:25 adanin joined #fuel
09:26 teran joined #fuel
09:31 aarefiev joined #fuel
09:40 kaliya sc-rm: on CentOS or Ubuntu deployment?
09:41 sc-rm Ubuntu
09:43 kaliya sc-rm: can you find some reason in zabbix logs, e.g. timeouts or other?
09:43 teran joined #fuel
09:46 sc-rm kaliya: nope, no errors, and the start time entry “Oct 15 08:51:07 node-37 zabbix_server[29309]: Starting Zabbix Server. Zabbix 2.2.2 (revision 42525).” fits with the time I manually started it
10:05 teran joined #fuel
10:14 aarefiev joined #fuel
10:17 ArminderS joined #fuel
10:18 ArminderS joined #fuel
10:29 harybahh joined #fuel
11:00 AKirilochkin joined #fuel
11:03 e0ne joined #fuel
11:05 AKirilochkin joined #fuel
11:09 AKirilochkin_ joined #fuel
11:29 harybahh joined #fuel
12:07 harybahh joined #fuel
12:19 fuel fuel 5.1 icehouse http://paste.openstack.org/show/KbwX22beqLlyxiYJwBD3/
12:22 e0ne joined #fuel
12:28 kaliya fuel: we don't have any related bugs on file, could you please briefly describe your environment?
12:31 fuel sure, virtualbox vms (four) created by launch.sh. OpenStack cluster (Icehouse version) based on ubuntu. One controller, one compute node, one compute+storage. Everything's vanilla
12:32 kaliya fuel: did you choose ceph for cinder and glance, or for something else?
12:32 kaliya fuel: and in HA or just multinode?
12:33 saibarspeis joined #fuel
12:34 fuel multinode, cinder lvm, no ceph
12:34 pasquier-s joined #fuel
12:41 jpf joined #fuel
12:49 jaypipes joined #fuel
13:04 stamak joined #fuel
13:09 kaliya sjukskoterska: did you choose nova-network or neutron for your test deployment?
13:12 sjukskoterska nova-network
13:13 sjukskoterska is there a button "download cluster settings"?
13:13 kaliya sjukskoterska: yep, the diagnostic snapshot, but now I have all my information I need :)
13:16 sjukskoterska good to know anyway, thanks
13:19 merdoc can openstack integrate with a custom OpenID provider?
13:28 kaliya merdoc: you mean if keystone can?
13:31 kaliya sjukskoterska: you should get the oslo-related packages from http://fuel-repository.mirantis.com/fwm/5.1/ubuntu/pool/main/, install them on all the nodes, and then restart all the openstack services. These new packages fix some oslo issues
13:32 sjukskoterska thanks a lot!
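A sketch of what kaliya suggests, run on each Ubuntu node; the exact package file name under pool/main is illustrative:

    # Fetch the updated oslo package and install it locally
    wget http://fuel-repository.mirantis.com/fwm/5.1/ubuntu/pool/main/python-oslo.messaging_<version>_all.deb
    dpkg -i python-oslo.messaging_<version>_all.deb
    # then restart the OpenStack services on the node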
13:33 merdoc kaliya: I think yes. I'm mulling over a single point of auth for the whole company, so I'm deciding what to choose - OpenLDAP, OpenID or AD (sic!)
13:41 emagana joined #fuel
13:50 ganso joined #fuel
14:00 mattgriffin joined #fuel
14:03 vtzan joined #fuel
14:03 emagana joined #fuel
14:08 harybahh joined #fuel
14:11 syt joined #fuel
14:19 syt joined #fuel
14:19 stamak joined #fuel
14:21 anand_ts joined #fuel
14:22 anand_ts hello all, what is the default login password for the fuel dashboard? I installed Fuel 5.1
14:22 kaliya anand_ts: admin/admin
14:22 anand_ts is it r00tme? When I tried that, it said unable to login
14:22 anand_ts ohh!! thanks kaliya
14:23 jetole joined #fuel
14:24 xarses joined #fuel
14:24 jetole Hey guys. I have a fresh fuel / OS install, multinode HA on bare-metal using neutron VLAN. It seems my volumes are in a constantly frozen state of either deleting or creating and I don't know how I should resolve this
14:25 youellet joined #fuel
14:25 syt joined #fuel
14:25 kaliya jetole: cinder is on ceph?
14:25 jetole yes
14:25 kaliya jetole: is ceph healthy and so on?
14:26 jetole so is ephemeral and radosgw (which I believe is swift)
14:26 kaliya jetole: yep it's swift but has to do with glance, in HA
14:26 jetole Ok...
14:26 jetole I believe it's healthy
14:26 kaliya jetole: do you have any relevant cinder log somewhere? in /var/log/cinder
14:26 jetole I don't know how to connect to these nodes post fuel install
14:26 jetole ^
14:27 kaliya jetole: you have to run `fuel nodes` on the master, and identify your controllers' IP addresses, then ssh to them
14:27 jetole k. give me a sec
14:27 kaliya jetole: you can ssh with the master key, by ssh IP from the master
14:28 kaliya or if you specified an additional public key in the settings tab, you can connect even from outside the master
14:29 jetole the fuel command is prompting me for auth via --os-username --os-password
14:29 jetole what credentials is it referring to
14:29 jetole ?
14:30 kaliya jetole: on the master, you have to run `fuel nodes`
14:30 kaliya jetole: it will give you the list of the detected and provisioned nodes :)
14:30 jetole kaliya, http://pastebin.com/eppG2U6B
14:32 kaliya jetole: please paste /etc/fuel/client/config.yaml ?
14:33 jetole I'd rather not. I assume that is the user/pass I require
14:33 kaliya are KEYSTONE_USER and KEYSTONE_PASS set?
14:33 jetole yes
14:34 jetole oh...
14:34 kaliya how did you change them?
14:34 jetole The pass may be wrong
14:34 jetole I just ran export KEYSTONE_USER, etc but let me pull up keepass and get the new pass
14:35 kaliya jetole: it's not required to export anything, the master should work with fuel commands out of the box
14:35 jetole OK. Is that supposed to be the same password I use to log into horizon via admin user?
14:36 kaliya jetole: nope, they are different things.
14:36 jetole well it's admin/admin in the yaml
14:36 jetole server port 8000
14:36 kaliya so logout-login again in the master, and try `fuel` ?
14:36 jetole keystone port 5000
14:36 kaliya `fuel nodes`
14:36 jetole same thing
14:36 mpetason joined #fuel
14:37 kaliya jetole: try to `dockerctl restart keystone`
14:38 jetole http://pastebin.com/CsP10FzG
14:39 syt joined #fuel
14:40 jetole @ kaliya
14:40 kaliya jetole: please change the fuel password as described here http://docs.mirantis.com/openstack/fuel/fuel-5.1/operations.html#fuel-passwd-ops
14:41 jetole OK
14:42 jpf joined #fuel
14:43 jetole kaliya, and I now I update /etc/fuel/client/config.yaml to the new password?
14:45 kaliya jetole: no, you should use the UI or the fuel command, to change the password
14:45 kaliya jetole: you did? `fuel` command is restored?
14:46 jetole kaliya, I used the keystone command on that page, edited /etc/fuel/client/config.yaml and ran dockerctl restart keystone and now fuel nodes is working
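The sequence jetole describes, sketched; `password-update` is the python-keystoneclient subcommand (it prompts for the current and new password), and the auth URL is an assumption based on the ports mentioned above:

    # On the Fuel master
    keystone --os-username=admin --os-password=<current> \
             --os-auth-url http://127.0.0.1:5000/v2.0 password-update
    # keep the CLI config in sync with the new password
    vi /etc/fuel/client/config.yaml
    dockerctl restart keystone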
14:46 kaliya jetole: ok, with `fuel nodes` you get the list of the nodes, you have to identify a controller of your environment and ssh to it
14:47 jetole done
14:47 kaliya so now do `. openrc`
14:47 kaliya and ceph -s ?
14:47 jetole HEALTH_OK
14:47 kaliya jetole: now you can explore /var/log/cinder if something wrong has been logged
14:48 jetole kaliya, /var/log/cinder/volume.log ?
14:49 jetole I see broken pipe errors from oslo.messaging._drivers.impl_rabbit
14:50 jetole kaliya, http://pastebin.com/7WVbb4zy
14:53 kaliya jetole: is your env on centos or ubuntu?
14:53 jetole centos
14:54 kaliya jetole: you have to install the oslo packages from here http://fuel-repository.mirantis.com/fwm/5.1/centos/os/x86_64/Packages/ (they fix an oslo bug) on all your nodes, and restart all the openstack services on them
14:55 jetole Ok
14:55 jetole I'm not too apt on centos/redhat. How do I install that rpm?
14:56 kaliya rpm -i
14:56 jetole all nodes? Controller, osd, compute?
14:56 kaliya jetole: yes
14:56 jetole kaliya, python-oslo-messaging-1.3.0-fuel5.1.mira3.noarch.rpm ?
14:58 kaliya jetole: sorry, take packages from http://fuel-repository.mirantis.com/fwm/5.1.1/centos/os/x86_64/Packages/
14:58 jetole kaliya, OK. Which package. There are 5 oslo ones. I know I am not running vmware
14:59 kaliya jetole: all, but the vmware one
14:59 jetole ok
14:59 jetole do I install these on the fuel master as well?
15:01 kaliya jetole: nope, only on your openstack nodes
15:01 emagana joined #fuel
15:03 jetole kaliya, http://pastebin.com/eSenP5ZQ
15:04 jetole ls
15:08 emagana joined #fuel
15:09 emagana joined #fuel
15:11 dhblaz joined #fuel
15:12 jetole kaliya, ?
15:14 kaliya jetole: just rpm -iv python-oslo-messaging-1.3.0-fuel5.1.mira4.noarch.rpm ?
15:15 jetole kaliya, It looks like I got the same results
15:15 jetole Is this an error or do I carry on?
15:15 kaliya so it's a warning
15:16 jetole kaliya, so what do I do about it?
15:16 jetole Sorry to sound so uninformed. I have been a debian / ubuntu guy forever
15:16 kaliya `rpm -qa | grep oslo` shows which package version?
15:17 jetole It still shows mira3 for messaging
15:17 kaliya try `yum install http://fuel-repository.mirantis.com/fwm/5.1.1/centos/os/x86_64/Packages/python-oslo-messaging-1.3.0-fuel5.1.mira4.noarch.rpm`
15:18 jetole and now it shows mira4 :-)
15:18 MiroslavAnashkin Yum works well with locally downloaded files as well.
15:22 jetole yep. moving forward on all nodes and yum installing the local files which I already scp'd over
15:22 jetole thanks kaliya and MiroslavAnashkin
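jetole's rollout, sketched; node names are illustrative, and yum on a local file resolves dependencies that a plain rpm -i would not:

    # From the Fuel master, push the mira4 oslo packages to each node
    for n in node-1 node-2 node-3 node-4; do
        scp python-oslo-*.rpm $n:/tmp/
        ssh $n 'yum -y install /tmp/python-oslo-*.rpm'
    done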
15:23 Dr_Drache jetole, may I ask why you are deploying on cent if you are an ubuntu guy?
15:24 jetole I overlooked the option when deploying
15:25 jetole kaliya, you said I had to restart all openstack services. Is there a simple command to do so?
15:27 jetole also, do I restart ceph?
15:29 MiroslavAnashkin No, you don't need to restart ceph.
15:29 evg jetole: services are restarted with "service SERVICE restart" like in ubuntu
15:30 MiroslavAnashkin # service nova-api restart
15:30 MiroslavAnashkin # service nova-cert restart
15:30 MiroslavAnashkin # service nova-consoleauth restart
15:30 MiroslavAnashkin # service nova-scheduler restart
15:30 MiroslavAnashkin # service nova-conductor restart
15:30 MiroslavAnashkin # service nova-novncproxy restart
15:30 jetole evg: yeah. So I read /etc/init and /etc/init.d on every server and run all of them?
15:30 jetole MiroslavAnashkin, are those the only ones I need to do for for the oslo patch?
15:31 jetole evg, I meant run all of the ones that look like they are part of openstack
15:32 jetole can I just reboot all nodes since this isn't production yet?
15:33 evg jetole: MiroslavAnashkin wrote the list of commands.
15:33 jetole OK
15:36 jetole those were unrecognized services on all nodes
15:38 jetole got it
15:38 jetole for x in openstack-nova-api openstack-nova-cert openstack-nova-consoleauth openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy; do service $x restart; done
15:38 MiroslavAnashkin Yes, these are named different between Ubuntu and CentOS
15:39 jetole figures
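jetole's loop again, extended with the cinder services that also link oslo.messaging; the cinder service names are an assumption from standard CentOS packaging, not something listed in this log:

    # Restart the AMQP-speaking services on a CentOS controller
    for x in openstack-nova-api openstack-nova-cert openstack-nova-consoleauth \
             openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy \
             openstack-cinder-api openstack-cinder-scheduler openstack-cinder-volume; do
        service $x restart
    done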
15:40 jetole so I patched oslo and restarted the services and I still have volumes that are frozen on creating or deleting
15:47 MiroslavAnashkin Please try to delete these volumes one more time.
15:48 harybahh joined #fuel
15:59 jetole MiroslavAnashkin, I rebooted all nodes and then took a shower since it takes Dell R servers forever to reboot. I just tried to delete the volumes again and I got the error "You are not allowed to delete volumes:... etc" as a modal popup in horizon. I am logged in as admin
16:00 Dr_Drache jetole, delete them through CLI
16:00 jetole I don't know if this matters but it's using ceph
16:00 jetole Dr_Drache, which type of node and which command should I use?
16:00 Dr_Drache on the controller node... let me double check
16:01 jetole Dr_Drache, thanks
16:01 Dr_Drache jetole, your ceph status good?
16:02 jetole it was before the reboot. Let me check again
16:03 Dr_Drache joined #fuel
16:03 Dr_Drache weird
16:04 jetole Dr_Drache, It's not now
16:04 jetole #ceph: hey guys. I just rebooted all nodes in a 4 node ceph cluster. My health is `health HEALTH_WARN 548 pgs stale; 548 pgs stuck stale`. Can anyone please help me resolve this?
16:05 Dr_Drache all OSDs up?
16:06 jetole I'm seeing 3 up, 3 in. not node3
16:06 jobewan joined #fuel
16:09 jetole ok... 4 up, 4 in
16:09 jetole HEALTH_OK
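For reference, the checks that narrow a stale-pg warning down to a missing OSD (here the cluster recovered on its own once node3's OSD rejoined):

    ceph -s                    # overall health and pg states
    ceph osd tree              # which OSD/host is down or out
    ceph health detail | head  # the specific stuck pgs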
16:10 Dr_Drache now try to delete
16:10 Dr_Drache cinder force-delete
16:14 jetole Dr_Drache, is there a file I source on the compute node for OS_USERNAME / OS_PASSWORD ?
16:15 Dr_Drache I'm not sure
16:15 Dr_Drache I'm not all bent on changing passwords like that.
16:16 rmoe joined #fuel
16:16 jetole cinder on the compute node doesn't seem to know anything about user, pass, tenant, url, etc
16:17 Dr_Drache are you using force?
16:18 jetole . openrc
16:18 jetole that was the file. kaliya mentioned that earlier
16:18 jetole ok. Now cinder is working. Let me try the force-delete
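The force-delete sequence jetole ends up with, sketched (the volume UUID is a placeholder):

    # On a controller, with credentials loaded
    . openrc
    cinder list                        # find the stuck volume's ID
    cinder force-delete <volume-uuid>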
16:19 jetole that worked for the volumes. I have one instance which I cannot delete either
16:19 jetole Dr_Drache, do you know which command I use for that?
16:20 jetole I think I got it
16:20 jetole 1 sec
16:20 Dr_Drache it's nova
16:21 Dr_Drache but if it's in an error state you need to reset it first
16:21 jetole yeah it's not letting me delete when in state error
16:21 Dr_Drache nova reset-state UUID
16:21 Dr_Drache then
16:21 jetole I'm googling this but if you know the quick answer. I don't want to bother you too much
16:21 Dr_Drache nova delete UUID
16:22 Dr_Drache if that doesn't work
16:22 Dr_Drache you can try
16:22 jetole perfect.
16:22 jetole no
16:22 jetole it did
16:22 jetole it worked
16:22 jetole thank you
16:22 Dr_Drache good good
16:22 Dr_Drache no problem
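Dr_Drache's recipe in one place (the UUID is a placeholder; reset-state marks the instance error by default, and --active marks it active instead):

    . openrc
    nova reset-state <instance-uuid>
    nova delete <instance-uuid>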
16:22 jetole Now let's try creating some instances and volumes and hope everything is on the up and up
16:22 Dr_Drache bbiab, need some lunch
16:22 * jetole knocks on wood
16:23 Dr_Drache jetole, there is some "fun" with ceph
16:23 Dr_Drache don't use qcow2
16:23 jetole yeah I have used ceph before but fuel is new to me and a bit of a learning curve, but hey, who's not having fun?
16:23 jetole oh
16:23 jetole no?
16:23 Dr_Drache no
16:23 Dr_Drache raw only
16:24 jetole for images and volumes?
16:24 Dr_Drache yes
16:24 jetole OK
16:24 jetole thanks
16:25 jetole go eat. I'm going to go grab some lunch on my way into the office
16:26 MiroslavAnashkin yes, qcow is copy-on-write and ceph has its own copy-on-write built-in. Since qcow also occupies disk space on demand, ceph has to expand it to full size and convert it on the fly. It is slow and usually ends up with not enough free space on temporary storage.
16:27 Dr_Drache MiroslavAnashkin, they need to fix that temp storage issue.
16:27 Arminder joined #fuel
16:28 Dr_Drache been a year since openstack/ceph/mirantis stated it doesn't need to be used for ceph, but here we are still using it :P
16:29 Dr_Drache well, maybe not mirantis, but it was in a blog post in collaboration with inktank
16:34 Rajbir joined #fuel
16:35 jetole Is there any way for me to disable qcow?
16:35 jetole I doubt every user will know or remember
16:45 MiroslavAnashkin If you have qcow images - simply convert them with `qemu-img convert -f qcow2 -O raw <image name>.qcow <image name>.img` before importing to OpenStack and import as raw.
16:47 MiroslavAnashkin You may also set default image format for snapshots in /etc/nova/nova.conf
16:49 MiroslavAnashkin There are images_type and use_cow_images parameters. Please set these accordingly. http://docs.openstack.org/trunk/config-reference/content/list-of-compute-config-options.html
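MiroslavAnashkin's conversion plus the nova.conf knobs he names, sketched; openstack-config (from openstack-utils) and the [libvirt] section placement are assumptions for this Icehouse/CentOS setup:

    # Convert a qcow2 image to raw and upload it as raw
    qemu-img convert -f qcow2 -O raw myimage.qcow2 myimage.img
    glance image-create --name myimage --disk-format raw \
        --container-format bare --file myimage.img
    # Make ephemeral disks and snapshots raw-on-rbd instead of qcow2
    openstack-config --set /etc/nova/nova.conf libvirt images_type rbd
    openstack-config --set /etc/nova/nova.conf DEFAULT use_cow_images False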
17:00 alex_didenko joined #fuel
17:09 xarses joined #fuel
17:28 jpf joined #fuel
17:35 harybahh joined #fuel
17:36 HeOS joined #fuel
17:47 xarses joined #fuel
17:48 stamak joined #fuel
18:01 kupo24z1 joined #fuel
18:01 jpf joined #fuel
18:14 kupo24z joined #fuel
18:21 jetole joined #fuel
18:22 jetole Hey guys. I have networking via neutron/vlan. I have two nets right now: one which should always be selected when creating an instance and one which should never be selected. I wanted to know, is there a way to hide/disable one when creating an instance, and is there a way to set one as a default so it does not have to be manually selected?
18:38 Dr_Drache jetole, I think you can create a new project and in there show only 1 network.
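One way to get that effect, sketched (network and tenant names are illustrative): a network created without --shared is visible only to its owning tenant, so users in other projects never see it in the launch dialog:

    # Visible to everyone:
    neutron net-create shared-net --shared
    # Visible only to the tenant that owns it:
    neutron net-create ops-net --tenant-id <ops-tenant-id>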
19:11 ilbot3 joined #fuel
19:11 Topic for #fuel is now Fuel 5.1 for Openstack: https://wiki.openstack.org/wiki/Fuel | Paste here http://paste.openstack.org/ | IRC logs http://irclog.perlgeek.de/fuel/
19:17 tatyana joined #fuel
19:18 jetole When I start ubuntu instances, these are the only ones I have looked at so far but when I start them, I am seeing console messages similar to "url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [103/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by <class 'socket.error'>: [Errno 113] No route to host)]"
19:20 Dr_Drache you have the external network, or internal network connected?
19:29 jetole Internal
19:29 jetole I don't connect the external. I only access that via assigning a floating IP
19:31 jetole It also doesn't seem to have associated with the floating IP. I assigned a floating IP but I'm not getting any confirmation that the IP is active from remote hosts
19:33 jetole I see a fedora image had a almost identical message
19:36 Dr_Drache it's because it's not connected to the internal network
19:45 e0ne joined #fuel
19:54 jetole no it's because heat engine wasn't running
19:54 jetole it was connected to the internal network
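The checks that separate a network problem from a service problem here, sketched; heat-engine turned out to be the culprit above, and the service names assume this CentOS deployment:

    # From inside the instance: can it reach the metadata service at all?
    curl http://169.254.169.254/2009-04-04/meta-data/instance-id
    # On the controller: are the relevant agents and services alive?
    neutron agent-list
    service neutron-metadata-agent status
    service openstack-heat-engine status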
20:05 Dr_Drache why wasn't heat running?
20:08 _________ joined #fuel
20:14 jetole I don't know. I had restarted my servers earlier
20:15 jetole no I see it in the default launch scripts
20:15 jetole /etc/rc*
20:22 adanin joined #fuel
20:47 teran joined #fuel
21:10 teran joined #fuel
21:11 jpf joined #fuel
21:36 harybahh joined #fuel
21:59 adanin joined #fuel
22:12 mattgriffin joined #fuel
22:18 alex_didenko joined #fuel
23:31 kupo24z joined #fuel
23:36 harybahh joined #fuel
23:46 xarses kupo24z: juno nightly builds are green
23:47 kupo24z Yeah got the list email
23:47 kupo24z Ill try it out
23:48 kupo24z This doesnt have DVR right? What about the ml2 changes?
