
IRC log for #fuel, 2015-12-08


All times shown according to UTC.

Time Nick Message
00:02 jerrygb joined #fuel
00:06 kaliya joined #fuel
00:18 jerrygb joined #fuel
00:22 dnikishov joined #fuel
00:49 xarses joined #fuel
01:00 elo joined #fuel
01:18 aglarendil joined #fuel
01:18 ashtokolov joined #fuel
01:43 Jabadia joined #fuel
02:01 kaliya joined #fuel
02:14 jerrygb joined #fuel
03:35 jerrygb joined #fuel
03:37 NelsonPR joined #fuel
04:15 dnikishov joined #fuel
04:19 elo joined #fuel
05:57 jaypipes joined #fuel
06:29 javeriak joined #fuel
06:38 e0ne joined #fuel
06:49 e0ne joined #fuel
07:02 e0ne joined #fuel
07:04 javeriak joined #fuel
07:21 e0ne joined #fuel
07:30 LinusLinne joined #fuel
07:37 javeriak joined #fuel
07:41 javeriak joined #fuel
08:06 jerrygb joined #fuel
08:15 samuelBartel joined #fuel
08:25 Philipp__ joined #fuel
08:36 fzhadaev1 joined #fuel
09:03 e0ne joined #fuel
09:04 hyperbaba joined #fuel
09:06 javeriak joined #fuel
09:12 javeriak_ joined #fuel
09:28 LinusLinne joined #fuel
09:31 LinusLin_ joined #fuel
09:47 pbelamge joined #fuel
09:52 bhaskarduvvuri joined #fuel
09:55 asvechnikov_ joined #fuel
09:56 izinovik joined #fuel
09:56 dklenov joined #fuel
09:57 vkramskikh joined #fuel
09:57 akislitsky joined #fuel
09:58 dburmistrov joined #fuel
09:58 seeg- joined #fuel
09:59 bhaskarduvvuri_ joined #fuel
09:59 MiroslavAnashkin joined #fuel
10:00 bookwar joined #fuel
10:01 venkat_ joined #fuel
10:01 meow-nofer joined #fuel
10:13 ikar joined #fuel
10:21 bhaskarduvvuri While deploying a VM in a new environment, proxy settings are not reflected and we are getting
10:22 asvechnikov_ joined #fuel
10:22 usdsd joined #fuel
10:24 bhaskarduvvuri Failed to execute hook 'shell' Failed to run command cd / && fa_build_image --image_build_dir /var/lib/fuel/ibp --log-file /var/log/fuel-agent-env-8.log --data_driver nailgun_build_image --input_data '{"image_data": {"/boot": {"container": "gzip", "uri": "http://10.20.0.2:8080/targetimages/env_8_ubuntu_1404_amd64-boot.img.gz",
10:25 ikar joined #fuel
10:26 asvechnikov_ joined #fuel
10:32 vsedelnik joined #fuel
10:40 fzhadaev1 joined #fuel
10:46 mgrohar joined #fuel
10:49 pbelamge anybody knows how to configure bridge details on a KVM node for fuel installation?
10:50 vsedelnik joined #fuel
10:53 ikar joined #fuel
11:00 samuelBartel joined #fuel
11:00 aglarendil pbelamge: could you please share more details? what do you call 'bridge details'?
11:02 pbelamge @aglarendil, for VirtualBox we have scripts that configure the required bridges for slave nodes; I would like to mimic the same for VMs on a KVM host
11:03 aglarendil well, I guess, we use fuel-devops library for that
11:03 aglarendil #link https://github.com/openstack/fuel-devops
11:05 pbelamge thank you, will go through the link and come back if I have any more questions
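(For context on the thread above: fuel-devops drives libvirt to create the host-only networks and bridges for slave nodes. A minimal sketch of the kind of network definition involved - the network name, bridge name, and addresses here are hypothetical examples, not values mandated by Fuel:)

```python
# Sketch of a libvirt <network> definition like the ones fuel-devops creates
# for Fuel slave nodes on a KVM host. Names and CIDR are illustrative only.
import xml.etree.ElementTree as ET

def libvirt_net_xml(name, bridge, ip=None, netmask=None):
    """Build libvirt <network> XML for an isolated host-only network."""
    net = ET.Element("network")
    ET.SubElement(net, "name").text = name
    ET.SubElement(net, "bridge", attrib={"name": bridge, "stp": "off"})
    if ip:  # the Fuel admin (PXE) network gets a host address; others stay L2-only
        ET.SubElement(net, "ip", attrib={"address": ip, "netmask": netmask})
    return ET.tostring(net, encoding="unicode")

xml = libvirt_net_xml("fuel-admin", "fuel-br0", "10.20.0.1", "255.255.255.0")
print(xml)
```

The resulting XML would then be fed to `virsh net-define` and `virsh net-start` on the KVM host.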
11:28 izinovik joined #fuel
11:29 meow-nofer joined #fuel
11:29 javeriak joined #fuel
11:30 nurla joined #fuel
11:33 javeriak_ joined #fuel
12:01 NelsonPR joined #fuel
12:05 asvechnikov_ joined #fuel
12:05 Venturi is it possible to properly install, let's say, the Zabbix plugin into a Mirantis Fuel environment which has already been deployed in the past?
12:06 Venturi or does it always have to be a fresh install of the openstack environment on the nodes for a plugin to be useful?
12:09 jerrygb joined #fuel
12:10 javeriak joined #fuel
12:10 e0ne joined #fuel
12:13 vsedelnik joined #fuel
12:17 neilus1 joined #fuel
12:22 Liuqing joined #fuel
12:24 fzhadaev1 joined #fuel
12:46 anddrew joined #fuel
12:48 anddrew Hello. I have a problem with the l3-agent in openstack juno. When i launch an instance i can ping from the qdhcp namespace, but not from qrouter.
12:48 anddrew l3 agent shows started in crm resource show.
12:50 anddrew But it shows stopped in service neutron-l3-agent status. Should i start the service manually?
12:53 anddrew I can see reports in l3-agent log, so i guess it's started....
13:00 vsedelnik joined #fuel
13:02 MiroslavAnashkin joined #fuel
13:04 vsedelnik joined #fuel
13:04 championofcyrodi joined #fuel
13:07 meow-nofer joined #fuel
13:29 javeriak_ joined #fuel
13:29 TVR_ joined #fuel
13:34 meow-nofer joined #fuel
13:34 MiroslavAnashkin joined #fuel
13:35 nurla joined #fuel
13:35 Venturi another question. how to attach ceph rbd image pool to running VM Instance (centos generic cloud image) within MOS 7 deployment??
13:35 Venturi is it possible?
13:35 jaypipes joined #fuel
13:36 bhaskarduvvuri joined #fuel
13:41 asvechnikov_ joined #fuel
13:44 jerrygb joined #fuel
13:44 mgrohar joined #fuel
13:45 asvechnikov_ joined #fuel
13:48 rmoe joined #fuel
13:49 nurla joined #fuel
13:56 Liuqing joined #fuel
13:56 vsedelnik joined #fuel
14:06 Verilium Venturi:  No, unfortunately, you can't add a plugin to an existing environment, have to make a new one.
14:14 tlbr joined #fuel
14:21 pma joined #fuel
14:26 alrick joined #fuel
14:31 neilus joined #fuel
14:31 Venturi Verilium:ok thank you
14:31 zhangjn joined #fuel
14:32 zhangjn joined #fuel
14:33 fzhadaev1 joined #fuel
14:49 dnikishov joined #fuel
14:52 vsedelnik joined #fuel
14:55 rmoe joined #fuel
15:01 vsedelnik joined #fuel
15:10 jerrygb joined #fuel
15:10 neilus joined #fuel
15:12 anddrew Still trying to figure it out, so if you have any advice on what i should check..
15:23 blahRus joined #fuel
15:24 jerrygb joined #fuel
15:28 javeriak joined #fuel
15:34 javeriak joined #fuel
15:38 javeriak_ joined #fuel
15:39 thumpba joined #fuel
15:39 vsedelnik joined #fuel
15:41 cartik joined #fuel
15:41 TVR_ joined #fuel
15:44 claflico joined #fuel
16:13 tlbr joined #fuel
16:13 Jabadia joined #fuel
16:23 bhaskarduvvuri Hi Team, I am trying to write a plugin and was wondering where db_password and user_password are generated
16:24 bhaskarduvvuri I see that in globals.yaml file cinder_hash, ceilometer_hash and so on.. have their db and user passwords generated. How can a plugin extend this
16:30 mwhahaha bhaskarduvvuri: they are generated by nailgun as part of the task serializers
16:30 mwhahaha not sure if a plugin can leverage them, you may want to check in #fuel-dev
16:31 mwhahaha i think you might be able to in the settings yaml but i'm not sure
16:32 bhaskarduvvuri mwhahaha: can you point me to a code like cinder/ceilometer is doing?
16:33 mwhahaha let me go find it
16:34 mwhahaha https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml#L1266-L1368
16:34 bhaskarduvvuri I have posted the same question in #fuel-dev
16:34 mwhahaha so it's something that is done via nailgun but i'm not sure if plugins can leverage a similar function
16:35 mwhahaha you might be able to add a generated field to the plugin settings but i'm haven't tried it
16:36 bhaskarduvvuri This file resembles node_roles.yaml in the plugin. Can I add it there?
16:36 mwhahaha wouldn't be node roles
16:36 mwhahaha it might work in the environment_config.yaml
16:38 bhaskarduvvuri I will give a try
16:39 bhaskarduvvuri Thanks mwhahaha
16:40 mwhahaha no problem
16:40 mwhahaha let me know if it works :D
16:49 bhaskarduvvuri sure :)
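(For context on the thread above: nailgun's fixture file declares fields like `db_password` with `generator: password`, and the serializer fills them in at deploy time. A rough sketch of what such a generator amounts to - the 24-character length and the alphabet are illustrative assumptions, not nailgun's exact code:)

```python
# Toy equivalent of nailgun's "generator: password" used for fields such as
# cinder_hash/db_password. Length and alphabet are assumptions for illustration.
import string
import secrets

def generate_password(length=24):
    """Return a random alphanumeric password of the given length."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password()
print(pw)
```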
17:29 nurla joined #fuel
17:59 bhaskarduvvuri joined #fuel
18:28 bhaskarduvvuri joined #fuel
18:35 xarses joined #fuel
18:36 xarses joined #fuel
18:39 e0ne joined #fuel
18:46 jerrygb joined #fuel
18:55 vsedelnik joined #fuel
19:24 Vent joined #fuel
19:24 Vent Hi there. How does one use CephFS within a VM instance of a Mirantis-deployed OpenStack? Is that possible?
19:27 mattymo_ joined #fuel
19:35 elo joined #fuel
19:37 xarses We don't deploy/support CephFS with Fuel
19:38 xarses if you deployed with ceph for ephemeral enabled, then instances created w/o cinder volumes will be stored on ceph
19:38 xarses similar story for cinder volumes
19:41 Vent xarses: thx. do you possibly have some good resource on how to set up RBD access within a VM instance on MOS (using some centos or ubuntu generic cloud image)? I want to consume the ceph backend through rbd, not through cinder ....
19:41 xarses Why not through cinder?
19:41 Vent because when using it through rbd , the rbd image could be expanded on the fly
19:42 Vent when attached through cinder, the attached volume can only be resized when the instance is powered down
19:42 Vent if i am correct?
19:43 xarses yes, but to do it in the instance you have to do RBD in the kernel (which is bad, we will get to it). When you do it with cinder/nova it's done in KVM's RBD driver in userspace
19:45 xarses this can be bad when you put your storage on your compute nodes: you end up with the possibility of the kernel client deadlocking on its own request (in the osd)
19:46 Vent let's say i have the following situation. I would like to have 2 VM instances, where one partition of each instance would be intended to store some call logs. this partition (rbd image mapped) would be visible to both sides, but only one would write to the rbd image..... i know that cephfs would probably do better
19:47 Vent aha. yes probably you are right, but i guess cephfs would sort out that kind of situation?
19:48 xarses it could help
19:48 xarses what kind of logs? like a oracle ha cluster?
19:48 xarses or are these more sedentary
19:48 Vent call data records
19:48 Vent telco app
19:48 Vent legacy app within VM
19:49 xarses why would both nodes need to be able to see each other's? call history can be fetched from either?
19:50 Vent could i attach the same cinder volume as writable and read-only on different instances?
19:53 Vent these two nodes would act as active and standby nodes
19:53 Vent legacy app.... where one node takes over control when the other is down
19:54 Vent we have used drbd till now....
19:55 Vent i would like to somehow used ceph instead....i heard that some ceph rbd mirroring is also on the roadmap
19:57 Vent i do not want the VM instance to grow over time, but to use an attached ceph pool through rbd or cephfs, where i could expand the size of the partition with no downtime...
19:58 Vent that's the idea, but still searching what's possible to do with MOS...
20:02 TVR__ joined #fuel
20:04 xarses hmm, ya CephFS would be nice, but it's still not 'production'
20:05 xarses you may be better off setting up NFS with a DRDB replication or use GlusterFS
20:10 Vent but glusterfs and ceph could not be used at the same time within mirantis , right?
20:10 Vent i see mirantis has some glusterfs plugin to be used as a cinder backend
20:11 Vent http://plugins.mirantis.com/docs/e/x/external_glusterfs/external_glusterfs-1.0.0.pdf
20:11 xarses it won't want to deploy it for you, but you can have multiple cinder back ends configured
20:11 xarses you will have to update the cinder config a little
20:12 xarses but, I think your app would use GlusterFS directly inside the instance, not via cinder
20:12 Vent aha i think i saw that somewhere yes, multiple backends ...
20:12 Vent aha
20:12 xarses Fuel doesn't do a good job configuring them (multiple backends ) yes
20:12 xarses s/yes/yet
20:17 Vent xarses: ok thx for now
20:17 Vent be great!
20:17 Jabadia Hi, I have a Cinder/RBD/ question - I notice that for every VM deployment I do there's a process of RBD import process
20:17 vsedelnik joined #fuel
20:18 Jabadia so, if I deploy 10 VM's it takes a very long time
20:25 Jabadia why is there no copy on write from the base image I ask
20:27 LinusLinne joined #fuel
20:33 vsedelnik joined #fuel
20:36 MiroslavAnashkin Jabadia: It is normal if one image is used to create thousands of volumes. Volumes are block devices - so we need fast access and as little lag as possible. Syncing thousands of fast devices with a lot of changes costs too much.
20:38 MiroslavAnashkin Jabadia: Ceph team is attempting to create some solution in the future Ceph versions. For now built in copy on write works only for images and only for images in RAW format
20:38 Jabadia MiroslavAnashkin:  I'm talking about ephemeral, not attached volume
20:40 MiroslavAnashkin Jabadia: Please check the image format you are using - images should be RAW.
20:40 Jabadia yes, I did use qcow2 indeed
20:42 MiroslavAnashkin Jabadia: OpenStack needs to perform an on-the-fly image conversion from QCOW to RAW in the case of Ceph ephemeral storage. This may be the root cause of the slowdown
20:42 Jabadia but once it has been converted, can't it be the base image for the rest
20:43 Jabadia future ones
20:43 Jabadia I assume this is what in develop ?
20:43 MiroslavAnashkin Jabadia: Please try to convert one image to RAW offline and re-upload it as RAW into glance as new image
20:43 Jabadia I know that's an option, then i lose the option to use qcow2 images
20:44 Jabadia I might find myself importing qcow2 that was 400M to a raw that is 100G
20:44 MiroslavAnashkin Jabadia: Yes. Ceph ephemeral storage has its own copy on write implementation and it conflicts with the QCOW format
20:44 Jabadia allright, thanks for the help
20:46 TVR_ joined #fuel
20:46 MiroslavAnashkin Jabadia: Exactly. The key issue is that QCOW images have a formatted filesystem size greater than the image file size - unused clusters are omitted. Ceph requires the image to be expanded to the full filesystem size before it is able to use copy on write.
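(To illustrate the point above: the qcow2 header records a *virtual* disk size that is usually far larger than the file on disk, and that virtual size is what Ceph must allocate before RBD copy-on-write applies. A minimal sketch reading that field from a fabricated header - the 100 GiB figure is an illustrative example:)

```python
# Read the virtual size from a qcow2 header. Header layout per the qcow2
# spec: magic "QFI\xfb", version u32, backing-file offset u64 and size u32,
# cluster_bits u32, then the virtual size as a big-endian u64 at offset 24.
import struct

QCOW2_MAGIC = b"QFI\xfb"

def qcow2_virtual_size(header: bytes) -> int:
    """Return the virtual disk size recorded in a qcow2 header."""
    magic, _version, _bf_off, _bf_sz, _cluster_bits, size = struct.unpack(
        ">4sIQIIQ", header[:32])
    if magic != QCOW2_MAGIC:
        raise ValueError("not a qcow2 image")
    return size

# Fabricated 32-byte header for a 100 GiB virtual disk (version 3, 64 KiB clusters)
hdr = struct.pack(">4sIQIIQ", QCOW2_MAGIC, 3, 0, 0, 16, 100 * 1024**3)
print(qcow2_virtual_size(hdr))  # 107374182400 - regardless of the file's real size
```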
20:48 Jabadia So, but what happens now ( if I understand correctly ) is that there's a QCOW to raw conversion by nova to import it into RBD ( for each VM , even on same host )
20:48 MiroslavAnashkin Yep.
20:53 Jabadia so, my original question - if/once a conversion from qcow to RAW has taken place ( once, per entire ceph RBD pool ) can't nova do the copy on write from it ? ( I assume not currently , i'm asking theoretically )
20:56 Jabadia well, i guess, i'll just try that myself ..
21:02 MiroslavAnashkin Jabadia: Ceph has to base the CoW sequence on the original image; a volume is not a good candidate for this. That means the image and volume formats should match.
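(The clone behaviour discussed above can be modelled in a few lines: an RBD-style clone shares the base image's blocks and stores only the blocks it overwrites. This is a conceptual toy, not the librbd API:)

```python
# Toy copy-on-write clone: reads fall through to the base image unless the
# clone has overwritten that block; writes never touch the base.
class BaseImage:
    def __init__(self, blocks):
        self.blocks = dict(blocks)      # block index -> data

class CowClone:
    def __init__(self, base):
        self.base = base
        self.overlay = {}               # only overwritten blocks live here

    def read(self, idx):
        return self.overlay.get(idx, self.base.blocks.get(idx))

    def write(self, idx, data):
        self.overlay[idx] = data        # copy-on-write: base stays untouched

base = BaseImage({0: b"boot", 1: b"rootfs"})
vm1, vm2 = CowClone(base), CowClone(base)
vm1.write(1, b"vm1-data")
print(vm1.read(1), vm2.read(1))  # b'vm1-data' b'rootfs'
```

This is why cloning a base image is cheap for ten VMs, while a full import/convert per VM (as with qcow2 sources) is not.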
21:13 meow-nofer joined #fuel
21:14 mgrohar joined #fuel
21:29 bhaskarduvvuri joined #fuel
21:29 bhaskarduvvuri mwhahaha: Thank you very much. It worked
21:29 mwhahaha excellent
21:30 mwhahaha thanks for letting me know
21:30 bhaskarduvvuri yeah, you need to make sure the fix for https://bugs.launchpad.net/fuel/+bug/1473452 is applied on 7.0 and 8.0
21:31 bhaskarduvvuri and restart master node
21:32 mwhahaha yup that'd do it
21:32 mwhahaha you could have also just restarted the nailgun container
21:32 bhaskarduvvuri yes :)
21:35 anddrew what is the procedure to start to cluster in case of a total failure?
21:35 NelsonPR joined #fuel
21:35 anddrew the*
21:37 anddrew on fuel version 6.1
21:38 mwhahaha like a complete power outage?
21:39 anddrew yes
21:39 mwhahaha turn all the controllers back on and wait
21:39 mwhahaha they should reform a cluster
21:39 mwhahaha but it does take some time
21:40 anddrew i see, not my primary controller finished the init process, but mysql can't start
21:40 anddrew now*
21:47 mwhahaha are the other controllers on?
21:48 anddrew yes, but hadn't finished the init yet
21:49 mwhahaha you may want to check the mysql logs to see what's up with that node. what used to be the primary controller may not be the most current mysql dataset so it might be waiting until the other nodes are reachable before trying to continue
21:51 anddrew whahaha_:    there is nothing new in /var/log/mysqld.log since the shutdown
21:51 anddrew mwhahaha_:
21:53 mwhahaha what does 'pcs status' say
21:54 anddrew http://paste.openstack.org/show/481247/
22:01 mwhahaha so it's stopped, have you tried a 'crm resource start p_mysql' ?
22:04 anddrew yes, i've tried before, i will try again now. Although i can see /bin/bash /usr/lib/ocf/resource.d/fuel/mysql-wss oscillating between start and stop
22:04 mwhahaha check the pacemaker logs
22:09 anddrew mwhahaha_: the only error i see is this node-26 lrmd[16096]:   notice: operation_finished: p_mysql_start_0:1288:stderr [ /usr/lib/ocf/resource.d/fuel/mysql-wss: line 342: TMP: bad array subscript
22:09 anddrew maybe i shoul
22:10 anddrew maybe i should let it finish for a while and then see what's going on; one node is still doing the init scripts
22:14 Jabadia joined #fuel
22:15 Jabadia joined #fuel
22:16 wiza joined #fuel
23:16 TVR_ joined #fuel
23:38 rmoe joined #fuel
23:44 rmoe joined #fuel
23:48 dnikishov joined #fuel
23:49 LinusLinne joined #fuel
23:56 jerrygb joined #fuel
23:57 Jabadia joined #fuel
23:59 rmoe joined #fuel
