
IRC log for #fuel, 2015-01-26


All times shown according to UTC.

Time Nick Message
00:19 thumpba joined #fuel
00:24 jobewan joined #fuel
00:26 thumpba joined #fuel
01:37 ddmitriev1 joined #fuel
02:22 jobewan joined #fuel
02:47 ilbot3 joined #fuel
02:47 Topic for #fuel is now Fuel 5.1.1 (Icehouse) and Fuel 6.0 (Juno) https://software.mirantis.com | Fuel for Openstack: https://wiki.openstack.org/wiki/Fuel | Paste here http://paste.openstack.org/ | IRC logs http://irclog.perlgeek.de/fuel/
03:21 fandi joined #fuel
03:29 thumpba joined #fuel
04:23 mattgriffin joined #fuel
04:24 Longgeek joined #fuel
04:36 fandi joined #fuel
04:47 Longgeek_ joined #fuel
05:00 fandi joined #fuel
05:01 fandi joined #fuel
05:07 fandi joined #fuel
05:09 fandi joined #fuel
05:11 fandi joined #fuel
05:13 fandi joined #fuel
05:17 fandi joined #fuel
06:24 Miouge joined #fuel
06:36 Miouge joined #fuel
06:43 sambork joined #fuel
07:11 mstrukov joined #fuel
07:11 mstrukov left #fuel
07:13 e0ne joined #fuel
07:20 aliemieshko_ joined #fuel
07:22 avgoor joined #fuel
07:25 sovsianikov joined #fuel
07:32 dklepikov joined #fuel
07:32 stamak joined #fuel
07:33 monester_laptop joined #fuel
07:44 HeOS joined #fuel
07:50 Miouge joined #fuel
08:12 corepb_ joined #fuel
08:14 okosse joined #fuel
08:14 stamak joined #fuel
08:18 avgoor joined #fuel
08:37 f13o joined #fuel
08:52 artem_panchenko left #fuel
08:52 LiJiansheng joined #fuel
08:58 sambork joined #fuel
09:11 saibarspeis joined #fuel
09:16 [HeOS] joined #fuel
09:19 fandi joined #fuel
09:22 fandi joined #fuel
09:24 stamak joined #fuel
09:30 Longgeek joined #fuel
09:31 HeOS joined #fuel
09:37 ricolin_ joined #fuel
09:39 ricolin joined #fuel
09:49 fandi joined #fuel
09:50 ddmitriev joined #fuel
09:57 monester_laptop joined #fuel
09:58 artem_panchenko joined #fuel
10:03 e0ne joined #fuel
10:08 subscope joined #fuel
10:19 alecv joined #fuel
10:38 avgoor joined #fuel
10:48 subscope joined #fuel
10:52 teran joined #fuel
10:53 teran joined #fuel
11:00 HeOS joined #fuel
11:30 sambork joined #fuel
11:39 e0ne joined #fuel
12:14 subscope joined #fuel
12:28 adanin joined #fuel
12:52 kaliya_ joined #fuel
13:03 baloney joined #fuel
13:06 samuelbartel joined #fuel
13:16 circ-user-bgrSG joined #fuel
13:25 baloney Hi!   Can I install Fuel with Neutron if I have a single NIC? My admin has already set up 3 additional VLANs. Or am I forced to use nova-network?
13:28 sambork joined #fuel
13:44 aliemieshko for 1 NIC, use nova-network: http://docs.mirantis.com/fuel/fuel-6.0/reference-architecture.html#nova-config-vlan
13:46 aliemieshko you must have at least three NICs configured to use the Neutron VLAN topology and two for GRE
13:47 baloney aliemieshko: thanks a lot! i'll keep it simple this time
13:54 Sekke joined #fuel
13:56 rbowen joined #fuel
14:05 mihgen MiroslavAnashkin: ping
14:29 baloney_ joined #fuel
14:32 EugeneB1984 joined #fuel
14:34 ebogdanov joined #fuel
14:37 ebogdano_ joined #fuel
14:38 mattgriffin joined #fuel
14:48 Miouge joined #fuel
14:56 benrodrigue joined #fuel
14:57 ebogdanov joined #fuel
15:02 miroslav_ joined #fuel
15:03 omolchanov joined #fuel
15:04 mpetason joined #fuel
15:15 Miouge joined #fuel
15:59 SergK joined #fuel
16:11 benrodri_ joined #fuel
16:12 jobewan joined #fuel
16:12 championofcyrodi joined #fuel
16:24 teran joined #fuel
16:35 ebogdanov joined #fuel
16:38 blahRus joined #fuel
16:53 Miouge joined #fuel
17:10 rmoe joined #fuel
17:14 championofcyrodi are there any known issues with decommissioning a node that is configured for "Compute, Ceph-OSD" roles? I have 4 and migrated the instances on 1 to the other 3. There is plenty of storage left on the entire ceph cluster... but I'm curious about ceph volume/image migration
17:14 championofcyrodi in 5.1 environment.
17:15 championofcyrodi more details here: http://championofcyrodiil.blogspot.com/2015/01/upgrading-openstack-with-fuel.html
17:17 championofcyrodi the fuel dashboard just says "delete" instead of decommission... so i'm wondering if any sort of ceph block replication will be performed before deleting...
17:17 championofcyrodi http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#take-the-osd-out-of-the-cluster
17:17 championofcyrodi hmmm
17:18 championofcyrodi hmm i guess the fuel instance is 6.0 now... but the env. was created w/ 5.1
17:18 championofcyrodi or possibly 5.0
17:19 championofcyrodi i feel like i could just use ceph to remove and rebalance the OSD node... then perform the fuel delete.
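A minimal sketch of that manual sequence, following the ceph docs linked above (the OSD id 4 and the init-script syntax are placeholders - check 'ceph osd tree' on your cluster for the real ids):

    # mark the OSD out and let the cluster rebalance; progress is visible via 'ceph -w'
    ceph osd out 4
    ceph -w
    # once rebalancing finishes, stop the daemon on the OSD node and remove it from the cluster
    service ceph stop osd.4
    ceph osd crush remove osd.4
    ceph auth del osd.4
    ceph osd rm 4
    # only after that, delete the node from the Fuel environment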
17:20 ebogdanov joined #fuel
17:22 teran joined #fuel
17:24 miroslav_ joined #fuel
17:32 championofcyrodi well i just went w/ the ceph rebalance, because it felt right.  currently things are running smoothly and I am watching the remapping via 'ceph -w'
17:33 rmoe joined #fuel
17:37 MiroslavAnashkin championofcyrodi: Yes. Fuel only deletes the Ceph node, without a proper Ceph OSD decommission. We have only started thinking about the post-deployment product lifecycle and standard maintenance actions, like adding/removing disks on an already existing node, adding/removing OSDs etc. Stakeholders want robust and stable OpenStack upgrades between versions from us first of all :-(
17:39 championofcyrodi I understand that desire. a push to ensure upgrade paths exist makes sense, especially with a lot of initial investment.
17:40 championofcyrodi and it allows post-deployment maintenance actions to be delivered via upgrades. :)
17:42 championofcyrodi We occasionally work with Cloudera CDH clusters. Their implementation of parcels makes it easy to upgrade software stacks. however, it had a lot of issues in its infancy.
17:55 xarses_ joined #fuel
18:03 jaypipes joined #fuel
18:06 miroslav_ joined #fuel
18:08 justif joined #fuel
18:27 alwaysatthenoc joined #fuel
18:51 angdraug joined #fuel
19:04 championofcyrodi does anyone have some clues on migrating an instance/volume from one fuel environment to another?
19:05 HeOS joined #fuel
19:10 teran_ joined #fuel
19:10 MiroslavAnashkin Not sure about instances - but it is possible for volumes. You need to create and export a new image from the instance, then import the image in the new environment.
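A rough sketch of that flow with the CLI clients of that era (the image name, id, and file paths are illustrative; Ceph-backed snapshots usually come out as raw rather than qcow2, so adjust --disk-format accordingly):

    # old environment: snapshot the instance, then download the resulting image
    nova image-create my-instance my-instance-snap
    glance image-download --file my-instance-snap.img <image-id>
    # new environment: upload it again
    glance image-create --name my-instance-snap --disk-format qcow2 \
      --container-format bare --file my-instance-snap.img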
19:31 moizarif joined #fuel
19:33 moizarif hi, i am using Fuel 6 and deploying OpenStack in HA. I see an error message on the GUI saying: Deployment has failed. Method deploy. Upload cirros "TestVM" image failed. Inspect Astute logs for the details. What seems to be the issue here? can anyone guide me?
19:38 MiroslavAnashkin Such issues usually appear if Glance or the Glance backend is configured incorrectly, or the storage network has no connectivity.
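One way to narrow that down is to retry the upload by hand from a controller and see where it fails (the credentials file location is the usual Fuel default; the cirros image path is a placeholder):

    source /root/openrc
    glance image-create --name TestVM --disk-format qcow2 --container-format bare \
      --is-public true --file /path/to/cirros-x86_64-disk.img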
19:39 kdavyd "you must have at least three NICS configured to use the Neutron VLAN topology" - what is the reasoning behind the requirement?
19:40 kdavyd i.e. will things fail outright regardless of what I do if I put all 5 networks on the same interface but different vlans, or can it work, just not recommended?
19:42 MiroslavAnashkin kdavyd: The Admin PXE network should either be untagged or be configured as a Native VLAN. The Public network should be untagged as well, or you have to re-configure the public network to use a VLAN and configure proper external routing.
19:43 kdavyd MiroslavAnashkin: Admin PXE being untagged is a given. For public network, I have no problem with tagged+routing.
19:44 MiroslavAnashkin Fuel does not support VLAN tagged Admin and Public networks out of the box. You have to reconfigure the Public network to use a VLAN after the deployment (if it finishes successfully) or make some network customizations with the CLI before deployment.
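The CLI route mentioned here is roughly: dump the network configuration, edit the VLAN and gateway fields, and push it back before deployment (env id 1 is a placeholder; exact flags may differ between Fuel releases):

    fuel network --env 1 --download     # writes network_1.yaml
    # edit network_1.yaml: set the VLAN tag and gateway for the public network
    fuel network --env 1 --upload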
19:44 kdavyd 3rd interface is for physical
19:44 kdavyd sorry, lost my train of thought
19:46 kdavyd MiroslavAnashkin: OK, so despite Fuel having a checkbox to vlan tag the public network, it is not supported and does not work in Fuel 6.0?
19:48 MiroslavAnashkin It works, but you have to set up external routing for the VLAN tagged Public network before deployment. About the 3rd interface - looks like an error, I can't remember a Neutron+VLAN configuration which requires a 3rd NIC
19:50 kdavyd 3rd NIC is probably recommended either for the storage network (performance reasons), or physical separation between public and private.
19:51 kdavyd The external routing bit is probably where my deployment is failing now - I don't have a gateway setup yet, and I can see attempts to ping the gateway in the logs.
19:51 MiroslavAnashkin Private works well inside a VLAN. As for performance - yes, it is recommended to use multiple and even bonded NICs, but it is not mandatory.
19:51 e0ne joined #fuel
19:53 kdavyd ok, thanks. I'll set up a gateway and try again.
19:57 moizarif MiroslavAnashkin: i have 3 nics. 1st for untagged public, 2nd for untagged admin and tagged mgmt and tagged storage networks. 3rd nic is reserved for testing. Is this config correct? I am not using Ceph.
20:02 MiroslavAnashkin Yes, correct
20:07 moizarif Thanks.
20:08 teran joined #fuel
20:14 Miouge MiroslavAnashkin: since you speak about VLANs, I am considering tagging public and storage together on the same interface
20:15 Miouge I guessed that only ceph nodes will have heavy use on the storage network and only controllers will have heavy use on the public network, so there's no problem collocating them?
20:16 Miouge Or is there a use case for a ceph node having the public network?
20:17 MiroslavAnashkin Heavy usage of the public network is not a usual scenario. Management and Storage are the most used.
20:20 Miouge Since management carries nova to ceph traffic and vm to router traffic ?
20:20 MiroslavAnashkin If you expect heavy storage/Ceph traffic - please consider moving the storage traffic path outside OpenVSwitch. It turned out OpenVSwitch does not work well with Ceph traffic on, say, all-SSD Ceph storage environments. Such networking schematics (Linux bonds instead of OVS and the storage data path outside OVS) will be the default in the upcoming Fuel 6.1
20:21 Miouge Well good you mention that, I plan for all SSD cinder
20:22 Miouge with ceph backend for cinder, glance and swift (likely to have its own ceph pool of sata disks)
20:26 MiroslavAnashkin Then please increase the Ceph journal size to 10-15 GB (there is an osd_journal_size => '2048' variable somewhere in the Puppet manifests)
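The corresponding ceph.conf setting takes the value in MB, so 10-15 GB is roughly 10240-15360 (in Fuel the Puppet variable above is what actually drives it; this is just what it renders to):

    [osd]
    osd journal size = 15360    # ~15 GB journal per OSD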
20:30 Work|Seony joined #fuel
20:30 Miouge MiroslavAnashkin: thank you for the tip
20:30 Miouge It looks like so much good stuff coming in 6.1!
20:35 xarses_ joined #fuel
20:38 championofcyrodi we're using vlan tagging only for management and storage... @ 1Gbps, we get linux-reported speeds as high as 80MB/sec when doing unbuffered reads from ceph-backed volumes.
20:38 championofcyrodi and as low as 8MB/sec
20:38 championofcyrodi or worse, depending on what the overall load is
20:39 championofcyrodi and about 7000 MB/sec for buffered reads from "RAM"
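Those numbers look like hdparm-style measurements; a quick way to reproduce them inside a guest against a Ceph-backed volume (the device name is illustrative):

    # -t: timed buffered disk reads (hits the device), -T: timed cached reads (RAM)
    hdparm -tT /dev/vdb
    # or with dd, bypassing the page cache for the 'unbuffered' figure
    dd if=/dev/vdb of=/dev/null bs=1M count=1024 iflag=direct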
20:39 MiroslavAnashkin yes, we are also thinking about implementing Ceph cache tiering and RBD cache out of the box.
20:40 championofcyrodi I have a ZFS array on a solaris 11 machine... ZFS utilizes about 250GB of RAM on that machine just for caching
20:41 championofcyrodi it's mainly for archival purposes since it's such an old machine... but with link aggregation and all that RAM, it is fantastic for ZFS/NFS storage.
20:43 championofcyrodi The ZFS is caching for 24 TB of RAIDZ
20:43 championofcyrodi which is why the RAM cache is so high.
20:47 ebogdanov joined #fuel
20:48 moizarif MiroslavAnashkin: another quick question, in HA with 3 controllers, with usual glance/cinder with swift backend. what do i need to do while deploying controller nodes. Controller + Cinder ?
20:49 Miouge championofcyrodi: out of curiosity, how many disks do you have to push theses numbers ?
20:50 MiroslavAnashkin moizarif: yes.
20:51 moizarif i have selected Cinder LVM  over iSCSI  for volumes. and this gets me to the same error, "Upload cirros "TestVM" image failed.
20:51 moizarif Inspect Astute logs for the details"
20:53 MiroslavAnashkin Image upload goes via Glance - Cinder plays no part in this.
20:54 moizarif yes, i think glance isn't getting installed properly on the controller nodes, which is causing the image upload error
20:56 rmoe joined #fuel
21:01 MiroslavAnashkin Not Glance but the Glance backend
21:01 moizarif i think there is also a bug posted on launchpad related to this issue
21:02 rmoe joined #fuel
21:04 moizarif glance backend means swift right?
21:06 MiroslavAnashkin Ceph or Swift or local volume, depending on this setting: http://docs.mirantis.com/fuel/fuel-6.0/user-guide.html#storage-backend-for-cinder-and-glance
21:07 alecv joined #fuel
21:12 championofcyrodi Miouge: 3 osd devices on 1 host, with 1x256GB SSD, and 2x2TB SATA
21:12 championofcyrodi a second host with 1x256GB SSD for Ceph Journal, and then another 2x2TB SATA
21:12 championofcyrodi and then we had two other older hosts that were running a ceph partition on a RAID 5 array over 4 disks...
21:12 championofcyrodi i know SSD and SATA are the same interface, but by 'sata' i mean spinning disks at 6.0Gbps sata 2
21:13 championofcyrodi The SSDs are SATA II 6.0Gbps as well.
21:14 Miouge connected with 1 Gbps ports, no bonding ?
21:14 championofcyrodi yes
21:14 championofcyrodi we have a dedicated 1Gbps switch just for openstack though
21:14 adanin joined #fuel
21:14 Miouge Since 2 hosts, I assume ceph replica is 2x ?
21:14 championofcyrodi yes
21:14 championofcyrodi but we're reading that we should have 3?
21:15 championofcyrodi so there is no other traffic to interfere with the fabric speed for the switch
21:15 championofcyrodi or... fabric 'capacity' rather
21:16 championofcyrodi basically our building block is a 1u supermicro, w/ 2 SSDs at 256GB, and 2 Spinners at 2TB, with 2 hex core+HT, 96GB RAM.
21:16 championofcyrodi currently we have 2 of those, with 3 high end desktops as controllers.
21:17 championofcyrodi then we added two HP proliants that were basically same ram/cpu, but 5 older disks in a raid-5 config... so we just let fuel figure out what to do with the single logical disk.
21:19 championofcyrodi if there is a script or set of commands i could run to show you output, let me know
21:21 championofcyrodi Miouge: sorry, not two hosts.... those numbers were from 4 physical hosts.  Two supermicros, and two HP proliants.
21:22 Miouge Makes more sense!
21:25 Miouge championofcyrodi: ceph's documentation does not recommend using raid for OSD disks
21:25 wayneeseguin joined #fuel
21:26 championofcyrodi Miouge: good to know.  i think we were hesitant to modify the HP's factory hardware raid config... so we just went with the pxeboot 'plug-n-play' route.
21:27 championofcyrodi i think we had issues originally w/ centos and the integrated raid controller
21:27 championofcyrodi setting up fuel/centos6 to begin with.
21:28 Miouge opposed to the PCI Express raid card?
21:28 e0ne joined #fuel
21:31 championofcyrodi well the servers we bought are very cheap and have integrated RAID controllers on the motherboard
21:32 championofcyrodi our plan was to just mirror the SSDs as the base/system & virtual storage... so if a disk died, it would be transparent to the os
21:32 championofcyrodi and being SSDs, plenty of speed
21:32 kdavyd joined #fuel
21:33 championofcyrodi fyi... we really don't know what we're doing ;)
21:33 championofcyrodi but this is dev space
21:34 championofcyrodi so it doesn't need to be some number of nines of uptime
21:35 youellet joined #fuel
21:38 championofcyrodi in a nutshell... we are constantly changing versions of apache [insert cloudy name] and doing development and testing for various distributed architectures always in flux.   With virtualization getting better, i'm pushing us to start doing virtual private clouds via openstack.
21:39 championofcyrodi another push is to use FAT Clients as workstations, and have users remote NX into a VM. with NoMachine+QXL, i've been able to actually stream youtube+audio from an openstack VM to a diskless fat client without much stutter.
21:39 championofcyrodi we would like to just get rid of desktops altogether in the office space and use our Intel NUCs
21:40 championofcyrodi everything we have tested works acceptably... so now we are trying to build something solid, and not just bandaid everything.
21:41 championofcyrodi still so much to consider about ceph storage and network bandwidth.  openstack is just awesome though because i can give a developer 32GB of ram for an hour.... then release it back into the cluster.
21:42 championofcyrodi if openstack was dinner... fuel seems like it would be the ice cream sundae.
21:43 championofcyrodi so that's why we like using it
21:52 e0ne joined #fuel
21:54 Miouge championofcyrodi: more or less VDI on OpenStack ?
21:59 e0ne joined #fuel
22:01 kdavyd championofcyrodi: famous last words :)
22:07 championofcyrodi Miouge: Yup
22:07 championofcyrodi kdavyd: i know it.
22:08 thumpba joined #fuel
22:08 championofcyrodi Miouge: The QXL Display w/ NoMachine NX client is very smooth.  OpenNX/FreeNX too.
22:10 championofcyrodi mainly the developer's IDE has a ton of file locks, which don't perform well over network disk... so they connect to a VM... (which has a network disk via ceph)
22:10 championofcyrodi well... that's what their chief complaint was about intellij over NFS.
22:11 championofcyrodi it seems to run fine for me... but *shrugs*
22:11 championofcyrodi i use pycharm more than intellij
22:11 championofcyrodi which feels less bloated, although the same platform
22:12 championofcyrodi i don't know why they can't just use 'vi'
22:14 championofcyrodi hmmm... device mapper caching via ssd... https://github.com/opinsys/dmcache-utils
22:17 championofcyrodi i guess LVM just got faster...
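A minimal lvmcache sketch of what such dm-cache tooling wraps (VG/LV names and sizes are made up; needs a reasonably recent LVM2):

    # carve a cache pool out of the SSD and attach it to an existing slow LV
    lvcreate -L 50G  -n cache0     vg0 /dev/sdb    # cache data on the SSD
    lvcreate -L 512M -n cache0meta vg0 /dev/sdb    # cache metadata
    lvconvert --type cache-pool --poolmetadata vg0/cache0meta vg0/cache0
    lvconvert --type cache --cachepool vg0/cache0 vg0/slow_lv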
22:19 championofcyrodi fyi... live migrating instances off the node, removing the ceph osd daemon, and rebalancing seemed to work fine... but i noticed the horizon hypervisor summary metadata didn't get updated.
22:46 e0ne joined #fuel
23:27 jpf joined #fuel
23:28 jpf Greetings, quick question - to change the DNS settings handed down to slave nodes, you have to edit the cobbler docker container correct? So docker shell cobbler, vi dnsmasq.template, then what?
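For what it's worth, the usual continuation on a 5.x/6.x Fuel master is roughly the following (container name and restart mechanism may vary; treat it as a sketch rather than the documented procedure):

    dockerctl shell cobbler
    vi /etc/cobbler/dnsmasq.template   # adjust the DNS options handed to slave nodes
    cobbler sync                       # regenerates the dnsmasq config from the template
    service dnsmasq restart            # or however dnsmasq is supervised inside the container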
23:50 ebogdanov joined #fuel
