
IRC log for #fuel, 2015-06-29


All times shown according to UTC.

Time Nick Message
00:19 xarses joined #fuel
00:30 eliqiao joined #fuel
00:31 eliqiao left #fuel
00:37 eliqiao joined #fuel
00:51 eliqiao1 joined #fuel
00:52 eliqiao1 left #fuel
01:34 xarses joined #fuel
01:34 xarses joined #fuel
01:44 LiJiansheng joined #fuel
01:59 Longgeek joined #fuel
03:07 Longgeek joined #fuel
03:35 LiJiansheng joined #fuel
05:17 Longgeek joined #fuel
05:25 Longgeek joined #fuel
05:30 youellet__ joined #fuel
05:34 Longgeek joined #fuel
05:49 sergmelikyan joined #fuel
06:04 ub joined #fuel
06:52 hyperbaba joined #fuel
06:53 eJunky joined #fuel
06:59 stamak joined #fuel
07:13 Longgeek joined #fuel
07:18 dancn joined #fuel
07:36 samuelBartel joined #fuel
07:37 monester_laptop joined #fuel
07:48 saibarspeis joined #fuel
08:02 HeOS joined #fuel
08:14 sergmelikyan joined #fuel
08:37 sergmelikyan joined #fuel
09:10 HeOS joined #fuel
09:12 Longgeek joined #fuel
09:37 monester_laptop joined #fuel
09:59 sergmelikyan joined #fuel
10:23 hyperbaba joined #fuel
10:27 sergmelikyan joined #fuel
10:42 subscope joined #fuel
11:01 monester_laptop joined #fuel
11:07 azemlyanov joined #fuel
11:20 stamak joined #fuel
12:03 sergmelikyan joined #fuel
12:12 tobiash MiroslavAnashkin: I reinstalled a fresh 6.0 and restored the last backup. After that upgrade to 6.1 was successful
12:24 dkusidlo joined #fuel
12:33 dhblaz joined #fuel
13:02 sergmelikyan joined #fuel
13:24 LanceHaig joined #fuel
13:35 ub Are the Fuel plugins VPNaaS and LBaaS compatible with Fuel 6.1?
13:39 monester_laptop joined #fuel
13:47 dkusidlo joined #fuel
13:53 saibarspeis joined #fuel
13:59 subscope joined #fuel
14:15 v1k0d3n DrSlump: thanks! are you guys planning beta release (bleeding edge) sooner? for some reason, i thought that there would be a build available sometime at the end of june.
14:20 rmoe joined #fuel
14:24 subscope joined #fuel
14:33 claflico joined #fuel
14:47 dkusidlo joined #fuel
15:20 dkusidlo joined #fuel
15:23 e0ne joined #fuel
15:48 e0ne joined #fuel
15:58 sergmelikyan joined #fuel
16:42 samuelBartel joined #fuel
16:51 xarses joined #fuel
17:14 sergmelikyan joined #fuel
17:20 DrSlump v1k0d3n: If you want to be bleeding edge, try the community nightly builds https://ci.fuel-infra.org/
17:22 MiroslavAnashkin tobiash: hammondr: Please do not destroy containers for a while. We finally found the cause of possible data loss on master nodes upgraded from previous versions.
17:22 MiroslavAnashkin Link to the bug: https://bugs.launchpad.net/fuel/+bug/1469399
17:23 v1k0d3n DrSlump: right, wondering when that will have kilo support...like i said, i thought that was around late june, early july?
17:23 MiroslavAnashkin I am currently writing up the steps to fix this manually
17:24 MiroslavAnashkin v1k0d3n: Kilo should appear in 7.0 branch closer to the end of this week.
17:30 v1k0d3n MiroslavAnashkin: awesome!!! thank you! will this be clearly updated on the RDO site? i know some things aren't extremely clear with the updates on nightlies.
17:30 v1k0d3n but that seems like a pretty big one to announce to me. thank you guys for the hard work on this. it can't be easy! all the moving parts...wow.
17:32 dontalton joined #fuel
17:35 Billias I have some questions about the compute nodes planning
17:35 Billias on the storage side.
17:42 MiroslavAnashkin ?
17:49 hammondr MiroslavAnashkin: I'm sorry.  I overwrote my 6.0 configuration with a new 6.1 config, abandoning the upgrade path
18:15 Billias If i use local Ceph RBD on each compute node
18:15 Billias is it still possible to have replication between the nodes for the ephemeral storage?
18:16 tobiash_ joined #fuel
18:17 hammondr is there a way to specify ceph replication rules from the fuel master? or must this be done after deployment somehow?
18:44 xarses hammondr: like the crush map?
18:44 xarses you will have to update the crush map yourself after deployment
18:47 xarses Billias: it's recommended that you do not combine ceph and compute roles. It's possible for one to overload the other. While using ceph (ephemeral or otherwise) there is no way to ensure that a copy of the data is local to the consumer without creating custom RBD maps and a pool for each compute node. In practice it would not be tenable.
18:48 Billias xarses: so it is better to have shared storage, or just local storage without shared storage, and use the typical LVM ephemeral storage
18:48 xarses Billias: by default (crush map) all ceph-osd's will peer with each other and replicate all of the pools in the cluster
18:50 Billias xarses: I only need ephemeral storage, not Object store or anything else.
18:50 xarses Billias: if you expect to have some locality of the data to improve performance or reduce network churn then you likely don't want ceph ephemeral
18:50 Billias so what's the best solution in that case?
18:50 Billias what ephemeral is the best for local storage then?
18:50 xarses you will not get live-migration without shared ephemeral
18:50 Billias I would like to have a shared-nothing (almost) topology
18:51 Billias ok fair
18:51 Billias so if i replace it with a good raid 10 it should be enough then
18:51 xarses by default ephemeral will come out of whatever is mounted at /var/lib/nova
18:52 Billias xarses: nice...
18:52 Billias in case of Ceph shared storage
18:52 Billias does live-migration happen automatically if the node crashes? (is this ability possible?)
18:53 xarses you also get copy-on-write clones with ceph on glance, and for ephemeral/cinder which reduces provisioning time
18:53 xarses no, you can't live migrate dead instances
18:53 Billias i mean
18:53 xarses you could re-provision them, but this is not automatic; you will need an external tool to do so
18:53 Billias ok
18:53 Billias but in case of shared storage
18:53 Billias it does it by itself? or do you reprovision them again?
18:54 xarses they can be cold-migrated, but it still requires the host hypervisor to be running
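The external re-provisioning xarses refers to can be driven with the nova client's evacuate commands. A sketch with hypothetical host names — it assumes admin credentials are sourced and, for disks to survive, that the instances lived on shared storage:

```shell
# Confirm the failed host's nova-compute service is reported down
nova service-list
# Rebuild every instance from the dead host onto another hypervisor,
# reusing the existing disks on shared storage
nova host-evacuate --on-shared-storage node-12.domain.tld
# Or evacuate a single instance to a chosen target host
nova evacuate --on-shared-storage my-vm node-13.domain.tld
```

Tools such as Pacemaker fencing scripts are commonly wrapped around this to make the recovery automatic, since nova itself will not trigger it.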
18:54 Billias lets say you have 6 compute nodes
18:54 Billias with the same hypervisor
18:54 Billias on the local storage, is there any way to have something like copy-on-write clones?
18:55 xarses no, the image has to be copied to the hypervisor, it will be cached by default but the image will still need to be written to the backing store
18:56 xarses you can end up with glance -> nova-compute (cached) -> write to qcow2
18:56 Billias is this bad?
18:56 xarses no, its just slow(er)
18:56 Billias on provisioning
18:57 xarses yes
18:57 Billias and faster on use
18:57 xarses it can be, it would depend on your setup
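The glance → cache → qcow2 chain xarses describes can be reproduced in miniature with qemu-img, outside of nova. A standalone sketch using throwaway files (the real layout lives under /var/lib/nova/instances, with cached base images in the _base directory):

```shell
# Stand-in for the base image nova caches from glance
qemu-img create -f raw base.img 64M
# The instance disk: a copy-on-write qcow2 overlay on the cached base
qemu-img create -f qcow2 -o backing_file=base.img,backing_fmt=raw disk.qcow2
# Shows base.img listed as the backing file of the overlay
qemu-img info disk.qcow2 | grep 'backing file'
```

Only blocks the guest actually writes land in disk.qcow2, which is why the qcow2 path is slower to provision than Ceph's clone-from-snapshot but cheap on space afterwards.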
18:58 Billias xmmm I have to see that
18:58 Billias but if i have to buy 10GB switches for the cinder volumes
18:58 Billias and equipment, for a small installation of 3 compute nodes...
18:58 xarses but as a trade-off you lose redundancy, and recovery time
18:59 Billias in real life you lose that, if your application is not designed to work like that
18:59 xarses the quick provision is a nice-to-have, but unimportant compared to the above
18:59 Billias what are the system requirements on ceph per tb?
18:59 xarses you lose all of the value of using a system like openstack if your application is a pet, and not more like cattle
19:00 xarses if you have pet workloads, I'd keep them on a vCenter DRS/HA cluster with openstack API's on the front
19:01 xarses then put your cattle on KVM hypervisors
19:01 Billias why then pay the vCenter licensing?
19:02 xarses DRS/HA/vMotion for pet instances
19:02 Billias well if I have
19:02 Billias 20 microservices running along VMs
19:02 Billias i can afford to lose 10
19:02 xarses if you have cattle, then don't bother
19:03 Billias lets say I have one house with many pets
19:03 Billias and i can spawn as much as i like
19:04 samuelBartel joined #fuel
19:06 Billias the other small question: on a small environment with 60-70 VMs, is 4 Gbps enough for storage networking?
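The 4 Gbps question goes unanswered in the log; a back-of-the-envelope check, using round numbers and ignoring Ceph replication (which multiplies write traffic by the replica count), looks like this:

```shell
# 4 Gbps of storage bandwidth expressed in MB/s (integer math, round numbers)
total_mb_s=$(( 4000 / 8 ))          # ~500 MB/s aggregate throughput
# Worst-case per-VM share if all 70 VMs hit storage simultaneously
per_vm_mb_s=$(( total_mb_s / 70 ))  # ~7 MB/s each
echo "${total_mb_s} ${per_vm_mb_s}"
```

In practice most VMs are idle most of the time, so whether ~7 MB/s worst-case is acceptable depends entirely on the workload mix.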
19:52 sergmelikyan joined #fuel
20:10 saibarspeis joined #fuel
20:10 HeOS joined #fuel
20:23 monester_laptop joined #fuel
20:37 ClaudeD joined #fuel
20:39 jaycee joined #fuel
20:39 ClaudeD Hi. I'm testing Fuel 6.1 and I have a Ceph monitor down (health HEALTH_WARN 1 mons down). How do I restart that monitor?
20:46 ClaudeD joined #fuel
20:47 ClaudeD lost my connection to irc :(
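ClaudeD's monitor question also goes unanswered. On a Fuel 6.1 deployment with CentOS 6 controllers (sysvinit; Ubuntu controllers would use upstart's `start ceph-mon id=...` instead), restarting a down monitor would look roughly like this — node name is hypothetical, and the commands need a live cluster:

```shell
# Identify which monitor is out of quorum
ceph -s
# On the affected controller, check and restart the mon daemon
ssh node-1 '/etc/init.d/ceph status mon'
ssh node-1 '/etc/init.d/ceph start mon'
# Health should return to HEALTH_OK once the mon rejoins quorum
ceph -s
```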
21:22 Longgeek joined #fuel
21:30 youellet_ joined #fuel
21:52 ClaudeD left #fuel
23:53 Longgeek joined #fuel
