
IRC log for #fuel, 2014-11-03


All times shown according to UTC.

Time Nick Message
02:05 Longgeek joined #fuel
03:07 Longgeek joined #fuel
04:09 Longgeek joined #fuel
04:26 ArminderS joined #fuel
04:29 ArminderS- joined #fuel
05:25 Longgeek joined #fuel
05:28 ArminderS joined #fuel
05:28 Arminder- joined #fuel
05:45 ArminderS joined #fuel
06:04 Longgeek joined #fuel
06:29 e0ne joined #fuel
06:56 ArminderS- joined #fuel
06:57 ArminderS joined #fuel
07:05 dancn joined #fuel
07:29 Alremovi4 joined #fuel
07:40 akupko joined #fuel
07:48 boris-42_ joined #fuel
08:00 robklg joined #fuel
08:16 hyperbaba joined #fuel
08:20 adanin joined #fuel
08:32 stamak joined #fuel
08:42 dklepikov joined #fuel
09:11 bhi joined #fuel
09:22 ddmitriev joined #fuel
09:44 Longgeek joined #fuel
09:51 monester_laptop joined #fuel
10:35 boris-4__ joined #fuel
10:45 akupko joined #fuel
10:47 cvieri joined #fuel
11:01 bhi Hi Everybody
11:02 bhi Is it possible to install the Swift API without data replication?
11:15 bhi What do I need to choose during the installation of Mirantis 5.1 to get Swift?
11:16 saibarspeis joined #fuel
11:22 xarses joined #fuel
11:44 Longgeek_ joined #fuel
12:06 akupko joined #fuel
12:29 sc-rm kaliya: How does one recover in an HA setup with 2 controller nodes when one of the controllers breaks down? Then cinder stops working
12:55 Longgeek joined #fuel
13:00 Longgeek joined #fuel
13:01 Longgeek_ joined #fuel
13:03 Longgee__ joined #fuel
13:11 akupko joined #fuel
13:48 xarses joined #fuel
14:16 Guest__ joined #fuel
14:17 akupko joined #fuel
15:03 xarses joined #fuel
15:09 jobewan joined #fuel
15:34 xarses joined #fuel
15:40 ArminderS joined #fuel
15:44 ArminderS- joined #fuel
16:00 mpetason joined #fuel
16:08 alex_didenko joined #fuel
16:31 xarses joined #fuel
16:32 mattgriffin joined #fuel
17:10 angdraug joined #fuel
17:13 brad[] angdraug: ref your reply a couple of days ago (Ceph and Openstack on the same system) - would you avoid doing so on a machine with 256GB RAM and 32 cores?
17:14 brad[] I assume there will be a point where resource contention will be an issue
17:45 angdraug rule of thumb is 2 cores per osd device
17:47 angdraug and CPU context switching with Firefly and earlier (and MOS 5.1 has Firefly, and so will 6.0) will be pretty heavy on IOPS-heavy workloads
17:47 angdraug at 256GB you won't be very pressed for memory unless you run RAM heavy VMs on it
17:48 angdraug but you will need to adjust your memory overcommit to account for memory used by OSDs
17:49 angdraug ditto for CPU cores
17:57 brad[] nod
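(A back-of-the-envelope sketch of the sizing advice above. The 2-cores-per-OSD rule is quoted from the discussion; the per-OSD RAM figure and the nova option names in the comments are illustrative assumptions, not from the log.)

    # Rough sizing for a combined compute + Ceph OSD node.
    TOTAL_CORES = 32
    TOTAL_RAM_GB = 256
    OSD_COUNT = 4            # spindles used as OSDs on this node

    CORES_PER_OSD = 2        # rule of thumb quoted above
    RAM_PER_OSD_GB = 2       # assumed figure, tune for your workload

    cores_for_vms = TOTAL_CORES - OSD_COUNT * CORES_PER_OSD
    ram_for_vms_gb = TOTAL_RAM_GB - OSD_COUNT * RAM_PER_OSD_GB

    print("cores left for VMs: %d" % cores_for_vms)
    print("RAM left for VMs: %d GB" % ram_for_vms_gb)
    # In nova.conf this usually translates into lowering cpu_allocation_ratio /
    # ram_allocation_ratio or raising reserved_host_memory_mb accordingly.
    print("candidate reserved_host_memory_mb: %d" % (OSD_COUNT * RAM_PER_OSD_GB * 1024))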
18:01 Dr_Drache hmmm
18:01 Dr_Drache good to know
18:07 mattgriffin joined #fuel
18:07 brad[] hmm 4 drives would require 16 of those 32 cores at that rate
18:07 brad[] err 8. Wow.
18:07 Dr_Drache that's rather pricey
18:07 angdraug also, still not a good idea to build a huge failure domain like that
18:07 angdraug how many of these nodes are you going to have?
18:08 brad[] Starting with 3
18:08 angdraug tsk tsk
18:08 brad[] lol
18:08 angdraug at that price, you're way better off with more smaller nodes
18:08 angdraug one of these goes down and your whole cluster goes pear-shaped
18:09 brad[] even with a replication factor of 3? I thought ceph was aware of locality when it arranges PG's
18:09 angdraug with replication factor of 3, losing 1 of 3 nodes means all your ceph pools go degraded straight away
18:10 angdraug you won't lose data, but it will refuse any writes until it's got enough osds
18:10 angdraug err, enough osd nodes
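(An illustrative model of the replica arithmetic above. Whether writes actually block on a real cluster depends on each pool's min_size; the defaults used here are assumptions, check yours with "ceph osd pool get <pool> min_size".)

    def pool_state(total_nodes, nodes_lost, size=3, min_size=2):
        surviving = total_nodes - nodes_lost
        # CRUSH normally places one replica per host, so replicas are capped
        # by the number of surviving OSD nodes.
        replicas = min(size, surviving)
        if replicas < min_size:
            return "writes blocked (below min_size)"
        if replicas < size:
            return "degraded (undersized PGs) but still writable"
        return "healthy"

    print(pool_state(total_nodes=3, nodes_lost=1))               # degraded straight away
    print(pool_state(total_nodes=3, nodes_lost=1, min_size=3))   # writes blocked
    print(pool_state(total_nodes=4, nodes_lost=1))               # can re-replicate across 3 hosts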
18:11 angdraug also, at Ceph Day in San Jose, Fujitsu was presenting performance test results on a similar configuration
18:11 angdraug (very fat OSD nodes)
18:11 brad[] oh?
18:11 adanin joined #fuel
18:11 angdraug they got lots of performance issues, mostly around CPU contention and IOPS, and Sage's feedback was that Ceph works much better with lower density nodes
18:12 brad[] low density being fewer than four drives?
18:12 angdraug it's essentially designed for Ethernet drives, not for multi-osd nodes
18:12 angdraug yes
18:12 brad[] interesting.
18:12 Dr_Drache so, to fix that, throw more hardware at the problem? :P
18:13 brad[] troubling, in that I already got the hardware acquired. D'oh!
18:13 angdraug well, the hardware brad[] got is already overkill )
18:13 angdraug are those your computes or controllers?
18:13 brad[] computes
18:13 Dr_Drache brad[], same here.. my hardware was bought on the idea that controllers and computes can and should be OSD nodes.
18:13 angdraug if that's your computes, I shudder to think what you're putting on controllers
18:15 brad[] for the moment using a single controller - was planning on an 8-core system with 16GB RAM
18:15 angdraug you're supposed to read http://docs.mirantis.com/openstack/fuel/fuel-5.1/planning-guide.html before you've bought hardware, sorry
18:15 brad[] well I thought I had, but it looks like I overlooked a few things :-)
18:15 angdraug http://docs.mirantis.com/openstack/fuel/fuel-5.1/planning-guide.html#nodes-and-roles
18:15 angdraug "When deploying a production-grade OpenStack environment, it is best to spread the roles (and, hence, the workload) over as many servers as possible in order to have a fully redundant, highly-available OpenStack environment and to avoid performance bottlenecks."
18:16 Dr_Drache angdraug, my hardware was approved based on the online app, the 4.0 guide, and Mirantis.
18:16 Dr_Drache now we are on 5.1 and it's all bad news.
18:16 Dr_Drache such is life.
18:17 angdraug yeah, the recommendation to not combine roles in production came after 4.0, based on experience
18:17 angdraug still, I'm not saying your setup is useless
18:18 Dr_Drache no, I know that.
18:18 angdraug just that you've got more risks to deal with than others
18:18 Dr_Drache and it can scale.
18:18 angdraug brad[]: your controller is fine for 3 nodes, but you will grow out of it relatively soon if you plan to scale out
18:18 brad[] the point about failure domains is really good though
18:19 angdraug yeah, that's your biggest concern really
18:19 brad[] a large number of VM's will fall down as soon as one node dies
18:19 angdraug given that many cores and RAM, it can sustain a fair number of spindle based OSDs
18:19 angdraug so unless you plan to stuff it full of SSD you're ok
18:20 Dr_Drache so, replication of 3, we should have at least 4?
18:20 brad[] angdraug: I'd want to deploy more controller nodes if I were to scale out, this is beta stages atm - is there conventional wisdom on how many compute nodes a given controller can handle?
18:21 angdraug doesn't work that way
18:21 angdraug more controller nodes will only scale the API services
18:21 angdraug MySQL/Galera doesn't scale linearly
18:21 angdraug and that's going to be your primary bottleneck
18:21 kupo24z joined #fuel
18:22 Dr_Drache brad[] - thank you sir, for asking the questions I needed to as well.
18:22 angdraug check out Rally
18:22 angdraug it can help you simulate heavy load on your controller, see how many VMs you can sustain
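(For reference, a minimal Rally task along those lines. The scenario name follows Rally's standard NovaServers scenarios; the flavor/image names and the numbers are placeholders to adjust for your environment.)

    # Write out a small Rally task (Rally tasks are JSON/YAML documents) that
    # repeatedly boots and deletes servers to put load on the controller.
    import json

    task = {
        "NovaServers.boot_and_delete_server": [{
            "args": {"flavor": {"name": "m1.tiny"}, "image": {"name": "TestVM"}},
            "runner": {"type": "constant", "times": 100, "concurrency": 10},
            "context": {"users": {"tenants": 2, "users_per_tenant": 2}},
        }]
    }

    with open("boot_and_delete.json", "w") as f:
        json.dump(task, f, indent=2)

    # Then run it with something like: rally task start boot_and_delete.json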
18:23 brad[] hmm.
18:23 brad[] Dr_Drache: np :P
18:23 Dr_Drache wait, so the main controller can handle XX VMs.... what if you need to scale that?
18:23 Dr_Drache can't just add more controllers?
18:25 Dr_Drache Guess that's a huge misunderstanding of how the cluster scales.
18:25 kupo24z I am getting 'Permission denied (publickey)' when trying to connect to my mongo-deployed nodes, is it supposed to have the same private key as the other nodes?
18:32 blahRus joined #fuel
18:32 kupo24z Ah, looks like the shared public key wasn't added to authorized_keys on deploy, weird
18:35 Dr_Drache kupo24z, weird is today's word of the day!
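(If the master's key really is missing, a quick way to put it back: a sketch to run on the affected node after pasting in the Fuel master's public key. The paths are the usual defaults, not taken from the log.)

    # Append a public key to root's authorized_keys if it is not already there.
    import os

    MASTER_PUBKEY = "ssh-rsa AAAA... root@fuel"     # paste the real key here
    AUTH_KEYS = "/root/.ssh/authorized_keys"

    existing = ""
    if os.path.exists(AUTH_KEYS):
        with open(AUTH_KEYS) as f:
            existing = f.read()

    if MASTER_PUBKEY not in existing:
        with open(AUTH_KEYS, "a") as f:
            f.write(MASTER_PUBKEY + "\n")
        os.chmod(AUTH_KEYS, 0o600)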
18:43 xarses joined #fuel
18:45 kupo24z does the connection= string in /etc/ceilometer/ceilometer.conf have the cleartext mongodb password?
18:46 kupo24z or is there a method of connecting to mongo using a pregenerated user?
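(One way to answer that empirically. This sketch assumes the [database] section of ceilometer.conf carries a standard mongodb:// URL with the credentials embedded, and that pymongo is available on the node.)

    # Read ceilometer's database URL and connect to MongoDB with it directly.
    from ConfigParser import RawConfigParser   # Python 2 on a MOS 5.1 node
    import pymongo

    cfg = RawConfigParser()
    cfg.read("/etc/ceilometer/ceilometer.conf")
    url = cfg.get("database", "connection")    # e.g. mongodb://ceilometer:<password>@<host>:27017/ceilometer

    client = pymongo.MongoClient(url)
    db = client.get_default_database()         # the database named in the URL
    print(db.collection_names())               # samples usually live in a "meter" collection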
19:07 adanin joined #fuel
19:33 mattgriffin joined #fuel
19:46 blahRus1 joined #fuel
19:46 kupo24z1 joined #fuel
20:21 angdraug afair mongodb role is deployed ahead of controllers, might have something to do with the keys not being set up correctly
20:21 angdraug could be a bug
20:22 angdraug I don't know much about mongodb/ceilometer setup, sorry
20:22 mpetason joined #fuel
20:27 robklg joined #fuel
20:31 kupo24z1 angdraug: was able to get around the problem by cron'ing a restart of ceilometer-api
20:31 kupo24z1 was going to need to pull data directly out of mongo if that didn't work
20:36 mattgriffin joined #fuel
20:45 blahRus joined #fuel
20:47 kupo24z joined #fuel
21:08 mattgriffin joined #fuel
21:56 e0ne joined #fuel
23:07 xarses joined #fuel
23:15 mpetason joined #fuel
23:16 e0ne joined #fuel
23:49 jetole joined #fuel
23:50 kupo24z left #fuel
23:52 jetole Hey guys. The image cache on the controllers seems to be using up enough disk space to consume / to 100%. I have changed the maximum cache size and set the cache to be cleaned hourly via cron jobs, but I am planning on uploading a couple of images tomorrow that are larger than the available space on 2/3 of the controllers, and according to the OpenStack docs the cache used by a single request can still exceed the configured maximum, so I am wondering how I should address this?
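(A quick pre-flight check along those lines. The cache path is the usual glance default and should be verified against image_cache_dir in your glance-api.conf; the pruner mentioned in the comment is the standard glance-cache-pruner run from cron.)

    # Compare an image's size against the free space on the partition holding
    # glance's image cache before uploading it.
    import os

    IMAGE_CACHE_DIR = "/var/lib/glance/image-cache"   # default; check glance-api.conf

    def free_bytes(path):
        st = os.statvfs(path)
        return st.f_bavail * st.f_frsize

    def fits_in_cache(image_size_bytes, cache_dir=IMAGE_CACHE_DIR):
        # glance-cache-pruner enforces image_cache_max_size only periodically,
        # so a single oversized fetch can still fill the filesystem.
        return image_size_bytes < free_bytes(cache_dir)

    print(fits_in_cache(8 * 1024 ** 3))   # e.g. an 8 GB image

Pointing image_cache_dir at a larger partition is another way to keep / from filling up, assuming the disk layout allows it.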
