IRC log for #fuel, 2014-11-07

All times shown according to UTC.

Time Nick Message
02:06 Longgeek joined #fuel
03:52 Longgeek joined #fuel
04:59 ArminderS joined #fuel
05:28 capricorn_1 joined #fuel
05:35 adanin joined #fuel
05:41 Longgeek joined #fuel
05:58 ArminderS- joined #fuel
06:08 boris-42_ joined #fuel
06:41 stamak joined #fuel
07:00 dklepikov joined #fuel
07:07 kat_pimenova joined #fuel
07:12 ArminderS joined #fuel
07:16 ArminderS joined #fuel
07:21 ArminderS joined #fuel
07:26 ArminderS joined #fuel
07:29 Longgeek joined #fuel
07:30 hyperbaba joined #fuel
07:33 Longgeek joined #fuel
07:38 HeOS joined #fuel
07:40 stamak joined #fuel
07:43 ArminderS joined #fuel
07:44 e0ne joined #fuel
07:45 rmoe joined #fuel
07:52 robklg joined #fuel
07:55 ArminderS joined #fuel
07:56 ArminderS joined #fuel
07:58 ArminderS joined #fuel
07:58 boris-42_ joined #fuel
07:59 ArminderS joined #fuel
08:00 ArminderS joined #fuel
08:01 ArminderS joined #fuel
08:08 azemlyanov joined #fuel
08:09 bogdando joined #fuel
08:13 dancn joined #fuel
08:19 pasquier-s joined #fuel
08:23 ArminderS joined #fuel
08:25 e0ne joined #fuel
08:33 ddmitriev joined #fuel
08:35 Longgeek joined #fuel
08:38 tatyana joined #fuel
08:38 dancn joined #fuel
09:07 fandi joined #fuel
09:17 tatyana joined #fuel
09:17 atze joined #fuel
09:18 Guest37695 hi all
09:18 Guest37695 is there a way to remove a node from the discover list
09:22 akupko joined #fuel
09:24 jaypipes joined #fuel
09:31 Guest37695 if i run fuel node --node-id #{number}  --delete-from-db i get a 500 error
09:32 e0ne joined #fuel
09:40 tatyana joined #fuel
09:40 sc-rm merdoc: When losing a ceph monitoring host, everything crashes and openstack goes into a broken state. Tried to replicate this issue 3 times now, with the same result
09:48 hyperbaba sc-rm: is it the only ceph-mon in the ceph cluster?
09:49 sc-rm hyperbaba: I created a setup with 4 nodes running as Ceph OSD
09:50 hyperbaba sc-rm: You should use odd number of monitors not even.
09:51 hyperbaba sc-rm: with an even number, split-brain scenarios can happen
09:51 sc-rm hyperbaba: Okay, that would have been nice to know before doing the setup
09:51 sc-rm hyperbaba: ;-)
09:51 hyperbaba sc-rm: This logic goes for all the cluster stuff. Even for openstack controllers
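The odd-number advice above comes down to majority arithmetic: a Ceph monitor cluster needs a strict majority (floor(n/2)+1) of monitors alive to keep quorum, so an even count adds hardware without adding failure tolerance. A minimal sketch of that arithmetic (not from the log):

```shell
# Quorum majority for n monitors: floor(n/2) + 1 must stay alive.
# Note 2 monitors tolerate zero failures, same as 1 monitor.
for n in 1 2 3 4 5; do
  echo "$n monitors: majority needed = $(( n / 2 + 1 )), can lose $(( n - (n / 2 + 1) ))"
done
```

This is why 2 controllers (and hence 2 ceph-mons, in Fuel's layout) gave sc-rm a cluster that could not survive losing either one.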
09:52 sc-rm hyperbaba: But right now I have 3 nodes not able to do anything, trying to add a new 4th node
09:53 hyperbaba sc-rm: what the ceph monitor said on remaining ceph-mon instances?
09:53 sc-rm hyperbaba: But it’s noted and I’ll add another node to everything, so I get an odd number
09:53 hyperbaba sc-rm: do you have output from ceph -s ?
10:01 sc-rm hyperbaba: http://paste.openstack.org/show/130401/
10:02 sc-rm hyperbaba: it looks almost the same on the 3 nodes running right now
10:03 e0ne joined #fuel
10:03 hyperbaba sc-rm: looks like you lost the network layer also. Can the nodes ping each other?
10:05 hyperbaba sc-rm: over storage network?
10:10 sc-rm hyperbaba: no problem in ping
10:11 sc-rm hyperbaba: all nodes are connected through the same physical switch, no separation for the purpose of testing
10:14 ddmitriev joined #fuel
10:16 kaliya joined #fuel
10:29 kaliya Guest37695: no. Why do you want to remove it?
10:31 sc-rm hyperbaba: It seems like the ceph-mon is only running on controllers and not on ceph-osd nodes
10:31 hyperbaba sc-rm: yes
10:32 sc-rm hyperbaba: Why is that?
10:33 hyperbaba sc-rm: You don't need a large number of monitors, and their number overlaps with the number of controllers in a deployment. So Mirantis people created a deployment scenario where, if Ceph is used, every controller is a ceph-mon also. How many controllers do you have in the system?
10:34 sc-rm hyperbaba: Ah, then I see my problem - only two nodes :-P
10:35 hyperbaba sc-rm: One controller and one compute/ceph-osd?
10:36 sc-rm hyperbaba: nope. Two nodes running as controllers and ceph-osd. Two other nodes running as ceph-osd.
10:39 akupko joined #fuel
10:40 hyperbaba sc-rm: 2 ceph-mon. Ok, the remaining one can't form the cluster because of missing quorum prerequisites. I think for two ceph-mon there is a special case in configuration. Fuel deployment recipes do not take this into account
10:40 pasquier-s joined #fuel
10:40 hyperbaba sc-rm: you can add an additional ceph-mon on one of the ceph-osd nodes using the ceph-deploy command
10:40 hyperbaba sc-rm: and then I think it is going to work for you
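The ceph-deploy step hyperbaba suggests might look roughly like this. This is a sketch, not from the log: `node-3` is a hypothetical OSD hostname, and it assumes the commands are run from the admin host that holds the cluster's ceph.conf and keys.

```shell
# Sketch (assumptions: ceph-deploy configured on the admin host,
# "node-3" is a hypothetical ceph-osd node to promote to monitor).
# Adding a mon on an OSD node brings the mon count to an odd number.
ceph-deploy mon add node-3

# Then confirm the new monitor joined and quorum is healthy:
ceph quorum_status --format json-pretty
ceph -s
```

With three monitors, losing any single one still leaves a 2-of-3 majority, so the cluster keeps serving I/O.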
10:41 stamak joined #fuel
10:41 hyperbaba sc-rm: with just 2 controllers you are going to run into issues with openstack controllers also, for example pacemaker quorum
10:42 sc-rm hyperbaba: That's what I was thinking too, so I'm going to redeploy with 3 controllers
10:42 sc-rm hyperbaba: then it will be able to survive a crash of one controller and be able to add a new one in that case
10:43 hyperbaba sc-rm: yes
10:49 monester_laptop joined #fuel
10:54 Longgeek joined #fuel
10:58 pasquier-s joined #fuel
11:19 Longgeek joined #fuel
11:22 teran joined #fuel
11:25 teran_ joined #fuel
11:39 ddmitriev joined #fuel
11:41 Longgeek joined #fuel
11:57 pal_bth joined #fuel
12:07 Longgeek joined #fuel
12:09 Longgeek joined #fuel
12:35 fandi joined #fuel
12:58 pal_bth joined #fuel
12:58 boris-42_ joined #fuel
13:23 boris-42_ joined #fuel
13:49 samuelBartel joined #fuel
13:59 nathharp joined #fuel
14:06 jaypipes joined #fuel
14:10 adanin joined #fuel
14:12 nathharp Hi - can anyone point me in the right direction for adding hardware support to the bootstrap image?
14:13 kaliya nathharp: master node or nodes to be provisioned?
14:13 nathharp I’m working with some blade servers for my test environment, and disk is provided via FC (not ideal for this)
14:13 nathharp nodes to be provisioned
14:14 nathharp because there are 4 paths for each LUN, it appears that the machines have more disk than they really do
14:14 nathharp so hoping to add the multipath drivers to the bootstrap image
14:14 kaliya so you basically need to add packages to the bootstrap image?
14:15 kaliya with drivers
14:15 nathharp yep, that should be it (I hope)
14:28 evg nathharp: hi
14:30 nathharp hi
15:05 coryc joined #fuel
15:07 mattgriffin joined #fuel
15:12 boris-42_ joined #fuel
15:13 jobewan joined #fuel
15:33 blahRus joined #fuel
15:47 mpetason joined #fuel
15:59 ArminderS joined #fuel
16:03 mattgriffin joined #fuel
16:23 adanin joined #fuel
16:28 keyz182 joined #fuel
16:32 keyz182 Hi, I'm having some issues deploying the 6.0 tech preview. Is this the right place to ask questions?
17:03 angdraug joined #fuel
17:08 MiroslavAnashkin keyz182: Yes, it is the right place.
17:17 Dr_Drache joined #fuel
17:32 byrdnuge joined #fuel
18:18 tatyana joined #fuel
18:20 teran joined #fuel
18:31 teran joined #fuel
18:45 mattgriffin joined #fuel
19:09 teran joined #fuel
19:23 blahRus joined #fuel
19:36 rmoe joined #fuel
19:47 boris-42_ joined #fuel
19:49 rmoe joined #fuel
19:49 coryc Is there a hardware compatibility list somewhere? I have 5 HP blades (3 controller/2 compute) that I'm trying to set up MOS on and having all sorts of problems. I've tried both Ubuntu & CentOS, and sometimes the install hangs at 57% for hours and sometimes it fails to complete the install. Just did a CentOS install and it passes both the Network & Health Check tests, yet gives me a 404 when I try to browse to the controller IP.
19:54 MiroslavAnashkin http://docs.mirantis.com/fuel/fuel-5.1/planning-guide.html#system-requirements
19:56 MiroslavAnashkin As for Ubuntu + HA - there is a bug in the Xtrabackup driver for Ubuntu. So, MOS 5.1 installation may fail in Ubuntu+HA mode if your controllers have less than 3 GB of RAM. The border is somewhere between 3 GB and 8 GB; it is an unstable issue
19:57 coryc MiroslavAnashkin: thanks but I meet that, all three control nodes have 16GB of ram
19:58 coryc I thought I read somewhere that there was a 5.1.1, is that publicly available?
19:58 MiroslavAnashkin It is a bug with process kill/restart race conditions, so it may fail even if there is more than 8 GB RAM
20:01 MiroslavAnashkin yes, here are the latest 5.1.1 builds: https://fuel-jenkins.mirantis.com/view/ISO/job/publish_fuel_community_iso/ links to download the ISO, IMG and upgrade tarball (torrent) are near every build number
20:02 coryc is the bug fixed in 5.1.1?
20:03 MiroslavAnashkin No, it is a Percona bug. Still waiting for the fix.
20:04 MiroslavAnashkin BTW, https://software.mirantis.com/6.0-openstack-juno-tech-preview/ - the 6.0 tech preview (call it an alpha version) is available as well
20:05 coryc yeah, I saw that.
20:05 coryc I try to use mostly stable when possible
20:06 MiroslavAnashkin Latest 5.1.1 build is the most stable
20:08 coryc ok, going to delete and start over. thanks
20:19 pasquier-s joined #fuel
20:22 HeOS joined #fuel
20:23 blahRus joined #fuel
21:57 mattgriffin joined #fuel
22:12 kupo24z joined #fuel
22:20 rmoe joined #fuel
22:28 rmoe joined #fuel
22:41 rmoe joined #fuel
22:55 rmoe_ joined #fuel
23:03 rmoe joined #fuel
23:23 mattgriffin joined #fuel
23:39 e0ne joined #fuel
23:44 pappy joined #fuel
