
IRC log for #fuel, 2015-07-30


All times shown according to UTC.

Time Nick Message
00:59 tzn joined #fuel
01:20 xarses joined #fuel
01:39 claflico joined #fuel
02:00 tzn joined #fuel
02:39 claflico joined #fuel
02:51 sergmelikyan joined #fuel
02:52 hakimo_ joined #fuel
03:05 jobewan joined #fuel
03:20 kevinbenton joined #fuel
03:31 kybe joined #fuel
03:44 ximepa joined #fuel
04:01 tzn joined #fuel
04:01 kevinbenton joined #fuel
04:26 serg_melikyan joined #fuel
04:43 sergmelikyan joined #fuel
05:55 seens joined #fuel
05:56 seens Hi All
06:00 seens How do I create bonding before deploying an environment via the CLI?
06:03 tzn joined #fuel
06:06 seens How do I create bonding before deploying an environment via the CLI?
06:08 rawat_ joined #fuel
06:21 magicboiz joined #fuel
06:22 magicboiz Hi, I need some help with MOS6.1: I've installed mos6.1 into 8 VMs (running over KVM/Ubuntu): fuel server, 3 controllers, 3 ceph nodes, 1 compute node. Install options: QEMU+GRE. My problem is simple: the instances launched with a public (floating) IP cannot reach the public gateway... they fail to get an IP via DHCP...
06:22 magicboiz Something that I find strange: in the compute node, I see br-tun interface DOWN, and I don't see any gre interface....
06:22 magicboiz but there are gre interfaces defined into the bridge...
06:24 mkwiek joined #fuel
06:32 rawat_ hi all
06:33 rawat_ can anyone tell me how I can create an interface bond using the command line?
07:19 ub joined #fuel
07:20 devvesa joined #fuel
07:27 aliemieshko_ joined #fuel
08:05 monester joined #fuel
08:09 sgolovatiuk rawat_: I replied in #fuel-dev
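[Editor's note: the reply itself never made it into this channel. For reference, a minimal sketch of the usual Fuel 6.x CLI workflow for defining a bond before deployment, run on the Fuel master. The node id (1) is an example, and the exact flag spelling and YAML layout vary by release, so verify against the docs for your version.]
    # download the node's interface configuration as YAML
    fuel node --node-id 1 --network --download
    # edit the downloaded YAML to define the bond (mode, slave NICs,
    # and which networks are assigned to it), then push it back:
    fuel node --node-id 1 --network --upload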
08:17 hyperbaba_ joined #fuel
08:21 aliemieshko_ joined #fuel
08:37 pbeskow joined #fuel
08:39 tzn joined #fuel
08:39 tzn joined #fuel
09:03 e0ne joined #fuel
09:05 ashtokolov joined #fuel
09:06 ashtokol_ joined #fuel
09:22 ashtokolov joined #fuel
09:23 ashtokolov joined #fuel
09:40 gongysh joined #fuel
09:42 ashtokolov joined #fuel
09:57 rawat joined #fuel
10:01 fuel-slackbot joined #fuel
10:11 HeOS joined #fuel
10:20 magicboiz joined #fuel
10:33 devvesa joined #fuel
12:25 teran joined #fuel
12:41 warpc__ joined #fuel
12:47 pbeskow joined #fuel
12:54 e0ne joined #fuel
13:23 rbrooker joined #fuel
13:24 sergmelikyan joined #fuel
13:25 rbrooker I keep running into the same issue, when deploying from a new install of 6.1 post update.
13:26 rbrooker <ip>:8080/targetimages/env_1_ubuntu_1404_amd64.img.gz gives a 404, but when digging, <ip>:8080/targetimages/env_2_ubuntu_1404_amd64.img.gz exists --> it's env_2 that works
13:27 rbrooker but astute keeps pointing to env_1 where do I update, that url?
13:28 rbrooker or better yet that image -- properly -- I'm about to just cp it with the number changed to get it to work, but that feels wrong.
13:31 e0ne joined #fuel
13:40 BobBall joined #fuel
13:41 BobBall How much of a clean slate does "dockerctl destroy all; bootstrap_admin_node.sh" give me?  Seems it should be a quick way to reinstall when dev testing?
13:44 tzn joined #fuel
13:47 rbrooker good question. as an add-on -> does it remove the mirrors?
13:53 jaypipes joined #fuel
14:03 bildz good morning
14:04 bildz I'm designing my setup with the "Nova-network VLAN Manager" and just had a couple questions.  My compute and storage nodes will be on 10GbE and I was curious if the controllers needed to be at 10GbE as well.
14:05 bildz why do the controller nodes need access to the storage network? I keep thinking of things from a VMware standpoint and need to gain a stronger foundation of what is performed by the openstack components
14:05 mwhahaha rbrooker: we build boot images per environment, it should rebuild that image if you redeploy
14:06 mwhahaha rbrooker: how did you get to the point where it was 404ing
14:06 mwhahaha BobBall: it nukes all the docker images and resets them up. so recreates them and reruns puppet on them to bring up the service
14:07 rbrooker mwhahaha: not sure exactly, what caused it, yet,
14:07 BobBall I guess the question might have been better expressed as "Is there anything important that isn't in the docker images"?
14:07 bildz BobBall: are you low on disk space?
14:08 mwhahaha BobBall: the puppet classes used to build the docker images are on the host itself, but as far as the web services and deployment items they are all contained in docker images
14:09 BobBall No - but I'm doing a bunch of dev + test cycles which have needed quite a few reinstalls (e.g. IP address changes) which I think/hope I can safely just destroy + rebuild the images?
14:09 mwhahaha if you want to reset your system without reinstalling the master then yes
14:09 mwhahaha it won't cleanup old logs or anything though
14:09 bildz do you ultimately log into the fuel web interface to provision VMs?
14:09 rbrooker that would mean that the images aren't being generated?
14:09 BobBall Yes bildz and perfect thanks mwhahaha
14:10 bildz BobBall: thanks, trying to design the firewall rules/routing
14:10 mwhahaha i think you can just get away with dockerctl destroy all; dockerctl build all. it won't cleanup the environment images for image based deploys
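[Editor's note: the master-node reset discussed here, as a sketch (Fuel 6.1). Both commands are quoted from the conversation:]
    dockerctl destroy all        # remove all Fuel service containers
    dockerctl build all          # recreate them and re-run puppet inside each
    # or, per BobBall, re-run the full bootstrap instead of just rebuilding:
    # dockerctl destroy all; bootstrap_admin_node.sh
    # note: neither variant cleans up old logs or already-built target images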
14:10 mwhahaha rbrooker: maybe? the image gets generated as part of the initial deployment process. it should check to see if that file exists and rebuild it. is your environment 1 still active?
14:11 rbrooker could it be caused by 2 deployments running concurrently? because I was seeing cross log messages in Astute
14:11 mwhahaha bildz: back to your question about the storage network & controllers, glance runs on the controllers
14:11 mwhahaha rbrooker: shouldn't because it creates an image per environment
14:11 mwhahaha but i wouldn't be shocked if there was a bug
14:13 rbrooker when does the image get created?
14:13 bildz mwhahaha: would I see a performance hit provisioning the controllers on 1GbE, as opposed to the storage/compute nodes at 10GbE?
14:14 rbrooker at deploy? or on stack creation
14:14 bildz I would think that storage/compute would need to be as fast as possible
14:14 mwhahaha rbrooker: deploy
14:15 rbrooker ok I also found a similar situation a while back when I added the install base OS option
14:17 mwhahaha bildz: i'm not completely sure what the best option would be. I think with glance it's really how much of the image service you're going to be using. I think if you're doing cinder on the computes it would be best to have 10g on those.
14:18 mwhahaha rbrooker: it should rebuild it if the file is missing when you go to deploy, are you sure it's not in /var/www/nailgun/ or something?
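[Editor's note: a quick sketch of the check mwhahaha suggests, run on the Fuel master; <fuel-ip> is a placeholder for the master's address, and the image name is the one from rbrooker's 404:]
    ls -l /var/www/nailgun/targetimages/
    curl -I http://<fuel-ip>:8080/targetimages/env_1_ubuntu_1404_amd64.img.gz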
14:19 rbrooker I looked, to see
14:19 rbrooker I'm restarting again, to see what happens
14:20 rbrooker now, I haven't created any centos based ones yet, though there are centos images in that directory
14:21 mwhahaha we ship a centos image
14:21 mwhahaha we don't ship an ubuntu image
14:21 rbrooker it also didn't delete images when the stack was deleted.
14:21 mwhahaha that might be a bug
14:23 rbrooker ok I'll watch for it.
14:24 rbrooker one other question. I'm trying to build a PoC and only have 2 half blades and a couple of old desktop boxes to work with.
14:24 rbrooker the blades have only one external port, and an internal 10g port, but it is open to the intranet (with dhcp)
14:25 rbrooker fuel is complaining, even if I just vlan-tie the 2 10g ports together and ensure all networks are running on the 1g to external,
14:26 rbrooker that it isn't getting the vlan tags from those machines.
14:27 mwhahaha your switch isn't eating the tags is it?
14:27 rbrooker (I know it's not ideal, but I'm teaching the people with the ability to buy me more hardware that cloud isn't just another hyper-v or vmware)
14:27 mwhahaha alternatively use gre
14:28 rbrooker ok the GRE doesn't require tags?
14:28 mwhahaha no because it's tunneling it
14:28 mwhahaha so long as they are on the same l2 segment it should be ok
14:29 rbrooker ok, I'll test that out.
14:32 ximepa left #fuel
14:33 claflico joined #fuel
14:43 bapalm_ joined #fuel
14:48 jobewan joined #fuel
14:51 sergmelikyan joined #fuel
14:57 blahRus joined #fuel
15:02 rbrooker joined #fuel
15:02 angdraug joined #fuel
15:24 n3m8tz joined #fuel
15:38 thenetguy joined #fuel
15:40 thenetguy Hi all, I'm facing some weird issues during installation of Mirantis Fuel 6.1. After finishing the setup, the installer gets stuck after "Loading Docker images." with some messages like "mounted filesystem with ordered data mode.".
15:42 magicboiz Hi, I need some help with MOS6.1: I've installed mos6.1 into 8 VMs (running over KVM/Ubuntu): fuel server, 3 controllers, 3 ceph nodes, 1 compute node. Install options: QEMU+GRE. My problem is simple: the instances launched with a public (floating) IP cannot reach the public gateway... they fail to get an IP via DHCP...
15:42 magicboiz Something that I find strange: in the compute node, I see br-tun interface DOWN, and I don't see any gre interface....
15:42 magicboiz but there are gre interfaces defined into the bridge...
15:42 mwhahaha thenetguy: when does this happen? do you have a screenshot/logs?
15:43 thenetguy mwhahaha: I'm just installing the FUEL, and the messages start after the setup menu (where we define IPs, DNS, etc).
15:43 mwhahaha thenetguy: interesting, i've not seen that before.  It does take some time to bootstrap i think. how long did you give it?
15:44 thenetguy mwhahaha: So, I'm waiting for around 10 min. How long does it usually take?
15:45 mwhahaha I think it depends on hardware and network config, i don't think it pulls stuff down from the internet but i might be wrong.
15:46 thenetguy mwhahaha: One thing that I was curious was how to define HTTP/HTTPS Proxy during the installation?
15:47 thenetguy mwhahaha: The error I'm getting is like this http://postimg.org/image/4pnqnrpbr/.
15:47 mwhahaha thenetguy: https://docs.mirantis.com/fuel/fuel-master/operations.html?highlight=proxy#setting-up-local-mirrors
15:49 mwhahaha thenetguy: i think it might take 10-30 minutes
15:49 thenetguy mwhahaha: I see, but I understand that this is something to be done after the installation process. Right? I was concerned about the installer trying to download stuff from the internet.
15:49 mwhahaha i think it's building the docker images
15:49 mwhahaha thenetguy: i don't think the initial fuel-master install requires internet
15:49 thenetguy mwhahaha: Hum... let's see... I'll wait a little bit... :P
15:50 magicboiz mwhahaha: may I ask you to help with my MOS6.1 lab?
15:51 mwhahaha magicboiz: sure, i'm not sure the answer to your question however
15:51 magicboiz oopss..... :(
15:52 mwhahaha magicboiz: you might want to create a bug https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Test_and_report_bugs and provide logs
15:52 mwhahaha someone else with more knowledge might be able to identify the problem and provide a solution
15:54 magicboiz yeah, that's an option..... I will make some more tests. I've tried with MOS6.0 and I get the same result, so maybe I'm missing something in my lab (which is based on KVM/libvirt)....
15:56 ashtokolov joined #fuel
15:56 mwhahaha if it's happening in multiple versions of MOS, it might be network configuration
15:59 rbrooker joined #fuel
16:00 xarses joined #fuel
16:03 dilyin joined #fuel
16:03 dpyzhov_ joined #fuel
16:04 thumpba joined #fuel
16:08 magicboiz I'm opening a bug in launchpad.....let's see....
16:09 rbrooker joined #fuel
16:12 sergmelikyan joined #fuel
16:16 kdavyd joined #fuel
16:24 monester joined #fuel
16:33 monester joined #fuel
17:00 rward joined #fuel
17:14 thenetguy mwhahaha: It worked! =D
17:20 ub joined #fuel
17:21 e0ne joined #fuel
17:23 e0ne joined #fuel
17:29 n3m8tz joined #fuel
17:34 sergmelikyan joined #fuel
17:40 mattgriffin joined #fuel
17:44 a1a23 joined #fuel
17:44 a1a23 Hi
17:46 a1a23 quick help. "ruby /etc/puppet/modules/osnailyfacter/modular/astute/ceph_ready_check.rb" returns "OSDs in cluster less than pool size: Pool size 2 UP 1 IN 1" in my setup.. can someone here help me understand the error?
17:46 ub joined #fuel
17:46 a1a23 please
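[Editor's note: a1a23 left before getting an answer. The message means the check found fewer up/in OSDs (1) than the configured pool replica size (2). A sketch of commands to confirm this from a controller or ceph node; 'rbd' is just an example pool name:]
    ceph osd stat                 # e.g. "2 osds: 1 up, 1 in"
    ceph osd tree                 # identify which OSD is down or out
    ceph osd pool get rbd size    # the replica count the check compares against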
17:47 kiwnix joined #fuel
18:05 ub joined #fuel
18:11 xarses whelp, can't help if ya don't stick around
18:16 youellet joined #fuel
18:24 Akshik joined #fuel
18:30 Akshik joined #fuel
18:30 blahRus1 joined #fuel
18:30 thumpba_ joined #fuel
18:31 Akshik stuck with deployment error using Fuel 6.1
18:31 Akshik Deployment has failed. Method granular_deploy. Failed to execute hook 'shell' Failed to run command cd / && ruby /etc/puppet/modules/osnailyfacter/modular/astute/upload_cirros.rb
18:34 pbrzozowski_ joined #fuel
18:34 mwhahaha Akshik: i would try running that by hand on the node and see what the error is
18:34 mwhahaha is your public vip available?
18:35 Akshik it is not accessible from outside
18:35 aliemieshko joined #fuel
18:36 Akshik mwhahaha, https://dl.dropboxusercontent.com/u/60991263/fuel-snapshot-2015-07-30_17-57-52.tar.xz
18:37 nurla joined #fuel
18:45 mwhahaha no i mean to your nodes
18:45 mwhahaha i think the upload_cirros task tries to upload to glance via the public vip. if your controller can't ping the public gateway, the public vip is downed
18:45 mwhahaha i'll look at your logs in a minute
18:50 mwhahaha Akshik: if you run that command on your node-1 do you get an error?
18:51 mwhahaha it looks like your vip is up, so i'm wondering if there is an issue with glance
18:51 Akshik mwhahaha, will try and update you
18:53 tatyana joined #fuel
18:56 Akshik mwhahaha, i have a query
18:56 mwhahaha yes?
18:56 Akshik should i be able to reach the management vip from controller?
18:57 Akshik meaning in my case my public vip is 10.15.2.2
18:57 mwhahaha yes
18:57 Akshik and management is 10.15.3.2
18:57 Akshik im able to reach 10.15.2.2 from controllers and not my management ip
18:58 mwhahaha that would be a problem
18:58 mwhahaha you'll need to troubleshoot the network configuration issue as it's probably why the upload cirros task failed
18:59 Akshik ok let me give that a try
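[Editor's note: a sketch of the connectivity checks implied above, run from a controller. The 10.15.x.x addresses are from Akshik's environment; substitute your own vips:]
    crm status | grep vip              # which nodes host vip__public / vip__management
    ping -c3 10.15.2.2                 # public vip
    ping -c3 10.15.3.2                 # management vip
    curl -s http://10.15.3.2:5000/     # keystone through the management vip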
19:00 bapalm_ joined #fuel
19:01 teran joined #fuel
19:11 wwegener joined #fuel
19:11 CTWill /msg NickServ identify Test@123456
19:12 CTWill /msg NickServ identify
19:12 CTWill s
19:12 CTWill great
19:13 CTWill So now that all that is in the channel
19:15 CTWill from a fuel master node can I get a diagnostic snapshot from the CLI instead of the web gui? the gui seems to have stopped working. the dialog states that it is Generating Logs Snapshot
19:15 mwhahaha yes
19:15 mwhahaha it takes a long time
19:15 mwhahaha or it can
19:16 mwhahaha see fuel snapshot --help
19:17 CTWill yes but the gui was started like a few months ago
19:17 CTWill or a few weeks
19:17 CTWill hard to tell sometimes
19:18 mwhahaha does it show up as a task in fuel task'
19:18 mwhahaha er when you run 'fuel task'
19:18 CTWill yes
19:18 CTWill 350 | running | dump           | None    | 0        |
19:19 mwhahaha could try deleteing it
19:19 mwhahaha fuel task delete --task-id 350
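[Editor's note: the CLI snapshot workflow discussed here, gathered into one sketch (Fuel 6.1 master node; task id 350 is from CTWill's listing):]
    fuel snapshot                      # generate a diagnostic snapshot from the CLI; can take a long time
    fuel task                          # a running snapshot shows up as a 'dump' task
    fuel task delete --task-id 350     # cleanup; as seen below, refused while the task is still running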
19:20 monester joined #fuel
19:21 CTWill 400 Client Error: Bad Request (You cannot delete running task manually)
19:21 CTWill booo
19:21 CTWill brb need coffee
19:21 mwhahaha well that's lame
19:21 HeOS joined #fuel
19:25 ub joined #fuel
19:40 Akshik mwhahaha, how do i restart my management vip? crm resource restart vip__management
19:40 mwhahaha yea that should do it
19:42 Akshik i tried and no luck
19:42 Akshik i can reach my public vip
19:42 Akshik and i can reach my management ip, but not vip
19:42 Akshik is there a way to fix that
19:43 mwhahaha so you may want to try getting into the vrouter (ip netns exec vrouter bash) where it's currently located and see if you can ping/tcpdump
19:45 mwhahaha also check the haproxy netns to make sure the hapr-m is up
19:45 Akshik or rather
19:46 Akshik telnet 10.15.2.2 5000 works and telnet 10.15.3.2 5000 does not
19:46 Akshik im able to ping my management vip
19:46 Akshik ping 10.15.3.2
19:46 Akshik PING 10.15.3.2 (10.15.3.2) 56(84) bytes of data.
19:46 Akshik 64 bytes from 10.15.3.2: icmp_seq=1 ttl=64 time=0.436 ms
19:48 mwhahaha is haproxy running?
19:48 mwhahaha if you do haproxy-status.sh is everything up?
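[Editor's note: a sketch of the haproxy and vip checks being suggested, run on a Fuel 6.1 controller; haproxy-status.sh ships with MOS, the rest is standard pacemaker/iproute2:]
    haproxy-status.sh | grep -i down    # any backends marked DOWN?
    crm status                          # pacemaker view of p_haproxy clones and vip__* resources
    ip netns list                       # the 'haproxy' and 'vrouter' namespaces should exist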
19:49 magicboiz joined #fuel
19:50 Akshik http://termbin.com/9193
19:50 Akshik hapr-m is not avl
19:51 mwhahaha which node is it up on?
19:51 mwhahaha the crm status should show where it's running
19:52 Akshik Clone Set: clone_p_haproxy [p_haproxy]
19:52 Akshik Started: [ node-1.ukdev.tld node-3.ukdev.tld node-4.ukdev.tld ]
19:52 mwhahaha no, where are the vip__public and vip__management running?
19:54 Akshik http://termbin.com/66bc
19:55 mwhahaha that's interesting
19:55 mwhahaha it should show what node it's running on
19:55 Akshik :)
19:55 mwhahaha well that definitely would be a problem
19:56 xarses Akshik: does the gateway IP of the public network in the settings page correspond to an address that will respond to ICMP packets?
19:56 mwhahaha but his issue is with the management vip because the public vip is working
19:57 xarses oh
19:57 mwhahaha Akshik: you could try restarting p_vrouter and p_haproxy
19:58 Akshik sure
19:59 Akshik i did, but same behaviour
20:06 mwhahaha well that's the problem, but i'm not sure why
20:07 mwhahaha if you check all 3 controllers, none of them have the vip ips in their haproxy netns?
20:11 e0ne joined #fuel
20:11 Akshik ive checked it on one controller
20:11 Akshik let me check others
20:14 Akshik left #fuel
20:14 Akshik joined #fuel
20:15 Akshik http://termbin.com/rm51
20:15 Akshik node-1 has hapr-m
20:17 mwhahaha ok so it must be the public interface that is down
20:17 mwhahaha so that goes back to xarses's question, is the gateway for the public network pingable?
20:17 Akshik gateway of the public network is pingable
20:18 Akshik but does fuel expect that gateway to respond to the management ip?
20:18 mwhahaha i'm not sure
20:18 mwhahaha but probably
20:19 Akshik then thats quite a thing
20:19 magicboiz joined #fuel
20:26 CTWill joined #fuel
20:26 CTWill boo guest wireless, it dropped my irc connections
20:36 xarses Akshik: I'm slightly confused with what the problem is
20:37 xarses you cant reach the management vip?
20:37 mwhahaha the issues started with an upload_cirros failure
20:37 mwhahaha does that use management or public
20:37 xarses which version of fuel?
20:37 Akshik xarses, im able to ping the management vip, i added a static route for the management network
20:37 mwhahaha 6.1
20:37 Akshik fuel 6.1
20:40 thenetguy joined #fuel
20:40 xarses whatever `hiera internal_address` corresponds to
20:40 xarses to contact keystone
20:40 xarses then stays on internalurl endpoints
20:41 xarses https://github.com/stackforge/fuel-library/blob/stable/6.1/deployment/puppet/osnailyfacter/modular/astute/upload_cirros.rb#L9-15
20:42 xarses try exporting those ENV vars into the shell and then do glance --debug --verbose image-list
20:43 xarses Akshik: ^
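[Editor's note: a sketch of xarses' suggestion. On Fuel controllers, /root/openrc typically exports the equivalent OS_* variables (an assumption here; otherwise export the values from the script's L9-15 by hand):]
    source /root/openrc                  # or export the ENV vars from upload_cirros.rb
    keystone --debug user-list           # confirm the keystone endpoint answers at all
    glance --debug --verbose image-list  # then exercise glance through the same endpoints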
20:43 thenetguy Hi all, is there any way to use another interface as Admin PXE beside the Eth0?
20:43 Akshik xarses, let me try and share the results
20:43 thenetguy My FUEL 6.1 discovered my nodes and put the Admin PXE as the Eth0, but the interface used for PXE was Eth4
20:43 thenetguy It sounds like  a bug to me...
20:44 xarses thenetguy: it will put the Admin PXE network role on the first interface that 1) has an address on the same network as fuel's pxe network, or 2) if 1 isn't matched, the interface which currently has the default route
20:45 Akshik thenetguy, i even faced a similar issue, try just to remove the node and add it back and it would be fixed, it worked for me, give it a try
20:45 xarses if only 'eth4' has the PXE attached, then the kernel's udev sorted them in an order that you didn't expect
20:45 Akshik try to remove the node and add again
20:46 Akshik xarses, it's blank, I'm not getting any response
20:46 xarses Akshik: did keystone respond?
20:46 Akshik for keystone
20:46 Akshik keystone --debug user-list
20:46 Akshik DEBUG:keystoneclient.auth.identity.v2:Making authentication request to http://10.15.3.2:5000/v2.0/tokens
20:46 Akshik INFO:urllib3.connectionpool:Starting new HTTP connection (1): 10.15.3.2
20:47 Akshik and no response from there on
20:47 xarses 3.2 is the local address right?
20:47 xarses or is that supposed to be the vip
20:47 Akshik management vip
20:48 xarses and this is on the node with the vip, or another node?
20:48 Akshik 3.7 is the local ip
20:48 Akshik this is on the other node
20:48 Akshik let me try from the same node which has vip
20:49 Akshik the result is the same no change
20:54 thenetguy xarses: thanks for the info, but this is weird because the eth0 is not even configured on the network side...
20:58 xarses thenetguy: check that they were not re-ordered by udev, you could compare the mac on the network, or use network checker to verify that it's passing traffic correctly
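[Editor's note: a sketch of that udev-reordering check, from a shell on the discovered node; the rules-file path assumes the CentOS-based bootstrap image and may not exist on other images:]
    ip link show                                    # compare MAC addresses against the switch/CAM table
    cat /etc/udev/rules.d/70-persistent-net.rules   # how interface names were pinned, if the file exists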
20:59 xarses Akshik: can you share the `ip -4 a` and `ip r` for both inside and outside the namespace
21:00 Akshik http://termbin.com/of3w
21:00 Akshik , http://termbin.com/xgb0
21:02 Akshik http://termbin.com/vees
21:02 Akshik http://termbin.com/37ps
21:02 [HeOS] joined #fuel
21:04 thenetguy xarses: I tried rebooting the nodes with the cables disconnected from the hosts, and now the discovery went correctly...
21:05 thenetguy xarses: the interfaces Eth4 and Eth1 are in different VLAN segments at the switch level (ports in access mode in different VLANs). I really think this behavior sounds like a bug during discovery. I will try to reproduce it...
21:05 swann_ joined #fuel
21:05 pbrzozowski joined #fuel
21:06 xarses thenetguy: like I said, it's not uncommon for the kernel to re-order them. It's odd that the same kernel brought them up in a different order
21:06 xarses Akshik: can you ping from 240.0.0.1 to 240.0.0.2 on the node with the vip?
21:08 aarefiev joined #fuel
21:09 xarses also can you ping out of the namespace to another address (vs pinging in)?
21:10 magicboiz Hi, i've just deployed MOS6.1 in lab env (kvm host running several vm...), and the external network created has provider:network_type=local. Is it right? Shouldn't it be gre/vxlan (or even flat)?
21:10 magicboiz though this was marked as a bug (https://bugs.launchpad.net/mos/+bug/1352203)....
21:13 Akshik xarses, yes I'm able to ping both ips either way
21:14 xarses magicboiz: there is a reason it was set to local, and it should work correctly that way. I just can't find the comment for why
21:15 xarses Akshik: are these vmware VM's?
21:15 magicboiz well....my instances cannot reach my physical internet gateway....
21:15 xarses magicboiz: did you provision them on net04_ext or net04?
21:16 magicboiz xarses: on net04_ext
21:16 xarses use net04
21:16 xarses net04_ext is really only there to host the floating ip range
21:16 xarses dhcp isn't enabled on it so they won't receive an address
21:16 danwest joined #fuel
21:16 magicboiz xarses: ok let me try.....
21:16 xarses and by default the compute nodes are not even connected to that network
21:17 Akshik no
21:17 Akshik dell blade servers
21:17 xarses could they have an ACL blocking forged transmits or promiscuous mode?
21:18 Akshik not sure
21:20 ddmitriev1 joined #fuel
21:20 magicboiz xarses: this works, but the instance gets an IP address from net 192.168.111.0/24 (MOS default setting), and after associating a floating IP, it gets internet access. But the instance eth0 interface, doesn't have the floating IP address configured.....
21:20 magicboiz xarses: thanks :)
21:21 magicboiz xarses: I find this procedure a little bit "dirty"...
21:21 xarses magicboiz: attaching directly on the external network removes the need for the overlay provider
21:22 xarses in which case you likely may want to use provider networks
21:22 xarses but then you lose per-tenant isolation
21:22 xarses specifically they can't have overlapping network addresses
21:22 magicboiz xarses: isn't possible to mix both options?
21:23 CTWill joined #fuel
21:24 xarses as I understand, yes you could have both set up, but 1) fuel won't set up the provider configuration or the multiple providers, and 2) you then have to have the public address assigned to every compute node (it's off by default and is a setting in the settings page)
21:26 xarses Akshik: this is at the limit of what I can think of to troubleshoot. If you can be on earlier in the day we have some guys in eastern europe that might have a better idea
21:26 Akshik :)
21:27 Akshik thanks xarses, will try to catch up early t
21:27 Akshik thanks a ton
21:27 magicboiz xarses: is there any option on fuel to define step 1? step 2 is clear to me...
21:28 xarses you should ping aglarendil, alex_didenko or xenolog, they authored the netns haproxy stuff
21:28 magicboiz xarses: ok, thanks mate :)
21:28 xarses magicboiz: that was for akislitsky
21:28 xarses erm Akshik even
21:29 xarses magicboiz: no, fuel won't do that out of the box. It might be possible to force it to do so in the puppet manifests, but I've not tried to switch it out.
21:29 xarses i should say s/might be/should be/
21:30 Akshik xarses, thanks, will reachout to them
21:31 xarses magicboiz: you can see the network config entry into the puppet manifests at https://github.com/stackforge/fuel-library/tree/stable/6.1/deployment/puppet/osnailyfacter/modular/openstack-network
21:31 xarses the modules are in /etc/puppet/modules on the fuel-master node
21:35 ub2 joined #fuel
21:37 magicboiz xarses: thanks for your help!! let me read those modules.... :)
21:52 angdraug joined #fuel
21:57 danwest joined #fuel
22:06 thenetguy joined #fuel
22:06 thenetguy Do I need to have access to the Internet from my nodes to use Mirantis?
22:06 thenetguy I'm trying to install the nodes using Fuel but my nodes do not have access to the internet.
22:16 xarses joined #fuel
22:23 thenetguy Anyone?
22:36 CTWill as long as you've got your fuel master server installed you do not need internet access to deploy an openstack instance
22:36 CTWill you do need to have the minimum networks set up
23:06 thenetguy joined #fuel
23:07 thenetguy Hi all, I'm trying to troubleshoot my problem here but I'm really a newbie with Mirantis...
23:08 thenetguy My deployment fails complaining about "Mcollective problem with nodes", but I can reach my nodes without problems using the admin/PXE network.
23:08 thenetguy http://paste.openstack.org/show/406514/
23:09 thenetguy Any help is really appreciated...
23:18 thenetguy Anyone?
23:33 mquin joined #fuel
