
IRC log for #fuel, 2016-06-30


All times shown according to UTC.

Time Nick Message
00:05 Zer0Byte__ joined #fuel
00:56 Sesso_ joined #fuel
01:01 fatdragon joined #fuel
01:07 Zer0Byte__ hi
01:07 Zer0Byte__ someone here
01:56 fatdragon joined #fuel
01:59 code-R joined #fuel
02:04 kur1j joined #fuel
02:34 xarses joined #fuel
02:46 krypto joined #fuel
02:58 Abhishek joined #fuel
03:16 Abhishek joined #fuel
03:59 code-R joined #fuel
04:30 javeriak joined #fuel
05:20 krypto joined #fuel
05:32 code-R joined #fuel
05:39 code-R_ joined #fuel
05:43 Abhishek joined #fuel
05:53 code-R joined #fuel
05:56 code-R_ joined #fuel
06:11 javeriak joined #fuel
06:15 ydderd joined #fuel
06:15 javeriak_ joined #fuel
06:35 tosc_fiberdata joined #fuel
06:38 Zer0Byte__ joined #fuel
06:55 tatyana joined #fuel
07:02 vkulanov joined #fuel
07:22 DavidRama joined #fuel
07:23 fatdragon joined #fuel
07:24 DavidRama1 joined #fuel
07:27 javeriak joined #fuel
07:46 neilus joined #fuel
07:48 DavidRama joined #fuel
07:50 DavidRama joined #fuel
07:52 code-R joined #fuel
07:57 noshankus joined #fuel
07:59 neilus joined #fuel
08:06 tosc_fiberdata a search function for the logs tab would be nice, is that in progress for the next release?
08:13 Egyptian[Home] joined #fuel
08:22 Egyptian[Home] joined #fuel
08:23 fatdragon joined #fuel
08:42 aglarendil tosc_fiberdata: what do you mean by that? btw, I think we removed the log tab in favour of the Fuel LMA plugins which provide elasticsearch, kibana and logstash
08:43 aglarendil tosc_fiberdata: http://fuel-plugin-lma-collector.readthedocs.io/en/stable/ you might want to check this out
08:43 tosc_fiberdata aglarendil, ah, i meant during the installation process
08:44 tosc_fiberdata during deployment
08:44 aglarendil tosc_fiberdata: what do you mean by that? you want to know what has failed and why? Like, which deployment task failed?
08:44 aglarendil we have this feature in 9.0. it is called deployment tasks history
08:44 tosc_fiberdata ah ok
08:44 aglarendil you can check which node/task failed and then look into particular logs
08:45 aglarendil something like this:
08:45 aglarendil fuel2 task list
08:46 tosc_fiberdata in the CLI you mean?
08:46 aglarendil this would list 'Nailgun Super-tasks' (we are going to rename them to transactions, btw, in the future). then just use `fuel2 task history show <deployment_supertask_id>` to see task statuses and filter by --status <error|running|pending>
08:46 aglarendil and so on
08:46 aglarendil or by node with --node-id
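
A minimal sketch of the task-history workflow aglarendil describes, using only the commands and flags mentioned above (the super-task id 42 and node id 3 are hypothetical placeholders):

    # list Nailgun super-tasks (transactions) for recent deployments
    fuel2 task list

    # show per-task statuses for one deployment super-task, keeping failures only
    fuel2 task history show 42 --status error

    # or narrow the history to a single node
    fuel2 task history show 42 --node-id 3
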
08:46 tosc_fiberdata ah okey i get it
08:46 aglarendil the 9.0.1 Fuel release is approaching, so we are going to update the docs
08:46 tosc_fiberdata i meant in the webui
08:46 tosc_fiberdata but
08:46 tosc_fiberdata but i saw this: Logs This tab is deprecated and will be removed in the upcoming release.
08:46 aglarendil webui fancy implementation is lagging a little bit, but we have plans to implement it
08:46 tosc_fiberdata ah
08:47 aglarendil so you will have 2 options: for operations you use LMA toolchain
08:47 aglarendil and for deployment actions you will have Fuel API
08:47 aglarendil CLI and UI
08:47 aglarendil Zer0Byte__: wassup ?)
08:47 Zer0Byte__ hey
08:48 Zer0Byte__ does someone know why fuel 9.0 on the controller is eating memory?
08:48 Zer0Byte__ and getting errors in the logs
08:48 aglarendil we have atop running on the nodes
08:48 Zer0Byte__ for a missing library?
08:48 aglarendil what is your RAM amount?
08:48 Zer0Byte__ 16 gb
08:48 aglarendil and what do you mean by 'missing library'?
08:48 aglarendil is it default configuration? no plugins so far?
08:49 Zer0Byte__ no
08:49 Zer0Byte__ im just modifying cinder
08:49 yassine joined #fuel
08:49 Zer0Byte__ to run on another backend
08:49 aglarendil hm, is it cinder that consumes so much memory?
08:50 Zer0Byte__ node-3 liberasurecode[26402]: liberasurecode_backend_open: dynamic linking error libshss.so.1: cannot open shared object file: No such file or directory
08:50 smachara joined #fuel
08:50 Zer0Byte__ KiB Mem:  16156700 total, 15590692 used,   566008 free,   306128 buffers
08:50 aglarendil I guess it is cinder that is eating the memory, not Fuel. maybe this is because of endless restarts of the process
08:51 Zer0Byte__ liberasurecode is cinder?
08:51 aglarendil so, do you have stats which process actually eats the memory? is it cinder?
08:52 tosc_fiberdata Zer0Byte__, do you have an ELK stack implemented to monitor the processes? Then you can easily see which process is eating memory and see trends
08:52 aglarendil well, we have atop on the nodes btw, so you can do this without LMA or elkstack
08:53 aglarendil Zer0Byte__: I guess here is a similar issue with erasurecode https://bitbucket.org/tsg-/liberasurecode/issues/28/libshssso-no-such-file-or-director
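
A quick way to confirm that the shared objects liberasurecode complains about are actually absent, assuming a standard Linux dynamic-linker setup:

    # is either optional erasure-code backend registered with the linker?
    ldconfig -p | grep -E 'libshss|libisal'

    # fall back to a filesystem search; no output means not installed
    find / -name 'libshss.so*' -o -name 'libisal.so*' 2>/dev/null
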
08:53 Zer0Byte__ it's mysql
08:53 smachara joined #fuel
08:53 Zer0Byte__ with a lot of processes open
08:54 aglarendil well, mysql has performance schema turned on, but it won't eat all your memory. how many pages do you have in swap, btw?
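
A hedged sketch of the memory checks being suggested here, using standard Linux tools plus the atop history aglarendil mentions (the /var/log/atop path and file naming are atop's defaults and may differ on Fuel nodes):

    # overall RAM and swap usage
    free -m

    # top resident-memory consumers right now
    ps aux --sort=-rss | head -n 10

    # replay today's atop history; press 'm' inside atop to sort by memory
    atop -r /var/log/atop/atop_20160630
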
08:54 aglarendil and it depends on your cluster topology - there could be a lot of connections open obviously, if you have many DB consumers
08:54 Zer0Byte__ no
08:54 Zer0Byte__ it's just a test environment
08:54 Zer0Byte__ with 1 controller
08:55 Zer0Byte__ and 1 compute node
08:55 Zer0Byte__ and the controller is very slow
08:55 Zer0Byte__ cpu is always at 30-40%
08:56 Zer0Byte__ by python
08:56 Zer0Byte__ and rabbitmq
08:56 Zer0Byte__ with beam.smp
09:00 bapalm joined #fuel
09:00 aglarendil is the controller virtual or real hardware?
09:02 Zer0Byt__ joined #fuel
09:03 Zer0Byt__ sorry, my connection dropped
09:03 zephcom joined #fuel
09:04 abramley_ joined #fuel
09:04 tosc_fiberdata aglarendil, if the deployment is stuck at any step
09:05 tosc_fiberdata how am i supposed to be able to see what is happening?
09:05 aglarendil you can see that tasks are running
09:06 aglarendil for too long
09:06 tosc_fiberdata okey, so we got this error and it was timing out after 1800 seconds then. i think we are stuck on the same thing again
09:06 tosc_fiberdata something with the rabbitMQ deployment
09:06 Zer0Byt__ how can i stop swift?
09:07 tosc_fiberdata do openstack-status
09:07 tosc_fiberdata check the name of the service
09:07 tosc_fiberdata you can just turn that service off
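
A sketch of that procedure on an Ubuntu-based controller; the swift-proxy job name here is an assumption, so take the real name from the openstack-status output:

    # list installed openstack services and their states
    openstack-status

    # stop the offending service, e.g. the swift proxy
    service swift-proxy stop

    # confirm it stayed down
    service swift-proxy status
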
09:08 dburmistrov_ joined #fuel
09:11 Zer0Byt__ it's on fuel 9.0
09:12 ekosareva joined #fuel
09:12 bgaifullin joined #fuel
09:17 javeriak joined #fuel
09:21 byrdnuge joined #fuel
09:33 Zer0Byt__ someone here?
09:39 froots joined #fuel
09:44 Egyptian[Home] joined #fuel
09:47 tosc_fiberdata is it normal that the fuel deployment is stuck for 1h?
09:48 tosc_fiberdata nvm
09:50 Egyptian[Home] how do i backup the fuel master?
09:50 Egyptian[Home] fuel9 even
09:51 tosc_fiberdata run a full backup on the server. it's a centos 7?
09:51 tosc_fiberdata or do you mean database stuff?
10:05 Abhishek hi guys
10:05 Abhishek fuel deploy is failing after 100% due to ntp failures
10:05 Abhishek unable to connect to ntp servers
10:06 Abhishek however.. pings are working for these ntp servers
10:06 Abhishek on fuel master node
10:08 ikalnitsky Abhishek: did you try pings from controllers? if i'm not mistaken, our controllers point to external ntp servers and other slaves - to controllers.
10:11 Abhishek yes.. I tried.. external ntp servers are pingable from all the slaves including the controller
10:11 ikalnitsky aglarendil: can you help here? ^^
10:24 aglarendil Abhishek: are you sure you can access the ntp port from these servers?
10:25 fatdragon joined #fuel
10:25 aglarendil do you have ntpdate -v output?
10:26 Abhishek I can ping ntp servers.. but if I am doing "ntpdate -u 0.ubuntu.pool.ntp.org" then getting this >>
10:26 Abhishek no server suitable for synchronization found
10:27 Abhishek yes.. getting this on "ntpdate -v"
10:27 Abhishek no servers can be used, exiting
10:27 aglarendil try ntpdate -u -d
10:27 aglarendil it may be due to the fact that the server is not actually suitable for sync
10:28 aglarendil you might want to try another ntp server
10:28 aglarendil debian for example
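
A minimal sketch of the suggested debugging, assuming the classic ntpdate client (-u queries from an unprivileged source port, which helps behind firewalls that block port 123; -d prints the full packet exchange without setting the clock):

    # verbose dry run against the failing pool
    ntpdate -d -u 0.ubuntu.pool.ntp.org

    # try the debian pool as an alternative
    ntpdate -d -u 0.debian.pool.ntp.org
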
10:28 Abhishek okk.. I'll try :)
10:33 DavidRama left #fuel
10:33 vvalyavskiy joined #fuel
10:35 bgaifullin joined #fuel
10:40 javeriak joined #fuel
10:45 neilus1 joined #fuel
10:50 asvechnikov joined #fuel
10:52 neilus joined #fuel
10:55 tosc_fiberdata joined #fuel
11:12 aderyugin joined #fuel
11:13 javeriak joined #fuel
11:18 smachara joined #fuel
11:23 smachara I'm having problems configuring the network settings in Fuel 8.0. I get the following error: verification failed - Expected VLAN (not received) over the interfaces related to the Storage network. My "storage network" is over an InfiniBand switch, a Mellanox IS5025, which is unmanaged, meaning it is plug and play. So I cannot configure PK (VLANs). There is an incompatibility problem between VLAN and PK
11:23 ekosareva joined #fuel
11:36 Aurelgadjo smachara: is the hardware recognised when booting on the bootstrap OS?
11:37 smachara Yes
11:39 Captain_Murdoch joined #fuel
11:39 smachara I have created all my nodes and configured all network interfaces to correspond to my topology.
12:06 ekosareva joined #fuel
12:06 bgaifullin joined #fuel
12:29 smachara My topology <http://lists.openstack.org/pipermail/openstack-dev/attachments/20160630/ff5904b6/attachment.svg>
12:29 smachara The error <http://lists.openstack.org/pipermail/openstack-dev/attachments/20160630/ff5904b6/attachment.png>
12:30 smachara My configuration: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20160630/ff5904b6/attachment-0001.png>
12:34 fritchie is it possible to add nic bonding to an environment after it has been deployed?
13:05 neilus joined #fuel
13:06 code-R_ joined #fuel
13:09 code-R_ joined #fuel
13:14 akscram bookwar_: What do you think about bumping the devops version in CI to enable the next fix: https://review.openstack.org/#/c/331126/
13:27 fatdragon joined #fuel
13:30 neilus1 joined #fuel
13:32 neilus1 joined #fuel
13:35 neilus joined #fuel
13:36 ekosareva joined #fuel
13:36 javeriak joined #fuel
13:42 code-R joined #fuel
13:52 hkominos joined #fuel
14:08 AlexAvadanii https://bugs.launchpad.net/fuel/+bug/1597775 I would love to get some input on this one
14:13 tosc_fiberdata mwhahaha, ping ping
14:14 mwhahaha hi
14:18 tosc_fiberdata mwhahaha, yo, did you have time to take a look if it was possible to port that NFS plugin?
14:19 mwhahaha no i just started, gimme a few hours and i'll see if i can get something together
14:19 tosc_fiberdata alright!
14:20 tosc_fiberdata ;)
14:20 mwhahaha looks like by default we only support lvm, ceph and block devices for cinder
14:21 mwhahaha depending on how you configured stuff
14:23 tatyana joined #fuel
14:27 Sketch joined #fuel
14:28 fatdragon joined #fuel
14:29 Sketch does my fuel node need to have all of my defined networks set up on it, or only the admin network?
14:29 javeriak joined #fuel
14:30 mwhahaha Sketch: what do you mean?
14:30 Sketch when i initially set up the node, do i need to set up interfaces for public/management/etc networks, or do i just need the admin network for PXE/installation?
14:31 mwhahaha so to initially get them loaded you just need admin, to actually run deploy you need to set them all up
14:31 mwhahaha you can run network validation while they sit in the bootstrap mode to verify your switch/port/nic configs
14:32 Sketch ok.  i can get bootstrap up on my nodes, but they error out of provisioning, so maybe that's the problem.
14:32 vvalyavskiy AlexAvadanii: Hi, I triaged your bug, so the network team should take a look into it soon
14:32 mwhahaha if you run network validation it should tell you before deploying
14:33 AlexAvadanii vvalyavskiy: thank you very much!
14:35 Sketch fuel says "At least two online nodes are required to verify environment network configuration" ... as far as i can tell, they're not online unless they're actually provisioned?
14:35 Abhishek joined #fuel
14:36 mwhahaha bootstrapped is considered online
14:36 mwhahaha so as long as they are registered as 'online' in fuel
14:36 mwhahaha they don't have to be provisioned
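
A sketch of how to check that state from the Fuel master, assuming the stock CLI clients (command spelling varies between client generations):

    # old-style client: the 'online' column shows bootstrapped nodes
    fuel node list

    # fuel2 equivalent on 9.0
    fuel2 node list
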
14:37 xarses joined #fuel
14:40 fritchie joined #fuel
14:46 charz joined #fuel
14:58 Sketch they have pxe booted bootstrap, but i can't see them in the web UI unless i click "Add Nodes", where they are listed as "discovered"
14:58 Sketch and "verify networks" is still greyed out
14:59 Sketch aha
14:59 Sketch i added roles to them, now they show up
14:59 Sketch (even though i didn't provision yet)
15:01 Aurelgadjo verify networks is used to verify that the vlans you set on the interfaces (after adding them to a deployment, but before deploying) are available where you want them
15:03 fritchie joined #fuel
15:10 dmeltsaykin joined #fuel
15:11 fatdragon joined #fuel
15:13 dmeltsaykin Hi. Fuel-web cores please review https://review.openstack.org/#/q/status:open+project:openstack/fuel-web+topic:bug/1585137
15:14 dmeltsaykin And fuelclient fellows please review this too https://review.openstack.org/#/c/335969/
15:15 AlexAvadanii1 joined #fuel
15:16 cr0wrx joined #fuel
15:18 Sketch aha, Repo availability verification using public network failed on following nodes...
15:18 cr0wrx alright gentlemen, I have an issue and have no clue what caused it (and am debugging how to fix it). Is there any time that the br-ex interface on a fuel controller should change IP, and the same for br-mgmt? I have a 1-controller deploy and the IPs all shuffled around somehow (it didn't even reboot, uptime was nearly 3 months) and now all sorts of issues are coming up using services
15:21 xarses cr0wrx: only if you change the data sent to the interface and tell puppet to apply it.
15:22 xarses which version of fuel, and are the nics eth* or enp*?
15:22 Sketch looks like i forgot to add outbound NAT on my public network.  whoops.
15:22 fritchie joined #fuel
15:22 cr0wrx MOS 8.0, I didn't change anything in fuel and tell puppet to apply (I don't even know how to do that). The nics are eth*
15:23 cr0wrx both IPs got bumped up one, x.x.0.3 to x.x.0.4, for example
15:23 AlexAvadanii joined #fuel
15:23 cr0wrx I know because my openrc file and dns setting for horizon dashboard differ from what is current IP
15:24 xarses and what does the config in /etc/ imply should be the correct addresses?
15:25 cr0wrx I lied about interfaces, they are eno* not eth*
15:25 xarses hmm, In that format they aren't even supposed to move via udev
15:26 xarses so they likely didn't renumber themselves
15:26 xarses oh, you said they shifted ip's
15:26 cr0wrx yes
15:26 xarses =/
15:26 xarses uh, lets examine what fuel thought it was supposed to be and the local config on the system
15:27 cr0wrx for example, /etc/network/interfaces.d/ifcfg-br-mgmt is set to 192.168.0.3, and its running with that IP. However, my openrc file has 192.168.0.2 configured (and that worked fine until everything else stopped working)
15:27 fritchie joined #fuel
15:27 mwhahaha .2 is the vip
15:27 xarses openrc should point to the vip though
15:27 xarses did the vip move to another controller?
15:27 cr0wrx same for br-ex, the network interface config was on the new IP, but I know it's different because my dns A record, that used to work, points to the old IP
15:27 cr0wrx I only have one controller
15:28 xarses is the vip down for some reason?
15:28 cr0wrx I don't know
15:28 mwhahaha haproxy-status.sh
15:28 mwhahaha on the controller
15:28 xarses `pcs status`
15:29 cr0wrx idk where haproxy-status.sh is, but pcs status shows failed actions vip__public_monitor_5000 as 'not running'
15:29 ekosareva joined #fuel
15:29 mwhahaha haproxy-status.sh should be in the path
15:29 cr0wrx vip__public is stopped
15:29 mwhahaha is your public gateway available?
15:30 cr0wrx I think so
15:30 cr0wrx I mean, I can ping anything I like
15:30 Egyptian[Home] joined #fuel
15:30 mwhahaha check that it's pingable from the haproxy network namespace
15:31 cr0wrx ok I'm still a n00b for all these namespace things, give me a min to figure out what I'm doing
15:31 mwhahaha ip netns list
15:31 mwhahaha ip netns exec haproxy ping <ip>
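
Put together, the checks being suggested look roughly like this; 192.168.5.1 stands in for the real public gateway:

    # confirm the haproxy namespace exists
    ip netns list

    # ping the public gateway from inside it
    ip netns exec haproxy ping -c 3 192.168.5.1

    # review backend states; anything not up deserves attention
    haproxy-status.sh
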
15:31 code-R_ joined #fuel
15:32 vkulanov joined #fuel
15:33 thumpba joined #fuel
15:37 AlexAvadanii joined #fuel
15:49 fritchie is it possible to add nic bonding to an environment after it has been deployed?
15:52 cr0wrx sorry bout that, I can ping from within namespace
15:59 krypto joined #fuel
16:08 cr0wrx So I had manually changed /etc/network/interfaces.d/br-ex to the old IP earlier, hoping it would resolve the issue, but it didn't. I've reset it back to how it was at the start of this brokenness, rebooted, and now vip__public is started. The only thing that `pcs status` shows failed is p_ntp_monitor_20000 is not running, and PCSD Status shows 192.168.0.3: Offline
16:09 cr0wrx (everything external still seems broken though, can't hit dashboard for example nor use `keystone service-list`. Can still ping from haproxy namespace)
16:15 gongysh joined #fuel
16:18 cr0wrx just poking around more, but `fuel --env 1 network --download` shows public_vip as 192.168.5.3 (old IP) whereas br-ex on controller has 192.168.5.4 (random new IP). public_vrouter_vip on fuel is set to 192.168.5.2
16:19 cr0wrx in fuel network output vips.public.ipaddr is also 192.168.5.3
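
A sketch of that comparison, assuming the client writes the settings to network_1.yaml in the current directory (the file name may differ by client version); note the VIPs normally live in the haproxy namespace rather than on the bridge itself:

    # dump what fuel thinks the network layout should be
    fuel --env 1 network --download
    grep -B1 -A3 public_vip network_1.yaml

    # compare with what the controller actually runs
    ip addr show br-ex
    ip netns exec haproxy ip addr show
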
16:38 Abhishek joined #fuel
16:41 javeriak joined #fuel
16:43 cr0wrx aaand rebooted again, but now `pcs status` shows everything as stopped. /var/log/pacemaker shows a bunch of errors about Resource * cannot run anywhere. Up above those it says node controller1 has combined system health of -1000000 - does that mean anything to anyone? This is my only controller
16:44 Egyptian[Home] joined #fuel
17:04 bgaifullin joined #fuel
17:23 vvalyavskiy_ joined #fuel
17:25 AlexAvadanii joined #fuel
18:01 javeriak joined #fuel
18:04 cr0wrx ok ok ok, forget all that IP-changing nonsense, I may not know what I'm talking about and will have to track that aspect down later. I think my real issue was that /var/log/ was almost full and none of the resources from `pcs status` would start because of that (maybe that is why the controller wasn't considered healthy)
18:04 Jabadia joined #fuel
18:05 cr0wrx moving some old logs out and rebooting seems to have it up and working again. Tried a few times to make sure that was it and I think so.
18:06 cr0wrx Which brings up the question... I have some rather large logs that don't seem to be rotated - I would've thought fuel would've had rotation on all the logs (especially since the default log partition is pretty small). ceph-client.radosgw.gateway.log is pretty large compared to others (1.5GB), ceph-mon.controller.log ~500MB, conntrackd-stats.log ~500MB, etc.
18:18 mwhahaha oh yea that'd do it
18:18 mwhahaha after clearing disk space, crm node status-attr `hostname` delete "#health_disk"
18:20 DavidRama joined #fuel
18:25 cr0wrx what does that command do?
18:28 mwhahaha clears the error condition when you fill your disk
18:28 mwhahaha so all the services will restart
18:29 mwhahaha saves you the reboot
18:29 thumpba joined #fuel
18:29 cr0wrx ah, well I had already rebooted :) so I need to learn more about pacemaker I think. If all services are stopped, it still continues to check and try to start them? And that command flips the bit on the error condition
18:31 mwhahaha It's something we have configured in the pacemaker that we deploy to make sure we try and gracefully shutdown services before the system runs out of disk space.
18:31 mwhahaha this is because mysql/rabbitmq can corrupt their data and you'd end up losing the whole thing
18:31 mwhahaha you should put some disk monitoring in place :D
18:31 mwhahaha the condition is triggered if the disks have less than 512M of free space
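
The recovery sequence mwhahaha describes, as a hedged sketch; the truncate target is a hypothetical example of an oversized log (truncating in place frees the blocks even while a daemon holds the file open, unlike deleting it):

    # free space first, e.g. by truncating an oversized log in place
    truncate -s 0 /var/log/ceph-client.radosgw.gateway.log

    # clear the pacemaker health attribute so resources may start again
    crm node status-attr $(hostname) delete "#health_disk"

    # watch the services come back
    pcs status
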
18:32 cr0wrx well, yes :) but at the same time, any reason why ceph logs could get that large? They hadn't been rotated on this small cluster since I built it back at the start of April
18:32 mwhahaha no idea
18:32 cr0wrx Looking online lots of places talk about ceph should be rotating with logrotate, but in this deploy there is no logrotate config for anything ceph
18:32 mwhahaha they probably should have been rotated since then
18:32 mwhahaha maybe it's missing from the logrotation rules
18:33 cr0wrx yea there isn't a config for it, maybe an issue in MOS 8.0 fuel version or something
18:33 mwhahaha that would be a fuel bug
18:33 mwhahaha i bet we don't have it
18:34 cr0wrx outside of conntrackd-stats, most of the big ones seem to all be ceph related
18:34 cr0wrx everything else seems to be rotating
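
A sketch of the logrotate rule that appears to be missing; the glob and location are assumptions based on stock ceph packaging, and copytruncate avoids having to signal each daemon to reopen its log:

    cat > /etc/logrotate.d/ceph <<'EOF'
    # hypothetical rule for the unrotated ceph logs
    /var/log/ceph/*.log {
        daily
        rotate 7
        compress
        missingok
        notifempty
        copytruncate
    }
    EOF
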
18:35 mwhahaha i'm surprised no one has reported that yet
18:35 thumpba joined #fuel
18:35 cr0wrx is there anywhere that documents major differences between MOS 8.0 fuel and fuel-community (currently stable at the 9.0 release)?
18:35 mwhahaha not yet i don't think
18:35 cr0wrx and, do you happen to know if MOS 9.0 is still expected around July?
18:35 mwhahaha it is
18:35 thumpba joined #fuel
18:36 Abhishek joined #fuel
18:36 cr0wrx hrm, bummer timewise. We are spinning up a new small cluster nowish that we kinda need and don't want to wait longer, but it's still only a few weeks out from MOS 9.0
18:36 mwhahaha you could grab the latest fuel community 9
18:37 mwhahaha it's basically whats going to be released
18:37 mwhahaha from https://ci.fuel-infra.org
18:38 cr0wrx on 8.0 there are some things I wish were a bit different (not sure 9.0 addresses them, though). For example by default it seems to use identity v2 which makes it non-trivial (for me anyways) to add in idP / SSO integration
18:38 mwhahaha i think 9 is still v2
18:38 mwhahaha don't quote me on that tho
18:39 cr0wrx so fuel community 9 == MOS 9.0? Is community more or less just the more bleeding edge version of MOS but still stable? It's a small cluster but still for enterprise so we do want something pretty stable
18:40 mwhahaha it's not an exact == but it's pretty close
18:40 mwhahaha you could at least take a look at it. if you have a support contract you could always ask to get an early version of 9.0
18:42 cr0wrx sure. It's just a pain, getting fuel 8 installed was painful for our environment (remote over idrac, fuel-menu didn't play well in virtual console, etc....), but maybe worth it if there are some good improvements over 8
18:42 mwhahaha there are
18:45 cr0wrx I'll def consider it, if I can roll out and redeploy today or this week :/ (diff environment than the one that just crashed and burned today)
18:45 mwhahaha 8 still had docker right?
18:45 cr0wrx yes
18:45 mwhahaha 9 doesn't
18:46 mwhahaha simplifies things so much
18:46 cr0wrx ok
18:46 cr0wrx yea it was a bit confusing at first - can we still take a backup fairly easily for restore of the fuel node?
18:46 mwhahaha that's a good question
18:47 mwhahaha i'm not sure of the state of the backup tools for 9
18:47 cr0wrx ok
18:48 cr0wrx so 9 doesn't use identity v3 yet either? Do you happen to know if it's not too bad to manually configure on 8.0 and/or 9.0 without breaking too much to make use of web sso or the other auth mechanisms? I briefly tried the fuel ldap plugin which configures some of v3, at least, but it had strange issues for me
18:50 mwhahaha i'm not sure if it's v3 by default, v3 should still be available
18:50 mwhahaha but the services themselves might have issues, so it depends on which ones you need
18:51 mwhahaha I believe we're also working on the ldap plugin for 9 (fixing/improving)
18:51 mwhahaha https://review.openstack.org/#/c/328505/ currently being worked on
18:52 cr0wrx ok, I'll probably just have to try switching manually to using v3 to get sso support and see how it goes.
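
For reference, a hedged sketch of a Keystone v3 openrc fragment; every value is a placeholder, but the variable names are the standard ones the openstack clients read:

    # hypothetical endpoint; substitute the real management VIP
    export OS_AUTH_URL=http://192.168.0.2:5000/v3
    export OS_IDENTITY_API_VERSION=3
    export OS_PROJECT_DOMAIN_NAME=Default
    export OS_USER_DOMAIN_NAME=Default
    export OS_PROJECT_NAME=admin
    export OS_USERNAME=admin
    export OS_PASSWORD=secret
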
18:53 cr0wrx sorry for all the questions, you've been really helpful though. I'm still fairly new (at least to troubleshooting, been running MOS 8.0 dev environment for ~4 months or so, but no issues until today, it just worked)
18:55 cr0wrx we are spinning up a cluster at work now and I started with MOS 8.0 this week, but if community 9.0 is still pretty stable maybe we should run with that instead (not sure if we can wait longer for MOS 9.0)... Would like to get a support contract one day but gotta get the team on board first that openstack is a key part of our future (I think so...)
18:56 mwhahaha 9.0 is mitaka while 8 is liberty so it really depends on you guys as to what you need. if 8 just works there will be an upgrade path eventually
18:58 cr0wrx sure. The downside is that the upgrade path, from what I've read, has in the past required some extra nodes / hardware, essentially, as you swap around controllers. Maybe by that time we will have more, but today we do not, as this is a first deploy just to get our feet wet and start using openstack for some infrastructure
18:59 mwhahaha yea, i know there's been some work on that as well but i can't speak to specifics
19:00 cr0wrx 8 may be ok, it's the identity v3 pieces that are the main thing missing, and/or some other components I'd like to set up and install that fuel didn't handle (designate, lbaas, trove) - not sure how easy it is to install those on top of the mos 8.0 deploy (I think I saw somewhere someone mentioning dependency issues for designate/trove because of minor package differences in mos components)
19:00 mwhahaha yea not sure about those packages
19:01 cr0wrx thats ok. Are you with mirantis?
19:03 mwhahaha yea
19:03 f1gjam joined #fuel
19:04 f1gjam hey guys is this version of fuel basically the free version of mirantis?
19:04 cr0wrx and/or do you know anything about their training options? I'm mostly curious if they dive into fuel specifics (fuel cli, HA setup, etc.) - all the issues I started messing with today. I'm familiar with the openstack cli and how to use the components after they are installed, but ensuring things are running right seems to be the tricky part for me... I'm worried the trainings will focus more on the openstack cli and basic usage than on sysadmin work
19:04 mwhahaha f1gjam: you can use the community version
19:05 f1gjam which is?
19:05 f1gjam im looking for a distro which will get updates etc...
19:05 mwhahaha https://www.fuel-infra.org/ (i recommend if you want 9, get a nightly from https://ci.fuel-infra.org)
19:05 DavidRama joined #fuel
19:06 mwhahaha cr0wrx: i don't know if there's fuel traning, the openstack training is just openstack stuff
19:07 f1gjam oh, so the stable 9 isn't updated automatically or manually
19:07 f1gjam you have to download the latest iso?
19:07 mwhahaha well the repos get updated
19:07 mwhahaha but the stable of the community is quite old, so there were some issues with it
19:08 mwhahaha so if you grab the nightly as a starting point, you'll get updates for the packages as they hit the repos
19:08 f1gjam :)
19:08 f1gjam i see
19:08 f1gjam i did download the one from the website
19:08 f1gjam yes it was buggy
19:08 f1gjam :)
19:08 DavidRama left #fuel
19:15 cr0wrx thanks for all the help mwhahaha, let your boss know you were awesome and helpful to someone today and have made good strides towards gaining another enterprise support customer if I can pull it off ;)
19:16 mwhahaha ha ok thanks
19:18 Zer0Byte__ joined #fuel
19:20 Zer0Byte__ hi
19:20 Sketch is there an easy way to reset the fuel server without reinstalling it? (can't change existing settings after deployment)
19:21 Zer0Byte__ someone with fuel 9.0?
19:22 mwhahaha Sketch: there's a reset of the environment; in 9 we started allowing some updating of settings
19:22 mwhahaha Zer0Byte__: what's up?
19:22 Zer0Byte__ hey mwhahaha how are you
19:22 Zer0Byte__ im experiencing a slow controller
19:22 Zer0Byte__ and it's taking a lot of ram
19:23 mwhahaha anything in particular taking all the ram?
19:24 Zer0Byte__ node-3 liberasurecode[15696]: liberasurecode_backend_open: dynamic linking error libshss.so.1: cannot open shared object file: No such file or directory
19:24 Zer0Byte__ a lot of mysql processes
19:24 Zer0Byte__ and i got this error on my syslog
19:24 Zer0Byte__ every second
19:24 Zer0Byte__ node-3 liberasurecode[15691]: liberasurecode_backend_open: dynamic linking error libisal.so.2: cannot open shared object file: No such file or directory
19:26 Zer0Byte__ i want to know if this is just happening to me
19:26 mwhahaha googling that returns possible issues with swift-proxy-server
19:26 mwhahaha being misconfigured
19:26 mwhahaha https://bugs.launchpad.net/kolla/+bug/1552669
19:27 mwhahaha we also had https://review.openstack.org/#/c/332152/2 which was fixed recently in master
19:27 mwhahaha you could try manually disabling those two services
19:32 Abhishek joined #fuel
19:32 Zer0Byte__ what is the best way to do it?
19:33 mwhahaha fix it by hand
19:33 Zer0Byte__ got it
19:39 Zer0Byte__ thanks mwhahaha
19:39 Zer0Byte__ solved
19:40 mwhahaha k
19:56 bgaifullin joined #fuel
20:13 HeOS joined #fuel
20:36 DavidRama1 joined #fuel
20:42 DavidRama joined #fuel
20:42 DavidRama joined #fuel
20:46 DavidRama1 joined #fuel
20:51 DavidRama1 left #fuel
20:53 DavidRama joined #fuel
21:41 HenryG_ joined #fuel
22:44 AlexAvadanii1 joined #fuel
22:44 Egyptian[Home] joined #fuel
