
IRC log for #fuel, 2015-11-13


All times shown according to UTC.

Time Nick Message
00:11 tzn joined #fuel
00:44 devl joined #fuel
01:15 zhangjn joined #fuel
01:19 zhangjn_ joined #fuel
01:20 tzn joined #fuel
01:24 Sesso joined #fuel
01:28 shimura_ken joined #fuel
01:35 tzn joined #fuel
01:52 gongysh_ joined #fuel
02:16 jerrygb_ joined #fuel
02:27 fedexo joined #fuel
02:28 omartsyniuk joined #fuel
02:30 jerrygb joined #fuel
02:40 pasquier-s joined #fuel
02:45 bookwar joined #fuel
02:45 Obi-Wan joined #fuel
02:45 ogelbukh joined #fuel
02:45 6A4AAX6CR joined #fuel
02:45 dr joined #fuel
02:45 ogelbukh_ joined #fuel
02:48 ilbot3 joined #fuel
02:48 Topic for #fuel is now Fuel 7.0 (Kilo) https://software.mirantis.com | Paste here http://paste.openstack.org/ | IRC logs http://irclog.perlgeek.de/fuel/
03:10 youellet joined #fuel
03:21 subscope joined #fuel
03:40 youellet_ joined #fuel
05:06 bildz joined #fuel
05:27 bildz if I import glance images from the web, how can I check their status in the queue?
05:48 zhangjn joined #fuel
05:49 zhangjn joined #fuel
06:01 zhangjn joined #fuel
06:16 zhangjn joined #fuel
06:22 zhangjn joined #fuel
06:48 zhangjn joined #fuel
06:57 zhangjn joined #fuel
07:07 javeriak joined #fuel
07:30 devl joined #fuel
07:34 zhangjn joined #fuel
07:42 neouf joined #fuel
07:45 fzhadaev1 joined #fuel
08:24 mkwiek07 joined #fuel
08:26 subscope joined #fuel
08:30 hyperbaba joined #fuel
08:33 cartik joined #fuel
08:34 javeriak joined #fuel
08:39 cartik joined #fuel
08:57 e0ne joined #fuel
09:26 sergmelikyan joined #fuel
09:33 zhangjn joined #fuel
09:35 krogon joined #fuel
09:53 akislitsky_ joined #fuel
09:54 tkhno joined #fuel
10:02 e0ne joined #fuel
10:06 martineg_ joined #fuel
10:07 subscope joined #fuel
10:08 zhangjn joined #fuel
10:16 Chlorum joined #fuel
10:55 tzn joined #fuel
11:19 subscope joined #fuel
11:22 subscope joined #fuel
12:04 jaypipes joined #fuel
12:07 tzn joined #fuel
12:29 sergmelikyan joined #fuel
12:47 xarses joined #fuel
13:07 tzn joined #fuel
13:26 neophy joined #fuel
13:52 sergmelikyan joined #fuel
14:08 tzn joined #fuel
14:18 xek joined #fuel
14:22 ericjwolf joined #fuel
14:45 omolchanov joined #fuel
14:57 jerrygb joined #fuel
14:59 ericjwolf I thought I bookmarked it but apparently I did not, but in 6.1 there was a Zabbix plugin bug where it was missing some packages during the install.  I have tried to search but I cannot find the bug report.  Does anybody happen to remember it?  I am almost positive mwhahaha helped me find it.  I had to rebuild my 6.1 install and going to 7.0 at this time is not possible.
15:02 mwhahaha we can go look in the channel logs
15:02 mwhahaha :D
15:04 mwhahaha or was it this bug https://bugs.launchpad.net/fuel/6.1.x/+bug/1483983
15:07 pbrooko joined #fuel
15:09 tzn joined #fuel
15:16 Verilium Anyone running LMA and having issues between the lma_collector (hekad) running on the controller node and haproxy?
15:16 pasquier-s Verilium, could you describe the problem you have?
15:20 Verilium pasquier-s:  There 'seems' to be a difference between pacemaker starting up lma/hekad and starting it manually via 'start lma_collector'.
15:21 Verilium If I kill hekad (with a kill -9), and pacemaker starts it back up, haproxy seems to see it as being down...  And then, everything trying to send to it just gets blocked off.
15:22 pasquier-s Verilium, have you deployed any other plugin apart from LMA?
15:22 Verilium But, it seems, if I kill it, then start it back up with a 'start lma_collector', before pacemaker does anything, it 'seems' to be working again.
15:22 Verilium ...hmm, or maybe not.  Nagios is going haywire.
15:22 pasquier-s you shouldn't use 'start lma_collector' since it's managed by Pacemaker
15:23 Verilium pasquier-s:  lma_collector, influxdb_grafana, lma_infrastructure_alerting, elasticsearch_kibana
15:23 jobewan joined #fuel
15:23 pasquier-s Verilium, thanks, we ran into issues with the Zabbix plugin at some point but it's not your case
15:23 Verilium (on another note, if I didn't install elasticsearch_kibana, the fuel deploy crapped out midway)
15:23 Verilium (seems there's some hard dependency in there somewhere)
15:24 Verilium (wasn't happening back with the 6.1 version)
15:24 pasquier-s Yes, it shouldn't fail
15:24 Verilium pasquier-s:  Ok for the start/stop for lma_collector.  Let's see, I'll try it right now.
15:25 ericjwolf <mwhahaha> - Thank you, that was it.
15:25 ericjwolf I have now bookmarked it.....
15:25 pasquier-s let's open 2 bugs then so we can try to reproduce the issues you're seeing
15:25 mwhahaha ericjwolf: cool
15:27 ericjwolf I wonder if this will still be a problem, since when I created the local mirror I did a full mirror, not the default partial?
15:27 pasquier-s Verilium, this one is for Elasticsearch-Kibana: https://bugs.launchpad.net/lma-toolchain/+bug/1516055
15:30 Verilium pasquier-s:  I have another environment I can probably reproduce the bug on and be able to gather some output.
15:31 Verilium But, based on the puppet output I had seen (and the fact I tried it 2-3 times), that seemed to be it.
15:32 swann ericjwolf: hi, you can check the fix to see if you have all packages https://review.openstack.org/#/c/241275/1/pre_build_hook,cm
15:33 pasquier-s Verilium, and a second bug for the hekad/HAProxy problem: https://bugs.launchpad.net/lma-toolchain/+bug/1516061
15:34 pasquier-s Verilium, feel free to update the descriptions and attach logs to the bugs
15:35 Verilium pasquier-s:  I also am having a bit of a hard time wrapping my head around the way the nagios plugin works with its passive checks.  Such as, right now, things seem to be fine in grafana/influxdb, services are OKAY (except for rabbitmq in WARN, not sure what that's about yet), but in nagios, all the services in 00-global-clusters-env1
15:35 Verilium ...are showing up as UNKNOWN.  UNKNOWN: No data received for at least 130 seconds .
15:36 pasquier-s The service checks are sent by the LMA collector with the management VIP
15:36 claflico joined #fuel
15:36 pasquier-s so if this LMA collector instance is down or unresponsive then Nagios receives no data and assumes the UNKNOWN statte
15:36 pasquier-s *state*
15:38 Verilium Hmm.  Might explain a few things.
15:40 pasquier-s This has been added to the LMA Infra Alerting documentation only a few days ago: http://fuel-plugin-lma-infrastructure-alerting.readthedocs.org/en/latest/user.html#troubleshooting
15:40 Verilium I guess it's possible all my current issues might stem from LMA on this node not working correctly, considering it's the node that has the VIP/haproxy and such.
15:40 bildz Anyone around to help me with a ceph issue?  I'm trying to upload images into glance and keep getting an error 500:  http://pastebin.com/E3LYWTT8
15:41 pasquier-s Not sure which version you're running exactly but we've found an issue recently with our service implementation in Pacemaker
15:41 pasquier-s This has been fixed very recently: https://review.openstack.org/#/c/243677/
15:44 Verilium pasquier-s:  Hmm, interesting troubleshooting bits for nagios.  Seems to describe what I'm having right now.
15:45 Verilium pasquier-s:  The versions of all 4 plugins I have were cloned from github on oct 29th, so I definitely don't have that fix in right now.
15:45 pasquier-s so this might explain your trouble but anyhow it needs some verification on our side
15:47 Verilium I should add, all those dashboards in grafana are pretty awesome. :)
15:47 pasquier-s thanks :-)
15:47 pasquier-s kudos to swann & tuvenen_ too
15:48 Verilium At first, saw the Main one, and hey, cool stuff.  Then I saw the one for each component...  Very nice.
15:54 Verilium pengine:  warning: unpack_rsc_op_failure:   Processing failed op start for lma_collector:2 on node-5.streamtheworld.net: unknown error (1)
15:55 Verilium Strange.
15:55 pasquier-s Any hekad process still alive on this node?
15:55 Verilium Yep.
15:56 Verilium Although, I made sure to kill it previously...  Let me try again.
15:56 pasquier-s try killall -9 hekad
15:58 Verilium Yeah, just did.  Ran a crm resource start lma_collector afterwards.
15:59 Verilium There's definitely no hekad process anymore right now.  Monitoring pacemaker.log and I see it just started it up again.
16:03 Verilium ...and it considers it dead again, and leaves the process running.
16:04 pasquier-s Pacemaker considers hekad as dead? or HAProxy?
16:06 Verilium Pacemaker starts it up.  Shows as started.  Process is there.  But then considers it dead.
16:07 Verilium And the clone set for clone_lma_collector shows as stopped for that node.
16:07 Verilium (process is still there)
16:08 pasquier-s that might be linked to the bug #1514893 that we fixed 3 days ago
16:08 * Verilium nods.
16:08 Verilium Seems like it might make sense.  I'll try it out.
16:09 pasquier-s Verilium, thanks, please update https://bugs.launchpad.net/lma-toolchain/+bug/1516061 with your findings or ping me on IRC
16:09 Verilium Just to make sure...  Only way to deploy a new version of the plugins is to rebuild them and do a new deploy?
16:10 Verilium pasquier-s:  I certainly will.  Thank you very much!
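For anyone replaying this log later: the kill-and-restart sequence discussed above (`killall -9 hekad`, then `crm resource start lma_collector`) is easy to get wrong if a stray hekad survives. A minimal guard sketch, using only the commands already mentioned in the channel; it assumes `pgrep` and `killall` are installed, and the `crm` invocation is only echoed rather than executed so the sketch is safe to run anywhere:

```shell
#!/bin/sh
# Sketch of the restart sequence from the discussion above. Assumes
# procps (pgrep) and psmisc (killall) are present, as on a Fuel node.
if pgrep -x hekad >/dev/null 2>&1; then
    echo "killing stray hekad processes"
    killall -9 hekad
    sleep 1   # give the kernel a moment to reap them
fi

if pgrep -x hekad >/dev/null 2>&1; then
    status=stuck   # something is respawning or unkillable
else
    status=clean   # safe to let Pacemaker start the resource
fi
echo "hekad state: $status"

# The actual restart (from the log) is left to the operator:
echo "next step: crm resource start lma_collector"
```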
16:10 tzn joined #fuel
16:10 subscope joined #fuel
17:00 ericjwolf this is the first time I am installing Ceilometer via fuel and I am running into an error.  It is failing to deploy http://paste.openstack.org/show/478826/ I did a quick search
17:00 ericjwolf https://bugs.launchpad.net/fuel/+bug/1467180
17:00 ericjwolf But I do not see any work arounds for this.  Fuel 6.1 3 controllers with local Mongo.
17:01 ericjwolf At first I thought it was a networking issue but that is resolved.
17:01 mwhahaha what's the error from the puppet log"?
17:01 javeriak joined #fuel
17:01 * mwhahaha wishes the puppet errors made it into the astute logs :/
17:03 ericjwolf 2015-11-13 16:56:13 +0000 /Stage[main]/Ceilometer::Alarm::Evaluator/Service[ceilometer-alarm-evaluator] (info): Starting to evaluate the resource
                2015-11-13 16:56:13 +0000 /Stage[main]/Ceilometer::Alarm::Evaluator/Service[ceilometer-alarm-evaluator] (notice): Dependency Package[ceilometer-common] has failures: true
                2015-11-13 16:56:13 +0000 /Stage[main]/Ceilometer::Alarm::Evaluator/Service[ceilometer-alarm-evaluator] (notice): Depend
17:03 ericjwolf Failed dependencies.
17:03 ericjwolf hummmm
17:04 ericjwolf yes it would be helpful if the puppet logs were added :)
17:05 ericjwolf The biggest issue I have with Openstack is the gazillion pieces of software all trying to work together ....
17:05 mwhahaha yes
17:06 mwhahaha what happens if you install ceilometer-alarm-evaluator by hand?
17:06 mwhahaha which version is it pulling in?
17:06 ericjwolf no plugin
17:06 ericjwolf just selected from Fuel gui
17:07 mwhahaha no i mean login to the node and apt-get install ceilomter-alarm-evaluator
17:07 mwhahaha :D
17:08 ericjwolf sorry about that.. Yeah, I am checking that now.
17:08 mwhahaha i'm wondering if it's pulling in an ubuntu package or a mos package which results in weirdness
17:08 ericjwolf root@node-2:/var/log# apt-get install ceilometer-alarm-evaluator
                Reading package lists... Done
                Building dependency tree
                Reading state information... Done
                Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the si
17:09 ericjwolf checking fuel repo to see what is there...
17:09 mwhahaha whats the output of: apt-cache policy ceilometer-alarm-evaluator
17:10 ericjwolf root@node-2:/var/log# apt-cache policy ceilometer-alarm-evaluator
                ceilometer-alarm-evaluator:
                  Installed: (none)
                  Candidate: 2014.2.2-1~u14.04+mos9
                  Version table:
                     2014.2.2-1~u14.04+mos9 0
                       1050 http://10.20.0.2:8080/2014.2.2-6.1/ubuntu/x86_64/ mos6.1/main amd64 Packages
                     2014.1-0ubuntu1 0
                        500 http://10.20.0.2:8080/ubuntu-part/ trusty/universe amd64 Packages
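As an aside for readers of the log, the `Candidate:` line is what apt will actually install. A trivial way to pull it out; the sample below embeds the paste above as a heredoc so the snippet runs anywhere, whereas on a real node you would pipe `apt-cache policy ceilometer-alarm-evaluator` in instead:

```shell
#!/bin/sh
# Extract the candidate version from `apt-cache policy` output.
# The heredoc reproduces the output pasted in the channel.
sample() {
cat <<'EOF'
ceilometer-alarm-evaluator:
  Installed: (none)
  Candidate: 2014.2.2-1~u14.04+mos9
  Version table:
     2014.2.2-1~u14.04+mos9 0
       1050 http://10.20.0.2:8080/2014.2.2-6.1/ubuntu/x86_64/ mos6.1/main amd64 Packages
     2014.1-0ubuntu1 0
        500 http://10.20.0.2:8080/ubuntu-part/ trusty/universe amd64 Packages
EOF
}

candidate=$(sample | awk '/Candidate:/ {print $2}')
echo "candidate: $candidate"   # the mos6.1 build wins on priority 1050
```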
17:12 ericjwolf it needs ceilometer-common, which needs python-ceilometer, but that needs python-happybase which needs python-thrift.
17:12 fedexo joined #fuel
17:13 mwhahaha I'm kinda surprised that bug is still open
17:13 mwhahaha but yea there's  not much workaround without a package fix
17:15 ericjwolf Interesting..  So is it a case where usually another package is installed before ceilometer that would take care of these other missing packages?
17:15 mwhahaha no, sounds like there's either a missing package or a conflict
17:15 mwhahaha but i'd have to reproduce it myself
17:16 mwhahaha or if you could provide the full logs from the apt-get install, maybe i could see if something sticks out
17:16 mwhahaha unfortunately on the envs i have right now it's working but that's dev 8.0
17:26 ericjwolf the apt history file is not very useful.  I have the puppet.log file.  Can I attach it to the original bug?  It looks like that bug was reported on a virtual box setup.  Want a new report?
17:26 mwhahaha No we'd just need to troubleshoot it with apt-get
17:33 ericjwolf Do you need more than this http://paste.openstack.org/show/478827/
17:35 mwhahaha ah so it's the python-happybase
17:35 mwhahaha i wonder if there's a mos version of the package that can be used to satisfy it
17:35 mwhahaha let me see
17:37 ericjwolf [root@srvr417 nailgun]# find ./ -name *happybase*
                ./ubuntu-part/pool/main/p/python-happybase
                ./ubuntu-part/pool/main/p/python-happybase/python-happybase_0.7-1build1_all.deb
                ./ubuntu-part/pool/main/p/python-happybase/python-happybase_0.8-0ubuntu1_all.deb
                ./2014.2.2-6.1/centos/x86_64/Packages/python-happybase-0.6-1.el6.noarch.rpm
                [root@srvr417 nailgun]#
17:38 ericjwolf appears that is not in the MOS ubuntu repo....
17:38 mwhahaha well it might be but since you used a mirror it might not have grabbed it
17:39 mwhahaha or we're not properly providing it
17:39 mwhahaha it's available as part of 8 so you might be able to steal it and put it in your mirror as a work around, http://mirror.fuel-infra.org/mos-repos/ubuntu/8.0/pool/main/p/python-happybase/
17:49 ericjwolf question....
17:49 ericjwolf my repo does have 2 versions.
17:49 ericjwolf ./ubuntu-part/pool/main/p/python-happybase/python-happybase_0.7-1build1_all.deb ./ubuntu-part/pool/main/p/python-happybase/python-happybase_0.8-0ubuntu1_all.deb
17:50 ericjwolf when I manually installed happybase it picked the .7 version.
17:50 ericjwolf shouldn't the .8 version have been picked?
17:50 ericjwolf or is there a file I need to "fix"  so it picks the right versions?
17:51 mwhahaha Probably, i'm not super familiar with the ways of debian based repositories
17:51 mwhahaha i know with redhat you'd have to rebuild the metadata
17:51 mwhahaha so i wonder if there is a similar concept
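On ericjwolf's question about which version should win: Debian version ordering would indeed prefer 0.8-0ubuntu1 over 0.7-1build1. GNU `sort -V` is a rough stand-in for the real comparison (`dpkg --compare-versions` is the authoritative tool on a node); the point is that apt picked 0.7 not because of ordering, but because 0.8 was missing from the repo index:

```shell
#!/bin/sh
# Compare the two pool versions from the log. `sort -V` approximates
# Debian version ordering closely enough for these two strings.
newest=$(printf '%s\n' '0.7-1build1' '0.8-0ubuntu1' | sort -V | tail -n1)
echo "preferred by version ordering: $newest"
```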
17:52 as0bu mwhahaha: are you referring to the apt-mirror or the local metadata?
17:53 mwhahaha the local
17:53 mwhahaha from the mirror process
17:55 as0bu that might be hard to do since fuel starts a new docker image to mirror the ubuntu repos
17:55 mwhahaha well in theory you could launch the docker container and run whatever :D
17:55 tzn joined #fuel
17:55 mwhahaha but yea i think the problem is that it's not picking up on the 0.8 version
17:55 as0bu in theory yes... :D
17:56 mwhahaha ericjwolf: if you manually install the 0.8 version does it work?
17:57 as0bu if you run the fuel-createmirror it should pull down any updates and regenerate metadata
17:58 mwhahaha well he did that and clearly it didn't work too well :D
18:00 bildz Anyone around to help me with a ceph issue?  I'm trying to upload images into glance and keep getting an error 500:  http://pastebin.com/E3LYWTT8
18:00 as0bu mwhahaha: I haven't been keeping up with the entire conversation :P sorry about that
18:01 mwhahaha it's all good
18:10 tzn joined #fuel
18:12 rmoe joined #fuel
18:24 blahRus joined #fuel
18:32 e0ne joined #fuel
18:33 angdraug joined #fuel
18:51 sudo_woodo joined #fuel
18:51 sudo_woodo hi
18:51 sudo_woodo I'm running fuel in virtualbox
18:52 sudo_woodo I'm using fuel 6.1 with centos-based installations
18:52 sudo_woodo I need to install a package on one of the hypervisors, but it seems yum doesn't work
18:52 sudo_woodo what can I do?
18:55 ericjwolf had to run out for a few mins.  <as0bu> - When I installed a new fuel master, I did a yum update, dockerctl destroy all, dockerctl start all, then did fuel-createmirror, but I modified the config file to do a full ubuntu mirror.  I did this because I install a few other packages after the openstack deployment.
18:56 ericjwolf I can see the python-happybase in the ubuntu repo, both 0.7 and 0.8, but when I do an apt-cache it only shows the 0.7 version.  So curious how to force the install of the newer package.
19:06 mwhahaha ericjwolf: download it and dpkg -i <file>
19:07 sudo_woodo hi
19:07 sudo_woodo any help for me?
19:07 mwhahaha sudo_woodo: what package are you looking to install?
19:08 mwhahaha for centos installations i believe we provide our own package set which may not have the full centos package list
19:08 pbrooko joined #fuel
19:12 ericjwolf I wonder, if doing a full mirror vs a partial, whether there are conflicts with the MOS repo.
19:12 ericjwolf I see the python-ceilometer in both but they have different dependency versions.
19:12 mwhahaha well it's installing the mos one
19:12 mwhahaha and the happybase is provided by ubuntu
19:13 ericjwolf I will try the dpkg
19:14 javeriak joined #fuel
19:17 ericjwolf ok, manually installing the happybase you provided allows the rest of the packages to install.   could a package be updated to automate the install of this package? dpkg -i <http>/......
19:17 javeriak_ joined #fuel
19:18 javeria__ joined #fuel
19:19 ericjwolf could I modify the ubuntu manifest file ?
19:19 ericjwolf would this work?
19:19 ericjwolf on fuel /etc/puppet/2014.2.2-6.1/manifests/ubuntu-versions.yaml
19:20 mwhahaha no, you could modify the puppet script to do a download and install of the file
19:20 mwhahaha but this really needs to get fixed
19:25 mwhahaha i updated the bug so maybe that'll get it on the correct people's radar
19:25 javeriak joined #fuel
19:26 ericjwolf ARG, I need to learn puppet..... Thanks again for your help.
19:26 mwhahaha my fall back is go create a plugin :D
19:26 thumpba joined #fuel
19:26 mwhahaha you can use a shell script
19:26 mwhahaha but i'm not sure which is harder
19:27 ericjwolf I think I have a shell script that will do that I wrote for some other items.  Need to go find it...
19:27 ericjwolf thsi needs to be added to all machines or only the controllers?
19:27 mwhahaha i think just the controllers
19:28 mwhahaha it's wherever ceilometer is getting installed
19:28 ericjwolf That's only 3 servers
19:28 ericjwolf so I can do it by hand for now.
19:28 mwhahaha yea
19:28 as0bu ericjwolf: I assume you did an "apt-get update" on the server you were trying to get the package to
19:28 mwhahaha that should be done automatically as a part of the install
19:29 javeriak_ joined #fuel
19:29 as0bu and it wasn't showing up in the "apt-cache madison"
19:29 as0bu on the node you were installing to
19:29 ericjwolf yes I did a apt-get clean and update
19:30 as0bu did you see if it was in the "apt-cache madison <packagename>"?
19:30 ericjwolf I will look at that
19:30 ericjwolf need to look at a different server.
19:30 as0bu it should tell you all versions of the package the client can see
19:32 ericjwolf root@node-3:~# apt-cache madison python-happybase
                python-happybase | 0.7-1build1 | http://10.20.0.2:8080/ubuntu-part/ trusty/main amd64 Packages
19:32 ericjwolf only shows this one version.
19:32 ericjwolf does not show the new version...
19:32 ericjwolf Does the Packages file need to be rebuilt to show this new file?
19:32 javeria__ joined #fuel
19:33 mwhahaha i would have thought that to be done as part of the create mirror process
19:35 thumpba joined #fuel
19:38 as0bu ericjwolf: you might need to use reprepro to add a deb to a repo
19:38 ericjwolf But I manually added the file you sent me.  That was not part of the official mirror.
19:38 mwhahaha oh i thought that was available in your find
19:38 mwhahaha was that not pulled down with the create repo?
19:39 ericjwolf this file:  python-happybase_0.8-2~u14.04+mos1_all.deb  you sent  from your MOS8.  It was not in my repo
19:39 ericjwolf I copied it to /var/www/nailgun/2014.2.2-6.1/ubuntu/x86_64/pool/main/p/python-happybase
19:40 ericjwolf The ones in the ubuntu mirror:
19:40 mwhahaha oh then an updated centos-common probably needs to be created. it probably has an improper version in it since it required 0.7.1 but it might have needed to be 0.7-1 or something
19:40 ericjwolf ./ubuntu-part/pool/main/p/python-happybase/python-happybase_0.7-1build1_all.deb
19:40 ericjwolf ./ubuntu-part/pool/main/p/python-happybase/python-happybase_0.8-0ubuntu1_all.deb
19:41 mwhahaha er not centos-common ceilometer-common
19:41 ericjwolf but, I am not sure why, the python-happybase_0.8-0ubuntu1_all.deb does not show in any of the apt-cache commands.
19:42 mwhahaha you'd have to update the repo metadata
19:42 mwhahaha just putting the file out there doesn't make it available
19:42 as0bu that's where the reprepro command comes into play I believe
19:44 as0bu that's what I use to setup custom deb repos. Not totally sure how well it will work to add a deb to an exsisting ubuntu repo
19:44 as0bu "in theory" it should work
19:45 mwhahaha I like to consider all advice in here "in theory" advice ;)
19:45 thumpba joined #fuel
19:45 as0bu yes lol fair enough
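mwhahaha's point at 19:42 ("just putting the file out there doesn't make it available") can be demonstrated without touching a real mirror: apt clients only ever see what the repo's Packages index lists, not what happens to sit in the pool. A toy illustration in a temp dir; for real, reprepro or dpkg-scanpackages would regenerate the index:

```shell
#!/bin/sh
# Fake repo: the 0.8 .deb exists in the pool, but the Packages index
# (which is all apt ever reads) still only lists 0.7.
repo=$(mktemp -d)
mkdir -p "$repo/pool/main/p/python-happybase"
touch "$repo/pool/main/p/python-happybase/python-happybase_0.8-0ubuntu1_all.deb"

cat > "$repo/Packages" <<'EOF'
Package: python-happybase
Version: 0.7-1build1
Filename: pool/main/p/python-happybase/python-happybase_0.7-1build1_all.deb
EOF

if grep -q '^Version: 0\.8' "$repo/Packages"; then
    visible=yes
else
    visible=no
fi
echo "0.8 listed in index: $visible (despite being on disk)"
rm -rf "$repo"
```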
19:47 sudo_woodo mwhahaha: I need to install "stress"
19:47 ericjwolf humm, updating an Ubuntu repo to include a deb on a CentOS-based machine?
19:47 mwhahaha sudo_woodo: is that a standard centos package?
19:48 sudo_woodo mwhahaha: It seems it's on EPEL repo
19:48 ericjwolf <poof>  head just exploded....
19:48 mwhahaha sudo_woodo: you'd have to setup a repo to provide it, I don't recommend just adding all of epel
19:48 mwhahaha Alternatively we do provide an Auxiliary repo on the master that can be used if you put the file in there and rebuild the repo metadata
19:49 sudo_woodo mwhahaha: How can I do it?
19:49 mwhahaha ericjwolf: you could use the ubuntu docker image that createmirror script uses
19:50 mwhahaha sudo_woodo: I don't have a 6.1 environment at the moment but if you look on a node see if there is an auxiliary repo configured and the repo directory should be on the fuel master node in /var/www/nailgun/repos (i think)
19:51 mwhahaha then if you download your stress rpm and put it in there and rebuild the metadata via 'createrepo', it should be available
19:51 mwhahaha (these directions being an "in theory" process)
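Spelling out mwhahaha's "in theory" process for the Auxiliary repo as a checklist. The repo path here is an assumption (it moves between Fuel releases, and /var/www/nailgun/repos did not exist on sudo_woodo's 6.1 master), so the commands are only echoed rather than executed:

```shell
#!/bin/sh
# Hypothetical steps to publish an extra rpm (e.g. stress) via the
# Auxiliary repo on the Fuel master. AUX_REPO is an assumed path --
# locate the real one under /var/www/nailgun first.
AUX_REPO=/var/www/nailgun/centos/auxiliary

run() { echo "+ $*"; }   # dry-run helper; replace with real execution

run cp stress-*.rpm "$AUX_REPO/"
run createrepo "$AUX_REPO"        # rebuild the yum metadata
run yum clean metadata            # on the target node
run yum install -y stress         # should now resolve from the repo
```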
19:54 sudo_woodo mwhahaha: /var/www/nailgun/repos  doesn't exist
19:55 mwhahaha like i said i don't have an environment, but it would map to somewhere in /var/www/nailgun
20:07 thumpba joined #fuel
20:13 javeriak joined #fuel
20:19 thumpba joined #fuel
20:25 zhangjn joined #fuel
20:26 thumpba joined #fuel
20:35 javeriak joined #fuel
20:42 Sesso joined #fuel
20:51 javeriak_ joined #fuel
20:54 thumpba joined #fuel
20:56 javeriak joined #fuel
21:28 thumpba joined #fuel
21:33 ericjwolf not sure if anybody is still around but now I am getting an rsync failure from the second controller when trying to rsync the swift files.
21:34 ericjwolf I am getting a read socket error.
21:34 ericjwolf is this related to ssh keys?
21:34 ericjwolf this is on the primary controller:
21:34 ericjwolf <158>Nov 13 21:29:48 node-2 rsyncd[53945]: name lookup failed for 192.168.102.4: Name or service not known
                <158>Nov 13 21:29:48 node-2 rsyncd[53945]: connect from UNKNOWN (192.168.102.4)
                <158>Nov 13 21:29:48 node-2 rsyncd[53945]: rsync on swift_server/account.builder from UNKNOWN (192.168.102.4)
                <158>Nov 13 21:29:48 node-2 rsyncd[53945]: building file list
                <46>Nov 13 21:29:49 node-2 container-replicator: no_change:0 ts_repl:0 diff:0
21:35 ericjwolf but I cannot ssh between the servers.
21:35 mwhahaha no i thought it was done via rsync not ssh+rsync
21:52 ericjwolf hummm  I manually added a file to the /etc/swift dir on the primary and rsynced with no issue.  But all the rsync for the swift builder files is failing.
22:14 ericjwolf ha, MTU on server is 9k but switch is 9k and did not account for vlan tag so packet was dropped.  Network issue.  switch fixed.
22:15 mwhahaha was that the cause of the rsync issue?
22:15 thumpba joined #fuel
22:34 ericjwolf <mwhahaha> yes.  The frame size was 9014.  But the switch MTU was set to 9000.  So the switch was dropping it.  My test file was only a few bytes so that is why it was fine.
22:34 mwhahaha good ol' network issue :D
22:34 ericjwolf a gazillion parts all working together.....
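The arithmetic behind ericjwolf's MTU fix, for the record: a 9000-byte IP MTU becomes a 9014-byte Ethernet frame once the 14-byte header is added, and the 802.1Q VLAN tag adds 4 more, so anything over the switch's 9000-byte frame limit is silently dropped while small test files sail through. The numbers are straight from the log:

```shell
#!/bin/sh
# Numbers from the conversation: server IP MTU 9000, switch frame
# limit 9000, standard Ethernet header 14 bytes, 802.1Q tag 4 bytes.
IP_MTU=9000
ETH_HEADER=14
VLAN_TAG=4
SWITCH_LIMIT=9000

FRAME=$((IP_MTU + ETH_HEADER))   # untagged frame on the wire
TAGGED=$((FRAME + VLAN_TAG))     # with the VLAN tag the switch forgot

echo "untagged frame: $FRAME bytes"
echo "tagged frame:   $TAGGED bytes"
if [ "$TAGGED" -gt "$SWITCH_LIMIT" ]; then
    echo "dropped: exceeds the ${SWITCH_LIMIT}-byte switch limit"
fi
```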
22:36 mwhahaha http://cube-drone.com/comics/c/alien-geometries :D
22:36 * mwhahaha wanders off
22:37 thumpba joined #fuel
22:38 ericjwolf That's greatness.
22:46 thumpba joined #fuel
22:54 thumpba joined #fuel
23:22 sbfox joined #fuel
23:51 gongysh joined #fuel
23:56 thumpba joined #fuel
