
IRC log for #fuel, 2014-02-04


All times shown according to UTC.

Time Nick Message
00:32 jaypipes joined #fuel
00:36 richardkiene joined #fuel
01:32 richardkiene So I'm having another interesting Neutron + GRE networking issue
01:33 richardkiene randomly VMs will lose all connectivity, but rebooting them from Horizon brings them back
01:33 richardkiene and oddly enough if multiple VMs in the same tenant are unavailable, rebooting one often times brings them all back
01:33 richardkiene has anyone experienced this? I've been looking for bugs and haven't come up with anything yet.
01:45 rmoe joined #fuel
02:38 xarses joined #fuel
03:12 mutex joined #fuel
03:30 IlyaE joined #fuel
03:33 ArminderS joined #fuel
04:03 richardkiene joined #fuel
04:29 steale joined #fuel
04:44 IlyaE joined #fuel
05:00 IlyaE joined #fuel
05:35 steale joined #fuel
05:43 Arminder- joined #fuel
05:45 mihgen joined #fuel
05:53 steale joined #fuel
05:53 IlyaE joined #fuel
06:14 MACscr joined #fuel
06:15 MACscr joined #fuel
06:15 MACscr joined #fuel
06:23 e0ne joined #fuel
06:24 steale joined #fuel
06:35 IlyaE_ joined #fuel
06:57 mrasskazov joined #fuel
07:12 saju_m joined #fuel
07:54 demon_mhm joined #fuel
07:59 e0ne joined #fuel
08:08 AndreyDanin joined #fuel
08:49 akupko joined #fuel
09:07 vk_ joined #fuel
09:07 miguitas joined #fuel
09:18 miguitas joined #fuel
09:22 evgeniyl joined #fuel
09:29 rvyalov joined #fuel
09:35 mihgen joined #fuel
09:46 e0ne joined #fuel
09:47 tatyana joined #fuel
09:50 e0ne_ joined #fuel
10:02 e0ne joined #fuel
10:07 e0ne joined #fuel
10:10 e0ne joined #fuel
10:50 e0ne joined #fuel
11:22 rvyalov joined #fuel
12:15 vk_ joined #fuel
12:17 e0ne joined #fuel
12:19 Bomfunk joined #fuel
12:24 e0ne_ joined #fuel
12:29 e0ne joined #fuel
12:30 e0ne__ joined #fuel
12:31 e0ne___ joined #fuel
12:32 e0ne_ joined #fuel
12:41 Dr_Drache joined #fuel
12:44 TVR__ joined #fuel
12:45 TVR__ missed this channel yesterday with the issues of freenode and all
12:48 Dr_Drache hey TVR__
12:48 TVR__ morning...
12:48 Dr_Drache had a guy in here yesterday with the same issue as ours, which I heard you fixed.
12:48 TVR__ which one?
12:49 TVR__ the dhcp?
12:49 Dr_Drache yea.
12:49 TVR__ I still have to go back to neutron with VLANs... but neutron with gre works fine
12:49 Dr_Drache well, he had no VLAN communications. then dhcp
12:50 Dr_Drache so GRE worked with dhcp?
12:51 TVR__ I also have a slightly *different* setup here, as my VLANs won't be extended to my switches until later today... so for now, I gave each network their own nic... which worked great
12:51 TVR__ gre works the nuts.....
12:51 TVR__ it simply just works
12:51 Dr_Drache weird.
12:52 TVR__ unfortunately, on a bigger network, the networking folks like VLAN tags, so I will have to revisit that again soon
12:52 Dr_Drache i'm going to redeploy here in a few, got another controller to test with and some routing fixed.
12:52 Dr_Drache well, our network isn't that big.
12:53 TVR__ gre, with HA and neutron will just work out of the gate... set up security groups and make sure to uncheck the default security group, and you will be all set
12:53 Dr_Drache other than users, EVERYTHING will fit into a 5 compute node + DR
12:54 Dr_Drache can't test HA yet, only have 2+2 nodes right now
12:54 TVR__ the only issue I had was the very first instance I assigned a floating IP to was unreachable from the outside.... until I ran a traceroute from the inside out.. and that seemed to *refresh* the network ... and it suddenly started working...
12:55 TVR__ cool..
12:55 Dr_Drache and i'm not getting any traction with DR + ceph.
12:56 Dr_Drache so, I may have to drop fuel soon, anyway.
12:56 TVR__ I am now in-process of writing a script to add users, security groups, networks and projects, as setting them up through the GUI is painful in large numbers..
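A bulk-provisioning script like the one TVR__ describes might be sketched as a dry run first: print the keystone/neutron commands per tenant for review, then pipe the output to sh once it looks right. All tenant names, the password, and the CIDRs below are made-up placeholders.

```shell
#!/bin/sh
# Dry-run bulk provisioning sketch: emit one block of CLI calls per
# tenant instead of executing anything. Pipe the output to sh to apply.
gen_tenant_cmds() {
    t="$1"; n="$2"
    echo "keystone tenant-create --name $t"
    echo "keystone user-create --name ${t}-admin --pass changeme"
    echo "neutron net-create ${t}-net"
    echo "neutron subnet-create ${t}-net 10.0.$n.0/24"
}

n=0
for t in dev qa ops; do     # placeholder tenant names
    gen_tenant_cmds "$t" "$n"
    n=$((n + 1))
done
```

Reviewing the generated commands before running them also leaves an audit trail of exactly what was created.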
12:56 TVR__ oh, one other thing...
12:58 TVR__ for assigning floating IPs... the default install for the external net is NOT set to shared, so you cannot attach a router until you change that setting, which means you cannot assign a floating IP until you do that... just an FYI
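TVR__'s FYI can be scripted. A sketch, assuming the Fuel-era default names `net04_ext` for the external network and `net04__subnet` for the tenant subnet (both assumptions; check your own install's names first):

```shell
# Make the external network shared, then wire a router between it and
# the tenant subnet so floating IPs become reachable from outside.
# Network/subnet/router names are assumed defaults, not guaranteed.
setup_ext_net() {
    neutron net-update net04_ext --shared=True
    neutron router-create router04
    neutron router-gateway-set router04 net04_ext
    neutron router-interface-add router04 net04__subnet
}
```

Run `setup_ext_net` once as an admin with the openrc credentials sourced.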
12:58 TVR__ DR + ceph in neutron + gre just works
12:58 TVR__ well.. for me it did
13:04 Dr_Drache DR?
13:04 Dr_Drache failing node?
13:05 e0ne joined #fuel
13:13 e0ne_ joined #fuel
13:16 TVR__ right now, I am testing how well an existing cluster responds to being expanded
13:17 Dr_Drache ahh, I just want gladosrw
13:17 Dr_Drache or, however it's spelled.
13:18 Dr_Drache gladosGW
13:21 Dr_Drache wth, i can't have multiple controllers in fuel? unless I go to a full 6 node HA?
13:26 TVR__ I have the 6 node HA... I just filed a question with support as to where the files / what files hold the cluster information so in a disaster on the fuel deployment node, recovery would be possible ..
13:28 Dr_Drache right
13:28 Dr_Drache but i have 2+2 for testing, just went non-HA, and won't let me do more than one controller...
13:29 Dr_Drache do i deploy another controller after?
13:29 TVR__ Hmmm.... I wonder how that would work.....
13:29 TVR__ Let us know
13:30 TVR__ heh.. I feel like we are the QA team for Mirantis...
13:30 TVR__ which I am ok with.. FYI
13:32 Dr_Drache we are. just shouldn't be at this level if they actually deploy clients with this.
13:33 Dr_Drache OR, when they deploy and fix shit, it gets hidden so no one else can deploy easy.
13:34 TVR__ After rolling my own puppet modules to do this, I guess I have more patience..... especially when the issues they face are the same ones I faced, but they are further along with resolution....
13:35 Dr_Drache well, I've been doing virtualization for years.
13:35 Dr_Drache to say a product does XXXX better or easier, and it doesn't, I get a sour taste.
13:36 Dr_Drache obviously i'm here for the long haul, and it doesn't bother me THAT much, or we'd not be talking
13:36 TVR__ their support is responsive... and my comparison is to IBM's GPFS support, RedHats classroom support and so far, they are far more engaged..
13:36 TVR__ I hear ya...
13:37 Dr_Drache but what happens to your support after 30 days?
13:37 TVR__ heh... guess I had better have broken it good by then....
13:37 Dr_Drache i'm not opposed to paying for support, I have it on all my stuff.
13:38 TVR__ This place has a budget.. so we will see what they offer
13:38 TVR__ it will depend on their pricing model
13:39 xdeller joined #fuel
13:40 TVR__ it will also depend on how they handle minor tweaks... for example... I plan on not having to 'source openrc' from root after.. and I will add a module to set that...
13:40 Dr_Drache we couldn't even get that information, lol
13:41 TVR__ will that cause issues with their support.. it shouldn't, but we will see the devil in the details
13:42 Dr_Drache we had a phone meeting, and asked for the pricing. and we got no email back. LOL
13:45 Dr_Drache TVR__, nope, can't deploy without 6 nodes in HA.
13:46 Dr_Drache and can't deploy more than 1 controller in non-HA
13:47 TVR__ HA can be 4 nodes... 3 controller + ceph with 1 compute + ceph and add compute + ceph as needed
13:48 Dr_Drache i don't have 3 controllers in my test rack yet.
13:48 Dr_Drache dell is slow to ship 1U units
13:50 TVR__ flame on... but I do not have a high regard for dell equipment.... they differ and I am not a big fan of their out of band management
13:50 Dr_Drache well, they are more cost effective than HPs.
13:50 Dr_Drache :P
13:51 Dr_Drache their out-of-band is find, if you just script it.
13:51 Dr_Drache *fine
13:57 Dr_Drache so going to deploy in non-HA and see if i can just add an extra node.
13:59 Dr_Drache not that it matters I guess, it's not HA, so it will just crash.
13:59 Dr_Drache :P
14:00 MiroslavAnashkin Dr_Drache: You cannot just add one more controller in non-HA mode. How would 2+ controllers in non-HA mode sync data between RabbitMQ and the DB?
14:00 Dr_Drache MiroslavAnashkin, makes sense.
14:00 Dr_Drache but you can use 2 nodes in HA mode.
14:01 Dr_Drache *can't
14:01 MiroslavAnashkin Yes, while it is strongly recommended to use at least 3 nodes.
14:01 TVR__ question: you can in a 6 node HA setup (3 controller + ceph and 3 compute + ceph) add another controller only node, correct?
14:01 TVR__ sorry.. compute only node
14:01 Dr_Drache MiroslavAnashkin, no, it's FORCED even on a test to use 3.
14:01 Dr_Drache not suggested.
14:02 MiroslavAnashkin Yes, you may add compute nodes.
14:03 MiroslavAnashkin You may even add controllers, with a couple of tricks
14:03 TVR__ ok.. cool.. because it installed fine, but now when I try (from the admin user) to create an instance, I get Error: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-932267b7-88ab-491b-8468-5d1aed3ab9d0)
14:04 TVR__ I am looking into the services to see if anything is in a bad state now
14:04 Dr_Drache wait, without tricks, your HA controllers are static?
14:06 MiroslavAnashkin Dr_Drache: Not sure. I always prefer to do it myself, to be sure the sequence is correct.
14:07 MiroslavAnashkin TVR__: Please check Horizon logs first.
14:08 TVR__ yes.. I am looking over the logs before I touch anything
14:21 TVR__ 2014-02-04T14:00:10.004834+00:00 emerg:  WARNING Recoverable error: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-932267b7-88ab-491b-8468-5d1aed3ab9d0)
14:21 TVR__ that is from my dashboard-horizon.exceptions.log
14:21 TVR__ still looking at the others
14:31 mrasskazov2 joined #fuel
14:44 TVR__ all other horizon logs do not lead me anywhere... looking at other logs
14:51 angdraug joined #fuel
14:59 IlyaE joined #fuel
15:07 anotchenko joined #fuel
15:09 TVR__ so I have an existing instance running... rebooting it worked fine.. it could get outside the network as it had an associated floating IP attached... so I tried shutting it down from the console... it shut down... now when I try to start it up, it says it started successfully, and it does try, but it will not start anymore.
15:17 MiroslavAnashkin TVR__: Do you create new volume for this instance? Please check the available free space on your storage, then at the glance/cinder mount points.
15:17 Dr_Drache ...really?
15:18 Dr_Drache so, now fuel won't deploy to R415 Dells.
15:18 Dr_Drache well, not ubuntu
15:18 Dr_Drache fails to boot.
15:19 TVR__ ceph for volumes AND images...
15:19 TVR__ pgmap v78492: 892 pgs: 892 active+clean; 2623 MB data, 41690 MB used, 8281 GB / 8321 GB avail
15:19 TVR__ barely touching it
15:20 TVR__ ceph -s works from new compute node, so it sees storage and such... rados commands work, etc
15:21 Dr_Drache MiroslavAnashkin, http://www.sendspace.com/file/k1p2nl
15:21 Dr_Drache diag logs, dell R415 fresh out of box, controller deploy.
15:26 anotchenko joined #fuel
15:26 MiroslavAnashkin TVR__: Please try to refresh the Horaison UI with Shift+Reload. Or clean up the cookies. It may be an obsolete token.
15:27 MiroslavAnashkin Horizon
15:28 TVR__ I can log out and log back in as well
15:31 TVR__ same behavior ... I shift and refreshed, logged out.. killed the web tab... started a new tab.. logged in... same behavior ...
15:33 MiroslavAnashkin TVR__: Please create diagnostic snapshot.
15:35 TVR__ in process... on it
15:36 Dr_Drache MiroslavAnashkin, looking through my log
15:37 TVR__ between Dr_Drache and I, any issues that can come up will be addressed ....
15:37 MiroslavAnashkin Dr_Drache: BTW, have you run Verify Networks before deployment? Has network check passed?
15:37 Dr_Drache I can't see why my controller isn't deploying.
15:37 Dr_Drache MiroslavAnashkin, yes.
15:37 Dr_Drache this is the 2nd attempts
15:37 Dr_Drache to deploy on that node
15:42 MiroslavAnashkin Dr_Drache: Gimme 10 minutes, I'll finish with today's code reviews and become all yours)
15:44 Dr_Drache MiroslavAnashkin, lol, OK
15:55 TVR__ ftp server?
15:56 anotchenko joined #fuel
15:56 TVR__ attaching the 127M file to the support ticket doesn't seem to want to work
15:57 MiroslavAnashkin TVR__: Yes, you may use google drive, sendspace etc and provide a link to download.
16:00 Dr_Drache 127M
16:00 Dr_Drache now THAT is a log
16:01 jouston_ joined #fuel
16:04 MiroslavAnashkin We have logs 2-3 GB in size with million+ records. It is OK, we just had to increase snapshot creation timeout.
16:05 anotchenko joined #fuel
16:34 IlyaE joined #fuel
16:48 anotchenko joined #fuel
16:49 angdraug joined #fuel
16:50 designated Are there plans to be able to backup a Fuel install/environments to account for fuel/hardware failure with the ability to restore those backups during a reinstall of fuel?
16:52 bookwar joined #fuel
16:56 MiroslavAnashkin designated: Yes. There are even greater plans.
16:56 designated MiroslavAnashkin, glad to hear that.  Is there a rough timeline?
16:57 TVR__ I had actually just sent support a similar question..
17:03 designated MiroslavAnashkin, you've whetted my appetite with "greater plans".  Can you elaborate?
17:05 TVR__ log is here:    https://drive.google.com/file/d/0B5xmhlRed6NqVkVEbThmNWFjdHM/edit?usp=sharing
17:07 xarses joined #fuel
17:12 designated TVR__, Thank you
17:13 Dr_Drache designated, that's not for you. lol
17:19 designated Dr_Drache, I realized that after clicking it lol
17:22 TVR__ what? you won't parse my 124M of logs for me?!
17:22 TVR__ heh
17:24 TVR__ designated ... with MiroslavAnashkin seeming to know all things openstack, I ask any issues I have, even when I have a ticket in here as he seems to just know the answers... I added a compute node to a fully functioning cluster and now have issues, so I wanted to be sure MiroslavAnashkin had access to my logs
17:26 richardkiene joined #fuel
17:27 anotchenko joined #fuel
17:28 Dr_Drache ...lol
17:29 mihgen joined #fuel
17:34 Dr_Drache MiroslavAnashkin, is fuel going to be updated for 2013.2.2?
17:36 rmoe joined #fuel
17:44 kpimenova_ joined #fuel
17:49 IlyaE joined #fuel
17:56 MiroslavAnashkin Dr_Drache: Could you connect to the node-4 (PowerEdge R415) console, to see what error message it prints on boot? Looks like Ubuntu installed OK, then went to reboot and never came back
17:56 anotchenko joined #fuel
18:01 MiroslavAnashkin Dr_Drache: Is your PowerEdge R415 configured to boot via UEFI?
18:03 kpimenova__ joined #fuel
18:08 e0ne joined #fuel
18:14 Dr_Drache MiroslavAnashkin, it shouldn't be UEFI
18:14 Dr_Drache but i can check again
18:14 Dr_Drache it fails to do anything after grub
18:17 MiroslavAnashkin TVR__: Please check Security Groups. I've found the following in your logs:
18:17 MiroslavAnashkin TVR__: ERROR: Caught error: Multiple security_group matches found for name 'Allow_ALL', use an ID to be more specific.
18:21 MiroslavAnashkin https://bugs.launchpad.net/nova/+bug/1241480
18:22 MiroslavAnashkin And https://bugs.launchpad.net/horizon/+bug/1203413
18:32 mihgen joined #fuel
18:33 e0ne_ joined #fuel
18:33 MiroslavAnashkin TVR__: Workaround: security group names must be unique across the whole OpenStack installation; the same group name in different tenants will not work.
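One way to script around that ambiguity is to resolve the name to an ID inside the current tenant and always pass the ID to nova, never the bare name. A sketch (the helper name is made up; it parses a `neutron security-group-list`-style table of `| id | name | description |` rows on stdin):

```shell
# sg_id NAME: print the ID of the first row whose name column matches
# NAME. Within a single tenant the name is unique, so the ID printed
# here is unambiguous even when other tenants reuse the same name.
sg_id() {
    awk -F'|' -v name="$1" '$3 ~ name { gsub(/ /, "", $2); print $2; exit }'
}

# Usage sketch:
#   nova boot ... --security-groups "$(neutron security-group-list | sg_id Allow_ALL)"
```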
18:38 MiroslavAnashkin Dr_Drache: Yes, upcoming Fuel 4.1 is planned to ship with 2013.2.1. 2013.2.2 is not released yet, and it looks like its release is planned after Fuel 4.1
18:38 mutex joined #fuel
18:39 MiroslavAnashkin Dr_Drache: So, if the new OpenStack 2013.2.2 comes on schedule - we'll include it in Fuel 4.1
18:46 Dr_Drache and, 3-3 is the date you're hoping for 4.1?
18:50 IlyaE joined #fuel
18:50 MiroslavAnashkin Currently it is scheduled for very late February. We should update and test our packages, apply fixes and customizations and test the new 2013.2.2 release before Fuel 4.1
18:54 anotchenko joined #fuel
18:56 Dr_Drache MiroslavAnashkin, bios mode.
18:58 tatyana joined #fuel
19:06 MiroslavAnashkin Dr_Drache: Hmm, interesting. Could you please boot this server with an Ubuntu 12.04 image and check the disk layout with `fdisk -l` or `parted`? Or even with GParted. I still suspect it is partitioned with GPT.
19:06 TVR__ security group did it... nice catch...
19:07 Dr_Drache MiroslavAnashkin, working on that as we speak.
19:08 Dr_Drache MiroslavAnashkin, new development, grub fails at finding the boot device
19:08 Dr_Drache drops me to a initramfs>
19:12 MiroslavAnashkin Dr_Drache: Well, then run `fdisk -l` right from bootstrap (initramfs)
19:14 TVR__ I requested the ticket closed... as usual, MiroslavAnashkin is the god of all things openstack
19:14 TVR__ thanks again
19:18 MiroslavAnashkin TVR__: There is a whole pantheon of gods; they are simply busy polishing new Fuel 4.1 features, while I work mostly on the patches to 4.0
19:23 Dr_Drache anyone want a google glass invite?
19:23 TVR__ cool, cool.......
19:23 Dr_Drache I can't spare my funds to get one.
19:23 TVR__ heh, nice... but I would prolly end up divorced if I wore glass all the time
19:24 Dr_Drache yea, i have 2 invites, but don't have the spare coin to get one.
19:24 Dr_Drache so, i'd give them away if i could find anyone
19:24 Dr_Drache MiroslavAnashkin,
19:24 Dr_Drache https://www.dropbox.com/s/cmpb3pzyjqkfzb4/IMAG0108.jpg
19:38 designated I'll take a google glass invite
19:39 MiroslavAnashkin Dr_Drache: Hmm, that biosgrub partition reminds me of GPT. Please try running `parted -l`. Or manually delete all partitions, convert the disk to MBR, and then re-deploy OpenStack.
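The GPT suspicion can also be confirmed without `parted`, even from the initramfs shell: GPT writes the ASCII signature "EFI PART" at byte offset 512 (the start of LBA 1, assuming 512-byte logical sectors). A sketch:

```shell
# is_gpt DEVICE: succeed if the disk (or disk image) carries a GPT
# header. Assumes 512-byte logical sectors; the header lives at LBA 1.
is_gpt() {
    sig=$(dd if="$1" bs=1 skip=512 count=8 2>/dev/null)
    [ "$sig" = "EFI PART" ]
}

# e.g. from the initramfs prompt:  is_gpt /dev/sda && echo "GPT, not MBR"
```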
19:39 designated i signed up but haven't heard anything
19:39 Dr_Drache MiroslavAnashkin, doing option 2
19:39 Dr_Drache designated, PM please
20:06 vk_ joined #fuel
20:06 Dr_Drache MiroslavAnashkin, it seems to have done it again.
20:08 IlyaE joined #fuel
20:08 MiroslavAnashkin Dr_Drache: It indicates the Debian installer considers your system to be UEFI
20:10 Dr_Drache wtf, if i wanted UEFI i would have enabled it.
20:10 Dr_Drache lol
20:10 Dr_Drache thanks
20:28 e0ne joined #fuel
21:04 angdraug I didn't follow most of the conversation, but fuel provisioning scripts do use GPT
21:04 angdraug ceph osd udev rules need it to automount osd devices
21:05 angdraug if you replace that with mbr, you'll need to put the osd devices into fstab
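If the OSD disks really were converted to MBR, the udev automount angdraug mentions goes away and each OSD filesystem needs an explicit /etc/fstab entry, along these lines (the UUID, mount point, and XFS filesystem type are illustrative placeholders, not values from this deployment):

```
# /etc/fstab -- mount OSD data partitions explicitly instead of relying
# on GPT-based udev rules (UUID and path below are placeholders)
UUID=replace-with-osd-uuid  /var/lib/ceph/osd/ceph-0  xfs  noatime,inode64  0 0
```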
21:11 Dr_Drache well
21:11 Dr_Drache either way
21:11 Dr_Drache system won't boot
21:11 Dr_Drache and it's a controller, not a OSD
21:33 Dr_Drache angdraug, that was a response to you, I know you don't watch as much
22:37 tatyana joined #fuel
23:38 vk_ joined #fuel
