
IRC log for #fuel, 2015-07-08


All times shown according to UTC.

Time Nick Message
00:36 eliqiao left #fuel
01:15 Longgeek joined #fuel
01:33 CTWill mwhahaha: thanks for all the help
01:45 rmoe joined #fuel
01:59 claflico joined #fuel
02:04 claflico joined #fuel
02:07 youellet__ joined #fuel
02:18 xarses joined #fuel
02:52 hakimo_ joined #fuel
03:16 Longgeek joined #fuel
03:46 CheKoLyN joined #fuel
05:14 neophy joined #fuel
05:16 Longgeek joined #fuel
06:02 darkhuy joined #fuel
06:04 darkhuy_ joined #fuel
06:10 darkhuy joined #fuel
06:15 dancn joined #fuel
06:19 stamak joined #fuel
07:25 e0ne joined #fuel
07:25 darkhuy joined #fuel
08:04 HeOS joined #fuel
08:20 HeOS joined #fuel
08:39 martineg_ Samos123: I saw your PR to the ansible inventory script, have merged it now. thanks.
08:42 stamak joined #fuel
08:59 devvesa joined #fuel
09:00 monester joined #fuel
09:03 e0ne joined #fuel
09:40 devvesa joined #fuel
09:42 Samos123 oh hi ;)
11:00 saibarspeis joined #fuel
11:42 Longgeek joined #fuel
11:42 Longgeek joined #fuel
11:47 stamak joined #fuel
12:25 Longgeek joined #fuel
12:40 Billias I am building 2 nodes for Block Storage.
12:40 Billias and I use 2 servers with JBOD on them, with 6TB disks.
12:41 Billias is Replication factor: 2 enough? Do I need anything special? I will have 6*6TB disks on each server
12:41 Billias and 4*10GBit LACP ethernets.
13:00 obcecado joined #fuel
13:01 martineg_ joined #fuel
13:14 championofcyrodi attempting LACP myself.... this srw2048 is not very good at explaining whether or not i need to create a LAG (Link Aggregate Group) to support LACP.
13:18 championofcyrodi ahh figured it out
13:18 championofcyrodi https://community.linksys.com/t5/Switches/SRW2048-LACP-problem-SOLVED/td-p/163357
13:19 championofcyrodi limitation of the MS XML 5.0 Web UI... have to 'uncheck' the ports, check LACP, then check the ports... lol
13:20 DevStok joined #fuel
13:20 DevStok hi to all bots
13:21 DevStok on the compute nodes we're facing a problem with space
13:21 DevStok in the folder /var/tmp/ there are a lot of files named guestfs.***
13:22 DevStok can they be deleted?
13:28 championofcyrodi it's generally good practice (from an engineering perspective) to only put temporary files in that path.  That is, if they are actually 'in use' by a process you will not be able to delete them, and if they are not in use, then they can typically be removed.
13:29 championofcyrodi one 'trick' i use is to just move the files to another path, and run some tests.
13:29 championofcyrodi if everything behaves and functions as expected, you can likely, 'safely' remove them.
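The move-then-test trick championofcyrodi describes can be sketched as a small shell function. Everything here (function name, quarantine path, use of `fuser` for the in-use check) is illustrative, not part of any Fuel or libguestfs tooling:

```shell
# Move guestfs.* temp files out of a directory into a quarantine dir,
# skipping any file a process still holds open (checked with fuser when
# available).  Run your tests afterwards; remove the quarantine dir once
# everything still behaves as expected.
quarantine_guestfs() {
    src=$1 dest=$2
    mkdir -p "$dest"
    for f in "$src"/guestfs.*; do
        [ -e "$f" ] || continue          # glob matched nothing
        if command -v fuser >/dev/null 2>&1 && fuser "$f" >/dev/null 2>&1; then
            echo "in use, skipping: $f"
        else
            mv "$f" "$dest/"
        fi
    done
}

# Example (hypothetical paths):
#   quarantine_guestfs /var/tmp /var/tmp/guestfs-quarantine
```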
13:29 samuelBartel HI all
13:30 samuelBartel is it possible in 6.1 to assign node to nodegroups using fuel cli
13:30 samuelBartel ?
13:30 samuelBartel i am always getting a "not allowed" error
13:33 championofcyrodi samuelBartel: so i guess you're trying to figure out why it's "not allowed"?
13:33 mwhahaha samuelBartel: it should be allowed via the cli
13:35 championofcyrodi https://bugs.launchpad.net/fuel/+bug/1464682 <- related?
13:36 samuelBartel championofcyrodi, yes it seems to be related. any chance to get a backpoprt to 6.1?
13:37 championofcyrodi i don't work here... but you can always ask someone else, or update the bug report to note that it affects you and request a backport to 6.1
13:37 samuelBartel when reading the bug I understand it is related to misleading error message
13:37 samuelBartel but in my case it never works
13:38 championofcyrodi likely you will not be able to obtain a patch unless you do it yourself by reviewing the 7.0 code if it gets patched, or pay someone.
13:38 championofcyrodi (like Mirantis)
13:42 samuelBartel yes there is a fix for 7.0
13:42 mwhahaha the bug seems to indicate that you have to remove the node from the environment before adding it to a new group
13:42 mwhahaha comment #2
13:42 samuelBartel mwhahaha, that's what I tried
13:43 samuelBartel i had nodes with no environment => group = null
13:43 samuelBartel so I should get the error when checking if the node is already part of a group
13:44 mwhahaha well we can add 6.1 to that bug and maybe someone can backport it
13:45 samuelBartel i can do it if needed
13:46 mwhahaha i updated the bug to add a 6.1-updates
13:46 mwhahaha might just be able to cherry pick the fix
13:47 samuelBartel yes
13:48 mwhahaha if you want to give it a shot, go ahead. i can take a look later if i get some cycles
13:48 samuelBartel ok I am going to try it
13:50 samuelBartel mwhahaha, fix to 6.1 or 6.1.2?
13:50 mwhahaha you'd port the fix to stable/6.1
13:51 samuelBartel ok
13:51 samuelBartel will try first to cherry-pick
13:51 samuelBartel and otherwise backport it
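The cherry-pick-then-backport workflow mwhahaha suggests can be tried out on a throwaway repo first. The repo layout, branch point, and commit messages below are a toy illustration, not the real python-fuelclient tree:

```shell
# Toy demo of `git cherry-pick -x`: fork a stable branch, land a fix on the
# development branch, then port it onto stable.  -x records the source sha
# in the new commit message, the usual convention for backports.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo base > file; git add file; git commit -qm "base"
git branch stable/6.1                  # stable branch forks off here
echo fix >> file; git commit -qam "the fix"
fix_sha=$(git rev-parse HEAD)
git checkout -q stable/6.1
git cherry-pick -x "$fix_sha"          # port the fix to stable/6.1
```

If the cherry-pick conflicts, that is the point where it becomes a real backport: resolve by hand, then commit and submit against the stable branch.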
13:59 ub2 joined #fuel
14:03 championofcyrodi frustration!
14:03 championofcyrodi (/Stage[main]/Ceph::Conf/Exec[ceph-deploy config pull]/returns) change from notrun to 0 failed: ceph-deploy --overwrite-conf config pull node-5 returned 1 instead of one of [0]
14:04 mwhahaha again?
14:04 championofcyrodi well, its slightly different...
14:04 championofcyrodi before seemed to be an issue with the keyring...
14:05 championofcyrodi i got LACP working and cat /proc/net/bonding/bond0 shows a successful and active aggregate...
14:05 mwhahaha you still getting the connection reset?
14:06 championofcyrodi http://paste.openstack.org/show/355478/
14:06 championofcyrodi no route now...
14:06 mwhahaha wat
14:07 championofcyrodi seems like the LACP i guess is not working... i think the only reason i can get to it from the fuel master is because it's using a dedicated NIC
14:07 championofcyrodi but from node->controller, it's using management/private/storage on LACP...
14:08 mwhahaha but you'd be using the admin network
14:08 mwhahaha node-5 should translate to the admin network address
14:09 championofcyrodi when i ping node-5 (controller) from node-4, it resolves the management subnet
14:09 championofcyrodi which is using VLAN tagging... so maybe there is an issue w/ VLAN on LACP...
14:09 samuelBartel mwhahaha, question regarding unit tests on python-fuelclient: do I have to deploy all modules or is there a way to have mock tests?
14:09 samuelBartel i am trying to launch it with tox but getting errors because of failed dependencies
14:10 mwhahaha there is a test that was added as part of that patch
14:10 mwhahaha so it should work with the tests
14:10 championofcyrodi pinging node-5 from the fuel master resolves the Admin (Pxe) IP, which is NOT LACP...
14:10 championofcyrodi so i think it's the bonding that is not working still...
14:10 championofcyrodi or at least the IP is not assigned to the bond0 interface.
14:12 championofcyrodi http://paste.openstack.org/show/355480/
14:13 championofcyrodi just checked route table... there is a route to br-mgmt for 10.10.28.0
14:14 championofcyrodi so thats not the issue... :(
14:15 mwhahaha well no route to host is a networking thing so you'll have to look into that
14:23 darkhuy joined #fuel
14:27 championofcyrodi hmm i wonder if this is related to "GVRP" which i just enabled.  it seems as though there is a unique configuration on my switch for link aggregate groups utilizing VLANs.
14:38 claflico joined #fuel
14:39 rmoe joined #fuel
15:24 rodrigo_BR joined #fuel
15:25 jobewan joined #fuel
15:40 e0ne joined #fuel
15:51 stamak joined #fuel
15:57 xarses joined #fuel
16:10 dontalton joined #fuel
16:30 jhova joined #fuel
16:41 bitblt joined #fuel
16:56 kutija joined #fuel
16:57 _Mordor_ joined #fuel
16:57 kutija how can I manually change a status of volume which is currently showing status "attached to none" because the instance that used this volume does not exist anymore?
16:57 kutija I use Ceph
16:58 kutija with cinder I just could manually update MYSQL and change it's status
16:58 kutija but I do not have Cinder anymore
16:58 kutija and nova volume-update does not love me
16:59 kutija so now I can't either detach it, delete it or attach it to some other instance
16:59 neophy joined #fuel
17:00 Akshik joined #fuel
17:00 skath joined #fuel
17:01 mwhahaha shouldn't it still be mysql?
17:02 kutija it should be in table block_device_mapping
17:02 kutija and it is there
17:02 kutija but I am not certain which column should be updated
17:02 kutija in order to detach it
17:04 kutija http://pastebin.com/ACYJSCFv
17:04 kutija so which one is it? :)
17:05 mwhahaha i have no idea
17:07 mwhahaha might be a question for nova?
17:07 kutija well this bug exists from Folsom
17:07 kutija and it's stupid and pretty irritating
17:07 kutija but that should be a question for nova team
17:09 kutija mwhahaha aren't you working for Mirantis as dev?
17:10 mwhahaha yea but on fuel
17:10 mwhahaha so deployments are my cup of tea
17:11 mwhahaha the actual openstack issues, i'd have to look in the code to track stuff down
17:11 kutija cool, I see that 6.1 is stable now
17:13 kutija that's nice, I have to see if there is some big difference between my build and the production one
17:13 kutija but before that I need to detach this god damned volume...
17:14 kutija mwhahaha if there is someone from nova team or whoever could answer to this question please pass it
17:15 mwhahaha so do you want to update that volume to show that it is attached to another instance?
17:15 mwhahaha or do you want to delete it because it's not attached to anything anymore
17:16 kutija I want to attach it to other instance
17:16 kutija or, if that is not possible
17:16 kutija to delete it (but not just from MYSQL)
17:18 mwhahaha http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-April/038563.html
17:18 mwhahaha is that the issue?
17:19 kutija nope
17:20 kutija I had an instance and a volume attached to it, I have deleted an instance successfully but volume shows that is still attached to that instance
17:20 kutija and I can't do anything with it using nova tools
17:21 kutija instance is marked as deleted in database and does not exist on compute node anymore
17:21 kutija and the paste from above shows the volume status
17:21 mwhahaha yea but it also shows deleted: 792
17:21 kutija which means what
17:22 mwhahaha you could also try setting instance_uuid to null or something
17:22 kutija not possible
17:22 kutija mysql> update block_device_mapping SET instance_uuid = "" WHERE id = 792 AND volume_id = "43a14a6e-5cbd-45d1-9b49-fd745bf37c17";
17:22 kutija ERROR 1452 (23000): Cannot add or update a child row: a foreign key constraint fails (`nova`.`block_device_mapping`, CONSTRAINT `block_device_mapping_instance_uuid_fkey` FOREIGN KEY (`instance_uuid`) REFERENCES `instances` (`uuid`))
17:22 mwhahaha i wonder if you just remove that entry to essentially unattach it
17:23 kutija it's possible
17:23 kutija theoretically it should work
17:23 mwhahaha like take a backup of the record and then try deleting it to see if it resolves your problem
17:23 kutija because this is mapping table
17:23 kutija oh well what could go wrong? :)
17:23 kutija I'll try that
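The foreign-key failure above is reproducible in miniature. This sqlite3 sketch uses a hypothetical two-table schema loosely modelled on nova's block_device_mapping; it is not the real nova schema, and nova runs on MySQL, but the constraint behaves the same way: an empty-string uuid is rejected because it references nothing, while deleting the child row (or setting the column to NULL) goes through:

```shell
# Reproduce the constraint: instance_uuid must reference a row in
# instances, so '' fails, but NULL or deleting the mapping row is fine.
db=$(mktemp)
sqlite3 "$db" <<'SQL'
PRAGMA foreign_keys = ON;
CREATE TABLE instances (uuid TEXT PRIMARY KEY);
CREATE TABLE block_device_mapping (
    id INTEGER PRIMARY KEY,
    instance_uuid TEXT REFERENCES instances(uuid),
    volume_id TEXT);
INSERT INTO instances VALUES ('dead-beef');
INSERT INTO block_device_mapping VALUES (792, 'dead-beef', '43a14a6e');
-- UPDATE block_device_mapping SET instance_uuid = '' WHERE id = 792;
--   would fail: '' is not a uuid present in instances.
-- SET instance_uuid = NULL or deleting the row both satisfy the FK:
DELETE FROM block_device_mapping WHERE id = 792;
SQL
sqlite3 "$db" 'SELECT count(*) FROM block_device_mapping;'
```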
17:34 rpb joined #fuel
17:52 Akshik hi, while trying to update Fuel 5.1 to 6.0 I'm facing a few errors
17:52 Akshik please help
17:53 Akshik http://pastebin.com/miHD7aAK
17:57 Longgeek joined #fuel
17:59 Longgeek joined #fuel
18:00 ub joined #fuel
18:02 kutija mwhahaha found a solution
18:02 kutija just had to update cinder.volumes
18:15 thumpba joined #fuel
18:18 mwhahaha ah ok
18:35 stamak joined #fuel
18:54 BludGeonT joined #fuel
18:58 championofcyrodi going back to centos w/ lacp... i know the bond is configured right because i can see the aggregate ID as the same on each NIC, and the switch is showing the aggregate group as all Active..
18:59 championofcyrodi but found some notes about kernel/bug issues with VLANs on bonded bridge. aka, br-mgmt VLAN.
18:59 championofcyrodi related to older ubuntu.
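The "aggregate ID is the same on each NIC" check championofcyrodi does by eye can be scripted. The helper below just counts distinct "Aggregator ID" lines in the bonding driver's status file (normally /proc/net/bonding/bond0 for an 802.3ad bond); the function name is made up for this sketch:

```shell
# Succeeds when all "Aggregator ID:" lines in a bonding status file agree,
# i.e. every slave (and the bond itself) joined one LACP aggregate.
same_aggregator() {
    n=$(awk '/Aggregator ID/ {print $NF}' "$1" | sort -u | wc -l)
    [ "$n" -eq 1 ]
}

# Example (real path, needs an active 802.3ad bond):
#   same_aggregator /proc/net/bonding/bond0 && echo "LACP OK"
```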
19:00 ub joined #fuel
19:07 championofcyrodi sweet! it looks like i might get something working this time... arp shows a MAC Address for br-mgmt this time! on the ubuntu image it was showing <incomplete> on the MAC Address, the same way shown here: https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/785668
19:09 e0ne joined #fuel
19:09 kutija sigh... note to myself
19:10 kutija when everything else fails to work, one hard reboot can do miracles
19:10 championofcyrodi also, 3rd day is a charm.
19:11 championofcyrodi ( e.g. after failing the first two days)
19:13 bitblt joined #fuel
19:14 fBigBro joined #fuel
19:14 junkao joined #fuel
19:36 junkao joined #fuel
19:36 xarses joined #fuel
19:40 championofcyrodi [root@node-7 ~]# arp
19:40 championofcyrodi node-10.ccri.com                 (incomplete)                              br-mgmt
19:46 championofcyrodi node-10 is the controller w/ lacp :(  once again, foiled.
19:48 mwhahaha have you tried just active-backup yet?
19:50 rbrooker joined #fuel
19:53 championofcyrodi even if active/backup worked, i'd be funneling all my CEPH clients through a single 1Gbps connection, so no.
19:53 ub joined #fuel
19:55 championofcyrodi the odd thing is, that the bonding seems to be okay via linux kernel module... just no route exists...
19:55 championofcyrodi going to go back and look in to the LAG + VLAN support.
19:55 championofcyrodi (on my switch)
19:56 HeOS joined #fuel
20:02 kutija_ joined #fuel
20:09 junkao_ joined #fuel
20:11 e0ne joined #fuel
20:17 kutija joined #fuel
20:20 championofcyrodi so it looks like the ARP requests from the controller arent going out on the VLAN...
20:20 championofcyrodi e.g. From compute node:
20:20 championofcyrodi 00:25:90:f3:20:c1 > Broadcast, ethertype 802.1Q (0x8100), length 46: vlan 201, p 0, ethertype ARP, Request who-has 10.10.28.7 tell 10.10.28.5, length 28
20:21 championofcyrodi e.g. From controller w/ LACP:
20:21 championofcyrodi 42:0c:a9:e2:5d:be > Broadcast, ethertype ARP (0x0806), length 42: Request who-has 10.10.28.5 tell 10.10.28.7, length 28
20:21 championofcyrodi thus communication from slave<->slave is working because the arp requests are going over the VLAN...
20:21 championofcyrodi but my bridge over bond0 is not routing through VLAN...
20:22 championofcyrodi well gotta go... i'll hack at this more tomorrow.
20:36 CTWill joined #fuel
20:38 rbrooker joined #fuel
21:04 Akshik while trying to upgrade from fuel 5.1 to 6.0 stuck with http://pastebin.com/4arafp1H
21:29 e0ne joined #fuel
22:08 CTWill why not go to 6.1?
22:09 CTWill I had some initial pain due to an iptables problem but it was an easy upgrade
22:09 CTWill just had to make sure I had enough free space to uncompress the upgrade
22:40 kaliya joined #fuel
22:40 kaliya #fuel-dev
22:41 Topic for #fuel is now Fuel 6.1 (Juno) https://software.mirantis.com | Paste here http://paste.openstack.org/ | IRC logs http://irclog.perlgeek.de/fuel/
22:55 rmoe joined #fuel
