
IRC log for #fuel, 2014-07-22


All times shown according to UTC.

Time Nick Message
00:49 rmoe joined #fuel
01:03 xarses joined #fuel
02:13 jobew_000 joined #fuel
02:16 jobew_000 joined #fuel
02:17 jobewan joined #fuel
02:19 jobewan joined #fuel
02:54 mattgriffin joined #fuel
03:13 geekinutah joined #fuel
04:13 Longgeek joined #fuel
04:27 Rajbir joined #fuel
04:27 Rajbir Hi All,
04:27 Rajbir I've a query
04:27 ArminderS joined #fuel
04:28 Rajbir I have installed openstack icehouse
04:28 Rajbir but the problem is that the Floating IP is not working
04:28 Rajbir Any advise ?
04:29 Rajbir Also I'm using neutron for networking
04:30 Rajbir By Floating IP I mean the secondary IPs are not working
04:52 xarses joined #fuel
05:09 Rajbir anyone ?
05:12 jobewan joined #fuel
05:23 Rajbir further to add, I'm not able to locate the entry auto_assign_floating_ip in the nova.conf file, which used to be there when I was using quantum
05:23 Rajbir Do I have to add that entry in nova.conf file ?
06:22 e0ne joined #fuel
06:32 e0ne joined #fuel
06:38 e0ne joined #fuel
06:39 al_ex11 joined #fuel
06:49 e0ne joined #fuel
06:51 hyperbaba joined #fuel
06:58 pasquier-s joined #fuel
07:08 e0ne joined #fuel
07:16 e0ne joined #fuel
07:42 mkulke joined #fuel
07:44 artem_panchenko joined #fuel
07:55 topochan joined #fuel
08:18 tuvenen joined #fuel
08:20 adanin joined #fuel
08:30 e0ne joined #fuel
08:36 tuvenen_ joined #fuel
08:42 mkulke hello there, i have a problem understanding the concept of public network in an openstack (nova-network) setup. i understand it is supposed to be public in the sense that it's a range within the corporate network (for a private cloud). am i right?
08:44 mkulke because the fuel installer complains that on my eth1 (where i put the public network) there are dhcp servers (which is correct, since it is the corporate network)
08:54 mkulke can i ignore the warning then? because on the public network there will be no dhcp server from the openstack side? (the rest of the openstack networks are vlan-tagged)
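A quick way to see what the Fuel check is warning about is to listen for DHCP traffic on that NIC directly; a minimal sketch, assuming the public network sits on eth1 as described:

    # any offers seen here come from the corporate DHCP server, not from OpenStack
    tcpdump -ni eth1 -c 5 'port 67 or port 68'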
09:14 e0ne joined #fuel
09:44 pasquier-s joined #fuel
09:53 taj joined #fuel
09:55 jaranovich joined #fuel
09:55 brain461 joined #fuel
09:56 racingferret joined #fuel
10:02 racingferret hi guys, anyone online for a quick chat about storage?
10:14 evg_ Rajbir: Hi, you should check this parameter in the Fuel settings tab before deployment. Now you have to add "auto_assign_floating_ip=True" to your nova.conf and then restart the nova-network service.
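A minimal sketch of the change evg_ suggests, assuming an Ubuntu controller where nova-network runs (on CentOS the service is named openstack-nova-network):

    # /etc/nova/nova.conf, in the [DEFAULT] section
    auto_assign_floating_ip=True

    # restart so nova-network picks up the new flag
    service nova-network restart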
10:18 evg_ mkulke: a dhcp server is started by the nova-network service, so it will conflict with the outer one.
10:22 mkulke evg_: does it offer dhcp services on the nic? (eth1, only public network is assigned to it)
10:22 mkulke all other networking is running on eth0
10:33 pasquier-s joined #fuel
10:38 racingferret so, just seeking clarification regarding what storage partitions are required...
10:40 racingferret if I choose Ceph for my Cinder and Glance backends, I assume I don't have to assign the "Cinder LVM" role to the compute nodes.
10:42 racingferret If I assign the "compute" and "ceph-osd" roles to the node, that gives me three partitions on my disk: 1. Base System, 2. Ceph, 3. Virtual Storage
10:43 racingferret so my question is: what is the "Virtual Storage" partition used for and can it be replaced with Ceph entirely?
10:50 hyperbaba racingferret: Virtual Storage is used for ephemeral storage (if you did not check ceph for that too) and for making clones/snapshots. For now nova pulls the volume from ceph, converts it using qemu-img and pushes it back to ceph
10:51 hyperbaba racingferret: very frustrating indeed. Instead of couple of seconds for the snapshot you have to wait for hours (depending on volume size)
10:55 hyperbaba racingferret: because guys from nova did not want to include rados mksnap and other patches to achieve full ceph functionality
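Roughly the roundtrip hyperbaba is describing versus a native Ceph snapshot; the pool and image names below are placeholders, not what nova actually uses:

    # slow path: pull the image out of ceph, run it through qemu-img, push it back
    rbd export volumes/volume-1234 /tmp/volume-1234.raw
    qemu-img convert -O raw /tmp/volume-1234.raw /tmp/snapshot-1234.raw
    rbd import /tmp/snapshot-1234.raw volumes/snapshot-1234

    # what the unmerged patches would allow instead: a copy-on-write snapshot inside ceph
    rbd snap create volumes/volume-1234@snapshot-1234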
10:56 racingferret hyperbaba: thanks for that. I do have "Ceph RBD for ephemeral volumes" checked, so I assume I can use Ceph for everything.
10:58 racingferret the problem is, if I set the virtual storage partition to 0MB, Fuel says I must allocate a minimum of 5,120MB, which sounds as though it's still required for something?
10:58 hyperbaba In my installations i use the rest of the system disk for that
10:59 hyperbaba ceph must be on a separate disk anyway to be useful
10:59 hyperbaba Until the nova team accepts the patches.. Then the fuel team (i hope) will follow in the setup logic
11:45 e0ne joined #fuel
11:52 adanin joined #fuel
11:55 pasquier-s joined #fuel
12:23 vogelc joined #fuel
12:23 jbellone left #fuel
12:24 vogelc Anyone know the GA release date for Fuel 5.0.1?
12:33 tuvenen joined #fuel
13:14 tdubyk joined #fuel
13:20 binusida joined #fuel
13:26 thehybridtech joined #fuel
13:35 mattgriffin joined #fuel
13:43 geekinutah joined #fuel
13:47 taj joined #fuel
13:47 adanin joined #fuel
13:53 sc-rm joined #fuel
13:54 sc-rm Hi, I’ve installed Fuel 5.0 on a bare metal, but after installation I don’t see any server running on port 8000
13:54 sc-rm and I don’t see nginx installed on the machine
13:54 sc-rm but I can do fuelmenu
13:55 taj joined #fuel
14:05 mkulke sc-rm: you might have selected the wrong nic configuration
14:06 mkulke in my experience fuel needs network interfaces
14:13 getup- joined #fuel
14:18 e0ne_ joined #fuel
14:31 pasquier-s joined #fuel
14:32 jobewan joined #fuel
14:32 wrale is there any guide on applying ssl to horizon in H/A on 5.x ? i'm okay with doing it manually, if that is the only way available.
14:35 vidalinux joined #fuel
14:37 Longgeek_ joined #fuel
14:37 ArminderS joined #fuel
14:52 adanin joined #fuel
15:06 angdraug joined #fuel
15:11 wrale Is it better to use RAW or QCOW2 OS images if I built with fuel and configured ceph as backing for all things possible?
15:11 xarses joined #fuel
15:22 wrale Is there any downside to using RAW boot images with Ceph?  Why use QCOW2 if it's just going to get converted to RAW anyway, right?
15:25 vidalinux wrale, QCOW2 is optimized for KVM
15:29 wrale looks like good info on the topic: http://irclog.perlgeek.de/fuel/2014-02-14#i_8286110
15:40 blahRus joined #fuel
15:43 vogelc Is there a way to download the fuel_community_master builds without having to use torrent software?  My company blocks torrent software.
16:02 vidalinux joined #fuel
16:34 rmoe joined #fuel
17:28 tatyana joined #fuel
17:44 BillTheKat joined #fuel
18:03 BillTheKat I am running fuel 5.0 with ephemeral storage on ceph. Live migration works fine, but "nova evacuate" fails. I notice that /var/lib/nova does not look like a shared FS.  So am I to assume that nova evacuate does not work with any fuel instantiation of openstack?
18:05 angdraug BillTheKat: fuel doesn't set up a shared FS
18:06 BillTheKat but it does use ceph - so live migration works but evacuation does not?
18:06 angdraug there are a lot of assumptions in the live migration related code in nova that there's a shared fs, but ceph based live migrations don't use that
18:06 angdraug we have fixed live migrations code path for ceph, but evacuate still has problems
18:06 BillTheKat great - ugh!!!!!!! I need evacuations to work - any idea as to when?
18:07 BillTheKat I am testing HA stuff and I need evacuation.
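For reference, the call being tested; the instance and host names are placeholders, and --on-shared-storage assumes /var/lib/nova is shared, which as noted above it is not in a Fuel deployment:

    # rebuild instances from a failed hypervisor onto another compute host
    nova evacuate <instance-uuid> <target-compute-host> --on-shared-storage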
18:08 wrale is it normal to have issues booting from volume with ceph in 5.0.1 using raw images?  " No valid host was found." :(
18:08 angdraug BillTheKat: here is a nova bug: https://bugs.launchpad.net/nova/+bug/1340411
18:09 angdraug wrale: no, that's not normal
18:09 wrale works for manually added cirros image, does not work for precise and trusty images
18:09 wrale 49 nodes.. i'm sure there is a place :)
18:09 angdraug wrale: are you by any chance having this problem? https://bugs.launchpad.net/fuel/+bug/1332660
18:10 e0ne joined #fuel
18:10 angdraug BillTheKat: please upvote bug 1340411, it will put some heat on it and make it more likely for Nova developers to look at it
18:11 BillTheKat angdraug: ok - will do
18:11 angdraug now that my fix for live migrations was merged for juno, it is likely that ceph ephemeral will be getting more attention
18:11 angdraug also keep pinging me about this :)
18:12 angdraug we're not likely to have time for this in 5.1 timeframe, but if upstream doesn't fix it in juno I will try to find time to fix it myself
18:12 wrale angdraug: looks like it might be my problem.. not sure how to test for positive, though..
18:12 angdraug wrale: try nova packages from 5.0.1 mirrors, they have the fix for that bug
18:13 wrale angdraug: i built this 5.0.1 iso yesterday
18:13 angdraug oh
18:14 the_hybrid_tech joined #fuel
18:14 wrale ceph -s says HEALTH_OK
18:15 wrale pgmap v5480: 4288 pgs: 4288 active+clean; 7425 MB data, 108 GB used, 245 TB / 245 TB avail
18:15 wrale all osds are up..hmm
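The checks being quoted above, for anyone following along; none of these are deployment-specific:

    ceph -s              # overall health and placement group states
    ceph health detail   # per-pg detail if anything is not active+clean
    ceph osd tree        # confirm every osd is up and in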
18:17 wrale many of the controller logs are zero length.. no fun
18:19 angdraug wrale: centos or ubuntu?
18:19 wrale ./ceph/radosgw.log:2014-07-21 20:20:47.799461 7f9d377fe700  0 ERROR: signer 0 status = SigningCertNotFound  (a bunch of times)
18:19 wrale ubuntu
18:19 wrale also: ./upstart/nova-conductor.log:ProgrammingError: (ProgrammingError) (1146, "Table 'nova.services' doesn't exist")
18:19 angdraug "No valid host" indicates a nova-scheduler problem, ceph should not be a problem
18:20 angdraug radosgw is a red herring, nothing to do with rbd
18:20 angdraug nova-conductor probably too: check the timestamp
18:20 angdraug if that was during deployment it's expected: service was started when nova was installed before nova dbsync had a chance to run, after nova dbsync there should be a restart and no more db problems in the logs
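One way to confirm the ProgrammingError is deployment-time noise, a sketch assuming the default Ubuntu log location and that the MySQL credentials are at hand:

    # the last occurrence should predate the nova db sync
    grep "doesn't exist" /var/log/nova/nova-conductor.log | tail -1

    # the table should exist by now
    mysql -u nova -p nova -e "SHOW TABLES LIKE 'services';"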
18:21 angdraug wrale: for your reference, upstream version of the scheduler fix: https://review.openstack.org/102064
18:21 angdraug I'm now checking our packages to confirm they have it
18:23 angdraug yep, the fix is there
18:23 angdraug which means that what you have is either a new variation of the same problem or something entirely new
18:24 angdraug can you turn on debug for nova and restart all nova services on controllers and computes?
18:25 angdraug btw also check nova service-list, make sure your computes show up as online
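The check angdraug mentions, assuming the admin credentials file Fuel places at /root/openrc on the controllers:

    source /root/openrc    # admin credentials on a Fuel-deployed controller
    nova service-list      # every nova-compute should show enabled and state "up"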
18:25 wrale i suppose this is of no consequence: cinder-cinder.context WARNING: Arguments dropped when creating context: {'user': None, 'tenant': None, 'user_identity': u'- - - - -'}
18:25 wrale there are thousands of these lines in cinder-scheduler
18:25 wrale log
18:25 angdraug hm, doesn't look familiar
18:26 angdraug oh wait, you're booting from volume
18:26 angdraug that fix was for nova ephemeral
18:26 wrale neither works.. boot from image and volume
18:26 angdraug same error, No valid host?
18:27 wrale i'll verify again and report back
18:27 angdraug if you can reproduce this with debug on, please create a bug and attach a diagnostic snapshot, I will have a look
18:28 wrale i have multiple images, cirros, fedora, u-precise and u-trusty... all are in raw.. not sure how to enable debug everywhere.. easy ways?
18:28 angdraug you just need debug in nova, that's /etc/nova/nova.conf
18:29 angdraug there's debug=False line, set that to True
18:29 angdraug then ps aux|grep nova; restart services that are found
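A sketch of those steps for an Ubuntu node; the list of services to restart is an assumption and differs between controllers and computes:

    sed -i 's/^debug=False/debug=True/' /etc/nova/nova.conf
    ps aux | grep [n]ova                      # see which nova services run on this node
    for svc in nova-api nova-scheduler nova-conductor nova-compute; do
        restart "$svc" 2>/dev/null || true    # upstart syntax; skips services not present
    done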
18:44 wrale angdraug: it looks like it might have something to do with the images... booting from image for cirros and fedora work fine... both ubuntus fail there with no valid host... now i'm trying boot from new volume for fedora.. hanging on block device mapping.. this sometimes fails.. (times out, i guess, depending on volume size).. i'm trying 20GB..
18:45 tatyana joined #fuel
18:50 e0ne joined #fuel
18:52 wrale okay so that finally booted.... interestingly, boot from image instances quickly work with novnc.. boot from new volume works with novnc after a lonnng time ... system boot must be very slow from new volume.. console log queries really lag on new volume too
18:53 wrale trying to boot from new volume for ubuntu
18:53 wrale "No valid host was folund"... so yeah.. image fail
18:54 wrale these are the latest qcow2 ubuntu images (precise/trusty) converted to raw using .. example: qemu-img convert -f qcow2 -O raw trusty-server-cloudimg-amd64-disk1.img trusty-server-cloudimg-amd64-disk1.raw
18:54 wrale hmm..
18:57 the_hybrid_tech joined #fuel
18:59 e0ne joined #fuel
19:09 angdraug joined #fuel
19:11 angdraug wrale: just checking, you have ceph enabled for cinder too, right?
19:13 wrale angdraug: thanks.. i think so... here's an excerpt from /etc/astute.yaml on a controller: http://paste.openstack.org/show/87646/
19:20 wrale interesting!  i tried using the shell to upload the image to glance instead of via the browser (horizon).. boot from image for ubuntu raw now works.. more tests pending
19:26 wrale boot from new volume worked too .. so much for http
19:28 kupo24z angdraug: looks like https://bugs.launchpad.net/nova/+bug/1262914 finally got movement
19:28 wrale ^ even large volume.. (128GB).. cool.. thanks angdraug!
19:29 kupo24z we decided to go with an external system for snapshots and templating though using ceph export into a compressed file
19:30 wrale for posterity, here is a good way to import images when ceph is used as backing: https://ask.openstack.org/en/question/348/what-configurations-are-needed-to-enable-boot-from-volume/
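The CLI route wrale used instead of Horizon, a minimal sketch with an assumed image name (matching the conversion example at 18:54):

    glance image-create --name trusty-raw --disk-format raw --container-format bare \
        --is-public True --file trusty-server-cloudimg-amd64-disk1.raw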
19:32 kupo24z wrale: unfortunately you get the full image size in glance with that unless you shrink the partition/filesystem beforehand, unlike with qcow which is usually thin images
19:33 angdraug wrale: so turns out glance was messing up your images? I thought this bug was fixed ages ago...
19:33 wrale kupo24z: i'm okay with it, i guess.. 245TB spinning disks with ceph.. we need quick and massively parallel instance initiation over capacity
19:34 wrale angdraug: i guess so, but really it seems like horizon was the problem
19:34 wrale image corruption, perhaps
19:34 kupo24z yeah depends on your use case, mine was SSD's so low disk capacity
19:36 wrale kupo24z: cool.. we have one ssd per node for ceph journal.. i did an ioping and fio benchmark earlier... random write was at around 800 IOPS.. for network-replicated storage i guess that's okay.. latency was hovering around 1 ms.. not sure how to do better without dropping ceph for backing of ephemeral
19:37 wrale direct-io
19:37 wrale (inside vm)
19:38 wrale https://www.binarylane.com.au/support/articles/1000055889-how-to-benchmark-disk
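Roughly the kind of benchmark being referenced; the exact parameters here are assumptions, and the target path is whatever volume is mounted inside the guest:

    ioping -c 10 /mnt/test                    # round-trip latency probe
    fio --name=randwrite --ioengine=libaio --direct=1 --rw=randwrite \
        --bs=4k --size=1G --iodepth=32 --runtime=60 --time_based --group_reporting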
19:44 ArminderS joined #fuel
19:47 adanin joined #fuel
19:48 e0ne joined #fuel
19:49 tatyana joined #fuel
19:53 taj joined #fuel
19:56 e0ne joined #fuel
20:02 kupo24z wrale: latency is something inherent to ceph right now, they are working on improving it with later releases
20:03 thehybridtech joined #fuel
20:03 wrale What do I need to do to use Sahara (which I enabled)?  I guess I need some kind of cluster template.  Is there no default template?
20:05 e0ne joined #fuel
20:22 e0ne joined #fuel
20:41 geekinutah joined #fuel
21:01 geekinutah joined #fuel
21:15 geekinutah joined #fuel
21:28 DaveJ__ joined #fuel
21:34 taj joined #fuel
23:04 angdraug wrale: does that help? http://docs.openstack.org/developer/sahara/devref/quickstart.html
23:30 kupo24z any update on the 5.0.1 ISO?
23:39 angdraug not yet, still blocked by oslo.messaging problems
23:51 geekinutah joined #fuel
