
IRC log for #fuel, 2014-02-14


All times shown according to UTC.

Time Nick Message
00:14 e0ne joined #fuel
01:13 e0ne joined #fuel
01:37 phamby joined #fuel
02:13 e0ne joined #fuel
02:24 xarses joined #fuel
02:28 rongze joined #fuel
02:56 crandquist joined #fuel
03:13 e0ne joined #fuel
03:20 richardkiene joined #fuel
04:01 crandquist joined #fuel
04:11 besem9krispy joined #fuel
04:13 e0ne joined #fuel
04:39 saju_m joined #fuel
05:10 IlyaE joined #fuel
05:13 e0ne joined #fuel
06:10 IlyaE joined #fuel
06:13 e0ne joined #fuel
06:21 vkozhukalov joined #fuel
07:25 saju_m joined #fuel
08:14 ykotko joined #fuel
08:14 vk joined #fuel
08:22 mihgen joined #fuel
09:00 vk joined #fuel
09:02 mattymo1 joined #fuel
09:04 anotchenko joined #fuel
09:05 vkozhukalov joined #fuel
09:09 e0ne joined #fuel
09:27 mrasskazov joined #fuel
09:29 miguitas joined #fuel
09:58 tatyana joined #fuel
10:05 vkozhukalov joined #fuel
10:42 getup- joined #fuel
10:53 anotchenko joined #fuel
10:57 anotchenko_ joined #fuel
11:19 anotchenko joined #fuel
11:22 pbrooko joined #fuel
11:33 mrasskazov joined #fuel
11:57 besem9krispy joined #fuel
12:03 mihgen joined #fuel
12:25 Bomfunk joined #fuel
12:35 besem9krispy I think it's just me
12:35 besem9krispy I think I'm the only person in the world for whom this software doesn't work.
12:35 besem9krispy Really, it doesn't work.
12:35 anotchenko joined #fuel
12:36 besem9krispy I think it is pitched at developers who are hacking the source as they go, and not so much at me.
12:36 besem9krispy My installations all fail.
12:36 besem9krispy My fuel crashes and gets stuck.
12:36 besem9krispy Booting the USB image dies pathetically for me.
12:37 miguitas_ joined #fuel
12:37 besem9krispy And right now it says wonderful things like "2014-02-14 12:32:39.169 INFO [7f0a317fb700] (helpers) Task 19b85812-46c3-4c98-8b47-7f3ffe17e8d3 (dump) message is set to [Errno 104] Connection reset by peer
12:37 besem9krispy 2014-02-14 12:33:02.106 DEBUG [7f0a30dfa700] (node) Node has provisioning or error status - status not updated by agent"
12:37 besem9krispy I think it's just me.
12:41 besem9krispy The one time I did get a working openstack environment, it couldn't create volumes and attach them to instances.  Now I can't get anything, unless I somehow reset fuel, but I forgot to take a snapshot of my fuel system before I started.  So I could reinstall, or give up, and I'm inclined to give up.
12:43 Dr_Drache joined #fuel
12:44 Dr_Drache MiroslavAnashkin,
12:47 MiroslavAnashkin Dr_Drache: ?
12:51 Dr_Drache MiroslavAnashkin, did you get my information last night?
12:51 Dr_Drache that patch (the one that needs reverting) stopped all deployments from happening.
12:53 MiroslavAnashkin Yeah, I see
12:53 Dr_Drache ok, also tried a CentOS deployment. same effect. do you need a diag snap from that as well?
12:54 MiroslavAnashkin Yes, please.
12:55 MiroslavAnashkin May we have console to your master node?
12:55 Dr_Drache oooo... hmmm let me figure out how to make that happen.
12:59 Dr_Drache i'm an idiot, it's dual homed, give me a min to get a route
13:01 besem9krispy joined #fuel
13:02 isviridov joined #fuel
13:04 Dr_Drache MiroslavAnashkin, PMed you information, snapshot is uploading.
13:09 MiroslavAnashkin OK, I've got coffee and I'm ready
13:10 Dr_Drache doesn't look like it's going through from my outside connection.
13:11 isviridov left #fuel
13:24 e0ne joined #fuel
13:47 vkozhukalov joined #fuel
13:57 phamby joined #fuel
14:01 bookwar joined #fuel
14:38 e0ne joined #fuel
14:52 crandquist joined #fuel
14:59 besem9krispy joined #fuel
15:15 anotchenko joined #fuel
15:27 besem9krispy joined #fuel
15:31 IlyaE joined #fuel
15:45 besem9krispy joined #fuel
15:56 anotchenko joined #fuel
16:00 mswynnex joined #fuel
16:02 e0ne_ joined #fuel
16:05 asledzinskiy left #fuel
16:09 phamby joined #fuel
16:25 e0ne joined #fuel
16:41 xarses joined #fuel
16:44 e0ne joined #fuel
16:51 rmoe joined #fuel
16:54 saju_m joined #fuel
16:56 vkozhukalov joined #fuel
17:34 TVR___ joined #fuel
17:35 TVR___ morning..
17:35 TVR___ have a question if someone wants to field it...
17:35 TVR___ boot from image, create volume... while the image is being stamped onto the volume... what is the sequence of events?
17:36 TVR___ reason:
17:36 TVR___ I have an Oracle Linux image
17:36 TVR___ it is ~1.6G in size as a qcow2, with a 4G virtual size
17:37 TVR___ when I launch the image and create volume.. it fails... but when I then launch an instance from the volume it created, that works
17:37 TVR___ so there must be a timeout happening? .. Yes?
17:37 TVR___ thoughts?
17:38 xarses are you using ceph?
17:38 TVR___ yes, yes I am for both image and volume store
17:38 xarses and what version of fuel?
17:38 TVR___ also have rados GW checked as well
17:38 TVR___ 4.0? whatever the current release is
17:39 TVR___ yes.. 4.0
17:40 xarses hmm, I thought the launch-from-image bug was fixed in Fuel 4.0
17:41 TVR___ need a log? I can send you logs as I can point out when it was created vs when I simply just launched it
17:41 TVR___ what logs do you need?
17:42 TVR___ one more bit of info.. should be unrelated...
17:42 TVR___ at this moment, I am adding 2 more compute + ceph nodes... but they were at the installing-OS stage when I first noticed it.. (they are installing openstack right now)
17:43 TVR___ should be unrelated... but still, you should know about it
17:45 xarses see if you are not seeing https://bugs.launchpad.net/fuel/+bug/1271924 or https://bugs.launchpad.net/fuel/+bug/1246219
17:45 TVR___ when I boot from image and create a volume from my centos image (768 MB as a qcow2, 1G virtual size) it is successful.... so... does the SIZE of the image being booted and stamped onto the volume matter? As in, maybe there is a timeout occurring when the image being copied doesn't complete in time?
17:47 TVR___ mine is not a snapshot issue.. and my volume is NOT created instantly.. as these are 80 gig volumes.. it sits in the creating state for ~ 10 seconds or so....
17:47 xarses so, with ceph for glance and cinder, it's better to store them raw, as it will just do a copy-on-write clone.
17:48 Dr_Drache how is it better?
17:48 TVR___ I am thinking it's a timeout on how long nova expects the image to take to be planted into the volume... as my smaller image works fine, and my larger image fails the creation on the nova side, but obviously the volume is created correctly and the image finishes being implanted
17:48 Dr_Drache TVR___, I have the same issue.
17:49 Dr_Drache if the volume is of any good size, it times out.
17:49 Dr_Drache then you recreate the instance, using that volume once it's created it's fine
17:49 xarses bug 1246219 is related; it's about the image not being converted from its container format to raw correctly
17:49 TVR___ OK.. my 17M CirrOS image never fails either.. but it's essentially useless as an OS..heh
17:50 xarses can you try importing the image raw into glance and see if that performs better?
17:50 Dr_Drache I think this is an issue that was fixed in Emperor, I remember reading a bug report on it.
17:50 TVR___ importing raw into glance...
17:51 TVR___ I do not understand.. sorry
17:51 xarses In either case, we should catch this into a bug
17:51 TVR___ commands?
17:51 Dr_Drache also, raw is undesirable because it more or less wastes massive amounts of disk space.
17:52 Dr_Drache undesirable for me anyway.
17:52 xarses TVR___: qemu-img convert {image_name}.qcow2 {image_name}.raw
17:52 anotchenko joined #fuel
17:52 xarses TVR___: and then upload the raw image into glance
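(For reference, the convert-and-upload sequence xarses describes might look like the following; the image name is a placeholder, and the glance flags shown are from the v1-era client that shipped alongside Fuel 4.0:

    qemu-img convert -O raw oracle-linux.qcow2 oracle-linux.raw
    glance image-create --name oracle-linux --disk-format raw \
        --container-format bare --file oracle-linux.raw
)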
17:53 TVR___ ok.. will do that
17:53 Dr_Drache question.
17:53 Dr_Drache how does that affect the volumes not being created?
17:53 xarses Dr_Drache: ceph will create copy-on-write clones in this case. In your case, you still have to explode the compressed image to create a volume
17:55 xarses Dr_Drache: There are separate code paths in the rbd driver between the two; we've seen failures in one of them before. If both are broken, then it still narrows down where to look
17:55 Dr_Drache xarses, so, am I misunderstanding that an 80GB (20GB used) qcow2 VM, converted to raw, would be 80GB?
17:56 Dr_Drache xarses, just asking questions, not trying to imply anything.
17:57 xarses Dr_Drache: if you upload an image into glance, backed by ceph, and it can't be used in place, then every time you provision from that image, cinder will explode it into a full volume and ceph will earmark the full size again.
17:58 Dr_Drache so double time.
17:58 TVR___ I created a set of scripts to take my VM and expand it on the fly for whatever size volumes you created.... let me know if you want them Dr_Drache
17:59 xarses Dr_Drache: for example: 20GB qcow2 in glance, 2 volumes at 80GB each, total allocated space 180GB
17:59 Dr_Drache I think I am missing something.
18:00 xarses Dr_Drache: using copy-on-write: raw 80GB image in glance, 2 volumes (no changes yet), total allocated space 80GB
18:00 Dr_Drache ahh, ok
18:00 xarses changes will be stored as deltas of the base
18:00 xarses which would be the raw image in glance
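(A quick way to see this on the ceph side; the pool names assume Fuel's defaults of "images" for glance and "volumes" for cinder, and the volume UUID is a placeholder:

    rbd -p images ls                      # base images uploaded via glance
    rbd -p volumes info volume-<uuid>     # a "parent:" line here means the volume is a COW clone
)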
18:01 Dr_Drache still don't know why you can't use qcow2 as the glance image, since it supports COW at the block level. oh well.
18:01 IlyaE joined #fuel
18:02 xarses ceph can't handle that
18:02 xarses also, in your case the 20gb image has to be spooled out of ceph, and then back into ceph
18:02 xarses in my case, it returns in about a second
18:03 TVR___ OK.. so converting to raw brought my image up to 4G as expected... but the boot time for boot-from-image-and-create-volume is substantially quicker (oh, yeah, and it works as well) so not bad...
18:04 Dr_Drache looks like i need a few more OSDs now.
18:04 TVR___ so I suspect there is a timing issue with qcow2 images greater than $X in size.. whatever $X is
18:05 Dr_Drache since I have about 50-60 VMs in qcow2 format that right now take up 2TB; if I convert to raw, it will be 3x that.
18:05 Dr_Drache (that's just the OS images, not even talking working data)
18:05 TVR___ my biggest image is 4G now Dr_Drache and I can create volumes of whatever size and it will automatically expand.. so if I choose 80G, when it finishes, my instance has a / of ~75G
18:07 TVR___ create only a few images, and use puppet to deal with packages, and some rc.local love to deal with different volume sizes at creation man...
18:08 TVR___ raw should not be a game changer... the only reason I originally chose qcow2 was.. I am lazy and didn't want to wait any significant time to upload it..
18:08 TVR___ heh
18:08 vk joined #fuel
18:09 TVR___ all my swap expansion depending on volume size choice, disk size, etc is done through one of two ways... rc.local love or puppet modules
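(TVR___'s rc.local approach presumably amounts to something like this minimal sketch, assuming an ext-family root filesystem on the first partition of /dev/vda; cloud-init's growpart module does the same job more robustly:

    # /etc/rc.local -- grow the root partition to fill the volume, then the filesystem
    growpart /dev/vda 1
    resize2fs /dev/vda1
)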
18:13 TVR___ so.. do we want to file a bug... can you guys reproduce this... ~how~ do you file a bug?
18:14 besem9krispy joined #fuel
18:15 xarses https://bugs.launchpad.net/fuel/
18:15 xarses even https://bugs.launchpad.net/fuel/+filebug
18:16 xarses describe in detail what steps you took to reproduce the bug
18:16 TVR___ ok.. will do.. thanks
18:17 xarses I don't think we've poked around with multi-GB qcow2 images
18:17 xarses so it should hopefully be easy to reproduce
18:21 xarses TVR___: let me know the bug url, and I'll get it tagged
18:22 Dr_Drache TVR___, I have a 6 node KVM cluster right now
18:23 Dr_Drache qcow2 is fast and saves massive space.
18:30 TVR___ https://bugs.launchpad.net/fuel/+bug/1280399
18:30 TVR___ why so many images?
18:38 Dr_Drache that many VMs
18:39 xarses_ joined #fuel
19:06 e0ne joined #fuel
19:10 mihgen joined #fuel
19:19 e0ne joined #fuel
20:02 vkozhukalov joined #fuel
20:16 xarses joined #fuel
20:33 IlyaE joined #fuel
20:48 designated joined #fuel
20:50 xarses joined #fuel
21:01 xarses TVR___: are you using ceph for nova (ephemeral instances) also?
21:02 designated I've heard rumors that fuel only supports up to 10 nodes without purchasing licensing...is this true?  If so, is it a limitation of puppet not being licensed as enterprise?
21:02 TVR___ I have not used ephemeral disks, no .. not yet
21:03 xarses designated: no, there are no limits like that in fuel
21:04 designated xarses, thank you
21:04 xarses designated: fuel is free and open source; Mirantis would like to sell you support and services though
21:05 xarses designated: but that is in no way required to enjoy fuel
21:05 Dr_Drache designated, you can change that in defaults in horizon.
21:05 designated how does fuel get around the puppet limitations?
21:05 angdraug fuel uses the open-source version of puppet, not enterprise
21:06 designated angdraug, ahh.  thank you
21:06 angdraug TVR___: regarding #1280399, if you want to boot from volumes you need raw format anyway
21:07 designated last question for now.  when is 4.1 expected and will it address the issue of not being able to backup/recover a fuel environment?
21:07 angdraug not that the timeout should be happening, that's still a problem
21:07 angdraug but still, if you put qcow2 images in glance only to create volumes in ceph from them,
21:07 Dr_Drache angdraug, so, I can't use qcow2, if i want to boot from that volume?
21:07 TVR___ I looked into puppet enterprise last year...I see the value in the puppetlabs bootcamps and training, but not in the licensing of the puppet software...
21:08 xarses Dr_Drache: same conversation we had about raw volumes supporting copy on write clones
21:08 angdraug when cinder uses ceph rbd as a backend, it will convert the image to raw before creating a volume from it
21:08 Dr_Drache ahh ok.
21:08 Dr_Drache so convoluted
21:09 angdraug https://bugs.launchpad.net/fuel/+bug/1246219
21:09 xarses designated: 4.1 should be releasing by the end of the month. I'm not sure about backup feature.
21:09 angdraug if it didn't, you wouldn't be able to boot from it. that's what #1246219 is about
21:10 TVR___ angdraug .. when I rolled my own from packstack, I set up the cinder.conf to use the user/pass and UUID ... I wonder if I would have the large qcow2 issue with those configs...
21:10 angdraug AFAIU that has nothing to do with access creds, just the storage backend
21:11 TVR___ sure..
21:11 angdraug if rbd is the backend, only raw volumes can be booted from
21:12 angdraug so it would be more efficient to pre-convert images to raw before putting them into glance, instead of making cinder call qemu-img convert every time you create a new volume
21:12 TVR___ that explains my speed increase when using the raw volume..
21:13 angdraug yep, it just clones it: no download, no conversion, no upload
21:13 Dr_Drache angdraug, what about the space concerns?
21:13 angdraug space is a concern if you have many significantly different images
21:13 Dr_Drache which I do, about 60.
21:14 Dr_Drache (active VMs)
21:14 angdraug if you want the image to have a large disk size so that you can put a lot of data on the root partition, qcow2 would be more efficient
21:14 angdraug I'm not talking about VMs, it's about images in glance
21:14 TVR___ I imagine you could check the real size.. qemu-img resize it and then convert it if space was at that much of a premium, since you create the new volume when you boot the image anyway
21:15 angdraug if you have 60 VMs booted from 1 or 2 base images, raw will actually save you space
21:15 angdraug TVR___: precisely
21:15 angdraug you can optimize your way around the space concerns
21:15 angdraug e.g. keep your images lean on the disk size, and create a blank volume for each vm you boot for data
21:15 Dr_Drache see, that's where we're different: these are 60 VMs that cannot, at this time, be booted from base images; there are too many different required settings that base images cannot allow.
21:15 TVR___ sure.. makes sense..
21:16 angdraug so each of your 60 VMs is unique?
21:16 Dr_Drache yes. sadly.
21:17 angdraug ok, that's a tough one )
21:17 TVR___ do you create each one by hand, or script them?
21:17 TVR___ powershell / bash/ puppet whatever
21:17 Dr_Drache TVR___, by hand from a base linux/windows install.
21:18 TVR___ can vars be passed into a script to then create them?
21:18 TVR___ if they are unique by files and packages only, then puppet can handle that for you
21:18 angdraug I think rbd raw can still be a solution for you
21:18 Dr_Drache some just process txt files, but the way some customers require data separation, the VMs cannot be combined.
21:19 angdraug you don't need to combine them, just make sure they are all clones of the same base OS image
21:19 angdraug something like this:
21:20 angdraug create image for OS -> launch VM1 (creates clone in rbd) -> configure VM1 (clone deviates from base OS but is still not a full copy) -> create image from VM1 (another clone in RBD)
21:20 angdraug combined disk usage in ceph after the sequence above would be (size of base OS) + (disk space consumption of whatever changes you made while configuring the VM)
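(Roughly, the rbd operations underneath that sequence are snapshot, protect, and clone; the pool and image names here are illustrative, and in practice glance and cinder drive these calls themselves:

    rbd snap create images/base-os@snap               # snapshot the base OS image
    rbd snap protect images/base-os@snap              # a clone source must be protected
    rbd clone images/base-os@snap volumes/vm1-disk    # COW clone; stores only deltas
)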
21:21 Dr_Drache now to do this, with less than an hour downtime, combined.
21:21 Dr_Drache LOL
21:21 Dr_Drache that's my issue, not yours :P
21:22 Dr_Drache of course, the Windows NT boxes and Windows 2000 boxes can't be done like that.
21:22 Dr_Drache LOL
21:23 angdraug ok, if it's 60 unique windows VMs, I give up :)
21:23 Dr_Drache no
21:23 Dr_Drache only like 10 windows vms
21:27 Dr_Drache is there thin provisioning in raw?
21:27 Dr_Drache that's pretty much all we use the qcow2 format for really.
21:28 Dr_Drache I am not opposed to going to the proper way.
21:29 Dr_Drache but, I need to find a path that doesn't require rebuilding. lol
21:32 Dr_Drache or sparse I guess, blah.
21:32 angdraug no, lack of thin provisioning is the biggest drawback of the raw format
21:33 angdraug I don't think it's altogether impossible, just that rbd backend doesn't support it
21:34 angdraug actually I think it would be very reasonable to demand such a feature from Ceph :)
21:34 Dr_Drache I would like to DEMAND THAT!
21:34 Dr_Drache :P
21:35 angdraug http://ceph.com/docs/master/man/8/rbd/
21:35 angdraug " The import operation will try to create sparse rbd images if possible."
21:35 angdraug looks like RBD itself supports sparse images, so the problem is with rbd drivers in OpenStack and QEMU
21:36 angdraug oh and maybe qemu-img would need to be patched to create a sparse raw file when converting from qcow2
21:36 Dr_Drache wonder if this is addressed in emperor, or firefly
21:36 angdraug I wonder if it's really qemu-img's fault
21:36 designated so has anyone come up with a solution to the fuel failure scenario?  I'm thinking of just dd'ing fuel in the event of a failure unless someone can suggest an alternative.
21:37 angdraug never tried to create a raw image in rbd from a local sparse file
21:37 Dr_Drache angdraug, who would I go to, to raise this?
21:37 Dr_Drache to see if someone could look into it.
21:37 Dr_Drache i'm not looking for short term, just want to talk to the right peoples.
21:38 angdraug I'd start by sending an email to ceph-users mailing list, see if somebody knows a way to do this
21:38 angdraug if there's nothing there, move the discussion to ceph-devel@
21:38 angdraug both MLs are quite helpful
21:38 Dr_Drache blah, i hate mailing lists, such a hard way to keep things organized.
21:38 angdraug well, there's also tracker.ceph.org
21:39 Dr_Drache just cause I hate them, doesn't mean they don't work.
21:39 angdraug or you could go to #ceph on OFTC, you seem to be ok with IRC :)
21:39 Dr_Drache been in IRC for....20 years?
21:39 Dr_Drache prob less than that.
21:39 Dr_Drache or more.
21:39 Dr_Drache whatever.
21:39 angdraug hehe
21:39 Dr_Drache i feel old, in my young age.
21:40 designated I've been using IRC since 95 :/
21:40 angdraug me, since 97. and I thought myself old :p
21:40 Dr_Drache i think it was around there...
21:40 Dr_Drache when did 56K modems come out?
21:40 angdraug around then, in 97 I was still using a 14K modem
21:41 designated i couldn't afford one when they did come out so I know I was still using a 14.4 modem in 95
21:41 Dr_Drache I got one the week they were new, the tax place that ran the BBS in my town, ordered too many.
21:42 Dr_Drache pretty sure I got milked on the cost of that.
21:44 Dr_Drache blah, I hate this holiday
21:44 designated anyone remember aohell, written by Da Chronic, The Rizzer and some other guy?
21:45 Dr_Drache ...you remember the only 2 I remember too lol
21:45 designated played the intro from nuthin but a g thang
21:45 Dr_Drache ahh, IRC, and the downfall of DALnet
21:47 designated lol
22:00 xarses_ joined #fuel
22:09 Dr_Drache designated, angdraug
22:09 Dr_Drache it works
22:09 Dr_Drache by default
22:10 Dr_Drache "qemu-img should keep it sparse, as should 'rbd import' of a raw file in any stable version of ceph" "qemu-img detects sparseness and skips writing sparse blocks...and rbd import does the same if you convert to a raw file, and run 'rbd import' on that file"
22:14 angdraug nice!
22:15 Dr_Drache now, does it work in "OUR" version?
22:20 Dr_Drache we'll see later...
22:21 Dr_Drache :P
22:40 crandquist joined #fuel
23:22 xarses joined #fuel
