IRC log for #fuel, 2014-01-07

All times shown according to UTC.

Time Nick Message
23:00 rongze joined #fuel
23:02 rongze joined #fuel
23:02 rongze joined #fuel
23:03 rongze joined #fuel
23:04 rongze joined #fuel
23:05 rongze_ joined #fuel
23:06 rongze joined #fuel
23:07 rongze joined #fuel
23:08 rongze joined #fuel
23:09 rongze_ joined #fuel
23:10 rongze joined #fuel
23:11 rongze joined #fuel
23:12 rongze_ joined #fuel
23:12 rongze joined #fuel
23:13 rongze joined #fuel
23:14 rongze joined #fuel
23:15 rongze_ joined #fuel
23:17 rongze joined #fuel
23:18 rongze joined #fuel
23:19 rongze joined #fuel
23:24 rongze_ joined #fuel
23:24 rongze joined #fuel
23:25 rongze joined #fuel
23:27 rongze_ joined #fuel
23:28 rongze_ joined #fuel
23:29 rongze joined #fuel
23:30 rongze_ joined #fuel
23:32 rongze_ joined #fuel
23:35 rongze joined #fuel
23:36 rongze joined #fuel
23:40 rongze joined #fuel
23:41 rongze__ joined #fuel
23:42 rongze_ joined #fuel
23:48 rongze joined #fuel
23:48 rongze joined #fuel
23:50 rongze_ joined #fuel
23:51 rongze_ joined #fuel
23:53 rongze_ joined #fuel
23:53 rongze_ joined #fuel
23:55 rongze joined #fuel
00:01 rongze joined #fuel
00:01 rongze joined #fuel
00:02 rongze joined #fuel
00:03 rongze joined #fuel
00:12 rongze_ joined #fuel
00:14 rongze joined #fuel
00:15 rongze joined #fuel
00:18 rongze_ joined #fuel
00:19 rongze joined #fuel
00:19 rongze joined #fuel
00:20 rongze joined #fuel
00:20 rongze joined #fuel
00:22 rongze_ joined #fuel
00:24 rongze joined #fuel
00:26 rongze_ joined #fuel
00:27 rongze_ joined #fuel
00:28 rongze joined #fuel
00:31 rongze joined #fuel
00:32 rongze joined #fuel
00:36 rongze joined #fuel
00:36 rongze joined #fuel
00:38 rongze joined #fuel
00:38 rongze joined #fuel
00:39 rongze joined #fuel
00:41 rongze joined #fuel
00:42 rongze joined #fuel
00:43 rongze joined #fuel
00:44 rongze joined #fuel
00:48 rongze joined #fuel
00:50 rongze joined #fuel
00:51 rongze joined #fuel
00:52 rongze_ joined #fuel
00:54 rongze joined #fuel
00:55 rongze joined #fuel
00:56 rongze joined #fuel
00:56 rongze joined #fuel
00:57 rongze joined #fuel
00:59 rongze_ joined #fuel
01:01 rongze joined #fuel
01:01 rongze joined #fuel
01:03 rongze_ joined #fuel
01:04 rongze_ joined #fuel
01:05 rongze joined #fuel
01:06 xarses joined #fuel
01:06 rongze joined #fuel
01:07 rongze joined #fuel
01:07 rongze joined #fuel
01:08 rongze joined #fuel
01:09 rongze joined #fuel
01:09 rongze joined #fuel
01:10 rongze joined #fuel
01:13 rongze__ joined #fuel
01:14 rongze joined #fuel
01:15 rongze joined #fuel
01:16 rongze joined #fuel
01:19 rongze_ joined #fuel
01:19 rongze joined #fuel
01:21 rongze_ joined #fuel
01:21 sparc joined #fuel
01:23 rongze joined #fuel
01:26 jouston joined #fuel
01:28 rongze joined #fuel
01:29 rongze joined #fuel
01:29 dhblaz joined #fuel
01:29 rongze joined #fuel
01:30 rongze joined #fuel
01:31 rongze joined #fuel
01:31 rongze joined #fuel
01:32 rongze joined #fuel
01:35 rongze__ joined #fuel
01:36 rongze joined #fuel
01:38 rongze_ joined #fuel
01:40 rongze joined #fuel
01:40 rongze joined #fuel
01:41 rongze joined #fuel
01:52 rongze joined #fuel
01:56 rongze joined #fuel
02:00 dhblaz joined #fuel
02:17 xarses joined #fuel
03:31 ArminderS joined #fuel
03:44 rongze joined #fuel
03:46 rongze joined #fuel
03:53 richardkiene_ joined #fuel
04:02 richardkiene joined #fuel
05:18 rongze joined #fuel
05:34 sanek_ joined #fuel
05:36 jouston joined #fuel
05:37 alex_didenko joined #fuel
05:39 book` joined #fuel
05:45 Bomfunk joined #fuel
05:45 Bomfunk joined #fuel
05:52 jouston_ joined #fuel
05:58 rongze joined #fuel
06:04 ArminderS has anyone here got any issues using instance re-size for an instance created from an image, using ephemeral storage with ceph as the backend?
06:05 ArminderS i just did and it removed the disk from ceph
06:07 ArminderS in the earlier flavor, the root/ephemeral disk size was 10 GB and in the new flavor it was 20 GB
06:08 ArminderS somewhere during the resize, it removed that old disk and now i can't boot the instance because the disk is gone
06:09 ArminderS +--------------------------------------+---------------------------+--------+------------+-------------+-----------------------------+
06:09 ArminderS | ID                                   | Name                      | Status | Task State | Power State | Networks                    |
06:09 ArminderS +--------------------------------------+---------------------------+--------+------------+-------------+-----------------------------+
06:09 ArminderS | b891ac3c-3f9f-448f-b6e7-c722bc04b0b3 | centos5-boot-from-image   | ACTIVE | None       | Running     | admin_private=192.168.111.5 |
06:09 ArminderS | b630083e-4d4a-4bec-a1ce-2e70f39ee09e | centos6-boot-from-img2vol | ERROR  | None       | Shutdown    | admin_private=192.168.111.6 |
06:09 ArminderS | b14ade50-ddfb-4429-89cb-697055783d4e | win2012-boot-from-volume  | ACTIVE | None       | Running     | admin_private=192.168.111.7 |
06:09 ArminderS +--------------------------------------+---------------------------+--------+------------+-------------+-----------------------------+
06:09 ArminderS [root@node-3 ~]# rbd ls compute
06:09 ArminderS b14ade50-ddfb-4429-89cb-697055783d4e_disk
06:09 ArminderS b891ac3c-3f9f-448f-b6e7-c722bc04b0b3_disk
06:10 ArminderS qemu-kvm: -drive file=rbd:compute/b630083e-4d4a-4bec-a1ce-2e70f39ee09e_disk:id=compute:key=AQAATMdSuA5yIRAAuKfthmI1thz/0N2eJr+nGw==:auth_supported=cephx\;none:mon_host=192.168.0.3\:6789\;192.168.0.4\:6789\;192.168.0.5\:6789,if=none,id=drive-virtio-disk0,format=raw,cache=none: error reading header from b630083e-4d4a-4bec-a1ce-2e70f39ee09e_disk
06:49 ArminderS joined #fuel
06:57 sparc joined #fuel
07:12 kevein joined #fuel
08:37 SergeyLukjanov joined #fuel
08:49 mrasskazov1 joined #fuel
10:02 topshare joined #fuel
10:25 ArminderS joined #fuel
11:03 e0ne joined #fuel
11:57 kevein joined #fuel
12:14 SergeyLukjanov joined #fuel
13:52 e0ne joined #fuel
14:48 rmoe joined #fuel
15:57 dhblaz joined #fuel
16:04 albionandrew joined #fuel
16:11 dhblaz angdraug: That node is the master.
16:11 dhblaz I have a new 4.0 installation and several of the health checks are failing:
16:11 dhblaz List ceilometer availability
16:11 dhblaz is one of them
16:12 dhblaz The initial deploy failed because of a problem with a compute node.  This means that some of the preconditions for the health checks were not met (like uploading the cirros image).  Does the "List ceilometer availability" check have such a precondition?
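[A minimal way to check that precondition from a controller is sketched below; the credentials path and test image name are assumptions, not confirmed in this log.]
  . /root/openrc                                     # load OpenStack credentials (Fuel typically drops an openrc on the controllers)
  glance image-list | grep -i -e cirros -e testvm    # the health checks expect a small test image to have been uploaded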
16:34 SergeyLukjanov joined #fuel
17:18 albionandrew Hi, I just added a bug - https://bugs.launchpad.net/fuel/+bug/1266853 - similar to Arminder's I think; the node deploy fails because of ceph. Any ideas how to fix?
17:21 xarses joined #fuel
17:34 rmoe joined #fuel
17:34 albionandrew_ joined #fuel
17:34 angdraug joined #fuel
17:45 e0ne joined #fuel
18:29 SergeyLukjanov joined #fuel
18:37 richardkiene joined #fuel
19:06 jkirnosova joined #fuel
19:14 xarses joined #fuel
19:15 xarses_ joined #fuel
19:22 angdraug joined #fuel
19:38 rmoe joined #fuel
19:46 dhblaz Anyone here have experience with "Ceph RBD for ephemeral volumes"?
20:31 dhblaz joined #fuel
20:56 SergeyLukjanov joined #fuel
20:57 e0ne joined #fuel
21:21 e0ne joined #fuel
21:43 * xarses pokes angdraug
21:46 angdraug dhblaz: present :)
21:46 angdraug what do you want to know?
21:55 albionandrew angdraug: dhblaz is not around at the moment.
21:56 albionandrew I have a question - I have a deployment running; the first controller has an error in the deployment: Unrecognised escape sequence '\1' in file /etc/puppet/modules/horizon/manifests/init.pp at line 106
21:56 albionandrew Any idea how to fix?
21:57 angdraug I've seen a warning like that pop up, with no adverse effect on a deployment
21:58 angdraug was this one an error, instead of a warning?
21:58 albionandrew It shows as an error on the webpage
22:00 IlyaE joined #fuel
22:01 angdraug can you grep for it in puppet-apply.log and see how it shows up there?
22:02 angdraug that's all 4.0 GA right? puppet 2.7.23?
22:03 albionandrew Puppet v2.7.23
22:03 albionandrew 4.0 yes
22:06 angdraug ok, I'm not sure why this breaks the deploy since it's just a warning, but making it go away is easy: just add another backslash before \1 on the reported line
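[A hedged one-liner for that workaround; the line number comes from the warning itself, and the sed pattern assumes the sequence on that line is a literal \1. Back the file up first.]
  cp /etc/puppet/modules/horizon/manifests/init.pp{,.bak}
  sed -i '106s/\\1/\\\\1/' /etc/puppet/modules/horizon/manifests/init.pp   # turn \1 into \\1 on line 106 only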
22:09 albionandrew I'm not sure it is breaking the deploy; it was an error I saw in the logs.
22:10 dhblaz joined #fuel
22:14 angdraug https://review.openstack.org/65384
22:15 dhblaz The fatal problem was this:
22:15 dhblaz 2014-01-07 21:36:14 DEBUG
22:15 dhblaz Prefetching crm resources for cs_property
22:15 dhblaz 2014-01-07 21:36:14 DEBUG
22:15 dhblaz (Puppet::Type::Cs_property::ProviderCrm) Corosync not ready, retrying
22:15 dhblaz ...
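[A few hedged checks for this kind of failure on the affected controller; corosync/pacemaker and the crm tooling are what Fuel 4.0 deploys, but the exact log path is an assumption (puppet-apply.log is the file angdraug mentions above).]
  service corosync status                                        # is corosync actually running?
  crm_mon -1                                                     # one-shot view of what pacemaker thinks of the cluster
  grep -i 'not ready' /var/log/puppet/puppet-apply.log | tail    # assumed log location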
22:18 dhblaz angdraug: re "Ceph RBD for ephemeral volumes"
22:18 dhblaz Does it work?
22:19 angdraug it works in a lab; the reason we labeled this experimental is a ceph bug we found just before the release
22:19 angdraug http://tracker.ceph.com/issues/5426
22:20 dhblaz How does it work?  I thought that there would be an OSD on the compute node but there isn't
22:20 angdraug doesn't have to be
22:20 xarses joined #fuel
22:20 angdraug QEMU natively supports RBD
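[A minimal way to see that from a compute node, reusing the pool and client id visible in the qemu-kvm command earlier in this log; it assumes the usual keyring under /etc/ceph is in place.]
  rbd ls -p compute --id compute                                                   # list ephemeral disks straight from the pool, no local OSD involved
  qemu-img info rbd:compute/b891ac3c-3f9f-448f-b6e7-c722bc04b0b3_disk:id=compute   # qemu reads the image header over librbd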
22:20 dhblaz I didn't know that it was marked as experimental
22:20 angdraug says so in the release notes
22:24 dhblaz Was the fix integrated into the release or do I need to update the packages in my nailgun repo?
22:27 angdraug it wasn't yet, the fix came out from Inktank after we released 4.0
22:27 angdraug it should be in 4.0.1/4.1
22:29 angdraug you can try to update ceph packages from here: http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/ref/dumpling-5426/
22:29 angdraug there's a gotcha: Inktank compiles ceph against a different version of libgoogle-perftools than Mirantis does; at least I had that problem on Ubuntu
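[A hedged sketch of pulling that build onto a CentOS 6 node; the repo file name and baseurl layout are assumptions (check the directory listing at the URL above), and the last line is only there because of the perftools mismatch just mentioned.]
  printf '%s\n' '[ceph-dumpling-5426]' 'name=ceph dumpling-5426 (gitbuilder)' \
    'baseurl=http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/ref/dumpling-5426/x86_64/' \
    'enabled=1' 'gpgcheck=0' > /etc/yum.repos.d/ceph-dumpling-5426.repo    # repo layout assumed
  yum clean metadata && yum update 'ceph*' librados2 librbd1
  ldd $(which ceph-osd) | grep -Ei 'tcmalloc|perftools'                    # check the gperftools/tcmalloc linkage still resolves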
22:30 dhblaz Do you offer updated binary packages on download.mirantis.com?
22:30 angdraug not yet, but should have it in a couple days
22:31 angdraug btw you don't need this fix for testing purposes
22:31 angdraug it may once in a blue moon corrupt your ephemeral drive, but otherwise works fine as it is
22:32 angdraug for going to production, I would definitely recommend installing that patch
22:41 dhblaz The way I imagined this working was that there would be an OSD for the ephemeral storage on the compute node with a replication factor of 1.  I imagine that to set it up this way I would need to configure such a thing manually
22:41 dhblaz That is to say, it doesn't appear to work the way I envisioned it.
22:42 dhblaz And to make it work that way would require some work.
22:42 angdraug why would you want it that way? how's that better than just having images on the local file system?
22:42 dhblaz The reason given in the gui is to allow for live migration.
22:43 angdraug live migration requires shared volume storage between compute nodes
22:43 angdraug ceph is one way to achieve such shared storage, along with other benefits
22:44 angdraug you can set the replication factor to 1, but there's a bit more work to it than just a field in the GUI
22:45 angdraug but even then it won't be the same as local storage
22:46 angdraug regardless of replication factors, all OSDs in the cluster are combined in a single pool
22:46 angdraug you can't directly control which node in the ceph cluster gets to store which image
22:46 e0ne joined #fuel
22:47 angdraug most of the time, the OSD backing your VM will be on a different node than the compute running that VM
22:47 angdraug even if you combine OSD and compute roles
22:48 angdraug oh and one more concern. if your replication factor is 1 and you take out an OSD, your VM is lost
22:49 angdraug that means that if you want to migrate all VMs off a compute, e.g. to take it down for maintenance, you have to migrate all the data off the OSD on that node, too
22:49 angdraug if your replication factor is 2 or more, you can just bring it down and let your ceph cluster remain in degraded mode until you bring the OSD back online
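[For reference, the CLI side of that "bit more work" would look roughly like this for the pool nova uses for ephemeral disks (pool name taken from the rbd output earlier in this log); as warned above, with size 1 the loss of any single OSD loses the VMs on it.]
  ceph osd pool set compute size 1       # one replica per object
  ceph osd pool set compute min_size 1   # keep serving I/O with a single copy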
22:51 dhblaz Thanks for the explanation
22:51 dhblaz I don't think we can afford the performance hit of replication.
22:52 angdraug if you're more concerned about latency than reliability, ceph might not be the best option for you
22:52 dhblaz Anyone have experience with "Corosync not ready, retrying" causing a deployment to fail?
22:53 dhblaz we will probably look into it again after we can use bcache
22:54 IlyaE joined #fuel