
IRC log for #fuel, 2014-10-01


All times shown according to UTC.

Time Nick Message
00:12 alex_didenko joined #fuel
00:18 teran_ joined #fuel
00:18 teran__ joined #fuel
00:41 rmoe joined #fuel
01:14 mattgriffin joined #fuel
01:48 ilbot3 joined #fuel
01:48 Topic for #fuel is now Fuel 5.1 for Openstack: https://wiki.openstack.org/wiki/Fuel | Paste here http://paste.openstack.org/ | IRC logs http://irclog.perlgeek.de/fuel/
01:49 geekinutah joined #fuel
02:01 jpf_ joined #fuel
02:22 jobewan joined #fuel
02:26 mattgriffin joined #fuel
02:39 harybahh joined #fuel
03:22 AKirilochkin joined #fuel
03:58 ArminderS joined #fuel
04:03 geekinutah joined #fuel
04:08 jobewan joined #fuel
04:23 dhblaz joined #fuel
04:28 Arminder joined #fuel
04:39 harybahh joined #fuel
04:52 anand_ts joined #fuel
05:08 AKirilochkin joined #fuel
05:11 Longgeek joined #fuel
05:54 kaliya_ joined #fuel
06:10 kaliya_ joined #fuel
06:23 dancn joined #fuel
06:33 adanin joined #fuel
06:33 flor3k joined #fuel
06:38 pal_bth joined #fuel
06:46 sc-rm merdoc: now the instances that openstack could not create, because there were not enough resources for them, have been terminated/deleted. But in the resources view some of the resources are still shown as occupied
06:48 sc-rm merdoc: http://snag.gy/eRvOg.jpg
06:49 harybahh joined #fuel
06:58 pasquier-s joined #fuel
07:02 hyperbaba joined #fuel
07:31 baboune joined #fuel
07:31 baboune hello
07:32 baboune How do the nodes get a domain when deploying an environment?
07:32 baboune And why can you not combine telemetry and compute for an HA deployment?
07:33 e0ne joined #fuel
07:35 baboune That is with 5.1
07:39 saibarspeis joined #fuel
07:39 kaliya hi baboune, you mean how they resolve dns?
07:46 merdoc sc-rm: so you are able to remove instances, yes?
07:46 merdoc and now you got corrupted stats?
07:46 e0ne joined #fuel
07:47 sc-rm Yep, they are able to be terminated/removed from horizon
07:47 sc-rm but a given compute node still has them reported as there even though they are not.
07:47 sc-rm after a restart of the compute nodes they disappear
07:48 [HeOS] joined #fuel
07:48 sc-rm which leads me to a nice to have feature in fuel web: The ability to force a restart of a node in fuel-web, so you don’t have to go look it up in the physical racks
07:48 baboune kaliya: no, I mean how does the node get a domain name like: node10.my.domain.com
07:48 merdoc ok. so now you need to go to sql and remove it from there. I'll try to find more info
07:49 sc-rm merdoc: I could do so, but I decided to try to restart the compute nodes, and after that the resources came back, so it seems like a crash on the compute node side
07:49 kaliya baboune: it's sort of primary key, so assigned by the master
07:49 merdoc sc-rm: so now the stats are correct?
07:50 merdoc or are your instances back?
07:50 sc-rm merdoc: yep :-) but required a physical restart
07:50 sc-rm merdoc: they are gone as I requested
07:50 merdoc ok, good (%
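
In this exchange sc-rm cleared the stale resource accounting by physically rebooting the compute nodes. A lighter-weight alternative is usually to restart the nova-compute service on the affected node; a minimal sketch, assuming an Ubuntu compute node deployed by Fuel 5.1 where the service is named nova-compute (the hypervisor id is a placeholder):

    # On the affected compute node (assumes the Ubuntu service name nova-compute):
    service nova-compute restart
    # From a controller, check what the scheduler now believes is in use:
    nova hypervisor-list
    nova hypervisor-show <hypervisor-id> | grep -iE 'used|free'
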
07:51 baboune kaliya: you mean nodeXX part, I am referring to the "my.domain.com" part?
07:51 merdoc baboune: while you set up fuel you may set 'domain.com' to whatever you need
07:52 merdoc so after setup your node will have the name node-XX.example.com
07:52 baboune ok so the nodes inherit the "domain" part from the fuel master.
07:53 merdoc yes
07:53 sc-rm merdoc: where do I write my request for this feature? It’s not a bug, so should I create blueprint on https://bugs.launchpad.net/fuel/6.0.x ?
07:53 kaliya baboune: it's configurable in `fuelmenu`
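
A quick way to confirm which domain the slave nodes will inherit; a minimal sketch, assuming a Fuel 5.x master where the fuelmenu settings end up in /etc/fuel/astute.yaml (the example domain below is hypothetical):

    # On the Fuel master (path assumed from Fuel 5.x defaults):
    grep -i dns_domain /etc/fuel/astute.yaml
    # On any deployed node, the FQDN should reflect that domain:
    hostname -f        # e.g. node-10.example.com (example value)
    dnsdomainname      # e.g. example.com
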
07:54 merdoc sc-rm: that's a question for kaliya I think
07:54 merdoc or someone from mirantis, I'm just a regular user (%
07:55 kaliya sc-rm: yes, you can file a blueprint. If confirmed, it will enter 'in progress'
07:56 sc-rm kaliya: cool, I’ll do so :-)
07:59 kaliya sc-rm: thank you!
08:00 e0ne joined #fuel
08:02 baboune ok, used fuelmenu to rename the hostname for the fuel machine.  now the UI is a white page, and in the cobbler.log : Info: Loading facts in /etc/puppet/modules/ceph/lib/facter/ceph_osd.rb ls: cannot access /dev/sda?*: No such file or directory ls: cannot access /dev/sda?*: No such file or directory
08:02 baboune Notice: /Stage[main]/Cobbler::Server/Exec[cobbler_sync]: Triggered 'refresh' from 1 events Info: /Stage[main]/Cobbler::Server/Exec[cobbler_sync]: Scheduling refresh of Service[dnsmasq] Info: /Stage[main]/Cobbler::Server/Exec[cobbler_sync]: Scheduling refresh of Service[dnsmasq] Info: /Stage[main]/Cobbler::Server/Exec[cobbler_sync]: Scheduling refresh of Service[xinetd] Notice: /Stage[main]/Cobbler::Server/Service[dnsmasq]/ensure: ensure
08:03 kaliya uh baboune try dockerctl restart cobbler; dockerctl restart nailgun; dockerctl restart nginx
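
A rough sketch of the recovery sequence kaliya suggests, run on the Fuel master; the final 'check' subcommand is an assumption about this dockerctl build and may not be present:

    dockerctl restart cobbler
    dockerctl restart nailgun
    dockerctl restart nginx
    # Verify the containers come back healthy (subcommand assumed available):
    dockerctl check all
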
08:06 merdoc it's time to redeploy env! now I have some 'virtual storage' in one node around 200Gb!
08:09 kaliya merdoc: you tried with a m1.medium right? so 40G
08:09 kaliya I looked into your image by mounting it in kpartx, and it is just 10, which will be allocated into that 40
08:10 kaliya your error, the guys say, is indeed related to 'virtual storage'
08:10 sc-rm kaliya: as I read it on (http://docs.mirantis.com/fuel-dev/), fuel web is based on nailgun, and I could start looking into this myself?
08:10 merdoc kaliya: but on 5.0.1 I set same amount of virtual storage
08:11 kaliya sc-rm: yes, requirements are that you run Ubuntu to practice with the examples
08:11 merdoc and as I understand - if all (cinder/glance/nova) on ceph - I don't need that storage at all
08:11 kaliya you need to allocate the image in the virtual storage
08:13 merdoc hm. so even if nova uses ceph I need virtual storage to place images? so basically nova doesn't use ceph at all?
08:14 merdoc seems like I need another test on 5.0.1...
08:16 sc-rm kaliya: cool - is it correctly understood that I can run nailgun on the ubuntu machine in a fake mode, so it does not orchestrate anything and/or does not need the rest of the stack running?
08:18 Rajbir joined #fuel
08:23 kozhukalov joined #fuel
08:23 kaliya sc-rm: please wait I'm asking
08:55 kaliya sc-rm: our team says you're right and understood correctly. Please feel free to experiment in nailgun and even ask again for further info :)
08:56 sc-rm kaliya: cool - I’ll poke around in it :-)
09:01 e0ne joined #fuel
09:04 sc-rm when starting up a debian image in an instance I get http://snag.gy/Obf7t.jpg
09:25 merdoc Dr_Drache: our bug in lack of virtual storage
09:32 kaliya sc-rm: do you get also a more precise error in nova logs?
09:34 sc-rm kaliya: not as far as I can see
09:35 kaliya seems like some kind of timeout, do you also get mysql or rabbit errors around that time?
09:37 sc-rm but it starts up and is completely accessible, so what will be the consequences of not being able to access the metadata?
09:40 kaliya sc-rm: did you also attach a volume when instantiating?
09:42 sc-rm kaliya: nope
09:45 baboune what is this uwsgi process: 11745 root      20   0  222m  55m 2796 R 88.7  2.7  76:47.64 uwsgi   ?  It's grabbing most of the CPU on the fuel master
09:46 baboune As a result the UI is unresponsive
09:47 kaliya it's the UI process
09:47 kaliya baboune: are you running 5.1?
09:49 baboune kaliya: yes
09:49 baboune 5.0.1 + Upgrade to 5.1
09:56 adanin joined #fuel
09:56 stamak joined #fuel
10:03 Longgeek joined #fuel
10:04 Longgeek joined #fuel
10:09 e0ne joined #fuel
10:13 Longgeek_ joined #fuel
10:16 Longgeek joined #fuel
10:16 teran joined #fuel
10:17 pasquier-s_ joined #fuel
10:22 Longgeek_ joined #fuel
10:24 Longgeek_ joined #fuel
10:28 Longgeek joined #fuel
10:31 Longgeek joined #fuel
10:33 Longgeek_ joined #fuel
10:40 hyperbaba Hi there, using 5.1 (ceph for all services) and got this fault error message trying to terminate instance "unsupported operand type(s) for +: 'NoneType' and 'str'"
10:41 adanin joined #fuel
10:42 evg hyperbaba: looks like a bug
10:43 evg hyperbaba: could you help to reproduce, please. I'm deploying the same env now
10:44 hyperbaba evg: one controller. Ceph for all. qcow ubuntu image. Ephemeral instance. Created one snapshot (that failed). Tried to terminate the instance. Got this error. The problem is that something is wrong with snapshot creation.
10:46 teran joined #fuel
11:08 e0ne joined #fuel
11:20 Longgeek joined #fuel
11:22 Longgeek joined #fuel
11:27 evg hyperbaba: ah, snapshot. I've got it. thank you.
11:28 Longgeek joined #fuel
11:39 Dr_Drache morning merdoc; kaliya
11:39 Dr_Drache merdoc, so they didn't fix that requirement from 4.0
11:40 Dr_Drache or 4.x
11:40 Dr_Drache or 5.0
11:40 Dr_Drache or 5.0.1
11:40 merdoc Dr_Drache: hi!
11:40 Dr_Drache merdoc, saw the virtual storage comment.
11:41 merdoc yes. and seems that in my 5.0.1 I created that storage, so my qcow works
11:41 Dr_Drache I'm seriously annoyed.
11:41 merdoc now I recreate all nodes with virtual storage of at least 50gb
11:42 Dr_Drache merdoc, and kaliya, but if i create a 200GB cirros
11:42 Dr_Drache it works
11:42 kaliya Dr_Drache: did you try to mount your image in loopback and see how much it expands?..
11:43 Dr_Drache kaliya, it's true raw, not sparse raw.
11:43 merdoc I think it's because you have enough space to convert cirros.qcow to raw
11:43 merdoc I don't think that they use that space for something different from converting
11:43 Dr_Drache merdoc, I had about a week of discussion here about this same thing when 4.0 came out.
11:44 Dr_Drache and I was assured that it's a temp bug, and the virtual storage is NOT used with ceph.
11:46 Dr_Drache merdoc; kaliya, I'm going to redeploy with more virtual storage,  but can we get a blueprint to make this work properly?
11:46 kaliya Dr_Drache: sure. I will query the team.
11:46 merdoc I assume that it should be as you say. maybe sometime they fix that
11:46 Dr_Drache kaliya, I'm only annoyed not because of the bug, but because it was already deemed "fixed"
11:47 pasquier-s joined #fuel
11:47 Dr_Drache kaliya, I think it was xarses I worked with on some of it. IIRC
11:51 hyperbaba evg: Is snapshotting working at all on 5.1 with ceph? Because it's a little bit confusing that the instance itself is just a first snapshot of the image. Or did I get it all wrong?
11:51 merdoc hyperbaba: are your instances created from raw or qcow?
11:52 hyperbaba merdoc: it was from qcow2. I get that that is the problem. Now I am making the same machines with raw images
11:52 Dr_Drache kaliya, merdoc is this virtual storage supposed to be on the controller?
11:52 Dr_Drache controller(s)
11:52 merdoc Dr_Drache: I create it on every compute
11:53 kaliya Dr_Drache: on computes
11:53 Dr_Drache thanks
11:53 Dr_Drache so it needs to be bigger than any single instance I'd create
11:56 Dr_Drache merdoc, kaliya
11:56 Dr_Drache problem
11:56 merdoc ?
11:56 Dr_Drache I already have 225GB of Virtual storage on my computes
11:56 Dr_Drache each
11:56 merdoc what does df -h /var/lib/nova show?
11:57 Dr_Drache nothing now, I just reset.
11:57 Dr_Drache but I went to configure
11:57 Dr_Drache (re)configure
11:57 merdoc after resetting, that setting goes back to default
11:58 Dr_Drache no, not default
11:59 Dr_Drache just reset so I can change something to redeploy.
11:59 Dr_Drache http://i.imgur.com/csPoAEV.png
12:01 merdoc Dr_Drache: in my case, after I click on "reset" my storage config sets to default
12:01 Dr_Drache merdoc, if that was the case, then I wouldn't have any problems the last 2 days.
12:02 Dr_Drache because that's what I've done for all the testing.
12:02 merdoc hm
12:03 teran_ joined #fuel
12:07 Dr_Drache merdoc, going to redeploy to test, but not confident
12:13 pasquier-s joined #fuel
12:17 Dr_Drache merdoc, guess you are right about the disk configuration
12:17 Dr_Drache it did change them
12:17 Dr_Drache but, it didn't before.
12:18 merdoc it's maaaagick....
12:18 merdoc duh, I want to go on vacation!
12:22 sc-rm When doing a snapshot of an instance, which log should the snapshotting progress be in? It seems like it’s stuck at “Queued"
12:23 kaliya sc-rm: cinder
12:25 sc-rm kaliya: okay, cinder.context [-] Arguments dropped when creating context: {'user': None, 'tenant': None, 'user_identity': u'- - - - -'}
12:25 sc-rm kaliya: is that because it’s not doing anything?
12:28 flor3k joined #fuel
12:28 kaliya sc-rm: cinder snapshot-list ?
12:31 sc-rm +----+-----------+--------+--------------+------+
12:31 sc-rm | ID | Volume ID | Status | Display Name | Size |
12:31 sc-rm +----+-----------+--------+--------------+------+
12:31 sc-rm +----+-----------+--------+--------------+------+
12:32 kaliya sc-rm: is there some cinder process running?
12:32 sc-rm I’m using ceph for cinder and glance
12:33 sc-rm kaliya: which node should I check for a cinder process? the compute node hosting the instance or one of the controllers?
12:34 hyperbaba Has anybody noticed that download to an instance is very slow, in contrast to upload from an instance which is at near wire speed?
12:35 kaliya sc-rm: controller
12:36 sc-rm kaliya: all the normal cinder processes are running on both controllers
12:36 kaliya where do you see the 'Queued'? In the UI?
12:38 sc-rm kaliya: /horizon/admin/images/images/
12:39 sc-rm kaliya: also when I do a glance image-list
12:40 sc-rm kaliya: is it because the instance is running?
12:40 kaliya sc-rm: nope, shouldn't
12:41 sc-rm kaliya: what kind of information should I be able to find in the logs showing that the process has started? Then I might be able to hunt it down
12:42 kaliya you should watch in /var/log/cinder
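
A minimal sketch of tracing a snapshot stuck in "Queued", run on a controller; it only uses commands already mentioned in the channel plus a log grep, with the default Fuel 5.1 log paths:

    glance image-list                              # instance snapshots show up as images
    cinder snapshot-list                           # volume snapshots, if any
    grep -iE 'error|trace' /var/log/cinder/*.log | tail -n 50
    grep -iE 'error|trace' /var/log/glance/*.log | tail -n 50
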
12:42 Dr_Drache kaliya, redeploying now, just FYI
12:42 Longgeek joined #fuel
12:43 Longgeek joined #fuel
12:45 sc-rm kaliya: just ran “Launch instance, create snapshot, launch instance from snapshot” in the healthcheck and it went just fine
12:46 Longgeek_ joined #fuel
12:48 Longgeek_ joined #fuel
12:52 dancn joined #fuel
12:53 sc-rm kaliya: If I look at the ceph storage nodes, nothing is happening on those
12:53 kaliya sc-rm: how big is the image
12:53 sc-rm kaliya: 5G
12:54 sc-rm kaliya: In /horizon/admin/instances/ it says Snapshotting and if I look at the compute node which is hosting this instance, there is not much activity going on on it.
12:55 kaliya in the cinder logs, nothing useful about why it is stuck?
12:56 Dr_Drache sounds familiar :P
12:56 kaliya Dr_Drache: :P
12:56 sc-rm kaliya: nope, but we just had to talk about it, now on the compute node, there is a 100% CPU load on it
12:57 kaliya sc-rm: can you see the process that's eating your CPUs?
12:57 sc-rm kaliya: qemu-system-x86_64 is eating it
12:57 kaliya sc-rm: instances are on qemu (VM) or KVM (bare)?
12:58 sc-rm kaliya:  kvm
12:59 merdoc currently I'm waiting while a snapshot is created from a 40Gb instance. 20+ min %(
13:02 sc-rm merdoc: and yours is also just Snapshotting ?
13:02 e0ne joined #fuel
13:04 merdoc sc-rm: now I'm in state 'saving'
13:04 merdoc qemu-system-x86 uses 5%cpu and 50% ram
13:06 sc-rm merdoc: okay, mine is still using 100% CPU 26% RAM
13:08 kaliya sc-rm: but you don't have cinder processes?
13:09 sc-rm nope nothing else than the normal cinder-volume cinder-api cinder-scheduler
13:10 baboune quick question, after setting up the env with mirantis 5.1, when adding a project, the following was automatically added without any intervention:"ost1_test-tenant-1439989985 ". What is that?
13:12 Dr_Drache kaliya, merdoc
13:12 Dr_Drache this last deployment with virtual space, is 2x as fast
13:12 kaliya baboune: sorry cannot understand. The project in Openstack?
13:12 Dr_Drache as in, it took 1/2 as long to deploy
13:12 kaliya baboune: or a Fuel environment?
13:12 MiroslavAnashkin baboune: Some of the built-in OSTF tests in previous Fuel versions required some manual setup.
13:13 MiroslavAnashkin baboune: Now these steps are performed automatically
13:13 baboune kaliya: yes inside the horizon dashboard...
13:13 baboune MiroslavAnashkin: ok, so one of the functional tests. thx
13:18 baboune btw: you guys are great!
13:18 baboune thanks for all the help and support
13:19 Dr_Drache heh
13:20 kaliya Dr_Drache: how big is your 'Virtual Storage' now?
13:21 Dr_Drache kaliya, 225GB on each compute node
13:22 Dr_Drache about to upload my RAW
13:22 kaliya ok let me know, Dr_Drache
13:31 sc-rm kaliya: is there any way I can see what qemu-system-x86_64 is using 100% CPU for? the instance itself is apparently dead now
13:31 sc-rm kaliya: At least I lost connection to it through ssh
13:33 kaliya sc-rm: try to restart libvirtd...
13:34 kaliya sc-rm: you can look into /var/log/libvirt/libvirtd.log
13:34 hyperbaba kaliya: can libvirtd be restarted with live instances without losing them?
13:35 kaliya hyperbaba: the daemon, yes, the libvirt-guests not
13:36 hyperbaba kaliya: because i have some stuck images in ceph not belonging to any glance image ... The snapshots are going bad.
13:36 sc-rm kaliya: hyperbaba I can confirm that /etc/init.d/libvirt-bin restart does not break anything - it just brought my instance into a reachable state again
13:36 hyperbaba kaliya: Thank you.
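
The restart-and-verify sequence sc-rm describes, as a sketch for an Ubuntu compute node (the libvirt-bin service name is taken from sc-rm's report):

    /etc/init.d/libvirt-bin restart
    # Confirm the guests are still defined and running:
    virsh list --all
    # And check the daemon log for new errors:
    tail -n 50 /var/log/libvirt/libvirtd.log
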
13:37 sc-rm kaliya: and now horizon reports the instances as active. But in the glance image-list it’s still listed as queued
13:38 kaliya sc-rm: mmh in /var/log/glance (controller) something?
13:40 sc-rm kaliya: Successfully retrieved image 44b1c6c1-9a67-45ea-a79d-aafecd517a3f
13:40 sc-rm kaliya: which corresponds to the id listed in glance image-list
13:41 sc-rm kaliya: I think restarting libvirt, deleting the image and then starting a new snapshot made it work
13:41 sc-rm kaliya: maybe it’s related to the libvirt crashing in some way
13:41 kaliya good
13:42 kaliya some info into the libvirtd.log?
13:42 jaypipes joined #fuel
13:43 sc-rm kaliya : error : virCommandWait:2399 : internal error: Child process (/sbin/iptables --table filter --delete INPUT --in-interface virbr0 --protocol tcp --destination-port 67 --jump ACCEPT) unexpected exit status 1: iptables: Bad rule (does a matching rule exist in that chain?).
13:44 sc-rm kaliya: the timestamps matches with when I tried to create a snapshot
13:44 mattgriffin joined #fuel
13:44 kaliya 67 is the boot protocol (BOOTP/DHCP) port
13:45 kaliya I'll check in bugs
13:45 sc-rm kaliya: http://paste.openstack.org/show/117466/
13:45 sc-rm for the full list
13:49 kaliya sc-rm: cannot find anything relevant in the whole Openstack bugs
13:50 sc-rm kaliya: Maybe I’m seeing some strange errors because we are pushing some of the limits
13:51 sc-rm kaliya: I’ll try to reproduce the error and see if it can be done again
13:51 kaliya sc-rm: that iptables issue shouldn't happen if you left the standard nova conf and didn't touch iptables
13:51 kaliya sc-rm: yes please try
13:52 sc-rm kaliya: Just installed everything as deployed from mirantis fuel 5.1
13:52 kaliya ok sc-rm, please retry and see
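
A sketch of checking whether the rule libvirt tried to delete is actually present, run on the compute node that logs the error:

    iptables -S INPUT | grep virbr0
    iptables -L INPUT -n -v --line-numbers | grep 'dpt:67'
    # If nothing matches, the 'Bad rule' message only means libvirt tried to
    # remove a rule that was already gone, which is usually harmless.
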
13:54 ddmitriev joined #fuel
13:56 HeOS_ joined #fuel
14:00 dancn joined #fuel
14:06 merdoc I have an env with servers already set up; I reset it, reconfigured some disk space and deployed again. Q: does fuel reinstall the OS completely, or just rearrange storage?
14:06 merdoc if it reinstalls - what should I do if the partitioning remained the same?
14:06 kaliya merdoc: will reinstall completely
14:07 kaliya what should you do what? :)
14:07 Dr_Drache kaliya, merdoc testing now
14:08 merdoc kaliya: I change partition size on controller, but it still the same
14:15 Dr_Drache merdoc
14:15 Dr_Drache kaliya
14:15 Dr_Drache failure
14:15 geekinutah joined #fuel
14:17 Dr_Drache kaliya : http://paste.openstack.org/show/117475/
14:17 Dr_Drache it's not virtual storage.
14:18 merdoc Dr_Drache: try to create w/o new volume
14:19 merdoc and make sure that you really have enough space in /var/lib/nova
14:19 Dr_Drache merdoc, I will, but that will pretty much make openstack worthless if you can't clone/snapshot images.
14:25 Dr_Drache merdoc; kaliya 105GB free in /var/lib/nova - I manually increased the base size and gave it more size
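
For reference, the free-space check merdoc asked for earlier looks roughly like this on a compute node; the device name and numbers below are illustrative, not real output:

    df -h /var/lib/nova
    # Filesystem            Size  Used Avail Use% Mounted on     (illustrative)
    # /dev/mapper/vm-nova   225G   12G  213G   6% /var/lib/nova
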
14:26 Dr_Drache ok
14:26 Dr_Drache so it boots.
14:26 Dr_Drache from image
14:27 merdoc yay. and now try to create snapshot
14:31 merdoc Dr_Drache: https://www.brighttalk.com/webcast/6793/127279
14:33 Dr_Drache snapshot works
14:35 Dr_Drache trying to boot from snapshot create new volume
14:35 Dr_Drache that works
14:37 Dr_Drache also can manually (GUI) create volume from an image
14:37 Dr_Drache (or it's going)
14:37 merdoc and all that with qcow2?
14:37 Dr_Drache no
14:37 Dr_Drache raw
14:39 merdoc you created a snapshot from raw on ceph? are you sure?
14:40 Dr_Drache yes
14:40 Dr_Drache it's booted
14:40 kaliya it's recommended to use raw on ceph, merdoc
14:40 merdoc kaliya: I know
14:41 Dr_Drache kaliya, he's saying that snapshotting isn't supposed to work on raw
14:41 merdoc but I could not create snapshot from raw. only from qcow2
14:41 merdoc Dr_Drache: exactly
14:41 kaliya ok
14:42 kaliya the docs team hasn't provided fixes yet; I asked them to explain that option better
14:42 Dr_Drache only thing I'm waiting on right now in this testing, is create a new volume from an image.
14:42 Dr_Drache I think that is going to fail
14:43 Dr_Drache nope worked
14:44 merdoc hmmmm... ceph status shows "10444 MB used, 1294 GB / 1304 GB avail" on a freshly created cloud. how can I see what took up the space?
14:45 kaliya merdoc: from glance maybe?
14:45 Dr_Drache ok guys
14:45 Dr_Drache kaliya.. .merdoc
14:45 Dr_Drache here are my tests :
14:46 Dr_Drache upload 10G from glance CLI
14:46 Dr_Drache boot instance from Image = good
14:46 Dr_Drache boot instance from image (create new volume) = bad
14:47 merdoc kaliya: glance image-list says that I have only TestVM and it is 13Mb
14:47 ArminderS joined #fuel
14:47 kaliya Dr_Drache: do you get the same error 28?
14:47 merdoc so the question is - who took the other 9gb ? (%
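
A sketch for breaking down where the Ceph space went, per pool and per OSD; the OSD mount path assumes the default /var/lib/ceph/osd layout:

    ceph df            # usage per pool
    rados df           # objects and bytes per pool
    # On an OSD node, journals (when colocated) also consume space here:
    df -h /var/lib/ceph/osd/ceph-*
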
14:48 Dr_Drache snapshot instance into new volume - then boot instance from volume snapshot (Create new) = good
14:48 Dr_Drache create volume from image, then create instance to boot from that volume = good
14:49 Dr_Drache kaliya, error 28 where?
14:49 kaliya merdoc: could be the journal, but 9G seems too much
14:49 kaliya Dr_Drache: in nova logs
14:49 Dr_Drache crap
14:49 Dr_Drache kaliya
14:49 Dr_Drache my ceph is stuck
14:50 merdoc :D
14:50 kaliya Dr_Drache: some node down?
14:50 Dr_Drache kaliya, no.
14:50 kaliya unhealthy?
14:50 Dr_Drache 10.850%
14:50 Dr_Drache hold for paste
14:51 Dr_Drache http://paste.openstack.org/show/117486/
14:52 merdoc 7 osds: 4 up, 4 in
14:52 merdoc ceph osd tree | grep down
14:53 Dr_Drache crap, didn't see that
14:53 Dr_Drache a whole compute node
14:53 merdoc and why do you have 10 pools? O_o
14:53 Dr_Drache merdoc, 10 drives.
14:53 merdoc no no. pools != drives
14:54 merdoc I have 5 osd and 6 pools.
14:54 merdoc look at ceph osd lspools
14:54 merdoc 0 data,1 metadata,2 rbd,3 images,4 volumes,5 compute,
14:54 Dr_Drache 0 data,1 metadata,2 rbd,3 .rgw.root,4 images,5 volumes,6 compute,7 .rgw.control,8 .rgw,9 .rgw.gc,
14:55 merdoc ah, you have RadosGW
14:55 merdoc ok
14:56 kaliya Dr_Drache: ceph health detail
14:56 kaliya and see if some pg is degraded?
14:57 Dr_Drache http://paste.openstack.org/show/117488/
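
Consolidated, the checks kaliya and merdoc walk through for a degraded cluster look roughly like this as a sketch (the log path assumes the default /var/log/ceph layout):

    ceph -s
    ceph health detail | grep -iE 'degraded|stuck|down' | head -n 20
    ceph osd tree | grep -w down
    # On the suspect node, check the OSD logs for disk or daemon errors:
    grep -iE 'error|fail' /var/log/ceph/ceph-osd.*.log | tail -n 20
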
14:57 Dr_Drache same thing that happened monday
14:58 Dr_Drache holy crap
14:58 merdoc maybe something with your hardware?
14:59 Dr_Drache merdoc you guessed it
14:59 merdoc kaliya: yes, it's journals. 2Gb per osd. even if I don't create it in fuel ui
15:00 Dr_Drache merdoc, 2 faulty DIMMS
15:00 merdoc shit happens
15:01 Dr_Drache merdoc
15:01 Dr_Drache 1 month old server,
15:02 merdoc give it to seller
15:03 Dr_Drache yea
15:03 Dr_Drache calling them
15:03 Dr_Drache wtf
15:06 merdoc kaliya: so. I *need* to use raw images on ceph? and snapshotting *must* work with raw?
15:06 dhblaz joined #fuel
15:18 Dr_Drache merdoc, that's a good question
15:23 Dr_Drache looks like my testing is done for now
15:24 geekinutah joined #fuel
15:28 flor3k joined #fuel
15:35 t_dmitry joined #fuel
15:52 mpetason joined #fuel
15:57 kaliya merdoc: you should, yes
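
A minimal sketch of converting and uploading a raw image, assuming the Icehouse-era glance v1 CLI shipped with Fuel 5.1; the file and image names are hypothetical:

    qemu-img convert -f qcow2 -O raw ubuntu.qcow2 ubuntu.raw
    glance image-create --name ubuntu-raw --disk-format raw \
        --container-format bare --is-public True --file ubuntu.raw
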
16:01 rmoe joined #fuel
16:06 adanin joined #fuel
16:08 ArminderS- joined #fuel
16:09 AKirilochkin joined #fuel
16:18 e0ne joined #fuel
16:19 xdeller_ joined #fuel
16:32 bogdando joined #fuel
16:37 geekinutah joined #fuel
16:54 angdraug joined #fuel
17:28 jobewan joined #fuel
17:30 HeOS joined #fuel
17:36 youellet joined #fuel
17:41 kupo24z MiroslavAnashkin: angdraug: any word on the new oslo packages on http://fuel-repository.mirantis.com/fwm/5.1.1/ubuntu/pool/main/ ?
17:52 [HeOS] joined #fuel
17:53 MiroslavAnashkin kupo24z: In progress. Our OSCI team just confirmed they are aware. Looks like our repos have been migrated to the new storage with the same entry point, but got out of sync.
17:54 kupo24z MiroslavAnashkin: Any estimation as far as timelines?
18:04 jpf joined #fuel
18:05 tatyana joined #fuel
18:08 MiroslavAnashkin Depends on sync speed. They only need to include this mirror in the sync procedure, start it and wait.
18:10 angdraug kupo24z: fuel-repository has been superseded by mirror.fuel-infra.org, although neither has the new oslo yet
18:15 MiroslavAnashkin Currently fuel-repository is an alias to mirror.fuel-infra.org
18:29 bart613 joined #fuel
18:31 bart613 Hello, is anyone facing issues with using Fuel 5.1 on the mirantis distribution?  Whenever I try to install a new cluster with 3 controllers, one of the controllers is fine and installs successfully but the other two fail trying to communicate with keystone
18:33 jobewan joined #fuel
18:36 MiroslavAnashkin bart613: Please check if your network interfaces start on boot on the problematic nodes. Some NICs have a longer startup time than the kernel expects. Tigon TG3 NICs are a bad example of this issue.
18:37 MiroslavAnashkin bart613: If these are UP - please try to ping the master node from these nodes; network issues are possible as well.
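
As a sketch, the checks MiroslavAnashkin suggests on a failing controller; 10.20.0.2 is the default Fuel admin-network address and may differ in your deployment:

    ip -o link show                       # are the NICs UP?
    ping -c 3 10.20.0.2                   # can the node reach the Fuel master?
    grep -i error /var/log/keystone/*.log | tail -n 20
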
18:41 evgeniyl__ joined #fuel
18:44 bart613 MiroslavAnashkin: OK checking...
18:49 ArminderS joined #fuel
18:59 wayneeseguin joined #fuel
19:04 Rajbir joined #fuel
19:08 HeOS joined #fuel
19:19 thehybridtech joined #fuel
19:33 ArminderS- joined #fuel
19:40 angdraug geekinutah: https://bugs.launchpad.net/fuel/+bug/1373096
19:41 f13o_f13o joined #fuel
19:47 adanin joined #fuel
19:57 e0ne joined #fuel
20:17 dhblaz joined #fuel
20:18 e0ne joined #fuel
20:21 e0ne joined #fuel
20:53 dburmistrov kupo24z: 5.1.1 mirrors are up-to-date
20:53 kupo24z dburmistrov: Thanks!
20:54 kupo24z does 5.1/stable community ISO use the 5.1.1 or 5.1 directory?
20:55 vt102 joined #fuel
21:04 jpf_ joined #fuel
21:05 adanin joined #fuel
21:19 jpf joined #fuel
21:22 adanin joined #fuel
21:40 angdraug joined #fuel
22:10 adanin joined #fuel
22:22 teran joined #fuel
22:23 teran__ joined #fuel
22:42 tatyana joined #fuel
22:56 teran joined #fuel
23:27 geekinutah joined #fuel
