IRC log for #fuel, 2015-02-26

| Channels | #fuel index | Today | | Search | Google Search | Plain-Text | summary

All times shown according to UTC.

Time Nick Message
00:13 adanin joined #fuel
00:41 youellet_ joined #fuel
01:18 LiJiansheng joined #fuel
01:32 rmoe joined #fuel
02:36 gongysh joined #fuel
03:24 LiJiansheng joined #fuel
03:38 zerda joined #fuel
05:01 xarses joined #fuel
05:31 claflico joined #fuel
05:42 MarkDude joined #fuel
06:19 tobiash joined #fuel
06:22 tobiash joined #fuel
06:36 zerda joined #fuel
06:58 dklepikov joined #fuel
07:10 sambork joined #fuel
07:43 Miouge joined #fuel
07:46 stamak joined #fuel
07:50 adanin joined #fuel
08:03 sambork joined #fuel
08:03 e0ne joined #fuel
08:07 maximov joined #fuel
08:20 Philipp__ joined #fuel
08:43 HeOS joined #fuel
08:55 sambork1 joined #fuel
09:12 hyperbaba joined #fuel
09:20 maximov joined #fuel
09:25 avlasov joined #fuel
09:28 teran joined #fuel
09:36 maximov joined #fuel
10:00 sambork joined #fuel
10:05 dkaigarodsev joined #fuel
10:12 sambork1 joined #fuel
10:21 teran joined #fuel
10:24 teran_ joined #fuel
10:41 dkaigarodsev_ joined #fuel
10:42 SergK joined #fuel
10:42 sovsianikov joined #fuel
10:50 monester_laptop joined #fuel
10:59 andriikolesnikov joined #fuel
11:08 Philipp__ joined #fuel
11:14 gongysh joined #fuel
11:29 saibarspeis joined #fuel
12:04 sambork joined #fuel
12:04 SergK joined #fuel
12:05 sovsianikov joined #fuel
12:05 dkaigarodsev_ joined #fuel
12:07 ddmitriev joined #fuel
12:09 maximov joined #fuel
12:18 SergK joined #fuel
12:18 dkaigarodsev_ joined #fuel
12:35 sambork joined #fuel
12:38 maximov joined #fuel
12:53 rbowen joined #fuel
13:02 sambork joined #fuel
13:22 devstok joined #fuel
13:56 t_dmitry joined #fuel
14:21 francois joined #fuel
14:56 claflico joined #fuel
14:58 adanin joined #fuel
15:04 daniel3_ joined #fuel
15:11 devstok hi
15:11 devstok I've set Fuel to use the QCOW format
15:11 devstok Openstack Icehouse
15:11 devstok how can I modify that parameter?
15:12 devstok in conf file?
15:13 devstok i only found in nova.conf "use_cow_format=True"
15:14 glavni_ninja when i do "ethtool -k eth1" command among everything else i get this value "tx-gre-segmentation: off [fixed]"
15:15 glavni_ninja does it mean i can't use Neutron with GRE segmentation?
15:15 glavni_ninja can i change this?
15:18 blahRus joined #fuel
15:27 evkonst joined #fuel
15:34 glavni_ninja or is it possible that tx-gre-segmentation is off on switch, also?
15:39 e0ne joined #fuel
15:39 xarses joined #fuel
16:01 kozhukalov joined #fuel
16:05 angdraug joined #fuel
16:06 MarkDude joined #fuel
16:06 xarses joined #fuel
16:15 devstok can I put 2 virual router on the internal shared network?
16:16 ofkoz joined #fuel
16:24 xarses devstok: the parameter is actually old and doesn't do what you expect, you simply need to upload images to glance in the format you desire
16:26 strictlyb @xarses
16:26 strictlyb so i have the 2 nodes setup one for hypervisor and one for controller/cinder
16:27 monester_laptop joined #fuel
16:27 strictlyb and when i login to the UI and try and add an image it sits there idling. also, since i have eth0/eth1 and one is wan, one is lan, how should i do this networking aspect?
16:31 devstok @xarses : i see in disk.info the format is always qcow2
16:39 glavni_ninja when i do "ethtool -k eth1" command among everything else i get this value "tx-gre-segmentation: off [fixed]"
16:39 glavni_ninja does it mean i can't use Neutron with GRE segmentation?
16:39 glavni_ninja can i change this?
16:39 glavni_ninja or is it possible that tx-gre-segmentation is off on switch, also?
16:44 zimboboyd joined #fuel
16:46 stamak joined #fuel
16:47 championofcyrodi does fuel delete the volume groups from a physical volume when a node is 'removed' ?
16:48 championofcyrodi basically, adding a 3rd controller failed.  And apparently fuel undeployed ALL the controllers.  and now we only have compute nodes running w/ ceph, qemu etc...
16:48 championofcyrodi but no monitor, so i can't even get to the ceph data to export
16:49 championofcyrodi looking at the physical disk for one of the controllers, i see it's GPT partitioned w/ lvm2, and the physical volume is there...
16:49 championofcyrodi but it shows 'blank' volume group when running 'pvs
16:55 xarses championofcyrodi: if you delete a node from a cluster it will wipe the partition // lvm // md data from it
16:56 xarses championofcyrodi: the same will occur if you 'reset' the cluster
16:56 championofcyrodi yea, that's what it looks like...
16:56 xarses otherwise it won't erase anything between the failed jobs
16:57 championofcyrodi pvck -d -v shows the metadata still resides at offset=5632 with size... so i guess i'll see if i can restore the volume groups and logical volumes...
16:57 xarses with the failed third controller, the first 2 should be fine, and not re-provisioned
16:57 championofcyrodi so i can get the ceph monitor data directory, and try to get a monitor up so i can export images
16:58 championofcyrodi well i was out sick and the other admin said that he just tried to add a third, and when it failed it removed all of the controllers.  so i don't know exactly, just trying to deal with fallout
16:59 championofcyrodi but you answered my question.  thanks xarses
16:59 xarses there is no reason for it to format a node that was not removed from the cluster, so if that happened I'm very interested to find out why
16:59 xarses and prevent it from ever happening again
17:01 xarses glavni_ninja: the ethtool -k dumps the physical options on the device, you can enable/disable them for performance reasons, it wouldn't normally impact being able to use Neutron with GRE, just the performance of it (unless the network hardware is banning it)
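xarses's point about `ethtool -k` can be sketched as below; a minimal sketch, assuming `eth1` is the GRE-carrying NIC (lowercase `-k` queries, uppercase `-K` toggles, and both need root):

```shell
# Query offload features; a feature marked "[fixed]" is one the driver
# will not let you change, not a sign that GRE itself is blocked
ethtool -k eth1 | grep gre

# Toggle a non-fixed feature with the capital -K form; a [fixed] feature
# simply refuses ("Could not change any device features") -- GRE traffic
# still works, just without the hardware segmentation offload
ethtool -K eth1 tx-gre-segmentation on
```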
17:02 xarses strictlyb: what backend are you using for glance?
17:03 championofcyrodi my priority ATM is recovering data, if i can get that accomplished, perhaps i can get to logs and figure out what happened.
17:03 championofcyrodi but that explains why the LVM2 VG and LV are missing.
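The `pvck` recovery attempt championofcyrodi describes can be sketched as follows; a hypothetical sketch only, assuming the device name, VG name, and archive file are placeholders, and that an LVM metadata archive actually survived (after Fuel's node wipe it often will not):

```shell
# 1. Look for surviving on-disk LVM metadata copies (pvck reports the
#    offset/size of any metadata areas it can still find):
pvck -d -v /dev/sdb3

# 2. If /etc/lvm/archive (or a backup of it) still holds the VG layout,
#    recreate the PV with its original UUID and restore the VG from it:
pvcreate --uuid "<pv-uuid-from-archive>" \
         --restorefile /etc/lvm/archive/myvg_00000.vg /dev/sdb3
vgcfgrestore -f /etc/lvm/archive/myvg_00000.vg myvg
vgchange -ay myvg
```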
17:03 strictlyb i believe the default lvm should i use ceph or something along those lines?
17:06 devstok How can i set a virtual router to manage two different subnets on a single EXTERNAL network ????
17:07 devstok please help
17:10 glavni_ninja xarses: thank you very much
17:25 xarses devstok: I'm not sure what you are attempting to do.
17:25 devstok I added another subnet
17:26 devstok this subnet has a different Gate Way
17:26 xarses so you added an additional external network?
17:27 devstok no a subnet
17:27 devstok my external got 2 subnets
17:28 devstok but the virtual router cant route the request to the new gateway
17:28 andriikolesnikov joined #fuel
17:30 championofcyrodi xarses: well I've confirmed that the metadata is wiped from the disk. so i guess i'll start trying to traverse the osd's data directory and piece together object identifiers and data chunk positions, and see what i can recover.
17:31 championofcyrodi but all that is on the disk's metadata is someone else's attempt at creating new volume groups after the metadata was deleted via fuel's node removal process.
17:35 maximov joined #fuel
17:36 xarses championofcyrodi: you might get something from http://ceph.com/docs/master/rados/operations/add-or-rm-mons/#removing-monitors #3
17:37 xarses although I don't see anything specific for all monitors are gone
17:38 championofcyrodi yea, already been down that route, unfortunately w/o the mons data, all you really end up w/ is osds with rbd objects like: rbd\udata.228174b0dc51.0000000000000fa0__head_26F1E127__5
17:39 xarses you probably want to poke your head into #ceph on OFTC network if you arent already
17:39 championofcyrodi if i can find the 'headers' for each object, aka rbd image, then i can potentially find out where all of the 4MB-8MB chunks are and put them back together
17:40 championofcyrodi yea, that is where i've been getting this insight thus far.
17:41 maximov joined #fuel
17:45 strictlyb @xarses let me know when you have time to help me with this
17:50 jobewan joined #fuel
17:53 adanin joined #fuel
18:03 emagana joined #fuel
18:35 rmoe joined #fuel
18:45 Miouge joined #fuel
18:49 stamak joined #fuel
18:50 mattgriffin joined #fuel
19:20 monester_laptop joined #fuel
19:25 HeOS joined #fuel
19:41 saibarspeis joined #fuel
19:57 CheKoLyN joined #fuel
20:12 zimboboyd I installed the Openstack cloud (on VirtualBox) for the first time. I used Mirantis Fuel. I want to test Murano with Docker. Now my question is where can i find a download of the image "Ubuntu14.04 x64 (pre-installed murano agent and docker)" ?
20:38 vtzan joined #fuel
20:45 rmoe strictlyb: how big is the image you're trying to upload? also, what is its status in glance?
20:47 strictlyb it sits queueing i guess
20:48 mattgriffin joined #fuel
20:50 strictlyb status queued
20:59 strictlyb whats the testvm user/pass that ships by default
21:08 strictlyb @rmoe thoughts ?
21:08 xarses cirros:cubswin:)
21:09 rmoe strictlyb: how big is your image you're trying to upload?
21:12 strictlyb small ubuntu image
21:17 strictlyb odd cirros:cubswin doesn't work
21:19 zimboboyd strictlyb: add the ":)"
21:19 strictlyb ohh
21:19 strictlyb anyone have any idea why i cant create images?
21:21 zimboboyd I am also struggling with the images.
21:21 rmoe strictlyb: I never upload images via horizon, web forms don't generally deal with large uploads well, give it a shot from the command line on one of your controllers
21:23 zimboboyd Anyone knows where i find a image for running docker containers?
21:23 rmoe strictlyb: here's an example, do this on one of your controllers http://paste.openstack.org/show/182676/
21:24 rmoe if that fails then something is wrong with glance and we can try to figure out what the issue is
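The paste rmoe links is no longer available, but an Icehouse-era command-line upload along the lines he suggests would look roughly like this (a sketch assuming the glance v1 client, a local `ubuntu.qcow2` file, and Fuel's admin credentials file at `/root/openrc` on the controller):

```shell
# Run on a controller; Fuel typically drops admin credentials in /root/openrc
source /root/openrc

# Upload the image directly, bypassing the Horizon web form
glance image-create --name ubuntu-small \
    --disk-format qcow2 --container-format bare \
    --is-public True --file ubuntu.qcow2

# Status should move from "queued"/"saving" to "active"
glance image-list
```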
21:24 strictlyb uh duh
21:25 strictlyb i kno y
21:25 strictlyb so ok
21:25 strictlyb i have a network with eth0/eth1
21:25 strictlyb eth0 is lan
21:25 strictlyb eth1 is wan
21:25 strictlyb so
21:25 strictlyb i have a /27 for this so i need to bind a public IP to eth1 on each node or
21:26 strictlyb or change it in fuel
21:26 strictlyb master
21:26 strictlyb i would assume but
21:27 strictlyb it wont let me change the network
21:27 strictlyb do i need to delete the environment and recreate it or?
21:30 rmoe you can reset your environment (on the actions tab), it will keep all of your settings and role allocations and return the nodes into the bootstrap state
21:30 rmoe then you'll be able to make changes before you deploy again
21:33 thumpba joined #fuel
21:42 xarses joined #fuel
22:05 e0ne joined #fuel
22:19 adanin joined #fuel
22:22 strictlyb @rmoe
22:22 strictlyb question
22:22 strictlyb u there?
22:22 rmoe yep
22:22 strictlyb ok so
22:22 strictlyb i said earlier
22:22 strictlyb eth0 lan
22:22 strictlyb eth1 wan
22:23 strictlyb i have multiple subnets i can use for wan right
22:23 strictlyb so  when i add in public
22:23 strictlyb and the ranges to use for floating
22:23 strictlyb will it bind the nics automatically ?
22:26 rmoe so after you've configured the network ranges on the settings page you need to assign the networks to the appropriate nics for each node
22:32 mattgriffin joined #fuel
22:41 championofcyrodi xarses: you around?
22:51 strictlyb kk
22:54 adanin joined #fuel
22:56 championofcyrodi xarses: using custom bash scripts, I was able to traverse the osd data directories for all my osds, find rbd images that had <uid>-0000000000000000_head_<hash> or whatever... read the first block w/ hexdump and determine it was a QFI (qcow2) image.... then traverse and get a list that builds a map of node, osd, and obj.... then pipe into uniq to remove replicas... then scp ALL the objs to a single folder. then use dd with conv=notrunc
22:57 championofcyrodi since ls sorts automatically, using a for loop and dd w/ notrunc wrote the data to the qcow in the same order it was written to ceph, hex ascending.
22:58 championofcyrodi where data ~ a bunch of 8MB objs
23:00 championofcyrodi the worst part was scping the rbd files with \ in the name...
23:02 championofcyrodi the virtualbox import of the qcow2 converted to VDI works... so i got my images back...
23:02 championofcyrodi phew
23:02 championofcyrodi now set up recursion, and walk away for a day. :\
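The reassembly loop championofcyrodi describes can be sketched as below; a toy demonstration only, using fake 4-byte "chunks" in place of real 4-8MB rbd objects, and incrementing the seek index blindly (a real sparse image would instead derive the offset from the hex suffix in each object's name, since missing chunks are holes):

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Fake rbd object chunks; zero-padded hex suffixes sort lexically in
# offset order, which is why plain `ls` ordering works here
mkdir chunks
printf 'AAAA' > chunks/rbd_data.0000   # chunk at offset 0
printf 'BBBB' > chunks/rbd_data.0001   # chunk at offset 1
chunk_size=4                            # real rbd chunks are 4-8 MB

# Write each chunk into the image at its slot; conv=notrunc keeps dd
# from truncating the output file between writes
i=0
for obj in $(ls chunks); do
    dd if="chunks/$obj" of=recovered.img bs=$chunk_size seek=$i \
       conv=notrunc status=none
    i=$((i + 1))
done

cat recovered.img   # -> AAAABBBB
```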
23:24 dmellado joined #fuel
