
IRC log for #fuel, 2014-09-30


All times shown according to UTC.

Time Nick Message
01:39 mattgriffin joined #fuel
02:01 jpf joined #fuel
02:20 vtzan joined #fuel
02:27 harybahh joined #fuel
02:30 adanin joined #fuel
02:31 teran joined #fuel
03:04 AKirilochkin joined #fuel
03:32 teran joined #fuel
03:34 wayneeseguin joined #fuel
03:36 tdubyk joined #fuel
03:54 ArminderS joined #fuel
04:04 AKirilochkin joined #fuel
04:22 HeOS joined #fuel
04:27 harybahh joined #fuel
04:42 AKirilochkin_ joined #fuel
04:43 AKirilochkin joined #fuel
04:48 teran joined #fuel
05:05 anand_ts joined #fuel
05:31 sressot joined #fuel
05:49 teran joined #fuel
06:24 stamak joined #fuel
06:26 emagana joined #fuel
06:27 harybahh joined #fuel
06:34 emagana joined #fuel
06:43 Longgeek joined #fuel
06:46 Longgeek joined #fuel
06:47 harybahh joined #fuel
06:48 pal_bth joined #fuel
06:49 Longgeek_ joined #fuel
06:50 teran joined #fuel
06:59 merdoc Dr_drache: about your question - no, I choose 'boot from image'
07:01 pasquier-s joined #fuel
07:07 flor3k joined #fuel
07:22 HeOS joined #fuel
07:23 e0ne joined #fuel
07:27 Longgeek joined #fuel
07:27 teran joined #fuel
07:29 e0ne joined #fuel
07:29 teran_ joined #fuel
07:30 teran__ joined #fuel
07:31 hyperbaba joined #fuel
07:36 azemlyanov joined #fuel
07:36 hyperbaba hi there,
07:39 hyperbaba i've installed 5.1 a couple of days ago, and have been testing the system since then. Can anyone help me with this: I've created 3 ephemeral instances on ceph storage sized 80, 80 and 160 GB. Then I tried to snapshot them. The snapshots have been in the saving state since yesterday. It's a 4-node cloud with 1 controller. The network is bonded 3 Gbps per node. This looks too slow to me. Any idea?
07:42 Longgeek joined #fuel
07:43 kaliya hyperbaba: could you please pastebin a `cinder snapshot-show <yoursnap>`? we should troubleshoot cinder
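A minimal sketch of the checks being suggested here, assuming the stock cinder and glance CLI clients on a Fuel 5.1 controller; the openrc path and the IDs are placeholders:

    # source the credentials of the tenant that owns the snapshots
    source /root/openrc
    # look for the snapshot on the cinder side
    cinder snapshot-list --all-tenants
    cinder snapshot-show <snapshot-id>
    # instance snapshots of ephemeral VMs land in glance, so check there too
    glance image-list
    glance image-show <image-id>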
07:44 vkramskikh_ joined #fuel
07:45 hyperbaba kaliya: cinder snapshot-list --all-tenants gives me no result. And I see those images in the image list, but in the saving state
07:46 bogdando joined #fuel
07:47 kaliya hyperbaba: do you see something wrong in the nova logs, say from libvirtd?
07:48 hyperbaba kaliya: no errors in nova-api. Where to look for more?
07:50 hyperbaba kaliya: The openstack is in debug mode
07:50 kaliya ok are your nodes on centos?
07:50 kaliya and images are in raw?
07:51 hyperbaba kaliya: nodes are ubuntu. Images are raw
07:52 hyperbaba kaliya: and some are in qcow2. But the snapshots show up as raw
07:52 kaliya because on ceph, they're translated into raw
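Since an RBD-backed glance stores images as raw, a common workaround is to convert qcow2 images to raw before uploading them; a sketch assuming qemu-img and the glance CLI, with placeholder file and image names:

    # convert the qcow2 image to raw locally
    qemu-img convert -f qcow2 -O raw ubuntu.qcow2 ubuntu.raw
    # upload the raw image; the RBD backend then stores it as-is
    glance image-create --name ubuntu-14.04 --disk-format raw \
        --container-format bare --is-public True --file ubuntu.raw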
07:53 stamak joined #fuel
07:53 merdoc kaliya: Hi! any luck yesterday with win.qcow?
07:54 kaliya hi merdoc, still not tried to upload
07:54 merdoc hyperbaba: is there enough space on the controller? did you restart your controller along with the computes?
07:55 kaliya hyperbaba: is `ceph status` ok? mmh looks like another ceph issue
07:55 hyperbaba kaliya: I've used the openrc for the project which initiated the snapshots, and issued the cinder commands. Still no snapshots
07:55 kaliya hyperbaba: wondering if it's stuck and going into a timeout
07:56 hyperbaba kaliya: is there any separate process visible on the node when doing snapshots? Maybe I can try to find it on a node?
07:58 kaliya hyperbaba: it should show up as cinder
08:01 warpc__ joined #fuel
08:01 hyperbaba don't see it
08:01 hyperbaba kaliya: don't see it
08:02 hyperbaba kaliya: besides the processes on the controller: api, scheduler and volume
08:02 e0ne joined #fuel
08:03 kaliya hyperbaba: could you please investigate osd logs?
08:03 merdoc hyperbaba: try to restart ceph-mon and then ceph-osd
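On Ubuntu nodes of this vintage the ceph daemons run under upstart; a rough sketch of the restart merdoc suggests (the job names can vary by ceph release, so treat them as assumptions):

    # on the monitor (controller) node
    restart ceph-mon-all
    # on each OSD node
    restart ceph-osd-all
    # confirm the cluster settles afterwards
    ceph -s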
08:03 hyperbaba kaliya: ceph is fully functional.
08:05 hyperbaba kaliya: and I see those images on ceph using: rbd ls -p images
08:06 kaliya ok
08:07 hyperbaba kaliya: those snapshotted images, to be precise.
08:08 kaliya hyperbaba: I cannot find any related bug
08:08 kaliya hyperbaba: so we can file one, describing how to reproduce
08:08 kaliya sounds like a ceph issue
08:09 hyperbaba kaliya: Do you use the ceph mksnap mechanism for creating snapshots, or the old way?
08:10 hyperbaba kaliya: and in the end, maybe the snapshots are ok and only the state is wrong? How can I change the state to active to test it?
08:17 e0ne joined #fuel
08:24 hyperbaba kaliya: the funny thing: the command glance show $image_id shows that the image is in the saving state and the updated_at field is yesterday's date. If only I could change the state of that image to see whether it works
08:30 Rajbir joined #fuel
08:35 evg hyperbaba: have you tried creating a snapshot with the rbd command (if it's a testing env, of course)?
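What evg is proposing looks roughly like this: snapshot an RBD image directly, to see whether ceph itself can take snapshots. The image name is a placeholder taken from `rbd ls -p images`:

    # create and list a manual snapshot of a glance image in ceph
    rbd -p images snap create <image-id>@manual-test
    rbd -p images snap ls <image-id>
    # remove the test snapshot afterwards
    rbd -p images snap rm <image-id>@manual-test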
08:49 dancn joined #fuel
08:50 kkk_ joined #fuel
08:53 kkk_ Hi all! In Mirantis 5.0.1, if I have changed a .py file, what command should I use to restart the docker environment to regenerate the .pyc file?
08:57 lordd joined #fuel
09:00 hyperbaba kaliya: I've found this error in the glance api log regarding that snapshot: Failed to upload image 420557a7-faa7-4174-b9b2-a8f27b10c373. And then: Unable to kill image 420557a7-faa7-4174-b9b2-a8f27b10c373
09:01 evg kkk_: dockerctl restart <container>
09:04 evg kkk_: or just restart the service which uses this .py
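A sketch of evg's two options on a Fuel 5.0.1 master; the container name is an example, and .pyc files are regenerated automatically the next time the changed module is imported:

    # see which containers exist, then restart the one whose code changed
    dockerctl list
    dockerctl restart nailgun
    # or enter the container and restart just the affected service
    dockerctl shell nailgun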
09:13 kaliya hyperbaba: further info in glance?
09:14 e0ne joined #fuel
09:14 hyperbaba kaliya: nothing regarding those id's
09:16 hyperbaba kaliya: Next, I removed those snapshots from horizon. They are no longer visible in glance, but they still exist in the ceph images pool. When I tried to remove them from ceph I got the following: 7f640b179780 -1 librbd: image has snapshots - not removing. When I tried to do "snap purge" I got this: librbd: removing snapshot from header failed: (16) Device or resource busy
09:17 hyperbaba kaliya: so it looks like it's ceph after all. But without any errors in the logs.
09:17 kaliya yes, so it seems :(
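A hedged sketch of cleaning up an RBD image that still has (possibly protected) snapshots; the "Device or resource busy" on purge usually means a client still holds a watch on the image header, so this may only succeed once glance/nova have let go of it:

    rbd -p images snap ls <image-id>
    # unprotect any protected snapshots, then purge them and remove the image
    rbd -p images snap unprotect <image-id>@<snap-name>
    rbd -p images snap purge <image-id>
    rbd -p images rm <image-id>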
09:35 evgeniyl__ joined #fuel
09:43 e0ne joined #fuel
09:44 adanin joined #fuel
10:26 teran joined #fuel
10:38 kaliya joined #fuel
10:46 teran joined #fuel
11:04 sc-rm Has anybody experienced that if you launch too many instances at the same time, you get “Setting instance to ERROR state.”
11:05 sc-rm instead of it just waiting a bit more for the instances to get spawned
11:05 merdoc sc-rm: I ran 20 instances at a time. all ok
11:05 merdoc are you sure that you have enough resources?
11:10 teran joined #fuel
11:11 sc-rm merdoc: ah, I see. In the old setup overcommitting was set; now resources are just 1:1
11:11 anand_ts hello all, trying to install compute node. I got this error, http://imgur.com/jlPouY1 , http://imgur.com/MjiLQ20 any idea? Controller node installed successfully.
11:12 rsFF hi guys, i screwed up the dns servers of the fuel master
11:12 rsFF where can i edit them by hand, since all nodes have an /etc/resolv.conf pointing at fuel...
11:15 pal_bth joined #fuel
11:16 bdudko left #fuel
11:19 merdoc sc-rm: the current default overcommitting is 8:1:1 cpu:ram:storage
11:20 sc-rm merdoc: I just figured that out the hard way :-P
11:21 merdoc sc-rm: also note that horizon has a bug in the UI - it shows the real VCPU numbers instead of the multiplied ones
11:22 sc-rm merdoc: I saw, and that's what confused me in the first place, but that's okay for now :-)
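The 8:1:1 ratio merdoc quotes corresponds to the standard nova allocation-ratio settings; a sketch of where they live, assuming the defaults mentioned above:

    # /etc/nova/nova.conf on the node(s) running nova-scheduler
    cpu_allocation_ratio = 8.0
    ram_allocation_ratio = 1.0
    disk_allocation_ratio = 1.0
    # restart nova-scheduler after changing them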
11:23 evg anand_ts: hello, the first one means you tried to configure interfaces on nodes with different network hw
11:24 evg anand_ts: the second can mean anything, please find the real error in the logs
11:26 anand_ts evg: Okay, but I gave the same network for all the nodes, and configured only eth0
11:26 tobiash joined #fuel
11:29 flor3k joined #fuel
11:32 evg anand_ts: one of your nodes may have one NIC and another two NICs, for example
11:33 sc-rm merdoc: is there some time that has to pass between having instances running, deleting them and spawning new instances? If I spawn new instances too soon after terminating some other instances, it fails to create the new instances...
11:35 evg rsFF: I'm not sure I've got you. But dns on the fuel master is managed by dnsmasq in the cobbler container. "dockerctl shell cobbler"
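A rough sketch of evg's pointer; the exact config file used by dnsmasq inside the cobbler container is an assumption, so check which file carries the upstream resolvers before editing:

    # on the fuel master, enter the cobbler container
    dockerctl shell cobbler
    # adjust the upstream DNS servers dnsmasq hands out
    # (file name is an assumption - look for /etc/dnsmasq.conf or /etc/dnsmasq.upstream)
    vi /etc/dnsmasq.upstream
    # exit and restart the container so dnsmasq picks up the change
    dockerctl restart cobbler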
11:37 merdoc sc-rm: maybe your storage is too slow?
11:38 sc-rm merdoc: might be, I’m also just stress testing to find bottlenecks and fix those on the hardware side
11:40 Dr_drache merdoc, morning sir
11:40 Dr_drache kupo24z, yes, and same size.
11:41 merdoc Dr_drache: hi!
11:42 Dr_drache merdoc, so they got me to redeploy with all my options changed.
11:42 Dr_drache and guess what?
11:42 merdoc sc-rm: you can use Rally - https://wiki.openstack.org/wiki/Rally
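A minimal sketch of a Rally run for this kind of stress test, assuming Rally is installed and a deployment is registered; NovaServers.boot_and_delete_server is one of the standard sample scenarios, and the flavor/image names are placeholders:

    cat > boot-and-delete.json <<'EOF'
    {
      "NovaServers.boot_and_delete_server": [
        {
          "args": {"flavor": {"name": "m1.small"}, "image": {"name": "TestVM"}},
          "runner": {"type": "constant", "times": 20, "concurrency": 5}
        }
      ]
    }
    EOF
    rally task start boot-and-delete.json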
11:42 Dr_drache fails faster
11:42 merdoc :D
11:43 kaliya Dr_drache: what fails, the same from yesterday?
11:43 sressot joined #fuel
11:44 Dr_drache kaliya, yeah, kupo24z and angdraug had me remove the OSD from the controller and change to a repl of 2, and then I get this: http://paste.openstack.org/show/116844/ after an upload of a raw disc
11:44 Dr_drache s/disc/disk
11:45 merdoc Dr_drache: raw??
11:45 Dr_drache merdoc, yes sir.
11:45 merdoc O_o
11:45 Dr_drache now, the question still stands : http://i.imgur.com/btPT4Je.png <--
11:45 kaliya Dr_drache: is the ceph degraded?
11:45 Dr_drache is that supposed to work?
11:45 Dr_drache kaliya, not this time no.
11:45 * merdoc hate testrails!
11:46 kaliya Dr_drache: it is supposed to
11:46 kaliya Dr_drache: I am trying with a windows image as well
11:46 Dr_drache kaliya, then ceph, glance, cinder is broken.
11:46 Dr_drache kaliya, this is a linux raw.
11:47 Dr_drache known to work on 2 different KVM-based clusters.
11:47 merdoc Dr_drache: try 'boot from image' instead of 'boot from image (creates ...)'
11:47 Dr_drache merdoc, I will here in a min, but that still leaves me with the problem of how to create new instances.
11:47 kaliya Dr_drache: please do `rbd ls images`?
11:48 sc-rm merdoc: Cool, I’ll have a look into it
11:49 Dr_drache kaliya : http://paste.openstack.org/show/117099/
11:49 kaliya Dr_drache: and `rbd ls volumes`
11:50 Dr_drache root@node-6:~# rbd ls volumes
11:50 Dr_drache volume-2fcb8ee3-3d0e-4240-8cf2-6980566970f0
11:50 kaliya add an -l option?
11:50 kaliya is the volume for the instance?
11:51 Dr_drache kaliya
11:51 Dr_drache http://paste.openstack.org/show/117100/
11:52 kaliya evg: we should help Dr_drache. He's running a 5.1 on ceph. When he tries to launch an instance from image, gets http://paste.openstack.org/show/116844/ error "InvalidBDM: Block Device Mapping is Invalid."
11:55 Dr_drache kaliya, evg: to add, if the image is qcow2 (other than the cirros image), I've had Ceph actually become unusable on 2 clusters.
11:56 evg kaliya sure. I'll try to reproduce.
11:57 evg Dr_drache: what configuration have you deployed?
11:57 Dr_drache evg, is there an easy way to dump that?
11:59 evg Dr_drache: maybe, but I just mean: ceph rbd for cinder/glance/nova ephemeral?
11:59 Dr_drache http://i.imgur.com/ZUWGT0t.png
12:00 Dr_drache http://i.imgur.com/J0TXCdk.png
12:00 kaliya Dr_drache: do you still have snapshots running? I'm in the nova source code and an exception is raised when an associated snapshot is running
12:01 Dr_drache kaliya, no. this was an upload via the glance CLI -> horizon launch instance.
12:02 evg Dr_drache: I've got you, thanks
12:03 tobiash hello, is "dockerctl backup" supposed to work under Mirantis 5.1?
12:03 tobiash it says it needs 11 GB on /var but there are 99 GB free
12:04 mattymo tobiash, can you make a bug? if you specify the path /var/backup/, it shouldn't complain about disk space
12:04 kaliya tobiash: yes it is http://docs.mirantis.com/openstack/fuel/fuel-5.1/operations.html?highlight=dockerctl%20backup#howto-backup-and-restore-fuel-master
12:04 evg kaliya: but now is a time for syncup
12:05 mattymo tobiash, run bash -x dockerctl backup and that should show more info
12:05 tobiash looks like the comparison in /usr/share/dockerctl/functions (function verify_disk_space) doesn't work correctly
12:05 mattymo it's possible
12:06 tobiash after changing the comparison to if [ "$avail" -lt "$required" ] it works
12:06 mattymo what was it before?
12:06 mattymo so we should see what $avail is
12:07 tobiash it was something like this: if [[ "$avail" < "$required" ]]
12:08 mattymo can you add before this line this: echo "Avail: $avail   Required: $required"?
12:09 kaliya Dr_drache: you can store in block storage right? `cinder create 1` will create a 1G sample volume
12:10 tobiash mattymo: the numbers were correct (verified via df), just the comparison didn't work for me
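The root cause tobiash describes: inside [[ ]] the < operator does a lexicographic string comparison, while -lt compares integers. A tiny reproduction with numbers shaped like the 99 GB available / 11 GB required case:

    avail=101376; required=11264           # 99 GB free, 11 GB needed (in MB)
    [[ "$avail" < "$required" ]] && echo "string compare: thinks 99 GB < 11 GB"
    [ "$avail" -lt "$required" ] && echo "numeric compare: not enough"   # not printed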
12:10 Dr_drache kaliya, let me test
12:11 Dr_drache http://paste.openstack.org/show/117104/ kaliya
12:12 kaliya Dr_drache: cinder show a9167da1-db44-4e42-a311-abe0c91c50a7
12:13 kaliya Dr_drache: if you have it in `rbd ls volumes -l` also, ceph seems ok
12:14 Dr_drache kaliya
12:14 Dr_drache http://paste.openstack.org/show/117106/
12:15 kaliya health is okay, so I'm wondering if it's some nova issue indeed
12:19 tobiash mattymo: this was the output before fixing the comparison http://paste.openstack.org/show/117107/
12:19 e0ne joined #fuel
12:21 Dr_drache kaliya : but other times ceph "dies"
12:24 mattymo tobiash, do you want to make a bug in launchpad?
12:25 tobiash mattymo: yes
12:26 mattymo ok great. let me know. I'll fix this right away
12:30 tobiash mattymo: currently waiting for the confirmation mail during account creation...
12:33 waterkinfe joined #fuel
12:35 waterkinfe joined #fuel
12:40 Dr_drache blah, I love and hate finding bugs
12:42 merdoc Love and Loathing in Fuel (%
12:45 Dr_drache merdoc, been this way since 3.2.1
12:45 Dr_drache haven't yet had a deployable cluster
12:46 Dr_drache I guess because I'm picky.
12:47 merdoc 5.0.1 worked _almost_ like I need. but now I need mellanox drivers, so 5.1 is my choice
12:48 Dr_drache merdoc, that's how it's been; I pretty much have to become a developer to use it. but I already put my job on the line for fuel, so I'm sticking with it
12:48 Dr_drache and we are SOOO close.
12:49 Dr_drache not really blaming anyone else for my choices.
12:49 Dr_drache just frustrating.
12:49 kaliya Dr_drache: I'm sorry for that, would you please create a diagnostic snapshot and send us?
12:50 Dr_drache kaliya, not your fault
12:50 Dr_drache and don't take it that way.
12:50 Dr_drache and yes
12:50 Dr_drache give me a min
12:50 kaliya sure
12:51 Dr_drache it's worse because "we" are too small for mirantis to financially care :P
12:52 kaliya Mirantis provides an opensource product and community releases for anyone, so we care about every user :)
12:52 Dr_drache kaliya, I mean to hire them :P
12:52 Dr_drache we tried that.
12:52 Dr_drache LOL
12:52 kaliya to hire Mirantis engineers?
12:53 Dr_drache yeapper, not going to discuss rates, but it would cost more than all our hardware. :P
12:53 Dr_drache or, it was last winter.
12:54 Dr_drache Personally, I'd like to learn and help.
12:55 Dr_drache kaliya
12:55 Dr_drache https://www.dropbox.com/s/wsvygcu26ne30z2/fuel-snapshot-2014-09-30_13-50-47.tgz?dl=0
12:56 kaliya Dr_drache: you're already helping
12:56 kaliya thanks I'm downloading it
12:57 flor3k joined #fuel
12:58 Dr_drache no problem, and don't take my annoyances with how commercial openstack works as anything against anyone. just blah.
13:11 teran joined #fuel
13:19 kaliya joined #fuel
13:19 sc-rm how do I update the fuel master for the shellshock?
13:20 merdoc sc-rm: yum/apt-get
13:20 sc-rm yum update did not yield any update and I guess it’s because it’s using the local repository supplied by the install media
13:25 kaliya sc-rm: you can get the upgraded rpm here http://mirror.centos.org/centos/6/updates/x86_64/Packages/bash-4.1.2-15.el6_5.1.x86_64.rpm
13:25 kaliya sc-rm: please note that a new patch is likely to be released soon, for bash
13:27 sc-rm kaliya: Yep, I know; that's also why it's kind of annoying that the fuel master is not linked to their private repository or a security repository...
13:27 sc-rm kaliya: but it makes sense to me why they don't do so
13:33 sc-rm kaliya: the above package does not fix “foo='() { echo not patched; }' bash -c foo”
13:34 sc-rm kaliya: it needs to be http://mirror.centos.org/centos/6/updates/x86_64/Packages/bash-4.1.2-15.el6_5.2.x86_64.rpm ;-)
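A sketch of checking and patching the fuel master by hand, using the el6 package sc-rm points at (the exact version will keep moving as further bash CVEs are fixed):

    # still vulnerable? this prints "not patched" if function import is unpatched
    foo='() { echo not patched; }' bash -c foo
    # install the fixed package directly from the CentOS updates mirror
    rpm -Uvh http://mirror.centos.org/centos/6/updates/x86_64/Packages/bash-4.1.2-15.el6_5.2.x86_64.rpm
    # re-run the test; it should now report "foo: command not found"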
13:35 vkramskikh_ joined #fuel
13:35 kaliya sc-rm: ok thanks
13:38 Dr_drache hmm
13:39 kaliya Dr_drache: I'm looking into your snapshot
13:39 Dr_drache kaliya, it's the same fuel install as all the previous installs, so I bet there is a bunch of useless stuff in there
13:40 kaliya Dr_drache: might be useful to understand the whole settings
13:40 Dr_drache ok
13:40 kaliya Dr_drache: did you run Verify Networks before deployment?
13:40 Dr_drache yes
13:41 Dr_drache I get the standard "dhcp" warning.
13:41 Dr_drache for the public networks
13:56 tobiash mattymo: https://bugs.launchpad.net/fuel/+bug/1375810
13:56 tobiash is that ok?
14:07 mattgriffin joined #fuel
14:07 kaliya Dr_drache: sorry if I'm asking again, how big was your .raw image?
14:08 Dr_drache 10G
14:11 Dr_drache kaliya, if giving info gets this fixed, I'll give you my damn address and email passwords
14:11 AKirilochkin joined #fuel
14:12 kaliya there are a couple of issues, one with rabbitmq, which goes into timeout and drives MySQL into timeout as well
14:12 kaliya that error usually happens when a raw expansion is required and there's not enough space in /var/cinder/somewhere... are you launching just 1 instance alone?
14:14 Dr_drache yes
14:14 Dr_drache funny thing is, I can do 10 cirros images at once
14:14 Dr_drache all 5g+
14:16 merdoc ha. 33 cirros images at once - my record. after that - no ram on compute (%
14:19 mattymo tobiash, I'm back. I was in a meeting
14:19 jobewan joined #fuel
14:21 pal_bth joined #fuel
14:25 kaliya Dr_drache: which .raw are you trying with? A Linux distribution?
14:26 Dr_drache kaliya, it's a ubuntu 14.04 image
14:28 kaliya you made it, or is it from the ubuntu repositories?
14:29 e0ne joined #fuel
14:30 kaliya Dr_drache: seems similar http://www.laocius.com/?p=40 ?
14:30 Dr_drache kaliya, it's a known working image from my KVM cluster
14:33 Dr_drache kaliya, sounds close I guess.
14:34 kaliya Dr_drache: it would be great if you could retry with a new instance launch, and monitor the space on the cinder-scheduled node, just to check whether /var can actually hold an (eventually) expanded raw
14:35 Dr_drache how will I know?
14:35 Dr_drache which one it's scheduled to
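One way to answer that, assuming admin credentials are sourced: the volume's host attribute names the cinder-volume node, and cinder's image conversion scratch space defaults to /var/lib/cinder/conversion:

    # which node did the scheduler pick for this volume?
    cinder show <volume-id> | grep os-vol-host-attr:host
    # on that node, watch the space cinder uses while the raw image is handled
    watch -n 5 df -h /var/lib/cinder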
14:35 emagana joined #fuel
14:40 kaliya Dr_drache: do you have old big files in /var/lib/cinder/*
14:40 kaliya ?
14:41 Dr_drache kaliya, no, this is a fresh redeploy,
14:41 Dr_drache but i'll check
14:42 Dr_drache none
14:42 Dr_drache and I assume they would only be on the controller
14:44 Dr_drache kaliya : root@node-6:~# df -h /var/lib/cinder/
14:44 Dr_drache Filesystem      Size  Used Avail Use% Mounted on
14:44 Dr_drache /dev/sda3        47G   16G   30G  35% /
14:54 Dr_drache /dev/sda3        47G   16G   30G  35% /
14:54 Dr_drache /dev/sda3        47G   16G   30G  35% /
14:54 Dr_drache shit sorry
14:56 Dr_drache kaliya, going from RAW, it should only need 10G in /var/lib/cinder?
14:57 Dr_drache kaliya, because it uses much more than that
14:58 kaliya Dr_drache: I'm just wondering, probably it's because of that
14:58 Dr_drache 13G for a 10G
14:58 Dr_drache but now it's filling ceph
14:59 Dr_drache and failed
14:59 Dr_drache cinder.sqlite increased in size by 3G
15:00 Dr_drache and we're dead in the water again
15:03 Dr_drache kaliya
15:03 kaliya yes
15:03 Dr_drache this last error paste is much longer :
15:03 Dr_drache http://paste.openstack.org/show/117156/
15:06 kaliya Dr_drache: seems close http://lists.openstack.org/pipermail/openstack/2014-May/007190.html
15:11 stamak joined #fuel
15:12 merdoc is that ok - DEBUG nova.virt.libvirt.driver [-] skipping disk for instance-00000042 as it does not have a path get_instance_disk_info /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py:4803 ?
15:13 kaliya merdoc: might. There's no related bug. Do you have further symptoms?
15:14 merdoc it's the first attempt to create an instance from qcow on the selected node. after that - a stacktrace about 'IOError: [Errno 28] No space left on device'
15:14 merdoc and then - a 2nd attempt
15:16 merdoc then 3rd and rescheduling to another node
15:16 dhblaz joined #fuel
15:19 dhblaz I have a fuel 4.x cluster using ceph for swift, glance and cinder.  I would like to add several more slower disks to ceph and move swift and glance there.  Anyone have any advice on how to do this?
15:21 e0ne joined #fuel
15:32 merdoc kaliya: not related. I'll try to find something useful in the logs
15:34 evg dhblaz: you can follow the instructions from here http://ceph.com/docs/giant/rados/deployment/ceph-deploy-osd/
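A rough sketch along the lines of the ceph-deploy docs evg links; host and device names are placeholders, and actually dedicating the slower disks to glance/swift additionally needs a CRUSH rule that targets them:

    # from the ceph admin node, prepare and activate the new disk as an OSD
    ceph-deploy disk zap node-4:/dev/sdd
    ceph-deploy osd prepare node-4:/dev/sdd
    ceph-deploy osd activate node-4:/dev/sdd1
    # verify the new OSD joined the cluster
    ceph osd tree
    # then create a CRUSH rule for the slow disks and point the pools at it,
    # e.g. ceph osd pool set images crush_ruleset <rule-id>   (rule id is a placeholder)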
15:50 AKirilochkin joined #fuel
15:50 ArminderS joined #fuel
15:58 angdraug joined #fuel
15:58 emagana joined #fuel
16:00 emagana_ joined #fuel
16:10 pal_bth joined #fuel
16:11 AKirilochkin_ joined #fuel
16:12 thehybridtech joined #fuel
16:15 Dr_drache kaliya, anything else I can do? I know this takes time. but I feel useless just watching a screen on this project. LOL
16:19 kaliya Dr_drache: I'm trying to reproduce locally with a closer conf
16:19 Dr_drache kaliya, I figured. you know how it is though
16:20 kaliya Dr_drache: you can read this in the meanwhile ) http://documentation.atomia.com/SkyManager/12.5.0.0/html/ar01s02.html
16:21 bdudko joined #fuel
16:23 rmoe joined #fuel
16:23 merdoc hoho. I also got 'InvalidBDM: Block Device Mapping is Invalid.' (%
16:23 kaliya merdoc: finally :P
16:24 merdoc now it's time to go home and sleep with that
16:25 Dr_drache LOL
16:25 Dr_drache I have 5 more hours @ work.
16:26 merdoc tomorrow I will try to recreate the env, and I'll give 'virtual storage' 200 GB, whatever it was!
16:26 kaliya merdoc: well )
16:26 Dr_drache wonder if that's the problem
16:26 merdoc Dr_drache: it's 19:30 here, so time to go home (%
16:27 kaliya here it's 20.30 :)
16:27 Dr_drache I set my virtual storage small
16:27 merdoc kaliya: and you, sir, will explain to me why the current storage conf works fine on 5.0.1 ($
16:29 merdoc I'm sure the problem is somewhere in the 'Use qcow format for images' checkbox!
16:29 Dr_drache merdoc, I don't have that checked
16:29 Dr_drache same problems
16:29 Dr_drache only this time ceph doesn't die.
16:29 merdoc exactly!
16:29 kaliya Dr_drache: your settings are ok, that checkbox not selected
16:29 kaliya merdoc: I filed a bug to better explain that label, it's confusing
16:29 Dr_drache kaliya, this time, yesterday it was checked
16:29 merdoc maybe it ignores that option! (%
16:31 kaliya merdoc: no way )
16:31 merdoc kaliya: it's not only confusing, but also very time-wasting. luckily I'm still waiting for the new servers to arrive
16:32 merdoc see you tomorrow
16:32 kaliya bye
16:42 mpetason joined #fuel
16:53 fandi joined #fuel
17:12 emagana joined #fuel
17:13 ArminderS joined #fuel
17:14 fandi joined #fuel
17:23 kupo24z angdraug: you around?
17:26 flor3k joined #fuel
17:26 mutex joined #fuel
17:29 flor3k joined #fuel
17:38 moe joined #fuel
17:39 moe hello -
17:39 moe i have a quick question
17:39 moe i am running fuel/controller/cinder in VMs (vmware)
17:39 moe and compute on physical servers
17:40 moe the servers require vlan tagging b/c 2 networks (storage, management) are mapped to same physical nic
17:42 moe__ joined #fuel
17:42 emagana joined #fuel
17:42 moe__ how are you guys implementing vlan tagging for ubuntu?
17:44 mattgriffin joined #fuel
17:44 junkao_ joined #fuel
17:45 emagana_ joined #fuel
17:46 kaliya_ joined #fuel
17:47 flor3k joined #fuel
17:53 flor3k joined #fuel
17:56 blahRus joined #fuel
17:58 mpetason Moe you should be trunking those VLANs to the network interfaces on the controller/computes.
17:59 mpetason For each switch interface that is sending out tagged traffic, you should be trunking if there is more than one VLAN
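For moe's question, a minimal sketch of 802.1q tagging on an Ubuntu node where management and storage share one NIC; the VLAN IDs and addresses are placeholders, and on a Fuel-deployed node the interface files are normally generated by the deployment itself:

    # requires the 'vlan' package and the 8021q kernel module
    # /etc/network/interfaces fragment with two tagged sub-interfaces on eth1
    auto eth1
    iface eth1 inet manual

    auto eth1.101
    iface eth1.101 inet static      # management VLAN (ID 101 is a placeholder)
        address 192.168.0.10
        netmask 255.255.255.0
        vlan-raw-device eth1

    auto eth1.102
    iface eth1.102 inet static      # storage VLAN (ID 102 is a placeholder)
        address 192.168.1.10
        netmask 255.255.255.0
        vlan-raw-device eth1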
18:01 youellet_ joined #fuel
18:01 youellet_ a
18:01 Dr_drache hmmm
18:03 MiroslavAnashkin If I remember correctly - VirtualBox does not support VLANs on outgoing connections. If I need virtualized + physical machines to be connected by a VLAN-tagged network - I use KVM.
18:05 Dr_drache kaliya, it's late there, isn't it?
18:05 jpf joined #fuel
18:12 thehybridtech joined #fuel
18:16 emagana joined #fuel
18:17 HeOS joined #fuel
18:19 emagana joined #fuel
19:29 [HeOS] joined #fuel
19:44 xdeller_ joined #fuel
19:50 e0ne joined #fuel
19:54 fandi joined #fuel
19:59 the_hybrid_tech joined #fuel
19:59 Dr_Drache joined #fuel
20:09 the_hybrid_tech joined #fuel
21:11 thehybridtech joined #fuel
21:34 e0ne joined #fuel
21:43 geekinutah joined #fuel
21:47 adanin joined #fuel
22:51 teran joined #fuel
23:45 mattgriffin joined #fuel
