
IRC log for #fuel, 2014-03-18


All times shown according to UTC.

Time Nick Message
00:14 IlyaE joined #fuel
00:16 rmoe joined #fuel
00:35 xarses joined #fuel
00:43 book` joined #fuel
01:49 ggreenwalt joined #fuel
01:49 ggreenwalt Good evening.
01:49 ggreenwalt Anyone had a PXE booted system sit on "child_rip+0x0/0x20" before?
01:56 dhblaz joined #fuel
01:58 IlyaE joined #fuel
04:03 dhblaz joined #fuel
04:07 vkozhukalov_ joined #fuel
04:21 fandi joined #fuel
04:35 crandquist joined #fuel
04:49 dburmistrov joined #fuel
05:22 IlyaE joined #fuel
06:22 vkozhukalov_ joined #fuel
06:29 Ch00k joined #fuel
06:44 saju_m joined #fuel
06:46 saju_m joined #fuel
07:15 dburmistrov joined #fuel
07:29 ggreenwalt1 joined #fuel
07:53 e0ne joined #fuel
07:57 acca1 joined #fuel
07:58 acca joined #fuel
07:59 acca left #fuel
08:13 acca joined #fuel
08:16 e0ne_ joined #fuel
08:16 bogdando joined #fuel
08:19 Ch00k joined #fuel
08:30 acca Hi, yesterday we had a chat about issues on Adaptec RAID support in Fuel (our nick was orsetto)
08:30 acca I would like to share with you the Fuel dump: https://www.dropbox.com/s/7nvtapy5ewxoqiv/fuel-snapshot-2014-03-18_07-55-06.tar
08:32 acca We are deploying on three IBM xSeries 366 8863-3RG machines
08:37 Ch00k joined #fuel
08:51 topochan joined #fuel
08:56 saju_m joined #fuel
08:56 rvyalov joined #fuel
09:03 alex_didenko joined #fuel
09:31 DaveJ__ joined #fuel
09:34 mattymo joined #fuel
09:36 bookwar joined #fuel
10:03 anotchenko joined #fuel
10:22 Ch00k joined #fuel
10:22 acca joined #fuel
10:30 mihgen joined #fuel
10:36 anotchenko joined #fuel
11:12 vk joined #fuel
11:15 anotchenko joined #fuel
11:33 vk joined #fuel
11:34 anotchenko joined #fuel
11:42 Dr_Drache joined #fuel
11:48 acca joined #fuel
11:50 anotchenko joined #fuel
11:51 e0ne joined #fuel
11:54 TVR___ joined #fuel
11:58 TVR___ what is on the agenda for today?
11:58 Dr_Drache lol
11:58 Dr_Drache I hope getting this video bug squashed
11:58 Dr_Drache since it regressed after I got it fixed.
12:01 TVR___ adding controllers to an existing cluster is where I am waiting... I need the HA and the ability to expand
12:01 Dr_Drache ahh, so we are both semi stuck
12:02 Dr_Drache I can deploy centOS all day.
12:02 TVR___ yes, yes we are
12:02 Dr_Drache but, that's not what the dept wants.
12:03 TVR___ I may have to drop back to 4.0 as it was able to allow adding controller+ceph nodes before.. but they weren't truly HA (lose the main node holding the VIP.. game over)
12:03 TVR___ but at least I could expand the cluster
12:04 Dr_Drache that's my issue, I can't even use 4.0
12:05 TVR___ 4.0 on centos using neutron with gre seemed stable for me on dell hardware
12:05 dhblaz joined #fuel
12:06 Dr_Drache yea, I might go for that.
12:08 Arminder joined #fuel
12:12 Dr_Drache so TVR___ i'm waiting to see if xarses or miroslav_ came up with anything, I'm trying other patches that have fixed similar things on desktops.
12:13 TVR___ cool...
12:13 TVR___ that is one of the reasons I idle in here... I learn by watching everything..
12:15 Dr_Drache same.
12:15 Dr_Drache they say it's videocard related.
12:17 fweyns joined #fuel
12:17 fweyns Hi all :-)
12:17 meow-nofer joined #fuel
12:17 meow-nofer_ joined #fuel
12:18 meow-nofer__ joined #fuel
12:18 Dr_Drache TVR___, stupid ceph question.
12:18 TVR___ sup?
12:19 Dr_Drache replication.
12:19 fweyns Anybody with some knowledge about virtual box and the launch.sh script ?
12:19 Dr_Drache if i had a ceph node, with say 18 disks, each disk a OSD, does it replicate between those, or is the node ONE repliucation target?
12:20 Dr_Drache s/repliucation/replication
12:21 TVR___ crush map determines that... but if you have 3 nodes with 18 disks (all OSD's) then it ~should~ with replication of 3 put one copy on each node.....
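(A rough sketch of how to check what TVR___ describes; "volumes" is only an example pool name and the stock CRUSH rule is assumed:)

    # replica count for a pool
    ceph osd pool get volumes size
    # the failure domain comes from the CRUSH rule; the default rule usually
    # contains "chooseleaf firstn 0 type host", i.e. replicas land on
    # different nodes rather than just different OSDs
    ceph osd crush rule dump
    # hosts and the OSDs grouped under them
    ceph osd tree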
12:23 vk joined #fuel
12:26 Dr_Drache so crush map is set per node.
12:26 Dr_Drache just was a fleeting idea.
12:26 Dr_Drache add an extra shelf type.
12:27 Dr_Drache but if that messes with replication to the negative...
12:29 richardkiene joined #fuel
12:30 TVR___ yea... I did something interesting yesterday, with failures. I had 3 nodes with 6 disks in them... I set them up with 1x1TB OSD each, and 1x146GB disk for journal.... it worked just fine.. showing the 3 OSD's in the cluster.... then, from the command line I tried to add 3 of the unused disks... using the ceph-deploy the fuel system uses... failure.
12:30 TVR___ .. seems it adds the unused disks to the md0 device....
12:32 Dr_Drache soo
12:32 Dr_Drache if you add a node, use all the disks
12:32 richardkiene I'm trying to enable discard (i.e. trim) for my SSD Ceph Journals
12:33 Dr_Drache ahh yes. that should be simple
12:33 richardkiene the journal doesn't appear to be mounted via fstab
12:33 Dr_Drache ahhh, crap
12:33 Dr_Drache lol
12:33 richardkiene :D
12:34 richardkiene It isn't obvious to me where or how it is mounted, though Fuel lists the journal as /dev/sdb
12:34 richardkiene but if I cat /etc/fstab or do a df -h I only see the root filesystem and the OSD volumes
12:34 Dr_Drache i'm not sure where that is defined.
12:35 TVR___ look at ceph osd tree
12:36 TVR___ see if it is in at all
12:36 TVR___ I am a bit confused as to how that works as well.....
12:37 TVR___ it seems dmcrypt is also used with the fuel deployment for its ceph build
12:37 richardkiene TVR___: I assume you mean run "ceph osd tree" ?
12:37 TVR___ yes
12:38 justif joined #fuel
12:38 richardkiene I'm not sure how to get the Journal information from that, I only see OSD info
12:41 TVR___ I am not entirely sure how they set that up... on the one hand, they seem to use the standard ceph-deploy commands to initiate their cluster, but on the other hand, some things are different... I guess I will have to dissect what their puppet is actually doing to figure it out..
12:41 aglarendil joined #fuel
12:43 e0ne joined #fuel
12:47 MiroslavAnashkin fweyns: What is wrong with VBox scripts?
12:48 fweyns I am trying to figure out what is the easiest way to reach the internet from my setup ...
12:49 fweyns When I use the 4.x iso and then the launch script I don't have access to the internet from my fuel master ... the NAT of Interface 3 does not seem to work ...
12:49 fweyns I access the internet through my wifi ... which gets a different IP every time ....
12:50 vkozhukalov_ joined #fuel
12:51 MiroslavAnashkin fweyns: What is your host OS?
12:51 fweyns OSX 10.9
12:52 Dr_Drache fweyns, it could be that virtualbox isn't NATing the right interface, OR your gateway is wrong for the fuel master.
12:52 Dr_Drache in that case i'd have 2 interfaces, 1 for PXE from fuel, and 1 to the internet
12:58 fweyns hmm...  it seems nat works after a reboot ... Let me check if my VM inside my openstack can get a floating IP and reach the Internet  (Healthcheck is handy ) ... ... Later ...
12:58 aglarendil richardkiene: why do you think that a dedicated osd journal should be mounted in fstab?
12:59 richardkiene aglarendil: That is just the first place I looked. I'm not suggesting it needs to be mounted in the fstab
13:00 aglarendil this is just a block device or file that ceph uses. so, what you really need is just enabling of TRIM for the block device, not for the filesystem
13:00 Dr_Drache MiroslavAnashkin,
13:00 Dr_Drache need to talk about this real quick.
13:01 richardkiene aglarendil: Awesome, that was essentially what I was getting at. I'm familiar with enabling TRIM for a mounted filesystem, I'll just have to figure out how to do that on a block device
13:01 MiroslavAnashkin Dr_Drache: ?
13:01 richardkiene aglarendil: If you happen to have some pointers there, I'd appreciate them
13:01 Dr_Drache MiroslavAnashkin, I got ubuntu to install.
13:02 Dr_Drache acpi=off in our grub line.
13:02 dhblaz joined #fuel
13:03 Dr_Drache I don't think that is production ready, but it fixes it for now (redeploying right now to test)
13:03 richardkiene aglarendil: Google makes me think "blkdiscard" is the way to do that
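(A hedged sketch of what that could look like for a filestore journal partition; the OSD id and device are placeholders, it assumes the util-linux on the node ships blkdiscard, and blkdiscard destroys whatever it touches, so the OSD has to be stopped and its journal flushed first:)

    service ceph stop osd.3            # or: stop ceph-osd id=3 on Upstart systems
    ceph-osd -i 3 --flush-journal      # write out anything still sitting in the journal
    blkdiscard /dev/sdb2               # discard the whole journal partition
    ceph-osd -i 3 --mkjournal          # recreate the journal
    service ceph start osd.3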
13:03 aglarendil richardkiene: yep, but Ceph guys do not recommend TRIM usage
13:04 aglarendil richardkiene: I mean, for journal devices
13:04 richardkiene aglarendil: Not even on SSD Journals that have gone from 350MB/s writes to 50MB/s writes?
13:05 aglarendil here is the discussion
13:05 aglarendil http://irclogs.ceph.widodh.nl/index.php?date=2011-10-11
13:05 Dr_Drache wonder if that's relevant anymore, being from 2 years ago.
13:05 aglarendil richardkiene: you can ask Ceph guys on this, I do not think this channel is really relevant for low-level ceph questions
13:06 aglarendil Dr_Drache: there is another one from 2014 http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-December/006486.html
13:06 richardkiene aglarendil: Sounds good. My motivation was to figure out how the journal was provisioned with Fuel. I was not trying to get into a low level ceph discussion
13:07 aglarendil richardkiene: we just use ceph-deploy utility. nothing more.
13:07 Dr_Drache aglarendil, i'm reading, that comment was before I read... more of a "hmmm" thing
13:07 MiroslavAnashkin Dr_Drache: That's interesting - ACPI has worked everywhere for 10+ years. Do you have ACPI enabled in your server's BIOS?
13:08 richardkiene aglarendil: Ok thanks
13:09 Dr_Drache MiroslavAnashkin, yes, of course.
13:10 Dr_Drache MiroslavAnashkin, it's a known workaround for desktop linux, when the kernel or drivers are buggy with video.
13:10 Dr_Drache I just never thought to try it.
13:11 Dr_Drache meaning, whatever the issue is, is in the version of video drivers and/or kernel.
13:12 MiroslavAnashkin Dr_Drache: That's true, but direct workaround for video worked in 4.0. Wonder, what has happened since 4.0, if we changed nothing...
13:12 Dr_Drache MiroslavAnashkin, it worked in 4.0 patch v2
13:12 Dr_Drache patch v4 doesn't
13:13 e0ne joined #fuel
13:13 e0ne joined #fuel
13:18 acca joined #fuel
13:23 acca1 joined #fuel
13:24 MiroslavAnashkin Dr_Drache: There is nothing about video in patch v4. v3 has cobbler sync added, v4 adds workaround for old Adaptecs
13:26 Dr_Drache MiroslavAnashkin, all I'm saying is, it worked in the v2 + pmanager.py you gave me.
13:26 Dr_Drache I attempted to use 4.1, no go.
13:26 Dr_Drache went back to 4.0 fresh with v4 patch + pmanager patch
13:26 Dr_Drache didn't work
13:28 anotchenko joined #fuel
13:29 acca1 MiroslavAnashkin: Hi, I see you online now, so I try to submit our request again
13:30 acca1 Hi, yesterday we had a chat about issues on Adaptec RAID support in Fuel (our nick was orsetto)
13:30 orsetto joined #fuel
13:30 acca1 I would like to share with you the Fuel dump: https://www.dropbox.com/s/7nvtapy5ewxoqiv/fuel-snapshot-2014-03-18_07-55-06.tar
13:41 dhblaz joined #fuel
13:43 MiroslavAnashkin acca1: Yes, thank you! I already downloaded it.
13:45 acca1 thanks to you
13:45 e0ne joined #fuel
13:47 Dr_Drache MiroslavAnashkin, seems it causes the deployment to stall.
13:52 MiroslavAnashkin Dr_Drache: Then we'll wait for the error message and diagnostic snapshot from you)
13:52 Dr_Drache MiroslavAnashkin, waiting for it to error out.
13:52 Dr_Drache lol
13:54 MiroslavAnashkin acca1: Are your SCSI disks named in OS as sg0 - sg8?
13:56 acca1 MiroslavAnashkin: On the bootstrap live image we see a single disk named /dev/sda of the correct size (6 disks in RAID5 = ~200GB)
13:57 acca1 But in the Fuel interface the disk size shows as 0
13:57 MiroslavAnashkin acca1: Ah, OK.
14:03 MiroslavAnashkin acca1: Could you please boot one of your problematic nodes to bootstrap and run the following on it:
14:04 MiroslavAnashkin `for dev in $(ls /sys/block); do udevadm info --query=property --export --name=${dev}; done` > /tmp/udevadm.log
14:04 MiroslavAnashkin `for dev in $(ls /sys/block); do udevadm info --query=property --export --name=${dev}; done > /tmp/udevadm.log`
14:05 MiroslavAnashkin And then please copy that /tmp/udevadm.log from the node and share it?
14:05 orsetto MiroslavAnashkin: I am here with acca, one minute, we are rebooting it
14:07 Dr_Drache MiroslavAnashkin, how long is this timeout?
14:08 acca joined #fuel
14:10 jobewan joined #fuel
14:12 MiroslavAnashkin Dr_Drache: Which timeout?
14:14 Dr_Drache MiroslavAnashkin, the installation.
14:14 Dr_Drache the node is installed, but fuel is still waiting for it.
14:16 MiroslavAnashkin Dr_Drache: Fuel either waits for the node to reboot or for the remaining nodes to install.
14:18 Dr_Drache MiroslavAnashkin, the node is installed, I can go log in/etc, fuel is still http://imgur.com/KIIcekY
14:20 acca MiroslavAnashkin: Here is the udev logs you've requested https://www.dropbox.com/s/d21orgdv2ublml4/udevadm.log
14:22 rsFF MiroslavAnashkin -https://dl.dropboxusercontent.com/u/34371313/fuel-snapshot-2014-03-18_13-59-58.tgz
14:23 MiroslavAnashkin acca: orsetto: Please confirm it is the full log. Looks like it stops after the CDROM device description
14:23 acca ok let me check better
14:23 rsFF about fuel failing in a mixed environment baremetal+vm
14:26 MiroslavAnashkin rsFF: Yes, downloaded. Thank you!
14:26 vkozhukalov_ joined #fuel
14:27 Dr_Drache MiroslavAnashkin, https://www.dropbox.com/s/3n0mdrzr8ctz99s/fuel-snapshot-2014-03-18_14-25-20.tgz
14:27 Dr_Drache it didn't fail yet.
14:27 Dr_Drache still waiting
14:28 MiroslavAnashkin Timeout is 2 hours. Or at least was.
14:29 Dr_Drache damn
14:29 Dr_Drache the node is there, fully installed.
14:29 Dr_Drache I guess acpi=off broke something else
14:38 acca MiroslavAnashkin: Here is the full log, we were wrong before because we hadn't created the RAID array yet. https://www.dropbox.com/s/d21orgdv2ublml4/udevadm.log
14:38 acca Now we have just created a RAID1 array
14:39 acca of ~70GB with two disks
14:40 MiroslavAnashkin acca: Please check Fuel UI one more time - does it still report the node with RAID1 as having 0 disk space?
14:41 dhblaz joined #fuel
14:42 acca MiroslavAnashkin: https://www.dropbox.com/s/r4983n6ax5qf7d1/Screen%20Shot%202014-03-18%20at%2015.41.17.png
14:43 acca Yes, the Fuel UI reports 0 size, as you can see from the image
14:43 MiroslavAnashkin acca: OK
14:52 TVR___ so... ephemeral disk seems to be broken... anyone see that?
14:56 TVR___ Hmm.. ok.. not broken.. but very slow... will look into it...
14:58 fweyns joined #fuel
15:05 Dr_Drache joined #fuel
15:07 x86brandon joined #fuel
15:08 x86brandon left #fuel
15:08 x86brandon joined #fuel
15:14 Dr_Drache MiroslavAnashkin,
15:14 Dr_Drache got it to install openstack
15:15 x86brandon found a happy little bug between what it writes to the nova.conf and what it executes when it does a nova network-create
15:16 x86brandon seems to disregard the subnet specified when it does a nova network-create
15:16 x86brandon defaulted to a /24
15:19 Dr_Drache MiroslavAnashkin, http://paste.openstack.org/show/73752/
15:19 Dr_Drache that shows the grub edits needed to make it deploy.
15:19 designate joined #fuel
15:19 Dr_Drache x86brandon, can you re-create this?
15:26 MiroslavAnashkin Dr_Drache: vga=791?
15:26 Dr_Drache MiroslavAnashkin, yes sir.
15:27 Dr_Drache force a default vga mode.
15:33 vkozhukalov_ joined #fuel
15:34 x86brandon dr_drache yes
15:35 x86brandon for giggles, i wiped the install, did it from scratch, same behavior
15:36 x86brandon i specified a /20 for fixed range
15:36 x86brandon nova.conf reflects  fixed_range=192.168.32.0/20
15:36 x86brandon but nova network-show
15:36 x86brandon | cidr                | 192.168.32.0/24                      |
15:37 x86brandon neat eh?
15:37 x86brandon 4.1
15:37 Dr_Drache x86brandon, lol, I'd open a bug for that
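(A possible workaround sketch, assuming the legacy nova-network CLI of that release; the network label and bridge name are only examples, and deleting the network is destructive:)

    # recreate the fixed network with the CIDR passed explicitly
    nova network-delete novanetwork
    nova network-create novanetwork --fixed-range-v4 192.168.32.0/20 --bridge br100
    nova network-show novanetwork | grep cidr   # should now show the /20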
15:38 Dr_Drache MiroslavAnashkin, " (/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd prepare]/returns) change from notrun to 0 failed: ceph-deploy osd prepare node-18:/dev/sdb2 returned 1 instead of one of [0] at /etc/puppet/modules/ceph/manifests/osd.pp:27"
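(If that turns out to be stale partition data on the disk, something along these lines sometimes clears it before retrying; node/device names are taken from the error above, this is destructive, and it is only a guess at the cause:)

    # from the fuel master / admin node, wipe old partition tables and ceph signatures
    ceph-deploy disk zap node-18:sdb
    # roughly equivalent, run directly on node-18
    sgdisk --zap-all /dev/sdb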
15:40 saju_m joined #fuel
15:49 asledzinskiy joined #fuel
15:54 rmoe joined #fuel
15:55 fweyns left #fuel
15:56 rupsky joined #fuel
16:13 Dr_Drache joined #fuel
16:14 warpig joined #fuel
16:18 x86brandon also, dr_drache, I have noticed a specific type of server only registers itself about half the time with fuel.... i sometimes have to bounce it 2 or 3 times before it shows up in the dashboard... ever hear of that?
16:21 Dr_Drache x86brandon, I have not.
16:21 Dr_Drache but that's usually a cobbler issue.
16:21 Dr_Drache IIRC
16:22 x86brandon makes sense, i seem to recall running into that before with cobbler in years past
16:32 angdraug joined #fuel
16:33 warpig hi guys!
16:33 warpig I know there was talk of disk detection issues in 4.1.... Does anyone know if there was a fix?
16:34 warpig I'm using 4.1 to deploy Ubuntu on HP BL465c G8 blades with P220i controllers
16:34 warpig and the partitioning phase just loops over and over again.
16:35 warpig fdisk -l doesn't show a disk.
16:35 MiroslavAnashkin warpig: You may try this patch http://download.mirantis.com/fuelweb/Fuel_updates/4.1/fuel_partition_manager_patch_41_to_411/
16:36 warpig OK, cool - I'll give that a go now...
16:36 warpig thanks MiroslavAnashkin
16:37 xarses joined #fuel
16:38 dburmistrov joined #fuel
16:38 Dr_Drache joined #fuel
16:43 vk joined #fuel
16:45 e0ne_ joined #fuel
16:45 anotchenko joined #fuel
16:51 x86brandon CPU 40 x 2.79 GHz
16:51 x86brandon that amuses me :)
16:58 Dr_Drache joined #fuel
17:01 MiroslavAnashkin Dr_Drache: xarses and angdraug should have better expertise in Ceph deployment issues
17:02 MiroslavAnashkin Dr_Drache: But new diagnostic snapshot would be more than helpful here.
17:18 Dr_Drache joined #fuel
17:25 angdraug Dr_Drache: do you see anything relevant in /root/ceph.log?
17:36 Ch00k joined #fuel
17:42 mihgen joined #fuel
17:44 Dr_Drache joined #fuel
17:48 Ch00k joined #fuel
17:56 acca left #fuel
18:01 dhblaz joined #fuel
18:01 Ch00k_ joined #fuel
18:10 obcecado hi guys
18:11 obcecado is it safe to patch 4.1.1 v2 on top of 4.1.1 v1?
18:11 acca joined #fuel
18:11 Dr_Drache obcecado, use v2 to revery
18:11 Dr_Drache revert
18:11 Dr_Drache then patch again
18:12 obcecado is there a revert mechanism in the .run file?
18:12 Dr_Drache sure is
18:12 obcecado ok
18:13 obcecado thank you for your input
18:13 Dr_Drache you'll see it, it's in the same menu as apply/overwrite
18:13 obcecado tbh i've been having a hard time using fuel
18:13 Dr_Drache ohh?
18:13 MiroslavAnashkin obcecado: 4.1.1v2 includes all of 4.1.1v1 plus IBM added to one special vendor list. You may safely apply or revert v2 over v1. Simply use Overwrite mode
18:14 obcecado first we ran into the hp ciss bug
18:14 Dr_Drache obcecado, bugs happen.
18:14 obcecado now i guess it's something related to neutron not accepting dot1q confs
18:15 obcecado please, don't take this as ranting
18:15 obcecado i'm just chatting
18:21 dhblaz joined #fuel
18:31 TVR1 joined #fuel
18:36 TVR_ joined #fuel
18:40 Dr_Drache joined #fuel
18:43 dhblaz obcecado: fuel has had some problems with cciss for the last few releases at least.  They seem to be really focusing on it this time (for releases before 4.0 I had to make my own patches for this problem).
18:44 dhblaz joined #fuel
18:44 dhblaz obcecado: If you are having trouble with HP networking gear I could potentially offer a hand.  We have two deployments on c7000s.
18:45 Dr_Drache dhblaz, if there were patches, why do they have to rewrite them?
18:45 Dr_Drache you didn't submit them? :P
18:45 Dr_Drache (just poking btw)
18:45 dhblaz I did submit them, and they weren't integrated
18:45 Dr_Drache i know, I read the bug reports :P
18:47 Dr_Drache holy crap money
18:47 Dr_Drache first time I got 100% passed on the fuel self-tests
18:48 vkozhukalov_ joined #fuel
18:50 xarses dhblaz: Dr_Drache dhblaz's patch was integrated, however the code near it was re-factored and unfortunately this caused a regression. This regression of cciss support has been raised with our management team.
18:50 xarses hopefully it won't happen again as we are working to improve our testing coverage with HP hardware
18:51 Dr_Drache MiroslavAnashkin,
18:51 xarses dhblaz: Did you ever clear up that floating ip issue?
18:51 dhblaz xarses: we have a test environment to lend when needed
18:51 dhblaz xarses: I run arping from a machine on the lan to any floating IP
18:52 dhblaz it keeps the switch in line
18:52 dhblaz can't say I really like the solution but it works for now
18:52 Dr_Drache xarses, 2 things.
18:52 e0ne joined #fuel
18:52 Dr_Drache 1. you prob saw I got a deploy working
18:52 Dr_Drache 2. I actually need virtualbox help.
18:52 Dr_Drache lol
18:53 Dr_Drache fuel keeps changing my default route.... like a bad kitty.
18:53 TVR_ ok cartman
18:54 Dr_Drache TVR_, hey.
18:54 Dr_Drache lol
18:54 Dr_Drache I like that a deprecated command fixed my deployments.... wtf,
18:54 Dr_Drache makes me hate code even more.
19:00 xarses its your pot pie?
19:00 xarses Dr_Drache: whose default route is being changed?
19:01 MiroslavAnashkin Dr_Drache:  <angdraug> Dr_Drache: do you see anything relevant in /root/ceph.log?
19:01 xarses Dr_Drache: yes i did, any clue why acpi=off needs to be passed? I've probably already asked, but is your bios up to date?
19:02 Dr_Drache xarses, it wasn't acpi=off that allowed the full deploy.
19:02 Dr_Drache it was vga=791
19:02 Dr_Drache xarses, on my workstation, fuel is running in Virtualbox
19:02 Dr_Drache MiroslavAnashkin, there was nothing there, and I just manually cleared the drives and redepolyed.
19:03 Dr_Drache the bios is as new as it can get.
19:03 xarses oh, would nofb work then?
19:04 Dr_Drache xarses, I honestly don't know. grub complains that vga=791 is deprecated.. blanks the screen.
19:04 Dr_Drache then login appears seconds later.
19:04 Dr_Drache well, 10-15 seconds
19:04 xarses https://wiki.ubuntu.com/FrameBuffer
19:05 xarses try nofb, or remove nomodeset
19:05 Dr_Drache I shall in a bit.
19:05 xarses nofb will stop the framebuffer from loading, nomodeset will prevent mode setting in the kernel
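(One hedged way to persist those kernel options on Fuel 4.x, which provisions through Cobbler on the master node; the profile name is a guess, and --kopts replaces the whole option string, so any options already present have to be repeated:)

    cobbler profile list
    cobbler profile report --name ubuntu_1204_x86_64 | grep -i kopts
    # set the options being tested, e.g. vga=791, or nofb in place of nomodeset
    cobbler profile edit --name ubuntu_1204_x86_64 --kopts="vga=791"
    cobbler sync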
19:18 saju_m joined #fuel
19:20 acca left #fuel
19:35 Dr_Drache joined #fuel
19:37 dhblaz_ joined #fuel
19:41 Dr_Drache xarses, fuel is running on virtualbox on my workstation, and it keeps changing my default route to itself.... it's crazy in control.
19:42 xarses Dr_Drache: again, whose route is being changed? the fuel node? your host? the "slave" nodes?
19:42 Dr_Drache my workstation.
19:42 xarses fuel cant change that
19:42 xarses the launch.sh maybe
19:42 Dr_Drache well, it's doing it via virtualbox.
19:42 xarses but i don't recall it
19:43 xarses the fuel admin network is in what forwarding mode in virtual box?
19:43 xarses Dr_Drache: which version of the virtual box scripts are you using?
19:44 Dr_Drache fuel admin is on its own bridge.
19:44 Dr_Drache not using the scripts.
19:44 Dr_Drache I just don't have a box just for fuel master on my test setup.
19:44 Dr_Drache so, fuel master is a VM.
19:44 xarses thats fine
19:44 Dr_Drache bridged to a dedicated network.
19:44 Dr_Drache adapter.
19:45 xarses what forwarding mode is that bridge in?
19:45 xarses forward, nat, private...?
19:45 * xarses installs virtualbox
19:46 obcecado thank you Dr_Drache
19:46 obcecado for your time
19:46 obcecado maybe i'll have time to redeploy tomorrow
19:47 xarses Dr_Drache: oh, duh, ok it's in some forward mode since you are working with some hardware somewhere else
19:47 xarses Dr_Drache: are you getting your host address via DHCP?
19:48 Dr_Drache just crazy that it wants to set its own default route.
19:48 Dr_Drache "rberry@it2120-berry { ~/Downloads }$ route
19:48 Dr_Drache Kernel IP routing table
19:48 Dr_Drache Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
19:48 Dr_Drache default         fuel.domain.tld 0.0.0.0         UG    202    0        0 eth0
19:48 Dr_Drache "
19:48 Dr_Drache xarses, no...
19:48 Dr_Drache BUT
19:49 Dr_Drache the eth0 could be getting it from fuel.
19:49 Dr_Drache that's it.
19:49 Dr_Drache I need to go ahead and static that.
19:49 Dr_Drache wtf. n00b mode here, don't mind me
19:50 xarses Dr_Drache: ok, so there is a way to prevent dhcp in ubuntu from setting the route from an interface if you want to do that
19:50 xarses or static configure the interface
19:50 dhblaz joined #fuel
19:50 Dr_Drache I don't use ubuntu on my desktop :P but yea.
19:51 xarses static configure it is
19:51 xarses there isn't another DHCP server on that network is there?
19:52 xarses not that it's this issue, but could impact your deployments if there is
19:52 Dr_Drache not that network no.
19:52 xarses ok =)
19:53 Dr_Drache I DO have one on the network we'd like to deploy production.
19:53 Dr_Drache haven't figured out how to do that yet.
20:15 mutex has anybody produced multiple cells/availability zones with fuel ?
20:24 dhblaz I am using availability zones
20:24 dhblaz I didn't make them with fuel but it was simple enough to do after the cluster was stood up.
20:25 mutex dhblaz: how complicated was it to setup ?
20:25 xarses AZ are easy to setup
20:25 mutex any pointers ?
20:25 xarses cells are a pain
20:25 mutex ah
20:25 xarses mutex: a couple of command line calls to tell nova which computes belong together
20:25 mutex interesting
20:25 xarses (to set up AZ)
20:25 dhblaz mutex: I don't know if I even took notes about what I did because it was straightforward
20:25 dhblaz if I recall they are all nova calls
20:26 xarses dhblaz: thats what i remember
20:26 dhblaz We have each rack in aggregate/az
20:26 dhblaz And another az for maintenance
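(The "couple of command line calls" roughly amount to this; aggregate, AZ and host names are only examples:)

    # create a host aggregate tied to an availability zone, then add computes to it
    nova aggregate-create rack1 rack1-az
    nova aggregate-add-host rack1 node-10
    nova aggregate-add-host rack1 node-11
    # schedule an instance into it explicitly
    nova boot --availability-zone rack1-az --image <image> --flavor <flavor> test-vm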
20:26 xarses left #fuel
20:26 xarses joined #fuel
20:26 MiroslavAnashkin joined #fuel
20:27 dhblaz I see this in my computes nova log when I try to delete an instance:
20:27 dhblaz 2014-03-18 20:22:42.591 12857 TRACE nova.openstack.common.rpc.amqp ConnectionError: Unable to establish connection: [Errno 113] EHOSTUNREACH
20:27 xarses do they delete?
20:27 dhblaz Does nova talk to rabbitmq on the management vip or something else?
20:28 xarses mgmt vip
20:28 xarses for 4.0
20:28 mutex I see
20:29 dhblaz no, it doesn't delete/terminate
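(A quick hedged check from the affected compute node: see which rabbit address/port nova is actually configured for, then test plain TCP reachability; option names vary slightly between releases:)

    grep -E '^rabbit_(host|hosts|port)' /etc/nova/nova.conf
    nc -zv <address-from-conf> <port>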
20:32 mutex has anyone here done an instance migration between clusters ?
20:34 dhblaz I haven't even gotten instance migration working within my cluster.
20:34 xarses nope
20:34 xarses just in the one, I'd guess you'd need cells to make that work
20:35 xarses and some black magic
20:35 xarses =)
20:35 dhblaz you could also hack it
20:35 dhblaz make an instance like you want on the hypervisor you want
20:35 dhblaz then use that as a donor
20:36 dhblaz then use the libvirt features for the migration
20:41 mutex yeah I'm going to hack it I suspect
20:41 mutex this doesn't have to be live migration I just mean migrating the image to a new cluster
20:43 Dr_Drache then you could just upload the image in glance.
20:43 Dr_Drache of course... I seem to have forgotten how to get it OUT of the cluster.
20:44 mutex yeah that is what I plan to do
20:44 mutex glance image-download img > file.img
20:44 dhblaz if it is an image you can use glance image-download
20:45 mutex yeah
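(A hedged round trip between the two clusters; the image name and formats are examples and must match how the image was originally stored:)

    # on the old cluster
    glance image-download --file file.img <image-id>
    # on the new cluster
    glance image-create --name migrated-img --disk-format qcow2 --container-format bare --file file.img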
20:46 dhblaz If it is in cinder you are supposed to be able to use cinder upload-to-image
20:46 dhblaz but it never worked for me.
20:46 dhblaz always got 0 byte images
20:48 mutex well I have images, and some of them have ephemeral disks
20:48 mutex I was thinking I might be able to turn the ephemeral disks into a volume
20:48 mutex somewhat manually
20:48 mutex but then use volume-attach so the new instance boots up identical to the one in the old cluster
20:51 x86brandon joined #fuel
20:52 mutex can you have multiple glance/swift backends on the same cluster ?
20:52 dhblaz Your ephemeral disks should be in the directory defined in your /etc/nova.conf instances_path
20:53 dhblaz you could just rsync them over to where you want them after you have convinced openstack to expect them there.  (I have never tried this so don't listen to me)
20:54 mutex yeah
20:54 mutex I know where the disks are
20:54 dhblaz but I have done this on the same hypervisor to move where the instance_path is
20:54 dhblaz because fuel doesn't recognize our iodrives where we put ephemeral disks
20:56 mutex ah
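(The kind of copy dhblaz is describing, as a sketch only: it assumes the instance is stopped, the default instances_path on both ends, and that ownership is adjusted to whatever the target node expects; UUID and hostname are placeholders:)

    rsync -aHAX --progress /var/lib/nova/instances/<instance-uuid>/ \
        new-compute:/var/lib/nova/instances/<instance-uuid>/
    # fix ownership on the target to match what nova/libvirt expect there
    ssh new-compute chown -R nova:nova /var/lib/nova/instances/<instance-uuid>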
20:56 dhblaz I can't find the right services (order of services) to restart to get my compute node talking to my public vip
20:58 mutex the compute nodes talk to the private endpoints, not the VIP right ?
20:58 mutex over the mgmt channel
20:58 dhblaz Not when they talk to cinder api (apparently)
20:59 dhblaz mutex: here are the relevant steps from our process: http://paste.openstack.org/show/73789/
21:00 mutex dhblaz: how big is your cluster ?
21:00 xarses mutex, you can only have one swift provider per region, because it is mapped as a keystone endpoint
21:00 mutex ah
21:00 mutex the reason I ask is the cinder backup feature
21:00 mutex would be nice to be able to send the backups to a separate glance endpoint
21:01 xarses mutex: you can have multiple glance backends in the cluster, but only one default provider per glance-api.
21:01 mutex oh really
21:01 dhblaz mutex: 6 computes right now
21:02 GeertJohan joined #fuel
21:02 GeertJohan joined #fuel
21:02 mutex how many VMs ?
21:02 xarses mutex: cinder should be able to support multiple backup providers since it already supports multiple backend providers
21:03 xarses but im not sure
21:03 mutex xarses: interesting, i guess I'll poke around
21:04 xarses so i'd guess that you would duplicate the "swift" backup provider by changing the details of where the swift endpoint is or maybe how it is looked up
21:15 dhblaz Anyone have a trick for comparing output from ovs-vsctl show
21:15 dhblaz between nodes
21:16 dhblaz diff doesn't work because the bridges are ordered differently
21:17 mutex oh dear
21:19 dhblaz sorry, I should say the bridges and ports for the bridges are output with no apparent order
21:22 mutex I have noticed that as well
21:29 xarses scripts to the rescue!
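(One such script, sketched: dump bridges and their ports in a stable sorted order on each node, then diff the resulting files; paths and node names are examples:)

    for br in $(ovs-vsctl list-br | sort); do
        echo "== $br"
        ovs-vsctl list-ports "$br" | sort
    done > /tmp/ovs-$(hostname).txt
    diff /tmp/ovs-node-1.txt /tmp/ovs-node-2.txt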
21:31 IlyaE joined #fuel
21:32 dhblaz I found that my br-eth3 (the physical interface that should have public traffic) doesn't show any traffic when I use tcpdump -ni br-eth3
21:32 dhblaz but tcpdump -ni eth3 yields packets
21:32 dhblaz This is not the case on a working node
21:45 e0ne joined #fuel
22:21 e0ne_ joined #fuel
23:04 IlyaE joined #fuel
23:19 e0ne joined #fuel
23:20 crandquist joined #fuel
23:25 GeertJohan joined #fuel
23:25 GeertJohan joined #fuel
