
IRC log for #fuel, 2015-02-13


All times shown according to UTC.

Time Nick Message
00:05 Longgeek joined #fuel
00:08 angdraug joined #fuel
00:24 xarses codybum: you could attempt to disable connection heartbeats; it's the only thing left I can think of that could be killing the connections
00:44 xarses codybum: rabbitmqctl list_connections name timeout
00:45 xarses by my example, they should stay around 60
00:46 xarses sorry, that's a setting; they are set to 60
00:48 xarses you should be able to see each connection increasing by at least 8 in both send_oct and recv_oct if the connection is properly heartbeating
00:48 xarses rabbitmqctl list_connections name send_oct recv_oct
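xarses's counter check can be scripted: snapshot `rabbitmqctl list_connections name send_oct recv_oct` twice, one heartbeat interval apart, and diff the counters per connection. A minimal sketch; the sample data below is made up and stands in for real rabbitmqctl output, since the awk join is the only moving part:

```shell
# Two hypothetical snapshots of `rabbitmqctl list_connections name send_oct recv_oct`
# taken ~60s apart (connection name, bytes sent, bytes received).
printf 'conn1 1000 2000\nconn2 500 700\n' > /tmp/hb1
printf 'conn1 1480 2480\nconn2 500 700\n' > /tmp/hb2

# First pass stores the old counters; second pass prints per-connection deltas.
awk 'NR==FNR { s[$1]=$2; r[$1]=$3; next }
     { printf "%s send+%d recv+%d\n", $1, $2-s[$1], $3-r[$1] }' /tmp/hb1 /tmp/hb2
# conn1 send+480 recv+480
# conn2 send+0 recv+0
```

A connection whose counters do not grow across a heartbeat interval (conn2 above) is the one whose heartbeats have stopped and that the broker will drop.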
01:24 rmoe joined #fuel
01:28 mattgriffin joined #fuel
01:59 emagana joined #fuel
02:19 Longgeek joined #fuel
02:49 ilbot3 joined #fuel
02:49 Topic for #fuel is now Fuel 5.1.1 (Icehouse) and Fuel 6.0 (Juno) https://software.mirantis.com | Fuel for Openstack: https://wiki.openstack.org/wiki/Fuel | Paste here http://paste.openstack.org/ | IRC logs http://irclog.perlgeek.de/fuel/
03:51 mattgriffin joined #fuel
04:01 Longgeek joined #fuel
04:05 xarses joined #fuel
04:12 claflico joined #fuel
05:17 claflico joined #fuel
05:22 zerda joined #fuel
05:28 codybum yes
05:38 jobewan joined #fuel
05:52 mihgen_ joined #fuel
05:52 holser joined #fuel
05:53 roman_vyalov joined #fuel
05:54 moizarif joined #fuel
05:54 MorAle joined #fuel
05:54 dburmistrov joined #fuel
05:55 akislitsky joined #fuel
05:56 MiroslavAnashkin joined #fuel
05:57 evg joined #fuel
05:58 meow-nofer joined #fuel
05:59 vkramskikh joined #fuel
06:00 apalkina joined #fuel
06:01 mrasskazov joined #fuel
06:02 sbog joined #fuel
06:14 jobewan joined #fuel
06:24 nurla joined #fuel
06:24 aglarendil joined #fuel
06:37 bdudko joined #fuel
06:37 nurla joined #fuel
06:37 claflico joined #fuel
06:39 aglarendil joined #fuel
06:53 claflico1 joined #fuel
06:53 sambork joined #fuel
07:08 dklepikov joined #fuel
07:35 sambork joined #fuel
07:36 tzn joined #fuel
07:36 Miouge joined #fuel
08:08 alecv joined #fuel
08:12 evgeniyl___ joined #fuel
08:14 devstok joined #fuel
08:19 Longgeek joined #fuel
08:32 claflico joined #fuel
08:42 claflico joined #fuel
08:42 sambork joined #fuel
08:42 ChrisNBlum joined #fuel
08:51 HeOS joined #fuel
08:54 devstok Failed to upload image 7d705057-cf87-4042-9f8e-18d0f6ef42ab
09:16 sambork joined #fuel
09:18 claflico joined #fuel
09:21 hyperbaba joined #fuel
09:21 hyperbaba Hi there, where can i find cinder-backup package for 5.1 deployment?
09:22 claflico joined #fuel
09:26 e0ne joined #fuel
09:36 teran joined #fuel
09:42 sambork joined #fuel
09:43 sc-rm dklepikov: Now I've played around a bit more and got this: http://paste.openstack.org/show/172777/ There seem to be drops in the test, but the average speed is slightly better.
09:46 andriikolesnikov joined #fuel
09:47 dklepikov sc-rm : Are you talking about the fluctuating speed? That is normal: all instances use Ceph, so the total Ceph bandwidth is divided between instances. The cache also has to be written.
09:49 tzn joined #fuel
09:49 devstok Failed to upload image 7d705057-cf87xxxxxxxxxx
09:49 dklepikov sc-rm : Can you please tell me how the import of the database goes?
09:49 devstok glance image-create
09:50 devstok what kind of problem could be?
09:50 devstok I have a  fresh deployment
09:51 dklepikov devstok : What size is the image you are uploading? In what format? Please paste the command you use to upload.
09:55 devstok glance --debug image-create --name ubuntu-qcow2 --disk-format=qcow2 --container-format=bare --is-public=true --file precise-server-cloudimg-amd64-disk1.img
09:56 devstok Error communicating with http://193.205.211.131:9292 [Errno 32] Broken pipe
09:57 devstok File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 405, in handle_one_response
09:57 devstok     write(''.join(towrite))
09:57 devstok   File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 349, in write
09:57 devstok     _writelines(towrite)
09:57 devstok   File "/usr/lib/python2.7/socket.py", line 334, in writelines
09:57 devstok     self.flush()
09:57 devstok   File "/usr/lib/python2.7/socket.py", line 303, in flush
09:57 devstok     self._sock.sendall(view[write_offset:write_offset+buffer_size])
09:57 devstok   File "/u
10:05 teran joined #fuel
10:07 sc-rm dklepikov: Okay, so maybe because not all instances have been restarted, and are therefore not taking advantage of the caching yet, they are slowing the entire OpenStack deployment down on the Ceph side?
10:08 teran joined #fuel
10:08 sc-rm dklepikov: It’s done as a simple # mysql my_database < database.sql
10:09 devstok any hint?
10:09 sc-rm dklepikov: How can I test the raw ceph write speed?
10:13 hyperbaba Can I do the following? I've upgraded Fuel to 5.1.1, but there is no option to upgrade a deployed cloud to 5.1.1. Can I just point the repository on the nodes to 5.1.1 on fuelweb and do apt-get upgrade?
10:22 dklepikov sc-rm : Ceph has an integrated benchmark program. The corresponding command is rados bench. In general, this benchmark writes objects as fast as possible to a Ceph cluster and reads them back sequentially afterwards.
10:22 dklepikov sc-rm : http://ceph.com/docs/master/man/8/rados/
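The rados bench run dklepikov suggests prints a text summary; a small sketch for pulling the headline bandwidth number out of a saved summary, so two runs (e.g. before and after replacing the slow drives) are easy to compare. The invocation in the comment and the pool name `rbd` are assumptions, and the sample summary lines below are made up:

```shell
# Assumed invocations (run on a Ceph node, against an existing pool):
#   rados bench -p rbd 10 write --no-cleanup > /tmp/bench.out
#   rados bench -p rbd 10 seq
# Made-up stand-in for the write-phase summary:
cat > /tmp/bench.out <<'EOF'
Total time run:       10.31
Bandwidth (MB/sec):   99.3
Average Latency:      0.64
EOF

# Extract just the MB/sec figure for easy before/after comparison.
awk -F: '/Bandwidth \(MB\/sec\)/ { gsub(/ /, "", $2); print $2 }' /tmp/bench.out
# 99.3
```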
10:27 devstok glance --debug image-create --name ubuntu-qcow2 --disk-format=qcow2 --container-format=bare --is-public=true --file precise-server-cloudimg-amd64-disk1.img
10:27 devstok doesn't work, why?
10:28 stamak joined #fuel
10:35 dklepikov devstok : do you have enough free disk space and free memory to convert the image?
10:36 devstok ???
10:36 devstok on the controller?
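dklepikov's question can be answered with two quick checks on the controller; a minimal sketch, where the `/var/lib/glance` image-store path is an assumption (adjust for your node):

```shell
# Free disk for the image store / conversion scratch space.
# /var/lib/glance is the usual location (an assumption); fall back to /
# if that path does not exist on this node.
df -h /var/lib/glance 2>/dev/null || df -h /

# Free memory available for the image conversion.
free -m
```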
10:37 sc-rm dklepikov: I did some testing and got http://paste.openstack.org/show/172815/
10:39 samuelbartel_ joined #fuel
10:42 devstok could it be my ceph cluster?
10:42 devstok I compared the results with sc-rm
10:43 devstok sc-rm's performance is 30 times better
10:43 dklepikov sc-rm : are your OSDs 1, 15, 8, 20 HDD drives? If yes, can you please look at their SMART data (smartctl -a /DEV)
10:44 sc-rm dklepikov: all osds are 7.2K rpm disks
10:45 dklepikov devstok : what is your Ceph version (ceph -v)? Please show ceph -s, ceph -w, ceph osd tree, and rados df
10:45 Miouge_ joined #fuel
10:46 dklepikov sc-rm : I looked at your output in 172815; there are different speeds on different OSDs
10:47 dklepikov sc-rm : some speeds differ by a factor of two
10:47 sc-rm dklepikov: The disks with slow speed are different and older models, so I guess the obvious thing to do is replace the slow drives
10:50 sc-rm dklepikov: all disks return “No errors logged”, so I guess it’s just because they are old and not performing as well as the other ones
10:52 devstok ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)
10:52 devstok health HEALTH_OK
10:53 devstok # id  weight  type name          up/down  reweight
10:53 devstok -1    10.74   root default
10:53 devstok -2    3.58        host node-61
10:53 devstok 0     1.76            osd.0     up       1
10:53 devstok 3     1.82            osd.3     up       1
10:53 devstok -3    3.58        host node-60
10:53 devstok 1     1.76            osd.1     up       1
10:53 devstok 5     1.82            osd.5     up       1
10:53 devstok -4    3.58        host node-59
10:53 devstok 2     1.76            osd.2     up       1
10:53 sc-rm devstok: http://paste.openstack.org is a good thing for that info ;-)
10:55 devstok thanks
10:55 sc-rm dklepikov: I’m leaving for today, but I’ll replace those drives and see how the performance goes after that.
10:55 devstok http://paste.openstack.org/show/172828/
11:07 aliemieshko_ joined #fuel
11:08 dklepikov sc-rm : do not forget to compare the output afterwards, to check whether it helped or not.
11:09 devstok [WRN] 1 slow requests, 1 included below; oldest blocked for > 32.410274 secs
11:09 devstok I always get this warn
11:09 devstok looking ceph -w
11:10 dklepikov devstok :  show the output of ceph -w for 1-2 mins
11:49 andriikolesnikov joined #fuel
11:52 e0ne joined #fuel
12:02 thumpba joined #fuel
12:04 thumpba_ joined #fuel
12:16 jaypipes joined #fuel
12:16 mattgriffin joined #fuel
12:21 mattgriffin joined #fuel
12:33 moizarif joined #fuel
12:46 stamak joined #fuel
13:00 rbowen joined #fuel
13:03 samuelbartel__ joined #fuel
13:09 aliemieshko_ joined #fuel
13:15 ddmitriev1 joined #fuel
13:38 mattgriffin joined #fuel
13:39 rbowen joined #fuel
13:52 rbowen joined #fuel
14:03 Miouge joined #fuel
14:06 devstok this is the log during a test
14:06 devstok http://paste.openstack.org/show/172954/
14:06 devstok and this during a glance image-create
14:06 devstok http://paste.openstack.org/show/172955/
14:12 devstok dklepikov: are you here?
14:13 dklepikov devstok: yes
14:14 devstok I attached the log
14:26 e0ne joined #fuel
14:26 rbowen joined #fuel
14:27 dklepikov devstok: does it look like your log? http://www.sebastien-han.fr/blog/2013/04/17/some-ceph-experiments/
14:29 devstok mmm
14:32 devstok my osd are up
14:33 devstok the OSDs work but sometimes give me a slow-request warning
14:33 devstok health ok
14:34 dklepikov devstok: it can be related to network speed
14:34 dklepikov devstok: or disk performance
14:34 devstok not the network
14:34 devstok I have a 1 Gb link
14:35 devstok in the object storage I have 2 disks, 2 TB each
14:35 devstok for a total of 4 TB
14:35 devstok perhaps the problem is related to the first disk, which is shared with the Ceph OS
14:35 devstok from Fuel I didn't set any space for the journal
14:36 dklepikov devstok: check disk I/O performance
14:38 devstok http://paste.openstack.org/show/172979/
14:40 dklepikov devstok: iostat -x 1 /dev/vda3
14:41 Miouge joined #fuel
14:41 devstok on the ceph or on the controller?
14:42 devstok I got 3 virtual machine for the controllers
14:44 dklepikov devstok: both
14:44 dklepikov devstok: from one of the controllers, and from the node with osd.1
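For the `iostat -x` check above, a sustained %util near 100 means the device is saturated. A small sketch that flags this from saved iostat output; the sample line is made up, and the 90% threshold is an arbitrary rule of thumb, not a hard limit:

```shell
# Made-up sample of `iostat -x 1 /dev/vda3` output; on a real node, capture
# a few seconds with: iostat -x 1 5 /dev/vda3 > /tmp/iostat.out
cat > /tmp/iostat.out <<'EOF'
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
vda3 0.00 4.00 0.50 42.00 2.00 520.00 24.56 8.10 190.00 22.50 95.60
EOF

# %util is the last column; above ~90 the disk is the likely bottleneck,
# which would explain the "slow requests" warnings on the OSD.
awk '$1 == "vda3" && $NF+0 > 90 { print $1, "is near-saturated:", $NF "% util" }' /tmp/iostat.out
# vda3 is near-saturated: 95.60% util
```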
15:19 andriikolesnikov joined #fuel
15:25 moizarif joined #fuel
15:34 Miouge joined #fuel
15:54 Miouge joined #fuel
16:14 mattgriffin joined #fuel
16:20 emagana joined #fuel
16:31 blahRus joined #fuel
17:03 mattgriffin joined #fuel
17:11 claflico joined #fuel
17:21 daniel3_ joined #fuel
17:28 rmoe joined #fuel
17:53 jobewan joined #fuel
17:56 xarses joined #fuel
18:51 emagana joined #fuel
18:52 emagana joined #fuel
19:11 HeOS joined #fuel
19:31 saibarspeis joined #fuel
19:33 emagana joined #fuel
19:33 emagana joined #fuel
19:38 emagana joined #fuel
19:44 teran_ joined #fuel
19:45 teran__ joined #fuel
19:48 teran joined #fuel
19:54 teran_ joined #fuel
20:13 rbowen joined #fuel
21:17 e0ne joined #fuel
21:22 rbowen joined #fuel
21:37 e0ne joined #fuel
21:39 emagana joined #fuel
21:58 emagana joined #fuel
22:38 emagana joined #fuel
22:52 emagana joined #fuel
23:51 xarses joined #fuel
