
IRC log for #gluster, 2017-11-21


All times shown according to UTC.

Time Nick Message
00:00 deniszh joined #gluster
00:06 purpleidea joined #gluster
00:06 purpleidea joined #gluster
00:10 protoporpoise question re: gluster fuse client - is it possible to not have each fuse client writing to all replica servers at once? it's /really/ impacting our write performance, especially since one of the replicas is just an arbiter node
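(Context for the question above, as a minimal sketch: on a replicate/AFR volume the FUSE client itself fans writes out to every replica brick, and the arbiter brick only receives metadata, not file data. The volume name below is a placeholder; these commands just confirm the layout and the client-side write-behind settings.)

    gluster volume info myvol                                 # shows the replica count and arbiter layout
    gluster volume get myvol performance.write-behind         # client-side write-behind, normally on
    gluster volume get myvol performance.write-behind-window-size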
00:13 Wizek_ joined #gluster
00:18 map1541 joined #gluster
00:32 zcourts joined #gluster
00:34 msvbhat joined #gluster
00:45 timotheus1_ joined #gluster
00:45 farhorizon joined #gluster
00:47 baber joined #gluster
00:54 purpleidea joined #gluster
01:07 vbellur joined #gluster
01:10 Wizek__ joined #gluster
01:14 rastar joined #gluster
01:14 msvbhat joined #gluster
01:15 vbellur joined #gluster
01:23 wushudoin joined #gluster
01:33 map1541 joined #gluster
01:36 daMaestro joined #gluster
01:58 vbellur joined #gluster
02:16 major joined #gluster
02:19 vbellur joined #gluster
02:31 poornima joined #gluster
02:56 ilbot3 joined #gluster
02:56 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:03 msvbhat joined #gluster
03:08 gyadav joined #gluster
03:28 ppai joined #gluster
03:30 zyffer joined #gluster
03:32 nbalacha joined #gluster
03:33 aravindavk joined #gluster
03:36 msvbhat joined #gluster
03:56 psony joined #gluster
03:56 cliluw joined #gluster
03:58 kramdoss_ joined #gluster
04:08 itisravi joined #gluster
04:15 psony joined #gluster
04:16 ompragash joined #gluster
04:25 apandey joined #gluster
04:26 sahina joined #gluster
04:33 skumar joined #gluster
04:35 kramdoss_ joined #gluster
04:40 Prasad joined #gluster
04:50 daMaestro joined #gluster
05:04 vishnuk joined #gluster
05:07 msvbhat joined #gluster
05:10 vishnu_kunda joined #gluster
05:13 rafi1 joined #gluster
05:13 sanoj joined #gluster
05:24 vishnu_sampath joined #gluster
05:25 karthik_us joined #gluster
05:44 hgowtham joined #gluster
06:00 Saravanakmr joined #gluster
06:04 kotreshhr joined #gluster
06:08 skumar_ joined #gluster
06:22 msvbhat_ joined #gluster
06:23 skumar_ joined #gluster
06:27 decayofmind joined #gluster
06:46 susant joined #gluster
06:51 mbukatov joined #gluster
06:58 cloph_away joined #gluster
07:06 susant joined #gluster
07:11 jtux joined #gluster
07:26 Saravanakmr joined #gluster
07:35 vishnu_kunda joined #gluster
07:39 zcourts joined #gluster
08:00 pdrakeweb joined #gluster
08:00 major joined #gluster
08:02 ivan_rossi joined #gluster
08:02 vishnu_sampath joined #gluster
08:06 fsimonce joined #gluster
08:09 jkroon joined #gluster
08:19 Saravanakmr joined #gluster
08:35 karthik_us joined #gluster
08:36 itisravi joined #gluster
08:38 vishnu_kunda joined #gluster
08:43 sanoj joined #gluster
08:48 skumar_ joined #gluster
09:09 ahino joined #gluster
09:15 tdasilva joined #gluster
09:18 deniszh joined #gluster
09:28 buvanesh_kumar joined #gluster
09:36 bartden joined #gluster
09:38 bartden Hi, can i use iowait to monitor gluster network reads and writes on the client machine?
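(A note on the question above: iowait only counts time spent waiting on local block devices, so on a client that mounts gluster over FUSE it will not reflect gluster reads and writes. A sketch of alternatives; volume name, mount point and dump name are placeholders, and where the io-stats dump file lands varies by version.)

    gluster volume profile myvol start        # per-brick read/write stats on the servers
    gluster volume profile myvol info
    # client side: the io-stats translator can dump counters for a mount
    setfattr -n trusted.io-stats-dump -v client-stats /mnt/gluster
    # or simply watch the NIC carrying gluster traffic
    sar -n DEV 1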
09:40 arpu_ joined #gluster
09:41 deniszh left #gluster
09:42 prasanth joined #gluster
09:46 Humble joined #gluster
09:46 rastar joined #gluster
09:47 rastar joined #gluster
09:58 ThHirsch joined #gluster
10:13 zcourts joined #gluster
10:21 MrAbaddon joined #gluster
10:45 jkroon joined #gluster
11:00 gyadav_ joined #gluster
11:02 buvanesh_kumar joined #gluster
11:03 sona joined #gluster
11:15 buvanesh_kumar joined #gluster
11:23 sanoj joined #gluster
11:26 zcourts_ joined #gluster
11:26 zcourts__ joined #gluster
11:28 ompragash|FAD joined #gluster
11:42 buvanesh_kumar joined #gluster
12:02 apandey joined #gluster
12:05 TBlaar2 joined #gluster
12:06 bueschi joined #gluster
12:11 bueschi joined #gluster
12:14 Ulrar joined #gluster
12:18 msvbhat joined #gluster
12:19 bueschi left #gluster
12:24 karthik_us joined #gluster
12:30 buvanesh_kumar joined #gluster
12:38 bueschi joined #gluster
12:49 ThHirsch joined #gluster
12:50 bueschi joined #gluster
12:57 bartden Hi, can i use iowait to monitor gluster network reads and writes on the client machine?
13:00 major joined #gluster
13:01 ahino1 joined #gluster
13:05 phlogistonjohn joined #gluster
13:16 MrAbaddon joined #gluster
13:19 dustymabe left #gluster
13:25 Prasad joined #gluster
13:27 ThHirsch joined #gluster
13:30 Wizek__ joined #gluster
13:36 ThHirsch joined #gluster
13:36 skumar__ joined #gluster
13:43 skumar_ joined #gluster
13:46 jiffin joined #gluster
13:50 [diablo] joined #gluster
13:52 shyam joined #gluster
13:57 major joined #gluster
13:57 shyam joined #gluster
14:03 ahino joined #gluster
14:08 baber joined #gluster
14:14 fxpester joined #gluster
14:15 fxpester hi all, I just deployed a 3-node cluster (dockerized on Ubuntu 16.04) and started a volume with 'replica 3'. I do a local gluster mount and try to test it with fio
14:15 fxpester `fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --filename=test --bs=4k --iodepth=64 --size=1G --readwrite=randwrite --name=bzzzzz`
14:16 fxpester native result, no gluster: `[0KB/98368KB/0KB /s] [0/24.6K/0 iops]`
14:16 fxpester local mount gluster result: `[0KB/4252KB/0KB /s] [0/1063/0 iops]`
14:16 prasanth joined #gluster
14:17 fxpester that's a 20x performance degradation, is it supposed to be like that?
14:18 rastar fxpester: what network do you use for gluster containers?
14:18 rastar fxpester: host or software defined?
14:18 fxpester host network
14:19 fxpester `--net=host`
14:19 rastar fxpester: ok
14:19 glusterbot fxpester: `'s karma is now -2
14:19 rastar fxpester: by native result, you mean local disk I assume
14:19 fxpester yes, cd ~/ and run, then cd /mnt and repeat
14:20 rastar fxpester: please try a single brick volume to remove the impact of replication
14:20 rastar fxpester: I suspect network and/or CPU restrictions on pod for the cause
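(fxpester runs this comparison below; a sketch of what the suggested single-brick test looks like — hostname, brick path and mount point here are made up for illustration.)

    gluster volume create singletest node1:/bricks/singletest force   # force: only needed if the brick sits on the root fs
    gluster volume start singletest
    mount -t glusterfs node1:/singletest /mnt/singletest
    cd /mnt/singletest && fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
        --filename=test --bs=4k --iodepth=64 --size=1G --readwrite=randwrite --name=singlebrick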
14:25 fxpester ok, single node results: `[0KB/10080KB/0KB /s] [0/2520/0 iops]`
14:25 fxpester 10 times degradation
14:27 fxpester LA spikes to 15
14:28 fxpester and rerun of native test: `[0KB/101.1MB/0KB /s] [0/26.9K/0 iops]`
14:28 fxpester kernel 4.4.0-87-generic
14:29 fxpester gluster from ubuntu ppa
14:30 dominicpg joined #gluster
14:30 phlogistonjohn joined #gluster
14:32 deniszh1 joined #gluster
14:34 ThHirsch joined #gluster
14:35 MrAbaddon joined #gluster
14:36 msvbhat joined #gluster
14:39 psony joined #gluster
14:39 marc_888 joined #gluster
14:46 aravindavk joined #gluster
14:47 skylar1 joined #gluster
14:55 farhorizon joined #gluster
14:59 deniszh1 left #gluster
15:08 plarsen joined #gluster
15:10 timotheus1_ joined #gluster
15:11 shyam joined #gluster
15:14 psony joined #gluster
15:20 kramdoss_ joined #gluster
15:33 buvanesh_kumar joined #gluster
15:34 gyadav_ joined #gluster
15:40 hmamtora joined #gluster
15:40 hmamtora_ joined #gluster
15:45 marc_888 joined #gluster
15:57 kpease joined #gluster
15:58 kpease_ joined #gluster
16:03 farhorizon joined #gluster
16:21 buvanesh_kumar joined #gluster
16:25 Humble joined #gluster
16:27 farhorizon joined #gluster
16:30 rastar joined #gluster
16:33 jkroon joined #gluster
16:38 ompragash joined #gluster
16:38 d0minicpg joined #gluster
16:40 shyam joined #gluster
16:46 bueschi joined #gluster
16:54 boutcheee520 joined #gluster
17:07 skumar_ joined #gluster
17:22 shyam joined #gluster
17:27 jmulligan joined #gluster
17:27 jmulliga joined #gluster
17:36 ivan_rossi left #gluster
17:41 buvanesh_kumar joined #gluster
18:06 zcourts joined #gluster
18:25 cliluw joined #gluster
18:31 cliluw joined #gluster
18:35 hchiramm_ joined #gluster
18:47 ahino joined #gluster
18:50 vbellur joined #gluster
18:51 vbellur1 joined #gluster
18:52 vbellur joined #gluster
18:52 Vapez joined #gluster
18:52 Vapez joined #gluster
19:00 MrAbaddon joined #gluster
19:04 vbellur joined #gluster
19:04 vbellur joined #gluster
19:05 vbellur joined #gluster
19:06 vbellur joined #gluster
19:06 baber joined #gluster
19:09 vbellur joined #gluster
19:10 vbellur joined #gluster
19:11 vbellur joined #gluster
19:11 vbellur joined #gluster
19:12 vbellur joined #gluster
19:13 vbellur joined #gluster
19:14 vbellur1 joined #gluster
19:14 vbellur joined #gluster
19:25 Vapez joined #gluster
19:35 farhoriz_ joined #gluster
19:46 phlogistonjohn joined #gluster
20:03 tg2 joined #gluster
20:31 glisignoli How can I determine the status of a replica brick?
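(For the question above, the usual starting points are the volume status and heal-info commands; the volume name is a placeholder.)

    gluster volume status myvol detail            # whether each brick is online, its pid, disk usage
    gluster volume heal myvol info                # entries still pending heal on each replica brick
    gluster volume heal myvol info split-brain    # anything stuck in split-brain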
20:34 rofl____ anyone have any experience with adaptec vs lsi for high-IO gluster volumes?
20:34 rofl____ we have issues with iowait on a adaptec controller
20:38 protoporpoise IMO - we got rid of all 'hardware' RAID controllers and switched to kernel (MD) RAID; major performance increases across the board, and it rid us of adapter and adapter-firmware failures / poor design
20:40 rofl____ we do ~128tb bricks so its convenient to have hwraid
20:40 protoporpoise IMO - even more reason to get rid of hardware RAID, it'd be holding your drives back
20:41 protoporpoise anyway, im not helping with your question there - just giving my 2c
20:41 rofl____ mdadm is too much of a maintenance burden with thousands of drives imho
20:41 rofl____ sure, any feedback is welcomed
20:41 rofl____ :-)
20:42 protoporpoise :) hopefully someone has an answer for you with adaptec vs lsi, personally when we did have RAID cards from memory we had more problems with adaptec than LSI, but I remember it was crucial to keep LSI firmware up to date
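(For reference, the kernel MD RAID setup being advocated above boils down to a few commands; the RAID level, device names and filesystem here are illustrative only, not what either party actually runs.)

    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]
    mkfs.xfs /dev/md0
    cat /proc/mdstat              # array state and rebuild progress
    mdadm --detail /dev/md0       # per-member health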
20:46 ThHirsch joined #gluster
20:49 smgulua joined #gluster
20:53 zcourts joined #gluster
21:10 ingard protoporpoise: how do you do drive replacements etc with software raid on big volumes?
21:11 protoporpoise pull the drive out and replace it with a working on
21:11 protoporpoise one*
21:12 protoporpoise oh if you're talking about the partition table on the disk - we automate everything, so we just let puppet set the partition type to raid auto detect
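(A rough outline of that replacement flow with mdadm; the array name, member devices and partition-copy step are assumptions for illustration, not necessarily what the puppet automation described above does.)

    mdadm --manage /dev/md0 --fail /dev/sdc1       # mark the dying member failed
    mdadm --manage /dev/md0 --remove /dev/sdc1
    # swap the physical disk, then copy the partition layout from a healthy member
    sfdisk --dump /dev/sdb | sfdisk /dev/sdc
    mdadm --manage /dev/md0 --add /dev/sdc1        # resync starts automatically
    watch cat /proc/mdstat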
21:13 protoporpoise hardly ever have any of our 1000s of disks fail these days since moving to SSDs mind you
21:13 protoporpoise I remember the old days when it'd be several a month with spinning rust lol
21:16 ingard yeah its less frequent for sure :)
21:17 ingard but still i find it annoying replacing drives
21:17 ingard we dont do the raid rebuild stuff with puppet tho
21:17 protoporpoise that's what lackeys / cheap service contracts
21:17 protoporpoise are for
21:18 zcourts joined #gluster
21:18 ingard i meant the whole process of replacing drives (mostly actually figuring out which physical drive is bork, and then mdadm yadiyada)
21:27 protoporpoise oh right, when a drive fails a ticket is logged to our ticket system, we can either make that automatically log one with the vendor we get to do lackie tasks for us to swap out the disk (light will be amber, red or dead) and they just do it, a few hours later the disk is replaced, or we can just swap it next time we visit one of the datacentres for something more important and bring one with us.
21:30 ingard protoporpoise: the funny thing is. we had an incident when the dc lackie pulled the wrong drive from a raid5
21:30 ingard we've found the light doesn't always go red/amber
21:30 ingard and it can be tricky to figure out which bloody drive needs pulling :)
21:31 JoeJulian I've always made them verify the serial number first.
21:38 ingard JoeJulian: rly? how?
21:47 Jacob843 joined #gluster
21:49 JoeJulian ingard: The trays we had, you could see the SN before you uncaged the drive. I'd make them read it to me.
21:49 JoeJulian You could maybe do something similar with front-loading slots if you just label the tray with the SN of the drive.
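(Reading the serial from the OS, so it can be matched against a tray label, is a one-liner; the device name is a placeholder.)

    smartctl -i /dev/sdc | grep -i serial
    lsblk -o NAME,MODEL,SERIAL     # or all disks at once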
21:50 msvbhat joined #gluster
21:51 protoporpoise We don't even label our trays at all anymore, if the lights off - it's dead
21:51 protoporpoise we can also make the light blink rapidly in a colour should we want to make it clear
21:51 protoporpoise really easy when you use mdadm / linux rather than some crappy vendor drivers
21:54 Wizek_ joined #gluster
21:56 ingard protoporpoise: what do you use to force blinking?
21:56 protoporpoise here's a pic of one part of one of the racks - https://github.com/sammcj/smcleod_files/blob/master/images/storage_rack_2.jpg?raw=true
21:56 ingard JoeJulian: right. labeling the tray would fix it :)
21:57 protoporpoise @ingard: just ledctl
21:57 ingard right. yeah thats what we use as well
21:57 protoporpoise I mean, smartctl has it built in and mdadm can trigger events etc...
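(The ledctl usage being referred to, from the ledmon package; the device name is a placeholder.)

    ledctl locate=/dev/sdc        # blink the locate LED on the slot holding /dev/sdc
    ledctl locate_off=/dev/sdc    # stop blinking once the disk has been swapped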
21:57 ingard and as you said it doesn't always work
21:57 protoporpoise I haven't found it not to work actually
21:57 protoporpoise I guess if an LED died
21:57 ingard yeah it could be that
21:58 protoporpoise actually - I remember seeing that on a really old HP server
21:58 protoporpoise it was like a G6 or G7 or something
21:58 ingard the funny thing tho with the LED not working when a drive is dead is that the LED isn't on the actual drive
21:58 protoporpoise one of the two HDD LEDs was dead
21:58 ingard its on the backplane
21:58 protoporpoise remember the days when the drives did have LEDs!!!
21:58 protoporpoise those Quantum Fireballs
21:58 protoporpoise lol
21:58 ingard so that coincidence
21:58 ingard i dunno :)
21:58 protoporpoise or bigfoots
21:59 ingard we use the same servers
21:59 ingard :)
21:59 protoporpoise https://farm4.static.flickr.com/3201/2664533936_3a6dfbc8f4_b.jpg
21:59 ingard we haven't tried the nvme stuff yet tho
21:59 ingard hehe
21:59 protoporpoise Yeah for compute we use HP Blades (G8/9)
21:59 protoporpoise For storage we use Supermicro
21:59 protoporpoise although if I was starting fresh I'd ditch all the HP and use Supermicro for compute as well
22:00 protoporpoise Inside those supermicro's we also have PCIe NVMe
22:00 protoporpoise mostly DC3600, with a few 700 series
22:00 protoporpoise I figure if my laptops and desktops have NVMe - why not our servers ;)
22:01 protoporpoise the storage servers serve up between 4,000,000 and 8,000,000 random 4k write/read IOP/s per 2ru
22:01 protoporpoise and between 8-16GB/s
22:02 protoporpoise connect via iSCSI back to compute nodes
22:02 ingard nice
22:02 protoporpoise its all SO cheap when you ditch BS 'enterprise' rotational crap too
22:02 protoporpoise https://vimeo.com/154701062
22:02 protoporpoise bit of an old video (2-3 years)
22:03 ingard lol
22:03 ingard 8000MB/s
22:03 protoporpoise this is the very first gen I designed 3 ½ years(?) ago - https://vimeo.com/137813890
22:03 protoporpoise doing 2M/IOP/s per 1RU
22:03 protoporpoise heh seems like smallfry now
22:04 ingard and this is with mdadm?
22:04 protoporpoise yeah 100%
22:04 protoporpoise mdadm -> drbd -> LVM -> iSCSI
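(A very compressed sketch of that mdadm -> drbd -> LVM -> iSCSI stack; every device name, resource name and size below is an assumption, and the log doesn't say which iSCSI target implementation is used, so targetcli is shown as one option.)

    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/nvme[0-3]n1
    drbdadm create-md r0 && drbdadm up r0      # resource r0 defined in /etc/drbd.d/ on top of /dev/md0
    pvcreate /dev/drbd0
    vgcreate vg_fast /dev/drbd0
    lvcreate -L 500G -n lun0 vg_fast
    targetcli /backstores/block create name=lun0 dev=/dev/vg_fast/lun0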
22:05 protoporpoise I just ssh'd to one that's busy in prod: Load average: 0.60 0.42 0.38
22:05 protoporpoise lol
22:06 protoporpoise that's one of those old 1ru ones, just has Intel(R) Xeon(R) CPU E5-2609 v3 @ 1.90GHz
22:06 protoporpoise 32GB of DDR4, but only uses 1.49GB
22:06 protoporpoise anyway - im totally bragging now so I should shut up before my head explodes lol
22:11 ingard hehe
22:14 farhorizon joined #gluster
22:22 David_H__ joined #gluster
22:31 bmikhael joined #gluster
22:51 |R left #gluster
23:03 baber joined #gluster
23:15 zcourts_ joined #gluster
23:16 farhorizon joined #gluster
23:30 zcourts joined #gluster
23:36 bmikhael joined #gluster
23:40 bmikhael joined #gluster
23:42 David_H_Smith joined #gluster
23:47 Alghost joined #gluster
