
IRC log for #gluster, 2017-04-20


All times are shown in UTC.

Time Nick Message
00:00 JoeJulian To use a filesystem, you need to go through that filesystem's interface.
00:00 JoeJulian In the case of gluster, that's either the fuse mount, nfs mount, or using the api (which qemu supports).
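(For reference, a minimal sketch of the three access paths JoeJulian lists; the host and volume names here, server1 and myvol, are hypothetical:)

    # FUSE mount (the native gluster client)
    mount -t glusterfs server1:/myvol /mnt/gluster

    # NFS mount (gluster's built-in NFS server speaks NFSv3)
    mount -t nfs -o vers=3 server1:/myvol /mnt/gluster-nfs

    # libgfapi, the route qemu uses: create a disk image directly on the volume
    qemu-img create -f qcow2 gluster://server1/myvol/vm1.qcow2 20G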
00:01 gospod2 I'm sorry, I meant local brick = fuse mount JoeJulian
00:01 JoeJulian Ah, ok.
00:02 gospod2 I know the glusterfs mount is writing to both nodes at the same time, but as you can see from the connection here, I want it to write at least to the local brick if there is no other :p
00:03 JoeJulian Then the other issue I think you're misinterpreting is the timeout waiting for the tcp connection to continue. When you pull an ethernet cable there's functionally no difference between a powered off machine and switch reboot. The connection may (often does) come back.
00:03 JoeJulian It's what's known as a ,,(ping timeout)
00:03 glusterbot I do not know about 'ping timeout', but I do know about these similar topics: 'ping-timeout'
00:03 JoeJulian @ping-timeout
00:03 glusterbot JoeJulian: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. With an average MTBF of 45000 hours for a server, even just a replica 2 would result in a 42 second MTTR every 2.6 years, or 6 nines of uptime.
00:03 JoeJulian Don't get pedantic with me, glusterbot!
00:03 major hah
00:11 gospod2 lol
00:14 gospod2 do I need to skip all the testing then and go straight to seeing whether the KVM guests crash or not, or disable ping timeout? :/
00:14 gospod2 some have passthrough devices and 1 is nested, don't think a pause is possible
00:15 JoeJulian You can disable ping-timeout if you can guarantee you'll never have a network glitch.
00:15 JoeJulian Otherwise, just deal with it every 2.6 years.
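(For reference, ping-timeout is an ordinary volume option; a minimal sketch, assuming a hypothetical volume named myvol. Setting it to 0 is what "disabling" it means here:)

    # inspect the current value (the default is 42 seconds)
    gluster volume get myvol network.ping-timeout

    # change it; JoeJulian's warning is about setting this to 0
    gluster volume set myvol network.ping-timeout 42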
00:15 moneylotion joined #gluster
00:15 gospod2 node rebooting for kernel update == network glitch or am I wrong ? :P lol
00:17 JoeJulian No, if you gracefully exit the server daemon, it will send the TCP RST which will close the connection. When the client has a gracefully closed connection, you'll see no interruption of service.
00:18 gospod2 nice explanation, I would never have googled that.
00:18 gospod2 simple reboot in centos7 doesn't gracefully exit glusterd?
00:18 JoeJulian It should.
00:18 JoeJulian Is that not what you're experiencing?
00:18 gospod2 nope
00:19 gospod2 rebooted like 500 times today alone
00:19 JoeJulian Sounds like an order of operations bug in centos.
00:19 JoeJulian How are you configuring your network?
00:19 JoeJulian systemd-networkd?
00:19 gospod2 at the moment, while I'm testing, it's the default (NetworkManager). should I go for network?
00:19 vbellur joined #gluster
00:19 gospod2 in the end I'm deploying all my servers with network
00:20 gospod2 which one do you prefer, JoeJulian?
00:20 JoeJulian I think NM downs the interfaces early, iirc.
00:20 JoeJulian I really like systemd-networkd.
00:20 JoeJulian It's very powerful.
00:20 JoeJulian and simple at the same time.
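(A minimal sketch of what a systemd-networkd setup looks like; the interface name eth0 and the addresses are hypothetical:)

    # /etc/systemd/network/10-eth0.network
    [Match]
    Name=eth0

    [Network]
    Address=192.168.1.10/24
    Gateway=192.168.1.1
    DNS=192.168.1.1

Then switch services over:

    systemctl disable NetworkManager
    systemctl enable systemd-networkd
    systemctl start systemd-networkd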
00:21 gospod2 thanks for this clue JoeJulian ! I'm testing it and will report back
00:21 JoeJulian I'm heading home. Good luck.
00:21 gospod2 np thanks !
00:24 moneylotion_ joined #gluster
00:42 moneylotion joined #gluster
00:53 daMaestro joined #gluster
01:06 shdeng joined #gluster
01:26 Wizek_ joined #gluster
01:50 derjohn_mob joined #gluster
01:52 plarsen joined #gluster
03:05 Gambit15 joined #gluster
03:28 prasanth joined #gluster
03:29 susant joined #gluster
03:33 magrawal joined #gluster
03:36 nbalacha joined #gluster
03:38 riyas joined #gluster
03:43 nbalacha joined #gluster
03:45 moneylotion joined #gluster
03:50 susant joined #gluster
03:50 susant left #gluster
04:00 moneylotion joined #gluster
04:02 Shu6h3ndu joined #gluster
04:06 gyadav joined #gluster
04:10 buvanesh_kumar joined #gluster
04:16 atinm joined #gluster
04:19 itisravi joined #gluster
04:21 ppai joined #gluster
04:25 poornima_ joined #gluster
04:34 kramdoss_ joined #gluster
04:35 skumar joined #gluster
04:35 kramdoss_ joined #gluster
04:38 ankitr joined #gluster
04:39 aravindavk joined #gluster
04:43 apandey joined #gluster
04:49 sanoj joined #gluster
04:50 itisravi joined #gluster
05:01 Karan joined #gluster
05:03 karthik_us joined #gluster
05:07 rafi joined #gluster
05:20 ppai joined #gluster
05:24 Philambdo joined #gluster
05:27 mbukatov joined #gluster
05:27 Saravanakmr joined #gluster
05:31 jwd joined #gluster
05:33 ashiq joined #gluster
05:39 Karan joined #gluster
05:43 Prasad joined #gluster
05:51 skoduri joined #gluster
05:52 hgowtham joined #gluster
05:57 ankitr joined #gluster
05:57 k0nsl joined #gluster
05:57 k0nsl joined #gluster
05:59 _KaszpiR_ joined #gluster
06:02 kramdoss_ joined #gluster
06:02 Prasad joined #gluster
06:04 Prasad_ joined #gluster
06:07 shdeng joined #gluster
06:13 sona joined #gluster
06:18 kotreshhr joined #gluster
06:19 itisravi joined #gluster
06:23 ppai joined #gluster
06:25 kramdoss_ joined #gluster
06:26 sbulage joined #gluster
06:27 susant joined #gluster
06:29 jtux joined #gluster
06:33 sanoj joined #gluster
06:34 hgowtham joined #gluster
06:37 kdhananjay joined #gluster
06:41 ppai joined #gluster
06:42 sbulage joined #gluster
06:44 msvbhat joined #gluster
06:45 rafi joined #gluster
06:56 atinm joined #gluster
06:57 susant joined #gluster
06:59 kotreshhr joined #gluster
07:00 lalatenduM joined #gluster
07:01 lalatenduM joined #gluster
07:02 jtux left #gluster
07:03 lalatenduM joined #gluster
07:14 ayaz joined #gluster
07:37 fsimonce joined #gluster
07:43 atrius joined #gluster
07:45 anoopcs joined #gluster
07:45 Saravanakmr joined #gluster
07:49 john joined #gluster
07:55 kotreshhr joined #gluster
07:57 derjohn_mob joined #gluster
08:01 hgowtham joined #gluster
08:05 atinm joined #gluster
08:10 om2 joined #gluster
08:20 ankitr joined #gluster
08:20 flying joined #gluster
08:23 karthik_us|lunch joined #gluster
08:25 ankitr joined #gluster
08:32 karthik_us joined #gluster
08:32 rastar joined #gluster
08:45 sanoj joined #gluster
09:07 poornima joined #gluster
09:40 Prasad__ joined #gluster
09:44 jkroon joined #gluster
09:57 kramdoss_ joined #gluster
09:59 msvbhat joined #gluster
10:11 ingard__ joined #gluster
10:12 ingard__ hi. we've been looking at Supermicro 60- and 90-bay servers. does anyone here know of someone that uses these models (or similar density) for glusterfs?
10:13 Wizek_ joined #gluster
10:15 Klas damn, that's a lot of disk space =)
10:18 Philambdo1 joined #gluster
10:22 Philambdo joined #gluster
11:29 _nixpanic joined #gluster
11:29 _nixpanic joined #gluster
11:31 Guest25353 joined #gluster
11:32 rwheeler joined #gluster
11:32 buvanesh_kumar joined #gluster
11:35 susant joined #gluster
12:01 gyadav_ joined #gluster
12:08 jkroon ingard__, not for gluster, but i do use the 36 drive supermicro chassis.
12:08 gyadav__ joined #gluster
12:08 jkroon be prepared for it to kick out drives from time to time.
12:08 jkroon on raid 6 that's quite a long rebuild :)
12:09 jkroon steer clear of kernels 4.1 through at least 4.7
12:09 jkroon those kernels completely lock up during any form of rebuild (simple check is good enough to kill the array)
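(The "simple check" jkroon refers to is md's built-in consistency scrub, driven through sysfs; a sketch, with the array name md0 hypothetical:)

    # trigger a read-only consistency check of the array
    echo check > /sys/block/md0/md/sync_action

    # watch progress; on the affected 4.1-4.7 kernels this is where the lockup hits
    cat /proc/mdstat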
12:23 lalatenduM joined #gluster
12:29 kotreshhr left #gluster
12:30 amarts joined #gluster
12:34 bartden joined #gluster
12:34 bartden Hi, how can I remove a brick without having to confirm it? I want to automate this process
12:41 baber joined #gluster
12:44 kramdoss_ joined #gluster
12:46 fsimonce joined #gluster
12:46 Karan joined #gluster
12:47 itisravi bartden:  `gluster --mode-script ....` is what you need to do if you want to script out commands, although remove-brick is not the command you want to automate unless it is for testing.
12:47 bartden itisravi it's not for testing … so why wouldn't I want to do it?
12:48 itisravi bartden: because remove-brick involves migrating data and you want to monitor its progress.
12:49 bartden So whenever I do a start, I have to wait until it finishes before I do a commit, correct?
12:49 itisravi yes, start-->status-->commit.
12:49 glusterbot itisravi: start-->status's karma is now -1
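(The start-->status-->commit sequence itisravi describes, sketched with hypothetical volume and brick names:)

    # begin migrating data off the brick
    gluster volume remove-brick myvol server1:/bricks/b1 start

    # poll until the migration shows as completed
    gluster volume remove-brick myvol server1:/bricks/b1 status

    # only then make the removal permanent
    gluster volume remove-brick myvol server1:/bricks/b1 commit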
12:53 skoduri joined #gluster
12:54 bartden ok, thx. Additional question: when adding a brick to a distributed cluster, should I always run the rebalance, or will gluster automatically favor the new node until its usage is equal to the already-added nodes?
12:59 itisravi You would need to run rebalance. You can optionally run fix-layout first so that new files start going to the new brick and then run the full rebalance later.
12:59 itisravi btw `gluster --mode=script` is what I meant to type.
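(A sketch of the add-brick path itisravi outlines, again with hypothetical names; --mode=script suppresses the interactive confirmation that bartden asked about:)

    # add the new brick without an interactive prompt
    gluster --mode=script volume add-brick myvol server2:/bricks/b2

    # fix-layout only rewrites directory layouts so new files can hash to the new brick
    gluster volume rebalance myvol fix-layout start

    # the full rebalance later migrates existing data as well
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status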
13:00 ingard__ jkroon: yeah we use the 36 bays extensively
13:00 ingard__ cant say we've got too many problems with the raids though
13:00 Karan joined #gluster
13:00 ingard__ drives fail from time to time yes, but its usually handled gracefully
13:00 ingard__ rebuilds do take time ofc
13:00 jkroon ingard__, do you also find that disks sometimes go missing? (we run the disks as simple disks due to the RAID controller not dealing with RAID6)
13:01 ingard__ jkroon: how do (did) you setup the raid6?
13:01 jkroon in 80% of cases we reseat the drives and then it rebuilds, and SMART doesn't report issues.
13:01 jkroon mdadm ...
13:01 jkroon 3 arrays of 12 disks each.
13:01 jkroon then we use LVM on top of that.
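(A sketch of the layout jkroon describes, three 12-disk RAID6 md arrays pooled with LVM; every device name below is hypothetical:)

    # one of the three 12-disk RAID6 arrays
    mdadm --create /dev/md0 --level=6 --raid-devices=12 /dev/sd[b-m]

    # pool the arrays with LVM
    pvcreate /dev/md0 /dev/md1 /dev/md2
    vgcreate vg_bricks /dev/md0 /dev/md1 /dev/md2
    lvcreate -l 100%FREE -n brick1 vg_bricks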
13:02 ingard__ right. we've seen some boxes completely freeze up when ssds in mdadm fail
13:02 jkroon kernel version?
13:02 ingard__ smart reports no issues, but we end up having to fail one drive manually to get it back in biz
13:02 ingard__ both precise and trusty
13:03 ingard__ but these are the drives attached directly to mobo
13:03 ingard__ for the 36 bays we use lsi controller
13:03 ingard__ 2x raid6 with 18 drives each
13:03 jkroon i'm not familiar with the kernel versions in precise and trusty.
13:04 jkroon I do know that on kernel versions 4.1 up to at least 4.7, if I ask the kernel to perform a consistency check I'm headed for a hard reboot.
13:04 ingard__ right. yeah we use hw raid for the storage
13:04 ingard__ mdadm for the OS drive bays
13:05 ingard__ why dont you use hw raid on these boxes?
13:06 ingard__ I wouldn't want to use mdadm on them :s at least for us, with smaller raid sets on mdadm we see lots of weirdness when a drive starts to fail
13:06 ingard__ server locks up. iowait through the roof etc etc
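(The manual intervention ingard__ describes maps onto mdadm's fail/remove/re-add cycle; a sketch with hypothetical device names:)

    # force the suspect drive out of the array
    mdadm /dev/md0 --fail /dev/sdx --remove /dev/sdx

    # after reseating or replacing it, add it back and let the rebuild run
    mdadm /dev/md0 --add /dev/sdx
    cat /proc/mdstat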
13:07 alvinstarr joined #gluster
13:11 Prasad joined #gluster
13:13 sona joined #gluster
13:30 skylar joined #gluster
13:30 ankitr joined #gluster
13:30 kdhananjay joined #gluster
13:39 Karan joined #gluster
13:48 JoeJulian probably the lsi expander.
13:51 ira joined #gluster
13:52 moneylotion joined #gluster
13:55 scobanx joined #gluster
13:55 amarts joined #gluster
13:56 scobanx Hi, I have a question about DHT. I want to know whether file hashing in DHT uses only the file name. If so, assume we have dir1/file1 and dir2/file1; do they land in the same sub-volume?
13:57 JoeJulian Yes and no. Yes, they use the same hash value. No, different hash maps are applied to the directories.
13:58 susant left #gluster
13:58 skoduri joined #gluster
13:59 JoeJulian see https://joejulian.name/blog/dht-misses-are-expensive/
13:59 glusterbot Title: DHT misses are expensive (at joejulian.name)
13:59 scobanx Thanks for the answer JoeJulian
13:59 JoeJulian You're welcome.
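(JoeJulian's answer can be seen on the bricks themselves: every directory carries its own trusted.glusterfs.dht layout xattr, so the same filename hash falls into different ranges per directory. A sketch, with hypothetical brick and mount paths:)

    # compare the layout ranges assigned to each directory on a brick
    getfattr -n trusted.glusterfs.dht -e hex /bricks/b1/dir1
    getfattr -n trusted.glusterfs.dht -e hex /bricks/b1/dir2

    # from the client mount, ask which brick actually holds each file
    getfattr -n trusted.glusterfs.pathinfo /mnt/gluster/dir1/file1
    getfattr -n trusted.glusterfs.pathinfo /mnt/gluster/dir2/file1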
14:04 sanoj joined #gluster
14:05 skoduri joined #gluster
14:06 Karan joined #gluster
14:11 moneylotion joined #gluster
14:11 shaunm joined #gluster
14:13 Shu6h3ndu joined #gluster
14:14 plarsen joined #gluster
14:14 jevo joined #gluster
14:19 gyadav__ joined #gluster
14:25 kpease joined #gluster
14:29 moneylotion joined #gluster
14:45 derjohn_mob joined #gluster
14:47 riyas joined #gluster
14:48 fyxim left #gluster
14:53 farhorizon joined #gluster
14:54 jkroon ingard__, we couldn't get RAID6 configured at the HW level.
14:59 vbellur joined #gluster
15:02 ankitr joined #gluster
15:03 Shu6h3ndu joined #gluster
15:06 gyadav joined #gluster
15:15 gnulnx left #gluster
15:28 farhorizon joined #gluster
15:38 jbrooks joined #gluster
15:48 ankitr joined #gluster
15:54 moneylotion joined #gluster
15:57 flying joined #gluster
16:01 shemmy joined #gluster
16:04 farhorizon joined #gluster
16:07 moneylotion joined #gluster
16:12 moneylotion joined #gluster
16:17 susant joined #gluster
16:21 kramdoss_ joined #gluster
16:25 Gambit15 joined #gluster
16:29 Karan joined #gluster
17:03 plarsen joined #gluster
17:06 gyadav joined #gluster
17:12 ivan_rossi left #gluster
17:14 msvbhat joined #gluster
17:33 sona joined #gluster
17:40 buvanesh_kumar joined #gluster
17:50 susant joined #gluster
18:02 susant joined #gluster
18:04 cliluw joined #gluster
18:14 baber joined #gluster
18:15 ira joined #gluster
18:39 gem joined #gluster
18:47 gyadav joined #gluster
18:50 bchilds_ joined #gluster
18:53 KoSoVaR joined #gluster
18:57 KoSoVaR hi all - I'm trying to price out a medium sized deployment and having a difficult time pricing out something that's between "general purpose" and "hpc" but closer to general purpose... what I do know is that I'm looking to use 10TB 7200RPM drives - and don't quite understand how to spec out which CPUs, how much RAM.. and where the sweet spots for I/O bottlenecks are..  looking for
18:57 KoSoVaR some general feedback.. and yes, I know there are tons of questions about how it's going to be used - small files, big files, archival, and probably recommendations around tiered storage and putting in nvme/ssd for a hot tier.. but "generally speaking"... I'd like to figure out where to start on the spec of the server... if I use 24 drives, where do I want to be for raid controller, cpu, RAM
18:57 KoSoVaR .. vs 60 or 90 bay chassis
18:58 KoSoVaR I've read through things like this, for example https://www.redhat.com/cms/managed-files/st-RHGS-QCT-config-size-guide-technology-detail-INC0436676-201608-en.pdf .. and some other docs and blogs, but I can't quite pinpoint where a sweet spot would be for price / bottleneck
18:59 KoSoVaR and when I say price, I mean .. do I use 2687v4s or 2640v4s .. or something else.. so really, the hardware driving the cost of the actual chassis
18:59 kpease joined #gluster
19:01 baber joined #gluster
19:07 msvbhat joined #gluster
19:09 farhorizon joined #gluster
19:23 jkroon joined #gluster
19:30 msvbhat joined #gluster
20:01 farhorizon joined #gluster
20:05 farhorizon joined #gluster
20:07 baber joined #gluster
20:23 rastar joined #gluster
20:27 baber joined #gluster
20:32 msvbhat joined #gluster
20:52 gem joined #gluster
21:38 edong23 joined #gluster
21:51 Vaelatern joined #gluster
22:14 jkroon joined #gluster
22:37 ankitr joined #gluster
22:39 ira joined #gluster
22:51 farhorizon joined #gluster
22:56 farhoriz_ joined #gluster
23:32 kraynor5b_ joined #gluster
23:35 baber joined #gluster
23:57 plarsen joined #gluster
