IRC log for #gluster, 2017-05-07


All times shown according to UTC.

Time Nick Message
00:08 Wizek__ joined #gluster
01:33 derjohn_mob joined #gluster
01:44 kramdoss_ joined #gluster
02:11 shyam joined #gluster
02:34 amarts joined #gluster
02:43 eMBee joined #gluster
02:45 * eMBee is dealing with a gluster performance problem. the gluster servers are working but very slow to respond. one of the servers has a load average of 20. the other seems to be fine. both are running a monthly raid check, but that has not impacted performance before
02:46 eMBee clients are connecting to the server that seems to be fine, but are still suffering from a lot of timeouts. the mount is stuck most of the time
02:47 eMBee in trying to diagnose this, i want to force the clients to switch servers. is there a way to do that without rebooting the machine? or without unmounting the gluster volume
03:25 atrius joined #gluster
04:06 Humble joined #gluster
04:20 JoeJulian eMBee: theoretically, each time you open a file it _should_ be served by the first server to respond (for reads); writes are still synchronous, so you would still have this problem.
04:21 JoeJulian Are heals happening? If you're on an older version, turning off client-side heals may help: set cluster.[*]-self-heal off
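For reference, a minimal sketch of what that suggestion might look like on an older release; the volume name "myvol" is a placeholder, and the [*] wildcard stands for the data/metadata/entry variants of the option:

    # disable client-side self-heal so slow heals don't block client I/O
    gluster volume set myvol cluster.data-self-heal off
    gluster volume set myvol cluster.metadata-self-heal off
    gluster volume set myvol cluster.entry-self-heal off

The self-heal daemon (glustershd) on the servers continues healing in the background either way.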
04:45 kramdoss_ joined #gluster
04:49 msvbhat joined #gluster
05:53 riyas joined #gluster
06:02 Gnomethrower joined #gluster
06:03 jiffin joined #gluster
06:33 msvbhat joined #gluster
07:04 kramdoss_ joined #gluster
07:06 jiffin joined #gluster
07:22 om2_ joined #gluster
07:32 kramdoss_ joined #gluster
07:34 nitro3v joined #gluster
07:34 nitro3v Hey there
07:56 ahino joined #gluster
08:31 bootc JoeJulian: thanks for the hint; unfortunately, copying the file from the brick was complicated by it being a VM image on a volume with sharding
08:32 bootc in the end I did just remove the xattr trusted.bit-rot.bad-file from the files on the bricks, restarted glusterfsd, then triggered a full heal
08:32 bootc and now bitrot detection is off because I can't trust it not to break again
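A rough sketch of the recovery bootc describes, in case it is useful later; the brick path, file path and the volume name "myvol" are placeholders, and the exact way to restart the brick daemon may differ per setup:

    # on each brick, drop the bad-file marker directly from the brick copy
    setfattr -x trusted.bit-rot.bad-file /bricks/brick1/vm-images/vm01.img
    # restart the brick's glusterfsd (kill the brick PID shown by
    # "gluster volume status myvol", then bring the brick back)
    gluster volume start myvol force
    # trigger a full heal so the replicas re-sync
    gluster volume heal myvol full
    # and, as bootc did, turn bitrot detection off afterwards
    gluster volume bitrot myvol disable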
08:35 eMBee JoeJulian: you mean for writes the client waits for both servers to complete?
08:37 eMBee we ended up rebooting the clients, and were actually unable to access the second server through gluster (by specifying the server name in /etc/fstab), which was not surprising because that server had a load of 20, but we wanted to be sure that the load was connected to the problem
08:38 eMBee not that anything helped. rebooting the clients made gluster work for a few minutes until it started stalling again.
08:39 eMBee a raid check was running on the gluster servers, but turning that off didn't help (much)
08:40 eMBee finally we rebooted the gluster server with the high load. whether that helped is too early to tell
08:41 eMBee we did find a small number of errors in the SMART status of one of the disks where all the others had 0, so now we are suspecting a hardware problem
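For the record, the "server name in /etc/fstab" mount eMBee mentions above usually looks something like the line below; the server names, volume name and mount point are placeholders, and backupvolfile-server is the mount.glusterfs option that lets the client fetch the volume file from another server if the first one is unreachable at mount time:

    server1:/myvol  /mnt/gluster  glusterfs  defaults,_netdev,backupvolfile-server=server2  0 0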
08:42 Gnomethrower joined #gluster
09:00 msvbhat joined #gluster
09:54 nitro3v joined #gluster
10:09 nitro3v joined #gluster
10:14 nitro3v do you guys know a fast way to replicate two bricks containing lots of small files using heal? forcing the full heal even after tuning self-heal-readdir-size and self-heal-window-size is taking forever... like days to mirror a couple of TBs
10:14 nitro3v running 3.6.7
10:15 nitro3v on AWS with bricks made of 3x1TB GP2 volumes
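The options nitro3v mentions are ordinary volume options, so the tuning presumably looked roughly like this; "myvol" and the values are only illustrative, and on 3.6 the available options and value formats should be checked with "gluster volume set help":

    # larger heal window: more blocks healed per cycle
    gluster volume set myvol cluster.self-heal-window-size 16
    # bigger readdir chunks while crawling directories full of small files
    gluster volume set myvol cluster.self-heal-readdir-size 64KB
    # then kick off the crawl again
    gluster volume heal myvol full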
10:24 nitro3v joined #gluster
10:24 nitro3v still here :)
10:27 flying joined #gluster
10:27 Wizek__ joined #gluster
11:25 riyas joined #gluster
11:41 kotreshhr joined #gluster
11:48 kramdoss_ joined #gluster
12:13 al joined #gluster
12:32 msvbhat joined #gluster
12:57 shaunm joined #gluster
13:02 jiffin joined #gluster
13:05 Vapez joined #gluster
13:25 msvbhat joined #gluster
13:27 kramdoss_ joined #gluster
14:00 skoduri joined #gluster
14:17 nitro3v joined #gluster
14:22 sanoj joined #gluster
14:22 buvanesh_kumar joined #gluster
14:29 kotreshhr joined #gluster
14:32 kotreshhr left #gluster
14:34 kramdoss_ joined #gluster
14:45 kotreshhr joined #gluster
15:00 kotreshhr joined #gluster
15:02 kotreshhr left #gluster
15:08 kotreshhr1 joined #gluster
15:09 shyam joined #gluster
15:15 kotreshhr joined #gluster
15:21 kotreshhr1 joined #gluster
15:23 nitro3v joined #gluster
15:35 social joined #gluster
15:38 Vapez_ joined #gluster
15:40 JoeJulian eMBee: Well that's kind-of good news. At least that's something easy to address.
15:47 jkroon joined #gluster
15:47 plarsen joined #gluster
16:06 kotreshhr joined #gluster
16:12 kotreshhr left #gluster
16:58 atinm joined #gluster
17:13 nitro3v joined #gluster
17:19 amarts joined #gluster
17:23 jkroon joined #gluster
17:49 msvbhat joined #gluster
17:51 sona joined #gluster
18:19 ahino joined #gluster
18:21 jkroon joined #gluster
18:25 Vapez joined #gluster
19:10 Intensity joined #gluster
19:31 DV joined #gluster
19:35 wiza joined #gluster
19:42 wiza joined #gluster
20:54 taved joined #gluster
20:59 taved I am trying out glusterfs and I have created two bricks. I have 2 subnets in my configuration and I used the 172.16.1.0 subnet for peer probe and replication. I want users to access the gluster service using the 192.168.1.0 subnet. How do
21:00 taved I get gluster to fail over on the 192.168.1.0 network if one of the bricks goes down?
21:15 nitro3v joined #gluster
21:27 JoeJulian taved: Use ,,(hostnames)
21:27 glusterbot taved: Hostnames should be used instead of IPs for server (peer) addresses. By doing so, you can use normal hostname resolution processes to connect over specific networks.
21:29 shaunm joined #gluster
21:31 taved If I use hostnames, how do I prevent the 192.168.1.0 network from doing replication? I was hoping to utilize a private network (172.16.1.0) for the replication portion.
22:03 JoeJulian taved: You would take advantage of split-horizon dns, where the servers resolve the hostname to a 172 address, but the clients resolve to a 192 address.
22:04 JoeJulian taved: but that's really only used when you have a healing instance. Through regular use, the replication happens from the client.
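A minimal sketch of the split-horizon idea JoeJulian describes, done with /etc/hosts rather than real DNS; hostnames, addresses and the volume name are placeholders:

    # /etc/hosts on the gluster servers: peers resolve each other on the 172 net
    172.16.1.11  gluster1
    172.16.1.12  gluster2

    # /etc/hosts on the clients: the same names resolve to the 192 net
    192.168.1.11  gluster1
    192.168.1.12  gluster2

    # probing and mounting both use the hostnames, never raw IPs
    gluster peer probe gluster2
    mount -t glusterfs gluster1:/myvol /mnt/gluster

As JoeJulian notes, with replicated volumes the clients write to every brick themselves, so client traffic to both servers still travels over the 192 network; only server-to-server work such as self-heal stays on the 172 network.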
22:16 serg_k_ joined #gluster
22:21 DJCl34n joined #gluster
22:21 portante_ joined #gluster
22:22 DJClean joined #gluster
22:37 baber joined #gluster
22:37 rwheeler joined #gluster
22:42 Peppard joined #gluster
22:47 primusinterpares joined #gluster
22:50 Reiner030 joined #gluster
22:50 plarsen joined #gluster
22:50 Reiner030 left #gluster
23:15 taved @JoeJulian - ok, so there really is no way to separate the replication traffic onto its own network. So ideally I should just use the public network (192.168.1.0), right?
