IRC log for #gluster, 2017-06-10


All times shown according to UTC.

Time Nick Message
00:03 farhorizon joined #gluster
00:15 jkroon joined #gluster
01:07 ldiamond joined #gluster
01:20 kpease joined #gluster
01:20 kramdoss_ joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:38 bowhunter joined #gluster
03:35 jkroon joined #gluster
03:55 riyas joined #gluster
04:03 jkroon joined #gluster
04:35 jkroon joined #gluster
04:45 jkroon joined #gluster
04:50 dwargo joined #gluster
05:29 sanoj|afk joined #gluster
05:32 wtaq joined #gluster
05:39 riyas joined #gluster
05:58 wtaq Hi, new to Gluster and NAS. Mounting host:/volume on my laptop via either FUSE or NFS shows only the files located on one of our five 10TB servers in a RAID 0-style setup, while other clients (e.g. Kubernetes pods) can see all files. `df` shows 10TB available vs. 50TB, respectively
06:11 wtaq We noticed this a day after seeing behavior similar to https://lists.gnu.org/archive/html/gluster-devel/2013-05/msg00033.html. We have another (barely used) Gluster cluster with the same config that still appears as 50TB to all clients
06:31 susant joined #gluster
07:04 vbellur joined #gluster
07:21 pcdummy joined #gluster
07:23 pcdummy joined #gluster
07:24 pcdummy joined #gluster
07:38 jkroon joined #gluster
08:04 om2 joined #gluster
08:48 Wizek_ joined #gluster
09:06 susant joined #gluster
09:27 sona joined #gluster
09:36 vbellur joined #gluster
09:53 jiffin joined #gluster
10:02 jiffin joined #gluster
10:09 jiffin joined #gluster
10:10 jiffin1 joined #gluster
10:11 sona joined #gluster
10:35 gem joined #gluster
10:36 sona joined #gluster
12:28 zcourts joined #gluster
12:31 dwargo left #gluster
12:57 gem joined #gluster
13:08 Alghost joined #gluster
13:15 zcourts joined #gluster
13:19 kramdoss_ joined #gluster
13:26 jkroon joined #gluster
13:28 bluenemo joined #gluster
13:46 ashka joined #gluster
14:29 jiffin joined #gluster
15:03 jiffin1 joined #gluster
15:11 jiffin1 joined #gluster
15:42 jiffin joined #gluster
15:43 bluenemo joined #gluster
15:47 jiffin joined #gluster
16:17 jiffin joined #gluster
16:39 jarbod joined #gluster
16:56 jkroon joined #gluster
17:12 farhorizon joined #gluster
17:25 wtaq joined #gluster
17:26 wtaq Just to follow up here: one of the five servers had lots of "Transport endpoint not connected" errors in its NFS logs. Running `gluster volume start gv0 force` on it seems to have fixed the mount issue for both FUSE and NFS, at least for the moment. I should note that throughout all of this, `gluster volume status` looked fine on all hosts
17:35 ldiamond joined #gluster
18:11 kpease joined #gluster
18:45 jkroon joined #gluster
19:00 armyriad joined #gluster
19:05 jkroon joined #gluster
19:23 Wizek_ joined #gluster
19:25 om2 joined #gluster
19:26 jkroon joined #gluster
19:46 jkroon joined #gluster
20:12 Karan joined #gluster
21:15 jkroon joined #gluster
21:42 ldiamond joined #gluster
21:56 jkroon joined #gluster
22:12 jkroon joined #gluster
22:24 zcourts joined #gluster
22:25 zcourts joined #gluster
22:40 k0nsl joined #gluster
23:44 jkroon joined #gluster
