
IRC log for #gluster, 2017-09-21


All times shown according to UTC.

Time Nick Message
00:03 shyam joined #gluster
00:03 shyam left #gluster
00:22 farhorizon joined #gluster
00:32 gyadav_ joined #gluster
00:49 farhorizon joined #gluster
00:53 ThHirsch joined #gluster
00:55 nish joined #gluster
00:56 nish any good forum for support?  I'm trying to get this working and all looks fine but nothing's replicating
01:16 susant joined #gluster
01:25 aravindavk joined #gluster
01:32 mahendratech joined #gluster
01:42 david_ joined #gluster
01:53 david_ dear all, I'm currently hitting the following issue on glusterfs 3.5.4 and I hope someone can shed some light: the clients keep losing the connection to one or two particular bricks. The volume is distributed-replicate. I have to kill and restart the affected brick process before the clients work again
01:55 ilbot3 joined #gluster
01:55 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:02 gospod3 joined #gluster
02:45 atinm|brb joined #gluster
02:49 aravindavk joined #gluster
03:03 victori joined #gluster
03:09 ppai joined #gluster
03:11 victori_ joined #gluster
03:19 omie888777 joined #gluster
03:20 susant joined #gluster
03:35 kramdoss_ joined #gluster
03:56 itisravi joined #gluster
03:58 victori joined #gluster
04:01 victori_ joined #gluster
04:12 sunnyk joined #gluster
04:13 aravindavk joined #gluster
04:21 jiffin joined #gluster
04:24 victori joined #gluster
04:27 mlg9000 joined #gluster
04:40 sunnyk joined #gluster
04:43 victori_ joined #gluster
04:45 victori joined #gluster
04:50 ronrib joined #gluster
04:52 sanoj joined #gluster
04:55 atinm|brb joined #gluster
04:56 victori joined #gluster
04:59 victori_ joined #gluster
05:03 mlg9000 joined #gluster
05:04 sanoj joined #gluster
05:08 xavih joined #gluster
05:10 skumar joined #gluster
05:11 nbalacha joined #gluster
05:13 karthik_us joined #gluster
05:19 aravindavk joined #gluster
05:19 dominicpg joined #gluster
05:24 ndarshan joined #gluster
05:25 Chinorro joined #gluster
05:29 msvbhat joined #gluster
05:40 hgowtham joined #gluster
05:47 Prasad joined #gluster
05:54 sanoj_ joined #gluster
05:55 atinm joined #gluster
05:56 kdhananjay joined #gluster
05:57 Saravanakmr joined #gluster
06:21 apandey joined #gluster
06:21 jtux joined #gluster
06:24 apandey_ joined #gluster
06:25 ThHirsch joined #gluster
06:27 overclk joined #gluster
06:30 prasanth joined #gluster
06:32 susant joined #gluster
06:44 atinm joined #gluster
07:05 msvbhat joined #gluster
07:16 Snowmanko joined #gluster
07:17 p7mo joined #gluster
07:27 atinm joined #gluster
07:30 skoduri joined #gluster
07:36 fsimonce joined #gluster
07:43 Shu6h3ndu joined #gluster
07:44 _KaszpiR_ joined #gluster
07:59 subscope joined #gluster
08:00 mbukatov joined #gluster
08:04 _KaszpiR_ joined #gluster
08:05 buvanesh_kumar joined #gluster
08:22 msvbhat joined #gluster
08:28 buvanesh_kumar joined #gluster
08:36 xavih joined #gluster
08:46 poornima_ joined #gluster
08:47 dominicpg joined #gluster
09:01 sanoj_ joined #gluster
09:11 jiffin joined #gluster
09:19 jkroon joined #gluster
09:20 sanoj joined #gluster
09:54 poornima_ joined #gluster
10:10 buvanesh_kumar joined #gluster
10:22 poornima_ joined #gluster
10:32 msvbhat joined #gluster
10:35 saybeano joined #gluster
10:45 foster joined #gluster
11:04 skoduri joined #gluster
11:06 apandey__ joined #gluster
11:06 baber joined #gluster
11:13 nh2 joined #gluster
11:38 rdanter Hi all, I am working on an issue where setting ACLs on files in a gluster volume is causing a high CPU load. This seems to happen only when the Kernel NFS server is running to export the gluster volume to non-gluster aware clients. I suspect the issue is that gluster needs to tell the kernel about file changes (permissions/access times/etc). Is this a good suspicion?
11:39 rdanter Presumably this is to ensure the kernel is not caching out of date info and this would also not be a problem if Ganesha was being used instead??
11:51 apandey_ joined #gluster
11:51 itisravi joined #gluster
11:51 aravindavk joined #gluster
12:00 kdhananjay joined #gluster
12:01 Wizek_ joined #gluster
12:08 kotreshhr joined #gluster
12:08 kotreshhr left #gluster
12:23 buvanesh_kumar joined #gluster
12:25 ndevos rdanter: we don't recommend (or even test) exporting gluster volumes through the kernel nfs-server; people report issues with it, but no (regular) developers are looking to improve it
12:26 ndevos rdanter: the suggestion is to use nfs-ganesha, that is much better tested, and development efforts are redirected there
12:27 rdanter Yeah, I am trying to convince people to use nfs-ganesha but they require an explanation of why using the kernel server is resulting in such high cpu load (when either knfs or ganesha alone don't have the issue)
12:28 rdanter I suspect this is more to do with fuse than glusterfs, but not able to find any docs that say "don't do this because..." so any pointers would be great
12:28 rdanter s/ganesha alone/glusterfs alone/
12:28 glusterbot What rdanter meant to say was: Yeah, I am trying to convince people to use nfs-ganesha but they require an explanation of why using the kernel server is resulting in such high cpu load (when either knfs or glusterfs alone don't have the issue)
12:29 ndevos there used to be a README.nfs file in the libfuse(?) repository, that explained some of the issues with exporting fuse filesystems over nfs
12:30 rdanter ok, thanks, I will see what I can find in the repos
12:30 nbalacha joined #gluster
12:30 ndevos there it is! https://github.com/libfuse/libfuse/blob/master/doc/README.NFS - glusterfs-fuse does not use libfuse, but some of the problems still apply
12:30 glusterbot Title: libfuse/README.NFS at master · libfuse/libfuse · GitHub (at github.com)
12:31 sanoj joined #gluster
12:32 rdanter nothing about performance in there, just what has to be done to enable nfs exporting :(
12:34 ndevos yeah... I remember someone else mentioning problems with acls and fuse+knfsd, moving to nfs-ganesha helped - no idea what the actual problem is though
12:36 ppai joined #gluster
12:36 rdanter it looks like when setting the ACLs there is a lot more work going on when knfs is enabled than when not, cpu load is ~6x and it does not matter if there are any clients or not, so it appears that gluster is sending info to the kernel rather than the other way round
12:36 major joined #gluster
12:37 rdanter my assumption is that it is ensuring the kernel is not caching any of the inodes with acl/timestamp data
12:38 rafi joined #gluster
12:39 ndevos I don't know, you would need to debug the glusterfs/fuse <-> /dev/fuse communication to verify
12:40 rdanter fun :)
12:41 rdanter thanks for your help, it is appreciated, I shall go off and do some more investigation...
12:41 ndevos if you want to dig into it, you probably can use https://github.com/csabahenk/parsefuse
12:41 glusterbot Title: GitHub - csabahenk/parsefuse: fusedump dissector tool (at github.com)
12:45 rdanter thanks! that will be a big help!
13:00 baber joined #gluster
13:13 gospod2 joined #gluster
13:39 skylar joined #gluster
13:40 shaunm joined #gluster
13:45 sanoj joined #gluster
13:49 rafi1 joined #gluster
13:51 skoduri joined #gluster
13:56 WebertRLZ joined #gluster
14:09 subscope joined #gluster
14:10 dtube joined #gluster
14:13 _KaszpiR_ joined #gluster
14:17 kpease joined #gluster
14:27 kotreshhr joined #gluster
14:31 ThHirsch joined #gluster
14:32 skylar joined #gluster
14:37 prasanth joined #gluster
14:39 ic0n joined #gluster
14:41 buvanesh_kumar joined #gluster
14:43 ic0n_ joined #gluster
14:43 plarsen joined #gluster
14:43 farhorizon joined #gluster
14:45 omie888777 joined #gluster
14:47 jobewan joined #gluster
14:51 hmamtora joined #gluster
14:52 dominicpg joined #gluster
15:08 wushudoin joined #gluster
15:12 aravindavk joined #gluster
15:13 dtrainor left #gluster
15:29 buvanesh_kumar joined #gluster
15:36 atrius joined #gluster
15:37 Shu6h3ndu joined #gluster
16:01 jefarr_ joined #gluster
16:13 Snowmanko joined #gluster
16:13 Snowmanko hey
16:14 Snowmanko can anyone here check mailing list gluster-users@gluster.org ?
16:14 Snowmanko i tried to send mails (yes i am subscribed)
16:14 Snowmanko but it seems it gets stuck somewhere
16:15 amye Snowmanko, what am I looking for?
16:15 Snowmanko mail never arrives on the list and there is also no bounce reply from the list at all
16:15 amye Like an email title or an address?
16:15 Snowmanko i am sending from snowmail@gmail
16:15 amye Let me go take a look, I have nothing currently in the spam catchers
16:15 jstrunk joined #gluster
16:16 amye but I know that we have had issues since this weekend with mailing lists
16:16 Snowmanko i can send you times, mmnt
16:16 Snowmanko i tried it twice
16:16 amye http://lists.gluster.org/pipermail/gluster-users/2017-September/thread.html is where I'm starting
16:16 glusterbot Title: The Gluster-users September 2017 Archive by thread (at lists.gluster.org)
16:17 Snowmanko its not in lists, i've already checked it
16:17 Snowmanko Date: Sat, 16 Sep 2017 18:12:47 +0200
16:17 Snowmanko second : Date: Wed, 20 Sep 2017 10:15:33 +0200
16:17 amye How about this, fwd me the email you're trying to send, I'll send it in so that you get it in. amye@redhat.com should work
16:17 Snowmanko ok np, thanks
16:17 amye Sorry for the trouble, but here's a quick workaround
16:18 Snowmanko mail sent, did you have it in your box ?
16:18 amye Not yet. :)
16:21 amye This looks to be resolved: http://lists.gluster.org/pipermail/gluster-users/2017-September/032449.html
16:21 glusterbot Title: [Gluster-users] Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help] (at lists.gluster.org)
16:21 amye http://lists.gluster.org/pipermail/gluster-users/2017-September/032486.html
16:21 glusterbot Title: [Gluster-users] Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help] (at lists.gluster.org)
16:23 Snowmanko great, so it's finally there
16:23 Snowmanko thanks !
16:25 aravindavk joined #gluster
16:27 amye Yes! Sorry for the trouble!
16:27 Snowmanko no problem
16:28 vbellur joined #gluster
16:33 WebertRLZ joined #gluster
16:40 jkroon joined #gluster
16:41 misc mhh, seems lists.gluster is again overloaded :/
16:43 ic0n joined #gluster
17:01 ij_ joined #gluster
17:09 omie888777 joined #gluster
17:17 skylar joined #gluster
17:27 jiffin joined #gluster
17:28 Shu6h3ndu joined #gluster
17:52 eyeball joined #gluster
17:52 amye misc, anything we can do
17:52 amye ?
17:53 Guest26357 Question: What's the difference between a replicated and distributed replicated volume? I'm not understanding the use-case difference.
17:54 msvbhat joined #gluster
17:54 Guest26357 Oh, is the use case that only the DRV can scale to the volume being larger than a single server's capacity?
17:57 bwerthmann joined #gluster
18:05 farhorizon joined #gluster
18:14 ThHirsch joined #gluster
18:15 baber joined #gluster
18:38 omie888777 joined #gluster
19:22 guhcampos joined #gluster
19:34 msvbhat joined #gluster
19:40 Champi joined #gluster
19:43 baber joined #gluster
20:16 baber joined #gluster
20:47 vbellur joined #gluster
20:49 vbellur joined #gluster
20:52 vbellur joined #gluster
20:52 vbellur joined #gluster
20:53 vbellur joined #gluster
20:54 vbellur joined #gluster
20:54 vbellur joined #gluster
20:56 vbellur1 joined #gluster
21:01 baber joined #gluster
21:09 snehring joined #gluster
21:20 kotreshhr joined #gluster
21:24 Kassandry joined #gluster
21:39 msvbhat joined #gluster
21:39 vbellur joined #gluster
21:39 vbellur joined #gluster
21:48 yawkat joined #gluster
21:48 billputer joined #gluster
21:50 decayofmind joined #gluster
21:50 mlg9000 joined #gluster
21:51 n-st joined #gluster
21:54 siel joined #gluster
21:55 samikshan joined #gluster
22:01 Teraii joined #gluster
22:02 Humble joined #gluster
22:02 aronnax joined #gluster
22:15 eclay joined #gluster
22:15 _KaszpiR_ joined #gluster
22:17 eclay I have a 3 node gluster cluster where 2 of the nodes show 3 entries as "possibly undergoing heal", but nothing I do gets those same 3 items out of that state.  I don't see any split-brain issues so I'm at a loss as to what I should try.
22:17 eclay running glusterfs 3.7.20-1.el7 on centos 7.4
22:28 gospod3 joined #gluster
22:33 vbellur joined #gluster
22:36 _KaszpiR_ joined #gluster
22:39 g_work joined #gluster
22:46 nh2 joined #gluster
23:07 farhorizon joined #gluster
23:15 siel joined #gluster
23:21 victori joined #gluster
23:47 gospod2 joined #gluster
