
IRC log for #gluster, 2017-09-11


All times shown according to UTC.

Time Nick Message
01:04 colm4 joined #gluster
01:15 zcourts joined #gluster
01:27 shyu joined #gluster
01:54 ilbot3 joined #gluster
01:54 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:12 shdeng joined #gluster
03:13 DV joined #gluster
03:18 apandey joined #gluster
03:23 apandey_ joined #gluster
03:28 gyadav joined #gluster
03:30 bwerthmann joined #gluster
03:50 itisravi joined #gluster
04:01 kramdoss_ joined #gluster
04:03 gyadav joined #gluster
04:03 nbalacha joined #gluster
04:12 dominicpg joined #gluster
04:23 omie888777 joined #gluster
04:25 winrhelx joined #gluster
04:38 Saravanakmr joined #gluster
04:41 malevolent joined #gluster
04:41 xavih joined #gluster
04:48 jiffin joined #gluster
04:49 skumar joined #gluster
05:03 karthik_us joined #gluster
05:05 lalatenduM joined #gluster
05:07 msvbhat joined #gluster
05:20 susant joined #gluster
05:20 ndarshan joined #gluster
05:28 ppai joined #gluster
05:41 Prasad joined #gluster
05:42 hgowtham joined #gluster
05:44 bkunal joined #gluster
05:45 buvanesh_kumar joined #gluster
05:48 zcourts joined #gluster
05:50 poornima joined #gluster
05:55 prasanth joined #gluster
05:58 apandey__ joined #gluster
06:04 bkunal joined #gluster
06:06 rafi1 joined #gluster
06:08 bkunal joined #gluster
06:18 rastar_ joined #gluster
06:20 ahino joined #gluster
06:22 bkunal joined #gluster
06:31 jtux joined #gluster
06:33 ankitr joined #gluster
06:38 mbukatov joined #gluster
06:40 bkunal joined #gluster
06:41 TBlaar joined #gluster
06:48 buvanesh_kumar joined #gluster
06:49 bEsTiAn joined #gluster
06:49 ivan_rossi joined #gluster
06:53 mrw___ geo-replication has a problem with the ssh-command from master to slave — manual ssh from root@master to replication@slave works as expected (passwordless). Question: Can I manually execute the same ssh-command that the daemon runs (and e.g. add more verbosity)?
07:20 skoduri joined #gluster
07:24 susant joined #gluster
07:27 [diablo] joined #gluster
07:34 msvbhat joined #gluster
07:44 rafi1 joined #gluster
07:55 rwheeler joined #gluster
08:02 bkunal joined #gluster
08:03 _KaszpiR_ joined #gluster
08:04 hgowtham joined #gluster
08:25 bkunal joined #gluster
08:27 _KaszpiR_ joined #gluster
08:35 Chewi mrw___: this is just my 2¢ but the whole passwordless thing isn't critical for running geo-rep, it's only used for the initial setup and you can do the same steps manually. I wish the docs would make this clearer. I'm a little hazy on the details now.
08:36 susant joined #gluster
08:46 buvanesh_kumar joined #gluster
08:49 Chewi so I'm now happy that xfs is working as expected; it's only using a little more space than the original, not 2x like reiserfs
08:50 jkroon joined #gluster
08:50 Chewi this is my first time trying proper gluster as opposed to just geo-rep. I'm doing this over a local LAN connection but one of the systems is very underpowered and client access is unbearably slow. I read that clients need to access both servers even when just reading. I don't understand why that is, can someone explain?
08:51 Chewi given this is a 2-brick replicated setup
08:56 bkunal joined #gluster
09:07 msvbhat joined #gluster
09:16 mrw___ Chewi: My problem is that geo-replication fails with an error on ssh → how can I run this ssh command manually?
09:21 ThHirsch joined #gluster
09:21 mrw___ http://paste.ubuntu.com/25513489/
09:21 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
09:22 mrw___ when I manually call ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-DxzLRb/f58c1428c82bff3bdcec0a495b568c1d.sock replication@raum /nonexistent/gsyncd --session-owner c6ed98d9-483d-4252-b55b-8aaff4d8a59b --local-id .%2Fvar%2Fgluster%2Fvolumes --local-node universum -N --listen --timeout 120 gluster://localhost:replication
09:22 mrw___ then I get: unix_listener: cannot bind to path: /tmp/gsyncd-aux-ssh-DxzLRb/f58c1428c82bff3bdcec0a495b568c1d.sock.ZHSCZlr2v66RqxIe
09:22 mrw___ → So, what can I do to test this ssh-command manually?
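The bind error above comes from the -S / -oControlMaster options: they point at a control-socket directory that only exists while gsyncd itself is running. A minimal sketch for testing just the ssh leg by hand, with those options dropped and verbosity added (same key, port, user and host as in the pasted command; what runs after authentication depends on any command= restriction in the slave user's authorized_keys):

    # run on the master as root
    ssh -v -p 22 \
        -oPasswordAuthentication=no -oStrictHostKeyChecking=no \
        -i /var/lib/glusterd/geo-replication/secret.pem \
        replication@raum
    # the -v output shows whether publickey authentication succeeded; a failure
    # here points at the key or the slave user's authorized_keys rather than gluster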
09:28 msvbhat joined #gluster
09:33 mrw___ has anyone ever managed to get geo-replication running? If so, following which documentation?
09:33 ogelpre left #gluster
09:39 arpu joined #gluster
09:40 gyadav_ joined #gluster
09:44 gyadav__ joined #gluster
09:45 Chewi mrw___: /nonexistent/gsyncd looks bad and I think I ran into this myself
09:46 mrw___ how did you solve it?
09:48 Chewi mrw___: "replication" is your volume name and user name? and "raum" is the host?
09:49 mrw___ yes
09:49 mrw___ raum=slave
09:49 Chewi do this on the master
09:49 Chewi sudo gluster volume geo-replication replication replication@raum::replication config remote_gsyncd /usr/libexec/glusterfs/gsyncd
09:50 mrw___ gsyncd is the command in the .ssh/authorized_keys anyway
09:50 Chewi you should check that /usr/libexec/glusterfs/gsyncd exists on the slave
09:50 mrw___ /usr/libexec/glusterfs/gsyncd does not exist, it's /usr/lib/x86_64-linux-gnu/glusterfs/gsyncd
09:50 Chewi okay enter it as that then
09:51 mrw___ hmm: Volume replication does not exist
09:51 Chewi okay, maybe I've got your setup slightly off but pretty sure that's the step you need
09:52 buvanesh_kumar joined #gluster
09:52 Chewi is the volume on the master called something else?
09:52 mrw___ the volume replication is on the slave, not on the master
09:53 Chewi right but what's the name of the volume you're trying to replicate?
09:53 mrw___ it's the target of the replication, not the source
09:53 mrw___ volumes
09:53 mrw___ sudo gluster volume geo-replication volumes replication@raum::replication config remote_gsyncd  /usr/lib/x86_64-linux-gnu/glusterfs/gsyncd
09:53 Chewi choose less confusing names next time ;)
09:53 Chewi yep
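On the Ubuntu packages used here, gsyncd lives under the multiarch libdir rather than /usr/libexec, so it is worth confirming the path on the slave before setting remote_gsyncd (a sketch using this session's volume and slave names; the two locations are the ones mentioned above):

    # on the slave: see which of the two known locations exists
    ls -l /usr/libexec/glusterfs/gsyncd /usr/lib/x86_64-linux-gnu/glusterfs/gsyncd
    # on the master: point the session at the path that exists
    gluster volume geo-replication volumes replication@raum::replication \
        config remote_gsyncd /usr/lib/x86_64-linux-gnu/glusterfs/gsyncd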
09:54 mrw___ volumes, because it contains docker volumes
09:54 Chewi ah
09:54 mrw___ You see my whole plan and steps here: https://marc.wäckerlin.ch/computer/docker-swarm-and-glusterfs
09:55 mrw___ now the command still hangs …
09:55 mrw___ geo-replication config updated successfully
09:58 mrw___ did not help
09:58 Chewi if it's any consolation, it took me a long time to get geo-rep working too and I ended up using Chef to automate half of it so I wouldn't mess it up in future
09:59 mrw___ http://paste.ubuntu.com/25513584/
09:59 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
09:59 mrw___ yeah, I'm writing my blog :)
10:00 Chewi I would look at logs on the slave side now, it should at least be starting gsyncd
10:30 mrw___ Chewi, all slave logs: http://paste.ubuntu.com/25513831/
10:30 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
10:32 mrw___ line 60?
10:32 Chewi Datei oder Verzeichnis nicht gefunden ("file or directory not found") :)
10:32 Chewi I looked that up :P
10:32 mrw___ file or path not found
10:32 mrw___ what file or path?
10:33 Chewi I don't know what file it's complaining about though. is /usr/sbin/gluster on the slave?
10:33 mrw___ yes
10:33 Chewi dunno then
10:36 mrw___ Does it want to mount volume replication to the path from gluster-mountbroker setup?
10:36 mrw___ sudo gluster-mountbroker setup /var/replication replication
10:36 mrw___ should user replication own that path?
10:38 Chewi mrw___: I think that command does everything necessary, no need to manually change paths
10:39 mrw___ But still it does not work, so something is missing …
10:39 prasanth joined #gluster
10:40 Chewi oh you do need to do "gluster-mountbroker add replication replication"
10:41 jtux joined #gluster
10:43 mrw___ gluster-mountbroker status →
10:43 mrw___ | localhost |          UP | /var/replication(OK) | replication(OK) | replication(replication)  |
10:44 Chewi looks ok
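Collected in one place, the mountbroker side of the setup discussed above looks like this (a sketch using the mount root, group and user names from this session; run on the slave, and the glusterd restart is an assumption based on the restart that turned out to be needed later in this log):

    gluster-mountbroker setup /var/replication replication   # mount root and group
    gluster-mountbroker add replication replication          # slave volume and unprivileged user
    gluster-mountbroker status                                # node, mount root and user should all show OK
    systemctl restart glusterd                                # pick up the changed glusterd options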
10:44 mrw___ no, I ran it again, but that was not the problem
10:45 mrw___ is there a way to debug the mountbroker
10:45 Chewi I don't have anything left to suggest
10:46 mrw___ thanks for trying, but I'm still no further… :(
10:47 mrw___ can you check your chef-configuration, Chewi? And perhaps paste it or compare it against my documentation in my blog?
10:48 Chewi Chef just runs the commands that you would normally do manually. I can't look at the actual state of the servers, they're buried in our secure production environment.
10:48 mrw___ could you paste me your chef-scripts, so that I can compare them to what I did?
10:49 mrw___ (without ssh-keys of course ;))
10:49 Chewi it's not that easy to follow and this is internal code, sorry
10:49 mrw___ could you compare it?
10:50 Chewi I've already been looking at it to see if you've missed anything
10:50 mrw___ did I?
10:50 Chewi I don't think so
10:51 mrw___ well, something must be the source of the problem … :(
10:51 mrw___ It seems to be the mountbroker, right?
10:52 Chewi I think it's something else
10:53 mrw___ Hmm, what could it be and how could I trace that down?
10:53 mrw___ There are too many «magic» scripts and processes running…
10:53 Chewi gluster volume geo-replication ... ... create <- this step that sets things up is supposed to check everything is okay. I wonder if it completed correctly given that you had /nonexistent/gsyncd in your config. I've only seen that happen when we skip the passwordless bit.
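In this session's naming, that create step would look like the following (a sketch; push-pem distributes the master's geo-rep public key to the slave so the passwordless part is set up automatically):

    gluster volume geo-replication volumes replication@raum::replication create push-pem
    # and afterwards, to confirm the session exists:
    gluster volume geo-replication volumes replication@raum::replication status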
10:54 mrw___ found something: mount.glusterfs localhost:/var/gluster/replication /var/replication → Mount failed.
10:54 Chewi is this Debian btw?
10:55 apandey_ joined #gluster
10:55 mrw___ ubuntu 16.04 + latest repository from glusterfs 3.12
10:56 Chewi I've only done this on CentOS
10:56 Chewi my latest attempt is Gentoo but that wasn't geo-rep
10:56 mrw___ do I need to set:
10:56 mrw___ sudo gluster volume set replication auth.allow 127.0.0.1
10:57 mrw___ to be able to mount locally?
10:57 Chewi I don't think so, it allows anything by default?
10:57 mrw___ «anything» is too much ;)
10:58 Chewi btw if you think this is bad, after we got it working, we broke it all again when we added TLS ;) got it going eventually
10:59 mrw___ without TLS everything is unencrypted?
11:00 skumar_ joined #gluster
11:00 Chewi I can't remember the details, I think that's more for the clients than geo-rep
11:00 mrw___ yeah, geo-rep is ssh
11:00 Chewi not everything in geo-rep travels over the ssh port though, important to be aware of that
11:00 bfoster joined #gluster
11:00 mrw___ but why can't I mount locally?
11:01 Chewi I'm afraid I've got work to do :(
11:01 foster joined #gluster
11:01 mrw___ probably that's the problem
11:04 mrw___ What is: failed to get the 'volume file' from server
11:05 mrw___ that's the correct command: mount.glusterfs localhost:replication /var/replication
11:05 mrw___ that works
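The difference between this mount and the failing one earlier is the volume specification: mount.glusterfs takes a volume name, not the brick directory, which is why the first attempt could not fetch the volume file (a sketch using this session's names):

    # fails: /var/gluster/replication is the brick path, so no volume file is found
    mount.glusterfs localhost:/var/gluster/replication /var/replication
    # works: <server>:<volume name>
    mount.glusterfs localhost:replication /var/replication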
11:15 Shu6h3ndu joined #gluster
11:23 saltsa joined #gluster
11:25 mrw___ ha Chewi, you know what?
11:25 mrw___ the daemons were not properly running on the slave!
11:26 mrw___ systemctl was unable to start, stop, restart
11:26 mrw___ after killing the gluster-processes, it looks much better now
11:26 mrw___ only error message that remains:
11:26 mrw___ [2017-09-11 11:26:20.439655] E [syncdutils(/var/gluster/volumes):299:log_raise_exception] <top>: master volinfo unavailable
11:27 mrw___ (on server)
11:29 mrw___ But still:
11:29 mrw___ sudo gluster volume geo-replication volumes replication@raum::replication status
11:29 mrw___ universum      volumes       /var/gluster/volumes    replication    replication@raum::replication    N/A           Faulty    N/A             N/A
11:33 jkroon joined #gluster
11:36 shyu joined #gluster
11:37 WebertRLZ joined #gluster
11:45 skoduri joined #gluster
11:53 mrw___ Chewi, yes it works! Thank you very much!
11:56 mrw___ Except restarting the services, probably this was the key:
11:56 mrw___ sudo gluster volume geo-replication volumes replication@raum::replication config remote_gsyncd  /usr/lib/x86_64-linux-gnu/glusterfs/gsyncd
12:08 Chewi :)
12:09 mrw___ Do I have to set up gluster-mountbroker on every slave, or is this automatically distributed, even if I add a new slave?
12:10 Chewi I think it's every slave
12:10 Chewi the mountbroker is what avoids the need to connect as root on each slave
12:10 Chewi (which used to be the only option when I first tried gluster)
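Since gluster-mountbroker only writes local glusterd options, a quick way to check that a given slave has been set up is to look for them in glusterd's own config (a sketch; /etc/glusterfs/glusterd.vol is the usual packaged location and may differ per distribution):

    grep -i mountbroker /etc/glusterfs/glusterd.vol
    # expect a mountbroker-root line and a mountbroker-geo-replication.<user> entry on each slave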
12:12 baojg joined #gluster
12:12 apandey__ joined #gluster
12:13 mrw___ Chewi, are you working for Red Hat?
12:13 Chewi haha, no
12:13 mrw___ or are you just a user?
12:14 Chewi I use gluster geo-rep a little at work. I've just started trying to use regular gluster at home.
12:15 jiffin joined #gluster
12:18 skumar__ joined #gluster
12:18 msvbhat joined #gluster
12:21 cmooney joined #gluster
12:21 mrw___ Chewi, I updated my blog. Could you please check my instructions and give me feedback?
12:21 mrw___ https://marc.wäckerlin.ch/computer/docker-swarm-and-glusterfs#Geo_Replication
12:22 mrw___ I rearranged and simplified a lot … :)
12:24 cmooney Hello, all.  Could anyone in the community tell me if there has been any discussion about this issue?  https://github.com/gluster/glusterfs/issues/230
12:24 glusterbot Title: Replace MD5 usage to enable FIPS support · Issue #230 · gluster/glusterfs · GitHub (at github.com)
12:24 Chewi mrw___: I'd swap the "start" and "config remote_gsyncd" lines
12:24 mrw___ (y)
12:24 cmooney It is a real sticking point for a customer of ours.
12:25 atinmu joined #gluster
12:26 mrw___ Chewi, now it looks quite easy and straight-forward… ;)
12:26 Chewi like it never happened
12:26 mrw___ yes, so strange :)
12:27 mrw___ now I only need to set up the replication vice-versa, with volumes on raum to replication on universum. :)
12:27 mrw___ Let's see if my documentation produces stable results. :)
12:32 mrw___ updated blog with tip to restart gluster processes in case of error
12:35 bkunal joined #gluster
12:43 cmooney Did I ask my question on the wrong channel, or is no one here familiar with this issue?
12:44 cmooney Should I have asked on the dev channel?
12:45 Chewi cmooney: no one but mrw___ and I have spoken here in the last 12 hours or so
12:46 Chewi it's quiet
12:46 mrw___ cmooney, this channel is very low traffic. I had to repeat my questions often until I got an answer. good luck ;)
12:46 mrw___ and on the weekend, this channel was absolutely dead...
12:47 cmooney Ah, OK :-)  Thank you for letting me know.  I appreciate your help!
12:51 mrw___ is there a production-ready web dashboard to monitor gluster / geo-replication status?
12:59 baber joined #gluster
12:59 jstrunk joined #gluster
13:00 Euroclydon joined #gluster
13:05 shyam joined #gluster
13:11 baojg joined #gluster
13:14 bkunal_ joined #gluster
13:17 Wayke91_ joined #gluster
13:19 plarsen joined #gluster
13:25 bkunal__ joined #gluster
13:37 skylar joined #gluster
13:44 kramdoss_ joined #gluster
13:45 msvbhat joined #gluster
13:48 cmooney Hi, re-asking this question as it appears more folks are around.  Could anyone in the community tell me if there has been any discussion about this issue?  https://github.com/gluster/glusterfs/issues/230
13:48 glusterbot Title: Replace MD5 usage to enable FIPS support · Issue #230 · gluster/glusterfs · GitHub (at github.com)
13:49 cmooney I have a client who is keen to have this capability
13:56 saali joined #gluster
14:00 nbalacha joined #gluster
14:04 jtux joined #gluster
14:05 Brian_M joined #gluster
14:34 susant joined #gluster
14:34 glustin joined #gluster
14:35 ic0n joined #gluster
14:38 Brian_M Hello gluster chat. Is this the right room to get help with an obscure issue?
14:42 jiffin joined #gluster
14:52 nbalacha Brian_M, go ahead.
14:52 Prasad_ joined #gluster
15:00 MrAbaddon joined #gluster
15:00 shyu joined #gluster
15:00 ic0n joined #gluster
15:02 snehring joined #gluster
15:03 dominicpg joined #gluster
15:03 Brian_M Hi nbalacha. I have a meeting in 3 min I have to run to but I'll be back right after.      We're running gluster version 3.7.2 on 2 nodes with one share, replicated.  We deleted a bunch of files directly from the brick of the primary node, not knowing we shouldn't have. We believe previous administrators deleted files like this from both nodes at some point in the past as well, it's a mess.
15:03 Brian_M There's tons of space being used by un-linked files in .glusterfs . All newer versions seem to auto-fix this but 3.7.2 does not. Is there any way to fix this without any downtime? We initially thought we could find any files in .glusterfs (excluding health files and a few directories) with only one hard-link and remove those, but can't find any confirmation that that's not a bad idea. I can't
15:03 Brian_M find any clear documentation on exactly what healing does, so we're not entirely confident in using it.
15:08 nbalacha Brian_M, that is something the folks working on the replication module should confirm
15:08 nbalacha let me see if I can find someone
15:09 mbrandeis joined #gluster
15:13 shaunm joined #gluster
15:13 Brian_M Thanks nbalacha!
15:17 msvbhat joined #gluster
15:27 ic0n joined #gluster
15:31 fsimonce joined #gluster
15:31 baber joined #gluster
15:35 susant joined #gluster
15:45 jiffin joined #gluster
15:46 gyadav__ joined #gluster
15:47 vbellur joined #gluster
15:49 kramdoss_ joined #gluster
15:51 omie888777 joined #gluster
15:58 kpease joined #gluster
16:01 baber_ joined #gluster
16:11 ivan_rossi left #gluster
16:18 gyadav joined #gluster
16:26 MrAbaddon joined #gluster
16:26 kraynor5b__ joined #gluster
16:48 Brian_M Is there anyone here familiar with the replication module that can help with an odd issue in v 3.7.2 ?
16:53 humblec joined #gluster
16:58 baber_ joined #gluster
17:00 MrAbaddon joined #gluster
17:03 tg2 joined #gluster
17:13 msvbhat joined #gluster
17:16 Guest1004 joined #gluster
17:16 Guest1004 Hello all
17:17 Guest1004 Has anyone encountered gluster 3.10 failing to rebalance when the bricks are ZFS?
17:29 Guest1004 left #gluster
17:32 winrhelx joined #gluster
17:33 shaunm joined #gluster
17:50 vbellur joined #gluster
18:18 MrAbaddon joined #gluster
18:19 _KaszpiR_ joined #gluster
18:20 dgandhi joined #gluster
18:25 _KaszpiR_ joined #gluster
18:35 mbrandeis joined #gluster
18:53 bluenemo joined #gluster
18:54 Brian-M joined #gluster
18:56 rastar_ joined #gluster
19:01 cliluw joined #gluster
19:21 jbrooks joined #gluster
19:23 ic0n joined #gluster
19:41 kotreshhr joined #gluster
19:51 shaunm joined #gluster
20:09 Brian-M Is there anyone here familiar with the replication module that can help with an odd issue in v 3.7.2 ?
20:13 amye Brian-M, you may have better luck on the gluster-users mailing lists. A great majority of the developers are based out of IST, so it's 1:45am for them. :)
20:16 glusterbot` joined #gluster
20:16 baber_ joined #gluster
20:16 TBlaar2 joined #gluster
20:17 d-fence joined #gluster
20:18 fabianvf_ joined #gluster
20:18 AppStore_ joined #gluster
20:18 PotatoGim_ joined #gluster
20:18 d-fence___ joined #gluster
20:19 glustin_ joined #gluster
20:20 n-st- joined #gluster
20:20 yawkat` joined #gluster
20:21 Brian-M amye - thanks!
20:22 Limebyte joined #gluster
20:22 thatgraemeguy_ joined #gluster
20:22 thatgraemeguy_ joined #gluster
20:22 rideh- joined #gluster
20:23 amye Apologies for timezones?
20:23 lkoranda_ joined #gluster
20:23 primehaxor joined #gluster
20:23 primehaxor joined #gluster
20:24 rastar_ joined #gluster
20:24 ic0n joined #gluster
20:24 JoeJulian Brian-M: Maybe explain the issue, too
20:25 JoeJulian Oh, scroll back Joe....
20:25 amye I wasn't going to say it...
20:26 JoeJulian I skipped it this morning when I saw that nbalacha was replying without reading the contents.
20:26 amye hee
20:28 JoeJulian Brian-M: Yes with two caveats. One, only do files, not symlinks and only under .glusterfs/[0-9a-f][0-9a-f]/. Two, do not do files that are mode 1000 which have the trusted.dht-linkto extended attribute set.
20:28 kkeithle joined #gluster
20:29 JoeJulian - though two is probably safe to do as well; if you can avoid it, that's better.
20:30 varesa- joined #gluster
20:31 WebertRLZ joined #gluster
20:33 ndk_ joined #gluster
20:33 portante joined #gluster
20:34 aronnax joined #gluster
20:36 dijuremo joined #gluster
20:38 portdirect joined #gluster
20:59 Brian-M JoeJulian: Thanks! Would something along the lines of "find .glusterfs/[0-9a-f][0-9a-f]/. -links 1 -not -perm 1000 -delete" work?
21:00 Brian-M also with "-type f"
21:10 jkroon joined #gluster
21:11 major joined #gluster
21:16 Chewi I'm trying nfs-ganesha but all I ever get is "Unknown error 524"
21:16 Chewi google doesn't reveal much about that
21:26 baber_ joined #gluster
21:44 JoeJulian Brian-M: That's what I've done
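Because that delete is irreversible, a dry run first is cheap insurance (a sketch based on the command above; /path/to/brick is a placeholder for the actual brick root, and -perm 1000 matches the sticky-bit-only mode of the linkto files mentioned earlier):

    cd /path/to/brick   # placeholder: the brick directory itself, not the client mount
    # list candidates only: regular files under .glusterfs/xx/ with a single hard link
    find .glusterfs/[0-9a-f][0-9a-f] -type f -links 1 -not -perm 1000 -print
    # re-run with -delete in place of -print once the list looks right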
21:44 JoeJulian Chewi: where?
21:45 Chewi JoeJulian: when I try nfs4, it fails to mount with that. when I try nfs3, it mounts but any access like ls errors with that
21:46 JoeJulian Chewi: So what I'm understanding is that the nfs client is where that error is coming from? I'm also assuming this is a linux kernel that is the client.
21:46 Chewi JoeJulian: yeah it's all linux, just to localhost actually
21:47 JoeJulian #define ENOTSUPP 524 /* Operation is not supported */
21:47 JoeJulian Well that's not all that helpful.
21:47 Chewi hmm
21:49 Chewi JoeJulian: I'm only trying this at all because I hoped it wouldn't have the same performance issue as the FUSE client. this is my first time trying gluster proper (only used geo-rep before) and I was surprised that client reads need to contact remote bricks. why is that? I can understand write being synchronous but expected reads to be local.
21:50 JoeJulian What if you have a file that's out of sync? You probably don't want the stale copy.
21:51 Chewi gluster talks a lot about data centre use cases (and I do that kind of stuff at work) but this is just for home between my desktop PC and an arm box that is seriously struggling to keep up
21:51 JoeJulian So the client connects to all the bricks and does a lookup(). Each server responds with some state data and if one (or more) of them is dirty, logic happens to ensure the client gets good data and a self-heal is triggered in the background.
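If the main pain is which replica serves the reads (as opposed to the lookup, which still goes to every brick), AFR does expose a knob for preferring the local copy; in some releases it already defaults to on, and whether it helps on a setup this unbalanced is an assumption, but it is cheap to check (a sketch; <volname> is a placeholder):

    gluster volume get <volname> cluster.choose-local   # check the current value first
    gluster volume set <volname> cluster.choose-local on
    # only biases where read requests go; the lookup described above still contacts all bricks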
21:52 Chewi I'm only likely to use one system at a time, even if they're both up, so I don't care too much about short term staleness
21:52 JoeJulian That's beyond the design considerations they had in mind when they built this.
21:53 JoeJulian Perhaps something more asynchronous would suit your use case better?
21:53 Chewi that's unfortunate. it doesn't seem like such a great leap from where we are now but what do I know? :)
21:53 JoeJulian Clustered filesystems are complex.
21:53 Chewi asynchronous and bi-directional don't seem to mix well
21:54 Chewi I even thought about overlaying the local brick for read access over the top of the gluster mount. probably a bad idea though. ;)
21:54 JoeJulian You can just read from the brick without any state guarantees.
21:55 Chewi I'll try overlaying, worth a shot at least
21:58 Chewi JoeJulian: one more thing, I think I read somewhere that the NFS client doesn't do this staleness check. is that true?
21:59 JoeJulian I think that's not true with nfsv4. If I understand correctly, it has the notion of cache invalidation.
21:59 Chewi ok thank you
22:00 JoeJulian Meaning if a file is changed on the server, the server can notify the client to kick out the cache.
22:01 bwerthmann joined #gluster
22:07 major joined #gluster
22:08 Wayke91_ left #gluster
22:33 varesa joined #gluster
22:35 vbellur joined #gluster
22:37 vbellur1 joined #gluster
22:39 vbellur1 joined #gluster
22:40 vbellur joined #gluster
23:12 XpineX joined #gluster
23:21 mbrandeis joined #gluster
23:32 plarsen joined #gluster
23:35 winrhelx joined #gluster
23:37 Gambit15 joined #gluster
