IRC log for #gluster, 2016-11-02


All times shown according to UTC.

Time Nick Message
00:25 Klas joined #gluster
00:45 Alghost joined #gluster
00:45 Rivitir joined #gluster
00:50 Rivitir JoeJulian, I messed with the oplocks, but so far it's not changing anything. Similar to PatNarciso I'm only getting 40mbs through samba on fuse.
00:58 shdeng joined #gluster
01:06 rafi joined #gluster
01:10 Alghost joined #gluster
01:17 Alghost joined #gluster
01:17 Alghost_ joined #gluster
01:24 klaas joined #gluster
01:45 daMaestro joined #gluster
02:04 haomaiwang joined #gluster
02:10 marbu joined #gluster
02:10 csaba joined #gluster
02:18 arpu joined #gluster
02:21 derjohn_mobi joined #gluster
03:11 mchangir joined #gluster
03:25 magrawal joined #gluster
03:33 nbalacha joined #gluster
03:40 msvbhat joined #gluster
03:46 kramdoss_ joined #gluster
04:01 riyas joined #gluster
04:05 kdhananjay joined #gluster
04:10 shubhendu joined #gluster
04:15 hgowtham joined #gluster
04:16 Muthu joined #gluster
04:40 karthik_us joined #gluster
04:40 ankitraj joined #gluster
05:00 derjohn_mob joined #gluster
05:05 nishanth joined #gluster
05:07 nishanth joined #gluster
05:10 ankitraj joined #gluster
05:10 RameshN joined #gluster
05:11 hgowtham joined #gluster
05:14 Javezim Hey Team, I created a test lab today where I set up 4 Machines, each with 2 Bricks in a Dist-Replica. I then added a 5th Machine as the "Arbiter" - http://paste.ubuntu.com/23414762/
05:14 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
05:14 Javezim gluster volume add-brick gv0 replica 3 arbiter 1 g5:/data/gv0/arbiter1/brick1 g5:/data/gv0/arbiter1/brick2 g5:/data/gv0/arbiter1/brick3 g5:/data/gv0/arbiter1/brick4
05:14 Lee1092 joined #gluster
05:14 Javezim However the arbiters are not up - http://paste.ubuntu.com/23414764/
05:14 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
05:15 Javezim I've stopped the volume, restarted the service, still not showing
05:15 ndarshan joined #gluster
05:16 Javezim Anyone know how I can get the Arbiters online?
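A minimal sketch of the usual checks when bricks refuse to come online, using the gv0 volume and brick paths from the paste above (the brick log filename is derived from the brick path, so treat it as an assumption):

    # confirm which bricks glusterd knows about and which brick processes are running
    gluster volume info gv0
    gluster volume status gv0

    # on the arbiter host, the brick log usually says why the brick process failed to start
    less /var/log/glusterfs/bricks/data-gv0-arbiter1-brick1.log

    # respawn any bricks that are down without touching the ones that are up
    gluster volume start gv0 force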
05:30 ashiq joined #gluster
05:31 itisravi joined #gluster
05:31 jiffin joined #gluster
05:49 shruti joined #gluster
05:50 ppai joined #gluster
05:51 haomaiwang joined #gluster
05:56 kotreshhr joined #gluster
05:59 haomaiwang joined #gluster
05:59 jkroon joined #gluster
06:07 Muthu joined #gluster
06:08 karnan joined #gluster
06:12 msvbhat joined #gluster
06:13 Bhaskarakiran joined #gluster
06:19 Philambdo joined #gluster
06:21 devyani7_ joined #gluster
06:25 mhulsman joined #gluster
06:27 ashiq joined #gluster
06:28 devyani7_ joined #gluster
06:34 ndarshan joined #gluster
06:38 arc0 joined #gluster
06:45 mhulsman joined #gluster
06:46 skoduri joined #gluster
06:47 ashiq joined #gluster
06:51 hchiramm joined #gluster
06:53 ndarshan joined #gluster
06:53 poornima joined #gluster
06:55 ankitraj joined #gluster
07:21 fsimonce joined #gluster
07:24 bluenemo joined #gluster
07:30 jtux joined #gluster
07:36 ankitraj joined #gluster
07:49 mchangir joined #gluster
07:51 hchiramm_ joined #gluster
08:01 derjohn_mob joined #gluster
08:07 rouven joined #gluster
08:09 pcaruana joined #gluster
08:25 flying joined #gluster
08:27 PaulCuzner joined #gluster
08:32 ivan_rossi joined #gluster
08:36 mss joined #gluster
08:38 ndarshan joined #gluster
08:38 riyas joined #gluster
08:40 karthik_us joined #gluster
08:43 loadtheacc joined #gluster
08:49 msvbhat joined #gluster
08:52 hchiramm__ joined #gluster
08:53 [diablo] joined #gluster
09:00 vinurs joined #gluster
09:05 ankitraj joined #gluster
09:05 hgowtham joined #gluster
09:07 Debloper joined #gluster
09:09 ankitraj joined #gluster
09:10 owlbot joined #gluster
09:11 om2 joined #gluster
09:12 derjohn_mob joined #gluster
09:13 PaulCuzner joined #gluster
09:14 ahino joined #gluster
09:16 panina joined #gluster
09:18 Muthu joined #gluster
09:20 rastar joined #gluster
09:32 abyss^ Javezim: maybe you will find something in logs?
09:34 Jules-2 joined #gluster
09:52 hchiramm_ joined #gluster
09:55 jiffin joined #gluster
09:56 kdhananjay joined #gluster
10:18 derjohn_mob joined #gluster
10:22 mchangir joined #gluster
10:34 TvL2386 joined #gluster
10:35 jkroon joined #gluster
10:53 hchiramm__ joined #gluster
10:55 MadPsy 'gluster nfs-ganesha enable' appears to be for ganesha HA - is it possible to use ganesha without HA?
10:58 jiffin MadPsy: u can use ganesha without ha
10:58 MadPsy does that mean you don't need to run ^^, given that specifically looks for ganesha-ha.conf
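If the HA machinery isn't wanted, a hand-written export for a single ganesha instance is enough; a minimal sketch assuming a volume named gv0 and the stock /etc/ganesha/ganesha.conf (plain FSAL_GLUSTER syntax, not the 'gluster nfs-ganesha' tooling):

    EXPORT {
        Export_Id = 2;
        Path = "/gv0";
        Pseudo = "/gv0";
        Access_Type = RW;
        FSAL {
            Name = GLUSTER;
            Hostname = localhost;   # any server in the trusted pool
            Volume = "gv0";
        }
    }

    # then start ganesha by hand instead of via 'gluster nfs-ganesha enable'
    systemctl start nfs-ganesha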
10:58 msvbhat joined #gluster
11:00 d0nn1e joined #gluster
11:06 msvbhat joined #gluster
11:25 malevolent joined #gluster
11:31 hackman joined #gluster
11:33 kshlm Weekly community meeting starts in ~30 minutes in #gluster-meeting . If you have a topic to be discussed please add it to the agenda at public.pad.fsfe.org/p/gluster-community-meetings
11:57 panina I have a 3-way replicated gluster that won't re-connect one of the hosts after a reboot.
11:58 panina The rebooted host lists the other nodes as connected, but they list it as disconnected.
11:58 panina Are there any special steps I need to take to bring it back online?
11:59 kshlm Weekly community meeting starts now in #gluster-meeting
12:00 rastar panina: most likely that you did not persist/save the iptable changes
12:01 panina I'm using the iptables-file that came with the glusterfs installation, and it has been restored. But I'll check up on that - there might be something missing.
12:01 jdarcy joined #gluster
12:08 ShwethaHP joined #gluster
12:09 hchiramm joined #gluster
12:09 MadPsy is there a known workaround to prevent the FUSE client from gobbling up memory?
12:09 panina rastar: spot on, the glusterd & mgmt ports weren't open. Thanks a million!
12:10 rastar panina: Happy to help :)
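For reference, the ports that have to stay open between peers and clients; a minimal sketch, with the persistence step depending on the distribution:

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management
    iptables -A INPUT -p tcp --dport 49152:49251 -j ACCEPT   # brick ports, one per brick
    iptables-save > /etc/sysconfig/iptables                  # persist (EL-style layout assumed)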
12:10 panina MadPsy: I'm using GlusterFS with oVirt according to redhat's instructions, and following those we created systemd slices for the glusterd services.
12:10 panina MadPsy: But that might not be a good solution with memory.
12:10 johnmilton joined #gluster
12:11 MadPsy hmm interesting
12:11 MadPsy I'm tempted just to use NFS and lose HA until I get ganesha working
12:14 rastar MadPsy: What is the workload and how much memory is being consumed?
12:15 rastar MadPsy: you can tweak perf xlators in client side as per workload to reduce memory usage
12:15 MadPsy within 4 hours it's now up at 1.050g resident
12:16 MadPsy workload was a 'find' in the volume
12:17 rastar MadPsy: if there was also a cat of many files, then those read pages are cached in memory
12:17 rastar MadPsy: it is useful for web server use-cases where files are re-read often
12:18 rastar MadPsy: if that is not what you want, you might try switching off io-cache xlator
12:18 MadPsy surely up to the value of performance.cache-size
12:18 MadPsy (which is 512M)
12:19 mhulsman joined #gluster
12:20 MadPsy when I was tweaking performance.io-cache I was thinking these were server side, but given the web servers themselves use NFS and only my 'management' box uses the FUSE client I'm thinking I got it wrong
12:22 rastar MadPsy: yes, io-cache is loaded in the FUSE mount process  or NFS-Server process
12:22 MadPsy ah so it does affect the NFS server side
12:22 MadPsy need to be careful then
12:23 rastar MadPsy: I am not sure about NFS-Ganesha
12:23 MadPsy not using that just now (that's a whole other story) just the built in one
12:24 MadPsy basically in this test there's 2 bricks, one on each server and each server has itself mounted over NFS
12:25 rastar MadPsy: jiffin do we load perf xlators in gNFS?
12:27 rastar MadPsy: for now, you could probably reduce cache size back to 32MB default if 512 MB is high
12:27 MadPsy good idea
12:28 MadPsy I guess other stuff is being cached to make up the ~1G it's using just now
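A minimal sketch of those two tweaks, assuming the volume is called gv0:

    gluster volume set gv0 performance.cache-size 32MB   # back to the io-cache default
    gluster volume set gv0 performance.io-cache off      # or drop io-cache from the client graph entirely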
12:35 rastar MadPsy: On the mount point we have something similar to /proc
12:36 rastar MadPsy: you could do 'cat $MNT/.meta/graphs/active/xcube-io-cache/meminfo'
12:36 rastar MadPsy: where xcube should have been $VOL
12:37 MadPsy ooh
12:38 MadPsy https://paste.fedoraproject.org/468110/47809030/
12:38 glusterbot Title: #468110 • Fedora Project Pastebin (at paste.fedoraproject.org)
12:46 rastar MadPsy: that is close to 41MB I think
12:46 rastar MadPsy: I usually do 'find .meta/graphs/active/* -name meminfo -exec grep ^size {} \; | awk '{sum+=$3} END {print sum}''
12:46 MadPsy nice one liner
12:46 rastar MadPsy: that should be close to resident memory usage if you have not done as many vol sets
12:47 rastar MadPsy: I don't have a simple way of figuring out the highest usage xlator than going into each
12:47 MadPsy so that's 782MB which I guess is close to RSS
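rastar's one-liner spelled out against a real mount point (the path is hypothetical):

    MNT=/mnt/gv0   # FUSE mount point
    find "$MNT"/.meta/graphs/active/* -name meminfo \
        -exec grep '^size' {} \; | awk '{sum += $3} END {print sum}'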
12:48 MadPsy thanks for your help, appreciated
12:49 rastar MadPsy: no problem, play with various options to tweak it to your workload.
12:49 * MadPsy will have a busy day today
12:50 R0ok_ joined #gluster
12:54 R0ok__ joined #gluster
12:56 jiffin rastar: sorry I was away
12:58 jiffin MadPsy, rastar: only write behind is present in nfs-server graph
12:59 rastar jiffin: thanks!
12:59 MadPsy cheers
13:06 shyam joined #gluster
13:14 ppai joined #gluster
13:19 suliba joined #gluster
13:24 squizzi_ joined #gluster
13:24 karnan joined #gluster
13:29 hagarth joined #gluster
13:31 suliba joined #gluster
13:32 jiffin joined #gluster
13:33 skylar joined #gluster
13:45 suliba joined #gluster
13:46 Debloper joined #gluster
13:56 juo joined #gluster
13:59 juo Hello, is it necessary to trigger self heal on a dispersed volume after a node has been offline and comes back online in a dispersed 12 (8+4) configuration?
14:08 arc0 joined #gluster
14:10 hagarth joined #gluster
14:18 kotreshhr joined #gluster
14:35 nbalacha joined #gluster
14:45 jimcoz joined #gluster
14:45 farhorizon joined #gluster
14:46 jimcoz hello, any reasons why we would put a brick under a Virtual Volume with LVM ?
14:46 farhoriz_ joined #gluster
14:48 kkeithley if you use thinp on the LVM you can grow the brick later if you want. Or reserve space for snapshots
14:51 luizcpg joined #gluster
14:52 nblanchet joined #gluster
14:55 bfoster joined #gluster
14:56 hagarth joined #gluster
14:56 nbalacha joined #gluster
14:57 ndk_ joined #gluster
14:58 Creeture joined #gluster
14:59 skoduri joined #gluster
14:59 Creeture I'm setting up a new 4-node gluster ha cluster. The gluster_shared_storage volume comes online with 3 bricks under /var/lib/glusterd/ss_brick/ - is that correct or should there be a brick on all 4 nodes? I'm thinking that the 3 is probably correct because that's fine for quorum.
15:01 rafi joined #gluster
15:04 skoduri Creeture, yes..that's correct
15:04 skoduri For >=3 nodes, shared volume will be replica-3 volume
15:04 Creeture Cool.
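For context, gluster_shared_storage is the volume created by the cluster-wide shared-storage option; a sketch of how it is usually switched on and inspected:

    gluster volume set all cluster.enable-shared-storage enable
    gluster volume info gluster_shared_storage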
15:08 farhorizon joined #gluster
15:09 jimcoz how we could have a client performing a transparent failover in case of a glusterfs server node failure ?
15:12 siel joined #gluster
15:13 wushudoin joined #gluster
15:14 shyam joined #gluster
15:15 Creeture Everything works except for my VIPs. pengine says it can't force the cluster IP away from any of the hosts
15:15 Creeture Any idea where to troubleshoot that?
15:16 mhulsman joined #gluster
15:16 JoeJulian jimcoz: that's built-in if you're using the fuse client or client library (gfapi).
15:18 haomaiwang joined #gluster
15:19 haomaiwang joined #gluster
15:22 JoeJulian Creeture: Not sure if this is of any help, but kkeithley did a talk about how pacemaker does that at the Gluster Developer Summit: https://www.youtube.com/watch?v=3mof2XerU6Y
15:23 skoduri Creeture, are those two hosts in different subnets?
15:38 ketan joined #gluster
15:39 ketan Where can I get older (3.6.x) version of gluster docker images?  Didn't find them on docker hub
15:42 kpease joined #gluster
15:45 farhorizon joined #gluster
15:49 JoeJulian afaik, nobody was building docker images back then.
15:51 ketan @JoeJulian, I have used docker image 2 weeks back
15:51 JoeJulian Oh, well then I know nothing. :)
15:51 jimcoz JoeJulian, using the fuse client glusterfs-client rpm  package
15:51 ketan It was tagged 'latest' then
15:52 jimcoz mount type = fuse.glusterfs
15:53 jimcoz version = glusterfs-fuse-3.8.5-1.el7.x86_64
15:58 kramdoss_ joined #gluster
16:03 kotreshhr left #gluster
16:08 shyam joined #gluster
16:11 msvbhat joined #gluster
16:16 JoeJulian jimcoz: It's been built-in since 2.0 so the version doesn't really matter much.
16:16 JoeJulian As long as you have a replicated volume, you have HA.
16:25 RameshN joined #gluster
16:29 devyani7 joined #gluster
16:54 rwheeler joined #gluster
17:01 rwheeler joined #gluster
17:17 squizzi joined #gluster
17:31 jimcoz @joejulian, what do you mean by this ?
17:31 jimcoz It's been built-in since 2.0 so the version doesn't really matter much.
17:32 squizzi_ joined #gluster
17:39 JoeJulian <jimcoz> how we could have a client performing a transparent failover in case of a glusterfs server node failure? <me> that's built-in if you're using the fuse client or client library (gfapi) <jimcoz> ... version = ... <me> It's been built-in since...
17:41 JoeJulian So I was responding to the irrelevant information to try and point out that it was irrelevant.
17:41 JoeJulian And trying to do it in a nice way.
17:43 ivan_rossi left #gluster
17:44 DarylLee joined #gluster
17:48 DarylLee Hey everyone.  I'm getting an I/O error from qemu-kvm when trying to connect to a gluster image file after upgrading Gluster packages to 3.8.5 from 3.8.4.   Rolling back fixes the issue but was curious if anyone had seen this yet?   It's a new error to me:
17:48 DarylLee 2016-11-02T17:27:22.020249Z qemu-kvm: -drive file=gluster://gluster1:24007/opennebula/361d9f69c43ca458f037b8afb23eed5a,if=none,id=drive-ide0-1-0,format=qcow2,cache=none: could not open disk image gluster://gluster1:24007/opennebula/361d9f69c43ca458f037b8afb23eed5a: Could not read L1 table: Input/output error
17:48 DarylLee the gluster volume shows clean of any split brains,  heals or anything abnormal
17:52 JoeJulian DarylLee: That string does not exist in the gluster source code, so it must be coming from somewhere else.
17:52 JoeJulian Did you look for a client log in /var/log/glusterfs?
17:53 ic0n joined #gluster
17:55 DarylLee Yeah, I'm still trying to find what's specifically causing it. I have a feeling gluster is fine, but something in the way libgfapi is interfacing with gluster, I'd imagine. I'll check that log file next
17:56 JoeJulian That error comes from qemu's qcow2 code: https://github.com/qemu/qemu/blob/master/block/qcow2.c#L1073
17:56 glusterbot Title: qemu/qcow2.c at master · qemu/qemu · GitHub (at github.com)
17:56 JoeJulian But it still may be gluster related. Check for that client log and see if there's a clue there.
18:05 mhulsman joined #gluster
18:29 jimcoz it seems to take up to 1 min to fail over with the backupvolfile-server option.. any idea ?
18:30 jimcoz how could I shorten that failover time ?
18:32 msvbhat joined #gluster
18:36 Lee1092 joined #gluster
18:37 JoeJulian ~ping-timeout | jimcoz
18:37 glusterbot jimcoz: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. With an average MTBF of 45000 hours for a server, even just a replica 2 would result in a 42 second MTTR every 2.6 years, or 6 nines of uptime.
18:39 JoeJulian If a TCP connection is properly closed, like killing the brick, the client will disconnect gracefully and no timeout will be noticed.
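The timeout glusterbot describes is a per-volume option; a minimal sketch of inspecting and changing it (volume name hypothetical):

    gluster volume get gv0 network.ping-timeout      # default is 42 seconds
    gluster volume set gv0 network.ping-timeout 42   # only lower this if you accept the trade-off above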
18:40 ahino joined #gluster
18:41 arif-ali Hi, we've had an issue one of our systems, where one of the gluster volumes, has gone into heal, and not sure what caused it. Looking through the glfsheal log file, we have lots of W [dict.c:612:dict_ref] (-->/usr/lib64/glusterfs/3.7.11/xlator/cluster/replicate.so(afr_get_heal_info+0x1bf) [0x7f658dafd54f] -->/lib64/libglusterfs.so.0(syncop_getxattr_cbk+0
18:41 arif-ali x34) [0x7f65a2b35a24] -->/lib64/libglusterfs.so.0(dict_ref+0x79) [0x7f65a2aeb2d9] ) 0-dict: dict is NULL [Invalid argument]
18:41 glusterbot arif-ali: ('s karma is now -165
18:41 rwheeler joined #gluster
18:59 Creeture1 joined #gluster
19:02 squizzi_ joined #gluster
19:23 plarsen joined #gluster
19:28 msvbhat joined #gluster
19:33 DarylLee So on the IO error with qcow2, I upgraded the glusterfs package again and attempted to start a RAW image based VM.  This generates even less of an error.  The only reference I can find is in messages: "journal: failed to initialize gluster connection to server: 'gluster1': Invalid argument"
19:35 johnmilton joined #gluster
19:35 ahino joined #gluster
19:35 DarylLee and downgrading to 3.8.4 everything resumes operating as expected.
19:38 raghu` joined #gluster
19:39 JoeJulian DarylLee: Apparently qemu's default logfile for gfapi is stderr. Are you capturing that somewhere?
19:39 DarylLee I don't believe so
19:40 jns does anyone know what dht.force-readdirp does?
19:41 JoeJulian DarylLee: According to this patch, https://lists.gnu.org/archive/html/qemu-devel/2016-07/msg05439.html, you can specify the logfile with the syntax listed.
19:41 jimcoz Ok thanks JoeJulian...  Is there any issue if I lower this down to 3 secs ?  Our network is very stable.. and it's over the same switch.
19:41 jimcoz (42 second) ping-timeout
19:41 DarylLee thanks ill look at the link
19:41 JoeJulian jimcoz: if it's very stable, then why would you want to do that?
19:44 jimcoz @joejulian, what's the proper way to shut down one node of the cluster ?  I have been shutting down a gluster server node (only one node down at a time) and have been experiencing some timeout glitches.
19:44 JoeJulian Let me guess, ubuntu....
19:44 JoeJulian glusterfsd needs to be killed before the network is stopped
19:44 jimcoz @joejulian i must stop volume first? then os shutdown ?
19:45 JoeJulian Only ubuntu, afaik, stops the network during its shutdown process.
19:45 jimcoz @joejulian i was assuming systemctl was dealing with this part
19:45 JoeJulian with systemd, make sure glusterfsd is enabled.
19:46 JoeJulian It should be a dummy service that ensures the bricks are stopped during shutdown.
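A minimal sketch of both variants on a systemd host; the glusterfsd unit here is the shutdown-ordering shim JoeJulian mentions (assumed to ship with your packages):

    systemctl enable glusterfsd   # ensures brick processes are killed before the network goes down

    # manual equivalent before an OS shutdown
    systemctl stop glusterd
    pkill glusterfsd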
19:48 jimcoz @Joejulian, manually doing systemctl stop glusterd prior to shutting down the OS seems to fix the timeout issue.
19:48 JoeJulian shouldn't
19:48 JoeJulian glusterd is only the management daemon.
19:48 JoeJulian It has nothing to do with ping-timeout.
19:49 jimcoz but probably it advertises itself as down.. so the clients don't connect to it anymore..
19:49 JoeJulian No new connections can be established if that host is specified as the mount server, no, but it has no effect on running mounts.
19:49 JoeJulian @mount server
19:49 glusterbot JoeJulian: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns, or (#2) One caveat is that the clients never learn of any other management peers. If the client cannot communicate with the mount server, that client will not learn of any volume changes.
19:50 JoeJulian If stopping glusterd stops the bricks, you'll eventually be frustrated by that.
19:50 jimcoz @joejulian, i have two servers with replication
19:51 jimcoz i only stop one node.. not both at the same time
19:51 JoeJulian I assumed as much.
19:51 JoeJulian That statement is irrelevant to the information I was attempting to convey.
19:52 JoeJulian @processes
19:52 glusterbot JoeJulian: The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal).
19:52 jimcoz didn't experience any glitches when I manually do systemctl stop glusterd prior to shutting down the OS
19:52 JoeJulian Sounds like a race condition.
19:53 jimcoz @joejulian.. ok..  do I still need a load balancer to deal with transparent failover ?  or are the fuse-glusterfs clients able to transparently follow the healthy nodes automatically ?
19:53 JoeJulian No, a load balancer will completely break gluster.
19:54 jimcoz @joejulian, I saw some documentation recommending usage of a load balancer.. is that outdated documentation ?
19:54 JoeJulian Since the client connects to all the bricks in the volume, it allows the client to continue functioning when a brick is stopped.
19:54 jimcoz is the best practice to use backupvolfile-server= or not ?
19:55 JoeJulian I've never seen anything of the sort in official documentation. I'd be happy to take a look if you've found otherwise.
19:55 JoeJulian I prefer rrdns, but backupvolfile-server works.
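A minimal sketch of both approaches on the client side (hostnames and mount points are hypothetical):

    # fallback volfile server named explicitly
    mount -t glusterfs -o backupvolfile-server=server2 server1:/gv0 /mnt/gv0

    # or point the mount at a round-robin DNS name that resolves to every server
    mount -t glusterfs gluster.example.com:/gv0 /mnt/gv0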
19:56 jimcoz rrdns balances the load and backupvolfile-server does not.. would this be the only difference ? or is there something else
19:56 JoeJulian Neither. They both just provide options when the client is trying to retrieve the volume configuration.
20:01 jimcoz @joejulian, is there a better way to load balance the traffic ? or once a client reaches one node with glusterfsd, does it become aware of all the other "brick" nodes of the same cluster and automatically talk with them to split the load ?
20:05 JoeJulian It can. See "gluster volume set help" look for cluster.read-hash-mode
20:07 jimcoz @joejulian.. great.. that's what I was suspecting.. so no need for load balancers at all unless we are using NFS clients instead of fuse-glusterfs clients ?
20:07 jimcoz is that correct ?
20:07 JoeJulian What you want is for any client that's connecting to any specific file to use the same brick as its read-subvolume. That way you're more likely to be using the disk and system caches effectively.
20:08 JoeJulian correct, though with the HA configuration of ganesha you wouldn't even necessarily want a load balancer for that.
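A minimal sketch of that option (gv0 is hypothetical); value 1 is documented as hashing the read-subvolume by the file's GFID, so every client reads a given file from the same brick:

    gluster volume set help | grep -A 4 read-hash-mode   # see the documented values
    gluster volume set gv0 cluster.read-hash-mode 1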
20:08 jimcoz @joejulian, any reason to create a brick under virtual volumes with the LVM lvcreate -V option ?
20:09 JoeJulian I haven't done block allocation for any of my use cases. It's nice to have options though.
20:10 jimcoz @joejulian i saw some documentation creating brick under LVM with thin + virtual volumes
20:10 jimcoz @joejulian is this a recommended approach ?
20:11 jimcoz example : lvcreate -L 14G -T vg_bricks/brickpool1  (-T = thin provision)
20:11 jimcoz lvcreate -V 3G -T vg_bricks/brickpool1 -n dist_brick1   (-V = virtual volume on top of the thin provision)
20:12 jimcoz any reasons for doing this ?
20:12 JoeJulian You can snapshot them quickly.
20:13 jimcoz with -T i could do that also.. but why also -V just after ?
20:13 JoeJulian In fact, that type of volume is the only one that supports snapshots at this time.
20:13 JoeJulian I don't know. I haven't looked in to that translator at all.
20:13 jimcoz @joejulian, -T = thin.. this is what glusterfs requires for snapshot.. but what about -V ?
20:16 jimcoz @joejulian I'm still scratching my head trying to figure out the advantage of -V
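For what it's worth, plain LVM semantics explain the two steps: -T with -L creates the thin pool, while -T with -V carves a thin logical volume of the given virtual size out of that pool; only the -V volume can hold a filesystem, so it is the actual brick. A sketch of the full sequence those commands usually come from (device and mount names hypothetical):

    pvcreate /dev/sdb
    vgcreate vg_bricks /dev/sdb
    lvcreate -L 14G -T vg_bricks/brickpool1                 # thin pool
    lvcreate -V 3G -T vg_bricks/brickpool1 -n dist_brick1   # thin LV backed by the pool
    mkfs.xfs -i size=512 /dev/vg_bricks/dist_brick1
    mkdir -p /bricks/dist_brick1
    mount /dev/vg_bricks/dist_brick1 /bricks/dist_brick1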
20:17 jimcoz @joejulian, ganesha is for NFS  HA services.. Why would someone install this instead of glusterfs ?  legacy reasons ?  Any performance or other  advantages ?
20:18 JoeJulian We use ganesha to provide HA NFS for gluster volumes. Some use cases require nfs, vmware for example.
20:21 jimcoz @joeJulian.. ok.. but if we have the flexibility to choose either one.. how does the glusterfs performance compare to NFS ?
20:21 JoeJulian The more you learn, the more you'll understand when I start answering everything with "it depends". :)
20:23 JoeJulian The native client has better throughput and can be significantly faster than nfs when you use the api library. NFS can take advantage of some kernel caching at the potential expense of having stale metadata.
20:24 luizcpg joined #gluster
20:26 jimcoz ok great.. thanx for the explanation.. very useful.  have a nice day
20:33 panina joined #gluster
20:46 DarylLee @JoeJulian unfortunately that patch for gfapi logging hasn't been implemented yet.   I suppose I can try to apply the patch and recompile it manually on a test system somewhere and see if I can get any more info.   Anyways, thanks for the help thus far, appreciate it.
21:02 om2 joined #gluster
21:09 PaulCuzner left #gluster
21:38 tom[] joined #gluster
21:44 hchiramm joined #gluster
21:50 johnmilton joined #gluster
21:55 farhoriz_ joined #gluster
22:14 masber joined #gluster
22:18 daryllee joined #gluster
22:22 johnmilton joined #gluster
22:34 haomaiwang joined #gluster
22:38 PatNarciso_ joined #gluster
22:39 PatNarciso_ WhooHoo!  http://blog.gluster.org/2016/10/gluster-tiering-and-small-file-performance/
22:39 glusterbot Title: Gluster tiering and small file performance | Gluster Community Website (at blog.gluster.org)
22:59 luizcpg joined #gluster
23:18 ic0n joined #gluster
23:36 derjohn_mob joined #gluster
23:37 ic0n joined #gluster
23:45 ic0n joined #gluster
23:50 vinurs joined #gluster
