
IRC log for #gluster, 2016-10-31


All times shown according to UTC.

Time Nick Message
00:25 mmckeen joined #gluster
00:32 mmckeen joined #gluster
00:33 PsionTheory joined #gluster
01:05 shdeng joined #gluster
01:05 shdeng joined #gluster
01:07 shdeng joined #gluster
01:28 luizcpg joined #gluster
02:05 haomaiwang joined #gluster
02:17 arpu joined #gluster
02:23 caitnop joined #gluster
02:23 derjohn_mobi joined #gluster
02:33 Gnomethrower joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:48 d0nn1e joined #gluster
03:37 riyas joined #gluster
04:38 haomaiwang joined #gluster
04:41 om2__ joined #gluster
04:50 nbalacha joined #gluster
05:27 gem joined #gluster
05:27 jiffin joined #gluster
05:42 Jacob843 joined #gluster
06:07 skoduri joined #gluster
06:36 Gnomethrower joined #gluster
06:39 armyriad joined #gluster
06:56 armyriad joined #gluster
07:04 mhulsman joined #gluster
07:19 jtux joined #gluster
07:34 jtux joined #gluster
07:39 Philambdo joined #gluster
08:15 jkroon joined #gluster
08:22 mbukatov joined #gluster
08:32 flying joined #gluster
08:32 riyas joined #gluster
08:39 ivan_rossi joined #gluster
08:44 jesk can I use gluster with shared storage (FC) connected to every node as well?
08:47 hackman joined #gluster
08:47 post-factum jesk, explain
08:49 jesk I have shared storage and want to have a filesystem with that accessible from every node
08:49 post-factum jesk, you need to use gfs2 or ocfs for that
08:49 jesk so local storage is fiberchannel on every node
08:50 jesk i hate gfs2
08:50 jesk :)
08:50 jesk so gluster can not just supply the locking without replicating anything
08:52 post-factum jesk, find out what you want first. if you already have shared block storage, you need to use an appropriate fs that supports this mode
08:52 post-factum jesk, of course, you may use clustered lvm, create several volumes for each node and use gluster on top of that
08:53 post-factum jesk, if that is what you want
08:54 post-factum jesk, ocfs should be more pleasant than gfs2
08:55 post-factum jesk, i guess, lustre also can do that
08:55 post-factum jesk, or proprietary shit like veritas
08:56 jesk I tried clvm and gfs2, in my opinion both are highly error-prone
08:56 jesk ocfs2 isnt free anymore
08:57 post-factum jesk, ocfs is in-kernel
08:58 jesk do you have good experience with ofcs?
08:58 post-factum jesk, https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/fs/ocfs2
08:58 glusterbot Title: kernel/git/torvalds/linux.git - Linux kernel source tree (at git.kernel.org)
08:58 post-factum jesk, no experience, just this is the only available solution with no real other options
09:01 jesk I have a few nodes, all running just as libvirt kvm hypervisors, and now I want to back up VMs somewhere
09:01 jesk I thought about just using another LUN as backup storage on the hypervisors
09:01 jesk maybe I need another way of doing this
09:02 jesk don't have that much hardware
09:03 jesk I want each hypervisor to be able to backup/snapshot VMs
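
One way to read post-factum's clustered-LVM suggestion above, as a rough sketch: carve one logical volume per hypervisor out of the shared FC LUN with clustered LVM, format each as a local brick, and let Gluster replicate across them. All names below (the fc_lun device, vg_shared, backupvol, node1/node2, the brick paths) are illustrative assumptions, not taken from this conversation, and clvmd (or lvmlockd on newer stacks) must already be running.

    # on one node: clustered VG on the shared FC LUN, then one LV per hypervisor
    vgcreate --clustered y vg_shared /dev/mapper/fc_lun
    lvcreate -L 500G -n lv_node1 vg_shared
    lvcreate -L 500G -n lv_node2 vg_shared

    # on each node: format and mount only that node's LV as its brick
    mkfs.xfs /dev/vg_shared/lv_node1
    mkdir -p /bricks/backup
    mount /dev/vg_shared/lv_node1 /bricks/backup

    # on one node: build a replicated volume from the per-node bricks
    gluster volume create backupvol replica 2 \
        node1:/bricks/backup/brick node2:/bricks/backup/brick
    gluster volume start backupvol

Each hypervisor would then FUSE-mount backupvol and write its VM backups/snapshots into it.
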
09:17 [diablo] joined #gluster
09:25 panina joined #gluster
09:27 rouven joined #gluster
09:40 gem joined #gluster
09:50 derjohn_mobi joined #gluster
09:55 social joined #gluster
10:07 panina joined #gluster
10:11 ahino joined #gluster
10:49 jtux joined #gluster
10:58 mchangir joined #gluster
11:31 Jacob843 joined #gluster
11:33 ivan_rossi left #gluster
11:35 MadPsy should I be using the ubuntu 16.04 provided gluster packages or is there an official PPA that's recommended ?
11:36 MadPsy I'm assuming it's this: https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.8
11:36 glusterbot Title: glusterfs-3.8 : “Gluster” team (at launchpad.net)
11:37 cloph MadPsy: if ubuntu already comes with gluster 3.8, then I'd say no reason to use ppa
11:37 MadPsy in 16.04 it's 3.7.6
11:38 cloph http://download.gluster.org/pub/gluster/glusterfs/3.8/3.8.5/Ubuntu/ → yeah, you already found the right ppa
11:38 glusterbot Title: Index of /pub/gluster/glusterfs/3.8/3.8.5/Ubuntu (at download.gluster.org)
11:42 kkeithley @ppa
11:42 glusterbot kkeithley: The official glusterfs packages for Ubuntu are available here: 3.6: http://goo.gl/XyYImN, 3.7: https://goo.gl/aAJEN5, 3.8: https://goo.gl/gURh2q
11:42 kkeithley @forget ppa
11:42 glusterbot kkeithley: The operation succeeded.
11:43 kkeithley @learn ppa as The GlusterFS Community packages for Ubuntu  are available here: 3.6: http://goo.gl/XyYImN, 3.7: https://goo.gl/aAJEN5, 3.8: https://goo.gl/gURh2q
11:43 glusterbot kkeithley: The operation succeeded.
11:47 MadPsy thanks
11:48 MadPsy I presume running 3.8 on ubuntu is always recommended over 3.7 for the simple fact it's newer
11:51 cloph if you use geo-replication, then yeah, definitely worth it :-)
11:54 MadPsy I don't yet :) I presume there's been a few critical bugs then
11:56 jiffin joined #gluster
11:57 kkeithley yes, but fixes for critical bugs get backported. The 3.7 ppa has 3.7.16. Whether you use 3.7 or 3.8 — from the PPA or from Ubuntu — is up to you.
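
In practice the PPA route boils down to something like the following on 16.04; the PPA name comes from the launchpad link above, while the glusterfs-server/glusterfs-client package split is the usual packaging and is assumed here rather than quoted from this log.

    sudo add-apt-repository ppa:gluster/glusterfs-3.8
    sudo apt-get update
    sudo apt-get install glusterfs-server      # on the storage servers
    sudo apt-get install glusterfs-client      # clients only need this
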
11:58 kkeithley @repos
11:58 glusterbot kkeithley: See @yum, @ppa or @git repo
11:58 kkeithley @yum
11:58 glusterbot kkeithley: The official community glusterfs packages for RHEL (including CentOS, SL, etc) are available at http://download.gluster.org/pub/gluster/glusterfs/. The official community glusterfs packages for Fedora 19 and later are in the Fedora yum updates (or updates-testing) repository.
11:59 kkeithley @forget yum
11:59 glusterbot kkeithley: The operation succeeded.
12:00 kkeithley @learn yum as The GlusterFS Community Packages for RHEL are available from the CentOS Storage SIG at https://wiki.centos.org/SpecialInterestGroup/Storage
12:00 glusterbot kkeithley: The operation succeeded.
12:01 cloph @debian
12:01 glusterbot cloph: I do not know about 'debian', but I do know about these similar topics: 'files edited with vim become unreadable on other clients with debian 5.0.6'
12:02 cloph haha :-)
12:02 kkeithley @learn zypper as The GlusterFS Community Packages for SuSE (SLES, OpenSuSE, Leap42.x) are in the SuSE Build System. See https://download.gluster.org/pub/gluster/glusterfs/3.x/3.x.y/SuSE/README
12:02 glusterbot kkeithley: The operation succeeded.
12:02 kkeithley @apt
12:02 glusterbot kkeithley: I do not know about 'apt', but I do know about these similar topics: 'afr', 'apc', 'api', 'ask'
12:03 johnmilton joined #gluster
12:03 kkeithley @learn debian as The GlusterFS Community Packages for Debian (wheezy, jessie, stretch) are available from https://download.gluster.org/pub/gluster/glusterfs/
12:03 glusterbot kkeithley: The operation succeeded.
12:04 kkeithley @fedora
12:04 kkeithley @fedora
12:04 kkeithley @debian
12:04 glusterbot kkeithley: The GlusterFS Community Packages for Debian (wheezy, jessie, stretch) are available from https://download.gluster.org/pub/gluster/glusterfs/
12:05 luizcpg joined #gluster
12:05 kkeithley @learn fedora as The GlusterFS Community Packages For Fedora are available from the Fedora Updates repo (use dnf or yum), or from https://download.gluster.org/pub/gluster/glusterfs/
12:05 glusterbot kkeithley: The operation succeeded.
12:07 kkeithley @learn packages as The GlusterFS Community Packages are documented at http://gluster.readthedocs.io/en/latest/Install-Guide/Community_Packages/
12:07 glusterbot kkeithley: The operation succeeded.
12:07 kkeithley @docs
12:07 glusterbot kkeithley: The Gluster Documentation is at https://gluster.readthedocs.org/en/latest/
12:08 kkeithley @ppa
12:08 glusterbot kkeithley: The GlusterFS Community packages for Ubuntu are available here: 3.6: http://goo.gl/XyYImN, 3.7: https://goo.gl/aAJEN5, 3.8: https://goo.gl/gURh2q
12:08 kkeithley @lean ubuntu as The GlusterFS Community packages for Ubuntu are available here: 3.6: http://goo.gl/XyYImN, 3.7: https://goo.gl/aAJEN5, 3.8: https://goo.gl/gURh2q
12:09 cloph +r
12:09 kkeithley @learn ubuntu as The GlusterFS Community packages for Ubuntu are available here: 3.6: http://goo.gl/XyYImN, 3.7: https://goo.gl/aAJEN5, 3.8: https://goo.gl/gURh2q
12:09 glusterbot kkeithley: The operation succeeded.
12:10 kkeithley dwim
12:11 cloph @random
12:11 glusterbot cloph: Error: The command "random" is available in the Dict and Factoids plugins.  Please specify the plugin whose command you wish to call by using its name as a command before "random".
12:11 cloph @factoids random
12:11 glusterbot cloph: "nscd": If your client cannot find one of your servers after it has been brought back up, make sure you restart nscd if you're using it.; "git": git clone https://github.com/gluster/glusterfs.git; "volume-id": 'To set the id on a replaced brick, read it from another brick getfattr -n trusted.glusterfs.volume-id -d -e hex $brick_root and set it on the new brick with setfattr -n trusted.glusterfs.volume-id
12:11 glusterbot cloph: -v $volume_id .'
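
Written out, the volume-id factoid above amounts to the following; the brick paths are placeholders, and the hex id printed by getfattr on a surviving brick is what gets passed to setfattr on the replacement.

    # read the volume id from a brick that is still intact
    getfattr -n trusted.glusterfs.volume-id -d -e hex /bricks/old/brick

    # apply the same id to the replacement brick root before healing
    setfattr -n trusted.glusterfs.volume-id -v 0x<hex-id-from-above> /bricks/new/brick
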
12:28 PatNarciso joined #gluster
12:28 kkeithley @learn rhel as The GlusterFS Community Packages for RHEL are available from the CentOS Storage SIG at https://wiki.centos.org/SpecialInterestGroup/Storage
12:28 glusterbot kkeithley: The operation succeeded.
12:28 kkeithley @learn centos as The GlusterFS Community Packages for RHEL are available from the CentOS Storage SIG at https://wiki.centos.org/SpecialInterestGroup/Storage
12:28 glusterbot kkeithley: The operation succeeded.
12:29 B21956 joined #gluster
12:29 unclemarc joined #gluster
12:30 post-factum kkeithley++ cloph++
12:30 glusterbot post-factum: kkeithley's karma is now 29
12:30 glusterbot post-factum: cloph's karma is now 3
12:32 ira joined #gluster
12:39 DrRetro_ joined #gluster
12:46 DrRetro_ greetings everyone, I've got a pretty big problem starting the glusterd service on a CentOS system; systemctl itself does not describe the _real_ error, and the etc-glusterfs-glusterd.vol.log has too many warnings and errors to find the real problem. anyway, here is a nopaste of the logfile: https://nopaste.me/view/12ee19cd
12:46 glusterbot Title: problems to start the glusterd - Nopaste.me (at nopaste.me)
12:46 gem joined #gluster
12:47 legreffier joined #gluster
12:48 DrRetro_ Could someone see where the problem with the daemon is?
12:50 DrRetro_ /join #centos
12:51 kkeithley why 3.7.6? Is this a new install? 3.7.16 is in the CentOS Storage SIG. Anyway, lines 7, 13, 16, 21 make it look to me like there's a firewall or selinux problem?
12:52 kkeithley Or even 3.8.5
12:53 DrRetro_ let me see
12:53 kkeithley are there any incriminating lines in /var/log/audit.log?
12:59 DrRetro_ kkeithley: the only lines in the audit.log describe this: https://nopaste.me/view/df5ef8f0#4nzpnhdwp2zcBIckRBR6PEIRPwdIztuv
12:59 glusterbot Title: audit.log - Nopaste.me (at nopaste.me)
12:59 DrRetro_ kkeithley: and: selinux and iptables are not enabled on this machine.
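
For reference, a rough checklist for confirming the firewall/selinux angle kkeithley raised, using only standard CentOS tooling; the only gluster-specific item assumed here is the default glusterd log path.

    systemctl status glusterd -l                 # exit status and last few messages
    journalctl -u glusterd --no-pager -n 50
    getenforce                                   # Enforcing / Permissive / Disabled
    iptables -L -n                               # or: firewall-cmd --state
    tail -n 100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
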
13:02 haomaiwang joined #gluster
13:04 haomaiwang joined #gluster
13:05 shyam joined #gluster
13:17 prth joined #gluster
13:24 squizzi joined #gluster
13:25 skylar joined #gluster
13:35 Vaizki joined #gluster
13:45 shubhendu joined #gluster
13:55 rouven joined #gluster
13:56 plarsen joined #gluster
14:01 kramdoss_ joined #gluster
14:03 arcolife joined #gluster
14:11 ivan_rossi joined #gluster
14:19 luizcpg_ joined #gluster
14:24 derjohn_mobi joined #gluster
14:35 arcolife joined #gluster
14:38 prth joined #gluster
14:40 dgandhi joined #gluster
14:45 farhorizon joined #gluster
14:57 wushudoin joined #gluster
14:59 MadPsy does performance.cache-size affect the FUSE client or the bricks themselves ?
14:59 yaya2017 joined #gluster
15:00 MadPsy currently see a FUSE client consuming 1.5GB RAM (resident)
15:01 panina joined #gluster
15:01 yaya2017 Hi All, I know it's not supported, but is it safe to delete data from the brick directory when using a replicated volume?
15:04 yaya2017 I'm facing a strange issue where a directory is not accessible, but at the same time no split-brain entries exist.
15:07 cloph I'd recommend against messing with brick dir before tracking down the problem further.
15:07 cloph so how did you determine that the directory is not accessible/that there is no split-brain?
15:08 yaya2017 cloph: the gluster volume heal <volname> info split-brain command.
15:09 yaya2017 I was in the middle of rm -r </mountpoint>/dir when suddenly I lost the connection, and that directory became inaccessible.
15:10 yaya2017 cloph: I've checked getfattr and both directories have the same gfid.
15:12 Philambdo joined #gluster
15:17 gem joined #gluster
15:20 Gnomethrower joined #gluster
15:24 kpease joined #gluster
15:24 post-factum MadPsy, performance.cache-size affects the FUSE client
15:25 post-factum MadPsy, huge memory consumption of FUSE cliet is well-known issue
15:25 MadPsy that's what I thought - thanks
15:25 post-factum *client
15:25 MadPsy yeah it's not pretty!
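
For reference, cache-size is an ordinary per-volume option; a sketch with a placeholder volume name (gluster volume get is available from 3.7 on, older releases can check gluster volume info instead).

    gluster volume get myvol performance.cache-size      # show the current value
    gluster volume set myvol performance.cache-size 256MB
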
15:26 circ-user-BzYOc joined #gluster
15:26 prth_ joined #gluster
15:29 jiffin joined #gluster
15:29 farhorizon joined #gluster
15:30 farhorizon joined #gluster
15:39 ivan_rossi left #gluster
15:42 rwheeler joined #gluster
16:22 mchangir joined #gluster
16:49 Jules- Hi. Is there any resourceless way to check if the NFS server of the cluster is running or has died, like the Online: Y/N shown when entering the command gluster volume status? I want to check states with keepalived, but TCP is too resource intensive and will flood port 2049, and gluster volume status is very unstable while glusterfs is in trouble. Maybe: rpcinfo -t "127.0.0.1" nfs '3' ??
16:51 cloph not sure about "resourceless" - but I'd say the only reliable way would be to mount it, and try to read/write data from/to it
16:52 cloph maybe the event-driven monitoring in one of the recent talks/presentations would work for that?
16:52 Jules- cloph: hadn't heard of that yet. Is it supported in v3.7?
16:53 cloph http://aravindavk.in/blog/effective-gluster-monitoring-eventsapis/ (but no idea, didn't even look into that except for reading about it on planet.gluster.org)
16:53 glusterbot Title: Effective Gluster Monitoring using Events APIs (at aravindavk.in)
16:53 cloph nope, not in 3.7
16:54 Jules- wow, that api would be awesome if it supports 3.7
17:09 JoeJulian What's a resourceless monitor?
17:14 luizcpg joined #gluster
17:15 derjohn_mobi joined #gluster
17:21 Jacob843 joined #gluster
17:26 Jules- JoeJulian: i mean something like: killall -0 haproxy to check if a service is running
17:27 Jules- and is not dependent on the daemon itself
17:38 JoeJulian Oh, so something like: pgrep -F /var/lib/glusterd/nfs/run/nfs.pid
17:39 dgandhi Greetings all, is there a version compatibility table? I seem to be able to "join" a 3.7 node to a 3.5 cluster, but it's not happy and can not access the bricks.
17:41 JoeJulian You should try to run the same version on all your servers.
17:41 JoeJulian You should upgrade servers before clients.
17:41 dgandhi so minor versions are assumed to contain breaking changes?
17:42 JoeJulian nope
17:42 JoeJulian But occasionally there is one. Last one I remember for sure was in the 3.4 release.
17:44 dgandhi I was hoping to do a rolling upgrade, but looks like I have to take it down and update it all offline (which I understand is suggested, but rolling would be nice).
17:44 JoeJulian Since you used the word "join", I'm assuming your "node" is a server. Not sure how you measure its happiness ;). As for accessing the bricks, are you saying that glustershd, and nfs cannot connect to the other servers?
17:45 dgandhi I join the cluster, the cluster sees it and vice versa, but mounts throw errors so fast it fills the disk
17:45 dgandhi with logs
17:45 * JoeJulian is now pondering a future where we have nagios checks to monitor the emotional state of our AIs.
17:46 JoeJulian Please share something via fpaste.org
17:47 dgandhi I had to nuke the logs to get the node back up ( disk full ) I can try and bring it back up, if there is something to look for I can collect it.
17:48 JoeJulian I have no idea. The newer clients _should_ still be able to mount the older servers. Whatever messages are filling your logs may give a clue to a workaround.
17:49 gem joined #gluster
17:50 JoeJulian To your point about wanting to do rolling upgrades: rolling upgrades are feasible, but again you should upgrade your servers before upgrading your clients.
17:51 JoeJulian Newer servers can handle all the rpc calls of an older client, but an older server will not be able to perform all the remote procedures a newer client will attempt.
17:51 JoeJulian That _shouldn't_ be fatal, however.
17:52 JoeJulian But I can sure imagine that it would produce a lot of warnings, at least.
17:52 dgandhi http://paste.fedoraproject.org/466923/36270147/  here is a chunk of what was filling the logs after I left the machine "connected" to the cluster
17:52 glusterbot Title: #466923 • Fedora Project Pastebin (at paste.fedoraproject.org)
17:55 d0nn1e joined #gluster
17:57 Jules- JoeJulian: if the pidfile gets destroyed on glusterd nfs server crash then yes ;-)
17:57 JoeJulian Jules-: No, but if the pid is no longer there, pgrep will return 1.
17:58 JoeJulian Jules-: For added protection, add glusterfs to the end so if the pid exists but it's not glusterfs it'll still fail.
17:58 Jules- okay, then this should be less resource intensive than: rpcinfo -t "127.0.0.1" nfs '3'
17:59 Jules- JoeJulian: good idea
18:00 JoeJulian dgandhi: Can you check the brick logs for errors that coincide with the errors on the client? This only shows it's failing for GF_DUMP but the GF_DUMP rpc exists in 3.5 so I suspect that's a red herring.
18:00 farhoriz_ joined #gluster
18:00 Jules- JoeJulian: ah no, I forgot to mention that I also need to check the state on the other server too.
18:01 Jules- so if the 127.0.0.1 check fails, the keepalived that holds the load-balancing IP will switch traffic to the other NFS server's IP
18:01 Jules- if the remote check succeeds
18:03 JoeJulian No need to check the remote. Release the ip if this host is no longer an nfs server. If the other one is also not an nfs server you're in trouble anyway.
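
Putting JoeJulian's pgrep check into a form keepalived can track, as a sketch: the pid-file path is the stock location for the built-in gluster NFS server and should be verified on the actual install, and a vrrp_script entry in keepalived.conf would point at this script.

    #!/bin/sh
    # exit 0 while the pid recorded in nfs.pid is alive and is a glusterfs process
    PIDFILE=/var/lib/glusterd/nfs/run/nfs.pid
    exec pgrep -F "$PIDFILE" glusterfs >/dev/null
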
18:05 prth_ joined #gluster
18:05 dgandhi on one of the active brick servers I have lots of : [2016-10-31 17:15:34.169354] E [rpcsvc.c:620:rpcsvc_handle_rpc_call] 0-glusterd: Request received from non-privileged port. Failing request
18:06 JoeJulian Oh! That's an easy fix with a gluster volume set...
18:06 JoeJulian rpc-auth-allow-insecure on
18:08 JoeJulian Oh, you'll need to edit /etc/glusterfs/glusterd.vol and add option rpc-auth-allow-insecure on to the management translator
18:08 JoeJulian and restart glusterd
18:08 JoeJulian on the 3.5 servers
18:08 JoeJulian Can you tell my coffee hasn't kicked in all the way yet?
18:09 shaunm joined #gluster
18:10 JoeJulian https://github.com/gluster/glusterfs/blob/release-3.5/doc/release-notes/3.5.5.md#known-issues
18:10 glusterbot Title: glusterfs/3.5.5.md at release-3.5 · gluster/glusterfs · GitHub (at github.com)
18:10 dgandhi so I need to make that change, restarting each server in turn, and then if I can get 3.7 to play nice, a rolling upgrade might be in the cards?
18:10 tom[] joined #gluster
18:11 JoeJulian Again, like I have said twice now, rolling upgrades are in the cards already. You upgrade the servers before the clients.
18:12 JoeJulian Restarting glusterd (the management daemon) doesn't interfere with the bricks, so you can make that change to glusterd.vol and restart glusterd at any time.
18:12 JoeJulian No worries about "in turn".
18:13 dgandhi all good, thanks.
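
For the record, per the 3.5.5 known-issues page linked above the fix has two halves, roughly as below; the volume name is a placeholder, and note that the per-volume option is spelled server.allow-insecure there.

    # per volume: let bricks accept client connections from unprivileged ports
    gluster volume set myvol server.allow-insecure on

    # per 3.5 server: add the line below inside the "volume management" block of
    # /etc/glusterfs/glusterd.vol, then restart glusterd (running bricks are untouched)
    #     option rpc-auth-allow-insecure on
    systemctl restart glusterd      # or: service glusterd restart on pre-systemd hosts
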
18:19 MidlandTroy joined #gluster
18:52 panina joined #gluster
18:53 panina Hello. I'm testing oVirt 4.0 HyperConverged Infrastructure, with GlusterFS storage. It has NFS disabled in the GlusterFS settings.
18:54 panina I'm trying to use oVirt's scripts to upload ISO files, but it wants to mount a NFS share. Does anyone know of a good way to mount the GlusterFS shares as NFS mounts?
18:56 panina I've been trying to use the nfs.export-dir settings, but they won't stick to the shares. I get a 'volume set:success' message, but the settings don't show with `gluster volume set`.
18:56 panina Or wait, they do show. But I still get 'permission denied' when I try to mount them.
19:09 gem joined #gluster
19:24 JoeJulian I thought ovirt used fuse mounts if gluster was configured.
19:25 farhorizon joined #gluster
19:25 panina It seems like their iso-uploader script is a bit out of date, it's only configured to use nfs shares.
19:26 panina And they don't seem to have any other decent way of getting iso's into their system.
19:26 panina It looked straight-forward enough to mount the shares with nfs, but I'm not getting anywhere with it...
19:27 panina I'm not sure of the syntax, and what settings to set, to allow nfs clients to mount.
19:28 JoeJulian If you're using 3.8+ you need to enable nfs via either ganesha-nfs or by setting "nfs.disable off"
19:29 panina I'm going with nfs.disable off, as ganesha apparently interferes with oVirt's settings.
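
A sketch of that route with placeholder volume and host names; gluster's built-in NFS server only speaks NFSv3, so the client mount has to force vers=3.

    gluster volume set myvol nfs.disable off
    gluster volume status myvol             # the NFS Server line should show Online: Y

    # on the client
    mount -t nfs -o vers=3,nolock glusterhost:/myvol /mnt/iso
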
19:31 Rivitir joined #gluster
19:32 panina I'm currently digging through the `gluster volume set help` dump, it's more helpful than I initially thought...
19:33 JoeJulian Isn't it?! I'm still impressed at the level of detail and wonder why everyone else doesn't do that.
19:35 panina I'd love to see the same info in man-pages, but at least it's there.
19:36 panina I'm guessing glusterfs evolves a bit too fast for manpages & documentation to keep up, but I'm really glad they put energy into the help command.
19:51 post-factum ye, catching up with docs is pita
19:56 panina Oh glory. I reset some settings, and it's mounting now.
19:56 panina or, well, it seems like the nfs share hasn't been processed through gluster's translators, but something is mounting....
19:57 panina I've got a cryptically named directory, and a 0-byte file called __DIRECT_IO_TEST__...
19:58 panina doesn't seem optimal, but at least it's progress.
20:00 panina The nfs-mounted share looks exactly as it does on one of the replicated nodes, there's no difference in content. The volume has shard set on, so I guess I'm seeing shards.
20:03 panina Nope. The brick has .shard and .glusterfs, the mounted volume does not.
20:03 panina So that should mean I'm seeing the processed volume, I guess.
20:06 JoeJulian +1
20:14 gem joined #gluster
20:14 kpease joined #gluster
20:20 kpease joined #gluster
20:26 kpease joined #gluster
20:35 Rivitir joined #gluster
20:51 kpease_ joined #gluster
20:52 johnnyNumber5 joined #gluster
21:08 Rivitir Greetings all. I've been running a 2 node Gluster in replica 2 for a couple months in production and I keep running into performance bottlenecks.
21:09 Rivitir I've been doing some research on it but before I start tuning the volume, I wanted to check with you first to see if I am missing something.
21:10 Rivitir each node is running CentOS 7 with gluster 3.7.16. They are RAID 6 volumes with XFS. They are connected via Gig ethernet.
21:11 Rivitir all nodes are in the same cabinet so latency isn't an issue. I'm not seeing any packet drop. My only guess is it must be a cache issue.
21:12 Rivitir I say this because it starts off strong: a copy starts around 10mbps, then after a few minutes it drastically slows to 200-300 kbps
21:12 johnmilton joined #gluster
21:12 Rivitir the files are very small. Most files I am copying are 300K to 1.1mb
21:13 Rivitir I would appreciate any advice on how best to troubleshoot this.
21:14 PatNarciso joined #gluster
21:25 farhoriz_ joined #gluster
21:30 post-factum Rivitir: well, by saying latency is not an issue you show a misunderstanding of network filesystems. I believe latency stops being an issue only on an RDMA connection
21:30 post-factum Rivitir: but
21:30 post-factum Rivitir: your issue is not latency-related
21:30 post-factum Rivitir: it is something different
21:31 post-factum Rivitir: so you may start by showing us volume status and volume info
21:31 post-factum @paste
21:31 glusterbot post-factum: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
21:32 Rivitir Thanks for the help. Here is the volume I'm working with:
21:34 Rivitir http://termbin.com/ewp5
21:34 Rivitir the volume I'm working on is XVweb
21:36 Rivitir volume info on xvweb: http://termbin.com/9pjo
21:37 Rivitir Gluster is running on defaults. My guess is it's not latency; it's actually a cache issue, since every time I mount using fuse the copy speeds up, but then slows quickly over time.
21:37 Rivitir but I don't want to tweak the performance settings without some direction, hence why I came here, in case I'm missing something and to get some direction.
21:39 Rivitir Also, what I mean by latency is just pure network latency. I should have been clearer in that statement.
21:41 post-factum Rivitir: /me is trying to recollect if write-behind is enabled by default
21:41 post-factum try to switch it on or off
21:43 Rivitir you mean performance.flush-behind? According to the Admin guide it is on.
21:45 post-factum no, write-behind
21:46 Rivitir In the admin guide I only see the flush and the write-behind buffer settings.
21:46 Rivitir https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Managing%20Volumes/#tuning-options
21:46 glusterbot Title: Managing Volumes - Gluster Docs (at gluster.readthedocs.io)
21:46 derjohn_mobi joined #gluster
21:47 post-factum gluster volume set help
21:47 post-factum docs are not up-to-date usually
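
The option in question, spelled out against the XVweb volume named earlier in this exchange; the description and default are listed by gluster volume set help.

    gluster volume set help | grep -A 3 write-behind          # description and default
    gluster volume set XVweb performance.write-behind off     # post-factum's suggestion; "on" reverts it
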
21:48 Rivitir Gotcha, thanks. I set that to off
21:48 Rivitir do I need to restart gluster for the setting to take effect? right now a copy is going on, but no change in speed.
21:49 post-factum changes should apply immediately
21:51 Rivitir no change. let me try restarting the copy just to be sure.
21:54 Rivitir does seem faster so far. Anything else I should tweak if this doesn't do the trick?
22:01 Rivitir Thank you again for the help Post-factum. This seems to be running much faster.
22:05 Gnomethrower joined #gluster
22:08 MidlandTroy joined #gluster
22:17 haomaiwa_ joined #gluster
22:23 post-factum Rivitir: no more tweaks without real necessity
22:23 post-factum Rivitir: anyway, you'd better file a bug
22:23 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
22:31 Rivitir @post-factum It looks like reading files is quick, but writing files is still very slow. It's taking 4 sec to copy a 300k file. One file at a time.
22:34 post-factum Rivitir: weird
22:34 post-factum Rivitir: however, you can try to catch JoeJulian here, it is sleep time for me
22:50 Rivitir understood. Thank you for your help tonight Post-factum
23:00 Klas joined #gluster
23:02 KitKat_ joined #gluster
23:03 arpu joined #gluster
23:03 KitKat_ Hey all gluster noob here... Can someone point me towards the release notes for v3.6.9?  I can't seem to locate any information on this release version.
23:13 Klas joined #gluster
23:19 plarsen joined #gluster
23:25 Gnomethrower joined #gluster
23:28 prth_ joined #gluster
23:54 masber joined #gluster
