
IRC log for #gluster, 2016-08-13


All times shown according to UTC.

Time Nick Message
00:02 jkroon om, performance.readdir-ahead and cluster.readdir-optimize settings?
00:03 jkroon i'm not sure exactly yet what the impact of those are and why they're not on by default.
00:03 jkroon only just discovered them a few hours back.  but perhaps google might be able to get you some help on those options.
00:04 jkroon also remember kbYTEps vs kBITps.  make sure you compare apples with apples.
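
For reference, a minimal sketch of checking and enabling the two readdir options mentioned above (the volume name gv0 is illustrative):

    # show the current values
    gluster volume get gv0 performance.readdir-ahead
    gluster volume get gv0 cluster.readdir-optimize
    # enable both options for the volume
    gluster volume set gv0 performance.readdir-ahead on
    gluster volume set gv0 cluster.readdir-optimize on
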
00:06 om I know
00:06 om it is faster but not what is expected
00:07 om an rsync cp is 1.2 MB/s tops which is about 8 Mbps
00:07 om very slow
00:07 om still
00:13 jkroon closer to 10Mbps.
00:13 jkroon entertain me - is there anywhere where the duplex/speed settings are forced?
00:14 jkroon what does your error counter rates for your network interfaces and your switches look like?
00:14 jkroon have you switched off those heal options?
00:14 om no problems there
00:14 jkroon hahaha, i thought that too.  check the counters.
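
A quick way to check the error counters being asked about, assuming a Linux host with an interface named eth0 (the interface name is illustrative):

    # per-interface packet, error and drop counters
    ip -s link show dev eth0
    # NIC-level statistics, filtered for errors and drops
    ethtool -S eth0 | grep -Ei 'err|drop'
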
00:14 om just checked the network transfer speed from local disk to remote glusterfs server local disk
00:14 om rsync --progress test500.zip ubuntu@i2-p-fshare-101.us02.kexpress.net:~/
00:14 om test500.zip
00:14 om 524,288,000 100%   86.07MB/s    0:00:05 (xfr#1, to-chk=0/1)
00:15 jkroon ok, that's about 600mbit.
00:15 om so faster than 87 MB/s rsync from local disk (not gfs mount) to gfs server
00:15 om yea,
00:15 om that's what I want
00:15 om for the gfs mount
00:15 jkroon betting you can improve on that still by increasing your hardware rx offload buffers (-g to ethtool IIRC).
00:16 jkroon don't think you'll quite get there.
00:16 jkroon but 10mbit does feel too slow.
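
The buffers referred to here are the NIC ring buffers; a sketch of inspecting and raising them with ethtool (eth0 is illustrative, and the usable maximum is hardware-dependent):

    # show current and maximum ring sizes (lowercase -g reads)
    ethtool -g eth0
    # raise rx/tx rings towards the maximum (uppercase -G writes)
    ethtool -G eth0 rx 4096 tx 4096
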
00:16 om yea, compared to 640 mbps
00:17 JoeJulian Are you doing jumbo frames?
00:17 om no
00:17 om rx jumbo: 0
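
Jumbo frames mean raising the interface MTU above the default 1500, and every NIC and switch port in the path has to support it; a quick check and an illustrative change:

    ip link show dev eth0          # current MTU is printed on the first line
    ip link set dev eth0 mtu 9000  # only if the whole path supports jumbo frames
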
00:17 jkroon what does CPU and RAM usage do during the rsync of gfs?
00:17 om negligible
00:17 jkroon JoeJulian, i thought you were outa here? (I'm hot on your heels, 6 more splits to sort)
00:18 JoeJulian So you'll only get about 70% of the possible bandwidth.
00:18 jkroon that doesn't make sense.
00:18 jkroon iotop?
00:18 JoeJulian jumbo can get you just over 80%.
00:18 JoeJulian But not with cp (not sure about rsync) which does 512-byte chunks.
00:19 jkroon hahaha, JoeJulian - i'm afraid even without jumbo frames we've pushed intel NICs from 9 years back to 980 mbps using 1500 byte frames.  it's possible - but you need to tweak the NICs something crazy.
00:19 JoeJulian 512 bytes would probably only get you 60% of potential.
00:19 om TX:256
00:19 om but that's not the issue
00:20 JoeJulian out again...
00:20 om it's something related to gfs
00:20 om thanks JoeJulian !
00:20 om ttyl
00:20 om gonna remove all the glusterfs server volume options configured
00:20 om and see if that goes back to default
00:22 jkroon om, we agree, but just set both the rx and tx buffers to 4096 just to entertain me please.
00:22 jkroon do you have firewall settings?
00:22 jkroon kernel version?
00:22 jkroon (fuse itself has been a bottleneck in the past)
00:22 jkroon also, iotop is very useful to see if something is doing something stupid wrt disk IO.
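
A minimal sketch of the checks being suggested here, run on the client doing the rsync:

    uname -r            # kernel version, since fuse has been a bottleneck on older kernels
    iptables -L -n -v   # firewall rules, with per-rule packet counters
    iotop -o -P         # only processes currently doing disk I/O, aggregated per process
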
00:23 om right
00:23 om I can't do this on the prod server
00:23 om let me check dev
00:23 jkroon can't or not allowed ? :p
00:24 om both
00:24 om can and allowed, but bad practice
00:24 om could break prod
00:25 loadtheacc joined #gluster
00:28 jkroon perhaps i'm just getting cocky on certain things but it's almost standard for me now to max those settings out wherever I can.  If I could go larger than 4096 I probably would.
00:29 jkroon @ JoeJulian - trying to file that bug report but can't get my password reset.
00:40 om perf is a bit better now
00:40 om with default config options
00:41 om up to 5 MBps
00:41 om but on dev it goes up to 40 Mbps
00:41 om lol
00:41 om murphy's law?
00:42 om dev speed above 40 Mbps
00:42 om prod below 6 MB/s
00:42 om whoops, dev 40 MB/s
00:46 om well, performance is much better at 6 MB/s than 370 Kb/s
00:47 om will keep looking into this
00:47 om thanks jkroon !
00:47 om ttyl
01:02 jkroon joined #gluster
01:11 jiffin joined #gluster
01:13 aj__ joined #gluster
01:16 jiffin1 joined #gluster
01:24 jiffin1 joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:08 jiffin1 joined #gluster
02:12 jiffin1 joined #gluster
02:16 jiffin joined #gluster
02:18 jiffin1 joined #gluster
02:25 jiffin1 joined #gluster
02:33 jiffin1 joined #gluster
02:36 jiffin1 joined #gluster
02:41 jiffin joined #gluster
02:42 wadeholler joined #gluster
02:45 jiffin1 joined #gluster
02:50 jiffin1 joined #gluster
02:53 jiffin joined #gluster
03:02 Gambit15 joined #gluster
03:05 plarsen joined #gluster
03:22 jiffin1 joined #gluster
03:27 jiffin joined #gluster
03:29 jiffin1 joined #gluster
03:32 jiffin joined #gluster
03:37 bkolden joined #gluster
03:42 jiffin joined #gluster
03:43 jiffin1 joined #gluster
03:49 bkolden1 joined #gluster
03:55 jiffin1 joined #gluster
04:03 jiffin joined #gluster
04:34 jiffin joined #gluster
04:36 skoduri|afk joined #gluster
05:05 jkroon om, compare your configs (gluster volume get ${volname} all).  there'll be some reason.  could also be that prod is doing other IO on the same gfs mount at the same time (iotop really is useful, and in this case iostat to a lesser degree).  you want to get an overall view, not just single process.
05:06 jkroon also, those heal options really made a HUGE difference for us.  from system unusable to running stable and having resources to spare.
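
A simple way to do the comparison suggested above between the two environments (the volume name gv0 and file paths are illustrative):

    # dump every effective option on each environment, then diff the results
    gluster volume get gv0 all > /tmp/prod-options.txt   # run on a prod node
    gluster volume get gv0 all > /tmp/dev-options.txt    # run on a dev node
    diff /tmp/prod-options.txt /tmp/dev-options.txt
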
05:18 hchiramm joined #gluster
05:23 jkroon i do seem to get shd crawl lockups though ...
05:54 Gnomethrower joined #gluster
06:13 kovshenin joined #gluster
06:22 kovsheni_ joined #gluster
06:27 kovshenin joined #gluster
07:02 unforgiven512 joined #gluster
07:03 unforgiven512 joined #gluster
07:03 unforgiven512 joined #gluster
07:04 unforgiven512 joined #gluster
07:04 unforgiven512 joined #gluster
07:05 unforgiven512 joined #gluster
07:06 unforgiven512 joined #gluster
07:06 unforgiven512 joined #gluster
07:07 unforgiven512 joined #gluster
07:07 unforgiven512 joined #gluster
07:08 unforgiven512 joined #gluster
07:09 unforgiven512 joined #gluster
07:09 unforgiven512 joined #gluster
07:27 Wizek_ joined #gluster
07:48 Lee1092 joined #gluster
07:57 Wizek_ joined #gluster
08:06 hackman joined #gluster
08:17 sac joined #gluster
08:17 shruti joined #gluster
08:22 ZachLanich joined #gluster
08:26 ZachLanich Hey guys, I'm building a H/A PaaS with LXD containers dividing the web apps (Wordpress) inside the app VMs. I'm using Galera DB cluster and I'm looking for a smart way to use Gluster to distribute my FS, but unlike MySQL, where I can have all WP installs connect to the SQL cluster using different Usernames/Passwords for secure separation, every time I see Gluster in practice, it seems you're mounting 1 volume, intended to be
08:26 ZachLanich shared across your entire web app. In my case, I have dozens of LXD containers, each needing their own securely separated storage space (obviously). Can someone help me figure out how to best use Gluster and volumes, etc in this use case?
08:34 post-factum ZachLanich: I'd end up with 1 volume for all containers with specific subfolders on it, and then provide a symlink to each container if that is possible
08:34 post-factum unfortunately, glusterfs still has no way to mount just a subfolder of a volume
08:34 post-factum also, you may mount a subfolder via nfs, but then you lose ha
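
A sketch of the layout post-factum describes: the single volume mounted once on the host, one subdirectory per customer, and only that subdirectory exposed to each container via a symlink or bind mount (server, volume and path names are illustrative):

    # mount the whole volume once on the host
    mount -t glusterfs server1:/gv0 /mnt/gv0
    mkdir -p /mnt/gv0/customers/customer1
    # expose only the customer's subdirectory, e.g. with a bind mount
    mount --bind /mnt/gv0/customers/customer1 /srv/containers/customer1
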
08:34 ZachLanich post-factum You know lol, I was literally wandering around thinking about just that and your message popped up haha
08:35 post-factum ZachLanich: also, if you want to try, cephfs has subfolder mounts
08:35 ZachLanich So you're thinking mount a Gluster volume on my app host, create subdirs, then symlink them into the LXD containers, correct?
08:35 post-factum ZachLanich: correct
08:36 ZachLanich I think might actually do the trick
08:36 ZachLanich that might*
08:36 ZachLanich Thanks! I'll be looing into that.
08:37 ZachLanich looking* Good lord, the typos tonight lol. I think it's time for bed :P
08:40 post-factum late morning here :)
08:46 MikeLupe joined #gluster
08:58 hackman joined #gluster
09:22 skoduri joined #gluster
09:54 poornimag joined #gluster
10:37 MessedUpHare joined #gluster
10:37 hchiramm joined #gluster
10:50 Wizek_ joined #gluster
10:56 hackman joined #gluster
11:24 Mmike joined #gluster
11:27 Wizek_ joined #gluster
11:34 suliba joined #gluster
11:48 jkroon joined #gluster
11:56 jkroon JoeJulian, after re-enabling self-heal-daemon it took a while (about two hours), and for some reason I didn't notice last night, but I suspect a crawl is stuck somehow.
12:01 jkroon https://paste.fedoraproject.org/407366/08970014/ contains more information (crawl stats) showing that we've got an in-progress crawl for a few hours now.
12:01 glusterbot Title: #407366 Fedora Project Pastebin (at paste.fedoraproject.org)
12:02 jkroon where normally it finishes in minutes.
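
The crawl statistics in the paste come from the heal statistics command; for reference (the volume name gv0 is illustrative):

    # per-brick crawl statistics, including whether a crawl is still in progress
    gluster volume heal gv0 statistics
    # entries currently pending heal
    gluster volume heal gv0 info
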
12:03 poornimag joined #gluster
12:24 hchiramm joined #gluster
12:26 Gnomethrower joined #gluster
12:36 shellclear joined #gluster
12:36 hgowtham joined #gluster
12:41 mhulsman joined #gluster
12:43 jkroon JoeJulian, https://bugzilla.redhat.com/show_bug.cgi?id=1366849
12:43 glusterbot Bug 1366849: unspecified, unspecified, ---, bugs, NEW , add reporting on why files/folders are in split-brain
12:49 shellclear joined #gluster
12:50 shyam joined #gluster
12:57 luis_silva joined #gluster
12:58 pkalever joined #gluster
12:58 rjoseph joined #gluster
12:59 lalatenduM joined #gluster
13:02 sac joined #gluster
13:02 shruti joined #gluster
13:03 shyam joined #gluster
13:25 plarsen joined #gluster
13:26 unforgiven512 joined #gluster
13:34 MikeLupe joined #gluster
13:50 mhulsman joined #gluster
13:55 plarsen joined #gluster
13:57 ron-slc joined #gluster
13:58 harish_ joined #gluster
14:16 plarsen joined #gluster
14:29 pdrakeweb joined #gluster
14:41 mhulsman joined #gluster
14:47 suliba joined #gluster
14:57 side_control joined #gluster
15:03 mhulsman joined #gluster
15:13 arif-ali joined #gluster
15:30 pdrakeweb joined #gluster
16:08 Wizek_ joined #gluster
16:14 hchiramm joined #gluster
16:22 Gambit15 joined #gluster
17:32 pdrakeweb joined #gluster
17:53 ZachLanich joined #gluster
17:56 ZachLanich post-factum, you around?
18:33 pdrakeweb joined #gluster
18:39 cloph hi * - for those running kvm/qemu from gluster fuse - is cache=none and direct-io-mode=enable on the fuse mount the way to go or is there a better way to have minimal risk of loss-in-flight?
18:49 samppah cloph: cache=none should be enough.. i have been using it with network.remote-dio=on
18:51 cloph samppah: but the network.remote-dio setting would only apply to the direct qemu gluster support, not to using the disk images from a fuse mount, or am I mistaken?
18:52 samppah cloph: it applies also for fuse mount
18:52 cloph so what is the purpose of the direct-io switch for fuse mount then?
18:58 samppah cloph: afaik network.remote-dio filters o_direct flags from file access requests on the client side before they are sent to the server.. so basically it allows caching on the server side.. however i have noticed it to be quite safe with cache=none when using at least replica 2
18:58 samppah i'm not sure about the direct-io-mode=enable setting or what its status is
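
A sketch of the combination samppah describes, with the gluster-side option plus cache=none on the guest disk; the volume and image names are illustrative, and libvirt users would set cache='none' on the disk element instead of passing -drive by hand:

    # gluster side: strip O_DIRECT from client requests before they reach the bricks
    gluster volume set gv0 network.remote-dio enable
    # qemu side: no host page cache for the image sitting on the fuse mount
    qemu-system-x86_64 -drive file=/mnt/gv0/vm1.qcow2,format=qcow2,cache=none ...
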
19:15 om joined #gluster
19:19 bfoster joined #gluster
19:26 rafi joined #gluster
19:41 rafi joined #gluster
19:51 rafi joined #gluster
20:07 ZachLanich Hey guys, I have a question regarding usage of Gluster with LXC/D. I chatted with post-factum last night briefly about it and I'm in final decision-making mode after reading through a ton of Gluster's docs. Prerequisites: I'm dividing my App VM into multiple LXD containers (one for each customer). I'm looking to mount a Gluster Volume on the App VM, subdivide that volume into user directories, place ACL policies on those
20:07 ZachLanich directories, then either mount or symlink the sub-directories into their respective LXD containers as the web doc root (or higher in the dir hierarchy - feedback welcome). The ACLs would ensure that each customer would only be able to access their own files within the Gluster Vol. Can anyone give me advice on the best way to handle injecting these subdirs into the LXD containers? Networked volume mounts from container to host?
20:07 ZachLanich Symlinks somehow? etc.
20:10 nathwill joined #gluster
20:10 ZachLanich Is it possible I should be somehow placing the LXD containers directly in the Gluster Vol (assuming that's even possible/good practice)? End goal is file redundancy and H/A for anything important/unique in the LXD containers (which are hosting Wordpress sites, fyi)
20:17 pkalever joined #gluster
20:21 rjoseph joined #gluster
20:22 lalatenduM joined #gluster
20:23 sac joined #gluster
20:23 shruti joined #gluster
20:36 pdrakeweb joined #gluster
20:37 rafi1 joined #gluster
20:42 post-factum ZachLanich: sorry?
20:43 ZachLanich post-factum Sorry??
20:43 post-factum ZachLanich: would you like to get some advice on injecting folders into lxd from me?
20:44 ZachLanich post-factum Sure. I know we talked about it briefly, but I'm not certain you can symlink into an isolated LXD container, so I'm looking for advice/best practices for getting a subdir in the Gluster Vol into a container as a mount point
20:44 post-factum ZachLanich: unfortunately, no advice :). i simply made an assumption, it's up to you to test it, as i have never had lxd containers
20:45 ZachLanich That's fine. Just seeing if anyone in here has.
20:45 post-factum JoeJulian: ^^
20:45 post-factum JoeJulian is the guy that helps
20:46 post-factum :D
20:46 ZachLanich I'm also keeping in mind trying to keep Gluster CPU/RAM usage as low as I can obviously, so making it sync more files than necessary is not a good idea.
20:46 ZachLanich post-factum Thanks. We'll see if JoeJulian has an answer. Oh Joe!!! <cups the ear>
20:54 rafi1 joined #gluster
21:10 ZachLanich JoeJulian Should I just mount the Gluster Vol directly inside each container and use ACLs to restrict access to only the allowed subdirs?
21:27 ZachLanich Would just this work without security issues?: `lxc config device add gluster sdb disk source=/data/gluster/auserdir path=gluster`
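
For reference, a sketch of the ACL-plus-device approach being discussed here, not a statement that it is free of security issues; the mount point, user and container names are illustrative:

    # mount the volume with POSIX ACL support on the host
    mount -t glusterfs -o acl server1:/gv0 /data/gluster
    # restrict one customer's subdirectory to its owner plus one named user
    chmod 750 /data/gluster/auserdir
    setfacl -m u:customer1:rwx /data/gluster/auserdir
    # pass only that subdirectory into the container
    lxc config device add customer1-container webroot disk source=/data/gluster/auserdir path=/var/www
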
21:29 BitByteNybble110 joined #gluster
21:37 pdrakeweb joined #gluster
21:38 genial joined #gluster
21:41 cloph samppah: reading the mailing list thread I see that network.remote-dio is there to *filter out* o_direct, so it's kind of a misleading switch. so you're not actually using direct-io when that is enabled...
21:42 ZachLanich joined #gluster
21:43 cloph http://www.mail-archive.com/gluster-devel%40gluster.org/msg07911.html
21:43 glusterbot Title: Re: [Gluster-devel] What's the correct way to enable direct-IO? (at www.mail-archive.com)
21:44 ZachLanich I'm back in here. I can't remember if I was under nick: WebDude or ZachLanich on my laptop.
21:46 cloph but no matter what I use - with direct-io enabled qemu here is not happy. "Could not find working O_DIRECT alignment. Try cache.direct=off."
22:04 cloph getting invalid argument from dd with iflag=direct (oflag works for some reason).. so no direct-io for me it seems...
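
For reference, the kind of dd probe being described; an "Invalid argument" on the read side means the O_DIRECT open or its alignment is being rejected on that mount (paths and sizes are illustrative):

    # O_DIRECT read test on the fuse mount
    dd if=/mnt/gv0/test.img of=/dev/null bs=4k count=256 iflag=direct
    # O_DIRECT write test
    dd if=/dev/zero of=/mnt/gv0/ddtest.img bs=4k count=256 oflag=direct
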
22:14 ZachLanich joined #gluster
22:38 pdrakeweb joined #gluster
22:43 plarsen joined #gluster
22:52 frakt joined #gluster
23:47 Klas joined #gluster
