
IRC log for #gluster, 2017-02-07


All times shown according to UTC.

Time Nick Message
00:32 jdossey joined #gluster
00:32 alvinstarr joined #gluster
00:58 shdeng joined #gluster
01:37 ShwethaHP joined #gluster
01:48 shdeng joined #gluster
01:50 alvinstarr joined #gluster
01:55 swebb joined #gluster
02:06 Gambit15 joined #gluster
02:09 DSimko joined #gluster
02:10 DSimko Last 24 hours my nfs.log has been filling up with this message. [2017-02-07 02:08:04.488859] E [MSGID: 114031] [client-rpc-fops.c:444:client3_3_open_cbk] 0-gv00-client-1: remote operation failed. Path: <gfid:d04a5e4c-5946-4994-86ef-32bab48c477c> (d04a5e4c-5946-4994-86ef-32bab48c477c) [Transport endpoint is not connected] [2017-02-07 02:08:04.489184] E [MSGID: 114031] [client-rpc-fops.c:1549:client3_3_inodelk_cbk] 0-gv00-client-1: rem
02:10 DSimko Anybody have an idea what can cause this?
02:13 nthomas joined #gluster
02:14 JoeJulian The nfs service cannot connect to Brick 2
02:15 JoeJulian "Transport endpoint is not connected" comes from the network calls of the glibc library.
02:16 DSimko @joeJulian ty Joe, my statuses look good though
02:17 JoeJulian So what else could block that application from connecting? firewalls? hostname lookup? selinux?
02:17 DSimko https://da.gd/h12B
02:17 glusterbot Title: #550119 • Fedora Project Pastebin (at da.gd)
02:17 DSimko Yeah I have checked selinux, iptables
02:18 JoeJulian Since the volume status comes back, at least port 24007 is open. Can you open a tcp connection from node01 to node02 port 49156?
02:19 RameshN joined #gluster
02:20 DSimko https://da.gd/qrRJ
02:20 glusterbot Title: #550120 • Fedora Project Pastebin (at da.gd)
02:20 DSimko all telnets look ok
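
For reference, a minimal sketch of that connectivity check, assuming the node names and brick port from the conversation above; adjust both for your own volume:

    # from node01, test the brick port node02's glusterfsd listens on
    telnet node02 49156
    # or, without telnet
    nc -zv node02 49156
    # and the management port every peer and client needs
    nc -zv node02 24007
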
02:21 JoeJulian odd... can you "gluster volume start gv00 force"? It will restart the self-heal daemon and, I think, the nfs service too.
02:22 DSimko Right, no, as these are production servers
02:22 DSimko I have about 6 application servers connected to this node via NFS
02:25 derjohn_mob joined #gluster
02:25 DSimko Here is a bit more of the logs
02:26 DSimko https://da.gd/OL6Zu
02:26 glusterbot Title: #550121 • Fedora Project Pastebin (at da.gd)
02:34 Saravanakmr joined #gluster
02:35 susant joined #gluster
03:08 JonathanD joined #gluster
03:36 Saravanakmr joined #gluster
03:41 magrawal joined #gluster
03:41 kramdoss_ joined #gluster
03:44 gem joined #gluster
03:48 itisravi joined #gluster
04:00 Shu6h3ndu joined #gluster
04:06 farhorizon joined #gluster
04:07 atinm joined #gluster
04:12 nbalacha joined #gluster
04:12 itisravi joined #gluster
04:13 aravindavk joined #gluster
04:14 sudoSamurai joined #gluster
04:19 Jacob843 joined #gluster
04:34 susant left #gluster
04:35 buvanesh_kumar joined #gluster
04:35 ppai joined #gluster
04:45 mb_ joined #gluster
04:50 victori joined #gluster
04:52 skumar joined #gluster
04:52 Prasad joined #gluster
05:07 ndarshan joined #gluster
05:07 BitByteNybble110 joined #gluster
05:10 nirokato joined #gluster
05:11 skoduri joined #gluster
05:19 sanoj joined #gluster
05:24 kotreshhr joined #gluster
05:34 sbulage joined #gluster
05:36 gyadav joined #gluster
05:37 Philambdo joined #gluster
05:38 karthik_us joined #gluster
05:40 Humble joined #gluster
05:42 kramdoss_ joined #gluster
05:44 rafi joined #gluster
05:45 k4n0 joined #gluster
05:45 apandey joined #gluster
05:49 riyas joined #gluster
05:51 rjoseph joined #gluster
06:00 rafi1 joined #gluster
06:07 nthomas joined #gluster
06:13 k4n0 joined #gluster
06:15 Saravanakmr joined #gluster
06:19 rafi1 joined #gluster
06:19 hgowtham joined #gluster
06:20 nbalacha joined #gluster
06:20 ashiq joined #gluster
06:25 Wizek_ joined #gluster
06:30 gem joined #gluster
06:35 rastar joined #gluster
06:37 mb_ joined #gluster
06:41 msvbhat joined #gluster
06:44 ankit_ joined #gluster
06:45 sanoj joined #gluster
06:49 hgowtham joined #gluster
06:51 poornima_ joined #gluster
06:53 msvbhat joined #gluster
06:55 Jacob843 joined #gluster
06:59 nbalacha joined #gluster
07:08 [diablo] joined #gluster
07:10 gem joined #gluster
07:14 jkroon joined #gluster
07:15 kramdoss_ joined #gluster
07:20 jtux joined #gluster
07:25 mhulsman joined #gluster
07:26 zoyvind joined #gluster
07:29 mhulsman joined #gluster
07:39 hgowtham joined #gluster
07:49 ankit_ joined #gluster
07:56 ivan_rossi joined #gluster
08:14 sudoSamurai joined #gluster
08:16 fsimonce joined #gluster
08:17 kramdoss_ joined #gluster
08:18 hgowtham joined #gluster
08:22 susant joined #gluster
08:22 susant left #gluster
08:29 mhulsman joined #gluster
08:37 musa22 joined #gluster
08:41 ahino joined #gluster
08:50 sanoj joined #gluster
08:56 mhulsman1 joined #gluster
08:59 Manikandan joined #gluster
09:01 k0nsl joined #gluster
09:01 k0nsl joined #gluster
09:04 nh2_ joined #gluster
09:11 RameshN joined #gluster
09:13 jiffin joined #gluster
09:23 k0nsl joined #gluster
09:23 k0nsl joined #gluster
09:30 bhakti left #gluster
09:31 derjohn_mob joined #gluster
09:43 skumar_ joined #gluster
09:44 itisravi_ joined #gluster
09:45 skumar__ joined #gluster
09:49 jri joined #gluster
09:53 skumar_ joined #gluster
09:55 Karan joined #gluster
09:56 karthik_us joined #gluster
10:00 Seth_Karlo joined #gluster
10:05 itisravi_ joined #gluster
10:08 Plam hi there. Does it make sense that disperse gives better write performance than 3-way replicate (on 3 nodes, disperse with redundancy 1)?
10:08 Plam write perf I mean
10:08 Plam read perf is better in replicate
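
For context, minimal sketches of the two layouts being compared; hostnames and brick paths are placeholders:

    # 3-way replicate across 3 nodes
    gluster volume create gv-rep replica 3 \
        node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1

    # disperse across 3 nodes with redundancy 1 (2 data + 1 redundancy brick)
    gluster volume create gv-disp disperse 3 redundancy 1 \
        node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1

Disperse writes smaller erasure-coded fragments to each brick instead of a full copy, which is why better write throughput than replica 3 is plausible.
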
10:08 skumar__ joined #gluster
10:10 karthik_us joined #gluster
10:14 sudoSamurai joined #gluster
10:14 jdossey joined #gluster
10:14 pulli joined #gluster
10:18 rastar joined #gluster
10:22 k4n0 joined #gluster
10:40 dspisla joined #gluster
10:43 karthik_us joined #gluster
10:45 dspisla @nigelb Hello, a few weeks ago I registered and entered my email address for verification. I want to join the gluster user group but I have not received a verification yet. Maybe you can verify my email (david.spisla@iternity.com)
10:45 kotreshhr joined #gluster
11:03 mhulsman joined #gluster
11:04 kkeithley gluster community bug triage in ~60min in #gluster-meeting
11:17 Philambdo joined #gluster
11:35 alexcontis joined #gluster
11:35 alexcontis hello
11:35 glusterbot alexcontis: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
11:36 alexcontis any volunteer available please?
11:47 buvanesh_kumar joined #gluster
11:57 kramdoss_ joined #gluster
11:58 mhulsman joined #gluster
11:58 mhulsman joined #gluster
12:01 hgowtham joined #gluster
12:04 Karan joined #gluster
12:12 karthik_us joined #gluster
12:17 kotreshhr joined #gluster
12:21 poornima_ joined #gluster
12:23 kramdoss_ joined #gluster
12:25 Seth_Karlo joined #gluster
12:27 musa22 joined #gluster
12:28 mhulsman1 joined #gluster
12:33 pulli joined #gluster
12:39 ivan_rossi left #gluster
12:48 kpease joined #gluster
12:54 nbalacha joined #gluster
13:01 pdrakeweb joined #gluster
13:14 skoduri joined #gluster
13:16 pulli joined #gluster
13:16 plarsen joined #gluster
13:21 musa22 joined #gluster
13:24 karthik_us joined #gluster
13:25 kotreshhr joined #gluster
13:29 mhulsman joined #gluster
13:48 atinm joined #gluster
13:51 unclemarc joined #gluster
13:54 Manikandan joined #gluster
13:59 mhulsman1 joined #gluster
13:59 pioto joined #gluster
14:01 jkroon joined #gluster
14:15 ankit joined #gluster
14:29 mhulsman joined #gluster
14:32 nh2_ joined #gluster
14:45 jeffspeff joined #gluster
14:46 susant joined #gluster
14:49 skoduri joined #gluster
14:52 pulli joined #gluster
14:52 shaunm joined #gluster
14:53 ira joined #gluster
14:54 skylar joined #gluster
14:55 kotreshhr joined #gluster
15:06 dspisla_ joined #gluster
15:07 gem joined #gluster
15:12 nh2_ joined #gluster
15:15 msvbhat joined #gluster
15:19 rwheeler joined #gluster
15:25 nbalacha joined #gluster
15:26 Gambit15 joined #gluster
15:27 Philambdo joined #gluster
15:34 farhorizon joined #gluster
15:41 JoeJulian file a bug
15:41 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
15:42 Plam nice bot JoeJulian :p
15:42 JoeJulian :)
15:42 JoeJulian I'm too lazy to use bookmarks. ;)
15:42 Plam I like this way to do it :)
15:43 JoeJulian Plam: and the theoretical answer to your question is "yes" though I have not seen any benchmarks.
15:44 Plam okay, so I just hope the 50% redundancy rule "limitation" on disperse gets removed :p
15:44 Plam e.g. disperse on 4 bricks with redundancy 2
15:44 sbulage joined #gluster
15:44 Plam that should be possible
15:45 Plam even with a force flag at worst
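
The configuration being asked for would look like this; names are hypothetical, and at the time of this log the CLI rejects it because redundancy must be strictly less than half the brick count:

    # 4 bricks, 2 data + 2 redundancy: currently refused by the 50% rule
    gluster volume create gv-test disperse 4 redundancy 2 \
        node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1 node4:/bricks/b1
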
15:47 JoeJulian Ah yes, I saw that email. Should be, yes. You might want to also email Xavier Hernandez <xhernandez@datalab.es> directly. I'm not sure how much he follows the mailing lists.
15:48 Plam :) cool, I will
15:55 sanoj joined #gluster
15:59 Plam JoeJulian: also, would it be "insane" to use sharding + disperse? I did it for a while without any problem or perf impact, without knowing that's "strange" from a Gluster dev's point of view :p
15:59 Plam I got very short heal time with that
15:59 JoeJulian It seems to me like it would be a good idea.
15:59 JoeJulian Oh, and I'm no dev.
15:59 Plam :D
15:59 JoeJulian Well, not a gluster dev anyway.
15:59 JoeJulian I'm just a user.
16:00 Plam maybe there is a specific/ideal shard size that pairs well with the disperse volume algorithm
16:00 JoeJulian Have been since 2006.
16:00 Plam didn't search a lot for the sweet spot
16:01 JoeJulian Those are both relatively new. I haven't seen any posts or talks looking for best practice on that.
16:01 JoeJulian Could be a good talk for Vault if it wasn't already closed...
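
For anyone wanting to try the combination, sharding is just a pair of volume options; the 64MB block size here is only an illustrative value, not a tested sweet spot:

    gluster volume set gv-disp features.shard on
    gluster volume set gv-disp features.shard-block-size 64MB

Smaller shards mean only the shards touched by a write need healing, which matches the short heal times Plam describes.
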
16:02 buvanesh_kumar joined #gluster
16:04 wushudoin joined #gluster
16:05 susant left #gluster
16:07 nh2_ JoeJulian: typo: $s -> %s in https://joejulian.name/blog/glusterfs-and-lstat/
16:07 glusterbot Title: GlusterFS and lstat (at joejulian.name)
16:15 JoeJulian Heh, yep.
16:17 JoeJulian fixed
16:28 nh2_ JoeJulian: given that you wrote that post: I'm having a performance problem related to lstat. I try to sync ~1M files to gluster with rsync, and it takes forever, because a single lstat takes 34 microseconds on average according to strace. With your benchmark, it's much faster (like 100x). I wonder whether that is because your benchmark lstats the same files.
16:29 nh2_ Also, on http://events.linuxfoundation.org/slides/2010/linuxcon2010_wheeler.pdf it was claimed that XFS is at least ~10x slower for metadata operations than ext4 for lots of files, I wonder whether I'm hitting that.
16:31 JoeJulian I can't believe Ric would have said that in 2010!
16:33 JoeJulian Odd. This looks like he's referencing the 2006 numbers.
16:34 JoeJulian rwheeler: Any comments on xfs metadata performance vs ext4 today?
16:35 nh2_ joined #gluster
16:39 mhulsman joined #gluster
16:41 nh2_ JoeJulian: do you know of any profiling facilities that I could use to learn whether the time is indeed spent in xfs local operations, or in gluster? gluster vol profile and gluster vol top don't seem to expose any such info
16:42 shyam nh2_: You could strace the brick process to see how much time is spent in the syscalls, which would reflect the ratio of time spent there against the strace on the client/application (which is what I assume you are referring to above)
16:42 JoeJulian That would be hard since you have profile time on two different machines.
16:46 nh2_ joined #gluster
16:53 skoduri joined #gluster
16:56 nh2_ JoeJulian shyam: I will probably get something useful with `strace -c` on the two machines; which process is the one that eventually writes to disk? glusterfsd?
16:57 JoeJulian yes
16:58 kotreshhr left #gluster
17:03 jbrooks joined #gluster
17:03 nh2_ JoeJulian shyam: https://gist.github.com/nh2/1836415489e2132cf85ed3832105fcc1
17:03 glusterbot Title: rsync being slow on lstat() syscalls on GlusterFS (on XFS, not sure if it matters) · GitHub (at gist.github.com)
17:04 nh2_ suggests to me that XFS isn't the problem, and that the slowness comes from Gluster
17:04 jdossey joined #gluster
17:05 nh2_ network load is 30 Mbit/s incoming, 2 Mbit/s on a 1000 Mbit/s link
17:06 raghu joined #gluster
17:09 jdossey joined #gluster
17:29 riyas joined #gluster
17:35 Saravanakmr joined #gluster
17:38 nh2_ I have updated my gist with `strace` times which are more reliable than csysdig's times
17:41 Karan joined #gluster
17:42 pulli joined #gluster
17:46 nh2_ JoeJulian: I'm a bit sceptical about some things: Am I using strace correctly? `strace -c` shows this `30 usecs/call` for lstat, but when I use `strace -T`, then each lstat entry is about ~2ms (which makes more sense, because who can do a network roundtrip in 30 usecs?)
17:48 JoeJulian nh2_: With the more recent version of gluster, that might hit local cache.
17:48 JoeJulian I think
17:49 JoeJulian I haven't looked that closely at it, but there's a new way of invalidating caches that's smarter.
17:49 nh2_ JoeJulian: but the problem is that two different invocations of strace disagree: what -c shows doesn't agree with what -T shows
17:49 nh2_ what -T shows me makes more sense: ~2ms per lstat call (because it's a network roundtrip), and rsync does them sequentially, so I get at max 500 lstats per second
17:50 JoeJulian Do you *need* to use rsync?
17:50 nh2_ JoeJulian: I haven't found any better program so far that can copy files, recursively, skip existing ones, and preserve attributes accordingly
17:52 nh2_ in general file copying utilities seem to have been stuck in the 80s, they also don't use efficient syscalls like `sendfile()` or batched `getdents()` for listing directories
17:52 nh2_ and pretty much none can perform copy operations in a parallel/pipelined fashion, as network file systems like gluster would benefit from
17:52 JoeJulian +1
17:52 nh2_ it's a real pain
17:52 musa22 joined #gluster
17:53 nh2_ I tried `gsutil` from Google which has an rsync like mode, but it just randomly hangs forever and then never does any syscalls again
17:53 nh2_ (I tried it because it's the only one that has a parallel feature)
17:54 nh2_ JoeJulian: I'm more and more considering sitting down for a couple hours and writing one that can do at least a basic parallel `cp -u -a`
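
Until such a tool exists, one rough approximation is to fan several rsync processes out with GNU parallel; this is a sketch assuming GNU parallel is installed, and the paths are placeholders:

    cd /data/source
    # -R keeps paths relative to the source dir; -m batches many files per rsync
    find . -type f -print0 \
        | parallel -0 -j 8 -m rsync -aR {} /mnt/gluster/dest/

Note this only copies regular files; empty directories and their attributes would need a separate pass.
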
17:55 cholcombe are the nagios gluster plugins packaged in a ppa or just rpm's?
17:55 JoeJulian nh2_: Would it be able to fill tcp buffers?
17:55 JoeJulian cholcombe: They're not in our ppa, no.
17:55 cholcombe JoeJulian, ok
17:56 JoeJulian cholcombe: Happy to have someone support that though, if you want to volunteer! ;)
17:56 cholcombe hehe
17:56 nh2_ JoeJulian: how do you mean? Whether it'd saturate my gigabit link?
17:56 cholcombe well i want to include them in my gluster juju charm.  i'm not sure whether that'll entail me packaging them also
17:57 JoeJulian nh2_: cp reads and writes 512 byte blocks. On my ~9k frames, that's a lot of wasted packet.
17:59 nh2_ JoeJulian: ah, you mean for the actual copy operation, yes, I'd either make the block size tunable, or just pass the entire file to sendfile() so it's all done in the kernel
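
As a rough illustration of the block-size effect, dd makes the trade-off visible; paths are placeholders:

    # 512-byte blocks: one read+write syscall pair per 512 bytes
    dd if=/data/bigfile of=/mnt/gluster/bigfile bs=512
    # 1MiB blocks: ~2000x fewer syscalls for the same data
    dd if=/data/bigfile of=/mnt/gluster/bigfile bs=1M
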
18:00 JoeJulian nh2_: if you make it, I'll package it for Arch Linux's aur repo.
18:00 JoeJulian (yes, I realize the redundancy)
18:08 nh2_ JoeJulian: OK deal. Do you have any idea why `strace -c` would show me this bogus insanely small number?
18:08 JoeJulian Only if it's hitting cache and not the network.
18:10 nh2_ JoeJulian: but then the times in `strace -T` would also have to be small, and not ~2ms
18:10 nh2_ let me copy the output so that it's clear what I mean
18:10 bbooth joined #gluster
18:12 farhorizon joined #gluster
18:13 nh2_ JoeJulian: https://gist.github.com/nh2/1836415489e2132cf85ed3832105fcc1#strace--c-is-fishy
18:13 glusterbot nh2_: https://gist.github.com/nh2/1836415489e2132cf85ed3832105fcc1#strace's karma is now -1
18:13 glusterbot Title: rsync being slow on lstat() syscalls on GlusterFS (on XFS, not sure if it matters) · GitHub (at gist.github.com)
18:29 nh2_ joined #gluster
18:32 ahino joined #gluster
18:34 gem joined #gluster
18:42 Saravanakmr joined #gluster
18:51 nh2_ joined #gluster
18:54 gem joined #gluster
18:58 farhorizon joined #gluster
19:03 msvbhat joined #gluster
19:08 nh2_ JoeJulian: ah, so easy http://stackoverflow.com/questions/42097278/why-do-straces-timings-for-c-and-t-disagree
19:08 glusterbot Title: linux - Why do strace's timings for -c and -T disagree? - Stack Overflow (at stackoverflow.com)
19:08 JoeJulian That makes sense.
19:08 nh2_ `strace -c` is wrong, `strace -c -w` (for "w"all time) is what I'm looking for
19:09 JoeJulian +1
19:09 nh2_ another mystery debunked
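
For the record, the two invocations side by side; -w makes the -c summary report wall-clock time instead of on-CPU time, which is what matters for calls that sleep on the network:

    # per-call CPU time: misleadingly small for network-bound syscalls
    strace -c -p <rsync-pid>
    # per-call wall time: consistent with what -T shows
    strace -c -w -p <rsync-pid>
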
19:10 shyam nh2_: Ignore on the brick process (glusterfsd) select, futex, nanosleep, restart_syscall, poll syscall time (just saying, you probably already are not bothered with those)
19:14 nh2_ shyam: yes, I figured it out: I used strace wrong (missing a flag), the csysdig output is right, and rsync is simply doing one lstat after another, each of which takes at least one LAN round trip, thus I can't do more than ~500 files per second. All as expected
19:14 shyam nh2_: Ah ok...
19:16 JoeJulian I wonder if you used some of these mount options...
19:16 JoeJulian @php
19:16 glusterbot JoeJulian: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH
19:16 glusterbot JoeJulian: --fopen-keep-cache
19:16 alvinstarr joined #gluster
19:16 shyam I would have done "strace -ff -p <PID> -T -o <outputfile>" that way each thread output is separated into a file of its own, and processing the syscall times becomes a little easier (possibly future reference)
19:16 JoeJulian The attribute timeouts might help, but only if they're prefetched. I'm not sure if they are.
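
Those options can be passed at mount time; a sketch with arbitrary timeout values (in seconds), and with the server and volume names as placeholders:

    mount -t glusterfs \
        -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache \
        server1:/gv00 /mnt/gv00
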
19:17 rastar joined #gluster
19:20 * PatNarciso has XFS+GlusterDistributed+GlusterFuse+Samba MacOS clients who get 35MB/s often... 50MB/s if lucky.  Still trying to nail down the reason why.
19:23 PatNarciso I need a 100TB sandbox.
19:24 nh2_ shyam: thanks, -ff seems useful for when I eventually have a threaded copying program, and for stracing `glusterd`. My rsync is single-threaded though (a shame)
19:24 nh2_ PatNarciso: you don't have the problem with smaller setups?
19:25 PatNarciso no.
19:25 PatNarciso I thought it was XFS for a little while.  thought it was because the metadata size needed to be increased... what is it, -b -bs?
19:26 PatNarciso ... took 4 weeks for me to move data around to reformat both nodes.  didn't improve :\
19:31 PatNarciso any tricks for getting the total dir and file count of a filesystem (or dir)?
19:32 bbooth joined #gluster
19:34 * PatNarciso likes find | wc
19:34 musa22 joined #gluster
19:40 klaas joined #gluster
19:44 nh2_ PatNarciso: apparently there is no function for that in Linux
19:45 nh2_ PatNarciso: if you know how long the filenames are, you can divide the size of the dirent (ls -ld) by the chars taken up by the file names
19:45 nh2_ still kinda silly
19:46 nh2_ PatNarciso: why did it take 4 weeks to copy? Are you also missing a parallel cp utility?
19:46 jwd joined #gluster
19:46 nh2_ or because you had to do a "rolling rebalance" and format each disk in turn?
19:46 JoeJulian If you're sure your inodes are large enough, the number of inodes used should be fairly accurate.
19:47 nirokato_ joined #gluster
19:47 JoeJulian well, on a brick it would be half the inodes used.
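
Concretely, with placeholder paths:

    # total entries (files + directories) under a directory
    find /mnt/gluster/some/dir | wc -l
    # inode usage of a brick's filesystem; on a brick, roughly half the used
    # inodes are the hard links under .glusterfs that JoeJulian refers to
    df -i /bricks/b1
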
19:51 glustin joined #gluster
19:56 types joined #gluster
19:56 [diablo] joined #gluster
19:56 derjohn_mob joined #gluster
19:56 types Hello. I'm getting the following while trying to install Gluster 3.8 on ubuntu 16.04
19:56 types N: Skipping acquire of configured file 'main/binary-i386/Packages' as repository 'http://download.gluster.org/pub/gluster/glusterfs/3.8/LATEST/Debian/jessie/apt jessie InRelease' doesn't support architecture 'i386'
19:56 glusterbot Title: Index of /pub/gluster/glusterfs/3.8/LATEST/Debian/jessie/apt (at download.gluster.org)
19:57 types then trying to apt-get install yields
19:57 types The following packages have unmet dependencies:
19:57 types glusterfs-server : Depends: glusterfs-common (>= 3.8.8-1) but it is not going to be installed
19:57 types Depends: glusterfs-client (>= 3.8.8-1) but it is not going to be installed
19:57 types E: Unable to correct problems, you have held broken packages.
19:57 types any idea on how to solve this?
19:58 social joined #gluster
20:05 types ok, solved
20:05 types by adding the proper ppa
20:05 types sudo add-apt-repository ppa:gluster/glusterfs-3.8
20:05 types :D
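
The full sequence, for anyone hitting the same thing; the sources.list.d filename is a guess, so use whichever file carries the download.gluster.org entry:

    # drop the Debian jessie repo that doesn't serve i386 packages
    sudo rm /etc/apt/sources.list.d/gluster.list
    sudo add-apt-repository ppa:gluster/glusterfs-3.8
    sudo apt-get update
    sudo apt-get install glusterfs-server
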
20:18 jdossey joined #gluster
20:25 renout_away joined #gluster
20:26 bbooth joined #gluster
20:30 social joined #gluster
20:43 jdossey joined #gluster
20:52 Philambdo joined #gluster
20:59 raghu joined #gluster
21:02 farhorizon joined #gluster
21:04 msvbhat joined #gluster
21:16 farhorizon joined #gluster
22:00 Jacob843 joined #gluster
22:01 Vapez joined #gluster
22:09 farhorizon joined #gluster
22:11 jdossey joined #gluster
22:12 tallmocha joined #gluster
22:19 mathatoms joined #gluster
22:22 bbooth joined #gluster
22:23 mathatoms is there a limit to the number of bricks that can be added to a single machine?
22:24 musa22 joined #gluster
22:24 mathatoms i recall coming across something in some redhat documentation that said each machine is limited to 24 bricks, but I can't find anything in the gluster documentation to confirm it
22:29 tallmocha We had some network issues earlier today which messed up our gluster cluster. A few hours ago there were ~50 heal entries, and they slowly went down to 1. This last entry has been stuck for over an hour now, with one brick saying "Possibly undergoing heal" and then, a few minutes later, "Is in split-brain". Anything I can do or check?
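
A sketch of the usual first steps for a stuck entry like this; the volume name is a placeholder, and the policy-based resolution commands exist from gluster 3.7 onward:

    # list entries gluster itself considers split-brained
    gluster volume heal VOLNAME info split-brain
    # resolve a single file by policy, e.g. keep the copy with the newest mtime
    gluster volume heal VOLNAME split-brain latest-mtime /path/inside/volume
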
22:33 farhorizon joined #gluster
22:34 cholcombe joined #gluster
22:41 farhorizon joined #gluster
22:48 farhorizon joined #gluster
22:56 johnmilton joined #gluster
23:25 musa22 joined #gluster
23:40 tallmocha joined #gluster
23:40 nh2_ joined #gluster
23:48 Wizek_ joined #gluster
