
IRC log for #gluster, 2017-02-05


All times shown according to UTC.

Time Nick Message
00:15 shyam joined #gluster
00:26 Jacob843 joined #gluster
00:29 Seth_Karlo joined #gluster
00:31 irated joined #gluster
00:31 irated Hey Guys
00:31 Seth_Karlo joined #gluster
00:32 irated I have a replicated volume and normally it writes at about 600 files per second. Add the java process on top (i.e. the actual api) and it drops to 10-15 files per second.
00:33 irated File-Size is 64k and record-size is 64k as well.
00:34 irated When the java process goes to local storage or nfs (HA pair not gluster) I get near the 600 file mark.
00:34 irated Is there some call i need to disable to make this all uber fast?
00:37 irated I was thinking about switching the mount to async to see if it was some sync call, but it appears you can't do that. Thinking O_DIRECT or some other call is slowing it down.
00:38 irated Maybe even lookup amplification is happening
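A minimal sketch of the kind of small-file write test being described, assuming a FUSE mount at /mnt/gfs (the path and file count are illustrative, not irated's actual setup):

    mkdir -p /mnt/gfs/bench
    # write 600 files of 64k each and time it, for a files-per-second baseline
    time sh -c 'for i in $(seq 1 600); do
      dd if=/dev/zero of=/mnt/gfs/bench/file-$i bs=64k count=1 2>/dev/null
    done'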
00:58 DSimko I have a 2-node gluster in replica with 5 bricks on it. For the last 20 hours I have had one brick sporadically go offline.
01:54 arpu joined #gluster
02:09 phileas joined #gluster
02:28 derjohn_mob joined #gluster
02:49 ilbot3 joined #gluster
02:49 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:50 susant joined #gluster
03:13 riyas joined #gluster
03:54 shyam joined #gluster
04:04 Seth_Karlo joined #gluster
04:07 daMaestro joined #gluster
04:22 tyler274 joined #gluster
04:33 gem joined #gluster
04:40 armyriad joined #gluster
04:51 kramdoss_ joined #gluster
04:58 gem joined #gluster
05:29 jiffin joined #gluster
06:14 susant joined #gluster
06:16 kramdoss_ joined #gluster
06:57 susant left #gluster
07:11 susant joined #gluster
07:17 susant left #gluster
08:17 rjoseph joined #gluster
08:39 ahino joined #gluster
08:55 Seth_Karlo joined #gluster
09:06 unlaudable joined #gluster
09:11 unlaudable joined #gluster
09:25 mb_ joined #gluster
09:30 Seth_Karlo joined #gluster
09:31 jiffin joined #gluster
09:31 Seth_Kar_ joined #gluster
09:35 Seth_Karlo joined #gluster
09:35 Seth_Karlo joined #gluster
09:40 Marbug_ joined #gluster
09:49 nishanth joined #gluster
10:07 jiffin joined #gluster
10:22 Jules- joined #gluster
10:30 Seth_Karlo joined #gluster
11:04 musa22 joined #gluster
11:07 Jules- Hey everybody. Gnfs seems broken on latest 3.9.1-1. It was a mistake to upgrade; my sites are currently flapping. Is there any way to downgrade to 3.7.8 again?
11:09 unlaudable joined #gluster
11:41 kramdoss_ joined #gluster
11:43 Seth_Kar_ joined #gluster
11:50 Teraii joined #gluster
11:51 flying joined #gluster
11:52 Teraii hello here
11:52 Teraii is glusterfs usable in a production environment?
11:53 yalu joined #gluster
12:05 jiffin joined #gluster
12:16 susant joined #gluster
12:16 susant left #gluster
12:24 Seth_Karlo joined #gluster
12:27 Seth_Karlo joined #gluster
12:27 Seth_Karlo joined #gluster
12:36 Teraii anyone alive here? :)
12:41 ashiq joined #gluster
12:49 gem joined #gluster
12:50 DV joined #gluster
12:56 Wizek_ joined #gluster
13:22 nbalacha joined #gluster
13:23 Seth_Kar_ joined #gluster
13:26 Seth_Karlo joined #gluster
13:41 caitnop joined #gluster
14:05 Seth_Kar_ joined #gluster
14:10 Teraii hum seems not
14:10 Teraii i've tested
14:10 Teraii glusterfs is not ready for production
14:17 f0rpaxe How come?
14:19 ashiq joined #gluster
14:20 DV joined #gluster
14:43 nthomas_ joined #gluster
15:11 marlinc joined #gluster
15:17 * Teraii doesn't know :)
15:18 Teraii listing directory during write on another node will break replication
15:18 Teraii on the client
15:19 Teraii (the files are correctly replicated but seem truncated on the mounted dir)
15:20 kramdoss_ joined #gluster
15:21 Gambit15 joined #gluster
15:51 mhulsman joined #gluster
15:58 susant joined #gluster
15:59 jkroon joined #gluster
16:07 Dsimko joined #gluster
16:25 JoeJulian Teraii: Weekends are often a bit quieter and, apparently, it's quieter in non-American time zones.
16:26 JoeJulian Yes, it's used in production by thousands.
16:26 jkroon joined #gluster
16:31 Teraii JoeJulian, so, a "little" issue here on the first test :)
16:32 Teraii i have tested with the example described here :
16:32 Teraii https://gluster.readthedocs.io/en/latest/Quick-Start-Guide/Quickstart/#step-5-set-up-a-glusterfs-volume
16:32 glusterbot Title: Quick start Guide - Gluster Docs (at gluster.readthedocs.io)
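The step the Quickstart describes is roughly this loop run from a client mount (a sketch; the exact source file and count in the tutorial may differ):

    # mount the volume, then create 100 test copies through the mount
    mount -t glusterfs server1:/gv0 /mnt/gfs
    for i in $(seq -w 1 100); do
      cp -p /var/log/messages /mnt/gfs/copy-test-$i
    done
    # each brick should then hold the replicated copies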
16:32 Teraii :)
16:32 plarsen joined #gluster
16:32 Teraii on freebsd 11
16:33 Teraii the write seems to be working
16:34 Teraii but on the mounted dir i see differences between the 2 nodes if i access the dir on both nodes
16:35 JoeJulian ~pasteinfo | Teraii
16:35 glusterbot Teraii: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
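For reference, on a two-node replica that output looks roughly like this (volume and server names illustrative; Teraii's actual paste is at the link below):

    # gluster volume info gv0
    Volume Name: gv0
    Type: Replicate
    Status: Started
    Number of Bricks: 1 x 2 = 2
    Transport-type: tcp
    Bricks:
    Brick1: server1:/var/gluster/vg0
    Brick2: server2:/var/gluster/vg0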
16:36 Teraii https://paste.fedoraproject.org/547467/86312607/
16:36 glusterbot Title: #547467 • Fedora Project Pastebin (at paste.fedoraproject.org)
16:37 Teraii tip: i must remount on the second node to see the synchronized files
16:37 JoeJulian Ok, thanks.
16:38 JoeJulian That rules out a couple possible user errors. :)
16:38 Teraii :)
16:38 Teraii i hope :)
16:39 Teraii user errors are easy to terminate :p
16:39 JoeJulian heh
16:39 JoeJulian So tell me of one example difference between bricks.
16:40 Teraii bricks are physically the same
16:40 Teraii but on mounted dir
16:41 Teraii on the node where the file is uploaded: ok
16:41 Teraii on the other node (slave?): the file is truncated
16:42 Teraii (528 MB on master, 54 MB on slave)
16:42 JoeJulian @glossary
16:42 yalu joined #gluster
16:42 glusterbot JoeJulian: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume" which is accessed from a "client". The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
16:42 JoeJulian Just to help with terms.
16:42 Teraii ok thanks
16:42 JoeJulian So you're writing to /mnt/gfs, yes?
16:42 Teraii yes
16:43 Teraii the bricks are in /var/gluster/vg0/
16:43 Teraii (and the file here is correctly synchronized)
16:43 Teraii (between the 2 nodes)
16:44 JoeJulian Let me restate to ensure I understand.
16:45 JoeJulian You use the write loop as shown in the tutorial. It creates 100 files of 528MB and an ls from that client shows the correct stats. But an ls from the *other* client shows each of those same files as being 54MB.
16:46 Teraii no
16:46 Teraii the test as shown executes correctly
16:47 Teraii i've made another test with one 528 MB file
16:47 Teraii :
16:47 Teraii node 1 :
16:47 Teraii -rw-r--r-- 1 root wheel 167346176 2016-07-29 11:16 video.mp4
16:47 glusterbot Teraii: -rw-r--r's karma is now -23
16:47 Teraii node 2 :
16:47 Teraii -rw-r--r-- 1 root wheel 541494061 2016-07-29 11:16 video.mp4
16:47 glusterbot Teraii: -rw-r--r's karma is now -24
16:47 JoeJulian Poor -rw-r--r
16:47 glusterbot JoeJulian: -rw-r's karma is now -1
16:47 JoeJulian lol
16:48 Teraii ? :p
16:48 Teraii see the size
16:48 Teraii (upload is finished)
16:49 Teraii if i remount /mnt/gfs the correct size appears
16:50 Teraii am i missing something?
16:50 JoeJulian I think I remember seeing something about this and how bsd handles fstat
16:50 Teraii ha
16:50 JoeJulian See if they hash compare correctly
16:50 Teraii how can i verify?
16:51 JoeJulian sha1sum
16:51 Teraii ok
16:52 Teraii sha1 match
16:53 JoeJulian Ok, good. Then it's just the fstat thing. I'm trying to find where it was discussed.
16:53 Teraii (on /var/gluster/vg0)
16:53 JoeJulian ... and see if there's an open bug for it.
16:53 JoeJulian I was suggesting you get into the state where the sizes don't match between clients, and compare the hash from the client perspective.
16:53 JoeJulian I suspect they'll match.
16:54 JoeJulian Proving that the whole file is accessible, but the directory listing is wrong.
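Concretely, that check looks something like this on each node (FreeBSD ships sha1 rather than GNU sha1sum; paths from Teraii's setup):

    # through the mount (the client view)
    sha1 /mnt/gfs/video.mp4
    # directly on the brick (the server view)
    sha1 /var/gluster/vg0/video.mp4
    # matching digests with mismatching ls sizes would point at stale stat
    # metadata rather than missing data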
16:54 Teraii on /mnt/gfs they mismatch!
16:54 Teraii (and the output differs from the result on /var/gluster/vg0)
16:54 JoeJulian >:(
16:55 Teraii does gluster insert some metadata in files?
16:55 Teraii i'm testing with remounting
16:55 JoeJulian Not in the files, it does add ,,(extended attributes) which are stored in their own inodes.
16:55 glusterbot (#1) To read the extended attributes on the server: getfattr -m . -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://pl.atyp.us/hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
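Run against a file on the brick (not the mount), that produces something like the following; trusted.gfid and trusted.afr.* are the standard attribute names, while the volume name and values here are illustrative:

    # getfattr -m . -d -e hex /var/gluster/vg0/video.mp4
    trusted.afr.gv0-client-0=0x000000000000000000000000
    trusted.afr.gv0-client-1=0x000000000000000000000000
    trusted.gfid=0x1234567890abcdef1234567890abcdef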
16:57 Dsimko I have a 2 node gluster running 3.8.4 on CentOS 6.5 with 5 bricks. One of my bricks sporadically keeps going offline. Looking for assistance since I cannot find anything wrong.
16:58 Teraii the original sum: f3d0f9bab959ca8e060fedbe5a893313a4cae6b2
16:58 Teraii physical sum is the same
16:58 Teraii but on mounted dir ...
16:58 Teraii no :p
16:59 JoeJulian Dsimko: Have you checked the brick log?
16:59 Teraii (the result differs when i remount the dir)
16:59 Dsimko Yes I have
16:59 Dsimko I get a signal 11
16:59 Dsimko It started after I enabled trashcan on all of them
16:59 Dsimko due to a user accidentally deleting a file
16:59 Dsimko on all the bricks
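If the crashes started when the trash feature went on, toggling it back off is a cheap way to test the correlation; a sketch, with the volume name illustrative:

    gluster volume get myvol features.trash      # confirm it is on
    gluster volume set myvol features.trash off  # see if the segfaults stop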
16:59 musa22 joined #gluster
17:00 JoeJulian segfault... Please use fpaste to paste the crash info and several log lines above it.
17:00 JoeJulian fpaste.org
17:00 JoeJulian I meant
17:00 Dsimko ok ty
17:01 JoeJulian Teraii: I'm getting close to the limits of my abilities to help. I have only used bsd once about 18 years ago. Hated it and never went back. Do check the client log (/var/log/glusterfs/mnt-gfs.log) and see if there are any clues.
17:02 Teraii :)
17:05 Teraii nothing in the log
17:05 Teraii can we configure more verbose logging?
17:05 JoeJulian I'm stuck. I'd like to blame bsd's fuse implementation.
17:06 JoeJulian Sure, but if there's not an error, I'd be surprised if it tells you anything.
17:06 JoeJulian gluster volume set help | grep log
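For example, the diagnostics options control verbosity; a sketch with an illustrative volume name (DEBUG is very chatty, so revert when done):

    gluster volume set gv0 diagnostics.client-log-level DEBUG
    # reproduce the problem, read /var/log/glusterfs/mnt-gfs.log, then revert
    gluster volume set gv0 diagnostics.client-log-level INFO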
17:07 Teraii you're probably right
17:08 JoeJulian Teraii: Please do file a bug report. This is supposed to be working. There are plenty of bsd tests that are run as part of the ci process.
17:08 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
17:08 rjoseph joined #gluster
17:09 Teraii JoeJulian, please note: if i'm not accessing the directory, the files are identical
17:09 Teraii i can easily reproduce the issue
17:09 JoeJulian And even if it's not gluster's fault, the developers are very good about working with kernel developers to fix their bugs.
17:09 Teraii good ;)
17:10 Teraii the second node seems slow
17:11 Teraii hum and consuming cpu
17:12 JoeJulian Clients connect directly to all the servers that are part of the volume. Replication happens on the client side. There's no functional difference from one client or server to another.
17:13 JoeJulian Not saying that to discount your observations, but merely to put them in perspective.
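One way to see that topology, assuming a reasonably current gluster (volume name illustrative): every brick should list a connection from every client.

    gluster volume status gv0 clients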
17:13 Teraii ok
17:13 shyam joined #gluster
17:13 Teraii i love knowledge :)
17:13 JoeJulian If you know how it works, you're better prepared to intuit what you're seeing.
17:14 Teraii i'll try to reset and begin at the start
17:14 JoeJulian Dsimko: Are you close to pasting that crash info? I'm in need of some breakfast and will need to go back to my weekend soon.
17:14 Teraii indeed
17:16 JoeJulian Ok, I'm going to go make the family some breakfast. I'll check back in later to see if I can answer anything that gets left here.
17:16 Teraii bon appétit :)
17:17 Teraii thank you for the help ;)
17:31 susant left #gluster
17:34 musa22 joined #gluster
17:44 edong23 joined #gluster
17:57 Dsimko Sorry @JoeJulian, got called away for work. Here is the link to the fpaste; this occurs right before the brick goes offline. https://da.gd/Vd6AF
17:57 glusterbot Title: #547689 • Fedora Project Pastebin (at da.gd)
17:59 Dsimko It is almost like it cannot find the volfile for that brick
18:11 JoeJulian Dsimko: That's a client log. We need the brick log (/var/log/glusterfs/bricks/${brick_path//\//-}.log)
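For a brick at /var/gluster/vg0, that substitution yields something like the following (a sketch of the naming convention, with the leading slash dropped first):

    brick_path=/var/gluster/vg0
    p=${brick_path#/}                                # drop the leading slash
    echo "/var/log/glusterfs/bricks/${p//\//-}.log"  # remaining slashes become dashes
    # -> /var/log/glusterfs/bricks/var-gluster-vg0.log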
18:23 jkroon joined #gluster
18:27 irated JoeJulian: you around?
18:29 level7 joined #gluster
18:31 JoeJulian On and off
18:36 irated You've helped me with issues in the past, so... I have a weird one. Not sure if you saw it last night. If I benchmark directly against the mount I get 600 files per sec
18:36 irated Add Java and it drops to 40-50 files per sec
18:36 irated the java behind the call is fairly simple, and it performs great against an nfs server
18:37 irated Do you know of any bottlenecks caused by java.io?
18:37 JoeJulian I'd have to guess that java does a bunch of stat calls. Have you looked at the gluster jni?
18:37 irated Not yet, we are trying to keep storage libs out of the code..
18:37 JoeJulian https://github.com/semiosis/libgfapi-jni
18:37 glusterbot Title: GitHub - semiosis/libgfapi-jni: Java Native Interface (JNI) bindings for libgfapi (the GlusterFS client API) (at github.com)
18:37 irated so that it's easy to switch backends
18:37 irated yeah we "looked" at it
18:38 irated We have multiple backends we use
18:38 irated so fuse-like stuff is the best
18:38 irated My boss is trying to convince me we should switch back to nfs, but I don't like scale-up infra.
18:38 JoeJulian Too bad semiosis has moved on to other technologies. He would have been the best one to answer this.
18:39 irated Lame Sauce
18:39 irated I guess i could run profile
18:40 irated that would show the stat calls, right?
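Gluster's built-in profiler would show that; it reports per-fop call counts and latencies per brick (volume name illustrative):

    gluster volume profile gv0 start
    # run the java workload, then inspect; look for LOOKUP/STAT dominating
    gluster volume profile gv0 info
    gluster volume profile gv0 stop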
18:40 JoeJulian I tried profiling a java application once. Spent 4 days on it and finally gave up.
18:41 JoeJulian Might be easier to use wireshark and see what it's doing from the network side.
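A syscall summary on the client is another cheap first look; a sketch, where app.jar stands in for the actual application:

    # -f follows threads, -c prints a count/time summary on exit;
    # a flood of stat-family calls per write would support the lookup theory
    strace -f -c java -jar app.jar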
18:42 irated Here is the code we use for reference: https://gist.github.com/pryorda/14a18e9ecbd302af73903cd4e37c1e0b
18:42 glusterbot Title: Example Path · GitHub (at gist.github.com)
18:43 irated Its a pretty straight forward spring app.
18:43 irated Just java + gluster = no bueno
18:46 irated JoeJulian: this might help: https://github.com/semiosis/glusterfs-java-filesystem
18:46 glusterbot Title: GitHub - semiosis/glusterfs-java-filesystem: GlusterFS for Java (at github.com)
18:48 irated I just realized something: in all the code, he uses io instead of nio.
18:48 irated he being our dev
18:49 ahino joined #gluster
18:55 mhulsman joined #gluster
19:03 Dsimko Sorry @JoeJulian here is the brick log https://da.gd/Q2Lb and thank you.
19:03 glusterbot Title: #547908 • Fedora Project Pastebin (at da.gd)
19:21 farhorizon joined #gluster
19:25 JoeJulian Dsimko: Sorry, there's no signal 11 there so I cannot determine where it crashed nor can I predict how to prevent it crashing in the future.
19:26 Dsimko Ok one second
19:28 yalu joined #gluster
19:28 Dsimko @JoeJulian does this help? https://da.gd/V4q7c
19:50 irated JoeJulian: wasn't there some LOOKUP cache performance stuff you could do?
19:50 irated based on the checks i'm seeing, lookups are the slow part.
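There are md-cache and dht lookup tunables along those lines; option names and safe values vary by release, so check "gluster volume set help" first (volume name illustrative):

    # cache lookup/stat results on the client for longer
    gluster volume set gv0 performance.md-cache-timeout 60
    gluster volume set gv0 performance.stat-prefetch on
    # let dht skip the everywhere-lookup for hashed placements
    gluster volume set gv0 cluster.lookup-optimize on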
19:52 musa22 joined #gluster
20:01 level7_ joined #gluster
20:13 edong23 joined #gluster
20:14 edong23 joined #gluster
20:21 ahino joined #gluster
20:24 ashiq joined #gluster
20:58 DSimko joined #gluster
20:58 DSimko @JoeJulian did that last fpaste help? Had to disconnect to go to the office, so if you typed anything I missed it.
21:13 jdossey joined #gluster
21:25 jdossey joined #gluster
21:28 Homastli joined #gluster
21:48 Homastli what do people use to share a vol in samba? fuse or vfs?
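Both are used; the vfs module (vfs_glusterfs) talks libgfapi directly and skips the FUSE hop. A minimal share along these lines (share, volume, and server names illustrative):

    [gvol]
        vfs objects = glusterfs
        glusterfs:volume = gv0
        glusterfs:volfile_server = localhost
        path = /
        read only = no
        kernel share modes = no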
21:58 cyberbootje joined #gluster
22:03 pulli joined #gluster
22:23 yalu_ joined #gluster
22:25 xrandr_laptop joined #gluster
22:25 xrandr_laptop Hello. I was wondering what the max storage is that gluster can handle
22:33 cyberbootje1 joined #gluster
22:39 farhorizon joined #gluster
22:42 RustyB joined #gluster
22:46 derjohn_mob joined #gluster
23:49 Seth_Karlo joined #gluster
23:50 plarsen joined #gluster
