IRC log for #gluster, 2016-03-15

All times shown according to UTC.

Time Nick Message
00:00 ovaistariq joined #gluster
00:01 64MAAK95B joined #gluster
00:15 johnmilton joined #gluster
00:19 ovaistariq joined #gluster
00:37 dgandhi joined #gluster
00:39 dgandhi joined #gluster
00:41 dgandhi joined #gluster
00:58 dlambrig_ joined #gluster
00:59 johnmilton joined #gluster
01:01 haomaiwa_ joined #gluster
01:04 ovaistariq joined #gluster
01:13 DV joined #gluster
01:20 rouven joined #gluster
01:27 Lee1092 joined #gluster
01:32 EinstCrazy joined #gluster
01:39 nehar joined #gluster
01:43 baojg joined #gluster
01:46 bit4man joined #gluster
01:46 scones left #gluster
01:48 atrius joined #gluster
01:49 haomaiwa_ joined #gluster
01:50 haomaiwa_ joined #gluster
02:01 hagarth joined #gluster
02:01 haomaiwa_ joined #gluster
02:25 dlambrig_ joined #gluster
02:46 dlambrig_ joined #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:48 EinstCrazy joined #gluster
03:01 haomaiwa_ joined #gluster
03:18 overclk joined #gluster
03:19 kdhananjay joined #gluster
03:21 hchiramm joined #gluster
03:21 hchiramm_ joined #gluster
03:48 atinm joined #gluster
03:50 DV joined #gluster
03:52 shubhendu joined #gluster
03:55 dlambrig_ joined #gluster
04:01 haomaiwa_ joined #gluster
04:05 sakshi joined #gluster
04:08 itisravi joined #gluster
04:11 gem joined #gluster
04:13 anmol joined #gluster
04:13 kkeithley1 joined #gluster
04:14 skoduri joined #gluster
04:17 jiffin joined #gluster
04:17 jmarley joined #gluster
04:18 kanagaraj joined #gluster
04:21 dlambrig_ joined #gluster
04:26 dlambrig_ joined #gluster
04:38 ppai joined #gluster
04:41 shubhendu joined #gluster
04:47 nbalacha joined #gluster
04:50 pur joined #gluster
04:51 prasanth joined #gluster
04:51 nishanth joined #gluster
04:53 hchiramm_ joined #gluster
04:54 Saravanakmr joined #gluster
04:59 gowtham joined #gluster
04:59 vmallika joined #gluster
05:01 hchiramm joined #gluster
05:01 haomaiwa_ joined #gluster
05:04 arcolife joined #gluster
05:05 natarej joined #gluster
05:05 ndarshan joined #gluster
05:07 EinstCrazy joined #gluster
05:09 EinstCrazy joined #gluster
05:14 hgowtham joined #gluster
05:17 skoduri joined #gluster
05:21 aravindavk joined #gluster
05:21 poornimag joined #gluster
05:24 Manikandan joined #gluster
05:26 rjoseph joined #gluster
05:28 ahino joined #gluster
05:30 rafi joined #gluster
05:33 gowtham joined #gluster
05:47 Apeksha joined #gluster
05:47 Merlin_ joined #gluster
05:48 karthikfff joined #gluster
05:48 ggarg joined #gluster
05:57 Wizek__ joined #gluster
05:57 Wizek joined #gluster
05:59 Bhaskarakiran joined #gluster
06:01 haomaiwa_ joined #gluster
06:03 ovaistariq joined #gluster
06:04 ju5t joined #gluster
06:10 nehar_ joined #gluster
06:16 karnan joined #gluster
06:23 Merlin_ joined #gluster
06:28 Wizek__ joined #gluster
06:30 ramky joined #gluster
06:35 ppai joined #gluster
06:36 kdhananjay1 joined #gluster
06:45 Merlin_ joined #gluster
06:46 arcolife joined #gluster
06:51 kotreshhr joined #gluster
06:51 sakshi joined #gluster
06:52 unlaudable joined #gluster
06:55 vmallika joined #gluster
06:55 prasanth joined #gluster
06:58 kdhananjay joined #gluster
07:01 haomaiwang joined #gluster
07:03 Merlin_ joined #gluster
07:13 Wizek_ joined #gluster
07:13 Wizek joined #gluster
07:15 jtux joined #gluster
07:18 mhulsman joined #gluster
07:23 ovaistariq joined #gluster
07:25 Wizek__ joined #gluster
07:26 Merlin_ joined #gluster
07:28 Wizek joined #gluster
07:29 sakshi joined #gluster
07:33 sakshi joined #gluster
07:33 [Enrico] joined #gluster
07:36 ekuric joined #gluster
07:40 jtux joined #gluster
07:42 robb_nl joined #gluster
07:42 [diablo] joined #gluster
07:46 Merlin_ joined #gluster
07:53 fsimonce joined #gluster
07:56 Merlin_ joined #gluster
07:57 ju5t joined #gluster
08:01 haomaiwa_ joined #gluster
08:08 jri joined #gluster
08:13 nehar joined #gluster
08:15 kshlm joined #gluster
08:15 DV joined #gluster
08:21 Merlin_ joined #gluster
08:24 archit_ joined #gluster
08:24 ovaistariq joined #gluster
08:25 rouven joined #gluster
08:25 post-factum JoeJulian: back to our talk on macvtap
08:25 post-factum JoeJulian: error: error creating macvtap interface macvtap0@team0.10 (00:16:3e:6c:e1:eb): Device or resource busy
08:25 gowtham joined #gluster
08:25 post-factum JoeJulian: I doubt it will work with VLAN interface
08:32 kovshenin joined #gluster
08:38 ju5t joined #gluster
08:40 jiffin1 joined #gluster
08:41 ira joined #gluster
08:41 deniszh joined #gluster
08:49 ctria joined #gluster
08:53 hackman joined #gluster
08:54 gowtham joined #gluster
08:55 iwinux joined #gluster
08:58 iwinux hi, which version of python-xattr should I install for glusterfind to work?
08:58 iwinux I'm using GlusterFS 3.7.3
08:59 jiffin joined #gluster
09:00 iwinux glusterfind is trying to call `xattr.list` when running `glusterfind pre`, which doesn't exist in either 0.6.4 or 0.8.0
09:00 iwinux traceback here: https://gist.github.com/iwinux/fbc26da2b957de4a6f6c
09:00 glusterbot Title: glusterfind pre session vol · GitHub (at gist.github.com)
09:00 post-factum JoeJulian: umm, i see. if team0.10 is already in old bridge, it cannot be used for macvtap. so, i need to deconstruct bridges first
09:01 haomaiwa_ joined #gluster
09:01 jiffin1 joined #gluster
09:02 sakshi joined #gluster
09:04 jiffin joined #gluster
09:05 ahino joined #gluster
09:10 DJClean hi, we're looking at setting up gluster spread over 2 locations, and i'm wondering if in a distributed-replicated setup, it can be set up in such a way that the "replicated" part is aware that it should store stuff @ location A and B so we don't end up with both copies of the data being in the same location
09:12 Merlin_ joined #gluster
09:17 ghenry joined #gluster
09:18 shubhendu joined #gluster
09:19 Slashman joined #gluster
09:20 ndarshan joined #gluster
09:25 post-factum DJClean: you explicitly define replicated bricks location when you create the volume. the order matters
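
A sketch of that ordering, with hypothetical hostnames: with replica 2, each consecutive pair of bricks on the command line becomes one replica set, so alternating sites keeps the two copies of any file in different locations:

    gluster volume create myvol replica 2 \
        siteA-node1:/bricks/b1 siteB-node1:/bricks/b1 \
        siteA-node2:/bricks/b2 siteB-node2:/bricks/b2
    gluster volume info myvol    # check that each replica set pairs one brick from each site
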
09:28 Norky joined #gluster
09:31 atalur joined #gluster
09:33 jiffin1 joined #gluster
09:34 dlambrig__ joined #gluster
09:37 Apeksha joined #gluster
09:38 Merlin_ joined #gluster
09:40 ndarshan joined #gluster
09:41 iwinux well, turns out I should install python-pyxattr instead of python-xattr
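
A quick way to tell which implementation is installed, since both packages ship a module named xattr but only python-pyxattr provides the module-level list() call glusterfind uses (per the traceback above); the path-less check below is just a probe, not part of glusterfind:

    python -c 'import xattr; print(hasattr(xattr, "list"))'    # True with python-pyxattr, False with python-xattr
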
09:41 post-factum JoeJulian: ok, managed that macvtap. interesting observation: if macvtap is created on host, RTT is lower than with usual bridge. if, however, macvtap is used for VM, RTT is higher than with usual bridge
09:46 post-factum JoeJulian: error: unsupported configuration: Multiqueue network is not supported for: direct
09:46 post-factum JoeJulian: :(((((((
09:47 skoduri joined #gluster
09:49 Merlin_ joined #gluster
09:50 gem joined #gluster
09:50 post-factum JoeJulian: oh, what an interesting investigation: https://bugzilla.redhat.com/show_bug.cgi?id=1313264
09:50 glusterbot Bug 1313264: medium, medium, rc, mprivozn, POST , direct interface with multiqueue enabled donesn't support hotplugging
09:51 post-factum JoeJulian: sorry for spam, I guess, you are interested in this subject too
09:53 nishanth joined #gluster
09:54 DJClean post-factum: ty for that, didn't know that tbh :)
09:54 DJClean should help me on my way furtehr
09:54 DJClean further*
09:54 hchiramm joined #gluster
09:54 hchiramm_ joined #gluster
09:59 Merlin_ joined #gluster
10:00 Slydder joined #gluster
10:01 haomaiwa_ joined #gluster
10:01 atalur joined #gluster
10:02 post-factum DJClean: just ask, and someone will answer
10:06 post-factum JoeJulian: also, https://bugzilla.redhat.com/show_bug.cgi?id=1240439
10:06 glusterbot Bug 1240439: unspecified, unspecified, rc, mprivozn, ON_QA , Add multiqueue support for 'direct' interface types.
10:06 post-factum JoeJulian: so, it is basically fixed, but not for current CentOS :(
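
For context, the configuration that trips that bug is a multiqueue virtio NIC on a 'direct' (macvtap) interface; an illustrative libvirt domain XML fragment, with the queue count as an assumption and the source device taken from the error above:

    <interface type='direct'>
      <source dev='team0.10' mode='bridge'/>
      <model type='virtio'/>
      <driver name='vhost' queues='4'/>
    </interface>
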
10:08 hchiramm_ joined #gluster
10:14 Slydder ok
10:14 Slydder figured it out.
10:15 gem_ joined #gluster
10:18 jiffin1 joined #gluster
10:21 Merlin_ joined #gluster
10:21 DJClean i will no worries been idling here for way too long :)
10:22 rafi1 joined #gluster
10:23 archit_ joined #gluster
10:25 ovaistariq joined #gluster
10:25 jiffin1 joined #gluster
10:27 hchiramm_ joined #gluster
10:28 hchiramm joined #gluster
10:28 shubhendu joined #gluster
10:30 nishanth joined #gluster
10:30 skoduri joined #gluster
10:32 deniszh1 joined #gluster
10:34 kotreshhr left #gluster
10:35 Merlin_ joined #gluster
10:38 dlambrig_ joined #gluster
10:49 Merlin_ joined #gluster
11:01 haomaiwa_ joined #gluster
11:01 mhulsman joined #gluster
11:04 Merlin_ joined #gluster
11:04 Wizek joined #gluster
11:04 Wizek_ joined #gluster
11:07 mhulsman joined #gluster
11:08 rafi joined #gluster
11:09 mhulsman1 joined #gluster
11:14 mhulsman joined #gluster
11:18 Merlin_ joined #gluster
11:21 johnmilton joined #gluster
11:21 mhulsman1 joined #gluster
11:23 caitnop joined #gluster
11:26 Merlin_ joined #gluster
11:31 hchiramm_ joined #gluster
11:31 hchiramm joined #gluster
11:34 Wizek_ joined #gluster
11:36 jiffin1 joined #gluster
11:39 bit4man joined #gluster
11:44 shyam joined #gluster
11:45 deniszh joined #gluster
11:45 Gnomethrower joined #gluster
11:45 Merlin_ joined #gluster
11:49 drankis joined #gluster
11:49 Norky joined #gluster
11:50 atinm joined #gluster
11:52 patrakov joined #gluster
11:53 patrakov Hello. I want to find which distributed filesystem is suitable for my needs.
11:53 patrakov There are two servers, one in USA and one in China
11:53 patrakov they store files, and these files need to be synchronized (both directions)
11:54 patrakov i.e. if I delete the file on the Chinese server, then, eventually, it has to get deleted from USA, too
11:55 patrakov Protection against node failures on the client is not important (i.e. it is better if the client in USA only tries to access the USA server)
11:55 patrakov Network between the servers is unreliable and slow - and that's the main thing that we need to overcome
11:56 Merlin_ joined #gluster
11:56 patrakov Should I use glusterfs in this situation, or look elsewhere?
11:58 jiffin patrakov: it is possible, but network latency will be very high, there will be performance bottlenecks
11:58 patrakov network latency is ~300 ms in our case
12:01 post-factum patrakov: this latency is unacceptable
12:01 post-factum patrakov: try something like lsyncd instead
12:01 patrakov thanks for the pointer
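
A minimal lsyncd starting point, with the host and paths as placeholders; note that lsyncd only pushes changes one way, so covering both directions would mean an instance on each side (with care about conflicting edits) or a two-way tool such as unison:

    lsyncd -rsyncssh /srv/data us-server.example.com /srv/data
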
12:01 haomaiwa_ joined #gluster
12:06 hgowtham_ joined #gluster
12:06 hgowtham joined #gluster
12:16 Jules- joined #gluster
12:24 Merlin_ joined #gluster
12:25 ovaistariq joined #gluster
12:37 Merlin_ joined #gluster
12:38 Ulrar When I do a chmod -R on my gluster volume mount point I get a lot of input / output error on each file
12:38 Ulrar But it does chmod fine
12:39 Ulrar No heal in progress, no split brains, what can it be ?
12:39 Saravanakmr joined #gluster
12:41 Ulrar Ha, it's mounted with NFS, maybe that has some impact
12:42 deniszh joined #gluster
12:42 chirino joined #gluster
12:43 anmol joined #gluster
12:43 overclk joined #gluster
12:43 unclemarc joined #gluster
12:45 jiffin Ulrar: can u find any errors in the logs
12:46 shaunm joined #gluster
12:46 jiffin in /var/log/gluster/nfs.log on the nfs-server
12:46 jiffin side
12:47 mhulsman joined #gluster
12:48 Ulrar The nfs.log is empty
12:48 Ulrar No log file seems to be moving
12:48 dlambrig_ joined #gluster
12:49 Gnomethrower Hey guys
12:49 jiffin Ulrar: that's strange
12:50 Gnomethrower Can anyone tell me how much profiling affects performance?
12:50 Gnomethrower There's a monitoring plugin we wish to use that depends on profiling, and the information I've found suggests profiling causes pretty bad perf issues
12:50 jiffin Ulrar: can u file a bug with steps to reproduce , logs , possible packet trace from nfs-server
12:50 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
12:51 jiffin Ulrar: and assigned(jthottan@redhat.com) to me , i will take a look
12:51 Gnomethrower https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Monitor_Workload.html#chap-User_Guide-Monitor_Workload-Profile says "Running profile command can affect system performance while the profile information is being collected. Red Hat recommends that profiling should only be used for debugging."
12:51 glusterbot Title: Chapter 14. Monitoring Your Red Hat Storage Workload (at access.redhat.com)
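
For reference, profiling is toggled per volume, so the usual pattern is collect-then-stop (volume name assumed):

    gluster volume profile myvol start
    gluster volume profile myvol info     # per-brick fop latencies and call counts since start
    gluster volume profile myvol stop     # turn it back off once you have what you need
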
12:53 Ulrar jiffin: The problem is I don't know the steps to reproduce .. I just installed glusterfs, mounted it on nfs, copied the data and tried to chmod it
12:53 Ulrar Never had that happen before
12:53 mpietersen joined #gluster
12:54 jiffin Ulrar: oh
12:54 mpietersen joined #gluster
12:55 jiffin Ulrar: u had mentioned that using chmod -R causes this issue
12:55 jiffin not on normal chmod
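
A hypothetical minimal reproduction for the eventual bug report, mirroring what Ulrar describes rather than a confirmed recipe (volume name and paths assumed):

    mount -t nfs -o vers=3 localhost:/myvol /mnt/nfs-myvol
    cp -a /srv/source-data /mnt/nfs-myvol/data
    chmod -R 755 /mnt/nfs-myvol/data    # reportedly prints an Input/output error per file, though the modes do change
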
12:57 mhulsman1 joined #gluster
12:58 mhulsman1 joined #gluster
13:03 TvL2386 joined #gluster
13:04 mhulsman joined #gluster
13:05 skoduri joined #gluster
13:09 plarsen joined #gluster
13:13 sebamontini joined #gluster
13:13 Apeksha joined #gluster
13:14 Slydder hey all. why on earth are entries missing in the xml output of performance info? how can I calculate the % latency when the # of calls for fops "FORGET, RELEASE and RELEASEDIR" are not included?
13:14 squizzi joined #gluster
13:17 mhulsman1 joined #gluster
13:17 Apeksha_ joined #gluster
13:21 overclk joined #gluster
13:22 Merlin_ joined #gluster
13:22 mhulsman joined #gluster
13:24 patrakov joined #gluster
13:26 kdhananjay joined #gluster
13:27 haomaiwang joined #gluster
13:29 ilbot3 joined #gluster
13:29 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
13:30 haomaiwa_ joined #gluster
13:35 muneerse2 joined #gluster
13:35 dlambrig_ joined #gluster
13:39 muneerse joined #gluster
13:40 hamiller joined #gluster
13:46 Merlin_ joined #gluster
13:49 plarsen joined #gluster
13:49 patrakov left #gluster
13:57 shubhendu_ joined #gluster
13:58 aravindavk joined #gluster
14:00 coredump joined #gluster
14:01 haomaiwa_ joined #gluster
14:02 Merlin_ joined #gluster
14:03 mhulsman joined #gluster
14:04 ahino1 joined #gluster
14:04 dbruhn joined #gluster
14:05 bennyturns joined #gluster
14:06 ggarg joined #gluster
14:06 Jules- I'm having exactly the same Issue that Ulrar mentioned!
14:07 Jules- it occurs if i chown -R but doesn't if i do it with find
14:07 Jules- chown: changing ownership of ‘blahblahblah": Input/output error
14:08 Jules- nfs.log:[2016-03-15 14:07:43.776687] W [MSGID: 112199] [nfs3-helpers.c:3418:nfs3_log_common_res] 0-nfs-nfsv3: <gfid:a6ee3547-fd39-40ca-b381-041be04bc732>/StaticRecordCollection.php => (XID: d6165f74, SETATTR: NFS: 5(I/O error), POSIX: 5(Input/output error))
14:08 jiffin Jules-: do u have steps to reproduce ?
14:09 jiffin Jules-: usually remote I/O error comes when connection b/w nfs server and glusterfs server lost
14:09 Ulrar I'm mounting localhost, so it's unlikely to be the problem
14:10 Ulrar I'm using glusterfs to have a replicated storage across two web servers, nothing fancy
14:10 Jules- and it only occurs on nfs mounted volumes. the glusterfs fuse client doesn't produce this I/O errors.
14:11 nbalacha joined #gluster
14:11 jiffin Ulrar: interesting
14:12 Jules- im using three replicated bricks connected with 10GBE Network.
14:12 jiffin Jules-, Ulrar: if it is easily reproducible on chmod -R, i can try that
14:12 jiffin for directory
14:12 Jules- the weird thing is, i do have another cluster with two replicated bricks where it doesn't occur.
14:12 Ulrar Does seem to happen on every chmod -R
14:13 Jules- same setup, same version etc..
14:13 Ulrar Jules-: Yeah, I have other clusters too
14:13 Ulrar Don't know why this one
14:13 Jules- and i think it started with the latest release
14:13 Jules- 3.7.8-1
14:14 jiffin Ulrar, Jules-: oh
14:14 Ulrar glusterfs 3.7.8 built on Feb 11 2016 05:58:55
14:14 Ulrar Debian 8 version
14:14 Jules- Same here :-D
14:14 plarsen joined #gluster
14:14 jiffin Jules-, Ulrar: can one of u open up a bug for this issue
14:15 jiffin so that u can track the progress
14:15 jiffin @byg
14:15 jiffin @bug
14:15 glusterbot jiffin: (bug <bug_id> [<bug_ids>]) -- Reports the details of the bugs with the listed ids to this channel. Accepts bug aliases as well as numeric ids. Your list can be separated by spaces, commas, and the word "and" if you want.
14:15 JoeJulian Could also be split-brain.
14:15 Ulrar JoeJulian: Checked that, it's not for me at least
14:15 JoeJulian I think you're looking for "file a bug"
14:15 Jules- nope, its not split brain JoeJulian
14:15 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
14:16 jiffin JoeJulian: yes
14:16 Jules- i completely detached the whole cluster and also removed the data from the storage path, and rebuilt it from scratch, but the error still occurs.
14:17 Ulrar Jules-: Do you have an account on the bug tracker already maybe ? If not I'll file the bug tonight
14:17 Jules- nope i dont
14:20 natarej joined #gluster
14:26 ovaistariq joined #gluster
14:26 Merlin_ joined #gluster
14:30 shubhendu_ joined #gluster
14:37 ivan_rossi joined #gluster
14:40 Merlin_ joined #gluster
14:41 Wojtek Are there any settings to allow me to tune the self-heal process? We replaced the hardware on a brick, and fixed the brick's gfid/volume-id extended attributes. I modified the parameters like this: cluster.data-self-heal: on, cluster.self-heal-daemon: enable, cluster.entry-self-heal: off, cluster.metadata-self-heal: off because it's the only combination that allowed the heal process to take
14:41 Wojtek place with no impact on the performance (we have millions of small files). Now after a few days the heal seems to have stopped. It sync'ed 2.8TB out of 3.1TB. I've attempted to restart gluster on all nodes, disable/enable heal in an attempt to get it to finish the last 300GB, but it doesnt seem to help. volume heal gv0 statistics shows that an INDEX crawl is still in progress but 0 healed
14:41 Wojtek entries.
14:41 Manikandan joined #gluster
14:42 sebamontini left #gluster
14:42 Caveat4U joined #gluster
14:46 skoduri joined #gluster
14:46 gem joined #gluster
14:47 baojg joined #gluster
14:51 skylar joined #gluster
14:53 farhorizon joined #gluster
14:55 atalur joined #gluster
14:57 muneerse2 joined #gluster
14:58 natarej_ joined #gluster
15:01 haomaiwa_ joined #gluster
15:01 muneerse joined #gluster
15:02 Merlin_ joined #gluster
15:04 JoeJulian cluster.*-self-heal are for client-side heals. disable them all. cluster.self-heal-daemon is enabled by default, that's unnecessary.
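
Concretely, that would look something like this for the volume in question (the reset simply puts the daemon option back to its enabled default):

    gluster volume set gv0 cluster.data-self-heal off
    gluster volume set gv0 cluster.metadata-self-heal off
    gluster volume set gv0 cluster.entry-self-heal off
    gluster volume reset gv0 cluster.self-heal-daemon
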
15:05 JoeJulian Wojtek: What makes you sure there's another 300GB that needs healed?
15:05 JoeJulian Could it just be sparse files?
15:05 ahino joined #gluster
15:06 Jules- JoeJulian: Can you recommend any GlusterFS tunings for Replicated Cluster on 10GBE Network?
15:07 JoeJulian Bb
15:07 JoeJulian I've always liked a good B flat.
15:08 JoeJulian But seriously, the defaults are fine.
15:08 JoeJulian Unless you have a specific reason to change them, of course.
15:10 Caveat4U left #gluster
15:11 jiffin joined #gluster
15:12 Jules- maximum speed/low latency for small files over nfs3
15:14 nehar joined #gluster
15:14 Wojtek JoeJulian: What do you mean by client-side heals? What is the other type of heal? The 300GB diff comes from df -h. On one node it lists 3.1TB usage on the brick, on the one with the new hardware it lists 2.8TB usage. gluster volume status all detail also lists a difference of 36280768 inodes between both bricks
15:16 mowntan joined #gluster
15:17 mowntan joined #gluster
15:17 JoeJulian Wojtek: check with du --apparent
15:17 prasanth joined #gluster
15:18 JoeJulian And the other type of heal is via the self heal daemon (glustershd).
15:20 Wojtek du --apparent-size -hsc is running, should take some time
15:21 Merlin_ joined #gluster
15:22 Wojtek is the daemon type heal the one listed in volume heal x statistics (the index type)? And the client heal is when the volume is mounted on a filesystem and I access a file, say a stat?
15:22 JoeJulian Correct
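
For watching the daemon-side heal from the CLI (volume name from the discussion):

    gluster volume heal gv0 info                     # entries still pending heal
    gluster volume heal gv0 statistics heal-count    # per-brick pending counts
    gluster volume heal gv0 full                     # force a full crawl instead of the index crawl
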
15:23 nbalacha joined #gluster
15:24 Gnomethrower joined #gluster
15:24 Wojtek Good to know, thanks for the clarification
15:26 post-factum JoeJulian: saw my macvtap monologue?
15:26 JoeJulian Haven't had a moment to scroll back yet.
15:26 post-factum JoeJulian: ah, ok
15:27 JoeJulian I wasn't even out of bed before my wife suggested that fixing a problem on her computer might be my highest priority task before I start working.
15:31 wushudoin joined #gluster
15:39 om joined #gluster
15:39 om2 joined #gluster
15:39 om2 Hi
15:39 glusterbot om2: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
15:39 om2 I am trying to optimize glusterfs 3.7 for metadata reads
15:39 om2 the fs is ext4
15:40 Gnomethrower joined #gluster
15:40 om2 it takes almost 1 minute to do a directory listing
15:40 om2 which is horrible if you are connecting via sftp to a server with a  mounted glusterfs
15:40 om2 it just times out
15:40 om2 any ideas?
15:41 om2 I have the following options at this point:
15:41 om2 Options Reconfigured:
15:41 om2 nfs.disable: off
15:41 om2 performance.readdir-ahead: on
15:41 om2 performance.cache-size: 1GB
15:41 om2 performance.io-thread-count: 32
15:41 om2 performance.write-behind-window-size: 2000000
15:41 om2 performance.cache-refresh-timeout: 60
15:42 om2 cluster.server-quorum-ratio: 75%
15:42 JoeJulian That may be an issue if you have a massive number of files in the directory. other than that
15:42 JoeJulian please ,,(paste) and link.
15:42 glusterbot For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
15:43 JoeJulian other than that, if you use an old kernel without readdirplus support...
15:43 baojg joined #gluster
15:44 om2 I am listing the amount of dirs in the directory that won't list
15:44 om2 I would say, it's a couple hundred directories
15:44 om2 probably less
15:44 om2 139
15:44 om2 exactly
15:44 om2 is that too much?
15:44 post-factum JoeJulian: you should always take care of your wife's computer. work may wait
15:46 om2 apologies for the 15 messages in a row
15:46 JoeJulian om2: no, and a ls of 139 directory entries should absolutely not take a minute.
15:46 om2 hmm... what could possibly be the issue?
15:47 JoeJulian Is your volume replicated across a high latency connection?
15:48 om2 well, it is and it's not.  It is linked 1 Gbps privately between 2 replicas, and the other 2 replicas it goes across atlantic. lol
15:48 JoeJulian ~pasteinfo | om2
15:48 glusterbot om2: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
15:48 om2 so it's  replica 4
15:48 JoeJulian Ah, that's why then.
15:49 om2 http://fpaste.org/340364/5696114/
15:49 glusterbot Title: #340364 Fedora Project Pastebin (at fpaste.org)
15:49 kshlm joined #gluster
15:50 Merlin_ joined #gluster
15:50 om2 the read and writes are not bad io
15:50 om2 it's just directory listing.  so probably related to metadata
15:50 JoeJulian When you pull a fstat, it's going to read all the replicas to ensure that a heal is unnecessary and to ensure it's returning valid data. Your directory listing isn't just listing the names of the files, but it's calling an fstat on each one to return the rest of the metadata.
15:51 JoeJulian That means the client is going to have a network round trip across the atlantic for each file.
15:51 mhulsman joined #gluster
15:51 JoeJulian s/file/directory entry/
15:51 glusterbot What JoeJulian meant to say was: That means the client is going to have a network round trip across the atlantic for each directory entry.
15:52 om2 so it does fstat for every file on every replica every time?
15:52 JoeJulian That's up to your application.
15:52 JoeJulian If you "echo *" in the client mount, that should be almost instant.
15:52 dxd2 joined #gluster
15:53 dxd2 heya
15:53 JoeJulian But most of the time "ls" isn't really ls.
15:53 dxd2 Any suggestions on how to improve performance of gluster (mostly small files)?
15:53 dxd2 (docker+rancher environment)
15:54 JoeJulian ls='ls --group-directories-first --color=auto' (for instance) which the color switch pulls metadata so it knows how to color the entries.
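
A quick way to confirm that on the client mount (run in the slow directory; the difference should be dramatic over a high-latency link):

    echo *                # readdir only, should return almost instantly
    \ls --color=never     # bypasses the ls alias, so no per-entry stat
    ls -l                 # stats every entry, i.e. one trans-atlantic round trip each
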
15:54 JoeJulian @php
15:54 glusterbot JoeJulian: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH
15:54 glusterbot JoeJulian: --fopen-keep-cache
15:54 JoeJulian om2: Perhaps some of those mount options could help your use case.
15:55 JoeJulian dxd2: That factoid is for you, too.
15:56 dxd2 already have cache & most of the suggestions in there
15:56 dxd2 still really slow
15:56 dxd2 slow with the website, slow with wordpress
15:56 JoeJulian dxd2: Do you care if the metadata is perfect?
15:56 JoeJulian dxd2: If not, mount with nfs.
15:57 dxd2 is it much faster with nfs? I mean is it worth it
15:57 dxd2 ?
15:57 robb_nl joined #gluster
15:57 om2 thanks JoeJulian !
15:57 om2 I will try the timeout settings
15:58 JoeJulian The kernel will cache the metadata so you'll be reading from ram. Sure, it may be stale but for php source, you probably don't care.
15:58 d0nn1e joined #gluster
15:58 JoeJulian Which is why I recommend caching the source in php itself, but whatever.
15:58 dxd2 We are doing that already
15:58 Bhaskarakiran joined #gluster
15:59 dxd2 It's still slow, and reviews around rate gluster as very good with large files but awful with small files
15:59 JoeJulian I wonder if WP is doing stupid things.
15:59 om2 I don't see a setting for negative-timeout though...
15:59 dxd2 Taking seconds to load blog posts which in theory should be really fast as it's just reading
15:59 JoeJulian Yeah, unfortunately reviewers seldom understand how clusters work nor do they take the time to learn.
16:00 JoeJulian om2: that's a mount option.
16:00 dxd2 And I want to keep using gluster as I had a nice experience with it until now and it's also very rancher/docker friendly
16:00 dxd2 friendly I mean you can install/deploy it with a few clicks
16:00 JoeJulian dxd2: yeah, it should just be reading from the database.
16:01 7GHAAI986 joined #gluster
16:01 om2 You mean like: mount -t glusterfs -o negative-timeout=3600 ?
16:01 JoeJulian I haven't looked at WP source in years, but last time I looked they weren't doing anything fancy with regard to includes.
16:01 JoeJulian om2: yes
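
Spelled out as a full mount, with the server, volume and timeout values as assumptions (these are FUSE mount options, so they belong on the mount command or in fstab rather than in volume set):

    mount -t glusterfs -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache server1:/myvol /mnt/myvol
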
16:03 JoeJulian dxd2: I would check your apc stats. Make sure it's actually caching. Make sure you're not discarding php with each page load. Check the gluster performance stats and make sure the fop count is where you would expect it.
16:03 JoeJulian I agree with you that it sounds like something unreasonable is happening.
16:03 dxd2 I use memcache :)
16:04 Merlin_ joined #gluster
16:04 JoeJulian that doesn't cache the php source.
16:04 dxd2 wasn't my call, the infrastructure is already in place
16:04 dxd2 is/was
16:04 dxd2 and I doubt it will change
16:04 JoeJulian If you can't fix that, then there's not much you can do. Try those mount options.
16:05 JoeJulian I always love it when the systems guys aren't allowed to fix things.
16:06 dxd2 Thank you very much, will do. Either they work or I'll be forced to research more into xtreemfs or whatever else :)
16:10 om2 JoeJulian: that seemed to do the trick!
16:11 om2 thank you!
16:11 JoeJulian excellent!
16:11 dxd2 JoeJulian, do you think ceph is a better way to go?
16:12 JoeJulian It would complicate things a lot more, though cephfs has just been declared ready for use. I haven't tried that at all, just the block storage.
16:17 timotheus1_ joined #gluster
16:20 d0nn1e joined #gluster
16:21 nathwill joined #gluster
16:27 ovaistariq joined #gluster
16:33 kanagaraj joined #gluster
16:36 Merlin_ joined #gluster
16:40 drankis joined #gluster
16:41 atrius joined #gluster
16:43 Merlin_ joined #gluster
16:44 baojg joined #gluster
16:49 jotun joined #gluster
16:51 jotun joined #gluster
16:58 hagarth joined #gluster
16:59 muneerse2 joined #gluster
17:01 jiffin joined #gluster
17:01 haomaiwang joined #gluster
17:03 muneerse joined #gluster
17:07 shubhendu_ joined #gluster
17:07 Merlin_ joined #gluster
17:08 shubhendu joined #gluster
17:21 Merlin_ joined #gluster
17:23 pur joined #gluster
17:23 arcolife joined #gluster
17:40 DV joined #gluster
17:40 Merlin_ joined #gluster
17:40 Wojtek Is there a way to modify the logging behavior without recompiling from source? Specifically I would like to suppress I [MSGID: 109066] [dht-rename.c:1411:dht_rename] 0-gv0-dht: renaming
17:46 mhulsman joined #gluster
17:47 JoeJulian You can only change the log level.
17:48 JoeJulian I would file a bug, however, if you think the log-level is wrong for a message.
17:48 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
17:48 JoeJulian That looks to me like it should be debug, not info. But maybe the rest of the context means more.
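
The knobs that do exist are the per-log log levels, e.g. for Wojtek's volume; raising the level above INFO hides that message along with every other INFO entry in the same log:

    gluster volume set gv0 diagnostics.client-log-level WARNING
    gluster volume set gv0 diagnostics.brick-log-level WARNING
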
17:50 Wojtek Yea, I believe it used to be a Debug level message in 3.4 as we didn't see it back then
17:54 kanagaraj joined #gluster
17:55 Merlin_ joined #gluster
17:58 kanagaraj joined #gluster
17:59 farhorizon joined #gluster
18:00 ovaistariq joined #gluster
18:01 haomaiwa_ joined #gluster
18:01 Wojtek https://bugzilla.redhat.com/show_bug.cgi?id=1318001
18:01 glusterbot Bug 1318001: low, unspecified, ---, bugs, NEW , Wrong log level on dht_rename when using the cluster.extra-hash-regex option
18:03 jiffin1 joined #gluster
18:06 TealJax joined #gluster
18:07 Merlin_ joined #gluster
18:09 prasanth joined #gluster
18:15 JoeJulian bug 1130888
18:15 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1130888 urgent, high, ---, nbalacha, CLOSED CURRENTRELEASE, Renaming file while rebalance is in progress causes data loss
18:25 nishanth joined #gluster
18:27 calavera joined #gluster
18:27 Merlin_ joined #gluster
18:32 shubhendu__ joined #gluster
18:33 Wizek__ joined #gluster
18:37 Wizek_ joined #gluster
18:37 shubhendu_ joined #gluster
18:37 Wizek joined #gluster
18:38 Bardack joined #gluster
18:38 Wizek__ joined #gluster
18:42 Merlin_ joined #gluster
18:43 Wizek_ joined #gluster
18:45 Bardack joined #gluster
18:45 Wizek joined #gluster
18:46 shortdudey123 joined #gluster
18:46 baojg joined #gluster
18:48 Wizek__ joined #gluster
18:49 Wizek_ joined #gluster
18:51 jiffin joined #gluster
18:51 Wizek joined #gluster
18:52 PsionTheory joined #gluster
18:56 Merlin_ joined #gluster
18:57 bennyturns joined #gluster
19:01 haomaiwa_ joined #gluster
19:05 farhorizon joined #gluster
19:11 om joined #gluster
19:13 timotheus1_ joined #gluster
19:18 coredump joined #gluster
19:18 Merlin_ joined #gluster
19:27 chirino joined #gluster
19:30 coredump joined #gluster
19:33 shubhendu__ joined #gluster
19:39 ahino joined #gluster
19:41 Merlin_ joined #gluster
19:50 baojg joined #gluster
19:53 Merlin_ joined #gluster
19:57 kanagaraj joined #gluster
19:59 timotheus1_ joined #gluster
20:01 haomaiwang joined #gluster
20:02 farhorizon joined #gluster
20:02 deniszh joined #gluster
20:03 Wojtek JoeJulian: I think I've discovered the problem with the missing 300GB of my heal. The files on the new brick were created as 0-byte files. It seems gluster sees that they at least exist and does not copy the actual data from the other brick.
20:05 JoeJulian You can check the ,,(extended attributes) on both to see if there's still heals pending.
20:05 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://pl.atyp.us/hekafs.org/index.php/​2011/04/glusterfs-extended-attributes/
20:09 Wojtek http://pastebin.com/rj3sMsAb
20:09 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
20:10 Wojtek it's missing a few attributes on the empty brick
20:13 Merlin_ joined #gluster
20:15 JoeJulian Yep, looks like the heal is still pending.
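
For reference (the value below is illustrative, not taken from the paste): each trusted.afr.<volume>-client-N attribute packs three 32-bit big-endian counters — pending data, metadata and entry operations for the named brick — so a non-zero value on the healthy copy means heals are still owed to the other brick:

    trusted.afr.gv0-client-1=0x000000010000000000000000    # data=1, metadata=0, entry=0 pending
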
20:17 robb_nl joined #gluster
20:20 Wojtek I think I'll script a little ghetto healer to accelerate this :)
20:20 Wojtek mv /mnt/gv0/files/nas/aa/01/aa01fe04ad34a49de699be68a89040f6 /mnt/gv0/tmp/aa01fe04ad34a49de699be68a89040f6; mv /mnt/gv0/tmp/aa01fe04ad34a49de699be68a89040f6 /mnt/gv0/files/nas/aa/01/aa01fe04ad34a49de699be68a89040f6
20:21 Wojtek seems to work OK, the data gets replicated that way
20:21 JoeJulian ok
20:22 Wojtek I tried a massive find and stat earlier but that didn't resolve it
20:23 JoeJulian cluster.*-self-heal settings probably have affected that.
20:25 Wojtek yea, that might have been why
20:26 Wojtek Appreciate your assistance Sir
20:32 Merlin_ joined #gluster
20:32 DV joined #gluster
20:32 arcolife joined #gluster
20:35 post-factum JoeJulian: so, sir, any comments on macvtap?
20:35 ahino joined #gluster
20:38 JoeJulian Ah, cool, so probably fixed for Arch.
20:38 JoeJulian Could be handy.
20:41 ovaistariq joined #gluster
20:41 Merlin_ joined #gluster
20:42 tessier So one of my brick servers was showing up as a rejected peer to the other machines in the cluster. So I followed the instructions on https://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected and now when I do gluster volume status on the machine which had been rejected (but is no longer) it says "No volumes present" when there should be 11 volumes.
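
For reference, the procedure on that page boils down to wiping the rejected peer's local volume configuration and letting it resync from a healthy peer, roughly like this (run on the rejected node only; paths are the defaults, the peer hostname is a placeholder):

    systemctl stop glusterd                     # or: service glusterd stop
    cd /var/lib/glusterd
    find . -maxdepth 1 -mindepth 1 ! -name glusterd.info -exec rm -rf {} +
    systemctl start glusterd
    gluster peer probe <healthy-peer>
    systemctl restart glusterd                  # the final restart is what normally pulls the volume definitions back

If the volumes still don't appear after that last restart, running "gluster volume sync <healthy-peer> all" on the rejected node may force the volume configuration to resync.
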
20:44 Wojtek JoeJulian: Was just trying a few things, and I've discovered that if a do a 'file' command instead of 'stat', the file gets healed just fine without having to do the heavy mv to tmp and back
20:48 Wojtek maybe again because of the cluster.*-self-heal settings, but I'm happy I found something simple that works :)
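
A sketch of that as a one-shot sweep, using the mount path from the earlier mv example; file opens and reads the head of each file, which in this setup seems to be enough to trigger the data heal that a bare stat was not:

    find /mnt/gv0/files -type f -exec file {} + > /dev/null
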
20:51 mhulsman joined #gluster
20:52 dbruhn Hey JoeJulian, it's been a long time. I am looking to use Gluster as a NFS backend for a Xen cluster using CTDB. I've set this up in the past but it's been quite a while and wanted to see if you would be willing to go over my hardware spec and see if you think it will support my requirements.
20:54 baojg joined #gluster
20:59 JoeJulian Wojtek: interesting, I'll have to look in to how that works differently from stat.
21:00 JoeJulian dbruhn: hey, yeah it's been ages. Never done ctdb in production yet, so I've got nothing for you there.
21:00 dbruhn I actually have it running under two smaller xen clusters. but I set them up two years ago and haven't had to touch them....
21:00 JoeJulian And every hardware will work, it's just whether or not it suits your needs. :)
21:01 haomaiwa_ joined #gluster
21:02 Merlin_ joined #gluster
21:03 dbruhn lol, well I was thinking 24X 256GB SSD DRIVES, 2X E5-2609 (2.50 GHZ 4CORE), 64GB of ECC, and 2x bonded teamed NIC's to support 10 Xen hypervisors running 108 web servers.
21:03 dbruhn two of those servers.
21:04 JoeJulian Sounds good to me (except for xen of course :P )
21:04 dbruhn There isn't a ton of I/O from the guests when it comes to disk, logging is about the worst of it.
21:04 dbruhn hahah to make it worse, that 108 guests is windows.... not my deal.
21:04 dbruhn Are you on the KVM train?
21:05 JoeJulian Well it depends...
21:05 JoeJulian If I want something that only uses half of the technology I have available, and requires me to do work-arounds, then I prefer Xen or VMware.
21:06 dbruhn My bigger issue with this project is these guys will have me set this stuff up, and then just run it without me there. They go dark on me and then pop up a few quarters later when they need something. They are windows dev's who don't really infra at all, so it can be really messy.
21:07 JoeJulian Ah the joys of consulting.
21:08 dbruhn I taught them Xen a long time ago, and basically was able to make it super simple with Xen Center for them. So it's a comfort level thing partially.
21:09 dbruhn Appreciate the feedback though.
21:11 karnan joined #gluster
21:11 tessier I used to use Xen a lot. But over the last year we have moved to KVM.
21:12 tessier Compiling Xen, userland tools, and kernel, got too complicated after RedHat stopped shipping Xen support.
21:13 ovaistariq joined #gluster
21:13 JoeJulian Yeah, when that was first announced is when I first started looking in to it.
21:13 JoeJulian Since then, the api integration with both Gluster and Ceph make it a no-brainer if you're starting virtualization from scratch.
21:15 Merlin_ joined #gluster
21:18 dbruhn I've only ever really used KVM in a single server situation. Works great, just never had an opportunity to use it in a larger capacity.
21:18 dbruhn XenServer is stupid simple these days since they opened it up.
21:19 hgichon joined #gluster
21:24 farhorizon joined #gluster
21:26 Merlin_ joined #gluster
21:28 ovaistariq joined #gluster
21:37 deniszh joined #gluster
21:39 wushudoin joined #gluster
21:43 Merlin_ joined #gluster
21:55 baojg joined #gluster
21:59 wushudoin joined #gluster
22:01 haomaiwang joined #gluster
22:02 ovaistariq joined #gluster
22:05 amye joined #gluster
22:25 Merlin_ joined #gluster
22:26 TealJax left #gluster
22:38 Merlin_ joined #gluster
22:40 ovaistariq joined #gluster
22:45 hagarth joined #gluster
22:55 baojg joined #gluster
22:56 Merlin_ joined #gluster
22:58 ovaistariq joined #gluster
23:01 hackman joined #gluster
23:01 ahino1 joined #gluster
23:01 haomaiwang joined #gluster
23:13 Merlin_ joined #gluster
23:13 ovaistariq joined #gluster
23:18 Pilgrim_ joined #gluster
23:19 Pilgrim_ Gluster documentation is making me want to slit my wrists! There, I've said it, back to trying to figure out how to make it faster...
23:24 tessier Pilgrim_: Lack or quality?
23:25 Pilgrim_ Ha, bit of both :) Can't for the life of me find the documented default for "nfs.write-size", although from forum posts I'm guessing it's 1mbyte. Then I can find the default for "performance.write-behind-window-size" but I have no idea what that actually changes. "diagnostics.latency-measurement" - that would be awesome! But how do I use it? </end_rant>
23:26 Pilgrim_ Gluster tuning volumes community page are from 3.2, there doesn't seem to be any per-version list of what has added/changed/been removed
23:26 Pilgrim_ I'm not a developer, I'm a guy trying to get Gluster working for my org, I understand storage generally but I shouldn't have to read code to understand how to configure it :)
23:27 Pilgrim_ I know I'm complaining about free software now by the way... And I actually really value the work the devs have put into making it awesome. I just wish it was better documented
23:32 alghost joined #gluster
23:49 JoeJulian Pilgrim_: Don't we all.
23:50 Merlin_ joined #gluster
23:50 JoeJulian "gluster volume set help" generally shows what the defaults are.
23:50 JoeJulian and "gluster volume get" can show you all the settings, even the ones that are still defaulted.
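
Concretely (volume name assumed; "volume get ... all" may not be available on every 3.7 build, in which case ask for a specific option):

    gluster volume set help                                          # options with their defaults and descriptions
    gluster volume get myvol all                                     # every option, including ones still at their default
    gluster volume get myvol performance.write-behind-window-size
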
23:52 Pilgrim_ Thanks Joe! I literally just found that first one in a forum post somewhere. Didn't know the second one existed, have tested on 3.6 and 3.7 clusters I have up and looks like it's specific to 3.7 but that's fine for now!
23:54 JoeJulian Also... have you been looking at http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Managing%20Volumes/#tuning-options ?
23:54 glusterbot Title: Managing Volumes - Gluster Docs (at gluster.readthedocs.org)
23:56 Pilgrim_ Ah no, I haven't... any idea which version of Gluster that was written for?
23:56 baojg joined #gluster
23:56 nathwill just to confirm my understanding here, storage.owner-{u,g}id tells the server to set files to configured owner/group, rather than mapping client uid/gid? is that right?
23:58 JoeJulian Pilgrim_: "latest" whatever that means. The documentation project was separated from the software so there's no control over that any more (good and bad).
23:58 JoeJulian nathwill: no
23:58 nathwill oh :/
23:58 Pilgrim_ Ha righto then. Thanks :)
23:58 JoeJulian nathwill: iirc, it's only the owner of the brick root.
23:59 nathwill ah, ok. thanks
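
i.e. something like the following, which (per JoeJulian's recollection above) only changes the ownership of the brick/volume root rather than remapping client uids/gids; the id values are illustrative:

    gluster volume set myvol storage.owner-uid 1001
    gluster volume set myvol storage.owner-gid 1001
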
23:59 JoeJulian Damn... who wanted dht to put a copy locally yesterday....
23:59 nathwill working on standardizing our UIDs across our app instances, think we'll probably be ok :D
