
IRC log for #gluster, 2014-09-22


All times shown according to UTC.

Time Nick Message
00:37 sprachgenerator joined #gluster
01:11 justglusterfs joined #gluster
01:27 Ramereth joined #gluster
01:33 lyang0 joined #gluster
01:45 harish joined #gluster
02:09 bene2 joined #gluster
02:18 bala joined #gluster
02:38 haomaiwa_ joined #gluster
02:39 haomaiwa_ joined #gluster
02:42 gildub joined #gluster
02:46 haomai___ joined #gluster
02:48 Ramereth joined #gluster
03:05 elico joined #gluster
03:05 hchiramm_ joined #gluster
03:24 rejy joined #gluster
03:26 kshlm joined #gluster
03:43 bharata-rao joined #gluster
03:53 itisravi joined #gluster
03:55 ryan_clough joined #gluster
03:59 nbalachandran joined #gluster
04:06 dusmant joined #gluster
04:08 RameshN joined #gluster
04:08 kdhananjay joined #gluster
04:16 atinmu joined #gluster
04:22 ndarshan joined #gluster
04:25 Humble http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.6.0beta1.tar.gz
04:25 Humble Please use above release for GlusterFest
04:28 RameshN joined #gluster
04:29 rjoseph joined #gluster
04:29 Humble rjoseph, Good Morning!
04:33 bala joined #gluster
04:34 Humble kshlm, ping
04:34 glusterbot Humble: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
04:35 Humble rjoseph, ping
04:35 glusterbot Humble: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
04:35 rjoseph Humble: Good Morning
04:35 Humble rjoseph, so its snapshot day :)
04:37 Rafi_kc joined #gluster
04:37 rafi1 joined #gluster
04:40 anoopcs joined #gluster
04:41 rjoseph Humble: :) yup... let me know if anybody needs any information or any assistance in testing snapshot
04:42 Humble http://www.gluster.org/community/documentation/index.php/Features/Gluster_Volume_Snapshot
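For anyone picking up the snapshot testing, a minimal sketch of the CLI flow that feature page describes (volume and snapshot names here are placeholders, and the bricks need to sit on thinly provisioned LVM):

    # take, inspect and clean up a snapshot of a test volume
    gluster snapshot create snap1 testvol
    gluster snapshot list testvol
    gluster snapshot info snap1
    gluster snapshot activate snap1      # make the snapshot mountable for verification
    gluster snapshot restore snap1       # roll back (the volume must be stopped first)
    gluster snapshot delete snap1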
04:42 anoopcs1 joined #gluster
04:47 anoopcs joined #gluster
04:53 ramteid joined #gluster
04:53 Humble rjoseph++
04:53 glusterbot Humble: rjoseph's karma is now 1
04:57 Humble Track ur progress in https://public.pad.fsfe.org/p/GlusterFS-3.6-test-doc
04:57 glusterbot Title: FSFE Etherpad: public instance (at public.pad.fsfe.org)
04:57 Humble anoopcs, ping
04:57 glusterbot Humble: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
04:57 anoopcs Humble: pong
05:00 atinmu Humble, I will be available for assistance in case of any issues on barrier feature :)
05:01 jiffin joined #gluster
05:01 Humble atinmu, that would be awesome..
05:01 Humble atinmu, if u can find any bugs which is not in your area , it will help :)
05:01 Humble atinmu++
05:01 glusterbot Humble: atinmu's karma is now 2
05:07 kanagaraj joined #gluster
05:11 spandit joined #gluster
05:15 sputnik13 joined #gluster
05:28 karnan joined #gluster
05:31 hagarth joined #gluster
05:32 ppai joined #gluster
05:40 Humble rjoseph, a quick review needed http://review.gluster.org/#/c/8791
05:40 glusterbot Title: Gerrit Code Review (at review.gluster.org)
05:41 Alssi_ joined #gluster
05:42 rjoseph humble: Sachin has taken up the work to update the admin guide.
05:43 rjoseph humble: It will have more detail. But your patch will also help Sachin.
05:44 Humble rjoseph, oh .. ok..
05:45 Humble rjoseph, I am no more working on that doc , just this patch
05:46 Humble :)
05:46 rjoseph humble: gave ack :)
05:46 Humble thanks rjoseph++
05:46 glusterbot Humble: rjoseph's karma is now 2
05:48 RameshN_ joined #gluster
05:48 dusmant joined #gluster
05:49 RameshN joined #gluster
05:53 deepakcs joined #gluster
05:55 saurabh joined #gluster
06:05 atalur joined #gluster
06:08 R0ok_ joined #gluster
06:08 gildub joined #gluster
06:14 soumya__ joined #gluster
06:16 meghanam joined #gluster
06:16 meghanam_ joined #gluster
06:17 Guest84140 joined #gluster
06:22 overclk joined #gluster
06:30 lalatenduM joined #gluster
06:35 deepakcs joined #gluster
06:48 ekuric joined #gluster
06:48 Philambdo joined #gluster
06:51 ricky-ticky joined #gluster
06:54 d-fence joined #gluster
07:03 Fen1 joined #gluster
07:10 deepakcs joined #gluster
07:21 y4m4 joined #gluster
07:21 Arrfab joined #gluster
07:22 atinmu joined #gluster
07:22 Arrfab hi community .. have updated some gluster nodes from 3.5.0 to 3.5.2 and update seems to remove some file (saved to .rpmsave) and glusterd is killed, but not restarted .. so having to restart glusterd after each node update. is that expected ?
07:23 rjoseph joined #gluster
07:23 raghu joined #gluster
07:24 dmachi joined #gluster
07:32 hagarth joined #gluster
07:36 dusmant joined #gluster
07:47 fsimonce joined #gluster
07:47 rjoseph joined #gluster
07:49 atinmu joined #gluster
07:53 ndevos Arrfab: I think there is a bug about that...
07:54 Arrfab ndevos: well, the rpm postinstall scriplet kills glusterd but doesn't do anything else ..
07:55 Arrfab I mean, not related to even try to restart glusterd if it was running .. not even cleaning the pid file, meaning that a simple 'service glusterd status' complains about 'glusterd dead but subsys locked'
07:55 ndevos Arrfab: uh... it should start glusterd again, there was a bug where glusterd got started before the previous glusterd exited - that caused the 2nd glusterd to fail starting
07:57 nishanth joined #gluster
07:58 ndevos found it, bug 1113543
07:58 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1113543 low, unspecified, 3.6.0, kkeithle, ASSIGNED , Spec %post server does not wait for the old glusterd to exit
07:59 Arrfab ndevos: yeah, I wanted to file a bug report, but Patrick already did it it seems
07:59 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
08:01 ndevos Arrfab: feel free to comment on the bug and suggest a reasonable fix, I'll try to get it done before the 3.5.3 release
08:01 liquidat joined #gluster
08:02 Arrfab ndevos: well, I've updated our four nodes running 3.5.0 to 3.5.2 so I'd have to recreate a labs farm to suggest a fix ..
08:02 Arrfab let me add comments on the BZ though
08:03 ndevos Arrfab: a comment is fine, I guess I have to setup some test system for that anyway
08:03 milka joined #gluster
08:04 hagarth joined #gluster
08:05 Arrfab ndevos: don't you have some test nodes that are used to test such upgrades ?
08:07 ndevos Arrfab: yes, and things like this normally get tested too, I'm not sure why nobody noticed it :-/
08:07 Arrfab I'll also ping lalatenduM about this, as I use his latest rebuilt packages
08:09 ndevos Arrfab: thanks!
08:12 VerboEse joined #gluster
08:27 glusterbot New news from newglusterbugs: [Bug 1145000] Spec %post server does not wait for the old glusterd to exit <https://bugzilla.redhat.com/show_bug.cgi?id=1145000>
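For context, a rough sketch of the sort of wait-then-restart logic the %post scriptlet is missing; this is purely illustrative and not the actual spec change (service and pid-file names assumed):

    # in %post server: only restart if glusterd was running before the upgrade
    if [ -f /var/run/glusterd.pid ] || pidof glusterd >/dev/null 2>&1; then
        service glusterd stop >/dev/null 2>&1
        # wait for the old glusterd to actually exit instead of racing it
        for i in $(seq 1 30); do
            pidof glusterd >/dev/null 2>&1 || break
            sleep 1
        done
        service glusterd start
    fi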
08:28 hagarth rjoseph: ping
08:28 glusterbot hagarth: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
08:28 hagarth glusterbot: right :)
08:28 ndevos some people never learn, glusterbot
08:29 hagarth rjoseph: ping, I think we need to have http://review.gluster.org/8191 in release-3.6
08:29 glusterbot Title: Gerrit Code Review (at review.gluster.org)
08:29 hagarth spandit: ^^^
08:29 bala joined #gluster
08:29 hagarth ndevos: thanks to whoever added this naked ping reminder ;)
08:30 rjoseph hagarth: pong
08:30 ndevos I dont know who did that, but glusterbot++
08:30 glusterbot ndevos: glusterbot's karma is now 7
08:30 spandit hagarth, will do
08:31 ndevos ew, that patch has a multi-line one-line subject :-/
08:31 hagarth spandit: please fix that for release-3.6 too
08:32 rjoseph will send the patch for 3.6 also fix the subject
08:32 spandit hagarth, sure
08:33 hagarth spandit: can you also please review if any of your patches on master need to be backported to release-3.6?
08:34 rjoseph hagarth: we are going through the list and will send those patches to release-3.6 branch
08:34 spandit hagarth, yeah sure, we will do that
08:34 hagarth rjoseph, spandit: cool, thanks!
08:34 hagarth noticed some of these problems when I got started with testing 3.6.0beta1
08:37 VerboEse joined #gluster
08:37 RaSTar joined #gluster
08:38 RaSTar joined #gluster
08:50 bala joined #gluster
08:52 rjoseph joined #gluster
08:54 nbalachandran joined #gluster
08:57 glusterbot New news from newglusterbugs: [Bug 1145020] [SNAPSHOT] : gluster volume info should not show the value which is not set explicitly <https://bugzilla.redhat.com/show_bug.cgi?id=1145020>
09:02 soumya__ joined #gluster
09:03 meghanam joined #gluster
09:03 meghanam_ joined #gluster
09:12 vikumar joined #gluster
09:13 hagarth joined #gluster
09:16 nbalachandran joined #gluster
09:21 soumya__ joined #gluster
09:21 pkoro joined #gluster
09:25 spandit hagarth, I have sent http://review.gluster.org/#/c/8793/ to release-3.6 branch
09:25 glusterbot Title: Gerrit Code Review (at review.gluster.org)
09:26 ninkotech__ joined #gluster
09:29 sputnik13 joined #gluster
09:29 hagarth spandit: cool, thanks
09:31 overclk joined #gluster
09:35 jiku joined #gluster
09:40 LebedevRI joined #gluster
09:43 Norky joined #gluster
09:52 ws2k333 joined #gluster
09:55 sputnik13 joined #gluster
09:57 nishanth joined #gluster
09:57 meghanam joined #gluster
09:57 meghanam_ joined #gluster
09:59 lalatenduM Arrfab, hii
10:03 mbukatov joined #gluster
10:06 nbalachandran joined #gluster
10:06 overclk joined #gluster
10:06 pkoro joined #gluster
10:15 rjoseph joined #gluster
10:27 glusterbot New news from newglusterbugs: [Bug 1145068] [SNAPSHOT]: In mixed cluster with RHS 2.1 U2 & RHS 3.0, newly created volume should not contain snapshot related options displayed in 'gluster volume info' <https://bugzilla.redhat.com/show_bug.cgi?id=1145068> || [Bug 1145069] [SNAPSHOT]: man or info page of gluster needs to be updated with snapshot commands <https://bugzilla.redhat.com/show_bug.cgi?id=1145069>
10:32 nishanth joined #gluster
10:39 mdavidson joined #gluster
10:44 R0ok_ gluster logs for a particular mount point on a client is full of the entries: "[2014-09-22 10:02:46.100774] I [dict.c:370:dict_get] (-->/usr/lib64/glusterfs/3.5.2/xlator/performance/md-cache.so(mdc_lookup+0x318) [0x7f8be33c9518] (-->/usr/lib64/glusterfs/3.5.2/xlator/debug/io-stats.so(io_stats_lookup_cbk+0x113) [0x7f8be31aec63] (-->/usr/lib64/glusterfs/3.5.2/xlator/system/posix-acl.so(posix_acl_lookup_cbk+0x233) [0x7f8be2fa03d3]))) 0-dict: !this || key=system
10:44 glusterbot R0ok_: ('s karma is now -33
10:44 glusterbot R0ok_: ('s karma is now -34
10:44 glusterbot R0ok_: ('s karma is now -35
10:45 R0ok_ currently the log size is at 2.8GB, any ideas what causes this ?
10:46 deepakcs is there a way to see all the feature pages that I created in gluster.org ?
10:46 deepakcs after logging in and clicking on "Contributions" .. it shows it in not-so-good format!
10:46 deepakcs ndevos, ^
10:47 Humble JustinCl1ft, ^^^
10:49 ndevos deepakcs: there is a checkbox that says "Only show edits that are page creations", does if help when you enable that?
10:49 hagarth joined #gluster
10:49 deepakcs ndevos, let me try
10:49 ndevos deepakcs: this is my link, you can change the username: http://www.gluster.org/community/documentation/index.php?limit=50&tagfilter=&title=Special%3AContributions&contribs=user&target=Ndevos&namespace=0&newOnly=1&year=2014&month=-1
10:50 Humble I think it will do better job deepakcs :)
10:50 deepakcs ndevos, yeah it does show. but not a nice format
10:50 ndevos uh, it does not contain the username, you only need to click it :D
10:50 deepakcs 15:19, 19 August 2014 (diff | hist) . . (+1,872) . . N Features/snap support for subdir (Created page with " == Feature == This feature provides a way for providing GlusterFS snapshot at subdir granularity (applicable for all protocols) == Summary == This features provides a way ...") (current)
10:50 ndevos deepakcs: what format are you looking for?
10:50 deepakcs ndevos, ^^ thats how it shows!
10:50 deepakcs ndevos, just bulleted list of links for which I created the pages :)
10:51 Humble ndevos, it contains the user name , Isnt it
10:51 deepakcs ndevos, anyways, thanks for that tip
10:52 ndevos deepakcs: the "watchlist" is more of a bulleted list, try http://www.gluster.org/community/documentation/index.php/Special:EditWatchlist
10:52 ndevos or even http://www.gluster.org/community/documentation/index.php/Special:EditWatchlist/raw
10:52 Humble JustinCl1ft++
10:52 glusterbot Humble: JustinCl1ft's karma is now 1
10:52 deepakcs ndevos, yeah tried that already.. but to have all my feature pages in watchlist, i need to first list or go to all of my pages and then add them to watchlist :)
10:52 ndevos Humble: ah, yes, it does!
10:52 JustinCl1ft The "Contributions" page is the only way I know of
10:53 Humble ndevos++
10:53 glusterbot Humble: ndevos's karma is now 1
10:53 ndevos deepakcs: when you create a page, it is automatically added to your watchlist, unless you unselect that checkbox
10:53 pkoro joined #gluster
10:53 JustinCl1ft There might be other ways though, I'm not super in-depth with MediaWiki
10:53 * JustinCl1ft can install it and do basic admin stuff, is about it :)
10:53 deepakcs ndevos, ok.. that didn't get added for me for some reason
10:54 deepakcs JustinCl1ft, same here.. but the user experience can be improved i feel.. its very naive the way its today and difficult to navigate
10:54 JustinCl1ft deepakcs: Are you any good with PHP?
10:54 deepakcs JustinCl1ft, nope, never used it
10:55 JustinCl1ft MediaWiki is written in PHP, and has a plugin system
10:55 JustinCl1ft PHP is pretty basic kinda stuff
10:55 sputnik13 joined #gluster
10:55 JustinCl1ft It's a web language from back-in-the-day, and mostly seems targeted at being easy for newbies
10:56 JustinCl1ft deepakcs: https://www.mediawiki.org/wiki/How_to_become_a_MediaWiki_hacker/Extension_Writing_Tutorial
10:56 glusterbot Title: How to become a MediaWiki hacker/Extension Writing Tutorial - MediaWiki (at www.mediawiki.org)
10:56 JustinCl1ft :)
10:56 deepakcs JustinCl1ft, ok  :)
10:57 glusterbot New news from newglusterbugs: [Bug 1145083] [SNAPSHOT] : gluster snapshot delete doesnt provide option to delete all / multiple snaps of a given volume <https://bugzilla.redhat.com/show_bug.cgi?id=1145083>
11:02 jiku joined #gluster
11:03 jiku joined #gluster
11:05 jiku joined #gluster
11:06 jiku joined #gluster
11:08 jiku joined #gluster
11:13 Humble spandit++
11:13 glusterbot Humble: spandit's karma is now 1
11:13 Humble thanks man
11:13 spandit Humble, No problem :)
11:18 chirino joined #gluster
11:21 calum_ joined #gluster
11:23 sputnik13 joined #gluster
11:24 pkoro joined #gluster
11:25 diegows joined #gluster
11:26 ppai joined #gluster
11:27 harish_ joined #gluster
11:27 glusterbot New news from newglusterbugs: [Bug 1145087] [SNAPSHOT]: error message for invalid snapshot status should be aligned with error messages of info and list <https://bugzilla.redhat.com/show_bug.cgi?id=1145087> || [Bug 1145090] [SNAPSHOT]: If the snapshoted brick has xfs options set as part of its creation, they are not automount upon reboot <https://bugzilla.redhat.com/show_bug.cgi?id=1145090> || [Bug 1145095] [SNAPSHOT]: sna
11:40 pkoro joined #gluster
11:40 dusmant joined #gluster
11:42 jiku joined #gluster
11:45 Fen1 joined #gluster
11:47 dockbram_ joined #gluster
11:47 soumya_ joined #gluster
12:03 ira joined #gluster
12:04 meghanam_ joined #gluster
12:04 meghanam joined #gluster
12:06 bala joined #gluster
12:10 sputnik13 joined #gluster
12:15 B21956 joined #gluster
12:16 B21956 joined #gluster
12:19 itisravi_ joined #gluster
12:22 virusuy joined #gluster
12:28 glusterbot New news from newglusterbugs: [Bug 858732] glusterd does not start anymore on one node <https://bugzilla.redhat.com/show_bug.cgi?id=858732>
12:29 edward1 joined #gluster
12:31 clutchk joined #gluster
12:33 sputnik13 joined #gluster
12:34 virusuy joined #gluster
12:34 virusuy joined #gluster
12:41 Humble kkeithley++ thanks!
12:41 glusterbot Humble: kkeithley's karma is now 16
12:47 ricky-ticky joined #gluster
12:49 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/ | GlusterFS 3.6 test week - https://public.pad.fsfe.org/p/GlusterFS-3.6-test-doc
12:59 R0ok_ gluster logs for a particular mount point on a client is full of the entries: "[2014-09-22 10:02:46.100774] I [dict.c:370:dict_get] (-->/usr/lib64/glusterfs/3.5.2/xlator/performance/md-cache.so(mdc_lookup+0x318) [0x7f8be33c9518] (-->/usr/lib64/glusterfs/3.5.2/xlator/debug/io-stats.so(io_stats_lookup_cbk+0x113) [0x7f8be31aec63] (-->/usr/lib64/glusterfs/3.5.2/xlator/system/posix-acl.so(posix_acl_lookup_cbk+0x233) [0x7f8be2fa03d3]))) 0-dict: !this || key=system
12:59 glusterbot R0ok_: ('s karma is now -36
12:59 glusterbot R0ok_: ('s karma is now -37
12:59 glusterbot R0ok_: ('s karma is now -38
12:59 R0ok_ currently the log size is at 2.8GB, any ideas what causes this ?
13:02 dusmant joined #gluster
13:03 bene2 joined #gluster
13:07 RaSTar joined #gluster
13:08 dockbram_ /SET use_msgs_window off
13:08 dockbram_ oops
13:08 gmcwhistler joined #gluster
13:08 soumya_ joined #gluster
13:12 dockbram_ left #gluster
13:12 dockbram_ joined #gluster
13:16 julim joined #gluster
13:16 mojibake joined #gluster
13:17 sprachgenerator joined #gluster
13:23 hagarth joined #gluster
13:29 theron joined #gluster
13:34 bennyturns joined #gluster
13:38 ricky-ticky joined #gluster
13:44 Humble kkeithley++ thanks  a lot!
13:44 glusterbot Humble: kkeithley's karma is now 17
13:44 kkeithley lol
13:44 Humble \o/
13:44 bala joined #gluster
13:46 ricky-ticky joined #gluster
13:51 theron_ joined #gluster
13:55 gmcwhistler joined #gluster
14:01 dusmant joined #gluster
14:02 mojeime_ joined #gluster
14:02 mojeime_ Hi i need help
14:02 mojeime_ I created Striped Replicated Volumes
14:03 mojeime_ using this command : gluster volume create test-volume stripe 2 replica 2 server1:/exp1 server2:/exp3 server3:/exp2 server4:/exp4
14:04 mojeime_ now when i mount volume on client machine and make test file there on server 1 and 2 everything is ok but on server 3 and 4 file shows but it is 0 bytes?
14:04 mojeime_ Any help on this?
14:06 ndevos mojeime_: it depends on the size of the file you are creating, also ,,(stripe) has some more details
14:06 glusterbot mojeime_: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
14:07 mojeime_ I did read that
14:07 mojeime_ if i put on mount folder some big tar for example
14:07 mojeime_ it shows up on every server without problem
14:08 mojeime_ only small files shows on every server but on server 3 and 4 they are 0 bytes
14:08 mojeime_ :/
14:18 plarsen joined #gluster
14:28 glusterbot New news from newglusterbugs: [Bug 1145189] Fix for spurious failure <https://bugzilla.redhat.com/show_bug.cgi?id=1145189>
14:31 ndevos mojeime_: there are also ,,(link files), maybe you are seeing those?
14:31 glusterbot mojeime_: I do not know about 'link files', but I do know about these similar topics: 'linkfile'
14:31 ndevos @linkfile
14:31 glusterbot ndevos: A zero-length file with mode T--------- on a brick is a link file. It has xattrs pointing to another brick/path where the file data resides. These are usually created by renames or volume layout changes.
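A quick way to check for that on a brick, assuming root access and the attr tools are installed (paths here are placeholders):

    # on the brick: mode T--------- plus a dht.linkto xattr marks a DHT link file
    ls -l /exp3/somefile
    getfattr -d -m . -e hex /exp3/somefile   # look for trusted.glusterfs.dht.linkto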
14:32 mojeime_ i don think that is the case
14:33 mojeime_ is stripe supposed to divide the file across the servers or keep the file size?
14:36 ndevos stripe will split files in smaller pieces and distributes the pieces (one file with multiple 'chunks') over the bricks
14:36 ndevos distribute would just spread the complete files over the different bricks
14:37 ndevos but, thats pretty well explained in ,,(stripe), which you have read... if that post is not clear enough, leave a comment in the blog or tell JoeJulian here
14:37 glusterbot Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
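Per that post, dropping the stripe count and letting DHT distribute whole files is the usual alternative; with the same four bricks that would look roughly like:

    # whole files distributed across two replica pairs instead of being striped
    gluster volume create test-volume replica 2 \
        server1:/exp1 server2:/exp3 server3:/exp2 server4:/exp4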
14:50 kshlm joined #gluster
14:51 kshlm joined #gluster
14:53 plarsen joined #gluster
14:53 sprachgenerator joined #gluster
14:54 ekuric1 joined #gluster
14:58 glusterbot New news from newglusterbugs: [Bug 1115648] Server Crashes on EL5/32-bit <https://bugzilla.redhat.co​m/show_bug.cgi?id=1115648>
15:02 rwheeler joined #gluster
15:09 elico joined #gluster
15:10 daMaestro joined #gluster
15:19 chirino joined #gluster
15:21 nbalachandran joined #gluster
15:39 failshell joined #gluster
15:39 hagarth joined #gluster
15:48 hagarth1 joined #gluster
15:49 RameshN joined #gluster
15:57 jobewan joined #gluster
15:58 theron joined #gluster
16:06 diegows joined #gluster
16:21 fubada joined #gluster
16:27 chirino joined #gluster
16:30 Gu_______ joined #gluster
16:34 tdasilva joined #gluster
16:46 ryan_clough joined #gluster
16:53 ryan_clough left #gluster
17:10 anoopcs joined #gluster
17:11 anoopcs joined #gluster
17:13 Liquid-- joined #gluster
17:32 PeterA joined #gluster
17:36 cfeller joined #gluster
17:39 sputnik13 joined #gluster
17:47 ttk joined #gluster
17:52 coredump joined #gluster
18:01 PeterA i have been getting more freq error on nfs.log
18:01 PeterA http://pastie.org/9585146
18:01 glusterbot Title: #9585146 - Pastie (at pastie.org)
18:01 PeterA and last friday i had a crash on the glusterfs
18:01 PeterA http://pastie.org/9577747
18:01 glusterbot Title: #9577747 - Pastie (at pastie.org)
18:01 PeterA how should i look into the root cause and fix?
18:02 PeterA is that a bug?
18:02 PeterA it's a 3x2 replicated vol
18:03 sputnik13 joined #gluster
18:13 ekuric joined #gluster
18:16 chirino joined #gluster
18:19 ThatGraemeGuy joined #gluster
18:31 fubada joined #gluster
18:46 semiosis running glusterfs 3.4.2 and self heal daemon is doing lots of work.  glusterfsd procs on one (of two) servers are using lots of CPU while the other server almost no CPU util.
18:47 semiosis client machines are reporting this in kernel log (dmesg): http://pastie.org/private/6dj0tlqen7ipwzwzyqbtpw
18:47 glusterbot Title: Private Paste - Pastie (at pastie.org)
18:47 semiosis INFO: task java:2903 blocked for more than 120 seconds. / "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
18:48 semiosis a stack trace has this at the top: fuse_change_entry_timeout.isra.9+0x3d/0x50
18:48 semiosis any ideas what is going on & why?
18:49 social joined #gluster
18:52 semiosis perhaps an upgrade to 3.4.5 would help
18:52 semiosis or a newer kernel?
18:55 T0aD joined #gluster
18:55 sputnik13 joined #gluster
18:58 semiosis anyone know how I can limit how much work the self heal daemon is doing?  i'd rather it take a long time to catch up than kill performance on the volume.
18:58 semiosis i already set background self heal to 4 but not sure if that has effect on the self heal daemon
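A hedged sketch of the knobs usually involved here (volume name is a placeholder, and exact option availability can vary by release):

    # cap how many files a client heals in the background (as already set above)
    gluster volume set myvol cluster.background-self-heal-count 4
    # or stop the self-heal daemon entirely and catch up later
    gluster volume set myvol cluster.self-heal-daemon off
    # re-enable and trigger a crawl once the load allows it
    gluster volume set myvol cluster.self-heal-daemon on
    gluster volume heal myvol full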
19:02 dockbram joined #gluster
19:04 sputnik13 joined #gluster
19:05 sputnik13 joined #gluster
19:10 ryan_clough joined #gluster
19:15 ryan_clough left #gluster
19:17 semiosis well, that was fun
19:19 JoeJulian semiosis: Someone else emailed the list about that today too.
19:24 dmachi1 joined #gluster
19:34 MacWinner joined #gluster
19:36 semiosis the worst part was that those threads were zombified, sorta
19:36 semiosis even killing the JVM would not free the resources
19:36 semiosis couldn't release the tcp socket
19:36 semiosis couldn't free the memory
19:36 semiosis had to restart the whole VM
19:36 semiosis never seen anything like that in a JVM
19:37 semiosis this was a tomcat server
19:45 JoeJulian semiosis: Could have killed glusterfs, which would kill the mount of course, and that would have unzombied them I think.
19:45 semiosis ah right
19:45 semiosis was too panicked to think of that
19:45 JoeJulian Been there, done that.
19:45 semiosis also realized i could/should have killed glustersdh
19:45 semiosis shd*
19:45 semiosis and just done my usual find/stat trickery to heal the new files
19:46 semiosis like i did before upgrading
19:46 JoeJulian Odd
19:46 calum_ joined #gluster
19:47 semiosis i expanded half of the bricks, and glustershd was hitting them pretty hard, presumably causing clients to starve
19:47 semiosis now i get to try again with the other half :)
19:48 JoeJulian You took it down when you expanded the brick?
19:48 JoeJulian I've always left it up.
19:49 semiosis yep, using EBS, have to detach the existing EBS volume & replace it with a larger one restored from a snapshot
19:49 JoeJulian Oh, right
19:49 semiosis so, kill glusterfsd, unmount, detach, create, mount, grow, restart glusterfsd
19:50 semiosis s/detach/snapshot, detach/
19:50 glusterbot What semiosis meant to say was: so, kill glusterfsd, unmount, snapshot, detach, create, mount, grow, restart glusterfsd
19:51 semiosis s/mount/attach, mount/
19:51 glusterbot What semiosis meant to say was: so, kill glusterfsd, unattach, mount, detach, create, mount, grow, restart glusterfsd
19:51 semiosis eh
19:51 semiosis never mind
19:51 JoeJulian hehe
19:51 semiosis documenting it for internal use, maybe i'll just post that somewhere
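The procedure being described, written out as a sketch (device name, mount point and volume name are placeholders; assumes XFS bricks on EBS):

    # on the brick server being resized
    kill <glusterfsd-pid-for-this-brick>          # stop just this brick process
    umount /bricks/brick1
    # in AWS: snapshot the old EBS volume, create a larger volume from that
    # snapshot, detach the old one, attach the new one as /dev/xvdf
    mount /dev/xvdf /bricks/brick1
    xfs_growfs /bricks/brick1                     # grow the fs into the new space
    gluster volume start myvol force              # respawn the brick daemon
    # (the LVM variant mentioned above: lvresize + xfs_growfs with the brick online)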
19:51 JoeJulian Mine's always gone, lvresize, xfsgrow, done.
19:51 semiosis on the wiki perhaps
19:52 semiosis yep, that's too risky for me
19:52 semiosis i like to have a solid fallback plan
19:53 semiosis if anything goes wrong, i just restore the whole server from the snapshot
19:54 semiosis hmmmm, maybe if i had done this serially, one brick at a time, it would've been smoother.
19:55 semiosis instead of half the volume at once
19:55 JoeJulian Seems likely.
19:56 JoeJulian Also remember, once your background-self-heal queue is full, any subsequent files are going to hang your client until they're healed.
19:56 semiosis heh
19:56 semiosis oh btw, just made a cafe breve
19:56 JoeJulian mmmm...
19:56 semiosis got a new espresso machine in the office
19:56 JoeJulian nice
19:57 JoeJulian I'm going to have to come over...
19:57 semiosis sure
19:57 semiosis maybe i should call this cafe con breve
19:58 semiosis since it started out as cuban coffee
20:01 social joined #gluster
20:14 sprachgenerator joined #gluster
20:54 jvdm joined #gluster
20:55 msmith_ joined #gluster
20:56 Ramereth joined #gluster
21:02 Ramereth joined #gluster
21:14 semiosis JoeJulian: you were right, it took umount -fl /client, kill -9 client, umount -fl /client, mount /client to free the mem
21:14 semiosis i started up 3x more tomcat servers this time, to spread the load.  two of them died but the rest stayed up while I repaired the dead ones
21:14 semiosis :D
21:15 fubada joined #gluster
21:15 semiosis all the while glustershd is slamming the bricks
21:15 semiosis it's a beautiful thing
21:20 JoeJulian I wonder why it's hitting it so hard...
21:24 semiosis i dont understand all the stuff in the shd log, but it's the server that stayed up that has all the activity in the shd log, and on the other server the bricks that went down & got expanded are slammed
21:25 semiosis if i strace one of the hot brick daemons i see a lot of futex wait/wake
21:25 semiosis and although the brick daemons are at a constant ~30% cpu, they're mostly in S state
21:26 semiosis the system is not in iowait
21:33 longshot902 joined #gluster
21:40 chirino joined #gluster
21:47 semiosis and now the shd log is truncated.  why'd that happen?!
21:47 semiosis oh well
22:09 m0rph joined #gluster
22:16 chirino joined #gluster
22:20 nbvfuel joined #gluster
22:21 nbvfuel How terrible is it, to aws sync the underlying gluster filesystem directly (instead of the view as mounted by a client)?
22:21 semiosis aws sync?
22:22 nbvfuel Ie-- if we wanted to do a nightly sync of the data to AWS S3 (ie, read-only), would that be OK?
22:22 glusterbot nbvfuel: Ie's karma is now -1
22:22 m0rph Howdy! I have a dumb question... The other day I mounted a glusterfs volume on a machine and during heavy use the logfile eventually filled up my root partition. Does anyone know how to mount with logging disabled? Or should I set up a logrotate config to trim the fat every couple of minutes?
22:22 semiosis nbvfuel: is your glusterfs volume in EC2?  on EBS bricks?  you can use the CreateImage API call to snapshot the whole server
22:23 semiosis m0rph: you can set log level with the diagnostics.log-level ,,(options)
22:23 glusterbot m0rph: See config options and their defaults with 'gluster volume set help'; you can see the current value of an option, if it has been modified, with 'gluster volume info'; see also this page about undocumented options: http://goo.gl/mIAe4E
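A sketch of both approaches to the log growth, with a placeholder volume name and stock logrotate syntax:

    # raise the client log threshold so the INFO noise is dropped
    gluster volume set myvol diagnostics.client-log-level WARNING

    # and/or rotate aggressively, e.g. in /etc/logrotate.d/glusterfs-client
    /var/log/glusterfs/*.log {
        daily
        rotate 7
        compress
        missingok
        copytruncate
    }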
22:23 nbvfuel semiosis: No, it's on hardware outside of the AWS ecosystem.  The rsync would be for offsite DR.
22:23 semiosis ohhh
22:24 semiosis nbvfuel: you might as well just sync from a glusterfs client mount point to s3 then
22:24 semiosis that's how you'd have to restore the data, in the reverse direction
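For example, something along these lines (bucket name and mount point are made up):

    # sync the important directories from a fuse client mount, not from the bricks
    aws s3 sync /mnt/gv0/important-dir s3://my-dr-bucket/important-dir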
22:25 m0rph semiosis: Didn't know about that setting. I'll check that out, thank you.
22:25 nbvfuel semiosis: OK.  For speed of read, I wasn't sure if reading from the brick directly was kosher.
22:25 JoeJulian m0rph: 1st, this is why you should always make /var/log its own partition. 2nd, my home directory mount used a whopping 9k yesterday. What the heck is happening in your logs?
22:25 semiosis nbvfuel: you might consider using lvm snapshots then storing a whole lv image to s3
22:26 semiosis nbvfuel: your speed probably will be limited by your s3 bandwidth, not your disk bandwidth
22:27 semiosis nbvfuel: i wrote this python thing to upload a block device image to s3... https://github.com/semiosis/s3-parallel-multipart-uploader
22:27 glusterbot Title: semiosis/s3-parallel-multipart-uploader · GitHub (at github.com)
22:27 semiosis nbvfuel: someone just contributed a patch to it, shockingly
22:27 nbvfuel semiosis: hmm, OK.  It's not the full filesystem, just a few important directories.
22:27 semiosis oh then never mind
22:28 m0rph JoeJulian: /var/log in a partition is a good idea, but I don't have that many logs usually. As for what was going on, I was rsyncing ~6TB. Maybe my logging level was too verbose for that much activity?
22:29 nbvfuel I guess my question was potentially a more general of: "is it stupid of me to think that I can read from the gluster file system directly, instead of from a client mount"
22:29 JoeJulian I couldn't tell without seeing it, but that still shouldn't make that big of a log.
22:29 semiosis it shouldn't hurt, but i dont think it will help either
22:29 semiosis nbvfuel: ^
22:30 m0rph I don't have the log anymore unfortunately.
22:34 m0rph A different question.. With regards to the "performance cache" translator. How smart is the refresh timeout? The default is 1sec; does that mean that after 1 second it frees the memory it's using to get fresh data? Or does it verify the data is still the same to avoid having to reload the same again the following second?
22:35 m0rph In a read-heavy environment, should I set it to a few seconds? or is that a bad idea?
23:03 sprachgenerator joined #gluster
23:20 taco2 joined #gluster
23:21 taco2 Hi all, I have a question about gluster: is it possible to have multiple workers reading the same file simultaneously?
23:22 taco2 (by multiple workers, I mean multiple processes)
23:22 m0rph taco2: yes, I don't see why not.
23:24 taco2 m0rph: thanks! I am asking this because I have a huge gzip file on gluster and I would like to have each process reading and processing its own part of the file
23:25 taco2 so I think I will do that
23:26 theron joined #gluster
23:34 taco2 Does the documentation describe the do's and don'ts of having multiple readers of the same file?
23:34 taco2 I couldnt find anything
23:37 sprachgenerator joined #gluster
23:42 JoeJulian taco2: If you're just reading, there's nothing special.
23:43 taco2 Understood. Thanks a lot for your help
23:44 taco2 (and yes I am only reading the file)
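A trivial illustration of concurrent readers against the same file on a client mount (offsets made up); note that a plain gzip stream still has to be decompressed sequentially, so splitting the reads only pays off if the workers can turn their byte ranges into useful work:

    # four readers, each streaming a different 1 GiB slice of the same file
    for i in 0 1 2 3; do
        dd if=/mnt/gv0/huge.gz of=/dev/null bs=1M skip=$((i*1024)) count=1024 &
    done
    wait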
23:45 JoeJulian m0rph: It means that the cache is invalid after cache-timeout. I'm not sure on revalidation. I do know that it's fd specific so if you close the fd, the cache is invalidated immediately regardless of timeouts.
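The timeout in question is performance.cache-refresh-timeout; raising it trades freshness for fewer revalidations, e.g. (volume name is a placeholder):

    # let cached pages be served for up to 4 seconds before being revalidated
    gluster volume set myvol performance.cache-refresh-timeout 4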
