
IRC log for #gluster, 2013-02-27


All times are shown in UTC.

Time Nick Message
00:00 JoeJulian I bet the only reason they're not still running arcnet is most of those buildings were built after it died.
00:01 JoeJulian Oh, wait... this is #gluster not #bitchaboutmicrosoft sorry.
00:03 joehoyle joined #gluster
00:07 hagarth joined #gluster
00:25 jag3773 joined #gluster
00:27 H__ joined #gluster
00:35 __Bryan__ joined #gluster
00:37 bala1 joined #gluster
01:01 kevein joined #gluster
01:07 H__ joined #gluster
01:16 lpabon joined #gluster
01:19 hagarth joined #gluster
01:51 yinyin joined #gluster
01:59 isomorphic joined #gluster
02:25 cjohnston_work joined #gluster
02:40 _pol joined #gluster
02:41 __Bryan__ joined #gluster
02:47 sag47 joined #gluster
02:47 sag47 left #gluster
02:52 kevein joined #gluster
02:57 jdarcy joined #gluster
03:03 pipopopo joined #gluster
03:09 jdarcy joined #gluster
03:11 isomorphic joined #gluster
03:13 sgowda joined #gluster
03:30 vshankar joined #gluster
03:40 joe- joined #gluster
03:45 anmol joined #gluster
03:55 ehg joined #gluster
03:55 twx joined #gluster
03:55 Ramereth joined #gluster
03:55 xymox joined #gluster
03:55 errstr joined #gluster
03:55 stigchristian joined #gluster
03:56 xymox joined #gluster
03:58 sripathi joined #gluster
04:02 yinyin joined #gluster
04:03 bharata joined #gluster
04:16 dustint joined #gluster
04:21 vigia joined #gluster
04:21 satheesh joined #gluster
04:23 bulde joined #gluster
04:47 dlocal joined #gluster
04:55 vpshastry joined #gluster
05:02 dlocal Hi all! I have Gluster storage with 4 bricks, replicated with 2x redundancy, and I'm looking for the best tuning options for use as shared OpenStack VM storage, but I can't find them. I've tried direct-IO, NFS, write-behind and flush-behind, and playing with io-thread-count, and the only result is a terribly slow filesystem on operations with a large number of files. For example, copying one directory tree with 90000 files,
05:02 dlocal about 3G in total, takes... 18 minutes!
05:02 dlocal How can I resolve this? On the storage bricks the gluster backend sits on software RAID1 volumes; maybe that's the root of the trouble?
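
For reference, the tunables dlocal mentions are ordinary volume options. A minimal sketch, assuming a hypothetical replicated volume named vmstore, with illustrative starting values only:

    # enable write-behind/flush-behind and raise the io-thread count
    gluster volume set vmstore performance.write-behind on
    gluster volume set vmstore performance.flush-behind on
    gluster volume set vmstore performance.io-thread-count 32
    # confirm the options are applied
    gluster volume info vmstore

These mainly help streaming write throughput; small-file workloads like the 90000-file copy are typically dominated by per-file lookup and replication round-trips, which the options above do little to hide.
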
05:09 yinyin joined #gluster
05:10 NeatBasis joined #gluster
05:15 isomorphic joined #gluster
05:16 deepakcs joined #gluster
05:19 rastar joined #gluster
05:21 ramkrsna joined #gluster
05:22 shylesh joined #gluster
05:28 Humble joined #gluster
05:35 bala joined #gluster
05:35 aravindavk joined #gluster
05:36 raghu joined #gluster
05:43 srhudli joined #gluster
05:45 shireesh joined #gluster
05:47 pithagorians joined #gluster
05:53 lala joined #gluster
05:57 ultrabizweb joined #gluster
06:00 bala joined #gluster
06:01 hagarth joined #gluster
06:08 anmol joined #gluster
06:09 timothy joined #gluster
06:15 puebele joined #gluster
06:18 ngoswami joined #gluster
06:19 overclk joined #gluster
06:24 hagarth joined #gluster
06:24 anmol joined #gluster
06:29 rotbeard joined #gluster
06:30 cjohnston_work joined #gluster
06:34 Humble_afk joined #gluster
06:50 Nevan joined #gluster
07:06 vimal joined #gluster
07:16 _pol joined #gluster
07:17 GabrieleV joined #gluster
07:23 jtux joined #gluster
07:25 hagarth joined #gluster
07:26 sgowda joined #gluster
07:30 sgowda joined #gluster
07:30 guigui1 joined #gluster
07:30 ekuric joined #gluster
07:35 ThatGraemeGuy joined #gluster
07:36 yinyin joined #gluster
07:36 ctria joined #gluster
07:37 rotbeard joined #gluster
07:41 gbrand_ joined #gluster
07:43 glusterbot New news from resolvedglusterbugs: [Bug 763853] replace brick failure <http://goo.gl/qkkRU>
07:48 cw joined #gluster
08:03 jtux joined #gluster
08:09 hybrid5121 joined #gluster
08:09 Era joined #gluster
08:22 pithagorians joined #gluster
08:22 deepakcs joined #gluster
08:24 gbrand_ joined #gluster
08:30 andreask joined #gluster
08:30 GabrieleV joined #gluster
08:36 rastar joined #gluster
08:45 vimal joined #gluster
08:47 rotbeard joined #gluster
08:54 sahina joined #gluster
08:58 Humble_afk joined #gluster
09:20 ctria joined #gluster
09:23 Norky joined #gluster
09:26 Rocky_ joined #gluster
09:31 badone joined #gluster
09:37 Rocky_ joined #gluster
09:38 yinyin joined #gluster
10:09 ctria joined #gluster
10:27 andreask joined #gluster
10:43 yinyin joined #gluster
11:10 test_ joined #gluster
11:24 satheesh joined #gluster
11:25 rotbeard joined #gluster
11:27 glusterbot New news from newglusterbugs: [Bug 916127] Virt file does not get regenerated after wiping /var/lib/glusterd/groups and restarting glusterd <http://goo.gl/ensdu>
11:39 vpshastry1 joined #gluster
11:44 yinyin joined #gluster
11:46 gbrand_ joined #gluster
11:51 andreask joined #gluster
11:56 manik joined #gluster
12:04 jdarcy joined #gluster
12:04 edward1 joined #gluster
12:08 tryggvil joined #gluster
12:15 hagarth joined #gluster
12:17 hybrid5121 joined #gluster
12:27 rcheleguini joined #gluster
12:29 dobber joined #gluster
12:35 ngoswami joined #gluster
12:44 yinyin joined #gluster
12:48 bulde joined #gluster
12:48 yinyin_ joined #gluster
12:56 dustint joined #gluster
13:01 morse joined #gluster
13:18 vpshastry joined #gluster
13:18 jdarcy joined #gluster
13:22 hagarth joined #gluster
13:29 jdarcy joined #gluster
13:34 deepakcs joined #gluster
13:43 ctria joined #gluster
13:43 plarsen joined #gluster
13:43 gbrand_ joined #gluster
13:43 shylesh joined #gluster
13:51 morse joined #gluster
13:52 lala joined #gluster
14:02 bennyturns joined #gluster
14:05 ctria joined #gluster
14:06 morse joined #gluster
14:18 hagarth joined #gluster
14:26 pithagorians joined #gluster
14:27 lhawthor_ joined #gluster
14:36 balunasj joined #gluster
14:38 morse joined #gluster
14:41 rwheeler joined #gluster
14:41 gbrand_ joined #gluster
14:44 NeatBasis joined #gluster
14:52 stopbit joined #gluster
14:57 jdarcy joined #gluster
14:57 jdarcy_ joined #gluster
15:17 nueces joined #gluster
15:17 satheesh joined #gluster
15:21 puebele1 joined #gluster
15:27 pithagorians joined #gluster
15:29 gbrand_ joined #gluster
15:30 bugs_ joined #gluster
16:08 satheesh1 joined #gluster
16:10 bstromski joined #gluster
16:17 Humble joined #gluster
16:19 zaitcev joined #gluster
16:20 hybrid5122 joined #gluster
16:30 ctria joined #gluster
16:33 puebele2 joined #gluster
16:35 45PABXML7 joined #gluster
16:40 flrichar joined #gluster
16:42 hybrid5121 joined #gluster
16:44 gbrand__ joined #gluster
16:44 45PABXML7 Hey gang, I'm trying to troubleshoot some gluster latency issues. Deleting or creating folders/files takes quite a bit longer than it should. I noticed that when doing an ls, it takes some time to display the directory. One thing I did notice is a bunch of errors in glustershd.log.
16:44 45PABXML7 "I [afr-common.c:1215:afr_detect_self_heal_by_lookup_status] 0-tcstorage-volume-replicate-4: entries are missing in lookup of <gfid:9f53acda-31a0-49a7-a7ee-dbf98edcf5cb>    E [afr-self-heal-common.c:2156:afr_self_heal_completion_cbk] 0-tcstorage-volume-replicate-4: background meta-data data entry self-heal failed on <gfid:9f53acda-31a0-49a7-a7ee-dbf98edcf5cb>"
16:51 _br_ joined #gluster
16:53 Humble joined #gluster
16:58 _br_ joined #gluster
16:59 luckybambu joined #gluster
17:00 flakrat joined #gluster
17:00 flakrat joined #gluster
17:05 raven-np joined #gluster
17:09 Gilbs Let's try this with a real nick... I'm trying to troubleshoot some gluster latency issues. Deleting or creating folders/files takes quite a bit longer than it should. I noticed that when doing an ls, it takes some time to display the directory. One thing I did notice is a bunch of errors in glustershd.log.
17:09 Gilbs "I [afr-common.c:1215:afr_detect_self_heal_by_lookup_status] 0-tcstorage-volume-replicate-4: entries are missing in lookup of <gfid:9f53acda-31a0-49a7-a7ee-dbf98edcf5cb>    E [afr-self-heal-common.c:2156:afr_self_heal_completion_cbk] 0-tcstorage-volume-replicate-4: background meta-data data entry self-heal failed on <gfid:9f53acda-31a0-49a7-a7ee-dbf98edcf5cb>"
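
A sketch of how one might dig into errors like the ones Gilbs quotes, assuming GlusterFS 3.3, that the volume is named tcstorage-volume (as the log prefix suggests), and a hypothetical brick path /export/brick1:

    # entries the self-heal daemon still has pending, failed on, or sees as split-brain
    gluster volume heal tcstorage-volume info
    gluster volume heal tcstorage-volume info heal-failed
    gluster volume heal tcstorage-volume info split-brain
    # map the gfid from the log back to a path on a brick (regular files only,
    # via the hard link kept under .glusterfs)
    find /export/brick1 -samefile \
        /export/brick1/.glusterfs/9f/53/9f53acda-31a0-49a7-a7ee-dbf98edcf5cb
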
17:19 manik joined #gluster
17:21 hagarth joined #gluster
17:24 yinyin joined #gluster
17:27 lala_ joined #gluster
17:39 rwheeler joined #gluster
17:43 _pol joined #gluster
17:46 Gilbs left #gluster
17:48 lh joined #gluster
17:48 lh joined #gluster
18:15 _pol joined #gluster
18:16 andreask joined #gluster
18:16 _pol joined #gluster
18:19 daMaestro joined #gluster
18:23 tc00per joined #gluster
18:23 Humble joined #gluster
18:23 tc00per left #gluster
18:25 yinyin_ joined #gluster
18:34 manik joined #gluster
18:44 ultrabizweb joined #gluster
18:47 ThatGraemeGuy joined #gluster
18:48 Humble joined #gluster
18:50 manik joined #gluster
18:55 ultrabizweb joined #gluster
18:58 rwheeler_ joined #gluster
18:59 glusterbot New news from newglusterbugs: [Bug 913699] Conservative merge fails on client3_1_mknod_cbk <http://goo.gl/ThGYk>
19:01 Mo_ joined #gluster
19:02 lpabon joined #gluster
19:23 Humble joined #gluster
19:25 yinyin_ joined #gluster
19:29 __Bryan__ joined #gluster
19:50 tjstansell joined #gluster
19:53 a2|cthon joined #gluster
19:53 tjstansell hey folks.  i have been testing procedures for rebuilding a node from scratch in a 2-node replicate cluster.  it seems that when i rebuild the node that has the first brick listed, it is able to self-heal all of the data back from the second brick, but then it updates all of the timestamps from the first brick (which are from the time the data was healed)
19:54 tjstansell what happens is that all timestamps in the filesystem get updated to when the self-heal healed the first brick.
19:55 tjstansell this does not happen when i rebuild the second brick.  it seems to always use the timestamps from the first brick.
19:55 tjstansell due to other restrictions, this is on centos 5.9 using ext3 as the backend filesystem for the bricks.
19:56 tjstansell and this is using glusterfs{,server,fuse}-3.3.1-1.el5
19:56 tjstansell i'm curious if my rebuild procedure is flawed or if i'm running into a bug somewhere in how it figures out how to self-heal.
19:57 jlkinsel joined #gluster
19:57 tjstansell anyone have any suggestions?
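
tjstansell's scenario (rebuild one brick of a replica pair, then let AFR repopulate it) can at least be driven and checked explicitly. A sketch, assuming GlusterFS 3.3 and a hypothetical volume named myvol:

    # after the rebuilt brick is back online, force a full self-heal
    gluster volume heal myvol full
    # then watch what was healed and what is still pending
    gluster volume heal myvol info healed
    gluster volume heal myvol info
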
19:57 jlkinsel hey folks - I've got a gluster cluster running 3.3b2 - is it possible to do a rolling upgrade to 3.3.0?
20:05 cw joined #gluster
20:11 hateya joined #gluster
20:18 BSTR does anyone know if there is a way to turn xtime off on geo-replication?
20:20 akshay joined #gluster
20:25 yinyin joined #gluster
20:28 andreask joined #gluster
20:33 badone joined #gluster
20:47 Humble joined #gluster
20:50 manik joined #gluster
20:50 DataBeaver joined #gluster
21:05 Humble joined #gluster
21:06 gbrand_ joined #gluster
21:24 pithagorians joined #gluster
21:26 yinyin joined #gluster
21:28 y4m4 joined #gluster
21:29 jdarcy joined #gluster
21:30 hagarth joined #gluster
21:35 disarone joined #gluster
21:36 jdarcy joined #gluster
21:45 tryggvil joined #gluster
22:04 tjstansell well, this room is sadly disappointing :)
22:04 tjstansell anyone tested how broken glusterfs would be if xattr support wasn't enabled in your bricks?
22:07 NuxRo not me, but i would imagine it would be pretty broken
22:16 semiosis tjstansell: iirc the brick daemons will do a test and just die when they try to start up
22:17 semiosis tjstansell: brick daemons = glusterfsd ,,(processes)
22:17 glusterbot tjstansell: the GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/hJBvL for more information.
22:18 semiosis see also the ,,(extended attributes) article
22:18 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://goo.gl/Bf9Er
22:18 semiosis and by the way, glusterfs uses trusted.* xattrs which are enabled by default on ext & xfs filesystems (not sure if they can even be disabled)
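
A sketch of what that getfattr call shows on a replica pair, with hypothetical brick paths; the trusted.afr.<volume>-client-N keys are the pending-operation counters AFR compares when choosing a self-heal source, which is the decision tjstansell is trying to understand:

    # run on each server against the same file on its brick (paths are examples)
    getfattr -m . -d -e hex /export/brick1/some/file
    # typical keys in the output:
    #   trusted.gfid                      the file's gfid, identical on both replicas
    #   trusted.afr.<volume>-client-0/-1  pending data/metadata/entry counters;
    #                                     a non-zero counter records operations still
    #                                     owed to the replica it names
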
22:21 bstansell_ joined #gluster
22:21 tjstansell hm.. so now i'm confused why, in my 2-node replicate cluster, when i rebuild brick1 it copies the data from brick2 but uses brick1's timestamps as the source, causing the whole filesystem's timestamps to get updated.
22:22 tjstansell i was thinking maybe the xattr stuff was busted since i wasn't mounting with user_xattr ... but maybe i just ran into a bug?
22:26 yinyin joined #gluster
22:31 semiosis i dont think glusterfs needs user_xattr
22:32 tjstansell yeah. it's my own misunderstanding of things...
22:32 tjstansell but i'm still stuck trying to figure out why the timestamps all get hosed when i rebuild my primary brick.
22:39 tjstansell so, anyone know how to enable debug logs that would indicate why it's replicating data from brick2->brick1 but metadata from brick1->brick2?
22:41 semiosis hmm, check the self heal log files on the servers
22:41 semiosis also the client log file
22:41 semiosis there's some log level options you can see using 'gluster volume set help'
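
The log-level options semiosis refers to look roughly like this, with a hypothetical volume name; DEBUG is very verbose, so it is usually reverted afterwards:

    # raise client-side and brick-side log verbosity
    gluster volume set myvol diagnostics.client-log-level DEBUG
    gluster volume set myvol diagnostics.brick-log-level DEBUG
    # the self-heal daemon writes to /var/log/glusterfs/glustershd.log,
    # client mounts to /var/log/glusterfs/<mount-path>.log,
    # bricks to /var/log/glusterfs/bricks/
    # revert when done
    gluster volume set myvol diagnostics.client-log-level INFO
    gluster volume set myvol diagnostics.brick-log-level INFO
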
22:48 jlkinsel joined #gluster
22:50 Humble joined #gluster
22:56 fidevo joined #gluster
23:07 vigia joined #gluster
23:26 Mo_ joined #gluster
23:26 yinyin joined #gluster
23:30 glusterbot New news from newglusterbugs: [Bug 916372] NFS3 stable writes are very slow <http://goo.gl/Z0gaJ> || [Bug 916375] Incomplete NLMv4 spec compliance <http://goo.gl/kjBbB>
23:34 jlkinsel left #gluster
23:35 raven-np joined #gluster
23:39 jdarcy joined #gluster
23:50 hattenator joined #gluster
