
IRC log for #gluster, 2012-10-31


All times shown according to UTC.

Time Nick Message
00:22 UnixDev joined #gluster
00:22 UnixDev joined #gluster
00:54 UnixDev joined #gluster
00:54 UnixDev joined #gluster
01:11 UnixDev_ joined #gluster
01:23 UnixDev_ joined #gluster
01:23 UnixDev_ joined #gluster
01:41 rwheeler joined #gluster
01:57 dmachi1 joined #gluster
02:30 nodots joined #gluster
02:37 sunus joined #gluster
02:38 nodots left #gluster
02:40 ika2810 joined #gluster
03:14 bharata joined #gluster
03:20 bulde1 joined #gluster
03:37 vpshastry joined #gluster
03:41 shylesh joined #gluster
03:42 juhaj joined #gluster
03:46 berend joined #gluster
03:49 nodots joined #gluster
03:51 vimal joined #gluster
04:20 ramkrsna joined #gluster
04:20 ramkrsna joined #gluster
04:25 faizan joined #gluster
04:45 sgowda joined #gluster
04:46 vpshastry joined #gluster
04:51 deepakcs joined #gluster
04:51 sripathi joined #gluster
05:02 anti_user hello, latest gluster from git doesnt compile
05:05 nodots left #gluster
05:14 JoeJulian It does on jenkins, I wonder what's different...
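
Building GlusterFS from a git checkout follows the usual autotools flow; a rough sketch (the repository URL and install prefix here are illustrative, not taken from the conversation):

    git clone https://github.com/gluster/glusterfs.git
    cd glusterfs
    ./autogen.sh            # generates the configure script
    ./configure             # pass --prefix=/usr or similar if desired
    make
    sudo make install
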
05:16 hyperair joined #gluster
05:16 hyperair left #gluster
05:24 jays joined #gluster
05:29 ankit9 joined #gluster
05:37 ankit9 joined #gluster
05:38 raghu` joined #gluster
05:56 kshlm joined #gluster
05:56 kshlm joined #gluster
05:56 hagarth joined #gluster
05:58 kshlm joined #gluster
05:58 kshlm joined #gluster
06:24 y4m4 joined #gluster
06:30 hagarth joined #gluster
06:45 mohankumar joined #gluster
06:50 sripathi1 joined #gluster
06:51 bala joined #gluster
07:00 ngoswami joined #gluster
07:03 guigui3 joined #gluster
07:08 lkoranda joined #gluster
07:09 dobber joined #gluster
07:13 rosco joined #gluster
07:17 rgustafs joined #gluster
07:17 Nr18 joined #gluster
07:21 mohankumar joined #gluster
07:29 sripathi joined #gluster
07:31 puebele joined #gluster
07:31 glusterbot New news from resolvedglusterbugs: [Bug 807976] losing file ownership in replicated volume when one of the brick comes online <https://bugzilla.redhat.com/show_bug.cgi?id=807976>
07:36 sripathi joined #gluster
07:37 ekuric joined #gluster
07:38 puebele joined #gluster
07:39 ctria joined #gluster
07:42 lh joined #gluster
07:42 lh joined #gluster
07:47 ankit9 joined #gluster
07:57 hagarth joined #gluster
08:01 pkoro joined #gluster
08:26 vpshastry left #gluster
08:27 Triade joined #gluster
08:31 sripathi joined #gluster
08:32 dobber joined #gluster
08:38 ctria joined #gluster
08:47 Nr18 joined #gluster
08:47 mohankumar joined #gluster
08:50 gbrand_ joined #gluster
08:52 wica joined #gluster
08:52 wica Hi
08:52 glusterbot wica: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:52 wica Thnx bot :)
09:02 vpshastry joined #gluster
09:12 DaveS_ joined #gluster
09:16 DaveS joined #gluster
09:24 hagarth joined #gluster
09:24 hagarth @channelstats
09:24 glusterbot hagarth: On #gluster there have been 37821 messages, containing 1704789 characters, 283008 words, 1172 smileys, and 157 frowns; 284 of those messages were ACTIONs. There have been 13023 joins, 445 parts, 12640 quits, 2 kicks, 29 mode changes, and 5 topic changes. There are currently 183 users and the channel has peaked at 185 users.
09:31 glusterbot New news from resolvedglusterbugs: [Bug 764107] Gluster 3.1.2 and rpc-auth patch <https://bugzilla.redhat.com/show_bug.cgi?id=764107>
09:32 sripathi joined #gluster
09:44 vpshastry joined #gluster
09:48 Azrael808 joined #gluster
09:50 Fabiom joined #gluster
09:50 vpshastry joined #gluster
09:53 puebele1 joined #gluster
10:08 TheHaven joined #gluster
10:19 puebele1 joined #gluster
10:19 sshaaf joined #gluster
10:24 clag_ hi all, can someone confirm the process on a replicate volume with glusterfs 3.2? if a server holding a brick reboots, are the file changes it missed committed to it once it recovers?
10:26 overclk joined #gluster
10:29 NuxRo clag_: with gluster 3.2 you need to do this http://community.gluster.org/a/howto-targeted-self-heal-repairing-less-than-the-whole-volume/
10:29 glusterbot Title: Article: HOWTO: Targeted self-heal (repairing less than the whole volume) (at community.gluster.org)
10:29 NuxRo glusterfs 3.3 has built-in self-healing, you might want to look into upgrading
10:29 clag_ ok that what i do
10:30 clag_ thx
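
The targeted self-heal howto NuxRo links comes down to forcing lookups on just the affected subtree from a client mount, so the replicate translator re-examines and repairs only those files; a minimal sketch, assuming the volume is mounted at /mnt/gluster and only /mnt/gluster/data needs repair:

    # each stat triggers a lookup, which lets AFR detect and heal
    # pending changes on that file
    find /mnt/gluster/data -noleaf -print0 | xargs -0 stat >/dev/null
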
10:34 raghu` joined #gluster
10:36 Psyborg joined #gluster
10:36 Psyborg hello guys
10:37 Psyborg i am looking for some gluster meeting, but http://www.gluster.org/meetups/ throws an error
10:37 glusterbot Title: GlusterFS Meetups | Gluster Community Website (at www.gluster.org)
10:41 ika2810 left #gluster
10:42 bala joined #gluster
10:55 saz joined #gluster
11:04 Fabiom joined #gluster
11:04 shireesh joined #gluster
11:06 puebele1 left #gluster
11:16 bala joined #gluster
11:43 cyberbootje joined #gluster
11:44 edward1 joined #gluster
11:47 NuxRo JoeJulian: do you know which supybot plugin is enabled glusterbot to echo the link titles in the channel?
11:48 NuxRo s/is enabled/enables
11:49 NuxRo think i found it, supybot.plugins.URL.titleSnarfer
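
In Supybot that setting is a channel-scoped config value, so (assuming the URL plugin is loaded and you have the needed capability) enabling it looks roughly like this from IRC:

    @load URL
    @config channel supybot.plugins.URL.titleSnarfer True
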
12:27 shireesh joined #gluster
12:28 mdarade1 left #gluster
12:34 plarsen joined #gluster
12:35 Alpinist joined #gluster
12:45 bharata joined #gluster
12:46 shireesh joined #gluster
12:47 aliguori joined #gluster
12:50 balunasj joined #gluster
12:51 rgustafs joined #gluster
12:54 tryggvil joined #gluster
12:59 ramkrsna joined #gluster
13:03 bulde joined #gluster
13:16 manik joined #gluster
13:26 manik joined #gluster
13:27 ramkrsna joined #gluster
13:36 rwheeler joined #gluster
13:55 silopolis joined #gluster
13:57 dmachi joined #gluster
13:58 bulde joined #gluster
14:02 mrbucket_work joined #gluster
14:02 glusterbot New news from resolvedglusterbugs: [Bug 831151] Self heal fails on directories with symlinks <https://bugzilla.redhat.com/show_bug.cgi?id=831151>
14:03 mrbucket_work hey. i'm looking at gluster, but right now i only have one storage node. do i *need* two, or can i start with one and add another as time goes on
14:03 mrbucket_work All the guides im looking at are suggesting I start with two
14:06 kkeithley1 you can start with just one
14:07 mrbucket_work cool. i have a 36 drive machine with a raid card and such
14:07 stopbit joined #gluster
14:07 mrbucket_work we may never extend beyond this machine for the purpose it's being used for, but it'd be nice to know that I can do it in the future without any problems.
14:08 kkeithley but I kinda wonder, if you've got big raid and you're only going to use a single brick, then why use gluster in the first place?
14:11 mrbucket_work kkeithley: well, if i'm understanding correctly (and maybe im not) gluster allows me to put together storage on a variety of nodes into a single namespace?
14:12 kkeithley that's correct
14:12 mrbucket_work kkeithley: basically, i want the option for expansion then
14:13 mrbucket_work should it turn out this works well and we start putting more data than intended, i dont want to have future me deal with migration or anything like that
14:13 kkeithley that's a good reason
14:16 nodots joined #gluster
14:21 semiosis mrbucket_work: fair warning, you'll need to rebalance when you expand glusterfs, which is an expensive operation.  test with a scaled down dataset to see how long that might take.
14:25 mrbucket_work semiosis: good to know. thanks!
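
The expansion path semiosis warns about is an add-brick followed by a rebalance; a minimal sketch with a hypothetical volume and brick:

    # add the new brick, then migrate existing data onto it
    gluster volume add-brick myvol newserver:/export/brick1
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status    # can run for a long time on large volumes
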
14:25 mrbucket_work left #gluster
14:25 manik joined #gluster
14:26 semiosis :O
14:27 wushudoin joined #gluster
14:27 chouchins joined #gluster
14:46 raghu` joined #gluster
14:46 jiffe98 is performance.cache-refresh-timeout server or client side?
14:49 manik joined #gluster
15:00 daddmac2 joined #gluster
15:03 daMaestro joined #gluster
15:10 chacken2 joined #gluster
15:14 JoeJulian :O
15:18 manik joined #gluster
15:25 ctria joined #gluster
15:25 jiffe98 are there plans for client caching through the gluster client?
15:26 jiffe98 I try to use the gluster client over nfs wherever possible because of the built-in redundancy
15:30 bala1 joined #gluster
15:35 davdunc joined #gluster
15:35 davdunc joined #gluster
15:36 JoeJulian jiffe98: The performance translators all exist on the client side except for io-threads which /can/ also be configured client-side but isn't by default.
15:37 JoeJulian jiffe98: The part that throws you off is that most (if not all) the performance translators are attached to an fd. If that fd is closed, the cache is released.
15:38 jiffe98 gotcha
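
The option jiffe98 asked about is set per volume and, as JoeJulian notes, takes effect in the client-side performance translators; a minimal sketch with a hypothetical volume name:

    # cache-refresh-timeout is honoured by the client-side io-cache translator;
    # the value is in seconds
    gluster volume set myvol performance.cache-refresh-timeout 5
    gluster volume info myvol    # the option should now show under "Options Reconfigured"
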
15:49 kd joined #gluster
15:52 kd left #gluster
15:56 sripathi joined #gluster
15:59 puebele joined #gluster
16:05 nodots joined #gluster
16:05 nodots left #gluster
16:18 wnl_work joined #gluster
16:18 wnl_work http://fpaste.org/wJiI/
16:18 wnl_work WAT?
16:18 glusterbot Title: Viewing brick and volume by wnl_work (at fpaste.org)
16:18 semiosis or a prefix of it is already part of a volume
16:18 glusterbot semiosis: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
16:19 wnl_work i deleted a volume and now i cant reuse a brick from that volume in a new volume?
16:19 semiosis wnl_work: ^^
16:19 wnl_work heh
16:19 wnl_work guess i shoulda googled. thx
16:19 semiosis or just pasted the error here :)
16:20 * JoeJulian wonders how hard it would be to make glusterbot check the pastes for faq...
16:21 wnl_work of course it would help if my system already had setfattr installed.  :-/
16:21 JoeJulian Always handy to have that on all your gluster servers.
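
The fix glusterbot links clears the GlusterFS metadata left behind on the old brick directory; a minimal sketch, run directly against the brick path (the path shown is illustrative):

    setfattr -x trusted.glusterfs.volume-id /data/brick1
    setfattr -x trusted.gfid /data/brick1
    rm -rf /data/brick1/.glusterfs
    # then re-run the volume create (or add-brick) command
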
16:24 wnl_work btw: what is involved in changing the hostname of a gluster server?
16:25 wnl_work which would require changing the hostnames of the brick references for a volume
16:25 semiosis wnl_work: i think you can use replace-brick commit --force or something like that, but never tried myself
16:25 JoeJulian There's no good way of doing it. I've theorized about doing a replace-brick...commit force with the new hostname, but I haven't tried it yet.
16:25 semiosis wait, no, i take that back
16:26 JoeJulian no?
16:26 semiosis there'd probably be an issue with old-hostname & new-hostname not both being in the pool
16:26 JoeJulian Ah, right.
16:26 JoeJulian Though I suppose if you changed the gfid...
16:26 JoeJulian er, uuid
16:26 semiosis would be a good thing to test & write up, since this question has been coming up once a week lately
16:26 JoeJulian whatever.
16:27 semiosis because people don't follow my best-practice advice of creating FQDNs for gluster servers which can be mapped to hosts as needed
16:27 Mo__ joined #gluster
16:27 semiosis glusterbot: meh
16:27 glusterbot semiosis: I'm not happy about it either
16:27 wnl_work so, basically "dont do that"?
16:28 semiosis we just dont have a good answer ready yet
16:28 seanh-ansca joined #gluster
16:29 semiosis someone should file a bug asking for this
16:29 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
16:29 wnl_work i can create a separate cname that points to the server. shouldnt be a problem.
16:29 JoeJulian Been working on my presentation. I'm going to do a live demo, too. Should be interesting. I can go from deploy to mounted volume in about 2 1/2 minutes and that's doing everything that's actually gluster related by hand.
16:29 semiosis wnl_work: that's what i'd recommend
16:29 wnl_work thx
16:30 semiosis yw
16:30 semiosis JoeJulian: cool!
16:30 JoeJulian rackspace should give me some swag too. They're the ones I'll be using for the demo.
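
The gluster-specific part of a deploy like the one JoeJulian describes really is only a handful of commands; a minimal sketch for two hypothetical servers, using the kind of dedicated FQDNs semiosis recommends so the names can be repointed later:

    # on gluster1, once glusterd is running on both nodes
    gluster peer probe gluster2.example.com
    gluster volume create demo replica 2 \
        gluster1.example.com:/export/brick1 gluster2.example.com:/export/brick1
    gluster volume start demo
    # on a client
    mount -t glusterfs gluster1.example.com:/demo /mnt/demo
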
16:37 faizan joined #gluster
16:44 hagarth joined #gluster
16:46 sripathi joined #gluster
16:52 ekuric left #gluster
17:11 hagarth joined #gluster
17:24 bulde1 joined #gluster
17:34 lkoranda joined #gluster
17:41 Humble joined #gluster
17:50 hagarth joined #gluster
17:52 y4m4 joined #gluster
17:53 balunasj|mtg joined #gluster
17:53 bulde1 joined #gluster
17:53 balunasj joined #gluster
18:16 faizan joined #gluster
18:23 jiffe98 is there a way to reset a volume param to default?
18:31 gbrand_ joined #gluster
18:37 Humble_afk joined #gluster
18:45 JoeJulian I think that's what "gluster volume reset $vol" does without any additional options.
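
Reset can also be pointed at a single option rather than the whole volume; a minimal sketch with a hypothetical volume name:

    # put one option back to its default
    gluster volume reset myvol performance.cache-refresh-timeout
    # or clear every reconfigured option on the volume
    gluster volume reset myvol
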
18:54 daddmac2 we have a couple of gluster servers that take turns grinding to a halt.  looking in the logs the one struggling has these errors logging in rapid succession: [2012-10-31 12:50:51.696556] E [afr-self-heal-common.c:2156:afr_self_heal_completion_cbk] 0-gstorage-volume-replicate-4: background  meta-data data entry missing-entry gfid self-heal failed on [filepath]
18:55 daddmac2 are the two events related, and am i doing something to make it mad at me?
18:55 Gilbs1 joined #gluster
18:57 DaveS joined #gluster
19:02 DaveS joined #gluster
19:09 daddmac2 full disclosure: users access the system through "client servers", mounting gluster volume via gluster client, sharing via smb, using ctdb to distribute the load.  volumes are on servers running only glusterd.
19:10 JoeJulian daddmac2: ,,(ext4)?
19:10 glusterbot daddmac2: Read about the ext4 problem at http://joejulian.name/blog/glusterfs-bit-by-ext4-structure-change/
19:11 daddmac2 yes sir!
19:11 daddmac2 reading your log now.  ty gbot!
19:11 daddmac2 \blog
19:13 bulde1 joined #gluster
19:19 daddmac2 joejulian:we are on 2.6.32-220.el6.x86_64...
19:28 robo joined #gluster
19:32 tc00per left #gluster
19:34 tc00per joined #gluster
19:35 davdunc joined #gluster
19:35 davdunc joined #gluster
19:43 JoeJulian daddmac2: Check your brick logs for errors also. Btw, I do a samba reshare as well and I don't see that problem. I do only have a single samba server though.
19:45 daddmac2 will do!  thx!
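
A quick way to check exposure to the ext4 issue glusterbot links is to confirm which filesystem backs each brick and which kernel is running (the problematic ext4 change only appears in newer kernels); the brick path below is illustrative:

    uname -r                    # running kernel
    df -T /export/brick1        # filesystem type backing the brick
    # many deployments of this era used XFS bricks to sidestep the ext4 change
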
19:50 madphoenix is it still necessary to run fix-layout after gracefully shrinking a volume, or is that done automatically now?
20:23 tc00per Is it possible to use geo-replication to mirror a glusterfs datastore from one site to another and then, after the sync is complete, to break the replication and have two standalone glusterfs systems? Envision a 'staging' to 'production' geo-replication 'copy' of a glusterfs VOL from virtual to physical hardware.
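
tc00per's question goes unanswered in the log, but the mechanics would involve ordinary geo-replication session commands; a sketch with hypothetical names (SLAVE is a placeholder, its exact URI form depends on the release, and this is not a confirmation that the break-and-diverge workflow is supported):

    gluster volume geo-replication stagingvol SLAVE start
    gluster volume geo-replication stagingvol SLAVE status
    # once the slave has caught up, stop the session to freeze the copy
    gluster volume geo-replication stagingvol SLAVE stop
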
20:23 nightwalk joined #gluster
20:30 danishman joined #gluster
20:52 duerF joined #gluster
20:54 rwheeler joined #gluster
21:06 tryggvil_ joined #gluster
21:11 nightwalk joined #gluster
21:25 UnixDev does a striped volume have any data protection? can you loose a brick or few?
21:31 semiosis since glusterfs 3.3.0 you can do replicate & stripe (& distribute) all in the same volume, though ,,(stripe) is generally not recommended
21:31 glusterbot Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
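
The 3.3-style combinations semiosis mentions are expressed at volume-create time; a minimal sketch with hypothetical hosts and bricks (per the linked post, plain replicate+distribute is usually the better choice):

    # distributed-replicated: four bricks, two-way replication
    gluster volume create vol1 replica 2 h1:/b1 h2:/b1 h3:/b1 h4:/b1
    # stripe combined with replicate, possible since 3.3.0 but rarely recommended
    gluster volume create vol2 stripe 2 replica 2 h1:/b2 h2:/b2 h3:/b2 h4:/b2
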
21:39 jbrooks joined #gluster
21:51 Gilbs1 Building off what daddmac started, I made sure we're not in a split-brain scenario, but every time we browse this directory the logs are filled with these errors.  How do I know who replicate-4 is?  Is this a lock issue?
21:51 Gilbs1 [2012-10-31 15:43:56.044319] E [afr-self-heal-common.c:2156:afr_self_heal_completion_cbk] 0-test-volume-replicate-4: background  meta-data data entry missing-entry gfid self-heal failed on /filepath
21:51 Gilbs1 [2012-10-31 15:43:56.032679] I [afr-common.c:1215:afr_detect_self_heal_by_lookup_status] 0-test-volume-replicate-4: entries are missing in lookup of /filepath.
21:51 Gilbs1 [2012-10-31 15:43:56.032783] I [afr-common.c:1340:afr_launch_self_heal] 0-test-volume-replicate-4: background  meta-data data entry missing-entry gfid self-heal triggered. path: /filepath, reason: lookup detected pending operations
21:51 Gilbs1 [2012-10-31 15:43:56.044319] E [afr-self-heal-common.c:2156:afr_self_heal_completion_cbk] 0-test-volume-replicate-4: background  meta-data data entry missing-entry gfid self-heal failed on /filepath
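
The replicate-N names in those messages are client-graph subvolume names: N counts replica sets in the order bricks are listed for the volume, so with replica 2 the set test-volume-replicate-4 is the fifth brick pair. A sketch of mapping it back (volume name and paths assumed):

    gluster volume info test-volume
    # bricks are listed in creation order; with replica 2, replicate-4 covers
    # bricks 9 and 10 of that listing
    grep -A4 'volume test-volume-replicate-4' /var/lib/glusterd/vols/test-volume/test-volume-fuse.vol
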
21:58 Gilbs1 left #gluster
21:58 Daxxial_1 joined #gluster
22:10 UnixDev semiosis: so stripe really has no "checksum" node/block. it would be nice if it would work more like a raid 5 stripe
22:11 blendedbychris joined #gluster
22:11 blendedbychris joined #gluster
22:22 semiosis UnixDev: feel free to file a bug requesting that :)
22:22 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
22:26 nodots joined #gluster
22:30 UnixDev semiosis: submitted
22:31 semiosis bz id?
22:32 nodots left #gluster
22:42 UnixDev 871986
22:42 semiosis bug 871986
22:42 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=871986 unspecified, unspecified, ---, sgowda, NEW , Striping should have optional parity/checksum
22:43 UnixDev bug 871987
22:43 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=871987 low, unspecified, ---, kaushal, NEW , Split-brain logging is confusing
22:43 UnixDev also opened that
22:43 Technicool joined #gluster
23:00 glusterbot New news from newglusterbugs: [Bug 871987] Split-brain logging is confusing <https://bugzilla.redhat.com/show_bug.cgi?id=871987> || [Bug 871986] Striping should have optional parity/checksum <https://bugzilla.redhat.com/show_bug.cgi?id=871986>
23:18 morse_ joined #gluster
23:19 plantain joined #gluster
23:19 plantain joined #gluster
23:19 juhaj joined #gluster
23:20 quillo joined #gluster
