IRC log for #gluster, 2014-05-12


All times shown according to UTC.

Time Nick Message
00:02 ctria joined #gluster
00:22 harish joined #gluster
00:24 sputnik13 joined #gluster
00:42 yinyin_ joined #gluster
00:47 saltsa joined #gluster
00:50 Moe-sama joined #gluster
00:51 RobertLaptop joined #gluster
00:52 hflai joined #gluster
00:52 Rydekull joined #gluster
01:03 Ark joined #gluster
01:26 nueces joined #gluster
01:39 harish joined #gluster
01:39 jag3773 joined #gluster
01:44 gdubreui joined #gluster
01:51 shapemaker joined #gluster
01:51 sprachgenerator_ joined #gluster
01:51 eclectic joined #gluster
01:51 d-fence joined #gluster
01:51 jvandewege_ joined #gluster
01:51 sage joined #gluster
01:51 NuxRo joined #gluster
01:52 sputnik13 joined #gluster
01:52 nikk_ joined #gluster
01:52 johnmark joined #gluster
01:54 silky joined #gluster
01:55 necrogami joined #gluster
01:55 atrius joined #gluster
01:56 atrius` joined #gluster
01:56 auganov joined #gluster
02:04 cyber_si joined #gluster
02:04 d3vz3r0 joined #gluster
02:04 elico joined #gluster
02:05 jiffe98 joined #gluster
02:14 yinyin_ joined #gluster
02:25 mohan__ joined #gluster
02:27 systemonkey JoeJulian: Hi Joe, I have a really dire question to ask you. I was running glusterfs on an EOL OS and decided to upgrade everything with a clean install. Also did a clean install of glusterfs 3.5, coming from glusterfs 3.3.
02:28 systemonkey now when I try to create a volume I get: volume create: dist-vol: failed: /tank or a prefix of it is already part of a volume.
02:28 glusterbot systemonkey: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
02:29 systemonkey so naturally, I searched and found your blog post about the error. As I was reading it, I'm wondering how risky it is to run the commands you posted on the blog? I can't lose the data.
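For context, the fix on that blog post boils down to clearing the leftover gluster metadata from each brick path (and, where needed, its parent directories, hence "or a prefix of it"). A rough sketch using the /tank path from the error above; the data files themselves are left in place, but treat this as a sketch and verify on a scratch system first if the data is irreplaceable:

    # on each brick server, for the brick path named in the error
    setfattr -x trusted.glusterfs.volume-id /tank
    setfattr -x trusted.gfid /tank
    rm -rf /tank/.glusterfs          # gluster's internal metadata/hardlink tree
    service glusterd restart         # then retry the volume create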
02:36 MugginsM joined #gluster
02:36 MugginsM I've got a gluster issue I can't figure out. some (not all) clients are seeing files in triplicate in some folders.  eg. each filename appears three times
02:37 MugginsM I've tried unmounting/remounting, doesn't seem to help
02:40 systemonkey MugginsM: I have that issue. do you have distribute or replicate?
02:40 MugginsM self-heal logs a lot of "Non Blocking entrylks failed for"  and "remote operation failed"
02:41 MugginsM I've got a two server replicated
02:41 systemonkey i have distribute.
02:41 systemonkey have you checked joejulian's blog about the split-brain issue?
02:41 MugginsM yep, we've had no end of split brain in the past, but this doesn't seem to be it
02:42 MugginsM the files I've checked seem ok on both servers to my eye
02:42 MugginsM same gfid, etc
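One way to do the kind of check MugginsM describes is to dump a file's xattrs directly on each brick (a sketch; the brick path here is hypothetical):

    # run against the brick path on each replica server, not the client mount
    getfattr -d -m . -e hex /export/brick1/some/folder/file.txt
    # compare across replicas:
    #   trusted.gfid                     should be identical
    #   trusted.afr.<volname>-client-*   non-zero counters mean pending self-heal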
02:42 MugginsM which version are you on?  3.4.1 here
02:43 systemonkey i was on 3.3 and decided to upgrade to 3.5 on a fresh install of OS. I'm freaking out here right now.
02:43 systemonkey :D
02:43 MugginsM we got a *lot* of problems going from 3.3 to 3.4, I suspect this is still a leftover
02:43 MugginsM just waiting for 3.5.1 before we jump again :)
02:44 bharata-rao joined #gluster
02:44 systemonkey crazy thing is I tested on a stg system before I did this on production, and it worked fine.
02:44 MugginsM most seem to be fixable if you dive in and know what you're doing
02:44 MugginsM but some like this one I don't know why it's doing it
02:49 ceiphas_ joined #gluster
02:51 MugginsM we also have a fresh 3.4.1 system that hasn't given us any trouble at all
02:52 MugginsM I think it's just 3.3 left a lot of nasties lurking, waiting for upgrade to trigger
02:52 MugginsM just don't have enough space or time to do a data reinstall
02:52 MugginsM haven't come across one that wasn't fixable, at least, if that's any help :)
02:53 MugginsM I know it's not when things are melting
02:54 MugginsM apps do funny things when they get three files with the same name
02:54 systemonkey MugginsM: that's at least providing me with some comfort.
02:54 MugginsM well "funny" isn't the best word
02:55 systemonkey I'm wondering if semiosis is around...
02:57 B21956 joined #gluster
02:57 MugginsM ok here's what I did with one folder: found a client that wasn't seeing duplication, copied the files off
02:57 MugginsM renamed the broken folder
02:57 MugginsM then copied the files back on
02:57 MugginsM luckily I'm only seeing it in three or four (quite large) folders
03:17 systemonkey If I remove .glusterfs folder on all bricks, and rejoin by creating a new volume, I'm wondering if gluster can see all the files again....
03:18 systemonkey I wish it were as easy as that.
03:23 MugginsM some sort of "all the files on this server look fine, rebuild from it"  :)
03:24 systemonkey yah. regenerate the .glusterfs contents.
03:28 Georgyo joined #gluster
03:38 systemonkey MugginsM: what file system are you running glusterfs on?
03:46 MugginsM xfs
03:47 systemonkey ok. I'm running on zfs. do you get slowness with xfs? or is it pretty snappy?
03:49 kdhananjay joined #gluster
03:51 itisravi joined #gluster
03:53 MugginsM haven't noticed any problems
03:53 RameshN joined #gluster
03:56 MugginsM our main bottleneck seems to be network though :-/
03:56 MugginsM the one bit we can't control
04:07 msvbhat joined #gluster
04:14 yinyin_ joined #gluster
04:18 ppai joined #gluster
04:22 kumar joined #gluster
04:24 RameshN joined #gluster
04:30 haomaiwang joined #gluster
04:31 hagarth joined #gluster
04:32 aviksil joined #gluster
04:34 atinmu joined #gluster
04:34 nishanth joined #gluster
04:38 ndarshan joined #gluster
04:46 kanagaraj joined #gluster
04:47 yinyin joined #gluster
04:53 systemonkey I'm wondering if copying over everything from /var/lib/glusterd/vols/* will let me start the volume...
04:54 bala joined #gluster
05:02 rahulcs joined #gluster
05:07 glusterbot New news from resolvedglusterbugs: [Bug 966848] "rm -rf" failed to remove directory complained "directory not empty" from fuse mount <https://bugzilla.redhat.com/show_bug.cgi?id=966848>
05:17 glusterbot New news from newglusterbugs: [Bug 1096578] "rm -rf" failed to remove directory complained "directory not empty" from fuse mount <https://bugzilla.redhat.com/show_bug.cgi?id=1096578>
05:18 TvL2386 joined #gluster
05:21 nshaikh joined #gluster
05:25 bala joined #gluster
05:29 prasanthp joined #gluster
05:35 ngoswami joined #gluster
05:37 shilpa_ joined #gluster
05:40 vpshastry joined #gluster
05:40 rjoseph joined #gluster
05:42 Philambdo joined #gluster
05:44 ngoswami joined #gluster
05:51 kanagaraj joined #gluster
05:52 hagarth joined #gluster
06:06 rwheeler joined #gluster
06:07 davinder joined #gluster
06:09 badone__ joined #gluster
06:10 rahulcs joined #gluster
06:13 rgustafs joined #gluster
06:14 davinder joined #gluster
06:14 ricky-ticky joined #gluster
06:15 GabrieleV joined #gluster
06:25 davinder joined #gluster
06:26 rjoseph joined #gluster
06:26 GabrieleV joined #gluster
06:27 meghanam joined #gluster
06:27 meghanam_ joined #gluster
06:32 davinder joined #gluster
06:37 ppai joined #gluster
06:51 ktosiek joined #gluster
06:53 rahulcs joined #gluster
07:04 ctria joined #gluster
07:05 monotek joined #gluster
07:06 [iilliinn]_ joined #gluster
07:07 [iilliinn]_ hi, is there a way to see what the current gluster parameters are?
07:08 eseyman joined #gluster
07:15 psharma joined #gluster
07:16 keytab joined #gluster
07:17 glusterbot New news from newglusterbugs: [Bug 1096610] [SNAPSHOT]: GlusterFS snapshot cli should support --xml option <https://bugzilla.redhat.com/show_bug.cgi?id=1096610>
07:25 rastar joined #gluster
07:39 fsimonce joined #gluster
07:50 DV joined #gluster
07:50 ppai joined #gluster
07:50 andreask joined #gluster
08:02 ctria joined #gluster
08:03 liquidat joined #gluster
08:05 rwheeler joined #gluster
08:11 ramteid joined #gluster
08:15 MugginsM joined #gluster
08:29 raghu joined #gluster
08:36 Slashman joined #gluster
08:37 rgustafs joined #gluster
08:37 msvbhat [iilliinn]_: gluster volume info
08:39 [iilliinn]_ msvbhat: hm.. if a parameter has the default value - should it be displayed in gluster volume info?
08:39 msvbhat [iilliinn]_: No it won't... Only modified parameters will be shown
08:40 [iilliinn]_ msvbhat: ok, thank you, it is clear now
08:40 msvbhat [iilliinn]_: You can run gluster volume set help
08:41 msvbhat [iilliinn]_: That shows default values of the parameters
08:41 [iilliinn]_ msvbhat: oh, this is helpful, thank you
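Putting msvbhat's two suggestions together (the volume name here is just an example):

    # shows only the options that have been changed from their defaults
    gluster volume info myvol

    # lists every settable option with its default value and a short description
    gluster volume set help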
08:42 [iilliinn]_ msvbhat: changing the parameters needs to be done on all nodes, correct?
08:44 prasanthp joined #gluster
08:45 saravanakumar1 joined #gluster
08:48 glusterbot New news from newglusterbugs: [Bug 1075087] [Rebalance]:on restarting glusterd, the completed rebalance is starting again on that node <https://bugzilla.redhat.com/show_bug.cgi?id=1075087>
08:52 ctria joined #gluster
08:52 msvbhat [iilliinn]_: No, set it through volume set and it changes the parameter at the volume level
08:52 caosk_kevin joined #gluster
08:52 msvbhat meaning on all the nodes of the cluster
08:53 [iilliinn]_ msvbhat: do i need to add it also in the conf file? /etc/glusterfs/xxx.vol ?
08:54 caosk_kevin hi all, will gluster develop centralized cache (SSD) management in the near future?
08:57 msvbhat [iilliinn]_: No. The gluster CLI takes care of that. Don't edit the .vol file manually if you're not sure what you're doing
08:58 [iilliinn]_ msvbhat: ok, understood
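A sketch of what that looks like in practice: the command is run once, on any node in the trusted pool, and glusterd propagates the change (volume name and option are examples only):

    # change an option once; it is stored in the volume config for the whole cluster
    gluster volume set myvol performance.cache-size 256MB

    # the change should now appear under "Options Reconfigured"
    gluster volume info myvol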
09:01 kdhananjay joined #gluster
09:07 msvbhat caosk_kevin: Not sure. At least I'm not aware of it
09:07 tryggvil joined #gluster
09:08 d-fence_ joined #gluster
09:10 tryggvil joined #gluster
09:15 caosk_kevin msvbhat: thanks. If we use SSD as a r/w cache in a glusterfs system, do you have any good ideas? Add the SSD to the local file system?
09:25 ricky-ticky joined #gluster
09:29 msvbhat caosk_kevin: I'm not sure what you mean?
09:42 rahulcs joined #gluster
09:51 hagarth joined #gluster
09:58 prasanthp joined #gluster
10:04 davinder joined #gluster
10:04 hagarth caosk_kevin: what are you looking for from SSDs with gluster? caching on the client side?
10:12 dcherednik joined #gluster
10:30 kdhananjay joined #gluster
10:34 davinder joined #gluster
10:36 dusmant joined #gluster
10:39 prasanthp joined #gluster
10:40 saurabh joined #gluster
10:42 bfoster joined #gluster
10:48 m0zes joined #gluster
11:01 d-fence joined #gluster
11:05 kdhananjay joined #gluster
11:07 jcsp joined #gluster
11:14 nshaikh joined #gluster
11:17 kkeithley joined #gluster
11:18 rwheeler joined #gluster
11:19 sputnik1_ joined #gluster
11:21 hchiramm__ joined #gluster
11:26 basso joined #gluster
11:41 rahulcs joined #gluster
11:43 diegows joined #gluster
11:48 an joined #gluster
11:49 ngoswami joined #gluster
11:54 shilpa_ joined #gluster
12:10 hybrid512 joined #gluster
12:11 kdhananjay joined #gluster
12:13 kanagaraj joined #gluster
12:14 itisravi_ joined #gluster
12:16 ProT-0-TypE joined #gluster
12:17 sjm joined #gluster
12:19 kanagaraj joined #gluster
12:20 Ark joined #gluster
12:24 eseyman joined #gluster
12:29 jmarley joined #gluster
12:29 jmarley joined #gluster
12:37 d-fence joined #gluster
12:43 japuzzo joined #gluster
12:44 Slashman joined #gluster
12:45 rahulcs joined #gluster
12:45 B21956 joined #gluster
12:49 d-fence joined #gluster
12:52 chirino joined #gluster
12:54 jag3773 joined #gluster
12:55 sroy_ joined #gluster
12:56 rahulcs joined #gluster
13:08 hagarth joined #gluster
13:11 nishanth joined #gluster
13:13 Scott6 joined #gluster
13:14 ctria joined #gluster
13:14 dblack joined #gluster
13:23 rahulcs joined #gluster
13:25 kaptk2 joined #gluster
13:32 [iilliinn]_ msvbhat: is there a way to reset the parameter back to the default value? gluster volume set key value would work i guess
13:40 ndevos [iilliinn]_: you can use: gluster volume reset $VOLUME $PARAMETER
13:40 primechuck joined #gluster
13:41 [iilliinn]_ ndevos: cool, thanks
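For completeness, ndevos's reset command with example values, plus resetting the whole volume back to defaults (volume and option names are examples):

    # put one option back to its default
    gluster volume reset myvol performance.cache-size

    # or reset every changed option on the volume
    gluster volume reset myvol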
13:42 chirino joined #gluster
13:43 jbd1 joined #gluster
13:43 coredump joined #gluster
13:49 glusterbot New news from newglusterbugs: [Bug 977497] gluster spamming with E [socket.c:2788:socket_connect] 0-management: connection attempt failed (Connection refused) when nfs daemon is off <https://bugzilla.redhat.com/show_bug.cgi?id=977497>
13:49 John_HPC joined #gluster
13:52 scuttle_ joined #gluster
13:53 B21956 joined #gluster
13:54 an joined #gluster
13:57 itisravi_ joined #gluster
14:03 bennyturns joined #gluster
14:05 ravindran1 joined #gluster
14:09 chirino_m joined #gluster
14:18 sputnik1_ joined #gluster
14:22 wushudoin joined #gluster
14:31 14WACU57I joined #gluster
14:31 6JTAANOJN joined #gluster
14:39 lpabon joined #gluster
14:41 LoudNoises joined #gluster
14:41 haomaiwa_ joined #gluster
14:44 JustinClift joined #gluster
14:45 haomaiw__ joined #gluster
14:46 cvdyoung Good morning, does anyone know how to alter the maximum number of groups that gluster will allow? We are hitting the default 32-group limit, and I've heard that I can enable 96 groups, but am not sure how to do that. We are running 3.5. Thanks
14:48 jobewan joined #gluster
14:50 haomaiwang joined #gluster
15:03 XpineX joined #gluster
15:11 lmickh joined #gluster
15:21 davinder joined #gluster
15:22 jbd1 joined #gluster
15:23 ndevos cvdyoung: what do you use for mounting? fuse, nfs, something else?
15:24 * ndevos only knows about a group limit of 16 for nfs, and +/- 93 for the GlusterFS protocol
15:28 kkeithley there's a comment in the code that says FUSE can only get max 32 from /proc  (i.e. /proc/pid/cred I believe)
15:29 olisch joined #gluster
15:31 olisch did anybody try to upgrade from glusterfs 3.2.6 to 3.5? i am getting an error when starting a volume, because trusted.glusterfs.volume-id is missing
15:31 cvdyoung How do I change gluster to allow more than 32?  I think I can go to 96 groups in glusterfs, but not sure where to make the change.
15:33 m0zes cvdyoung: you can't. as kkeithley said.
15:33 nage joined #gluster
15:34 jobewan joined #gluster
15:36 dbruhn joined #gluster
15:40 ndevos kkeithley: yes, fuse only supports 32 groups, but I do not think we use that, and we call getgroups() instead... not 100% sure though
15:42 sprachgenerator joined #gluster
15:43 dcherednik Hello. Does anybody have experience creating a GlusterFS volume with more than 200 servers?
15:44 ndevos kkeithley: ah, no, there is a get_groups() function in fuse-bridge, that indeed reads the groups from /proc/$PID/status :-/
15:45 ndevos cvdyoung: at the moment, you can either mount over NFS, or use libgfapi access to skip the 32-group limit in fuse
15:46 ndevos cvdyoung: nfs.server-aux-gids (I think) is the option you should enable for NFS
15:47 ndevos cvdyoung: and bug 1096425 will be used to backport a more complete fix into an upcoming 3.5 release
15:47 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1096425 urgent, urgent, ---, ndevos, ASSIGNED , i/o error when one user tries to access RHS volume over NFS with 100+ GIDs
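A sketch of ndevos's NFS suggestion; he hedges the exact option name himself, so confirm it against "gluster volume set help" on 3.5 before relying on it:

    # have the gluster NFS server resolve the user's full group list server-side,
    # instead of trusting the (16-group-limited) list in the NFS RPC credentials
    gluster volume set myvol nfs.server-aux-gids on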
15:50 daMaestro joined #gluster
16:02 aviksil joined #gluster
16:03 Slashman joined #gluster
16:04 giannello joined #gluster
16:06 sputnik1_ joined #gluster
16:17 arya joined #gluster
16:25 edward1 joined #gluster
16:28 vpshastry joined #gluster
16:32 jag3773 joined #gluster
16:39 systemonkey I have a few large bricks running in distribute mode which hold up to 6TB of data. These bricks are running gluster 3.3 on an EOL OS and I want to do a fresh install of the OS and glusterfs. What is the best recommended way to approach this? I tried doing this to all brick servers with a clean install along with an upgraded glusterfs. When I tried to create a volume with the existing bricks, I got an error: "volume
16:39 systemonkey create: dist-vol: failed: /tank or a prefix of it is already part of a volume." I read JoeJulian's blog about this error, but I'm concerned about what may happen after clearing "trusted.glusterfs.volume-id". I'm hoping there is a better method to do a fresh OS install with existing bricks.
16:39 glusterbot systemonkey: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
16:39 systemonkey sorry. it's 60TB data
16:44 ProT-0-TypE joined #gluster
16:49 [o__o] joined #gluster
16:49 Mo___ joined #gluster
16:49 nullck joined #gluster
16:50 systemonkey another approach I'm thinking of is doing it one at a time, a rolling method: bring down a server, install the OS, add it back to the existing cluster, then do the second server, third, etc.
16:53 jbd1 systemonkey: you might want to do the OS upgrade and the GlusterFS upgrade in two distinct steps
16:53 jbd1 systemonkey: as in, upgrade os, install old glusterfs (with old glusterfs config) on all hosts, then upgrade glusterFS on all hosts
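A rough outline of jbd1's one-server-at-a-time suggestion, keeping the gluster state directory across the OS reinstall so the node comes back with its old identity. This is only a sketch, under the assumption that the bricks (e.g. /tank) live on disks that survive the reinstall; paths and service commands depend on the distro:

    # before wiping the server: stop gluster and save its state
    service glusterd stop
    tar czf /backup/glusterd-state.tar.gz /var/lib/glusterd   # node UUID, peers, vol configs

    # after the clean OS install: install the SAME glusterfs version as before,
    # restore the state, and let the node rejoin the pool
    tar xzf /backup/glusterd-state.tar.gz -C /
    service glusterd start
    gluster peer status      # confirm the node shows as connected again

    # only once every node is healthy, upgrade glusterfs itself as a separate step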
16:55 SFLimey joined #gluster
16:56 vpshastry joined #gluster
16:59 systemonkey jbd1: thanks. I wanted to do a clean install since the current glusterfs was making duplicates (gfid mismatch issue).
17:00 systemonkey I'll keep that method in mind as a fallback if a clean install is not possible.
17:01 dbruhn systemonkey, you should fix that issue before you try and do any sort of forklift upgrade
17:01 scuttle_ joined #gluster
17:02 dbruhn unless you are going to build a new system and migrate the data over
17:03 systemonkey dbruhn. I wish I had another cluster to dump the files in. :( I have so many duplicates, it's not even funny.
17:03 dbruhn Are they in a specific directory tree?
17:04 dbruhn I am assuming you are having the issue where files are showing up twice from the mount point?
17:05 systemonkey dbruhn: yes sir.
17:06 systemonkey exact same files in same path on different bricks. only gfid are different
17:06 systemonkey s/are/is
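One way to compare the two copies dbruhn is asking about is to inspect each one directly on its brick (a sketch; the brick paths are hypothetical). On a distribute volume a duplicate directory entry is sometimes just a stale DHT link file rather than real data, although systemonkey's case sounds like two full copies:

    # on each brick server that holds a copy of the file
    ls -l /export/brick1/path/to/file      # a DHT link file is 0 bytes with mode ---------T
    getfattr -d -m . -e hex /export/brick1/path/to/file
    # trusted.gfid differs here (the reported mismatch);
    # trusted.glusterfs.dht.linkto is present only on a link file, not on real data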
17:06 andreask joined #gluster
17:07 dbruhn sounds like you have a directory in split brain
17:08 systemonkey yah... it is. the tools and fixes joejulian put up are for replicated volumes, so they don't help my situation since I'm on distribute.
17:11 dbruhn oh weird, split brain on a distributed volume....
17:11 dbruhn never heard that one before
17:11 systemonkey :D Joe said he was able to duplicate the problem. I'm not sure if he was able to file a bug tho.
17:11 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
17:13 dbruhn Well no amount of upgrading is going to fix the existing issue sadly. How full is your 60TB volume?
17:14 coredump joined #gluster
17:14 systemonkey the total volume is 100TB
17:14 systemonkey it is 60TB full
17:14 dbruhn How many brick servers?
17:14 systemonkey 3
17:14 ctria joined #gluster
17:15 dbruhn Not a lot of room to do much with :/
17:17 dbruhn Did you file a bug report in BZ?
17:17 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
17:17 firemanxbr joined #gluster
17:20 systemonkey I didn't file a bug yet.
17:20 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
17:23 dbruhn File a bug, that might be dangerous enough that you at least get a fix out of it, and hopefully a solution to how it happened comes out of it for future versions.
17:23 dbruhn When you add a new file to the tree, do you see it added to both trees? or just one or the other?
17:24 systemonkey Yah. I'm filling it out at the moment.
17:25 systemonkey I'm not sure how it happens. I can't catch it until someone reports it to me. here is an example of a file. there are tons in the same folder. http://hastebin.com/aqapusizeb.vhdl
17:25 glusterbot Title: hastebin (at hastebin.com)
17:28 dbruhn what does the directory above look like from each brick?
17:31 systemonkey do you mean the directory where the files live? http://hastebin.com/weqejekixu.avrasm it's the same
17:31 glusterbot Title: hastebin (at hastebin.com)
17:31 plarsen joined #gluster
17:32 systemonkey We once ran a script to recursively change the ownership.
17:33 systemonkey also, there was a time a server went offline and we added it back a few hours later. but that shouldn't create duplicates... I'm not sure why or how.
17:35 dusmant joined #gluster
17:40 vpshastry left #gluster
17:43 chirino joined #gluster
17:56 jbd1 systemonkey: are you on distribute or distributed-replicate ?
17:57 systemonkey distribute only.
17:58 vpshastry joined #gluster
18:14 ProT-0-TypE joined #gluster
18:20 glusterbot New news from newglusterbugs: [Bug 1096934] Duplicate files in same path. Possible Split-brain with distribute. <https://bugzilla.redhat.com/show_bug.cgi?id=1096934>
18:24 mjsmith2 joined #gluster
18:29 andreask joined #gluster
18:34 basso_ joined #gluster
18:49 Intensity joined #gluster
18:51 coredump joined #gluster
18:54 thornton joined #gluster
19:28 thornton left #gluster
19:36 DV joined #gluster
19:37 edward1 joined #gluster
19:40 hagarth joined #gluster
20:12 B21956 joined #gluster
20:56 DV joined #gluster
20:59 badone__ joined #gluster
21:28 arya joined #gluster
21:29 DV joined #gluster
21:39 DV joined #gluster
21:41 arya joined #gluster
21:42 jbd1 joined #gluster
22:01 arya joined #gluster
22:07 arya joined #gluster
22:18 arya joined #gluster
22:46 mjsmith2 joined #gluster
22:46 coredump joined #gluster
22:49 nueces joined #gluster
23:00 fidevo joined #gluster
23:10 mshadle joined #gluster
23:10 tjikkun joined #gluster
23:10 tjikkun joined #gluster
23:11 mshadle in gluster 3.5, can i force a real-time config change with "gluster volume set volname favorite-child" ? i have a split brain issue i can't seem to resolve easily
23:12 DV joined #gluster
23:17 velladecin joined #gluster
23:21 DV joined #gluster
23:29 JoeJulian mshadle: Should work.
23:30 mshadle volume set: failed: option : favorite-child does not exist            Did you mean write-behind?
23:31 JoeJulian Ah, right. Translator option. You could use the mount option...
23:31 mshadle like the actual mount.glusterfs command..?
23:32 JoeJulian Just a sec. I'm gathering syntax...
23:34 jag3773 joined #gluster
23:36 JoeJulian bummer. No, can't do it through mount.glusterfs. You would have to use the glusterfs command directly.
23:38 mshadle that's fine - does it change the state permanently though?
23:38 elyograg joined #gluster
23:39 JoeJulian Check ps for how it's used now. Unmount. Run the glusterfs command as shown in ps adding the option --xlator-option afr.favorite-child=$volume-client-N where N is the client you want to favor.
23:40 JoeJulian * "afr." is a best guess without spending too much time digging throudh source. It may instead be "replicate" or even "cluster/replicate" though I doubt the last.
23:41 rwheeler joined #gluster
23:42 elyograg I was going to ask how I can find the 3.4.2 RPMs, but then I found them.
23:48 JoeJulian 3.4.2? or 3.4.3?
23:52 elyograg 3.4.2.  I am installing two additional servers, have to match the current version so I know it will work.
23:52 elyograg the yum repo wants to install 3.5.
