
IRC log for #gluster, 2014-11-13


All times shown according to UTC.

Time Nick Message
00:16 semiosis chirino: http://hawtjni.fusesource.org/ is not resolving!
00:18 semiosis in fact fusesource.org is not responding
00:20 phak joined #gluster
00:23 phak hello, I need some help...
00:24 phak I need to work on some projects involving GlusterFS, but I don't know it very well,
00:24 phak Does anybody know of a reference or something where I can get some up-to-date info about Gluster?
00:25 davemc phak, have you looked over the documentation available on gluster.org?
00:26 phak yeah, i did it. but i want to know more about gluster..
00:27 davemc phak, that's a really broad question. There's a lot to know about Gluster
00:28 badone joined #gluster
00:29 phak umm, then what do you think the advantages of using GlusterFS are?
00:29 davemc phak, for example there are translators, AFR, Internals, integrations libgf
00:30 davemc phak, still pretty broad
00:30 davemc phak, GlusterFS is an open source, distributed file system capable of scaling to several petabytes (actually, 72 brontobytes!) and handling thousands of clients. GlusterFS clusters together storage building blocks over Infiniband RDMA or TCP/IP interconnect, aggregating disk and memory resources and managing data in a single global namespace. GlusterFS is based on a stackable user space design and can deliver exceptional performance for diverse workloads.
00:30 davemc GlusterFS supports standard clients running standard applications over any standard IP network. Figure 1, above, illustrates how users can access application data and files in a Global namespace using a variety of standard protocols.
00:30 davemc No longer are users locked into costly, monolithic, legacy storage platforms. GlusterFS gives users the ability to deploy scale-out, virtualized storage – scaling from terabytes to petabytes in a centrally managed and commoditized pool of storage.
00:30 davemc all from the website.
00:32 davemc so single name space, easy to set up, support for multiple disks, erasure coding, online volume snapshots, etc
00:37 phak oh! I see.
00:37 phak davemc, but I read that GlusterFS does not handle small files well, why is that?
00:39 davemc phak, high numbers of small files spread widely require a lot of metadata operations, which may cause performance issues. I don't have specific limits
00:40 davemc phak: the next releases of glusterfs have that as a focus feature to extend
00:41 davemc phak, you might want to look at http://www.gluster.org/community/documentation/index.php/Features/Feature_Smallfile_Perf
00:41 davemc and if you have ideas, let us know
00:42 davemc phak: that page also goes deeper into when and why
00:47 phak davemc, thank you for nice info, it's so great!
00:48 phak if i have some idea, i will.
00:48 davemc phal: glad to help to my limited abilities
00:48 davemc s/phal/phak/
00:49 glusterbot What davemc meant to say was: phak: glad to help to my limited abilities
00:57 topshare joined #gluster
01:16 DougBishop joined #gluster
01:23 cyberbootje joined #gluster
01:34 MugginsM joined #gluster
01:37 bala joined #gluster
01:43 side_control joined #gluster
01:45 haomaiwa_ joined #gluster
01:46 meghanam joined #gluster
01:46 meghanam_ joined #gluster
01:51 haomaiw__ joined #gluster
01:54 harish joined #gluster
01:59 topshare joined #gluster
02:02 cyberbootje joined #gluster
02:08 glusterbot New news from newglusterbugs: [Bug 1163543] Fix regression test spurious failures <https://bugzilla.redhat.com/show_bug.cgi?id=1163543>
02:16 d-fence joined #gluster
02:22 Odooyol joined #gluster
02:23 Odooyol Bonjour   Hello out there
02:30 cyberbootje joined #gluster
02:35 lyang01 joined #gluster
02:49 topshare joined #gluster
03:01 _Bryan_ joined #gluster
03:02 topshare joined #gluster
03:08 glusterbot New news from newglusterbugs: [Bug 1163561] A restart child can not clean the remaining files and directorys these have been delelte from mountpoint <https://bugzilla.redhat.com/show_bug.cgi?id=1163561>
03:11 overclk joined #gluster
03:15 cyberbootje joined #gluster
03:23 saurabh joined #gluster
03:32 bharata-rao joined #gluster
03:32 bharata_ joined #gluster
03:35 topshare joined #gluster
03:37 meghanam_ joined #gluster
03:38 meghanam joined #gluster
03:40 BlueRider joined #gluster
03:40 BlueRider hi, I’m trying to mount a newly created volume from a client, but the mounting process doesn’t work (no error). Just can’t see it in `mount` or `df`
03:41 Guest56776 joined #gluster
03:41 elyograg joined #gluster
03:43 plarsen joined #gluster
03:43 shubhendu_ joined #gluster
03:43 elyograg I've asked about this before ... does anyone know why my gluster NFS process would randomly crash?  It's so disruptive and unpredictable that I have written a shell script just to check for the existence of the process and stop/start the whole gluster stack when it's gone.
03:45 aravindavk joined #gluster
03:49 elyograg http://apaste.info/4Rx (script that runs continuously) and http://apaste.info/HuO (restartglusterfs)
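(A minimal sketch of the kind of watchdog elyograg describes, not the pasted script itself; it assumes the Gluster NFS server can be spotted by "gluster/nfs" on its command line and that restarting the glusterd service brings the stack back, so adjust both for your init system.)

    #!/bin/bash
    # Poll for the Gluster NFS server process; if it has died, bounce the stack.
    while true; do
        if ! pgrep -f 'gluster/nfs' > /dev/null; then
            logger "gluster NFS process missing, restarting gluster services"
            service glusterd restart
        fi
        sleep 30
    done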
03:56 RameshN joined #gluster
03:58 BlueRider are there issues using mount instead of mount.glusterfs?
03:58 BlueRider because mount.glusterfs doesn’t work, but mount does
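(For context, `mount -t glusterfs` just delegates to the /sbin/mount.glusterfs helper, so the two should normally behave the same. A typical client mount, using an illustrative volume gv0 on host server1:)

    mount -t glusterfs server1:/gv0 /mnt/gv0
    # equivalent to invoking the helper directly:
    /sbin/mount.glusterfs server1:/gv0 /mnt/gv0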
04:06 msciciel joined #gluster
04:07 schrodinger_ joined #gluster
04:08 morsik_ joined #gluster
04:09 fyxim_ joined #gluster
04:09 glusterbot` joined #gluster
04:09 eclectic_ joined #gluster
04:09 natgeorg joined #gluster
04:10 natgeorg joined #gluster
04:11 the-me_ joined #gluster
04:11 Rydekull_ joined #gluster
04:11 HuleB joined #gluster
04:12 SteveCoo1ing joined #gluster
04:12 Peanut joined #gluster
04:12 haakon joined #gluster
04:12 glusterbot joined #gluster
04:12 scuttle|afk joined #gluster
04:13 georgeh joined #gluster
04:18 kanagaraj joined #gluster
04:18 nishanth joined #gluster
04:24 rafi joined #gluster
04:24 Rafi_kc joined #gluster
04:28 d4nku joined #gluster
04:29 nbalachandran joined #gluster
04:37 meghanam_ joined #gluster
04:38 meghanam joined #gluster
04:38 purpleidea @tell davemc of course, or if you like, use the bot to "remind" whoever you like that is forgetful. disclaimer: it's hardcoded to johnmark
04:38 glusterbot purpleidea: Error: I haven't seen davemc, I'll let you do the telling.
04:39 purpleidea @later davemc of course, or if you like, use the bot to "remind" whoever you like that is forgetful. disclaimer: it's hardcoded to johnmark
04:39 purpleidea glusterbot: wtf he's been here a lot
04:39 RameshN joined #gluster
04:41 jobewan joined #gluster
04:42 anoopcs joined #gluster
04:43 shubhendu joined #gluster
04:45 jiffin joined #gluster
04:47 dusmant joined #gluster
04:47 overclk joined #gluster
04:49 hagarth joined #gluster
05:04 meghanam joined #gluster
05:04 meghanam_ joined #gluster
05:05 rjoseph joined #gluster
05:07 raghug joined #gluster
05:07 smallbig joined #gluster
05:10 glusterbot New news from newglusterbugs: [Bug 1136769] AFR: Provide a gluster CLI for automated resolution of split-brains. <https://bugzilla.redhat.com/show_bug.cgi?id=1136769> || [Bug 1163588] AFR: Provide a gluster CLI for automated resolution of split-brains. <https://bugzilla.redhat.com/show_bug.cgi?id=1163588>
05:13 karnan joined #gluster
05:13 spandit joined #gluster
05:13 kshlm joined #gluster
05:15 atinmu joined #gluster
05:17 soumya joined #gluster
05:25 kdhananjay joined #gluster
05:25 kdhananjay left #gluster
05:28 ppai joined #gluster
05:29 ndarshan joined #gluster
05:30 aravindavk joined #gluster
05:31 d4nku joined #gluster
05:31 lalatenduM joined #gluster
05:37 atalur joined #gluster
05:43 pp joined #gluster
05:44 raghug joined #gluster
05:47 overclk joined #gluster
05:48 bala joined #gluster
05:51 nbalachandran joined #gluster
05:56 rjoseph joined #gluster
05:56 anoopcs joined #gluster
06:02 Humble joined #gluster
06:09 PeterA1 joined #gluster
06:12 elico joined #gluster
06:23 elico joined #gluster
06:23 soumya joined #gluster
06:24 badone joined #gluster
06:32 SOLDIERz_ joined #gluster
06:35 Slydder joined #gluster
06:36 nshaikh joined #gluster
06:38 sahina joined #gluster
06:41 pp joined #gluster
06:44 lalatenduM joined #gluster
06:55 raghug joined #gluster
06:55 mbukatov joined #gluster
06:59 ctria joined #gluster
07:04 siel joined #gluster
07:05 nbalachandran joined #gluster
07:06 ndarshan joined #gluster
07:08 Slydder morning all
07:11 glusterbot New news from newglusterbugs: [Bug 1163623] Erasure Volume quota error <https://bugzilla.redhat.com/show_bug.cgi?id=1163623>
07:15 ppai joined #gluster
07:19 aravindavk joined #gluster
07:20 rjoseph joined #gluster
07:33 ndarshan joined #gluster
07:41 glusterbot New news from newglusterbugs: [Bug 1163626] gstatus: Capacity usable from volumes is incorrect <https://bugzilla.redhat.com/show_bug.cgi?id=1163626>
07:42 aravindavk joined #gluster
07:43 rjoseph joined #gluster
07:44 bala joined #gluster
07:47 Fen1 joined #gluster
07:55 troublesome what's the best way to erase a file from a gluster partition?
07:55 troublesome or at least, is there someone who might be able to explain the behavior that has made our partition run full overnight (2.5GB added usage, but no additional files)
07:56 troublesome i just erased a few files to try and troubleshoot, but it seems the space isn't released
07:56 Durzo deleting files from a gluster brick is bad mmmmkay
07:56 Durzo you prolly just corrupted your brick
07:56 troublesome i erased a regular file
07:57 Durzo there is no such thing as a regular file on a gluster brick
07:57 Durzo all files are hard linked back to gluster index files and glued together with extended file attributes
07:57 troublesome i have a partition, /cluster, which is a glusterfs
07:57 troublesome are you telling me that the files stored there cannot be erased?
07:57 Durzo not directly from a brick
07:57 Durzo you have to delete them from a gluster client mount
07:57 troublesome i did..
07:58 Durzo well then, you're ok!
07:58 troublesome went into /cluster on a server where its mounted
07:58 troublesome erased the file.. my issue however is something quite else
07:58 troublesome i was trying to make room on the partition.. 0 new files since yesterday
07:58 troublesome but the volume is filled now, 2.5GB extra used since yesterday
07:59 Durzo gluster doesnt just grow by itself
07:59 troublesome i have no idea where that space is going..
07:59 troublesome when i look up the files on the /cluster partition
07:59 troublesome we are using a little under 3GB
08:00 troublesome 15G 15G 97M 100% /cluster
08:00 troublesome i just erased a 1GB file from one of our nodes..
08:00 troublesome before that, there were 101M available space
08:03 reboot1 joined #gluster
08:03 Durzo troublesome, have you tried running a du inside /cluster to see whats taking up the space?
08:04 troublesome yup
08:04 troublesome 3.1G usage
08:04 Durzo and?
08:04 Durzo have you accounted for hidden files ?
08:05 Durzo can you show me the output of `gluster volume info |grep -i brick`
08:05 troublesome gluster volume info |grep -i brick
08:05 troublesome Number of Bricks: 1
08:05 troublesome Bricks:
08:05 troublesome Brick1: hq:/cluster
08:05 troublesome well.. if i du -sh it shows 15G
08:06 troublesome the only folder using space is .glusterfs
08:06 troublesome but i was told yesterday, that it wasnt actually using up any space
08:06 Durzo right, so .glusterfs is where your files are actually kept
08:06 Durzo when you remove files through a gluster mount it will remove them from the brick
08:06 Durzo if you go running rm from your brick in /cluster it will break things and no doubt end up in this situation
08:07 troublesome well, i have never done that, always from the client end
08:07 troublesome but, how can i fix it?
08:07 Durzo if its been unsynced, the only way to fix that i know of is to delete the volume and recreate it
08:08 troublesome hmm
08:08 Durzo if the files are important to you, copy them off from a gluster client mount
08:08 Durzo then copy them back onto the new volume
08:08 Durzo what does du -hs show when you run it from the gluster client ?
08:09 troublesome sec
08:09 troublesome takes a while
08:10 Durzo on my setup, i have 145 gb used brick from df -h and the gluster client also shows 145gb from du -hs
08:10 Durzo so gluster shouldnt be eating any space that isnt your files
08:10 T0aD joined #gluster
08:10 troublesome 0 1:/cluster# du -sh
08:10 troublesome 2.7G .
08:11 Durzo is /cluster the client mount ? i thought that was your brick path
08:11 troublesome its mounted as /cluster on each client
08:11 troublesome brick is: hq:/cluster
08:11 Durzo ok
08:11 Durzo well yeah sounds like its borked to me
08:11 troublesome actually /cluster is mounted to several paths on each client
08:12 troublesome will have to use some google-fu to try and find a fix for it
08:12 troublesome its used on a live setup, so cant really handle the downtime it takes to copy back and forth
08:12 Durzo you could create a new brick, and mirror it up then swap the clients over
08:13 ricky-ticky joined #gluster
08:13 Durzo then fix the old volume
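(A hedged sketch of the approach Durzo outlines, assuming a second host "hq2" with an empty brick path; both names are placeholders. Adding a brick with "replica 2" turns the single-brick volume into a mirror, after which self-heal copies the data across:)

    gluster volume add-brick <volname> replica 2 hq2:/cluster-new
    gluster volume heal <volname> full
    # once healing completes, clients can be pointed at hq2 and the old brick retired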
08:15 Inflatablewoman joined #gluster
08:15 ricky-ticky1 joined #gluster
08:16 troublesome that might be possible.. its not that much data..
08:16 troublesome hopefully this is a one time thing..
08:16 troublesome :D
08:20 Inflatablewoman Hi, Can someone confirm or deny the discussion that Gluster client and Server must be the EXACT same version for interaction to work? https://github.com/coreos/coreos-overlay/pull/855
08:20 glusterbot Title: Added glusterfs by asiragusa · Pull Request #855 · coreos/coreos-overlay · GitHub (at github.com)
08:33 bala joined #gluster
08:34 fsimonce joined #gluster
08:36 bala joined #gluster
08:40 Humble joined #gluster
08:40 ababu joined #gluster
08:43 deepakcs joined #gluster
08:46 lalatenduM joined #gluster
08:47 vikumar joined #gluster
08:47 lalatenduM joined #gluster
08:52 bala1 joined #gluster
09:15 spandit joined #gluster
09:18 rjoseph joined #gluster
09:19 dusmant joined #gluster
09:20 sahina joined #gluster
09:20 shubhendu joined #gluster
09:28 ppai joined #gluster
09:28 hagarth kshlm: can you please comment on the coreos-overlay issue above?
09:33 kshlm Sure.
09:39 harish joined #gluster
09:39 deniszh joined #gluster
09:43 Inflatablewoman thanks guys!
09:49 diegows joined #gluster
09:50 liquidat joined #gluster
09:51 kkeithley1 joined #gluster
09:51 spandit joined #gluster
09:55 jkroon joined #gluster
09:55 jkroon hi guys, has the ability to native-mount (fuse) a gluster filesystem with noexec been sorted out yet?
09:57 ndevos jkroon: I'm not aware of any issue with that, got a reference to a bug?
09:59 rjoseph joined #gluster
09:59 jkroon had one ... can't find it now.
09:59 jkroon seems i should be able to.
09:59 jkroon let me just take the one machine out of the LB quickly in order to retest.
10:01 soumya_ joined #gluster
10:04 Inflatablewoman joined #gluster
10:04 jkroon 127.0.0.1:gv_home /home glusterfs _netdev,defaults, nodev,nosuid,noexec 0 0 -> mount /home:  Mount failed. Please check the log file for more details.
10:04 jkroon I've tried with various variations of that
10:10 meghanam joined #gluster
10:10 meghanam_ joined #gluster
10:10 ndevos jkroon: you have a ", " in the options, I assume it also does not work without the space?
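(For reference, the options field in fstab has to be a single comma-separated token with no embedded spaces, so the corrected line would look like the one below; whether the fuse helper then honours noexec is a separate question, see the bug ndevos points to a bit further down.)

    127.0.0.1:gv_home  /home  glusterfs  _netdev,defaults,nodev,nosuid,noexec  0 0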
10:13 SOLDIERz_ joined #gluster
10:20 shubhendu joined #gluster
10:21 sahina joined #gluster
10:22 dusmant joined #gluster
10:22 smohan joined #gluster
10:23 jkroon ndevos, correct.
10:24 rjoseph joined #gluster
10:26 ndevos jkroon: ah, maybe bug 1162910
10:26 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1162910 medium, unspecified, ---, bugs, NEW , mount options no longer valid: noexec, nosuid, noatime
10:32 Slashman joined #gluster
10:34 haomaiwa_ joined #gluster
10:41 glusterbot New news from newglusterbugs: [Bug 1163699] gstatus: -b flag doesn't provide much useful information on self-heal <https://bugzilla.redhat.com/show_bug.cgi?id=1163699> || [Bug 1163709] gstatus: If a volume is mounted more than once from a machine, it is still considered as a single client <https://bugzilla.redhat.com/show_bug.cgi?id=1163709>
11:18 rolfb joined #gluster
11:18 rjoseph joined #gluster
11:20 jkroon ndevos, looks related.
11:20 meghanam_ joined #gluster
11:21 meghanam joined #gluster
11:22 soumya_ joined #gluster
11:23 jkroon except that i'm still on gluster 3.4 ...
11:27 ndevos jkroon: uhm, okay... you should file a bug against 3.4 then, otherwise the fix might not get backported (if its the same fix)
11:27 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
11:27 kshlm joined #gluster
11:31 jkroon ndevos, well, i need to consider upgrading to 3.5 or 3.6 then anyway.
11:31 jkroon s/then//
11:31 glusterbot What jkroon meant to say was: ndevos, well, i need to consider upgrading to 3.5 or 3.6  anyway.
11:32 ndevos jkroon: I understand it works in 3.5, but I have not tested that (yet) - you could stay on 3.4, and request it to get fixed, it's really up to you
11:33 Inflatablewoman joined #gluster
11:33 jkroon busy reading now on how easy/difficult it would be to upgrade.  experience has shown that usually it's best to be on one of the newer major releases of a software project, but not always the newest (at least not early on in the cycle).
11:37 jkroon http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.5 ... seems easy enough.
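(A hedged outline of a typical minor-version server upgrade on RPM-based systems, not a substitute for the steps on that page; do one server at a time on replicated volumes and upgrade clients afterwards.)

    service glusterd stop
    yum update glusterfs glusterfs-server glusterfs-fuse
    service glusterd start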
11:46 meghanam joined #gluster
11:46 meghanam_ joined #gluster
11:49 ndevos jkroon: 3.4 is maintained until 3.7 gets released, so there is no hurry to upgrade
11:55 jkroon ndevos, it looks simple enough and I don't see any real risk.
11:58 soumya_ joined #gluster
12:07 lpabon joined #gluster
12:09 SOLDIERz_ joined #gluster
12:18 kshlm joined #gluster
12:33 hagarth joined #gluster
12:34 smohan joined #gluster
12:44 diegows joined #gluster
12:45 soumya_ joined #gluster
12:46 LebedevRI joined #gluster
12:51 lpabon joined #gluster
12:53 jmarley joined #gluster
12:57 ababu joined #gluster
12:59 edward1 joined #gluster
13:01 edwardm61 joined #gluster
13:02 deepakcs joined #gluster
13:03 Fen joined #gluster
13:12 glusterbot New news from newglusterbugs: [Bug 1163760] when replace one brick on disperse volume, ls sometimes goes wrong <https://bugzilla.redhat.com/show_bug.cgi?id=1163760>
13:16 liquidat joined #gluster
13:32 SOLDIERz_ joined #gluster
13:35 smohan joined #gluster
13:44 Inflatablewoman joined #gluster
13:44 tdasilva joined #gluster
13:46 RameshN joined #gluster
13:46 elyograg troublesome: if you have deleted files
13:48 elyograg troublesome: from the brick but not the .glusterfs directory, you could look for files in the glusterfs directory that only have one hardlink.  Most likely those would correspond to the files that you deleted, and you could free the space by deleting those.
13:49 elyograg mucking around in .glusterfs is dangerous, though ... you need to triple-check everything, including file contents, link count, xattrs, etc.
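(A hedged illustration of what elyograg suggests, assuming the brick path /cluster from this conversation: regular files under .glusterfs are normally hard links to the real files and so carry a link count of 2 or more, which makes single-link entries the candidates for orphans. Verify every hit before deleting anything.)

    # list regular files under the brick's .glusterfs tree that have only one hard link
    find /cluster/.glusterfs -type f -links 1 -exec ls -l {} \;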
14:01 marbu joined #gluster
14:02 meghanam joined #gluster
14:03 meghanam_ joined #gluster
14:04 troublesome elyograg I think I'm going the other way around and making a new volume, and hopefully this doesn't hit us again in the future.
14:06 virusuy joined #gluster
14:07 haomaiwang joined #gluster
14:14 Durzo anyone ever seen a gluster 3.5 (upgraded from 3.4) volume with a brand new geo-repl never leave the Hybrid Xsync mode? even when i change the change-detector to changelog it falls back to xsync :(
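(For reference, the change detector Durzo mentions is a per-session geo-replication config option, usually set with something like the following, where the volume and slave names are placeholders; as he observes, the session can still fall back to xsync until the changelog is consumable.)

    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config change_detector changelog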
14:17 nbalachandran joined #gluster
14:22 diegows joined #gluster
14:26 coredump joined #gluster
14:34 calisto joined #gluster
14:40 marbu joined #gluster
14:41 Inflatablewoman joined #gluster
14:42 glusterbot New news from newglusterbugs: [Bug 1163821] Current timer implementation has no way to avoid some races <https://bugzilla.redhat.com/show_bug.cgi?id=1163821> || [Bug 1163822] Current timer implementation has no way to avoid some races <https://bugzilla.redhat.com/show_bug.cgi?id=1163822>
15:05 jmarley joined #gluster
15:06 plarsen joined #gluster
15:15 TheBrayn left #gluster
15:15 eightyeight joined #gluster
15:16 bennyturns joined #gluster
15:20 wushudoin joined #gluster
15:28 harish joined #gluster
15:56 raghug joined #gluster
15:56 rsquared joined #gluster
15:57 lpabon joined #gluster
15:58 redbeard joined #gluster
16:00 rsquared What would cause a "staging failed" error when creating a gluster volume?
16:08 virusuy joined #gluster
16:08 virusuy joined #gluster
16:13 nshaikh joined #gluster
16:17 rastar_afk joined #gluster
16:18 sac`away joined #gluster
16:18 plarsen joined #gluster
16:19 jackdpeterson joined #gluster
16:22 elico joined #gluster
16:23 ctria joined #gluster
16:24 Rydekull joined #gluster
16:26 maveric_amitc_ joined #gluster
16:26 vikumar joined #gluster
16:27 prasanth|afk joined #gluster
16:27 Humble joined #gluster
16:34 shubhendu joined #gluster
16:34 rsquared More specifically, what would cause a "host <host> not connected" error when doing a gluster create?
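(That error usually means glusterd on the named host is down or the peer is not in a connected state; a hedged first check with the standard CLI:)

    gluster peer status          # every peer should show State: Peer in Cluster (Connected)
    service glusterd status      # on the host named in the error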
16:40 hagarth joined #gluster
16:41 daMaestro joined #gluster
16:44 lmickh joined #gluster
16:45 elyograg left #gluster
16:48 deniszh joined #gluster
16:49 jobewan joined #gluster
16:50 smohan joined #gluster
16:52 davemc joined #gluster
17:00 DV joined #gluster
17:04 jbrooks joined #gluster
17:05 soumya_ joined #gluster
17:08 PeterA joined #gluster
17:11 David_H_Smith joined #gluster
17:11 zerick joined #gluster
17:12 marbu joined #gluster
17:17 free_amitc_ joined #gluster
17:17 sac`away` joined #gluster
17:17 vikumar__ joined #gluster
17:17 prasanth|brb joined #gluster
17:17 hchiramm_ joined #gluster
17:17 RaSTarl joined #gluster
17:18 diegows joined #gluster
17:19 jiffin joined #gluster
17:23 RameshN joined #gluster
17:31 quique i have the data from an existing volume but the servers are new, ie don't have the info from /var/lib/glusterd is there a way to restore the volume?
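(Nobody answers this in the log. One commonly suggested recipe, heavily hedged and untested here: recreate the volume with the same brick paths, clearing the old volume-id xattrs that would otherwise make gluster refuse the bricks. Back up the bricks before attempting any of this.)

    # on each brick host, for each brick root (example path /cluster)
    setfattr -x trusted.glusterfs.volume-id /cluster
    setfattr -x trusted.gfid /cluster
    # then recreate the volume with the same brick layout and start it
    gluster volume create <volname> hq:/cluster
    gluster volume start <volname>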
17:32 zerick joined #gluster
17:43 glusterbot New news from newglusterbugs: [Bug 1163920] Glusterd segfaults on gluster volume status ... detail <https://bugzilla.redhat.com/show_bug.cgi?id=1163920>
17:47 RameshN joined #gluster
17:48 Pupeno joined #gluster
17:55 lalatenduM joined #gluster
18:01 cfeller joined #gluster
18:13 nshaikh joined #gluster
18:15 daMaestro joined #gluster
18:20 lpabon joined #gluster
18:32 jiffin joined #gluster
18:37 jbrooks joined #gluster
19:02 ghenry joined #gluster
19:12 jiffin joined #gluster
19:14 _Bryan_ joined #gluster
19:16 jbrooks joined #gluster
19:31 Delvirok joined #gluster
19:31 Delvirok left #gluster
19:50 davemc joined #gluster
19:58 lmickh joined #gluster
20:04 jmarley joined #gluster
20:10 theron joined #gluster
20:55 zerick joined #gluster
21:10 _Bryan_ joined #gluster
21:38 georgeh-LT2 joined #gluster
21:51 davemc joined #gluster
21:52 giannello joined #gluster
21:53 giannello left #gluster
21:56 DV joined #gluster
21:58 d4nku joined #gluster
21:59 jbrooks joined #gluster
22:02 siel joined #gluster
22:16 eightyeight joined #gluster
22:17 andreask joined #gluster
22:40 lmickh joined #gluster
23:00 diegows joined #gluster
23:17 badone joined #gluster
23:46 smallbig_ joined #gluster
23:49 MugginsM joined #gluster
23:49 lmickh joined #gluster
23:54 smallbig joined #gluster
