
IRC log for #gluster, 2016-08-24


All times shown according to UTC.

Time Nick Message
00:56 plarsen joined #gluster
01:12 d0nn1e joined #gluster
01:26 harish joined #gluster
01:37 ppai joined #gluster
01:37 shdeng joined #gluster
01:46 Lee1092 joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:01 aspandey joined #gluster
02:24 Javezim I'm having an issue with the following on 3.8.2 - getfattr -n replica.split-brain-status, I run it on all types of files and it never actually returns an output, just Input/Output Error.
02:25 Javezim Even on files with Metadata or Data Split Brain
02:29 nathwill joined #gluster
02:43 Javezim Anyone know why files would come back as "Input/Output" Error when using - getfattr -n replica.split-brain-status
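(For reference, the same split-brain state can usually be inspected from the server side, which sidesteps the client-side I/O error; the volume name below is a placeholder:)

    gluster volume heal <VOLNAME> info split-brain   # entries AFR currently flags as split-brain
    gluster volume heal <VOLNAME> info               # everything still pending heal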
02:52 spalai joined #gluster
03:06 spalai left #gluster
03:07 Gambit15 joined #gluster
03:09 jiffin joined #gluster
03:22 magrawal joined #gluster
03:31 shdeng joined #gluster
03:31 ankitraj joined #gluster
03:33 shdeng joined #gluster
04:06 aspandey joined #gluster
04:11 itisravi joined #gluster
04:18 atinm joined #gluster
04:37 nbalacha joined #gluster
04:38 Vaelatern joined #gluster
04:43 kukulogy joined #gluster
04:49 kshlm joined #gluster
04:51 kukulogy Hi, I put a file inside the mount. After a minute the file duplicates itself when I ls -la the /mnt folder.
04:52 kukulogy I checked the bricks, the image file is 0B and can't be viewed in the browser. Has anyone encountered this before? I'm using FreeBSD.
04:54 nathwill joined #gluster
04:54 ramky joined #gluster
04:55 kukulogy https://dpaste.de/VS2Q for reference
04:55 glusterbot Title: dpaste.de: Snippet #377579 (at dpaste.de)
04:57 jiffin joined #gluster
05:04 karthik_ joined #gluster
05:12 ndarshan joined #gluster
05:12 prasanth joined #gluster
05:14 nbalacha kukulogy, did you create the file on the gluster mount?
05:14 nbalacha kukulogy, it sounds like the linkto file is also being displayed
05:14 kukulogy yes I use wget directly to it.
05:14 nbalacha kukulogy, can you get the xattrs on both files from the bricks?
05:15 nbalacha kukulogy, also, are any of your bricks more than 90% full?
05:17 aravindavk joined #gluster
05:17 nbalacha kukulogy, it sounds like this one - can you confirm? https://bugzilla.redhat.com/show_bug.cgi?id=1176011
05:17 glusterbot Bug 1176011: high, high, ---, bugs, NEW , Client sees duplicated files
05:18 kukulogy nbalacha, okay checking. but the bricks is only 10% full
05:18 nbalacha kukulogy, did you rename the file?
05:18 shubhendu joined #gluster
05:19 kukulogy nbalacha, yes
05:19 nbalacha kukulogy, that explains it - it tried to create a linkto file
05:19 raghug joined #gluster
05:20 nbalacha kukulogy, can you please send across the xattrs and the volume info
05:23 kukulogy nbalacha, how do I get the xattrs? I tried to run the command but it was not found.
05:24 nhayashi joined #gluster
05:24 kukulogy and about the volume, do you mean this? https://dpaste.de/eXES
05:24 glusterbot Title: dpaste.de: Snippet #377581 (at dpaste.de)
05:26 rafi joined #gluster
05:28 mhulsman joined #gluster
05:29 kukulogy nbalacha, is it necessary for it to create a linkto file? And will it be broken? I expected it to just be renamed.
05:30 nbalacha kukulogy, for renames - it will create the linkto file if the new name hashes to a different brick
05:30 mhulsman joined #gluster
05:30 nbalacha kukulogy, getfattr -e hex -m . -d <path to file on brick>
05:31 nbalacha kukulogy, yes. thanks for the volinfo. I see you are using shards
05:32 Muthu_ joined #gluster
05:32 kukulogy so this means a linkto is created whenever I rename a file? It's like creating a state it can go back to in case I need to roll it back?
05:32 kukulogy yes, I'm using shards
05:33 nbalacha kukulogy, not quite. Gluster places files on bricks based on their names and the ranges assigned to the brick.
05:33 nbalacha so file1 might map to brick 1, file2 to brick2 etc
05:34 nbalacha when it tries to access a file, the client will calculate the file hash and go to the brick it is supposed to be on to get it
05:34 nbalacha if for some reason the file is on a different brick, it will create a linkto (empty ) file which stores the actual location in an xattr
05:35 nbalacha this is so the client does not have to go and check every brick to see if the file exists on it
05:37 kukulogy nbalacha, do I need to worry about this linkto? will it remove it self or I have to disable this somewhere?
05:38 kukulogy I see, it's like a library index card that records where the book is located
05:38 Bhaskarakiran joined #gluster
05:38 nbalacha kukulogy, right. It needs to be there. If you remove it gluster will recreate it
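(A hedged way to recognise such a linkto file on a brick, with a placeholder path: it is a zero-byte file whose mode carries the sticky bit, and its trusted.glusterfs.dht.linkto xattr names the subvolume that really holds the data:)

    ls -l /path/on/brick/somefile                                      # a linkto file shows as ---------T and 0 bytes
    getfattr -n trusted.glusterfs.dht.linkto -e text /path/on/brick/somefile
    # on FreeBSD, lsextattr/getextattr may serve the same purpose (namespace handling differs)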
05:40 derjohn_mob joined #gluster
05:40 ankitraj joined #gluster
05:41 nbalacha kukulogy, I will check the xattrs to see if it is ok. I might need more info from you as well
05:42 ankitraj joined #gluster
05:43 kukulogy nbalacha, sorry I'm searching on how to make getfattr works in FreeBSD.
05:49 nbalacha kukulogy, np
05:54 Muthu joined #gluster
05:56 prasanth joined #gluster
05:56 hgowtham joined #gluster
05:58 kdhananjay joined #gluster
05:59 rafi1 joined #gluster
06:00 poornima joined #gluster
06:06 Manikandan joined #gluster
06:08 satya4ever joined #gluster
06:08 owlbot joined #gluster
06:10 karnan joined #gluster
06:14 Javezim How does one add a 3 Replica 1 Arbiter to an already existing volume?
06:14 skoduri joined #gluster
06:15 Javezim gluster volume add-brick <VOLNAME> replica 3 arbiter 1 <HOST:BRICK> <HOST:BRICK> <HOST:arbiter-brick-path>
06:15 Javezim This fails
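(Later documentation describes converting an existing replica 2 volume to arbiter by supplying only the new arbiter bricks, one per replica pair; whether 3.8.2 accepts this is uncertain and it may need a newer release. A hedged sketch with placeholder names:)

    gluster volume add-brick <VOLNAME> replica 3 arbiter 1 <HOST:arbiter-brick-1> <HOST:arbiter-brick-2>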
06:15 ramky joined #gluster
06:16 kukulogy nbalacha, I'm not sure if this what you needed: https://dpaste.de/eMnw
06:16 glusterbot Title: dpaste.de: Snippet #377585 (at dpaste.de)
06:19 rastar joined #gluster
06:20 atalur joined #gluster
06:20 rafi1 joined #gluster
06:23 ppai joined #gluster
06:29 msvbhat joined #gluster
06:30 jtux joined #gluster
06:30 ashiq joined #gluster
06:32 nbalacha kukulogy, I need the xattrs for the files - this looks like a dir
06:33 nbalacha kukulogy, can you dump all xattrs for the files please
06:37 jtux joined #gluster
06:40 kukulogy nbalacha, https://dpaste.de/5YMB
06:40 glusterbot Title: dpaste.de: Snippet #377587 (at dpaste.de)
06:40 nbalacha kukulogy, which is the file in question?
06:41 kukulogy I'm not really sure if I'm giving you the correct one since getfattr doesn't work in FreeBSD. sana_gif.gif and dahyun2.jpg
06:41 nbalacha k
06:42 hackman joined #gluster
06:42 nbalacha so I would assume these files are present on both bricks
06:42 nbalacha or rather on all 4 bricks as it is a 2x2 dist-rep volume
06:42 Saravanakmr joined #gluster
06:45 kukulogy nbalacha, yes it does
06:46 karthik_ joined #gluster
06:48 nbalacha right, can you list the xattrs on dahyun2.jpg on all of them
06:51 [diablo] joined #gluster
06:52 karnan joined #gluster
06:57 msvbhat joined #gluster
06:59 nbalacha kukulogy, xattrs and their values
07:02 ramky joined #gluster
07:05 kukulogy nbalacha, xattrs of dahyun2.jpg in each bricks?
07:05 nbalacha kukulogy, yes
07:07 ashiq joined #gluster
07:10 mhulsman joined #gluster
07:11 kramdoss_ joined #gluster
07:13 kxseven joined #gluster
07:13 kotreshhr joined #gluster
07:14 kukulogy nbalacha, https://dpaste.de/zM5f
07:14 glusterbot Title: dpaste.de: Snippet #377591 (at dpaste.de)
07:19 goretoxo joined #gluster
07:20 jkroon joined #gluster
07:20 Muthu joined #gluster
07:21 goretoxo joined #gluster
07:23 karnan joined #gluster
07:28 David_Varghese joined #gluster
07:32 kovshenin joined #gluster
07:33 jri joined #gluster
07:36 kaushal_ joined #gluster
07:40 Sebbo2 joined #gluster
07:42 hchiramm joined #gluster
07:42 kovsheni_ joined #gluster
07:43 fsimonce joined #gluster
07:50 mhulsman joined #gluster
08:01 nbalacha if anyone is interested, it looks like the T bit was not set on the linkto file on FreeBSD as per kukulogy's issue
08:01 nbalacha I will be looking into why
08:04 rafi1 joined #gluster
08:05 derjohn_mob joined #gluster
08:06 ramky joined #gluster
08:07 pur joined #gluster
08:12 ankitraj joined #gluster
08:14 hchiramm joined #gluster
08:15 ahino joined #gluster
08:19 devyani7 joined #gluster
08:22 Arrfab hey guys, using glusterfs-3.6.1 and gluster man page shows "volume rename <VOLNAME> <NEW-VOLNAME>" but when trying that, it complains : unrecognized word: rename (position 1)
08:23 Arrfab is it an error in the man page, or a feature not in this gluster version (and so a mismatch between features and man page)?
08:24 ndevos Arrfab: "volume rename" does not exist anymore (did it ever?), I hope the command has been removed from the man-page in recent versions
08:24 jiffin joined #gluster
08:24 Arrfab ndevos: argh ! and how does one rename a volume ?
08:25 ndevos Arrfab: you dont.
08:25 Arrfab nice :-( so no possible migration
08:25 ndevos Arrfab: you could do it manually, but thats a little tricky
08:25 Arrfab for a fuse mountpoint, changing it the gluster vol to use is easy
08:26 Arrfab not when you have a bunch of VMs with the gluster vol hardcoded in the deploy path (like for opennebula when using a gluster datastore)
08:26 ndevos Arrfab: all the details of the volume are under /var/lib/glusterd, those are text-files and if you replace all the old volume names with new ones, on all servers (while glusterd has been stopped), it should work
08:26 Arrfab meaning that I'm blocked : (was willing to migrate to a striped LVM for bricks)
08:26 David_Varghese joined #gluster
08:27 ndevos Arrfab: renaming would need to modify the contents of the text-files, maybe some of the filenames and directories
08:28 ramky joined #gluster
08:28 Arrfab when I saw that "gluster volume rename" command in the cli, I was happy, and so created a new striped lv, initialized it as a brick, created a new gluster vol, rsync'ed the data and now I'm stuck :(
08:28 kaushal_ joined #gluster
08:29 ndevos Arrfab: maybe kshlm or atinm know about a script that can rename volumes, they are two of the glusterd experts
08:33 Arrfab ndevos: well, in fact I'll have to rename two vol : the initial one (to something else) then the one (to the initial one)
08:33 atinm Arrfab, may I know what is the requirement here for renaming them?
08:33 Arrfab so that all libgfapi calls will continue to find the correct path
08:34 Arrfab atinm: because I created a new volume to replace an existing one
08:34 Arrfab we had one disk per gluster node (in distributed/replicated) but IO are really slow
08:34 Arrfab so we added a second disk per server, but that will not speed up the IO as gluster will continue to only use one disk
08:35 Arrfab so the idea was to create a striped lv, use that as a brick, and then move the data : that way we force all IOs to go to two disks instead of one
08:35 Arrfab (the more spindles, the better)
08:35 Arrfab atinm: does that answer your question ?
08:36 atinm Arrfab, not really, I am still trying to understand how a renaming volume help here
08:36 Arrfab ndevos: and yes, I know that I should upgrade to at least 3.7 (or even 3.8 directly) one day with pkgs from the storage SIG :D
08:37 Arrfab atinm: because opennebula have the gluster path hardcoded in each file needed to start a VM
08:37 hchiramm joined #gluster
08:38 ndevos atinm: it's a migration from old-volume to new-volume, now the new-volume has all the data, and a rename was planned to make it active
08:38 atinm Arrfab, ahh there you go
08:39 atinm Arrfab, that was missing
08:39 atinm Arrfab, we don't have a script as such
08:39 Arrfab atinm: so I'd like to smack the guy who wrote and distributed that man page showing that it was possible :D
08:40 atinm what we'd have to do here is stop all the glusterd instances, in /var/lib/glusterd we need to replace the old volname with new volname, be it files or the content of the files and then restart glusterd instances one after another
08:42 Arrfab atinm: hmm, quite a bunch of files to modify everywhere, including filenames too ?
08:43 atinm Arrfab, yes
08:43 atinm Arrfab, but make sure you take a back up for all these files before changing them :)
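(A rough shell sketch of that procedure, assuming the volume definition lives under /var/lib/glusterd/vols/<oldvol>; the volume names are placeholders and this is an untested outline, not a supported tool:)

    # on every server, with glusterd stopped everywhere
    systemctl stop glusterd                                        # or: service glusterd stop
    cp -a /var/lib/glusterd /root/glusterd-backup                  # the backup atinm suggests
    cd /var/lib/glusterd/vols
    grep -rl oldvol oldvol/ | xargs sed -i 's/oldvol/newvol/g'     # file contents
    mv oldvol newvol                                               # the volume directory itself
    cd newvol
    for f in *oldvol*; do [ -e "$f" ] && mv "$f" "${f//oldvol/newvol}"; done   # file names
    systemctl start glusterd                                       # then restart, one server at a time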
08:43 Arrfab something I'd need to test on a gluster test cluster
08:43 Arrfab as I *can't* stop glusterd now :-(
08:44 creshal joined #gluster
08:44 * Arrfab also wishes that gluster would play nice with rsync .. it's so slow that one thinks it's running on an old 20GB drive in a Pentium 2
08:45 Arrfab so it seems that quite a few people tried that option that is in the man page but was never implemented: https://access.redhat.com/solutions/389583
08:45 glusterbot Title: How do I rename a Glusterfs (Red Hat Storage) volume? - Red Hat Customer Portal (at access.redhat.com)
08:45 Arrfab at least I'm not the only one :-)
08:49 ndevos Arrfab: hah, and it explains how to do it too, just like atinm and I mentioned a little briefer
08:50 Arrfab ndevos, atinm : thank you both for the info : I'll validate all those steps and will create a gluster test cluster to validate that before scheduling/announcing a maintenance window
08:50 Arrfab I'll probably also update to 3.7 or 3.8
08:50 ndevos Arrfab: stopping glusterd does not impact the running bricks, it only affects newly connecting clients
08:51 Arrfab ndevos: well I'll need all the VMs to be stopped anyway, and then restarted after the name change
08:51 Arrfab so a need for a maintenance window anyway
08:51 ndevos Arrfab: oh, yes, true
08:51 Arrfab ndevos: which one would be better on centos 6 ? 3.7 or 3.8 ? (the easiest/fastest migration)
08:52 Arrfab all that would probably make a good blog post, including the reason for the migration, and the striped lv to increase disk utilization
08:52 ndevos Arrfab: 3.8 should have a few more improvements for VM workloads, kdhananjay and itisravi know best
08:53 Arrfab ndevos: no conflict with qemu-kvm (as it's built against gluster 3.6 iirc)
08:54 Arrfab ?
08:54 ndevos Arrfab: hmm, if you already copied the VMs to the new volume (on 3.6), you would not be able to use sharding+arbiter, that improves handling of VM images quite a bit
08:54 itisravi yes, 3.8 is better
08:54 Arrfab ndevos: yeah, sharding is another option but that will not solve the existing issues for existing VMs
08:55 Arrfab while moving to a striped lv would be a direct benefit (and transparent as the .qcow2 files would remain one file)
08:55 ndevos Arrfab: no conflict that I am aware of, qemu should be able to use the library without issues (it does not have an SO-version, but symbol-versions)
08:56 ndevos Arrfab: yeah, if you like the whole "one file on the volume matches one file on the backend", then sharding is not for you
08:56 Arrfab ndevos: ok, thanks a lot for the info. I'll test the update + vol rename
08:56 * cloph had a bad experience with a striped lv (using 5 raid 10 as base), much better performance when using a raid 10 with the twenty drives directly
08:56 Arrfab ndevos: well, it's also that I have already VMs that are running :-)
08:56 ndevos Arrfab: the description on that RH Customer Portal page should work, several customers seem to have successfully applied it
08:57 cloph but might very well be that there was a misconfiguration somewhere down the path, but dm device was busy, while actual disks didn't have much to do..
09:00 ndevos cloph: lvm is normally pretty easy to setup and stable... I wonder what could cause a dm-device to be busy, oh well
09:01 cloph I was using it for small-file workload (rsnapshot) - apparently that didn't play well with it..
09:04 ndevos cloph: oh, that could be, rsnapshot uses hardlinks a lot iirc, so that does many filesystem-metadata updates, those are tiny for lvm
09:04 msvbhat joined #gluster
09:04 ndevos cloph: we had similar issues with certain device-mapper configurations too, and the Red Hat lvm-team improved it based on the Gluster use-cases
09:05 ndevos sometimes it really helps if a company has developers across many different components ;-)
09:05 cloph :-)
09:06 cloph and yes, lots of hardlinks, the daily, weekly, monthly rotations are created using cp -al
09:12 Arrfab yeah, using also that kind of workload, and OMG rsync on top of gluster volumes isn't a good idea
09:14 ndevos Arrfab: we're doing things to improve that, some of it might be in 3.9 already (end of September), or in the release after that (+3 months)
09:14 rastar joined #gluster
09:16 rafi joined #gluster
09:18 prasanth joined #gluster
09:20 cloph Arrfab: but gluster offers comfortable geo-replication (even if it is quite buggy/picky with symlinks it seems)
09:24 rafi1 joined #gluster
09:28 ndevos Arrfab: maybe we should teach rsync how to connect to gluster with libgfapi...
09:32 David_Varghese joined #gluster
09:32 jiffin1 joined #gluster
09:34 kotreshhr joined #gluster
09:34 rafi joined #gluster
09:37 nishanth joined #gluster
09:41 [diablo] Good morning #gluster
09:41 [fre] joined #gluster
09:42 [diablo] guys we're running a RHGS
09:42 [diablo] console is 3.1.1-0.65.el6
09:43 [diablo] and storage nodes run 3.7.-10
09:43 [diablo] we notice we have gluster_shared_storage
09:44 [diablo] volume. But we don't see any bricks
09:46 ivan_rossi joined #gluster
09:46 skoduri joined #gluster
09:51 [diablo] I've just deleted a volume, one that was called "test"
09:51 [diablo] used: gluster volume delete test
09:51 [diablo] that volume had the bricks as:
09:52 [diablo] Bricks:
09:52 [diablo] Brick1: svgfscapl001.prd.srv.cirb.lan:/rhgs/brick-data/brick-test2
09:52 [diablo] Brick2: svgfscupl001.prd.srv.cirb.lan:/rhgs/brick-data/brick-test2
09:52 [diablo] I still see the directories
09:59 ramky joined #gluster
10:02 nishanth joined #gluster
10:03 rastar joined #gluster
10:06 David_Varghese joined #gluster
10:06 jiffin1 joined #gluster
10:08 rafi joined #gluster
10:12 tomaz__ joined #gluster
10:12 jiffin joined #gluster
10:17 jiffin [diablo]: volume delete command never removes brick directories from backend
10:18 derjohn_mob joined #gluster
10:18 [diablo] hi jiffin
10:18 [diablo] OK how do we delete the bricks
10:18 [diablo] safely
10:19 MrRobotto joined #gluster
10:20 rafi joined #gluster
10:21 cloph what do you mean with safely? if you don't need that data anymore, just rm -r the directories...
10:21 jiffin [diablo]: if needed, back up the data from those directories
10:22 jiffin and remove them manually
10:23 shyam joined #gluster
10:25 Jacob843 joined #gluster
10:26 arcolife joined #gluster
10:27 Klas can you still mount a volume after it's deleted?
10:27 Klas if not, can you still access files after a volume has been deleted if it's still mounted?
10:28 cloph you can access the bricks, but not via mount
10:28 Klas ah, the files still exist on the server, of course
10:28 Klas that's expected
10:28 Klas at least in my mind
10:29 jiffin Klas: you can create a new volume using existing bricks, but it is not recommended
10:29 Klas jiffin: ah, it was not a feature request, I was just curious what problem [diablo] was trying to solve =)
10:30 Klas I would always plan on creating a new volume and migrate the data via client
10:30 msvbhat joined #gluster
10:31 aravindavk joined #gluster
10:34 * jiffin wonders the same
10:37 kukulogy joined #gluster
11:06 msvbhat joined #gluster
11:08 prasanth joined #gluster
11:13 shubhendu joined #gluster
11:14 rouven joined #gluster
11:16 rafi joined #gluster
11:18 [diablo] hi sorry went to lunch
11:19 [diablo] so if I want to remove a volume and all data held in the volume: gluster volume delete, then rm -rf the data on both nodes?
11:19 Klas yup
11:26 aravindavk_ joined #gluster
11:26 David_Varghese joined #gluster
11:26 [diablo] OK cheers
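(Spelled out with the brick path from the deleted "test" volume above, the whole sequence looks roughly like this:)

    gluster volume stop test        # if the volume is still started
    gluster volume delete test      # removes only the volume definition
    # then, on every node that hosted a brick for it:
    rm -rf /rhgs/brick-data/brick-test2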
11:28 kshlm Weekly community meeting starts in 30 minutes in #gluster-meeting
11:33 ankitraj joined #gluster
11:41 Gnomethrower joined #gluster
11:44 tomaz__ joined #gluster
11:45 rouven hi, i have got a gluster 3.7.12 replica-2  volume mounted via nfs on a centos 7 machine
11:45 rouven one file on it shows input/output error but is ok on the both gluster nodes
11:46 kukulogy joined #gluster
11:46 [diablo] ah we've dropped Ganesha for NFS, could we remove http://pastebin.com/raw/xvq9ESDU
11:46 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
11:46 rouven it shows up in ls like this "??????????? ? ?          ?             ?             ? test.yml"
11:46 creshal >two nodes
11:46 creshal Split brain?
11:47 [diablo] http://paste.ubuntu.com/23084862/#
11:47 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
11:47 [diablo] http://paste.ubuntu.com/23084862/
11:47 [diablo] even
11:47 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
11:47 rouven i cannot restart the nfs client - is there any way to get rid of this broken "file"
11:54 bluenemo joined #gluster
11:55 rouven creshal: i dunno if your question was directed at me - but: no, no split brain in my case :)
11:55 jiffin gluster v set all cluster.enable-shared-storage disable
11:55 jiffin [diablo]: ^^
11:56 jiffin rouven: do u want to remove that file?
11:58 rouven jiffin: i don't need that file, it's from a git repo and my git updates fail due to the i/o error, so deletion would at least be ok
11:59 jiffin rouven: mount it using native glusterfs protocol and then remove the file
12:00 kshlm Weekly community meeting starts now in #gluster-meeting
12:03 rouven jiffin: hmm, gluster mount shows the same i/o error and has some additional info in the logs: "Gfid mismatch detected for <9d1a7d5e-713c-40f8-83bf-54073b5f941f/test.yml>, cf574de4-eb3f-4e90-8669-068ba415ef21 on www-data-client-1 and 97270395-ba1b-44e9-b6d7-3010b8104b36 on www-data-client-0. Skipping conservative merge on the file."
12:03 rouven failed self-heal?
12:04 jiffin rouven: it seems to be split brain
12:04 rouven jiffin: hmm. how do i resolve that?
12:05 rouven gluster volume status doesn't show anything suspicious
12:05 [diablo] back jiffin just reading
12:06 jiffin rouven: http://gluster.readthedocs.io/en/latest/Troubleshooting/split-brain/
12:06 glusterbot Title: Split Brain (Manual) - Gluster Docs (at gluster.readthedocs.io)
12:06 rouven jiffin: thanks, was just starting to read that document :)
12:08 kukulogy joined #gluster
12:11 Gnomethrower joined #gluster
12:12 rouven jiffin: thanks, that did the trick
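(For the record, the gfid-mismatch case from that document comes down to removing the unwanted copy and its .glusterfs hardlink on one brick, then letting self-heal copy the good one back; the brick path below is a placeholder, and the gfid is the one from the log message above:)

    # on the brick whose copy you are discarding
    rm /path/to/brick/some/dir/test.yml
    rm /path/to/brick/.glusterfs/cf/57/cf574de4-eb3f-4e90-8669-068ba415ef21
    gluster volume heal www-data    # assuming the volume is called www-data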
12:13 creshal Told you it's split brain. :D I never managed to get a stable gluster setup with just two nodes.
12:13 rafi joined #gluster
12:13 rouven creshal: a third is in the making :)
12:13 creshal \o/
12:13 rouven :)
12:14 cloph oh yeah, with two nodes maintaining proper server quorum and client quorum is ~ impossible.. Add a dummy peer / add arbiter and be much safer in that regard _=
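(A hedged example of the arbiter layout cloph mentions, with placeholder host and brick names; the third brick stores only file names and metadata, not data:)

    gluster volume create demo replica 3 arbiter 1 \
        server1:/bricks/demo server2:/bricks/demo arbiter1:/bricks/demo-arb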
12:16 sanoj joined #gluster
12:16 ankitraj joined #gluster
12:21 johnmilton joined #gluster
12:22 rouven btw, does a geo-replica count as a peer in regards to quorums? probably not, huh?
12:22 johnmilton joined #gluster
12:23 cloph depends, if it is in same trusted pool, it would count for server quorum
12:25 nishanth joined #gluster
12:28 julim joined #gluster
12:28 kotreshhr joined #gluster
12:36 ben453 joined #gluster
12:40 arcolife joined #gluster
12:44 unclemarc joined #gluster
12:45 shyam joined #gluster
12:56 ankitraj joined #gluster
13:00 tomaz__ how can i install glusterfs client on coreos?
13:04 tomaz__ anybody?
13:04 Klas do you mean in a docker image or outside of it?
13:05 tomaz__ what i would like is to use glusterfs which is running on a dedicated hw to be used for PVs for kubernetes, which runs on coreos nodes
13:05 Klas (I've never used coreos, so I can't be of much help barring, well, compiling it)
13:05 tomaz__ outside
13:06 tomaz__ if i would be doing this again, i wouldn't go with coreos either :)
13:06 Klas hehe
13:06 Klas story of everyones software project ever =P
13:06 arcolife joined #gluster
13:06 Klas I don't think I've ever heard of one instance where everything went smoothly in any large project =P
13:07 tomaz__ do you mean downloading the whole package tar.gz to coreos ... and then trying to compile it there? i don't know if this will work... since you are so limited in what packages you have. the whole concept of coreos is ...ah
13:07 Klas that's the issue, yes ;)
13:08 Klas packeting the coreos way, I assume should be something along the lines of creating a docker image with gluster in it?
13:08 shubhendu joined #gluster
13:09 shyam tomaz__: From the little I know of coreOS compiling etc. on the host is not possible due to lack of packages on the OS
13:09 shyam I would go with what Klas states, creating a docker image with gluster in it
13:10 shyam and those would be the brick nodes, right? not the client bits
13:10 tomaz__ docker image with gluster in it. And what... then this gluster container is brick node?
13:10 tomaz__ so i would have let say 10 of those?
13:11 tomaz__ but.. let's say... i have 3 huge servers (cloud bare metal), each with > 200GB of RAM, which i wanted to use for kubernetes.
13:12 jiffin joined #gluster
13:13 derjohn_mob joined #gluster
13:13 tomaz__ so i would be running, let's say, > 50 pods or replication controllers on those 3 nodes. Each of them would need persistent storage (PV). For that i was thinking of using GlusterFS. But I would like to run it outside the kubernetes cluster... for the case where a node/host/the infrastructure breaks
13:13 tomaz__ i don't know
13:14 tomaz__ glusterfs cluster would have to be its own infrastructure... not layered with complexity of Kubernetes, etc
13:14 tomaz__ so the whole problem i have is ... that i provisioned kubernetes with coreos. Sh**
13:14 tomaz__ :)
13:15 tomaz__ i used https://stackpoint.io but they have coreos only
13:15 glusterbot Title: StackPointCloud | Kubernetes Anywhere (at stackpoint.io)
13:15 shyam ok, so the gluster server/brick nodes are not CoreOS and run whatever you want? (like a full blown distro?)
13:16 tomaz__ gluster servers are centos 7, and gluster cluster (2 node) is running nicely
13:17 tomaz__ now i would need to have glusterfs client on each of kubernetes nodes (servers) which is a problem now.... due to coreos
13:17 tomaz__ :)
13:19 Klas http://kubernetes.io/docs/user-guide/persistent-volumes/
13:19 Klas Can't you use that?
13:19 glusterbot Title: Kubernetes - Persistent Volumes (at kubernetes.io)
13:19 Klas I've never worked with kubernetes, and only just heard of it, just seems to be the right place to look ;)
13:20 tomaz__ i'd read/tried all this... and a lot more.. http://kubernetes.io/docs/user-guide/volumes/#glusterfs
13:20 glusterbot Title: Kubernetes - Volumes (at kubernetes.io)
13:21 tomaz__ but ... eventually you come to this """All nodes in kubernetes cluster must have GlusterFS-Client Package installed"""
13:21 Klas oh
13:22 Klas and coreos is the basis for your kubernetes install?
13:22 shyam Hmmm... I know that is a solved problem... but I am not aware of the solution, I would hunt for some blogs on this by hchiramm
13:22 shyam Try this: https://www.gluster.org/pipermail/gluster-users.old/2016-March/025977.html
13:23 glusterbot Title: [Gluster-users] GlusterFS Containers with Docker, Kubernetes and Openshift (at www.gluster.org)
13:25 squizzi joined #gluster
13:35 [o__o] joined #gluster
13:37 [diablo] guys I'm trying to create a new volume, and I get
13:37 [diablo] Brick may be containing or be contained by an existing brick
13:39 hchiramm Klas, it may be helpful as well http://website-humblec.rhcloud.com/gluster-container-demo-videos-gluster-persistent-data-store-containers/?utm_source=feedburner&utm_medium=email&utm_campaign=Feed:+humblefeed+(My+Humble+Abode)
13:39 [diablo] gluster volume create fred replica 2 svgfscapl001.prd.srv.cirb.lan:/rhgs/brick-data/brick-fred svgfscupl001.prd.srv.cirb.lan:/rhgs/brick-data/brick-fred
13:39 [diablo] was the command I ran
13:39 [diablo] /rhgs/brick-data/ is an lvm
13:40 [diablo] there is active volume with bricks in /rhgs/brick-data/brick-data
13:40 [diablo] so a little confused as to why it errors. i'd expect that if I'd gone for /rhgs/brick-data/brick-data/brick-fred
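(That error comes from glusterd comparing the new path against brick paths it already knows about on these peers, so one hedged first step is to list them all and see what it might be matching on:)

    gluster volume info all | grep -E '^Brick[0-9]+:'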
13:43 Saravanakmr joined #gluster
13:46 skylar joined #gluster
13:49 dnunez joined #gluster
13:54 poornima joined #gluster
13:54 [diablo] anyone please?
13:57 David_Varghese joined #gluster
13:58 kukulogy joined #gluster
13:59 arcolife joined #gluster
14:01 kpease joined #gluster
14:03 nathwill joined #gluster
14:04 Larsen joined #gluster
14:06 plarsen joined #gluster
14:14 mhulsman joined #gluster
14:18 ramky joined #gluster
14:19 kotreshhr joined #gluster
14:20 jkroon joined #gluster
14:50 bkolden joined #gluster
14:52 kramdoss_ joined #gluster
14:53 [diablo] anyone around?
14:54 jobewan joined #gluster
14:56 creshal I'm around, but I have no idea what to do, so…
14:56 [diablo] LOLLLL
14:56 [diablo] great answer
14:57 [diablo] creshal, you'll be telling me next you've no idea how you ended up in here
14:58 creshal I used gluster once, eight months ago. Worked so horribly for our (fairly obscure) use case I went back to cron&rsync. Actually, why /am/ I still in here.
14:58 [diablo] LOLLLL
14:58 [diablo] stop you're killing me :D
14:58 creshal That's one way to solve your computer trouble, I guess.
15:00 jkroon joined #gluster
15:02 nbalacha joined #gluster
15:02 [diablo] :D
15:08 wushudoin joined #gluster
15:14 wushudoin joined #gluster
15:18 ivan_rossi left #gluster
15:26 jobewan joined #gluster
15:28 hagarth joined #gluster
15:37 ankitraj joined #gluster
15:40 jobewan joined #gluster
15:45 derjohn_mob joined #gluster
15:50 barajasfab joined #gluster
15:51 jkroon joined #gluster
15:51 skoduri joined #gluster
15:53 kshlm joined #gluster
15:57 ankitraj joined #gluster
16:05 msvbhat joined #gluster
16:21 JoeJulian [diablo]: what version?
16:22 Gambit15 joined #gluster
16:25 squizzi_1 joined #gluster
16:27 chirino_m joined #gluster
16:29 kpease joined #gluster
16:31 DaKnOb joined #gluster
16:31 hackman joined #gluster
16:39 B21956 joined #gluster
16:48 jiffin joined #gluster
16:57 d0nn1e joined #gluster
16:58 aravindavk_ joined #gluster
17:07 robb_nl joined #gluster
17:10 kotreshhr joined #gluster
17:11 jiffin joined #gluster
17:13 squizzi_ joined #gluster
17:16 jiffin joined #gluster
17:28 jiffin joined #gluster
17:32 squizzi_ joined #gluster
17:34 jkroon joined #gluster
17:36 jiffin joined #gluster
17:37 jiffin joined #gluster
17:43 suliba joined #gluster
17:52 jri joined #gluster
17:53 jiffin joined #gluster
17:54 Manikandan joined #gluster
18:01 kotreshhr left #gluster
18:02 skylar joined #gluster
18:03 jri joined #gluster
18:06 cliluw joined #gluster
18:12 derjohn_mob joined #gluster
18:15 dlambrig joined #gluster
18:16 hagarth joined #gluster
18:17 kovshenin joined #gluster
18:19 chirino_m joined #gluster
18:20 jiffin joined #gluster
18:21 jkroon JoeJulian, you around?
18:31 JoeJulian jkroon: yeah, what's up?
18:33 jkroon i saw some interesting thing with reference to link counts on gfid files.
18:33 jkroon let me pastebin quickly
18:35 bdashrad joined #gluster
18:35 jkroon https://paste.fedoraproject.org/413493/72063742/ <-- JoeJulian
18:35 glusterbot jkroon: <'s karma is now -24
18:35 glusterbot Title: #413493 Fedora Project Pastebin (at paste.fedoraproject.org)
18:36 jkroon basically for gfids from volume heal ${volname} info, the associated gfid file in .glusterfs has a link count of 1.
18:38 jkroon the question really is how do I resolve them?
18:39 jkroon can I simply rm the file?
18:39 JoeJulian That's what I typically do.
18:39 JoeJulian I wish I knew how that happens though.
18:41 jkroon i can provide some speculation but it'll be exactly that.  i know i caused a few of them myself.
18:42 jkroon basically just rm'ed the file from the brick to cause a re-copy from the other side, presumably leaving the gfid file behind.
18:43 jkroon ok, so if I understand correctly the list from "volume heal ${volname} info" should pretty much remain empty?  I never used to worry about that, only really when they became split brain, but over the last week i learned (the hard way) that files listed for heal are at risk of going into split brain.
18:43 jobewan joined #gluster
18:44 jkroon another candidate for self-heal?  gfid not present on another replica and has link count = 1 ... auto unlink?
18:48 jiffin joined #gluster
18:51 nathwill joined #gluster
18:52 JoeJulian Gluster's default is to preserve data.
18:53 jkroon that makes sense.
18:53 JoeJulian It would, however, make sense to report link=1 gfid's somewhere, probably heal...info
18:54 JoeJulian Then offer the ability to delete them, with the appropriate forced override of default safety.
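(In the meantime, a hedged way to list candidates by hand on a brick; the path is a placeholder, and anything found should still be inspected before removal, in keeping with the preserve-data default:)

    find /path/to/brick/.glusterfs -type f -links 1 \
        ! -path '*/indices/*' ! -name health_check -print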
18:54 JoeJulian file a bug
18:54 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
18:54 jkroon indeed.  ok, one cluster cleared.
18:54 jkroon hehehe - right along next to the other one.
18:55 The_Pugilist joined #gluster
18:59 JoeJulian Heh, I love answering like this when management asks me if a volume heal is complete: http://www.indra.com/8ball/15.gif
19:03 jkroon https://bugzilla.redhat.com/show_bug.cgi?id=1369933
19:03 glusterbot Bug 1369933: low, unspecified, ---, bugs, NEW , need a way to detect and resolve dangling gfid files
19:03 jkroon JoeJulian, ... yea, wish I could get away with that.
19:03 jkroon but seeing that i'm the last line of defence :p
19:03 JoeJulian So am I.
19:07 jkroon that explains the level of detail you dig into in your blogs.
19:07 jkroon i love it btw.  thinking if I can get some time i'd like to write up some of what I've learned over the last two weeks as well.
19:09 kenansulayman joined #gluster
19:10 jkroon ok, i've got some heals left that are in an odd state, probably file renames that got interrupted, or had server reboots.  i have file paths which have clearly different gfid attributes - so I'd expect a split brain, yet only one of the two servers reports them, and then only as an entry in the heal list.
19:15 JoeJulian Are they 0 size mode 1000? If so they're dht link files and you can just delete them.
19:21 jkroon no
19:21 jkroon and mode 1000 is odd but not impossible (busy creating a pastebin for your opinion - just got interrupted by a woman that lost control of her car outside)
19:25 jkroon https://paste.fedoraproject.org/413536/20667231/
19:25 glusterbot Title: #413536 Fedora Project Pastebin (at paste.fedoraproject.org)
19:26 jkroon so more of the case that we saw the other day - except those actually ended up reporting as split-brain - surely these should go into split brain too?
19:27 karnan joined #gluster
19:28 JoeJulian Yep, sure looks splitbrain to me.
19:28 jkroon simplest way to resolve - just nuke the "older" file from the brick?
19:29 JoeJulian Seems rational for what that file seems to be.
19:29 jkroon need to take the gfid file with obviously :)
19:29 JoeJulian I would do it through either the cli or through the xattr fuse-mount method.
19:30 JoeJulian Or using split-mount.
19:34 jkroon ok, the cli "volume heal ??? split-brain stuff hasn't worked for me ever.  i've never managed the other two methods either, so I typically just hit the back-end files.
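(For completeness, the CLI route looks roughly like this on 3.7+; it only applies when heal info actually reports the file as split-brain, which may be why it has not helped here:)

    gluster volume heal <VOLNAME> split-brain bigger-file <FILE-ON-VOLUME>
    gluster volume heal <VOLNAME> split-brain latest-mtime <FILE-ON-VOLUME>                    # newer releases
    gluster volume heal <VOLNAME> split-brain source-brick <HOST:BRICKPATH> <FILE-ON-VOLUME>   # pick the surviving copy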
19:34 jkroon so on the bricks.  ... what is it tonight ... now the police is running around with sirens ...
19:38 kenansulayman joined #gluster
19:40 Gambit15 Hey guys, any suggestions on getting gluster to address the host & peers via an FQDN that is different to the host's FQDN?
19:41 Gambit15 For example, my storage network is isolated from the rest of the network & accessed via secondary interfaces on each server. The FQDN for each server returns the server's public IP, rather than the IP of its storage interface.
19:42 Gambit15 This is causing me much hurt... :/
19:43 JoeJulian Were we working on that before?
19:43 Gambit15 JoeJulian, you mean WRT my earlier questions? No
19:43 JoeJulian I was working with someone, I forget who, but then the problem seemed to go away.
19:44 JoeJulian So, Gambit15, what are your steps, results, and expectations?
19:44 Gambit15 The issue we discussed previously was WRT sharding slaughtering my throughput.
19:48 jkroon Gambit15, so basically you want the servers to use FQDN but it must resolve to different value inside your storage network than outside?
19:48 Gambit15 So, I've got a cluster/group/beowolf/etc/etc of servers which provide a service (VMs) whose data is hosted on Gluster. The FQDNs of these servers return their public IPs, rather than the IPs used for the storage network.
19:48 jkroon sounds like a DNS problem more than a gluster issue?  you could just add entries to /etc/hosts ?
19:49 Gambit15 I don't want gluster using $(hostname) to define each node/server/peer
19:49 jkroon so probe them by IP.
19:50 ahino joined #gluster
19:51 Gambit15 However I notice the first peer from which I added the others reports itself to the other peers using $(hostname) as the primary name
19:51 Gambit15 I've read about DNS & IP issues in gluster, so just trying to find out what issues I could expect & how to avoid them
19:53 bene2 joined #gluster
19:53 jkroon Gambit15, i generally try to avoid those kind of split network situations you have.  normally we have a bond0 link with LACP of all available interfaces going to a switch.
19:53 johnmilton joined #gluster
19:53 jkroon and then a public /28 or whatever, with a firewall running in front of that (which generally seems to not be the recommendation)
19:54 jkroon well, that is the "single network, rather than a split network" case.  and our FQDN entries point at the hosts, so we normally probe by hostname.
19:54 jkroon in the one case where we did probe by IP that just worked.
19:54 Gambit15 I'm also curious what I'd have to do in the case of changing the underlying subnet of the storage nodes. I know I can "replace" the bricks, but wouldn't that try to "resilver" the data, rather than picking up the peer's existing volume data?
19:55 JoeJulian "the first peer from which I added the others reports itself to the other peers using $(hostname) as the primary name" which doesn't matter at all when defining bricks. The hostname used for the brick is the hostname the clients will try to resolve.
19:56 Gambit15 Ideally, I want to keep the storage network independent of the general network to avoid contention. In this case, the storage network has its own dedicated network gear
19:58 JoeJulian If you want your servers to talk to each other over a different subnet, just use split-dns to have your clients get a different address.
19:59 JoeJulian Or use /etc/hosts on your servers and/or your clients as needed.
19:59 Gambit15 Split DNS doesn't work when the requests are coming from within the same host...
20:01 Gambit15 I'm already testing with peers defined by IP, however I'm trying to confirm this won't cause problems down the road. At least for previous versions of gluster, I've seen a couple of reported problems with this setup in the mailing lists
20:03 JoeJulian Sure, you can use name spaces to split dns resolution.
20:03 jkroon Gambit15, use IPs then.  the only problem might be if (when) you want to change those IPs.
20:03 JoeJulian How do IPs help?
20:03 jkroon JoeJulian, no - in most cases he wants those machines to resolve the FQDN to the public address, but for gluster he wants the storage side.
20:04 JoeJulian Then define a hostname specific to the gluster address.
20:04 jkroon two reasons:  firstly - you eliminate DNS lookup failure, secondly, you control where they go.
20:04 jkroon that's the other option, so hostname.gluster.foo.com instead of just hostname.foo.com ... then use those *.gluster.foo.com addresses @ Gambit15
20:05 JoeJulian IP addresses lock you in to a configuration and isn't very future-proof.
20:05 plarsen joined #gluster
20:06 jkroon yea, @ Gambit15 for example:  we used to run a cluster on a private IP range.
20:06 Gambit15 Yup. That's what I'm looking at now. For example, the nodes all use v0.dc0, v1.dc0, etc. and the peers will use s0.dc0, s1.dc0, etc.
20:06 jkroon now we've got publics with 2G links to the switches, so we want to just use the publics.
20:06 Gambit15 I just wanted to make sure that won't cause any future issues. The hostname of each server will be configured with v0, v1, etc.
20:07 jkroon we ended up configuring the privates on lo interface ... and adding routes to the remote ends, eg "ip ro ad 192.168.0.1 via ${publicip} src 192.168.0.2" and vice versa ...
20:07 jkroon nasty.
20:08 jkroon btw, is anybody running glusterfs on recentish (>=4.1) kernels?
20:09 JoeJulian My self-heals occur over their own 10G interface, 10.1.0.0/16 and my clients connect via their own, 10.2.0.0/16. The brick hostname is server1.gluster.domain.dom. The /etc/host on the server resolves to the 10.1 address. dns lookup provides the 10.2. address.
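(In concrete terms that amounts to something like the following, with illustrative names and addresses:)

    # /etc/hosts on each gluster server: the brick hostname resolves to the server/heal network
    10.1.0.11   server1.gluster.domain.dom
    # DNS answers 10.2.0.11 for the same name, so clients connect over the client network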
20:09 Gambit15 Yeah. Ideally I want each service to use a dedicated interface/bond. The storage network isn't routed on the public network, and doesn't even have any physical connection between the two.
20:09 JoeJulian I'm running it on 4.7 at home.
20:09 jkroon i'd like to know your experiences with respect to performance and specifically flock.
20:09 jkroon no problems?
20:09 JoeJulian None
20:10 jkroon ok, then the problem we saw is the md code refactoring that came into 4.1 most likely.
20:10 jkroon so it should be pretty safe to upgrade those hosts to 4.0 at least ...
20:10 Gambit15 JoeJulian, couldn't having conflicting answers within /etc/hosts & DNS cause an issue, when you're running everything on the same host?
20:11 JoeJulian Could be. I'm not using md at all, so that may be a difference as well.
20:11 JoeJulian Gambit15: I would probably just containerize so they don't *look* like they're the same host.
20:11 JoeJulian I don't have the "same host" problem.
20:11 Gambit15 Technically, /etc/hosts should always override DNS, however some services force an nslookup
20:12 jkroon JoeJulian, we tracked a major performance problem with rsync to the code refactor in the md code in 4.1.
20:12 JoeJulian Interesting
20:12 Gambit15 Containers were another consideration on my list actually
20:13 JoeJulian It would certainly simplify what you're trying to do.
20:13 jkroon Gambit15, glibc uses NSS to resolve names, so /etc/nsswitch.conf most likely states "hosts:       files dns", so anything in /etc/hosts will override whatever is in DNS.
20:13 Gambit15 Although couldn't that add more traffic to the network, with queries to the same physical host being sent out to the switch?
20:15 jkroon Gambit15, it should handle that in the hypervisor.
20:15 jkroon well, the containing kernel in this case, so no, traffic should not go to the switch that wouldn't normally.
20:16 JoeJulian No. Use macvlan's with hairpin and it'll be quite fast.
20:17 Gambit15 jkroon. Cool, I wondered if that'd be the case. Not played with containers too much yet, so wasn't certain how "smart" they were
20:20 jkroon Gambit15, most virtualization environments (all that I've worked with) creates a "virtual switch" or "bridge" on the host, and then a physical port (bond preferably) goes into that that leads to the switch, virtual ports (tap devices in for example qemu) gets added to the bridge to which the guest connects, so it's all handled on the machine itself.
20:21 JoeJulian you should use macvlan even with qemu hosts. There's a lot of kernel overhead you can avoid.
20:21 jkroon JoeJulian, i've got a theory on those webalizer files ... during the time they would have been processed i was forced into a reboot situation.
20:21 Gambit15 Yup. Well accustomed to HV networking, however I wasn't 100% sure whether containers worked the same way or not. Until now, it was merely an assumption
20:21 jkroon JoeJulian, thanks for that advice - i'll definitely look into those.
20:22 Gambit15 BTW, jkroon, don't suppose you use sharding?
20:22 JoeJulian Read my friend Major's blog: https://major.io/2015/10/26/systemd-networkd-and-macvlan-interfaces/
20:22 jkroon so one host changed files which (due to the md problem) never made it to disk on one of the hosts.  it creates a new file and renames it.  so the webalizer process (which relies on flock() on gluster) processed the same data on the other end.
20:22 JoeJulian That would certainly explain it.
20:23 jkroon Gambit15, we accidentally configured it in one cluster - it lasted about 24 hours before performance complaints were raised, and nearly 72 hours to raise emergency change control to get it re-set up as pure distribute-replicate.
20:23 JoeJulian If the posix translator thinks it's doing the safe and sane thing, but the filesystem (via the block storage) is lying, there's not much gluster can do to avoid that.
20:24 jkroon indeed.  so it goes back to the underlying IO being broken to a bad extent - not failing, nor succeeding.
20:24 jkroon just 'hanging'.  and most likely 'timing it out' on gluster side has other bad side effects.
20:24 JoeJulian Did you happen to file a bug report with gluster about that. You never know when they might be able to work around such a thing.
20:24 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
20:25 JoeJulian I've seen other filesystem specific issues that I thought was out of their control that they've fixed.
20:25 Gambit15 jkroon, same here then. Enabling 512M shards here brought our throughput down to 512Kbps
20:25 JoeJulian Sometimes by fixing kernel bugs.
20:25 jkroon JoeJulian, that macvlan stuff looks PARTICULARLY interesting.  do you need special hardware for the multiple MACs to an ethernet card though?  Or is that a relatively common thing?
20:26 jkroon (normally broadcast uses 0xff... for MAC, and multicast also has a specific range - and most hardware can filter those but I was not aware that they can deal with multiple "normal" MACs)
20:29 JoeJulian No special hardware. I've even done it on a RPi.
20:30 jkroon so it'll probably switch to promisc automatically if needed (which is what bridge code does)
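(A minimal sketch of the macvlan setup from that post, using iproute2 and placeholder interface names/addresses:)

    ip link add link eth0 name gluster0 type macvlan mode bridge
    ip addr add 10.1.0.11/16 dev gluster0
    ip link set gluster0 up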
20:31 shyam joined #gluster
20:36 bkolden joined #gluster
20:43 jkroon jay!  no more files in heal ... info - thanks again JoeJulian.
20:43 JoeJulian Woo-hoo!
20:43 JoeJulian You're welcome. :)
20:43 jkroon Gambit15, i'm not even sure exactly what the parameters for sharding was - i just know we disabled it as soon as we could.
20:43 * jkroon is liking gluster more and more.
20:56 jkroon will know in 5-10
20:57 repnzscasb joined #gluster
20:57 repnzscasb joined #gluster
21:15 BitByteNybble110 joined #gluster
21:57 repnzscasb joined #gluster
21:57 repnzscasb joined #gluster
22:11 wadeholler joined #gluster
22:12 cliluw joined #gluster
22:26 cliluw joined #gluster
22:30 caitnop joined #gluster
22:45 congpine joined #gluster
22:46 congpine hi all, I hope someone can help answer this quick question: I already have 2 bricks running, and I brought down brick No 3 (killed the pid of that brick). When I start that brick again by restarting the glusterfs-server service, the port shows as N/A for the other bricks
22:47 congpine gluster volume status shows 2 bricks with Port: N/A, Brick 3 has port assigned
22:48 congpine I checked ps and saw those 2 bricks still running, with port numbers assigned. Clients still maintain connections to those ports.
22:48 congpine I'm not sure why Gluster Volume Status shows it as N/A. I have seen this in the past and I had to reboot the server so that gluster volume status reports the correct port
22:55 JoeJulian congpine: I've seen it recently, too, with a 3.6 version. "gluster volume start $volname force" cured it.
23:02 congpine i'm running 3.5 . I have tried to force start VOL but no luck
23:15 jkroon glusterfs-server likely only restarts glusterd, not the various glusterfsd processes.
23:16 jkroon JoeJulian, do you know if start ... force will kill glusterfsd processes that should no longer be running?  Eg, if you switch off shd?
23:22 congpine there were only processes serving those 2 bricks. I checked and saw that those processes weren't restarted.
23:28 kukulogy joined #gluster
23:31 JoeJulian congpine: I saw that bug all the time with 3.5. It's EOL so I would just recommend upgrading.
23:32 JoeJulian jkroon: it will do the shd process, yes.
23:32 JoeJulian jkroon: but you're insane if you turn it off. ;)
23:37 congpine JoeJulian: yeah, but I can't upgrade it yet. we have 5 servers and need proper planning
23:38 plarsen joined #gluster
23:40 JoeJulian congpine: regardless, that's the fix for the problem you're seeing.
23:42 MugginsM joined #gluster
23:59 congpine yeah , I did try to force start VOL, but no luck. I have to reboot the server
