IRC log for #gluster, 2016-03-09


All times shown according to UTC.

Time Nick Message
00:01 haomaiwa_ joined #gluster
00:05 dlambrig joined #gluster
00:12 JoeJulian kattamm: nope
00:13 JoeJulian gbox: Might be able to get more help with development against libgfapi in gluster-dev. Most of the devs are in GMT+5:30 though, so you'd want to try during those business hours.
00:14 JoeJulian gbox: or use the gluster-devel mailing list
00:32 jhyland joined #gluster
00:40 calavera joined #gluster
00:45 ovaistariq joined #gluster
00:50 tyler274 @JoeJulian discovered I'm still getting quite bad performance even with the option you suggested
00:53 tyler274 or rather, seems very inconsistent
00:54 tyler274 especially after I stop an rsync transfer to examine some things
00:54 tyler274 or unmount the volume and remount it
00:57 gbox @JoeJulian: thanks again
01:01 haomaiwa_ joined #gluster
01:06 baojg joined #gluster
01:14 johnmilton joined #gluster
01:21 ovaistariq joined #gluster
01:27 jhyland joined #gluster
01:29 jhyland joined #gluster
01:29 hagarth joined #gluster
01:30 DV__ joined #gluster
01:31 johnmilton joined #gluster
01:35 jhyland joined #gluster
01:55 hackman joined #gluster
01:59 nangthang joined #gluster
02:05 coredump joined #gluster
02:17 Lee1092 joined #gluster
02:20 haomaiwa_ joined #gluster
02:24 DV joined #gluster
02:25 haomaiwang joined #gluster
02:26 haomaiwang joined #gluster
02:27 5EXAACPU1 joined #gluster
02:28 haomaiwang joined #gluster
02:29 haomaiwa_ joined #gluster
02:30 haomaiwa_ joined #gluster
02:31 haomaiwa_ joined #gluster
02:32 haomaiwa_ joined #gluster
02:33 john51 joined #gluster
02:33 haomaiwa_ joined #gluster
02:40 kattamm joined #gluster
02:49 ilbot3 joined #gluster
02:49 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:49 aravindavk_ joined #gluster
02:55 chirino joined #gluster
03:11 sakshi joined #gluster
03:14 haomaiwa_ joined #gluster
03:17 ashiq joined #gluster
03:21 Manikandan joined #gluster
03:26 Manikandan joined #gluster
03:31 kattamm joined #gluster
03:33 nbalacha joined #gluster
03:33 jhyland joined #gluster
03:40 haomaiwang joined #gluster
03:42 kshlm joined #gluster
03:43 nishanth joined #gluster
03:51 kdhananjay joined #gluster
03:53 kanagaraj joined #gluster
03:53 RameshN joined #gluster
04:05 atinm joined #gluster
04:07 itisravi joined #gluster
04:11 jberkus joined #gluster
04:11 jberkus left #gluster
04:16 ayma joined #gluster
04:20 overclk joined #gluster
04:22 calavera joined #gluster
04:24 shubhendu joined #gluster
04:25 gem joined #gluster
04:33 ppai joined #gluster
04:35 Hamburglr joined #gluster
04:36 Hamburglr is there any way to add a new replication node without having the heal process go nuts?
04:40 Hamburglr like can I slow how fast the new node gets sync'd in and not have giant CPU/IO usage on the current gluster nodes
04:45 itisravi Hamburglr: gluster currently does not have any throttling mechanism. we're working on it. FWIW,  I've seen some blogs on limiting the heal process' resource usage using cgroups.
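A rough sketch of the cgroups approach itisravi mentions, assuming cgroup v1 with the cpu controller mounted; the group name and quota values are only examples, not taken from any particular blog:

    # create a cgroup and cap CPU for the self-heal daemon
    mkdir -p /sys/fs/cgroup/cpu/glusterheal
    echo 100000 > /sys/fs/cgroup/cpu/glusterheal/cpu.cfs_period_us
    echo 25000 > /sys/fs/cgroup/cpu/glusterheal/cpu.cfs_quota_us   # roughly a quarter of one core
    # move the running self-heal daemon (glustershd) processes into the group
    for pid in $(pgrep -f glustershd); do echo "$pid" > /sys/fs/cgroup/cpu/glusterheal/tasks; done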
04:45 nehar joined #gluster
04:46 Hamburglr itisravi: I tried adding a 3rd node to a replicated volume, is it common for both existing nodes to see huge load?
04:47 JoeJulian I don't worry about load unless it's affecting something.
04:47 Hamburglr yeah it was
04:47 JoeJulian I'd rather have my resources being utilized than sitting there idle.
04:47 Hamburglr and now even after killing gluster on the 3rd node it's still running heal on the other two
04:47 JoeJulian Current version?
04:48 Hamburglr 3.6.1
04:48 JoeJulian Oh! Well then, there's a whole different kettle of fish.
04:49 Hamburglr oh yeah?
04:51 JoeJulian Since you're running the buggy leaky version, you'll want to match these settings: http://fpaste.org/336043/57499044/ (along with any other setting changes you've already done)
04:51 glusterbot Title: #336043 Fedora Project Pastebin (at fpaste.org)
04:52 JoeJulian 3.6.9 is the latest 3.6, btw, and you still need to do most of those.
04:52 n-st joined #gluster
04:52 JoeJulian but it doesn't leak as much memory.
04:53 Hamburglr so these do what?
04:53 JoeJulian the self-heal settings turn off all but the self-heal-daemons from performing the heals.
04:54 JoeJulian rpc-limit works around a bug that causes "heal info" to fail.
04:54 Hamburglr should I just shut down gluster now on the current two nodes and upgrade to 3.6.9?
04:55 JoeJulian durability works around a locking issue.
04:55 JoeJulian I would.
04:55 itisravi Right. In other words, it disables heals from happening via the clients.
04:55 Hamburglr clients are what?
04:56 JoeJulian @glossary
04:56 glusterbot JoeJulian: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
04:56 itisravi Hamburglr: yeah its common to see the load on both the 'source' nodes from which the data is read and the 'sink' nodes where data is written to.
04:57 EinstCrazy joined #gluster
04:58 JoeJulian And with 3.6 there's some major bug that causes everything to come to a screeching halt if the clients are healing instead of the self-heal daemons.
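The fpaste link above has since expired, so the exact option list isn't reproduced here. As a hedged sketch, the client-side self-heal toggles JoeJulian describes (leaving heals to the self-heal daemons only) are normally set like this, with VOLNAME standing in for the real volume name; the rpc-limit and durability settings he mentions are additional volume options not shown:

    gluster volume set VOLNAME cluster.data-self-heal off
    gluster volume set VOLNAME cluster.metadata-self-heal off
    gluster volume set VOLNAME cluster.entry-self-heal off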
05:09 ndarshan joined #gluster
05:11 overclk joined #gluster
05:12 steveeJ joined #gluster
05:14 aravindavk joined #gluster
05:20 Hamburglr JoeJulian do I need to still use those options you set with the 3.6.9 version?
05:21 Hamburglr should I upgrade to 3.7 instead?
05:22 gowtham joined #gluster
05:22 kattamm joined #gluster
05:23 karthikfff joined #gluster
05:24 poornimag joined #gluster
05:25 kattamm joined #gluster
05:27 pppp joined #gluster
05:28 ndevos @stripe
05:28 glusterbot ndevos: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
05:29 prasanth joined #gluster
05:30 pur joined #gluster
05:31 Apeksha joined #gluster
05:33 jiffin joined #gluster
05:36 vmallika joined #gluster
05:36 poornimag joined #gluster
05:37 Hamburglr upgraded to 3.7, now it's really broke: Client process will keep trying to connect to glusterd until brick's port is available
05:41 pur joined #gluster
05:42 Bhaskarakiran joined #gluster
05:43 jiffin joined #gluster
05:46 shubhendu joined #gluster
05:55 rafi joined #gluster
05:55 kattamm joined #gluster
05:57 ramteid joined #gluster
05:58 ayma joined #gluster
06:00 ayma joined #gluster
06:01 baojg joined #gluster
06:02 kotreshhr joined #gluster
06:06 rafi joined #gluster
06:07 kdhananjay joined #gluster
06:11 rafi1 joined #gluster
06:19 baojg joined #gluster
06:28 jwd joined #gluster
06:38 baojg joined #gluster
06:39 ashka joined #gluster
06:42 kattamm joined #gluster
06:44 hchiramm joined #gluster
06:48 spalai joined #gluster
06:52 ramky joined #gluster
06:55 atalur joined #gluster
06:59 baojg joined #gluster
06:59 skoduri joined #gluster
06:59 arcolife joined #gluster
07:01 shubhendu joined #gluster
07:02 Saravanakmr joined #gluster
07:03 Gaurav_ joined #gluster
07:05 Bhaskarakiran joined #gluster
07:13 rafi joined #gluster
07:39 [Enrico] joined #gluster
07:40 mhulsman joined #gluster
07:55 jri joined #gluster
08:01 themurph_ joined #gluster
08:16 jwaibel joined #gluster
08:17 Wizek joined #gluster
08:20 atalur joined #gluster
08:27 hackman joined #gluster
08:38 skoduri joined #gluster
08:42 hackman joined #gluster
08:43 DV joined #gluster
08:48 Wizek joined #gluster
08:53 ctria joined #gluster
08:55 baojg joined #gluster
08:58 deniszh joined #gluster
08:59 [Enrico] joined #gluster
09:01 ramteid joined #gluster
09:18 Slashman joined #gluster
09:24 muneerse joined #gluster
09:24 nbalacha joined #gluster
09:26 ashiq joined #gluster
09:29 Slydder joined #gluster
09:31 Slydder we have been using gluster for a while now and would like to get important metrics in grafana using diamond collector.  can anyone tell me the main metrics that we need to keep an eye on?
09:48 ramky joined #gluster
09:54 [Enrico] joined #gluster
10:00 Gnomethrower joined #gluster
10:02 baojg joined #gluster
10:03 Akee joined #gluster
10:04 hackman joined #gluster
10:10 ctria joined #gluster
10:13 madnexus joined #gluster
10:13 madnexus morning guys
10:14 madnexus from europe lol
10:14 madnexus having some issues with glusterfs mounting the brick on boot (Centos)
10:15 madnexus seems like the mount process is hanging. I can see the mount but trying to access it freezes the session
10:15 madnexus just wondering why the systemd service file is like this:
10:15 madnexus After=network.target rpcbind.service
10:15 madnexus Before=network-online.target
10:16 madnexus does glusterd needs to start before the actual network is totally up?
10:19 ramky joined #gluster
10:20 johnmilton joined #gluster
10:37 johnmilton joined #gluster
10:40 harish_ joined #gluster
10:40 Saravanakmr joined #gluster
10:41 suliba joined #gluster
10:41 nbalacha joined #gluster
10:41 nbalacha joined #gluster
10:51 Saravanakmr_ joined #gluster
11:08 vmallika joined #gluster
11:12 kshlm madnexus, Nope. The `Before=network-online.target` is only there to allow self gluster mounts work.
11:13 kshlm Gluster mounts on boot that is.
11:19 gem joined #gluster
11:21 Slashman joined #gluster
11:22 madnexus kshlm: well, im trying to mount the server node itself gluster mount...
11:22 madnexus node01:/mailvol01.rdma on /mnt/gluster type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
11:22 madnexus that's on the fstab
11:23 madnexus seems like it takes a long time before it's accessible
11:24 kshlm madnexus, I hope you are using `_netdev` in the fstab entry.
11:24 kshlm How long does the mount take to become accessible?
11:26 madnexus node01:/mailvol01  /mnt/gluster  glusterfs  defaults,_netdev  0 0
11:26 madnexus kshlm: yes I do
11:26 madnexus well, the system bootup, everything is ready
11:26 madnexus I can see the nics connected
11:28 madnexus also the glusterfs mount there.... but if I try to do a ls on /mnt/gluster the session hangs (not the system but the ssh session)
11:29 madnexus don't really want to add a delay as it seems a bit clunky
11:30 kshlm How long does it hang for?
11:30 madnexus let me check
11:30 kshlm Also, what sort of a volume are you using? Replicate or plain distribute.
11:31 spalai joined #gluster
11:44 johnmilton joined #gluster
11:47 Bhaskarakiran joined #gluster
11:48 hagarth joined #gluster
11:50 madnexus kshlm: replicate volume
11:50 madnexus and it's taking around 1.5 mins after everything is ready (network, glusterd, etc)
11:50 kshlm That's pretty long.
11:52 madnexus I'm checking the logs to see if I can figure out why
11:52 kshlm IIRC, the client to a replicate volume will wait for some time before allowing operations, to make sure that all it's children are up.
11:52 kshlm But that shouldn't be 1.5 minutes long.
11:54 surabhi joined #gluster
11:57 madnexus [2016-03-09 11:56:13.967513] W [MSGID: 103024] [rdma.c:1134:gf_rdma_cm_handle_addr_resolved] 0-mailvol01-client-0: rdma_resolve_route failed (me:10.0.0.1:65534 peer:10.0.0.1:24008) [Resource temporarily unavailable]
11:57 madnexus first error I can see on /var/log/glusterfs/mnt-gluster.log
11:57 madnexus [2016-03-09 11:57:43.920000] W [glusterfsd.c:1236:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dc5) [0x7fd8fc9d7dc5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7fd8fe042905] -->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7fd8fe042789] ) 0-: received signum (15), shutting down
11:57 madnexus [2016-03-09 11:57:43.920066] I [fuse-bridge.c:5685:fini] 0-fuse: Unmounting '/mnt/gluster'.
11:57 glusterbot madnexus: ('s karma is now -126
11:58 madnexus just after that
11:58 madnexus sorry glusterbot :D
11:58 Saravanakmr joined #gluster
11:59 MrAbaddon joined #gluster
11:59 post-factum @paste
11:59 glusterbot post-factum: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
11:59 post-factum madnexus: ^^
12:00 madnexus post-factum: thanks!
12:02 kshlm madnexus, Are you using rdma?
12:03 madnexus kshlm: yeah, using rdma
12:04 kshlm For an rdma mount, I think that the client first resolves the IPoIB ip of the rdma interface to fetch the volfile,
12:04 kshlm then figures out what is required to set up the rdma connection, and then establishes the connection over rdma.
12:04 * kshlm is not sure if the part about IPoIB is correct.
12:05 kshlm But that could be one of the reasons, it's taking so long for you.
12:05 kshlm Taking too long to resolve rdma addresses.
12:05 kshlm Someone more familiar with rdma transport would be of better help here.
12:06 kshlm rafi, rastar, Could you guys help madnexus?
12:08 madnexus kshlm: I have read this before: https://mjanja.ch/2014/04/glusterfs-mounts-fail-at-boot-on-centos/
12:09 rafi madnexus: I guess your IP address assigned to IB device  is 10.0.0.1
12:09 rafi madnexus: am I correct ?
12:09 madnexus rafi: you are correct
12:13 rafi what is the status of ib devices ?
12:13 rafi madnexus: ^
12:13 madnexus up and running
12:13 madnexus I can ping each other and also have done a rdma connectivity test
12:13 madnexus giving me an OK
12:14 madnexus for some reason gluster still is not able to access the route on boot
12:14 rafi madnexus: ibv_devices shows the port active ?
12:14 rafi madnexus: cool
12:15 madnexus rafi: the ib adapter seems to have connectivity in all the test I have done to it
12:15 EinstCrazy joined #gluster
12:15 madnexus at least when the server let me ssh in...
12:15 Slashman joined #gluster
12:17 madnexus rafi: http://termbin.com/k3vk
12:17 nbalacha joined #gluster
12:17 ppai joined #gluster
12:17 rafi madnexus: now it is running
12:17 rafi madnexus: the last logs says all clear
12:17 madnexus rafi: /usr/lib/systemd/system/glusterd.service is set to default
12:18 rafi madnexus: I think your mount was successful ?
12:18 madnexus rafi: it does work, but I need to manually mount the folder again. it won't do it on the fstab
12:19 kshlm It could be that the client xlators havent' connected to the bricks yet.
12:19 kshlm The mount process returns once the fuse is set up, but doesn't wait till clients actually connect.
12:19 d0nn1e joined #gluster
12:20 kshlm But the latest mount scripts should ensure that the mount is accessible before returning.
12:20 kshlm madnexus, Which version of GlusterFs are you using.
12:20 kshlm ?
12:20 kdhananjay joined #gluster
12:21 madnexus kshlm: glusterfs 3.7.8 built on Feb  9 2016 06:29:54
12:22 kshlm Well, 3.7.8 should have the proper mount.glusterfs script.
12:22 madnexus so far I had the speed performance issues mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=1309462 and also this problem to mount the brick on start
12:22 glusterbot Bug 1309462: low, unspecified, ---, ravishankar, POST , Upgrade from 3.7.6 to 3.7.8 causes massive drop in write performance.  Fresh install of 3.7.8 also has low write performance
12:23 madnexus so it wasn't very successful hehe
12:24 madnexus kshlm: using mount.glusterfs should be the same as using a traditional "mount -t glusterfs etc,etc", right?
12:24 kshlm Yup.
12:24 kshlm If you were facing hangs on the mount, your boot should have hung as well.
12:24 kshlm The mount script tries to do a lookup on the mount point, so that should have hung.
12:26 marbu joined #gluster
12:26 lanning_ joined #gluster
12:27 surabhi joined #gluster
12:27 g3kk0 joined #gluster
12:27 crashmag joined #gluster
12:27 hchiramm_ joined #gluster
12:28 wiza joined #gluster
12:28 overclk joined #gluster
12:28 voobscout joined #gluster
12:28 wnlx joined #gluster
12:28 hchiramm joined #gluster
12:28 rafi madnexus: what is the volume type ? tcp,rdma
12:29 Larsen_ joined #gluster
12:29 ndarshan joined #gluster
12:30 anil joined #gluster
12:30 arcolife joined #gluster
12:31 k-ma joined #gluster
12:32 madnexus rafi: rdma only
12:32 madnexus this is exactly what happens on the log from boot: http://termbin.com/9bac
12:33 rafi madnexus: I think the NIC was not proper at the time
12:33 madnexus the last line ([2016-03-09 12:29:24.034328]) is what happens after I do a ls on the mountpoint /mnt/gluster
12:35 madnexus just adding a link delay on ifcfg-ib0
12:35 swebb joined #gluster
12:35 madnexus checking if it's working :)
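For reference, the link delay being added here is an initscripts option in the CentOS ifcfg file; a minimal sketch, with the 20-second value chosen arbitrarily:

    # /etc/sysconfig/network-scripts/ifcfg-ib0
    DEVICE=ib0
    ONBOOT=yes
    LINKDELAY=20   # wait up to 20s for the IB link before treating the interface as up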
12:37 rafi madnexus: i'm dropping of now, will be offline for sometime
12:37 rafi madnexus : if you the problem still persist , drop a mail to gluster-userrs or gluster-devel
12:38 madnexus yeah, that didn't work :(
12:39 rafi joined #gluster
12:39 DV joined #gluster
12:41 kdhananjay joined #gluster
12:44 ppai_ joined #gluster
12:47 chirino joined #gluster
12:49 ctria joined #gluster
12:53 spalai joined #gluster
13:00 nehar joined #gluster
13:02 ppai joined #gluster
13:07 baojg joined #gluster
13:21 sebamontini joined #gluster
13:24 cristian joined #gluster
13:25 cristian left #gluster
13:25 cristian joined #gluster
13:33 baojg joined #gluster
13:33 spalai joined #gluster
13:33 ira joined #gluster
13:36 mhulsman joined #gluster
13:41 B21956 joined #gluster
13:42 DV__ joined #gluster
13:46 unclemarc joined #gluster
13:52 hgowtham joined #gluster
13:57 mhulsman joined #gluster
13:57 spalai joined #gluster
13:59 spalai left #gluster
14:00 arcolife joined #gluster
14:09 sebamontini joined #gluster
14:12 Marbug when you start the service of glusterfs, is it possible to start the volumes that are present ?
14:12 R0ok_ joined #gluster
14:17 squaly joined #gluster
14:18 post-factum Marbug: could you please clarify your issue?
14:21 aravindavk joined #gluster
14:23 shaunm joined #gluster
14:24 Marbug well if I need to start
14:24 Marbug nothing works lol
14:24 Marbug but to start with something,
14:25 Marbug when I restart the container, and the service starts, no volume has being started at that point
14:25 Marbug is that normal? or should you somewhere specify that specific volumes need to be started ?
14:26 kshlm Marbug, Volumes that were started and running previously, before the last GlusterD stop, should start up automatically.
14:27 Marbug mmmm
14:27 Marbug do you need glusterfsd as a service too ?
14:27 Marbug I can't find much info about that service :/
14:27 post-factum yup
14:27 post-factum it is the service that actually runs "brick"
14:28 kshlm The glusterfsd service was only present to make sure that the bricks are killed on shutdown.
14:28 post-factum glusterd is management service
14:28 kshlm The bricks are started by glusterd.
14:28 Marbug ah that is why nothing is working
14:29 Marbug well I mean mm
14:29 Marbug I need more sleep :D
14:29 post-factum kshlm: i mean, the beast that listens on brick betowrk port is actially glusterfsd
14:29 post-factum *network
14:29 post-factum damn, i cannot hit right keys today
14:29 kshlm post-factum, Yup. But the bricks are started by GlusterD.
14:30 kshlm Not by the glusterfsd service.
14:30 post-factum yup
14:30 Marbug and do you need to define the bricks into a .vol file ?
14:30 post-factum one should edit volfile manually in recent gluster versions
14:30 post-factum *shouldn't
14:30 kshlm Marbug, GlusterD takes care of generating the volfiles.
14:30 Marbug hmm
14:31 Marbug that is easy :)
14:31 Marbug lets see
14:31 post-factum Marbug: probably, you also need self-heal daemon as well
14:31 Marbug because I don't think I saw any changes in the vol file
14:31 kshlm Marbug, which vol file are you looking at?
14:32 Marbug I got the /etc/glusterfs/glusterd.vol file with the local ip etc, but it never changes what peer I add or so, but I suppose the vol files are on another location ?
14:32 kshlm GlusterD generates volfiles for the bricks of a volume under /var/lib/glusterd
14:32 kshlm The volfile in /etc/glusterfs/glusterd.vol is the glusterd volfile. Which is used to start GlusterD.
14:33 post-factum Marbug: also, which glusterfs version are you running?
14:33 Marbug latest version on gentoo post-factum: 3.7.4
14:34 kshlm One cool feature of GlusterFS is that GlusterD is actually implemented as a translator. So we require a volfile for glusterd as well, to say that the process needs to load the glusterd xlator.
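As a rough illustration of the layout kshlm describes (exact file names vary between versions), the hand-edited glusterd volfile and the generated per-volume state live in separate places:

    /etc/glusterfs/glusterd.vol              # volfile for the glusterd process itself
    /var/lib/glusterd/peers/                 # one file per known peer
    /var/lib/glusterd/vols/VOLNAME/info      # generated volume definition (counts, bricks, options)
    /var/lib/glusterd/vols/VOLNAME/bricks/   # generated per-brick definitions
    /var/lib/glusterd/vols/VOLNAME/*.vol     # generated brick and client volfiles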
14:34 MrAbaddon joined #gluster
14:34 Marbug got the problem that it seems the 2 nodes seem to be in sync but arn't when you do something
14:34 squaly joined #gluster
14:34 Marbug maybe it's because it's a guest with same ip, but I don't think that could be a problem
14:34 kshlm Marbug, Since you are running in containers, make sure that /var/lib/glusterd is not lost when restarting the containers.
14:34 Marbug nope it isn't
14:35 post-factum also, it is ok tu run client and server under same ip
14:35 post-factum *to
14:35 post-factum Marbug: is that docker?
14:35 Marbug yes, but I mean I have 2 hosts, vhost1 and vhost2 which do run each a guest with the same guest ip
14:36 kshlm Marbug, you're trying to run gluster in docker across two hosts?
14:37 Marbug so glusterfs sees themself as 10.0.0.113, the other host is defined by their external ip. I have bound the 2 peers with their hostname which I even have defined in /etc/hosts with their external ip if it needs to have the other host
14:37 Marbug not docker kshlm just LXC
14:37 Marbug the other host is linux-vserver but it may not be any different
14:37 post-factum i've got lost in your network setup
14:37 Marbug mounting etc worked, just couldn't get the bricks to sync or so
14:38 post-factum you have 2 hosts: vhost1 and vhost2. are they hardware servers?
14:38 Marbug yes indeed post-factum
14:38 Marbug they run a glusterfs guest with an ip 10.0.0.113
14:38 post-factum let me investigate step-by-step plz :)
14:38 post-factum you run 1 lxc container per vhost?
14:39 jri joined #gluster
14:39 shyam joined #gluster
14:40 Marbug more than 1, but atm the 2nd vhost has only the glusterfs container as it's the first thing I want to set up right :)
14:40 post-factum ok, so 1 lxc per vhost with glusterfs instance inside
14:40 Marbug indeed
14:40 Marbug I want every vhost to have a glusterfs container, with each a replicated storage
14:41 post-factum do you want replication across vhosts? replica 2?
14:42 Marbug yes post-factum
14:42 post-factum so, how vhosts are interconnected? internal private lan?
14:42 Slydder we have been using gluster for a while now and would like to get important metrics in grafana using diamond collector.  can anyone tell me the main metrics that we need to keep an eye on?
14:42 Marbug I first made 1 host, and then extended to the 2nd one, as foor good practice I want to see how to do it when I add a 3rd host and add it as a 3rd replicated host
14:42 hamiller joined #gluster
14:43 Simmo joined #gluster
14:43 mhulsman1 joined #gluster
14:43 Marbug post-factum, they are dedicated servers, so the guests ports are forwarded to the outside ip, and they are communicating with their public ip
14:43 Simmo Hi Guys! :_(
14:44 post-factum so, lxc containers are NATted?
14:44 Simmo I have a replice volume with 2 nodes: one of those has gone offline for 30min. An now that it is back I cannot start the glusterd again
14:44 nix0ut1aw joined #gluster
14:44 Simmo (but the replication is happening )
14:45 Marbug yes post-factum they use a bridge and with iptables they are NATed like a router should do, the host is acting like a router
14:45 Simmo What can I check ? :)
14:45 post-factum Marbug: what ports do you forward?
14:46 post-factum Simmo: check logs first :)
14:47 Marbug I added the important rules post-factum http://apaste.info/Xxv
14:47 Simmo post: so far I checked etc-glusterfs-glusterd.vol.log
14:47 Simmo post: and this is worrying me: http://pastebin.com/be5XREMC
14:47 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
14:47 kshlm Marbug, Currently gluster doesn't work well when you've got two servers/glusterds, each behind NAT.
14:47 Marbug damn
14:47 post-factum Slydder: https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Monitoring%20Workload/
14:47 Marbug that could be the problem :(
14:47 glusterbot Title: Monitoring Workload - Gluster Docs (at gluster.readthedocs.org)
14:48 Simmo ops ok => http://fpaste.org/336209/53488014/
14:48 glusterbot Title: #336209 Fedora Project Pastebin (at fpaste.org)
14:48 Marbug kshlm, anything you could suggest ?
14:48 kshlm There are places where we need to be able to figure out the actual IP we're communicating with, to make some decisions.
14:48 kshlm So we need to be directly connected to the network we'll be communicating to other gluster server with.
14:49 post-factum Marbug: use plain bridging without nat
14:49 post-factum Marbug: or native routing
14:49 kshlm ^ This should work.
14:49 haomaiwa_ joined #gluster
14:49 amye joined #gluster
14:50 Marbug mmmm let us think about that
14:50 Marbug and how we shall do it, thanks post-factum & kshlm
14:50 post-factum np
14:50 kshlm Marbug, glad to help.
14:50 post-factum kshlm: "resolve brick failed in restore" ← should this be dns issue?
14:50 kshlm post-factum, Yup.
14:51 post-factum Simmo: ^^
14:51 skylar joined #gluster
14:51 Simmo post: oui ? : )
14:51 post-factum Simmo: it seems your dns is broken :)
14:52 Simmo post: O_o
14:52 wnlx joined #gluster
14:52 Simmo post: which line explain it ? :)
14:52 kshlm Simmo, could you try starting GlusterD in debug mode, that should give logs which are more helpful.
14:53 post-factum Simmo: and show us "gluster peer status" plz
14:54 Simmo both: ok.. let me google/man page how to start glusterd in debug mode .. one sec : )
14:54 kshlm Simmo, `glusterd -LDEBUG`
14:54 Simmo kshlm: thanks : )
14:54 kshlm or `glusterd --log-level=DEBUG`
14:55 kshlm Simmo, post-factum was refering to line 17 in your paste.
14:56 nangthang joined #gluster
14:56 nehar joined #gluster
14:56 kshlm The weekly gluster community meeting will start in 5 minutes in #gluster-meeting
14:57 Simmo @all: so..here the debug mode output => http://fpaste.org/336214/14575354/
14:57 Simmo @all: and "Connection failed. Please check if gluster daemon is operational." from the node that was back
14:57 Simmo @all: and from the "server" we have
14:58 Simmo @all: http://fpaste.org/336215/14575355/
14:58 Slydder post-factum: I have read through that already. unfortunately all it does is show you how to get a bunch of possibly useful information out of gluster. The problem is what bits of info should one use to keep an eye on how the node is performing? Of course the profiling section can be ignored but the status section could be of some help if one were to massage the info enough to send it to graphite/grafana.
15:00 post-factum Simmo: looks like DNS PTR records are missing for 172.30.2.109
15:00 kshlm Simmo, Gluster cannot seem to find out if 10.11.1.31 is a peer.
15:00 jakob___1 joined #gluster
15:00 kshlm GlusterD isn't able to match that address with any of it's known peers.
15:01 * kshlm will be hosting the community meeting and will be back in ~1 hour
15:01 haomaiwang joined #gluster
15:01 post-factum Slydder: i'd stick to latency first
15:01 post-factum Slydder: network storage is about latency in the first place
15:02 Simmo @post,kshlm: thanks! I'll try to understand where this ip comes from :-/
15:02 B21956 joined #gluster
15:05 jakob___1 Anyone who has a clue why gluster performance dropped 50% when upgrading from 3.7.1 (from Centos repo) to 3.7.8 (from gluster repo)? Exact same conditions and 100% reproducible. Tests were run with fio (seq. write w. 1MB blocksize). Changing parameters improved performance with 3.7.8, but I was only able to get to 85% of performance with 3.7.1... ??!?
15:05 post-factum jakob___1: because of https://bugzilla.redhat.com/show_bug.cgi?id=1309462
15:05 glusterbot Bug 1309462: low, unspecified, ---, ravishankar, POST , Upgrade from 3.7.6 to 3.7.8 causes massive drop in write performance.  Fresh install of 3.7.8 also has low write performance
15:06 jakob___1 Wow...great and fast answer! :) Thanks.
15:08 Simmo @post,kshlm: found something. Long time ago I had a third peer in this replica (another story of data migration): its ip was 10.11.1.31
15:08 Simmo @post,kshlm: at that time I run the steps to remove and detach a node for that IP
15:08 post-factum Simmo: it seems you still have some tails to clean up
15:09 Simmo @post,kshlm: which leftovers can still be there ? What can i remove ? :-/
15:09 hgowtham joined #gluster
15:10 post-factum Simmo: I believe you should check /var/lib/glusterd/peers first
15:12 Simmo @Post: interesting. So in both node there is a file (uuid)
15:12 Simmo @Post: I'm going to copy paste the content in a pastebin
15:12 post-factum just one file?
15:14 Simmo just one file
15:14 Simmo and this is the content http://fpaste.org/336228/36426145/
15:14 glusterbot Title: #336228 Fedora Project Pastebin (at fpaste.org)
15:14 Simmo there is a suspicious state=3 inside
15:14 post-factum then, also check /var/lib/glusterd/vols. some volume can still refer to missing server
15:14 post-factum you may grep -R 10.11.1.31 there
15:15 Simmo bingo!
15:15 baojg joined #gluster
15:15 Simmo Found in several files: http://fpaste.org/336231/36539145/
15:15 glusterbot Title: #336231 Fedora Project Pastebin (at fpaste.org)
15:16 pur joined #gluster
15:16 post-factum Simmo: make a backup of those files and try to dig into them manually
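A minimal sketch of that backup-then-inspect step, using the stale address from Simmo's case:

    # keep a copy of the generated volume state before hand-editing anything
    cp -a /var/lib/glusterd/vols /var/lib/glusterd/vols.bak
    # list every file still referring to the removed peer
    grep -Rl 10.11.1.31 /var/lib/glusterd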
15:17 Simmo Yes, Sir... executing : )
15:20 Simmo Need help for the "info" file
15:20 Simmo If i remove a brick-2=ip:..etc
15:21 post-factum show us the whole file plz
15:21 Simmo then I need to decrease "count", sub:count and replica_count
15:21 Simmo oki
15:21 Simmo one sec
15:22 Simmo et voilà
15:22 Simmo http://fpaste.org/336234/45753692/
15:22 glusterbot Title: #336234 Fedora Project Pastebin (at fpaste.org)
15:22 Simmo I need to remove the last line
15:22 Simmo and the replica volume consists only of 2 instances
15:22 Simmo (1 brick)
15:22 post-factum yep, you should correct those values as well
15:23 me2 joined #gluster
15:23 Simmo op-version=3
15:23 Simmo client-op-version=3
15:23 Simmo i guess those I don't touch
15:24 post-factum yep
15:24 Simmo so only "count", "sub_count" and "replica_count"
15:24 Simmo i set to 2
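A hedged sketch of the resulting edit; only the fields discussed are shown, and the rest of the file is left untouched:

    # /var/lib/glusterd/vols/cr0/info, after dropping the stale brick-2 line for 10.11.1.31
    count=2
    sub_count=2
    replica_count=2
    op-version=3            # left as-is
    client-op-version=3     # left as-is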
15:25 Simmo next question : )
15:25 Simmo in this folder /var/lib/glusterd/vols/cr0/bricks
15:25 Simmo I have a file named 10.11.1.31:-export-glusterfs-cr0-brick1-brick
15:26 Simmo should I removed it ?
15:27 Marbug is it possible to have 1 volume, but you can mount a subdir of the volume ?
15:27 Marbug or will you need to make a volume for each dir you want to 'export' ?
15:27 plarsen joined #gluster
15:28 post-factum Simmo: backup that file, so you could always restore it
15:29 Simmo Post: already backup : )
15:29 Simmo and I think that it worked!!!
15:29 Simmo one sec for the pastebin
15:29 Simmo @Post: is it looking good now? http://fpaste.org/336240/45753737/
15:29 Simmo :-)
15:29 Simmo :_)
15:32 Simmo @Post: next and hopefully final question: how can i check that the replica is fine ? I mean.. that all files are in sync and to latest status ? :-*
15:33 post-factum check gluster peer status
15:33 post-factum and then check this:
15:34 Simmo peer status looks good :) http://fpaste.org/336246/45753763/
15:34 post-factum gluster volume heal VOLUME info
15:34 glusterbot Title: #336246 Fedora Project Pastebin (at fpaste.org)
15:34 Simmo ah true!
15:34 Simmo => Number of entries: 0
15:34 Simmo totally amazing!
15:34 post-factum then i guess you are done with that
15:35 Simmo Guys and you, Post-Factum, are so smart and amazing :_)
15:35 Simmo Ok, now I can start investigating why this machine in production went down (I guess hight traffic :-/)
15:36 post-factum have fun :)
15:36 Simmo Have a nice day!
15:37 Simmo and still thanks!
15:37 Simmo :_)
15:37 post-factum np
15:41 hagarth joined #gluster
15:43 farhorizon joined #gluster
15:44 Gnomethrower joined #gluster
15:46 ayma joined #gluster
15:49 kotreshhr left #gluster
15:55 yalu joined #gluster
15:55 raghu joined #gluster
16:01 haomaiwa_ joined #gluster
16:02 kshlm Did Simmo's problem get solved?
16:03 Simmo yes, it did. Post-factume has been great :-) :-)
16:03 post-factum kshlm: yup, we have coped with that
16:03 Simmo ops *factum
16:03 jdarcy joined #gluster
16:03 kshlm Good to know.
16:03 kshlm Thanks for helping post-factum
16:03 skoduri joined #gluster
16:03 kshlm post-factum++
16:03 glusterbot kshlm: post-factum's karma is now 3
16:03 post-factum np
16:06 loadtheacc joined #gluster
16:07 loadtheacc left #gluster
16:15 squaly joined #gluster
16:16 tommyli123 joined #gluster
16:18 tommyli123 our gluster 3.6 on EC2 with 6 nodes, everyday at a specific time eg. around 9pm PST that all nodes will spike to use up all the EBS IOPS making the system unavailable to use.   any idea?
16:18 ggarg joined #gluster
16:21 ashiq joined #gluster
16:24 mhulsman joined #gluster
16:25 dlambrig joined #gluster
16:25 harish_ joined #gluster
16:29 plarsen joined #gluster
16:35 nishanth joined #gluster
16:37 Merlin_ joined #gluster
16:38 Simmo @tommy: is it a replica volume ?
16:40 Merlin_ joined #gluster
16:43 tommyli123 all volumes
16:44 tommyli123 it last for 10 to 15 minutes everyday
16:49 syadnom guys, weird question.  If I had a volume that spanned 2 physical sites linked with slow internet, is there a way to configure a gluster volume to prefer to write to the local disk?
16:51 RayTrace_ joined #gluster
16:52 wolsen joined #gluster
16:53 syadnom I've tested just writing to one of the source folders and that seems to work..  ie  vol1 is on server1:/data1,server2:data1 < write directly to /data1 on server1 and gluster shows that when I look in vol1's mountpoint (/mnt/vol1).
16:56 EinstCrazy joined #gluster
16:57 sebamontini joined #gluster
17:01 haomaiwang joined #gluster
17:05 farhoriz_ joined #gluster
17:08 shubhendu joined #gluster
17:09 calavera joined #gluster
17:13 hchiramm joined #gluster
17:13 abyss^ joined #gluster
17:15 mtanner joined #gluster
17:17 farhorizon joined #gluster
17:23 chirino joined #gluster
17:24 ninjaryan joined #gluster
17:26 nathwill joined #gluster
17:33 skylar joined #gluster
17:38 TealJax joined #gluster
17:39 nage joined #gluster
17:40 skylar joined #gluster
17:44 JoeJulian syadnom: no. replication happens at the client and it will write to all replicas simultaneously.
17:44 JoeJulian syadnom: writing to the bricks is going to give unpredictable results.
17:45 JoeJulian Just like dd'ing to the middle of a disk and expecting xfs to know what to do with that data.
17:45 post-factum JoeJulian: well, xfs does not have server-side heal :)
17:46 JoeJulian And that file doesn't have metadata
17:47 JoeJulian And if you rename that file, replace it, etc. now you've broken the hardlink and left your data behind attached to an anonymous gfid.
17:47 post-factum for sure
17:48 syadnom this would be on a non-replicated volume
17:48 DaKnOb joined #gluster
17:49 post-factum syadnom: probably you should stick to something like lsyncd
17:50 JoeJulian +1
17:50 syadnom post-factum, I don't want to replicate, I just want a single volume appearance
17:50 JoeJulian Interesting idea though.
17:50 syadnom with 'delete through'
17:50 JoeJulian Do you have enough control you could touch the file through the mountpoint before writing actual data?
17:51 jiffin joined #gluster
17:51 JoeJulian No, that wouldn't work.
17:52 JoeJulian Ok, sure, in a strictly distribute volume, if you create the file and then lookup() that file through the mountpoint (by name), theoretically it should work.
17:52 post-factum syadnom: you may use cephfs as well, adjusting your crush map to write to selected osd first, and server will do the rest. but that is synchronous thing, and surely latency will affect the performance
17:52 post-factum btw, what is the RTT between nodes?
17:52 farhorizon joined #gluster
17:53 JoeJulian If you created the same filename on both bricks, though, you would likely lose one of them.
17:53 chirino joined #gluster
17:54 syadnom here's what I want to do, so you guys seem my direction :) :
17:54 syadnom I have 2 video NVR's.
17:54 ninjarya2 joined #gluster
17:54 syadnom I want to store all writes from NVR1 -on- NVR1
17:55 syadnom I want NVR2 to be able to see NVR1's files.  This is straight forward with aufs.  just stack them up and reference NVR1's storage first.  Can do with NFS.
17:55 JoeJulian syadnom: I would also encourage you to file a bug report asking for a new feature for distribute which allows a client to override the dht algorithm, choosing the local subvolume instead. Explain your use case there.
17:55 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
17:56 JoeJulian It's a good use case.
17:59 post-factum syadnom: also examine xtreemfs
17:59 JoeJulian What ever happened to xtreemos?
18:01 post-factum dunno, but they claim read-only caching-like partial replica as a feature
18:01 post-factum thought, might fit to this case
18:01 64MAAJKGI joined #gluster
18:02 tommyli123 left #gluster
18:18 arcolife joined #gluster
18:18 ninjarya3 joined #gluster
18:18 syadnom feature requested
18:22 ivan_rossi left #gluster
18:26 RayTrace_ joined #gluster
18:31 dlambrig_ joined #gluster
18:35 Hamburglr is it possible to bind the NFS server (and preferably everything for that matter) to a specific address? I saw the option transport.socket.bind-address but I'm either setting it wrong or it's not doing anything
18:36 jlp1 joined #gluster
18:39 sage joined #gluster
18:43 syadnom Hamburglr, you could use iptables to simply limit access on a specific interface/ip
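A minimal sketch of that interface-scoped approach; the subnet and the single NFS port are examples only, and in practice gluster's NFS also needs the portmapper and mountd ports allowed:

    # accept NFS traffic from the local subnet only, drop everything else
    iptables -A INPUT -s 10.0.1.0/24 -p tcp --dport 2049 -j ACCEPT
    iptables -A INPUT -p tcp --dport 2049 -j DROP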
18:44 Hamburglr syadnom: yeah, I already am but would prefer to know it's only on the local network
18:44 syadnom Hamburglr, are you not confident in iptables?
18:45 Hamburglr I am but mistakes happen, firewall could be shut off, extra overhead
18:46 Hamburglr I tried getting the NFS secured w/ /etc/exports also but that seemed to do nothing
18:46 syadnom Hamburglr, what OS/distro?
18:47 Hamburglr debian wheezy
18:47 syadnom gluster has it's own nfs server, doesn't use /etc/exports
18:47 Hamburglr it can w/ 3.7 using nfs.exports-auth-enable
18:48 Hamburglr though the documentation on it is practically nothing...
18:48 madnexus joined #gluster
18:50 jiffin Hamburglr: did u mean link from readthedocs?
18:51 Hamburglr jiffin: http://www.gluster.org/community/documentation/index.php/Features/Exports_Netgroups_Authentication this is all I've found about it
18:51 TealJax Howdy!  I’m trying to right-size a gluster solution (likely, distributed-replicated).  I’m just wondering about a good value to use for glusterfs-specific overhead.  For example, I’m assuming 0.5% overhead for each XFS volume.  Any guidance?
18:52 ovaistariq joined #gluster
18:52 jiffin Hamburglr: i can help u a bit
18:52 Hamburglr TealJax: do you mean extra metadata storage or speed?
18:52 jiffin i had drafted that document
18:52 Hamburglr awesome, I'm all ears
18:53 jiffin Hamburglr: it basically used as client authentication mechanism
18:53 TealJax I’m meaning extra metadata storage
18:54 jiffin Hamburglr: can u specific where are u stuck? or what all missing in the doc ?
18:55 jiffin s/specific/specify
18:55 Hamburglr jiffin: maybe my config is wrong but in exports I put: /sc12-www 10.0.1.0/255.255.255.0(rw)
18:56 Hamburglr but I was still able to access remotely afterwards
18:56 Hamburglr TealJax: on 18GB of small files I have an additional ~4GB of metadata
18:57 Hamburglr TealJax: but that's ext4 I don't know if XFS would make a difference
18:57 jiffin Hamburglr: which file did u change? is it /var/lib/glusterd/nfs/export ?
18:57 Hamburglr jiffin: /etc/exports
18:58 jiffin Hamburglr: that is used by kernel nfs server, for gluster nfs server, we have another file
18:58 jiffin Hamburglr: /var/lib/glusterd/nfs/exports
18:59 jiffin u need to create that file before enabling the export-auth option
18:59 Hamburglr jiffin: well that makes a lot of sense, did I miss that in the docs?
18:59 jiffin Hamburglr: may be I had miss that in the doc
18:59 * jiffin checking
19:00 jiffin its there http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Export%20And%20Netgroup%20Authentication/
19:00 glusterbot Title: Export and Netgroup Authentication - Gluster Docs (at gluster.readthedocs.org)
19:01 Hamburglr ahhh yeah I was reading the wrong area
19:01 jiffin Hamburglr: the link u had mentioned before was pretty old one
19:01 TealJax Thanks, Hamburglr.  That’s lower than I would have expected.  Excellent!
19:01 jiffin and used as design level doc
19:01 haomaiwang joined #gluster
19:02 jiffin Hamburglr: sorry I forgot to mention it in the beginning
19:02 Hamburglr jiffin: did my exports line look right? are there specific options that are good to add for many small files?
19:03 JoeJulian jiffin: I don't suppose that file is also used with ganesha integration, is it?
19:04 jiffin Hamburglr:  there is no specific option for small files
19:05 Hamburglr jiffin: am I safe to just set nfs.exports-auth-enable off and then back on or do I need a restart?
19:05 jiffin JoeJulian: No. Ganesha uses the its conf for client authentication
19:05 jiffin Hamburglr: u can turn off the option, add the file
19:05 jiffin Hamburglr: and again turn on the option
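Putting jiffin's steps together, a sketch that reuses the entry Hamburglr quoted earlier; the volume name is assumed to match the export path:

    # /var/lib/glusterd/nfs/exports
    /sc12-www 10.0.1.0/255.255.255.0(rw)

    gluster volume set sc12-www nfs.exports-auth-enable off
    # create or update the exports file, then re-enable:
    gluster volume set sc12-www nfs.exports-auth-enable on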
19:06 MrAbaddon joined #gluster
19:06 JoeJulian I wonder if it would be benificial to have the ganesha hook manage that conf file from our exports and netgroup.
19:07 Hamburglr is Ganesha better to use?
19:07 unlaudable joined #gluster
19:08 Hamburglr looks confusing
19:09 JoeJulian It's more feature rich, supports nfs over udp, and all the current nfs versions.
19:09 jiffin JoeJulian: IMO, it is better to keep them separate, since cli options like rootsquash and acl for gluster-nfs do not
19:09 jiffin have implications in ganesha
19:09 jiffin JoeJulian: but we can add options in ganesha-ha.sh for ease of use
19:11 jiffin Hamburglr: only options supported in export for gluster nfs is sec flavour, rw permission, and uid
19:11 JoeJulian Just brainstorming.
19:12 jiffin Hamburglr: if u have a special interest for specific option, we can integrate it with current feature
19:12 Hamburglr jiffin: nope, just trying to do things the best way possible. thanks a ton for the help!
19:13 jiffin Hamburglr: i would recommend u to move to ganesha since more active development happens in there
19:14 dlambrig_ joined #gluster
19:15 jiffin Hamburglr: It may be little difficult for the first time, but once u mastered it then u will love it
19:15 Hamburglr I'll start reading on it, I'm seeing info about it being faster but nothing really solid. Have you seen good speed increases?
19:16 jiffin Hamburglr: did u mean comparison with ganesha and gluster nfs?
19:16 Hamburglr yes
19:17 jiffin Hamburglr: performance for ganesha (v3 and v4) is almost the same as or slightly lower than gluster nfs (which only supports v3)
19:18 Hamburglr so is the benefit HA?
19:18 jiffin but for pNFS protocol in ganesha have better write performance than the gluster nfs
19:19 jiffin Hamburglr: HA is real benefit for ganehsa
19:19 jiffin it has v4 support
19:19 jiffin kerberos support
19:19 jiffin and soon
19:19 jiffin Hamburglr: there are a lot of developments happening on the ganesha side to increase its performance
19:20 Hamburglr ok
19:20 jiffin like md-cache and multi-fd related works
19:21 jiffin Hamburglr: we can hope it will gives really good result(should be merged in ganesha2.4)
19:21 Hamburglr cool
19:21 jiffin and in v4 support for nfs delegations will be added(which improves client caching)
19:26 Hamburglr so last night I tried adding a replicate node (on 3.6 then, I followed JoeJulian's upgrade suggestion) and the load went nuts healing. how do you guys add nodes when you need more without locking up clients trying to access the current nodes? I've been afraid to turn back on the new node and cause the same lock up
19:27 TealJax left #gluster
19:27 JoeJulian After I set all those settings, I haven't had a problem with it.
19:28 Hamburglr so set those even w/ 3.7?
19:28 JoeJulian I have.
19:28 Hamburglr would you mind linking again?
19:33 JoeJulian https://botbot.me/freenode/gluster/2016-03-09/?msg=61790336&page=1
19:34 Hamburglr thanks, I was searching my browser history for it
19:34 cliluw joined #gluster
19:34 deniszh joined #gluster
19:37 bennyturns joined #gluster
19:59 jiffin joined #gluster
19:59 drankis joined #gluster
20:01 haomaiwa_ joined #gluster
20:02 farhorizon joined #gluster
20:15 jakob___1 Anyone who can give a hint why I see a drastic (50%) performance drop when going from 2 sequential write threads/clients to 4 threads/clients? Target is a replicated volume (2 servers w. one raidset each).
20:16 jakob___1 Two clients get 500+MB each Four clients is ~140MB/each.
20:17 jakob___1 I have verified the raid/hardware locally with 8+ threads w. total bw of 1GB+.
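A hedged reconstruction of the kind of fio run described here (sequential writes, 1 MB block size, varying the number of jobs); the file size and target directory are arbitrary:

    fio --name=seqwrite --rw=write --bs=1M --size=4G --numjobs=4 \
        --directory=/mnt/glustervol --group_reporting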
20:40 Hamburglr jakob___1 : has the heal process finished?
20:43 Hamburglr oh wait you aren't adding a server nm
20:47 chirino joined #gluster
20:54 Merlin_ joined #gluster
20:55 ovaistariq joined #gluster
20:58 ovaistariq joined #gluster
20:58 farhoriz_ joined #gluster
21:01 haomaiwa_ joined #gluster
21:14 calavera joined #gluster
21:35 btpier joined #gluster
21:45 ovaistariq joined #gluster
21:50 madnexus joined #gluster
21:54 btpier joined #gluster
22:01 haomaiwa_ joined #gluster
22:04 btpier left #gluster
22:07 btpier joined #gluster
22:07 DV joined #gluster
22:07 Merlin_ joined #gluster
22:10 jakob___1 Hamburglr: Yes I think soo...... to be honest I'm not sure... but there is no disk io unless I do the benchmark.
22:12 Hamburglr jakob___1 sorry man I have no idea other then watch the logs for something weird, hopefully one of the pros in here will know better
22:13 deniszh joined #gluster
22:14 jakob___1 Ok, thanks anyway....  I noticed one thing when comparing a run from several dd (local) with fio (via gluster); avgrq-sz is 256 for gluster, but 512 w. dd... resulting in twice as many requests...
22:15 jakob___1 Happy if anyone has some input, will go to bed soon (night in my TZ now), but will check for any response tomorrow.
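For context, avgrq-sz comes from iostat's extended output and is reported in 512-byte sectors, so 512 corresponds to 256 KB per request and 256 to 128 KB; something like:

    iostat -x 1 /dev/sdX   # /dev/sdX is whichever device backs the brick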
22:18 gem joined #gluster
22:22 _Bryan_ joined #gluster
22:28 gem joined #gluster
22:35 drankis joined #gluster
22:43 ovaistariq joined #gluster
23:00 shyam joined #gluster
23:01 haomaiwang joined #gluster
23:04 chirino joined #gluster
23:04 rideh joined #gluster
23:04 msvbhat joined #gluster
23:10 mdavidson joined #gluster
23:11 harish_ joined #gluster
23:21 johnmilton joined #gluster
23:38 johnmilton joined #gluster
23:44 johnmilton joined #gluster
