IRC log for #gluster, 2014-08-07

All times shown according to UTC.

Time Nick Message
00:45 Peter4 is that normal to see glustershd.log quiet?
00:55 andreask joined #gluster
01:09 overclk joined #gluster
01:26 nishanth joined #gluster
01:35 itisravi joined #gluster
01:56 harish_ joined #gluster
02:08 msbrogli joined #gluster
02:09 msbrogli Hey. I’m new to glusterfs and my two nodes are out of sync. The client is continuously logging errors because of this and the heal command is not working. Can someone help me?
02:11 msbrogli Storage 1 has 91GB of data and Storage 2 has only 21GB. I’ve already tried to heal but it generates lot of errors at log. I already thought about rsyncing the storages but I don’t know if it is a good idea.
02:12 msbrogli I’m using gluster 3.4.4.
02:20 haomaiwa_ joined #gluster
02:28 haomaiw__ joined #gluster
02:31 JoeJulian msbrogli: Can you go to fpaste.org and paste your log there and share the link here?
02:32 msbrogli Sure. This is the most common error message at client log: http://fpaste.org/123749/14073787/
02:32 glusterbot Title: #123749 Fedora Project Pastebin (at fpaste.org)
02:32 JoeJulian Well that's not very useful.
02:33 JoeJulian Are your error logs not at info level, or is it just that useless?
02:33 msbrogli I don’t know much about gluster internals, but I already rsync’ed these directories and the error remains.
02:35 msbrogli JoeJulian: How can I change log level without umounting it?
02:37 JoeJulian msbrogli: Did you change it from default?
02:38 msbrogli It was at WARNING level. I changed it back to INFO and here is the new paste: http://fpaste.org/123750/79052140/
02:38 glusterbot Title: #123750 Fedora Project Pastebin (at fpaste.org)
02:38 JoeJulian Ah yes. "Skipping entry self-heal because of gfid absence" is your rsync
02:38 msbrogli gfid = gluster file id?
02:39 msbrogli Is it the identification of the file for the gluster?
02:39 JoeJulian yes. It's a consistent id that
02:39 msbrogli Do you recommend any reading so I can understand the internals?
02:39 JoeJulian that's used across replicas so the file can present a consistent inode.
02:39 msbrogli (Or the basics, lol)
02:39 JoeJulian @extended attributes
02:39 glusterbot JoeJulian: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
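For reference, on a brick (read directly on the server, not through the client mount) a healthy replicated file usually carries attributes like the ones below; the brick path is a placeholder:
    getfattr -m . -d -e hex /export/brick1/path/to/file
    # expect a trusted.gfid key plus trusted.afr.<volume>-client-N pending-count keys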
02:40 coredump joined #gluster
02:40 msbrogli Thanks!
02:40 msbrogli How can I go to fix this?
02:40 JoeJulian at this point, I would probably just wipe the one you rsync'ed to.
02:41 JoeJulian then "gluster heal $vol full"
02:42 msbrogli For sure storage1 has the right data. So should I wipe out storage2 data and heal it?
02:42 JoeJulian That would be simplest.
02:42 msbrogli Does it matter where I’m running the heal command? Or should I run it at the wiped out storage?
02:43 msbrogli Is it possible to generate a gfid for my files?
02:45 msbrogli I just checked that some of my files at storage1 don’t have the xattrs also.
02:46 itisravi joined #gluster
02:48 msbrogli How can I stop a healing?
02:53 JoeJulian stop all glusterd. pkill -f glustershd
02:54 JoeJulian no gfids on the source? did you write directly to the bricks?
02:58 msbrogli Yes. I’m migrating from NFS to gluster. So I created the gluster volume and copied the files there directly on the storage server. Now I see why it wasn’t a good idea.
02:58 msbrogli I just discovered that if I stat the file at the client, gluster automatically creates the gfid data.
02:59 msbrogli Is it the best way to do it? I’m running a `find . -noleaf -print0 | xargs --null stat >/dev/null`
02:59 JoeJulian That should work if one side has no gfid. Not a good idea if both sides don't.
02:59 JoeJulian Could lead to mismatched gfids.
03:00 jtran joined #gluster
03:00 msbrogli I don’t know another way to go.
03:00 msbrogli What is a better way to go?
03:01 JoeJulian Wipe the second brick, then do a find from a fuse client.
03:02 msbrogli Should I disconnect the second brick first? Or just wipe it while still connected?
03:03 msbrogli Although it works, the find is a very slow solution. Is there a faster way?
03:03 JoeJulian I was thinking about that too... sure. Stop glusterd, kill glustershd, then wipe. Be sure not to remove the extended attributes on the brick root, or if you do, be sure to replace the volume-id.
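Pulling JoeJulian's advice together, a rough sketch of the wipe-and-resync on the bad replica might look like this; the brick path /export/brick1 and the volume name myvol are placeholders:
    # on the server whose brick is being wiped
    service glusterd stop
    pkill -f glustershd
    # note the brick root's volume-id so it can be restored if lost
    getfattr -n trusted.glusterfs.volume-id -e hex /export/brick1
    # clear the brick contents, including the hidden .glusterfs directory
    rm -rf /export/brick1/* /export/brick1/.glusterfs
    # only if the xattr was removed: put the noted value back
    setfattr -n trusted.glusterfs.volume-id -v 0x<hex-from-above> /export/brick1
    service glusterd start
    # then, from any server in the pool
    gluster volume heal myvol full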
03:03 JoeJulian Find from multiple clients?
03:04 msbrogli No, from a single client.
03:04 JoeJulian my suggestion, multiple clients.
03:04 msbrogli Ah, ok! :)
03:04 JoeJulian Though at some point you're going to be network bound.
03:05 msbrogli It is generating several GB of logs.. probably it is related to the performance issues.
03:05 msbrogli Is disabling the log a good idea to improve performance?
03:06 JoeJulian Doesn't usually make any difference, from my testing.
03:06 msbrogli Ok! :-(
03:07 jaytee Silly question. If I write to one of my bricks in a replication setup, isn't that supposed to replicate to the client and the other brick?
03:08 msbrogli jaytee: as far as I understand, no. Something has to trigger this replication and it usually happens at the clients.
03:08 jaytee In that case, writes need to be done on the clients then, yeah?
03:09 jaytee o. That's what you said
03:09 msbrogli jaytee: I’m new to gluster, but in my experience, yes.
03:10 msbrogli A heal wouldn’t help either, because your files don’t have a gfid (I just learned that and it could be misinformation, lol)
03:10 jaytee Thanks, msbrogli
03:10 JoeJulian Right, writing to the brick is a no-no.
03:11 JoeJulian It would be like writing data to sectors on your hard drive hoping it'll just show up in xfs or ext4.
03:11 jaytee Gotcha, JoeJulian. I just couldn't find the answer
03:11 JoeJulian I know it's in the documentation somewhere, but it's often overlooked.
03:12 msbrogli jaytee: one idea is to mount the volume with the client on your storage server (same machine) and write there
03:13 msbrogli I’m using gluster in other volumes. They are fine but I can notice some performance issues. Some `ls` aren’t as fast as before. Is there any configuration to improve it? I also checked some performance decrease when dealing with small files.
03:14 jaytee Thanks again JoeJulian, msbrogli
03:15 msbrogli jaytee: you’re welcome.
03:15 JoeJulian msbrogli: make sure you're kernel is new enough that it supports readdirplus.
03:16 JoeJulian "you're" ... seriously Joe?
03:16 msbrogli How can I check it? I’m on 3.11.0-15
03:16 JoeJulian That should be fine.
03:16 msbrogli 3.8.0-15 in storage2
03:17 JoeJulian I'm not sure when it was added. Only the client should matter though.
03:17 JoeJulian It's a FUSE thing.
03:17 msbrogli Among the clients, the oldest one is 3.2.0-58.
03:19 jaytee What's a good way to sync uid and gids ?
03:19 msbrogli uid and gid?
03:19 msbrogli I didn’t get it.
03:19 jaytee userid and groupid
03:19 JoeJulian Use ldap
03:19 msbrogli I know it. But what do you mean by sync?
03:20 JoeJulian ubuntu doesn't create consistent uids and pids for its service users.
03:20 JoeJulian er, gits
03:20 JoeJulian gah
03:20 JoeJulian gids
03:20 jaytee JoeJulian. Was hoping for something else besides ldap. lol oh well
03:20 JoeJulian puppet
03:20 JoeJulian salt
03:21 JoeJulian If your users exist before the package is installed, the package will use that id.
03:21 JoeJulian So make sure your configuration management creates the user and group before installing the package.
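A minimal sketch of that ordering, with a made-up service account "svcacct" and uid/gid 2000:
    groupadd -g 2000 svcacct
    useradd -u 2000 -g 2000 -r -s /sbin/nologin svcacct
    # install the package afterwards; it will reuse the existing ids
    yum install -y some-package   # or apt-get install some-package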
03:23 msbrogli jaytee: I use fabric scripts to configure all servers.
03:24 msbrogli I think it is a good choice because I can develop in python and it is very easy to use.
03:24 jaytee Never heard of that before. I'll have to check it out
03:25 msbrogli http://www.fabfile.org/en/latest/
03:25 glusterbot Title: Welcome to Fabric! Fabric documentation (at www.fabfile.org)
03:25 msbrogli I never used puppet or any other configuration tool, so I can’t compare the solutions. I manage about 30 servers and fabric is enough for me.
03:26 JoeJulian The down side of that is you then have to program for all your states. The purpose of configuration management is to simply define the end-state and let the software get you there.
03:27 msbrogli JoeJulian: It’s in my todo list to try any of them, but I’ve never done it.
03:27 msbrogli Do you recommend any of them?
03:28 rejy joined #gluster
03:28 JoeJulian imho, don't bother with chef. I've done puppet and chef so far and puppet is far more complete. I've had a little taste of Salt lately (pun intended) and so far I like it a lot. Also written in python.
03:31 bharata-rao joined #gluster
03:31 msbrogli Then, I’ll take a look at Salt. Normally we contribute to the project we choose to use. We have minor contributions at OpenStack, NFS, django, pyodbc, …
03:31 msbrogli And it is so fun to develop in python… :)
03:32 JoeJulian It is, isn't it. :D
03:32 shubhendu__ joined #gluster
03:33 ACiDGRiM joined #gluster
03:35 sputnik13 joined #gluster
03:36 sputnik13 joined #gluster
03:36 ACiDGRiM Hey all, if anyone has a moment to help me with a concern I have about my gluster volume in a personal lab I'm running. I used split mount to repair some splitbrain issues and I found one replica is larger than the other
03:36 ACiDGRiM /tmp/splith/tmpGcjTSg      4.1T  2.6T  1.6T  62% /tmp/splith/r1
03:36 ACiDGRiM /tmp/splith/tmpClCnXq      5.5T  2.5T  3.0T  46% /tmp/splith/r2
03:37 ACiDGRiM But the volumes in volume status match the capacity of the larger volume
03:38 ACiDGRiM I did a rebalance fix-layout, and I'm trying a full rebalance right now too
03:40 msbrogli Have you done a diff between the directories to find out what is different?
03:41 ACiDGRiM I actually rm -rf'd the entire /tmp/splith/r2 and then sent through a full heal
03:42 ACiDGRiM but maybe the full heal didn't complete, since r1 has 2.6TB used and r2 has 2.5TB used
03:42 ACiDGRiM but I'm more concerned about the capacity, 4.1TB vs 5.5TB
03:42 ACiDGRiM both systems are 3 2TB disks split into 13 144GB LVM volumes
03:42 ACiDGRiM and all bricks appear online
03:43 msbrogli So you’re supposed to have 6T in each, right?
03:43 JoeJulian Any chance one is sparse and the other isn't?
03:43 ACiDGRiM approx, 144GBx39=5.6TB
03:44 ACiDGRiM sparse? can you elaborate, not sure what you mean in this context
03:44 JoeJulian @lucky sparse files
03:44 glusterbot JoeJulian: http://en.wikipedia.org/wiki/Sparse_file
03:45 ACiDGRiM would the sparse files affect used space, rather than capacity?
03:46 ACiDGRiM and to clarify, each 144GB LVM volume is a brick
03:46 msbrogli ACiDGRiM: I’m new to gluster. I’m still using raid10 under my two gluster bricks. Is it worth removing the raid10 and adding each hard disk as a brick in gluster? I could set the replication count to 3 and it sounds even better than raid10.
03:46 JoeJulian Heh, I hadn't noticed that.
03:47 JoeJulian replication in clustered systems != raid
03:47 ACiDGRiM msbrogli: I currently do replica 2 as a distributed raid 1
03:47 JoeJulian You're not getting multi-spindle performance with replication.
03:47 ACiDGRiM well fake raid
03:47 JoeJulian replication is for fault tolerance.
03:48 JoeJulian It can be for performance but only under specialized use cases.
03:48 msbrogli JoeJulian: Yes, but I’m thinking about replication with stripe.
03:48 kshlm joined #gluster
03:48 JoeJulian @stripe
03:48 glusterbot JoeJulian: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
03:48 JoeJulian usually the wrong use case.
03:48 itisravi joined #gluster
03:49 msbrogli Thanks! I’ll read it!!
03:52 msbrogli Another thing I’m thinking to do is to run our virtual servers from the storage instead of in the local disk of the nodes. I was thinking about using gluster to mount the volumes locally. Do you have any experience doing it?
03:53 msbrogli My main concern is about performance. But I haven’t tested it yet.
03:53 ACiDGRiM gluster can be very fast with the right archetecture
03:54 ACiDGRiM *architecture
03:54 msbrogli What do you mean by “the right architecture”?
03:54 ACiDGRiM more nodes, better performance
03:55 ACiDGRiM and with replication, specifically
03:55 ThatGraemeGuy joined #gluster
03:56 msbrogli Why did you split your harddrives in so many LV in LVM and added each as a brick?
03:56 msbrogli I don’t see why including more bricks would improve performance.
03:57 ACiDGRiM no, I did this because I'm in an unstable environment and a 2TB fsck takes a long time
03:57 ACiDGRiM ideally, more nodes = more systems
03:58 msbrogli ah, but unstable in what sense? Because if any of your harddrives fails then you would need to fsck 2TB, right?
03:58 ACiDGRiM I found out this makes heal and rebalance operations very slow, which is why this is a personal lab
03:59 shubhendu__ joined #gluster
03:59 ACiDGRiM unstable because I'm in a home environment with residential power, so a powerfailure could be a big problem
04:00 ACiDGRiM it's really quick to fsck 144GB
04:00 ACiDGRiM 144 was just an even number I could split 2TB with minimal free space
04:00 ACiDGRiM *wasted space
04:01 ACiDGRiM Ideally I will have a UPS that can handle my system long enough to shut down, and then I'll be more inclined to use the full disk as a brick
04:03 ACiDGRiM In the datacenter environment I'm educating myself for, this won't be an issue of course.
04:03 kanagaraj joined #gluster
04:03 ACiDGRiM But for performance reasons, the more replica nodes you have the faster your READ will be
04:04 ACiDGRiM WRITE will be slower because the gluster client has to write to each replica, if I understand documentation correctly
04:05 ACiDGRiM Any ideas on why the disparity between capacities?
04:06 msbrogli JoeJulian: So it seems better to pair the hard disks into raid0 devices and leave the mirroring to gluster. Is that a good way to use it?
04:06 JoeJulian Was off tucking in my daughter.
04:06 JoeJulian Sure, that's a valid way.
04:07 msbrogli ACiDGRiM: Interesting environment. I don’t see why the performance would improve on reading, since the replica-count is 2, it is only able to read from two servers at the same time. Isn’t it?
04:07 JoeJulian Do you know what your goals are?
04:09 ACiDGRiM @msbrogli, yes it would be only 2, but while server1 is busy, server2 can answer another request
04:09 JoeJulian know your goals. design a system to meet them. predict performance. test.
04:10 bala joined #gluster
04:10 ThatGraemeGuy joined #gluster
04:12 msbrogli JoeJulian: Yes. We are a small company and we have only three nodes and two storages. We would like to be completely redundant, so we can turn off any storage and no server would notice that. We also have daily data snapshot which we call offline backups.
04:13 rjoseph joined #gluster
04:13 msbrogli The maintenance also includes harddisk failure and replacement.
04:15 ACiDGRiM I can attest to the fault tolerance of gluster, I was attracted to it specifically due to the unstable nature of my environment. Before my recent tinkering I had 3 months of 100% storage availability
04:16 msbrogli Nice!
04:17 ACiDGRiM I also looked at DRBD
04:17 JoeJulian I've had about a grand total of 30 minutes downtime over 5 years.
04:17 JoeJulian I wouldn't be where I am today if it weren't for DRBD.
04:17 msbrogli We still have our biggest partition to migrate to gluster. It is almost 2TB and I’m still in doubt about the best way to do it.
04:17 msbrogli JoeJulian: What do you mean?
04:18 msbrogli We are using DRBD to sync our backups and some LVM partitions (LVM over DRBD).
04:18 JoeJulian If it hadn't destroyed my data so completely, I never would have developed the goals that led me to Gluster.
04:18 msbrogli Sad thing to hear. I can’t imagine what it would be like to completely lose my data.
04:20 ACiDGRiM The fact that it is file aware and not a dumb bit for bit copy is what made me choose gluster, specifically due to the possibility of hosting it from a solaris system with zfs
04:20 JoeJulian I realized that I'm at the pinnacle of my field, so far, all due to that failure while I was talking with the DRBD sales guy at OSCON. He said he would share my story with his VP of sales.
04:21 msbrogli JoeJulian: Have you tried using another distributed fs? Like OCFS or GFS or HDFS?
04:21 JoeJulian No, none of them meet my goals, one of which is no single points of failure.
04:22 msbrogli Do they have SPF?
04:22 JoeJulian They're also not supposed to be horrible. ;)
04:22 ACiDGRiM I've used GFS; I was attracted by the redundancy possibility, but then I did a doh when the shared disk array failed
04:23 JoeJulian Single-store multi-frontend... yeah.
04:23 msbrogli Hahaha.
04:23 ACiDGRiM fortunately, homelab
04:23 msbrogli Thanks God!
04:24 ACiDGRiM My biggest bullet on my resume is I do R&D at home
04:24 msbrogli What about your databases? How do you handle the data partition? Is it on gluster?
04:24 msbrogli As far as I read, it doesn’t seem to be a very good choice due to performance reasons.
04:24 JoeJulian I've been hosting mysql innodb engine data on a gluster volume for years.
04:26 JoeJulian I did a partial talk last year at linuxcon where I sharded innodb across distribute subvolumes and got double the performance of the same data on a single (virtual) disk.
04:26 ACiDGRiM in a non striped replica, will one large file always be served by one node? or will the gluster client pull from different nodes at the same time?
04:26 msbrogli Hum, I’ll try it. Nowadays it is in a LVM partition exported through iSCSI. We also have daily backups and hourly backups of the binary log.
04:27 JoeJulian ACiDGRiM: first to respond.
04:27 ACiDGRiM That's what I though
04:27 ACiDGRiM t
04:27 JoeJulian Typically that'll be one server since it'll already have that data in cache.
04:27 msbrogli JoeJulian: Is it the default configuration?
04:28 JoeJulian first-to-respond is the default. It will read from the local replica if it's mounted on a server though.
04:28 JoeJulian There are other options. See gluster volume set help
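For example (myvol is a placeholder; option names are as listed by "gluster volume set help"):
    gluster volume set help | grep -i read
    # e.g. pin reads to one replica subvolume, here the first one
    gluster volume set myvol cluster.read-subvolume myvol-client-0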
04:28 msbrogli You’re making me a fan of gluster. :) I’m just a new user in this land, but I hope to send some patch during the next year.
04:29 marbu joined #gluster
04:29 msbrogli Do you use any quorum configuration?
04:29 morse_ joined #gluster
04:29 JoeJulian You're in the best place to do that now. As you're going through the documentation learning stuff, if you find errors or deficiencies this is the best time to find and patch them.
04:30 mkzero joined #gluster
04:30 dockbram_ joined #gluster
04:30 lava_ joined #gluster
04:30 gomikemi1e joined #gluster
04:30 siel_ joined #gluster
04:30 Andreas-IPO_ joined #gluster
04:30 mjrosenb_ joined #gluster
04:30 nishanth joined #gluster
04:30 Ramereth|home joined #gluster
04:31 ndevos_ joined #gluster
04:31 ndevos_ joined #gluster
04:31 msbrogli How can I send the community my changes? I’m used to clone a repo, change it and send the patches.
04:31 tom][ joined #gluster
04:31 JoeJulian @hack
04:31 glusterbot JoeJulian: The Development Work Flow is at http://www.gluster.org/community/documentation/index.php/Development_Work_Flow
04:34 muhh joined #gluster
04:34 ThatGraemeGuy_ joined #gluster
04:34 stickyboy_ joined #gluster
04:34 atoponce joined #gluster
04:34 _weykent joined #gluster
04:34 msbrogli Is the “Getting Started” doc in the repo?
04:34 rturk|af` joined #gluster
04:35 stickyboy joined #gluster
04:35 Rafi_kc joined #gluster
04:36 prasanth_ joined #gluster
04:36 anoopcs joined #gluster
04:37 [o__o] joined #gluster
04:38 msbrogli In one of my clients, glusterfs process is consuming 1.8G of RAM. How can I check what is happening?
04:40 dusmant joined #gluster
04:41 cmtime joined #gluster
04:41 avati joined #gluster
04:41 mdavidson joined #gluster
04:43 rejy joined #gluster
04:43 kkeithley joined #gluster
04:44 tyrok_laptop joined #gluster
04:46 JoeJulian valgrind
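One way to act on that suggestion, assuming a volume myvol served from server1 and a scratch mount point (all placeholders), is to run the fuse client in the foreground under valgrind:
    umount /mnt/myvol 2>/dev/null
    valgrind --leak-check=full --log-file=/tmp/glusterfs-valgrind.log \
        glusterfs -N --volfile-server=server1 --volfile-id=myvol /mnt/myvol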
04:47 anands joined #gluster
04:48 harish_ joined #gluster
04:52 ramteid joined #gluster
04:56 spandit joined #gluster
05:00 meghanam joined #gluster
05:00 meghanam_ joined #gluster
05:01 VerboEse joined #gluster
05:03 lalatenduM joined #gluster
05:04 hagarth joined #gluster
05:04 jiffin joined #gluster
05:05 anoopcs joined #gluster
05:13 msbrogli What steps do you recommend to migrate a 2TB from NFS to Gluster?
05:18 gildub joined #gluster
05:19 sahina joined #gluster
05:20 karnan joined #gluster
05:20 glusterbot New news from resolvedglusterbugs: [Bug 764655] NetBSD port <https://bugzilla.redhat.com/show_bug.cgi?id=764655>
05:21 sputnik13 joined #gluster
05:24 kdhananjay joined #gluster
05:33 twx joined #gluster
05:34 gildub joined #gluster
05:35 ppai joined #gluster
05:39 ababu joined #gluster
05:39 sputnik13 joined #gluster
05:47 raghu joined #gluster
05:50 sputnik13 joined #gluster
05:51 overclk joined #gluster
05:53 sahina joined #gluster
05:53 karnan joined #gluster
05:53 nishanth joined #gluster
05:54 shubhendu__ joined #gluster
05:54 gildub joined #gluster
05:54 sputnik13 joined #gluster
05:59 nickmoeck joined #gluster
06:01 nshaikh joined #gluster
06:02 LebedevRI joined #gluster
06:17 ws2k3 joined #gluster
06:18 meghanam joined #gluster
06:19 meghanam_ joined #gluster
06:28 aravindavk joined #gluster
06:36 rastar joined #gluster
06:37 kshlm joined #gluster
06:42 meghanam joined #gluster
06:43 meghanam_ joined #gluster
06:56 TvL2386 joined #gluster
06:59 ctria joined #gluster
07:04 sijis joined #gluster
07:04 sijis joined #gluster
07:04 marbu joined #gluster
07:05 nbalachandran joined #gluster
07:06 deepakcs joined #gluster
07:07 sahina joined #gluster
07:07 ababu joined #gluster
07:07 ACiDGRiM so I found the answer to my original problem, IP tables was blocking access to the needed ports of the new storage
07:07 shubhendu__ joined #gluster
07:08 nishanth joined #gluster
07:08 capri joined #gluster
07:08 ekuric joined #gluster
07:12 keytab joined #gluster
07:13 karnan joined #gluster
07:17 Bardack joined #gluster
07:21 deepakcs joined #gluster
07:21 rjoseph joined #gluster
07:25 aravindavk joined #gluster
07:31 haomaiwa_ joined #gluster
07:31 kshlm joined #gluster
07:37 shylesh__ joined #gluster
07:37 ws2k3 joined #gluster
07:46 haomaiw__ joined #gluster
07:49 ws2k33 joined #gluster
07:55 rastar joined #gluster
07:56 aravindavk joined #gluster
07:56 nbalachandran joined #gluster
07:57 rjoseph joined #gluster
08:03 bharata-rao joined #gluster
08:08 haomaiwa_ joined #gluster
08:13 ndarshan joined #gluster
08:13 ThatGraemeGuy joined #gluster
08:29 ACiDGRiM To migrate from NFS to Gluster, you're going to want to duplicate your storage. I had the luxury of having a raid 1 system, so I broke it, built gluster on one half, and then expanded the replica to the other half of the raid array.
08:29 ACiDGRiM In a production environment I would have my original NFS array, and then build a gluster array (maybe even using the same system, just new disks) and then simply cp -a /mnt/NFS/* /mnt/glusterfs/ but depending on importance of data I might want to use something to hash each file to verify
08:29 ACiDGRiM be aware that since gluster is in userspace when you mount, you can't reexport glusterfs mount via NFS without stability issues. I resorted to CIFS, however gluster has NFS built in, but it didn't meet my needs particularly
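A rough sketch of that copy-and-verify approach, with /mnt/nfs and /mnt/glusterfs as placeholder mount points:
    cp -a /mnt/nfs/. /mnt/glusterfs/
    # checksum pass: rsync -c recopies anything whose content differs
    rsync -avc /mnt/nfs/ /mnt/glusterfs/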
08:31 Pupeno joined #gluster
08:45 glusterbot New news from newglusterbugs: [Bug 1120646] rfc.sh transfers patches with whitespace problems without warning <https://bugzilla.redhat.com/show_bug.cgi?id=1120646>
08:53 haomai___ joined #gluster
08:57 hagarth joined #gluster
08:58 vimal joined #gluster
08:59 Slashman joined #gluster
09:06 mrspastic joined #gluster
09:07 mrspastic i al trying to setup a geo-replicated volume and it is failing on creating the keypair
09:07 mrspastic when i run 'gluster system:: execute gsec_create'  i get the following error
09:08 mrspastic gsec_create not found.
09:08 mrspastic does anyone have any idea why it is not working? i am using gluster 3.5
09:09 vimal mrspastic, have you installed the geo-replication packages
09:09 hagarth joined #gluster
09:20 sputnik13 joined #gluster
09:33 sputnik13 joined #gluster
09:35 mrspastic i was unaware there were special packages for geo-replication
09:38 mrspastic yes it was just a package, user error
09:38 mrspastic ran 'yum install glusterfs-geo-replication'
09:39 mrspastic and then it ran 'gluster system:: execute gsec_create'
09:39 mrspastic and it created the keypair correctly
09:39 mrspastic thank you
09:39 cmtime joined #gluster
09:40 vimal mrspastic, you're welcome
09:41 spandit joined #gluster
09:42 mrspastic have a good day vimal, thanks again. good bye
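For reference, the 3.5 geo-replication bring-up mrspastic walked through looks roughly like this; MASTERVOL, slavehost and SLAVEVOL are placeholders:
    yum install -y glusterfs-geo-replication
    gluster system:: execute gsec_create
    gluster volume geo-replication MASTERVOL slavehost::SLAVEVOL create push-pem
    gluster volume geo-replication MASTERVOL slavehost::SLAVEVOL start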
09:43 fim joined #gluster
09:50 bharata-rao joined #gluster
09:57 elico joined #gluster
09:58 cmtime joined #gluster
10:00 keytab joined #gluster
10:01 qdk joined #gluster
10:02 giannello joined #gluster
10:06 Slashman joined #gluster
10:15 glusterbot New news from newglusterbugs: [Bug 1127653] Memory leaks of xdata on some fops of protocol/server <https://bugzilla.redhat.com/show_bug.cgi?id=1127653>
10:19 cmtime joined #gluster
10:25 ricky-ticky joined #gluster
10:48 hagarth joined #gluster
10:53 kkeithley1 joined #gluster
10:55 harish__ joined #gluster
11:02 haomaiwa_ joined #gluster
11:29 ira joined #gluster
11:30 prasanth_ joined #gluster
11:35 edward1 joined #gluster
11:35 msbrogli joined #gluster
11:46 raghug joined #gluster
11:46 raghug bfoster: ping
11:46 glusterbot raghug: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
11:49 prasanth_ joined #gluster
12:14 ppai joined #gluster
12:18 [ilin] joined #gluster
12:19 rsavage_ joined #gluster
12:19 rsavage_ morning
12:20 lalatenduM joined #gluster
12:20 [ilin] hi, if i have 4 servers with replica 2 is it possible to remove/replace two of them without losing data? once scenario would be schrinking the cluster and another would be replacing/upgrading the servers?
12:22 _Bryan_ joined #gluster
12:26 chirino joined #gluster
12:39 sjm joined #gluster
12:49 lalatenduM joined #gluster
12:51 hagarth joined #gluster
12:53 skippy joined #gluster
12:54 ninthBit joined #gluster
12:57 julim joined #gluster
13:02 bene2 joined #gluster
13:03 bennyturns joined #gluster
13:04 ppai joined #gluster
13:08 ninthBit What time zone does this channel really start picking up?
13:08 ninthBit er... what UTC time does this channel start picking up lol?
13:09 sickness "what UTC" is a non sense ;) UTC is the same time all over the world ;)
13:10 msbrogli joined #gluster
13:10 ninthBit sickness: lol yes, that is what i was asking. then i convert that to my time zone lol
13:10 hagarth joined #gluster
13:11 sickness =_)
13:14 ninthBit what might be a cause and possible solution to this... i have a gluster peer that mounts a volume using gluster fuse.  at some point the mount becomes "corrupt"? when I ls in the parent directory of the mount point I get this.  "ls cannot access gluster_gv0: transport endpoint is not connected" then the glusterfs_gv0 has all ???????? for the metadata.  I need to confirm that the peer really can't see the other peers. i have checked
13:14 ninthBit the other peers are OK and the volume is ok. and i think heal is still working. just this server goes to crap when the mount point goes into this state.
13:15 ninthBit i have checked and all servers are running the same gluster version and same os.
13:15 ninthBit now, it could be a case of "bad server" but... i want to make sure
13:15 ninthBit i am in the process of setting up two replacement servers and have not had much time to fully dig into all the logs.
13:18 Pupeno joined #gluster
13:18 tdasilva joined #gluster
13:27 ninthBit semiosis: i think you have made a new release for 3.4 as i see there is a new package to install..... here goes the update
13:30 julim_ joined #gluster
13:34 mojibake joined #gluster
13:40 sputnik13 joined #gluster
13:40 firemanxbr joined #gluster
13:43 msbrogli joined #gluster
13:45 daxatlas joined #gluster
13:47 _Bryan_ joined #gluster
13:50 mojibake I often see Glusterbot referring to "chapters" in documentation. However looking at http://gluster.org/documentation/howto/HowTo/ I do not see anything laid out in a chapter format.
13:50 glusterbot Title: HowTo Gluster (at gluster.org)
13:50 ninthBit after checking the issues fixed in 3.4.5 it looks hopeful that my issues are among those fixed. fingers crossed...
13:51 XpineX joined #gluster
13:52 ninthBit mojibake: i have no affiliation with the gluster teams..... i agree the documentation needs some work :)
13:53 qdk joined #gluster
13:55 msbrogli joined #gluster
13:59 rwheeler joined #gluster
14:04 mojibake ninthBit: Thank you. I have seen your handle a few times while lurching.
14:04 mojibake lurking.
14:04 mojibake I am looking to get things right before getting in too deep and going to production.
14:05 mojibake Looking to make use of glusterfs at AWS for a central file system for some webservers.
14:05 ninthBit mojibake: ah, that is exactly what i am working on right now
14:06 mojibake from lurking on here for last couple weeks, looks like the native gluster client mount may not be good for small read/write like php.. and using NFS mounts on the client is the way to go. Correct/incorrect statement?
14:07 mojibake Autoscaling at AWS is nice, but we're not going to be able to get our users to commit code to a repo, so a central file system is the way we would like to go.
14:10 sjm joined #gluster
14:12 wushudoin joined #gluster
14:12 cvdyoung left #gluster
14:15 raghug joined #gluster
14:17 UnwashedMeme joined #gluster
14:18 plarsen joined #gluster
14:24 diegows joined #gluster
14:52 ctria joined #gluster
14:56 deepakcs joined #gluster
15:00 gmcwhistler joined #gluster
15:02 sputnik13 joined #gluster
15:11 msbrogli joined #gluster
15:15 raghug joined #gluster
15:24 sjm joined #gluster
15:30 anands left #gluster
15:30 lalatenduM joined #gluster
15:32 nbalachandran joined #gluster
15:34 semiosis ninthBit: yes 3.4.5
15:35 semiosis ninthBit: check your client log to see why the client stopped working.  put the log on pastie.org & give the link here
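For a mount at, say, /mnt/gv0 (placeholder), the fuse client log is named after the mount point, and a stale "transport endpoint is not connected" mount can usually be cleared with a lazy unmount and remount once the cause has been noted:
    less /var/log/glusterfs/mnt-gv0.log
    umount -l /mnt/gv0
    mount -t glusterfs server1:/gv0 /mnt/gv0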
15:36 marbu joined #gluster
15:36 bit4man joined #gluster
15:36 ninthBit semiosis: when i get this next server up and going i'll start digging into the logs. thank you for pushing the new update.
15:36 sjm left #gluster
15:36 semiosis yw
15:37 semiosis mojibake: see ,,(php) -- it can be made to perform well over a fuse client.  beware with nfs clients that you have no HA, which you can work around
15:37 glusterbot mojibake: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH
15:37 glusterbot --fopen-keep-cache
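Spelled out as a mount command, those fuse options look roughly like this; server1, myvol, the mount point and the timeout values are all illustrative:
    glusterfs --volfile-server=server1 --volfile-id=myvol \
        --attribute-timeout=600 --entry-timeout=600 --negative-timeout=600 \
        --fopen-keep-cache /var/www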
15:39 ekuric left #gluster
15:39 semiosis mojibake: i'd also suggest running a storage cluster and a separate web server cluster (which autoscales, of course) that mounts the gluster volume
15:39 semiosis running storage & web on the same systems has some drawbacks
15:40 ctria joined #gluster
15:41 elico joined #gluster
15:42 mbukatov joined #gluster
15:43 eodchop joined #gluster
15:47 eodchop Hey guys. I lost a server in my 8 node gluster cluster, and had to rebuild it. It had 4 volumes on it, distributed-replicated. I am trying to get it back into the cluster and I am having some issues. This is 3.5.1. http://pastebin.com/rVi8jEGc
15:47 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
15:52 mbukatov joined #gluster
15:54 rsavage_ hey quick question: how can you convert a distributed gluster volume into a replicated gluster volume?
15:54 semiosis add-brick replica 2 <new-replica-brick>
15:55 semiosis iirc
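For instance, converting a two-brick distributed volume to replica 2 means supplying one new brick per existing brick in a single add-brick call (volume and brick names are placeholders):
    gluster volume add-brick myvol replica 2 server3:/export/brick1 server4:/export/brick1
    gluster volume heal myvol full   # populate the new replica bricks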
15:57 semiosis eodchop: not enough info in that paste to diagnose.  first check to make sure that server is reachable by the hostname, then check the logs on that server, pastie them if you want.
15:57 semiosis check the glusterd log, etc-glusterfs-glusterd.log, and also the brick log
15:58 [ilin] on which ports do two gluster server replicate on?
15:58 rsavage_ semiosis: was your add-brick replica 2 <new-replica-brick> addressed to me?
15:58 semiosis rsavage_: yes
15:59 rsavage_ semiosis: hmm ok, I didn't know what would actually change an existing distributed volume to replica, but I will try it
15:59 semiosis [ilin]: see ,,(ports)
15:59 rsavage_ thanks
15:59 glusterbot I do not know about 'ilin', but I do know about these similar topics: 'mailing list', 'mailing lists', 'mailinglist', 'mailinglists' : glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS
15:59 glusterbot also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
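As an iptables sketch for 3.4+ (widen the brick-port range to cover however many bricks each server hosts):
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT            # glusterd management
    iptables -A INPUT -p tcp --dport 49152:49200 -j ACCEPT            # bricks, one port each
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT            # gluster NFS / NLM
    iptables -A INPUT -p tcp -m multiport --dports 111,2049 -j ACCEPT # portmap and NFS
    iptables -A INPUT -p udp --dport 111 -j ACCEPT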
15:59 semiosis [ilin]: all data/metadata ops go to the bricks
16:00 [ilin] semiosis: so bricks will use 24009+? one per brick
16:01 semiosis since version 3.4 it's 49152 & up
16:01 [ilin] semiosis: and it is one per brick?
16:02 rsavage_ semiosis: add-brick replica 2 <new-replica-brick>, are you sure this is the right command?  I don't want to add yet another brick, I want to convert
16:02 semiosis yes
16:02 semiosis rsavage_: well then ,,(rtfm) :)
16:02 glusterbot rsavage_: Read the fairly-adequate manual at http://gluster.org/community/documentation//index.php/Main_Page
16:02 [ilin] semiosis: thank you
16:02 semiosis rsavage_: or better yet, try it out on a test env
16:03 rsavage_ glusterbot well I was trying to read some documents but pipermail is borked, thank you.
16:03 fsimonce joined #gluster
16:33 Peter2 joined #gluster
16:33 Peter2 glustershd.log seems a lot quiet after upgraded to 3.5.2
16:33 Peter2 is this normal? :)
16:34 sputnik13 joined #gluster
16:37 mojibake glusterbot: Thank you for the advice on php, and NFS, and the blog post.
16:38 mojibake semiosis: Do plan on the gluster being it's own cluster.
16:38 semiosis glusterbot: thx
16:38 glusterbot semiosis: you're welcome
16:38 semiosis mojibake: glusterbot needs simple thx
16:43 mojibake glusterbot: thx
16:43 glusterbot mojibake: you're welcome
16:44 kumar joined #gluster
16:48 dtrainor joined #gluster
16:49 sjm joined #gluster
16:52 mkzero joined #gluster
16:59 m0zes joined #gluster
17:00 Peter2 still getting these after upgrade to 3.5.2
17:00 Peter2 http://pastie.org/9453548
17:00 glusterbot Title: #9453548 - Pastie (at pastie.org)
17:00 semiosis Peter2: in channel because if I can't help, someone else may
17:00 semiosis and idk what to say about this issue :(
17:00 Peter2 yup totally
17:03 sputnik13 joined #gluster
17:07 mortuar joined #gluster
17:15 sjm left #gluster
17:25 sputnik13 joined #gluster
17:30 MacWinner joined #gluster
17:32 sputnik13 joined #gluster
17:32 sputnik13 joined #gluster
17:46 mortuar joined #gluster
17:47 sputnik13 joined #gluster
17:57 rastar joined #gluster
18:02 msbrogli joined #gluster
18:03 sputnik13 joined #gluster
18:06 ramteid joined #gluster
18:10 msbrogli joined #gluster
18:12 Pavid7 joined #gluster
18:13 daxatlas joined #gluster
18:15 Pavid7 joined #gluster
18:15 Pavid7 joined #gluster
18:16 msbrogli Can any one help me with this error: [2014-08-07 18:11:40.414741] E [iobuf.c:828:iobref_unref] (-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(call_resume+0x303) [0x7fd72882f563] (-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(default_readv_resume+0x152) [0x7fd7288177a2] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.4.4/xlator/performance/quick-read.so(qr_readv+0x62) [0x7fd722a982e2]))) 0-iobuf: invalid argument: iobref ?
18:16 glusterbot msbrogli: ('s karma is now -14
18:16 glusterbot msbrogli: ('s karma is now -15
18:16 glusterbot msbrogli: ('s karma is now -16
18:20 kkeithley_ FYI, dpkgs of 3.4.5 and 3.5.2 for Debian Wheezy are now available from http://download.gluster.org/pub/gluster/glusterfs/
18:20 glusterbot Title: Index of /pub/gluster/glusterfs (at download.gluster.org)
18:27 msbrogli joined #gluster
18:28 sputnik13 joined #gluster
18:29 ira joined #gluster
18:29 mortuar joined #gluster
18:30 bit4man joined #gluster
18:33 qdk joined #gluster
18:35 sputnik13 joined #gluster
18:53 msbrogli joined #gluster
18:57 _zerick_ joined #gluster
19:03 sputnik13 joined #gluster
19:03 mortuar joined #gluster
19:03 sputnik13 joined #gluster
19:19 kumar joined #gluster
19:21 sschultz joined #gluster
19:21 sschultz hello
19:21 glusterbot sschultz: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
19:21 sschultz Having trouble getting the samba vfs working on CentOS 6.5
19:23 sschultz I made sure to set server.allow-insecure on and also option rpc-auth-allow-insecure on in the glusterd.vol file
19:24 MattAtL joined #gluster
19:26 msbrogli joined #gluster
19:28 MattAtL Hi, I've a problem with geo-replication. I have a two node full replica, and then geo-replication to a third.  The first two work OK, but the geo-replication has status 'faulty'.  In the master logs, I see IOError: [Errno 13] Permission denied: '/var/log/glusterfs/geo-replication-slaves/mbr/52c...(rest of id) .. ab5:file%3A%2F%2F%2Fvar%2Flamplight%2Fvar.log
19:28 MattAtL So I can connect with passwordless SSH OK from master to slave
19:29 MattAtL And I think the permissions are correct for the SSH user (it's not root) on the slave
19:29 MattAtL Except for this log file
19:29 MattAtL But the SSH user is in the admin group
19:30 MattAtL And I had this all working on an earlier test setup, disappointingly
19:30 MattAtL Does anyone have any ideas?
19:31 msbrogli joined #gluster
19:32 MattAtL And on the slave, /var/log/glusterfs/geo-replication-slaves/mbr is empty
19:34 semiosis sschultz: can you pastie.org a log or some command output?
19:35 sschultz semiosis: In /var/log/messages I see the following entry "GlusterFS[6043]: STATUS=daemon 'smbd' finished starting up and ready to serve connectionsFailed to set volfile_server localhost
19:36 semiosis you're running samba on a glusterfs server?
19:37 sschultz I was trying to expose gluster over samba using the samba vfs plugin... is this not correct?
19:38 semiosis sounds perfectly reasonable.  just asking questions about your setup...
19:38 sschultz yes, so I am just testing different setups
19:38 semiosis from that error, it sounds like you have samba trying to find a gluster server on localhost
19:38 semiosis *is* there a gluster server on localhost?
19:39 sschultz yes, glusterd and smbd are running on all nodes
19:39 semiosis i.e. does telnet localhost 24007 work on that host?
19:39 sschultz I should mention that I am also using ctdb
19:39 semiosis please ,,(pasteinfo)
19:39 glusterbot Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
19:41 sschultz http://ur1.ca/hxjhm
19:41 glusterbot Title: #123983 Fedora Project Pastebin (at ur1.ca)
19:41 semiosis auth.allow: 10.* might be your problem
19:41 semiosis since localhost is most likely 127.0.0.1
19:42 sschultz I am trying to allow 10.0.0.0/8
19:42 sschultz ahh
19:42 zerick joined #gluster
19:42 sschultz got it
19:42 semiosis maybe if you use the 10. interface addr instead of localhost
19:42 sschultz so I need to allow 127.*,10.*
19:42 semiosis maybe
19:42 sschultz let me try that
19:45 barnim joined #gluster
19:48 msbrogli joined #gluster
19:57 msbrogli joined #gluster
19:57 msmith_ joined #gluster
19:58 sschultz semiosis: no luck
19:58 semiosis what did you try?
19:58 sschultz auth.allow 127.*,10.*
19:58 semiosis another basic test would be to make a client mount on the system... mount -t glusterfs <server>:<volume> /mount/point
19:59 sschultz yes, that is how I was exposing it before
19:59 semiosis if that doesnt work, at least you'll have a good log to work with
19:59 semiosis hmm
20:00 sschultz 1 second, going to restart ctdb to see if that help
20:00 sschultz *helps
20:04 sschultz semiosis: yup, still no go.  Oh well, I will just continue to mount the volume with the gluster fuse client and then expose it that way
20:05 semiosis not familiar with the samba vfs, but have you tried using anything besides localhost?
20:05 semiosis a hostname or ip addr?
20:05 sschultz yes, same result
20:07 JoeJulian sschultz: did you paste the vfs settings anywhere yet?
20:09 sschultz JoeJulian: No, but I'm just gonna bail, unless there is a really good reason to use the vfs plugin
20:09 sschultz I just wanted to run some load tests to see how it compared to mounting the gluster volume with fuse client and then exposing it over samba
20:09 mortuar left #gluster
20:09 JoeJulian I use it and I recommend it.
20:10 sschultz hmm... ok, let me post my settings, 1 sec
20:10 JoeJulian less load, faster throughput, lower latency...
20:10 sschultz do you use ctdb too?
20:10 msbrogli joined #gluster
20:10 JoeJulian no
20:11 sschultz JoeJulian: what OS are you on?
20:11 JoeJulian centos 6
20:13 sschultz http://ur1.ca/hxjqx
20:13 glusterbot Title: #123992 Fedora Project Pastebin (at ur1.ca)
20:13 sschultz thats the smb.conf
20:16 JoeJulian If you fuse mounted DocumentStore on /mnt, ie. mount -t glusterfs localhost:DocumentStore /mnt, would the path to the files you're trying to reach be "/mnt/gluster/DocumentStore"?
20:16 sac`away` joined #gluster
20:16 sschultz I mounted them here /gluster/DocumentStore
20:17 prasanth|afk joined #gluster
20:17 JoeJulian So your path should just be /
20:17 sschultz tried that too
20:17 sschultz gonna change it back
20:17 sschultz 1 sec
20:18 JoeJulian The other idea is if ctdb does something odd, perhaps use a hostname for the volfile_server instead of localhost.
20:19 sschultz let me turn off ctdb and just try smbd
20:19 semiosis failed to set volfile_server... seems like that happens before it even tries to go over the network
20:21 hchiramm joined #gluster
20:22 diegows I don't know what happen
20:22 diegows but I have I think all the file with self-heal failures
20:22 diegows :P
20:23 sschultz1 joined #gluster
20:23 diegows gluster volume heal VOLNAME should fix the issue
20:23 diegows I don't care if I lost some data
20:23 diegows I have  backup to rsync and recover from it
20:23 sschultz same thing
20:23 sschultz GlusterFS[25342]:   STATUS=daemon 'smbd' finished starting up and ready to serve connectionsFailed to set volfile_server 127.0.0.1
20:24 bennyturns joined #gluster
20:24 semiosis been a while since i used samba, but could you get more verbose info out of it, perhaps running in foreground/debug mode?  i'd want to see what it's up to right before that message appears
20:25 sschultz I'll up verbosity
20:25 sschultz 1 sec
20:28 sschultz nope, still just the same error
20:28 sschultz this is what I installed through rpm
20:28 sschultz samba-vfs-glusterfs-4.1.9-2.el6.x86_64
20:30 daxatlas joined #gluster
20:30 sschultz here are the packages I have installed
20:30 sschultz http://ur1.ca/hxju1
20:30 glusterbot Title: #124001 Fedora Project Pastebin (at ur1.ca)
20:30 rwheeler joined #gluster
20:32 sschultz JoeJulian: Could you pm your config?
20:32 sschultz I'd really like to use the vfs plugin if it is more efficient with less overhead
20:37 JoeJulian Here's the relevant bits from mine: http://fpaste.org/124003/74438111/
20:37 glusterbot Title: #124003 Fedora Project Pastebin (at fpaste.org)
20:37 JoeJulian Since I use the same volfile_server for all my shares, I just put it in global.
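JoeJulian's paste is only linked above; a minimal vfs_glusterfs share along those lines (the share name and log path are made up, the volume matches sschultz's DocumentStore) would look roughly like:
    [documents]
        vfs objects = glusterfs
        glusterfs:volume = DocumentStore
        glusterfs:volfile_server = localhost
        glusterfs:logfile = /var/log/samba/glusterfs-documents.%M.log
        glusterfs:loglevel = 7
        kernel share modes = no
        path = /
        read only = no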
20:37 JoeJulian And just now I'm realizing I left debug logging on... I should probably change that before I fill up the VM.
20:40 sschultz did you make any other modifications to gluster configs or volume settings?
20:40 JoeJulian Oh! Right... Glad you asked that.
20:42 JoeJulian In /etc/glusterfs/glusterd.vol you need to add "option rpc-auth-allow-insecure on" to the "volume management" section (the only section).
20:42 JoeJulian Do that on all your servers and restart glusterd.
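That is, something along these lines on every server, plus the per-volume option sschultz already set (myvol is a placeholder):
    # /etc/glusterfs/glusterd.vol, inside the existing "volume management" section
    option rpc-auth-allow-insecure on
    # then restart glusterd on each server
    service glusterd restart
    # and per volume
    gluster volume set myvol server.allow-insecure on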
20:42 sschultz yes, that is already in there
20:43 sschultz :/
20:43 sschultz what about volume settings?
20:49 skippy left #gluster
20:51 JoeJulian sschultz: Looks like all my volumes have server.allow-insecure on
20:51 sschultz yeah, I have that too
20:52 sschultz hmm
20:52 sschultz I'm stumped
20:52 JoeJulian Set your logs up like mine and share the result.
20:55 sschultz I setup the logs the same, but it's not creating a logfile, probably because it can't connect to gluster
21:02 ninkotech joined #gluster
21:02 daxatlas joined #gluster
21:02 JoeJulian It sounds like it's not loading the vfs
21:03 JoeJulian The only other thing I did differently is to use 3.6.9 mostly because I was in a huge rush to replace the bare-metal samba server that just died.
21:04 JoeJulian And moving from whichever version it was under centos5 to samba4 was just way too much work.
21:04 sschultz yeah, hmm
21:08 sschultz JoeJulian: What volume type are you running... Distributed, Distributed-Replicated?
21:08 msbrogli joined #gluster
21:11 [ilin] left #gluster
21:19 JoeJulian from the perspective of the client, they're all the same.
21:32 prasanth|brb joined #gluster
21:32 sac`away joined #gluster
21:35 hchiramm joined #gluster
21:38 etaylor joined #gluster
21:39 etaylor_ joined #gluster
21:43 y4m4 joined #gluster
21:43 y4m4 JustinClift: you there?
21:45 sschultz JoeJulian and semiosis, thanks for your help
21:45 semiosis sschultz: yw. did you get it working?
21:45 sschultz unfortunately, no
21:46 semiosis i think JoeJulian was suggesting that your samba may not have gluster vfs support.  you might want to double check that, and upgrade if necessary
21:46 semiosis your samba version*
21:46 sschultz I installed the vfs plugin through rpm
21:47 semiosis ok
21:47 semiosis well
21:47 sschultz from here
21:47 sschultz http://download.gluster.org/pub/gluster/glusterfs/samba/
21:48 semiosis to be clear, you installed all of samba from there, or just the vfs package?
21:49 sschultz all samba
21:49 semiosis ok then i'm out of ideas :(
21:49 sschultz yeah me too...
21:50 sschultz I'll play with it some more this weekend, but I wanted to thank you both for trying to help me out
21:50 sschultz I appreciate it
21:50 semiosis yw, good luck
21:50 sijis left #gluster
21:52 barnim joined #gluster
21:59 sschultz semiosis: I'm dumb... everything started working after performing yum update
22:09 semiosis \o/
22:42 Peter4 joined #gluster
22:46 nage joined #gluster
22:50 vu joined #gluster
23:15 MattAtL left #gluster
23:16 msbrogli joined #gluster
23:17 daxatlas joined #gluster
23:25 bala joined #gluster
23:28 elico joined #gluster
23:29 msbrogli joined #gluster
23:39 gildub joined #gluster
23:46 msbrogli joined #gluster
23:51 msbrogli joined #gluster
23:54 msbrogli joined #gluster
23:54 raghug joined #gluster
23:55 nbarnett joined #gluster
23:57 msbrogli joined #gluster
