
IRC log for #gluster, 2014-07-30


All times shown according to UTC.

Time Nick Message
00:14 caiozanolla_ joined #gluster
00:28 calum_ joined #gluster
00:31 gildub joined #gluster
00:43 gildub joined #gluster
01:01 recidive joined #gluster
01:30 harish_ joined #gluster
01:33 overclk joined #gluster
01:45 haomaiwa_ joined #gluster
01:45 haomaiwa_ joined #gluster
01:46 Humble joined #gluster
01:58 haomai___ joined #gluster
02:00 gEEbusT Hi guys, I'm getting really poor performance with gluster using small files. I'm using XFS with block size 4096 and inode size 512, running replica 2 with bricks over 4 hosts. Any ideas where to look?
02:09 harish_ joined #gluster
02:14 ndk joined #gluster
02:29 Alex When you say 'poor performance', what exactly do you mean gEEbusT?
02:30 Alex Read/write? Do you see anything (system utilisation wise) that indicates what the problem is?
02:56 ThatGraemeGuy joined #gluster
03:01 bharata-rao joined #gluster
03:01 gildub joined #gluster
03:11 harish_ joined #gluster
03:12 MacWinner joined #gluster
03:16 nbalachandran joined #gluster
03:17 hagarth joined #gluster
03:19 gEEbusT Alex: I'm running the workload in AWS, with samba, and comparing writes through a FUSE mount + gluster against a plain local filesystem on an EBS volume - gluster is going much, much slower than the local filesystem. I'm thinking about switching from replica to geo-replication
03:20 gEEbusT I've got performance.flush-behind on, performance.write-behind-window-size 1GB and performance.io-thread-count 32, and that's sped it up a tiny bit, but it's still very slow
03:22 gEEbusT Alex: the only indicator of an issue is some warnings in the log about REMOVEXATTR failing with "No data available", referencing all 4 of the gluster servers
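
For reference, the tuning options gEEbusT mentions are ordinary volume options; a minimal sketch of how they would be set, assuming a volume named myvol (the values are the ones quoted above, not recommendations):

    gluster volume set myvol performance.flush-behind on
    gluster volume set myvol performance.write-behind-window-size 1GB
    gluster volume set myvol performance.io-thread-count 32
    gluster volume info myvol   # lists the options now set on the volume
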
03:29 ira joined #gluster
03:42 gEEbusT http://blog.gluster.org/2012/07/improving-high-throughput-next-gen-sequencing/ <--- very similar to this
03:42 glusterbot gEEbusT: <-'s karma is now -1
03:46 prasanth_ joined #gluster
03:56 itisravi joined #gluster
03:57 prasanth_ joined #gluster
04:00 shubhendu joined #gluster
04:04 ppai joined #gluster
04:05 ramteid joined #gluster
04:06 Humble joined #gluster
04:10 haomaiwa_ joined #gluster
04:14 kshlm joined #gluster
04:15 Rafi_kc joined #gluster
04:18 _dist joined #gluster
04:25 nishanth joined #gluster
04:26 anoopcs joined #gluster
04:26 anoopcs1 joined #gluster
04:28 anoopcs1 left #gluster
04:28 anoopcs1 joined #gluster
04:29 anoopcs1 left #gluster
04:29 haomaiwang joined #gluster
04:32 jiffin joined #gluster
04:36 atinmu joined #gluster
04:38 anoopcs joined #gluster
04:45 anoopcs joined #gluster
04:45 cjhanks joined #gluster
04:48 recidive joined #gluster
05:00 spandit joined #gluster
05:04 haomaiwa_ joined #gluster
05:14 haomai___ joined #gluster
05:18 meghanam joined #gluster
05:18 meghanam_ joined #gluster
05:49 kshlm joined #gluster
05:52 vshankar joined #gluster
05:58 atinmu joined #gluster
06:01 lalatenduM joined #gluster
06:01 rastar joined #gluster
06:03 LebedevRI joined #gluster
06:18 Philambdo joined #gluster
06:20 haomaiwa_ joined #gluster
06:21 R0ok_ joined #gluster
06:28 karnan joined #gluster
06:29 Humble joined #gluster
06:30 sahina joined #gluster
06:34 shylesh__ joined #gluster
06:36 haomaiw__ joined #gluster
06:37 ndarshan joined #gluster
06:41 aravindavk joined #gluster
06:41 haomaiwa_ joined #gluster
06:42 haomaiwa_ joined #gluster
06:51 plarsen joined #gluster
06:55 rastar joined #gluster
06:58 harish_ joined #gluster
06:58 haomaiwang joined #gluster
06:59 ekuric joined #gluster
07:06 XpineX joined #gluster
07:07 TvL2386 joined #gluster
07:09 raghu joined #gluster
07:17 nbalachandran joined #gluster
07:19 psharma joined #gluster
07:23 keytab joined #gluster
07:29 SOLDIERz joined #gluster
07:29 SOLDIERz Hello everyone, is anybody here using glusterfs with iscsi?
07:32 Nightlydev joined #gluster
07:39 Humble joined #gluster
07:54 nbalachandran joined #gluster
08:00 fsimonce joined #gluster
08:00 rastar joined #gluster
08:09 haomaiwa_ joined #gluster
08:18 JoeJulian SOLDIERz: Yes, but probably not how you mean. We have clients that need LUNs for their software, so they're mounting gluster backed cinder volumes and resharing them as iscsi.
08:20 JoeJulian gEEbusT: As compared to the performance your engineering predicted, how does the actual performance differ?
08:24 haomai___ joined #gluster
08:38 Slashman joined #gluster
08:38 NuxRo SOLDIERz JoeJulian there's also this http://blog.gluster.org/2013/12/libgfapi-and-the-linux-target-driver/
08:38 XpineX joined #gluster
08:41 JoeJulian Damn! I wish we were exposing gluster storage directly.
08:41 JoeJulian Hmm... I was asked if I could do it all over again how would I do it...
08:44 aravindavk joined #gluster
08:51 shubhendu joined #gluster
08:55 glusterbot New news from newglusterbugs: [Bug 1124744] glusterd : "heal full" command failed with "Commit failed on . Please check log file for details." <https://bugzilla.redhat.com/show_bug.cgi?id=1124744>
08:55 rastar joined #gluster
08:58 SOLDIERz JoeJulian well I'm testing glusterfs with iscsi at the moment
08:58 bene2 joined #gluster
08:58 SOLDIERz as a backend storage for virtual machines
08:58 SOLDIERz so as to set up shared storage for a virtual infrastructure
08:59 SOLDIERz but at the moment the performance seems really bad
08:59 SOLDIERz are there any optimisation patterns you'd suggest?
09:00 vimal joined #gluster
09:02 ndarshan joined #gluster
09:02 Humble joined #gluster
09:09 SOLDIERz has anybody tested out libgfapi?
09:09 SOLDIERz http://blog.gluster.org/2013/12/libgfapi-and-the-linux-target-driver/
09:09 SOLDIERz perhaps it could get me better performance
09:12 aravindavk joined #gluster
09:13 aravindavk joined #gluster
09:13 dusmant joined #gluster
09:13 siXy libgfapi has some concurrency issues right now
09:14 siXy what vm technology are you using?
09:16 Philambdo joined #gluster
09:24 shubhendu joined #gluster
09:28 SOLDIERz xenserver at the moment
09:28 SOLDIERz siXy
09:31 siXy hm, I doubt there's any native glusterfs support for that.
09:32 siXy why are you trying to use iscsi to connect to a gluster volume?
09:34 SOLDIERz to map it into xenserver as an SR
09:34 siXy what I meant is why are you using iscsi rather than nfs?
09:35 spandit joined #gluster
09:35 dusmant joined #gluster
09:36 SOLDIERz well, what's the difference?
09:37 SOLDIERz is there better performance with nfs?
09:39 siXy fewer layers of abstraction.  it might well be faster.  however I'm also curious what you're hoping to achieve by using gluster in this way
09:40 siXy if it's just for redundancy there might be easier ways to achieve your goal, and if it's for performance then you're definitely barking up the wrong tree
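
For context, the NFS route siXy is suggesting uses gluster's built-in NFS server, which speaks NFSv3; a sketch of mounting a volume that way, assuming a server named server1 and a volume named myvol (an XenServer NFS SR could then be pointed at the same export):

    mount -t nfs -o vers=3,proto=tcp server1:/myvol /mnt/myvol
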
09:41 SOLDIERz siXy well I know.. at the moment I just want to set up shared storage which replicates virtual machines between two replicas
09:42 itisravi_ joined #gluster
09:42 SOLDIERz glusterfs was the solution suggested by one of my colleagues. I myself don't think glusterfs is the right fit for this use case
09:42 siXy honestly, you're probably better off with drbd for that.
09:42 xavih_ joined #gluster
09:43 SOLDIERz that's what I also thought
09:43 SOLDIERz but my colleague wants glusterfs
09:43 gEEbusT I'm finding gluster isn't good at all for high amounts of tiny files either :(
09:43 _jmp_ joined #gluster
09:43 cfeller_ joined #gluster
09:43 stickyboy gEEbusT: Nope, it's not.  Unless you're on Infiniband.
09:43 siXy gEEbusT: what's 'high amounts'?
09:44 vincent_1dk joined #gluster
09:44 d-fence_ joined #gluster
09:44 stigchristian joined #gluster
09:44 saltsa_ joined #gluster
09:44 purpleid1a joined #gluster
09:44 siXy as long as it's not *too* high there's a few things you might be able to do
09:44 johnmark_ joined #gluster
09:44 jezier_ joined #gluster
09:44 Slasheri joined #gluster
09:44 Frank77 joined #gluster
09:44 Slasheri joined #gluster
09:44 sickness joined #gluster
09:44 SOLDIERz well we are also planning at the moment to implement glusterfs on our live platform
09:44 carrar joined #gluster
09:44 LebedevRI joined #gluster
09:44 ccha joined #gluster
09:44 jiffin joined #gluster
09:44 coreping joined #gluster
09:45 pdrakeweb joined #gluster
09:45 Pupeno joined #gluster
09:45 [o__o] joined #gluster
09:45 systemonkey joined #gluster
09:45 SOLDIERz and there are lots of small, tiny files, in particular images
09:46 shubhendu joined #gluster
09:46 SOLDIERz if you've got any test case which shows that glusterfs is not the right fit for that use case, please send me some links
09:46 tomased joined #gluster
09:47 SOLDIERz because I'm not really convinced
09:47 SOLDIERz by my colleagues' plan to implement it on the live platform
09:49 hflai joined #gluster
09:51 gEEbusT siXy: a lot of differing sizes, but mostly small CSVs etc - we have an in-house app that downloads archives and extracts them to the mount (through samba) when installing, but it just seems really slow compared to the expectations we had after benchmarking samba with gluster with 4k non-sequential writes
09:52 karnan joined #gluster
09:52 SOLDIERz gEEbusT well, is the extraction done on the glusterfs share?
09:53 siXy if your client uses unbuffered IO to write that will be a problem right off
09:53 ekuric joined #gluster
09:54 SOLDIERz our application / use case for glusterfs in the near future would be
09:54 SOLDIERz to display small tiff files in our application
09:54 harish_ joined #gluster
09:55 SOLDIERz because at the moment glusterfs is our last alternative for the new platform
09:56 SOLDIERz because no other distributed filesystem works the way glusterfs does
09:56 SOLDIERz the most popular distributed filesystems, which are mostly object-oriented storage at their core,
09:57 SOLDIERz need a quorum
09:57 SOLDIERz but for our live platform we need to be highly available
09:57 SOLDIERz and we've got two fire zones
09:58 SOLDIERz so when one fire zone is gone we would always run into the issue that quorum can no longer be met, and risk a split-brain situation
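
For context, the quorum behaviour SOLDIERz is worried about is controlled by volume options; a sketch, assuming a replica 2 volume named myvol (with only two fire zones, enforcing quorum means losing one zone can make the volume read-only or stop its bricks rather than risking split-brain):

    gluster volume set myvol cluster.quorum-type auto            # client-side quorum for the replica set
    gluster volume set myvol cluster.server-quorum-type server   # server-side quorum enforced by glusterd
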
10:02 keytab joined #gluster
10:23 meghanam joined #gluster
10:24 meghanam_ joined #gluster
10:32 ppai joined #gluster
10:40 calum_ joined #gluster
10:44 kshlm joined #gluster
10:50 spandit joined #gluster
10:50 simulx joined #gluster
10:53 meghanam_ joined #gluster
10:53 meghanam joined #gluster
10:56 sahina joined #gluster
10:57 kkeithley joined #gluster
11:05 karnan joined #gluster
11:08 ira joined #gluster
11:11 ndarshan joined #gluster
11:14 gildub joined #gluster
11:22 shubhendu joined #gluster
11:26 glusterbot New news from newglusterbugs: [Bug 1120646] rfc.sh transfers patches with whitespace problems without warning <https://bugzilla.redhat.com/show_bug.cgi?id=1120646>
11:34 bala joined #gluster
11:38 dusmant joined #gluster
11:53 diegows joined #gluster
11:55 Frank77 Thank you for the link. I had another question about barriers. I can't figure out whether gluster (and especially libgfapi used with KVM) uses such mechanisms (or an equivalent)? This is intended to keep our databases as safe as possible against inconsistency. I disabled the gluster write cache, so is that enough, or do I need to make sure barriers work, knowing that there is a bug about them: https://bugzilla.redhat.com/show_bug.cgi?id=1100568. Would I be affected by this bug?
11:56 glusterbot New news from newglusterbugs: [Bug 1117822] Tracker bug for GlusterFS 3.6.0 <https://bugzilla.redhat.com/show_bug.cgi?id=1117822>
12:01 Slashman_ joined #gluster
12:11 edward1 joined #gluster
12:12 Humble joined #gluster
12:14 prasanth_ joined #gluster
12:15 mojibake joined #gluster
12:17 itisravi joined #gluster
12:21 nullck_ joined #gluster
12:30 bennyturns joined #gluster
12:38 anoopcs left #gluster
12:42 plarsen joined #gluster
12:46 bene2 joined #gluster
12:48 itisravi_ joined #gluster
13:02 DV__ joined #gluster
13:18 jobewan joined #gluster
13:27 Philambdo joined #gluster
13:37 _Bryan_ joined #gluster
13:41 dragonball_ joined #gluster
13:49 recidive joined #gluster
13:50 tdasilva joined #gluster
13:50 chirino joined #gluster
14:01 Humble joined #gluster
14:06 bala joined #gluster
14:11 Nightlydev joined #gluster
14:14 wushudoin joined #gluster
14:15 wushudoin joined #gluster
14:16 xleo joined #gluster
14:31 plarsen joined #gluster
14:36 JustinCl1ft *** Weekly GlusterFS Community Meeting in 25 minutes.  #gluster-meeting on irc.freenode.net ***
14:39 xoritor joined #gluster
15:03 madphoenix joined #gluster
15:04 madphoenix I have a question about replacing a brick in a distributed volume.  The latest docs on GitHub seem to say that replace-brick is the preferred method, but I've also seen some (admittedly old) mailing list posts saying that feature was to be deprecated, and that you should instead just add the new brick and drop the old one (with 'start') to migrate the data.
15:04 madphoenix What is best practice these days?
15:10 JoeJulian madphoenix: wait a week and ask again.
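
For readers following along, a sketch of the two procedures madphoenix is comparing, assuming a distributed volume named myvol (server and brick paths are hypothetical):

    # drain approach: add the new brick, then migrate data off the old one
    gluster volume add-brick myvol newserver:/bricks/b1
    gluster volume remove-brick myvol oldserver:/bricks/b1 start
    gluster volume remove-brick myvol oldserver:/bricks/b1 status   # wait until migration completes
    gluster volume remove-brick myvol oldserver:/bricks/b1 commit

    # in-place swap: no data migration, so on a pure distribute volume the old
    # brick's data must be copied separately; on a replicated volume self-heal
    # repopulates the new brick
    gluster volume replace-brick myvol oldserver:/bricks/b1 newserver:/bricks/b1 commit force
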
15:23 atrius joined #gluster
15:27 aravindavk joined #gluster
15:50 elico joined #gluster
15:58 tdasilva joined #gluster
16:04 ndk joined #gluster
16:18 rastar joined #gluster
16:25 coredump joined #gluster
16:28 bala joined #gluster
16:29 MacWinner joined #gluster
16:30 Frank77 joined #gluster
16:32 anoopcs joined #gluster
16:34 Peter3 joined #gluster
16:34 anoopcs1 joined #gluster
16:36 arraen joined #gluster
16:46 arraen Hello! I'm trying to set up geo-replication and I'm getting this error:
16:46 arraen # gluster volume geo-replication data_excelcachepath 10.0.10.4:/data/data_excelcachepath create push-pem
16:46 arraen Staging failed on localhost. Please check the log file for more details.
16:46 arraen geo-replication command failed
16:46 arraen In etc-glusterfs-glusterd.vol.log:
16:46 arraen [2014-07-30 16:39:17.421835] E [glusterd-geo-rep.c:4083:glusterd_get_slave_info] 0-: Invalid slave name
16:46 arraen [2014-07-30 16:39:17.421979] W [dict.c:778:str_to_data] (-->/usr/lib64/glusterfs/3.5.1/xlator/mgmt/glusterd.so(glusterd_op_stage_gsync_create+0x1e2) [0x7f98c20a01f2] (-->/usr/lib64/glusterfs/3.5.1/xlator/mgmt/glusterd.so(glusterd_get_slave_details_confpath+0x116) [0x7f98c209b306] (-->/usr/lib64/libglusterfs.so.0(dict_set_str+0x1c) [0x7f98c70b145c]))) 0-dict: value is NULL
16:46 arraen [2014-07-30 16:39:17.421989] E [glusterd-geo-rep.c:3995:glusterd_get_slave_details_confpath] 0-: Unable to store slave volume name.
16:46 glusterbot arraen: ('s karma is now -8
16:46 arraen [2014-07-30 16:39:17.421998] E [glusterd-geo-rep.c:2056:glusterd_op_stage_gsync_create] 0-: Unable to fetch slave or confpath details.
16:46 arraen [2014-07-30 16:39:17.422006] E [glusterd-syncop.c:912:gd_stage_op_phase] 0-management: Staging of operation 'Volume Geo-replication Create' failed on localhost
16:46 glusterbot arraen: ('s karma is now -9
16:46 arraen RHEL6, glusterfs 3.5.1
16:46 arraen Can someone please help me with this issue?
16:46 glusterbot arraen: ('s karma is now -10
16:46 JoeJulian ~paste | arraen
16:46 glusterbot arraen: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
16:47 JoeJulian I'm heading out the door for a meeting. Search https://botbot.me/freenode/gluster/ for "invalid slave name". Someone else solved that once....
16:47 glusterbot Title: Logs for #gluster | BotBot.me [o__o] (at botbot.me)
16:51 sputnik13 joined #gluster
16:52 nueces joined #gluster
16:53 andres_ joined #gluster
16:54 andres_ Hi
16:54 glusterbot andres_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:55 andres_ Does anyone have a gluster setup in EC2? I'm getting super slow write speeds (I have one brick in us-east and one in us-west). The transfer rate is about 2kB/s
16:55 andres_ While iperf shows a speed of 300 Mbit/s
16:58 bala joined #gluster
17:01 daMaestro joined #gluster
17:01 arraen JoeJulian, Thank you, found it. I was using 10.0.10.4:/data/data_excelcachepath, but it should be 10.0.10.4::data_excelcachepath
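
For reference, the corrected geo-replication syntax arraen describes uses host::volume rather than host:/path; a sketch using the volume name from the log (the slave volume is assumed to already exist on 10.0.10.4, with passwordless SSH set up):

    gluster volume geo-replication data_excelcachepath 10.0.10.4::data_excelcachepath create push-pem
    gluster volume geo-replication data_excelcachepath 10.0.10.4::data_excelcachepath start
    gluster volume geo-replication data_excelcachepath 10.0.10.4::data_excelcachepath status
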
17:10 anoopcs joined #gluster
17:18 cjhanks joined #gluster
17:20 cjhanks Is GeoReplication designed to be something which is done periodically or continuously?
17:33 semiosis andres_: glusterfs replication is latency sensitive.  use replication between AZs in a single region
17:34 ira joined #gluster
17:38 andres_ semiosis: Ok, will try to do that
18:39 andres_ semiosis: does geo-replication behave the same way?
17:39 semiosis no, geo-replication is designed for high-latency WAN links (between regions), however it is one-way master/slave & asynchronous
17:41 andres_ Ok, maybe I will go that way then
17:41 andres_ Thank you very much semiosis
17:41 semiosis yw
17:46 caiozanolla Hello all, what is considered essential config to back up on a gluster node? Today I had a situation where a full "/" broke the volume info under /var/lib/glusterd/vols and glusterd wouldn't start, saying it could not restore my volumeX and the management0 volume. Luckily, I found an answer from JoeJulian somewhere and rsynced that dir from another node to the failed node. gluster started again. So, should I back up /var/lib/glusterd? What should be left out?
17:47 semiosis the /var/lib/glusterd config is replicated between all peers
17:57 caiozanolla semiosis, not entirely true. /var/lib/glusterd/peers is unique amongst nodes
17:58 caiozanolla semiosis, at least on my setup.
17:58 caiozanolla semiosis, it's a 2-node replicated setup. each one has peer info for the other
18:02 semiosis ok you got me
18:03 semiosis the peers directory has info on all peers except itself
18:03 semiosis but the UUID for itself is stored in /var/lib/glusterd/glusterd.info, so that missing file could be recreated for another server
18:10 caiozanolla semiosis, nice, thanks. and how often does the content of /var/lib/glusterd/ change? (besides on obvious things like brick and node maintenance) I'm asking because my vols dir got vaporized by a full filesystem. very scary thing.
18:11 semiosis afaik it only changes when you change things using the gluster command
18:12 caiozanolla semiosis, thanks man. now let's proceed to backing it up! :)
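
A minimal sketch of the backup caiozanolla is planning, using the paths from the discussion above (the destination directory is an assumption):

    # copy the glusterd state directory from each node somewhere safe
    rsync -a /var/lib/glusterd/ /backup/$(hostname)-glusterd/
    # the node's own UUID, needed if glusterd.info ever has to be recreated
    cat /var/lib/glusterd/glusterd.info
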
18:24 rotbeard joined #gluster
18:24 keytab joined #gluster
18:28 zerick joined #gluster
18:32 calum_ joined #gluster
18:35 semiosis caiozanolla: yw
18:37 tdasilva joined #gluster
18:40 LebedevRI joined #gluster
18:40 [o__o] joined #gluster
18:45 georgem2 joined #gluster
18:51 ghenry joined #gluster
18:53 caiozanolla semiosis, a dozen posts back you said "use replication between AZs in a single region". That's exactly how we are doing it. Our clients are autoscaling machines. Right now we have zone A machines mounting fsA.domain.com and zone B mounting fsB.domain.com. Since it's replicated, clients will talk to both servers regardless of mounting zone A or B, right? If so, can I just tell them to mount an A RECORD returning the 2 IPs of the servers? How will that work in the situation where one of the servers is offline? (classic DNS RR and connection successful?)
18:54 semiosis caiozanolla: you got it.  ,,(rrdns)
18:54 glusterbot caiozanolla: You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/ktI6p
18:54 caiozanolla semiosis, thanks once again!
18:54 semiosis yw
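
For reference, a sketch of the mount setup being discussed, assuming a round-robin A record gluster.domain.com that returns both server IPs and a volume named myvol; the backupvolfile-server mount option is an alternative the fuse client also supports for fetching the volfile:

    # /etc/fstab entry using the round-robin name
    gluster.domain.com:/myvol  /mnt/gluster  glusterfs  defaults,_netdev  0 0
    # or name an explicit fallback server for the initial volfile fetch
    mount -t glusterfs -o backupvolfile-server=fsB.domain.com fsA.domain.com:/myvol /mnt/gluster
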
18:59 plarsen joined #gluster
18:59 tdasilva_ joined #gluster
19:01 Humble joined #gluster
19:07 plarsen joined #gluster
19:07 ekuric joined #gluster
19:16 purpleidea joined #gluster
19:17 jbrooks left #gluster
19:26 jbrooks joined #gluster
19:28 caiozanolla i have a probable split-brain situation where access to a file known to exist returns "Input/output error". I understand that the solution is clearing xattrs and resyncing said file manually on the brick, is that right? if so, how do I know on which of the bricks this file resides? (btw, in this case i can just delete the file, it will be recreated)
19:28 semiosis caiozanolla: the client log file will say for sure if it is a split brain or some other issue.  check the log first
19:39 caiozanolla semiosis, i have this:  gfid or missing entry self heal  failed
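
For readers hitting the same error, a sketch of how the affected file can be located and inspected, assuming a volume named myvol; the brick path is hypothetical (on a replicated volume the file normally exists on each brick of its replica set, and the xattrs on each copy show which side is flagged):

    gluster volume heal myvol info split-brain
    # on each brick server, dump the AFR changelog xattrs of the file
    getfattr -m . -d -e hex /bricks/b1/path/to/file
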
19:45 sijis joined #gluster
19:45 sijis what does this 'mean'? "failed: Stale file handle. Path: <path>"
20:19 SOLDIERz joined #gluster
20:35 chirino joined #gluster
20:45 xoritor joined #gluster
20:49 xleo joined #gluster
20:51 Humble joined #gluster
20:55 Mathilda joined #gluster
20:56 Mathilda left #gluster
20:57 Mathilda joined #gluster
20:57 Mathilda left #gluster
20:57 Maya__ joined #gluster
20:59 luckyinva joined #gluster
21:02 luckyinva Hello.  I’m looking for a bit of advice on gluster, mainly around the replica settings/value.  Have I found the right place to ask?
21:03 JoeJulian yes
21:04 Maya__ Hi everyone- I've recently set up a Gluster volume in Replicated mode where one brick already contained 600GB of data which then had to be "self-healed" to the second brick. This didn't go as smoothly as I had hoped, but it's finished healing. The issue I'm having now is that the first node is using up all the memory and is now also using 80% swap space. Does anyone have any suggestions on why this may be happening and what I can do (other than adding more memory/swap space)? Thanks guys.
21:05 JoeJulian I've seen that too, but when you look at page faults, it's not really accessing swap. I've been meaning to ask the devs about that.
21:05 luckyinva I’m fairly new to OpenStack and have been tasked with testing out gluster.  I have 3 servers and have set up three volumes
21:06 luckyinva I had recently read that you cannot increase the value once you have started
21:06 luckyinva is there a formula or recommendation for what this should be set at in relation to how many volumes/disks I have
21:07 julim joined #gluster
21:07 JoeJulian luckyinva: Value of what?
21:08 luckyinva replicate value
21:08 JoeJulian Ah, sure you can change it. But replication in a clustered system is about fault tolerance, not how many storage servers you have.
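
For reference, the replica-count change JoeJulian says is possible is done by adding or removing bricks while stating the new replica value; a sketch, assuming a volume named myvol currently at replica 2 (server and brick names are hypothetical):

    gluster volume add-brick myvol replica 3 server3:/bricks/b1
    gluster volume heal myvol full   # populate the newly added replica
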
21:09 luckyinva ok thanks for the clarification
21:09 JoeJulian Are you aware of how to do uptime calculations?
21:10 luckyinva not experienced in that, no. I can, however, and will research it
21:10 JoeJulian http://www.eventhelix.com/realtimemantra/faulthandling/system_reliability_availability.htm#Availability in Series
21:10 glusterbot Title: System Reliability and Availability Calculation (at www.eventhelix.com)
21:10 Maya__ Thanks Joe! So I needn't be so concerned about the server eating up so much memory?
21:11 luckyinva thanks Joe
21:11 JoeJulian heh, that anchor's wrong. Availability in parallel is what you're looking at with replication.
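
For context, the availability calculation JoeJulian is pointing at works out like this; a worked example assuming each storage server is 99% available (the 99% figure is purely illustrative):

    in parallel (replication):  A = 1 - (1 - 0.99) * (1 - 0.99) = 0.9999  -> 99.99%
    in series (both required):  A = 0.99 * 0.99 = 0.9801                  -> 98.01%
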
21:11 JoeJulian Maya__: Not unless I post a blog article that says otherwise. ;)
21:11 Maya__ Ha, got it- thanks for being a huge help!
21:22 zerick joined #gluster
21:38 atrius joined #gluster
21:56 nullck joined #gluster
22:09 DanishMan joined #gluster
22:53 bala joined #gluster
23:10 JamesG joined #gluster
23:21 y4m4 joined #gluster
