
IRC log for #gluster, 2015-06-17


All times shown according to UTC.

Time Nick Message
00:02 haomaiw__ joined #gluster
00:06 chirino joined #gluster
00:13 JoeJulian @which brick
00:13 glusterbot JoeJulian: To determine on which brick(s) a file resides, run getfattr -n trusted.glusterfs.pathinfo $file through the client mount.
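Example: a minimal sketch of the factoid above, assuming a FUSE client mount at /mnt/glustervol and a file named data.img (both paths are illustrative); run it through the client mount, not on a brick.
    getfattr -n trusted.glusterfs.pathinfo /mnt/glustervol/data.img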
00:36 victori joined #gluster
00:45 Micromus joined #gluster
00:48 victori joined #gluster
01:06 Gill joined #gluster
01:23 gem joined #gluster
01:28 sysconfig joined #gluster
01:33 nangthang joined #gluster
01:37 sysconfig joined #gluster
01:45 Peppard joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:48 harish_ joined #gluster
02:01 kdhananjay joined #gluster
02:20 victori joined #gluster
02:26 harish joined #gluster
02:31 nangthang joined #gluster
02:33 victori joined #gluster
02:34 victori joined #gluster
02:38 haomaiwa_ joined #gluster
02:44 bharata-rao joined #gluster
03:07 jrdn joined #gluster
03:13 victori joined #gluster
03:26 harish joined #gluster
03:41 craigcabrey joined #gluster
03:44 overclk joined #gluster
03:46 kovshenin joined #gluster
03:47 ashiq joined #gluster
03:47 stickyboy joined #gluster
03:48 RameshN_ joined #gluster
03:49 [7] joined #gluster
03:51 craigcabrey joined #gluster
04:04 nbalacha joined #gluster
04:04 atinm joined #gluster
04:12 sakshi joined #gluster
04:15 shubhendu joined #gluster
04:32 ppai joined #gluster
04:32 nbalacha joined #gluster
04:50 ramteid joined #gluster
04:52 jbrooks joined #gluster
04:56 harish joined #gluster
04:56 soumya joined #gluster
04:58 vimal joined #gluster
04:59 schandra joined #gluster
05:04 Bhaskarakiran joined #gluster
05:04 zeittunnel joined #gluster
05:05 ndarshan joined #gluster
05:06 hgowtham joined #gluster
05:06 jiffin joined #gluster
05:07 gem joined #gluster
05:10 pppp joined #gluster
05:10 spandit joined #gluster
05:10 atalur joined #gluster
05:12 meghanam joined #gluster
05:13 victori joined #gluster
05:14 Manikandan joined #gluster
05:22 victori joined #gluster
05:32 Philambdo joined #gluster
05:34 kdhananjay joined #gluster
05:35 autoditac joined #gluster
05:43 surabhi joined #gluster
05:47 maveric_amitc_ joined #gluster
05:52 atalur joined #gluster
05:57 saurabh_ joined #gluster
05:57 anil joined #gluster
05:59 haomaiwang joined #gluster
06:03 raghu joined #gluster
06:07 karnan joined #gluster
06:09 shubhendu joined #gluster
06:13 Bhaskarakiran joined #gluster
06:18 Larsen joined #gluster
06:19 rjoseph joined #gluster
06:20 atalur joined #gluster
06:28 vimal joined #gluster
06:31 shubhendu joined #gluster
06:33 shubhendu joined #gluster
06:39 rgustafs joined #gluster
06:44 nangthang joined #gluster
06:58 Bhaskarakiran joined #gluster
07:05 spalai joined #gluster
07:08 glusterbot News from newglusterbugs: [Bug 1232602] bug-857330/xml.t fails spuriously <https://bugzilla.redhat.com/show_bug.cgi?id=1232602>
07:17 tessier joined #gluster
07:18 [Enrico] joined #gluster
07:18 [Enrico] joined #gluster
07:25 spalai left #gluster
07:26 deniszh joined #gluster
07:27 nsoffer joined #gluster
07:32 RameshN joined #gluster
07:32 LebedevRI joined #gluster
07:33 fsimonce joined #gluster
07:38 Trefex joined #gluster
07:51 haomaiwang joined #gluster
07:56 anrao joined #gluster
08:05 arcolife joined #gluster
08:14 Slashman joined #gluster
08:18 al joined #gluster
08:19 [Enrico] joined #gluster
08:21 liquidat joined #gluster
08:23 jcastill1 joined #gluster
08:24 rgustafs joined #gluster
08:28 jcastillo joined #gluster
08:31 sysconfig joined #gluster
08:31 deepakcs joined #gluster
08:40 Pupeno joined #gluster
08:41 harish_ joined #gluster
08:44 mkzero joined #gluster
08:45 kdhananjay joined #gluster
09:09 glusterbot News from newglusterbugs: [Bug 1232660] Change default values of allow-insecure and bind-insecure <https://bugzilla.redhat.com/show_bug.cgi?id=1232660>
09:10 anti[Enrico] joined #gluster
09:12 autoditac joined #gluster
09:13 Saravana joined #gluster
09:14 Bhaskarakiran joined #gluster
09:22 rjoseph joined #gluster
09:24 predat joined #gluster
09:28 autoditac joined #gluster
09:29 sankarshan joined #gluster
09:33 [Enrico] joined #gluster
09:36 dcroonen joined #gluster
09:39 ghenry joined #gluster
09:39 ghenry joined #gluster
09:43 Bhaskarakiran joined #gluster
09:45 overclk joined #gluster
09:46 shubhendu joined #gluster
09:46 ndarshan joined #gluster
09:51 Micromus joined #gluster
10:04 jcastill1 joined #gluster
10:05 elico joined #gluster
10:08 shubhendu joined #gluster
10:08 gem joined #gluster
10:09 glusterbot News from newglusterbugs: [Bug 1232678] Disperse volume : data corruption with appending writes in 8+4 config <https://bugzilla.redhat.com/show_bug.cgi?id=1232678>
10:09 ndarshan joined #gluster
10:09 kshlm joined #gluster
10:09 jcastillo joined #gluster
10:20 RameshN joined #gluster
10:23 Philambdo joined #gluster
10:26 RameshN_ joined #gluster
10:27 side_control from what i understand gluster volumes mounted by the fuse client do not require a VIP, the fuse client will determine if a brick is down and forward to a working brick?
10:28 vovcia side_control: yes
10:28 haomaiwa_ joined #gluster
10:28 side_control vovcia: okay, so the same stands for using gluster in ovirt? because that aint working =/
10:29 TvL2386 joined #gluster
10:29 vovcia dunno :/
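Example: a hedged sketch of the mount side_control describes; the FUSE client only needs the named server to fetch the volfile and then connects to every brick directly, so no VIP is required. Hostnames and the volume name are made up, and the fallback option is spelled backupvolfile-server or backup-volfile-servers depending on the release.
    mount -t glusterfs -o backupvolfile-server=server2 server1:/myvol /mnt/myvol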
10:32 ndevos REMINDER: in ~90 minutes from now, the weekly Gluster community meeting starts in #gluster-meeting
10:39 glusterbot News from newglusterbugs: [Bug 1063506] No xml output on gluster volume heal info command with --xml <https://bugzilla.redhat.com/show_bug.cgi?id=1063506>
10:47 nbalacha joined #gluster
10:49 atinm joined #gluster
10:50 nbalacha joined #gluster
10:53 meghanam joined #gluster
11:14 wushudoin| joined #gluster
11:17 pppp joined #gluster
11:20 wushudoin| joined #gluster
11:39 glusterbot News from newglusterbugs: [Bug 1232717] Rebalance status changes from 'completed' to 'not started' when replace-brick volume operation is performed <https://bugzilla.redhat.com/show_bug.cgi?id=1232717>
11:41 morphkurt joined #gluster
11:41 pppp joined #gluster
11:42 morphkurt Hi All, as a new GlusterFS user, I would like to get some feedback about my use case
11:43 morphkurt I currently have 4TB of local SAS disk on two DL380 G8 servers (these are pretty beefy servers). My use case is to store transient video files on a GlusterFS volume on these two nodes.
11:44 itisravi joined #gluster
11:45 RameshN joined #gluster
11:46 morphkurt The file sizes are between 160KB and 1250KB. We have a process writing to the storage at about 500Mbps.
11:46 morphkurt The second node will read what the first node has written and serve it to the customers
11:47 soumya_ joined #gluster
11:47 morphkurt Is GlusterFS a good solution for this type of high-IO traffic?
11:47 B21956 joined #gluster
11:50 meghanam joined #gluster
11:50 ndevos morphkurt: sounds like a possible use-case, but those files are rather small? how does the 2nd server know when there are new files?
11:51 morphkurt the application has a mysql database in which it keeps a tally of the files. the mysql db is not using glusterfs
11:51 atinm joined #gluster
11:51 morphkurt the files are video segments for adaptive streaming (2second chunks)
11:54 ndevos I guess that should work just fine, run some workload tests to try it out?
11:56 JustinClift morphkurt: Out of curiosity, what's the network connection those servers are using?  10GbE?
11:56 JustinClift 1GbE would be unwise ;)
11:56 morphkurt yes, 10G bonded interface on layer 2 switch
11:56 JustinClift Cool.  Shouldn't be a problem then.  In theory.
11:56 morphkurt the servers hanging off switch one hop away.
11:57 JustinClift morphkurt: Definitely run a workload test first, just to be safe
11:57 morphkurt I have started to configure my model, seems to be ok after few hours of testing.
11:57 predat Hello, new to glusterfs, I’d like to know about macosx clients and glusterfs (NFS, CTDB, native gluster client and netatalk). Any experience with it?
11:58 ndevos REMINDER: in ~2 minutes from now, the weekly Gluster community meeting starts in #gluster-meeting
11:59 morphkurt question about replication vs distribution mode.
12:00 JustinClift predat: Use NFS for now with OSX clients
12:00 morphkurt for my use case where the second server reads straight away after a write, is distribution good enough?
12:00 JustinClift predat: There was some work being done for a native OSX client, but no-one's been chasing it down with a focus recently, so I'm not sure if it's working
12:00 morphkurt I found replication mode is very CPU hungry
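Example: a hedged sketch of the two layouts being weighed; server and brick names are made up. Distribute spreads files across the two nodes with no redundancy, while replica 2 writes every file to both bricks (the extra write and self-heal bookkeeping is the CPU cost morphkurt mentions).
    gluster volume create video-dist dl380-1:/data/brick1 dl380-2:/data/brick1
    gluster volume create video-rep replica 2 dl380-1:/data/brick1 dl380-2:/data/brick1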
12:00 raghu joined #gluster
12:00 predat JustinClift: Some people say that NFS on macosx sucks
12:01 JustinClift predat: Yes, it does
12:01 JustinClift But it's your best option atm :(
12:01 JustinClift OSX'
12:01 JustinClift OSX's Finder client is terrible with OSX
12:01 JustinClift But it's worse with everything else
12:02 JustinClift Use NFS mounted from the command line, and don't use Finder with the NFS volumes :)
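Example: a hedged version of the command-line NFS mount suggested above, assuming a volume named myvol served by server1; Gluster's built-in NFS server speaks NFSv3 over TCP, and OS X clients usually need a reserved source port (resvport) unless insecure ports are allowed on the server side. Option spellings can vary between OS X releases.
    sudo mkdir -p /Volumes/myvol
    sudo mount -t nfs -o resvport,vers=3,tcp server1:/myvol /Volumes/myvol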
12:02 predat And what do you think about CTDB (samba) with macosx ?
12:04 zeittunnel joined #gluster
12:06 overclk joined #gluster
12:06 John_HPC joined #gluster
12:06 sakshi joined #gluster
12:08 julim joined #gluster
12:08 John_HPC Quick Question: Rebalancing only moves the files that need to be moved? As in, once the new hex value is set for each brick, it only really moves those files that may be on the edges or already linked to another location?
12:09 JustinClift predat: I have no opinion of it, as I haven't used it
12:09 firemanxbr joined #gluster
12:09 JustinClift s/terrible with OSX/terrible with NFS/
12:09 glusterbot What JustinClift meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
12:15 predat JustinClift: I’ve tried https://forge.gluster.org/~schafdog/glusterfs-core/osx-glusterfs but I get a lot of Finder freezes due to missing files (resource forks)
12:16 ppai joined #gluster
12:18 predat What do you think about Red Hat Storage support ?
12:18 JustinClift predat: If you're interested in getting the problem solved properly, it just needs someone to persistently try the OSX native client, file bugs where it's broken (might be in basic compilation atm), and then test the fixes + report if they work or not.
12:18 jcastill1 joined #gluster
12:18 JustinClift If you have the energy for it, that would actually be really welcome :)
12:19 JustinClift predat: I have tried compiling Gluster on OSX recently.  It's possible it's in a good state, but I'm unsure.
12:20 rgustafs joined #gluster
12:20 predat Compilation worked for me, but using it is another story...
12:21 legreffier semiosis: are you up
12:25 JustinClift predat: Please file bugs.  Really. :)
12:25 JustinClift predat: There are people actively interested in this stuff, it's just that they're not focused on it
12:25 JustinClift predat: So, someone filing bugs on it will get their attention :)
12:26 JustinClift Gah, my grammar + typing isn't winning today
12:26 predat JustinClift: ok, understood! Thank you for your response !
12:27 predat JustinClift: Are you the guy who wrote ‘glusterfs_macosx_dot_file_handler’ on github ?
12:28 JustinClift This? https://github.com/justinclift/glusterfs_macosx_dot_file_handler
12:29 predat yes
12:29 JustinClift Yeah.  It was *ages* ago though, and I don't even remember the code any more
12:29 JustinClift The concept is still valid though, but it would need more work to make it practical
12:29 JustinClift eg it would need to load that new translator automatically as part of volumes
12:30 JustinClift Hacking the NFS volfile manually isn't a long term solution, as Gluster regenerates the volfile spontaneously
12:32 JustinClift predat: If my responses sound a bit less optimistic/caring than desired, I'm sorry.  I'm outta here in two weeks, so am mostly finishing off existing things and not really into putting time into ancient things
12:32 JustinClift ;)
12:32 smohan joined #gluster
12:33 predat JustinClift: Don’t worry! Any information is good to have. Thank you
12:33 JustinClift :)
12:35 jcastillo joined #gluster
12:35 rjoseph joined #gluster
12:35 plarsen joined #gluster
12:35 ppai joined #gluster
12:40 [Enrico] joined #gluster
12:44 [Enrico] joined #gluster
12:45 John_HPC Quick Question: Rebalancing only moves the files that need to be moved? As in, once the new hex value is set for each brick, it only really moves those files that may be on the edges or already linked to another location? This is what I'm currently looking at (and scared): http://paste.ubuntu.com/11730296/
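Example: roughly, yes; each directory carries a per-brick hash-range layout, and a rebalance migrates only the files whose hash now falls in a range owned by a different brick (plus cleanup of existing link-to files). A hedged way to inspect those ranges, run directly on a brick directory (the path is illustrative):
    getfattr -n trusted.glusterfs.dht -e hex /export/brick1/somedir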
12:45 [Enrico] joined #gluster
12:48 [Enrico] joined #gluster
12:49 dusmant joined #gluster
12:51 kanagaraj joined #gluster
12:53 chirino joined #gluster
13:02 hagarth joined #gluster
13:05 itisravi joined #gluster
13:05 julim joined #gluster
13:06 aravindavk joined #gluster
13:09 jcastill1 joined #gluster
13:12 meghanam joined #gluster
13:19 hamiller joined #gluster
13:20 kovshenin joined #gluster
13:21 spalai joined #gluster
13:24 kovshenin joined #gluster
13:24 mkzero joined #gluster
13:25 jcastillo joined #gluster
13:30 georgeh-LT2 joined #gluster
13:33 anti[Enrico] joined #gluster
13:47 DV joined #gluster
13:49 nbalacha joined #gluster
13:57 zeittunnel joined #gluster
13:58 Trefex one of my gluster nodes just became non-responsive with the following error messages: http://ur1.ca/mupx8
13:59 Trefex i use gluster 3.6.3 with kernel 3.10.0-229.4.2.el7.x86_64 on CentOS 7.1
13:59 Trefex does anybody have any ideas?
14:06 aravindavk joined #gluster
14:10 semiosis legreffier: i am here
14:11 DV joined #gluster
14:12 aravindavk joined #gluster
14:12 aravindavk joined #gluster
14:14 aravindavk joined #gluster
14:14 kbyrne joined #gluster
14:17 dgandhi joined #gluster
14:18 side_control Trefex: check your drives (i had something similar when my lsi went tits up)
14:19 smohan joined #gluster
14:23 ashiq joined #gluster
14:24 bennyturns joined #gluster
14:24 kanagaraj joined #gluster
14:25 ekuric joined #gluster
14:25 bennyturns my date / time in my gluster logs is off by 4 hours.  my date time on my system is correct.  anyone seen this before?
14:25 bennyturns JoeJulian, ^ any ideas?
14:26 dusmant joined #gluster
14:29 msvbhat bennyturns: gluster logs are in UTC
14:30 bennyturns msvbhat, ok so that is normal?
14:30 msvbhat bennyturns: So the difference you see is after converting to UTC (or both to your time zone)
14:30 msvbhat bennyturns: Yes
14:30 bennyturns msvbhat, kk thks!
14:30 msvbhat bennyturns: AFAIK that is ^^
14:30 bennyturns not sure how I never noticed that
14:31 msvbhat :)
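Example: a small worked check of the UTC point above, assuming the box is in US Eastern (UTC-4 in June), which matches the 4-hour gap bennyturns sees; the timestamp is illustrative and GNU date is assumed.
    date -d '2015-06-17 14:25 UTC'    # prints the local-time equivalent, 10:25 EDT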
14:34 bennyturns msvbhat, http://www.gluster.org/pipermail/gluster-users/2014-November/019673.html
14:34 bennyturns msvbhat, I am seeing tons of messages like:
14:34 [Enrico] joined #gluster
14:35 bennyturns msvbhat, https://paste.fedoraproject.org/233076/51712143/
14:35 Trefex side_control: we do a weekly zfs scrubbing, and it's all fine
14:36 bennyturns msvbhat, it's only from one node.  Is that node losing and regaining connection all the time?  it happens every couple minutes and clients are occasionally unable to access files
14:36 bennyturns they get file not found
14:38 msvbhat bennyturns: Don't know about that? They are all in Info level
14:39 bennyturns msvbhat, yeah at first I wasn't worried about them but I see them spamming on 3 of the 4 nodes
14:39 bennyturns and none on the node that they are complaining about
14:40 side_control Trefex: sorry thats all i got, after i rebuilt the raid those messages went away
14:40 msvbhat BTW "Don't know about that?" should've been Don't know about that (no ?) :P
14:40 msvbhat bennyturns: ^^ peers are all connected?
14:41 Trefex side_control: ok thanks for trying :) This happened after 30 hours of heavy load
14:41 bennyturns msvbhat, yeah it looks like it
14:41 Trefex seems to be a kernel bug, disabled some settings, and will try again
14:43 side_control Trefex: what raid are you using out of curiosity?
14:43 msvbhat bennyturns: Hmm, I have no idea, you see this more often or just this once?
14:44 bennyturns msvbhat, it spams the logs constantly
14:44 msvbhat bennyturns: Any gluster dev might have some idea
14:44 side_control Trefex: could be an lsi megaraid issue
14:44 side_control Trefex: it was bad enough that my host power cycled
14:44 bennyturns I just bounced all the nodes and fixed an MTU problem on one of the NICs, lets see if it helps
14:44 Trefex i believe this is Areca
14:45 msvbhat bennyturns: Okay
14:45 side_control nm then
14:45 kdhananjay joined #gluster
14:53 spalai joined #gluster
14:58 shubhendu joined #gluster
15:05 dusmant joined #gluster
15:08 marcoceppi joined #gluster
15:16 ashiq joined #gluster
15:17 spalai joined #gluster
15:19 elico joined #gluster
15:20 aravindavk joined #gluster
15:21 arcolife joined #gluster
15:27 ira joined #gluster
15:35 Philambdo joined #gluster
15:39 chirino joined #gluster
15:48 kovshenin joined #gluster
15:51 CyrilPeponnet Hiey guys
15:52 CyrilPeponnet I got something really bad this weekend. My 3-node setup with a replica 3 vol started to split-brain on every file; looks like one of the bricks was somehow corrupted (heal didn't work). I had to remove this brick and now everything is fine... I didn't know split-brain could happen on a 3-node setup (metadata split-brain).
15:56 DV joined #gluster
16:00 wkf joined #gluster
16:02 JoeJulian CyrilPeponnet: I haven't had coffee so I can't think of a scenario where a replica 3 with adequate quorum could split-brain, but I'm sure there's got to be a way.
16:02 nbalacha joined #gluster
16:02 CyrilPeponnet Hmm I didn't set up any quorum
16:02 CyrilPeponnet maybe it could help...
16:02 JoeJulian :)
16:03 CyrilPeponnet does quorum have an impact on overall performance and load average?
16:03 JoeJulian No
16:03 CyrilPeponnet Ok :)
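Example: a hedged sketch of the quorum settings being alluded to, for a replica 3 volume named myvol (the name is made up); client quorum rejects writes when a majority of a replica set is unreachable, and server quorum stops bricks when the trusted pool loses its majority.
    gluster volume set myvol cluster.quorum-type auto
    gluster volume set myvol cluster.server-quorum-type server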
16:04 CyrilPeponnet @ndevos any idea when 3.5.5 will be out with the rmtab backport fix? (I noticed that pruning this file often helps with load average).
16:08 CyrilPeponnet @JoeJulian have a good coffee them :p
16:08 CyrilPeponnet s/m/n
16:10 JoeJulian CyrilPeponnet: You can always check the contents of the latest planning meeting from http://meetbot.fedoraproject.org/meetbot/gluster-meeting/
16:10 JoeJulian Or just join #gluster-meeting
16:12 CyrilPeponnet what's the purpose of this channel?
16:17 Leildin joined #gluster
16:18 pdrakeweb joined #gluster
16:22 plarsen joined #gluster
16:25 soumya_ joined #gluster
16:33 JoeJulian That's where planning and triage meetings happen.
16:40 aravindavk joined #gluster
16:41 RameshN joined #gluster
16:42 plarsen joined #gluster
16:43 R0ok__ joined #gluster
16:43 natgeorg joined #gluster
16:43 pdrakewe_ joined #gluster
16:46 fsimonce joined #gluster
16:47 chirino joined #gluster
16:49 cholcombe joined #gluster
16:52 jcastill1 joined #gluster
16:54 stickyboy joined #gluster
16:54 R0ok_ joined #gluster
16:57 jcastillo joined #gluster
17:03 papamoose1 joined #gluster
17:06 Twistedgrim joined #gluster
17:18 Rapture joined #gluster
17:43 plarsen joined #gluster
17:43 smohan joined #gluster
17:48 Intensity joined #gluster
17:53 spalai joined #gluster
17:57 Philambdo joined #gluster
17:59 dgbaley joined #gluster
18:00 dgbaley Is there a recommended method/PPA for getting qemu with libgfapi on Ubuntu?
18:00 dgbaley This would be for OpenStack
18:06 ttkg joined #gluster
18:07 _appelgriebsch joined #gluster
18:09 appelgriebsch joined #gluster
18:09 rotbeard joined #gluster
18:10 appelgriebsch joined #gluster
18:13 appelgriebsch joined #gluster
18:33 sysadmin-di2e joined #gluster
18:33 papamoose1 joined #gluster
18:34 B21956 joined #gluster
18:34 spalai left #gluster
18:35 sysadmin-di2e How can I remove this error? "2015-06-17 18:19:42.349800] W [client-rpc-fops.c:2766:client3_3_lookup_cbk] 0-uncl-client-3: remote operation failed: No such file or directory. Path: <gfid:dbe91c5c-6b62-4ac4-8377-00cc3233a0f2> (dbe91c5c-6b62-4ac4-8377-00cc3233a0f2)"
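Example: a hedged first step for messages like this is to map the gfid back to a file on the bricks; the .glusterfs directory keeps an entry named after the gfid under its first two byte pairs (the brick path below is illustrative).
    ls -l /export/brick1/.glusterfs/db/e9/dbe91c5c-6b62-4ac4-8377-00cc3233a0f2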
18:36 rehunted joined #gluster
18:37 spot joined #gluster
18:37 rehunted Hello. If I use two servers and one stops, how does glusterfs deal with that? I mean, does the system wait for the secondary server to return to the network, or does it continue?
18:40 shaunm joined #gluster
18:40 aaronott joined #gluster
18:50 arcolife joined #gluster
18:52 kkeithley If you use two servers for a "replica 2" then clients will not notice, they'll read/write from/to the running server. For a "distribute" volume (the default) if the file is on the server that's up, the clients will read/write from/to it. If the file is on the server that's down, they'll get an I/O error
18:57 rehunted kkeithley, thank you. I will set up with replica = 3 and three machines. I'm used to 'postgres replication', so i'm a little lost here
18:57 rehunted in postgres, you can do replication sync and async, and from what I understand, glusterfs is sync (because of locks and posix), but only until the other servers go down
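Example: a hedged note on the question above; replication in Gluster (AFR) is synchronous to all reachable replicas, and a server that was down is caught up by self-heal when it returns rather than blocking writers. Assuming a volume named myvol, pending heals can be listed with:
    gluster volume heal myvol info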
19:17 ekuric joined #gluster
19:44 asengupt_ joined #gluster
19:46 chirino joined #gluster
19:48 smohan joined #gluster
19:51 glusterbot News from resolvedglusterbugs: [Bug 1223286] [geo-rep]: worker died with "ESTALE" when performed rm -rf on a directory from mount of master volume <https://bugzilla.redhat.com/show_bug.cgi?id=1223286>
19:54 gnudna joined #gluster
19:54 gnudna left #gluster
20:00 Trefex joined #gluster
20:00 spalai joined #gluster
20:05 unclemarc joined #gluster
20:15 ConSi joined #gluster
20:15 ConSi hi folks!
20:16 ConSi I've a question about the disperse volume type
20:16 ConSi Anyone using it in a production environment?
20:19 ConSi I've 10G lan, 3 nodes with about 800GB SSD space in raid5
20:19 ConSi and I want to set up shared storage between them with maximum capacity and redundancy
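Example: a hedged sketch of the dispersed layout being considered; with 3 bricks and redundancy 1 the usable capacity is roughly 2/3 of raw (about 1.6TB if each node contributes ~800GB), and the volume survives one node being down. Hostnames and brick paths are made up.
    gluster volume create ec-vol disperse 3 redundancy 1 node1:/data/brick1 node2:/data/brick1 node3:/data/brick1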
20:33 scubacuda joined #gluster
20:45 premera joined #gluster
20:45 hagarth joined #gluster
20:46 nage joined #gluster
20:50 tessier joined #gluster
20:50 chirino joined #gluster
20:50 shaunm joined #gluster
20:51 morphkurt joined #gluster
20:55 spot joined #gluster
20:59 jrm16020 joined #gluster
21:22 badone joined #gluster
21:26 unclemarc joined #gluster
21:40 Philambdo joined #gluster
21:42 ira joined #gluster
22:25 Philambdo joined #gluster
22:36 deniszh joined #gluster
23:21 gildub joined #gluster
23:23 magamo joined #gluster
23:23 magamo Hey folks, I have a question about geo-rep.
23:24 magamo I have our primary gluster volumes replicating to another volume, so far within the same datacenter, but have plans to cascade across the WAN to another datacenter.
23:24 magamo Problem is:  It only ever seemed to replicate 346GB out of the 2.6TB volume.
23:28 magamo Currently running gluster 3.6.2, but have plans to try a rolling update to gluster 3.7.1 tonight.
23:33 magamo Any ideas why this might not be working properly?
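Example: a hedged first check for the shortfall described above, assuming a master volume named mastervol and a slave at slavehost::slavevol (both names are made up); the detail view shows per-brick sync counts, failures, and crawl status.
    gluster volume geo-replication mastervol slavehost::slavevol status detail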
