
IRC log for #gluster-dev, 2017-07-19


All times shown according to UTC.

Time Nick Message
00:27 vbellur joined #gluster-dev
01:22 Alghost joined #gluster-dev
01:25 Saravanakmr joined #gluster-dev
01:26 wushudoin joined #gluster-dev
01:34 riyas joined #gluster-dev
01:49 ilbot3 joined #gluster-dev
01:49 Topic for #gluster-dev is now Gluster Development Channel - https://www.gluster.org | For general chat go to #gluster | Patches - https://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:38 prasanth joined #gluster-dev
03:09 mchangir joined #gluster-dev
03:29 susant joined #gluster-dev
03:32 riyas joined #gluster-dev
03:39 ppai joined #gluster-dev
03:40 psony joined #gluster-dev
03:48 nbalacha joined #gluster-dev
04:07 skumar joined #gluster-dev
04:10 ppai nigelb, the git blame in the new gerrit staging instance is a good feature. Just hovering over a single blame entry shows the entire commit message :)
04:10 atinmu joined #gluster-dev
04:11 itisravi joined #gluster-dev
04:13 pranithk1 joined #gluster-dev
04:20 poornima joined #gluster-dev
04:30 Shu6h3ndu__ joined #gluster-dev
04:51 atalur joined #gluster-dev
04:55 atalur pranithk1, I had a few questions about disperse volume. Let me know when you have some time?
04:55 pranithk1 atalur: hey! now
04:55 atalur pranithk1, hey!
04:56 atalur pranithk1, okay. In disperse volumes, is there a way of knowing whether a file is in good condition in min # of bricks from the mount?
04:56 atalur pranithk1, I realize that ops will fail with EIO otherwise. But is that enough confirmation?
04:57 pranithk1 atalur: if we do getfattr -n trusted.ec.heal /path/to/file it prints good/bad after the heal actually. But not sure what happens when things are not good. I don't remember if it fails with EIO or prints the info even in that case
04:57 pranithk1 atalur: you can try that out...
04:58 atalur pranithk1, hey, that helps! I will try that out
04:59 pranithk1 atalur: This is something we didn't touch for like more than a year, so I don't remember how it works now :-). If you find some bugs, send some patches
05:00 pranithk1 atalur: IMO it is a good place to have this functionality. Before sending a patch, send a mail on gluster-devel CCing Xavi also to know his thoughts...
05:00 pranithk1 atalur: ok na?
05:00 atalur pranithk1, sure :-)
05:00 pranithk1 atalur: cool
05:00 atalur pranithk1, okay sir :-)
05:01 atalur pranithk1, thanks for the help! pranithk1++
05:01 glusterbot atalur: pranithk1's karma is now 12
05:02 pranithk1 atalur: yeah, I like free patches :-D
05:03 pranithk1 atalur: trusted.ec.heal="Good: 000, Bad: 111", It is supposed to say All good and nothing else is bad, so may be there are issues :-)
05:10 atalur pranithk1, uh oh! :-D I will check that part of code for my understanding and then send a mail on gluster-devel.
05:11 msvbhat joined #gluster-dev
05:11 pranithk1 atalur: okay madam.
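
For reference, the check described above can also be done from a program. Below is a minimal sketch using the standard getxattr(2) call; the xattr name trusted.ec.heal and the "Good: ..., Bad: ..." value format come from the conversation above, and whether the call fails with EIO on an unrecoverable file is exactly the open question atalur still has to verify.

    /* Minimal sketch: read the disperse heal status of a file on a gluster
     * mount via its trusted.ec.heal xattr, as discussed above.  What happens
     * when the file is beyond repair (EIO vs. printing the info anyway) is
     * the part that still needs to be checked. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/xattr.h>

    int main(int argc, char **argv)
    {
        char value[512];

        if (argc != 2) {
            fprintf(stderr, "usage: %s /path/to/file-on-gluster-mount\n", argv[0]);
            return 1;
        }

        ssize_t len = getxattr(argv[1], "trusted.ec.heal", value, sizeof(value) - 1);
        if (len < 0) {
            /* EIO here may simply mean the file cannot be recovered. */
            fprintf(stderr, "getxattr(trusted.ec.heal): %s\n", strerror(errno));
            return 1;
        }

        value[len] = '\0';
        printf("%s\n", value);   /* e.g. "Good: 000, Bad: 111" as quoted above */
        return 0;
    }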
05:14 ashiq joined #gluster-dev
05:16 ashiq joined #gluster-dev
05:20 skoduri joined #gluster-dev
05:29 susant joined #gluster-dev
05:29 Saravanakmr joined #gluster-dev
05:30 atinm_ joined #gluster-dev
05:33 karthik_us joined #gluster-dev
05:33 ashiq joined #gluster-dev
05:42 pkalever joined #gluster-dev
05:43 jiffin joined #gluster-dev
05:47 ankitr joined #gluster-dev
05:52 ankitr joined #gluster-dev
05:54 apandey joined #gluster-dev
06:02 hgowtham joined #gluster-dev
06:10 ankitr joined #gluster-dev
06:11 sanoj joined #gluster-dev
06:16 kdhananjay joined #gluster-dev
06:27 pranithk1 ppai: hey, I wanted to understand more about dm-delay you mentioned on github for delay-gen issue. I do not think we can delay specific fops using dm-delay, if I understood it correctly. Just want to make sure if that is the case or not, before going ahead with the feature
06:28 pranithk1 ppai: It seems more like we can delay READS/WRITES because it is at device mapper layer
06:28 ppai pranithk1, dm-delay is at the block layer
06:29 ppai pranithk1, so yeah, like you said just read and write can be throttled
06:29 sanoj joined #gluster-dev
06:29 pranithk1 ppai: If I had known about this, I would not have implemented delay-gen at all :-)
06:29 ppai pranithk1, XFS will be seeing this additional delay and naturally it'll be propagated to the upper layers
06:30 pranithk1 ppai: okay, I think I got it, thanks!
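
To make the point above concrete, one way to see that the dm-delay injection only shows up on reads and writes is to time an I/O against a dm-delay target. A rough sketch follows; /dev/mapper/delayed is a hypothetical target that would have to be created with dmsetup beforehand, and the measured latency is whatever delay was configured there.

    /* Rough sketch: time one read from a hypothetical dm-delay target
     * (/dev/mapper/delayed, created with dmsetup beforehand).  Because the
     * delay sits at the block layer, it is visible only on READ/WRITE, which
     * is why dm-delay cannot throttle individual gluster fops the way
     * delay-gen can. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        struct timespec t0, t1;

        int fd = open("/dev/mapper/delayed", O_RDONLY);  /* hypothetical target */
        if (fd < 0) {
            perror("open");
            return 1;
        }

        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (read(fd, buf, sizeof(buf)) < 0)
            perror("read");
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("first 4 KiB read took %.1f ms\n", ms);

        close(fd);
        return 0;
    }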
06:33 rafi joined #gluster-dev
06:37 rastar joined #gluster-dev
06:38 sona joined #gluster-dev
06:44 ndarshan joined #gluster-dev
06:55 atinm joined #gluster-dev
06:55 msvbhat joined #gluster-dev
07:04 msvbhat_ joined #gluster-dev
07:38 msvbhat joined #gluster-dev
07:44 ankitr joined #gluster-dev
08:17 ndarshan joined #gluster-dev
08:26 sanoj joined #gluster-dev
08:32 ndarshan joined #gluster-dev
08:45 sunkumar joined #gluster-dev
08:55 ppai kshlm, can you merge #335 ?
08:56 kshlm ppai, Thanks for reminding. I should have done that yesterday.
08:56 rafi2 joined #gluster-dev
08:58 mchangir joined #gluster-dev
09:00 atinm_ joined #gluster-dev
09:09 jiffin1 joined #gluster-dev
09:10 nigelb ppai: yeah, I liked that too :)
09:13 ndarshan joined #gluster-dev
10:06 rafi1 joined #gluster-dev
10:09 msvbhat_ joined #gluster-dev
10:41 atinm_ joined #gluster-dev
10:59 anoopcs ndevos, Why are the Fedora package-related points of contact not listed in MAINTAINERS 2.0?
11:01 sanoj joined #gluster-dev
11:03 ndevos anoopcs: because kkeithley/nigelb wanted them removed, just like for other distributions that provide packages but are not listed
11:04 anoopcs ndevos, But I see CentOS, OpenSuSE, Debian etc..
11:04 anoopcs Even Ubuntu :-/
11:06 ndevos anoopcs: yeah, opinions are diverse... I really dont care if we list them or not, but the current state seems to be what kkeithley/nigelb preferred
11:06 ndevos I don't remember the rationale for (not) listing some
11:07 anoopcs I don't think it's right to eliminate just Fedora. I tend to disagree
11:07 ndevos Arch Linux also provides glusterfs packages, and so do other distributions, we dont list everything
11:07 anoopcs May be even more...
11:08 ndevos maybe we should just list "Packages on download.gluster.org" to make it less confusing?
11:08 anoopcs Either everything or make it general
11:09 ndevos I dont think we can list *everything*, making it more general would have my preference
11:09 ndevos we can also link to http://gluster.readthedocs.io/en/latest/Install-Guide/Community_Packages/ that way
11:10 anoopcs That's better..
11:12 ndevos we can update that bit later on, we really need to get the M/P changes merged at one point :)
11:22 shyam joined #gluster-dev
11:35 rastar joined #gluster-dev
11:38 kkeithley I'm not sure I agree with the statement that I wanted the Fedora packaging PoC (i.e. me) removed.
11:39 kkeithley I questioned the utility of listing the official Debian packager. We don't have much, if any, control over that. Patrick almost never replies when I send him questions. Also we do our own community packages and we did not list the PoC (again me) for that.
11:41 kkeithley Debian, Ubuntu, CentOS, and SuSE have their own channels for reaching the packagers in their base OS.
11:42 kkeithley IMO our list is for contacting our maintainers of our Community packages.
11:43 kkeithley Yes, Fedora does, in most respects, kinda match the Deb/Ub/Cent/Su official packages.
11:44 kkeithley So it's the exception that proves the rule?
11:46 kkeithley So I guess it's the exception that proves the rule
11:50 rafi1 joined #gluster-dev
12:04 nigelb I'll echo kkeithley's sentiment there. I didn't argue for Fedora to be removed. In fact, I remember pointing out that Fedora is a special case.
12:04 nigelb (And so is Centos)
12:04 nigelb My only argument was that we should list only things that we have any control over.
12:07 kkeithley CentOS should have the same glusterfs packages in their base that RHEL has in its. So it's a slightly different special case.
12:09 kkeithley And I don't know the specific details about who updates those glusterfs packages in the CentOS base.
12:10 kkeithley In any event it's not us.
12:15 itisravi joined #gluster-dev
12:16 darshan joined #gluster-dev
12:29 rafi1 joined #gluster-dev
12:49 sona joined #gluster-dev
12:52 darshan joined #gluster-dev
12:57 jstrunk joined #gluster-dev
13:03 darshan joined #gluster-dev
13:36 kdhananjay joined #gluster-dev
13:46 ndevos kkeithley: did you ever look at the (new) mem-pool implementation?
13:46 ndevos if so, do you have an opinion about https://bugzilla.redhat.com/show_bug.cgi?id=1461543#c52 ?
13:46 glusterbot Bug 1461543: high, unspecified, ---, kkeithle, POST , [Ganesha] : Ganesha occupies ~6G of memory (RES size),even when all the files are deleted  from the mount,suspecting  mem leak.
13:46 ndevos that comment has a patch that I have not posted for review yet, but I think the theory makes sense
13:46 ndevos just not sure if ganesha would be affected by it
13:56 nbalacha joined #gluster-dev
13:56 kkeithley ndevos: I looked, at a pretty high level.
14:01 pranithk xavih: EC found an age-old bug in locks xlator :-)
14:01 kkeithley ndevos: FSAL_GLUSTER doesn't create any threads, not directly anyway.
14:01 pranithk xavih: I think I need to learn strict coding from you! i.e. check correct request and correct response
14:01 kkeithley only what glfs_init() and glfs_fini() might create.
14:02 pranithk xavih: locks xlator seems to assume that frame->root->pid and flock->l_pid will be the same everywhere. It seems like gNFS has frame->root->pid as 1 and l_pid as 5, whereas in fuse they are the same
14:04 kkeithley ndevos: I guess I'm a little suspicious of moving a mem_put()'d (hot?) allocation into the mem-pool  of the current running thread.
14:05 kkeithley why do you want to not free it immediately if its thread has exited?
14:05 kkeithley just curious.
14:06 ndevos kkeithley: it is not about what creates the threads, it is about which threads call  mem_get() and mem_put()
14:07 kkeithley right, I understood (I think) that part.
14:07 ndevos kkeithley: well, if we free the allocation on mem_put(), the mem-pool would be pretty much useless, it wont keep a pool of allocations ready for quick re-use
14:08 ndevos when building with "./configure --disable-mempool", mem_get() and mem_put() will be just like malloc() and free()
14:08 kkeithley yes
14:09 kkeithley also with debug builds (-DDEBUG) right?
14:09 ndevos not sure about that, I think -DDEBUG actively invalidates the memory instead of free'ing it
14:10 ndevos but that is not done in the mem-pool implementation now, only in gf_free(), I think
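
As a side note, the fallback ndevos mentions a few lines up is easy to picture. A simplified sketch follows; the macro name and the signatures here are stand-ins for illustration, not the actual glusterfs source.

    /* Simplified model (stand-in names, not the real glusterfs code): with the
     * pool compiled out via something like ./configure --disable-mempool, the
     * pool calls degenerate to plain malloc()/free() and nothing is cached per
     * thread. */
    #include <stdlib.h>

    #ifdef DISABLE_MEMPOOL                    /* stand-in for the configure switch */
    static inline void *mem_get(size_t size) { return malloc(size); }
    static inline void  mem_put(void *ptr)   { free(ptr); }
    #else
    /* pool enabled: objects come from, and return to, per-thread hot/cold
     * lists for quick re-use */
    void *mem_get(size_t size);
    void  mem_put(void *ptr);
    #endif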
14:10 kkeithley there's a mem-pool per thread. Your comment on the BZ says that if the thread exits the mem_put()'d memory (on the hot list, not actually freed yet) gets moved to the mem-pool of the current running thread. Do I have that correct?
14:10 ndevos no, thats not what I mean
14:11 kkeithley okay
14:11 ndevos if a thread exits, the pool for that thread is marked inactive
14:11 ndevos it does not affect the existing allocations directly
14:12 ndevos the hot_list and cold_list of the inactive thread will be cleaned (all objects will be free()'d)
14:12 ndevos active objects are not in the hot_list or cold_list
14:12 ndevos once mem_put() is done, the object to be free'd is moved to the hot_list of the original-mem_get()-thread
14:13 ndevos that can be an inactive thread (on the pool_free_list)
14:13 kkeithley okay so far
14:13 ndevos the inactive threads are not inspected for cleaning the hot_list or cold_list - they are inactive after all
14:14 ndevos that means, the object that was intended to be free'd, is appended to the hot_list of an inactive thread
14:14 ndevos and it will stay there indefinitely
14:14 kkeithley okay
14:15 ndevos until a new thread is started, and the inactive thread-structure is re-used
14:15 kkeithley IOW nothing ever cleans up the inactive (dead) thread mem-pools
14:15 ndevos (which might re-initialize an empty hot_list, which I did not check)
14:15 ndevos indeed, nothing cleans them up, I think
14:16 ndevos or, I don't think anything inspects the contents of an inactive thread mem-pool, and hence nothing deletes the objects in its hot_list
14:17 ndevos ... so far the theory, that is what I had this morning, and needs some verification if it is correct
14:17 kkeithley okay
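
Putting the theory above into code form, here is a deliberately simplified model of the suspected path; the struct and function names are invented for illustration and do not match the real mem-pool source.

    /* Simplified model of the suspected leak (invented names, not the actual
     * glusterfs mem-pool code).  Each thread owns a pool; every object
     * remembers the pool it came from; mem_put() always returns the object to
     * that owner's hot list.  If the owning thread has already exited, the
     * pool sits inactive on a free list, the sweeper never inspects it, and
     * the returned object stays there indefinitely. */
    #include <stdbool.h>
    #include <stddef.h>

    struct thread_pool;

    struct obj {
        struct obj         *next;
        struct thread_pool *owner;   /* pool that originally handed this out */
        char                data[64];
    };

    struct thread_pool {
        bool        active;          /* false once the owning thread exits */
        struct obj *hot_list;        /* recently returned objects */
        struct obj *cold_list;       /* objects demoted by the sweeper */
    };

    /* Called by whichever thread is done with the object. */
    void model_mem_put(struct obj *o)
    {
        struct thread_pool *pool = o->owner;

        /* Always goes back to the *originating* thread's pool ... */
        o->next = pool->hot_list;
        pool->hot_list = o;
        /* ... even when pool->active is false; the sweeper below will then
         * never look at it again. */
    }

    /* Called periodically by the sweeper thread. */
    void model_sweep(struct thread_pool *pools, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            if (!pools[i].active)
                continue;            /* inactive pools are skipped entirely */
            /* demote hot_list to cold_list, free the old cold_list, etc. */
        }
    }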
14:20 ndevos oh, and it is only important for ganesha in case it starts and stops threads actively; if all threads get started once and only exit on process termination, then this should not be a problem for that use-case
14:29 kkeithley hmm. Okay I think. Do we get a thread for every glfs_init(), i.e. every time we create an export?
14:29 ndevos no, the threads I am speaking about are the ones from the application
14:29 kkeithley If someone is actively creating and removing exports?
14:30 ndevos ganesha has like 254 worker threads, each thread that does a mem_get() or mem_put() will get its own per-thread pool
14:30 kkeithley ganesha's worker threads aren't using gluster mem-pools.
14:31 ankitr joined #gluster-dev
14:31 ndevos they will, once they call mem_get()
14:32 kkeithley they would only do that in FSAL_GLUSTER. And they don't. Not directly. By virtue of calling some glfs_foo() maybe
14:33 ndevos yes, whatever thread calls glfs_*() might get its own per-thread mem-pool
14:34 ndevos if all FSAL_GLUSTER actions are pinned to a single (or few) threads, it will be relatively contained
14:34 kkeithley ganesha's worker threads don't ever exit. At least not in the normal case.
14:34 ndevos if any of the worker threads can go through FSAL_GLUSTER, you potentially have 254 mem-pools, and each has 14 hot_list and 14 cold_list instances
14:35 kkeithley so the sweeper thread should have a chance to reap the mem_put()'d memory.
14:35 ndevos in that case, yes, I would say so
14:36 ndevos unfortunately we dont have state-dumps for mem-pools with the new implementation, that is something that still needs to be added
14:43 kkeithley ndevos: how about (at a minimum) debug-level logging of the sweeper running, and maybe free()ing or GF_FREE()ing memory
14:43 kkeithley ?
14:44 ndevos we could do that, but in a busy environment this might cause a *lot* of messages
14:45 ndevos maybe we can add a summary in pool_sweeper() on how many threads were cleaned (death_row) and how many objects were free'd
14:47 sunkumar joined #gluster-dev
14:51 kkeithley anything better than nothing I suppose. statedumps would be better
14:56 ndevos kkeithley: this would log some counters, is that what you were thinking about? http://termbin.com/zyk6
14:57 ndevos (untested, except that it compiles)
14:57 kkeithley isn't that all that matters? ;-)
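
For the record, the shape of what is being proposed is roughly the following sketch; the names are made up for illustration and the fprintf() is a stand-in for whatever logging macro the actual termbin patch uses.

    /* Sketch only (made-up names, not the actual termbin patch): keep per-run
     * counters in the sweeper and emit one summary line per sweep, so a busy
     * system is not flooded with a message per freed object. */
    #include <stdio.h>

    struct sweep_stats {
        unsigned long threads_reaped;   /* pools torn down from the death row */
        unsigned long objects_freed;    /* hot/cold list entries released */
    };

    /* fprintf() here is a stand-in for the project's real logging macro. */
    void log_sweep_summary(const struct sweep_stats *st)
    {
        fprintf(stderr,
                "mem-pool sweep: reaped %lu thread pool(s), freed %lu object(s)\n",
                st->threads_reaped, st->objects_freed);
    }

    int main(void)
    {
        struct sweep_stats st = { .threads_reaped = 2, .objects_freed = 37 };
        log_sweep_summary(&st);          /* example values only */
        return 0;
    }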
14:57 atinmu joined #gluster-dev
15:03 kshlm Community meeting is on now in #gluster-meeting
15:04 shyam joined #gluster-dev
15:06 wushudoin joined #gluster-dev
15:16 msvbhat joined #gluster-dev
15:26 cholcombe joined #gluster-dev
15:46 susant joined #gluster-dev
15:51 rastar joined #gluster-dev
16:03 rastar joined #gluster-dev
16:44 atinmu joined #gluster-dev
17:34 atalur joined #gluster-dev
17:44 vbellur joined #gluster-dev
18:16 msvbhat joined #gluster-dev
18:27 sunkumar joined #gluster-dev
18:28 vbellur joined #gluster-dev
20:05 tinyurl_comSLASH joined #gluster-dev
20:08 tinyurl_comSLASH left #gluster-dev
20:26 vbellur joined #gluster-dev
21:23 wushudoin joined #gluster-dev
21:33 bwerthmann joined #gluster-dev
21:45 shyam joined #gluster-dev
22:25 vbellur joined #gluster-dev
23:13 misc so I finally increased the size of / on the builder in the cage (sorry that it took so long, it was just a 5-minute matter, but I kept forgetting about it :/ )
23:13 misc so the disk filling up should happen less often
23:19 Alghost joined #gluster-dev
23:23 vbellur misc: thank you!
23:26 misc vbellur: thanks to nigelb who has been fixing stuff patiently while I was doing meetings and other tasks :)
23:45 vbellur nigelb++
23:45 glusterbot vbellur: nigelb's karma is now 64
23:45 vbellur misc++
23:45 glusterbot vbellur: misc's karma is now 50
23:48 Alghost joined #gluster-dev
