
IRC log for #gluster-dev, 2016-11-24


All times shown according to UTC.

Time Nick Message
00:36 Muthu joined #gluster-dev
01:43 vbellur joined #gluster-dev
02:22 Muthu joined #gluster-dev
02:23 nishanth joined #gluster-dev
02:48 ilbot3 joined #gluster-dev
02:48 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
03:20 magrawal joined #gluster-dev
03:34 atinm joined #gluster-dev
03:37 nbalacha joined #gluster-dev
03:49 pranithk1 joined #gluster-dev
04:05 itisravi joined #gluster-dev
04:30 vimal joined #gluster-dev
04:38 kdhananjay joined #gluster-dev
05:04 ashiq joined #gluster-dev
05:15 ankitraj joined #gluster-dev
05:15 karthik_us joined #gluster-dev
05:17 rafi joined #gluster-dev
05:23 ppai joined #gluster-dev
05:28 riyas joined #gluster-dev
05:28 atinm joined #gluster-dev
05:31 ashiq_ joined #gluster-dev
05:33 apandey joined #gluster-dev
06:00 sanoj joined #gluster-dev
06:06 skoduri joined #gluster-dev
06:09 itisravi joined #gluster-dev
06:14 hgowtham joined #gluster-dev
06:17 nishanth joined #gluster-dev
06:18 Muthu joined #gluster-dev
06:19 rastar joined #gluster-dev
06:19 jiffin joined #gluster-dev
06:25 pranithk1 joined #gluster-dev
06:26 skoduri joined #gluster-dev
06:38 rastar joined #gluster-dev
06:40 Saravanakmr joined #gluster-dev
06:53 skoduri joined #gluster-dev
06:57 kdhananjay joined #gluster-dev
06:57 gem joined #gluster-dev
07:08 rjoseph pranithk1:  itisravi: Review request for http://review.gluster.org/#/c/15892/
07:09 itisravi rjoseph: I need to refresh http://review.gluster.org/#/c/15673/ today. I can review after that :)
07:10 rjoseph ndevos: nbalacha: Review request for http://review.gluster.org/#/c/15789
07:11 rjoseph itisravi: Hope you will refresh the patch soon :-)
07:15 kdhananjay joined #gluster-dev
07:16 ashiq__ joined #gluster-dev
07:16 Debloper joined #gluster-dev
07:30 gem joined #gluster-dev
07:36 msvbhat joined #gluster-dev
07:53 nishanth joined #gluster-dev
07:59 atinm joined #gluster-dev
08:16 ndevos rjoseph: this note from poornima should be filed as a bug, and mentioned in the commit message: http://review.gluster.org/#/c/15789/1/xlators/features/upcall/src/upcall.c@330
08:18 rraja joined #gluster-dev
08:35 devyani7 joined #gluster-dev
08:43 nishanth joined #gluster-dev
08:44 ashiq joined #gluster-dev
08:48 k4n0 joined #gluster-dev
09:17 apandey xavih: Hi
09:20 xavih apandey: hi
09:20 rjoseph ndevos: sure, will file a bug for this.
09:21 apandey xavih: want to talk about the issue on which we had discussion on mail...
09:22 poornima_ joined #gluster-dev
09:22 xavih apandey: sure
09:24 apandey xavih: So in 4+2, if I kill 2 bricks and got inodelk on the remaining four copies, xattrop should succeed on them, right?
09:26 xavih apandey: yes, otherwise it means that something else is working on the same inode, but this shouldn't happen since we have a lock
09:26 apandey xavih: I am seeing that even if inodelk was successful, xattrop is getting ENOENT from a few bricks. It is not for all the directories but only 1 or 2
09:26 xavih apandey: if all remaining bricks are healthy, this shouldn't happen
09:26 apandey xavih: I am doing rm -rf * from 2 clients on root of the mount
09:27 devyani7 joined #gluster-dev
09:27 xavih apandey: if that's happening, then we have some race in the locking logic...
09:27 xavih apandey: it's weird
09:27 xavih apandey: no client should be able to delete files from a directory while another client is deleting them
09:28 xavih apandey: the clients should compete for the lock, but once acquired, no one else should touch that directory...
09:29 apandey xavih: Yeah, I also think so. But it is happening... and I think it has been happening for a long time; it is just that if you get this error from 4 bricks you will have an answer and will return the ENOENT to the user..
09:30 apandey xavih: However, when you kill 2 bricks and have only minimum number of bricks, you see this IO error on mount point
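[Editor's sketch] The answer-counting behaviour apandey describes (4 matching answers give a clean errno; with two bricks down a 2/2 split gives EIO) can be modelled with a toy sketch. All names here are invented for illustration, not Gluster's actual code; it only mirrors the idea that a 4+2 dispersed volume needs at least 4 bricks to agree on an answer.

```python
import errno

def combine_answers(per_brick, fragments=4):
    """Toy model: group identical per-brick answers; an answer is usable
    only if at least `fragments` bricks agree on it, otherwise the whole
    fop is reported as EIO to the user."""
    counts = {}
    for ans in per_brick:
        counts[ans] = counts.get(ans, 0) + 1
    best, votes = max(counts.items(), key=lambda kv: kv[1])
    return best if votes >= fragments else ("EIO", errno.EIO)

# All 6 bricks up: 4 bricks answer ENOENT, which reaches quorum, so the
# user simply sees ENOENT and never notices the race.
assert combine_answers([("err", errno.ENOENT)] * 4 + [("ok", 0)] * 2) == ("err", errno.ENOENT)

# 2 bricks killed: a 2/2 split among the 4 survivors cannot reach the 4
# matching answers a 4+2 volume needs, so the fop surfaces as EIO.
assert combine_answers([("err", errno.ENOENT)] * 2 + [("ok", 0)] * 2) == ("EIO", errno.EIO)
```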
09:31 xavih apandey: does this happen for files or for directories ?
09:32 xavih apandey: I mean, the entry that is present on some bricks but not the others, is a file or a directory ?
09:32 apandey xavih: directories.
09:33 xavih apandey: and the 4 bricks are healthy before the operation, right ?
09:33 xavih apandey: do you have multiple rm from the same client ?
09:33 apandey xavih: yes all the bricks are healthy...
09:34 xavih apandey: or each client/mount has only a single rm ?
09:34 apandey xavih: no. I am executing only one "rm -rf *" from 2 different mount points
09:35 apandey xavih: https://paste.fedoraproject.org/489070/
09:37 xavih apandey: after the failure, the directory does really exist on the brick that did succeed ?
09:39 apandey xavih: So at the end of the rm -rf * from both the clients all the data from mount point gets deleted..
09:40 xavih apandey: that's weird...
09:40 apandey xavih: and also tried to do a getfattr on this dir, but it was saying no such file or dir..
09:41 apandey xavih: So deletion is happening but with IO error...
09:42 xavih apandey: I think that the deletion has happened in the other client, but somehow the current client still gets valid data from some brick
09:42 xavih apandey: could this be related to the delayed unlink implemented in posix ?
09:42 ashiq joined #gluster-dev
09:45 xavih apandey: it seems that this only works for regular files, not directories...
09:46 apandey xavih: hmmm.. you are talking about .glusterfs/unlink? Yes, if I remember correctly, it was more to do with the anon fd case..
09:47 xavih apandey: yes
09:48 xavih apandey: I'm seeing something that might be related...
09:49 xavih apandey: if we try to lock an inode but that inode is NULL, we silently ignore it and we don't take a lock
09:49 xavih apandey: however I don't see how this could happen
09:53 xavih apandey: I think this could happen if ec receives a loc with loc->parent = loc->inode = NULL and only the other fields are set
09:53 xavih apandey: is that possible ?
09:55 apandey xavih: :) I will have to go through the code to ans this...
09:55 xavih apandey: you are testing this on master, aren't you ?
09:56 apandey xavih: yes
09:56 xavih apandey: ec_loc_parent() tries to build a loc for a parent inode from the loc received
09:56 xavih apandey: if both loc->parent and loc->inode are NULL, it's unable to find the inode corresponding to the parent
09:58 xavih apandey: however, if loc->pargfid is set, it considers the loc with enough information to continue and does not return any error
09:58 xavih apandey: if this loc is used to get an inodelk, it will silently fail because loc->inode is NULL
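[Editor's sketch] What xavih describes can be reduced to a small model (the names `Loc`, `parent_loc_ok` and `take_inodelk` are invented for this sketch; the real code is the C of ec_loc_parent() and the ec locking path): a loc carrying only pargfid passes the parent-building check, but the later inodelk is silently skipped because there is no inode to lock.

```python
class Loc:
    """Simplified stand-in for Gluster's loc_t: only the fields that
    matter for this race."""
    def __init__(self, inode=None, parent=None, pargfid=None):
        self.inode, self.parent, self.pargfid = inode, parent, pargfid

def parent_loc_ok(loc):
    # Mirrors the described behaviour of ec_loc_parent(): a set pargfid
    # is considered "enough information", even with no inode pointers.
    return loc.parent is not None or loc.inode is not None or loc.pargfid is not None

def take_inodelk(loc, held_locks):
    # Mirrors the silent skip: with loc.inode missing, no lock is taken
    # and no error is reported either.
    if loc.inode is None:
        return 0
    held_locks.append(loc.inode)
    return 0

held = []
loc = Loc(pargfid="gfid-of-parent")  # hypothetical gfid value
assert parent_loc_ok(loc)   # the loc is accepted as valid...
take_inodelk(loc, held)
assert held == []           # ...yet no lock actually protects the fop
```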
10:05 apandey xavih: Ok. I am not sure if I understood it completely; I have to go through the code and it will take time. That's the point I did not think of... Let me explore the code and do some experiments..
10:06 xavih apandey: I'm preparing for doing a test to see if I can reproduce it...
10:06 apandey xavih: ok. You know the steps? I have mentioned them in the mail..
10:07 apandey xavih: important thing is that you should have 2 bricks killed..
10:07 xavih apandey: yes, I'll try your steps
10:11 apandey xavih: You should see 2 issues... one where inodelk gets ESTALE from some bricks but returns EIO. This we can solve by going through the cbk's. And the second one is this, where xattrop will give EIO while it is getting ENOENT from the cbk's
10:14 xavih apandey: ok
10:22 pranithk1 xavih: apandey: inode can still be in memory even when it is deleted. So I think the inodelk is coming between unlink and inode-purge on the server side
10:24 xavih pranithk1: but even then, xattrop shouldn't succeed on any brick if the directory has already been removed
10:24 pranithk1 xavih: directory deletion and xattrop can race
10:25 pranithk1 xavih: directory deletion takes lock on parent dir. But xattrop on the directory happens by taking inodelock on the dir that is getting deleted.
10:25 pranithk1 xavih: so unlink of a file and chown on the file can fail with EIO
10:25 xavih pranithk1: no, they can't. The xattrop is sent to the inode we have taken the lock
10:25 pranithk1 xavih: I mean one process doing unlink and other doing chown on the file. chown can fail with EIO
10:26 xavih pranithk1: oh, but that's another problem not related with what apandey is seeing...
10:26 pranithk1 xavih: they can. there are no ordering guarantees between unlink and setattr of the file right
10:26 pranithk1 xavih: no no, they are related
10:26 pranithk1 xavih: The moment I saw the bug apandey mentioned I didn't have a clue how it should be fixed as per current design as they are not ordered...
10:27 pranithk1 xavih: so dentry operation and inode operation can lead to EIO.
10:27 pranithk1 xavih: ^^ this is the bug we need to solve
10:30 apandey pranithk1: I think you have a point when you say "directory deletion takes lock on parent dir. But xattrop on the directory happens by taking inodelock on the dir that is getting deleted". As you suggested the day before, I just modified the code for rmdir to take a lock on the parent and on the dir itself..
10:31 pranithk1 xavih: afr got lucky. It doesn't have this problem. In 2-way replication the fop is a success if it succeeds on any one brick. In 3-way replication it either succeeds on 2, otherwise it sends the op_errno we get from the other bricks to the xlator above
10:31 pranithk1 apandey: that still doesn't solve the problem IMO. There is a very fine race.
10:32 pranithk1 xavih: apandey: I think based on what afr does and if I see ec, the main difference is about errno.
10:32 xavih pranithk1: I think I see the problem now. You think that the xattrop sent just before an rmdir races with the rmdir of the directory itself (which locks the parent)
10:32 pranithk1 xavih: yeah
10:33 pranithk1 xavih: It is not a good idea to order them....
10:33 pranithk1 xavih: I think the issue is with errno. But I am not sure which case we have to give which errno.
10:33 xavih pranithk1: rmdir /a/b/c and rmdir /a/b, right ?
10:34 pranithk1 xavih: yeah, except c is already deleted by the time rmdir /a/b/c is attempted by some other process
10:34 pranithk1 xavih: so rmdir /a/b succeeds
10:35 pranithk1 xavih: so for rmdir /a/b/c, inodelk on 'b' may succeed on some bricks because the inode is yet to be purged.
10:35 pranithk1 xavih: so yeah, very fine race...
10:36 xavih pranithk1: maybe this should be avoided. We shouldn't be able to get a lock on an inode that doesn't really exist...
10:36 pranithk1 xavih: Well the inode has to stay even after deletion for unlocks of inode to succeed etc...
10:37 xavih pranithk1: yes, but that's like the files in posix. You can delete an open file and continue to use it. However once you close/unlock it, the file/lock is removed
10:37 xavih pranithk1: and any pending locks should fail immediately
10:38 xavih pranithk1: I think this would completely remove the problem. I don't see any other safe and efficient solution (we would require to take multiple locks or do extra checks after having a lock, and even this way it wouldn't be possible to avoid all possible cases)
10:38 pranithk1 xavih: inodelk/entrylk is not a posix thing, right? That is the issue. posix locks are on an fd, not on an inode
10:39 pranithk1 xavih: I agree
10:39 xavih pranithk1: no, no, I say to implement inodelk/entrylk as files are implemented in posix, not that they are posix "things"
10:40 pranithk1 xavih: oh
10:40 xavih pranithk1: in the rmdir/unlink cbk of locks xlator we should mark the lock as "invalid" or "deleted" if the operation succeeded
10:40 pranithk1 xavih: You are saying that if a file is unlinked, closed, the locks on the file should be lost?
10:41 pranithk1 xavih: what should be done with blocked locks?
10:41 karthik_us joined #gluster-dev
10:42 xavih pranithk1: after that, when the lock is released in a future call to inodelk/entrylk, the lock will be deleted, and any blocked requests will be resumed and return ENOENT
10:42 xavih pranithk1: probably we would need to maintain that information in the inode context until it is purged to prevent future lock attempts
10:42 pranithk1 xavih: that makes sense
10:43 xavih pranithk1: maybe ESTALE instead of ENOENT would be better...
10:43 pranithk1 xavih: agree
10:43 pranithk1 apandey: ^^
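[Editor's sketch] A rough model of the locks-xlator change being agreed on here (hypothetical class and method names; the real xlator is C and STACK_WIND/cbk driven): a successful unlink/rmdir marks the inode's lock state stale, the currently held lock survives until released, and both blocked and future lock requests then fail with ESTALE.

```python
import errno

class InodeLockState:
    def __init__(self):
        self.held = False
        self.stale = False      # set once unlink/rmdir succeeds
        self.waiting = []       # callbacks of blocked lock requests

    def inodelk(self, on_done):
        if self.stale:
            on_done(-errno.ESTALE)        # inode already deleted
        elif self.held:
            self.waiting.append(on_done)  # block until unlock
        else:
            self.held = True
            on_done(0)

    def entry_deleted(self):
        # Called from the unlink/rmdir callback when the fop succeeded.
        self.stale = True

    def unlock(self):
        if self.stale:
            # Resume blocked requests with ESTALE; the lock dies here.
            for cb in self.waiting:
                cb(-errno.ESTALE)
            self.waiting.clear()
            self.held = False
        elif self.waiting:
            self.waiting.pop(0)(0)        # hand the lock to the next waiter
        else:
            self.held = False

results = []
st = InodeLockState()
st.inodelk(results.append)   # granted: 0
st.inodelk(results.append)   # blocks behind the holder
st.entry_deleted()           # rmdir succeeded while the lock was held
st.unlock()                  # the blocked waiter resumes with -ESTALE
assert results == [0, -errno.ESTALE]
```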
10:44 pranithk1 xavih: wait it still doesn't solve our problem
10:44 pranithk1 xavih: imagine this
10:44 pranithk1 xavih: we have two processes: 1) unlink ('dir', a) and 2) chown 'a'
10:45 pranithk1 xavih: on all the 6 bricks, if they are wound in parallel, 3 may give ENOENT and 3 may succeed the inodelks, i.e.
10:46 pranithk1 xavih: right?
10:47 apandey pranithk1: went for tea. reading now..
10:47 xavih pranithk1: yes, I think this could happen
10:47 pranithk1 xavih: yeah, so back to square-1 :-)
10:47 pranithk1 xavih: nice bug no?
10:48 xavih pranithk1: but this is only a problem when inodelk succeed and a later op (like the actual setattr) fails
10:48 xavih pranithk1: with the change apandey is doing, the inodelk problem disappears
10:49 xavih pranithk1: I've had an idea, though not sure if it would be the correct solution...
10:50 xavih pranithk1: what would happen if the locks xlator serializes all operations on inodes whose parent or one of whose children is also locked ?
10:51 xavih pranithk1: the alternative would be to acquire from the ec xlator 2 locks for each fop
10:51 xavih pranithk1: 4 locks for a rename
10:51 pranithk1 xavih: hmm...
10:51 pranithk1 xavih: why doesn't this happen in afr?
10:51 pranithk1 xavih: It only sets the errno properly that's it
10:52 xavih pranithk1: afr only needs to maintain one good copy
10:52 pranithk1 xavih: that is true!
10:52 xavih pranithk1: ec needs to maintain at least 2 identical copies of things
10:53 pranithk1 xavih: but for EC technically speaking for directory operations it can keep a single good copy
10:53 pranithk1 xavih: because it is replicating the hierarchy?
10:53 pranithk1 xavih: may be quorum number
10:53 pranithk1 xavih: which is 4
10:53 pranithk1 xavih: okay :-) back to square-1
10:54 xavih pranithk1: even this solution wouldn't apply to the case of unlink/chown
10:54 xavih pranithk1: the problem appears on a regular file inode, not the directory itself
10:55 pranithk1 xavih: why? it can happen on any dentry operation + inode operation right?
10:55 pranithk1 xavih: something like rmdir a/b and chown b
10:56 xavih pranithk1: yes, but if the problem appears on an inode, the fact that the directory hierarchy is replicated is not enough
10:56 xavih pranithk1: we cannot take one of the answers and propagate it
10:56 pranithk1 xavih: exactly.
10:56 xavih pranithk1: brb
10:57 pranithk1 xavih: So EC giving EIO in this case is correct
11:05 xavih pranithk1: no, no, EIO is not correct because it's caused by a race between two (theoretically) atomic operations and we are not executing them atomically on all bricks
11:05 xavih pranithk1: we need to enforce this atomicity, and one way would be to serialize some fops in the locks xlator
11:06 pranithk1 xavih: but that is not cluster-wide serialization, right?
11:06 pranithk1 xavih: like I was mentioning above. 3 bricks can order them in one way and 3 in another
11:07 xavih pranithk1: no, only serialization when parent/child is already locked
11:07 xavih pranithk1: the idea is that the serialization causes all bricks to execute the fops in the same order (only the fops that have some kind of dependency)
11:10 xavih pranithk1: oops, that wouldn't work because we cannot rely on inodelk being acquired in the same order on all bricks; some bricks could not know anything about the other lock at the time of executing the operation...
11:10 pranithk1 xavih: I am not sure that is easy with the current infra xavi
11:10 pranithk1 xavih: exactly ;-)
11:21 xavih pranithk1: the only solution I see is to acquire more inodelks from ec...
11:22 pranithk1 xavih: okay.... What all inodelks?
11:23 xavih pranithk1: no, for the inode itself and its parent
11:23 pranithk1 xavih: roughly you tell me what you had in mind for unlink/rename (where destination may be overwritten) / rmdir
11:23 pranithk1 xavih: and metadata/data transaction. Let me cross check if I see any problems
11:24 pranithk1 xavih: Clients may not have the latest parent inode in their inode-table, right..?
11:24 xavih pranithk1: that could happen I think. loc management is a nightmare... :(
11:25 pranithk1 xavih: :-)
11:26 pranithk1 xavih: Do you guys celebrate thanksgiving?
11:26 xavih pranithk1: we should enforce that each inode keeps a reference to its parent; that way many problems would disappear...
11:27 xavih pranithk1: no, we don't celebrate it here
11:27 atinm joined #gluster-dev
11:27 xavih pranithk1: sorry, each dentry, not inode
11:28 xavih pranithk1: and replace loc with a dentry, or have a dentry inside loc
11:29 Muthu joined #gluster-dev
11:29 xavih pranithk1: in fact locs are a sort of replacement for a dentry, but much more difficult to manage
11:29 pranithk1 xavih: yeah
11:31 xavih pranithk1: a side note... having directories as files would simplify the problem a lot :P
11:31 xavih pranithk1: though we would still need some additional locks...
11:32 pranithk1 xavih: :-)
11:32 pranithk1 xavih: Let me also think about this problem. It doesn't sound easy to me
11:33 xavih pranithk1: no, it's not easy at all. It'll most probably need additional locks, so it will impact performance...
11:33 xavih pranithk1: I don't see any other way to solve it. I'll also think about it...
11:35 xavih pranithk1: one possibility is to take locks on inode and parent for rmdir and unlink fops
11:35 xavih pranithk1: I think this way we could solve the problem
11:36 xavih pranithk1: we would still need to modify locks xlator to deny locks on already deleted entries
11:38 pranithk1 xavih: That makes sense
11:40 pranithk1 xavih: but the problem I think is when unlink a/b/c comes. b itself may not exist by the time it wants to do unlink...
11:42 msvbhat joined #gluster-dev
11:49 xavih pranithk1: that case shouldn't happen with the changes I proposed...
11:51 xavih pranithk1: if b doesn't exist, the inodelk of /a/b (needed to unlink /a/b/c) should fail
11:51 pranithk1 xavih: No it won't because 'a' is not locked..
11:51 pranithk1 xavih: It is back to the same problem of parallel rmdir and inodelk
11:52 pranithk1 xavih: The way for this solution to work is if the whole path is locked...?
11:52 xavih pranithk1: no, no, if we unlink /a/b/c, we need to take locks on /a/b and /a/b/c
11:52 xavih pranithk1: if we unlink /a/b, we will take locks on /a and /a/b
11:53 pranithk1 xavih: consider this case
11:53 xavih pranithk1: if additionally the locks xlator causes the failure of locks that correspond to deleted inodes, that should fix the problem
11:55 pranithk1 xavih: seems to be working
11:57 pranithk1 xavih: rmdir requires 2 locks just like unlink
11:58 xavih pranithk1: yes, rmdir and unlink should take an additional lock on the inode being deleted
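[Editor's sketch] The lock sets agreed on above can be written down as a small table (a purely illustrative path-based sketch; the real implementation locks inodes/gfids, not paths): unlink and rmdir take the parent plus the entry itself, so they serialize against plain inode operations such as setattr/chown on the same entry.

```python
def locks_for(fop, path):
    """Toy sketch of which inodes (named by path here) each fop locks
    under the proposed scheme."""
    parent = "/".join(path.split("/")[:-1]) or "/"
    if fop in ("unlink", "rmdir"):
        return {parent, path}   # parent dir + the entry being removed
    if fop in ("setattr", "chown"):
        return {path}           # plain inode operation
    raise ValueError(fop)

# unlink /a/b/c and chown /a/b/c now contend on the /a/b/c lock, so the
# unlink/chown race that produced EIO cannot happen any more.
assert locks_for("unlink", "/a/b/c") & locks_for("chown", "/a/b/c") == {"/a/b/c"}
```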
11:58 pranithk1 xavih: rename requires only 3 right? You were saying 4
11:58 pranithk1 xavih: if we do rename a/b -> c/d we need locks on a, c, d
11:58 pranithk1 xavih: why do we need on b?
11:58 xavih pranithk1: yes, we don't need to take a lock on the target directory
11:59 xavih pranithk1: we need to take on a, b and c
11:59 pranithk1 xavih: huh? really
11:59 xavih pranithk1: we need b because any operation on /a/b after having deleted the inode would have the same problem we have been discussing
11:59 pranithk1 xavih: why? d is the one that is going to die right?
12:00 xavih pranithk1: oh, how do you represent the renames ?
12:00 xavih pranithk1: the original file is a/b, right ? and it will be moved to c/d, right ?
12:00 pranithk1 xavih: yes
12:00 xavih pranithk1: so b will be removed and d created
12:00 pranithk1 xavih: if a file 'd' exists it will be over-written
12:00 pranithk1 xavih: yes
12:01 xavih pranithk1: oh, in that case we need 4 locks
12:01 pranithk1 xavih: so 'd' is the inode that needs protecting if it exists right?
12:01 pranithk1 xavih: why do we need on 'b'?
12:01 xavih pranithk1: to avoid operations on a/b. For example rename('/a/b', '/c/d') and chown('/a/b')
12:02 pranithk1 xavih: but 'b' is not being deleted...
12:02 xavih pranithk1: its the same case of unlink/chown
12:02 pranithk1 xavih: why will chown on a/b fail?
12:02 pranithk1 xavih: oh oh, chown comes as setattr on gfid-b
12:02 pranithk1 xavih: which doesn't change :-)
12:02 pranithk1 xavih: so we don't need the dentry to be protected
12:02 xavih pranithk1: ah, that can be assumed ?
12:03 xavih pranithk1: with locs we allow the representation of an inode by its path or some other combinations
12:03 pranithk1 xavih: Not assumption. Check gfs3_setattr_req
12:04 xavih pranithk1: and all other fops that take a loc ?
12:04 pranithk1 xavih: path is just an identifier; it shouldn't be taken seriously. It is used for optimization purposes sometimes, that is all
12:04 xavih pranithk1: the problem is that loc allows that. Is there any restriction (documentation, specification) that forbids future changes from using this possibility ?
12:05 xavih pranithk1: that wasn't the case when I started with ec. That's why it does so much work on locs...
12:06 xavih pranithk1: when requests come from fuse, most of the times (if not always), the loc is easy to manage. However there are requests started by intermediate xlators that do not supply so much information in all cases
12:06 pranithk1 xavih: all dentry fops and Lookup work on names.
12:06 pranithk1 xavih: but the METADATA/DATA operations come on gfids
12:06 xavih pranithk1: I've had problems with this on requests coming from dht long time ago
12:07 xavih pranithk1: and if we only have loc->gfid but not loc->inode and it's not cached ? is that a possibility ?
12:08 xavih pranithk1: it would be very interesting to have a document that enforces how locs can be used and makes sure that everyone uses them in the same way
12:08 pranithk1 xavih: We should have the fields in rpc/xdr/src/glusterfs3-xdr.h to be populated, otherwise we can't send a request
12:08 xavih pranithk1: that way it would be simpler to take decisions like this
12:08 pranithk1 xavih: It is kind of enforcing because it is a network format; we can't change it arbitrarily :-)
12:10 xavih pranithk1: ok, if we can rely on this, then 'b' doesn't need to be taken, I think
12:10 pranithk1 xavih: cool
12:10 pranithk1 apandey: ^^
12:11 apandey pranithk1: sorry, I was away...more issues on EC...
12:12 pranithk1 apandey: okay. You read this when you have time. I think this solution should work
12:12 pranithk1 apandey: may be you should post this on gluster-dev and seek inputs
12:12 pranithk1 apandey: inodelks are taken by dht as well I think
12:12 xavih pranithk1: and 'd' only needs to be taken if it already exists, though probably the easiest way to check for its existence is to try to get a lock...
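[Editor's sketch] The rename conclusion can be sketched the same way (illustrative paths standing in for inodes): lock both parent directories, plus the target entry if it already exists (since it will be overwritten); the source inode needs no lock because metadata fops address it by gfid, and rename does not change the gfid.

```python
def rename_locks(src, dst, dst_exists):
    """Toy sketch of the lock set for rename src -> dst under the
    proposed scheme."""
    src_parent = "/".join(src.split("/")[:-1]) or "/"
    dst_parent = "/".join(dst.split("/")[:-1]) or "/"
    locks = {src_parent, dst_parent}
    if dst_exists:
        locks.add(dst)   # protect the inode that will be overwritten
    return locks

# rename /a/b -> /c/d with an existing d: 3 locks (a, c, d); the source
# entry b itself is not locked.
assert rename_locks("/a/b", "/c/d", dst_exists=True) == {"/a", "/c", "/c/d"}
assert rename_locks("/a/b", "/c/d", dst_exists=False) == {"/a", "/c"}
```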
12:12 apandey pranithk1: taking lock on entry and parent both in case of rmdir/unlink?
12:12 pranithk1 xavih: yeah
12:13 pranithk1 apandey: yes
12:13 apandey pranithk1: and that should be done by ec?
12:14 pranithk1 apandey: yes
12:15 pranithk1 apandey: there is a semantic change on locks xlator too. i.e. to fail locks with ESTALE when the inode is not present any more on the brick
12:15 apandey pranithk1: hmmmm. yes,
12:16 apandey pranithk1: ok..
12:16 pranithk1 apandey: You should first send out a mail...
12:16 apandey pranithk1: ok
12:18 xavih pranithk1: btw, one comment about an issue I've seen earlier. When we take a lock in ec, if the loc->inode is NULL, we silently skip the lock acquisition. Probably this was something to avoid a problem in earlier versions, but I think it should be removed
12:18 xavih pranithk1: if we don't have loc->inode, we fail
12:19 xavih pranithk1: what do you think. Is there any need for this case in current releases ?
12:19 xavih pranithk1: it seems dangerous
12:20 pranithk1 xavih: Can you point to the code.
12:20 ashiq_ joined #gluster-dev
12:20 xavih pranithk1: in ec_lock_prepare_inode_internal(). The first check
12:21 xavih pranithk1: in ec-common.c
12:21 pranithk1 xavih: we should error out. You are correct
12:22 pranithk1 apandey: ^^
12:28 apandey pranithk1: ok  :)
12:29 apandey pranithk1: xavih: leaving for the day. I will send a mail regarding this solution...
12:35 xavih pranithk1: see you :)
12:35 pranithk1 xavih: see you too! Thanks for this bug discussion :-)
12:36 pranithk1 xavih: were you telling apandey?
12:51 atinm joined #gluster-dev
13:14 mchangir joined #gluster-dev
13:19 shaunm joined #gluster-dev
13:36 pranithk1 joined #gluster-dev
13:38 atinm joined #gluster-dev
13:53 vimal joined #gluster-dev
13:57 vimal joined #gluster-dev
14:01 rjoseph ndevos: shyam: Review request for http://review.gluster.org/#/c/15913/
14:07 Muthu joined #gluster-dev
14:13 nbalacha joined #gluster-dev
14:53 nbalacha joined #gluster-dev
15:17 post-factum joined #gluster-dev
17:03 riyas joined #gluster-dev
17:12 gem joined #gluster-dev
20:47 rastar joined #gluster-dev
21:33 dlambrig_ joined #gluster-dev
22:04 ChrisHolcombe joined #gluster-dev
