
IRC log for #gluster-dev, 2016-01-12


All times shown according to UTC.

Time Nick Message
00:10 zhangjn joined #gluster-dev
00:22 shyam joined #gluster-dev
00:41 sankarshan_ joined #gluster-dev
00:58 zhangjn joined #gluster-dev
01:05 EinstCrazy joined #gluster-dev
01:07 EinstCrazy joined #gluster-dev
01:26 hagarth joined #gluster-dev
01:45 hagarth joined #gluster-dev
01:51 EinstCrazy joined #gluster-dev
02:03 EinstCra_ joined #gluster-dev
02:08 EinstCrazy joined #gluster-dev
02:18 shyam joined #gluster-dev
02:22 kanagaraj joined #gluster-dev
02:41 shyam joined #gluster-dev
02:48 ilbot3 joined #gluster-dev
02:48 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:52 overclk joined #gluster-dev
03:00 kanagaraj joined #gluster-dev
03:05 zhangjn joined #gluster-dev
03:06 hagarth joined #gluster-dev
03:19 zhangjn joined #gluster-dev
03:33 EinstCrazy joined #gluster-dev
03:39 ppai joined #gluster-dev
03:48 kanagaraj joined #gluster-dev
03:57 jiffin joined #gluster-dev
03:59 atinm joined #gluster-dev
03:59 nishanth joined #gluster-dev
04:00 shubhendu joined #gluster-dev
04:02 pranithk joined #gluster-dev
04:28 sakshi joined #gluster-dev
04:29 ashiq_ joined #gluster-dev
04:33 Manikandan joined #gluster-dev
04:39 kotreshhr joined #gluster-dev
04:44 kanagaraj joined #gluster-dev
04:56 hagarth joined #gluster-dev
04:57 ndarshan joined #gluster-dev
05:02 kanagaraj_ joined #gluster-dev
05:08 vmallika joined #gluster-dev
05:13 mchangir joined #gluster-dev
05:16 kanagaraj joined #gluster-dev
05:16 Apeksha joined #gluster-dev
05:17 Apeksha joined #gluster-dev
05:19 kanagaraj_ joined #gluster-dev
05:19 kdhananjay joined #gluster-dev
05:20 pppp joined #gluster-dev
05:25 EinstCra_ joined #gluster-dev
05:28 Bhaskarakiran joined #gluster-dev
05:30 EinstCrazy joined #gluster-dev
05:30 ggarg joined #gluster-dev
05:32 rafi joined #gluster-dev
05:33 poornimag joined #gluster-dev
05:35 zhangjn joined #gluster-dev
05:35 atalur joined #gluster-dev
05:40 aravindavk joined #gluster-dev
05:40 nbalacha joined #gluster-dev
05:42 jiffin rastar: ping, in the netbsd machine, creation of the .tar file for logs fails while running prove (although the test completes successfully) with the following error
05:43 jiffin tar: Failed open to read/write on /autobuild/install/var/log/glusterfs/test.tar (No such file or directory)
05:43 jiffin tar: Unexpected EOF on archive file
05:46 Humble joined #gluster-dev
05:47 skoduri joined #gluster-dev
05:48 overclk joined #gluster-dev
05:50 zhangjn joined #gluster-dev
05:53 kanagaraj joined #gluster-dev
06:01 hgowtham joined #gluster-dev
06:01 vimal joined #gluster-dev
06:03 kotreshhr joined #gluster-dev
06:03 EinstCrazy joined #gluster-dev
06:03 asengupt joined #gluster-dev
06:03 Saravana_ joined #gluster-dev
06:04 kanagaraj joined #gluster-dev
06:04 hchiramm joined #gluster-dev
06:26 kanagaraj joined #gluster-dev
06:29 apandey joined #gluster-dev
06:32 kotreshhr joined #gluster-dev
06:35 zhangjn joined #gluster-dev
06:38 zhangjn_ joined #gluster-dev
06:39 EinstCrazy joined #gluster-dev
06:45 EinstCra_ joined #gluster-dev
06:57 EinstCrazy joined #gluster-dev
06:57 EinstCrazy joined #gluster-dev
07:00 EinstCra_ joined #gluster-dev
07:08 EinstCrazy joined #gluster-dev
07:14 itisravi joined #gluster-dev
07:15 josferna joined #gluster-dev
07:16 pranithk joined #gluster-dev
07:23 pranithk joined #gluster-dev
07:30 zhangjn joined #gluster-dev
07:39 hgowtham Manikandan++
07:39 glusterbot hgowtham: Manikandan's karma is now 46
07:41 EinstCra_ joined #gluster-dev
07:51 ndevos obnox: I don't think we encourage anyone to give +2, but we also don't have objections; maintainers of the components should review+merge, a +2 mostly speeds things up
07:52 ndevos obnox: it is possible in Gerrit to give the +2 privilege to certain people only, but we do not limit that - is that an encouragement?
08:07 EinstCrazy joined #gluster-dev
08:12 EinstCrazy joined #gluster-dev
08:16 zhangjn joined #gluster-dev
08:20 ndevos Humble: can you merge http://review.gluster.org/13187 and maybe give some of us the permissions to do so too?
08:27 ggarg joined #gluster-dev
08:39 Saravanakmr joined #gluster-dev
08:47 obnox ndevos: encouragement, probably not, but just using the interface presents the user with some amount of WTF
08:47 jiffin ndevos: ping, in netbsd machines I got the following error [2016-01-12 04:31:39.239973] E [netgroups.c:152:ng_file_deinit] (-->0xb9bda921 <mnt3_auth_set_netgroups_auth+0x1a7> at /autobuild/install/lib/glusterfs/3.8dev/xlator/nfs/server.so -->0xb9bd5e0b <ng_file_deinit+0x82> at /autobuild/install/lib/glusterfs/3.8dev/xlator/nfs/server.so ) 0-nfs-netgroup: invalid argument: ngfile [Invalid argument]
08:47 glusterbot jiffin: ('s karma is now -10
08:48 ndevos obnox: I see +1 or +2 a bit like a confidence level, how much does the reviewer understand the change and its potential implications
08:48 jiffin ndevos: while running mount-nfs-auth.t , any idea about this?
08:49 obnox ndevos: ok, so I take it that if e.g. I would put +2 on a patch that I am absolutely positive about, this would not be frowned upon
08:49 jiffin ndevos: it is not  seen in linux vms
08:49 obnox (or some other non-maintainer)
08:49 ndevos jiffin: maybe part of http://review.gluster.org/#/c/12541/3/xlators/nfs/server/src/netgroups.c ? needs some more work though
08:50 jiffin ndevos: and it is not related to spurious failure which I am looking
08:50 ndevos obnox: yes, +2 if you really understand the change and are confident it doesn't break anything else
08:51 jiffin ndevos: K. I will apply the change and rerun it again
08:51 obnox ndevos: ok, thx. makes sense
08:52 ndevos obnox: hmm, maybe we should publish some review guidelines in our docs?
08:52 ndevos that might encourage others to +2 patches ;-)
08:56 ppai_ joined #gluster-dev
08:56 pranithk1 joined #gluster-dev
08:58 zhangjn joined #gluster-dev
08:58 atalur_ joined #gluster-dev
09:02 kotreshhr joined #gluster-dev
09:06 aravindavk joined #gluster-dev
09:06 rjoseph :q
09:07 * rjoseph typed on the wrong window
09:11 nbalacha joined #gluster-dev
09:11 mchangir joined #gluster-dev
09:14 obnox ndevos: that would make sense indeed
09:14 Ryllise joined #gluster-dev
09:17 Saravanakmr joined #gluster-dev
09:24 ndevos rastar: hmm, did something change on build.gluster.org? at least two slaves start to fail with weird errors - https://build.gluster.org/job/rackspace-regression-2GB-triggered/17436/console
09:24 ndevos slave28 gave the same problem...
09:26 ndevos and slave32 too!?
09:26 * csim look
09:27 ndevos oh, and slave27 :-/
09:28 ndevos csim: I only retriggered https://build.gluster.org/job/rackspace-regression-2GB-triggered/17438/console and the next runs fail the same way on other slaves
09:28 csim ndevos: yeah, the error message is puzzling
09:29 ndevos quite
09:31 sankarshan_ joined #gluster-dev
09:32 csim ndevos: does it run fine outside of the BS?
09:33 ndevos csim: it does not even start to run, the git checkout seems to fail on the slave
09:36 csim ndevos: disk full ?
09:37 csim nope
09:39 csim it runs fine
09:40 csim I wonder if that's not a bug in jenkins
09:41 sakshi joined #gluster-dev
09:41 ndevos csim: I think rastar and kshlm were making some changes to jenkins, not sure if they also updated it or something
09:42 csim it would have been sent on gluster-infra, so everybody knows, no?
09:43 * csim need to take a train
09:43 ndevos well, it seems that also the maintainers list was left out for changes that the maintainers really should know about...
09:46 badone joined #gluster-dev
09:53 nbalacha joined #gluster-dev
09:54 asengupt joined #gluster-dev
09:54 Saravanakmr joined #gluster-dev
09:54 apandey joined #gluster-dev
09:55 vimal joined #gluster-dev
09:56 mchangir joined #gluster-dev
10:01 Bhaskarakiran joined #gluster-dev
10:01 atinm joined #gluster-dev
10:03 ndevos rafi: how are those additional patches related to the nfs topic of bug 1297311?
10:03 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1297311 unspecified, unspecified, ---, rkavunga, POST , Attach tier + nfs : Creates fail with invalid argument errors
10:04 rastar ndevos: misc: we had disabled triggers on friday
10:04 rastar ndevos csim there was some other problem too
10:04 rastar even after enabling the triggers, nothing was getting triggered
10:04 ndevos rastar: I was wondering if there were other changes done too, maybe updates or changes in the git checkouts from gerrit?
10:05 rastar manual triggering wasn't changed at all
10:05 rastar ndevos: no
10:05 csim it runs fine in bash
10:05 rastar so kshlm restarted the jenkins on my request
10:05 rastar now triggers are working fine
10:06 rastar Here is a sample console for patch set trigger https://build.gluster.org/job/rackspace-regression-2GB-triggered/17439/console
10:07 rastar ndevos: so slave27 which failed for your manual trigger is working for patch set trigger now...
10:07 rastar ndevos: did anyone change any other settings? I am not aware of any
10:08 ndevos rastar: how was that triggered? I have added CR +2 on some patches, and V +1 on others, but they did not seem to get started...
10:08 rastar ndevos: these triggers started working after kshlm did a jenkins safe restart and I re-added the trigger settings
10:09 ndevos rastar: ah, do you think you can re-trigger the tests through https://build.gluster.org/gerrit_manual_trigger/ ?
10:09 * rastar trying
10:10 rastar ndevos: I got added but failed with the same errors that you saw before
10:11 rastar ndevos: there is an AccessDeniedException: for path /d
10:11 rastar ndevos: have never seen that before
10:11 ndevos rastar: where do you see that?
10:11 rastar ndevos: https://build.gluster.org/job/rackspace-regression-2GB-triggered/17442/console
10:11 rastar ndevos: line 7
10:12 ndevos rastar: well, that definitely is a different error than the one I got
10:13 rastar ndevos: ah yes..let me try retriggering refs/changes/71/11871/1
10:13 ndevos rastar: but that is on a manual trigger, they are always a little weird anyway, I don't think we should allow them either
10:14 rafi ndevos: the same change for all interface
10:14 rastar ndevos: do you really want to retrigger 11871?
10:14 rastar ndevos: it has all the +1s
10:14 rastar and it is merged
10:14 rastar :(
10:14 ndevos rafi: maybe you could update the subject of the BZ then?
10:15 rafi ndevos: i will update the BZ , thanks for reminding
10:16 ndevos rastar: uh, where did that change-id come from?
10:16 ndevos rastar: maybe retrigger change 13200 through https://build.gluster.org/gerrit_manual_trigger/ ?
10:26 rastar ndevos: I got refs/changes/71/11871/1 from your link above https://build.gluster.org/job/rackspace-regression-2GB-triggered/17436/console
10:27 ndevos rastar: hmm, I tried to retrigger the regression run for 13208, and I only clicked 'retrigger' on the single result that jenkins returned
10:29 ira joined #gluster-dev
10:29 kotreshhr joined #gluster-dev
10:38 Bhaskarakiran joined #gluster-dev
10:45 vmallika joined #gluster-dev
10:46 kkeithley1 joined #gluster-dev
10:47 Saravanakmr joined #gluster-dev
10:47 kdhananjay joined #gluster-dev
10:48 Bhaskarakiran joined #gluster-dev
10:48 Manikandan rastar++, thanks :-)
10:48 glusterbot Manikandan: rastar's karma is now 21
10:49 pranithk joined #gluster-dev
10:51 ndevos rjoseph, rafi: does one of you have something to reply on http://article.gmane.org/gmane.comp.file-systems.gluster.devel/13533 ?
10:59 ndevos rastar: I think slave32 has an issue with that weird /d exception
11:00 zhangjn joined #gluster-dev
11:01 pranithk left #gluster-dev
11:01 karthik_u joined #gluster-dev
11:06 rastar ndevos: there are other slaves too
11:06 ndevos rastar: ah, looked at it and sent an email to the infra list about it
11:07 ndevos rastar: hmm, which ones?
11:07 rastar ndevos: rafi says our scripts in jenkins don't run on abort of runs and that causes weird problems
11:08 ndevos rastar: oh, that could be, the history of slave32 shows an aborted job too: https://build.gluster.org/computer/slave32.cloud.gluster.org/builds
11:08 rastar ndevos: slave34 has the same problem
11:09 ndevos rastar: hmm, but there builds succeeded after a job got aborted...
11:10 kkeithley_ is there going to be a bug triage meeting today?
11:11 ndevos Manikandan, hgowtham: ^
11:11 jiffin kkeithley_: as per last bug triage meeting , hari will host today's meeting
11:11 rastar ndevos: You are right, :( hope this is not another instance of previous runs corrupting the machine.
11:11 hgowtham kkeithley, yes
11:12 hgowtham we do have it
11:12 Manikandan ndevos, yes hgowtham will host today's bug triage
11:12 kkeithley_ okay, thanks
11:12 Manikandan kkeithley, np
11:12 hgowtham ndevos, thanks, didn't check irc
11:12 Manikandan jiffin, thanks :)
11:12 ndevos rastar: I hope we get to the point where we have a new VM for each regression test
11:12 kkeithley_ maybe someone should send a reminder to #gluster and the mailing lists. And here too
11:13 ndevos someone = hgowtham :)
11:13 kkeithley_ that would be the logical choice
11:13 Manikandan ndevos, ;P
11:13 kkeithley_ But I'm not his boss, so. ;-)
11:14 hgowtham kkeithley, sending a reminder
11:14 rastar ndevos: yes, a single lxc or docker container in a slave VM can remove the requirement of workspace or clean VM. But that is a talk for some other day.
11:15 rastar ndevos: I am happy that we have now got back to the point of triggering regression runs.
11:15 rastar ndevos: Now it is set to a +1 verified or +2 code-review
11:16 rastar ndevos: one last test, I need your help. I triggered runs on http://review.gluster.org/#/c/13173/2 by providing a +1 verified. Lets see what happens if you +1 verify it too.
11:17 atinm rastar, what happens to the patches which already have +1 as verified but regressions were not run?
11:17 hgowtham REMINDER: Gluster Community Bug Triage meeting in about 45 minutes at #gluster-meeting on freenode
11:17 atinm rastar, are you going to trigger all of them manually?
11:18 rastar atinm: checking now, you can help me by giving a +1 verified on http://review.gluster.org/#/c/13173/2
11:19 atinm rastar, how can I provide a +1 verified for the patch which I don't own :)
11:22 shyam1 joined #gluster-dev
11:24 rastar ndevos: atinm Thanks, so multiple +1 verified won't retrigger a test run
11:30 ndevos rastar: they are VMs, it should be fast to restore a snapshot after a test was run :)
11:31 ndevos rastar: lxc or docker do not address the problem on NetBSD/FreeBSD, so we would still need to use something else there
11:34 kanagaraj_ joined #gluster-dev
11:39 badone joined #gluster-dev
11:50 kanagaraj joined #gluster-dev
11:53 obnox does netbsd know jails (like freebsd) ?
11:54 ndevos no idea...
11:54 csim afaik, no
11:54 ndevos netbsd is not that much into security bits, they don't have posix acls either
11:55 csim but they have the securelevel :)
11:55 kotreshhr joined #gluster-dev
12:01 hgowtham joined #gluster-dev
12:01 pranithk joined #gluster-dev
12:01 pranithk joined #gluster-dev
12:02 kdhananjay xavih: there?
12:02 xavih kdhananjay: hi
12:03 kdhananjay xavih: Hi! pranithk is also free, so can we have the discussion now?
12:03 xavih kdhananjay: sure :)
12:03 kdhananjay pranithk: Ready?
12:03 pranithk kdhananjay: yes :-)
12:04 xavih pranithk, kdhananjay: so what's the problem exactly ?
12:04 kdhananjay pranithk: xavih: OK great. So from the discussion pranithk and I had last week, it felt like each of us has our own interpretation of your design. And we both could be wrong!
12:05 kdhananjay xavih: So here's what we'd like to know first:
12:06 kdhananjay Like you nicely put it, at the end of the day, afr, dht etc need to use transactions to exclusively operate on a fop, atomically.
12:06 kdhananjay it could be through taking locks or through any other means.
12:07 xavih kdhananjay: yes
12:07 kdhananjay xavih: assuming we are going to go with locks, and i think we all know already that afr and dht acquire locks on different domains because they don't need to operate with mutual exclusivity (for example rebalance and self-heal can run in parallel):
12:09 kdhananjay in the new scheme of things, would a modification fop initiated by the client (in other words a txn) need to take a lock in a new domain (call it transaction domain)?
12:09 kdhananjay which is different from the traditional afr and dht domains?
12:10 kdhananjay would a transaction that needs atomicity for both afr and dht now need to just use one lock in a new domain, replacing the existing nested locks
12:10 kdhananjay ?
12:10 xavih kdhananjay: yes, if the implementation uses locks, it must use a new domain, different to any other currently used
12:10 kdhananjay hmmm then here is my question:
12:11 xavih kdhananjay: yes, they will share the same lock
12:11 xavih kdhananjay: though they don't know they are actually sharing a lock. They only create a transaction to do its job
12:12 kdhananjay this whole locking in the transaction is _not only_ to protect the inode from modification from other clients, but the inode also needs to be guarded from being modified by rebalance and self-heal, correct?
12:12 kdhananjay in which case shd and rebalance will also need to start using locks in this domain?
12:13 xavih kdhananjay: well, rebalance and self-heal should also use transactions
12:13 kdhananjay if that is so, when all clients are idle, rebalance and self-heal can never run in parallel, because they need to acquire locks in the same domain now?
12:13 xavih kdhananjay: why rebalance and self-heal can work in parallel ?
12:14 xavih kdhananjay: (sorry I don't know much details about this...)
12:14 kdhananjay xavih: did you mean 'why rebalance and sh cannot work in parallel'?
12:14 zhangjn joined #gluster-dev
12:15 xavih kdhananjay: you said they can run in parallel, right ? why ? if they touch the same inode, they should synchronize
12:16 xavih kdhananjay: unless they touch different things. In that case you could have two transactions that do not intersect so they can be executed in parallel
12:16 kdhananjay xavih: but as of today, it is possible for both rebalance and sh to run in parallel, without any consequences to the consistency of the inode
12:16 Saravanakmr joined #gluster-dev
12:16 kdhananjay in parallel && on the same inode
12:17 xavih kdhananjay: let me understand this...
12:17 kdhananjay self-heal operates only on the different replica sets.
12:17 xavih kdhananjay: suppose rebalance is working and is moving a file from one replica set to another
12:17 kdhananjay ...
12:17 xavih kdhananjay: and now assume that the destination replica set is also doing a self-heal on the same inode
12:17 xavih kdhananjay: right ?
12:18 kdhananjay sure. go on.
12:19 xavih kdhananjay: now, when dht issues a write to the replica set that is being healed, this write will be synchronized with the self-heal process that is also writing to the file, right ?
12:20 xavih kdhananjay: afr's writes coming from dht won't be executed in parallel with writes generated by a self-heal, right ?
12:21 xavih kdhananjay: so, even if rebalance and self-heal can run in parallel, when they touch the same inode, they also need to be serialized
12:21 kdhananjay yeah, we are certain that there will be no parallel writers to the same file, because the rebalancer would also have afr on its stack, so wherever locking is required, afr would acquire it, no matter the nature of the process it runs in.
12:21 kdhananjay yeah, to that extent, they will need to be serialised.
12:22 xavih kdhananjay: this is the same with transactions
12:22 kdhananjay but there could be other parallel operations on the same inode which can be executed in parallel without requiring serialisation
12:22 pranithk xavih: wait
12:22 kdhananjay let me think of a scenario..
12:22 xavih kdhananjay: that is another thing
12:22 pranithk xavih: rebalance takes locks in a different domain to prevent dht from renaming the file in the I/O path. It has no connection to self-heal at the moment.
12:23 pranithk xavih: question is, will we have connection if we move to transaction model.
12:23 kdhananjay right, we had more fine grained locks before. will we continue to have that with this framework?
12:24 pranithk xavih: Rename and self-heal of the file are orthogonal at the moment. Will they still be orthogonal if we move to transactional model? What kind of changes do we have to do to retain this orthogonal nature
12:24 xavih pranithk: so, dht uses locks only to prevent renaming, not to do atomic writes. That's what you are saying?
12:25 xavih pranithk: if dht doesn't need an atomic write, then it won't need to create a transaction
12:25 pranithk xavih: at the moment, yes. There is a read-modify-write race in dht at the moment though
12:26 pranithk xavih: for which it may have to take lock, read, write, then unlock.
12:26 xavih pranithk: if it needs to do that, then transactions will work correctly
12:27 pranithk xavih: okay. But kdhananjay's question still remains. How do we retain fine grained locking kind of behavior
12:27 xavih pranithk: what do you mean by "fine grained locking" ? having multiple domains ?
12:28 pranithk xavih: range locks, instead of taking full file lock.
12:28 kdhananjay pranithk: not sure i understood 'range locks'
12:28 xavih pranithk: well, that's an implementation decision of the transaction library
12:28 pranithk xavih: If the application is modifying 0-128KB, application modifying file at 256KB doesn't care
12:29 kdhananjay i thought it was about retaining multiple domains in the new framework too?
12:29 xavih pranithk: when afr creates a transaction for a write, the transaction library will know what range you want to modify
12:29 pranithk kdhananjay: Oops. okay. I understood wrong.
12:29 pranithk xavih: got it
12:30 pranithk xavih: kdhananjay's question is as follows
12:30 xavih pranithk: then it can decide if it makes sense to take a partial lock or a full lock
12:30 xavih pranith, kdhananjay: why do you need multiple domains for transactions ?
12:30 pranithk xavih: To prevent two self-heals, afr takes locks in 'self-heal' domain where as modifying file takes locks in a 'data' domain
12:31 pranithk kdhananjay|brb: Is that what you were trying to say?
12:31 xavih pranithk: the transaction framework doesn't replace the locks feature, but it uses the locks
12:32 xavih pranithk: if you need to take a lock for something that is not a transaction, you can still do it
12:32 xavih pranithk: in this way, since the transaction framework will use a new domain, it will work nicely
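
(To make the two points above concrete: in GlusterFS an inodelk carries a domain string -- the "volume" argument of the fop -- and the byte range comes from the gf_flock it is given, where l_start/l_len follow the POSIX convention that l_len == 0 means "to end of file". A minimal sketch, using the real inodelk fop signature but a hypothetical callback and a made-up "gluster/txn" domain name, distinct from afr's data/metadata/self-heal domains and dht's rebalance domain:)

    /* Sketch only; assumes the usual xlator build environment. */
    int32_t txn_inodelk_cbk(call_frame_t *frame, void *cookie,
                            xlator_t *this, int32_t op_ret,
                            int32_t op_errno, dict_t *xdata);

    static void
    txn_lock_range(call_frame_t *frame, xlator_t *subvol, loc_t *loc,
                   off_t start, off_t len)
    {
        struct gf_flock flock = {
            .l_type   = F_WRLCK,
            .l_whence = SEEK_SET,
            .l_start  = start,
            .l_len    = len,        /* 0 == full-file lock */
        };

        /* the domain string keeps these locks from colliding with
         * locks taken in any existing afr/dht domain               */
        STACK_WIND(frame, txn_inodelk_cbk, subvol, subvol->fops->inodelk,
                   "gluster/txn", loc, F_SETLKW, &flock, NULL);
    }
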
12:32 pranithk xavih: that is correct. But the question is, do we have to use transaction only for i/o path?
12:32 ndevos kkeithley_: maybe you could put the 3.6.8 packages on download.g.o? from http://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.6/ (and the /6/ directory)
12:33 pranithk kdhananjay: please confirm if I got your question right.
12:33 kdhananjay pranithk: well that is also one example.
12:34 xavih pranithk: the transaction framework could be used for anything that conceptually is a transaction (i.e. an atomic operation), be it in the I/O path or not
12:34 kdhananjay pranithk: maybe i got it wrong. catch me if i have misunderstood something.
12:34 pranithk xavih: I will ask my questions after we clarify kdhananjay's doubt...
12:34 pranithk kdhananjay: go ahead
12:36 kdhananjay pranithk: i was under the impression that rebalance (in rename) is currently taking locks in a certain domain. but if the same inode needs, i don't know, a metadata heal, we don't want them to lock each other out.
12:36 kdhananjay they can both happen in parallel.
12:37 kdhananjay and we don't want to restrict the two from running in parallel.
12:38 pranithk kdhananjay: I agree!
12:38 pranithk xavih: ^^ that is correct
12:38 xavih kdhananjay: yes, but the question is that the lock taken for rebalance is not really a transaction. It's a lock to avoid other renames to happen. It's like a flag, not an atomic operation
12:38 ppai_ joined #gluster-dev
12:38 xavih kdhananjay: they are two different things conceptually
12:39 pranithk xavih: well, it is an atomic rebalance...
12:39 kdhananjay it is also a lock to guard the file from being migrated by another rebalancer? or are we guaranteed that only one rebalance daemon would migrate it
12:39 kdhananjay and no other rebalancer would even care to touch that file..
12:39 pranithk kdhananjay: true.
12:40 xavih pranithk: the rebalance is not atomic since we can write to the file while it's being rebalanced
12:40 kkeithley_ ndevos: why exactly? The README tells people to get the packages from the Storage SIG repo.  I thought the general idea was to switch to the Storage SIG repo.
12:41 rastar ndevos: should we really wait for regression run for your patch http://review.gluster.org/#/c/13208/1
12:41 kkeithley_ s/general idea/plan
12:41 kkeithley_ s/general idea/plan/
12:41 pranithk xavih: In I/O's context. But not in rebalance context. that is why it is a separate domain. At least that is how I see it.
12:41 xavih kdhananjay: this seems a different thing. Here we are trying to avoid executing a high level operation twice on the same inode...
12:41 * kkeithley_ wonders where glusterbot is
12:42 pranithk xavih: from the way I am understanding it so far. Transaction is a framework to do I/O operations atomically...
12:42 pranithk xavih: For the rest of the things we will still use locks
12:43 xavih pranithk: yes, I think you are right
12:43 ndevos kkeithley_: because at least on centos-6 the centos-release-gluster36 is not available :-/
12:43 ndevos kkeithley_: I think for CentOS 7 it works though
12:43 kkeithley_ oh
12:43 pranithk kdhananjay: Was I clear? Do you have any doubts?
12:43 kkeithley_ is it going to be available eventually?
12:44 xavih pranithk: we could add a domain to the transactions, but I'm not sure if it's useful or only a wrapper over lock domains... I would need to think about it
12:44 kdhananjay pranithk: just one question - in light of your statement "Transaction is a framework to do I/O operations atomically", what would it mean for internal clients (shd/rebalance) to run in parallel with the normal IO?
12:45 EinstCrazy joined #gluster-dev
12:46 xavih kdhananjay: it's the same. When an I/O operation is issued as part of a rebalance/self-heal/anything, it will need to synchronize with other I/O operations, so transactions will work correctly
12:47 xavih kdhananjay: the only important thing is that the transaction will start at a lower level (like afr) because dht doesn't need to create a transaction for writes, for example
12:47 ndevos kkeithley_: yes, I need to check what is missing and fix that
12:48 xavih pranithk, kdhananjay: the idea for transactions was basically to add support for operations on one xlator that involve multiple subvolumes. In this case, dht writes only happen on one of the subvolumes, so they do not need a transaction
12:49 pranithk xavih: when rebalance is in progress, technically they do
12:49 xavih pranithk: only one subvolume is written
12:49 pranithk xavih: but it still needs to be atomic to prevent read-modify-write
12:50 pranithk xavih: while rebalance is in progress both source and destination are written...
12:50 pranithk xavih: if an application does write
12:50 xavih pranithk: that's the race you commented before, right ?
12:50 Manikandan ndevos++
12:50 glusterbot Manikandan: ndevos's karma is now 231
12:51 pranithk xavih: yes
12:51 xavih pranithk: if so, dht would need to use transactions for this particular write (not the full rebalance), and in this case, transactions are perfectly valid
12:51 pranithk xavih: wait. Seems like kdhananjay had network problem
12:51 xavih pranithk: and in this case, transactions will avoid one lock
12:51 pranithk xavih: agree
12:52 pranithk xavih: true
12:52 pranithk xavih: kdhananjay was disconnected
12:52 kdhananjay joined #gluster-dev
12:52 xavih pranithk: oops
12:53 pranithk kdhananjay: what was the last message for you?
12:53 kdhananjay (06:16:27  IST) xavih: kdhananjay: it's the same. When an I/O operation is issued as part of a rebalance/self-heal/anything, it will need to synchronize with other I/O operations, so transactions will work correctly
12:53 kdhananjay (06:17:24  IST) xavih: kdhananjay: the only important thing is that the transaction will start at a lower level (like afr) because dht doesn't need to create a transaction for writes, for example
12:53 kdhananjay (06:20:54  IST) ***kdhananjay is thinking ...
12:54 ndevos rastar: if you file a bug for the broken regression test, I'll merge it?
12:58 hgowtham joined #gluster-dev
12:59 xavih pranithk, kdhananjay: are you there
12:59 xavih pranithk, kdhananjay: ?
12:59 kdhananjay xavih: yes
12:59 pranithk xavih: yes yes
13:00 kdhananjay xavih: pranithk is explaining over phone what was discussed when i was disconnected
13:00 kdhananjay xavih: give us 2 min
13:00 xavih kdhananjay: ah, ok :)
13:00 kdhananjay we'll resume the discussion :)
13:00 kdhananjay in 2 min
13:00 xavih kdhananjay: no problem :)
13:07 pranithk xavih: While I was explaining it to kdhananjay, she brought up a nice question. At the moment, in afr, for data, metadata, entry transactions it takes locks in different domains
13:08 pranithk xavih: Now with transaction model, should they collide?
13:08 pranithk kdhananjay: did I get it right? ^^
13:09 xavih pranithk: it depends again on how we implement the transaction library. It can have intelligence to decide if two operations working on the same inode do really collide or not
13:09 xavih pranithk: anyway, this is dangerous. There are few things that can be done in parallel on an inode. For example, a write also changes some metadata (modification time, size, ...)
13:10 xavih pranithk: allowing a setattr and a write to be executed in parallel can result in inconsistent data between bricks
13:10 kdhananjay joined #gluster-dev
13:10 atalur_ joined #gluster-dev
13:10 kdhananjay am i connected?
13:10 xavih kdhananjay: yes
13:11 kdhananjay pranithk: yeah, that was what i wanted to ask.
13:11 pranithk kdhananjay: (06:39:02 PM) xavih: pranithk: it depends again on how we implement the transaction library. It can have intelligence to decide if two operations working on the same inode do really collide or not
13:11 pranithk kdhananjay:  xavih: pranithk: anyway, this is dangerous. There are few things that can be done in parallel on an inode. For example, a write also changes some metadata (modification time, size, ...)
13:12 pranithk kdhananjay:  xavih: pranithk: allowing a setattr and a write to be executed in parallel can result in inconsistent data between bricks
13:12 * ndevos points kdhananjay to https://botbot.me/freenode/gluster-dev/ from the /topic
13:12 pranithk kdhananjay: those were the messages before you were disconnected.
13:12 kdhananjay ndevos++ Thanks! :)
13:13 ndevos :)
13:13 kdhananjay pranithk: hmmm ok.
13:14 xavih kdhananjay: it's easier to concentrate all logic for deciding which operations interfere with which ones in one place instead of allowing xlators to do locks in multiple domains assuming that the operations do not overlap
13:15 xavih kdhananjay: as I said, data operations are rarely unrelated to metadata ones
13:15 lpabon_ joined #gluster-dev
13:15 ppai_ joined #gluster-dev
13:15 kdhananjay hmmm but the downside is performance issues?
13:15 glusterbot kdhananjay: ndevos's karma is now 232
13:15 Bhaskarakiran joined #gluster-dev
13:16 kdhananjay wouldnt it be too restrictive to block metadata operations when data operations are in progress?
13:16 kdhananjay xavih:
13:16 xavih kdhananjay: if we allow a setattr and a write to run in parallel, they might work faster, but the result might be incorrect for xlators that have multiple subvolumes
13:16 xavih kdhananjay: what's more important, performance or correctness ?
13:17 kdhananjay both :)
13:17 xavih kdhananjay: of course :P
13:17 pranithk xavih: I think she means setattr with uid/gid change etc with write :-)
13:17 pranithk xavih: Not the time one...
13:17 pranithk xavih: or may be setxattr/removexattr etc with write
13:17 xavih pranithk: that logic can be implemented in the transaction library to allow this kind of operations in parallel
13:17 ashiq_ joined #gluster-dev
13:17 pranithk xavih: makes sense :-)
13:17 pranithk kdhananjay: ^^?
13:17 Bhaskarakiran joined #gluster-dev
13:18 kdhananjay yeah, i am guessing these things will become clearer as we delve deeper into the design and implementation details in the coming days
13:18 pranithk kdhananjay: agree!
13:18 xavih pranithk: note that the transaction library can allow fine tuning of these things. Using a separate domain for all metadata transactions doesn't allow serializing conflicting data/metadata operations
13:19 xavih kdhananjay: ^^
13:20 pranithk xavih: You are right. Also it helps in performance sometimes. For example, if we have to untar. We can do data/metadata operations under same lock by implementing eager-lock kind of semantics in transaction.
13:20 kdhananjay pranithk: that is an awesome thought!
13:20 xavih pranithk: yes, that was the idea
13:21 kdhananjay and xavih too!
13:21 pranithk xavih: I have major questions in implementation of transaction. kdhananjay, let me know if you have any other questions before we can move to that topic
13:21 kdhananjay next topic please. :)
13:21 xavih pranithk: the transaction library should encapsulate all locking optimizations used in many xlators
13:21 pranithk xavih: yep!
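
(A toy sketch of the centralized conflict logic being described, with every name hypothetical: the point is that the library, not the individual xlators, decides whether two in-flight operations on the same inode may run concurrently -- writes conflict with time-updating setattrs but not with uid/gid or xattr changes, per the examples above:)

    #include <sys/types.h>   /* off_t */

    /* Hypothetical classification of fops for a transaction library. */
    typedef enum {
        TXN_DATA,        /* write/truncate: touches data + size/mtime */
        TXN_META_TIME,   /* setattr on times                          */
        TXN_META_OWNER,  /* chown/chmod                               */
        TXN_XATTR,       /* setxattr/removexattr                      */
    } txn_class_t;

    typedef struct {
        txn_class_t class;
        off_t       start;   /* byte range; data ops only */
        off_t       len;     /* 0 == whole file           */
    } txn_op_t;

    static int
    ranges_overlap(const txn_op_t *a, const txn_op_t *b)
    {
        if (a->len == 0 || b->len == 0)
            return 1;        /* a full-file op overlaps everything */
        return a->start < b->start + b->len &&
               b->start < a->start + a->len;
    }

    /* Central policy: serialize only what could leave bricks
     * inconsistent if reordered between subvolumes.           */
    static int
    txn_ops_conflict(const txn_op_t *a, const txn_op_t *b)
    {
        if (a->class == TXN_DATA && b->class == TXN_DATA)
            return ranges_overlap(a, b);
        if ((a->class == TXN_DATA && b->class == TXN_META_TIME) ||
            (b->class == TXN_DATA && a->class == TXN_META_TIME))
            return 1;    /* a write also updates mtime/size */
        return a->class == b->class;   /* same metadata: serialize */
    }
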
13:22 xavih pranithk: one thing before continuing...
13:22 pranithk xavih: go ahead
13:23 xavih pranithk, kdhananjay: note that we could use the transaction library even to get normal locks (inodelk, entrylk). It would be a special case of transaction, but since we need to use exactly the same logic (try a non-blocking lock, and then a sequential lock if the first one fails)
13:24 xavih pranithk, kdhananjay: it would allow xlators using traditional locks to not have to implement the retry code
13:24 pranithk xavih: I agree
13:24 kdhananjay +1
13:24 xavih pranithk, kdhananjay: a simple inodelk inside a transaction would be sufficient
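
(The retry code in question is the pattern afr already hand-rolls today: try non-blocking locks on all bricks at once, and if any brick refuses, release everything taken so far and fall back to blocking locks acquired one by one in a fixed order, so two clients cannot deadlock each other. A sketch with hypothetical helpers:)

    #define MAX_BRICKS 64            /* arbitrary, for the sketch */

    typedef struct {
        int brick_count;
        int held[MAX_BRICKS];        /* locks currently held   */
        int failures;                /* counted against quorum */
    } txn_t;

    /* Hypothetical wrappers around inodelk with F_SETLK/F_SETLKW. */
    int  lock_brick(txn_t *txn, int brick, int blocking);
    void unlock_brick(txn_t *txn, int brick);

    static void
    txn_lock_all(txn_t *txn)
    {
        int i, failed = 0;

        /* Fast path: non-blocking attempts, effectively in parallel. */
        for (i = 0; i < txn->brick_count; i++) {
            if (lock_brick(txn, i, 0) == 0)
                txn->held[i] = 1;
            else
                failed = 1;
        }
        if (!failed)
            return;

        /* Contention: back off completely, then take blocking locks
         * sequentially; the fixed order makes deadlock impossible.   */
        for (i = 0; i < txn->brick_count; i++) {
            if (txn->held[i]) {
                unlock_brick(txn, i);
                txn->held[i] = 0;
            }
        }
        for (i = 0; i < txn->brick_count; i++) {
            if (lock_brick(txn, i, 1) == 0)
                txn->held[i] = 1;
            else
                txn->failures++;     /* judged against quorum later */
        }
    }
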
13:24 pranithk xavih: that is where I was going to with my next question
13:24 pranithk xavih: 1st question. Do we really need the new xlators? Why not, at the time of first transaction creation, we acquire locks and from then on, lower xlators in the graphs can be given lock without sending anything on the network
13:24 xavih pranithk: oh :)
13:25 xavih pranithk: it won't work reliably...
13:25 pranithk xavih: why not?
13:25 xavih pranithk: for example, when a transaction is created we don't have enough information about which bricks will be involved in the same
13:26 pranithk xavih: what if we take that as argument?
13:26 pranithk xavih: at each layer
13:26 xavih pranithk: we know the direct childs of the xlator, but not the protocol/client xlators to which the request will be finally sent
13:26 xavih pranithk: how ?
13:26 pranithk xavih: It will flow exactly like how inodelk/entrylk would flow
13:26 pranithk xavih: right?
13:26 josferna joined #gluster-dev
13:26 xavih pranithk: a lower translator can decide that a request doesn't need to be processed by some subvolume
13:27 xavih pranithk: I know this is not a valid scenario right now, but suppose we have a dht below afr or ec
13:27 xavih pranithk: in this case a request sent by afr/ec won't be processed by all bricks that are accessible from them
13:28 pranithk xavih: how will inodelk be processed in this case?
13:28 pranithk xavih: even inodelk won't be going to all the bricks...
13:28 xavih pranithk: the original idea was to have an xlator just above protocol/client
13:29 xavih pranithk: when a request reaches it, it's sure that the related brick will be involved in the operation
13:29 xavih pranithk: what is your idea exactly ?
13:29 shyam1 xavih: pranithk: kdhananjay: Any mail/doc/page that discusses the transaction support that is being debated here? Curious...
13:30 xavih pranithk: do you want to explore all the childs of the xlator that initiates the txn and start issuing inodelk on all bricks ?
13:30 pranithk shyam: we will be sending the whole thing once we have clarity about what we want to implement
13:30 pranithk xavih: kind of yes
13:31 pranithk xavih: my idea is that inodelk flows exactly like fop with transaction flows
13:31 xavih pranithk: it would work with current architecture, but I'm not so sure if this is ok when multiple dht layers will be implemented or even with dht/tiering...
13:31 pranithk xavih: its the same thing xavi. inodelk won't be sent anywhere where the inode doesn't exist...
13:31 xavih pranithk: this way you will be sending unnecessary inodelk to bricks that do not care
13:31 Guest29589 pranithk: Sure, but there is some reference to ideas here, so is there anything at all that we can use to follow this discussion on IRC?
13:32 Guest29589 left #gluster-dev
13:32 xavih pranithk: how do you follow the inodelk call on subvolumes without actually issuing a special request ?
13:33 shaunm joined #gluster-dev
13:33 xavih pranithk: oh, I think I see what you want to do...
13:33 xavih pranithk: let me think...
13:33 pranithk xavih: as part of transaction new, I thought we will issue inodelk and only after we get this, we will call cbk...
13:34 xavih pranithk: oh, that will require an additional callback function...
13:34 xavih pranithk: I wanted to avoid this
13:34 ndevos kkeithley_: ah, it seems that none of the storage sig packages are available for centos 6 yet, only available in -testing
13:34 xavih pranithk: my idea was to have a library call that immediately returns when a transaction is created. This simplifies code complexity a lot on xlators
13:34 pranithk xavih: I know, but that is leading to new xlators which must call inodelk fops of different xlators that it is not connected to and it is leading to races! at least the way I thought about it
13:36 xavih pranithk: no, no, all inodelk calls will be issued by this new xlator, not the original creator of the transaction. It doesn't matter
13:36 kkeithley_ ndevos: okay
13:36 pranithk xavih: exactly, we don't know if all the bricks which need to participate in the transaction actually got a chance to know that this transaction exists. That is the race I was talking about
13:36 xavih pranithk: the idea was that when a transaction reaches this xlator, the library (on behalf of the txn xlator, not the original creator of the transaction) starts issuing the inodelk calls
13:37 kdhananjay xavih: also, it looks like the txn library will need to set/reset THIS before making a call to a fop on behalf of "this" xlator every time, which is complex?
13:37 xavih kdhananjay: yes, but I don't think this is really complex
13:37 pranithk kdhananjay: hey, do you have links to IRC discussions with xavi before? could you give that info to shyam?
13:38 pranithk xavih: Do you see the race I am talking about?
13:38 xavih pranithk: why we don't know all the bricks that participate in the transaction ?
13:38 kanagaraj joined #gluster-dev
13:38 kdhananjay xavih: and it deviates from the free flowing nature of fop(s) in the gluster client stack.
13:38 pranithk xavih: okay. The way I understood it, let me explain..
13:38 kdhananjay stack = graph
13:39 zhangjn joined #gluster-dev
13:39 pranithk kdhananjay: may be we should ask one question after other. I will wait until your question is answered
13:39 kdhananjay pranithk: oops sorry. go ahead.
13:39 pranithk xavih: ^^ go ahead
13:39 zhangjn joined #gluster-dev
13:40 pranithk kdhananjay: I will wait. My question needs an example anyway. Let me recollect what I thought about. Meanwhile xavi can clear your doubt
13:40 kdhananjay pranithk: actually i thought it was related to the topic you guys are discussing
13:41 xavih kdhananjay: the library only encapsulates what the xlator should do, but we use the library to avoid too much work
13:41 xavih kdhananjay: another option would be to have callbacks in the xlator to have the STACK_WIND calls inside the xlator code if this seems better
13:42 kdhananjay not sure i got the stack_wind part
13:42 xavih kdhananjay: but I see this only as an implementation detail
13:42 xavih kdhananjay: the txn-xlator will need to issue inodelk calls to protocol/client, right ?
13:43 kdhananjay so the library calls can be made within the context of _any_ xlator, right?
13:44 xavih kdhananjay: my idea was to have this code inside the library since it knows which xlators are involved, making the code of the txn-xlator easier, but we could also do it in the txn-xlator itself by using callbacks
13:44 kdhananjay so if the library wants to execute inodelk on behalf of txn-n, then it will need to set THIS to point to txn-n, and then call the fop as if it is really being called by txn-n?
13:44 xavih kdhananjay: anyway, the library will need to touch THIS before the callbacks
13:44 atalur joined #gluster-dev
13:44 xavih kdhananjay: yes
13:45 kdhananjay hmm that means we will need to remember the list of xlators which will participate in this txn, right?
13:45 xavih kdhananjay: this is already done in timer and syncops
13:45 xavih kdhananjay: yes, of course
13:45 kdhananjay xavih: hmmm ok.
13:45 kanagaraj_ joined #gluster-dev
13:45 xavih kdhananjay: this list is built by the txn-xlator when it calls txn_start()
13:46 xavih kdhananjay: each call to start a transaction attaches an additional xlator to the corresponding transaction
13:47 kdhananjay xavih: ok.
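
(For the THIS handling: THIS in glusterfs is the per-thread "current xlator" pointer, and switching it around a wound call is the same trick gf_timer and syncop already use, as noted above. A sketch, assuming a hypothetical participant list kept on the transaction:)

    /* txn->xls[], txn->frames[] and txn->xl_count are assumed to be
     * filled in by each txn-n xlator when it called txn_start().    */
    static void
    txn_wind_inodelks(txn_t *txn, loc_t *loc, struct gf_flock *flock)
    {
        xlator_t *saved = THIS;
        int       i;

        for (i = 0; i < txn->xl_count; i++) {
            xlator_t *participant = txn->xls[i];              /* txn-i      */
            xlator_t *subvol = participant->children->xlator; /* its child  */

            THIS = participant;   /* the fop now appears to come from txn-i */
            STACK_WIND(txn->frames[i], txn_inodelk_cbk, subvol,
                       subvol->fops->inodelk, "gluster/txn", loc,
                       F_SETLK, flock, NULL);
        }

        THIS = saved;             /* restore the caller's context */
    }
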
13:48 pranithk xavih: oh you are saying that the txn-create/update is going to have a list of xlators? Hmm...
13:48 kanagaraj__ joined #gluster-dev
13:48 pranithk xavih: I didn't know about this. So I was wondering how will txn-0 xlator know about sending inodelk to brick-1 which is below txn-1
13:48 xavih pranithk: yes, the library needs to know who is involved in the transaction
13:48 xavih pranithk: no, no...
13:49 pranithk xavih: please explain this bit!
13:49 xavih pranithk: afr will create a transaction and initialize it with 2 childs (its two subvolumes)
13:49 xavih pranithk: that transaction will reach txn-0 and txn-1, right ?
13:49 pranithk xavih: oh that part I got it...
13:49 pranithk xavih: yeah
13:49 pranithk xavih: I think I got it now.
13:50 xavih pranithk: ok, when txn-0 receives the transaction, it will call txn_start(). At this point, the transaction library will associate txn-0 as a member of the transaction
13:50 xavih pranithk: when txn-1 does the same, the transaction library will know all xlators involved in the transaction
13:50 pranithk xavih: what will txn-1 do? txn-0 already took the lock right
13:51 xavih pranithk: at this point (or even earlier if we implement optimistic locking), the txn-library can start issuing inodelk request to the child of txn-0 and txn-1
13:51 shyam joined #gluster-dev
13:51 xavih pranithk: no, no, txn-0 has only registered that it belongs to the transaction
13:51 pranithk xavih: who sends locks then?
13:51 kotreshhr joined #gluster-dev
13:52 atalur joined #gluster-dev
13:52 xavih pranithk: txn-0 will register a callback when it calls txn_create()
13:52 xavih pranithk: txn_create() will return immediately
13:53 xavih pranithk: sorry, I meant txn_start(), not txn_create()
13:53 xavih pranithk: when txn-1 does the same (calling txn_start()), it will also register a callback
13:53 xavih pranithk: now the txn library can start calling (in background) the inodelk on the childs of txn-0 and txn-1
13:54 xavih pranithk: this can be done as part of the txn library itself or have code in the txn-xlator to do it
13:54 xavih pranithk: when the lock is acquired, the callback of both txn-0 and txn-1 will be called
13:54 xavih pranithk: at this point, the original fop will be executed
13:55 mchangir joined #gluster-dev
13:55 xavih pranithk: if at the moment of txn_start() the lock is already acquired, the callback will be called immediately
13:55 pranithk xavih: lock on what?
13:55 xavih pranithk: on the inode (or inodes) involved in the transaction
13:57 pranithk xavih: But if each txn-n needs a call back, I am wondering how do we make sure that the lock is successful on at least quorum number of nodes before moving ahead?
13:57 sankarshan_ joined #gluster-dev
13:57 xavih pranithk: let me write some pseudo-code. I'll send it to you and kdhananjay this afternoon. It will be easier to understand I think...
13:57 xavih pranithk: this is the job of the txn library
13:57 pranithk xavih: totally :-).
13:58 pranithk kdhananjay: ^^ what do you say?
13:58 xavih pranithk: each transaction can have a desired/minimum number of bricks that must succeed
13:58 xavih pranithk: the txn library will check that based on the number of successfully acquired locks
13:58 kdhananjay pranithk: xavih that would be great!
13:59 xavih pranithk: if the quorum is not met, the callback will be called as well, but with an error code indicating that the transaction has failed
13:59 xavih pranithk: in this case the txn-n xlators should unwind the request with an error
13:59 xavih pranithk: I'll write the code and send it to you...
13:59 xavih pranithk: hope this way I could explain my ideas better :P
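
(Until the promised pseudo-code arrives, here is one reading of the flow described above, with every name hypothetical: afr creates the transaction for its two children, each txn-n joins it with txn_start() and registers a continuation, the library winds the inodelks in the background, and once every lock attempt has answered it either runs all continuations or fails them when quorum was not met:)

    #include <errno.h>

    typedef void (*txn_cbk_t)(void *cookie, int op_errno);

    #define TXN_MAX 16               /* arbitrary, for the sketch */

    typedef struct txn {
        int       expected;          /* announced by the creator   */
        int       joined;            /* txn-n xlators that joined  */
        int       locked, failed;    /* inodelk results so far     */
        int       quorum;            /* minimum locks needed       */
        txn_cbk_t cbks[TXN_MAX];
        void     *cookies[TXN_MAX];
    } txn_t;

    /* Creator (e.g. afr): returns immediately, no extra callback. */
    txn_t *txn_create(int participants, int quorum);

    /* txn-n: join and register the continuation that will run the
     * original fop; when joined == expected, the library starts
     * winding inodelks to each participant's child in the background. */
    int txn_start(txn_t *txn, txn_cbk_t cbk, void *cookie);

    /* Library-internal: invoked from each inodelk callback. */
    static void
    txn_lock_done(txn_t *txn, int op_ret)
    {
        int i, err;

        if (op_ret == 0)
            txn->locked++;
        else
            txn->failed++;

        if (txn->locked + txn->failed < txn->joined)
            return;                  /* still waiting on some bricks */

        /* Every attempt answered: succeed or fail all participants. */
        err = (txn->locked >= txn->quorum) ? 0 : EIO;
        for (i = 0; i < txn->joined; i++)
            txn->cbks[i](txn->cookies[i], err);
    }
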
14:00 kanagaraj_ joined #gluster-dev
14:01 xavih pranithk, kdhananjay: I won't be here tomorrow morning, but I'll probably be able to answer emails if you have any question
14:01 ira joined #gluster-dev
14:01 kdhananjay xavih: np. We will also need some time for the pseudo code to sink in; we'll hopefully be ready with questions when you return. :)
14:02 xavih kdhananjay: great :)
14:02 kdhananjay xavih: pranithk so are we calling it a day?
14:04 kdhananjay in any case, i need to. still in office.
14:04 pranithk kdhananjay: yes yes
14:05 kdhananjay i'll go home and go through the logs to see if more was discussed. bye for now.
14:05 xavih kdhananjay: I've to go also
14:05 kdhananjay thanks both of you. :)
14:05 xavih kdhananjay, pranithk: see you soon :)
14:05 kdhananjay pranithk++
14:05 glusterbot kdhananjay: pranithk's karma is now 40
14:05 kdhananjay xavih++
14:05 glusterbot kdhananjay: xavih's karma is now 21
14:05 kdhananjay see you guys! off I go.
14:07 ashiq__ joined #gluster-dev
14:08 dlambrig_ joined #gluster-dev
14:13 dlambrig_ joined #gluster-dev
14:13 dlambrig_ left #gluster-dev
14:23 sankarshan_ joined #gluster-dev
14:29 kkeithley_ did 3.6.8 ever get announced?
14:47 Apeksha joined #gluster-dev
15:01 rraja shyam: ping
15:01 glusterbot rraja: Please don't naked ping. http://blogs.gnome.org/mark​mc/2014/02/20/naked-pings/
15:01 nbalacha joined #gluster-dev
15:05 lpabon joined #gluster-dev
15:07 hagarth joined #gluster-dev
15:11 mchangir joined #gluster-dev
15:29 kbyrne joined #gluster-dev
15:30 kbyrne joined #gluster-dev
15:35 Manikandan joined #gluster-dev
15:48 rafi joined #gluster-dev
15:50 sankarshan_ joined #gluster-dev
16:08 mchangir joined #gluster-dev
16:09 kbyrne joined #gluster-dev
16:10 Simmo joined #gluster-dev
16:10 Simmo Hi Devs!
16:10 shubhendu joined #gluster-dev
16:11 Simmo I'm just passing here to say that GlusterFS is amazing.. congrats to all of you! :-)
16:11 Simmo Unfortunately, I'm not skilled enough to help more than that :-p
16:12 pranithk Simmo: Depends. Do you want to write documentation?
16:12 Simmo At first I should understand it ;)
16:12 pranithk Simmo: What are you using glusterfs for?
16:13 Simmo Using it on AWS
16:13 Simmo Replicate mode
16:13 Simmo for keeping files in sync on all the nodes
16:13 Simmo given that an application needs a copy of those data locally
16:13 Simmo so far data are around 8GB
16:14 pranithk Simmo: Oh, 2 way replication or 3 way replication?
16:14 Simmo and they are split in small file (around 4-500Kb) and some bigger files (around 2-300MB)
16:15 Simmo In Prod I have only 2 nodes (for now)
16:15 Simmo but I'm already testing with 3 nodes
16:15 Simmo just to get practice on how to add/remove brick
16:15 Simmo :)
16:16 Simmo ah, so far based (still) on 3.5
16:16 Simmo looking forward to migrate to 3.7 : )
16:17 pranithk Simmo: okay.
16:17 Simmo Have a nice evening!
16:17 pranithk Simmo: Please make sure to post on gluster-users if you need any help/run into any problems!
16:17 Simmo Sure, I won't forget
16:18 Simmo Actually, I did it in #gluster
16:18 Simmo channel
16:18 Simmo regarding IRC
16:18 Simmo But I'm following also the mailing list
16:18 Simmo Thanks again!
16:19 Simmo left #gluster-dev
16:31 nishanth joined #gluster-dev
16:40 wushudoin joined #gluster-dev
16:42 shaunm joined #gluster-dev
17:04 skoduri joined #gluster-dev
17:13 jiffin joined #gluster-dev
17:17 jiffin kkeithley: ping one quick question. when common-ha come into picture, do we depreciate "gluster v set <volname> ganesha.enable <on/off>" (volume set option for exporting a volume) ?
17:29 mchangir joined #gluster-dev
18:10 ndevos jiffin: my guess is that the option stays, but the common-ha scripts are called in the background
18:10 ndevos jiffin: that is more of a question for jarrpa, but he doesnt seem to join this channel often
18:11 ndevos jiffin: maybe you can send him an email, with gluster-devel on cc?
18:12 ashiq__ joined #gluster-dev
18:12 jiffin ndevos: thanks. just want to confirm, otherwise the plans in my mind become in vain
18:12 ndevos jiffin: yeah, confirm it with jose, he's the main common-ha contact
18:13 jiffin ndevos: k, will do that
18:34 vimal joined #gluster-dev
18:39 kkeithley_ jiffin, ndevos, obnox: ping jarrpa in rh #rhs. I haven't seen anything wrt a design for converged HA. cc ira and obnox too.
18:41 kkeithley_ did the jenkins slaves lose DNS or something?
18:42 kkeithley_ csim: ^^^
18:52 ndevos kkeithley_: why the dns question?
19:01 kkeithley_ many regressions are failing with java errors
19:03 kkeithley_ I didn't look closely. Looks more like file permission problem deleting the job workspace maybe?
19:05 ndevos kkeithley_: I've deleted the workspace on several of the broken clients already
19:07 ndevos kkeithley_: if you're going to remove it on more slaves, please make sure there is not an active job running
19:23 lpabon joined #gluster-dev
19:46 ira joined #gluster-dev
20:01 raghu joined #gluster-dev
20:20 shaunm joined #gluster-dev
20:25 hagarth joined #gluster-dev
21:33 shyam left #gluster-dev
21:44 dlambrig joined #gluster-dev
22:27 hagarth joined #gluster-dev
