
IRC log for #gluster-dev, 2014-09-11


All times shown according to UTC.

Time Nick Message
00:40 tdasilva joined #gluster-dev
01:23 bala joined #gluster-dev
01:36 vimal joined #gluster-dev
02:08 nishanth joined #gluster-dev
02:09 _Bryan_ joined #gluster-dev
02:10 aviksil joined #gluster-dev
02:59 hagarth joined #gluster-dev
03:15 spandit joined #gluster-dev
03:24 bharata-rao joined #gluster-dev
03:48 kanagaraj joined #gluster-dev
03:49 itisravi joined #gluster-dev
04:09 shubhendu joined #gluster-dev
04:13 bharata-rao joined #gluster-dev
04:38 Rafi_kc joined #gluster-dev
04:38 rafi1 joined #gluster-dev
04:41 anoopcs joined #gluster-dev
04:44 ndarshan joined #gluster-dev
04:47 atinmu joined #gluster-dev
04:49 ppai joined #gluster-dev
04:50 jiffin joined #gluster-dev
04:55 deepakcs joined #gluster-dev
04:57 hagarth joined #gluster-dev
05:03 aviksil joined #gluster-dev
05:03 bharata-rao joined #gluster-dev
05:25 shubhendu_ joined #gluster-dev
05:30 raghu joined #gluster-dev
05:33 anoopcs joined #gluster-dev
05:35 aviksil_ joined #gluster-dev
05:49 kdhananjay joined #gluster-dev
05:52 atalur joined #gluster-dev
05:53 hagarth raghu: ping
05:53 glusterbot hagarth: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
05:53 hagarth glusterbot: thanks ;)
05:53 RaSTar joined #gluster-dev
05:54 hagarth raghu: ping, a question on http://review.gluster.org/8324
06:03 raghu hagarth: k
06:04 raghu hagarth: ??
06:04 MacWinner joined #gluster-dev
06:04 hagarth raghu: where does the inode_ref() in line 1521 get unref'd?
06:05 hagarth line 1521 of snapview-server.c
06:05 raghu hagarth: k let me check
06:05 hagarth raghu: ok
06:06 bala joined #gluster-dev
06:06 raghu hagarth: oops. seems like a bug. Might lead to an inode leak. Will add an unref
06:06 raghu hagarth: add an unref at the end
06:07 hagarth raghu: ok
06:09 hagarth raghu: while at it, please change all uuid_utoa to uuid_utoa_r() as well
06:19 soumya joined #gluster-dev
06:21 soumya joined #gluster-dev
06:32 lalatenduM joined #gluster-dev
06:35 RaSTar joined #gluster-dev
06:38 MacWinner joined #gluster-dev
06:41 aviksil__ joined #gluster-dev
06:41 hagarth joined #gluster-dev
07:10 hagarth joined #gluster-dev
07:13 rgustafs joined #gluster-dev
07:31 hagarth joined #gluster-dev
07:40 pranithk joined #gluster-dev
07:41 RaSTar joined #gluster-dev
08:33 rtalur_ joined #gluster-dev
08:36 rtalur__ joined #gluster-dev
08:50 soumya joined #gluster-dev
08:51 Yuan__ joined #gluster-dev
08:53 jiffin joined #gluster-dev
09:12 hagarth joined #gluster-dev
09:18 shubhendu_ joined #gluster-dev
09:19 ndarshan joined #gluster-dev
09:23 nishanth joined #gluster-dev
09:28 aravindavk joined #gluster-dev
10:07 jiffin joined #gluster-dev
10:07 soumya joined #gluster-dev
10:08 rtalur__ joined #gluster-dev
10:25 rgustafs joined #gluster-dev
10:30 ndevos ping pranithk: are you aware that http://review.gluster.org/8595 failed regression testing?
10:31 pranithk ndevos: Yes. I will take a look at it. Too many things to handle :'-(
10:32 ndevos pranithk: okay, no worries, I'm just checking :)
10:33 edward1 joined #gluster-dev
10:36 JustinClift pwd
10:36 JustinClift Heh
10:40 soumya joined #gluster-dev
10:42 kkeithley1 joined #gluster-dev
10:50 JustinClift Hehh Heh Heh, looking through the running MySQL instances on our servers + trying to figure out what's in use...
10:50 * JustinClift imagines spelunking would have a similar feeling :)
10:51 rtalur__ joined #gluster-dev
10:52 shyam joined #gluster-dev
10:59 xavih JustinClift: it seems that there is a problem with some test that leaves a stale glusterfs mount on the slave servers
11:00 xavih JustinClift: this mount is not cleaned up, and some commands, like 'df', fail with "Transport endpoint is not connected"
11:00 JustinClift xavih: In theory, the cleanup portion at the start of each run should take care of it.  Sounds like that's not happening though.
11:00 xavih JustinClift: the mount is in /tmp/mnt?????
11:00 xavih JustinClift: now slave23 has one of these mounts
11:00 JustinClift Hmmm, that sounds like the snapshot code?
11:01 xavih JustinClift: all regression tests running on it are failing
11:02 JustinClift Ugh.
11:02 xavih JustinClift: Do I umount the stale mount?
11:02 ndarshan joined #gluster-dev
11:02 xavih JustinClift: or do you want to see something before ?
11:02 nishanth joined #gluster-dev
11:03 shubhendu_ joined #gluster-dev
11:03 JustinClift Nah, I'll mark that slave as offline now.  Then I'll abort the current test, reboot the server, and kick off the test for the same CR manually.
11:03 JustinClift xavih: ^
11:03 xavih JustinClift: the current test is mine :P
11:03 JustinClift xavih: Yeah, I know. ;)
11:04 JustinClift xavih: Since it's aborted, it won't return a "FAILED" verdict.
11:07 JustinClift xavih: Let's see how this goes: http://build.gluster.org/job/rackspace-regression-2GB-optionalcascade/16/console
11:08 xavih JustinClift: Ok thanks :)
11:08 xavih JustinClift: I modified the patch. It really had a bug :-/
11:09 JustinClift It's not the only test which does something wrong.  There's some other test which creates a whole new directory structure (root owned) in the jenkins workspace directory.  It's not urgent to fix that, but I would like to get around to it at some point.
11:09 JustinClift xavih: No worries at all. :)
11:09 JustinClift xavih: It's what the testing is for. ;)
11:10 xavih JustinClift: :)
11:10 xavih JustinClift++ ;)
11:10 glusterbot xavih: JustinClift's karma is now 21
11:11 JustinClift xavih: Since that's running on the same slave VM, if it hits the same weird mount problem again please ping me.
11:16 xavih JustinClift: Ok, np
11:18 rtalur__ joined #gluster-dev
11:22 atinmu xavih, u there?
11:23 xavih atinmu: Yes, but only for a few minutes more...
11:23 atinmu xavih, it seems like some ec test cases have consistent failures
11:23 atinmu xavih, http://build.gluster.org/job/rackspace-regression-2GB-triggered/1425/consoleFull
11:23 atinmu xavih, http://build.gluster.org/job/rackspace-regression-2GB-triggered/1424/consoleFull
11:24 xavih atinmu: this failure is caused by a stale mount in /tmp/ left by another test
11:25 xavih atinmu: JustinClift has just restarted slave23 for this same reason
11:25 JustinClift Ahhh, so other slave VMs need restarting and rerunning too?
11:25 JustinClift Damn...
11:25 atinmu xavih, okay, has the test which doesn't unmount been identified?
11:25 xavih atinmu: a simple 'df' fails in this case
11:26 JustinClift atinmu: No, it hasn't.  If you have time to chase it down, that would be great.
11:26 atinmu xavih, JustinClift : we need to identify it...
11:26 JustinClift Option b) would be to adjust the cleanup() stuff to remove any left over mounts
11:27 JustinClift Or maybe do both :)
11:27 atinmu JustinClift, I would give it a try; for that I would need access to the slave VMs
11:27 atinmu JustinClift, b would be an easy fix for now, what say?
11:28 JustinClift atinmu: Sure.  Do you need access to any particular VM?
11:28 JustinClift I've just disconnected slave20 and slave25 from Jenkins
11:28 JustinClift eg tests won't run on them
11:28 * JustinClift was going to clean out /tmp/ and reboot them
11:29 JustinClift But, if you want one of them to try stuff out on, you're welcome to
11:29 soumya joined #gluster-dev
11:29 JustinClift atinmu: And yeah, option b) is fine ;)
11:29 atinmu JustinClift, I will work on it tomorrow, I will get in touch with you...
11:30 JustinClift atinmu: Sure.  When you're ready to do stuff let me know, and we'll get you setup with whatever you need :)
11:30 atinmu JustinClift, thanks :)
11:32 Yuan__ joined #gluster-dev
11:33 JustinClift atinmu: Seems like there's a /tmp/quotad.socket file left on the slave servers too.  That should probably be cleaned up by the cleanup() stuff as well.
11:33 JustinClift (thought for tomorrow :>)
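A minimal sketch of what option b) could look like in the harness's shell code, assuming cleanup() runs as root between tests; the /tmp/mnt* pattern and the quotad.socket path come from the conversation above, while the function name and the rest are illustrative, not the actual regression-harness code:

    cleanup_stale_state () {
        # Lazily unmount any glusterfs mount an earlier test left under /tmp;
        # -l detaches the mount even when its endpoint is already dead.
        mount | awk '$5 ~ /glusterfs/ && $3 ~ /^\/tmp\/mnt/ {print $3}' |
        while read -r m; do
            umount -l "$m"
        done
        # Drop the leftover socket noted above.
        rm -f /tmp/quotad.socket
    }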
11:34 xavih JustinClift, atinmu: I don't see an obvious test doing the mount to /tmp
11:34 atinmu JustinClift, this definitely helps in identifying the buggy place
11:34 xavih JustinClift, atinmu: I have to go now. I'll try to find the failing test later
11:35 atinmu xavih, sure
11:37 JustinClift Hmmm, maybe adjust the cleanup() function to check for left over mounts instead, and log the info to some text file.  Then run the full regression test and see what the text file says at the end?
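That logging idea, sketched under the same assumptions; "$0" merely stands in for however the harness names the current test script:

    # Instead of unmounting, record any glusterfs mount still present under
    # /tmp, so the offending test can be identified from the run's log.
    mount | awk -v test="$0" \
        '$5 ~ /glusterfs/ && $3 ~ /^\/tmp\// {print test ": " $3}' \
        >> /tmp/leftover-mounts.log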
11:37 JustinClift Hmmm, I shouldn't be thinking about this.  We really need our Gerrit stuff backing up.
11:37 * JustinClift gets back to that instead
11:38 JustinClift btw, I've cleaned out the /tmp/ dirs on slave20 and slave25, rebooted both VMs, and restarted both of the CRs which last failed on them
11:38 atinmu JustinClift, we could do that...
11:38 JustinClift I'm sure you'll figure it out. :)
11:38 atinmu JustinClift, :)
12:04 soumya joined #gluster-dev
12:13 itisravi_ joined #gluster-dev
12:19 rtalur__ joined #gluster-dev
12:44 pranithk joined #gluster-dev
12:51 rtalur__ joined #gluster-dev
13:02 soumya joined #gluster-dev
13:17 hagarth joined #gluster-dev
13:20 bala joined #gluster-dev
13:31 shyam joined #gluster-dev
13:43 jobewan joined #gluster-dev
13:45 bala joined #gluster-dev
13:53 aravindavk joined #gluster-dev
13:59 wushudoin| joined #gluster-dev
14:12 aviksil__ joined #gluster-dev
14:22 tdasilva joined #gluster-dev
14:25 bala joined #gluster-dev
14:27 aravindavk joined #gluster-dev
14:43 itisravi_ joined #gluster-dev
14:53 soumya joined #gluster-dev
14:57 _Bryan_ joined #gluster-dev
14:58 deepakcs joined #gluster-dev
16:45 aviksil__ joined #gluster-dev
17:07 charta joined #gluster-dev
17:07 charta hello
17:08 charta I'm stuck on a bug and am wondering whether I can work around it or not.
17:09 charta is there anyone willing to discuss this?
17:09 hagarth charta: fire away, somebody or the other might be able to help
17:09 charta cool
17:10 charta basically I have two bricks, sync goes fine between them, and 4 web nodes mount the volume www via fuse.glusterfs
17:10 charta also there is a dedicated deploy node that accepts incoming rsync from webistrano
17:10 charta problem:
17:11 charta "cd /var/www/backend && ln -s releases/20140911151733 /var/www/backend/current_tmp && mv -Tf /var/www/backend/current_tmp /var/www/backend/current"
17:11 charta creates /var/www/backend/current as a DIRECTORY that we cannot delete anymore
17:12 charta the only way to get rid of it is to stop all servers, and delete that "current" on brick itself
17:13 hagarth charta: what error do you get when you delete from the mount point?
17:13 charta no error
17:13 charta it succeeds
17:13 charta but the directory reappears after 1 sec or so
17:14 hagarth that looks strange, what version of glusterfs are you using?
17:14 charta 3.5.2-ubuntu1~trusty1
17:15 charta I am using deb http://ppa.launchpad.net/semiosis/ubuntu-glusterfs-3.5/ubuntu trusty main as my apt source
17:16 charta could it be that this bug is involved - https://bugzilla.redhat.com/show_bug.cgi?id=1117923
17:16 glusterbot Bug 1117923: medium, unspecified, ---, spalai, MODIFIED , DHT :- rm -rf is not removing stale link file and because of that unable to create file having same name as stale link file
17:17 hagarth no .. a directory reappearing is very strange
17:17 hagarth by any chance are any of your bricks offline?
17:17 charta you see, .../current should be a symlink, but it reappears as a directory instead
17:18 charta no way, all bricks online
17:18 hagarth charta: hmm, can you please file a new bz for this?
17:19 charta just checked again: State: Peer in Cluster (Connected)
17:19 hagarth charta: gluster volume status is a better way to see if bricks are online
17:19 charta yes, should I use redhat bugzilla?
17:19 hagarth charta: yes, please
17:20 charta gluster volume status reports both bricks online
17:21 hagarth charta: ok, cool
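Condensed, the reproducer charta describes is just an atomic symlink swap (paths taken from the transcript; the stat call is one way to observe the wrong file type):

    cd /var/www/backend
    ln -s releases/20140911151733 current_tmp  # new symlink under a temporary name
    mv -Tf current_tmp current                 # rename it over 'current' atomically
    stat -c %F current   # expected 'symbolic link'; on the mount it reads 'directory'
    rm -rf current       # appears to succeed, yet 'current' is back within a second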
17:22 charta another question
17:22 charta I tried to use bricks via nfs mount
17:23 charta but this way, if one brick goes down, I have to remount to another
17:24 charta do I have to care about proper mount/unmount, or is simply floating an IP between bricks enough?
17:24 hagarth charta: a floating VIP across nodes would be good enough
17:25 charta so nodes won't complain if the VIP jumps to another brick?
17:25 charta client nodes
17:25 hagarth ideally .. they should not
17:26 charta any caveats I should know?
17:27 hagarth the DRC (duplicate request cache) implementation in gluster is experimental, so non-idempotent requests might fail during a VIP failover/failback
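For instance, with an address managed by keepalived, CTDB, or similar (192.0.2.10 below is a placeholder, not a real brick), each client mounts the VIP rather than a specific brick host; gluster's built-in NFS server speaks NFSv3, and a hard mount lets the client block and retry transparently while the VIP moves:

    mount -t nfs -o vers=3,proto=tcp,hard 192.0.2.10:/www /var/www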
17:30 charta .
17:31 charta I wonder, if I have 4 webservers that mount the glusterfs volume as read-only, can I further optimize the system for increased READ speed?
17:31 davemc joined #gluster-dev
17:32 charta because the symfony framework alone takes up to ~3 secs to start
17:32 charta on gluster volume I mean
17:33 hagarth charta: I am not sure what calls the symfony framework makes upon startup
17:33 hchiramm_ joined #gluster-dev
17:34 charta we did a simple test - a for loop that does 300 includes only.
17:34 charta php I mean
17:34 charta that takes up to 4 secs, compared to 0.04secs on native fs
17:35 semiosis charta: join me in #gluster and i'll give you some tips for php
17:35 hagarth semiosis: over to you :)
17:35 semiosis :)
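The tips themselves moved to #gluster, so they are not in this log; a commonly suggested combination at the time (an assumption here, not semiosis's actual advice) was to lengthen FUSE metadata caching on the mount and stop PHP from stat()ing every include:

    # Longer FUSE metadata caching (mount.glusterfs options, in seconds);
    # server1:/www is a placeholder for the real volume.
    mount -t glusterfs \
        -o attribute-timeout=600,entry-timeout=600,negative-timeout=600 \
        server1:/www /var/www

    # php.ini / apc.ini: skip the per-request stat() of cached files and
    # keep resolved include paths around longer.
    #   apc.stat=0
    #   realpath_cache_size=4M
    #   realpath_cache_ttl=600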
17:49 lalatenduM joined #gluster-dev
17:55 portante joined #gluster-dev
18:00 sheba_ joined #gluster-dev
18:17 MacWinner joined #gluster-dev
18:35 charta here you are - https://bugzilla.redhat.com/show_bug.cgi?id=1140818
18:35 glusterbot Bug 1140818: high, unspecified, ---, gluster-bugs, NEW , symlink changes to directory, that reappears on removal
19:19 MacWinner joined #gluster-dev
19:20 ira joined #gluster-dev
19:37 foster_ joined #gluster-dev
19:39 rturk joined #gluster-dev
19:40 jobewan joined #gluster-dev
22:51 MacWinner joined #gluster-dev
23:52 tdasilva joined #gluster-dev
