IRC log for #gluster-dev, 2016-02-12

All times shown according to UTC.

Time Nick Message
01:32 shyam joined #gluster-dev
01:42 shyam joined #gluster-dev
02:06 wushudoin joined #gluster-dev
02:08 nishanth joined #gluster-dev
02:13 dlambrig_ joined #gluster-dev
02:24 baojg joined #gluster-dev
02:36 baojg_ joined #gluster-dev
03:42 shubhendu joined #gluster-dev
03:53 shyam joined #gluster-dev
03:53 kanagaraj joined #gluster-dev
03:54 shyam1 joined #gluster-dev
04:02 nbalacha joined #gluster-dev
04:04 luizcpg_ joined #gluster-dev
04:04 atinm joined #gluster-dev
04:05 itisravi joined #gluster-dev
04:06 hagarth itisravi: ping, was looking into the performance issue reported on ovirt-users.
04:06 itisravi hagarth: yup
04:06 hagarth itisravi: the virt profile settings do not seem to be in place there?
04:07 itisravi hagarth: you seem to be right
04:07 itisravi can't see it in volinfo output
04:07 hagarth itisravi: maybe worth asking him to enable that and check performance
04:08 itisravi hagarth: will do..If you create the volume using ovirt, is it enabled by default?
04:08 hagarth no, you need to explicitly enable "optimize volume for virtualization" or something similar in ovirt
04:10 itisravi hagarth: hmm ok
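
For reference, the virt profile discussed here is normally applied as a volume option group; a minimal sketch, assuming the stock group file that the gluster packages install under /var/lib/glusterd/groups/virt:

    # apply the predefined "virt" option group to a volume
    gluster volume set <VOLNAME> group virt
    # its options should then appear in the output of
    gluster volume info <VOLNAME>
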
04:12 aravindavk joined #gluster-dev
04:13 baojg joined #gluster-dev
04:20 aravindavk joined #gluster-dev
04:34 gem joined #gluster-dev
04:37 hgowtham joined #gluster-dev
04:42 mchangir joined #gluster-dev
04:43 josferna joined #gluster-dev
04:46 hgowtham_ joined #gluster-dev
04:54 pppp joined #gluster-dev
04:59 ndarshan joined #gluster-dev
05:05 karthikfff joined #gluster-dev
05:08 Apeksha joined #gluster-dev
05:09 post-fac1um joined #gluster-dev
05:09 dlambrig_ joined #gluster-dev
05:21 atalur joined #gluster-dev
05:21 kshlm joined #gluster-dev
05:31 hgowtham_ joined #gluster-dev
05:32 vmallika joined #gluster-dev
05:33 pppp joined #gluster-dev
05:43 kotreshhr joined #gluster-dev
05:46 poornimag joined #gluster-dev
05:47 ppai joined #gluster-dev
05:51 nishanth joined #gluster-dev
05:52 ggarg joined #gluster-dev
05:53 skoduri joined #gluster-dev
05:54 hgowtham_ joined #gluster-dev
05:55 vimal joined #gluster-dev
05:57 aspandey joined #gluster-dev
05:58 Bhaskarakiran joined #gluster-dev
06:00 kdhananjay joined #gluster-dev
06:01 sakshi joined #gluster-dev
06:10 atalur joined #gluster-dev
06:20 post-factum joined #gluster-dev
06:22 rafi joined #gluster-dev
06:24 asengupt joined #gluster-dev
06:28 ashiq joined #gluster-dev
06:30 Saravanakmr joined #gluster-dev
06:35 shubhendu joined #gluster-dev
06:43 sakshi joined #gluster-dev
06:44 hgowtham_ joined #gluster-dev
06:52 kshlm joined #gluster-dev
07:01 gem joined #gluster-dev
07:04 hgowtham_ joined #gluster-dev
07:06 rraja joined #gluster-dev
07:09 atalur_ joined #gluster-dev
07:10 Saravanakmr joined #gluster-dev
07:10 gem_ joined #gluster-dev
07:14 asengupt joined #gluster-dev
07:18 Apeksha joined #gluster-dev
07:20 Bhaskarakiran joined #gluster-dev
07:25 jiffin joined #gluster-dev
07:48 Bhaskarakiran joined #gluster-dev
07:54 gem_ joined #gluster-dev
08:19 hgowtham_ joined #gluster-dev
08:25 Saravanakmr joined #gluster-dev
09:08 kshlm csim, Are you up?
09:09 csim kshlm: yup
09:10 kshlm So shall we start?
09:10 csim but for some reason, I thought 9 UTC was in 1 hour
09:10 kshlm CET is UTC+1 right?
09:10 csim yeah
09:10 csim it just changes with summer/winter time
09:11 csim and it takes me 5 months to adjust
09:11 kshlm You have daylight savings??
09:11 csim yup
09:11 csim at a different time than the US, for more added fun
09:11 kshlm I don't understand daylight savings at all.
09:12 kshlm csim, So I'll go ahead and stop gerrit first.
09:13 kshlm jenkins has some jobs going on.
09:13 kshlm Should have put it into shutdown mode earlier.
09:13 csim that's why we have 8h :)
09:17 kshlm I've stopped gerrit.
09:20 csim ok, so let's resync
09:20 * csim also stop httpd
09:20 csim (and salt)
09:21 csim and cron
09:21 kshlm I stopped the git daemon as well.
09:22 post-factum good luck with upgrade, people
09:23 csim so rsync of the file is going on
09:23 csim new IP (in case i forget) is 66.187.224.201
09:24 kshlm You already had systems setup?
09:24 kshlm Cool
09:24 csim well, i did copy the system and prepared stuff for gerrit
09:24 kshlm I thought you had done a trial run.
09:25 csim yeah, I did
09:25 csim I took the image, started it, and fixed network
09:25 csim and now I sync the data
09:25 kshlm Anyways awesome. Gerrit should move out quickly then.
09:25 csim it's for jenkins that we have an issue, because it failed at the "started it" step :)
09:25 kshlm csim++
09:25 glusterbot kshlm: csim's karma is now 25
09:25 csim so I would likely need to shut down the VM, copy the disk and then fix and migrate
09:26 csim ok so I synced /review/
09:28 csim kshlm: we do not use the mysql database, do we ?
09:28 kshlm gerrit doesn't.
09:28 kshlm Maybe httpd does?
09:28 csim nope, httpd doesn't
09:29 kshlm Probably some old services used it.
09:29 csim yeah, I see reviewboard, etc
09:29 kshlm We used to run a tool called patchwork for reviews earlier.
09:29 csim yeah, likely that
09:29 csim I will clean later
09:29 csim kshlm: also, for /home ?
09:30 skoduri_ joined #gluster-dev
09:31 kshlm I don't think half the people with home dirs are around anymore.
09:31 csim yeah
09:31 kshlm Also, no one uses it.
09:31 csim ok
09:31 csim worst case, i can still migrate the directory
09:31 csim but they are nfs exported
09:31 csim so /var is being copied
09:32 kshlm Okay.
09:32 csim cute, didn't see the mail queue :/
09:32 csim gerrit tries to send mail to jenkins@build.gluster.org, thought we had fixed that in the past
09:33 kshlm I don't think we'd need to copy everything on the system. It's only being used for gerrit now.
09:33 kshlm We should probably just copy over gerrit stuff.
09:34 csim yeah, that's already done
09:35 csim I plan to keep the old disk anyway as a backup once the live instance is moved
09:35 kshlm If we can get gerrit running with what's copied over, we should be good.
09:35 csim (some would say "as a trophy")
09:35 kshlm I was thinking the same as well.
09:37 hgowtham_ joined #gluster-dev
09:40 csim kshlm: so, gerrit is running ok on the new ip, and I just need to copy the new certs and stuff
09:41 kshlm Cool!
09:41 kshlm How long will the dns switch take?
09:42 csim in theory, we do not know
09:42 csim in practice, it should be a few hours, unless some ISPs do weird stuff like caching for too long
09:43 kshlm I was thinking of using jenkins to see if we can connect, before we shut it down.
09:43 csim let me push the dns
09:44 csim for now https://66.187.224.201/#/q/status:open seems ok
09:44 kshlm Works for me as well.
09:44 kshlm I'm trying a git clone.
09:48 csim kshlm: does it work ?
09:48 kshlm Yup.
09:49 csim so dns is done, httpd is done, git seems ok
09:49 kshlm Was a little slow. Though, that might be me.
09:49 csim or maybe people are eating the bandwidth
09:50 csim but IIRC, we have some kind of old hardware until we move to the cage, so maybe that's the problem
09:51 mchangir joined #gluster-dev
10:00 kshlm csim, which nameservers are we using for gluster.org? I checked the redhat  nameservers, but they don't seem to have been updated.
10:02 csim kshlm: RH one
10:02 csim host dev.gluster.org ns1.redhat.com
10:02 csim dev.gluster.org has address 66.187.224.201
10:02 kshlm It's showing now!
10:03 csim quantum DNS !
10:03 kshlm google dns has picked up as well.
10:04 csim for a reason that escapes me, https://dev.gluster.org shows a centos page
10:04 sakshi joined #gluster-dev
10:04 csim but I am gonna fix that
10:04 kshlm httpd configuration probably.
10:04 csim yeah
10:05 kshlm I'm logged into gerrit again!
10:05 kshlm I'll verify with jenkins as well.
10:06 kshlm b.g.o has review.gluster.org set statically in /etc/hosts :\
10:06 kshlm Need to change that.
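
The stale pinning being described would look something like this (a hypothetical sketch; only the new address appears in the log, the old one is a placeholder):

    # /etc/hosts on build.gluster.org
    # <old-ip>      review.gluster.org   <- stale entry pointing at the retired VM
    66.187.224.201  review.gluster.org   # post-migration address
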
10:06 csim ah ah
10:06 csim (and that's also why we say "8h")
10:13 kshlm build.gluster.org has a weird resolv.conf.
10:13 csim using a local dns cache, iirc
10:13 csim we did that to try to fix network problem
10:13 kshlm I don't see anything listening on port 53.
10:14 csim mhhh
10:14 kshlm What's the 192.168.122.* network btw?
10:15 csim the libvirt network
10:15 kshlm I get that with libvirt.
10:15 csim build is a VM running on engg.g.o, which is a physical system in iWeb
10:15 csim and that's using libvirt+xen
10:16 kshlm Ah!
10:16 kshlm So the queries are being answered by engg.gluster.org
10:16 kshlm We also have libvirt running on build.g.o btw.
10:17 kshlm Is it okay if I change resolv.conf temporarily to point to 8.8.8.8?
10:17 csim sure
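
The temporary change being proposed, for reference:

    # /etc/resolv.conf on build.gluster.org -- temporary, while the
    # local caching resolver is sorted out
    nameserver 8.8.8.8
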
10:18 csim where do you see libvirt running on build.g.o ?
10:18 kshlm `pgrep libvirt`
10:19 kshlm pid 32050
10:20 csim indeed
10:20 mchangir joined #gluster-dev
10:20 csim I may need more coffee
10:21 csim but it doesn't make sense to have it running here :/
10:21 * csim stop it
10:23 kshlm jenkins (rather the jvm) is caching dns replies. And from what I've read, to change the cache time we need to restart the jvm.
10:24 csim so let it be
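
The JVM-side caching mentioned here is governed by the networkaddress.cache.ttl security property; a sketch of the usual knobs (file paths vary by JDK build, and a restart is indeed required for a running Jenkins to pick the change up):

    # either lower the positive-lookup TTL (seconds) at JVM startup...
    java -Dsun.net.inetaddr.ttl=60 -jar jenkins.war

    # ...or persistently, in $JAVA_HOME/jre/lib/security/java.security:
    # networkaddress.cache.ttl=60
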
10:34 csim kshlm: so, shall i restart the VM ?
10:34 kshlm Jenkins vm?
10:34 csim no, the jvm
10:34 csim well, jenkins
10:35 csim because jvm is java virtual machine, but also jenkins virtual machine :)
10:35 Bhaskarakiran joined #gluster-dev
10:35 kshlm Still 3 jobs running.
10:36 csim ok, so still waiting
10:36 csim was not sure if you were waiting on me or anything
10:37 kshlm I was waiting on the jobs to restart.
10:37 kshlm The longest estimate is still ~50 minutes.
10:41 kshlm I'm going to stop the jobs. They need to be retriggered anyways for other failures.
10:42 csim ok
10:44 kshlm All the jobs are stopped now.
10:46 kshlm I restarted jenkins.
10:50 kshlm csim, Jenkins is working fine with the migrated gerrit.
10:50 kshlm The trigger plugin has established the connection, and querying works.
10:51 csim ok so we can shutdown the old gerrit VM ?
10:51 csim and take a look at copy of jenkins ?
10:52 kshlm We can.
10:53 csim good
10:53 sakshi joined #gluster-dev
10:55 csim so old VM is stopped
10:56 csim kshlm: ok to stop jenkins for the time of the copy ?
10:56 kshlm Sure.
10:56 kshlm I'm out of build.g.o now.
10:57 csim do we tell people to use gerrit, and jenkins will pick up the jobs later, or does it not work like that ?
10:57 Bhaskarakiran joined #gluster-dev
10:57 kshlm Jenkins does support it, but our version of gerrit is old.
10:58 csim so 'no' :)
10:58 kshlm yeah, no.
11:04 Bhaskarakiran joined #gluster-dev
11:04 csim so transferring the file
11:04 csim shouldn't take too long since half of the work was already done, but still a long time
11:10 aspandey joined #gluster-dev
11:19 skoduri_ joined #gluster-dev
11:20 kotreshhr left #gluster-dev
11:24 csim wonder why we spend so much time gathering entropy while just watching rsync estimation do the job :)
11:38 shyam joined #gluster-dev
11:38 shyam1 joined #gluster-dev
11:43 Bhaskarakiran joined #gluster-dev
12:08 kanagaraj joined #gluster-dev
12:18 * csim grmbl
12:18 csim rsync: write failed on "/srv/build.gluster.org/system.img": No space left on device (28)
12:18 csim there is plenty of space, but well
12:20 Saravanakmr joined #gluster-dev
12:21 kshlm :\
12:21 kshlm csim, you can restart from where it stopped right?
12:22 csim kshlm: yeah, I guess so
12:22 csim I also did increase the partition size
12:22 csim I might go get food as well :)
12:22 csim I guess there is some stupid issue with sparse files going on
12:23 csim but if there is no space on /srv/build.gluster.org/system.img, there is plenty on /dev/misc/stomach
12:25 csim given that rsync is just reading files and showing nothing, I think it did copy enough stuff
12:27 kshlm What are you actually copying? The VM image or the files in the image?
12:27 kshlm I thought you were just copying the image. So technically shouldn't that just be 1 file?
12:28 csim the vm image
12:28 csim and I am checking with strace that it is doing something
12:28 csim so for now, it is just reading blocks on the iweb side, nothing is written yet on the RH side
12:35 csim (also checking fosdem videos, sound is not that great :/)
12:39 csim kshlm: so it did restart the sync
12:39 kshlm Hurray!!
12:40 josferna joined #gluster-dev
12:41 csim it say 16 minutes
12:41 csim then 92h
12:41 csim then back to 11
12:41 csim ok so food time
12:43 jiffin joined #gluster-dev
12:49 jiffin1 joined #gluster-dev
12:54 jiffin joined #gluster-dev
12:58 * kshlm will be afk for ~30 minutes
13:02 luizcpg joined #gluster-dev
13:03 gem joined #gluster-dev
13:09 jiffin1 joined #gluster-dev
13:12 jiffin joined #gluster-dev
13:17 jiffin1 joined #gluster-dev
13:17 kkeithley joined #gluster-dev
13:21 csim so I went to lunch, and the speed dropped to K/s
13:21 csim grmblblblblblblb
13:21 ndevos hey csim, kshlm, I can't download http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.7.8.tar.gz ? Is that expected?
13:21 ndevos also downloading with 0 K/s... no errors
13:21 csim ndevos: yes, the VM is down for migration
13:22 ndevos of course, when I write that, I get: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.7.8.tar.gz
13:22 csim since that's on jenkins
13:22 ndevos uh Failed to connect to bits.gluster.org port 80: Connection timed out
13:22 ndevos I don't know if that's on the same system... oh well, I get the tarball from the fedora cache
13:24 csim it is
13:24 csim $ host bits.gluster.org
13:24 csim bits.gluster.org is an alias for build.gluster.org.
13:25 jiffin joined #gluster-dev
13:26 caveat- joined #gluster-dev
13:26 kkeithley the tarball is in download.gluster.org as well
13:26 kkeithley http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.8/glusterfs-3.7.8.tar.gz
13:28 jiffin1 joined #gluster-dev
13:29 kkeithley that was copied from bits.gluster.org. It would be nice if the release job computed the sha256sum hash for the tarball at the same time the tarball is created. Maybe I'll look at that when {bits,build}.gluster.org comes back online
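
The release-job addition wished for above would be a one-liner next to the tarball step (a sketch, not the actual job definition):

    # alongside the tarball the job just produced:
    sha256sum glusterfs-3.7.8.tar.gz > glusterfs-3.7.8.tar.gz.sha256sum
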
13:29 ndevos kkeithley: I'm doing the CentOS SIG update for 3.7.8 now, you don't need to do that ;-)
13:29 kkeithley I don't need to do the build in the SIG?
13:30 ndevos yes, that is what I'm doing now
13:30 kkeithley thanks!  ndevos++
13:30 glusterbot kkeithley: ndevos's karma is now 238
13:31 * kkeithley thinks it would be nice if our other packaging volunteers would build packages without me having to ask for it. :-/
13:31 * kshlm is back
13:31 ndevos there are actually quite a few spec file changes that need to get sync'd to the SIG's version :-(
13:31 ndevos kkeithley++ for the Fedora packages :D
13:31 glusterbot ndevos: kkeithley's karma is now 109
13:32 kshlm kkeithley, Do you require a volunteer for any builds now?
13:32 kkeithley kshlm: no, they're already all done
13:32 kshlm Aw, next time then.
13:32 kshlm I want to build an official gluster package once.
13:32 kkeithley sure. that would be great.  klshm++
13:32 glusterbot kkeithley: klshm's karma is now 1
13:32 kkeithley lol
13:33 kkeithley kshlm++
13:33 glusterbot kkeithley: kshlm's karma is now 57
14:31 kkeithley ndevos: yeah, there were some changes. There are also some changes I need to submit to the upstream tree for el5.
13:35 kkeithley minor .spec changes
13:35 ndevos kkeithley: I'm doing el7 and el6 only for now, we need to be able to build without -server on el5 for the SIG
13:35 kkeithley okay
13:36 ndevos but other than that, you could pick the spec from the sig for el5, I guess
13:36 kkeithley I thought we hadn't been doing el5 at all.
13:36 jiffin1 joined #gluster-dev
13:36 ndevos also, vimdiff++
13:36 glusterbot ndevos: vimdiff's karma is now 1
13:36 ndevos el5 is still pending...
13:37 kshlm ndevos, You should be able to build glusterfs normally on el5 and just package the client bits.
13:41 kkeithley kshlm: I think it's like Fedora, it's all or nothing.  So the .spec has to only build the packages you want.
13:42 kshlm Does this mean whatever has been built must be packaged?
13:50 ndevos kshlm: yes, correct, we can't 'blacklist' packages to prevent them from getting out to the mirrors
13:51 kshlm Do we have to use the same spec file every where?
13:51 kshlm Couldn't we just remove the glusterfs-server package section from the el5 spec?
13:51 ndevos kshlm: the ugly workaround would be to delete the unneeded files after 'make install', but I do not want to accept that ;-)
13:51 jiffin joined #gluster-dev
13:52 ndevos all files that get installed with 'make install' need to be listed in a %files section somewhere, if we delete the -server part, building the rpms will fail
13:52 kshlm Ah.
13:53 kshlm This requires changes to our makefiles.
13:54 ndevos indeed, and ./configure flag
13:55 ndevos that should have been pushed from a certain downstream waaaay too long ago, but nobody really cared about it, since this downstream can blacklist packages
13:55 jiffin1 joined #gluster-dev
13:56 ndevos as a result, we don't have many people who are able to test the client on el5; they need to build it themselves
13:58 jiffin joined #gluster-dev
14:01 jiffin1 joined #gluster-dev
14:04 jiffin joined #gluster-dev
14:05 ndevos kshlm: btw, if you want, you're welcome to send a patch for the --without-server ./configure option ;-)
14:06 kshlm ndevos, Sure. I can easily add that option.
14:06 kshlm ;)
14:07 jiffin1 joined #gluster-dev
14:07 * kshlm cannot guarantee if that option will have any effect though.
14:08 ndevos hah, "easily"!
14:08 csim ah ah
14:08 ndevos I think I looked into it once, decided it wasn't a 15 minute task, and moved on
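
The shape of the change being discussed, as a hypothetical sketch only: an rpmbuild conditional wired through to a matching configure switch, so the %files lists and the installed files stay in agreement (the flag name mirrors the proposal above; it did not exist upstream at the time):

    # glusterfs.spec (sketch, not the shipped spec):
    # build the server bits by default, allow "rpmbuild --without server"
    %bcond_without server

    %build
    # pass a matching switch down to ./configure so the -server files
    # are never installed when the subpackage is disabled
    %configure %{?_without_server:--without-server}

    %if %{with server}
    %package server
    Summary: Clustered file-system server bits
    %description server
    Server side of GlusterFS (sketch).
    %endif

    %if %{with server}
    %files server
    %{_sbindir}/glusterd
    %endif
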
14:23 ndevos kkeithley: http://cbs.centos.org/koji/packageinfo?packageID=5 glusterfs-3.7.8 is building for el6 and el7 :)
14:24 ndevos changes to the .spec are in https://github.com/CentOS-Storage-SIG/glusterfs/branches
14:24 nbalacha joined #gluster-dev
14:29 kkeithley remind me why we aren't shipping -server in el5?
14:30 ndevos userspace-rcu, ssl and other bits?
14:30 ndevos you moved Fedora bug 1306729 to Gluster, should we not have cloned it instead?
14:30 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1306729 urgent, unspecified, ---, bugs, NEW , Glusterfs/Glusterd blocking root ports ( 1-1024 )
14:31 kshlm userspace-rcu should be fixed now.
14:31 kshlm Last I checked we have 0.7 on el5.
14:31 kshlm But SSL would affect clients as well as server.
14:31 kkeithley oh, but now we work with liburcu-0.6. Other bits?  I build el5 packages for d.g.o., although those might be crippled by only supporting insecure TLS ciphers.
14:33 hagarth csim: how's the jenkins disk copy coming along?
14:33 csim hagarth: still slow, not sure why
14:33 csim I/O on both side look fine
14:34 csim so network throughput is down, but no obvious reason why
14:34 kkeithley skoduri_, jiffin: On the four node ganesha+HA cluster I used for devconf demo, during fail-over, I'm seeing this in the logs:  ...epoch 56a2597d : run1 : ganesha.nfsd-21777[reaper] nfs4_clean_old_recov_dir :CLIENT ID :EVENT :Failed to open old v4 recovery dir (/var/lib/nfs/ganesha/v4old/node0), errno=2
14:34 hagarth csim: ok
14:34 kkeithley notice   ->   /var/lib/nfs/ganesha/v4old/node0
14:35 kkeithley there isn't any node named node0
14:35 csim hagarth: I am trying to diagnose that, but no luck
14:35 csim I might have to resort to randomly blaming something, like the firewall :)
14:36 ndevos kkeithley: maybe you can debug that by adding something like this at the start of the script? "exec 2>&1 > /tmp/debug-script.log.$(date +%s) ; set -x
14:36 kkeithley skoduri_, jiffin: and I'm not otherwise seeing where it would get node0 from
14:37 kkeithley ndevos: start of what script?
14:37 kkeithley ganesha-ha.sh?
14:37 ndevos yeah
14:38 csim ok, so not the network, as it runs fine when doing a benchmark
14:38 ndevos kkeithley: I'm not 100% sure if the "exec" part is right, you should check that before running ;-)
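
The caution is justified: in "exec 2>&1 > file" stderr is duplicated onto the terminal's stdout before stdout is redirected, so stderr never reaches the log. A sketch with the usual ordering:

    # redirect stdout to a fresh log file first, then send stderr to
    # the same place, and trace every command the script executes
    exec > /tmp/debug-script.log.$(date +%s) 2>&1
    set -x
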
14:39 kkeithley nothing in the script or the /etc/ganesha/ganesha-ha.conf mentions node0
14:40 ndevos no, it would be a variable somewhere, something sets it to node0 instead of the hostname
14:41 * ndevos guesses that, didn't check the script recently
14:41 hagarth csim: possibly qos in the network :D
14:42 kshlm How many hours are we in now?
14:42 kshlm Nearly 6 hours already.
14:42 kshlm Okay.
14:43 csim hagarth: nope, as I did the test over an ssh tunnel, and I also did restart rsync :/
14:43 csim well, could be but would be annoying
14:43 kshlm Come on rsync, you can do it! Faster, faster!
14:44 csim yeah, let's all raise our hands for rsync
14:44 csim \o/
14:44 * kshlm is off for dinner and will be back in ~45 minutes
14:44 kshlm \o/
14:46 csim next time, i will just fly to Montreal and drive to Raleigh...
14:49 kkeithley That's a long drive
14:50 kkeithley And you still have to show your passport at the border.
14:50 csim true
14:51 * csim thinks now of "silicon valley episode 2"
14:53 atinm joined #gluster-dev
14:55 aravindavk joined #gluster-dev
15:05 * kkeithley wasn't sure if showing your passport to get into the US is an issue. I know it is for some people.
15:05 shaunm joined #gluster-dev
15:09 csim usually, the less gov know about me, the better I feel
15:21 raghu joined #gluster-dev
15:26 dlambrig1 joined #gluster-dev
15:28 kshlm Is rsync done yet?
15:28 csim nope, still not
15:29 csim I suspect it is now syncing already-existing blocks, hence the almost total lack of transfer on both sides
15:30 csim kshlm: but we can abort the sync if that's taking too long, and restart this weekend ?
15:30 kshlm rsync has some options to help speed up large files.
15:31 kshlm --inplace should just sync changed blocks
15:31 kshlm Yeah. We could do it over the weekend as well.
15:31 csim maybe I should have used --sparse too
15:31 kshlm Not too many users on weekends.
15:32 csim and for --inplace, I thought this was the default
15:32 kshlm I read that --sparse and --inplace were exclusive. You can't use both together.
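
A sketch of the invocation being discussed, using the image path from the error earlier in the log and a placeholder for the iweb-side host (with rsync versions of this era, --sparse and --inplace did indeed conflict, so only one of the two can be picked):

    # resume the large single-file transfer, rewriting only changed
    # blocks in place instead of building a temporary copy; --partial
    # keeps interrupted data, --progress shows transfer status
    rsync -av --inplace --partial --progress \
        <iweb-host>:/path/to/system.img /srv/build.gluster.org/system.img
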
15:44 ggarg joined #gluster-dev
15:45 csim the UI and lack of progress reporting in rsync is annoying; I see it doing stuff, I just do not know how long it will take
15:50 foster joined #gluster-dev
16:44 shamurai joined #gluster-dev
16:59 kshlm csim, Our 8 hours are up.
16:59 csim kshlm: so let's restart the VM
17:00 kshlm You have no idea how much transferred?
17:00 csim 46467143501
17:00 rafi joined #gluster-dev
17:00 csim so around 42G
17:01 kshlm Is there anything we can do to make it faster the next time?
17:02 csim well, finding why it was slow, but I would need to use gdb likely
17:03 kshlm Instead of rsync, why not just tar/scp the whole image over next time.
17:03 kshlm rsync isn't much help on large single files.
17:04 csim since iweb network was unreliable, rsync was supposed to cope with that
17:05 kshlm I didn't consider that.
17:05 kshlm Hopefully we can just resume next time.
17:05 csim we did already resume this time :/
17:05 hagarth how much of data still needs to be transferred?
17:05 csim hagarth: technically, 80G, but since that's a sparse file, I have no idea
17:06 csim I already did a copy while the VM was running, but it resulted in a non-working VM for a reason I didn't understand
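
The sparse-file uncertainty above is measurable on the source side (a sketch; ls shows the apparent size, du the blocks actually allocated, and only the allocated blocks really need to cross the wire):

    ls -lh /srv/build.gluster.org/system.img   # apparent size (the ~80G)
    du -h /srv/build.gluster.org/system.img    # blocks actually allocated
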
17:07 csim kshlm: so jenkins is up now
17:07 kshlm csim, thanks.
17:07 kshlm So now we need to plan the next downtime.
17:08 csim kshlm: yeah
17:08 csim even if, now that gerrit is out, I can surely stop jenkins, copy the disk under a different name, restart jenkins, and then copy to the RH DC
17:09 kshlm We could try that.
17:09 csim after all, as long as downtime is brief, people are ok, no ?
17:10 kshlm Stop jenkins. Make a local image copy. Start jenkins. Sync the copy over. Start the copy in the RH data center. And then rsync the missing stuff.
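
That plan as a sketch (domain name and paths are illustrative; build runs as a libvirt+xen guest per the discussion earlier):

    # phase 1: brief downtime for a consistent local snapshot
    virsh shutdown build                    # illustrative domain name
    cp --sparse=always /srv/system.img /srv/system.img.snap
    virsh start build

    # phase 2: the long transfer runs against the snapshot while the
    # VM is back up; a final short rsync of the live image catches
    # whatever changed in the meantime
    rsync -av --inplace --partial /srv/system.img.snap <rh-dc-host>:/srv/
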
17:10 csim kshlm: also, do you know about gerrit replication? because maybe we could have a replicated setup to prevent issues (like 1 in the RH DC, 1 in rackspace)
17:10 csim (and the same for jenkins, after all)
17:11 kshlm As long as it's not mostly in the India working hours, I think downtimes are okay.
17:11 kshlm Gerrit replicates the git repositories.
17:11 csim well, if we are speaking of a 5 minute downtime, I am quite sure we can tolerate it during anyone's working hours :)
17:12 kshlm Lunch time in India will be a good time to do this.
17:13 kshlm But we'll need to stop launching jobs at least 3 hours before.
17:13 csim mhh
17:13 csim I guess I could also schedule a script that does this in the middle of the night
17:14 kshlm We need to fix resolv.conf on build.g.o and restart jenkins.
17:14 kshlm Jenkins can't connect to gerrit.
17:15 kshlm NetworkManager probably overwrote resolv.conf.
17:17 kshlm I've fixed it for now.
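
One common way to make such a fix stick on el6 (an assumption; the log does not say what the fix was) is to stop the network scripts from rewriting resolv.conf:

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (sketch)
    # keeps dhclient/NetworkManager from overwriting /etc/resolv.conf
    PEERDNS=no
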
17:17 csim mhh
17:18 csim kshlm: can you post about the problem on -infra, so we have a reference if it happens again ?
17:18 jiffin joined #gluster-dev
17:18 kshlm The resolv.conf problem?
17:18 csim I was about to get $HOME and maybe $PARTY
17:18 csim yeah
17:18 kshlm Okay.
17:19 kshlm You're still in Brno?
17:19 csim yes
17:20 csim had a few meetings and stuff to do (like this migration), so I preferred to travel on the weekend
17:21 kshlm And you're going to be on vacation next week?
17:21 kshlm Or is it the week after?
17:21 csim the week after
17:21 csim next week, i am gone to ansiblefest for 2 days
17:22 jiffin1 joined #gluster-dev
17:23 csim I am not sure if I updated my work calendar, but I will so I can point people to that :)
17:23 rafi joined #gluster-dev
17:25 csim kshlm: so going back to my flat, back in 20m, ping me if anything
17:25 kshlm csim, sure.
17:27 rafi1 joined #gluster-dev
17:29 owlbot joined #gluster-dev
17:29 jiffin1 joined #gluster-dev
17:31 hagarth csim, kshlm: will you update the announcement thread on -devel and -users?
17:31 kshlm hagarth, sure.
17:32 JoeJulian What distro is build.gluster.org?
17:33 owlbot joined #gluster-dev
17:37 owlbot joined #gluster-dev
17:38 jiffin joined #gluster-dev
17:40 kshlm JoeJulian, I think centos6
17:40 kshlm let me check
17:41 owlbot joined #gluster-dev
17:41 kshlm Yup. CentOS-6.7
17:41 JoeJulian bummer
17:41 kshlm Why?
17:41 jiffin1 joined #gluster-dev
17:41 JoeJulian No systemd
17:42 kshlm Are you a fan?
17:42 JoeJulian Absolutely. Makes management so much easier.
17:43 JoeJulian And it's surprisingly well thought out. Every time I tell someone, "I wish systemd did ..." I learn something more that it already does.
17:43 kshlm JoeJulian, Good to know. :)
17:44 kshlm I think the plan is to upgrade the os and the software themselves after migration.
17:44 kshlm So maybe we will upgrade to el7. Who knows?
17:45 kshlm Or, we may end up using the Centos-ci infra.
17:45 owlbot joined #gluster-dev
17:45 kshlm Need to wait and see what happens.
17:46 jiffin1 joined #gluster-dev
17:49 owlbot joined #gluster-dev
17:52 jiffin1 joined #gluster-dev
17:53 owlbot joined #gluster-dev
17:55 jiffin joined #gluster-dev
17:55 kshlm joined #gluster-dev
17:57 owlbot joined #gluster-dev
17:58 nishanth joined #gluster-dev
18:01 owlbot joined #gluster-dev
18:02 kaushal_ joined #gluster-dev
18:05 owlbot joined #gluster-dev
18:09 owlbot joined #gluster-dev
18:13 owlbot joined #gluster-dev
18:17 owlbot joined #gluster-dev
18:21 csim hagarth: didn't i ?
18:21 csim JoeJulian: for now, likely centos 6
18:21 owlbot joined #gluster-dev
18:21 hagarth csim: you updated the -infra thread
18:21 hagarth kshlm did it for -devel and -users
18:22 csim hagarth: oh, didn't see there was more than 1 :/
18:22 csim (but I did have trouble finding the existing one too...)
18:24 jiffin joined #gluster-dev
18:25 owlbot joined #gluster-dev
18:28 hagarth csim: ok
18:29 owlbot joined #gluster-dev
18:31 kshlm joined #gluster-dev
18:33 owlbot joined #gluster-dev
18:37 owlbot joined #gluster-dev
18:41 owlbot joined #gluster-dev
18:45 dlambrig1 left #gluster-dev
18:45 owlbot joined #gluster-dev
18:49 owlbot joined #gluster-dev
18:53 owlbot joined #gluster-dev
18:57 owlbot joined #gluster-dev
19:01 owlbot joined #gluster-dev
19:05 owlbot joined #gluster-dev
19:08 kshlm joined #gluster-dev
19:09 owlbot joined #gluster-dev
19:13 owlbot joined #gluster-dev
19:17 owlbot joined #gluster-dev
19:21 owlbot joined #gluster-dev
19:25 owlbot joined #gluster-dev
19:29 owlbot joined #gluster-dev
19:33 owlbot joined #gluster-dev
19:36 rastar joined #gluster-dev
19:37 rafi joined #gluster-dev
19:37 owlbot joined #gluster-dev
19:41 owlbot joined #gluster-dev
19:43 kshlm joined #gluster-dev
19:45 lalatenduM joined #gluster-dev
19:45 owlbot joined #gluster-dev
19:49 owlbot joined #gluster-dev
19:53 owlbot joined #gluster-dev
19:57 owlbot joined #gluster-dev
19:58 shamurai Hello, I am running into very poor NFS performance with VMWare and glusterfs. We are connected with 10Gbit to a 2 node cluster and are getting 100Mbit/s or less write and read performance. We are running GlusterFS 3.6.0 on CentOS 7. Each node is configured with 12 - 7200 RPM SAS drives in RAID 0
19:59 shamurai I am also using ovirt to manage the storage.
20:01 owlbot joined #gluster-dev
20:05 owlbot joined #gluster-dev
20:09 rastar joined #gluster-dev
20:09 owlbot joined #gluster-dev
20:13 owlbot joined #gluster-dev
20:17 owlbot joined #gluster-dev
20:21 owlbot joined #gluster-dev
20:22 kshlm joined #gluster-dev
20:25 owlbot joined #gluster-dev
20:29 owlbot joined #gluster-dev
20:34 owlbot joined #gluster-dev
20:38 owlbot joined #gluster-dev
20:42 owlbot joined #gluster-dev
20:46 owlbot joined #gluster-dev
20:50 owlbot joined #gluster-dev
20:54 owlbot joined #gluster-dev
20:58 owlbot joined #gluster-dev
20:59 kshlm joined #gluster-dev
21:02 owlbot joined #gluster-dev
21:06 owlbot joined #gluster-dev
21:10 owlbot joined #gluster-dev
21:14 owlbot joined #gluster-dev
21:18 owlbot joined #gluster-dev
21:22 owlbot joined #gluster-dev
21:24 wushudoin joined #gluster-dev
21:26 owlbot joined #gluster-dev
21:30 owlbot joined #gluster-dev
21:34 owlbot joined #gluster-dev
21:37 kshlm joined #gluster-dev
21:38 owlbot joined #gluster-dev
21:42 owlbot joined #gluster-dev
21:46 owlbot joined #gluster-dev
21:49 shamurai left #gluster-dev
21:50 owlbot joined #gluster-dev
21:54 owlbot joined #gluster-dev
21:58 owlbot joined #gluster-dev
22:02 owlbot joined #gluster-dev
22:05 luizcpg_ joined #gluster-dev
22:06 owlbot joined #gluster-dev
22:10 owlbot joined #gluster-dev
22:14 owlbot joined #gluster-dev
22:14 kshlm joined #gluster-dev
22:18 owlbot joined #gluster-dev
22:22 owlbot joined #gluster-dev
22:26 owlbot joined #gluster-dev
22:30 owlbot joined #gluster-dev
22:34 owlbot joined #gluster-dev
22:38 owlbot joined #gluster-dev
22:42 owlbot joined #gluster-dev
22:46 owlbot joined #gluster-dev
22:47 kshlm joined #gluster-dev
22:50 owlbot joined #gluster-dev
22:54 owlbot joined #gluster-dev
22:58 owlbot joined #gluster-dev
23:02 owlbot joined #gluster-dev
23:06 owlbot joined #gluster-dev
23:10 owlbot joined #gluster-dev
23:14 owlbot joined #gluster-dev
23:18 owlbot joined #gluster-dev
23:22 owlbot joined #gluster-dev
23:24 kshlm joined #gluster-dev
23:26 owlbot joined #gluster-dev
23:30 owlbot joined #gluster-dev
23:34 owlbot joined #gluster-dev
23:38 owlbot joined #gluster-dev
23:42 owlbot joined #gluster-dev
23:46 owlbot joined #gluster-dev
23:50 owlbot joined #gluster-dev
23:54 owlbot joined #gluster-dev
23:58 owlbot joined #gluster-dev
