
IRC log for #gluster, 2013-08-08


All times shown according to UTC.

Time Nick Message
00:04 joshit_ have you run into this issue at all?
00:04 joshit_ would be great to hear your results after you run your tests, i would appreciate your help a lot
00:04 JoeJulian Not that I knew of, though it's possible that a couple people over the last week or so have actually encountered this but didn't communicate it clearly.
00:13 dhsmith joined #gluster
00:37 JoeJulian wait a minute.... it's in 3.3.2...
00:37 joshit_ whats in 3.3.2?
00:37 yinyin joined #gluster
00:42 JoeJulian The init scripts necessary to make that all work right.
00:42 Durzo joined #gluster
00:42 Durzo Hi, im not seeing any output when i run any of the gluster top commands on my replica volume, this happens on both bricks
00:43 Durzo and i know there are reads happening all the time (its a production cluster)
00:44 JoeJulian what version of gluster?
00:44 Durzo 3.3.2
00:45 johnsonetti joined #gluster
00:46 Durzo johnsonetti, greetings fellow sydney sider :)
00:47 JoeJulian joshit_: yep, I can replicate the problem though...
00:47 johnsonetti as have I joshit. Hi Durzo
00:47 joshit_ awesome news JoeJulian
00:48 joshit_ question that puzzles me is how your centos 6.4 differs from a fresh centos 6.4?
00:49 JoeJulian that too
00:53 bala joined #gluster
00:58 JoeJulian hmm... maybe...
00:59 JoeJulian heh, nope
00:59 JoeJulian I've got to give Major a bad time... rackspace images default to selinux disabled.
01:00 kevein joined #gluster
01:00 JoeJulian That's funny because of http://major.io/2013/07/18/come-and-get-your-selinux-shirts/
01:00 glusterbot <http://goo.gl/b0fBUH> (at major.io)
01:02 vpshastry joined #gluster
01:10 Durzo so nobody got any idea why my gluster volume top commands show no output ??
01:17 JoeJulian Sorry, doing 4 things at once, and one of them is managing a three year old...
01:17 JoeJulian The only thing I can think of trying is restarting glusterd.
01:18 Durzo ok
01:18 Durzo its a prod system, but il try
01:22 johnsonetti joined #gluster
01:23 Durzo restarting gluster didnt help
01:24 Durzo strangely my etc-glusterfs-glusterd.vol.log file shows "0-transport: disconnecting now" every minute or so
01:27 JoeJulian Durzo: Even if you didn't have any access to the volume, you would still have output, ie. http://www.fpaste.org/30729/13759252/
01:27 glusterbot Title: #30729 Fedora Project Pastebin (at www.fpaste.org)
01:28 JoeJulian Check the client log and glusterd logs. (The "disconnecting now" message should just be about the close of a connection. That's normal for cli operations.)
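For reference, a sketch of the kind of top query under discussion, with a hypothetical volume and brick name; even an idle volume should print at least a per-brick header rather than nothing:

    gluster volume top myvol read
    gluster volume top myvol open brick server1:/export/brick1 list-cnt 10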
01:32 Durzo JoeJulian, http://www.fpaste.org/30731/75925505/ seems to be exiting code 254, no output like yours
01:32 glusterbot Title: #30731 Fedora Project Pastebin (at www.fpaste.org)
01:32 Durzo nothing in any logs to indicate an issue with volume or top
01:36 JoeJulian check peer status?
01:38 Durzo connected ok
01:39 JoeJulian There's got to be something in a log somewhere.. did you check all your glusterd logs for errors?
01:39 _pol joined #gluster
01:43 Durzo JoeJulian, yes
01:47 JoeJulian Durzo: Which distro are you running/
01:47 JoeJulian ?
01:47 Durzo ubuntu cloud 12.04
01:47 Durzo amazon ec2
01:47 Durzo using packages from semiosis
01:48 JoeJulian Do you mount any clients via nfs?
01:48 Durzo no
01:48 JoeJulian perfect
01:49 JoeJulian restart all your glusterd. That's perfectly safe as it leaves glusterfsd (bricks) alone.
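A hedged sketch of that restart; the service names are assumptions that differ by distro (glusterd for the RHEL/CentOS init script, glusterfs-server for the Ubuntu packages mentioned above):

    # RHEL/CentOS
    service glusterd restart
    # Ubuntu (semiosis packages)
    service glusterfs-server restart
    # the glusterfsd brick processes keep their PIDs and stay up throughout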
01:50 JoeJulian joshit_: something about 6.4 vs 6.3. I'm guessing I haven't rebooted a server since 6.3. 6.4 doesn't run the K80glusterfsd init.
01:52 Durzo both glusters restarted, top still not working
01:54 JoeJulian Need logs from cli.log and glusterd.vol.log from both servers 1 second before and 1 second after issuing only 1 gluster volume top command.
01:54 awheeler joined #gluster
01:55 Durzo hmm
01:55 Durzo ok
01:55 Durzo take a while.. brb
02:01 Durzo my paste seems to be breaking fpaste.org, when i hit submit it gives me a bad gateway error :/
02:01 Durzo third time lucky
02:01 Durzo http://www.fpaste.org/30736/59272631/
02:01 glusterbot Title: #30736 Fedora Project Pastebin (at www.fpaste.org)
02:06 Durzo would it matter that my gluster servers seem to be about 40 seconds apart?
02:06 JoeJulian It shouldn't, but I would fix that.
02:06 joshit_ hard thing is JoeJulian all our servers are 6.4 and rebooted :/
02:06 JoeJulian joshit_: I've found what changed and am testing a workaround.
02:06 joshit_ awesome
02:07 joshit_ thank you
02:07 joshit_ would love to find out whats causing it
02:09 plarsen joined #gluster
02:11 awheeler joined #gluster
02:12 JoeJulian joshit_: See bug 994745
02:12 harish joined #gluster
02:12 glusterbot Bug http://goo.gl/PA7KlA unspecified, unspecified, ---, amarts, NEW , bricks are no longer being stopped on shutdown/reboot
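A hedged way to check for the symptom in that bug on CentOS 6.x, i.e. whether the shutdown kill link for the brick daemons is registered at all; the script name comes from the K80glusterfsd mention above and the exact paths may vary:

    chkconfig --list glusterfsd
    ls /etc/rc0.d /etc/rc6.d | grep -i glusterfsd    # expect K80glusterfsd links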
02:14 JoeJulian Durzo: I think it's "Cluster lock held by 84df692d-0f25-4451-8333-3f65fcfa9dca"
02:15 JoeJulian But if all the glusterd processes were stopped, that shouldn't still exist.
02:15 Durzo i issued a restart to the init script
02:15 Durzo what that actually did... who knows
02:16 JoeJulian Theoretically should have worked...
02:16 Durzo i can check the pids and restart again
02:16 JoeJulian Sounds like a good plan. Also make sure that all glusterd are down at the same time.
02:17 JoeJulian Otherwise, the lock will still be held and re-propagated.
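A hedged sequence for clearing a held cluster lock on a two-server setup; the service name is an assumption (glusterfs-server is what the Ubuntu packages in use here call it):

    # on BOTH servers, before starting either one again
    service glusterfs-server stop
    pgrep -x glusterd && echo "glusterd still running"
    # once neither server shows a glusterd process
    service glusterfs-server start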
02:17 Durzo looks like they all changed pids apart from glusterfsd
02:17 vpshastry joined #gluster
02:17 JoeJulian Which is what we wanted.
02:18 Durzo i cant be sure they are down at the same time with a restart.. is it safe to issue a stop cmd to both without losing remote brick access?
02:18 joshit_ JoeJulian, awesome work, will test the workaround
02:18 Durzo (im thinking no)
02:18 joshit_ thank you for your time
02:18 JoeJulian It is safe.
02:19 JoeJulian The only thing that could cause a problem with is if a client tried to establish a new mount while glusterd is stopped.
02:19 Durzo ok
02:19 Durzo well here goes then
02:20 JoeJulian Wife's calling me to dinner. I'll follow up later.
02:21 Durzo ok
02:21 Durzo stopped both, can see glusterd not running at the same time. started both, gluster volume top - same behavior (empty).
02:24 joshit_ JoeJulian, your patch fixed it
02:24 joshit_ i'm over the moon
02:24 joshit_ very happy
02:24 joshit_ thank you for your time JoeJulian
02:26 johnsonetti have just added the check for the additional lock and tested on 4 centos 6.4 basic server installs running rpm from glusterfs-epel v 3.4.0-8 ... success .. THANK YOU
02:27 Bluefoxicy joined #gluster
02:27 johnsonetti no more locking on reboot
02:52 vshankar joined #gluster
03:08 dhsmith joined #gluster
03:10 kshlm joined #gluster
03:19 vpshastry left #gluster
03:24 shubhendu joined #gluster
03:29 lalatenduM joined #gluster
03:32 Durzo :/
03:34 bharata joined #gluster
03:43 edong23 joined #gluster
04:01 mohankumar joined #gluster
04:08 hagarth joined #gluster
04:10 dusmant joined #gluster
04:14 jag3773 joined #gluster
04:19 bala joined #gluster
04:36 ndarshan joined #gluster
04:39 aravindavk joined #gluster
04:44 ppai joined #gluster
04:56 shylesh joined #gluster
05:04 rjoseph joined #gluster
05:05 dhsmith joined #gluster
05:06 dhsmith joined #gluster
05:10 bala joined #gluster
05:18 lalatenduM joined #gluster
05:19 jporterfield joined #gluster
05:21 RameshN joined #gluster
05:25 RameshN joined #gluster
05:28 dusmant joined #gluster
05:32 jporterfield joined #gluster
05:35 ndarshan joined #gluster
05:36 shubhendu joined #gluster
05:37 bharata joined #gluster
05:37 glusterbot New news from resolvedglusterbugs: [Bug 927146] AFR changelog vs data ordering is not durable <http://goo.gl/jfrtO>
05:39 aravindavk joined #gluster
05:45 ppai joined #gluster
05:52 jag3773 joined #gluster
05:57 mooperd joined #gluster
06:03 shruti joined #gluster
06:08 raghu joined #gluster
06:10 deepakcs joined #gluster
06:15 psharma joined #gluster
06:19 rastar joined #gluster
06:22 Guest40030 joined #gluster
06:29 asias joined #gluster
06:34 vimal joined #gluster
06:43 ngoswami joined #gluster
06:44 kanagaraj joined #gluster
06:58 guigui3 joined #gluster
07:06 hybrid5121 joined #gluster
07:31 ricky-ticky joined #gluster
07:33 ngoswami joined #gluster
07:42 kshlm joined #gluster
07:42 RameshN joined #gluster
07:50 harish joined #gluster
07:53 mooperd joined #gluster
08:04 shruti joined #gluster
08:04 ndarshan joined #gluster
08:05 kanagaraj joined #gluster
08:06 aravindavk joined #gluster
08:06 mooperd joined #gluster
08:07 andreask joined #gluster
08:28 puebele1 joined #gluster
08:30 Norky joined #gluster
08:35 shubhendu joined #gluster
08:46 puebele1 joined #gluster
08:48 dusmant joined #gluster
08:49 RameshN joined #gluster
08:50 mbukatov joined #gluster
08:50 RichiH joined #gluster
08:51 hagarth xavih: ping
08:52 RichiH what's the current status of gluster as an iscsi source?
08:53 RichiH the internet seems to tend to "meh, but may work"
08:57 SynchroM joined #gluster
08:57 ndevos RichiH: gluster itself provides a filesystem, so you would use images on the filesystem to export via iscsi (those images can be backed by a logical volume if you use the bd-xlator)
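A rough sketch of that image-file approach using scsi-target-utils (tgt) as the iSCSI target; the mount point, image path and IQN are made-up examples, and the bd-xlator/logical-volume variant is not shown:

    # create an image file on the glusterfs fuse mount
    truncate -s 100G /mnt/glustervol/iscsi/disk1.img

    # /etc/tgt/targets.conf
    <target iqn.2013-08.com.example:gluster-disk1>
        backing-store /mnt/glustervol/iscsi/disk1.img
    </target>

    service tgtd restart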
08:58 ninkotech joined #gluster
08:59 ninkotech_ joined #gluster
09:15 harish joined #gluster
09:19 lalatenduM joined #gluster
09:19 psharma joined #gluster
09:30 edward1 joined #gluster
09:34 karthik joined #gluster
09:36 shubhendu joined #gluster
09:37 dusmant joined #gluster
09:41 wgao joined #gluster
09:41 kanagaraj joined #gluster
09:42 aravindavk joined #gluster
09:42 RameshN joined #gluster
09:43 lyang0 joined #gluster
09:49 Norky joined #gluster
10:05 ppai joined #gluster
10:10 sac_ joined #gluster
10:10 social joined #gluster
10:11 sac joined #gluster
10:13 kanagaraj joined #gluster
10:29 aravindavk joined #gluster
10:29 shubhendu joined #gluster
10:32 bulde joined #gluster
10:33 je23 joined #gluster
10:33 je23 Hi, anyone have experience of using rdiff-backup with gluster ?
10:35 RameshN joined #gluster
10:38 glusterbot New news from resolvedglusterbugs: [Bug 962619] glusterd crashes on volume-stop <http://goo.gl/XXzSY>
10:40 mooperd joined #gluster
10:42 dusmant joined #gluster
10:54 lpabon joined #gluster
11:08 kkeithley1 joined #gluster
11:09 piotrektt joined #gluster
11:14 andreask joined #gluster
11:21 mohankumar joined #gluster
11:26 spider_fingers joined #gluster
11:26 RameshN_ joined #gluster
11:32 B21956 joined #gluster
11:35 shubhendu joined #gluster
11:52 vpshastry joined #gluster
11:52 vpshastry left #gluster
12:00 shubhendu joined #gluster
12:08 glusterbot New news from resolvedglusterbugs: [Bug 990438] gnfs: server nfs and unlock doesn't happen <http://goo.gl/DFK8bR>
12:14 plarsen joined #gluster
12:17 hagarth joined #gluster
12:30 rgustafs joined #gluster
12:31 chirino joined #gluster
12:36 awheeler joined #gluster
12:37 awheeler joined #gluster
12:45 je23 joined #gluster
12:47 recidive joined #gluster
12:55 Durzo joined #gluster
12:56 Durzo im planning on doing a rolling upgrade of 2 replica bricks and 2 frontend clients (non-nfs) - is there anything i should be aware of before i begin? wise words? sacrificial rituals?
12:56 Durzo .... from 3.3.2 to 3.4
12:58 JoeJulian It's usually considered good form to wait until Friday at about 3:30 in the afternoon so that if anything goes wrong everybody else is aggravated too, especially if it's the end of the month.
12:59 aliguori joined #gluster
12:59 JoeJulian ... but in all seriousness, I've had several people tell me that it works.
12:59 neuroticimbecile joined #gluster
13:00 Durzo one question
13:00 Durzo how do i gracefully stop all gluster processes?
13:00 JoeJulian (unrelated) bug 888174
13:00 glusterbot Bug http://goo.gl/OUhHe medium, medium, ---, pkarampu, CLOSED CURRENTRELEASE, low read performance on stripped replicated  volume in 3.4.0qa5
13:00 Durzo if i issue a stop to the init script, it doesnt shutdown everything
13:01 Durzo im thinking i may have to reboot a whole brick just to get it upgraded
13:01 Durzo or should i just 'kill' it? what will that do to the clients
13:01 JoeJulian Which distro/release?
13:01 Durzo ubuntu 12.04 cloud, packages from semiosis
13:01 Durzo non-nfs
13:01 Durzo (and my damn volume top still doesnt work)
13:02 JoeJulian Yes, actually. SIGTERM will properly close TCP connections and the clients will not have to wait for ping-timeout.
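A hedged sketch of a graceful per-server stop using SIGTERM (pkill's default signal) before upgrading the packages:

    pkill -x glusterd      # management daemon
    pkill -x glusterfsd    # brick processes
    pkill -x glusterfs     # any local client / self-heal / gluster-nfs processes
    pgrep -l gluster       # verify nothing gluster-related is left running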
13:02 Durzo orly? handy
13:02 Durzo so, upgrade bricks first, or clients first?
13:02 JoeJulian @3.4
13:02 glusterbot JoeJulian: (#1) 3.4 sources and packages are available at http://goo.gl/zO0Fa Also see @3.4 release notes and @3.4 upgrade notes, or (#2) To replace a brick with a blank one, see http://goo.gl/bhbwd2
13:02 Norky joined #gluster
13:02 JoeJulian @3.4 upgrade note
13:02 glusterbot JoeJulian: I do not know about '3.4 upgrade note', but I do know about these similar topics: '3.4 upgrade notes'
13:02 JoeJulian @3.4 upgrade notes
13:02 glusterbot JoeJulian: http://goo.gl/SXX7P
13:02 Durzo am on that page already
13:03 JoeJulian I wasn't... :D
13:03 Durzo it doesnt say how exactly to stop all gluster procs
13:03 Durzo but if your saying sigterm...
13:04 Durzo im worried from reading the comments people are saying due to the port changes that clients have issues reconnecting
13:04 JoeJulian I was just double checking. Looks like it's still servers first.
13:04 JoeJulian Hmm, I see.
13:05 JoeJulian I wonder if client first would solve that, or if the commenters have firewalls...
13:05 JoeJulian "After upgrading from 3.3.0 (Ubuntu packaged version) to 3.4 (source), I ran into this problem: http://lists.gnu.org/archive/html/g​luster-devel/2013-01/msg00011.html
13:05 JoeJulian There were old libs in /usr/lib, new ones in /usr/local/lib."
13:05 glusterbot <http://goo.gl/p6Epmn> (at lists.gnu.org)
13:05 JoeJulian ... well duh!
13:06 awheeler joined #gluster
13:06 Durzo last comment is especially worrying.. says a sigterm wont work
13:06 Durzo yeah i dont think il have issues with old files using debs
13:06 Durzo ohwell fuckit here goes nothing
13:06 JoeJulian lol
13:06 * Durzo sacrifices a virgin
13:08 JoeJulian There needs to be a happy medium somewhere. That and bug 988946 both expect that when you stop the management daemon everything else should stop. I disagree with that. I like the ability to restart one process without interrupting all my bricks.
13:08 glusterbot Bug http://goo.gl/ZWKpy2 urgent, unspecified, ---, amarts, NEW , 'systemd stop glusterd' doesn't stop all started gluster daemons
13:09 Durzo true, but as a sysadmin when i issue a stop command i expect that it.. you know.. stops
13:10 Durzo perhaps there should be something similar to squid, if you issue squid a stop it gracefully stops and can take a few minutes.. but you can issue the init script a 'kill' command too
13:10 Durzo which literally kills the squid procs
13:10 JoeJulian I have 60 bricks per server. I do NOT want all 60 starting up just because I needed to restart glusterd.
13:11 Durzo thats alot of bricks
13:11 Durzo are you... crazy?
13:11 JoeJulian perhaps... :D
13:12 JoeJulian I have 15 volumes, 4 bricks per server per volume.
13:15 jclift Durzo: Hmmm, if someone has clients mounting via NFS (which can't fail over to a different host)... interrupting their NFS server just because glusterd needs to change a few settings doesn't seem optimal
13:15 Durzo nfs can die in a fire
13:16 JoeJulian +1
13:16 Durzo if nfs worked, i wouldnt use gluster
13:16 JoeJulian -1
13:16 Durzo and if gluster worked, i wouldnt still have nfs servers around :P
13:16 jclift Sure, but we have to take that kind of usage into account... even if we don't like it
13:16 Durzo sure, im just saying give us an _option_ atleast to stop everything
13:17 Durzo like another argument to an init script
13:18 jclift Personally, as a SysAdmin kind of person I'm kind of a control freak about how programs run on "my" servers.  So, at least having the option to kill or not kill everything makes sense to me.  Can see useful scenarios for both.
13:18 JoeJulian Except you're using ubuntu which doesn't actually use init scripts...
13:18 bennyturns joined #gluster
13:19 JoeJulian With init scripts there's glusterd and glusterfsd. Stop them both. When I was packaging the rpms I also had a gluster-nfs init for stopping that too.
13:19 harish joined #gluster
13:19 jclift Well then, that sounds like it does the job :D
13:19 JoeJulian With upstart... well... um...
13:19 JoeJulian same's true for systemd
13:20 deepakcs joined #gluster
13:20 Durzo eh wtf is it just me or did volume status disappear from 3.4 ?
13:21 jclift Heh, it shouldn't have
13:21 kkeithley_ Were you doing that packaging outside of fedora/koji? There wasn't ever a gluster-nfs init.d script
13:21 JoeJulian I have an idea that you could use /bin/true for the start command in an upstart/systemd job and the same kill script from the init for the stop command...
13:21 JoeJulian kkeithley_: I was. I never got as far as getting sponsored (was just getting started on that process).
13:21 dewey joined #gluster
13:23 kkeithley_ the whole fedora package co-maintainer, sponsorship thing is pain in the patoot. I'm trying to get a couple more devs in BLR on as co-maintainers and it seems to be going nowhere.
13:24 Durzo so is this new port - 49512 static or is it a range ?
13:24 awheele__ joined #gluster
13:25 JoeJulian should be a range.
13:27 jdarcy joined #gluster
13:28 Durzo shit
13:28 Durzo root@salvador:~# gluster volume heal ds0
13:28 Durzo Commit failed on mordecai. Please check the log file for more details.
13:29 awheeler joined #gluster
13:32 B21956 joined #gluster
13:32 awheele__ joined #gluster
13:37 Durzo i dont like this
13:37 Durzo clients are trying to connect to my newly upgraded brick on port 24010 and theres nothing listening on that port so its sending back RST's
13:38 Durzo upgrade fail :/
13:47 JoeJulian What if you kill -HUP the client?
13:48 Norky joined #gluster
13:48 Durzo unmounting / remounting did it, but i had to take the webserver out of the load balancer, which isnt really a rolling upgrade
13:49 JoeJulian Yeah, that's why I was curious about a HUP. That'll tell the client to try to reload the volume configuration.
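What that HUP might look like, with a made-up mount point; the idea is that the fuse client re-checks its volfile on SIGHUP:

    pgrep -fl glusterfs              # find the PID of the client serving /mnt/gluster
    kill -HUP <PID>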
13:49 Durzo ahhrrmm
13:49 Durzo il try that on the next one
13:49 Durzo upgrading brick 2 now
13:50 Durzo which also happens to be the mastervolfile-server
13:50 Durzo if its going to screw up, here would be just my luck
13:55 Durzo sending a HUP didnt fix the port issue
13:55 Durzo client logged: 0-glusterfs: No change in volfile, continuing
13:55 Durzo and its still trying the old port
13:55 yinyin_ joined #gluster
13:57 JoeJulian change the client log-level
13:58 JoeJulian Then there will, indeed, be a change in the volfile so it might force it.
14:00 Durzo already in the process of removing the second client from load balancer to gracefully remount
14:03 deepakcs joined #gluster
14:10 sprachgenerator joined #gluster
14:13 [o__o] joined #gluster
14:18 tobias- I get "No such file or directory" on a directory, on the file that is listed. I'm running 3.0.5, it seems to be like that until file is fully replicated to all three replicas. Why is that?
14:24 Durzo you have a self heal issue
14:24 JoeJulian Check your client logs
14:24 JoeJulian 3.0.5... wow...
14:25 Durzo i second that wow
14:27 Durzo btw JoeJulian, upgrade looks good. just wasnt very "rolling"
14:30 Durzo yay, my volume top works!
14:30 JoeJulian yay
14:31 tobias- JoeJulian: nothing is happening in the logs
14:31 tobias- debian squeeze runs 3.0.5; but i'll upgrade it in a couple of month to 3.3.x
14:31 JoeJulian @ppa
14:31 glusterbot JoeJulian: The official glusterfs 3.3 packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.3 QA: http://goo.gl/5fnXN -- and 3.4 QA: http://goo.gl/u33hy
14:31 tobias- (or if 3.4.x goes stable)
14:32 JoeJulian How do you trigger that error message?
14:33 tobias- running dd if=/dev/zero of=file bs=1M count=1024 and just hit ls -l in the same directory on another client node
14:33 lpabon joined #gluster
14:33 Durzo 3.4 is stable.... isnt it? oh god tell me its stable
14:34 JoeJulian Durzo: The devs consider it to be. The test results I've seen so far seem to agree.
14:34 Durzo thank fuck
14:34 JoeJulian I know they would really like us all to adopt 3.4 so they can stop backporting fixes to 3.3 :)
14:36 JoeJulian btw, tobias-, 3.0 hasn't had a bug fix since May 1, 2011 (version 3.0.8).
14:37 Durzo tobias-, i used to see the same issues you're having with 3.3.0 with cron lock files (small, short lived files (race condition in creation/deletion)) - i've just upgraded my ubuntu cluster to 3.4.0 to hopefully fix it
14:38 JoeJulian tobias-: I would check netstat to ensure the clients are connected to all the bricks, though it's probably a known issue.
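A hedged version of that check, run on the client; with three replicas there should be one ESTABLISHED connection per brick (plus the volfile server):

    netstat -tnp | grep glusterfs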
14:38 manik joined #gluster
14:39 JoeJulian If it's connected and there's no errors in the logs, the only other thing I can think to try would be removing performance translators (one at a time) to see if it's one of them.
14:40 Durzo hey JoeJulian, does a geo-replication endpoint need to have glusterd running? it just needs the files installed so the bricks can call gsyncd over ssh right ?
14:40 JoeJulian That's right.
14:40 Durzo i.e i can safely stop glusterd from starting on boot
14:41 JoeJulian yes
14:41 tobias- JoeJulian: ah forgot to check that. I actually thought about doing that but haven't checked the connectivity yet :p
14:41 tobias- i can't "replicate" the problem it with latest 3.3.x
14:42 theugster joined #gluster
14:42 Durzo ok next and hopefully last Q for the night.. would it be wise to resume a geo-replication from 3.3 on 3.4, or should i blow away the old geo-repl directories and sync from scratch?
14:42 Durzo I have over 100G of data, so really hoping resuming will work
14:42 JoeJulian I haven't seen a lot of changes in that code, so it should be safe.
14:43 Durzo really? damn.. geo-repl always caused me issues on 3.3
14:43 Durzo constantly going faulty for no reason
14:43 vbellur Durzo: geo-repl is being revamped. I would await 3.4.1.
14:43 JoeJulian well, not "no reason" but I know what you mean.
14:43 saurabh joined #gluster
14:43 Durzo vbellur, when you say await, you mean.. turn it off until then?
14:44 JoeJulian you changing your identity?
14:44 vbellur Durzo: if that's an option. There would be an upgrade path provided for better scalability and performance.
14:45 Durzo vbellur, thats kinda scary
14:45 Durzo is there a release date for 3.4.1 ?
14:45 vbellur JoeJulian: plan to use nicks interchangeably.
14:45 The_Ugster Howdy all, I've got a bit of a different question this time. I have a three node Proxmox Cluster, and I can attach my Gluster volume to it without issue, but when I attempt to create a VM or CT on it as storage, it denies me access. From the Proxmox nodes, I can create directories and put data on the NFS share.
14:45 vbellur Durzo: considering 08/21, will send out a mail on gluster-users shortly
14:45 Durzo vbellur, thats not so bad.. ok thanks
14:46 vbellur JoeJulian: would a weekly IRC meeting help to determine content of minor releases?
14:47 JoeJulian How about bi-weekly?
14:48 JoeJulian vbellur: Nobody likes meetings. ;)
14:48 Durzo i cant really have my prod cluster go without backups for 2 weeks. is something horrible going to happen if i use geo-repl on 3.4.0 in the meantime?
14:49 vbellur JoeJulian: sounds good ;)
14:50 ricky-ticky joined #gluster
14:50 vbellur Durzo: 3.4.0 would be similar to 3.3 .. nothing significantly different.
14:50 Durzo ok cool
14:50 Durzo atleast it will work half the time then
14:51 puebele joined #gluster
14:51 JoeJulian Pretty much any time Thu is good for me. Friday is good outside of "date night" which would be Saturday 01:00Z to roughly 04:00Z.
14:53 vbellur Maybe we could consider Thu eve pacific. How does Wed look for you?
14:54 johnmark vbellur: oh no... I want hagarth :(
14:55 JoeJulian lol
14:55 JoeJulian Wed is about the worst. :D
14:56 bala1 joined #gluster
14:57 aliguori joined #gluster
14:57 vbellur JoeJulian, johnmark: need to find a slot for a bi-weekly community meeting on IRC..
14:59 TuxedoMan joined #gluster
15:01 dhsmith joined #gluster
15:01 dhsmith joined #gluster
15:01 Durzo FYI JoeJulian it seems glusterd is required to be running on geo-repl slave
15:02 Durzo gsyncd tries to connect to it via 127.0.0.1
15:04 JoeJulian ndevos, vbellur: What's bug 958389 (in summary)?
15:04 glusterbot Bug http://goo.gl/9Lgxpm is not accessible.
15:04 ndevos JoeJulian:  glusterd service script (/etc/init.d/glusterd) doesn't stop glusterfsd as former version
15:05 JoeJulian ah
15:05 ndevos thats the same as the one you filed, but this one is for RHS, yours for upstream glusterfs
15:05 Durzo bloody hell, geo-repl causing 100% cpu on my brick, load is going through the roof :( :(
15:06 JoeJulian Not entirely accurate either. /etc/init.d/glusterd didn't ever stop glusterfsd (unless it was broken for some release).
15:07 ndevos JoeJulian: well, no, but thats a mistake the (unknowning?) bugreporter made in the description, the issue is the same
15:07 JoeJulian figured...
15:08 tqrst Has anyone tried rebalancing a decent sized distributed-replicate (Nx2) volume with 3.4? I'm curious if I'm the only one seeing what looks like a memory leak.
15:08 JoeJulian If I can get this damned API I'm working on for $dayjob to where it needs to be today, I'm hoping to spend some time figuring out a solution for systemd.
15:10 daMaestro joined #gluster
15:11 bugs_ joined #gluster
15:11 TuxedoMan left #gluster
15:21 johnmark vbellur: aye aye
15:21 johnmark I think I heard Thursday pm PDT/Friday am IST
15:21 johnmark JoeJulian: ^^^
15:23 Technicool joined #gluster
15:24 guigui3 left #gluster
15:29 daMaestro joined #gluster
15:29 JoeJulian works for me
15:29 aliguori joined #gluster
15:30 johnmark JoeJulian: double plus good
15:32 vbellur johnmark: 7 PM PDT ?
15:33 Durzo any idea how i can get rid of the -z flag that gsync is passing to rsync? iv looked at rsync-options and its not there, i cant even see it hard coded in the python libraries...
15:34 johnmark vbellur: ++
15:34 johnmark let's book it
15:35 vbellur johnmark: ok
15:35 johnmark I think we should send a note out ot gluster-devel
15:38 vbellur johnmark: yeah
15:42 johnmark vbellur: ok, I'll take that as an action item for me
15:43 spider_fingers left #gluster
15:51 jag3773 joined #gluster
15:54 Durzo JoeJulian, thanks very much for your help tonight
16:02 ipalaus left #gluster
16:10 awheeler joined #gluster
16:18 _pol joined #gluster
16:20 _pol_ joined #gluster
16:37 lpabon joined #gluster
16:48 thomaslee joined #gluster
16:53 jesse joined #gluster
16:57 difeta joined #gluster
17:00 difeta hey all. I have a two peer gluster system running. Each peer mounts the glusterfs fuse locally. The problem is, when I pull the network cable from one peer, the still wired one's fuse mountpoint becomes inaccessible for about a minute. I've set network.ping-timeout: 2 which helps, but is there a better solution?
17:01 JoeJulian Don't pull network cables?
17:02 JoeJulian @ping-timeout
17:02 glusterbot JoeJulian: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
17:02 difeta JoeJulian: you are funny. Such an act is beneficial to test gluster's handling of issues.
17:03 difeta JoeJulian: I see. But with a 42 second timeout, the cluster is inaccessible when a peer goes offline unplanned.
17:04 difeta JoeJulian: I don't plan on having nodes dies often. In my use-case however, we require that the data must be accessible as much as possible as we are constantly streaming media.
17:06 JoeJulian Correct, but it beats the entire cluster being degraded whenever you swap a network cable to another switch, or have a routing table convergence, or...
17:07 JoeJulian And, to be clear, this is only about servers. The rest of the nodes are inconsequential.
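A hedged middle ground between the 42-second default and the 2 seconds mentioned above; network.ping-timeout is set per volume and the volume name here is made up:

    gluster volume set myvol network.ping-timeout 10
    gluster volume info myvol    # the change shows up under Options Reconfigured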
17:08 jebba joined #gluster
17:14 Maskul joined #gluster
17:15 Maskul hey guys, quick newbie question
17:15 Maskul how exactly do you install / configure translators?
17:20 semiosis they are built in, no installation needed.  configuration is done with the 'gluster volume set' command
17:25 Maskul so for the translator write-behind, i just use the command 'gluster volume set test-volume write-behind'?
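For comparison, a hedged sketch of how the write-behind translator is toggled: volume options are key/value pairs, and the option name (assumed from the 3.3/3.4 option set) is performance.write-behind:

    gluster volume set test-volume performance.write-behind off
    gluster volume set test-volume performance.write-behind on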
17:26 difeta JoeJulian: ok, so it seems setting the ping-timeout to something low is the correct solution in my case. Thank you for the help.
17:26 awheele__ joined #gluster
17:27 andreask joined #gluster
17:35 saurabh joined #gluster
17:40 manik joined #gluster
17:40 hagarth joined #gluster
17:46 manik joined #gluster
18:02 JoeJulian Hey, difeta, you're in the Seattle area?
18:02 difeta JoeJulian: Yes I am.
18:02 JoeJulian Me too. :D
18:02 difeta JoeJulian: I'm in Kent off Orilla.
18:02 JoeJulian May I ask where you work?
18:03 difeta JoeJulian: Custom Control Concepts
18:03 JoeJulian Oh, I know them.
18:04 difeta JoeJulian: and you may I ask?
18:04 JoeJulian I think I may have even installed computers there back in the late 80s/early 90s.
18:05 JoeJulian Hmm, maybe not.
18:05 JoeJulian Right building but I see the company's not that old.
18:12 glusterbot New news from resolvedglusterbugs: [Bug 819130] Merge in the Fedora spec changes to build one single unified spec <http://goo.gl/GfSUw>
18:25 zaitcev joined #gluster
18:42 It_Burns joined #gluster
19:17 lalatenduM joined #gluster
19:28 codex joined #gluster
19:30 Guest32483 joined #gluster
19:30 mooperd joined #gluster
19:39 chirino joined #gluster
19:51 hagarth joined #gluster
19:59 Mo__ joined #gluster
20:15 nightwalk joined #gluster
20:43 MugginsM joined #gluster
20:48 Twinkies joined #gluster
20:48 Twinkies im trying to understand distributed replica sets
20:48 Twinkies the order in which they are entered
20:49 Twinkies does this mean that  each server have to have a differently named brick?
20:49 Twinkies four node distributed (replicated) volume with two-way mirror
20:50 Twinkies gluster volume create test-volume replica 2 transport tcp server1:/exp1
20:50 Twinkies server2:/exp2 server3:/exp3 server4:/exp4
20:52 Twinkies does the name of brick matter?
20:52 Twinkies in a lab I have, ive named them all the same on each server
20:52 samppah Twinkies: name of the brick doesn't matter
20:53 samppah your command would replicate files between server1:/exp1 and server2:/exp2
20:53 samppah server3:/exp3 and server4:/exp4 would be another pair
20:54 samppah file1 would be written to server1 and server2, file2 would be written to server3 and server4 and so on
20:58 JoeJulian @brick order
20:58 glusterbot JoeJulian: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
20:59 Twinkies i almost got it
20:59 Twinkies seems to make sense
21:01 Twinkies If i put in  replica 4  server1:/brick1  server2:/brick3 ,through  server8:/brick1 , would this make 4 servers a pair?
21:02 Twinkies in the order in which they were put in?
21:03 mooperd joined #gluster
21:08 JoeJulian replica 4 A B C D E F G H would make A=B=C=D + E=F=G=H
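A concrete, hedged version of that grouping with made-up hostnames and brick paths:

    gluster volume create bigvol replica 4 transport tcp \
        A:/export/brick B:/export/brick C:/export/brick D:/export/brick \
        E:/export/brick F:/export/brick G:/export/brick H:/export/brick
    # first replica set:  A, B, C, D
    # second replica set: E, F, G, H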
21:11 awheeler joined #gluster
21:12 awheele__ joined #gluster
21:12 awheeler joined #gluster
21:14 sprachgenerator joined #gluster
21:23 awheele__ joined #gluster
21:30 jag3773 joined #gluster
21:35 recidive joined #gluster
21:38 Twinkies JoeJulian: thanks dude
21:38 Twinkies Ive been testing it and it makes sense
21:38 Twinkies now
21:44 Twinkies Volume Name: gfs-data-rep-00
21:44 Twinkies Type: Distributed-Replicate
21:44 Twinkies Status: Created
21:44 Twinkies Number of Bricks: 2 x 2 = 4
21:44 Twinkies Transport-type: tcp
21:44 Twinkies Bricks:
21:44 Twinkies Brick1: soe-chi-glustersrv-00:/export/brick5
21:44 Twinkies Brick2: soe-chi-glustersrv-01:/export/brick5
21:44 Twinkies Brick3: soe-chi-glustersrv-02:/export/brick5
21:44 Twinkies Brick4: soe-chi-glustersrv-03:/export/brick5
21:45 Twinkies in this example, my groups would be glustersrv-00 glustersrv-01 then 02 and 03
21:45 The_Ugster Y U No pastebin?
21:46 Twinkies im a n00b at irc
21:46 Twinkies i forget
21:46 Twinkies :'(
21:47 The_Ugster I forgive you :p I also forget what they prefer here, it's not pastebin but similar
21:47 Twinkies what is it?
21:47 Twinkies before i get kicked for spamming? lol
21:49 The_Ugster They'll probably throttle you before they kick, I'd ask the others what they use though.
21:50 The_Ugster Pastebin is probably sufficient, the ads are just annoying if you aren't using a blocker
21:52 semiosis @paste
21:52 glusterbot semiosis: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
21:52 * The_Ugster points at semiosis and glusterbot
21:53 semiosis those commands use paste sites provided by the distros by default afaik
22:06 fidevo joined #gluster
22:09 fidevo joined #gluster
22:09 andreask joined #gluster
22:10 _pol joined #gluster
22:17 aliguori joined #gluster
22:27 jebba joined #gluster
22:48 B21956 joined #gluster
22:53 mooperd joined #gluster
22:54 joshit_ JoeJulian, thank you for yesterday
23:04 mooperd_ joined #gluster
23:24 mooperd joined #gluster
