
IRC log for #gluster, 2013-04-02


All times shown according to UTC.

Time Nick Message
00:29 robo joined #gluster
00:59 yinyin joined #gluster
01:05 jules_ joined #gluster
01:06 kevein joined #gluster
01:22 dustint joined #gluster
01:32 yinyin joined #gluster
01:48 yinyin joined #gluster
01:58 _pol joined #gluster
02:01 bala joined #gluster
02:21 rastar joined #gluster
03:04 saurabh joined #gluster
03:09 vshankar joined #gluster
03:52 sgowda joined #gluster
03:57 bala joined #gluster
03:57 anmol joined #gluster
04:09 bharata joined #gluster
04:16 hagarth joined #gluster
04:21 bulde joined #gluster
04:35 mooperd joined #gluster
04:36 bulde joined #gluster
04:38 sripathi joined #gluster
04:42 deepakcs joined #gluster
04:50 vpshastry joined #gluster
04:52 bala joined #gluster
05:01 mohankumar joined #gluster
05:10 aravindavk joined #gluster
05:12 lalatenduM joined #gluster
05:15 saurabh joined #gluster
05:29 vpshastry joined #gluster
05:37 vpshastry joined #gluster
05:38 raghu joined #gluster
05:41 vshankar joined #gluster
05:44 lh joined #gluster
05:49 satheesh joined #gluster
05:50 lh joined #gluster
05:50 lh joined #gluster
05:57 lh joined #gluster
05:59 shireesh joined #gluster
06:00 rastar joined #gluster
06:21 glusterbot New news from newglusterbugs: [Bug 928575] Error Entry in the log when gluster volume heal on newly created volumes <http://goo.gl/KXsmD>
06:24 jtux joined #gluster
06:27 Rydekull joined #gluster
06:28 camel1cz joined #gluster
06:31 badone joined #gluster
06:41 jclift_ joined #gluster
06:42 test joined #gluster
06:46 magnus^^p joined #gluster
06:48 ricky-ticky joined #gluster
06:53 camel1cz joined #gluster
06:53 camel1cz left #gluster
06:59 ctria joined #gluster
06:59 mgebbe_ joined #gluster
07:01 bala joined #gluster
07:04 Nevan joined #gluster
07:18 ollivera joined #gluster
07:20 sripathi joined #gluster
07:23 hagarth joined #gluster
07:27 vimal joined #gluster
07:29 masterzen joined #gluster
07:38 msmith_ joined #gluster
07:45 ProT-0-TypE joined #gluster
07:46 shylesh joined #gluster
07:47 hagarth joined #gluster
07:50 sripathi joined #gluster
07:50 Oneiroi joined #gluster
07:53 ngoswami joined #gluster
08:01 piotrektt joined #gluster
08:16 raghu joined #gluster
08:19 Norky joined #gluster
08:25 rastar joined #gluster
08:48 tryggvil joined #gluster
08:52 lalatenduM joined #gluster
08:53 rastar joined #gluster
08:57 Rydekull joined #gluster
08:59 aravindavk joined #gluster
09:05 vpshastry joined #gluster
09:22 sripathi joined #gluster
09:32 xiu JoeJulian: after some days, everything seems ok with your fix, thanks ;)
09:35 H__ xiu: which fix for what issue ?
09:36 xiu H__: hmm, not a fix sorry, just some configuration tweaks; i had some race conditions and added direct-io-mode / deactivated write-behind
09:37 xiu following this mail: http://www.mail-archive.com/gluster-users@gluster.org/msg11320.html
09:37 glusterbot <http://goo.gl/1fe8p> (at www.mail-archive.com)
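[Note: a minimal sketch of the kind of tweaks xiu describes, using hypothetical volume, server and mount names (myvol, server1, /mnt/myvol); the right options depend on the workload.]
    # disable the write-behind translator on the volume (run on any server)
    gluster volume set myvol performance.write-behind off
    # mount the client with direct I/O enabled, bypassing the kernel page cache
    mount -t glusterfs -o direct-io-mode=enable server1:/myvol /mnt/myvol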
09:49 jtux joined #gluster
09:59 sohoo joined #gluster
10:02 DEac- joined #gluster
10:02 DEac- hi
10:02 glusterbot DEac-: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:02 mooperd joined #gluster
10:09 sohoo i have an issue with version 3.3.1, i didn't test this on earlier releases. The main issue happens when 1 of a replication pair is rebooted and then comes back online. Clients get io errors on new files that were added to node1 while node2 was offline and then came back again.
10:13 sohoo so i guess the question is whether self-heal works on new files as well or only on updated files. The gluster docs don't say anything regarding that
10:14 sohoo simple question :) but important
10:18 edward1 joined #gluster
10:21 sripathi joined #gluster
10:23 manik joined #gluster
10:26 alino2 joined #gluster
10:27 aravindavk joined #gluster
10:33 Bonaparte joined #gluster
10:35 Bonaparte Hello #gluster. I have a situation. I added a brick to gluster. The newly added brick is shown on one server and not shown on another server. http://paste2.org/VFpZPBsj shows the output of gluster volume info
10:35 glusterbot Title: Paste2.org - Viewing Paste VFpZPBsj (at paste2.org)
10:35 Bonaparte How can I go about fixing this?
10:42 manik joined #gluster
10:46 Norky are the peers definitely communicating?
10:46 Norky gluster peer status   on both machines
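[Note: for reference, a sketch of that check with hypothetical hostnames; both servers should list each other as connected.]
    # run on each server
    gluster peer status
    # expected (illustrative) output on server1:
    #   Number of Peers: 1
    #   Hostname: server2
    #   Uuid: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    #   State: Peer in Cluster (Connected)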
10:47 Norky Bonaparte, also, I notice that on "server 2" vol info shows number of bricks to be 4, yet it only lists 3 bricks
10:49 Bonaparte Norky, the peer status command lists the hosts correctly
10:49 Bonaparte State: Peer in Cluster (Connected)
10:53 ninkotech_ joined #gluster
10:54 mooperd joined #gluster
10:54 mooperd_ joined #gluster
10:57 rcheleguini joined #gluster
11:03 hagarth joined #gluster
11:04 Norky hmm, I don't know then
11:04 Norky not a gluster expert
11:04 camel1cz joined #gluster
11:04 Bonaparte Norky restarting gluster fixed the problem
11:04 Norky jolly good :)
11:04 Bonaparte Thanks
11:07 camel1cz left #gluster
11:11 sohoo bonaparte, try gluster volume stop then start. It helped me in some cases with weird bugs
11:12 Bonaparte sohoo, the problem is fixed already :) Thanks for the tip
11:12 GLHMarmot joined #gluster
11:12 sohoo oops, didn't see. volume stop/start is a bit better because a service restart restarts just the current node, while a volume restart apparently resets other issues as well
11:14 ctria joined #gluster
11:14 sohoo bonaparte, do you work with distributed/replica mode?
11:14 Bonaparte sohoo, yes
11:15 sohoo i'm having issues with new files created while 1 node of a replication pair is restarting and then comes back online. do you know if self-heal deals with new files as well? or just updated files
11:15 sohoo makes me wonder
11:15 sohoo no one here can give me a direct answer to a simple question like that
11:16 Norky people are often busy with other things
11:16 sohoo yep
11:16 Norky I believe self-heal will replicate the new files
11:16 sohoo you sure?
11:17 sohoo when it happens, clients get io errors on new files, it's very bad. only a node reset (deleting the files) helps
11:17 Norky you might need to trigger it with "gluster volume heal "
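[Note: a sketch of the 3.3 heal commands being referred to, assuming a hypothetical volume name myvol.]
    gluster volume heal myvol          # heal files already known to need healing
    gluster volume heal myvol full     # crawl the whole volume, picking up new files too
    gluster volume heal myvol info     # list entries still pending heal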
11:18 sohoo gluster volume heal full gives an error and doesn't start
11:18 sohoo any other tips on that issue?
11:20 DEac- i have a server with two ips (and two hostnames). i want to use only one ip for gluster, the other should never be used. but it is not the primary ip. how can i configure gluster to use only a specific ip?
11:20 kkeithley1 joined #gluster
11:26 sohoo deac, it is probably set when you peer probe the specific IP
11:27 DEac- sohoo: which option do i have to use to set my own peer address?
11:29 sohoo when you construct the volume and add peers, simply probe the specific IPs, then reflect that in all hosts files
11:30 sohoo then the clients can connect to the other interface
11:30 sohoo glusterd listens on all IPs but the internal traffic uses what you set on peer probe
11:31 DEac- if the other ip, which should not be used, is removed, will gluster still work?
11:31 sohoo what do you mean removed?
11:32 sohoo for example gluster peer probe 192.168.20.10, then the internal gluster traffic will go through this IP
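[Note: roughly what sohoo describes, with hypothetical addresses and hostnames; the address given to peer probe is the one the servers use among themselves.]
    # on host1, probe host2 by the address you want gluster to use
    gluster peer probe 192.168.20.10
    # on host2, probe back so host1 is also stored under the desired address
    gluster peer probe 192.168.20.11
    # optionally pin hostnames to those addresses in /etc/hosts on every node:
    #   192.168.20.10   gluster2
    #   192.168.20.11   gluster1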
11:32 DEac- ip addr r IP
11:34 DEac- example: ip1, ip2, ip5, ip6. host1 has ip1 and ip2 and host2 has ip5, ip6. i run on host1: gluster peer probe ip6
11:39 sohoo yes
11:39 sohoo the address you peer probe determines the subnet you will work with
11:45 DEac- oh, both ips are in the same subnet
11:47 satheesh joined #gluster
11:47 DEac- so host1 has two ips in the same subnet and host2 is in another subnet
11:50 camel1cz1 joined #gluster
11:50 camel1cz1 left #gluster
11:52 camel1cz joined #gluster
11:52 camel1cz left #gluster
11:56 dustint joined #gluster
12:07 Staples84 joined #gluster
12:21 sohoo it can have the same subnet, but it will communicate on the interface whose IP you used in peer probe
12:24 ctria joined #gluster
12:29 DEac- is it safe to edit files in /etc/glusterfs and /etc/glusterd?
12:35 awheeler_ semiosis: Thanks for gluster volume info, does that indicate anything when upgrading from 3.3 to 3.4, where currently the volumes don't work?
12:38 DEac- is it possible to configure synchronous replication? i shut down one brick and it is still possible to write to the volume
12:40 kkeithley1 DEac-: I wouldn't recommend editing those files by hand unless you really know what you're doing. What version of glusterfs are you running that you have files in /etc ?
12:40 DEac- 3.2.5
12:41 kkeithley1 That's pretty old, but I'm sure you knew that. ;-)
12:41 DEac- it's the last stable version for ubuntu12.04
12:42 kkeithley1 yes, but semiosis has been shipping newer versions for Ubuntu from his ppa.
12:42 kkeithley1 @repos
12:42 glusterbot kkeithley1: See @yum, @ppa or @git repo
12:42 kkeithley1 @ppa
12:42 glusterbot kkeithley1: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
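[Note: installing from the PPA on Ubuntu looks roughly like this; the PPA name here is an assumption based on the usual semiosis naming, so check the link above for the exact one.]
    sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.3
    sudo apt-get update
    sudo apt-get install glusterfs-server glusterfs-client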
12:42 DEac- at home i've ubuntu12.10 and glusterfs3.2.5 too
12:45 DEac- is it recommended for a server to use this ppa?
12:46 DEac- is it rolling-release or only security updates, if i use this ppa?
12:48 rotbeard joined #gluster
12:48 kkeithley1 I don't know how often semiosis updates it or what his policy is. You might ask him when he comes on-line
12:49 DEac- ok
12:49 kkeithley1 But if you're running production servers that's what I'd use, just to be up to date with the latest.
12:51 DEac- my primary concern is to be 'safe'. i do not need the newest version, only the most stable and maintained one
12:52 DEac- 3.3 does not provide encryption, same as 3.2. this is a feature to think about when deciding whether to upgrade.
12:52 kkeithley1 IMO the latest, or even just 3.2.7, is much more stable than 3.2.5. All maintenance at this point is going into 3.3.x and 3.4
12:53 kkeithley1 3.3 does have an option for SSL sockets
12:53 DEac- really? ok, i will upgrade to 3.3
12:53 DEac- and synchronous replication? how can i configure it? or is it not supported?
12:54 kkeithley1 AFR (replication) is synchronous. Geo-rep in 3.3 is async
12:54 bulde joined #gluster
12:56 DEac- but if i shut down one machine?
12:57 kkeithley1 If you shut down one machine, the client continues to write to the other. When the machine comes back on, self heal starts. In 3.2.x you have to start it manually, e.g. by stat-ing a file that needs to be healed. In 3.3 the glustershd (gluster self heal daemon) heals files automatically.
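[Note: a sketch of the manual trigger kkeithley1 mentions for 3.2.x, run from a client mount (hypothetical path /mnt/myvol); stat-ing every file makes the replicate translator check and heal it.]
    find /mnt/myvol -noleaf -print0 | xargs --null stat > /dev/null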
13:02 hagarth joined #gluster
13:05 DEac- it's possible to configure, that it will be impossible to write if one machine is down?
13:07 kkeithley1 no. I'm not sure why you'd want to do that. Being able to continue to write while a replica is down is one of the key selling features of glusterfs after all.
13:07 kkeithley1 But if you want it to be impossible, just take them both down. ;-)
13:08 DEac- i need the feature: 'if i write something, it's really written to X servers. X > N > 1.'
13:08 DEac- kkeithley1: if i take the second one down, it's impossible to read too.
13:09 kkeithley1 Right. Like I said, I'm not sure why you'd want to do that.
13:09 Norky that would increase the failure chance by a factor of X
13:09 kkeithley1 But if you want X > N > 1, then I'd say you need 'replica 3' so that if one replica is off line, writes continue to the other two
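[Note: a 'replica 3' volume as suggested would be created roughly like this, with hypothetical server and brick names.]
    gluster volume create myvol replica 3 \
        server1:/export/brick1 server2:/export/brick1 server3:/export/brick1
    gluster volume start myvol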
13:12 DEac- for us, it's more important to guarantee that it's really written than 'it's written and will be mirrored to the other machines (eventually)'. if this machine goes down, the other machines don't know about it. if i lose this machine, the data is lost too. the client must know it's really written on enough machines.
13:13 DEac- it's not a problem if the client is told it cannot be written. that is really ok, because then i know it and can try again later.
13:16 DEac- postgresql is a dbms with this feature. now i need it for git too, but git does not provide this feature, so i thought i'd try it with glusterfs or something like that. (first glusterfs, because we need it for something else.)
13:17 robo joined #gluster
13:19 DEac- my last resort is to write a filesystem which stores everything in postgresql. if it fails, the write goes wrong and git knows 'filesystem is broken, do not write anything else'
13:22 rwheeler joined #gluster
13:22 DEac- and i know drbd. but it is broken and unusable
13:24 bala joined #gluster
13:24 kkeithley1 Okay. Options are good. You could file a BZ for that as a feature request; as it is now though, the design center for glusterfs is that clients continue to write and read as long as one replica is still up.
13:26 robo joined #gluster
13:27 DEac- oh, glusterfs provides quorum. that is nice. in combination with synchronous writing, really useful. (i found the volume options)
13:27 DEac- kkeithley1: bz? you mean bug tracker?
13:28 H__ DEac-: quorum in >=3.4, right ?
13:29 DEac- i do not know
13:29 H__ i think it's new 3.4 functionality
13:30 H__ i'm prepping a minimal production downtime upgrade for 3.2.5->3.3.1 script
13:30 H__ so no quorum for me yet :-P
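[Note: the quorum knobs being discussed are volume options along these lines; which of them are available depends on the version, as H__ points out, so treat this as an illustrative sketch with a hypothetical volume name.]
    # client-side quorum for replicated volumes: writes fail when
    # fewer than a quorum of bricks are reachable
    gluster volume set myvol cluster.quorum-type auto
    # or require a fixed number of live bricks
    gluster volume set myvol cluster.quorum-type fixed
    gluster volume set myvol cluster.quorum-count 2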
13:33 kkeithley1 yes, bz = bug, on bugzilla.redhat.com
13:35 neofob joined #gluster
13:41 lh joined #gluster
13:41 lh joined #gluster
13:43 neofob left #gluster
13:45 satheesh joined #gluster
13:49 _pol joined #gluster
13:50 rosmo joined #gluster
13:54 hybrid5121 joined #gluster
13:55 hagarth joined #gluster
13:57 mohankumar joined #gluster
13:58 bennyturns joined #gluster
14:00 deepakcs joined #gluster
14:03 vpshastry joined #gluster
14:04 Chiku|dc why is an nfs mount so much better than a glusterfs mount?
14:04 Chiku|dc performance
14:08 Norky quantify performance?
14:08 johnmark Chiku|dc: client caching with NFS can be more aggressive
14:08 johnmark Chiku|dc: you're not getting a "true" performance reading
14:09 Norky if you mean metadata access, so that large directories take a long time to list/browse, then yes, as johnmark says, it's heavily skewed by caching
14:11 semiosis @seen kaushal
14:11 glusterbot semiosis: I have not seen kaushal.
14:12 Chiku|dc with the glusterfs client, can you set up more caching to get the same performance as the nfs client?
14:13 jdarcy joined #gluster
14:14 Norky that is coming with 3.4
14:15 duerF joined #gluster
14:16 Norky I've done some testing of the 3.4 alpha with the FUSE readdirplus support listing large (>1000 entries) directories and it is much faster with repeated access
14:17 Norky as in, an order of magnitude or more
14:17 Norky YMMV
14:17 Norky I'm assuming we're both talking about the same kind of performance
14:18 hagarth joined #gluster
14:21 Chiku|dc yes for read performance
14:21 Norky read performance of what?
14:21 Norky a single large file?
14:21 Norky file metadata?
14:22 Norky serveral small files?
14:22 Norky several*
14:22 Chiku|dc not 1k small files (100 KB)
14:22 vpshastry joined #gluster
14:23 Chiku|dc bonnie with -n 10:102400:102400
14:25 johnmark Chiku|dc: I'd be curious to see what you get with bonnie on 3.4 alpha
14:25 Norky the creation of 10240 files of 100KiB size?
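[Note: for context, that bonnie++ invocation would look roughly like this on a gluster mount, with a hypothetical mount point and user.]
    # -n 10:102400:102400 = 10*1024 files, max and min size 102400 bytes (100 KB)
    bonnie++ -d /mnt/myvol -n 10:102400:102400 -u testuser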
14:28 _pol joined #gluster
14:38 saurabh joined #gluster
14:39 hagarth joined #gluster
14:41 bugs_ joined #gluster
14:47 andreask joined #gluster
14:48 vshankar joined #gluster
14:49 aliguori joined #gluster
14:52 vpshastry joined #gluster
15:01 hagarth joined #gluster
15:02 daMaestro joined #gluster
15:14 seiryuu joined #gluster
15:19 neofob joined #gluster
15:22 bala joined #gluster
15:24 zaitcev joined #gluster
15:27 Chiku|dc johnmark, I don't have 3.4
15:27 daMaestro joined #gluster
15:31 badone joined #gluster
15:35 Bonaparte left #gluster
15:44 johnmark Chiku|dc: understood. want to try it?
15:49 vpshastry joined #gluster
15:55 badone joined #gluster
16:07 Chiku|dc when do you think 3.4 will be released?
16:07 Chiku|dc since it's alpha, maybe next year?
16:08 bala joined #gluster
16:15 hagarth joined #gluster
16:15 kincl left #gluster
16:16 neofob from the user/client perspective, how is the block device feature in 3.4 used? example?
16:18 semiosis probably for virtual machine disk images, just guessing
16:20 johnmark neofob: there are qemu commands that allow you to target GlusterFS volumes
16:20 johnmark so far it's very specific to QEMU/KVM
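[Note: the QEMU integration johnmark refers to uses gluster:// URLs (QEMU built with GlusterFS 3.4 libgfapi support); a hypothetical example with made-up names.]
    # create a VM disk image directly on a gluster volume
    qemu-img create -f qcow2 gluster://server1/myvol/vm1.qcow2 20G
    # boot a guest from it
    qemu-system-x86_64 -drive file=gluster://server1/myvol/vm1.qcow2,if=virtio -m 1024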
16:20 semiosis hi johnmark
16:21 neofob can i have a block device on the client? say, exporting an .iso or disk image to the client?
16:25 johnmark semiosis: howdy :)
16:25 semiosis building alpha2 packages for the ppa now, somehow i missed that when it was released
16:25 johnmark neofob: you have the block device wherever you have the gluster server, which can also be on the client
16:26 johnmark semiosis: w00t
16:26 semiosis also fixed a bug in the debian & ubuntu packages, libxml2 is a dependency for --xml option, but wasn't included in the packages :(
16:26 semiosis kkeithley1: ping re: ^^
16:28 neofob johnmark: so how does gluster store the data on the bricks? one giant file, or many smaller files?
16:30 johnmark neofob: for a block device, ie. VM image, it's one file
16:31 neofob johnmark: hah, i see
16:32 neofob oh, what happens if the filesize is bigger than the brick size; say, 80GB and i only have 64GB bricks?
16:33 jdarcy neofob, you can use striping to overcome that limitation.  It's not on by default, but it's available.
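[Note: a striped volume as jdarcy describes would be created roughly like this (hypothetical names); each file is then split across the stripe bricks instead of living on a single brick.]
    gluster volume create stripevol stripe 2 \
        server1:/export/brick1 server2:/export/brick1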
16:34 Mo___ joined #gluster
16:35 hagarth joined #gluster
16:40 * neofob reading GlusterFS && block device
16:42 neofob => i can take a snapshot of my virtualbox disk, store it in glusterfs, and export it back later as a block device; neat!
16:42 neofob by snapshot, i mean dd :D
16:52 semiosis the-me: ping
16:57 andreask joined #gluster
16:59 manik joined #gluster
16:59 johnmark manik: ping
17:02 manik johnmark: pong
17:04 morse joined #gluster
17:05 johnmark manik: see query
17:08 ctria joined #gluster
17:14 vpshastry left #gluster
17:15 robos joined #gluster
17:16 bennyturns joined #gluster
17:16 lalatenduM joined #gluster
17:17 andrewbogott left #gluster
17:32 JoeJulian xiu: I hope you'll document your triumph at setting up Dovecot on GlusterFS somewhere and share with us the link. If you don't have your own blog, perhaps you could create a page on the wiki?
17:32 hagarth joined #gluster
17:47 rastar joined #gluster
17:49 semiosis johnmark: http://www.gluster.org/download/ needs updating for 3.4 alpha2
17:49 glusterbot Title: Download | Gluster Community Website (at www.gluster.org)
17:50 johnmark semiosis: thanks for the tip
17:51 _pol joined #gluster
17:51 semiosis yw
17:55 _pol joined #gluster
17:58 kkeithley1 semiosis: libxml2 yes. I have it as a Requires: in the RPMs.  Torbørn Thorsen did the Debian pkgs for 3.4.0alpha2
17:58 semiosis kkeithley1: hi!
17:58 semiosis how about 3.3.1 packages you built?  could you update those as well?
17:59 kkeithley1 ug
17:59 kkeithley1 ugh
17:59 semiosis it's 3 lines added to debian/control
17:59 semiosis heh, ok
17:59 kkeithley1 three lines and ???. Yes, I'll look at it in a bit
18:00 semiosis three lines and building, signing & publishing
18:00 semiosis well, and a version bump in debian/changelog
18:00 semiosis and
18:00 semiosis just kidding, thats all
18:01 semiosis kkeithley1: Bug 947226 was reported by 'noche' in #gluster-dev yesterday so i'm trying to get it fixed everywhere
18:01 glusterbot Bug http://goo.gl/jEc9n unspecified, unspecified, ---, kaushal, NEW , CLI option --xml doesn't have any effect on debian/ubuntu
18:02 semiosis updated my ubuntu ppas this morning, updating the packages in debian this afternoon
18:03 jiqiren joined #gluster
18:04 kkeithley1 re: and a version bump in debian/changelog, yup
18:05 lpabon joined #gluster
18:07 kkeithley1 squeeze or wheezy. IIRC wheezy "just worked" when I followed your recipe, and it's the later of the two? Squeeze I finally gave up and used the .debs from halfway through. So I'll do wheezy first — does that seem reasonable?
18:19 rastar joined #gluster
18:19 semiosis sure, and if it's really too much trouble i can do it instead
18:22 rastar joined #gluster
18:23 xiu JoeJulian: i think you're mistaken, i haven't set up dovecot on gluster, i had an issue with a lot of small files. I spoke too fast, the issue reappeared during the day :/
18:24 JoeJulian Ah, must have got your issue mixed up. Sounds like a similar problem though.
18:35 the-me semiosis: pong
18:46 semiosis the-me: hi
18:46 semiosis i have an update for the debian glusterfs packages to fix bug 947226
18:46 glusterbot Bug http://goo.gl/jEc9n unspecified, unspecified, ---, kaushal, NEW , CLI option --xml doesn't have any effect on debian/ubuntu
18:46 semiosis adding a couple dependencies
18:47 semiosis ready to commit to your svn repo, but i wanted to check in with you before just committing
18:51 semiosis two new build dependencies: libxml2-dev & pkg-config, and one new dependency for glusterfs-server: libxml2
18:56 kkeithley1 hang on about the pkg-config. In $head and release-3.4 I rejiggered configure.ac to not use pkg-config for the libxml2 stuff
18:57 semiosis kkeithley1: great, i'll take it out of my ppa packages for 3.4
18:58 semiosis needed it for 3.3 tho
18:58 semiosis and 3.2 as well, which is in debian unstable
18:58 conslo joined #gluster
18:59 kkeithley1 yes, or you do the same thing I did for your 3.3. (Did I do it for 3.3 too, have to look, can't remember) Up to you.
18:59 kkeithley1 uname -a
19:03 semiosis uname -a? i dont understand how that applies in this context, going to stick with pkg-config
19:03 kkeithley1 that was typing in the wrong window, not related to pkg-config or libxml2
19:04 semiosis hahaha ok
19:04 semiosis thought that was a solution requiring deeper insight than i possess
19:05 kkeithley1 if it is, it needs deeper insight than I possess too. ;-)
19:08 kkeithley1 gah, some update reenabled selinux on me.
19:14 jclift_ joined #gluster
19:44 badone joined #gluster
19:48 manik joined #gluster
19:56 plarsen joined #gluster
20:02 manik joined #gluster
20:34 lpabon joined #gluster
20:35 duerF joined #gluster
20:37 nueces joined #gluster
20:50 manik joined #gluster
20:54 sohoo is it possible to locate which brick a file is on?
20:55 semiosis ,,(pathinfo)
20:55 glusterbot find out which brick holds a file with this command on the client mount point: getfattr -d -e text -n trusted.glusterfs.pathinfo /client/mount/path/to.file
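[Note: an illustrative run of that command with hypothetical paths; the output names the brick(s) holding the file, the exact layout of the value varies by volume type.]
    getfattr -d -e text -n trusted.glusterfs.pathinfo /mnt/myvol/dir/file.txt
    # trusted.glusterfs.pathinfo="(<DISTRIBUTE:myvol-dht>
    #   (<REPLICATE:myvol-replicate-0>
    #    <POSIX(/export/brick1):server1:/export/brick1/dir/file.txt>
    #    <POSIX(/export/brick1):server2:/export/brick1/dir/file.txt>))"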
20:55 sohoo tnx glusterbob, i'm trying it now
20:59 _pol joined #gluster
20:59 sohoo great
20:59 _pol joined #gluster
21:00 semiosis ,,(awesome)
21:00 glusterbot ohhh yeeaah
21:00 sohoo :)
21:00 jclift_ "glusterbob"  <-- sounds like a cartoon :)
21:01 semiosis there used to be a glusterdave
21:03 sohoo fantastic for recovering broken self-heal files. for some reason only deleting those zero-byte files helps against client errors, do you happen to know anything about it :)
21:04 sohoo typo, glusterbot*
21:06 _pol joined #gluster
21:08 _pol joined #gluster
21:09 the-me semiosis: thanks, just commit if you have got something; I get a mail about the diff and could still ask you ;)
21:11 disarone joined #gluster
21:12 semiosis ok thanks, i will commit in a few minutes
21:16 mweichert_ joined #gluster
21:24 semiosis the-me: committed.  thank you :)
21:25 badone joined #gluster
21:46 _pol joined #gluster
21:46 _pol joined #gluster
21:57 eightyeight so, i'm trying to peer with a new box, and the probe is failing with the following error:
21:57 eightyeight Probe unsuccessful
21:57 eightyeight Probe returned with unknown errno 107
21:57 eightyeight the host is in dns, and i've even added it to my hosts(5) file, just to make sure, and it still fails
21:58 H__ firewall rules ?
21:59 H__ remote peer's gluster is up and running ?
21:59 eightyeight his firewall is open
21:59 H__ version mismatch ?
21:59 eightyeight ah. that's probably the problem
21:59 H__ which versions are you mixing ?
22:00 eightyeight one sec. verifying.
22:00 disarone joined #gluster
22:00 eightyeight yup. that's the issue
22:01 eightyeight 3.3.1 vs 3.2.7
22:01 eightyeight H__: thx
22:01 * eightyeight knew better too
22:02 H__ ah yes, those don't match. I wish they did, it'd save me production downtime for the upgrade I'm facing
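[Note: a quick way to spot that kind of mismatch is to compare versions on each peer.]
    glusterfs --version | head -1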
22:11 aliguori joined #gluster
22:37 eightyeight so, hmm
22:37 eightyeight i'm trying to remove 2 bricks, so i can add two new bricks, and my command says it's successful, but 'gluster volume info' shows that they have not been removed
22:37 eightyeight i'm using a 'linked list' topology
22:38 eightyeight http://pthree.org/2013/01/25/glusterfs-linked-list-topology/ fyi
22:38 glusterbot <http://goo.gl/0HHCK> (at pthree.org)
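[Note: for the record, in 3.3 removing bricks is a multi-step operation and the removal only takes effect after the commit; a sketch with hypothetical volume and brick names.]
    gluster volume remove-brick myvol server3:/export/brick1 server4:/export/brick1 start
    gluster volume remove-brick myvol server3:/export/brick1 server4:/export/brick1 status
    # once status shows the migration is completed:
    gluster volume remove-brick myvol server3:/export/brick1 server4:/export/brick1 commit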
22:38 eightyeight actually...
22:43 saurabh joined #gluster
23:05 tryggvil joined #gluster
23:11 arusso joined #gluster
23:33 eightyeight got it worked out. heh.
23:38 zyk|off joined #gluster
23:44 eightyeight # gluster volume heal sandbox
23:44 eightyeight Launching Heal operation on volume sandbox has been unsuccessful
23:44 eightyeight why is it unsuccessful, and how can i make it successful?
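[Note: a common first check when a heal launch fails is whether the self-heal daemon is up on every node; a sketch, not a definitive diagnosis.]
    # should show a running "Self-heal Daemon" entry for each server
    gluster volume status sandbox
    # if it is not running on a node, restarting glusterd there usually brings it back
    service glusterd restart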
