
IRC log for #gluster, 2015-04-07


All times shown according to UTC.

Time Nick Message
00:04 RicardoSSP joined #gluster
00:13 purpleidea joined #gluster
00:13 purpleidea joined #gluster
00:17 julim joined #gluster
00:20 tg2 seems like a decent use case for drdb
00:20 JoeJulian no such thing
00:22 * JoeJulian still finds the DRBD subject too upsetting.
00:22 tg2 haha
00:23 tg2 i was under the impression that you had to quiesce a qcow2 file to copy it meaningfully
00:23 tg2 unless you were using something that is vm aware
00:26 tg2 you could probably find a way to do this with git-annex so you are only syncing the diffs and only when you trigger it
00:33 glusterbot News from newglusterbugs: [Bug 906763] SSL code does not use OpenSSL multi-threading interface <https://bugzilla.redhat.com/show_bug.cgi?id=906763>
00:56 wkf joined #gluster
01:00 kovsheni_ joined #gluster
01:03 glusterbot News from newglusterbugs: [Bug 1209286] Spurious regression failures in uss.t <https://bugzilla.redhat.com/show_bug.cgi?id=1209286>
01:12 gildub joined #gluster
01:16 _Bryan_ joined #gluster
01:35 halfinhalfout joined #gluster
01:51 ilbot3 joined #gluster
01:51 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:12 jmarley joined #gluster
02:17 meghanam joined #gluster
02:20 AndroUser2 joined #gluster
02:30 vimal joined #gluster
02:32 chirino joined #gluster
02:34 purpleidea joined #gluster
02:57 Gill joined #gluster
03:03 nangthang joined #gluster
03:29 bharata-rao joined #gluster
03:30 sripathi joined #gluster
03:31 nshaikh joined #gluster
03:33 kkeithley1 joined #gluster
03:33 kshlm joined #gluster
03:34 PeterA joined #gluster
03:34 PeterA help!!! one of the bricks in a replica 2 volume keeps crashing and won't start :(
03:35 PeterA http://pastie.org/10077515
03:37 PeterA keep getting these whenever I try to restart the brick :(
03:37 PeterA http://pastie.org/10077516
03:38 kumar joined #gluster
03:38 ira joined #gluster
03:39 hgowtham joined #gluster
03:40 ashiq joined #gluster
03:44 ashiq- joined #gluster
03:46 itisravi joined #gluster
03:46 PeterA i was just able to bring the brick up
03:46 PeterA after rm /var/run/*.socket
03:46 PeterA why would that happen??
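An aside on the fix PeterA describes above: the brick apparently came back only after the leftover unix sockets were cleared, which suggests a stale socket from the crashed brick process was blocking the restart. A minimal sketch of that cleanup, assuming the sockets sit directly under /var/run/ as PeterA reports (newer packages often keep them under /var/run/gluster/ instead):

    service glusterd stop               # service name varies: glusterd on RHEL-family, glusterfs-server on Debian/Ubuntu
    ls /var/run/*.socket                # inspect sockets left behind by the crashed brick
    rm -f /var/run/*.socket             # clear the stale sockets
    service glusterd start              # glusterd respawns the brick processes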
03:50 poornimag joined #gluster
03:52 kkeithley1 joined #gluster
03:53 gnudna joined #gluster
03:57 gnudna left #gluster
03:58 papamoose1 joined #gluster
03:59 atinmu joined #gluster
04:00 hgowtham joined #gluster
04:09 RameshN joined #gluster
04:12 overclk joined #gluster
04:13 kanagaraj joined #gluster
04:20 maveric_amitc_ joined #gluster
04:21 Anjana joined #gluster
04:21 anoopcs joined #gluster
04:23 jiffin joined #gluster
04:26 rafi joined #gluster
04:26 ashiq- joined #gluster
04:28 Manikandan joined #gluster
04:30 meghanam joined #gluster
04:37 hagarth joined #gluster
04:38 RameshN joined #gluster
04:42 purpleidea joined #gluster
04:48 meghanam PeterA, what operation were you trying to perform?
04:56 nshaikh joined #gluster
05:00 ndarshan joined #gluster
05:01 soumya_ joined #gluster
05:02 jiku joined #gluster
05:03 spandit joined #gluster
05:04 glusterbot News from newglusterbugs: [Bug 1209329] BitRot:BitD is not handled properly when re configuring the glusterfs services <https://bugzilla.redhat.com/show_bug.cgi?id=1209329>
05:05 hgowtham joined #gluster
05:11 vimal joined #gluster
05:14 vimal joined #gluster
05:19 Bhaskarakiran joined #gluster
05:20 nbalacha joined #gluster
05:23 karnan joined #gluster
05:23 kkeithley1 joined #gluster
05:24 gem joined #gluster
05:27 lalatenduM joined #gluster
05:32 T3 joined #gluster
05:36 ppai joined #gluster
05:39 Philambdo joined #gluster
05:40 kdhananjay joined #gluster
05:40 ashiq- joined #gluster
05:42 hchiramm joined #gluster
05:43 anil joined #gluster
05:44 nishanth joined #gluster
05:45 deepakcs joined #gluster
05:51 jiffin joined #gluster
05:53 ira_ joined #gluster
05:53 Bhaskarakiran joined #gluster
05:53 dusmant joined #gluster
05:53 jiffin joined #gluster
05:56 aravindavk joined #gluster
05:57 raghu joined #gluster
05:59 ashiq- joined #gluster
06:05 poornimag joined #gluster
06:22 ashiq- joined #gluster
06:26 anrao joined #gluster
06:27 R0ok_ joined #gluster
06:27 DV_ joined #gluster
06:30 siel joined #gluster
06:30 siel joined #gluster
06:33 jtux joined #gluster
06:34 T3 joined #gluster
06:35 glusterbot News from newglusterbugs: [Bug 1209340] Random regression test hang : bug-1113960.t <https://bugzilla.redhat.com/show_bug.cgi?id=1209340>
06:50 atalur joined #gluster
06:51 abyss_ joined #gluster
06:53 Manikandan joined #gluster
06:55 nangthang joined #gluster
06:56 dusmant joined #gluster
06:57 poornimag joined #gluster
07:11 deniszh joined #gluster
07:16 DV joined #gluster
07:16 Manikandan joined #gluster
07:25 fsimonce joined #gluster
07:27 social joined #gluster
07:27 m0zes joined #gluster
07:32 harish joined #gluster
07:34 nshaikh joined #gluster
07:35 LebedevRI joined #gluster
07:35 T3 joined #gluster
07:39 Slashman joined #gluster
07:44 DV joined #gluster
07:45 ashiq joined #gluster
07:45 hgowtham joined #gluster
07:55 o5k_ joined #gluster
07:56 poornimag joined #gluster
07:57 AndroUser2 joined #gluster
08:04 aravindavk joined #gluster
08:07 DV_ joined #gluster
08:08 ctria joined #gluster
08:16 ashiq joined #gluster
08:23 Norky joined #gluster
08:29 mbukatov joined #gluster
08:30 DV_ joined #gluster
08:30 anoopcs joined #gluster
08:33 anrao joined #gluster
08:35 T3 joined #gluster
08:42 poornimag joined #gluster
08:43 RameshN_ joined #gluster
08:46 Bhaskarakiran_ joined #gluster
08:46 harish_ joined #gluster
08:47 [Enrico] joined #gluster
08:49 hagarth joined #gluster
09:05 glusterbot News from newglusterbugs: [Bug 1209388] qemu-img throws error while creating image using native driver for gluster <https://bugzilla.redhat.com/show_bug.cgi?id=1209388>
09:05 glusterbot News from newglusterbugs: [Bug 1207028] [Backup]: User must be warned while running the 'glusterfind pre' command twice without running the post command <https://bugzilla.redhat.com/show_bug.cgi?id=1207028>
09:10 Prilly joined #gluster
09:12 bala1 joined #gluster
09:13 anrao joined #gluster
09:22 soumya_ joined #gluster
09:24 kovshenin joined #gluster
09:35 glusterbot News from resolvedglusterbugs: [Bug 960818] Installing glusterfs rpms on a pristine f19 system throws "error reading information on service glusterfsd". <https://bugzilla.redhat.com/show_bug.cgi?id=960818>
09:35 glusterbot News from resolvedglusterbugs: [Bug 854753] Hadoop Integration, Address known write performance issues <https://bugzilla.redhat.com/show_bug.cgi?id=854753>
09:36 T3 joined #gluster
09:40 poornimag joined #gluster
09:50 m0zes joined #gluster
09:50 kovshenin joined #gluster
09:52 Bhaskarakiran_ joined #gluster
09:56 kovshenin joined #gluster
09:57 o5k joined #gluster
10:02 jmarley joined #gluster
10:02 kovsheni_ joined #gluster
10:03 atalur joined #gluster
10:06 dusmant joined #gluster
10:07 anil joined #gluster
10:16 poornimag joined #gluster
10:20 kovshenin joined #gluster
10:26 nbalacha joined #gluster
10:29 kovshenin joined #gluster
10:35 kovshenin joined #gluster
10:36 glusterbot News from newglusterbugs: [Bug 1209408] [Snapshot] Scheduler should accept only valid crond schedules <https://bugzilla.redhat.com/show_bug.cgi?id=1209408>
10:37 atalur joined #gluster
10:37 T3 joined #gluster
10:47 anrao joined #gluster
10:49 Slasheri left #gluster
10:55 rwheeler joined #gluster
10:59 DV__ joined #gluster
11:02 itisravi joined #gluster
11:06 glusterbot News from newglusterbugs: [Bug 1209432] Using TLS Identities for Authorization is mandatory, not optional <https://bugzilla.redhat.com/show_bug.cgi?id=1209432>
11:06 glusterbot News from newglusterbugs: [Bug 1209430] quota/marker: turn off inode quotas by default <https://bugzilla.redhat.com/show_bug.cgi?id=1209430>
11:07 ktosiek joined #gluster
11:17 itisravi joined #gluster
11:18 itisravi_ joined #gluster
11:28 gildub joined #gluster
11:29 kovshenin joined #gluster
11:30 _NiC joined #gluster
11:33 kovsheni_ joined #gluster
11:35 kovshenin joined #gluster
11:38 T3 joined #gluster
11:39 sac joined #gluster
11:41 Slashman joined #gluster
11:44 osiekhan3 joined #gluster
11:45 kovshenin joined #gluster
11:45 ndevos REMINDER: Gluster Bug Triage Meeting starts in 15 minutes in #gluster-meeting
11:45 ernetas Hey guys.
11:46 ernetas Is there a way to disable ipv6 support in gluster 3.6.2?
11:46 ernetas I'm getting "0-files-client-0: DNS resolution failed on host xxx" and then "getaddrinfo failed (No address associated with hostname)"
11:47 ndevos you could put the IPv4 address and hostnames in /etc/hosts..... I dont have any other ideas about it
11:48 kovshenin joined #gluster
11:49 ernetas Hmm, that's the thing - I have the IPv4 addresses in /etc/hosts and DNS should work with IPv4 too. I have no idea why it fails.
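A small diagnostic sketch for the resolution failure above (the hostname is a placeholder): the "DNS resolution failed" message comes from getaddrinfo, which consults /etc/hosts and DNS according to nsswitch.conf, so it is worth checking both paths:

    getent hosts gluster1.example.com   # what the client actually resolves, /etc/hosts included
    dig +short A gluster1.example.com   # what DNS alone returns for the IPv4 record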
11:52 kovshenin joined #gluster
11:56 chirino joined #gluster
11:56 soumya_ joined #gluster
12:00 hchiramm joined #gluster
12:00 ndevos REMINDER: Gluster Bug Triage Meeting starting now in #gluster-devel
12:00 kovshenin joined #gluster
12:01 ndevos correction, not #gluster-devel, but #gluster-meeting
12:03 kovsheni_ joined #gluster
12:06 kovsheni_ joined #gluster
12:08 kovshenin joined #gluster
12:08 ipmango joined #gluster
12:09 karnan joined #gluster
12:12 hagarth joined #gluster
12:13 Zephura joined #gluster
12:13 anoopcs joined #gluster
12:14 ira joined #gluster
12:14 kovshenin joined #gluster
12:17 Anjana joined #gluster
12:18 kovsheni_ joined #gluster
12:18 Zephura Hello, I've got a problem with gluster: a peer gets rejected after it reboots... did I miss something obvious?
12:20 kovshenin joined #gluster
12:23 kovsheni_ joined #gluster
12:25 kovshenin joined #gluster
12:31 kovshenin joined #gluster
12:33 DJClean joined #gluster
12:34 kovsheni_ joined #gluster
12:36 glusterbot News from newglusterbugs: [Bug 1209461] BVT: glusterd crashed and dumped during upgrade (on rhel7.1 server) <https://bugzilla.redhat.com/show_bug.cgi?id=1209461>
12:36 glusterbot News from newglusterbugs: [Bug 1207979] BitRot :- In case of NFS mount, Object Versioning and file signing is not working as expected <https://bugzilla.redhat.com/show_bug.cgi?id=1207979>
12:36 kovshenin joined #gluster
12:38 T3 joined #gluster
12:39 kanagaraj joined #gluster
12:39 kovsheni_ joined #gluster
12:41 ernetas Hmm, okay, after multiple reboots, it seems that it works now without SSL, but doesn't work with SSL enabled. http://fpaste.org/207955/14284105/ - here's the client side log. Any ideas what could be wrong?
12:42 kovshenin joined #gluster
12:45 kovshenin joined #gluster
12:47 ernetas Okay, the problem seems to be with mount using volume files... :/
12:48 ernetas Anything special that should be added to volume file when using SSL?
12:48 wkf joined #gluster
12:51 o5k_ joined #gluster
12:51 ernetas Nevermind, figured it out, "option transport.socket.ssl-enabled on" was needed. :) Would be nice to make a doc about it.
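For readers landing on the same problem: a hedged sketch of enabling SSL, assuming the client.ssl and server.ssl volume options of the 3.5/3.6 series (the volume name is a placeholder). ernetas' setup mounts from a hand-written client volfile, where the knob he quotes above is set inside the protocol/client block instead:

    gluster volume set myvol client.ssl on    # TLS on the client side of the I/O connections
    gluster volume set myvol server.ssl on    # TLS on the brick side of the I/O connections
    # in a hand-written protocol/client volfile, the equivalent is:
    #     option transport.socket.ssl-enabled on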
12:53 Gill joined #gluster
12:53 kovshenin joined #gluster
12:55 kanagaraj_ joined #gluster
12:56 kovsheni_ joined #gluster
12:58 DV joined #gluster
12:58 kovsheni_ joined #gluster
13:01 and` joined #gluster
13:02 kovshenin joined #gluster
13:03 purpleidea joined #gluster
13:03 purpleidea joined #gluster
13:05 halfinhalfout joined #gluster
13:07 kovshenin joined #gluster
13:07 poornimag joined #gluster
13:08 ninkotech joined #gluster
13:08 ira joined #gluster
13:08 ninkotech_ joined #gluster
13:09 ppai joined #gluster
13:12 tom[] joined #gluster
13:14 kovshenin joined #gluster
13:16 kovshenin joined #gluster
13:18 Prilly joined #gluster
13:19 kovshenin joined #gluster
13:19 Anjana joined #gluster
13:21 kovshenin joined #gluster
13:26 owlbot joined #gluster
13:28 social joined #gluster
13:28 _PiGreco_ joined #gluster
13:29 georgeh-LT2 joined #gluster
13:32 kovshenin joined #gluster
13:36 glusterbot News from newglusterbugs: [Bug 1209484] Unable to stop/start a volume <https://bugzilla.redhat.com/show_bug.cgi?id=1209484>
13:37 kovshenin joined #gluster
13:37 dgandhi joined #gluster
13:39 T3 joined #gluster
13:40 cicero semiosis: thanks. and yeah i shouldn't have used a personal PPA :X
13:40 kovshenin joined #gluster
13:45 hamiller joined #gluster
13:47 kovshenin joined #gluster
13:47 T3 joined #gluster
13:49 kovsheni_ joined #gluster
13:55 kovshenin joined #gluster
13:56 plarsen joined #gluster
13:57 kovsheni_ joined #gluster
14:01 kovshenin joined #gluster
14:03 kovshenin joined #gluster
14:05 overclk joined #gluster
14:08 kovshenin joined #gluster
14:10 kshlm joined #gluster
14:13 _Bryan_ joined #gluster
14:15 kovshenin joined #gluster
14:19 tom][ joined #gluster
14:23 kovsheni_ joined #gluster
14:24 jobewan joined #gluster
14:27 B21956 joined #gluster
14:27 B21956 left #gluster
14:27 B21956 joined #gluster
14:32 kovshenin joined #gluster
14:35 kovshenin joined #gluster
14:36 tom[] joined #gluster
14:36 wushudoin joined #gluster
14:38 wushudoin joined #gluster
14:40 harish joined #gluster
14:41 coredump joined #gluster
14:41 lpabon joined #gluster
14:47 jmarley joined #gluster
14:48 kovshenin joined #gluster
14:51 kovsheni_ joined #gluster
15:01 halfinhalfout1 joined #gluster
15:03 kovshenin joined #gluster
15:05 kovshenin joined #gluster
15:07 tom[] joined #gluster
15:07 kovshenin joined #gluster
15:12 kovshenin joined #gluster
15:14 kovshenin joined #gluster
15:18 overclk joined #gluster
15:18 nbalacha joined #gluster
15:19 kovsheni_ joined #gluster
15:21 kovshenin joined #gluster
15:25 virusuy joined #gluster
15:27 kovshenin joined #gluster
15:28 tom[] joined #gluster
15:29 kovshenin joined #gluster
15:31 kovshenin joined #gluster
15:33 kovshenin joined #gluster
15:35 kovshenin joined #gluster
15:38 kovshenin joined #gluster
15:40 kovshenin joined #gluster
15:47 kovsheni_ joined #gluster
15:47 kdhananjay joined #gluster
15:48 georgeh-LT2 joined #gluster
15:51 kovshenin joined #gluster
15:53 georgeh-LT2 joined #gluster
15:54 kovsheni_ joined #gluster
16:02 kovshenin joined #gluster
16:04 kovshenin joined #gluster
16:07 kovshenin joined #gluster
16:10 kovsheni_ joined #gluster
16:10 AndroUser2 joined #gluster
16:14 kovshenin joined #gluster
16:14 Debloper joined #gluster
16:16 Pupeno joined #gluster
16:16 Pupeno joined #gluster
16:21 pjschmit1 joined #gluster
16:24 kovshenin joined #gluster
16:25 lid1 joined #gluster
16:27 mdavidson joined #gluster
16:27 sas_ joined #gluster
16:33 JoeJulian ernetas: Since volfile mounting has not been supported since 3.0, documentation of an unsupported feature seems unlikely. Feel free to commit said documentation if you disagree with the position of the rest of the community. We're always happy to embrace differing opinions.
16:35 plarsen joined #gluster
16:40 lid1 if I am in a state where the gluster volume is offline, how can I get it back online?
16:47 jobewan joined #gluster
16:48 Manikandan joined #gluster
16:48 coredump joined #gluster
16:50 JoeJulian gluster volume start $volname
16:50 JoeJulian Or maybe even
16:50 JoeJulian gluster volume start $volname force
16:50 ron-slc joined #gluster
16:51 JoeJulian Usually, though, it's best to look in the log files and find out why it's failing to start.
16:55 lid1 great!! thanks, the force option did the trick
16:55 lid1 before, it complained that the volume was already started
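A compact sketch of the recovery sequence JoeJulian outlines above ($volname and the brick log name are placeholders; the log path assumes a default install):

    gluster volume status $volname                  # see which brick processes are offline
    gluster volume start $volname force             # respawn them even though the volume is nominally started
    less /var/log/glusterfs/bricks/<brick>.log      # find out why a brick refused to start in the first place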
17:01 tessier joined #gluster
17:03 kovshenin joined #gluster
17:09 Manikandan joined #gluster
17:10 corretico joined #gluster
17:12 foster joined #gluster
17:15 RameshN joined #gluster
17:15 kovsheni_ joined #gluster
17:18 bene2 joined #gluster
17:19 tg2 you have the transport.socket.ssl* settings enabled?
17:19 tg2 derp just realized that was from 9am lol
17:20 kovshenin joined #gluster
17:20 ctria joined #gluster
17:23 kovshenin joined #gluster
17:25 papamoose joined #gluster
17:26 anrao joined #gluster
17:26 papamoose joined #gluster
17:27 alpha01 joined #gluster
17:28 alpha01 is the disperse volume type stable for production use?
17:29 dbruhn joined #gluster
17:29 Prilly joined #gluster
17:30 kovshenin joined #gluster
17:31 Rapture joined #gluster
17:32 kovsheni_ joined #gluster
17:35 kovsheni_ joined #gluster
17:35 o5k_ joined #gluster
17:36 jackdpeterson joined #gluster
17:36 hchiramm_ joined #gluster
17:37 fubada joined #gluster
17:38 JoeJulian alpha01: I haven't heard any feedback here yet.
17:39 hellomichibye joined #gluster
17:39 ernetas JoeJulian: I find that the whole documentation is very scattered. There are like maybe 5 pages about GlusterFS SSL support. There is also a bit of information about 3.2 and 3.1, but not much for 3.5, even though it probably works pretty much the same way. Are there any plans to consolidate this, or should I just write a blog post (actually, I just did), contribute to some specific page in the docs, or do something else?
17:39 alpha01 We
17:40 alpha01 We're evaluating a disperse deployment, but I'm somewhat nervous about it since it's still fairly new
17:40 JoeJulian ernetas: http://www.gluster.org/documentation/
17:40 JoeJulian Administrator Guide
17:41 JoeJulian I'm always surprised how many people who haven't looked there complain that the documentation is sparse, scattered, and old.
17:42 JoeJulian Anyway, to contribute to that documentation, follow the ,,(hack)ing guide.
17:42 glusterbot The Development Work Flow is at http://www.gluster.org/community/documentation/index.php/Simplified_dev_workflow
17:42 JoeJulian alpha01: As you should be. Be sure to test, then test some more. After that it's a good idea to test. ;)
17:43 alpha01 :)
17:46 jermudgeon joined #gluster
17:50 kovshenin joined #gluster
17:52 hellomichibye hi! I'm creating a quite simple setup in AWS with a single volume with replica 2. I am using hard disks on the server, so I am not using network-attached storage or anything similar. That's why I want to make some kind of "disaster" backup to the S3 object store
17:52 hellomichibye is it "safe" if I just sync the files of the brick from one of the servers to S3?
17:52 kovshenin joined #gluster
17:53 hellomichibye the volume is around 2 TB in size
17:55 kovshenin joined #gluster
17:57 JoeJulian hellomichibye: Sure, that's fine.
17:57 hellomichibye I found an article somewhere where geo-replication was mentioned as a way to perform the task; does this make sense?
17:58 thisisdave joined #gluster
18:00 hellomichibye so many tabs… I can't find the article, but the guy had the idea of a "translator" that uses S3 as the storage backend
18:00 kovshenin joined #gluster
18:02 roost joined #gluster
18:03 hellomichibye https://forums.aws.amazon.com/thread.jspa?threadID=13786
18:03 hellomichibye from 2007 :)
18:05 thisisdave joined #gluster
18:05 thisisdave left #gluster
18:06 JoeJulian There's no s3 translator, though it wouldn't be impossible to write one, I'd probably use the python bindings if I were going to try. Otherwise there are fuse mount interfaces to s3. You could use that with geo-rep.
18:06 kovsheni_ joined #gluster
18:06 Prilly joined #gluster
18:08 glusterbot News from newglusterbugs: [Bug 1203739] Self-heal of sparse image files on 3-way replica "unsparsifies" the image <https://bugzilla.redhat.com/show_bug.cgi?id=1203739>
18:09 kovshenin joined #gluster
18:09 hellomichibye thx!
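A hedged sketch of the simpler approach JoeJulian confirmed earlier, pushing one replica's brick contents straight to S3 (the bucket name and brick path are placeholders, and the AWS CLI is assumed to be installed). The geo-replication route he mentions would instead target something like an s3fs fuse mount:

    # exclude .glusterfs, the brick's internal hard-link and metadata tree
    aws s3 sync /bricks/myvol/brick1 s3://my-dr-bucket/myvol --exclude ".glusterfs/*"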
18:10 o5k__ joined #gluster
18:11 kovshenin joined #gluster
18:13 kovsheni_ joined #gluster
18:15 o5k joined #gluster
18:18 lalatenduM joined #gluster
18:20 kovshenin joined #gluster
18:23 kovshenin joined #gluster
18:25 kovsheni_ joined #gluster
18:29 kovshenin joined #gluster
18:31 anrao joined #gluster
18:32 hagarth joined #gluster
18:35 _Bryan_ joined #gluster
18:35 kovshenin joined #gluster
18:37 lpabon joined #gluster
18:41 kovsheni_ joined #gluster
18:47 Prilly joined #gluster
18:49 kovshenin joined #gluster
18:53 kovsheni_ joined #gluster
18:55 kovsheni_ joined #gluster
19:05 tom][ joined #gluster
19:12 Rapture joined #gluster
19:14 TrDS joined #gluster
19:15 TrDS left #gluster
19:29 _Bryan_ joined #gluster
19:31 shaunm joined #gluster
19:31 kovshenin joined #gluster
19:36 Pupeno joined #gluster
19:56 kovsheni_ joined #gluster
19:57 hchiramm_ joined #gluster
20:03 kovshenin joined #gluster
20:03 getup joined #gluster
20:08 kovshenin joined #gluster
20:11 jeek joined #gluster
20:12 jeek Is there a good way to speed up the healing process?
20:15 jbrooks joined #gluster
20:17 DV joined #gluster
20:18 jbrooks joined #gluster
20:18 kovsheni_ joined #gluster
20:19 hchiramm_ joined #gluster
20:22 kovshenin joined #gluster
20:27 purpleidea joined #gluster
20:27 purpleidea joined #gluster
20:30 kovsheni_ joined #gluster
20:38 kovshenin joined #gluster
20:44 purpleidea joined #gluster
20:44 purpleidea joined #gluster
20:50 JoeJulian More bandwidth, faster cpu and ram...
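Beyond hardware, a first step is usually to measure the heal backlog itself; a minimal sketch with myvol as a placeholder volume name:

    gluster volume heal myvol info      # list entries still pending heal on each brick
    gluster volume heal myvol full      # trigger a full crawl instead of waiting on the index heal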
21:15 Pupeno joined #gluster
21:16 badone_ joined #gluster
21:17 Pupeno joined #gluster
21:17 Pupeno joined #gluster
21:29 jpds joined #gluster
21:29 jpds Hello.
21:29 glusterbot jpds: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
21:29 jpds Is there a way to make a replica volume into a geo-replicated one?
21:32 badone__ joined #gluster
21:42 Prilly joined #gluster
21:45 JoeJulian jpds: To convert it? not really. You can remove replication, "gluster volume remove-brick replica 1 $server:$brick ..." if your volume is only two bricks, then that will make it a 1 brick volume with no replication. Then you can create geo-replication from there.
21:45 jpds JoeJulian: Ah, that's what I need, thanks.
21:46 jpds All the docs seem to suggest having clusters of servers, with geo-replication between them.
21:46 JoeJulian depends on the use case.
21:46 jpds In my case, I only have two boxes with an internet link between them.
21:46 JoeJulian And you're aware that geo-rep is uni-directional?
21:47 jpds I'm not, but the second one is a backup server.
21:47 JoeJulian As long as that suits your need, have at it. :D
21:48 jpds It could act as a hot-spare, but I'd notice the first one dying first. :)
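A hedged sketch of the conversion JoeJulian describes above, assuming a two-brick replica 2 volume and the distributed geo-replication syntax of the 3.5/3.6 series (all host, volume, and brick names are placeholders, and the slave side needs a gluster volume of its own):

    # drop the second replica so the volume becomes a plain one-brick volume
    gluster volume remove-brick myvol replica 1 server2:/bricks/myvol/brick2 force
    # point geo-replication at the volume on the backup box and start it
    gluster volume geo-replication myvol backuphost::backupvol create push-pem
    gluster volume geo-replication myvol backuphost::backupvol start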
21:50 lexi2 joined #gluster
21:59 jbrooks joined #gluster
21:59 wkf joined #gluster
22:00 jbrooks joined #gluster
22:12 hchiramm joined #gluster
22:24 lexi2_ joined #gluster
22:26 Gill joined #gluster
22:32 bene2 joined #gluster
22:35 T3 joined #gluster
22:37 Peppard joined #gluster
22:44 hchiramm joined #gluster
22:48 T3 joined #gluster
22:50 Pupeno joined #gluster
23:15 dgandhi joined #gluster
23:33 Gill joined #gluster
23:37 gildub joined #gluster
23:48 lexi2 joined #gluster
