01:48 -- Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
05:49 <The_Ball> What does it mean when volume heal shows: <gfid:71e46e87-aabe-4543-b31a-80c57dee7d44>?
05:58 <itisravi> The_Ball: The file corresponding to that gfid needs heal.
05:59 <The_Ball> itisravi, is there a good way to find gfid -> filename mapping?
06:00 <itisravi> The_Ball: https://github.com/gluster/glusterfs/blob/master/doc/debugging/gfid-to-path.md
06:00 <glusterbot> Title: glusterfs/gfid-to-path.md at master · gluster/glusterfs · GitHub (at github.com)
06:01 <The_Ball> Thanks!
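The core trick in the doc itisravi links: on any brick, .glusterfs/<first two hex chars>/<next two>/<full gfid> is a hardlink to the regular file with that gfid, so an inode match recovers the filename. A minimal sketch, assuming a brick at the hypothetical path /data/brick1:

    gfid=71e46e87-aabe-4543-b31a-80c57dee7d44
    brick=/data/brick1
    # Match by inode; filter the .glusterfs hardlink itself out of the output.
    find "$brick" -samefile "$brick/.glusterfs/${gfid:0:2}/${gfid:2:2}/$gfid" \
        -not -path '*/.glusterfs/*'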
11:31 <Saravanakmr> #REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (starts in ~30 minutes)
13:38 <drue> how are people monitoring their gluster clusters in production?
13:46 <otaku_coder> I have a 2-node replicated gluster cluster up and running. I now need to configure the client side to mount my replicated volume. Documentation for this is somewhat confusing, however. Most examples say to simply add one of the 2 nodes into your fstab config to mount the volume. I have also found the `backupvolfile-server` parameter which can be passed to the glusterfs client, but then found a changelog which stated it had been deprecated. My question is this: how do you configure a high-availability client mount?
14:18 <post-factum> otaku_coder: DNS RR
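post-factum's answer is terse, so some context: the server named in fstab is only used to fetch the volfile; after that the client connects to every brick directly, so mount time is the only single point of failure to cover. A round-robin DNS record resolving to both nodes handles it, as does the backup-volfile-servers mount option (the plural spelling that superseded the deprecated backupvolfile-server). A sketch of an fstab entry, with gluster1/gluster2 and myvol as placeholder names to verify against your client version:

    gluster1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,backup-volfile-servers=gluster2  0 0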
14:47 <theeboat> I'm looking for some advice as to whether gluster will be the correct choice. The content that I am storing is media files ranging from 20-140GB. Over the course of a year we use around 400TB of storage space, with the possibility of change. We are looking to buy 60-bay chassis with near-line SAS drives, 2x 6-core Xeon processors & 32GB RAM.
17:46 <tomfite> Hey all!
17:57 <tomfite> Looks like I'm the second person today to ask about production hardware requirements... we're looking to start with a ~100 TB cluster containing millions of files of varying sizes. For the storage nodes we're looking at buying at least 8 cores with 64GB RAM, 32x 4.0TB drives in RAID 6, and a 10GbE card for interconnects. Each box will be set up as one brick.
17:58 <tomfite> Wondering if anybody knows roughly how many connections a setup like that would be able to deal with.
17:59 <JoeJulian> Hey, tomfite. Unfortunately theeboat logged off before he could get an answer.
18:00 <tomfite> Yeah, it kinda sounds like we have similar requirements
18:00 <JoeJulian> I would do that differently with the same hardware (I would add RAM, though, if that can change).
18:01 <tomfite> Would you go to 128 or more?
18:02 <JoeJulian> 8 x RAID 0 bricks, replica 3 (obviously some multiple of 3 servers)
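Spelled out, that layout would be created along these lines; hostnames and brick paths are placeholders, and the brick list walks across servers first so each replica set spans all three machines (see the brick-order note further down):

    gluster volume create bigvol replica 3 \
        server{1..3}:/data/brick1 server{1..3}:/data/brick2 \
        server{1..3}:/data/brick3 server{1..3}:/data/brick4 \
        server{1..3}:/data/brick5 server{1..3}:/data/brick6 \
        server{1..3}:/data/brick7 server{1..3}:/data/brick8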
18:02 <JoeJulian> I'd buy as much RAM as I can get budget for.
18:02 <tomfite> Can't have too much memory!
18:03 <JoeJulian> Nobody ever got fired for having stuff in cache.
18:03 <tomfite> So we're looking at doing replica 3, but I was assuming we'd be replicating across boxes... if we have more than one brick per node, can we assume we can lose a box and be OK?
18:03 <JoeJulian> Yes, across boxes. The reason for multiple bricks is recovery.
18:04 <tomfite> Oh I see
18:05 <tomfite> I'm guessing you do RAID 0 for better performance?
18:05 <JoeJulian> RAID6 increases your MTBF and actually has a longer MTTR.
18:06 <JoeJulian> Yes, RAID 0 just gives you your 10Gb network throughput. If you need 40, adjust as necessary.
18:07 <tomfite> Gotcha
18:07 <JoeJulian> So yeah, you lose a drive, you lose the whole RAID, but you've got 2 replicas that are still up on different machines. Replace the drive, build a new array, let gluster re-replicate.
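A sketch of that recovery, with volume and brick names as placeholders (the replace-brick form below is the 3.7-era syntax; verify it against your version's docs):

    # After rebuilding the array and mounting it at a fresh path:
    gluster volume replace-brick bigvol server2:/data/brick3 \
        server2:/data/brick3-new commit force
    # Self-heal re-replicates from the surviving copies; watch the backlog:
    gluster volume heal bigvol info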
18:07 <tomfite> OK that makes a lot of sense
18:08 <JoeJulian> It takes us, with 4TB drives, 10% as long to do it that way as it does to rebuild a 30-disk RAID 6.
18:08 <tomfite> Right... we're running RAID 6 currently and the rebuilds take forever
18:08 <JoeJulian> (well, 28+2)
18:08 <tomfite> so getting away from that would be great
18:09 <JoeJulian> Another option, with the new tiering feature, is to not do RAID 0 at all, but have a hot tier of SSDs.
18:10 <JoeJulian> * I haven't tried tiering yet
18:10 <tomfite> Interesting, how new is that? I haven't run across it yet.
18:10 <JoeJulian> http://blog.gluster.org/2016/03/automated-tiering-in-gluster/
18:10 <glusterbot> Title: Automated Tiering in Gluster | Gluster Community Website (at blog.gluster.org)
18:10 <JoeJulian> 3.7
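For reference, attaching a hot tier in the 3.7-era feature looks roughly like the following; brick paths are placeholders, and the attach-tier spelling changed across releases, so treat the exact syntax as an assumption to check against your version:

    gluster volume attach-tier bigvol replica 3 server{1..3}:/ssd/hotbrick1
    # and to remove the tier later:
    gluster volume detach-tier bigvol start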
18:11 <tomfite> Also I was just about to ask you how you make sure that you don't replicate bricks on the same machine, but you already answered it https://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/ :)
18:11 <glusterbot> Title: How to expand GlusterFS replicated clusters by one server (at joejulian.name)
18:11 <tomfite> OK cool, I'll dig into that a bit
18:12 <JoeJulian> That and ,,(brick-order)
18:12 <glusterbot> I do not know about 'brick-order', but I do know about these similar topics: 'brick order'
18:12 <JoeJulian> That and ,,(brick order)
18:12 <glusterbot> Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
18:12 <JoeJulian> coffee's slowly beginning to kick in.
18:14 <tomfite> OK so as long as the bricks are added across machines in volume create (or volume add-brick), then we're good
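Expansion follows the same ordering rule. A hedged example with placeholder names, growing the volume sketched above by one distribute subvolume (three new bricks, one per new server), then rebalancing:

    gluster volume add-brick bigvol server{4..6}:/data/brick1
    gluster volume rebalance bigvol start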
18:15 <tomfite> I'll play around with that in my testing environment
18:15 <JoeJulian> If I had my way there would be three tiers. I wonder if that's possible. I want a frozen tier where the drives are actually powered off.
18:16 <JoeJulian> When we're running a 19kW rack, it would be nice to be able to shut things down that aren't needed.
18:18 <tomfite> Efficient!
18:21 <tomfite> Thanks so much for the help, Joe!
18:23 <JoeJulian> Any time.
18:23 <JoeJulian> I'm pretty much always here.
21:03 <guhcampos> Is there any other GUI frontend for managing glusterfs besides ovirt? I have some volumes which span machines that are not part of my ovirt cluster, and ovirt can't see the bricks on those.
21:04 <guhcampos> I'm mostly interested in having a visual overview of the volumes' health, but being able to manage the volumes is a plus.
22:16 <JoeJulian> guhcampos: nope, though the rewrite of glusterd is supposed to have an HTTP API, so rolling your own should be pretty easy at that point.
22:17 <JoeJulian> And you can presently use the CLI's XML output to do the same.
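That XML output is easy to script against for the kind of health overview guhcampos wants. A minimal sketch; the volume name is a placeholder and the XPath is illustrative of the status fields, so check both against your version's output:

    # Per-brick online status from the XML (1 = up):
    gluster volume status myvol detail --xml | xmllint --xpath '//node/status/text()' -
    # Pending self-heal backlog per brick:
    gluster volume heal myvol info | grep 'Number of entries'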
22:19 <guhcampos> JoeJulian I see, thanks. I tried the glusterfs capabilities in ovirt and gave up. It does not really work.
22:42 <JoeJulian> I've never successfully used ovirt. I tried it out at home for about 4 days before I gave up on it.
23:41 <ravana_2> Hello all, I'm new to Gluster. I have a 2-node replicated setup. If a node is unavailable, is it normal to see a performance degradation?
23:44 <JoeJulian> Not usually, no.
23:49 <ravana_2> Thanks JoeJulian, I'm testing on AWS and dd showed 94.6 MB/s; after taking a node down, writing speed was 644 kB/s.
23:50 <JoeJulian> I wouldn't use dd for any application you need cluster-wide performance from.
23:51 <JoeJulian> But yes, that doesn't sound normal.
23:52 <JoeJulian> I've literally seen increased performance during an outage. It makes sense if you think about it. All the client needs to do during an outage is write to one server and mark the other as stale.
23:55 <ravana_2> Indeed, I'll do more testing to find out the reason. Thanks JoeJulian.