Time |
Nick |
Message |
00:11 |
|
ninkotech joined #gluster |
00:11 |
|
ninkotech_ joined #gluster |
00:23 |
|
JoseBravo joined #gluster |
00:40 |
|
sputnik13 joined #gluster |
00:48 |
|
sputnik13 joined #gluster |
00:59 |
|
justinmburrous joined #gluster |
01:00 |
gomikemike |
what can cause "mismatching ino/dev between file" |
01:11 |
|
justinmburrous joined #gluster |
01:19 |
|
msmith_ joined #gluster |
01:24 |
|
plarsen joined #gluster |
01:27 |
|
sputnik13 joined #gluster |
01:30 |
|
harish joined #gluster |
01:38 |
|
gildub joined #gluster |
01:57 |
|
MacWinner joined #gluster |
02:07 |
|
msmith_ joined #gluster |
02:13 |
|
wgao joined #gluster |
02:21 |
|
haomaiwa_ joined #gluster |
02:26 |
|
harish joined #gluster |
02:29 |
|
justinmburrous joined #gluster |
02:31 |
|
haomaiw__ joined #gluster |
02:33 |
|
bala joined #gluster |
02:39 |
|
haomai___ joined #gluster |
02:46 |
|
coredump joined #gluster |
02:59 |
|
edong23 joined #gluster |
02:59 |
|
bala joined #gluster |
03:08 |
|
edong23_ joined #gluster |
03:09 |
|
side_control joined #gluster |
03:33 |
|
bala joined #gluster |
03:34 |
|
itisravi joined #gluster |
03:35 |
|
edong23 joined #gluster |
03:37 |
|
side_control joined #gluster |
03:42 |
|
RameshN joined #gluster |
03:43 |
|
nbalachandran joined #gluster |
03:49 |
|
bharata-rao joined #gluster |
03:50 |
|
kanagaraj joined #gluster |
03:59 |
|
Peanut joined #gluster |
03:59 |
|
haomaiwa_ joined #gluster |
04:01 |
|
shubhendu joined #gluster |
04:07 |
|
edong23 joined #gluster |
04:10 |
|
ndarshan joined #gluster |
04:10 |
|
edong23_ joined #gluster |
04:11 |
|
RameshN joined #gluster |
04:12 |
|
itisravi joined #gluster |
04:20 |
|
haomaiwang joined #gluster |
04:29 |
|
kshlm joined #gluster |
04:35 |
|
Rafi_kc joined #gluster |
04:35 |
|
rafi1 joined #gluster |
04:37 |
|
spandit joined #gluster |
04:38 |
|
soumya_ joined #gluster |
04:39 |
|
PeterA1 joined #gluster |
04:39 |
|
anoopcs joined #gluster |
04:39 |
|
kaushal_ joined #gluster |
04:40 |
|
deepakcs joined #gluster |
04:50 |
|
ramteid joined #gluster |
04:54 |
|
sputnik13 joined #gluster |
04:55 |
|
harish joined #gluster |
05:00 |
|
justinmburrous joined #gluster |
05:02 |
|
bkrram joined #gluster |
05:04 |
|
msmith_ joined #gluster |
05:05 |
|
jiffin joined #gluster |
05:05 |
|
rjoseph joined #gluster |
05:09 |
|
aravindavk joined #gluster |
05:23 |
|
bala joined #gluster |
05:26 |
|
atalur joined #gluster |
05:39 |
|
hagarth joined #gluster |
05:41 |
|
ndarshan joined #gluster |
05:45 |
|
edong23 joined #gluster |
05:47 |
|
meghanam joined #gluster |
05:47 |
|
meghanam_ joined #gluster |
05:51 |
|
foster joined #gluster |
05:52 |
|
edong23 joined #gluster |
05:58 |
|
edong23 joined #gluster |
05:59 |
|
kumar joined #gluster |
06:06 |
|
lalatenduM joined #gluster |
06:12 |
|
soumya joined #gluster |
06:14 |
|
edong23 joined #gluster |
06:14 |
|
nishanth joined #gluster |
06:16 |
|
nshaikh joined #gluster |
06:16 |
|
kaushal_ joined #gluster |
06:20 |
|
pkoro_ joined #gluster |
06:35 |
|
ThatGraemeGuy joined #gluster |
06:54 |
|
ekuric joined #gluster |
07:01 |
|
ctria joined #gluster |
07:01 |
|
sputnik13 joined #gluster |
07:09 |
|
Fen1 joined #gluster |
07:11 |
|
nixpanic joined #gluster |
07:15 |
|
sputnik13 joined #gluster |
07:18 |
|
ackjewt joined #gluster |
07:23 |
|
user_42 joined #gluster |
07:31 |
|
fsimonce joined #gluster |
07:52 |
|
spandit joined #gluster |
07:53 |
|
liquidat joined #gluster |
07:56 |
|
nthomas joined #gluster |
08:02 |
|
Slydder joined #gluster |
08:02 |
Slydder |
morning all. |
08:03 |
Slydder |
does anyone here have a complete list of fuse mount options one can use with a gluster mount? I need to know if there is a way to get larger than 4K writes to work, or something. gluster hangs writing to the fuse mount (mounted to the local gfs server); after about 10-15 minutes it starts writing again. |
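(For reference, the mount options Slydder is asking about are documented in the mount.glusterfs man page; a sketch of a typical mount line follows, with server, volume and mount-point names as placeholders.)

    man mount.glusterfs          # lists the supported -o options
    mount -t glusterfs \
        -o backupvolfile-server=server2,direct-io-mode=disable,log-level=WARNING \
        server1:/myvol /mnt/myvol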
08:08 |
|
anands joined #gluster |
08:08 |
|
calum_ joined #gluster |
08:16 |
Slydder |
and I keep getting this error even though fuse-utils is installed : 0-glusterfs-fuse: failed to exec fusermount: No such file or directory |
08:16 |
Slydder |
any ideas? |
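(A hedged troubleshooting sketch for the fusermount error above; on Debian the binary normally comes from the fuse package, and the fuse kernel module must be loaded. Paths may differ.)

    which fusermount              # is the binary in root's PATH at all?
    dpkg -S $(which fusermount)   # which package provides it
    apt-get install fuse          # Debian package shipping /bin/fusermount
    ls -l /dev/fuse && lsmod | grep fuse
    modprobe fuse                 # load the module if it is missing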
08:17 |
|
TvL2386 joined #gluster |
08:19 |
|
harish joined #gluster |
08:24 |
|
ninkotech__ joined #gluster |
08:26 |
|
nthomas_ joined #gluster |
08:33 |
Fen1 |
Hi ! Can we set up the size of a brick ? |
08:45 |
Slydder |
Fen1: just partition to the needed size and make that partition a brick. |
08:45 |
Slydder |
use LVM so you can enlarge it later. |
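(A minimal sketch of what Slydder is describing: a dedicated, LVM-backed filesystem per brick so its size is fixed now but can be grown later. Device, VG and size values are placeholders.)

    pvcreate /dev/sdb
    vgcreate vg_bricks /dev/sdb
    lvcreate -L 100G -n brick1 vg_bricks
    mkfs.xfs -i size=512 /dev/vg_bricks/brick1
    mkdir -p /bricks/brick1
    mount /dev/vg_bricks/brick1 /bricks/brick1
    echo '/dev/vg_bricks/brick1 /bricks/brick1 xfs defaults 0 0' >> /etc/fstab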
08:46 |
|
RameshN joined #gluster |
08:46 |
Fen1 |
I don't understand, how could i make it ? |
08:46 |
|
RaSTar joined #gluster |
08:47 |
Fen1 |
I have 4 brick in a volume, but when i write "volume info", i don't see the size of the volume |
08:47 |
|
rtalur_ joined #gluster |
08:48 |
Fen1 |
i don't understand why... |
08:48 |
Fen1 |
Is it a dynamic volume ? with no limit ? |
08:49 |
Slydder |
try gluster volume status VOLNAME detail |
08:49 |
Fen1 |
ok i will |
08:50 |
Slydder |
shows me free space and total space |
08:51 |
Fen1 |
wrong command |
08:51 |
Slydder |
what version do you have? |
08:51 |
|
Philambdo joined #gluster |
08:51 |
Fen1 |
gluster> volume info |
08:51 |
Fen1 |
Volume Name: test-volume |
08:51 |
Fen1 |
Type: Distribute |
08:51 |
Fen1 |
Status: Started |
08:51 |
Fen1 |
Number of Bricks: 4 |
08:51 |
Fen1 |
Transport-type: tcp |
08:51 |
Fen1 |
Bricks: |
08:51 |
Fen1 |
Brick1: 10.0.176.10:/exp1 |
08:51 |
Fen1 |
Brick2: 10.0.176.10:/exp2 |
08:51 |
Fen1 |
Brick3: 10.0.176.11:/exp1 |
08:51 |
Fen1 |
Brick4: 10.0.176.11:/exp2 |
08:52 |
Slydder |
gluster volume status test-volume detail |
08:52 |
Slydder |
it should list each brick and all info for each brick. |
08:53 |
Fen1 |
maybe i have an old version, i just install with apt-get install glusterfs-server |
08:53 |
Slydder |
gluster --version |
08:54 |
Slydder |
I'm running 3.5.2 atm |
08:54 |
Fen1 |
glusterfs 3.2.7 |
08:55 |
Slydder |
then upgrade to the newest version. |
08:55 |
Slydder |
what distro? |
08:56 |
Fen1 |
it's VM debian 3.2 |
08:56 |
Fen1 |
maybe i can't with this debian |
08:56 |
Slydder |
3.2? |
08:57 |
Fen1 |
yep |
08:57 |
Slydder |
there is actually 3.2 in your /etc/debian_version file? |
08:59 |
Fen1 |
no 7.6 it's weird |
08:59 |
Slydder |
ok. wheezy |
08:59 |
Slydder |
3.2 is your kernel version |
09:00 |
Fen1 |
ok thx ;) |
09:00 |
Slydder |
you did a uname -a or some such |
09:00 |
|
vimal joined #gluster |
09:01 |
Slydder |
wget -O - http://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.2/Debian/wheezy/pubkey.gpg | apt-key add - |
09:01 |
Slydder |
echo deb http://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.2/Debian/wheezy/apt wheezy main > /etc/apt/sources.list.d/gluster.list |
09:01 |
Fen1 |
thx ;) |
09:01 |
Slydder |
after those 2 commands do apt-get update and then apt-get install glusterfs-server |
09:01 |
Fen1 |
and do you know when the 3.6 release ? |
09:01 |
Slydder |
nope. |
09:03 |
|
harish joined #gluster |
09:10 |
|
allupkau joined #gluster |
09:17 |
|
nthomas_ joined #gluster |
09:27 |
Slydder |
god there has to be a way to mount a gfs volume without fuse. what a piece of crap. grrrrrrr. fuse I mean. |
09:33 |
|
hagarth joined #gluster |
09:33 |
|
saurabh joined #gluster |
09:34 |
|
lalatenduM joined #gluster |
09:35 |
|
nthomas_ joined #gluster |
09:41 |
Fen1 |
you can mount with NFS or CIFS |
09:42 |
Fen1 |
Slydder: no ? |
09:43 |
Slydder |
yeah. was hoping to avoid. |
09:43 |
Slydder |
either way I now know that fuse is not my problem. |
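(A sketch of the NFS alternative Fen1 mentions; Gluster's built-in NFS server only speaks NFSv3 over TCP, so the version has to be forced. Server and volume names are placeholders.)

    mount -t nfs -o vers=3,mountproto=tcp server1:/myvol /mnt/myvol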
09:44 |
Fen1 |
so here my volume status : |
09:44 |
Fen1 |
gluster> volume status |
09:44 |
Fen1 |
Status of volume: TEST |
09:44 |
Fen1 |
Gluster process                    Port    Online  Pid |
09:44 |
Fen1 |
------------------------------------------------------------------------------ |
09:44 |
Fen1 |
Brick 10.0.176.10:/part1           49152   Y       2801 |
09:44 |
Fen1 |
Brick 10.0.176.11:/part2           49152   Y       2735 |
09:44 |
Fen1 |
NFS Server on localhost            2049    Y       2813 |
09:44 |
Fen1 |
NFS Server on 10.0.176.11          2049    Y       2747 |
09:44 |
Fen1 |
|
09:44 |
Fen1 |
Task Status of Volume TEST |
09:44 |
Fen1 |
------------------------------------------------------------------------------ |
09:44 |
Fen1 |
There are no active volume tasks |
09:46 |
Slydder |
now do the status detail call on test and you will see your brick sizes |
09:47 |
Slydder |
nfs is faster than fuse but not by much |
09:48 |
|
bala joined #gluster |
09:48 |
Fen1 |
nfs is better for a lot of small files than fuse |
09:50 |
Fen1 |
Free: 3.7GB / Total: 4.9GB |
09:52 |
Fen1 |
But if i need a huge amount of storage, do i have to make a lot of bricks ??? can't i use a bigger brick ? |
09:53 |
|
Slydder1 joined #gluster |
09:53 |
Slydder1 |
cool |
09:57 |
|
bkrram joined #gluster |
10:04 |
|
Sunghost joined #gluster |
10:06 |
Sunghost |
hello, are there any performance improvements for rebalance in the current 3.5.2? |
10:16 |
Sunghost |
i read about multifile rebalance instead of a single file - is this actually possible or only in future versions? |
10:20 |
|
elico joined #gluster |
10:30 |
|
lalatenduM joined #gluster |
10:31 |
|
kkeithley2 joined #gluster |
10:32 |
|
bharata-rao joined #gluster |
10:38 |
|
bkrram joined #gluster |
10:39 |
|
sputnik13 joined #gluster |
10:42 |
|
hagarth joined #gluster |
10:43 |
|
diegows joined #gluster |
10:45 |
|
soumya_ joined #gluster |
10:47 |
|
bkrram joined #gluster |
10:49 |
|
meghanam joined #gluster |
10:49 |
|
meghanam_ joined #gluster |
10:53 |
|
rjoseph|afk joined #gluster |
10:55 |
|
bkrram joined #gluster |
10:58 |
|
lalatenduM joined #gluster |
11:11 |
|
LebedevRI joined #gluster |
11:12 |
|
bkrram joined #gluster |
11:14 |
|
anands joined #gluster |
11:18 |
|
bjornar joined #gluster |
11:21 |
|
bjornar joined #gluster |
11:22 |
|
vimal joined #gluster |
11:25 |
|
ricky-ti1 joined #gluster |
11:34 |
|
bala joined #gluster |
11:38 |
|
bkrram joined #gluster |
11:40 |
|
plarsen joined #gluster |
11:42 |
|
gildub joined #gluster |
11:45 |
|
vimal joined #gluster |
11:50 |
|
vimal joined #gluster |
11:54 |
|
DV joined #gluster |
11:59 |
|
tdasilva joined #gluster |
12:00 |
|
edwardm61 joined #gluster |
12:02 |
ndevos |
REMINDER: Bug Triage meeting starting in #gluster-meeting |
12:03 |
|
ira joined #gluster |
12:04 |
|
Fen1 joined #gluster |
12:04 |
|
kanagaraj joined #gluster |
12:05 |
|
bkrram joined #gluster |
12:06 |
|
soumya joined #gluster |
12:13 |
|
sks joined #gluster |
12:20 |
|
hagarth joined #gluster |
12:20 |
|
user_44 joined #gluster |
12:24 |
|
kanagaraj joined #gluster |
12:27 |
|
lkoranda joined #gluster |
12:34 |
|
B21956 joined #gluster |
12:43 |
|
anands joined #gluster |
12:48 |
|
haomaiwa_ joined #gluster |
13:06 |
|
ppai joined #gluster |
13:07 |
|
Sunghost joined #gluster |
13:08 |
Sunghost |
hello, i ran a rebalance but the rebalance.log grows fast and now exceeds my space on /, what can i do? |
13:09 |
|
julim joined #gluster |
13:13 |
Sunghost |
can i simply stop the rebalance process and delete this log file? |
13:14 |
|
sks joined #gluster |
13:17 |
|
chucky_z joined #gluster |
13:18 |
chucky_z |
Hola! I'm on a server where a (VERY) old version of Gluster was installed and when trying to mount up the new brick I created it's saying that it's part of a brick... Actual error is: ERROR: /shared is in use as a brick of a gluster volume |
13:21 |
|
elico joined #gluster |
13:23 |
|
coredump joined #gluster |
13:24 |
|
kdhananjay joined #gluster |
13:26 |
|
jmarley joined #gluster |
13:27 |
Slydder1 |
strange. no matter what I do gluster seems to be limited to 2mbit |
13:31 |
|
coredump joined #gluster |
13:37 |
|
itisravi joined #gluster |
13:39 |
|
virusuy joined #gluster |
13:39 |
|
virusuy joined #gluster |
13:56 |
|
msmith_ joined #gluster |
13:59 |
Fen1 |
Hi ! Can i set up the size of a brick ? |
14:00 |
Fen1 |
Because for now 1 brick = 5GB |
14:04 |
partner |
the brick size is the size of your mountpoint |
14:04 |
partner |
ie. if you mount 100G disk into that place it'll be 100G |
14:05 |
Fen1 |
but i don't set up the size of my mountpoint... !? |
14:06 |
partner |
its the size of the disk/partition you mount there |
14:06 |
skippy |
how did you define your brick(s), Fen1 ? |
14:06 |
Fen1 |
mount -t glusterfs 10.0.176.10:/test-volume /gluster-storage |
14:07 |
partner |
no, the bricks, i assume "part1" and 2 |
14:08 |
Fen1 |
gluster volume create test-volume @IP:/part1 @IP:/part2 |
14:08 |
partner |
yes but how exactly did you create the underlying bricks? mkdir /something/part1 ? |
14:09 |
Fen1 |
nop |
14:09 |
partner |
i'm assuming you just created some dirs somewhere and they consume the disk of perhaps root |
14:09 |
Fen1 |
part1 and 2 didn't exist before i write this command |
14:09 |
partner |
what exactly instructions you followed to build up your test volume? |
14:09 |
|
mojibake joined #gluster |
14:10 |
partner |
interesting, you created a volume and defined bricks that do not exist and yet it works? what version of glusterfs? |
14:10 |
|
plarsen joined #gluster |
14:10 |
Fen1 |
-peer probe |
14:10 |
Fen1 |
-volume create |
14:10 |
Fen1 |
-volume start |
14:10 |
Fen1 |
-mount -t glusterfs |
14:11 |
Fen1 |
. |
14:11 |
Fen1 |
3.5.2 |
14:12 |
partner |
hmm gluster should warn about creating brick to the root and asks you to apply "force" if you want to continue |
14:12 |
partner |
did you do that? |
14:12 |
Fen1 |
yep i force |
14:12 |
partner |
alright, then its on your root disk ie. / |
14:12 |
Fen1 |
because there is no other user |
14:13 |
partner |
not sure what instructions you followed if any but its an important step to prepare the _bricks_ for the volume |
14:13 |
partner |
now basically your volume size is the sum of your two servers root drive |
14:13 |
Fen1 |
ok... and what are instructions ? |
14:14 |
partner |
not sure what are the best ones nowadays but quickly googling: http://www.gluster.org/community/documentation/index.php/QuickStart |
14:15 |
partner |
i often refer to this nice doc from RH as they have bunch of nice pictures and what not: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Setting_Volumes.html |
14:16 |
Fen1 |
ok thx :) |
14:16 |
partner |
but not sure how community likes it.. :o |
14:16 |
Fen1 |
i was following this : http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf |
14:16 |
partner |
i'll have a look |
14:17 |
partner |
hmm quickly viewing those instructions seem to completely skip the brick part.. |
14:18 |
|
ivok joined #gluster |
14:18 |
partner |
and are for ancient version, too |
14:19 |
|
rgustafs joined #gluster |
14:19 |
|
jobewan joined #gluster |
14:19 |
partner |
yeah, there are pretty much exactly two sentences remotely touching what brick is, no talk of filesystems or much anything else |
14:20 |
partner |
please rather see the links i provided and forget that old guide |
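(A sketch of the brick-preparation step partner says the old guide skips, using the volume name and IPs Fen1 already posted and assuming each server has a real filesystem mounted at /bricks/brick1; everything else is a placeholder.)

    # on each server: keep the brick in a subdirectory of the mounted filesystem
    mkdir -p /bricks/brick1/data
    # from one server:
    gluster peer probe 10.0.176.11
    gluster volume create test-volume 10.0.176.10:/bricks/brick1/data 10.0.176.11:/bricks/brick1/data
    gluster volume start test-volume
    # on the client:
    mount -t glusterfs 10.0.176.10:/test-volume /gluster-storage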
14:20 |
|
theron joined #gluster |
14:22 |
chucky_z |
ah did anyone see my earlier question? some people seem alive now |
14:22 |
chucky_z |
:) |
14:24 |
|
kdhananjay joined #gluster |
14:25 |
|
keytab joined #gluster |
14:26 |
ivok |
hi guys! i am preparing to deploy gluster to production enviroment. all tests went fine. I found one article, that references multiple articles, with people complaining on gluster performance and reliability. any comments? here is the link: http://www.softwareprojects.com/resources/programming/t-6-months-with-glusterfs-a-distributed-file-system-2057.html |
14:26 |
|
wushudoin joined #gluster |
14:27 |
johndescs |
hum "quota context not set in inode" in quota_rename when doing an "mv" of a file put in gluster via samba vfs (libgfapi), does it tell something to one of you ? :P |
14:28 |
partner |
chucky_z: hard to say with the given info except you probably try to add something to a volume that is already part of a volume |
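(A hedged sketch for chucky_z's error: either the client is being mounted on top of the brick directory itself, or /shared is a stale brick left over from the old install. The xattr cleanup is destructive and only appropriate when no volume should be using the directory any more.)

    # mount the client somewhere other than the brick directory:
    mkdir -p /mnt/shared && mount -t glusterfs localhost:/myvol /mnt/shared
    # or, if /shared is only a leftover brick, clear its gluster metadata:
    setfattr -x trusted.glusterfs.volume-id /shared
    setfattr -x trusted.gfid /shared
    rm -rf /shared/.glusterfs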
14:28 |
skippy |
ivok: that post is 2 years old. i'd assume things have improved since then. but i can't speak to any specifics, as I'm new to Gluster. |
14:29 |
|
fubada joined #gluster |
14:29 |
fubada |
hi purpleidea, thank you for your reply re puppet-gluster |
14:30 |
ivok |
skippy yes, I noticed its a bit outdated, but I am not sure how much was improved in the meantime |
14:30 |
|
nbalachandran joined #gluster |
14:30 |
ivok |
my setup is with many files |
14:30 |
fubada |
purpleidea: im not using puppetdb as you pointed out, is this an issue? I have my ca and master separated |
14:31 |
ivok |
~1.5M files |
14:31 |
fubada |
purpleidea: i also run puppet from RHEL scl ruby 1.9 |
14:31 |
fubada |
using the gem |
14:37 |
|
sprachgenerator joined #gluster |
14:43 |
|
lmickh joined #gluster |
14:47 |
skippy |
ivok: what's your data access like? Are you doing a lot of reads / writes? Lots of writes with few reads? |
14:48 |
ivok |
skippy, actually, lot of reads, few writes |
14:48 |
ivok |
files are being generated on backend, and then served through multiple channels |
14:48 |
ivok |
and many requests for the same file are to be expected |
14:49 |
skippy |
i've seen mention that NFS provides (slight?) performance improvements in that scenario. but haven't tested it myself. |
14:49 |
partner |
while couple of years old i still like this explaining the issue (at least partly): http://joejulian.name/blog/nfs-mount-for-glusterfs-gives-better-read-performance-for-small-files/ |
14:49 |
partner |
nfs brings the caching |
14:50 |
skippy |
does the kernel not cache anything when using the FUSE client? |
14:50 |
partner |
but downside is you will lose quite a few of the nicest features |
14:53 |
|
fubada joined #gluster |
14:53 |
fubada |
purpleidea: I replied to your email with some logs from my puppet master :) |
14:58 |
|
_dist joined #gluster |
15:00 |
|
justinmburrous joined #gluster |
15:05 |
|
rbennacer left #gluster |
15:11 |
user_44 |
Hi All. I can't find any detailed information about when to use geo-replication over replication except that geo-replication is used "for geographically distributed clusters". What does this exactly mean? Are two servers in one data center but not on the same rack already geographically distributed? What about two servers in two data centers in the same city? |
15:11 |
skippy |
I'm rsyncing 1.1GB of data from an NFS mount (NAS appliance) into a Gluster FUSE mount. The Gluster volume is 4GB. I passed '--inplace' to rsync. Occasionally, the command spits out "rsync: open "<file>" failed: No space left on device (28) |
15:11 |
skippy |
but then the rsync keeps going. |
15:11 |
skippy |
why would Gluster report that the volume is out of space? |
15:13 |
skippy |
user_44: geo-replication is asynchronous. Normal Gluster replication is synchronous (as performed by the client writing to all back-end bricks simultaneously) |
15:15 |
user_44 |
skippy: and normal gluster provides high-availability while geo-replication ensures backing up of data for disaster recovery... |
15:15 |
|
daMaestro joined #gluster |
15:15 |
user_44 |
skippy: but how to decide when I am still able to use normal gluster? |
15:15 |
skippy |
huh. The Gluster volume reports 3.4GB, while the NAS NFS volume claims it's 1.1 GB. rsync claims it sent "sent 3572907090 bytes". Something's fishy. |
15:16 |
|
sputnik13 joined #gluster |
15:17 |
|
n-st joined #gluster |
15:22 |
|
sputnik13 joined #gluster |
15:30 |
kkeithley_ |
whether you use "regular" gluster replication or geo-rep depends on the latency to the "remote" cluster. |
15:31 |
kkeithley_ |
If you've got fibre to the remote DC in the same city, you might be able to use regular replication. |
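(A sketch of the two options being contrasted; volume and host names are placeholders, and the geo-replication commands use the 3.5-era distributed-geo-rep syntax.)

    # synchronous replication across two bricks (needs LAN-like latency):
    gluster volume create vol1 replica 2 serverA:/bricks/b1/data serverB:/bricks/b1/data
    # asynchronous, one-way geo-replication to a remote volume (tolerates higher latency):
    gluster volume geo-replication vol1 remotehost::remotevol create push-pem
    gluster volume geo-replication vol1 remotehost::remotevol start
    gluster volume geo-replication vol1 remotehost::remotevol status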
15:34 |
|
sputnik13 joined #gluster |
15:36 |
Fen1 |
Hi again :) Just one question : I have mounted a volume (created with 2 bricks on 2 different VMs), i have filled it with a huge file so it's now empty, but when i look at "volume status detail", i see just 1 brick full and the other empty !? what's the problem ? |
15:36 |
Fen1 |
*it's now full |
15:36 |
skippy |
Fen1: did you build a replicated volume? |
15:36 |
Fen1 |
nop |
15:37 |
skippy |
the default is to distribute, not replicate. |
15:37 |
skippy |
Fen1: https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_setting_volumes.md |
15:37 |
Fen1 |
so i can't pool the storage of 2 different VMs ? |
15:38 |
skippy |
do you want the same file to live on both VMs? That's a replicated setup. |
15:40 |
Fen1 |
No, just have mount the storage of 2 VM on a third (VM1+VM2=>on VM3) |
15:40 |
skippy |
yes, you can do that. |
15:42 |
Fen1 |
So it's what i did, but when i fill the storage mounted on VM3, it say that just 1 brick (VM1) is full, the other (VM2) is empty |
15:42 |
|
sputnik13 joined #gluster |
15:43 |
Fen1 |
so the storage of my VM2 is useless :( |
15:43 |
|
rwheeler joined #gluster |
15:44 |
skippy |
Gluster should decide on which brick to put the files. If you put one big file, Gluster chose where to put that. If you add another file, I'd expect that to end up on the other brick. |
15:44 |
skippy |
If you're expecting GLuster to act like a RAID and split files across bricks, you need to explicitly set up a striped volume, and understand the implications of that. |
15:44 |
skippy |
http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ |
15:45 |
kkeithley_ |
Fen1: if you want big files to be, for lack of a better term maybe, sharded across two volumes, you need to use a 'stripe' volume |
15:45 |
Fen1 |
I don't care which brick Gluster uses, i just want it to use both xD |
15:47 |
kkeithley_ |
a 'replica' volume will write a copy of your file to each of the N bricks. As skippy already told you, a 'distribute' volume will write some files on one brick, other files on the other brick. |
15:47 |
semiosis |
Fen1: you need to write data through a client mount point. once you create a volume you should not write data into the brick directories |
15:48 |
kkeithley_ |
and 'distribute' is what you get by default if you didn't explicitly say stripe or replica |
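(Create-time examples of the three layouts being described; server and brick paths are placeholders.)

    # distribute (the default): each whole file lives on exactly one brick
    gluster volume create distvol server1:/bricks/b1/data server2:/bricks/b1/data
    # replica: every file is written to both bricks
    gluster volume create repvol replica 2 server1:/bricks/b2/data server2:/bricks/b2/data
    # stripe: chunks of each file spread across bricks (rarely what you want)
    gluster volume create stripevol stripe 2 server1:/bricks/b3/data server2:/bricks/b3/data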
15:48 |
Fen1 |
Yeah i understand that, but when a brick is full, shouldn't Gluster write to the other one ? But it didn't work :/ |
15:49 |
semiosis |
that's not how it works |
15:49 |
Fen1 |
Yep but i just want distribute |
15:49 |
semiosis |
gluster places files on bricks by hashing the filename |
15:49 |
semiosis |
it doesnt place by bytes free |
15:49 |
kkeithley_ |
write another file, it'll probably land on the other brick (vm) |
15:49 |
semiosis |
and stripe is almost certainly not what you want |
15:49 |
Fen1 |
semiosis: so why doesn't it want to write to the second brick ? |
15:50 |
semiosis |
gluster uses a hash algorithm to distribute files over bricks according to the hash of the filename |
15:51 |
Fen1 |
when i mount the storage, do i write both server ? |
15:52 |
Fen1 |
i wrote this : mount -t glusterfs "@IP":/"volume-name" /"directory" |
15:53 |
Fen1 |
but the @IP is just for 1 VM (and 1 brick too) |
15:55 |
skippy |
the Gluster FUSE client will talk to all bricks in the pool for that volume |
15:55 |
Fen1 |
ok it's what i expect |
15:55 |
skippy |
when you mount, you point it to one server, but once it's mounted, it'll talk to all servers with the bricks you need |
15:56 |
Fen1 |
so i don't understand why my second brick is useless... |
15:56 |
Fen1 |
For the moment it's like my volume = 1 brick |
15:56 |
skippy |
because as has been pointed out, Gluster uses a hashing algorithm to decide where to store files. |
15:56 |
skippy |
Have you tried to write additional files to the volume? |
15:56 |
Fen1 |
yep |
15:57 |
Fen1 |
it said storage full |
15:57 |
skippy |
if the hash determines that it should use brick1, and brick1 is full, then it obviously won't work. |
15:57 |
skippy |
The details of the hashing algorithm are unknown to me. |
15:58 |
Fen1 |
well it's really weird, maybe i did something wrong |
15:58 |
Fen1 |
i'll try again tomorrow |
15:59 |
Fen1 |
thx for the help all ;) |
15:59 |
Fen1 |
have a nice day/night |
15:59 |
|
nshaikh joined #gluster |
15:59 |
skippy |
I'm having my own problems today! An `rsync --inplace` of a 1.1GB NFS share to a Gluster volume resulted in that Gluster volume being 3.4GB. I've no idea why. |
15:59 |
skippy |
any ideas, semiosis ? |
16:05 |
Slydder1 |
hey all |
16:06 |
Slydder1 |
have a strange situation. if I upload a bunch of files to a gluster volume (no replication just a vol with a single node) it's nice and fast. but if I update the files afterwards with rsync then it hangs and takes forever to finish. has anyone ever had this problem? |
16:07 |
Slydder1 |
and, of course, all file accesses are also very slow. |
16:08 |
skippy |
I'm also getting spurious "No space left on device" messages in my gluster logs, but that's demonstrably false. |
16:11 |
Slydder1 |
strange. on the empty volume I get about 30Mbit for the 20GB of small files transferred with rsync. the update takes almost as long and doesn't even break 300 kbit |
16:12 |
semiosis |
skippy: how are you making that space determination? |
16:13 |
semiosis |
skippy: re no space left, are you out of inodes? quite possible if your bricks are ext4 |
16:13 |
skippy |
semiosis: just doing a plain `df`. bricks are XFS |
16:13 |
semiosis |
well that's just weird then |
16:13 |
skippy |
mkfs.xfs -i size=512 .... |
16:14 |
semiosis |
Slydder1: in general it's good to use --inplace when rsyncing into gluster, and you might get better performance with --whole-file as well, depending on your use case |
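(What semiosis's advice looks like in practice; --inplace avoids rsync's write-to-temp-file-then-rename pattern, which is expensive on distributed volumes, and --whole-file skips the delta algorithm's extra reads. Paths are placeholders.)

    rsync -av --inplace --whole-file /data/source/ /mnt/glustervol/dest/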
16:41 |
|
JoseBravo joined #gluster |
16:42 |
|
hchiramm joined #gluster |
16:43 |
|
haomaiwa_ joined #gluster |
16:44 |
|
Llevar joined #gluster |
16:44 |
Llevar |
Hello, might anyone be able to help me with a gluster issue? |
16:44 |
|
PeterA joined #gluster |
16:45 |
purpleidea |
fubada: and i replied too. your puppetmaster is setup incorrectly |
16:45 |
|
an joined #gluster |
16:46 |
purpleidea |
kkeithley_: i think ndevos script went awry |
16:46 |
ndevos |
purpleidea: any details? |
16:47 |
purpleidea |
ndevos: yeah, your script is unassigning me from puppet-gluster bugs :P |
16:47 |
purpleidea |
not that puppet-gluster has any bugs of course ;) |
16:48 |
ndevos |
purpleidea: only if they were in the NEW state, if you should be assigned, the bug should have been in ASSIGNED state |
16:48 |
purpleidea |
ndevos: and then you assigned poor kkeithley_ :P |
16:48 |
purpleidea |
ndevos: nope |
16:48 |
purpleidea |
What |Removed |Added |
16:48 |
purpleidea |
---------------------------------------------------------------------------- |
16:48 |
purpleidea |
Assignee|jshubin redhat.com |gluster-bugs redhat.com |
16:48 |
ndevos |
purpleidea: kkeithley_ is actually looking into using pacemaker for nfs-ganesha + glusterfs, so he's a good fit for the NEEDINFO :D |
16:49 |
ndevos |
purpleidea: check the headers of the email, the status of the bug probably was NEW |
16:49 |
purpleidea |
ndevos: ah cool :) did not know about pacemaker, glad if he's working on it ;) |
16:49 |
Llevar |
I have a 4 brick volume and one of the bricks filled up while others have lots of space. I tried to rebalance but no data was migrated off the full brick. How can I balance the data? Thanks. |
16:49 |
ndevos |
purpleidea: and, you should catch up on email: http://supercolony.gluster.org/pipermail/gluster-devel/2014-September/042405.html ;-) |
16:50 |
purpleidea |
ndevos: RE: your script. regardless of status, it removed me from getting emails about it, when i was on it. |
16:50 |
purpleidea |
ndevos: yeah, i saw that mail :) |
16:50 |
purpleidea |
ndevos: can you fpaste the script somewhere? |
16:50 |
ndevos |
purpleidea: you're not kept on CC? |
16:50 |
purpleidea |
ndevos: correct, it removed me. |
16:51 |
ndevos |
hmm, thats strange... |
16:51 |
purpleidea |
ndevos: not a big deal because i'm not a2 or hagarth, but it would probably piss me off if it was 100 bugs or something |
16:51 |
purpleidea |
12:54 < purpleidea> ndevos: can you fpaste the script somewhere? |
16:51 |
Slydder1 |
semiosis: both inplace and whole-file had no effect. |
16:52 |
ndevos |
purpleidea: the script was like this: http://paste.fedoraproject.org/137849/95906141/raw/ |
16:52 |
|
sputnik13 joined #gluster |
16:53 |
ndevos |
purpleidea: anyone can follow the bugs through the bugs gluster.org list, and I'm pretty sure hagarth and avati receive more than enough emails ;) |
16:53 |
Slydder1 |
semiosis: wait. I did get up to 900 kbit that time. |
16:54 |
ndevos |
purpleidea: the bugzilla command comes from the python-bugzilla package, you'll need to do a 'bugzilla login' before you can modify bugs |
16:54 |
purpleidea |
ndevos: i've used the tool ;) |
16:54 |
ndevos |
purpleidea: ah, ok :) |
16:54 |
purpleidea |
ndevos: anyways, just wanted to report that your script is incorrectly removing me from getting emails and i had to add them back manually. please fix that issue or i'll have to switch to a different bug too |
16:55 |
purpleidea |
l |
16:56 |
ndevos |
purpleidea: the advised way to get bugzilla emails for all bugs for a component is http://www.gluster.org/community/documentation/index.php/Bugzilla_Notifications |
16:56 |
ndevos |
purpleidea: and, it's a one-off change, not sure if you want me to fix things? |
16:57 |
purpleidea |
ndevos: lol, no thanks! i don't want a special snowflake way to get bugs for gluster! |
16:57 |
ndevos |
purpleidea: thats how any community user gets the notifications, we try not to make it different for RH people :) |
16:58 |
purpleidea |
ndevos: ? bugzilla lets you cc yourself on any bug directly. this works just fine |
16:59 |
ndevos |
purpleidea: sure, but that does not really scale well for new bugs |
17:00 |
purpleidea |
ndevos: if you want to implement a new process that's fine, but don't break existing things for everyone else please :) |
17:01 |
|
theron joined #gluster |
17:03 |
ndevos |
purpleidea: it is not a 'new' process, it is the only process for all gluster components... but maybe puppet-gluster is an exception there |
17:03 |
ndevos |
well, 'website' and 'project-infrastructure' also are a little different, I suppose |
17:04 |
purpleidea |
ndevos: i think you should just acknowledge the bug, try to fix it, potentially tell me to take puppet-gluster to a different issue tracker if it causes you trouble, and then we can go have a $beer |
17:04 |
ndevos |
purpleidea: anyway, if you need me to do something, send it by email and I'll look into it tomorrow morning |
17:05 |
ndevos |
purpleidea: oh, I dont mind puppet-gluster to be grouped in the GlusterFS 'product', we probably just need to document some exceptional components - or something like that |
17:06 |
purpleidea |
ndevos: ;) exceptionally exceptional! |
17:06 |
ndevos |
purpleidea: but well, dinner is calling... ttyl! |
17:06 |
purpleidea |
later!@ |
17:06 |
* purpleidea |
things ndevos must be into raw food if his dinner is calling |
17:06 |
purpleidea |
thinks* |
17:09 |
|
nshaikh left #gluster |
17:13 |
|
JoseBravo joined #gluster |
17:14 |
JoseBravo |
I need to do a master-master replication in two different geo locations. gluster geo-replication is just a replica in one direction, right? is there any other way to do that? |
17:16 |
|
zerick joined #gluster |
17:17 |
purpleidea |
JoseBravo: geo is asynchronous, unidirectional |
17:18 |
purpleidea |
JoseBravo: you might want to read: https://en.wikipedia.org/wiki/CAP_theorem |
17:26 |
|
tryggvil joined #gluster |
17:26 |
|
gothos left #gluster |
17:32 |
Llevar |
I have a 4 brick volume and one of the bricks filled up while others have lots of space. I tried to rebalance but no data was migrated off the full brick. How can I balance the data? Thanks. |
17:32 |
|
sputnik1_ joined #gluster |
17:32 |
|
PeterA joined #gluster |
17:35 |
|
sputnik13 joined #gluster |
17:36 |
|
MeMIk joined #gluster |
17:37 |
MeMIk |
Hi guys! I'm having problems with the new geo-replication. Can someone help me ? |
17:39 |
|
tryggvil joined #gluster |
17:40 |
|
Slydder joined #gluster |
17:42 |
|
theron joined #gluster |
17:42 |
|
elico joined #gluster |
17:51 |
|
cmtime joined #gluster |
17:58 |
semiosis |
MeMik: please describe the problem |
18:02 |
MeMik |
thanks semiosis, i've been able to create and start the replication successfully but i always get this error when i tail the log: I [monitor(monitor):150:monitor] Monitor: worker(/bricks/brick001) died before establishing connection |
18:04 |
|
ivok joined #gluster |
18:05 |
MeMik |
the /bricks/brick001 is from the local Master gluster server, so this means the daemon crashes trying to view local storage |
18:08 |
JoseBravo |
Gluster is designed to work in low latency networks, how low is low latency? |
18:11 |
|
htrmeira joined #gluster |
18:13 |
MeMik |
semiosis , another information, when i run the gsyncd.py in debug mode i get [2014-09-30 14:11:00.212687] I [monitor(monitor):130:monitor] Monitor: starting gsyncd worker |
18:13 |
MeMik |
failure: malformed path |
18:13 |
semiosis |
JoseBravo: a LAN |
18:14 |
semiosis |
MeMik: what distro? what version of glusterfs? |
18:14 |
MeMik |
centos7 latest stable 3.5.2 based on epel repot |
18:14 |
MeMik |
glusterfs 3.5.2 built on Jul 31 2014 18:41:18 |
18:16 |
partner |
darn, i thought my worries with rebalance memory leak was over after upgrade but it turns out i ran into a new bug causing the same problems |
18:18 |
Llevar |
I have a 4 brick volume and one of the bricks filled up while others have lots of space. I tried to rebalance but no data was migrated off the full brick. How can I balance the data? Thanks. |
18:18 |
partner |
Llevar: hmm i wonder if you could set the cluster.min-free-disk option to redirect writes to the remaining bricks, works for me at least |
18:19 |
Llevar |
partner: Thanks, the problem is I'm mid process in a large distributed workflow that is trying to pull hundreds of gigs of data from a repository via torrent |
18:20 |
Llevar |
gluster placed my file on this brick and it filled up |
18:20 |
Llevar |
I'd like to move other files off that brick |
18:20 |
|
ekuric joined #gluster |
18:20 |
Llevar |
so my download can complete |
18:20 |
partner |
i have probably 80% of my bricks full (as per under the defined min free space) and they never fill up all the way |
18:21 |
Llevar |
This is glusterfs 3.5.2 on Ubuntu 12.04 btw |
18:21 |
Llevar |
right, I think it's a good suggestion for my other clusters |
18:22 |
Llevar |
that aren't in the full state yet |
18:22 |
partner |
its for the volume, not per brick |
18:22 |
Llevar |
ah, ok |
18:22 |
partner |
not sure though if rebalance would be aware of it and migrate anything off.. |
18:22 |
|
giannello joined #gluster |
18:22 |
partner |
but it might help you proceed with the remaining downloads, maybe even delete something existing from the full brick |
18:23 |
partner |
i have not tested that on 3.5 though, worked on 3.3.2 and i still hope it keeps working on 3.4.5 or i'm screwed |
18:23 |
|
ivok joined #gluster |
18:25 |
Llevar |
So am I right in understanding that this is a capacity threshold at which gluster will start scheduling files to be written to bricks with lower occupancy? |
18:26 |
partner |
i'd say it's a hint "i don't want more than this on my bricks" if and when all the hashes would happen to match one particular brick |
18:27 |
partner |
what will happen is that it will write it then elsewhere and place a sticky pointer to the original planned (as per hash) location |
18:27 |
Llevar |
Well that will likely help some, although I'm wondering if there is a way for a manual rebalance still |
18:27 |
Llevar |
I saw some discussion on the mailing list that describes a manual fix layout |
18:27 |
partner |
i would suggest using bytes value for the limit at least based on previous guidance, rather than percentage or Mb/Gb |
18:29 |
Llevar |
like here - http://gluster.org/pipermail/gluster-users.old/2013-January/012289.html |
18:29 |
partner |
hmm that sounds like playing with attributes and what not ie. quite low-level activity on brick, doing the stuff its doing for you |
18:30 |
partner |
hmm IMO that is completely different thing |
18:32 |
Llevar |
well, that's kind of what I'm trying to do, my one brick has like 50Kb space left and the others have hundreds of gigs |
18:32 |
partner |
but i'm no expert there, just thought it would help to share how i've worked around this issue, ie. setting the limit so gluster stops writing to a brick once less than the defined amount of space is available, after which it writes the stuff elsewhere |
18:32 |
Llevar |
thanks, that definitely helps for the futute |
18:32 |
partner |
why wouldn't you then delete one file to make room, set the limit and try it out? |
18:32 |
Llevar |
gluster is backing a distributed workflow system with many jobs in-flight |
18:33 |
Llevar |
deleting a file = killing some job in an unknown state |
18:33 |
partner |
set the limit and try to rebalance? at least new files should start flowing to empty bricks |
18:33 |
Llevar |
might be ok otherwise but these are workflows that take hundreds of hours of compute on a multi-node compute cluster |
18:34 |
Llevar |
yep, doing that now |
18:34 |
partner |
yeah, don't want to mess with such |
18:37 |
partner |
ie. gluster volume set <volname> cluster.min-free-disk num_of_bytes |
18:37 |
Llevar |
thanks, the documentation seems to indicate only a percentage value |
18:41 |
partner |
i was specifically instructed not to use a percentage as it had a bug. i don't know if it's fixed or not.. this was the case i recall: https://bugzilla.redhat.com/show_bug.cgi?id=889334 |
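(The byte-based form partner recommends, with a placeholder volume name; 10737418240 bytes is 10 GiB.)

    gluster volume set test-volume cluster.min-free-disk 10737418240
    gluster volume info test-volume    # the value shows up under "Options Reconfigured"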
18:41 |
Llevar |
Good to know |
18:41 |
Llevar |
Good to know, thanks |
18:42 |
partner |
np |
18:42 |
partner |
cannot confirm how its on 3.5.2 as i currently don't have such environment |
18:46 |
Llevar |
This dug up an interesting tidbit about ZFS becoming very slow after getting to about 85% capacity |
18:47 |
fubada |
purpleidea: hi, i got a bit further and replied to your email with a new issue :) |
18:47 |
|
daMaestro joined #gluster |
18:50 |
|
buckaroobanzai joined #gluster |
18:52 |
Llevar |
it looks like rebalance is blind to the setting |
18:53 |
|
diegows joined #gluster |
18:56 |
Llevar |
Still hoping someone knows how to manually move data onto another brick |
18:58 |
purpleidea |
fubada: ack |
19:07 |
semiosis |
Llevar: you're not really supposed to do that. it might be possible, if you stop the volume, to move the file to another brick (be careful to also remove & recreate the hard link in .glusterfs) then maybe glusterfs will recognize the file is on the other brick |
19:07 |
semiosis |
never tried this |
19:07 |
semiosis |
it's really not a supported/recommended procedure |
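(A very rough sketch of the unsupported procedure semiosis outlines, with placeholder paths and gfid; each regular file on a brick has a hard link under .glusterfs/<aa>/<bb>/<gfid>, where aa and bb are the first two byte-pairs of the gfid. Try it on scratch data first.)

    gluster volume stop myvol
    # on the full brick: note the file's gfid
    getfattr -n trusted.gfid -e hex /bricks/full/data/dir/bigfile
    # copy the file, preserving xattrs, to the same relative path on the target brick
    rsync -aX /bricks/full/data/dir/bigfile otherserver:/bricks/empty/data/dir/
    # on the target brick: recreate the .glusterfs hard link for that gfid
    ln /bricks/empty/data/dir/bigfile /bricks/empty/data/.glusterfs/aa/bb/<gfid>
    # on the full brick: remove the file and its old .glusterfs hard link
    rm /bricks/full/data/dir/bigfile /bricks/full/data/.glusterfs/aa/bb/<gfid>
    gluster volume start myvol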
19:08 |
skippy |
how do I diagnose and resolve failed heals on a replica volume? |
19:08 |
semiosis |
skippy: failed heals? |
19:08 |
semiosis |
skippy: log files... the client log, the shd log |
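(The usual places to look for the failed heals skippy mentions; the volume name "forms" is taken from his log snippets.)

    gluster volume heal forms info               # files pending heal
    gluster volume heal forms info split-brain   # files the shd cannot reconcile
    less /var/log/glusterfs/glustershd.log       # self-heal daemon log on each server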
19:09 |
skippy |
http://fpaste.org/137902/10413114/ |
19:09 |
semiosis |
http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/ |
19:09 |
JoeJulian |
(apparently not working consistently with 3.5...) |
19:09 |
Llevar |
@semiosis: Is there another course of action you can recommend when a brick fill up? |
19:09 |
skippy |
this is not split brain. this is a replica. both servers report connected, and have not reported disconnects |
19:09 |
semiosis |
JoeJulian: where's glusterbot?!?! |
19:09 |
MeMik |
semiosis did you already seen the problem i reported with the geo-replication? |
19:09 |
JoeJulian |
He's resting. |
19:09 |
|
ghenry joined #gluster |
19:09 |
JoeJulian |
Hasn't had a vacation in years... |
19:09 |
semiosis |
JoeJulian: well i guess you better go wake him up then |
19:10 |
semiosis |
ehhh |
19:10 |
JoeJulian |
Actually, I need to figure out why my script is spawning multiple copies. Should be able to take a look later today. |
19:10 |
skippy |
arrgh. Again I'm seeing "[2014-09-30 19:09:34.259608] W [client-rpc-fops.c:256:client3_3_mknod_cbk] 0-forms-client-0: remote operation failed: No space left on device." |
19:10 |
skippy |
but the remote is not out of space. |
19:11 |
semiosis |
skippy: remote operation failed.... means something on a brick. go to the brick log |
19:11 |
fubada |
purpleidea: hi |
19:11 |
kkeithley_ |
sleep well glusterbot. I'll most likely kill you in the morning |
19:11 |
semiosis |
Llevar: don't run out of space |
19:11 |
semiosis |
Llevar: that's my advice |
19:11 |
fubada |
purpleidea: thanks for your help! |
19:11 |
|
giannello joined #gluster |
19:12 |
skippy |
2014-09-30 19:09:34.259533] E [posix.c:1133:posix_mknod] 0-forms-posix: mknod on /bricks/forms1/brick/pdf/blah.pdf failed: No space left on device |
19:12 |
semiosis |
Llevar: kinda too late now, but you can use LVM to grow a brick to keep ahead of filling up |
19:12 |
skippy |
/dev/mapper/bricks-forms1 4.0G 2.2G 1.9G 54% /bricks/forms1 |
19:12 |
semiosis |
skippy: inodes? |
19:12 |
skippy |
/dev/mapper/bricks-forms1 on /bricks/forms1 type xfs (rw,relatime,attr2,inode64,noquota) |
19:12 |
semiosis |
skippy: df -i |
19:12 |
Llevar |
@semiosis: My volume is only 60% full, managing bricks should be gluster's job, it's not my fault it filled up one while keeping others empty, they were all created at the same time |
19:12 |
skippy |
/dev/mapper/bricks-forms1 2097152 125312 1971840 6% /bricks/forms1 |
19:13 |
semiosis |
Llevar: that kind of thinking is what got you into this mess in the first place. time to start taking responsibility |
19:13 |
|
tryggvil joined #gluster |
19:13 |
|
nshaikh joined #gluster |
19:14 |
Llevar |
@semiosis: :) |
19:15 |
purpleidea |
fubada: yw |
19:15 |
skippy |
so ... why would Gluster think that the backing filesystem of a brick is out of space? This is rather frustrating. |
19:15 |
fubada |
purpleidea: i replied with the tree commands |
19:16 |
JoeJulian |
skippy: df -i maybe? |
19:16 |
fubada |
purpleidea: i no longer see the storedconfig errors on master, so puppetdb is fixed |
19:16 |
skippy |
JoeJulian: /dev/mapper/bricks-forms1 2097152 125312 1971840 6% /bricks/forms1 |
19:16 |
JoeJulian |
Ah, right... those are different numbers. I wasn't paying attention. |
19:17 |
skippy |
i've tried stopping and restarting the volume. that didn't seem to remedy anything earlier. |
19:17 |
skippy |
but I can certainly try again. |
19:18 |
skippy |
as soon as it comes back up, the brick reports no space left on device |
19:19 |
semiosis |
selinux? |
19:19 |
Arrfab |
skippy: have you "grown" the xfs volume used as a brick ? I had that strange issue once .. or deleted files but xfs will still thinking they were there |
19:20 |
skippy |
Arrfab: I *did* grow the LV on which this brick is built. |
19:20 |
skippy |
selinux is off. |
19:20 |
skippy |
i stopped the volume, unmounted the brick FS, remounted, and started volume. |
19:20 |
Arrfab |
skippy: ok, so that's the same issue I had too .. let me find the article explaining how xfs is stupid and should be avoided |
19:21 |
skippy |
no immediate complaints in the brick log |
19:21 |
Arrfab |
skippy: issue not at the gluster level, but xfs |
19:21 |
skippy |
oh great! Gluster touts XFS, but XFS should be avoided. Hooray! |
19:21 |
skippy |
ok |
19:22 |
Arrfab |
skippy: I dont remember which article I found (back in the time) but if you google for "xfs growfs space left on device" you'll find a loooooot of people having had the issue |
19:23 |
JoeJulian |
Hey there Fabian. Nice having you here. |
19:23 |
Arrfab |
JoeJulian: hey |
19:23 |
skippy |
thanks Arrfab. This is good to know. The volume is fully healed now. |
19:25 |
Arrfab |
skippy: I don't feel as comfortable with xfs/growfs as with resize2fs/ext4 :-) |
19:25 |
Arrfab |
the only nodes using XFS for centos are the bricks in our gluster setup, but not feeling very confident (and perf are awful but had no time to debug why) |
19:26 |
skippy |
what, then, is the Gluster preference for XFS? |
19:26 |
Arrfab |
skippy: very laaaaaaaarge filesystems, so scalability |
19:27 |
skippy |
would inode64 on the brick FS mount options help alleviate this problem? http://xfs.org/index.php/XFS_FAQ#Q:_Why_do_I_receive_No_space_left_on_device_after_xfs_growfs.3F |
19:27 |
JoeJulian |
... more streamlined codebase. |
19:28 |
semiosis |
shouldn't need 64bit inodes for a 4G disk |
19:28 |
skippy |
i shouldnt think so; but if that avoids the growfs problem, that would be good to know. |
19:29 |
JoeJulian |
4G? |
19:29 |
semiosis |
anecdotal evidence isn't worth much, but in any case, i've been using xfs in prod with glusterfs for a while now, grown it many times, never had a problem |
19:29 |
* Arrfab |
smacks himself for not having bookmarked the useful doc |
19:29 |
semiosis |
JoeJulian: [15:12] <skippy> /dev/mapper/bricks-forms1 4.0G 2.2G 1.9G 54% /bricks/forms1 |
19:30 |
JoeJulian |
wierd |
19:30 |
skippy |
i grew the volume from 2GB to 4GB. |
19:30 |
skippy |
lvresize -L+4GB -r /dev/mapper/bricks-forms1 |
19:30 |
JoeJulian |
the ENOSPC thing all seems to be about growing from <= 1Tb to > 1Tb. |
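(A diagnostic sketch for the grown-XFS ENOSPC discussion, using the device and mount point skippy pasted; the inode64 step only matters for the grown-past-1TB case from the XFS FAQ and is a possibility, not a confirmed fix here.)

    df -h /bricks/forms1
    df -i /bricks/forms1            # rule out inode exhaustion
    xfs_info /bricks/forms1         # geometry after the grow
    # if the filesystem was grown past 1TB with 32-bit inodes:
    umount /bricks/forms1
    mount -o inode64 /dev/mapper/bricks-forms1 /bricks/forms1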
19:31 |
partner |
i guess i've been lucky then, 30+ bricks on top of lvm and xfs and each single one expanded at least once, no issues of any kind :o |
19:31 |
partner |
i wonder should i get scared now.. |
19:31 |
skippy |
thanks for the tip, Arrfab. This is definitely something on which to keep notes! |
19:31 |
semiosis |
if this were a common problem we'd see it in here more often. people are growing their xfs bricks all the time |
19:32 |
Arrfab |
skippy: I'm now searching for the link, as it was really explaining in details the issue involving metadata |
19:32 |
partner |
might be something to bump into on various testings, good to know for the future |
19:32 |
skippy |
in the event that it matters, I'm using RHEL7 for the Gluster servers. RHEL6 for the clients currently |
19:32 |
JoeJulian |
I have some that started out small but have gotten bigger and bigger with each CentOS release. |
19:32 |
JoeJulian |
(my in-house mirror) |
19:33 |
Arrfab |
also, as I have several gluster experts in the channel, can someone give me pointers about how to optimize/tune bricks/gluster ? |
19:33 |
JoeJulian |
Typically, the answer is, "don't". |
19:33 |
|
JoseBravo joined #gluster |
19:33 |
JoeJulian |
What are you trying to tune for? |
19:34 |
Arrfab |
4 nodes, each exposing a 1TiB brick .. when writing on xfs at the brick level, I get ~150MB/s write .. if I do the same at the gluster level, it drops to ~50MB/s |
19:34 |
semiosis |
that's life |
19:34 |
semiosis |
that would be perfect if you had replica 3 |
19:34 |
JoeJulian |
So you want to tune for strictly write performance. |
19:35 |
Arrfab |
JoeJulian: write and read .. volume is distributed+replicated |
19:35 |
semiosis |
apples vs orchards |
19:35 |
JoeJulian |
Now you're changing the spec... You must not be an engineer. :P |
19:35 |
|
zerick joined #gluster |
19:35 |
JoseBravo |
I'm trying to do a geo-replication. In the master I created the volume this way: "gluster volume create master:/export/sda1/brick" is that ok? or I need at least two nodes? |
19:35 |
Arrfab |
idea was to provide space for VMs and I've switched to libgfapi but nothing better at the VM level (obviously) |
19:35 |
JoeJulian |
For a single client writing data, you don't want a clustered filesystem. |
19:35 |
JoseBravo |
gluster volume create home master:/export/sda1/brick |
19:36 |
Arrfab |
JoeJulian: the four nodes are hypervisor too and so starting qemu-kvm with libgfapi :-) |
19:37 |
JoeJulian |
i/o within the VM after switching to libgfapi was not faster? It should have been. Higher iops, maybe not greater throughput if you're maxing out your network. |
19:38 |
JoseBravo |
I'm trying to start the geo-replication, this way: gluster volume geo-replication home pi1-2:home start but I get this errors: "Staging failed on localhost. Please check the log file for more details., geo-replication command failed" |
19:38 |
Arrfab |
JoeJulian: I tested when nothing was using network : the four nodes have dual-port network and one port dedicated to gluster/libgfapi |
19:39 |
Llevar |
Is there a way to find out which files are on which brick in your volume? |
19:39 |
Arrfab |
confirmed available network speed between the four nodes with iperf (to verify that Gigabit ethernet was really working) |
19:39 |
Arrfab |
Llevar: ls on each node ? :-) |
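(A less manual answer to Llevar's question: queried through a fuse mount, the pathinfo xattr reports which brick(s) hold a given file. The mount path is a placeholder.)

    getfattr -n trusted.glusterfs.pathinfo /mnt/glustervol/path/to/file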
19:40 |
skippy |
partner: when you resize XFS, do you do `lvresize -r` on a mounted volume? Or do you unmount first, grow the LV, then grow the filesystem? |
19:42 |
partner |
live of course |
19:42 |
partner |
expand lvm and then resize the filesystem on top of it |
19:42 |
skippy |
two separate steps? |
19:42 |
partner |
yes as they are two separate things |
19:43 |
skippy |
lvresize has a `-r` flag to make it one step. Just asking how you do it; maybe there's some problem with lvresize handling the filesystem resize... |
19:45 |
Arrfab |
skippy: probably "old school" but I've always used lvextend/resize2fs (so two steps) |
19:45 |
|
coredump joined #gluster |
19:46 |
partner |
sure but i'm stuck using lvextend always |
19:47 |
skippy |
sorry, lvextend was the command. |
19:47 |
skippy |
too much going on here. :( |
19:48 |
partner |
both have the same switch |
19:50 |
partner |
just being precautious with my bricks, i have 600+ million files on top of them |
19:50 |
partner |
while i guess two commands will double the chance of failure :D |
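(The two variants being compared, for an XFS brick; VG/LV names and sizes are placeholders. resize2fs is for ext4, XFS grows with xfs_growfs.)

    # two steps (partner/Arrfab style):
    lvextend -L +10G /dev/vg_bricks/brick1
    xfs_growfs /bricks/brick1
    # one step (skippy style): lvresize grows the LV and the filesystem together
    lvresize -r -L +10G /dev/vg_bricks/brick1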
19:50 |
|
andreask joined #gluster |
19:56 |
|
gildub joined #gluster |
19:57 |
|
vertex joined #gluster |
20:01 |
|
failshell joined #gluster |
20:07 |
|
sputnik1_ joined #gluster |
20:10 |
|
sputnik13 joined #gluster |
20:10 |
|
sputnik__ joined #gluster |
20:18 |
|
sputnik13 joined #gluster |
20:24 |
partner |
hmm when might be the next version out? dunno if there is a place to check for it or what might be included? |
20:32 |
|
theron joined #gluster |
20:43 |
|
lmickh joined #gluster |
20:55 |
JoseBravo |
I have the geo-replication working... if I create a file it sends it to the slave, but if I modify the file, it doesn't send the updated content... why is that? |
21:06 |
|
sauce joined #gluster |
21:07 |
|
sputnik13 joined #gluster |
21:15 |
|
sauce joined #gluster |
21:16 |
|
h4rry joined #gluster |
21:19 |
|
jbrooks joined #gluster |
21:24 |
|
giannello joined #gluster |
21:32 |
|
sputnik13 joined #gluster |
21:51 |
JoeJulian |
My guess would be you didn't wait long enough. iirc it happens on a timer. |
22:00 |
|
JoseBravo joined #gluster |
22:20 |
|
calum_ joined #gluster |
22:22 |
|
schrodinger joined #gluster |
22:47 |
|
firemanxbr joined #gluster |
22:47 |
|
dockbram joined #gluster |
22:50 |
|
ira joined #gluster |
22:54 |
|
sprachgenerator joined #gluster |
23:24 |
|
elico joined #gluster |
23:32 |
|
Philambdo joined #gluster |