00:00 -- DV joined #gluster
00:07 -- masuberu joined #gluster
00:11 -- shyam joined #gluster
00:24 -- masuberu joined #gluster
01:02 -- yawkat joined #gluster
01:03 -- Ulrar joined #gluster
01:05 -- al joined #gluster
01:34 -- Lee1092 joined #gluster
01:36 -- haomaiwang joined #gluster
01:48 -- ilbot3 joined #gluster
01:48 -- Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:57 -- haomaiwang joined #gluster
01:57 -- haomaiwang joined #gluster
02:08 -- nishanth joined #gluster
02:18 -- JesperA- joined #gluster
02:54 -- haomaiwang joined #gluster
03:11 -- Gambit15 joined #gluster
03:20 -- hagarth joined #gluster
03:22 -- aravindavk joined #gluster
03:25 -- magrawal joined #gluster
03:29 -- nbalacha joined #gluster
03:43 -- nbalacha joined #gluster
03:45 -- amye joined #gluster
03:57 -- ppai joined #gluster
03:58 -- jiffin joined #gluster
04:03 -- F2Knight joined #gluster
04:10 -- RameshN joined #gluster
04:10 -- atinm joined #gluster
04:17 -- hchiramm joined #gluster
04:17 -- gem joined #gluster
04:18 -- aspandey joined #gluster
04:22 -- shubhendu joined #gluster
04:23 -- valkyr1e joined #gluster
04:25 -- itisravi joined #gluster
04:33 -- raghug joined #gluster
04:43 -- devilspgd joined #gluster
04:48 -- Javezim joined #gluster
04:48 -- Javezim left #gluster
04:48 -- Javezim joined #gluster
04:48 -- nehar joined #gluster
04:54 -- Manikandan joined #gluster
04:58 -- aravindavk joined #gluster
05:02 -- Manikandan joined #gluster
05:04 -- raghug joined #gluster
05:05 -- prasanth joined #gluster
05:09 -- mobaer joined #gluster
05:14 -- ndarshan joined #gluster
05:16 -- satya4ever joined #gluster
05:21 -- karnan joined #gluster
05:22 -- rafi joined #gluster
05:22 -- jiffin joined #gluster
05:28 -- hgowtham joined #gluster
05:31 -- rafi1 joined #gluster
05:33 -- ashiq joined #gluster
05:35 -- rafi joined #gluster
05:37 -- poornimag joined #gluster
05:40 -- Bhaskarakiran joined #gluster
05:40 -- luizcpg_ joined #gluster
05:40 -- karthik___ joined #gluster
05:42 -- Apeksha joined #gluster
05:43 -- [diablo] joined #gluster
05:46 -- mchangir joined #gluster
05:52 -- kdhananjay joined #gluster
05:54 -- jwd joined #gluster
05:55 -- jtux joined #gluster
06:01 -- pur_ joined #gluster
06:02 -- kovshenin joined #gluster
06:07 -- aspandey joined #gluster
06:12 -- kotreshhr joined #gluster
06:14 -- atalur joined #gluster
06:16 -- Manikandan joined #gluster
06:17 -- raghug joined #gluster
06:23 -- arcolife joined #gluster
06:25 -- mobaer joined #gluster
06:36 -- [Enrico] joined #gluster
06:36 -- skoduri joined #gluster
06:40 -- kramdoss_ joined #gluster
06:40 -- kramdoss__ joined #gluster
06:41 -- hackman joined #gluster
06:43 -- arcolife joined #gluster
06:43 -- d0nn1e joined #gluster
06:44 -- kovshenin joined #gluster
06:46 -- level7 joined #gluster
06:46 -- msvbhat joined #gluster
06:53 -- skoduri joined #gluster
06:56 -- om joined #gluster
06:58 -- [o__o] joined #gluster
07:00 -- gem joined #gluster
07:03 -- mobaer joined #gluster
07:05 -- gem joined #gluster
07:07 -- atalur joined #gluster
07:09 -- gem_ joined #gluster
07:11 -- mobaer joined #gluster
07:13 -- Gnomethrower joined #gluster
07:14 -- mobaer joined #gluster
07:14 -- karnan joined #gluster
07:17 -- arcolife joined #gluster
07:20 -- anil_ joined #gluster
07:20 -- jri joined #gluster
07:21 -- [o__o] joined #gluster
07:23 -- kramdoss_ joined #gluster
07:23 -- kramdoss__ joined #gluster
07:34 -- jri joined #gluster
07:35 -- fsimonce joined #gluster
07:52 -- karthik___ joined #gluster
07:56 -- kramdoss__ joined #gluster
07:56 -- kramdoss_ joined #gluster
07:58 -- ju5t joined #gluster
07:58 -- deniszh joined #gluster
07:59 <ju5t> Hi, we've just added 2 bricks to a cluster of 4, but the disk space doesn't seem to be available on the clients. Any ideas what might have gone wrong?
08:00 <post-factum> ju5t: please provide info about the volume layout first
08:01 <ju5t> 3 sets of 2 in a distributed-replicate setup, all bricks have the same amount of disk space
08:02 <ju5t> is that what you're looking for?
08:05 -- baojg joined #gluster
08:12 -- ahino joined #gluster
08:15 <post-factum> ju5t: how did you add the new bricks?
08:16 <post-factum> ju5t: show the current volume info as well
08:18 -- Wizek__ joined #gluster
08:23 -- Slashman joined #gluster
08:27 <ju5t> post-factum: it's a problem on our end, DNS didn't play ball the way I expected it to
08:27 <ju5t> it's all solved now
08:27 <post-factum> hm, ok
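
[Aside: the expand-and-grow sequence being discussed usually looks like the sketch below; the volume name "gv0" and the server/brick paths are hypothetical.]

    # add one replica pair to a distributed-replicate (replica 2) volume
    gluster volume add-brick gv0 replica 2 server5:/bricks/b1 server6:/bricks/b1
    # rebalance so the DHT layout (and existing data) covers the new bricks
    gluster volume rebalance gv0 start
    # on a client, the extra capacity should now show up
    df -h /mnt/gv0
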
08:29 -- itisravi joined #gluster
08:33 -- gowtham joined #gluster
08:34 -- tom[] joined #gluster
08:41 -- kramdoss_ joined #gluster
08:41 -- kramdoss__ joined #gluster
08:51 -- level7 joined #gluster
09:12 -- gem joined #gluster
09:14 -- hchiramm joined #gluster
09:24 -- arif-ali joined #gluster
09:29 -- Alghost joined #gluster
09:31 -- jiffin1 joined #gluster
09:35 -- aspandey joined #gluster
09:37 -- Gnomethrower joined #gluster
09:37 -- haomaiwang joined #gluster
09:40 -- nishanth joined #gluster
09:41 -- gem joined #gluster
09:41 -- robb_nl joined #gluster
09:43 -- kdhananjay joined #gluster
09:46 -- atalur joined #gluster
09:49 -- om joined #gluster
09:56 -- DV joined #gluster
10:03 -- Guest47189 joined #gluster
10:04 -- jiffin1 joined #gluster
10:07 -- om3 joined #gluster
10:07 -- om2 joined #gluster
10:08 -- haomaiwang joined #gluster
10:10 -- ibotty joined #gluster
10:12 <ibotty> Hi, does anyone know whether there is a publicly available rhgs docker repo/rebuild?
10:12 <ibotty> and if so, where ;)
10:25 -- gem joined #gluster
10:25 -- msvbhat_ joined #gluster
10:33 <jiffin> ibotty: I guess hchiramm/ashiq should have the info
10:33 <ibotty> do you know when they are available (time zone)?
10:33 <jiffin> they are in IST,
10:34 <jiffin> should be available now
10:34 <ashiq> hi ibotty, it's not publicly available
10:34 <ibotty> i see
10:34 -- hchiramm joined #gluster
10:35 <ibotty> thank you anyway.
10:37 -- itisravi joined #gluster
10:37 <ashiq> ibotty, you are welcome
10:39 -- mobaer joined #gluster
10:39 -- atalur_ joined #gluster
10:39 -- jiffin joined #gluster
10:41 -- kdhananjay joined #gluster
10:42 <luizcpg> hi, quick question… clicking the link labelled “GlusterFS version 3.7 is the latest version at the moment.” on the site takes me to version 3.8.0, which is weird… but what about gluster 3.7.12? Are you going to jump to 3.8.0, or has 3.7.x finished? Thanks
10:42 <hchiramm> ibotty, r u looking for gluster container images ?
10:42 <hchiramm> ibotty, https://hub.docker.com/r/gluster/gluster-centos/
10:43 <ibotty> hchiramm, ashiq: Do you squash this image before pushing to docker hub?
10:43 <ibotty> it has many additional layers
10:46 <hchiramm> I think docker hub does squashing
10:47 <ibotty> hchiramm: no, it does not.
10:47 <hchiramm> ibotty, it's an automated build in docker hub
10:47 <hchiramm> iirc, it does squashing..
10:50 <hchiramm> I need to check it though
10:50 <hchiramm> ibotty, also the source and dockerfile are available there
10:50 <hchiramm> if u want to build it locally
10:50 <hchiramm> and squash
10:50 <ibotty> I know ;). I was wondering whether you would consider a pull request doing that.
10:51 <ibotty> (I did not want to sound hostile and I apologize if I did)
10:53 -- aspandey joined #gluster
10:53 <ibotty> does the image work with lvm from within the docker container? There are issues regarding /etc/lvm, at least there were with my image.
10:54 <hchiramm> ibotty, not at all, PRs are always welcome :)
10:54 <hchiramm> waiting for it
10:54 <hchiramm> ibotty++ thanks
10:54 <glusterbot> hchiramm: ibotty's karma is now 1
10:55 <hchiramm> it should work..
10:56 <hchiramm> if not, /run/lvm has to be exported
10:56 <hchiramm> depends on the base version..
10:56 <hchiramm> on rhel it works without issues..
10:56 -- johnmilton joined #gluster
10:56 <ibotty> I'll test with the failures I had. I'll see whether they occur :)
10:56 <hchiramm> in centos we may need to expose /run/lvm to the container..
10:56 <hchiramm> please let us know if you face any issues.
10:57 <ibotty> is there a difference between rhel and centos re lvm? I would consider that a serious bug in centos.
10:57 <hchiramm> ibotty, sure!!
10:57 <hchiramm> even I thought it should be the same
10:58 -- JesperA joined #gluster
10:58 <hchiramm> however with centos, we experienced that issue
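
[Aside: "exposing /run/lvm" means bind-mounting it into the container. A rough sketch using the gluster/gluster-centos image linked above; the exact set of mounts needed is an assumption and varies with the image version:]

    # run the gluster container with the host's LVM runtime state visible inside
    docker run -d --privileged --name gluster \
        -v /run/lvm:/run/lvm \
        -v /etc/glusterfs:/etc/glusterfs:z \
        -v /var/lib/glusterd:/var/lib/glusterd:z \
        -v /var/log/glusterfs:/var/log/glusterfs:z \
        gluster/gluster-centos
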
11:04 -- ju5t_ joined #gluster
11:12 <anoopcs> luizcpg, Which link are you talking about?
11:13 <luizcpg> http://download.gluster.org/pub/gluster/glusterfs/LATEST/
11:13 <glusterbot> Title: Index of /pub/gluster/glusterfs/LATEST (at download.gluster.org)
11:13 -- level7 joined #gluster
11:14 <anoopcs> luizcpg, This link is correct because 3.8.0 is the latest version.
11:14 <luizcpg> “GlusterFS version 3.7 is the latest version at the moment.”
11:14 <luizcpg> ^ this is the label of the link
11:15 <anoopcs> luizcpg, Where did you see the label? gluster.org?
11:15 <luizcpg> http://www.gluster.org/download/
11:15 <glusterbot> Title: Install GlusterFS Gluster (at www.gluster.org)
11:15 <luizcpg> ^here
11:17 <anoopcs> luizcpg, Ah.. you are right.. Let me see if I can fix it.
11:17 <kkeithley> there are three actively maintained versions: 3.6, 3.7, and 3.8.
11:17 -- prasanth joined #gluster
11:19 -- DV joined #gluster
11:20 -- nehar joined #gluster
11:20 -- [diablo] joined #gluster
11:20 -- haomaiwang joined #gluster
11:22 -- JesperA- joined #gluster
11:23 <anoopcs> luizcpg, Would you mind raising a PR to https://github.com/gluster/glusterweb to fix the links?
11:23 <glusterbot> Title: GitHub - gluster/glusterweb: Web Content for gluster.org (at github.com)
11:24 -- Gnomethrower joined #gluster
11:26 -- poornimag joined #gluster
11:36 -- Gnomethrower joined #gluster
11:36 -- mchangir joined #gluster
11:36 -- JesperA joined #gluster
11:38 <luizcpg> It would be better to have another link for 3.7 and reuse the existing one for 3.8...
11:38 -- karnan joined #gluster
11:38 <anoopcs> luizcpg, Yes..
11:39 <anoopcs> kkeithley, Do we really need to provide the 3.5 download link @ https://www.gluster.org/download/?
11:39 <glusterbot> Title: Install GlusterFS Gluster (at www.gluster.org)
11:40 -- rastar joined #gluster
11:41 -- arcolife joined #gluster
11:41 <anoopcs> luizcpg, In case you are searching for the exact source file: https://github.com/gluster/glusterweb/blob/master/source/download/index.html.haml
11:41 <glusterbot> Title: glusterweb/index.html.haml at master · gluster/glusterweb · GitHub (at github.com)
11:41 <luizcpg> ok.. I'll make a PR
11:42 <anoopcs> Cool. luizcpg++
11:42 <glusterbot> anoopcs: luizcpg's karma is now 1
11:42 <post-factum> kkeithley: "actively" and "3.6" is not the whole truth
11:43 * anoopcs would like to remove the 3.5 link from the downloads page
11:51 -- and` left #gluster
11:51 <kkeithley> 9_9
11:52 -- nottc joined #gluster
11:58 -- nottc joined #gluster
11:59 -- nehar joined #gluster
12:00 -- unclemarc joined #gluster
12:02 -- prasanth joined #gluster
12:06 -- karnan joined #gluster
12:06 -- kotreshhr left #gluster
12:11 -- haomaiwang joined #gluster
12:16 -- haomaiwang joined #gluster
12:20 -- kdhananjay joined #gluster
12:21 <anoopcs> luizcpg, Since 3.5 has reached its EOL you can make that change too in your PR fixing the download links
12:21 -- ibotty joined #gluster
12:25 -- haomaiwang joined #gluster
12:30 -- karnan joined #gluster
12:30 -- DV joined #gluster
12:33 -- om3 joined #gluster
12:33 -- om2 joined #gluster
12:39 -- Alghost joined #gluster
12:39 -- Gnomethrower joined #gluster
12:41 -- kdhananjay1 joined #gluster
12:43 -- rwheeler joined #gluster
12:45 -- [diablo] joined #gluster
12:45 -- mchangir joined #gluster
12:46 -- Atul joined #gluster
12:46 -- ira joined #gluster
12:46 <Atul> hi
12:46 <glusterbot> Atul: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
12:47 <Atul> how can we mount a gluster volume on the server itself?
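
[Aside: the question goes unanswered here; for reference, a server can mount the volume it serves with the native FUSE client — a minimal sketch, volume name and mount point hypothetical:]

    # on one of the gluster servers, mount the volume it serves
    mkdir -p /mnt/gv0
    mount -t glusterfs localhost:/gv0 /mnt/gv0
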
12:55 -- om2 joined #gluster
12:55 -- om3 joined #gluster
12:58 -- jiffin1 joined #gluster
13:01 -- raghug joined #gluster
13:09 -- manous_ joined #gluster
13:09 <manous_> hello
13:09 <glusterbot> manous_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:17 <morgangrobin> In a two node cluster where data is replicated across peers, when one node goes down the share becomes inaccessible. Is there a workaround for this behaviour? So that data is still accessible if one node goes down?
13:25 -- msvbhat_ joined #gluster
13:29 <Anarka_> apparently you don't want that, just check split brain. I'm looking at a similar scenario and it seems arbiter volumes are the "cheapest" way
13:30 <Anarka> but ideas are welcome :)
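
[Aside: an arbiter volume adds a third, metadata-only brick so a two-node replica can keep quorum without a full third copy of the data; a sketch with hypothetical hostnames and paths (arbiter support arrived in 3.7):]

    # replica 3 where the third brick stores only metadata
    gluster volume create gv-arb replica 3 arbiter 1 \
        glustera:/pool/data glusterb:/pool/data glusterc:/pool/arbiter
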
13:30 <atinm> morgangrobin, I don't think with default settings you will lose access to the volume; if quorum is enabled, then certainly you need 2/2 nodes to be up
13:30 <atinm> morgangrobin, gluster volume info output please?
13:33 -- level7 joined #gluster
13:38 -- wadeholler joined #gluster
13:40 -- dnunez joined #gluster
13:41 -- kramdoss__ joined #gluster
13:42 -- kramdoss_ joined #gluster
13:47 -- luizcpg joined #gluster
13:47 -- nbalacha joined #gluster
13:47 -- plarsen joined #gluster
13:48 <morgangrobin> atinm: Status of volume: gv0
13:48 <morgangrobin> Gluster process TCP Port RDMA Port Online Pid
13:48 <morgangrobin> ------------------------------------------------------------------------------
13:48 <morgangrobin> Brick glustera:/pool/data 49152 0 Y 28384
13:48 <morgangrobin> Brick glusterb:/pool/data 49152 0 Y 18223
13:48 <glusterbot> morgangrobin: ----------------------------------------------------------------------------'s karma is now -13
13:48 <morgangrobin> NFS Server on localhost N/A N/A N N/A
13:48 <morgangrobin> Self-heal Daemon on localhost N/A N/A Y 28411
13:48 <morgangrobin> NFS Server on glusterb N/A N/A N N/A
13:49 <morgangrobin> Self-heal Daemon on glusterb N/A N/A Y 18245
13:49 <morgangrobin> Task Status of Volume gv0
13:49 <morgangrobin> ------------------------------------------------------------------------------
13:49 <glusterbot> morgangrobin: ----------------------------------------------------------------------------'s karma is now -14
13:49 <morgangrobin> There are no active volume tasks
13:49 <post-factum> morgangrobin: please do not do that anymore
13:49 <post-factum> @paste
13:49 <glusterbot> post-factum: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
13:49 -- luizcpg joined #gluster
13:50 <morgangrobin> Sorry. Thanks for the info
13:50 <post-factum> @pastebin
13:50 <glusterbot> post-factum: I do not know about 'pastebin', but I do know about these similar topics: 'paste', 'pasteinfo'
13:50 <post-factum> @pasteinfo
13:50 <glusterbot> post-factum: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
13:50 -- nehar joined #gluster
13:51 -- guhcampos joined #gluster
13:51 <morgangrobin> atinm: http://paste.fedoraproject.org/383710/66689870/
13:53 <glusterbot> Title: #383710 Fedora Project Pastebin (at paste.fedoraproject.org)
13:53 <atinm> morgangrobin, you have given the output of gluster v status
13:53 <atinm> morgangrobin, I am more interested in gluster v info, where we get to see whether the quorum tunables are turned on or not
13:54 <morgangrobin> Sorry, totally misread that. http://paste.fedoraproject.org/383712/46669003/
13:54 <glusterbot> Title: #383712 Fedora Project Pastebin (at paste.fedoraproject.org)
13:54 <atinm> morgangrobin, if they are, then this behavior is expected; quorum on a two node cluster doesn't make sense
13:55 <morgangrobin> So I should turn quorum off, and data will be accessible when one node is down?
13:55 <atinm> morgangrobin, so data should be accessible even if one of the nodes goes down
13:55 <atinm> morgangrobin, you don't have to, as I don't see it turned on
13:55 <atinm> morgangrobin, however if there is a case where both nodes go down and only one comes up, we don't start the brick processes
13:55 <atinm> morgangrobin, I am not sure whether you hit that case
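
[Aside: the quorum tunables atinm refers to can be inspected and set per volume; a sketch, volume name hypothetical — on a plain two-node replica they are typically off by default:]

    # check whether quorum is enforced on the volume
    gluster volume get gv0 cluster.server-quorum-type
    gluster volume get gv0 cluster.quorum-type
    # server-side quorum: glusterd stops local bricks when it no longer
    # sees a majority of the trusted pool
    gluster volume set gv0 cluster.server-quorum-type server
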
14:01 <wadeholler> hi all, need some simple help, first time here: testing erasure coding / dispersed volumes; I have 3 clients (gluster fuse mounts) writing to the same directories; as soon as I launch the second client, performance slows to a crawl.
14:02 <wadeholler> replicated volume testing did not show this behavior
14:02 <wadeholler> does someone have an explanation? the behavior is so pronounced I'm guessing this is a known thing
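
[Aside: for context, a dispersed (erasure-coded) volume like the one under test is created along these lines; the 4+2 layout, hostnames, and paths are hypothetical:]

    # six bricks per set: four data plus two redundancy, so any two may fail
    gluster volume create ecvol disperse 6 redundancy 2 \
        server{1..6}:/bricks/ec1
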
14:03 -- om3 joined #gluster
14:03 -- om2 joined #gluster
14:04 -- kramdoss_ joined #gluster
14:04 -- kramdoss__ joined #gluster
14:20 -- hagarth joined #gluster
14:21 -- ahino joined #gluster
14:22 -- skoduri joined #gluster
14:26 -- haomaiwang joined #gluster
14:31 -- msvbhat_ joined #gluster
14:34 -- haomaiwang joined #gluster
14:38 -- Manikandan joined #gluster
14:38 -- Gnomethrower joined #gluster
14:39 -- deniszh1 joined #gluster
14:41 -- Alghost joined #gluster
14:47 -- haomaiwang joined #gluster
14:56 -- om3 joined #gluster
14:56 -- om2 joined #gluster
15:03 <ibotty> hchiramm: do you have a minute re docker image?
15:05 -- wushudoin joined #gluster
15:11 -- kpease joined #gluster
15:13 -- Gambit15 joined #gluster
15:13 -- Gnomethrower joined #gluster
15:13 -- guhcampos joined #gluster
15:16 -- DV joined #gluster
15:16 -- alvinstarr joined #gluster
15:18 -- om2 joined #gluster
15:18 -- om3 joined #gluster
15:19 -- arcolife joined #gluster
15:32 -- om2 joined #gluster
15:32 -- om3 joined #gluster
15:32 -- Gambit15 joined #gluster
15:40 -- squizzi joined #gluster
15:41 -- al joined #gluster
15:43 -- al joined #gluster
15:45 -- rafi joined #gluster
15:47 -- haomaiwang joined #gluster
15:48 <ashka> hi, I can't find proper volume create parameters for what I want to do, if anyone can help.. I have 20 2TB disks, I want a 20TB gluster while allowing some disks to fail. I created using stripe 2 replica 2, but the resulting gluster is 18TB (number of bricks: 5x2x2)
15:54 -- haomaiwang joined #gluster
15:54 -- level7 joined #gluster
16:00 <JoeJulian> ashka: Probably a simple replica 2 volume is what you want. ,,(stripe) is usually not what anyone wants.
16:00 <glusterbot> ashka: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
16:01 -- DV joined #gluster
16:02 <ashka> JoeJulian: I need decent parallel write performance, so I'm not sure. I'll check the link
16:03 <JoeJulian> wadeholler: it's not a known thing, no, but there was this email this morning. I'm pretty sure it's about multiple threads on the same client though. http://www.gluster.org/pipermail/gluster-devel/2016-June/049878.html
16:03 <glusterbot> Title: [Gluster-devel] performance issues Manoj found in EC testing (at www.gluster.org)
16:08 -- robb_nl joined #gluster
16:10 <ashka> also, that's an entirely different question, but gluster in replica 2 shows bricks: 10x2. Does this mean I need to add 10 bricks at a time to expand it? Can I get it lower? (ideally 4)
16:12 -- kpease joined #gluster
16:12 <JoeJulian> ashka: no, 2 bricks at a time.
16:12 <ashka> JoeJulian: oh. heh. thanks
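
[Aside: the 10x2 distribute-replicate layout being discussed, and the two-brick expansion JoeJulian describes, look roughly like this; hostnames and paths are hypothetical:]

    # ten replica pairs distributed into one namespace (only two pairs shown)
    gluster volume create bigvol replica 2 \
        srv1:/bricks/b1 srv2:/bricks/b1 \
        srv1:/bricks/b2 srv2:/bricks/b2
    # growing it later takes just one more replica pair at a time
    gluster volume add-brick bigvol replica 2 srv1:/bricks/b11 srv2:/bricks/b11
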
16:12 |
|
DV joined #gluster |
16:16 |
|
muneerse joined #gluster |
16:16 |
|
bowhunter joined #gluster |
16:18 |
|
baojg joined #gluster |
16:19 |
|
nbalacha joined #gluster |
16:20 |
|
DV joined #gluster |
16:26 |
|
Gambit15 joined #gluster |
16:31 |
|
om joined #gluster |
16:31 |
|
om2 joined #gluster |
16:31 |
|
hagarth joined #gluster |
16:31 |
|
ira joined #gluster |
16:38 |
|
DV joined #gluster |
16:42 |
|
Alghost joined #gluster |
16:47 |
|
arcolife joined #gluster |
16:51 |
|
The_Ball joined #gluster |
16:51 |
The_Ball |
What does client quorum mean? I understand server side quorum, but not client quorum |
16:54 -- atinm joined #gluster
16:55 -- FrankLee joined #gluster
16:55 <JoeJulian> Server quorum will shut down services if it loses quorum with the other servers. Client quorum will disable a mount (or cause it to go read-only if so configured) if the client loses connection with a quorum of servers.
16:56 -- F2Knight joined #gluster
16:56 -- kotreshhr joined #gluster
16:57 <The_Ball> Aha, thanks
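
[Aside: the two behaviours JoeJulian describes map onto separate volume options; a sketch of the client-side ones, volume name hypothetical:]

    # "auto": a replica set needs a majority of its bricks reachable
    # (ties are broken in favour of the first brick)
    gluster volume set gv0 cluster.quorum-type auto
    # or demand a fixed number of reachable bricks per replica set
    gluster volume set gv0 cluster.quorum-type fixed
    gluster volume set gv0 cluster.quorum-count 2
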
16:57 -- kotreshhr left #gluster
17:00 <FrankLee> does glusterfs support advisory file locking?
17:00 <FrankLee> like NFSv4?
17:01 <JoeJulian> Is that posix?
17:05 <JoeJulian> Looks like it is, so it should be supported
17:05 <JoeJulian> otoh, I don't see fadvise in the source.
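
[Aside: advisory locks are the fcntl/flock kind, which glusterfs implements in its locks translator; fadvise (posix_fadvise) is an I/O usage hint rather than a locking call, so it is a different thing to grep for. A quick two-terminal test with flock(1); the mount path is hypothetical:]

    # terminal 1: take an exclusive advisory lock on a file in the volume
    flock /mnt/gv0/locktest -c 'echo locked; sleep 30'
    # terminal 2: blocks until terminal 1 releases the lock
    flock /mnt/gv0/locktest -c 'echo got it'
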
17:07 -- kpease_ joined #gluster
17:09 -- jwd joined #gluster
17:14 -- shubhendu joined #gluster
17:17 -- davidj joined #gluster
17:17 -- ahino joined #gluster
17:21 -- robb_nl joined #gluster
17:22 -- ben453 joined #gluster
17:25 -- guhcampos joined #gluster
17:29 -- kpease joined #gluster
17:34 -- chirino_m joined #gluster
17:34 -- guhcampos_ joined #gluster
17:37 -- guhcampo_ joined #gluster
17:42 -- rwheeler joined #gluster
17:48 -- atinm-mob joined #gluster
17:50 -- atinm-mob joined #gluster
18:01 -- FrankLee joined #gluster
18:02 -- nishanth joined #gluster
18:12 -- hagarth joined #gluster
18:12 <alvinstarr> I am seeing sporadic client hangs on fuse_request_send with CentOS 6.4 and gluster 3.7.11.
18:13 <alvinstarr> Some related bugs look to have existed in 3.4 but I don't see anything recent.
18:13 -- F2Knight joined #gluster
18:14 <JoeJulian> Hangs like, hangs and doesn't come back? Or hangs like momentary pauses? Or somewhere in between?
18:16 <alvinstarr> hangs as in never comes back. kill -9 does nothing; umount -f will kill the process though.
18:16 -- squizzi_ joined #gluster
18:25 <JoeJulian> kill -9 of the glusterfs process or of the application that's using the mount?
18:27 <alvinstarr> JoeJulian: sorry. A kill -9 on the hung process. The umount brings down the gluster process and that in turn seems to send a kill signal to the hung process
18:27 -- gem joined #gluster
18:28 <JoeJulian> That's what I expected, but just asking for clarity. So the best way to diagnose that, I think, would be to run gdb against the running client process and do a backtrace (thread apply all bt) and file a bug report.
18:28 <glusterbot> https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
18:28 <JoeJulian> I've not heard of that being a problem in quite a long time.
18:29 <JoeJulian> If you do create the bug report, ping me with the id. I'd be interested in following it.
18:30 <JoeJulian> Oh, also you could try to get a state dump of the client, "pkill -USR1 glusterfs". It'll dump state in /var/run/gluster.
18:31 <alvinstarr> JoeJulian: I tried gdb but it would not connect to the process. It seems that if it's hung in an uninterruptible system call, strace and gdb (attach) don't work
18:31 <JoeJulian> bummer
18:32 <JoeJulian> I wonder if you could use gcore to get a coredump and backtrace that.
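
[Aside: the diagnostic steps suggested above, gathered into one sketch; it assumes the box runs a single glusterfs client process (on a server node, pgrep may also match the self-heal daemon):]

    # find the client pid (the FUSE client runs as "glusterfs")
    PID=$(pgrep -x glusterfs)
    # backtrace of every thread in the client (the gdb suggestion)
    gdb -p "$PID" -batch -ex 'thread apply all bt'
    # ask the client to write a statedump under /var/run/gluster
    pkill -USR1 glusterfs
    # grab a core without stopping the process, then inspect it offline
    gcore -o /tmp/glusterfs.core "$PID"
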
18:32 -- jiffin joined #gluster
18:33 -- Manikandan joined #gluster
18:34 <jiffin> Atul: are u around?
18:35 <JoeJulian> @later tell Atul jiffin was looking for you...
18:35 <glusterbot> JoeJulian: The operation succeeded.
18:36 <alvinstarr> JoeJulian: gcore looks like it's not doing much either. It looks like gcore calls gdb.
18:36 <jiffin> JoeJulian: thanks
18:38 <JoeJulian> alvinstarr: I think SIGABRT should cause it to dump core (abnormal termination).
18:43 -- Alghost joined #gluster
18:44 <JoeJulian> ie. pkill -3 glusterfs
18:45 <FrankLee> just double checking, but glusterfs does support file advisory locking right?
18:45 <alvinstarr> Ahhh. I don
18:46 <JoeJulian> FrankLee: scroll up to the last time you asked.
18:46 <alvinstarr> JoeJulian: I can't kill glusterfs. It's a live system. The hung apache process on the other hand I can whack at will.
18:46 <JoeJulian> Yeah, but you're killing it anyway when you unmount.
18:48 <alvinstarr> yes but at this point I am not about to unmount the filesystem. The hung client processes are not piling up fast, so I can wait till 00:00 and do my work during quiet times.
18:48 <JoeJulian> Sounds like a plan.
18:49 <FrankLee> JoeJulian: I don't understand your question. file advisory locking is supported natively by NFSv4
18:49 <JoeJulian> Oh, wait... you were trying to get a backtrace of apache?
18:49 <alvinstarr> The trouble is that once the processes are all killed I have no more information that I can gather from them.
18:50 <alvinstarr> sadly apache is not yielding much information either.
18:50 <JoeJulian> Yeah, don't care about apache. The glusterfs process (client process) is where the interesting stuff is.
18:51 <JoeJulian> And if it's not completely hanging everything then you should be able to gcore it as well as trigger the state dump (SIGUSR1).
18:51 <alvinstarr> I would like to know what the PHP code is doing when it hangs. If I can find that then I may be able to create a test case.
18:52 <alvinstarr> I can try dumping core on the glusterfs process tonight.
18:53 <alvinstarr> The weird thing is that I have just 1 hung apache process and there are hundreds of requests per hour that are still getting written to the gluster volume.
18:53 <JoeJulian> Yeah, that's where that state dump might be valuable.
18:55 <alvinstarr> Oh well. Another night with broken sleep.
18:56 <JoeJulian> cron jobs. ;)
18:57 -- dnunez joined #gluster
18:57 <JoeJulian> Actually, if it's only midnight, I usually just plan on coming in late the next morning and just stay up.
18:57 -- nage joined #gluster
18:58 <alvinstarr> I am not about to be killing servers via cron jobs. That can go so bad so easily.
18:58 <JoeJulian> Then my stupid brain just wakes me up on time anyway...
18:58 <alvinstarr> Well for me the office is 15 feet from the bedroom. Working from home sucks at times.
18:59 <JoeJulian> Me too
18:59 <JoeJulian> Though I guess I have a bigger house. ;)
18:59 <JoeJulian> I have to walk 30 feet.
18:59 <JoeJulian> FrankLee: I don't see fadvise in the source, so unless I'm looking for the wrong function, I just don't see it supported.
19:00 <alvinstarr> Oh great. Next you will tell me you are living in sunny California.
19:00 <JoeJulian> Even better, sunny Washington, about 15 minutes north of Seattle.
19:01 <alvinstarr> Well. I am south of you then in Toronto.
19:01 <JoeJulian> Well say hi to purpleidea for me.
19:03 -- dgandhi joined #gluster
19:04 -- karnan joined #gluster
19:04 -- wadeholler joined #gluster
19:06 -- wadeholler joined #gluster
19:10 <alvinstarr> Toronto is a biggish city.
19:14 -- DV joined #gluster
19:45 -- takarider joined #gluster
19:57 -- wadeholler joined #gluster
20:01 -- jri joined #gluster
20:03 -- hagarth joined #gluster
20:05 -- dgandhi joined #gluster
20:11 -- DV joined #gluster
20:13 -- wadeholler joined #gluster
20:15 -- kpease joined #gluster
20:20 -- wadeholler joined #gluster
20:27 -- JesperA- joined #gluster
20:27 -- devilspgd joined #gluster
20:27 -- fale joined #gluster
20:27 -- inodb joined #gluster
20:27 -- s-hell joined #gluster
20:27 -- renout_away joined #gluster
20:27 -- klaas joined #gluster
20:27 -- a1 joined #gluster
20:27 -- crashmag joined #gluster
20:27 -- d-fence_ joined #gluster
20:27 -- siel joined #gluster
20:27 -- dblack joined #gluster
20:27 -- gbox joined #gluster
20:27 -- Gugge joined #gluster
20:27 -- pocketprotector joined #gluster
20:27 -- jwd joined #gluster
20:29 -- Vaizki joined #gluster
20:30 -- virusuy joined #gluster
20:31 -- kenansulayman joined #gluster
20:31 -- _fortis joined #gluster
20:31 -- ben453 joined #gluster
20:32 -- davidj joined #gluster
20:33 -- NuxRo joined #gluster
20:35 -- fyxim joined #gluster
20:37 -- wadeholl_ joined #gluster
20:39 -- wadeholl_ joined #gluster
20:40 -- PotatoGim joined #gluster
20:41 -- tyler274 joined #gluster
20:43 -- d4n13L joined #gluster
20:43 -- gluytium joined #gluster
20:44 -- Alghost joined #gluster
20:46 -- hackman joined #gluster
20:46 -- lh joined #gluster
20:54 -- deniszh joined #gluster
20:55 -- d0nn1e joined #gluster
20:58 -- Gambit15 joined #gluster
21:04 -- rafaels joined #gluster
21:25 -- JesperA joined #gluster
21:33 -- bowhunter joined #gluster
21:40 -- kovshenin joined #gluster
21:52 -- kovshenin joined #gluster
22:00 -- F2Knight joined #gluster
22:08 -- DV joined #gluster
22:20 -- gluytium joined #gluster
22:27 -- d0nn1e joined #gluster
22:41 -- overclk joined #gluster
22:44 -- natarej joined #gluster
22:44 -- Alghost joined #gluster
22:52 -- deniszh1 joined #gluster
23:14 -- om joined #gluster
23:14 -- om2 joined #gluster
23:51 -- hagarth joined #gluster
23:53 -- DV joined #gluster