Time |
Nick |
Message |
00:10 |
|
russoisraeli joined #gluster |
00:25 |
|
plarsen joined #gluster |
00:28 |
|
calisto joined #gluster |
00:32 |
unwastable |
hello? |
00:32 |
glusterbot |
unwastable: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer. |
00:33 |
PeterA |
yes |
00:33 |
unwastable |
the 32bit NFS client is getting a "NFS: Buggy server - nlink == 0!" error, does anybody know what's going on? |
00:34 |
unwastable |
PeterA are you still in? |
00:34 |
anastymous |
did you follow the link i posted? |
00:34 |
PeterA |
yes |
00:34 |
PeterA |
e7196100-ebb5-4501-b4b1-b8d4ce14ae76==Directory:./gfid-resolver.sh: line 38: cd: /brick02/gfs//.glusterfs/e7/19/../../e2/35/e235dd64-4e75-4d26-9c00-e951298e626b: Too many levels of symbolic links |
00:34 |
PeterA |
when i tried to run gfid-resolver.sh |
00:34 |
unwastable |
have you got what you need? |
00:34 |
PeterA |
how do we resolve these sym links? |
00:35 |
PeterA |
some of them |
00:35 |
PeterA |
i am resolving most of the gfid to files with gfid-resolver.sh |
00:35 |
PeterA |
but some resolve to sym links which I'm not able to resolve |
00:35 |
Durzo |
semiosis, FYI i dont think this is something that can be fixed in the .deb but maybe you think otherwise: https://bugzilla.redhat.com/show_bug.cgi?id=1162905 |
00:35 |
glusterbot |
Bug 1162905: medium, unspecified, ---, bugs, NEW , hardcoded gsyncd path causes geo-replication to fail on non-redhat systems |
00:35 |
unwastable |
does the 1st server show the same e7196100-ebb5-4501-b4b1-b8d4ce14ae76 |
00:36 |
PeterA |
only one server knows |
00:36 |
PeterA |
the gfid only exists on one node |
00:40 |
unwastable |
in that case you are safe to remove the file/dir associated with this gfid, and do a local self heal in that dir to propagate the missing files |
00:41 |
unwastable |
if every entry has the same pattern, you can put them in an automated script |
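A minimal sketch of the cleanup unwastable is describing, assuming a brick at /brick02/gfs and a replica-2 volume named gfsvol (the volume name and file paths are illustrative, not taken from the log):

    # the gfid's hard link lives under .glusterfs/<first two hex chars>/<next two>/<gfid>
    ls -li /brick02/gfs/.glusterfs/e7/19/e7196100-ebb5-4501-b4b1-b8d4ce14ae76
    # remove both the named file and its gfid hard link on the brick holding the stale entry
    rm /brick02/gfs/path/to/stale-file
    rm /brick02/gfs/.glusterfs/e7/19/e7196100-ebb5-4501-b4b1-b8d4ce14ae76
    # then trigger a heal so the healthy replica repopulates what is missing
    gluster volume heal gfsvol full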
00:41 |
unwastable |
anastymous: this is gluster specific i believe, but not an exportfs issue |
00:41 |
unwastable |
fsck shows no error |
00:43 |
unwastable |
appreciate it if you could help me with this |
00:44 |
unwastable |
PeterA are you still with me? |
00:44 |
PeterA |
yes |
00:45 |
PeterA |
i will try to remove those gfid and heal |
00:45 |
unwastable |
does it resolve your problem? |
00:45 |
PeterA |
it's a super big folder and still running the heal |
00:45 |
PeterA |
not sure at this point |
00:45 |
PeterA |
will keep you posted :) |
00:46 |
PeterA |
thank you VERY much :D |
00:46 |
PeterA |
it's been bugging us |
00:46 |
PeterA |
we are running a full heal now on the 17TB volume |
00:46 |
unwastable |
but would you be able to check those healed files in the dir? |
00:46 |
PeterA |
not yet |
00:47 |
PeterA |
ya, those gfids that link to a file exist |
00:47 |
unwastable |
just do a getfattr -d -m . -e hex /dir/file |
00:47 |
PeterA |
but not those that are sym links or dirs |
00:47 |
skippy |
it seems that the FUSE client in 3.6.1 dropped support for the following mount options: noatime,noexec,nosuid ... |
00:48 |
skippy |
Those options worked on 3.5.2, but after upgrading to 3.6.1 each of those options reports "Invalid option" :( |
00:48 |
unwastable |
if all your replicated servers show the same 0x0000 in the changelog and the same gfid, that means the file is OK |
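For reference, the changelog check unwastable describes looks roughly like this on a replica-2 brick; the volume name gfsvol and the path are assumptions, but all-zero trusted.afr.* values are what "the same 0x0000" refers to:

    getfattr -d -m . -e hex /brick02/gfs/path/to/file
    # trusted.afr.gfsvol-client-0=0x000000000000000000000000
    # trusted.afr.gfsvol-client-1=0x000000000000000000000000
    # trusted.gfid=0xe7196100ebb54501b4b1b8d4ce14ae76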
00:49 |
PeterA |
i am able to do the getfattr on the node that is having the heal failures |
00:49 |
PeterA |
but only one copy of those files exist |
00:50 |
unwastable |
first of all, you have to find out whether the file is split brain or has a discrepancy in gfid |
00:51 |
unwastable |
PeterA whats your setup? |
00:51 |
PeterA |
file is not split brain |
00:51 |
PeterA |
we have a 3x2 volume |
00:52 |
PeterA |
one of the nodes was down for a couple of days |
00:52 |
PeterA |
and got all these heal-failed gfid |
00:52 |
PeterA |
the node was brought back up |
00:52 |
PeterA |
and running the full heal now |
00:52 |
PeterA |
but all the gfid of the heal-failed still exist |
00:52 |
PeterA |
wonder if i should wait for the heal to finish |
00:52 |
PeterA |
or if they would ever disappear |
00:53 |
JoeJulian |
heal-failed works like a log. They do not disappear. |
00:53 |
PeterA |
so how do we heal them? |
00:54 |
unwastable |
if you have already triggered the self heal, you have to wait |
00:54 |
PeterA |
ok |
00:54 |
PeterA |
otherwise? |
00:54 |
unwastable |
split brain and discrepancy in gfid must be done by hand |
00:55 |
PeterA |
ic |
00:55 |
unwastable |
ok.. why dont you copy and paste your getfattr |
00:56 |
PeterA |
http://pastie.org/9713153 |
00:56 |
glusterbot |
Title: #9713153 - Pastie (at pastie.org) |
00:57 |
|
rwheeler joined #gluster |
00:58 |
unwastable |
and the getfattr from another "OK" replicated server |
01:00 |
PeterA |
hmm there are none |
01:00 |
PeterA |
seems like that file only has one copy |
01:00 |
unwastable |
is this the only copy in the volume? |
01:01 |
|
topshare joined #gluster |
01:02 |
PeterA |
YES! |
01:03 |
glusterbot |
New news from newglusterbugs: [Bug 1162905] hardcoded gsyncd path causes geo-replication to fail on non-redhat systems <https://bugzilla.redhat.com/show_bug.cgi?id=1162905> || [Bug 1162910] mount options no longer valid: noexec, nosuid, noatime <https://bugzilla.redhat.com/show_bug.cgi?id=1162910> |
01:03 |
unwastable |
after you have confirmed the integrity of the data.... you can reset the AFR changelog to all 0x000 with setfattr |
01:04 |
PeterA |
how? |
01:05 |
unwastable |
if you can find the gfid in ./glusterfs ... if not.. make a copy of that file through a NFS / gluster native client |
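A hedged sketch of the changelog reset unwastable mentions, to be run only after the data has been verified; the trusted.afr.<volume>-client-<N> attribute names follow gluster's replication convention, and gfsvol plus the file path are assumptions:

    # zero the pending changelog counters for this file on the brick
    setfattr -n trusted.afr.gfsvol-client-0 -v 0x000000000000000000000000 /brick02/gfs/path/to/file
    setfattr -n trusted.afr.gfsvol-client-1 -v 0x000000000000000000000000 /brick02/gfs/path/to/file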
01:05 |
PeterA |
ok got it :) |
01:05 |
PeterA |
thanks! :D |
01:06 |
unwastable |
ok |
01:13 |
|
_dist joined #gluster |
01:18 |
unwastable |
PeterA: ic, it will be easier if you can empty out the recovered drive |
01:18 |
PeterA |
oh…then just let it resync all?? |
01:18 |
unwastable |
completely replace the brick |
01:19 |
unwastable |
and then do a full self heal on the affected server |
01:20 |
unwastable |
make sure the date/time is in sync among these replicated servers |
01:22 |
unwastable |
you can rsync the data, but you can't rsync the gfid hardlinks? or can you? a full self-heal will be easier in this case. how do you rsync in the first place? |
01:22 |
unwastable |
you can't conflict the nfs server with glusterfs |
01:22 |
|
leoc joined #gluster |
01:23 |
|
Pupeno_ joined #gluster |
01:24 |
|
lchill joined #gluster |
01:25 |
unwastable |
sorry.. my bad.. overlooked resync != rsync |
01:54 |
|
David_H_Smith joined #gluster |
01:54 |
|
meghanam joined #gluster |
01:54 |
|
meghanam_ joined #gluster |
02:09 |
|
gburiticato joined #gluster |
02:12 |
|
bala joined #gluster |
02:13 |
|
nishanth joined #gluster |
02:20 |
|
meghanam joined #gluster |
02:22 |
|
rjoseph joined #gluster |
02:24 |
|
meghanam_ joined #gluster |
02:31 |
|
russoisraeli joined #gluster |
02:47 |
|
bharata-rao joined #gluster |
02:47 |
|
nhayashi joined #gluster |
02:57 |
|
haomaiwa_ joined #gluster |
02:59 |
|
haomaiw__ joined #gluster |
03:04 |
|
hagarth joined #gluster |
03:06 |
|
haomaiwa_ joined #gluster |
03:08 |
|
haomaiwang joined #gluster |
03:29 |
|
haomai___ joined #gluster |
03:38 |
|
topshare joined #gluster |
03:39 |
|
topshare joined #gluster |
03:41 |
|
calisto joined #gluster |
03:43 |
|
kanagaraj joined #gluster |
03:45 |
|
RameshN joined #gluster |
03:46 |
|
shubhendu joined #gluster |
03:48 |
|
shubhendu_ joined #gluster |
03:59 |
|
badone joined #gluster |
04:08 |
|
ndarshan joined #gluster |
04:12 |
|
dusmant joined #gluster |
04:12 |
|
topshare joined #gluster |
04:13 |
|
topshare joined #gluster |
04:22 |
|
nbalachandran joined #gluster |
04:27 |
|
ababu joined #gluster |
04:34 |
|
soumya_ joined #gluster |
04:38 |
|
rafi joined #gluster |
04:38 |
|
Rafi_kc joined #gluster |
04:38 |
|
dusmantkp_ joined #gluster |
04:40 |
|
saurabh joined #gluster |
04:40 |
|
lalatenduM joined #gluster |
04:40 |
|
nishanth joined #gluster |
04:41 |
|
meghanam joined #gluster |
04:41 |
|
meghanam_ joined #gluster |
04:46 |
|
pp joined #gluster |
04:49 |
|
bala joined #gluster |
04:51 |
|
rjoseph joined #gluster |
04:51 |
|
karnan joined #gluster |
04:52 |
|
jiffin joined #gluster |
04:59 |
|
atinmu joined #gluster |
05:03 |
|
_Bryan_ joined #gluster |
05:04 |
|
spandit joined #gluster |
05:09 |
|
kshlm joined #gluster |
05:10 |
|
kumar joined #gluster |
05:13 |
|
karnan joined #gluster |
05:15 |
|
ppai joined #gluster |
05:19 |
|
rjoseph joined #gluster |
05:22 |
|
ricky-ticky joined #gluster |
05:26 |
|
hagarth joined #gluster |
05:30 |
|
ababu joined #gluster |
05:32 |
|
nshaikh joined #gluster |
05:37 |
|
kdhananjay joined #gluster |
05:38 |
|
kdhananjay left #gluster |
05:39 |
|
sahina joined #gluster |
05:46 |
|
overclk joined #gluster |
05:47 |
|
unwastable joined #gluster |
05:50 |
unwastable |
this is a gluster 3.3 1x2 replication. The 32bit client is getting a "NFS: Buggy server - nlink == 0!" any idea? |
05:52 |
Durzo |
this means the file has zero links |
05:52 |
Durzo |
technically this is impossible (for a file to have zero links), its possible that your NFS server filesystem needs an fsck |
05:53 |
Durzo |
preferably an offline fsck |
05:54 |
unwastable |
the bricks in the volume seem ok, but the error is showing up on 32bit clients |
05:54 |
Durzo |
until you can verify the integrity of your filesystem, its not worth looking any further |
05:55 |
unwastable |
you mean my local filesystem? or the filesystem in gluster's volume? |
05:55 |
Durzo |
if the nfs server is gluster, you should fsck all bricks |
05:55 |
unwastable |
but its only happening on 32bit clients, any idea? |
05:55 |
|
ababu joined #gluster |
05:55 |
Durzo |
are you implying the servers are 64bit ? |
05:56 |
unwastable |
yes |
05:56 |
Durzo |
http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options#nfs.enable-ino32 |
05:56 |
Durzo |
the servers are using 64bit inodes that the 32bit clients cannot address, you should turn on that option |
05:56 |
unwastable |
should i turn on ino32? |
05:57 |
unwastable |
let me try |
05:58 |
Durzo |
you would need to umount / remount on the client, you may also need to restart your gluster servers |
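Roughly, the steps Durzo is pointing at; the volume name gfsvol, the server name and the mount point are placeholders:

    gluster volume set gfsvol nfs.enable-ino32 on
    # on the 32bit client, remount so the 32bit inode numbers take effect
    umount /mnt/gfs
    mount -t nfs -o vers=3 glusterfs01:/gfsvol /mnt/gfs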
06:00 |
unwastable |
is this applicable to version 3.3 ? |
06:00 |
Durzo |
i assume so, its an nfs thing |
06:05 |
Durzo |
also, are you instructing your clients to mount nfs or using the glusterfs fuse? |
06:06 |
|
bennyturns joined #gluster |
06:07 |
unwastable |
nfs |
06:07 |
unwastable |
because of its cache features for small file access |
06:08 |
Durzo |
ok |
06:09 |
Durzo |
so is it working? |
06:10 |
unwastable |
checking now.. give me a moment |
06:10 |
|
atalur joined #gluster |
06:13 |
unwastable |
kernel: NFS: server glusterfs01 error: fileid changed |
06:14 |
unwastable |
kernel: fsid 0:1b: expected fileid 0x92d6fda0c0965364, got 0x1b1d95fd |
06:14 |
Durzo |
neat |
06:15 |
Durzo |
did you restart the servers & remount all your clients? |
06:15 |
unwastable |
any idea? |
06:15 |
unwastable |
i restarted the glusterd, and remounted the nfs client |
06:16 |
|
dusmantkp_ joined #gluster |
06:16 |
unwastable |
thats probably due to my rsync running, they are backup servers by the way |
06:17 |
|
anoopcs joined #gluster |
06:17 |
|
rjoseph joined #gluster |
06:25 |
|
ricky-ticky joined #gluster |
06:27 |
unwastable |
Durzo, are the fileid errors normal? |
06:29 |
Durzo |
probably not, no |
06:29 |
Durzo |
unsure, i dont use nfs |
06:29 |
Durzo |
all the info iv given you i got from google |
06:30 |
Durzo |
consider me your google proxy |
06:35 |
|
soumya_ joined #gluster |
06:36 |
unwastable |
why not nfs? |
06:38 |
unwastable |
does anybody know how to search for a channel here? |
06:39 |
Durzo |
why not nfs? because its 2014 and we use 64bit gluster clients |
06:39 |
|
overclk joined #gluster |
06:40 |
unwastable |
no no, the 64bit gluster client is slower than NFS |
06:40 |
unwastable |
i mean my case |
06:40 |
unwastable |
for small file access |
06:41 |
Durzo |
your also using gluster 3.3 |
06:41 |
Durzo |
thats pretty old school |
06:41 |
unwastable |
you mean a later version will have a faster gluster client? |
06:44 |
Durzo |
thats what iv seen |
06:44 |
unwastable |
beginning with 3.3 it started a new trend of placing gfid hardlinks in .glusterfs; i didn't know about the speed changes in the gluster native client |
06:44 |
Durzo |
i just upgraded to 3.5, and it's noticeably faster than 3.4, which was noticeably faster than 3.3 etc... |
06:46 |
unwastable |
any improvements in nfs? |
06:46 |
Durzo |
dont know, dont use it |
06:46 |
unwastable |
ever had split brain issues with the native client? |
06:46 |
Durzo |
yes |
06:47 |
unwastable |
selfheal? |
06:47 |
Durzo |
a lot on 3.3, not so many on 3.4 |
06:48 |
unwastable |
a windows client (through samba) keeps requesting a setfattr even on a read, which creates a dirty flag in the afr changelog. any idea? |
06:48 |
unwastable |
mounted with noatime,nodiratime |
06:49 |
Durzo |
none, sorry |
06:49 |
|
ctria joined #gluster |
06:49 |
unwastable |
what are your recommendations for small files (1,000 per second) with the native client? |
06:52 |
Durzo |
1000 per second? dont use gluster |
06:52 |
|
SOLDIERz joined #gluster |
06:52 |
unwastable |
yeah 1000 files per sec |
06:52 |
unwastable |
nfs? |
06:52 |
unwastable |
you mean the fs altogether? |
06:55 |
Durzo |
yeah id look into something else |
06:55 |
Durzo |
block replication, ceph maybe |
06:55 |
|
ricky-ticky joined #gluster |
06:59 |
unwastable |
too late for that |
07:03 |
|
David_H_Smith joined #gluster |
07:04 |
|
deepakcs joined #gluster |
07:04 |
|
David_H_Smith joined #gluster |
07:05 |
|
ppai joined #gluster |
07:11 |
|
Slydder joined #gluster |
07:11 |
Slydder |
morning all |
07:12 |
|
rjoseph joined #gluster |
07:12 |
|
bala joined #gluster |
07:13 |
unwastable |
morning |
07:15 |
|
Humble joined #gluster |
07:15 |
|
dusmant joined #gluster |
07:20 |
|
bala joined #gluster |
07:33 |
mator |
morning (rosetta / philae) |
07:36 |
|
Philambdo joined #gluster |
07:37 |
|
dusmant joined #gluster |
07:47 |
|
Fen2 joined #gluster |
07:53 |
|
ricky-ticky joined #gluster |
07:54 |
|
ppai joined #gluster |
08:14 |
|
haomaiwang joined #gluster |
08:19 |
|
ricky-ticky joined #gluster |
08:21 |
|
fsimonce joined #gluster |
08:25 |
|
Philambdo joined #gluster |
08:25 |
|
TvL2386 joined #gluster |
08:38 |
|
rjoseph joined #gluster |
08:43 |
|
vimal joined #gluster |
08:44 |
|
T0aD joined #gluster |
08:46 |
|
shubhendu_ joined #gluster |
08:47 |
|
dusmant joined #gluster |
08:49 |
|
sahina joined #gluster |
08:49 |
|
ndarshan joined #gluster |
08:52 |
|
bala joined #gluster |
09:00 |
|
cmorandin joined #gluster |
09:05 |
|
eightyeight joined #gluster |
09:05 |
|
cmorandin joined #gluster |
09:05 |
|
eryc joined #gluster |
09:05 |
|
eryc joined #gluster |
09:12 |
|
atalur joined #gluster |
09:13 |
|
kdhananjay joined #gluster |
09:16 |
|
ppai joined #gluster |
09:17 |
|
rgustafs joined #gluster |
09:17 |
|
lyang0 joined #gluster |
09:17 |
|
Slashman joined #gluster |
09:18 |
|
rjoseph joined #gluster |
09:23 |
|
sahina joined #gluster |
09:25 |
|
ndarshan joined #gluster |
09:25 |
|
shubhendu_ joined #gluster |
09:26 |
|
haomai___ joined #gluster |
09:31 |
|
dusmant joined #gluster |
09:33 |
|
bala joined #gluster |
09:36 |
|
Norky joined #gluster |
09:43 |
|
aravindavk joined #gluster |
10:11 |
|
dusmant joined #gluster |
10:12 |
|
Norky joined #gluster |
10:18 |
|
lkoranda joined #gluster |
10:20 |
|
kdhananjay joined #gluster |
10:23 |
|
kdhananjay left #gluster |
10:24 |
|
Champi joined #gluster |
10:26 |
|
Antitribu joined #gluster |
10:27 |
Antitribu |
Hi all, I've a bit of an issue with geo-replication getting completely out of whack. I'd like to force a resync; the docs suggest turning off indexing and restarting, but when i try i get: |
10:27 |
Antitribu |
gluster volume set share geo-replication.indexing off |
10:27 |
Antitribu |
volume set: failed: geo-replication.indexing cannot be disabled while geo-replication sessions exist |
10:27 |
Antitribu |
any suggestions? |
10:30 |
|
atalur joined #gluster |
10:35 |
glusterbot |
New news from newglusterbugs: [Bug 1163071] RHEL 5 noarch repo broken/missing <https://bugzilla.redhat.com/show_bug.cgi?id=1163071> |
10:35 |
|
ppai joined #gluster |
10:48 |
|
diegows joined #gluster |
10:53 |
|
lalatenduM joined #gluster |
11:07 |
|
ricky-ticky joined #gluster |
11:28 |
|
soumya joined #gluster |
11:28 |
|
Inflatablewoman joined #gluster |
11:31 |
|
m0zes joined #gluster |
11:32 |
|
meghanam__ joined #gluster |
11:33 |
|
meghanam joined #gluster |
11:38 |
|
edong23 joined #gluster |
11:42 |
|
calisto joined #gluster |
11:49 |
|
rjoseph joined #gluster |
11:51 |
davemc |
community meeting in 10 minutes on #gluster-meeting. |
11:52 |
|
ppai joined #gluster |
11:55 |
|
ricky-ticky joined #gluster |
11:59 |
|
tdasilva joined #gluster |
12:00 |
|
raghug joined #gluster |
12:01 |
raghug |
joakim_24: are you there? |
12:02 |
joakim_24 |
Yes |
12:02 |
|
pp joined #gluster |
12:03 |
raghug |
joakim_24: I'll start working on the rebalance hang issue you reported to Pranith |
12:03 |
raghug |
is there a bug filed on this? |
12:04 |
hagarth |
raghug: none afaik, we need to analyze logs to see what the problem is about. |
12:05 |
|
liquidat joined #gluster |
12:05 |
raghug |
hagarth: ok, I was wondering where should I start from |
12:05 |
|
pp joined #gluster |
12:23 |
|
edward1 joined #gluster |
12:25 |
|
soumya joined #gluster |
12:28 |
|
meghanam__ joined #gluster |
12:28 |
|
meghanam joined #gluster |
12:32 |
|
LebedevRI joined #gluster |
12:43 |
|
deepakcs left #gluster |
12:48 |
|
B21956 joined #gluster |
12:57 |
|
rjoseph joined #gluster |
13:00 |
|
harish joined #gluster |
13:00 |
|
Slashman_ joined #gluster |
13:02 |
|
pp joined #gluster |
13:05 |
glusterbot |
New news from newglusterbugs: [Bug 1163161] With afrv2 + ext4, lookups on directories with large offsets could result in duplicate/missing entries <https://bugzilla.redhat.com/show_bug.cgi?id=1163161> |
13:06 |
|
cultav1x joined #gluster |
13:13 |
|
meghanam_ joined #gluster |
13:13 |
|
meghanam joined #gluster |
13:15 |
|
calisto joined #gluster |
13:32 |
|
bala joined #gluster |
13:34 |
|
smohan joined #gluster |
13:37 |
|
mkzero left #gluster |
13:42 |
|
calisto joined #gluster |
13:49 |
|
smohan joined #gluster |
13:50 |
|
hagarth joined #gluster |
13:54 |
|
doubt joined #gluster |
13:55 |
|
calisto joined #gluster |
13:59 |
|
ramteid joined #gluster |
14:06 |
|
bene2 joined #gluster |
14:07 |
|
julim joined #gluster |
14:07 |
|
bennyturns joined #gluster |
14:16 |
|
haomaiwang joined #gluster |
14:17 |
|
troublesome joined #gluster |
14:18 |
troublesome |
Hi, I'm having a few issues with our gluster setup and was wondering if anyone could answer them in here.. What we are seeing is an lvm partition which is 15GB (usage around 14G); the problem however is that there are only 4GB of files on the partition, but there is a .glusterfs folder using up the rest, and it's constantly going up in size.. just two days ago (monday) we resized from 10GB -> 15GB and |
14:18 |
troublesome |
almost 4GB have been added in two days.. |
14:19 |
troublesome |
4GB more used, but only around 200MB files.. What might be the problem here? |
14:22 |
|
virusuy joined #gluster |
14:22 |
|
virusuy joined #gluster |
14:23 |
|
lalatenduM joined #gluster |
14:28 |
|
haomaiwang joined #gluster |
14:35 |
|
SOLDIERz joined #gluster |
14:36 |
|
rgustafs joined #gluster |
14:36 |
|
SOLDIERz_ joined #gluster |
14:37 |
|
asku joined #gluster |
14:38 |
|
SOLDIERz_ joined #gluster |
14:38 |
|
lalatenduM joined #gluster |
14:39 |
kkeithley |
the "files" in .glusterfs are just hard links to files in the brick. They don't consume any space |
14:41 |
|
nbalachandran joined #gluster |
14:53 |
|
Norky joined #gluster |
14:54 |
|
jmarley joined #gluster |
14:54 |
|
jmarley_ joined #gluster |
14:58 |
|
LebedevRI joined #gluster |
15:06 |
|
sahina joined #gluster |
15:13 |
|
_dist joined #gluster |
15:13 |
|
meghanam joined #gluster |
15:13 |
|
meghanam_ joined #gluster |
15:18 |
troublesome |
kkeithley - but the files on the drive do not use up more than 4.2GB.. |
15:19 |
troublesome |
the used space on the partition has gone from 5 - 14GB, but the actual files have only gone up by 400MB |
15:19 |
|
tedski joined #gluster |
15:20 |
tedski |
i had a 2 node 3.2 cluster that i was trying to move to different servers in order to stage an upgrade to 3.5. i brought up 2 new servers, peer probed them, then add-bricked to them. that caused my 2-node replicated cluster to become a 2 x 2 Distributed-Replicated cluster. |
15:20 |
|
saurabh joined #gluster |
15:21 |
tedski |
now i want to get back to 2 node replicated, but of course since i added the 2 new nodes at the end, they are a replication pair with half of the files of the entire cluster. |
15:21 |
tedski |
what's the best course of action to get all data onto the 2 new nodes? |
15:21 |
tedski |
do i remove bricks 2 and 4, then rebalance? |
15:21 |
tedski |
then replace-brick 1 to 4? |
15:22 |
|
SOLDIERz joined #gluster |
15:24 |
|
SOLDIERz joined #gluster |
15:24 |
|
TheBrayn joined #gluster |
15:24 |
TheBrayn |
hi |
15:24 |
glusterbot |
TheBrayn: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer. |
15:25 |
|
soumya joined #gluster |
15:25 |
|
jobewan joined #gluster |
15:25 |
|
xavih joined #gluster |
15:26 |
|
wushudoin joined #gluster |
15:26 |
|
SOLDIERz joined #gluster |
15:28 |
|
SOLDIERz joined #gluster |
15:29 |
|
jmarley joined #gluster |
15:30 |
|
SOLDIERz_ joined #gluster |
15:33 |
|
SOLDIERz_ joined #gluster |
15:35 |
|
SOLDIERz_ joined #gluster |
15:35 |
|
smohan_ joined #gluster |
15:37 |
|
lmickh joined #gluster |
15:38 |
Inflatablewoman |
What are the advantages of using the glusterfs client to connect over nfs ? |
15:38 |
|
marcoceppi joined #gluster |
15:39 |
tedski |
Inflatablewoman: native client has auto failover and will also do concurrent connections for higher performance. |
15:40 |
Inflatablewoman |
ahh cool. |
15:40 |
Inflatablewoman |
Thanks for the tip. |
15:41 |
tedski |
np |
15:41 |
Inflatablewoman |
but NFS is not "bad" right? |
15:41 |
tedski |
depends on the use case, of course. managing failover with nfs is a pain. |
15:41 |
Inflatablewoman |
ahh ok |
15:42 |
tedski |
see https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/ch09s04.html for the process |
15:42 |
glusterbot |
Title: 9.4. Configuring Automated IP Failover for NFS and SMB (at access.redhat.com) |
15:42 |
tedski |
whereas with the native client, i just add all gluster nodes to a dns round robin pool |
15:42 |
Inflatablewoman |
So the client would understand the connected node died and then connect to a new one? |
15:43 |
Inflatablewoman |
but nfs you have to do some magic yourself. |
15:43 |
Inflatablewoman |
I have added my Gluster to CoreOS using nfs as it has no support for GlusterFS yet. |
15:43 |
tedski |
yeah, and it'll reconnect when it's back. it also detects changes to the cluster while running. |
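A minimal fstab line for the setup tedski describes; the round-robin DNS name, volume name and mount point are placeholders, and backupvolfile-server is assumed to be available as the fuse mount's explicit fallback option on this version:

    gluster.example.com:/gfsvol  /mnt/gfs  glusterfs  defaults,_netdev,backupvolfile-server=gluster02.example.com  0 0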
15:43 |
Inflatablewoman |
good info |
15:43 |
Inflatablewoman |
thanks |
15:44 |
Inflatablewoman |
without the client, I have to do some nfs magic somehow... ok. |
15:45 |
Inflatablewoman |
tedski: thanks for the info, very helpful! |
15:45 |
tedski |
no prob! |
15:45 |
Inflatablewoman |
off to #coreos to badger them about GlusterFS support. ;) |
15:45 |
tedski |
if you have a clue about my question, that'd be of much help :) |
15:47 |
|
aravindavk joined #gluster |
15:47 |
Inflatablewoman |
sadly, I am a complete newbie to Gluster. :/ |
15:51 |
|
calisto joined #gluster |
15:54 |
|
bala joined #gluster |
16:05 |
|
rwheeler joined #gluster |
16:13 |
|
rjoseph joined #gluster |
16:19 |
dastar_ |
hi |
16:19 |
glusterbot |
dastar_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer. |
16:20 |
dastar_ |
i have a little question : kill -HUP `cat /var/run/glusterd.pid` doesn't reopen the log file, is that normal ? |
16:23 |
|
daMaestro joined #gluster |
16:30 |
|
davemc joined #gluster |
16:33 |
_dist |
Inflatablewoman: I don't see why coreos wouldn't support gluster |
16:50 |
|
_Bryan_ joined #gluster |
16:55 |
|
d4nku joined #gluster |
16:56 |
|
hagarth joined #gluster |
17:01 |
|
soumya joined #gluster |
17:09 |
|
_shaps_ joined #gluster |
17:09 |
_shaps_ |
Hi, I've got a quick question |
17:09 |
Inflatablewoman |
_dist: They will support it. It's just upstream. |
17:10 |
_dist |
yeap, I figured |
17:10 |
_dist |
_shaps_: best to just ask it |
17:10 |
_shaps_ |
yeah, was going to, just checking if anyone was there :D |
17:11 |
Inflatablewoman |
https://github.com/coreos/coreos-overlay/pull/855 |
17:11 |
glusterbot |
Title: Added glusterfs by asiragusa · Pull Request #855 · coreos/coreos-overlay · GitHub (at github.com) |
17:11 |
_shaps_ |
I've got a geo-rep setup which copies data across 3 DCs, it works fine, but I've just spotted that both the master and slave servers (the main ones) ran out of inodes |
17:12 |
_shaps_ |
and looking through the directories which have lots of files, the "/var/lib/misc/glusterfsd/[volname]/[connection]/.processed |
17:12 |
|
calum_ joined #gluster |
17:12 |
|
davemc joined #gluster |
17:12 |
_shaps_ |
has got thousands of xsync changelog files |
17:13 |
_shaps_ |
any idea if it is possible to delete files from there? |
17:14 |
_shaps_ |
using glusterfs 3.6.0beta3, as I ran into a bug while using 3.5.2 which was preventing geo-rep from working properly |
17:14 |
_shaps_ |
and that bug has been fixed in that version |
17:17 |
hagarth |
_shaps_: might be a good idea to tar files in .processed and then delete those files. The geo-replication developers can provide a better recommendation .. might be a good question for gluster-devel |
17:18 |
_shaps_ |
hagarth: ok, I'll bring the question to the dev then. Thanks a lot |
17:19 |
tedski |
i asked above, sorry for repeating, but, I had a 2 node cluster with replicated bricks. i added 2 nodes and add-brick'd to them creating a distributed-replication cluster. i didn't realize the distribution order, so now i have the 2 old nodes as a replication pair with half the files and 2 new nodes as replication pair with the other half of the files |
17:19 |
tedski |
what's the best way to move all files to the 2 new nodes and go back to a replicated pair? |
17:20 |
tedski |
i was thinking remove-brick 2 and 4 |
17:20 |
tedski |
then replace-brick 1 to 4 |
17:29 |
|
Xanacas joined #gluster |
17:36 |
glusterbot |
New news from newglusterbugs: [Bug 1151384] Rebalance fails to complete - stale file handles after 202,908 files <https://bugzilla.redhat.com/show_bug.cgi?id=1151384> |
17:36 |
JoeJulian |
tedski: you should be able to remove-brick+start 1 and 2, the "old" bricks, and the files should migrate to the new (3 and 4) bricks. |
17:37 |
JoeJulian |
I say *should* because I've never successfully had a brick migration. |
17:37 |
tedski |
JoeJulian: not sure i understand. if i remove-brick, doesn't it just disappear and the files are no longer available to clients? |
17:38 |
|
nshaikh joined #gluster |
17:38 |
tedski |
JoeJulian: what do you mean by '+start 1 and 2'? |
17:39 |
tedski |
right now i have 4 bricks. (1&2)+(3&4) where & denotes replication and + denotes distribution |
17:39 |
tedski |
and they're all cruising along just fine |
17:39 |
tedski |
i want the end state to be (3&4) and that's it |
17:39 |
JoeJulian |
dastar_: I'm not sure if that's normal for glusterd or not. I thought that would reopen /var/log/glusterfs/etc-glusterfs-glusterd.vol.log but I use copytruncate so I never worry about re-opening logs. |
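The copytruncate approach JoeJulian mentions avoids having to make glusterd reopen its logs at all; a sketch of a logrotate stanza, with only the log path taken from his message and everything else assumed:

    /var/log/glusterfs/*.log {
        weekly
        rotate 4
        compress
        missingok
        copytruncate
    }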
17:41 |
JoeJulian |
tedski: volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... [start|stop|status|commit|force] - remove brick from volume <VOLNAME> |
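On 3.3 and later, removing a whole replica pair the way JoeJulian suggests is a start/status/commit sequence; the volume and brick names below are illustrative:

    gluster volume remove-brick gfsvol server1:/export/brick server2:/export/brick start
    gluster volume remove-brick gfsvol server1:/export/brick server2:/export/brick status
    # commit only once status reports the data migration as completed
    gluster volume remove-brick gfsvol server1:/export/brick server2:/export/brick commit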
17:42 |
tedski |
JoeJulian: oh, i forgot to remention i'm on 3.2 :( |
17:42 |
tedski |
JoeJulian: that's the whole point of this move... to upgrade |
17:43 |
tedski |
volume remove-brick <VOLNAME> <BRICK> ... - remove brick from volume <VOLNAME> |
17:43 |
|
calisto joined #gluster |
17:43 |
JoeJulian |
yeah, you're SOL if you don't want to just upgrade in-place. |
17:43 |
tedski |
i thought i can't upgrade in place for 3.2->3.3 |
17:44 |
JoeJulian |
You can, it just requires a short amount of down time. |
17:44 |
tedski |
do you think my remove-brick, then replace-brick plan would fail? |
17:44 |
tedski |
oh, just the downtime to rearrange to support the new fs layout |
17:44 |
tedski |
i.e. /etc/glusterd to /var/lib/glusterd |
17:44 |
tedski |
and run the upgrade process |
17:44 |
tedski |
hrmm |
17:45 |
tedski |
but, 3.2 clients are incompatible with 3.3, right? |
17:45 |
JoeJulian |
There's not even a layout change, per se. It's just that the client and server rpc changed. |
17:45 |
JoeJulian |
right |
17:45 |
JoeJulian |
That's why the down time. You have to upgrade both. |
17:45 |
tedski |
yeah, upgrading the clients in this environment is problematic... readonly images requiring a reboot of the entire prod env |
17:45 |
tedski |
heh |
17:45 |
JoeJulian |
btw, I'd jump straight to 3.5 |
17:46 |
tedski |
that's the plan, i wanted to move these bricks to these new hosts and do a cutover at some point |
17:46 |
tedski |
so, no reasonable way to go from distributed-replicated to replicated? |
17:46 |
tedski |
hrmm |
17:46 |
|
pp joined #gluster |
17:46 |
JoeJulian |
Not back then. |
17:47 |
JoeJulian |
Ooh, |
17:47 |
JoeJulian |
You could parallel bricks on the "new" servers, doing a replace-brick and waiting for the self-heal to finish. |
17:48 |
JoeJulian |
~pasteinfo | tedski |
17:48 |
glusterbot |
tedski: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here. |
17:48 |
tedski |
okay, hangon |
17:49 |
tedski |
just the bricks i want to migrate, yah? |
17:49 |
tedski |
err just the ovlume |
17:51 |
tedski |
JoeJulian: https://dpaste.de/WP5P |
17:51 |
glusterbot |
Title: dpaste.de: Snippet #291027 (at dpaste.de) |
17:51 |
tedski |
let me know if you need more |
17:58 |
|
raghug joined #gluster |
17:59 |
|
geaaru joined #gluster |
17:59 |
tedski |
JoeJulian: how do you parallel bricks? |
18:02 |
geaaru |
hi, i'm trying to use geo-replication with gluster v.3.5.2 but when I try to remove ignore_deletes option with config !ignore_deletes i receive a segfault from gluster, while if I use config '!ignore_deletes' i receive 'Invalid option'. |
18:02 |
geaaru |
what is the right syntax for disable ignore_deletes option ? thanks in advance |
18:03 |
|
cygoonda joined #gluster |
18:03 |
|
quique joined #gluster |
18:05 |
quique |
how do i create a volume from a directory with an existing .glusterfs directory (it was a brick)? |
18:05 |
quique |
and i don't want to lose the data |
18:05 |
cygoonda |
Hi all, I’m new to gluster and just installed on centos, I see the “glusterfs” command in my path but not “gluster”. Can somebody tell me where the gluster command is installed? |
18:05 |
kkeithley |
you need to install the glusterfs-cli RPM |
18:06 |
kkeithley |
cygoonda: ^^^ |
18:06 |
cygoonda |
kkeithley: Thanks! Will give that a try. |
18:11 |
|
PeterA joined #gluster |
18:21 |
JoeJulian |
tedski: sorry, got called away... https://dpaste.de/FOUQ |
18:21 |
glusterbot |
Title: dpaste.de: Snippet #291033 (at dpaste.de) |
18:21 |
tedski |
JoeJulian: ahh, i thought about that, too |
18:22 |
JoeJulian |
I think that's the only way to do what you're trying to do. |
18:23 |
JoeJulian |
geaaru: gluster volume reset $vol $option |
18:23 |
tedski |
how about this? heh... https://dpaste.de/Y58p |
18:23 |
glusterbot |
Title: dpaste.de: Snippet #291034 (at dpaste.de) |
18:24 |
tedski |
JoeJulian: since .51 and .52 are exactly the same (replicated) |
18:24 |
tedski |
and .53 and .54 are exactly the same |
18:24 |
tedski |
removing .52 and .54 just reduces availability |
18:24 |
|
calum_ joined #gluster |
18:24 |
tedski |
but keeps consistency |
18:25 |
JoeJulian |
You can't remove 1 brick in a replicated volume. |
18:25 |
tedski |
oh? |
18:25 |
JoeJulian |
You have to remove entire replica set. |
18:25 |
tedski |
ahh |
18:25 |
tedski |
okay, that solves that entirely then |
18:25 |
tedski |
cool |
18:25 |
tedski |
thanks for the help |
18:26 |
JoeJulian |
You're welcome. |
18:26 |
tedski |
so, that makes me think of a semi-related question |
18:26 |
tedski |
if i lose a brick in a replica set i.e. the host goes down for an hour |
18:26 |
geaaru |
JoeJulian: ignore_deletes is not a gluster option but a geo-replication option, if i try to reset option ignore_deletes from volume I receive a message like this: volume reset: failed: Option ignore_deletes does not exist |
18:26 |
tedski |
or the local disk dies |
18:26 |
tedski |
when i bring it back up... will it automatically re-replicate to that empty brick? |
18:27 |
JoeJulian |
geaaru: Ah, I haven't really played with geo-rep. |
18:27 |
JoeJulian |
tedski: yes and no. |
18:27 |
JoeJulian |
3.2, no. |
18:28 |
tedski |
so, if i lose the disks that the brick is on in a replica set, what's the recovery path? |
18:28 |
JoeJulian |
With 3.3+ there's a self-heal daemon that will re-replicate files that need re-replicated. |
18:28 |
tedski |
bring up peer with new empty disks |
18:28 |
tedski |
then rebalance? |
18:28 |
JoeJulian |
If you lose the contents of the brick entirely, you'll need to set the volume-id ,,(extended attribute) before the brick daemon (glusterfsd) will start for that brick. |
18:28 |
glusterbot |
To read the extended attributes on the server: getfattr -m . -d -e hex {filename} |
18:29 |
JoeJulian |
Once started, the self-heal daemon will re-replicate. |
18:29 |
tedski |
set the volume-id where? |
18:30 |
JoeJulian |
@volume-id |
18:30 |
|
abcrawf joined #gluster |
18:30 |
JoeJulian |
hmm, I need to define that one... |
18:30 |
tedski |
okay, so, what if the contents of the brick aren't lost, they're just stale |
18:31 |
abcrawf |
FYI the documentation page on the site links to http://www.gluster.org/architecture.html which 404s |
18:33 |
JoeJulian |
@learn volume-id as The volume-id is an extended attribute on the brick root which identifies that brick for use with a specific volume. If that attribute is missing, gluster assumes that the brick did not mount and will not start the brick service for that brick. To set the id on a replaced brick, read it from another brick "getfattr -n trusted.volume-id -d -e hex $brick_root" and set it on the new brick with "setfattr -n trusted.volume-id -v |
18:33 |
JoeJulian |
$value $brick_root". |
18:33 |
glusterbot |
JoeJulian: Error: No closing quotation |
18:33 |
tedski |
also, JoeJulian, i appreciate the help and apologize for the barrage of questions... but... getfattr -m . -d -e hex <file in brick> returns nothing |
18:34 |
JoeJulian |
@learn volume-id as The volume-id is an extended attribute on the brick root which identifies that brick for use with a specific volume. If that attribute is missing, gluster assumes that the brick did not mount and will not start the brick service for that brick. To set the id on a replaced brick, read it from another brick \"getfattr -n trusted.volume-id -d -e hex $brick_root\" and set it on the new brick with \"setfattr -n trusted.volume-id |
18:34 |
JoeJulian |
-v $value $brick_root\". |
18:34 |
glusterbot |
JoeJulian: The operation succeeded. |
18:34 |
JoeJulian |
tedski: must be root |
18:34 |
tedski |
derp |
18:34 |
JoeJulian |
abcrawf: thanks, I'll let people know. |
18:35 |
tedski |
hrmm |
18:35 |
tedski |
no volume-id on the brick root |
18:37 |
JoeJulian |
@update volume-id s/trusted.volume-id/trusted.glusterfs.volume-id/g |
18:37 |
JoeJulian |
@change volume-id s/trusted.volume-id/trusted.glusterfs.volume-id/g |
18:37 |
glusterbot |
JoeJulian: Error: The command "change" is available in the Factoids, Herald, and Topic plugins. Please specify the plugin whose command you wish to call by using its name as a command before "change". |
18:37 |
JoeJulian |
@factoids change volume-id s/trusted.volume-id/trusted.glusterfs.volume-id/g |
18:37 |
glusterbot |
JoeJulian: Error: 's/trusted.volume-id/trusted.glusterfs.volume-id/g' is not a valid key id. |
18:38 |
JoeJulian |
@factoids change volume-id 1 s/trusted.volume-id/trusted.glusterfs.volume-id/g |
18:38 |
glusterbot |
JoeJulian: The operation succeeded. |
18:38 |
JoeJulian |
@volume-id |
18:38 |
glusterbot |
JoeJulian: The volume-id is an extended attribute on the brick root which identifies that brick for use with a specific volume. If that attribute is missing, gluster assumes that the brick did not mount and will not start the brick service for that brick. To set the id on a replaced brick, read it from another brick \"getfattr -n trusted.glusterfs.volume-id -d -e hex $brick_root\" and set it on |
18:38 |
glusterbot |
JoeJulian: the new brick with \"setfattr -n trusted.glusterfs.volume-id |
18:38 |
JoeJulian |
seriously, glusterbot... you kept the backslashes??? |
18:38 |
JoeJulian |
@factoids change volume-id 1 s/\//g |
18:38 |
glusterbot |
JoeJulian: Error: 's/\\//g' is not a valid regular expression. |
18:38 |
JoeJulian |
@factoids change volume-id 1 s/\\//g |
18:38 |
glusterbot |
JoeJulian: The operation succeeded. |
18:38 |
JoeJulian |
@volume-id |
18:38 |
glusterbot |
JoeJulian: The volume-id is an extended attribute on the brick root which identifies that brick for use with a specific volume. If that attribute is missing, gluster assumes that the brick did not mount and will not start the brick service for that brick. To set the id on a replaced brick, read it from another brick "getfattr -n trusted.glusterfs.volume-id -d -e hex $brick_root" and set it on |
18:38 |
glusterbot |
JoeJulian: the new brick with "setfattr -n trusted.glusterfs.volume-id |
18:38 |
JoeJulian |
That's almost better. |
18:39 |
JoeJulian |
@factoids change volume-id 1 s/$/"./ |
18:39 |
glusterbot |
JoeJulian: The operation succeeded. |
18:39 |
tedski |
JoeJulian: https://dpaste.de/TOkY |
18:39 |
JoeJulian |
@volume-id |
18:39 |
glusterbot |
JoeJulian: The volume-id is an extended attribute on the brick root which identifies that brick for use with a specific volume. If that attribute is missing, gluster assumes that the brick did not mount and will not start the brick service for that brick. To set the id on a replaced brick, read it from another brick "getfattr -n trusted.glusterfs.volume-id -d -e hex $brick_root" and set it on |
18:39 |
glusterbot |
JoeJulian: the new brick with "setfattr -n trusted.glusterfs.volume-id". |
18:39 |
|
smohan joined #gluster |
18:39 |
glusterbot |
Title: dpaste.de: Snippet #291038 (at dpaste.de) |
18:39 |
|
kiwnix joined #gluster |
18:39 |
JoeJulian |
Sorry for the spam. |
18:40 |
JoeJulian |
Oh, right... 3.2 |
18:40 |
tedski |
sanitization fail... https://dpaste.de/ANjz |
18:40 |
glusterbot |
Title: dpaste.de: Snippet #291039 (at dpaste.de) |
18:40 |
JoeJulian |
With 3.2, gluster will happily replicate to your root partition if your brick fails to mount. |
18:40 |
tedski |
hahaha |
18:41 |
tedski |
wow, i really need to prioritize this upgrade |
18:41 |
JoeJulian |
There are a lot of stress-saving features that have been added since then. |
18:42 |
tedski |
and this was all to make room for the new gluster hosts |
18:42 |
tedski |
okay, so, let's say the brick is mounted, but the contents are stale |
18:42 |
tedski |
and i bring the node back up |
18:43 |
JoeJulian |
That's what those "trusted.afr.*" attributes are for. They keep track of pending changes destined for *other* bricks. That way when a brick returns, glusterfs knows that there are changes that need to be written to it. |
18:44 |
tedski |
okay, cool |
18:44 |
|
calisto joined #gluster |
18:44 |
JoeJulian |
With 3.2 you'll have to walk a mount (find $mountpoint -exec stat {} \; >/dev/null) to trigger the self-heal. |
18:44 |
tedski |
okay, yeah, i've had to do that once already |
18:44 |
tedski |
so, i'm familiar with that |
18:46 |
tedski |
well, this has been SUPER helpful. |
18:46 |
tedski |
much much thanks, JoeJulian |
18:47 |
abcrawf |
JoeJulian: the Developers > Gluster Testing link is also broken |
18:49 |
|
andreask joined #gluster |
18:56 |
|
ricky-ticky joined #gluster |
18:59 |
|
diegows joined #gluster |
19:01 |
JoeJulian |
abcrawf: Where are you seeing that? I don't even see a "Gluster Testing" link. |
19:01 |
PeterA |
is there a max number of heal-failed entries to list? |
19:01 |
PeterA |
seems like it's maxed at 1024? |
19:03 |
JoeJulian |
1024 |
19:03 |
PeterA |
ic!! |
19:03 |
JoeJulian |
And remember, it's just a log so it's even possible to have the same entry showing up over and over again. |
19:03 |
PeterA |
oh….so how should I get rid of it? |
19:04 |
JoeJulian |
IIRC, restarting all glusterd clears that and the split-brain lists. |
19:05 |
JoeJulian |
I just check for new entries since I last investigated. |
19:05 |
|
n-st joined #gluster |
19:05 |
PeterA |
u mean restart all glusterd on ALL nodes or just the node having heal failed? |
19:06 |
PeterA |
i restarted the glusterd on the node having heal-failed and still having that list... |
19:06 |
|
_Bryan_ joined #gluster |
19:11 |
|
krullie joined #gluster |
19:14 |
|
elico joined #gluster |
19:18 |
JoeJulian |
all |
19:23 |
PeterA |
restarted all….the heal-failed still show 1024 gfids... |
19:24 |
PeterA |
oops…less….209 now.... |
19:27 |
PeterA |
oh…list cleared! |
19:27 |
PeterA |
thanks! |
19:30 |
PeterA |
oh….list pop out again….:( |
19:33 |
|
redbeard joined #gluster |
19:38 |
PeterA |
when i look into the content of the gfid under .glusterfs, i can guess the location of the file. Should I remove it from the gfs? |
19:40 |
PeterA |
or is that safe to remove/delete the gfid from .glusterfs?? |
19:51 |
PeterA |
so the bunch of heal-failed entries keep popping up every time i restart glusterd |
19:51 |
PeterA |
meaning it still keeps looking for the same set of gfids even though the files on gfs are already gone |
19:57 |
|
ricky-ticky joined #gluster |
19:57 |
|
DV joined #gluster |
20:17 |
|
XpineX joined #gluster |
20:18 |
|
JMWbot joined #gluster |
20:18 |
JMWbot |
I am JMWbot, I try to help remind johnmark about his todo list. |
20:18 |
JMWbot |
Use: JMWbot: @remind <msg> and I will remind johnmark when I see him. |
20:18 |
JMWbot |
/msg JMWbot @remind <msg> and I will remind johnmark _privately_ when I see him. |
20:18 |
JMWbot |
The @list command will list all queued reminders for johnmark. |
20:18 |
JMWbot |
The @about command will tell you about JMWbot. |
20:19 |
purpleidea |
JMWbot: @remind now #p4h knows about JMWbot |
20:19 |
JMWbot |
purpleidea: Okay, I'll remind johnmark when I see him. [id: 10] |
20:20 |
purpleidea |
JMWbot: @list |
20:20 |
JMWbot |
@10 purpleidea reminded johnmark to: now #p4h knows about JMWbot [19 sec(s) ago] |
20:25 |
|
DougBishop joined #gluster |
20:28 |
|
jobewan joined #gluster |
20:30 |
|
vxitch left #gluster |
20:38 |
davemc |
boy am I glad I don't have a bot tracking me |
20:39 |
|
smohan joined #gluster |
20:40 |
|
anastymous joined #gluster |
20:40 |
|
plarsen joined #gluster |
20:45 |
|
abcrawf left #gluster |
21:09 |
semiosis |
purpleidea: ^^^ |
21:40 |
|
_Bryan_ joined #gluster |
21:40 |
|
n-st joined #gluster |
21:44 |
|
hchiramm joined #gluster |
21:52 |
|
DV joined #gluster |
22:31 |
purpleidea |
davemc: then behave and it's all good :) |
22:31 |
purpleidea |
;) |
22:31 |
|
tedski left #gluster |
22:39 |
|
_shaps_ joined #gluster |
22:45 |
|
smohan joined #gluster |
23:02 |
davemc |
purpleidea, can I define what behave means? |
23:35 |
|
russoisraeli joined #gluster |
23:54 |
|
_Bryan_ joined #gluster |