
IRC log for #gluster, 2015-09-17


All times shown according to UTC.

Time Nick Message
00:08 dlambrig joined #gluster
00:55 EinstCrazy joined #gluster
00:58 zhangjn joined #gluster
01:02 johndescs_ joined #gluster
01:03 gildub joined #gluster
01:19 badone_ joined #gluster
01:27 badone__ joined #gluster
01:29 baojg joined #gluster
01:31 nangthang joined #gluster
01:36 zhangjn joined #gluster
01:45 badone joined #gluster
01:45 64MADU61I joined #gluster
01:53 EinstCrazy joined #gluster
01:54 baojg joined #gluster
02:01 haomaiwa_ joined #gluster
02:05 nangthang joined #gluster
02:42 shaunm joined #gluster
02:45 gildub joined #gluster
02:52 haomai___ joined #gluster
03:02 haomaiwang joined #gluster
03:03 [7] joined #gluster
03:23 beeradb joined #gluster
03:23 baojg joined #gluster
03:24 auzty joined #gluster
03:29 EinstCrazy joined #gluster
03:31 yangfeng joined #gluster
03:33 baojg joined #gluster
03:37 ccoffey joined #gluster
04:02 haomaiwang joined #gluster
04:19 harish joined #gluster
04:23 kalzz joined #gluster
04:34 hchiramm_home joined #gluster
04:43 baojg joined #gluster
04:48 baojg joined #gluster
04:56 ccoffey joined #gluster
05:00 Philambdo joined #gluster
05:02 haomaiwa_ joined #gluster
05:19 onorua joined #gluster
05:19 vmallika joined #gluster
05:28 EinstCrazy joined #gluster
05:37 haomaiwa_ joined #gluster
05:39 baojg joined #gluster
05:42 yangfeng joined #gluster
05:44 vimal joined #gluster
06:00 ctria joined #gluster
06:01 haomaiwa_ joined #gluster
06:03 arcolife joined #gluster
06:10 badone_ joined #gluster
06:14 gem joined #gluster
06:16 mhulsman joined #gluster
06:18 baojg joined #gluster
06:20 nishanth joined #gluster
06:22 jtux joined #gluster
06:22 mhulsman1 joined #gluster
06:26 baojg joined #gluster
06:29 jwd joined #gluster
06:33 rgustafs joined #gluster
06:34 badone_ joined #gluster
06:53 Trefex joined #gluster
07:01 haomaiwa_ joined #gluster
07:17 fsimonce joined #gluster
07:17 mhulsman joined #gluster
07:26 dizquierdo joined #gluster
07:28 Pupeno joined #gluster
07:37 Pupeno joined #gluster
07:44 arcolife joined #gluster
07:47 jcastill1 joined #gluster
07:52 jcastillo joined #gluster
07:52 [Enrico] joined #gluster
07:54 ashka joined #gluster
07:55 maveric_amitc_ joined #gluster
07:55 amitc__ joined #gluster
07:55 free_amitc_ joined #gluster
08:01 haomaiwang joined #gluster
08:17 Booo22 joined #gluster
08:17 Booo22 Hi. I currently have random IO errors, the system was running fine for months. I can also see it's trying to perform some heals, but even if I touch the files nothing happens.
08:21 LebedevRI joined #gluster
08:24 veleno joined #gluster
08:34 Slashman joined #gluster
08:37 badone_ joined #gluster
08:38 Booo22 Right, full disks, that's probably my problem.
08:45 Booo22 Is removing files inside the brick a valid option
08:45 Booo22 removing a 100GB file seems to take a century
08:46 Booo22 on a client
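Nobody answered this in the log; for context, a file deleted straight off a brick keeps a hard link under the brick's .glusterfs directory, so the space only comes back once that link is removed too (and on a replicated volume the same has to be done on every replica). A last-resort sketch with hypothetical paths, for when deleting through a client mount is not an option:

    ls -i /bricks/b1/path/to/bigfile                    # note the inode number
    find /bricks/b1/.glusterfs -inum <INODE> -delete    # drop the gfid hard link (replace <INODE> with that number)
    rm /bricks/b1/path/to/bigfile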
08:50 anujSharma joined #gluster
08:52 hagarth joined #gluster
08:53 anujSharma testing glusterfs on single machine linux ubuntu 14.04
08:53 anujSharma can anyone help me or guide me please?
08:55 _shaps_ joined #gluster
09:01 haomaiwa_ joined #gluster
09:12 Philambdo joined #gluster
09:20 hgichon joined #gluster
09:25 lbarfield joined #gluster
09:30 natarej_ joined #gluster
09:31 natarej joined #gluster
09:33 LebedevRI joined #gluster
09:52 ashiq joined #gluster
09:56 hgowtham joined #gluster
10:01 haomaiwa_ joined #gluster
10:04 hgichon
10:18 haomaiwa_ joined #gluster
10:24 baojg joined #gluster
10:26 alghost joined #gluster
10:26 alghost Hello
10:26 glusterbot alghost: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:31 hchiramm_home joined #gluster
10:47 mhulsman1 joined #gluster
10:48 kkeithley1 joined #gluster
10:48 mhulsman joined #gluster
10:48 RameshN joined #gluster
11:01 haomaiwa_ joined #gluster
11:02 timbyr_ joined #gluster
11:09 muneerse joined #gluster
11:09 ira joined #gluster
11:37 SeerKan joined #gluster
11:38 SeerKan Hi, any way to mount a glusterfs storage (with the fuse mount) through a haproxy lb ?
11:45 [Enrico] joined #gluster
12:18 jtux joined #gluster
12:23 haomaiwa_ joined #gluster
12:28 EinstCrazy What do you use haproxy for ?
12:30 EinstCrazy SeerKan You use haproxy for data or management?
12:31 LebedevRI joined #gluster
12:32 SeerKan for load balancing and to access services running in the private network from outside like mysql. I need to access the glusterfs storage from outside the network for a limited time and was wondering if haproxy can do that... if not I am stuck with an nfs share
12:34 ndevos SeerKan: the gluster-fuse client needs to talk to the bricks of the volume directly, you can not really pass that through a proxy
12:35 SeerKan ndevos: thanks for confirming what I was suspecting... nfs it is then :)
12:35 shyam joined #gluster
12:36 ndevos SeerKan: yeah, nfs only for now, there have been ideas to make it work through a proxy, but no actual work that I am aware of
12:37 ndevos s/actual work/functional proof of concept or examples/
12:37 glusterbot ndevos: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
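For reference, a minimal sketch of the NFS fallback being discussed, assuming a volume named gv0 exported by server1; the built-in Gluster NFS server speaks NFSv3 over TCP only, so both options are spelled out:

    mount -t nfs -o vers=3,proto=tcp server1:/gv0 /mnt/gv0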
12:40 SeerKan the nfs type mount uses random ip's or only 2049 + 38465 all the time ?
12:40 SeerKan need to know what ports to keep open :)
12:40 SeerKan * sorry, random ports
12:41 dgandhi joined #gluster
12:41 skoduri joined #gluster
12:43 ndevos ~ports | SeerKan
12:43 glusterbot SeerKan: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
12:46 SeerKan thanks
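A sketch of iptables rules matching the ports glusterbot lists above; the brick range assumes a 3.4+ install with at most a handful of bricks per server, so widen it as needed:

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (24008 only for rdma)
    iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT   # bricks, one port per brick since 3.4.0
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS and NLM
    iptables -A INPUT -p tcp --dport 111  -j ACCEPT          # rpcbind/portmap
    iptables -A INPUT -p udp --dport 111  -j ACCEPT
    iptables -A INPUT -p tcp --dport 2049 -j ACCEPT          # NFS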
12:47 shaunm joined #gluster
12:50 julim joined #gluster
12:50 unclemarc joined #gluster
13:00 papamoose1 joined #gluster
13:00 DV__ joined #gluster
13:01 haomaiwa_ joined #gluster
13:04 ekuric joined #gluster
13:07 firemanxbr joined #gluster
13:12 DRoBeR joined #gluster
13:13 r0di0n joined #gluster
13:18 hgichon joined #gluster
13:19 EinstCrazy joined #gluster
13:20 hgichon Hello guys.. i am looking for zfs snapshot merge code prakash
13:21 hgichon http://www.spinics.net/lists/gluster-users/msg19920.html
13:21 glusterbot Title: Re: ZFS and Snapshots Gluster Users (at www.spinics.net)
13:22 hgichon I want to test and debug that code for my cluster
13:40 SeerKan ndevos: forgot to ask something that I keep searching for but can't seem to find a response... Considering I have a "raid 1" type gluster, mounted with backupvolfile-server. Server1 goes down, the mount goes over to server2 and keeps writing stuff. My question is what happens when server1 is back online with old data: does the mount go back to server1, does it wait until server1 is in sync and then go back, or does it stay on server2 until it goes
13:40 SeerKan down and then use server1 again ?
13:44 ndevos SeerKan: it's a little different: only the initial mount process uses the server in /etc/fstab; after that, the fuse-client knows how the volume is structured and will connect to the individual bricks directly
13:45 zhangjn joined #gluster
13:45 SeerKan ok, so the mount will always show the latest data, even in my situation when a server that is no longer in sync comes back, right ?
13:46 ndevos SeerKan: replication uses a changelog to keep track of modifications, and if there are outstanding (non replicated) changes on one brick, replication (afr) will detect that and heal it
13:46 zhangjn joined #gluster
13:46 ndevos SeerKan: yes, replication is smart enough to only read the data that is latest (and/or is in sync)
13:46 SeerKan great, thanks
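A sketch of the mount being described, assuming a replica volume named gv0 with bricks on server1 and server2; as ndevos explains, backupvolfile-server is only consulted while fetching the volfile at mount time, after which the client talks to all bricks directly:

    # /etc/fstab
    server1:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,backupvolfile-server=server2  0 0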
13:49 jiffin joined #gluster
14:01 64MADVCYZ joined #gluster
14:01 amye joined #gluster
14:03 bennyturns joined #gluster
14:03 jlp1448 joined #gluster
14:05 jlp1448 i have a volume that is Distributed-Replicate (3 X 2) that i want to reduce to a 2 X 2 by removing 2 bricks.  is this safe to do?  is there any trick to doing it? is there a potential for data loss? any other potential problems?
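The question goes unanswered in the log; for reference, the usual way to shrink a distributed-replicated volume by one replica pair is remove-brick with data migration, sketched here with hypothetical volume and brick names (commit only after status reports completed, and verify the data on a client mount before committing):

    gluster volume remove-brick myvol server5:/bricks/b1 server6:/bricks/b1 start
    gluster volume remove-brick myvol server5:/bricks/b1 server6:/bricks/b1 status
    gluster volume remove-brick myvol server5:/bricks/b1 server6:/bricks/b1 commit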
14:05 julim joined #gluster
14:07 hagarth joined #gluster
14:08 xaeth left #gluster
14:21 msciciel_ joined #gluster
14:26 hgichon joined #gluster
14:35 calisto joined #gluster
14:53 sakshi joined #gluster
14:54 neofob joined #gluster
14:57 squizzi joined #gluster
15:01 haomaiwa_ joined #gluster
15:03 vmallika joined #gluster
15:08 jbrooks joined #gluster
15:13 _Bryan_ joined #gluster
15:17 calavera joined #gluster
15:17 hagarth joined #gluster
15:17 hagarth o/
15:19 csim hi
15:19 glusterbot csim: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
15:20 _maserati joined #gluster
15:21 * csim throw a cake on glusterbot
15:21 amye does glusterbot like cake?
15:22 ndevos @cake
15:22 glusterbot ndevos: I do not know about 'cake', but I do know about these similar topics: 'ctdb'
15:22 RicardoSSP joined #gluster
15:24 csim glusterbot: cake is a lie
15:25 ndevos glusterbot: learn cake as the cake is a lie
15:25 glusterbot ndevos: The operation succeeded.
15:25 ndevos ~cake | csim
15:25 glusterbot csim: the cake is a lie
15:26 csim @cake
15:26 glusterbot csim: the cake is a lie
15:26 zhangjn joined #gluster
15:26 csim so we can rename glusterbot to glados ?
15:27 ndevos I doubt that, glados is probably a nick from someone else
15:29 _maserati gladdos* is the AI robot from the game Portal
15:29 _maserati wait, 1 d, not 2
15:34 Gill joined #gluster
15:39 bfoster joined #gluster
15:43 cholcombe joined #gluster
15:48 tessier joined #gluster
16:00 Leildin joined #gluster
16:01 haomaiwa_ joined #gluster
16:06 Gill_ joined #gluster
16:09 harish joined #gluster
16:10 Leildin hey guys I have just added two 5T bricks to a 30T volume and wanted to rebalance to put the layout everywhere and stuff
16:11 Leildin the rebalance start says success but status says failed
16:11 Leildin instantly
16:11 Leildin gluster 3.6.2 on a distributed volume
16:11 Leildin any idea where to investigate ?
16:23 Rapture joined #gluster
16:31 DRoBeR _maserati, I guess it is GladOS in a sarcastic way. As a suggestion you can use Gla2 in Spanish, since dos=2 ;P
16:40 side_control joined #gluster
16:49 Leildin JoeJulian, I'd love a little of your magic touch ! I've already done a rebalance on this volume before when adding other storage.
16:50 JoeJulian Leildin: I was going through the commit logs to see which bugfixes I would look at to see if the problem is already fixed.
16:51 JoeJulian bug 1186119 bug 1179136 bug 1204140
16:51 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1186119 high, unspecified, ---, kdhananj, ON_QA, tar on a gluster directory gives message "file changed as we read it" even though no updates to file in progress
16:51 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1179136 high, unspecified, 3.6.3, amukherj, MODIFIED, glusterd: Gluster rebalance status returns failure
16:51 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1204140 high, high, ---, rtalur, CLOSED CURRENTRELEASE, "case sensitive = no" is not honored when "preserve case = yes" is present in smb.conf
16:51 JoeJulian huh... I thought those would all be "closed currentrelease"
16:52 Leildin hmmm
16:52 Leildin those don't seem to be related
16:52 _maserati ummm how do i get past "State: Accepted peer request (Connected)" and to "Peer in Cluster"
16:52 _maserati it seems stuck there
16:53 JoeJulian _maserati: I wish I knew.
16:53 _maserati oh god
16:53 JoeJulian Often restarting all glusterd solves that.
16:54 _maserati that worked
16:54 _maserati lol
16:56 Leildin there's nothing in the logs regarding the rebalance
16:56 Leildin is there anything else than /var/log/glusterfs to look at ?
16:57 JoeJulian glusterd.vol.log, *rebalance*.log
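Those live under the /var/log/glusterfs directory Leildin mentions; for a volume named data the relevant files would be roughly:

    /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    /var/log/glusterfs/data-rebalance.log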
16:58 jwd_ joined #gluster
16:59 Leildin can I paste you the rebalance ?
16:59 Leildin @paste
16:59 glusterbot Leildin: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
17:01 haomaiwang joined #gluster
17:02 Leildin http://ur1.ca/nt875
17:02 glusterbot Title: #268533 Fedora Project Pastebin (at ur1.ca)
17:02 Leildin latest rebalance log
17:05 JoeJulian That's weird
17:06 Leildin indeed ! there's nothing in the balance log regarding the rebalance I started at least 30 mins after that last log
17:06 JoeJulian That's not the weird part.
17:07 JoeJulian It fails because [2015-09-17 16:15:44.396007] E [graph.y:153:new_volume] 0-parser: Line 11: volume 'data-client-0' defined again
17:07 Leildin my volume is called data
17:08 Leildin http://ur1.ca/nt890  <- etc-glusterfs-glusterd.vol.log
17:08 glusterbot Title: #268540 Fedora Project Pastebin (at ur1.ca)
17:10 Leildin I'm thinking of removing bricks from volume, deleting anything stored on the bricks and re-adding them again
17:13 JoeJulian Yeah, something's messed up with that volume. You might be able to fix it by renaming all the .vol files under /var/lib/glusterd/vols and running "pkill glusterd ; glusterd --xlator-option *.upgrade=on -N ; glusterd"
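A step-by-step sketch of that procedure, assuming the volume is named data; the upgrade run regenerates the .vol files from the volume's info file and then exits, after which glusterd is started normally:

    cd /var/lib/glusterd/vols/data
    for f in *.vol; do mv "$f" "$f.bak"; done        # keep copies of the suspect volfiles
    pkill glusterd
    glusterd --xlator-option '*.upgrade=on' -N       # -N stays in the foreground; exits once volfiles are rewritten
    glusterd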
17:14 calavera joined #gluster
17:14 hagarth joined #gluster
17:16 Leildin so it's going to remake the .vol files correctly ?
17:16 Leildin "normally"
17:21 _maserati why would (in a functioning 2-node cluster) adding a peer work with 1 node but be rejected by the other? And how do I fix it?
17:22 prg3 joined #gluster
17:31 Leildin JoeJulian, I'll try that rename and retry fix tomorrow, I'm praying your magic is still strong :)
17:41 _maserati gdm! now it's "Sent and Received peer request (Connected)"
17:48 julim joined #gluster
17:51 marlinc joined #gluster
17:52 calavera_ joined #gluster
17:54 calavera joined #gluster
17:56 lanning joined #gluster
18:05 togdon joined #gluster
18:09 _maserati Okay I don't think my 2-node cluster sees itself as one... or something
18:09 _maserati peer probe will work on one server, but it never shares that data with the other in-cluster node...
18:10 * _maserati shoots self in head. all problems fixed.
18:10 JoeJulian That failure should be in one or the other glusterd.vol.log
18:12 _maserati Received friend update from uuid: 2cad3077-....
18:13 _maserati Received CLI probe req utsldl-st-2135... (new node im trying to add)
18:13 _maserati Unable to find peerinfo for host: utsldl-st-2135
18:16 _maserati oh and peer probe on the other "working" node: peer probe: failed: utsldl-st-2135 is already part of another cluster
18:19 RayTrace_ joined #gluster
18:21 calavera joined #gluster
18:21 _maserati with the little rm -rf !(glusterd.info) hack, i can get the new node into Peer in Cluster state on ONE server, but it still shows rejected on the other =$
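The hack referenced is usually this bash extglob pattern, run on the node being re-probed with glusterd stopped, so that everything under /var/lib/glusterd except the node's own UUID file is wiped:

    service glusterd stop
    cd /var/lib/glusterd
    shopt -s extglob                  # needed for the !() pattern
    rm -rf !(glusterd.info)
    service glusterd start
    # then probe it again from a node that is already in the cluster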
18:25 JoeJulian Well yeah
18:25 JoeJulian glusterd does not handle the uuid for a server changing.
18:26 JoeJulian Oh, misread... you're saying not glusterd.info
18:26 htrmeira joined #gluster
18:26 shaunm joined #gluster
18:27 ajneil joined #gluster
18:27 ajneil /msg NickServ VERIFY REGISTER ajneil xkzvwctmwqcu
18:28 ajneil doh
18:29 _maserati So trying something new, just shut off glusterd on working node 2, since node 1 accepts the peer request. Added the brick and it doesn't seem to be filling up the new brick with anything. Is there any way I can force it to?
18:29 _maserati well it did add 33MB of .glusterfs but thats it
18:30 RedW joined #gluster
18:30 calavera joined #gluster
18:30 jobewan joined #gluster
18:31 _maserati erg Launching Heal operation on volume dev_volume has been unsuccessful
18:33 _maserati haha okay... i just cant even... adding a mount point on the node im trying to add lets me access the data. fine. but its not being replicated to its own local brick!?!!#/
18:43 ajneil what does gluster v status dev_volume say?
18:43 _maserati which node should i run that on
18:44 ajneil any
18:45 TheCthulhu2 joined #gluster
18:45 _maserati There are no active volume tasks
18:45 _maserati bricks online Y
18:45 ajneil what does it say about the bricks?
18:46 _maserati Online Y, different ports, different pids?
18:46 ajneil how many nodes in the volume?
18:47 _maserati 2, but i stopped 1 of the 2 primary nodes that would not accept the peer request
18:47 _maserati there SHOULD be 3 nodes, but i cant get that other node to agree
18:48 ajneil so there is one node up of a three node replicated volume, and you are loop back mounting it?
18:48 _maserati there is 2 nodes of 3 up
18:48 ajneil ahh
18:49 _maserati 1 of the primary nodes, and the new 1 im trying to get up
18:49 ajneil but data is only going to one of the nodes?
18:49 _maserati it looks like data is filtering into the new node now... but verrrrrry slowly... like 30 MB in the last 30 mins
18:49 ajneil how many other volumes do you have?
18:49 _maserati just the 1
18:52 _maserati well i just tried creating a file on the new node and it did indeed end up on both node's bricks...
18:52 ashiq joined #gluster
18:52 _maserati why is the syncing so slow?
18:52 ajneil you can try, on the node that won't peer, backing up /var/lib/glusterd/glusterd.info and removing the contents of /var/lib/glusterd, then restoring glusterd.info, restarting glusterd and trying to peer
18:53 _maserati peer FROM that node or TO that node?
18:53 squizzi joined #gluster
18:53 ajneil you can only ever peer to a node that is not in a cluster
18:53 ajneil you have to issue a peer request from a node in the cluster
18:53 _maserati arent i going to run into the issue that the other  primary node beleives that node is already in the cluster?
18:54 ajneil peer detach it first
18:55 ajneil also rsync the vols subdirectory from a good node to /var/lib/glusterd when you restore the glusterd.info file.
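Put together, the reset ajneil describes would look something like this on the node that won't peer (call it B), with A a healthy node; glusterd.info carries B's UUID and has to survive the wipe:

    # on B
    service glusterd stop
    cp /var/lib/glusterd/glusterd.info /root/glusterd.info.bak
    rm -rf /var/lib/glusterd/*
    cp /root/glusterd.info.bak /var/lib/glusterd/glusterd.info
    rsync -a A:/var/lib/glusterd/vols/ /var/lib/glusterd/vols/
    service glusterd start
    # on a node already in the cluster
    gluster peer detach B    # as suggested above; may refuse while B still hosts a brick
    gluster peer probe B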
18:56 _maserati oh man, so im gonna have to remove-brick from that node... is it going to lose its data or have to resync everything?
18:56 ajneil is your volume available now?
18:56 _maserati glusterd is stopped on the misbehaving server
18:57 ajneil what does gluster heal dev_volume split-brain say?
18:58 _maserati is this the command you mean? gluster volume heal dev_volume info split-brain
18:58 ajneil well if glusterd is stopped it's not going to be able to respond to peer requests
18:58 ajneil yup my mistake
18:58 ajneil can you start glusterd on the node?
18:58 _maserati says 0 entries on all 3 nodes
18:58 _maserati yeah i'll try
18:59 _maserati its up, still says the new node is rejected
18:59 ayma_ joined #gluster
19:00 ajneil try stopping gluster d and cleaning out /var/lib/glusterd as I suggested
19:00 _maserati wow... this appears to be tri-hopping
19:00 glusterbot _maserati: Please don't naked ping. http://blogs.gnome.org/mark​mc/2014/02/20/naked-pings/
19:00 _maserati i created a text file on C, i believe it replicated to A, in turn B trusts A, so it grabbed that file from A
19:01 _maserati while A trusts C and B
19:01 _maserati i'll do as you suggested
19:01 ayma_ hi
19:01 glusterbot ayma_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
19:02 ajneil are you creating the files via the fuse mount?
19:02 _maserati yeah
19:02 ajneil so glusterd is up, and files are appearing on the bad server's brick?
19:02 _maserati and immediately checked the local brick on C (where i created the file on the fuse mount) and it was there
19:03 _maserati then checked A's brick and it was there
19:03 _maserati and after "ls"-ing the fuse mount on C, it ended up in its brick
19:03 _maserati but not until I did that ls
19:03 ajneil but gluster peer status still shows disconnected on the node?
19:04 _maserati specifically: State: Peer Rejected (Connected)
19:04 _maserati (on B)
19:04 _maserati on A: State: Peer in Cluster (Connected)
19:04 ajneil gluster volume status  shows all the bricks up?
19:05 _maserati shows Online Y for all 3 nodes, but the misbehaving node has N/A under Port
19:05 ayma_ i'm having trouble with installing gluster on ubuntu (trusty). I'm trying to use the PPA mentioned in https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.7 but keep getting
19:05 glusterbot Title: glusterfs-3.7 : “Gluster” team (at launchpad.net)
19:05 ayma_ Cannot add PPA: 'ppa:gluster/glusterfs-3.7'. Please check that the PPA name or format is correct.
19:07 ayma_ cmd i'm trying is "sudo add-apt-repository ppa:gluster/glusterfs-3.7"
19:08 ajneil _maserati  you only have the one volume correct?
19:09 _maserati yeah
19:09 dlambrig joined #gluster
19:10 JoeJulian ayma_: Worked for me: http://fpaste.org/268584/14425170/
19:10 glusterbot Title: #268584 Fedora Project Pastebin (at fpaste.org)
19:11 ajneil I would shut down glusterd on the bad node and then move the /var/lib/glusterd/vols/dev_volume directory out of the ray and rsync it from one of the good nodes then restart glusterd
19:11 ajneil "out of way" not "out of ray", by which I mean rename or move it to a new location
19:11 JoeJulian I would move it out of /var/lib/glusterd/vols
19:12 ajneil yep
19:14 ayma_ joined #gluster
19:15 mhulsman joined #gluster
19:18 _maserati still peer rejected
19:19 _maserati and i rsync'd the vols/dev_volume directory from the server that says Peer in Cluster
19:21 _maserati lol and any file i create is ending up on all 3 bricks
19:22 _maserati this is so wrong.
19:23 ajneil well glusterd is just for management.  you won't be able to mount from that node or add bricks
19:23 ajneil but the glusterfsd will work until you reboot the server
19:23 _maserati i mean, i created a file on C, and checked the actual brick of B (bad server) and the file is there...
19:24 _maserati no probing (ls, find, etc) necessary this time
19:24 JoeJulian Why are you trying to break things? ;)
19:25 _maserati JoeJulian: I truly believe that if i create a file on C, A is accepting it, and B accepts it from A
19:25 _maserati because B wont negotiate peer status with C but trusts A
19:25 JoeJulian Nope
19:25 _maserati D:
19:25 _maserati B says Peer rejected
19:25 _maserati A says Peer in Cluster
19:25 JoeJulian Replication happens at the client. As long as all the bricks are up, it'll do whatever the volume definition tells it to do.
19:25 _maserati (about node C)
19:26 _maserati Are you saying it's okay that one of my nodes says peer rejected and the other says in cluster?
19:27 JoeJulian stop all glusterd. truncate all glusterd.vol.log. start all glusterd. Wait 30 seconds. fpaste all the glusterd.vol.log files.
19:27 JoeJulian And no. It's not ok. It'll fail to do any management tasks or start bricks after a reboot.
19:28 _maserati i feel silly... but i forgot how to truncate logs
19:28 JoeJulian I like to use "truncate --size=0" because it'll accept multiple targets.
19:28 _maserati ok
19:28 JoeJulian otherwise a simple ">$filename" works.
19:29 JoeJulian s/"/"./
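As a sketch, the sequence on each node could be the following; the log name carries the etc- prefix, as comes up a few lines below, and termbin is the paste route glusterbot suggested earlier:

    service glusterd stop
    truncate --size=0 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    service glusterd start
    sleep 30
    nc termbin.com 9999 < /var/log/glusterfs/etc-glusterfs-glusterd.vol.log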
19:29 _maserati which node would u like the fpastes from?
19:29 JoeJulian all
19:29 _maserati ok
19:30 _maserati my new node only has "etc-glusterfs-glusterd.vol.log"  is that correct?
19:33 JoeJulian oops, yes. Forgot the etc-
19:33 calavera joined #gluster
19:44 ayma_ thanks JoeJulian for trying it out, I'm still getting errors. I am now trying to do it from the sources list, but am not sure what to put in for the recv key; I tried 3FE869A9
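A sketch of the manual route being attempted: the deb lines follow the standard Launchpad PPA layout, and the key ID has to be the "Signing key" shown on the PPA's Launchpad page rather than guessed; PLACEHOLDER_KEY_ID below is not a real value.

    # /etc/apt/sources.list.d/gluster-3.7.list
    deb http://ppa.launchpad.net/gluster/glusterfs-3.7/ubuntu trusty main
    deb-src http://ppa.launchpad.net/gluster/glusterfs-3.7/ubuntu trusty main

    sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys PLACEHOLDER_KEY_ID
    sudo apt-get update && sudo apt-get install glusterfs-server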
19:45 _maserati Okay here ya go:
19:45 _maserati From Node A: http://fpaste.org/268591/14425189/
19:45 glusterbot Title: #268591 Fedora Project Pastebin (at fpaste.org)
19:45 _maserati From Node B: http://fpaste.org/268592/14425190/
19:45 glusterbot Title: #268592 Fedora Project Pastebin (at fpaste.org)
19:45 Pupeno joined #gluster
19:45 _maserati From Node C: http://fpaste.org/268593/14425190/
19:45 glusterbot Title: #268593 Fedora Project Pastebin (at fpaste.org)
19:46 JoeJulian "Cksums of volume dev_volume differ. local cksum = 1668065927, remote cksum = -1812138325"
19:46 _maserati im not sure how that would have even happened
19:48 JoeJulian so mv /var/lib/glusterd/vols /tmp on "node" B and C, then rsync /var/lib/glusterd/vols from A to B and C.
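Spelled out, with A, B and C standing in for the real hostnames and assuming root ssh from A to the other two:

    # on B and on C
    service glusterd stop
    mv /var/lib/glusterd/vols /tmp/vols.bak
    # on A
    rsync -a /var/lib/glusterd/vols/ B:/var/lib/glusterd/vols/
    rsync -a /var/lib/glusterd/vols/ C:/var/lib/glusterd/vols/
    # back on B and C
    service glusterd start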
19:48 _maserati All three logs show that checksums differ, is that okay still?
19:49 calavera joined #gluster
19:52 chirino joined #gluster
19:53 JoeJulian Once you sync that directory tree, they won't differ any more.
19:58 Pupeno joined #gluster
20:02 _maserati well
20:02 _maserati checksums dont differ anymore
20:02 _maserati but it completely kicked C out
20:03 _maserati out of volume status anyway
20:03 _maserati it still shows as a brick in volume info
20:03 Pupeno joined #gluster
20:04 JoeJulian What if you do volume status from C
20:04 _maserati Node C now says Peer Rejected on both A and B
20:04 _maserati Brick utsldl-st-2135:/srv/bricks/dev_volume             N/A     N       N/A
20:05 _maserati remove C as a brick, detach peer, and try to reprobe i guess?
20:05 Gill joined #gluster
20:05 JoeJulian I'd look to see why it's rejected first.
20:05 _maserati Where would i find that
20:06 JoeJulian That same log etc-glusterfs-glusterd.vol.log
20:07 _maserati all i see is:  Received RJT from....
20:08 _maserati Basically: Recieved probe from C..... then Received RJT from C
20:12 _maserati and somehow C is still getting data trickled to its brick
20:16 JoeJulian Just ignore the bricks for now.
20:16 JoeJulian So what's C's log say about that rejection?
20:22 Pupeno joined #gluster
20:22 jiffin joined #gluster
20:26 JoeJulian :q
20:29 haomaiwa_ joined #gluster
20:35 _maserati sorry went to grab lunch lemme check
20:36 _maserati C says absolutely nothing about it
20:36 _maserati other than it received a RJT
20:38 JoeJulian _maserati: on a call, bbl.
20:52 ajneil joined #gluster
20:55 Pupeno joined #gluster
21:02 _Bryan_ joined #gluster
21:14 _maserati Is there an easier way to see the checksum on a volume than the logs? I wanna check our production volumes for any of this nightmare i've been dealing with today
21:21 ajneil did you kick off a full heal?
21:22 ajneil btw anyone know if there is a problem with the listserv today?
21:30 amye ajneil: Problem how?
21:30 ajneil no messages since yesterday afternoon - seems unlikely
21:35 amye hmmm. I can go poke the tires but I think we're also in a holiday period for BLR
21:36 JoeJulian amye: I would talk to Michael Scherer. He at least knows where all the services are.
21:37 amye JoeJulian: Except that he should? be offline at this point.
21:37 JoeJulian Alright _maserati, let's see what you've got left to fix.
21:38 JoeJulian He has hours?
21:38 amye JoeJulian: trust me, I was as surprised as you were
21:39 JoeJulian Oh, that's his nick.. I didn't put that together...
21:39 * JoeJulian pokes csim
21:40 amye The last I have on the mailing list is from yesterday, I've got some spam trapped in the filters from today.
21:41 JoeJulian The reduced email interruption is rather pleasant.
21:41 JoeJulian Can we keep it this way? ;)
21:42 amye JoeJulian: Heh. We just need to have more people on holidays.
21:45 _maserati JoeJulian: Okay, where should we begin? Last we left off... Node C just shows a RJT
21:48 JoeJulian lol..
21:48 JoeJulian MAIL FROM: me@joejulian.name
21:48 JoeJulian 250 2.1.0 Ok
21:48 JoeJulian RCPT TO: gluster-users@gluster.org
21:48 JoeJulian 450 4.2.0 <gluster-users@gluster.org>: Recipient address rejected: Greylisted, see http://postgrey.schweikert.ch/help/gluster.org.html
21:48 glusterbot Title: Postgrey Help (at postgrey.schweikert.ch)
21:49 _maserati You done effed up Joe
21:52 _maserati Can I have 1 server serve bricks to two different clusters?
21:53 JoeJulian Does not compute.
21:53 _maserati 1 server, serve in two different clusters?
21:53 * amye laughs. 'greylisted'
21:53 _maserati brick1 served to development cluster
21:53 _maserati brick2 served to prod cluster
21:53 JoeJulian I'm not sure how you're using the term "cluster" in this context.
21:54 JoeJulian volume?
21:54 _maserati ummm... groups of peer'd nodes
21:54 JoeJulian Ah, no.
21:54 _maserati damn.
21:54 JoeJulian A server can only be part of one trusted peer group.
21:55 JoeJulian Of course, that said, you could actually, but it would require a bunch of untested features.
21:55 _maserati i can't even get a core environment working correctly yet, ha
21:55 JoeJulian You would need two different glusterd.vol files and two different glusterd instances.
21:55 _maserati yeah that's what i figured
21:56 JoeJulian Then all your clients would have to know to look to a port different than the default.
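Purely illustrative, since JoeJulian calls this untested territory: the second instance would need its own glusterd.vol pointing at a separate working directory and a non-default management port, plus its own log and pid file. The option and flag names below are educated guesses to verify against the glusterd.vol and glusterd --help shipped with your build.

    # hypothetical /etc/glusterfs/glusterd2.vol
    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd2
        option transport-type socket
        option transport.socket.listen-port 24017    # non-default management port (assumption)
    end-volume

    glusterd -f /etc/glusterfs/glusterd2.vol -l /var/log/glusterfs/glusterd2.log -p /var/run/glusterd2.pid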
21:56 _maserati Boss man is just being a *******
21:56 JoeJulian They get that way.
21:57 _maserati so, the plan in my best interest is to rip this new node out of dev and use it for the production test we're doing saturday =[
21:57 _maserati so, figuring out my problems in dev is no longer going to help me
21:57 _maserati though everything i learned today, im way more confident, so there's that
21:58 _maserati i think though, im just going to make a client gluster mount point at our other site, let him "test" that it works, then work on truly implementing it over the next week
21:58 Pupeno joined #gluster
21:59 JoeJulian +1
21:59 _maserati actually if i do that
21:59 JoeJulian Teach the boss about CAP theory.
22:00 _maserati i can keep it in the development peer group, and still work these kinks out and learn
22:00 _maserati i think, right?
22:00 JoeJulian sure
22:00 _maserati ok cool
22:01 JoeJulian So in /var/lib/glusterd/peers, there should be one file for each *other* server (not itself).
22:01 _maserati there is
22:01 _maserati state=6 on each on Node C
22:02 _maserati on Node B, Node A = State=3
22:02 _maserati Node C, state = 6
22:03 _maserati same with Node A
22:04 _maserati Just to back track a little, we copied the volume information from a working node to the other two servers. The two primary sync'd up again, the 3rd went Rejected. My thoughts were at this point maybe it'd be worth a try to peer detach the third and try to freshly reintroduce it to the peer group
22:05 _maserati Or, what we did would have no effect on that?
22:07 DV__ joined #gluster
22:22 JoeJulian Can't do that because it's part of a volume.
22:23 _maserati as a replica
22:23 _maserati can always remove it
22:23 JoeJulian Sure, that's a possibility.
22:24 _maserati I'll follow your advice, was just mentioning. If you don't think it'll help, I won't try :)
22:25 JoeJulian Let's try this first. stop all glusterd. change the state in the peer files to "state=3". Start all glusterd again.
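For reference, each file under /var/lib/glusterd/peers is named after the other node's UUID and, judging by the states seen earlier, 3 corresponds to "Peer in Cluster" and 6 to the rejected state; a sketch of the edit, with glusterd stopped on every node and the UUID left elided as it was above:

    # example peer file: /var/lib/glusterd/peers/<uuid-of-the-other-node>
    #   uuid=2cad3077-....
    #   state=3
    #   hostname1=utsldl-st-2135
    sed -i 's/^state=.*/state=3/' /var/lib/glusterd/peers/*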
22:25 _maserati ok
22:27 JoeJulian I think that somehow you have the actual cluster trying to add the new node still, and the new node trying to create a new cluster by adding the other nodes.
22:28 _maserati whoa
22:28 _maserati ...
22:28 _maserati oh nvm
22:28 _maserati lol
22:28 _maserati i was like there's different uuids!
22:28 _maserati forgetting... each node will have 1 different
22:28 _maserati my brain is fried
22:28 JoeJulian :)
22:29 JoeJulian Yeah, try this last thing and quit for the day.
22:29 JoeJulian You have my permission.
22:29 _maserati oh thank god
22:29 _maserati i literally won't leave work if i got your attention ha
22:30 _maserati okay everything set to 3, bringing em up
22:30 JoeJulian I've been there. But my help was in Bangalore.
22:30 _maserati i dunno where you live so ... that could suck or maybe not? ha
22:30 JoeJulian Seattle
22:30 JoeJulian Other side of the globe.
22:30 _maserati oh right on, Colorado Springs here
22:30 JoeJulian Up 'till 4am getting help.
22:32 _maserati pulled itself into the same state... all 6's that were there are back
22:32 JoeJulian Mmkay. Interesting.
22:32 _maserati and... the rejected brick is still collecting files.... lol
22:32 JoeJulian It's not the brick, it's the peer.
22:33 JoeJulian It's at the management layer.
22:33 JoeJulian not the data layer.
22:33 _maserati ohhh
22:33 _maserati well, until tommorrow?
22:33 JoeJulian Tomorrow.
22:33 _maserati have a good night
22:33 JoeJulian You too.
22:35 RayTrace_ joined #gluster
22:35 dlambrig joined #gluster
22:37 badone joined #gluster
22:54 dgandhi joined #gluster
22:55 gildub joined #gluster
23:00 hgichon joined #gluster
23:02 hgichon joined #gluster
23:03 chirino joined #gluster
23:15 hgichon Good day~~
23:23 dgbaley joined #gluster
23:31 hgichon joined #gluster
23:33 beeradb_ joined #gluster
23:36 RayTrace_ joined #gluster
23:38 beeradb_ joined #gluster
23:45 Rapture joined #gluster
