
IRC log for #gluster, 2016-12-14


All times shown according to UTC.

Time Nick Message
00:01 susant joined #gluster
00:18 Jacob843 joined #gluster
00:42 vbellur joined #gluster
01:05 shdeng joined #gluster
01:22 RameshN joined #gluster
01:23 dnorman joined #gluster
01:27 mlhess joined #gluster
02:06 haomaiwang joined #gluster
02:13 PaulCuzner joined #gluster
02:14 haomaiwang joined #gluster
02:22 toredl joined #gluster
02:26 loadtheacc joined #gluster
02:48 d0nn1e joined #gluster
02:59 derjohn_mobi joined #gluster
03:14 haomaiwang joined #gluster
03:15 loadtheacc joined #gluster
03:29 magrawal joined #gluster
03:34 Ashutto joined #gluster
03:35 Ashutto Hello
03:35 glusterbot Ashutto: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
03:38 Ashutto pasete
03:38 Ashutto paste
03:38 asriram joined #gluster
03:38 Ashutto pastebin
03:38 vbellur @pastebin
03:38 glusterbot vbellur: I do not know about 'pastebin', but I do know about these similar topics: 'paste', 'pasteinfo'
03:38 vbellur @paste
03:38 glusterbot vbellur: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
03:38 Ashutto thanks :D
03:38 vbellur :)
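For reference, a minimal example of the paste tip glusterbot gives above, assuming a volume named storage; termbin prints back a short URL you can share in the channel:
    sudo gluster volume info storage | nc termbin.com 9999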
03:39 krk joined #gluster
03:41 Ashutto http://termbin.com/w2ee        <- I'm trying to create a snapshot on my volume. It fails, complaining about a geo-replication session that I deleted weeks ago.
03:42 Ashutto is there a way to get rid of this session that is ... well... non-existent?
03:45 vbellur Ashutto: restarting glusterd services on all nodes would clean up any stale sessions. The error message refers to a glusterd session, not a geo-replication one.
03:46 Ashutto nice... i'll try it :)
03:46 Ashutto thanks
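A rough sketch of vbellur's suggestion; the service name differs by distro (glusterd on RPM-based systems, glusterfs-server on Debian/Ubuntu, which is the name used later in this log), and restarting only the management daemon leaves bricks and client mounts serving data:
    # run on each node in the trusted pool, one node at a time
    sudo systemctl restart glusterd               # RPM-based systems
    # or: sudo service glusterfs-server restart   # Debian/Ubuntu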
03:54 itisravi joined #gluster
03:56 atinm joined #gluster
03:57 Lee1092 joined #gluster
03:58 Ashutto It works! Wow :)
04:02 panina joined #gluster
04:03 riyas joined #gluster
04:04 Ashutto joined #gluster
04:06 nbalacha joined #gluster
04:07 susant joined #gluster
04:08 loadtheacc joined #gluster
04:08 sbulage joined #gluster
04:12 buvanesh_kumar joined #gluster
04:14 haomaiwang joined #gluster
04:26 pioto joined #gluster
04:39 dnorman joined #gluster
04:40 ankitraj joined #gluster
04:41 shubhendu joined #gluster
04:46 Prasad joined #gluster
04:46 susant joined #gluster
04:48 ppai joined #gluster
04:59 apandey joined #gluster
05:02 loadtheacc joined #gluster
05:03 ndarshan joined #gluster
05:03 karthik_us joined #gluster
05:05 haomaiwang joined #gluster
05:19 RameshN joined #gluster
05:20 hchiramm joined #gluster
05:23 nishanth joined #gluster
05:29 ashiq joined #gluster
05:30 prasanth joined #gluster
05:30 Javezim joined #gluster
05:40 rafi joined #gluster
05:42 susant joined #gluster
05:49 nbalacha joined #gluster
05:50 buvanesh_kumar joined #gluster
05:51 haomaiwang joined #gluster
05:55 krk joined #gluster
05:55 aravindavk joined #gluster
05:55 kdhananjay joined #gluster
05:58 Muthu joined #gluster
05:59 Karan joined #gluster
06:02 susant joined #gluster
06:06 jiffin joined #gluster
06:12 PaulCuzner left #gluster
06:12 sanoj joined #gluster
06:14 haomaiwang joined #gluster
06:15 hgowtham joined #gluster
06:18 gem joined #gluster
06:18 hchiramm joined #gluster
06:30 sbulage joined #gluster
06:35 nbalacha joined #gluster
06:42 msvbhat joined #gluster
06:46 Muthu joined #gluster
06:47 _nixpanic joined #gluster
06:47 _nixpanic joined #gluster
06:49 bluenemo joined #gluster
06:53 kotreshhr joined #gluster
07:04 susant joined #gluster
07:05 mhulsman joined #gluster
07:10 ankitraj joined #gluster
07:12 jtux joined #gluster
07:14 haomaiwang joined #gluster
07:18 k4n0 joined #gluster
07:28 Anarka joined #gluster
07:30 satya4ever_ joined #gluster
07:51 jtux joined #gluster
07:53 [diablo] joined #gluster
08:02 BuBU29 joined #gluster
08:03 Philambdo joined #gluster
08:14 ivan_rossi joined #gluster
08:14 haomaiwang joined #gluster
08:15 bwerthmann joined #gluster
08:16 jtux joined #gluster
08:25 jri joined #gluster
08:26 flomko joined #gluster
08:29 buvanesh_kumar_ joined #gluster
08:32 derjohn_mobi joined #gluster
08:33 fsimonce joined #gluster
08:33 BuBU29 joined #gluster
08:40 sbulage joined #gluster
08:41 Gnomethrower joined #gluster
08:44 sanoj joined #gluster
08:48 Gnomethrower joined #gluster
08:49 buvanesh_kumar joined #gluster
09:05 dnorman joined #gluster
09:14 haomaiwang joined #gluster
09:14 rastar joined #gluster
09:17 jtux joined #gluster
09:20 pulli joined #gluster
09:29 kramdoss_ joined #gluster
09:33 buvanesh_kumar joined #gluster
09:39 Slashman joined #gluster
09:44 buvanesh_kumar joined #gluster
09:58 hackman joined #gluster
10:01 mahendratech joined #gluster
10:06 dnorman joined #gluster
10:14 haomaiwang joined #gluster
10:22 skoduri joined #gluster
10:36 Gambit15 joined #gluster
10:45 ShwethaHP joined #gluster
10:46 MilosCuculovic joined #gluster
10:47 MilosCuculovic Hello Guys
10:47 MilosCuculovic I have a GlusterFS related question
10:47 MilosCuculovic If anyone could help, I would greatly appreciate it
10:47 MilosCuculovic Here is my current situation
10:47 MilosCuculovic I have a 1x1 volume on a server named storage
10:48 MilosCuculovic So one brick
10:48 MilosCuculovic now, my goal is to add a new replicated brick and have a 2x2 replicated volume
10:48 MilosCuculovic For that, I first copied all the files to the new server (5.5T)
10:48 MilosCuculovic Then installed glusterfs-server on it
10:49 MilosCuculovic Both have version 3.9.0
10:49 MilosCuculovic In order to add the new replicated brick and switch from 1x1 to a 2x2 replica
10:50 MilosCuculovic I added on storage server: sudo gluster peer probe storage2
10:50 MilosCuculovic storage2 being my new server
10:51 MilosCuculovic Then, on storage, when trying to do sudo gluster volume add-brick storage replica 2 storage2:/data/data-cluster
10:51 MilosCuculovic I am getting volume add-brick: failed: Host storage2 is not in 'Peer in Cluster' state
10:51 MilosCuculovic Any idea?
10:55 Norky joined #gluster
10:55 MilosCuculovic No one? :)
10:59 buvanesh_kumar joined #gluster
11:00 jiffin1 joined #gluster
11:01 rastar joined #gluster
11:01 ivan_rossi MilosCuculovic: did you "peer probe" storage2?
11:01 Saravanakmr joined #gluster
11:01 msvbhat joined #gluster
11:01 shdeng joined #gluster
11:02 rastar joined #gluster
11:02 MilosCuculovic Hi Ivan
11:03 MilosCuculovic run the " sudo gluster peer probe storage2" on storage server?
11:03 MilosCuculovic If so, yes I did
11:08 dnorman joined #gluster
11:19 atinm joined #gluster
11:20 rastar joined #gluster
11:21 jkroon joined #gluster
11:32 MilosCuculovic ?
11:32 kramdoss_ joined #gluster
11:33 anoopcs MilosCuculovic, What does 'sudo gluster peer status' say about storage2?
11:35 k4n0 joined #gluster
11:38 MilosCuculovic On storage, this command returns
11:38 MilosCuculovic sudo gluster peer status Number of Peers: 1  Hostname: storage2 Uuid: 32bef70a-9e31-403e-b9f3-ec9e1bd162ad State: Sent and Received peer request (Connected)
11:38 MilosCuculovic On storage2, this command returns
11:38 MilosCuculovic Number of Peers: 1  Hostname: storage Uuid: 50539e7e-323a-44c4-b317-a66cbffe44ae State: Probe Sent to Peer (Connected)
11:40 ankitraj joined #gluster
11:45 MilosCuculovic What do you think about?
11:49 rastar joined #gluster
11:52 haomaiwang joined #gluster
12:02 anoopcs MilosCuculovic, Usually peer status must be 'Peer in cluster.'
12:02 anoopcs Check for any errors in glusterd log..
12:03 jiffin1 joined #gluster
12:14 hgowtham joined #gluster
12:22 MilosCuculovic @anoopcs, on storage 2, I have:
12:22 MilosCuculovic [2016-12-14 12:21:01.620214] I [MSGID: 106499] [glusterd-handler.c:4360:__glusterd_handle_status_volume] 0-management: Received status volume req for volume storage [2016-12-14 12:21:01.620510] E [MSGID: 106525] [glusterd-op-sm.c:4013:glusterd_dict_set_volid] 0-management: Volume storage does not exist [2016-12-14 12:21:01.620539] E [MSGID: 106289] [glusterd-syncop.c:1895:gd_sync_task_begin] 0-management: Failed to build pay
12:22 MilosCuculovic This error comes up constantly
12:24 anoopcs MilosCuculovic, I hope you were issuing 'sudo gluster volume status storage' from storage2..
12:24 anoopcs and the error messages indicate that storage2 is not yet part of the cluster because of some failure in peer probe.
12:25 MilosCuculovic When issuing this on storage2, it says "Volume storage does not exist"
12:25 MilosCuculovic As explained at the beginning, I added the storage2 brick on the storage server
12:25 atinm joined #gluster
12:25 MilosCuculovic So storage2 only had some files, didn't have the volume created?
12:25 anoopcs MilosCuculovic, Are you trying to peer probe from storage2?
12:26 MilosCuculovic I have two servers, stoage and storage2
12:26 MilosCuculovic on storage, glusterfs 1x1 is running
12:26 MilosCuculovic My goal is to add a new replica brick
12:26 MilosCuculovic This replica brick will be on storage2
12:27 MilosCuculovic On storage, I was running peer probe
12:27 MilosCuculovic Should I also run it on storage2?
12:28 anoopcs MilosCuculovic, Not needed.
12:29 MilosCuculovic Ok, then I have no idea why it fails
12:29 MilosCuculovic The add-brick on storage will create the volume on storage2, I do not need to do it manually right?
12:30 kshlm MilosCuculovic, It appears the initial peer probe you did from storage to storage2 did not complete successfully.
12:30 kshlm This is shown in the peer status output.
12:30 kshlm Now, since you've not used storage2 for anything yet, the easiest thing to do would be to retry the probe.
12:31 kshlm MilosCuculovic, From storage, first do a `gluster peer detach storage2`.
12:31 kshlm Then try `gluster peer probe storage2`
12:31 Sebbo2 joined #gluster
12:31 kshlm Next, before trying to add-brick, check `gluster peer status` on both peers. Both of them should have the status as 'Peer in Cluster'
12:31 MilosCuculovic Hi kshlm
12:32 shyam joined #gluster
12:32 kshlm Only then can you proceed with doing add-brick.
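Pulling kshlm's steps together into one hedged sequence, reusing the volume name (storage) and brick path (/data/data-cluster) from this conversation:
    sudo gluster peer detach storage2
    sudo gluster peer probe storage2
    sudo gluster peer status          # check on BOTH storage and storage2; both must show 'Peer in Cluster'
    sudo gluster volume add-brick storage replica 2 storage2:/data/data-cluster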
12:32 p7mo joined #gluster
12:32 MilosCuculovic sudo gluster peer detach storage2 (on storage takes time now)
12:33 MilosCuculovic Something is weird
12:34 MilosCuculovic When doing: sudo gluster peer detach storage2
12:34 MilosCuculovic I am getting
12:34 MilosCuculovic peer detach: failed: Peer is already being detached from cluster. Check peer status by running gluster peer status
12:34 MilosCuculovic When doing sudo gluster peer probe storage2
12:35 MilosCuculovic I am getting peer probe: success. Host storage2 port 24007 already in peer list
12:35 MilosCuculovic Why?
12:35 rastar joined #gluster
12:37 kshlm MilosCuculovic, That's because peer detach hasn't completed yet.
12:37 kshlm Peer status should still be showing storage2
12:38 kshlm Since the easy way didn't work out, we can try the more manual way.
12:38 kshlm MilosCuculovic, This way will require you to stop glusterd on storage.
12:40 kotreshhr left #gluster
12:42 kdhananjay joined #gluster
12:45 jdarcy joined #gluster
12:52 johnmilton joined #gluster
12:56 ira joined #gluster
13:00 MilosCuculovic kshlm, Stopping glusterd on storage is not an option, this is a production server
13:00 kshlm MilosCuculovic, glusterd is just the management daemon.
13:01 kshlm Your volume would still be online, and any connected clients would still be connected.
13:01 MilosCuculovic What if someone deposits the files?
13:01 kshlm For the brief period that glusterd is down, no new clients can mount.
13:01 MilosCuculovic I think the detach finished
13:01 MilosCuculovic sudo gluster peer status
13:01 MilosCuculovic Number of Peers: 0
13:02 MilosCuculovic This is shown on storage now
13:02 kshlm If someone deposits a file to an existing mount, it will be safely written to the volume.
13:02 kshlm MilosCuculovic, Cool. Also check on storage2 as well.
13:02 MilosCuculovic Ok, then it sounds good :)
13:02 dnorman joined #gluster
13:02 kshlm Since detach is finished let's try the easy method first.
13:03 kshlm MilosCuculovic, But first what version of GlusterFS are you using?
13:03 MilosCuculovic 3.9.0
13:03 MilosCuculovic The latest one
13:03 MilosCuculovic I can still see the peer on storage2
13:03 kshlm On both storage and storage2?
13:03 MilosCuculovic no
13:04 kshlm MilosCuculovic, That's fine. We can cleanup storage2 and start fresh.
13:04 MilosCuculovic yes
13:04 MilosCuculovic that would be perfect
13:04 MilosCuculovic so, should I detach storage from storage2
13:04 MilosCuculovic ?
13:04 kshlm I meant, is it 3.9.0 on both storage and storage2.
13:05 MilosCuculovic yes, both have the same 3.9.0 version, sorry
13:05 kshlm You don't need to do that.
13:05 MilosCuculovic Ok
13:05 MilosCuculovic So, what's the next step?
13:05 kshlm On storage2, stop glusterd. And kill any other gluster processes.
13:06 kshlm Then, cleanup everything in /var/lib/glusterd/
13:06 MilosCuculovic sudo service glusterfs-server stop => done
13:06 kshlm You can also clean up the logs in /var/log/glusterfs as well.
13:06 MilosCuculovic done
13:07 kshlm Then reinstall (or uninstall/install) to get a fresh start.
13:07 MilosCuculovic will do
13:07 MilosCuculovic uninstall/install done
13:08 kshlm Cool. Check that glusterfs-server service is started on storage2
13:08 MilosCuculovic it is
13:08 MilosCuculovic Active: active (running) since Wed 2016-12-14 14:07:53 CET; 18s ago
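A hedged recap of the reset just performed on storage2; the glusterfs-server package/service name matches the Debian/Ubuntu naming used above, and the pkill pattern assumes nothing else on the box matches 'gluster':
    sudo service glusterfs-server stop
    sudo pkill -f gluster                      # kill any remaining gluster processes
    sudo rm -rf /var/lib/glusterd/*            # wipe glusterd's state
    sudo rm -rf /var/log/glusterfs/*           # optional: clear old logs
    sudo apt-get install --reinstall glusterfs-server
    sudo service glusterfs-server start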
13:08 kshlm Now, from storage, do `gluster peer probe storage2`
13:08 rastar joined #gluster
13:08 kshlm The command might return success.
13:08 MilosCuculovic peer probe: success.
13:09 ndarshan joined #gluster
13:09 kshlm But there is a sync that happens in the background, that determines if the new peer was successfully added to the pool.
13:09 kshlm Check `gluster peer status` on both.
13:09 MilosCuculovic on storage
13:09 MilosCuculovic Number of Peers: 1  Hostname: storage2 Uuid: 3d561927-913d-4ddc-9c05-ad0c97b93b3a State: Peer in Cluster (Connected)
13:09 MilosCuculovic on storage2
13:10 kshlm If it is 'Peer in cluster' on both, then you're good.
13:10 MilosCuculovic Number of Peers: 1  Hostname: storage Uuid: 50539e7e-323a-44c4-b317-a66cbffe44ae State: Peer in Cluster (Connected)
13:10 MilosCuculovic that's the case
13:10 kshlm You're set to expand your volume now.
13:10 MilosCuculovic cool
13:10 MilosCuculovic sudo gluster volume add-brick storage replica 2 storage2:/data/data-cluster
13:10 susant joined #gluster
13:10 MilosCuculovic should I run this on storage?
13:10 kshlm Yup.
13:11 kshlm Yup.
13:11 MilosCuculovic volume add-brick: success
13:11 MilosCuculovic :)
13:11 MilosCuculovic and now?
13:11 kshlm Now, your volume will begin healing to fill the data from your original brick to the new brick on storage2.
13:11 nbalacha joined #gluster
13:12 MilosCuculovic wow, cool
13:12 MilosCuculovic How can I check the heal?
13:12 kshlm This might cause the load on your volume and the servers to go up for a little while.
13:12 kshlm The little while depends on the amount of data you already have.
13:12 kshlm `gluster volume heal <name> info` should give you some stats.
13:12 MilosCuculovic ok
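Some ways to watch heal progress, assuming the volume is named storage; the statistics sub-commands exist in the 3.x CLI, though the level of detail varies by version:
    sudo gluster volume heal storage info                      # entries still pending heal, per brick
    sudo gluster volume heal storage statistics heal-count     # just the pending counts
    sudo gluster volume status storage                         # are bricks and self-heal daemons online?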
13:12 MilosCuculovic Question
13:13 MilosCuculovic I already synced the storage with storage2 manually (with rsync) yesterday
13:13 MilosCuculovic I suppose only differences will be copied
13:13 MilosCuculovic not all 5.5T I have right?
13:14 kshlm It doesn't normally work that way. The only way glusterfs can keep track of data in a brick is if it was written by glusterfs.
13:14 MilosCuculovic Ok
13:14 MilosCuculovic So what will happen with existing files now?
13:14 MilosCuculovic on storage2
13:14 kshlm I'm not really sure about that.
13:15 kshlm I need to check.
13:15 B21956 joined #gluster
13:15 MilosCuculovic Ok, will also try to monitor it
13:15 MilosCuculovic I have other questions if you have couple of more minutes
13:15 MilosCuculovic When I am mounting now the volume on a client server
13:15 MilosCuculovic Does it matter which host I use
13:15 MilosCuculovic atm, all are mounting storage
13:16 MilosCuculovic does this mean only storage will serve files
13:16 MilosCuculovic Should I do half/half
13:17 kshlm The address you use when mounting on a client is only used to fetch the volfile. After that, the clients connect to both the servers.
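Based on that, a hedged fstab-style example; backup-volfile-servers is a glusterfs-fuse mount option in recent releases, and /mnt/storage is a hypothetical mount point for the volume named storage:
    # /etc/fstab
    storage:/storage  /mnt/storage  glusterfs  defaults,_netdev,backup-volfile-servers=storage2  0 0
Either host can serve the volfile; once mounted, the client talks to the bricks on both servers directly.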
13:17 MilosCuculovic Oh, seems my mount points are not returning files
13:17 MilosCuculovic how do I stop serving files from storage
13:17 MilosCuculovic sorry, from storage2
13:17 MilosCuculovic until this one is ready?
13:18 kshlm I'm not sure there's a way.
13:19 MilosCuculovic hm
13:19 kshlm When glusterfs itself handles the syncing, you have no problems.
13:19 MilosCuculovic oh
13:19 MilosCuculovic shall I remove the brick from storage?
13:19 kshlm But when there is data already, the behavior is unknown to me.
13:19 kshlm Don't do that.
13:20 MilosCuculovic I removed it
13:20 kshlm storage is your original source.
13:20 MilosCuculovic Sorry, I removed the storage2 brick from storage server
13:20 kshlm That's fine.
13:20 MilosCuculovic I had to do it as our website didn't show any files anymore
13:20 MilosCuculovic So, what I need now
13:21 MilosCuculovic Remove all files from storage2 server
13:21 MilosCuculovic And add the brick again?
13:21 MilosCuculovic this should work in theory?
13:21 kshlm Yup. Let gluster handle syncing.
13:21 kshlm BTW, what sort of files and network do you have?
13:22 MilosCuculovic gigabit network, mostly pdf files, word, xml, png, tiff etc
13:22 MilosCuculovic Several hundred thousands
13:22 kshlm So small files?
13:22 MilosCuculovic yes, small files
13:22 MilosCuculovic max 20MB, average 3MB per file
13:22 kshlm Say kbs to several mbs ?
13:22 MilosCuculovic Say kbs to several mbs? More like several MBs to several dozens of MBs
13:23 nbalacha joined #gluster
13:23 kshlm In the MBs range should be fine I guess.
13:24 kshlm Very small files or very large files take up a lot of resources when healing.
13:24 MilosCuculovic Then I am in the middle
13:24 MilosCuculovic I purged and reinstalled glusterfs-server again on storage2
13:24 MilosCuculovic Will now remove all files from storage2
13:24 MilosCuculovic will take several hours
13:24 MilosCuculovic as there are 5.5TB
13:24 MilosCuculovic Maybe 6T
13:25 MilosCuculovic Then, I will do the same actions as earlier and check how is the healing going
13:25 kshlm Once the healing begins, it should take several hours to heal as well. Ideally the same amount of time rsync took.
13:25 MilosCuculovic I see
13:25 MilosCuculovic Question about healing
13:25 MilosCuculovic Once the healing starts, glusterfs shouldn't serve files from the not-yet-synced storage2 server to clients anymore
13:25 MilosCuculovic ?
13:26 kshlm When you say purge, did you follow the same steps you took earlier (detach, clean /var/lib, reinstall glusterfs)?
13:26 kshlm Yup.
13:26 MilosCuculovic oh, forgot detach
13:26 MilosCuculovic will do it right now
13:26 MilosCuculovic http://review.gluster.org/#/c/13806/
13:26 glusterbot Title: Gerrit Code Review (at review.gluster.org)
13:27 MilosCuculovic detach done
13:27 MilosCuculovic Will do the probe and add bric now
13:27 MilosCuculovic now = once the rm finished
13:27 kshlm I was just about to say that.
13:28 kshlm If the brick is a separate filesystem, it would be easier to just reformat.
13:28 MilosCuculovic It is
13:28 MilosCuculovic You're right
13:28 MilosCuculovic will do so
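A hedged sketch of the reformat approach kshlm suggests; /dev/sdb1 is a placeholder for whatever device backs the brick, and XFS with 512-byte inodes is the commonly recommended brick filesystem:
    sudo umount /data/data-cluster
    sudo mkfs.xfs -f -i size=512 /dev/sdb1     # destroys everything on the brick filesystem
    sudo mount /data/data-cluster              # assumes the mount is defined in /etc/fstab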
13:29 kshlm MilosCuculovic, Cool. I'll be going away now.
13:30 kshlm If you have any more issues or questions, people from the US timezone should be coming online soon.
13:30 MilosCuculovic Ok, thanks a lot kshlm
13:30 MilosCuculovic You helped me a lot
13:30 MilosCuculovic Really appreciate
13:31 rafi1 joined #gluster
13:37 rafi joined #gluster
13:39 jiffin1 joined #gluster
13:40 kramdoss_ joined #gluster
13:40 pulli joined #gluster
13:55 TvL2386 joined #gluster
13:55 sbulage joined #gluster
13:59 Asako joined #gluster
14:00 Asako Good morning.  What is the best way to ensure that we have a complete copy of the entire volume at every location?  We're working on building a globally dispersed file system and every plant needs to be able to access data when other sites are down.
14:03 Gambit15 Erm...geo-rep?
14:18 plarsen joined #gluster
14:20 unclemarc joined #gluster
14:22 Arrfab joined #gluster
14:23 squizzi_ joined #gluster
14:26 shyam joined #gluster
14:29 plarsen joined #gluster
14:31 rwheeler joined #gluster
14:32 skylar joined #gluster
14:40 jri left #gluster
14:40 cloph geo-rep won't "ensure" it, as it is async, but it might be good enough. Other than that: regular replication is always a full copy of the entire file.
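For the globally dispersed use case, a hedged outline of creating a geo-replication session; mastervol, slavehost and slavevol are placeholders, and this assumes passwordless root SSH from a master node to the slave (or a mountbroker setup):
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status
Note that geo-replication is one-way and asynchronous, so the remote copy lags the master, as cloph points out below.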
14:44 rafi1 joined #gluster
14:52 wiebel joined #gluster
14:53 unclemarc joined #gluster
14:54 wiebel How can I tell in a replicated-distributed setup which bricks are in a replication group, or am I understanding that concept wrong?
14:54 panina joined #gluster
14:54 armin joined #gluster
14:56 panina joined #gluster
14:59 yalhyane joined #gluster
15:03 cloph wiebel: the order of the bricks determines the replication groups.
15:04 cloph for a 2x2 with bricks a b c d: ab would be replicas, cd would be replicas, and the files are distributed between the ab and cd pairs.
15:04 Karan joined #gluster
15:08 mb_ joined #gluster
15:08 wiebel so the order I see in gluster volume info is all I need to know
15:08 mb_ !zvvprdbf001 cmd help
15:08 wiebel perfect, thank you
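A small illustration of cloph's point, with hypothetical hosts a-d; bricks are grouped into replica sets in the order given (consecutive groups of replica-count bricks), and gluster volume info lists them in that same order:
    gluster volume create myvol replica 2 a:/bricks/b1 b:/bricks/b1 c:/bricks/b1 d:/bricks/b1
    # replica set 1: a:/bricks/b1 + b:/bricks/b1
    # replica set 2: c:/bricks/b1 + d:/bricks/b1
    gluster volume info myvol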
15:09 annettec joined #gluster
15:11 plarsen joined #gluster
15:13 phileas joined #gluster
15:18 rwheeler joined #gluster
15:23 Gambit15 joined #gluster
15:29 dnorman joined #gluster
15:31 rafi joined #gluster
15:43 kramdoss_ joined #gluster
15:47 unclemarc joined #gluster
15:49 d0nn1e joined #gluster
15:57 cholcombe joined #gluster
15:59 haomaiwang joined #gluster
16:03 farhorizon joined #gluster
16:08 Philambdo joined #gluster
16:15 B21956 joined #gluster
16:19 jtux left #gluster
16:20 vbellur joined #gluster
16:20 wushudoin joined #gluster
16:22 susant joined #gluster
16:28 dnorman joined #gluster
16:30 haomaiwang joined #gluster
16:47 Jacob843 joined #gluster
16:49 msvbhat joined #gluster
17:03 hackman joined #gluster
17:26 plarsen joined #gluster
17:37 susant joined #gluster
17:38 bwerthmann joined #gluster
17:49 ahino joined #gluster
17:58 wushudoin| joined #gluster
17:58 rafi joined #gluster
18:03 jiffin joined #gluster
18:05 msvbhat joined #gluster
18:08 farhoriz_ joined #gluster
18:25 rafi joined #gluster
18:31 arif-ali joined #gluster
18:42 jbrooks joined #gluster
18:45 farhorizon joined #gluster
18:46 Gambit15 Is there a way to monitor heal progress beyond "heal VOL info"?
18:47 Gambit15 I recently had a case where a <5GB file took 2 days to heal, and during that time I had no idea what was actually going on or whether something had become stuck
18:47 mhulsman joined #gluster
18:48 Gambit15 It came as a bit of a surprise on day 3 to wake up & find the issue resolved.
18:49 Gambit15 There wasn't even any telltale traffic on the storage interface that would have helped indicate that there was at least activity
19:02 farhorizon joined #gluster
19:04 wushudoin| joined #gluster
19:04 jiffin joined #gluster
19:06 dnorman joined #gluster
19:07 jbrooks joined #gluster
19:23 jbrooks joined #gluster
19:34 haomaiwang joined #gluster
19:36 k4n0 joined #gluster
19:40 Slashman joined #gluster
19:45 B21956 joined #gluster
19:46 hchiramm joined #gluster
19:51 Slashman joined #gluster
19:56 mhulsman1 joined #gluster
19:58 mhulsman joined #gluster
20:13 farhorizon joined #gluster
20:23 johnmilton joined #gluster
20:27 msvbhat joined #gluster
20:35 dnorman joined #gluster
20:38 dnorman_ joined #gluster
20:43 mhulsman1 joined #gluster
20:46 mhulsman joined #gluster
20:50 overclk joined #gluster
21:22 arpu joined #gluster
21:25 gem joined #gluster
21:28 farhoriz_ joined #gluster
21:29 ikla joined #gluster
21:29 ikla I'm trying to run glusterd on a diskless node and getting read-only error
21:29 ikla but my brick is mounted in read-write
21:33 farhorizon joined #gluster
21:35 ikla [MSGID: 101010] [common-utils.c:105:mkdir_p] 0-: Failed due to reason [Read-only file system]
21:35 ikla what is gluster trying to create and where?
21:44 vbellur ikla: possibly /var/lib/glusterd
21:47 ikla does the mgmt/glusterd need to be unique per node
21:47 ikla ?
21:47 ikla can this run in memory?
21:48 vbellur with a ramfs?
21:48 ikla yeah
21:48 ikla my nodes are diskless nfsroot
21:49 k4n0 joined #gluster
21:50 vbellur that should work .. but glusterd uses this for configuration persistence
21:53 abyss^ JoeJulian: Are you there?:)
21:54 JoeJulian abyss^: More or less...
21:57 ikla vbellur, so it's ok to lose on a reboot
21:58 vbellur ikla: ok
21:59 skylar joined #gluster
22:02 JoeJulian What? No it's not. You need to keep /var/lib/glusterd unique per storage host.
22:02 JoeJulian It needs to persist.
22:03 abyss^ JoeJulian: Would you have some time? I have an issue with 19 directories (I fixed about 20 by hand as you said and it is working fine, but those 19 are something similar I suppose ;))
22:03 abyss^ JoeJulian: if you don't have time I will try tomorrow, ok? :)
22:04 JoeJulian Sure, what's the trouble?
22:04 abyss^ JoeJulian: ok, thank you. Give me a sec I have to connect to my work;)
22:04 ikla JoeJulian, :(
22:05 ikla JoeJulian, maybe symlink it to a hdd per host
22:05 ikla JoeJulian, what happens if /var/lib/glusterd gets lost
22:06 JoeJulian Then that server doesn't have any configuration. It no longer has the correct uuid assigned, is not peered with the other servers, and has no knowledge of any volumes configured.
22:07 JoeJulian You can use automation to save and reconstruct /var/lib/glusterd/glusterd.info, re-probe a good server, and sync the volume configuration from that server.
22:10 ikla is there documentation on that?
22:11 ikla is that the only important file in there?
22:11 ikla glusterd.info
22:12 JoeJulian No, but that's the only one that, if wrong, makes re-joining the cluster impossible.
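A hedged sketch of the recovery JoeJulian describes for a node whose /var/lib/glusterd was lost; glusterd.info holds the node's UUID (and operating version), while /persistent/glusterd.info and good-peer are placeholders:
    sudo cp /persistent/glusterd.info /var/lib/glusterd/glusterd.info   # restore before starting glusterd
    sudo systemctl start glusterd
    sudo gluster peer probe good-peer          # re-join the pool via any healthy server
    sudo gluster volume sync good-peer all     # pull the volume configuration from that server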
22:12 abyss^ JoeJulian: see my directories that need to be repaired: https://gist.github.com/anonymous/f84f1ff6613bcd007ae66f0bec010564 Now let's take one example: https://gist.github.com/anonymous/aec27a79d4493e5d64bc771a978a82cf
22:12 glusterbot Title: gist:f84f1ff6613bcd007ae66f0bec010564 · GitHub (at gist.github.com)
22:12 JoeJulian But when if the whole cluster goes down?
22:12 JoeJulian s/when/what/
22:12 glusterbot What JoeJulian meant to say was: But what if the whole cluster goes down?
22:13 abyss^ I reset there to zero: trusted.afr.saas_bookshelf-client-14 on gluster-saas-3-prd and then I did fix-layout
22:13 ikla does that file only hold UUID and operating version ?
22:13 abyss^ but this time it didn't help; this dir is still in split-brain mode, why? ;)
22:15 JoeJulian abyss^: That non-zero trusted.afr.saas_bookshelf-client-15 on one host means that it has updates pending for Brick 16 (client-N where N starts with 0) but the other server shows a non-zero trusted.afr.saas_bookshelf-client-14. Both think they need to update the other. A 3rd replica or an arbiter would have solved that.
22:15 JoeJulian abyss^: Just set one of them to 0x000000000000000000000000 and it should fix itself.
22:15 JoeJulian ikla: Yes
22:16 ikla ok I can make  that unique per node easily
22:16 ikla anything else in that directory that is unique to a specific host
22:17 JoeJulian /var/lib/glusterd/peers
22:17 abyss^ JoeJulian: so I need reset to zero both: trusted.afr.saas_bookshelf-client-14 and trusted.afr.saas_bookshelf-client-15 on  gluster-saas-3-prd right?
22:17 JoeJulian Those hold that specific server's peers. That can be rebuilt with a peer-probe though.
22:17 abyss^ (trusted.afr.saas_bookshelf-client-14 is now zero but previously had a non-zero value)
22:18 ikla but that can be lost on a reboot
22:18 JoeJulian abyss^: No, just one of them needs to have both trusted.afr entries be zero.
22:19 JoeJulian ikla: Again, what about when disaster strikes and you lose all your servers simultaneously? Do you lose all your volume configuration data? It's better to have /var/lib/glusterd be a static directory for each server.
22:19 abyss^ JoeJulian: so trusted.afr.saas_bookshelf-client-14 is zero on gluster-3 and client-15 is zero on gluster-4; is that not correct?
22:20 JoeJulian abyss^: trusted.afr.saas_bookshelf-client-14 and trusted.afr.saas_bookshelf-client-15 on gluster-3. Then the heal will come from gluster-4 to make them all 0.
22:21 ikla JoeJulian, I understand that. I'm want to make this work in a diskless os environment. I just need to know the files from that directory that is unique per host.
22:21 ikla s/I'm/I
22:21 ikla :p
22:22 ikla obviously vols holds volume information
22:22 ikla once that is set it doesn't change unless the configuration changes?
22:22 abyss^ errr... To clarify, I need to set trusted.afr.saas_bookshelf-client-14 to zero on both gluster-3 and gluster-4?
22:22 JoeJulian abyss^: no, just one of them.
22:22 JoeJulian abyss^: self-heal will handle the rest.
22:23 abyss^ but on gluster-3 as you can see: https://gist.github.com/anonymous/aec27a79d4493e5d64bc771a978a82cf there is a zero for trusted.afr.saas_bookshelf-client-14
22:23 glusterbot Title: gist:aec27a79d4493e5d64bc771a978a82cf · GitHub (at gist.github.com)
22:23 JoeJulian abyss^: What I said was that both entries on gluster-3 OR gluster-4 need to be zeroed. Self-heal will handle the rest.
22:26 abyss^ JoeJulian: but when I said that trusted.afr.saas_bookshelf-client-14 and client-15 have to be zeroed you said no ;) Or am I completely missing something ;)
22:27 abyss^ I have only one entry about trusted.afr.saas_bookshelf-client-14 on gluster-3... As you can see: https://gist.github.com/anonymous/aec27a79d4493e5d64bc771a978a82cf
22:27 glusterbot Title: gist:aec27a79d4493e5d64bc771a978a82cf · GitHub (at gist.github.com)
22:27 JoeJulian abyss^: You were suggesting changes to both gluster-3 and gluster-4, unless I was reading something wrong.
22:29 abyss^ I just wrote: 23:17 < abyss^> JoeJulian: so I need reset to zero both: trusted.afr.saas_bookshelf-client-14 and trusted.afr.saas_bookshelf-client-15 on  gluster-saas-3-prd right?
22:29 abyss^ so it is correct or not?:D
22:31 JoeJulian abyss^: So on gluster-saas-3-prd client-14 is already 0. Change client-15 to 0. That's all you need to manually change. Next self-heal, the shd will look at gluster-saas-4-prd and see client-14 is non-zero. It will heal from gluster-saas-4-prd to gluster-saas-3-prd, resetting client-14 on saas-4 and leaving both servers with zeros in both trusted.afr entries.
22:31 JoeJulian correct
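A hedged example of the reset JoeJulian describes, run against the brick path on gluster-saas-3-prd; the directory path is a placeholder, and the xattr name is the one from the gists above:
    # on gluster-saas-3-prd, against the brick directory, not the fuse mount
    setfattr -n trusted.afr.saas_bookshelf-client-15 -v 0x000000000000000000000000 /path/to/brick/affected/dir
    getfattr -d -m trusted.afr -e hex /path/to/brick/affected/dir   # verify both entries now read zero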
22:33 abyss^ Ok. Thank you. BTW: I suppose that to trigger self-heal I don't need to do a fix-layout, I can just stat that dir from a client?
22:33 abyss^ I'm asking because fix-layout lasts about 3 days :/
22:34 JoeJulian After it's healed, check the attributes again. You can fix-layout a single directory with a ,,(targeted fix-layout)
22:34 glusterbot You can trigger a fix-layout for a single directory by setting the extended attribute distribute.fix.layout to any value for that directory. This is done through a fuse client mount. This does not move any files, just fixes the dht hash mapping.
22:35 abyss^ ok, thank you:)
22:35 JoeJulian You're welcome. :)
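For reference, the targeted fix-layout glusterbot describes boils down to a single xattr set through a fuse client mount; /mnt/saas_bookshelf/some/dir is a hypothetical client-side path:
    setfattr -n distribute.fix.layout -v "yes" /mnt/saas_bookshelf/some/dir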
22:41 derjohn_mobi joined #gluster
22:42 farhorizon joined #gluster
22:57 loadtheacc joined #gluster
23:00 Klas joined #gluster
23:12 haomaiwang joined #gluster
23:34 bluenemo joined #gluster
23:38 bwerthmann joined #gluster
23:42 ikla what is the behavior of a striped volume when a member/node goes up and down?
23:44 JoeJulian @stripe
23:44 glusterbot JoeJulian: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
23:45 JoeJulian @factoids learn stripe as The stripe translator is deprecated. Consider enabling sharding instead.
23:45 glusterbot JoeJulian: The operation succeeded.
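If sharding is used in place of the deprecated stripe translator, a hedged example of enabling it on a volume; myvol is a placeholder, and sharding only applies to files created after it is turned on:
    gluster volume set myvol features.shard on
    gluster volume set myvol features.shard-block-size 64MB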
