
IRC log for #gluster, 2017-07-18


All times are shown in UTC.

Time Nick Message
00:31 om2_ joined #gluster
00:35 shyam joined #gluster
00:52 gyadav_ joined #gluster
00:54 Alghost_ joined #gluster
01:07 jeffspeff joined #gluster
01:20 plarsen joined #gluster
01:24 Alghost joined #gluster
01:47 prasanth joined #gluster
02:06 Somedream_ joined #gluster
02:07 AppStore_ joined #gluster
02:07 yosafbridge` joined #gluster
02:08 kjackal_ joined #gluster
02:10 JoeJulian pl3bs: Well, I know I say it a lot but it depends.
02:10 JoeJulian Typically, though, systemd is plenty sufficient.
02:11 semiosis_ joined #gluster
02:12 pl3bs I configured resources in pacemaker, works well :D
02:12 n-st- joined #gluster
02:12 pl3bs https://paste.fedoraproject.org/paste/n0UqEoaZVTRq96Lh2TGKmg
02:12 Telsin_ joined #gluster
02:12 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
02:12 DJCl34n joined #gluster
02:12 Limebyte_4 joined #gluster
02:12 v12aml_ joined #gluster
02:13 thatgraemeguy_ joined #gluster
02:13 thatgraemeguy_ joined #gluster
02:13 iopsnax joined #gluster
02:13 tdasilva- joined #gluster
02:13 DJClean joined #gluster
02:14 armyriad joined #gluster
02:14 Vaelater1 joined #gluster
02:15 kshlm joined #gluster
02:18 tamalsaha[m] joined #gluster
02:18 decayofmind joined #gluster
02:19 uebera|| joined #gluster
02:19 victori joined #gluster
02:28 shyam left #gluster
02:28 Alghost joined #gluster
02:29 Kassandry joined #gluster
02:30 Alghost_ joined #gluster
02:37 baber joined #gluster
02:39 amarts joined #gluster
02:52 msvbhat joined #gluster
02:55 Igel joined #gluster
02:57 thwam joined #gluster
03:00 raginbajin joined #gluster
03:00 kramdoss_ joined #gluster
03:08 msvbhat joined #gluster
03:10 kramdoss_ joined #gluster
03:12 nbalacha joined #gluster
03:28 winrhelx joined #gluster
03:38 om2_ joined #gluster
03:41 tarepanda joined #gluster
03:42 tarepanda Anyone around who could help me with a gluster issue?
03:42 JoeJulian How would we know? You haven't said what the issue is yet. ;)
03:43 tarepanda Well, last time I tried asking nobody was around at all. :)
03:43 ppai joined #gluster
03:44 JoeJulian Well I'm sorry about that. I normally scroll back in the morning to see if anybody got missed and I missed that myself.
03:45 tarepanda On GlusterFS 3.10.0 (CentOS 7.3) and with a single replicated volume, two bricks. Both bricks show ok in status but are very obviously out of sync... trying to heal yields "unsuccessful on bricks that are down"
03:45 tarepanda heal info spits out a large list of files but freezes after a specific one
03:45 JoeJulian what does gluster volume status show? (use some ,,(paste) service)
03:45 glusterbot For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
03:47 tarepanda https://pastebin.com/ndaZdd5i
03:47 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
03:47 tarepanda Also sync fails with "is not a friend"
03:48 JoeJulian "gluster volume sync ..." is for syncing configurations.
03:48 tarepanda ahh
03:49 JoeJulian Grab the last 200 lines (or so) from glustershd.log (on either server)
03:50 tarepanda 200 lines of glustershd.log coming up: https://paste.fedoraproject.org/paste/MqJWk99uv1Yle54PvNdrbw
03:50 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
03:52 JoeJulian "Gfid mismatch detected" suggests that the files were created independently on each brick so now they don't have matching gfid numbers.
03:53 tarepanda That seems in line with my thinking that the bricks are out of sync; some files exist on one but not the other and the brick sizes are different.
03:53 tarepanda I know that for a time the connection between the two servers went down, which would explain why, but I don't know how to fix it.
03:54 JoeJulian To prevent this, you might consider adding an arbiter.
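(Converting an existing replica-2 volume to an arbiter setup is done with add-brick on recent releases; a sketch, assuming a placeholder volume gv0 and a third host named arb1:
    gluster volume add-brick gv0 replica 3 arbiter 1 arb1:/bricks/arbiter/gv0
The arbiter brick holds only metadata, so it can live on a much smaller disk.)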
03:55 JoeJulian To repair it... stand by. I'm checking to see if there's anything built in for this. There's a number of ways of healing split-brain but this doesn't seem to be identifying as such in the log.
03:55 JoeJulian It is, of course, actually split-brain.
03:56 riyas joined #gluster
03:57 tarepanda That's my conundrum -- gluster itself seems to think everything is fine, which is a bit baffling. :)
03:57 JoeJulian If there's nothing built in, ,,(splitmount) should handle it.
03:57 glusterbot https://github.com/joejulian/glusterfs-splitbrain
03:58 JoeJulian You'd have to script something to choose one or the other, but that's relatively easy.
03:59 JoeJulian Ah good. This is handled by the built-in split-brain handling.
03:59 tarepanda Is there any risk in using that when gluster itself doesn't realize it's split?
04:00 itisravi joined #gluster
04:00 JoeJulian See `gluster volume set help` look for "cluster.favorite-child-policy"
04:00 JoeJulian Always a risk when you're healing split-brain by policy.
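(A sketch of the policy-based repair being discussed, assuming a placeholder volume gv0; the supported policies besides none are size, ctime, mtime and majority:
    gluster volume set gv0 cluster.favorite-child-policy mtime
    gluster volume heal gv0
With mtime, the copy with the newer modification time wins, so whatever is on the losing brick is discarded.)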
04:01 atinm_ joined #gluster
04:02 JoeJulian In case you're interested, this is the bit of the code that handles that: https://github.com/gluster/glusterfs/blob/master/xlators/cluster/afr/src/afr-self-heal-entry.c#L50-L117
04:02 glusterbot Title: glusterfs/afr-self-heal-entry.c at master · gluster/glusterfs · GitHub (at github.com)
04:02 tarepanda Oh, nice... that's quite legible. :)
04:03 JoeJulian Yeah, they do some pretty good work.
04:04 tarepanda Strange, trying to set the option is having no effect -- it just stays "none"
04:06 tarepanda Oh, oops. User error.
04:07 tarepanda With the setting changed should it automatically be fixing itself, or do I have to use splitmount still?
04:09 psony joined #gluster
04:09 buvanesh_kumar joined #gluster
04:10 dominicpg joined #gluster
04:12 JoeJulian No need for splitmount
04:12 JoeJulian you might have to "gluster volume heal $vol"
04:13 rejy joined #gluster
04:13 tarepanda Hm, getting a lot of "skipping conservative heal on file" entries in the log.
04:15 JoeJulian Still? And is it still "gfid mismatch" or is it "type mismatch"?
04:16 tarepanda https://paste.fedoraproject.org/paste/znUrz-e1YtssTTJDTPLOZg
04:16 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
04:16 tarepanda Favorite child policy is set to mtime, btw.
04:17 itisravi joined #gluster
04:18 JoeJulian That last log entry was from about 3 minutes before you posted it. Was that human lag or is it just not adding new entries since 04:14?
04:19 tarepanda no new entries
04:20 JoeJulian I'm considering that a victory. You should be able to check with "gluster volume heal $vol statistics"
04:21 JoeJulian And of course "heal $vol info" should eventually be changing.
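(The monitoring commands being referred to, again with gv0 as a placeholder volume name:
    gluster volume heal gv0 statistics
    gluster volume heal gv0 statistics heal-count
    gluster volume heal gv0 info
    gluster volume heal gv0 info split-brain
The heal-count form gives a quick per-brick count of entries still pending heal.)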
04:23 skumar joined #gluster
04:23 tarepanda Oh, nice, it did recognize it as a split brain... though there are 1200ish heal failed entries
04:24 JoeJulian that's ok. They get retried.
04:26 tarepanda You've been a huge help. :)
04:27 JoeJulian Happy I could help.
04:27 aravindavk joined #gluster
04:27 shdeng joined #gluster
04:32 Saravanakmr joined #gluster
04:33 susant joined #gluster
04:35 msvbhat joined #gluster
04:36 tarepanda Since there are so many files listed as being in split brain but not being resolved, is there a way to force those to resolve?
04:36 JoeJulian They are. That's what the self-heal daemon does.
04:37 tarepanda They are being resolved? But the log just says it's skipping them.
04:40 nbalacha joined #gluster
04:41 JoeJulian Sorry, I'm confused. I thought you were saying that there were no new log entries.
04:41 Shu6h3ndu__ joined #gluster
04:42 tarepanda There weren't when you asked, and there are now. Both are from the heal process and saying that it's just skipping all of those 1200-odd files.
04:42 tarepanda http://gluster-users.gluster.narkive.com/bhwUFTPW/glusterfs-split-brain-issue
04:42 glusterbot Title: GlusterFS Split Brain issue (at gluster-users.gluster.narkive.com)
04:42 tarepanda This suggests that I need to manually weed out the files?
04:46 tarepanda Not sure how out of date that is.
04:48 tarepanda I did a grep for one gfid that's supposedly been self healed, but it doesn't look like it's being dealt with at all: https://paste.fedoraproject.org/paste/6F4sjq9IYv3J-V-pGrhVAg
04:48 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
04:49 gyadav_ joined #gluster
04:58 JoeJulian two years is pretty old.
04:59 JoeJulian Ok... let's try "gluster volume start $vol force". That won't interrupt your fuse clients but it will make the self-heal daemon and nfs daemons reload.
04:59 ankitr joined #gluster
05:02 susant joined #gluster
05:05 jiffin joined #gluster
05:08 karthik_us joined #gluster
05:09 tarepanda @joejulian Ran it, got "success"
05:10 tarepanda Log shows same stuff -- trying self heal, gfid mismatch, skipping conservative merge.
05:11 JoeJulian show "gluster volume info"
05:11 amarts joined #gluster
05:12 tarepanda gluster volume info: https://paste.fedoraproject.org/paste/8uoQ8tqrrAjDqENGdiia6w
05:12 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
05:14 rafi1 joined #gluster
05:16 ndarshan joined #gluster
05:19 JoeJulian Well... I'm heading for bed. If you turn on debug logging you should see https://github.com/gluster/glusterfs/blob/master/xlators/cluster/afr/src/afr-self-heal-common.c#L723
05:19 glusterbot Title: glusterfs/afr-self-heal-common.c at master · gluster/glusterfs · GitHub (at github.com)
05:20 apandey joined #gluster
05:21 JoeJulian If you don't, that would imply that something didn't set the valid flag for one or more of the afr replies. I could help again tomorrow but the morning will be hectic. I'm taking my daughter in for a tonsillectomy.
05:21 JoeJulian Hopefully this is enough to get you started.
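(Raising the self-heal daemon's log verbosity is normally done through the volume's diagnostics options; a sketch assuming the placeholder volume gv0, and assuming diagnostics.client-log-level also governs glustershd on this version, which may not hold everywhere:
    gluster volume set gv0 diagnostics.client-log-level DEBUG
    gluster volume start gv0 force
    gluster volume set gv0 diagnostics.client-log-level INFO
The "start ... force" restarts the self-heal daemon so the new level is sure to take effect, and the last line reverts it when you are done.)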
05:27 tarepanda Thanks, Joe. Hope all goes smoothly for your daughter!
05:40 kdhananjay joined #gluster
05:43 Prasad joined #gluster
05:55 hgowtham joined #gluster
05:57 uebera|| joined #gluster
05:58 _KaszpiR_ joined #gluster
05:59 Karan joined #gluster
06:01 apandey joined #gluster
06:01 poornima joined #gluster
06:02 skumar_ joined #gluster
06:16 ayaz joined #gluster
06:20 sanoj joined #gluster
06:22 kotreshhr joined #gluster
06:24 gospod2 joined #gluster
06:31 apandey_ joined #gluster
06:31 skumar_ joined #gluster
06:39 Saravanakmr joined #gluster
06:47 msvbhat joined #gluster
06:51 ankitr joined #gluster
06:53 Wizek_ joined #gluster
07:26 AshishS joined #gluster
07:27 ivan_rossi joined #gluster
07:36 amarts joined #gluster
07:44 fsimonce joined #gluster
07:44 dubs joined #gluster
07:59 sunkumar joined #gluster
08:11 msvbhat joined #gluster
08:12 gyadav__ joined #gluster
08:13 amarts joined #gluster
08:14 sona joined #gluster
08:19 gyadav_ joined #gluster
08:22 itisravi joined #gluster
08:28 mbukatov joined #gluster
08:42 itisravi__ joined #gluster
08:45 pkoro joined #gluster
08:58 sanoj joined #gluster
09:00 kramdoss_ joined #gluster
09:07 Alghost joined #gluster
09:08 atinmu joined #gluster
09:15 hgowtham joined #gluster
09:19 amarts joined #gluster
09:21 kramdoss_ joined #gluster
09:30 arif-ali joined #gluster
09:41 Saravanakmr joined #gluster
09:43 MrAbaddon joined #gluster
09:56 atinmu joined #gluster
10:03 amarts joined #gluster
10:05 ankitr joined #gluster
10:10 hgowtham_ joined #gluster
10:19 msvbhat joined #gluster
10:25 bfoster joined #gluster
10:29 bfoster joined #gluster
10:31 amarts joined #gluster
10:40 MrAbaddon joined #gluster
10:42 kdhananjay joined #gluster
10:56 amarts joined #gluster
11:02 atinm_ joined #gluster
11:02 msvbhat joined #gluster
11:04 apandey joined #gluster
11:09 msvbhat joined #gluster
11:22 baber joined #gluster
11:32 Anarka joined #gluster
11:50 kotreshhr left #gluster
12:05 sona joined #gluster
12:18 shaunm joined #gluster
12:19 atinmu joined #gluster
12:33 kramdoss_ joined #gluster
12:36 marbu joined #gluster
12:44 atinm_ joined #gluster
12:45 jstrunk joined #gluster
12:47 mbukatov joined #gluster
12:48 msvbhat joined #gluster
12:50 buvanesh_kumar joined #gluster
12:54 gyadav_ joined #gluster
13:10 hvisage joined #gluster
13:22 aravindavk joined #gluster
13:29 nbalacha joined #gluster
13:32 skylar joined #gluster
13:37 AshishS joined #gluster
13:45 msvbhat joined #gluster
13:45 ahino joined #gluster
13:46 nbalacha joined #gluster
13:57 shyam joined #gluster
14:00 buvanesh_kumar joined #gluster
14:02 Humble joined #gluster
14:03 kramdoss_ joined #gluster
14:04 arpu joined #gluster
14:14 Teraii joined #gluster
14:20 sunkumar joined #gluster
14:30 atinmu joined #gluster
14:35 sona joined #gluster
14:37 aravindavk joined #gluster
14:40 kpease joined #gluster
14:44 farhorizon joined #gluster
15:01 wushudoin joined #gluster
15:03 bowhunter joined #gluster
15:16 jstrunk joined #gluster
15:19 Teraii joined #gluster
15:35 vbellur joined #gluster
15:36 vbellur joined #gluster
15:39 winrhelx joined #gluster
15:41 plarsen joined #gluster
15:52 vbellur joined #gluster
15:53 vbellur joined #gluster
15:53 fsimonce joined #gluster
15:54 vbellur joined #gluster
15:55 vbellur joined #gluster
16:13 msvbhat joined #gluster
16:14 winrhelx joined #gluster
16:15 Saravanakmr joined #gluster
16:17 MrAbaddon joined #gluster
16:19 dubs joined #gluster
16:46 ivan_rossi left #gluster
17:19 ChrisHolcombe joined #gluster
17:21 sona joined #gluster
17:32 rafi joined #gluster
17:40 rafi1 joined #gluster
17:44 winrhelx joined #gluster
17:46 csaba joined #gluster
17:48 jiffin joined #gluster
17:48 vbellur joined #gluster
17:52 rafi joined #gluster
17:52 sunkumar joined #gluster
17:56 Jacob843 joined #gluster
18:05 [diablo] joined #gluster
18:07 deep-book-gk_ joined #gluster
18:09 deep-book-gk_ left #gluster
18:11 rafi joined #gluster
18:11 Jacob843 joined #gluster
18:17 major ahh .. I have to intelligently lay-out all the bricks ahead of time if I have multiple bricks per node
18:19 Jacob843 joined #gluster
18:23 Gambit15 joined #gluster
18:35 msvbhat joined #gluster
18:37 vbellur joined #gluster
18:45 rastar joined #gluster
18:55 Jacob843 joined #gluster
19:07 major are there any docs for expanding dispersed volumes?
19:23 baber joined #gluster
19:32 sergem joined #gluster
19:40 sergem Hello. Short question: how can I remove hostname2 of the peer? For one of the peers in /var/lib/glusterd/peers/a8e452b6-3d3c-45b9-826d-8a08e747a57e I have: hostname1=node18.mydomain.com and hostname2=192.168.6.28. The hostname1 is correct, so it works. But that second IP is gone a long time ago. Is there some `gluster remove hostname 192.168.6.28` command to remove it? Or is it stuck there forever?
19:42 major sergem, see: gluster peer detach
19:42 major or you only want to gut the IP?
19:43 major when I did that earlier I ended up just detaching the peer and re-probing it
19:46 WebertRLZ joined #gluster
19:47 sergem major: I can't just detach it, its hostname version is used in one of the bricks: Brick1: node18.mydomain.com:/srv/disk1/brick. So `detach` says: peer detach: failed: Brick(s) with the peer 192.168.6.28 exist in cluster
19:50 fcami joined #gluster
19:50 major sergem, I don't really know of any easy way to just update the IP outside of hand editing the configs .. which may be a bit error prone
19:51 major which is why I opted in my case to take the hit of moving the data off the used bricks and detach/probe the server and re-add the bricks.  I have read a few posts from the mailing lists of people who have edited the files by hand .. some with success, and some not so successful
19:52 major http://lists.gluster.org/pipermail/gluster-users/2014-May/017323.html
19:52 glusterbot Title: [Gluster-users] Proper procedure for changing the IP address of a glustefs server (at lists.gluster.org)
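(For context, the peer definition being discussed is a flat file under /var/lib/glusterd/peers/, named after the peer's uuid. Roughly, reconstructed from the fields sergem quoted, with the state value illustrative:
    # /var/lib/glusterd/peers/a8e452b6-3d3c-45b9-826d-8a08e747a57e
    uuid=a8e452b6-3d3c-45b9-826d-8a08e747a57e
    state=3
    hostname1=node18.mydomain.com
    hostname2=192.168.6.28
The hand-edit route in that thread amounts to stopping glusterd on the affected nodes, making the same change in every peer's copy of the file, and starting glusterd again; as major notes above, it is error prone, so back the directory up first.)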
19:54 sergem It's not that I want to edit it, I don't mind to remove hostname2 completely, as long as correct hostname1 is still there.
19:54 major I suspect it is generally the same process
19:55 sergem node18 has 52G of data (not too fast to move the data off it). And editing configs manually would probably require me to stop all 7 glusterfs nodes I have and edit configs on each of them. So I secretly hoped there's some undocumented command like `gluster remove hostname ...` making things easier :)
19:55 major I could be wrong .. but the vast majority of that data is stored off in flat text files that are kinda all over the place
19:55 major maybe there is some random 'set' operation to do it though
19:56 major one of those tricks I would love to know if it is in there
19:59 sergem I actually didn't even know gluster supports multiple hostnames for the same peer, until I found that in one of our peers. :)
19:59 sergem "That's a nice feature, probably added to allow changing IP/hostname of the peer" - I thought. "Like, just add new hostname for same peer, then remove the old one." So there must be a way to remove it... I hoped...
20:00 major hmmm
20:00 farhorizon joined #gluster
20:01 major http://lists.gluster.org/pipermail/gluster-devel/2015-July/045997.html
20:01 glusterbot Title: [Gluster-devel] Problems when using different hostnames in a bricks and a peer (at lists.gluster.org)
20:02 major looks like that was cleaned up in 3.6
20:09 sergem major: Thanks for the links by the way! I'm reading those now...
20:12 baber joined #gluster
20:12 winrhelx joined #gluster
20:17 major no probs .. though honestly I was hoping to find information that this bit of insanity was finally cleaned up
20:18 major one more thing to add to my ever growing todo list
20:21 dubs joined #gluster
20:22 daMaestro joined #gluster
20:33 fcami joined #gluster
20:45 ekarlso does gluster do subvolumes ?
20:46 major no...
20:46 major like . mounting a subdirectory from w/in a volume?
20:46 skylar joined #gluster
20:46 major closest you can currently do is bind mount the subdirectory
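(A sketch of the bind-mount approach, with placeholder server, volume and directory names:
    mount -t glusterfs server1:/gv0 /mnt/gv0
    mount --bind /mnt/gv0/projects /srv/projects
Clients still mount the whole volume; the bind mount just exposes one directory of it at another path.)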
20:46 ekarlso major: more like that if you have a volume 2 tb disks and you want to use the space there..
20:47 ekarlso ok
20:47 major ...
20:47 major I don't entirely follow
20:47 ekarlso major: we have set up a volume atm with 6*1.8tb and would like to utilize some of the space for something else than ovirt ;)
20:48 major oh..
20:49 ekarlso but I guess that is not how gluster works ;)
20:49 major soo ... yes ... but I guess there are certain risks associated with it, and it is generally limited to how you allocated your space in the first place
20:49 major case in point, I have some btrfs pools, and I create btrfs subvol's for my gluster vols .. and I can "technically" allocate new volumes atop new btrfs subvols on the same drives
20:50 ekarlso ok, but I guess ceph would be better at this : P
20:50 major and gluster can't really tell the difference
20:50 gospod2 joined #gluster
20:52 major I am generally of the opinion that gluster doesn't really have a good way to "manage" the space you hand to it .. so I do a lot of the space management outside of gluster via my storage pools
20:52 major there are certainly ways to enhance gluster to do the work IMHO .. but .. I am not aware of any activity on that front
20:54 major within the current code I guess you would either make one huge volume and then use quotas on subdirectories and bind-mount the subdirectories to specific locations (wonder if automount could help w/ that), or it's sort of a matter of managing the storage pools external to gluster (btrfs, lvm, zpool) and creating various volumes using those pools
20:54 major both approaches have a bit of manual overhead
20:54 major not like you can just kick it off from the gluster CLI
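(The quota half of that first approach looks roughly like this, with gv0 and /projects as placeholders:
    gluster volume quota gv0 enable
    gluster volume quota gv0 limit-usage /projects 500GB
The bind mounts themselves still have to be managed outside gluster, which is the manual overhead being described.)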
20:56 major https://github.com/gluster/glusterfs-specs/blob/master/under_review/subdirectory-mounts.md
20:56 glusterbot Title: glusterfs-specs/subdirectory-mounts.md at master · gluster/glusterfs-specs · GitHub (at github.com)
20:56 major if this was ever completed then it would likely fit the same solution space
21:43 vbellur1 joined #gluster
21:43 vbellur1 joined #gluster
21:44 vbellur1 joined #gluster
21:45 vbellur1 joined #gluster
21:46 vbellur joined #gluster
21:47 vbellur joined #gluster
22:47 om2 joined #gluster
23:02 Alghost joined #gluster
23:06 Alghost joined #gluster
23:51 Alghost_ joined #gluster
23:59 kramdoss_ joined #gluster
