
IRC log for #gluster, 2017-11-11


All times shown according to UTC.

Time Nick Message
00:41 jcall joined #gluster
00:50 baber joined #gluster
00:55 mdeanda did i screw up? i set up a 3-server cluster where 1 is an arbiter, but i never intended to have all 3 up and running, only the arbiter + 1 data server. the goal was that the other data server would only come up a few times a week to "sync" then shut down again. it mostly works as expected, except some volumes don't mount unless the 3rd is up.
00:56 mdeanda i really just wanted something better than a nightly 'rsync' to help address hard-drive failures at home
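(For reference, a replica-plus-arbiter volume of the kind described above is typically created along these lines; the hostnames, brick paths, and volume name are placeholders, not mdeanda's actual setup:

    gluster volume create homevol replica 3 arbiter 1 \
        data1:/bricks/homevol/brick data2:/bricks/homevol/brick arb1:/bricks/homevol/brick
    gluster volume start homevol

With default quorum settings the volume generally stays usable as long as two of the three bricks, for example one data brick plus the arbiter, are reachable.)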
02:02 gospod3 joined #gluster
02:56 ilbot3 joined #gluster
02:56 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:19 JoeJulian mdeanda: I can't think of any reason (with the default settings) that your volume should fail to mount with one server down. Check the client logs.
03:24 kramdoss_ joined #gluster
03:52 kramdoss_ joined #gluster
03:52 psony joined #gluster
04:10 psony joined #gluster
04:30 psony joined #gluster
04:37 psony joined #gluster
04:47 kramdoss_ joined #gluster
05:00 psony joined #gluster
05:03 nishanth joined #gluster
05:10 kramdoss_ joined #gluster
05:29 mdeanda JoeJulian: i see a lot of "failed to resolve" type errors, as if it's trying to reach the downed server. my fstab references only the 2 running servers
06:01 kramdoss_ joined #gluster
06:06 gyadav__ joined #gluster
06:10 msvbhat joined #gluster
06:18 kramdoss_ joined #gluster
07:21 aravindavk joined #gluster
07:42 aravindavk joined #gluster
07:50 bipul joined #gluster
08:35 mattmcc_ joined #gluster
08:46 aravindavk joined #gluster
08:48 skoduri joined #gluster
09:52 nishanth joined #gluster
10:11 gyadav__ joined #gluster
10:57 major joined #gluster
11:12 ThHirsch joined #gluster
11:45 madwizard joined #gluster
11:45 madwizard joined #gluster
12:02 atinm joined #gluster
12:59 atinm joined #gluster
13:03 nishanth joined #gluster
13:19 gyadav__ joined #gluster
13:42 atinm joined #gluster
13:52 atinm joined #gluster
13:53 owlbot joined #gluster
14:01 Saravanakmr joined #gluster
14:13 vbellur joined #gluster
14:16 major joined #gluster
14:17 Humble joined #gluster
14:24 gyadav__ joined #gluster
14:38 jockek joined #gluster
14:38 plarsen joined #gluster
14:57 plarsen joined #gluster
15:23 wushudoin joined #gluster
15:29 wushudoin joined #gluster
15:31 wushudoin joined #gluster
15:33 sanoj joined #gluster
16:17 gyadav__ joined #gluster
16:27 Acinonyx joined #gluster
16:42 Wizek_ joined #gluster
16:48 Acinonyx joined #gluster
17:01 JoeJulian mdeanda: the server(s) you specify at mount time are only used to retrieve the volume definition. The client connects to all the servers in the volume. In your log, you'll want to look for those attempts. It should mount with 1 missing replica, but if it's failing to connect to the other two that could cause it to fail to mount.
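(A minimal fstab sketch of what JoeJulian describes, with placeholder hostnames and volume name: the listed servers are only used to fetch the volume definition, and backup-volfile-servers just adds fallbacks for that fetch; the client still connects to every brick named in the volume afterwards.

    server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,backup-volfile-servers=server2:server3  0 0
)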
17:02 pasqualeiv dir structure... for the .glusterfs/ dir.  ex: .glusterfs/aa/bb/aabbccguid -- in what case would it be ideal to go 3 levels deep?  ex aa/bb/cc/aabbccguid?
17:04 PatNarciso_ I understand ls and find operations benefit from dirs with fewer entries.
17:04 JoeJulian PatNarciso_: in re the .glusterfs tree, never. It's a fixed layout.
17:06 PatNarciso_ JoeJulian, got it.  so, I should stop this line of thought... I'm aware nginx allows the dir layout to be changed for caching-related performance -- was unsure if there was a benefit with the .glusterfs tree.
17:06 JoeJulian For your own directory trees - find the point at which your directory access strategy begins to lag and tree out your directories to keep the number of files in the directories below the point at which you cannot meet your SLA.
17:06 gyadav__ joined #gluster
17:06 JoeJulian The .glusterfs tree is only read on the server side, so there are no latency problems.
17:06 PatNarciso_ my SLA changes depending on the hour of the day, and amount of hot coffee in my cup.
17:07 JoeJulian ... and who else is looking.
17:07 JoeJulian @joe's performance metric
17:07 glusterbot JoeJulian: nobody complains.
17:07 PatNarciso_ NICE.
17:09 * PatNarciso_ is coming up with a data storage layout this weekend for a media server... ya know, retaining thumbnails, low-quality proxies, and .json files full of metadata about the files.
17:10 JoeJulian All the "small file" stuff. :/
17:10 PatNarciso_ yes sir.
17:10 JoeJulian I know there's been a lot of work done and it's way better, but there's still that TCP and latency overhead that can never be eliminated.
17:12 PatNarciso_ so - I'm thinking about leveraging the glusterfs guids as the UID of the asset.
17:12 PatNarciso_ and then storing the proxies, thumbnails, .json, etc in a /glustervolume/.metadata/aa/bb/aabb/thumbnails/ type of structure.
17:13 PatNarciso_ to be clear: movie.mxf would be guid aabbcc-4gui-detc
17:15 JoeJulian Ah, yes, that's how I've done it.
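(A rough shell sketch of the proposed layout, assuming the asset's guid/gfid is already known; the mount point, directory names, and two-character fan-out come from the scheme described above, not from any established convention:

    gfid="aabbccdd-1122-3344-5566-77889900aabb"                      # placeholder id
    meta="/glustervolume/.metadata/${gfid:0:2}/${gfid:2:2}/${gfid}"
    mkdir -p "$meta/thumbnails" "$meta/proxies"
    cp movie-thumb.jpg "$meta/thumbnails/"                           # hypothetical files
)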
17:16 PatNarciso_ I think I've got 2 (maybe 5?) million assets right now -- just scratching my head over the best method for dir structure -- and was looking at glusterfs and nginx for performance notes from other admins.
17:19 PatNarciso_ gluster and (many) nginx setups suggest an /aa/bb structure.  I've got this itch to explore a /aa/bb/cc (3 level depth) structure.   But I'm unsure if this is a valid itch, or a waste of time.
17:21 JoeJulian It won't hurt. It's just about predicting the number of files in the directory. Obviously the aa directory can only have 255 entries so I've considered /aaa/bbb which then can have 4096 entries per top level directory which is still reasonably quick.
17:21 JoeJulian s/4096/4095/
17:21 glusterbot JoeJulian: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
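(Fan-out arithmetic, for reference: two hex characters allow 16^2 = 256 possible directory names per level, and three allow 16^3 = 4096, so an /aaa/bbb/ scheme fans out to roughly 4096 x 4096, about 16.8 million second-level directories.)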
17:24 * PatNarciso_ thinks about the curious staff, who browse via samba from home... and complain our servers are too slow.
17:26 PatNarciso_ If this data were inaccessible, except by API -- I would consider /aaa/bbb
17:30 PatNarciso_ ... what's the best way to grab the gluster guid these days?  actually, is there a way to get a guid dump of all files on a volume?
17:31 PatNarciso_ I see /.glusterfs/*.db
17:32 JoeJulian From the client you can use find. The inode number is the gfid.
17:33 JoeJulian That will, of course, change if the file is ever recreated.
17:33 JoeJulian Or if, in the future when you want to migrate to a new volume type, you copy the file between volumes.
17:35 PatNarciso_ inode number = gfid = awesome.  I'm ok with the guid changing upon recreate (if a video is being recreated, there's a reason, and that should translate to action: more transcodes, etc)... future migration: excellent point, and that may stop me from using the inode.
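(A sketch of both approaches JoeJulian mentions, with placeholder paths; note the client-side inode number is derived from the gfid rather than being the full 128-bit value, and glusterfs.gfid.string is assumed to be available on a reasonably recent FUSE client:

    # dump inode numbers (gfid-derived) for every file on the mounted volume
    find /mnt/myvol -type f -printf '%i %p\n'

    # read the actual gfid of one file via the virtual xattr
    getfattr -n glusterfs.gfid.string /mnt/myvol/path/to/movie.mxf
)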
17:37 PatNarciso_ In the past I've made a script that assigns an xattr, damguid, to all files within X path (if not already set).  I may stick with this... although it does need refinement.   (I hate fs crawls).   I'm thinking a daemon to monitor the changelog may help focus updates.
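(A hypothetical sketch of that tagging pass, assuming the xattr lives in the user namespace as user.damguid and that the asset root is a placeholder path:

    ROOT=/glustervolume/assets
    find "$ROOT" -type f -print0 | while IFS= read -r -d '' f; do
        # only tag files that don't already carry a damguid
        if ! getfattr -n user.damguid --only-values "$f" >/dev/null 2>&1; then
            setfattr -n user.damguid -v "$(uuidgen)" "$f"
        fi
    done
)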
17:37 melliott joined #gluster
17:46 PatNarciso_ Thanks for your help today JoeJulian!
18:05 ic0n joined #gluster
18:57 DV joined #gluster
19:31 mdeanda JoeJulian: i do get a message like this for both servers that are up: [2017-11-11 19:23:37.008659] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 0-extra-client-0: disconnected from extra-client-0. Client process will keep trying to connect to glusterd until brick's port is available
19:32 JoeJulian mdeanda: Which version are you using?
19:32 mdeanda then followed by a "quorum not met" and an "all subvolumes are down"
19:33 mdeanda JoeJulian: the client (slackware) is using 3.11.2, let me get server version
19:33 mdeanda server has 3.12.2, ubuntu
19:35 JoeJulian There's a bug, recently fixed, where the server fails to report the correct port; I'm checking to see if the fix made it into 3.12.2. A workaround is stopping and starting the volume (all servers need to be up to do that).
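(The workaround described amounts to a full stop/start of the volume, which briefly interrupts clients; the volume name is a placeholder:

    gluster volume stop myvol
    gluster volume start myvol
    gluster volume status myvol     # confirm the advertised brick ports afterwards
)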
19:35 ic0n joined #gluster
19:39 mdeanda how can i tell what port the server is reporting for the volumes? gluster volume info doesn't have that
19:40 JoeJulian gluster volume status
19:45 mdeanda seems to show different ports, and netstat seems to show network activity on the expected ports, so as far as i can tell it's ok. i'll keep poking around. about to step out, thanks for the help!
19:46 mdeanda actually i think this issue is only from the older slackware client. i'll try the same mount in a bit with only the newer 3.12.2 version -- i should probably update the slackware client anyway so it matches
19:47 JoeJulian Good luck
20:02 mdeanda JoeJulian: updated the slackware client and the good news is it's not more broken, but it didn't fix it. i'll keep trying stuff. wife is in the car waiting.. hehe
20:11 plarsen joined #gluster
20:24 madwizard joined #gluster
20:24 madwizard joined #gluster
22:07 gospod4 joined #gluster
22:29 Intensity joined #gluster
23:34 gospod4 joined #gluster
23:37 gospod3 joined #gluster
23:56 mattmcc joined #gluster
