IRC log for #gluster, 2017-03-04

All times shown according to UTC.

Time Nick Message
00:21 Acinonyx joined #gluster
00:32 moneylotion joined #gluster
00:36 wushudoin joined #gluster
00:42 moneylotion joined #gluster
00:44 major JoeJulian, I am going to sort of ignore all the ZFS code that was part of this original patch .. #1 .. it had no license with it and I dunno who has the rights to sign-off on it #3 it doesn't entirely apply to the btrfs side
00:44 major so I sort of pushed it all over to its own tree and parked it
00:45 misc no #2 ?
00:46 major #2 all the LVM changes where just moving the LVM functions into different files, there are no code changes at all
00:46 major and I wrote all the Makefile.am changes to get it all to compile
00:47 major that said .. I am still trying to come up with ways to ... abuse the dictionary keys .. btrfs doesn't have all the orig_device and snap_device needs .. but .. it has other needs that can likely use (abuse) those interfaces..
00:48 major mostly trying to really quantify what btrfs needs vs what glusterd expects
00:48 major and really .. zfs sorta looks like a drop-in replacement to lvm in a lot of ways
00:57 major is there a recommended environment for testing this stuff .. like .. a VM recommendation or something?
00:57 major I mean .. I can test it on my live servers and all
00:57 major ;)
00:58 misc mhh, any VM would work
00:58 misc like, we run test on 2G VM on rackspace
00:58 major I guess I was more wondering if there was some sort of turnkey test environment
00:59 major vagrant based or something .. I dunno .. maybe wishful thinking :)
00:59 major its that "but ma .. do I have to??!?!"
00:59 misc nope, we do not have
01:00 misc we have some ansible roles to install our jenkins builder, but that's likely not useful for what you want
01:00 major yah .. not so much ..
01:00 major not a big deal atm .. but .. I don't want to do long-term testing on my live servers ..
01:01 major I am gonna need to figure out an on-demand solution
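
For illustration, a hypothetical starting point for a throwaway two-node test setup; nothing like this ships with gluster, and the box name, addresses, and package names below are assumptions, not a project-provided config.

    # Vagrantfile -- hypothetical sketch, not part of the gluster project
    Vagrant.configure("2") do |config|
      (1..2).each do |i|
        config.vm.define "gluster#{i}" do |node|
          node.vm.box = "centos/7"
          node.vm.hostname = "gluster#{i}"
          node.vm.network "private_network", ip: "192.168.56.1#{i}"
          node.vm.provision "shell", inline: <<-SHELL
            yum -y install centos-release-gluster
            yum -y install glusterfs-server
            systemctl enable glusterd && systemctl start glusterd
          SHELL
        end
      end
    end
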
01:02 major glupy needs Python2?
01:03 misc I think so
01:03 misc we didn't port to python3, as far as I know
01:03 major la sigh
01:04 major okay .. that got past the Python.h error
01:06 major been disabling it this whole time .. but .. all better now
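
For reference, the Python.h error mentioned above usually means the Python 2 development headers are missing at build time; a sketch assuming CentOS/RHEL-style package names (adjust for your distro):

    # assumption: CentOS/RHEL 7-era package name for the Python 2 headers
    yum -y install python-devel          # provides Python.h
    ./autogen.sh && ./configure
    make -j4
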
01:10 Acinonyx joined #gluster
01:42 tdasilva joined #gluster
01:51 jeffspeff joined #gluster
02:09 major Well .. thats interesting .. I can make a lot of these static ..
02:09 major maybe...
02:09 major maybe not
02:09 major nevermind
02:11 major just 1
02:16 loadtheacc joined #gluster
02:18 major okay.. spent waaaaay too long bringing these gluster nodes up .. not glusters fault .. mostly fighting GPT and EFI crap .. but .. thats done .. and I have the lvm-snapshot code fairly well packed into its own little corner of glusterd .. tomorrow will see about wrapping these 5 LVM interfaces up behind an abstraction layer so it isn't directly called anymore .. and then .. I suppose it will be off into the
02:18 major land of btrfs
02:19 major and maybe someone who has zfs advice can help me clean-room some new zfs code .. because the other stuff wasn't usable
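
For illustration, a rough C sketch of the kind of abstraction layer being described: a per-backend ops table that glusterd could call instead of invoking the LVM helpers directly. The type, member names, and signatures below are hypothetical and do not correspond to the actual glusterd interfaces.

    #include <stddef.h>

    /* Hypothetical snapshot-backend ops table -- illustrative only; the real
     * glusterd code works with its own types (dict_t, glusterd_volinfo_t, ...). */
    typedef struct snap_backend_ops {
            const char *name;                             /* "lvm", "btrfs", "zfs" */
            int (*probe)(const char *brick_path);         /* can this backend snapshot the brick? */
            int (*create)(const char *origin, const char *snapname);
            int (*activate)(const char *snapname);
            int (*deactivate)(const char *snapname);
            int (*remove_snap)(const char *snapname);
    } snap_backend_ops_t;

    /* glusterd would pick an entry per brick rather than calling LVM directly. */
    static snap_backend_ops_t *snap_backends[] = { NULL /* &lvm_ops, &btrfs_ops, ... */ };
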
02:31 skoduri joined #gluster
02:41 kramdoss_ joined #gluster
02:44 al joined #gluster
02:50 derjohn_mob joined #gluster
02:54 kramdoss_ joined #gluster
02:55 overyander joined #gluster
02:55 loadtheacc joined #gluster
02:59 daMaestro joined #gluster
03:07 Gambit15 joined #gluster
03:25 sbulage joined #gluster
03:28 atinm joined #gluster
03:31 al joined #gluster
03:55 loadtheacc joined #gluster
04:01 Wizek_ joined #gluster
04:31 d0nn1e joined #gluster
04:35 loadtheacc joined #gluster
04:52 daMaestro joined #gluster
05:07 BitByteNybble110 joined #gluster
05:37 RameshN joined #gluster
05:54 ksandha_ joined #gluster
05:59 itisravi joined #gluster
06:04 ethaniel joined #gluster
06:04 ethaniel Hello!
06:04 glusterbot ethaniel: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
06:04 itisravi ethaniel: hi
06:04 itisravi ethaniel: can you pm me the IP of the machines?
06:04 ethaniel hold on, let me get a better client.
06:06 ethaniel joined #gluster
06:31 chris349 joined #gluster
06:37 devcenter joined #gluster
07:55 ashiq joined #gluster
08:15 k4n0 joined #gluster
08:26 DV__ joined #gluster
08:46 nishanth joined #gluster
08:48 ksandha_ joined #gluster
09:01 sona joined #gluster
09:23 sona joined #gluster
09:23 sona joined #gluster
09:25 sona joined #gluster
09:45 ksandha_ joined #gluster
10:01 ahino joined #gluster
10:33 level7 joined #gluster
10:42 untoreh joined #gluster
10:49 john4 joined #gluster
10:56 skoduri joined #gluster
10:56 arpu joined #gluster
11:00 mober joined #gluster
11:42 chris349 joined #gluster
12:18 msvbhat joined #gluster
12:58 skoduri joined #gluster
13:48 caitnop joined #gluster
13:57 level7 joined #gluster
14:17 atinm joined #gluster
14:29 kpease joined #gluster
14:35 tg2 joined #gluster
14:37 riyas joined #gluster
15:00 jiffin joined #gluster
15:59 kramdoss_ joined #gluster
16:01 StormTide joined #gluster
16:04 StormTide with the fuse client and fstab... what are the reasons failover of the node that is in the fstab wouldnt work? I can reboot node2 without clients losing acccess but node1 goes down (the one in the fstab) and they lose access even though node2 is up
16:19 chris349 joined #gluster
16:40 major StormTide, think that is related to http://serverfault.com/questions/502410/can-i-use-an-amazon-load-balancer-in-front-of-multiple-glusterfs-ec2-instances
16:41 glusterbot Title: Can I use an Amazon Load Balancer in front of multiple GlusterFS EC2 instances? - Server Fault (at serverfault.com)
16:46 StormTide major, no i dont think so. this isnt like trying to mount when node1 is down, but rather it goes to transport disconnected error if i take down node1 after the client is connected. the clients status page shows the client is connected to all the bricks before i take the box down, so it should just continue...
16:47 StormTide it works for the node not listed in the fstab... it can go up/down without noticing...
16:48 major hurm
16:49 StormTide it immediately goes
16:49 StormTide ls: cannot open directory '.': Transport endpoint is not connected
16:49 StormTide as soon as i issue the reboot command on node1
16:49 StormTide once the node comes back, it resumes operation no problem
16:50 StormTide actually just tried it again, it actually unmounts the volume
16:51 StormTide all that logs on the client is
16:51 StormTide 0-glusterfsd-mgmt: failed to connect with remote-host: internal.node1.example.org (No data available)
16:52 major and this is a replica 2 or replica 3 arbiter 1 volume?
16:52 Peppard joined #gluster
16:54 StormTide replica 2, with child policy mtime and fixed quorum of 1
16:55 StormTide 2 boxes
16:55 StormTide 4 bricks
16:57 StormTide https://paste.fedoraproject.org/paste/c1NM3cLfH46xI-cZgy~zz15M1UNdIGYhyRLivL9gydE=
16:57 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
16:57 StormTide is the volume configuration
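
For reference, a hedged sketch of how a configuration like the one described (replica 2, fixed client quorum of 1, favorite-child-policy mtime) is typically expressed with the gluster CLI; the volume name is a placeholder.

    # "myvol" is a placeholder volume name
    gluster volume set myvol cluster.quorum-type fixed
    gluster volume set myvol cluster.quorum-count 1
    gluster volume set myvol cluster.favorite-child-policy mtime
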
17:01 StormTide another client i had connected... same error : [glusterfsd-mgmt.c:2102:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect with remote-host: internal.node1.example.ca (No data available)
17:07 StormTide i turned the loglevel to info....
17:07 StormTide https://paste.fedoraproject.org/paste/~nXYsBsU8TrusTYh5tPjxV5M1UNdIGYhyRLivL9gydE=
17:07 jiffin joined #gluster
17:07 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
17:07 StormTide is the client log when the node1 reboots
17:08 StormTide think the key is ... "Exhausted all volfile servers" ... but im not sure why it would do that as the node1 goes down
17:09 major did you alter the server-quorum in addition to the client-quorum?
17:09 major or do you only want reads to go through when 1 node is down
17:10 major is that even a thing if server-quorum isn't met?
17:10 StormTide reads and writes can go on with one server, heal will be child policy mtime
17:11 major cluster.server-quorum-ratio?
17:11 StormTide the servers seem happy, and the volume actually unmounts, it doesnt go read-only like a quorum loss would behave
17:11 major hmmm
17:12 StormTide maybe i can fix it by explicitly adding the other node as a volfile server
17:12 StormTide but i had thought that wasnt necessary now...
17:13 StormTide https://github.com/gluster/glusterfs/blob/1e3538baab7abc29ac329c78182b62558da56d98/glusterfsd/src/glusterfsd-mgmt.c#L2102 <--- seems to be the code path in play
17:13 glusterbot StormTide: <-'s karma is now -5
17:13 glusterbot Title: glusterfs/glusterfsd-mgmt.c at 1e3538baab7abc29ac329c78182b62558da56d98 · gluster/glusterfs · GitHub (at github.com)
17:16 major Certainly looks to be the code you are running into
17:16 StormTide backup-volfile server on client fixes
17:17 StormTide backupvolfile-server=internal.node2.glusterfs.example.ca .... as a mount option fixes the problem
17:17 StormTide so i can use that... but might be a bug
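
For reference, a hedged example of what that mount option can look like in /etc/fstab; hostnames, volume name, and mount point are placeholders.

    # /etc/fstab -- placeholder names; backupvolfile-server supplies a fallback volfile server
    internal.node1.example.org:/myvol  /mnt/gluster  glusterfs  defaults,_netdev,backupvolfile-server=internal.node2.example.org  0 0
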
17:17 major There is a volfile approach to that as well .. https://www.jamescoyle.net/how-to/439-mount-a-glusterfs-volume
17:17 glusterbot Title: Mount a GlusterFS volume – JamesCoyle.net (at www.jamescoyle.net)
17:20 StormTide cool, thx for the help... working properly now, can reboot either box without interruption...
17:21 major well .. I mostly was a sounding board .. you did all the work :)
17:22 StormTide sometimes thats all it takes ;) ... was actually my generating more detailed logs for pasting here that got me on the right track there... so yah, helpful ;)
17:24 StormTide anyway, got to run *cheers*
17:30 plarsen joined #gluster
18:21 ksandha_ joined #gluster
18:22 d0nn1e joined #gluster
18:29 derjohn_mob joined #gluster
18:51 rastar joined #gluster
19:06 tdasilva joined #gluster
19:10 Jacob843 joined #gluster
20:19 ahino joined #gluster
20:21 ksandha_ joined #gluster
20:30 chris349 joined #gluster
20:36 Wizek_ joined #gluster
20:40 rastar joined #gluster
21:05 p7mo joined #gluster
21:07 fre joined #gluster
21:07 mlg9000_1 joined #gluster
21:08 nobody481 joined #gluster
21:16 cliluw joined #gluster
21:17 bitonic_ joined #gluster
21:18 pasik joined #gluster
21:18 d-fence__ joined #gluster
21:18 decayofmind joined #gluster
21:35 jesk joined #gluster
22:02 raginbajin joined #gluster
22:02 Ulrar joined #gluster
22:02 tru_tru joined #gluster
22:02 mattmcc_ joined #gluster
22:02 bhakti joined #gluster
22:03 bitonic_ joined #gluster
23:13 CmndrSp0ck joined #gluster
23:46 pulli joined #gluster
