
IRC log for #gluster, 2016-09-19


All times shown according to UTC.

Time Nick Message
00:50 kimmeh joined #gluster
00:55 JoeJulian Netwolf: No clue why they would say that.
00:55 shdeng joined #gluster
00:56 shdeng joined #gluster
00:57 JoeJulian ten10: Isn't multipathing supposed to handle that? I'm not an iscsi expert by any stretch, but that was the impression I got.
01:02 zhangjn joined #gluster
01:08 kukulogy joined #gluster
01:21 derjohn_mobi joined #gluster
01:29 baudster joined #gluster
01:46 baudster when I mount a gluster volume with the "backupvolfile-server=" option pointing to a secondary gluster node, I am assuming that when the primary gluster node goes down the mount will automatically point to the second one.. correct?
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:56 harish joined #gluster
02:07 blu__ joined #gluster
02:09 ZachLanich joined #gluster
02:47 Lee1092 joined #gluster
03:04 harish joined #gluster
03:04 Gambit15 joined #gluster
03:07 harish_ joined #gluster
03:14 magrawal joined #gluster
03:17 nbalacha joined #gluster
03:23 harish_ joined #gluster
03:50 riyas joined #gluster
03:52 baudster joined #gluster
03:53 baudster I asked this earlier but I got disconnected so I wasn't able to see the response.
04:03 baudster if I mount a gluster volume on a client through fstab with the "backupvolfile-server=" option pointing to a secondary gluster node in the cluster, and the primary node goes down, the mounted volume will point automatically to the secondary node and there should not be any downtime.... correct?
04:05 itisravi joined #gluster
04:06 prth joined #gluster
04:08 harish joined #gluster
04:10 jiffin joined #gluster
04:10 kshlm baudster, 'backupvolfile-server' is used only for the initial fetching of the GlusterFS volfile when you do a `mount`.
04:11 kshlm The backup server is used to fetch the volfile if the primary isn't available.
04:11 kshlm Once the mount is done, automatic failover/fallback depends on the volume type.
04:12 kshlm If the volume is a replicate volume and one member of the replica set goes offline, the client will continue operating with the other members in the set.
04:14 atinm joined #gluster
04:21 baudster so I have 2 gluster nodes with a replicated volume (glusterA, glusterB) and a client that mounts that gluster volume through fstab with the line "glusterA:/glustervol0    /var/www        glusterfs       defaults,_netdev,backupvolfile-server=glusterB:/var/www  0       0"
04:21 msvbhat joined #gluster
04:21 baudster when I take down glusterA, the /var/www mount on the client is not accessible
04:21 baudster when, according to your statement, it should be?
04:32 ashiq joined #gluster
04:37 ppai joined #gluster
04:43 kdhananjay joined #gluster
04:51 kimmeh joined #gluster
04:54 karthik_ joined #gluster
04:56 ramky joined #gluster
04:58 kshlm baudster, Yup. The client should still be accessible.
05:00 ndevos baudster: that backupvolfile-server=glusterB:/var/www looks weird, backupvolfile-server= only needs the hostname
05:01 kshlm It might take a little while to detect that glusterA is down. But it should not be longer than 42s (the RPC ping-timeout).
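For reference, a corrected version of baudster's fstab line along the lines ndevos and kshlm describe (hostname only after backupvolfile-server=; hosts, volume and mountpoint are the same placeholders used above):
    glusterA:/glustervol0  /var/www  glusterfs  defaults,_netdev,backupvolfile-server=glusterB  0  0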
05:07 ndarshan joined #gluster
05:10 Pupeno joined #gluster
05:12 ankitraj joined #gluster
05:15 lalatenduM joined #gluster
05:22 aravindavk joined #gluster
05:24 apandey joined #gluster
05:32 Philambdo joined #gluster
05:33 [diablo] joined #gluster
05:35 robb_nl joined #gluster
05:51 msvbhat joined #gluster
05:51 mhulsman joined #gluster
05:52 k4n0 joined #gluster
05:56 derjohn_mob joined #gluster
06:04 satya4ever joined #gluster
06:09 mhulsman joined #gluster
06:10 R0ok_ joined #gluster
06:13 Bhaskarakiran joined #gluster
06:14 purpleidea (and the answer to life, the universe and everything!)
06:15 rafi joined #gluster
06:28 jtux joined #gluster
06:32 Jacob843 joined #gluster
06:39 k4n0_ joined #gluster
06:58 deniszh joined #gluster
06:58 hackman joined #gluster
06:59 ivan_rossi joined #gluster
07:01 prth joined #gluster
07:06 kdhananjay1 joined #gluster
07:19 derjohn_mob joined #gluster
07:21 kdhananjay joined #gluster
07:35 jwd joined #gluster
07:37 baudster joined #gluster
07:38 ieth0 joined #gluster
07:42 petan joined #gluster
07:44 fsimonce joined #gluster
07:45 hchiramm joined #gluster
07:51 Philambdo1 joined #gluster
07:53 arcolife joined #gluster
07:56 ndevos (and bug 1377097 too!)
07:56 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1377097 medium, medium, ---, ndevos, POST , The GlusterFS Callback RPC-calls always use RPC/XID 42
07:56 amud joined #gluster
07:57 baudster all i had to do was lower the network ping timeout limit setting
07:57 baudster default seems to be 45 seconds
07:57 baudster so it takes that amount of time for the gluster volume on the other node to be accessible after the primary node goes down.
07:58 baudster changed it by using "gluster volume set <volume> network.ping-timeout <time in sec>"
07:58 baudster works like a charm now
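A concrete form of the command baudster used, with an illustrative 10-second value; the current setting can be checked with `gluster volume get` (note glusterbot's explanation below of why the default is deliberately long):
    gluster volume set glustervol0 network.ping-timeout 10
    gluster volume get glustervol0 network.ping-timeout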
07:58 ndevos ~ping timeout | baudster
07:58 glusterbot baudster: I do not know about 'ping timeout', but I do know about these similar topics: 'ping-timeout'
07:58 ndevos ah!
07:59 ndevos @ping-timeout
07:59 glusterbot ndevos: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. With an average MTBF of 45000 hours for a server, even just a replica 2 would result in a 42 second MTTR every 2.6 years, or 6 nines of uptime.
08:00 baudster ok, you just lost me right there :)
08:02 amud hi, i am looking for bitrot signature process optimisation (faster signing of files) in 3.8.3, how to do this?
08:03 purpleidea 145892
08:05 purpleidea ~ping-timeout | baudster
08:05 glusterbot baudster: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. With an average MTBF of 45000 hours for a server, even just a replica 2 would result in a 42 second MTTR every 2.6 years, or 6 nines of uptime.
08:05 purpleidea sorry scrollback issue
08:05 jri joined #gluster
08:05 purpleidea just fooling with Joe's bot.
08:05 baudster :) it's cool
08:06 baudster can somebody actually explain all that mumbo jumbo to me? :)
08:06 purpleidea which part?
08:06 purpleidea MTBF==mean time between failure
08:06 purpleidea MTTR==mean time to recovery
08:07 baudster got it
08:07 baudster ta
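Rough arithmetic behind glusterbot's numbers, assuming the quoted figures:
    failure rate:    2 x (1/45000 per hour)  ->  one failure every ~22500 h  (~2.6 years)
    downtime/event:  42 s (the ping-timeout)
    unavailability:  42 / (22500 * 3600) ~= 5.2e-7  ->  ~99.99995% uptime, i.e. roughly 6 nines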
08:18 ilbot3 joined #gluster
08:18 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
08:20 ndarshan joined #gluster
08:20 Philambdo joined #gluster
08:25 shubhendu joined #gluster
08:36 derjohn_mob joined #gluster
08:42 Saravanakmr joined #gluster
09:07 [diablo] joined #gluster
09:32 robb_nl joined #gluster
09:41 scc joined #gluster
09:44 jkroon joined #gluster
09:46 rwheeler joined #gluster
09:47 msvbhat joined #gluster
09:53 jwd joined #gluster
10:04 ilbot3 joined #gluster
10:04 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
10:05 amud quit
10:06 amud help
10:12 kramdoss_ joined #gluster
10:35 itisravi joined #gluster
10:36 aravindavk joined #gluster
10:41 rastar joined #gluster
10:45 MikeLupe joined #gluster
10:53 kimmeh joined #gluster
10:57 poornima joined #gluster
10:57 Mmike joined #gluster
11:00 Wizek_ joined #gluster
11:05 karnan joined #gluster
11:06 MikeLupe2 joined #gluster
11:07 MikeLupe3 joined #gluster
11:09 k4n0_ joined #gluster
11:10 MikeLupe4 joined #gluster
11:10 MikeLupe4 joined #gluster
11:17 harish joined #gluster
11:24 hgowtham joined #gluster
11:24 skoduri joined #gluster
11:26 shubhendu joined #gluster
11:27 Muthu_ joined #gluster
11:37 prth joined #gluster
11:38 Agaron joined #gluster
11:40 nisroc joined #gluster
11:41 Agaron Hello! I saw the glusterd2 github page https://github.com/gluster/glusterd2/wiki/Design. I want to know whether this will be part of glusterfs or a separate project? Is it stable? Thanks for your help!
11:41 glusterbot Title: Home · gluster/glusterd2 Wiki · GitHub (at github.com)
11:41 sanoj joined #gluster
11:42 pdrakeweb joined #gluster
11:42 Philambdo joined #gluster
11:48 webmind left #gluster
11:49 Klas sounds like it might someday replace normal gluster
11:49 Klas or live side-by side
11:51 karnan joined #gluster
11:53 Agaron Yes. And I find this: https://github.com/gluster/glusterd2/wiki
11:53 glusterbot Title: Home · gluster/glusterd2 Wiki · GitHub (at github.com)
11:53 Agaron "GlusterD-2.0 is the next generation GlusterFS management daemon, being prepared for use with GlusterFS-4.0."
11:56 Philambdo joined #gluster
11:56 Agaron left #gluster
11:56 ndarshan joined #gluster
11:57 arcolife joined #gluster
12:08 unclemarc joined #gluster
12:10 Klas oh
12:11 Klas that is indeed kinda major
12:18 jwd joined #gluster
12:27 johnmilton joined #gluster
12:29 johnmilton joined #gluster
12:31 devyani7 joined #gluster
12:39 johnmilton joined #gluster
12:41 d0nn1e joined #gluster
12:45 plarsen joined #gluster
12:46 arcolife joined #gluster
12:46 prth_ joined #gluster
12:47 kkeithley joined #gluster
12:58 suliba joined #gluster
13:02 karnan joined #gluster
13:03 rauchrob joined #gluster
13:04 rauchrob Hey, are there any instructions for upgrading from gluster v3.7 to v3.8?
13:07 deniszh joined #gluster
13:22 Gambit15 Hey guys, I'm trying to work out the best way to minimise the risk of split-brain in a 2-node cluster, without it going RO in case one of the nodes fails. At the moment, I'm thinking the best way would be some sort of heartbeat pinger to the gateway, and I was wondering if there's already something in-built/written to do this?
13:25 jmeeuwen the 2 node cluster contains replicated bricks?
13:25 Gambit15 Hold on, I'm being daft - I need to build this into the service running on top of Gluster, not into the Gluster service
13:26 msvbhat joined #gluster
13:27 jmeeuwen like a 2-node set of servers clustered, using a volume off of a 2-node set of Gluster servers?
13:27 Gambit15 Yeah, rep 2. It's for an HA control cluster that'll manage the larger cluster - eggs in two baskets 'n all
13:27 arcolife joined #gluster
13:28 jmeeuwen if it's set up correctly (say, replica 2 over 2 bricks), then what is your worry about a split brain?
13:28 Gambit15 Like I said, this question isn't for this channel actually. The nodes'll be running some core VMs, so this HA needs to be built into the hypervisor's controls, not Gluster.
13:28 jmeeuwen the number 2, btw, is what should scare you the most -- there's no majority quorum to be achieved between only two nodes
13:28 Gambit15 Exactly!
13:28 kramdoss_ joined #gluster
13:29 jmeeuwen ok then
13:33 Gambit15 I think the best solution will be to force the VMs to always run on one node. If the active "master" node is unable to ping both its neighbour & the gateway, it'll fence itself. If the slave is able to communicate with the gateway, but not its neighbour, it'll become master & start the VMs.
13:33 cloph Gambit15: and you cannot just have another machine in the trusted pool? (doesn't need to carry data - but for once would help to maintain server quorum)
13:35 Gambit15 In this case, I need a way to inform gluster to always trust the "master" in the case of a split brain - the control servers won't have lots of I/O, so a couple of seconds of lost data isn't too much of a problem
13:35 cloph with exactly half the bricks online (and with the auto setting), the volume will survive if it is the first brick (as defined while creating the volume)
13:35 cloph but in replica volumes there is no such thing as "master" and "slave"
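(The "auto setting" cloph mentions is presumably the client-quorum option; if so, it is enabled per volume with something like:)
    gluster volume set <volume> cluster.quorum-type auto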
13:35 Gambit15 The problem is if that first brick is on the fenced node. It won't know
13:36 Gambit15 Yes, this master/slave thing is for the hypervisor & VMs that run off the Gluster volumes
13:37 Gambit15 Gluster can heal after a split, I just need to find the best way to get the hypervisor to RO itself ASAP when it realises it's fenced
13:37 Gambit15 (oVirt BTW)
13:37 cloph with only two machines, I'd say stay away from ovirt/automatic "enterprise grade" failover stuff.. It is too easy to shoot yourself in the foot with those...
13:37 cloph been there, done that...
13:37 Gambit15 Hmm
13:39 cloph things might have improved in the meantime, but with our backup back then everything was fine as long as everything worked; on failure the rebalancing itself caused major wreckage, isolating the hypervisor itself and causing ovirt to lose track of ~everything...
13:39 Gambit15 I've had some very good success with HA-Lizard, which runs over XenServer & DRBD. Not once had an issue in over 2 years
13:39 cloph not worth the hassle
13:39 cloph Of course if you really need automatic migration of machines, and you have plenty of resources on either of the fallbacks, then it shouldn't be *that* much of a problem.
13:40 Gambit15 But the new environment is oVirt, and I wanted to avoid having different HV platforms
13:41 cloph two servers is too few to make use of ovirt in a useful way would be my tl;dr summary :-)
13:42 Gambit15 If not oVirt, KVM (as that could share the oVirt documentation, without having to train people on two different environments).
13:42 cloph if you can separate gluster network from hypervisor/vm network, then it's easier
13:48 Gambit15 Here's the thing, I've got a whole rack of servers & I need to pull them into a single environment. I can spare 3 servers for core services (so a problem on the main stack won't completely take out access & control). The router/gateway/firewall is part of these services. My original idea was to have rep3 & then run the firewall & management VMs across them, however I need to make sure the firewall *never* goes down, so the simpler its design, the better.
13:49 cloph that msg got truncated for me after "the simpler its design, the better. W"
13:49 Gambit15 The firewall/gateway is FreeBSD. Put an arbiter on that, perhaps?
13:50 Gambit15 ...the simpler its design, the better. With this, I decided to put the firewall/gateway on baremetal & mirror it on a VM. That gives me a highly stable network, but  makes HA on the rest of the control group more complex
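If the arbiter idea were pursued, the arbiter brick is declared when the volume is created; a sketch with hypothetical host and brick paths (the third, arbiter brick holds only metadata):
    gluster volume create ctrlvol replica 3 arbiter 1 \
        glusterA:/bricks/ctrl glusterB:/bricks/ctrl firewall:/bricks/ctrl-arbiter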
13:51 cloph ah, so if the final system will consist of more than two servers, then it is much more sensible to use ovirt.
13:52 cloph with only two servers you cannot tell whether the firewall is not running, or whether you just cannot reach it..
13:53 jwd joined #gluster
13:53 Gambit15 However if the server can't communicate with both the firewall *and* its peer, then it's a good guess that it's the fenced node
13:54 skylar joined #gluster
13:55 Gambit15 The replication will be done via bonded ethernet links directly to each peer, bypassing a switch
13:55 prth joined #gluster
13:57 skoduri joined #gluster
13:57 Gambit15 if (gateway = up) and (peer = down), become master. if (gateway = down) and (peer = down), become slave. if (gateway = down) and (peer = up)...hmm
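A hypothetical shell sketch of the decision logic Gambit15 describes (hostnames and the promote/fence actions are placeholders, not a tested fencing implementation):
    #!/bin/sh
    GATEWAY=192.0.2.1        # hypothetical gateway/firewall address
    PEER=peer-node           # hypothetical peer hostname

    ping -c1 -W2 "$GATEWAY" >/dev/null 2>&1 && gw=up   || gw=down
    ping -c1 -W2 "$PEER"    >/dev/null 2>&1 && peer=up || peer=down

    if [ "$gw" = up ] && [ "$peer" = down ]; then
        echo "peer unreachable, gateway reachable: promote to master, start VMs"
    elif [ "$gw" = down ]; then
        echo "gateway unreachable: assume this node is the fenced one, stay/become slave"
    fi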
14:00 shaunm joined #gluster
14:01 archit_ joined #gluster
14:04 squizzi joined #gluster
14:09 shyam joined #gluster
14:14 hchiramm joined #gluster
14:16 ZachLanich joined #gluster
14:19 level7 joined #gluster
14:20 Muthu_ joined #gluster
14:23 nbalacha joined #gluster
14:24 bowhunter joined #gluster
14:26 ankitraj joined #gluster
14:35 prth joined #gluster
14:41 lpabon joined #gluster
14:45 dgandhi I have a few bricks that throw a "port already in use" error, and are not available in the volume status, is there a way to manually restart one brick/assign it a different port ?
14:46 cloph not sure about assigning a port manually, but you can kill the brick process and then do a volume start force
14:48 dgandhi so I kill it via PID, and then start force will bring it back up, presumably finding an available port?
14:50 cloph yes - but make sure it is really not in use - otherwise you bring it down for nothing...
14:51 dgandhi these are all redundant, presumably the access would fail over to another brick
14:54 dgandhi will the start force take the volume offline?
14:55 kimmeh joined #gluster
14:58 cloph no, start force won't bring it offline.
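Putting cloph's suggestion together, roughly (volume name is a placeholder; the brick PID comes from the status output):
    gluster volume status myvol           # note the PID of the affected brick
    kill <brick-pid>                      # stop just that brick process
    gluster volume start myvol force      # respawn the missing brick, picking a free port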
15:01 nblanchet joined #gluster
15:02 wushudoin joined #gluster
15:02 Bhaskarakiran joined #gluster
15:05 victori joined #gluster
15:09 aravindavk joined #gluster
15:10 aravindavk joined #gluster
15:13 MadPsy Is it possible/sensible to use the kernel NFSv4 server with gluster if v4 is required but Ganesha isn't yet a feasible option?
15:15 msvbhat joined #gluster
15:17 MadPsy s/sensible/feasible/
15:17 glusterbot What MadPsy meant to say was: Is it possible/feasible to use the kernel NFSv4 server with gluster if v4 is required but Ganesha isn't yet a feasible option?
15:17 JoeJulian MadPsy: not really. There are deadlocks that can happen.
15:18 MadPsy fair enough, I didn't think it was a good idea
15:19 prth joined #gluster
15:20 kkeithley do you mean use knfs to export a gluster fuse volume? If so, yes, what JoeJulian said
15:20 kkeithley why isn't ganesha an option?
15:22 MadPsy I did mean that, yes... it might be an option or rather is the only option so was more theoretical than anything
15:23 ankitraj joined #gluster
15:24 MadPsy I presume with ganesha you don't need to mount the gluster volume on said machine for ganesha to be able to export it.. in other words I'm assuming gluster has actual integration with ganesha via FSAL or something
15:25 JoeJulian correct
15:25 MadPsy nice
15:26 MadPsy thanks, I best get playing rather than asking questions
15:27 JoeJulian I recommend a combination. :)
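A rough sketch of the kind of nfs-ganesha export block that FSAL integration uses (hostname, volume and paths here are illustrative; check the FSAL_GLUSTER documentation for the full option set):
    EXPORT {
        Export_Id   = 1;
        Path        = "/glustervol0";
        Pseudo      = "/glustervol0";
        Access_Type = RW;
        FSAL {
            Name     = GLUSTER;
            Hostname = "localhost";
            Volume   = "glustervol0";
        }
    }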
15:30 hchiramm joined #gluster
15:34 MadPsy do we know if flock() works reliably with gluster + ganesha? It's one of the few reasons for requiring NFSv4
15:34 JoeJulian ndevos: ^
15:35 Wizek joined #gluster
15:35 nishanth joined #gluster
15:44 Bhaskarakiran joined #gluster
15:46 mreamy joined #gluster
15:49 ivan_rossi left #gluster
15:50 kpease joined #gluster
15:51 robb_nl joined #gluster
15:53 arcolife joined #gluster
15:54 jiffin joined #gluster
15:56 kpease joined #gluster
16:11 prth joined #gluster
16:22 ankitraj joined #gluster
16:28 RameshN joined #gluster
16:30 RameshN joined #gluster
16:32 msvbhat joined #gluster
16:50 tom[] joined #gluster
16:50 atinm joined #gluster
16:52 ieth0 joined #gluster
17:02 victori joined #gluster
17:02 atinm joined #gluster
17:13 squizzi joined #gluster
17:17 mreamy joined #gluster
17:18 ashiq joined #gluster
17:21 jiffin joined #gluster
17:21 mreamy joined #gluster
17:27 rafi joined #gluster
17:38 mhulsman joined #gluster
17:41 rastar joined #gluster
17:41 ashiq_ joined #gluster
17:51 squizzi_ joined #gluster
17:53 ashiq_ joined #gluster
17:54 Netwolf left #gluster
18:10 rastar joined #gluster
18:33 Pupeno joined #gluster
18:34 atinm joined #gluster
18:34 roost joined #gluster
18:41 ieth0 joined #gluster
18:42 Intensity joined #gluster
18:43 ankitraj joined #gluster
18:47 atinm joined #gluster
18:51 atinm joined #gluster
18:57 kimmeh joined #gluster
19:00 GandalfCorvotemp joined #gluster
19:00 Ganda998 joined #gluster
19:01 iopsnax joined #gluster
19:09 jiffin joined #gluster
19:16 mhulsman joined #gluster
19:16 rauchrob joined #gluster
19:19 deniszh joined #gluster
19:23 deniszh1 joined #gluster
19:32 johnmilton joined #gluster
19:51 deniszh joined #gluster
20:09 ieth0 joined #gluster
20:37 rafi joined #gluster
20:37 rafi joined #gluster
20:38 rafi joined #gluster
20:50 rafi joined #gluster
20:55 rafi joined #gluster
20:56 rafi1 joined #gluster
20:56 skylar1 joined #gluster
21:00 rafi joined #gluster
21:11 rafi joined #gluster
21:16 rafi joined #gluster
21:17 atinm joined #gluster
21:21 rafi joined #gluster
21:44 kpease joined #gluster
21:51 suliba joined #gluster
21:51 rafi joined #gluster
21:51 Mmike joined #gluster
21:52 sgryschuk1 joined #gluster
21:53 sgryschuk1 Hey! I have a question about setting up gluster on a kubernetes cluster, is this the place that could help?
22:02 derjohn_mob joined #gluster
22:04 Chinorro joined #gluster
22:05 sgryschuk1 When I `peer probe server2` from server 1, server 1 gets server 2's hostname in its pool list, but server 2 puts server1's IP ADDRESS in its pool list
22:06 sgryschuk1 the docs suggest that I should be able to `peer probe` from server 2 to server 1 to add server 1's hostname to the pool https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Storage%20Pools/
22:06 glusterbot Title: Storage Pools - Gluster Docs (at gluster.readthedocs.io)
22:06 sgryschuk1 but when I do that it just says that the peer already exists
22:06 tdasilva joined #gluster
22:09 JoeJulian sgryschuk1: It doesn't matter. It will work.
22:10 sgryschuk1 because this is running on kubernetes pods the ip addresses aren't static
22:10 JoeJulian Yeah, so build your volume using hostnames.
22:11 JoeJulian Also... the peer already existing... it should still add the hostname to the peer information.
22:12 sgryschuk1 when I try to build the volume with hostnames I receive 'Peer not in cluster state'
22:12 JoeJulian Sure, prove me wrong. ;)
22:13 JoeJulian Could you ,,(paste) peer status from both?
22:13 glusterbot For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
22:14 sgryschuk1 The hostname does get added to the Other names in the peer status
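For reference, the documented sequence sgryschuk1 is following (hostnames are placeholders):
    # on server1
    gluster peer probe server2
    # on server2 - adds server1's hostname to the peer info
    gluster peer probe server1
    # verify on both
    gluster peer status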
22:16 samikshan joined #gluster
22:18 shyam joined #gluster
22:20 rafi1 joined #gluster
22:25 sgryschuk1 JoeJulian: These are the commands I'm running and the end result. http://pastebin.com/utjumxz0
22:25 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
22:25 rafi joined #gluster
22:26 sgryschuk1 or: https://paste.fedoraproject.org/431189/
22:26 glusterbot Title: #431189 • Fedora Project Pastebin (at paste.fedoraproject.org)
22:27 JoeJulian Thanks. That little 934pixel window is irksome.
22:32 fang64 joined #gluster
22:33 JoeJulian Could you either start glusterd in debug mode (--debug) which makes it run in the foreground, or enable debug logging in /etc/glusterfs/glusterd.vol? Also, grab /etc/glusterfs/glusterd.log (the whole thing) from both servers please (assuming it's empty at the start of this process).
22:33 glusterbot JoeJulian: ('s karma is now -162
22:33 JoeJulian Poor (
22:33 JoeJulian (++
22:33 glusterbot JoeJulian: ('s karma is now -161
22:43 sgryschuk1 I don't have the file /etc/glusterfs/glusterd.log, is that the correct path? or will that be populated once it's running in debug mode?
22:45 sgryschuk1 @paste
22:45 glusterbot sgryschuk1: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
22:49 fang64 joined #gluster
22:50 JoeJulian Meh, sorry, /var/log/glusterfs/glusterd.log
22:53 sgryschuk1 logs from server1: https://paste.fedoraproject.org/431195/ and the logs from server2: https://paste.fedoraproject.org/431196/
22:53 glusterbot Title: #431195 • Fedora Project Pastebin (at paste.fedoraproject.org)
22:56 JoeJulian sgryschuk1: can both servers resolve both hostnames?
22:56 sgryschuk1 if it helps, the hostname of server 1 is not the hostname we are actually accessing. Because it's in kubernetes we are routing through an internal service that has its own hostname
22:56 JoeJulian Hmm
22:56 JoeJulian Well server1 wants to be able to resolve gluster-fs-int-svc-1 to an ip address it has.
22:57 JoeJulian Otherwise it doesn't know that's who it is.
22:57 caitnop joined #gluster
22:58 sgryschuk1 ping gluster-fs-int-svc-1 // PING gluster-fs-int-svc-1.default.svc.cluster.local (10.255.245.82): 56 data bytes
22:59 sgryschuk1 server 1 is resolving to the service ip address, which would then route back to itself
22:59 kimmeh joined #gluster
22:59 JoeJulian It has to resolve to an ip address shown with the command 'ip addr show'
23:01 sgryschuk1 ah, it currently doesn't :/
23:03 sgryschuk1 is the ip address it's looking for set up in a config file that I could change?
23:06 JoeJulian /etc/hosts?
23:13 sgryschuk1 how does gluster convert the hostname to a uuid? Is it a static lookup?
23:13 rafi joined #gluster
23:22 sgryschuk1 JoeJulian: setting 127.0.0.1 -> gluster-fs-int-svc-[server#] on each pod worked
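i.e. an /etc/hosts entry along these lines in each pod, mapping the pod's own service name to a local address (names taken from sgryschuk1's setup):
    # in /etc/hosts on the pod behind gluster-fs-int-svc-1
    127.0.0.1   gluster-fs-int-svc-1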
23:22 sgryschuk1 thanks!
23:23 JoeJulian You're welcome.
23:24 jeremyh joined #gluster
23:41 lanning joined #gluster
23:44 Ramereth joined #gluster
23:46 pocketprotector joined #gluster
23:47 eryc joined #gluster
23:49 plarsen joined #gluster
23:54 dataio joined #gluster
