
IRC log for #gluster, 2015-10-29

| Channels | #gluster index | Today | | Search | Google Search | Plain-Text | summary

All times shown according to UTC.

Time Nick Message
00:15 mjrosenb ahh, I figured out what was wrong; I removed the link after gluster started, and had already stat'ed that file
00:15 mjrosenb it only performs the heal the first time it goes to look at the file
00:15 mjrosenb or after the gfid gets removed from its cache.
00:21 JoeJulian I guess that makes sense.
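The behaviour mjrosenb describes (a heal is only attempted when the client first looks up a file, or after its gfid falls out of cache) is why a full tree crawl through a client mount is a common way to kick off healing. A minimal sketch; MOUNT is a placeholder you would point at your glusterfs client mount:

```shell
#!/bin/sh
# Stat every file through the fuse mount to force a fresh lookup,
# which triggers the self-heal check described above.
# MOUNT is an assumption; set it to your glusterfs mount point.
MOUNT="${MOUNT:-.}"
find "$MOUNT" -noleaf -print0 | xargs -0 stat > /dev/null
```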
00:24 marlinc joined #gluster
00:25 johndescs_ joined #gluster
00:35 rideh joined #gluster
00:55 mlhamburg_ joined #gluster
01:18 zhangjn joined #gluster
01:19 jonfatino pff also remember that cloudflare is a pure nginx reverse proxy... it is NOT a cdn
01:25 doekia joined #gluster
01:26 Lee1092 joined #gluster
01:30 zhangjn joined #gluster
01:38 julim joined #gluster
01:43 dlambrig_ joined #gluster
01:50 EinstCrazy joined #gluster
01:51 Humble joined #gluster
01:52 jonfatino [2015-10-29 01:51:53.713651] E [name.c:147:client_fill_address_family] 0-glusterfs: transport.address-family not specified. Could not guess default value from (remote-host:(null) or transport.unix.connect-path:(null)) options
01:52 jonfatino I keep getting these errors... why?
02:00 gem joined #gluster
02:23 shortdudey123 joined #gluster
02:24 doekia joined #gluster
02:45 nangthang joined #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:48 harish joined #gluster
02:51 doekia joined #gluster
02:52 dblack joined #gluster
02:53 gem joined #gluster
03:13 gildub joined #gluster
03:14 JoeJulian joined #gluster
03:19 Saravana_ joined #gluster
03:31 doekia joined #gluster
03:38 shubhendu joined #gluster
03:39 stickyboy joined #gluster
03:43 nishanth joined #gluster
03:55 nbalacha joined #gluster
03:56 sakshi joined #gluster
04:05 RameshN joined #gluster
04:10 overclk joined #gluster
04:11 neha_ joined #gluster
04:14 gem joined #gluster
04:32 ppai joined #gluster
04:35 vk|lavi joined #gluster
04:35 doekia joined #gluster
04:38 bharata-rao joined #gluster
04:41 TheSeven joined #gluster
04:42 jiffin joined #gluster
04:53 rafi joined #gluster
04:53 kotreshhr joined #gluster
05:03 Saravana_ joined #gluster
05:04 poornimag joined #gluster
05:05 jatb joined #gluster
05:05 neha_ joined #gluster
05:06 pppp joined #gluster
05:08 ndarshan joined #gluster
05:09 theron joined #gluster
05:12 kovshenin joined #gluster
05:15 ashiq joined #gluster
05:16 hgowtham joined #gluster
05:20 kdhananjay joined #gluster
05:22 julim joined #gluster
05:32 overclk joined #gluster
05:40 Manikandan joined #gluster
05:41 Bhaskarakiran joined #gluster
05:42 overclk joined #gluster
05:43 anil_ joined #gluster
05:44 Nagaprasad joined #gluster
05:44 kovsheni_ joined #gluster
05:45 atalur joined #gluster
05:46 Humble joined #gluster
05:46 hagarth joined #gluster
06:00 vk|lavi joined #gluster
06:01 jtux joined #gluster
06:06 hagarth joined #gluster
06:08 ramteid joined #gluster
06:09 kovshenin joined #gluster
06:32 DV__ joined #gluster
06:33 kanagaraj joined #gluster
06:39 mbukatov joined #gluster
06:41 bhuddah joined #gluster
06:45 rjoseph joined #gluster
06:50 ramky joined #gluster
06:50 anil_ joined #gluster
06:51 LebedevRI joined #gluster
06:56 ramteid joined #gluster
07:04 julim joined #gluster
07:07 ramky joined #gluster
07:09 DV joined #gluster
07:10 theron joined #gluster
07:13 nangthang joined #gluster
07:17 spalai joined #gluster
07:18 frozengeek joined #gluster
07:22 mhulsman joined #gluster
07:45 pff__ joined #gluster
07:46 pff__ is there any way I can determine a file from a gfid?
07:53 mhulsman joined #gluster
08:06 DV__ joined #gluster
08:10 DV joined #gluster
08:14 SOLDIERz joined #gluster
08:15 Philambdo joined #gluster
08:25 night joined #gluster
08:25 ivan_rossi joined #gluster
08:30 mlhamburg joined #gluster
08:30 Trefex joined #gluster
08:33 arcolife joined #gluster
08:34 fsimonce joined #gluster
08:36 pff__ @all: split brain tells me that there's 1 entry, it gives me a gfid but no file name, gfid-resolver.sh doesn't give me the filename either; is it safe to delete the gfid from .glusterfs?
08:36 ctria joined #gluster
08:40 kovshenin joined #gluster
08:48 paratai_ joined #gluster
08:50 Trefex1 joined #gluster
08:50 jiffin pff__: http://gluster.readthedocs.org/en/latest/Troubleshooting/gfid-to-path/?highlight=gfid%20to%20path will be helpful
08:50 glusterbot Title: gfid to path - Gluster Docs (at gluster.readthedocs.org)
08:51 jiffin pff__: it is not safe to delete the gfid from .glusterfs
08:53 mhulsman joined #gluster
08:55 [Enrico] joined #gluster
08:57 ramky joined #gluster
08:58 jiffin pff__: all the operations on server side may use the path under the .glusterfs(based on the client)
08:58 vmallika joined #gluster
08:59 pff__ jiffin: I tried "Get file path from GFID (Method 3)" from that link, nothing
08:59 jiffin pff__: what about one and two
09:01 jiffin pff__: if u want to recover file from split brain this link may help http://gluster.readthedocs.org/en/latest/Troubleshooting/split-brain/?highlight=split%20brain%20resolution
09:01 glusterbot Title: Split Brain - Gluster Docs (at gluster.readthedocs.org)
09:01 overclk joined #gluster
09:05 pff__ jiffin: sorry, juggling kids breakfast time ...
09:06 cliluw joined #gluster
09:07 prg3 joined #gluster
09:09 Pupeno joined #gluster
09:12 theron joined #gluster
09:16 DV joined #gluster
09:24 aravindavk joined #gluster
09:29 kovshenin joined #gluster
09:30 vikki joined #gluster
09:30 Pupeno joined #gluster
09:31 ramky_ joined #gluster
09:34 Sunghost joined #gluster
09:34 vikki left #gluster
09:34 pff__ jiffin: do you know of anyone / any organisation who might be able to help me on a commercial basis, I'm really stuck!
09:35 pff__ jiffin: we're all hosted on AWS if that helps narrow it down
09:35 vikki joined #gluster
09:38 vikki joined #gluster
09:40 Sunghost Hi, I have a 2-node distributed volume and must copy files directly from the old volume's brick dir to the new mounted volume; in the volume log I see the error "length:4 error: operation not supported". What does this mean?
09:41 stickyboy joined #gluster
09:42 vikki joined #gluster
09:45 vikki joined #gluster
09:45 pff__ jiffin: https://paste.ee/p/hZrw5
09:45 glusterbot Title: Paste.ee - View paste hZrw5 (at paste.ee)
09:48 julim joined #gluster
09:50 kovshenin joined #gluster
09:52 spalai joined #gluster
09:54 DV joined #gluster
09:58 poornimag joined #gluster
10:01 pff__ jiffin: I've found the file: /data/gluster/brick/.glusterfs/f3/96/f396701c-1b5f-4e94-8e07-d17204392e60 it's a hefty 1705955319 bytes
10:01 pff__ getfattr doesn't return anything for that file
10:02 rafi1 joined #gluster
10:03 ndevos pff__: how many links does that file have when you check with "ls -l /data/gluster/brick/.glusterfs/f3/96/f396701c-1b5f-4e94-8e07-d17204392e60" ?
10:04 jiffin1 joined #gluster
10:04 ndevos pff__: also http://gluster.org/consultants/ contains a few companies, but there are more, including proxy.nl
10:04 glusterbot Title: Professional Support Gluster (at gluster.org)
10:09 pff__ ndevos: -rw-rw-r--. 1 1002 1003 1705955319 Oct 28 17:23 /data/gluster/brick/.glusterfs/f3/96/f396701c-1b5f-4e94-8e07-d17204392e60
10:09 glusterbot pff__: -rw-rw-r's karma is now -2
10:10 ndevos pff__: looks like there is only one link to that inode, no other hardlinks, so that contents/gfid does not have an other filename
10:11 pff__ ndevos: I did delete a file off the bricks yesterday, it was about that size
10:11 pff__ (I panicked)
10:11 ndevos pff__: I'm not sure what you're trying to solve, but I guess that a file got deleted, but the gfid-hardlink was not
10:11 pff__ so can I delete it?
10:12 pff__ or heal it?
10:12 ndevos pff__: well, if you do not need the data anymore, yeah, you can delete it - or inspect the file, the contents are the same as the file you deleted yesterday
10:13 hagarth joined #gluster
10:14 pff__ ndevos: the contents are as I was expecting, it's a large log file
10:14 pff__ ndevos: do I just rm it off both boxes?
10:16 poornimag joined #gluster
10:19 ppai joined #gluster
10:20 pgreg joined #gluster
10:20 ashiq joined #gluster
10:20 pgreg joined #gluster
10:26 ndevos pff__: if you really intend to delete the contents, then deleting the file where the link-count == 1 would do the trick
10:32 pff__ @ndevos: forgive me, link-count? I'm not getting any output at all from getfattr
10:33 ndevos pff__: link-count is the 2nd column in the "ls -l" output
10:33 pff__ ndevos: oh right, never knew that!
10:34 pff__ @ndevos; so I'd need to do that on both boxes?
10:35 ndevos pff__: yeah, on both replicas
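The rule ndevos gives (safe to remove only when the gfid entry is the sole remaining name for the inode) can be wrapped in a small guard and run against the same path on each replica. A sketch; the helper name is invented:

```shell
#!/bin/sh
# Remove a gfid entry only if nothing else links to its inode.
# Link count 1 means the user-visible filename is already gone,
# so only the orphaned .glusterfs hardlink remains.
rm_stale_gfid() {
    links=$(stat -c %h "$1")
    if [ "$links" -eq 1 ]; then
        rm -- "$1"
    else
        echo "refusing: $1 still has $links links" >&2
        return 1
    fi
}
```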
10:39 EinstCrazy joined #gluster
10:43 nangthang joined #gluster
10:48 Bhaskarakiran joined #gluster
10:52 spalai joined #gluster
11:00 spalai joined #gluster
11:00 poornimag joined #gluster
11:01 pff__ ndevos: thanks, got one of my sys admins to increase the size of the volume so am waiting for that work to finish, after that I'll remove the file and check split-brain
11:05 rafi joined #gluster
11:09 arcolife joined #gluster
11:10 jiffin1 joined #gluster
11:13 theron joined #gluster
11:18 ppai joined #gluster
11:19 gem joined #gluster
11:22 Manikandan joined #gluster
11:24 Sunghost joined #gluster
11:24 Sunghost Hi, after upgrading from 3.5->3.6 I get "nfs server not responding" messages and the system hangs while copying files.
11:25 Sunghost Now I read about nfs-ganesha. Is that new, and does it replace the normal NFS in glusterfs?
11:25 vmallika1 joined #gluster
11:26 Sunghost If I use NFS to mount on clients, do I install the NFS server on all servers, or do they work just as clients?
11:35 harish_ joined #gluster
11:42 jiffin1 joined #gluster
11:48 jiffin Yes, nfs-ganesha is introduced in 3.7; the gluster nfs supports only v3, but ganesha supports v3, v4, v4.1 and pNFS
11:58 jiffin the high-availability nfs-server cluster is introduced with ganesha
12:08 jwd joined #gluster
12:17 rjoseph joined #gluster
12:20 unclemarc joined #gluster
12:35 kkeithley Sunghost:  gnfs (or gluster nfs) is the default through 3.7. As jiffin mentioned, it is only NFSv3.   nfs-ganesha is optional. Eventually nfs-ganesha will become the default and recommended solution and gnfs will be deprecated.
12:36 kkeithley For our friends who use gnfs, don't worry. I don't envision removing it from the source.
12:38 ira joined #gluster
12:53 gem joined #gluster
12:54 kovshenin joined #gluster
12:58 overclk joined #gluster
12:59 shyam joined #gluster
13:00 k-ma do i need to somehow manually update the "glusterd.info: operating-version=N" after gluster upgrade? it's still 2 after 3.4 -> 3.6 update
13:00 k-ma or is it anything to worry about
13:01 jdarcy joined #gluster
13:03 mhulsman1 joined #gluster
13:04 deniszh joined #gluster
13:05 kdhananjay joined #gluster
13:06 julim joined #gluster
13:06 rjoseph joined #gluster
13:07 shubhendu joined #gluster
13:08 spalai joined #gluster
13:09 ctria joined #gluster
13:09 gothos joined #gluster
13:09 k-ma should i run gluster volume set all cluster.op-version 30606? (for 3.6.6)
13:10 gothos Hello! I've a replica 2 with glusterfs 3.7 (was 3.6) and am having a lot of files listed under 'volume foo heal info', most are not in split-brain.
13:10 gothos When I open the corresponding gfid in the .glusterfs directory it's often empty.
13:10 gothos ie. an empty file.
13:10 gothos How do I proceed with that?
13:16 mpietersen joined #gluster
13:17 mpietersen joined #gluster
13:17 RameshN joined #gluster
13:21 jiffin joined #gluster
13:21 jwaibel joined #gluster
13:22 atalur joined #gluster
13:28 skylar joined #gluster
13:29 GB21 joined #gluster
13:36 hagarth joined #gluster
13:37 hchiramm joined #gluster
13:47 pff__ jiffin, ndevos: I have removed the gfid file and now split brain says no entries in split brain, thanks
13:49 chirino joined #gluster
13:52 atalur joined #gluster
13:52 RameshN joined #gluster
13:53 Trefex joined #gluster
14:06 overclk joined #gluster
14:08 EinstCrazy joined #gluster
14:09 jmarley joined #gluster
14:10 dgandhi joined #gluster
14:20 maserati joined #gluster
14:34 jobewan joined #gluster
14:38 theron joined #gluster
14:40 a_ta joined #gluster
14:48 Slashman joined #gluster
14:50 tomatto joined #gluster
14:50 a_ta Is it possible to setup a trusted storage pool between an Ubuntu node and a CentOS node? Peer probe returns "Error: Request timed out"
14:50 jiffin joined #gluster
14:52 aravindavk joined #gluster
14:55 ayma joined #gluster
15:01 coredump joined #gluster
15:10 gothos Interesting. The issues with multiple networks on the gluster server still haven't been fixed
15:17 overclk joined #gluster
15:22 shubhendu joined #gluster
15:24 Trefex joined #gluster
15:32 Trefex1 joined #gluster
15:41 stickyboy joined #gluster
15:58 overclk_ joined #gluster
16:01 cholcombe joined #gluster
16:08 jiffin joined #gluster
16:09 zhangjn joined #gluster
16:10 zhangjn joined #gluster
16:16 JoeJulian gothos: wait for the heal to complete maybe?
16:16 JoeJulian gothos: And what "issues" with multiple networks? It works just fine.
16:18 JoeJulian k-ma: If you need a higher op-version it /should/ negotiate one automatically. If you try to do some operation that reports an op-version error, then go ahead and update it with volume set.
16:19 a_ta left #gluster
16:20 calavera joined #gluster
16:21 gothos JoeJulian: The part with binding to interfaces, that is also a Linux problem tho
16:21 JoeJulian I have no problem with that. It binds to all interfaces.
16:21 k-ma JoeJulian: i did cluster.op-version'ed it to 30603 as it apparently is the max 3.6.6 can do. Probably no harm as all my clients are 3.6.6
16:22 k-ma JoeJulian: shouldn't it have bumped it up tho as the afrv2 was introduced in 3.6 and it aint backwards compatible
16:22 JoeJulian Did you enable afrv2?
16:23 k-ma ohh, i need to enable it? :)
16:23 k-ma thought it replaced the old one
16:23 JoeJulian Also, are you sure? I thought it wasn't in yet.
16:24 k-ma > If you are using GlusterFS replication ( < 3.6) in your setup , please note that the new afrv2 implementation is only compatible with 3.6 GlusterFS clients.
16:24 k-ma that was in the upgrade docs
16:24 k-ma and iirc release notes too, so i think it is in
16:27 JoeJulian Ah, I was confusing that with nsr.
16:29 k-ma How can i access the meta xlator data? apparently it came in 3.6 too, but haven't figured out how to get in to the /.meta/nnn
16:30 k-ma found some info in 3.7 docs, but they say the meta xlator is loaded automatically. No .meta on my mounts tho
16:34 JoeJulian It doesn't show up in a directory listing.
16:34 JoeJulian But if you ls .meta there it is.
16:35 ChrisNBlum joined #gluster
16:35 k-ma JoeJulian: damn, you're totally right. pretty sure i tried it but guess not. sorry, and thanks :)
16:35 JoeJulian Wow, that's going to be handy. Why didn't I know about .meta before. :D
16:36 k-ma yea piqued my interest too
16:36 k-ma want to see what can i build on top of it
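For reference, the trick JoeJulian and k-ma land on: .meta is a virtual tree served by the meta xlator on the fuse mount, hidden from directory listings, so it must be named explicitly. The mount path is a placeholder and the entries shown follow the 3.7 docs:

```shell
# .meta never appears in a plain `ls /mnt/gluster`, but addressing it works:
ls /mnt/gluster/.meta
cat /mnt/gluster/.meta/version
ls /mnt/gluster/.meta/graphs/active    # xlators loaded in the current graph
```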
16:36 gothos JoeJulian: Yes, it does. The thing is that it shouldn't
16:37 gothos And that it requires more than one network in some configurations even tho the peers are only discovered via one network
16:39 jwd joined #gluster
16:41 JoeJulian gothos: I still don't see the issue.
16:42 JoeJulian gothos: It's been brought up before by others, but nobody's been able to make a compelling argument why this is an issue. I'd be interested in listening (and some light arguing) if you can.
16:43 Dragotha joined #gluster
16:44 gothos JoeJulian: two networks, one internal, one external. the gluster peer hostnames are in the internal network only, but the systems can communicate via both networks. we are thereby assuming that all gluster communication should be restricted to the internal network
16:44 gothos but it isn't, which gives very weird error messages
16:44 JoeJulian Correct. If the hostnames resolve to the address on the internal network, the communications will be on the internal network.
16:46 JoeJulian It couldn't be otherwise because it wouldn't have any other address. It's going to follow standard ip routing.
16:46 kotreshhr joined #gluster
16:47 ayma1 joined #gluster
16:47 gothos Yes, but assuming only that one interface with the internal network is used and configuring iptables to reject anything else and only allowing data from that network _on that nic_ will kill glusterfs
16:47 gothos "kill" like make it unusable
16:48 Intensity joined #gluster
16:48 overclk joined #gluster
16:48 JoeJulian Then you have a network problem. I can do just as you're describing with no problem.
16:48 JoeJulian In fact, that's the standard way I do it.
16:49 RedW joined #gluster
16:50 JoeJulian I have gluster servers on openstack vms with public and private addresses. I always iptables firewall the public addresses and use the private network for gluster.
16:51 JoeJulian Do you use short hostnames or fqdn?
16:52 atalur joined #gluster
16:53 kotreshhr joined #gluster
16:53 kotreshhr left #gluster
17:02 rafi joined #gluster
17:03 frozengeek joined #gluster
17:08 Humble joined #gluster
17:09 rafi1 joined #gluster
17:11 spalai joined #gluster
17:14 rafi joined #gluster
17:15 overclk joined #gluster
17:16 rafi joined #gluster
17:19 Rapture joined #gluster
17:21 rafi joined #gluster
17:28 rafi joined #gluster
17:31 rafi joined #gluster
17:39 nangthang joined #gluster
17:39 jwd joined #gluster
17:44 jwaibel joined #gluster
17:45 rafi joined #gluster
17:50 gothos JoeJulian: I'm using short hostnames. Interesting. Are you also blocking based on interface?
17:57 JoeJulian yes
18:00 JoeJulian gothos: yes I am. I also use short hostnames. I have used either of /etc/host entries as well as dns entries resolved using the domain in the search parameter in resolv.conf. My hostnames resolve to only one ip address (dig hostname) being the internal address.
18:02 gothos Yes,same here. One IP per hostname. I'll investigate again then
18:06 JoeJulian gothos: Good luck. I hope you find it.
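The setup JoeJulian describes (firewall the public side, serve gluster only on the private network) might look like this in iptables. Interface and subnet are placeholders, and the port ranges are the usual defaults (24007-24008 for management, bricks from 49152 on recent releases); check your brick ports with `gluster volume status` before copying this:

```shell
# accept gluster traffic only from the internal subnet on the internal NIC
iptables -A INPUT -i eth1 -s 10.0.0.0/24 -p tcp --dport 24007:24008 -j ACCEPT
iptables -A INPUT -i eth1 -s 10.0.0.0/24 -p tcp --dport 49152:49200 -j ACCEPT
# reject the same ports everywhere else (e.g. on the public interface)
iptables -A INPUT -p tcp --dport 24007:24008 -j REJECT
iptables -A INPUT -p tcp --dport 49152:49200 -j REJECT
```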
18:09 theron joined #gluster
18:12 theron joined #gluster
18:22 spalai left #gluster
18:22 a2 joined #gluster
18:36 atalur joined #gluster
18:38 shaunm joined #gluster
18:45 coredump joined #gluster
18:46 ivan_rossi left #gluster
18:51 klaxa joined #gluster
19:00 theron joined #gluster
19:08 SOLDIERz joined #gluster
19:15 bowhunter joined #gluster
19:17 mhulsman joined #gluster
19:20 jwaibel joined #gluster
19:34 theron joined #gluster
19:49 mhulsman joined #gluster
19:55 David_Vargese joined #gluster
19:56 EinstCra_ joined #gluster
19:59 kovshenin joined #gluster
19:59 calavera joined #gluster
20:03 ctria joined #gluster
20:09 theron joined #gluster
20:12 skylar joined #gluster
20:35 theron joined #gluster
20:44 shaunm joined #gluster
20:48 dlambrig_ joined #gluster
20:53 ahodgson joined #gluster
20:59 mhulsman joined #gluster
21:00 mhulsman1 joined #gluster
21:00 mhulsman joined #gluster
21:01 calavera joined #gluster
21:07 pff__ so I know that gluster has lots of options for tuning performance, does it produce any metrics by which you can judge what might need to be tweaked?
21:08 pff__ is it trial and error or is there a more scientific approach
21:08 theron joined #gluster
21:11 dlambrig_ joined #gluster
21:14 mhulsman joined #gluster
21:26 theron joined #gluster
21:38 Dragotha joined #gluster
21:40 stickyboy joined #gluster
21:44 theron joined #gluster
21:49 calavera joined #gluster
21:52 jonfatino JoeJulian: once again my .glusterfs folder is filling up :-( I don't know why, and there are only folders on the brick, no real data. https://paste.ee/r/OtyV3
21:56 JoeJulian du -ax .glusterfs | sort -n | tail
21:57 JoeJulian That'll tell you where the space is going.
22:00 jonfatino JoeJulian: https://paste.ee/r/0WRAb  not sure why the raw files are not being placed in there
22:05 JoeJulian Meh, that didn't help like I'd hoped. May have to lengthen that tail.
22:07 JoeJulian Or do your df of .glusterfs/cf to see which subdirectory if *it* is the biggest and narrow it down.
22:07 JoeJulian s/if/of/
22:07 glusterbot What JoeJulian meant to say was: Or do your df of .glusterfs/cf to see which subdirectory of *it* is the biggest and narrow it down.
22:07 DV joined #gluster
22:08 JoeJulian point being, figure out what's using up space then figure out if it's legit.
22:11 pff__ joined #gluster
22:14 jonfatino JoeJulian: this one's going to play with your mind :-) it's all bs really. https://paste.ee/r/RBdFC
22:15 JoeJulian I also forgot they changed df to default to -h.
22:15 * JoeJulian grumbles.
22:16 JoeJulian I mean I've been typing the same command for over a decade, now I have to remember to -k! Grr.
22:18 jonfatino So as you can see each of these dirs is junk, and if you look all the way into one of them (example -  /gluster/.glusterfs/ac/6d# ls -lash|grep M)   I get 100s of small files and one large one.      47M -rw-r--r--   2 root tssadmin  47M Apr 17  2012 ac6d95a2-c28b-4f8a-838c-fb1728701073
22:18 glusterbot jonfatino: -rw-r--r's karma is now -18
22:21 JoeJulian So that doesn't sound all that abnormal. How much space is in use on that brick?
22:24 gildub joined #gluster
22:34 ira joined #gluster
22:50 jonfatino lol JoeJulian, upon investigation you're going to love this...
22:50 jonfatino https://paste.ee/r/01S2F
22:50 jonfatino funny thing is the files are actually loading when called by the gluster client / nfs mount so the data is there
22:51 jonfatino however I want to pull that data directly from the bricks and not use the gluster client (for performance nginx cdn stuff)
22:51 JoeJulian Do you du with --apparent-size
22:52 theron joined #gluster
22:52 JoeJulian And yeah, that's probably why Pranith deprecated the replace-brick start (which it should have told you when you tried it).
22:58 jonfatino JoeJulian: so how do I get these files outside of .glusterfs and into the raw brick?
22:58 JoeJulian Try rephrasing that. I don't know what you're asking.
22:59 JoeJulian The files in .glusterfs *are* the files on the brick.
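JoeJulian's point can be demonstrated with plain files: a hardlink is just a second name for the same inode, so the gfid entry and the brick path are literally the same data, and the space is only counted once. A self-contained sketch with made-up names:

```shell
#!/bin/sh
# simulate a brick path plus its gfid-style hardlink
tmp=$(mktemp -d)
echo mp3-bytes > "$tmp/song.mp3"
mkdir -p "$tmp/.glusterfs/ac/6d"
ln "$tmp/song.mp3" "$tmp/.glusterfs/ac/6d/fake-gfid"
stat -c %h "$tmp/song.mp3"     # prints 2: two names, one inode
cmp -s "$tmp/song.mp3" "$tmp/.glusterfs/ac/6d/fake-gfid" && echo same-data
```

Serving reads to nginx straight off the brick works precisely because of this, though reading bricks directly bypasses gluster's consistency checks.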
22:59 jonfatino So I want nginx to run directly from the brick and let gluster do the replication; nginx will read only and never write. So I need all these raw mp3 files and whatnot inside /gluster/brick/ etc
23:00 JoeJulian And where are they?
23:12 calavera joined #gluster
23:19 julim joined #gluster
23:21 frozengeek joined #gluster
23:23 julim joined #gluster
23:36 gbox joined #gluster
23:36 pff__ what's the best way to safely take down a gluster node, I need to resize the instance
23:36 pff__ stop the service?
23:37 JoeJulian systemctl stop glusterd
23:37 JoeJulian pkill -f glusterfsd
23:37 JoeJulian actually, I think you can
23:37 JoeJulian systemctl stop glusterfsd
23:37 JoeJulian unless you're on other of those silly distros that doesn't use systemd.
23:37 pff__ mp
23:37 pff__ no
23:38 pff__ ok, if I do that, stop the server, create an image of the server and bring it back as a larger box, how would I reconnect everything to the new private IP?
23:38 JoeJulian basically, as long as the tcp connections close before the network is stopped, you should be good.
23:38 JoeJulian You use ,,(hostnames) when you create your volume.
23:38 glusterbot Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
23:39 pff__ each server has entries in /etc/hosts x.x.x.x fileserver01, x.x.x.y fileserver02
23:40 JoeJulian So then you update hosts.
23:41 pff__ on both gluster nodes; do I need to do anything special with the other node?
23:41 pff__ restart glusterfsd on that box or will it pick up the hosts change?
23:44 JoeJulian It /should/ pick up the change. Watch your logs.
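The takedown-and-resize dance above, condensed into one hedged recipe; systemd unit names vary by distro, and on some installs the brick processes need a pkill:

```shell
# 1. quiesce the node: stop management, then brick processes, so TCP
#    connections close cleanly before the network goes away
systemctl stop glusterd
systemctl stop glusterfsd 2>/dev/null || pkill glusterfsd
# 2. resize / re-image the instance
# 3. put the new internal IP in /etc/hosts on every node; peers should
#    pick the change up, but watch the glusterd logs to confirm
systemctl start glusterd
```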
23:50 pff__ systemctl stop glusterd has not stopped it
23:50 pff__ kill it?
23:52 pff__ killed
23:55 shyam joined #gluster
23:55 pff__ JoeJulian: imaging box
