
IRC log for #gluster, 2017-05-04


All times shown according to UTC.

Time Nick Message
00:52 ankitr joined #gluster
00:57 daMaestro joined #gluster
00:58 gem joined #gluster
00:59 shdeng joined #gluster
01:14 vbellur joined #gluster
01:16 ankitr joined #gluster
01:47 ankitr joined #gluster
01:47 cholcombe joined #gluster
01:49 ilbot3 joined #gluster
01:49 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:35 vinurs joined #gluster
02:45 kramdoss_ joined #gluster
02:52 masber joined #gluster
02:58 farhorizon joined #gluster
03:03 john51 joined #gluster
03:03 ccha3 joined #gluster
03:04 d4n13L_ joined #gluster
03:05 billputer_ joined #gluster
03:05 Jules- joined #gluster
03:05 Gambit15 joined #gluster
03:07 jackhill_ joined #gluster
03:09 shruti joined #gluster
03:10 kaushal_ joined #gluster
03:10 k0nsl joined #gluster
03:10 k0nsl joined #gluster
03:10 pocketprotector- joined #gluster
03:12 serg_k joined #gluster
03:12 Chinorro joined #gluster
03:15 prasanth joined #gluster
03:16 Telsin joined #gluster
03:17 Telsin left #gluster
03:17 riyas joined #gluster
03:18 mdavidson joined #gluster
03:18 rastar_afk joined #gluster
03:19 arpu joined #gluster
03:23 tdasilva joined #gluster
03:45 itisravi joined #gluster
03:45 skoduri joined #gluster
03:58 nbalacha joined #gluster
04:05 Telsin joined #gluster
04:09 atinm joined #gluster
04:11 masber joined #gluster
04:11 k0nsl joined #gluster
04:11 k0nsl joined #gluster
04:27 Humble joined #gluster
04:38 gem joined #gluster
04:44 jiffin joined #gluster
04:46 msvbhat joined #gluster
04:49 sanoj joined #gluster
04:52 ankitr joined #gluster
04:59 skumar joined #gluster
05:08 aravindavk joined #gluster
05:09 Prasad joined #gluster
05:11 karthik_us joined #gluster
05:12 farhorizon joined #gluster
05:13 portdirect joined #gluster
05:13 gyadav__ joined #gluster
05:14 ankitr joined #gluster
05:15 nbalacha joined #gluster
05:17 buvanesh_kumar joined #gluster
05:21 ppai joined #gluster
05:22 ndarshan joined #gluster
05:29 jiffin joined #gluster
05:33 ankitr joined #gluster
05:37 jiffin1 joined #gluster
05:37 Karan joined #gluster
05:40 skumar joined #gluster
05:41 amarts joined #gluster
05:48 msvbhat joined #gluster
05:55 hgowtham joined #gluster
05:57 ppai joined #gluster
05:58 kramdoss_ joined #gluster
06:04 kdhananjay joined #gluster
06:07 ashiq joined #gluster
06:09 mbukatov joined #gluster
06:11 Saravanakmr joined #gluster
06:11 hgowtham joined #gluster
06:11 skoduri joined #gluster
06:35 rafi joined #gluster
06:36 [diablo] joined #gluster
06:48 ndarshan joined #gluster
06:52 ayaz joined #gluster
07:00 ivan_rossi joined #gluster
07:01 ivan_rossi left #gluster
07:03 armyriad joined #gluster
07:07 jiffin1 joined #gluster
07:08 gyadav_ joined #gluster
07:11 ppai joined #gluster
07:18 jkroon joined #gluster
07:21 rastar_afk joined #gluster
07:21 ndarshan joined #gluster
07:23 amarts joined #gluster
07:25 sona joined #gluster
07:30 gyadav__ joined #gluster
07:42 reen_ joined #gluster
07:44 sona joined #gluster
07:44 reen_ Hi guys, I'm just evaluating gluster storage for a Proxmox master/slave setup. I've played around with gluster's geo-replication; how does it differ from a normal rsync job?
08:01 amarts reen_, geo-rep works with glusterfs's internal changelog feature, which only syncs the files that changed between the last sync and this one
08:01 amarts (without a crawl)
08:02 amarts hence a little faster
08:02 reen_ hi amarts :)
08:02 reen_ ok, but this is still file based, right?
08:03 reen_ so if I want to replicate a vm which is one big file (vmdk or raw) do I still have a benefit from gluster?
08:05 reen_ by replicate I mean to transfer/backup the vm on node01 to node02, so that if node01 dies I can start the vm on node02
08:23 _KaszpiR_ joined #gluster
08:29 derjohn_mob joined #gluster
08:33 sanoj joined #gluster
08:34 amarts reen_, if you are looking at backing up the VM, we recommend snapshot  feature
08:34 amarts and then copying the snapshot
08:35 amarts because if the VM is active, geo-rep is not a good solution; it would corrupt the copy, as the file content would have changed by the time it starts copying the file
08:38 reen_ hey amarts. Thanks for your input! yeah, that's what I thought in the first place. I've got it working with a vzdump snapshot, then rsync it to node02 and restore it there.
08:39 reen_ I was just wondering if there's a better reason to use gluster for this use case
08:40 Klas you could vzdump to a gluster node and have geo-replication there
08:41 Klas but if I remember proxmox correctly, dumps end up in vim/${vmid}/ along with qemu files
08:41 Klas so that wouldn't really be practical
08:42 Klas s,vim/,vm/,
08:42 Klas you would just end up replicating everything all the time for little to no benefit
08:42 Klas at least with georeplication
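A minimal sketch of the kind of geo-replication session amarts and Klas describe above (the volume and host names dumpvol, dumpvol-dr and slave01 are made up; it assumes the slave volume already exists and that passwordless SSH from a master node to the slave host is in place):
    # generate and distribute the geo-replication ssh keys (run once, on a master node)
    gluster system:: execute gsec_create
    # create and start a session from master volume "dumpvol" to slave volume "dumpvol-dr"
    gluster volume geo-replication dumpvol slave01::dumpvol-dr create push-pem
    gluster volume geo-replication dumpvol slave01::dumpvol-dr start
    # the changelog-based sync only transfers files changed since the last sync
    gluster volume geo-replication dumpvol slave01::dumpvol-dr status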
08:52 kramdoss_ joined #gluster
08:55 MrAbaddon joined #gluster
09:00 reen_ joined #gluster
09:00 rafi1 joined #gluster
09:01 amarts joined #gluster
09:01 reen_ hi Klas. Yeah that would be an option to use glusterFS for the dump directory, so you don't have to trigger an rsync job
09:02 karthik_us joined #gluster
09:02 reen_ But like you and amarts suggest, I think gluster is too much overhead for this kind of structure.
09:03 reen_ thank you guys for the short discussion on this topic :)
09:08 MrAbaddon joined #gluster
09:11 kotreshhr joined #gluster
09:18 [diablo] Good morning #gluster
09:18 [diablo] guys, if I have 5 x gluster nodes replicating a volume amongst them
09:19 [diablo] but I only want two of those nodes to serve the data, is there a way I can do this please?
09:19 apandey joined #gluster
09:19 [diablo] I see when native client connects to a server it replies with all the servers holding the data...
09:19 jkroon [diablo], the answer depends on what you mean with that ...
09:19 [diablo] hi jkroon
09:19 skumar_ joined #gluster
09:19 [diablo] OK I'll try to explain better...
09:19 [diablo] basically we have a 5 x nodes...
09:19 jkroon they all have bricks?
09:19 [diablo] I create a new volume called jkroon
09:20 [diablo] and there's 5 x bricks
09:20 jkroon then the answer (to the best of my knowledge) is no.  clients connect directly to the bricks.
09:20 [diablo] but, 3 of these servers are not all reachable by clients
09:20 jkroon build a vpn :)
09:21 [diablo] when a client connects, gluster is responding with all 5 x hosts
09:21 [diablo] I want to limit that to 2
09:21 MrAbaddon joined #gluster
09:21 [diablo] the two which are reachable by the client, the other 3 are not...
09:22 [diablo] the only trick I thought of was dns spoofing ...
09:22 [diablo] that the other 3 fqdn's point to the 2 x valid nodes
09:22 [diablo] spoof on the clients dns zone
09:23 [diablo] someone said to me possibly using geo-replication ...
09:23 [diablo] but I need to check if thats a valid option to do this
09:27 twisted` joined #gluster
09:29 apandey joined #gluster
09:32 bitonic joined #gluster
09:33 skumar__ joined #gluster
09:38 Wizek_ joined #gluster
09:41 Klas client is the one writing to the bricks
09:41 [diablo] hi Klas
09:41 Klas so not allowing it access to the servers is fundamentally broken
09:41 Klas hi =)
09:41 [diablo] damn I'm explaining this real bad...
09:42 Klas we all suck at communication, the sooner you realize that, the better =)
09:42 Klas (we=humankind)
09:42 [diablo] Klas, does it write to all bricks at the same time, or to one brick that the internal replication then pumps out to the other nodes?
09:46 jkroon [diablo], no you're not.  can't be done.
09:46 jkroon client need to connect to the right brick.
09:47 jkroon so even if you dns spoof it (/etc/hosts?) then it'll connect to the wrong brick and result in failures.
09:47 [diablo] hmmmm
09:47 [diablo] ok
09:53 [diablo] sorry but I'm lost as to why the fuse client would need to be able to write to all 5 backend gluster servers...
09:53 [diablo] that would be hard on network...
09:53 [diablo] surely it writes to one, and the backend server handles replicating it out to the other 4 servers
09:54 rafi joined #gluster
09:54 [diablo] thus how would I be breaking anything if the fuse client is only writing to, say, 2 of the gluster servers
09:54 jkroon because the architecture is such that the server you connect to only provides meta information to the client - in other words - where to find the bricks.  And the layout.  data is read from one brick only, data is written to appropriate replicas.
09:55 jkroon distributing that workload to the clients distributes the computational effort required, resulting in overall higher performance, not lower.
09:56 [diablo] ahhhh  now
09:56 [diablo] I understand
09:57 [diablo] so, if I'm understanding well...
09:57 [diablo] say I have 4 x back end gluster nodes.... a volume called "bob" , replicated
09:57 [diablo] my desktop mounts via fuse "bob"
09:57 [diablo] I write a 1G file to it
09:58 [diablo] the 1st quarter could go to the 1st node, the 2nd quarter to the 2nd node, the 3rd quarter to the 3rd node.. etc
09:58 [diablo] ?
09:58 [diablo] to speed things up?
09:58 Klas nope
09:58 Klas not at all
09:58 Klas you write that file 4 times, once to each node
09:58 Klas the alternative is that a server receives all data
09:58 Klas and then replicates it
09:59 Klas this makes the servers a bottleneck
09:59 Klas while writing to all four offloads the overhead
09:59 Klas it does have disadvantages, primarily bandwidth from client
09:59 Klas (I don't really like the design btw, and learned of it quite recently)
10:00 [diablo] sounds bizarre
10:00 [diablo] I'm shocked
10:00 Klas not really
10:00 [diablo] so me sending a 1GB file to "bob", with 4 x backend nodes, results in 4GB of transfer
10:00 Klas it's very good for scaling to a large amount of clients with a small amount of servers
10:00 Klas that it does
10:00 [diablo] from my client
10:01 Klas it's not a good FS for bandwidth-lacking clients
10:01 Klas however, it's based on replica count, not node count
10:01 Klas a replica 2 across 1000 nodes
10:01 Klas still only sends data twice
10:02 Klas in most scenarios, replica 2 or 3 are the only relevant options for gluster
10:02 [diablo] OK I'm gonna have to dig deeper on this
10:02 [diablo] cheers for the info Klas
10:02 deniszh joined #gluster
10:02 [diablo] my co-worker has mainly been dealing with Gluster, so I need to brush up
10:03 Klas every network FS has got some drawbacks
10:03 Klas I somewhat miss designs like AFS myself
10:03 [diablo] :)
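A small sketch of Klas's point that the write fan-out follows the replica count, not the node count (the host names srv1..srv6 and the brick paths are made up):
    # six nodes, but replica 3: the volume is two replica sets of three bricks each
    gluster volume create bob replica 3 \
        srv1:/data/brick/bob srv2:/data/brick/bob srv3:/data/brick/bob \
        srv4:/data/brick/bob srv5:/data/brick/bob srv6:/data/brick/bob
    gluster volume start bob
    # a fuse client writes each file to the three bricks of one replica set,
    # i.e. 3x the data over the client's link, regardless of how many nodes exist
    mount -t glusterfs srv1:/bob /mnt/bob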
10:04 skumar joined #gluster
10:21 msvbhat joined #gluster
10:30 msvbhat joined #gluster
10:33 MrAbaddon joined #gluster
10:46 bfoster joined #gluster
10:51 k0nsl joined #gluster
10:51 k0nsl joined #gluster
10:55 MrAbaddon joined #gluster
11:00 k0nsl joined #gluster
11:00 k0nsl joined #gluster
11:01 amarts joined #gluster
11:07 k0nsl joined #gluster
11:07 k0nsl joined #gluster
11:13 sona joined #gluster
11:23 melliott joined #gluster
11:29 ankitr joined #gluster
11:59 kotreshhr left #gluster
12:22 cholcombe joined #gluster
12:23 Humble joined #gluster
12:26 k0nsl joined #gluster
12:26 k0nsl joined #gluster
12:27 shyam joined #gluster
12:32 nbalacha joined #gluster
12:40 buvanesh_kumar joined #gluster
12:53 baber joined #gluster
13:04 cholcombe joined #gluster
13:24 gem joined #gluster
13:25 sanoj joined #gluster
13:26 skylar joined #gluster
13:27 Karan joined #gluster
13:34 skumar joined #gluster
13:45 msvbhat joined #gluster
13:49 ic0n joined #gluster
13:50 derjohn_mob joined #gluster
13:52 ic0n joined #gluster
14:02 cholcombe joined #gluster
14:09 farhorizon joined #gluster
14:26 skumar joined #gluster
14:35 nbalacha joined #gluster
14:39 arpu joined #gluster
14:44 tz-zeejay Hi. In a 4 x 2 distributed dispersed volume, what would happen if a single sub-volume failed? Would this cause the entire volume to be unavailable, or would it just make the resources (files, available space, etc.) belonging to that sub-volume unavailable?
14:52 Humble joined #gluster
14:53 farhorizon joined #gluster
15:05 msvbhat joined #gluster
15:06 cholcombe joined #gluster
15:11 jiffin joined #gluster
15:13 nbalacha joined #gluster
15:15 renout_away joined #gluster
15:19 msvbhat joined #gluster
15:22 legreffier joined #gluster
15:26 Wizek_ joined #gluster
15:32 jiffin joined #gluster
15:33 jiffin1 joined #gluster
15:49 jiffin joined #gluster
15:54 dyasny joined #gluster
15:56 shaunm joined #gluster
16:00 cholcombe joined #gluster
16:09 dyasny joined #gluster
16:19 skumar joined #gluster
16:21 jiffin joined #gluster
16:22 baber joined #gluster
16:26 Gambit15 joined #gluster
16:28 skoduri joined #gluster
16:38 twisted` joined #gluster
16:42 bitonic joined #gluster
16:44 gem joined #gluster
16:47 k0nsl joined #gluster
16:47 k0nsl joined #gluster
16:51 melliott joined #gluster
16:54 k0nsl joined #gluster
16:54 k0nsl joined #gluster
16:57 baber joined #gluster
17:02 jkroon joined #gluster
17:08 sona joined #gluster
17:24 ashiq joined #gluster
17:31 _KaszpiR_ joined #gluster
17:35 pocketprotector- joined #gluster
17:36 ashiq joined #gluster
17:41 XpineX joined #gluster
17:41 amarts joined #gluster
18:10 msvbhat joined #gluster
18:14 Acinonyx joined #gluster
18:19 Acinonyx joined #gluster
18:22 baber joined #gluster
18:27 hevisko does anybody have an “advised” systemd service to make the /etc/fstab glusterfs mounts happen only after the gluster cluster has stabilized and the volumes are available??
18:27 hevisko the CentOS packages seem to have the same problem, which LP doesn’t want to fix by adding retries to the filesystem mounts
18:27 hevisko … in systemd’s .mount
18:35 gyadav__ joined #gluster
18:45 hevisko left #gluster
18:46 hevisko joined #gluster
18:47 hevisko left #gluster
18:47 hevisko joined #gluster
18:49 hvisage joined #gluster
18:59 ashiq joined #gluster
19:04 rastar joined #gluster
19:07 cholcombe joined #gluster
19:07 gyadav__ joined #gluster
19:12 JoeJulian Well, hevisko's gone but still, imho, there needs to be dbus support added to handle that.
19:16 hvisage yeah, client dropped..
19:17 hvisage JoeJulian: but it is a general problem: How do I retry the mounts during a bootstrap of the cluster?
19:18 hvisage I got a semi working systemd.service for Debian, but on Centos that services fails,
19:19 hvisage and I suspect it also impacts nfs-ganesha (the lack of the shared storage) during the cluster bootstrapping
19:19 glusterbot hvisage: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
19:21 JoeJulian shut up glusterbot...
19:21 hvisage :D :D
19:21 * JoeJulian thinks he should fix that regex match someday.
19:22 JoeJulian hvisage: You can set backup-volfile-servers and fetch-attempts in your fstab
19:24 JoeJulian You can also set x-systemd.requires=glusterd.service to ensure the mount isn't attempted until glusterd has started.
19:25 hvisage that fetch-attempts sounds like a plan… will go search for it in the docs
19:25 JoeJulian man mount.glusterfs
19:25 JoeJulian @lucky man mount.glusterfs
19:25 glusterbot JoeJulian: https://linux.die.net/man/8/mount.glusterfs
19:25 hvisage requires on glusterd won’t work, as the volumes aren’t yet available even after glusterd has started up
19:26 JoeJulian I understand, but that would also delay the start of your retries which would still be benificial.
19:26 JoeJulian beneficial even.
19:26 hvisage and since it’s the local glusterd/brick I would prefer to mount (distributed nodes) I can’t say I want a backup mount, do I?
19:27 JoeJulian Why would you prefer to mount from localhost?
19:27 JoeJulian It doesn't really matter. ,,(mount server)
19:27 glusterbot (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns, or (#2) One caveat is that the clients never learn of any other management peers. If the client cannot communicate with the mount server, that client will not learn of any volume changes.
19:27 hvisage I agree that requires=glusterd.service is beneficial, but it’s not easily added in /etc/fstab ;)
19:28 JoeJulian Sure it is. x-systemd.requires=glusterd.service
19:28 JoeJulian It's in the mount options.
19:28 hvisage it’s not added bedefault to the shared_storage mount?
19:29 hvisage s/bedefault/by default/
19:29 glusterbot What hvisage meant to say was: it’s not added by default to the shared_storage mount?
19:29 JoeJulian I'm not sure. I don't use nfs so I've only once enabled that and didn't think to look.
19:30 hvisage ds2-int:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults        0 0
19:30 hvisage so that also fails (randomly) during gluster cluster bootstrapping :)
19:31 ahino joined #gluster
19:34 hvisage Centos 7, Gluster 3.10, man mount.glusterfs: fetch-attempts=N
19:34 hvisage Deprecated option - placed here for backward compatibility [default: 1]
19:36 cholcombe joined #gluster
19:36 baber joined #gluster
19:46 hvisage May 04 21:44:32 ds2.local.hv GlusterFS[1907]: [glusterfsd.c:1964:parse_cmdline] 0-glusterfs: obsolete option '--volfile-max-fetch-attempts or fetch-attempts' was provided
19:46 glusterbot hvisage: ''s karma is now -4
19:50 JoeJulian mmm, yeah.. says so right in the man page that I didn't fully read. :/
19:51 major pfft .. documentation ;P
19:52 gem joined #gluster
19:53 hvisage Yeah, it’s like sex, if it’s good, it’s *really* good, if it’s bad, it’s better than nothing ;(
19:54 major I dunno .. I have read some docs that no longer reflected the software enough that it only made things worse to read them vs just opening the code :(
19:55 hvisage You are starting to echo my sentiment w.r.t. systemd (and fixing the code myself)
19:55 hvisage JoeJulian x-systemd.requires=glusterd.service doesn’t solve the problem reliably either…. even with fetch-attempts=5
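Put together, the fstab variant discussed above would look roughly like this (ds1-int and ds3-int as backup volfile servers are made up; _netdev is the usual extra option for network filesystems). As hvisage notes, fetch-attempts is deprecated and even this combination doesn't reliably cover a cold cluster bootstrap:
    ds2-int:/gluster_shared_storage /var/run/gluster/shared_storage glusterfs defaults,_netdev,x-systemd.requires=glusterd.service,backup-volfile-servers=ds1-int:ds3-int 0 0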
20:08 _KaszpiR_ joined #gluster
20:11 msvbhat joined #gluster
20:27 cmd_pancakes joined #gluster
20:28 cmd_pancakes hello, is it possible to add another local interface as a peer in a volume? to force certain bricks to be hosted on specific local IPs?
20:43 shyam joined #gluster
21:08 hvisage could somebody tell me how to check which of the configured volumes are not in a quorum state?
21:16 joejulianw joined #gluster
21:22 k0nsl joined #gluster
21:22 k0nsl joined #gluster
21:31 MrAbaddon joined #gluster
21:32 hvisage I see that if the volume was (at some stage) in quorum, then you can mount/umount/mount it, but if you’ve freshly booted and it wasn’t in a quorum state before, then you can’t mount the volume… I can’t seem to find an option in “gluster volume” that will show me whether the volume is “in quorum” or not O_o
21:33 gem joined #gluster
21:48 wushudoin joined #gluster
21:57 shyam joined #gluster
22:03 d-fence joined #gluster
22:04 d-fence_ joined #gluster
22:09 vbellur joined #gluster
22:11 cholcombe joined #gluster
22:34 melliott joined #gluster
22:49 shyam joined #gluster
22:55 farhorizon joined #gluster
23:15 JoeJulian cmd_pancakes: that's what ,,(hostnames) are for.
23:15 glusterbot cmd_pancakes: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
23:16 JoeJulian Oh! I need to update that one.
23:16 JoeJulian @forget hostnames
23:16 glusterbot JoeJulian: The operation succeeded.
23:19 JoeJulian @learn hostnames as Hostnames should be used instead of IPs for server (peer) addresses. By doing so, you can use normal hostname resolution processes to connect over specific networks.
23:19 glusterbot JoeJulian: The operation succeeded.
23:19 JoeJulian hvisage: I was thinking (and looking for) the same thing.
23:20 JoeJulian I need to file a bug
23:20 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
23:29 k0nsl joined #gluster
23:29 k0nsl joined #gluster
23:30 cmd_pancakes JoeJulian: so i also had a second DNS name for the secondary IP address on the host, but when i probed it to be added to the list, the same UUID was turned (since it was the same process as the other IP) and the cluster just thought the second DNS name was an alias for the original interface
23:31 cmd_pancakes was return*
23:32 cmd_pancakes JoeJulian: i figured i'd probably need to run 2 gluster daemons on the host, serving 2 different volumes, but i can't seem to find a config where glusterfsd doesn't bind to port 49152 on all interfaces
23:33 cmd_pancakes the second gluster daemon fails to start due to address already in use...so i was wondering if it was still worth the effort of running two daemons, or if i was missing something simple
23:35 JoeJulian cmd_pancakes: it was. if your bricks use hostnames, then just resolve the hostname to the ip address you want to use from any one client.
23:37 JoeJulian ie, from server1 you want 127.0.2.1 but from server2 you want 192.168.99.1. From client1 you want 10.0.0.1. You can use /etc/hosts, or split-horizon dns or whatever.
23:37 JoeJulian And yes.
23:37 JoeJulian Gluster binds to 0.0.0.0
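A concrete version of the name-resolution trick JoeJulian describes, reusing his example addresses (the brick hostname gluster1 is made up; split-horizon DNS would achieve the same thing as these per-machine /etc/hosts entries):
    # /etc/hosts on server1: reach its own brick over loopback
    127.0.2.1     gluster1
    # /etc/hosts on server2: reach the same brick over the back-end network
    192.168.99.1  gluster1
    # /etc/hosts on client1: reach it over the client-facing network
    10.0.0.1      gluster1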
23:40 cmd_pancakes JoeJulian: gotcha, ok well i'll give it a try again, but when i tried to mount the same gluster daemon instance as two separate bricks on the same host using the different hostname, it got the same UUID as the original peer and failed to be added
23:40 MrAbaddon joined #gluster
23:40 cmd_pancakes my goal is to have 1 server export 2 bricks over different IP addresses
23:42 JoeJulian You could probably do that with net namespaces.
23:43 cmd_pancakes is that a gluster thing? the network end is fine, but it's when i get to the same gluster daemon that i have problems
23:43 JoeJulian Not sure what that gets you. They listen on different ports anyway.
23:44 JoeJulian Network namespaces are a kernel thing.
23:44 JoeJulian ip netns
23:44 JoeJulian @lucky ip netns
23:44 glusterbot JoeJulian: http://man7.org/linux/man-pages/man8/ip-netns.8.html
23:45 cmd_pancakes ah ok interesting
23:46 cmd_pancakes JoeJulian: so the overall goal is to work around an LACP issue on the network layer...we are seeing our bonds not work correctly and saturate a single link...so instead of 1 40Gb/s interface, i wanted to break it up into 2x 20Gb/s and force traffic down another IP interface
23:47 cmd_pancakes i had hoped i could just probe the same peer under a different DNS name and IP but the gluster daemon was too smart for that
23:47 JoeJulian yeah, I have yet to hear somebody tell me they bonded interfaces and got just what they wanted.
23:47 * JoeJulian shrugs.
23:47 cmd_pancakes hahaha yep, you know what im talking about :)
23:47 JoeJulian Ok, so are these bricks in the same volume?
23:48 cmd_pancakes correct, more or less we have, say 4 bricks on a single host
23:48 cmd_pancakes currently they are all added with the same DNS name, everything works fine
23:48 JoeJulian Actually, doesn't matter. Make your volume. Brick 1 is on server1a which resolves to 10.0.0.1. Brick 2 is on server1b which resolves to 10.0.0.2, etc.
23:49 cmd_pancakes but to break them up, i wanted to add bricks 1 and 2 on host1.example, and bricks 3 and 4 on host1b.example
23:49 JoeJulian or that
23:49 cmd_pancakes same physical host, just 2 interfaces with their own IP
23:49 JoeJulian When you define your volume, use those hostnames. Then just worry about dns resolution to get the interfaces you're targeting.
23:49 cmd_pancakes but yeah same volume
23:50 cmd_pancakes JoeJulian: yep, that was my approach...and that should work fine?
23:50 JoeJulian yeah
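Spelled out with made-up names, the layout being discussed for cmd_pancakes' case (one physical server, two interfaces, each with its own DNS name), if the volume were created from scratch:
    # host1a.example and host1b.example resolve to different 20Gb/s interfaces
    # on the same physical server; each name is used as the brick host for the
    # bricks that should be served on that interface
    gluster volume create bigvol \
        host1a.example:/bricks/b1 host1a.example:/bricks/b2 \
        host1b.example:/bricks/b3 host1b.example:/bricks/b4
    gluster volume start bigvol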
23:50 cmd_pancakes the other problem is that the volume is already created...so im removing bricks from the old hostname and just re-adding them with the new hostname
23:51 cmd_pancakes but that didn't seem to go as i had planned
23:51 JoeJulian Each brick will still be on a different port and glusterd will manage it all.
23:51 JoeJulian Yeah, that sucks.
23:51 cmd_pancakes ok yeah but let me give it another try if that should have just worked
23:51 JoeJulian If it's not in production, you could just delete the volume and recreate it.
23:51 cmd_pancakes i felt like going through the whole show of making two separate gluster daemons wasn't the right path
23:52 cmd_pancakes it is in production unfortunately :(
23:52 JoeJulian Way too complex for my tastes.
23:52 cmd_pancakes completely agree...just wasn't sure if that was my only option
23:52 cmd_pancakes but ok great thanks JoeJulian! just wanted to make sure i wasn't crazy in what i was trying to do :)
23:52 JoeJulian ok.. here's the plan...
23:53 cmd_pancakes yeah im all ears...i have a test environment i can reproduce prod and give it a dry run
23:53 cmd_pancakes im on gluster 3.7 if that makes a difference
23:55 JoeJulian kill (SIGTERM) glusterfsd for the brick you're going to replace with a new hostname. unmount that brick. use volume...replace-brick...commit force to replace the old hostname brick with the new hostname brick (same path). (more)
23:56 JoeJulian glusterd will start the daemon and start self-heal. kill (sigterm again) glusterfsd for that one brick. Mount the brick data again. "gluster volume start $volname force" to start that brick daemon again.
23:57 JoeJulian Wait for self-heal to finish.
23:57 JoeJulian (this is replicated, right?)
23:57 cmd_pancakes it is not....just distributed
23:57 JoeJulian Ah, poo.
23:57 cmd_pancakes but we can accept the downtime for the data on the particular brick
23:58 JoeJulian Ok, then that plan still works. I've done it myself.
23:58 JoeJulian I was doing it to change the brick path, but the same principle holds.
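Transcribed into commands, the plan JoeJulian outlines above would look something like this (the volume, host, and brick names are placeholders; it assumes the brick lives on its own mountpoint, and the PIDs come from gluster volume status):
    # 1. stop the brick that is moving to the new hostname
    gluster volume status bigvol                 # note that brick's glusterfsd PID
    kill <glusterfsd-pid>                        # SIGTERM, not -9
    umount /bricks/b3
    # 2. swap the hostname, keeping the same brick path
    gluster volume replace-brick bigvol host1.example:/bricks/b3 host1b.example:/bricks/b3 commit force
    # 3. glusterd starts a new brick daemon on the now-empty mountpoint; stop it,
    #    bring the real brick data back, and restart just that brick
    kill <new-glusterfsd-pid>
    mount /bricks/b3
    gluster volume start bigvol force
    # 4. on a replicated volume, wait here for self-heal to finish (not needed for pure distribute)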
23:59 cmd_pancakes ok great! i'll give that a whirl in a test environment first
23:59 JoeJulian +1
23:59 cmd_pancakes and having the same UUID for different hostnames should be fine?
23:59 cmd_pancakes in general
23:59 JoeJulian Yes, glusterd only cares about that for identifying peers.
