IRC log for #gluster, 2015-04-14

All times shown according to UTC.

Time Nick Message
00:22 Scott_ joined #gluster
00:23 Scott_ Hello all
00:24 Scott_ Wondering if anyone is around to assist with a rather curly GlusterFS issue we're having with Windows NFS clients
00:28 lexi2 wanted to know if anyone is around to talk about some performance issues i am seeing
00:33 Pupeno joined #gluster
01:11 mike2512 left #gluster
01:12 Gill joined #gluster
01:22 badone__ joined #gluster
01:50 _NiC joined #gluster
02:02 bharata-rao joined #gluster
02:17 nangthang joined #gluster
02:21 CyrilPeponnet joined #gluster
02:22 harish joined #gluster
02:32 jvandewege_ joined #gluster
02:45 sripathi joined #gluster
02:55 lyang0 joined #gluster
03:12 nangthang joined #gluster
03:13 sripathi joined #gluster
03:13 rotbeard joined #gluster
03:13 hagarth joined #gluster
03:13 wushudoin joined #gluster
03:13 RioS2 joined #gluster
03:13 capri joined #gluster
03:13 Larsen_ joined #gluster
03:13 uebera|| joined #gluster
03:13 fubada joined #gluster
03:21 gnudna joined #gluster
03:22 gnudna Hi guys
03:22 gnudna quick question
03:22 gnudna i want to replace a node in a replicated volume
03:22 gnudna is there an easy way to remove the deprecated and add the new one?
03:27 chirino joined #gluster
03:35 kkeithley1 joined #gluster
03:39 itisravi joined #gluster
03:46 gnudna so guys any easy way to accomplish?
03:47 shubhendu joined #gluster
03:49 gem joined #gluster
04:00 plarsen joined #gluster
04:03 hagarth joined #gluster
04:04 kumar joined #gluster
04:06 gnudna so guys anybody know an easy way to remove a node from gluster replicated setup
04:06 gnudna and add a new one as a replacement?
04:09 nbalacha joined #gluster
04:11 meghanam joined #gluster
04:11 atinmu joined #gluster
04:11 glusterbot News from newglusterbugs: [Bug 1211462] nfs-ganesha: HA related cluster.conf is not deleted on all nodes of cluster after the teardown operation is executed. <https://bugzilla.redhat.com/show_bug.cgi?id=1211462>
04:18 gnudna left #gluster
04:18 Bhaskarakiran joined #gluster
04:19 kdhananjay joined #gluster
04:22 Peppard joined #gluster
04:37 atalur joined #gluster
04:40 ppp joined #gluster
04:41 RameshN joined #gluster
04:41 ppai joined #gluster
04:41 glusterbot News from newglusterbugs: [Bug 1202209] RFE: Sync the time of logger with that of system <https://bugzilla.redhat.com/show_bug.cgi?id=1202209>
04:43 hagarth joined #gluster
04:43 kanagaraj joined #gluster
04:52 deepakcs joined #gluster
04:52 kasturi joined #gluster
04:54 vimal joined #gluster
04:54 spandit joined #gluster
04:56 schandra joined #gluster
05:03 ndarshan joined #gluster
05:04 rafi joined #gluster
05:07 soumya joined #gluster
05:11 overclk joined #gluster
05:16 ira joined #gluster
05:21 RameshN joined #gluster
05:23 sakshi joined #gluster
05:25 lalatenduM joined #gluster
05:25 anoopcs joined #gluster
05:28 ppai joined #gluster
05:32 T3 joined #gluster
05:34 jiffin joined #gluster
05:36 ashiq joined #gluster
05:37 aravindavk joined #gluster
05:39 Manikandan joined #gluster
05:43 dusmant joined #gluster
05:47 hchiramm__ joined #gluster
05:51 R0ok_ joined #gluster
05:52 sakshi joined #gluster
05:59 kdhananjay joined #gluster
06:08 kshlm joined #gluster
06:11 anil joined #gluster
06:14 oxae joined #gluster
06:15 R0ok_ joined #gluster
06:15 maveric_amitc_ joined #gluster
06:15 mbukatov joined #gluster
06:22 jtux joined #gluster
06:23 hagarth joined #gluster
06:23 soumya joined #gluster
06:25 jankoprowski joined #gluster
06:25 karnan joined #gluster
06:25 Philambdo joined #gluster
06:26 jankoprowski Hi. After unmounting glusterfs volumes I cannot delete mount points. I'm getting 'device or resource busy'. lsof/fuser does not show anything. Any ideas?
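A few generic checks that often explain a lingering "device or resource busy" after a FUSE unmount (a rough sketch; /mnt/gluster-vol is a placeholder path, not taken from the question above):

    # is the kernel still tracking the mount point?
    grep gluster /proc/mounts
    # fuser with -m sometimes catches a holder that a plain lsof misses
    fuser -vm /mnt/gluster-vol
    # lazily detach whatever is left, then retry removing the directory
    umount -l /mnt/gluster-vol
    rmdir /mnt/gluster-vol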
06:28 itpings hi guys
06:28 itpings need to know one thing
06:29 itpings can we migrate data from one vol to a new vol ?
06:29 itpings i mean data from brick from vol1 to brick of vol2
06:29 itpings i know we can do it in bricks in same vol but what about different volumes
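Data generally has to be copied between volumes through client mounts rather than brick to brick, since the bricks carry gluster's own layout xattrs; a minimal sketch, with server and volume names as placeholders:

    # mount both volumes with the native client
    mount -t glusterfs server1:/vol1 /mnt/vol1
    mount -t glusterfs server1:/vol2 /mnt/vol2
    # copy through the mounts so vol2 builds its own metadata
    rsync -a /mnt/vol1/ /mnt/vol2/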
06:30 ashiq joined #gluster
06:33 T3 joined #gluster
06:47 jtux joined #gluster
06:48 ghenry joined #gluster
06:48 ghenry joined #gluster
06:51 raghu joined #gluster
06:52 soumya joined #gluster
06:55 ppp joined #gluster
06:56 nshaikh joined #gluster
06:57 sakshi joined #gluster
06:59 [Enrico] joined #gluster
07:01 atinmu joined #gluster
07:04 crashmag joined #gluster
07:05 msvbhat joined #gluster
07:14 oxae joined #gluster
07:30 hchiramm__ itisravi++
07:30 glusterbot hchiramm__: itisravi's karma is now 1
07:30 hchiramm__ kdhananjay++
07:30 glusterbot hchiramm__: kdhananjay's karma is now 2
07:33 malevolent joined #gluster
07:34 T3 joined #gluster
07:34 deepakcs joined #gluster
07:37 fsimonce joined #gluster
07:41 T0aD joined #gluster
07:42 harish_ joined #gluster
07:44 hagarth joined #gluster
07:46 deniszh joined #gluster
07:50 Prilly joined #gluster
07:51 liquidat joined #gluster
07:51 T0aD joined #gluster
07:52 ppai joined #gluster
07:52 lyang0 joined #gluster
07:59 T0aD joined #gluster
08:04 Slashman joined #gluster
08:08 [Enrico] joined #gluster
08:10 James joined #gluster
08:14 T0aD joined #gluster
08:17 atinmu joined #gluster
08:24 jiku joined #gluster
08:26 Norky joined #gluster
08:26 Slashman joined #gluster
08:34 shubhendu joined #gluster
08:34 T3 joined #gluster
08:35 lalatenduM joined #gluster
08:41 lalatenduM joined #gluster
08:42 anoopcs joined #gluster
08:47 schandra joined #gluster
08:49 ktosiek joined #gluster
09:05 ppai joined #gluster
09:09 nbalacha joined #gluster
09:14 Bhaskarakiran joined #gluster
09:16 vipulnayyar joined #gluster
09:23 rjoseph joined #gluster
09:27 jiffin joined #gluster
09:27 ppai joined #gluster
09:30 ndarshan joined #gluster
09:30 RameshN joined #gluster
09:31 sakshi joined #gluster
09:33 ashiq joined #gluster
09:35 anoopcs joined #gluster
09:35 T3 joined #gluster
09:36 rafi joined #gluster
09:38 corretico joined #gluster
09:38 atalur joined #gluster
09:38 meghanam joined #gluster
09:39 soumya joined #gluster
09:39 pppp joined #gluster
09:43 badone_ joined #gluster
09:45 spandit joined #gluster
09:46 ppai joined #gluster
09:48 atinmu joined #gluster
09:51 schandra joined #gluster
09:52 dockbram_ joined #gluster
09:52 R0ok__ joined #gluster
09:53 kke_ joined #gluster
09:54 dev-0 joined #gluster
09:58 sage_ joined #gluster
10:01 yosafbridge joined #gluster
10:02 devilspgd_ joined #gluster
10:09 MarkR joined #gluster
10:11 anoopcs joined #gluster
10:12 MarkR Could anyone help me with the following. After a reboot, I get: file2:$ sudo gluster volume status GLUSTER-HOME  Status of volume: GLUSTER-HOME  Gluster process  Port  Online  Pid ------------------------------------------------------------------------------ Brick file1:/data/export-home-1  49153  Y  1212  Brick file2:/data/export-home-2  N/A  N  N/A  NFS Server on localhost  2049  Y  5096  Self-heal Daemon on localhost  N/A  Y  5101 N
10:12 glusterbot MarkR: ----------------------------------------------------------------------------'s karma is now -4
10:12 MarkR Hm.
10:12 MarkR Could anyone help me with the following. After a reboot, I get:
10:12 MarkR file2:$ sudo gluster volume status GLUSTER-HOME
10:13 MarkR Brick file1:/data/export-home-1  49153  Y  1212
10:13 MarkR Brick file2:/data/export-home-2  N/A  N  N/A
10:13 MarkR GlusterFS 3.4.6-ubuntu1~precise1
10:13 itisravi_ joined #gluster
10:14 atinmu MarkR,  2nd brick is down
10:15 atinmu MarkR, could you check the brick log file for the same
10:15 MarkR The only strange thing I noticed in the logs of the particular brick is:
10:15 MarkR E [socket.c:2872:socket_connect] 0-management: connection attempt failed (Connection refused)
10:15 MarkR I [glusterd-utils.c:1079:glusterd_volume_brickinfo_get] 0-management: Found brick
10:16 MarkR which repeats every few seconds
10:16 MarkR I disabled iptables (temporarily) to no avail
10:16 harish_ joined #gluster
10:16 anrao joined #gluster
10:18 hagarth itpings: ping, we are posting your video on twitter @gluster. Do you have a twitter handle?
10:18 pkoro joined #gluster
10:18 MarkR Here some more log data:
10:18 MarkR E [posix.c:4419:init] 0-GLUSTER-HOME-posix: Extended attribute trusted.glusterfs.volume-id is absent
10:19 MarkR That should be the problem I guess, now how to fix it...
10:19 hgowtham joined #gluster
10:22 atinmu MarkR, do u see any log with E apart from socket_connect, do you see anything abnormal in glusterd log, as a workaround can you try to start the same volume with a force option?
10:23 MarkR Ah, some more insight: the machine was migrated and rebooted. I guess some ACL data was not migrated properly.
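The "trusted.glusterfs.volume-id is absent" error means the brick lost its extended attributes, which fits a migration that didn't preserve xattrs. A commonly cited workaround, worth verifying against the docs for the installed 3.4.x release, is to restore the volume-id xattr from glusterd's info file and then force-start the volume; GLUSTER-HOME and the brick path are taken from the paste above:

    # on the server whose brick is down
    VOLID=$(grep volume-id= /var/lib/glusterd/vols/GLUSTER-HOME/info | cut -d= -f2 | tr -d '-')
    setfattr -n trusted.glusterfs.volume-id -v 0x$VOLID /data/export-home-2
    gluster volume start GLUSTER-HOME force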
10:59 sripathi joined #gluster
11:07 gem_ joined #gluster
11:07 meghanam joined #gluster
11:07 rafi1 joined #gluster
11:09 pppp joined #gluster
11:11 pppp joined #gluster
11:13 glusterbot News from newglusterbugs: [Bug 1211576] Gluster CLI crashes when volume create command is incomplete <https://bugzilla.redhat.com/show_bug.cgi?id=1211576>
11:13 glusterbot News from newglusterbugs: [Bug 1211562] Data Tiering:UI:changes required to CLI responses for attach and detach tier <https://bugzilla.redhat.com/show_bug.cgi?id=1211562>
11:13 glusterbot News from newglusterbugs: [Bug 1211570] Data Tiering:UI:when a user looks for detach-tier help, instead command seems to be getting executed <https://bugzilla.redhat.com/show_bug.cgi?id=1211570>
11:15 anoopcs joined #gluster
11:15 vimal joined #gluster
11:19 atalur joined #gluster
11:19 jiffin1 joined #gluster
11:21 spandit joined #gluster
11:22 atinmu joined #gluster
11:25 soumya joined #gluster
11:31 badone_ joined #gluster
11:36 uebera|| joined #gluster
11:37 T3 joined #gluster
11:56 papamoose joined #gluster
12:01 anoopcs1 joined #gluster
12:02 rafi joined #gluster
12:05 atinmu joined #gluster
12:06 meghanam_ joined #gluster
12:07 morse joined #gluster
12:11 hagarth joined #gluster
12:12 Gill_ joined #gluster
12:13 gildub joined #gluster
12:13 glusterbot News from newglusterbugs: [Bug 1211594] status.brick memory allocation failure. <https://bugzilla.redhat.com/show_bug.cgi?id=1211594>
12:15 rjoseph joined #gluster
12:24 kanagaraj joined #gluster
12:33 LebedevRI joined #gluster
12:34 anoopcs joined #gluster
12:36 firemanxbr joined #gluster
12:37 T3 joined #gluster
12:38 shaunm_ joined #gluster
12:39 corretico joined #gluster
12:42 RameshN joined #gluster
12:43 glusterbot News from newglusterbugs: [Bug 1209831] peer probe fails because of missing glusterd.info file <https://bugzilla.redhat.com/show_bug.cgi?id=1209831>
12:43 glusterbot News from newglusterbugs: [Bug 1211614] [NFS] Shared Storage mounted as NFS mount gives error "snap_scheduler: Another snap_scheduler command is running. Please try again after some time" while running any scheduler commands <https://bugzilla.redhat.com/show_bug.cgi?id=1211614>
12:43 glusterbot News from newglusterbugs: [Bug 1209790] DHT rebalance :Incorrect error handling <https://bugzilla.redhat.com/show_bug.cgi?id=1209790>
12:43 sripathi joined #gluster
12:47 T3 joined #gluster
12:51 wkf joined #gluster
12:56 shubhendu joined #gluster
13:01 bene2 joined #gluster
13:01 rafi1 joined #gluster
13:02 bennyturns joined #gluster
13:03 rwheeler joined #gluster
13:05 schwing joined #gluster
13:09 wica joined #gluster
13:09 wica Hi, no question ;p
13:17 julim joined #gluster
13:21 haomaiwa_ joined #gluster
13:22 theron joined #gluster
13:23 schwing i need to start trying to get some performance out of my 3.5.2 setup and thought adjusting the volume options would be a good place to start.  any advice on which ones to start with?
13:24 dgandhi joined #gluster
13:24 LostPlanet joined #gluster
13:25 Iodun joined #gluster
13:25 mibby joined #gluster
13:26 LostPlanet hello, i am considering using gluster with a group of machines that i want to have performing a kind of auto-scaling . i'm wondering if it is fairly easy to dynamically add / remove machines so that a new machine comes up and shares a directory with the others ?
13:26 georgeh-LT2 joined #gluster
13:27 theron joined #gluster
13:30 DV__ joined #gluster
13:31 smohan joined #gluster
13:34 lalatenduM itpings++ awesome video https://www.youtube.com/watch?v=NYGn7sgMrMw
13:34 glusterbot lalatenduM: itpings's karma is now 1
13:34 wkf joined #gluster
13:36 karnan joined #gluster
13:38 schwing if i make a change to the volume options, such as performance.write-behind-window-size, do i need to restart the gluster service for it to take effect or is it immediately applied?
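In general, options set with "gluster volume set" are applied to the running volume graph without restarting glusterd or remounting clients, though that is worth confirming against the docs for the version in use. A sketch, with the volume name as a placeholder:

    # takes effect on the live volume
    gluster volume set myvol performance.write-behind-window-size 4MB
    # verify it was recorded under "Options Reconfigured"
    gluster volume info myvol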
13:38 hamiller joined #gluster
13:43 glusterbot News from newglusterbugs: [Bug 1211640] [SNAPSHOT]: glusterd crash when snapshot create was in progress on different volumes at the same time - job edited to create snapshots at the given time <https://bugzilla.redhat.com/show_bug.cgi?id=1211640>
13:47 foster joined #gluster
13:51 lpabon joined #gluster
13:52 RayTrace_ joined #gluster
13:56 ppai joined #gluster
13:56 kdhananjay joined #gluster
14:00 DV__ joined #gluster
14:01 RameshN joined #gluster
14:02 rjoseph joined #gluster
14:02 chirino joined #gluster
14:13 hchiramm__ joined #gluster
14:16 RameshN joined #gluster
14:22 maveric_amitc_ LostPlanet, glusterfs is a scale-out cluster.. which means you add and remove nodes with a lot of ease
14:22 maveric_amitc_ LostPlanet, without any impact on performance or stability...
14:22 maveric_amitc_ LostPlanet, very minimal impact
14:23 maveric_amitc_ LostPlanet, if u can share the specifics of your scenario, I can share some of my thoughts on it :)
14:23 aravindavk joined #gluster
14:31 diegows joined #gluster
14:32 lkoranda joined #gluster
14:34 deepakcs joined #gluster
14:38 lkoranda joined #gluster
14:46 LostPlanet maveric_amitc_: my concern is how do i automatically hook a newly created server into the existing network ? i want to have a replicated volume across all servers and when a new one comes up it too can access the same data
14:47 LostPlanet does adding a new one in require commands to execute just on the new server or also on one of the existing servers ?
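Joining a node always involves at least one command on a server that is already in the trusted pool, since peer probe is run from the inside out; growing or shrinking a replica set is likewise driven from any existing node. A rough sketch with placeholder names:

    # from any server already in the pool
    gluster peer probe newhost
    # grow a 2-way replica volume to 3-way using a brick on the new server
    gluster volume add-brick myvol replica 3 newhost:/bricks/myvol
    # scaling back down is the mirror image
    gluster volume remove-brick myvol replica 2 newhost:/bricks/myvol force
    gluster peer detach newhost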
14:54 kanagaraj joined #gluster
14:56 getup joined #gluster
14:57 lkoranda joined #gluster
15:01 corretico joined #gluster
15:02 lkoranda joined #gluster
15:02 bene3 joined #gluster
15:08 lkoranda joined #gluster
15:09 ctria joined #gluster
15:10 tg2 Anybody know why, with 3.6.2, when doing a remove-brick it starts, but in remove-brick status it shows run time in secs: 0
15:12 karnan joined #gluster
15:12 lkoranda joined #gluster
15:22 lkoranda joined #gluster
15:23 kanagaraj joined #gluster
15:25 virusuy_ joined #gluster
15:31 lkoranda joined #gluster
15:33 jobewan joined #gluster
15:43 lkoranda joined #gluster
15:53 bfoster joined #gluster
15:54 neofob left #gluster
15:55 shubhendu joined #gluster
16:07 bene3 joined #gluster
16:17 JoeJulian tg2: quantum uncertainty?
16:18 kmai007 joined #gluster
16:18 kmai007 hello friends
16:19 JoeJulian LostPlanet: Your understanding of clustered filesystems is incorrect. You do *not* want a replicated volume across all servers. That's horribly inefficient. You want clustered storage where your files are *available* on every server, and you want it done in such a way that it will meet your SLA.
16:19 kmai007 it's been a long time.  Can anyone suggest what would help me get gluster storage or the gluster-client to cache negative lookups through fuse ?
16:19 schwing is there a sister command to "gluster volume set" that would retrieve the volume options that are currently set?
16:20 kmai007 i know there is a translator that J. Darcy wrote, but it isn't ready for production
16:20 kmai007 schwing: gluster volume info <VOLUME>
16:20 squizzi schwing: gluster volume info
16:20 squizzi ;)
16:20 kmai007 but it won't show the defaults
16:20 JoeJulian kmai007: iirc, something's been added to that end...
16:20 kmai007 you'll have to read the "gluster volume set help"
16:20 schwing i was hoping to see all the values for the ones i haven't changed
16:21 RayTrace_ joined #gluster
16:21 kmai007 JoeJulian: where would i find literature on that "end"
16:21 JoeJulian set help shows the defaults, but no. There's nothing that outputs all the defaults *and* your changes.
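In practice the two views get combined by hand; a sketch, with the volume name as a placeholder:

    # options explicitly set on the volume (listed under "Options Reconfigured")
    gluster volume info myvol
    # defaults and descriptions for everything else
    gluster volume set help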
16:22 JoeJulian kmai007: "man glusterfs" "/negative-timeout"
16:22 JoeJulian I *think* that's what that is.
16:22 kmai007 thats the key word i wanted to look for, thanks JoeJulian
16:23 purpleidea trivially easy patch for someone to review and merge: review.gluster.org/#/c/10236 ... someone might want to add a test case to look for this weird char which sneaks into gluster source occasionally.
16:23 purpleidea err moving to devel, sorry
16:26 kmai007 JoeJulian: have you used the negative-timeout=N seconds, the default is 0, i'm not sure what i should tune it to
16:26 kmai007 i suppose it's only negative lookups, so it doesn't matter
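negative-timeout is a FUSE-side option, so it is applied at mount time rather than with "gluster volume set". A sketch, with the 10-second value purely a placeholder to be tuned; if the mount helper for the installed release doesn't accept the option, the same flag can be passed to the glusterfs binary directly as --negative-timeout:

    # cache negative lookups for 10 seconds on this client
    mount -t glusterfs -o negative-timeout=10 server1:/myvol /mnt/myvol
    # or persistently in /etc/fstab
    server1:/myvol  /mnt/myvol  glusterfs  defaults,negative-timeout=10  0 0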
16:32 schwing are there any volume options that would help with write performance?
16:35 JoeJulian schwing: write performance is much more abstract than that simple question. Write performance for 1 thread on 1 client? Write performance for multiple threads on 1 client? Write performance for multiple threads on thousands of clients?
16:35 JoeJulian Don't get stuck comparing apples to orchards. Look at your whole system in determining performance needs and engineer for the whole.
16:39 schwing for me, i'm trying to use gluster for a failover solution.  most file assets are written to a CDN, but i need something in case the CDN melts so most everything is going to be writes to gluster.  there are only a few client applications that write to it, and also the CDN, but it's not a lot of traffic.  basically, file-and-forget
16:40 schwing that will be once all the data is rsync'd in.  migrating from NFS to gluster now, but writes have drastically slowed down as more and more data has copied in.
16:49 T3 joined #gluster
16:50 [o__o] joined #gluster
16:56 JoeJulian schwing: Two possibilities come to mind. One, and I haven't verified, is wondering if rsync reads the directory before it starts the next file. If it does, and you have tens or hundreds of thousands of files per directory, that could have an effect.
16:56 JoeJulian The other would be to check your servers and make sure they're not swapping.
16:57 JoeJulian As an aside, when using rsync for initial copies, always use --inplace. It's much more efficient.
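A sketch of what that seeding run might look like against the volume's FUSE mount; --whole-file is an extra flag often paired with --inplace on a first copy, since the delta algorithm buys nothing when the target is empty (paths are placeholders):

    # initial copy from the old NFS export into the gluster mount
    rsync -a --inplace --whole-file /nfs/export/ /mnt/glustervol/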
16:59 soumya joined #gluster
17:00 RameshN joined #gluster
17:11 Slashman joined #gluster
17:16 glusterbot News from resolvedglusterbugs: [Bug 1210557] gluster peer probe with selinux enabled throws error <https://bugzilla.redhat.com/show_bug.cgi?id=1210557>
17:19 kanagaraj joined #gluster
17:20 bene2 joined #gluster
17:25 schwing JoeJulian: thanks for the rsync tip.  i had always had the sneaking suspicion that rsync may be an issue.
17:28 JoeJulian fwiw, rsync does a lot of checking to see what needs copied, what the state of the target is vs the source, etc. If you know you have a blank slate, find+cpio+netcat(+gzip in some cases) doesn't do any checking and just blindly writes stuff out. It can be way more efficient than any of the other options.
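A rough sketch of that pipeline, with hosts, port, and paths as placeholders; the exact listen syntax depends on which netcat variant is installed:

    # receiving side, run inside the gluster mount
    cd /mnt/glustervol && nc -l 9000 | gzip -d | cpio -idm
    # sending side, run inside the source tree
    cd /nfs/export && find . -depth -print0 | cpio -o -0 -H newc | gzip -1 | nc receiver 9000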
17:33 schwing rsync was the only tool i could think of at the time of the initial copy.  i wish i would've researched a bit more, now.  i just mounted my gluster to the nfs server and did an rsync between directories/mounts
17:34 JoeJulian I hate when I do that. When it's more efficient to leave a task inefficient because you're too far in.
17:35 schwing do you know of any other options now that the initial push is complete to keep the two locations in sync?  the web app pushes about 300GB/day but i can never seem to get caught up so am multiple days behind
17:35 coredump joined #gluster
17:35 Arminder joined #gluster
17:36 schwing i really like lsyncd to rsync to multiple places, but need to get closer in sync before i can use it
17:36 JoeJulian geo-replication
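Geo-replication is configured per master/slave volume pair; a rough sketch of the 3.5-era commands, with names as placeholders and passwordless SSH to the slave host assumed:

    # one-time key distribution, then session setup from the master side
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status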
17:38 oxae joined #gluster
17:41 lalatenduM joined #gluster
17:43 maveric_amitc_ joined #gluster
17:44 schwing i had to add a couple bricks to increase the storage (saw a message on the elist about keeping in-use to below 80% for better performance) so my system is doing the rebalance.  would it make sense to stop the rebalance, work on getting things in sync, then restart the rebalance?  i'm worried that the rebalance is affecting what i'm seeing for write performance issues.
17:44 glusterbot News from newglusterbugs: [Bug 1211718] After fresh install of gluster rpm's the log messages shows error for glusterd.info file as no such file or directory <https://bugzilla.redhat.com/show_bug.cgi?id=1211718>
17:45 JoeJulian schwing: it would make sense
17:46 glusterbot News from resolvedglusterbugs: [Bug 1209831] peer probe fails because of missing glusterd.info file <https://bugzilla.redhat.com/show_bug.cgi?id=1209831>
17:47 schwing is a complete rebalance needed for the space to be presented as available to gluster, or can i just do a fix-layout rebalance?  or maybe neither??
17:48 JoeJulian Just fix-layout
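A sketch, with the volume name as a placeholder:

    # make the new bricks eligible for new files without migrating existing data
    gluster volume rebalance myvol fix-layout start
    # watch for completion
    gluster volume rebalance myvol status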
17:49 diegows joined #gluster
17:50 tg2 joined #gluster
17:53 schwing JoeJulian: thank you so much for taking time to chat.  it was very helpful!
17:53 JoeJulian any time
17:54 JoeJulian Glad I could help.
17:54 schwing oh, could i trouble you for one more question?
17:54 kmai007 there is no troubles here
17:54 kmai007 only helpers
17:55 schwing the post i mentioned earlier about keeping the disks below 80% in use for performance ... would that mean keeping each brick below 80% in use or the entire gluster mount at below 80% in use?
17:56 oxae joined #gluster
18:01 kmai007 the gluster bricks should be even if u used dist-rep, and you rebalanced....i'd imagine that overall 80% is ideal for performance
18:01 kmai007 but i've never crossed that path, so i'm not fit to answer
18:01 rafi joined #gluster
18:02 schwing the bricks were even before the rebalance and i could watch them slowly free up as more data was moved to the new brick.  i assume that when the rebalance finished that they would all be even
18:02 kmai007 yes, that is the rebalance feature
18:03 kmai007 you wouldn't need to keep track of each brick, if you chose the right brick design
18:04 JoeJulian They would end up more or less even, but the answer to the question is per-brick.
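Per-brick usage can be watched either with df on each brick filesystem or from the CLI; a sketch with placeholder names:

    # disk totals and free space per brick, as gluster reports them
    gluster volume status myvol detail
    # or plain df on each server's brick mount
    df -h /bricks/myvol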
18:04 JoeJulian @dht
18:04 glusterbot JoeJulian: I do not know about 'dht', but I do know about these similar topics: 'dd'
18:04 JoeJulian @meh
18:04 glusterbot JoeJulian: I'm not happy about it either
18:05 JoeJulian wikipedia has an article on distributed hash tables (dht) and I have an example of how it's actually used on my blog.
18:05 schwing could you link your blog?
18:06 JoeJulian https://joejulian.name
18:06 JoeJulian @lucky dht misses are expensive
18:06 glusterbot JoeJulian: https://joejulian.name/blog/dht-misses-are-expensive/
18:07 Arminder joined #gluster
18:09 schwing this is my first foray into distributed filesystems (if you don't count nfs) and i find it fascinating.  can't wait until i can get my current need completed so i can dig deeper.
18:11 huleboer joined #gluster
18:15 rafi joined #gluster
18:17 ekuric joined #gluster
18:22 gnudna joined #gluster
18:22 gnudna joined #gluster
18:22 gnudna left #gluster
18:22 gnudna joined #gluster
18:23 gnudna Hi Guys was wondering how can one break a replicated setup so i can remove a node to add a replacement one?
18:23 gnudna host1 <--> host2  remove host1 completely and add a new node into the setup instead
18:23 glusterbot gnudna: <'s karma is now -13
18:23 gnudna example host3
18:24 gnudna @glusterbot karma
18:24 gnudna is karma -13 a bad thing?
18:25 Rapture joined #gluster
18:26 kmai007 something like this gnudna https://joejulian.name/blog/replacing-a-glusterfs-server-best-practice/
18:27 kmai007 since you're not adding a node, you're "replacing" it, i think these procedures apply
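A hedged sketch of that replacement flow for a two-node replica where host3 stands in for a dead host1 (volume name and brick paths are placeholders; exact steps vary by release, so the linked post should take precedence):

    # from the surviving server (host2): bring the replacement into the pool
    gluster peer probe host3
    # swap the old brick for the new one and let self-heal repopulate it
    gluster volume replace-brick myvol host1:/bricks/myvol host3:/bricks/myvol commit force
    gluster volume heal myvol full
    # once healing is done, drop the dead server (may need force if host1 is unreachable)
    gluster peer detach host1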
18:34 rafi1 joined #gluster
18:38 rafi joined #gluster
18:42 rafi joined #gluster
18:43 anrao joined #gluster
18:48 chirino joined #gluster
18:52 rafi joined #gluster
18:53 gnudna thanks kmai007 will look into it
18:54 xiu joined #gluster
18:56 gnudna in my case i am re-using the same server just doing a clean install
18:57 gnudna so i would have liked to join as trusted and somehow tell the remaining server that this is your new host and let replication do its thing
19:09 kmai007 yes
19:13 gnudna would be nice but i need to stop treating replication as raid 1
19:13 gnudna but in this case would be really practical feature
19:21 social joined #gluster
19:28 chirino joined #gluster
19:32 xavih joined #gluster
19:38 RayTrace_ joined #gluster
19:42 Arminder joined #gluster
19:43 Arminder joined #gluster
19:44 Arminder joined #gluster
19:44 theron joined #gluster
19:45 Arminder joined #gluster
19:46 chirino joined #gluster
19:46 Arminder joined #gluster
19:47 Arminder joined #gluster
19:48 Arminder joined #gluster
19:49 Arminder joined #gluster
19:50 Arminder joined #gluster
19:51 Arminder joined #gluster
19:52 Arminder joined #gluster
19:52 Arminder joined #gluster
19:54 Arminder joined #gluster
19:54 Arminder joined #gluster
19:58 msmith joined #gluster
20:00 squizzi joined #gluster
20:11 social joined #gluster
20:14 oxae joined #gluster
20:25 _Bryan_ joined #gluster
20:39 Pupeno_ joined #gluster
20:44 halfinhalfout joined #gluster
20:46 roost joined #gluster
20:47 halfinhalfout given a replicate volume w/ 2 nodes, each w/ 1 brick. vs the same replicate volume w/ 3 nodes, each w/ 1 brick. and all else being equal. if 1 brick fails and must be replaced, will the heal process be faster on the 3-node replicate volume due to the ability to pull data from 2 healthy bricks instead of 1?
20:48 redbeard joined #gluster
20:57 gnudna left #gluster
21:08 lifeofguenter joined #gluster
21:13 lexi2 joined #gluster
21:19 coredump joined #gluster
21:28 ctria joined #gluster
21:45 glusterbot News from newglusterbugs: [Bug 1193174] flock does not observe group membership <https://bugzilla.redhat.com/show_bug.cgi?id=1193174>
21:51 ctria joined #gluster
22:03 badone_ joined #gluster
22:09 T0aD joined #gluster
22:32 dgandhi joined #gluster
22:44 badone_ joined #gluster
22:46 msmith joined #gluster
22:48 gildub joined #gluster
23:34 edwardm61 joined #gluster
23:45 halfinhalfout joined #gluster
23:48 Arminder joined #gluster
23:55 gildub joined #gluster
