IRC log for #gluster, 2017-09-19

All times shown according to UTC.

Time Nick Message
00:00 dtrainor that must have dislodged something, because the volume started again, and the brick is online
00:01 anthony25 joined #gluster
00:02 dtrainor i see two glusterfs processes for this new brick, identical commands, with the exception of the port number being different.  None of the other bricks exhibit this.
00:02 JoeJulian That's odd
00:02 dtrainor i lied.  the port is the same, too.
00:02 JoeJulian odder
00:02 JoeJulian g v status and kill the other one, I think.
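
For reference, a minimal sketch of that cleanup (volume name and PID are hypothetical; 'gluster volume status' prints the PID it considers the live brick process, so the stray glusterfsd is the one that doesn't match):

    # Show the PID/port gluster expects for each brick
    gluster volume status slow_gv00
    # List the running brick processes for comparison
    ps ax | grep glusterfsd
    # Kill the glusterfsd whose PID does NOT appear in 'volume status'
    kill 12345    # hypothetical PID of the duplicate
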
00:03 dtrainor yep
00:03 dtrainor 1s
00:04 dtrainor ok, everything looks sane. still looking for a heal.  can i try forcing that now?
00:06 JoeJulian sure
00:07 dtrainor hmm, just did a check before trying to heal, and i see an error with that new brick https://paste.fedoraproject.org/paste/iv82IhaKfQX1MV0XjghcHg
00:07 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
00:15 dtrainor i'll be back in a moment
00:16 msvbhat joined #gluster
00:21 JoeJulian dtrainor: Try restarting glusterd again and _not_ killing the duplicate brick pid. Really, only one of them is going to be able to have the port anyway.
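
A sketch of that sequence on a systemd host (assuming the stock unit name):

    # Restart the management daemon without touching the brick processes
    systemctl restart glusterd
    # Confirm a single glusterfsd now owns the brick port
    ss -tlnp | grep glusterfsd
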
00:29 dtrainor right.  the rogue process is not running, even after a restart.
00:30 dtrainor just one glusterfsd process running for the one, new brick, now.  'gluster volume status' says the correct brick is up, and the glusterfsd process listing the port matches what shows in status.  but i still see something unsettling in 'gluster volume heal slow_gv00 info', https://paste.fedoraproject.org/paste/iv82IhaKfQX1MV0XjghcHg
00:30 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
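
For reference, the heal commands being discussed (slow_gv00 is the volume named above; 'full' crawls the entire volume and is the heavier option):

    # List entries the self-heal daemon still considers pending
    gluster volume heal slow_gv00 info
    # Kick off a heal of the pending entries
    gluster volume heal slow_gv00
    # Or force a full crawl of the volume
    gluster volume heal slow_gv00 full
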
01:03 dtrainor i'm out of ideas here JoeJulian, got any more?
01:18 msvbhat joined #gluster
01:38 plarsen joined #gluster
01:53 dtrainor Ok, I got it to come back.
01:53 dtrainor there may have been a hardware issue with the new-in-box drive.
01:55 ilbot3 joined #gluster
01:55 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:56 baber joined #gluster
01:56 prasanth joined #gluster
02:22 kramdoss_ joined #gluster
02:42 daMaestro joined #gluster
03:19 msvbhat joined #gluster
03:20 gyadav joined #gluster
03:25 vbellur joined #gluster
03:43 apandey joined #gluster
03:49 itisravi joined #gluster
03:51 msvbhat joined #gluster
04:26 hgowtham joined #gluster
04:32 daMaestro joined #gluster
04:32 jiffin joined #gluster
04:32 nbalacha joined #gluster
04:44 atinm joined #gluster
04:54 rdanter joined #gluster
04:58 bwerthmann joined #gluster
05:07 jkroon joined #gluster
05:10 Prasad joined #gluster
05:15 ndarshan joined #gluster
05:17 sunnyk joined #gluster
05:23 skumar joined #gluster
05:31 karthik_us joined #gluster
05:34 kdhananjay joined #gluster
05:45 Saravanakmr joined #gluster
05:49 ndarshan joined #gluster
06:00 mbukatov joined #gluster
06:01 Saravanakmr joined #gluster
06:05 g_work joined #gluster
06:05 anthony25 joined #gluster
06:05 rideh joined #gluster
06:05 jbrooks joined #gluster
06:07 ndarshan joined #gluster
06:07 john51 joined #gluster
06:10 hgowtham joined #gluster
06:10 decayofmind joined #gluster
06:10 inodb joined #gluster
06:10 tdasilva joined #gluster
06:14 susant joined #gluster
06:17 xavih joined #gluster
06:18 xavih left #gluster
06:20 nbalacha joined #gluster
06:22 bwerthmann joined #gluster
06:22 itisravi joined #gluster
06:23 sanoj joined #gluster
06:32 jtux joined #gluster
06:33 skumar_ joined #gluster
06:37 xavih joined #gluster
06:38 skumar__ joined #gluster
06:44 jtux joined #gluster
06:49 rafi1 joined #gluster
06:51 msvbhat joined #gluster
07:01 prasanth joined #gluster
07:05 apandey joined #gluster
07:05 ivan_rossi joined #gluster
07:05 ivan_rossi left #gluster
07:14 jkroon joined #gluster
07:20 georgeangel[m] joined #gluster
07:38 sunnyk joined #gluster
07:39 fsimonce joined #gluster
07:51 tamalsaha[m] joined #gluster
07:51 smohan[m] joined #gluster
07:51 marin[m] joined #gluster
07:51 _KaszpiR_ joined #gluster
08:03 nh2 joined #gluster
08:07 skoduri joined #gluster
08:11 stoff1973 joined #gluster
08:11 Arrfab joined #gluster
08:48 g_work hi there, i'm trying to totally destroy a volume with gluster volume remove-brick backup2  xxx.xxx.xxx.xxx:/gluster/backup2 start ... results are the same with 'start' or 'force' ... I get: volume remove-brick start: failed: Deleting all the bricks of the volume is not allowed
08:48 g_work How can I totally erase this volume now? :)
08:51 g_work ok found, gluster volume delete
08:51 g_work have a nice day all
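
For reference, the sequence that removes a volume outright (a volume must be stopped before it can be deleted, and the brick data stays on disk afterwards):

    # Stop the volume first; delete refuses a started volume
    gluster volume stop backup2
    gluster volume delete backup2
    # The brick directory (including .glusterfs metadata) remains; wipe it by hand if desired
    rm -rf /gluster/backup2
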
08:52 ThHirsch joined #gluster
08:58 Wizek_ joined #gluster
09:11 [fre] joined #gluster
09:12 [fre] Guys, apparently RH ships their gluster storage with a regular scan (mlocate, updatedb, cron.daily) of all filesystems.
09:12 [fre] Is there any need for it on gluster's side?
09:13 [fre] I see no use in scanning many terabytes of rh gluster storage... Can we just exclude /rhgs from updatedb?
09:16 _nixpanic joined #gluster
09:16 _nixpanic joined #gluster
09:16 [fre] updatedb can't scan millions of files & directories in one night... so the job is still running when the next day's run starts...
09:17 susant joined #gluster
09:18 hgowtham joined #gluster
09:19 stoff1973 see /etc/updatedb.conf (PRUNEFS, PRUNEPATHS, etc.)
09:19 stoff1973 man updatedb.conf
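
A sketch of the exclusions under discussion in /etc/updatedb.conf (values should be merged into the distro's existing lists; fuse.glusterfs covers client mounts, and /rhgs is the brick path mentioned above):

    # /etc/updatedb.conf -- skip gluster client mounts by filesystem type
    PRUNEFS = "fuse.glusterfs nfs nfs4"
    # ...and skip the brick directories by path
    PRUNEPATHS = "/tmp /var/spool /rhgs"
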
09:21 [fre] I've seen it. I know the conf and the prune options... but the real question is: is it safe to exclude the gluster volumes?
09:21 [fre] Is there any use in keeping them indexed?
09:21 [fre] or can I just prune them?
09:22 rdanter joined #gluster
09:22 dominicpg joined #gluster
09:25 stoff1973 There is an old bug (https://bugzilla.redhat.com/show_bug.cgi?id=762270) talking about this. IMHO, you can prune the glusterfs filesystem safely
09:25 glusterbot Bug 762270: medium, low, ---, sac, CLOSED CURRENTRELEASE, disable glusterfs mount from updatedb.conf
09:25 karthik_us joined #gluster
09:27 karthik_ joined #gluster
09:37 [fre] stoff1973, also the ones that are mounted locally over XFS?
09:44 itisravi__ joined #gluster
09:45 stoff1973 fre: yes, you can exclude all filesystems that are part of your glusterfs volume
09:48 rwheeler joined #gluster
09:57 msvbhat joined #gluster
10:02 buvanesh_kumar joined #gluster
10:05 cloph freephile: re updatedb - you can exclude filesystem types (so it won't try to scan the gluster fuse mounts, for example), but excluding the brick directories requires you to add the paths manually.
10:06 nbalacha joined #gluster
10:10 msvbhat joined #gluster
10:11 [fre] cloph.... So I did. I did presume gluster itself doesn't rely on mlocate...
10:12 cloph yeah, that's purely indexing for the user's convenience.
10:16 [fre] ok.
10:16 [fre] tnx for your nice support!
10:19 Wizek_ joined #gluster
10:44 nbalacha joined #gluster
10:46 msvbhat joined #gluster
10:54 _KaszpiR_ joined #gluster
10:54 bfoster joined #gluster
11:04 MikeLupe joined #gluster
11:05 [diablo] joined #gluster
11:07 baber joined #gluster
11:23 susant joined #gluster
11:26 ThHirsch joined #gluster
11:35 susant joined #gluster
11:40 skoduri_ joined #gluster
11:46 _KaszpiR_ joined #gluster
11:59 msvbhat joined #gluster
12:03 jkroon joined #gluster
12:06 skumar joined #gluster
12:09 [fre] guys, what the hell does this mean? it comes from samba running on gluster: ../source3/smbd/oplock.c:134(downgrade_file_oplock)  trying to downgrade an already-downgraded oplock!
12:24 gyadav joined #gluster
12:25 nh2 joined #gluster
12:29 skoduri_ joined #gluster
12:36 rafi joined #gluster
12:44 karthik_ joined #gluster
12:51 vbellur joined #gluster
12:52 vbellur joined #gluster
12:53 vbellur joined #gluster
12:54 vbellur joined #gluster
12:55 vbellur joined #gluster
13:04 plarsen joined #gluster
13:17 prasanth joined #gluster
13:17 atinm joined #gluster
13:19 baber joined #gluster
13:21 rafi1 joined #gluster
13:27 karthik_ joined #gluster
13:29 susant joined #gluster
13:38 vbellur joined #gluster
13:38 vbellur joined #gluster
13:45 nbalacha joined #gluster
13:45 jobewan joined #gluster
13:45 weller joined #gluster
13:47 weller hi, is there an ETA for when gluster 3.12 will be released for CentOS?
13:47 vbellur joined #gluster
13:47 hgowtham joined #gluster
13:49 cloph https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.12/ (in other words: If you want it, you can install it)
13:49 glusterbot Title: Index of /centos/7/storage/x86_64/gluster-3.12 - CentOS Mirror (at buildlogs.centos.org)
13:49 cloph no idea when CentOS as distribution will add it to their main/default repositories
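
Until it reaches the main repos, a sketch of a repo file pointing at those buildlogs packages (file and section names are arbitrary; gpgcheck is off because buildlogs test builds are unsigned):

    # /etc/yum.repos.d/gluster-312.repo  (hypothetical file name)
    [centos-gluster312-buildlogs]
    name=CentOS-7 Storage SIG - GlusterFS 3.12 (buildlogs)
    baseurl=https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.12/
    enabled=1
    gpgcheck=0

    # then, e.g.
    yum install glusterfs-server
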
13:55 skumar_ joined #gluster
13:58 jstrunk joined #gluster
14:04 rafi1 joined #gluster
14:09 vbellur joined #gluster
14:10 vbellur joined #gluster
14:11 vbellur joined #gluster
14:12 vbellur1 joined #gluster
14:12 dijuremo joined #gluster
14:12 vbellur joined #gluster
14:13 dijuremo If I have a long heal process ongoing and wanted to pause it during the day, what would be the best way to do it?
14:16 hmamtora joined #gluster
14:17 vbellur joined #gluster
14:22 skylar joined #gluster
14:27 dominicpg joined #gluster
14:27 atinm joined #gluster
14:30 farhorizon joined #gluster
14:38 prasanth joined #gluster
14:54 xavih joined #gluster
15:00 susant joined #gluster
15:09 wushudoin joined #gluster
15:11 jtux joined #gluster
15:20 jiffin joined #gluster
15:22 atinm joined #gluster
15:25 shyam joined #gluster
15:28 farhorizon joined #gluster
15:30 jefarr_ joined #gluster
15:41 Wizek_ joined #gluster
15:54 xavih joined #gluster
15:59 xavih joined #gluster
15:59 farhorizon joined #gluster
16:04 atinm joined #gluster
16:08 xavih joined #gluster
16:14 prasanth joined #gluster
16:21 decayofmind joined #gluster
16:32 jkroon joined #gluster
16:37 rafi joined #gluster
16:47 gyadav joined #gluster
16:49 atrius joined #gluster
17:02 susant joined #gluster
17:06 gyadav_ joined #gluster
17:10 JoeJulian dijuremo: There isn't really a good way. The best I can recommend is to set the cluster.self-heal volume settings (3 of them) off to disallow client-side heals. This pushes all the self-healing to the daemons which is generally more efficient anyway and helps prevent resource exhaustion.
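
A sketch of the three options JoeJulian means (these are the standard AFR client-side heal toggles; the volume name is hypothetical):

    # Push all healing to the self-heal daemon by disabling client-side heals
    gluster volume set myvol cluster.data-self-heal off
    gluster volume set myvol cluster.metadata-self-heal off
    gluster volume set myvol cluster.entry-self-heal off
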
17:11 rafi joined #gluster
17:16 rafi joined #gluster
17:28 gyadav joined #gluster
17:28 _KaszpiR_ joined #gluster
17:33 BitByteNybble110 joined #gluster
17:46 rafi joined #gluster
17:53 jefarr_ Hello all, I'm trying to get a better understanding of what ganesha-ha does in a failover scenario.  If I understand correctly, ganesha-ha monitors the health of each node (with the help of pacemaker/corosync) and, in the event of a failure, switches active connections to a working host.  How does this work when the underlying bricks are on the downed host?
17:55 msvbhat joined #gluster
17:55 cloph ganesha-ha cannot magically lift quorum limits - so if your actual gluster volume goes down, then the ha won't help of course.
17:56 rafi1 joined #gluster
18:07 jefarr_ yea, this is why I'm looking for a better understanding.  That makes perfect sense to me, so if I were putting together a simple distributed cluster the HA wouldn't do anything useful.
18:21 omie888777 joined #gluster
18:23 omie88877777 joined #gluster
18:43 Humble joined #gluster
19:03 msvbhat joined #gluster
19:13 msvbhat joined #gluster
19:29 msvbhat joined #gluster
19:29 baber joined #gluster
20:02 dijuremo joined #gluster
20:04 [diablo] joined #gluster
20:12 foobert joined #gluster
20:23 dtrainor joined #gluster
20:23 vbellur1 joined #gluster
20:24 vbellur1 joined #gluster
20:25 vbellur1 joined #gluster
20:26 vbellur2 joined #gluster
20:27 shyam joined #gluster
20:38 [diablo] joined #gluster
21:07 skylar joined #gluster
21:09 foobert joined #gluster
21:22 baber joined #gluster
21:24 dijuremo joined #gluster
21:39 PatNarciso joined #gluster
21:50 victori joined #gluster
21:51 decayofmind joined #gluster
21:58 shyam joined #gluster
22:48 nh2 joined #gluster
22:51 vbellur joined #gluster
23:08 shyam joined #gluster
23:25 jbrooks joined #gluster
23:47 MrAbaddon joined #gluster
23:54 dijuremo joined #gluster
23:58 plarsen joined #gluster
