IRC log for #gluster, 2017-03-21

All times shown according to UTC.

Time Nick Message
00:09 mlg9000 joined #gluster
00:12 MrAbaddon joined #gluster
00:24 masber joined #gluster
00:30 vbellur joined #gluster
00:37 kramdoss_ joined #gluster
00:50 plarsen joined #gluster
01:16 shdeng joined #gluster
01:22 plarsen joined #gluster
01:35 ankush joined #gluster
01:37 nishanth joined #gluster
01:39 kramdoss_ joined #gluster
01:53 major man .. so quiet .. would think people have a life away from computers...
01:54 raghu joined #gluster
02:16 skoduri joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:02 raghu joined #gluster
03:08 Gambit15 joined #gluster
03:11 kramdoss_ joined #gluster
03:13 derjohn_mob joined #gluster
03:15 masber joined #gluster
03:44 nthomas joined #gluster
03:48 ashiq joined #gluster
03:49 prasanth joined #gluster
03:50 dominicpg joined #gluster
03:55 ppai joined #gluster
03:56 RameshN joined #gluster
03:59 itisravi joined #gluster
04:00 skumar joined #gluster
04:02 gyadav joined #gluster
04:02 raghu joined #gluster
04:03 Prasad joined #gluster
04:07 kotreshhr joined #gluster
04:22 ashiq joined #gluster
04:33 sanoj joined #gluster
04:44 skumar joined #gluster
04:46 sbulage joined #gluster
04:50 Shu6h3ndu joined #gluster
04:54 kramdoss_ joined #gluster
05:15 karthik_us joined #gluster
05:19 apandey joined #gluster
05:22 prasanth joined #gluster
05:22 Philambdo joined #gluster
05:32 gyadav joined #gluster
05:33 ndarshan joined #gluster
05:34 ankitr joined #gluster
05:36 susant joined #gluster
05:40 Saravanakmr joined #gluster
05:42 ankush joined #gluster
05:43 riyas joined #gluster
05:49 poornima_ joined #gluster
05:50 ankush joined #gluster
05:55 msvbhat joined #gluster
05:55 sona joined #gluster
05:56 Vaelatern joined #gluster
05:56 kramdoss_ joined #gluster
05:58 buvanesh_kumar joined #gluster
06:00 susant joined #gluster
06:05 skoduri joined #gluster
06:13 Wizek joined #gluster
06:16 hgowtham joined #gluster
06:24 kdhananjay joined #gluster
06:25 nthomas joined #gluster
06:31 anbehl joined #gluster
06:32 izkasi joined #gluster
06:36 Karan joined #gluster
06:37 anbehl__ joined #gluster
06:39 kharloss joined #gluster
06:45 [diablo] joined #gluster
06:48 hgowtham joined #gluster
06:48 hgowtham joined #gluster
07:07 mbukatov joined #gluster
07:09 jwd joined #gluster
07:11 mhulsman joined #gluster
07:14 rastar joined #gluster
07:25 sbulage joined #gluster
07:27 izkasi_ joined #gluster
07:29 jtux joined #gluster
07:32 mk-fg Will glusterfs break horribly if rebuilt/restarted using user.* xattr namespace instead of trusted.* everywhere?
07:33 mk-fg Seem to be missing piece to run glusterd in uid namespace, where root isn't really root but can still chown things
07:33 mk-fg ("uid namespace" as in "systemd-nspawn -U")
07:36 jtux left #gluster
07:39 ivan_rossi joined #gluster
07:39 ivan_rossi left #gluster
07:41 buvanesh_kumar joined #gluster
07:41 mhulsman joined #gluster
07:42 verdurin joined #gluster
07:44 buvanesh_kumar_ joined #gluster
07:45 BuBU29 joined #gluster
07:49 BuBU29 joined #gluster
08:05 fsimonce joined #gluster
08:09 verdurin Is it possible to migrate an arbiter volume to a different brick?
08:10 verdurin We'd like to move our arbiter volumes to much smaller bricks, to free some space.
08:19 itisravi verdurin: do you mean migrating only the arbiter bricks?
08:20 verdurin itisravi: yes
08:21 itisravi verdurin: you can run the replace-brick command.
08:21 sanoj joined #gluster
08:46 dominicpg joined #gluster
08:59 verdurin itisravi: okay, thanks
09:02 itisravi verdurin: replace-brick will also trigger self-heal to heal the data to the newly replaced brick. You might want to monitor the heal-info command and make sure the new arbiter brick has all the files synced to it.
09:03 verdurin itisravi: yes, thanks again
09:04 itisravi np!
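The commands itisravi is pointing at, sketched out with placeholder volume and brick paths rather than verdurin's real layout:

    # swap only the arbiter brick of a replica 3 arbiter 1 volume
    gluster volume replace-brick myvol arb1:/bricks/arbiter arb2:/bricks/arbiter-small commit force
    # then watch self-heal populate the new arbiter brick until the pending list drains
    gluster volume heal myvol info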
09:12 mhulsman1 joined #gluster
09:24 flying joined #gluster
09:29 kdhananjay joined #gluster
09:36 buvanesh_kumar joined #gluster
09:38 atm0sphere joined #gluster
09:49 Philambdo joined #gluster
09:51 RameshN joined #gluster
10:04 ankush joined #gluster
10:04 ankitr joined #gluster
10:05 fdgdf What number does Octave output for primes(4e5)(end)
10:05 fdgdf left #gluster
10:05 dfdf joined #gluster
10:07 R0ok_ joined #gluster
10:08 poornima_ joined #gluster
10:12 mhulsman joined #gluster
10:13 kramdoss_ joined #gluster
10:14 gyadav joined #gluster
10:22 shann joined #gluster
10:22 shann hi, I've hit a failure with glusterfs replica 3 on my proxmox cluster.
10:23 kdhananjay joined #gluster
10:23 shann i have 3 nodes, one of them is unavailable and the gluster volume is not accessible. I restarted glusterfs-server on all 3 nodes and disabled the volume to re-enable it, but now I can't re-enable it
10:24 atinm joined #gluster
10:26 shann Really shit, partition not mounted on every node
10:45 ppai joined #gluster
10:47 mhulsman1 joined #gluster
10:48 MrAbaddon joined #gluster
10:50 mhulsman joined #gluster
10:50 rastar joined #gluster
10:58 atinm joined #gluster
10:58 gyadav joined #gluster
11:13 mhulsman joined #gluster
11:18 msvbhat joined #gluster
11:27 kotreshhr left #gluster
11:29 sona joined #gluster
11:30 Vide hello, is it ok to add 3.8.10 bricks to an existing 3.8.4 volume?
11:30 susant left #gluster
11:36 Vide ok, it worked flawlessly :)
11:40 msvbhat joined #gluster
11:49 kkeithley community bug triage in #gluster-meeting in 10 minutes
11:56 Vide how can I check if my Xeon has TSX support looking at /proc/cpuinfo ?
11:56 Vide I cannot find what's the name of the flag I should look for
11:59 skoduri joined #gluster
12:06 vbellur joined #gluster
12:07 vbellur joined #gluster
12:08 MadPsy Vide, afaik it's not reported there
12:08 unclemarc joined #gluster
12:10 MadPsy Vide, just look up the model on Intel's website and it'll tell you
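(For what it's worth, TSX normally does show up in /proc/cpuinfo when the CPU and microcode expose it, as the hle and rtm flags; if the quick check below prints nothing, the Intel ark page MadPsy suggests is the authoritative answer.)

    # TSX appears as the 'hle' and/or 'rtm' CPU flags
    grep -o -w -E 'hle|rtm' /proc/cpuinfo | sort -u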
12:14 mhulsman joined #gluster
12:18 kpease joined #gluster
12:21 vbellur joined #gluster
12:22 vbellur joined #gluster
12:22 vbellur joined #gluster
12:23 vbellur joined #gluster
12:23 jwd joined #gluster
12:33 gem joined #gluster
12:36 derjohn_mob joined #gluster
12:38 derjohn_mobi joined #gluster
12:39 Saravanakmr joined #gluster
12:45 atinm joined #gluster
12:46 shaunm joined #gluster
12:47 rejy joined #gluster
12:49 nthomas joined #gluster
12:54 prasanth joined #gluster
13:02 baber joined #gluster
13:06 gyadav joined #gluster
13:10 kramdoss_ joined #gluster
13:14 shyam joined #gluster
13:14 raghu joined #gluster
13:17 rwheeler joined #gluster
13:31 ashiq joined #gluster
13:32 skylar joined #gluster
13:33 malevolent joined #gluster
13:33 xavih joined #gluster
13:38 ira joined #gluster
13:45 sbulage joined #gluster
13:50 vbellur joined #gluster
13:52 vbellur joined #gluster
13:52 vbellur joined #gluster
13:53 vbellur joined #gluster
13:54 vbellur joined #gluster
13:55 vbellur joined #gluster
13:56 vbellur joined #gluster
13:57 vbellur joined #gluster
13:57 vbellur joined #gluster
13:58 vbellur joined #gluster
13:59 vbellur joined #gluster
14:01 vbellur joined #gluster
14:01 buvanesh_kumar joined #gluster
14:01 vbellur joined #gluster
14:02 buvanesh_kumar joined #gluster
14:02 vbellur joined #gluster
14:02 kharloss joined #gluster
14:07 gem joined #gluster
14:07 susant joined #gluster
14:12 morganb left #gluster
14:15 Vide MadPsy, sorry for the delay, I think I figured it out on the Intel website, thanks anyway!
14:15 Vide moreover this is the #gluster channel, not #ovirt :)
14:20 ira joined #gluster
14:24 jwd joined #gluster
14:30 ankush joined #gluster
14:33 susant left #gluster
14:36 fcoelho joined #gluster
14:40 atinm joined #gluster
14:41 rwheeler joined #gluster
14:45 farhorizon joined #gluster
14:47 oajs joined #gluster
14:50 pioto joined #gluster
15:01 mhulsman joined #gluster
15:02 wushudoin joined #gluster
15:03 oajs_ joined #gluster
15:04 shyam joined #gluster
15:05 fcoelho joined #gluster
15:14 Jacob843 joined #gluster
15:17 dfdf joined #gluster
15:21 wushudoin joined #gluster
15:39 plarsen joined #gluster
15:46 gyadav joined #gluster
15:53 skoduri joined #gluster
15:55 Wizek joined #gluster
16:06 ankitr joined #gluster
16:10 morse joined #gluster
16:10 ankitr joined #gluster
16:12 oajs joined #gluster
16:23 Gambit15 joined #gluster
16:25 jwd joined #gluster
16:35 mallorn Does anyone know if you need to restart any services after increasing disperse.shd-max-threads, or does increasing the number automatically take effect?
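(For anyone searching later: disperse.shd-max-threads is an ordinary volume option, so it can be set and read back with the gluster CLI; whether a given option also needs a service restart is the part that isn't documented, which is what mallorn is asking. The volume name below is a placeholder.)

    # bump the self-heal thread count on a disperse volume
    gluster volume set myvol disperse.shd-max-threads 4
    # read back what the cluster currently holds for the option
    gluster volume get myvol disperse.shd-max-threads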
16:57 atinm joined #gluster
17:01 mallorn Another question...  We were running 3.7 for a while, and we would mount the gluster file system over fuse by referring to a specific node, like 'mount -t glusterfs controller:glance /var/lib/glance'.  If controller went down, the filesystem stayed up (because we presumed it was talking to other gluster servers).  Now, with 3.10, if the glusterd process is killed on the controller then every node mounting from there loses its filesystem.  Is the
17:02 mk-fg mallorn, That message ended on "Is ther"
17:02 R0ok_ joined #gluster
17:02 mk-fg (cut to max length by either irc client or server)
17:02 mallorn Is there an option we need to set/change?
17:04 mk-fg I actually wonder about that too, I think 3.10 gets volfile and contacts bricks directly same as before, didn't test killing mount-from node though
17:06 major that sounds a bit like a bug..
17:07 oajs joined #gluster
17:10 mallorn OK, thanks.  I'm not going to test it further because it's a Bad Thing if it happens again, but we have scheduled maintenance on Thursday and I can poke at it then.
17:11 atinm joined #gluster
17:17 starryeyed joined #gluster
17:17 mallorn And one more question...  :)  After the 3.10 upgrade we're seeing a lot of DNS requests (~600/second) from that controller (the host everyone points to when mounting the filesystem) for the following (sanitized) hosts:
17:17 mallorn /var/run/glusterd.socket.example.edu /var/run/glusterd.socket.os.example.edu /var/run/glusterd.socket
17:17 mallorn Does anyone know where those could be coming from?  It's from the glusterd process itself, but I don't know where it's set.
17:24 msvbhat joined #gluster
17:24 mallorn For now I've added entries in /etc/hosts that resolve to 0.0.0.0 to reduce the lookups, but an strace on the glusterd binary shows that it's still being queried.
17:25 mk-fg mallorn, I'd think some "remote-host ..." got set to that /var/run/glusterd.socket in some .vol under /var/lib/glusterd
17:25 mk-fg Should probably be greppable
17:26 mk-fg (...if that's the case)
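A minimal version of the grep mk-fg is suggesting, against the stock /var/lib/glusterd layout, plus a look at the resolv.conf search list that would explain the odd domain suffixes on the lookups:

    # any remote-host entries that aren't plain hostnames/IPs?
    grep -rn "remote-host" /var/lib/glusterd/vols/
    # search domains that the resolver appends to bare names
    grep -E '^(search|domain)' /etc/resolv.conf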
17:32 riyas joined #gluster
17:33 mallorn [checking]
17:39 mallorn I checked again, but there's nothing there.  All of the remote-host options refer to valid entries in the local /etc/hosts file.
17:42 mallorn So it looks like something is looking up /var/run/glusterd.socket in DNS, but trying all of the domain names specified in the resolv.conf search parameter.  I can't find /var/run/glusterd.socket referenced anywhere (and the file doesn't even exist either).
17:42 mk-fg I think glusterd also logs all failed dns reqs, so might be some hints as to where it gets the thing in /var/log/gluster, if you haven't checked there yet
17:46 mallorn Thanks!  I need to increase my logging and will look around some more.  I'll report back what I find when I find it.  It might wait until Thursday's maintenance window.
17:48 jwd joined #gluster
17:49 mk-fg Brick failures do get logged for me at the default level, but then they're all in that "remote-host" field, and kinda critical
17:50 mk-fg Also, you might want to enable glustereventsd and set DEBUG level in its .json just to have it write its events.log
17:50 mk-fg Recently found at least one spot where it gives very useful info for error where usual log is very unspecific
17:51 mk-fg (it's a structured logging thingy)
17:53 vbellur joined #gluster
17:54 vbellur joined #gluster
17:55 vbellur joined #gluster
17:55 vbellur joined #gluster
18:05 Drankis joined #gluster
18:23 atinm joined #gluster
18:38 arpu joined #gluster
18:58 R0ok_ joined #gluster
19:00 baber joined #gluster
19:13 oajs joined #gluster
19:18 Wizek joined #gluster
19:36 mhulsman joined #gluster
19:59 rastar joined #gluster
20:21 rwheeler joined #gluster
20:26 farhorizon joined #gluster
20:31 baber joined #gluster
20:38 vbellur joined #gluster
20:58 shaunm joined #gluster
20:59 skylar joined #gluster
21:04 baber joined #gluster
22:14 vbellur joined #gluster
22:14 vbellur joined #gluster
22:34 APag96 joined #gluster
22:34 vbellur joined #gluster
22:37 APag96 Would someone mind giving me a hand with a problem I'm having? I'm testing out glusterfs for the first time and have 3 glusterfs-servers (gluster-test{01..02}) with a gluster FUSE mount of the same volume on each. When all 3 servers are online, everything works perfectly. If I take down gluster-test01, the mount point on 02 and 03 disappear. I have this set up to replicate across all 3 servers. Any ideas what's going on here?
22:38 APag96 My mistake. The 3 servers are gluster-test{01..03}
22:39 m0zes joined #gluster
22:40 JoeJulian Check the client log /var/log/glusterfs/(mountpoint with / replaced with -).log
22:40 JoeJulian ie /var/log/glusterfs/mnt-myvol.log
22:40 APag96 Checking now...
22:41 APag96 [2017-03-21 22:29:00.467883] E [glusterfsd-mgmt.c:2102:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect with remote-host: gluster-test01 (No data available) [2017-03-21 22:29:00.467907] I [glusterfsd-mgmt.c:2120:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
22:41 APag96 So it's failing to connect with the server I took down. According to the docs, when you mount a gluster volume, it should learn about ALL of the servers in the cluster. So I'm wondering why this is not happening
22:42 JoeJulian It should, yes.
22:42 JoeJulian @pasteinfo
22:42 glusterbot JoeJulian: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
22:44 APag96 This is gluster-test03: https://paste.fedoraproject.org/paste/gM3TIZzE54VRqHF0r4RZYF5M1UNdIGYhyRLivL9gydE=
22:44 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
22:46 JoeJulian As long as I'm looking, the reconfigured options are the defaults anyway.
22:46 JoeJulian Do you have any firewalls?
22:47 APag96 I don't believe so. I've verified that firewalld and iptables are off
22:47 APag96 These are fresh CentOS 7 installs on Digital Ocean
22:49 APag96 Glusterfs version 3.10.0 is on all servers by the way
22:50 JoeJulian ok
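(A quick way to rule the firewall in or out on a stock CentOS 7 box, for anyone following along; the hostname is APag96's test node and the ports are gluster's defaults, glusterd on 24007/tcp and bricks from 49152/tcp upward:)

    systemctl is-active firewalld
    iptables -S | head
    # bash's /dev/tcp trick, in case nc/telnet aren't installed
    timeout 2 bash -c '</dev/tcp/gluster-test01/24007' && echo "glusterd port open"
    timeout 2 bash -c '</dev/tcp/gluster-test01/49152' && echo "first brick port open"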
22:50 JoeJulian Does it only fail if it's that one server?
22:50 JoeJulian Oh, and when you say the mount "goes away" do you mean it's no longer mounted?
22:51 JoeJulian s/goes away/disappears/
22:51 glusterbot What JoeJulian meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
22:51 glusterbot JoeJulian: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
22:51 * JoeJulian whacks glusterbot
22:52 APag96 Correct. The mount will appear empty if I try to list the contents of it. If it's my pwd when gluster-test01 goes down, I'll get "Transport endpoint is not connected"
22:52 APag96 I'll test this when taking down gluster-test02. One moment..
22:53 JoeJulian But the lines you posted are the last lines the client shows when that happens?
22:53 JoeJulian I ask because that's not what should be there if the client ends.
22:54 tg2 joined #gluster
22:54 APag96 I'm afraid I'm not sure what you're asking. Sorry
22:56 JoeJulian when you stop 01 and the mounts disappear, the last lines in that client log I had you look at.
22:57 major this behavior was mentioned earlier as well
22:57 major mallorn reported the same thing
22:57 JoeJulian also on .10?
22:57 major yah
22:57 JoeJulian swell
22:57 major curiously, they confirmed that it doesn't happen on versions prior to .10
22:58 APag96 If I take down gluster-test02 instead of gluster-test03, the mount stays online on gluster-test01 and 03 and replication continues
22:59 major "mallorn: Another question...  We were running 3.7 for a while, and we would mount the gluster file system over fuse by referring to a specific node, like 'mount -t glusterfs controller:glance /var/lib/glance'.  If controller went down, the filesystem stayed up (because we presumed it was talking to other gluster servers).  Now, with 3.10, if the glusterd process is killed on the controller then every node
22:59 major mounting from there loses its filesystem.  Is there an option we need to set/change?"
22:59 APag96 So far, it looks like this only happens when I take down the node that I used when mounting the volume. Exactly like major said above
23:00 Klas joined #gluster
23:02 APag96 I can pull some fresh logs for debugging if you'd like. Just tell me what to do and what logs you want. I'll grab it all.
23:03 JoeJulian Ok, then I would try ,,(rrdns) or adding backup_volfile_servers to the mount
23:03 glusterbot You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/ktI6p
23:03 glusterbot joined #gluster
23:04 JoeJulian hmmm
23:04 APag96 I read online that the backup_volfile_servers option is only used when mounting initially. Is that correct?
23:04 APag96 This will go down even when the fuse mount is already mounted
23:05 JoeJulian It's an experiment.
23:05 JoeJulian Based on an educated guess.
23:05 APag96 OK, I'll give that a try now. I'll let you know what happens..
23:05 JoeJulian It's faster to have you try that than it is for me to read the source code. ;)
23:05 major hehe
23:06 major I am .. sadly .. neck deep in a random side project I suddenly created for myself .. I dunno how I keep doing this sort of thing to myself
23:06 APag96 Agreed!
23:06 APag96 I know what that's like
23:06 major would think I am some sort of code-masochist
23:10 APag96 By adding the -o backupvolfile-server=gluster-test02 to gluster-test03 and -o backupvolfile-server=gluster-test03 to gluster-test02 and then taking down gluster-test01, I can see that the mount points DO NOT go away
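Spelled out, the workaround APag96 just tested looks roughly like this; backupvolfile-server takes a single host, while newer mount.glusterfs also accepts a colon-separated backup-volfile-servers list. The volume name is a placeholder and the hostnames are APag96's test boxes:

    # one-shot mount with fallback volfile servers
    mount -t glusterfs -o backup-volfile-servers=gluster-test02:gluster-test03 \
        gluster-test01:/myvol /mnt/myvol
    # fstab equivalent
    gluster-test01:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,backup-volfile-servers=gluster-test02:gluster-test03  0 0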
23:17 APag96 I'm going to need to get off of here for a while. Is there anything else I can provide to help before I do?
23:25 JoeJulian Nope, thanks
23:26 APag96 Any time. Thank you for your help. Is there anywhere I can check back up on this if it's a bug?
23:26 JoeJulian I think I'll file a bug
23:27 glusterbot joined #gluster
23:27 JoeJulian ahem... I said...
23:27 JoeJulian I think I'll file a bug
23:27 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
23:27 JoeJulian That's better.
23:37 JoeJulian APag96: bug 1434617
23:37 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1434617 unspecified, unspecified, ---, bugs, NEW , mounts fail to remain connected if the mount server is brought down
23:42 APag96 Thanks @JoeJulian!
23:48 ttkg joined #gluster
