IRC log for #gluster, 2016-11-17

All times shown according to UTC.

Time Nick Message
00:05 Javezim Any way to limit the self-healing when you've added an arbiter to an existing volume? The cluster's going rather slow and I believe it's due to the number of self-heals it's trying to perform
00:15 plarsen joined #gluster
00:24 Javezim Sort of throttle it?
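[Editor's note: this question goes unanswered in the log. A minimal sketch of the options commonly used to reduce self-heal impact, assuming a hypothetical volume named gv0; option names and defaults should be verified against `gluster volume set help` on the installed release:

    # leave healing to the self-heal daemon rather than client mounts
    gluster volume set gv0 cluster.data-self-heal off
    gluster volume set gv0 cluster.metadata-self-heal off
    gluster volume set gv0 cluster.entry-self-heal off
    # shrink the per-client background heal queue
    gluster volume set gv0 cluster.background-self-heal-count 4
]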
00:25 Klas joined #gluster
00:49 farhorizon joined #gluster
00:58 farhorizon joined #gluster
01:02 shdeng joined #gluster
01:02 Javezim itisravi If it keeps performing like this we will likely just stop using the arbiter; the self-healing is just too slow and is causing the entire cluster to slow down. If I wanted to remove the arbiter bricks from my Dist-Replica setup - http://paste.ubuntu.com/23488131/ - would the command just be http://paste.ubuntu.com/23488138/? It would then just go back to using the metadata from each replica server?
01:02 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
01:03 shdeng joined #gluster
01:04 ankitraj joined #gluster
01:15 ic0n_ joined #gluster
01:27 suliba joined #gluster
01:33 jkroon joined #gluster
01:44 suliba joined #gluster
01:47 dnorman joined #gluster
02:02 shdeng joined #gluster
02:06 Gambit15 joined #gluster
02:08 derjohn_mobi joined #gluster
02:12 dnorman joined #gluster
02:14 haomaiwang joined #gluster
02:15 dengjin joined #gluster
02:22 pfactum joined #gluster
02:24 jri_ joined #gluster
02:32 ankitraj joined #gluster
02:37 Lee1092 joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:50 nbalacha joined #gluster
02:52 Peppard joined #gluster
03:02 nathwill joined #gluster
03:03 nathwill joined #gluster
03:04 nathwill joined #gluster
03:17 cliluw joined #gluster
03:29 kramdoss_ joined #gluster
03:34 Javezim Might explain it - in the brick logs on the arbiter I'm seeing a tonne of - http://paste.ubuntu.com/23488549/
03:34 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
03:34 Javezim We don't use ACLs; does anyone know what these would apply to?
03:34 magrawal joined #gluster
03:46 atinm joined #gluster
04:02 shubhendu joined #gluster
04:05 nathwill joined #gluster
04:14 itisravi joined #gluster
04:21 jiffin joined #gluster
04:21 RameshN joined #gluster
04:23 sanoj joined #gluster
04:24 suliba joined #gluster
04:40 rafi joined #gluster
04:40 buvanesh_kumar joined #gluster
04:53 kdhananjay joined #gluster
04:57 prasanth joined #gluster
05:04 apandey joined #gluster
05:05 riyas joined #gluster
05:07 Jacob8432 joined #gluster
05:20 magrawal_ joined #gluster
05:23 ankitraj joined #gluster
05:24 nishanth joined #gluster
05:26 karthik_us joined #gluster
05:33 ndarshan joined #gluster
05:33 riyas joined #gluster
05:41 shdeng joined #gluster
05:42 skoduri joined #gluster
05:43 sanoj_ joined #gluster
05:49 aravindavk joined #gluster
06:00 msvbhat joined #gluster
06:03 jkroon joined #gluster
06:05 Saravanakmr joined #gluster
06:06 hgowtham joined #gluster
06:06 sbulage joined #gluster
06:07 nathwill joined #gluster
06:17 crag joined #gluster
06:21 prth joined #gluster
06:22 ppai joined #gluster
06:22 cliluw joined #gluster
06:23 rastar joined #gluster
06:25 crag joined #gluster
06:26 itisravi joined #gluster
06:34 hchiramm joined #gluster
06:42 Klas I'm getting weird errors with a volume
06:42 Klas no authentication module is interested in accepting remote-client (null)
06:43 Klas running 3.7.15
06:42 Klas seems to have something to do with the auth.allow list (just tested "*")
06:48 Klas hmmm, I'm starting to wonder if it might be related to ipv6
06:49 Klas nope, it doesn't seem to listen on ipv6
06:54 ashiq joined #gluster
07:02 poornima_ joined #gluster
07:08 nathwill joined #gluster
07:14 Philambdo joined #gluster
07:16 Klas I can mount toward localhost and toward the local IP, but not toward the other servers (this is a mount shared by all the servers)
07:16 Klas barring the first IP in the auth.allow
07:16 Klas that one can mount towards all the servers
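[Editor's note: a sketch of inspecting and setting the allow list being debugged here, assuming a hypothetical volume gv0 and GlusterFS 3.7+ (where `volume get` exists); the addresses are placeholders:

    # show the current value
    gluster volume get gv0 auth.allow
    # comma-separated addresses; wildcards work, '*' allows everyone
    gluster volume set gv0 auth.allow 192.168.10.*,10.0.0.5
]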
07:20 apandey joined #gluster
07:21 apandey joined #gluster
07:21 jtux joined #gluster
07:37 rofl_____ joined #gluster
07:38 poornima_ joined #gluster
07:44 [diablo] joined #gluster
07:44 atrius joined #gluster
07:50 jkroon joined #gluster
07:50 apandey joined #gluster
07:54 witsches joined #gluster
07:58 ivan_rossi joined #gluster
08:03 msvbhat joined #gluster
08:05 derjohn_mob joined #gluster
08:06 ivan_rossi joined #gluster
08:10 jkroon_ joined #gluster
08:13 devyani7 joined #gluster
08:13 ivan_rossi left #gluster
08:18 loadtheacc joined #gluster
08:30 jri joined #gluster
08:36 fsimonce joined #gluster
08:40 riyas joined #gluster
08:43 msvbhat_ joined #gluster
08:49 msvbhat joined #gluster
08:50 poornima joined #gluster
08:53 derjohn_mob joined #gluster
08:56 flying joined #gluster
09:00 Slashman joined #gluster
09:02 ivan_rossi joined #gluster
09:02 jri_ joined #gluster
09:03 hackman joined #gluster
09:05 Javezim Can anyone help with this - we want to remove the arbiter bricks from our Replica 3 Arbiter 1 setup; the self-heal is going to take too long and the performance decrease is too much. If I wanted to remove the arbiter bricks from my Dist-Replica setup - http://paste.ubuntu.com/23488131/ - would the command just be http://paste.ubuntu.com/23488138/? It would then just go back to using the metadata from each replica server. Would it try and migrate this data first, or just remove them straight away and go back to a replica 2?
09:05 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
09:10 nathwill joined #gluster
09:10 ahino joined #gluster
09:12 prth joined #gluster
09:13 kdhananjay joined #gluster
09:14 itisravi Javezim: `gluster volume remove brick replica 2 gvo <all  9 bricks> force`
09:15 itisravi Javezim: it is bad though that the heal is taking so long for you.
09:16 bluenemo joined #gluster
09:17 Javezim Thanks itisravi - `gluster volume remove brick replica 2 gvo <all 9 bricks> force` - so this will convert it to a replica 2? Will it try to migrate all the data or will it remove them immediately?
09:17 Javezim Yeah, it's honestly awful; the cluster is so slow with it going
09:17 itisravi Javezim: no migration, it will just remove it immediately.
09:18 itisravi s/remove brick/ remove-brick
09:18 itisravi sorry for the typo.
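[Editor's note: folding in itisravi's typo fix above, a sketch of the full command being described, with the volume name gvo from the conversation and the bricks left as the original placeholder; note that in the usual remove-brick grammar the volume name precedes the replica count:

    gluster volume remove-brick gvo replica 2 <all 9 arbiter bricks> force
]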
09:24 Javezim Thanks itisravi, I figure we may build a new cluster from scratch starting with an Arbiter
09:24 Javezim Adding one is too painful!
09:29 itisravi Javezim: no prob. If you are able to create some reproducer where add-brick is not healing the files, just file a bug with the details.
09:29 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
09:34 rafi joined #gluster
09:34 prth joined #gluster
09:35 rafi joined #gluster
09:40 karthik_us joined #gluster
09:56 atinm joined #gluster
09:58 xw joined #gluster
09:58 xw http://download.gluster.org/pub/gluster/glusterfs/3.9/rsa.pub is not accessible
10:01 thereisnospoon joined #gluster
10:03 panina joined #gluster
10:05 Bhaskarakiran joined #gluster
10:11 msn2 joined #gluster
10:12 rafi1 joined #gluster
10:21 rastar joined #gluster
10:23 derjohn_mob joined #gluster
10:26 atinm joined #gluster
10:29 abyss^ JoeJulian: but I should zero this attr on one of the folder?
10:42 witsches joined #gluster
10:47 msvbhat joined #gluster
10:50 zat joined #gluster
10:52 atinm joined #gluster
10:54 Plam question about 2 replicated Gluster nodes, with a client accessing them as block storage via iSCSI with multipath
10:55 msvbhat joined #gluster
10:55 Plam every time I reboot one of the Gluster nodes, from the client's perspective files are corrupted.
10:55 Plam I suspect the client reconnects to the "bad" node before the heal is done
10:55 Plam is it possible?
10:56 Plam is there a way to tell the iSCSI target (LIO) to expose only if the node is OK? Is there a mechanism in Gluster to prevent file access while it's not healed?
11:03 jiffin pkalever: ^^
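[Editor's note: whatever the iSCSI layer does, pending heals can be checked before re-admitting a node. A sketch, assuming a hypothetical volume gv0:

    # files still needing heal; an empty list on every brick means the replicas agree
    gluster volume heal gv0 info
    # per-brick pending-heal counts, where the installed release supports it
    gluster volume heal gv0 statistics heal-count
]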
11:07 rastar joined #gluster
11:08 dlambrig_ joined #gluster
11:11 nathwill joined #gluster
11:12 eKKiM joined #gluster
11:12 eKKiM left #gluster
11:19 armin joined #gluster
11:27 Caveat4U joined #gluster
11:27 Sebbo2 joined #gluster
11:28 dlambrig_ joined #gluster
11:34 guhcampos joined #gluster
11:37 armin joined #gluster
11:38 Marbug joined #gluster
11:39 magrawal_ joined #gluster
11:42 dlambrig_ joined #gluster
11:50 Plam it seems I hit the issue described in this old blog post: https://blog.gocept.com/2011/06/27/no-luck-with-glusterfs/
11:51 Plam > But disconnecting a replicated file server which had the newer copy of a VM image before the other file server has caught up would render the VM unusable
11:57 rwheeler joined #gluster
11:57 armin joined #gluster
11:57 Wizek joined #gluster
11:58 ivan_rossi left #gluster
12:03 Klas I "solved" my issue by mounting towards localhost on all server nodes
12:09 atinm joined #gluster
12:12 nathwill joined #gluster
12:18 mhulsman joined #gluster
12:23 jkroon joined #gluster
12:32 jiffin1 joined #gluster
12:36 kramdoss_ joined #gluster
12:39 skoduri joined #gluster
12:44 nbalacha joined #gluster
12:45 witsches joined #gluster
12:49 poornima joined #gluster
12:50 owitsches joined #gluster
12:50 johnmilton joined #gluster
12:54 ashiq joined #gluster
12:55 arpu hi, how can I mark all the data packets glusterfs uses to transfer files with iptables? What port does glusterfs use?
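[Editor's note: this also goes unanswered in the log. glusterd listens on TCP 24007, and since 3.4 each brick daemon takes a port allocated from 49152 upward; `gluster volume status` shows the actual brick ports. A hedged iptables sketch for marking that traffic (the upper bound of the range is an assumption):

    # management traffic
    iptables -t mangle -A OUTPUT -p tcp --dport 24007 -j MARK --set-mark 0x1
    # brick (data) traffic, one port per brick starting at 49152
    iptables -t mangle -A OUTPUT -p tcp --dport 49152:49251 -j MARK --set-mark 0x1
]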
12:56 TvL2386 joined #gluster
13:03 jkroon joined #gluster
13:07 owitsches joined #gluster
13:10 owitsches joined #gluster
13:10 anonbeat joined #gluster
13:12 anonbeat hello. In Debian 8, how can I mount gluster volumes from fstab at boot time? I currently have the options 'defaults,_netdev,fetch-attempts=10' but it fails
13:12 anonbeat please note that the server and the client are at the same nodes
13:13 nathwill joined #gluster
13:14 johnmilton joined #gluster
13:14 cloph no nice solution - there is no reliable signal that says "gluster is done initializing, the volumes are available" - so your best bet is to stick a "mount -a" or similar in your rc.local
13:16 rwheeler joined #gluster
13:19 anonbeat thanks
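[Editor's note: on a systemd-based release such as Debian 8, a common alternative to the rc.local trick is to defer the mount until first access. A sketch, assuming a hypothetical volume gv0 mounted from localhost (compare Klas's workaround at 12:03):

    # /etc/fstab -- noauto plus x-systemd.automount mounts on first access,
    # by which time glusterd has usually finished starting
    localhost:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,noauto,x-systemd.automount  0  0

or, per cloph's suggestion, in /etc/rc.local before the final 'exit 0':

    mount -a -t glusterfs
]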
13:24 owitsches joined #gluster
13:29 owitsches joined #gluster
13:30 Saravanakmr joined #gluster
13:30 shaunm joined #gluster
13:38 XpineX joined #gluster
13:41 d0nn1e joined #gluster
13:41 msn joined #gluster
13:47 Wizek joined #gluster
13:50 nbalacha joined #gluster
13:54 jiffin1 joined #gluster
13:55 shyam joined #gluster
13:56 unclemarc joined #gluster
13:58 kdhananjay joined #gluster
14:00 ankitraj joined #gluster
14:02 ashiq joined #gluster
14:08 owitsches joined #gluster
14:09 B21956 joined #gluster
14:13 dlambrig_ joined #gluster
14:14 annettec joined #gluster
14:14 nathwill joined #gluster
14:21 porunov joined #gluster
14:22 RameshN joined #gluster
14:22 plarsen joined #gluster
14:23 prth joined #gluster
14:23 dlambrig_ joined #gluster
14:24 porunov Hello, when will gluster 3.9 be a stable release?
14:26 jiffin porunov: it is an STM release (short-term maintenance; EOL after 3 months)
14:27 porunov jiffin: thank you!
14:28 atrius joined #gluster
14:29 skylar joined #gluster
14:31 rafi joined #gluster
14:36 rafi joined #gluster
14:43 Shu6h3ndu joined #gluster
14:47 dnorman joined #gluster
14:51 owitsches joined #gluster
14:52 itisravi joined #gluster
14:56 Caveat4U joined #gluster
15:00 prth joined #gluster
15:00 squizzi joined #gluster
15:07 Lee1092 joined #gluster
15:09 sanoj joined #gluster
15:12 rafi joined #gluster
15:15 nathwill joined #gluster
15:33 squizzi joined #gluster
15:33 pkalever joined #gluster
15:34 Caveat4U joined #gluster
15:38 kpease joined #gluster
15:39 prth joined #gluster
15:40 dlambrig_ joined #gluster
15:42 nathwill joined #gluster
15:42 farhorizon joined #gluster
15:42 Caveat4U joined #gluster
15:45 Caveat4U_ joined #gluster
15:49 rwheeler joined #gluster
16:03 B21956 joined #gluster
16:06 shyam joined #gluster
16:10 RameshN joined #gluster
16:19 dlambrig_ joined #gluster
16:36 porunov joined #gluster
16:38 porunov The GlusterFS 3.8 command 'gluster system:: execute gsec_create' fails with 'gsync peer_gsec_create command not found.' Does somebody know how to fix it?
16:39 suliba joined #gluster
16:45 derjohn_mob joined #gluster
16:46 JoeJulian porunov: what distro?
16:48 blues-man joined #gluster
16:49 porunov centos 7.0
16:50 porunov JoeJulian: I have checked the version. It is 3.8.5. I run on CentOS 7
16:51 blues-man hello, using ganesha ha with latest gluster, I found out that if I configure nodes with FQDN, ganesha heatbeat doesn't work actually due using hostname short name: https://github.com/gluster/glusterfs/blob/master/extras/ganesha/ocf/ganesha_mon#L190
16:51 glusterbot Title: glusterfs/ganesha_mon at master · gluster/glusterfs · GitHub (at github.com)
16:53 blues-man instead of using server1, server2, here I'm using FQDNs such as server1.lab.redhat.com, server2.lab.redhat.com, etc.: https://gluster.readthedocs.io/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Integration/
16:53 glusterbot Title: Configuring NFS-Ganesha server - Gluster Docs (at gluster.readthedocs.io)
16:53 haomaiwang joined #gluster
16:54 blues-man I get a lot of warnings on this line: https://github.com/gluster/glusterfs/blob/master/extras/ganesha/ocf/ganesha_mon#L196
16:54 glusterbot Title: glusterfs/ganesha_mon at master · gluster/glusterfs · GitHub (at github.com)
16:54 blues-man s/heatbeat/heartbeat/
16:54 glusterbot What blues-man meant to say was: hello, using ganesha ha with latest gluster, I found out that if I configure nodes with FQDN, ganesha heartbeat doesn't work actually due using hostname short name: https://github.com/gluster/glusterfs/blob/master/extras/ganesha/ocf/ganesha_mon#L190
16:54 hackman joined #gluster
16:55 JoeJulian porunov: From what I can see of the spec file, peer_gsec_create should be in /usr/libexec/glusterfs/
16:56 JoeJulian blues-man: Can you configure the search parameter in resolv.conf?
16:56 blues-man I was wondering if it is worth opening a bugzilla or if this is actually expected
16:56 blues-man JoeJulian: yes it is configured and working
16:57 dlambrig_ joined #gluster
16:58 kkeithley blues-man: yes, file a BZ
16:59 JoeJulian +1
16:59 blues-man ok fine
17:00 kkeithley otherwise I might forget ;-)
17:00 porunov JoeJulian: It isn't there. In that directory I have only 'peer_add_secret_pub' and a directory 'glusterfind'. Where can I find peer_gsec_create? I have installed glusterfs from the epel repository
17:00 kkeithley I can't forget if there's a BZ
17:01 blues-man eheh right
17:01 JoeJulian porunov: Looks like it's in the glusterfs-geo-replication package.
17:05 farhorizon joined #gluster
17:06 porunov JoeJulian: Yes it is there. Thank you very much for your help!
17:07 JoeJulian porunov: fyi, for next time you need something installed that you can't find, "dnf provides '*/peer_gsec_create'" is how you could have figured that out.
17:15 porunov JoeJulian: Thank you for your advice!
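[Editor's note: CentOS 7 ships yum rather than dnf, so the equivalent there would be:

    yum provides '*/peer_gsec_create'
    # then install the package it names
    yum install glusterfs-geo-replication
]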
17:15 skoduri joined #gluster
17:16 Muthu joined #gluster
17:18 hchiramm joined #gluster
17:18 nishanth joined #gluster
17:18 kkeithley blues-man: actually, you don't need to file a BZ.
17:18 armin joined #gluster
17:19 kkeithley if you configure with FQDN, then line 190 always fails, but line 194 should then work.
17:21 kkeithley unless you see that line 194 doesn't work. In which case then file a BZ
17:28 blues-man yes, I see that line too, kkeithley; I was mentioning line 190 to show that `hostname -s` is used, but 194 is more correct to mention
17:28 blues-man I opened a BZ in case it's useful: https://bugzilla.redhat.com/show_bug.cgi?id=1396194
17:28 glusterbot Bug 1396194: unspecified, unspecified, ---, bugs, NEW , Ganesha heartbeat through crm_attribute send warnings if nodes are configured with FQDN
17:53 shaunm joined #gluster
18:04 squizzi joined #gluster
18:04 dataio joined #gluster
18:13 jkroon joined #gluster
18:17 shyam joined #gluster
18:17 MidlandTroy joined #gluster
18:20 derjohn_mob joined #gluster
18:22 farhoriz_ joined #gluster
18:26 dnorman joined #gluster
18:42 porunov joined #gluster
18:53 farhorizon joined #gluster
18:59 dlambrig_ joined #gluster
19:07 shyam joined #gluster
19:10 haomaiwang joined #gluster
19:13 bluenemo joined #gluster
19:14 ahino joined #gluster
19:21 dnorman joined #gluster
19:25 panina joined #gluster
19:44 bluenemo joined #gluster
19:52 bluenemo joined #gluster
19:56 bluenemo joined #gluster
19:59 Micha2k_ joined #gluster
20:01 farhoriz_ joined #gluster
20:07 Micha2k joined #gluster
20:10 Micha2k_ joined #gluster
20:21 zat joined #gluster
20:33 arpu joined #gluster
20:38 squizzi joined #gluster
20:40 plarsen joined #gluster
20:44 dnorman joined #gluster
20:49 ahino joined #gluster
20:54 farhorizon joined #gluster
21:55 dnorman joined #gluster
22:04 Javezim joined #gluster
22:04 Micha2k_ joined #gluster
22:15 shaunm joined #gluster
22:28 gnulnx joined #gluster
22:28 gnulnx Hi everyone.  I checked out the origin/release-3.7 branch, built and installed it, and I'm at 3.7.16, though the release notes for that branch show 3.7.17.  What gives?
22:44 JoeJulian gnulnx: v3.7.17 was tagged at 8b95ebaf070f3356c37a0929f3519867557e6e91 and a subsequent commit was made against it updating the release-notes and the release was re-tagged to c11131fcdf47c4f0144b0ee1709e8c4bb05dac08. That commit was never merged back in to release-3.7 so the last tag that git describe can find that's in the release-3.7 tree is v3.7.16.
22:44 JoeJulian That seems like a bug to me.
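[Editor's note: a sketch of verifying this from a clone with stock git, using the tag and branch names quoted above:

    # exits non-zero: the v3.7.17 tag's commit was never merged back into the branch
    git merge-base --is-ancestor v3.7.17 origin/release-3.7 || echo "v3.7.17 not on release-3.7"
    # so describe falls back to the newest reachable tag
    git describe origin/release-3.7    # prints something like v3.7.16-<n>-g<sha>
]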
22:47 haomaiwang joined #gluster
22:50 gnulnx That would explain it
22:50 gnulnx Thanks JoeJulian
22:51 JoeJulian The correction, it would seem to me, would be to rebase release-3.7 off v3.7.17.
23:13 farhoriz_ joined #gluster
23:33 Caveat4U joined #gluster
23:41 Caveat4U joined #gluster
23:44 shyam joined #gluster
