
IRC log for #gluster, 2017-07-27


All times shown according to UTC.

Time Nick Message
00:02 csaba joined #gluster
00:38 Teraii_ joined #gluster
00:42 bwerthmann joined #gluster
00:46 johnnyNumber5 joined #gluster
01:09 anoopcs joined #gluster
01:29 riyas joined #gluster
01:52 ilbot3 joined #gluster
01:52 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:10 johnnyNumber5 joined #gluster
02:20 tamalsaha[m] joined #gluster
02:22 prasanth joined #gluster
02:22 bwerthmann joined #gluster
02:29 sona joined #gluster
02:42 ppai joined #gluster
02:49 skoduri joined #gluster
03:07 kramdoss_ joined #gluster
03:15 susant joined #gluster
03:25 loadtheacc joined #gluster
03:29 kotreshhr joined #gluster
03:51 om2 joined #gluster
03:52 nbalacha joined #gluster
03:56 sanoj joined #gluster
03:59 johnnyNumber5 joined #gluster
04:03 poornima joined #gluster
04:19 WebertRLZ joined #gluster
04:23 kramdoss_ joined #gluster
04:23 itisravi joined #gluster
04:25 kdhananjay joined #gluster
04:44 atinm joined #gluster
04:46 jiffin joined #gluster
04:51 sona joined #gluster
04:55 buvanesh_kumar joined #gluster
04:57 Shu6h3ndu joined #gluster
05:04 poornima joined #gluster
05:06 skumar joined #gluster
05:06 kotreshhr joined #gluster
05:07 Gambit15 joined #gluster
05:08 aravindavk joined #gluster
05:20 Saravanakmr joined #gluster
05:27 ndarshan joined #gluster
05:31 apandey joined #gluster
05:34 apandey joined #gluster
05:34 XpineX joined #gluster
05:35 hgowtham joined #gluster
05:36 kramdoss_ joined #gluster
05:41 sanoj joined #gluster
05:45 kdhananjay joined #gluster
06:02 kotreshhr joined #gluster
06:05 sahina joined #gluster
06:08 susant joined #gluster
06:13 kotreshhr joined #gluster
06:16 msvbhat joined #gluster
06:21 Prasad joined #gluster
06:24 buvanesh_kumar joined #gluster
06:25 jtux joined #gluster
06:28 susant joined #gluster
06:36 sahina joined #gluster
06:37 ankitr joined #gluster
06:38 shdeng joined #gluster
06:47 rafi1 joined #gluster
06:49 rastar joined #gluster
06:51 hgowtham joined #gluster
06:57 WebertRLZ joined #gluster
06:59 itisravi joined #gluster
07:02 sahina joined #gluster
07:10 mbukatov joined #gluster
07:14 ahino joined #gluster
07:14 skoduri joined #gluster
07:19 jkroon joined #gluster
07:27 [diablo] joined #gluster
07:32 ivan_rossi joined #gluster
07:43 subscope joined #gluster
08:03 buvanesh_kumar joined #gluster
08:15 rastar joined #gluster
08:16 msvbhat joined #gluster
08:26 [diablo] Good morning #gluster
08:26 [diablo] guys, do gluster fuse mounts now have the ability to share a sub-directory, or is it still just the entire volume, please?
08:34 buvanesh_kumar joined #gluster
08:37 poornima joined #gluster
08:43 _KaszpiR_ joined #gluster
08:47 DV joined #gluster
08:56 ashiq joined #gluster
08:56 ndevos [diablo]: entire volume only, but it is actively being worked on, and planned to be partially available with the upcoming 3.12 release
08:57 [diablo] morning ndevos ok cool, actually we use RHGS
08:57 [diablo] suppose it'll be a good while before it's in it
08:59 ndevos yeah, I dont know when it will be picked up by RHGS
08:59 _KaszpiR_ joined #gluster
09:01 riyas joined #gluster
09:02 ndevos if you have not done so yet, you can ask the Red Hat support team for the feature, managers need to know what customers want/need so they can prioritize the work
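For reference, the sub-directory export that ndevos mentions for 3.12 ended up using a mount syntax roughly like the sketch below; the volume name, path and client network are placeholders, and the exact auth.allow form should be checked against the 3.12 release notes.

    # client running glusterfs >= 3.12: mount only a sub-directory of the volume
    mount -t glusterfs server1:/myvol/projects /mnt/projects

    # optionally restrict which clients may mount which sub-directory
    gluster volume set myvol auth.allow "/projects(192.168.1.*),/(*)"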
09:10 msvbhat joined #gluster
09:15 jiffin1 joined #gluster
09:16 karthik_us joined #gluster
09:18 sona joined #gluster
09:32 sanoj joined #gluster
09:36 buvanesh_kumar joined #gluster
09:39 jiffin joined #gluster
10:07 Prasad_ joined #gluster
10:09 Prasad__ joined #gluster
10:15 shdeng joined #gluster
10:24 skumar_ joined #gluster
10:29 skoduri joined #gluster
10:32 MrAbaddon joined #gluster
10:35 poornima joined #gluster
10:40 bfoster joined #gluster
11:01 skumar__ joined #gluster
11:02 skumar_ joined #gluster
11:13 chawlanikhil24 joined #gluster
11:15 apandey_ joined #gluster
11:15 mb_ joined #gluster
11:16 mb_ !nucprdsrv202 wol 00:1f:c6:9c:34:24
11:16 mb_ !nucprdsrv202 wol 00:1f:c6:9c:34:24
11:17 mb_ !nucprdsrv202 help
11:17 mb_ !nucprdsrv201 help
11:17 mb_ !macprdsvr209 help
11:19 ankitr joined #gluster
11:22 baber joined #gluster
11:31 skoduri joined #gluster
11:35 apandey__ joined #gluster
11:58 [diablo] joined #gluster
11:59 martinetd joined #gluster
12:01 riyas joined #gluster
12:06 Marbug joined #gluster
12:07 Marbug Hi, I suppose #glusterfs is a wrong channel ?
12:08 Marbug Anyway, I've got a problem with my GlusterFS setup: a 4-node replicated volume with 1x4 bricks. The problem is that on node1 the logs are being spammed with "operation not permitted" errors, but I can't find the origin of the problem, nor can I find anything about it. So if anyone can help and point me in a specific direction it would be great
12:12 kkeithley what is the operation not permitted?  Please paste a bit of the logs with the message that's being spammed
12:13 kkeithley @paste
12:13 glusterbot kkeithley: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
12:14 Marbug kkeithley,
12:14 Marbug [2017-07-27 12:13:44.188234] E [MSGID: 113001] [posix-helpers.c:1180:posix_handle_pair] 0-share-posix: /var/lib/glusterd/glusterfs-store/share/.glusterfs/43/85/4385bcec-0a0e-49c8-849d-0ce7fe16728a: key:user.xdg.origin.urlflags: 0 length:58 [Operation not supported]
12:14 Marbug lines like that, each with a different file; I think around 15-20 lines/files per second are spammed, and all servers spam the next error the same number of times:
12:14 Marbug [2017-07-27 12:14:44.257690] I [MSGID: 108026] [afr-self-heal-common.c:770:afr_log_selfheal] 0-share-replicate-0: Completed metadata selfheal on 85d07d90-09d6-4771-b136-e0a1ff36f69b. sources=[1] 2  sinks=
12:14 kkeithley what version of gluster, what is the brick file system
12:14 Marbug [2017-07-27 12:14:44.260014] I [MSGID: 108026] [afr-self-heal-metadata.c:56:__afr_selfheal_metadata_do] 0-share-replicate-0: performing metadata selfheal on 366b288e-d032-4c6e-98b0-65503b8c50b8
12:14 kkeithley paste!
12:15 Marbug Oh, yeah it was just 3 lines, I think that is the maximum for a paste? :-)
12:15 Marbug I'm running glusterFS 3.7.20 on all nodes
12:17 kkeithley can't set xattr.  What is the brick file system? Is it xfs?
12:17 Marbug http://termbin.com/8ovj & http://termbin.com/7dfm
12:17 Marbug nope, it's ext4 kkeithley
12:18 kkeithley okay, ext4 should work.
12:19 kkeithley check mkfs options and mount options?
12:19 [diablo] joined #gluster
12:20 kkeithley esp. compared to the other nodes that aren't getting these errors
12:20 Marbug they haven't changed for over a year:
12:20 Marbug localhost:/share on /glusterfs-share type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
12:20 Marbug ook
12:20 Marbug :)
12:21 kkeithley oh, okay. interesting
12:21 Marbug ok, the other hosts, which don't show the error from the 1st termbin, have the following mount options added: relatime,user_id=0,group_id=0
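Mount options aside, a direct way to see whether the brick file system accepts the user xattrs that posix_handle_pair is failing on is to set one by hand on the brick; a quick sketch, with /path/to/brick as a placeholder for the brick directory on node1:

    # how the brick device is mounted (on ext4, look for nouser_xattr)
    grep ' /path/to/brick ' /proc/mounts

    # try to set and read back a user.* xattr directly on the brick
    touch /path/to/brick/xattr-test
    setfattr -n user.test -v hello /path/to/brick/xattr-test
    getfattr -n user.test /path/to/brick/xattr-test
    rm /path/to/brick/xattr-test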
12:22 brian83 joined #gluster
12:27 apandey joined #gluster
12:27 Marbug ok so the problem is on 1 node only, because when I reboot it, the logs don't get spammed anymore
12:29 kkeithley that's good.
12:31 Marbug yeah, but from when it's booted again, the logs get spammed :p
12:31 Marbug I saw that the options are the same on that 1st node as the 4th as they both run an older kernel, so it has nothing to do with the mount options
12:32 Marbug nvm :/
12:32 colm4 joined #gluster
12:32 Marbug the 4th node, also running an older kernel, also has the operation not supported :(
12:34 rwheeler joined #gluster
12:35 nbalacha joined #gluster
12:37 Marbug but it's clear that the problem is on node1 as all the spamming stops when I reboot that node
12:37 Marbug Is there a way to find out which files are linked to the hashed files? (/var/lib/glusterd/glusterfs-store/share/.glusterfs/b1/7d/b17d5dac-e2e4-48bd-8da6-31c5f8d3d714)
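For regular files the entry under .glusterfs is a hard link to the real file on the brick, so the path can be recovered by matching inodes; a sketch using the brick path and gfid from the question above:

    BRICK=/var/lib/glusterd/glusterfs-store/share
    GFID_FILE=$BRICK/.glusterfs/b1/7d/b17d5dac-e2e4-48bd-8da6-31c5f8d3d714

    # print every path on the brick that shares the same inode, skipping .glusterfs itself
    find "$BRICK" -samefile "$GFID_FILE" -not -path '*/.glusterfs/*'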
12:42 kdhananjay joined #gluster
12:49 [diablo] joined #gluster
12:56 msvbhat joined #gluster
13:02 leni1 joined #gluster
13:09 Marbug hmmm, with the command "gluster volume heal share info" I found the files it gives those errors on, but I can't seem to find anything wrong with the files; even the sha1 is the same. In the retrieved list the files are displayed by gfid
13:10 [diablo] joined #gluster
13:40 plarsen joined #gluster
13:42 skylar joined #gluster
13:45 poornima joined #gluster
13:46 MrAbaddon joined #gluster
13:50 DV joined #gluster
13:55 aravindavk joined #gluster
14:00 itisravi joined #gluster
14:05 [diablo] joined #gluster
14:08 rwheeler joined #gluster
14:33 Marbug ok it's solved, I removed every file from a working node which was marked as conflicted with the command
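The exact command isn't named above; for reference, the split-brain tooling available in the 3.7 CLI looks roughly like this (volume name taken from the log, brick path and file path as placeholders):

    # list the entries gluster itself flags as split-brain
    gluster volume heal share info split-brain

    # resolve one entry by declaring which brick holds the good copy
    gluster volume heal share split-brain source-brick node2:/path/to/brick /path/of/file/on/volume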
14:34 poornima joined #gluster
14:40 xiu /b 9
14:43 Saravanakmr joined #gluster
14:43 Saravanakmr_ joined #gluster
14:43 snehring joined #gluster
14:45 bwerthmann joined #gluster
14:46 jstrunk joined #gluster
14:52 farhorizon joined #gluster
14:58 johnnyNumber5 joined #gluster
15:04 farhorizon joined #gluster
15:12 wushudoin joined #gluster
15:18 MrAbaddon joined #gluster
15:23 aravindavk joined #gluster
15:35 om2 joined #gluster
15:39 bb0x joined #gluster
15:40 bb0x hi guys, I have a replicated volume across 2 servers ... gluster is deployed on top of LVM ... can I increase capacity by just extending the LVM/file system, or is there any configuration required?
15:40 bb0x thanks in advance
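The question above goes unanswered in the channel; for a replicated volume the usual approach is simply to grow the LV and file system under every replica brick, with no gluster-side reconfiguration needed. A sketch, with the VG/LV and brick mount point as placeholders:

    # run on each of the two replica servers
    lvextend -L +100G /dev/vg_gluster/lv_brick1
    xfs_growfs /bricks/brick1               # if the brick is XFS (takes the mount point)
    # resize2fs /dev/vg_gluster/lv_brick1   # if the brick is ext4 (takes the device)

    # the volume picks up the new size automatically
    df -h /bricks/brick1
    gluster volume status myvol detail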
15:45 bb0x joined #gluster
15:46 WebertRLZ joined #gluster
15:58 Jacob843 joined #gluster
16:05 vbellur1 joined #gluster
16:08 vbellur joined #gluster
16:09 johnnyNumber5 joined #gluster
16:16 msvbhat joined #gluster
16:17 fabianvf joined #gluster
16:31 sona joined #gluster
16:32 ivan_rossi left #gluster
16:36 shyam joined #gluster
16:45 kotreshhr1 joined #gluster
16:55 vbellur joined #gluster
16:57 johnnyNumber5 joined #gluster
16:59 susant joined #gluster
17:01 jiffin joined #gluster
17:01 martinetd Hi, got a silly question but.. mount.glusterfs adds gluster to PRUNEFS in updatedb.conf and I can't see any way around it, and we'd like to keep ONE client with gluster in the db (because locate rocks)
17:01 martinetd is there a semi-recommended way to go about that, or should we hack mount.glusterfs? Maybe trigger an action after mount with systemd? But I didn't find anything pretty for that
17:03 farhoriz_ joined #gluster
17:05 ndevos martinetd: hmm, interesting, there has been a report that fuse.glusterfs was not added to the updatedb.conf and that caused us to report https://bugzilla.redhat.com/show_bug.cgi?id=1331944
17:05 glusterbot Bug 1331944: medium, unspecified, ---, msekleta, NEW , Add fuse.glusterfs to PRUNEFS in /etc/updatedb.conf
17:05 martinetd oh, it does work on rhel7, believe me :D
17:05 ndevos martinetd: but, I see that it is still done in the mount.glusterfs script
17:05 kraynor5b joined #gluster
17:05 ndevos we dont have a recommended way to not update it, patches are welcome!
17:06 martinetd I think it would be better to update it in the mlocate package so admins can choose, with a sensible default that disables it
17:07 ndevos yes, that is the plan... I will probably strip out that bit from mount.glusterfs so that it is not done through that script anymore
17:07 martinetd Would be awesome, I can live with a kludge until then :)
17:10 Jacob843 joined #gluster
17:11 bwerthmann joined #gluster
17:11 ndevos martinetd: oh, you can have a simple workaround, just add a comment like "# prevent /sbin/mount.glusterfs from adding fuse.glusterfs to PRUNEFS"
17:11 martinetd Yeah, we've already done that, just weren't sure it's good in the long term
17:12 martinetd If I can say it'll get removed eventually I'll get away with it more easily :D
17:12 ndevos ok, yes, that should be fine for long term too - until mlocate updates the default file
17:13 ndevos that might be the case when RHEL-7.5 gets released, at least that is being evaluated through https://bugzilla.redhat.com/show_bug.cgi?id=1331870
17:13 glusterbot Bug 1331870: medium, high, pre-dev-freeze, msekleta, NEW , Looks like /etc/updatedb.conf does have a PRUNEFS variable updated with fuse.glusterfs
17:14 martinetd ndevos++ cheers :)
17:14 glusterbot martinetd: ndevos's karma is now 31
17:14 ndevos you're welcome!
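The workaround ndevos describes ends up looking like this in /etc/updatedb.conf on the one client that should keep indexing the gluster mount; it relies on mount.glusterfs apparently only checking whether the string fuse.glusterfs already appears somewhere in the file, and the PRUNEFS/PRUNEPATHS values shown are illustrative:

    # prevent /sbin/mount.glusterfs from adding fuse.glusterfs to PRUNEFS
    PRUNE_BIND_MOUNTS = "yes"
    PRUNEFS = "9p afs autofs binfmt_misc cgroup devpts nfs nfs4 proc sysfs tmpfs"
    PRUNENAMES = ".git .hg .svn"
    PRUNEPATHS = "/tmp /var/spool /media"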
17:24 poornima joined #gluster
17:27 Jacob843 joined #gluster
17:28 jiffin joined #gluster
17:33 farhorizon joined #gluster
17:42 bwerthmann joined #gluster
17:46 kotreshhr1 left #gluster
17:53 jiffin joined #gluster
17:57 farhorizon joined #gluster
17:57 kotreshhr joined #gluster
17:57 leni1 joined #gluster
18:00 msvbhat joined #gluster
18:01 kpease joined #gluster
18:02 johnnyNumber5 joined #gluster
18:05 kotreshhr left #gluster
18:08 farhorizon joined #gluster
18:19 msvbhat joined #gluster
18:31 farhoriz_ joined #gluster
18:35 shyam joined #gluster
18:42 jkroon joined #gluster
19:14 msvbhat joined #gluster
19:31 kraynor5b Hey guys I need help figuring out why my glusterd service will not start
19:31 kraynor5b My "etc-glusterfs-glusterd.vol.log" has the following messages at the tail
19:31 kraynor5b https://paste.fedoraproject.org/paste/RyXFePpS3L9WhmS78FMjrA
19:31 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
19:34 rwheeler joined #gluster
19:38 kraynor5b I'm running GlusterFS version 3.7.6
19:42 kraynor5b journalctl -xe yields the following as well
19:42 kraynor5b https://paste.fedoraproject.org/paste/iRg1EMaC2sIm3nWqaB0d0w
19:42 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
19:48 W_v_D joined #gluster
19:51 W_v_D joined #gluster
19:54 W_v_D joined #gluster
19:54 johnnyNumber5 joined #gluster
19:55 kkeithley dunno about the problem, but good grief, 3.7.6 is seriously ancient.  If you're running on something like Ubuntu Trusty or Xenial you can at least get 3.7.20 from the Gluster Community PPA
19:55 kkeithley @ppa
19:55 glusterbot kkeithley: The GlusterFS Community packages for Ubuntu are available at: 3.8: https://goo.gl/MOtQs9, 3.10: https://goo.gl/15BCcp
19:56 kkeithley Or do yourself a favor and get 3.8(.14) from the PPA. At least that's being maintained (for a few more weeks.)
19:57 kkeithley Or 3.10, which will be maintained even longer.
19:57 kraynor5b I'm running CentOS 7.2.1511; this was all set up at the beginning of last year. The system has no internet access and a pretty strict SLA for uptime, so I wasn't able to update it
19:57 shyam1 joined #gluster
19:58 kraynor5b should I rebuild that node?
19:58 kkeithley mm, okay. I guess your hands are tied.
19:58 kraynor5b unfortunately :'(
19:58 kkeithley but there are 3.8 and 3.10 in the CentOS Storage SIG. If you can get a window to update I'd highly recommend it
19:59 kraynor5b how is the upgrade process with Ganesha implemented as well?  any major changes that require modifications?
20:00 kkeithley should not need any mods. Config files haven't changed
20:01 kraynor5b so you would recommend updating before attempting to get that node in to a better state?
20:01 kkeithley I would
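On CentOS 7 the 3.10 packages kkeithley points at come from the CentOS Storage SIG; a rough sketch of the upgrade path (package names should be double-checked against the SIG repo, and an air-gapped host would need the RPMs mirrored locally first):

    # enable the CentOS Storage SIG repository for GlusterFS 3.10
    yum install centos-release-gluster310

    # one node at a time: stop gluster, update, start again
    systemctl stop glusterd
    yum update 'glusterfs*' 'nfs-ganesha*'
    systemctl start glusterd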
20:01 shyam left #gluster
20:02 shyam joined #gluster
20:04 kkeithley because somehow gluster is trying to set a -1000 timeout? That
20:04 MrAbaddon joined #gluster
20:04 kkeithley that's got to be a bug
20:05 kraynor5b Yeah I saw a bug report on something similar
20:05 kraynor5b "Add below line in /usr/lib/systemd/system/glusterd.service unit file  after ExecStart
20:06 kraynor5b TimeoutSec=0"
20:06 kraynor5b https://bugzilla.redhat.com/show_bug.cgi?id=1269536
20:06 glusterbot Bug 1269536: high, unspecified, ---, anekkunt, CLOSED WORKSFORME, Glusterd cannot be started while having a large amount of volumes
20:19 kraynor5b I'm not sure if adding the option "TimeoutSec=0" would do anything as the parameter being passed is wrong
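If the TimeoutSec workaround from that bug is tried anyway, a systemd drop-in keeps the change out of /usr/lib so package updates don't overwrite it; a sketch:

    mkdir -p /etc/systemd/system/glusterd.service.d
    cat > /etc/systemd/system/glusterd.service.d/timeout.conf <<'EOF'
    [Service]
    TimeoutSec=0
    EOF
    systemctl daemon-reload
    systemctl start glusterd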
20:21 mb_ joined #gluster
20:45 elitecoder Does Gluster 3.5.9 not work with 3.11? I did a peer probe ########## to a 3.11 from a 3.5.9, and the 3.11 server is spamming its logs with [2017-07-27 20:44:15.748102] E [MSGID: 106167] [glusterd-handshake.c:2186:__glusterd_peer_dump_version_cbk] 0-management: Error through RPC layer, retry again later
20:53 skylar joined #gluster
20:53 vbellur joined #gluster
21:15 farhorizon joined #gluster
21:16 skylar joined #gluster
21:26 shyam joined #gluster
21:35 johnnyNumber5 joined #gluster
21:38 johnnyNumber5 joined #gluster
21:39 farhoriz_ joined #gluster
21:40 kraynor5b kkeithley:  I'm really scared to update
21:40 kraynor5b haha
21:40 bwerthmann joined #gluster
21:41 * kraynor5b probably should not be working with technology if he is scared to update things
21:52 skylar joined #gluster
22:06 farhorizon joined #gluster
22:10 johnnyNumber5 joined #gluster
22:10 W_v_D joined #gluster
22:22 johnnyNumber5 joined #gluster
22:34 BlackoutWNCT joined #gluster
22:38 decay joined #gluster
22:41 glusterbot` joined #gluster
22:41 elitecod1r joined #gluster
22:41 nixpanic_ joined #gluster
22:41 nigelb joined #gluster
22:43 ilbot3 joined #gluster
22:43 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
22:43 BitByteNybble110 joined #gluster
22:44 samikshan joined #gluster
22:44 koolfy joined #gluster
22:44 thwam joined #gluster
22:45 ndevos_ joined #gluster
22:45 ndevos_ joined #gluster
22:45 samikshan joined #gluster
22:46 JonathanD joined #gluster
22:47 decayofmind joined #gluster
22:47 Wizek_ joined #gluster
22:49 elitecod1r Does Gluster 3.5.9 not work with 3.11? I did a probe peer ########## to a 3.11 from a 3.5.9, and the logs are filling up with errors.
22:49 elitecod1r I was hoping to migrate to 3.11 by adding in new bricks and dropping the old ones
22:49 JoeJulian I forget, does 3.5.9 have opversion?
22:50 Guest49918 JoeJulian: if you're referring to get opversion, that ... i could not get to work
22:51 Guest49918 sorry I'll get my nick back in a minute
22:52 elitecod3r The command to check opversion within gluster was not working for me; it didn't seem to recognize it
22:52 elitecod3r but i checked a file somewhere and it said opversion=2
22:59 elitecod3r # gluster volume get all cluster.op-version
22:59 elitecod3r unrecognized word: get (position 1)
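The "file somewhere" mentioned above is presumably glusterd's own state file; on releases too old for the volume-get command, the operating version can be read there directly:

    # works even where 'gluster volume get all cluster.op-version' is not recognized
    grep operating-version /var/lib/glusterd/glusterd.info
    # e.g. operating-version=2 on a 3.5-era install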
23:03 shyam joined #gluster
23:06 JoeJulian You cannot check opversion prior to 3.10, but the rpc versioning started long before that. I can't remember how long before though and 3.5 is ancient.
23:07 JoeJulian If it's prior to that being added, upgrading will require downtime.
23:08 elitecod3r ok
23:08 elitecod3r maybe i can set up a live rsync or something and take a webserver out of rotation, upgrade its gluster client and then pop it back in
23:08 elitecod3r and do the same with the next
23:14 elitecod3r i'll think about it
23:18 johnnyNumber5 joined #gluster
23:54 DV joined #gluster
