IRC log for #gluster, 2017-08-24

All times shown according to UTC.

Time Nick Message
00:09 JoeJulian tannerb3: my guess (and it's only a guess) is that the changelog btrfs database is corrupted on the brick that overflowed. I would look there and see if you can repair it.
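A minimal sketch of where the changelog data JoeJulian refers to typically lives on a brick, for inspection only; the brick path and volume name below are hypothetical, and any repair would depend on what the inspection turns up:

    # Hypothetical brick path -- substitute the brick that overflowed.
    BRICK=/data/brick1/gv0
    # Changelog journal files (present when changelog.changelog is enabled,
    # e.g. for geo-replication) live under the brick's .glusterfs directory.
    ls -lh "$BRICK/.glusterfs/changelogs/" | tail
    ls -lh "$BRICK/.glusterfs/changelogs/htime/"
    # Confirm whether the changelog translator is enabled on the volume.
    gluster volume get gv0 changelog.changelog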
00:37 masber joined #gluster
00:56 omie888777 joined #gluster
01:13 MadPsy joined #gluster
01:13 MadPsy joined #gluster
01:31 overclk joined #gluster
01:50 rastar joined #gluster
01:52 ilbot3 joined #gluster
01:52 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:02 gospod3 joined #gluster
02:03 kdhananjay joined #gluster
02:18 DV joined #gluster
02:19 jarrpa joined #gluster
02:25 skoduri joined #gluster
02:27 humblec joined #gluster
02:29 prasanth joined #gluster
02:35 msvbhat joined #gluster
02:39 MrAbaddon joined #gluster
03:00 bens__ joined #gluster
03:15 MrAbaddon joined #gluster
03:16 gyadav__ joined #gluster
03:38 kpease joined #gluster
03:43 ppai joined #gluster
03:45 MrAbaddon joined #gluster
03:49 riyas joined #gluster
04:04 itisravi joined #gluster
04:06 Guest9038 joined #gluster
04:09 jiffin joined #gluster
04:19 Shu6h3ndu joined #gluster
04:24 atinmu joined #gluster
04:24 gyadav_ joined #gluster
04:25 nbalacha joined #gluster
04:26 luizcpg joined #gluster
04:28 gyadav__ joined #gluster
04:52 karthik_us joined #gluster
05:00 ankitr joined #gluster
05:09 skumar joined #gluster
05:16 msvbhat joined #gluster
05:26 omie888777 joined #gluster
05:27 tru_tru joined #gluster
05:34 hgowtham joined #gluster
05:35 rafi joined #gluster
05:53 apandey joined #gluster
05:53 Saravanakmr joined #gluster
06:09 sanoj joined #gluster
06:11 sona joined #gluster
06:12 susant joined #gluster
06:13 flachtassekasse joined #gluster
06:19 atinmu joined #gluster
06:20 jtux joined #gluster
06:33 atinmu joined #gluster
06:33 rafi2 joined #gluster
06:35 rafi1 joined #gluster
06:37 Saravanakmr joined #gluster
06:44 purpleidea joined #gluster
06:44 purpleidea joined #gluster
06:58 tg2 joined #gluster
06:59 Acinonyx joined #gluster
07:00 dominicpg joined #gluster
07:07 jkroon joined #gluster
07:12 bEsTiAn joined #gluster
07:18 henkjan left #gluster
07:21 rafi joined #gluster
07:24 tg2 joined #gluster
07:27 _KaszpiR_ joined #gluster
07:32 MrAbaddon joined #gluster
07:39 fsimonce joined #gluster
07:40 kotreshhr joined #gluster
07:52 mbukatov joined #gluster
07:52 ashiq joined #gluster
07:56 ThHirsch joined #gluster
07:59 kotreshhr left #gluster
08:04 weller the documentation says you can create a quota for a directory that does not exist. that does not work for me on gluster 3.10.5. is that a bug, or is that feature disabled?
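For reference, the quota workflow weller is asking about looks roughly like this (volume name and directory are hypothetical); whether limit-usage accepts a path that does not exist yet is exactly the point in question:

    # Enable quota on the volume, set a limit on a directory (path is
    # relative to the volume root), then list the configured limits.
    gluster volume quota myvol enable
    gluster volume quota myvol limit-usage /archive 10GB
    gluster volume quota myvol list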
08:07 Guest9038 joined #gluster
08:22 ankitr joined #gluster
08:33 itisravi joined #gluster
08:39 skoduri joined #gluster
08:43 karthik_us joined #gluster
08:44 btspce joined #gluster
08:51 social joined #gluster
08:53 ankitr joined #gluster
09:06 sanoj joined #gluster
09:10 buvanesh_kumar joined #gluster
09:10 buvanesh_kumar joined #gluster
09:14 Guest9038 joined #gluster
09:39 ankitr joined #gluster
09:43 Acinonyx joined #gluster
09:44 MrAbaddon joined #gluster
09:56 atinmu joined #gluster
10:00 ankitr joined #gluster
10:01 bens_ joined #gluster
10:14 rafi joined #gluster
10:21 shyam joined #gluster
10:32 atinmu joined #gluster
10:32 ankitr joined #gluster
10:34 rastar joined #gluster
10:35 nbalacha joined #gluster
10:38 dominicpg joined #gluster
10:49 skoduri joined #gluster
11:03 TBlaar joined #gluster
11:10 hosom joined #gluster
11:11 luizcpg joined #gluster
11:14 psony joined #gluster
11:15 apandey joined #gluster
11:27 Guest9038 joined #gluster
11:29 sanoj joined #gluster
11:32 baber joined #gluster
11:40 jstrunk joined #gluster
11:46 Guest9038 joined #gluster
11:54 shyam joined #gluster
11:54 sona joined #gluster
12:01 flachtassekasse joined #gluster
12:08 shyu joined #gluster
12:19 rafi1 joined #gluster
12:19 susant joined #gluster
12:21 jiffin1 joined #gluster
12:48 X-ian joined #gluster
12:52 X-ian hi. my bricks are killed regularly at 7:05 am. finally tracked it down to an ENODEV on a readv(2) on /dev/fuse . /dev/fuse is dated 2017-06-23 (mtime,ctime). what's happening?
12:55 guhcampos joined #gluster
12:56 cloph guess you're mixing it up - likely first your bricks go down, then the glusterfs fuse mount fails because there are no bricks anymore...
12:58 susant joined #gluster
12:59 ankitr joined #gluster
13:00 nbalacha joined #gluster
13:00 X-ian let's get this straight: there is a process  /usr/sbin/glusterfs --acl --volfile-server=127.0.0.1 --volfile-id=/d_data /data  - this is the one getting killed
13:02 farhorizon joined #gluster
13:04 MrAbaddon joined #gluster
13:09 jkroon X-ian, i'd tone it down.  people here are here voluntarily.
13:10 jkroon what happens first - the ENODEV or the kill?
13:10 X-ian no offense whatsoever intended. sorry.
13:11 jkroon in other words - which is the cause and which is the effect.
13:11 jkroon X-ian, i completely understand, we all get very frustrated and it tends to overflow in the most minuscule ways, which normally we don't intend.
13:11 jkroon and also, which distribution are you using?
13:12 X-ian looks like the ENODEV happens and then a (that?) thread sends signal 15 to the co-threads
13:13 X-ian deb 8.9 w/ glusterfs 3.10.4-1
13:13 * cloph didn't take it as offensive either, just as an attempt to clarify, i.e. exactly what was asked for, so no worries :-)
13:16 rafi1 joined #gluster
13:18 * cloph also using debian 8.9, with gluster 3.10.5-1, and while geo-replicating I get "0-glusterfs-fuse: read from /dev/fuse returned -1 (Operation not permitted)" in the client log from time to time (which annoyingly spams the logs with tons of those) - it seems related to load, and not reproducible each day (and recovers by itself)
13:19 X-ian message here: [fuse-bridge.c:4975:fuse_thread_proc] 0-glusterfs-fuse: terminating upon getting ENODEV when reading /dev/fuse
13:20 shyam joined #gluster
13:20 baojg joined #gluster
13:24 jkroon hmm, so why would a readv on /dev/fuse give ENODEV
13:25 Guest9038 joined #gluster
13:25 X-ian no idea (yet)
13:26 jkroon hmm, ok, so the process that opens /dev/fuse is the fuse mount process.
13:26 jkroon as well as the killed process.
13:27 jkroon your bricks remain intact i'm guessing?
13:27 X-ian yes.
13:27 jkroon what time is your logrotates set to run?
13:29 plarsen joined #gluster
13:29 jkroon it looks like there was a discussion a while back on getting ENODEV either on umount or on triggering an abort via sysfs (fs/fuse/connections/NNN/abort)
13:30 X-ian they're run by cron.daily which terminates between 06:25 and 07:01
13:31 jkroon would you mind telling me what's in the postrotate sections of the glusterfs rotate?
13:33 jkroon I have "/usr/bin/killall -HUP glusterfs" in mine, which would send the FUSE process a HUP too.  This does not seem to cause problems.
13:33 jkroon i've seen on RHEL and CentOS that whenever logrotate happens some of the port numbers change (this does not happen on my system, Gentoo based).
13:34 X-ian /usr/bin/killall -HUP glusterfs > /dev/null 2>&1 || true  \\  /usr/bin/killall -HUP glusterd > /dev/null 2>&1 || true  \\  /usr/bin/killall -HUP glusterfsd > /dev/null 2>&1 || true  \\  [ ! -f /var/run/glusterd.pid ] || kill -HUP `cat /var/run/glusterd.pid`  \\  for pid in `ps -aef | grep glusterfs | egrep "\-\-aux-gfid-mount" | awk '{print $2}'`; do /usr/bin/kill -HUP $pid > /dev/null 2>&1 || true ; done
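Unflattened, that postrotate section reads roughly as below; the file name and the log glob are assumptions, while the commands themselves are taken verbatim from X-ian's paste:

    # /etc/logrotate.d/glusterfs-common  (file name is an assumption)
    /var/log/glusterfs/*.log {
        # rotation frequency / compression options omitted
        postrotate
            /usr/bin/killall -HUP glusterfs > /dev/null 2>&1 || true
            /usr/bin/killall -HUP glusterd > /dev/null 2>&1 || true
            /usr/bin/killall -HUP glusterfsd > /dev/null 2>&1 || true
            [ ! -f /var/run/glusterd.pid ] || kill -HUP `cat /var/run/glusterd.pid`
            for pid in `ps -aef | grep glusterfs | egrep "\-\-aux-gfid-mount" | awk '{print $2}'`; do
                /usr/bin/kill -HUP $pid > /dev/null 2>&1 || true
            done
        endscript
    }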
13:34 MrAbaddon joined #gluster
13:34 jkroon the implication is that on RHEL/CentOS the processes are getting restarted.
13:34 jkroon that looks sane.
13:36 jkroon http://www.spinics.net/lists/linux-fsdevel/msg104955.html
13:36 glusterbot Title: Re: [fuse-devel] fuse: feasible to distinguish between umount and abort? — Linux Filesystem Development (at www.spinics.net)
13:36 jkroon perhaps something is triggering an abort?
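For context, the sysfs interface jkroon is referring to looks roughly like this (the connection number 42 is a placeholder); an abort there, like a forced unmount, leaves the glusterfs client reading ENODEV from /dev/fuse:

    # Each active fuse connection gets a directory named after its device number.
    ls /sys/fs/fuse/connections/
    ls /sys/fs/fuse/connections/42/          # abort, waiting, ...
    # Writing to "abort" forcibly aborts that connection. Shown only to
    # illustrate what could produce the observed error -- do not run this
    # against a live mount.
    echo 1 > /sys/fs/fuse/connections/42/abort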
13:37 skylar joined #gluster
13:37 Shu6h3ndu joined #gluster
13:42 baojg joined #gluster
13:42 riyas joined #gluster
13:44 shyam joined #gluster
13:48 baojg joined #gluster
13:50 X-ian I'm not sure about the role of /dev/fuse. I've used FUSE as a component of sshfs and that was it up to now. :-)
13:55 farhorizon joined #gluster
14:12 X-ian so what am I gonna look for next? the kernel source?
14:18 WebertRLZ joined #gluster
14:19 aronnax joined #gluster
14:19 baojg joined #gluster
14:26 baojg joined #gluster
14:33 skoduri joined #gluster
14:40 pioto joined #gluster
14:41 cloph X-ian: as your problem is always triggered at the same time, look at what is happening at that time.
14:42 cloph Do you put much load on gluster itself? i.e. create/read lots of files at that time?
14:42 cloph If not, examine the cronjobs that are run more thoroughly.
14:43 cloph if you suspect a bug in fuse, consider mounting the volume using nfs
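A rough sketch of that NFS suggestion, assuming gluster's built-in gNFS server (NFSv3) is used; the volume name d_data and mount point /data are taken from X-ian's earlier message, and nfs.disable may already be off on this volume:

    # gNFS is disabled by default on newer volumes, so switch it on first.
    gluster volume set d_data nfs.disable off
    # Mount over NFSv3 instead of FUSE (adjust the server address as needed).
    mount -t nfs -o vers=3 127.0.0.1:/d_data /data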
14:50 farhorizon joined #gluster
15:01 baber joined #gluster
15:04 omie888777 joined #gluster
15:07 wushudoin joined #gluster
15:08 susant joined #gluster
15:09 jfarr joined #gluster
15:19 X-ian cloph: it's always 07:05 - but I have not found anything that would account for that (yet). logrotate and backup are long done.
15:20 prasanth joined #gluster
15:21 kpease joined #gluster
15:24 baojg joined #gluster
15:39 baojg joined #gluster
15:41 baber joined #gluster
15:48 MrAbaddon joined #gluster
16:09 farhorizon joined #gluster
16:10 susant joined #gluster
16:18 ivan_rossi left #gluster
16:19 snehring joined #gluster
16:24 mb_ joined #gluster
16:24 mb_ !nucprdsrv202 wol 00:1F:C6:9B:E5:99
16:25 mb_ !nucprdsrv202 wol 00:e0:4c:68:01:67
16:25 mb_ !macprdsrv208 wol 00:e0:4c:68:01:67
16:25 mb_ !macprdsrv208 wol 00:1F:C6:9B:E5:9
16:26 mb_ !nucprdsrv201 wol 00:e0:4c:68:01:67
16:26 misc ?
16:26 mb_ !nucprdsrv201 wol 00:1F:C6:9B:E5:99
16:26 misc mb_: wrong chan ?
16:36 luizcpg joined #gluster
16:36 jkroon joined #gluster
16:41 gyadav joined #gluster
16:54 ThHirsch joined #gluster
17:20 omie888777 joined #gluster
17:24 farhorizon joined #gluster
17:31 side_control joined #gluster
17:38 buvanesh_kumar joined #gluster
17:57 h4rry joined #gluster
18:07 guhcampos joined #gluster
18:16 buvanesh_kumar joined #gluster
18:22 h4rry joined #gluster
18:29 primehaxor joined #gluster
18:29 bowhunter joined #gluster
18:44 bowhunter joined #gluster
19:12 side_control joined #gluster
19:17 farhorizon joined #gluster
19:22 h4rry joined #gluster
19:29 merps joined #gluster
19:29 merps hello everyone
19:31 merps i'm having a problem with a simple three-node replicated gluster setup. we have an api server that updates a sqlite database on the filesystem and calls sync/fsync; an immediate request to read that file (after the first request returns) shows old data
19:31 merps i haven't had much luck finding guidance for modifying sync behaviour on glusterfs
19:31 merps i'm using version 3.10.5
19:32 merps can someone offer some advice or maybe a reference to where i should look?
19:32 merps i was expecting that glusterfs wouldn't return from an fsync/sync call until the data had been written to all replicated peers
19:33 JoeJulian That is true. I suspect you're getting something from a cache. You may need to disable performance translators.
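A sketch of the kind of performance translators JoeJulian means; these are standard volume options in 3.10, but which of them (if any) is responsible for the stale read here is an assumption to verify by disabling them one at a time and re-testing:

    # Candidate client-side caching translators ("gv0" is a placeholder volume name).
    gluster volume set gv0 performance.quick-read off
    gluster volume set gv0 performance.io-cache off
    gluster volume set gv0 performance.stat-prefetch off
    gluster volume set gv0 performance.read-ahead off
    gluster volume set gv0 performance.write-behind off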
19:33 sac joined #gluster
19:34 shruti joined #gluster
19:40 h4rry joined #gluster
19:43 ThHirsch joined #gluster
20:24 merps JoeJulian, do you have any idea which ones would affect the behaviour of fsync/sync?
21:21 MrAbaddon joined #gluster
21:24 farhorizon joined #gluster
21:40 bowhunter joined #gluster
21:42 bens__ joined #gluster
21:58 ij_ joined #gluster
22:19 h4rry joined #gluster
22:53 farhorizon joined #gluster
23:07 bootc joined #gluster
23:15 plarsen joined #gluster
23:39 farhorizon joined #gluster
23:50 mb_ joined #gluster
23:50 mb_ !nucprdsrv201 wol 00:1f:c6:9b:e5:99
23:51 mb_ !nucprdsrv201 wol 00:e0:4c:68:01:67
23:51 mb_ !nucprdsrv201 wol 00:e0:4c:68:01:67
23:51 mb_ !nucprdsrv201 wol 00:1f:c6:9b:e5:99
23:51 mb_ !nucprdsrv202 wol 00:1f:c6:9b:e5:99
23:52 mb_ !nucprdsrv202 wol 00:e0:4c:68:01:67
23:52 mb_ !nucprdsrv202 help
23:52 mb_ !nucprdsrv201 help
23:54 Gugge joined #gluster
23:55 mb_ joined #gluster
