IRC log for #gluster, 2017-04-14

All times shown according to UTC.

Time Nick Message
00:00 moneylotion joined #gluster
00:23 farhorizon joined #gluster
00:35 plarsen joined #gluster
01:31 shdeng joined #gluster
01:46 jdossey joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:58 derjohn_mob joined #gluster
02:28 daMaestro joined #gluster
02:38 purpleidea joined #gluster
02:38 purpleidea joined #gluster
02:39 om2_ joined #gluster
03:02 susant joined #gluster
03:04 sbulage joined #gluster
03:24 nishanth joined #gluster
03:28 riyas joined #gluster
03:29 Gambit15 joined #gluster
03:30 siel joined #gluster
03:45 purpleidea joined #gluster
03:45 purpleidea joined #gluster
03:53 siel joined #gluster
03:53 siel joined #gluster
04:50 susant joined #gluster
04:50 susant left #gluster
05:06 msvbhat joined #gluster
05:42 nishanth joined #gluster
05:45 gyadav joined #gluster
05:45 gyadav_ joined #gluster
05:50 prasanth joined #gluster
06:40 ayaz joined #gluster
06:49 susant joined #gluster
07:17 ahino joined #gluster
07:35 msvbhat joined #gluster
07:41 jwd joined #gluster
07:46 AshuttoB joined #gluster
07:46 AshuttoB Hello
07:46 glusterbot AshuttoB: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer a
07:46 AshuttoB I frequently get "transport endpoint is not connected" errors. I'm using gluster 3.10. Is there some "global knowledge" about it?
08:04 ndevos AshuttoB: that error tends to occur when the fuse mount process (it's a daemon) exits unexpectedly (killed by something, or crashed)
08:05 AshuttoB fair enough. Is there a common workaround?
08:05 ndevos AshuttoB: you should be able to get some pointers in /var/log/glusterfs/<mount-point>.log
08:05 ndevos no real workaround, figure out what causes it to exit and address that
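Recovery after the client daemon dies usually means a lazy unmount plus a remount; a minimal sketch, assuming shell access on the client (the mount point and server:/volume names here are hypothetical, not taken from the log):

```shell
#!/bin/sh
# Sketch: recover a stale fuse mount after the glusterfs client process died.
MNT=/mnt/gluster            # hypothetical mount point
SRV=server1:/vol-cor-homes  # hypothetical server, volume name from the discussion

# Lazy unmount clears the stale "transport endpoint is not connected" handle
umount -l "$MNT" 2>/dev/null || true

# Remounting spawns a fresh glusterfs fuse client daemon
mount -t glusterfs "$SRV" "$MNT" || echo "remount failed (no gluster client on this host)"
```

This only clears the symptom; as ndevos says, the underlying cause of the exit still needs to be found.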
08:06 AshuttoB I have no "E" (error) entries in the logs
08:07 AshuttoB just some info bragging about "0-vol-cor-homes-dht: Found anomalies in" (btw, which anomalies?)
08:07 AshuttoB where "vol-cor-homes" is my volume
08:08 AshuttoB (gfid = 4b2ceb0f-e89e-4c48-89eb-8fc58f44048a). Holes=1 overlaps=0
08:10 AshuttoB there is nothing more till my restart
08:24 ndevos a "hole" is basically a hash-range that is not assigned to a particular brick (per directory, the gfid one), no idea what happens to newly created files that have a hash-value in the "hole"
08:25 ndevos this should not happen, and a directory (on all bricks combined) should have the full hash-range
08:26 ndevos it could be a problem when a brick has been removed at one point, and no rebalance/fix-layout has been performed
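A sketch of how one might inspect and repair such a layout hole, assuming shell access on a brick node with the gluster CLI; the brick-side directory path is hypothetical, the volume name is the one from the discussion:

```shell
#!/bin/sh
# Sketch: inspect a directory's DHT layout per brick, then rebuild layouts.
VOL=vol-cor-homes                     # volume named in the discussion
BRICK_DIR=/bricks/brick1/homes/somedir  # hypothetical brick-side directory

# Each brick stores its assigned hash range for a directory in the
# trusted.glusterfs.dht xattr; comparing ranges across bricks shows
# holes (uncovered ranges) and overlaps.
if command -v getfattr >/dev/null 2>&1 && [ -d "$BRICK_DIR" ]; then
    getfattr -n trusted.glusterfs.dht -e hex "$BRICK_DIR"
fi

# fix-layout reassigns hash ranges so every directory covers the full
# range, without migrating file data (a full rebalance would also move files).
if command -v gluster >/dev/null 2>&1; then
    gluster volume rebalance "$VOL" fix-layout start
    gluster volume rebalance "$VOL" status
else
    echo "gluster CLI not available on this host"
fi
```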
08:27 ndevos do you happen to have a kind of stack trace in the log, just before "Started running glusterfs version" is mentioned?
08:28 ndevos a stack trace suggests that the process crashed, and maybe a core file in /core.* or /var/log/core/.... or such is available
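The checks ndevos describes can be scripted along these lines; the log file name is hypothetical (glusterfs derives it from the mount point, with slashes turned into dashes, e.g. `/mnt/gluster` becomes `mnt-gluster.log`):

```shell
#!/bin/sh
# Sketch: look for evidence that the fuse mount process crashed.
LOG=${LOG:-/var/log/glusterfs/mnt-gluster.log}  # hypothetical log name

if [ -r "$LOG" ] && grep -q "signal received" "$LOG"; then
    # On a crash, glusterfs logs "signal received" followed by a backtrace,
    # just before the "Started running" line of the restarted process.
    grep -A 15 "signal received" "$LOG"
    verdict="crash evidence found"
else
    verdict="no crash evidence in $LOG"
fi
echo "$verdict"

# Core files land wherever kernel.core_pattern points; common spots:
ls /core.* /var/log/core/* 2>/dev/null || echo "no core files in default locations"
```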
08:33 AshuttoB I have no stack trace
08:34 msvbhat joined #gluster
08:34 AshuttoB and... no bricks were added/removed since the gluster went online
08:41 Wizek_ joined #gluster
08:50 gyadav joined #gluster
08:50 gyadav_ joined #gluster
09:20 nbalacha joined #gluster
09:36 msvbhat joined #gluster
10:10 susant joined #gluster
10:23 arpu joined #gluster
10:29 flying joined #gluster
10:48 [diablo] joined #gluster
10:53 Gambit15 joined #gluster
10:57 fsimonce joined #gluster
11:04 susant left #gluster
11:21 Prasad joined #gluster
11:24 MrAbaddon joined #gluster
11:56 gyadav joined #gluster
11:56 gyadav_ joined #gluster
11:59 Gambit15 joined #gluster
12:14 nh2 joined #gluster
12:22 baber joined #gluster
13:02 foster joined #gluster
13:25 ayaz joined #gluster
13:28 skylar joined #gluster
13:30 Gambit15 joined #gluster
13:32 riyas joined #gluster
13:40 oajs joined #gluster
13:47 susant joined #gluster
14:00 nh2 joined #gluster
14:20 rafi joined #gluster
14:24 _nixpanic joined #gluster
14:24 _nixpanic joined #gluster
14:28 susant joined #gluster
14:33 arpu joined #gluster
14:35 gyadav joined #gluster
14:35 gyadav_ joined #gluster
14:35 nh2 joined #gluster
14:37 riyas joined #gluster
14:45 farhorizon joined #gluster
15:02 saali joined #gluster
15:06 jdossey joined #gluster
15:07 wushudoin joined #gluster
15:28 Gambit15 joined #gluster
15:54 Gambit15 joined #gluster
16:02 jwd joined #gluster
16:09 farhorizon joined #gluster
16:15 farhorizon joined #gluster
16:20 plarsen joined #gluster
16:46 gyadav_ joined #gluster
16:46 gyadav joined #gluster
16:48 Gambit15 joined #gluster
17:01 buvanesh_kumar joined #gluster
17:26 major bleh .. spent the better part of last night chasing memory handling bugs in code I didn't even write :(
17:27 major but .. they seem to be all fixered uppers now
17:27 major just have to convince glusterd_snapshot_mount() that it's okay if the snapshot is already mounted (thanks ZFS)
18:03 jwd joined #gluster
18:04 Gambit15 joined #gluster
18:22 Gambit15 joined #gluster
18:34 arpu joined #gluster
18:36 cliluw joined #gluster
18:47 susant joined #gluster
18:48 Gambit15 joined #gluster
19:03 farhorizon joined #gluster
19:16 farhorizon joined #gluster
19:33 Gambit15 joined #gluster
19:33 shaunm joined #gluster
19:37 farhorizon joined #gluster
19:40 major JoeJulian, so far as a status update .. zfs snapshots are now working as well
19:42 JoeJulian +1
19:42 major trying to fix the activate/deactivate side of clones and making certain snapshots are mounted at boot (restart)
19:43 major and then .. more testing
19:43 major almost at a point where we can seriously start discussing optimizations/cleanups/refactoring
19:52 jwd joined #gluster
20:17 Wizek_ joined #gluster
20:18 oajs joined #gluster
20:20 Wizek_ joined #gluster
20:21 Gambit15 joined #gluster
20:33 farhorizon joined #gluster
20:43 farhorizon joined #gluster
20:43 Gambit15 joined #gluster
20:53 shyam joined #gluster
21:06 farhorizon joined #gluster
21:18 shyam joined #gluster
21:23 Gambit15 joined #gluster
21:30 major https://gist.github.com/major0/48caeae9896ac5682f42b756cdc9530a
21:30 glusterbot Title: Glusterfs snapshots on ZFS · GitHub (at gist.github.com)
21:30 major was there any other snapshot we cared about at this point?
21:31 major the documentation portion of this stuff is gonna be a pain :(
21:46 Gambit15 joined #gluster
22:00 Klas joined #gluster
22:08 farhorizon joined #gluster
22:14 oajs joined #gluster
22:21 farhorizon joined #gluster
22:25 Gambit15 joined #gluster
22:37 Gambit15 joined #gluster
23:52 Gambit15 joined #gluster