IRC log for #gluster, 2017-06-01


All times shown according to UTC.

Time Nick Message
00:16 farhorizon joined #gluster
00:45 Alghost joined #gluster
00:46 daMaestro joined #gluster
01:22 kpease joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:48 daMaestro joined #gluster
02:11 baber joined #gluster
02:23 pioto_ joined #gluster
02:26 Alghost joined #gluster
02:27 Alghost joined #gluster
02:46 daMaestro joined #gluster
02:54 kraynor5b joined #gluster
03:07 Gambit15 joined #gluster
03:20 kramdoss_ joined #gluster
03:30 kpease joined #gluster
03:38 nbalacha joined #gluster
03:43 hgowtham joined #gluster
03:47 riyas joined #gluster
03:52 karthik_us joined #gluster
03:56 ndarshan joined #gluster
04:00 apandey joined #gluster
04:18 jerrcs_ joined #gluster
04:27 ic0n Anyone there?  my servers upgraded to 3.11 and now no directories are shown!
04:31 Shu6h3ndu joined #gluster
04:50 gyadav joined #gluster
04:55 xMopxShell joined #gluster
04:56 jerrcs_ joined #gluster
05:26 buvanesh_kumar joined #gluster
06:00 masber joined #gluster
06:01 Karan joined #gluster
06:07 TBlaar joined #gluster
06:12 Humble joined #gluster
06:32 jtux joined #gluster
06:37 rafi1 joined #gluster
06:37 Saravanakmr joined #gluster
06:39 devyani7 joined #gluster
06:39 ppai joined #gluster
06:41 ashiq joined #gluster
06:43 kdhananjay joined #gluster
06:45 pkoro joined #gluster
06:57 rafi joined #gluster
06:58 susant joined #gluster
06:58 itisravi joined #gluster
06:58 gyadav joined #gluster
06:59 ivan_rossi joined #gluster
06:59 mbukatov joined #gluster
07:01 ankitr joined #gluster
07:03 aravindavk joined #gluster
07:08 riyas joined #gluster
07:09 chawlanikhil24 joined #gluster
07:10 karthik_us joined #gluster
07:12 Drankis joined #gluster
07:15 ahino joined #gluster
07:20 sanoj joined #gluster
07:30 shdeng joined #gluster
07:34 ahino joined #gluster
07:35 ankitr joined #gluster
07:36 atinm joined #gluster
07:40 fsimonce joined #gluster
07:53 pioto_ joined #gluster
08:08 MrAbaddon joined #gluster
08:09 panina joined #gluster
08:26 sona joined #gluster
08:35 om2 joined #gluster
08:44 p7mo joined #gluster
08:49 shdeng joined #gluster
08:56 chawlanikhil24 joined #gluster
08:57 panina joined #gluster
09:02 jiffin joined #gluster
09:07 hybrid512 joined #gluster
09:11 Alghost joined #gluster
09:11 susant joined #gluster
09:16 shdeng joined #gluster
09:39 panina joined #gluster
09:42 chawlanikhil24 joined #gluster
09:49 sona joined #gluster
09:53 panina joined #gluster
09:57 panina joined #gluster
10:12 rastar joined #gluster
10:20 MrAbaddon joined #gluster
10:33 ankitr joined #gluster
10:36 ic0n anyone? 3.11?
10:37 cloph no - but even if:  better to just state your actual question.
10:38 Klas he did, several hours ago
10:39 Klas [06:27:52] <ic0n> Anyone there?  my servers upgraded to 3.11 and now no directories are shown!
10:40 ic0n It is a replicated volume, 2 servers.  One of them seems to not be healing...
10:43 panina joined #gluster
10:44 panina joined #gluster
10:50 ic0n also, heal info shows the first dir entry as a gfid:etc
10:50 poornima_ joined #gluster
10:56 Klas is there a good place to get an overview of the release notes for different versions? Trying to establish a good maintenance routine.
11:01 cloph there's a docs directory with the relnotes
11:02 cloph e.g https://github.com/gluster/glusterfs/tree/v3.10.3/doc/release-notes
11:02 glusterbot Title: glusterfs/doc/release-notes at v3.10.3 · gluster/glusterfs · GitHub (at github.com)
11:02 Klas cloph: ah, ok, thanks
11:05 rastar joined #gluster
11:05 Teraii joined #gluster
11:06 mbukatov joined #gluster
11:20 Jacob843 joined #gluster
11:28 flying joined #gluster
11:28 Jacob843 joined #gluster
11:34 dijuremo ic0n
11:34 dijuremo ic0n: are the peers connected? Worst case scenario you can just stop glusterfs and downgrade to 3.10.x you had before and try that.
11:35 dijuremo ic0n: gluster peer status
11:36 dijuremo ic0n: I meant stop glusterd and glusterfsd, downgrade, then restart the services.
11:36 jkroon joined #gluster
11:41 mbukatov joined #gluster
11:42 baber joined #gluster
11:48 _KaszpiR_ joined #gluster
11:49 Seth_Karlo joined #gluster
11:50 Seth_Kar_ joined #gluster
11:57 pkoro joined #gluster
12:05 victori joined #gluster
12:15 ajph not a specific gluster question but perhaps someone has some experience. I apologise in advance if this is not appropriate. i'm running nginx to serve from a gluster volume. if nginx is started before the mount, all files 404 until i restart nginx. i assume this is because nginx has attached to the existing dir/file handle but unsure and i can't find anything by searching. is there a way to get nginx to serve the files after the mount has occurred?
12:19 Klas start order
12:20 Klas you should make nginx not start until glusterfs has already started with systemd or sysv or whatever else you are running
12:20 Klas as long as nginx depends in some way on glusterfs being started then it should be fine
12:21 Klas btw, I suck at systemd order and miss SysV terribly when working with it
12:24 ajph thanks Klas, that makes sense. i thought perhaps there was some other wizardry i could do. i'm slowly getting used to systemd myself and i feel your pain
12:24 Klas there might be another way, but this seems the most robust
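A minimal sketch of the ordering Klas describes, assuming the volume is mounted through /etc/fstab and nginx serves from it (the /srv/www path and volume name are illustrative, not from the log):

    # /etc/fstab entry for the gluster client mount (illustrative):
    #   server1:/myvol  /srv/www  glusterfs  defaults,_netdev  0  0
    # Drop-in so nginx requires, and starts after, that mount:
    mkdir -p /etc/systemd/system/nginx.service.d
    cat > /etc/systemd/system/nginx.service.d/gluster-mount.conf <<'EOF'
    [Unit]
    RequiresMountsFor=/srv/www
    EOF
    systemctl daemon-reload

With that dependency in place, starting nginx pulls the mount unit in first, so it serves the gluster volume rather than the empty mountpoint directory.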
12:26 Jacob843 joined #gluster
12:32 ic0n dijuremo: all peers are connected.  I tried downgrading.. the bricks don't come online.
12:33 Klas is the data visible in the brick paths?
12:33 ic0n yes
12:33 Klas recreate volumes, export data?
12:34 Klas 8 hours later seems disaster recovery territory
12:34 ic0n that's pretty much where I'm at.  It is 48TB though...
12:34 Klas yeah, I feel your pain
12:34 ic0n fortunately I have one client using 3.10 that can still see the dirs...
12:35 Klas huh
12:35 Klas you can still connect and mount then?
12:35 ic0n other clients on both 3.10 and 3.11 after umount/mount do not see the dirs.
12:35 ic0n I can mount and connect, they just show only files.  Except for this one which I never umounted
12:35 Klas it might actually be that one client effing everything up
12:36 Klas we've had situations where one client effed everything up for the servers and the clients
12:36 dijuremo ic0n: After the downgrade what is the op version?
12:36 Klas also, how can you have stopped the volumes if you had a connected client all the time?
12:37 ic0n I stopped them by killing all gluster processes, separating from the rest of the network, etc.
12:37 ic0n I didn't plan this upgrade sadly.
12:37 dijuremo ic0n what OS?
12:37 ic0n ubuntu 16.04, using the gluster PPA
12:38 dijuremo You want to apt-mark hold glusterfs-package-names
12:39 dijuremo cat /var/lib/glusterd/glusterd.info
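A sketch of that pin on Ubuntu; the package names below are the usual ones from the gluster PPA but are an assumption here, so confirm with dpkg first:

    dpkg -l | grep -i gluster                    # confirm the installed package names
    apt-mark hold glusterfs-server glusterfs-client glusterfs-common
    cat /var/lib/glusterd/glusterd.info          # note the UUID and operating-version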
12:39 ic0n am thinking of detaching the brick which isn't healing
12:39 dijuremo What is the operating-version?
12:40 ic0n UUID=3d4e3a5f-a747-49a5-a4e8-e870af98a973
12:40 ic0n operating-version=31100
12:40 dijuremo If you downgrade, you have to set it back for 310
12:40 dijuremo That should have been 3.10
12:40 ic0n was 31000 when this problem began... tried bumping it up to fix the problem, hasn't helped
12:41 ic0n bumping it down and downgrading the packages doesn't work as the bricks don't come up.  I can try again though.
12:41 Klas how can the bricks be down and a client is still able to access them?
12:41 dijuremo I would try stopping glusterd on all servers, then edit the op version to be 31000, then downgrade to 3.10, then start gluster again
12:41 ic0n bricks are up, using 3.11
12:42 Klas ah
12:42 dijuremo Bricks won't be up on 3.10 with a higher OP version
12:42 ic0n yeah I've tried that several times through the night.  the bricks don't come up.  even after putting the op version back
12:43 ic0n at the time I didn't know how to bump up the debug logging for the bricks. Now I do, so I'm about to try again.
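A rough sketch of the sequence dijuremo describes, run on each server while all of them are stopped (version strings are placeholders; assumes the 3.10 packages are still available from the PPA):

    systemctl stop glusterd
    pkill glusterfsd                                     # stop any brick processes still running
    cp -a /var/lib/glusterd /var/lib/glusterd.bak        # snapshot the config first
    sed -i 's/^operating-version=.*/operating-version=31000/' /var/lib/glusterd/glusterd.info
    apt-get install glusterfs-server=<3.10.x> glusterfs-client=<3.10.x> glusterfs-common=<3.10.x>
    systemctl start glusterd

The glusterd.info edit has to be made while glusterd is stopped on every peer, before the downgraded daemons are started, or the file can be written back with the higher op-version.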
12:44 cornfed78 joined #gluster
12:46 ic0n I think from now on I will keep my /var/lib/gluster in git
12:49 dijuremo I always make a backup copy of it just in case prior to upgrades... :)
12:50 dijuremo And I was once in a similar situation a long time ago upgrading from 3.6 to 3.7 and hitting issues and JoeJulian saved me with the op-version changes.. :)
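A sketch of keeping that state under version control as ic0n suggests (glusterd's working directory is /var/lib/glusterd, per the log below; assumes git is installed on the servers):

    cd /var/lib/glusterd && git init -q && git add -A && git commit -qm "pre-upgrade $(date +%F)"

or dijuremo's simpler copy before each upgrade:

    cp -a /var/lib/glusterd /root/glusterd-$(date +%F).bak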
12:51 ic0n [2017-06-01 12:50:20.728122] I [MSGID: 100030] [glusterfsd.c:2475:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.10.2 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid)
12:51 ic0n [2017-06-01 12:50:20.735238] I [MSGID: 106478] [glusterd.c:1449:init] 0-management: Maximum allowed open file descriptors set to 65536
12:51 ic0n [2017-06-01 12:50:20.735281] I [MSGID: 106479] [glusterd.c:1496:init] 0-management: Using /var/lib/glusterd as working directory
12:51 ic0n [2017-06-01 12:50:20.740042] W [MSGID: 103071] [rdma.c:4590:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device]
12:51 ic0n [2017-06-01 12:50:20.740070] W [MSGID: 103055] [rdma.c:4897:init] 0-rdma.management: Failed to initialize IB Device
12:51 ic0n [2017-06-01 12:50:20.740084] W [rpc-transport.c:350:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
12:51 ic0n [2017-06-01 12:50:20.740145] W [rpcsvc.c:1661:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed
12:51 ic0n [2017-06-01 12:50:20.740163] E [MSGID: 106243] [glusterd.c:1720:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
12:51 ic0n [2017-06-01 12:50:23.006898] E [MSGID: 106022] [glusterd-store.c:2190:glusterd_restore_op_version] 0-management: wrong op-version (31100) retrieved [Invalid argument]
12:51 ic0n [2017-06-01 12:50:23.006950] E [MSGID: 106244] [glusterd.c:1862:init] 0-management: Failed to restore op_version
12:51 ic0n [2017-06-01 12:50:23.006983] E [MSGID: 101019] [xlator.c:503:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
12:51 ic0n [2017-06-01 12:50:23.007003] E [MSGID: 101066] [graph.c:325:glusterfs_graph_init] 0-management: initializing translator failed
12:51 ic0n [2017-06-01 12:50:23.007015] E [MSGID: 101176] [graph.c:681:glusterfs_graph_activate] 0-graph: init failed
12:51 ic0n [2017-06-01 12:50:23.007552] W [glusterfsd.c:1332:cleanup_and_exit] (-->/usr/sbin/glusterd(glusterfs_volumes_init+0xe5) [0x55942def21d5] -->/usr/sbin/glusterd(glusterfs_process_volfp+0x1b7) [0x55942def2097] -->/usr/sbin/glusterd(cleanup_and_exit+0x54) [0x55942def1624] ) 0-: received signum (1), shutting down
12:51 glusterbot ic0n: ('s karma is now -176
12:51 ic0n crap, it rewrote my glusterd.info - went back to 3.11
12:52 Klas ic0n: please use some paste function instead
12:52 ic0n ok, that's better
12:55 Karan joined #gluster
12:55 ahino1 joined #gluster
13:01 sona joined #gluster
13:05 Shu6h3ndu joined #gluster
13:07 dijuremo ic0n: Did changing the op version to 31000 fix your issue?
13:19 skylar joined #gluster
13:21 Jacob843 joined #gluster
13:29 ankitr joined #gluster
13:29 jstrunk joined #gluster
13:49 nbalacha joined #gluster
13:52 ic0n No.  But dropping a brick did.  I'm back in business. Running 3.11 now.
13:53 ic0n going back to ZoneMinder development now.
13:56 baber joined #gluster
14:03 ahino joined #gluster
14:05 saintpablo joined #gluster
14:06 [diablo] joined #gluster
14:06 buvanesh_kumar joined #gluster
14:20 _KaszpiR_ joined #gluster
14:23 sona joined #gluster
14:25 riyas joined #gluster
14:34 kpease joined #gluster
14:35 sona joined #gluster
14:35 ankitr joined #gluster
14:37 _KaszpiR_ joined #gluster
14:45 kramdoss_ joined #gluster
14:50 farhorizon joined #gluster
15:03 wushudoin joined #gluster
15:04 wushudoin joined #gluster
15:08 jiffin joined #gluster
15:08 shaunm joined #gluster
15:14 jtux left #gluster
15:24 gnulnx left #gluster
15:39 ankitr joined #gluster
15:39 om2 joined #gluster
15:48 gyadav joined #gluster
15:56 susant joined #gluster
16:15 ankitr joined #gluster
16:31 shyam joined #gluster
16:31 rwheeler_ joined #gluster
16:41 sona joined #gluster
16:50 sona joined #gluster
16:51 ivan_rossi left #gluster
16:55 jkroon joined #gluster
17:15 Seth_Karlo joined #gluster
17:15 cornfed78 joined #gluster
17:18 om2 joined #gluster
17:33 MrAbaddon joined #gluster
17:39 kpease joined #gluster
17:51 bwerthmann joined #gluster
17:58 farhorizon joined #gluster
18:02 bwerthmann joined #gluster
18:03 sona joined #gluster
18:10 niknakpaddywak joined #gluster
18:27 jbrooks joined #gluster
18:29 Jacob8432 joined #gluster
18:34 Supermathie joined #gluster
18:35 dijuremo Can I easily go from replica 2 to replica 3 when I add a new server with its own brick?
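A sketch of the usual in-place conversion (volume name, hostname, and brick path are placeholders):

    gluster peer probe server3
    gluster volume add-brick myvol replica 3 server3:/bricks/brick1/myvol
    gluster volume heal myvol full           # then watch: gluster volume heal myvol info

add-brick with the new replica count turns the existing replica-2 volume into replica 3, and the heal populates the new brick.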
18:38 Supermathie hi everyone, I've got a 3.3.1 volume where everything is online but when I try to gather heal information on split-brain files, it times out (at least it seems to time out - the cli just says 'failed' after 30s or so)… any way to increase that timeout?
18:48 farhorizon joined #gluster
19:13 om2 joined #gluster
19:15 MrAbaddon joined #gluster
19:39 ahino joined #gluster
19:49 jbrooks joined #gluster
20:18 shyam joined #gluster
20:21 Supermathie anyone have any idea why certain directories might have duplicate files showing up in the mount listing?
20:22 Supermathie i.e. "ls -d /gluster-home/path/comp*" → "/gluster-home/path/compevpfft  /gluster-home/path/compevpfft"
20:22 Supermathie no split brains in this case
20:23 loadtheacc joined #gluster
20:44 chatter29 joined #gluster
20:59 JoeJulian Supermathie: check the client log, but I've only seen that with mismatched gfid (a form of split-brain).
21:02 Supermathie mmmk I'll check that… it also happens with new directories/files created in certain locations.
21:07 Supermathie JoeJulian, https://gluster.readthedocs.io/en/latest/Troubleshooting/gfid-to-path/ options don't work as this is an ancient (3.3.1) version of glusterfs - can I read it from the directory on the bricks?
21:07 glusterbot Title: gfid to path - Gluster Docs (at gluster.readthedocs.io)
21:08 JoeJulian The gfid is in the ,,(extended attributes) on the bricks.
21:08 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://pl.atyp.us/hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
21:12 Supermathie JoeJulian, gfid on all the bricks for that directory is the same; maybe something beneath or above it has a mismatched gfid?
21:12 JoeJulian Perhaps, but it would only make sense if it was only one directory up.
21:13 JoeJulian I've only seen this once with a version post 3.4, but I wasn't able to diagnose it as umounting and mounting again fixed it.
21:15 Supermathie JoeJulian, remounting doesn't help. Also it's odd - in the parent directory files show once but directories show twice: https://pastebin.com/raw/8siRGde6
21:15 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
21:17 JoeJulian If you check the gfid of that directory against the .glusterfs tree, the gfid file should be a symlink that points to its parent directory. I'm not sure what would happen if that's not true but it's worth looking at.
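A sketch of that check directly on a brick (brick and directory paths are placeholders):

    # gfid of the suspect directory as stored on this brick:
    getfattr -n trusted.gfid -e hex /bricks/brick1/path/compevpfft
    # its entry under .glusterfs: for a directory this is a symlink whose
    # target is ../../<aa>/<bb>/<parent-gfid>/<dirname>
    ls -l /bricks/brick1/.glusterfs/<first-2-hex-of-gfid>/<next-2-hex>/<full-gfid>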
21:17 Supermathie … and then listing the contents of one of those directories shows all files/directories twice…
21:17 Supermathie ok lemme check
21:17 JoeJulian Also... what changed prior to this behavior? I assume it was working just fine for a while.
21:21 Supermathie those gfid links under .glusterfs are correct… prior to this behaviour, heh, I wish I knew - it's been set up and running for 5 years and now I'm assisting them with various weird behaviours like this one - really it's probably time to nuke it and start over with an updated version.
21:22 JoeJulian There's no clue in the client log?
21:23 Supermathie nothing in there (/var/log/glusterfs/volumename.log right?)
21:24 Supermathie well nothing related to this
21:24 JoeJulian right
21:27 loadtheacc joined #gluster
21:28 Supermathie I'll probably try shutting everything down and bringing it back up again tomorrow, see if that fixes it… thanks
21:54 shyam joined #gluster
22:07 shyam joined #gluster
22:14 vbellur joined #gluster
22:19 Alghost joined #gluster
23:32 k0nsl joined #gluster
23:32 k0nsl joined #gluster
23:37 doc|work joined #gluster
23:38 shyam joined #gluster
23:57 Alghost joined #gluster
23:57 Alghost_ joined #gluster
