
IRC log for #gluster, 2017-10-24


All times shown according to UTC.

Time Nick Message
00:03 blu_ joined #gluster
00:11 hmamtora joined #gluster
00:13 CrackerJackMack snehring, you happen to use ZFS on centos ?
00:35 susant joined #gluster
00:41 ahino joined #gluster
01:06 map1541 joined #gluster
01:10 shyu joined #gluster
01:56 ilbot3 joined #gluster
01:56 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:33 plbertrand_ joined #gluster
02:36 plbertrand_ Hi, I'm currently trying to setup NFS but I fail miserably. The documentation says that it should work magically but it isn't. Am I supposed to install nfs-ganesha or nfs-kernel-server or nothing at all? I don't see any mounts when I run showmount -e localhost.
02:38 plbertrand_ I've installed glusterfs-3.12 on Ubuntu 16.04
03:22 BlackoutWNCT plbertrand_, don't install the kernel server, it conflicts. It is recommended to use Ganesha, but the inbuilt NFS server also works
03:22 BlackoutWNCT Can you output your 'gluster volume info' for me?
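(For reference, a minimal sketch of what BlackoutWNCT describes, assuming a volume named gv0 and the built-in gluster NFS server rather than Ganesha. In glusterfs 3.12 the built-in server is disabled by default, so showmount lists nothing until it is switched on.)
    gluster volume set gv0 nfs.disable off   # re-enable gluster's built-in NFS server for this volume
    gluster volume status gv0                # the "NFS Server on localhost" line should show Online Y
    showmount -e localhost                   # the volume should now appear as an export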
03:22 plarsen joined #gluster
03:24 kpease joined #gluster
03:34 skoduri joined #gluster
04:04 itisravi joined #gluster
04:32 Humble joined #gluster
04:33 rafi1 joined #gluster
04:33 sanoj joined #gluster
04:34 apandey joined #gluster
04:41 psony joined #gluster
04:43 Shu6h3ndu joined #gluster
04:44 kramdoss_ joined #gluster
04:50 nbalacha joined #gluster
05:02 ppai joined #gluster
05:05 skumar joined #gluster
05:06 omie888777 joined #gluster
05:10 apandey_ joined #gluster
05:10 aravindavk joined #gluster
05:16 sunny joined #gluster
05:16 xavih joined #gluster
05:21 msvbhat joined #gluster
05:27 skoduri joined #gluster
05:31 ndarshan joined #gluster
05:40 rastar joined #gluster
05:53 apandey_ joined #gluster
06:05 hgowtham joined #gluster
06:14 karthik_us joined #gluster
06:15 susant joined #gluster
06:18 kdhananjay joined #gluster
06:19 mattmcc joined #gluster
06:19 jtux joined #gluster
06:21 ndarshan joined #gluster
06:21 mbukatov joined #gluster
06:22 rastar joined #gluster
06:25 ppai joined #gluster
06:26 Saravanakmr joined #gluster
06:28 aravindavk joined #gluster
06:33 xavih joined #gluster
06:33 apandey__ joined #gluster
06:39 omie888777 joined #gluster
06:53 ivan_rossi joined #gluster
06:53 ivan_rossi left #gluster
06:58 msvbhat joined #gluster
07:02 bEsTiAn joined #gluster
07:07 poornima_ joined #gluster
07:11 Humble joined #gluster
07:15 fsimonce joined #gluster
07:20 atinm joined #gluster
07:24 apandey joined #gluster
07:28 Saravanakmr joined #gluster
07:32 msvbhat joined #gluster
07:57 buvanesh_kumar joined #gluster
07:59 apandey joined #gluster
08:01 aravindavk joined #gluster
08:08 rastar joined #gluster
08:10 [diablo] joined #gluster
08:18 ndarshan joined #gluster
08:20 ppai joined #gluster
08:24 abyss_ joined #gluster
08:28 msvbhat joined #gluster
08:39 _KaszpiR_ joined #gluster
08:41 rouven joined #gluster
08:42 atinm joined #gluster
08:45 apandey joined #gluster
09:00 ThHirsch joined #gluster
09:09 atinm joined #gluster
09:10 Saravanakmr joined #gluster
09:14 sanoj joined #gluster
09:19 atinm joined #gluster
09:27 apandey joined #gluster
09:28 apandey joined #gluster
09:58 atinm joined #gluster
10:07 msvbhat joined #gluster
10:16 Wizek_ joined #gluster
10:25 WebertRLZ joined #gluster
10:35 ndarshan joined #gluster
10:45 baber joined #gluster
10:47 shyam joined #gluster
10:50 kenansulayman joined #gluster
10:54 _KaszpiR_ joined #gluster
11:12 kdhananjay joined #gluster
11:33 sanoj joined #gluster
11:42 Humble joined #gluster
11:53 HTTP_____GK1wmSU joined #gluster
11:53 panina joined #gluster
11:54 HTTP_____GK1wmSU left #gluster
11:54 panina Hello. I'm trying to recover from a breakdown following an upgrade from glusterfs 3.7 to 3.10. I had some trouble with bricks being listed as offline.
11:55 panina I have more or less solved it by replacing bricks. I replaced the downed bricks and re-synced the data. That process seemed to go well. However, when I now run 'gluster volume heal all', it responds that there are still downed bricks.
11:55 panina When I run the heal-command on the individual volumes, all bricks are listed as online.
11:56 panina I'm not seeing any errors in the logs. Does anyone have ideas on how to proceed?
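(A hedged sketch of the per-volume checks panina describes, with "myvol" standing in for the real volume name.)
    gluster peer status              # all peers should be in state "Peer in Cluster (Connected)"
    gluster volume status myvol      # every brick should show Online "Y" with a PID
    gluster volume heal myvol info   # lists entries still pending heal, per brick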
11:56 _KaszpiR_ joined #gluster
12:05 phlogistonjohn joined #gluster
12:14 itisravi__ joined #gluster
12:22 sunnyk joined #gluster
12:24 msvbhat joined #gluster
12:28 sunny joined #gluster
12:34 apandey_ joined #gluster
12:57 shyam joined #gluster
13:02 apandey joined #gluster
13:09 nbalacha joined #gluster
13:11 plarsen joined #gluster
13:21 rouven joined #gluster
13:22 baber joined #gluster
13:27 atinm joined #gluster
13:27 rouven joined #gluster
13:35 csaba joined #gluster
13:38 zyffer joined #gluster
13:40 phlogistonjohn joined #gluster
13:45 shyam joined #gluster
13:46 major joined #gluster
13:53 rouven joined #gluster
13:57 skylar1 joined #gluster
14:01 zyffer does the gluster server keep a log of clients that have mounted volumes?
14:01 baber joined #gluster
14:07 snehring CrackerJackMack: RHEL and Fedora, but yeah
14:14 hmamtora joined #gluster
14:31 zyffer i've inherited some gluster nas and am trying to figure out what has mounted the volumes, historically
14:33 zyffer i would have expected when a client mounts a volume, there would be some entry in a log somewhere, but i am not finding anything meaningful
14:38 kpease joined #gluster
14:39 kpease_ joined #gluster
14:45 omie888777 joined #gluster
14:49 susant joined #gluster
14:53 NoctreGryps @zyffer  - I agree, gluster logs are not helpful for that, but you may find examining your /etc/fstab and /etc/hosts will help fill in some info
14:53 panina zyffer - my system keeps massive amounts of logs in /var/log/glusterfs. It's a CentOS system. Not sure how yours look.
14:53 NoctreGryps on your instances you suspect that used it, that is
14:53 panina Those logs are a bit on the verbose side though.
14:54 panina zyffer but if you can figure out how the log entries look, I suspect you should be able to get the relevant info with zgrep.
14:55 farhorizon joined #gluster
14:58 dr-gibson Is there some way to get gluster to trim its memory consumption?
14:59 dr-gibson I created 100k files, glusterfs went to 1.2GB mem.. then deleted the files but gluster's memory consumption continues to be high
15:00 rwheeler joined #gluster
15:02 wushudoin joined #gluster
15:07 zyffer panina: thanks.  yeah, it's centos 6, gluster 3.8.5 .  the whole /var/log/glusterfs is ~118MB, and unfortunately, does not appear to reflect when a client mounts a volume
15:08 zyffer i mounted the volume, looked at all the changed files in that dir, and nothing reflects that, i was hoping it was stored somewhere else
15:09 snehring zyffer: do you have a subdir in /var/log/glusterfs called bricks?
15:09 zyffer sure do
15:10 snehring the logs in there should have entries like 'accepted connection from <client>'
15:10 zyffer omg, that looks like it might have it
15:10 zyffer how did i miss that, thank you
15:11 snehring np
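(Based on snehring's hint, something like this should reconstruct the mount history from the brick logs; the exact wording of the log message varies between versions, so the grep pattern is an assumption.)
    grep -i "accepted" /var/log/glusterfs/bricks/*.log    # current brick logs
    zgrep -i "accepted" /var/log/glusterfs/bricks/*.gz    # rotated logs, if logrotate compresses them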
15:11 baber joined #gluster
15:26 panina joined #gluster
15:31 baber joined #gluster
15:32 rouven joined #gluster
15:35 buvanesh_kumar joined #gluster
15:38 susant joined #gluster
16:05 mlhess joined #gluster
16:09 aravindavk joined #gluster
16:17 Luds joined #gluster
16:18 Luds Hi everybody... Quick question. I was about to add a server to my 2 replicas setup.
16:19 Luds I was also adding 2 bricks on the new server and 1 brick per existing server.
16:19 Luds In the end, the goal was to have 1 volume which would be a combination of 3 times 2 replicas.
16:20 Luds During my readings, I found that if sharding is enabled, it could create file corruption.
16:20 Luds Is this still relevant?
16:22 Luds I have Gluster 3.10.3-1 on the older servers and 3.10.6-1 on the one I want to add.
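(For reference, growing a 1x2 replica volume into the 3x2 distributed-replicate layout Luds describes would look roughly like this. Hostnames and brick paths are invented, and gluster pairs bricks into replica sets in the order they appear on the add-brick line, so each pair below spans two servers. The rebalance at the end is the operation the sharding-corruption reports concern, so this only illustrates the layout, not an answer to that question.)
    gluster peer probe server3
    gluster volume add-brick myvol \
        server3:/bricks/b1 server1:/bricks/b2 \
        server3:/bricks/b2 server2:/bricks/b2
    gluster volume rebalance myvol start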
16:26 cloph_away joined #gluster
16:54 rastar joined #gluster
16:56 farhorizon joined #gluster
17:00 CrackerJackMack joined #gluster
17:06 alvinstarr joined #gluster
17:15 atinm joined #gluster
17:16 CrackerJackMack snehring, with glusterfs? if so then like you said, this bug might be ubuntu+zfs build specific and it might be what it is, but a rather important issue that I hope the devs can clarify for me.  No information out there that I could find regarding this.
17:17 CrackerJackMack but now the real kicker, do I convert my storage systems to centos or fbsd...
17:19 Vapez_ joined #gluster
17:20 snehring I don't think that'll solve your problem, at least right now
17:21 snehring even 0.7.3 doesn't seem to support all the fallocate options
17:22 CrackerJackMack correct, but if glusterfs+zfs works on centos then there is something in the environments that is making centos take a different code path from an ubuntu built one based on those #ifdef's I was seeing.
17:22 CrackerJackMack possible
17:22 CrackerJackMack or nobody ever runs rebalance
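(A quick way to check what the brick filesystem itself supports, independent of gluster; the path is a placeholder. ZFS-on-Linux builds without fallocate support return "Operation not supported" for these.)
    dd if=/dev/zero of=/pool/brick/testfile bs=1M count=10
    fallocate -l 20M /pool/brick/testfile          # plain preallocation
    fallocate -p -o 0 -l 1M /pool/brick/testfile   # hole punching, a separate capability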
17:22 Vapez joined #gluster
17:22 Vapez joined #gluster
17:23 snehring my primary zfs+glusterfs use case is with xfs bricks on top of lvm on top of a zvol (for native snapshot support)
17:24 snehring I had a smaller volume setup for hpc that was directly on top of zfs datasets, but that was with an older version of gluster on centos
17:24 snehring newer clients may do more with fallocate than they used to
17:28 CrackerJackMack ooohhhh
17:29 plarsen joined #gluster
17:29 CrackerJackMack the reason for lvm on top of a zvol is so that if you wanted to add a zvol you could expand the brick, as I understand it?
17:31 snehring that'd be one way to do that sure, but the main reason is for snapshots
17:31 snehring since gluster (presently) uses lvm for snapshots
17:35 CrackerJackMack oh right, you said that but I didn't grok
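(Roughly what snehring's layering looks like, with invented pool/VG names and sizes. Gluster's volume snapshots require the brick to live on a thinly provisioned LVM volume, which is what the zvol + LVM layer provides on top of ZFS.)
    zfs create -V 500G tank/gluster-zvol              # block device backed by the ZFS pool
    pvcreate /dev/zvol/tank/gluster-zvol
    vgcreate vg_gluster /dev/zvol/tank/gluster-zvol
    lvcreate -L 450G -T vg_gluster/thinpool           # thin pool, needed for gluster snapshots
    lvcreate -V 400G -T vg_gluster/thinpool -n brick1 # thin LV for the brick
    mkfs.xfs -i size=512 /dev/vg_gluster/brick1
    mount /dev/vg_gluster/brick1 /bricks/brick1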
17:41 rafi joined #gluster
17:56 andrws joined #gluster
17:57 rouven joined #gluster
18:07 msvbhat joined #gluster
18:07 Guest71620 joined #gluster
18:11 social joined #gluster
18:12 Guest71620 hello gluster group.... I've created a 3 node cluster with 6 disks with replica 3, then I added 3 ssd disks, one for each node as a hot tier. I created a bunch of VMs, then shut them down overnight. Aren't they supposed to move to "cold" storage?
18:14 Guest71620 Volume Name: gv2tb
18:14 Guest71620 Type: Tier
18:14 Guest71620 Volume ID: a2938b4f-5363-4da3-9af5-7b5c0e848a2d
18:14 Guest71620 Status: Started
18:14 Guest71620 Snapshot Count: 0
18:14 Guest71620 Number of Bricks: 9
18:14 Guest71620 Transport-type: tcp
18:14 Guest71620 Hot Tier :
18:14 Guest71620 Hot Tier Type : Distribute
18:14 Guest71620 Number of Bricks: 3
18:15 Guest71620 Brick1: mx10-lds07:/data/ssd2/data
18:15 Guest71620 Brick2: mx10-lds06:/data/ssd2/data
18:15 Guest71620 Brick3: mx10-lds02:/data/ssd2/data
18:15 Guest71620 Cold Tier:
18:15 Guest71620 Cold Tier Type : Distributed-Replicate
18:15 Guest71620 Number of Bricks: 2 x 3 = 6
18:15 Guest71620 Brick4: mx10-lds02:/data/2tba/data
18:15 Guest71620 Brick5: mx10-lds06:/data/2tba/data
18:15 Guest71620 Brick6: mx10-lds07:/data/2tba/data
18:15 Guest71620 Brick7: mx10-lds02:/data/2tbb/data
18:15 Guest71620 Brick8: mx10-lds06:/data/2tbb/data
18:15 Guest71620 Brick9: mx10-lds07:/data/2tbb/data
18:15 Guest71620 Options Reconfigured:
18:15 Guest71620 performance.write-behind: off
18:15 Guest71620 cluster.self-heal-window-size: 256
18:15 Guest71620 performance.readdir-ahead: off
18:15 Guest71620 features.shard-block-size: 512MB
18:15 Guest71620 cluster.tier-mode: cache
18:15 Guest71620 features.ctr-enabled: on
18:15 Guest71620 ganesha.enable: on
18:15 Guest71620 features.cache-invalidation: on
18:15 Guest71620 features.shard: off
18:15 Guest71620 transport.address-family: inet
18:15 Guest71620 nfs.disable: on
18:15 Guest71620 cluster.enable-shared-storage: enable
18:15 Guest71620 nfs-ganesha: enable
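(For the tiering question above, a few checks that may help; the volume name is taken from the paste, and the options listed are the knobs cache mode uses to decide promotion and demotion, shown with whatever values are currently set.)
    gluster volume tier gv2tb status                         # promoted/demoted file counts per node
    gluster volume get gv2tb cluster.tier-demote-frequency   # seconds between demotion cycles
    gluster volume get gv2tb cluster.watermark-hi            # hot-tier fill watermarks that govern
    gluster volume get gv2tb cluster.watermark-lo            #   when cache mode demotes files
    gluster volume get gv2tb cluster.write-freq-threshold    # writes needed for a file to stay "hot"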
18:18 Guest71620 2nd issue: you'll notice other features changed from default. This is for troubleshooting ganesha core dumps when sharding is enabled. That's my 2nd huge issue... whenever I enable sharding, then do testing from NFS, ganesha crashes
18:19 panina joined #gluster
18:20 Guest71620 glusterfs 3.10.5
18:20 Guest71620 on fedora 26
18:23 MrAbaddon joined #gluster
18:42 arif-ali joined #gluster
18:46 _KaszpiR_ joined #gluster
18:54 Vapez_ joined #gluster
19:06 koolfy joined #gluster
19:11 koolfy joined #gluster
19:11 farhorizon joined #gluster
19:14 plarsen joined #gluster
19:14 koolfy joined #gluster
19:22 koolfy joined #gluster
19:26 Jacob843 joined #gluster
19:38 rastar joined #gluster
19:39 don joined #gluster
19:45 Guest32321 on the same systems, I get this WARNING: Locking disabled. Be careful! This could corrupt your metadata.
19:46 Guest32321 could it be related?
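(That warning text matches LVM's message when locking is disabled in lvm.conf, so it may be coming from the LVM layer used for gluster snapshots rather than from gluster itself; whether it relates to the ganesha crashes is not established. Checking the setting is quick.)
    grep -n "locking_type" /etc/lvm/lvm.conf   # 1 is the usual default; 0 disables locking
    lvm dumpconfig global/locking_type         # the value LVM is actually using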
19:47 janlam7 joined #gluster
19:58 msvbhat_ joined #gluster
20:01 ThHirsch joined #gluster
20:15 bluenemo joined #gluster
20:30 shyam joined #gluster
20:38 skylar1 joined #gluster
21:04 farhorizon joined #gluster
21:05 Humble joined #gluster
21:09 farhoriz_ joined #gluster
21:17 plarsen joined #gluster
21:24 omie888777 joined #gluster
21:50 Guest32321 left #gluster
21:54 gbox joined #gluster
22:04 farhorizon joined #gluster
22:19 jeffspeff joined #gluster
22:23 gbox Erasure Coding seems popular among #gluster gurus, at least conceptually.  Does anyone online use it as their default volume type?
22:30 bluenemo joined #gluster
22:34 gbox One of the great things about gluster is being able to peek under the hood and see the files.  If gluster goes kablooey, how can files in an EC volume be recovered?  It seems obvious, but hacks like glusterfind won’t work, correct?
23:29 bluenemo joined #gluster
23:46 bluenemo joined #gluster
