
IRC log for #gluster, 2017-09-27


All times shown according to UTC.

Time Nick Message
00:04 plarsen joined #gluster
00:46 MeltedLux joined #gluster
00:48 BlackoutWNCT I'm having some trouble with the Samba VFS module for gluster at the moment. Basically, I can't access shares that use the VFS module, but I can access shares that don't. Config: https://paste.ubuntu.com/25618899/ Log: https://paste.ubuntu.com/25618894/
00:48 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
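For comparison, a minimal vfs_glusterfs share stanza of the kind that config would need (the share name, volume name "gv0" and log path here are placeholders, not taken from the paste):

    [gv0]
        path = /
        read only = no
        vfs objects = glusterfs
        glusterfs:volume = gv0
        glusterfs:volfile_server = localhost
        glusterfs:logfile = /var/log/samba/glusterfs-gv0.log
        glusterfs:loglevel = 7
        kernel share modes = no

"kernel share modes = no" is often the missing piece when VFS-backed shares refuse to open while plain path-based shares work, and the glusterfs:logfile usually shows the underlying libgfapi error.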
01:07 shdeng joined #gluster
01:07 susant joined #gluster
01:27 shdeng joined #gluster
01:38 bwerthmann joined #gluster
01:55 ilbot3 joined #gluster
01:55 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:01 bwerthmann joined #gluster
02:02 gospod3 joined #gluster
02:44 atinm joined #gluster
02:51 susant joined #gluster
02:54 shyu joined #gluster
03:31 shdeng joined #gluster
03:33 psony joined #gluster
03:37 kramdoss_ joined #gluster
03:40 ahino joined #gluster
03:44 nbalacha joined #gluster
03:47 shdeng joined #gluster
03:51 itisravi joined #gluster
03:56 gyadav joined #gluster
04:05 ahino joined #gluster
04:15 aravindavk joined #gluster
04:19 dominicpg joined #gluster
04:21 BlackoutWNCT1 joined #gluster
04:22 wistof_ joined #gluster
04:22 glusterbot` joined #gluster
04:22 s34n_ joined #gluster
04:23 aravindavk_ joined #gluster
04:24 mlg9000_1 joined #gluster
04:24 ndk- joined #gluster
04:24 owlbot` joined #gluster
04:24 nigelb joined #gluster
04:24 misc_ joined #gluster
04:24 social_ joined #gluster
04:24 JonathanS joined #gluster
04:25 johnmark1 joined #gluster
04:25 saltsa_ joined #gluster
04:25 ndevos_ joined #gluster
04:25 ndevos_ joined #gluster
04:25 nobody483 joined #gluster
04:25 Neoon_ joined #gluster
04:25 d-fence__ joined #gluster
04:25 klma joined #gluster
04:25 psony_ joined #gluster
04:26 cholcombe_ joined #gluster
04:28 apandey joined #gluster
04:30 scc_ joined #gluster
04:30 armyriad joined #gluster
04:30 [o__o] joined #gluster
04:30 brayo joined #gluster
04:30 DJClean joined #gluster
04:30 DJClean joined #gluster
04:32 leifmadsen joined #gluster
04:32 unixfg joined #gluster
04:32 apandey_ joined #gluster
04:32 legreffier joined #gluster
04:35 PatNarciso joined #gluster
04:36 swebb joined #gluster
04:36 foster joined #gluster
04:37 ron-slc joined #gluster
04:38 ppai joined #gluster
04:42 loadtheacc joined #gluster
04:53 ndarshan joined #gluster
04:54 susant joined #gluster
04:59 skumar joined #gluster
05:13 xavih joined #gluster
05:16 ws2k3 joined #gluster
05:17 ws2k3 joined #gluster
05:17 ws2k3 joined #gluster
05:18 ws2k3 joined #gluster
05:18 ws2k3 joined #gluster
05:19 apandey joined #gluster
05:19 ws2k3 joined #gluster
05:24 kdhananjay joined #gluster
05:30 gyadav_ joined #gluster
05:35 jkroon joined #gluster
05:39 sanoj joined #gluster
05:40 hgowtham joined #gluster
05:43 mbukatov joined #gluster
05:47 msvbhat joined #gluster
05:49 gyadav__ joined #gluster
05:52 itisravi joined #gluster
05:57 Prasad joined #gluster
06:02 kdhananjay1 joined #gluster
06:03 apandey_ joined #gluster
06:07 Saravanakmr joined #gluster
06:13 poornima_ joined #gluster
06:24 susant joined #gluster
06:29 Anarka joined #gluster
06:30 apandey__ joined #gluster
06:30 Anarka joined #gluster
06:32 g_work joined #gluster
06:32 jtux joined #gluster
06:32 BlackoutWNCT1 joined #gluster
06:33 rouven_ joined #gluster
06:36 Saravanakmr joined #gluster
06:38 jtux joined #gluster
06:41 Teraii joined #gluster
06:42 prasanth joined #gluster
06:42 kdhananjay joined #gluster
06:42 [diablo] joined #gluster
06:42 ivan_rossi joined #gluster
06:45 kdhananjay1 joined #gluster
06:46 rafi1 joined #gluster
06:52 BlackoutWNCT joined #gluster
06:52 gospod3 joined #gluster
06:53 skoduri joined #gluster
06:54 rwheeler joined #gluster
06:55 blue joined #gluster
07:03 kdhananjay joined #gluster
07:09 apandey_ joined #gluster
07:10 madwizard joined #gluster
07:10 skumar joined #gluster
07:17 _ndevos joined #gluster
07:17 _ndevos joined #gluster
07:20 fsimonce joined #gluster
07:22 apandey__ joined #gluster
07:23 Wizek_ joined #gluster
07:27 rastar joined #gluster
07:27 skumar joined #gluster
07:28 MrAbaddon joined #gluster
07:32 apandey_ joined #gluster
07:35 rouven joined #gluster
07:39 legreffier joined #gluster
07:42 tdasilva joined #gluster
07:53 flomko joined #gluster
07:54 weller good morning everyone ;-) fresh start to face the daily problems: switching from fuse-mounted samba share to vfs_gluster destroys my ACLs (previously set permissions messed up, cannot set new permissions)... is there anything I can do? centos 7.4, gluster 3.12, samba 4.7... thanks in advance!
08:00 kdhananjay joined #gluster
08:03 kdhananjay1 joined #gluster
08:08 kdhananjay joined #gluster
08:14 _KaszpiR_ joined #gluster
08:14 kdhananjay joined #gluster
08:15 msvbhat joined #gluster
08:19 ThHirsch joined #gluster
08:21 weller joined #gluster
08:22 xavih joined #gluster
08:26 _KaszpiR_ joined #gluster
08:30 kdhananjay1 joined #gluster
08:38 owlbot joined #gluster
08:46 sanoj joined #gluster
08:46 kdhananjay joined #gluster
08:58 ThHirsch joined #gluster
09:16 skumar joined #gluster
09:18 flomko joined #gluster
09:27 nbalacha joined #gluster
09:31 shdeng joined #gluster
09:34 skumar_ joined #gluster
09:37 kramdoss_ joined #gluster
09:38 marin[m] guys, it's not clear to me which is the latest stable version of glusterfs
09:38 marin[m] https://www.gluster.org/install/
09:38 glusterbot Title: Gluster » Install (at www.gluster.org)
09:39 marin[m] it says latest is 3.12, but there's also a link to glusterfs 3.10 (stable)
09:39 ndevos marin[m]: latest is 3.12.x, but we also do updates for 3.10.x
09:39 marin[m] so does that mean that 3.12 is.. a development version?
09:39 ndevos no, 3.10 is still maintained for the people that can not yet take the time to upgrade to 3.12
09:39 marin[m] so i should be using 3.12 in production, right?
09:40 ndevos yes, for new deployments 3.12 is recommended
09:40 marin[m] ok, got it, thanks!
09:40 ndevos yw!
09:41 prasanth joined #gluster
09:44 kdhananjay joined #gluster
09:52 kdhananjay joined #gluster
10:02 nbalacha joined #gluster
10:08 skoduri joined #gluster
10:09 jiffin joined #gluster
10:21 msvbhat joined #gluster
10:25 nh2 joined #gluster
10:25 shyam joined #gluster
10:41 itisravi joined #gluster
10:43 buvanesh_kumar joined #gluster
10:48 * weller starts to wonder if gluster & samba are not compatible with Windows ACLs
10:49 msvbhat joined #gluster
10:50 weller guess there are plenty of users out there. however, am I really the only one who cannot make Windows ACLs work on a samba/ctdb cluster with vfs_gluster?
10:51 weller would that be the gluster part, or the samba part btw? when I mount the gluster volume, and then export, everything works as expected. directly sharing the volume with the gluster vfs messes up the permissions.
10:53 anoopcs weller, what error are you facing while setting up ACLs from the Windows side?
10:53 weller with a user that is in the 'domain admins' group -> permission denied
10:54 weller with the domain administrator itself -> no error message, but permissions are not as set
10:56 Saravanakmr joined #gluster
10:56 anoopcs weller, Can you please paste the getfacl output on root of the volume(mount-point) after a fuse mount of the gluster volume?
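A quick way to gather that (volume name "myvol" and mount point are placeholders; the acl mount option is needed if POSIX ACLs should be visible through FUSE):

    mount -t glusterfs -o acl server1:/myvol /mnt/myvol   # fuse mount of the gluster volume
    getfacl /mnt/myvol                                     # ACLs on the root of the volume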
10:58 skumar_ joined #gluster
10:59 weller http://pastebin.centos.org/295216/15065099/
11:00 jkroon joined #gluster
11:02 weller this is how I created the share: http://pastebin.centos.org/295241/50651010/
11:02 skoduri joined #gluster
11:26 rastar joined #gluster
11:33 atinm joined #gluster
11:37 rastar joined #gluster
11:41 shdeng joined #gluster
11:56 weller anoopcs, this would be testparm (sorry, mixed the channel...) http://pastebin.centos.org/295376/15065133/
12:02 prasanth joined #gluster
12:15 shyam joined #gluster
12:33 jiffin joined #gluster
12:43 jiffin1 joined #gluster
12:46 weller solution: net conf setparm <share> "vfs objects" "acl_xattr glusterfs"
12:47 nbalacha joined #gluster
12:48 weller global config of acl_xattr seems not enough
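Putting weller's fix together with the registry-based share setup from the earlier paste, the full sequence might look roughly like this (share name "data" and volume name "gv0" are placeholders):

    net conf addshare data / writeable=y guest_ok=n "gluster share"
    net conf setparm data "vfs objects" "acl_xattr glusterfs"   # acl_xattr listed before glusterfs, per the fix above
    net conf setparm data "glusterfs:volume" "gv0"
    net conf setparm data "kernel share modes" "no"

The key point is that "vfs objects" is set per share; setting acl_xattr only in the [global] section was not enough.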
12:56 dominicpg joined #gluster
12:57 prasanth_ joined #gluster
13:10 msvbhat joined #gluster
13:13 weller are there recommended tweaks for a gluster machine with lots of ram?
13:18 baber joined #gluster
13:20 kramdoss_ joined #gluster
13:22 buvanesh_kumar joined #gluster
13:29 xavih joined #gluster
13:36 poornima joined #gluster
13:38 rouven joined #gluster
13:39 skylar joined #gluster
13:44 plarsen joined #gluster
13:52 farhorizon joined #gluster
13:54 jiffin1 joined #gluster
13:56 skoduri joined #gluster
13:59 jiffin joined #gluster
14:01 _KaszpiR_ joined #gluster
14:06 toredl joined #gluster
14:07 skumar__ joined #gluster
14:11 atinm joined #gluster
14:18 jefarr joined #gluster
14:21 hmamtora joined #gluster
14:21 hmamtora_ joined #gluster
14:27 garbageyard joined #gluster
14:27 WebertRLZ joined #gluster
14:28 vbellur1 joined #gluster
14:29 vbellur joined #gluster
14:30 vbellur joined #gluster
14:30 vbellur joined #gluster
14:35 dgandhi joined #gluster
14:36 baber joined #gluster
14:43 jefarr Hello all, I've been doing some comparison tests with nfs-ganesha, the built-in NFS, and replicated vs distributed volumes, and I'm curious if my results are typical.  Using smallfile on a distributed volume I see nice scaling of IOPS, MiB/s and files/s as I add threads, but when I use a replicated volume all these stats drop by a factor of 4 and don't appear to get any benefit from adding more threads.. which is disappointing.
14:46 jefarr The distributed volume was just 4 hosts with 1 brick each, the replicated is 8 hosts with 1 brick each.
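A sketch of the kind of smallfile run being compared (mount points are placeholders; option names as in the smallfile README):

    python smallfile_cli.py --operation create --threads 8 --files 10000 --file-size 64 --top /mnt/distvol
    python smallfile_cli.py --operation create --threads 8 --files 10000 --file-size 64 --top /mnt/replvol

With the native client, writes go to every brick in a replica set, so a drop roughly proportional to the replica count is expected for small-file write workloads; the distributed volume writes each file only once.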
14:47 Neoon joined #gluster
14:47 rouven joined #gluster
14:50 kpease_ joined #gluster
14:58 rouven joined #gluster
14:58 MrAbaddon joined #gluster
15:02 rouven joined #gluster
15:06 aravindavk_ joined #gluster
15:07 gyadav__ joined #gluster
15:07 jstrunk joined #gluster
15:08 Anarka joined #gluster
15:12 jstrunk joined #gluster
15:13 xavih joined #gluster
15:25 jbrooks joined #gluster
15:27 xavih joined #gluster
15:29 JonathanD joined #gluster
15:30 _KaszpiR_ joined #gluster
15:38 xavih joined #gluster
15:39 baber joined #gluster
15:40 ThHirsch joined #gluster
15:40 sohmestra joined #gluster
15:45 xavih joined #gluster
15:48 susant joined #gluster
15:50 msvbhat joined #gluster
15:58 farhoriz_ joined #gluster
16:06 bchilds joined #gluster
16:12 ivan_rossi left #gluster
16:19 baber joined #gluster
16:22 vbellur joined #gluster
16:22 vbellur joined #gluster
16:22 weller are there updated instructions on how to use nfs-ganesha with gluster?
16:23 weller 'gluster nfs-ganesha enable' works fine for 3.10, but not for 3.12 anymore...
16:24 cloph don't think much has changed with this - ganesha is running/not reporting issues?
16:25 weller cloph: we have disabled it completely before for other reasons, and now after upgrading to 3.12 we wanted to play with it again
16:25 weller 'unrecognized word: nfs-ganesha (position 0)'
16:27 cloph I was referring to nfs-ganesha the system service (i.e. systemctl start nfs-ganesha) (and enable, so it will launch at boot)
16:28 cloph (and for gluster: you'd need to specify your volume, there is no global command "nfs-ganesha")
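For the service side, a minimal sketch (service name as shipped with the nfs-ganesha package on CentOS 7):

    systemctl enable nfs-ganesha    # launch at boot
    systemctl start nfs-ganesha
    systemctl status nfs-ganesha    # confirm it is running and not reporting issues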
16:28 weller sorry, misunderstood. nfs-ganesha runs just fine
16:28 weller 'To setup the HA cluster, enable NFS-Ganesha by executing the following command: #gluster nfs-ganesha enable'
16:28 weller https://gluster.readthedocs.io/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Integration/
16:29 glusterbot Title: Configuring NFS-Ganesha server - Gluster Docs (at gluster.readthedocs.io)
16:35 farhorizon joined #gluster
16:36 elico1 joined #gluster
16:37 baber joined #gluster
16:39 elico1 *: I want to use glusterfs to share a specific directory between 5 nodes.
16:39 elico1 They all need to have the same files in sync, I am using this shared directory as a "flags board".
16:39 elico1 Each node has a running service which updates the flags periodically with its unique name or id, and two of the nodes monitor the FS for changes and states.
16:39 elico1 What type of volume do I need? replica 5?
16:39 cloph ah, you're setting up HA - for that the scripts were removed
16:40 weller cloph: well thats unfortunate :D
16:40 ic0n joined #gluster
16:40 cloph https://review.gluster.org/#/c/16506/ - HA stuff is handled differently now (but never used that myself)
16:41 glusterbot Title: Gerrit Code Review (at review.gluster.org)
16:41 ndevos kkeithley, jiffin ^
16:42 cloph elico1: no, the replica count doesn't depend on the number of clients - how much replica you need depends on how much redundancy you want/need (how many gluster bricks you want to be able to "lose" without impacting the volume)
16:45 elico1 cloph: I just need it to be consistent across the 5 servers and I need 1 copy of each file on each of the nodes.
16:46 elico1 kind of a raid mirror across 5 nodes..
16:46 cloph statement doesn't really make sense.
16:46 cloph you can mount a volume to multiple clients and have access to the files.
16:47 elico1 cloph: I want 5 servers to have a shared volume... not as clients..
16:47 cloph but if you really need 5 copies stored on the gluster bricks, then yes, you'd need replica 5, but I think you're misunderstanding stuff.
16:47 elico1 cloph: what I might not understand?
16:47 kkeithley the scripts are still in 3.10.
16:47 cloph whatever mounts the volume is a client.
16:49 rafi joined #gluster
16:49 elico1 I don't want anyone to mount the volume.. just five nodes that will have a volume setup.
16:49 elico1 I assumed that if I am using replica 5, then when I "touch /vol-dir/1" the file will be updated automatically on the other nodes as well.
16:50 elico1 cloph: Did I get it wrong?
16:50 cloph you must mount the volume, as otherwise you cannot access the data at all.
16:51 cloph so on all hosts you want to do the touch, you need to mount the volume.
16:51 plarsen joined #gluster
16:52 weller @kkeithley: talking about 3.12
16:52 elico1 cloph: so it's not enough that I'm a part of the volume? I must mount it on each node to actually share it?
16:53 vbellur joined #gluster
16:54 vbellur joined #gluster
16:55 rafi joined #gluster
16:57 vbellur1 joined #gluster
16:57 weller elico1: to _share_ it, there are other options, but what you have described requires that you mount the gluster volume on each node. having replica gives you only additional nodes with the same data backend.
16:57 gyadav__ joined #gluster
16:57 vbellur1 joined #gluster
17:02 elico1 weller: I will try to describe what I am doing.
17:02 elico1 I have a cluster of proxy servers and 2-3 management nodes. Each of the proxy nodes in the cluster should write/update its current state to the shared volume, in its dedicated directory.
17:02 elico1 The role of this share should be something like a shared memory between the cluster of proxies and the masters.
17:02 elico1 The masters should be able to update or challenge the proxy to verify that they are still alive.
17:02 elico1 Now the simplest way to share a directory between all the nodes in this setup is using either some key=>value replicated DB or to use GlusterFS with a simple FS structure on-top of it.
17:04 elico1 Locking should never be a problem since the files that I will use as "flags" will always be unique when each of the nodes is writing.
17:04 elico1 The only matter is to make sure that all will have the same up-to-date FS.
17:16 cloph memcache or similar could also do this, or just a regular nfs export - absolutely no need for gluster.
17:16 cloph Anything you can mount on all of the nodes or access in some other way will do, nothing that would need replication or any feature specific to gluster
17:19 rastar joined #gluster
17:20 elico1 cloph: no need for gluster in general but what I want from gluster is auto recovery and auto replication which is distributed across all the nodes.
17:20 elico1 I was thinking about other options such as etcd or consul but I am more familiar with gluster.
17:21 elico1 The other option is to create a gluster volume on the  masters and mount it on each of the proxies.
17:22 rouven_ joined #gluster
17:22 cloph sorry, but it doesn't seem like you're familiar with gluster at all.
17:23 kkeithley ndevos: you could ask obnox and jarrpa (if they were here, they seem to be hiding) where storhaug is. (How's that for passing the buck?)
17:25 elico1 cloph: what do you think about this idea: https://www.brightbox.com/docs/guides/glusterfs/
17:25 glusterbot Title: Replicated filesystem with GlusterFS - Brightbox Cloud (at www.brightbox.com)
17:25 cloph you surely can use gluster with any amount of replication for it; since you mention recovery, it should be at least three peers with replica 3, or 2 with an arbiter (but if you're just using flag files, then it's more or less the same)
17:26 cloph no need for all the clients (those machines who touch the file) to be part of the volume, don't even need to be the same machines at all.
17:26 cloph but you'll have to mount the gluster volume on each of the nodes where you want to touch it.
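A sketch of what that could look like for the "flag board" use case (hostnames and brick paths are placeholders; replica 3 rather than replica 5, per the redundancy point above):

    gluster volume create flags replica 3 \
        node1:/bricks/flags/brick node2:/bricks/flags/brick node3:/bricks/flags/brick
    gluster volume start flags
    # on every machine that reads or writes the flags - including the servers themselves:
    mount -t glusterfs node1:/flags /mnt/flags

Being a peer that hosts a brick does not by itself give a node a usable view of the files; only the mounted volume does.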
17:27 elico1 cloph: thanks!
17:29 garbageyard joined #gluster
17:34 baber joined #gluster
17:36 rafi joined #gluster
17:53 Humble joined #gluster
17:53 rouven_ joined #gluster
17:59 jefarr joined #gluster
18:03 rouven_ joined #gluster
18:07 MrAbaddon joined #gluster
18:08 kkeithley joined #gluster
18:12 gospod2 joined #gluster
18:15 kkeithley joined #gluster
18:18 rouven_ joined #gluster
18:19 kkeithley joined #gluster
18:20 skylar joined #gluster
18:25 farhorizon joined #gluster
18:30 baber joined #gluster
18:31 jefarr I was reading through a blog (which I've now lost the link to) and it mentioned something I hadn't thought of: adding multiple bricks to each host for a replicated cluster.  Is this a recommended method, and can it improve performance? (presuming it doesn't send data over the network when two bricks are on the same host)
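A hypothetical layout for that idea - 4 hosts, 2 bricks each, replica 2 - where brick order matters because bricks are grouped into replica sets in the order listed:

    gluster volume create gv0 replica 2 \
        host1:/bricks/b1 host2:/bricks/b1 \
        host3:/bricks/b1 host4:/bricks/b1 \
        host1:/bricks/b2 host2:/bricks/b2 \
        host3:/bricks/b2 host4:/bricks/b2
    gluster volume start gv0

The assumption in the question doesn't quite hold for the native client: replication is driven by the client, which writes to every brick of a replica set over the network, so co-locating the bricks of one replica set on a single host would not save traffic (and gluster refuses such a layout without force). Extra bricks per host mainly add distribute subvolumes and parallelism.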
18:32 shyam joined #gluster
18:32 rafi1 joined #gluster
18:33 rafi joined #gluster
18:34 branko joined #gluster
18:36 buvanesh_kumar joined #gluster
18:40 branko left #gluster
18:53 alvinstarr joined #gluster
18:54 alvinstarr joined #gluster
19:00 hmamtora_ I have a 6 node replicated gluster on 3.8.13; for one of the nodes I see this - State: Sent and Received peer request (Connected) - when I run gluster peer status from the 4 other nodes
19:00 hmamtora_ Anybody seen this above issue ^^^
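One way to narrow that down (a sketch; the stuck state often clears after restarting glusterd on the affected peer):

    gluster peer status           # run on each of the 6 nodes and compare the reported states
    gluster pool list             # UUID/hostname view as seen from each node
    systemctl restart glusterd    # on the node the other 4 report as "Sent and Received peer request"

If it persists, comparing the entries under /var/lib/glusterd/peers/ across nodes usually shows which peer record is out of sync.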
19:13 ThHirsch joined #gluster
19:13 rwheeler joined #gluster
19:19 Saravanakmr joined #gluster
19:20 fenikso joined #gluster
19:22 rouven joined #gluster
19:25 fenikso Will I run into problems if one of my gluster servers is running 3.10 (forced to use ubuntu 14.04) and the rest use 3.12?
19:25 fenikso or is there a way to get 3.12 on ubuntu 14.04?
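Mixing 3.10 and 3.12 servers is generally tolerated for rolling upgrades as long as the cluster op-version stays at the level of the oldest server, though it's not a good long-term state. A sketch for checking (31200 is the assumed op-version for 3.12; only bump it once every peer has been upgraded):

    gluster volume get all cluster.op-version       # what the cluster currently operates at
    gluster volume get all cluster.max-op-version   # highest version all connected peers support
    gluster volume set all cluster.op-version 31200 # only after all servers run 3.12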
19:27 baber joined #gluster
19:28 jkroon joined #gluster
19:43 msvbhat joined #gluster
19:52 _nixpanic joined #gluster
19:52 _nixpanic joined #gluster
20:00 vbellur joined #gluster
20:20 elico1 cloph: I am researching etcd since it's probably a good choice for a distributed key/value DB.
20:32 loadtheacc joined #gluster
20:40 msvbhat joined #gluster
20:52 _KaszpiR_ joined #gluster
20:53 rouven joined #gluster
21:12 nh2 are there instructions somewhere for how to upgrade a geo-replication session after upgrading (e.g. from 3.10 to 3.12)? The paths to the old binary seem to be hardcoded in /var/lib/glusterd/geo-replication/distfs_10.0.1.1_distfs-georep/gsyncd.conf
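One commonly suggested approach is to recreate the session config after upgrading so gsyncd.conf is regenerated with the new binary paths (the volume and host names below mirror the session mentioned; treat this as a sketch and verify against the upgrade guide for the versions involved):

    gluster volume geo-replication distfs 10.0.1.1::distfs-georep stop
    gluster volume geo-replication distfs 10.0.1.1::distfs-georep create push-pem force   # regenerates the session config
    gluster volume geo-replication distfs 10.0.1.1::distfs-georep start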
21:13 farhorizon joined #gluster
21:30 Brainspackle joined #gluster
21:30 Brainspackle hey folks, wondering if anyone has any insight on this 3+ year old bug that has been cloned and closed for a few years now: https://bugzilla.redhat.com/show_bug.cgi?id=1138970
21:30 glusterbot Bug 1138970: urgent, unspecified, ---, rgowdapp, CLOSED EOL, file corruption during concurrent read/write
21:31 Brainspackle the latest clone is at https://bugzilla.redhat.com/show_bug.cgi?id=1286102 but lacks any context
21:31 glusterbot Bug 1286102: urgent, unspecified, ---, pkarampu, NEW , file corruption during concurrent read/write
21:33 Brainspackle i believe that bug is affecting a gluster deployment of ours :(
21:34 Brainspackle seems crazy to me though that a file corruption bug has been kicked down the road for 3 years
21:51 vbellur joined #gluster
21:56 Brainspackle scary stuff
22:02 hmamtora__ joined #gluster
22:03 hmamtora joined #gluster
22:09 flomko joined #gluster
22:28 Acinonyx joined #gluster
22:37 hmamtora__ joined #gluster
22:39 hmamtora joined #gluster
23:12 Brainspackle joined #gluster
23:13 Jacob843 joined #gluster
23:41 vbellur joined #gluster
23:47 uebera|| joined #gluster
23:47 uebera|| joined #gluster
23:57 Brainspackle bueller?
