
IRC log for #gluster, 2017-10-11


All times shown according to UTC.

Time Nick Message
00:07 XpineX joined #gluster
00:15 farhoriz_ joined #gluster
01:01 daMaestro joined #gluster
01:15 vbellur joined #gluster
01:37 jkroon joined #gluster
01:55 ilbot3 joined #gluster
01:55 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:02 gospod2 joined #gluster
02:27 ramteid joined #gluster
02:45 rideh joined #gluster
02:50 sadbox joined #gluster
02:58 MrAbaddon joined #gluster
03:19 daMaestro joined #gluster
03:24 kramdoss_ joined #gluster
03:29 MrAbaddon joined #gluster
03:37 kramdoss_ joined #gluster
03:43 JPaul joined #gluster
03:45 itisravi joined #gluster
03:46 nbalacha joined #gluster
03:49 ppai joined #gluster
03:52 dlambrig joined #gluster
03:53 JPaul joined #gluster
04:10 psony joined #gluster
04:12 apandey joined #gluster
04:14 Prasad joined #gluster
04:21 Prasad joined #gluster
04:23 ndarshan joined #gluster
04:32 msvbhat_ joined #gluster
04:32 msvbhat joined #gluster
04:32 atinm joined #gluster
04:32 atrius joined #gluster
04:50 rouven joined #gluster
04:51 ramteid joined #gluster
04:52 sahina joined #gluster
04:53 dominicpg joined #gluster
05:05 susant joined #gluster
05:07 skumar joined #gluster
05:07 msvbhat_ joined #gluster
05:07 msvbhat joined #gluster
05:10 aravindavk joined #gluster
05:17 omie888777 joined #gluster
05:18 jiffin joined #gluster
05:19 xavih joined #gluster
05:25 karthik_us joined #gluster
05:29 sanoj joined #gluster
05:33 aravindavk joined #gluster
05:37 Prasad_ joined #gluster
05:41 Prasad joined #gluster
05:43 kdhananjay joined #gluster
05:47 atinm joined #gluster
05:48 karthik_us joined #gluster
05:54 ppai joined #gluster
06:03 aravindavk joined #gluster
06:05 hgowtham joined #gluster
06:06 msvbhat joined #gluster
06:06 msvbhat_ joined #gluster
06:09 karthik_us joined #gluster
06:11 rafi joined #gluster
06:13 Saravanakmr joined #gluster
06:19 Prasad_ joined #gluster
06:20 sanoj joined #gluster
06:21 Prasad joined #gluster
06:25 ppai joined #gluster
06:25 jtux joined #gluster
06:27 atinm joined #gluster
06:29 ws2k3 joined #gluster
06:30 ws2k3 joined #gluster
06:30 ws2k3 joined #gluster
06:31 ws2k3 joined #gluster
06:31 ws2k3 joined #gluster
06:32 ws2k3 joined #gluster
06:34 skoduri joined #gluster
06:34 jtux joined #gluster
06:35 poornima_ joined #gluster
06:45 lefreut joined #gluster
06:45 Prasad joined #gluster
06:52 msvbhat joined #gluster
06:52 msvbhat_ joined #gluster
06:57 mbukatov joined #gluster
06:59 Prasad_ joined #gluster
07:01 fsimonce joined #gluster
07:03 Prasad joined #gluster
07:04 msvbhat__ joined #gluster
07:05 msvbhat joined #gluster
07:12 bEsTiAn joined #gluster
07:20 skoduri joined #gluster
07:31 rouven left #gluster
07:31 rouven joined #gluster
07:32 d-fence joined #gluster
07:32 d-fence_ joined #gluster
07:44 jkroon joined #gluster
07:48 weller hi, is there documentation somewhere how to setup ganesha with high availability manually on gluster 3.12? docs.gluster.org/en/latest still refers to the old setup scripts that have been removed..
07:51 apandey joined #gluster
07:57 jiffin weller: for the time being we don't have proper documentation for storhaug
07:58 jiffin kkeithley can help u bit and he will be online in 5 hr or so
07:59 lefreut hey there. Any doc on the geo replications uses cases, including write/read workflow on the clients side?
07:59 lefreut actual meaning of the question: is it usable in other cases than backup?
08:03 nbalacha misc, nigelb , ping
08:03 glusterbot nbalacha: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
08:03 nbalacha misc, nigelb any luck with Zhang's  login issue?
08:06 misc nbalacha: nope, I did start to take a look, but didn't get far. What is the bug number again ?
08:07 nbalacha misc - there was no infra BZ filed. The only mention was in https://bugzilla.redhat.com/show_bug.cgi?id=1490642#c4
08:07 glusterbot Bug 1490642: unspecified, unspecified, ---, zhhuan, NEW , glusterfs client crash when removing directories
08:08 anoopcs BlackoutWNCT, How can we help with vfs module for gluster? Can you elaborate on the issue?
08:10 BlackoutWNCT anoopcs, The issue that I'm having is that samba appears to enter a panic state when any share which is configured to use the vfs module is accessed. This prevents access to that share, however doesn't appear to cause any disruption to other shares which do not use the module.
08:10 misc nbalacha: ok, so I guess a bug would help, cause i kinda forgot the details on this one :/
08:10 anoopcs BlackoutWNCT, please provide the output of `testparm -s` and your volume config.
08:10 BlackoutWNCT https://paste.ubuntu.com/25631741/
08:10 anoopcs via some paste service
08:10 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
08:11 nbalacha misc, thanks. I will ask him to file a bug
08:11 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
08:11 misc I suspect we may need to dig in the db, but his account was present
08:12 BlackoutWNCT anoopcs, there's also this, which is an extract from the samba log from the time of access of the share https://paste.ubuntu.com/25711392/
08:12 [diablo] joined #gluster
08:12 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
08:12 skoduri joined #gluster
08:13 anoopcs BlackoutWNCT, What are the samba and gluster versions?
08:14 nigelb nbalacha: working through that at the moment.
08:14 nigelb nbalacha: I've made the changes on staging, waiting to see what happens.
08:14 BlackoutWNCT anoopcs, Samba:4.3.11-Ubuntu gluster: 3.10.6
08:15 nigelb nbalacha / misc - There was a BZ, bug 1494363
08:15 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1494363 unspecified, unspecified, ---, nigelb, ASSIGNED , Couldn't log into review.gluster.org with GitHub account
08:16 nbalacha nigelb, thanks
08:19 ThHirsch joined #gluster
08:19 buvanesh_kumar joined #gluster
08:21 anoopcs BlackoutWNCT, Can you please increase the samba log level to a higher value (maybe 6 or 7) and share the logs for understanding the context?
08:24 rouven left #gluster
08:24 BlackoutWNCT anoopcs, which log would you like? the one that you have currently is the connection log defined under global as /var/log/samba/log.%m however I don't see an option to increase the logging level for this. I can provide you with the log file defined under the share if you would like?
08:24 rouven joined #gluster
08:25 anoopcs BlackoutWNCT, Use smb.conf parameter 'log level' to increase the log level
08:31 BlackoutWNCT anoopcs, https://paste.ubuntu.com/25718699/
08:31 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
08:31 BlackoutWNCT This is log level = 7
08:31 BlackoutWNCT If you need it raised, let me know
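For context, the log level anoopcs refers to lives in smb.conf; a minimal sketch of the relevant settings, where the share name, volume name, and paths are placeholders and not taken from the log:

```ini
# Hypothetical smb.conf excerpt -- all names and paths are placeholders.
[global]
    log level = 7                      ; raise verbosity globally (0-10)
    log file = /var/log/samba/log.%m   ; one log per connecting machine

[gluster-share]
    vfs objects = glusterfs
    glusterfs:volume = gv0             ; gluster volume backing the share
    path = /
    read only = no
```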
08:31 rouven joined #gluster
08:38 dlambrig joined #gluster
08:53 msvbhat joined #gluster
08:53 msvbhat_ joined #gluster
09:05 jkroon joined #gluster
09:16 Wizek_ joined #gluster
09:17 anoopcs BlackoutWNCT, Not very clear still. Maybe there are not enough logs in the area where it crashed. So you are able to make connections to all other shares, right?
09:18 msvbhat joined #gluster
09:19 msvbhat_ joined #gluster
09:23 skoduri joined #gluster
09:24 MrAbaddon joined #gluster
09:31 _KaszpiR_ joined #gluster
09:37 msvbhat joined #gluster
09:38 msvbhat_ joined #gluster
09:42 nbalacha joined #gluster
09:45 sanoj joined #gluster
09:55 cloph_away joined #gluster
10:02 skoduri joined #gluster
10:09 cloph_away joined #gluster
10:20 skoduri joined #gluster
10:21 MeltedLux joined #gluster
10:27 susant joined #gluster
10:29 shyam joined #gluster
10:33 kramdoss_ joined #gluster
10:33 ws2k3 joined #gluster
10:33 yosafbridge joined #gluster
10:34 ws2k3 joined #gluster
10:34 ws2k3 joined #gluster
10:34 ws2k3 joined #gluster
10:35 ws2k3 joined #gluster
10:35 ws2k3 joined #gluster
10:55 MrAbaddon joined #gluster
10:57 legreffier win 12
11:02 fabianvf_ joined #gluster
11:03 Bonaparte left #gluster
11:05 masuberu joined #gluster
11:31 kotreshhr joined #gluster
11:42 msvbhat joined #gluster
11:42 msvbhat_ joined #gluster
11:51 kkeithley weller, jiffin: there is no doc for using storhaug. If you need nfs-ganesha with HA, you _must_ use 3.10.
11:51 kkeithley At least until storhaug becomes usable.
11:51 rouven joined #gluster
11:52 kotreshhr left #gluster
11:58 ws2k3 joined #gluster
12:00 MrAbaddon joined #gluster
12:02 jiffin1 joined #gluster
12:06 nh2 joined #gluster
12:16 jiffin1 joined #gluster
12:26 lefreut_ joined #gluster
12:48 shyam joined #gluster
12:50 vbellur joined #gluster
12:51 nbalacha joined #gluster
12:52 vbellur joined #gluster
12:53 shyam joined #gluster
13:03 aravindavk joined #gluster
13:04 vbellur joined #gluster
13:05 vbellur joined #gluster
13:06 vbellur joined #gluster
13:06 vbellur joined #gluster
13:07 vbellur joined #gluster
13:08 vbellur joined #gluster
13:09 vbellur joined #gluster
13:10 vbellur1 joined #gluster
13:12 msvbhat joined #gluster
13:12 vbellur joined #gluster
13:13 msvbhat_ joined #gluster
13:15 karthik_us joined #gluster
13:21 yoavz joined #gluster
13:21 jkroon joined #gluster
13:22 lefreut joined #gluster
13:23 lefreut is there a minimal shard block size? can't find it in the doc
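To lefreut's question: recent GlusterFS releases enforce a 4MB lower bound on features.shard-block-size (the volume name below is a placeholder); a quick way to check on a given build:

```
# Volume name "gv0" is a placeholder.
gluster volume get gv0 features.shard-block-size      # show the current value
gluster volume set gv0 features.shard-block-size 4MB  # smallest value the CLI accepts
```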
13:24 rastar joined #gluster
13:33 vbellur joined #gluster
13:34 vbellur joined #gluster
13:35 vbellur1 joined #gluster
13:36 rastar joined #gluster
13:40 vbellur joined #gluster
13:42 skylar1 joined #gluster
13:43 vbellur joined #gluster
13:43 susant joined #gluster
13:44 vbellur1 joined #gluster
13:47 hmamtora joined #gluster
13:48 hmamtora_ joined #gluster
13:48 dominicpg joined #gluster
13:50 kramdoss_ joined #gluster
13:51 nbalacha joined #gluster
13:57 vbellur joined #gluster
13:58 vbellur joined #gluster
14:00 vbellur joined #gluster
14:04 _KaszpiR_ joined #gluster
14:19 kshlm Community meeting will start in ~45 minutes in #gluster-meeting. Add your topics and updates to https://bit.ly/gluster-community-meetings . Hoping to actually have a productive meeting this time.
14:19 glusterbot Title: Gluster Community Meeting - HackMD (at bit.ly)
14:22 atinm joined #gluster
14:30 msvbhat joined #gluster
14:30 msvbhat_ joined #gluster
14:32 vbellur joined #gluster
14:37 ws2k3 joined #gluster
14:37 ws2k3 joined #gluster
14:38 ws2k3 joined #gluster
14:38 ws2k3 joined #gluster
14:39 _KaszpiR_ joined #gluster
14:39 ws2k3 joined #gluster
14:39 ws2k3 joined #gluster
14:40 skumar joined #gluster
14:48 kpease joined #gluster
14:59 farhorizon joined #gluster
15:09 farhorizon joined #gluster
15:24 abazigal joined #gluster
15:32 abazigal Hi there ! I just did a series of tests on gluster (v3.12.1, 30 servers x 2 bricks, replica 3 arbiter 1) and I have some questions on my mind :
15:33 abazigal why is the default value of network.ping-timeout so high? what would be the shortcomings of setting it to, let's say, 5, on a local production network?
15:37 msvbhat joined #gluster
15:37 abazigal 2) is there a way to remove "nicely" a server or a brick from the volume, for maintenance purpose or whatever, without facing the "volume completely blocked during network.ping-timeout seconds" ?
15:38 abazigal I tested various things (simple reboot, stop of glusterd, stop of glusterfsd), but I always had this hang
15:42 msvbhat_ joined #gluster
15:44 abazigal (when I say "remove" I mean "temporarily"; just the time to do some updates + reboot)
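A common approach to abazigal's second question, sketched under the assumption of a systemd-based install: stop the brick daemons explicitly before the reboot, so their TCP sockets close cleanly and clients fail over at once instead of waiting out network.ping-timeout.

```
# Maintenance sketch (assumes systemd packaging).
systemctl stop glusterd   # stops the management daemon only; bricks keep running
pkill glusterfsd          # stop brick daemons -> clients see the close immediately
pkill glusterfs           # stop auxiliary daemons (self-heal, etc.)
# ... apply updates, reboot ...
systemctl start glusterd  # bricks respawn; self-heal catches the node up
```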
15:51 dlambrig joined #gluster
15:51 farhorizon joined #gluster
16:07 atinm joined #gluster
16:25 hmamtora abazigal: yes if you have redundancy for that brick/node
16:26 kramdoss_ joined #gluster
16:28 snehring joined #gluster
16:32 msvbhat joined #gluster
16:32 msvbhat_ joined #gluster
16:34 jefarr_ joined #gluster
16:42 major joined #gluster
17:01 major joined #gluster
17:10 jefarr_ does the gluster fuse client support advisory file locking?  I'm building for a high concurrency (git) system and the docs specify that advisory locks are required... google isn't bringing up much yet, thanks!
17:14 msvbhat_ joined #gluster
17:14 msvbhat joined #gluster
17:18 farhorizon joined #gluster
17:24 ppai joined #gluster
17:25 snehring joined #gluster
17:27 mdeanda joined #gluster
17:29 rouven joined #gluster
17:30 rouven left #gluster
17:30 rouven joined #gluster
17:35 mdeanda hi all. new user here. i've setup gluster at home such that my old file server and my gaming machine and an intel nuc are in a cluster with the nuc being arbiter (since it has less storage). the gaming machine is usually off but the goal is that it would sync when i start it -- roughly a few times a week. i'm not really going for high availability but more for redundancy in case of drive failures. i plan
17:35 mdeanda on doing backups separately. both the nuc and file server will run docker and i plan on having them use gluster shares; the nuc works fine, but the server seems to have trouble mounting the glusterfs at startup. i saw this https://bugs.launchpad.net/ubuntu/+source/glusterfs/+bug/876648 but solutions seemed more like temporary workarounds. my fstab for each of nuc/server use the local hostname for the
17:35 glusterbot Title: Bug #876648 “Unable to mount local glusterfs volume at boot” : Bugs : glusterfs package : Ubuntu (at bugs.launchpad.net)
17:35 mdeanda mount, should i use the _other_ node as the mount server? would it slow it down? i wonder if that would help with the failed mount at startup
17:49 mdeanda interesting, i just restarted and it mounted fine. in this case i had both of the other machines running during the restart. normally only the arbiter is on while 2nd machine is starting
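One frequently suggested workaround for the boot-time race mdeanda describes (the fuse mount being attempted before glusterd is ready) is to defer the mount via systemd automount; the hostname, volume name, and mountpoint below are placeholders:

```
# /etc/fstab -- hostname, volume, and mountpoint are placeholders
nuc:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,x-systemd.automount,x-systemd.device-timeout=10  0 0
```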
17:50 prasanth joined #gluster
17:58 lefreut joined #gluster
18:02 Acinonyx joined #gluster
18:07 vbellur jefarr_: gluster fuse client supports advisory file locking
18:09 dr-gibson joined #gluster
18:11 saurabh joined #gluster
18:11 jefarr_ thanks vbellur :)
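vbellur's answer can be sanity-checked from the shell with util-linux flock; the path below is a placeholder (point it at a file on the fuse mount to exercise gluster's lock translator; any local file works for a dry run):

```shell
#!/bin/sh
# Advisory-lock smoke test. LOCKFILE is a placeholder; use a file on the
# gluster mount (e.g. /mnt/gv0/locktest) to test the fuse client itself.
LOCKFILE=${LOCKFILE:-/tmp/locktest}
touch "$LOCKFILE"

# Hold an exclusive advisory lock in a background process for 2 seconds.
flock -x "$LOCKFILE" -c 'sleep 2' &
sleep 0.2

# A non-blocking attempt from a second process should fail while it is held.
if flock -xn "$LOCKFILE" -c true; then
    RESULT="not blocked"
else
    RESULT="blocked"
fi
echo "second lock attempt: $RESULT"
wait
```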
18:12 dr-gibson Hi everyone, quick q, does anyone know what drives memory usage for gluster? i have two very similar systems with similar workload and each having a single standalone gluster volume
18:12 vbellur1 joined #gluster
18:13 dr-gibson on one system glusterfs consumes ~90mb and on the other close to 700mb
18:13 vbellur joined #gluster
18:13 dr-gibson just curious what could drive that difference
18:14 dr-gibson seems odd for a low usage volume to need 700mb of memory
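For dr-gibson's question, a statedump is the usual first step: it breaks a glusterfs process's memory down per translator and per pool, so the two systems can be diffed (the volume name is a placeholder; the dump directory can vary by distribution):

```
# Volume name "gv0" is a placeholder.
gluster volume statedump gv0
# dumps are typically written under /var/run/gluster/; compare the mempool
# and memusage sections between the ~90mb and ~700mb systems
```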
18:14 vbellur joined #gluster
18:15 vbellur joined #gluster
18:16 jiffin joined #gluster
18:20 saurabh We are having issues with gluster in 2 replication mode and sharding enabled. Gluster is running on a 10-node cluster, version 3.12.1. When we are writing files via the spark framework on two or more worker nodes, it randomly crashes complaining about `transport endpoint not connected`. Upon healing, the files are healed, however we are not able to pinpoint the problem. We wrote a small python-based benchmark to write lots of small files directly on
18:20 saurabh gluster and we ended up with "transport end point not connected". We tried following options
18:20 saurabh ssh ubuntu@dvorak-1-1.maas "sudo gluster v set hdd cluster.quorum-reads yes"
18:20 saurabh ssh ubuntu@dvorak-1-1.maas "sudo gluster v set hdd cluster.consistent-metadata on"
18:20 saurabh ssh ubuntu@dvorak-1-1.maas "sudo gluster v set hdd performance.flush-behind off"
18:20 saurabh ssh ubuntu@dvorak-1-1.maas "sudo gluster v set hdd performance.nfs.flush-behind off"
18:20 saurabh ssh ubuntu@dvorak-1-1.maas "sudo gluster v set hdd cluster.favorite-child-policy mtime"
18:20 saurabh ssh ubuntu@dvorak-1-1.maas "sudo gluster v set hdd performance.strict-write-ordering on"
18:20 saurabh ssh ubuntu@dvorak-1-1.maas "sudo gluster v set hdd performance.read-after-open on"
18:20 saurabh ssh ubuntu@dvorak-1-1.maas "sudo gluster v set hdd network.ping-timeout 120"
18:20 saurabh ssh ubuntu@dvorak-1-1.maas "sudo gluster v set hdd features.grace-timeout 120"
18:20 saurabh ssh ubuntu@dvorak-1-1.maas "sudo gluster v set hdd cluster.quorum-count 2"
18:20 saurabh ssh ubuntu@dvorak-1-1.maas "sudo gluster v set hdd cluster.quorum-type fixed"
18:20 saurabh ssh ubuntu@dvorak-1-1.maas "sudo gluster v set hdd features.shard on"
18:20 saurabh ssh ubuntu@dvorak-1-1.maas "sudo gluster v set hdd features.shard-block-size 64MB"
18:20 saurabh none of these options helped resolve the issue.
18:25 dxlsm saurabh: Dumb question: Did you verify your local iptables/firewall configuration and make sure nothing (like puppet, for instance) is resetting it?
18:26 dxlsm That sounds a lot like a networking problem to me.
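A quick way to rule basic reachability in or out, sketched with the usual default ports (the hostname and volume name are taken from saurabh's paste; actual brick ports come from volume status, and note this only covers the TCP side, not RDMA):

```
# 24007/tcp is glusterd management; each brick listens on its own port:
gluster volume status hdd              # lists each brick's listening port
nc -zv dvorak-1-1.maas 24007           # reachability check from a client
```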
18:26 branko joined #gluster
18:27 vbellur joined #gluster
18:28 vbellur joined #gluster
18:29 vbellur joined #gluster
18:29 vbellur joined #gluster
18:30 vbellur joined #gluster
18:30 saurabh We did look into firewall/iptables. But it did not help. We are using infiniband and gluster partition was created with transport rdma
18:31 rouven_ joined #gluster
18:31 saurabh When we write one file at a time in a directory at say 5 files/minute we do not run into issues of transport endpoint not connected.
18:31 branko Hi. I'm going slowly through the docs, and tried running the gluster-mountbroker command. This is on Debian 9.2 (glusterfs package version 3.12.1-2) - turns out the command depends on Python prettytable library (probably package python-prettytable).
18:32 branko What would be the correct place to report the issue?
18:32 branko (wasn't quite sure where the Debian-related scripts and configs are at for creating the packages)
18:32 MrAbaddon joined #gluster
18:32 branko Ah... glusterfs-debian :)
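An interim workaround for the missing dependency branko hit, assuming the Debian package name he guesses (python-prettytable) is the right one:

```
sudo apt-get install python-prettytable
```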
18:34 dxlsm saurabh: I'm not running over rdma, so I probably can't help much with debugging that. I'm sorry. I'm not a gluster expert and am actually here trying to get some information on why adding bricks to our cluster has killed performance to the point of being unusable.
18:35 saurabh dxlsm: have you tried rebalancing?
18:40 ws2k3 joined #gluster
18:40 ws2k3 joined #gluster
18:41 ws2k3 joined #gluster
18:47 jiffin joined #gluster
18:49 vbellur joined #gluster
19:02 dr-gibson anyone have any thoughts on the question i posed about regarding what drives memory consumption?
19:03 dxlsm saurabh: the rebalance is part of the problem. 145 hours and counting, and the cluster performance is unusable.
19:24 msvbhat_ joined #gluster
19:24 msvbhat joined #gluster
19:27 buvanesh_kumar joined #gluster
20:11 _KaszpiR_ joined #gluster
20:19 _KaszpiR_ joined #gluster
20:19 rouven joined #gluster
21:01 coincoyote joined #gluster
21:23 coincoyote just thought I'd pop in here and drop the VOISE music platform bomb on everyone https://www.voise.com
21:24 coincoyote https://trello.com/b/VzHr9Nov/voise-transparent-roadmap
21:24 glusterbot Title: VOISE : Crypto Music Streaming (at www.voise.com)
21:24 glusterbot Title: Trello (at trello.com)
21:24 coincoyote base on the latest ethereum smart contract technology VOISE offers artists a decentralized market where they can promote and sell 100% of there works on their own terms
21:24 coincoyote a free market
21:25 coincoyote thing of it like a decentralized point to point tunnel between you and the artist, and the artist sets the terms of the deal
21:26 coincoyote unlike other offerings out there, pay per use, or streaming services, VOISE takes on a more traditional approach where you receive a copy of the music from the artist
21:26 coincoyote you sample, and if you like you get a copy of the music
21:27 coincoyote VOISE
21:27 coincoyote going to be a huge market
21:27 omie888777 joined #gluster
21:27 coincoyote has the potential to take over
21:28 coincoyote 7 days until the big launch, still a good time to grab some shares and show your support for the project in it's early stages of development
21:28 coincoyote you can get them at https://www.livecoin.net/?from=Livecoin-aQ5jvzQ5
21:28 coincoyote best cryptocoin exchange, best price, best service
21:28 coincoyote VOISE
21:28 coincoyote check it out for yourself
21:29 vbellur joined #gluster
21:30 vbellur joined #gluster
21:32 mdeanda joined #gluster
21:35 omie888777 joined #gluster
22:33 marlinc joined #gluster
22:44 amye Dah, sorry, I didn't see all that come through. If said person comes back and does it again, I'll use the banhammer.
23:14 masber joined #gluster
23:30 plarsen joined #gluster
23:57 gospod2 joined #gluster
