
IRC log for #gluster, 2017-08-29


All times shown according to UTC.

Time Nick Message
00:27 plarsen joined #gluster
00:40 Guest9038 joined #gluster
01:02 masber joined #gluster
01:20 daMaestro joined #gluster
01:51 ilbot3 joined #gluster
01:51 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:02 gospod3 joined #gluster
02:18 gnulnx joined #gluster
02:19 gnulnx I'm having an issue that I can't track down, hoping someone else has dealt with it.  Two peers, visible to each other via 'gluster peer status'.  They can also ping each other.
02:20 gnulnx The problem is that when performing operations such as 'volume stop', I either get 'Error : Request timed out' or 'volume stop: ftp: failed: Another transaction is in progress for ftp. Please try again after sometime.'
02:20 gnulnx On 3.10.4
02:23 luizcpg joined #gluster
02:35 X-ian joined #gluster
02:35 gnulnx http://lists.gluster.org/pipermail/gluster-users/2017-June/031657.html matches the issue I am having; however, the network is fine, and restarting the gluster services didn't resolve the issue.
02:35 glusterbot Title: [Gluster-users] Gluster failure due to "0-management: Lock not released for <volumename>" (at lists.gluster.org)
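
A minimal, hedged sketch of the checks relevant to the "Another transaction is in progress" symptom above; the volume name ftp comes from this exchange, and systemd-managed services are assumed:

    gluster peer status            # both peers should show State: Peer in Cluster (Connected)
    gluster volume status ftp      # the operation that was timing out
    systemctl restart glusterd     # on every peer, to drop a stale cluster-wide lock
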
03:09 pioto joined #gluster
03:13 baojg joined #gluster
03:25 WebertRLZ joined #gluster
03:35 Guest9038 joined #gluster
03:35 luizcpg_ joined #gluster
03:36 sanoj joined #gluster
03:38 omie888777 joined #gluster
03:46 gyadav joined #gluster
03:47 nbalacha joined #gluster
03:52 itisravi joined #gluster
04:02 riyas joined #gluster
04:08 masber joined #gluster
04:13 nbalacha joined #gluster
04:15 atinmu joined #gluster
04:41 jiffin joined #gluster
04:43 _KaszpiR_ joined #gluster
04:43 aravindavk joined #gluster
04:50 skumar joined #gluster
04:50 rejy joined #gluster
04:57 _KaszpiR_ joined #gluster
04:57 rafi joined #gluster
04:59 major joined #gluster
05:03 pioto joined #gluster
05:07 msvbhat joined #gluster
05:12 poornima joined #gluster
05:17 sona joined #gluster
05:18 ndarshan joined #gluster
05:20 gyadav_ joined #gluster
05:22 itisravi joined #gluster
05:25 gyadav__ joined #gluster
05:26 _KaszpiR_ joined #gluster
05:29 Prasad joined #gluster
05:30 kdhananjay joined #gluster
05:33 dominicpg joined #gluster
05:38 prasanth joined #gluster
05:39 kotreshhr joined #gluster
05:44 karthik_us joined #gluster
05:46 shruti joined #gluster
05:48 Saravanakmr joined #gluster
05:53 hgowtham joined #gluster
05:53 apandey joined #gluster
06:02 apandey_ joined #gluster
06:04 apandey joined #gluster
06:04 apandey joined #gluster
06:07 buvanesh_kumar joined #gluster
06:15 jkroon joined #gluster
06:15 rafi joined #gluster
06:18 rafi1 joined #gluster
06:22 mbukatov joined #gluster
06:28 jtux joined #gluster
06:38 skoduri joined #gluster
06:44 ndarshan joined #gluster
06:47 sanoj joined #gluster
06:51 jtux joined #gluster
06:52 skumar joined #gluster
06:52 msvbhat joined #gluster
06:53 rastar joined #gluster
07:09 ndarshan joined #gluster
07:10 bEsTiAn joined #gluster
07:14 Saravanakmr joined #gluster
07:23 skumar_ joined #gluster
07:28 bEsTiAn joined #gluster
07:28 fsimonce joined #gluster
07:32 omie888777 joined #gluster
07:32 rastar joined #gluster
07:58 bens__ joined #gluster
08:06 shyu joined #gluster
08:22 weller joined #gluster
08:22 weller hi, I am having trouble re-enabling ganesha on a two-node cluster. I had the cluster up and running, and wanted to change the virtual IPs, so I did gluster nfs-ganesha disable, changed the IP values in ganesha.conf, and tried gluster nfs-ganesha enable...
08:25 weller also, I do not find the ganesha-ha.sh script in /usr/libexec/ganesha/
08:25 weller that folder only has 'nfs-ganesha-config.sh' inside. is this normal?
08:38 zuber joined #gluster
08:39 buvanesh_kumar joined #gluster
08:40 skoduri weller, that's strange... if you had the nfs-ganesha cluster up and running via "gluster nfs-ganesha enable" the first time, that means the ganesha-ha.sh script must have been present
08:40 skoduri weller, have you deleted/updated the glusterfs-ganesha rpm?
08:40 weller yep
08:41 weller I (re?) installed glusterfs-ganesha to get the files now
08:41 skoduri okay
08:41 weller not knowingly
08:43 weller yep, the last update had gluster packages in it
08:43 weller basically the update from 3.10.3 to 3.10.5
08:45 skoduri so you mean after the update from 3.10.3 to 3.10.5, the files went missing?
08:47 weller I honestly have no idea... we tested the whole thing previously and installed everything by copy-paste. It is really strange that the package glusterfs-ganesha was not installed at all
08:48 weller in the update history, glusterfs-ganesha was also updated
08:49 weller oh, 'yum remove pcs corosync pacemaker' has automatically erased glusterfs-ganesha.
08:50 skoduri glusterfs-ganesha has a dependency on those clustering software packages
08:51 weller I tried to reinstall them to get ganesha working... the error message is still the same
08:51 weller nfs-ganesha: failed: creation of symlink ganesha.conf in /etc/ganesha failed
08:53 skoduri do you have ganesha.conf present in the '/etc/ganesha' folder on all the nodes?
08:54 weller no
08:55 weller there is the symlink to /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf on every node
08:59 skoduri okay, probably "gluster ganesha disable" hadn't cleaned up this symlink. Does that symlink point to a valid file?
08:59 skoduri or is it dead link
08:59 skoduri ?
09:00 weller the symlink got created by gluster ganesha enable
09:00 weller it is a valid file
09:00 weller but empty
09:00 weller I tried the command on both nodes
09:01 rafi joined #gluster
09:01 weller the symlink only gets created on localhost
09:01 jkroon joined #gluster
09:03 karthik_ joined #gluster
09:03 apandey_ joined #gluster
09:03 skumar__ joined #gluster
09:04 apandey_ joined #gluster
09:04 sanoj joined #gluster
09:05 skoduri okay, so the command "gluster nfs-ganesha enable" fails with the symlink creation error even though it already created the symlinks?
09:07 weller yes, I removed the symlinks and started the command. On the node itself the link gets created, but not on the other
09:08 weller and it fails with the error message
09:10 skoduri when you removed the symlinks, did you copy the original ganesha.conf file back to the /etc/ganesha/ location? I am not sure if the CLI expects a ganesha.conf file (not a symlink) to be present in the /etc/ganesha folder, but it may be worth a try.. jiffin could confirm..
09:10 skoduri I suggest replacing those symlinks (if present) with the actual ganesha.conf file on all the nodes - "cp /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf /tmp; rm /etc/ganesha/ganesha.conf; cp /tmp/ganesha.conf /etc/ganesha"
09:11 skoduri and then try "gluster nfs-ganesha enable"
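
A cleaned-up, per-node version of the suggestion above (paths as used in this conversation; run on every node before re-enabling):

    cp /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf /tmp/
    rm /etc/ganesha/ganesha.conf
    cp /tmp/ganesha.conf /etc/ganesha/
    gluster nfs-ganesha enable
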
09:12 weller same error
09:12 weller but the file got replaced by the symlink again
09:15 MrAbaddon joined #gluster
09:18 susant joined #gluster
09:19 skoduri weller, maybe it's due to a regression caused by commit - https://github.com/gluster/glusterfs/commit/da9f6e9a4123645a20b664a1c167599b64591f7c .. this commit introduced a dependency on selinux being enabled on the systems
09:19 glusterbot Title: common-ha: enable and disable selinux gluster_use_execmem · gluster/glusterfs@da9f6e9 · GitHub (at github.com)
09:19 skoduri do you have selinux enabled?
09:19 weller selinux is on permissive
09:20 skoduri I suggest changing it to enforcing and then trying the setup
09:20 weller failed
09:21 skoduri with the same error?
09:21 weller yep
09:21 skoduri hmm
09:23 skoduri any errors in glusterd.log or /var/log/messages?
09:23 susant joined #gluster
09:23 _KaszpiR_ joined #gluster
09:25 weller [glusterd-syncop.c:1321:gd_stage_op_phase] 0-management: Staging of operation 'Volume (null)' failed on localhost : nfs-ganesha is already disabled.
09:25 weller that is on disable
09:25 weller enable gives nothing
09:25 poornima joined #gluster
09:26 weller the other node shows '0-management: Unable to acquire volname'
09:29 apandey__ joined #gluster
09:30 msvbhat joined #gluster
09:42 weller any other ideas?
09:43 weller thanks for your help already!
09:43 skoduri shared storage is mounted on all the nodes at the /var/run/gluster/shared_storage location, right?
09:44 weller yep
09:45 weller and there is the folder 'nfs-ganesha' in it, with 2 files: ganesha.conf and ganesha-ha.conf
09:46 skumar_ joined #gluster
09:46 weller the content of ganesha-ha.conf:
09:46 skoduri okay ..lets try executing the script cmd manually ... #/usr/libexec/ganesha/ganesha-ha.sh --setup-ganesha-conf-files /var/run/gluster/shared_storage /nfs-ganesha yes" .
09:46 weller HA_NAME="nfs" HA_CLUSTER_NODES="fa,fb" VIP_fa="172.16.1.45" VIP_fb="172.16.1.46"
09:46 skoduri on all the nodes
09:47 weller without the whitespace between shared_storage[ ]/nfs-ganesha
09:47 weller ?
09:47 skoduri right :)
09:47 skoduri sorry for that
09:48 weller ValueError: Boolean gluster_use_execmem is not defined
09:50 skoduri okay finally..
09:50 skoduri you do not have valid selinux package which defines that variable
09:52 gilfoyle joined #gluster
09:52 weller is there an easy way to install that?
09:53 weller I found this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1471917
09:53 glusterbot Bug 1471917: urgent, urgent, ---, kkeithle, CLOSED CURRENTRELEASE, [GANESHA] Ganesha setup creation fails due to selinux blocking some services required for setup creation
09:53 weller " Requires selinux-policy >= 3.13.1-160 in RHEL7"
09:53 weller centos has 3.13.1.102
09:54 skoduri oh..
09:54 gilfoyle anyone using gluster+autofs? (without following the gluster blog post)
10:03 weller I have simply commented out the semanage lines in the ganesha-ha script; now gluster nfs-ganesha enable runs smoothly: success
10:03 weller thanks for the support! :)
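
A hedged sketch of the selinux checks behind the error above, assuming an EL7-style system; the policy version comes from the bug report quoted earlier and the boolean name from the ValueError:

    rpm -q selinux-policy             # the fixed setup needs selinux-policy >= 3.13.1-160
    getsebool gluster_use_execmem     # fails if the installed policy does not define the boolean
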
10:06 _KaszpiR_ joined #gluster
10:08 skoduri okay..sorry was afk..welcome :)
10:28 shyam joined #gluster
10:29 ndarshan joined #gluster
10:37 baber joined #gluster
10:40 psony joined #gluster
10:47 ndarshan joined #gluster
10:48 bfoster joined #gluster
10:50 poornima joined #gluster
11:04 WebertRLZ joined #gluster
11:11 ThHirsch joined #gluster
11:11 ThHirsch joined #gluster
11:24 btspce joined #gluster
11:34 gilesww joined #gluster
11:34 gilesww heya gluster peeps
11:38 skoduri_ joined #gluster
11:44 luizcpg joined #gluster
11:52 baojg joined #gluster
11:56 ahino joined #gluster
11:58 Prasad_ joined #gluster
12:10 msvbhat joined #gluster
12:13 gilesww so i'm trying to mount a gluster brick using nfs
12:13 gilesww it's all gone fine apart from the mount being read only
12:14 gilesww these are the mount options used /var/liferay/data/document_library nfs rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.160.79.207,mountvers=3,mountport=38465,mountproto=tcp,local_lock=none,addr=10.160.79.207 0 0
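
A hedged checklist for the read-only behaviour described above (volume name is hypothetical; the mount point comes from the line above):

    gluster volume info VOLNAME | grep -i read-only         # make sure the volume is not flagged read-only
    mount -o remount,rw /var/liferay/data/document_library   # retry a read-write remount on the client
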
12:20 shyam joined #gluster
12:36 marbu joined #gluster
12:41 karthik_us joined #gluster
12:47 ndarshan joined #gluster
12:59 ndarshan joined #gluster
13:00 baber joined #gluster
13:03 poornima joined #gluster
13:06 Gambit15 joined #gluster
13:11 jstrunk joined #gluster
13:12 prasanth joined #gluster
13:13 ThHirsch joined #gluster
13:20 ahino joined #gluster
13:29 ndarshan joined #gluster
13:34 plarsen joined #gluster
13:35 skylar joined #gluster
13:48 kotreshhr left #gluster
13:48 guhcampos joined #gluster
13:52 skylar joined #gluster
14:00 nbalacha joined #gluster
14:03 hchiramm__ joined #gluster
14:07 aravindavk joined #gluster
14:13 shyam joined #gluster
14:18 luizcpg joined #gluster
14:18 Prasad joined #gluster
14:22 msvbhat joined #gluster
14:24 jiffin joined #gluster
14:26 dijuremo joined #gluster
14:26 baojg joined #gluster
14:28 mbukatov joined #gluster
14:28 dijuremo Doc pages say when a replica 3 cluster is created, cluster.quorum-type is set to auto by default; however, that is not the case for me.
14:30 marbu joined #gluster
14:31 dijuremo # gluster v get aevmstorage all | grep -i quorum-type
14:31 dijuremo cluster.quorum-type                     none
14:31 dijuremo cluster.server-quorum-type              off
14:33 dijuremo So do I just set cluster.quorum-type to auto and I am ready to roll or is there any other option needed?
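
A minimal sketch of the setting being asked about, using the volume name from the output above (server-side quorum is a separate, optional knob):

    gluster volume set aevmstorage cluster.quorum-type auto
    gluster volume set aevmstorage cluster.server-quorum-type server   # optional
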
14:39 baojg joined #gluster
14:40 baojg joined #gluster
14:42 baojg joined #gluster
14:44 sona joined #gluster
14:48 dijuremo How does one figure out the best value for features.shard-block-size?
14:51 cloph guess there is no best answer - it depends on how large your typical files are. We have 512MB for VM images, all 50+GB in size - but no idea whether a smaller value would be better or not....
14:52 dijuremo Will go with 512MB then. I was not sure if this depended on storage speed, or network bandwidth, etc.. so before trying anything, I wanted to set the best option.
14:53 dijuremo Do you have a 3+ node replica?
14:53 cloph it comes into play when self-heal is needed, so ideally many chunks should be non-changing and thus easy to sync up/verify.
14:53 cloph replica 2 + arbiter
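
A hedged example of the sharding options under discussion (volume name hypothetical; sharding only applies to files written after it is enabled, and should not be toggled off again on a volume that already holds sharded data):

    gluster volume set VOLNAME features.shard on
    gluster volume set VOLNAME features.shard-block-size 512MB
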
14:55 farhorizon joined #gluster
14:55 dijuremo Trying to find out what the quorum options should be for replica 3
14:56 dijuremo The creation of the volume did not set quorum to auto..  so not sure what all the options should be set to...
15:03 cloph https://github.com/gluster/glusterfs/blob/master/extras/group-virt.example so that explicitly includes quorum type, so not sure whether it would have been default without that...
15:03 glusterbot Title: glusterfs/group-virt.example at master · gluster/glusterfs · GitHub (at github.com)
15:04 cloph and what quorum should be depends on your setup of course. But having the default "OK if two are up" is sensible for most scenarios :-)
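
As a hedged aside, the linked virt profile can be applied in one step, which sets the quorum options along with the other VM-tuning defaults in that file (volume name hypothetical):

    gluster volume set VOLNAME group virt
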
15:05 ThHirsch joined #gluster
15:08 ahino joined #gluster
15:08 wushudoin joined #gluster
15:10 omie888777 joined #gluster
15:13 dijuremo cloph: it is replica 3, so should be "OK with two UP"
15:26 rwheeler joined #gluster
15:30 kpease joined #gluster
15:38 susant joined #gluster
15:42 dijuremo I had never worked with sharding, but it seems that the original file gets written to one host, and in the other hosts it shows up only as a 512MB file and then the other pieces get created in the .shard folder, does this seem correct?
15:42 jiffin joined #gluster
15:43 cloph no - the stuff that's stored on the brick is the same on all regular bricks (metadata only for the arbiter, as usual) - in the mount the file shows as a single file; you cannot tell the difference when accessing via a regular mount
15:43 dijuremo My bad, it is the same in all nodes, just that I did not look at the brick...
15:45 dijuremo So in the mount point I see a file that is large, i.e. my test file is 6.7GB, but in the bricks I see a 512MB file with that name and then the other pieces in the .shard folder
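
A small illustration of the on-brick layout described above, with a hypothetical brick path; the base file holds the first shard-block-size bytes and the remaining pieces sit under the hidden .shard directory, named by GFID and index:

    ls -lh /bricks/brick1/testfile.img
    ls /bricks/brick1/.shard | head
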
15:46 jiffin joined #gluster
15:48 jiffin joined #gluster
15:49 vbellur joined #gluster
15:50 armyriad joined #gluster
15:50 dijuremo Are there any specific optimization or configuration options when using 10Gbit NICs on the peers?
15:54 vbellur joined #gluster
16:16 Prasad joined #gluster
16:18 ajph joined #gluster
16:19 msvbhat joined #gluster
16:24 jiffin joined #gluster
16:26 aravindavk joined #gluster
16:36 farhorizon joined #gluster
16:47 susant joined #gluster
16:51 sona joined #gluster
16:59 zcourts_ joined #gluster
17:06 prasanth joined #gluster
17:12 baber joined #gluster
17:15 plarsen joined #gluster
17:19 farhorizon joined #gluster
17:26 farhorizon joined #gluster
17:28 NuxRo joined #gluster
17:37 msvbhat joined #gluster
17:54 farhorizon joined #gluster
17:59 baber joined #gluster
18:10 omie88877777 joined #gluster
18:32 rastar joined #gluster
18:56 _KaszpiR_ joined #gluster
18:59 jobewan joined #gluster
19:01 rastar joined #gluster
19:04 yosafbridge joined #gluster
19:10 farhoriz_ joined #gluster
19:25 btspce joined #gluster
19:25 jbrooks joined #gluster
19:32 farhorizon joined #gluster
19:40 gnulnx What does the upgrade path for 3.10 to 3.11 look like?
19:41 gnulnx I only have 2 servers, 1 of which is also a fuse client.  That's it.
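
A hedged sketch only, following the generic rolling-upgrade procedure described in the Gluster upgrade guides (service and package handling vary by distro); one server at a time:

    systemctl stop glusterd
    pkill glusterfs; pkill glusterfsd      # stop remaining brick/client processes
    # upgrade the gluster packages with your package manager
    systemctl start glusterd
    gluster volume heal VOLNAME info       # wait for pending heals before moving to the next node
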
20:02 shyam joined #gluster
20:05 ahino joined #gluster
20:07 primehaxor joined #gluster
20:09 WebertRLZ joined #gluster
20:19 zcourts joined #gluster
20:22 jkroon joined #gluster
20:38 amosbird joined #gluster
20:55 gnulnx I can't figure this out.  If I have both gluster services online, then I get timeouts when running gluster commands (like volume status)
20:55 gnulnx If only one server is online, then the commands always work.
21:00 shyam joined #gluster
21:08 jbrooks_ joined #gluster
21:09 mbrandeis joined #gluster
21:10 jbrooks_ joined #gluster
21:11 zcourts joined #gluster
21:36 jbrooks joined #gluster
21:40 omie888777 joined #gluster
21:44 luizcpg joined #gluster
22:05 omie888777 joined #gluster
22:10 CmndrSp0ck joined #gluster
22:11 farhorizon joined #gluster
22:23 baojg joined #gluster
22:35 omie88877777 joined #gluster
22:41 mbrandeis joined #gluster
22:58 jkroon joined #gluster
23:04 mbrandeis joined #gluster
23:24 vbellur joined #gluster
23:30 gilesww joined #gluster
