IRC log for #gluster, 2017-07-04


All times shown according to UTC.

Time Nick Message
00:02 xMopxShell joined #gluster
00:07 cloph_away joined #gluster
00:08 victori joined #gluster
00:22 shyam joined #gluster
00:34 Wizek_ joined #gluster
00:37 victori joined #gluster
01:02 Alghost joined #gluster
01:07 Klas joined #gluster
01:12 gyadav joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:22 jeffspeff joined #gluster
02:35 Alghost joined #gluster
02:40 victori joined #gluster
02:52 msvbhat joined #gluster
03:10 victori joined #gluster
03:17 victori joined #gluster
03:26 msvbhat joined #gluster
03:27 susant joined #gluster
03:30 ppai joined #gluster
03:40 kramdoss_ joined #gluster
03:41 atinm joined #gluster
03:51 Alghost_ joined #gluster
03:54 Alghost joined #gluster
03:55 Alghost_ joined #gluster
03:56 itisravi joined #gluster
04:16 Prasad joined #gluster
04:17 Prasad joined #gluster
04:20 ppai joined #gluster
04:27 sanoj joined #gluster
04:34 gyadav joined #gluster
04:37 jiffin joined #gluster
04:46 ankitr joined #gluster
04:47 buvanesh_kumar joined #gluster
04:50 gyadav_ joined #gluster
04:51 ppai joined #gluster
04:54 Shu6h3ndu joined #gluster
04:54 Chris______ joined #gluster
04:54 Chris______ left #gluster
04:54 gyadav__ joined #gluster
04:55 OzPenguin joined #gluster
04:57 sahina joined #gluster
05:05 victori joined #gluster
05:06 OzPenguin I have a general question about "geo-replication", is it still a valid option in 3.11 ?   I get an error stating: "unrecognized word: geo-replication (position 1)"
05:07 OzPenguin command was:  gluster volume geo-replication XXXX-master server2::XXXX-volume create
05:07 Prasad joined #gluster
05:07 karthik_us joined #gluster
05:10 nbalacha joined #gluster
05:10 Saravanakmr joined #gluster
05:12 OzPenguin repeating ...
05:12 OzPenguin I have a general question about "geo-replication", is it still a valid option in 3.11 ?   I get an error stating: "unrecognized word: geo-replication (position 1)"
05:12 OzPenguin command was:  gluster volume geo-replication XXXX-master server2::XXXX-volume create
05:20 buvanesh_kumar joined #gluster
05:29 msvbhat joined #gluster
05:37 hgowtham joined #gluster
05:43 apandey joined #gluster
05:48 rafi joined #gluster
05:52 OzPenguin you can ignore my previous question, I just worked it out ... I needed to uninstall gluster* 3.11.0-0.1.rc0.el7  and install 3.10.3-1.el7
05:53 OzPenguin then I was able to get the geo-replication with "yum install glusterfs-geo-replication"
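For reference, the install path OzPenguin describes boils down to roughly the following on CentOS 7 (a sketch only: the repo and package names are the ones mentioned in this log, and which Gluster version gets pulled in depends on which Storage SIG repo is enabled):

    # enable the CentOS Storage SIG repo for Gluster 3.10 (Long Term Stable)
    yum install centos-release-gluster310
    # geo-replication ships as a separate package on CentOS
    yum install glusterfs-geo-replication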
05:56 kdhananjay joined #gluster
05:56 nbalacha joined #gluster
05:57 gyadav_ joined #gluster
06:02 gyadav__ joined #gluster
06:05 skoduri joined #gluster
06:06 amarts joined #gluster
06:08 Karan joined #gluster
06:19 sona joined #gluster
06:20 apandey joined #gluster
06:21 ashiq joined #gluster
06:31 ankitr joined #gluster
06:33 scc joined #gluster
06:37 _KaszpiR_ joined #gluster
06:49 OzPenguin can anyone help me understand what I need to change to fix this geo-replication error:
06:49 OzPenguin [2017-07-04 06:28:43.586999] I [gsyncdstatus(monitor):241:set_worker_status] GeorepStatus: Worker Status: Faulty [2017-07-04 06:28:43.672514] E [master(/mnt/vdi-sdd3/data):762:log_failures] _GMaster: ENTRY FAILED: ({'uid': 0, 'gfid': 'bf37dc2c-5bcc-4668-833e-684d1fe31e4e', 'gid': 0, 'mode': 17901, 'entry': '.gfid/00000000-0000-0000-0000-000000000001/CDs', 'op': 'MKDIR'}, 17, 'bd4542f2-a86d-4508-8adf-8b046113d5a5') [2017-07-04 0
06:50 OzPenguin oops:
06:50 OzPenguin [2017-07-04 06:28:43.586999] I [gsyncdstatus(monitor):241:set_worker_status] GeorepStatus: Worker Status: Faulty
06:50 OzPenguin [2017-07-04 06:28:43.672514] E [master(/mnt/vdi-sdd3/data):762:log_failures] _GMaster: ENTRY FAILED: ({'uid': 0, 'gfid': 'bf37dc2c-5bcc-4668-833e-684d1fe31e4e', 'gid': 0, 'mode': 17901, 'entry': '.gfid/00000000-0000-0000-0000-000000000001/CDs', 'op': 'MKDIR'}, 17, 'bd4542f2-a86d-4508-8adf-8b046113d5a5')
06:50 OzPenguin [2017-07-04 06:28:43.672702] E [syncdutils(/mnt/vdi-sdd3/data):265:log_raise_exception] <top>: The above directory failed to sync. Please fix it to proceed further.
06:50 Klas is geo-replication a seperate package?
06:50 ndarshan joined #gluster
06:50 OzPenguin [2017-07-04 06:28:43.672983] I [syncdutils(/mnt/vdi-sdd3/data):238:finalize] <top>: exiting.
06:51 OzPenguin Klas, geo-replication is part of gluster (see https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Geo%20Replication/)
06:51 glusterbot Title: Geo Replication - Gluster Docs (at gluster.readthedocs.io)
06:52 Klas sorry, a bit short question, my point was, is it separated in 3.11?
06:52 Klas it definitely isn't in 3.7
06:52 Klas and I probably need to upgrade my gluster install in a couple of weeks
06:52 OzPenguin it did not install in 3.11, so I backed out to 3.10
06:54 Klas huh
06:54 OzPenguin in CentOS, it is a separate package to install (yum install glusterfs-geo-replication) which was not available in 3.11
07:02 Klas ah
07:02 sahina OzPenguin, which repo did you install 3.11 from?
07:04 OzPenguin sahina, it was:    centos-release-gluster310.noarch : Gluster 3.10 (Long Term Stable) packages from the CentOS Storage SIG repository
07:05 sahina OzPenguin, I meant when you tried the 3.11 install where geo-rep package was missing. this one only has the 3.10 packages, right?
07:06 mbukatov joined #gluster
07:08 OzPenguin sahina, to be honest I am not sure how I ended up with 3.11 on this server. but from memory it was something like:  centos-gluster310-test
07:09 OzPenguin however that version has been removed now and I am currently on 3.10
07:11 kotreshhr joined #gluster
07:12 sahina OzPenguin, ok..never mind the 3.11 issue then. for the geo-rep error, do you have it setup correctly? passwordless ssh setup between all nodes in master with the slave host?
07:15 ankitr joined #gluster
07:15 OzPenguin sahina, I do have it running passwordless (ssh trust), and I ran the "gluster-georep-sshkey generate", then created the geo-rep
07:16 OzPenguin however, I only ran "gluster-georep-sshkey generate" on the master
07:16 sahina OzPenguin, and the slave volume was empty? i.e did not have any data from previous geo-rep sessions?
07:16 OzPenguin but the ssh keys work both master to slave and slave to master
07:18 OzPenguin no, the slave was the original master, then I built the new master and created the geo-rep with the push-pem and force options
07:18 OzPenguin data between the two servers was synced using rsync
07:20 ivan_rossi joined #gluster
07:22 sahina OzPenguin, is this correct? - you had a master1 and slave1 where master1 was replicating to slave1. You stopped geo-rep, then created a new master - master2 volume, synced data to master2 from slave1 using rsync ?
07:22 sahina OzPenguin, looking through gluster archives , possible cause of error - http://lists.gluster.org/pipermail/gluster-users/2016-March/025610.html
07:22 glusterbot Title: [Gluster-users] new warning with geo-rep: _GMaster: ENTRY FAILED: (at lists.gluster.org)
07:22 OzPenguin sahina, no I never had geo-rep before today
07:23 mk-fg joined #gluster
07:23 mk-fg joined #gluster
07:26 OzPenguin sahina, I had an old fedora NAS, then I built a gluster (backup) server ... rsyncing the data to the gluster backup server, then I rebuilt the original server on centos 7 with gluster and rsync'd the backup gluster to the new master ... now I am trying to get geo-replication going
07:26 ankitr joined #gluster
07:26 sahina OzPenguin, when setting up a new geo-rep session, the slave (or the destination) volume should be empty to avoid errors
07:28 OzPenguin sahina, I could wipe it and start from sync'ing back from scratch, but was hoping to avoid that give the data was already synchroised between the new servers, do you think it is really nessecary to wipe the backup server first ?
07:30 [diablo] joined #gluster
07:37 ankitr joined #gluster
07:38 OzPenguin sahina, (typing errors corrected) I could wipe it and start geo-sync'ing back from scratch, but was hoping to avoid that given the data was already synchronised between the two servers, do you think it is really necessary to wipe the backup server first ?
07:38 sahina OzPenguin, from what I know, geo-rep keeps track of the sync using extended attributes on the files and matching the GFIDs. so if you had a previous copy of the same data, this may fail - as in your case
07:38 fsimonce joined #gluster
07:38 sahina OzPenguin, i'm not a geo-rep expert though. hold on..confirming this for you
07:39 OzPenguin sahina, thank you, that would make some sense
07:40 sahina Saravanakmr, would you know ^^
07:40 skumar joined #gluster
07:42 * Saravanakmr please wait - looking into..
07:50 Saravanakmr OzPenguin, You cannot mix rsync followed by geo-replication (which is what I understood from the discussion)
07:50 nbalacha joined #gluster
07:51 OzPenguin sahina and Saravanakmr, it looks like your thoughts are correct as to the cause of the errors. I just move the original backup directories into a subsir and restarted the geo-rep, it is now syncing files
07:51 OzPenguin *subdir
07:51 Saravanakmr OzPenguin, fine
07:53 OzPenguin that being the case, I will remove that subdir and let it sync completely
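Putting the thread together, a fresh geo-rep session along the lines discussed above looks roughly like this (a sketch, assuming passwordless ssh from the master nodes to the slave host and an empty slave volume, as sahina advises; the volume and host names are placeholders):

    # on the master: generate and distribute the geo-rep ssh keys
    gluster-georep-sshkey generate
    # create the session (push-pem copies the pem keys to the slave), then start and monitor it
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status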
08:08 ahino joined #gluster
08:09 Klas georeplica only needs port 22, right?
08:09 Klas (I just realized that one of my replica servers was suddenly firewalled away from the other nodes)
08:13 OzPenguin Klas, by default it works over port 22 ssh, but you can reconfigure which port it should communicate over
08:14 Klas that's what I figured, just wanted to confirm
08:14 Klas thanks
08:15 OzPenguin Klas, see the example on the docs ...    gluster volume geo-replication gv1 snode1::gv2 create ssh-port 50022 push-pem
08:16 Klas yeah, not asking about switching port =P
08:16 OzPenguin no prob
08:18 jkroon joined #gluster
08:33 Klas yay, a volume has not been backed up since 2017-04-27
08:34 Klas are there any good nagios/icinga compatible monitor options for gluster?
08:34 Klas in general, I've only seen poorly maintained ruby scripts
08:35 rastar joined #gluster
08:36 Klas another very relevant question: one of my server nodes first put georeplication into a "faulty" state, and now it simply doesn't show up
08:40 xMopxShell joined #gluster
08:41 sanoj joined #gluster
08:46 Wizek_ joined #gluster
09:00 susant joined #gluster
09:03 sanoj joined #gluster
09:30 msvbhat joined #gluster
09:30 jkroon joined #gluster
09:31 gyadav_ joined #gluster
09:35 ankitr joined #gluster
09:47 victori joined #gluster
10:04 mst__ joined #gluster
10:14 Vaelatern joined #gluster
10:15 Klas there is definitely something more than port 22 needed when creating the replica
10:16 Klas and it's not even the normal glusterfs ports either, since it's already allowed to talk to those as well (atm, all ports are allowed)
10:29 jkroon generally i just accept all traffic between gluster peers.
10:29 nbalacha joined #gluster
10:32 skumar_ joined #gluster
10:33 Klas how horrible
10:33 Klas =P
10:33 Klas but I understand
10:35 jkroon my rule is this:  if the application shouldn't be accessible it shouldn't be listening.  the result is that all exposed applications are ones that should be available to the "cluster" environment in any case.
10:35 jkroon not always possible ... and thus why I say generally.
10:36 skoduri joined #gluster
10:43 Klas I prefer whitelisting things I know need it
10:44 Klas but, yeah, between servers is fine, especially since they already own each other through ssh-keys
10:45 jkroon FORCE_COMMAND is your friend.
10:46 jkroon oh wait, just command= ... eg:  from="1.2.3.4",command="/usr/sbin/ulsbackup_restrict bender",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa ...
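Spelled out, the kind of authorized_keys entry jkroon is describing looks like this (an illustrative line only: the source address, the wrapper command and the key material are placeholders, not anything from this log):

    # ~/.ssh/authorized_keys on the receiving host: pin the key to one source host and one command
    from="192.0.2.10",command="/usr/local/bin/restricted-wrapper",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAAB3...key... backup@master1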
11:02 skumar__ joined #gluster
11:10 shyam joined #gluster
11:16 WebertRLZ joined #gluster
11:17 sona joined #gluster
11:33 Alghost joined #gluster
11:36 Alghost joined #gluster
11:44 jkroon joined #gluster
11:58 ahino1 joined #gluster
12:06 fokas joined #gluster
12:06 fokas Hi all, I've been through the split-brain docs, but I still have a question
12:06 fokas I'm setting up a 2+1 node cluster
12:07 fokas with 1 volume based on two bricks with replica 2
12:07 fokas both active servers export it as NFS (standard)
12:08 fokas with round robin + VIPs + CTDB
12:08 fokas so theoretically, 2 clients could access the same file from both servers
12:11 fokas I struggle to understand how to protect my file from being corrupted while being written simultaneously by these 2 clients
12:14 nbalacha joined #gluster
12:18 fokas It looks like it is a bad design and only one node should write to the volume... Right ?
12:18 amarts joined #gluster
12:18 Alghost joined #gluster
12:30 susant left #gluster
12:32 panina joined #gluster
12:34 msvbhat joined #gluster
12:45 edong23 joined #gluster
12:54 ahino joined #gluster
13:01 Peppard hi... i'm currently running a remove-brick on a distributed volume (gluster v3.8.12), afaik this is a special kind of rebalance... i'm observing something strange... i started with disk usage of about 90%/90%/.../90%/20%, and i'm removing one of the 90% bricks, where the absolute size of the 20% brick >> size of the removed brick, so there is plenty of space... however some of the bricks are having increased disk usage now, one is
13:01 Peppard at 100%!! why is it doing this?
13:05 Peppard i'm asking because usually gluster seems to have a limit of 90% disk usage...
13:06 kotreshhr left #gluster
13:11 Peppard are there any volume parameters regarding rebalance behaviour?
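For context, the remove-brick operation Peppard is running is normally driven with the start/status/commit sub-commands, where the start phase migrates the brick's data onto the remaining bricks (a sketch; the volume name and brick path are placeholders):

    # drain the brick, then watch the migration progress
    gluster volume remove-brick myvol server1:/bricks/b1 start
    gluster volume remove-brick myvol server1:/bricks/b1 status
    # commit only once status reports the migration as completed
    gluster volume remove-brick myvol server1:/bricks/b1 commit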
13:25 Klas georeplication is eating all my inodes
13:26 Klas anything I can do to counter that, barring giving more disk to that partition? (a separate /tmp seems ugly)
13:28 Klas could I, for instance, put them somewhere other than /tmp?
13:45 gyadav_ joined #gluster
13:46 Shu6h3ndu joined #gluster
13:49 ivan_rossi left #gluster
13:51 nbalacha joined #gluster
13:55 Saravanakmr joined #gluster
13:57 Klas anyway, my georeplicas are working and it failed over well when I needed to reboot a server
14:03 shyam joined #gluster
14:07 _KaszpiR_ joined #gluster
14:30 Klas I found a way: setting TMPDIR in the initscript worked
14:30 Klas $TMPDIR
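Klas's fix amounts to exporting TMPDIR in whatever launches glusterd so that the geo-rep workers inherit it (a sketch: the directory is a placeholder, and where the export belongs depends on whether the box uses a sysvinit script or a systemd unit with an environment file):

    # e.g. in /etc/init.d/glusterd, or in an environment file read by the glusterd service
    export TMPDIR=/var/lib/glusterd/geo-rep-tmp   # placeholder: any filesystem with spare inodes
    mkdir -p "$TMPDIR"
    # restart glusterd afterwards so the geo-rep workers pick up the new environment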
14:30 susant joined #gluster
14:46 legreffier joined #gluster
14:58 buvanesh_kumar joined #gluster
15:01 Peppard does "remove-brick" in a distributed volume include a "rebalance fix-layout" operation or do i have to run it manually?
15:05 skumar__ joined #gluster
15:10 legreffier joined #gluster
15:22 nbalacha joined #gluster
15:31 buvanesh_kumar joined #gluster
15:39 victori joined #gluster
15:50 mst__ joined #gluster
15:54 susant left #gluster
15:56 ankitr joined #gluster
16:36 nbalacha joined #gluster
17:12 rastar_ joined #gluster
17:15 gospod2 joined #gluster
17:18 rastar joined #gluster
17:18 victori joined #gluster
17:19 susant joined #gluster
17:35 susant left #gluster
17:36 rafi1 joined #gluster
17:42 ahino joined #gluster
17:45 kotreshhr joined #gluster
17:50 sona joined #gluster
17:56 jkroon joined #gluster
18:26 rafi joined #gluster
18:28 mbukatov joined #gluster
18:38 rafi1 joined #gluster
18:42 victori joined #gluster
18:43 rastar joined #gluster
18:43 [diablo] joined #gluster
18:49 xMopxShell joined #gluster
18:52 jiffin joined #gluster
19:50 fcami joined #gluster
19:53 Teraii joined #gluster
20:05 zcourts joined #gluster
20:17 victori joined #gluster
20:27 rastar joined #gluster
20:27 ^andrea^ joined #gluster
20:29 Teraii joined #gluster
20:42 xMopxShell joined #gluster
20:51 zcourts joined #gluster
21:13 Acinonyx joined #gluster
21:19 Acinonyx joined #gluster
21:34 xMopxShell joined #gluster
22:22 victori joined #gluster
22:31 Alghost joined #gluster
22:42 zcourts joined #gluster
22:45 Alghost joined #gluster
22:46 zcourts_ joined #gluster
23:09 nick_g joined #gluster
23:11 nick_g Hi guys, can someone point me to a place where I can read about each configuration from the output of "gluster volume get VOLUME-NAME all"?
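For reference alongside nick_g's question, the CLI itself can print a short description of each option that "volume get ... all" lists (VOLNAME is a placeholder):

    # current value of every option on a volume
    gluster volume get VOLNAME all
    # option names, default values and descriptions straight from the CLI
    gluster volume set help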
23:12 victori joined #gluster
23:36 Wizek_ joined #gluster
23:43 Alghost_ joined #gluster
