
IRC log for #gluster, 2017-09-08


All times shown according to UTC.

Time Nick Message
00:33 zcourts joined #gluster
00:37 cbb joined #gluster
01:07 pdrakeweb joined #gluster
01:33 plarsen joined #gluster
01:35 scc joined #gluster
01:41 cbb joined #gluster
01:55 ilbot3 joined #gluster
01:55 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:06 prasanth joined #gluster
02:30 vbellur joined #gluster
03:27 protoporpoise joined #gluster
03:27 protoporpoise howdy all, I was wondering if anyone had any experience with improving the performance of setting volume options using the 'gluster volume set ...' command?
03:28 Guest24_ joined #gluster
03:28 protoporpoise Situation: Have about 40 volumes in a 3.10 cluster, and as we're setting about 15 options on each volume its /very/ slow to provision volumes or a new cluster.
03:29 protoporpoise each volume option seems to take about 1 second (or more) to set, so 40*15 so it could be 10 minutes just to set the options for each volume!
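Since every `gluster volume set` invocation is a full cluster-wide transaction, the per-option latency protoporpoise describes multiplies quickly. One hedged mitigation sketch (option names below are illustrative only, and whether a single `volume set` accepts multiple key/value pairs depends on the release — check `gluster volume set help` on your version):

```shell
#!/usr/bin/env bash
# Sketch: issue one 'volume set' per volume with all key/value pairs
# batched, instead of one transaction per option. Commands are emitted
# as strings so the sketch can be inspected (or piped to sh) first.
set -euo pipefail

# Illustrative options, "key value" per element -- not a recommendation.
OPTS=(
  "performance.cache-size 256MB"
  "network.ping-timeout 10"
  "cluster.self-heal-daemon on"
)

# Build a single batched 'volume set' command line for one volume.
batched_set_cmd() {
  local vol=$1
  printf 'gluster volume set %s' "$vol"
  local kv
  for kv in "${OPTS[@]}"; do
    printf ' %s' "$kv"
  done
  printf '\n'
}

for vol in vol01 vol02; do
  batched_set_cmd "$vol"
done
```

On versions that support group profiles, another route is dropping an options file into `/var/lib/glusterd/groups/` and applying it with a single `gluster volume set VOLNAME group PROFILE` per volume — again, verify support on your release before relying on it.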
03:31 Guest24_ joined #gluster
03:34 gluster joined #gluster
03:40 saali joined #gluster
04:09 itisravi joined #gluster
04:13 atinm joined #gluster
04:15 karthik_us joined #gluster
04:22 apandey joined #gluster
04:23 nbalacha joined #gluster
04:26 kdhananjay joined #gluster
04:50 skumar joined #gluster
04:52 kdhananjay joined #gluster
04:55 jiffin joined #gluster
05:02 aravindavk joined #gluster
05:03 gyadav_ joined #gluster
05:05 dominicpg joined #gluster
05:09 rejy joined #gluster
05:13 Prasad joined #gluster
05:16 riyas joined #gluster
05:31 poornima joined #gluster
05:40 kotreshhr joined #gluster
05:52 ThHirsch joined #gluster
05:57 skumar_ joined #gluster
06:00 farhorizon joined #gluster
06:00 itisravi joined #gluster
06:05 buvanesh_kumar joined #gluster
06:17 bkunal joined #gluster
06:17 susant joined #gluster
06:25 hchiramm__ joined #gluster
06:26 _KaszpiR_ joined #gluster
06:39 buvanesh_kumar joined #gluster
06:50 ankitr joined #gluster
06:56 atinm joined #gluster
06:56 rastar_ joined #gluster
06:59 msvbhat joined #gluster
07:04 ankitr joined #gluster
07:11 ivan_rossi joined #gluster
07:13 apandey joined #gluster
07:18 fsimonce joined #gluster
07:32 ankitr joined #gluster
07:34 mbukatov joined #gluster
07:36 Saravanakmr joined #gluster
07:40 skumar__ joined #gluster
07:44 skoduri joined #gluster
07:45 nbalacha joined #gluster
07:55 _KaszpiR_ joined #gluster
08:01 _KaszpiR_ joined #gluster
08:12 nbalacha joined #gluster
08:12 bwerthmann joined #gluster
08:24 Intensity joined #gluster
08:26 itisravi joined #gluster
08:30 jkroon joined #gluster
08:31 mohan joined #gluster
08:34 _KaszpiR_ joined #gluster
08:37 bEsTiAn joined #gluster
08:37 bEsTiAn hi, can you tell me how to make sure a newly added replicated brick has indeed finished replicating the content ? I'm willing to remove the old brick. but not before the data is consistently replicated everywhere.
08:43 bwerthmann Did you see "Replacing brick in Replicate/Distributed Replicate volumes" here https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Managing%20Volumes/#replace-brick?
08:43 glusterbot Title: Managing Volumes - Gluster Docs (at gluster.readthedocs.io)
08:44 ankush joined #gluster
09:09 bEsTiAn bwerthmann can you confirm that the remove-brick command first checks that the data is in sync? That's what the doc seems to indicate to me. I just don't have a lab to test in first, so...
09:11 bwerthmann I'd lab it up in vagrant or something first
09:11 MrAbaddon joined #gluster
09:11 bwerthmann with your specific versions and such
09:12 bwerthmann IIRC we use 'replace-brick'
09:13 bwerthmann but YMMV
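The replace-brick flow bwerthmann refers to can be sketched as follows. Volume and brick names are placeholders, and the commands are emitted as strings so the sketch can be reviewed (or piped to `sh`) in a lab before touching a live cluster, as advised above:

```shell
#!/usr/bin/env bash
# Hedged sketch of the documented replace-brick procedure for a
# replicated volume; self-heal copies the data onto the new brick.
set -euo pipefail

replace_brick_cmds() {
  local vol=$1 old=$2 new=$3
  # One-shot replace; the old brick is retired immediately.
  echo "gluster volume replace-brick $vol $old $new commit force"
  # Heal is finished when every brick reports 'Number of entries: 0'.
  echo "gluster volume heal $vol info"
}

replace_brick_cmds gv0 server1:/bricks/old server1:/bricks/new
```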
09:25 msvbhat joined #gluster
09:28 kdhananjay joined #gluster
09:41 baojg joined #gluster
09:43 bEsTiAn thanks. sadly the bricks were added before already. and when i want to remove one, i get a very nice warning about potential data loss ^^ for a replicated volume...
09:53 atinmu joined #gluster
10:06 msvbhat joined #gluster
10:09 skumar_ joined #gluster
10:11 marlinc joined #gluster
10:12 ankitr joined #gluster
10:29 dkossako joined #gluster
10:30 dkossako hello
10:30 glusterbot dkossako: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:32 dkossako Does anybody know something about gluster & systemd & ubuntu? My server starts before rpcbind so nfs does not start. I wanted to add a systemd dependency but there are only upstart confs.
10:33 dkossako glusterfs 3.7.6 built on Dec 25 2015 20:50:44
10:56 sadbox joined #gluster
11:11 fiyawerx_ joined #gluster
11:14 decayofmind joined #gluster
11:21 poornima joined #gluster
11:25 hosom joined #gluster
11:25 kpease joined #gluster
11:37 msvbhat joined #gluster
11:38 ThHirsch joined #gluster
11:39 karthik_us joined #gluster
11:43 Humble joined #gluster
11:47 shyam joined #gluster
11:49 baojg joined #gluster
12:00 dijuremo dkossako: Does your glusterd.service look like this?
12:00 dijuremo https://thepasteb.in/p/GZhW5GBJxx0FV
12:00 glusterbot Title: ThePasteBin - For all your pasting needs! (at thepasteb.in)
12:01 baber joined #gluster
12:02 dkossako dijuremo, I don't have such file, only /etc/init/glusterfs-server.conf
12:02 dijuremo I know in CentOS in the past, sometimes upon a server restart the NFS service would not start and you had to manually restart rpcbind after gluster had started. It eventually got fixed but I lived with the issue for a while...
12:02 dijuremo systemd will be: /etc/systemd/system/multi-user.target.wants/glusterd.service
12:03 dkossako absent
12:03 dkossako maybe too old a gluster version?
12:03 dkossako I'm using Ubuntu Server 16.04 and gfs from official repo
12:05 dijuremo systemctl enable glusterfs-server
12:05 dijuremo glusterfs-server.service is not a native service, redirecting to systemd-sysv-install
12:06 dijuremo So after enabling the service on 16.04LTS, I see a file: /etc/init.d/glusterfs-server
12:07 dkossako I can see /etc/init.d/glusterfs-server, but it is upstart file, not systemd
12:07 dkossako and afaik upstart does not let you handle dependencies
12:07 dijuremo You will have to modify the script
12:08 dijuremo Then be careful on updates...
12:08 dijuremo But in any case... 3.7 is now unsupported...
12:08 dijuremo Better try something like 3.10, since 3.8 will lose support soon now that 3.12 is out...
12:09 dkossako yeah, I will try with nonofficial repo
12:09 dkossako there should be newer version
12:10 ronrib_ joined #gluster
12:10 dijuremo dkossako: Be really careful, the PPA's only keep the most current versions, so every time you install gluster, make sure to download and keep a copy of the packages. That allows you to roll back without having to compile the packages from source
12:11 buvanesh_kumar joined #gluster
12:13 dkossako ok, thanks
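For the rpcbind ordering problem discussed above, a systemd drop-in is one hedged option. This is untested here: on Ubuntu 16.04 the unit generated from the init script is named glusterfs-server.service, and drop-ins for sysv-generated units generally work on that systemd version, but verify both on your system:

```ini
# /etc/systemd/system/glusterfs-server.service.d/rpcbind.conf
# (hypothetical drop-in path; adjust the unit name to match
# 'systemctl status' output on your machine)
[Unit]
Requires=rpcbind.service
After=rpcbind.service
```

Run `systemctl daemon-reload` and restart the service afterwards. If the drop-in is ignored for the generated unit, a small native unit file carrying the same `[Unit]` stanza is the fallback — which also avoids dijuremo's concern about package updates overwriting a modified init script.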
12:17 itisravi joined #gluster
12:18 kdhananjay1 joined #gluster
12:29 vbellur joined #gluster
12:33 weller joined #gluster
12:34 weller Hi, is there a smart way to make better use of ram on a gluster node?
12:35 weller I only use some 10%, maybe that can be traded for speed?
12:55 baber joined #gluster
12:56 kotreshhr left #gluster
13:22 ThHirsch joined #gluster
13:27 [diablo] joined #gluster
13:53 plarsen joined #gluster
13:54 jstrunk joined #gluster
13:57 jiffin joined #gluster
14:12 mrw___ I cannot setup geo replication. gverify.sh fails with «Unable to fetch slave volume details. Please check the slave cluster and slave volume.»; I suppose, the interesting part is in /var/log/glusterfs/geo-replication-slaves/slave.log:  https://pastebin.com/ak8uysYT → Any help? What is the problem?
14:12 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
14:13 rastar_ joined #gluster
14:19 mrw___ http://paste.ubuntu.com/25490307/
14:19 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
14:20 cyberbootje joined #gluster
14:29 mahendratech joined #gluster
14:29 poornima joined #gluster
14:39 hosom joined #gluster
14:40 nbalacha joined #gluster
15:03 wushudoin joined #gluster
15:10 omie888777 joined #gluster
15:12 mbrandeis joined #gluster
15:51 mbrandeis joined #gluster
16:05 jbrooks joined #gluster
16:07 elyograg joined #gluster
16:09 cyberbootje joined #gluster
16:10 vbellur joined #gluster
16:10 elyograg JoeJulian: I tried to talk to glusterbot to get config information like you mentioned, but it didn't do anything useful.
16:14 cyberbootje joined #gluster
16:15 elyograg the whole response to "list MessageParser" is "add, info, list, lock, rank, remove, show, unlock, and vacuum" ... tried some further experiments but couldn't figure anything out.
16:15 msvbhat joined #gluster
16:17 marc_ I cannot setup geo replication. gverify.sh fails with «Unable to fetch slave volume details. Please check the slave cluster and slave volume.»; I suppose, the interesting part is in /var/log/glusterfs/geo-replication-slaves/slave.log: http://paste.ubuntu.com/25490307/ → Any help? What is the problem?
16:17 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
16:21 gyadav_ joined #gluster
16:27 cyberbootje joined #gluster
16:32 jbrooks joined #gluster
16:44 kpease joined #gluster
16:47 JoeJulian marc_: I think the key is, "[2017-09-08 14:06:27.027849] E [glusterfsd-mgmt.c:1932:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:replication)"
16:47 JoeJulian Looks like it's expecting the target volume to be named "replication" but there's no volume by that name.
16:56 ivan_rossi left #gluster
17:04 WebertRLZ joined #gluster
17:05 marc_ joejulian, there is, created on the slave with: sudo gluster-mountbroker add volumes replication
17:19 msvbhat joined #gluster
17:30 ankitr joined #gluster
17:32 JoeJulian marc_: Can I see a "gluster volume info" from the slave, please?
17:32 DV joined #gluster
17:41 MrAbaddon joined #gluster
17:50 marc_ joejulian: «No volumes present» → How does the volume go to the client?
17:50 marc_ joejulian: «No volumes present» → How does the volume go to the slave?
17:51 JoeJulian That I'm not sure about. I have never had need of geo-rep in my use cases.
17:51 JoeJulian Back a long time ago when I did it just for testing, I created the target volume by hand.
17:51 marc_ how?
17:52 marc_ there is nothing in the manual…
17:52 marc_ https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Geo%20Replication/
17:52 glusterbot Title: Geo Replication - Gluster Docs (at gluster.readthedocs.io)
17:52 JoeJulian Same way I created a volume on the master.
17:52 marc_ really?
17:53 marc_ ok… just a moment…
17:53 JoeJulian That's what _I_ did. Not sure if that's the right way anymore.
17:53 marc_ But then you need a brick?
17:54 marc_ Is the brick what you specify in «gluster-mountbroker setup», e.g. /var/replication from: sudo gluster-mountbroker setup /var/replication replication?
17:54 JoeJulian Well, you would need to put the data somewhere.
17:55 JoeJulian That sounds, to me, like trying to mount a volume named "replication" to the path "/var/replication"
17:55 marc_ ?
17:56 marc_ that's where the replication should go to
17:56 marc_ as far as I understood
17:58 marc_ can you explain me?
17:58 marc_ at least, /var/replication is not mounted
17:59 JoeJulian Right, because the volume "replication" does not exist on the slave.
17:59 marc_ shall I try: sudo gluster volume create replication raum:/var/replication force
18:00 JoeJulian You could, but then you can't mount that volume to /var/replication
18:00 JoeJulian maybe /mnt/replication
18:00 marc_ don't want it to mount anywhere ...
18:00 JoeJulian I thought that's what the mount broker was for
18:01 marc_ I don't know, there are no explanations in this awful documentation.
18:01 marc_ But mountbroker does not mount anything
18:01 marc_ obviously
18:01 JoeJulian Feel free to file issues as you run across things.
18:01 marc_ since I understand nothing...
18:01 JoeJulian https://github.com/gluster/glusterdocs/issues
18:02 glusterbot Title: Issues · gluster/glusterdocs · GitHub (at github.com)
18:02 marc_ first I must get it up, then I'll blog my problems...
18:02 marc_ ... and the solutions
18:03 marc_ so, I'll try: sudo gluster volume create replication raum:/var/gluster/replication force
18:06 ankitr joined #gluster
18:11 marc_ still does not work; do I need to only create the volume, or also to start it?
18:14 marc_ Yeah, seems right, JoeJulian! Thanks
18:17 _KaszpiR_ joined #gluster
18:18 marc_ Hey, Joejulian, you're great! «Creating geo-replication session between volumes & replication@raum::replication has been successful»
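The slave-side steps that finally produced the successful session above can be recapped as a sketch. Hostnames, paths, and the `replication` user/volume names are taken verbatim from this log, not from a tested setup; the commands are emitted as strings for inspection:

```shell
#!/usr/bin/env bash
# Hedged recap of the slave-side geo-replication preparation from this
# session. Adapt names and paths to your own cluster before running.
set -euo pipefail

georep_slave_cmds() {
  # Slave volume; its brick path must differ from the mountbroker root.
  echo "gluster volume create replication raum:/var/gluster/replication force"
  echo "gluster volume start replication"
  # Mountbroker: unprivileged user 'replication', root /var/replication.
  echo "gluster-mountbroker setup /var/replication replication"
  echo "gluster-mountbroker add volumes replication"
}

georep_slave_cmds
```

After these, the master-side `gluster volume geo-replication ... create push-pem` step reported success in the log above.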
18:23 _KaszpiR_ joined #gluster
18:26 ThHirsch joined #gluster
18:39 marc_ Georeplication: Status is «Faulty» in «sudo gluster volume geo-replication status» — how can I analyze, where do I find logs?
18:40 marc_ e.g.: universum      volumes       /var/gluster/volumes    replication    ssh://replication@raum::replication    N/A           Faulty    N/A             N/A
18:42 cloph most likely error on slave in /var/log/glusterfs/geo-replication-slaves/*log
18:46 cloph and/or in master in whatever gluster volume geo master slave config log_file  points to...
18:46 marc_ seems to be here: /var/log/glusterfs/geo-replication/volumes/ssh%3A%2F%2Freplication%40192.168.99.7%3Agluster%3A%2F%2F127.0.0.1%3Areplication.log http://paste.ubuntu.com/25491907/
18:46 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
18:47 marc_ any idea?
18:50 marc_ ssh command seems to fail, but why?
18:50 cloph you didn't properly setup  authentication check on slave what fails
18:50 marc_ can I try this manually?
18:50 marc_ how do I setup authentication check on slave?
18:53 cloph you didn't properly setup authentication. Check on slave what fails.
18:54 marc_ how do I setup authentication check on slave?
18:54 marc_ how do I setup authentication?
18:55 marc_ what authentication? ssh works…
18:55 cloph so you claim, but apparently not for the user you try to use for the geo-replication
18:56 Saravanakmr joined #gluster
18:56 cloph but as said: Check logs on the *SLAVE* to get more info.
19:28 kpease joined #gluster
19:29 WebertRLZ joined #gluster
19:33 Gambit15 joined #gluster
20:05 kpease joined #gluster
20:14 PatNarciso_ joined #gluster
20:38 jbrooks joined #gluster
20:58 PatNarciso_ considering upgrading from 3.10.5 to 3.12 (skipping 3.11) unless suggested otherwise.
20:58 elyograg left #gluster
20:59 PatNarciso_ (simple single volume, single brick setup)
20:59 JoeJulian I'd always test your workload before committing to that, but that's the only qualm I would have.
21:04 PatNarciso_ ... I'm looking for verification, but at this time -- I believe CentOS SIG is behind?
21:05 JoeJulian Last time I looked, yes.
21:08 marc_ cloph, log on slave, could be that: http://paste.ubuntu.com/25492451/
21:08 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
21:08 PatNarciso_ k.  thanks.  I'm going to postpone the upgrade until after the hurricane passes.
21:08 JoeJulian Ugh, you're dealing with that, huh. Best wishes to you.
21:10 JoeJulian marc_: Looks like "/var/log/glusterfs/geo-replication-slaves/c6ed98d9-483d-4252-b55b-8aaff4d8a59b:universum.%2Fvar%2Fgluster%2Fvolumes.replication.gluster.log" has the interesting bits.
21:10 marc_ which file not found, JoeJulian?
21:11 marc_ what is it trying to do?
21:11 JoeJulian log output
21:12 JoeJulian Line 3 and 14 of the log you posted.
21:12 marc_ log post ist from this: /var/log/glusterfs/geo-replication-slaves/mbr/c6ed98d9-483d-4252-b55b-8aaff4d8a59b:universum.%2Fvar%2Fgluster%2Fvolumes.replication.log
21:13 JoeJulian Meh, my damned paste key bounced.
21:13 PatNarciso_ JoeJulian, thank you sir.  Monday 5am EST -- all current forecasts look bad for Orlando.  This site sums up the situation well: https://www.windy.com/?gfs,2017-09-11-09,28.361,-81.380,10
21:13 glusterbot Title: Windy, Windyty. Wind map & weather forecast (at www.windy.com)
21:17 marc_ JoeJulian is the problem that the log cannot be opened, because it belongs to root and replication is done with an unprivileged user account?
21:18 marc_ Or what file cannot be opened?
21:18 marc_ at least, something is writing into the log
21:21 marc_ What's this: [glusterd-mountbroker.c:548:glusterd_do_mount] 0-management: 'option mountbroker-root' missing in glusterd vol file
21:22 marc_ found in /var/log/glusterfs/glusterd.log
21:23 marc_ seems mountbroker tries to mount and fails?
21:25 marc_ what and why?!?
21:26 marc_ from /var/log/glusterfs/glusterd.log http://paste.ubuntu.com/25492522/
21:26 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
21:28 * JoeJulian grumbles about how Red Hat's upstream first policy doesn't apply to documentation. https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/chap-User_Guide-Geo_Rep-Preparation-Settingup_Slave.html
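The "option mountbroker-root missing" error above corresponds to the mountbroker stanza described in the Red Hat guide JoeJulian links. A hedged sketch of the relevant lines, with the user, group, and paths taken from this session (normally `gluster-mountbroker setup`/`add` writes these for you; the error suggests that did not take effect here):

```ini
# Lines expected inside the existing 'volume management' block of
# /etc/glusterfs/glusterd.vol on the slave; restart glusterd after editing.
option mountbroker-root /var/replication
option mountbroker-geo-replication.replication replication
option geo-replication-log-group replication
option rpc-auth-allow-insecure on
```

The `mountbroker-geo-replication.<user> <slavevol>` line maps the unprivileged ssh user to the slave volume it may mount, and `geo-replication-log-group` lets that user's group write the geo-rep logs — which would also explain the log-permission question above.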
21:45 kpease joined #gluster
21:46 major joined #gluster
22:15 Telsin joined #gluster
22:22 anthony25 joined #gluster
22:30 baojg joined #gluster
22:57 baojg joined #gluster
23:03 omie888777 joined #gluster
23:08 baojg joined #gluster
23:24 baojg joined #gluster
23:45 baojg joined #gluster
23:47 rastar_ joined #gluster
23:49 kpease joined #gluster
