
IRC log for #gluster, 2017-04-07


All times shown according to UTC.

Time Nick Message
00:09 gem joined #gluster
00:25 arpu joined #gluster
00:25 nh2 glustin, JoeJulian: So what would you recommend for Linux distributions? At the time of NixOS 17.03 release, 3.9 was stable. The NixOS people are wondering whether they should upgrade to 3.10 inside their stable distribution release or not. If e.g. there will be no security updates or important other updates for 3.9, I think they should upgrade even though it's inside their stable release
00:38 farhoriz_ joined #gluster
00:42 zerick joined #gluster
00:42 farhorizon joined #gluster
00:45 gospod3 joined #gluster
00:47 gospod3 I'm thinking about deploying gluster on my 2 nodes and I have some questions first. 1) Is communication between nodes "multichannel"? If provided with 5x GbE, does it use all at the same time (5 Gbps)? 2) Which filesystem underneath is best (centos)? 3) BTRFS good for gluster?
00:48 gospod3 4) gluster under freebsd possible yet? 5) gluster possible at least with freebsd+docker? any good tutorials you can provide also :-)
00:50 armyriad joined #gluster
01:04 kramdoss_ joined #gluster
01:18 daMaestro joined #gluster
01:22 shdeng joined #gluster
01:29 arpu joined #gluster
01:37 kramdoss_ joined #gluster
01:46 aravindavk joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:57 farhorizon joined #gluster
02:05 derjohn_mob joined #gluster
02:05 gyadav_ joined #gluster
02:07 gyadav_ joined #gluster
03:02 ira joined #gluster
03:04 Gambit15 joined #gluster
03:09 ankitr joined #gluster
03:15 prasanth joined #gluster
03:25 pdrakeweb joined #gluster
03:42 om2_ joined #gluster
03:43 riyas joined #gluster
03:49 buvanesh_kumar joined #gluster
03:49 bulde joined #gluster
04:00 gem joined #gluster
04:03 kramdoss_ joined #gluster
04:13 gyadav_ joined #gluster
04:13 apandey joined #gluster
04:25 Shu6h3ndu_ joined #gluster
04:26 zerick joined #gluster
04:28 skumar joined #gluster
04:28 nishanth joined #gluster
04:35 ashiq joined #gluster
04:39 msvbhat joined #gluster
04:44 ankitr joined #gluster
04:46 ppai joined #gluster
04:54 jiffin joined #gluster
04:58 kotreshhr joined #gluster
04:59 ankitr joined #gluster
05:02 nbalacha joined #gluster
05:04 prasanth joined #gluster
05:07 ndarshan joined #gluster
05:12 nbalacha joined #gluster
05:13 hgowtham joined #gluster
05:14 skoduri joined #gluster
05:16 sanoj joined #gluster
05:22 apandey_ joined #gluster
05:26 Prasad joined #gluster
05:27 atinm joined #gluster
05:41 ankitr joined #gluster
05:42 Karan joined #gluster
05:42 karthik_us joined #gluster
05:43 kramdoss_ joined #gluster
05:44 sbulage joined #gluster
05:48 aravindavk joined #gluster
05:49 Saravanakmr joined #gluster
05:56 bulde joined #gluster
06:07 kdhananjay joined #gluster
06:07 rafi joined #gluster
06:09 buvanesh_kumar joined #gluster
06:09 kramdoss_ joined #gluster
06:11 saduser joined #gluster
06:15 prasanth joined #gluster
06:17 nbalacha joined #gluster
06:20 msvbhat joined #gluster
06:26 jtux joined #gluster
06:27 jtux left #gluster
06:32 bulde joined #gluster
06:34 jiffin1 joined #gluster
06:35 sac` joined #gluster
06:35 amarts` joined #gluster
06:41 sona joined #gluster
06:43 R0ok_ joined #gluster
06:48 skoduri joined #gluster
06:48 flying joined #gluster
06:53 mbukatov joined #gluster
06:56 kdhananjay1 joined #gluster
06:56 kramdoss_ joined #gluster
06:57 kdhananjay joined #gluster
07:04 ivan_rossi joined #gluster
07:06 ivan_rossi left #gluster
07:12 itisravi joined #gluster
07:19 kramdoss_ joined #gluster
07:25 absolutejam joined #gluster
07:30 masber joined #gluster
07:40 jiffin1 joined #gluster
07:52 Karan joined #gluster
07:55 skoduri joined #gluster
08:09 ksandha_ joined #gluster
08:13 itisravi_ joined #gluster
08:13 Jacob8432 joined #gluster
08:14 derjohn_mob joined #gluster
08:17 social joined #gluster
08:19 Chewi left #gluster
08:20 Skinny joined #gluster
08:23 Skinny hi all
08:24 Skinny I'm trying to enable ganesha on my newly setup gluster cluster. However it fails with : `nfs-ganesha: failed: creation of symlink ganesha.conf in /etc/ganesha failed`
08:25 Skinny running the command (ganesha-ha.sh) manually works ok, but apparently the 'runner' logic in gluster messes something up in my environment. How can I troubleshoot this issue further ?
08:29 bulde joined #gluster
08:31 skoduri Skinny, have you copied ganesha.conf and ganesha-ha.conf files in the shared-storage ?
08:31 Skinny jup
08:31 skoduri just to confirm in '/var/run/gluster/shared_storage/nfs-ganesha' folder right?
08:31 Skinny correct
08:31 Skinny `/usr/lib/x86_64-linux-gnu/ganesha/ganesha-ha.sh --setup-ganesha-conf-files /var/run/gluster/shared_storage/nfs-ganesha/ yes`
08:32 Skinny I run that command, it creates the symlink to /etc/ganesha/ganesha.conf
08:32 Skinny and the exitcode = 0
08:32 Skinny but from `gluster ganesha enable` it complains it cannot create the symlink
08:34 atinm joined #gluster
08:34 skoduri okay do you have nfs-ganesha rpms installed on all the nodes of the gluster cluster?
08:34 Skinny I'm running ubuntu, but yes, the packages are all there
08:35 skoduri could u cross check if /etc/ganesha/ folder exists in all the nodes in the storage pool before running nfs-ganesha CLI
08:35 skoduri by all the nodes I mean including the ones which are part of the gluster cluster but may not be part of the nfs-ganesha cluster
08:36 Skinny all gluster nodes (3) will join the ganesha-cluster too
08:36 Skinny so there's 1-1 match
08:36 skoduri ah okay..hmm... wait u r using ubuntu build right..
08:37 Skinny yes
08:37 Skinny glusterfs-3.10 ppa
08:37 skoduri can u confirm if those builds create default ganesha.conf at '/etc/ganesha' folder when installed?
08:37 Skinny https://gist.github.com/skinny/348b940ae974d20fa8e3e2120eec332e
08:37 glusterbot Title: error · GitHub (at gist.github.com)
08:37 Skinny these line appear in the glusterd log when executing "gluster ganesha enable"
08:37 Skinny yes they all create a default ganesha.conf
08:38 Skinny but I deleted them because the symlink would fail otherwise
08:38 skoduri you need not delete them; the gluster-CLI + ganesha-ha.sh script takes care of removing those files and replacing them with symlinks
08:38 skoduri that could be the error you got
08:39 Skinny I can place them back, but the error appeared when those files were still there
08:39 Skinny and running the ganesha-ha.sh script manually works
08:42 skoduri ahh ..got it .. maybe glusterd is not able to resolve the path of ganesha-ha.sh properly
08:43 sanoj joined #gluster
08:43 Skinny how can I check that ?
08:44 skoduri Skinny, from the sources it looks like it tries to look into the "$(libexecdir)/ganesha" folder... I am not sure what the $libexecdir path is for ubuntu... ndevos ^^..any idea?
08:45 Skinny let me try to find out
08:48 apandey joined #gluster
08:50 Skinny @skoduri https://github.com/gluster/glusterfs-debian/commit/238f8a67f94963b04db7d015b8c159d8f5587955
08:50 Skinny 28 days ago and actually something about libexec
08:51 Skinny mm installed version is : 3.10.1-ubuntu1~xenial1
08:51 skoduri Skinny, I am looking at https://review.gluster.org/#/c/16881/ (https://bugzilla.redhat.com/show_bug.cgi?id=1430844) too
08:51 glusterbot Bug 1430844: unspecified, unspecified, ---, kkeithle, CLOSED CURRENTRELEASE, build/packaging: Debian and Ubuntu don't have /usr/libexec/; results in bad packages
08:52 Skinny looks like the issue resulting in the commit I mentioned
08:52 Skinny glusterfs-3.10.1 is on my system but maybe the fix is not 100% correct
08:53 Skinny https://review.gluster.org/#/c/16881/4/xlators/mgmt/glusterd/src/Makefile.am
08:53 glusterbot Title: Gerrit Code Review (at review.gluster.org)
08:53 Skinny why is that line changed, but not the ganesha
08:53 Skinny (two lines below)
08:53 Skinny just thinking out loud btw... nofi
08:55 skoduri thats because GSYNCD_PREFIX is libexecdir/glusterfs where as GANESHA_PREFIX is libexecdir/ganesha
08:55 skoduri maybe  you could file a bug and we can check with kkeithley once he comes online in few hours
08:55 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
08:55 apandey_ joined #gluster
09:02 Skinny @skoduri : does this summarize the issue good enough ?
09:02 Skinny https://bugzilla.redhat.com/show_bug.cgi?id=1440064
09:02 glusterbot Bug 1440064: unspecified, unspecified, ---, bugs, NEW , gluster nfs-ganesh enable fails to create symlink
09:03 skoduri Skinny, yup..thanks..will CC kkeithley too in the bug
09:03 Skinny tx
09:04 Skinny in the meantime, can you think of a way around this (to do it manually ?)
09:04 Skinny so I can hopefully setup my test cluster before the weekend
09:04 Skinny running the .sh scripts on all nodes is not the issue, but letting gluster know it's alright is another
09:06 skoduri Skinny, maybe manually ganesha folder to libexecdir location? I do not know the path in Ubuntu..you could try creating '/usr/libexec/'
09:06 skoduri *manually copying
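That suggested workaround can be sketched as below. The two paths are the ones named in this discussion (Ubuntu's multiarch libexecdir and the hard-coded /usr/libexec); the ROOT staging variable and the fabricated source file are additions here so the copy can be rehearsed in a scratch directory instead of the live filesystem.

```shell
# Set ROOT="" to operate on the real filesystem; by default, rehearse
# in a throwaway scratch root.
ROOT="${ROOT-$(mktemp -d)}"

# When rehearsing, fabricate the Ubuntu layout so this sketch is
# self-contained (a real node already has these files installed).
if [ -n "$ROOT" ]; then
    mkdir -p "$ROOT/usr/lib/x86_64-linux-gnu/ganesha"
    printf '#!/bin/bash\n' > "$ROOT/usr/lib/x86_64-linux-gnu/ganesha/ganesha-ha.sh"
fi

# The workaround itself: mirror the ganesha helper scripts into
# /usr/libexec, the location glusterd had hard-coded at the time.
mkdir -p "$ROOT/usr/libexec"
cp -r "$ROOT/usr/lib/x86_64-linux-gnu/ganesha" "$ROOT/usr/libexec/"
```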
09:16 Norky joined #gluster
09:26 apandey__ joined #gluster
09:40 sona joined #gluster
09:45 Skinny @skoduri is there a way to find out with what parameters the ubuntu packages were built ? (especially the libexecdir)
09:46 apandey_ joined #gluster
09:57 jiffin Skinny: ndevos or kkeithley are the building experts
09:58 jwd joined #gluster
10:00 riyas joined #gluster
10:17 sona joined #gluster
10:27 jkroon joined #gluster
10:30 percevalbot joined #gluster
10:39 hgowtham_ joined #gluster
10:40 shwethahp joined #gluster
10:40 shwethahp left #gluster
10:41 hgowtham joined #gluster
10:45 msvbhat joined #gluster
10:45 [diablo] joined #gluster
10:46 [diablo] Good afternoon #gluster
10:46 [diablo] guys, the following is a problem my co-worker encountered yesterday. I will try to pass on the facts as I understood them. I'd be very interested in your views:
10:47 [diablo] we have 2 x nodes running the official RHGS ... both provide NFS (via Gluster), Samba/CTDB, and Gluster fuse
10:48 [diablo] one unit was to be placed in another DC, so he moved all IP/VIP's etc to one node, and shutdown the other
10:48 [diablo] upon doing this, the node that was meant to remain up and provide service also shutdown
10:49 [diablo] it was then powered back up again, he managed to get everything working, except the NFS (gluster)
10:49 [diablo] seems when he sets it enabled on a volume, it fails to enable
10:49 [diablo] .... any ideas please?
11:04 nh2 joined #gluster
11:09 Karan joined #gluster
11:11 buvanesh_kumar joined #gluster
11:16 msvbhat joined #gluster
11:19 Wizek_ joined #gluster
11:33 buvanesh_kumar joined #gluster
11:33 kkeithley Skinny, jiffin: The debian and ubuntu packaging bits are in https://github.com/gluster/glusterfs-debian
11:33 glusterbot Title: GitHub - gluster/glusterfs-debian: Debian packaging of Gluster (at github.com)
11:33 kkeithley Or I believe you can unpack the .dsc file and get them all.
11:34 devyani7 joined #gluster
11:35 devyani7 joined #gluster
11:36 kkeithley you have a question about $libexecdir?
11:36 skoduri kkeithley, we wanted to know what is the path of libexecdir for ubuntu packages
11:36 skoduri yes
11:39 kkeithley For a long time it was /usr/libexec, but that was a bug. In the latest 3.10 and 3.8 packages it's /usr/lib/$arch-linux-gnu
11:39 kkeithley A bug because Debian doesn't have /usr/libexec normally.
11:40 kkeithley see https://review.gluster.org/16882
11:40 glusterbot Title: Gerrit Code Review (at review.gluster.org)
11:40 Skinny @kkeithley ok, as I installed the machines yesterday I assume I have the correct version (from the 3.10 ppa )
11:40 kkeithley https://review.gluster.org/16881 and https://review.gluster.org/16880
11:40 glusterbot Title: Gerrit Code Review (at review.gluster.org)
11:41 kkeithley ??? if you installed from the PPA you have the only version.
11:42 kkeithley the latest packages from the PPA (3.10.1, 3.8.10) have the fixed libexecdir
11:42 Skinny I have those, thanks
11:43 Skinny Makes me wonder why the  'gluster nfs-ganesha enable' call would fail
11:43 Skinny when the underlying ganesha-ha.sh script  runs fine when executed manually
11:48 Skinny Ok, I 'strace'd the gluster daemon whil executing the enable command
11:48 Skinny https://gist.github.com/skinny/b385375bd6490c8c261049efaca4b76b
11:48 glusterbot Title: gist:b385375bd6490c8c261049efaca4b76b · GitHub (at gist.github.com)
11:48 kotreshhr left #gluster
11:48 kkeithley the GANESHA_PREFIX (used by glusterd to exec ganesha-ha.sh) is $(libexecdir)/ganesha so it should be looking in the right place for it, i.e. /usr/lib/$arch-linux-gnu/ganesha/ganesha-ha.sh
11:49 Skinny and the gist above shows that it actually does, but the error that shows up is not making sense to me
11:51 kkeithley ah, well, there's at least one hard-coded /usr/libexec/ganesha in ganesha-ha.sh itself. That's a bug
11:53 Skinny I fixed that locally but that's not the cause in my case
11:56 [diablo] guys, now I see there's some movement here, at 12:48 I wrote something about an issue we encountered yesterday, if u guys have any ideas they'd be most appreciated
11:57 kkeithley I've opened https://bugzilla.redhat.com/show_bug.cgi?id=1440148 to fix the hard-coded /usr/libexec/...
11:57 glusterbot Bug 1440148: unspecified, unspecified, ---, kkeithle, ASSIGNED , common-ha (debian/ubuntu): ganesha-ha.sh has a hard-coded /usr/libexec/ganesha...
11:58 kkeithley If you figure anything else out feel free to update that BZ with the details
11:58 Skinny I removed the rhel7_config function to test
11:58 Skinny the only error remaining is :  /usr/lib/x86_64-linux-gnu/ganesha/ganesha-ha.sh: Syntax error: redirection unexpected
11:58 Skinny around line 242
11:58 Skinny my shell script skills lack a bit there to see why that could happen
12:00 kkeithley I've added those comments to the BZ so I don't forget. If you find anything else ping me here or update the BZ
12:01 Skinny I think I found it
12:01 Skinny you have 1 min to confirm my thoughts ?
12:01 Skinny running the ganesha-ha.sh script from the ubuntu command line manually will run just fine (within bash)
12:02 Skinny however from within the glusterd deamon, the script is executed with a 'sh -c' call
12:02 Skinny which will override the used shell
12:02 Skinny the default shell on ubuntu is dash
12:02 Skinny and that doesnt have the <<< operator
12:02 Skinny used in the scripts (for logging)
12:02 Skinny that's the line failing with the 'redirection' error
12:03 kkeithley but ganesha.ha has shebang /bin/bash
12:03 Skinny http://stackoverflow.com/a/2462357
12:03 glusterbot Title: ubuntu - Bash: Syntax error: redirection unexpected - Stack Overflow (at stackoverflow.com)
12:03 Skinny read that answer (3rd comment)
12:03 Skinny sh 'script' will use 'sh' instead of the shell in the file
12:04 kkeithley okay
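The failure mode being described can be reproduced in a few lines; the script name below is hypothetical, and the second invocation only reproduces the error on systems where /bin/sh is dash (Ubuntu's default).

```shell
# A bash script using a here-string (<<<), the construct ganesha-ha.sh
# uses for logging. The shebang requests bash explicitly.
cat > /tmp/herestring-demo.sh <<'EOF'
#!/bin/bash
grep -c gluster <<< "gluster here-string test"
EOF
chmod +x /tmp/herestring-demo.sh

# Direct execution honors the shebang, so bash runs the script:
/tmp/herestring-demo.sh

# Naming an interpreter on the command line overrides the shebang; on
# Ubuntu sh is dash, which has no <<< operator, so this is where the
# "Syntax error: redirection unexpected" from the glusterd log appears:
sh /tmp/herestring-demo.sh || true
```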
12:04 skoduri [diablo], I suppose you are using gluster native NFS..could you check /var/log/gluster/nfs.log
12:05 Skinny @kkeithley
12:05 Skinny resetting ubuntu to use bash at the default shell
12:05 Skinny fixes this error
12:05 Skinny just tested it
12:06 Skinny on to the next one : '[2017-04-07 12:05:01.381485] E [MSGID: 106471] [glusterd-ganesha.c:159:manage_service] 0-management: Could not start NFS-Ganesha.Service manager for distro not recognized.'
12:07 nishanth joined #gluster
12:08 Skinny fixed (for now) by symlinking /bin/systemctl to /usr/bin/systemctl
12:09 Skinny the locations checked in 'manage_service' don't include the ubuntu location, which is /bin/systemctl
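The hard-coded path list in glusterd's manage_service (C code) is what breaks here; as a sketch of the alternative, a PATH lookup finds the binary regardless of where a distro installs it (/usr/bin/systemctl on Fedora/CentOS, /bin/systemctl on Ubuntu of this era).

```shell
# Locate systemctl wherever the distro installed it, instead of probing
# a fixed list of absolute paths; print a notice if systemd is absent.
command -v systemctl || echo "systemctl not found on PATH"
```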
12:09 kkeithley I'm not sure how well other ubuntu users will like the idea of having to change the default shell.
12:11 [diablo] hi skoduri yes, it's native
12:11 [diablo] sorry went afk
12:11 [diablo] I'll take a look
12:11 skoduri [diablo] no problem..are there any errors in the nfs.log file?
12:11 [diablo] just getting into the machine now
12:13 Skinny @kkeithley.. no, but I just think that you should remove  'sh' from the gluster code
12:13 Skinny just execute the script and rely on the #!/bin/bash in the file to execute the bash shell
12:13 [diablo] skoduri ok there's quite a lot, but here's the ending lines http://paste.ubuntu.com/24333652/
12:13 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
12:14 [diablo] I only learned of this today, I had no involvement until now, so I am a little blind as to exactly what is going on
12:18 skoduri [diablo], looks like rpc.statd service was not running.. try manually starting it using "service statd start" command
12:19 [diablo] hi yeah I've just seen that also
12:19 [diablo] odd thing is... it's actually running currently
12:19 krink joined #gluster
12:20 [diablo] so I assume in theory we should be able to start the NFS on the volumes
12:20 skoduri [diablo], could it be a timing issue?..that it was not running before gluster-NFS tried to start
12:20 skoduri yes
12:20 kkeithley oh duh., of course exec("sh -c ....") ignores the shebang.
12:20 Skinny yes :()
12:21 Skinny nasty one
12:21 w4nt joined #gluster
12:22 [diablo] just trying to reach the guy who was working on it yesterday
12:22 dominicpg joined #gluster
12:25 aardbolreiziger joined #gluster
12:27 riyas_ joined #gluster
12:28 [diablo] OK sadly he wants to wait until Monday, as the 2nd node will be getting placed in the other DC
12:29 [diablo] but thanks skoduri I'll look more into this
12:29 skoduri [diablo], sure... welcome
12:29 [diablo] many thanks dude
12:30 kkeithley skoduri is a lady, not a dude. ;-)
12:30 skoduri :)
12:33 dominicpg joined #gluster
12:33 Skinny :D
12:33 Skinny *assumptions are ..*
12:33 Skinny ;-)
12:34 Skinny [This will take a few minutes to complete. Please wait .. nfs-ganesha : success]
12:34 Skinny well that would be a nice end of the week
12:34 skoduri definitely... awesome debugging Skinny :)
12:34 Skinny https://gist.github.com/skinny/1ba1c20e1bcfed5d72e4e0c17255d81b
12:34 glusterbot Title: gist:1ba1c20e1bcfed5d72e4e0c17255d81b · GitHub (at gist.github.com)
12:35 kkeithley hmm, trying to find where that "sh -c" happens
12:36 Skinny in ganesha.c
12:36 Skinny glusterd-ganesha.c
12:36 Skinny runner_add_args (&runner, "sh",                         GANESHA_PREFIX"/create-export-ganesha.sh",                         CONFDIR, value, volname, NULL);
12:36 Skinny those kind of lines
12:37 kkeithley yup
12:40 kkeithley seems silly to run "sh /usr/.../ganesha/ganesha-ha.sh blah blah"   just run "/usr/.../ganesha/ganesha-ha.sh blah blah blah"
12:41 kkeithley then the shebang will do its magic
12:42 kharloss joined #gluster
12:49 Skinny @kkeithley life can be simple :)
12:50 Skinny any idea why the script would create the HA IP addresses in corosync/pacemaker with ` cidr_netmask=32 `
12:50 Skinny ?
12:52 kkeithley because that's what the tutorial I was following showed, and it works. Or seems to anyway.
12:54 kkeithley The pacemaker devs reviewed the code and have helped debug and never said it was wrong
12:54 kkeithley s/code/setup script/
12:54 glusterbot What kkeithley meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
12:54 glusterbot What kkeithley meant to say was: I've opened https://bugzilla.redhat.com/show_bug.cgi?id=1440148 to fix the hard-setup scriptd /usr/libexec/...
12:55 kkeithley glusterbot fail
12:55 glusterbot kkeithley: I do not know about 'fail', but I do know about these similar topics: 'heal-failed', 'remote operation failed'
12:55 Skinny indeed, I was looking at it the wrong way, nvm
12:56 msvbhat joined #gluster
12:58 kkeithley of course they may not have ever actually looked at that part, so...
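The VIP configuration under discussion is typically expressed as a pacemaker resource; the fragment below is a hedged sketch only (the resource name and address are hypothetical, not taken from ganesha-ha.sh), showing where the cidr_netmask=32 parameter lands. A /32 netmask means the VIP is a host route: the address floats between nodes without implying anything about the subnet, which is why it works here.

```shell
# Hypothetical floating-VIP resource; name and IP are illustrative.
# cidr_netmask=32 makes the address a single-host (/32) route.
pcs resource create nfs_vip ocf:heartbeat:IPaddr2 \
    ip=192.0.2.10 cidr_netmask=32 op monitor interval=15s
```

This is a cluster-configuration fragment and needs a running pacemaker cluster to execute.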
13:00 glusterbot joined #gluster
13:02 sbulage joined #gluster
13:20 ndevos JoeJulian: glusterbot is in disagreement with itself again...
13:23 glustin nh2: I would stick with the LTM releases for that, which have 12-month maintenance windows. Every-other release is LTM; right now that means even-numbered dot releases, but the release plan isn't specifically tied to even numbers, just every-other.
13:23 glustin nh2: see https://www.gluster.org/community/release-schedule/
13:24 glustin 3.9 was a STM release, which meant a maintenance window of only 3 months, so I would not include that in a Linux release.
13:27 Asako joined #gluster
13:27 jiffin joined #gluster
13:28 Asako Good morning.  Does gluster automatically load balance read requests?  I'm going to be setting up a file server with mirrored volumes and it would be nice to balance the load for better performance.
13:28 glustin Asako: effectively, sort-of, yes. ;)
13:29 glustin Asako: with the native client, it's an intentional race for reads of replicated volumes
13:29 glustin lowest latency wins
13:29 Asako glustin: right now I have nfs-ganesha exporting the volume using the gluster FSAL
13:30 Asako the lowest latency will probably always be localhost
13:30 glustin Asako: OK, NFS is a different story
13:30 glustin you have to load-balance that externally
13:31 Asako I could have clients mount the volume natively but I don't want them to have access to the entire file system
13:31 glustin So with NFS you are exporting subdirectories of the Gluster volume?
13:31 jiffin Asako: ever heard of pNFS?
13:32 Asako heard of it, don't know anything about it
13:32 jiffin Asako: what type of volume are using?
13:32 Asako glustin: yeah, I'm exporting subdirs through ganesha
13:33 glustin Asako: and you mentioned localhost... so your Gluster servers are also clients?
13:33 Asako jiffin: it's just single brick volume right now, I'll be adding another brick later once the VM is built
13:33 Asako the gluster host is also the nfs host
13:33 glustin right, ok
13:34 glustin So either pNFS as jiffin is mentioning, or you need an external load balancer for NFS-Ganesha
13:34 Asako I'm just wondering if the secondary node will just sit idle all the time
13:34 Asako ok
13:35 Asako and how do I control who can mount a volume?  I don't want our entire network being able to just mount the volume.
13:36 jiffin Asako: for i/o's pNFS clients ensure that it will talk to ganesha servers which is local to bricks
13:37 skylar joined #gluster
13:37 Asako this is all wonderfully complicated :D
13:38 jiffin Asako: Yeah a kind of
13:38 jiffin :)
13:40 glustin Crude, but:
13:40 glustin +----------+                         +----------+
13:40 glustin | Client 1 +--------+       +--------+ Client 2 |
13:40 glustin +----------+        |       |        +----------+
13:40 glustin                     |       |
13:41 glustin               +-----+-------+-----+
13:41 glustin               |   Load Balancer   |
13:41 glustin               +---+-----------+---+
13:41 glustin                   |           |
13:41 glustin            .......+....  .....+......
13:41 glustin            . Ganesha  .  . Ganesha  .
13:41 glustin        +---+ HA VIP 1 .  . HA VIP 2 +----+
13:41 glustin        |   ...........  ............     |
13:41 glustin        |                                 |
13:41 glustin +------+---------------+         +-------+--------------+
13:41 glustin |                      |         |                      |
13:41 glustin |   Gluster / NFS      | Gluster |    Gluster / NFS     |
13:41 glustin |                      | <-----> |                      |
13:41 glustin |                      | Traffic |                      |
13:41 glustin +----------------------+         +----------------------+
13:40 jiffin Asako: u can read abt it, http://events.linuxfoundation.org/sites/events/files/slides/untitled2.pdf
13:41 cloph oh, botfight
13:41 cloph glustin: next time use a pastebin @paste glustin
13:41 glustin gawd lol
13:41 * glustin apologizes
13:41 glustin yeah; mea culpa
13:42 Asako I found the docs
13:43 Asako pnfs reminds me of lustre
13:47 msvbhat joined #gluster
13:54 plarsen joined #gluster
14:01 BitByteNybble110 joined #gluster
14:03 kkeithley Skinny: https://review.gluster.org/#/c/17013/1
14:03 kkeithley take a look. I think this fixes everything
14:03 kkeithley skoduri: ^^^
14:05 glustin kkeithley: I'm quoting you on that... "I think this fixes everything"
14:06 kkeithley context is everything
14:11 Skinny @kkeithley thanks! so now you removed the check for /usr/bin/systemctl ?
14:12 Skinny don't know if there are any distros who use that
14:13 Skinny looks good ! will try it asap but then I need to setup my build env first (is going to be on monday)
14:15 farhorizon joined #gluster
14:16 kramdoss_ joined #gluster
14:20 farhorizon joined #gluster
14:28 squizzi joined #gluster
14:30 kkeithley Skinny: Fedora and RHEL7 have a symlink /bin -> /usr/bin so we can just use /bin/systemctl everywhere
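kkeithley's point can be checked directly: on usr-merged distros, /bin is a symlink into /usr, so both systemctl paths name the same binary. A minimal sketch in plain POSIX shell (nothing gluster-specific is assumed):

```shell
# Check whether this distro has the /bin -> /usr/bin merge kkeithley describes.
# If /bin is a symlink, /bin/systemctl and /usr/bin/systemctl are the same file,
# so hardcoding /bin/systemctl is safe on Fedora/RHEL7-style layouts.
if [ -L /bin ]; then
  msg="/bin is a symlink to $(readlink /bin)"
else
  msg="/bin is a real directory (no usr-merge)"
fi
echo "$msg"
```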
14:31 glusterbot joined #gluster
14:36 aardbolreiziger joined #gluster
14:36 glusterbot joined #gluster
14:42 krink joined #gluster
14:45 nbalacha joined #gluster
14:52 skoduri joined #gluster
14:59 ndevos glustin: I love ascii art diagrams too, and would like to point you to termbin.com :)
15:00 ndevos s/ascii art/ascii-art/
15:00 glusterbot What ndevos meant to say was: glustin: I love ascii-art diagrams too, and would like to point you to termbin.com :)
15:00 * ndevos watches glusterbot not crashing
15:00 glustin ndevos: Ooooh, I forgot about termbin
15:06 MrAbaddon joined #gluster
15:06 shyam joined #gluster
15:07 kpease joined #gluster
15:07 alvinstarr joined #gluster
15:18 wushudoin joined #gluster
15:19 Asako any way to see why ganesha.nfsd is using 48% cpu?
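One generic way to answer Asako's question is the per-thread CPU counters under /proc. The sketch below uses the current shell's PID as a stand-in so it runs anywhere; on the NFS server you would substitute ganesha.nfsd's PID. (The stat field positions assume the thread name contains no spaces.)

```shell
# Per-thread CPU usage from /proc; utime+stime are fields 14 and 15 of stat.
pid=$$   # stand-in; on the NFS server use: pid=$(pidof ganesha.nfsd)
for t in /proc/"$pid"/task/*; do
  read -r comm < "$t/comm"                    # thread name
  ticks=$(awk '{print $14 + $15}' "$t/stat")  # CPU ticks consumed by this thread
  echo "thread $comm: $ticks ticks"
done
```

A busy-looping thread shows a rapidly growing tick count; `top -H -p <pid>` gives the same view interactively.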
15:21 krink joined #gluster
15:23 Skinny joined #gluster
15:28 farhoriz_ joined #gluster
15:35 rafi joined #gluster
16:05 ndevos Asako: if you are on an older version of nfs-ganesha, it may be caused by a busy loop that handles the cache-invalidation
16:05 ndevos Asako: you can see if it goes away when you disable performance.cache-invalidation on the Gluster volumes and restart nfs-ganesha
16:06 MadPsy Asako, there was an issue with the upcall thread which caused that, try that ^^
16:06 ndevos Asako: if that works, you should look into upgrading nfs-ganesha to a more current version
16:06 ndevos yes, cache-invalidation is done by the upcall thread in nfs-ganesha
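ndevos' workaround boils down to two commands. The sketch below only prints them rather than executing them; the volume name "myvol" is a placeholder, and they would be run on a node in the trusted pool:

```shell
VOL=myvol   # placeholder volume name
# 1) turn off the option that triggers the upcall busy-loop
set_cmd="gluster volume set $VOL performance.cache-invalidation off"
# 2) restart ganesha so the change takes effect
restart_cmd="systemctl restart nfs-ganesha"
# Printed here instead of executed; run them by hand on a gluster node.
echo "$set_cmd"
echo "$restart_cmd"
```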
16:06 susant joined #gluster
16:07 johnmilton joined #gluster
16:07 Asako I'm using version 2.4.3
16:07 Asako which is what CentOS 7 provides
16:07 ndevos I'm not sure what version has the fix... a new version was released just this week and I should update the CentOS packages I guess
16:07 Asako features.cache-invalidation is on
16:08 Asako the docs tell you to enable it :D
16:08 ndevos right, that is the one, having it on is good, but it exposed a bug in nfs-ganesha
16:08 * ndevos signs off for the day, and will be having weekend now
16:08 Asako I wanted to test out the pnfs stuff
16:09 Asako see ya ndevos
16:09 ndevos Asako: send an email to the gluster-users@gluster.org list, some of the gluster + nfs-ganesha devs will surely help with your questions there
16:09 ndevos cya!
16:12 bulde joined #gluster
16:19 StormTide joined #gluster
16:26 StormTide hey guys, just had a weird failure while installing updates... 2 nodes 4 bricks with autoheal mtime... so i upgrade node1 (apt) and reboot the box... all the mounts happily continue to run on node2... no problem so far. Node 1 comes back online and i do a checkpoint for georep and let it complete. Status the volume and everything looks like its up fine. So i go to do the second node, upon apt install, gluster goes down on node2 and
16:26 StormTide the mount points crap out saying transport disconnected (they dont continue to run on node1 again)... i have both volfile servers in the mount setups.... so this should work... any clue whats going on here?
16:28 StormTide 3.9 series, with small file performance features enabled
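For reference, the fallback StormTide mentions is the backup-volfile-servers mount option; a sketch with placeholder hosts node1/node2 and volume gv0 (printed, not executed):

```shell
# backup-volfile-servers is only consulted while fetching the volfile at mount
# time; after that the FUSE client talks to all bricks directly, so an already
# running mount surviving a node outage depends on replication/quorum on the
# volume, not on this option.
mount_cmd="mount -t glusterfs -o backup-volfile-servers=node2 node1:/gv0 /mnt/gv0"
echo "$mount_cmd"
```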
16:29 Gambit15 joined #gluster
16:37 bulde joined #gluster
16:40 StormTide https://paste.fedoraproject.org/paste/tPq6aC1H7sd7j3V~OVqMeV5M1UNdIGYhyRLivL9gydE= is what the gluster mountpoints (on an app server) say about it in the gluster.log
16:40 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
16:56 bulde joined #gluster
17:33 ankitr joined #gluster
17:46 shyam joined #gluster
17:59 krink joined #gluster
18:12 rastar joined #gluster
18:37 plarsen joined #gluster
18:58 rafi joined #gluster
18:58 Asako and is it better to use fuse or nfs mounts?  I can't seem to find any docs on which is faster.
18:59 mallorn Quick question...  I have a system with IPs of 10.0.0.27 and 10.0.1.27.  I accidentally did a peer probe to both IPs, so now they both show up in the 'Other names' section under 'gluster peer status.'
18:59 jwd joined #gluster
19:00 mallorn Is there any way to remove gluster's knowledge of the second IP?  I tried a 'gluster peer detach', but it warns me that bricks will disappear.  I'm not sure if it's removing everything associated with that IP, or just that IP.
19:12 absolutejam joined #gluster
19:29 rafi joined #gluster
19:29 jiffin joined #gluster
19:30 gem joined #gluster
19:37 ankitr joined #gluster
19:38 JoeJulian mallorn: Yes there is a way, but it's completely unnecessary. The use of that data does not change function.
19:46 mallorn OK, thanks.  If that IP goes away it won't affect anything because the IP is only used in the volume definition?
19:46 JoeJulian right
19:46 JoeJulian Actually, wait
19:46 JoeJulian You should be using hostnames for volume definition.
19:46 mallorn Or from the gluster volume route.
19:47 JoeJulian You can use ips, but you're going to eventually decide that was a bad idea.
19:47 mallorn I am in most places, but this was a stupid mistake.  I was on the wrong system while trying to test something.   :/
19:47 JoeJulian The knowledge of what ips any one peer has doesn't affect the volume.
19:48 mallorn OK, thanks!
19:49 mallorn Yes, I will definitely use hostnames because our /24 is getting too small and we'll have to go /22 or something.
19:51 Asako just make sure the names resolve properly
19:51 Asako or put them in /etc/hosts
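Asako's advice can be scripted as a quick pre-flight check; getent consults /etc/hosts and DNS through the normal resolver order. "localhost" below is a stand-in for real peer hostnames:

```shell
# Verify that every peer hostname resolves before using it in brick definitions.
for h in localhost; do          # replace with your gluster peer names
  if getent hosts "$h" > /dev/null; then
    res=ok
  else
    res=missing
  fi
  echo "$h: $res"
done
```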
19:51 ankitr joined #gluster
20:07 Jules-_ joined #gluster
20:09 Karan joined #gluster
20:18 arisjr joined #gluster
20:37 squizzi joined #gluster
20:48 rafi joined #gluster
20:54 rafi joined #gluster
20:56 squizzi joined #gluster
20:57 Karan joined #gluster
21:09 Vapez joined #gluster
21:09 Vapez joined #gluster
21:50 shyam joined #gluster
22:36 primusinterpares joined #gluster
23:10 niknakpaddywak joined #gluster
23:16 shyam joined #gluster
23:19 jbrooks joined #gluster
23:26 Wizek_ joined #gluster
23:50 gem joined #gluster

| Channels | #gluster index | Today | | Search | Google Search | Plain-Text | summary