
IRC log for #gluster, 2017-05-02


All times shown according to UTC.

Time Nick Message
01:36 derjohn_mob joined #gluster
01:49 ilbot3 joined #gluster
01:49 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:58 gyadav joined #gluster
02:39 baber joined #gluster
03:16 vbellur joined #gluster
03:19 prasanth joined #gluster
03:23 nbalacha joined #gluster
03:29 atinm joined #gluster
03:32 atinm joined #gluster
03:41 itisravi joined #gluster
03:48 daMaestro joined #gluster
03:50 riyas joined #gluster
04:08 gyadav joined #gluster
04:29 Saravanakmr joined #gluster
04:40 ankitr joined #gluster
04:41 kdhananjay joined #gluster
04:45 buvanesh_kumar joined #gluster
04:57 skumar joined #gluster
05:09 aravindavk joined #gluster
05:14 ndarshan joined #gluster
05:19 Humble joined #gluster
05:34 rafi joined #gluster
05:40 sanoj joined #gluster
05:47 ndarshan joined #gluster
05:49 msvbhat joined #gluster
05:50 aravindavk joined #gluster
05:57 Karan joined #gluster
06:10 skumar joined #gluster
06:10 ndarshan joined #gluster
06:13 karthik_us joined #gluster
06:19 ayaz joined #gluster
06:28 percevalbot joined #gluster
06:29 aravindavk joined #gluster
06:32 sona joined #gluster
06:40 Prasad_ joined #gluster
06:47 itisravi joined #gluster
06:58 ankitr joined #gluster
07:00 nishanth joined #gluster
07:04 ivan_rossi joined #gluster
07:05 amarts joined #gluster
07:09 jiffin joined #gluster
07:09 skoduri joined #gluster
07:11 jkroon joined #gluster
07:16 [diablo] joined #gluster
07:20 ankitr joined #gluster
07:24 lanning joined #gluster
07:30 [diablo] joined #gluster
07:33 bartden joined #gluster
07:37 bartden Hi, i have a strangely behaving distributed cluster (4 bricks). I can create 2 files with the same name in the same folder. It only happens in certain folders. I see both files exist on two bricks. I recently added a brick and did a fix-layout afterwards. The file also contains special permission flags (-rw-rw---T+). Any idea how i can investigate this?
07:37 glusterbot bartden: (-rw-rw-'s karma is now -1
07:39 ndevos bartden: DHT has special "link-to" files that have permissions like ---------T, these empty files point (with a special xattr) to the right location/brick of the file
07:39 glusterbot ndevos: -------'s karma is now -11
07:40 ndevos bartden: could it be that the files you create start with ---------T permissions?
07:40 glusterbot ndevos: -------'s karma is now -12
07:41 bartden ndevos i create the files as root (for test) and they get permissions -rw-rw----
07:42 bartden we also use acl for setting rights to files
07:42 bartden it also happens if i copy an existing file to that specific location
07:43 bartden one file remains empty the other contains the data
07:43 mbukatov joined #gluster
07:43 bartden This is the ls output from the copied file
07:43 bartden -rw-r----T+  1 root    root       0 May  2 09:40 testfile
07:43 bartden -rw-r-----+  1 root    root      47 May  2 09:40 testfile
07:43 glusterbot bartden: -rw-r--'s karma is now -1
07:43 glusterbot bartden: -rw-r---'s karma is now -1
07:43 bartden as you can see the empty one contains the T flag, but i don’t know why it does that
07:44 FuzzyVeg joined #gluster
07:44 ndevos on the brick, the file with the T flag probably has a "link-to" xattr, pointing to the testfile on another brick
07:46 bartden well if i do cat testfile i don’t get any output … so it seems that it does not read the correct file
07:46 ndevos you could check that with: getfattr -m. -d /path/on/brick/to/file
07:46 ndevos oh, thats also weird...
07:46 bartden if i open it with vi twice, once i get the correct content and once i don't
07:47 ndevos I don't see any of the dht developers online atm, it is probably best to file a bug for this
07:47 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
07:47 _KaszpiR_ joined #gluster
07:48 ndevos report the bug against the "distribute" component, and give as many details as possible
07:48 bartden ok will do
07:48 bartden # file: cluster/ams03data02/platform/bluebee-data/LEXOGEN/testfile
07:48 bartden security.selinux="system_u:object_r:fusefs_t:s0"
07:48 bartden system.posix_acl_access=0sAgAAAAEABgD/////AgAHANcxhW8EAAcA/////wgAAwCJAwAAEAAEAP////8gAAAA/////w==
07:48 bartden trusted.SGI_ACL_FILE=0sAAAABgAAAAH/////AAYAAAAAAAJvhTHXAAcAAAAAAAT/////AAcAAAAAAAgAAAOJAAMAAAAAABD/////AAQAAAAAACD/////AAAAAA==
07:48 bartden output from getfattr
07:49 FuzzyVeg joined #gluster
07:55 ndevos bartden: execute that command on the gluster storage servers, directly on the bricks - the output will help troubleshooting/debugging
07:56 ndevos bartden: also run that command on the directory, and do that on all bricks that have the directory, it contains the dht-layout
07:56 ndevos bartden: but, run it as "getfattr -ehex -m. -d ...." instead
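[Editorial note: the `getfattr -ehex -m. -d` invocation above dumps all extended attributes of a file in hex. For readers without getfattr at hand, here is a rough, hypothetical Python equivalent using only the standard library; it is a sketch for inspecting files directly on a brick, not part of any Gluster tooling. Reading `trusted.*` xattrs normally requires root.]

```python
import os

def dump_xattrs(path):
    """Return {xattr_name: hex_value} for all xattrs on path,
    roughly mimicking `getfattr -ehex -m. -d path`."""
    attrs = {}
    for name in os.listxattr(path, follow_symlinks=False):
        value = os.getxattr(path, name, follow_symlinks=False)
        # -ehex prints values as 0x-prefixed hex strings
        attrs[name] = "0x" + value.hex()
    return attrs
```

Run against a brick-side copy of the file, the output should include `trusted.glusterfs.dht.linkto` on a real link-to file.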
07:57 jiffin bartden: what type of clients are you using?
07:57 bartden centos 6, gluster-fuse client i guess
07:58 bartden I put the output in the bug report?
07:58 bartden or do you already want to see it?
07:59 flying joined #gluster
08:00 bartden and a side note … its gluster 3.7
08:00 bartden 3.7.5 to be exact
08:01 ndevos I dont need to see the output, I do not think I can really help with it
08:01 ndevos others that understand about dht-layouts will need to have a look
08:01 Nebraskka joined #gluster
08:02 ndevos it is quite possible that there have been fixes for this or related things since 3.7.5 was released, you could have a look at the release notes, those contain the fixed bugs
08:02 bartden ok will do thx for assistance
08:02 ndevos https://github.com/gluster/glusterfs/tree/release-3.7/doc/release-notes/
08:02 glusterbot Title: glusterfs/doc/release-notes at release-3.7 · gluster/glusterfs · GitHub (at github.com)
08:03 ndevos note that 3.7 has been marked End-Of-Life, and will not receive any updates from us anymore, you really should consider upgrading to 3.8 or 3.10
08:07 pocketprotector joined #gluster
08:08 nbalacha bartden, I have seen behaviour like this when the posix acls are copied over to the linkto files (____T) files, causing the permissions to get messed up
08:08 Telsin joined #gluster
08:08 nbalacha and since that ends up on the hashed subvol, it ends up being treated as the linkto file which is why you end up reading the wrong file
08:09 nbalacha the bug was fixed a while ago
08:09 bartden could you give me a reference to this bug ?
08:09 bartden Question on a side note, is it “easy” to upgrade to the latest version?
08:10 nbalacha bartden, https://bugzilla.redhat.com/show_bug.cgi?id=1258377
08:10 glusterbot Bug 1258377: high, high, ---, bugs, CLOSED CURRENTRELEASE, ACL created on a dht.linkto file on a files that skipped rebalance
08:10 nbalacha bartden, this should show up when you run a rebalance, not a fix-layout though
08:10 nbalacha did you run a rebal at any point?
08:10 nbalacha bartden, easy in what sense?
08:11 nbalacha an upgrade will not fix this for the existing files
08:11 nbalacha they will need to be manually cleaned up
08:11 bartden nbalacha i only did a fix-layout
08:11 nbalacha then that is strange
08:11 nbalacha do you have a reproducible test case?
08:12 bartden on that specific location i can still create duplicate files, or what do you mean?
08:12 nbalacha yes, on that location , if you create a new file, do you still see this?
08:12 bartden yes
08:13 nbalacha ok, can you please send me a tcp dump from the mount point when creating a new file
08:14 nbalacha as well as debug logs for the client?
08:14 derjohn_mob joined #gluster
08:16 legreffier joined #gluster
08:16 bartden sure, but since this is a live env … i’ll have to schedule it
08:17 vieira joined #gluster
08:18 vieira Hello someone here ?
08:19 nbalacha bartden, no problem
08:19 nbalacha bartden, and you would need to clean those files up
08:19 nbalacha or you could have writes going to those instead of the actual files
08:20 vieira can i monitoring glusterfs on ubuntu with nagios ?
08:24 bartden nbalacha any solution on how i can find all those files ?
08:29 nbalacha bartden, as ndevos said, look for files with the trusted.glusterfs.dht.linkto xattr
08:29 bartden ok, but search on the bricks then and not on the client?
08:29 nbalacha you would need to double check if those are actually the linkto files (0 byte)
08:29 nbalacha as an incomplete rebalance could have left the actual data files with that xattr as well
08:31 nbalacha bartden, yes, search on the bricks
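[Editorial note: combining nbalacha's and ndevos's advice, a cleanup candidate on a brick is a zero-byte regular file with the sticky bit set (the trailing "T" in ls) that carries the trusted.glusterfs.dht.linkto xattr. The sketch below is a hypothetical helper, not an official tool; run it as root directly on each brick, and double-check every hit before deleting anything, since an incomplete rebalance can leave the xattr on real data files too.]

```python
import os
import stat

LINKTO_XATTR = "trusted.glusterfs.dht.linkto"

def looks_like_linkto(path, require_xattr=True):
    """Heuristic: does this brick-side file look like a DHT link-to file?"""
    st = os.lstat(path)
    if not stat.S_ISREG(st.st_mode) or st.st_size != 0:
        return False                      # link-to files are empty regular files
    if not (st.st_mode & stat.S_ISVTX):   # sticky bit -> the "T" flag in ls
        return False
    if not require_xattr:
        return True
    try:
        return LINKTO_XATTR in os.listxattr(path, follow_symlinks=False)
    except OSError:
        return False                      # filesystem without xattr support

def find_linkto_candidates(brick_root):
    """Walk a brick and yield paths that look like DHT link-to files."""
    for dirpath, _, filenames in os.walk(brick_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if looks_like_linkto(path):
                yield path
```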
08:32 bartden ok thx
08:32 nbalacha bartden, np
08:32 nbalacha bartden, what is the exact version you are using?
08:33 bartden glusterfs-server-3.7.5-1.el6
08:35 nbalacha then it looks like we have not fixed it completely - this should have been fixed in 3.7.5
08:35 nbalacha are all your servers running the same version?
08:36 vieira can i monitoring glusterfs on ubuntu with nagios or other service?
08:38 bartden nbalacha yes
08:39 kotreshhr joined #gluster
08:39 bartden nbalacha i have the client debug log and the tcpdump
08:40 bartden how do i share them with you?
08:46 vieira join rackerhacker
08:47 vieira JOIN
08:55 rastar joined #gluster
08:59 ivan_rossi left #gluster
09:00 MrAbaddon joined #gluster
09:02 karthik_us joined #gluster
09:25 mdavidson joined #gluster
09:29 jarbod__ Hi, we have random crashes of glusterfsd (maybe 1 crash every 2 days). One of my colleagues checked the coredump with gdb and may have found something about inodes: https://paste.fedoraproject.org/paste/Gd2OL9ubdibhAxH-6wKzkV5M1UNdIGYhyRLivL9gydE= Are you aware of this kind of problem? Do you think the patch is harmless? (something "similar" was made in https://github.com/gluster/glusterfs/commit/c9239db7961afd648f1fa3310e5ce9b8281c8ad2)
09:29 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
09:29 jarbod__ context: we have 1 volume, 2 nodes, 1 brick each (replicated), glusterfs 3.7.20, centos 6.7
09:32 Vaizki_ left #gluster
09:43 Acinonyx joined #gluster
09:49 sona joined #gluster
09:57 AppStore joined #gluster
09:58 ivan_rossi joined #gluster
10:08 skoduri joined #gluster
10:10 sona joined #gluster
10:19 vinurs joined #gluster
10:22 ppai joined #gluster
10:23 ankitr joined #gluster
10:23 testa1 joined #gluster
10:27 Acinonyx joined #gluster
10:38 Wizek_ joined #gluster
11:06 MrAbaddon joined #gluster
11:08 chawlanikhil24 joined #gluster
11:14 skoduri joined #gluster
11:16 shyam joined #gluster
11:30 MrAbaddon joined #gluster
11:48 karthik_us joined #gluster
11:58 baber joined #gluster
12:01 daniel joined #gluster
12:02 skoduri joined #gluster
12:02 Guest36645 hi, does someone have an idea how to make ganesha-ha with gluster working? tried to follow the instructions here (http://blog.gluster.org/2015/10/linux-scale-out-nfsv4-using-nfs-ganesha-and-glusterfs-one-step-at-a-time/) but it fails on centos7. thanks in advance
12:06 jiffin Guest36645: can u check http://gluster.readthedocs.io/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Integration/
12:06 glusterbot Title: Configuring NFS-Ganesha server - Gluster Docs (at gluster.readthedocs.io)
12:06 jiffin which is updated one
12:08 Guest36645 @jiffin: thanks, I will try that.
12:08 chawlanikhil24 joined #gluster
12:14 ppai joined #gluster
12:18 Guest36645 @jiffin: "gluster nfs-ganesha enable" says it will take a few minutes to complete, but it immediately finishes. that does not seem right
12:19 msvbhat joined #gluster
12:23 _KaszpiR_ joined #gluster
12:24 itisravi_ joined #gluster
12:25 Guest36645 does one need to configure pacemaker and corosync manually?
12:28 MrAbaddon joined #gluster
12:29 cloph only need that stuff if you want HA setup
12:29 Guest36645 @cloph: yeah, that is exactly what I want
12:30 Guest36645 but apparently it is not mentioned in the documentation!?
12:31 ment0s joined #gluster
12:31 ment0s Hi
12:31 glusterbot ment0s: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
12:32 ment0s Is it wise to run ISCSI > LVM > XFS > GlusterFS in a production environment? I don't see many mentions of iscsi with glusterfs; I was wondering if there is a reason for that?
12:33 cloph https://gluster.readthedocs.io/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Integration/#step-by-step-guide on HA lists "Install Pacemaker and Corosync on all machines." in pre-requisite section...
12:33 glusterbot Title: Configuring NFS-Ganesha server - Gluster Docs (at gluster.readthedocs.io)
12:33 Guest36645 @cloph: both packages are installed, but not configured
12:37 kotreshhr left #gluster
12:40 itisravi_ joined #gluster
12:45 cholcombe joined #gluster
12:49 jbrooks joined #gluster
12:56 nbalacha joined #gluster
13:03 jiffin1 joined #gluster
13:04 buvanesh_kumar joined #gluster
13:04 MrAbaddon joined #gluster
13:07 nbalacha joined #gluster
13:18 Guest36645 when performing gluster nfs-ganesha enable, I get the error message "nfs-ganesha: failed: Failed to set up HA config for NFS-Ganesha. Please check the log file for details"
13:19 Guest36645 the glusterd.log says: [2017-05-02 13:17:07.626303] E [MSGID: 106123] [glusterd-syncop.c:1451:gd_commit_op_phase] 0-management: Commit of operation 'Volume  (null)' failed on localhost : Failed to set up HA config for NFS-Ganesha. Please check the log file for details
13:35 tliff joined #gluster
13:39 tliff I have an issue where the client under heavy write load disconnects from my server because it "has not responded in the last 42 seconds, disconnecting."
13:40 tliff I tcpdumped the issue and i can see a Gluster Dump packet going to the server and also being answered like 3ms later. as far as i understand it this should be considered a successful ping.
13:43 skylar joined #gluster
13:44 riyas joined #gluster
13:50 sona joined #gluster
13:54 baber joined #gluster
13:56 jiffin1 joined #gluster
14:10 Karan joined #gluster
14:21 Guest36645 gluster nfs-ganesha enable fails without helpful information for me. does anyone know which logfiles can be helpful? /var/log/glusterfs/glusterd.log, /var/log/messages (grep for pcs commands) and /var/log/pcsd/pcsd.log did not do it...
14:23 chawlanikhil24 joined #gluster
14:31 ankitr joined #gluster
14:35 kpease joined #gluster
14:38 gem joined #gluster
14:42 pioto_ joined #gluster
14:45 tom[] joined #gluster
14:46 atrius joined #gluster
14:46 papna joined #gluster
14:46 Ramereth joined #gluster
14:47 nobody481 joined #gluster
14:48 serg_k joined #gluster
14:48 jbrooks joined #gluster
14:49 scuttle|afk joined #gluster
14:49 telius joined #gluster
14:49 farhorizon joined #gluster
14:52 mlg9000 joined #gluster
14:52 kkeithley Guest36645: /var/log/ganesha.log and /var/log/ganesha-gfapi.log
14:53 cvstealth joined #gluster
14:53 kkeithley Guest36645: also /var/log/cluster/corosync.log
14:53 kkeithley and /var/log/pacemaker.log
14:54 icey joined #gluster
14:54 Nebraskka joined #gluster
14:55 JoeJulian jarbod__: lgtm, but I'm not one of the devs. I would file a bug report with your crash log, then submit your patch to gerrit for review.
14:55 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
14:55 JoeJulian ~hack | jarbod__
14:55 glusterbot jarbod__: The Simplified Development Workflow is at https://gluster.readthedocs.io/en/latest/Developer-guide/Simplified-Development-Workflow/
14:56 Intensity joined #gluster
14:57 Guest36645 @kkeithley: thanks. unfortunately, there is also nothing new there...
14:57 major joined #gluster
14:57 Guest36645 ganesha-gfapi.log does not exist
14:58 Humble joined #gluster
14:59 kkeithley /var/log/messages also has some logging from /usr/libexec/ganesha/ganesha-ha.sh (exec'd by glusterd to do the HA setup)
15:00 JoeJulian ment0s: No reason, just not a common use case. Typically gluster is used with attached commodity storage. Mounting iscsi drives just adds more latency than most are willing to accept. Viability depends on your use case and budget.
15:01 Guest36645 puzzling. I believe I followed the installation instructions word for word, but the setup still fails
15:01 JoeJulian tliff: Seems to happen to folks occasionally, and I've yet to see a commonality to cause it. There is, however, an effort to combat the problem: https://review.gluster.org/17105
15:01 glusterbot Title: Gerrit Code Review (at review.gluster.org)
15:02 kkeithley Guest36645: yes, unfortunately it has been my experience that the pacemaker+corosync set up can be finicky.
15:03 Guest36645 the documentation reads as if ganesha-enable should take care of configuration there!?
15:04 baber joined #gluster
15:04 kkeithley which version are you using?  glustin_ wrote a revised setup. Maybe he can point you at it.
15:05 jiffin joined #gluster
15:05 kkeithley yes, through 3.10 the `gluster volume set $vol ganesha-enable on` adds the export block to the ganesha.conf and makes ganesha export the volme
15:05 kkeithley volume
15:05 wushudoin joined #gluster
15:06 Guest36645 centos7, gluster 3.10.1
15:08 papna joined #gluster
15:08 Guest36645 `gluster vol set kvm ganesha.enable on` gives: volume set: failed: The option nfs-ganesha should be enabled before setting ganesha.enable.
15:10 kkeithley you need to do `gluster nfs-ganesha enable` first to set up the HA and start the ganesha.nfsd on each node.  then do the `gluster vol set kvm ganesha.enable on` to export  kvm
15:12 Guest36645 that's the thing. gluster nfs-ganesha enable fails, no matter what I try...
15:13 kkeithley please paste your /etc/ganesha/ganesha-ha.conf
15:15 kkeithley use fpaste or termbin
15:15 xiu joined #gluster
15:16 [o__o] joined #gluster
15:16 jackhill joined #gluster
15:17 Guest36645 @kkeithley: https://pastebin.com/raw/wwhg6cvZ
15:19 Guest36645 according to (https://gluster.readthedocs.io/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Integration/), the file is /run/gluster/shared_storage/nfs-ganesha/ganesha-ha.conf
15:19 glusterbot Title: Configuring NFS-Ganesha server - Gluster Docs (at gluster.readthedocs.io)
15:21 kkeithley yes, same file
15:21 Guest36645 i dont have /etc/ganesha/ganesha-ha.conf
15:21 kkeithley paste the one you've got.
15:22 Guest36645 /run/gluster/shared_storage/nfs-ganesha/ganesha-ha.conf: https://pastebin.com/raw/wwhg6cvZ
15:22 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
15:23 kkeithley hmmm..   maybe try putting it in /etc/ganesha and let the setup move it and create its symlink from /etc/ganesha to /run/gluster/shared_storage/ganesha/
15:26 nbalacha joined #gluster
15:27 farhorizon joined #gluster
15:28 Guest36645 having the file in both directories did the trick
15:28 Guest36645 thank you, @kkeithley!
15:29 kkeithley yw
15:30 atinm joined #gluster
15:32 vinurs joined #gluster
15:43 rastar joined #gluster
15:47 portdirect joined #gluster
15:53 skoduri joined #gluster
15:58 riyas joined #gluster
16:22 kpease joined #gluster
16:25 Gambit15 joined #gluster
16:41 kpease joined #gluster
16:42 farhoriz_ joined #gluster
16:59 farhorizon joined #gluster
17:01 ankitr joined #gluster
17:03 baber joined #gluster
17:06 ankitr joined #gluster
17:12 tliff JoeJulian: interesting, thanks for the link
17:22 arpu joined #gluster
17:23 Miner joined #gluster
17:24 Miner Hi, i have a volume which is started and doing lots of IOs, can enable cluster.granular-entry-heal without issues ?
17:27 jiffin joined #gluster
17:29 _KaszpiR_ joined #gluster
17:30 cholcombe joined #gluster
17:31 farhorizon joined #gluster
17:45 _KaszpiR_ joined #gluster
17:52 cholcombe joined #gluster
18:07 skylar1 joined #gluster
18:16 vbellur joined #gluster
18:19 jiffin joined #gluster
18:40 ment0s joined #gluster
18:41 kpease_ joined #gluster
18:45 ment0s thanks JoeJulian that helps
18:58 sona joined #gluster
18:58 derjohn_mob joined #gluster
19:01 kpease joined #gluster
19:16 baber joined #gluster
19:18 snehring joined #gluster
19:30 vbellur joined #gluster
19:58 vbellur joined #gluster
19:59 shyam joined #gluster
20:19 bartden_ joined #gluster
20:19 vbellur joined #gluster
20:39 hevisko joined #gluster
20:40 hevisko What is the “preferred” development platform of GlusterFS?
20:41 hevisko I’m deploying services on Debian/Ubuntu, but am running into various troubles that make it look like I should rather use a CentOS/Fedora based system?
20:41 JoeJulian master with backports for bug fixes.
20:42 JoeJulian Are you using the ,,(ppa)?
20:42 glusterbot The GlusterFS Community packages for Ubuntu are available at: 3.8: https://goo.gl/MOtQs9, 3.10: https://goo.gl/15BCcp
20:43 hevisko I’ve been using the https://download.gluster.org/pub/gluster/ for Debian,
20:43 glusterbot Title: Index of /pub/gluster (at download.gluster.org)
20:43 JoeJulian Ok
20:43 JoeJulian Should be just as supported as CentOS/Fedora.
20:44 JoeJulian What kind of troubles?
20:45 hevisko May 02 21:39:17 ds2 systemd[1885]: Failed at step EXEC spawning /usr/libexec/ganesha/nfs-ganesha-config.sh: No such file or directory
20:45 hevisko and the docs refers to /usr/libexec, while the scripts appears to be in /usr/libexec-x64
20:45 * JoeJulian grumbles
20:46 JoeJulian Would you please file a bug report for that. That shouldn't happen.
20:46 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
20:47 hevisko @JoeJulian like this one: https://github.com/nfs-ganesha/nfs-ganesha-debian/issues/1 ?
20:48 JoeJulian Heh, I didn't notice it was ganesha. I just saw /usr/libexec and knew that was the problem.
20:49 hevisko the ganesha/glusterfs seems to be very integrated, and I’m not quite sure where the boundaries are ;(
20:50 farhorizon joined #gluster
20:50 JoeJulian I'm mostly sure that shell script is part of gluster. I could look but I'm in the middle of beating up Jenkins here so it'll pass my damned PRs ($dayjob).
20:50 major I like to blame Jenkins too ;)
20:51 hevisko I have a baseball bat??
20:52 JoeJulian One of my coworkers was arguing that he's a butler and should have gotten my lunch for me so I would have been able to stay at my desk. I argued that this guy's actually just the towel guy in fancy restrooms. https://upload.wikimedia.org/wikipedia/commons/thumb/e/e3/Jenkins_logo_with_title.svg/1032px-Jenkins_logo_with_title.svg.png
20:52 hevisko but then perhaps the more “correct” question: what is the preferred/“supported” HA NFS setup on top of GlusterFS?
20:52 JoeJulian corosync+pacemaker+ganesha-nfs
20:52 hevisko And CTDB?
20:53 major damn .. I read that as peacemaker ...
20:53 JoeJulian never used it, but I've heard of others doing so.
20:53 JoeJulian hehe
20:53 major suddenly wondering what a colt has to do with HA outside of testing it
20:53 hevisko @major I’m needing some peacemakers here ;(
20:53 major "Take that server.."
20:54 JoeJulian The colt 45 version of STONITH.
20:54 major I support this endevour..
20:54 JoeJulian Now I want to set up a HA Raspberri Pi cluster and literally set up STONITH with servos.
20:55 JoeJulian Would make a great youtube video.
20:55 major viral video sponsored by Redhat? ;)
20:55 JoeJulian hehe
20:55 JoeJulian and Colt
20:56 baber joined #gluster
20:56 hevisko I can just imagine when both servos pulls the other’s power plugs O_o
20:56 JoeJulian No, no.. I'm saying literally shoot the other one.
20:56 major yah ..
20:57 guhcampos joined #gluster
20:57 major wonder if could find a Schofield 45 LC
20:57 major servo motor with a cam would easily serve to pull the trigger
20:57 major JoeJulian, you need to stop putting ideas into my head
20:58 JoeJulian lol
20:59 hevisko but then you should do it properly (in case of misfiring pins): https://en.wikipedia.org/wiki/Arsenal_Firearms_AF2011A1
20:59 glusterbot Title: Arsenal Firearms AF2011A1 - Wikipedia (at en.wikipedia.org)
20:59 hevisko https://www.youtube.com/watch?v=3tEYcUSQDyw
21:01 major my brain is reeling
21:02 hevisko Back to my debian “fun-and-games” (instead of working solutions) So given the fun I’m having  in making glusterfs + nfs-ganesha work on Debian, should I rather switch to Ubuntu? perhaps Centos? for getting a stable GlusterFS + NFS + HA setup?
21:14 JoeJulian If it was me, CentOS.
21:15 major I haven't run into too many real issues with any of the distros so far... but then .. I sort of expect to break everything
21:16 major a lot more satisfying to destroy a "stable" release vs an unstable one ;)
21:16 hevisko @major … and I thought *I* was the guy that breaks any and everything just by installing it ;(
21:17 major I am inclined to go do dumb things like hijack functions in the running kernel ;)
21:17 major s/dumb/fun/
21:17 major actually .. it works either way
21:17 glusterbot major: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
21:17 major @glusterbot .. you have issues
21:18 hevisko I’m just trying to get an installation with a replica 3 arbiter 1 setup going… and that keeps breaking on me ;( I’ve set up Ansible to kill and recreate the Debian VMs, but am still looking for a decent set of how-to instructions
21:18 major hmm
21:18 major I haven't tried that one on Debian yet...
21:19 major did it on Ubuntu/CentOS
21:19 hevisko Okay, perhaps I’ll have to retry on Ubuntu then, and after that Centos, or which one you’ve found the most boring to install the HA -nfs?
21:20 major it has all been pretty cut and dry for me really
21:20 major well .. compared to hacking on the code
21:21 major I dunno if I am qualified to state which one is easiest ... I am inclined to build my own packages and fire up gdb on the running binaries
21:21 JoeJulian Since most of the devs are Red Hat employees, if there is a bug (like the libexec path) it's going to favor RHEL based distros.
21:24 arpu joined #gluster
21:26 major I think I would use Debian or something if I worked at RH as a little act of rebellion ;)
21:27 hevisko kay, I’ll retry tomorrow then on Ubuntu/Centos…
21:28 hevisko Was looking for some cookie-cutter examples for HA NFS which doesn’t seem to exist yet
21:29 major thought Ganesha was already fairly HA
21:30 major or you saying more like a gluster-nfs-ha virtual package that you can point at and then just configure?
21:30 hevisko Well… that would’ve been first prize :)
21:30 major heh
21:32 hevisko ganesha “appears” to be HA, but integrations from corosync/pacemaker or CTDB are needed to make it do the right things w.r.t. active-active failovers
21:34 hevisko and then you have that “gluster nfs-ganesha enable” that throws its own set of “strange” errors O_o
21:35 major ..
21:35 major odd
21:36 hevisko another “fun” part that I’ve found, is that to “script” things with Ansible, I needed tests to check, for example, when the “gluster volume set all shared_cluster_storage” hook script has finished before doing the next step (like adding fstab entries), as it does things asynchronously and doesn’t provide a simple check to confirm the asynchronous operation(s) have finished
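[Editorial note: the complaint above, that the shared-storage hook runs asynchronously, is usually worked around by polling for the observable result. A generic sketch, with the mount path /run/gluster/shared_storage taken from the docs quoted earlier in this log and the timeouts purely illustrative:]

```python
import os
import time

def wait_for(predicate, timeout=60.0, interval=1.0):
    """Poll predicate() until it returns True or timeout expires.

    Returns True if the predicate became true within the timeout,
    False otherwise."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# Example: block until the shared-storage hook has mounted the volume,
# before adding fstab entries or exports that depend on it.
# wait_for(lambda: os.path.ismount("/run/gluster/shared_storage"),
#          timeout=120)
```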
21:37 hevisko but that is perhaps from me not having found the right manual set to configure from :(=)
21:41 major is those pesky magic incantations ...
21:42 major I can't recall if it was swish and flick or point and click....
21:42 JoeJulian That's another reason I prefer salt.
21:43 hevisko I just have the one client that got me stuck on Ansible…. will try Salt another round…. but first need to get the FreeBSD + HAST replaced with a decent HA-NFS solution that’s stable and Active-active
21:44 hevisko @major: nfs-ganesha: failed: creation of symlink ganesha.conf in /etc/ganesha failed
21:45 cholcombe joined #gluster
21:45 hevisko and strace hasn’t yet revealed the specific place that’s trying /etc/ganesha/ganesha.conf
21:51 JoeJulian I assume, from that error, that /etc/ganesha doesn't exist.
21:53 shyam joined #gluster
21:53 major soo .. half the bolts on the top of both of these engines were loose .. and a few were long enough that they were not possible to tighten down (washer was loose under the heads)
21:53 major should clarify .. under the bolt head ..
21:53 JoeJulian oh my
21:54 major totally explains one problem
21:54 major I am also fairly certain that the oil cooler is plumbed post-pump as opposed to post-water-intake ..
21:55 major generally thought engine engineers universally placed the oil cooler pre-pump on the intake side as negative pressure on the oil-cooler is not inclined to put water into engines/transmissions
21:57 major like .. could be just me ..
21:58 shyam joined #gluster
22:38 cholcombe joined #gluster
22:40 nathwill joined #gluster
22:43 Acinonyx joined #gluster
22:57 shyam joined #gluster
23:04 shyam joined #gluster
23:32 farhorizon joined #gluster
23:38 shyam joined #gluster
23:45 cholcombe joined #gluster
