
IRC log for #gluster, 2016-11-29


All times shown according to UTC.

Time Nick Message
00:09 d0nn1e joined #gluster
00:21 Wizek joined #gluster
00:50 p7mo_ joined #gluster
00:50 shaunm joined #gluster
00:51 RameshN joined #gluster
00:57 farhorizon joined #gluster
01:00 farhorizon joined #gluster
01:03 shdeng joined #gluster
01:03 haomaiwang joined #gluster
01:04 atrius joined #gluster
01:20 theron joined #gluster
02:14 virusuy joined #gluster
02:26 masber joined #gluster
02:26 masber joined #gluster
02:40 haomaiwang joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:57 Lee1092 joined #gluster
03:08 ashiq joined #gluster
03:14 riyas joined #gluster
03:19 kramdoss_ joined #gluster
03:30 farhorizon joined #gluster
03:34 magrawal joined #gluster
03:36 haomaiwang joined #gluster
03:42 atinm joined #gluster
03:47 nbalacha joined #gluster
04:07 itisravi joined #gluster
04:08 buvanesh_kumar joined #gluster
04:14 RameshN joined #gluster
04:15 satya4ever joined #gluster
04:15 ppai joined #gluster
04:30 farhorizon joined #gluster
04:31 Prasad joined #gluster
04:36 jiffin joined #gluster
04:42 Shu6h3ndu joined #gluster
04:46 jkroon joined #gluster
04:53 skoduri joined #gluster
04:55 rafi joined #gluster
05:10 ndarshan joined #gluster
05:11 kotreshhr joined #gluster
05:12 buvanesh_kumar joined #gluster
05:15 ankitraj joined #gluster
05:16 dnorman joined #gluster
05:19 hchiramm joined #gluster
05:19 karthik_us joined #gluster
05:23 kramdoss_ joined #gluster
05:26 prth joined #gluster
05:27 haomaiwang joined #gluster
05:32 farhorizon joined #gluster
05:37 Karan joined #gluster
05:39 aravindavk joined #gluster
05:40 riyas joined #gluster
05:41 Saravanakmr joined #gluster
05:49 rastar joined #gluster
05:54 armyriad joined #gluster
05:56 hgowtham joined #gluster
05:59 poornima_ joined #gluster
06:01 ankitraj joined #gluster
06:02 theron joined #gluster
06:06 dnorman joined #gluster
06:16 nishanth joined #gluster
06:20 haomaiwang joined #gluster
06:20 kdhananjay joined #gluster
06:22 Muthu joined #gluster
06:22 msvbhat joined #gluster
06:47 dnorman joined #gluster
06:48 jkroon joined #gluster
06:52 k4n0 joined #gluster
06:59 [diablo] joined #gluster
07:05 masber joined #gluster
07:14 susant joined #gluster
07:17 satya4ever joined #gluster
07:20 mhulsman joined #gluster
07:24 apandey joined #gluster
07:25 kramdoss_ joined #gluster
07:26 dnorman joined #gluster
07:28 Philambdo joined #gluster
07:33 farhorizon joined #gluster
07:35 jtux joined #gluster
07:41 k4n0 joined #gluster
07:47 msvbhat joined #gluster
07:51 hackman joined #gluster
08:04 Ryllise joined #gluster
08:04 Ryllise Hey, Does anyone know of any companies that do consultancy or support work for Gluster in Australia?
08:19 sanoj joined #gluster
08:21 nbalacha joined #gluster
08:22 abyss^ joined #gluster
08:28 fsimonce joined #gluster
08:31 haomaiwang joined #gluster
08:32 devyani7 joined #gluster
08:32 devyani7 joined #gluster
08:34 jri joined #gluster
08:34 farhorizon joined #gluster
08:42 derjohn_mob joined #gluster
08:46 karthik_us joined #gluster
08:48 atinm joined #gluster
08:50 joef101 joined #gluster
08:59 k4n0 joined #gluster
09:03 jiffin1 joined #gluster
09:06 atinm joined #gluster
09:12 ahino joined #gluster
09:12 panina joined #gluster
09:14 haomaiwang joined #gluster
09:14 apandey joined #gluster
09:18 ndevos Ryllise: except for Red Hat, I've not heard of anyone, and https://www.gluster.org/consultants/ doesn't list any either :-/
09:18 glusterbot Title: Professional Support — Gluster (at www.gluster.org)
09:19 skoduri joined #gluster
09:28 kramdoss_ joined #gluster
09:35 farhorizon joined #gluster
09:39 joef101 Hi All, I'm trying to recover from a directory split-brain in production. I'm using a two-node cluster and I've compared gfids. Could someone please assist me :)
09:41 Sebbo2 joined #gluster
09:43 joef101 I've deleted the non-matching gfid link in the .glusterfs directory and triggered self-heal, but that particular directory is still in split-brain state.
09:47 Slashman joined #gluster
09:48 flying joined #gluster
09:48 ndevos joef101: you need to delete the file *and* the gfid-symlink, http://gluster.readthedocs.io/en/latest/Troubleshooting/split-brain/#fixing-directory-entry-split-brain should explain the details
09:48 glusterbot Title: Split Brain (Manual) - Gluster Docs (at gluster.readthedocs.io)
09:54 joef101 <ndevos> Please see screenshot http://imgur.com/a/eItT2
09:54 glusterbot Title: Imgur: The most awesome images on the Internet (at imgur.com)
09:56 joef101 Brick2 on node2 has an unmatched gfid and it has 0 files. I've deleted the gfid link inside .glusterfs/1e/a6/<gfid> and triggered self-heal
09:56 post-factum joined #gluster
09:58 ndevos joef101: I'm not too much into split-brain resolving, maybe kdhananjay or someone else that knows more about it can give some more details
09:59 ndevos joef101: also, does the directory itself not cause a split-brain? maybe the directory has different gfids on different bricks too?
10:01 jiffin1 joined #gluster
10:08 joef101 <ndevos> Not 100% sure; the directory that is in split-brain is currently inaccessible via the mount point
10:14 haomaiwang joined #gluster
10:19 atinm joined #gluster
10:24 itisravi joined #gluster
10:24 jtux joined #gluster
10:28 ndevos joef101: yeah, contents in split-brain are not accessible through Gluster, you really need to check the gfid of the directory on the bricks
10:29 joef101 <ndevos> i've already compared gfid's of the directory which is in split-brain. Please see http://imgur.com/a/eItT2
10:29 glusterbot Title: Imgur: The most awesome images on the Internet (at imgur.com)
10:30 joef101 <ndevos> I've managed to recover from directory split-brain before by deleting the gfid link in the .glusterfs directory, but this time it didn't work.
10:31 ndevos joef101: that doesn't work if the directory has different GFIDs, you have to delete the directory *and* the symlink
10:32 joef101 <ndevos>, so i need to delete </gluster brick path>/directory and corresponding gfid link?
10:33 ndevos joef101: yes, that is my understanding
10:35 joef101 <ndevos> Thanks, the documentation only mentions deleting files, not directories.
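
A minimal shell sketch of the procedure ndevos describes, for readers following along. The brick path, directory name, volume name (gv0) and gfid below are placeholders, not joef101's actual values; only the 1e/a6 prefix comes from the conversation above:

    # On the brick whose copy you are discarding, confirm the directory's gfid first:
    getfattr -d -m . -e hex /data/brick2/gv0/shared/dir    # look at trusted.gfid
    # Remove the directory itself *and* its gfid symlink under .glusterfs
    # (the first two byte-pairs of the gfid select the two sub-directories):
    rm -rf /data/brick2/gv0/shared/dir
    rm -f  /data/brick2/gv0/.glusterfs/1e/a6/1ea6xxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    # Then trigger a heal so the surviving copy is re-created on this brick:
    gluster volume heal gv0 full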
10:35 farhorizon joined #gluster
10:36 skoduri joined #gluster
10:46 pcdummy joined #gluster
10:46 pcdummy joined #gluster
10:52 derjohn_mob joined #gluster
10:55 jtux joined #gluster
11:06 atinm joined #gluster
11:09 itisravi_ joined #gluster
11:14 haomaiwang joined #gluster
11:16 jtux joined #gluster
11:33 haomaiwang joined #gluster
11:36 atinm joined #gluster
11:37 k4n0 joined #gluster
11:40 msvbhat joined #gluster
11:47 bfoster joined #gluster
11:47 rastar joined #gluster
11:52 Sebbo3 joined #gluster
11:53 Saravanakmr joined #gluster
11:54 bfoster joined #gluster
12:02 Sebbo1 joined #gluster
12:05 farhorizon joined #gluster
12:12 Saravanakmr joined #gluster
12:21 derjohn_mob joined #gluster
12:30 kotreshhr left #gluster
12:32 ira joined #gluster
12:40 Saravanakmr joined #gluster
12:45 primehaxor joined #gluster
12:46 Gnomethrower joined #gluster
12:50 Gambit15 joined #gluster
12:51 Gnomethrower joined #gluster
12:52 mhulsman joined #gluster
12:56 TvL2386 joined #gluster
12:56 derjohn_mob joined #gluster
13:00 Saravanakmr joined #gluster
13:01 johnmilton joined #gluster
13:21 Saravanakmr left #gluster
13:29 B21956 joined #gluster
13:32 jiffin1 joined #gluster
13:33 Ryllise Anyone do any consulting in Australia for Gluster?
13:37 theron joined #gluster
13:41 msvbhat joined #gluster
13:49 atinm joined #gluster
13:54 unclemarc joined #gluster
13:56 k4n0_afk joined #gluster
13:57 k4n0 joined #gluster
14:02 derjohn_mob joined #gluster
14:07 ndevos Ryllise: I don't know, but I'll pass the question on to some folks in Australia that know Gluster
14:07 jiffin joined #gluster
14:08 ndevos Ryllise: you can also send an email to the gluster-users@gluster.org list, maybe someone cares to share experiences with some consultancy companies (I assume those have some Gluster expertise too)
14:08 satya4ever joined #gluster
14:08 Ryllise thanks ndevos
14:08 jkroon joined #gluster
14:10 aravindavk joined #gluster
14:15 Gnomethrower joined #gluster
14:16 haomaiwang joined #gluster
14:23 skoduri joined #gluster
14:30 haomaiwang joined #gluster
14:32 ankitraj joined #gluster
14:36 skylar joined #gluster
14:38 hackman joined #gluster
14:42 flyingX joined #gluster
14:54 bluenemo joined #gluster
14:56 d0nn1e joined #gluster
14:56 plarsen joined #gluster
14:56 squizzi joined #gluster
15:00 farhorizon joined #gluster
15:00 haomaiwang joined #gluster
15:00 Muthu joined #gluster
15:00 B21956 joined #gluster
15:02 Ashutto joined #gluster
15:02 Ashutto Hello
15:02 glusterbot Ashutto: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
15:02 Ashutto read-only
15:04 Ashutto ok. I have a problem... I mounted a volume using the native client, adding the backup volume server. Every now and then it becomes read-only and I have to manually unmount/remount it
15:04 panina joined #gluster
15:06 Ashutto I have quorum (cluster.quorum-type=auto, cluster.server-quorum-type=server, cluster.server-quorum-ratio=51%), with 3 nodes using a replica 3 volume
15:07 Ashutto I found a similar issue in the bug database using an old version of glusterfs (https://bugzilla.redhat.com/show_bug.cgi?id=1064007)
15:07 glusterbot Bug 1064007: medium, unspecified, ---, rhs-bugs, CLOSED EOL, [RHEV-RHS] gluster fuse mount remains read-only filesystem, after disabling client-side quorum after it is not met
15:08 Ashutto I have a "gluster peer status" issued on every server every 60 seconds, and at the time the problem occurred, all nodes were healthy
15:10 Ashutto the same volume, mounted on different servers, remains RW as expected
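
A hedged sketch of how those quorum settings can be inspected and set from the CLI on recent releases; "gv0" and the log path stand in for Ashutto's actual volume and mount point:

    gluster volume get gv0 cluster.quorum-type           # expected: auto
    gluster volume get gv0 cluster.server-quorum-type    # expected: server
    # server-quorum-ratio is a cluster-wide option, set against "all":
    gluster volume set all cluster.server-quorum-ratio 51%
    # the client log for the affected mount usually records why writes were refused:
    grep -i quorum /var/log/glusterfs/<mountpoint>.log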
15:27 Gambit15 joined #gluster
15:28 dnorman joined #gluster
15:29 annettec joined #gluster
15:36 shaunm joined #gluster
15:39 prth joined #gluster
15:41 atinm joined #gluster
15:43 ira joined #gluster
15:52 derjohn_mob joined #gluster
15:57 vbellur joined #gluster
16:00 ashiq joined #gluster
16:01 cloaked1 joined #gluster
16:03 iopsnax joined #gluster
16:06 susant joined #gluster
16:07 wushudoin joined #gluster
16:11 prth joined #gluster
16:18 jiffin joined #gluster
16:20 jiffin1 joined #gluster
16:23 rastar joined #gluster
16:26 kpease joined #gluster
16:30 skoduri joined #gluster
16:31 flying joined #gluster
16:38 akanksha__ joined #gluster
16:38 cloaked1 Having very strange behavior from gluster while creating a three-node cluster. From my master, I've successfully probed my peers, created a replica volume and started the volume. However, I'm unable to mount.glusterfs the volume for use. When searching through the logs, I'm not sure if this is a red herring, but I get the following messages: http://pastebin.com/u25vKf0c
16:38 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
16:40 cloaked1 https://paste.fedoraproject.org/493215/43760614/
16:40 glusterbot Title: #493215 • Fedora Project Pastebin (at paste.fedoraproject.org)
16:41 cloaked1 for context, these nodes are AWS instances Ubuntu 16.04 --
16:41 cloaked1 I cannot for the life of figure out why the NFS stuff is not working properly. No firewall is running. Let me check selinux really fast.
16:42 cloaked1 selinux is disabled
16:42 JoeJulian New versions don't enable nfs by default as ganesha is now preferred.
16:42 cloaked1 ganesha?
16:43 k4n0 joined #gluster
16:43 JoeJulian http://nfs-ganesha.github.io/
16:43 glusterbot Title: Nfs-ganesha.github.com (at nfs-ganesha.github.io)
16:44 plarsen joined #gluster
16:44 cloaked1 so that's a manual step that needs to be done by me I guess? It's not covered in the documentation at all.
16:45 cloaked1 oh, not sure if this matters. I'm using glusterfs 3.7.6
16:45 cloaked1 is that new enough to be affected by not enabling nfs by default and requiring ganesha to be installed?
16:45 JoeJulian Nope. :(
16:46 cloaked1 k
16:46 cloaked1 so I'll take that off the table as root cause possibilities for now.
16:46 JoeJulian @nfs
16:46 glusterbot JoeJulian: To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
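
For example, a mount against the built-in Gluster NFS server would look roughly like this (server and volume names are placeholders):

    mount -t nfs -o proto=tcp,vers=3 server1:/gv0 /mnt/gv0
    # or, with the short option spelling from the factoid above:
    mount -t nfs -o tcp,vers=3 server1:/gv0 /mnt/gv0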
16:48 cloaked1 nfs-kernel-server is not installed. rpcbind is active and listening (I actually have rpcbind.(service|socket|target) and run-rpc_pipefs.mount all active).
16:48 cloaked1 let me try the tcp,vers=3 options for mounting.
16:50 cloaked1 Invalid option tcp
16:50 cloaked1 hmm
16:52 cloaked1 vers and version are also unknown options
16:53 JoeJulian which distro?
16:53 JoeJulian oh
16:53 JoeJulian found it
16:53 cloaked1 k
16:59 JoeJulian Hmm, according to http://manpages.ubuntu.com/manpages/xenial/man5/nfs.5.html "tcp" should be the equivalent to "proto=tcp" and "vers=" and "nfsvers=" should also be equivalent.
16:59 glusterbot Title: Ubuntu Manpage: nfs - fstab format and options for the nfs file systems (at manpages.ubuntu.com)
16:59 cloaked1 hrm
16:59 cloaked1 k, let me try that.
17:00 JoeJulian nfs-common is installed, I presume.
17:01 cloaked1 yes
17:07 cloaked1 yeah, those options are still not working. Still checking some things though. This is throwing me for a loop.
17:07 JoeJulian What if you do "mount.nfs" instead of "mount"?
17:08 cloaked1 I've been using mount.glusterfs and mount -t glusterfs
17:08 k4n0 joined #gluster
17:09 JoeJulian That's how you use the fuse mount. I thought you were trying to nfs mount. Did I misunderstand? I just now started drinking my morning coffee...
17:11 cloaked1 well, I'm not a guru on nfs/fuse or the differences thereof within the context of how gluster works. My original question was asking about the errors in the gluster logs with a follow-up to the fact that I can't mount the volume using the mount.glusterfs. I thought they were related so I might have mucked up the question a bit.
17:11 cloaked1 the logs show a ton of nfs issues, so I thought the inability to mount and the nfs errors were related.
17:12 JoeJulian Ah, yep, I missed the details in your first post, sorry. The " I cannot for the life of figure out why the NFS stuff is not working properly." threw me off.
17:12 cloaked1 yeah, I'm sorry. My bad.
17:12 cloaked1 so the nfs stuff is irrelevant if using fuse?
17:12 JoeJulian Where are you trying to mount your volume?
17:13 JoeJulian It is.
17:13 cloaked1 cool! that's good to know.
17:13 JoeJulian And most of those logs are probably irrelevant for mounting.
17:13 cloaked1 well, I first was going to use /usr/local/openvpnas/etc/db_remote and that didn't work, so I decided to try using /gfs/ (off root)
17:13 cloaked1 oh...awesome!
17:14 JoeJulian Then you should have a /var/log/glusterfs/gfs.log that is the client log.
17:14 cloaked1 I'd prefer to use /usr/local/openvpnas/etc/db_remote if it's not a problem.
17:14 JoeJulian Or /var/log/glusterfs/usr-local-openvpnas-etc-db_remote.log
17:14 cloaked1 I have a gfs-.log (not sure what the deal is with the extraneous '-' there)
17:15 JoeJulian Trailing /.. no biggie.
17:15 cloaked1 no, trailing '-'
17:15 cloaked1 gfs-.log
17:15 JoeJulian Yeah, the "/" is converted to "-" when naming the log files.
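
As an illustration of that naming rule (server and volume names are placeholders):

    mount -t glusterfs server1:/gv0 /usr/local/openvpnas/etc/db_remote
    tail -f /var/log/glusterfs/usr-local-openvpnas-etc-db_remote.log   # leading "/" is dropped, the rest become "-"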
17:16 cloaked1 oh, so quick question. The mount point is not the same as where the brick is living right?
17:16 cloaked1 oh! OK.
17:16 JoeJulian Right.
17:17 cloaked1 so I followed these instructions: https://gluster.readthedocs.io/en/latest/Install-Guide/Configure/
17:17 glusterbot Title: Configure - Gluster Docs (at gluster.readthedocs.io)
17:17 cloaked1 but I wanted to follow these instructions: https://docs.openvpn.net/how-to-tutorialsguides/administration/active-active-high-availability-setup-for-openvpn-access-server/
17:17 flying joined #gluster
17:17 cloaked1 I mostly followed the gluster instructions, though.
17:17 cloaked1 so a little more context...
17:18 MidlandTroy joined #gluster
17:18 cloaked1 I have two sets of VPN clusters. One cluster is running fine. I followed the gluster docs on those and kinda just made the pathing work for the location of the configs. No biggie.
17:19 JoeJulian @pasteinfo | cloaked1
17:19 JoeJulian Meh
17:19 JoeJulian ~pasteinfo | cloaked1
17:19 glusterbot cloaked1: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
17:19 cloaked1 Now, for this cluster, I was hoping to follow the openvpn docs a little more closely while kind of muxing those with the gluster docs.
17:19 cloaked1 k
17:20 cloaked1 https://paste.fedoraproject.org/493256/48044000/
17:20 glusterbot Title: #493256 • Fedora Project Pastebin (at paste.fedoraproject.org)
17:21 cloaked1 so you see the /mnt/openvpn_as_db brick location. The openvpn docs say that that should actually be /usr/local/openvpn_as/etc/db_remote...which didn't seem to jive with the gluster docs.
17:22 cloaked1 so I left the brick location there and will be mounting the cluster on /usr/local/openvpn_as/etc/db_remote if possible. For testing purposes, I was just trying to mount to /gfs/
17:22 JoeJulian I, personally, prefer putting my bricks in /srv/gluster/$volume_name/$something_related_to_the_storage/brick
17:23 JoeJulian But I may be a little CDO about that.
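
A hedged sketch of what that layout looks like in practice for a hypothetical replica-3 volume named gv0, assuming the backing filesystem is already mounted at /srv/gluster/gv0/disk1 on each node (all names here are illustrative):

    mkdir -p /srv/gluster/gv0/disk1/brick      # run on every node
    gluster volume create gv0 replica 3 \
        node1:/srv/gluster/gv0/disk1/brick \
        node2:/srv/gluster/gv0/disk1/brick \
        node3:/srv/gluster/gv0/disk1/brick
    gluster volume start gv0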
17:23 cloaked1 when I get a better bead on what exactly a brick is (seems like it's primarily a metadata/storage point) then I may change the endpoint.
17:23 JoeJulian exactly
17:23 cloaked1 lol. nice. I see what you did there.
17:24 JoeJulian Alphabetical, as it should be. :)
17:24 cloaked1 lol...indeed
17:24 JoeJulian Ok, so as long as your clients can reach both bricks, let's erase that mount log and try mounting again.
17:25 cloaked1 k
17:25 JoeJulian Then fpaste the new log.
17:27 dnorman joined #gluster
17:28 cloaked1 https://paste.fedoraproject.org/493262/44050714/
17:28 glusterbot Title: #493262 • Fedora Project Pastebin (at paste.fedoraproject.org)
17:31 cloaked1 the connection timeouts are confusing. When I do a peer status, the peers show up as connected.
17:31 JoeJulian cloaked1: That client is running glusterfs version 3.8.6. Are all your servers running 3.8.6 or are they running 3.7.6 like you thought?
17:32 cloaked1 let me check
17:32 cloaked1 I just upgraded them...so, let me verify
17:33 cloaked1 they're all running 3.8.6
17:33 cloaked1 probably wasn't the wisest thing of me to do that...sorry about that.
17:33 JoeJulian Ok, then I suspect the vol files need upgraded.
17:33 cloaked1 meh, I can just wipe everything out and recreate it all.
17:33 cloaked1 it'll take me 1 min.
17:33 JoeJulian That's why it's giving an invalid argument error when reading the translator configuration.
17:33 cloaked1 ah, kk.
17:34 JoeJulian /var/lib/glusterd is where everything lives.
17:34 cloaked1 if I needed to upgrade that stuff manually, how would one go about doing that? Fortunately, right now it's easy because I can recreate everything from scratch.
17:37 JoeJulian Well, centos, rhel (and any other EL based distro), and arch do it as part of the package upgrade. I guess ubuntu doesn't have a way of doing that, huh? It should do "glusterd --xlator-option *.upgrade=on -N" after renaming *.vol under /var/lib/glusterd.
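
A hedged sketch of that manual regeneration, run on each server after the packages are upgraded; back up /var/lib/glusterd first, and adjust the service commands to the distro:

    systemctl stop glusterd
    cp -a /var/lib/glusterd /var/lib/glusterd.bak
    glusterd --xlator-option "*.upgrade=on" -N    # regenerates the volfiles in the foreground, then exits
    systemctl start glusterd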
17:38 dnorman joined #gluster
17:39 cloaked1 it does
17:39 cloaked1 but you have to specify a new PPA (repo endpoint)
17:39 cloaked1 otherwise, Ubuntu sticks with the major version and updates that version (using LTS)
17:40 cloaked1 OK. I've created a cluster
17:40 JoeJulian Right, but if the packaging system is capable of running post-update scripts, perhaps we need some help with that.
17:40 cloaked1 let's see what happens here. I decided to change the location of the brick using your convention..somewhat.
17:41 renout_away joined #gluster
17:44 cloaked1 https://paste.fedoraproject.org/493272/14804414/
17:44 glusterbot Title: #493272 • Fedora Project Pastebin (at paste.fedoraproject.org)
17:45 cloaked1 0-gv0-dht: invalid argument: inode [Invalid argument]  <<-- ????
17:45 glusterbot cloaked1: <<'s karma is now -1
17:45 cloaked1 ?
17:46 cloaked1 looks like I did something there inadvertently :\
17:46 cloaked1 It looks like the mount script is not properly capturing the inode of the mount point
17:50 cloaked1 stat: cannot stat '/usr/local/openvpn_as/etc/db_remote': Transport endpoint is not connected
17:51 cloaked1 so odd, because if I run the stat command manually, I get the inode. I think this is the root cause. Now just have to figure out why stat is failing in the script.
17:51 cloaked1 # stat -c %i /usr/local/openvpn_as/etc/db_remote
17:51 cloaked1 539
17:54 JoeJulian Well that's working because the client already failed.
17:54 JoeJulian If the client had succeeded, it would be 1.
17:54 cloaked1 ah, I see. So it's trying to mount first and then stat?
17:54 JoeJulian Right
17:55 cloaked1 got it
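
Roughly what the mount script is doing, with placeholder server, volume and mount-point names:

    mount -t glusterfs server1:/gv0 /gfs
    stat -c %i /gfs    # prints 1 when the fuse client is connected; if the mount already
                       # failed, you just get the inode of the underlying directory (e.g. 539)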
17:56 JoeJulian I'm not sure where this error is coming from. "invalid argument: inode [Invalid argument]"
17:56 JoeJulian Digging through the source to see what I can find.
17:56 cloaked1 ok
17:57 skoduri joined #gluster
18:03 ivan_rossi left #gluster
18:23 plarsen joined #gluster
18:24 theron joined #gluster
18:31 cloaked1 JoeJulian: I'm running straces on gluster now. So far, nothing really sticking out about an inode argument.
18:33 theron joined #gluster
18:38 k4n0 joined #gluster
18:42 plarsen joined #gluster
18:43 JoeJulian It's an assert because the inode variable to dht_inode_ctx_time_update is null.
18:43 DV__ joined #gluster
18:43 DV__ joined #gluster
18:47 msvbhat joined #gluster
18:52 arc0 joined #gluster
18:53 B21956 joined #gluster
18:58 cliluw joined #gluster
19:00 msvbhat joined #gluster
19:02 farhorizon joined #gluster
19:04 B21956 joined #gluster
19:12 JoeJulian cloaked1: Well, I cannot seem to duplicate the issue so far.
19:12 cloaked1 :\
19:13 cloaked1 when I strace it, the mount "succeeds" but the endpoint transport is not available so I can't actually do anything with it.
19:13 cloaked1 as is the case sometimes with stracing applications.
19:13 cloaked1 false positives.
19:14 cloaked1 besides port 24007, are there any other ports that are required to be available? Maybe my security group is the problem.
19:14 Jacob843 joined #gluster
19:16 JoeJulian @ports
19:16 glusterbot JoeJulian: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up. All ports must be reachable by both servers and clients. Additionally it will listen on 38465-38468/tcp for NFS. NFS also depends on rpcbind/portmap ports 111 and 2049.
19:17 JoeJulian If that's it then there's some other bug. It should say so if it's a network problem.
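
A hedged iptables equivalent of the inbound rules an AWS security group would need for those ports (the brick-port upper bound here is arbitrary; one port per brick is used starting at 49152):

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT            # glusterd management (+rdma)
    iptables -A INPUT -p tcp --dport 49152:49251 -j ACCEPT            # brick daemons
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT            # gluster NFS
    iptables -A INPUT -p tcp -m multiport --dports 111,2049 -j ACCEPT # rpcbind / NFS
    iptables -A INPUT -p udp --dport 111 -j ACCEPT                    # rpcbind also uses UDP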
19:18 cloaked1 I just switched the security groups and rebooted the instance. I'll let you know in a sec.
19:21 cloaked1 huh.
19:21 cloaked1 so two of the three machines are mounting up just fine. The main machine, though, is just hanging on mount.
19:21 JoeJulian Well that's a change.
19:21 cloaked1 yes. yes it is.
19:23 cloaked1 and progress. I backgrounded the mount (not sure what the deal is with that yet) and the mount point is mounted. I copied files into the directory and they are now showing up in the clustered machines as well.
19:26 cloaked1 ah, I know why it was hanging
19:26 dnorman joined #gluster
19:27 cloaked1 sweet. I got everything up.
19:28 cloaked1 the security group was the problem.
19:28 JoeJulian excellent!
19:28 JoeJulian Why was it hanging?
19:28 cloaked1 I altered the mount.glusterfs script to strace the mount...
19:28 JoeJulian Ah, that makes sense.
19:28 cloaked1 strace doesn't fork.
19:28 cloaked1 yeah
19:29 cloaked1 I appreciate your help. Still not sure what the inode message was about...
19:29 JoeJulian Still disappointed that we didn't get a valid network error.
19:29 cloaked1 that's an odd duck and totally misleading...but now we know.
19:29 theron joined #gluster
19:29 JoeJulian I'm going to see if I can duplicate that and get a decent bug filed.
19:29 cloaked1 the only port I had open in the original SG was 24007
19:29 cloaked1 I'll send you the SGs
19:30 cloaked1 or rather, the port list.
19:30 cloaked1 all inbound.
19:30 JoeJulian I'm off to lunch. ttfn.
19:30 cloaked1 k. enjoy. thanks man!
19:34 cloaked1 JoeJulian: https://paste.fedoraproject.org/493356/44804914/
19:34 glusterbot Title: #493356 • Fedora Project Pastebin (at paste.fedoraproject.org)
19:39 ahino joined #gluster
19:48 haomaiwang joined #gluster
19:57 theron joined #gluster
20:03 guhcampos joined #gluster
20:08 derjohn_mob joined #gluster
20:12 mhulsman joined #gluster
20:12 unclemarc joined #gluster
20:17 dnorman joined #gluster
20:17 panina joined #gluster
20:19 m0zes joined #gluster
20:32 TvL2386 joined #gluster
20:46 alvinstarr I have 2 volumes, A and B. I would like to copy A to B. When I do this using rsync, the second run takes just about as long as the first run. There are about 8M files using about 160G of space. What would be the best/fastest way to sync one volume to the other?
20:47 haomaiwang joined #gluster
20:59 bluenemo joined #gluster
21:08 ajneil joined #gluster
21:11 farhorizon joined #gluster
21:13 Wizek joined #gluster
21:21 hackman joined #gluster
21:50 JoeJulian alvinstarr: geo-replication, imho.
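
A hedged sketch of a one-way geo-replication session from volume A to volume B, assuming B is served by a host reachable as root over passwordless ssh (host and volume names are placeholders, and both volumes must already exist):

    gluster system:: execute gsec_create                      # generate the shared pem keys
    gluster volume geo-replication A slavehost::B create push-pem
    gluster volume geo-replication A slavehost::B start
    gluster volume geo-replication A slavehost::B status      # check sync progress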
21:55 theron joined #gluster
21:55 vbellur joined #gluster
22:00 masuberu joined #gluster
22:05 theron joined #gluster
22:08 masuberu joined #gluster
22:09 farhorizon joined #gluster
22:20 farhoriz_ joined #gluster
22:35 panina joined #gluster
22:39 joef101 joined #gluster
22:40 panina joined #gluster
22:51 haomaiwang joined #gluster
22:55 DV__ joined #gluster
23:11 theron joined #gluster
23:20 primehaxor joined #gluster
23:21 arc0 joined #gluster
23:23 farhorizon joined #gluster
23:27 DV__ joined #gluster
23:30 Micha2k joined #gluster
23:32 plarsen joined #gluster
23:38 Marbug_ joined #gluster
23:52 theron joined #gluster
