
IRC log for #gluster, 2014-08-04


All times shown according to UTC.

Time Nick Message
00:07 luckyinva joined #gluster
00:23 luckyinva left #gluster
00:24 DV joined #gluster
00:58 lyang0 joined #gluster
01:01 hflai joined #gluster
01:28 RioS2 joined #gluster
01:35 gildub joined #gluster
01:41 harish_ joined #gluster
01:51 haomaiwa_ joined #gluster
01:57 haomaiw__ joined #gluster
02:19 RameshN joined #gluster
02:37 overclk joined #gluster
02:55 bharata-rao joined #gluster
02:58 shubhendu_ joined #gluster
03:52 itisravi joined #gluster
03:58 kanagaraj joined #gluster
04:06 kshlm joined #gluster
04:06 ppai joined #gluster
04:16 kanagaraj joined #gluster
04:19 kanagaraj joined #gluster
04:19 ndarshan joined #gluster
04:30 spandit joined #gluster
04:34 Rafi_kc joined #gluster
04:35 anoopcs joined #gluster
04:40 hagarth joined #gluster
04:43 nthomas joined #gluster
04:49 shubhendu_ joined #gluster
04:52 rejy joined #gluster
04:55 Humble joined #gluster
05:14 jiffin joined #gluster
05:16 dusmant joined #gluster
05:20 Humble joined #gluster
05:23 Pupeno joined #gluster
05:24 prasanth_ joined #gluster
05:25 kdhananjay joined #gluster
05:27 nshaikh joined #gluster
05:35 foster joined #gluster
05:40 Humble joined #gluster
05:42 RameshN joined #gluster
05:43 haomaiwa_ joined #gluster
05:44 ramteid joined #gluster
05:45 sputnik13 joined #gluster
05:46 haomaiw__ joined #gluster
05:47 karnan joined #gluster
05:47 LebedevRI joined #gluster
05:49 deepakcs joined #gluster
05:50 Humble joined #gluster
05:52 nbalachandran joined #gluster
05:52 foster joined #gluster
05:52 RameshN left #gluster
05:54 RameshN joined #gluster
05:56 vpshastry joined #gluster
05:59 glusterbot New news from newglusterbugs: [Bug 1126289] [SNAPSHOT]: Deletion of a snapshot in a volume or system fails if some operation which acquires the volume lock comes in between. <https://bugzilla.redhat.com/show_bug.cgi?id=1126289>
05:59 RameshN_ joined #gluster
06:06 Humble joined #gluster
06:07 lalatenduM joined #gluster
06:10 andreask joined #gluster
06:13 ekuric joined #gluster
06:17 XpineX joined #gluster
06:17 foster joined #gluster
06:20 6A4AAN5RD joined #gluster
06:20 7GHAAH957 joined #gluster
06:23 foster joined #gluster
06:25 hagarth joined #gluster
06:31 atalur joined #gluster
06:36 foster joined #gluster
06:38 fsimonce joined #gluster
06:40 karnan joined #gluster
06:40 sputnik13 joined #gluster
06:42 bala joined #gluster
06:47 kumar joined #gluster
06:47 rastar joined #gluster
06:52 ppai joined #gluster
06:54 foster joined #gluster
07:00 tim_lau joined #gluster
07:00 foster joined #gluster
07:01 violuke joined #gluster
07:03 ctria joined #gluster
07:03 sputnik13 joined #gluster
07:06 aravindavk joined #gluster
07:06 bala joined #gluster
07:10 R0ok_ joined #gluster
07:13 keytab joined #gluster
07:17 kanagaraj joined #gluster
07:17 marcoceppi joined #gluster
07:17 marcoceppi joined #gluster
07:20 nbalachandran joined #gluster
07:21 dusmant joined #gluster
07:27 DV joined #gluster
07:28 wgao joined #gluster
07:28 Rydekull joined #gluster
07:31 raghu joined #gluster
07:35 foster joined #gluster
07:49 DV joined #gluster
07:51 haomaiwa_ joined #gluster
07:55 ricky-ti1 joined #gluster
07:58 liquidat joined #gluster
07:58 aravindavk joined #gluster
08:01 ppai joined #gluster
08:02 foster joined #gluster
08:03 nbalachandran joined #gluster
08:03 shubhendu_ joined #gluster
08:07 haomai___ joined #gluster
08:08 dusmant joined #gluster
08:08 foster joined #gluster
08:14 foster joined #gluster
08:17 Norky joined #gluster
08:18 sputnik13 joined #gluster
08:23 foster joined #gluster
08:28 calum_ joined #gluster
08:37 karnan joined #gluster
08:41 violuke joined #gluster
08:50 ppai joined #gluster
08:58 ctria joined #gluster
09:01 spandit joined #gluster
09:04 Slashman joined #gluster
09:06 vimal joined #gluster
09:18 ninkotech joined #gluster
09:18 ninkotech_ joined #gluster
09:21 hagarth joined #gluster
09:22 calum_ joined #gluster
09:23 suliba joined #gluster
09:25 mbukatov joined #gluster
09:32 foster joined #gluster
09:33 glusterbot New news from resolvedglusterbugs: [Bug 1078068] dist-geo-rep: Python backtrace seen in geo-rep logs "ValueError: signal only works in main thread" <https://bugzilla.redhat.com/show_bug.cgi?id=1078068>
09:34 kumar joined #gluster
09:44 SmithyUK Hi fellas, does anyone know if there is a "background" rebalance? I added a bunch of new bricks to an existing volume and it seems like it is balancing data. Nothing in the rebalance logs
09:50 lalatenduM SmithyUK, the new data will use new layout (after addition of new bricks), but old data will not rebalance unless you explicitly execute a rebalance comamnd
09:50 lalatenduM s/comamnd/command/
09:50 glusterbot lalatenduM: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
09:50 lalatenduM s/comamnd/command/
09:50 glusterbot What lalatenduM meant to say was: SmithyUK, the new data will use new layout (after addition of new bricks), but old data will not rebalance unless you explicitly execute a rebalance command
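As a rough sketch of the explicit rebalance lalatenduM describes (the volume name "myvol" is only a placeholder), the commands would look something like:
    gluster volume rebalance myvol fix-layout start   # only rewrite the layout so new files land on the new bricks
    gluster volume rebalance myvol start              # also migrate existing data onto the new bricks
    gluster volume rebalance myvol status             # per-node progress, including nodes marked failed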
09:51 foster joined #gluster
09:52 SmithyUK http://i.imgur.com/WTalbRU.png
09:53 SmithyUK lalatenduM: there is the image of the data free on the bricks i added, it seems localised to just some servers
09:53 SmithyUK strange behaviour indeed
10:03 Chr1s1an joined #gluster
10:05 Chr1s1an Anyone here having experience running two bricks on same physical disk, is that supported or will it cause issues? I know that both gluster volumes will share the free space , but if it grows on one of them it’s also shown on the other volume.
10:07 ndevos Chr1s1an: that is a relatively common configuration, it works fine (maybe you want to add quotas), but it will prevent you from creating snapshots of the volume (glusterfs-3.6 feature based on lvm)
10:11 ppai joined #gluster
10:12 Chr1s1an So for future features it might not be smart to do it then
10:12 Chr1s1an Thanks for the fast reply :)
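If two bricks do share a disk, the quota ndevos mentions is set per volume (or per directory); a minimal sketch with placeholder names and sizes:
    gluster volume quota vol1 enable
    gluster volume quota vol1 limit-usage / 200GB   # cap vol1 so it cannot consume the whole shared disk
    gluster volume quota vol1 list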
10:17 itisravi_ joined #gluster
10:20 RameshN_ joined #gluster
10:23 ctria joined #gluster
10:24 Slasheri is there any data integrity checks/checksumming in glusterfs (like zfs), including auto repair when enough redundancy is available?
10:26 lalatenduM Slasheri, the auto repairing is available in gluster when u use replication type volume, its called self healing
10:27 SmithyUK lalatenduM, figured it out; looks like a rebalance was somehow started on 2 servers, the rest were marked as "failed". Not quite sure how the rebalance was started but at least I know why the data was moving. Thanks for the help
10:27 Slasheri lalatenduM: excellent, thanks. But do you know how possible data integrity errors are handled/detected. For example if one bit corrupts inside the brick volume, does gluster detect the corruption?
10:37 nshaikh joined #gluster
10:38 ndarshan joined #gluster
10:39 Norky that's more an underlying filesystem issue
10:41 foster joined #gluster
10:42 overclk joined #gluster
10:43 vpshastry1 joined #gluster
10:52 overclk joined #gluster
10:54 overclk joined #gluster
10:55 foster joined #gluster
10:59 lalatenduM SmithyUK, cool, which version of gluster u r running?
11:01 lalatenduM Slasheri, as of now gluster does not have the feature, it is being developed for a future release, but if you use a disk file system like btrfs on the bricks it would be handled by the on disk fs
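For the self-heal side mentioned above, the usual commands are roughly (volume name is a placeholder):
    gluster volume heal myvol info              # entries still pending heal
    gluster volume heal myvol full              # crawl the bricks and heal everything, not just known entries
    gluster volume heal myvol info split-brain  # entries gluster cannot repair on its own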
11:01 tdasilva joined #gluster
11:01 tdasilva_ joined #gluster
11:02 SmithyUK lalatenduM: v3.5.1
11:03 lalatenduM SmithyUK, I think you should move to v3.5.2 (bug fix rel). check https://github.com/gluster/glusterfs/blob/release-3.5/doc/release-notes/3.5.2.md
11:03 glusterbot Title: glusterfs/3.5.2.md at release-3.5 · gluster/glusterfs · GitHub (at github.com)
11:05 lalatenduM SmithyUK, check http://supercolony.gluster.org/pipermail/gluster-users/2014-August/041229.html
11:05 glusterbot Title: [Gluster-users] glusterfs-3.5.2 RPMs are now available (at supercolony.gluster.org)
11:12 overclk joined #gluster
11:37 vpshastry joined #gluster
11:37 ira joined #gluster
11:39 sputnik13 joined #gluster
11:42 dusmant joined #gluster
11:43 diegows joined #gluster
11:47 Humble joined #gluster
11:48 kkeithley ,,(ports)
11:48 glusterbot glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
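A hedged firewall sketch matching that factoid for a 3.4+ server (the brick port range depends on how many bricks the node hosts):
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (24008 only with rdma)
    iptables -A INPUT -p tcp --dport 49152:49200 -j ACCEPT   # one port per brick, counting up from 49152
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS and NLM
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # portmapper
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    iptables -A INPUT -p tcp --dport 2049 -j ACCEPT          # NFS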
11:48 violuke joined #gluster
12:00 Humble joined #gluster
12:01 Slashman_ joined #gluster
12:09 Xanacas_ joined #gluster
12:09 pdrakewe_ joined #gluster
12:09 dreville joined #gluster
12:09 T0aD- joined #gluster
12:10 Pupeno_ joined #gluster
12:11 stickyboy_ joined #gluster
12:11 mkzero_ joined #gluster
12:12 foster_ joined #gluster
12:12 harish_ joined #gluster
12:12 hagarth1 joined #gluster
12:12 mibby- joined #gluster
12:13 anoopcs1 joined #gluster
12:13 hflai_ joined #gluster
12:13 dusmant joined #gluster
12:13 purpleid1a joined #gluster
12:14 anoopcs1 joined #gluster
12:15 ppai_ joined #gluster
12:18 Humble joined #gluster
12:19 stickyboy joined #gluster
12:22 Pupeno joined #gluster
12:23 kumar joined #gluster
12:23 bene2 joined #gluster
12:23 gehaxelt joined #gluster
12:23 92AAAGBSS joined #gluster
12:25 chirino joined #gluster
12:27 the-me joined #gluster
12:30 edwardm61 joined #gluster
12:32 magicrobotmonkey left #gluster
12:34 overclk joined #gluster
12:36 vpshastry1 joined #gluster
12:40 _Bryan_ joined #gluster
12:40 swebb joined #gluster
12:49 B21956 joined #gluster
12:59 ctria joined #gluster
13:00 glusterbot New news from newglusterbugs: [Bug 1126435] Problem in glusterfs <https://bugzilla.redhat.com/show_bug.cgi?id=1126435>
13:01 julim joined #gluster
13:06 bennyturns joined #gluster
13:09 julim joined #gluster
13:10 hchiramm_ joined #gluster
13:19 ctria joined #gluster
13:30 caiozanolla_ joined #gluster
13:37 R0ok_ joined #gluster
13:38 tdasilva joined #gluster
13:44 Humble joined #gluster
13:53 hchiramm_ joined #gluster
13:54 vpshastry joined #gluster
13:56 edward1 joined #gluster
14:07 mojibake joined #gluster
14:08 sjm joined #gluster
14:09 rwheeler joined #gluster
14:15 wushudoin joined #gluster
14:20 mortuar joined #gluster
14:30 jbrooks joined #gluster
14:35 hagarth joined #gluster
14:36 qdk joined #gluster
14:46 bennyturns joined #gluster
14:52 overclk joined #gluster
15:01 hagarth joined #gluster
15:08 sputnik13 joined #gluster
15:09 RameshN_ joined #gluster
15:11 mbukatov joined #gluster
15:11 vpshastry joined #gluster
15:16 edward1 joined #gluster
15:17 nage joined #gluster
15:28 plarsen joined #gluster
15:30 overclk joined #gluster
15:38 tkatarki joined #gluster
15:46 dtrainor joined #gluster
15:52 AndroUser2 joined #gluster
15:52 AndroUser2 joined #gluster
15:52 dtrainor__ joined #gluster
15:54 Peter1 joined #gluster
15:56 Peter1 hi semiosis is ubuntu 3.5.2 out
15:56 Peter1 ?
15:56 Peter1 i'm just afraid i missed it
16:00 Xanacas joined #gluster
16:15 overclk joined #gluster
16:23 plarsen joined #gluster
16:24 chirino_m joined #gluster
16:45 chirino joined #gluster
16:53 mbukatov joined #gluster
17:01 zerick joined #gluster
17:04 vpshastry joined #gluster
17:13 lyang0 joined #gluster
17:15 nbalachandran joined #gluster
17:28 edward1 joined #gluster
17:41 ricky-ticky1 joined #gluster
17:49 GlenK joined #gluster
17:50 GlenK howdy.  typical jerk on my end.  just want to power through on things initially.  take heart, it's just my home test it up jerk stuff, so it's not so bad.
17:51 GlenK anyhow, the documentation I'm encountering so far seems to be about two systems that replicate data.  Sounds good to me.  But say I want to throw 3 systems in to the mix.  If I still keep the options "replica 2" but add 3 bricks I guess you call them, that works out?
17:52 GlenK or "replica 3" would work out too.  I'm just trying to understand a bit here.
17:54 GlenK the difference I assume would be with "replica 2" and 3 disks, hell, partitions I guess I mean.  Anyhow, with that junk would be spread out across 3 pcs but only 2 copies which would not each reside on the same machine?
17:55 GlenK and then "replica 3", you'd need 3 machines, but then you'd have 3 copies and in theory 2 of them could even go down and you'd still be good client side?
17:55 GlenK then I'm wondering about bitrot.  say I have btrfs as the underlying filesystem.  Is that enough?
17:57 rotbeard joined #gluster
18:02 kkeithley ,,(bricks)
18:02 glusterbot kkeithley: Error: No factoid matches that key.
18:02 kkeithley ,,(terminology)
18:02 glusterbot kkeithley: Error: No factoid matches that key.
18:05 clutchk joined #gluster
18:09 clutchk left #gluster
18:09 kkeithley you can't do 'replica 2' with three bricks (i.e. bricks = systems)
18:11 julim joined #gluster
18:11 kkeithley you can do 'replica 3', or distribute across the three systems
18:14 GlenK ha, right.  sounds like I just need to keep playing with it.  cheers.
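A minimal sketch of the two layouts GlenK is weighing (hosts and brick paths are placeholders); replica 3 keeps one copy on each server, plain distribute just spreads files with no redundancy:
    # three-way replication: any two servers can fail and clients keep working
    gluster volume create homevol replica 3 pc1:/data/brick pc2:/data/brick pc3:/data/brick
    # pure distribute: more space, no copies
    gluster volume create homevol pc1:/data/brick pc2:/data/brick pc3:/data/brick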
18:23 jruggiero left #gluster
18:23 vpshastry joined #gluster
18:29 Philambdo joined #gluster
18:51 nueces joined #gluster
19:02 swebb joined #gluster
19:04 coredump joined #gluster
19:05 stickyboy joined #gluster
19:17 ninthBit joined #gluster
19:22 ninthBit glusterfs v3.4 on ubuntu 12.04 pulled from launchpad.net/~semiosis.  What does it mean if in the heal info for a volume i see under a brick items like <gfid:uuid-xxx>.  are these glusterfs ids for files?  some hours later I see file names and no longer see the gfid entries.  would this mean the heal has found the file name for these gfid entries?
19:25 ninthBit the follow up on the heal.  i see the same files show up over hours and the exact same file name shows up on each brick.  the size looks the same but the owner of the file is different.  I can't explain how this happened, but there is a level of complexity: SQL performed a backup to a CIFS share provided by Samba, which is written to a glusterfs client mount to the glusterfs servers.
19:26 ninthBit how production stable is glusterfs 3.5?  i can't exactly find production "ready" hints on the new web page.  when we were first working on this, gluster 3.4 was said to be usable for production.
19:29 ninthBit splitbrain has no files listed....
19:29 bene2 joined #gluster
19:30 andreask joined #gluster
19:32 bennyturns joined #gluster
19:36 B21956 joined #gluster
19:36 bennyturns joined #gluster
19:44 Peter1 joined #gluster
19:53 skippy joined #gluster
19:55 skippy I have two Gluster servers, each serving one brick in a replicate volume.
19:56 skippy I added a 3rd node as a quorum-only server, not serving any bricks (per Volume /tmp on node epamotron1.innova.local has exceeded 85% utilization.
19:56 skippy Space used: 1.7 G
19:56 skippy whops
19:56 skippy per https://github.com/gluster/glusterfs/blob/master/doc/features/server-quorum.md
19:56 glusterbot Title: glusterfs/server-quorum.md at master · gluster/glusterfs · GitHub (at github.com)
19:57 skippy i enabled quorum-type "server" on my volume.
19:57 skippy but testing shows that when one of the brick-hosting nodes goes down, writes to the FUSE-mounted volume block and hang.
19:57 skippy I was expecting the writes to continue, because I have quorum: 2 servers out of three.
19:58 JoeJulian ninthBit: The gfid in the self-heal table only means that the self-heal daemon is unaware of the actual filename for the file that needs to be healed. It shouldn't matter, it should heal it anyway.
19:59 JoeJulian ninthBit: file ownership changing is usually due to having mismatched uids on your clients.
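One common way to map a <gfid:...> heal entry back to a filename, sketched with placeholder paths and assuming the entry is a regular file: each file on a brick has a hardlink under .glusterfs named after its gfid, so the two share an inode:
    # run on the brick that reported the gfid
    find /export/brick1 -samefile /export/brick1/.glusterfs/ab/cd/abcdef12-aaaa-bbbb-cccc-0123456789ab -not -path '*/.glusterfs/*'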
20:00 JoeJulian how is the "brick-hosting node" aka "server" going down?
20:00 JoeJulian skippy: ^
20:00 skippy ifdown from the console.
20:01 JoeJulian Then you're stuck waiting for a ,,(ping-timeout).
20:01 glusterbot The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
20:01 skippy ah
20:01 JoeJulian To avoid the ping-timeout, shutdown your servers.
20:02 JoeJulian That will close the tcp connection and the client will be aware not to wait for it.
20:02 bit4man joined #gluster
20:03 skippy i'm trying to understand what failure will look like in production, where graceful shutdowns won't always happen.
20:03 skippy thanks for the info!
20:03 JoeJulian 42 seconds once or twice a year are still within 5 nines.
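A rough sketch of the knobs discussed in this exchange (option names are standard, the volume name is a placeholder):
    gluster volume set myvol cluster.server-quorum-type server   # what skippy enabled
    gluster volume set all cluster.server-quorum-ratio 51%       # quorum threshold across the trusted pool
    gluster volume set myvol network.ping-timeout 42             # how long clients wait after an ungraceful drop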
20:05 ninthBit JoeJulian: thanks for the help on the gfid.  the mismatched uids on the clients.  you mean the uid of the user or of something else?  the odd thing is the file is written from the same server and process.  we don't doubt sql server is doing crazy things that make this a bad solution for sql direct backups. maybe i can begin that as a new question but first starting with the gfid and user owner question :)
20:07 JoeJulian yes, user id. If it's only one client and it's not changing to root, then I'm at a loss.
20:07 Peter3 joined #gluster
20:09 chirino joined #gluster
20:14 ninthBit JoeJulian: thanks.  i can assist with some of the background.  we have samba installed on each of the glusterfs nodes.  provides a "cifs" access for the windows network.  the samba is sharing a local glusterfs mount of the glusterfs volume.  the samba server is attached to the windows domain.  the user is based on the active directory user.
20:15 calum_ joined #gluster
20:15 ninthBit JoeJulian: to provide HA access to the CIFS share we use round robin DNS on a host name.  works. but i would not doubt it could be a source of issues........
20:15 aknapp_ joined #gluster
20:16 JoeJulian I would. It's the client that assigns the uid to the file. The servers just store it numerically.
20:16 ninthBit JoeJulian: working in AWS and need an HA CIFS option for the Windows nodes and to allow Active Directory management against the NAS.
20:16 * JoeJulian barfs a little.
20:17 ninthBit yes....
20:17 ninthBit i agree
20:22 bene2 joined #gluster
20:26 semiosis windows is not HA
20:26 semiosis ;)
20:26 bene2 joined #gluster
20:27 ninthBit semiosis: i agree and their file system solution is hopeless for aws
20:27 semiosis well no doubt. gotta sell that azure
20:28 ninthBit we always ask "are you sure?" to rhyme with azure....
20:29 ninthBit now, looking to see if it is possible to restrict a glusterfs client to a specific path on a glusterfs volume. this would help move our ftp off the samba hack and directly hitting the glusterfs.  ftp on linux (b)
20:30 JoeJulian Microsoft is showing up to all the open-source conferences trying to sell Azure. How that has anything to do with open-source is a mystery.
20:30 JoeJulian Not through fuse. I would just use a separate volume.
20:32 skippy A suprising number of Linux instances are running inside Azure.
20:32 JoeJulian but that doesn't make them a contributor or a supporter. Just a leach.
20:33 ninthBit JoeJulian: i was thinking that also.  The volume owns a brick right?  so, for the dedicated ftp volume we would need to stop thinking that we want a huge NAS space that is managed with active directory access and perhaps start dedicating volumes to the targeted solution.
20:34 Peter1 joined #gluster
20:34 JoeJulian That's how I've done it. I even split up the storage with LVM so I can easily allocate more space to whichever volume needs more.
20:35 Peter1 joined #gluster
20:35 Peter1 joined #gluster
20:40 skippy left #gluster
20:44 ninthBit JoeJulian: thanks we are going forward with that plan and see how it works in the testbed.  the remainder is to find out why SQL backup is killing the CIFS shares.  samba still runs but loses the config of the shares... note, i have a split samba config where the CIFS shares are defined in a config file stored on a gluster volume.  that way each node in the cluster has the same samba configuration for the shares.  i have not gone deep en
20:44 ninthBit at least ftp should be more stable to the GlusterFS volume and not crapping out when all samba shares go offline...
20:45 semiosis ninthBit: are you using the glusterfs samba vfs plugin?
20:45 semiosis or whatever it's called
20:45 semiosis ,,(samba)
20:45 glusterbot (#1) Samba 4.1.0 RPMs for Fedora 18+ with the new GlusterFS libgfapi VFS plug-in are available at http://download.gluster.org/pub/gluster/glusterfs/samba/ , or (#2) more information about alternate samba configurations can be found at http://lalatendumohanty.wordpress.com/2014/04/20/glusterfs-vfs-plugin-for-samba/
20:46 JoeJulian The vfs is very nice.
20:46 semiosis pretty sure it's built in to the newer samba releases
20:47 ninthBit semiosis: no, we are using gluster 3.4 and needed Windows Active Directory authentication support for the CIFS shares.  At the time I *think* glusterfs's samba plugin didn't support AD authentication.... so we stand up a full Samba server on each GlusterFS server node.  each node self-mounts with the GlusterFS fuse client.  then the samba server shares the glusterfs fuse mount with the Active Directory groups controlling access
20:48 semiosis ah
20:48 ninthBit it has been working but SQL 2014 backups seem to be killing it.... trying to work through how and there is a lot of options to explore.
20:48 ninthBit pretty crazy setup..... i dedicate it to Microsoft...
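For comparison with that setup, a share using the libgfapi VFS plug-in from the factoid above would look roughly like this in smb.conf (share and volume names are placeholders; AD/winbind settings omitted):
    [gvshare]
        path = /
        vfs objects = glusterfs
        glusterfs:volume = myvol
        glusterfs:logfile = /var/log/samba/glusterfs-myvol.log
        kernel share modes = no
        read only = no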
20:48 Peter3 joined #gluster
20:49 ninthBit we don't use the samba cluster feature either. it can't work in amazon AWS when dealing with availability zones....
20:51 Peter1 joined #gluster
20:55 ninthBit it is easy to find solutions for GlusterFS + CIFS + Active Directory + HA in Amazon VPC. I have managed to scrap something together but there might be some more technical details I have to get into.
20:55 ninthBit er NOT easy
20:56 ninthBit HA NAS for CIFS + AD does not have an out of the box solution yet.... i can see why... ugh.. samba + ad is voodoo
20:58 tyrok_laptop joined #gluster
20:58 tyrok_laptop Is there a kosher way to add translators to an existing replicated Gluster volume?  Most of what I've been seeing pretty much says you should edit the .vol files.
20:59 tyrok_laptop I should note that this existing volume was created using a "gluster volume create" command.
21:05 ninthBit is GlusterFS 3.5 recommended for production use?  i have not been able to find that information myself on the web page, or via googlefoo...
21:12 supersix joined #gluster
21:12 JoeJulian ninthBit: According to the developers it is.
21:13 JoeJulian tyrok_laptop: If you're reading stuff that says to edit the vol file, you're reading really old stuff.
21:13 JoeJulian tyrok_laptop: What're you trying to do?
21:14 tyrok_laptop JoeJulian: Add the performance/io-cache translator to the server side.
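On recent releases the supported way to toggle translators such as io-cache is 'gluster volume set' rather than editing .vol files; a hedged sketch (volume name is a placeholder, and note io-cache is loaded in the client graph rather than on the bricks):
    gluster volume set myvol performance.io-cache on
    gluster volume set myvol performance.cache-size 256MB
    gluster volume set myvol performance.io-cache off   # revert if it does not help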
21:16 supersix hello all, would anyone have a moment to help me with an NFS issue on a new Gluster 3.5.1 CentOS 6.5 install
21:16 JoeJulian @nfs
21:16 glusterbot JoeJulian: To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
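The mount line implied by that factoid looks roughly like this (server, volume and mountpoint are placeholders):
    mount -t nfs -o vers=3,tcp gluster1:/myvol /mnt/myvol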
21:17 supersix thanks, i tried those options without luck
21:17 supersix quick question though, "gluster volume status", should that show the NFS server as online with a port number?
21:18 supersix it currently does not show as online
21:19 JoeJulian It should be, unless you specifically disabled it. Check the log file, /var/log/glusterfs/nfs.log
21:19 supersix two things:
21:19 supersix 1) 0-nfs-server: initializing translator failed
21:20 supersix 2) 0-nfs-server: Initialization of volume 'nfs-server' failed, review your volfile again
21:21 edward1 joined #gluster
21:21 supersix i checked the volfile and didnt see "nfs-server" but i also didnt write these configs, they are out of the box
21:21 JoeJulian That's a pretty short log file.
21:22 JoeJulian Just try restarting glusterd (or glusterfs-server if you're on ubuntu) and see if the logs tell any different story.
21:24 supersix log file output of nfs.log on glusterd restart
21:24 supersix http://pastie.org/9445504
21:24 glusterbot Title: #9445504 - Pastie (at pastie.org)
21:25 JoeJulian 0-rpc-service: Could not register with portmap
21:26 JoeJulian Which relates back to the factoid.
21:26 JoeJulian @nfs
21:26 glusterbot JoeJulian: To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
21:28 supersix rpcbind (pid  4919) is running...
21:29 supersix im not sure what else im missing
21:29 supersix /etc/init.d/nfs is stopped
21:31 JoeJulian check rpcinfo -p
21:32 supersix interesting, lots of stuff here: http://pastie.org/9445527
21:32 glusterbot Title: #9445527 - Pastie (at pastie.org)
21:33 chirino joined #gluster
21:34 JoeJulian now you can use netstat to see what's listening on those ports, for instance, netstat -tlnp | grep 2049
21:39 supersix weird, one just started working while the other doesn't
21:39 supersix rpcinfo shows the port in use but netstat shows nothing
21:41 JoeJulian And you're not checking a udp port with the -t option I presume.
21:45 supersix both -u and -t show nothing using that port
21:46 supersix restarting rpcbind on the server released it
21:47 supersix even though rebooting the server did not, weird
21:47 supersix now NFS is showing as online on both nodes, but not sure why restarting rpcbind, then glusterd fixed it
21:48 JoeJulian which port?
21:48 JoeJulian More accurately, which service?
21:48 supersix 2049
21:48 supersix the service i restarted is rpcbind via service rpcbind restart
21:49 supersix then i restarted glusterd and it works after that
21:49 JoeJulian Sure sounds like the kernel nfsd was starting at boot.
21:50 supersix hmm, welp i believe that
21:51 supersix but need to figure out how to disable it, after i figure out what it is!
21:51 supersix thanks for your help, i'll look into that
21:51 JoeJulian good luck
21:51 supersix thanks
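On CentOS 6 the kernel NFS server JoeJulian suspects can be kept off port 2049 with something like this (a sketch assuming the stock init scripts):
    service nfs stop
    chkconfig nfs off        # stop kernel nfsd from starting at boot
    chkconfig nfslock off    # optional; gluster's NFS provides its own NLM
    service rpcbind status   # rpcbind itself still needs to run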
21:52 siel joined #gluster
21:56 coredump joined #gluster
22:00 Peter1 got a super high io wait on gfs client....
22:00 Peter1 i have 10gE on both client and server....
22:01 Peter1 any tuning i should look into?
22:01 JoeJulian On what fop?
22:02 Peter1 fop
22:02 Peter1 ?
22:02 JoeJulian file operation
22:02 Peter1 write
22:02 JoeJulian Not all file operations are equal.
22:03 JoeJulian Just write, or lookup, lock, write, unlock, fsync, close?
22:04 Peter1 hmmm how can i tell?
22:05 JoeJulian strace, wireshark, read the source, use gluster volume profile... There's probably more but I'm starting to get tired.
22:07 Peter1 sorry…..
22:07 Peter1 how do i get gluster volume profiler from client?
22:08 Peter1 or i should just profile the volume?
22:08 JoeJulian Right, you would have to profile the volume from the server side.
22:08 sjm left #gluster
22:08 JoeJulian You could use profiler, but it adds its own overhead and probably wouldn't tell you much about i/o.
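The profiling JoeJulian refers to is run from any server in the pool; a minimal sketch (volume name is a placeholder):
    gluster volume profile myvol start
    gluster volume profile myvol info   # per-brick latency broken down by fop (WRITE, LOOKUP, FSYNC, ...)
    gluster volume profile myvol stop   # turn it off again, since it adds overhead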
22:14 bennyturns joined #gluster
22:14 bennyturns sry needed a break
22:15 Peter1 how do i use strace to find out?
22:21 bennyturns joined #gluster
22:24 Peter1 http://www.jamescoyle.net/how-to/439-mount-a-glusterfs-volume
22:24 Peter1 when it refers to a subvolume in the gluster/datastore.vol
22:24 Peter1 does that mean directories under a volume?
22:28 ninthBit would gluster 3.5 have any enhancements that make it easier to run scripts or tasks once the gluster service is running and the volume is online?  specifically to mount a glusterfs volume when it is up and running on localhost?
22:28 ninthBit my attempts at an upstart job have worked better than rc.local but still not perfect
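One common alternative to rc.local for the boot-time self-mount, sketched with placeholder names: an fstab entry with _netdev (and, on Ubuntu 12.04, nobootwait so mountall does not block if glusterd is still coming up):
    # /etc/fstab
    localhost:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,nobootwait  0 0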
22:36 glusterbot New news from resolvedglusterbugs: [Bug 878663] mount_ip and remote_cluster in fs.conf are redundant <https://bugzilla.redhat.com/show_bug.cgi?id=878663> || [Bug 924792] Gluster-swift does not allow operations on multiple volumes concurrently. <https://bugzilla.redhat.com/show_bug.cgi?id=924792> || [Bug 960889] G4S: PUT/GET/HEAD/DELETE request where file and containers are named in UTF-8 format fails <https://bugzilla.redhat.com/show_bug.cgi?id=
22:56 ninthBit i am having trouble finding the possibly obvious.  how would i restrict access to a gluster volume to specific glusterfs-fuse clients?  can the glusterfs-fuse client authenticate against the glusterfs volume?
23:07 andreask joined #gluster
23:20 ninthBit i think i found the only way is restriction based upon ip. which will work in our case
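The IP restriction is the auth.allow volume option; a minimal sketch (volume name and addresses are placeholders):
    gluster volume set myvol auth.allow 10.0.1.10,10.0.1.11   # only these clients may mount the volume
    gluster volume reset myvol auth.allow                     # back to the default of allowing everyone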
23:24 RicardoSSP joined #gluster
23:24 RicardoSSP joined #gluster
23:39 Paul-C joined #gluster
