IRC log for #gluster, 2013-04-19

All times shown according to UTC.

Time Nick Message
00:12 failshell hello. sometimes, mount reports the gluster volume as mounted, but in reality it's not. anyone else experiencing this? on 3.2
00:14 dmojoryder earlier today I brought up the advantages/disadvantages of the nfs client vs the native client when writing to a distributed volume. From my tests it's very clear that the native client implements the DHT hash and distributes the writes to the appropriate bricks, whereas nfs just writes to the connected glusterfs daemon, which then distributes. So by using the native client you can really scale writes much better
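
For context, a rough sketch of the two mount styles being compared; the server name, volume name, and mount points below are placeholders, not anything from this log:

    # Native (FUSE) client: the client itself computes the DHT hash and
    # writes directly to the brick that owns each file.
    mount -t glusterfs server1:/myvol /mnt/native

    # NFS client: every write goes to the one glusterfs NFS server that was
    # mounted, which then forwards it to the right brick (NFSv3 only).
    mount -t nfs -o vers=3 server1:/myvol /mnt/nfs
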
00:17 glusterbot New news from newglusterbugs: [Bug 953694] Requirements of Samba VFS plugin for glusterfs <http://goo.gl/v7g29>
00:18 awheeler_ joined #gluster
00:19 hagarth joined #gluster
00:33 san joined #gluster
00:33 san Can anyone explain how to achieve failover for the mount endpoint using the gluster native client?
00:34 san Also, if anyone has test cases and recovery procedures documented.
00:37 hagarth joined #gluster
00:59 hagarth joined #gluster
01:01 yinyin joined #gluster
01:29 itisravi joined #gluster
01:30 d3O joined #gluster
01:32 kevein joined #gluster
01:42 d3O joined #gluster
02:00 yinyin joined #gluster
02:18 d3O joined #gluster
02:25 JoeJulian left #gluster
02:25 JoeJulian joined #gluster
02:27 _pol joined #gluster
02:53 hchiramm_ joined #gluster
02:58 _pol joined #gluster
03:00 d3O joined #gluster
03:02 vshankar joined #gluster
03:50 portante joined #gluster
03:55 itisravi joined #gluster
03:58 itisravi joined #gluster
04:18 itisravi joined #gluster
04:19 y4m4 joined #gluster
04:22 kai_office joined #gluster
04:23 saurabh joined #gluster
04:24 itisravi joined #gluster
04:24 sgowda joined #gluster
04:26 itisravi joined #gluster
04:30 humbug joined #gluster
04:32 _pol joined #gluster
04:35 shylesh joined #gluster
04:44 hagarth joined #gluster
04:45 vpshastry joined #gluster
04:46 aravindavk joined #gluster
04:50 raghu joined #gluster
05:02 14WAAO1P6 joined #gluster
05:07 _pol joined #gluster
05:13 joeto joined #gluster
05:14 bala joined #gluster
05:25 mohankumar joined #gluster
05:25 deepakcs joined #gluster
05:31 lalatenduM joined #gluster
05:31 rotbeard joined #gluster
05:34 saurabh joined #gluster
05:38 saurabh joined #gluster
05:49 _pol joined #gluster
05:52 pai joined #gluster
05:56 shireesh joined #gluster
06:02 hflai joined #gluster
06:07 atrius joined #gluster
06:09 vimal joined #gluster
06:19 rgustafs joined #gluster
06:33 flrichar joined #gluster
06:33 nat joined #gluster
06:34 m0zes joined #gluster
06:35 ollivera joined #gluster
06:35 vex joined #gluster
06:35 vex joined #gluster
06:36 guigui1 joined #gluster
06:39 rastar joined #gluster
06:40 hchiramm_ joined #gluster
06:42 ricky-ticky joined #gluster
06:43 flrichar joined #gluster
06:44 johnmark joined #gluster
06:45 joeto joined #gluster
06:51 jiffe98 joined #gluster
06:54 puebele joined #gluster
06:58 hybrid512 joined #gluster
07:00 hybrid5121 joined #gluster
07:13 puebele joined #gluster
07:16 rb2k joined #gluster
07:19 Nevan joined #gluster
07:21 ctria joined #gluster
07:21 hybrid5121 joined #gluster
07:26 ujjain joined #gluster
07:28 satheesh joined #gluster
07:39 tjikkun_work joined #gluster
07:41 aravindavk joined #gluster
07:46 mohankumar joined #gluster
07:53 ngoswami joined #gluster
07:54 bulde joined #gluster
07:55 andreask joined #gluster
07:56 hybrid5121 joined #gluster
08:06 hybrid512 joined #gluster
08:26 hybrid5122 joined #gluster
08:27 karoshi joined #gluster
08:27 karoshi what's the recommended procedure for bringing a new empty brick online? In my tests, it seems to cause a freeze if done abruptly against a peer with lots of data
08:28 duerF joined #gluster
08:29 karoshi scenario: 2-brick replicated volume, client continuously accessing random files. One server is shut down, client keeps working without issue. Server is brought back online but with its brick empty, client freezes for a long time.
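
One commonly suggested approach for this scenario on 3.3 (a hedged sketch, not an answer given in this log; the volume name is a placeholder and behaviour differs between versions) is to let the self-heal daemon repopulate the empty brick in the background rather than having client lookups trigger all the heals at once:

    # after the server with the empty brick is back online
    gluster volume heal myvol full    # queue a full self-heal of the volume
    gluster volume heal myvol info    # watch which files are still pending
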
08:36 Norky joined #gluster
08:46 kevein joined #gluster
08:46 Taruk joined #gluster
08:47 edward1 joined #gluster
08:52 karoshi joined #gluster
08:52 bulde1 joined #gluster
08:53 Taruk Hi bulde, just found your comment @ http://major.io/2010/08/11/one-month-with-glusterfs-in-production/
08:53 glusterbot <http://goo.gl/5lavO> (at major.io)
09:01 vpshastry1 joined #gluster
09:07 m0zes joined #gluster
09:22 spider_fingers joined #gluster
09:43 clag_ joined #gluster
09:49 duerF joined #gluster
09:56 itisravi joined #gluster
10:28 d3O joined #gluster
10:32 vpshastry1 joined #gluster
10:32 vrturbo joined #gluster
10:37 ProT-0-TypE joined #gluster
11:01 Staples84 joined #gluster
11:05 dustint joined #gluster
11:07 piotrektt_ joined #gluster
11:10 Chiku|dc on the good replicated server, I got 2 different attrs for the same file, one for each client
11:10 Chiku|dc client-0=0x000000000000000000000000
11:10 Chiku|dc client-1=0x000007a80000000000000000
11:11 Chiku|dc and on the client-1 (the server with the bad file) I got this for this file
11:11 Chiku|dc client-0=0x000000090000000000000000
11:12 Chiku|dc client-1=0x000000090000000000000000
11:12 Chiku|dc self-heal doesn't heal it
11:13 Chiku|dc in fact it doesn't succeed in healing it
11:14 Chiku|dc it tries to heal every 10 minutes
11:14 Chiku|dc and doesn't flag it as heal-failed either
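
The values being quoted are the trusted.afr changelog xattrs stored on the bricks; they can be inspected directly with getfattr. A sketch, with placeholder volume and brick paths:

    # run on each brick; a non-zero counter in trusted.afr.<volname>-client-N
    # means pending operations against that replica
    getfattr -d -m trusted.afr -e hex /export/brick1/path/to/file

    # 3.3 can also list heals that were attempted and failed
    gluster volume heal myvol info heal-failed
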
11:17 Taruk what gfs ver?
11:19 Chiku|dc 3.3.1
11:22 karoshi joined #gluster
11:34 bulde joined #gluster
11:49 hybrid5122 joined #gluster
12:15 hagarth joined #gluster
12:17 vpshastry joined #gluster
12:29 sjoeboo_ joined #gluster
12:31 duerF joined #gluster
12:31 balunasj joined #gluster
12:35 awheeler_ joined #gluster
12:50 glusterbot New news from newglusterbugs: [Bug 953887] [RHEV-RHS]: VM moved to paused status due to unknown storage error while self heal and rebalance was in progress <http://goo.gl/tw8oW>
12:54 bennyturns joined #gluster
12:54 aliguori joined #gluster
12:56 jclift joined #gluster
13:02 chirino joined #gluster
13:06 jdarcy joined #gluster
13:12 ujjain joined #gluster
13:25 zhashuyu joined #gluster
13:34 mohankumar joined #gluster
13:44 spider_fingers joined #gluster
13:47 hagarth joined #gluster
13:51 dblack_ joined #gluster
13:53 SteveCooling Ran into a problem setting up geo-replication. Latest CentOS x86_64 and GlusterFS 3.3.1. https://dl.dropboxusercontent.com/u/683331/glusterlog.txt
13:53 glusterbot <http://goo.gl/79NuE> (at dl.dropboxusercontent.com)
13:53 SteveCooling SSH to the slave works fine using the identity file gluster uses.
13:56 SteveCooling Any tips?
14:00 Umarillian1 joined #gluster
14:03 vpshastry joined #gluster
14:11 tjstansell does anyone know of an update to the question of using NFS with gluster alongside the system NFS daemon to export other volumes?  http://community.gluster.org/q/gluster-nfs-existing-nfs-server/ indicates it shouldn't be done.  i'm assuming that's still the case?
14:11 glusterbot <http://goo.gl/BqoNq> (at community.gluster.org)
14:19 semiosis tjstansell: afaik it's impossible to have two nfs servers running on the same host
14:19 semiosis just my 2c, i'm no expert on it though
14:20 tjstansell well, i was tempted to try to restrict the system one to certain nics and the gluster one to another nic ... but i don't think i can do that anyway...
14:20 glusterbot New news from newglusterbugs: [Bug 951800] AFR fops fail to propagate xdata <http://goo.gl/qhG6l>
14:23 karoshi joined #gluster
14:23 andreask joined #gluster
14:27 tjstansell when we first tried to test nfs ... saw tons of issues with stale nfs file handles and permissions with ??????? showing up in ls output.
14:28 tjstansell i'm starting to think maybe the system nfs stuff was partially still running ... and things were colliding.... because it seems to be working this time ...
14:30 tjstansell we are using glusterfs to replicate an administrative filesystem... so it has lots of smaller files (though some iso files as well)... and native gluster access was horribly slow.
14:30 tjstansell so looking at using nfs.
14:37 spider_fingers left #gluster
14:37 tjstansell hm... one of our applications that uses the data on this gluster volume currently uses flock() ... i'm assuming that's not supported since it doesn't seem to be working...
14:37 tjstansell over NFS, that is.
14:38 ndevos tjstansell: yeah, thats documented in 'man 2 flock'
14:44 bugs_ joined #gluster
15:07 jbrooks joined #gluster
15:09 tjstansell any suggestions on well-tested locking mechanisms on glusterfs over NFS?  I'm looking at File::NFSLock right now ... uses the method of creating a hardlink to a file and checking the link count.
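
The hardlink trick File::NFSLock uses works because the LINK call is performed atomically on the NFS server; the link count is checked afterwards because the reply to ln itself can be lost over NFS. A rough shell sketch of the same idea (file names are invented):

    tmp="lock.$$.$(hostname)"
    echo $$ > "$tmp"
    ln "$tmp" lockfile 2>/dev/null          # ignore ln's exit status
    if [ "$(stat -c %h "$tmp")" -eq 2 ]; then   # link count is the real check
        echo "lock acquired"
        # ... critical section ...
        rm -f lockfile
    fi
    rm -f "$tmp"
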
15:12 iatecake joined #gluster
15:34 nueces joined #gluster
15:35 zaitcev joined #gluster
15:51 _pol joined #gluster
15:58 SteveCooling Regarding the geo-replication: I seem to have got it fixed. The manual is a little vague, I think, on the slave needing to actually run glusterd. Also I forgot to install rsync everywhere.
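
For reference, a sketch of the 3.3 setup sequence those two requirements (glusterd running on the slave, rsync on both ends) feed into; the volume and host names are placeholders, and host::volume is only one of the slave URL forms 3.3 accepts:

    # on the master, once passwordless ssh to the slave works
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status
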
16:04 _pol joined #gluster
16:05 daMaestro joined #gluster
16:08 mriv I have weirdness happening with glusterfs and git .. has anyone tried to check out a git repo in a glusterfs mount?
16:08 glennpratt joined #gluster
16:08 * glennpratt waves at rb2k
16:08 rb2k ha
16:09 rb2k anybody got an idea why gluster peer status returns "Peer status unsuccessful"
16:09 rb2k even though I can create a file on one node and it shows up on the other one
16:09 rb2k "volume status all" doesn't return anything
16:10 ndevos glusterd is the daemon that handles peer status requests (and more), glusterfsd (brick) and glusterfs (mount) are not talking to glusterd after mounting
16:11 ndevos so, something borked up your glusterd process, check in /var/log/glusterfs/etc-gluster....log and verify glusterd is running
16:11 rb2k it is
16:12 rb2k glusterfsd isn't for some reason
16:12 mriv ??????????? ? ?     ?        ?            ? VD.war  <-- this is what happens when you remove a folder and check out a git repo in its place on glusterfs and something has a file descriptor open on it before you deleted it
16:12 mriv lol
16:13 ndevos glusterfsd does the reading/writing to the filesystem on the bricks - you mentioned that is still happenening?
16:13 ndevos s/happenening/hapening/ ?
16:13 mriv ndevos: are you asking me?
16:14 ndevos mriv: no, thats for rb2k
16:14 mriv sorry
16:14 rb2k yes it is
16:14 rb2k gluster 3.3.2qa1
16:15 rb2k a ps just shows glusterfs and glusterd
16:15 ndevos rb2k: then it is extremely strange that the files land on the bricks!
16:16 rb2k certainly is
16:16 rb2k does there have to be one per machine?
16:17 ndevos there *should* be one glusterfsd per brick
16:18 semiosis ,,(processes)
16:18 glusterbot the GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/hJBvL for more information.
16:22 rb2k the link mentions glusterfs twice
16:22 rb2k the second time it should probably glusterhd
16:23 rb2k I don't even have glusterhd
16:23 rb2k gluster     glusterd    glusterfs   glusterfsd
16:23 rb2k those I have as binaries
16:28 Norky glustershd
16:28 Norky you're missing an s
16:31 rotbeard joined #gluster
16:32 Mo____ joined #gluster
16:34 semiosis rb2k: read the page again... glustershd was introduced in version 3.3
16:35 rb2k don't have it either, that was gluster[tab][tab]
16:35 semiosis and yes it's actually a 'glusterfs' process but it's playing the role of glustershd
16:35 rb2k semiosis: #gluster --version
16:35 rb2k glusterfs 3.3.2qa1 built on Apr 17 2013 16:01:25
16:35 semiosis wha?!
16:35 hagarth joined #gluster
16:35 rb2k semiosis: oh, so it's not called glustershd?
16:35 semiosis well i didnt expect that :)
16:36 semiosis the whole command line of the running shd process is in the article
16:36 rb2k semiosis: thanks for the help again?
16:36 semiosis i copied that right out of ps
16:36 rb2k (ignore the question mark)
16:36 semiosis :)
16:37 rb2k ah, ok
16:37 rb2k so I don't have to panic that I don't have that binary
16:37 rb2k or that there isn't a process with that name
16:37 semiosis so those binaries you mentioned, there's really only two *binaries* -- gluster & glusterfsd, the others (glusterd & glusterfs) are symlinks
16:37 semiosis iirc
16:37 rb2k 0 lrwxrwxrwx 1 root root 10 2013-04-19 08:40 /usr/sbin/glusterd -> glusterfsd
16:37 rb2k phew
16:38 rb2k so in the end it's ok that there are only two processes running?
16:38 rb2k # pgrep -l gluster     ----->   24808 glusterd      26065 glusterfs
16:39 semiosis idk exactly what circumstances cause glustershd to be started... but in the normal case where a server has a brick in a replicated volume there should be a glustershd running
16:39 rb2k but there is no binary by that name?
16:39 rb2k so it would be another "glusterd"
16:40 _pol joined #gluster
16:40 rb2k because currently there are only two running and things seem to be mostly working
16:40 rb2k but making me doubt my setup :)
16:40 semiosis idk what to say
16:40 semiosis too busy today to try to reproduce :(
16:41 rb2k no worries
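
A quick way to audit which of the daemons glusterbot listed are actually running, and whether the self-heal daemon is up for a replicated volume; the volume name is a placeholder and the exact status output varies by version:

    pgrep -fl gluster            # expect glusterd, one glusterfsd per brick,
                                 # and glusterfs for mounts/NFS/shd
    gluster volume status myvol  # 3.3+ also lists NFS Server and Self-heal Daemon
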
16:42 humbug joined #gluster
17:03 portante joined #gluster
17:20 cw joined #gluster
17:22 CROS_ joined #gluster
17:23 Matthaeus1 joined #gluster
17:24 Umarillian1 Do replicated volumes always have a 50% drive capacity loss? Or does that percentage go down as you add more bricks? I'm quite new. Getting confused by some of the things I am reading.
17:25 cicero if you have a replicated volume you need 2x the sapce
17:25 cicero space*
17:25 cicero so that might be the 50% drive capacity loss you're talking about?
17:25 Umarillian1 Yes. that explains it.
17:27 Umarillian1 So if we have 30 TB on 4 servers we only have 15 TB usable.
17:27 Umarillian1 Is there a volume type which scales less harshly; say, one that allows (x count) hosts to drop before data loss occurs? Similar to raid 5 scaling?
17:28 samppah nope
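
The arithmetic follows directly from the replica count: with replica 2, usable capacity stays at half the raw capacity no matter how many servers are added. A hypothetical 4-server distributed-replicated layout (hostnames and brick paths are invented):

    # 4 bricks with replica 2 => 2 replica pairs, usable = raw / 2
    gluster volume create myvol replica 2 \
        server1:/export/brick1 server2:/export/brick1 \
        server3:/export/brick1 server4:/export/brick1
    # e.g. 4 x 7.5 TB bricks = 30 TB raw, roughly 15 TB usable
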
17:28 Uzix joined #gluster
17:31 aliguori joined #gluster
17:42 Uzix joined #gluster
17:45 jag3773 joined #gluster
17:50 ctria joined #gluster
18:03 portante joined #gluster
18:03 ujjain2 joined #gluster
18:06 matt_grill_ joined #gluster
18:06 matt_grill_ left #gluster
18:07 Umarillian1 Is there a way to get the volume mounted on reboot? Just an upstart job, or is there a built-in mechanism? Apologies for all the questions.
18:08 matt_grill_ joined #gluster
18:09 iatecake left #gluster
18:12 stickyboy joined #gluster
18:13 wN joined #gluster
18:16 samppah Umarillian1: you mean that client mounts the glusterfs share on boot?
18:16 Umarillian1 Well I am intending to export it with NFS; but it needs to be mounted in order to be exported, as far as I am aware.
18:16 samppah ah, ok
18:17 Umarillian1 If you reboot and it never mounts locally, then the export can't take place, I believe.
18:17 samppah sorry, i'm not very familiar with upstart :(
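
For a client mount at boot, the usual route is an /etc/fstab entry rather than a hand-rolled upstart job. A sketch (server, volume, and mount point are placeholders; on some distros an extra option such as _netdev or Ubuntu's nobootwait is needed so boot doesn't hang before the network is up):

    # /etc/fstab
    server1:/myvol  /mnt/gluster  glusterfs  defaults,_netdev  0 0
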
18:24 jskinner_ joined #gluster
18:30 JoeJulian Umarillian1: The best thing to do is to take everything you think you know about raid, and assume it has absolutely nothing to do with clustered storage.
18:30 Umarillian1 I've started doing that =P
18:30 Umarillian1 Thanks; just trying to figure it out based on previous knowledge. That wasn't working, so I'm making progress; slowly, but making progress.
18:31 ninkotech joined #gluster
18:31 JoeJulian I did the same thing when I started. Unfortunately there was nobody to offer me that advice back then.
18:31 ninkotech_ joined #gluster
18:31 Umarillian1 JoeJulian: Oh, believe me I am grateful.
18:36 shylesh joined #gluster
18:37 lh joined #gluster
19:00 flrichar joined #gluster
19:02 jack joined #gluster
19:10 _pol joined #gluster
19:11 _pol joined #gluster
19:14 bennyturns joined #gluster
19:20 Umarillian1 I've run into an issue where I can't start, stop, modify, or make any changes to a volume; I just keep getting command unsuccessful messages. Peer status indicates the two devices are connected. Anyone seen this before? Brand new install, and it occurred after a reboot.
19:21 theron joined #gluster
19:22 semiosis Umarillian1: check your glusterd log file, /var/log/glusterfs/etc-glusterfs-glusterd.log, for more information
19:22 semiosis pastie.org the log if you want
19:24 Nagilum_ Umarillian1: if the log gives no clear answer, maybe compare /var/lib/glusterd/peers/ between the hosts
19:24 Umarillian1 Looks like one of the peers UUIDs changed?
19:24 Umarillian1 0-management: b5226a08-2556-457d-a02c-e30599a071de doesn't belong to the cluster. Ignoring request.
19:25 JoeJulian That does look that way.
19:25 semiosis Umarillian1: thats unusual. you can write the correct uuid into /var/lib/glusterd/glusterd.info on the changed server, then restart glusterd, and hopefully it will be ok
19:25 JoeJulian You can figure out which server has that uuid by looking in /var/lib/glusterd/glusterd.info
19:26 JoeJulian You could probably even change that to the uuid you expect by looking in /var/lib/glusterd/peers on another server and looking to see which uuid is missing from the glusterd.info files.
19:27 Nagilum_ Umarillian1: I'd make a backup of /var/lib/glusterd before anything :>
19:27 JoeJulian I wouldn't, but I'm a maverick!
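
Putting those steps together, a cautious sketch of checking and repairing a mismatched peer UUID; paths assume a 3.3-style layout under /var/lib/glusterd, and the restart command assumes a SysV-style init:

    # on every server: the local UUID
    cat /var/lib/glusterd/glusterd.info
    # on every server: the UUIDs it expects its peers to have
    ls /var/lib/glusterd/peers/

    # back up first, then set the expected UUID on the changed server
    cp -a /var/lib/glusterd /var/lib/glusterd.bak
    vi /var/lib/glusterd/glusterd.info      # UUID=<expected uuid>
    service glusterd restart
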
19:40 Umarillian1 joined #gluster
19:42 Umarillian1 Ugh "check if daemon is operational"
19:43 Umarillian1 I've tried purging and reinstalling but still no luck. -_- ~shakes head~
19:46 nueces joined #gluster
19:58 Umarillian1 Is there a way to uniformly blank the gluster configuration?
20:01 Umarillian1 Nevermind that. Apologies.
20:35 sjoeboo_ joined #gluster
20:45 Jippi joined #gluster
21:00 cw joined #gluster
21:00 rb2k joined #gluster
21:04 sjoeboo_ joined #gluster
21:21 failshell joined #gluster
21:22 failshell hello. i'm trying to use backupvolfile-server=foo in /etc/fstab, but it doesn't work. when the primary server is unavailable, it's not using the backup to mount the volume.
21:22 failshell anyone can help with that?
21:28 gdavis33 can anyone shed some light on why geo-replication is so painfully slow?
21:42 rjosec joined #gluster
21:43 sjoeboo_ joined #gluster
21:54 rcheleguini joined #gluster
21:56 rjcheleguini joined #gluster
22:05 xavih joined #gluster
22:10 semiosis failshell: use ,,(rrdns)
22:11 glusterbot failshell: You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/ktI6p
22:13 pull joined #gluster
22:19 failshell semiosis: i wonder if Windows' DNS servers support that
22:20 semiosis failshell: probably.  it's just a fancy name for having multiple A records (different IPs) for the same name
22:20 semiosis pretty standard
22:20 failshell yeah i know, use it all the time with bind
22:20 semiosis oh ok
22:20 failshell but i wonder why that option with fstab doesn't work
22:21 failshell also, once mounted, if the server fails, that doesn't seem to matter, as the client is smart enough to switch to another server in the cluster
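
The two approaches being discussed, sketched side by side; names and addresses are invented, and the exact spelling of the backup-server mount option has varied between packages and versions, which may be why it isn't taking effect:

    # /etc/fstab with an explicit backup volfile server
    server1:/myvol /mnt/gluster glusterfs defaults,_netdev,backupvolfile-server=server2 0 0

    # or rrdns: one name, multiple A records (BIND zone file syntax),
    # then mount from gluster.example.com:/myvol
    gluster    IN A 192.0.2.11
    gluster    IN A 192.0.2.12
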
22:39 rb2k joined #gluster
23:23 zaitcev joined #gluster
23:50 awheeler_ joined #gluster
