IRC log for #gluster, 2013-04-22


All times shown according to UTC.

Time Nick Message
00:02 hagarth_ joined #gluster
00:06 avati joined #gluster
00:18 raghavendrabhat joined #gluster
00:20 hagarth_ joined #gluster
00:20 avati joined #gluster
00:30 raghavendrabhat joined #gluster
00:34 hagarth_ joined #gluster
00:42 raghavendrabhat joined #gluster
00:50 badone__ joined #gluster
00:51 hagarth_ joined #gluster
00:56 NeatBasis joined #gluster
00:56 zhashuyu joined #gluster
00:57 koodough joined #gluster
01:01 koodough1 joined #gluster
01:03 raghavendrabhat joined #gluster
01:06 humbug__ joined #gluster
01:06 NeatBasis joined #gluster
01:14 d3O joined #gluster
01:16 avati joined #gluster
01:23 kevein joined #gluster
01:25 hagarth_ joined #gluster
01:33 nicolasw joined #gluster
01:42 raghavendrabhat joined #gluster
01:47 raghavendrabhat joined #gluster
01:49 portante joined #gluster
01:57 raghavendrabhat joined #gluster
02:04 hagarth_ joined #gluster
02:08 d3O_ joined #gluster
02:09 d3O joined #gluster
02:19 raghaven1rabhat joined #gluster
02:19 avati joined #gluster
02:21 _pol joined #gluster
02:30 raghavendrabhat joined #gluster
02:50 bharata joined #gluster
03:32 shylesh joined #gluster
03:53 itisravi joined #gluster
04:05 raghu joined #gluster
04:14 bulde joined #gluster
04:18 sjoeboo_ joined #gluster
04:25 domnic joined #gluster
04:30 bala joined #gluster
04:45 vpshastry joined #gluster
04:55 zhashuyu joined #gluster
04:55 satheesh joined #gluster
04:57 _pol joined #gluster
05:03 sgowda joined #gluster
05:05 mohankumar joined #gluster
05:06 saurabh joined #gluster
05:11 satheesh joined #gluster
05:13 bulde1 joined #gluster
05:14 bala joined #gluster
05:19 lalatenduM joined #gluster
05:27 vshankar joined #gluster
05:30 aravindavk joined #gluster
05:33 glusterbot New news from newglusterbugs: [Bug 928656] nfs process crashed after rebalance during unlock of files. <http://goo.gl/fnZuR> || [Bug 950024] replace-brick immediately saturates IO on source brick causing the entire volume to be unavailable, then dies <http://goo.gl/RBGOS> || [Bug 954057] do not do root squashing for the clients mounted in the storage pool <http://goo.gl/RGwh6> || [Bug 917901] Mismatch in calculation for
05:37 lh joined #gluster
05:37 lh joined #gluster
05:44 sjoeboo_ joined #gluster
05:48 aravindavk joined #gluster
05:53 rastar joined #gluster
05:55 shireesh joined #gluster
06:10 d3O joined #gluster
06:14 deepakcs joined #gluster
06:17 guigui3 joined #gluster
06:21 rotbeard joined #gluster
06:26 ricky-ticky joined #gluster
06:43 hagarth joined #gluster
06:45 vimal joined #gluster
06:59 ctria joined #gluster
07:04 hybrid512 joined #gluster
07:05 saurabh joined #gluster
07:08 tjikkun_work joined #gluster
07:19 ollivera joined #gluster
07:23 saurabh joined #gluster
07:25 keerthi joined #gluster
07:25 keerthi Hi
07:25 glusterbot keerthi: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
07:26 keerthi Does glusterfs work the same way as Swift storage?
07:26 keerthi Can anyone say whether it is possible to integrate glusterfs with CloudStack?
07:27 samppah afaik it's possible to use glusterfs with cloudstack
07:27 bulde joined #gluster
07:35 NuxRo keerthi: you can use a mounted gluster volume in cloudstack as primary storage
07:37 NuxRo keerthi: glusterfs is a filesystem, swift is not, they are not the same
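
For readers following this thread: a minimal sketch of what NuxRo describes, with hostname, volume name and mountpoint as placeholders. The gluster volume is mounted on the hypervisor and the mountpoint is then registered in CloudStack as primary storage (for example as a shared mount point on KVM hosts):

    mount -t glusterfs gluster1:/primary /mnt/primary
    # /mnt/primary is what you would point CloudStack at as primary storage
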
08:00 spider_fingers joined #gluster
08:01 ngoswami joined #gluster
08:08 rb2k joined #gluster
08:10 jkroon joined #gluster
08:12 gbrand_ joined #gluster
08:19 raghu joined #gluster
08:21 spider_fingers joined #gluster
08:30 ndevos hey NuxRo, when you updated with --force, was glusterd running before and after?
08:47 lanning joined #gluster
08:47 NuxRo ndevos: yes
08:47 NuxRo glusterd was running before
08:47 NuxRo but somehow was not after
08:47 NuxRo you saw the logs
08:48 * NuxRo bbl 1h
08:48 ndevos sure
08:49 ndevos NuxRo: the logs confused me a little, the condrestart seemed to work okay, but something else asked glusterd to stop...
08:50 ujjain joined #gluster
08:53 bala joined #gluster
08:53 ChikuLinu__ joined #gluster
09:01 DEac- joined #gluster
09:01 DEac- hi
09:01 glusterbot DEac-: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:04 DEac- i run gluster and i wonder why there is only minimal information about cifs and nfs. i must configure glusterfs to provide smb/cifs. i read the docs and thought 'no configuration needed?', but i cannot find the process which provides smb/cifs, only nfs. which process does smb/cifs? or do i have to configure gluster to serve cifs/smb?
09:04 vpshastry1 joined #gluster
09:04 rastar1 joined #gluster
09:12 itisravi joined #gluster
09:17 sgowda joined #gluster
09:19 ujjain joined #gluster
09:22 ollivera left #gluster
09:23 jikz joined #gluster
09:27 clag_ joined #gluster
09:33 brunoleon__ joined #gluster
09:36 ndevos DEac-: you would use samba to provide the cifs functionality
09:53 sgowda joined #gluster
09:55 rastar joined #gluster
10:11 DEac- ndevos: ok
10:11 DEac- thanks
10:12 duerF joined #gluster
10:26 edward1 joined #gluster
10:37 rb2k I'm very confused
10:37 rb2k I don't run glusterfsd
10:38 rb2k yet everything seems to be working
10:38 rb2k I just see these two:
10:38 rb2k usr/sbin/glusterfs --volfile=/etc/glusterfs/glusterfs-client.vol /mnt/gfs
10:38 rb2k usr/sbin/glusterd -p /var/run/glusterd.pid
10:38 rb2k in ps -ef
10:39 rb2k according to http://community.gluster.org/q/what-processes-does-glusterfs-run-during-normal-operation/ , "glusterfsd is the process that serves brick export directories out over the network"
10:39 glusterbot <http://goo.gl/QoAag> (at community.gluster.org)
10:39 rb2k maybe this works because for my setup, I run both, the client and the glusterd server on all machines?
10:42 ndevos rb2k: what version of glusterfs is that, starting the glusterfs process by hand and passing a .vol file is not needed anymore
10:43 rb2k glusterfs 3.3.2qa1 built on Apr 21 2013 11:24:51
10:43 rb2k :)
10:44 rb2k who would start the glusterfs process in that case? that's the one initialising the mounts, right?
10:44 ndevos rb2k: how did you get that /etc/glusterfs/glusterfs-client.vol file? normal usage is to create a volume through the gluster command
10:44 rb2k puppet
10:44 ndevos yeah, list the mountpoint in /etc/fstab
10:44 rb2k has been there since 3.0 for us
10:44 ndevos right, thats a pretty old way of doing things, the current releases automate much more
10:44 rb2k I could understand that the glusterfs doesn't need to be started manually anymore
10:45 rb2k but the missing glusterfsd is really confusing to me
10:46 ndevos I guess your /etc/glusterfs/glusterfs-client.vol contains storage/posix translators, those are normally part of the .vol file used by the glusterfsd processes
10:46 rb2k https://gist.github.com/rb2k/4b1437047382c9ba1578
10:47 glusterbot <http://goo.gl/89TwH> (at gist.github.com)
10:47 rb2k the glistered.vol however does: https://gist.github.com/rb2k/dfc59b4102315a0e61be
10:47 glusterbot <http://goo.gl/mmyKh> (at gist.github.com)
10:47 ndevos the now common way looks like: glusterfs with protocol/client <-> glusterfsd with protocol/server .... storage/posix
10:47 rb2k *glusterd
10:48 rb2k oh, interesting
10:48 rb2k yeah, we should probably update things
10:48 rb2k seeing as every single piece of documentation confuses me when looking at our setup :)
10:49 ndevos ah, looks like your glusterd contains the translator for storage/posix - that means your glusterd does the writing to the brick
10:50 rb2k aka the job of glusterfsd?
10:51 ndevos yeah
10:51 rb2k ha
10:51 rb2k do you happen to know if that has any disadvantages?
10:51 ndevos now glusterd only does the management, and it starts a glusterfsd per brick
10:52 rb2k ohh, so I'd have an init.d/upstart script for glusterd
10:52 ndevos well, restarting glusterd does normally not affect the serving of bricks, it does in your case
10:52 rb2k and an fstab entry for glusterfs-client
10:52 rb2k glusterd would start glusterfsd
10:52 rb2k the mounting would start the client
10:52 ndevos correct
10:52 rb2k and voila, 3 daemons
10:52 rb2k 1 upstart script
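
For reference, the fstab-driven client mount being described here usually looks something like the following (server name, volume name and mountpoint are placeholders); glusterd itself is started from its init/upstart job and spawns one glusterfsd per local brick:

    # /etc/fstab
    server1:/myvol  /mnt/gfs  glusterfs  defaults,_netdev  0 0
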
10:53 ndevos is there a reason to not use the ,,(ppa) builds?
10:53 glusterbot The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
10:53 rb2k ndevos: oh we do.
10:53 rb2k We currently just have to build it ourselves because we want a fix from the QA branch
10:54 rb2k but our puppet setup still creates those .vol files
10:54 rb2k and adds an init.d script
10:54 ndevos okay, but those packages should come with an upstart config already?
10:54 rb2k we probably replace that in puppet
10:54 ndevos looks like you have a half-updated-system :-/
10:55 rb2k yeah
10:55 rb2k it works (tm)
10:55 rb2k :)
10:55 rb2k I'll try to get it up-to-date though
10:55 rb2k because I've been staring at the documentation for way too long now
10:55 ProT-0-TypE joined #gluster
10:55 rb2k thanks for helping btw
10:58 rb2k ndevos: you mentioned that the command line is the default way to create volumes
10:58 rb2k but that would usually just generate the vol files too, right?
10:58 rb2k or does it do something in addition to that
10:59 ndevos the command utility connects to glusterd which creates the .vol files and starts the glusterfsd (maybe setting some xattrs somewhere)
11:01 rb2k ok, and once the vol files are in place, the upstart job would take care of that on the next start
11:01 rb2k the gluster command line utility in our current setup doesn't really want to do anything useful. I assume it might be pissed because glusterfsd is missing
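
For comparison with the hand-written .vol setup, a sketch of the CLI-driven workflow ndevos describes (host names, volume name and brick paths are placeholders); glusterd generates the .vol files and starts a glusterfsd per brick itself:

    gluster peer probe server2
    gluster volume create myvol replica 2 server1:/export/brick1 server2:/export/brick1
    gluster volume start myvol
    mount -t glusterfs server1:/myvol /mnt/gfs
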
11:04 rb2k ndevos: who usually starts gsyncd.py ?
11:05 ndevos I'd guess glusterd does that
11:16 hagarth joined #gluster
11:20 sgowda joined #gluster
11:33 manik joined #gluster
11:35 rcheleguini joined #gluster
11:39 vpshastry1 joined #gluster
11:47 rwheeler joined #gluster
11:47 sgowda joined #gluster
11:50 hybrid512 joined #gluster
11:55 NeatBasis joined #gluster
12:01 bulde1 joined #gluster
12:02 nicolasw joined #gluster
12:08 Chiku|dc joined #gluster
12:08 flrichar joined #gluster
12:13 bulde joined #gluster
12:14 bulde1 joined #gluster
12:19 piotrektt_ joined #gluster
12:22 yongtaof joined #gluster
12:29 yongtaof joined #gluster
12:30 yongtaof Dear glusterfs experts
12:30 yongtaof I want to ask a question
12:30 dastar_ joined #gluster
12:30 sjoeboo_ joined #gluster
12:30 yongtaof There's a xlator called features/index
12:31 yongtaof and it uses a index-base directory which is under .glusterfs/indices
12:31 yongtaof can I change it to some other places
12:31 yongtaof anybody helps?
12:32 dastar joined #gluster
12:33 yongtaof We have encountered an xfs shutdown issue related to fops under the ./glusterfs/indices/xattrop directory
12:33 yongtaof so I am working on a plan to move this directory out of xfs filesystem
12:34 yongtaof any body answers?
12:40 yongtaof ?
12:43 awheeler_ joined #gluster
12:43 kkeithley no doubt someone will answer once they've woken up and gotten to work. I expect most of the developers in India — where it's 8PM — have gone home for the day. And many of the volunteers who live on the US west coast — where it's 5:45AM — probably haven't even woken up.
12:45 yongtaof ok thank you I'll wait and it's 21:00 here
12:48 nicolasw joined #gluster
12:48 bennyturns joined #gluster
12:49 umarillian1 joined #gluster
12:51 NeatBasis_ joined #gluster
12:51 dustint joined #gluster
12:53 aliguori joined #gluster
13:01 umarillian1 I keep getting permission Denied when trying to write to a default gluster exported NFS. I've tried finding something on the help FAQ but most responses are to come here? Could anyone perhaps point me in the right direction?
13:01 umarillian1 -2xnode ubuntu 12.04 cluster
13:01 umarillian1 -Distributed; Local LVM used as mount;
13:01 umarillian1 -both windows and linux mount.nfs clients get the same error ( using version 3 )
13:01 manik joined #gluster
13:09 itisravi joined #gluster
13:12 vpshastry1 left #gluster
13:17 fleducquede umarillian1,
13:17 fleducquede i remember having this error :S
13:17 tryggvil joined #gluster
13:17 fleducquede what do u type to mount ur mountpoint ?
13:18 fleducquede umarillian1, ?
13:25 umarillian1 Apologies, Got drawn away into another discussion.
13:26 umarillian1 I've been trying various solutions but my current mount string is this
13:26 umarillian1 sudo mount -t nfs -o vers=3 192.168.33.127:/backups /mnt/nfs/
13:30 theron joined #gluster
13:31 aravindavk joined #gluster
13:39 Nagilum_ umarillian1: check the exports with "showmount -e 192.168.33.127"
13:40 umarillian1 The exports show up fine.
13:40 Nagilum_ then tail -f the syslog of 192.168.33.127 when trying to mount
13:40 jdarcy joined #gluster
13:41 Nagilum_ on HP-UX I had to use "/sbin/fs/nfs/mount -o vers=3,port=38467 nfs://g9t3030/gv01 /mnt/gv01" to mount
13:41 Nagilum_ maybe you also need to specify the port
13:42 umarillian1 Even though it mounts fine? it just can't "
13:42 umarillian1 Write"
13:42 Nagilum_ oh
13:42 Nagilum_ I thought mounting was the problem
13:42 Nagilum_ can you write to the gfs on 192.168.33.127?
13:43 umarillian1 Apologies, Yea I am just unable to write while mounted with NFS.
13:43 Nagilum_ but if you're logged on to 192.168.33.127, can you write to the gfs there - locally
13:44 Nagilum_ if you mount it using glusterfs
13:44 umarillian1 I'll try that; just a moment
13:46 umarillian1 I fail to write there as well
13:46 Nagilum_ then check your /var/log/glusterfs/ logs
13:47 kkeithley And what are the permissions on the brick volume(s)?
13:48 sjoeboo_ joined #gluster
13:48 umarillian1 I think I might've messed up the permissions on the mount points. Let me check that out.
13:55 umarillian1 Manually specified mounts to be read-write; Still getting issues; I'll check gluster logs
13:56 jclift_ joined #gluster
13:58 yongtaof anybody familiar with the .glusterfs/indices/ directory?
13:59 mohankumar joined #gluster
13:59 jdarcy yongtaof: Somewhat.
14:00 yongtaof a directory under each brick dir
14:00 yongtaof which is used for glusterfs features/index xlator
14:01 dustint joined #gluster
14:02 karoshi joined #gluster
14:03 lh joined #gluster
14:03 lh joined #gluster
14:04 karoshi I'm experiencing freezes on the clients when I bring back online a previously failed brick which has less data than the brick that remained online (2-bricks replica volume). It looks like the client freezes until the brick with less stuff has fully synchronized. Is this expected?
14:07 jdarcy karoshi: I wouldn't expect it.  Access to a particular file might pause while it's healed, but there shouldn't be a volume-wide pause.
14:09 karoshi testing: client constantly creates new files; one brick is shut off; client keeps working creating files on the only brick left; when this brick has some thousand files more than the offline one, bring the second one back online
14:09 lpabon joined #gluster
14:09 karoshi as soon as it gets online, client freezes, and I see (by looking at the brick) that missing files are appearing
14:09 karoshi *on the brick that was offline
14:10 karoshi when almost all the files have appeared, client unfreezes
14:10 karoshi it's a bit of a corner case scenario, but I figured I'd ask anyway, perhaps I'm doing something wrong
14:12 karoshi more on client: it's running a script that basically generates a random filename, random data, and creates a file with that name and that content, all this all the time without interruptions
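
A minimal sketch of the kind of test loop karoshi describes (the real script is in the paste linked further down; the mountpoint and directory here are placeholders):

    #!/bin/bash
    # keep creating files with random names and random content on the gluster mount
    while true; do
        name=/mnt/gfs/testdir/file.$RANDOM.$RANDOM
        dd if=/dev/urandom of="$name" bs=1k count=$(( (RANDOM % 64) + 1 )) 2>/dev/null
    done
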
14:12 jkroon joined #gluster
14:13 partner on a distributed setup, is rebalancing with data migration the suggested way of keeping all bricks in balance (with all the default settings for a dist setup)? to avoid the oldest ones filling up before the new ones and so forth. also with force, i guess, to cut down the vast amounts of links?
14:13 karoshi (again, a corner case, but good for testing I'd say)
14:13 karoshi this is gluster 3.3.1 on debian squeeze, if it helps
14:15 jdarcy partner: Yes.
14:15 jdarcy karoshi: What you're probably seeing is that the *directory* remains locked while missing files are populated into it.
14:16 partner jdarcy: rgr, thought so too but getting more cautious now in production with terabytes of data already..
14:16 karoshi jdarcy: probably
14:16 karoshi the client is writing all files into the same dir
14:17 karoshi is there a way to avoid this?
14:18 jdarcy partner: Understandable.  Not sure about the "force" part (links are innocuous) but periodic rebalancing is important to keep the distribution even.
14:18 jdarcy karoshi: Not really.  The only way we could avoid it would be to create zero-length files while the directory is locked and then populate them when it's unlocked, but most people would consider that incorrect behavior.
14:19 jdarcy karoshi: Without the locking we'd *definitely* have incorrect behavior due to race conditions.
14:19 NeatBasis joined #gluster
14:20 jdarcy karoshi: That said, there *are* supposed to be mechanisms in place to prevent self-heal from starving out regular I/O.  Odd that they're not kicking in.
14:20 karoshi jdarcy: ok, so IIUC, and taking it to its extreme, if I plug-in an empty brick, would that block the whole topmost brick dir until the whole brick has synced?
14:21 portante joined #gluster
14:21 karoshi jdarcy: I can provide you with my script and simple steps to reproduce the issue, if you want
14:22 karoshi It can be reproduced reliably
14:22 dustint_ joined #gluster
14:23 jdarcy karoshi: It would block that topmost directory until its *direct* descendants had been created (and written if they're regular files).  Then it would unlock the top dir and start healing each subdir in turn.
14:24 karoshi ok, I see
14:25 partner jdarcy: to my understanding force is just for moving the files on top of the existing links to place them in correct places, to avoid any extra lookups. well, having it just adds another call to proper place.. still i don't see how the volume would balance without migrating all the data over to other node.. ohwell, lets try it out :)
14:25 umarillian joined #gluster
14:26 yongtaof can I just delete files under .glusterfs/xattrop ? What will happen?
14:29 karoshi anyway, in case you are curious: start with a two-brick replicated volume, synced, and mount it on a client. On the client, run  http://pastebin.com/raw.php?i=ekxWCM37 . While it's running, shut down one brick. Client keeps going. Leave it running 5 minutes or so, so it creates many files on the surviving brick. With the client still running, bring the offline brick online, and watch the client stop until all healing has happened.
14:29 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
14:30 karoshi (in my case, the "shut down" part is really powering off the machine that hosts the brick)
14:31 lorin1 joined #gluster
14:31 karoshi http://dpaste.org/1VkGd/
14:31 glusterbot Title: dpaste.de: Snippet #225410 (at dpaste.org)
14:32 karoshi sorry, correct one is http://dpaste.org/DKqPX/
14:32 lorin1 I'm running gluster 3.2.7, and the "gluster volume statedump" command doesn't seem to be there. Was it added in 3.3, or is there some other reason why it wouldn't be there?
14:32 glusterbot Title: dpaste.de: Snippet #225411 (at dpaste.org)
14:33 partner i wonder if rebalance without force just creates the links or what is the purpose of it if it doesn't move actual files around.. or just files without links or what.. conflicting docs around..
14:34 jdarcy yongtaof: That would prevent self-heal from files that might need it.
14:35 jdarcy lorin1: It was added pretty recently, before that you had to find the PID and send a signal manually.
14:36 jdarcy partner: There are normally two parts to rebalance.  Fix-layout just changes the information about how *new files* should be placed.  Migrate-data actually moves *old files* to where they should be.
14:37 partner yup
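
For reference, the two phases jdarcy describes map onto these 3.3 commands (volume name is a placeholder):

    gluster volume rebalance myvol fix-layout start   # only fixes layout so new files land evenly
    gluster volume rebalance myvol start              # also migrates existing files
    gluster volume rebalance myvol status             # check progress
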
14:37 yongtaof I try to delete it in my test server and self heal still happens when I access the file
14:37 ehg hiya. we're using ~1000 directory quotas with gluster 3.3 - when adding new limits, they don't seem to take effect until we restart the servers. any ideas?
14:37 yongtaof I think it just prevents the so-called proactive self-heal?
14:37 lorin1 jdarcy: Thanks. Do you know offhand which signal to send, or do you have a link to docs with the info? Couldn't find this via googling.
14:37 jdarcy lorin1: Pretty sure it was USR1.
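
A sketch of the manual statedump jdarcy mentions, under the assumption that the brick processes are the ones of interest; on these older releases the dump files typically land under /tmp:

    kill -USR1 $(pgrep glusterfsd)    # ask each brick process to write a statedump
    ls /tmp/glusterdump.*             # location is version-dependent; check /tmp first
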
14:38 partner but the force had some special meaning on top of that. i recall it "fixes" the "errors" given by the normal rebalance.. need to check that later on again, just want to understand what is happening, bbl ->
14:38 yongtaof What's your opinion? jdarcy
14:38 jdarcy yongtaof: Right.  If you have client self-heal enabled it will check the flags when a client accesses the file, but without the index info proactive self-heal won't know to look at it.
14:39 jdarcy yongtaof: That means you could have a large and unknown number of files that are not replicated as they should be and are in danger if a replica fails.
14:40 yongtaof ok
14:40 yongtaof so if after delete the file I trigger self-heal on the whole volume
14:40 ujjain joined #gluster
14:41 jdarcy yongtaof: If you want to disable proactive self-heal (e.g. to avoid interference with normal I/O) then you can, but there are risks.
14:41 jdarcy yongtaof: Yes, that's kind of how things used to work before we had proactive self-heal.
14:41 lorin1 jdarcy: kill -USR1 <pid> seems to have done the trick.
14:41 yongtaof the file will be healed anyway
14:41 yongtaof when accessed?
14:42 jdarcy yongtaof: Yes, when it's looked up (before it's opened).
14:42 yongtaof I have reasons to do so
14:43 yongtaof Recently we found xfs shutdown happens during glusterfs rebalance
14:43 yongtaof after a week of debugging with xfs experts
14:43 yongtaof we found that the aggressive link/unlink fops under this directory cause the xfs kernel shutdown
14:43 jdarcy yongtaof: Ah, so they're blaming us, even though we never issue any FS-specific calls?  Typical.
14:44 yongtaof so I want to move ./glusterfs/indices out of xfs file system
14:45 yongtaof we have tested it in our test cluster and xfs did not shut down again
14:45 jdarcy yongtaof: We're discussing this issue in #gluster-dev with Eric Sandeen (Red Hat local-FS lead).  I would consider our link/unlink behavior a bit antisocial, but the *crash* is an XFS bug.
14:46 Nagilum_ antisocial? kernel is there to work not to socialize! ;)
14:46 jdarcy Nagilum_: Antisocial in the sense that it's unnecessary, hard to deal with, and potentially disruptive to other users.
14:47 Nagilum_ if thats the case there is ionice
14:47 yongtaof yes thank you for your help
14:47 yongtaof then I can do it
14:47 yongtaof which protects our production servers from shutdown
14:47 jdarcy yongtaof: It won't *by itself* cause data loss, if that's what you're getting at.
14:48 yongtaof we have 3TB data every day
14:48 yongtaof If xfs
14:48 yongtaof shutdown
14:48 jdarcy Nagilum_: Ionice on what process?
14:48 yongtaof glusterfs 3.3 has a bug that it can't handle xfs shutdown
14:49 yongtaof then it breaks all clients
14:49 yongtaof It caused the most serious problem to us
14:49 yongtaof the worst case is 8 servers shutdown at the same time
14:49 yongtaof when we grow 8 servers to 22 servers
14:50 sandeen oh hey, it's yongtaof :)
14:50 yongtaof so before xfs has a fix I will try to move it out
14:50 yongtaof yes it's me
14:50 yongtaof I'm asking help from glusterfs experts now
14:51 yongtaof and I confirmed there's risk to move the indices directory out but it's ok
14:51 sandeen jdarcy, we are not blaming gluster :)
14:52 sandeen we're just isolating the behavior that led to the race
14:52 d3O joined #gluster
14:52 jdarcy sandeen: At first I thought he meant that we were doing something explicit to cause the shutdown.  Seems not.
14:52 sandeen no, the only thing you could do to do that is call a debugging XFS_IOC_GOINGDOWN ioctl ;)
14:53 jdarcy sandeen: Wait, isn't that the one we might *start* issuing because of some other problem?
14:53 svenneK joined #gluster
14:53 yongtaof yes I'm not blaming too, I just need your expertise help
14:53 jdarcy Oh no, that was just to test something else.
14:53 bfoster jdarcy: that's the "crash simulation" thing
14:53 bfoster iirc
14:53 sandeen jdarcy, no, I hope not! :)
14:54 sandeen bfoster, right.  (sorry, now I'm getting #gluster off track)
14:55 svenneK is there any special trick to getting gluster to work with qemu?
14:55 jdarcy yongtaof: So, key point: removing (or relocating) the index directory makes you *vulnerable* to data loss, but doesn't by itself *cause* data loss.
14:56 yongtaof yes thank you for your info
14:56 svenneK i run qemu-1.4.0 from my distribution (gentoo) and have two hosts A and B, I now try to start a VM on host C with file=gluster://butler/gv0/gentootest.qcow2 ... No such file or directory ... when i mount the fuse-filesystem the file exists
14:56 rwheeler joined #gluster
14:56 yongtaof I'll do it to protect the online service
14:56 yongtaof I'll take the risk
14:56 svenneK btw. host C is not a peer to A and B (it should not have files)
14:57 svenneK (butler is host A btw)
14:57 yongtaof and I'll try to trigger self-heal after move index-base dir out
14:57 yongtaof and I'll also copy it out before change volume file
14:58 karoshi jdarcy: 0-size files, you said? that seems indeed to be the case
14:58 jdarcy yongtaof: That should work, but the full self-heal might take a while and could interfere with other I/O while it runs.
14:58 karoshi I'm looking at it more carefully and I see lots of 0-size files appearing
14:58 yongtaof yes I look into the code of features/index
14:59 jdarcy karoshi: You mean initially zero size, but then getting filled in later, or staying at zero?
14:59 yongtaof indeed it call link and unlink, the files are some kind of flag file
14:59 karoshi jdarcy: let me see
15:00 yongtaof in our production server, the link count has reached 40000
15:00 karoshi jdarcy: staying at zero until client accesses them
15:01 karoshi but nonetheless the client doesn't resume until all of them (albeit 0-sized) have appeared
15:01 yongtaof the files under ./glusterfs/indices/xattrop are always zero size
15:01 karoshi I'm talking about the normal brick dir
15:01 yongtaof ok
15:01 jdarcy karoshi: Hmmm.  Those should get healed up to their proper size without a client access, assuming you have proactive self-heal on.
15:02 karoshi how do I check whether it's on?
15:02 jdarcy karoshi: It's on by default, so unless you've explicitly turned it off it should be operative.
15:03 karoshi so it's strange
15:03 jdarcy karoshi: You can check for the presence of glustershd daemons if you like.
15:04 karoshi the only thing I see with glustershd in its command line is: /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /tmp/f499075e4823554c92284c3036b7cf37.socket --xlator-option *replicate*.node-uuid=c3f89120-1228-480e-b843-cbffab51ccca
15:04 karoshi no deamon
15:04 spider_fingers left #gluster
15:05 pats joined #gluster
15:05 jdarcy karoshi: That's the one.  Don't know why we don't actually invoke it as glustershd.
15:05 yongtaof svenneK  BTW my test case to reproduce xfs shutdown is: set up a cluster with 8 servers, then set up a 2x2 volume and create thousands of files under different directories, then I remove one of the brick directories, after that I grow the volume to 4x2 and run rebalance and self-heal at the same time
15:06 karoshi ok, so I'm out of ideas
15:06 yongtaof it almost reproduce xfs shutdown 100%
15:06 pats Hello everyone; I had posted this to the list: http://gluster.org/pipermail/gluster-users/2013-April/035930.html
15:06 glusterbot <http://goo.gl/vbH6a> (at gluster.org)
15:06 pats Since no one seems to have a solution that would work with the gluster command interface, I was wondering if it's kosher to just edit the files by hand
15:07 pats That way, I could add a third node by hand and restart the glusterfsd
15:07 karoshi jdarcy: would detaching the peer and syncing its brick manually before bringing it back online help (so it finds fewer things to heal)?
15:07 glusterbot New news from newglusterbugs: [Bug 861947] Large writes in KVM host slow on fuse, but full speed on nfs <http://goo.gl/UWw7a>
15:07 pats Would that be likely to work?
15:08 avishwan joined #gluster
15:09 jdarcy karoshi: Hard to say.  It's usually the *scanning* that makes self-heal slow, not the actual healing.  In certain circumstances, if you're very careful e.g. to preserve xattr values, "pre-syncing" like you suggest might make things go faster.  It could also cause big problems.
15:09 karoshi I see
15:09 jdarcy pats: Editing the volfiles by hand is still possible, but once you've done that you can't go back to using the CLI.
15:10 NeatBasis joined #gluster
15:10 bugs_ joined #gluster
15:12 * jdarcy is still trying to figure out that email proposal.
15:13 yongtaof so to move the index-base dir out of the xfs file system my plan is: stop one server's glusterd service, copy the index-base dir out, point the volume files at the new directory, then start glusterd again; do it one server at a time over several days to keep the clients safe (without impacting the online service)
15:14 jdarcy pats: How is this different than what you'd get by using the CLI to create a volume with replica 5 and then using add-brick to change it to replica 6 etc.?
15:15 jdarcy partner: There are also some issues in 3.3* with replica levels greater than two, but those should be fixed in 3.4
15:18 jclift_ jdarcy: Any idea if we can hold a lock across all peers while adding a new brick?
15:18 daMaestro joined #gluster
15:18 portante|ltp joined #gluster
15:18 jclift_ jdarcy: We don't seem to at the moment, which leads to weird situations like 1/2 arsed failures if the brick turns out to already have xattrs on it
15:19 jclift_ jdarcy: That's with git master anyway
15:19 jclift_ Might be just a bug :)
15:20 pats jdarcy: It doesn't let me.
15:20 pats If I try to add a brick, I get something like this : "Incorrect number of bricks supplied 1 for type REPLICATE with count 2"
15:21 jclift_ pats: Show full command line?
15:21 jclift_ Hopefully it's just something slightly wrong on command line. :)
15:22 jdarcy pats: "subvolumes alpha nu xi omicron sigma" in your email suggests you were already at replica 5.
15:22 pats jdarcy: That's the old gluster2 setup
15:22 pats I'm creating a new one for gluster3
15:22 pats and starting with replica 2 on the new systems
15:22 pats trying to add a third
15:23 jdarcy pats: OK, so when you say "How can I get the same setup with gluster3?" you mean the same except for the replication level?
15:23 pats jdarcy: I need to be able to add hosts as I bring them online
15:23 pats So, I'm starting with 2, then three, then a few weeks later 4.
15:23 jdarcy pats: If you're at replica 2 and you want to add a replica, you'd need to do something like "gluster volume add-brick myvolume replica 3 newhost:/srv/whatever"
15:24 pats okay, let me try that.
15:24 jdarcy pats: On the other hand, if you're at replica 2 and you want to add new hosts without changing the replication level, then it gets a bit more complicated.
15:27 pats Perhaps I'm getting confused about what a replication level is. I went through the documentation, and I thought it was a 1:1 mapping to the number of hosts replicated
15:27 pats That command isn't quite working, probably because of the changing of the replica: volume add-brick <VOLNAME> <NEW-BRICK> ... - add brick to volume <VOLNAME>
15:28 pats This is v 3.2.7
15:28 pats Perhaps it's a newer option?
15:28 jdarcy pats: Yeah, 3.2 didn't support changing the replica level in add-brick.
15:28 pats Aha, so there's the problem
15:28 jdarcy Don't remember if that's a 3.3 or 3.4 thing.
15:28 pats Okay, I can do a backport
15:29 pats So, the only thing then is about whether my understanding is correct as to the replica level
15:29 pats I need all of the data available locally on each of the hosts
15:29 pats so everytime I add a host, I guess I need to bump the replica level?
15:29 karoshi jdarcy: I've set up some instrumentation and I can definitely say with certainty that client hangs until all the zero-size files have appeared. If this isn't the way it is supposed to work, should I open a bug?
15:30 nueces joined #gluster
15:32 jdarcy karoshi: If the files are all in the same directory, then I would expect clients to block on the directory lock until all of the (zero length) entries have been created.  Whether that's a bug is kind of another question.  ;)
15:32 jdarcy pats: If you're trying to create replicas on every host, then yes.
15:32 karoshi ah ok, so the zero-size files do have to be created?
15:32 karoshi yes, it's all in the same dir
15:33 karoshi still, it takes around 30 seconds to create ~3500 files
15:33 jdarcy karoshi: Yes.  Otherwise we have races where somebody might actually create/remove/rename those same files while we're in the middle of healing the directory.
15:33 karoshi I take it then that if the new files are at random location (ie not all in the same dir) the impact would be lower
15:34 jdarcy karoshi: Exactly.
15:34 karoshi ok, that would be closer to the real use case
15:34 karoshi I'll adapt the testbed so files aren't all in the same dir
15:35 jdarcy pats: Normally, replication level would be only two or three (to survive one or two failures respectively), but you might have many more servers than that.
15:35 karoshi and see if things get better
15:35 svenneK I have another question... i find various advice whether ext4 is an acceptable fs for the bricks.. what is the current official status?
15:36 jdarcy svenneK: It depends on whether you're talking about GlusterFS as a community project or Red Hat Storage.
15:36 svenneK the community project
15:36 jdarcy svenneK: Ext4 is "supported" at the community level, though there are problems with more recent versions of ext4 causing readdir loops.
15:37 svenneK so basically "dont use ext4" ?
15:37 jdarcy svenneK: There are workarounds, and fixes in 3.4 (maybe 3.3 as well).
15:37 svenneK okay, what is the timeframe for 3.4 (if any)?
15:38 svenneK I am evaluating it for a new VM setup (under KVM)
15:38 jdarcy svenneK: 3.4 is now in its third alpha, about to enter beta.
15:38 svenneK any idea what that means in time ? Weeks? Months ?
15:39 svenneK some projects go through a release in a week (the linux kernel's RCs for example) while other take months between releases (even alpha/beta ones)
15:39 jdarcy svenneK: I'd guess - and I can't commit to more than a guess - that the release might be in late summer, possibly early fall (northern hemisphere).
15:39 svenneK and some are "done when done" (which is okay, but making planning harder)
15:39 jdarcy svenneK: It's probably too late to get a release before Red Hat Summit, which was the original plan.
15:40 svenneK okay, so the timeframe is "summer-ish"
15:40 jdarcy svenneK: Ish.  ;)
15:40 svenneK :)
15:41 svenneK btw. any input on my first problem ? That qemu 1.4 does not seem to find the file on gluster?
15:41 yongtaof thank you gluster and xfs experts I'll go to sleep and it's 24:00 here
15:41 hagarth joined #gluster
15:42 jdarcy svenneK: The cadence so far has been every three or four weeks (because that's all I have patience to wrangle through).  Figure two betas and then a release, that gets us about ten weeks from now - some time in July.  Build in a bit of padding and you get August/September.
15:42 jdarcy svenneK: I don't know much about what the qemu piece is doing, so I'm afraid I couldn't say.
15:43 svenneK okay... it just seemed from documentation on the interwebs that it "just works" (which is seemingly not the case)
15:43 svenneK the fuse stuff works, but is rather slow, right?
15:44 jdarcy svenneK: Depends on what you're doing.  For synchronous I/O (which is what you'd typically get from VMs) or metadata ops, then yes.
15:45 svenneK another question: I have two hosts hosting bricks A and B, and a client C.. right now, C is not a peer (it should not be, right?) but is it then possible to view "gluster volume status" and such from C?
15:46 jdarcy svenneK: C doesn't need to be a peer to mount.  If you want to use the CLI from it, you can make it a peer or use the (undocumented) --remote-host option.
15:46 samppah in my experience fuse with 3.4 has been much faster than 3.3
15:46 jdarcy gluster --remote-host=B volume info
15:47 svenneK thanks for the info for now.. i will try to see if I can get qemu to accept the file through the gluster:// url
15:49 samppah jdarcy: btw, do you know if there are any gotchas when using oss gluster client with Red Hat Storage or what version should be used?
15:50 samppah RHS seems to be based on 3.3.0 but it has some newer patches
15:51 karoshi jdarcy: thanks, populating a directory structure rather than a single dir makes the client almost not notice. However, ISTR that when starting out with a fresh brick (ie after doing gluster volume add-brick myvol replica 2 newbrick:/brick_myvol) healing is really "on-demand" (when client accesses a file) and I don't remember seeing those 0-byte files. Is that case different from what I'm testing now?
16:07 Matthaeus1 joined #gluster
16:10 _pol joined #gluster
16:10 _pol joined #gluster
16:15 bala joined #gluster
16:15 hagarth joined #gluster
16:19 y4m4 joined #gluster
16:25 rwheeler joined #gluster
16:27 portante` joined #gluster
16:28 Mo_ joined #gluster
16:41 H__ eww, compiling gluster-3.3-head on ubuntu 11.10 : checking if libxml2 is present... ./configure: line 12689: syntax error near unexpected token `LIBXML2,' ./configure: line 12689: `PKG_CHECK_MODULES(LIBXML2, libxml-2.0 >= 2.6.19,'
16:44 d3O joined #gluster
16:48 tjstansell i have a *simple* setup right now with a single brick hosted on a single host.  The data is mostly small files and native gluster performance is horrible when just stat'ing files and such so we are testing with NFS instead.  Performance is better (though not great) but we're getting intermittent stale file handle errors that show up with ? for all statistics.  A second ls then works fine and I get all of the data.
16:53 tjstansell and interestingly, my gluster logs all seem to have timestamps of Apr 21 03:24 ... they seemed to have just simply stopped logging anything.
16:53 tjstansell though fuser says a glusterfs process is using the nfs.log file...
16:56 ctria joined #gluster
16:58 zaitcev joined #gluster
17:00 tjstansell it looks like log rotation has caused glusterd and glusterfsd logging to get into a weird state.
17:01 tjstansell this is the last entry from the rotated log:
17:01 tjstansell [2013-04-21 03:24:01.940347] I [glusterfsd.c:896:reincarnate] 0-glusterfsd: Fetching the volume file from server...
17:01 tjstansell and this is the only entry in the new file:
17:01 tjstansell [2013-04-21 03:24:01.940878] I [glusterfsd-mgmt.c:1560:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
17:01 tjstansell then nothing else ...
17:03 semiosis tjstansell: that alone does not indicate any problem, just normal log chatter... do you have other reasons to believe there is a problem?
17:03 tjstansell those are the only log entries anywhere... all logging seems to have stopped.
17:04 tjstansell even though we've been accessing data and seeing the 'stale file handle' errors.
17:04 semiosis tjstansell: what logging do you expect to see?  my gluster logs are usually pretty quiet even in prod
17:04 tjstansell the same is true for all logs in /var/log/glusterfs
17:04 shylesh joined #gluster
17:05 semiosis well if you want to see some logs, you can turn the logging level up to debug or trace, that will surely produce some logs
17:05 semiosis if turning logging up to trace doesnt produce any logs, then i'll agree there's a problem
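
If it helps, the log levels semiosis mentions are per-volume options, set roughly like this (volume name is a placeholder; option names as in the 3.3 docs):

    gluster volume set myvol diagnostics.client-log-level TRACE
    gluster volume set myvol diagnostics.brick-log-level DEBUG
    # remember to set them back to INFO afterwards
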
17:05 lorin1 joined #gluster
17:06 jiffe98 anyone know why when I try to mount gluster via nfs I am getting 'mount.nfs: mount to NFS server '64.251.189.224:/WEBSTATS' failed: RPC Error: Program unavailable' ?
17:06 semiosis jiffe98: see ,,(nfs)
17:06 glusterbot jiffe98: To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
17:06 tjstansell well, we were seeing plenty of E and W logs when we were playing with things on friday ... that look related to bug 872923, which looks like it's only fixed in 3.4 ... i'm on 3.3.2qa1.
17:07 glusterbot Bug http://goo.gl/4evTb unspecified, high, ---, rajesh, ON_QA , ln command execution fails on files with "invalid argument" error when executed the command from nfs mount
17:08 jiffe98 I have that, this is my fstab entry '64.251.189.224:/WEBSTATS        /disks/disk1    nfs     defaults,_netdev,vers=3,mountproto=tcp,rsize=65536,wsize=65536  0       0'
17:09 jiffe98 it had worked at one point but is failing now, I'm not seeing anything in the logs to indicate why
17:09 semiosis tjstansell: ok idk what to say... if you're having problems, please describe the problem you're having.  quiet logs (by itself) is not a problem
17:10 tjstansell is there a way to increase the nfs log level?  i'm not seeing that option.  only the brick and client log levels.
17:10 semiosis jiffe98: mount with -v for additional client output maybe?
17:10 semiosis tjstansell: the client log level might affect the nfs.log
17:10 semiosis tjstansell: worth a try
17:10 tjstansell ok. i'll try that.
17:12 semiosis H__: install pkg-config to fix that
17:12 tjstansell hm... changing that setting seems to have restarted the nfs service ... so yeah, it's logging more now.
17:12 semiosis if you change it back (to INFO) does it go quiet again?
17:15 lorin1 left #gluster
17:17 nueces joined #gluster
17:17 tjstansell hm... i set it back to INFO and ran a df and the df is hung right now ...
17:18 tjstansell and now finally came back ... it hung for a good 30 seconds.
17:19 gbrand_ joined #gluster
17:19 tjstansell i find it interesting that i got a stale file handle error while client logging was set to TRACE and i couldn't find 'stale' in any logs.  maybe it logs it differently...
17:21 saurabh joined #gluster
17:21 mkonecny joined #gluster
17:22 mkonecny hello, anybody know the correct steps for correcting a file with attributes "?????????? ? ? ? ?            ? unknown-Deixa-me Ir-320kbps.mp3" when doing an ls -l?
17:22 H__ semiosis: confirmed ! Thanks !!
17:22 mkonecny when I view this file from another server in the cluster its attributes are correct
17:23 Matthaeus1 left #gluster
17:23 tjstansell semiosis: after setting logging back to INFO, I get no logs, even when i get a stale file handle.
17:24 tjstansell stale file handle results in attributes like this:
17:24 tjstansell -????????? ? ?    ?         ?            ? common-sense-3.6.tar.gz
17:24 tjstansell i do an ls again and then it's fine.
17:24 mkonecny From the random threads I've googled, apparently I need to "do the work manually" - this means simply copying  from the server that had a "good" copy and overwriting the "bad" copy?
17:25 SpongeBob joined #gluster
17:28 SpongeBob joined #gluster
17:29 ZombieCh_ joined #gluster
17:30 aravindavk joined #gluster
17:30 ZombieCh_ Hi, has anyone solved this error "or a prefix of it is already part of a volume" when trying to re-use a brick on OpenIndiana (Illumos) ?
17:30 glusterbot ZombieCh_: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
17:32 ZombieCh_ i know, but there seems to be no alternative for setfattr and getfattr under OpenIndiana .. (?)
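
For reference, the procedure behind the glusterbot link boils down to clearing the volume markers on the brick root, and it does rely on setfattr (brick path is a placeholder):

    setfattr -x trusted.glusterfs.volume-id /path/to/brick
    setfattr -x trusted.gfid /path/to/brick
    rm -rf /path/to/brick/.glusterfs
    # then restart glusterd
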
17:33 semiosis mkonecny: what version fo glusterfs are you using?
17:33 Nagilum_ you could umount the brick and do a newfs on it
17:34 ZombieCh_ errr, 3.3.0
17:34 mkonecny semiosis: glusterfs 3.3.1
17:34 semiosis mkonecny: nfs or fuse client?
17:35 mkonecny semiosis: fuse
17:35 ZombieCh_ yes, but then i would lose my data
17:35 dblack joined #gluster
17:36 semiosis mkonecny: check your client log file, /var/log/glusterfs/the-mount-point.log, for more information.  feel free to pastie/gist that log file too
17:36 efries joined #gluster
17:39 mkonecny semiosis: it's full of "[2013-04-22 19:38:14.142787] I [afr-common.c:3786:afr_local_init] 0-airtimepro-replicate-1: no subvolumes up
17:39 mkonecny " popping up every 2-3 seconds. Will get a full log pasted in a sec
17:40 tjstansell does anyone know if fuse client is any faster with 3.4 than 3.3 for small files/directory listings, etc.
17:40 semiosis mkonecny: unmount the client, truncate the log file, mount the client again, then pastie.org the log file so we can see connection attempts & failures right after mount
17:40 tjstansell ?
17:41 jag3773 joined #gluster
17:43 mkonecny semiosis: I'm unable to unmount the client at the moment since it's a production machine serving clients. I can try that during off-peak hours, is there anything else we can try in the mean-time?
17:44 semiosis it's not serving clients too well if those "no subvolumes up" messages are for real
17:44 semiosis but you could simply make another mount to some unimportant location on the same client machine and get the log from that
17:45 semiosis if there's a config or network issue preventing good communication between that client machine and the servers that should reveal it
17:45 bulde joined #gluster
17:49 rastar joined #gluster
17:55 xymox joined #gluster
18:09 Supermathie http://pastie.org/private/xbx8btkcazalpqsopod7cg
18:09 glusterbot <http://goo.gl/vsw4l> (at pastie.org)
18:09 Supermathie Looks like gluster is getting confused about the state of a file... it thinks it needs healing when it probably doesn't. Perhaps due to the truncate error.
18:10 Supermathie I have a tcpdump of ALL the traffic involving this file which I'll be digging through momentarily. It's an Oracle archive log, so it was just created.
18:10 Supermathie Also: why did gluster fail the truncate operation: [2013-04-22 13:57:22.073805] W [nfs3.c:889:nfs3svc_truncate_cbk] 0-nfs: 8b534455: /fleming1/db0/ALTUS_flash/archivelog/2013_04_22/.o1_mf_1_1093__1366653401581181_.arc => -1 (Permission denied)
18:28 jskinner_ joined #gluster
18:29 jskinner_ I am having an issue with the _netdev option in fstab on CentOS 6.3
18:35 CROS_ joined #gluster
18:35 manik joined #gluster
18:36 semiosis jskinner_: keep talkin
18:38 Supermathie jskinner_: I'm not. Just a warning.
18:38 jskinner_ right
18:38 jskinner_ thats what I am getting
18:39 jskinner_ saying that it's not sure what that option is basically
18:39 Supermathie Yeah, that's not an option to mount.gluster, that's an option to the Linux init.d scripts
18:39 jskinner_ when I leave that option there, it won't mount it on boot
18:40 semiosis jskinner_: can you just ignore the warning?
18:40 tjstansell we ended up writing our own init script to mount glusterfs filesystems when we wanted ... and just specify noauto option in fstab entries.
18:40 Supermathie jskinner_: chkconfig netfs on
18:41 jskinner_ hmm
18:41 jskinner_ ok, I'll give these options a shot
18:41 jskinner_ thanks guys.
18:41 Supermathie jskinner_: TBH I'm not sure if netfs will handle glusterfs filesystems, may need to add the plumbing. But that's what mounts _netdev filesystems.
18:42 jskinner_ might end up just writing something for now
18:43 Supermathie echo 'mount -a -t glusterfs' >> /etc/rc.d/rc.local
18:43 Supermathie If you want to be lazy about it :D
18:44 semiosis why doesnt the normal way work?
18:44 tjstansell this is what we use, fyi ... if you want to use it: http://fpaste.org/cF79/
18:44 glusterbot Title: Viewing Paste #293932 (at fpaste.org)
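
The gist of the noauto approach tjstansell describes (the real script is behind the fpaste link above): mark the fstab entry noauto and let a custom init/upstart job, or rc.local, do the mount once the network and glusterd are up. Names below are placeholders:

    # /etc/fstab
    server1:/myvol  /mnt/gfs  glusterfs  noauto  0 0

    # from a custom init script or rc.local, after networking is up
    mount /mnt/gfs
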
18:49 Supermathie jskinner_: Do you have something like this in /var/log/glusterfs? [2013-04-03 15:30:22.702278] E [mount.c:598:gf_fuse_mount] 0-glusterfs-fuse: cannot open /dev/fuse (No such file or directory)
18:51 jskinner_ which log file?
18:52 Supermathie jskinner_: /var/log/glusterfs/<volumename>.log ?
18:54 semiosis Supermathie: you mean <mount-point>.log?
18:54 Supermathie sure :)
18:55 jskinner_ nope, recursive grep for "cannot" in the glusterfs log directory brings back nothing
18:56 Supermathie jskinner_: On my (RHEL 5.3) systems, netfs failed to mount my gluster volumes due to the fuse module not being loaded. But netfs should handle glusterfs without modifications. Try running /etc/init.d/netfs start by hand and see what it shows.
18:56 manik joined #gluster
18:56 Supermathie s/But/Other than that,/
18:56 glusterbot What Supermathie meant to say was: jskinner_: TBH I'm not sure if netfs will handle glusterfs filesystems, may need to add the plumbing. Other than that, that's what mounts _netdev filesystems.
18:56 Supermathie glusterbot: FAILURE
18:57 jskinner_ lol
18:57 sjoeboo why would a rebalance fail on some files? or at all?
18:57 semiosis maybe i missed something, but jskinner_, could you please restate your problem?
18:58 semiosis sjoeboo: files locked, maybe?
18:58 sjoeboo yeah...that would do it of course...hmm
18:59 semiosis jskinner_: also, what version of glusterfs?
19:00 jskinner_ 3.3
19:00 semiosis jskinner_: are your glusterfs mounts in fstab using localhost?
19:01 sjoeboo_ joined #gluster
19:01 semiosis jskinner_: also, you could look at the client log file after a failed mount at boot to see 1) if the mount was tried at all, 2) if so, why it failed
19:02 jskinner_ no localhost
19:06 Supermathie jskinner_: output of '/etc/init.d/netfs start' ?
19:12 jskinner_ mounting other filesystems: OK
19:12 jskinner_ Ive got the mounts up now, I just did it manually. I chkconfig'd netfs on
19:25 pats jdarcy: Thanks, man. With 3.3, adding and removing bricks and growing and shrinking the replication appears to work as I would expect.
19:32 rwheeler joined #gluster
19:35 nueces joined #gluster
19:36 vincent_vdk joined #gluster
19:38 zwu joined #gluster
19:48 sjoeboo_ so...follow up to my question re: a rebalance failing...how about just a fix-layout failing?
19:48 sjoeboo_ it was chugging along for a while..
20:30 portante` joined #gluster
20:51 jbrooks joined #gluster
20:54 rwheeler joined #gluster
21:02 ash13 joined #gluster
21:20 _pol joined #gluster
21:21 _pol joined #gluster
21:29 t35t0r joined #gluster
21:33 lh joined #gluster
21:33 lh joined #gluster
21:33 alex88 joined #gluster
21:38 mtanner_ joined #gluster
21:38 sandeen_ joined #gluster
21:41 sjoeboo joined #gluster
21:42 jclift joined #gluster
22:10 pull_ joined #gluster
22:13 sjoeboo joined #gluster
22:18 CROS_ joined #gluster
22:18 premera joined #gluster
22:18 genewitch joined #gluster
22:18 morse joined #gluster
22:18 flin_ joined #gluster
22:18 Azrael joined #gluster
22:18 Dave2 joined #gluster
22:18 jds2001 joined #gluster
22:58 sjoeboo joined #gluster
23:02 humbug__ joined #gluster
23:12 CROS_ joined #gluster
23:12 premera joined #gluster
23:12 genewitch joined #gluster
23:12 morse joined #gluster
23:12 flin_ joined #gluster
23:12 Azrael joined #gluster
23:12 Dave2 joined #gluster
23:12 jds2001 joined #gluster
23:49 juhaj joined #gluster
