
IRC log for #gluster, 2013-11-05


All times shown according to UTC.

Time Nick Message
00:01 Fresleven joined #gluster
00:01 mattapperson joined #gluster
00:03 bstr joined #gluster
00:04 mattapperson joined #gluster
00:05 mattappe_ joined #gluster
00:12 mattapperson joined #gluster
00:14 kPb_in joined #gluster
00:19 mattapperson joined #gluster
00:21 elyograg I grepped through the logfile for files that failed to migrate.  There are 91 of them that don't show up in the volume at all.  checking all the bricks for them, they seem to be just gone.
00:26 mattapperson joined #gluster
00:27 haritsu joined #gluster
00:29 mattappe_ joined #gluster
00:58 _mattf joined #gluster
00:58 nonsenso joined #gluster
00:58 jbrooks joined #gluster
00:58 ccha joined #gluster
00:59 ccha joined #gluster
00:59 yosafbridge joined #gluster
01:00 klaxa joined #gluster
01:04 GabrieleV joined #gluster
01:09 brieweb_ left #gluster
01:21 cjh973 left #gluster
01:23 hagarth joined #gluster
01:27 harish joined #gluster
01:27 haritsu joined #gluster
01:42 DV joined #gluster
01:42 johnbot11 joined #gluster
01:58 haritsu joined #gluster
02:00 haritsu joined #gluster
02:00 Skaag joined #gluster
02:03 pdrakeweb joined #gluster
02:06 T0aD joined #gluster
02:09 harish joined #gluster
02:12 haritsu joined #gluster
02:50 bharata-rao joined #gluster
03:03 johnbot11 joined #gluster
03:04 RobertLaptop joined #gluster
03:13 haritsu joined #gluster
03:25 johnbot1_ joined #gluster
03:35 hagarth joined #gluster
04:07 _ilbot joined #gluster
04:07 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
04:07 RobertLaptop joined #gluster
04:13 haritsu joined #gluster
04:17 Skaag joined #gluster
04:20 RobertLaptop joined #gluster
04:23 kPb_in_ joined #gluster
04:32 wushudoin joined #gluster
04:36 RobertLaptop joined #gluster
04:45 FooBar joined #gluster
04:55 the-me joined #gluster
05:01 johnbot11 joined #gluster
05:02 bulde joined #gluster
05:14 haritsu joined #gluster
05:23 morse joined #gluster
05:43 hagarth joined #gluster
06:07 [o__o] left #gluster
06:09 [o__o] joined #gluster
06:15 haritsu joined #gluster
06:23 mohankumar joined #gluster
06:29 Fresleven joined #gluster
06:31 haritsu joined #gluster
06:33 haritsu joined #gluster
06:42 haritsu joined #gluster
06:42 shireesh joined #gluster
06:43 [o__o] left #gluster
06:44 haritsu joined #gluster
06:44 haritsu joined #gluster
06:45 [o__o] joined #gluster
07:12 jtux joined #gluster
07:19 brieweb joined #gluster
07:31 haritsu joined #gluster
07:34 bharata-rao joined #gluster
07:35 hagarth joined #gluster
07:45 haritsu joined #gluster
07:57 haritsu joined #gluster
07:57 ekuric joined #gluster
07:58 haritsu joined #gluster
08:01 ababu joined #gluster
08:04 ctria joined #gluster
08:07 franc joined #gluster
08:11 eseyman joined #gluster
08:12 keytab joined #gluster
08:13 kPb_in joined #gluster
08:24 nasso joined #gluster
08:24 eseyman joined #gluster
08:25 ricky-ticky joined #gluster
08:47 hngkr_ joined #gluster
08:56 harish joined #gluster
08:57 bulde joined #gluster
09:01 mgebbe___ joined #gluster
09:01 mgebbe___ joined #gluster
09:03 mgebbe joined #gluster
09:06 satheesh joined #gluster
09:07 mgebbe joined #gluster
09:12 al joined #gluster
09:18 shireesh joined #gluster
09:26 hagarth joined #gluster
09:37 ProT-0-TypE joined #gluster
09:43 ProT-0-TypE joined #gluster
09:45 shireesh joined #gluster
10:08 MediaSmurf joined #gluster
10:18 bulde joined #gluster
10:19 MediaSmurf trying to heal a gluster volume after a crash on one node, now I have one 'orphan' gfid which I don't need anymore, there is no corresponding hard link, can I safely remove the gfid file?
10:23 MediaSmurf when I search for the inum of this gfid file, I only find the gfid file itself
10:23 MediaSmurf not quite sure how to fix this :)
10:24 MediaSmurf the version is GlusterFS 3.3.0
10:25 jordi12 joined #gluster
10:29 jordi12 Hi! I've a problem when I install gluster on Fedora 19, specifically when I start gluster: service glusterd start
10:29 jordi12 Failed to start GlusterFS an clustered file-system server.
10:29 jordi12 How can I solve it?
10:42 morse joined #gluster
10:42 diegows_ joined #gluster
11:07 MediaSmurf jordi12: any helpful log lines?
11:08 ndevos jordi12: if you check 'systemctl status glusterd.service' you will probably see that this unit is active
11:08 harish joined #gluster
11:09 ndevos jordi12: the glusterfsd.service is the one with the issue, that was filed as bug 1022542
11:09 glusterbot Bug http://goo.gl/8UhTjA unspecified, unspecified, ---, ndevos, ON_QA , glusterfsd stop command does not stop bricks
11:10 jordi12 Thanks for your help, I reinstalled gluster and it seems to work now
11:12 calum_ joined #gluster
11:24 MediaSmurf I've solved my issue by creating the missing hard link on the brick by hand, and then removing it on the client
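A rough sketch of the fix MediaSmurf describes, for anyone hitting the same orphan-gfid state; the brick path, gfid, and file name below are hypothetical examples, not taken from this session:

    # gfid files live at <brick>/.glusterfs/<first 2 hex>/<next 2 hex>/<full gfid>
    BRICK=/export/brick1
    GFID=0b2c3d4e-aabb-ccdd-eeff-001122334455
    GFIDFILE=$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID
    # look for any other name (hard link) pointing at the same inode
    find "$BRICK" -samefile "$GFIDFILE"
    # if nothing else links to it, recreate the name on the brick by hand...
    ln "$GFIDFILE" "$BRICK/path/to/the/file"
    # ...then remove the file through a client mount so both links go away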
11:48 edward1 joined #gluster
11:56 chirino joined #gluster
11:57 kkeithley1 joined #gluster
12:14 hngkr_ joined #gluster
12:24 Rav_ joined #gluster
12:33 raar joined #gluster
12:47 rcheleguini joined #gluster
12:49 ProT-0-TypE joined #gluster
13:00 dewey joined #gluster
13:01 ira joined #gluster
13:01 B21956 joined #gluster
13:16 dewey joined #gluster
13:26 bennyturns joined #gluster
13:45 dewey joined #gluster
13:46 davidbierce joined #gluster
13:49 [o__o] joined #gluster
13:50 dewey joined #gluster
13:52 bennyturns joined #gluster
13:54 davidbierce Does anyone know why there would be a transaction lock on the entire cluster when there are no transactions?  More specifically when I try to run anything that requires a transaction lock I get something like:  [2013-11-05 13:41:09.036134] E [glusterd-utils.c:329:glusterd_lock] 0-management: Unable to get lock for uuid: 770f3572-003d-4c56-b0ab-ecf4a8a1c02d, lock held by: 770f3572-003d-4c56-b0ab-ecf4a8a1c02d
13:54 davidbierce Where the UUID is always the node I'm trying to perform the operation
13:54 calum_ joined #gluster
14:00 danci1973 joined #gluster
14:07 tjikkun_work joined #gluster
14:14 haritsu joined #gluster
14:33 bennyturns joined #gluster
14:33 jruggiero joined #gluster
14:34 jruggiero left #gluster
14:34 lpabon joined #gluster
14:37 DV joined #gluster
14:37 aliguori joined #gluster
14:44 haritsu joined #gluster
14:47 clag_ joined #gluster
14:48 bugs_ joined #gluster
14:51 kaptk2 joined #gluster
14:51 ndk joined #gluster
14:55 ndevos JoeJulian: about bug 1022542, would you like a solution where glusterfsd.service can be disabled (preventing restarts of glusterfsd), but have it enabled by default (to apply updates immediately)?
14:55 glusterbot Bug http://goo.gl/8UhTjA unspecified, unspecified, ---, ndevos, ON_QA , glusterfsd stop command does not stop bricks
14:56 ndevos that would allow people like you to prevent glusterfsd restarts, and others that would like to apply the bugfixes to actually benefit from the fixes
14:57 DV__ joined #gluster
14:57 ndevos yet another tunable in /etc/sysconfig/gluster* is not very attractive to me
14:57 zerick joined #gluster
15:05 JoeJulian ndevos: Don't forget, systemctl restart glusterd is a not uncommon troubleshooting tool when the management daemon is having a problem. If we could have no problems, that would be the optimal solution. :)
15:06 JoeJulian But yes. The ability to avoid blindly restarting the bricks would be satisfactory.
15:06 ndevos JoeJulian: "systemctl restart glusterd" should *not* affect the glusterfsd processes
15:07 ndevos okay, I'll install a fedora 19 or so and see what happens when I disable glusterfsd.service
15:07 JoeJulian Ah, I must have missed that bit.
15:08 JoeJulian I know everyone says not to auto-update stuff, but if I have to read and test every release of every package that's installed on our systems, there'll need to be three of me.
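A minimal sketch of what ndevos proposes to test, assuming the stock Fedora unit names discussed above:

    systemctl disable glusterfsd.service    # opt out of this unit stopping/restarting bricks
    systemctl restart glusterd.service      # restart the management daemon; brick (glusterfsd) processes should stay up
    systemctl status glusterd.service glusterfsd.service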
15:10 dewey joined #gluster
15:14 ira joined #gluster
15:23 hybrid5121 joined #gluster
15:23 Technicool joined #gluster
15:28 failshell joined #gluster
15:31 hagarth joined #gluster
15:32 elyograg JoeJulian: is your emergency over yet?  I haven't figured out yet what went wrong with my rebalance. Currently I'm looking through my brick .glusterfs directories for files that only have one link - hoping that the 91 files that have simply disappeared from the volume will somehow still be there.
15:33 JoeJulian not yet... :( Check the other bricks for those files.
15:35 wushudoin joined #gluster
15:41 jbrooks joined #gluster
15:42 DV__ joined #gluster
15:43 elyograg I haven't done it for all the files, but I've checked the regular brick directories (not .glusterfs) for a few of them and they aren't there.
15:44 nueces joined #gluster
15:44 elyograg I've still got to get on the train to work, but once I get there today, I'll be writing scripts to take my errored file lists and do extensive checks.
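A minimal sketch of the one-link check elyograg describes, with a hypothetical brick path; regular files under .glusterfs whose link count is 1 have lost their name in the normal directory tree:

    find /bricks/b1/brick/.glusterfs -type f -links 1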
15:55 georgeh|workstat joined #gluster
15:56 rotbeard joined #gluster
16:10 MediaSmurf grmbl next issue.. after trying a rolling upgrade from 3.3.0 to 3.4.1 the self-heal is not working anymore, so the two replica servers are out of sync, what should I do? roll-back and upgrade with downtime? or does someone have a solution?
16:11 JoeJulian Two options. Wait 10 minutes or "gluster volume heal $vol"
16:11 JoeJulian If that still doesn't work, try "gluster volume heal $vol full"
16:12 MediaSmurf gluster volume heal $vol on 3.3.0 says "Heal operation on volume cluster has been unsuccessful" and on 3.4.1 it says "Commit failed on 10.0.0.51. Please check the log file for more details."
16:12 JoeJulian oooh
16:12 MediaSmurf the same with "full" behind the command
16:13 JoeJulian did the log file offer you any clues?
16:13 bulde joined #gluster
16:17 MediaSmurf nothing that looks like a useful error
16:17 MediaSmurf Received heal vol req for volume cluster, Acquired local lock, Received ACC from uuid, Sent op req to 1 peers, Received ACC from uuid, Cleared local lock, Received resp to heal volume, Exiting with: -1
16:18 Elico joined #gluster
16:18 JoeJulian Is that on 10.0.0.51?
16:19 Elico hey there. I have mounted glusterFS on a client which is not a part of any of the group and it shows me:"d?????????  ? ?    ?       ?  " in the ls -la while not allowing access to this mounted directory.
16:19 JoeJulian client log
16:19 Elico I am not even sure where and what to look at?
16:19 JoeJulian Elico: /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
16:19 JoeJulian ... probably...
16:20 MediaSmurf JoeJulian: yes, that was the logging on 10.0.0.51
16:20 MediaSmurf when I execute the heal command also on 10.0.0.51
16:21 MediaSmurf and the logging on the other (upgraded) server 10.0.0.50 remains empty
16:21 JoeJulian oops, got my conversations mixed up. Sorry, not paying that much attention over here. Elico /var/log/glusterfs/{mountpoint with / converted to -}.log
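For example, a volume mounted at /mnt/gfstv would log to:

    /var/log/glusterfs/mnt-gfstv.log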
16:22 JoeJulian MediaSmurf: is glusterd running on .50?
16:22 MediaSmurf jep
16:23 Elico JoeJulian: OK so in the logs it states something about the dns resolution..
16:23 MediaSmurf and peer status is connected to both sides
16:23 JoeJulian MediaSmurf: I would restart both glusterd. Should be a safe operation.
16:24 MediaSmurf okay thanks, that means downtime right? ;)
16:26 JoeJulian MediaSmurf: no downtime
16:32 DV__ joined #gluster
16:32 m0zes anybody going to SuperComputing this year?
16:40 failshel_ joined #gluster
16:41 Elico now I have a question regarding "ls"
16:41 Elico is it possible to see a ls output or maybe it
16:41 Elico it's only bluffing for me?
16:43 failshel_ joined #gluster
16:45 JoeJulian Elico: That question confuses me. The answer should be yes unless I'm misunderstanding your question.
16:45 JoeJulian Did you fix your hostname lookup problem?
16:50 Elico JoeJulian: I am testing GlusterFS on a very tiny machine so maybe it is too much for the machine but I need to put three or four machines together and then see what happens..
16:53 Elico when I am trying to access a file that I know that exists I can access it but a ls just get stuck.
16:53 Elico also a "find" and I am not sure why.
16:53 JoeJulian Elico: which version?
16:54 ira joined #gluster
16:54 Elico 3.2.7 ubuntu-saucy
16:54 JoeJulian Elico: That's why. ,,(latest)
16:54 glusterbot Elico: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
16:54 ira joined #gluster
16:55 Elico JoeJulian: are you sure about it? what's current version?
16:55 Elico ho and my swap is 50% so I assume it might cause something..
16:55 JoeJulian Follow the link.
16:55 JoeJulian And yes I'm sure. That's why I said so... :P
16:56 Elico I am wondering to myself: "what version RH using?"
16:57 JoeJulian Not sure. They have their own way of handling versions that include specific bug backports, too, like the one to manage the ext4 bug.
16:58 Elico what ext4 bug?
16:58 JoeJulian @lucky glusterfs ext4 bug
16:58 glusterbot JoeJulian: http://www.gluster.org/category/howtos/
16:59 JoeJulian @google gluster ext4 structure change
16:59 glusterbot JoeJulian: Howtos - Gluster: <http://www.gluster.org/category/howtos/>; Simple GlusterFS log rotation | Gluster Community Website: <http://www.gluster.org/category/glusterfs/>; GlusterFS bit by ext4 structure change - Joe Julians Blog: <http://goo.gl/PEBQU>; LKML: Bryan Whitehead: ext4 change in v3.3-rc2 broke user space:
16:59 glusterbot JoeJulian: <http://lkml.org/lkml/2013/3/11/776>; Re: [Gluster-devel] regressions due to 64-bit ext4 directory cookies: <http://goo.gl/qnjFhN>; IRC log for #gluster, 2013-03-19: <http://irclog.perlgeek.de/gluster/2013-03-19>; Certain operations like ls hang in 4-node gluster setup - Server Fault: (1 more message)
16:59 Elico OK..
17:06 Elico well it is not allowing me to install the PPA..
17:08 semiosis Elico: ???
17:10 semiosis Elico: did you do add-apt-repository, apt-get update, apt-get install?
17:10 semiosis that works
17:10 Elico semiosis: when I run "add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.4" saucy is complaining about something..
17:11 semiosis pastebin the error please
17:11 Elico ok
17:11 Elico http://pastebin.com/qQ1jgj79
17:12 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
17:12 dewey_ joined #gluster
17:12 Elico well, too late for pastebin..
17:13 Elico semiosis: is it ok??
17:13 semiosis hmm, never seen this before
17:13 semiosis let me try to reproduce
17:14 Elico on a saucy 13.10
17:14 * semiosis gets out the laptop
17:14 semiosis my desktop is kubuntu precise
17:14 semiosis no problem on that
17:15 Elico well I was amazed too...
17:16 JoeJulian Googling... one person reported that error but never truly resolved it. Claimed it was from dual booting windows - which is obviously incorrect.
17:16 Elico haha
17:16 semiosis works for me
17:16 Elico on a 13.10?
17:16 semiosis i saw the error then tried again and it worked 2nd time
17:16 semiosis :O
17:16 semiosis kubuntu saucy
17:16 Elico I am amazed..
17:16 semiosis trying again & again
17:17 semiosis yep got error again
17:17 semiosis going to tcpdump this
17:17 semiosis i suspect an SSL issue
17:17 Elico hmm
17:19 semiosis for now i suggest to keep trying
17:19 Elico it's weird like hell
17:19 Elico do you think ubuntu is the place to report it?
17:20 semiosis stand by
17:20 Elico here
17:22 davidbierce We had a brick crash hard with a memory failure.  The brick has returned, but the client log is filling non-stop with these errors.  Is there a way for the client to forget the invalid FD? https://gist.github.com/anonymous/38e4689ed95e4fcd2757
17:22 glusterbot <http://goo.gl/uUZKpU> (at gist.github.com)
17:23 johnbot11 joined #gluster
17:28 social joined #gluster
17:29 semiosis i'm getting strange dns timeouts on my laptop
17:31 bsaggy joined #gluster
17:31 semiosis some weirdness with my office wifi.  i switched over to a mobile hotspot and now add-apt-repository works every time
17:31 glusterbot New news from newglusterbugs: [Bug 1024369] Unable to shrink volumes without dataloss <http://goo.gl/jZ350k>
17:31 semiosis :/
17:32 semiosis Elico: check your dns
17:36 Mo__ joined #gluster
17:41 semiosis heh, 47% packet loss between my laptop & the access point
17:41 JoeJulian ewww
17:45 rwheeler joined #gluster
17:46 semiosis Elico: in light of all this, i'd suggest checking if you can reach https://launchpad.net in a browser, and working on any network issues that get in the way
17:46 glusterbot Title: Launchpad (at launchpad.net)
17:47 rotbeard joined #gluster
17:48 semiosis a ha! power management on my wifi adapter.  plug in the laptop ac adapter and the packet loss vanishes :D
17:54 hybrid512 joined #gluster
17:55 failshell joined #gluster
18:02 aliguori joined #gluster
18:09 bosszaru joined #gluster
18:13 Elico semiosis: I will see that in couple minutes
18:13 calum_ joined #gluster
18:14 Elico semiosis: from my point of view launchpad.net and others are fine...
18:16 Elico the problem is that there is no debug so I cannot even know the source of the problem.
18:37 B21956 joined #gluster
18:39 B21956 joined #gluster
18:41 kPb_in_ joined #gluster
18:43 semiosis Elico: according to the source of add-apt-repository the -m command line option should give you debug output, but it did nothing for me
18:54 nueces joined #gluster
19:00 Elico for me either.
19:00 Elico I will not try too hard since I did manage to add the repo manually
19:00 Elico now I have tried to delete a volume and build it from 0
19:05 elyograg would it be possible to find a gluster consultant to look at our setup and tell me if I did something wrong?
19:08 elyograg and help figuring out what the hell went wrong with our rebalance.
19:09 Elico I must say that glusterfs 3.4 does a couple of things better
19:34 hybrid5121 joined #gluster
19:36 mjrosenb joined #gluster
19:36 mjrosenb question: how do I disable the gluster-nfs bridge?
19:36 mjrosenb it looks like it is running on a brick, and hijacking that brick's actual nfs exports.
19:37 JoeJulian "gluster volume set $vol nfs.disable on" for each $vol.
19:38 mjrosenb well, now i'm getting a different error on an nfs client...
19:39 mjrosenb clnt_create: RPC: Program not registered
19:39 awickham joined #gluster
19:40 kPb_in_ joined #gluster
19:40 mjrosenb will that change persist across reboots, or do I need to edit a config file?
19:42 JoeJulian It will persist
19:51 mjrosenb wow, gluster is really a whole lot more user friendly than it was a few years ago!
19:52 JoeJulian hehe, that it is...
19:54 joverstr joined #gluster
20:05 davidbierce Is there a way for a 3.4.0 client to forget the invalid FD from a brick that crashed without restarting the client?
20:13 JoeJulian yes, there's a "gluster volume clear-locks" command that I don't know how to use.
20:19 Elico I am trying to create quite a lot of folders and files and for an unknown reason it starts showing " touch 1
20:19 Elico touch: cannot touch ‘1’: No space left on device"
20:20 Elico while 'df |grep gfs' "gfs1:test         304512    100224    188544  35% /mnt/gfstv"
20:20 JoeJulian What about df -i
20:20 Elico great this is what I forgot!
20:20 P0w3r3d joined #gluster
20:21 Elico gfs1:test       " 78936  78845      91  100% /mnt/gfstv" HEHE
20:21 Elico something is weird to me...
20:21 JoeJulian :D That'll do it.
20:21 Elico how can it be 78k inodes?
20:21 JoeJulian double the number of actual directories and files?
20:22 Elico JoeJulian: I am not sure I understood
20:22 Elico each directory takes two inodes?
20:22 JoeJulian Yes. See http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
20:23 glusterbot <http://goo.gl/j981n> (at joejulian.name)
20:25 zaitcev joined #gluster
20:25 Elico JoeJulian: I am not sure I understood the relationship between the article and the problem..
20:26 Elico I have 11839 files and directories (sum) on this disk...
20:26 Elico disk = volume
20:28 Elico isn't the limit about 4 million??
20:28 Elico just wondering since maybe I did not understand it right
20:29 hybrid512 joined #gluster
20:31 haritsu joined #gluster
20:32 JoeJulian Elico: For every file and directory there is a hardlink (or symlink for directories) under .glusterfs (plus some overhead for directory structure). Still doesn't add up, though, so perhaps the inode size is small enough that the extended attribute data is spilling over into additional inodes.
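A rough way to see where the inodes on one brick are going, with a hypothetical brick path:

    df -i /bricks/b1/brick
    # entries outside .glusterfs (the visible files and directories)
    find /bricks/b1/brick -path '*/.glusterfs' -prune -o -print | wc -l
    # entries inside .glusterfs (gfid hard links / symlinks plus its own directory tree)
    find /bricks/b1/brick/.glusterfs | wc -l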
20:32 Elico a sec.
20:33 Elico one brick inode count is 26312
20:33 JoeJulian The quantity of inodes is determined when you format the filesystem.
20:33 Elico OK so on that now it's the ext4 thingy..
20:33 Elico since I have 26312 max per this device...
20:34 JoeJulian Ah, ok.
20:34 Elico but now I need to find out what and how I am using more than that...
20:34 Elico and I need to strip volume over bricks
20:36 JoeJulian @stripe
20:36 glusterbot JoeJulian: Please see http://goo.gl/5ohqd about stripe volumes.
20:36 Elico thanks!!
20:36 bosszaru so if I have two bricks in a replication set and I write a file to the mounted volume I see it appear in the file system on one brick immediately, but it takes several minutes to show up on the file system of the replicant pair.  This is a testing cluster and there is zero load and the file is only a few bytes.  Is this to be expected?  is using worm the way to ensure quicker replication?
20:37 mjrosenb Elico: I highly recommend ncdu
20:37 JoeJulian bosszaru: Nope. Sounds like your client isn't able to connect to both servers. Check the client log /var/log/glusterfs/{mountpoint tr / -}.log
20:38 bosszaru the mount point is a dns RR name might that be the issue?
20:38 bosszaru when the dns ttl expires, it remounts the other and writes there?
20:38 Elico mjrosenb: for what ncdu??
20:39 Elico nice!!!!
20:40 mjrosenb Elico: showing where your inodes/space has gone
20:40 mjrosenb although it gives file counts, not inode counts.
20:40 JoeJulian bosszaru: nope. ,,(mount host)
20:40 glusterbot JoeJulian: Error: No factoid matches that key.
20:40 mjrosenb also, I don't know if you can sort based on file counts
20:40 JoeJulian @mount
20:40 glusterbot JoeJulian: I do not know about 'mount', but I do know about these similar topics: 'If the mount server goes down will the cluster still be accessible?', 'mount server'
20:40 JoeJulian bosszaru: nope. ,,(mount server)
20:40 glusterbot bosszaru: The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrnds
20:41 JoeJulian Now you know I'm distracted when I forget my own ,,(glossary) terms...
20:41 glusterbot A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
20:41 Elico JoeJulian: I am not sure how the strip works exactly but for a small cache it can be enough
20:42 bosszaru thanks JoeJulian
20:42 bosszaru you rock
20:42 JoeJulian That's easy. strip works for cash. Just bring some and stick it in their G-String.
20:43 Elico ok so it's 11837 items which is using 78936
20:43 Elico inodes
20:43 Elico the reason for that is unknown..
20:43 JoeJulian stripe, on the other hand, is much more complex.
20:43 Elico stripe it is...
20:44 * JoeJulian would prefer strip to stripe any day of the week.
20:44 Elico JoeJulian: until you will get more then enough of that..
20:44 Elico then you will want a bit stipe
20:44 Elico stripe..
20:47 davidbierce Oh, I thought "gluster volume clear-locks" was just for client locks?  The command seems like something you'd run on the client, but the gluster command isn't part of the client install
20:47 elyograg what kind of a filesystem only allows 78k inodes?  I would hope for billions, or at least millions.  or are there other things on it?
20:48 JoeJulian davidbierce: Was this not a brick lock? If it's glusterd, just kill all of them and start them up again.
20:51 Elico elyograg: it's a 3X100mb drive..
20:52 Elico I would need to power up another 3 machines to upper that..
20:52 Elico and the beast is roaring with 2tb+2tb+1tb
20:54 Elico for now I have written a small script that creates files on the FS which is a "/[0-9a-f]{1}/[0-9a-f]{2}/[0-9a-f]{6}"
20:55 Elico top level dir second level dir and a file
20:55 Elico which should be more than 65k files..
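A sketch of the kind of generator script Elico describes (hex dir/dir/file layout); the mount point and counts are illustrative, and collisions from $RANDOM are ignored:

    #!/bin/bash
    cd /mnt/gfstv || exit 1
    for a in 0 1 2 3 4 5 6 7 8 9 a b c d e f; do
      for b in $(printf '%02x ' $(seq 0 255)); do
        mkdir -p "$a/$b"
        for i in $(seq 1 16); do
          touch "$a/$b/$(printf '%06x' $RANDOM)"   # 16 * 256 * 16 = 65536 names, minus collisions
        done
      done
    done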
20:57 [o__o] left #gluster
20:58 failshell joined #gluster
20:59 [o__o] joined #gluster
21:02 JoeJulian mkfs.ext4 settings to know about: -I inode-size, -N number-of-inodes or -i bytes-per-inode
21:02 Elico JoeJulian: thanks!
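For example (the device is a hypothetical placeholder, values are illustrative only):

    mkfs.ext4 -i 4096 /dev/sdb1       # one inode per 4 KiB of space
    mkfs.ext4 -N 1000000 /dev/sdb1    # or ask for an absolute number of inodes
    mkfs.ext4 -I 512 /dev/sdb1        # larger inode size leaves room for extended attributes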
21:07 elyograg well, crap.  I just discovered that my two new storage servers were several hours off on their time.  Seems I didn't get ntp running on them.  would that cause a rebalance to go wrong?
21:08 JoeJulian I can't think of any reason why it would, but I don't know every step of the process by heart...
21:09 elyograg I guess CentOS doesn't ask about time synchronization during install.  that was probably fedora, when I was messing with that.
21:13 davidbierce [2013-11-05 17:05:33.153013] W [fuse-bridge.c:1132:fuse_err_cbk] 0-glusterfs-fuse: 2405977691: FSYNC() ERR => -1 (Bad file descriptor)
21:13 davidbierce [2013-11-05 17:05:33.153230] W [fuse-bridge.c:2127:fuse_writev_cbk] 0-glusterfs-fuse: 2405977692: WRITE => -1 (Bad file descriptor)
21:13 davidbierce 11:06 for mount: ps1.us2.san:/gv-cloud1 on /mnt/volumes/gv-cloud1 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
21:13 davidbierce 11:06 on kh5
21:13 davidbierce JoeJulian: The thing that is pitching a fit is a client, not a brick.  It is filling logs on the client with things like
21:14 davidbierce Sorry, bad paste....new IRC client :(
21:15 JoeJulian davidbierce: That should be the right tool then.
21:16 davidbierce Where is the place to search for the file/brick that is locked?  The command seemed to want a path to the file or inode.
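One way to look for held locks that isn't mentioned above is a brick statedump, run on a server rather than the client; the dump directory varies by install (often /var/run/gluster - check server.statedump-path), and the volume name here is hypothetical:

    gluster volume statedump myvol
    grep -A5 'lock-dump' /var/run/gluster/*.dump*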
21:17 Elico JoeJulian: it seems like a larger disk can handle more inodes by default..
21:17 JoeJulian Elico: That's the "-i bytes-per-inode" that's got some default that I've never looked at.... :D
21:18 Elico well when you test somethings you get into it...
21:18 JoeJulian I use xfs..
21:19 Elico well I had a storage server that ran xfs and ban kernel panic while writing to disk..
21:19 Elico bam*
21:19 Elico I still don't know if it's the xfs or another part of the system like nfs, but it happens and it annoys!
21:30 bstr joined #gluster
21:33 glusterbot New news from newglusterbugs: [Bug 1026977] [abrt] glusterfs-3git-1.fc19: CThunkObject_dealloc: Process /usr/sbin/glusterfsd was killed by signal 11 (SIGSEGV) <http://goo.gl/XU5IHq>
21:37 mjrosenb ugh.  my client seems to not be in sync with the bricks?
21:38 mjrosenb 85] W [fuse-bridge.c:292:fuse_entry_cbk] 0-glusterfs-fuse: 59: LOOKUP() /media => -1 (Invalid argument)
21:38 mjrosenb 88] W [dht-layout.c:186:dht_layout_search] 0-magluster-dht: no subvolume for hash (value) = 527739072
21:38 mjrosenb 10] E [dht-common.c:1372:dht_lookup] 0-magluster-dht: Failed to get hashed subvol for /media
21:39 calum_ joined #gluster
22:26 davidbierce Is that tool supposed to be run on the gluster node with the bricks?  That tool doesn't appear to be available on the client that is having issues with the stuck fd write.
22:26 failshel_ joined #gluster
22:43 Gugge joined #gluster
22:48 ira_ joined #gluster
22:48 root____ joined #gluster
22:52 ski_tr joined #gluster
23:01 elyograg can anyone here do expert-level gluster consulting in Salt Lake City?  I don't know how much the company is willing to pay, but it's been all but approved.
23:02 elyograg we've had a rebalance fail, resulting in a number of lost files, permission issues (changed to 000), and files that are throwing read errors via the fuse/nfs mount but are available directly on bricks.
23:13 JoeJulian elyograg: I'd offer to see if I could, but right now I'm focused on rebuilding a disaster... maybe later tonight I can help.
23:30 rwheeler joined #gluster
23:47 failshell joined #gluster
23:56 cfeller joined #gluster
