
IRC log for #gluster, 2013-01-25


All times shown according to UTC.

Time Nick Message
00:05 glusterbot New news from newglusterbugs: [Bug 903873] Ports show as N/A in status <http://goo.gl/0oB6Y>
01:39 kevein joined #gluster
01:42 fabio|gone joined #gluster
01:44 polenta joined #gluster
01:57 greylurk What does the "total files scanned" in the rebalance status actually mean?
01:59 greylurk I've got a rebalance that's reporting it's scanned 7 million files, but I've only got 4 million files on the cluster.
02:03 polenta greylurk, do you see rebalanced X files of size Y, so Y = 4 millions
02:04 greylurk $ sudo gluster volume rebalance gv0 status
02:04 greylurk rebalance step 2: data migration in progress: rebalanced 1942943 files of size 77799730312 (total files scanned 7564958)
02:05 greylurk $ find /var/local/cur8/var/media/ | wc -l
02:05 greylurk 4518429
02:05 greylurk with /var/local/cur8/var/media being the mount point
02:07 greylurk I am just trying to get an estimate of percentage complete.
02:09 raven-np joined #gluster
02:19 * polenta checking
02:28 polenta greylurk, did you have a chance to check: https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Storage_Software_Appliance/3.2/html/User_Guide/sect-User_Guide-Managing_Volumes-Rebalancing.html
02:28 glusterbot <http://goo.gl/NKvuu> (at access.redhat.com)
02:29 polenta there is some information regarding the time to complete the rebalance operation
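A very rough way to turn those counters into a progress figure, as a sketch only (the numbers and mount point below are the ones greylurk pasted above; note the "total files scanned" counter can exceed the number of files visible through the mount, so it is not a reliable denominator, and only a subset of files actually needs to move):

    # counter copied from "gluster volume rebalance gv0 status"
    DONE=1942943
    # files visible through the client mount
    TOTAL=$(find /var/local/cur8/var/media/ -type f | wc -l)
    echo "$DONE $TOTAL" | awk '{printf "~%.0f%% of files migrated\n", 100*$1/$2}'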
02:31 eightyeight why do i see two directories in the client mount when creating a new dir?
02:31 eightyeight not so with files
02:42 eightyeight whenever i 'mkdir /gluster/foo' i see two foo dirs in /gluster/. and 'rmdir /gluster/foo' removes both
02:47 polenta eightyeight, what kind of replication are you using?
02:48 eightyeight distributed replication, 2 copies
02:48 eightyeight 3 peers. 2 bricks on each peer.
02:53 polenta eightyeight, are you mounting using glusterfs ?
02:55 eightyeight yes
02:55 eightyeight # mount.glusterfs server:volume /gluster
02:56 y4m4 joined #gluster
02:59 * polenta will brb
03:25 sripathi joined #gluster
03:34 bharata joined #gluster
03:40 deepakcs joined #gluster
03:44 eightyeight long brb
03:46 glusterbot joined #gluster
03:48 theron joined #gluster
03:55 eightyeight hello spam
04:11 pai joined #gluster
04:21 lala joined #gluster
04:25 eightyeight good riddance
04:31 hagarth1 joined #gluster
04:35 pai joined #gluster
04:40 sahina joined #gluster
04:44 vpshastry joined #gluster
04:44 sgowda joined #gluster
04:48 ramkrsna joined #gluster
04:59 shylesh joined #gluster
05:28 mohankumar joined #gluster
05:39 sripathi joined #gluster
05:51 overclk joined #gluster
05:53 sripathi joined #gluster
06:07 sripathi joined #gluster
06:14 atrius joined #gluster
06:19 atrius_away joined #gluster
06:24 gmcwhistler joined #gluster
06:26 theron joined #gluster
06:34 atrius joined #gluster
06:49 sgowda joined #gluster
06:49 ekuric joined #gluster
06:54 tru_tru joined #gluster
07:09 Nevan joined #gluster
07:18 bala joined #gluster
07:21 jtux joined #gluster
07:21 shireesh joined #gluster
07:33 sgowda joined #gluster
07:33 rgustafs joined #gluster
07:40 ngoswami joined #gluster
07:44 guigui3 joined #gluster
07:53 greylurk joined #gluster
07:55 melanor9 joined #gluster
07:56 dobber joined #gluster
07:56 ekuric joined #gluster
07:59 theron joined #gluster
08:01 sashko joined #gluster
08:03 theron joined #gluster
08:17 tjikkun_work joined #gluster
08:23 jtux joined #gluster
08:24 andreask joined #gluster
08:27 Joda joined #gluster
08:29 Dell joined #gluster
08:40 mgebbe_ joined #gluster
08:41 gbrand_ joined #gluster
08:48 rnts What would happen if I remove files/directories directly from the 'bricks' of a gluster-system?
08:50 shylesh joined #gluster
09:00 x4rlos joined #gluster
09:05 DaveS_ joined #gluster
09:28 gbrand__ joined #gluster
09:31 JoeJulian wtf?
09:32 JoeJulian @reconnect
09:32 shireesh joined #gluster
09:33 JoeJulian Huh? Did glusterbot just ban itself? hehe
09:33 ndevos JoeJulian: it didnt in #gluster-dev
09:34 JoeJulian Well that was weird... it somehow started twice.
09:35 lala_ joined #gluster
09:35 bauruine joined #gluster
09:37 tryggvil joined #gluster
09:37 tryggvil_ joined #gluster
09:37 JoeJulian Hmm, now to figure out how to unban itself...
09:38 x4rlos hehe, paradox.
09:41 sashko joined #gluster
09:42 JoeJulian Crap... Need avati to do it.
09:45 sgowda joined #gluster
09:48 vpshastry1 joined #gluster
09:49 vpshastry joined #gluster
09:54 shylesh joined #gluster
09:54 jbrooks joined #gluster
10:29 sashko joined #gluster
10:35 sgowda joined #gluster
10:36 duerF joined #gluster
10:36 sahina joined #gluster
10:43 nueces joined #gluster
10:56 cyberbootje joined #gluster
11:15 luis_alen joined #gluster
11:24 edward1 joined #gluster
11:31 ctria joined #gluster
11:35 nhm joined #gluster
11:39 andreask joined #gluster
12:14 Ass3mbler joined #gluster
12:34 Ass3mbler Hello, a newbie question: is there some problem with the wiki? I see a lot of pages referenced by google missing, deleted in may 2012...
12:42 Ass3mbler no one knows?
12:43 jgillmanjr I don't know about the status of the wiki. Do you have a particular question?
12:48 Dell_ joined #gluster
12:57 balunasj joined #gluster
13:00 dustint joined #gluster
13:03 ctria joined #gluster
13:06 gbrand_ joined #gluster
13:17 hagarth joined #gluster
13:18 Ass3mbler yes, about the single process execution. I cannot find anything in the docs
13:25 mohankumar joined #gluster
13:28 ramkrsna joined #gluster
13:32 Dell__ joined #gluster
13:34 ngoswami joined #gluster
13:34 abkenney joined #gluster
13:41 gbrand_ joined #gluster
13:48 chacken joined #gluster
13:48 rwheeler joined #gluster
14:03 ctrianta joined #gluster
14:07 aliguori joined #gluster
14:08 eightyeight why is my gluster mount showing the same directory twice? making a directory shows two, and removing a directory removes both.
14:09 chacken are they symlinked?
14:09 eightyeight no
14:09 eightyeight "mkdir /gluster/foo" only makes one directory, but it displays twice with "ls /gluster"
14:10 eightyeight and "rmdir /gluster/foo" will remove the newly created directory, and "ls /gluster" shows both are gone
14:10 chacken weird
14:10 eightyeight yet, for regular files, there is only one of each
14:10 eightyeight it's only for directories that this oddity is occurring
14:14 kkeithley eightyeight: that is indeed not normal. What version of glusterfs? On what linuxdist?
14:14 eightyeight debian testing on two nodes, and ubuntu 12.04 on the third. all are running 3.3.1
14:16 eightyeight i don't know if this matters, but the three nodes are in a linked list setup
14:17 kkeithley what's a linked list setup?
14:17 eightyeight each node has 2 bricks. thus: server1:brick1 - server2:brick2, server2:brick1 - server3:brick2, server3:brick1 - server1:brick2
14:17 eightyeight it's a distributed replication, with copies=2
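A brick ordering like the one eightyeight describes would come from a create command along these lines (hostnames and paths here are illustrative, not the actual ones); with replica 2, each consecutive pair of bricks on the command line becomes one replica set:

    gluster volume create myvol replica 2 \
        server1:/pool/brick1/vol server2:/pool/brick2/vol \
        server2:/pool/brick1/vol server3:/pool/brick2/vol \
        server3:/pool/brick1/vol server1:/pool/brick2/vol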
14:20 Norky I have a misbehaving gluster volume. It's one out of four, the others seem fine. NFS mounts of it don't work. Looking in the log file I see complaints about the ownership/mode of the root of the bricks, which actually look to be correct, however the size looks wrong on two of them: not a multiple of 4K
14:20 Norky http://fpaste.org/YMBC/
14:21 kkeithley those bricks are all discrete file systems on the nodes, right? They aren't luns on a SAN being shared between nodes somehow are they?
14:21 eightyeight correct
14:22 tryggvil_ joined #gluster
14:22 tryggvil joined #gluster
14:22 eightyeight they're all zfs datasets local to each server
14:22 kkeithley ohhhhh, zfs
14:23 eightyeight ?
14:24 kkeithley eightyeight: I don't grok your brick layout above. What's the exact command you used to create the volume?
14:24 eightyeight how about i give you a pastebin of 'volume info'?
14:24 kkeithley sure
14:25 eightyeight http://ae7.st/p/7cp
14:26 eightyeight /pool/brick{1,2} is the zfs dataset. /pool/brick?/vol/ is the brick (obviously), so if the mount doesn't exist, neither will the brick
14:27 kkeithley yup
14:28 kkeithley Well, I don't see anything wrong with your setup
14:31 kkeithley Easier to grok your daisy chain setup from the volume info.
14:32 eightyeight i was looking at the brick names, and i think i'll rename them
14:32 eightyeight /pool/head/brick and /pool/tail/brick
14:33 eightyeight then it's easier to see the linked list, and after all, '/pool/head' and '/pool/tail' have nothing to do with gluster. the physical brick is '/pool/head/brick'
14:33 eightyeight nitpicky
14:35 kkeithley Yeah, I just couldn't parse the string you posted here. Off hand I wouldn't think the daisy chain arrangement would cause what you're seeing, and I don't see anything else wrong. Let me try a daisy chain arrangement here and see what happens
14:36 eightyeight ok
14:37 eightyeight this _is_ a sandbox, so we can tear it down, and rebuild easily enough
14:37 jgillmanjr Gotta love sandboxes
14:37 Norky I updated that paste: http://fpaste.org/wi0s/
14:38 eightyeight but i don't think the two directory oddity is a by-product of the linked list setup. something else is happening. and at this point, i'm not sure if it's only my client mount, or visible on all three
14:38 Norky why would XFS be reporting a >4K directory size? AFAICT that's broken
14:38 Norky err <4K
14:40 kkeithley right, I don't think it's the linked list, but before I say <cough>zfs</cough> I want to eliminate that.
14:40 kkeithley Norky: is it doing that on the brick?
14:41 Norky two out of the four bricks are being reported as being of size 366
14:41 Norky the root directory, that is
14:42 Norky see line 90 of http://fpaste.org/wi0s/
14:44 eightyeight i'd be surprised if it's zfs, actually. wouldn't surprise me if it's something else, like a gluster mount on top of another, or something
14:46 Norky "clush" is a parallel shell tool, "storage" points to the four gluster servers (lnasilo0, 1, 2, 3), the output is being aggregated, so if two machines return exactly the same stdout, it's shown only once
14:46 eightyeight i would like to find out if the other two boxes see the same in their client mount. they haven't gotten into work yet. we're doing this with our workstations. :)
14:46 kkeithley eightyeight: sure
14:46 kkeithley Norky: yup
14:48 eightyeight kkeithley: let me know what you see when you build the same setup. in the meantime, i'll see if the other two see the same oddity i'm seeing
14:48 eightyeight kkeithley: thx for your help
14:50 kkeithley yw, but maybe save the thanks for after it's fixed. ;-)
14:50 eightyeight heh
14:53 Norky I used clush to create all the bricks, i.e. the same command was run on each machine and "clush xfs_info" returns identical information for each brick
14:54 kkeithley do you have the mkfs.xfs command at hand? I guess `mkfs.xfs -i size=512`    anything else?
14:55 Norky I think I wrote it down, one sec...
14:55 kkeithley xfs devs here agree that size=366 is not normal
14:57 Norky clush -b -l root -w lnasilo[0-3] mkfs.xfs -i size=512 -d su=256k,sw=10,agcount=64 -l su=256k,lazy-count=1 /dev/vg_shelf0/brick3
14:58 Norky the same line was used for the other 3 (*4) bricks
14:58 Norky and they're all reporting a more reasonable 4K or 8K directory size
14:59 andreask joined #gluster
15:01 plarsen joined #gluster
15:01 Norky directory size <4K on subdirectories seems normal everywhere, but not the root of the FS
15:03 kkeithley Norky: devs ask "why restate the lazy-count?"
15:04 kkeithley and "ls the contents of the brick" i.e. the one with size 366
15:04 kkeithley from the brick
15:06 Norky http://fpaste.org/jj8F/  ls -ln of all bricks, the first two, 0 and 1 are the ones reporting 366 root dir. size
15:07 stopbit joined #gluster
15:09 Norky as to why "lazy-count=1", my reading of the mkfs.xfs man page suggested it might be useful, but I think I misread it - I didn't grok that it was on by default
15:13 kkeithley root dir size < 4096 just means the whole directory is "inlined" in an inode. Happens when there are only a couple entries. Like a symlink being fully contained in an inode instead of as a separate block on disk. Once more files or subdirs get created xfs should allocate a whole block and it'll change.
15:15 Norky ahh, okay, so if I were to create a few more subdirs that get put on those bricks, the directory will pass a threshold size and get a whole block to itself?
15:16 Norky right, I shall forget that red herring :)
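A quick way to see the behaviour kkeithley describes on any XFS filesystem, as a sketch (the path is hypothetical and nothing here is gluster-specific):

    cd /some/xfs/mount
    mkdir demo && stat -c '%s %n' demo      # small size (a few dozen bytes): directory is inlined in the inode
    touch demo/file{1..200}
    stat -c '%s %n' demo                    # now a full filesystem block (e.g. 4096) or more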
15:18 bugs_ joined #gluster
15:20 Norky I'm still seeing that complaint about "unable to self-heal", and NFS access is still broken
15:20 tqrst left #gluster
15:24 ctrianta joined #gluster
15:26 Norky http://fpaste.org/YJar/   this might be two problems
15:27 Staples84 joined #gluster
15:27 purpleidea joined #gluster
15:27 purpleidea joined #gluster
15:27 Norky d'oh, wrong log, I'm an idiot
15:31 arusso joined #gluster
15:32 Norky http://fpaste.org/5KQr/
15:33 Norky note that only one of the machines is actually making the /apps export available
15:33 thekev joined #gluster
15:34 Norky and that one machine is complaining about permissions/ownership of /
15:41 lala joined #gluster
15:50 errstr using glusterfs 3.3, with 3x2 Distributed-Replicate and geo-replication on, i am seeing two dentries per directory, e.g. a `mkdir foo` results in two 'foo' directories. any idea why this would be happening?
15:51 errstr ^ not happening with files, just directories
15:52 eightyeight kkeithley: errstr is one of my coworkers. copec the other.
15:52 errstr heh, missed the scrollback
15:52 Norky I was about to say, "someone else was reporting a very similar problem earlier" :)
16:00 ctrianta joined #gluster
16:03 manik joined #gluster
16:04 jgillmanjr To confirm - gluster client nodes don't need to unmount the volume in order for additional storage to show up, correct?
16:10 elyograg jgillmanjr: my testing indicates that is the case.
16:12 Norky same here
16:12 jgillmanjr Ok. Thank you
16:12 Norky still having problems, jgillmanjr ?
16:12 chouchins joined #gluster
16:13 jgillmanjr Norky: Actually, haven't tried adding the new nodes yet (work gets in the way sometimes lol), but I just wanted to know before if I need to unmount my cluster nodes or not
16:17 sashko joined #gluster
16:17 ctrianta joined #gluster
16:19 kkeithley eightyeight, errstr: okay, replicated your daisy chain setup. (but with fedora vms and xfs for the fs). Not seeing the duplicate dirs
16:21 kkeithley so, wrt daisy chain, like you, I didn't think that was the problem, but at least I've eliminated that.
16:22 eightyeight hmm
16:23 erik49 joined #gluster
16:23 erik49 sorry if this is a noob question, but my stripe=2 volume seems to also be replicating data
16:23 erik49 is that normal?
16:24 errstr eightyeight: i'm wondering if it has to do with us starting with 1 node and adding each one to the tail, with all those replaces with links from tail->head
16:24 erik49 the 350GB file i have on the volume is duplicated
16:24 erik49 but the volume is not supposed to be replicating
16:24 errstr kkeithley: you just created the nodes in one command, right?
16:24 erik49 here's the volume info: http://dpaste.com/891829/
16:25 erik49 maybe i just don't understand what a gluster stripe is..
16:26 kkeithley errstr: yes, one command:  gluster volume create eightyeight replica 2 192.168.122.241:/var/tmp/brick1/vol 192.168.122.242:/var/tmp/brick2/vol 192.168.122.242:/var/tmp/brick3/vol 192.168.122.243:/var/tmp/brick4/vol 192.168.122.243:/var/tmp/brick5/vol 192.168.122.241:/var/tmp/brick6/vol
16:26 kkeithley Creation of volume eightyeight has been successful. Please start the volume to access data.
16:26 kkeithley hang on and I'll paste the volume info
16:27 errstr what we did was create 1 node, with two bricks replicated to themselves, then added another separate host adding and replacing like a linked-list until we had a 3x2 configuration
16:28 errstr i'm hypothesising that it could be all the replace-brick commands we ran that caused it... which we can easily test
16:28 elyograg is there a new glusterfs-swift yet?  I know there was talk about upgrading it after swift released a new version.
16:28 eightyeight i'm game
16:29 kkeithley @stripe
16:29 Norky erik49, what makes you think it is replicating?
16:29 erik49 i see the file on two bricks
16:29 eightyeight errstr: i'll get with you in a second about that
16:29 erik49 and its full size
16:29 kkeithley I guess glusterbot's not back on yet
16:29 kkeithley :-(
16:30 erik49 after i copied it over
16:30 kkeithley Please see http://goo.gl/5ohqd about stripe volumes.
16:30 ndevos kkeithley: glusterbot is in #gluster-dev
16:31 kkeithley eightyeight: http://fpaste.org/hGGC/
16:31 kkeithley ndevos: ? We used to have glusterbot here too
16:31 kkeithley just yesterday even
16:31 erik49 I'll read that again, thanks
16:32 ndevos kkeithley: yeah, but it banned itself - avati is the only one who can free it again
16:32 kkeithley yeah, I saw that happened in #gluster-dev, not here
16:33 Dell_ joined #gluster
16:33 elyograg that's an interesting feature. ;)
16:33 Norky erik49, I believe that's normal, if you happened to examine the file on each brick, I think you'd see 'holes' that correspond to the stripes
16:33 ndevos for some reason it banned itself from this channel, but not from the other... weird bots
16:33 erik49 Norky, but they shouldn't be taking up the full size
16:33 kkeithley elyograg: my fedorapeople 3.3.1-8 has ufo using swift-1.7.4. Not up to 1.7.5 yet
16:33 Norky it's not really using the reported size on each brick
16:33 erik49 Norky, only half for a stripe=2
16:34 ndevos anyone using NFS? care to respond to http://lists.nongnu.org/archive/html/gluster-devel/2013-01/msg00093.html ?
16:34 elyograg kkeithley: that's better than the 1.4.8 at my last trial.  thanks.
16:34 Norky kkeithley, am I correct in this?
16:34 elyograg oh, i'm using fedora 18, and your repo didn't work for that last time i checked.  what is available there?
16:35 elyograg centos on the gluster bricks, but f18 on the peers for network access - NFS, Samba, Swift.
16:36 ndevos erik49: sounds like you are not using the coalesce option for your striped volume...
16:36 Norky try comparing ls -l to du
16:37 ndevos erik49: you may want to check http://review.gluster.com/3282
16:37 kkeithley elyograg: 3.3.1-8 is in the fedora18 updates now
16:37 erik49 ndevos, thanks will do
16:37 kkeithley Norky: worry, what?
16:37 kkeithley Norky: sorry, what?
16:37 Norky what I said to erik49, is it correct, or am I talking nonsense? :)
16:38 kkeithley Norky: yes, you're correct
16:38 ndevos erik49: xfs has some predictive heuristics for allocating files, basically it fills sparse files -> breaking the stripe idea
16:38 tqrst joined #gluster
16:39 kkeithley Where's bfoster, he's the expert on stripes and the stripe coalesce option
16:39 daMaestro joined #gluster
16:40 foster sounds like somebody is hitting speculative prealloc behavior..?
16:40 erik49 ndevos, ah that must be it
16:40 erik49 does coalesce fix that?
16:40 erik49 it doesn't seem part of the documentation
16:41 ndevos erik49: yeah, foster added that :)
16:41 erik49 i can only find the commit message about it
16:41 erik49 doesn't sound like something i should rely on :D
16:41 Norky ndevos, I use NFS, but dont' really consider myself expert enough to respond
16:41 tqrst I am rsyncing from a regular hard drive onto a gluster volume with -vaxl --stats --remove-source-files --whole-file --inplace. I saw a few "/mnt/my/gluster/volume/foobar: file not found" errors even though rsync is copying *to* my volume, coupled with errors such as E [afr-self-heal-common.c:2160:afr_self_heal_completion_cbk] 0-bigdata-replicate-15: background  meta-data data entry missing-entry gfid self-heal failed on /foobar. What's up with that?
16:41 erik49 perhaps i should just use mdadm as the underlying stripe layer?
16:41 tqrst the destination folder on my volume didn't exist, so all files created by rsync should be new
16:42 ndevos Norky: well, I'm interested to hear what you would expect to see when you do a "showmount -a $NFS_SERVER"
16:43 foster erik49: the purpose of the coalesce option was to sanely lay out files in striped volumes to avoid the issues with xfs speculative preallocation, if that is the problem you're having..?
16:43 ndevos erik49: that coalesce option should be documented somewhere, I'm pretty sure its stable for a while already
16:43 foster it's relatively new, last I checked I don't think it was part of a "release"
16:44 ndevos ah, well, I'm building rpms from 'master' all the time, and am sure it's in there :E
16:44 ndevos :D even
16:45 foster should be :)
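Assuming a build that includes that change, it is exposed as an ordinary volume option; a sketch of enabling it (the volume name is a placeholder, and the exact option key is worth confirming with `gluster volume set help` on your version):

    gluster volume set stripevol cluster.stripe-coalesce on
    gluster volume info stripevol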
16:47 kkeithley eightyeight: so how did you build your volume? If you've got the sequence of commands I can try it with my sandbox and see what happens in my sandbox.
16:48 eightyeight kkeithley: it appears it was fscked up xattrs
16:48 eightyeight copec knows more
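For anyone hitting something similar, the usual way to inspect those xattrs directly on the bricks is along these lines (brick paths illustrative); comparing the output for the same directory across bricks shows mismatched or missing trusted.gfid / trusted.glusterfs.dht entries:

    # run on each server, against the same directory on each brick
    getfattr -m . -d -e hex /pool/brick1/vol/foo
    getfattr -m . -d -e hex /pool/brick2/vol/foo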
16:48 GLHMarmot joined #gluster
16:48 Norky ndevos, I'd certainly like gluster servers that are serving NFS to maintain state about (a list of) connected clients, if that's what you mean :)
16:48 kkeithley oh good, so you solved it. Excellent
16:50 Norky why does gluster implement its own (v3 only) server, btw?
16:51 eightyeight Norky: because you can then ensure that the bricks are online, before the directory is exported via NFS
16:51 Norky if the answer is "it'd take too long to explain", that's fine
16:52 Norky it just struck me as reinventing the wheel, and then having to reimplement the kind of thing that your patch talks about, when there's an existing NFS server
16:53 kkeithley Norky: because glusterfs runs in user space is the short answer.
16:54 ndevos Norky: well, yeah, the state is important, but how exact would it need to be? does a "showmount -a $NFS_SERVER" listing all NFS-clients from the whole cluster sound good?
16:55 ndevos Norky: exporting a FUSE filesystem through the kernel-nfs-server is not an option, see the README.NFS from the fuse rpm
16:55 ndevos and well, implementing NFSv4 is quite some work too... so we're stuck with v3 for now
16:56 daMaestro joined #gluster
16:57 Norky hmm... I think I'd prefer that it return only the information for $NFS_SERVER, but would that be more work than returning the aggregate information in the CTDB?
16:57 kkeithley showmount on my gluster nfs server does show mounted clients.
16:57 ndevos kkeithley: keep a nfs-client mounted and restart glusterd on the server :)
16:57 Norky nothing here....
16:58 kkeithley oh, okay
16:58 Norky mind you, I have no control over the NFS clients in this case - they might be off for all I know
16:59 ndevos Norky: the difficulty is in the change of nfs-server after a client mounted it, when an IP fails over to another server
17:00 Norky ndevos, point taken about FUSE and the kernel NFS support - I didn't know that
17:00 ndevos Norky: after the fail-over there is not a new mount (or remount), the NFSv3 protocol/connection will still work, just talking to another server
17:00 Norky and yeah, I'm aware that NFSv4 is a very different protocol to NFSv3
17:00 erik49 foster for production striping would you recommend an mdadm layer then?
17:01 ndevos erik49: have you read http://goo.gl/5ohqd already?
17:02 bala joined #gluster
17:02 ndevos Norky: and, it is pretty difficult to say what client is connected to which NFS-server, the tcp-connection does not exist all the time
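For context, the list being discussed is the MOUNT-protocol state the server keeps, queried with showmount (server name is a placeholder); as ndevos describes, gluster's built-in NFS server holds it in memory per node, so it can be empty or stale after a restart or an IP failover even though clients remain mounted:

    showmount -e glusterserver    # exports (the started gluster volumes)
    showmount -a glusterserver    # client/export pairs the mount daemon still remembers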
17:04 Norky erik49, I think the conclusion is try without gluster striping, test real-world examples and see how fast it is - gluster striping is not generally expected to improve performance for many cases
17:05 Norky I came to gluster assuming I'd want striping as well, everything I've learnt since suggests not
17:06 foster erik49: Well, I don't think we recommend using stripe in production. I'd echo what Norky said.
17:07 erik49 ndevos, yeah
17:07 erik49 I've got multiple processes reading from 350+GB files
17:07 erik49 so it seems like striping should make a difference
17:10 manik joined #gluster
17:19 Cable2999_ joined #gluster
17:20 cable2999 joined #gluster
17:21 Norky erik49, do these processes only ever work on one file?
17:21 erik49 no
17:22 Norky err, I mean is there ever a time when only one file is active at any one time?
17:22 erik49 yes
17:22 erik49 well
17:22 erik49 at least being read
17:23 erik49 actually maybe it won't help that much
17:23 erik49 since all the processes are reading the whole file start to finish
17:24 Norky okay, then striping might help - but as you said, md striping would probably be better
17:24 Norky do try gluster striping, and benchmark it
17:26 Norky I'm sure the devs would welcome data :)
17:28 Norky good night folks
17:32 Mo___ joined #gluster
17:41 cable2999 I have a question about gluster replication.  From what I can determine, it is done at the brick level.
17:42 cable2999 is that a correct understanding?
17:45 eightyeight yes
17:45 eightyeight well, the file is what is replicated. not the full brick
17:46 eightyeight but the file is replicated between the bricks
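A minimal sketch of what that means in practice (hosts and paths are hypothetical): with replica 2, bricks are paired up at volume-creation time, and every file written through the mount is stored in full on both bricks of its pair.

    gluster volume create rep2 replica 2 srv1:/bricks/b1 srv2:/bricks/b1
    gluster volume start rep2
    mount -t glusterfs srv1:/rep2 /mnt/rep2
    echo hello > /mnt/rep2/f
    # the same file now exists on both srv1:/bricks/b1/f and srv2:/bricks/b1/f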
17:51 ultrabizweb joined #gluster
17:52 eightyeight no matter what i try, i cannot get a successful geo-replication working with 3.3.1. what am i missing?
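Hard to answer without logs, but for 3.3.x the basic sequence is roughly the following (master volume name, slave host and slave path are placeholders; passwordless ssh from the master node to the slave is the usual prerequisite), and /var/log/glusterfs/geo-replication/ on the master is the first place to look when status stays faulty:

    gluster volume geo-replication mastervol ssh://root@slavehost:/data/geo-slave start
    gluster volume geo-replication mastervol ssh://root@slavehost:/data/geo-slave status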
17:55 bauruine joined #gluster
18:11 jjnash left #gluster
18:13 theron joined #gluster
18:16 fixxxermet joined #gluster
18:36 GLHMarmot joined #gluster
18:36 bulde joined #gluster
18:45 GLHMarmot joined #gluster
18:57 theron joined #gluster
18:59 plarsen joined #gluster
19:02 greylurk left #gluster
19:02 ass3mbler joined #gluster
19:02 greylurk joined #gluster
19:12 gbrand_ joined #gluster
19:15 copec I'd like to know more about how georeplication works
19:15 copec I guess I can look at the vol files for this answer
19:15 copec where is the georeplication translator
19:16 copec and what nodes actually perform the replication
19:19 y4m4 joined #gluster
19:20 ass3mbler Hi all... I'm surely missing something, but is there some documentation about the .vol files, what the translators do and their parameters? In the 3.3 docs I can't find anything and only some options are documented
19:26 isomorphic joined #gluster
19:32 ass3mbler joined #gluster
19:35 ass3mbler can somebody point me in the right direction please? sorry for the newbie question
19:36 jgillmanjr ass3mbler: Well, here is this: http://www.gluster.org/community/documentation/index.php/Translators
19:36 jgillmanjr though that seems to be more for the developer side
19:37 ass3mbler jgillmanjr: thank you, but I was looking for something documenting the options for existing translators
19:37 GLHMarmo1 joined #gluster
19:43 jgillmanjr ass3mbler: Don't know if this will help, but found this: http://www.gluster.org/community/documentation/index.php/Gluster_Translators
19:44 kkeithley copec: geo-sync isn't done by a translator. Source for the gsyncd is in .../xlators/features/marker/utils/src/gsyncd/
19:44 ass3mbler jgillmanjr: thank you very much, it seems a good start! I really appreciate your help
19:45 copec thanks kkeithley
19:45 jgillmanjr ass3mbler: Glad I could help even a little!
19:46 * jgillmanjr still has his resize issue going on
19:46 jgillmanjr http://dpaste.org/iPjfH/
19:51 ShaunR joined #gluster
19:51 ShaunR joined #gluster
19:52 jgillmanjr Interesting. Doesn't look like the files are going to the new nodes...
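If new bricks were just added, that is expected behaviour rather than a fault: existing directories keep their old layout, so new files generally will not hash onto the new bricks until a fix-layout/rebalance has run. A sketch (volume and brick names are placeholders):

    gluster volume add-brick myvol server5:/bricks/b1 server6:/bricks/b1
    gluster volume rebalance myvol fix-layout start   # extend directory layouts onto the new bricks
    gluster volume rebalance myvol start              # optionally also migrate existing files
    gluster volume rebalance myvol status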
19:52 tqrst left #gluster
19:52 GLHMarmot joined #gluster
19:54 Rav|2 joined #gluster
20:03 a2 joined #gluster
20:05 Rav|2 is there any way to get rid of /tmp/xxxx.socket?
20:07 jgillmanjr Rav|2: context?
20:07 jgillmanjr Kill any application using it and rm?
20:07 Rav|2 it's glusterfs
20:07 kkeithley what version?
20:07 Rav|2 3.3.0
20:07 Rav|2 glusterfs 1320 root    6u  unix 0xffff88012e667b80      0t0 77006 /tmp/04f4e6fb01618fbecfdca3ac17ea181e.socket
20:08 Rav|2 I can't find anything about unix socket in my config
20:09 kkeithley seems it moved to /var/run in 3.3.1.
20:10 plarsen joined #gluster
20:10 kkeithley I'm sure it's hard-coded, not configurable
20:10 Rav|2 I see
20:18 zaitcev joined #gluster
21:07 rwheeler joined #gluster
21:32 Mo___ joined #gluster
21:40 frakt joined #gluster
21:47 sashko joined #gluster
21:59 tryggvil joined #gluster
21:59 tryggvil_ joined #gluster
22:15 drockna joined #gluster
22:15 drockna Does gluster support ipv6?
22:28 drockna1 joined #gluster
22:38 polfilm joined #gluster
22:43 plarsen joined #gluster
22:59 glusterbot joined #gluster
23:13 TekniQue so, what is the official recommendation when I have 40 or so drives in a machine with gluster
23:13 TekniQue run linux software raid to bring them into one brick or mount them all as individual bricks for gluster to handle?
23:13 TekniQue is there a huge performance penalty with a big number of bricks?
23:17 hattenator joined #gluster
23:18 a2 JoeJulian, ping
23:40 layer3switch joined #gluster
23:47 polfilm joined #gluster
23:54 sashko joined #gluster
