
IRC log for #gluster, 2013-02-15


All times shown according to UTC.

Time Nick Message
00:22 dustint joined #gluster
00:24 Shdwdrgn joined #gluster
00:42 semiosis DeltaF: you're probably doing a write test with dd, is that right?
00:43 semiosis DeltaF: first of all, dd is not representative of a real world use case, so its results are probably not valid
00:43 semiosis DeltaF: that being said though, if you increase the block size using bs=1M you should see more bandwidth utilization
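A quick illustration of the block-size point above; the mount path and counts are made up, the contrast between the two runs is what matters:

    # tiny blocks mostly measure per-request FUSE/network overhead
    dd if=/dev/zero of=/mnt/glustervol/ddtest bs=4k count=100000
    # 1M blocks come much closer to showing what the link can actually carry
    dd if=/dev/zero of=/mnt/glustervol/ddtest bs=1M count=400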
00:55 raven-np joined #gluster
01:30 tomsve joined #gluster
01:31 bala joined #gluster
01:49 DeltaF semiosis: No, not dd. a copy of files and then reading iostat. I think I'm reading the wrong info because it definitely was not going that rate.
01:49 DeltaF Shortly after I said that, it finished copying 15-20GB. Definitely would not have done that if it were going at that speed. :)
01:55 jiffe1 joined #gluster
01:59 tomsve joined #gluster
02:04 a3 joined #gluster
02:12 sjoeboo_ joined #gluster
02:16 sjoeboo_ joined #gluster
02:18 DeltaF hmm. I'm trying to tune stuff like this, but not sure how .vol files are used, etc. http://gluster.org/pipermail/gluster-users/2010-February/003998.html
02:18 glusterbot <http://goo.gl/9Yi4s> (at gluster.org)
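Worth noting alongside that thread: since the 3.x CLI the .vol files are generated by glusterd, and tuning is normally done with "gluster volume set" rather than by hand-editing them. A couple of illustrative commands, with a made-up volume name and values:

    gluster volume set gv_example performance.cache-size 256MB
    gluster volume set gv_example performance.write-behind-window-size 1MB
    gluster volume info gv_example   # "Options Reconfigured" lists what was changed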
02:55 glusterbot New news from newglusterbugs: [Bug 911443] Account and container objects updated needlessly resulting is a hefty performance hit for object and container PUT operations <http://goo.gl/lZr02>
02:55 raven-np joined #gluster
02:59 raven-np1 joined #gluster
03:05 pipopopo joined #gluster
03:13 tomsve joined #gluster
03:21 Shdwdrgn joined #gluster
03:25 Shdwdrgn joined #gluster
03:25 glusterbot New news from newglusterbugs: [Bug 911446] Internally or externally generated HEAD requests on accounts and containers can cause severe performance problems with Gluster/Swift responsive and impact volume stability <http://goo.gl/Gqmkt> || [Bug 911448] Unnecessary stat() and xattr() system calls made processing GET operations for containers and accounts <http://goo.gl/5V3zV>
03:35 sjoeboo_ joined #gluster
03:39 Shdwdrgn joined #gluster
03:41 sgowda joined #gluster
03:44 Ryan_Lane joined #gluster
03:54 lala joined #gluster
04:09 Shdwdrgn joined #gluster
04:19 sahina joined #gluster
04:20 sahina joined #gluster
04:20 Shdwdrgn joined #gluster
04:27 pai joined #gluster
04:37 Shdwdrgn joined #gluster
04:46 satheesh joined #gluster
04:46 bala1 joined #gluster
04:50 tomsve joined #gluster
04:56 deepakcs joined #gluster
04:58 dl joined #gluster
05:10 hagarth joined #gluster
05:12 rastar joined #gluster
05:12 rastar1 joined #gluster
05:13 vpshastry joined #gluster
05:13 bulde joined #gluster
05:15 tomsve joined #gluster
05:15 satheesh joined #gluster
05:16 jcapgun joined #gluster
05:16 jcapgun hi
05:16 glusterbot jcapgun: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
05:17 jcapgun hopefully somebody can help me here.  I'm actually having a weird issue where I've added a gluster volume, peers say they are disconnected, but when i do a gluster volume status, it shows that all except one server's bricks are offline
05:18 jcapgun volume was successfully created, so i'm not sure how or why the other nodes bricks are "offline"
05:19 jcapgun please note that i have another gluster volume already set up with these hosts and that one seems to be ok
05:19 jcapgun but here is the output of "gluster volume status" for this particular volume
05:20 jcapgun Brick firstip:/mnt/brick_savegame                    24011   Y       1635
05:20 jcapgun Brick secondip:/mnt/brick_savegame                    24011   N       N/A
05:20 jcapgun Brick thirdip:/mnt/brick_savegame                    24011   N       N/A
05:20 jcapgun Brick fourthip:/mnt/brick_savegame                    24011   N       N/A
05:20 jcapgun Brick fifthip:/mnt/brick_savegame                     24011   N       N/A
05:21 jcapgun Brick sixthip:/mnt/brick_savegame                     24011   N       N/A
05:21 jcapgun Brick seventhip:/mnt/brick_savegame                     24011   N       N/A
05:21 jcapgun Brick eighthip:/mnt/brick_savegame                     24011   N       N/A
05:21 jcapgun Brick ninthip:/mnt/brick_savegame                    24011   N       N/A
05:21 JoeJulian "peers say they are disconnected" If this wasn't a typo, that's a problem.
05:21 jcapgun oops sorry, peers are connected
05:21 JoeJulian Also, use fpaste or dpaste for sharing blocks of info
05:21 jcapgun ok
05:22 JoeJulian Have you checked the logs?
05:22 jcapgun i have checked the logs, and i see lots of items that state, disconnecting from a node
05:22 JoeJulian etc-glusterfs-glusterd.vol.log might tell something on the failed servers, or the brick log specifically might.
05:23 jcapgun ok
05:23 jcapgun let me check that
05:23 JoeJulian Check a ps on the failed servers to see if glusterfsd is running
05:23 jcapgun glusterfsd is running, but only for the other volume
05:23 jcapgun i have two volumes set up in this environment
05:24 JoeJulian got it
05:24 jcapgun seems the second glusterfsd doesn't seem to want to fire off
05:25 JoeJulian Did you find a log entry showing that?
05:25 jcapgun not yet
05:25 jcapgun going to search the log now
05:26 jcapgun by the way, thanks very much for your help
05:26 jcapgun it's really appreciated
05:26 JoeJulian 'cause I guarantee you it's not a matter of desire. ;)
05:26 JoeJulian You're welcome.
05:26 jcapgun where specifically is that log?
05:27 jcapgun etc-glusterfs?
05:28 jcapgun there is only one file in that dir, and that's glusterd.vol
05:28 JoeJulian The ones we'd be interested in are on one of the failed servers. /var/log/glusterfs/etc-glusterfs-glusterd.vol.log and /var/log/glusterfs/bricks/[some name that matches the brick definition]
05:28 jcapgun kk
05:30 jcapgun cannot create listener, initing the transport failed
05:30 jcapgun transport-type 'rdma' is not valid or not found on this machine
05:30 JoeJulian And is it?
05:31 jcapgun i'm not sure if it is or not
05:31 jcapgun i'm assuming it is since the other volume is ok?
05:31 JoeJulian Are you using infiniband?
05:31 jcapgun no
05:31 JoeJulian Then you don't have rdma
05:31 jcapgun rdma is only used for that correct
05:31 jcapgun ok
05:32 sgowda joined #gluster
05:32 jcapgun let me look at the other log you mentioned as well
05:33 jcapgun yeah, some bad stuff in there i think
05:33 JoeJulian fpaste it if you want me to take a look.
05:34 JoeJulian Probably should have you ,,(pasteinfo) too
05:34 glusterbot Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
05:34 jcapgun ok, just making sure there is no sensitive data in there :)
05:34 jcapgun could get in trouble I think
05:35 jcapgun like IP's etc.
05:35 JoeJulian should be using ,,(hostnames) anyway, imho.
05:35 glusterbot Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
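As a command, glusterbot's advice looks roughly like this (hostname hypothetical):

    # from any peer other than server2: re-probe it by name to switch its address from IP to hostname
    gluster peer probe server2.example.com
    gluster peer status   # should now list server2 by name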
05:39 ramkrsna joined #gluster
05:39 jcapgun http://dpaste.org/vEz4h/
05:39 glusterbot Title: dpaste.de: Snippet #219279 (at dpaste.org)
05:41 JoeJulian Oh, that's weird...
05:41 jcapgun what do you see sir
05:41 JoeJulian Is anything special about /mnt/brick_savegame
05:41 jcapgun nope, not at all
05:42 jcapgun it's just a dir that's mounted to an LVM volume
05:42 JoeJulian What filesystem?
05:42 jcapgun ummm
05:42 jcapgun it is
05:42 jcapgun ext3
05:42 JoeJulian Well, that's bad too, but not for this reason.
05:42 JoeJulian @ext4
05:42 glusterbot JoeJulian: Read about the ext4 problem at http://goo.gl/PEBQU
05:43 JoeJulian And that applies to all extN
05:43 jcapgun let me double check the filesystem quickly
05:43 jcapgun yes, ext3
05:44 JoeJulian Basically it's saying that /mnt/brick_savegame/.glusterfs/00/00/00000000-0000-0000-0000-000000000001 doesn't point to the root of the brick, and it should. stat /mnt/brick_savegame/.glusterfs/00/00/00000000-0000-0000-0000-000000000001
05:45 jcapgun you want me to run that command?
05:45 jcapgun oh nevermind
05:45 jcapgun but why is the first node working?
05:45 jcapgun that brick is set up exactly the same on all nodes
05:46 JoeJulian not sure.
05:46 JoeJulian So yeah, fpaste that stat command
05:46 jcapgun ok, one second
05:47 jcapgun i'm trying to copy it from this window
05:47 jcapgun and it's not easy
05:47 JoeJulian Hehe
05:47 JoeJulian Are you running an rpm based distro?
05:48 jcapgun yes
05:48 jcapgun running centos
05:48 JoeJulian You can yum install fpaste (it's in epel) which makes it as simple as piping.
05:49 jcapgun not sure i should install anything on these servers.  I might want to ask my manager first. :P
05:49 JoeJulian Sure
05:49 jcapgun i know it's a tiny little thing but
05:49 jcapgun do i need to stat the whole line including the 000's?
05:49 JoeJulian yep
05:50 JoeJulian Unless you want to replace them with wildcards (I do, I'm lazy).
05:50 balunasj joined #gluster
05:50 jcapgun ok so this fpaste thing
05:50 jcapgun i have it copied to the clipboard
05:50 jcapgun the output
05:51 jcapgun i'm new to this IRC stuff, so please forgive me
05:52 JoeJulian Dpaste is just as good. I just try to keep people away from pastebin
05:53 JoeJulian If you're asking about the fpaste utility, though, it would be "stat /mnt/brick_savegame/.glusterfs/00/00/00000000-0000-0000-0000-000000000001 | fpaste"
05:54 jcapgun http://fpaste.org/O8MZ/
05:54 glusterbot Title: Viewing Paste #277691 (at fpaste.org)
05:55 JoeJulian Ooh! Yours did it too. I encountered this once before and I filed a bug report but I have no idea how to produce it.
05:56 JoeJulian @query 00000000-0000-0000-0000-000000000001
05:56 glusterbot JoeJulian: No results for "00000000-0000-0000-0000-000000000001."
05:56 JoeJulian @query directory
05:56 glusterbot JoeJulian: Bug http://goo.gl/MOc1N medium, unspecified, ---, csaba, NEW , Cannot delete directory when special characters are used.
05:56 glusterbot JoeJulian: Bug http://goo.gl/ZzHRS unspecified, medium, ---, vshastry, NEW , Quota doesn't handle directory names with ','.
05:56 glusterbot JoeJulian: Bug http://goo.gl/kdV4E medium, unspecified, ---, divya, ASSIGNED , subdirectory nfs mount on solaris
05:56 JoeJulian that's not going to be useful...
05:56 jcapgun what's going on?  LOL
05:57 JoeJulian I'm trying to remember what I titled that bug report.
05:57 JoeJulian What version are you running?
05:57 jcapgun of glusterfs?
05:57 jcapgun i,,,
05:57 jcapgun umm
05:57 jcapgun how to get the version
05:57 jcapgun with the command line
05:58 jcapgun it's
05:58 jcapgun glusterfs 3.3.1 built on Oct 11 2012 22:01:04
05:58 sgowda joined #gluster
05:58 JoeJulian ok
05:58 jcapgun so i'm screwed for now? :)
05:59 jcapgun is it just lucky that this has happened to me?
05:59 JoeJulian No, it's easy to fix...
05:59 jcapgun oh cool!
05:59 jcapgun but it's a bug you say?
05:59 jcapgun and I didn't do anything wrong with  my setup? :)
05:59 JoeJulian But I'm hoping we can take this opportunity to try to figure out how it happened.
05:59 jcapgun yeah for sure
06:00 JoeJulian bug 859581
06:00 glusterbot Bug http://goo.gl/60bn6 high, unspecified, ---, vsomyaju, CLOSED WORKSFORME, self-heal process can sometimes create directories instead of symlinks for the root gfid file in .glusterfs
06:00 JoeJulian Can I see the "gluster volume info $vol" for that volume please?
06:01 jcapgun sure, one second
06:02 jcapgun http://fpaste.org/OcN4/
06:02 glusterbot Title: Viewing Paste #277693 (at fpaste.org)
06:02 rastar1 joined #gluster
06:03 JoeJulian 3-way replication... That's consistent with how I generated the problem.
06:03 shylesh joined #gluster
06:04 jcapgun and actually, yesterday was crappy
06:04 jcapgun i had to redo this volume
06:04 jcapgun because
06:04 JoeJulian Was that log file from the creation of the volume, or has it been rotated?
06:04 jcapgun in the gluster volume itself, there was a bunch of duplicated directories (well it looked like it anyway)
06:04 JoeJulian Makes sense
06:04 jcapgun that's why i deleted the volume and started from scratch
06:05 jcapgun ummm
06:05 jcapgun it's been appended to
06:05 jcapgun because the original volume was also called gv_savedata
06:05 jcapgun i created a new volume, same name, but with different brick names
06:05 jcapgun the brick names used to be called brick3
06:05 jcapgun in /mnt/brick3
06:06 JoeJulian 2013-02-14 21:30:30.435294 is the first timestamp in that brick log. I'm looking for something closer to when it was first created.
06:07 jcapgun hmmm
06:07 jcapgun well those servers are in dallas
06:07 jcapgun i live in australia
06:07 jcapgun so that means that is from today
06:07 JoeJulian protip: always use GMT for all your servers.
06:07 jcapgun here, it's the 15th already
06:08 jcapgun yes
06:08 JoeJulian So the fix is to replace that directory with a symlink: ln -sf ../../.. /mnt/brick_savegame/.glusterfs/00/00/00000000-0000-0000-0000-000000000001
06:09 JoeJulian Maybe I'll just see if I can make it happen again.
06:09 JoeJulian Was this a brand-new volume?
06:09 JoeJulian Yesterday, I mean?
06:09 jcapgun ummm
06:09 JoeJulian Well, you used new bricks, so even today...
06:09 jcapgun i created it a couple of days ago
06:09 jcapgun yes
06:09 jcapgun brand new today
06:10 jcapgun i'm not sure i understand
06:10 JoeJulian mmkay... that should make it fairly easy to duplicate...
06:10 jcapgun do i need to do that on all the nodes
06:10 JoeJulian I hope
06:10 JoeJulian no, just all the servers.
06:10 JoeJulian Well, all but the one that's working.
06:10 jcapgun yeah, i mean on all servers
06:10 jcapgun what does this do
06:11 JoeJulian The program was /supposed/ to create that symlink. Instead it somehow created a directory.
06:11 jcapgun hmm, ok
06:12 jcapgun how the heck do you copy from this thing
06:12 jcapgun :)
06:12 JoeJulian I use XChat so I just highlight.
06:12 jcapgun ok, my linux foo is a bit lightweight
06:12 jcapgun so the symlink will be pointing to what from the above command?
06:12 JoeJulian To the root of the brick.
06:12 JoeJulian In your instance, /mnt/brick_savegame
06:13 JoeJulian But it has to be ../../..
06:13 raghu joined #gluster
06:14 jcapgun ok, and it creates that symlink where?
06:14 jcapgun sorry
06:14 jcapgun i got an error
06:14 jcapgun target `00-0000-0000-0000-000000000001' is not a directory
06:14 JoeJulian ... ok...
06:15 JoeJulian rmdir /mnt/brick_savegame/.glusterfs/00/00/00000000-0000-0000-0000-000000000001 && ln -sf ../../.. /mnt/brick_savegame/.glusterfs/00/00/00000000-0000-0000-0000-000000000001
06:15 jcapgun there's a space in the last set of 0's?
06:15 JoeJulian http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
06:15 glusterbot <http://goo.gl/j981n> (at joejulian.name)
06:15 JoeJulian No spaces.
06:16 JoeJulian Oh, here...
06:16 jcapgun that was the issue then i think
06:16 jcapgun there's a space when i copied and pasted
06:16 JoeJulian Ah
06:16 jcapgun i got this
06:16 jcapgun rmdir /mnt/brick_savegame/.glusterfs/00/00/​00000000-0000-0000-0000-000000000001 && ln -sf ../../../mnt/brick_savegame/.gluster​fs/00/00/00000000-0000-0000-0000-00 0000000001
06:17 JoeJulian http://fpaste.org/43AJ/
06:17 glusterbot Title: Viewing Paste #277694 (at fpaste.org)
06:17 jcapgun dir not empty :)
06:17 jcapgun rm -rf?
06:18 JoeJulian yes
06:18 jcapgun ok done
06:18 jcapgun on one node
06:18 jcapgun can we see if it works on just the one node first?
06:19 JoeJulian sure. Just restart glusterd.
06:19 jcapgun on the node i just ran that on
06:19 jcapgun k
06:19 JoeJulian yep
06:19 jcapgun wow
06:19 jcapgun it worked
06:19 jcapgun you are quite good at this stuff
06:19 jcapgun very impressed pal :)
06:19 JoeJulian :) Thanks.
06:20 jcapgun how the heck did you know to do that
06:20 JoeJulian I read the source when it happened to me.
06:20 jcapgun wow
06:20 JoeJulian Read this about that directory structure: http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
06:20 glusterbot <http://goo.gl/j981n> (at joejulian.name)
06:20 JoeJulian For that matter, there may be many articles on my blog that you'll find interesting.
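Pulling the fix from this exchange into one place, a sketch for each server whose brick shows offline (the brick path matches this particular case; adjust as needed):

    # the root gfid entry should be a symlink back to the brick root, not a directory
    stat /mnt/brick_savegame/.glusterfs/00/00/00000000-0000-0000-0000-000000000001
    # if it is a directory, replace it and restart glusterd so the brick can start
    rm -rf /mnt/brick_savegame/.glusterfs/00/00/00000000-0000-0000-0000-000000000001
    ln -s ../../.. /mnt/brick_savegame/.glusterfs/00/00/00000000-0000-0000-0000-000000000001
    service glusterd restart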
06:20 jcapgun one other thing i should mention
06:21 jcapgun is that for some reason it didn't like /mnt/brick_savegame initially
06:21 jcapgun so i read about how to remove attributes
06:21 jcapgun and i'm not sure why it thought it had existed before because it never did
06:21 JoeJulian Ah, so you've been to my blog before.
06:21 jcapgun oh, haha
06:21 jcapgun that was yours as well?
06:21 JoeJulian Probably.
06:22 JoeJulian I was the first to document that.
06:23 jcapgun very cool
06:23 jcapgun ok
06:23 jcapgun so why did it think that existed already?
06:23 jcapgun i do have another brick in /mnt dir
06:24 jcapgun called /mnt/brick_mobiledev
06:24 jcapgun was it just confused?
06:24 JoeJulian I don't see how it would be...
06:24 JoeJulian Maybe someone else tried creating it once?
06:24 jcapgun and the previous one used to be called /mnt/brick3
06:24 jcapgun which i then deleted
06:25 JoeJulian Maybe /mnt/brick3 wasn't formatted before it was remounted to /mnt/brick_savegame?
06:26 jcapgun i never mounted /mnt/brick3 to brick_savegame
06:26 jcapgun brick3 was mounted to an LVM volume
06:26 JoeJulian I can only speculate, but the only way for those xattrs to be there was for it to have previously been a brick.
06:26 jcapgun hmmm
06:26 jcapgun ok
06:27 JoeJulian Well, there's one bug if the name of the brick contains all of the name of another brick. /mnt/foo3 would break if you had a /mnt/foo
06:28 jcapgun ok
06:28 jcapgun so this would break then
06:28 JoeJulian /mnt/brick_savegame would give that error if you had a /mnt/brick_save
06:28 jcapgun brick3 -> brick_savegame
06:29 jcapgun or no
06:29 jcapgun that should work
06:29 JoeJulian No
06:29 jcapgun oh wait though
06:29 JoeJulian brick3_savegame yes.
06:29 jcapgun brick_savegame
06:29 jcapgun and brick_mobiledata
06:29 jcapgun no
06:29 JoeJulian nope
06:29 jcapgun damn
06:29 jcapgun ah well
06:29 jcapgun minor issue compared to the one i just experienced
06:29 JoeJulian Even so, the xattrs wouldn't have been there.
06:29 jcapgun i would have never figured it out
06:29 jcapgun but my other issue
06:30 jcapgun so yesterday when they were all "online" however
06:30 jcapgun the duplicate folders
06:30 jcapgun it was mangled
06:30 Nevan joined #gluster
06:30 jcapgun let me fix the rest of these machines
06:30 jcapgun i did have a "transport" error earlier
06:30 jcapgun when trying to write the file
06:30 jcapgun but it's probably because of this
06:31 JoeJulian The duplicate folders may have been the same symlink issue.
06:31 jcapgun see it was weird though
06:31 jcapgun because for example
06:31 JoeJulian It would have caused all sorts of hate and discontent.
06:31 jcapgun there was 3 directories of each folder
06:31 JoeJulian One for each replica
06:31 jcapgun 3 dups
06:31 jcapgun yes
06:31 JoeJulian That's that symlink problem.
06:31 jcapgun but you could cd into it
06:31 JoeJulian yep
06:31 jcapgun and when you deleted one, it would delete all of them
06:31 JoeJulian yep
06:32 jcapgun files were ok though
06:32 jcapgun ?
06:32 JoeJulian This will fix that.
06:32 jcapgun so this was not my fault :)
06:32 JoeJulian Though once you stopped glusterfsd it wouldn't (as you discovered) restart.
06:32 JoeJulian no, not your fault. Just some weird edge-case bug that neither I nor the developers have been able to isolate.
06:33 jcapgun so this means the other volume is ok though
06:33 jcapgun i don't see the same issue there
06:33 JoeJulian If I could have all your logs, I could probably figure it out, but I'm content with getting you working and trying to repro it myself.
06:34 JoeJulian Yes.
06:34 tomsve joined #gluster
06:34 JoeJulian And once this symlink is fixed, it doesn't return.
06:35 jcapgun almost done here
06:35 jcapgun then i'm going to try and write a file
06:37 jcapgun ok, i'm going to mount one of these guys
06:37 jcapgun so should we switch to xfs?
06:37 JoeJulian yes
06:38 JoeJulian There's been some discussion with Teddy Ts'o about ways to resolve the problem, but for now it'll get into an infinite loop.
06:42 jcapgun dude
06:42 jcapgun it works
06:42 jcapgun perfectly
06:42 jcapgun best
06:42 jcapgun thanks again for all your help
06:42 jcapgun really
06:43 JoeJulian You're welcome.
06:43 jcapgun i really appreciate it big time
06:43 JoeJulian I've been there, myself. That's why I'm here now.
06:43 rgustafs joined #gluster
06:43 jcapgun glusterfs is pretty damn cool though
06:43 jcapgun :)
06:43 JoeJulian When I first started with Gluster, there were 16 people in the channel and none of them actually here.
06:44 JoeJulian I had to figure everything out by myself. It kinda pissed me off and I started hanging out here out of spite.
06:45 JoeJulian 2 1/2 years later...
06:45 jcapgun lol
06:45 jcapgun well i'm going to stay on here now as well
06:45 jcapgun :)
06:45 JoeJulian cool :)
06:45 jcapgun and try to help other people that might have an issue
06:45 jcapgun this particular one
06:45 jcapgun :D
06:45 JoeJulian It'll be nice to have someone in your timezone.
06:45 JoeJulian So which part of oz?
06:46 jcapgun Melbourne actually
06:46 jcapgun and I've just moved here with my wife 3 weeks ago
06:46 jcapgun :)
06:46 jcapgun from Vancouver, BC, Canada :)
06:46 JoeJulian Hey, cool. My boss' daughter just moved down there recently. And I'm in Edmonds, WA
06:46 jcapgun very cool
06:47 JoeJulian She runs the marketing for some hippie grocery thing...
06:47 jcapgun very cool :)
06:47 jcapgun you mean she moved to Vancouver?
06:47 jcapgun or Melbourne?
06:47 JoeJulian No, there in Melbourne
06:47 jcapgun ah, cool
06:47 jcapgun well you should visit here sometime
06:47 jcapgun great place
06:48 jcapgun and if you do, we'll go for a few beers :)
06:48 jcapgun on me ;)
06:48 JoeJulian It's on my to-do list.
06:48 jcapgun cool!
06:48 JoeJulian People's Market... That's the one.
06:48 jcapgun ok, i'll check it out
06:49 jcapgun do you play mobile games?
06:49 JoeJulian Sometimes. If I'm really bored.
06:49 jcapgun cool
06:49 JoeJulian So which ones are you?
06:50 jcapgun well if you want a really good mobile/tablet game, get Real Racing 3 at the end of the month
06:50 jcapgun i've been working with EA for the past 7 years
06:50 jcapgun and they bought a company called Firemonkey's
06:50 JoeJulian I'm not sure I'd say that around any gamers.
06:50 jcapgun which is who i work for now
06:50 jcapgun haha
06:50 jcapgun i know right?
06:51 JoeJulian I'm playing wurm online right now, a java based sandbox game.
06:51 jcapgun ok
06:51 jcapgun i'll check it out
06:51 JoeJulian The other one that's piqued my interest is Firefall. The only thing wrong with it is that it's on Windows.
06:51 jcapgun haha
06:51 jcapgun you hate windows
06:52 JoeJulian I actually had to install a windows partition so I could play it.
06:52 JoeJulian That's the only thing on it.
06:52 jcapgun lol
06:52 JoeJulian And yes. Me and windows have had a very long dysfunctional relationship.
06:53 jcapgun i'm just starting to get into the linux world
06:53 jcapgun and i'm liking it
06:53 jcapgun there is so much you can do
06:53 jcapgun and it's sooo stable eh
06:53 JoeJulian I always thought, in the Windows 3.1 days, that the "tada" sound that they played at startup was in reference to, "Tada! It booted this time!"
06:53 jcapgun if i was going to run any server, I'd definitely go linux
06:53 jcapgun hahahahaha
06:54 JoeJulian Microsoft feels the same way. <shhh> They have more linux servers than windows servers.
06:54 jcapgun haha
06:54 jcapgun i bet they do
06:55 JoeJulian I was actually at the Windows 2.0 launch presentation at Kane Hall at the University of Washington presented by Bill Gates himself.
06:55 jcapgun wow
06:55 jcapgun how long ago was that
06:55 JoeJulian We talked briefly about this new hot thing that was going to revolutionize software distribution.
06:55 JoeJulian The CD-ROM.
06:56 jcapgun that's crazy
06:56 jcapgun that's cool that you were in that
06:57 jcapgun oh
06:57 jcapgun i do have one other question for you sorry
06:57 JoeJulian Fire away
06:57 jcapgun so is there a way, to make a server access the share locally
06:58 jcapgun i mean the volume
06:58 jcapgun instead of going through the network?
06:58 JoeJulian Yes and no.
06:58 jcapgun ok
06:58 JoeJulian The fd is opened on the first-to-respond. Usually that's the local server (if the file's on that server of course).
06:58 18VAAQUN6 joined #gluster
06:59 jcapgun ok, so in a distributed environment, the chances are that it will not be on that server
06:59 JoeJulian If the local server's too busy, it'll open on whichever one is the first to respond. That /usually/ will optimize your access times.
06:59 sgowda joined #gluster
06:59 JoeJulian On your current configuration, there's a 1:3 chance it'll be on that server.
06:59 jcapgun so i'm curious
07:00 jcapgun on each server, i mount a glusterfs volume
07:00 jcapgun and that volume is like so:
07:00 jcapgun someipordnsname:/gv_savedata
07:00 jcapgun and that ipordnsname is the actual server
07:00 jcapgun i do that for each server
07:01 JoeJulian @mount server
07:01 glusterbot JoeJulian: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrnds, or (#2) Learn more about the role played by the server specified on the mount command here: http://goo.gl/0EB1u
07:01 jcapgun so it really does not matter
07:01 JoeJulian right
07:01 jcapgun so i can make it easy and use the same server on each client
07:02 JoeJulian Yep
07:02 JoeJulian Or use an rrdns to ensure that the client will be able to mount the volume if a server is down for some reason.
07:03 jcapgun hmmm ok
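What that looks like in practice, with hypothetical names (the mount server only hands the client the volume definition; after that the client talks to every brick server directly):

    # any server in the pool works as the mount source
    mount -t glusterfs server1.example.com:/gv_savedata /mnt/savedata
    # or point at a round-robin DNS name that resolves to several servers,
    # so the mount still succeeds when one particular server is down
    mount -t glusterfs gluster.example.com:/gv_savedata /mnt/savedata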
07:05 jcapgun i'm going to head out for now Joe, but thanks again for everything
07:05 jcapgun let's keep in touch
07:05 jcapgun i'll be on here a lot
07:05 jcapgun so for sure we'll say hi again
07:05 ngoswami joined #gluster
07:06 JoeJulian Sounds good. See ya
07:07 turbo124 hi guys
07:07 turbo124 I am trying to geo-replicate some Virtual Machines, however during the replication process, the VMs are getting put into Read-Only mode file system, i presume due to the file locking being imposed by rsync, i haven't seen any literature that this occurs... just needed confirmation that this is expected behaviour?
07:08 JoeJulian I wouldn't expect that, no.
07:08 JoeJulian This is in 3.3.1?
07:08 turbo124 yes
07:09 JoeJulian I didn't think rsync locked the file...
07:10 JoeJulian If I were diagnosing that, I'd probably throw an strace on the master and see if that's actually happening.
07:11 turbo124 this is in production, so i'm hesitant to turn geo-rep back on... may need to create another environment :-/ i was just wondering if this was expected behaviour... It does make sense that rsync would lock the file, even if it is just read-only..
07:14 JoeJulian Well, I would hope that the no-blocking-io flag is set
07:17 turbo124 cool, ok thanks for help Joe.
07:18 vimal joined #gluster
07:23 JoeJulian hrm... no, they didn't include that switch afaict.
07:25 JoeJulian turbo124: If it were me, I'd add --no-blocking-io to line 508 of /usr/libexec/glusterfs/python/syncdaemon/resource.py
07:25 JoeJulian And I'd also recommend you file a bug report
07:25 glusterbot http://goo.gl/UUuCq
07:25 JoeJulian ^
07:26 turbo124 wow
07:26 cw joined #gluster
07:26 turbo124 thanks, will do!
07:34 turbo124 Bug 911489 has been added to the database
07:34 glusterbot Bug http://goo.gl/rlqUc urgent, unspecified, ---, csaba, NEW , Georeplication causing Virtual Machines to be put into Read Only mode.
07:35 JoeJulian Cool, thanks.
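--no-blocking-io is a standard rsync flag (the negation of --blocking-io); the suggestion above is to add it to the argument list geo-replication builds in resource.py. A hedged way to try the flag on its own, with made-up paths and deliberately not the exact option set geo-rep passes:

    rsync -a --no-blocking-io /bricks/vmstore/ backuphost:/srv/georep-test/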
07:36 glusterbot New news from resolvedglusterbugs: [Bug 764890] Keep code more readable and clean <http://goo.gl/p7bDp>
07:36 tomsve joined #gluster
07:37 cw joined #gluster
07:49 _br_ joined #gluster
07:49 ekuric joined #gluster
07:55 _br_ joined #gluster
07:56 glusterbot New news from newglusterbugs: [Bug 911489] Georeplication causing Virtual Machines to be put into Read Only mode. <http://goo.gl/rlqUc>
07:59 andreask joined #gluster
08:00 ctria joined #gluster
08:18 tjikkun_work joined #gluster
08:23 _br_ joined #gluster
08:27 _br_ joined #gluster
08:32 tryggvil joined #gluster
08:35 dobber joined #gluster
08:35 overclk joined #gluster
08:37 bulde joined #gluster
08:38 aravindavk joined #gluster
08:40 cwin joined #gluster
08:46 WildPikachu joined #gluster
08:48 Staples84 joined #gluster
08:49 JoeJulian Wow! Do we know anyone near Chelyabinsk?
08:51 gbrand_ joined #gluster
08:55 ndevos my 'near' is about 4200km away according to google maps
08:55 JoeJulian Did you see http://rt.com/news/meteorite-crash-urals-chelyabinsk-283/
08:55 glusterbot <http://goo.gl/S1c2s> (at rt.com)
08:56 ndevos wow, no
08:58 jiffe1 joined #gluster
09:04 aravinda_ joined #gluster
09:10 sonne joined #gluster
09:11 sonne greetings!
09:11 z00dax hello
09:11 glusterbot z00dax: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:12 z00dax glusterbot: but I dont have a question :(
09:13 ndevos you still can get an answer ;)
09:13 sonne i have one though...
09:13 z00dax .. eventually :D
09:13 sonne if i have a distributed volume with, say, 3 nodes
09:13 sonne if i remove one of the bricks i suppose data gets moved elsewhere....
09:14 sonne i'm wondering what happens if there isn't enough available space?
09:14 cw joined #gluster
09:16 z00dax sonne: so, thats mostly a physics issue. you cant sqeeze in more storage than you have
09:16 sonne that's why i'm wondering :)
09:16 sonne i will eventually test it myself, but maybe someone knew already
09:17 z00dax having been in that exact situation not long ago, I recommend ( with 3.2 ) to have 125% capacity at time of brick removal
09:17 z00dax also, i hope you are ok with the idea that if its only distributed, there is no redundancy ( i.e your remove will need to be planned, and executed within gluster )
09:18 sonne yep
09:18 sonne i'm reading administration guide chapter 5 on volyme types
09:19 sonne i'm intrigued by "distributed replicated"
09:19 bulde joined #gluster
09:19 sonne looks like the closest thing to a raid5 that you can get :)
09:19 gbrand_ joined #gluster
09:23 shireesh joined #gluster
09:23 cw joined #gluster
09:27 bala1 joined #gluster
09:36 manik joined #gluster
09:39 bulde1 joined #gluster
09:43 pai joined #gluster
10:01 mooperd joined #gluster
10:03 bauruine joined #gluster
10:18 bulde joined #gluster
10:23 andrei_ joined #gluster
10:34 inodb joined #gluster
10:44 sahina joined #gluster
11:01 shireesh joined #gluster
11:05 bala1 joined #gluster
11:09 lh joined #gluster
11:09 Staples84 joined #gluster
11:27 glusterbot New news from newglusterbugs: [Bug 906832] chown system calls() performed on object creation after being renamed into place <http://goo.gl/RR0wr>
11:37 luis_alen joined #gluster
11:41 rastar1 joined #gluster
11:44 rastar left #gluster
12:09 Staples84 joined #gluster
12:22 sjoeboo_ joined #gluster
12:26 dobber joined #gluster
12:30 andrei_ joined #gluster
12:40 raven-np joined #gluster
12:53 JuanBre joined #gluster
13:13 vpshastry joined #gluster
13:14 tryggvil joined #gluster
13:22 luis_alen left #gluster
13:26 dustint joined #gluster
13:28 jclift joined #gluster
13:30 ninkotech_ joined #gluster
13:39 sjoeboo_ joined #gluster
13:42 andreask joined #gluster
13:45 masterzen joined #gluster
13:51 Staples84 joined #gluster
13:55 masterzen joined #gluster
13:56 dustint joined #gluster
13:59 edward1 joined #gluster
14:06 aliguori joined #gluster
14:08 JuanBre joined #gluster
14:15 rwheeler joined #gluster
14:29 manik joined #gluster
14:38 awickham joined #gluster
14:44 manik joined #gluster
14:45 ceocoder1 joined #gluster
14:46 ceocoder1 left #gluster
14:57 stopbit joined #gluster
15:02 Staples84 joined #gluster
15:05 _br_ joined #gluster
15:07 _br_ joined #gluster
15:07 _br_ joined #gluster
15:12 hagarth joined #gluster
15:18 jbrooks joined #gluster
15:18 manik joined #gluster
15:21 bugs_ joined #gluster
15:25 tryggvil joined #gluster
15:34 bennyturns joined #gluster
15:49 bala joined #gluster
15:50 plarsen joined #gluster
15:57 jag3773 joined #gluster
16:00 nueces joined #gluster
16:01 sjoeboo_ joined #gluster
16:01 root joined #gluster
16:03 jag3773 joined #gluster
16:14 DeltaF joined #gluster
16:16 bluefoxxx joined #gluster
16:16 bluefoxxx ok I think I figured this out
16:16 bluefoxxx GlusterFS in no way maintains actual consistency.
16:16 manik joined #gluster
16:16 bluefoxxx It appears to maintain consistency of the exported volume
16:17 bluefoxxx i.e. the actual bricks may hold all kinds of crazy shit in their meta-data and the contents can be wildly different, as long as what's actually exported is the same
16:18 bluefoxxx case in point:  I deleted EVERYTHING (total rm -rf) from a replicated volume.  Brick 1 has 254742 files and directories in .glusterfs; brick 2 has 254730
16:18 bluefoxxx that's by sudo ls -laR /mnt/silo0/.glusterfs | wc -l
16:23 randomcamel given that GlusterFS's only purpose in life is to provide consistent exported volumes, "in no way maintains actual consistency" seems a little harsh when it's succeeding at its stated mission.
16:28 glusterbot New news from newglusterbugs: [Bug 902953] Clients return ENOTCONN or EINVAL after restarting brick servers in quick succession <http://goo.gl/YhZf5>
16:34 bluefoxxx randomcamel, shrug.
16:35 zaitcev joined #gluster
16:35 bluefoxxx randomcamel, all I know is I have file systems I've made from scratch and replicated, the exported volume is consistent, but the file systems acting as 'bricks' are not in the least bit identical.
16:36 bluefoxxx I guess MySQL and PGS work the same way though.  Replicate transactions/rows/queries (QBR is ridiculously stupid), not exact chunks of files.
16:40 luckybambu joined #gluster
16:40 NeonLicht Is it possible / easy to move from one replica level to another one?  For example, imagine I have three servers with /home as a gluster replicated (3) volume (one brick per server).  Now I want to add a fourth server, and I want to include a brick of it to the gluster /home volume, and I want it to be replica 4.  Is it possible to do so without deleting/creating the /home volume, please?
16:40 Ryan_Lane joined #gluster
16:44 randomcamel bluefoxxx: yeah, I would have been surprised if they were. I'd assumed .glusterfs would reflect the specific history of that brick. but, my expectations come from cloud-based distributed systems, and as long as systems fulfill their promises, I purposefully don't care how they do it. =)
16:45 bluefoxxx My experience comes from being majorly burned out so I may not be thinking straight.
16:45 bluefoxxx Ever.
16:45 bluefoxxx Recovering now
16:45 randomcamel yeah, from backscroll it sounds like you're pretty wiped out.
16:45 randomcamel I hope you can resolve your stuff.
16:45 bluefoxxx I just took too many projects
16:46 bluefoxxx Culminating in collapsing on the stairs and crying until I passed out yesterday, which is not a big deal :)
16:46 randomcamel (there are also valid expectations from other experiences, e.g. if someone is used to DRBD in a data center they will be used to rather different things than me.)
16:47 bluefoxxx YEah drbd is block-level consistency
16:50 randomcamel I've been working on monitored DNS failover (which Amazon just released, but there are various reasons you might not want to use it), so I answer the "Surely this is a solved problem?" question a lot. (answer: "It is! Using several techniques and appliances that aren't available in EC2.")
16:54 tryggvil joined #gluster
16:54 bluefoxxx randomcamel, my faith in solved problems is lacking
16:54 bluefoxxx look at database clustering and replication.
16:55 bluefoxxx there are 800 ways to do it.  Note that none of them actually work.
16:56 bluefoxxx MySQL RBR is better than QBR (which may be inconsistent), but doesn't support myisam, or guarantee consistency.  Percona XtraDB guarantees consistency, but may have issues with myisam, and a cluster may be fragile (i.e. restarting mysqld simultaneously = bad)
16:56 swinchen joined #gluster
16:56 bluefoxxx PostgreSQL is better... at master->slave replication.  Master<->Master extensions exist, all with their own caveats.
16:58 bluefoxxx It's actually surprising GlusterFS works at all, given the scope of the problem--two replicants?  Split brain issues, fail-overs, interruption of sensitive applications, all difficult problems to address.  The use case is arguably harder than database replication.
16:58 swinchen Do any of you start the gluster daemon with pacemaker?  I took a look online and found this tidbit: "We use init-script (lsb:glusterfs) to integrate glusterfs-daemons." but I am not exactly sure what they are saying.  This may be more of a pacemaker question ...
16:58 bluefoxxx swinchen, my understanding is that glusterfsd doesn't do anything
16:59 bluefoxxx well, it does something, but notihng you want to touch.
16:59 bluefoxxx What you want to drop is glusterfsd if you're trying to stop gluster
16:59 swinchen Well, you need the gluster service running ...  correct?
17:01 bluefoxxx yes
17:01 bluefoxxx if you are trying to up/down it, glusterd is the service.  glusterfsd is handled automatically as needed.
17:01 bluefoxxx however you shouldn't down glusterd if you can avoid it.
17:02 swinchen In ubuntu server it is called "glusterfs-server" I got the terms mixed up.  Interestingly I just stopped the service and was still able to mount gv0
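For the pacemaker angle: the lsb: resource class just wraps whatever the init script is called on that distro (glusterd on RPM-based systems, glusterfs-server on Ubuntu), so a minimal crm-shell sketch might look like the following; the resource names and intervals are illustrative:

    crm configure primitive p_glusterd lsb:glusterfs-server \
        op monitor interval=30s timeout=20s
    crm configure clone cl_glusterd p_glusterd   # run it on every node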
17:03 randomcamel bluefoxxx: like I said, my expectations may be lower. =) we haven't had any major problems with MySQL replication. in some ways what GlusterFS is doing (masterless consensus, afaict) is a simpler problem space with better solutions.
17:03 bluefoxxx nod
17:04 bluefoxxx randomcamel, a quorum cluster like Percona/MariaDB XtraDB will fail if it loses 50%.  It refuses service.  This includes refusing to bring nodes joining back into the cluster into consistency.
17:04 bluefoxxx that means you have 3 nodes, restart mysqld on 2, suddenly the whole cluster seizes.
17:05 bluefoxxx Write-everywhere is either A) slow because of pessimistic locking; or B) prone to deadlocks and transaction rollbacks.  Pick one.  It's the nature of the problem.
17:05 bluefoxxx MySQL replication proper, master-slave, is fine... as long as you don't rapidly fail over and fail back.
17:05 bluefoxxx Query based replication will cause inconsistency if your queries are non-deterministic (i.e. call DATE() in a query)
17:05 randomcamel yeah, I'd expect any distributed system to, at best, significantly degrade at 50% loss. freaking out and shutting down is an acceptable response.
17:06 bluefoxxx yes.
17:06 bluefoxxx Anything having to do with clustering and providing multiple sources of service is HARD.
17:06 bluefoxxx The problem scope is just like that.
17:13 _benoit_ joined #gluster
17:14 hattenator joined #gluster
17:23 Ryan_Lane joined #gluster
17:23 _br_ joined #gluster
17:25 _br_ joined #gluster
17:27 ctria joined #gluster
17:31 elyograg NeonLicht: I'm not sure if you ever got an answer to your replica change question.  yes, you can add a new replica to an existing volume.  just include "replica 4" on your add-brick command.
17:31 an joined #gluster
17:32 dustint joined #gluster
17:34 rwheeler joined #gluster
17:35 flakrat joined #gluster
17:38 manik joined #gluster
17:39 jdarcy joined #gluster
17:40 Mo_ joined #gluster
17:42 NeonLicht I didn't, elyograg, at least not one that included my nickname so that I didn't miss it, anyway.  Thanks a lot, man, I'm going to try it.  :-)
17:43 NeonLicht It should be possible going from 'no replica' to 'replica 2' that way then, right, elyograg?
17:44 elyograg NeonLicht: I believe that's the case, yes.  I have not actually tried this stuff, it's just been an answer that other people have gotten.
17:45 NeonLicht I see, elyograg, I'll try it out on my testbed and come back to you to let you know (or blame you, LOL) how it goes for me.
17:47 sjoeboo_ joined #gluster
17:48 elyograg :)
18:01 NeonLicht add-brick doesn't seem to accept the 'replica' argument, elyograg.   :(
18:03 flakrat left #gluster
18:04 NeonLicht Oh, it seems it's supported on 3.3.0, but not on 3.2.7, which is the one I'm using.
18:05 JoeJulian randomcamel, bluefoxxx: Here's what the .glusterfs directory is all about: http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
18:05 glusterbot <http://goo.gl/j981n> (at joejulian.name)
18:05 NeonLicht It also says:   * Replica 3->4 also is supported, but replica count of 4 is not adviced as of now.
18:07 JoeJulian There's some edge-case bugs in replica>2 too, that aren't in a release yet.
18:09 JoeJulian With 3.2, the only way to change replication is to delete and recreate the volume. That can be done safely wrt data.
18:09 NeonLicht Thank you.  What do you mean by "wrt data", JoeJulian?
18:09 gbrand__ joined #gluster
18:09 JoeJulian with regard to
18:10 elyograg NeonLicht: sorry about my error there.  I didn't know it wouldn't work on 3.2, though I guess I shouldn't be surprised.
18:10 JoeJulian As in, you can delete a volume and all your data is left completely intact on the bricks.
18:10 NeonLicht Oh, I see, thanks.  I'm not a native English speaker, you know, and my English sucks.
18:10 JoeJulian :)
18:10 JoeJulian I forget sometimes that I'm speaking to a global audience.
18:11 NeonLicht Yeah, it happens.  :-)
18:11 * NeonLicht jotes down 'wrt' on he's list of acronyms to memorise.   LOL
18:11 NeonLicht s/he's/his/
18:12 glusterbot NeonLicht: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
18:12 JoeJulian So to go from no replication to replication take an example volume: "gluster volume create foo s1:/b s2:/b" and turn it into a replica 2 volume (after deleting it) "gluster volume create foo replica 2 s1:/b s3:/b s2:/b s4:/b"
18:13 elyograg the bot is having some issues.
18:13 al joined #gluster
18:13 NeonLicht How would you go from 1->2 w/o data loss then, JoeJulian?  Remove volume, data stays on brick, then create new volume?  Will the data on a brick be 'imported' into the volume?  I thought that would not work,
18:13 JoeJulian note that the replicas are pairs listed in order. s1,s3 s2,s4
18:15 NeonLicht Gonna try....  :-)
18:15 bluefoxxx joined #gluster
18:17 JoeJulian Ryan_Lane: http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
18:17 glusterbot <http://goo.gl/FPFUX> (at joejulian.name)
18:17 Ryan_Lane yeah
18:17 Ryan_Lane I found that
18:17 Ryan_Lane this is a cruel way to have to fix split-brains
18:17 Ryan_Lane especially if you need to do so for hundreds of files
18:17 JoeJulian You might also be interested in http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
18:17 glusterbot <http://goo.gl/j981n> (at joejulian.name)
18:19 JoeJulian Ryan_Lane: I agree. I've been pushing for a client-side way of fixing that for a while now. Jeff has, at least, added the ability to do that through setting an xattr in a patch. I think it might be in 3.4.
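Roughly what that post walks through for each split-brained file, assuming you have already decided which copy to throw away (brick path and gfid are placeholders):

    # on the brick holding the copy to discard
    getfattr -n trusted.gfid -e hex /export/brick1/some/dir/file   # note the gfid
    rm /export/brick1/some/dir/file
    # also remove its hard link under .glusterfs:
    # .glusterfs/<first two hex chars>/<next two>/<gfid as a dashed uuid>
    rm /export/brick1/.glusterfs/aa/bb/<gfid-as-uuid>
    # then trigger healing, or just stat the file through a client mount
    gluster volume heal myvol full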
18:20 NeonLicht Oh, man, it has done it, JoeJulian!  Creating the volume with data in the bricks actually imported the data into the volume!  That's really amazing!  Thanks a lot, JoeJulian!
18:20 JoeJulian You're welcome.
18:20 NeonLicht That is soooo cool!  :-)
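To recap the two routes for changing the replica count that come up here (volume and brick names hypothetical): on 3.3+ the count can be raised as part of add-brick; on 3.2 the volume has to be deleted and recreated, which leaves the data on the bricks intact:

    # 3.3 and later: e.g. replica 3 -> 4, adding one new brick per replica set
    gluster volume add-brick home replica 4 server4:/bricks/home
    # 3.2: stop, delete and recreate with the new layout (data on the bricks survives)
    gluster volume stop foo
    gluster volume delete foo
    gluster volume create foo replica 2 s1:/b s3:/b s2:/b s4:/b
    gluster volume start foo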
18:31 disarone joined #gluster
18:31 morse joined #gluster
18:32 inodb joined #gluster
18:41 DeltaF How dangerous is it to mount the brick instead of the vol in a distributed set? :)
18:42 DeltaF sorry. replicated.
18:42 DeltaF It can't be great in either situation, but distributed would be silly.
18:44 xian1 Is there a toolset whereby I can specify a directory tree I wish to reverse-recursively traverse on all bricks in parallel, removing files and parent directories up to my starting point?  I want to do this to fix split-brain problems.  I'd like to do it while gluster is running, as I have a boatload of fuse clients I don't want to stop.  Then I can rsync --inplace back to the fuse-mounted tree.  Highly inadvisable, tools exist, or finish writing my own hack?
18:45 xian1 on a distributed-replicated cluster, that is...
18:46 nueces joined #gluster
18:48 elyograg DeltaF: the data won't be replicated right away.  On 3.3, there is a self-heal daemon that would eventually notice the discrepancy and fix it, but on older versions it would have to be triggered.  in the meantime, clients using the volume could get inconsistent info.  It might lead to split-brain problems where things just break, but I don't know that for sure.
18:50 DeltaF I'm mostly OK with the situation you describe..
18:51 DeltaF since the alternate is a rsync/cron job
18:51 elyograg it's always better to just mount the volume.
18:51 DeltaF Not for PHP performance. ;)
18:53 elyograg this may seem very rude ... I don't mean it to be.  If you're after PHP performance, why are you using gluster?  With that out of the way, I believe there are some things you can do to reduce the cost when an app requests files that don't exist - negative lookup caching.
18:53 Staples84 joined #gluster
18:54 DeltaF I need to scale to multiple front-ends with shared files, but only for a short time (a month tops)
18:54 DeltaF If it were long-term, then it would make sense for a distributed/replicated cluster of file servers.
18:55 elyograg ah, a temporary band-aid.  I hope it all works!
18:55 DeltaF Temporary traffic surge, then back to 1-server setup
18:55 DeltaF rsync last year, but deleting files was impossible because the other members would put the file back
18:58 jdarcy joined #gluster
19:02 luckybambu_ joined #gluster
19:09 xian1 my file system is very large, but no files are irreplaceable.  I'd be happy to destroy dir trees that contain "gfid differs" issues (but unreported by 'gluster volume heal VOL info split-brain') while glusterfs is running, then rsync back from master repository. [apologies in advance for refreshing my question above]
19:09 luckybambu joined #gluster
19:22 rwheeler joined #gluster
19:22 luckybambu joined #gluster
19:23 andreask joined #gluster
19:29 _pol joined #gluster
19:36 y4m4 joined #gluster
19:42 luckybambu joined #gluster
19:44 luckybambu joined #gluster
19:47 Ryan_Lane joined #gluster
20:07 ladd h
20:08 * ladd mistyped
20:18 y4m4 joined #gluster
20:27 jdarcy joined #gluster
20:32 gbrand_ joined #gluster
21:04 tqrst on a 20 x 2 distributed-replicate setup, doing "gluster volume add-brick myvol server1:/mnt/somebrick server2:/mnt/someotherbrick" will make server2:/mnt/someotherbrick a replica of server1:/mnt/somebrick, right?
21:04 tqrst (and add them, of course)
21:07 manik joined #gluster
21:10 H__ yes
21:15 lanning joined #gluster
21:27 tryggvil joined #gluster
21:36 tqrst thanks
21:41 Ryan_Lane joined #gluster
22:08 jdarcy joined #gluster
22:14 jdarcy joined #gluster
22:29 cyberbootje joined #gluster
22:37 semiosis @self heal
22:37 glusterbot semiosis: I do not know about 'self heal', but I do know about these similar topics: 'targeted self heal'
22:38 semiosis @split brain
22:38 glusterbot semiosis: I do not know about 'split brain', but I do know about these similar topics: 'split-brain'
22:38 semiosis @split-brain
22:38 glusterbot semiosis: (#1) learn how to cause split-brain here: http://goo.gl/nywzC, or (#2) To heal split-brain in 3.3, see http://goo.gl/FPFUX .
22:38 semiosis Ryan_Lane: #2 ^^
22:44 Ryan_Lane joined #gluster
23:01 inodb joined #gluster
23:01 JoeJulian xian1: In your situation, what I would probably do is rm -rf the directory tree you're referring to, and rm -rf .glusterfs on that same brick (it will be rebuilt).
23:04 bennyturns joined #gluster
23:04 bennyturns running over the weekend with some of these server in this job
23:04 bennyturns https://beaker.engineering.redhat.com/jobs/380352
23:04 glusterbot <http://goo.gl/fYaZz> (at beaker.engineering.redhat.com)
23:05 raven-np joined #gluster
23:05 bennyturns wrong channel my mistake
23:05 semiosis haha
23:05 semiosis broken link
23:06 xian1 hmm, in a 4x2 cluster, just remove .glusterfs dir on one node?  and the rm -rf at the fuse mount point or brick mount point?  I was hoping for something more…surgical.
23:07 JoeJulian No, all on the "bad" brick.
23:07 JoeJulian I may have misread what you were looking to do though.
23:07 JoeJulian I thought you wanted to do a split-brain removal on all the files in a specific tree.
23:07 xian1 I'm not at all sure there's one bad brick…
23:08 inodb joined #gluster
23:08 xian1 yes, I want to eradicate a directory tree which seems to be all over the map on different bricks.  It will not let me remove it via the fuse mount point.  I was thinking
23:09 xian1 that if I tried to recursively remove a tree from bricks, but didn't have all the gfid strings, I'd never find them.  So I wanted to walk backwards on all bricks.
23:10 xian1 I've been doing that with your post on fixing split brain, but on a file by file basis, and now I want to hit some thousands of files in a tree that looks very much like a balanced tree structure.  many, many dirs.
23:11 JoeJulian You could use parts of http://joejulian.name/blog/quick-and-dirty-python-script-to-check-the-dirty-status-of-files-in-a-glusterfs-brick/ to get the xattrs, parse the gfid into the uuid format, and remove the file and the .glusterfs equivalent. Obviously that's not what that script does, but it does parts of that and would at least offer an example framework to start from.
23:11 glusterbot <http://goo.gl/grHFn> (at joejulian.name)
23:12 xian1 ok, thanks.  so last crucial point—can I get away with this with gluster running?
23:13 JoeJulian usually, yes.
23:13 JoeJulian Since you're operating on files without gluster knowing about it, it'll take a "heal...full" to repair it after you're done.
23:14 xian1 and what will that accomplish?  does it fix up some in-memory stuff?
23:15 xian1 hmm, maybe I ought to take what I've got and say thanks!
23:16 JoeJulian It'll walk the directory tree and repair the replication.
23:16 JoeJulian Oh, wait... you're not running 3.3 are you?
23:16 xian1 yes
23:16 JoeJulian I'm losing track.
23:16 JoeJulian Ok, then that's good.
23:17 xian1 last one we spoke about was 3.3.1.  this one's 3.3.0.  so far everything you've told me has been incredibly helpful in transforming the voodoo to decent science.
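A minimal sketch of the per-file operation described above: read the gfid xattr from the brick copy, derive the matching .glusterfs entry, remove both, then heal. The removal half is the assumption here (the linked script only reads xattrs), and the brick path and subtree are hypothetical, so test it on something disposable first:

    BRICK=/export/brick1
    find "$BRICK/path/to/bad/tree" -type f | while read -r f; do
        hex=$(getfattr -n trusted.gfid -e hex "$f" 2>/dev/null | awk -F= '/trusted.gfid/{print $2}' | sed 's/^0x//')
        [ -z "$hex" ] && continue
        # turn the 32 hex chars into the dashed uuid used as the .glusterfs filename
        uuid=$(echo "$hex" | sed 's/\(.\{8\}\)\(.\{4\}\)\(.\{4\}\)\(.\{4\}\)\(.\{12\}\)/\1-\2-\3-\4-\5/')
        rm -f "$BRICK/.glusterfs/${uuid:0:2}/${uuid:2:2}/$uuid" "$f"
    done
    # remove the now-empty directories afterwards, then run: gluster volume heal VOLNAME full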
23:32 an joined #gluster
23:33 nueces joined #gluster
23:39 andrei__ joined #gluster
23:42 gbrand_ joined #gluster
