
IRC log for #gluster, 2013-07-10


All times shown according to UTC.

Time Nick Message
00:14 yinyin joined #gluster
00:28 nightwalk joined #gluster
00:44 lpabon joined #gluster
00:47 lyang0 joined #gluster
00:47 kedmison joined #gluster
00:51 wgao_ joined #gluster
01:12 mjrosenb if I want to migrate the backing store for a brick
01:12 mjrosenb how should I do this?
01:12 mjrosenb I'm trying to use rsync
01:12 mjrosenb but it looks like it is breaking all of the hardlinks
01:12 mjrosenb which is uhhh
01:12 mjrosenb bad.
01:13 mjrosenb at least i'm assuming that it is bad.
01:22 yinyin joined #gluster
01:26 chirino joined #gluster
01:28 semiosis mjrosenb: there's extra rsync options you should use to preserve hard links & also xattrs
01:29 semiosis note from the rsync man page what's left out from -a...  archive mode; equals -rlptgoD (no -H,-A,-X)
01:30 semiosis so you should add those yourself: -aHAX
01:30 semiosis among whatever other options you have
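A minimal sketch of the rsync invocation being discussed, assuming the old brick is at /data/brick-old and the new backing store is mounted at /data/brick-new (both paths hypothetical):

    # -a is archive mode (-rlptgoD); add -H for hard links, -A for ACLs, -X for xattrs
    rsync -aHAX /data/brick-old/ /data/brick-new/

The trailing slashes make rsync copy the contents of the source directory rather than the directory itself.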
01:38 kedmison @mjrosenb: What are you trying to do?  I've been struggling with something and wonder if I can learn from what you are doing.
01:43 vpshastry joined #gluster
01:44 bala joined #gluster
01:56 semiosis kedmison: what are *you* trying to do?
01:56 semiosis kedmison: mjrosenb is trying to migrate the backing store for a brick, as stated
02:00 bet_ joined #gluster
02:05 harish joined #gluster
02:05 kedmison semiosis: I need to re-build the os-level on a couple of my gluster nodes.  I've got a distribute config so I can't just take them offline; I have to keep the data available.  my data is a lot of small files, so
02:06 semiosis kedmison: you'll probably have to use replace-brick, which can migrate bricks while online
02:06 kedmison semiosis: I'm currently working through add-brick/remove-brick as the approach, but I am finding the performance of that very low for the number of bricks I have to replace (and for the timeframe in which I hope to accomplish this)
02:07 semiosis have you tried replace-brick?
02:07 kedmison semiosis: I had tried replace-brick, but ran into troubles because of FD leaks and the large number of files on the bricks in question.
02:07 semiosis ah yes, bugs :(
02:08 semiosis generally speaking, if you can't afford downtime, then you shouldn't be using pure-distribute -- you should be using replicate or distributed-replicated
02:09 semiosis although i know that's no help to you right now
02:09 recidive joined #gluster
02:10 semiosis if you could afford the downtime, then maybe you could rsync the data to the replacement bricks and then do a replace-brick commit force, which should modify the volume graph without actually doing the buggy migration
02:11 kedmison :)  yes, I hear you. This is early days for this cluster; I have plans for it to grow in future (and become a replicated cluster) but in the short term, hardware/budget are limitations.
02:11 semiosis you could probably do that without downtime, but you would risk inconsistency
02:11 kedmison I can have short windows of downtime, like an evening or something, but that's not enough to cope with moving 10TB around…
02:11 semiosis you may be able to rsync out the data while the cluster is online, then just sync up whatever changes were made since then during your maint window
02:12 semiosis and do the commit-force
02:12 semiosis might work
02:12 semiosis ,,(replace)
02:12 glusterbot Useful links for replacing a failed server... if replacement server has different hostname: http://goo.gl/4hWXJ ... or if replacement server has same hostname:
02:12 glusterbot http://goo.gl/rem8L
02:12 semiosis shoot.  that first link is 404 and it would've been helpful
02:14 mjrosenb oh right, xattrs
02:14 mjrosenb I really want those.
02:15 semiosis yeah, not sure about the ACLs though
02:21 kedmison semiosis: your plan sounds good though; I could indeed rsync the bricks while online and then shut down the volume and rsync again to pick up the deltas.  I've pulled that trick before with a VM conversion from raw images to an LVM-based config and it worked well to reduce the downtime window.  For the bricks, I assume I should be bringing the .glusterfs directory as well?
02:22 semiosis i would.  note the -aHAX options for rsync as mentioned
02:23 kedmison semiosis: Yes, I usually use SHAX as options when I'm doing stuff like this but I don't have any sparse files so the S isn't likely needed.
02:23 semiosis good point.  mjrosenb ^^ -S for sparse files
02:23 kedmison semiosis btw: for the first link that 404-ed: I found this version at archive.org: http://web.archive.org/web/20120508153302/http://community.gluster.org/q/a-replica-node-has-failed-completely-and-must-be-replaced-with-new-empty-hardware-how-do-i-add-the-new-hardware-and-bricks-back-into-the-replica-pair-and-begin-the-healing-process/
02:23 glusterbot <http://goo.gl/nIS6z> (at web.archive.org)
02:24 kedmison semiosis: It's old; is it roughly what you were expecting to see?
02:24 semiosis yay the wayback machine!  totally forgot about that.
02:25 semiosis thats it
02:25 semiosis just wanted that to point out the replace-brick commit force command.  idk where else it's documented
02:26 semiosis maybe in the ,,(rtfm)
02:26 glusterbot Read the fairly-adequate manual at http://goo.gl/E3Jis
02:26 semiosis admin guide
02:27 kedmison ok, so I would do the stage-1 rsync online, volume stop, stage-2 rsync, ensure all clients were stopped, start the volume, then replace-brick start/replace-brick commit force?  and then I could bring the clients back online?
02:29 semiosis never tried, but maybe it's even possible to do the replace brick with a stopped volume.  doubt it, but just maybe.
02:29 semiosis besides that, yeah sounds about right
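A rough, untested sketch of the sequence being agreed on here; the volume name, hostnames, and brick paths are placeholders, and the replace-brick syntax is the 3.3-era form:

    # 1) first pass while the volume is still online (-S for sparse files, per the note above)
    rsync -aHAXS /data/brick-old/ newhost:/data/brick-new/
    # 2) maintenance window: stop all clients, stop the volume, sync the deltas
    gluster volume stop myvol
    rsync -aHAXS /data/brick-old/ newhost:/data/brick-new/
    # 3) start the volume again, then swap the brick in the volume graph without migrating data
    gluster volume start myvol
    gluster volume replace-brick myvol oldhost:/data/brick-old newhost:/data/brick-new commit force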
02:31 kedmison seems pretty devious but also like it should work.
02:33 semiosis you have to be devious to successfully admin a distributed filesystem :O
02:33 lalatenduM joined #gluster
02:33 kedmison lol
02:35 mjrosenb random silly question... there are a bunch of hard links in .glusterfs
02:35 mjrosenb what happens if the hard links cannot be created?
02:36 kedmison semiosis: I couldn't find any recommendations re: brick configurations for a large number of small files.  I started out with 8-disk RAID-6, and 10x 1TB bricks per server, but I have been finding that config a little limiting in terms of IOPS.
02:36 semiosis mjrosenb: no idea.  maybe glusterfs will (try to) recreate them, or maybe things will go horribly wrong.  let me know
02:37 kedmison semiosis: so I'm moving the configs to 4x RAID-1 pairs, for 7x 1TB bricks per server but with more parallel IOPS possible.  Does this make sense at all, or should I be looking at changing brick size to decrease # of bricks per server?
02:37 semiosis kedmison: around here people tend to recommend glusterfs replication over any raid replication
02:37 mjrosenb semiosis: well, I ask because .glusterfs is a different filesystem from some of the files, so nothing in heaven or earth can make those links.
02:38 semiosis mjrosenb: i can't imagine how you'd ever end up in that situation.  do tell
02:39 mjrosenb semiosis: well, the filesystem is zfs, and I decided to make a new subvolume for each top level directory
02:39 mjrosenb semiosis: since df -h is ~10,000,000,000 times faster than du -hs
02:40 kedmison semiosis: I don't think I have enough servers yet to hit that crossover point.  I've only got 2 servers right now, but as the cluster grows I could see switching to pure glusterfs replication.
02:40 semiosis kedmison: you can get more iops by either 1) increasing the glusterfs distribution (more aggregate iops over all threads), or 2) striping block devices together to make bricks (more iops per thread, probably)
02:40 mjrosenb and other assorted administrative niceties
02:40 semiosis kedmison: which of those is better depends a lot on your particular workload
02:41 semiosis mjrosenb: wow thats pretty far out
02:42 semiosis mjrosenb: not sure how glusterfs will handle that.  should be interesting
02:42 mjrosenb semiosis: well, I'm already running one brick like that
02:42 semiosis mjrosenb: how well is self-heal working?
02:42 mjrosenb evidently there is some code for 'the directory that i'm using is actually multiple filesystems'
02:43 mjrosenb semiosis: I have no clue, I wouldn't even know how to tell if it is.
02:44 semiosis kill one of the brick ,,(processes) write data through a client (ensuring that you write data that gets placed on that replica set) then restart the killed brick
02:44 glusterbot The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/F6jqx for more information.
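A quick, hedged way to see which of those processes are running on a server (ps output format varies, and the brick paths shown in the glusterfsd command lines will be your own):

    ps ax | grep [g]luster
    # glusterd   = management daemon, one per server
    # glusterfsd = brick export daemon, one per brick (the one to kill for this test)
    # glusterfs  = FUSE client / NFS server / self-heal daemon processes
    # to bring a killed brick daemon back (3.3+): gluster volume start myvol force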
02:44 semiosis mjrosenb: i assume you have a replicated (or distrib-repl) volume
02:44 mjrosenb semiosis: distributed, no replication.
02:44 semiosis oh, then no self-heal, so i dont even know what .glusterfs would be doing
02:45 semiosis afaik it was for keeping track of replication (specifically changes while replication was not happening)
02:46 semiosis "what does .glusterfs do for a purely distributed volume?" would be a good question to ask the devs
03:02 vpshastry joined #gluster
03:17 jag3773 joined #gluster
03:24 mohankumar joined #gluster
03:53 shylesh joined #gluster
04:02 sgowda joined #gluster
04:03 semiosis @pathinfo
04:03 glusterbot semiosis: find out which brick holds a file with this command on the client mount point: getfattr -d -e text -n trusted.glusterfs.pathinfo /client/mount/path/to.file
04:05 semiosis if anyone's around... is there a virtual xattr i can query through a client to get the gfid of a file?
04:05 semiosis similar to pathinfo
04:05 semiosis implementing this: http://docs.oracle.com/javase/7/docs/api/java/nio/file/attribute/BasicFileAttributes.html#fileKey()
04:05 glusterbot <http://goo.gl/gf4OR> (at docs.oracle.com)
04:06 semiosis @extended attributes
04:06 glusterbot semiosis: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://goo.gl/Bf9Er
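For example, run directly against a file on a brick (brick path hypothetical, not through the client mount):

    getfattr -m . -d -e hex /data/brick1/path/to/file
    # typically shows trusted.gfid plus translator attributes such as
    # trusted.afr.* on replicated volumes and trusted.glusterfs.dht on directories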
04:09 semiosis also need to read the xattr from an unprivileged client :/
04:09 semiosis doubt this is going to happen
04:10 semiosis going to file a bug requesting this feature
04:10 glusterbot http://goo.gl/UUuCq
04:13 semiosis maybe an email to -devel ML first
04:18 bulde joined #gluster
04:25 rtalur joined #gluster
04:43 hchiramm_ joined #gluster
04:58 vpshastry joined #gluster
05:00 shireesh joined #gluster
05:04 Guest69816 joined #gluster
05:04 raghu joined #gluster
05:09 lalatenduM joined #gluster
05:10 lalatenduM joined #gluster
05:19 psharma joined #gluster
05:19 bala joined #gluster
05:20 hagarth joined #gluster
05:22 balunasj|mtg joined #gluster
05:26 rjoseph joined #gluster
05:29 deepakcs joined #gluster
05:31 rtalur joined #gluster
05:51 rastar joined #gluster
06:00 rastar joined #gluster
06:01 rastar joined #gluster
06:08 saurabh joined #gluster
06:11 vshankar joined #gluster
06:16 ricky-ticky joined #gluster
06:23 satheesh joined #gluster
06:24 jtux joined #gluster
06:29 CheRi joined #gluster
06:33 _ndevos joined #gluster
06:34 FilipeMaia joined #gluster
06:40 _ndevos_ joined #gluster
06:40 ndevos joined #gluster
06:42 18WAD13HA joined #gluster
06:42 satheesh joined #gluster
06:42 ngoswami joined #gluster
06:46 ctria joined #gluster
06:56 rastar joined #gluster
06:58 ekuric joined #gluster
06:58 jclift joined #gluster
07:01 mooperd joined #gluster
07:07 jtux joined #gluster
07:12 hybrid512 joined #gluster
07:16 bulde joined #gluster
07:21 andreask joined #gluster
07:32 CheRi joined #gluster
07:34 recidive joined #gluster
07:51 vdrmrt_ joined #gluster
07:51 vdrmrt joined #gluster
07:58 Tim__ joined #gluster
07:59 _br_ joined #gluster
08:00 Tim__ Hi
08:01 glusterbot Tim__: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:01 puebele1 joined #gluster
08:06 Tim__ I have a test-setup for replication with 2 VMs. It all works well. But I ran into a problem when disconnecting one VM and editing the same file both on the VM which was still online and the one which was apparently offline. After reconnecting the second VM some kind of replication conflict appeared. The mentioned file was either read-only on both machines or opening it threw an IO error. How can I solve the conflict?
08:06 FilipeMaia joined #gluster
08:10 puebele1 joined #gluster
08:14 rjoseph joined #gluster
08:15 dobber_ joined #gluster
08:23 dobber joined #gluster
08:28 pkoro joined #gluster
08:32 nightwalk joined #gluster
08:32 rastar joined #gluster
08:42 csshankaravadive joined #gluster
08:43 csshankaravadive I am following this
08:43 csshankaravadive http://gluster.org/community/documentation/index.php/Gluster_3.2:_Brick_Restoration_-_Replace_Crashed_Server
08:43 glusterbot <http://goo.gl/60uJV> (at gluster.org)
08:43 csshankaravadive to replace a new brick
08:44 csshankaravadive when I run 'gluster volume sync server1 all' I get 'volume sync: unsuccessful'
08:44 csshankaravadive I am using 3.3
08:44 csshankaravadive how can I fix this?
08:48 itisravi joined #gluster
08:49 balunasj joined #gluster
08:59 vimal joined #gluster
09:01 mkollaro joined #gluster
09:02 kshlm joined #gluster
09:03 vrturbo joined #gluster
09:04 anand joined #gluster
09:05 ramkrsna joined #gluster
09:08 csshankaravadiv1 joined #gluster
09:19 sgowda joined #gluster
09:26 rickytato joined #gluster
09:47 mtanner_ joined #gluster
09:49 sgowda joined #gluster
09:52 twx_ joined #gluster
09:52 bfoster_ joined #gluster
09:52 vincent_1dk joined #gluster
09:52 jcastle_ joined #gluster
09:52 juhaj_ joined #gluster
09:52 js__ joined #gluster
09:52 furkaboo_ joined #gluster
09:53 abyss^__ joined #gluster
09:56 mooperd joined #gluster
10:00 furkaboo_ hi all.
10:00 Tim_____ joined #gluster
10:00 furkaboo_ having issues with a brick on a distributed volume.
10:02 furkaboo_ If I go into /mnt/glus3/glustervol2 there's a lot of directories have "??????"
10:03 baoboa joined #gluster
10:07 rnts joined #gluster
10:07 rnts Anyone know if the guy behind the "semiosis" PPA-archive is around here?
10:08 * samppah_ points at semiosis
10:10 furkaboo_ it's just on one brick
10:11 samppah furkaboo_: anything weird in log messages?
10:12 sgowda joined #gluster
10:12 furkaboo_ kernel: [2597351.172273] Filesystem "sdb3": xfs_log_force: error 5 returned. looks a _bit_ nasty.
10:15 samppah indeed hmm
10:19 yinyin joined #gluster
10:23 furkaboo_ brb
10:23 hagarth joined #gluster
10:24 rnts semiosis: ping :)
10:26 samppah @log
10:26 glusterbot samppah: I do not know about 'log', but I do know about these similar topics: 'Joe's blog', 'chat logs', 'loglevel', 'logstash'
10:26 samppah @logstash
10:26 glusterbot samppah: semiosis' logstash parser for glusterfs logs: https://gist.github.com/1499710
10:33 DataBeaver joined #gluster
10:51 spider_fingers joined #gluster
11:01 raghug joined #gluster
11:20 CheRi joined #gluster
11:28 chirino joined #gluster
11:36 mkollaro joined #gluster
11:40 harish joined #gluster
11:53 ramkrsna joined #gluster
11:53 CheRi joined #gluster
11:57 robo joined #gluster
12:03 hybrid5121 joined #gluster
12:04 bet_ joined #gluster
12:04 spider_fingers left #gluster
12:09 edward1 joined #gluster
12:10 Rocky joined #gluster
12:10 neofob joined #gluster
12:10 hagarth joined #gluster
12:13 Rocky__ joined #gluster
12:13 rastar joined #gluster
12:14 Rocky__ left #gluster
12:16 aliguori joined #gluster
12:23 ingard__ !oldbug 3011
12:23 glusterbot Bug http://goo.gl/jt1TN high, urgent, ---, rgowdapp, CLOSED CURRENTRELEASE, Uninterruptible processes writing(reading ? ) to/from glusterfs share
12:23 ingard__ oldbug 3011
12:23 glusterbot Bug http://goo.gl/jt1TN high, urgent, ---, rgowdapp, CLOSED CURRENTRELEASE, Uninterruptible processes writing(reading ? ) to/from glusterfs share
12:44 rwheeler joined #gluster
12:47 jthorne joined #gluster
12:50 robo joined #gluster
12:51 lalatenduM joined #gluster
12:57 aliguori_ joined #gluster
12:59 rastar joined #gluster
12:59 semiosis :O
13:00 GabrieleV joined #gluster
13:01 mohankumar joined #gluster
13:08 puebele1 joined #gluster
13:11 Tim__ joined #gluster
13:15 agaran joined #gluster
13:15 agaran hello
13:15 glusterbot agaran: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:16 agaran is there any way to define a list of allowed paths under which bricks can be created for a given glusterd node?
13:19 semiosis doubt it
13:19 kedmison joined #gluster
13:20 agaran i tried to find anything saying it might be done either on web or in sources, found nothing so came here
13:20 mkollaro joined #gluster
13:21 arusso- joined #gluster
13:22 VeggieMeat_ joined #gluster
13:22 glusterbot` joined #gluster
13:22 plarsen joined #gluster
13:22 stickyboy_ joined #gluster
13:24 cicero_ joined #gluster
13:26 haakon__ joined #gluster
13:27 bala joined #gluster
13:29 arusso joined #gluster
13:29 stickyboy joined #gluster
13:30 jiqiren joined #gluster
13:31 tjikkun joined #gluster
13:31 tjikkun joined #gluster
13:31 Shdwdrgn joined #gluster
13:34 mjrosenb joined #gluster
13:34 joelwallis joined #gluster
13:35 semiosis chirino: i ran into a problem last night trying to map struct stat, the atime/mtime/ctime fields produced errors: http://paste.ubuntu.com/5861631/
13:35 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
13:36 semiosis i commented them out and then everything worked fine: https://github.com/semiosis/libgfapi-jni/blob/master/glfsjni/src/main/java/org/fusesource/glfsjni/internal/structs/stat.java#L27
13:36 glusterbot <http://goo.gl/58w2x> (at github.com)
13:36 chirino semiosis: paste src/glfsjni_structs.c
13:36 semiosis ok
13:38 semiosis glfsjni_structs.c: http://paste.ubuntu.com/5861646/
13:38 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
13:38 chirino BTW.. when I start getting into native build issues, I tend to like just doing a 'cd target/native-build' and tweaking stuff / running make  there until I figure it out
13:39 chirino I don't get what's wrong /w line 14?
13:40 semiosis hmm my first paste didnt come through
13:40 semiosis oh actually it just didnt load right in my browser the first time, weird
13:40 chirino talking about [INFO] src/glfsjni_structs.c:14:103: error: expected ':', ',', ';', '}' or '__attribute__' before '.' token
13:40 shdwdrgn_ joined #gluster
13:41 chirino wonder if the C preprocessor is expanding one of those field names into something else.
13:43 chirino semiosis: try putting a newline between all those field names.
13:43 chirino lets see if we can pin it down to one field.
13:43 ujjain2 joined #gluster
13:44 semiosis i have 4 copies of that file, I suppose i'll need to edit one & run make... this one? ./glfsjni-linux64/target/native-build/src/glfsjni_structs.c
13:44 chirino yeah
13:44 mtanner__ joined #gluster
13:45 madd_ joined #gluster
13:45 RangerRick14 joined #gluster
13:45 VeggieMeat joined #gluster
13:46 hchiramm__ joined #gluster
13:46 rastar_ joined #gluster
13:46 johnmark chirino: careful... demonstrating competency in these parts will result in me asking you to present at conferences ;)
13:46 kaptk2 joined #gluster
13:46 johnmark ...and bloging... and whatever else I can wrangle
13:47 clag__ joined #gluster
13:47 zoldar_ joined #gluster
13:47 semiosis he'll do it too
13:48 semiosis (johnmark i mean)
13:48 bfoster_ joined #gluster
13:48 johnmark heh
13:48 frakt_ joined #gluster
13:49 GabrieleV_ joined #gluster
13:49 semiosis chirino: it's the st_atime
13:49 chirino same line still?
13:50 semiosis at least that's the first one that breaks
13:50 chirino guess we need to get the pre-processor output.
13:50 semiosis i broke the line up and it stops on line 24 now, which is where st_atime is
13:50 semiosis notice the "e" gets cut off of those, st_?time becomes st_?tim
13:50 chirino ah.. well there is progress.
13:51 arusso- joined #gluster
13:51 chirino no don't see the cut off
13:51 theron joined #gluster
13:52 semiosis for example line 11 of the error paste: http://paste.ubuntu.com/5861631/
13:52 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
13:52 semiosis and line 14, 17, ...
13:53 chirino tell ya what.. let use different field names for those.
13:54 bulde joined #gluster
13:54 chirino for example: @JniField(accessor="st_atime") long atime
13:55 semiosis sounds great
13:55 chirino that might do the trick.
13:55 tjikkun_ joined #gluster
13:58 mtanner_ joined #gluster
13:58 failshell joined #gluster
13:58 ctria joined #gluster
13:58 tjikkun_work joined #gluster
13:58 semiosis ok that seems to have fixed it for the st_?time fields, but the *_nsec fields are failing now...
13:58 semiosis [INFO] src/glfsjni_structs.c:56:10: error: 'struct stat' has no member named 'st_atime_nsec'
13:59 semiosis i could live without those though, since java only has ms precision dates
13:59 fpy joined #gluster
13:59 shdwdrgn_ joined #gluster
13:59 kedmison chirino: I have run into odd situations like this when an unsupported character has crept into the source code.  Tough to detect, but that question mark and the error messages not printing the entire field name are reminding me of that situation.
14:00 kedmison chirino: I've wound up deleting the entire line and re-typing it manually sometimes to fix the problem.
14:00 semiosis the question mark is just my single-character wildcard
14:00 semiosis i wrote st_?time to mean all three of st_mtime, st_atime, and st_ctime
14:00 semiosis only in IRC, the ? isn't in the code or error messages
14:00 kedmison ah, gotcha, but that doesn't explain the truncated field name in the error messages.
14:01 semiosis we didnt get an explanation but by mapping the field differently we were able to avoid the problem, without fully understanding it
14:02 semiosis which I'm fine with :)
14:02 fpy Hello, can someone help with one split-brain issue that I just have in my environment? I can see about 2046 split-brain entries and high load on only one node - when I strace it, I can see a lot of lstat/getattr/link calls, that seems to come from self-heal. But it slows down glusterfs significantly. But while the data are temporary on this storage, I can "fix" the split-brain by just wiping the entries with affected files - does anyone know some easy wa
14:02 kedmison fair enough :)
14:03 chirino semiosis: I suspect that there is C preprocessor #define for those fields.
14:04 fpy + I have no idea what to do with entries like 2013-07-10 05:15:51 <gfid:3071a181-4a47-410d-a5cf-f5b2c1855117> -- there's really a lack of documentation on GlusterFS troubleshooting :-/
14:04 recidive joined #gluster
14:04 chirino semiosis: for example #define st_atime priv_field.time.part1
14:04 semiosis interesting
14:05 chirino and that messes up the field declaration on line 14
14:05 chirino did you try doing the field rename for that other field giving you grief?
14:06 semiosis yes but it didnt help, so i just commented out the nsec fields.  i dont need them (yet at least)
14:06 chirino ok
14:07 semiosis fpy: check out ,,(gfid resolver)
14:07 glusterbot fpy: https://gist.github.com/4392640
14:08 robos joined #gluster
14:08 semiosis fpy: see also ,,(split brain)
14:08 glusterbot fpy: I do not know about 'split brain', but I do know about these similar topics: 'split-brain'
14:08 semiosis fpy: see also ,,(split-brain)
14:08 glusterbot fpy: (#1) To heal split-brain in 3.3, see http://goo.gl/FPFUX ., or (#2) learn how to cause split-brain here: http://goo.gl/Oi3AA
14:08 semiosis #1
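The gfid resolver gist boils down to something like the following, run directly on a brick; it is a full filesystem scan (hence the runtime complaint that follows), the brick path is a placeholder, and the gfid is the one quoted above:

    GFID=3071a181-4a47-410d-a5cf-f5b2c1855117
    BRICK=/data/brick1
    # for regular files, .glusterfs/<aa>/<bb>/<gfid> is a hard link to the real file,
    # so find -samefile locates the named path; for directories it is a symlink instead
    find "$BRICK" -samefile "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID" ! -path "*/.glusterfs/*"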
14:10 madd joined #gluster
14:10 semiosis chirino: btw, in case you haven't met before, johnmark is the gluster community guy. you can bug him for community stuff :)
14:10 johnmorr_ joined #gluster
14:11 morse_ joined #gluster
14:11 hchiramm_ joined #gluster
14:12 romero_ joined #gluster
14:12 bulde1 joined #gluster
14:13 Ramereth|home joined #gluster
14:13 fpy semiosis: thanks, I'll have a look :-)
14:14 semiosis yw, good luck
14:15 NeatBasis joined #gluster
14:15 zaitcev joined #gluster
14:15 arusso joined #gluster
14:15 soukihei_ joined #gluster
14:15 Kins_ joined #gluster
14:16 fpy Hm, unfortunately gfid-resolver uses find, which will take maybe a few hours to walk my whole heavy structure on this slow AWS storage.. :-/
14:17 portante_ joined #gluster
14:17 semiosis fpy: while true, note that it runs on the *brick* filesystem, on your servers, not through a client mount which would be much slower
14:20 fpy semiosis: I know, but it's still extremely slow. If I would have to run this for all 2046 gfid entries, it would probably take a whole universe.
14:21 fpy the directory tree is really heavy - about 300k of directories in total with about the same count of files
14:23 fpy semiosis: isn't there some simple way how to say like: odd bricks are the right ones, let even sync from them?
14:23 Debolaz joined #gluster
14:24 agaran left #gluster
14:25 mohankumar did anyone try the glusterfs-3.4-beta4 fedora 17 rpms?
14:25 mohankumar when i start glusterd i get "/usr/sbin/glusterd: symbol lookup error: /usr/sbin/glusterd: undefined symbol: create_frame"
14:26 hagarth mohankumar: you will need to remove /lib/libglusterfs* if it exists
14:27 mohankumar hagarth: i removed and still getting same error
14:28 hagarth mohankumar: /usr/local/lib/libglusterfs* exist?
14:29 semiosis fpy: not yet that i know of (other than wiping the bad ones & syncing everything)
14:31 balunasj joined #gluster
14:31 fpy semiosis: I am thinking about what impact disabling the self-heal daemon would have.. on storage that is used for temporary data with a retention of about 24 hours (files are removed after that time).
14:32 semiosis if you're getting split brain regularly you should address that problem first.  even without a self heal daemon split brain files will cause trouble, glusterfs wont let you access them at all
14:33 semiosis why would you want to stop self-heal daemon?
14:34 fpy semiosis: because it's causing high load on one of the nodes and slowing down operations significantly. While I can't see what it's really doing, how long it will be running or force it to just remove inconsistent entries :-/
14:35 mohankumar hagarth: i cleaned up that too, now slightly better but still fails :)
14:35 mohankumar W [xlator.c:185:xlator_dynload] 0-xlator: /usr/lib64/glusterfs/3.4.0beta4/xlator/mgmt/glusterd.so: undefined symbol: rpc_clnt_is_disabled
14:35 semiosis fpy: gotta go afk for a bit, good luck
14:36 mohankumar i tried rhel6 RPMS on a machine, it worked without any issue (before installing the rpms, from the gluster source i did make uninstall, that's it)
14:36 fpy semiosis: ok, thank you
14:37 fpy Hm, I have also noticed following in logs: E [afr-self-heald.c:685:_link_inode_update_loc] 0-Staging-replicate-0: inode link failed on the inode (00000000-0000-0000-0000-000000000000)
14:40 ctria joined #gluster
14:44 kkeithley mohankumar: I just installed 3.4.0beta4 rpms (from the YUM repo at http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.0beta4/) and they work. What does `ldd /usr/sbin/glusterd` show you?
14:44 glusterbot <http://goo.gl/964uJ> (at download.gluster.org)
14:44 kkeithley That is on a Fedora 17 box
14:45 kkeithley (Although given that Fedora 17 is EOL on 30 July, I'd strongly suggest that you update to f18 or F19)
14:46 bugs_ joined #gluster
14:47 mohankumar kkeithley: http://pastebin.com/tp5APhz4
14:47 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
14:54 csshankaravadive joined #gluster
15:02 ndevos mohankumar: that looks strange, you have some libs in /lib, they should all be in /lib64, I think
15:03 ndevos (well, actually that would be in /usr/lib64, /lib64 is a symlink)
15:03 ChanServ joined #gluster
15:03 NeatBasis joined #gluster
15:03 ChanServ left #gluster
15:05 mohankumar thanks ndevos
15:05 mohankumar i removed those /lib/*gluster* symlinks that were created early by me
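For anyone hitting the same symbol-lookup errors, a quick check for leftover libraries from an old source install (the paths are the ones discussed above):

    ldconfig -p | grep -i libglusterfs
    ls -l /lib/libglusterfs* /usr/local/lib/libglusterfs* 2>/dev/null
    ldd /usr/sbin/glusterd | grep -i gluster    # confirm which copies actually get resolved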
15:12 raghug joined #gluster
15:18 rcheleguini joined #gluster
15:18 daMaestro joined #gluster
15:21 lpabon joined #gluster
15:27 rickytato Hi, if I have two servers with 8 hdds each and I want to configure them in raid, is the best config raid10 4+4 with only one brick per server, or raid10 2+2, 2+2 with two bricks per server? I need to use replica 2 to be safe if one server goes down.. tnx
15:27 rickytato I'll use zfs so I'll configure mirror disk.. no raidz or other...
15:28 Ramereth joined #gluster
15:36 samppah rickytato: what's your use case and how are nodes connected?
15:39 rickytato nodes are connected over tcp, 2x1Gbit/s in bonding (I know it's not the best)
15:40 samppah afaik, currently each brick has its own glusterfs process and it may boost performance if files are on different bricks
15:44 rickytato ok, tnx
15:45 samppah of course 8 disk raid10 is also fast, so that's not easy question :)
15:56 rickytato I have to check the zfs config for ssd caching, to see if I can use the same ssd cache for two volumes or not...
15:57 rickytato because the 8 disks I'll use will be slow 2TB nearline sas devices :(
16:00 ctria joined #gluster
16:01 ChanServ joined #gluster
16:01 NeatBasis joined #gluster
16:09 18WAD13HA left #gluster
16:13 neofob if my client machine is using NAT, what ports do i need to forward in order to connect to gluster server?
16:14 semiosis neofob: ,,(ports)
16:14 glusterbot neofob: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
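As an illustration of those ports on the server side (whether you open them on a firewall or forward them through NAT), a hedged iptables sketch; the brick-port upper bound is arbitrary and grows by one for each brick ever created:

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (+rdma)
    iptables -A INPUT -p tcp --dport 24009:24029 -j ACCEPT   # brick daemons, 24009 and up
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS + NLM
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # rpcbind/portmap (NFS only)
    iptables -A INPUT -p udp --dport 111 -j ACCEPT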
16:16 neofob ,,(NLM)
16:16 glusterbot I do not know about 'NLM', but I do know about these similar topics: 'n2n', 'nfs'
16:16 samppah @n2n
16:16 glusterbot samppah: See http://goo.gl/vTDlU
16:17 bala joined #gluster
16:17 neofob NLM=Network Lock Manager
16:17 samppah yeah.. just curious about what that was :)
16:17 kkeithley nfs lockd
16:18 neofob does glusterbot do AI learning?
16:18 neofob ,,(NLM)
16:18 glusterbot I do not know about 'NLM', but I do know about these similar topics: 'n2n', 'nfs'
16:18 plarsen joined #gluster
16:18 kkeithley learn NLM as Network Lock Manager (nfs lockd)
16:18 kkeithley @learn NLM as Network Lock Manager (nfs lockd)
16:18 glusterbot kkeithley: The operation succeeded.
16:19 kkeithley @NLM
16:19 glusterbot kkeithley: Network Lock Manager (nfs lockd)
16:19 neofob ok glusterbot, ,,(NLM)
16:19 glusterbot Network Lock Manager (nfs lockd)
16:19 kkeithley @forget NLM
16:19 glusterbot kkeithley: The operation succeeded.
16:19 kkeithley learn NLM as NLM is the Network Lock Manager, i.e. NFS lockd
16:20 kkeithley @learn NLM as NLM is the Network Lock Manager, i.e. NFS lockd
16:20 glusterbot kkeithley: The operation succeeded.
16:20 jclift @learn Buffffffffffffffffffffffffeeeeeeeeeeeeeeeeeeeeeeee​rrrrrrrrrrrrrrrrrrrrrrrrrOOOOOOOOOOOOOvvvvverrrrrr​rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr​rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr​rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrffffffffffffffffff​ffffffffffffffffffffffffffffffffffffffffffffffflow  :)
16:20 glusterbot jclift: (learn [<channel>] <key> as <value>) -- Associates <key> with <value>. <channel> is only necessary if the message isn't sent on the channel itself. The word 'as' is necessary to separate the key from the value. It can be changed to another word via the learnSeparator registry value.
16:20 jclift @learn Buffffffffffffffffffffffffeeeeeeeeeeeeeeeeeeeeeeee​rrrrrrrrrrrrrrrrrrrrrrrrrOOOOOOOOOOOOOvvvvverrrrrr​rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr​rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr​rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrffffffffffffffffff​ffffffffffffffffffffffffffffffffffffffffffffffflow as Justin, stop being evil
16:20 glusterbot jclift: The operation succeeded.
16:21 jclift glusterbot, ,,(Buffffffffffffffffffffffffeeeeeeeeeeeeeee​eeeeeeeeerrrrrrrrrrrrrrrrrrrrrrrrrOOOOOOOOOO​OOOvvvvverrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr​rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr​rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr​rrrrrrrrrrrrrrrfffffffffffffffffffffffffffff​fffffffffffffffffffffffffffffffffffflow)
16:21 jclift @Buffffffffffffffffffffffffeeeeeeeeeeeeeeee​eeeeeeeerrrrrrrrrrrrrrrrrrrrrrrrrOOOOOOOOOO​OOOvvvvverrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr​rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr​rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr​rrrrrrrrrrrrrrrrrrfffffffffffffffffffffffff​fffffffffffffffffffffffffffffffffffffffflow
16:21 glusterbot jclift: Justin, stop being evil
16:21 jclift Heh.  Dammit though.
16:21 jclift @forget Buffffffffffffffffffffffffeeeeeeeeeeeeeeeeeeeeeeee​rrrrrrrrrrrrrrrrrrrrrrrrrOOOOOOOOOOOOOvvvvverrrrrr​rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr​rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr​rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrffffffffffffffffff​ffffffffffffffffffffffffffffffffffffffffffffffflow
16:21 glusterbot jclift: The operation succeeded.
16:22 * jclift should go find the source one day, see what the boundaries are on it
16:25 bstr_ left #gluster
16:25 kkeithley jclift: remind me the next time I see you to smack you upside the head. ;-)
16:25 jclift kkeithley: :D
16:27 samppah :O
16:28 raghug joined #gluster
16:37 johnmark eek
16:37 * johnmark looks the other way
16:38 * glusterbot smacks jclift upside the head
16:39 jclift glusterbot, if you keep that up I _will_ go looking through your source.  You won't be the first process I've found effective termination code for.
16:39 jclift :D
16:41 JoeJulian @kick jclift
16:41 * jclift yawns
16:41 JoeJulian hehe
16:42 jclift That reminds me, I really do need to get down the street before the shops close
16:42 * jclift goes shopping
16:43 JoeJulian jclift must be near gmt...
16:43 jclift London
16:43 johnmark loondoon tone
16:43 JoeJulian -7 here, so it's still AM.
16:45 stickyboy left #gluster
16:45 jclift johnmark: That new meeting time, is that in like 15 mins?  I won't be able to make it, else I miss the shops.
16:45 jclift (and then starve overnight)
16:45 stickyboy joined #gluster
16:46 johnmark gah
16:46 johnmark jclift: no worries
16:46 johnmark jclift: do take a look at the packstack work going on, however
16:46 jclift johnmark: Any chance of pushing it back by 1 to 1.5 hours?
16:47 * johnmark checks schedule
16:47 johnmark re: packstack + openstack, if anyone else wants to check it out - https://review.openstack.org/#/c/35162/3
16:47 glusterbot Title: Gerrit Code Review (at review.openstack.org)
16:48 JoeJulian What... no Tesco stores nearby?
16:49 jclift Nope.  Nearest reasonable place is Ealing.
16:49 jclift Which I want to head to.
16:49 jclift But I need to leave nowish
16:49 jclift k, I'm going
16:50 neofob so what is the release date for 3.4?
16:54 johnmark jclift: sorry, dude, the rest of my afternoon is teh busy. I could do 4:30EDT/8:30UTC
16:55 johnmark jclift: oh wait, that won't work either. I
16:55 johnmark 'll get you next time
17:07 semiosis neofob: http://i0.kym-cdn.com/photos/images/original/000/117/102/FmnRi.jpg
17:07 glusterbot <http://goo.gl/h69K2> (at i0.kym-cdn.com)
17:08 samppah :)
17:10 raghug joined #gluster
17:10 hagarth neofob: RSN :)
17:15 \_pol joined #gluster
17:15 vpshastry joined #gluster
17:16 vpshastry left #gluster
17:20 jag3773 joined #gluster
17:23 csshankaravadive joined #gluster
17:29 sjoeboo joined #gluster
17:41 neofob @learn 3.4 release date as RSN
17:41 glusterbot neofob: The operation succeeded.
17:42 neofob ok bot, what is 3.4 release date?
17:42 neofob bot?
17:42 * neofob kick glusterbot
17:43 FilipeMaia joined #gluster
17:43 hagarth @3.4 release date
17:43 glusterbot hagarth: RSN
17:43 ChanServ joined #gluster
17:43 NeatBasis joined #gluster
17:47 vpshastry joined #gluster
17:51 rotbeard joined #gluster
17:56 kedmison joined #gluster
18:00 csshankaravadive left #gluster
18:03 lalatenduM joined #gluster
18:10 dberry joined #gluster
18:11 rwheeler_ joined #gluster
18:14 vpshastry left #gluster
18:15 neofob joined #gluster
18:17 johnmark ha :)
18:19 plarsen joined #gluster
18:28 jclift Back finally.
18:35 _pol joined #gluster
18:36 mooperd joined #gluster
18:40 FilipeMaia joined #gluster
18:46 mkollaro joined #gluster
18:46 puebele joined #gluster
18:47 lanning "Are we there yet?!?!"
18:51 semiosis JoeJulian: did you see hagarth's reply re: GFID/inode on the dev ml?  in summary, the lstat ino isn't the gfid, but should serve my purpose (unique file ID) just as well
18:54 semiosis @qa releases
18:54 glusterbot semiosis: The QA releases are available at http://bits.gluster.com/pub/gluster/glusterfs/ -- RPMs in the version folders and source archives for all versions under src/
19:05 raghug joined #gluster
19:07 JoeJulian Tossing this out there into chat history: A solarflare adapter ( http://goo.gl/wvNFk ) with an arista 7100t (http://goo.gl/nAKoB) could do a self-heal check in about 5-10 usec...
19:07 glusterbot Title: 7100 Series 10GBASE-T - AristaArista Networks (at goo.gl)
19:08 JoeJulian semiosis: I did. I should have thought of that. I knew those were longer than 64bit...
19:10 JoeJulian I wonder if I could get the pieces to write a review... Throw some SSDs into the mix and it could make for some very interesting benchmarks.
19:29 jebba joined #gluster
19:34 joelwallis joined #gluster
19:38 raghug joined #gluster
19:40 _pol joined #gluster
19:44 jdarcy joined #gluster
19:52 mtanner joined #gluster
19:55 ThatGraemeGuy joined #gluster
20:05 nightwalk joined #gluster
20:05 ThatGraemeGuy joined #gluster
20:06 ctria joined #gluster
20:11 rwheeler joined #gluster
20:17 RangerRick15 joined #gluster
20:19 stigchristian joined #gluster
20:32 _pol joined #gluster
20:38 nightwalk joined #gluster
20:45 toOD joined #gluster
20:46 failshel_ joined #gluster
20:47 toOD joined #gluster
20:49 chlunde_ joined #gluster
20:49 Ramereth|home joined #gluster
20:49 bfoster_ joined #gluster
20:49 GLHMarmot joined #gluster
20:50 purpleid1a joined #gluster
20:50 the-me_ joined #gluster
20:50 badone joined #gluster
20:52 swaT30_ joined #gluster
20:55 RobertLaptop joined #gluster
20:59 _pol joined #gluster
21:00 Guest98373 joined #gluster
21:00 andreask joined #gluster
21:05 mtanner_ joined #gluster
21:11 failshell joined #gluster
21:14 rnts_ joined #gluster
21:15 Peanut_ joined #gluster
21:16 JusHal_ joined #gluster
21:17 GLHMarmo1 joined #gluster
21:18 Ramereth joined #gluster
21:19 Ramereth joined #gluster
21:21 recidive joined #gluster
21:25 joelwallis joined #gluster
21:30 _pol joined #gluster
21:34 masterzen joined #gluster
21:37 JonnyNomad joined #gluster
21:39 Avatar[01] joined #gluster
21:39 fcami joined #gluster
21:39 tjikkun_work joined #gluster
21:43 purpleidea joined #gluster
21:44 badone joined #gluster
21:44 _pol joined #gluster
21:55 neofob left #gluster
22:07 fidevo joined #gluster
22:13 jag3773 joined #gluster
22:21 fcami joined #gluster
22:35 devoid joined #gluster
22:35 devoid left #gluster
22:48 _pol joined #gluster
23:10 DataBeaver joined #gluster
23:16 _pol joined #gluster
23:34 kedmison joined #gluster
23:47 badone joined #gluster
23:54 tjstansell joined #gluster
23:55 tjstansell is there a way in the latest 3.4 beta to display the values for all volume options?
23:55 tjstansell not just ones that i've set, but to see what the defaults are as well?
23:55 JoeJulian gluster volume set help
23:58 tjstansell hm... yeah, i knew about that... i guess i was hoping there was a volume get to see what everything is currently set to, rather than parsing the options section of volume info output and knowing that those override the defaults listed in set help.
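Roughly the workaround JoeJulian is pointing at: combine the two outputs (volume name hypothetical):

    gluster volume set help      # lists the tunable options with their default values
    gluster volume info myvol    # the 'Options Reconfigured:' section shows explicit overrides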
