
IRC log for #gluster, 2013-10-14


All times shown according to UTC.

Time Nick Message
00:13 harish joined #gluster
00:26 hagarth joined #gluster
00:31 raz left #gluster
00:34 manik joined #gluster
00:46 nixpanic joined #gluster
00:47 nixpanic joined #gluster
00:58 hagarth joined #gluster
01:08 MrNaviPacho joined #gluster
01:22 lpabon joined #gluster
01:28 nixpanic joined #gluster
01:29 nixpanic joined #gluster
01:38 manik joined #gluster
01:41 nixpanic joined #gluster
01:41 nixpanic joined #gluster
01:49 nixpanic joined #gluster
01:49 nixpanic joined #gluster
01:55 vynt joined #gluster
02:08 manik joined #gluster
02:10 glusterbot New news from newglusterbugs: [Bug 987555] Glusterfs ports conflict with qemu live migration <http://goo.gl/SbL8x>
02:13 davinder joined #gluster
02:45 NuxRo joined #gluster
02:54 asias joined #gluster
02:55 aik__ joined #gluster
04:02 mohankumar joined #gluster
04:25 aik__ joined #gluster
04:35 CheRi joined #gluster
04:43 purpleidea @later tell MugginsM ,,(ports)
04:43 glusterbot purpleidea: The operation succeeded.
04:43 purpleidea ,,(thanks) glusterbot
04:43 glusterbot you're welcome
04:44 * purpleidea almost has #gluster support figured out completely
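For reference, the ,,(ports) factoid above refers to the TCP ports Gluster needs open between peers and clients. A minimal iptables sketch, assuming GlusterFS 3.4 defaults (glusterd on 24007/24008, one port per brick starting at 49152 — the same range the qemu live-migration bug reported above collides with — plus the extra ports the built-in NFS server uses); 3.3 and earlier used 24009 and up for bricks:

    # glusterd management (24008 is used for RDMA management)
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT
    # brick ports: one per brick, starting at 49152 in 3.4 (range here allows ~100 bricks)
    iptables -A INPUT -p tcp --dport 49152:49251 -j ACCEPT
    # Gluster's NFS server and the portmapper, only needed for NFS clients
    iptables -A INPUT -p tcp --dport 38465:38467 -j ACCEPT
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT
    iptables -A INPUT -p udp --dport 111 -j ACCEPT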
04:59 vigia joined #gluster
05:12 MrNaviPacho joined #gluster
05:35 Zylon joined #gluster
05:46 kPb_in_ joined #gluster
06:13 rgustafs joined #gluster
06:15 jtux joined #gluster
06:51 glusterbot New news from resolvedglusterbugs: [Bug 918052] Failed getxattr calls are throwing E level error in logs. <http://goo.gl/7yXTH>
06:53 ctria joined #gluster
06:56 Shri joined #gluster
07:03 ekuric joined #gluster
07:04 eseyman joined #gluster
07:06 tjikkun_work joined #gluster
07:13 Shri joined #gluster
07:16 blook joined #gluster
07:17 keytab joined #gluster
07:23 rgustafs joined #gluster
07:29 ricky-ticky joined #gluster
07:39 mgebbe_ joined #gluster
08:07 andreask joined #gluster
08:08 clag_ joined #gluster
08:10 andreask1 joined #gluster
08:13 glusterbot New news from newglusterbugs: [Bug 1018178] Glusterfs ports conflict with qemu live migration <http://goo.gl/oDNTL3>
08:24 Rocky__ joined #gluster
08:27 morse_ joined #gluster
08:27 clag_ left #gluster
08:28 dusmant joined #gluster
08:38 merrittZA joined #gluster
08:47 Staples84 joined #gluster
08:51 X3NQ joined #gluster
08:58 pkoro joined #gluster
09:05 tryggvil joined #gluster
09:11 vimal joined #gluster
09:26 davinder joined #gluster
10:00 ctria joined #gluster
10:14 kopke joined #gluster
10:21 kopke joined #gluster
10:28 KORG joined #gluster
10:45 kPb_in___ joined #gluster
10:57 kPb_in_ joined #gluster
11:10 tryggvil joined #gluster
11:13 jtux joined #gluster
11:18 kkeithley joined #gluster
11:28 aik__ joined #gluster
11:37 vynt joined #gluster
11:40 ctria joined #gluster
11:54 VerboEse joined #gluster
11:57 giannello joined #gluster
12:03 rgustafs joined #gluster
12:07 ccha2 hello
12:07 glusterbot ccha2: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
12:07 ccha2 I have sometimes these error about geo-replication
12:07 ccha2 [syncdutils:168:log_raise_exception] <top>: execution of "rsync" failed with E2BIG (Argument list too long)
12:08 andreask joined #gluster
12:10 dusmant joined #gluster
12:12 abyss^ hmm, on one of my servers where gluster is mounted, instead of size, owner and privileges I had '?'... I think it might have started from here: http://pastebin.ca/2466577 Does someone have any idea what happened? Remounting gluster on the client helped.
12:12 glusterbot Title: pastebin - gluster-errror - post number 2466577 (at pastebin.ca)
12:16 giannello hi everyone, is there a way to add/change translators configuration without stopping the volume?
12:16 mbukatov joined #gluster
12:19 ctria joined #gluster
12:21 pdrakeweb joined #gluster
12:37 abradley joined #gluster
12:40 jclift joined #gluster
12:42 pdrakeweb joined #gluster
12:42 ttre joined #gluster
12:44 glusterbot New news from newglusterbugs: [Bug 1018793] Saucy packaging problem <http://goo.gl/lpMKtK>
12:46 ttre Hello all, i started to test glusterfs and it is very well made. In my scenario i have 4 servers with 2 bricks each (distributed and replicated 2). But when one brick gets full, some of my transfers get "no space left". I'm using gluster 3.4.1
12:56 ctria joined #gluster
13:01 MrNaviPacho joined #gluster
13:04 ctria joined #gluster
13:09 Bluefoxicy joined #gluster
13:13 Debolaz joined #gluster
13:13 chirino joined #gluster
13:14 Oneiroi joined #gluster
13:16 pkoro cluster I believe decides on which brick to store a file depending on the filename. Try to rebalance your volume (cluster volume rebalance) and it will even out the distribution of files on bricks
13:16 pkoro s/cluster/gluster/g
13:16 glusterbot pkoro: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
13:16 dbruhn pkoro and ttre, rebalance operations are only used when adding or removing storage from a cluster.
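For reference, a minimal sketch of the rebalance commands being discussed, assuming a hypothetical volume named myvol:

    gluster volume rebalance myvol fix-layout start   # recompute directory layouts only
    gluster volume rebalance myvol start              # also migrate existing files to match the new layout
    gluster volume rebalance myvol status             # per-node progress

As dbruhn notes above, this is mainly useful after bricks have been added or removed.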
13:17 giannello ttre, distribution of files between bricks uses a DHT algorithm, and it does not care about free space
13:17 dbruhn ttre: Gluster should place the file on another brick if the original brick is full file a bug
13:17 glusterbot http://goo.gl/UUuCq
13:19 giannello ttre, you can also take a look at this https://bugzilla.redhat.com/show_bug.cgi?id=889334
13:19 glusterbot <http://goo.gl/eOt3c> (at bugzilla.redhat.com)
13:19 glusterbot Bug 889334: high, medium, ---, asengupt, CLOSED CURRENTRELEASE, cluster.min-free-disk does not appear to work as advertised
13:20 ttre pkoro, i already tried to rebalance... no data was moved.
13:20 dbruhn ttre is your volume that contains /var/lib full?
13:21 pkoro i see. I thought it also applied with a fixed number of bricks…
13:22 ndevos ttre: could it be that a file is bigger than the free space on the brick?
13:23 dbruhn pkoro: nope, and a full brick is typically inconsequential with Gluster, it just places the data on another brick. I only know this because I thought the same thing and was wrong
13:23 ttre <dbruhn> the /var/lib is in another partition
13:24 ndevos dbruhn, pkoro: you both are right :) if a brick is (near) full, and the filename hashes to the location of the (near) full brick, a so called 'link file' is created on the brick that would contain the file
13:25 ndevos that link file points (with an xattr) to the actual brick that contains the file
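A quick way to spot such a link file on a brick, sketched with hypothetical paths (run on the brick itself; needs getfattr from the attr package):

    # a DHT link file is an empty file with only the sticky bit set (mode ---------T)
    ls -l /export/brick1/dir/bigfile
    # the xattr names the subvolume that actually holds the data
    getfattr -n trusted.glusterfs.dht.linkto /export/brick1/dir/bigfile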
13:25 dbruhn ttre: thanks, not applicable here, but I have seen if the partition containing the configuration data on the servers "/var/lib/glusterd" is full you will get that feed back as well.
13:27 ttre ndevos, thank you. in my case, may it be a bug?
13:30 ndevos ttre: maybe, that is difficult to judge... are you extending existing files, and then getting an ENOSPC error?
13:31 ndevos ttre: it may be valid, in case the file is bigger than the available free space on any brick
13:31 ndevos ttre: files are always placed whole on a brick in the case of distribute-replica volumes
13:32 ndevos a striped volume would split files over multiple bricks, but that is not practical for backup/recovery in many cases
13:33 foobar I'm having some issues with geo-replication... state stays at 'faulty' ... and i'm seeing the sync daemon crashing all the time with the following message: http://pastebin.com/Cgt42Z27
13:33 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
13:33 shubhendu joined #gluster
13:34 MrNaviPacho joined #gluster
13:34 foobar http://fpaste.org/46613/17576611/ :P
13:34 glusterbot Title: #46613 Fedora Project Pastebin (at fpaste.org)
13:34 dusmant joined #gluster
13:37 ndevos foobar: that looks like bug 886808
13:37 glusterbot Bug http://goo.gl/nfHvK high, urgent, ---, ndevos, CLOSED CURRENTRELEASE, geo-rep: select() returning EBADF not handled correctly
13:38 cekstam joined #gluster
13:39 ttre ndevos, all the partitions are empty and the files are 10MB
13:39 ttre ndevos, i'm using dd from a client to create the files
13:39 ndevos ttre: if the partitions are empty, how can one brick be full?
13:41 ttre ndevos, sorry, in the beginning
13:42 ndevos ttre: okay, but in the beginning you get the error already?
13:42 cekstam Does anyone know if a GlusterFS 3.4.0 running on CentOS 6.4 would be affected by the 32 vs 64 bit directory bug? I found http://pastie.org/4412258 to test it, and I'm getting the same output back as https://bugzilla.redhat.com/show_bug.cgi?id=838784#c1. But I'm a bit uncertain how to interpret it.
13:42 glusterbot <http://goo.gl/xrjhBh> (at bugzilla.redhat.com)
13:42 glusterbot Bug 838784: high, high, ---, sgowda, CLOSED CURRENTRELEASE, DHT: readdirp goes into a infinite loop with ext4
13:45 ttre ndevos, i put multiple 10MB files using one client mounting the glusterfs
13:45 ttre ndevos, using dd
13:45 dbruhn ttre, are you creating files that are too big to fit on the file system?
13:46 foobar ndevos: i'll have a look ...
13:46 ttre ndevos, after all bricks get full i remove some files (all using one client)
13:48 ttre ndevos, with this i have all the bricks at 95% used
13:49 ttre ndevos, then i put a file to fill just one brick. With that i have (of the 4 bricks) one full and the other 3 with space.
13:50 ttre ndevos, the volume shows just 3% free space
13:51 ttre ndevos, with that, when i try to write small files again, some of the dd runs show "no space left"
13:51 ttre dbruhn, no, the bigger files can fit into the bricks
13:52 ndevos ttre: how do you place that file on the one brick? if it is a replicated volume, the file would land on two bricks
13:53 bennyturns joined #gluster
13:53 foobar any idea how to work around: geo-replication already running in an ananother machine
13:53 foobar (besides the typo :P )
13:54 foobar ndevos: applied the patch from that bug, testing now ;)
13:54 ndevos foobar: oh, you're fast!
13:55 vynt joined #gluster
13:55 ttre ndevos, always using the client. i have 8 bricks, all 66GB, distributed and replica 2
13:56 foobar ndevos: it was just a 2 line edit ;)
13:57 chirino joined #gluster
13:57 foobar looks a lot better: [2013-10-14 15:57:15.936903] I [master:272:crawl] GMaster: completed 50 crawls, 1 turns
13:58 dbruhn ttre it sounds like you are creating a file on a brick and then growing it until it overruns the remaining space on the file system.
13:58 foobar another small thing... the rpm packages for el6 seem to point to /usr/local/libexec for gsync, though it's actually in /usr/libexec (in the package)
14:14 MrNaviPacho joined #gluster
14:17 ttre dbruhn, that's the idea. i am trying to understand how gluster will manage this situation. One brick is full but the volume has free space. The documentation shows the external link when a brick is full, but this is not happening....
14:19 dbruhn ttre, it will only externally link if you write a file that's too big from the get go, if you have files that are going to grow like that you will need to create a striped volume
14:19 dbruhn you will take a performance penalty though
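A sketch of what that would look like, with hypothetical server and brick names; stripe N splits each file into chunks across N bricks, whereas replica keeps whole copies:

    gluster volume create stripevol stripe 2 transport tcp \
        server1:/export/stripebrick server2:/export/stripebrick
    gluster volume start stripevol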
14:22 bugs_ joined #gluster
14:23 ndevos foobar: which package points to /usr/local/libexec? That is definitely a bug, but it could have been fixed in more recent packages
14:23 ttre dbruhn, i understand. Even when i have free space in the volume, if one brick is full gluster sometimes tells me (when the hash maps to the full brick) that there is no space left?
14:24 ndevos ttre: that should not be the case, the file should be placed on a brick with enough space, and a link-file will be placed on the full brick
14:24 dbruhn ttre: the file exists on one of the bricks, if you keep writing to it, it's not going to move it to another brick.
14:25 ndevos ttre: there always must be some free space kept, that is where the cluster.min-free-space (or something) option comes in
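The option being reached for here is cluster.min-free-disk (per bug 889334 linked earlier). A minimal sketch, assuming a hypothetical volume named myvol; it reserves headroom so new files hash away from nearly-full bricks, but it does not move data that is already there:

    gluster volume set myvol cluster.min-free-disk 10%
    gluster volume info myvol    # the value appears under "Options Reconfigured"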
14:27 ttre dbruhn, when i have just one brick full and i try to create new files, i sometimes get "no space left"
14:27 wushudoin joined #gluster
14:28 ttre devos, i tried the cluster.min-free-space but seems to respect this option
14:28 ttre ndevos, i tried the cluster.min-free-space but seems to respect this option
14:28 kkeithley I'm reasonably certain the only way you could get /usr/local/libexec (or anything in /usr/local/) is if you compile from source and do a `make install`.
14:29 kkeithley The RPMs and dpkgs should not have anything in /usr/local
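In other words, an autotools build installs under /usr/local unless told otherwise; a hedged sketch of the configure flags the distro packages effectively use:

    # without --prefix, everything lands under /usr/local (including libexec)
    ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var --libexecdir=/usr/libexec
    make
    sudo make install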
14:29 dbruhn That might be an issue where there isn't enough free space to create the link file if the DHT is trying to place the file on a full brick.
14:30 dbruhn not 100% sure though, I have volumes with full bricks and don't experience any of these issues. I am on 3.3.1 though
14:30 muellerpete_ joined #gluster
14:30 muellerpete_ hi everyone, i have some trouble with a gluster volume
14:30 ttre ndevos, dbruhn, i will run the test again. Thank you very much. ( I was thinking I had done something wrong)
14:31 ttre <dbruhn> i am using 3.4.1
14:31 foobar ndevos: glusterfs-geo-replication-3.3.1-1.el6.x86_64
14:31 zaitcev joined #gluster
14:31 muellerpete_ healing seems not to work
14:32 muellerpete_ can i pick your brains for a few minutes with questions?
14:33 muellerpete_ its a replicating setup with just 2 bricks
14:34 muellerpete_ gluster volume heal <name> info says that brick2 is not connected
14:36 muellerpete_ volume info <name> show the brick as online
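The usual checks for a "brick is not connected" report, sketched with a hypothetical volume name (command availability depends on the GlusterFS version):

    gluster peer status             # is the other server in "Peer in Cluster (Connected)" state?
    gluster volume status myvol     # are both brick processes online, with a port and PID?
    gluster volume heal myvol info  # per-brick list of entries still needing heal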
14:36 _chjohnstwork joined #gluster
14:37 _chjohnstwork seeing some weird issues with geo-rep on 3.4.1 - status says OK and there are no errors in the log (ssh keys work) but data is not getting replicated.. any known issues?
14:41 ndevos foobar: from what repository comes that version?
14:48 foobar ndevos: gluster.org repo's: baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/EPEL.repo/epel-$releasever/$basearch/
14:48 glusterbot <http://goo.gl/gIzd6> (at download.gluster.org)
14:52 kkeithley hmmm, wonder what happened with the 3.3.1-15 bits
14:54 saurabh joined #gluster
14:56 kkeithley not that I think that matters a lot. Most of the 3.3.1-1 -> 3.3.1-15 changes were to fix packaging nits
14:56 kkeithley Should be using 3.3.2 anyway if you're using 3.3.x
14:56 daMaestro joined #gluster
15:01 jag3773 joined #gluster
15:03 ctria joined #gluster
15:03 chirino joined #gluster
15:07 merrittZA joined #gluster
15:09 ndevos foobar: kkeithley is right, the latest (glusterfs-geo-replication-3.3.2-2.el6.x86_64.rpm) does not use /usr/local/libexec anymore
15:09 kaptk2 joined #gluster
15:10 foobar ok... upgrading from 3.3.1 to 3.3/LATEST in my repo-file ;)
15:10 ndevos semiosis: bug 1018793 is related to ubuntu saucy - there is also no need to depend on fuse-utils, the rpms dont have that dependency
15:10 glusterbot Bug http://goo.gl/lpMKtK medium, unspecified, ---, vbellur, NEW , Saucy packaging problem
15:11 foobar machines are now running 3.3.2-2
15:11 foobar rolling reboot should be fine right ?
15:11 muellerpete_ left #gluster
15:14 _chjohnstwork so any ideas on how to debug a geo-replication issue where status says it's OK but the remote side is not getting updated files?
15:16 sprachgenerator joined #gluster
15:16 Dga joined #gluster
15:17 foobar _chjohnstwork: does passwordless ssh work
15:17 dbruhn if I enable cluster.min-free-space option will it rebalance the bricks that are full over that number?
15:17 foobar and look at the logfile (/var/log/gluster/geo-replication/volumename/*.log)
15:17 foobar there is a command in there, try running it manually ;)
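For reference, a sketch of the geo-replication commands and log locations being referred to (master/slave names are hypothetical; logs normally live under /var/log/glusterfs/geo-replication/<mastervol>/, and exact option names can differ between releases):

    gluster volume geo-replication mastervol slavehost::slavevol status
    gluster volume geo-replication mastervol slavehost::slavevol config
    # raise gsyncd verbosity (option may be log-level or log_level depending on release)
    gluster volume geo-replication mastervol slavehost::slavevol config log-level DEBUG
    less /var/log/glusterfs/geo-replication/mastervol/*.log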
15:18 shubhendu joined #gluster
15:22 kPb_in_ joined #gluster
15:26 ncjohnsto joined #gluster
15:27 ncjohnsto foobar: yes, passwordless ssh is working, I validated it manually as well as watching ssh connections happen.. the weird thing is I have one node geo-replicating to two other sites and one site works fine and the other does not
15:31 Technicool joined #gluster
15:35 chirino joined #gluster
15:36 LoudNoises joined #gluster
15:47 aliguori joined #gluster
15:50 zerick joined #gluster
16:00 MrNaviPa_ joined #gluster
16:02 shubhendu joined #gluster
16:13 Mo___ joined #gluster
16:14 RedShift joined #gluster
16:15 cjohnston_work foobar: debug mode indicates an error 'connection to peer is broken'
16:15 cjohnston_work which is odd as ssh works perfectly fine
16:33 andreask joined #gluster
16:54 davinder2 joined #gluster
16:55 frellus joined #gluster
16:58 kPb_in joined #gluster
17:03 rotbeard joined #gluster
17:34 vimal joined #gluster
17:35 premera joined #gluster
18:01 semiosis ndevos: thx for bringing that to my attention
18:23 kPb_in joined #gluster
18:48 dbruhn joined #gluster
19:21 giannello joined #gluster
19:36 Alpinist joined #gluster
19:43 shane_ left #gluster
19:44 SpeeR joined #gluster
19:47 SpeeR is xfs still suggested over ext4 for gluster? I remember there were concerns with ext4 awhile ago
19:49 johnmark SpeeR: those concerns were addressed, but XFS still preferred
19:49 SpeeR great thanks johnmark I'll still with xfs
19:51 SpeeR heh sorry, meant stick with xfs
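For anyone following along, the brick layout commonly recommended at the time was XFS with 512-byte inodes, so Gluster's extended attributes fit inside the inode; a sketch with a hypothetical EBS device name:

    mkfs.xfs -i size=512 /dev/xvdf
    mkdir -p /export/brick1
    mount -o noatime /dev/xvdf /export/brick1
    echo '/dev/xvdf /export/brick1 xfs noatime 0 0' >> /etc/fstab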
19:55 zerick joined #gluster
20:15 blutdienst left #gluster
20:15 glusterbot New news from newglusterbugs: [Bug 1017993] gluster processes call call_bail() at high frequency resulting in high CPU utilization <http://goo.gl/eWncSv>
20:20 go2k joined #gluster
20:26 mibby Hey semiosis you around? I still haven't finalised my EC2 config and have a couple questions...
20:26 semiosis shoot
20:29 mibby hmm..ok.. i've let go of having Geo redundancy, and I don't want to play with the default quorum thresholds. So with 2 x AZ's I'm thinking of having 2 Gluster servers in each AZ. If a single AZ goes down I'll go into RO but that's fine. A single host failure in an AZ will still leave 75% of the bricks available. So...
20:30 mibby I'm getting a little confused with how best to configure the bricks. With 2 hosts 'replica 2' is straight forward, but with 4 hosts I
20:30 mibby oops... with 4 hosts I'm not entirely sure
20:31 mibby am I insane with considering 'replica 4' ?
20:31 semiosis replica 4 probably isnt what you want
20:32 semiosis remember that replica has to do with bricks, not servers -- and you can have many bricks per server
20:32 mibby my current server config has 8 x 500GB EBS volumes
20:34 mibby I'm thinking each of the servers' bricks would ideally be identical, hence the replica 4 thought. Any recommendations?
20:50 dbruhn mibby if you have replica 4 you will have 4 copies of your data
20:50 dbruhn replica 2 means two copies
20:50 dbruhn that of course is a simplified version of what's actually going on
20:54 mibby that's sort of what I thought. Maybe I'm not using Gluster the right way for my use case... Ultimately I want 4TB of usable space across 2 x AZ's so that even if a single AZ goes down I can still read the data, or if a single host in a single AZ goes down I still have full RW access to the data.
20:56 mibby replica 4 would have all bricks on all 4 servers identical, with the distribute set within each server, is this correct?
21:00 dbruhn yeah, but your write speeds are going to be painful, and your directory traversal is probably going to be painful too
21:01 dbruhn You might want to think about using the geo replication for your second cluster
21:01 dbruhn and doing a cluster in each zone
21:02 mibby i was under the impression AFR and geo replication weren't able to work together?
21:02 mibby latency between the zones is <5ms, which I thought was fast enough for AFR
21:03 mibby so yeah I'm generally confused at the moment ;)
21:03 dbruhn afr?
21:04 tryggvil joined #gluster
21:05 mibby automatic file replication - it's the Gluster replication component.
21:05 dbruhn you mean replication is not compatible with geo-replication?
21:06 dbruhn That's news to me, but I have never used georeplication, so not a good source on that one.
21:11 nueces joined #gluster
21:19 mcblady joined #gluster
21:21 mcblady hey guys, i have 3 peers and a replicated brick... there are many clients connected to this brick. But if i disconnect one of the peers, the clients get stuck. Even df doesn't work. I thought it would take a couple of seconds for some fencing mechanism to clear sessions, but after 8 minutes the system is still down.
21:22 mcblady is there a safe way to disconnect peer with clients being able to continue to work ?
21:25 go2k hey mcblady
21:26 go2k this is the option you want to change :) -
21:26 go2k network.ping-timeout
21:27 go2k In theory changes should get replicated amongst peers, but that's theory. Generally the best method to avoid so-called split brains is not to allow them to happen
21:27 dbruhn mibby have you read Joe Julian's replication article
21:31 mcblady the default value for ping-timeout is 42 seconds, and this was down even after 8 minutes
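For reference, the setting go2k pointed at is changed per volume; a minimal sketch with a hypothetical volume name (the 42-second default is deliberate, so lowering it trades faster failover for more reconnect churn):

    gluster volume set myvol network.ping-timeout 10
    gluster volume info myvol | grep ping-timeout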
21:43 MugginsM joined #gluster
21:46 mcblady the answer to my question is a bug in gluster 3.2.5 https://bugzilla.redhat.com/show_bug.cgi?id=810944
21:46 glusterbot <http://goo.gl/CwSxZ> (at bugzilla.redhat.com)
21:46 glusterbot Bug 810944: low, low, ---, vagarwal, CLOSED CURRENTRELEASE, glusterfs nfs mounts hang when second node is down
21:50 mibby dbruhn: yeah I have. I think my traditional IT brain (RAID, clusters, etc) is interfering.... I'd love to get deployment recommendations for my situation.
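A sketch of the replica 2 layout discussed above (four servers split across two AZs, hypothetical names, one brick per server shown for brevity); consecutive bricks in the list form a replica pair, so each pair is arranged to span both AZs, and with 8 EBS bricks per server the same pattern just repeats:

    gluster volume create myvol replica 2 \
        az-a-srv1:/export/brick1 az-b-srv1:/export/brick1 \
        az-a-srv2:/export/brick1 az-b-srv2:/export/brick1
    gluster volume start myvol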
21:52 DV joined #gluster
22:13 uebera|| joined #gluster
22:58 mdjunaid joined #gluster
23:16 tryggvil joined #gluster
23:22 Cenbe joined #gluster
23:43 nueces joined #gluster
23:48 P0w3r3d joined #gluster
23:53 cyberbootje joined #gluster
23:57 xavih joined #gluster
