IRC log for #gluster, 2013-03-22


All times shown according to UTC.

Time Nick Message
00:37 yinyin joined #gluster
00:38 y4m4 joined #gluster
00:44 _pol joined #gluster
00:59 hybrid5123 joined #gluster
01:07 jules_ joined #gluster
01:24 yinyin joined #gluster
01:37 torbjorn__ joined #gluster
01:38 cw joined #gluster
01:51 sahina joined #gluster
01:55 kevein joined #gluster
01:58 bala joined #gluster
02:11 sahina joined #gluster
02:37 jdarcy joined #gluster
03:08 nixpanic joined #gluster
03:08 nixpanic joined #gluster
03:20 rastar joined #gluster
03:21 shylesh joined #gluster
03:28 yinyin joined #gluster
03:30 bharata joined #gluster
03:43 bharata joined #gluster
03:45 nueces joined #gluster
04:00 bulde joined #gluster
04:02 dendazen joined #gluster
04:05 anmol joined #gluster
04:05 sac joined #gluster
04:07 saurabh joined #gluster
04:20 sgowda joined #gluster
04:32 pai joined #gluster
04:41 deepakcs joined #gluster
04:51 sripathi joined #gluster
04:54 yinyin joined #gluster
05:01 raghu joined #gluster
05:02 aravindavk joined #gluster
05:10 harshpb joined #gluster
05:18 yinyin joined #gluster
05:20 mohankumar joined #gluster
05:22 eiki joined #gluster
05:27 hagarth joined #gluster
05:33 helloadam joined #gluster
05:34 rastar joined #gluster
05:37 vpshastry joined #gluster
05:43 satheesh joined #gluster
05:49 lala_ joined #gluster
05:54 joaquim__ joined #gluster
05:57 vshankar joined #gluster
05:59 pai joined #gluster
06:00 ultrabizweb joined #gluster
06:05 glusterbot New news from resolvedglusterbugs: [Bug 905203] glusterfs 3.3.1 volume heal data info not accurate. <http://goo.gl/axucm> || [Bug 772360] FEATURE REQUEST: more control over data location <http://goo.gl/ahIWw>
06:19 aravindavk joined #gluster
06:27 mohankumar joined #gluster
06:29 satheesh joined #gluster
06:52 timothy joined #gluster
06:53 vimal joined #gluster
06:54 18WAC4A4J joined #gluster
06:56 deepakcs joined #gluster
07:08 vshankar joined #gluster
07:12 ujjain joined #gluster
07:13 Nevan joined #gluster
07:21 jtux joined #gluster
07:22 joeto joined #gluster
07:26 vpshastry joined #gluster
07:43 ngoswami joined #gluster
07:51 fendrychl joined #gluster
07:51 jtux joined #gluster
07:53 hateya joined #gluster
07:58 harshpb joined #gluster
08:00 ctria joined #gluster
08:10 fendrychl left #gluster
08:13 harshpb joined #gluster
08:16 tjikkun_work joined #gluster
08:27 andreask joined #gluster
08:32 camel1cz joined #gluster
08:38 vpshastry joined #gluster
08:38 dobber_ joined #gluster
08:38 harshpb joined #gluster
08:42 Nagilum_ joined #gluster
08:43 camel1cz joined #gluster
08:45 satheesh joined #gluster
08:47 satheesh When the server IP changes and a gluster volume was created using that IP, glusterd fails to start with no proper error message; any ideas how it should behave?
08:50 HaraldJensas joined #gluster
08:50 ProT-0-TypE joined #gluster
08:58 stickyboy joined #gluster
09:05 fendrychl joined #gluster
09:21 Staples84 joined #gluster
09:26 satheesh hagarth: bulde: When the server IP changes and a gluster volume was created using that IP, glusterd fails to start with no proper error message; any ideas how it should behave?
09:31 glusterbot New news from newglusterbugs: [Bug 924636] xattrop, mkdir, create fops referring to null gfid <http://goo.gl/cxwOU> || [Bug 924637] link file with NULL gfid pointing to a file with valid gfid <http://goo.gl/ZOSjh>
09:34 deepakcs joined #gluster
09:39 shireesh joined #gluster
09:43 tryggvil_ joined #gluster
09:45 * camel1cz likes leaving message of jules
09:58 camel1cz left #gluster
10:10 Norky joined #gluster
10:10 harshpb joined #gluster
10:17 manik joined #gluster
10:18 manik joined #gluster
10:18 harshpb joined #gluster
10:23 pithagorians joined #gluster
10:26 pithagorians hi all. i'm encountering input / output errors on some of the files from the client side. As i understand it's related to split brain and it can be fixed by enabling quorum, as the docs describe. Please explain what "subvolumes" means here http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options#cluster.quorum-type
10:26 glusterbot <http://goo.gl/dZ3EL> (at gluster.org)
10:28 harshpb joined #gluster
10:31 Nagilum_ pithagorians: probably the file in question
10:32 pithagorians hm
10:33 pithagorians any idea how i can count it? :)
10:33 Nagilum_ if everything is ok it should be equal to your replica count
10:37 deepakcs joined #gluster
10:38 pithagorians thx
10:38 pithagorians ill try
10:38 Nagilum_ I keep cluster.quorum-count at 0
10:40 Nagilum_ what I did set is cluster.quorum-type auto
10:46 pithagorians hm
10:46 pithagorians worth a try
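
A minimal sketch of the quorum setting Nagilum_ describes, with "myvol" standing in for the volume name:

    # "auto" quorum: writes are only allowed while more than half of each
    # replica set's bricks (its subvolumes) are reachable
    gluster volume set myvol cluster.quorum-type auto
    # confirm the option took effect
    gluster volume info myvol
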
10:54 lalatenduM joined #gluster
11:07 jdarcy joined #gluster
11:07 glusterbot New news from resolvedglusterbugs: [Bug 857549] brick/server replacement isn't working as documented.... <http://goo.gl/Tr285>
11:35 duerF joined #gluster
11:49 ngoswami joined #gluster
11:50 mgebbe_ joined #gluster
11:54 dendazen joined #gluster
12:01 H__ A 3.3.1 glusterfsd died (was serving replace-brick data). I see no clues in the logs. What are recommended methods to monitor for and restart glusterfsd's ?
12:12 joeto joined #gluster
12:12 madphoenix joined #gluster
12:20 manik joined #gluster
12:23 plarsen joined #gluster
12:25 vpshastry joined #gluster
12:28 JoeJulian H__: To monitor, "gluster volume status". To restart, "gluster volume start $vol force".
12:31 H__ thanks. I'll use those both next time
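
For reference, the two commands JoeJulian gives above, with "myvol" standing in for the volume name:

    # shows each brick's glusterfsd with its port, online state and PID
    gluster volume status myvol
    # respawns any brick process that has died; bricks already running are left alone
    gluster volume start myvol force
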
12:33 H__ The target box+brick , which are both not yet part of the volume, do not show up in "gluster volume status"
12:34 edward1 joined #gluster
12:34 JoeJulian Which would make sense as it's only checking that the bricks are running for any particular volume.
12:34 sgowda joined #gluster
12:35 rotbeard joined #gluster
12:36 jdarcy joined #gluster
12:40 GreyFoxx left #gluster
12:41 pranithk joined #gluster
12:42 jdarcy joined #gluster
12:55 dustint joined #gluster
12:58 yinyin joined #gluster
13:10 aliguori joined #gluster
13:16 manik joined #gluster
13:18 theron joined #gluster
13:20 robo joined #gluster
13:21 hagarth joined #gluster
13:23 pranithk_ joined #gluster
13:25 pithagorians how should i look for all files that have input / output error in volume or brick ?
13:31 lpabon joined #gluster
13:37 Staples84 joined #gluster
13:38 lpabon joined #gluster
13:43 guigui3 joined #gluster
13:44 vpshastry joined #gluster
13:46 JoeJulian With minions.
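
Beyond minions, two approaches one could try (a sketch; the volume name and mount path are placeholders):

    # list entries gluster has already flagged as split-brain (3.3+)
    gluster volume heal myvol info split-brain
    # brute force: try to read every file on the client mount and report failures
    find /mnt/glusterfs -type f -exec sh -c 'head -c1 "$1" >/dev/null 2>&1 || echo "unreadable: $1"' _ {} \;
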
13:52 nueces joined #gluster
13:52 bennyturns joined #gluster
13:54 pranithk_ JoeJulian: ping
13:54 _pol joined #gluster
13:55 JoeJulian Hey there pranithk_
13:55 pranithk_ JoeJulian: If you see one more person with same issue as https://bugzilla.redhat.com/show_bug.cgi?id=859581
13:55 glusterbot <http://goo.gl/60bn6> (at bugzilla.redhat.com)
13:55 glusterbot Bug 859581: high, unspecified, ---, vsomyaju, ASSIGNED , self-heal process can sometimes create directories instead of symlinks for the root gfid file in .glusterfs
13:56 pranithk_ JoeJulian: Could you ask him to collect getfattr -d -m. -e hex output of that directory and add it to that bug...
13:56 JoeJulian Will do
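
The command pranithk_ is asking for, run directly against the directory on a brick (the brick path below is a placeholder):

    # dump all extended attributes in hex, including trusted.gfid and the
    # trusted.afr.* changelog attributes the self-heal logic relies on
    getfattr -d -m . -e hex /data/brick1/path/to/the/directory
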
14:17 camel1cz joined #gluster
14:18 jskinner_ joined #gluster
14:22 manik joined #gluster
14:31 awheeler So I'm following the self-heal-on-replicate procedure, and I ran the find command on one of the old nodes in my cluster, and that did not start the self-heal.  All of the nodes are already clients, except the new node.  The self-heal didn't begin until I ran the command on the new, empty node, which had all the files, but at length 0.
14:32 vpshastry joined #gluster
14:32 awheeler So, would the heal have started if I ran it on a non-cluster-member client?  Or does it really have to run on the new node?
14:32 glusterbot New news from newglusterbugs: [Bug 924792] Gluster-swift does not allow operations on multiple volumes concurrently. <http://goo.gl/Smv7Z>
14:33 JoeJulian The files were probably already in the process of being background self-healed.
14:34 awheeler I was running a du -sh on the directories for an hour, and they only had 0-length files.
14:34 awheeler On the new node.
14:34 JoeJulian And please use ,,(glossary) terms. It's too early in the morning to be deciphering what node is what.
14:34 glusterbot A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
14:34 wushudoin joined #gluster
14:35 JoeJulian Eh, I don't know then. Check the client log on the client that you thought should have succeeded.
14:35 awheeler Thank you JoeJulian, my mistake.  In this case, the servers are also clients.  There is no master currently.
14:35 JoeJulian If you're using the 3.3 version, you didn't have to run the repair, you could have just "gluster volume heal $vol full"
14:35 awheeler That command fails outright.
14:36 awheeler I am using 3.3.1-11
14:37 JoeJulian Well solving that failure should be high on your priority list.
14:38 awheeler But I was hoping it would work.  Haven't had any luck with the volume heal $vol full.
14:38 awheeler Agreed.  I have been assuming it doesn't.  lol
14:38 ProT-0-TypE joined #gluster
14:38 JoeJulian I assume you realize that you're to replace $vol (or assign the variable in the shell).
14:39 awheeler naturally.  :)
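
The two heal triggers being compared here, assuming a volume named "myvol" mounted at /mnt/glusterfs (both names illustrative):

    # 3.3+: ask the self-heal daemon to crawl and repair the whole volume
    gluster volume heal myvol full
    # pre-3.3 style: walk the client mount so every file gets looked up and healed
    find /mnt/glusterfs -noleaf -print0 | xargs --null stat >/dev/null
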
14:39 awheeler Is there an xfs bug on CentOS 6?
14:39 JoeJulian None that I've encountered.
14:40 awheeler Excellent, then I assume the etc-glusterfs-glusterd.vol.log will contain the answers?
14:40 JoeJulian That or cli.log probably.
14:41 awheeler W [dict.c:2339:dict_unserialize] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x120) [0x7fed1c6d78b0] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5) [0x7fed1c6d70b5] (-->gluster(gf_cli3_1_heal_volume_cbk+0x2e3) [0x41ca43]))) 0-dict: buf is null!
14:41 awheeler Looks promising.
14:41 awheeler Followed by:  E [cli-rpc-ops.c:5968:gf_cli3_1_heal_volume_cbk] 0-: Unable to allocate memory
14:42 JoeJulian that would do it
14:45 awheeler Is there an NFS dependency with glusterfs for any of this?
14:45 JoeJulian no
14:47 awheeler There are no memory issues on the box that I can see -- 3.5GB RAM and 700M free/ 1.8GB buffers/cache
14:47 awheeler So, do I need more RAM?
14:47 JoeJulian ulimits?
14:47 awheeler I haven't changed the defaults.  As root it's unlimited
14:48 JoeJulian ugh, I need coffee...
14:49 _pol joined #gluster
14:49 lh joined #gluster
14:49 lh joined #gluster
14:50 awheeler in proc, for the glustershd process, stack is limited (10485760) and locked memory, but otherwise, memory is unlimited
14:56 awheeler Disregard, that happened yesterday.  Nothing quite so clear in the logs today.
15:05 lpabon joined #gluster
15:08 flrichar joined #gluster
15:17 manik joined #gluster
15:18 awheeler JoeJulian: Just recreated the situation, initiated: gluster volume heal system full, and got: Launching Heal operation on volume system has been unsuccessful
15:18 failshell joined #gluster
15:18 awheeler In the glustershd.log I see: Stopping crawl as < 2 children are up
15:18 failshell hello. is it possible to configure gluster to use syslog?
15:19 awheeler Do I need to have 3 replicas for self-heal to work this way?
15:19 ndevos failshell: yes, see the different options under 'gluster volume set help'
15:20 ndevos failshell: also, "glusterd --help" should show some options for the glusterd process, depending on your distro, you can configure it in /etc/sysconfig/glusterd
15:21 Norky joined #gluster
15:22 JoeJulian awheeler: No, but you can't self-heal with only 1 of the replica bricks up.
15:22 failshell i only see options to specify a file
15:22 failshell i want to send to daemon.info for example
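
The two discovery commands ndevos points at, for listing whatever logging-related knobs exist in a given build (output varies by version):

    # volume-level log options (log levels, log file locations, ...)
    gluster volume set help | grep -i log
    # daemon-level log options accepted by glusterd itself
    glusterd --help 2>&1 | grep -i log
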
15:24 awheeler JoeJulian: So I have a 2-replica set, with 4 servers, replaced one server, its bricks are now showing in the gluster volume status.  So, does that count as up?  The other replica is also still present.  Or would I need to have two bricks in good health to self-heal back to a third?
15:29 JoeJulian Hmm, it looks like it's saying that it couldn't contact one of the bricks.
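
A quick way to check what that "< 2 children are up" message suggests, with "myvol" as a placeholder:

    # every brick and every Self-heal Daemon entry should show Online = Y
    gluster volume status myvol
    # every peer should be connected
    gluster peer status
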
15:32 manik1 joined #gluster
15:35 awheeler Here's the output from the logs when I issued the command: https://gist.github.com/awheeler/fe3731ac7eeb7718663b
15:35 glusterbot <http://goo.gl/6i8wM> (at gist.github.com)
15:38 awheeler JoeJulian: I am hitting the cluster pretty hard the whole time as well.
15:38 manik joined #gluster
15:54 eightyeight joined #gluster
16:00 _pol_ joined #gluster
16:00 _pol joined #gluster
16:08 johnmark if you're suffering from ext4 wonkiness, please test this patch: http://review.gluster.org/#change,4711
16:08 glusterbot Title: Gerrit Code Review (at review.gluster.org)
16:09 johnmark if it works, we'll look to backport to 3.3 and 3.4
16:12 zaitcev joined #gluster
16:17 jclift joined #gluster
16:20 awheeler How do you know when a heal has completed?
16:23 hagarth joined #gluster
16:24 jdarcy joined #gluster
16:25 theron joined #gluster
16:26 hateya joined #gluster
16:26 _pol joined #gluster
16:27 _pol joined #gluster
16:28 jdarcy joined #gluster
16:29 awheeler Is this command supposed to show what needs to be healed?: gluster volume heal $vol info
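
For reference, the 3.3 heal-info variants ("info" lists entries still pending heal; the volume name is a placeholder):

    gluster volume heal myvol info              # entries that still need healing
    gluster volume heal myvol info healed       # entries healed recently
    gluster volume heal myvol info heal-failed  # entries the daemon could not heal
    gluster volume heal myvol info split-brain  # entries in split-brain
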
16:30 lalatenduM joined #gluster
16:42 rastar joined #gluster
17:06 timothy joined #gluster
17:19 Mo___ joined #gluster
17:23 andreask joined #gluster
17:33 hateya joined #gluster
17:45 fendrychl left #gluster
17:53 mohankumar joined #gluster
17:54 sgowda joined #gluster
18:01 wushudoin left #gluster
18:01 _pol joined #gluster
18:03 jdarcy joined #gluster
18:03 hchiramm_ joined #gluster
18:11 _pol joined #gluster
18:16 kedmison joined #gluster
18:28 manik joined #gluster
18:36 madphoenix joined #gluster
18:36 furkaboo joined #gluster
19:03 ricky-ticky joined #gluster
19:03 glusterbot New news from newglusterbugs: [Bug 924891] autogen should warn if tar missing <http://goo.gl/io6xp>
19:16 jag3773 joined #gluster
19:24 bennyturns joined #gluster
19:29 camel1cz joined #gluster
19:32 camel1cz left #gluster
19:35 awheeler apparently, portmap (rpcbind) must be running or things don't work quite as well.
19:36 semiosis ,,(nfs)
19:36 glusterbot To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
19:37 awheeler Right, well healing seems to work better with the mapper running.  I now the rpcbind rpm was pulled in by the gluster install, but I didn't realize it wasn't set to start automatically.  I am not specifically using NFS for anything.
19:37 awheeler s/I now/I know/
19:37 glusterbot What awheeler meant to say was: Right, well healing seems to work better with the mapper running.  I know the rpcbind rpm was pulled in by the gluster install, but I didn't realize it wasn't set to start automatically.  I am not specifically using NFS for anything.
19:39 awheeler This is probably why I was continually getting stale NFS handles, lol.
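
On an EL6-style system (implied by the rpcbind RPM reference), the portmapper would typically be enabled like this, with the kernel NFS server kept out of the way per glusterbot's note above:

    # start rpcbind now and on every boot; gluster's built-in NFS server registers with it
    service rpcbind start
    chkconfig rpcbind on
    # the kernel NFS server should not be competing for the NFS ports
    service nfs stop
    chkconfig nfs off
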
19:41 copec joined #gluster
19:47 kedmison joined #gluster
20:04 disarone joined #gluster
20:05 rubbs Anyone have any ideas as to why I would get this error with a script on a replicated volume? "cp: skipping file $FILEPATH as it was replaced while being copied"
20:05 rubbs I can post the source of the script that we're using if that helps
20:21 manik joined #gluster
20:24 jdarcy joined #gluster
20:25 camel1cz joined #gluster
20:25 camel1cz left #gluster
20:26 ricky-ticky joined #gluster
20:26 nueces joined #gluster
20:34 stickyboy joined #gluster
20:34 stickyboy Holy crap, NFS is fast.
20:36 samppah :)
20:38 stickyboy I was migrating some data into my Gluster via FUSE + rsync and I was only getting a few hundred mbps.
20:38 stickyboy The NFS client is at ~900mbps over 1GbE.
20:38 stickyboy Zoom zoom
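
A sketch of the NFS mount being compared against the FUSE client, using the options glusterbot lists earlier (server and volume names are placeholders):

    mount -t nfs -o tcp,vers=3 server1:/myvol /mnt/myvol-nfs
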
20:39 ricky-ticky Hi, can anyone explain why gluster thinks that rebalance is running and won't let me run the remove-brick command? logs here: http://pastebit.com/pastie/12283
20:39 glusterbot Title: Pastebit, beep beep beep (at pastebit.com)
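
Commands that could show and clear the rebalance state being complained about (volume and brick names are placeholders; whether stopping the rebalance is appropriate depends on the situation):

    gluster volume rebalance myvol status
    gluster volume rebalance myvol stop
    # then retry the removal
    gluster volume remove-brick myvol server2:/bricks/brick2 start
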
20:48 adil_root joined #gluster
20:52 rubbs Are there any known issues with running a perl script that copies and replaces files and gluster invalidating the writes?
20:53 rubbs I'm a FS newb so I don't even know if I'm asking that question right.
20:54 rubbs I'm not seeing any errors in the log but `cp` seems to fail
20:59 jclift rubbs: As a thought, if you don't get an answer here (it's friday night for many people, etc), definitely ask on the gluster-users mailing list.
20:59 jclift rubbs: http://www.gluster.org/mailman/listinfo/gluster-users
20:59 glusterbot <http://goo.gl/2zvu7> (at www.gluster.org)
20:59 _pol joined #gluster
21:00 jclift rubbs: Again, you might not get an answer until weekend or monday-ish though.
21:00 jclift rubbs: Or you could be lucky. :)
21:00 rubbs jclift: I figured. thanks for the tip
21:00 rubbs it's not super time critical
21:00 jclift Cool. :)
21:00 rubbs but I'll put it on the mailing list. thanks!
21:00 dr3amc0d3r2 joined #gluster
21:01 _pol joined #gluster
21:02 dr3amc0d3r2 joined #gluster
21:03 hateya joined #gluster
21:04 dr3amc0d3r joined #gluster
21:05 rotbeard joined #gluster
21:08 _pol joined #gluster
21:10 glusterbot New news from resolvedglusterbugs: [Bug 922432] Upstream generated spec file references non-existing patches <http://goo.gl/ThpfV>
21:20 sag47 left #gluster
21:24 sjoeboo joined #gluster
21:26 bennyturns joined #gluster
21:36 k7_ joined #gluster
21:39 deever joined #gluster
21:48 deever hi
21:48 glusterbot deever: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
21:49 samppah heyhey
21:52 semiosis :O
21:55 deever well, anyone here using gluster on freebsd? i'd like to make my infrastructure spof-free...
21:56 semiosis unlikely
21:57 semiosis i've heard of people running the server parts on freebsd, but i dont think the fuse client works on it
21:57 semiosis not sure about that though
21:58 andreask joined #gluster
22:01 deever semiosis: well, i'd only need gluster for synchronizing already existing file systems, so i won't need the fuse part i guess?
22:01 semiosis that's not how glusterfs works
22:02 deever wait...need to dive into it first
22:02 semiosis use unison or rsync if you just want to mirror directories
22:09 deever i'd need something syncing in realtime...rsync and unison may be great, but do not fit here...;)
22:15 semiosis if you could get the glusterfs server working then you could try using NFS clients, gluster does provide an NFS server
22:15 zaitcev joined #gluster
22:16 semiosis but there's limitations with the nfs server
22:17 joehoyle- joined #gluster
22:29 samppah i remember there was some community member working on freebsd port of glusterfs but i haven't heard about that for very long time
22:32 samppah https://bugzilla.redhat.com/show_bug.cgi?id=893795
22:32 glusterbot <http://goo.gl/U8QFF> (at bugzilla.redhat.com)
22:32 glusterbot Bug 893795: medium, medium, ---, amarts, ASSIGNED , Gluster 3.3.1 won't compile on Freebsd
22:41 joehoyle joined #gluster
22:42 Chiku|dc the native gluster client supports failover, right? how to set it up?
22:45 Chiku|dc mount -t glusterfs server1:/test-volume /mnt/glusterfs <-- if server1 is down, does it still working with server2 ?
22:45 elyograg Chiku|dc: If the volume is replicated and therefore has redundancy, then redundancy on the client side is automatic.  the hostname used when mounting is only used to retrieve the volume information, after which the client contacts all the servers in the volume directly.
22:46 Chiku|dc yes volume is replicated on 2 servers
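
A sketch of the mount being discussed: server1 is only needed for the initial volfile fetch, and if the mount.glusterfs script in this build supports it, a fallback server for that fetch can be named (server and volume names are from the example above):

    mount -t glusterfs -o backupvolfile-server=server2 server1:/test-volume /mnt/glusterfs
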
23:04 hagarth joined #gluster
23:11 dustint-away joined #gluster
23:16 jdarcy joined #gluster
23:32 jiqiren joined #gluster
23:40 hateya joined #gluster
23:51 semiosis Chiku|dc: ,,(mount server)
23:51 glusterbot Chiku|dc: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrnds, or (#2) Learn more about the role played by the server specified on the mount command here: http://goo.gl/0EB1u
23:51 semiosis also ,,(ping timeout)
23:51 glusterbot I do not know about 'ping timeout', but I do know about these similar topics: 'ping-timeout'
23:51 semiosis also ,,(ping-timeout)
23:51 glusterbot The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
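
That timeout is tunable per volume, though lowering it is discouraged for the reason glusterbot gives (volume name and value are illustrative; 42 seconds is the default):

    gluster volume set myvol network.ping-timeout 42
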
23:52 redsolar joined #gluster
23:53 lanning That's wrong. :) it is because "42" is the answer!
