
IRC log for #gluster, 2013-01-08


All times shown according to UTC.

Time Nick Message
00:01 andrewbogott for good measure I've killed the glusterfsd process for this volume on each brick and restarted… still swamped with logfile spam, same errors.
01:14 cicero joined #gluster
01:30 lanning when stat() triggers a self-heal, does the self-heal go off into the background?
01:30 lanning or does the stat() block until the self-heal is complete?
01:34 H__ joined #gluster
01:35 kevein joined #gluster
01:37 sunus joined #gluster
01:46 y4m4 joined #gluster
01:54 duffrecords I just finished setting up my Gluster boxes again from scratch.  so far, Gluster is configured with the default options and throughput is much slower than the underlying RAID.  copying a 10 GB file from a brick to itself goes at about 125 MBps. the same file from the volume to itself is about 21 MBps.  something's not right here.  no errors in the logs.  any suggestions on where to start looking?
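A quick way to reproduce duffrecords' comparison is to time the same copy on the brick filesystem and then through the FUSE mount. This is only a sketch; /bricks/brick1 and /mnt/glustervol are hypothetical paths, and oflag=direct is used so the page cache does not hide the difference:

    # raw throughput of the underlying RAID, no gluster in the path
    dd if=/bricks/brick1/testfile of=/bricks/brick1/testfile.copy bs=1M oflag=direct
    # the same file copied through the glusterfs (FUSE) mount of the volume
    dd if=/mnt/glustervol/testfile of=/mnt/glustervol/testfile.copy bs=1M oflag=direct

If the volume is replicated, the FUSE client writes to every replica, so some gap between the two numbers is expected; the roughly 6x drop reported here is what prompted the question.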
02:00 __Bryan__ joined #gluster
02:01 cicero joined #gluster
02:24 cicero joined #gluster
02:27 raven-np joined #gluster
02:54 lh joined #gluster
02:54 lh joined #gluster
02:57 cicero joined #gluster
03:00 hagarth joined #gluster
03:05 dhsmith joined #gluster
03:21 shylesh joined #gluster
03:31 sgowda joined #gluster
03:55 duffrecords left #gluster
04:14 cicero joined #gluster
04:34 vpshastry joined #gluster
04:37 sripathi joined #gluster
05:11 raghu joined #gluster
05:12 deepakcs joined #gluster
05:13 sunus joined #gluster
05:16 hagarth joined #gluster
05:21 cicero joined #gluster
05:22 DataBeaver joined #gluster
05:22 rastar joined #gluster
05:25 sripathi1 joined #gluster
05:26 sunus joined #gluster
05:26 mohankumar joined #gluster
05:30 sunus joined #gluster
05:33 lala joined #gluster
05:34 akenney_ joined #gluster
05:35 lala__ joined #gluster
05:36 hagarth joined #gluster
06:02 sunus joined #gluster
06:08 layer7switch joined #gluster
06:15 cicero joined #gluster
06:16 bulde joined #gluster
06:17 pranithk joined #gluster
06:20 nueces joined #gluster
06:27 glusterbot New news from resolvedglusterbugs: [Bug 781953] glusterd core dumps upon replace-brick with segfault <http://goo.gl/D32IF>
06:29 lala joined #gluster
06:32 dhsmith joined #gluster
06:33 bala1 joined #gluster
06:34 nueces joined #gluster
06:47 cicero_ joined #gluster
06:51 vimal joined #gluster
06:54 ramkrsna joined #gluster
06:54 ramkrsna joined #gluster
06:56 Nevan joined #gluster
06:59 cyr_ joined #gluster
07:01 Nevan1 joined #gluster
07:05 cicero joined #gluster
07:25 sripathi joined #gluster
07:32 cicero joined #gluster
07:36 hateya joined #gluster
07:39 ngoswami joined #gluster
07:52 ekuric joined #gluster
08:01 ctria joined #gluster
08:08 hagarth joined #gluster
08:13 andreask joined #gluster
08:33 puebele1 joined #gluster
08:34 vimal joined #gluster
08:37 tjikkun_work joined #gluster
09:03 hateya joined #gluster
09:04 sripathi1 joined #gluster
09:13 vijaykumar joined #gluster
09:17 cicero joined #gluster
09:18 hateya joined #gluster
09:20 andreask1 joined #gluster
09:22 edward1 joined #gluster
09:23 duerF joined #gluster
09:24 hagarth joined #gluster
09:28 morse joined #gluster
09:30 gbrand_ joined #gluster
09:31 gbrand__ joined #gluster
09:37 ninkotech_ joined #gluster
09:41 morse joined #gluster
10:03 dobber joined #gluster
10:06 cicero joined #gluster
10:06 guigui1 joined #gluster
10:09 overclk joined #gluster
10:14 badone joined #gluster
10:17 khushildep joined #gluster
10:21 morse joined #gluster
10:30 glusterbot New news from newglusterbugs: [Bug 892808] Can't mount subdirectories in RHEL 6.3 with native client <http://goo.gl/wpcU0>
10:33 bauruine joined #gluster
10:41 Alpinist joined #gluster
10:41 DaveS_ joined #gluster
11:04 andreask joined #gluster
11:06 goto_ joined #gluster
11:06 goto_ hi
11:06 glusterbot goto_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
11:07 goto_ Hi, I want to set up a shared file storage between two EC2 instances. Is gluster the right way to go? If yes, could you please point me to any doc that can help me with that. If not what should I look into to achieve this?
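One common way to do what goto_ describes is a two-brick replicated volume with each instance mounting it locally. A minimal sketch, assuming glusterfs-server is installed on both instances and the relevant ports are open between them; hostnames, volume name and brick paths below are placeholders:

    # on instance 1
    gluster peer probe ec2-instance-2
    gluster volume create sharedvol replica 2 ec2-instance-1:/export/brick ec2-instance-2:/export/brick
    gluster volume start sharedvol
    # on both instances
    mkdir -p /mnt/shared
    mount -t glusterfs localhost:/sharedvol /mnt/shared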
11:30 glusterbot New news from newglusterbugs: [Bug 892985] Crash in glusterd during volume delete <http://goo.gl/aR0uJ>
11:32 overclk joined #gluster
11:37 manik joined #gluster
11:44 hateya joined #gluster
11:52 lala_ joined #gluster
11:53 vimal joined #gluster
11:54 raven-np joined #gluster
12:05 andreask joined #gluster
12:06 gbrand__ joined #gluster
12:21 goto_ hey
12:26 cyr_ joined #gluster
12:26 overclk joined #gluster
12:28 glusterbot New news from resolvedglusterbugs: [Bug 892985] Crash in glusterd during volume delete <http://goo.gl/aR0uJ>
12:31 lorderr joined #gluster
12:32 hagarth joined #gluster
12:35 lorderr hi. i have set up a 2-node replicated cluster. on server1 i mount with mount -t glusterfs localhost:/volname0 /mnt/mountpoint, but on server 2 i get "Error: Mount point does not exist". On server 2 i can see the volume with gluster volume list. what am i doing wrong?
12:36 khushildep joined #gluster
12:40 x4rlos lorderr: Where you trying to mount it to?
12:43 hateya joined #gluster
12:46 lorderr x4rlos: hi.. well. I am trying to mount it locally. i am not entirely sure if this is the best way to do it. let me elaborate.
12:48 lorderr serverA: mount -t glusterfs localhost:/vmail0 /vmail, serverB: mount -t glusterfs localhost:/vmail0 /vmail -> Error: Mount point does not exist.
12:49 lorderr x4rlos: should i not be able to do this?
12:50 lorderr but both servers can see the volume with gluster volume info
12:54 lorderr x4rlos: ugh.. you were right. silly me. guess working from home when sick is not the smartest thing to do. forgot to create the mountpoint dir. *embarrassed*
12:57 x4rlos :-)
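For reference, the fix lorderr arrived at is simply to create the mount point directory before mounting, i.e. on each server:

    mkdir -p /vmail
    mount -t glusterfs localhost:/vmail0 /vmail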
13:04 vijaykumar left #gluster
13:14 kkeithley1 joined #gluster
13:21 dustint joined #gluster
13:22 spn joined #gluster
13:27 gbrand__ joined #gluster
13:32 spn joined #gluster
13:37 spn joined #gluster
13:46 cicero joined #gluster
13:50 lh joined #gluster
13:59 rwheeler joined #gluster
14:00 jack_^ joined #gluster
14:01 puebele1 joined #gluster
14:02 akenney joined #gluster
14:09 hateya joined #gluster
14:16 cicero joined #gluster
14:22 hagarth joined #gluster
14:26 theron joined #gluster
14:36 balunasj joined #gluster
14:46 designbybeck joined #gluster
14:49 shireesh joined #gluster
14:51 plarsen joined #gluster
14:58 dbruhn joined #gluster
14:58 glusterbot New news from resolvedglusterbugs: [Bug 764358] nfs.rpc-auth-allow/reject accepts invalid ip address. <http://goo.gl/zgdAO>
15:07 stopbit joined #gluster
15:13 bugs_ joined #gluster
15:16 jbrooks joined #gluster
15:18 chirino joined #gluster
15:28 jbautista joined #gluster
15:33 manik joined #gluster
15:37 bdperkin joined #gluster
15:40 lala joined #gluster
15:58 obryan joined #gluster
15:58 aliguori joined #gluster
16:01 chirino joined #gluster
16:04 chirino_m joined #gluster
16:13 erik49 joined #gluster
16:18 shireesh joined #gluster
16:29 rodlabs joined #gluster
16:37 designbybeck joined #gluster
16:38 efries joined #gluster
16:45 vpshastry joined #gluster
16:45 vpshastry left #gluster
16:46 zaitcev joined #gluster
16:49 theron joined #gluster
17:02 raven-np1 joined #gluster
17:05 cicero joined #gluster
17:10 nueces joined #gluster
17:32 Mo___ joined #gluster
17:52 bauruine joined #gluster
18:00 erik49 joined #gluster
18:01 z00dax hi guys, is Kaleb
18:01 z00dax agh,
18:02 z00dax is Kaleb in the room ?
18:02 johnmark kkeithley1: ^^
18:02 johnmark kkeithley: ^^
18:02 z00dax hi johnmark
18:10 theron joined #gluster
18:19 hateya joined #gluster
18:23 raven-np joined #gluster
18:29 jbrooks joined #gluster
18:39 andreask joined #gluster
18:47 chirino joined #gluster
18:52 hattenator joined #gluster
18:55 jbrooks joined #gluster
19:02 theron joined #gluster
19:14 kkeithley1 z00dax: sorry, missed that. What's up?
19:24 lorderr joined #gluster
19:35 DaveS_ joined #gluster
19:40 DaveS____ joined #gluster
19:43 duffrecords joined #gluster
19:46 gbrand_ joined #gluster
19:49 jbautista joined #gluster
19:50 duffrecords if I'm accessing files in a volume through Gluster's built-in NFS and use UCARP to create a single floating IP address that could switch to another node in the event of a failure, could the lock-up described here occur?  http://community.gluster.org/a/nfs-performance-with-fuse-client-redundancy/
19:50 glusterbot <http://goo.gl/WGUrr> (at community.gluster.org)
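For context, the UCARP arrangement duffrecords describes is roughly the following. This is only a sketch: the interface, addresses, vhid, password and volume name are placeholders, and ucarp only moves the virtual IP, it does not migrate NFS lock or connection state:

    # on each gluster server exporting NFS; the node with the higher --advskew stays backup
    ucarp --interface=eth0 --srcip=10.0.0.11 --vhid=42 --pass=s3cret \
          --addr=10.0.0.100 --advskew=1 \
          --upscript=/etc/ucarp/vip-up.sh --downscript=/etc/ucarp/vip-down.sh
    # clients mount gluster's built-in NFS (NFSv3) through the floating address
    mount -t nfs -o vers=3,mountproto=tcp 10.0.0.100:/volname /mnt/volname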
20:00 nueces joined #gluster
20:02 theron joined #gluster
20:03 chirino joined #gluster
20:05 ekuric joined #gluster
20:10 hateya joined #gluster
20:16 chirino joined #gluster
20:24 chirino joined #gluster
20:24 mmakarczyk joined #gluster
20:42 dustint joined #gluster
20:43 dustint_ joined #gluster
20:52 tc00per joined #gluster
21:04 gbrand_ joined #gluster
21:14 duerF joined #gluster
21:16 Staples84 joined #gluster
21:17 Rocky joined #gluster
21:42 Maxzrz joined #gluster
21:42 Maxzrz is it acceptable if the glusterfs client is newer than the glusterfs server?
21:43 Maxzrz I've got a strange issue in that mounting via nfs works, but not native glusterfs (client is centos5 and server is debian 6)
21:50 m0zes Maxzrz: currently it is expected that they are all the same version.
21:51 Maxzrz doh
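A quick way to confirm exactly what each side is running before worrying about compatibility (package names vary by distro; the checks below are illustrative only):

    # CentOS 5 client
    glusterfs --version
    rpm -qa | grep -i gluster
    # Debian 6 server
    glusterfsd --version
    dpkg -l | grep -i gluster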
21:56 H___ joined #gluster
22:02 badone joined #gluster
22:15 ciber joined #gluster
22:19 petersaints joined #gluster
22:21 petersaints hi guys. I have a simple gluster cluster. Two servers that are clients of themselves. It works fine... however if one of the servers goes down, the other one loses access to the distributed file system. Is this a quorum or replication problem? Any ideas on how to configure Gluster so that the remaining server keeps working?
22:24 badone joined #gluster
22:26 semiosis petersaints: you'll have to check client log files (/var/log/glusterfs/client-mount-point.log) to find out whats going wrong
22:26 semiosis assuming you've waited longer than the network.ping-timeout (default 42s) during which all clients will freeze hoping the server returns quickly
22:32 petersaints semiosis: I was checking the log after simulating a failure (I'm running my setup in two VMs) and now it seems to work. It didn't work earlier... maybe I was just unlucky. However, could you point me to some documentation on how to set up the replication and quorum policies?
22:32 semiosis well there's ,,(rtfm)
22:33 glusterbot Read the fairly-adequate manual at http://goo.gl/E3Jis
22:33 semiosis idk where quorum stuff is doc'd though
22:33 semiosis maybe in there maybe not
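For what it's worth, the two knobs touched on above are per-volume settings; a sketch, with VOLNAME as a placeholder:

    # how long clients hang waiting for a dead server before giving up on it (default 42s)
    gluster volume set VOLNAME network.ping-timeout 42
    # client-side quorum for replicated volumes; with only two replicas, 'auto'
    # effectively ties writability to the first brick being up
    gluster volume set VOLNAME cluster.quorum-type auto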
22:34 khushildep joined #gluster
22:36 petersaints tks semiosis
22:36 semiosis yw, hth
22:37 vex I have a file showing in info split-brain:
22:37 vex 2013-01-09 11:36:17 <gfid:a4247637-d1ca-4d4b-a0cc-bd658ad049f0>/39717.png
22:37 vex but that gfid cannot be found in the brick
22:38 vex any suggestions?
22:38 semiosis is that gfid in the brick's .glusterfs directory?
22:39 tc00per Is it 'normal' for load to be 1.0 x (number of bricks) on IDLE (ie. no file operations) gluster servers?
22:40 vex semiosis: no
22:40 vex same gfid showing on both bricks as split-brain
22:40 vex and not found on either
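For reference, the gfid hard link semiosis is asking about would live under the brick's .glusterfs directory, keyed by the first two and next two hex characters of the gfid; a sketch with a placeholder brick path:

    # expected location of the link for this particular gfid
    ls -l /path/to/brick/.glusterfs/a4/24/a4247637-d1ca-4d4b-a0cc-bd658ad049f0
    # if it exists, find the regular path(s) hard-linked to it
    find /path/to/brick -samefile /path/to/brick/.glusterfs/a4/24/a4247637-d1ca-4d4b-a0cc-bd658ad049f0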
22:41 semiosis tc00per: no
22:42 tc00per Hmmm... interesting.
22:42 semiosis tc00per: zombies?
22:44 tc00per semiosis: don't think so. Here is 'typical' top....top - 14:44:41 up 18 days,  6:01,  1 user,  load average: 3.00, 3.00, 3.00, Tasks: 247 total,   1 running, 246 sleeping,   0 stopped,   0 zombie
22:46 nueces_ joined #gluster
22:46 tc00per All peers, bricks, nfs/self-heal daemons online.
22:47 nueces_ joined #gluster
22:47 nueces joined #gluster
22:49 tc00per stop volume, stop services, unmount bricks and load goes back to very near 0.0. reverse and as soon as volume is started the load climbs again to 3.00 on both peers in a 3x2 dist-repl cluster.
23:02 tc00per Just verified on my other 'test' cluster. this one has two bricks per server and the load rises to 2.00 after starting the volume. Shutdown behavior the same as the other gluster cluster.
23:02 vex tc00per: just a thought. is there a find command running?
23:04 tc00per vex: not on either cluster. neither cluster has any clients attached nor do they self-mount.
23:04 vex weird.
23:04 tc00per even the glusterd glusterfs and glusterfsd proc's are all idle.
23:06 tc00per I was thinking it was some kind of XFS background task but the 4x2 cluster has been up for 42 days and NOT doing anything except heating the server room.
23:06 tc00per When shutting down the load ONLY drops when the bricks are unmounted. It ONLY rises when the volume is started.
23:07 semiosis well now i'm wondering if maybe that is normal, tbh i've not used 3.3 much
23:07 semiosis it wasn't normal in 3.1 though
23:07 semiosis seems unlikely to be normal
23:09 JoeJulian tc00per: Definitely not or I'd be running at 60+ all day.
23:09 JoeJulian Er, 30+
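One thing that can produce exactly this pattern is threads parked in uninterruptible (D-state) sleep, since Linux counts those in the load average even when no CPU is being used. A possible check (not something suggested in the channel):

    # any gluster threads stuck in D state, and what they are waiting on?
    ps -eLo pid,lwp,stat,wchan:30,comm | awk '$3 ~ /D/ && /gluster/'
    # per-thread CPU for the brick daemons
    top -H -p "$(pgrep -d, glusterfsd)"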
23:17 andreask joined #gluster
23:18 tc00per afk
23:20 andreask1 joined #gluster
