IRC log for #gluster, 2013-01-03


All times shown according to UTC.

Time Nick Message
00:55 raven-np1 joined #gluster
00:55 duffrecords I tried copying three large ISOs within a Gluster volume and it seems like there is a very low overall I/O limit now (each copy process slowed from 24 MB/s to about 8). Before I reinstalled the OS I was able to do multiple concurrent copies, nearly saturating my gigabit network. What can I do to identify the bottleneck?
00:55 duffrecords that's not supposed to be a smiley, it was 8 MB/s
00:58 JoeJulian Eww, that sounds ugly. What're you connecting over?
01:08 duffrecords you mean what switch?
01:18 duffrecords using tar -cf - . | (cd <destination>; tar -xf - ) is only slightly better, at 11 MB/s
01:22 duffrecords it's a gigabit network and each of the 4 Gluster servers has (6) 7200 RPM 6 Gb/s disks in RAID 10, so I imagine it should be blazing fast
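
A minimal sketch for isolating the bottleneck duffrecords describes, measuring each layer in turn; the hostnames, brick path, and mount point are placeholders:

    # raw network throughput between the client and one brick server
    iperf -s                          # run on a gluster server
    iperf -c gluster1 -t 30           # run on the client; GigE should show ~940 Mbit/s

    # raw disk throughput on a brick, bypassing gluster entirely
    dd if=/dev/zero of=/export/brick1/ddtest bs=1M count=4096 oflag=direct

    # the same write through the gluster mount, for comparison
    dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=4096 conv=fsync

If the first two are fast and only the last is slow, the limit is in gluster or FUSE rather than the network or disks.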
01:32 badone joined #gluster
01:44 lh joined #gluster
01:46 FyreFoX semiosis: after step 5 I just needed to dpkg-source --commit; for step 7 I needed gpg keys set up; and step 9 ran pbuilder with sudo. Other than that it worked fine
02:16 raven-np joined #gluster
02:32 VSpike joined #gluster
02:34 yinyin joined #gluster
02:39 plarsen joined #gluster
03:18 FyreFoX in a 2-server replica how can you determine if a server is current and up to date?
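
One minimal way to check this on GlusterFS 3.3, with VOLNAME as a placeholder:

    gluster peer status                        # are both peers connected?
    gluster volume heal VOLNAME info           # entries still pending self-heal
    gluster volume heal VOLNAME info healed    # entries healed recently

An empty pending list on both servers suggests the replicas are in sync.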
03:33 duffrecords joined #gluster
03:35 bharata joined #gluster
03:51 layer3switch joined #gluster
04:00 nightwalk joined #gluster
04:09 shylesh joined #gluster
04:35 GLHMarmot joined #gluster
04:36 sripathi joined #gluster
04:36 layer3switch joined #gluster
04:42 mohankumar joined #gluster
04:45 vpshastry joined #gluster
04:47 yinyin joined #gluster
04:51 sgowda joined #gluster
04:54 Humble joined #gluster
04:58 overclk joined #gluster
05:17 hagarth joined #gluster
05:28 hagarth1 joined #gluster
05:32 bala1 joined #gluster
05:44 mdarade1 joined #gluster
05:44 mdarade1 left #gluster
05:50 rastar joined #gluster
05:55 hateya joined #gluster
05:56 ramkrsna joined #gluster
06:10 mdarade joined #gluster
06:20 yinyin joined #gluster
06:21 mohankumar joined #gluster
06:24 hchiramm_ joined #gluster
06:33 sahina joined #gluster
06:37 harshpb joined #gluster
06:40 vimal joined #gluster
07:03 sripathi joined #gluster
07:05 ngoswami joined #gluster
07:21 jtux joined #gluster
07:31 5EXAAJGL4 joined #gluster
07:32 sripathi joined #gluster
07:38 duffrecords joined #gluster
07:47 hateya joined #gluster
07:49 berend semiosis: just attempting an upgrade on a simple i386 2-server gluster cluster with your ppa.
07:49 yinyin joined #gluster
07:49 berend but get this:
07:49 berend /usr/sbin/glusterd: symbol lookup error: /usr/sbin/glusterd: undefined symbol: xdr_gf_event_notify_rsp
07:49 berend is that some old compile issue on my system?
07:49 berend (I had my own compiled version)
07:52 berend semiosis: it appears it is.
08:06 ctria joined #gluster
08:10 adrian15 joined #gluster
08:11 hagarth joined #gluster
08:19 adrian15 If I define auth.allow I'm allowing some hosts to the volume. But... Should I set auth.reject to * so that the rest of the hosts can't access it? The admin guide seems not very clear. Thank you.
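
For reference, auth.allow acts as a whitelist, so hosts not listed are already rejected and an explicit auth.reject of * should not be needed. A hedged sketch, with VOLNAME and the addresses as placeholders:

    gluster volume set VOLNAME auth.allow 192.168.1.10,192.168.1.11
    gluster volume info VOLNAME | grep auth    # confirm the option took effect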
08:25 berend I have a problem where I can do "gluster peer probe" from one machine, but it fails from the other.
08:26 berend These 2 machines were happy gluster 3.2 friends until 1 hour ago.
08:26 adrian15 berend: You only have to do it once from one machine.
08:26 berend I was too scared to upgrade, so this should be a blank situation.
08:26 berend adrian15: because it only half works; anything else after this obviously fails, like creating volumes.
08:27 berend If I do it from the working machine, I get a "(disconnected)" message on the other.
08:28 berend The entire message is: State: Accepted peer request (Disconnected)
08:28 berend So what could this be?
08:28 adrian15 berend: Not sure what your setup is... if it's replicated you might try to detach and then probe the other host again.
08:29 berend adrian15: obviously all these things don't work.
08:29 berend else I wouldn't be bothering people here.
08:29 berend and replication doesn't come into play until you create volumes.
08:29 adrian15 berend: Anyway, I'm no expert on Gluster problems beyond the basics. I assume the firewall ports are right.
08:31 vpshastry joined #gluster
08:34 dobber joined #gluster
08:35 sripathi joined #gluster
08:36 tjikkun_work joined #gluster
08:39 berend The failing machine is also a regular NFS server, might that interfere?
08:43 adrian15 berend: If your volumes aren't serving as NFS then I think it's ok.
08:47 ninkotech_ joined #gluster
08:47 adrian15 joined #gluster
08:48 vpshastry1 joined #gluster
08:51 berend And indeed, a firewall issue that was no problem in 3.2 is a problem in 3.3.
08:51 berend The documentation should have some debugging guides: telnet peer 24007
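
The kind of debugging steps berend is suggesting, sketched for 3.3 (peer names are placeholders; 3.4 and later move brick ports to 49152+):

    telnet peer1 24007      # glusterd management port
    telnet peer1 24009      # first brick port (24009 and up in 3.3)
    nc -zv peer1 24007      # netcat works too where telnet is absent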
08:59 rastar1 joined #gluster
09:00 adrian15 joined #gluster
09:07 puebele joined #gluster
09:14 kaos01 joined #gluster
09:15 vpshastry joined #gluster
09:18 lh joined #gluster
09:18 lh joined #gluster
09:23 vpshastry1 joined #gluster
09:24 DaveS joined #gluster
09:26 puebele1 joined #gluster
09:29 adrian15 joined #gluster
09:33 rastar joined #gluster
09:41 berend On one peer (2 peer replica) I get a message that port 24009 is not accessible.
09:41 berend Port                 : 24009
09:41 berend Online               : N
09:41 berend
09:41 berend (part of gluster volume status)
09:41 berend So it says 24009 is online for 1 peer, but not for the other.
09:42 berend And indeed, "telnet localhost 24009" fails on that peer
09:43 overclk joined #gluster
09:43 berend Does that sound like a problem?
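
If a brick shows Online: N and nothing answers on its port, one common recovery (a sketch, assuming 3.3 and a placeholder VOLNAME) is to respawn the missing brick process in place:

    gluster volume status VOLNAME        # confirm which brick is offline
    gluster volume start VOLNAME force   # restarts dead brick processes without downtime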
09:53 duerF joined #gluster
10:02 _ilbot joined #gluster
10:02 Topic for #gluster is now  Gluster Community - http://gluster.org | Q&A - http://community.gluster.org/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - http://irclog.perlgeek.de/gluster/
10:15 adrian15 berend: Here's a reference on what ports should be open: http://gluster.org/community/documentation/index.php/Gluster_3.1:_Installing_GlusterFS_on_Red_Hat_Package_Manager_%28RPM%29_Distributions . There's also an equivalent page for Debian-based systems.
10:15 vpshastry joined #gluster
10:15 glusterbot <http://goo.gl/aeZB7> (at gluster.org)
10:19 adrian15 joined #gluster
10:21 vimal joined #gluster
10:31 Humble joined #gluster
10:34 QuentinF joined #gluster
10:34 QuentinF Hi
10:34 glusterbot QuentinF: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:36 QuentinF I have a problem with glusterfs 3.3.1 (servers on Ubuntu 12.04 and clients on CentOS 6.3). I did the same procedure as with gluster 3.2: replicated-distribute
10:36 QuentinF replicate x2
10:36 QuentinF But I have a problem with the clients' connection
10:38 QuentinF this is log http://paste.org/59579
10:38 glusterbot Title: Your code. Your site. Use it. - paste.org (at paste.org)
10:46 rastar joined #gluster
10:48 hagarth joined #gluster
11:01 khushildep_ joined #gluster
11:03 shireesh joined #gluster
11:05 x4rlos QuentinF: are all the clients the same version?
11:24 deepakcs joined #gluster
11:32 rastar joined #gluster
11:34 hateya joined #gluster
11:34 QuentinF x4rlos: problem is resolved now :) I had a conflict with another server ... :/
11:37 x4rlos QuentinF: ahh, good :-)
11:53 adrian15 joined #gluster
12:06 shireesh joined #gluster
12:17 bharata joined #gluster
12:21 rastar joined #gluster
12:28 vpshastry joined #gluster
12:29 Humble joined #gluster
12:30 balunasj joined #gluster
12:31 kkeithley1 joined #gluster
12:31 kkeithley1 left #gluster
12:34 jjnash joined #gluster
12:34 kkeithley1 joined #gluster
12:46 bharata joined #gluster
13:05 chirino joined #gluster
13:13 Alpinist joined #gluster
13:14 adrian15 joined #gluster
13:21 manik joined #gluster
13:22 shireesh joined #gluster
13:35 glusterbot New news from resolvedglusterbugs: [Bug 863223] Changelogs are cleared prematurely for replica 3 <http://goo.gl/JDTpt>
13:35 andreask joined #gluster
13:44 shireesh joined #gluster
13:49 hagarth joined #gluster
14:07 glusterbot New news from newglusterbugs: [Bug 885424] File operations occur as root regardless of original user on 32-bit nfs client <http://goo.gl/BiF6P>
14:09 harshpb joined #gluster
14:13 adrian15 joined #gluster
14:18 adrian15 joined #gluster
14:23 harshpb joined #gluster
14:25 rwheeler joined #gluster
14:32 plarsen joined #gluster
14:37 bennyturns joined #gluster
14:38 jtux joined #gluster
14:52 chirino joined #gluster
14:53 shireesh joined #gluster
15:01 stopbit joined #gluster
15:06 bugs_ joined #gluster
15:17 Alpinist joined #gluster
15:17 Alpinist joined #gluster
15:23 wushudoin joined #gluster
15:26 noob2 joined #gluster
15:26 noob2 has anyone else had issues with tailing a file and the output not showing up or only showing fragments?
15:26 noob2 when i cat the same file everything is there
15:28 DRMacIver left #gluster
15:29 lh joined #gluster
15:36 bitsweat joined #gluster
15:38 tqrst joined #gluster
15:38 shireesh joined #gluster
16:00 JuanBre hi again, any advice on how to detect gluster's bottleneck?
16:18 jds2001 joined #gluster
16:21 andreask joined #gluster
16:21 aliguori joined #gluster
16:21 jtux joined #gluster
16:22 chirino joined #gluster
16:27 khushildep joined #gluster
16:27 obryan joined #gluster
16:31 m0zes @yum repo
16:31 glusterbot m0zes: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
16:32 m0zes @ppa
16:32 cicero l33t
16:32 glusterbot m0zes: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
16:32 bitsweat left #gluster
16:49 wushudoin joined #gluster
16:49 andreask joined #gluster
16:51 sjoeboo anyone have any good references for geo-replication, aside from the official docs? I'm looking for a good breakdown of how it works (what talks to what, how, and when), how often sync should happen and how, and whether ssh is always needed, or is that not the case when doing gluster -> gluster?
16:51 sjoeboo finding the docs a bit vague on all of it
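
For orientation: in 3.3, geo-replication is driven by gsyncd, which crawls the master comparing xtime xattrs and ships changed files with rsync over ssh, so ssh is the usual transport even gluster-to-gluster. A rough sketch of the CLI, with MASTERVOL, the slave host, and the directory as placeholders (a slave gluster volume is also possible, though its URL format varies by version):

    gluster volume geo-replication MASTERVOL slavehost:/data/remote_dir start
    gluster volume geo-replication MASTERVOL slavehost:/data/remote_dir status
    gluster volume geo-replication MASTERVOL slavehost:/data/remote_dir config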
16:52 ekuric left #gluster
17:06 chirino joined #gluster
17:08 plarsen joined #gluster
17:12 greylurk joined #gluster
17:26 JoeJulian joined #gluster
17:29 y4m4 joined #gluster
17:30 adrian15 joined #gluster
17:38 unalt joined #gluster
17:47 zaitcev joined #gluster
18:00 Azrael808 joined #gluster
18:03 adrian15 joined #gluster
18:08 nueces joined #gluster
18:11 nueces joined #gluster
18:26 sjoeboo so....took a distributed replicated volume, and split it into 2 distributed volumes, but one of them has a ton of duplicate directory entries (files are fine), and it's not everything, but most.
18:27 sjoeboo stop/delete that volume, remove .glusterfs from bricks and try again?
18:27 sjoeboo I'm doing a rebalance now to see if that helps
18:27 duffrecords joined #gluster
18:36 JoeJulian sjoeboo: If it were me, I would probably "find -type f -size 0 -perm 1000 -exec /bin/rm {} \;" to remove all the dht linkto's.
18:36 sjoeboo on the volume itself or the bricks?
18:36 JoeJulian on the bricks
18:36 sjoeboo got it
18:37 sjoeboo in .glusterfs on the bricks, though, right?
18:38 sjoeboo also getting tons of (like, only) "gfid not present" msgs when rebalancing the same volume
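
An expanded, hedged version of JoeJulian's find: the dht linkto files are the zero-byte, mode-1000 (sticky bit only) entries at the brick root, not under .glusterfs, so prune .glusterfs and preview before deleting. The brick path is a placeholder:

    find /export/brick1 -path '*/.glusterfs' -prune -o \
         -type f -size 0 -perm 1000 -print
    # once the list looks right, swap -print for -exec /bin/rm {} \;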
19:13 adrian15 joined #gluster
19:26 xinkeT joined #gluster
19:41 H___ joined #gluster
19:43 sjoeboo JoeJulian: hm, that hasn't seemed to help, but I'm still poking...
20:12 isomorphic joined #gluster
20:27 DaveS joined #gluster
20:45 xlibre joined #gluster
20:48 DaveS joined #gluster
20:58 rwheeler joined #gluster
21:09 kshlm|AFK joined #gluster
21:52 andreask joined #gluster
21:59 harshpb joined #gluster
22:02 _pol joined #gluster
22:03 _pol I have a couple "find" processes stuck in "D" (uninterruptible sleep); they are waiting on "fuse_request_send". The finds were run on the gluster mount. Is there a clean way to restart the gluster client if that is the problem?
22:04 _pol (Gluster v3.2.5)
22:05 kkeithley1 left #gluster
22:05 JoeJulian _pol: umount -f /should/ clear those.
22:06 _pol you mean just umount the gluster mount and remount it?
22:08 JoeJulian Yes.
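
A sketch of that recovery; the mount point, server, and volume name are placeholders:

    umount -f /mnt/gluster        # force-unmount to error out the stuck requests
    umount -l /mnt/gluster        # lazy unmount, if the forced one also hangs
    mount -t glusterfs server1:/VOLNAME /mnt/gluster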
22:08 JoeJulian But the question comes back to why they are happening...
22:09 JoeJulian Since you're running 3.2.5, I'm guessing it's not because of a kernel update.
22:11 _pol JoeJulian: Nope, probably not. I did a "echo w > /proc/sysrq-trigger" to get more info, but I am not good at reading it.
22:12 _pol I could gist the kernel stack trace, but it would probably only be useful to a gluster dev.
22:14 _pol (call trace rather)
22:15 QuentinF left #gluster
22:16 JoeJulian Probably true. I would also upgrade to at least 3.2.7 to get the bug fixes.
22:24 lh joined #gluster
22:24 lh joined #gluster
22:40 duerF joined #gluster
22:43 lh joined #gluster
22:43 lh joined #gluster
23:07 haidz if I set up a new cluster of 5 nodes.. can I set a replication of 3?
23:23 robinr joined #gluster
23:25 robinr hi, in "gluster volume heal HomeVol info" I've been getting entries that say gfid:etc... as seen on http://www.dpaste.org/W7Xe5/
23:25 glusterbot Title: dpaste.de: Snippet #216006 (at www.dpaste.org)
23:25 robinr after issuing multiple "gluster volume heal HomeVol full", the issue persists
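
One hedged way to map such gfid: entries back to real paths: regular files under a brick's .glusterfs are hardlinks to the actual file, so -samefile can recover the name (the brick path and gfid are placeholders; directory gfids are symlinks instead, which this won't catch):

    gfid=01234567-89ab-cdef-0123-456789abcdef
    find /export/brick1 -not -path '*/.glusterfs/*' \
         -samefile "/export/brick1/.glusterfs/${gfid:0:2}/${gfid:2:2}/$gfid"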
23:44 elyograg haidz: here's one possible way to get replica 3 on five nodes.  the numbers on the left represent your nodes.  the other numbers represent the replica sets - set 1 lives on nodes 1, 2, and 3.  http://www.fpaste.org/C2xT/
23:44 glusterbot Title: Viewing replica 3 on 5 nodes by elyograg (at www.fpaste.org)
23:45 elyograg haidz: in this setup, node 1 only has two bricks, node 5 only has one.  you could keep adding replica sets by adding bricks, but I don't know when it would even out.  probably not before you ran out of places to put disks. :)
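
For what it's worth, the layout evens out once the brick count reaches lcm(3,5) = 15, i.e. 3 bricks per node. A sketch of elyograg's staggering as a create command (hostnames and brick paths are placeholders); gluster forms a replica set from each consecutive group of 3 bricks, so this ordering is what keeps every set on 3 different nodes:

    gluster volume create myvol replica 3 \
        node1:/bricks/a node2:/bricks/a node3:/bricks/a \
        node4:/bricks/a node5:/bricks/a node1:/bricks/b \
        node2:/bricks/b node3:/bricks/b node4:/bricks/b \
        node5:/bricks/b node1:/bricks/c node2:/bricks/c \
        node3:/bricks/c node4:/bricks/c node5:/bricks/c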
23:46 FyreFoX semiosis: are you around?
23:52 haidz elyograg, is there a way to have a consistent number of bricks across hosts and scale by adding hosts?
23:57 haidz will gluster ever write a file to two different bricks on the same host? or will they go to different hosts by design?
