IRC log for #gluster, 2013-03-17


All times shown according to UTC.

Time Nick Message
00:01 yinyin joined #gluster
01:02 yinyin joined #gluster
01:12 davidbitton joined #gluster
01:28 jdarcy joined #gluster
01:30 cyberbootje joined #gluster
01:42 davidbitton joined #gluster
01:49 davidbitton has anyone seen an issue where the data on the server disappears? i set up a test bench, and after I rebooted my VMs, my bricks are empty on both servers in the cluster
02:02 yinyin joined #gluster
02:13 joehoyle joined #gluster
02:20 dbruhn left #gluster
02:23 davidbitton_ joined #gluster
02:35 jdarcy joined #gluster
02:41 mooperd joined #gluster
02:50 joehoyle joined #gluster
02:51 Ryan_Lane joined #gluster
02:51 ProT-0-TypE joined #gluster
02:55 Ryan_Lane1 joined #gluster
03:02 yinyin joined #gluster
03:25 disarone_ joined #gluster
03:26 joehoyle joined #gluster
03:42 bala joined #gluster
04:03 joehoyle- joined #gluster
04:03 yinyin joined #gluster
04:39 20WAB4VGP joined #gluster
04:41 yinyin joined #gluster
04:50 kjoshi____ joined #gluster
05:26 yinyin joined #gluster
05:27 Ryan_Lane joined #gluster
05:50 Ryan_Lane joined #gluster
06:14 yinyin joined #gluster
06:47 harshpb joined #gluster
07:26 harshpb joined #gluster
07:34 Ryan_Lane joined #gluster
08:10 aravindavk joined #gluster
08:24 plarsen joined #gluster
08:25 plarsen Is selinux supported on gluster?
09:01 vimal joined #gluster
09:18 bala joined #gluster
09:25 Ryan_Lane joined #gluster
09:32 davis_ joined #gluster
09:51 mynameisdeleted2 joined #gluster
09:51 mynameisdeleted2 hi
09:51 glusterbot mynameisdeleted2: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:51 mynameisdeleted2 [root@cgluster-n3 ~]# ls /var/lib/nova/instances/instance-0000003b/disk
09:51 mynameisdeleted2 ls: cannot access /var/lib/nova/instances/instance-0000003b/disk: Transport endpoint is not connected
09:54 johndesc1 huhuh another openstack tweaker :P
10:01 ProT-0-TypE joined #gluster
10:02 ProT-0-TypE Hi, I'm testing CTDB + Gluster following this guide: http://download.gluster.org/pub/gluster/glusterfs/doc/Gluster_CTDB_setup.v1.pdf
10:02 glusterbot <http://goo.gl/95vXT> (at download.gluster.org)
10:03 ProT-0-TypE but in the ctdb logs I have this error: ERROR: samba tcp port 445, is not responding
10:03 ProT-0-TypE but smb is up and running and listening on that port
10:03 ProT-0-TypE anyone with the same problem? I'm using samba4 not 3
10:13 ProT-0-TypE and I'm on CentOS 6.4
10:22 mynameisdeleted2 hi all
10:22 mynameisdeleted2 what's the best test to make sure a gluster filesystem is 100% there
10:22 mynameisdeleted2 beyond just mkdir and seeing if it shows up on other servers? and reads?
10:26 jdarcy joined #gluster
10:30 davis_ mynameisdeleted2, Erm, huh? If it's "there"? If you can create/read data on it, then it's "there"... . o O(How else would you be seeing it?)
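
A trivial smoke test along the lines davis_ suggests: write a file through one client mount and read it back through another. The mount point /mnt/gluster here is hypothetical, not from the log:

    # on client A (mount point is an assumption)
    echo healthcheck > /mnt/gluster/.healthcheck
    # on client B, mounting the same volume
    cat /mnt/gluster/.healthcheck
    rm /mnt/gluster/.healthcheck
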
10:32 H__ Why can one not replace multiple bricks in a volume at a time?
10:34 wenzi joined #gluster
10:56 mynameisdeleted2 well.. I seem to have libvirtd errors
10:56 mynameisdeleted2 I did a thing by adding multiple bricks... 2 bricks to a 2x1 volume
10:57 mynameisdeleted2 2 nodes with 2x replication
10:57 mynameisdeleted2 so it became 2x2
10:57 mynameisdeleted2 could I have messed up the filesystem in that process?
11:01 ProT-0-TypE <ProT-0-TypE> but in the ctdb logs I have this error: ERROR: samba tcp port 445, is not responding <--- seems to be a samba4 problem, with samba3 it works
11:03 samppah mynameisdeleted2: what glusterfs version are you using? can you send output of gluster volume info to pastie.org?
11:05 mynameisdeleted2 http://pastie.org/6580460
11:05 glusterbot Title: #6580460 - Pastie (at pastie.org)
11:05 mynameisdeleted2 the tutorial only showed 4 way replication.. no distribution
11:05 mynameisdeleted2 so maybe openstack might not like distribution if it causes delay in update?
11:06 mynameisdeleted2 maybe a bad file-locking option which lets another node try to read the file before it's ready?
11:06 mynameisdeleted2 very small instance launches work
11:07 mynameisdeleted2 can I reconfigure to 4 way replication live?
11:07 samppah mynameisdeleted2: i don't think that's the issue
11:11 ProT-0-TypE found the solution for samba4: uncomment CTDB_SAMBA_CHECK_PORTS="445" in /etc/sysconfig/ctdb
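
For reference, a sketch of the fix as ProT-0-TypE describes it; the variable and file path come from his message, while the comment about why the default check trips over samba4 is an assumption:

    # /etc/sysconfig/ctdb
    # With this left commented out, ctdb tries to work out which ports to
    # health-check on its own, which apparently misfires under samba4;
    # uncommenting it makes the check probe port 445 explicitly.
    CTDB_SAMBA_CHECK_PORTS="445"
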
11:12 mynameisdeleted2 http://codepad.org/hHSrJREq
11:12 glusterbot Title: Plain Text code- 9 lines - codepad (at codepad.org)
11:12 mynameisdeleted2 why doesnt pastie work twice for me?
11:13 mynameisdeleted2 http://pastie.org/6580514
11:13 glusterbot Title: #6580514 - Pastie (at pastie.org)
11:13 mynameisdeleted2 there
11:14 mynameisdeleted2 might I fix my problems by making one node an nfs server and using all nfs mounts?
11:14 mynameisdeleted2 I know this is terrible for distributed performance
11:15 mynameisdeleted2 but it would force 100% synchronous file reads/writes
11:15 mynameisdeleted2 is there a better way to do that?
11:15 mynameisdeleted2 http://community.gluster.org/q/why-doesn-t-gluster-native-client-work-on-openstack/
11:15 glusterbot <http://goo.gl/xzzPD> (at community.gluster.org)
11:17 disarone_ joined #gluster
11:51 aravindavk joined #gluster
12:15 wenzi joined #gluster
12:21 timothy joined #gluster
12:48 ProT-0-TypE joined #gluster
12:53 yinyin_ joined #gluster
13:03 johndesc2 joined #gluster
13:14 davidbitton joined #gluster
13:25 hagarth joined #gluster
14:25 davidbitton joined #gluster
14:32 aravindavk joined #gluster
14:32 davidbitton i'm stuck. no matter what I put in /etc/fstab and/or /etc/rc.local, i am unable to get my glusterfs share to mount at boot on my CentOS 6.4 VM
14:52 shylesh joined #gluster
14:55 yinyin_ joined #gluster
14:56 H__ i bet that's the same race i see in ubuntu
15:09 joehoyle joined #gluster
15:14 18VAAVDJ9 joined #gluster
15:15 robo joined #gluster
15:28 davidbitton after doing A LOT of googling, i added a sleep 10 before the modprobe and mount -a in my rc.local file
15:40 davidbitton i just wish i knew what I was sleeping for
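
A minimal sketch of the workaround davidbitton describes, assuming a stock CentOS 6 /etc/rc.local; the 10-second delay is a guess at how long networking needs to come up, not a documented value:

    # appended to /etc/rc.local: wait out the boot race, load fuse, then
    # retry the glusterfs entries from /etc/fstab
    sleep 10
    modprobe fuse
    mount -a -t glusterfs
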
15:46 joehoyle joined #gluster
15:59 disarone joined #gluster
16:03 bulde joined #gluster
16:12 hateya joined #gluster
16:29 bulde1 joined #gluster
16:39 joehoyle joined #gluster
16:53 bstansell joined #gluster
16:55 mooperd joined #gluster
16:56 brunoleon joined #gluster
17:08 disarone joined #gluster
17:33 davidbitton joined #gluster
18:05 _pol joined #gluster
18:07 plarsen joined #gluster
18:08 hagarth joined #gluster
18:29 hagarth joined #gluster
18:32 JoeJulian mynameisdeleted2: I added an answer to that question.
18:35 mynameisdeleted2 let me look
18:36 JoeJulian mynameisdeleted2: Did you do a rebalance (or at least a rebalance...fix-layout) after adding the new bricks? If you didn't, then the client's still trying to create your new instances on the first two bricks. That would explain why smaller images work, but larger don't.
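
A sketch of the commands JoeJulian is referring to, in 3.3-era CLI form, reusing the volume name vm-instances that appears later in the log:

    # recompute the directory layouts so new files can hash to the new bricks
    gluster volume rebalance vm-instances fix-layout start
    # or also migrate existing files onto the new bricks
    gluster volume rebalance vm-instances start
    gluster volume rebalance vm-instances status
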
18:36 mynameisdeleted2 might
18:36 mynameisdeleted2 I did rebalance
18:36 mynameisdeleted2 you saw my rebalance output in the paste urls
18:36 mynameisdeleted2 but it did run awfully fast
18:37 JoeJulian I didn't go back and open them, tbh
18:37 mynameisdeleted2 and I did fix-layout
18:37 mynameisdeleted2 maybe my reboot will fix this
18:37 mynameisdeleted2 let me try setting back to the gluster mount instead of nfs and see if it works well
18:37 mynameisdeleted2 nfs mount makes it take forever to boot instances
18:38 JoeJulian I've been successfully using the native mount with openstack.
18:38 JoeJulian ... and I didn't do anything special.
18:38 mynameisdeleted2 yeah.. before adding 2 nodes I was too
18:38 mynameisdeleted2 I wonder if moofs is better?
18:39 mynameisdeleted2 like not having to rebalance and remount everything when you grow
18:39 JoeJulian Why is one server named "64.182.68.142"?
18:39 mynameisdeleted2 thats an ip
18:39 mynameisdeleted2 not really sure
18:39 JoeJulian Not the point.
18:39 mynameisdeleted2 it should have hosts entry
18:39 mynameisdeleted2 and hostname set
18:39 mynameisdeleted2 maybe something wrong with the config there?
18:39 JoeJulian Did you not probe the first machine from any of the subsequent ones?
18:40 mynameisdeleted2 only from first machine
18:40 mynameisdeleted2 is that why?
18:40 mynameisdeleted2 I have to probe every machine from all?
18:40 mynameisdeleted2 I followed the centos gluster guide to the best of my reading
18:40 JoeJulian No, just that first one.
18:40 JoeJulian It's in the documentation.
18:40 mynameisdeleted2 yeah.. that's what I thought I read
18:41 JoeJulian Also in this factoid: ,,(hostnames)
18:41 glusterbot factoid: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
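
A sketch of the probe sequence the factoid describes, with hypothetical hostnames server1 through server3:

    # from server1, probe every other peer by name
    gluster peer probe server2
    gluster peer probe server3
    # then, from any one of the others, probe server1 by name so the pool
    # records a hostname for it instead of an IP
    gluster peer probe server1
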
18:41 mynameisdeleted2 if I did probe from another by mistake is that bad
18:42 JoeJulian No. When you probe all your servers from one, that one can't determine what hostname you would like it to be. Should it be its fqdn? shortname? some other cname? So you have to probe that first server from one other (any other).
18:42 mynameisdeleted2 let me do that
18:42 mynameisdeleted2 will it showing as an ip break anything, you think?
18:42 JoeJulian I don't /think/ that has anything to do with the problem, just something that I noticed.
18:43 mynameisdeleted2 I'm wondering if my reboot might have fixed it
18:43 mynameisdeleted2 I think I did reboot before after rebalance and it didnt
18:45 JoeJulian If not, you might want to do another rebalance and add the word "force" to it. That might make it complete without error. It reports errors if it tries to move a file from a less-full brick to a more-full one.
18:45 mynameisdeleted2 thanks
18:45 mynameisdeleted2 let me try that
18:46 mynameisdeleted2 volume rebalance vm-instances start force
18:46 mynameisdeleted2 within gluster shell
18:46 mynameisdeleted2 lets see how this fares
18:46 mynameisdeleted2 should take a few hours?
18:46 JoeJulian The "transport endpoint not connected" error generally means that the tcp connection between the client and server (or pair of servers if the distribute subvolume is a replica) is broken.
18:47 JoeJulian depends on how much it has to move.
18:47 JoeJulian Doesn't look like it though since you only had one error.
18:47 mynameisdeleted2 only one node is balancing
18:47 mynameisdeleted2 when I run that again with status
18:47 JoeJulian yeah, that's normal.
18:48 JoeJulian That q&a that you linked is one of the reasons I hate q&a sites.
18:49 JoeJulian Why would you ask a question about an error message and not include the error message? It's a rookie mistake.
18:49 mynameisdeleted2 hey.. thats what I get from untrained novice "tech support" all the time
18:50 mynameisdeleted2 in an old company
18:50 mynameisdeleted2 they think they can hire grandmas for that
18:50 JoeJulian Hehe
18:50 JoeJulian Luckily in our company if it gets to me it's made it through a manager. If a manager makes that mistake, I just send it back to them.
18:50 mynameisdeleted2 this project for mac needs Xquartz... they dont know how to google its download site or that it needs to be downloaded and installed
18:51 mynameisdeleted2 yeah
18:51 mynameisdeleted2 well.. when I call tech support at most datacenters it's a know-nothing dropout
18:51 mynameisdeleted2 so I guess most companies' tech support is at the same level
18:51 JoeJulian Yikes. You need better datacenters. :D
18:52 mynameisdeleted2 I like it... they have techs in romania or wherever that know the stuff
18:52 mynameisdeleted2 once I move over for work I'll hire/manage my own datacenter workers
18:52 mynameisdeleted2 we are buying out that company
18:52 mynameisdeleted2 they do have very good ac, power, etc
18:52 mynameisdeleted2 and we are getting peering in
18:52 JoeJulian Sounds exciting.
18:52 mynameisdeleted2 I have to read up on bgp
18:53 mynameisdeleted2 so.. if I can get desktop computer on any budget
18:53 mynameisdeleted2 dual 10-core xeon with server-board put inside desktop tower?
18:53 mynameisdeleted2 or amd 128-core?
18:53 JoeJulian I prefer the amd, but it's personal preference.
18:53 mynameisdeleted2 amd is slower but you can put 8 cpus in a shared-memory server with 16 cores per cpu
18:53 H__ Why can one not replace multiple bricks in a volume at a time?
18:54 JoeJulian H__: Good question.
18:54 mynameisdeleted2 so I ran my volume rebalance vm-instances start force
18:55 mynameisdeleted2 after about 20 mins or so.. maybe 30 it shows all complete
18:55 mynameisdeleted2 should I try switching back from nfs you think?
18:55 mynameisdeleted2 this system is not production
18:55 mynameisdeleted2 it's testing right now
18:55 mynameisdeleted2 so I can afford downtime
18:55 mynameisdeleted2 testing-systems are meant to be broken
18:55 mynameisdeleted2 but a good uptime record makes people happier about switching them
18:56 JoeJulian H__: The process it goes through is to create a mount with the new brick and a special volume configuration that makes that new brick a mirror of the old. It then does the self-heal on just that brick to create the new one. Once that self-heal is completed, the commit then replaces the brick in the client definition.
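
The mechanism JoeJulian outlines corresponds to the 3.x replace-brick commands; a sketch with hypothetical volume and brick paths:

    # start mirroring the old brick onto the new one
    gluster volume replace-brick myvol server1:/bricks/old server2:/bricks/new start
    # watch the single-brick self-heal progress
    gluster volume replace-brick myvol server1:/bricks/old server2:/bricks/new status
    # once migration completes, swap the brick into the volume definition
    gluster volume replace-brick myvol server1:/bricks/old server2:/bricks/new commit
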
18:56 JoeJulian mynameisdeleted2: I would.
18:57 JoeJulian H__: So, could that be done with multiple bricks? I can't think of any reason why not, other than perhaps load issues. file a bug report
18:57 glusterbot http://goo.gl/UUuCq
18:58 H__ good idea. I will
18:59 JoeJulian Ok, I'm going to take my daughter for a walk. ttfn.
19:00 H__ and i'll be putting mine in bed now :) 20:00 here
19:18 eiki joined #gluster
19:21 H__ Bug 922542 has been added to the database :)
19:21 glusterbot Bug http://goo.gl/0O7OW low, unspecified, ---, vbellur, NEW , Please add support to replace multiple bricks at a time.
19:27 glusterbot New news from newglusterbugs: [Bug 922542] Please add support to replace multiple bricks at a time. <http://goo.gl/0O7OW>
19:52 bala joined #gluster
20:13 tryggvil__ joined #gluster
20:18 bala joined #gluster
20:21 jdarcy joined #gluster
20:38 badone joined #gluster
20:46 hateya joined #gluster
21:01 badone joined #gluster
21:06 badone joined #gluster
21:23 bstromski joined #gluster
22:21 jdarcy joined #gluster
22:29 jdarcy joined #gluster
22:33 zeedon joined #gluster
22:34 zeedon was wondering if anyone could give me some information on this setting for tuning volume options, "performance.read-ahead"; I cannot seem to find any details in the documentation for gluster 3.3
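
For context, performance.read-ahead toggles the client-side read-ahead translator; a sketch of changing and checking it (the volume name is hypothetical):

    # disable (or "on" to enable) the read-ahead translator
    gluster volume set myvol performance.read-ahead off
    # volume info lists only the options changed from their defaults
    gluster volume info myvol
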
22:45 hagarth joined #gluster
22:49 jdarcy joined #gluster
22:53 ackjewt joined #gluster
22:59 21WAABFC3 joined #gluster
23:03 jdarcy_ joined #gluster
23:03 hagarth joined #gluster
23:23 mynameisdeleted2 so... http://eshop.macsales.com/item/Other%20World%20Computing/SSDPHW2R960/?utm_source=google&utm_medium=shoppingengine&utm_campaign=googlebase&gclid=CI7Wo_rxhLYCFYSK4AodjX8A8g
23:23 glusterbot <http://goo.gl/gMgeq> (at eshop.macsales.com)
23:23 mynameisdeleted2 960GB ssd pcie drive
23:23 mynameisdeleted2 most of those read or write 1GB/s random
23:23 mynameisdeleted2 much better than sata or sas. but requires a server that supports pcie
23:23 mynameisdeleted2 or using a desktop
23:24 mynameisdeleted2 my network card... is probably infiniband with rdma support because I can get a used 40g card for 130 and it still works, but this again requires a 2nd pcie 8x slot
23:25 mynameisdeleted2 if I add gpu-compute nodes (lxc or whatever) I'll need a 3rd and 4th slot for that.. this may be a setup that works better in desktop form than server form
23:25 mynameisdeleted2 dual-port 40g infiniband with pcie-ssd is the ideal for gluster performance right?
23:26 mynameisdeleted2 any idea of servers that support lots of high-bandwidth pcie addon cards?
23:31 mynameisdeleted2 ahh.. poweredge c8220 has dual pcie slots in a blade-server... data nodes run IB + pcie-ssd... gpu-nodes run gpu + IB
23:37 badone joined #gluster
23:38 vigia joined #gluster
23:41 hagarth joined #gluster
