
IRC log for #gluster, 2012-11-23


All times shown according to UTC.

Time Nick Message
00:05 JoeJulian JFK: No, you can't force the client to read from a specific server yet. That is something being worked on though. It was originally designed to be able to scale up the number of clients as necessary. When you have more clients than servers the better choice is whichever server is less busy.
00:06 JoeJulian mnaser: Most people use ucarp for tcp failover. Also ,,(php)
00:06 glusterbot mnaser: php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://goo.gl/uDFgg for details.
00:07 JoeJulian GLHMarmot: If you use ,,(hostnames) you can just use one of many split-horizon dns solutions.
00:07 glusterbot GLHMarmot: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
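
A sketch of that probing procedure, with illustrative hostnames:

    # existing pool: switch a peer's address from IP to hostname by re-probing it by name
    gluster peer probe server2.example.com
    # new pool: from server1, probe every other server by name ...
    gluster peer probe server2.example.com
    gluster peer probe server3.example.com
    # ... then, from any one of the others, probe server1 back by name
    gluster peer probe server1.example.com
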
00:08 mnaser JoeJulian: that was my justification for using nfs, which apparently improved that
00:11 GLHMarmot JoeJulian: WOOT! Re: creating with a pre-filled drive.
00:12 GLHMarmot JoeJulian: Re: hostnames, again, WOOT!
00:12 GLHMarmot Got some more playing to do, thanks.
00:12 JoeJulian You're welcome.
00:13 JFK JoeJulian: this is not good info :-( but thanks anyway
00:14 JFK is there anything better for replication than gluster? I do not need a distributed fs. Replicated only is fine but it has to be able to operate on more than 2 machines.
00:28 JoeJulian JFK: rsync & a cron job?
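
A minimal sketch of that approach, assuming the data lives in /srv/www and the peers are reachable over ssh (paths and hostname are illustrative):

    # /etc/cron.d/mirror-www -- push changes to the second web server every 5 minutes
    */5 * * * * root rsync -a --delete /srv/www/ web2:/srv/www/
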
00:36 andreask1 JFK: what is your usecase?
00:49 ackjewt joined #gluster
00:50 shireesh joined #gluster
01:23 andreask1 left #gluster
01:25 JFK JoeJulian, too slow.
01:26 JFK andreask1, 2x web servers with a proxy in front.
01:26 JFK i have to be able to use any of those webservers anytime so i need exactly the same data there
01:30 JFK ideal fs should even be able to keep session files - a lot of small files modified constantly
01:40 kevein joined #gluster
02:04 sunus joined #gluster
02:12 redsolar joined #gluster
02:20 bala1 joined #gluster
02:24 bharata joined #gluster
02:26 kevein joined #gluster
02:41 kevein joined #gluster
03:38 Humble joined #gluster
03:59 bala1 joined #gluster
03:59 vpshastry joined #gluster
04:05 Humble joined #gluster
04:07 sripathi joined #gluster
04:48 bulde joined #gluster
05:07 bala1 joined #gluster
05:20 vpshastry joined #gluster
05:23 glusterbot New news from resolvedglusterbugs: [Bug 835034] Some NFS file operations fail after upgrading to 3.3 and before a self heal has been triggered. <http://goo.gl/Q2l0l>
05:44 sunus hi, what is argp-standalone?
05:49 vpshastry joined #gluster
05:53 chirino joined #gluster
05:57 Humble joined #gluster
06:01 vshastry joined #gluster
06:03 mtanner joined #gluster
06:09 ramkrsna joined #gluster
06:10 Varun joined #gluster
06:12 hagarth joined #gluster
06:13 ramkrsna joined #gluster
06:13 ramkrsna joined #gluster
06:15 vshastry joined #gluster
06:19 Varun joined #gluster
06:20 kleind JoeJulian: thanks for replying
06:26 vshastry joined #gluster
06:26 inodb joined #gluster
06:27 Humble joined #gluster
06:30 Varun joined #gluster
06:32 mohankumar joined #gluster
06:35 rudimeyer joined #gluster
06:40 guigui3 joined #gluster
06:42 puebele1 joined #gluster
06:48 bulde joined #gluster
07:02 vshastry joined #gluster
07:09 ngoswami joined #gluster
07:11 rgustafs joined #gluster
07:21 vshastry joined #gluster
07:23 Humble joined #gluster
07:34 guigui3 left #gluster
07:42 dobber joined #gluster
07:43 bala1 joined #gluster
07:44 ctria joined #gluster
07:53 xymox joined #gluster
07:55 lkoranda joined #gluster
08:01 deepakcs joined #gluster
08:03 ekuric joined #gluster
08:07 ekuric joined #gluster
08:09 Varun joined #gluster
08:17 guigui3 joined #gluster
08:26 tjikkun_work joined #gluster
08:27 Varun joined #gluster
08:30 bulde joined #gluster
08:47 andreask joined #gluster
09:01 gbrand_ joined #gluster
09:06 Azrael808 joined #gluster
09:22 bauruine joined #gluster
09:24 pkoro joined #gluster
09:34 ramkrsna joined #gluster
09:34 ramkrsna joined #gluster
09:35 frakt Hi glusterers. Is there an installation guide for installing glusterfs 3.3 on debian 6?
09:40 glusterbot New news from newglusterbugs: [Bug 879536] Add use-rsync-xattrs to geo-replication <http://goo.gl/kwNbB>
09:43 Humble joined #gluster
09:45 kleind frakt: use this repo: http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/Debian/squeeze.repo, then import the gpg key and install "glusterfs-server" and "attr"
09:45 glusterbot <http://goo.gl/rX2xb> (at download.gluster.org)
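
Spelled out, those steps would look roughly like this on squeeze (the gpg key filename is an assumption; check the repo directory on download.gluster.org for the real one):

    # add the repo definition
    wget -O /etc/apt/sources.list.d/gluster.list \
        http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/Debian/squeeze.repo
    # import the signing key (filename assumed)
    wget -O - http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/Debian/pubkey.gpg | apt-key add -
    apt-get update && apt-get install glusterfs-server attr
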
09:45 frakt okay
09:46 frakt thanks
09:48 bala2 joined #gluster
09:57 inodb joined #gluster
10:04 Humble joined #gluster
10:06 Humble joined #gluster
10:21 inodb_ joined #gluster
10:22 rudimeyer Anybody tested out a failure of an EBS disk in a Gluster/Amazon EC2 setup?
10:37 vshastry joined #gluster
10:38 duerF joined #gluster
10:40 vpshastry joined #gluster
10:42 twx_ glusterbot: repo
10:42 glusterbot twx_: I do not know about 'repo', but I do know about these similar topics: 'repository', 'yum repo', 'yum repository', 'git repo', 'ppa repo', 'yum33 repo', 'yum3.3 repo'
10:42 twx_ glusterbot: yum repo
10:42 glusterbot twx_: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
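
For an EL box, consuming that repository is roughly the following (the .repo filename is illustrative, and the shortlink may point at a directory rather than the file itself):

    # drop the repo definition into yum's config directory
    wget -O /etc/yum.repos.d/glusterfs-epel.repo http://goo.gl/EyoCw
    yum install glusterfs glusterfs-server glusterfs-fuse
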
10:48 bulde joined #gluster
10:56 sunus volume create Dvol-4 gf233:/gfdata/Dbrk32335 s71:/gfdata/Dbrks372115 hd244:/gfdata/Dbrk3245
10:56 sunus volume create: Dvol-4: failed: Operation failed on hd244
10:56 sunus what could possibly cause that? hd244 is a new node, i can ping hd244 and peer status says hd244 is connected.
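
The usual next diagnostic steps for a "failed: Operation failed on <host>" create, as a sketch (the log path is the common 3.3 default):

    gluster peer status      # hd244 should show 'Peer in Cluster (Connected)'
    # on hd244, the real rejection reason is logged around the failure time
    less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    # leftover gluster xattrs on the brick path are a common cause; inspect with
    getfattr -m . -d -e hex /gfdata/Dbrk3245
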
11:01 Humble joined #gluster
11:03 sshaaf joined #gluster
11:25 twx_ jdarcy: hey man, I saw a couple of days ago that you resolved (I think) an issue reported in bugzilla regarding reuse of an existing brick, which was not resolved by the method for clearing xattrs that JoeJulian described. You don't happen to have a URL or number for that issue? Couldn't find it when I searched bugzilla
11:33 Nr18 joined #gluster
11:51 asou joined #gluster
11:52 asou hi all
11:53 asou can someone validate that the solution provided here ( http://community.gluster.org/q/glusterfs-directory/ ) will not lead to data corruption and/or loss?
11:53 glusterbot <http://goo.gl/IgU7f> (at community.gluster.org)
11:57 joeto joined #gluster
12:00 inodb joined #gluster
12:13 inodb_ joined #gluster
12:20 Jippi joined #gluster
12:23 manik joined #gluster
12:31 asou can someone validate that the solution provided here ( http://community.gluster.org/q/glusterfs-directory/ ) will not lead to data corruption and/or loss?
12:31 glusterbot <http://goo.gl/IgU7f> (at community.gluster.org)
12:36 duerF joined #gluster
12:46 wica Hi, I have some files in my heal-failed overview. What can I do with them?
12:48 vshastry joined #gluster
12:49 Johan joined #gluster
12:49 Johan Hi All
12:50 Guest18045 If you add a new brick to your volume, do you have to manually trigger a "heal" or does self / auto heal kick in after some time?
12:52 hchiramm_ joined #gluster
12:58 Humble joined #gluster
13:07 asou guest18045 afaik it does trigger an auto self-heal whenever a process issues a stat call on a file
13:13 Guest18045 This means that when a new brick is added, either the gluster heal command has to be executed OR the stat command run on the files of the replaced brick. Is this correct?
13:16 inodb joined #gluster
13:17 asou i am not sure how it works internally, but an "ls -lR" within the gluster volume will recursively perform a stat call on all files
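
That is, through a client mount (mountpoint illustrative):

    # stat every file via the FUSE mount; each stat can trigger a self-heal check
    ls -lR /mnt/glustervol > /dev/null
    # on 3.3 a full crawl can also be started from the CLI
    gluster volume heal VOLNAME full
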
13:18 Guest18045 and after a normal server reboot, is a heal needed or does it happen automatically?
13:19 rgustafs_ joined #gluster
13:22 lkoranda joined #gluster
13:32 andreask joined #gluster
13:33 inodb_ joined #gluster
13:37 tryggvil joined #gluster
13:43 Humble joined #gluster
13:45 sshaaf joined #gluster
13:47 RobertLaptop joined #gluster
13:55 edward1 joined #gluster
13:57 Norky Gluster over NFS is not working. "gluster volume status VOLNAME nfs" shows Online Y for all (4) servers, and rpcinfo shows the nfs service running on the corresponding port, but showmount -e lists no exports and the server responds "no such directory" when I try to mount
13:59 kleind Norky: share details and commands?
14:00 Norky I'll put the commands and output in a pastebin rather than spam the channel, one moment...
14:03 Norky http://pastie.org/5423224
14:03 glusterbot Title: #5423224 - Pastie (at pastie.org)
14:04 Norky four gluster hosts (distributed, replicated) running glusterfs 3.3.0 on Red Hat Storage, one volume
14:04 Norky the client is a RHEL6.3 host
14:05 Norky I have had NFS to the gluster volume working before, but have recently recreated the volume after doing some FS tuning
14:06 Norky I apologise for the syntax highlighting in that paste, browser trouble meant I could not change from the default of ruby on rails
14:10 asou Norky i had similar issues and i had to restart rpcbind, glusterd and nfs
14:10 asou then all worked as expected
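
On an EL6-style system that sequence would be roughly the following (service names vary by distro; note gluster serves NFS itself, and a running kernel nfsd conflicts with it):

    service rpcbind restart
    service glusterd restart     # gluster's built-in NFS server restarts with glusterd
    service nfs stop             # only if the kernel NFS server happens to be running
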
14:12 Norky I'll try that in a minute then (I'm currently doing some tests via the native client)
14:13 Norky I'm comparing the native client to NFS-to-glusterd, as well as standard NFS
14:13 Norky so far, native client performance sucks :/
14:14 asou depends on the type of workload. for small files (like a web server root) it really sucks
14:14 asou but for large files, it is performing very well
14:14 Norky this is with large files
14:17 Norky I'm doing dumb tests like dd if=/dev/zero of=/mnt/1tbfile bs=1M count=1M and seeing speeds around 50MB/s, both over TCP and RDMA (I'm not convinced it is actually using rdma)
14:18 Norky the same test done locally on one of the gluster servers gives > 1100 MB/s (yes, over 1 Gigabyte per second) so I know the local disk subsystem on each server is up to the job
14:18 Norky bonnie++  gives similar figures for seq writes
14:22 puebele joined #gluster
14:24 Norky also, what is the correct mount option for using rdma with the native client? I've seen references to both -o transport=rdma and servername:/VOLNAME.rdma
14:28 nueces joined #gluster
14:34 yinyin joined #gluster
14:34 yinyin joined #gluster
14:41 puebele joined #gluster
14:42 yinyin hi all, who use systemtap to test glusterfs?
14:44 vpshastry joined #gluster
14:44 andreask Norky: looks like it uses the 1Gb interfaces for the replication ... 2 replicas ... 2x50MB/s ~ 100MB/s, i.e. the limit of a 1Gb link
14:46 ron-slc joined #gluster
14:49 Norky andreask, I was wondering that myself, but I was getting the same speed when I recreated the volume using only rdma
14:50 andreask you can simply check by network statistics
14:51 Norky ib_ping and ib_rdma_read between servers and clients work as expected, i.e. I was seeing speeds of 3200 MB/s (or 1700MB/s for my older 4x DDR IB nodes)
14:52 Norky what do you mean, andreask ? sit and watch ifconfig | grep bytes ?
14:52 andreask iftop or any other tool ... or dstat/vmstat
14:55 Norky hmm, that'd work
14:56 Norky I'm limited in what I have on the servers, there's no iftop, but dstat -N works
14:58 Norky yep, it's definitely using the Ethernet interfaces, and not the IB interfaces, despite my telling it to use rdma
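
For reference, the kind of per-interface check used here (interface names illustrative):

    # bytes in/out per network interface, one sample every 5 seconds
    dstat -N eth0,ib0 5
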
15:22 andreask and you mount glusterfs with "-o rdma" option?
15:22 andreask transport=rdma
15:24 andreask Norky: ^^^
15:24 Norky yes
15:24 Norky sudo mount -t glusterfs -o transport=rdma lnasilo0:/tid /cae
15:25 Norky I also tried with /tid.rdma
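
For reference, the two rdma spellings being alternated here; whether each form is honoured depends on the glusterfs version:

    mount -t glusterfs -o transport=rdma lnasilo0:/tid /cae
    mount -t glusterfs lnasilo0:/tid.rdma /cae
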
15:25 andreask no log entries?
15:29 Norky client log after mount -o transport=rdma     http://pastie.org/5423531
15:29 glusterbot Title: #5423531 - Pastie (at pastie.org)
15:29 Norky it definitely says it's using tcp
15:29 Norky well, by my reading
15:31 andreask tried creating the volume with rdma transport only?
15:33 Norky yes, I believe then the log suggested it was using rdma transport, however the speed was identical
15:34 Norky I will retry to confirm it
15:35 * andreask needs to run ... nice weekend!
15:35 Norky okay, thanks for your help
15:35 Norky have a fine weekend :)
15:44 mtanner joined #gluster
16:19 rudimeyer_ joined #gluster
16:22 social_ joined #gluster
17:10 Nr18 joined #gluster
17:14 hagarth joined #gluster
17:18 inodb joined #gluster
17:19 inodb^ joined #gluster
17:22 inodb_ joined #gluster
17:28 Jippi joined #gluster
17:42 Daxxial_ joined #gluster
17:42 inodb joined #gluster
17:51 nightwalk joined #gluster
18:08 nightwalk joined #gluster
18:14 m0zes joined #gluster
18:22 masterzen joined #gluster
18:31 gbrand_ joined #gluster
18:40 tryggvil joined #gluster
18:49 bauruine joined #gluster
19:01 y4m4 joined #gluster
19:01 GLHMarmot I have an update on the creation of a volume that has a brick pre-populated with files.
19:01 GLHMarmot It only kinda works. :-)
19:01 GLHMarmot The volume gets created just fine but none of the files show up
19:02 andreask joined #gluster
19:02 GLHMarmot If I mount the volume and "touch" any of the objects (directories or files) in the mounted volume
19:02 GLHMarmot They then show up and are replicated.
19:03 GLHMarmot This seems to be because when the volume is first created, none of the existing files have extended attributes
19:03 GLHMarmot If I "touch" them, the attributes are created.
19:04 GLHMarmot This is backed up by a test where I created a volume on a path that had files and directories from a previous test that already had the attributes.
19:04 GLHMarmot Any object that existed from the previous test showed up in the "new" volume and was replicated.
19:05 GLHMarmot This is a workable solution for me as I can easily script the "touch" command.
19:06 inodb_ joined #gluster
19:11 GLHMarmot Hah! Even better, I only have to "touch" the top level items in the volume. Any sub-directories and files show up after that.
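
A sketch of that workaround, touching just the top-level entries through the client mount (mountpoint illustrative; note that touch updates mtimes, hence JoeJulian's alternative below):

    # one touch per top-level entry; everything beneath then shows up and heals
    find /mnt/glustervol -maxdepth 1 -mindepth 1 -exec touch {} +
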
20:40 m0zes joined #gluster
20:50 JoeJulian wica: heal-failed question: That depends on why the heal failed. If they're not in split-brain then you'll have to parse through logs looking for more info. The heal-failed should have a timestamp to look for.
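
For reference, the 3.3 commands that list those entries with timestamps (volume name illustrative):

    gluster volume heal VOLNAME info heal-failed
    gluster volume heal VOLNAME info split-brain
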
20:53 JoeJulian GLHMarmot: "gluster volume heal $vol full" might do that without changing timestamps.
20:56 daMaestro joined #gluster
21:06 johnmark
21:06 johnmark oopa
21:07 daMaestro oopa to you too johnmark ;-)
21:07 andreask left #gluster
21:07 johnmark heh :)
21:07 johnmark gobble gobble
21:08 semiosis oppa is gangnam style
21:25 nueces joined #gluster
21:26 daMaestro joined #gluster
21:29 jiffe98 anyone know why, when I connect nfs to a virtual IP on one of the gluster servers rather than the main IP bound on the interface, apache would hang when I make requests to it?
21:30 jiffe98 I can list and open files on the mounted filesystem fine, and apache works fine if I mount nfs through the main IP
21:30 jiffe98 it also works if I use the gluster client using either IP
21:40 semiosis maybe something (rpcbind/portmap/gluster) isn't bound to the vip?
21:41 JoeJulian jiffe98: there's nothing that pops out as obvious.
21:41 semiosis or an iptables issue
21:41 JoeJulian dns resolution?
21:42 JoeJulian There's a bug about hangs while locally mounting nfs but iirc that was during huge write ops.
22:20 jiffe98 it works though, it mounts it and I can list and open files fine
22:20 jiffe98 only apache seems to have a problem and I'm not sure why
22:27 JoeJulian If I were trying to diagnose that, I'd probably just create one apache thread and strace to see what it's stuck on.
22:28 jiffe98 last line in strace is flock(11, LOCK_EX
22:29 JoeJulian 3.3.1, right?
22:29 JoeJulian @ports
22:29 semiosis ah, nlm
22:29 glusterbot JoeJulian: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
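
If a firewall is filtering the VIP, those ports translate to roughly these rules (a sketch, not a complete ruleset; the brick range grows by one port per brick):

    iptables -A INPUT -p tcp --dport 111 -j ACCEPT            # rpcbind/portmap
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT    # glusterd (tcp, rdma)
    iptables -A INPUT -p tcp --dport 24009:24029 -j ACCEPT    # bricks; upper bound illustrative
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT    # gluster NFS + NLM
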
22:30 bauruine joined #gluster
22:30 JoeJulian Hmm, but localhost shouldn't matter...
22:31 JoeJulian Maybe nfslock isn't started (or whatever it is on your distro)?
22:36 jiffe98 I don't see any processes related to locking but it works if I connect to the main IP vs the virtual IP
22:37 jiffe98 tcpdump shows traffic to/from the main IP even when I connect to the virtual IP
22:37 semiosis nfslock not listening on the vip?
22:37 tryggvil joined #gluster
22:38 jiffe98 netstat doesn't show any processes listening on any specific IPs, they're all on 0.0.0.0
22:38 jiffe98 a couple on localhost
22:39 JoeJulian what about rpcinfo?
22:40 JoeJulian ~pasteinfo | jiffe98
22:40 glusterbot jiffe98: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
22:45 jiffe98 JoeJulian: http://nsab.us/public/gluster
22:51 JoeJulian nsab.us <- cool! I've wanted to do something like that myself. Was thinking of using a RPi for the logic board. You might also think it's cool that I was once involved with http://liftport.com/
22:51 glusterbot Title: Liftport Group (at liftport.com)
22:56 jiffe98 that is cool, how realistic do you think that goal is?
22:57 JoeJulian very
22:57 JoeJulian The only difficulty is balancing cost with material strength.
22:59 JoeJulian If they can get CNT fiber strength over 40GPa then building a ribbon out of it with at least 30GPa should be doable and that brings the cost into a manageable range. CNT is theoretically capable of 300GPa.
23:05 jiffe98 gotcha, sweet
23:25 robo joined #gluster
23:44 daMaestro joined #gluster
23:46 tryggvil joined #gluster
