IRC log for #gluster, 2013-04-03

All times shown according to UTC.

Time Nick Message
00:02 JoeJulian Have to check the logs. Should say in the log for glusterd
00:52 hagarth joined #gluster
00:58 yinyin joined #gluster
01:00 bala joined #gluster
01:02 the-me joined #gluster
01:04 jules_ joined #gluster
01:17 yinyin_ joined #gluster
01:30 eightyeight how can i calculate the expected size of the volume when more peers are added?
01:31 eightyeight suppose i'm doing distributed replicate, with replication=2. and each peer has 2 bricks, where each brick is 10 GB in size
01:31 eightyeight if only 2 peers are in the volume, then i would expect the volume size to be 10 GB total
01:31 eightyeight but what if there are 3 peers?
01:31 eightyeight should it be 20 GB?
01:31 eightyeight or 15GB?
01:31 JoeJulian Each peer has 20Gb, so each replica of peers would add 20Gb.
01:31 eightyeight it isn't clear to me
01:32 eightyeight but if replication = 2, wouldn't each peer effectively bring only 10 GB?
01:32 eightyeight rather than 20?
01:32 JoeJulian If you're adding one peer with that configuration then you're adding 2x10Gb bricks (1 replica set) so you're increasing the volume 10Gb.
01:33 JoeJulian Right. My first statement implies the most efficient method of expansion, adding 2 peers.
01:34 eightyeight ok
01:35 eightyeight so, how large would 3 peers be then?
01:35 eightyeight 30 GB
01:35 eightyeight gah
01:35 JoeJulian B/r where B=Brick and r=replica count.
01:35 JoeJulian So for 6 10Gb bricks, 6*10/2 = 30.
01:36 JoeJulian I guess that should be N*B/r...
01:36 JoeJulian where N is the number of bricks.
01:37 JoeJulian Though if you're telling someone else about your storage at a pub, it's (N*B/r)^p (where p = pints consumed)
01:38 JoeJulian Just be aware, that talking about your storage at the pub is not a good way to pick up dates.
01:38 eightyeight heh
01:38 eightyeight noted
01:39 JoeJulian Alright... enough of my nonsense. I'm going home.
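
A quick way to sanity-check the capacity formula above, as a minimal shell sketch (the numbers are the 6-brick, 10 GB, replica-2 example from the exchange):

    # usable capacity = (number of bricks * brick size) / replica count, i.e. N*B/r
    echo $(( 6 * 10 / 2 ))    # prints 30 (GB)
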
01:40 eightyeight ok. if that's the case, then i shouldn't have more than 500 GB, and i have 560 GB total.
01:41 eightyeight something is amiss, although my setup is slightly different
01:41 eightyeight in my case, the 2 bricks are sharing the same 250 GB backend. so, effectively, each brick is 125 GB
01:41 eightyeight there are 8 bricks in my cluster, so 8*125/2 = 500
01:42 eightyeight yet, i have 560 GB usable
01:42 JoeJulian Each brick isn't going to know it's only supposed to use 125, so they're both going to report 250.
01:43 JoeJulian If you want it to report correctly, you'll have to partition that 250.
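
A minimal sketch of the partitioning JoeJulian suggests, so that each brick sits on its own correctly sized filesystem. The use of LVM, the volume group and logical volume names, and the choice of XFS are assumptions, not taken from the log:

    # carve the shared 250 GB backend into two 125 GB brick filesystems
    lvcreate -L 125G -n brick1 vg_bricks
    lvcreate -L 125G -n brick2 vg_bricks
    mkfs.xfs /dev/vg_bricks/brick1
    mkfs.xfs /dev/vg_bricks/brick2
    # mount each under its own brick directory so every brick reports 125 GB
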
01:47 robos joined #gluster
02:09 yinyin joined #gluster
02:31 Bullardo joined #gluster
02:35 DWSR joined #gluster
02:35 DWSR joined #gluster
03:04 nueces_ joined #gluster
03:06 fidevo joined #gluster
03:09 hagarth joined #gluster
03:26 vshankar joined #gluster
03:35 portante joined #gluster
03:37 bala1 joined #gluster
03:53 sgowda joined #gluster
04:13 bala1 joined #gluster
04:15 sripathi joined #gluster
04:18 yinyin joined #gluster
04:20 anmol joined #gluster
04:25 mohankumar joined #gluster
04:30 Bullardo joined #gluster
04:30 juan_ joined #gluster
04:30 JuanBre Hi, Im having a problem
04:30 JuanBre I tried to move the gluster export brick by moving all the contents of the directory to a new directory and then creating a symbolic link to the new folder...
04:31 JuanBre I rebooted and mounted the gluster volume but only very few files appear...
04:31 JuanBre I am using gluster v3.4 in a 2 stripe 2 replica environment
04:31 JuanBre any advice in how to move on?
04:32 shylesh joined #gluster
04:35 bulde joined #gluster
04:46 hagarth joined #gluster
04:55 Han joined #gluster
04:56 eightyeight joined #gluster
04:58 deepakcs joined #gluster
05:03 yinyin joined #gluster
05:13 aravindavk joined #gluster
05:20 saurabh joined #gluster
05:21 puebele joined #gluster
05:31 yinyin joined #gluster
05:34 raghu joined #gluster
05:36 vpshastry joined #gluster
05:38 vshankar joined #gluster
05:42 bharata joined #gluster
06:01 sripathi joined #gluster
06:14 ollivera joined #gluster
06:17 satheesh joined #gluster
06:17 jtux joined #gluster
06:25 glusterbot New news from resolvedglusterbugs: [Bug 852224] Crash in fuse_thread_proc with kernel-3.5.2-3.fc17.x86_64 <http://goo.gl/jOste>
06:25 hagarth joined #gluster
06:31 rastar joined #gluster
06:33 vpshastry joined #gluster
06:38 Nevan joined #gluster
06:40 kshlm joined #gluster
06:45 ricky-ticky joined #gluster
06:46 lalatenduM joined #gluster
06:56 ctria joined #gluster
07:03 sripathi joined #gluster
07:17 hybrid512 joined #gluster
07:19 anmol joined #gluster
07:21 vimal joined #gluster
07:28 sripathi joined #gluster
07:29 NuxRo JuanBre: if you need to use another brick you should follow the "replace brick" procedure
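
For reference, the replace-brick procedure NuxRo refers to looks roughly like this in the 3.3/3.4 era (volume name and brick paths below are placeholders):

    gluster volume replace-brick VOLNAME oldserver:/old/brick newserver:/new/brick start
    gluster volume replace-brick VOLNAME oldserver:/old/brick newserver:/new/brick status
    # once the status shows the migration is complete:
    gluster volume replace-brick VOLNAME oldserver:/old/brick newserver:/new/brick commit
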
07:30 NuxRo some random "benchmark" on 3.4git http://fpaste.org/we70/ , surprised the deletion took twice as long
07:30 glusterbot Title: Viewing Paste #289144 (at fpaste.org)
07:33 ujjain joined #gluster
07:34 ngoswami joined #gluster
07:47 sripathi joined #gluster
07:48 andreask joined #gluster
07:50 camel1cz joined #gluster
07:50 camel1cz left #gluster
08:00 Norky joined #gluster
08:17 mooperd joined #gluster
08:17 RobertLaptop joined #gluster
08:19 hagarth joined #gluster
08:34 vpshastry joined #gluster
08:38 camel1cz joined #gluster
08:46 tryggvil joined #gluster
08:48 sripathi1 joined #gluster
08:57 glusterbot New news from newglusterbugs: [Bug 947774] [FEAT] Display additional information when geo-replication status command is executed <http://goo.gl/Bpg3O>
09:02 rotbeard joined #gluster
09:34 sripathi joined #gluster
09:38 rosmo joined #gluster
09:51 jtux joined #gluster
09:54 edward1 joined #gluster
09:59 verdurin joined #gluster
10:03 sripathi joined #gluster
10:04 camel1cz left #gluster
10:16 manik joined #gluster
10:19 hagarth @channelstats
10:19 glusterbot hagarth: On #gluster there have been 107793 messages, containing 4635709 characters, 779291 words, 3199 smileys, and 398 frowns; 702 of those messages were ACTIONs. There have been 39408 joins, 1274 parts, 38155 quits, 15 kicks, 115 mode changes, and 5 topic changes. There are currently 205 users and the channel has peaked at 217 users.
10:22 x4rlos lets make the smilies a round number :-)
10:22 hagarth @channelstats
10:22 glusterbot hagarth: On #gluster there have been 107796 messages, containing 4636100 characters, 779355 words, 3200 smileys, and 398 frowns; 702 of those messages were ACTIONs. There have been 39408 joins, 1274 parts, 38155 quits, 15 kicks, 115 mode changes, and 5 topic changes. There are currently 205 users and the channel has peaked at 217 users.
10:22 hagarth there you go :)
10:23 ndevos lol
10:27 test__ joined #gluster
10:47 sripathi1 joined #gluster
11:02 sripathi joined #gluster
11:03 sripathi joined #gluster
11:21 sripathi joined #gluster
11:22 andreask joined #gluster
11:26 bharata joined #gluster
11:27 H__ tip: when a replace-brick to a freshly formatted brick won't start on 3.3.1 because of broken replace-brick leftovers from a 3.2.5 upgrade, "rm rbstate rb_dst_brick.vol" and restart all glusterd's. Check that rbstate now says rb_status=0
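
Spelled out as a sketch (the /var/lib/glusterd location is an assumption, typical for 3.3 installs; run on each affected server):

    cd /var/lib/glusterd/vols/VOLNAME
    rm rbstate rb_dst_brick.vol
    service glusterd restart      # restart glusterd on all peers
    grep rb_status rbstate        # should now report rb_status=0
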
11:28 glusterbot New news from newglusterbugs: [Bug 947824] process is getting crashed during dict_unserialize <http://goo.gl/bTvo4> || [Bug 918917] 3.4 Beta1 Tracker <http://goo.gl/xL9yF>
11:36 mkarg_ joined #gluster
11:39 noche joined #gluster
11:50 rcheleguini joined #gluster
11:51 rcheleguini joined #gluster
11:51 piotrektt_ joined #gluster
11:53 piotrektt___ joined #gluster
11:56 dustint joined #gluster
11:59 plarsen joined #gluster
12:13 vpshastry joined #gluster
12:23 hflai joined #gluster
12:43 rastar joined #gluster
12:45 aliguori joined #gluster
12:49 robo joined #gluster
12:51 lalatenduM joined #gluster
13:00 bulde joined #gluster
13:00 duerF joined #gluster
13:13 theron joined #gluster
13:18 vpshastry joined #gluster
13:20 lh joined #gluster
13:20 lh joined #gluster
13:22 hagarth joined #gluster
13:40 puebele3 joined #gluster
13:48 Chinorro joined #gluster
13:48 hflai joined #gluster
13:49 rwheeler joined #gluster
13:58 bennyturns joined #gluster
13:59 mohankumar joined #gluster
14:04 lpabon joined #gluster
14:09 lpabon joined #gluster
14:15 bugs_ joined #gluster
14:25 bala joined #gluster
14:27 camel1cz joined #gluster
14:28 xavih has anyone suffered this XFS error ? kernel: XFS (sdc): xfs_iunlink_remove: xfs_inotobp() returned error 22
14:28 xavih on CentOS 6.3
14:29 xavih after that some other errors are reported and the FS is shutdown
14:30 xavih it needs to be unmounted and remounted to access it again
14:32 rastar joined #gluster
14:35 VSpike left #gluster
14:42 morse joined #gluster
14:43 bennyturns joined #gluster
14:44 disarone joined #gluster
14:59 daMaestro joined #gluster
15:00 johnmorr johnmark: looks like that brick restart bug (https://bugzilla.redhat.com/show_bug.cgi?id=902953#c3) also affects the glusterfs nfs server :-/
15:00 glusterbot <http://goo.gl/bNh3n> (at bugzilla.redhat.com)
15:00 glusterbot Bug 902953: high, unspecified, ---, rgowdapp, NEW , Clients return ENOTCONN or EINVAL after restarting brick servers in quick succession
15:01 johnmorr johnmark: i didn't kill the glusterfs process that went off in the weeds if there's anything you want me to look at
15:01 bennyturns joined #gluster
15:05 wmp joined #gluster
15:06 wmp hello, i found a mistake on this page: http://www.gluster.org/community/documentation/index.php/Getting_started_rrqsg look at the mount command under: Mount the partition as a Gluster "brick"
15:06 glusterbot <http://goo.gl/uMyDV> (at www.gluster.org)
15:09 lpabon I saw one in the "Add an entry to /etc/fstab" and just fixed it. I don't see the one you mean, though
15:17 plarsen joined #gluster
15:36 klaxa_ joined #gluster
15:37 joeto joined #gluster
15:41 bala joined #gluster
15:41 lalatenduM joined #gluster
15:45 Bullardo joined #gluster
15:48 mjrosenb joined #gluster
15:48 mjrosenb hey, I just built my own glusterfs-3.3.0 on a dreamplug, and attempting to mount it gave:
15:48 mjrosenb /usr/local/sbin/glusterfs: symbol lookup error: /usr/lib/libgfrpc.so.0: undefined symbol: gf_log_xl_log_set
15:49 NeatBasis joined #gluster
15:51 wmp lpabon: look:   mkdir -p /export/brick1 && mount /dev/sdb /export/brick1
15:52 mjrosenb oh, it looks like that only happens when I use mount -t glusterfs
15:52 wmp lpabon: mount /dev/sdb ?
15:52 wmp lpabon: no /dev/sdb1 ?
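
The line wmp is pointing at; assuming the guide has you partition the disk first, the corrected command would presumably mount the partition rather than the whole device:

    mkdir -p /export/brick1 && mount /dev/sdb1 /export/brick1
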
15:52 Norky mjrosenb, using the dreamplug (debian ARM system?) as a client or server?
15:53 mjrosenb Norky: as a client.
15:53 mjrosenb so glusterfs --volfile-server=memoryalpha --volfile-id=data /data worked
15:53 Norky hmm
15:53 mjrosenb but  |mount -t glusterfs memoryalpha:/data /data| failed with that error message
15:54 mjrosenb oh, it *says* it worked, but I don't see anything on /data
15:54 mjrosenb well, it didn't give an error message
15:55 Norky grep fuse /proc/mounts    says what?
15:55 mjrosenb root@dreamplug-debian:/usr/lib# grep fuse /proc/mounts
15:55 mjrosenb fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
15:55 mjrosenb yeah, didn't work.
15:55 Norky indeed
15:56 Norky you should have "memoryalpha:/data /data fuse.glusterfs ....." in that file
15:57 mjrosenb [2013-03-18 11:42:22.713520] E [glusterfsd-mgmt.c:1555:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
15:57 mjrosenb well, that could do it.
15:57 Norky you do have the fuse library and devel files (.h) installed, yes?
15:58 Norky actually, n/m what I just said, it's breaking before FUSE is involved I think
15:59 Norky check the build logs for reasons why you're missing that symbol
15:59 Norky I don't have any more useful advice really, sorry
16:01 mjrosenb Norky: but when I run glusterfs on my own, it doesn't seem to be having that issue.
16:01 mjrosenb oh
16:01 mjrosenb I installed glusterfs from apt
16:01 mjrosenb let me undo that
16:08 badone_ joined #gluster
16:11 hflai joined #gluster
16:13 mjrosenb ok, I uninstalled the glusterfs-client that I'd installed with apt
16:13 mjrosenb and now mount -t glusterfs says it doesn't know about the filesystem type glusterfs.
16:13 * mjrosenb re- make installs
16:21 ehg joined #gluster
16:22 hagarth joined #gluster
16:34 Mo_ joined #gluster
16:42 lpabon wmp: fixed it. thanks for the info
16:42 wmp lpabon: no problem ;)
16:46 andreask joined #gluster
16:48 mjrosenb ok, that worked.
16:48 mjrosenb my problems were entirely self-inflicted.
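
A quick way to spot this kind of mixed-install conflict (a sketch; the paths match the ones in the error above):

    ldd /usr/local/sbin/glusterfs | grep libgfrpc   # which libgfrpc the binary actually loads
    dpkg -S /usr/lib/libgfrpc.so.0                  # which apt package, if any, owns that library
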
16:52 shylesh joined #gluster
16:58 csu joined #gluster
16:58 csu hi
16:58 glusterbot csu: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:58 csu i'm using ovirt 3.2 with gluster 3.4 alpha 2, and am trying to create a new gluster volume via the web interface. how long should the creation process take?
16:58 csu creating the same volume outside of the web interface using gluster directly takes seconds
16:59 martoss joined #gluster
16:59 martoss hey folks, I am experiencing a problem on a volume with bricks on little and big-endian machines.
17:00 martoss glusterd-utils.c:2147:glusterd_compare_friend_volume] 0-: Cksums of volume intranet differ. local cksum = 1939930268, remote cksum = -1728160709
17:00 semiosis martoss: what brick format?
17:00 semiosis ext4? xfs? other?
17:01 martoss xfs on powerpc and ext4 on amd64
17:01 martoss do the brick underlying formats have to be the same?
17:02 semiosis no, i migrated from ext4 to xfs and had a mix for a couple weeks without problems
17:03 semiosis but that was all x86 arch
17:07 martoss What is puzzling me is that I can't see where the conversion is made, e.g. in glusterd-utils.c:1918
17:24 zaitcev joined #gluster
17:26 bulde joined #gluster
17:26 Mo_ joined #gluster
17:31 wmp left #gluster
17:35 bulde joined #gluster
17:35 rwheeler joined #gluster
17:46 camel1cz left #gluster
17:47 johnmorr < johnmorr> johnmark: i didn't kill the glusterfs process that went off in the weeds if there's anything you want me to look at
17:48 johnmorr er, herp, that was jdarcy i was talking with originally
17:55 shylesh joined #gluster
17:59 puebele joined #gluster
18:04 robos joined #gluster
18:07 mjrosenb johnmorr: yeah, It is probably a bad thing that irc clients are willing to tab complete to $USER
18:23 Supermathie joined #gluster
18:35 robos joined #gluster
18:42 ricky-ticky joined #gluster
18:56 Supermathie I've stumbled across an odd problem with GlusterFS. My setup: 2 x RHEL server with a bunch of SSDs (each is a brick) replicating data between them. Single volume (gv0) exported via gluster's internal NFS.
18:57 Supermathie It's mounted on 4 identical servers, into which I regularly clusterssh. If I mount gv0 and do an ls -al inside a directory from all 4 clients at the same time, I get inconsistent results.
18:57 Supermathie Some clients see all the files, some are missing a file or two, some have duplicates (!) in the listing.
18:57 Supermathie stat() finds the missing files, but ls still doesn't see them.
18:58 Supermathie If I remount and list the directory in series instead of in parallel, everything looks good.
18:58 jclift joined #gluster
18:59 tryggvil_ joined #gluster
19:03 samppah Supermathie: what glusterfs version you are using and are you using locking with nfs or do you mount with nolock?
19:06 Supermathie glusterfs-3.3.1-1.el6.x86_64 from EPEL, mount options are: rw,sync,tcp,wsize=8192,rsize=8192 (I added sync after I noticed the weird behaviour)
19:07 samppah ok.. all nodes are mounting remote directory and not using localhost to mount?
19:07 samppah @latest
19:07 glusterbot samppah: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
19:07 Supermathie lockd is running on clients, I presume gluster has an internal lockd
19:08 samppah hmm
19:08 samppah @yum repo
19:08 glusterbot samppah: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
19:09 andreask joined #gluster
19:09 Supermathie samppah: you mean mounting local glusterfs as a localhost nfs client? None of the 4 NFS clients in this setup are participating in the gluster (in the gluster cluster? :) )
19:09 samppah Supermathie: yes and ok, that's good :)
19:10 samppah i'm not very familiar with gluster nfs solution nor issues it may have
19:11 samppah however there are newer packages available at ,,(yum repo)
19:11 glusterbot kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
19:12 samppah brb
19:12 Supermathie samppah: Can reproduce it at-will: http://imgur.com/JU8AFrt
19:12 glusterbot Title: Odd GlusterFS problem - Imgur (at imgur.com)
19:15 Chiku|dc Supermathie, when you do ls -al | md5sum, you do md5sum on the ls text output?
19:15 Supermathie Yeah
19:15 Supermathie Just gives me a quick way of noting which hosts are different.
19:16 Supermathie For instance, saveservice.properties is missing from fleming1
19:16 Supermathie [michael@fleming1 bin]$ ls -al saveservice.properties
19:16 Supermathie -rw-r--r-- 1 michael users 22186 Jan 24 06:21 saveservice.properties
19:16 Supermathie [michael@fleming1 bin]$ ls -al | grep saveservice.properties
19:16 Supermathie (no result)
19:16 Supermathie and mirror-server.sh missing from directory listing on fleming4
19:16 Supermathie (and httpclient.parameters)
19:19 Chiku|dc Supermathie, 4 servers with replica 4 ?
19:19 Chiku|dc or replica 2 ?
19:20 Supermathie These four servers are NFS clients only. NFS server is two servers with replica 2
19:20 Chiku|dc oh ok
19:21 Supermathie If I unmount (clear cache) and remount, and ls -al again, I get different results (different files missing on different servers).
19:21 Supermathie If I ls -al one at a time on each client, everything's OK.
19:21 Chiku|dc what about gluster client ?
19:21 Chiku|dc mount glusterfs client
19:22 stickyboy joined #gluster
19:23 stickyboy Anyone use GlusterFS FUSE on Mac OS X?  I see some old blog posts and stuff about it, but thought I'd check here.
19:24 JoeJulian I thought Mach didn't have fuse...
19:25 stickyboy JoeJulian: http://osxfuse.github.com/
19:25 glusterbot Title: Home - FUSE for OS X (at osxfuse.github.com)
19:25 stickyboy It's been touch and go over the years I think.  I stopped paying attention when I left Mac OS X for Linux a few years ago.
19:25 JoeJulian I stopped using Macs in 1989...
19:26 tryggvil__ joined #gluster
19:26 stickyboy JoeJulian: :P
19:26 stickyboy JoeJulian: If I hadn't seen your picture on Twitter I'd ask how long your beard was.
19:26 JoeJulian lol
19:27 stickyboy You're not a Richard Stallman type ;)
19:27 Supermathie Chiku|dc: clients are RHEL5… is glusterfs-client.x86_64 0:2.0.9-2.el5 going to be happy with a 3.3.1 server? probably not… :)
19:27 JoeJulian Not quite... :D
19:27 JoeJulian Supermathie: Lol!!!!!
19:27 JoeJulian Supermathie: no.
19:28 Supermathie grabbing the 3.3.1 from kkeithle's repo :)
19:31 jdarcy joined #gluster
19:33 Supermathie Chiku|dc: mounting as glusterfs client yields consistent correct results
19:33 JoeJulian What filesystem are your bricks?
19:34 Supermathie ext4
19:34 JoeJulian bingo
19:35 Supermathie Oh wait, right, *that* problem… ext4 with dir_index turned off
19:35 JoeJulian I suspect the same "cookie" problem that's been the focus around the ,,(ext4) problem is what you're seeing with nfs.
19:35 glusterbot Read about the ext4 problem at http://goo.gl/PEBQU
19:35 JoeJulian Something about the cookie being inconsistent between calls.
19:36 stickyboy I even knew about the ext4 problem, and I was still bit by it deploying GlusterFS last month. :D
19:36 Supermathie I encountered the dir_index problem right off the bat (NOTHING worked) and it was fine after turning off dir_index on each brick filesystem
19:37 JoeJulian And you thought I was going to just blindly point fingers... Granted, I don't fully understand the problem, but turning off the dir_index was a workaround to prevent the endless loop. I don't /think/ it solves the inconsistent cookie thing.
19:37 Supermathie First I heard about an inconsistent cookie… reading
19:38 JoeJulian Check the gluster-devel mailing list. Look for the threads with Bernd and Theodore.
19:40 jdarcy ... and me, and Avati, and Zach, and Eric, and ... ;)
19:40 jdarcy Long thread.
19:40 Supermathie Running the d_off test against the brick dir returns:
19:42 JoeJulian jdarcy: Do I have the essence of the problem correct, though? Inconsistent directory listings via nfs mount from an ext4 filesystem?
19:42 camel1cz joined #gluster
19:43 Supermathie http://pastebin.com/zQt5gZnZ
19:43 camel1cz joined #gluster
19:43 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
19:43 camel1cz left #gluster
19:43 Supermathie (all 32-bit values)
19:45 Supermathie JoeJulian: The odd thing about it is that it *is* consistent… the bug tickle seems to be doing it from 4 clients in parallel. Makes me think something about the request processing at the server is getting confused.
19:45 JoeJulian That's why I leaned toward that being the problem.
19:45 JoeJulian I could be wrong though.
19:46 JoeJulian Try a similar test with xfs and see if it's close.
19:46 Supermathie https://bugzilla.redhat.com/show_bug.cgi?id=838784#c14
19:47 glusterbot <http://goo.gl/wBHbB> (at bugzilla.redhat.com)
19:47 glusterbot Bug 838784: high, high, ---, sgowda, POST , DHT: readdirp goes into a infinite loop with ext4
19:47 Supermathie I will - I'm in the midst of testing out Oracle doing DNFS to Gluster. Going to be trying out a few different configs and brick filesystems.
19:49 jdarcy Inconsistent directory listings seem like they might be the hash-collision problem that the ext4 "fix" was trying to address.
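
To check what filesystem a brick is on, and what the usual XFS alternative looks like (brick path and device name are placeholders; the larger inode size is the commonly recommended setting for Gluster bricks):

    df -T /export/brick1            # shows the brick's filesystem type
    mkfs.xfs -i size=512 /dev/sdX   # typical format for a new XFS brick
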
19:49 piotrektt_ joined #gluster
19:57 Supermathie Whoah… this time, . and .. are missing from one server, and another has them listed twice. And the latter also has 'examples' twice.
19:58 Supermathie (,,mailinglist)
20:14 awayne joined #gluster
20:16 awayne Hi all. New gluster user here, looking into UFO.
20:16 ricky-ticky joined #gluster
20:17 lpabon awayne: Same here, very new to it, still reading about it.  Hopefully I'll be able to help answer some of the questions and some of the bugs in the coming weeks
20:17 awayne Is anyone familiar with the "async_pending" folder and what its purpose is? We noticed that when we turn up the request volume on UFO this folder shows up, but I haven't been able to pinpoint exactly what its purpose is and whether it indicates any sort of communication problem(s) between the gluster nodes.
20:18 Nicolas_Leonidas joined #gluster
20:18 Nicolas_Leonidas hi, one of our servers was dead for a night
20:18 Nicolas_Leonidas now when I do gluster peer status it says peer in cluster (Disconnected)
20:18 Nicolas_Leonidas what can I do to trouble shoot this?
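
A generic first-pass checklist for a peer that shows Disconnected (the log path and port are assumptions typical of a 3.3-era install):

    service glusterd status                                  # is glusterd running on the recovered node?
    service glusterd start
    gluster peer status                                      # re-check from both sides
    tail /var/log/glusterfs/etc-glusterfs-glusterd.vol.log   # look for connection errors
    telnet OTHER_PEER 24007                                  # is the management port reachable?
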
20:26 yosafbridge joined #gluster
20:30 glusterbot New news from newglusterbugs: [Bug 948039] rebase to grizzly <http://goo.gl/NK24m> || [Bug 948041] rebase to grizzly <http://goo.gl/77lzU>
20:31 tryggvil__ joined #gluster
20:42 portante awayne: async_pending is a directory created by the openstack swift code when the object server is not able to connect to the container server to notify it of a PUT or DELETE
20:44 portante This behavior of swift is not needed by gluster, but is not currently turned off by it either, and so the directory will appear once the container server starts to respond slowly to requests (under some kind of load)
20:45 awayne I was just realizing that it seems to be something from the swift side, not gluster/UFO. We're doing some very limited load testing, but currently our async_pending count is well into the 1,000's
20:45 portante You can safely delete those directories, I believe, they are ignored by the gluster code, and a future update to the gluster / swift integration will make all that go away
20:45 awayne Should I just cron to delete them?
20:45 portante for now, yes
20:46 portante And file a bug with what you are seeing so that we can be sure to track and address it
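
A possible cron-based cleanup along the lines portante suggests (the async_pending path is an assumption; adjust it to wherever the object server keeps its devices):

    # hourly crontab entry: remove async_pending entries older than an hour
    0 * * * * find /mnt/gluster-object/*/async_pending -type f -mmin +60 -delete
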
20:47 awayne This leads to another great question. We're seeing a good number of 503's (mostly from timeouts). Granted our setup is on an over-committed VMWare VC cluster. Is there any guidance on tuning?
20:48 portante what version are you using?
20:48 portante 3.4?
20:48 portante 3.3?
20:48 portante Is it RHS from Red Hat?
20:48 portante community version?
20:48 awayne CentOS running GFS 3.3.1
20:48 portante What are your proxy node_timeouts set to?
20:48 awayne And our UFO install is from kkeithle's distro
20:49 awayne The message log says (Timeout 10s)
20:49 portante That is too short, try bumping it up to about 60 to give more head room
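
The timeout portante mentions is the Swift proxy's node_timeout; a sketch of the change (the config path is assumed, the option name is standard Swift):

    # /etc/swift/proxy-server.conf
    [app:proxy-server]
    node_timeout = 60
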
20:49 portante And how many containers and total files on the volume?
20:51 awayne 10 containers, 10,000 objects
20:51 jthorne joined #gluster
20:51 portante 1ge or 10ge links?
20:51 portante what is your URL pattern?
20:52 portante Do you perform any container or account GETs or HEADs?
20:52 awayne 1 gig link in the lab (I can force them to the same host for even faster speed)
20:52 portante how 'bout between nodes, same?
20:52 portante between gluster noddes
20:52 portante nodes
20:52 awayne Yeah, all 2x2 volume, all 4 nodes on the same VC cluster
20:53 awayne Our load testing is pushing about 4000 2MB files, 20 workers.
20:54 awayne We're not seeing a large number of timeouts, but occasional ones, which I guess is what's leading to the async pending files.
20:54 portante how many workers for object and container?
20:55 awayne Upload, all files are going into the same container. Download, same in reverse -- 20 workers working down the 4000 2MB files. The GET/HEAD requests don't seem to suffer from any timeouts. Just PUT requests on objects.
20:56 portante yes, there are bugs open on that. :(
20:57 portante happily they are fixed, and we are working to get them upstream
20:57 awayne What's the guidance on where to run UFO? Is it recommended to run on separate servers? Run with the gluster nodes?
20:58 portante it depends on a few factors, and we have not heavily tested all the pros and cons, but for now run on the gluster nodes since you have extra compute cycles
21:03 awayne UFO has been a godsend. We had a great need for a reliable, redundant file store and chose gluster but realized we might have a problem if all the nodes in a cluster alternately went down.
21:03 awayne I'm not exactly sure how, but UFO detects that situation and realizes that the file stored doesn't match the uploaded file.
21:06 disarone joined #gluster
21:13 badone joined #gluster
21:33 Supermathie I am seeing *tons* of these in my volumename.log (one every 3s) - any ideas what it is?
21:33 Supermathie [2013-04-03 12:04:21.982542] I [client.c:2090:client_rpc_notify] 0-gv0-client-5: disconnected
21:36 Supermathie joined #gluster
21:36 daMaestro joined #gluster
21:41 jthorne Supermathie is "gluster peer status" reporting a disconnected node?
21:41 Supermathie Nope the nodes are happy
21:45 Supermathie will be on probably tomorrow 9AM EST, or msg me, or email me (I emailed the dev list)
21:45 Supermathie cheers
22:16 ninkotech_ joined #gluster
23:15 stopbit joined #gluster
23:26 neofob left #gluster
23:37 vigia joined #gluster
23:44 tryggvil__ joined #gluster
23:51 a2 supermathie is gone?
