
IRC log for #gluster, 2013-05-01


All times shown according to UTC.

Time Nick Message
00:04 bchilds joined #gluster
00:13 nueces joined #gluster
00:29 nickw joined #gluster
00:44 bchilds joined #gluster
00:46 bala joined #gluster
01:52 jag3773 joined #gluster
02:10 y4m4 joined #gluster
02:24 portante|ltp joined #gluster
02:34 bchilds joined #gluster
02:47 jbrooks joined #gluster
02:58 jbrooks joined #gluster
03:21 glusterbot New news from newglusterbugs: [Bug 957877] Self-heal of user extended attributes does not happen <http://goo.gl/mKYDp>
03:24 bchilds joined #gluster
04:23 thomasle_ joined #gluster
04:27 bulde joined #gluster
04:29 jikz joined #gluster
04:34 arusso joined #gluster
04:34 bchilds joined #gluster
04:35 hagarth joined #gluster
04:42 yinyin joined #gluster
04:54 bchilds joined #gluster
04:57 mohankumar joined #gluster
05:15 bchilds joined #gluster
05:47 yinyin joined #gluster
06:05 primusinterpares joined #gluster
06:06 hagarth joined #gluster
06:15 bchilds joined #gluster
06:30 juhaj joined #gluster
06:42 vimal joined #gluster
06:43 vigia joined #gluster
06:49 piotrektt_ joined #gluster
06:55 ekuric joined #gluster
07:03 satheesh joined #gluster
07:05 bchilds joined #gluster
07:19 ctria joined #gluster
07:32 yinyin joined #gluster
07:35 bchilds joined #gluster
08:16 rb2k joined #gluster
08:35 bchilds joined #gluster
08:48 jalsot joined #gluster
09:01 yinyin joined #gluster
09:15 bchilds joined #gluster
09:32 tziOm joined #gluster
09:32 tziOm Does glusterfs have support for ndmp ?
09:32 ujjain joined #gluster
09:33 ndevos I doubt that. Whats ndmp?
09:38 tziOm http://bit.ly/ZTXPA8
09:38 glusterbot Title: Let me google that for you (at bit.ly)
09:39 ndevos it does not natively, but it could work if you have a ndmp service/server that works with other filesystems
09:41 tziOm Seems like someone is working on it...
09:41 tziOm was mentioned in 2009 I think
09:42 lh joined #gluster
09:42 lh joined #gluster
09:42 tziOm then I found this: https://github.com/shishirng/ndmp
09:42 glusterbot Title: shishirng/ndmp · GitHub (at github.com)
09:45 ndevos thats sgowda, one of the glusterfs devs - its a holiday in india so you may want to look for him tomorrow
09:45 bchilds joined #gluster
09:48 Jippi joined #gluster
09:52 fs_hack joined #gluster
09:55 bchilds joined #gluster
09:55 jikz joined #gluster
09:56 jikz hello all
09:56 jikz was installing glusterfs 3.3.1 in 2 of our servers
09:56 jikz wanted a replicated setup
09:56 jikz gluster probe worked
09:56 jikz but when doing the gluster volume create,
09:56 jikz we got an error "Operation failed on web2"
09:56 jikz when i reissue the command i get the error, /data/webexports or a prefix of it is already part of a volume
09:56 glusterbot jikz: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
09:58 jikz why did the error come, "Operation failed on web2"
09:58 jikz ?
09:58 jikz there is no firewall block between 2 servers as probe worked.
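
The cleanup behind glusterbot's link amounts to removing the GlusterFS metadata that a previous or half-finished volume create left on the brick directory. A minimal sketch for the path from the error, assuming GlusterFS 3.3 and that nothing on that brick needs to keep its old volume metadata:

    # run on the server that reports the error, against the brick path itself
    setfattr -x trusted.glusterfs.volume-id /data/webexports
    setfattr -x trusted.gfid /data/webexports
    rm -rf /data/webexports/.glusterfs
    # then restart glusterd on that server and retry the volume create
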
10:01 yinyin joined #gluster
10:50 glusterbot New news from resolvedglusterbugs: [Bug 921024] Build process not aware of --prefix directory <http://goo.gl/uUplT>
10:56 pithagorians joined #gluster
10:57 pithagorians hi all. getting https://gist.github.com/anonymous/5494705 in logs. what does it mean ?
10:57 glusterbot Title: gist:5494705 (at gist.github.com)
10:59 pithagorians same for https://gist.github.com/anonymous/5494720
10:59 glusterbot Title: gist:5494720 (at gist.github.com)
11:00 pithagorians and https://gist.github.com/anonymous/5494725
11:00 glusterbot Title: gist:5494725 (at gist.github.com)
11:01 pithagorians and one of the replica cpu went to ~ 80% load
11:05 bchilds joined #gluster
11:42 nicolasw joined #gluster
11:49 chirino joined #gluster
11:55 bchilds joined #gluster
12:01 edward1 joined #gluster
12:02 yinyin joined #gluster
12:05 cw joined #gluster
12:05 bchilds joined #gluster
12:22 tziOm joined #gluster
12:36 bchilds joined #gluster
12:38 NuxRo Hi, anyone knows whatever happened to this feature and how to use it? http://freebsdfoundation.blogspot.co.uk/2012/03/new-funded-project-grow-mounted.html
12:38 glusterbot <http://goo.gl/wKoY7> (at freebsdfoundation.blogspot.co.uk)
12:46 aliguori joined #gluster
12:47 NuxRo ops, wrong channel
12:50 elyograg joined #gluster
12:52 bulde joined #gluster
12:53 jclift joined #gluster
12:53 dustint joined #gluster
13:03 yinyin joined #gluster
13:07 tjikkun joined #gluster
13:07 tjikkun joined #gluster
13:27 vikumar joined #gluster
13:36 bchilds joined #gluster
13:47 bennyturns joined #gluster
14:06 bchilds joined #gluster
14:16 bchilds joined #gluster
14:22 Supermathie a2: Yeah, nobody seemed to have an answer on what to do for the stripe-size issue
14:24 Supermathie OK, twice now when tarring up an IDLE gluster volume, I get something like: tar: ./common: file changed as we read it
14:24 Supermathie tar: ./common/apache-jmeter-2.9/docs: file changed as we read it
14:24 Supermathie tar: ./common/apache-jmeter-2.9/lib/ext: file changed as we read it
14:24 Supermathie tar: ./common/apache-jmeter-2.9/lib/junit: file changed as we read it
14:25 Supermathie Is that healing happening? Dates on the files on the underlying bricks may have been different prior to this.
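
One way to check whether self-heal really is touching those files, rather than inferring it from tar's warnings, is the heal-info family of commands available since 3.3 (VOLNAME is a placeholder):

    gluster volume heal VOLNAME info          # entries still queued for self-heal
    gluster volume heal VOLNAME info healed   # entries healed recently
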
14:27 TakumoKatekari joined #gluster
14:27 TakumoKatekari Hey, does anyone understand this error
14:27 TakumoKatekari "One of the bricks contain the other"
14:27 Supermathie Nope.
14:28 Supermathie Oh, darn, you beat me to it. Sounds like it thinks one of the bricks is in a subdir of the other.
14:28 TakumoKatekari they're on different servers
14:28 TakumoKatekari both mounted at /bricks/xvdd1
14:29 TakumoKatekari so I'm trying to create a volume with `replica 2 transport tcp server1:/bricks/xvdd1 localip:/bricks/xvdd1`
14:29 Ramereth joined #gluster
14:30 gbrand_ joined #gluster
14:32 Supermathie From the perspective of 'server1', is 'localip' == 'server1'?
14:34 Supermathie TakumoKatekari: You probably want to specify bricks as: server1:/bricks/xvdd1 server2:/bricks/xvdd1
14:36 bchilds joined #gluster
14:38 tqrst joined #gluster
14:42 TakumoKatekari server1 is a peer of localip
14:42 TakumoKatekari localip is the lan ip of the server im running the console one
14:42 fs_hack joined #gluster
14:44 Supermathie TakumoKatekari: Post your logs to pastie, there may be something in there
14:44 TakumoKatekari where are the logs? /var/log?
14:44 Supermathie /var/log/gluster
14:44 Supermathie Try creating it from server1, what command do you run? What's the output?
14:47 tqrst is there a clean way to lower the priority of 3.3.1's rebalance other than just going on every server, figuring out which glusterfs process has "-l /var/log/.../volname-rebalance.log" in its options and renicing those?
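
If no cleaner knob turns up, the renice approach tqrst describes can at least be scripted; a rough sketch, where the pgrep pattern is an assumption about how the rebalance process shows up in the process list:

    # run on each server; matches glusterfs processes whose command line mentions a rebalance log
    for pid in $(pgrep -f 'glusterfs.*rebalance\.log'); do
        renice -n 19 -p "$pid"
    done
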
14:49 portante joined #gluster
14:49 TakumoKatekari The command is `volume create media replica 2 transport tcp server1:/bricks/xvdd1 server2:/xvdd2`
14:49 TakumoKatekari where server1 and server2 both have IPs in 10.2.0.0/16
14:50 Supermathie TakumoKatekari: And it worked this time?
14:50 TakumoKatekari nope, same error, one of the bricks contain the other
14:50 glusterbot TakumoKatekari: Check that your peers have different UUIDs ('gluster peer status' on both). The uuid is saved in /var/lib/glusterfs/glusterd.info - that file should not exist before starting glusterd the first time. It's a common issue when servers are cloned. You can delete the /var/lib/glusterfs/peers/<uuid> file and /var/lib/glusterfs/glusterd.info, restart glusterd and peer-
14:50 glusterbot probe again.
14:51 TakumoKatekari Oh! OOH!
14:51 TakumoKatekari They are cloned
14:51 Supermathie glusterbot FTW
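
A sketch of the reset glusterbot describes for cloned servers. On most 3.3 installs the state directory is /var/lib/glusterd rather than the /var/lib/glusterfs spelled in the factoid, so check which one exists before deleting anything; the Ubuntu service name below is also an assumption:

    gluster peer status                  # first, confirm both nodes report the same UUID
    service glusterfs-server stop        # service name from the Ubuntu/PPA packaging
    rm /var/lib/glusterd/glusterd.info   # may be /var/lib/glusterfs/glusterd.info on some installs
    rm -f /var/lib/glusterd/peers/*      # drop the stale peer entries
    service glusterfs-server start
    gluster peer probe server1           # re-probe from the other node
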
14:53 Supermathie Shouldn't be long now: ./autogen.sh && ./configure --prefix=/usr/local/glusterfs --enable-debug && make -j64
14:53 TakumoKatekari I installed it from ppa
14:56 bchilds joined #gluster
14:59 Supermathie TakumoKatekari: I have... alternative needs :D
15:01 TakumoKatekari Now it says... "Host not connected" after deleting those files and then restarting glusterd
15:01 Supermathie TakumoKatekari: Did you re-probe? better clear all state first.
15:02 TakumoKatekari yeah I did re-probe
15:02 TakumoKatekari and peer status shows the peers as "accepted peer request (connected)"
15:06 bchilds joined #gluster
15:10 theron joined #gluster
15:14 tryggvil_ joined #gluster
15:15 portante|ltp joined #gluster
15:16 TakumoKatekari Hmm, this is really odd, the hosts are connected and peered, but then create volume says the peer isn't connected
15:16 bchilds joined #gluster
15:17 bugs_ joined #gluster
15:17 Supermathie better wipe all state first, start from scratch
15:17 Supermathie ==> /usr/local/glusterfs/var/log/glusterfs/glustershd.log <==
15:17 Supermathie [2013-05-01 15:17:31.828252] W [common-utils.c:2330:gf_ports_reserved] 0-glusterfs-socket:  is not a valid port identifier
15:18 Supermathie [2013-05-01 15:17:31.828781] W [socket.c:514:__socket_rwv] 0-gv0-client-1: readv failed (No data available)
15:18 Supermathie [2013-05-01 15:17:31.828821] I [client.c:2097:client_rpc_notify] 0-gv0-client-1: disconnected
15:18 Supermathie ^ any ideas? seeing this over and over
15:25 gdavis33 TakumoKatekari: Does peer status from the other node also show connected?
15:26 TakumoKatekari I got it working!
15:26 Jippi joined #gluster
15:26 bchilds joined #gluster
15:26 daMaestro joined #gluster
15:27 zaitcev joined #gluster
15:27 jbrooks joined #gluster
15:30 kkeithley FYI, I've just put new glusterfs-3.3.1-14 rpms in my fedorapeople.org repo. epel[567]; fedora-17, i386, x86_64; fedora-18 arm, armhfp. Changelog is at https://koji.fedoraproject.org/koji/buildinfo?buildID=415412
15:30 glusterbot <http://goo.gl/W7OYU> (at koji.fedoraproject.org)
15:30 kkeithley @repo 3.3
15:30 kkeithley @repo
15:30 glusterbot kkeithley: I do not know about 'repo', but I do know about these similar topics: 'repository', 'yum repo', 'yum repository', 'git repo', 'ppa repo', 'yum33 repo', 'yum3.3 repo', 'repos'
15:30 kkeithley @yum3.3 repo
15:30 glusterbot kkeithley: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
15:31 kkeithley If you don't use rdma or G4S (a.k.a. UFO) then you don't need to hurry to apply this update.
15:32 kkeithley fedora-18 i386 and x86_64 will be in the official Fedora YUM repo after passing the obligatory test period.
15:36 tqrst any timeline for 3.3.2 yet? I saw there's a QA release out.
15:38 kkeithley there will be an alpha and beta too before 3.3.2ga.
15:39 nicolasw will readdirplus be in 3.3.2?
15:40 kkeithley if you mean the ext4 fix, I believe so.
15:41 semiosis @qa releases
15:41 glusterbot semiosis: The QA releases are available at http://bits.gluster.com/pub/gluster/glusterfs/ -- RPMs in the version folders and source archives for all versions under src/
15:43 nicolasw not the fix i think, the 2nd time dir listing performance is greatly improved in 3.4 as I tested, but in 3.3.2qa3 is still slow
15:44 kkeithley johnmark (and anyone else who cares): I'm thinking that it's time to start phasing out my fedorapeople.org repo and make download.gluster.org the official YUM repo going forward. Starting with 3.4.0 and 3.3.2.
15:46 nicolasw and the underlying fs is xfs when i tested 3.4
15:47 kkeithley not sure, one sec
15:47 nicolasw ok, thx
15:50 Supermathie Oh these are some nicer numbers out of gluster:
15:51 Supermathie Device: rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
15:51 Supermathie sdi       0.00     0.00    0.00  387.67     0.00 176353.33   909.82    45.52  125.79   0.88  34.13
15:51 Supermathie sdg       0.00     0.00    0.00  843.67     0.00 384036.00   910.40   102.56  125.17   0.88  74.37
15:51 Supermathie sdc       0.00     0.00    0.00 1129.00     0.00 514378.67   911.21   142.08  125.70   0.89 100.00
15:51 Supermathie sda       0.00     0.00    0.00 1128.67     0.00 514208.00   911.18   141.89  125.74   0.89 100.00
15:53 gdavis33 and back with another question about geo-replication...
15:53 gdavis33 what do you do when it stops replicating?
15:53 gdavis33 i know i can kill the index on the slave to force a comparison on all files
15:54 gdavis33 but that seems like a last resort
15:55 gdavis33 1 master vol to 4 slaves have all not sync'd since yesterday
15:55 TakumoKatekari where can I get an exact list of ports I need open on my gluster nodes?
15:56 TakumoKatekari so I can configure my firewall as tight as possible
15:56 kkeithley nicolasw: if I'm reading the commit log correctly then the readdirplus fix is not in release-3.3.2 yet.
15:56 nicolasw kkeithley: thx
15:56 bchilds joined #gluster
15:56 nicolasw will it be ported to the 3.3.2ga?
15:56 kkeithley but there's still plenty of time for it to happen
15:59 nicolasw it's a good function for me. listing a large number of files is painful right now
15:59 TakumoKatekari What causes this Error: "XDR Decoding Error" and "failed to fetch volume file"
16:04 JoeJulian TakumoKatekari: Mismatched versions.
16:04 TakumoKatekari of? Gluster?
16:04 JoeJulian ~ports | TakumoKatekari
16:04 glusterbot TakumoKatekari: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
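
Translated into firewall terms for the earlier question, those ports would look roughly like the following iptables rules; the brick-port upper bound of 24029 is an assumption (one TCP port per brick, counting up from 24009 and never reset), so size that range to your brick count:

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (24008 only needed for rdma)
    iptables -A INPUT -p tcp --dport 24009:24029 -j ACCEPT   # brick ports, 24009 and up
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS server + NLM
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # rpcbind/portmap for NFS
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
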
16:05 JoeJulian TakumoKatekari: Yes. Sounds like a 3.2 vs 3.3 issue.
16:06 bchilds joined #gluster
16:07 TakumoKatekari but I used aptitude to install both client and server on ubuntu
16:08 Supermathie TakumoKatekari: Don't guess - check.
16:10 JoeJulian TakumoKatekari: Which is what you would get if you installed the ,,(ppa) on one but not the other.
16:10 glusterbot TakumoKatekari: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
16:11 Supermathie is compiling with --enable-debug going to slow down glusterfs significantly?
16:12 JoeJulian only one way to find out... ;)
16:14 TakumoKatekari Has anyone got experiance with using gluster on aws? more specifically replication between AZs in the same region..?
16:15 Supermathie JoeJulian: I'm untarring my data onto two mountpoints (one repl, one dist-repl) and the fuse processes are SOOOOO busy...
16:16 Supermathie 2547 root      20   0  295m  29m 3156 R 149.1  0.0  25:15.17 glusterfs
16:16 Supermathie 3029 root      20   0  342m  61m 3092 S 144.5  0.0  11:06.36 glusterfs
16:16 Supermathie Want to know who to blame :D
16:17 bchilds joined #gluster
16:17 Supermathie A better question: does --enable-debug just leave the debug symbols in, or does it turn on other codepaths?
16:23 primusinterpares joined #gluster
16:23 soukihei joined #gluster
16:24 semiosis TakumoKatekari: yes lots. whats your question?
16:24 * semiosis too busy to fill out surveys
16:25 TakumoKatekari semiosis: Replication between AZs, use the volume replication or geo-distribution?
16:26 semiosis regular replication, it is multi-master, synchronous
16:26 TakumoKatekari Because the network latency between eu-west-1a and eu-west-1c is ~1ms
16:26 TakumoKatekari good, thats what I thought
16:26 semiosis works for me
16:26 TakumoKatekari and is it possible to have the clients failover to another replica if a node dies?
16:26 semiosis but glusterfs is particularly well suited for my use case, maybe not for yours
16:26 semiosis i can tolerate that latency fine
16:27 semiosis yes fuse clients automatically fail over, see ,,(mount server)
16:27 bchilds joined #gluster
16:27 glusterbot (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrnds, or (#2) Learn more about the role played by the server specified on the mount command here: http://goo.gl/0EB1u
16:27 TakumoKatekari :D
16:27 TakumoKatekari thats brilliant.
16:27 TakumoKatekari This is going to be such an improvement over using a single NFS share.
16:32 semiosis +1
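
In practice the server named in the mount command only matters at mount time, and a second volfile server can be listed as a fallback. A sketch of a client fstab entry, assuming the volume from earlier ('media') and that the installed mount.glusterfs supports the backupvolfile-server option:

    # /etc/fstab on a client
    server1:/media  /mnt/media  glusterfs  defaults,_netdev,backupvolfile-server=server2  0 0
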
16:36 Mo__ joined #gluster
16:37 bchilds joined #gluster
16:37 gdavis33 so. I think i am going to replace glusterfs geo-replication with csync2
16:38 gdavis33 my syncs just stop with with no error
16:43 mjrosenb joined #gluster
16:47 bchilds joined #gluster
16:55 aliguori joined #gluster
16:55 Supermathie [2013-05-01 16:55:02.965470] I [afr-self-heal-entry.c:2253:afr_sh_entry_fix] 0-gv0-replicate-2: <gfid:13c900bc-2237-40d0-85bd-77e48320453e>: Performing conservative merge
16:55 Supermathie How do I look up a GFID?
16:57 cfeller_ joined #gluster
16:57 Nicolas_Leonidas joined #gluster
16:58 Nicolas_Leonidas hi I have a really weird problem
16:58 Nicolas_Leonidas I have a shared folder on 4 instances
16:58 thomasle_ joined #gluster
16:59 Nicolas_Leonidas this file exists on the server /r_images/original/172/171097.jpg
16:59 Nicolas_Leonidas I can see that by doing ls /r_images/original/172/171097.jpg which returns results
16:59 Nicolas_Leonidas but when php wants to see if that file exists or not, it returns false, I'm using php's file_exists
16:59 Nicolas_Leonidas this is a new issue, everything was working for months, what could cause this?
17:02 Supermathie Nicolas_Leonidas: What does echo '<? print file_exists("/r_images/original/172/171097.jpg"); print "\n" ?>' | php
17:02 Supermathie return?
17:03 Supermathie echo '<? print file_exists("/r_images/original/172/171097.jpg") ? "EXISTS\n" : "DOES NOT EXIST\n"; ?>' | php
17:03 Supermathie ^ better :D
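
Another thing worth ruling out, since ls sees the file but PHP does not, is PHP's stat cache: file_exists() results are cached per process, so a stale cache in a long-running worker can make the check lie. A quick command-line sanity test with the cache explicitly cleared, using the path from the report above:

    php -r 'clearstatcache(); var_dump(file_exists("/r_images/original/172/171097.jpg"));'
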
17:08 Supermathie ruh-roh... I crashed gluster's nfsd
17:09 JoeJulian Supermathie: on the brick, <gfid:13c900bc-2237-40d0-85bd-77e48320453e> will be in .glusterfs/13/c9/. If you want to know what file that is, ls -li 13c900bc-2237-40d0-85bd-77e48320453e then find -inum the inode.
17:10 Supermathie Ah, they're hardlinked? Nice.
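
Spelled out as commands, JoeJulian's lookup for the gfid from the log line above would look something like this; the brick path /export/brick1 is hypothetical:

    cd /export/brick1/.glusterfs/13/c9
    ls -li 13c900bc-2237-40d0-85bd-77e48320453e              # note the inode number (and link count)
    find /export/brick1 -inum <INODE> -not -path '*/.glusterfs/*'   # real path(s) sharing that inode
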
17:11 helloadam joined #gluster
17:11 JoeJulian The gfid represents the inode number you'll get on the client.
17:11 semiosis ,,(gfid resolver)
17:11 glusterbot https://gist.github.com/4392640
17:11 semiosis also ndevos had some ideas about that script, he might have a better one
17:12 jdarcy joined #gluster
17:13 JoeJulian By using a gfid that hardlinks to the filename, other hardlinks are able to be maintained as well, as well as deletions being tracked, etc.
17:13 JoeJulian Also something about "anonymous inodes" in nfsv4?
17:13 JoeJulian v?
17:13 JoeJulian doing too much ipv6 talking...
17:14 brunoleon_ joined #gluster
17:17 Supermathie If I were to say that I suspect glusterfs/nfs isn't clearing NULL RPC frames from it's stack/buffers/whatever, does that sound familiar?
17:18 Supermathie crashdump says: 46K frames of (type(0) op(0)), followed by stack trace
17:24 glusterbot New news from newglusterbugs: [Bug 958108] Fuse mount crashes while running FSCT tool on the Samba Share from a windows client <http://goo.gl/BdJvh>
17:24 Supermathie Yeah, I rather suspect that's the case.
17:27 bchilds joined #gluster
17:31 Supermathie Hmm... how to find the NULL RPC handler... grep -r NULL ~/prog/glusterfs... shit.
17:31 bulde joined #gluster
17:37 bchilds joined #gluster
17:51 xavih_ joined #gluster
17:53 mriv joined #gluster
17:54 shawns|work joined #gluster
17:54 neofob joined #gluster
17:54 bugs_ joined #gluster
17:57 plarsen joined #gluster
17:57 bchilds joined #gluster
18:05 renihs joined #gluster
18:07 bchilds joined #gluster
18:07 Jippi joined #gluster
18:09 mriv joined #gluster
18:10 lpabon joined #gluster
18:10 romero joined #gluster
18:17 bchilds joined #gluster
18:18 portante|ltp joined #gluster
18:22 jikz joined #gluster
18:33 morse joined #gluster
18:47 bchilds joined #gluster
18:49 jikz joined #gluster
18:49 bsaggy joined #gluster
18:56 sprachgenerator joined #gluster
18:57 sprachgenerator so I just added a group of bricks to my volume and the storage space increase is not being reflected on the clients - this is 3.4 - has anyone encountered this behavior before?
18:59 bchilds joined #gluster
18:59 Jippi left #gluster
19:05 brunoleon joined #gluster
19:13 elyograg sprachgenerator: On my 3.3 testbed, adding additional replica sets to my distribute+replicate volume reflected an immediate increase in available disk space on clients.
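
A few things commonly checked in that situation, all from the 3.x CLI (VOLNAME and the mountpoint are placeholders):

    gluster volume status VOLNAME                        # confirm the new bricks actually came online
    gluster volume rebalance VOLNAME fix-layout start    # let DHT pick up the new bricks
    df -h /mnt/VOLNAME                                   # on the client; remount if the size still lags
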
19:17 andreask joined #gluster
19:19 bchilds joined #gluster
19:25 sandeen_ joined #gluster
19:33 pdurbin joined #gluster
19:33 pdurbin hmm, so ilbot_bck is in here. interesting
19:34 pdurbin left #gluster
19:39 bchilds joined #gluster
19:40 rb2k joined #gluster
19:42 codex joined #gluster
19:59 bchilds joined #gluster
20:21 gbrand_ joined #gluster
20:22 jclift joined #gluster
20:39 bchilds joined #gluster
20:43 Supermathie a2: The AFR doesn't seem to be playing very nicely with 3.4.0a3+
20:49 vigia joined #gluster
20:49 bchilds joined #gluster
21:15 sandeen_ left #gluster
21:16 jag3773 joined #gluster
21:20 Supermathie I'm seeing a lot of files being reported as needing healing without any obvious errors.
21:30 bchilds joined #gluster
21:50 bchilds joined #gluster
22:00 bchilds joined #gluster
22:20 bchilds joined #gluster
22:39 daMaestro joined #gluster
23:06 GLHMarmot joined #gluster
23:15 fidevo joined #gluster
23:36 aliguori joined #gluster
23:43 jbrooks joined #gluster
23:51 bchilds joined #gluster
23:59 jbrooks joined #gluster
