IRC log for #gluster, 2012-10-08

All times are shown in UTC.

Time Nick Message
00:09 neofob what kind of hardware/cpu acceleration can glusterfs benefit from (besides fast io)?
00:10 neofob i notice that my little cubox cpu util is over the top when it is in healing process, file hashing?
00:22 dbanck JoeJulian, server came back online: https://paste.welcloud.de/show/F7EwFLg1itNK4d3mQmQI/
00:22 dbanck Oo
00:22 glusterbot Title: Paste #F7EwFLg1itNK4d3mQmQI | LodgeIt! (at paste.welcloud.de)
00:32 dbanck my glusterfs process seems to hang in a repair loop
00:35 daniel__ joined #gluster
00:36 puebele joined #gluster
00:36 puebele left #gluster
00:49 dbanck My log gets spammed with this: https://paste.welcloud.de/show/CKgz0FXVoxogOJUvE2Vs/
00:49 glusterbot Title: Paste #CKgz0FXVoxogOJUvE2Vs | LodgeIt! (at paste.welcloud.de)
00:50 dbanck What is it?
00:56 hjmangalam JoeJ, Thanks for the pointer.  In fact, one server is complaining of the port being taken (tho it doesn't give the port #) and the other is complaining of a UUID mismatch.  Both here. <http://pastie.org/4930589>.  Can't think of why this would be the case.  Is there a way to re-sync the UUID?
00:57 glusterbot Title: #4930589 - Pastie (at pastie.org)
00:59 hjmangalam can you start glusterfsd with a specific UUID to re-sync with the right UUID?
01:45 McLev joined #gluster
01:48 kevein joined #gluster
02:09 McLev joined #gluster
02:10 McLev anybody have tips on small-files for 3.3?
02:58 sunus1 joined #gluster
03:38 cattelan joined #gluster
04:04 jrist joined #gluster
04:22 JoeJulian McLev: http://joejulian.name/blog/nfs-mount-for-glusterfs-gives-better-read-performance-for-small-files/
04:22 glusterbot Title: NFS mount for GlusterFS gives better read performance for small files? (at joejulian.name)
04:31 indivarnair joined #gluster
04:44 McLev JoeJulian> Thanks man.
04:44 jays joined #gluster
04:57 vpshastry joined #gluster
04:58 hagarth joined #gluster
04:58 raghu joined #gluster
04:59 sripathi joined #gluster
05:05 harish joined #gluster
05:05 hjmangalam1 joined #gluster
05:09 mdarade joined #gluster
05:14 bulde joined #gluster
05:19 indivarnair I have many large and small files. So I would like to have a mix of RAID5 volumes and individual HDDs, and force gluster to save large files (*.ma, *.mb, etc.) to the RAID5 volumes. Is it possible?
05:19 sgowda joined #gluster
05:28 sripathi1 joined #gluster
05:52 rgustafs joined #gluster
05:57 vpshastry joined #gluster
06:01 mo joined #gluster
06:02 sripathi joined #gluster
06:04 shylesh joined #gluster
06:05 badone_home joined #gluster
06:09 ankit9 joined #gluster
06:13 mdarade1 joined #gluster
06:18 badone_ joined #gluster
06:19 badone__ joined #gluster
06:22 badone joined #gluster
06:23 badone_ joined #gluster
06:23 badone__ joined #gluster
06:23 indivarnair left #gluster
06:25 kshlm joined #gluster
06:25 kshlm joined #gluster
06:25 JordanHackworth joined #gluster
06:26 ramkrsna joined #gluster
06:26 ramkrsna joined #gluster
06:31 ngoswami joined #gluster
06:31 mdarade1 left #gluster
06:32 mdarade joined #gluster
06:38 blendedbychris joined #gluster
06:38 blendedbychris joined #gluster
06:43 bulde joined #gluster
06:50 ctria joined #gluster
06:50 bulde1 joined #gluster
06:52 Nr18 joined #gluster
06:55 clag_ joined #gluster
06:57 faizan joined #gluster
06:58 McLev1 joined #gluster
07:00 badone_home joined #gluster
07:02 sgowda joined #gluster
07:08 vimal joined #gluster
07:10 sripathi joined #gluster
07:14 lkoranda joined #gluster
07:16 andreask joined #gluster
07:24 TheHaven joined #gluster
07:33 ondergetekende joined #gluster
07:38 sgowda joined #gluster
07:39 vpshastry1 joined #gluster
07:40 ngoswami joined #gluster
07:40 deepakcs joined #gluster
07:41 bulde joined #gluster
07:51 stickyboy joined #gluster
07:51 aberdine joined #gluster
07:55 sripathi joined #gluster
07:57 ankit9 joined #gluster
08:04 oneiroi joined #gluster
08:04 tjikkun_work joined #gluster
08:10 badone joined #gluster
08:15 vpshastry joined #gluster
08:17 ctria joined #gluster
08:18 dobber joined #gluster
08:19 gbrand_ joined #gluster
08:29 ngoswami joined #gluster
08:30 Triade joined #gluster
08:33 oneiroi joined #gluster
08:40 shireesh joined #gluster
08:42 crashmag joined #gluster
08:43 ankit9 joined #gluster
08:52 TheHaven joined #gluster
08:59 sshaaf joined #gluster
09:00 ramkrsna joined #gluster
09:01 ngoswami_ joined #gluster
09:01 vikumar joined #gluster
09:08 sgowda joined #gluster
09:08 kshlm joined #gluster
09:08 kshlm joined #gluster
09:08 sac joined #gluster
09:08 shylesh joined #gluster
09:08 shireesh joined #gluster
09:11 raghu joined #gluster
09:11 vpshastry joined #gluster
09:13 hagarth joined #gluster
09:16 stickyboy joined #gluster
09:26 shireesh joined #gluster
09:29 badone_home joined #gluster
09:29 shylesh joined #gluster
09:30 bulde joined #gluster
09:38 badone joined #gluster
09:41 pkoro joined #gluster
09:43 badone_home joined #gluster
10:00 kaney777 joined #gluster
10:01 ankit9 joined #gluster
10:03 kaney777 joined #gluster
10:04 kaney777 left #gluster
10:10 stickyboy Trying to plan my gluster implementation... thinking about where to mount data so it doesn't get confusing.
10:10 stickyboy What are some schemes people use?
10:16 samppah i'm mounting bricks at /gluster/brick0 and i use the /gluster/brick0/export directory as the brick, so it shouldn't export an empty directory if the filesystem isn't mounted for some reason
10:34 * ndevos uses /bricks/* for the bricks and mounts volumes on /export or /vol
10:47 bulde ndevos: i guess samppah is talking about 'mount' of backend itself, not the gluster client mount
10:47 bulde or you are using symlink?
10:49 shylesh joined #gluster
11:02 stickyboy samppah: So you mount your raw data partitions to /gluster/brick{0,1,2}... then add those to volumes?  Like, server1:brick0?
11:22 sripathi joined #gluster
11:22 ramkrsna joined #gluster
11:23 rwheeler joined #gluster
11:28 bala1 joined #gluster
11:29 andreask joined #gluster
11:30 jays joined #gluster
11:30 ndevos bulde: no, I tend to mount the bricks on /bricks/... , and have the volume mounted on /vol/$VOLNAME (but, that also differs per client)
11:33 stickyboy ndevos: By bricks you mean "raw storage", right?  ie /dev/sda or /dev/md0, etc?
11:34 stickyboy Still trying to wrap my head around the jargon...
11:35 kkeithley Bricks have to have a file system, so /dev/sda or /dev/md0 can't be bricks.
11:35 stickyboy kkeithley: Right, sorry.  I meant "where raw storage is mounted"... sorry.
11:35 stickyboy !repos
11:36 mohankumar joined #gluster
11:36 kkeithley Make a file system on /dev/sda, mount it at, e.g., /bricks/volsda, that's your brick
11:36 stickyboy kkeithley: Gotcha.
11:36 stickyboy I'm trying to plan it nicely in the beginning so my infrastructure doesn't get out of hand :)
11:37 kkeithley Now when you create your volume you're going to use `gluster volume create myvolume mybrickhost1:/bricks/volsda mybrickhost2:/bricks/volsda ...`
11:38 stickyboy kkeithley: Ok.
11:38 samppah @glossary
11:38 glusterbot samppah: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
11:39 stickyboy I think I'll use /bricks.  Nice and simple, and hard to forget that bricks are mounted there.
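A minimal sketch of the brick layout discussed above, assuming XFS on a device /dev/sdb1, two servers named server1/server2 and a volume called myvol (all hypothetical names); it also includes the export-subdirectory variant samppah mentions:

    # format and mount the raw storage; the mounted filesystem is the brick
    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /bricks/volsdb
    mount /dev/sdb1 /bricks/volsdb
    echo '/dev/sdb1 /bricks/volsdb xfs defaults 0 0' >> /etc/fstab

    # optional: use a subdirectory of the mount as the brick path, so that if the
    # filesystem isn't mounted the brick path simply doesn't exist instead of
    # silently landing on the root filesystem
    mkdir -p /bricks/volsdb/export

    # on one server, after peer probing, create and start the volume
    gluster volume create myvol server1:/bricks/volsdb/export server2:/bricks/volsdb/export
    gluster volume start myvol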
11:41 stickyboy I think I read that you can't mount directories using the FUSE client.  Would that be one reason to divide storage into multiple bricks (as opposed to having one big-ass "cloud")?
11:48 stickyboy Just thinking of a scenario where I might want to keep data separate, like shared apps and user homes (which I *know* will be big, versus apps which is relatively small).
12:07 badone_home joined #gluster
12:09 kkeithley johnmark: ping
12:18 plarsen joined #gluster
12:29 raghu_ joined #gluster
12:35 mdarade joined #gluster
12:36 mdarade1 joined #gluster
12:36 mdarade1 left #gluster
12:56 hagarth joined #gluster
12:56 andreask1 joined #gluster
12:56 andreask joined #gluster
12:56 vpshastry1 joined #gluster
13:13 bennyturns joined #gluster
13:31 ramkrsna joined #gluster
13:31 sergio__ joined #gluster
13:31 Nr18 joined #gluster
13:34 sshaaf joined #gluster
13:35 sergio__ Hello guys! I have a problem with gluster. I have a cluster of 3 nodes and everything is fine, until one of the nodes goes down
13:35 sergio__ when it comes back again it freezes all the other nodes
13:35 sergio__ each node is client and server at the same time
13:36 sergio__ I would like to know if this is a misconfiguration or it is supposed to be like this
13:37 sergio__ I am using the latest stable release
13:37 flowouf hello
13:37 glusterbot flowouf: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:37 flowouf hello sergio__
13:38 flowouf can you send me the output of gluster volume info in a PM?
13:38 sergio__ sure
13:42 Triade joined #gluster
13:57 kkeithley johnmark: ping
14:03 hjmangalam1 joined #gluster
14:08 stopbit joined #gluster
14:10 Nr18 joined #gluster
14:30 nueces joined #gluster
14:30 wushudoin joined #gluster
14:30 deepakcs joined #gluster
14:53 lh joined #gluster
14:53 lh joined #gluster
14:54 bulde1 joined #gluster
14:54 kaisersoce joined #gluster
14:57 vpshastry1 left #gluster
15:10 seanh-ansca joined #gluster
15:15 daMaestro joined #gluster
15:15 ndevos @ports
15:15 glusterbot ndevos: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
15:18 quillo joined #gluster
15:18 bulde1 ndevos: this got changed with http://review.gluster.com/3339
15:18 glusterbot Title: Gerrit Code Review (at review.gluster.com)
15:18 bulde1 it is 'mostly' true, but the brick ports should change
15:19 ndevos bulde1: ah, okay, I was just checking for the nfs ports :)
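As an illustration only, hypothetical iptables rules matching the 3.3-era port list glusterbot gives above (and, as bulde1 notes, the brick port numbering may change in later releases):

    iptables -A INPUT -p tcp --dport 111 -j ACCEPT            # rpcbind/portmap, needed for NFS
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT    # glusterd management (+ rdma)
    iptables -A INPUT -p tcp --dport 24009:24029 -j ACCEPT    # brick ports; widen the range to cover your brick count
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT    # gluster NFS + NLM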
15:20 neofob joined #gluster
15:21 ntt joined #gluster
15:21 blendedbychris joined #gluster
15:21 blendedbychris joined #gluster
15:22 JoeJulian bulde1: Will that happen in 3.3.1?
15:22 ntt hi
15:22 glusterbot ntt: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
15:23 ntt it is possible access to a gluster fs from a remote host (internet) ?
15:24 davdunc ntt what protocol? what latency?
15:25 ntt native gluster protocol (or others. no preferences)
15:25 jiffe98 one way I can get around adding machines into a cluster that are a larger size is to create more bricks in the same volume on them?
15:26 ctria joined #gluster
15:26 davdunc ntt, do you have a particular goal in mind?
15:26 kkeithley It's all tcp. As long as there are no firewalls blocking. I'd guess that latency will be terrible, and probably throughput too
15:27 ntt a shared folder over the internet. permissions are welcome :)
15:29 davdunc ntt, you could use gluster's unified file and object storage with something like cyberduck to get good results: http://cyberduck.ch/
15:29 glusterbot Title: Cyberduck – FTP, SFTP, WebDAV, Cloud Files, Google Drive & Amazon S3 Browser for Mac & Windows. (at cyberduck.ch)
15:29 ngoswami joined #gluster
15:30 ntt so this is not the right way to achieve something similar to dropbox?
15:31 kkeithley ISTR jdarcy used glusterfs-3.2.x and HekaFS to make a secure dropbox clone. Check his blog at http://www.hekafs.org/
15:31 glusterbot Title: HekaFS (at www.hekafs.org)
15:32 ntt can i ask another question? (thanks for hekafs)
15:34 kkeithley JoeJulian: no change so for to ports in the release-3.3 branch
15:40 kkeithley s/so for/so far/
15:41 glusterbot What kkeithley meant to say was: JoeJulian: no change so far to ports in the release-3.3 branch
15:41 kkeithley ntt: ask away.
15:42 ntt ok. really thanks
15:43 ntt can I have storage servers with multiple network interfaces so that the replication operations pass over an internal network (restricted to the storage servers)?
15:44 ntt example: 2 storage server in replication. ip 10.10.1.1 and 10.10.1.2 on eth0 and 192.168.1.81 and 192.168.1.82 on eth1
15:44 ntt client 192.168.1.10
15:45 ntt how can i add peers ? which ip in /etc/hosts ?
15:46 hjmangalam3 joined #gluster
15:47 jiffe98 I don't see why not, if the peers' IPs are on the internal network, the OS is going to use the interface the internal network reside on
15:48 neofob ntt: you could make alias in /etc/hosts and use those aliases when you add/remove bricks to/from your volume
15:48 dbruhn joined #gluster
15:48 ntt ok neofob, but 1 alias for 2 ips?
15:48 bulde1 joined #gluster
15:48 dbruhn Hey guys, has 3.3.1 GA been packaged yet?
15:50 neofob ntt: if that's what you want, i dont think it will work; i thought you were looking for 2 different ips on 2 different eth interfaces
15:50 kkeithley dbruhn: There is not 3.3.1GA yet.
15:51 dbruhn What is the latest package to make it through the fedora builds?
15:51 aberdine left #gluster
15:51 kkeithley For Fedora 17 there's 3.2.7. For Fedora 18 there's 3.3.0
15:52 dbruhn Sorry, I meant not in GA
15:52 kkeithley And you can get 3.3.0 for Fedora 17 from my repo
15:52 kkeithley @yum3.3 repo
15:52 glusterbot kkeithley: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
15:52 ntt neofob: you're thinking correctly. I have 2 interfaces with 2 different ips. But if i add peers with ip 10.10.1.1 and 10.10.1.2, the client 192.168.1.10 cannot access the storage server. is that right?
15:53 dbruhn kkeithley is 3.3.0.-9 stable?
15:53 kkeithley dbruhn: if you want 3.3.1qa3 you're going to have to roll your own.
15:54 kkeithley 3.3.0-11 (or even 3.3.0-9) is as stable as it gets. ;-)
15:55 dbruhn haha, yeah, I am on 3.3.0-1 and have a couple of issues with commands not working, I need to restart the cluster today for some maintenance reasons and was wondering if there was an opportunity to upgrade while I had the downtime.
15:55 geggam joined #gluster
15:56 dbruhn and if it was worthwhile
15:56 kkeithley Yes, I've patched a few hard crashes in the releases after -1. See the changelog in the RPM
15:57 sripathi joined #gluster
15:58 dbruhn I have been having an issue where two nodes I added are not balancing
15:58 neofob ntt: oh, in that case, check your network settings; i ran into this before with vmware glusters (using 192.168.174.*), which is only seen by other vmware guests on my windows machines
15:58 dbruhn they are attached and healthy, just no data is being written to them
16:01 kkeithley dbruhn: no fixes for anything like that. If you're using DHT, short, similar file names have a high probability of hashing to the same brick or same set of bricks. Sometimes people see that all their bricks aren't being used. Often that's the reason.
16:01 dbruhn what is DHT? sorry for my ignorance
16:01 kkeithley distributed hash table, i.e. distribution
16:02 kkeithley IOW if you didn't specify replication when you created the volume(s), then you got distribution. Writes are scattered among all the bricks in the volume.
16:03 dbruhn the original system was 4 nodes, I added two more nodes to it, the file names are all numerical hashes, the storage on the first four nodes filled up on me, and never overflowed to the new nodes
16:03 JoeJulian @glossary
16:03 dbruhn Here is how I am set up, I have replication turned on
16:03 glusterbot JoeJulian: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
16:03 dbruhn Volume Name: ENTV02EP
16:03 dbruhn Type: Distributed-Replicate
16:03 dbruhn Volume ID: 8ff49136-5c3c-43fe-ae96-d1f1d172cc2e
16:03 dbruhn Status: Started
16:03 dbruhn Number of Bricks: 3 x 2 = 6
16:03 dbruhn Transport-type: tcp
16:03 dbruhn Bricks:
16:03 dbruhn Brick1: ENTSNV02001EP:/var/ENTV02EP
16:03 dbruhn Brick2: ENTSNV02002EP:/var/ENTV02EP
16:03 dbruhn Brick3: ENTSNV02003EP:/var/ENTV02EP
16:03 JoeJulian @kick dbruhn
16:03 glusterbot JoeJulian: Error: I need to be at least halfopped to kick someone.
16:03 dbruhn Brick4: ENTSNV02004EP:/var/ENTV02EP
16:03 dbruhn Brick5: ENTSNV02005EP:/var/ENTV02EP
16:03 dbruhn Brick6: ENTSNV02006EP:/var/ENTV02EP
16:03 dbruhn sorry for the flood
16:05 glusterbot joined #gluster
18:10 jiffe98 so reading that hekafs blog, with gluster if I make a small change to a large file does it need to rewrite the whole file?
16:16 jiffe98 I could be reading into that too
16:19 sensei jiffe98: What blog is that ?
16:20 jiffe98 http://cloudfs.org/?p=7
16:20 glusterbot Title: HekaFS » CloudFS: Why? (at cloudfs.org)
16:20 jiffe98 I'm guessing thats just fs expectations rather than a comparison
16:21 sensei Ta
16:26 Mo_ joined #gluster
16:26 hattenator joined #gluster
16:40 tc00per joined #gluster
16:50 ondergetekende joined #gluster
16:51 sunus joined #gluster
16:58 mo joined #gluster
17:33 raghu joined #gluster
17:40 Psi-Jack joined #gluster
17:42 Psi-Jack I'm trying to determine what kind of vm setup to give to a pair of GlusterFS storage server VM's for use in a multi-server environment. I'm pointing 3 production webservers at a glusterfs storage, along with 2 job-processing servers. Most of the workload will be read-heavy rather than write-heavy. I'm wondering what kind of things to consider when tuning the VM guest hardware.
17:42 dbruhn I am having issues with getting a volume to stop to take it down safely for reboots, would any one have an idea
17:44 kkeithley `gluster volume stop $name force` didn't work?
17:44 dbruhn nope
17:44 dbruhn getting an operation failed
17:44 kkeithley then `killall glusterfsd`
17:45 dbruhn I can stop the services on the machines
17:46 dbruhn or maybe not? weird
17:51 y4m4 joined #gluster
17:56 mspo can I get the file path from a gfid?
17:57 gbrand_ joined #gluster
18:00 semiosis mspo: afaik you have to 'find' it... two ways would be to find the file with the gfid xattr, the other would be to locate the hard link in .glusterfs then find other files with the same inode number
18:00 semiosis if i understand correctly
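A rough sketch of the two approaches semiosis describes, run directly on a brick (the brick path /bricks/myvol and the gfid value are hypothetical):

    # second method: for regular files, .glusterfs/<aa>/<bb>/<gfid> is a hard link,
    # so take its inode number and find the other link outside .glusterfs
    GFID=1234abcd-ab12-cd34-ef56-1234567890ab
    INODE=$(ls -i /bricks/myvol/.glusterfs/12/34/$GFID | awk '{print $1}')
    find /bricks/myvol -inum "$INODE" -not -path '*/.glusterfs/*'

    # first method: brute-force scan for the matching trusted.gfid xattr
    # (the hex value is the gfid with the dashes removed; -B1 shows the filename line)
    find /bricks/myvol -not -path '*/.glusterfs/*' \
        -exec getfattr -n trusted.gfid -e hex {} + 2>/dev/null \
        | grep -i -B1 1234abcdab12cd34ef561234567890ab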
18:00 atrius so before i go off and try and implement something... and maybe there's just some detail i'm missing... but in the end it doesn't seem to be that gluster is particularly HA by itself... you setup a cluster... you mount the one IP/server... it goes down.. you lose the file system on the client... is that about right?
18:00 semiosis ~mount server | atrius
18:00 glusterbot atrius: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrnds, or (#2) Learn more about the role played by the server specified on the mount command here: http://goo.gl/0EB1u
18:00 JoeJulian not even close, atrius
18:01 semiosis unless you're using nfs clients, then actually kinda close
18:01 JoeJulian well, pfft
18:01 semiosis and a good day to you, sir!
18:01 atrius ah, got you... that was the piece that was missing :D
18:02 atrius so i mount server A, client A connects to everyone in the cluster and after that point it shouldn't matter if server A goes away... that about right?
18:02 mo joined #gluster
18:02 kkeithley using gluster native fuse mounts, yes
18:03 atrius okay, thanks :)
18:03 atrius now i can go off and do the test i was going to do :D
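A minimal sketch of that setup (server and volume names are hypothetical): the server named in the mount command is only used to fetch the volume definition, after which the client talks to every brick server in the volume directly. Later versions of mount.glusterfs also accept a backupvolfile-server option for that initial fetch, if memory serves.

    mount -t glusterfs serverA:/myvol /mnt/myvol

    # fstab equivalent
    serverA:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0 0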
18:03 JoeJulian Un #@$% believable. I had a hardware raid1 that apparently failed on the 28th (notification didn't happen) and the failed drive came back as the healthy drive when that machine rebooted overnight, so the files on that array were from the 28th! Has nothing to do with Gluster, but ARGH!
18:04 johnmark JoeJulian: sad banda :(
18:04 johnmark that sucks
18:04 johnmark er panda
18:04 mspo semiosis: ouch
18:07 semiosis atrius: wait a sec, is A client or server?
18:08 semiosis or is A both a client and a server?
18:08 kkeithley I think he has a server A, and a client A also.
18:08 atrius semiosis: sorry, i shouldn't have used the same letter for both... Server A (mount server), Client B (random fuse client)
18:08 semiosis oh i see
18:08 semiosis some people do have a client & server on the same machine
18:09 mspo I do
18:10 dbruhn 2nd
18:10 thinkclay joined #gluster
18:10 atrius semiosis: i had actually pondered that actually... right now i've only got a limited number of machines to deal with and my main goal is to keep the file system consistent between them as opposed to providing a complex storage system for lots of clients
18:12 mspo atrius: that's my use-case too
18:13 atrius mspo: did you just put the client and server on all the nodes?
18:13 mspo atrius: yes
18:13 atrius then just have them all mount the mount server?
18:13 thinkclay left #gluster
18:14 mspo atrius: mount -t glusterfs $(hostname):/VOL /mnt/VOL
18:14 atrius how well does that perform? is the storage local or backed by something else?
18:14 mspo atrius: doesn't perform well, but I also have peer's on high latency links
18:14 atrius mspo: internet pipes?
18:15 mspo atrius: can't use nfs because of the rpc lock issues
18:15 mspo atrius: basically
18:15 atrius mspo: got you
18:16 mspo atrius: so far my tuning has been: noatime on the disk, performance.io-thread-count=32, performance.cache-refresh-timeout=5, and performance.cache-size=256MB
18:16 wmp_ joined #gluster
18:16 wmp_ hello
18:16 glusterbot wmp_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
18:17 wmp_ glusterbot: thank
18:17 wmp_ i want to use glusterfs to replication virtual server files, is possible to make few files only local? Eq: IP on hostname?
18:19 mspo atrius: let me know if you do the same and find good tunables ;)
18:20 atrius mspo: will do :)
18:21 atrius mspo: those options you listed, are those mount options?
18:22 mspo atrius: noatime is a mount option.  The others are volume tunables
18:22 mspo atrius: gluster-side
18:23 atrius mspo: okay, i'll take a look at those
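For reference, a sketch of how the tunables mspo lists are applied (volume name myvol and brick path are hypothetical): noatime belongs in the brick filesystem's mount options, while the rest are volume options set once from any server.

    mount -o remount,noatime /bricks/myvol
    gluster volume set myvol performance.io-thread-count 32
    gluster volume set myvol performance.cache-refresh-timeout 5
    gluster volume set myvol performance.cache-size 256MB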
18:25 semiosis mspo: have you measured any change in performance by chaning io-thread-count, cache-refresh-timeout, or cache-size?
18:26 semiosis changing*
18:32 mspo semiosis: well I just had an event where my system load went up to 500
18:32 mspo semiosis: which is better than 3000 like it used to :)
18:32 mspo semiosis: can't say which settings did what, though
18:33 semiosis right
18:33 mspo I have free memory and cpu so I figured I would increase them
18:33 atrius yikes... i thought my load average was high a while back
18:34 mspo semiosis: the refresh-timeout was because of my high latency problems
18:34 mspo semiosis: trying to reduce network traffic
18:34 mspo atrius: I have weirdly high load a lot
18:34 atrius mspo: IOwaits?
18:34 semiosis yeah probably iowaits for all that internet latency
18:34 mspo atrius: one cpu will be stuck on iowait
18:35 mspo despite io-thread-count
18:35 atrius huh... are you using iscsi?
18:35 mspo no it's direct-attached
18:35 atrius weird
18:35 mspo I assume the fuse mount isn't very threaded
18:35 mspo if I could get 16 cpu's into iowait instead of 1 I'd be making progress
18:36 atrius i admit that's one concern i have with using this for "serious work"... i'm not sure i've ever heard fuse and high performance in the same sentence unless the word 'not' was present
18:37 semiosis mspo: you can set performance.client-io-threads to on to make fuse a little more threaded :)
18:37 semiosis i'd be very interested to know how that works out for you
18:38 mspo semiosis: "on" instead of "16" like I have it now?
18:38 mspo semiosis: I was confused about client vs server settings
18:38 semiosis re-read what i just said
18:38 semiosis performance.client-io-threads to on
18:38 semiosis that's not the same as performance.io-thread-count
18:39 mspo oh, right
18:39 mspo yeah let's try it
18:39 atrius how well does this perform on local GigE links?
18:39 mspo atrius: that sounds like a good way to go :)
18:40 mspo semiosis: performance.client-io-threads=ON
18:40 wmp_ i want to use glusterfs to replication virtual server files, is possible to make few files only local? Eq: IP on hostname?
18:40 mspo now my load is spiking.  Interesting
18:41 atrius wmp_: you mean you want a set of files to be only local and not replicated?
18:41 semiosis mspo: because you're getting *more* processes into iowait
18:41 mspo semiosis: okay
18:41 wmp_ atrius: yes
18:41 mspo semiosis: I don't think I'm going to survive that
18:42 semiosis wmp_: it's possible but very unusual
18:42 semiosis and not really supported
18:42 semiosis probably will be easier to do (in other words, possible to do) in the 3.4 release
18:42 wmp_ i have 3.4
18:42 semiosis eh?
18:43 wmp_ oughhh, sorry :D
18:44 ackjewt joined #gluster
18:44 semiosis trying to remember the name of that feature... something like "custom layouts"
18:45 semiosis wmp_: see this: http://community.gluster.org/q/balance-across-non-uniform-nodes-sizes/
18:45 glusterbot Title: Question: Balance across non uniform nodes sizes. (at community.gluster.org)
18:45 semiosis that gives some hints about what's coming in future releases of glusterfs that may help your use-case
18:46 semiosis i mean, Jeff Darcy's top answer to that question
18:46 mspo semiosis: is set to OFF the same as unsetting performance.client-io-threads ?
18:46 ackjewt Hi, i'm pretty new to Gluster and trying to find good documentation on how to configure a back-end network between the nodes. Does anyone have a good link?
18:46 semiosis mspo: iirc, yes
18:47 semiosis ackjewt: what exactly are you trying to do?
18:48 ackjewt My clients are primarily nfs/cifs, and I want to have a back-end network for replication etc.
18:49 semiosis ok great, remember that FUSE clients do client-side replication, so you can't really do a separate back-end network for replication between servers with them
18:49 semiosis but with nfs clients, it's pretty easy
18:50 semiosis basically you want the ,,(hostnames) you used to define your bricks to resolve to the IP address of the server-server (backend) interfaces
18:50 glusterbot Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
18:50 semiosis then the clients would mount nfs/cifs from the other interface on the server(s)
18:50 ackjewt oh... that easy huh.. :)
18:50 ackjewt thanks!
18:50 semiosis yw
18:52 jdarcy Split-horizon DNS FTW.
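A sketch of the idea with plain /etc/hosts instead of DNS (names and addresses are hypothetical): the hostnames used to define the bricks resolve to the backend network on the servers and to the frontend network on the nfs/cifs clients.

    # /etc/hosts on each gluster server (backend/replication network)
    10.0.0.1     gluster1
    10.0.0.2     gluster2

    # /etc/hosts on the nfs/cifs clients (frontend network)
    192.168.1.1  gluster1
    192.168.1.2  gluster2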
18:53 mspo would I need to re-mount for the performance.client stuff to actually go into effect?
18:54 semiosis no
18:54 mspo k
18:54 juhaj So, what kind of performance (in GB/s) can one squeeze out of glusterfs? You can assume parallel io
18:54 semiosis client log file will show you that the client is dynamically updating to the new config whenever you do a volume set
18:54 semiosis juhaj: all teh gigabits!
18:58 juhaj semiosis: Seriously. I am considering it as a replacement for another distributed filesystem
19:04 semiosis idk what youre hardware is capable of, or what your workload is, or how your application uses storage
19:04 semiosis s/youre/your/
19:04 glusterbot What semiosis meant to say was: idk what your hardware is capable of, or what your workload is, or how your application uses storage
19:05 semiosis except assuming parallel io, of course, but that's not nearly enough information to estimate the GB/s you'll see
19:07 nueces joined #gluster
19:07 puebele joined #gluster
19:10 mspo semiosis: performance.client-io-threads landed me with a bunch of hung threads :(
19:18 mspo I unmounted/remounted and load came back down.  Now I just spend all of my time in system
19:26 mo joined #gluster
19:36 noob2 joined #gluster
19:37 noob2 silly question of the day.  anything i can tune to speed up ls -l or tab completion of cd when looking through gluster shares?
19:37 aliguori joined #gluster
19:39 puebele joined #gluster
19:41 JoeJulian noob2: Why do you keep asking that?
19:42 noob2 JoeJulian: sorry i didn't look through the logs from over the weekend
19:42 noob2 did someone answer it already?
19:42 JoeJulian twice
19:43 noob2 sorry :(
19:43 noob2 i'll check the logs
19:44 noob2 are the 2012-10 logs up online to search?
19:47 noob2 i saw over the weekend someone setup gluster as an iscsi target with LIO, that was pretty interesting
19:47 andreask really? you have a link?
19:48 noob2 yeah let me find it
19:50 noob2 http://www.slideshare.net/keithseahus/glusterfs-as-an-object-storage
19:50 glusterbot Title: GlusterFS As an Object Storage (at www.slideshare.net)
19:50 noob2 i believe this was it
19:52 noob2 he talks about using it with iscsi but never goes deeper into how he did it.
19:52 noob2 the way i understand LIO i don't know if that would work
19:53 noob2 JoeJulian: we noticed here that you can speed up rsync by kicking off about 400 instances of it on lower levels of your filesystem tree
19:54 noob2 if you point it to the top of the tree the stat's it keeps doing slow it down too much
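A rough sketch of the approach noob2 describes: start one rsync per subdirectory a level down the tree instead of a single rsync from the top (the paths, destination host and parallelism of 8 are all hypothetical):

    find /mnt/myvol/data -mindepth 1 -maxdepth 1 -type d -print0 |
        xargs -0 -P 8 -I{} rsync -a {} backuphost:/backups/data/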
20:00 tc00per test gluster 3.3 cluster, added two peers to dist-repl 2x2, added 4 bricks w/rebalance, what are 'failures' in status report? http://fpaste.org/TsFd/
20:00 glusterbot Title: Viewing Paste #241687 (at fpaste.org)
20:02 elyograg tc00per: The volname-rebalance log in /var/log/glusterfs should have entries corresponding to those errors.  That was my experience when I was doing remove-brick tests.  grepping for " E " would probably show them clearly.
20:02 elyograg meeting time!
20:03 sshaaf joined #gluster
20:05 Technicool grep "\] E\ \["
20:05 semiosis i dont think you need to escape the space inside quotes
20:06 semiosis not sure tho :)
20:06 Technicool semiosis, living dangerously
20:07 Technicool semiosis wins, fatality
20:07 tc00per 485 'failures' listed in status output... 523 lines matching " E " in /var/log/glusterfs/gvol-rebalance.log
20:08 tc00per Failures listed but NO explanation. Is it possible to determine 'status' of files somewhere?
20:08 tc00per Fun watching the re-balance traffic with etherape... :)
20:09 andreask noob2: well, you can use a file-based lun in lio .... and also all other targets ... so nothing special
20:09 noob2 oh i didnt know that
20:09 noob2 maybe that's what that guy was doing
20:10 noob2 i'm working on setting up LIO as we speak in fedora 17
20:10 tc00per Any clues why my 'master' peer is listed as localhost in 'gluster volume rebalance gvol status' ?
20:11 noob2 andreask: what os did you setup LIO on?
20:11 bennyturns joined #gluster
20:11 andreask noob2: ubuntu, suse, rhel/centos
20:11 noob2 haha, so you've done them all
20:11 andreask customers have, yes
20:11 mspo LIO?
20:12 noob2 how did you build the rtslib?  when i do qla2xxx/ info it says rtslib not found
20:12 andreask never need to build it
20:12 tc00per updated http://fpaste.org/AzKG/ with re-balance log tail.
20:12 glusterbot Title: Viewing gluster volume rebalance by tc00per (at fpaste.org)
20:13 noob2 hmm ok, maybe i'm missing a package in fedora then
20:13 andreask noob2: you have qlogic iscsi offload adapter?
20:13 noob2 i have a qlogic fiber card installed in there
20:13 noob2 thought the qla2xx was fiber
20:14 andreask fibre channel?
20:14 noob2 yup
20:14 noob2 qlogic fibre cards
20:14 noob2 they're older 4Gb cards just for lab testing
20:15 andreask I meant the protocol ... you plan a FC target or iscsi?
20:15 noob2 oh sorry, yes a FC target
20:15 JoeJulian tc00per: speaking of etherape: https://twitter.com/JoeCyberGuru/status/254720620691091456
20:15 glusterbot Title: Twitter / JoeCyberGuru: You cant have Fedora People ... (at twitter.com)
20:15 badone joined #gluster
20:17 JoeJulian kkeithley: ^
20:17 tc00per :)
20:18 tc00per I'm usually careful to say 'ether' - 'ape'... jic
20:18 JoeJulian hehe
20:19 rwheeler joined #gluster
20:22 semiosis i watched the "spacex" launch yesterday... was careful to say space ... ex
20:23 noob2 did it go well?
20:23 semiosis yes
20:23 noob2 awesome
20:24 JoeJulian Space, now property of Elon Musk.
20:24 jdarcy It's amazing how often I get email responses from Bangalore at this time of day (actually night).
20:25 noob2 yeah he is a man with a plan that's for sure
20:26 tc00per in my rebalance... status indicates scanned is increasing but not much has changed as far as disk usage on the new peers (viewed with df). Any tips?
20:28 JoeJulian tc00per: Did you rebalance after you added bricks?
20:30 tc00per In progress now...
20:31 tc00per Only 4.5GB on 560GB volume...
20:37 tc00per updated http://fpaste.org/AzKG/ with peer 'df' results...
20:37 glusterbot Title: Viewing gluster volume rebalance by tc00per (at fpaste.org)
20:39 tc00per re-balance... 1.2M lookups, 8 files rebalanced... ?
20:40 tc00per afk
20:43 nueces joined #gluster
20:45 tc00per so... is it a lot of 'looking' first then move files later?
20:46 blendedbychris joined #gluster
20:46 blendedbychris joined #gluster
20:49 dbruhn Question on rebalance operations and the status feedback. When you're doing a rebalance after adding nodes, does the system count the directories it needs to create, or finds missing, as failures?
20:52 JoeJulian tc00per: first pass walks the directory tree. wrt dbruhn's question I'd be very surprised if it didn't create directories during this pass. during that walk it changes the trusted.glusterfs.dht mask. The second pass walks the tree and moves files to match those new masks.
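The layout mask JoeJulian mentions is an extended attribute on each brick's copy of a directory, and it can be inspected there (the brick path is hypothetical); rebalance rewrites these hash ranges on the first pass, then moves files to match on the second.

    getfattr -n trusted.glusterfs.dht -e hex /bricks/myvol/export/somedir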
20:53 lh joined #gluster
20:53 dbruhn I am just seeing some high numbers in the failures category when I run the status, so I am not sure if it's a problem or expected.
20:55 elyograg when I did a rebalance on my test system after adding a couple of bricks, there were tons of failures in the log, but I spot-checked some of the files it said had failed and was able to access them just fine.  I did not happen to notice whether the failure column in the status output reflected those failures, but it probably did.
20:56 tc00per JoeJulian: Thanks. Curious about the high traffic between peer0 and peers2&3. peers2&3 are brand new nodes with NO files on the bricks. Relatively little traffic between peer0 and peer1, the ORIGINAL replica pair.
20:56 bennyturns joined #gluster
20:58 tc00per OK... found likely problem. Fixing...
21:00 bennyturns joined #gluster
21:00 bennyturns joined #gluster
21:01 noob2 JoeJulian: i should prob start using a better irc client rather than this web client.  everytime i disconnect i lose the conversation backlog :-/
21:01 JoeJulian :)
21:02 puebele1 joined #gluster
21:02 semiosis noob2: we got logs, y'know
21:02 noob2 semiosis: yeah i was looking for october's logs.  i don't see them online
21:02 noob2 unless i'm losing it haha
21:02 semiosis hmm
21:02 semiosis thx for pointing that out
21:03 semiosis http://irclog.perlgeek.de/gluster/
21:03 glusterbot Title: IRC logs - Index for #gluster (at irclog.perlgeek.de)
21:03 semiosis i think that the problem with the gluster.org logs was not intentional
21:03 noob2 hah wow
21:04 tc00per re-balance is 'working' now.
21:04 semiosis but coincidentally we did get that new logging set up around the same time
21:04 semiosis thx to johnmark & pdurbin for that
21:04 noob2 yeah that is cool
21:05 tc00per THIS noob didn't mount the bricks on the correct directory on the new nodes... doh!
21:06 noob2 JoeJulian: I see your answer.  i remember you saying that.  I'm sorry I completely forgot and a developer was bugging me again this morning about slowness
21:06 JoeJulian send your developer to my blog. :D
21:06 noob2 maybe i'll tell him to just start using echo * and stop being a pest haha
21:07 noob2 yeah i'll send him over to your blog.  that sounds like a good idea slo
21:08 badone_home joined #gluster
21:11 tc00per re-balance complete...
21:13 imcsk8 joined #gluster
21:16 imcsk8 hello, i'm having problems with gluster 3.3.0, the geo replication is giving me problems. since i'm not planning to use it right now ¿can somebody give me a hint on how to disable it?
21:17 elyograg kkeithley: today I will reinstall fedora 17 from scratch on the servers that are doing UFO, see if perhaps I screwed something up.
21:21 elyograg kkeithley: regarding the swift.pid file being placed into the gluster volume directory, is there a specific place on bugzilla I should go to file that, or is it something you are already thinking about or that might already have a bug filed?
21:21 noob2 elyograg: do you have a writeup of how you installed it?
21:22 elyograg noob2: aside from the fact that the volume was already created, I pretty much followed kkeithley's instructions.  http://www.gluster.org/2012/09/howto-using-ufo-swift-a-quick-and-dirty-setup-guide/
21:22 glusterbot Title: Howto: Using UFO (swift) — A Quick Setup Guide | Gluster Community Website (at www.gluster.org)
21:22 noob2 ok
21:23 elyograg noob2: my brick servers are CentOS, I am adding these two fedora systems to the cluster as well.  Just now i did 'peer detach' on them from a centos commandline so that I can wipe/reinstall.
21:23 noob2 gotcha
21:23 noob2 so your fedora servers are your UFO frontends?
21:24 elyograg that's the idea.  I am doing that because I ran into the authentication issue listed on the howto as 'breaking news.'
21:24 noob2 yeah
21:25 noob2 i'm still working on getting openstack folsom working
21:25 noob2 after that i can tie it into gluster's ufo
21:28 tc00per Oops... broken links everywhere after re-balance. Not SURE they were good before though. Tips on supporting symlinks...?
21:29 JoeJulian How could they be broken? That doesn't make sense.
21:29 elyograg I have some interest in doing an install of swift myself and then plugging in bits to make it work with gluster, but I am missing the knowledge needed to do this myself.  If you or anyone else can point me at specific named sections of relevant openstack documentation, and specific bits of source code for the glue, I am willing to give it a try.
21:29 JoeJulian Oh, unless you're looking at them on the brick!
21:30 tc00per Looking at brick AND at client mount.
21:30 JoeJulian They could easily be broken on the brick, but not be broken on the client.
21:30 tc00per Again... may NOT have been broken in the first place. Need to verify that...
21:31 badone_home joined #gluster
21:49 ankit9 joined #gluster
22:05 crashmag joined #gluster
22:06 tryggvil joined #gluster
22:20 tc00per Strange... unable to delete test data that has broken links in it. Typical message on server peer0...
22:20 tc00per [2012-10-08 15:19:18.438762] I [server3_1-fops.c:1085:server_unlink_cbk] 0-gvol-server: 1080016: UNLINK /.../im0073.dcm (--) ==> -1 (No such file or directory)
22:20 tc00per ...ideas?
22:20 tc00per This is from a brick log.
22:22 tc00per From a glusterfs client it shows as a broken link.
22:27 tc00per brb
22:27 tc00per left #gluster
22:35 tc00per joined #gluster
22:38 tc00per Unable to rmdir... "transport endpoint not connected"... all peers are 'connected'.
22:40 JoeJulian tc00per: Can you remount? Not sure what's causing that, but it would be good to try.
22:46 tc00per Remount where? Remounted at client after inability to delete files. Then deleted files but cannot rmdir the dirs. Remount the gluster volume on peers? I've tried to restart glusterd on all peers. After doing that and remounting on client contents of glusterfs mount are 'unusable'...
22:46 tc00per ?--------- ? ? ? ?            ? data
22:47 tc00per I can repave the volume easily (testing) but clearly need to understand this problem. :(
22:55 tc00per Remount at client - no good. Stop/start volume - no good. Restart glusterd on all peers - no good.
22:56 JoeJulian wth... See what the error is in the client log.
23:00 tc00per JoeJulian: Thanks... you are glusterfs god. :) Problem found...
23:00 JoeJulian Ah, good.
23:00 JoeJulian wwi
23:00 tc00per This is probably a FAQ somewhere, but in case it's not, my screw-up may help other noobs
23:01 tc00per Two new peers don't have DNS yet. /etc/hosts configured on all peers but NOT on client (oops).
23:01 stigchristian joined #gluster
23:01 JoeJulian Ah, yes. very important. :D
23:01 tc00per Mount via gNFS would have never noticed. Mount with glusterfs DOES.
23:02 tc00per Added entries to /etc/hosts on glusterfs client and the mount/remove of the tree went fine.
23:02 crashmag joined #gluster
23:02 tc00per I wonder if the following type of entry on client is meaningful ASIDE from the problems created earlier by DNS...
23:03 JoeJulian You would have had errors, too, adding files that hashed out to the disconnected servers, btw.
23:03 tc00per [2012-10-08 15:59:42.243224] I [dht-layout.c:593:dht_layout_normalize] 0-gvol-dht: found anomalies in /data/tcooper/PING_TEST/orig/P0008_000_01/P0008/unnamed - 1454/(opt)fMRIMANUALPRESCAN_10. holes=2 overlaps=1
23:03 stigchristian Lets say I have two nodes with two bricks each in a distributed-replicated volume (replica 2) and want to add another server with two more bricks. How do I make sure that files get replicated on different nodes?
23:04 tc00per I didn't add any files to the messed config... only tried to rebalance on server. Discovered broken links from client (though were they only broken because client couldn't resolve new peers????)
23:04 JoeJulian tc00per: I've seen that too, but since it's "I" and I've never seen a problem from that, I haven't looked at what it means.
23:04 tc00per "I" == "Information"?
23:04 JoeJulian Yep
23:04 JoeJulian @order
23:04 JoeJulian Damn, I should add that.
23:05 JoeJulian stigchristian: The replication is determined by the order of the bricks. The first two are replicas, the next two are, etc.
23:06 tc00per Does gluster peer status exist for client?
23:06 JoeJulian tc00per: no. If you want to run that on a client, you'll have to install the server and add it to the peer group.
23:06 JoeJulian stigchristian: that is, of course, assuming "replica 2"
23:06 stigchristian So I have to add two servers with two volumes each in order: s1:v1 s2:v1 s1:v2 s2:v2 right?
23:07 JoeJulian If you have two bricks per server, yes.
23:07 stigchristian Can I re-order the bricks in any way?
23:07 JoeJulian replace-brick
23:07 stigchristian ah.. of couse!
23:07 stigchristian course
23:07 JoeJulian :D
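A sketch of the ordering JoeJulian describes (server and path names are hypothetical, and the replace-brick syntax is the 3.3-era form as best I recall):

    # order determines pairing: with replica 2, each consecutive pair of bricks
    # becomes a replica set, so adjacent bricks should sit on different servers
    gluster volume create myvol replica 2 \
        s1:/bricks/b1 s2:/bricks/b1 \
        s1:/bricks/b2 s2:/bricks/b2

    # bricks added later follow the same rule, in groups of two
    gluster volume add-brick myvol s3:/bricks/b1 s4:/bricks/b1

    # if a pair ever ends up on one server, replace-brick can move one member
    gluster volume replace-brick myvol s4:/bricks/b1 s5:/bricks/b1 start
    gluster volume replace-brick myvol s4:/bricks/b1 s5:/bricks/b1 status   # wait for completion
    gluster volume replace-brick myvol s4:/bricks/b1 s5:/bricks/b1 commit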
23:09 stigchristian What about this one: I have 36 disks per chassis, and a raid controller. Would you recommend 2 x (18 disks in raid6), or 36 disks in raid6, or some other configuration?
23:11 tc00per JoeJulian: stigchristian is asking/doing similar to what I've tried (aside from my noob problems). I had s0:b0 s1:b0 s0:b1 s1:b1 and added s2:b0 s3:b0 s2:b1 s3:b1. What's the 'proper' way to get 'balanced' distribution of the mirrors?
23:12 JoeJulian Just a few minutes and I'll try to give a very detailed answer to that...
23:12 tc00per Seems on glusterfs client easiest way to detect server issues is to grep for "Transport endpoint is not connected" in the client log. Agree?
23:12 JoeJulian That's good. Also grep ' E '
23:12 tc00per JoeJulian: Thanks.
23:18 JFK joined #gluster
23:19 JFK hi all.
23:23 puebele1 joined #gluster
23:23 foo_ joined #gluster
23:23 JFK I've googled half of the net and nothing else helped so i'm here :-) 2 of 3 bricks in replicated mode went offline. Info shows that they are connected but status shows online: n. Is there any simple method to put them back online? And how to prevent such events in future?
23:24 JoeJulian "gluster volume start $volname force" or restarting glusterd usually works too.
23:24 JoeJulian To prevent them, you'd need to look in the brick logs and figure out what happened to them.
23:24 JoeJulian Also check dmesg to make sure they weren't oom killed.
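A compact sketch of those checks (the volume name myvol is hypothetical):

    gluster volume start myvol force      # try to restart the dead brick processes
    gluster volume status myvol           # check the Online column again
    less /var/log/glusterfs/bricks/*.log  # why did the brick die?
    dmesg | grep -i 'out of memory'       # was glusterfsd killed by the OOM killer?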
23:25 JFK restarting didn't help - did that already
23:27 JFK force start wrote:
23:27 JFK Starting volume replicated has been successful
23:27 JFK but status is still not online :-(
23:28 JoeJulian Check the brick logs to see why they're failing to start.
23:28 JFK logs shows a lot of :  I [client.c:2090:client_rpc_notify] 0-replicated-client-1: disconnected
23:29 JFK every minute at least 10 of such entries
23:29 JoeJulian That's a client log, isn't it?
23:30 JFK i'm not sure. /var/log/gluster/$volname.log
23:31 elyograg kkeithley, JoeJulian: rebuilt my UFO servers.  now I have everything working locally on the server except the curl command at step 11 on the howto.  http://www.gluster.org/2012/09/howto-using-ufo-swift-a-quick-and-dirty-setup-guide/
23:31 glusterbot Title: Howto: Using UFO (swift) — A Quick Setup Guide | Gluster Community Website (at www.gluster.org)
23:32 JoeJulian JFK: /var/log/glusterfs/bricks/*
23:32 JFK http://pastebin.com/SenYABxf
23:32 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
23:33 tc00per Hmmm... subdirectories listed twice on glusterfs client?
23:34 tc00per ...now some are listed 3 times... each with same inode number....?
23:35 JoeJulian I had that once during this recent series of repairs I went through... I can't remember how I fixed it though... Check the client log again...
23:35 JFK http://fpaste.org/GkKM/
23:35 glusterbot Title: Viewing Paste #241731 (at fpaste.org)
23:36 tc00per 27447 errors... better rotate that log. :)
23:36 elyograg cyberduck still wasn't working.  then I went to my second UFO server and tried a swift command to list objects, which works on the other server.  that hangs for a really long time, then says "[Errno 104] Connection reset by peer"
23:36 JoeJulian JFK: looks like your volume definitions are out of sync.
23:37 JoeJulian Did you delete and recreate that volume?
23:38 JFK i did recreate the gluster volume twice but i'm not sure whether it was that one.
23:38 elyograg earlier, when I had tried to list the contents of an existing directory (container) with a TON of files in it, I tried something else.  on that second server, I shut down swift and did an strace of the swift command (which is connecting to the first server) ... I discovered that it is trying to connect to 127.0.0.1.
23:38 elyograg connect(5, {sa_family=AF_INET, sin_port=htons(443), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
23:39 JFK i found a method of deleting a volume on http://joejulian.name/blog
23:39 JoeJulian stop glusterd on one of the broken servers. Normally I'd have you delete /var/lib/glusterd/vols then start glusterd and issue, "gluster volume sync all" to repair it. You can try that, but if it's like I've been seeing it won't work.
23:39 JFK it is yours, isn't it?
23:39 JoeJulian yes, that's mine
23:39 JFK btw great job, i've learned there a lot :-)
23:40 JoeJulian If it works, it should start the volume and it'll be up and running. If it doesn't work, stop glusterd, rsync that directory from a good working server and start glusterd.
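The repair sequence JoeJulian outlines, as a sketch to be run on the broken server ("goodserver" is a hypothetical healthy peer); moving the directory aside is a slightly more cautious version of deleting it:

    service glusterd stop
    mv /var/lib/glusterd/vols /var/lib/glusterd/vols.bak
    service glusterd start
    gluster volume sync all                 # try the built-in sync first

    # if that doesn't take, copy the definitions from a healthy peer instead
    service glusterd stop
    rsync -a goodserver:/var/lib/glusterd/vols/ /var/lib/glusterd/vols/
    service glusterd start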
23:41 JoeJulian Thank you for the compliment. :D
23:41 elyograg I remember from when I set up swift (using swift's own object storage, not gluster), part of the proxy-server config was pointing at itself with the http/https URL.  the gluster-swift config doesn't have that part, so I think it is assuming 127.0.0.1.  That works perfectly fine if you are connecting to swift on the local machine, but not if it's remote.
23:42 nightwalk joined #gluster
23:44 JFK stopped,deleted,started ,gluster volume sync all
23:44 JFK please delete all the volumes before full sync
23:44 JFK directory vols is again there
23:45 JFK trying rsync
23:47 JoeJulian Ah, right... I forgot about the 3rd possibility that it just syncs the volumes automatically.
23:48 JFK new version should do that, right?
23:48 tc00per JoeJulian: re: multiple subdir listing on glusterfs client. unmounted, rotated client log, re-mounted. multiple dirs still there. http://fpaste.org/Ori5/
23:48 glusterbot Title: Viewing glusterfs.log by tc00per (at fpaste.org)
23:50 blendedbychris joined #gluster
23:50 blendedbychris joined #gluster
23:50 JoeJulian nothing interesting there... On your servers is /srv/glusterfs/bricks/gvol.0*/.glusterfs a symlink on all of them? It should be.
23:52 stigchristian joined #gluster
23:52 elyograg ok, I figured out my mistake on my second server, now swift will work from the commandline just like from the primary server ... unless I stop swift, in which case it fails because it is still trying to contact localhost.  I believe this is also why cyberduck (remote from my windows machine) is failing.
23:53 tc00per JoeJulian: Nope... looks like a directory on all servers.
23:54 JoeJulian See my blog post on the .glusterfs tree
23:56 JFK i'm thinking about one thing: one of those bricks is on an ext4 partition. I've read about the bug, but the strange thing is that the two bricks that failed are on xfs and the ext4 one is fine
23:57 tc00per JoeJulian: This will take a few moments to grok... getting coffee. brb
