
IRC log for #gluster, 2012-10-22


All times shown according to UTC.

Time Nick Message
00:23 andreask joined #gluster
00:36 bala2 joined #gluster
01:14 lng joined #gluster
01:15 lng Hi! Can anybody help me switch to a name-based configuration of Gluster please?
01:41 kevein joined #gluster
02:00 sashko joined #gluster
02:05 bulde joined #gluster
02:22 plarsen joined #gluster
02:54 bharata joined #gluster
03:20 sunus joined #gluster
03:26 sashko joined #gluster
03:31 shylesh joined #gluster
03:37 hagarth joined #gluster
04:27 deepakcs joined #gluster
04:34 hchiramm_ joined #gluster
04:42 sripathi joined #gluster
04:43 hagarth joined #gluster
04:46 Humble joined #gluster
04:49 yeming joined #gluster
04:49 JZ_ joined #gluster
04:51 raghu joined #gluster
04:53 bala1 joined #gluster
05:03 sgowda joined #gluster
05:09 lng glusterfs-client : Depends: libglusterfs0 (>= 3.3.0-ppa1~precise3) but it is not going to be installed
05:09 lng issue with - https://launchpad.net/~semiosis/+archive/glusterfs-3.3
05:09 glusterbot Title: glusterfs-3.3 : semiosis (at launchpad.net)
05:11 sripathi1 joined #gluster
05:11 sripathi1 joined #gluster
05:35 lng What does this error mean? 'Host server01 not a friend'
05:36 kshlm lng: have you peer probed server01?
05:36 lng yes
05:37 kshlm what does peer status show?
05:37 lng State: Peer in Cluster (Connected)
05:37 lng on both
05:37 lng I have two nodes
05:38 kshlm where did you get the error? in the logs or while performing a command?
05:38 lng cli
05:39 lng gluster volume create storage replica 2 transport tcp ...
05:41 overclk joined #gluster
05:41 kshlm that is strange. did you peer probe with the hostnames?
05:43 lng yes
05:43 lng on server1 I probed server2 by its hostname
05:44 lng after that I probed server1 from server2 too
05:44 lng because on server2 it returned server1's IP
05:47 lng is it ok if the mount point has the same name on both servers?
05:47 lng \/storage
05:53 yeming I use same mount point names on all nodes.
05:54 kshlm is it server01 or server1?
05:55 lng yes, sorry
05:55 lng typo
05:55 kshlm okay. brick mount points don't matter anyway. they can be the same.
05:56 lng I stopped glusterd and deleted /var/lib/glusterd/*
05:56 lng now I have no peers
05:58 lng on server1 I'm running - `gluster peer probe glustertest-server2.com`
05:58 lng Probe successful
05:59 lng after that, should I probe server1 from server2?
05:59 kshlm yes, you should. otherwise, server2 will have only ip of server1.
06:00 lng true
06:00 lng both are connected
06:01 lng 'Number of Peers: 1' - on each server
06:01 kshlm correct till now.
06:03 lng same error
06:06 lng [2012-10-22 06:03:32.918943] E [glusterd-utils.c:4288:glusterd_new_brick_validate] 0-management: Host server2 not a friend
06:06 kshlm you have to use the complete hostname: glustertest-server2.com:<brick-path>
06:07 lng yes, I did
06:07 lng sorry - I removed the real host
06:08 lng gluster volume create data replica 2 transport tcp  glustertest-node1.com:/storage  glustertest-node2.com:/storage
06:08 lng it is like that^
06:11 yeming kshlm: I probe peers only on one of the servers. Will that cause problems?
06:14 lng yeming: do you use hostnames or IP addresses?
06:14 kshlm yeming: if you are using ips, no problem. if you are using hostnames, you will have to ping the original server from one of the others.
06:14 kshlm s/ping/probe/
06:14 glusterbot What kshlm meant to say was: yeming: if you are using ips, no problem. if you are using hostnames, you will have to probe the original server from one of the others.
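For a name-based setup, the usual pattern is to probe in both directions so each node records its peer by hostname rather than IP. A minimal sketch using the hostnames from this log (glustertest-server1.com is assumed by analogy with the server2 name used above):

    # on server1
    gluster peer probe glustertest-server2.com
    # on server2, probe back so server1 is stored by hostname instead of IP
    gluster peer probe glustertest-server1.com
    # verify on both nodes
    gluster peer status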
06:17 kshlm lng: are you using the exact hostnames being shown in 'peer status'
06:17 lng kshlm: yea
06:17 lng kshlm: do I have to add them to /etc/hosts?
06:18 lng kshlm: they resolve
06:18 kshlm if peer probe succeeded, that means they resolve correctly.
06:18 lng yea
06:18 lng I understand that
06:20 lng I never encountered this issue with IP addresses setup
06:22 johnmark :O
06:24 yeming I use hostnames. The original server is shown as an IP address on the rest of the servers. But I haven't encountered a problem yet. What kind of problems will I run into?
06:24 kshlm lng: are you executing commands from server1? can server1 resolve itself?
06:26 lng kshlm: I have terminated both
06:27 lng going to start from scratch
06:28 lng kshlm: how to check if it resolves?
06:29 sripathi joined #gluster
06:29 kshlm one check is to ping server1's hostname you used from server1 itself. you could also try 'peer probe server1' on server1, if it says 'probe not needed on localhost' then resolution is happening correctly.
06:31 lng yes, it was saying 'probe not needed on localhost'
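A quick sketch of the resolution checks described above, run on server1 itself (glustertest-server1.com is an assumed hostname; the exact wording of the localhost message varies by version):

    ping -c1 glustertest-server1.com        # should resolve and answer
    gluster peer probe glustertest-server1.com   # expect a 'probe on localhost not needed' style reply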
06:45 puebele joined #gluster
06:49 lng kshlm: same problem on the new servers
06:49 lng Failed to perform brick order check. Do you want to continue creating the volume?  (y/n) y
06:50 lng is it normal?
06:50 ctria joined #gluster
06:52 sripathi joined #gluster
06:53 lng kshlm: I can ping both hosts
06:54 lng locally and remotely
06:59 glusterbot New news from newglusterbugs: [Bug 859183] volume set gives wrong help question <https://bugzilla.redhat.com/show_bug.cgi?id=859183>
06:59 ramkrsna joined #gluster
06:59 ramkrsna joined #gluster
07:00 lng Creation of volume data has been successful. Please start the volume to access data.
07:00 lng Creation of volume data has been successful. Please start the volume to access data.
07:00 lng sorry for double posting
07:00 lng with the use of IPs it works
07:01 lng probably this is Route53 problem
07:01 nightwalk joined #gluster
07:02 lng maybe I will use ElasticIP and public DNS...
07:02 Azrael808 joined #gluster
07:03 mdarade2 joined #gluster
07:16 kshlm lng: so you are using aws. i've heard that it is quite problematic to set up and there are ways to overcome that. unfortunately, i don't know much about it, but someone in this channel might know.
07:19 tjikkun_work joined #gluster
07:22 vimal joined #gluster
07:24 lkoranda joined #gluster
07:25 lng kshlm: okay, thanks for helping me anyway!
07:25 andreask joined #gluster
07:29 glusterbot New news from newglusterbugs: [Bug 868796] glusterd crashed <https://bugzilla.redhat.com/show_bug.cgi?id=868796> || [Bug 868792] baseurl in the yum repo file is incorrect. <https://bugzilla.redhat.com/show_bug.cgi?id=868792>
07:34 sripathi joined #gluster
07:42 hagarth joined #gluster
07:49 Nr18 joined #gluster
07:55 lng kshlm: finally, I probed that by host
07:56 lng oh, my bad - I was able to probe it before too
07:56 pkoro joined #gluster
07:58 Triade joined #gluster
07:59 glusterbot New news from newglusterbugs: [Bug 868801] gluster volume creation says 'success' but volume does not exist on any of the peers <https://bugzilla.redhat.com/show_bug.cgi?id=868801>
08:04 gbrand_ joined #gluster
08:13 guigui3 joined #gluster
08:19 dobber joined #gluster
08:19 badone_ joined #gluster
08:23 gbrand_ joined #gluster
08:29 vikumar joined #gluster
08:30 ekuric joined #gluster
08:35 nightwalk joined #gluster
08:53 quillo joined #gluster
09:04 nightwalk joined #gluster
09:04 TheHaven joined #gluster
09:13 zoldar does mountbroker configuration require configuring volume on geo-replication slave? or can a plain directory be used?
09:16 gbrand__ joined #gluster
09:17 TheHaven joined #gluster
09:20 tryggvil joined #gluster
09:21 tryggvil joined #gluster
09:22 ramkrsna joined #gluster
09:22 ramkrsna joined #gluster
09:22 lng kshlm: fixed the issue
09:22 sgowda joined #gluster
09:30 glusterbot New news from newglusterbugs: [Bug 835034] Some NFS file operations fail after upgrading to 3.3 and before a self heal has been triggered. <https://bugzilla.redhat.com/show_bug.cgi?id=835034> || [Bug 842549] getattr command from NFS xlator does not make hard link file in .glusterfs directory <https://bugzilla.redhat.com/show_bug.cgi?id=842549>
09:31 ekuric1 joined #gluster
09:46 ekuric1 left #gluster
10:00 glusterbot New news from newglusterbugs: [Bug 847821] After disabling NFS the message "0-transport: disconnecting now" keeps appearing in the logs <https://bugzilla.redhat.com/show_bug.cgi?id=847821>
10:06 lng I have stopped one of the two nodes; after that, the stopped node's IP changed when I started it again. This setup is name-based - peers have been probed by hostnames. I have updated DNS according to the new IP and the hostname resolves to it. Finally, from the restarted server I can see 'State: Peer in Cluster (Connected)', but from the second node, it's 'State: Peer in Cluster (Disconnected)'. What should I do about it?
10:07 lng so where's the benefit of hostnames?
10:07 lng of stopped node*
10:09 lng oh, seems like I need to restart glusterd on the second node
10:09 lng but this is not good
10:10 lng _is it a normal situation that I need to restart the daemon on another node_?
10:11 lng OMG
10:11 lng now the client writes to only 1 node
10:12 lng the node with the changed IP stays intact
10:28 vpshastry1 joined #gluster
10:28 sgowda joined #gluster
10:30 Alpinist joined #gluster
10:30 glusterbot New news from newglusterbugs: [Bug 849526] High write operations over NFS causes client mount lockup <https://bugzilla.redhat.com/show_bug.cgi?id=849526>
10:33 lng how to sync replicated bricks?
10:34 edward1 joined #gluster
10:35 hagarth joined #gluster
10:38 lng gluster is not easy to operate with...
10:38 lng it is not intuitive and there's not enough info to be found on the web
10:39 nocturn joined #gluster
10:42 tryggvil_ joined #gluster
10:43 raghu joined #gluster
10:44 lng how to sync replicated bricks???
10:44 lng please anybody
10:47 lng obviously I need to engage self-healing, but it does not help
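For reference, GlusterFS 3.3 exposes the self-heal daemon through the CLI; a sketch, assuming the volume is named 'data' as created earlier in this log:

    gluster volume heal data          # heal entries that are known to need it
    gluster volume heal data full     # crawl the whole volume and heal anything out of sync
    gluster volume heal data info     # list entries still pending heal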
10:49 tryggvil joined #gluster
11:24 henrik___ joined #gluster
11:30 glusterbot New news from newglusterbugs: [Bug 868796] glusterd crashed <https://bugzilla.redhat.com/show_bug.cgi?id=868796>
11:43 duerF joined #gluster
11:50 andreask joined #gluster
12:01 balunasj joined #gluster
12:11 hagarth joined #gluster
12:11 tryggvil joined #gluster
12:12 tryggvil_ joined #gluster
12:13 tryggvil_ joined #gluster
12:17 manik joined #gluster
12:24 Nr18 joined #gluster
12:28 plarsen joined #gluster
12:30 ankit9 joined #gluster
12:38 sshaaf joined #gluster
13:00 hagarth joined #gluster
13:00 glusterbot New news from newglusterbugs: [Bug 857549] brick/server replacement isn't working as documented.... <https://bugzilla.redhat.com/show_bug.cgi?id=857549>
13:04 kkeithley wiqd: if you want to take a gander at the squeeze.repo I put up this morning and see if it's okay——
13:04 _Bryan_ JoeJulian: You online?
13:11 aliguori joined #gluster
13:22 hagarth joined #gluster
13:28 Nr18 joined #gluster
13:29 TheHaven joined #gluster
13:31 Nr18 joined #gluster
13:34 vpshastry1 left #gluster
13:35 roffer joined #gluster
13:38 ondergetekende joined #gluster
13:46 oneiroi if I have a replica setup and want to go to a replica 5 setup, how would I go about adding each brick to the existing volume and ensuring a resync on the new brick to bring it up to date?
13:48 oneiroi perhaps: http://community.gluster.org/q/expand-a-replica-volume/
13:48 glusterbot Title: Question: expand a replica volume (at community.gluster.org)
13:51 manik joined #gluster
13:55 johnmark kkeithley: ping
13:55 johnmark kkeithley: see gluster-users - any way you could package up 3.3.0 RPMs for your epel repo?
13:57 Nr18 joined #gluster
13:57 kkeithley and break HekaFS?
13:58 johnmark kkeithley: er. no comprende
13:58 johnmark kkeithley: basically, someone upgraded to 3.3.1, found some type of NFS UCARP bug, and wanted to downgrade
13:59 shylesh joined #gluster
13:59 kkeithley The 3.3.0 RPMs are still there in the .../old/ subdir
13:59 shylesh_ joined #gluster
13:59 johnmark oh! ok
13:59 kkeithley you're talking about my fedorapeople.org repo, not the real EPEL repo
14:03 johnmark kkeithley: correct
14:03 johnmark kkeithley: the real EPEL repo is the official rh.com one, right?
14:04 stopbit joined #gluster
14:05 kkeithley EPEL? That's a fedoraproject thing.
14:05 johnmark kkeithley: oh oh... gah
14:05 johnmark kkeithley: in any case, yes, I was referring to your repo on fedorapeople
14:21 wushudoin joined #gluster
14:21 JoeJulian G'morning _Bryan_.
14:21 JoeJulian oneiroi: http://www.joejulian.name/blog/glusterfs-replication-dos-and-donts/
14:21 glusterbot Title: GlusterFS replication dos and donts (at www.joejulian.name)
14:23 lh joined #gluster
14:23 lh joined #gluster
14:25 morse joined #gluster
14:28 sripathi joined #gluster
14:29 Nr18 joined #gluster
14:31 oneiroi JoeJulian: interesting, thanks I was indeed using replica for availability will review that :)
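For the record, 3.3 can raise the replica count while adding bricks; a hedged sketch with made-up volume and brick names (going from replica 2 to replica 3 here; the same form applies for higher counts, and on a distributed-replicated volume you add one new brick per existing replica set):

    gluster volume add-brick myvol replica 3 server3:/bricks/myvol
    gluster volume heal myvol full     # let self-heal populate the new brick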
14:33 nueces joined #gluster
14:35 fubada hi folks, i missed someones reply to my question regardign the _netdev option not being valid in fstab on centos 6.  My scroll buffer ate the answer to this question
14:35 fubada can someone clarify
14:35 fubada defaults,_netdev,backupvolfile-server=gluster2.foo.com 0 0
14:36 fubada is that i use as mount options
14:36 fubada unknown option _netdev (ignored)
14:36 fubada is what i get on mount
14:37 hagarth joined #gluster
14:38 Alpinist joined #gluster
14:39 shylesh_ joined #gluster
14:40 JoeJulian fubada: http://irclog.perlgeek.de/gluster/2012-10-20#i_6081722
14:40 glusterbot Title: IRC log for #gluster, 2012-10-20 (at irclog.perlgeek.de)
14:40 fubada thank you
14:41 fubada got it
14:41 JoeJulian You're welcome. Thought you'd like knowing where the logs have moved as well.
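For reference, the fstab line under discussion would look roughly like the sketch below; the 'unknown option _netdev (ignored)' warning comes from mount.glusterfs not recognizing the option itself, while the init scripts still honor it to delay the mount until the network is up. The primary server, volume name and mount point here are assumptions; only the backupvolfile-server value comes from fubada's paste:

    gluster1.foo.com:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,backupvolfile-server=gluster2.foo.com  0 0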
14:44 deepakcs joined #gluster
14:49 _Bryan_ JoeJulian:  Sorry was afk...
14:49 _Bryan_ JoeJUlian: in reference to your email on replace brick....do you still have the env setup where you can test something on it?
14:50 JoeJulian I do have 2 more drives to migrate.
14:50 nocturn left #gluster
14:55 johnmark JoeJulian: which reminds me, need to bring in that feed to gluster.org
14:55 foexle joined #gluster
14:57 foexle heyho guys, the gluster 3.2 doc links seem to be down. Is it a server prob? Or can I find the pdf or html somewhere else?
15:01 JoeJulian hmm... ,,(rtfm) << 'cause I'm lazy and this gets me the link quickest...
15:01 glusterbot Read the fairly-adequate manual at http://gluster.org/community/documentation//index.php/Main_Page
15:01 JoeJulian johnmark!! ^^ link is dead for the IG.
15:02 morse joined #gluster
15:03 Teknix joined #gluster
15:08 johnmark JoeJulian: what?
15:08 Teknix joined #gluster
15:08 johnmark JoeJulian: which 3.2 doc? oh, right- that was hosted on d.g.o
15:08 johnmark bleh. hang on
15:09 chouchins joined #gluster
15:14 johnmark JoeJulian: IG guide link should really go to new quickstart
15:15 MrHeavy__ joined #gluster
15:19 morse joined #gluster
15:28 daMaestro joined #gluster
15:28 JoeJulian _Bryan_: did you actually have something you wanted me to test?
15:30 Tarok_ joined #gluster
15:30 Tarok joined #gluster
15:30 Tarok Hi all
15:31 JoeJulian What's up, Tarok?
15:31 Tarok I have a question about gluster
15:31 Tarok I have a pool with 2 nodes gluster
15:31 semiosis you're in the right place
15:31 Tarok And i want to add one
15:31 Tarok ;)
15:32 JoeJulian Tarok: Something like this? http://www.joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
15:32 glusterbot Title: How to expand GlusterFS replicated clusters by one server (at www.joejulian.name)
15:32 Tarok add one node  without stopping the pool
15:33 Tarok Yes JoeJulian  ;) I'll read this and i come back later (sorry for my BAD english ;) )
15:34 Tarok Thx a lot
15:34 JoeJulian You're quite welcome.
15:34 * semiosis went to the first meeting of the local sysadmin group saturday
15:35 JoeJulian How many ponytails did you count?
15:35 semiosis out of about 8 people in attendance, three (including myself) are using glusterfs!
15:35 semiosis i was shocked
15:35 JoeJulian Wow, nice ratio.
15:35 semiosis !!!
15:36 johnmark semiosis: wait what? where was this?
15:36 JoeJulian florida
15:36 johnmark semiosis: that's pretty cool
15:36 semiosis the magic city
15:36 * JoeJulian has a firm grasp of the blatantly obvious.
15:37 semiosis JoeJulian: no ponytails tho
15:37 JoeJulian must be a northwest thing.
15:37 semiosis probably
15:39 _Bryan_ JoeJulian:  I can replace a 10TB brick in about 18 hours.....but I run with cluster.background-self-heal-count: 2,cluster.data-self-heal-algorithm: full
15:39 _Bryan_ JoeJulian:  I would like to know when you replace a brick with a new server...if you set this for the initial sync..if everything happens in a much more reasonable time
15:39 _Bryan_ JoeJulian: in comparison to what you are seeing normally.
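The two options _Bryan_ runs with are per-volume settings; a minimal sketch with a placeholder volume name:

    gluster volume set myvol cluster.data-self-heal-algorithm full
    gluster volume set myvol cluster.background-self-heal-count 2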
15:42 JoeJulian I think one difference is that I (and sure, it may be foolishly) run 15 volumes. At 1 brick per disk on each of 4 disks that means that when I replaced the server I was healing 60 bricks. Even at a self-heal count of 2, that's 120 heals.
15:43 JoeJulian I suppose I could try bulde's proposal one brick at a time though.
15:44 JoeJulian Yeah, I'll do that with a VM brick overnight tonight or tomorrow and follow up on that email.
15:50 hagarth joined #gluster
15:57 johnmark semiosis: sounds like they're ready for an intro talk on GlusterFS
15:57 johnmark semiosis: hint, hint ;)
16:02 _Bryan_ Cool...
16:03 semiosis johnmark: yup
16:03 semiosis and stickers
16:04 semiosis which i brought in the car but forgot to bring in with me :(
16:06 _Bryan_ As a point of note though..in my system...I have 1 brick on each server....can't wait to see if this helps
16:10 wushudoin| joined #gluster
16:21 bala1 joined #gluster
16:24 ondergetekende joined #gluster
16:26 Mo_ joined #gluster
16:30 johnmark there have been a spate of unsubs from the mailing list today
16:30 johnmark I think it's time to put together some posting guidelines
16:35 elyograg johnmark: rats fleeing from the minor flamewar?
16:35 johnmark elyograg: apparently :)
16:36 johnmark elyograg: I want gluster-users to be a happy place, so I don't mind laying down a little law
16:36 johnmark emphasis on the word "little"
16:37 JoeJulian Did I still come across as harsh? I was trying to soften my blows as much as possible.
16:37 johnmark JoeJulian: I don't think you came across as harsh
16:37 JoeJulian I was much nicer than I wanted to be.
16:37 johnmark when I say "lay down the law" I mean create posting guidelines that are pro-constructive criticism and discourage ranting
16:38 johnmark ie. "ZOMG! GlusterFS sucks!
16:38 johnmark is not particularly helpful
16:38 JoeJulian :)
16:38 johnmark but something along the lines of 1. here's my problem 2. can this work? 3. here's the BZ number
16:39 johnmark JoeJulian: and I could tell you held back. A little gentle herding can be quite useful
16:40 redsolar joined #gluster
16:42 JoeJulian I recognize his frustration. It happens from time-to-time here. Someone has some obscure issue that's impossible to track down that's giving him issues that nobody else has. We can't track those down and unless they have enough skill to isolate their own problem, it's never going to get resolved.
16:43 Tarok JoeJulian, I need to add my 3rd server to the pool before doing the steps in your post (http://www.joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/)
16:43 glusterbot Title: How to expand GlusterFS replicated clusters by one server (at www.joejulian.name)
16:43 JoeJulian That's not to say that they're unskilled, just that the problem may be exceptionally difficult.
16:44 JoeJulian Tarok: That's just a simple peer probe.
16:44 Tarok ok thx
16:44 johnmark JoeJulian: indeed.
16:50 johnmark what is the safest way to upgrade again? Stop all Gluster volumes, stop glusterd, upgrade and then restart?
16:50 johnmark and you should start with servers?
16:51 johnmark @stats
16:51 glusterbot johnmark: I have 3 registered users with 0 registered hostmasks; 1 owner and 1 admin.
16:51 JoeJulian @channelstats
16:52 glusterbot JoeJulian: On #gluster there have been 33352 messages, containing 1474290 characters, 244079 words, 959 smileys, and 146 frowns; 243 of those messages were ACTIONs. There have been 11834 joins, 384 parts, 11431 quits, 1 kick, 23 mode changes, and 4 topic changes. There are currently 160 users and the channel has peaked at 185 users.
16:52 johnmark ah, that's the one :)
16:52 semiosis 1 kick... let that be a warning to the rest of ya!  :P
16:53 semiosis ...to not copy all your terminal input to this channel lol
16:53 JoeJulian Well, the safest way to upgrade would be to install all new hardware, install the software from scratch, build your volume(s), kick all your users off the system, then copy your data onto the volume through client mount.
17:02 ika2810 joined #gluster
17:09 JoeJulian johnmark: One other possibility as to why people are leaving the mailing list: it's working and they don't need help anymore.
17:12 johnmark JoeJulian: also true :)
17:23 JoeJulian Ok, coffee then upload form.
17:32 wintix joined #gluster
17:33 ondergetekende joined #gluster
17:33 y4m4 joined #gluster
17:34 ika2810 left #gluster
17:35 nueces joined #gluster
17:36 wintix hey there, are there any best practices for having large replicated glusters over a 10G ethernet link? like what default options would you change, what mtu would you use on the network, etc?
17:38 wintix i've searched the web quite a bit but i can't seem to find any up-to-date blog entries, performance-optimized setup docs or anything in that regard
17:39 semiosis well jumbo frames seems pretty obvs.
17:39 JoeJulian I think, for the most part, that the defaults are the best generally. Anyone that changes from the defaults does so because they've tested their specific application and tuned to it. Network wise, though, sure. Bigger frames is a +1.
17:41 Tarok Hum,  It's not a success ...
17:42 Tarok I lost my data (it's not important)
17:42 * JoeJulian raises a skeptical eyebrow.
17:42 Triade joined #gluster
17:43 wintix yeah, that's kind of what i was expecting, JoeJulian, semiosis. so no obvious must-reads on the web then?
17:43 tru_tru_ joined #gluster
17:44 JoeJulian My entire blog. ;)
17:45 _Bryan_ I have a question...lets say I have a 60TB gluster volume....and I want to add to it...is there anything I can do manually to speed up the rebalance of data when I add the new replica pair to the volume?
17:45 _Bryan_ wintix: this is what I am runing...what kind of NICs are you using?
17:45 Triade1 joined #gluster
17:46 tru_tru_ joined #gluster
17:46 _Bryan_ wintix: I have found that to push full network speed you have to disable tso, rx and tx offloading and checksumming
17:46 _Bryan_ if you do not then you get limited to about 20-25% of possible speeds
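A sketch of the NIC tuning described above, assuming the interface is eth0; which offload flags a driver accepts varies, so treat it as a starting point rather than a recipe:

    ethtool -K eth0 tso off gso off gro off rx off tx off   # segmentation offload and rx/tx checksum offload
    ip link set dev eth0 mtu 9000                           # jumbo frames, if the switch supports them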
17:46 wintix JoeJulian: you might actually be right there. will give it a read, thanks for the hint
17:47 _Bryan_ wintix: beyond that you will want to tune your file systems and performance based on the profiles of your file types and sizes
17:47 wintix _Bryan_: we're using Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection
17:48 _Bryan_ wintix: brand?  I have used Emmulex, HP and QLogics
17:48 _Bryan_ wintix: or straight Intel?
17:48 wintix _Bryan_: Yeah, straight Intel.
17:48 JoeJulian _Bryan_: I haven't heard of anything, and structurally I can't think of any way either. Obviously you can just do the rebalance...fix-layout and files will start filling in the new bricks, but as for actually moving files and balancing usage, not really.
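The two rebalance modes being contrasted, in 3.3 CLI syntax with a placeholder volume name:

    gluster volume rebalance myvol fix-layout start   # only rewrites directory layouts so new files can land on the new bricks
    gluster volume rebalance myvol start              # also migrates existing files onto the new bricks
    gluster volume rebalance myvol status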
17:49 _Bryan_ yeah, don't want to let it just fill the new ones as I will lose the distributed speed I get from having files spread out
17:49 _Bryan_ I am almost considering building out a whole new gluster volume and syncing over and letting it distribute while it copies
17:49 _Bryan_ as I can complete that faster than a gluster rebalance
17:50 wintix _Bryan_: so you actually manage to saturate a 10G link with gluster on top?
17:50 JoeJulian jdarcy's prototyped a method that won't cause as much data movement for that, but I don't know where it is as far as implementation.
17:50 _Bryan_ wintix: I can bring it to its knees....
17:50 nightwalk joined #gluster
17:50 _Bryan_ wintix: I run normally...in the 750-800MB/s speeds...
17:50 _Bryan_ to keep from killing my 10GB switches
17:50 wintix heh.
17:50 _Bryan_ the problem occurs due to the buffers in the switch..they can get blown...if you push too hard
17:51 _Bryan_ but I have pushed up to 970MB/s
17:51 _Bryan_ but was getting alot of packet drops...
17:51 _Bryan_ so I settled on 700-800
17:52 _Bryan_ We have just moved to a new enterprise 10GB switch..and things are looking better...in normal use...but I have not had a chance to test and find their crater point yet
17:52 _Bryan_ One of my big performance boosts came from tuning the file systems the bricks are on....this was as important as tuning the NICs and Gluster options
17:53 wintix below the gluster will be an xfs fs. it's pretty much the default config except a few tweaks in logbufs, running it with noatime, nodiratime and inode64
17:54 _Bryan_ wintix:  here is my xfs mount line..
17:54 _Bryan_ mount -t xfs -o rw,noatime,attr2,nobarrier,inode64,noquota /dev/sdb1 /gfs/brick0
17:55 _Bryan_ here is the mkfs for that
17:55 _Bryan_ mkfs -t xfs -f -l size=128m -d su=256k,sw=10 /dev/sdb1
17:55 _Bryan_ this is for a setup with alot of LARGE 300MB-4GB files
17:55 _Bryan_ I run on average between 3-4 million files on a gluster volume
17:57 wintix give me a sec, have to look at the xfs manpage for a moment :)
17:58 tru_tru joined #gluster
18:00 wintix _Bryan_: i'm guessing you're running on a hw raid controller. how many disks do you have in your raid6(?) ?
18:01 _Bryan_ wintix:  I run RAID5 with 22 x 500GB sata drives
18:01 _Bryan_ and yes on a HW Raid Controller
18:02 wintix my setup will be raid60 over 48 4tb disks
18:03 _Bryan_ wintix:  You will want to create multiple bricks on that volume to mitigate any issues down the road..
18:03 wintix atm i have a setup with 48x 2tb drives that can easily handle peaks of 1.3GB/s of written data throughput
18:03 _Bryan_ how are you testing?
18:03 wintix didn't really feel the need to tune the xfs much because of those performance figures
18:04 wintix i'm just having issues with getting gluster to perform roughly on the same level
18:04 _Bryan_ gluster offers up its performance in scale...
18:04 _Bryan_ I can run 18 clients all pulling 700-800MB/s at the same time...
18:04 elyograg _Bryan_: your mkfs doesn't have the option to increase inode size.  Everything I've read says to make it 512 (default 256), but if you want to plan for anything else that uses xattrs (like ACLs) 1024 might be better.
18:04 _Bryan_ that aggregate is well north of 1.3
18:05 Gilbs1 joined #gluster
18:05 wintix _Bryan_: from how many replicated bricks are the clients pulling?
18:06 _Bryan_ elyograg:  never come close to using up the inodes...so no need to increase the size to allow for more files...
18:06 _Bryan_ wintix:  I am running 6x2
18:06 _Bryan_ each brick is 10TB...with distributed replicated on 12 servers
18:06 _Bryan_ I only have one brick on each server
18:07 elyograg _Bryan_: it's not to allow more files.  the xattrs that gluster adds are likely to exceed 256 bytes, so not everything will fit in one inode.  a request for a file will then take two disk seeks instead of one.
18:07 _Bryan_ Hmm....I will look at that...thanks...elyograg
18:07 _Bryan_ Added to notes list for my gluster configuration.....
18:08 wintix [volume split]: i thought of that but am unsure if the complexity it adds is worth it
18:08 _Bryan_ I might add...I do not run a very deep directory structure either....it is like 8 deep at max
18:08 wintix seems sensible.
18:09 wintix elyograg: thanks for the hint
18:12 wintix _Bryan_: you have been very helpful, thanks, my todolist for tomorrow just got bigger++ :)
18:13 _Bryan_ hehe...hope it helps..I went through weeks working on mine...to get it tuned....feel free to ping me and I will offer what I can...
18:13 MinhP joined #gluster
18:13 _Bryan_ elyograg:  you are referring to "-i size=512" correct?
18:13 wintix _Bryan_: yeah, finding myself in the trial-and-error weeks right now, it's much appreciated, thanks :)
18:13 elyograg _Bryan_: that sounds correct.  I used 1024 because I may want to use ACLs in the future.
18:14 _Bryan_ elyograg: thanks....appreciate the advice
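Applied to _Bryan_'s mkfs line above, elyograg's suggestion would look something like this (the log and stripe values are _Bryan_'s; only the -i size is new):

    mkfs -t xfs -f -i size=512 -l size=128m -d su=256k,sw=10 /dev/sdb1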
18:17 Tarok Yeah ! Thx a lot JoeJulian ! I succeeded
18:17 JoeJulian Yay!
18:17 puebele joined #gluster
18:18 Tarok bye
18:21 wintix so far i didn
18:22 wintix so far i didn't really bother with xattrs on the fs backend, i take it they should be enabled, despite me being unable to find any mention of it in the docs other than that gluster can be backed by filesystems that support xattrs?
18:24 wintix ah, never mind, it seems xfs has them enabled by default and hence it's not mentioned.
18:24 wintix just got confused with _Bryan_'s mount line
18:26 elyograg wintix: I have not seen attr2 before.  glusterfs will work with xfs with no special mount options as far as I'm aware. I use this on my testbed: noatime,nodiratime,nobarrier,inode64
18:26 elyograg nobarrier is only included because when this goes into production, the servers will have NVRAM-backed cache.
18:28 wintix same here. i only have logbufs=4 added in addition
18:34 tryggvil joined #gluster
18:37 sashko joined #gluster
18:37 Teknix joined #gluster
18:37 gbrand_ joined #gluster
18:39 hagarth left #gluster
18:43 chouchins joined #gluster
18:45 chouchins joined #gluster
18:48 VisionNL inodes issues? We have a few million files in some volumes without a problem on the max inode end
18:49 JoeJulian It's just more efficient if you can fit all the xattrs for one file into the one inode.
18:53 VisionNL _Bryan_: what kind of switch did you upgrade from and to?
18:59 tru_tru joined #gluster
19:02 ondergetekende joined #gluster
19:15 Technicool joined #gluster
19:16 sashko joined #gluster
19:16 Erwon joined #gluster
19:20 elyograg how do i take the gluster-swift config, which I finally got working with kkeithley's directions and one tiny addition, and switch it so it's presenting a working S3 interface?
19:20 elyograg I've heard keystone mentioned many times, but although I can find a lot of swift keystone stuff, I haven't found anything on keystone itself.
19:21 elyograg although i would appreciate being fed a complete config, it would be nice if someone can point me at the relevant documentation so I can figure it out myself.
19:23 kkeithley well, most of the UFO work was actually done on swift-1.4.4, which only had tempauth for authentication. The keystone auth apparently landed very late for 1.4.8 (don't quote me here)
19:23 kkeithley Then gluster did a quickie rebase against 1.4.8.
19:23 kkeithley Bottom line, AFAIK, UFO currently only supports tempauth.
19:24 elyograg kkeithley: can tempauth be leveraged into an s3 interface?
19:24 kkeithley I honestly don't know.
19:24 johnmark elyograg: you know who might be able to answer your question?
19:24 johnmark jbrooks: you there?
19:25 johnmark elyograg: I *think* jbrooks has used UFO with keystone, but don't quote me until he says so himself
19:25 jbrooks johnmark, hey
19:25 jbrooks um
19:25 elyograg at this point, i don't really care whether keystone is involved.  i just want an s3 interface.  it would be nice if i could have both swift and s3.
19:26 jbrooks I don't think I've gotten keystone working w/ it
19:26 jbrooks for this post, I definitely didn't use keystone: http://blog.jebpages.com/archives/a-buzzword-packed-return-to-gluster-ufo/
19:26 glusterbot Title: A Buzzword-Packed Return to Gluster UFO | jebpages (at blog.jebpages.com)
19:27 jbrooks But S3 works
19:27 Erwon Hello everyone. I have a question: i've set up 2 bricks, created a replica volume and all seems to be going well. i nfs to brick1, save 20gb of data, "fail brick1", connect to brick2 via nfs, continue working like a happy camper (all good so far)... but when i boot up the "failed" brick1 a bit later i get split-brain heal info and whatnot... do i need to do something by hand to get brick1 up again and sync it/should i not autostart gluster  and first clean o
19:27 elyograg wow, cyberduck sure starts slow.
19:29 johnmark elyograg: heh
19:29 johnmark elyograg: oh, there's a filter for s3 stuff
19:29 johnmark elyograg: which I think is independent of the auth filter
19:30 jbrooks http://docs.openstack.org/trunk/openstack-object-storage/admin/content/configuring-openstack-object-storage-with-s3_api.html
19:30 johnmark jbrooks: that' sthe one :)
19:30 glusterbot Title: Configuring Object Storage with the S3 API - OpenStack Object Storage Administration Manual  - trunk (at docs.openstack.org)
19:33 elyograg johnmark: that page says swauth.  i've got tempauth.  When I try it using cyberduck's s3 connection type, I get a 500 error.
19:34 elyograg oh, i might have modified the wrong server. :)
19:36 elyograg ok, progress.  now it connects and will show me top level stuff in the container ... but i can't expand the directory tree any further, like i can when i connect using the swift interface.
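For context, the S3 layer described on that docs page is the swift3 middleware inserted into the proxy pipeline ahead of the auth filter; a rough sketch of the proxy-server.conf change for a tempauth setup of that era. The egg name shown is what swift 1.4.x bundled; later releases split swift3 into a separate package, so check your version:

    [pipeline:main]
    pipeline = healthcheck cache swift3 tempauth proxy-server

    [filter:swift3]
    use = egg:swift#swift3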
19:37 johnmark elyograg: yeah, I think that's in line with what I experienced
19:37 johnmark are you using cyberduck?
19:38 elyograg yes.
19:38 johnmark actually, I know someone who had success with cyberduck - and cloudberry, too
19:39 johnmark elyograg: I've also had success with jekyll-s3
19:40 johnmark a ruby gem thingy
19:40 johnmark elyograg: BUT, there's also been a problem with expanding files in subdirs
19:41 JoeJulian I need to come up with some sort of innuendo-style name for my demo...
19:41 y4m4 joined #gluster
19:41 elyograg johnmark: I think that subdirs will be a requirement.
19:42 johnmark elyograg: it depends on what you would use on the client
19:42 johnmark elyograg: if it's all cyberduck, it shouldn't be an issue. my issue stemmed from trying to include files into static HTML
19:42 elyograg johnmark: some kind of java API ... but a desktop app would be exceedingly nice.
19:43 johnmark like when I tried to bring in a CSS file in an HTML page via a browser
19:43 elyograg might also be APIs for C# and a few other languages.
19:43 elyograg i won't be writing the code, I'm the admin.
19:43 johnmark do you absolutely require s3 compatibility?
19:45 elyograg that's what the programmers keep talking about.  if there are very fast APIs for swift, we could try those.  When I was trying out swift as a storage solution (using their own back-end storage method, not gluster), the rackspace java API seemed very slow to me.
19:46 elyograg granted, I was comparing it to object storage in mongodb, which is not really apples to apples, but the performance difference was absolutely staggering.
19:48 elyograg so my hope was to try out an s3 interface, because in theory amazon ought to have performance bottlenecks pretty well worked out by now.
19:48 andreask joined #gluster
19:49 jbrooks johnmark, when you were working with the static site stuff, did you put StaticWeb in your pipeline: http://docs.openstack.org/developer/swift/misc.html#module-swift.common.middleware.staticweb
19:49 glusterbot Title: Misc Swift 1.7.5-dev documentation (at docs.openstack.org)
19:49 JoeJulian johnmark: So you wanted RH branding instead of this even better look? https://docs.google.com/open?id=0B4p9a4V-9wr-WGpLNG4wMHVLMUE
19:49 glusterbot Title: ufo_demo.png - Google Drive (at docs.google.com)
19:50 jbrooks elyograg, I wouldn't expect the different api to change performance
19:50 johnmark jbrooks: I did
19:51 JoeJulian jbrooks: using staticweb, tempurl and formpost for that. :D
19:51 jbrooks ok -- I haven't messed w/ that stuff on my own install yet. I did notice that jekyll was setting the wrong links
19:52 jbrooks JoeJulian, fun!
19:52 johnmark jbrooks: yeah, it was.
19:52 JoeJulian Need to figure out the cname thing after this is done.
19:52 johnmark JoeJulian: heh. oh believe me, I like your version better
19:52 jbrooks I bet it could be made to do it right, though
19:52 johnmark JoeJulian: I think we'll have different themes, depending on which you prefer
19:53 JoeJulian I'm sure.. but I just wanted to whet your appetite a little. Got the Ant on there and all, making it look even more cool than it is.
19:54 johnmark JoeJulian: heh heh... very sneaky, JoeJulian!
19:55 JoeJulian As soon as I get this last bit, I'll have to head down to the mac store and see if it works on those things.
19:55 JoeJulian I'm not gunna buy one, just use theirs.
19:55 johnmark haha
19:55 elyograg so this directory problem ... is it a problem on the server or the client?
19:55 johnmark well, I can tell you with my wife's ipad in a jiff
19:56 elyograg meeting time.  afk for a bit.
19:56 johnmark elyograg: I haven't fully investigated. I wouldn't feel comfortable saying one or the other
19:58 balunasj joined #gluster
20:14 hchiramm_ joined #gluster
20:15 pdurbin joined #gluster
20:19 semiosis amazing luck, i suppose... in this AWS meltdown i only lost two servers' root disks (which autoscaling & puppet completely replace & rebuild) and not a single glusterfs brick!
20:20 jdarcy :)
20:20 semiosis yes, in case you missed the memo, us-east-1 has been melting down for the last hour+
20:20 JoeJulian ew
20:23 pdurbin semiosis: i guess i'm... happy for you? :)
20:23 semiosis pdurbin: thanks?
20:24 pdurbin heh
20:43 duerF joined #gluster
20:49 Gilbs1 left #gluster
20:59 y4m4 joined #gluster
21:02 wN joined #gluster
21:05 Triade joined #gluster
21:06 jiffe98 joined #gluster
21:10 tryggvil joined #gluster
21:44 JoeJulian johnmark: documenting...
21:49 jiffe98 I suppose running gluster 3.3.* on ext4 is advised against?
21:50 semiosis jiffe98: it's linux kernel 3.3+ and rhel/centos with the ext stuff backported that's a problem with any version of glusterfs
21:51 semiosis but yeah generally xfs is recommended now
21:56 elyograg joined #gluster
21:59 MinhP joined #gluster
21:59 gbrand_ joined #gluster
22:06 jiffe98 gotcha
22:10 elyograg now wasn't that fun. connectivity to my home network broke.  i lurk via irssi in a screen on my home server.
22:18 nueces joined #gluster
22:19 semiosis @ext
22:19 glusterbot semiosis: I do not know about 'ext', but I do know about these similar topics: 'extended attributes', 'ext4'
22:19 semiosis ~ext4 | jiffe98
22:19 glusterbot jiffe98: Read about the ext4 problem at http://joejulian.name/blog/glusterfs-bit-by-ext4-structure-change/
22:19 semiosis sorry forgot to link you that earlier... maybe that's what you're thinking about?
22:21 hattenator joined #gluster
22:28 aliguori joined #gluster
22:44 ondergetekende joined #gluster
22:47 Nr18 joined #gluster
22:56 JoeJulian johnmark: So?!?! Does it work on Apple products?
22:56 JoeJulian https://github.com/joejulian/ufopilot
22:56 glusterbot Title: joejulian/ufopilot · GitHub (at github.com)
23:24 y4m4 joined #gluster
23:36 nueces joined #gluster
23:50 leejohn joined #gluster
23:51 elyograg joined #gluster
23:52 leejohn Good day guys, I have a 4-node replicated volume across node01-node04. Can I use at least node01 and node02 as mount points?
23:53 leejohn or on a 4-node setup do I have to use only one server? sorry if I'm too naive about this
23:53 wiqd kkeithley: sorry for the delay, all working 100% now, thanks!
23:55 leejohn gluster volume create data replica 4 node01:/data01 node02:/data02 node03:/data03 node04:/data04
23:55 leejohn this is how I create my volume just to give an info.
23:55 leejohn sorry i'm fairly new to glusterfs
23:57 leejohn I use the native glusterfs client. My question is: I have 10 workstations; can I use node01 and node02 as mount points, 5 workstations each?
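The log ends before a reply, but for reference: with the native FUSE client the server named in the mount command is only used to fetch the volume file, after which the client talks to all four bricks directly, so splitting the 10 workstations between node01 and node02 works fine. A sketch (the mount point is an assumption; the volume name 'data' is from the create command above):

    mount -t glusterfs node01:/data /mnt/data
    # optionally name a fallback server for fetching the volfile
    mount -t glusterfs -o backupvolfile-server=node02 node01:/data /mnt/data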
