
IRC log for #gluster, 2013-04-18


All times shown according to UTC.

Time Nick Message
00:20 chirino joined #gluster
00:36 \_pol joined #gluster
00:39 humbug joined #gluster
00:45 edong23_ joined #gluster
00:57 yinyin joined #gluster
01:12 glusterbot New news from newglusterbugs: [Bug 953332] repo missing 6Server directory for intstalling alpha packages on RHEL6 <http://goo.gl/pEl35>
01:24 d3O joined #gluster
01:36 tyl0r joined #gluster
02:01 awheeler_ joined #gluster
02:05 \_pol joined #gluster
02:30 yinyin joined #gluster
02:57 bharata joined #gluster
03:01 msmith_ joined #gluster
03:03 H___ joined #gluster
03:03 H__ joined #gluster
03:03 logstashbot joined #gluster
03:09 lkoranda joined #gluster
03:12 vshankar joined #gluster
03:36 saurabh joined #gluster
03:47 satheesh joined #gluster
03:55 hagarth joined #gluster
03:55 sgowda joined #gluster
03:57 bulde joined #gluster
04:04 alex88 joined #gluster
04:09 itisravi joined #gluster
04:10 pai joined #gluster
04:11 lalatenduM joined #gluster
04:13 itisravi joined #gluster
04:18 itisravi_ joined #gluster
04:20 itisravi joined #gluster
04:30 hagarth1 joined #gluster
04:35 theron joined #gluster
04:39 shylesh joined #gluster
04:40 genewitch joined #gluster
04:41 genewitch how do i tell the storage total size? i don't trust df -h, unless you guys say that is accurate. I think i should have 6TB free but it only reports 3.2 which seems suspect
04:45 vpshastry joined #gluster
04:48 \_pol joined #gluster
04:54 raghu joined #gluster
04:56 satheesh joined #gluster
05:11 samppah genewitch: what kind of setup you have? can you send out put of gluster volume info and df to pastie.org?
05:11 genewitch samppah: sure can!
05:12 logstashbot New news from newjiraissues: Peter Butkovic created LOGSTASH-1015 - Exception in thread "LogStash::Runner" org.jruby.exceptions.RaiseException: (IOError) no !/ in spec <https://logstash.jira.com/browse/LOGSTASH-1015>
05:12 glusterbot Title: [LOGSTASH-1015] Exception in thread "LogStash::Runner" org.jruby.exceptions.RaiseException: (IOError) no !/ in spec - logstash.jira.com (at logstash.jira.com)
05:12 genewitch samppah: http://bpaste.net/show/92269/ and http://bpaste.net/show/92268/
05:12 logstashbot Title: Paste #92269 at spacepaste (at bpaste.net)
05:12 glusterbot Title: Paste #92269 at spacepaste (at bpaste.net)
05:13 d3O_ joined #gluster
05:14 JoeJulian @kick logstashbot
05:14 logstashbot was kicked by glusterbot: JoeJulian
05:14 logstashbot joined #gluster
05:14 JoeJulian @ban logstashbot
05:14 JoeJulian @kick logstashbot
05:14 logstashbot was kicked by glusterbot: JoeJulian
05:14 logstashbot joined #gluster
05:15 samppah genewitch: hmm, order of bricks looks bit odd to me
05:15 JoeJulian @kban logstashbot
05:15 logstashbot was kicked by glusterbot: JoeJulian
05:16 genewitch samppah: oh let me look
05:16 samppah is it intentionally replicating data between these bricks:
05:16 samppah Brick1: glu1:/gluster/brick1
05:16 samppah Brick2: glu1:/gluster/brick2
05:16 samppah JoeJulian: can you look at that? http://bpaste.net/show/92268/
05:16 glusterbot Title: Paste #92268 at spacepaste (at bpaste.net)
05:16 genewitch samppah: yeah, they're all seperate spindles
05:16 yinyin joined #gluster
05:16 samppah ah, right.. and on same machine?
05:16 genewitch this one is just the place to transfer to the final gluster cluster and a testing platform of gluster for our needs in general
05:17 genewitch samppah: well, 4 different virtual servers on what may be different hypervisors
05:17 genewitch it's AWS on ephemeral drives, if that means something to you
05:18 JoeJulian It's fine if you're just experimenting, but samppah's right about ,,(brick order)
05:18 glusterbot Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
05:18 genewitch yes
05:18 genewitch i just got that.
05:18 JoeJulian What size is each brick supposed to be?
05:18 genewitch that is good to know.
05:18 genewitch each brick is 422GB
05:18 genewitch i think
05:19 JoeJulian So each 2 bricks is also 422 (since they're a replica pair) and your total volume should be 8x422.
05:19 genewitch http://bpaste.net/show/92269/
05:19 glusterbot Title: Paste #92269 at spacepaste (at bpaste.net)
05:20 genewitch JoeJulian: oh so df is correct :-)
05:20 genewitch is there circumstances where df is not accurate?
05:21 JoeJulian Yes, if you've assigned more than one brick to the same filesystem, it'll report that one filesystem twice.
05:21 genewitch as in /dev/sda1 has two bricks on it?
05:21 JoeJulian So, for instance, you make a volume with 4 bricks on one single 1Tb filesystem. (Say /data is the mount and /data/brick1 /data/brick2... are the bricks) that would report 4tb.
05:22 genewitch okay, i get it :-)
05:22 genewitch because it's based on freespace
05:22 JoeJulian right
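A quick way to check that point, with brick paths assumed from the paste above: df against each brick path shows which filesystem backs it, so two bricks printing the same device means that filesystem's free space is being counted twice in the volume total.
    # illustrative check: which filesystem backs each brick (paths are examples)
    for b in /gluster/brick1 /gluster/brick2 /gluster/brick3 /gluster/brick4; do
        df -h "$b" | tail -n 1
    done
    # if two bricks report the same device, the volume size double-counts that space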
05:24 ngoswami joined #gluster
05:26 genewitch How much sense would it make for me to do replica 3 and only use 15 partitions? is there a "hot standby" option in gluster?
05:26 mohankumar joined #gluster
05:27 genewitch actually i should raid5 the 4 discs on each box and do replica 2
05:28 genewitch glu1:/gluster/glu1brick glu2:/gluster/glu2brick and so on
05:31 JoeJulian I prefer the single brick per block device design, but there's no wrong way.
05:32 genewitch well, something to test, at least.
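A minimal sketch of the layout genewitch describes, one brick per server with replica 2 (hostnames, paths and the volume name extrapolated from the discussion, so treat them as assumptions); adjacent bricks in the list form the replica pairs, per the factoid above:
    gluster volume create webvol replica 2 \
        glu1:/gluster/glu1brick glu2:/gluster/glu2brick \
        glu3:/gluster/glu3brick glu4:/gluster/glu4brick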
05:32 ujjain joined #gluster
05:32 genewitch with 16 gigs each for fs cache i can see these 4 boxes hosting all our web traffic with no issues
05:32 genewitch hosting storage for*
05:33 * JoeJulian grumbles about putting cache too far from the user.
05:35 genewitch well there's varnish and squid and cdns too
05:35 genewitch and APC and whatever else they come up with to make sites faster because our storage is trash
05:38 hagarth joined #gluster
05:39 JoeJulian To be fair to storage, sometimes it's the applications that are designed wastefully with regard to I/O.
05:41 * genewitch mumbles about sysadmins using network storage for swap
05:41 genewitch Indeeeed, sir.
05:42 genewitch Well i must get out, you all are very helpful as usual
05:43 glusterbot New news from newglusterbugs: [Bug 951800] AFR writev ignores xdata <http://goo.gl/qhG6l>
05:44 samppah humm, any recommendations or rule of thumb for setting background-qlen option? :)
05:45 ngoswami_ joined #gluster
05:46 rotbeard joined #gluster
05:51 JoeJulian As many as your system's capable of without degrading your use to an unacceptable level. ;)
05:51 samppah ;)
05:51 rastar joined #gluster
05:55 guigui3 joined #gluster
06:09 Nevan joined #gluster
06:10 rgustafs joined #gluster
06:11 venkatesh joined #gluster
06:13 rgustafs joined #gluster
06:25 vimal joined #gluster
06:38 aravindavk joined #gluster
06:39 jkroon joined #gluster
06:41 jkroon hi guys, i've got a glusterfs file system for /home, now, from bash, doing exec 3>/home/.etc/uls-srvconf/.lock results in an error being generated:  -bash: /home/.etc/uls-srvconf/.lock: Input/output error
06:41 jkroon this is on a Distributed-Replicate, 2 x 2 over TCP cluster.
06:45 jkroon how do I go about trouble-shooting the issue?  It works on 2 of the nodes, the other two fails.  looking at the backing bricks, the file is stored on node 1+2, failures 2+3 usually, but this also varies, atm it's only on 1.
06:46 jkroon if I rm the file and re-create it then everything works fine again for a while ...
06:49 ollivera joined #gluster
06:53 rastar joined #gluster
06:55 tjikkun_work joined #gluster
06:57 bulde1 joined #gluster
07:00 bala joined #gluster
07:01 gbrand_ joined #gluster
07:03 ctria joined #gluster
07:04 bulde joined #gluster
07:08 zhashuyu joined #gluster
07:10 hybrid512 joined #gluster
07:12 itisravi joined #gluster
07:13 itisravi joined #gluster
07:14 rb2k joined #gluster
07:17 ash13 joined #gluster
07:22 satheesh joined #gluster
07:25 andreask joined #gluster
07:29 gbrand___ joined #gluster
07:32 dustint joined #gluster
07:37 itisravi joined #gluster
07:49 shireesh joined #gluster
08:00 ngoswami joined #gluster
08:03 saurabh joined #gluster
08:09 ricky-ticky joined #gluster
08:20 joehoyle joined #gluster
08:20 joehoyle- joined #gluster
08:34 harish joined #gluster
08:42 saurabh joined #gluster
08:47 d3O joined #gluster
08:55 vpshastry1 joined #gluster
09:08 yinyin joined #gluster
09:10 sgowda joined #gluster
09:45 bulde joined #gluster
09:49 venkatesh joined #gluster
09:51 d3O joined #gluster
09:54 duerF joined #gluster
09:56 vrturbo joined #gluster
10:03 jheretic joined #gluster
10:09 saurabh joined #gluster
10:16 d3O joined #gluster
10:25 deepakcs joined #gluster
10:31 itisravi joined #gluster
10:34 vpshastry1 joined #gluster
10:47 venkatesh joined #gluster
11:04 aravindavk joined #gluster
11:04 kkeithley1 joined #gluster
11:13 jheretic joined #gluster
11:15 hybrid5121 joined #gluster
11:18 jheretic hi, i'm wondering if anyone knows where/how the stat-prefetch information is stored on a simple 2-host, 1 brick per host replicated setup. my stat-prefetch seems to be corrupted, or maybe out of sync? certain files become inaccessible when it's turned on
11:20 rgustafs joined #gluster
11:25 shireesh joined #gluster
11:28 andreask joined #gluster
11:29 yinyin_ joined #gluster
11:37 hagarth joined #gluster
11:49 mgebbe joined #gluster
12:06 rcheleguini joined #gluster
12:13 yinyin_ joined #gluster
12:14 shireesh joined #gluster
12:18 edward1 joined #gluster
12:25 aliguori joined #gluster
12:34 jheretic joined #gluster
12:46 itisravi joined #gluster
12:49 awheeler_ joined #gluster
12:51 chirino joined #gluster
12:55 m0zes joined #gluster
13:09 karoshi joined #gluster
13:09 manik joined #gluster
13:09 karoshi is it normal for the hidden .glusterfs directory to be several GB in size?
13:15 bennyturns joined #gluster
13:16 edong23 joined #gluster
13:22 bulde karoshi: .glusterfs contains hardlinks to the 'actual' data, so if you do a ''du -sh' it would give a the size of the brick itself
13:23 karoshi ah ok
13:23 karoshi thanks
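To see bulde's point concretely (paths are illustrative): a file on the brick and its entry under .glusterfs share one inode, so du over the whole brick counts the data only once.
    # link count is at least 2: the named file plus its gfid hardlink under .glusterfs
    stat -c '%h %i %n' /bricks/b1/some/file
    # locate the matching .glusterfs entry for the same inode
    find /bricks/b1/.glusterfs -samefile /bricks/b1/some/file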
13:25 awheeler_ joined #gluster
13:32 mohankumar joined #gluster
13:40 piotrektt_ joined #gluster
13:43 vpshastry1 joined #gluster
13:50 H__ I have a file that gives "Input/output error" on any access yet sits fine and accessible on the bricks itself. I see nothing in the various logs. Any things I can try ?
13:51 jkroon H__, I have the same problem.
13:53 d3O joined #gluster
13:58 jbrooks joined #gluster
14:01 Norky joined #gluster
14:06 Nagilum_ H__: find a good version of the file on any brick, move it to /tmp or so, then delete it from all bricks and move it from /tmp/ back into the gfs
14:10 itisravi joined #gluster
14:11 H__ I have found it : split brain. Fixed it by removing the old file on one of the bricks, including the .glusterfs/ part.
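A rough sketch of the manual fix H__ describes, run on the brick whose copy you decide to discard (paths and the gfid value are illustrative; the gfid comes from the file's trusted.gfid xattr, and its hardlink lives under .glusterfs in directories named after the first two byte pairs):
    getfattr -n trusted.gfid -e hex /bricks/b1/path/to/file
    # suppose it prints trusted.gfid=0xaabbccdd...; remove both the file and its
    # gfid hardlink so the good replica can heal the file back cleanly
    rm /bricks/b1/path/to/file
    rm /bricks/b1/.glusterfs/aa/bb/aabbccdd-...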
14:13 deepakcs joined #gluster
14:17 hagarth joined #gluster
14:19 H__ why does 'split brain on file abc' not show in a logfile ?
14:22 Nagilum_ H__: it usually does
14:23 Nagilum_ H__: but don't ask me in which :>
14:27 RobertLaptop joined #gluster
14:32 karoshi am I correct that the client uses the server name given at mount time only to download the volfile and learn the names of all servers?
14:33 semiosis karoshi: ,,(mount server)
14:33 glusterbot karoshi: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrnds, or (#2) Learn more about the role played by the server specified on the mount command here: http://goo.gl/0EB1u
14:34 karoshi right, thanks
14:34 semiosis yw
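In practice that factoid boils down to something like this (server and volume names are assumptions; the backup-volfile option name has varied between releases, so check your mount.glusterfs):
    # any server in the pool can hand out the volfile at mount time
    mount -t glusterfs server1:/myvol /mnt/myvol
    # optionally name a fallback for the initial volfile fetch
    mount -t glusterfs -o backupvolfile-server=server2 server1:/myvol /mnt/myvol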
14:40 daMaestro joined #gluster
14:47 bugs_ joined #gluster
14:49 karoshi when you force a heal, how does gluster know in which direction to move data?
14:52 semiosis see the article about ,,(extended attributes) for a high level explanation of the algorithm
14:52 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://goo.gl/Bf9Er
14:58 jskinner_ joined #gluster
15:04 theron joined #gluster
15:07 karoshi ok, so there's no way to tell it "use this brick as the source" as you do, for example, for drbd
15:08 semiosis thats correct
15:10 karoshi ok thanks
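For a rough idea of what that algorithm looks at (the output below is invented for illustration; real attribute names follow the trusted.afr.<volume>-client-N pattern): the pending-operation counters in the AFR xattrs on each brick decide which copy is treated as the heal source.
    # run on a brick, not through the mount:
    getfattr -m . -d -e hex /bricks/b1/path/to/file
    # trusted.afr.myvol-client-0=0x000000000000000000000000
    # trusted.afr.myvol-client-1=0x000000020000000000000000   <- pending ops for the other replica
    # trusted.gfid=0x...
    # the copy whose counters accuse the other of pending operations is used as the source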
15:10 kkeithley| 3.4.0alpha3 yum repo at http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.0alpha3/
15:11 glusterbot <http://goo.gl/KwdPy> (at download.gluster.org)
15:11 samppah kkeithley|: thanks :)
15:13 Nagilum_ kkeithley|: have you tried installing http://repos.fedorapeople.org/repos/kkeithle/glusterfs/epel-6/x86_64/glusterfs-3.3.1-12.el6.x86_64.rpm ?
15:13 glusterbot <http://goo.gl/bd6fz> (at repos.fedorapeople.org)
15:14 ndevos Nagilum_: see https://bugzilla.redhat.com/show_bug.cgi?id=952122#c5
15:14 glusterbot <http://goo.gl/FXpMl> (at bugzilla.redhat.com)
15:14 glusterbot Bug 952122: medium, low, ---, ndevos, MODIFIED , rpms contains useless provides for xlator .so files and private libraries
15:15 kkeithley| Nagilum_: new -13s will be there soon with the fix
15:15 Nagilum_ k, thx
15:15 H__ if you stop a rebalance it can start where it left afaik. Now what if you added bricks when the rebalance is stopped, how does it know it has to start all over again ?
15:17 kkeithley| if you can't wait, get them here http://koji.fedoraproject.org/koji/packageinfo?packageID=5443
15:17 glusterbot <http://goo.gl/C5enu> (at koji.fedoraproject.org)
15:19 Nagilum_ ah, useful link!
15:28 karoshi can the mount server IP be a load-balanced VIP (eg with lvs) ?
15:29 semiosis ,,(rr-dns)
15:29 glusterbot I do not know about 'rr-dns', but I do know about these similar topics: 'rrdns'
15:29 semiosis ,,(rrdns)
15:29 glusterbot You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/ktI6p
15:29 karoshi yes, that I know
15:29 karoshi hence my question
15:29 semiosis then i dont understand your question
15:29 joehoyle- left #gluster
15:30 karoshi ie not giving out different IPs round-robin with DNS, but instead point the mount server name to a single) VIP
15:31 karoshi which load-balances among servers
15:31 karoshi the consequence of that is that clients can hit a different server each time
15:33 semiosis yeah sure i guess that would work i just dont see the benefit over rrdns
15:33 semiosis if your load balancer is just going to round robin anyway
15:37 karoshi yes, it's because we already have a balancing infrastructire in place
15:37 Nagilum_ rrdns doesn't work if the client ignores the DNS TTL
15:37 karoshi so we'd just add another ip
15:37 karoshi neither solution is complicated, though
15:38 semiosis then go for it, i think it will work
15:38 semiosis and if not, then probably worth filing bugs about
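Either way the server name only matters for the initial volfile fetch, so a sketch of the rrdns variant (names and addresses invented) is just:
    # DNS: publish several A records for one name, e.g.
    #   gluster.example.com -> 10.0.0.11, 10.0.0.12, 10.0.0.13
    # every client mounts the same name; whichever server answers serves the volfile
    mount -t glusterfs gluster.example.com:/myvol /mnt/myvol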
15:50 JoeJulian @mount server
15:50 glusterbot JoeJulian: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrnds, or (#2) Learn more about the role played by the server specified on the mount command here: http://goo.gl/0EB1u
15:51 JoeJulian karoshi: ^
15:54 karoshi JoeJulian: thanks yes, semiosis pointed me to that page earlier
15:55 dmojoryder Is it best to use the native gluster client when using DHT? I assume in the case the client applies the hash and talks to the appropriate brick?
15:56 semiosis dmojoryder: you can use nfs or fuse regardless of volume config
15:56 \_pol joined #gluster
15:57 \_pol joined #gluster
15:59 dmojoryder semiosis; Ok, I am using nfs atm but it seems like all the traffic goes to a single server (the one specified in the nfs mount) then presumably gets distributed. This would seem to cause that single server to be a bottleneck. That is why I asked if I used the native glusterfs client, and if it hashed on the client, it would distribute amongst all my bricks and remove the bottleneck I see with nfs
15:59 chirino joined #gluster
15:59 \_pol joined #gluster
15:59 \_pol joined #gluster
16:00 JoeJulian karoshi: So you understand, then, that your load balancer is irrelevant for a fuse mount then.
16:01 karoshi I thought so, but I figured I'd ask anyway
16:01 semiosis dmojoryder: yes you got that right.  there are benefits and disadvantages to NFS clients, but they are supported.  whether to use NFS or FUSE is a choice you have to make in your environment.
16:01 semiosis dmojoryder: try them both and see what works best for you
16:02 JoeJulian dmojoryder: nfs does not do anything client-side. The kernel nfs client connects to an nfs service which then acts as the glusterfs client.
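A side-by-side sketch of the two mount styles JoeJulian contrasts (server and volume names assumed; gluster's built-in NFS server speaks NFSv3):
    # FUSE: the client fetches the volfile, then talks to every brick itself,
    # so the DHT hashing happens on the client
    mount -t glusterfs server1:/myvol /mnt/myvol
    # NFS: the kernel client sends all I/O to the one server mounted, which then
    # acts as the glusterfs client and forwards to the right bricks
    mount -t nfs -o vers=3,mountproto=tcp server1:/myvol /mnt/nfsvol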
16:05 jskinner_ joined #gluster
16:06 jskinne__ joined #gluster
16:07 jskinner_ joined #gluster
16:07 jskinner joined #gluster
16:08 jskinne__ joined #gluster
16:09 jskinner_ joined #gluster
16:10 jskinne__ joined #gluster
16:12 jskinner_ joined #gluster
16:13 jskinne__ joined #gluster
16:14 jskinner_ joined #gluster
16:15 glusterbot New news from newglusterbugs: [Bug 952693] 3.4 Beta1 Tracker <http://goo.gl/DRzjx>
16:15 jskinner_ joined #gluster
16:16 jskinne__ joined #gluster
16:17 jskinner_ joined #gluster
16:18 jskinne__ joined #gluster
16:19 jskinner joined #gluster
16:20 jskinne__ joined #gluster
16:21 jskinner joined #gluster
16:22 jskinne__ joined #gluster
16:23 jskinner joined #gluster
16:24 jskinner_ joined #gluster
16:25 jskinne__ joined #gluster
16:26 jskinner_ joined #gluster
16:27 jskinner_ joined #gluster
16:27 jskinne__ joined #gluster
16:28 jskinner_ joined #gluster
16:29 samppah @latest
16:29 glusterbot samppah: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
16:29 jskinner_ joined #gluster
16:29 samppah @yum repo
16:29 glusterbot samppah: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
16:30 jskinne__ joined #gluster
16:32 jskinner_ joined #gluster
16:33 jskinn___ joined #gluster
16:34 jskinner_ joined #gluster
16:41 hagarth joined #gluster
16:45 CROS_ joined #gluster
16:45 CROS_ left #gluster
16:45 CROS_ joined #gluster
17:01 humbug joined #gluster
17:04 jskinner_ joined #gluster
17:05 y4m4 joined #gluster
17:13 hagarth joined #gluster
17:16 manik joined #gluster
17:18 lh joined #gluster
17:18 lh joined #gluster
17:30 portante joined #gluster
17:34 bulde joined #gluster
17:36 jskinner_ joined #gluster
17:40 kkeithley| my fedorapeople.org yum repo is updated to 3.3.1-13. this should fix the problems people had with 3.3.1-12
17:41 JoeJulian Cool.
17:41 kkeithley| now let's see what the deal is with 3.4.0beta3 on RHEL6
17:42 rb2k joined #gluster
17:43 NuxRo kkeithley|: you mean alpha3?
17:45 kkeithley| lol, yes
17:45 NuxRo kkeithley|: hehe.. in my case (upgrading from 3.4git) I ended up with dead glusterd and http://fpaste.org/aHxb/
17:45 glusterbot Title: Viewing Paste #293060 (at fpaste.org)
17:45 Mo___ joined #gluster
17:49 kkeithley| it's my week for brainfarts, yesterday I said jvayas for Samba here. Should have said chertel.
17:50 NuxRo none ring any bells, so fart away :-)
17:52 kkeithley| yeah, either way, neither one of them has had much of a presence here. :-(
17:52 saurabh joined #gluster
17:52 manik joined #gluster
17:53 nueces joined #gluster
17:53 manik joined #gluster
18:00 CROS_ joined #gluster
18:09 portante joined #gluster
18:46 ingard_ joined #gluster
18:50 RobertLaptop joined #gluster
19:06 awickhm joined #gluster
19:12 piotrektt_ hey. i have this issue. the ip address of the gluster node changed and i cant stop and remove volumes. is there any option to force that?
19:12 piotrektt_ i use 3.1 version
19:14 semiosis piotrektt_: you dont want to do that if one of your servers is out of the cluster
19:14 semiosis it will cause ,,(peer rejected)
19:14 glusterbot I do not know about 'peer rejected', but I do know about these similar topics: 'peer-rejected'
19:15 semiosis ,,(peer-rejected)
19:15 glusterbot http://goo.gl/nWQ5b
19:18 piotrektt_ ok, so i need to restore the old ip config?
19:19 chirino joined #gluster
19:19 semiosis piotrektt_: yes, and please use ,,(hostnames) if your IPs are going to change
19:19 glusterbot piotrektt_: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
19:20 semiosis also these links about ,,(replace) may be helpful for you
19:20 glusterbot Useful links for replacing a failed server... if replacement server has different hostname: http://goo.gl/4hWXJ ... or if replacement server has same hostname:
19:20 glusterbot http://goo.gl/rem8L
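The hostname switch glusterbot describes is just a couple of probes (hostnames are placeholders):
    # existing pool: from any *other* peer, re-probe the server by name to
    # replace its IP entry with the hostname
    gluster peer probe server1.example.com
    # new pool: from the first server probe the others by name, then from one of
    # the others probe the first by name so its own entry is a hostname too
    gluster peer probe server2.example.com   # run on server1
    gluster peer probe server1.example.com   # run on server2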
19:21 Matthaeus joined #gluster
19:32 hagarth joined #gluster
19:32 edward1 joined #gluster
19:59 hagarth joined #gluster
20:03 chirino joined #gluster
20:07 chirino joined #gluster
20:14 sohoo joined #gluster
20:15 sohoo is it possible to mount the base HDs with noatime in gluster dist../replication? will it help performance like it should
20:16 sohoo i know it needs the extended atributes but im not sure it will need that
20:16 H__ you mean the bricks ? yes, you can use noatime there
20:17 sohoo yes, i mean /b1 as ext4 etc.. not the gluster mount
20:19 sohoo is the noatime will help also in the gluster mount? its an option but im not sure what this will achive
20:19 H__ it will speed up reads on lots of files as it does not also have to write for every read anymore when you use noatime
20:20 H__ i don't know if gluster mount has noatime. i doubt it has it. maybe some dev will chime in
20:21 Nagilum_ sohoo: don't use ext4
20:21 Nagilum_ sohoo: use xfs
20:22 sohoo actualy i use ext3 but gonna move some to XFS :) thanks
20:23 CROS_ Why no ext4 with gluster?
20:24 Nagilum_ because gluster doesn't work with ext4
20:24 sohoo what do you mean doesnt work?
20:25 CROS_ http://gluster.org/community/documentation/index.php/Gluster_3.1:_Checking_GlusterFS_Minimum_Requirements
20:25 glusterbot <http://goo.gl/ZekRJ> (at gluster.org)
20:25 CROS_ is that out of date, then?
20:25 CROS_ It seems to think that XFS is the one that hasn't been widely tested?
20:26 Nagilum_ http://joejulian.name/blog/glusterfs-bit-by-ext4-structure-change/
20:26 glusterbot <http://goo.gl/PEBQU> (at joejulian.name)
20:26 Nagilum_ CROS_: xfs is fine
20:27 sohoo how about ext3, whats wrong with it? :)
20:27 sohoo beside long format
20:27 CROS_ got'cha
20:28 CROS_ so, prepatch it's fine...
20:28 Nagilum_ sohoo: no idea, never tried
20:29 sohoo i tried :) its ok, just very long format time but beside that is very stable
20:29 H__ I choose ext4 over xfs still, the 64bit issue is Ok with my kernel and a fix is ready
20:30 sohoo just reading the link regarding that, very intresting
20:32 semiosis sohoo: noatime,nodiratime all the things \o/
20:32 semiosis sohoo: unless you actually need atimes (but lets be honest, no one does)
20:33 semiosis sohoo: glusterfs does not depend on atimes
20:34 sohoo thanks semiosis, when all is redundent realy no need for such things
20:34 sohoo i thought gluster needs it somehow but wasnt sure
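For the bricks that is just a mount-option change, e.g. an fstab line of roughly this shape (device, path and fs type are examples):
    # noatime/nodiratime on the brick filesystem; glusterfs itself does not need atimes
    /dev/sdb1  /gluster/brick1  xfs  noatime,nodiratime  0 0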
20:35 semiosis ext3 and ext4 are handled by the same kernel code, so if you're using an affected kernel neither will work with glusterfs
20:36 semiosis CROS_: anything labeled gluster *3.1* is probably out of date :)
20:37 semiosis CROS_: and maybe also incorrect
20:40 sohoo i agree, 3.3 and up are way better
20:41 Nagilum_ semiosis: btw. why is the patch for that taking so long? Why isn't it enough to discard the upper 32 bits?
20:42 semiosis i am the wrong person to ask about that :D
20:42 Nagilum_ k
20:42 semiosis but let me find the bz id for you
20:42 Nagilum_ https://bugzilla.redhat.com/show_bug.cgi?id=838784
20:42 glusterbot <http://goo.gl/CO1VZ> (at bugzilla.redhat.com)
20:42 semiosis i'm pretty sure it's Bug 838784
20:42 glusterbot Bug 838784: high, high, ---, sgowda, ON_QA , DHT: readdirp goes into a infinite loop with ext4
20:42 glusterbot Bug http://goo.gl/CO1VZ high, high, ---, sgowda, ON_QA , DHT: readdirp goes into a infinite loop with ext4
20:42 semiosis yeah that
20:43 semiosis i use a safe kernel AND switched to XFS, so haven't been following that one too close :)
20:44 dustint joined #gluster
20:45 H__ so that bug says it's fixed and backported to 3.3
20:46 semiosis there will be a 3.3.2 soon.  lets see if its in the ,,(qa releases) yet...
20:46 glusterbot The QA releases are available at http://bits.gluster.com/pub/gluster/glusterfs/ -- RPMs in the version folders and source archives for all versions under src/
20:46 semiosis indeed: http://bits.gluster.com/pub/gluster/glusterfs/3.3.2qa1/
20:46 glusterbot <http://goo.gl/0Yy60> (at bits.gluster.com)
20:47 semiosis i *guess* that will have it
20:47 H__ http://git.gluster.org/?p=glusterfs.git;a=shortlog;h=refs/heads/release-3.3
20:47 glusterbot <http://goo.gl/7n9Jv> (at git.gluster.org)
20:48 Nagilum_ *yay*
20:50 Nagilum_ semiosis: the git commit history indicates that 3.3.2qa1 doesn't have it yet though
20:50 semiosis :(
20:50 Nagilum_ release has it
20:50 Nagilum_ 3.3
20:53 theron joined #gluster
21:02 _pol_ joined #gluster
21:06 jskinner_ joined #gluster
21:11 Nagilum_ btw, quote from the ceph doc: "Ceph OSDs depend on the Extended Attributes (XATTRs) of the underlying file system for various forms of internal object state and metadata. The underlying file system must provide sufficient capacity for XATTRs. btrfs does not bound the total xattr metadata stored with a file. XFS has a relatively large limit (64 KB) that most deployments won’t encounter, but the
21:11 Nagilum_ ext4 is too small to be usable." How much does glusterfs typically use?
21:15 semiosis usually 512 b is recommended inode size on xfs.  if glusterfs exceeds this then another inode is allocated.
21:17 semiosis Nagilum_: link to that doc?
21:17 Nagilum_ http://ceph.com/docs/master/rados/configuration/filesystem-recommendations/
21:17 glusterbot <http://goo.gl/Z1Smj> (at ceph.com)
21:18 semiosis thx
21:18 Nagilum_ yw
21:19 semiosis see also http://docs.openstack.org/trunk/openstack-object-storage/admin/content/filesystem-considerations.html
21:19 glusterbot <http://goo.gl/CY0qU> (at docs.openstack.org)
21:19 semiosis which recommends inode size 1024 to store swift xattrs without overflowing the inode
21:19 Nagilum_ I see
21:19 semiosis that applies to glusterfs UFO (swift integration)
21:20 semiosis imagine if you had 1kb inode size and 64 k of xattr data to store... reading the xattrs of a file would require reading 64 inodes.  yuck.
21:21 semiosis seems inefficient imo, but i'm no fs expert
21:21 semiosis wonder how much ceph uses in practice
21:21 Nagilum_ not 64 inodes, just an extend
21:22 Nagilum_ but still..another seek
21:22 semiosis ohh
21:22 semiosis ok
21:25 Nagilum_ anyway, current disks have 4k blocks, so even using 4k inodes would not seem unreasonable if you know you need metadata
21:25 semiosis thats interesting
21:29 kkeithley| fwiw, our performance team's benchmarks show UFO performs no better with inode size=1024 than with inode size512
21:29 kkeithley| s/size512/size=512/
21:29 glusterbot What kkeithley| meant to say was: fwiw, our performance team's benchmarks show UFO performs no better with inode size=1024 than with inode size=512
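Those numbers translate to the mkfs step (device is an example): 512-byte inodes are the usual recommendation so the gluster xattrs normally fit inside the inode, with 1024 sometimes suggested when UFO/swift metadata will also live there.
    mkfs.xfs -i size=512 /dev/sdb1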
21:30 sohoo is it possible to mix fs types on a volume? axttr data is the same just wonder
21:30 semiosis so i looked at a file on one of my replica 2 volumes and here's the size of the xattrs... 12b x 2 replicas + 16b gfid + 20b linkto = 60b
21:30 Nagilum_ I have one of those newer disks currently in my nas, performance suxx when the operations aren't 4k aligned :-/
21:31 semiosis of course not all files will have a linkto
21:31 semiosis s/b/B/g
21:31 glusterbot What semiosis meant to say was: so i looked at a file on one of my replica 2 volumes and here's the size of the xattrs... 12B x 2 replicas + 16B gfid + 20B linkto = 60B
21:32 Nagilum_ ah :) nice
21:34 BSTR Hey guys, quick question
21:34 semiosis and while the afr & gfid attrs are fixed length, the linkto is a string which includes the volume name, so that would vary a bit
21:34 semiosis Nagilum_: to give you an idea of how to estimate inode size needs
21:34 BSTR the gluster-rdma package : is that only for clients, or can you use it to sync between devices?
21:35 Nagilum_ semiosis: sounds like one would need an obscene amount of replicas to exceed ext4 limits :)
21:37 semiosis haha yeah
21:37 semiosis well, xfs yeah, idk even what the ext4 limit is
21:38 sohoo still dont understand whats so special with xfs :)
21:39 Nagilum_ xfs is simply the best!
21:39 semiosis it's EXTREME!
21:39 Nagilum_ until glusterfs has the dht fix
21:39 sohoo :) i know that
21:39 Nagilum_ then ext4 will be king again
21:40 Nagilum_ until btrfs or ext5 take over
21:43 sohoo ok im convinced, we have to add 2 more nodes soon is it possible to format them as XFS(all HDs) while most of the VOLUME is ext3?
21:43 Nagilum_ I don't see why not
21:44 rb2k joined #gluster
21:44 semiosis sure you can have a mix.  i migrated slowly from ext4 to xfs.  no problem.
21:46 y4m4 joined #gluster
21:46 failshell joined #gluster
21:47 failshell hello. sometimes, mount reports the gluster volume as mounted. but in reality, its not. anyone else experiencing this? on 3.2
21:48 sohoo tnx, XFS it is
21:55 fidevo joined #gluster
22:03 gdavis33 joined #gluster
22:05 redsolar joined #gluster
22:32 _pol joined #gluster
22:43 hagarth joined #gluster
22:53 coredumb joined #gluster
22:59 _pol_ joined #gluster
23:11 hagarth joined #gluster
23:14 piotrektt_ hey i wan to create striped replicated volume on 2 gluster servers (2 bricks each)
23:14 piotrektt_ i use command: stripe 2 replica 2
23:14 piotrektt_ everything is created ok
23:15 piotrektt_ but when i want to write anything on gluster i get error
23:18 _pol joined #gluster
23:26 semiosis piotrektt_: if this is your first time using glusterfs, dont use stripe yet.  save that for later if you *really* need it.
23:27 piotrektt_ but i need it
23:27 semiosis why?
23:27 piotrektt_ i need it to work that way
23:27 piotrektt_ we have huge files and a lot of nodes
23:28 piotrektt_ to connect to gluster
23:28 semiosis now, if you want help with an error, put it on pastie.org with some logs and give the link here
23:28 piotrektt_ but its error from cp
23:28 piotrektt_ gluster seems to set everything right
23:29 piotrektt_ i only wonder if i gave right stripe COUNT and replica COUNT
23:31 semiosis like i said, if you want help with an error, put it on pastie.org with some logs and give the link here
23:32 semiosis would like to help, but too busy to start guessing :)
23:33 piotrektt_ but wich log should i see first for thtat problem?
23:34 semiosis client log file, /var/log/glusterfs/the-mount-point.log
23:39 piotrektt_ [2013-04-19 08:10:34.259007] E [stripe-helpers.c:268:stripe_ctx_handle] 0-gluster3-stripe-0: Failed to get stripe-size
23:39 piotrektt_ [2013-04-19 08:10:34.261238] W [fuse-bridge.c:2025:fuse_writev_cbk] 0-glusterfs-fuse: 21: WRITE => -1 (Invalid argument)
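For reference, a stripe 2 replica 2 create over two servers with two bricks each would look roughly like this (server names and paths assumed); as with plain replica volumes, adjacent bricks in the list become the replica pairs, which is worth double-checking before chasing the write error above:
    gluster volume create bigvol stripe 2 replica 2 \
        server1:/bricks/b1 server2:/bricks/b1 \
        server1:/bricks/b2 server2:/bricks/b2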
23:58 hagarth joined #gluster
