IRC log for #gluster, 2013-03-12


All times shown according to UTC.

Time Nick Message
00:15 hagarth joined #gluster
00:25 jdarcy joined #gluster
00:38 lh joined #gluster
00:49 hagarth joined #gluster
00:59 yinyin joined #gluster
00:59 yinyin_ joined #gluster
01:03 yinyin joined #gluster
01:04 dgarstang joined #gluster
01:04 dgarstang NOT WORKING!!!!! http://community.gluster.org/q/how-do-i-reuse-a-brick-after-deleting-the-volume-it-was-formerly-part-of/
01:04 glusterbot <http://goo.gl/HTfdm> (at community.gluster.org)
01:06 yinyin_ joined #gluster
01:08 jdarcy joined #gluster
01:08 _pol joined #gluster
01:15 Humble joined #gluster
01:22 _pol joined #gluster
01:29 yinyin joined #gluster
01:33 Humble joined #gluster
01:42 JoeJulian @reuse brick
01:42 glusterbot JoeJulian: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
02:06 Humble joined #gluster
02:12 hagarth joined #gluster
02:23 Humble joined #gluster
02:30 vigia joined #gluster
02:37 kkeithley1 joined #gluster
03:03 sjoeboo_ joined #gluster
03:08 Humble joined #gluster
03:10 Humble joined #gluster
03:28 Humble joined #gluster
03:28 kevein joined #gluster
03:43 bulde joined #gluster
03:47 bharata joined #gluster
03:51 vshankar joined #gluster
04:11 shylesh joined #gluster
04:11 hagarth joined #gluster
04:23 mtanner_ joined #gluster
04:30 anmol joined #gluster
04:30 vpshastry joined #gluster
04:33 hagarth joined #gluster
04:34 sjoeboo_ joined #gluster
04:34 Humble joined #gluster
04:37 deepakcs joined #gluster
04:47 isomorphic joined #gluster
04:51 sgowda joined #gluster
05:00 mohankumar joined #gluster
05:06 lalatenduM joined #gluster
05:21 bharata In a multinode replicated volume, does gluster ensure that the data requested from a client on a node is fetched from the local brick ?
05:23 raghu` joined #gluster
05:29 mohankumar joined #gluster
05:29 bala joined #gluster
05:36 satheesh joined #gluster
05:36 vshankar joined #gluster
05:44 tyl0r joined #gluster
05:45 sripathi joined #gluster
05:55 glusterbot New news from resolvedglusterbugs: [Bug 912052] Default and document request logging parameter for gluster installations <http://goo.gl/OLo4X>
05:55 rastar joined #gluster
05:59 shireesh joined #gluster
06:01 rastar joined #gluster
06:22 hchiramm_ joined #gluster
06:29 glusterbot New news from newglusterbugs: [Bug 920434] Crash in index_forget <http://goo.gl/mH2ks>
06:44 displaynone joined #gluster
06:47 sripathi joined #gluster
06:55 Nevan joined #gluster
07:17 jtux joined #gluster
07:24 aravindavk joined #gluster
07:34 ngoswami joined #gluster
07:40 disarone joined #gluster
08:01 jtux joined #gluster
08:04 andreask joined #gluster
08:07 mooperd joined #gluster
08:09 stickyboy joined #gluster
08:09 stickyboy So I upgraded to CentOS 6.4 on a machine using the fuse client and now my mounts won't mount at boot:  type=1400 audit(1363004014.209:4): avc:  denied  { execute } for  pid=1150 comm="mount.glusterfs" name="glusterfsd" dev=sda1 ino=1315297 scontext=system_u:system_r:mount_t:s0 tcontext=system_u:object_r:glusterd_exec_t:s0 tclass=file
08:10 * stickyboy is tempted to disable selinux, but knows it's wrong... :P
08:31 tryggvil joined #gluster
08:31 sripathi1 joined #gluster
08:49 redbeard joined #gluster
08:51 vpshastry joined #gluster
08:52 tjikkun_work joined #gluster
08:53 dobber_ joined #gluster
08:59 hagarth joined #gluster
09:02 bulde joined #gluster
09:06 aravindavk joined #gluster
09:10 Philip_ joined #gluster
09:13 sripathi joined #gluster
09:18 aravindavk joined #gluster
09:20 kkeithley_blr stickyboy: meh, it's not so much that it's right or wrong, it's just the expedient thing to do until things get caught up.
09:20 sahina joined #gluster
09:25 ninkotech_ joined #gluster
09:30 stickyboy kkeithley_blr: Even better, I learned how to compile my own module. :D
09:31 stickyboy I learned about audit2allow... :D
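
[Sketch, not from the log: the audit2allow workflow stickyboy alludes to, built from the AVC denial pasted above; the module name and audit log path are assumptions.]
# build a local SELinux policy module from the mount.glusterfs denial
grep mount.glusterfs /var/log/audit/audit.log | audit2allow -M glusterfs_mount
# load it so mount.glusterfs may execute files labelled glusterd_exec_t
semodule -i glusterfs_mount.pp
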
09:33 vpshastry joined #gluster
09:40 shireesh joined #gluster
09:42 mgebbe_ joined #gluster
09:43 maxiepax joined #gluster
09:46 Staples84 joined #gluster
09:49 maxiepax joined #gluster
09:51 manik joined #gluster
10:00 anmol joined #gluster
10:03 shireesh joined #gluster
10:08 cw joined #gluster
10:10 errstr joined #gluster
10:11 Philip__ joined #gluster
10:13 tryggvil joined #gluster
10:19 maxiepax joined #gluster
10:21 displaynone joined #gluster
10:26 glusterbot New news from resolvedglusterbugs: [Bug 821139] Running arequal after rebalance says "short read" <http://goo.gl/7XonM>
10:29 duerF joined #gluster
10:41 mgebbe_ joined #gluster
10:45 acritox joined #gluster
10:47 _br_ joined #gluster
10:48 _br_ joined #gluster
10:49 _br_ joined #gluster
10:53 acritox Hi, does anyone use replicated glusterfs for storage of VMs? I've tried to set up a replicated gluster volume with two bricks where KVM qcow2 images are stored and I'm wondering about bad write performance (~1MB/s) on those virtual disks
10:54 edward1 joined #gluster
11:10 stickyboy acritox: Yeah, I think that's inefficient, cuz you're basically doing block IO over the network.
11:11 stickyboy I think it's better to mount the gluster volumes using fuse inside the VM itself.
11:11 stickyboy But I know there's work on a native qemu driver in glusterfs 3.4... not sure of the details though.
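
[Sketch, not from the log: the qemu-side syntax of the native glusterfs/libgfapi driver stickyboy mentions for 3.4; it needs a qemu built with glusterfs support, and the host, volume and image names here are made up.]
# create a qcow2 image directly on the gluster volume, bypassing the FUSE mount
qemu-img create -f qcow2 gluster://gluster1/vmvol/guest0.qcow2 20G
# boot a guest against it
qemu-system-x86_64 -m 2048 -drive file=gluster://gluster1/vmvol/guest0.qcow2,if=virtio
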
11:18 puebele joined #gluster
11:35 sripathi joined #gluster
11:36 edong23_ joined #gluster
11:37 sr71_ joined #gluster
11:38 hagarth__ joined #gluster
11:39 dec_ joined #gluster
11:39 raghavendrabhat joined #gluster
11:41 _benoit__ joined #gluster
11:43 aravindavk joined #gluster
11:45 atrius joined #gluster
11:48 kshlm joined #gluster
11:48 kshlm joined #gluster
12:05 sjoeboo_ joined #gluster
12:14 jdarcy_ joined #gluster
12:16 stickyboy Hmm, I seem to have a split-brain on /
12:16 stickyboy How is that possible?
12:17 stickyboy Log snippet: http://pastie.org/pastes/6459834/text?key=qxnrv5uq6faleazblm6w
12:17 glusterbot <http://goo.gl/y9e0i> (at pastie.org)
12:25 balunasj joined #gluster
12:29 anmol joined #gluster
12:29 bala joined #gluster
12:29 andreask joined #gluster
12:30 manik joined #gluster
12:38 jdarcy joined #gluster
12:46 aravindavk joined #gluster
12:46 jdarcy joined #gluster
12:48 manik joined #gluster
12:56 puebele1 joined #gluster
12:59 robos joined #gluster
13:06 bennyturns joined #gluster
13:07 kkeithley1 joined #gluster
13:09 displaynone joined #gluster
13:09 plarsen joined #gluster
13:10 jtux joined #gluster
13:15 stickyboy Ah, I figured out my problem; I had set up directory services (389) with sssd but my data was stale.
13:15 rastar1 joined #gluster
13:16 NeatBasis joined #gluster
13:16 stickyboy Turns out my 389 server wasn't running!
13:17 stickyboy Once I brought dirsrv back up on the LDAP box the self heal completed immediately.
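
[Sketch, not from the log: the commands usually used to confirm and watch a split-brain like the one stickyboy pasted; the volume name "homes" is an assumption.]
# entries the self-heal daemon flags as split-brain
gluster volume heal homes info split-brain
# anything still waiting to be healed
gluster volume heal homes info
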
13:17 jclift joined #gluster
13:20 anmol joined #gluster
13:21 dustint joined #gluster
13:28 sjoeboo_ joined #gluster
13:29 rwheeler joined #gluster
13:31 lh joined #gluster
13:31 lh joined #gluster
13:38 lpabon joined #gluster
14:19 wushudoin joined #gluster
14:19 vshankar joined #gluster
14:27 torbjorn1_ does anyone know of any documentation for "eager locking" that is mentioned on http://www.gluster.org/community/documentation/index.php/Features34 ? .. Is it the same thing as is mentioned here, perhaps ? http://hekafs.org/index.php/2012/03/glusterfs-algorithms-replication-future/
14:27 glusterbot <http://goo.gl/4MvOh> (at www.gluster.org)
14:30 bugs_ joined #gluster
14:35 hagarth joined #gluster
14:39 lpabon joined #gluster
14:40 tryggvil joined #gluster
14:42 rcheleguini joined #gluster
14:50 daMaestro joined #gluster
14:51 bitsweat joined #gluster
14:57 johnmark torbjorn1_: is there a feature page for it?
14:58 bitsweat left #gluster
14:59 sjoeboo_ joined #gluster
15:02 aliguori joined #gluster
15:02 jbrooks joined #gluster
15:02 torbjorn1_ johnmark: not that I can see, it's only mentioned on the Features34 page
15:04 dberry joined #gluster
15:04 dberry joined #gluster
15:07 johnmark torbjorn1_: yeah, I see what you mean. We should document that.
15:11 torbjorn1_ johnmark: That would be appreciated. In any case, is "eager locking" the same thing that is being described in the hekafs.org article ?
15:13 * johnmark looks
15:13 johndesc1 kkeithley_blr: sorry for the noise yesterday, seems we had something wrong in our tests, now speed is correct :)
15:14 johnmark torbjorn1_: ah yes, it does
15:14 johnmark er is
15:14 kkeithley_blr no worries. It's good to know you're getting better performance
15:14 johnmark torbjorn1_: specifically created to help with VM images
15:14 johnmark johndesc1: anything you did in particular to help there?
15:16 johndesc1 johnmark: yeah, more testing :D
15:16 eryc joined #gluster
15:16 johndesc1 the fact is something seems bad in the windows 7 driver for 10gbps links; also on a VM with only one VCPU the performance drops
15:17 Staples84 joined #gluster
15:17 BSTR joined #gluster
15:17 johndesc1 but on a 1gbps windows 7 VM with 4 VCPU the perfs are around 100mbps, so it is quite nice
15:18 bulde joined #gluster
15:18 johndesc1 on the other side, the linux at only ~20mbps seems to behave better with a mount.cifs than with nautilus mount…
15:19 torbjorn1_ johnmark: great, that looks like a really interesting feature, looking forward to it
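
[Sketch, not from the log: eager locking later shows up as a tunable named cluster.eager-lock; the volume name here is an assumption.]
# enable eager locking on a replicated volume used for VM images
gluster volume set vmstore cluster.eager-lock on
# confirm the option took effect
gluster volume info vmstore
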
15:22 NeatBasis_ joined #gluster
15:23 Humble joined #gluster
15:25 wN joined #gluster
15:32 manik joined #gluster
15:48 displaynone joined #gluster
15:50 inodb_ joined #gluster
15:51 tjstansell joined #gluster
15:51 lalatenduM joined #gluster
15:53 _NiC joined #gluster
15:53 nhm joined #gluster
15:53 thekev joined #gluster
15:54 displaynone joined #gluster
15:54 awheeler_ I'm exploring using the UFO and Swift to create my own internal S3 alternative, but it seems under heavy development.  Is it ready for a production environment, or would I be better off using Openstack's Swift until UFO/Swift stabilizes?
15:54 yosafbridge joined #gluster
15:54 VeggieMeat joined #gluster
15:54 lkoranda joined #gluster
15:55 efries joined #gluster
15:57 jiffe98 joined #gluster
16:02 awheeler_ Additionally, assuming it is ready, what authentication scheme do you recommend: keystone, swauth, ?
16:02 kkeithley_blr Heavy development is, perhaps, pretty subjective. It's released, it's supported, we're fixing bugs. It is being used in production by several large companies.
16:03 kkeithley_blr Out of the box UFO currently only supports tempauth. We're working on keystone auth.
16:03 awheeler_ Fair point.  I just see that my current setup of multiple volumes is broken in the 3.4 alpha.
16:05 awheeler_ And you are switching to the ring setup, so if I have a 3.3 setup, will there be a clean migration path?
16:05 kkeithley_blr Yes, untangling the "swift.diff" did break multiple volume support, but that's fixed in git since 3.4alpha was released, if I'm not mistaken.
16:08 awheeler_ I did see a patch for that, so good.  I've been using keystone, but the overhead is higher than I would have expected.  Any thoughts on when keystone will be officially supported?  BTW, I discovered that using sqlite3 on a shared glusterfs store is not as efficient as I would have hoped.
16:13 kkeithley_blr Not sure what you mean by the ring setup. With UFO we use a single Swift ring and use Gluster distribution for better scale out. Keystone is on the short list of things to get done.
16:13 awheeler_ Any thoughts on when 3.4 will release, and if Keystone will be a supported auth?
16:14 tyl0r joined #gluster
16:14 kkeithley_blr With luck by the time we ship 3.4, but updates to UFO are not tightly coupled to gluster releases so if it takes a bit longer...
16:14 kkeithley_blr We're aiming for an April release IIRC.
16:14 kkeithley_blr Early April maybe?
16:16 awheeler_ The ring setup is the swift ring, yes.  In 3.3 it wasn't an obvious part of the equation.
16:18 awheeler_ Keystone seems to work reasonably well for me except for 2 issues:  occasionally I will get an auth token not found, and when no requests are going to the process, it's still spinning madly away doing I don't know what, lol.
16:18 kkeithley_blr I haven't seen a way to  increase the ring count in Swift in a running system. (Am I wrong about that?). With gluster you can add bricks to a running to increase capacity.
16:18 kkeithley_blr s/running to increase/running system to increase/
16:18 glusterbot kkeithley_blr: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
16:19 kkeithley_blr good job glusterbot
16:19 awheeler_ Personally I haven't looked too much at the way Openstack does their Swift as I wanted to use Gluster.  So the ring thing escapes me a bit, but I suspect I'll need to become more familiar with it.
16:21 awheeler_ I have been planning on setting up each gluster node with swift and keystone, and putting them all behind an nginx proxy which round-robins to all of the nodes.
16:21 Alknelt joined #gluster
16:22 awheeler_ Does that seem like a bad idea?  Does it make more sense to have separate UFO servers from gluster, and separate Keystone as well?
16:25 kkeithley_blr Dunno about keystone. Separating the UFO servers from the gluster nodes means less context switching but some additional network latency perhaps. You'll have to try it and let us know how it goes.
16:27 kkeithley_blr And running UFO servers on separate servers from gluster means you can add more UFO servers or more gluster nodes independent from each other. That seems like a plus to me, but YMMV.
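
[Sketch, not from the log: a minimal nginx front end for the round-robin layout awheeler_ describes; the hostnames, the port 8080 Swift proxies, and the config path are assumptions, and TLS is omitted.]
cat > /etc/nginx/conf.d/ufo.conf <<'EOF'
upstream ufo_nodes {
    server gluster1:8080;
    server gluster2:8080;
    server gluster3:8080;
    server gluster4:8080;
}
server {
    listen 80;
    location / {
        # hand account/container/object requests to whichever UFO node answers
        proxy_pass http://ufo_nodes;
    }
}
EOF
nginx -t && service nginx reload
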
16:28 Alknelt Is it possible to move a LUN which contains a brick to another Gluster peer and maintain the brick contents?
16:28 awheeler_ Fair enough.  Do you know if there is Gluster related keystone work/code going on somewhere that I can look at and possibly contribute to?
16:29 Alknelt … and that brick as a member of the volume.
16:30 awheeler_ Does the new peer already have bricks of its own?
16:31 Alknelt No, but it is connected to the same SAN. If it did it would be a migrate brick...
16:33 Alknelt I'm thinking it might have to be a remove-brick to drain the brick on server A. remove from server A, present to server B. Then migrate brick 1, repeat until desired LUNs are all moved
16:33 kkeithley_blr Unless lpabon has started since I left, Friday a week ago, I think the answer is no.
16:33 awheeler_ So you are trying to move a brick from one server to another by moving the LUN rather than doing a replace/migrate?
16:34 Alknelt That'd be ideal yes awheeler
16:34 Alknelt I have 48 bricks, each a LUN that I want to distribute from 2 servers to 4
16:35 awheeler_ Ah, that does sound like a painful, time-consuming process.
16:35 Alknelt Yes...
16:36 awheeler_ I suspect with trickery (ie, not supported) it could be done by messing with the raw files in /var/lib/glusterd.  But I'm not sure it would be any less painful, or wise.
16:37 Alknelt You know everyone who works in IT has a bit of black magic up each sleeve...
16:37 awheeler_ lol
16:38 elyograg kkeithley_blr: is a package for S3 available yet? I'm on F18.
16:40 kkeithley_blr A package for S3? Do you mean the S3 API filter for Swift?
16:40 awheeler_ @Alknelt: I assume this is a LOT of data?  I'd do some experiments with a dev setup and see if it's even feasible.
16:41 _pol joined #gluster
16:41 kkeithley_blr Not as part of UFO, and last I looked the openstack-swift packaging hadn't changed
16:42 kkeithley_blr oh, look, openstack-swift 1.7.6 for f19.
16:42 elyograg openstack-swift-plugin-swift3 exists, but won't install with the glusterfs-swift packages installed. the s3 config I used with the 1.4.8-based packages fails to find swift3.
16:43 Mo___ joined #gluster
16:43 Alknelt @awheeler_ at this point I'm still in testing phase to ensure Gluster can meet our needs. However, I can see a very likely possible situation occurring here like this scenario I describe. I've been testing with 70TB. Actual will be 215TB. My test has been on 2 servers, production will be 4, but I'm worried that might not support the load and I'll have to add more servers.
16:43 kkeithley_blr oh, I wasn't aware of that
16:45 kkeithley_blr And that just further emphasizes why I want to stop packaging swift and just use the openstack-swift package.
16:45 awheeler_ @Alknelt: That's the beauty of Gluster, you can always add more nodes and bricks.  I'd start with fewer bricks, and add them as needed, possibly on new servers, rather than plan to migrate them to new servers if the need arises.
16:45 elyograg kkeithley_blr: +1 ;)
16:45 shylesh joined #gluster
16:45 elyograg so you'd ultimately just have a glusterfs plugin and some sample configs?
16:46 kkeithley_blr exactly
16:46 awheeler_ kkeithley_blr: +1 here to.  That would be great.  You might be able to manage that with some translating middleware.
16:46 Alknelt @awheeler_ Wish I could, but that option isn't within the constraints of the project. I have a fixed number of spindles, but a flexible number of servers. The beauty of a SAN….
16:48 Alknelt I have 130 LUNs, and I can't let any sit idle.
16:48 kkeithley_blr 1.7.6 has everything we need. We have the constraints-config from 1.7.6 backported into 1.7.4 but the openstack-swift packagers won't take it.
16:48 awheeler_ @Alknelt: Sure, I have a SAN too.  So you have 48 bricks because you need the space provided by them all, but not yet the processing power of multiple nodes?  And you want the 'option' to move some of that data to other servers.
16:48 kkeithley_blr So we're stuck shipping our own Swift until we get to f19 and (RH)EL7.
16:48 Alknelt @awheeler_ Exactly
16:49 awheeler_ @Alknet: Well, that is an interesting challenge.  lol
16:49 kkeithley_blr And it's late in Bangalore. I'm going to sign off before I get sucked into anything else.
16:50 elyograg the f19 schedule says late may for a beta, mid-april for alpha.
16:50 Alknelt Looking in /var/lib/glusterfs/vols/<volume>: each file is a brick, but each server host has a unique auth key which looks more like a hash…. No fun.
16:51 awheeler_ @alknet: are the nodes virtual?
16:51 Alknelt @awheeler_ No, physical.
16:53 awheeler_ @alknet: Well, it would "theoretically" be as "simple" as renaming files with the new host name, and replacing the old host hashes and names with the new ones for the correct bricks, and all gluster nodes would need to be off.
16:53 awheeler_ @Alknet: On each of the existing nodes of course.
16:53 elyograg kkeithley: something to think about an answer for later: how long until a glusterfs swift3 plugin will be available for the current stuff? how long until I could reasonably hack together a 1.7.6 setup on a pre-release F19 that I hack together myself before an installer is available?
16:53 bulde joined #gluster
16:53 Alknelt @awheeler_ Would be an interesting exercise.
16:54 elyograg oh, he logged off and that's his other nick.  i guess i'll ask later.
16:55 awheeler_ @Alknet: Definitely in the realm of magic.  :)
16:55 Alknelt @awheeler_ I'm pretty sure I saw that in the career description somewhere...
16:55 awheeler_ lol
16:55 Alknelt @awheeler_ Thank you for your insight. I'll play around with it and see if I can break it.
16:57 Alknelt @awheeler_ It would be great if there were some peer fail-over capability. I suppose I can put that in the Gluster Ideas bucket.
16:58 awheeler_ @Alknet: You are welcome, and good luck.  I'd recommend scripting it so you can be sure to get consistent results: "# replace_brick_node <old node name> <new node name> <old uuid> <new uuid>" .  And, if it works, you can post that to github.
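
[Sketch, not from the log: the unsupported "black magic" being discussed, purely as an illustration; serverA/serverB and the brick paths are assumptions, the peer UUID swap awheeler_ mentions is left out, and glusterd must be stopped on every node. Try it on a throwaway volume first.]
service glusterd stop
# rewrite the old hostname inside the volume definitions
grep -rl serverA /var/lib/glusterd | xargs sed -i 's/serverA/serverB/g'
# brick definition filenames embed the hostname, so rename those as well
for f in /var/lib/glusterd/vols/*/bricks/serverA:*; do mv "$f" "${f/serverA/serverB}"; done
service glusterd start
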
16:59 Alknelt Whats the github address?
16:59 awheeler_ @Alknet: Peer failover?  You mean like a spare or something?
16:59 Alknelt never mind about github address..
17:00 Alknelt More like ServerA has B1,B3   ServerB has B2,B4… ServerA goes offline and ServerB now has B1,B2,B3,B4
17:00 Alknelt Because each LUN is presented to all servers
17:00 displaynone joined #gluster
17:01 awheeler_ @Alknet: Hmm, A could only have those if there are elsewhere than B, so where does A get them from?
17:01 awheeler_ @Alknet: And if A already had them, why not share them all the time?
17:02 Alknelt @awheeler_  I'm not following you.
17:02 awheeler_ @Alknet: I assume you meant by B1,B2,B3,B4 you meant bricks, and that A has B1 and B3, and B has B2, and B4, yes?
17:03 Alknelt @awheeler_ Yes
17:03 awheeler_ @Alknet: So, if B goes offline, and he has the only copies of the contents of B2, and B4...
17:03 Alknelt @awheeler_ Yes
17:03 awheeler_ @Alknet: Or were some of those copies?  Or are you thinking about LUNs here?
17:03 Alknelt @awheeler_ But A also has B2 and B4 presented as LUNs
17:03 glusterbot Alknelt: You've given me 5 invalid commands within the last minute; I'm now ignoring you for 10 minutes.
17:04 awheeler_ @Alknet: Ah, got it, the secret-sauce of LUNs.
17:04 glusterbot awheeler_: You've given me 5 invalid commands within the last minute; I'm now ignoring you for 10 minutes.
17:04 Alknelt Yes, LUNs would be a prerequisite
17:04 awheeler_ So an extension of what we had already been discussing.  Makes sense.
17:05 Alknelt But while A is running B1 and B3 are ignored by B
17:05 awheeler_ Only gluster isn't designed to work with a SAN, but more as a possible replacement for one, IMHO.
17:05 Alknelt I'm thinking of how Ibrix GFS works.. Which I've been running for a couple years now
17:05 awheeler_ Ah, not familiar with them.
17:06 Alknelt Its proprietary but HP bought them a few years ago… Not so great anymore and been rebranded to X9000
17:07 Alknelt I see how Gluster could be a replacement for a SAN in many ways. But at the same time the SAN i have can only export LUNS >2TB I have to concatenate those LUNS some how
17:08 Alknelt excuse me < 2TB
17:08 awheeler_ Ah, that's weird.
17:09 Alknelt Older Fibre channel disks..
17:09 Alknelt HP EVA 5000, 8000, and 8100
17:11 Alknelt Thank you again.
17:11 Alknelt left #gluster
17:12 awheeler_ NP, and good luck.
17:12 manik joined #gluster
17:27 _pol joined #gluster
17:27 _pol joined #gluster
17:28 redbeard joined #gluster
17:37 y4m4 joined #gluster
17:37 lpabon joined #gluster
17:38 dberry any issues with 3.3.1 and rsyncing files to a glusterfs mount?  When I rsync files from another server, the glusterfs hangs
17:41 Ryan_Lane joined #gluster
17:42 ajm anyone know if an issue where the "xattrop" directory gets extremely large?
17:47 jdarcy joined #gluster
17:52 manik joined #gluster
17:59 edong23 joined #gluster
18:01 rob__ joined #gluster
18:04 manik joined #gluster
18:05 semiosis dberry: usually a good idea to use --inplace when rsyncing into glusterfs.  check your client log file for more info about the hang
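
[Sketch, not from the log: the --inplace form semiosis recommends, with made-up paths; --whole-file is an optional extra often paired with it.]
# write into the destination files directly instead of rsync's tempfile-and-rename dance
rsync -av --inplace --whole-file /data/source/ /mnt/gluster/dest/
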
18:08 JasonG joined #gluster
18:13 JasonG I'm testing 3.4.0alpha-2.el6 and i'm having a strange issue. If i delete files from 1 client the file deletes but after a few seconds it comes back, but as an empty file
18:14 JasonG I guess i should clarify i'm running a replicated 2 setup
18:18 Mo____ joined #gluster
18:20 manik joined #gluster
18:25 sjoeboo_ joined #gluster
18:36 dberry added --inplace and kicked off the rsync and there is nothing in the log file
18:36 dberry I have the client mounted on the server and when I do an ls /mnt/gluster, it hangs
18:38 SpeeR joined #gluster
18:53 disarone joined #gluster
19:07 semiosis dberry: sounds like you're hitting the ,,(ext4) bug.  xfs with inode size 512 is recommended
19:07 glusterbot dberry: Read about the ext4 problem at http://goo.gl/PEBQU
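
[Sketch, not from the log: formatting a brick the way semiosis suggests (xfs with 512-byte inodes); the device and mount point are assumptions.]
# 512-byte inodes leave room for gluster's xattrs inside the inode
mkfs.xfs -i size=512 /dev/sdb1
mkdir -p /export/brick1
mount -t xfs /dev/sdb1 /export/brick1
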
19:10 H__ semiosis: can I check how much of the inode size space I'm using 'today' ?
19:11 _br_ joined #gluster
19:12 semiosis H__: what do you mean?
19:12 H__ I want to know how close I am to hitting the ext4 inode size limit
19:13 semiosis H__: well there's two things going on here... xattrs which glusterfs stores *in* each inode, and the total number of inodes in the filesystem
19:14 semiosis if glusterfs puts too many xattrs on a file (in the backend brick) to fit in one inode, then another is used to store the spill over.
19:14 H__ i mean the xattr limits
19:14 H__ oh ? it spills over ? then what's the problem ?
19:15 semiosis H__: it's an optimization.  if you know you're going to need a bit of space for xattrs, make the inodes larger so you only have to read one inode to get the xattrs
19:16 H__ got it. I thought it was a hard limit where glusterfs over ext4 bricks stopped working
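
[Sketch, not from the log: inspecting the xattrs semiosis is describing, run against the brick path rather than the client mount; the path is an assumption.]
# dump every xattr glusterfs has stored on a file, hex-encoded
getfattr -d -m . -e hex /export/brick1/path/to/file
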
19:16 daMaestro joined #gluster
19:17 semiosis there is a bug which basically means glusterfs doesnt work -- at all -- on ext4 bricks, which happened with a linux kernel change a few months ago in mainline kernel 3.3.0 and backported to older RH/cent kernels
19:17 semiosis see the link glusterbot gave about that
19:18 _br_ joined #gluster
19:19 H__ thanks.i'm aware of that one. it means i cannot upgrade to ubuntu 12.10 until that's "resolved" or I move all fs over to , say, xfs which i'd rather not use.
19:21 tqrst plus the conversion itself is a hassle
19:22 nueces joined #gluster
19:22 tqrst 3.4 will probably be out by the time I'm done converting to xfs
19:24 semiosis you have to weight the options for yourself.  i made the switch from ext4 to xfs and have been very happy with it
19:24 semiosis s/weight/weigh/
19:24 glusterbot What semiosis meant to say was: you have to weigh the options for yourself.  i made the switch from ext4 to xfs and have been very happy with it
19:25 H__ tqrst: yes, it'll cost me 2w to 1m I think
19:26 JoeJulian H__: Why would you rather not use xfs?
19:27 H__ have had very bad fsck recovery with it, in contrast to the multilpe ext fs's I've seen break.
19:27 _br_ joined #gluster
19:29 daMaestro we have also lost multiple bricks formatted xfs when needing to do a xfs_repair
19:29 daMaestro granted, this was with the el5 xfs modules
19:30 daMaestro because of DHT, that caused file system paths to become unreadable
19:30 daMaestro permanently.
19:33 JoeJulian Ric Wheeler swayed my opinion of xfs over ext last year at Red Hat Summit. He was pretty candid about what he thought of the two code bases.
19:34 JoeJulian Well, actually at dinner...
19:35 * semiosis was at the wrong end of the table
19:35 JoeJulian You really were. :(
19:35 JoeJulian Maybe this year
19:35 semiosis perhaps
19:35 _br_ joined #gluster
19:36 JoeJulian Holy cow...
19:36 JoeJulian @channelstats
19:36 glusterbot JoeJulian: On #gluster there have been 98320 messages, containing 4300737 characters, 722238 words, 2923 smileys, and 361 frowns; 670 of those messages were ACTIONs. There have been 34946 joins, 1153 parts, 33822 quits, 14 kicks, 109 mode changes, and 5 topic changes. There are currently 198 users and the channel has peaked at 203 users.
19:39 jruggiero joined #gluster
19:40 H__ If I considered code bases I'd still be running FreeBSD ;-)
19:40 JoeJulian johnmark: are we doing the board meeting this year?
19:41 semiosis H__: openbsd!
19:53 Humble joined #gluster
19:56 andreask joined #gluster
19:57 _pol_ joined #gluster
20:00 hateya joined #gluster
20:11 Humble joined #gluster
20:11 mooperd_ joined #gluster
20:17 ladd joined #gluster
20:18 isomorphic joined #gluster
20:21 awheeler_ Are there any plans to support swauth?  I have made it work, but it requires a few code tweaks on the gluster swift side.
20:27 jrossi left #gluster
20:37 eedfwchris joined #gluster
20:37 eedfwchris Should i be able to mount nfs on localhost? I cant seem to.
20:37 eedfwchris It also seems that when i mount on a non localhost it actually transfers to and back from that mount so that's no good.
20:40 JoeJulian awheeler_: I /think/ so, but feel free to file a bug report and submit your tweaks to gerrit.
20:40 glusterbot http://goo.gl/UUuCq
20:40 JoeJulian ~hack | awheeler_
20:40 glusterbot awheeler_: The Development Work Flow is at http://goo.gl/ynw7f
20:41 JoeJulian eedfwchris: You should be able to, but you don't want to. There's a race condition in the kernel wrt memory allocation.
20:41 eedfwchris hrm
20:42 eedfwchris I'd like to use gluster client but it seems awkwardly slow
20:43 eedfwchris i presume my php stat calls are taking a toll
20:43 JoeJulian @php
20:43 glusterbot JoeJulian: php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://goo.gl/uDFgg for details.
20:44 JoeJulian I've heard rumor that 3.4 is faster, but I haven't had a chance to test it myself.
20:45 eedfwchris I heard that same rumor with 3.3 ;)
20:48 JoeJulian Well 3.3 was definitely faster than 3.1. I skipped 3.2.
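
[Sketch, not from the log: a mitigation commonly paired with the stat()/include problem glusterbot described, assuming APC is the PHP opcode cache in use; the config path is an assumption.]
# stop APC from stat()ing every cached include on each request
cat >> /etc/php.d/apc.ini <<'EOF'
apc.stat=0
EOF
service httpd reload
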
20:48 dberry so instead of going to xfs, can I downgrade gluster to 3.2.9?
20:49 JoeJulian no
20:49 JoeJulian You can use a filesystem other than ext4 or go with a kernel that doesn't have that bug.
20:49 JoeJulian @ext4
20:49 glusterbot JoeJulian: Read about the ext4 problem at http://goo.gl/PEBQU
20:50 JoeJulian And by ext4 that's actually every implementation of ext.
20:50 awheeler_ JoeJulian: Cool, thanks.  Have to see what has changed in git since 3.3 around that bit.  The key difficulty is that gluster volumes can't contain a '.'.
20:51 JoeJulian Hmm, that seems like a rather arbitrary restriction.
20:52 awheeler_ I did find it odd.  Swauth wants to use an account called AUTH_.auth to hold the authentication stuff, which gluster translates to a volume named .auth, which I can't create.  lol
20:52 JoeJulian I wonder if that's because of the xattr naming convention.
20:53 awheeler_ So I just modified the code to look for the volume, or the volume plus a preceding '.'  So my auth volume was subb'd in.
20:53 awheeler_ Thinking it might make more sense to look for the volume name, or the name preceded by 'DOT_' to make it less likely to collied.
20:53 awheeler_ s/collied/collide/
20:53 glusterbot What awheeler_ meant to say was: Thinking it might make more sense to look for the volume name, or the name preceded by 'DOT_' to make it less likely to collide.
20:56 jag3773 joined #gluster
20:57 JoeJulian so that's a modification to the swauth middleware then?
21:00 bstansell joined #gluster
21:00 tjstansell hi folks. anyone here know if the patch for bug 918437 will result in a new 3.3.1 build? or how/when i might get bits that includes that fix?
21:00 glusterbot Bug http://goo.gl/1QRyw urgent, unspecified, ---, pkarampu, POST , timestamps updated after self-heal following primary brick rebuild
21:01 tjstansell that's awesome they fixed the bug so quickly, fyi!!!
21:05 JoeJulian tjstansell: Not sure how quickly stuff sync to the ,,(git repo).
21:05 glusterbot tjstansell: https://github.com/gluster/glusterfs
21:05 JoeJulian tjstansell: Otherwise you may have to get it through ,,(hack).
21:05 glusterbot tjstansell: The Development Work Flow is at http://goo.gl/ynw7f
21:06 awheeler_ JoeJulian: No, not to the swauth plugin, but to the way that gluster decides which volume to mount in relation to a request.
21:06 JoeJulian Oh, good.
21:07 awheeler_ JoeJulian: Additionally it is looking for a db_file and is_status_deleted to be defined in the DiskDir object.
21:10 awheeler_ JoeJulian: Or something close to that, I'll have to re-examine my changes.
21:11 awheeler_ JoeJulian: Right, it was the DiskAccount object that needed those, not DiskDir
21:24 _pol joined #gluster
21:29 tqrst anyone else seeing "kernel: possible SYN flooding on port 24009. Sending cookies." in their server syslogs with 3.3.1?
21:31 tjstansell JoeJulian: thanks for those links.  it looks like the change is in the main repo already, but i guess i'm more curious how things get backported to an older release, like the last stable one (3.3.1)
21:32 tjstansell and if/how a new official centos build would get created (or if it would)
21:34 semiosis wish someone would send me cookies when i SYN flood
21:39 hybrid5122 joined #gluster
21:43 JoeJulian tjstansell: I'd like to know that too. For the most part it looks like things aren't getting backported to 3.3. I find this very frustrating. Add a request to backport to 3.3 to your bug report. I'll try to be a pain in the butt about it.
21:44 tjstansell is there an option in the bug to do that?  i'll definitely fight to get it backported. :)
21:44 JoeJulian No option. Just add it to the notes.
21:45 tjstansell ok.
21:49 tjstansell JoeJulian: well, i've added my comment, and checked the box that i'm requiring more info from the assignee .. so we'll see if that triggers anything.  thanks for the help.
21:49 bstansell joined #gluster
21:50 semiosis please add a bullet to this page referencing the bugfix you'd like backported: http://www.gluster.org/community/documentation/index.php/Backport_Wishlist
21:50 glusterbot <http://goo.gl/6LCcg> (at www.gluster.org)
21:51 semiosis @backport wishlist
21:51 semiosis @learn backport wishlist as http://www.gluster.org/community/documentation/index.php/Backport_Wishlist
21:51 glusterbot semiosis: The operation succeeded.
21:52 JoeJulian semiosis: You're assuming that avati or hagarth will even look at that.
21:52 JoeJulian afaict, the devs have been considering bugfixes in master ending up in 3.4 backporting.
21:53 semiosis JoeJulian: read it and weep! http://irclog.perlgeek.de/gluster/2013-01-18#i_6349888
21:53 glusterbot <http://goo.gl/pTW48> (at irclog.perlgeek.de)
21:53 semiosis tears of joy that is
21:54 JoeJulian Heh, I'll hold my breath waiting. :)
21:54 tjstansell too bad there's no way to request backports in the bug itself...
21:55 semiosis tjstansell: in a comment
21:55 JoeJulian Actually... bugs can be linked. There should be  a backport bug that links to the bugs that need backported... hmm.
21:58 JoeJulian eh, "Depends On" doesn't seem to be something we can add to.
22:00 tjstansell i think we'd want this bug to block a 3.3 backport tracking bug.
22:00 tjstansell and a 3.4 backport tracking bug
22:00 tjstansell or maybe just a generic backport tracking bug :)
22:03 tjstansell semiosis: 1 of the 3 bugs listed on that wiki actually looks like it got backported to 3.3, so that's good.  the other 2 bugs though haven't been updated in 1 and 3 months.
22:04 semiosis told you people look at that page
22:04 semiosis :)
22:04 semiosis please add your bug
22:05 tjstansell yes, i will ...
22:10 tjstansell wiki updated.
22:15 * H__ sees 2 serious bugs in there
22:15 JoeJulian I see a lot more serious bugs in the changelog. I just haven't had a spare moment to try to catalog them.
22:16 JoeJulian That's why I wish the coder would consider the severity at the time they're fixing it and backport it if it's appropriate to do so.
22:16 * H__ screams and runs away weaving arms and other flaps frantically
22:16 tjstansell and tracking backports via wiki seems like a bad idea to me.
22:16 tjstansell it should be part of the bug tracking system
22:17 tjstansell the wiki could link to the dependency tree of the tracking bugs ... that would be fine.
22:19 edong23 joined #gluster
22:19 H__ semiosis: about the upstart script race: i have not found time for it yet and am going with the sleep in post-script. I suggest you add that to your scripts as well as it at least minimizes the race
22:20 semiosis thank you
22:20 semiosis will consider it
22:23 JoeJulian Hmph. I fixed a bug that never made it into release-3.3. I thought I submitted that before the branch though.
22:23 tjstansell semiosis: assuming a bug does get backported, do you know how new builds get created/distributed to official repos, like the centos repo?
22:24 semiosis JoeJulian: ^^^^ ?
22:24 semiosis "the centos repo"?
22:24 semiosis which repo is that exactly?
22:26 tjstansell well, i guess just the one hosted on download.gluster.org ... http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/e
22:26 glusterbot <http://goo.gl/BbGGU> (at download.gluster.org)
22:26 JoeJulian They're not likely to get into EPEL because they're in a Red Hat Supplemental repo and those are restricted from being added to EPEL. When a point release is tagged (or if there's a bug that's critical), kkeithley builds new releases for the ,,(yum repo)
22:26 glusterbot kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
22:26 tjstansell oops. sorry about the trailing e in that.
22:27 H__ I build from source. is there a 3.3.1+patches branch or something I can use instead of the tar ball ?
22:27 JoeJulian And I think a bug's only considered critical if someone specifically asks kkeithley to make a new build for it.
22:27 tjstansell ok. that's fine too.  i actually don't care what repo gets it ... but somehow folks should be able to find out where the *latest* version of these things are.
22:27 JoeJulian The 3.3 branch is named release-3.3
22:28 semiosis tjstansell: s/*latest*/,,(latest)/
22:28 glusterbot tjstansell: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
22:28 JoeJulian You also have to make changes to configure.ac if you're building from source.
22:28 semiosis two for one!
22:28 H__ changes like ?
22:28 semiosis tjstansell: ideally going to gluster.org and clicking download in the top right corner should get anyone to the latest packages
22:30 tjstansell semiosis: agreed.  but if these backports only end up in kkeithley's repo, that's not really true ...
22:30 JoeJulian I'm not sure the reasoning, but kkeithley doesn't upload his patch builds there. You can only get the -1 builds.
22:31 tjstansell notice that the download.gluster.org repo lists -1 from 2012, whereas kkeithley's repo has rev -11 from 06-Mar-2013 (for epel-6/x86_64)
22:32 tjstansell so it seems there's a disparity between the latest "official" release from redhat/gluster.org and what's actually checked into the branch
22:33 H__ so, 18 commits on release-3-3 since 3.3.1 . Right ? http://git.gluster.org/?p=glusterfs.git;a=shortlog;h=refs/heads/release-3.3
22:33 glusterbot <http://goo.gl/7n9Jv> (at git.gluster.org)
22:33 semiosis tjstansell: afaict the release branch gets backported bugs, then gets frozen for QA releases, then a new release is tagged
22:33 tjstansell so there would at some point be a 3.3.2 released?
22:34 semiosis tjstansell: i'd like to see it
22:34 tjstansell me too! :)
22:34 semiosis but there's no official timetable for it
22:34 semiosis that i am aware of
22:34 * semiosis is not official
22:35 H__ JoeJulian: what changes to configure.ac are you referring to ? (when building from source)
22:36 JoeJulian It's been a while...
22:36 JoeJulian Something to do with version information.
22:36 JoeJulian You should be able to diff it against the tar file and find out exactly.
22:37 H__ thanks, I'll do that
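
[Sketch, not from the log: building from the release-3.3 branch instead of the tarball, using the tree's standard autotools flow; the install prefix is an assumption, and the configure.ac version tweak JoeJulian mentions still has to be diffed in by hand against the 3.3.1 tarball.]
git clone git://git.gluster.org/glusterfs.git
cd glusterfs
git checkout release-3.3
./autogen.sh
./configure --prefix=/usr
make && make install
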
22:41 Humble joined #gluster
23:01 Humble joined #gluster
23:02 duerF joined #gluster
23:04 jdarcy joined #gluster
23:08 jdarcy joined #gluster
23:21 Humble joined #gluster
23:23 y4m4 dblack: ping
23:38 msmith_ joined #gluster
23:42 masterzen joined #gluster
23:54 Humble joined #gluster
