IRC log for #gluster, 2013-09-25


All times shown according to UTC.

Time Nick Message
00:07 wgao joined #gluster
00:17 mgalkiewicz joined #gluster
00:57 recidive joined #gluster
01:00 bdperkin joined #gluster
01:12 rcoup joined #gluster
01:21 _pol joined #gluster
01:26 harish joined #gluster
01:31 davinder joined #gluster
01:37 _pol joined #gluster
01:42 bala joined #gluster
01:49 bala joined #gluster
02:05 tw joined #gluster
02:09 jag3773 joined #gluster
02:10 zaitcev joined #gluster
02:31 harish joined #gluster
02:32 _pol joined #gluster
02:44 rjoseph joined #gluster
02:45 wgao joined #gluster
02:48 vshankar joined #gluster
02:48 sgowda joined #gluster
02:48 _pol joined #gluster
03:26 shubhendu_ joined #gluster
03:42 itisravi joined #gluster
03:54 saurabh joined #gluster
04:01 mohankumar joined #gluster
04:17 kanagaraj joined #gluster
04:24 glusterbot New news from newglusterbugs: [Bug 1010834] No error is reported when files are in extended-attribute split-brain state. <http://goo.gl/HlfVxX>
04:28 kshlm joined #gluster
04:29 jag3773 joined #gluster
04:29 jglo joined #gluster
04:33 ppai joined #gluster
04:33 ndarshan joined #gluster
04:35 nightwalk joined #gluster
04:38 lalatenduM joined #gluster
04:46 jsmith_ joined #gluster
04:49 harish_ joined #gluster
04:57 nasso joined #gluster
04:58 emil_ joined #gluster
05:01 vpshastry joined #gluster
05:02 dusmant joined #gluster
05:06 shylesh joined #gluster
05:14 CheRi_ joined #gluster
05:20 raghu joined #gluster
05:22 nshaikh joined #gluster
05:29 rastar joined #gluster
05:32 bala joined #gluster
05:48 vimal joined #gluster
05:55 glusterbot New news from newglusterbugs: [Bug 1011761] NFS crashed <http://goo.gl/SSAMwx>
05:55 bala joined #gluster
05:57 psharma joined #gluster
05:59 hagarth joined #gluster
06:02 ababu joined #gluster
06:05 shruti joined #gluster
06:15 syoyo__ joined #gluster
06:15 glusterbot New news from resolvedglusterbugs: [Bug 959190] NFS crashed <http://goo.gl/WBkkfd>
06:16 rgustafs joined #gluster
06:18 saurabh joined #gluster
06:20 mohankumar joined #gluster
06:24 satheesh1 joined #gluster
06:26 anands joined #gluster
06:26 davinder joined #gluster
06:31 mooperd joined #gluster
06:45 ricky-ticky joined #gluster
06:54 ctria joined #gluster
06:58 odata joined #gluster
06:59 ngoswami joined #gluster
07:03 ekuric joined #gluster
07:06 CheRi_ joined #gluster
07:08 satheesh1 joined #gluster
07:13 lalatenduM joined #gluster
07:16 dusmant joined #gluster
07:21 eseyman joined #gluster
07:29 chirino joined #gluster
07:29 aravindavk joined #gluster
07:29 odata Hi, i have to build a replicate volume, but due to space restrictions i only have one empty LUN the size of my productive LUN holding the data. Is it possible to add each LUN as a brick (the one with data to one and the empty one to another) and use "self-healing" to copy the data over?
07:33 mgebbe joined #gluster
07:33 saurabh joined #gluster
07:41 andreask joined #gluster
07:48 chirino joined #gluster
07:56 verdurin_ joined #gluster
07:59 samppah odata: afaik it's possible to create a volume with existing data but i haven't personally tried it
08:03 odata samppah: i have tried it with small test volumes (1G random data) and it worked; i'm just wondering if this is good practice or if i just got lucky and it will fail on my productive volumes (8 TB)
08:03 lalatenduM joined #gluster
08:06 odata i triggered selfheal  like this: http://gluster.org/community/documentation/index.php/Gluster_3.2:_Triggering_Self-Heal_on_Replicate
08:06 glusterbot <http://goo.gl/BCKDiO> (at gluster.org)
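For reference, the trigger described on that page is just a recursive stat of the whole volume from a client mount; a minimal sketch, assuming the volume is mounted at /mnt/gluster:

    # stat every file on the mount; each lookup prompts the replicate
    # translator to heal that file if the copies disagree
    find /mnt/gluster -noleaf -print0 | xargs --null stat >/dev/null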
08:23 mibby Hi, is it possible to use GlusterFS for both distributed-replicated volumes across a high-speed link and geo-replication across a slower link? My use case is Amazon EC2: 2 servers in a single region (1 in each availability zone), and a 3rd server in another region. Is this configuration possible?
08:24 samppah mibby: currently geo-replication is one-way replication only
08:25 verdurin joined #gluster
08:28 tryggvil joined #gluster
08:34 mibby will normal replication work well across slower links? I understand it might be a bit slower getting to that single site.
08:48 kanagaraj joined #gluster
08:50 ctria joined #gluster
08:53 [o__o] left #gluster
08:54 dusmant joined #gluster
08:54 mbukatov joined #gluster
08:55 mbukatov joined #gluster
08:55 [o__o] joined #gluster
09:04 sgowda joined #gluster
09:06 mtanner_ joined #gluster
09:09 vpshastry joined #gluster
09:11 chirino joined #gluster
09:19 dastar joined #gluster
09:23 chirino joined #gluster
09:39 sgowda joined #gluster
09:50 kanagaraj joined #gluster
10:06 chirino joined #gluster
10:31 nshaikh joined #gluster
10:35 _pol joined #gluster
10:41 vpshastry1 joined #gluster
10:44 pithagorians_ joined #gluster
10:45 pithagorians_ hey all. what makes the gluster partition unmount on clients?
10:48 dusmant joined #gluster
10:57 jtux joined #gluster
11:04 manik joined #gluster
11:09 vpshastry joined #gluster
11:11 rastar joined #gluster
11:17 cyberbootje hi, has anyone heard of the following issue? A virtual machine host is connected to gluster replicated storage, and when all the VMs on that host get busy the host suddenly reboots without any clues or reasons...
11:17 tryggvil joined #gluster
11:20 CheRi_ joined #gluster
11:20 ppai joined #gluster
11:23 hagarth joined #gluster
11:25 eseyman joined #gluster
11:31 Remco cyberbootje: What is stored on the gluster volume? And which version of gluster?
11:41 andrewklau joined #gluster
11:53 StarBeast joined #gluster
11:57 failshell joined #gluster
12:00 nshaikh joined #gluster
12:09 CheRi_ joined #gluster
12:15 harish_ joined #gluster
12:17 andrewklau if I do a full rm -rf /brick1 on a node, will the other replica replicate the files back? I've got an inconsistency in file sizes and I don't mind a rollback, but I just need to get the replica back up and working asap
12:20 bennyturns joined #gluster
12:21 hagarth joined #gluster
12:31 CheRi_ joined #gluster
12:34 davinder joined #gluster
12:42 d-fence joined #gluster
12:46 davinder joined #gluster
12:48 pkoro joined #gluster
12:58 PatNarciso Good morning FreeNode.
12:58 edward2 joined #gluster
12:59 rcheleguini joined #gluster
13:01 jclift joined #gluster
13:04 edward2 joined #gluster
13:06 davinder2 joined #gluster
13:07 Norky_ joined #gluster
13:11 chirino joined #gluster
13:17 recidive joined #gluster
13:17 PatNarciso joined #gluster
13:18 davinder joined #gluster
13:27 rwheeler joined #gluster
13:29 satheesh2 joined #gluster
13:30 davinder2 joined #gluster
13:35 hagarth PatNarciso: good morning
13:37 dastar_ joined #gluster
13:38 vpshastry1 joined #gluster
13:44 [o__o] left #gluster
13:46 ababu joined #gluster
13:46 [o__o] joined #gluster
13:52 bugs_ joined #gluster
13:54 jskinner_ joined #gluster
13:55 mreamy joined #gluster
13:57 esalexa joined #gluster
13:59 vpshastry1 left #gluster
14:06 NuxRo guys, can i just copy a file from a brick onto another mounted volume, or should i expect any issues?
14:08 jskinne__ joined #gluster
14:20 failshell joined #gluster
14:22 shylesh joined #gluster
14:23 lalatenduM joined #gluster
14:24 Norky_ reading directly from a brick should be okay (writing to a brick is not)
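In other words, something like the following should be safe; a sketch assuming /export/brick1 is a brick directory and /mnt/othervol is a client mount of the destination volume:

    # read straight off the brick, but write only through a client mount
    cp /export/brick1/path/to/file /mnt/othervol/path/to/file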
14:27 jag3773 joined #gluster
14:30 ndk joined #gluster
14:37 ndarshan joined #gluster
14:46 _pol joined #gluster
14:55 _pol joined #gluster
15:02 davinder joined #gluster
15:02 chirino joined #gluster
15:04 daMaestro joined #gluster
15:08 sprachgenerator joined #gluster
15:09 jcsp joined #gluster
15:27 zerick joined #gluster
15:33 vpshastry joined #gluster
15:33 vpshastry left #gluster
15:35 ndarshan joined #gluster
15:37 manik joined #gluster
15:43 _pol_ joined #gluster
15:46 kaptk2 joined #gluster
15:50 mooperd joined #gluster
15:57 LoudNoises joined #gluster
15:57 jclift NuxRo: ping
15:57 NuxRo jclift: pong
15:57 jclift Still having the peer rejection problem?
15:58 NuxRo yep
15:58 semiosis @peer rejected
15:58 glusterbot semiosis: I do not know about 'peer rejected', but I do know about these similar topics: 'peer-rejected'
15:58 semiosis @peer-rejected
15:58 glusterbot semiosis: http://goo.gl/SmmGEA
15:58 semiosis hope thats not 404
15:58 semiosis ah, worse
15:58 NuxRo not in google's cache either
15:59 jporterfield left #gluster
15:59 semiosis trying wayback
15:59 jclift Sounds like a known problem + solution, but it's disappeared from de internets
15:59 semiosis wayback fail
15:59 jclift Damn
15:59 semiosis i'll make a page on the wiki, brb
15:59 semiosis 10 min
15:59 NuxRo thanks
15:59 jclift Remember any keywords from it, so we can see if someone made a copy?
16:00 jclift semiosis: Yeah, I was just abt to suggest we copy it to a wiki page :)
16:00 * semiosis never should've used helpfist
16:00 * jclift wonders if a helpfist is like a lart
16:00 semiosis we were supposed to get a static export of the helpfist site
16:00 semiosis still bitter about that
16:00 * semiosis is
16:00 johnbot11 joined #gluster
16:01 semiosis i don't care how snazzy your CRM/SaaS/Cloud marketing strategy is, nothing beats a mediawiki documenting things
16:02 semiosis but i digress
16:02 semiosis back to documenting
16:02 NuxRo if it helps, the rejection may have followed me running a fix-layout ... weeks after adding the bricks :D
16:02 semiosis odd
16:02 NuxRo for some reason, either me being stupid or having the wrong docs, i did not run fix-layout right after add-brick ...
16:05 jclift semiosis: Is helpshiftcrm.com the crowd that used to do the Gluster Q&A site?
16:05 jclift semiosis: The site that shut down a while ago that is?
16:06 jclift semiosis: If so, they still seem to be in existence, as helpshift.com.  Could we ask them for the static export they promised?
16:07 jclift semiosis: Next thought, after you've written up the peer rejection wiki page, would you be ok to respond to NuxRo on the mailing list with the link to it?
16:07 jclift semiosis: If not, I can do the reply thing. :)
17:07 semiosis brb, an emergency just popped up at work
16:08 NuxRo cheers guys, I'll be watching the ml/irc
16:13 jclift What the... ?
16:13 jclift Someone just approved a "subscribe" message to gluster-devel.
16:13 jclift Wasn't me.
16:13 * jclift gives the evil eye to whoever that was
16:17 hagarth joined #gluster
16:19 zaitcev joined #gluster
16:22 __Bryan__ joined #gluster
16:32 bulde joined #gluster
16:35 Mo__ joined #gluster
16:36 DV joined #gluster
16:57 DV joined #gluster
16:57 RedShift joined #gluster
16:59 shylesh joined #gluster
17:00 zerick joined #gluster
17:05 RedShift sup
17:11 DV joined #gluster
17:16 NuxRo semiosis: any luck with that article?
17:19 vpshastry joined #gluster
17:21 DV joined #gluster
17:21 semiosis NuxRo: been busy with work
17:22 semiosis will let you know
17:25 NuxRo semiosis: thanks
17:27 semiosis http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected
17:27 glusterbot <http://goo.gl/g0b4Oi> (at www.gluster.org)
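The recipe on that page amounts to resetting the rejected peer's local state; a rough sketch, assuming a default install keeping state in /var/lib/glusterd and a healthy peer named server1:

    # on the rejected peer only
    service glusterd stop
    # keep the node's UUID (glusterd.info), discard everything else
    cd /var/lib/glusterd && ls | grep -v glusterd.info | xargs rm -rf
    service glusterd start
    gluster peer probe server1     # re-sync volume configs from a good peer
    service glusterd restart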
17:28 semiosis @forget peer-rejected
17:28 glusterbot semiosis: The operation succeeded.
17:28 semiosis @learn peer-rejected as http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected
17:28 glusterbot semiosis: The operation succeeded.
17:28 semiosis NuxRo:
17:28 hagarth semiosis: nice writeup on resolving :)
17:28 semiosis thx
17:33 aliguori joined #gluster
17:36 NuxRo semiosis: thanks a lot
17:36 semiosis yw, hth
17:37 NuxRo hagarth: i think the whole problem was triggered by me running fix-layout several weeks after adding new bricks. does this sound horribly wrong?
17:37 NuxRo there was data created on these new bricks even without fix-layout, which filled my logs with layout-mismatch errors
17:38 hagarth NuxRo: that is not horribly wrong
17:39 hagarth NuxRo: I assume data would have been in directories that were created after the addition of new bricks?
17:40 hagarth however the peer rejected problem should be related to some volume configuration inconsistency
17:41 hagarth semiosis: should we add a troubleshooting section in our admin guide and include such writeups which help in resolving problems?
17:42 Technicool joined #gluster
17:42 semiosis i think a public wiki is better than PDF documentation
17:42 semiosis just my opinion
17:43 NuxRo hagarth: correct, gluster basically started using the new bricks; i thought this was not possible until fix-layout is run
17:43 NuxRo semiosis: +1 wiki
17:43 hagarth semiosis: we should roll out the admin guide in html nevertheless
17:44 hagarth s/in html/in html on gluster.org/
17:44 glusterbot hagarth: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
17:44 hagarth glusterbot: my Error. I should have escaped those whitespaces in between.
17:56 PatNarciso semiosis: sometimes I print a wiki as a PDF.
17:56 semiosis I'm OK with you doing that :)
17:56 PatNarciso did I just blow your mind?  :)
17:56 semiosis it's a mind boggling waste of perfectly good electrons
17:57 semiosis imho ;)
17:57 l0uis an online html manual would be fabulouis. nuke the pdf from orbit.
17:57 glusterbot New news from newglusterbugs: [Bug 1007509] Add Brick Does Not Clear xttr's <http://goo.gl/Qx4F4w>
17:58 semiosis fabulouis?
17:58 l0uis lol
17:58 hagarth markdown makes it easier to convert to html, pdf, mediawiki and your favorite format ;)
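For instance, a tool like pandoc can feed all of those targets from one markdown source; a sketch, assuming a file admin-guide.md:

    pandoc admin-guide.md -o admin-guide.html                # html
    pandoc admin-guide.md -o admin-guide.pdf                 # pdf (needs a LaTeX engine)
    pandoc admin-guide.md -t mediawiki -o admin-guide.wiki   # mediawiki markup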
17:59 hagarth i think we should have a gluster doc patchathon one of these days
17:59 l0uis semiosis: any news on those gluster 3.4 respins? :)
18:01 semiosis not yet sorry
18:09 l0uis semiosis: k, i guess i'll just work around things then. sounds like you probably won't get to it anytime soon?
18:11 andreask joined #gluster
18:13 pithagorians_ joined #gluster
18:16 PatNarciso would it be crazy to mount a gluster volume via an NFS client, simply to gain the caching performance that an NFS mount may provide?
18:17 PatNarciso My thinking is that... if I don't use a system that has readdirplus, I'd like to queeze some sort of performance gain.
18:17 PatNarciso s/queeze/squeeze
18:18 l0uis PatNarciso: I don't think there's anything wrong w/ that? I think it's even recommended to get better small file performance. Or at least I remember reading that somewhere.
18:19 JoeJulian Unless you need directory consistency between clients.
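Mounting through the built-in gluster NFS server is an ordinary NFSv3 mount; a sketch, assuming a volume named myvol served by server1:

    # gluster's NFS server speaks NFSv3 over TCP only
    mount -t nfs -o vers=3,mountproto=tcp server1:/myvol /mnt/myvol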
18:20 _ilbot joined #gluster
18:20 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
18:21 johnmark semiosis: do you have a blog?
18:22 johnmark because I would like to syndicate your peer rejected article
18:22 PatNarciso how excessive would the inconsistency be?  are we talking about a few seconds/mins of lag, or are we talking about total fragmentation?
18:22 johnmark or I can just copy + paste :)
18:22 semiosis johnmark: it's in the gluster.org wiki
18:22 semiosis @peer-rejected
18:22 glusterbot semiosis: http://goo.gl/g0b4Oi
18:22 semiosis no blog yet
18:22 johnmark semiosis: I know - but if I write it up in the blog, it will be attributed to me, unless I create an account for you
18:22 johnmark which you should have anyway :)
18:22 semiosis oh hm
18:23 * PatNarciso asks out of ignorance as he hasn't played with Gluster+NFS yet.
18:24 _ilbot joined #gluster
18:24 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
18:25 mooperd joined #gluster
18:30 JoeJulian PatNarciso: seconds
18:31 JoeJulian That's from the kernel FSCache.
18:33 PatNarciso right on - I could live with that.
18:34 bulde joined #gluster
18:35 purpleidea PatNarciso: try to avoid the nfs stuff if you can. think about it as for legacy users.
18:35 purpleidea *as being for legacy users
18:36 PatNarciso aww man. bummer.
18:36 PatNarciso normally, with all my installs, I start with Ubuntu 12.04  and document from there.
18:37 JoeJulian I like to start with Fedora 2...
18:37 * JoeJulian is being facetious.
18:38 _ilbot joined #gluster
18:38 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
18:39 _ilbot joined #gluster
18:39 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
18:47 PatNarciso cluster.choose-local: the documented-undocumented options list says this will increase read speed. cool.  how could I increase write speed?  my concern: I'll have users storing large video files [often 2GB, sometimes larger] via LAN, and I want to make sure their storage speed isn't reduced significantly.
18:50 PatNarciso this assumes that I'm OK with the file not being distributed/redundant immediately.
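For what it's worth, the read-side option mentioned above is a per-volume toggle; a sketch, assuming a volume named myvol:

    # prefer the local replica for reads when the client is also a server
    gluster volume set myvol cluster.choose-local true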
18:52 JoeJulian Create some non-gluster directory and have a cron job rsync it?
18:52 StarBeast joined #gluster
18:53 PatNarciso Heh - I was thinking the same thing.  User experience would be funky tho.
18:54 PatNarciso I was looking at NUFA -- not sure if this would do the trick, or if I'm simply wanting it to win because it's what I desire.
18:57 recidive joined #gluster
18:57 JoeJulian Yeah, you could build a volume without any change management and load it directly using glusterfsd. That's not a "beginner" task so make sure you've learned everything. :D
19:00 * PatNarciso remains beginner.
19:12 PatNarciso so, JoeJulian -- your last comment is just now sinking in.  understanding that in this setup there'd be no change management -- how would server1 and server2 "sync"?
19:13 JoeJulian You can build the graph as you like. If nufa still works (and I don't know if it does or doesn't) you could build that in to your graph.
19:15 PatNarciso graph: All the translators hooked together to perform a function is called a graph.
19:16 neofob joined #gluster
19:30 _pol joined #gluster
19:36 jdarcy joined #gluster
19:49 tryggvil joined #gluster
19:51 mooperd joined #gluster
19:52 jbrooks joined #gluster
19:53 jbrooks Hey guys, is there a definitive: "these are the open firewall ports that gluster needs" doc?
19:54 jbrooks http://www.jamescoyle.net/how-to/457-glusterfs-firewall-rules --> I'll try that
19:54 glusterbot <http://goo.gl/bIuGaF> (at www.jamescoyle.net)
19:55 JoeJulian @ports
19:55 glusterbot JoeJulian: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
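Translated into firewall rules, that list comes out roughly as below; a sketch for a 3.4 server with up to four bricks (widen the brick range to match your brick count):

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd (+ rdma)
    iptables -A INPUT -p tcp --dport 49152:49155 -j ACCEPT   # one port per brick
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster nfs + nlm
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # rpcbind/portmap
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    iptables -A INPUT -p tcp --dport 2049 -j ACCEPT          # nfs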
19:56 JoeJulian Also see purpleidea's ,,(puppet) module.
19:56 glusterbot (#1) https://github.com/semiosis/puppet-gluster, or (#2) https://github.com/purpleidea/puppet-gluster
19:56 jbrooks thanks, JoeJulian
20:02 uebera|| joined #gluster
20:02 uebera|| joined #gluster
20:06 t4bs joined #gluster
20:08 PatNarciso JoeJulian: my thoughts after staring at this for an hour -- I'll use samba to share the glustervolume to the lan.  within the glustervolume I'll have a symbolic link to a local drive, and configure samba to follow symbolic links.  nightly, I'll run a proc to remove the symbolic link, and move/rsync the data from the local drive to the glustervolume.
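The nightly piece of that plan could be as small as the following; a sketch with assumed paths (/mnt/fastlocal for the local drive, incoming as the shared directory on /mnt/glustervol):

    # run from cron: retire the symlink, then drain the local drive
    rm /mnt/glustervol/incoming              # was a symlink to /mnt/fastlocal
    mkdir /mnt/glustervol/incoming
    rsync -a --remove-source-files /mnt/fastlocal/ /mnt/glustervol/incoming/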
20:09 johnbot11 joined #gluster
20:10 bet_ joined #gluster
20:17 StarBeast joined #gluster
20:24 mooperd joined #gluster
20:24 failshell joined #gluster
20:29 StarBeast joined #gluster
20:45 _pol joined #gluster
20:47 _pol_ joined #gluster
20:47 joshin joined #gluster
21:00 badone joined #gluster
21:15 purpleidea it's a proud day for me when JoeJulian is promoting my puppet module :P
21:16 semiosis @forget puppet 1
21:16 glusterbot semiosis: The operation succeeded.
21:16 semiosis @puppet
21:16 glusterbot semiosis: https://github.com/purpleidea/puppet-gluster
21:17 JoeJulian Hmm, 3.4.1rc1 just released.
21:17 purpleidea semiosis: wat?
21:17 purpleidea semiosis: why?
21:18 JoeJulian Because he listed it for reference as it's not a general purpose module. Since yours is now a better reference and is a general purpose one, he's deprecating his.
21:19 jclift left #gluster
21:19 purpleidea JoeJulian: okay.
21:19 purpleidea semiosis: i was just looking at your code actually... trying to think if there are features in your code that i can add to my module for you.
21:19 JoeJulian Even though yours formats drives... ;)
21:20 purpleidea i know we talked about it briefly at the con. if you would use it, i'm willing to do the porting work to make sure it's 100% on ubuntu
21:20 semiosis ok i'll put it back.  I at least wanted yours to be first on the list
21:20 purpleidea JoeJulian: hehe yeah i know, it's still my favourite part
21:20 semiosis @learn puppet as semiosis' unmaintained puppet module: https://github.com/semiosis/puppet-gluster
21:20 glusterbot semiosis: The operation succeeded.
21:20 semiosis @puppet
21:20 glusterbot semiosis: (#1) https://github.com/purpleidea/puppet-gluster, or (#2) semiosis' unmaintained puppet module: https://github.com/semiosis/puppet-gluster
21:20 purpleidea semiosis: i don't mind it being off. but i'd like to make sure mine does everything you need
21:21 JoeJulian I need it to make good Mocha Breve'.
21:21 purpleidea JoeJulian: don't we all
21:22 semiosis purpleidea: we just had a power failure at the office.  that big vm server i mentioned won't turn on now.  even though it was on battery backup & should have shut itself down gracefully.
21:22 _pol joined #gluster
21:22 purpleidea semiosis: need help with that?
21:22 semiosis put the HDDs in another machine & back in business.  i'll deal with the bad mobo another day
21:23 purpleidea semiosis: okay :) happy to hear it but i think i'm missing the context as to why you're telling me?
21:25 semiosis i mentioned that i had a powerful machine running a bunch of vm's at the con... now it's dead.  that's all
21:25 semiosis s/powerful machine/machine that used to be powerful, 3 years ago/
21:25 glusterbot What semiosis meant to say was: i mentioned that i had a machine that used to be powerful, 3 years ago running a bunch of vm's at the con... now it's dead.  that's all
21:26 semiosis never mind
21:26 * semiosis gbtw
21:26 JoeJulian I remember overhearing part of that conversation. It's true. It happened.
21:27 purpleidea semiosis: got it. i guess the magic smoke is gone for good
21:27 semiosis probably.  this happened before while it was under warranty.  but that's long gone
21:28 JoeJulian I was glad that people got the "magic smoke" reference when I made it. I wasn't sure if that was too old and long forgotten.
21:28 purpleidea semiosis: maybe the osl cloud will be a good replacement
21:28 purpleidea JoeJulian: haha yeah, i'm also that old, but hopefully not forgotten
21:29 semiosis http://en.wikipedia.org/wiki/Magic_smoke
21:29 glusterbot Title: Magic smoke - Wikipedia, the free encyclopedia (at en.wikipedia.org)
21:29 purpleidea wow it has a wiki page
21:29 JoeJulian someone had too much time on their hands....
21:37 purpleidea does someone have a detailed document on the elastic hash specifics, including the magic JoeJulian mentioned at the conference about specifying weird file names to "pick" a brick?
21:40 JoeJulian The magic filename isn't documented anywhere.
21:41 purpleidea JoeJulian: is it a stable interface? want to remind us of the format?
21:41 JoeJulian It's basically {whatever}@{translator}:{subvolume}
21:42 JoeJulian So to specify you want "foo" on subvolume 0 of the distribute translator: foo@dht:0
21:42 JoeJulian The rest of the specifics you can get from my blog post, "dht misses are expensive"
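Hedging heavily since this is undocumented: creating a file under such a name from a client mount should pin it to that subvolume; a sketch assuming a volume mounted at /mnt/myvol (the translator may need its full volfile name, e.g. myvol-dht, rather than plain dht):

    # quoting keeps the shell from mangling the name
    touch '/mnt/myvol/foo@dht:0'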
21:43 purpleidea http://joejulian.name/blog/dht-misses-are-expensive/
21:43 glusterbot <http://goo.gl/A3mCk> (at joejulian.name)
21:43 JoeJulian @lucky dht misses are expensive
21:43 glusterbot JoeJulian: http://goo.gl/A3mCk
21:44 purpleidea @lucky joe julian is awesome
21:44 glusterbot purpleidea: http://www.imdb.com/name/nm1928515/
21:44 purpleidea bad gluster bot, bad.
21:45 JoeJulian lol
21:45 JoeJulian better him than the San Francisco politician.
21:45 recidive joined #gluster
22:04 zwu joined #gluster
22:12 johnbot11 joined #gluster
22:19 rwheeler joined #gluster
23:00 andrewklau joined #gluster
23:08 JoeJulian purpleidea: http://sideeffect.kr/popularconvention/#python
23:09 glusterbot Title: Popular Coding Convention on Github (at sideeffect.kr)
23:09 JoeJulian See... spaces!
23:11 torrancew The graph is pretty consistent for all langs.. except Java
23:11 JoeJulian But we were already aware that java developers were masochistic.
23:12 torrancew Aye
23:16 andrewklau if I want to 'reinstall' a brick in a replica, can I simply rm -rf /badbrick/ and let gluster self-heal the files back?
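A rough sketch of that recipe, assuming a volume named gv0 and the bad brick at /badbrick (wipe the contents, but keep the brick directory itself and its xattrs):

    # on the affected server
    rm -rf /badbrick/* /badbrick/.glusterfs
    # then ask the surviving replica to repopulate it
    gluster volume heal gv0 full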
23:19 purpleidea JoeJulian: wow, cool stats. I wonder how misleading this is... Kernel uses tabs.
23:20 JoeJulian That's because the kernel's old.
23:20 JoeJulian It predates "best practices".
23:20 purpleidea also, if more people are doing something, it must be better right?
23:20 JoeJulian Yes.
23:20 purpleidea hehe
23:20 JoeJulian If everyone you know jumps off a cliff...
23:21 purpleidea JoeJulian: i actually switched a lot of my code over to spaces once I wrote: http://ttboj.wordpress.com/smarter-spaces/ but then the newest gedit broke it due to a broken autotab plugin, so i'm waiting for a fix.
23:21 glusterbot Title: smarter spaces | The Technical Blog of James (at ttboj.wordpress.com)
23:21 JoeJulian ... then what the heck are you doing still standing there? If everyone's jumping, most of them rational, educated people who aren't normally inclined to suicide, what do they know that you don't?
23:22 purpleidea JoeJulian: have you seen this? http://www.emacswiki.org/emacs/TabsSpacesBoth
23:22 glusterbot Title: EmacsWiki: Tabs Spaces Both (at www.emacswiki.org)
23:22 JoeJulian hehe
23:22 JoeJulian It's still opening but I can guess.
23:22 JoeJulian Oh, right. I have seen that.
23:23 purpleidea bbl food
23:51 chirino joined #gluster
