IRC log for #gluster, 2013-01-31

All times shown according to UTC.

Time Nick Message
00:01 semiosis JoeJulian: what do you suggest to find out how it is working?
00:04 JusHal left #gluster
00:07 w3lly joined #gluster
00:17 Shdwdrgn joined #gluster
00:29 manik joined #gluster
00:33 JoeJulian http://joejulian.name/blog/glusterfs-volumes-not-mounting-in-debian-squeeze-at-boot-time/
00:33 glusterbot <http://goo.gl/t6PY4> (at joejulian.name)
00:36 tomsve joined #gluster
00:37 raven-np joined #gluster
00:48 w3lly joined #gluster
00:55 w3lly joined #gluster
00:59 amccloud joined #gluster
01:00 semiosis JoeJulian: try wheezy
01:00 semiosis see if you have less success than I had
01:06 JoeJulian mmkay
01:11 clusterflustered so i see that gluster stores files algorithmically, without using metadata, does this mean the process of choosing a shard never happens?
01:15 JoeJulian If
01:15 JoeJulian If I'm understanding your question correctly, then you are correct.
01:16 JoeJulian I think this article explains the method of brick placement pretty well: http://joejulian.name/blog/dht-misses-are-expensive/
01:16 glusterbot <http://goo.gl/A3mCk> (at joejulian.name)
01:19 clusterflustered i havent read this much in ages
01:19 clusterflustered my eyes hurt
01:19 clusterflustered i think my brain has the dumb, too
01:24 glusterbot New news from resolvedglusterbugs: [Bug 763897] peer probe command does not allow alternative name or IP to be specified from initial host <http://goo.gl/QKh0Q>
01:30 semiosis ha
01:32 clusterflustered how common is it for people to lookup files that aren't there?
01:33 elyograg clusterflustered: people won't do it very often, but programs do it all the time.
01:34 kevein joined #gluster
01:35 clusterflustered is there a way to see that that is happening? in the logs, etcs?
01:39 elyograg clusterflustered: I would guess the answer to that is no, but to find out for sure, go to a client mount point and do 'ls badfilename' then check the logs.
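A minimal way to try elyograg's check, assuming a FUSE mount at /mnt/gluster and the default client log location under /var/log/glusterfs/ (both placeholders here):

    cd /mnt/gluster
    ls no-such-file                                  # deliberately trigger a lookup for a missing file
    tail -n 20 /var/log/glusterfs/mnt-gluster.log    # the FUSE client log is usually named after the mount path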
01:39 elyograg gotta catch a train.
01:54 JFK joined #gluster
01:57 w3lly joined #gluster
02:08 luckybambu joined #gluster
02:34 raven-np joined #gluster
02:36 designbybeck__ joined #gluster
03:00 w3lly joined #gluster
03:04 overclk joined #gluster
03:05 bharata joined #gluster
03:47 hagarth joined #gluster
03:52 bulde joined #gluster
03:53 berend joined #gluster
03:56 shireesh joined #gluster
04:02 w3lly joined #gluster
04:05 sripathi joined #gluster
04:22 sahina joined #gluster
04:33 pai joined #gluster
04:39 purpleidea joined #gluster
04:41 _benoit_ joined #gluster
04:43 Humble joined #gluster
04:45 vpshastry joined #gluster
04:54 _benoit_ joined #gluster
04:54 jjnash left #gluster
04:57 lala joined #gluster
05:06 shireesh joined #gluster
05:23 Humble joined #gluster
05:32 mohankumar joined #gluster
05:39 raghu joined #gluster
05:40 lala joined #gluster
05:59 Nevan joined #gluster
06:02 jtux joined #gluster
06:03 w3lly joined #gluster
06:06 mohankumar joined #gluster
06:10 deepakcs joined #gluster
06:11 shylesh joined #gluster
06:25 sripathi joined #gluster
06:27 andreask joined #gluster
06:28 samppah :O
06:29 samppah deepakcs: hello, are you working with ovirt?
06:31 deepakcs samppah, VDSM part of ovirt
06:33 samppah deepakcs: ok, do you know if ovirt requires upgraded qemu package or is gluster support backported into qemu-img-rhev?
06:35 deepakcs samppah, gluster support is in qemu 1.3.. reg. -rhev not sure, u need to check with RH for that i guess
06:37 samppah deepakcs: thanks :)
06:43 deepakcs samppah, wc
06:43 Humble joined #gluster
06:43 ramkrsna joined #gluster
06:43 ramkrsna joined #gluster
06:51 lala joined #gluster
06:55 vimal joined #gluster
06:55 bala joined #gluster
06:57 glusterbot New news from newglusterbugs: [Bug 905871] Geo-rep status says OK , doesn't sync even a single file from the master. <http://goo.gl/CpA33>
06:57 sahina joined #gluster
06:59 ngoswami joined #gluster
07:05 sripathi joined #gluster
07:07 w3lly joined #gluster
07:08 lala_ joined #gluster
07:26 theron joined #gluster
07:29 jbrooks joined #gluster
07:31 lh joined #gluster
07:31 lh joined #gluster
07:45 rgustafs joined #gluster
07:45 ekuric joined #gluster
07:57 shireesh joined #gluster
07:58 sripathi1 joined #gluster
08:02 sripathi joined #gluster
08:03 Azrael808 joined #gluster
08:08 w3lly joined #gluster
08:27 bulde joined #gluster
08:29 bala joined #gluster
08:34 tjikkun_work joined #gluster
08:44 w3lly joined #gluster
08:46 sripathi joined #gluster
08:53 manik joined #gluster
08:58 tryggvil joined #gluster
09:02 isomorphic joined #gluster
09:04 ngoswami joined #gluster
09:05 sahina joined #gluster
09:06 hateya joined #gluster
09:19 Humble joined #gluster
09:24 Staples84 joined #gluster
09:31 Joda joined #gluster
09:34 jtux joined #gluster
09:35 bauruine joined #gluster
09:36 17WAA25W1 joined #gluster
09:37 jjnash joined #gluster
09:37 nightwalk joined #gluster
09:38 tomsve joined #gluster
09:55 pluto17 joined #gluster
09:56 pluto17 I have seen that geo-replication is using rsync...
09:56 pluto17 is it possible to set some kind of filter to determine which files should be synchronized?
09:56 pluto17 (sorry! Hi All!)
09:58 glusterbot New news from newglusterbugs: [Bug 906238] glusterfs client hang when parallel operate the same dir <http://goo.gl/sPdjr>
10:01 hchiramm_ joined #gluster
10:04 johndescs joined #gluster
10:06 andreask joined #gluster
10:09 xavih joined #gluster
10:30 Humble joined #gluster
10:30 pluto17 Hi? Is anybody there?
10:32 duerF joined #gluster
10:45 JuanBre joined #gluster
10:46 dobber joined #gluster
10:47 shireesh joined #gluster
10:51 hateya joined #gluster
10:53 vpshastry joined #gluster
10:57 ctria joined #gluster
11:10 jjnash joined #gluster
11:10 nightwalk joined #gluster
11:11 isomorphic joined #gluster
11:21 raven-np joined #gluster
11:21 edward joined #gluster
11:28 shireesh joined #gluster
11:31 lala joined #gluster
11:34 vikumar joined #gluster
11:41 manik hagarth and hagarth_ : ping
11:42 hagarth manik: pong
11:43 manik hagarth: do you have anyone on your team in Europe?  Or is everyone based in India/US?
11:43 manik hagarth: someone technical I mean.  :)
11:44 shireesh joined #gluster
11:44 ngoswami joined #gluster
11:46 hagarth manik: most developers are in India/US. We have ndevos out there in Europe.
11:46 hateya joined #gluster
11:46 manik hagarth: ok, thanks
11:47 x4rlos semiosis: ping
11:48 x4rlos partner: ping
11:48 x4rlos Just wondering if you guys had much progress from last night on the debian os's and the mount on boot "problem".
11:49 vpshastry joined #gluster
11:51 Staples84 joined #gluster
11:51 johnmark manik: getting someone technical in the EU would be beneficial, IMO
11:51 johnmark manik: do you have anyone in mind?
11:53 lala_ joined #gluster
11:56 manik johnmark: nope, was just asking if there was anyone
12:00 _br_ joined #gluster
12:01 vimal joined #gluster
12:01 johndescs hum, when a client mounts a volume without being listed in auth.allow, the mount succeeds but any subsequent command fails (ls, df, …)
12:02 johndescs bug or feature ?
12:02 johndescs sounds like a punishment :D
12:02 johndescs fails by stalling forever…
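For reference, auth.allow and auth.reject are per-volume options; a hedged example with a made-up volume name and subnet:

    gluster volume set myvol auth.allow 192.168.0.*   # comma-separated list of clients allowed to mount
    gluster volume set myvol auth.reject 10.0.0.13    # optionally reject specific addresses
    gluster volume info myvol                         # confirm the options were applied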
12:08 rgustafs joined #gluster
12:11 partner x4rlos: pong. yes, it fails, figuring out _proper_ fix
12:13 partner Joe even wrote a blog entry: http://joejulian.name/blog/glusterfs-volumes-not-mounting-in-debian-squeeze-at-boot-time/
12:13 glusterbot <http://goo.gl/t6PY4> (at joejulian.name)
12:22 kkeithley1 joined #gluster
12:27 JuanBre joined #gluster
12:30 hateya joined #gluster
12:34 guigui1 joined #gluster
12:37 morse joined #gluster
12:37 JuanBre joined #gluster
12:43 x4rlos partner: ah, cool. Was scrolling back through text and seemed you highlighted the problem related to nfsmount :-)
12:43 x4rlos oo0h. i'll have a read :-)
12:48 tomsve joined #gluster
12:49 rgustafs joined #gluster
12:49 aliguori joined #gluster
12:49 x4rlos sweet.
12:52 luckybambu joined #gluster
12:58 jbrooks joined #gluster
13:08 w3lly joined #gluster
13:10 partner x4rlos: its related but Joe sums it all nicely on the blog, the backlog is full of guessing and random notes
13:30 bfoster joined #gluster
13:36 _Bryan_ joined #gluster
13:48 Humble joined #gluster
13:54 the-me_ joined #gluster
13:55 shylesh joined #gluster
13:56 hateya joined #gluster
13:57 badone joined #gluster
14:01 dustint joined #gluster
14:04 vpshastry joined #gluster
14:07 jack_ joined #gluster
14:08 Humble joined #gluster
14:14 rwheeler joined #gluster
14:26 puebele1 joined #gluster
14:30 bennyturns joined #gluster
14:31 bauruine joined #gluster
14:34 killermike joined #gluster
14:35 luckybambu joined #gluster
14:37 rwheeler joined #gluster
14:38 spn joined #gluster
14:40 hagarth joined #gluster
14:43 bronaugh joined #gluster
14:56 puebele joined #gluster
15:00 manik joined #gluster
15:00 Azrael808 joined #gluster
15:00 manik joined #gluster
15:01 puebele1 joined #gluster
15:03 stopbit joined #gluster
15:05 plarsen joined #gluster
15:07 Staples84_ joined #gluster
15:08 Staples84_ joined #gluster
15:08 Staples84_ joined #gluster
15:09 vpshastry joined #gluster
15:09 Staples84_ joined #gluster
15:12 edward left #gluster
15:12 puebele joined #gluster
15:13 edward joined #gluster
15:16 lh joined #gluster
15:17 Staples84 joined #gluster
15:18 Humble joined #gluster
15:18 vimal joined #gluster
15:18 puebele joined #gluster
15:29 glusterbot New news from newglusterbugs: [Bug 906401] glusterfsd crashes sometimes on disconnect when SSL is enabled <http://goo.gl/EkIes>
15:30 wushudoin joined #gluster
15:31 triode3 joined #gluster
15:32 chouchins joined #gluster
15:32 triode3 Is there a big performance difference between 3.2.5 and the latest? Our performance is still pretty bad on 3.2.5 under centos and I am thinking of testing 3.3... is it worth it?
15:38 semiosis x4rlos: pong
15:39 x4rlos semiosis: hi.
15:40 partner semiosis: good $timeofday
15:40 semiosis triode3: i've noticed (but not quantified) improved fuse performance on kernels 3.0 and newer
15:40 semiosis hi x4rlos & partner
15:41 Azrael808 joined #gluster
15:41 triode3 semiosis, hrm. I see centos is still on 2.6.32, even on cent 6.3... What distro are you using?
15:42 semiosis ubuntu \o/
15:43 triode3 ah. I see. I kindof gave up ubuntu after the switch. I can do gentoo/RHEL/Cent/Debian/etc. I may change to get to a later kernel if it is a real improvement. We have had very slow gluster rates for years and tried a lot to bring it up.
15:43 semiosis triode3: idk specifics but i'd guess there's probably some performance improvements in glusterfs 3.3... would be interested to hear your feedback if you try
15:43 triode3 heh, oddly, I can not get to download.gluster.com/pub/gluster/glusterfs/LATEST/RHEL right now.
15:43 semiosis triode3: s/com/org/
15:44 triode3 semiosis, wow, I copy/pasted that right from http://www.gluster.org/community/documentation/index.php/Getting_started_install
15:44 glusterbot <http://goo.gl/chDN9> (at www.gluster.org)
15:45 triode3 semiosis, they need to fix the docs. :O
15:45 semiosis triode3: they is us.  it's a wiki & anyone can contribute/edit
15:45 bitsweat joined #gluster
15:45 triode3 semiosis, ah, ok. Well, it's incorrect. :)
15:46 puebele1 joined #gluster
15:46 semiosis johnmark: http://irclog.perlgeek.de/gluster/2013-01-29#i_6390906
15:46 glusterbot <http://goo.gl/PKkHj> (at irclog.perlgeek.de)
15:46 semiosis johnmark: <kombucha> Hoping to have a gluster event / camp / something this summer in NYC btw  :-)
15:47 semiosis triode3: thanks for the feedback
15:49 semiosis triode3: hah oops, i already noted on the talk page for that article that the links are broken... http://www.gluster.org/community/documentation/index.php/Talk:Getting_started_install
15:49 glusterbot <http://goo.gl/IDKwC> (at www.gluster.org)
15:49 triode3 semiosis, sure. I am just trying to give it another shot before we get something else really. We have been using it since uh, about 2009.
15:49 semiosis TODO list fail :(
15:49 triode3 semiosis, heh, yeah, I know the TODO failures.
15:50 Joda joined #gluster
15:51 triode3 semiosis, if this seems ok and I try to update my older gluster to the new one, will there be problems between those two revisions?
15:51 semiosis upgrade to 3.3.0+ from <= 3.2.x requires downtime, see ,,(3.3 upgrade notes)
15:51 glusterbot http://goo.gl/qOiO7
15:51 triode3 ok.
15:54 puebele1 joined #gluster
15:57 triode3 semiosis, I see you guys are saying xfs now. Should I use that over ext4 for bricks?
15:57 JuanBre joined #gluster
15:57 semiosis yep
15:57 semiosis with inode size 512
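A minimal sketch of that brick format, with a hypothetical device and brick mount point:

    mkfs.xfs -i size=512 /dev/sdb1     # 512-byte inodes leave room for gluster's extended attributes
    mkdir -p /export/brick1
    mount /dev/sdb1 /export/brick1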
15:57 triode3 semiosis, for reference, I have 28 bricks, each with 3.6TB of raid5.
15:57 aliguori joined #gluster
15:58 triode3 semiosis, ah, ok.
15:59 m0zes dstat
15:59 m0zes :/ whoops
16:01 puebele joined #gluster
16:09 daMaestro joined #gluster
16:10 bugs_ joined #gluster
16:15 _Bryan_ joined #gluster
16:15 _Bryan_ #ops
16:15 manik joined #gluster
16:19 kombucha anyone got the doc link handy for why to use xfs not ext4?
16:19 kombucha glusterbot?
16:21 x4rlos jojulians page
16:21 x4rlos with an e
16:22 lh joined #gluster
16:22 lh joined #gluster
16:25 m0zes ,,(ext4) | kombucha
16:25 glusterbot Read about the ext4 problem at http://goo.gl/PEBQU
16:27 mynameisbruce_ joined #gluster
16:27 mynameisbruce_ hi !
16:27 kombucha much obliged
16:27 mynameisbruce_ looks like libglusterfsclient is removed from master branch? does that mean that libglusterfsclient will not be integrated in 3.4?
16:28 m0zes mynameisbruce_: I think it is being replaced by libgfapi
16:29 mynameisbruce_ okay...thx
16:30 mynameisbruce_ where can i find infos about libgfapi? mailinglist?
16:30 mynameisbruce_ i like to run kvm vms on 3.4...
16:31 mynameisbruce_ i read that libglusterfsclient or now libgfapi will boost performance of vms
16:36 johnmark mynameisbruce_: you can find a few things in the gluster-devel archives
16:37 lh joined #gluster
16:37 lh joined #gluster
16:37 johnmark mynameisbruce_: but you can find out how it works by building from source, or running a QA build
16:38 amccloud joined #gluster
16:39 mynameisbruce_ i think i had to build an 1.0 version of libvirt to use the "driver gluster:" storage directly
16:40 johnmark mynameisbruce_: yeah, I know the latest release has what you need, but I don't know if there are any binaries available
16:41 johnmark mynameisbruce_: gluster binaries can be found at http://bits.gluster.org/pub/gluster/glusterfs/v3.4.0qa7/
16:41 glusterbot <http://goo.gl/Ber7b> (at bits.gluster.org)
16:41 mynameisbruce_ i gonna build an debian package and give it a try
16:41 johnmark ah, ok
16:41 * johnmark looks for blog posts
16:42 johnmark mynameisbruce_: not sure if you saw this: http://www.youtube.com/watch?v=JG3kF_djclg
16:42 glusterbot Title: Using QEMU to boot a VM image on GlusterFS volume - YouTube (at www.youtube.com)
16:45 mynameisbruce_ oh thanks a lot...looks good
16:46 clusterflustered another question, if i want to localize all my data, can i install gluster everywhere, and have 100 percent replication to each node? this of course assumes my data is smaller than all local hard drives.
16:48 mynameisbruce_ i think the problem is that you cant configure gluster to prefer local node as read source
16:48 johnmark kombucha: hey. definitely interested in helping you spin up a gluster meetup
16:49 kombucha hey that's great johnmark (I didn't realize you knew my irc nick, cool)
16:50 kombucha I'll shoot you an email to follow up.
16:50 johnmark kombucha: I only know your IRC nick because semiosis directed me to it :)
16:51 johnmark and I owe you an email, I believe :)
16:51 kombucha good to hear from you, and I owe you updating from 3.2 to 3.3, hahaha
16:55 luckybambu_ joined #gluster
16:56 johnmark heh :)
17:01 sashko joined #gluster
17:10 JoeJulian 5 rewrites later, I send my response to gluster-users. I should have saved the first draft for you to read <evil laugh>. (johnmark, semiosis, jdarcy, kkeithley)
17:11 semiosis mynameisbruce_: debian experimental has 3.4 packages... http://packages.debian.org/search?keywords=glusterfs&searchon=names&suite=experimental&section=all
17:11 glusterbot <http://goo.gl/G1Aoc> (at packages.debian.org)
17:12 semiosis mynameisbruce_: gluster automatically prefers the local replica, you dont have to configure that (and you're right, you can't configure it)
17:12 mynameisbruce_ i know....i use the experimental 3.4qa7 packages....without dependency hell :D
17:12 mynameisbruce_ but qemu-kvm has quite a lot of dependencies
17:12 mynameisbruce_ so i think i build them from scratch
17:13 mynameisbruce_ @semiosis ... i tested with 3.3 ... and i dont think that gluster automatically prefers the local node
17:14 mynameisbruce_ i dont know why...maybe thats a bug ?
17:14 johnmark JoeJulian: oh oh oh... you must share
17:14 semiosis weird
17:15 johnmark semiosis: oh nice - good to know we have builds in debian exp.
17:15 JoeJulian clusterflustered: http://joejulian.name/blog/glusterfs-replication-dos-and-donts/
17:15 glusterbot <http://goo.gl/B8xEB> (at joejulian.name)
17:15 JoeJulian johnmark: I broke down 3 paragraphs of response to "That doesn't lend me much confidence in your expertise with regard to your other recommendations, Stephan."
17:16 JoeJulian I had quite the rant going...
17:16 johnmark mynameisbruce_: well, if the mount point is on the same server as the replica, then it *should* be the first responder
17:16 semiosis johnmark: it's good for what it is, but right now there's no official debian package of the latest glusterfs release 3.3.1.  stable has 3.0.5, unstable has 3.2.7, and experimental has 3.4qa?
17:16 semiosis johnmark: i'm kinda disappointed in that situation
17:16 johnmark semiosis: oy. ok
17:17 johnmark yeah... is that our fault? just trying to figure out why we couldn't get 3.3.x into unstable
17:17 johnmark I think it had to do with release schedule
17:17 triode3 I don't need to install the glusterfs-fuse (rpms) on the servers if I am not mounting on the servers, correct?
17:17 spn joined #gluster
17:17 JoeJulian incorrect. There are several features (self-heal daemon) that use those bits.
17:18 triode3 Is there a doc somewhere that explains what tweaking and performance enhancements I can do, or is the gluster now all automagic (I have not installed it since the 2.x series)
17:18 johnmark JoeJulian: I was going to give you public kudos for your response.
17:18 johnmark triode3: you can still do all sorts of modifications to vol files
17:18 JoeJulian triode3: There's this old ,,(options) page but I honestly haven't seen any improvements from tweaking anything.
17:19 glusterbot triode3: http://goo.gl/dPFAf
17:19 semiosis clusterflustered: JoeJulian: i'd recommend replica 3 + quorum if you want servers to also be writable clients.  seems thats the only sane way to recommend people to have writable clients on their servers without them going all split-brain
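A hedged sketch of that layout with made-up names; cluster.quorum-type auto makes clients refuse writes unless a majority of the replicas is reachable:

    gluster volume create myvol replica 3 \
        server1:/export/brick1 server2:/export/brick1 server3:/export/brick1
    gluster volume set myvol cluster.quorum-type auto
    gluster volume start myvol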
17:19 johnmark but you lose the auto-magic-y stuff with glusterd
17:19 triode3 JoeJulian, thanks.
17:19 JoeJulian semiosis +1
17:19 semiosis johnmark: not our fault
17:20 johnmark semiosis: good to know :) if you have suggestions on how to fix, let me konw
17:20 semiosis johnmark: oh yeah i forgot about backports... let me check that
17:20 johnmark I wonder if Stephan will ever wander into #gluster
17:20 erik49 joined #gluster
17:21 johnmark @stats
17:21 glusterbot johnmark: I have 3 registered users with 0 registered hostmasks; 1 owner and 1 admin.
17:21 semiosis hrm backports just provides the 3.2.7 package from unstable to the stable distro, still no 3.3.1
17:21 johnmark ah, ok
17:21 semiosis @channelstats
17:21 glusterbot semiosis: On #gluster there have been 79816 messages, containing 3567208 characters, 598172 words, 2433 smileys, and 300 frowns; 591 of those messages were ACTIONs. There have been 27183 joins, 962 parts, 26269 quits, 9 kicks, 102 mode changes, and 5 topic changes. There are currently 184 users and the channel has peaked at 203 users.
17:21 johnmark hrm
17:21 johnmark semiosis: thanks :)  that's what I was looking for
17:22 semiosis johnmark: i think the best thing is what we have now, our own repos on d.g.o
17:22 johnmark wow, 184 users
17:22 johnmark semiosis: yup.
17:22 johnmark when I first joined we had 75 - 80 users
17:22 JoeJulian When I first joined we had 16 and they were always afk.
17:24 JoeJulian Ok, I just had to share that response to "NFS Availability" on gluster-users. I thought it was a good burn. Now I've got to get my coffee and get to the office. TTFN
17:30 redsolar joined #gluster
17:49 hattenator joined #gluster
17:50 flrichar joined #gluster
17:51 zaitcev joined #gluster
17:57 amccloud joined #gluster
18:03 inodb joined #gluster
18:05 bauruine joined #gluster
18:17 redsolar joined #gluster
18:17 vpshastry left #gluster
18:19 theron joined #gluster
18:33 clusterflustered okey doke, so my developers are telling me that one of the key features of hadoop they like, is the ability to locate the data where the job is, through a scheduler. does gluster have any such feature?
18:34 cicero what would be the equivalent of a job for gluster?
18:35 cicero (by hadoop do you mean hdfs?)
18:35 flrichar joined #gluster
18:35 DaveS joined #gluster
18:36 clusterflustered yes, hdfs with map reduce. so what they are telling me, is with map reduce, if we have a sim running on say node 34, they can figure out where all the data is that this sim will need, and locate it on node 34. they essentially schedule the data to be moved to node 34 so it is there before the sim needs it, in order to help alleviate network latency
18:37 clusterflustered also, i was just handed a hadoop book, so if this is incorrect, please say so.
18:37 cicero well, gluster is a distributed filesystem only
18:37 cicero there are no notions of jobs
18:37 cicero that's a concern of the architecture
18:38 cicero how the volumes, composed of bricks, are laid out
18:45 clusterflustered did you guys consider hadoop at all? is there a reason you went with gluster?
18:47 chirino joined #gluster
18:48 ekuric joined #gluster
18:48 cicero it's apples and oranges, hadoop and gluster
18:48 cicero hdfs and gluster is a little more oranges and grapefruit
18:49 cicero the distinguishing feature of gluster is that it's basically a transparent filesystem, presented to your system like any other network mount (e.g. NFS)
18:49 cicero whereas hdfs requires (afaik) one to speak the protocol
18:50 cicero "One of the pieces that hampers Hadoop's scale-out capabilities is the storage backend. HDFS is the filesystem used to store Hadoop data, and by default, it needs at least 3 nodes to replicate data. HDFS also has some limitations when it comes to the amount of storage and the total number of storage nodes it can utilize."
18:50 cicero - http://gluster.org/community/documentation/index.php/Hadoop
18:50 glusterbot <http://goo.gl/m2Zix> (at gluster.org)
18:51 raven-np joined #gluster
18:53 clusterflustered i read that page yesterday, which is a part of our concern, the scalability of hadoop.
18:54 clusterflustered we have 250 nodes we want to use as data nodes, so to me gluster seems like a no brainer, we also dont have to change any of our code due to its posix compliance. however, this idea that hadoop's scheduler can locate data where a job is running before the job is running so that the bottlenecks are local i/o could be a striking blow.
18:55 clusterflustered i need to setup 10 nodes of each, but on such a small roll out, i doubt we'd see any of hadoops weaknesses compared to a 250 node rollout
18:55 semiosis i thought hadoop ran the code where the data was, not moved the data
18:55 clusterflustered see, now i've read that same thing as well.
18:55 semiosis "moving computation is cheaper than moving data"
18:56 semiosis but if you already have an app that's written for posix... wouldnt that have to be ported to hadoop map-reduce?
18:56 clusterflustered that is still an ambiguous topic to me, as ive seen hadoop developer blogs that said it moves jobs to data, then ive had my developers that have worked with hadoop tell me it moves data to jobs.
18:57 clusterflustered it would be. a lot of our jobs would need a re-write to support map reduce.
18:57 jack joined #gluster
19:02 DaveS___ joined #gluster
19:19 andreask joined #gluster
19:26 Staples84 joined #gluster
19:36 rwheeler joined #gluster
19:37 inodb joined #gluster
19:42 DaveS joined #gluster
19:46 DaveS joined #gluster
19:56 triode3 joined #gluster
19:57 triode3 Hello again. I find the community documentation to be... lacking. I have not installed gluster since 2.x. I find some things... missing. After you install all of the rpms, do you start the daemons up before you do gluster peer probe? Do you put all other nodes in gluster peer probe? Do you need to do that on every node?
19:59 partner i've been using RH docs, very clear and logical thought not very detailed
19:59 JoeJulian triode3: You start glusterd on all your servers, not all your nodes. You then need to peer probe all your servers (again, not all your nodes) and you have to peer probe that first server from one other peer in order to assign it its hostname.
19:59 triode3 The docs go from rpm glusterfs{,-server,-fuse,-geo-replication,-3} to gluster peer probe <ip>. But I see glusterd and glusterfsd services that could be running although they are all off (centos). Which one do you start for the server setup?
19:59 partner triode3: try from there onwards https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/chap-User_Guide-Start_Stop_Daemon.html
19:59 glusterbot <http://goo.gl/tNFl3> (at access.redhat.com)
19:59 triode3 JoeJulian, ok, I have 14 servers. Do I go to each one and do gluster peer probe <ip1, ip2, ip3... ip14>?
20:00 JoeJulian for d in $(seq 2 14); do gluster peer probe server$d; done
20:00 triode3 partner, I get a RedHat customer portal 404 on that page.
20:01 JoeJulian And use ,,(hostnames). You'll be happy in the long run.
20:01 glusterbot Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
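Spelled out as commands for a hypothetical three-server pool (the same pattern scales to triode3's 14):

    # on server1: probe the others by name
    gluster peer probe server2
    gluster peer probe server3
    # on server2 (or any other peer): probe server1 by name so it too is known by hostname
    gluster peer probe server1
    gluster peer status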
20:01 partner triode3: strange, works for me perfectly, perhaps you copypasted it and missed one character or the url was cut, try the goo.gl url below it?
20:01 JoeJulian Are you looking at http://www.gluster.org/community/documentation/index.php/QuickStart ?
20:01 glusterbot <http://goo.gl/OEzZn> (at www.gluster.org)
20:01 triode3 JoeJulian, ok. Thanks.
20:02 triode3 JoeJulian, http://www.gluster.org/community/documentation/index.php/Getting_started_configure
20:02 glusterbot <http://goo.gl/BsK02> (at www.gluster.org)
20:03 kkeithley1 left #gluster
20:03 triode3 JoeJulian, I am just noting that even if you read them all, when you go from install to configure it does not say "and start glusterd" then do peers...
20:03 JoeJulian Yeah, I see that.
20:03 triode3 JoeJulian, Also, what is the difference between glusterd and glusterfsd...
20:04 elyograg let's see if I can do this right ... ,,(processes)
20:04 glusterbot the GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/hJBvL for more information.
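A quick, hedged way to see those daemons on a server (gluster volume status is 3.3+):

    ps ax | grep '[g]luster'    # glusterd (management), one glusterfsd per brick, glusterfs for fuse/nfs/self-heal
    gluster volume status       # lists each brick process, its port and whether it is online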
20:04 JoeJulian I'd chastise Technicool but he's not on...
20:05 triode3 And how do you guys know what to ask glusterbot... I would not know that processes would bring up that information.
20:06 elyograg I've seen the channel regulars do it. :)
20:06 triode3 Wait up, so we have a difference between a server and a brick on the new gluster?
20:06 elyograg a server can have multiple bricks.  each of my servers that will go into production has four of them.
20:06 triode3 Ah, wait, I see it on the "common criteria page"
20:07 elyograg oh, wait.  four bricks per raid volume, so each server actually has eight.
20:07 triode3 I was slightly confused for a second, my servers are bricks.
20:07 JoeJulian They're mostly shortcuts to allow us to answer frequently asked questions easily. You can talk directly to glusterbot, too, if you want. "factoids search *" should show a list of all of them (iirc).
20:07 JoeJulian @glossary
20:07 glusterbot JoeJulian: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
20:08 JoeJulian My servers have 30 bricks.
20:08 triode3 Ahhhwait, so one server has to be running the geo-replication?
20:08 JoeJulian No
20:08 chouchins joined #gluster
20:08 triode3 JoeJulian, ok.
20:09 JoeJulian It's just the glossary terms
20:09 JoeJulian Because I get really tired of having to guess what a node is.
20:09 JoeJulian Technically, a node is an endpoint.
20:10 JoeJulian People use that damned word for everything from a server, to a brick, to a client, sometimes all in the same sentence. It's like the "smurf" of system administration.
20:11 * JoeJulian steps off his soapbox.
20:17 JoeJulian Sorry if I offended you. I wasn't trying to kill the channel.
20:18 partner i am, i like the node-term
20:18 JoeJulian :P
20:18 partner i even named my server testgn1 as in gluster NODE ;)
20:21 triode3 kill the channel? Heh, my boss walked in :)
20:21 partner but i feel your pain, been there done that. keep preaching
20:22 JoeJulian That's okay. I wasn't really feeling guilty, just thought it was funny.
20:28 y4m4 joined #gluster
20:29 partner "The answer is as varied as personalities." - darn.. i would need to make some hardware purchase decisions but i'm all lost still, haven't got time to try out any hw setups and try to saturate any parts..
20:35 flrichar joined #gluster
20:39 DaveS_ joined #gluster
20:39 partner funny no matter how many times you google for same thing you end up always finding new pages around the topic
20:40 partner http://www.slideshare.net/Gluster/gluster-for-geeks-performance-tuning-tips-tricks
20:40 glusterbot <http://goo.gl/BPP0w> (at www.slideshare.net)
20:40 JoeJulian partner: Have you documented your goals and prioritized them?
20:41 partner JoeJulian: not all of them no as i don't exactly have it all.. and yet of course everything would need to be ready yesterday as always :)
20:42 partner so, while gathering it all i'm also trying to do as much research as possible. and also out of personal interest i do shoot lots of questions
20:42 partner so please forgive me :)
20:42 JoeJulian I mean your design goals. Must be able to do X transactions per Y, must have no single point of failure, must be able to withstand Z server failures simultaneously....
20:43 JoeJulian Must store whole files...
20:43 DaveS__ joined #gluster
20:44 JoeJulian When I settled here, my two highest requirements (after piecing together stripes of data off drbd drives to try to salvage lost data) is that files MUST be stored whole, and there MAY NOT BE any single points of failure.
20:45 JoeJulian And until you have horror stories of your own, it's much harder to prioritize those goals. :)
20:45 partner yeah :)
20:46 partner i am collecting the requirements currently and then of course at the same time the dudes are rewriting completely the service that handles the store and get operations and serves that interface towards rest of the infra
20:46 partner and of course there are multiple different use cases so not just one single storage for one purpose
20:48 partner on the largest storage side we only store files, never alter. ok, we just might delete the file possibly some day if it turns out to be worthless but file content is not touched ever.
20:50 JoeJulian Are reads based on popularity of a specific file, or are they spread out evenly?
20:51 partner reads go mainly to recent files and its partially currently cached also so reads don't go even into disk
20:53 JoeJulian So that's a fairly simple requirement. Nothing exciting there. What about the other need(s)?
20:54 partner yeah, not much requirements regarding the speed for example, just the ability to scale, i guess it was roughly 10 TB a month that needs to be stored (measured+added some safety marginal)
20:55 partner and needs to be available
20:56 partner i'm being pushed so i've had to come up with something so i am thinking of getting few boxes with 12x 3 TB disks and 2 more for OS mirror
20:56 andreask joined #gluster
20:57 JoeJulian Sounds like a good plan.
20:58 partner still figuring out the raid and stuff, i'd rather use some as it gives us time to react and adds a bit more protection (datacenters are not at the office)
20:59 partner maybe split that into 3 bricks with raid5, dunno at all.. "The answer is as varied as personalities." as some Joe well put it ;)
21:00 JoeJulian hehe
21:00 partner i would just loose a lot of disk with that..
21:00 JoeJulian lose
21:00 JoeJulian a loose disk is a whole other problem. ;)
21:01 partner correct, my language is written as its spelled, seems to spill into my english occasionally i see :D
21:01 partner anyways, 4 disk raid 5 makes me lose already 9 TB on one box..
21:02 partner 18 TB total. AND not forgetting i'm thinking a distributed replica so that add another 27 TB lots :)
21:02 partner *lost
21:02 partner makes no sense, i see it now.. wth..
21:04 partner making it brick per disk would save that 18 TB what parity disks would eat
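For reference, the arithmetic behind those figures: 12 x 3 TB = 36 TB raw per box; three 4-disk RAID5 sets keep 3 x 3 TB each, so 27 TB usable and 9 TB of parity per box, 18 TB of parity across the two boxes. Running a replica 2 volume across the boxes then leaves 27 TB usable in total, with the other box's 27 TB holding the replicas. Brick-per-disk with replica 2 would instead give 72 / 2 = 36 TB usable, reclaiming the parity overhead at the cost of the per-disk protection RAID5 provided.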
21:05 inodb left #gluster
21:15 tryggvil joined #gluster
21:21 JoeJulian partner: That's the decision I made as well. The downside is that if a disk fails, you need a way to quickly determine that and kill the brick service(s) using that drive or your volume may hang when accessing the failed disk.
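Purely as an illustration of the kind of watchdog JoeJulian describes (not a tested monitor; the brick path is a placeholder and the pkill pattern assumes glusterfsd carries the brick path on its command line):

    BRICK=/export/brick1
    # if the brick filesystem stops answering, kill its export daemon so clients fail over instead of hanging
    if ! timeout 10 stat "$BRICK" >/dev/null 2>&1; then
        pkill -f "glusterfsd.*${BRICK}"
    fi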
21:22 twx_ hm wsup with my gluster vol if ls -la output from gluster mountpoint looks like this?
21:22 twx_ ???????????  ? ?    ?       ?            ? _templates
21:22 twx_ (one example directory)
21:25 m0zes twx_: that sounds like a split-brained directory.
21:26 m0zes I've not had to deal with them, so I am not sure what the cleanup procedure is for that.
21:26 twx_ its a single box with two bricks on it
21:27 partner JoeJulian: so something that monitors the brick/mount/disk availability (can think many) and trigger actions rather immediately.. vol would probably end up being big so one brick failure just cannot stop the operations.. thanks, that is an excellent piece of info i need to take into consideration!
21:28 twx_ m0zes: split brain affects distributed volumes?
21:29 m0zes twx_: shouldn't. I don't know what you are seeing then.
21:30 partner anything on the logs?
21:32 partner or what the gluster tools tell you about the volume health?
21:36 twx_ ok, now it has magically fixed itself it seems
21:36 twx_ ..?!?!
21:36 twx_ :>
21:37 twx_ after volume start/stop
21:39 partner check the logs..
21:40 partner you have not provided any info on your setup so its not worth even guessing what could have caused it
21:41 twx_ I have a hint what may have caused it, also, yes, i provided some info
21:41 twx_ distrib volume across diff raid sets (one brick on each) on a single server
21:41 triode3 Hey all, going back to my starting out, after I probe all server from the "main" server, then probe the main from one other server, I then need to create the volume on each one of the servers, correct, or just from one?
21:41 twx_ and looking at the logs, it's hard to find the interesting parts
21:41 partner twx_: its part of the job ;)
21:42 triode3 Also, should I just turn off iptables on all servers, or are there a list of ports I should open?
21:42 m0zes triode3: just one.
21:42 m0zes @ports | triode3
21:42 m0zes ~ports | triode3
21:42 glusterbot triode3: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
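As iptables rules, assuming roughly 20 bricks per server (adjust the brick port range to the number of bricks you actually have):

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (24008 only needed for rdma)
    iptables -A INPUT -p tcp --dport 24009:24029 -j ACCEPT   # brick ports, counting up from 24009
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS and NLM
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # portmap/rpcbind
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    service iptables save                                     # persist the rules on CentOS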
21:42 twx_ partner: well of course. I just didn't see any obvious errors in the recently modified logs, that's why I turned here and not to google :)
21:43 twx_ or well, actually I turned to google first
21:43 twx_ anyway, all good, volume seems to be running
21:43 twx_ :)
21:43 partner twx_: we cannot know what you have done and what not unless you tell us :) but good it got sorted out
21:43 triode3 m0zes, thanks. I am still not sure about the volume. I am much more used to the old style of just creating config files.
21:43 partner i'm just more worried *why* it happened as it might very well repeat
21:43 twx_ partner: if I told you what I did you probably wouldn't want to help me :))
21:43 partner unless the root cause is identified
21:44 partner twx_: heh
21:44 twx_ I've been fucking around pretty bad with the underlying storage
21:44 twx_ moving lvm PV's, moving data onto degraded mdadm raid5 arrays etc
21:45 twx_ nothing I'd ever try out in a production env
21:45 m0zes triode3: volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>] [transport <tcp|rdma|tcp,rdma>] <SRV1:/BRICK> <SRV2:/BRICK> <SRV3:/BRICK> (et al)
21:46 m0zes just issue that on one server and it will create the volume across the specified servers/bricks in the peer group.
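A concrete, hedged instance of that syntax with made-up names, run from any one peer (extend the brick list across all 14 servers):

    gluster volume create bigvol transport tcp \
        server1:/export/brick1 server2:/export/brick1 server3:/export/brick1
    gluster volume start bigvol
    gluster volume info bigvol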
21:46 triode3 m0zes, sorry, I know the syntax, but I am following the howto/readme... do you do it on each server?
21:46 triode3 m0zes, ah, thanks
21:47 partner twx_: cool, i appreciate people who try and break it and then figure out how to fix it, you learn a lot from it
21:48 partner on proper environment that is
21:55 triode3 I am assuming that if I do not want replicated data I just set COUNT for replica to 1?
21:58 amccloud joined #gluster
21:59 m0zes triode3: you don't need to specify it at all if you don't want replication. both stripe and replica can be left out.
21:59 triode3 m0zes, thanks
22:06 triode3 Amazingly, I did that, then it told me I should start the volume, but I am reading the docs, so I did a gluster volume info. It's created. Now the docs say it is time to wrap up after we start the volume. Well, I read the man page and see that I do a gluster volume start <volname>... again, I am just pointing out that the docs are really lacking in step by step procedures.
22:12 partner hmm true, it completely lacks that step
22:12 jack joined #gluster
22:13 m0zes fixed now.
22:13 triode3 I note that for a client, I should not need the server and geo-replicate, correct? I should just install gluster and fuse? How would I connect to the server from the client?
22:14 triode3 Again, the community docs only really do a two server setup. What If I want a client with no server on it.
22:14 partner its on the really really fast docs bottom line..
22:15 partner no volume start there either
22:16 m0zes it is odd that the quickstart is more thorough...
22:16 squizzi joined #gluster
22:16 triode3 partner, Thanks, and the only thing I should need to install on that (rpm wise) would be gluster-3... and fuse?
22:16 triode3 m0zes, agree.
22:16 squizzi left #gluster
22:16 partner triode3: not sure what was your OS but i have the client and common packages installed, fuse is needed and i had it readily available (on debian)
22:17 jack_ joined #gluster
22:17 partner so no server-anything to client side
22:17 triode3 partner, CentOs for this test.
22:18 wNz joined #gluster
22:22 triode3 Hrm, installed gluster and gluster-fuse on the client. I suppose the docs mean to use mount.glusterfs? mount -t glusterfs is not a filesystem that my client sees
22:23 partner sorry, not at all familiar with centos
22:27 triode3 partner, got that figured out, now it is failing with "failed to get volume file from server, failed to fetch volume file
22:28 triode3 my status is started from gluster volume info and I See all bricks on the server.. hrm..
22:28 partner triode3: the volume is started and you can get info of it. no firewall in between?
22:28 triode3 partner, nope, I turned them all off.
22:29 partner fuse running too?
22:29 partner (we've been debugging one issue with mounting the volume on bootup on debian just yesterday, fuse wasn't up at the mount time..)
22:29 triode3 partner, facepalm. I was using server0:/thenameIcalledthemountpoint not server0:/volname Damn.
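For the record, the working mount uses the volume name, not the brick or mount-point name; a hedged example with /mnt/gluster as a made-up client mount point:

    mount -t glusterfs server0:/volname /mnt/gluster
    # optional: mount at boot (CentOS); _netdev delays the mount until networking is up
    echo 'server0:/volname  /mnt/gluster  glusterfs  defaults,_netdev  0 0' >> /etc/fstab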
22:30 partner no worries, happens :)
22:30 partner so it all works now nice'n'smooth?
22:30 triode3 partner, dunno yet. I will have to throw a few TB at it first.
22:31 triode3 partner, so I have 1/2 of the array up, the other 1/2 is running an older gluster. This one is 51TB. The other is 51TB.
22:32 triode3 partner, I will throw some data at it tomorrow and run some tests. I have 10GB interconnects on all servers and most clients, so I will see how the performance is that way.
22:33 partner come back and tell how it turned out
22:34 triode3 partner, I can. Right now I am connected to a client that only has 1G ethernet, so I will try to get some 10G clients on it.
22:35 raven-np joined #gluster
22:36 duerF joined #gluster
22:38 JoeJulian twx_: if I were to guess, I'd guess that since it's a distribute-only volume, you had one brick service (glusterfsd) not running. "gluster volume status" may have proven that.
22:41 johnmorr joined #gluster
22:46 sashko joined #gluster
22:46 jskinner_ joined #gluster
22:59 tomsve joined #gluster
23:04 mynameisbruce_ joined #gluster
23:15 polenta joined #gluster
23:20 TomS joined #gluster
23:25 hagarth joined #gluster
23:25 H__ joined #gluster
23:31 theron joined #gluster
23:32 clusterflustered im not having any good luck understanding geo-replication, can someone explain it to me?
23:36 frakt joined #gluster
23:47 jbrooks joined #gluster
23:50 JoeJulian It's an rsync to a remote location. It's managed by gluster because gluster knows which files have changed and need to be resynched.
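A hedged sketch of the 3.3-era commands, with made-up volume and host names (the slave can also be an ssh://user@host:/path directory):

    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status
    gluster volume geo-replication mastervol slavehost::slavevol config   # view or tune the replication options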
23:59 ultrabizweb joined #gluster
