IRC log for #gluster, 2013-01-28


All times shown according to UTC.

Time Nick Message
00:32 greylurk joined #gluster
00:54 greylurk joined #gluster
01:12 melanor9 joined #gluster
01:46 greylurk joined #gluster
02:18 eightyeight joined #gluster
02:46 lala joined #gluster
03:09 hagarth joined #gluster
03:27 sgowda joined #gluster
03:32 bharata joined #gluster
03:35 bulde joined #gluster
03:35 sashko joined #gluster
04:17 sripathi joined #gluster
04:19 overclk joined #gluster
04:19 shylesh joined #gluster
04:22 sahina joined #gluster
04:34 bitsweat joined #gluster
04:34 bitsweat left #gluster
04:41 sripathi1 joined #gluster
04:41 sripathi joined #gluster
04:43 melanor9 joined #gluster
05:00 srhudli joined #gluster
05:01 vpshastry joined #gluster
05:09 ramkrsna joined #gluster
05:09 ramkrsna joined #gluster
05:15 raghu joined #gluster
05:26 melanor91 joined #gluster
05:33 bharata joined #gluster
05:33 sripathi joined #gluster
05:51 rastar joined #gluster
05:52 sripathi joined #gluster
06:05 test joined #gluster
06:12 kevein joined #gluster
06:16 sripathi joined #gluster
06:29 sgowda joined #gluster
06:32 sripathi joined #gluster
06:33 lala joined #gluster
06:40 glusterbot New news from resolvedglusterbugs: [Bug 808452] nfs: mem leak found with valgrind <http://goo.gl/p1jb6>
06:51 sripathi joined #gluster
06:51 sgowda joined #gluster
06:54 rgustafs joined #gluster
06:56 deepakcs joined #gluster
06:58 tru_tru joined #gluster
07:05 sripathi1 joined #gluster
07:10 glusterbot New news from resolvedglusterbugs: [Bug 902684] Crash seen on ssl_setup_connection() <http://goo.gl/GY7rw> || [Bug 893779] Gluster 3.3.1 NFS service died after <http://goo.gl/RBJPS>
07:22 jtux joined #gluster
07:27 ngoswami joined #gluster
07:37 _NiC joined #gluster
07:39 vpshastry joined #gluster
07:43 JusHal joined #gluster
07:52 sgowda joined #gluster
07:54 ekuric joined #gluster
07:54 guigui1 joined #gluster
07:55 the-me joined #gluster
07:56 andreask joined #gluster
07:58 sripathi joined #gluster
07:59 vpshastry1 joined #gluster
08:04 ctria joined #gluster
08:05 jtux joined #gluster
08:09 puebele joined #gluster
08:17 sripathi joined #gluster
08:19 mohankumar joined #gluster
08:25 bulde joined #gluster
08:27 puebele joined #gluster
08:32 srhudli joined #gluster
08:34 Joda joined #gluster
08:40 melanor9 joined #gluster
08:50 sripathi1 joined #gluster
08:51 Nevan joined #gluster
09:05 tjikkun_work joined #gluster
09:10 shireesh joined #gluster
09:10 Joda joined #gluster
09:14 nightwalk joined #gluster
09:14 Norky joined #gluster
09:14 jjnash joined #gluster
09:14 isomorphic joined #gluster
09:16 gbrand_ joined #gluster
09:19 shireesh joined #gluster
09:31 dobber joined #gluster
09:31 cyberbootje joined #gluster
09:32 GLHMarmot joined #gluster
09:33 DaveS joined #gluster
09:36 bauruine joined #gluster
09:37 shireesh joined #gluster
09:40 zhashuyu joined #gluster
09:40 lala_ joined #gluster
09:41 dcmbrown quit
09:42 sgowda joined #gluster
09:44 cyberbootje joined #gluster
09:45 vpshastry joined #gluster
09:49 sripathi joined #gluster
09:50 manik joined #gluster
09:58 tomsve joined #gluster
09:58 cyberbootje joined #gluster
10:05 melanor91 joined #gluster
10:06 melanor92 joined #gluster
10:07 sripathi joined #gluster
10:09 clag_ joined #gluster
10:12 sgowda joined #gluster
10:14 sripathi1 joined #gluster
10:17 rcheleguini joined #gluster
10:18 shireesh joined #gluster
10:24 ngoswami joined #gluster
10:25 melanor9 joined #gluster
10:28 tatra joined #gluster
10:28 duerF joined #gluster
10:32 badone joined #gluster
10:32 cyberbootje joined #gluster
10:33 tomsve joined #gluster
10:34 ekuric joined #gluster
10:37 sripathi joined #gluster
10:40 ekuric joined #gluster
10:45 melanor91 joined #gluster
11:03 melanor9 joined #gluster
11:05 sahina joined #gluster
11:13 andreask joined #gluster
11:26 shireesh joined #gluster
11:26 sripathi joined #gluster
11:28 bala1 joined #gluster
11:30 melanor91 joined #gluster
11:31 melanor92 joined #gluster
11:35 melanor9 joined #gluster
11:39 Staples84 joined #gluster
11:40 shireesh joined #gluster
11:42 tatra Hello, we have been using glusterfs for 2 years, currently version 3.2.7. It works without problems, but we have a huge memory overcommit. We have a system with 32G RAM and 35 glusterfs mounts (35 glusterfs processes). Overcommit grows by 32G on one mount, see https://dl.dropbox.com/u/2061501/memory-month.png. Could you please help me with this problem, or who should I contact?
11:42 glusterbot <http://goo.gl/6DGkw> (at dl.dropbox.com)
11:43 tjikkun_work joined #gluster
11:45 JusHal left #gluster
11:57 lh joined #gluster
11:57 lh joined #gluster
11:58 longsleep joined #gluster
11:59 longsleep Hi guys, i just updated a gluster server from 3.2 to 3.3 and now the builtin NFS server crashes on startup (E [nfs3.c:812:nfs3_getattr] 0-nfs-nfsv3: Volume is disabled: home). Any hints?
11:59 shireesh joined #gluster
12:01 joaquim__ joined #gluster
12:06 edward1 joined #gluster
12:14 shireesh joined #gluster
12:22 melanor91 joined #gluster
12:27 yinyin joined #gluster
12:33 bala1 joined #gluster
12:38 shireesh joined #gluster
12:44 ngoswami joined #gluster
12:51 melanor91 tatra, you have a memory leak
12:54 tatra what is the source of the memory leak? Is there a way to debug the cause?
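One way to start looking at a suspected client-side leak, sketched under assumptions (statedump support and its output location vary by version, often /tmp/glusterdump.<pid> or /var/run/gluster/; the <PID> is a placeholder):

    ps aux | grep glusterfs            # find the PID of the affected mount's client process
    kill -USR1 <PID>                   # SIGUSR1 asks the process to write a statedump
    ls /tmp/glusterdump.* /var/run/gluster/ 2>/dev/null
    # take two dumps a few hours apart and compare their memory-accounting sections
    # to see which allocations keep growing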
12:54 balunasj joined #gluster
12:54 balunasj joined #gluster
13:01 tomsve joined #gluster
13:08 aliguori joined #gluster
13:12 guigui1 joined #gluster
13:12 Staples84 joined #gluster
13:19 ngoswami joined #gluster
13:34 dustint joined #gluster
13:48 melanor9 joined #gluster
14:00 Norky I want to make an existing gluster volume bigger without adding new servers. My bricks are XFS on LVM. My immediate thought was to make the existing bricks bigger with lvextend and xfs_growfs. I did this and it worked fine, but https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/sect-User_Guide-Managing_Volumes-Expanding.html only admits the possibility of doing it via adding bricks i.e. in my case creating another
14:00 Norky set of XFS LVs to act as additional bricks.
14:00 glusterbot <http://goo.gl/xyAvr> (at access.redhat.com)
14:00 wN joined #gluster
14:01 Norky which is 'better'/the advised method?
14:01 Norky to my mind, lvextend/growfs is cleaner, so I'd lean toward that.
14:02 ndevos Norky: your method works fine, but it is advised to keep the bricks all of equal size
14:03 chirino joined #gluster
14:03 ndevos Norky: and, the glusterfs clients may need to be restarted afterwards, the check-for-free-space-when-creating-new-files functions could have the old size cached
14:04 ndevos and, they probably request the size of the brick from glusterfsd, so these may need restarting too
14:04 manik joined #gluster
14:08 kkeithley joined #gluster
14:10 hateya joined #gluster
14:11 Norky ndevos, yup, all bricks will always be equal (differing performance or start times of xfs_growfs notwithstanding)
14:12 ndevos Norky: should be okay, but I also think that if the brick size increases a lot, you may run into issues
14:13 ndevos Norky: xfs creates some structures on formatting, these structures can not be changed afterwards
14:14 Norky 4TB bricks, becoming 6TB
14:15 ndevos Norky: things like the size of the allocation groups is what I am thinking of
14:17 Norky do you have an approximate ratio for where that might become a problem? I.e. define "a lot" - 50%? 200%?
14:17 ndevos Norky: see agcount in 'man mkfs.xfs' for details
14:18 ndevos no, not really, it also depends on the usage of the fs, if files are created/expanded rarely, the effects will hardly be noticed
14:22 jack joined #gluster
14:39 ctria joined #gluster
14:47 jgillmanjr Norky: Indeed still having issues. Here is a paste of what I've done to look at things: http://dpaste.org/NOaw9/
14:47 glusterbot Title: dpaste.de: Snippet #217820 (at dpaste.org)
14:47 jgillmanjr You'll notice that files don't seem to be getting placed on the new nodes
14:59 Norky jgillmanjr, with regard to size visible from the client, something ndevos said earlier applies:
14:59 Norky <ndevos> Norky: and, the glusterfs clients may need to be restarted afterwards, the check-for-free-space-when-creating-new-files functions could have the old size cached
14:59 Norky <ndevos> and, they probably request the size of the brick from glusterfsd, so these may need restarting too
15:01 Norky check that the rebalance has finished
15:02 neofob joined #gluster
15:03 plarsen joined #gluster
15:05 hagarth joined #gluster
15:06 nueces joined #gluster
15:07 jgillmanjr it was
15:08 jgillmanjr so restart the client instances then
15:11 stopbit joined #gluster
15:12 Norky yes, I think so
15:12 Norky also, a total of four files (of less than FSBLOCKSIZE) doesn't seem like enough to me
15:16 Norky I'd do something like a "for count in {0..99} ; do cp /bin/ls /mnt/glusterfs/file$count ; done"
15:16 Norky before expanding the volume, that is
15:27 jgillmanjr init 6'ing the client nodes right now to see where that gets me
15:29 guigui3 joined #gluster
15:34 jgillmanjr ahh damn. Now I realize what I forgot to do (and it fixed the issue)
15:34 jgillmanjr forgot to add the new hosts entries on the gluster clients
15:34 bennyturns joined #gluster
15:35 jgillmanjr now it sees the extra space
15:35 jgillmanjr and yes, I would be using DNS in a production environment :)
15:37 Norky I think that not seeing the extra space is a separate issue from files apparently not being rebalanced as you expect
15:38 bugs_ joined #gluster
15:40 nueces joined #gluster
15:40 jgillmanjr could be. The name resolution issue, though, is what has the clients seeing the 600GB of storage now.
15:45 kr4d10 joined #gluster
15:47 kr4d10 hi all, is there any documentation for the reason why the 3.3.x glusterd uses a '.glusterfs' subdirectory in each of the server-side bricks?
15:49 jgillmanjr http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
15:49 glusterbot <http://goo.gl/j981n> (at joejulian.name)
15:49 jgillmanjr kr4d10: ^^
15:50 kr4d10 jgillmanjr: thank you, I'll check that out
15:51 inodb joined #gluster
15:58 puebele3 joined #gluster
15:59 lala joined #gluster
15:59 chouchins joined #gluster
16:01 daMaestro joined #gluster
16:02 ekuric joined #gluster
16:03 kkeithley johnmark: ping
16:03 kkeithley johnmark: unping
16:09 kr4d10 so I assume when a new file is added the gfid for the file is created first by the server and then hard linked to the relative path? Is the relative path always supposed to be available for reading on the server-side brick or is that hit and miss? This is the issue I am really trying to resolve since we have been using the pre-3.3.x glusterfsd for a while and have systems that depend on being able to read from the server's disk directly.
16:09 neofob left #gluster
16:11 semiosis kr4d10: could you give some context?  i must not be understanding what you want to do because afaict you should be able to read files from bricks in 3.3+ just like previous releases
16:12 semiosis and i hope you have noatime,nodiratime on your bricks if you're reading from them
16:13 kr4d10 semiosis: I believe you understand it correctly, we are trying to read from the hosting server's local filesystem directory that is being served out as a replicated brick
16:13 semiosis great, so whats the problem?
16:14 kr4d10 semiosis: no, we actually don't AFAIK. I didn't see that documented anywhere, what is reason for it?
16:14 semiosis the documented recommendation is to just not access the bricks directly
16:14 kr4d10 semiosis: we started seeing files that were visible on the client's mount but not on the server's filesystem (at least not in the regular path)
16:15 semiosis but if you must, you should only read from them, and to prevent your "reads" from modifying the files you should add the noatime,nodiratime options
16:15 semiosis otherwise even reads will affect file metadata (access times)
16:15 semiosis also don't let your reading apps lock files
16:15 kr4d10 right, but we have to due to the slowdown from going through the protocol. We have many small files that are being updated frequently
16:16 semiosis ~pasteinfo | kr4d10
16:16 glusterbot kr4d10: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
16:16 kr4d10 ah, I see your point about the atimes. Thanks, I'll set that up
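A hypothetical /etc/fstab entry reflecting the noatime/nodiratime advice above (device and mount point names are made up):

    /dev/vg_bricks/brick1  /export/brick1  xfs  noatime,nodiratime  0 0
    # apply without a reboot:
    mount -o remount /export/brick1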
16:16 Norky kr4d10, let me guess... php?
16:16 kr4d10 lol... unfortunately
16:16 puebele joined #gluster
16:17 semiosis kr4d10: see also ,,(php) for some ways to optimize
16:17 glusterbot kr4d10: php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://goo.gl/uDFgg for details.
16:17 Norky http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/
16:17 glusterbot <http://goo.gl/uDFgg> (at joejulian.name)
16:17 semiosis that
16:17 Norky semiosis beat me to it :)
16:18 semiosis if you do php right there's little to no performance hit on glusterfs
16:18 semiosis and by right i mean 1) use autoloading instead of require/include, and 2) use an opcode cacher like APC
16:18 semiosis and 3) optimize your include path
16:19 semiosis these are good practices even when serving php from a local disk
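A rough php.ini/apc.ini fragment illustrating points 2 and 3 (values and paths are placeholders, not recommendations):

    apc.enabled=1
    apc.shm_size=256M                  ; size the opcode cache to hold the whole codebase
    apc.stat=0                         ; skip the per-include stat(); restart PHP after deploys
    include_path=".:/var/www/app/lib"  ; keep the include path short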
16:19 kr4d10 I'm not sure exactly how our dev team has that part set up, I'll check with them about it
16:20 * Norky predicts a big argument and a headache for kr4d10
16:21 kr4d10 Norky: we're way beyond that bit...
16:21 Norky as in, already had the argument?
16:21 semiosis ops should be able to do #2 and #3 without touching the code, #1 is app dependent though most modern frameworks do it already
16:21 semiosis it's the legacy code that's hard to convert from require/include to autoloading
16:21 Norky or your web dev folks will actually listen to stuff like "you need to change how you do this to get better performance"
16:23 kr4d10 gluster volume info, with some names blanked http://fpaste.org/A23n/
16:23 glusterbot Title: Viewing Paste #271245 (at fpaste.org)
16:24 kr4d10 Norky: they'll listen and we can change our gluster setup, we just have a few different versions of software currently coexisting and our upgrade paths are a little mangled
16:25 Norky ahh, cool
16:25 VSpike I know this came up here a while back, but I can't remember the answer and I don't have it logged. If you clone an existing gluster installation, what config files do you need to delete to get back a clean state? I've purged using aptitude but I recall from last time there's a file containing a UID that remains
16:26 Norky I've encountered application/db/web dev people who can be a little resistant to listening to sysadmins in cases like this, so I was being pessimistic
16:26 VSpike My memory said /var/lib/gluster but that's not it
16:27 kr4d10 semiosis: so, the issues we saw where a client would write a file to the volume and the file would propagate to other clients but not show up on the disk of the server actually hosting one of the volume's bricks should never really happen?
16:27 VSpike Is it just /etc/glusterd.info ?
16:28 kr4d10 Norky: no worries, we've all met those :)
16:29 Norky /var/lib/glusterd/glusterd.info here, but I'm on Red Hat Storage, which may not be exactly the same as the gluster packages from gluster.org
16:30 semiosis kr4d10: possible if the client is not connected to that server.  but normally, when everything is connected as it should be, all writes/creates get replicated to all bricks (in a pure replicate volume) synchronously
16:30 semiosis VSpike: /var/lib/glusterd as of glusterfs 3.3.0, previous versions used /etc/glusterd
16:31 kr4d10 semiosis: however since the change was propagated to another client, it should have been there correct?
16:32 semiosis kr4d10: your volumes are all pure replicate, so in each volume all bricks should have exactly the same contents
16:33 Norky VSpike, regarding state, you will also need to remove the .glusterfs dir and the xattrs from each brick (unless you're starting again with the brick FS)
16:33 VSpike johncc
16:33 VSpike sorry, wrong window :0
16:33 kr4d10 semiosis: and correct me if I'm wrong but it sounds like the directory structure under the brick path (excluding the .glusterfs directory) should match that of the clients?
16:35 Norky yes, for a pure replicate volume, it should
16:35 VSpike semiosis: thanks - luckily that reminded me that I took this clone of the vm *before* I added your PPA when setting up the production servers
16:35 VSpike so I need to do that on this one
16:35 Norky your brick names.... are you using NFS mounts for the bricks?
16:35 ndevos @cloned systems
16:36 puebele joined #gluster
16:36 ndevos @clone
16:36 glusterbot ndevos: I do not know about 'clone', but I do know about these similar topics: 'cloned servers'
16:36 ndevos ~cloned servers | VSpike
16:36 glusterbot VSpike: Check that your peers have different UUIDs ('gluster peer status' on both). The uuid is saved in /var/lib/glusterfs/glusterd.info - that file should not exist before starting glusterd the first time. It's a common issue when servers are cloned. You can delete the /var/lib/glusterfs/peers/<uuid> file and /var/lib/glusterfs/glusterd.info, restart glusterd and peer-probe again.
16:36 Norky stupid bot, do fuzzy matching!
16:37 VSpike Norky: I'm starting again with the brick FS. Creating a blank one and the going to set up geo-replication
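A sketch of the reset procedure from the factoid above, using the 3.3-era paths under /var/lib/glusterd and a hypothetical peer name:

    service glusterfs-server stop      # 'service glusterd stop' on RPM-based distros
    rm -f /var/lib/glusterd/glusterd.info
    rm -f /var/lib/glusterd/peers/*
    service glusterfs-server start     # a fresh UUID is generated on first start
    gluster peer probe server1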
16:38 kr4d10 semiosis: thanks
16:39 kr4d10 Norky: yes we are right now, we have a package of PHP that doesn't play nicely with gluster right now
16:42 jbrooks joined #gluster
16:42 glusterbot New news from newglusterbugs: [Bug 826021] Geo-rep ip based access control is broken. <http://goo.gl/jsj1f>
16:43 kr4d10 semiosis: is there a way to have PHP read a file from gluster without calling stat?
16:43 semiosis kr4d10: yes, using APC, i think joe's blog post on ,,(php) talks about that
16:43 glusterbot kr4d10: php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://goo.gl/uDFgg for details.
16:43 Norky glusterfs over NFS sounds a bit bonkers to me, but other (more experienced) folks here may have a different opinion
16:44 semiosis kr4d10: the catch is that you have to restart apache whenever a file changes, otherwise (without calling stat) it will never know that the file has changed
16:44 semiosis kr4d10: i think you can set up APC to selectively disable stat on certain paths too
16:46 kr4d10 semiosis: oh that would never work, we have thousands of apcvar files changing all the time
16:46 kr4d10 crud
16:47 Norky kr4d10,  do a "gluster volume status VOLNAME detail" and check that the disk space free and free inodes match
16:49 kr4d10 Norky: yes, the output value of Free Inodes from gluster volume status detail and df -i match up
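The comparison being made here, spelled out (volume and brick names are hypothetical):

    gluster volume status myvol detail | grep -E 'Disk Space Free|Free Inodes'
    df -i /export/brick1               # run on each server, against the brick path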
16:49 semiosis kr4d10: does gluster volume status show that all your bricks running?
16:49 semiosis s/bricks/bricks are/
16:49 glusterbot What semiosis meant to say was: kr4d10: does gluster volume status show that all your bricks are running?
16:49 kr4d10 semiosis: yes they are both up
16:52 semiosis kr4d10: ok then, are you sure you're looking in the bricks of the volume that client is actually mounting?
16:53 kr4d10 yeah, we were. Do you think that if some of the files were read directly from the server with atime enabled this could have been the cause of these issues?
16:53 semiosis doubt it, but check client logs
16:54 semiosis also check the glustershd logs on the servers
16:54 semiosis look for info about self healing
16:54 sashko joined #gluster
16:54 Norky kr4d10, could you restate your problem (I'm getting confused with all the history): the contents of some brick do not match the other bricks or what the client sees?
16:54 semiosis or anything related to those paths really
16:54 kr4d10 semiosis: nothing there out of the ordinary if I recall. This issue was a month or so ago, we're just now getting back to it
16:56 kr4d10 Norky: the contents of all the clients match up fine, but some files do not show up on the filesystem of the server. The clients' performance reading thousands of small files is very bad, so we have to read data (no writes) from the local filesystem on the gluster servers.
16:57 kr4d10 semiosis: will do further
16:58 kr4d10 is there a way to improve this with heavy caching? We have about 14G free memory on each of the servers but would need any file updates to propagate instantly.
17:00 semiosis kr4d10: sounds to me like you either have a brick down or are looking in the wrong brick
17:01 * Norky has just had a nasty thought...
17:01 semiosis there's not really "propagation" in glusterfs, because there's no master... writes are simply sent to all bricks
17:01 Norky kr4d10, is it possible that you mounted the NFS export on the gluster server *after* the volume was started?
17:01 kr4d10 semiosis: nope, it was the right one. We only had data in one at that time and I spent some time verifying just this
17:02 Norky i.e. the files are being written to the 'parent' filesystem, and hidden 'behind' the NFS mount....
17:02 semiosis are all your clients connected to the same server for nfs?
17:02 kr4d10 Norky: yes actually we have them mounted to is locally
17:02 kr4d10 *to it
17:02 semiosis mount -t nfs localhost:volume ????
17:02 Norky err, sorry, I cannot parse that :)
17:03 kr4d10 Norky: you mean on the same path? That's suicide, no way
17:03 Norky not deliberately
17:03 kr4d10 semiosis: no they are mounted locally via gluster
17:03 semiosis oh ok
17:03 kr4d10 Norky: no, that has never been done
17:04 kr4d10 regarding caching though, if a client changes a file would it be changed in the servers cache?
17:04 Norky is the problem brick completely empty?
17:04 kr4d10 Norky: no it isn't, it had most of the correct files/dirs, but not all
17:08 Norky are you still seeing the problem now? i.e. if you do a "touch /mnt/glusterfs/newtestfile" now, does it fail to appear on one of the bricks?
17:10 kr4d10 Norky: no it seems to be working fine now, which is why I suspect it may have been caused by something we did
17:10 Norky ahh, righto, I'll cease worrying then :)
17:11 Norky your only problem at present is performance then?
17:11 kr4d10 Norky: ah yes, sorry for any added stress :) there was someone on the dev team who misunderstood 'don't write to that system' and that is probably the cause...
17:11 kr4d10 yes it is
17:13 Norky I've never tried php or other things that involve large numbers of stat()s on many small files, so I can only say "yeah, I've heard that's a problem" and point at JoeJulian's blog post and mumble "dunno mate, start there"
17:14 kr4d10 the servers have a private GB link and are local to each other with memory and drives to spare but their load is always very low. We're not sending much throughput so we're trying to track down the bottleneck and it always seems to come back to gluster
17:14 kr4d10 Norky: no problem
17:14 kr4d10 thanks for the help guys
17:14 JoeJulian Mornin'
17:16 semiosis heyo
17:16 Norky geographically-appropriate time-based greeting
17:17 JoeJulian As an aside, kr4d10, if you put your bricks under, for instance, /data/gluster you can then chmod 700 /data/gluster to keep those pesky devs out of your bricks.
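A small sketch of that layout (paths are hypothetical):

    mkdir -p /data/gluster/brick1
    chmod 700 /data/gluster            # only root can descend into the parent directory
    # use server1:/data/gluster/brick1 as the brick path; glusterfsd runs as root,
    # so it still reaches the brick while non-root users are kept out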
17:19 baczy joined #gluster
17:19 JoeJulian Wow.. lots of scrollback this morning.
17:19 JoeJulian tl;dr
17:19 Norky indeed
17:20 kr4d10 JoeJulian: yeah that's how we have it in prod, this issue was with a new version of a package in a stage environment so they have nearly full access
17:20 Norky summary: kr4d10 is finding that just throwing their php-based service onto gluster doesn't yield the performance they might have hoped - we pointed him at your blog post
17:21 JoeJulian Nice to know that even when I'm so overloaded that I can barely look at IRC for a week, I can still help people. :D
17:22 kr4d10 actually we're coming from using glusterfsd 3.1.2
17:22 Norky ahh, and the performance on the older version was better?
17:23 JoeJulian Not from what I remember...
17:24 Norky question was for kr4d10
17:24 kr4d10 I'm not entirely sure since we are reading directly from the local FS with the old version. I can say reading via gluster on 3.3.x is much slower than that
17:24 JoeJulian hehe
17:27 JoeJulian kr4d10: Has your need for consistency or scalability changed? You /could/ still read from the bricks if it hasn't.
17:29 kr4d10 JoeJulian: yeah, since talking with semiosis and Norky it seems that the issue with us not being able to do this in stage may have been self-inflicted
17:29 kr4d10 is there a way to change the number of IO threads on the server without modifying the .vol files by hand?
17:30 kr4d10 I haven't seen one in the docs
17:31 kr4d10 * ignore me, yes gluster has this. I'm thinking of another system... sorry
17:31 Norky JoeJulian, is it a viable option to run the gluster server and the php/web server on the same hosts (assuming you have sufficient local storage on the web servers)?
17:31 kr4d10 JoeJulian: no, we have almost 200 web servers that would be mounting this
17:31 Norky i.e. might it improve performance in cases like this?
17:31 rastar joined #gluster
17:32 kr4d10 oops, that was meant for Norky
17:32 Norky yeah, I guessed as much :)
17:32 kr4d10 there is too much data...
17:33 Norky forget that idea then
17:34 Norky righto, I'm off
17:34 kr4d10 actually... don't. If we could decouple the numerous high-traffic small files to their own volume, this could happen. We could probably pull that off
17:34 kr4d10 Norky: thanks again
17:35 JoeJulian Thanks, Norky, for hanging out. See you around.
17:35 Norky kr4d10, I'm not even sure that would fix your problem - but it might be worth investigating...
17:35 Norky good evening chaps
17:36 Teknix joined #gluster
17:39 JoeJulian Most php code doesn't change all /that/ often. A simple script that checks for some trigger to reload apache or php-fastcgi (maybe a simple version check) on a software upgrade would allow you to use apc with the apc.stat=0 option.
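A rough sketch of such a trigger check (file names and the reload command are assumptions, not part of any gluster or PHP tooling):

    #!/bin/sh
    TRIGGER=/var/www/app/VERSION       # bumped by the deploy process
    SEEN=/var/run/app-version.seen
    if ! cmp -s "$TRIGGER" "$SEEN"; then
        cp "$TRIGGER" "$SEEN"
        service apache2 reload         # or restart php-fastcgi, as appropriate
    fi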
17:42 kr4d10 JoeJulian: it's not PHP code that is changing though. They are essentially partial database records that get written to the volume, then aggregated and pushed in to a DB
17:43 JoeJulian Could you do them as appends instead of individual files?
17:43 kr4d10 JoeJulian: what do you mean exactly?
17:44 kr4d10 oh, append to the file
17:44 kr4d10 unfortunately no, they each have to have an individual uid for tracking. There is really more that gets done with them but that is not the issue here
17:48 JoeJulian Ok, well the apc.stat=0 will help the php code for includes/requires. Anything that's loaded through an open() is still going to see the latency hit, so reduce those as your situation allows. apc won't help with that.
17:51 kr4d10 JoeJulian: thanks, will do. How does gluster's server-side caching work? If a cached file gets written to or deleted by a client will the cached version be updated before the cache expires?
17:51 JoeJulian Personally, if I were creating small chunks of data to be parsed and processed separately, I'd probably be looking at a messaging queue or memcache, depending on my tolerance for faults.
17:52 JoeJulian I'm not really sure on the caching question. I've never dug into the code enough to see how it works.
17:52 kr4d10 yeah we have memcache and are writing our own queuing service at the moment but gluster will always have a place for some things
17:52 kr4d10 ok
17:52 JoeJulian I did notice, though, that the caches seem to only last as long as the fd. Once closed, a cache is released.
17:53 kr4d10 Nice... that is one thing we were questioning but haven't tried yet.  thanks
17:58 vpshastry joined #gluster
18:00 vpshastry1 joined #gluster
18:00 melanor9 joined #gluster
18:30 andreask joined #gluster
18:33 DaveS joined #gluster
18:38 DaveS___ joined #gluster
18:40 vpshastry1 left #gluster
18:47 zaitcev joined #gluster
18:50 portante joined #gluster
18:52 y4m4 joined #gluster
18:54 bauruine joined #gluster
19:00 kombucha joined #gluster
19:04 kombucha This 3.2.5 install on Ubuntu 12.04 I'm looking at doesn't have /etc/init.d/glusterd as per the documentation.
19:04 kombucha Instead it has /usr/sbin/glusterd
19:04 kombucha Has anyone seen this on Ubuntu? Using the recommended package, not a PPA
19:05 JoeJulian kombucha: Two different things.
19:05 JoeJulian ~ppa | kombucha
19:05 glusterbot kombucha: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
19:05 kombucha well yes, one is an init script that calls the binary
19:05 kombucha they most adamantly do not want to upgrade to 3.3
19:06 JoeJulian Ubuntu doesn't use sysinit, iirc.
19:06 kombucha weird
19:06 JoeJulian They use upstart
19:06 kombucha ah right
19:06 kombucha ok so perhaps the docs just need to be updated
19:06 JoeJulian Do they at least want to run 3.2.7 to get the bugfixes?
19:06 kombucha hahaha
19:07 kombucha # stop glusterd
19:07 kombucha stop: Unknown job: glusterd
19:07 kombucha grrr
19:07 JoeJulian semiosis: ^
19:08 semiosis what is "the recommended package" ?
19:08 JoeJulian Interesting, though, that someone would be adamant about not wanting to use 3.3. Any reason why?
19:08 semiosis one from ubuntu's universe or one from gluster.org?
19:08 JoeJulian gluster.org, of course.
19:08 semiosis kombucha: who most adamantly doesnt want to upgrade to 3.3?
19:09 kombucha their somewhat unofficial motto is "don't poke the bear"
19:09 semiosis kombucha: who are you talking about?
19:09 kombucha a company I am working with that doesn't want to update gluster
19:09 semiosis oh ok
19:10 * semiosis still uses 3.1.7
19:10 kombucha this was installed from ubuntu universe,
19:10 kombucha lol semiosis
19:10 semiosis if it ain't broke...
19:10 JoeJulian So they have something that works and they're not interested in fixing it. That's a policy I can understand.
19:11 kombucha It's still unclear how the 3.2.5 package from ubuntu universe handles stopping/starting glusterd
19:11 kombucha it doesn't seem to be in upstart
19:11 semiosis maybe more appropriate than 'dont poke the bear' would be 'dont step on the ant hill'
19:11 kombucha though I see there is a ppa avail that has it in upstart
19:11 JoeJulian But fyi, installing from ubuntu universe simply means you're installing stuff we've already figured out is broken.
19:12 semiosis kombucha: the 3.2.5 package glusterfs-server in ubuntu universe provides an upstart job (/etc/init/glusterfs-server.conf) which can be started/stopped the usual way
19:13 semiosis kombucha: service glusterfs-server start|stop
19:13 semiosis kombucha: also, what docs are you referring to?
19:13 kombucha http://gluster.org/community/documentation//index.php/Gluster_3.2:_Starting_and_Stopping_the_glusterd_Manually
19:13 glusterbot <http://goo.gl/eHT0I> (at gluster.org)
19:14 semiosis oh yeah so in debian/ubuntu s/glusterd/glusterfs-server/ as far as the packages & initscripts are concerned
19:14 kombucha that is srsly confusing
19:15 semiosis i agree
19:16 sashko joined #gluster
19:16 semiosis when i made the ppas i decided to "resolve" that issue, calling the package & initscripts for the server "glusterd"
19:16 semiosis but that caused too many problems with people installing from universe then trying to upgrade to the ppa
19:16 semiosis as you can imagine
19:17 semiosis so i have deprecated that now
19:18 Teknix left #gluster
19:18 partner isn't it more confusing to call it "glusterd" when it also starts glusterfsd and glusterfs alongside glusterd?
19:22 partner but, i seem to suck at googling for translators, found one page from the community site and some links to translators 101 but nothing in between.. like what, for example, is the default translator?
19:23 semiosis not a single default translator
19:24 partner would there be some doc i've missed that would explain so that i don't have to bug you with stupid questions?
19:24 semiosis translators, also known as xlators, are stacked up to build a 'graph'
19:24 kombucha yes, I get how translators build a graph
19:24 partner there is something on the web about the 'graphs' but the pictures are broken and there are only a few sentences
19:24 kombucha What I don't get is how ubuntu has changed how the services run
19:25 kombucha yes, I saw the pictures were broken
19:25 semiosis partner: kombucha: same person?
19:25 partner no :)
19:25 kombucha there was a messsage on the page  something about "edit in progress"
19:25 semiosis kombucha: was addressing partner's questions about xlators
19:26 kombucha ah sorry I missed his question
19:26 semiosis partner: perhaps ,,(semiosis tutorial) will help explain the concepts
19:26 glusterbot partner: http://goo.gl/6lcEX
19:26 kombucha semiosis: so how are you supposed to start glusterd on ubuntu then? Am I missing something about that?
19:26 partner does gluster cli provide means to modify translator settings if i want to switch to something? or do i need to edit volume file? on all storage nodes or does it get replicated? ... lots of questions, can't find much answers at this point so any pointers are welcome
19:26 partner and i just got some, thanks :)
19:26 semiosis kombucha: service glusterfs-server start
19:27 kombucha that does not appear to be starting glusterd
19:27 semiosis kombucha: though it should start on package install & every boot thereafter
19:27 semiosis as is debian policy
19:27 kombucha #  gluster volume info
19:27 kombucha Connection failed. Please check if gluster daemon is operational.
19:27 kombucha ps aux | grep gluster
19:27 kombucha root      1068  0.0  0.9 232696 40036 ?        Ssl  13:13   0:00 /usr/sbin/glusterfs -f /etc/glusterd/nfs/nfs-server.vol -p /etc/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log
19:27 semiosis kombucha: then check /var/log/glusterfs/etc-glusterfs-glusterd.log, maybe it's trying to start but dying
19:27 kombucha ah that may be, thx
19:28 semiosis partner: gluster cli manages vol files you should not mess with them, although if you're interested in learning more about how glusterfs works, reading the vol files may be informative
19:29 kombucha #:/var/log/glusterfs# service glusterfs-server start
19:29 kombucha glusterfs-server start/running, process 5482
19:29 kombucha #:/var/log/glusterfs# service glusterfs-server stop
19:29 kombucha stop: Unknown instance:
19:29 semiosis yep, check the logs
19:30 kombucha # initctl list | grep gluster
19:30 kombucha glusterfs-server stop/waiting
19:30 kombucha mounting-glusterfs stop/waiting
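The checks suggested above, spelled out for the 3.2.5 universe package:

    initctl status glusterfs-server
    tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.log   # look for why glusterd exits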
19:33 partner huoh, debian package doesn't come with man page
19:34 cicero does rebalancing a volume incur an IO impact or lock the files that are being rebalanced?
19:34 semiosis partner: man pages were removed from 3.3.0 because they were not updated.  i think one was updated & included with 3.3.1
19:34 cicero sorry, incur *significant IO impact. i know it'll incur some sort of IO impact to copy files
19:35 semiosis cicero: yes, significant. lock, i dont know
19:35 semiosis cicero: there may be opts you can use to restrain it, see ,,(options)
19:35 glusterbot cicero: http://goo.gl/dPFAf
19:35 semiosis or possibly ,,(undocumented options) <-- outdated
19:35 cicero ah
19:35 glusterbot The old 3.1 page of undocumented options is at http://goo.gl/P89ty
19:36 cicero is there a targeted rebalance of sorts? (all these questions i've googled for)
19:36 partner semiosis: eh. someone thought it's better to let people google for 3.1 and 3.2 manual pages as the 3.3 stuff isn't hitting the top
19:36 partner cicero: you can do it for example based on time (say, modified less than an hour ago)
19:37 cicero ah
19:37 isomorphic joined #gluster
19:37 cicero in that vein, is it possible to say "new files go to these bricks"?
19:37 cicero basically time-based sharding
19:38 semiosis no
19:38 cicero hokay
19:38 semiosis at least not that i know of
19:38 partner cicero: http://community.gluster.org/a/howto-targeted-self-heal-repairing-less-than-the-whole-volume/ - i guess this is what you were looking for?
19:38 glusterbot <http://goo.gl/E3b2r> (at community.gluster.org)
19:38 semiosis that's not rebalance
19:38 cicero interesting
19:39 cicero as an aside i never knew i had to xargs stat
19:39 partner oh, true, my bad, missed that part
19:39 semiosis and that's not needed since glusterfs 3.3.0
19:39 cicero i just did the find traversal
19:39 semiosis 3.3.0 introduced proactive self heal (the self heal daemon)
19:39 partner yeah it runs every 10 minutes
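The pre-3.3 full-crawl trigger being referred to looked roughly like this (run against a client mount point):

    find /mnt/glusterfs -noleaf -print0 | xargs --null stat >/dev/null 2>&1
    # from 3.3.0 on, the self-heal daemon crawls and heals automatically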
19:40 cicero cool
19:40 cicero alright thanks as usual
19:40 semiosis yw
19:40 semiosis bbl
19:40 partner what you mean by "targetted rebalance" ?
19:40 cicero partner: more like, i want to move over a subset of dirs to the new brick
19:40 cicero it's ok though. i think i'll just bring up a new volume.
19:41 cicero bbl myself
19:41 partner is it faster or what might be the reasoning behind such?
19:42 JoeJulian cicero: All directories exist on all bricks. It's the files that get distributed. See http://joejulian.name/blog/dht-misses-are-expensive/ for a brief overview on dht
19:42 glusterbot <http://goo.gl/A3mCk> (at joejulian.name)
19:42 JoeJulian time-based sharding would require a custom translator.
19:42 partner i in a sense would perhaps need something similar but i guess we will just somehow copy/move files around
19:43 partner having a gazillion files which never change - it doesn't make sense to have the backup go through everything each round
19:44 m0zes time based sharding could be very interesting for a tiered filesystem.
19:44 partner instead, we plan to somehow stage the files for a day or few for backup and then get them further
19:45 partner yup, tiering approach has been thought on this and few similar use cases
19:47 partner one of our enterprise storages does the tiering (ssd -> scsi -> sata) and i do like it, fresh and most used stuff on top, rest will flow further down
19:53 partner any definition of often mentioned "small file" or pointers to benchmarks testing different file sizes?
19:54 JoeJulian @small file
19:54 glusterbot JoeJulian: I do not know about 'small file', but I do know about these similar topics: 'small files'
19:54 JoeJulian @small files
19:54 glusterbot JoeJulian: See http://goo.gl/5IS4e
19:54 JoeJulian And THANK YOU for actually applying your brain to that instead of just sucking in all the FUD!
19:55 partner :)
19:55 partner thanks joe, i might have bumped into that post too but i already have some 500+ tabs open with glusterfs stuff so i seem to be already forgetting some findings
19:56 JoeJulian hehe
19:56 partner i want to really understand what this does and how it works before applying it to production
19:57 JoeJulian rephrased: I want to understand all the parameters before I engineer this into our system so I can build a solution that meets our requirements.
19:58 JoeJulian Which is exactly what I keep trying to hammer into the world. :)
19:58 partner well put, i've copy-pasted the quote somewhere safe :)
19:59 partner it unfortunately seems we most likely fall into the category "lots of small files".. i need to make monthly stats to get a bit broader view
20:01 partner and i'd rather not use NFS, as the native client does all the fancy things nicely by itself
20:02 JoeJulian It does sound like that's likely. As long as you're not renaming files, you shouldn't have to worry about dht misses. If you can keep files open, that'll help as well.
20:03 JoeJulian If you know what file you're opening and don't have to rely on directory listings, that'll help a lot too.
20:03 partner for the files in question (in the first targeted project), none are ever modified once written to disk
20:04 partner and to my understanding the path is fully known in advance and no need to go into directory listings
20:07 JoeJulian So then the question seems to come down to how many of these small files are going to be needed per transaction, what's the target response time per transaction or how many transactions/second are expected and how many servers will it take to provide that response.
20:07 partner don't have all the details of the service but i don't see any need to list anything as the filename is a hash of the file in question and files are split into subdirs based on the hashes (i guess a bit similarly to DHT)
20:07 w3lly joined #gluster
20:08 w3lly Hi. Could someone explain to me why my mountpoint receives the "relatime" option? In my fstab I configured norelatime, not relatime. My version is 3.3.1
20:09 partner access numbers are small and we are not, at least initially, worried about the performance at all; nevertheless i'm interested in this "small file" problem in case we ever bump into it as the usage pattern just might change
20:11 partner in our case the whole glusterizing is due to constantly having to react to increasing storage capacity, and currently it's basically just adding new nfs mounts to a single server (SPOF)
20:11 JoeJulian ugh
20:11 partner yes
20:13 partner not talking about any huge numbers, no petabyte amounts but nevertheless an issue anyways we periodically tackle
20:15 partner perhaps 100-300 GB data stored per day
20:15 JoeJulian So the self-heal check adds a little bit of latency overhead to a lookup(). It's not a tremendous amount, but it's basically network latency + disk latency * 2. Mitigating that would be to reduce those latencies or to provide enough clients to handle the load.
20:16 JoeJulian actually, it's probably just network latency * 2 + disk latency...
20:16 jdarcy Doing your own hashing into multiple directories is a bit of an anti-pattern with GlusterFS.
20:17 partner jdarcy: are you suggesting storing all the files into single directory?
20:17 JoeJulian ... but reducing the number of files per directory is often helpful.
20:17 partner exactly the reason
20:17 jdarcy It's mostly a geo-replication issue.  The optimizations we use to scan for changed files break if files are being placed into every single (hash-based) directory every time quantum.  It's actually better to concentrate new files into as few directories as possible.
20:18 kombucha Where are the release notes for 3.2.7 incl bug fixes?
20:18 jdarcy A couple of large customers have ended up creating time-based directories.  Put 10K in this one, then 10K in that one, etc.
20:18 JoeJulian @git repo
20:18 glusterbot JoeJulian: https://github.com/gluster/glusterfs
20:19 partner jdarcy: seeing the stats we probably store around 5M files per month, multiply that by n years
20:19 JoeJulian kombucha: https://github.com/gluster/glusterfs/commits/v3.2.7
20:19 glusterbot <http://goo.gl/c6PjO> (at github.com)
20:20 kombucha nice, thanks.
20:20 JoeJulian You're welcome.
20:20 nhm joined #gluster
20:21 jdarcy nhm: Hey there.
20:24 partner jdarcy: and having my file deadbeef in /storage/dea/deadbeef makes it easy for the service to construct the path without ever looking into the dir. and a manual user will find it easily too
20:24 nhm jdarcy: Heya, how goes linux.conf.au?
20:26 partner JoeJulian: haven't decided yet about the hardware but options for the network are pretty much 1-n Gbit interfaces or going to 10 Gbit. no idea really, haven't done any benchmarking yet
20:26 jdarcy nhm: Good question.  I was going to ask you if there'd been any word.  The Ceph vs. GlusterFS slugfest was about 24 hours ago, right?
20:27 jdarcy No, about 18.  4:30pm Canberra time.
20:27 nhm jdarcy: I actually don't know, haven't been keeping up with what's going on down there.
20:28 jdarcy partner: Well, if that's more important to you then fine.  Just be aware that there's a tradeoff.
20:28 nhm jdarcy: that's right, I forgot you had to cancel.
20:28 nhm jdarcy: I was wondering why you were asking me. :)
20:28 jdarcy I haven't heard anything from John Mark.  I hope Sage didn't beat him up too badly.
20:28 partner jdarcy: sorry, what is more important here, just described what we currently have, no gluster involved in that :)
20:29 JoeJulian When shopping for switches, latency is one spec that's high on my crit list.
20:29 partner i'm open for all the suggestions and comments are most welcome, that's the reason i'm here :)
20:30 partner joe noted
20:30 jdarcy partner: What I was trying to get at was that if you used such a directory structure on top of GlusterFS *and* you used geo-replication then there'd be some pretty bad performance degradation.
20:30 nhm jdarcy: Sage is usually pretty laid back. :)
20:31 jdarcy nhm: Yeah, I tried to calm John Mark down by pointing that out, then I riled him up by doing some performance comparisons.  ;)
20:31 * jdarcy <- evil
20:31 partner jdarcy: roger, i'll make a note of that. currently we are not planning for it but then again one option would be to replicate the data into another datacenter and use that one for, for example, file-level backup off-load.
20:32 JoeJulian hehe
20:32 nhm jdarcy: I guess it depends how confident John Mark feels about exploring such topics. :)
20:33 jdarcy nhm: I'm hoping the main message was that we have to kill all the really bad storage before we start fighting among ourselves.  ;)
20:36 UnixDev joined #gluster
20:37 UnixDev for large files, like machine images, is it better to use a replica or distributed replica for data accessibility?
20:39 jdarcy When you say "distributed replica" do you mean geo-replication?
20:40 UnixDev i mean a volume with distribute and replica options
20:40 UnixDev not geo replication at this time
20:41 jdarcy UnixDev: Well, distribution is always active even if it only has one subvolume (in this case a replica set, in other cases a single brick) to work with.
20:41 jdarcy UnixDev: That way we minimize differences between the N=1 and N>1 cases.
20:41 UnixDev jdarcy: I was hoping to address some of my previous issues by adding a third server, before I had 2 servers, 1 brick on each, now I have 3
20:42 UnixDev so whats the best way to create a volume? it will not let me create a replica 2 with 1 brick on each
20:42 jdarcy There was an article about that just a couple of days ago.  Once sec.
20:42 UnixDev and if I put 2 bricks on each? will gluster make sure to never store a file in both bricks on the same server?
20:42 JoeJulian @brick order
20:42 glusterbot JoeJulian: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
20:43 jdarcy http://pthree.org/2013/01/25/glusterfs-linked-list-topology/
20:43 glusterbot <http://goo.gl/0HHCK> (at pthree.org)
20:43 GLHMarmot joined #gluster
20:43 glusterbot New news from newglusterbugs: [Bug 905203] glusterfs 3.3.1 volume heal data info not accurate. <http://goo.gl/axucm>
20:43 JoeJulian Also http://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
20:43 glusterbot <http://goo.gl/BM1qD> (at joejulian.name)
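A sketch of the chained ("linked list") replica-2 layout those articles describe for three servers (host and brick paths are hypothetical; no replica pair sits on a single server):

    gluster volume create myvol replica 2 \
        server1:/data/brick1 server2:/data/brick1 \
        server2:/data/brick2 server3:/data/brick2 \
        server3:/data/brick3 server1:/data/brick3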
20:46 JoeJulian Oh, wow... That's a cool re-write of my article. I like theirs better.
20:48 jdarcy JoeJulian: That's why I like you, Joe.  Most of the techies I know would never say that.
20:58 GLHMarmot joined #gluster
21:00 partner thank you gentlemen for your advice and comments, much appreciated and nice to see such a warm and helpful community around the project, seen many kinds during my career..
21:05 wN joined #gluster
21:13 lh joined #gluster
21:13 lh joined #gluster
21:29 johnmark jdarcy: heya
21:29 johnmark it was awesome
21:32 johnmark jdarcy: although I did have to throw you under the bus for your "unbiased" comparison
21:32 johnmark heh
21:35 nhm johnmark: Sage was on good behavior? ;)
21:37 melanor9 joined #gluster
21:40 johnmark nhm: sage is a good guy
21:40 nhm johnmark: I know, I work for him. :)
21:42 johnmark nhm: oh? cool
21:43 nhm johnmark: http://ceph.com/uncategorized/argonaut-vs-bobtail-performance-preview/
21:43 glusterbot <http://goo.gl/Ya8lU> (at ceph.com)
21:43 nhm johnmark: I'm Mark. :)
21:43 johnmark nhm: ah, hi, Mark
21:46 johnmark nhm: it was good. I think everyone there learned some things
21:46 nhm johnmark: Glad to hear it!
21:47 johnmark nhm: I think jdarcy could have been a bit more forceful because of his depth of knowledge
21:47 johnmark but I think I acquitted myself well :)
21:47 nhm johnmark: It's tough to compete with Sage even if you are a technical guy. :)
21:50 nhm johnmark: I know I have to work hard to keep up and I think very highly of myself. ;)
21:52 johnmark heh, yeah
21:53 johnmark I would never try to out-duel him technically
21:53 johnmark it was about grandiose visions and what we're trying to accomplish
21:54 nhm Interesting, I'll have to watch the video
21:54 UnixDev jdarcy: I read through the link you sent, thank you. but I'm still not clear when I should use the stripe parameter when creating a volume? is there an advantage to striped replica 2?
21:54 bennyturns joined #gluster
21:55 wushudoin joined #gluster
21:56 GLHMarmot joined #gluster
21:57 GLHMarmot joined #gluster
22:02 JoeJulian @stripe
22:03 glusterbot JoeJulian: Please see http://goo.gl/5ohqd about stripe volumes.
22:09 GLHMarmot joined #gluster
22:13 melanor9 joined #gluster
22:16 partner joined #gluster
22:25 johnmark so is it true that adding bricks and/or servers to existing Gluster volumes is easier than with other DFS's?
22:32 partner wouldn't that ruin the whole myth around distributed fs's?-)
22:33 m0zes DFS is magic! why can't it read my mind!?1?
22:34 elyograg joined #gluster
22:37 UnixDev JoeJulian: "Stripe + Replicate" mentions "large files with random i/o." that is basically what a hard disk image (for virtual machines) is
22:39 aliguori joined #gluster
22:40 jack joined #gluster
22:41 elyograg I just tried to get UFO installed on fedora 18.  got a bunch of transaction check errors about config file conflicts between attempted installs of glusterfs-ufo-3.3.1-8.fc18.noarch and glusterfs-swift-3.3.1-8.fc18.noarch
22:41 elyograg should i only be installing one of those two packages?  which one?
22:43 _Bryan_ JoeJulian: ??
22:50 chirino joined #gluster
22:57 hattenator joined #gluster
22:58 eightyeight joined #gluster
23:00 amccloud joined #gluster
23:12 JoeJulian UnixDev: That's correct, it is. But the part that doesn't fit is "accessed by hundreds of clients": it's one single client. I'm not saying (for sure anyway) that you should absolutely not use stripe, but that it's not as intuitive as people with raid backgrounds generally think and it doesn't usually get you anything beneficial.
23:13 JoeJulian elyograg: I know I saw a bug filed against that, but I'm not sure... let me look and see what I can figure out...
23:13 UnixDev JoeJulian: one particular area that interested me is scaling random reads… this is a problem with virtual machine storage
23:13 melanor9 joined #gluster
23:13 UnixDev random writes, not so much on gluster, since you can use trusted sync
23:14 UnixDev so you have memory speed essentially as the max there
23:16 JoeJulian UnixDev: Assuming you've cloned, or at least built your images the same, I can potentially see all your VMs hitting generally the same bricks more heavily since the starting brick for stripe is always the same.
23:16 JoeJulian Off the top of my head...
23:16 UnixDev ahh, interested
23:16 UnixDev interesting*
23:16 UnixDev I'm going to try the linked list topology also
23:17 UnixDev one thing that I wish was available is dedupe. is that something thats planned in the future?
23:18 JoeJulian It's been discussed several times, but I haven't seen any code committed to that end.
23:19 amccloud joined #gluster
23:20 amccloud Trying out gluster for the first time on ubuntu 12.04
23:21 amccloud I'm getting df: `/mnt/glusterfs': Transport endpoint is not connected
23:21 amccloud I followed http://www.gluster.org/community/documentation/index.php/Getting_started_configure
23:21 glusterbot <http://goo.gl/BsK02> (at www.gluster.org)
23:23 fedora joined #gluster
23:24 JoeJulian amccloud: Did you start your volume?
23:25 amccloud yes
23:25 amccloud https://gist.github.com/ac6e7e79db1ef8486b3d
23:25 glusterbot Title: gist:ac6e7e79db1ef8486b3d (at gist.github.com)
23:27 amccloud JoeJulian: I installed through apt.
23:28 JoeJulian elyograg: I can only find one reference where kkeithley mentions that the package was renamed (twice) and ended up glusterfs-ufo so I would guess (unless kkeithley says otherwise) that that's the correct package.
23:29 JoeJulian amccloud: Check the client log for clues: /var/log/glusterfs/mnt-glusterfs.log
23:30 JoeJulian Sorry... I've got to run. I'll be back a little later.
23:34 polfilm joined #gluster
23:37 w3lly joined #gluster
23:37 eightyeight joined #gluster
23:45 RicardoSSP joined #gluster
23:45 RicardoSSP joined #gluster
