
IRC log for #gluster, 2013-05-14


All times are shown in UTC.

Time Nick Message
00:02 meeew nvm found it
00:13 jthorne joined #gluster
00:24 vpshastry joined #gluster
00:29 yinyin joined #gluster
00:40 Rorik joined #gluster
00:52 chirino joined #gluster
00:54 snarkyboojum joined #gluster
01:08 jules_ joined #gluster
01:17 fidevo joined #gluster
01:18 kevein joined #gluster
01:18 kevein joined #gluster
01:19 bala joined #gluster
01:48 fidevo joined #gluster
01:56 nickw joined #gluster
02:04 nickw joined #gluster
02:35 sjoeboo joined #gluster
02:36 harish joined #gluster
02:39 sjoeboo joined #gluster
02:42 sjoeboo joined #gluster
02:42 lalatenduM joined #gluster
02:48 sjoeboo joined #gluster
02:51 sjoeboo__ joined #gluster
03:14 bharata joined #gluster
03:17 aravindavk joined #gluster
03:22 sgowda joined #gluster
03:25 saurabh joined #gluster
03:40 Supermathie meeew: well don't leave us hanging
03:41 Supermathie semiosis: If you're interested in Discourse, and have an Ubuntu 12.04 VM, I have a prod install guide for which I need a guinea pig :)
04:01 majeff joined #gluster
04:06 anand joined #gluster
04:14 ngoswami joined #gluster
04:16 sgowda joined #gluster
04:18 majeff1 joined #gluster
04:23 yinyin joined #gluster
04:35 lalatenduM joined #gluster
04:44 hagarth joined #gluster
04:44 fidevo joined #gluster
04:52 mohankumar__ joined #gluster
05:00 bulde joined #gluster
05:02 glusterbot New news from newglusterbugs: [Bug 962619] glusterd crashes on volume-stop <http://goo.gl/XXzSY>
05:03 sgowda joined #gluster
05:10 shylesh joined #gluster
05:16 vpshastry joined #gluster
05:18 Susant joined #gluster
05:19 Susant left #gluster
05:20 raghu joined #gluster
05:22 bharata joined #gluster
05:25 johnmark joined #gluster
05:26 VSpike joined #gluster
05:33 majeff joined #gluster
05:42 guigui3 joined #gluster
05:42 meeew Supermathie: --remote-host worked
05:50 bala joined #gluster
05:55 rastar joined #gluster
05:56 ricky-ticky joined #gluster
06:01 ramkrsna joined #gluster
06:01 ramkrsna joined #gluster
06:04 krishna_ joined #gluster
06:12 aravindavk joined #gluster
06:22 shireesh joined #gluster
06:25 jtux joined #gluster
06:25 mohankumar__ joined #gluster
06:31 saurabh joined #gluster
06:32 kshlm joined #gluster
06:32 mohankumar joined #gluster
06:33 ollivera joined #gluster
06:34 yinyin_ joined #gluster
06:42 msvbhat joined #gluster
06:42 vimal joined #gluster
06:45 bharata joined #gluster
06:55 majeff1 joined #gluster
06:56 andreask joined #gluster
07:00 ekuric joined #gluster
07:01 satheesh joined #gluster
07:01 satheesh1 joined #gluster
07:04 vpshastry left #gluster
07:07 vpshastry joined #gluster
07:12 aravindavk joined #gluster
07:18 nickw joined #gluster
07:27 hybrid5121 joined #gluster
07:36 krishna_ joined #gluster
07:38 guigui3 joined #gluster
07:44 anup joined #gluster
07:47 mohankumar joined #gluster
07:49 rotbeard joined #gluster
07:58 mgebbe_ joined #gluster
08:00 majeff joined #gluster
08:01 anup hi
08:01 glusterbot anup: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:03 samppah heyhey
08:04 anup using Gluster 3.2.2 I am looking for documentation on upgrading to latest version.
08:06 anup I guess I will come back more prepared. Thanks!
08:07 anup left #gluster
08:10 majeff1 joined #gluster
08:10 manik joined #gluster
08:11 andrei_ joined #gluster
08:20 Nagilum_ joined #gluster
08:31 Nagilum_ hmm, what do I do when I see: "execution of "rsync" failed with E2BIG (Argument list too long)" in my geo replication log?
08:32 Nagilum_ .oO(how hard can it be to pipe this through xargs if I'm too lazy to figure out the max arg size?)
08:35 rastar joined #gluster
08:38 andreask Nagilum_: there is a Red Hat knowledge base article about that ... https://access.redhat.com/site/solutions/202923
08:38 glusterbot Title: Geo Replication fails to sync initial set of files - execution of "rsync" failed with E2BIG (Argument list too long) - Red Hat Customer PortalRed Hat Customer Portal (at access.redhat.com)
08:38 Nagilum_ andreask: i don't have access to that
08:38 Nagilum_ do you?
08:39 andreask Nagilum_:  Root Cause
08:39 andreask gsyncd (the worker for geo-replication) could cause an argument overflow to rsync. This was caused by passing the files to be synced as arguments to rsync, and fixed by having rsync read them on stdin with -0 --files-from=-.
08:39 ekuric1 joined #gluster
08:39 andreask .. they say it is fixed in later 3.3 versions ... though referring to Red Hat package numbers
08:40 Nagilum_ I have 3.3.1-14 installed
08:40 andreask hmm ... seems to be too old
08:40 mohankumar__ joined #gluster
08:41 duerF^ joined #gluster
08:47 Goatberto joined #gluster
08:47 Nagilum_ does it say which version has a fix? Or mentions a specific commit?
08:49 Nagilum_ http://git.gluster.org/?p=glusterfs.git;a=commit;h=7a2362d818baf7cae0ae54ffede436821491c876
08:49 glusterbot <http://goo.gl/xNa6l> (at git.gluster.org)
08:49 Nagilum_ hmm
08:49 Nagilum_ I should be able to apply this
08:53 andreask hmm ... looks like its in upcoming 3.4 ...
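
(For reference, the fix andreask quotes above boils down to feeding rsync its file list on stdin instead of as command-line arguments. A minimal illustration of that pattern, not the actual gsyncd code, with placeholder paths:)

    # Failing pattern: a huge file list passed as arguments can exceed the
    # kernel's argument-size limit and die with E2BIG.
    #   rsync -a file1 file2 ... fileN user@slave:/dest/
    # Fixed pattern: a NUL-separated list read from stdin.
    printf '%s\0' file1 file2 file3 |
        rsync -a -0 --files-from=- . user@slave:/dest/
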
08:59 krishna_ joined #gluster
09:05 vpshastry1 joined #gluster
09:19 jtux joined #gluster
09:28 aravindavk joined #gluster
09:31 rgustafs joined #gluster
09:31 vpshastry joined #gluster
09:32 krishna_ joined #gluster
09:33 shylesh joined #gluster
09:33 spider_fingers joined #gluster
09:36 kshlm joined #gluster
09:39 deepakcs joined #gluster
09:40 nickw joined #gluster
09:41 rastar joined #gluster
09:41 Nagilum_ andreask: i think I managed to apply the fix to 3.3.1 now :) (OpenSource FTW!)
09:41 andreask Nagilum_: great ;-)
09:50 lh joined #gluster
09:50 lh joined #gluster
09:52 andrei_ joined #gluster
09:59 muhafly joined #gluster
09:59 muhafly hi
09:59 glusterbot muhafly: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:01 muhafly if i have one gluster server on AWS that has two EBS volumes attached, and i used them to create a replicated volume, now the server has died and i create a new gluster server, i would like to attach the EBS volumes and "remount" the volume. is it possible?
10:03 glusterbot New news from newglusterbugs: [Bug 961668] gfid links inside .glusterfs are not recreated when missing, even after a heal <http://goo.gl/4vuYc>
10:22 Norky joined #gluster
10:28 majeff joined #gluster
10:28 hagarth1 joined #gluster
10:31 yinyin_ joined #gluster
10:31 vrturbo joined #gluster
10:37 andreask joined #gluster
10:41 vpshastry joined #gluster
10:42 andreask joined #gluster
10:42 aravindavk joined #gluster
10:46 mgebbe_ joined #gluster
10:58 jtux joined #gluster
10:58 rkbstr_wo joined #gluster
11:02 kshlm joined #gluster
11:04 kkeithley1 joined #gluster
11:12 vshankar joined #gluster
11:13 manik joined #gluster
11:17 manik joined #gluster
11:19 vpshastry joined #gluster
11:32 yinyin_ joined #gluster
11:34 yinyin- joined #gluster
11:35 nicolasw joined #gluster
11:42 lkoranda_ joined #gluster
11:44 Hennakin joined #gluster
11:46 Hennakin left #gluster
11:52 andreask joined #gluster
11:53 majeff1 joined #gluster
11:58 manik joined #gluster
12:07 manik joined #gluster
12:09 manik joined #gluster
12:11 avati joined #gluster
12:15 jtux joined #gluster
12:15 Skun_ joined #gluster
12:16 jag3773 joined #gluster
12:16 vpshastry joined #gluster
12:17 georgeh|workstat joined #gluster
12:19 krishna joined #gluster
12:22 vimal joined #gluster
12:29 edward1 joined #gluster
12:36 plarsen_ joined #gluster
12:39 tshm joined #gluster
12:43 redshirtlinux joined #gluster
12:47 lpabon joined #gluster
12:47 redshirtlinux Hello All, if I add different size bricks to a distributed replicated volume, would the size be constrained to that of the smallest mirrored pair?
12:47 dustint joined #gluster
12:48 nickw joined #gluster
12:49 tshm If you add a pair of mirrored bricks, you will extend your volume's size with the size of the new bricks, regardless. However, you may notice funny performance effects such as smaller bricks filling up quicker etc.
12:50 tshm Then, of course, if the pair you add consists of two different-sized bricks, then the smaller one will determine how much space you extend by...
12:51 redshirtlinux Thank you @tshm!
12:51 tshm no worries
12:55 jag3773 joined #gluster
12:57 dmojoryder joined #gluster
12:58 Zengineer joined #gluster
12:58 fps joined #gluster
12:59 mohankumar__ joined #gluster
12:59 jtux joined #gluster
13:01 lalatenduM joined #gluster
13:05 zaitcev joined #gluster
13:07 bennyturns joined #gluster
13:07 bala joined #gluster
13:11 coredumb joined #gluster
13:14 robos joined #gluster
13:23 saurabh joined #gluster
13:26 jdarcy joined #gluster
13:32 dewey joined #gluster
13:40 nueces joined #gluster
13:42 krishna joined #gluster
13:47 jbrooks joined #gluster
13:48 majeff joined #gluster
13:49 aliguori joined #gluster
13:50 Supermathie avati: re my comments yesterday about reducing the time gluster spends processing writes:      72.63    2765.08 us      35.00 us  143107.00 us         428225       FLUSH
13:51 Supermathie nvm that: http://fpaste.org/12050/53946013/
13:51 glusterbot Title: #12050 Fedora Project Pastebin (at fpaste.org)
13:56 sjoeboo_ so, I've got a directory structure on my gluster volume i seem unable to remove via the fuse client mount...(hangs up, complains about some dirs not being empty when they are). The files are all trash, i just need to get rid of them. Can I just remove them from all bricks and be done with it? any glusterfs cleanup needed if i do that?
13:58 rkbstr_wo joined #gluster
13:58 jdarcy It's not really recommended, but yes you can.  The one really tricky (but critical!) part is removing the .glusterfs links as well.
13:58 balunasj|mtg joined #gluster
13:59 Supermathie sjoeboo_: I have a python script you can easily adapt to do that... I use it to clear the trusted.afr attributes.
14:00 glusterbot New news from resolvedglusterbugs: [Bug 764890] Keep code more readable and clean <http://goo.gl/p7bDp>
14:00 Supermathie sjoeboo_: essentially: recursively walk the directory, read trusted.gfid, remove appropriate hardlink from brickroot/.glusterfs/a/b/abcd-etc, remove file.
14:00 vpshastry joined #gluster
14:01 sjoeboo_ Supermathie: and you have a script for this laying around ? or close to it?
14:02 Supermathie sjoeboo_: Enough to be useful: http://fpaste.org/12051/13685401/
14:02 glusterbot Title: #12051 Fedora Project Pastebin (at fpaste.org)
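
(A rough bash equivalent of the walk Supermathie describes, for readers without the paste; his actual script is Python and this sketch is not it. BRICK and TARGET are placeholder paths, directory gfid symlinks under .glusterfs are not cleaned up here, and it should be tried on throwaway data first:)

    BRICK=/export/brick1              # example brick root
    TARGET="$BRICK/dir-to-remove"     # example directory to purge

    find "$TARGET" -type f | while read -r f; do
        # read the 16-byte trusted.gfid xattr as hex
        gfid=$(getfattr --absolute-names -n trusted.gfid -e hex "$f" 2>/dev/null |
               awk -F0x '/trusted.gfid=/{print $2}')
        # reformat the raw hex into the UUID used for the .glusterfs path
        uuid=$(echo "$gfid" |
               sed 's/^\(.\{8\}\)\(.\{4\}\)\(.\{4\}\)\(.\{4\}\)\(.\{12\}\)$/\1-\2-\3-\4-\5/')
        rm -f "$BRICK/.glusterfs/${gfid:0:2}/${gfid:2:2}/$uuid"   # the gfid hardlink
        rm -f "$f"                                                # the file itself
    done
    rm -rf "$TARGET"
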
14:03 tshm Now I've got a question. I've got a test setup, with replicated bricks. I shot down the client and modify one of the files on both bricks (in different ways), trying to achieve a split-brain scenario. Then I reconnect the client, and try to trigger a self-heal. No errors show up in the log, but the files don't sync. They are still different on the two different bricks, and the client seems...
14:03 tshm ...to always pick the same one (when I examine its content), except of course if I offline that brick, in which case the other one is picked. What can I do about this?
14:03 tshm *shut down
14:03 Supermathie tshm: How do you modify the file? Under the hood? Or through gluster?
14:04 tshm just like so:   echo "lorem ipsum" >> file
14:04 Supermathie you didn't answer the question :p
14:04 tshm i.e., on the brick itself, with the client offline
14:04 tshm tried to :-/
14:05 tshm So no, not through Gluster.
14:05 Supermathie My understanding is that gluster keeps track of all split-brain with the trusted.afr.* attributes. They'll only get set if the file is written *through* gluster with the brick offline.
14:05 jdarcy tshm: Modifying the file contents on a brick won't induce split-brain.
14:05 tshm oh
14:06 jtux joined #gluster
14:06 tshm Okay... so maybe that was just a bad test to make.
14:06 jdarcy You can either do it by writing with one brick down, as Supermathie suggests, or by manually manipulating the xattrs (not recommended unless you *really* know what they mean).
14:06 semiosis http://gluster.helpshiftcrm.com/q/what-is-split-brain-in-glusterfs-and-how-can-i-cause-it/
14:06 Supermathie tshm: Yeah it's an invalid test.
14:06 glusterbot <http://goo.gl/Oi3AA> (at gluster.helpshiftcrm.com)
14:06 semiosis you want to cause split-brain, there you go
14:06 tshm Cool, thanks!
14:07 semiosis yw
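
(The linked page is the authoritative recipe; in outline it amounts to out-of-sync writes to both replicas. A rough sketch for a replica-2 test volume, with invented names and the caveat that the self-heal daemon can undo a step if you are too slow:)

    gluster volume set testvol cluster.self-heal-daemon off   # avoid races while testing

    # 1. kill brick A's glusterfsd (PID from 'gluster volume status testvol'),
    #    then write through the fuse mount:
    echo "written while A was down" >> /mnt/testvol/file
    # 2. bring brick A back, kill brick B's glusterfsd, then write again:
    gluster volume start testvol force       # restarts any missing brick process
    echo "written while B was down" >> /mnt/testvol/file
    # 3. bring brick B back; the replicas now blame each other in their
    #    trusted.afr.* xattrs and the file is split-brained.
    gluster volume start testvol force
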
14:07 tshm As a matter of fact I tried another thing, too, writing a bunch of files and bringing one brick offline halfway. That also didn't turn out very well. The files showed up on the other brick when I brought it back online, but some files appeared only partially when having triggered self-heal.
14:08 tshm (i.e. bringing the brick offline _while_ files were being written)
14:08 semiosis tshm: split brain is not data written to one brick only.  split brain is out-of-sync writes to *both* replicas.
14:09 tshm true; that was a different test
14:09 jdarcy tshm: The files will initially appear as zero length (as part of entry self-heal on the parent directory) and then get filled with actual data (as part of data self-heal for the file itself).
14:09 tshm I'm all into testing self-healing today.
14:09 tshm @jdarcy: Yes, they did. First they were 0-byte files, then they were partially filled and stayed so for a really long time.
14:09 jdarcy It's a breadth-first search, where file contents are "deeper" than their directory entries.
14:10 jdarcy tshm: The "really long time" part seems odd, otherwise it's what I'd expect.
14:10 tshm All files were 50M to begin with, then some were 13M, some 14M, 26M, even 42M
14:11 kaptk2 joined #gluster
14:11 tshm jdarcy: A "really long time" in this case is ~10 minutes; the network is fast so the initial partial repair was done within a second or two  - otherwise it's what I'd expect, too!
14:11 tshm Maybe I should add it's a really small set of files, only 20 files @50M each
14:12 tshm Thank you all, so far. I'll go about trying to reproduce that behaviour.
14:15 piotrektt joined #gluster
14:20 daMaestro joined #gluster
14:20 krishna__ joined #gluster
14:20 Supermathie semiosis: I actually have cases where files have gotten out of sync (split-brain) due to glusterfs/nfs not pushing the writes out to the bricks due to https://bugzilla.redhat.com/show_bug.cgi?id=960141
14:20 glusterbot <http://goo.gl/RpzTG> (at bugzilla.redhat.com)
14:20 glusterbot Bug 960141: urgent, unspecified, ---, vraman, NEW , NFS no longer responds, get  "Reply submission failed" errors
14:20 Supermathie Which ends up looking like the end result of tshm's 'invalid test'
14:21 Supermathie tshm: So you're actually generating a case that can happen, but not normally :) glusterfs shows those files as needing healing, but not split-brained.
14:21 Supermathie Kind of off.
14:21 Supermathie odd.
14:23 tshm Yes, I suppose that's a more correct way of describing it.
14:24 Supermathie I should add that to the bug report - this is Very Bad.
14:24 johnmorr joined #gluster
14:25 tshm Yes, I thought so too, even if it's not supposed to happen at all. I mean, you're supposed to do all modifications through the client.
14:25 snarkyboojum_ joined #gluster
14:26 Supermathie Well, it's not so bad that your situation isn't handled. :) But it's bad that my setup got into that state.
14:26 tshm Indeed.
14:26 yinyin_ joined #gluster
14:27 balunasj joined #gluster
14:27 Supermathie balunasj|mtg: shortest break between meetings ever.
14:28 aliguori joined #gluster
14:28 balunasj|mtg Supermathie: LOL - yeah, something got messed up with my nick - but somedays it feels that way...
14:29 msvbhat_ joined #gluster
14:30 ngoswami_ joined #gluster
14:30 Zenginee1 joined #gluster
14:30 bala1 joined #gluster
14:30 lyang0 joined #gluster
14:31 andrewjsledge joined #gluster
14:31 kkeithley1 joined #gluster
14:32 msmith__ joined #gluster
14:34 m0zes_ joined #gluster
14:35 bronaugh joined #gluster
14:35 H___ joined #gluster
14:36 jag3773 joined #gluster
14:36 hflai joined #gluster
14:36 badone joined #gluster
14:36 clutchk1 joined #gluster
14:37 foster_ joined #gluster
14:37 NuxRo joined #gluster
14:37 jds2001 joined #gluster
14:37 bugs_ joined #gluster
14:37 lanning_ joined #gluster
14:38 bfoster_ joined #gluster
14:38 larsks_ joined #gluster
14:39 vpshastry1 joined #gluster
14:40 rastar joined #gluster
14:40 shanks` joined #gluster
14:41 ehg_ joined #gluster
14:41 bdperkin_ joined #gluster
14:42 H__ joined #gluster
14:46 krishna__ joined #gluster
14:47 ngoswami_ joined #gluster
14:47 Nagilum_ is there any trick to getting geo-replication to work?
14:47 Nagilum_ it says status OK, but I don't see it doing anything
14:48 spider_fingers left #gluster
14:49 dmojoryder joined #gluster
14:49 manik joined #gluster
14:49 anand joined #gluster
14:49 hchiramm_ joined #gluster
14:50 JZ_ joined #gluster
14:53 duerF joined #gluster
14:53 tqrst joined #gluster
14:57 awheeler_ joined #gluster
14:57 ctria joined #gluster
15:01 brian_ joined #gluster
15:01 harish joined #gluster
15:04 glusterbot New news from newglusterbugs: [Bug 960141] NFS no longer responds, get "Reply submission failed" errors <http://goo.gl/RpzTG>
15:06 elyograg joined #gluster
15:07 elyograg left #gluster
15:07 plarsen joined #gluster
15:08 brian_ Hello. I just got a gluster volume all set up and I have managed to mount the volume properly (I think).  My setup consists of a head node and 3 compute nodes (CentOS 6.4). The 3 compute nodes are the ones containing my "bricks". My question is about the proper way to mount (i.e. where is the best place for my mount point… the head node?). What I have tried so far is to create a directory on the head node called gluster-mnt-dir and I mount
15:08 brian_ my gluster volume (called gv0, on the head, at that directory). I did a manual mount (just for testing) at that directory. Is creating a mount point on the head node to mount the gluster volume the best way to utilize my glusterfs volume?
15:08 Supermathie brian_: You can mount it directly from any server you wish to have access
15:09 Supermathie s/can/should/, s/any/every/ :)
15:09 brian_ ok well my thinking is that once I have it mounted at the head, I would export that mount so that it will be shared across the nodes… Performance-wise, is this the best way?
15:09 Supermathie brian_: nono, just mount it on all machines from where you want to use it.
15:10 brian_ so make manual mounts on each node to the gluster volume?
15:10 brian_ instead of using NFS
15:10 brian_ ?
15:11 brian_ will NFS slow things down?
15:11 jthorne joined #gluster
15:11 brian_ my boss wants me to do performance benchmarks on it… so I'm trying to figure out what will give the best performance
15:11 Supermathie brian_: as to performance, you need to test. But you can't use nfs client on the same node as the gluster nfs server, it breaks locking.
15:12 Supermathie I recommend starting with the fuse mount on each server
15:12 Supermathie brian_: also try these options: http://fpaste.org/12068/54435713/
15:12 glusterbot Title: #12068 Fedora Project Pastebin (at fpaste.org)
15:12 brian_ ok by fuse mounting, you mean just using the mount command as normal on each individual node?
15:13 Supermathie brian_: yeah the "gluster client"
15:14 brian_ is there something specific in the mount command that specifies to use the "gluster client", or am I using it by default every time I mount the gluster volume just using the normal linux mount command?
15:14 Supermathie There is no normal Linux mount command. :)
15:14 ollivera_ joined #gluster
15:14 Supermathie fearless1:/gv0 /gv0 glusterfs defaults,_netdev,backupvolfile-server=fearless2 0 0 <--- gluster client mount
15:14 kshlm joined #gluster
15:14 brian_ I have just been using "mount"
15:15 andrewjsledge joined #gluster
15:15 Rorik_ joined #gluster
15:15 tshm_ joined #gluster
15:15 brian_ ok so the line you wrote above is just what you put in your /etc/fstab?
15:15 Supermathie yeah
15:15 brian_ k
15:16 Supermathie the fuse client isn't ideal, but it's a good place to start. Be sure to read the config guides, there's lots of fiddly bits you may not expect.
15:16 brian_ when I do a manual mount, (without putting in fstab), is that also using the "gluster client mount"?
15:16 Supermathie depends, show your command
15:16 brian_ ok just a sec
15:17 duerF joined #gluster
15:17 johnmorr_ joined #gluster
15:17 vpshastry2 joined #gluster
15:18 meeew_ joined #gluster
15:19 brian_ ok apparently it is, here is the output of mount on my head node: http://fpaste.org/12069/13685447/
15:19 glusterbot Title: #12069 Fedora Project Pastebin (at fpaste.org)
15:19 bet_ joined #gluster
15:19 Supermathie Yeah, it's mounted with fuse, but what was your actual command?
15:20 brian_ mount -t glusterfs node02:gv0 /gluster-mnt-dir
15:20 saurabh joined #gluster
15:20 Supermathie OK, that's a lot more specific than saying "the normal mount command". Yeah, that specifies to use the fuse client. Without -t glusterfs, you'll get an NFS mount.
15:21 chirino_m joined #gluster
15:21 Supermathie We like specifics in here :)
15:21 brian_ ok cool
15:23 brian_ right now I'm running iozone to do some benchmarking. Do you guys know a better tool for benchmarking a file system? I thought about trying to use bonnie++ but I'm not sure how well it works over gluster. Any recommendations?
15:23 Supermathie brian_: fio
15:24 Supermathie https://github.com/khailey/fio_scripts
15:24 glusterbot Title: khailey/fio_scripts · GitHub (at github.com)
15:24 brian_ cool thx Supermathie
15:25 brian_ is it recommended to mount a glusterfs over NFS or does it slow things down?
15:26 brian_ I plan on testing this myself, but I wondered if any of you have had experience with that already.
15:26 Supermathie For *my workload*, it's a lot faster over NFS.
15:27 bdperkin joined #gluster
15:27 brian_ so when mounting over NFS, I wouldn't use the fuse client at all, right? meaning, NFS can mount a gluster filesystem without the need of the fuse client?
15:28 Supermathie right, gluster includes an internal NFS server
15:28 brian_ ok
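
(For comparison, the two mount styles being discussed; hostnames and mount points are just examples. Gluster's built-in NFS server speaks NFSv3 over TCP only:)

    mount -t glusterfs node02:/gv0 /mnt/gv0-fuse         # fuse (native) client
    mount -t nfs -o vers=3,tcp node02:/gv0 /mnt/gv0-nfs  # gluster's internal NFS server
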
15:29 avati joined #gluster
15:31 brian_ here is probably a really dumb question, but this whole "distributed filesystem" concept is still pretty confusing to me… When I do a manual mount (using the fuse client as I said above), I specified one of the gluster brick nodes (node02 in my example), I don't need to run any other mounts in order to access the bricks on the other nodes (node03 and node04), right? It's my understanding that the mount just needs info about the volume from
15:31 brian_ one of the nodes in the brick list (could be any node), just for the mounting purposes… right?
15:32 coredumb joined #gluster
15:32 brian_ then once that is mounted, I have access to the entire volume… all bricks… etc?
15:33 Supermathie sort of. This might help (my own work, not vetted): http://www.websequencediagrams.com/files/render?link=Q5L4oIoO10o6od6RRMHT
15:33 glusterbot <http://goo.gl/hXmvL> (at www.websequencediagrams.com)
15:33 brian_ k. i'll take a look
15:33 Supermathie @glossary | brian_
15:34 Supermathie heh
15:34 Supermathie the fuse client (or nfs server which is a gluster client) has the responsibility of taking care of all that. You just mount it and use it.
15:34 hagarth joined #gluster
15:34 brian_ ok.
15:35 brian_ that diagram looks pretty cool, although I dont understand it :)
15:35 Supermathie sequence of events for a distributed replicated gluster vol
15:35 tshm_ read from the top down :-)
15:35 brian_ ok mine is "distributed"…
15:35 brian_ same principles apply though right?
15:35 Supermathie brian_: yeah, you won't have the mirrored writes.
15:36 Supermathie Other than that, same idea
15:36 brian_ k
15:36 tshm_ Oh... so no replication? Then just skip the two rightmost boxes and all arrows pointing to and from them.
15:36 brian_ k
15:36 tshm_ (server2/...)
15:37 hagarth joined #gluster
15:38 brian_ another (probably dumb) question.. When all of this stuff is getting written to the "bricks", are the bricks readable? meaning, could I go to one of the brick directories and see where it is putting things (i.e. file names etc, right in the brick directories)?
15:38 tshm_ yes, you can
15:39 tshm_ Go ahead and try! Do a simple ls wherever you have your brick directories on your servers.
15:39 tshm_ You should see that some files end up in one brick, some in the other...
15:40 brian_ but at the mount point of the gluster volume, everything would appear there all in one place with file pointers (behind the scenes) actually pointing gluster to where the files actually are in bricks.
15:40 brian_ right?
15:41 Supermathie brian_: yep, http://www.gluster.org/community/documentation/index.php/Arch/A_Newbie's_Guide_to_Gluster_Internals
15:41 glusterbot <http://goo.gl/3ntVd> (at www.gluster.org)
15:41 brian_ cool…
15:41 Supermathie brian_: cat /var/lib/glusterd/vols/gv0/gv0-fuse.vol and you'll kind of get an idea about what's going on
15:41 tshm_ But you wouldn't exactly call them file pointers, would you? (Supermathie)
15:42 brian_ "file pointers" was my own made up term to try and make my question more clear :)
15:43 Supermathie tshm_: no, the files don't point at the bricks, the client hashes the file info (name? path? both?) to figure out where it *should* be.
15:43 brian_ k.. that command output a lotta stuff
15:44 Supermathie brian_: http://hekafs.org/index.php/2012/03/glusterfs-algorithms-distribution/
15:44 glusterbot <http://goo.gl/MLB8a> (at hekafs.org)
15:48 sprachgenerator joined #gluster
15:50 tshm_ Yes, that's a great article. I keep coming back to it whenever I need to look something up about distribution.
16:01 portante` joined #gluster
16:01 rob__ joined #gluster
16:19 bala joined #gluster
16:22 brian_ Supermathie:cool article… btw, those settings you gave me (here > http://fpaste.org/12068/54435713/ ). where do I put those in my configuration in order to use them?
16:22 glusterbot Title: #12068 Fedora Project Pastebin (at fpaste.org)
16:25 andrewjsledge joined #gluster
16:25 manik joined #gluster
16:29 manik1 joined #gluster
16:31 MrNaviPacho joined #gluster
16:34 glusterbot New news from newglusterbugs: [Bug 962875] Entire volume DOSes itself when a node reboots and runs fsck on its bricks while network is up <http://goo.gl/uG0yY>
16:37 E-T joined #gluster
16:41 JoeJulian brian_: Those are changed via "gluster volume set"
16:42 brian_ ok.. thought so just makin sure.. thx for all your help btw.. much appreciated
16:42 brian_ You and Supermathie are awesome.
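
(The general form of applying such options, using an option that appears later in this log as the example; the volume name is illustrative:)

    gluster volume set gv0 performance.cache-size 512MB
    gluster volume info gv0      # lists the non-default options now set
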
17:22 _ilbot joined #gluster
17:22 Topic for #gluster is now  Gluster Community - http://gluster.org | Q&A - http://community.gluster.org/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - http://irclog.perlgeek.de/gluster/
17:23 Supermathie STAT slow? UNLINK is slow: http://pastie.org/private/yxxvmhbor0mguoafzm19gq
17:23 glusterbot <http://goo.gl/pzqlD> (at pastie.org)
17:24 H__ that may be so , it's also quite less important to me ;-)
17:25 al joined #gluster
17:25 MrNaviPacho JoeJulian Specifically DHT misses.
17:26 Supermathie lol cue blog post
17:26 MrNaviPacho JoeJulian I think I even looked at a post you wrote about it.
17:26 JoeJulian DHT misses are something else. A miss cache would be useful and jdarcy made an example translator in python.
17:27 JoeJulian But I wasn't being facetious with regard to an "I don't care" flag. There may be instances where that's valid. "give me the image. I don't care right now if it's been updated since we last checked, I need it now."
17:28 JoeJulian Sort-of an eventually consistent model.
17:28 al joined #gluster
17:32 JoeJulian MrNaviPacho: btw... that blog post does offer solutions to overcome "poorly written code" ;)
17:32 foster123 joined #gluster
17:32 foster123 hello
17:32 glusterbot foster123: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
17:33 foster123 I want to change the port for a volume that was just created and I don't see a way to do that.  Is there a way that I don't know of that will change the volume that I
17:34 foster123 volume ports that I deleted to reuse the old port
17:34 semiosis ~ports | foster123
17:34 glusterbot foster123: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
17:34 semiosis foster123: in short, no.
17:35 foster123 ok..so every time I delete a volume I have to increase the port number on my firewall?
17:36 semiosis apparently
17:36 Supermathie foster123: allow the range. You could probably edit things manually if you really want to do that, but There Be Dragons.
17:36 semiosis also the range is changing in 3.4
17:37 foster123 thank you...that's what I thought as the documentation does not say if I could reuse ports
17:37 foster123 from the 24009- + bricks?
17:37 semiosis yes
17:38 foster123 does the 3.4 documentation say it
17:39 foster123 Thanks for the help..
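
(One way to open the ranges glusterbot lists above with iptables; the brick upper bound of 24050 is arbitrary headroom rather than a gluster-defined limit, and source/interface restrictions are left out:)

    iptables -A INPUT -p tcp -m multiport --dports 111,24007:24008,24009:24050,38465:38468 -j ACCEPT
    iptables -A INPUT -p udp --dport 111 -j ACCEPT     # rpcbind/portmap
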
17:40 anand joined #gluster
17:41 smellis joined #gluster
17:55 aravindavk joined #gluster
17:56 lalatenduM joined #gluster
18:02 bstr_work joined #gluster
18:11 harold_ joined #gluster
18:20 bronaugh left #gluster
18:21 aravindavk joined #gluster
18:23 tjikkun joined #gluster
18:23 tjikkun joined #gluster
18:25 StarBeast joined #gluster
18:40 lbalbalba joined #gluster
18:42 lbalbalba hi. im getting the error "The brick you.org:/export/brick1 is a mount point." when i try to create a volume. does anyone know what im doing wrong ? im following this guide : http://www.gluster.org/community/documentation/index.php/QuickStart
18:42 glusterbot <http://goo.gl/OEzZn> (at www.gluster.org)
18:44 andrewjsledge joined #gluster
18:45 harish joined #gluster
18:47 JoeJulian That error suggests that you already have a gluster volume mounted at /export/brick1
18:47 coagen joined #gluster
18:47 coagen hello
18:47 glusterbot coagen: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
18:47 coagen hi glusterbot
18:48 coagen i'm seeing very high cpu usage on a gluster client, is this a common problem?
18:51 lbalbalba JoeJulian: thats odd. yes, i have mounted xfs filesystems at /export/brick1, on both nodes, like the guide told me to. but i havent used any of them yet to create a gluster volume. should i unmount the xfs filesystems on both nodes before executing the gluster volume create command ?
18:51 JoeJulian even odder... I can't find "is a mount point" in the source.
18:51 phox JoeJulian / Supermathie: how does one "specify" this "_netdev" thing?
18:51 lbalbalba JoeJulian: huh ?
18:51 * phox totally not familiar
18:52 Supermathie phox: That's a mount option in fstab
18:53 JoeJulian lbalbalba: Ah, there it is. The current master says this which should be a little clearer: "The brick %s:%s is a "
18:53 JoeJulian "mount point. Please create a sub-directory "
18:53 JoeJulian "under the mount point and use that as the "
18:53 JoeJulian "brick directory. Or use 'force' at the end "
18:53 JoeJulian "of the command if you want to override this "
18:53 phox Supermathie: so just put it in the options column?
18:53 JoeJulian "behavior."
18:54 phox Supermathie: _cool_
18:54 lbalbalba JoeJulian: just got a email reply on the mail list. looks like the dir should be a subdir of the mountpoint, and not the mountpoint itself...
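
(In other words, something like the following on both nodes; the replica count and host names are assumed from the QuickStart guide being followed, not stated in the log:)

    mkdir -p /export/brick1/brick
    gluster volume create gv0 replica 2 \
        node01:/export/brick1/brick node02:/export/brick1/brick
    # newer releases also accept 'force' at the end to use the mount point
    # itself, as the message JoeJulian quoted says
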
18:58 lbalbalba JoeJulian: which still leaves me with the question of the naming. Doing /exports/brick1/brick1, /exports/brick2/brick2 seems silly (/exports/brick2 as mount point, subdir brick2 in the mountpoint), and so does doing /exports1, exports2 as the mount points with subdirs in them..
18:59 JoeJulian I like /data/{volume}/{some sort of hardware reference. I use a,b,c,d}/brick
18:59 Supermathie lbalbalba: I'm using /export/bricks/SASWWN/glusterdata
19:00 JoeJulian so all my /data/*/a are on /dev/sda, /data/*/b are on /dev/sdb, etc.
19:01 lbalbalba hrm. /exports/llvm2volname as the mountpoint, with 'brick' as the subdir (resulting in /exports/llvmvolname/brick) seems nice ...
19:02 JoeJulian btw... I use lvm also, but I create lv's that are pinned to individual hard drives. Makes failure easier.
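
(A sketch of that layout for one server; the VG/LV names, sizes and the xfs inode-size choice are assumptions, not taken from the log:)

    pvcreate /dev/sdb1 /dev/sdc1
    vgcreate gluster_vg /dev/sdb1 /dev/sdc1
    lvcreate -n myvol_a -L 900G gluster_vg /dev/sdb1   # this LV lives only on sdb1
    lvcreate -n myvol_b -L 900G gluster_vg /dev/sdc1   # and this one only on sdc1
    mkfs.xfs -i size=512 /dev/gluster_vg/myvol_a
    mkdir -p /data/myvol/a
    mount /dev/gluster_vg/myvol_a /data/myvol/a
    mkdir -p /data/myvol/a/brick                       # the directory handed to gluster
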
19:03 rotbeard joined #gluster
19:03 lbalbalba well /exports/sda1/brick works for me, too
19:03 lbalbalba thanks
19:09 Supermathie lbalbalba: Assuming that /exports/sda1 is always the same disk :)
19:10 Uzix joined #gluster
19:12 lbalbalba lbalbalba: yes, there's that assumption. i know it doesn't have to be that way, but naming your llvm2 volumes the same on all nodes can be done, and makes me have less of a headache.
19:12 lbalbalba Supermathie: yes, there's that assumption. i know it doesn't have to be that way, but naming your llvm2 volumes the same on all nodes can be done, and makes me have less of a headache.
19:14 lbalbalba Supermathie: so maybe doing /exports/brick1 as mountpoint, and the subdir the disk name (which then can be diff on each host) is cleaner when not using llvm2
19:14 lbalbalba Supermathie: host1 /exports/brick1/sda1 host2 /exports/brick1/sdf5
19:14 lbalbalba for the same brick
19:16 lbalbalba then the naming convention tells you immediately which physical disks or llvm volumes are part of a brick. i like it
19:17 MrNaviPacho joined #gluster
19:18 JoeJulian You could even just do a,b,c,d,etc that map to the hot-swap tray.
19:19 JoeJulian which, if I rename these again, is how I'm going to do it the next time.
19:21 NuxRo lbalbalba: "llvm" is a compiler, what you want is "lvm"
19:21 lbalbalba oh, sorry, doing too many things at the same  time. i meant lvm2, as ypu pointed out
19:22 brunoleon_ joined #gluster
19:22 jiffe99 joined #gluster
19:22 MrNaviPacho joined #gluster
19:23 jiffe99 so I am trying to figure out the best way to do this: for billing we pull disk usage reports on our users and we are currently using the quota system to get this information.  They don't have a quota it is set to 0 but we can get their disk usage by this method without having to do a du.  Is there some way I can do this with gluster?
19:24 JoeJulian still use quota on the brick and aggregate
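
(A very rough sketch of "quota on the brick and aggregate", assuming ordinary Linux user quotas are already enabled on each brick filesystem and passwordless ssh to the brick servers; hostnames and paths are placeholders:)

    for h in server1 server2; do
        ssh "$h" repquota -u /export/brick1
    done | awk '$2 ~ /^[+-]+$/ {used[$1] += $3}
                END {for (u in used) printf "%s\t%d KiB\n", u, used[u]}'
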
19:25 NuxRo maybe in the future we can use https://github.com/justinclift/glusterflow for this kind of info :-)
19:25 glusterbot Title: justinclift/glusterflow · GitHub (at github.com)
19:26 JoeJulian Excellent point. I was just writing a response that was far less ingenious. :D
19:26 MrNaviPacho JoeJulian Does "set lookup-unhashed off" apply immediately?
19:28 JoeJulian I've never done it... You can probably find out by looking in your client log.
19:28 JoeJulian I would expect it to.
19:28 MrNaviPacho It seems like it does but it's not helping.
19:29 jiffe99 yeah running quota on each distribute node isn't the best way but much better than du through the client
19:30 MrNaviPacho I have a php app that uses way too many file_exists calls and it's taking 8 seconds for the page to load with bluster.
19:30 MrNaviPacho gluster*
19:30 y4m4 joined #gluster
19:30 JoeJulian yikes
19:30 jiffe99 glusterflow looks interesting
19:31 JoeJulian The script itself is calling that? It's not an include or requires?
19:34 MrNaviPacho The script is, it's awful but I don't think there is a good way to change it.
19:41 jbrooks joined #gluster
19:44 _ilbot joined #gluster
19:44 Topic for #gluster is now  Gluster Community - http://gluster.org | Q&A - http://community.gluster.org/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - http://irclog.perlgeek.de/gluster/
19:46 m0zes joined #gluster
19:51 jag3773 joined #gluster
19:54 lbalbalba hrm. I keep seeing ' Active: active (exited)' when I do 'service glusterd status'. ps -ef still shows glusterd and glusterfs. but cmds like 'gluster volume info' fail: 'Connection failed. Please check if gluster daemon is operational.'.
19:55 jiffe99 Number of Bricks: 2 x 2 = 4
19:55 jiffe99 which 2 is replicas and which 2 is distribute?
19:56 Supermathie jiffe99: Distribute 2 x Replicate 2 = 4
19:56 lbalbalba oh, its now gone from ps too. but 'service glusterd status' still shows ' Active: active (exited)'
19:58 redsolar joined #gluster
19:59 andreask joined #gluster
20:03 lbalbalba oh. i wiped /var/lib/glusterd/ by running the tests on one of the nodes. nevermind.
20:11 lbalbalba crap. i wiped /var/lib/glusterd on both nodes, but still get this error when trying to re-create the volume 'volume create: glustervol02: failed: /export/glusterfslv2/brick is already part of a volume'. Is this data stored in other locations than  /var/lib/glusterd/  ?
20:11 lbalbalba of course ''gluster volume info' shows 'No volumes present'
20:11 lbalbalba :(
20:12 jiffe99 I just thought of something
20:13 jiffe99 this isn't going to work the way it is setup anyway
20:13 jiffe99 mail and web both share the same filesystems on the gluster side
20:15 andreask joined #gluster
20:16 andreask joined #gluster
20:20 nueces joined #gluster
20:23 lbalbalba ah. its in the mountpoint subdir, '/export/glusterfslv2/brick' in my case. 'rm -rf /export/glusterfslv2/brick && mkdir /export/glusterfslv2/brick' fixed it, i can now re-create stuff
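
(If the brick directory cannot simply be wiped and recreated, the same "already part of a volume" state can usually be cleared by stripping gluster's metadata from it instead; run on each brick, using this example's path:)

    setfattr -x trusted.glusterfs.volume-id /export/glusterfslv2/brick
    setfattr -x trusted.gfid /export/glusterfslv2/brick
    rm -rf /export/glusterfslv2/brick/.glusterfs
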
20:27 andrewjs1edge joined #gluster
20:31 pithagorians joined #gluster
20:34 pithagorians does gluster have some cache that makes df -h show the same used space even though the deletion script estimates some tens of GBs of images to be deleted?
20:44 avati pithagorians, gluster does not cache df related variables
20:47 pithagorians avati: does it cache something related to used space?
20:54 lbalbalba exactly. how much / what info is cached
20:54 lbalbalba perhaps thats a question for the -devel channel ?
20:56 tc00per joined #gluster
21:01 redshirtlinux left #gluster
21:03 tc00per In reference to the mailing list thread/message at... http://www.gluster.org/pipermail/gluster-users/2013-May/036059.html
21:03 tc00per I'm wondering if anybody knows of documentation for 'moving' volumes from /dir/mountpoint to /dir/mountpoint/subdir 'safely'. Nothing found in a quick search...
21:03 tc00per If not then any thoughts?
21:03 tc00per My system have very little data so I'm able to try/fail fairly easily. It would be nice to lower the probability of failure at the start though... :)
21:03 glusterbot <http://goo.gl/YKr8C> (at www.gluster.org)
21:05 lbalbalba crap. migrations suck. backup / delete old / restore to new location ?
21:10 lbalbalba or: mkdir /dir/mountpoint/subdir on all nodes, and then
21:10 lbalbalba mv /dir/mountpoint to /dir/mountpoint/subdir on one node
21:10 lbalbalba and let self healing take care of thing ?
21:11 lbalbalba oh, i dunno
21:19 lbalbalba it would be great if it could be done without downtime
21:21 pithagorians the problem is i delete files from gluster partition, but the df -h doesn't show me that the space is free up
21:21 pithagorians and i'm wondering if it's something related to gluster
21:22 lbalbalba pithagorians: from where do you delete the files ?
21:22 pithagorians of course from client
21:22 semiosis good answer
21:22 lbalbalba just checking
21:23 pithagorians the gluster client mounted on the same server as the gluster server
21:23 pithagorians i see
21:24 pithagorians i know if i delete them on gluster server partition side the effect will be weird
21:24 lbalbalba client and server on the same system ? that may confuse the cache, maybe ? do you do df from the client as well ?
21:24 pithagorians from other clients - same
21:25 lbalbalba does 'sync' on the server change anything ?
21:25 pithagorians what do you mean by sync?
21:26 lbalbalba man sync "sync - flush file system buffers" need to run as root
21:26 pithagorians also, maybe relevant, i have performance.cache-size: 512MB
21:27 pithagorians same after sync :|
21:28 lbalbalba odd. well, that were all my good ideas. i havent got anything else
21:30 lbalbalba maybe ask about this on the -dev channel, or the gluster-devel mailing list. maybe the devs know more
21:31 semiosis this is the right channel
21:31 semiosis devs are here too
21:31 lbalbalba oops. sorry
21:31 semiosis the -dev channel (and -devel mailing list) are for discussion about the development of glusterfs
21:32 semiosis this channel (and the gluster-users ML) are for user support
21:32 lbalbalba ok
21:32 duerF joined #gluster
21:32 semiosis if a dev were around, and able to help, they would do so here
21:32 semiosis you also get the benefit of many other users in here, who may also be able to help -- this is the place
21:33 lbalbalba point taken
21:34 semiosis tc00per: you could experiment with replace-brick commit force, such as explained in the ,,(replace) link for changing hostname...
21:34 glusterbot tc00per: Useful links for replacing a failed server... if replacement server has different hostname: http://goo.gl/4hWXJ ... or if replacement server has same hostname:
21:34 glusterbot http://goo.gl/rem8L
21:34 semiosis tc00per: 'experiment' being a key word though
21:35 semiosis tc00per: seems likely you could move the data on the backend filesystem yourself, then use the replace-brick commit force to update gluster volume config.  just an idea though, not sure if it will work.  if you are successful though, please let us know
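
(The shape of that experiment, with invented volume/host names; if the data is moved on the backend, the .glusterfs directory has to move with it, and a scratch volume is the place to try this first:)

    gluster volume replace-brick myvol \
        server1:/dir/mountpoint server1:/dir/mountpoint/subdir commit force
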
21:36 semiosis pithagorians: when you delete a file (the normal way, through a client mount point) its space should be freed.  if that's not happening, then there's probably a problem.
21:37 semiosis pithagorians: idk what could cause that, but standard troubleshooting would be to unmount your client, truncate its log, remount the client, and see if the file is really gone.  delete again if necessary, and check client log file.
21:37 semiosis or if you cant unmount the client, just make a new client mount somewhere else, to check
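
(Those troubleshooting steps as commands; mount point, server and volume names are examples, and the client log file name is derived from the mount path, so adjust it to match:)

    umount /mnt/gv0
    : > /var/log/glusterfs/mnt-gv0.log         # truncate the client log
    mount -t glusterfs server1:/gv0 /mnt/gv0
    ls /mnt/gv0/path/that/was/deleted          # is the file really gone?
    df -h /mnt/gv0
    less /var/log/glusterfs/mnt-gv0.log
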
21:40 pithagorians https://gist.github.com/anonymous/5579817
21:40 glusterbot Title: gist:5579817 (at gist.github.com)
21:40 pithagorians almost same free space as i see with df
21:43 lbalbalba ... what cmd makes you see the additional free space ?
21:43 semiosis pithagorians: that doesnt help us diagnose your problem -- or even suggest that a problem exists.  try the troubleshooting steps i recommend
21:43 pithagorians lbalbalba: df -h
21:44 lbalbalba oh im sorry, guess i misunderstood this :                   <pithagorians> almost same free space as i see with df
21:48 lbalbalba pithagorians: so does df or does df not show you the free space ? ?
21:48 pithagorians it shows the free space
21:48 pithagorians but it doesn't show it relevant to my actions
21:48 pithagorians i delete images
21:48 tc00per semiosis: Thanks, I'll add that to the top of the list of things to try/fail
21:48 pithagorians but free space doesn't change
21:50 semiosis tc00per: aw nuts, that article i wanted you to see is 404 :( /cc JoeJulian
21:59 jag3773 joined #gluster
22:02 helloadam joined #gluster
22:49 portante|ltp joined #gluster
23:20 robos joined #gluster
23:35 jclift_ joined #gluster
23:46 jag3773 joined #gluster
