IRC log for #gluster, 2013-01-18


All times shown according to UTC.

Time Nick Message
00:00 a2 are they healed already? maybe two self-heal instances raced each other to heal the file, and the one which lost the race complained that it could not find sinks by the time it got the opportunity to heal
00:00 JoeJulian killed the glustershd processes and restarted glusterd. Now it's connected. The brick log hasn't shown a connection from the local hostname in days.
00:01 JoeJulian And no. They were not healed.
00:03 JoeJulian I'll watch this. I think it's entirely possible that the glustershd daemon hasn't been killed and restarted since 3.3.0.
00:03 JoeJulian If that's true, then it's a known bug.
00:05 JoeJulian Yeah, that's looking likely. glustershd doesn't stop with glusterd. The rpm shuts down glusterd and glusterfsd then restarts glusterd. This would have left glustershd and nfs (if I hadn't disabled nfs on all my volumes) running.
00:06 * JoeJulian is going to file a bug
00:06 glusterbot http://goo.gl/UUuCq
00:06 JoeJulian Or would if bz wasn't down again!
00:08 JoeJulian kkeithley: Please add a kill for /var/lib/glusterd/glustershd/run/glustershd.pid and /var/lib/glusterd/nfs/run/nfs.pid to the glusterd::stop init script (if those processes are named glusterfs).
00:08 JoeJulian .. and however that's done in systemd.
00:09 * JoeJulian guesses he should learn systemd soon.
00:09 JoeJulian Nah, I've changed my mind. glustershd yes. nfs should have its own init script.
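
A minimal sketch of the stop-time cleanup JoeJulian is asking for, assuming a SysV-style init script; the two pid-file paths are the ones he names above, everything else is illustrative:

    # hypothetical addition to the init script's stop() function: reap the
    # self-heal daemon (and gluster NFS, where used) that glusterd leaves behind
    for pidfile in /var/lib/glusterd/glustershd/run/glustershd.pid \
                   /var/lib/glusterd/nfs/run/nfs.pid; do
        [ -f "$pidfile" ] || continue
        pid=$(cat "$pidfile")
        # only signal it if the pid still belongs to a glusterfs process
        if [ -n "$pid" ] && grep -q glusterfs "/proc/$pid/comm" 2>/dev/null; then
            kill "$pid"
        fi
    done
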
00:14 a2 hmm
00:18 * johnmark wonders whether JoeJulian actually needs anyone else to have deep conversations
00:18 JoeJulian hehe, nope
00:18 johnmark :)
00:18 JoeJulian I argue with myself all the time.
00:19 phox left #gluster
00:19 JoeJulian I usually win out in the end though.
00:19 johnmark haha :P
00:19 * johnmark has to drive home
00:19 RicardoSSP joined #gluster
00:20 JoeJulian I'm kind-of surprised you're not in KS.
00:31 elyograg I have to say it ... I don't think he's in Kansas anymore.
00:31 JoeJulian Mwahaha! :)
00:32 runlevel1 joined #gluster
00:48 bennyturns joined #gluster
00:51 hagarth joined #gluster
00:57 glusterbot New news from newglusterbugs: [Bug 901332] glustershd and nfs services are not restarted during an upgrade <http://goo.gl/Fjdww>
01:04 kevein joined #gluster
01:21 wushudoin joined #gluster
01:28 kevein joined #gluster
01:32 bala1 joined #gluster
01:35 ultrabizweb joined #gluster
01:39 abkenney joined #gluster
01:39 ninkotech_ joined #gluster
01:46 ultrabizweb joined #gluster
01:47 nik__ joined #gluster
02:07 polfilm joined #gluster
03:01 bharata joined #gluster
03:09 sjoeboo joined #gluster
03:14 sgowda joined #gluster
03:20 chirino joined #gluster
03:22 hagarth joined #gluster
03:31 lala_ joined #gluster
03:43 hagarth joined #gluster
04:07 shylesh joined #gluster
04:09 chirino joined #gluster
04:18 hagarth joined #gluster
04:21 sripathi joined #gluster
04:24 vpshastry joined #gluster
04:42 jag3773 joined #gluster
05:13 jim` joined #gluster
05:24 sripathi joined #gluster
05:32 hagarth joined #gluster
05:38 raghu joined #gluster
05:39 rastar joined #gluster
05:54 mohankumar joined #gluster
05:58 glusterbot New news from resolvedglusterbugs: [Bug 896410] gnfs-root-squash: write success with "nfsnobody", though file created by "root" user <http://goo.gl/F764A> || [Bug 896411] gnfs-root-squash: read successful from nfsnobody for files created by root <http://goo.gl/6o6pf>
06:09 ramkrsna joined #gluster
06:09 ramkrsna joined #gluster
06:19 hagarth joined #gluster
06:38 ngoswami joined #gluster
06:43 romero joined #gluster
06:52 bzf130_mm joined #gluster
07:01 ngoswami joined #gluster
07:01 vimal joined #gluster
07:07 Nevan joined #gluster
07:15 jtux joined #gluster
07:18 hagarth joined #gluster
07:50 sripathi joined #gluster
07:51 jtux joined #gluster
07:54 sripathi joined #gluster
07:58 dobber joined #gluster
08:01 ekuric joined #gluster
08:01 guigui joined #gluster
08:04 ctria joined #gluster
08:09 Joda joined #gluster
08:15 raven-np1 joined #gluster
08:16 sripathi joined #gluster
08:18 eurower joined #gluster
08:18 eurower left #gluster
08:24 tjikkun_work joined #gluster
08:28 andreask joined #gluster
08:29 deepakcs joined #gluster
08:35 shireesh joined #gluster
08:41 sripathi joined #gluster
08:46 bulde joined #gluster
08:46 ramkrsna joined #gluster
08:46 ramkrsna joined #gluster
08:49 vpshastry joined #gluster
08:56 sgowda joined #gluster
08:57 bala joined #gluster
09:00 sripathi joined #gluster
09:19 gbrand_ joined #gluster
09:19 manik joined #gluster
09:27 webwurst joined #gluster
09:33 Norky joined #gluster
09:35 sgowda joined #gluster
09:35 bulde joined #gluster
09:56 Azrael808 joined #gluster
10:02 puebele joined #gluster
10:12 tryggvil joined #gluster
10:15 bauruine joined #gluster
10:20 puebele joined #gluster
10:22 DaveS_ joined #gluster
10:29 mgebbe joined #gluster
10:29 sripathi joined #gluster
10:38 jjnash joined #gluster
10:38 nightwalk joined #gluster
10:44 raven-np joined #gluster
11:09 duerF joined #gluster
11:23 vpshastry joined #gluster
11:30 andreask joined #gluster
11:35 duerF joined #gluster
11:36 hagarth joined #gluster
11:54 rgustafs joined #gluster
11:56 shireesh joined #gluster
12:05 bauruine joined #gluster
12:07 DataBeaver I have a glusterfs client that for some reason can't see the full contents of some directories on the server.  Any idea what could be wrong or what I could try?  I already unmounted and remounted the filesystem, as well as restarted the server.
12:08 RicardoSSP joined #gluster
12:14 ndevos DataBeaver: that can happen when data is added/modified on the bricks directly, and not through a glusterfs mountpoint
12:14 ndevos always only add/modify data through a glusterfs mountpoint
12:15 ndevos ,,(targetted self-heal) ?
12:15 glusterbot I do not know about 'targetted self-heal', but I do know about these similar topics: 'targeted self heal'
12:15 ndevos ~targeted self heal | DataBeaver
12:15 glusterbot DataBeaver: http://goo.gl/E3b2r
12:21 vimal joined #gluster
12:23 sripathi joined #gluster
12:28 hagarth joined #gluster
12:40 shireesh joined #gluster
12:42 duerF joined #gluster
12:45 DataBeaver ndevos: Even if I restart the server?
12:45 wica joined #gluster
12:46 bala joined #gluster
12:47 dustint joined #gluster
12:49 vpshastry joined #gluster
12:55 sripathi left #gluster
13:14 stickyboy joined #gluster
13:16 stickyboy I've got an ext4 volume where I'm using extended ACLs.  I will be putting the data from this volume on a glusterfs share soon.  Will the extended ACLs be stored properly?
13:18 edward1 joined #gluster
13:22 shireesh joined #gluster
13:26 stickyboy I see for sure that I need the server-side brick filesystem to be mounted with acl support.
13:26 stickyboy I'll read up on it more as I do my test implementation.
13:26 stickyboy :)
13:26 stickyboy left #gluster
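
Roughly what the two acl-enabled mounts stickyboy mentions look like; the device, brick path and volume name are placeholders, and the client-side acl option is, as far as I know, available from 3.3 on:

    # server side: the brick filesystem must be mounted with POSIX ACL support
    mount -o acl /dev/sdb1 /export/brick1      # or add "acl" to the fstab entry
    # client side: have the native client honour ACLs as well
    mount -t glusterfs -o acl server1:/myvol /mnt/myvol
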
13:28 ndevos DataBeaver: yes, the extended attributes that are set on the files on the bricks do not appear suddenly when restarting the server
13:29 DataBeaver ndevos: Okay, good to know.
13:40 Joda joined #gluster
13:49 nullck joined #gluster
13:50 rwheeler joined #gluster
13:54 lala joined #gluster
13:54 Joda1 joined #gluster
14:02 ctria joined #gluster
14:04 ctria joined #gluster
14:09 abkenney joined #gluster
14:13 sashko joined #gluster
14:16 lala joined #gluster
14:25 edward1 joined #gluster
14:27 x4rlos Hi all. Is ext4 a no-no for use with gluster? I hear a few people are having trouble. I am on wheezy for my servers. http://www.gluster.org/category/rants-raves/ Suggests that the patch causing the trouble isn't applied yet? gluster: 3.3.1
14:27 glusterbot Title: GlusterFS bit by ext4 structure change | Gluster Community Website (at www.gluster.org)
14:29 johnmark x4rlos: correct. although it depends on the kernel version you're using
14:29 johnmark I think it starts with Linux kernel 3.4 and higher.
14:32 aliguori joined #gluster
14:40 semiosis mainline kernel 3.3.0, but the new ext4 code has been backported to 2.6 rh/cent kernels too
14:40 the-me semiosis: btw I still do not see any progress on a backported security patch..
14:41 x4rlos johnmark: Even using 3.3.1?
14:42 x4rlos ah, sorry, didnt read all the way :-) cheers semiosis:
14:43 semiosis the-me: i'm not a c programmer, what should I do about it?
14:43 x4rlos Is there a suggested/preferred safe filesystem option for use with gluster?
14:44 semiosis x4rlos: xfs with inode size 512
14:44 gbrand_ joined #gluster
14:44 semiosis x4rlos: i've also heard inode size 1024 if using UFO
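
For reference, a sketch of creating a brick along those lines; the device and mount point are placeholders:

    # XFS brick with 512-byte inodes, as recommended above
    mkfs.xfs -i size=512 /dev/sdb1        # use -i size=1024 if running UFO
    mount /dev/sdb1 /export/brick1
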
14:44 the-me semiosis: kickass ppl :p
14:44 semiosis hehehe
14:54 stopbit joined #gluster
14:56 daMaestro joined #gluster
14:58 plarsen joined #gluster
15:00 glusterbot New news from newglusterbugs: [Bug 901568] print the time of rebalance start when rebalance status command is issued <http://goo.gl/7w0I3>
15:00 duerF joined #gluster
15:04 johnmark the-me: I asked our devs on the bug tracker
15:04 johnmark I will ask them again
15:04 johnmark kkeithley: ^^^
15:04 johnmark kkeithley: can you help?
15:05 kkeithley er, help with which?
15:10 sjoeboo_ joined #gluster
15:14 bugs_ joined #gluster
15:16 the-me kkeithley: a backported patch for the CVE-2012-4417 issue for 3.2.7
15:17 m0zes fudcon!
15:17 chirino joined #gluster
15:21 kkeithley just so I'm clear, are you asking for a 3.2.8 release? Or just want a semi-officially blessed patch? Or fedora, epel, ubuntu ppa releases of 3.2.7 with said patch?
15:23 x4rlos semiosis: thanks for that.
15:24 the-me kkeithley: for me (Debian) a patch would be enough
15:24 the-me since the one from git doesn't apply to the 3.2.x series :(
15:30 obryan joined #gluster
15:30 duerF joined #gluster
15:44 wushudoin joined #gluster
15:50 jag3773 joined #gluster
15:52 TekniQue joined #gluster
15:52 TekniQue one of my bricks isn't running
15:52 TekniQue how do I remedy this?
15:53 m0zes kill glusterd and start it again is the easiest.
15:54 m0zes if you use the init script, make sure it doesn't kill glusterfsd as well
15:54 semiosis aka service glusterd restart
15:54 semiosis oh yeah that
15:55 m0zes gentoo is a current offender of that.
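
The restart-and-verify sequence being described, sketched with a placeholder volume name (on init scripts that also stop glusterfsd, expect the bricks to blip too):

    service glusterd restart        # restart the management daemon
    gluster volume status myvol     # the brick should report Online: Y again
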
15:56 TekniQue ok I restarted
15:56 TekniQue and the brick came alive
15:56 TekniQue but the brick has died again :(
15:57 TekniQue wtf
15:57 gauravp Hi .. I have a replicated gluster volume with a brick that I want to mount at a different location on each of the gluster servers. Is there an easy way to do that or do I have to blow away the volume and start over?
15:57 TekniQue what would cause a brick process to die like this?
15:58 ndevos TekniQue: you hopefully find that answer in /var/log/glusterfs/bricks/...
15:59 TekniQue the brick configuration is corrupted
16:00 TekniQue http://pastie.org/5719886
16:00 glusterbot Title: #5719886 - Pastie (at pastie.org)
16:00 TekniQue there's no data on that brick
16:01 TekniQue can I somehow re-create it?
16:02 ndevos TekniQue: "Different dirs /bricks/bt0 (256/22) != /bricks/bt0/.glusterfs/00/00" is the problem... something might have deleted the .glusterfs directory?
16:02 TekniQue yes
16:03 TekniQue the .glusterfs directory is probably left over from a previous configuration
16:04 TekniQue ok
16:04 TekniQue I deleted the .glusterfs directory from the brick path
16:04 TekniQue and restarted gluster
16:04 TekniQue it's now healing
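
For anyone hitting the same thing, the places TekniQue looked are roughly these (brick log file names mirror the brick path, so the name below is a guess); deleting .glusterfs was only safe here because the brick held no data:

    # the brick process logs why it refused to start
    less /var/log/glusterfs/bricks/bricks-bt0.log
    # the stale metadata directory that tripped the "Different dirs" check
    ls -a /bricks/bt0/.glusterfs
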
16:06 errstr with gluster's write-back replication, does this mean that the origin host ACKs the write when all data is on the wire or when a TCP connection is established or when the remote host receives all the data in RAM?
16:07 nueces joined #gluster
16:08 webwurst left #gluster
16:12 errstr ah, the write is complete once the changelog is decremented on all replica nodes... so the kernel-filesystem level will ACK (which could be async)
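
The changelog errstr refers to lives in extended attributes on each replica's copy of the file; one way to peek at it on a brick (the path is a placeholder, and the attribute names follow the pattern trusted.afr.<volume>-client-N):

    # AFR's pending-operation counters; all zeros means every replica
    # has acknowledged the write
    getfattr -m trusted.afr -d -e hex /bricks/bt0/path/to/file
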
16:14 TekniQue any tuning tips for gluster to speed up stat?
16:14 TekniQue it takes a while to do a directory index
16:15 obryan left #gluster
16:15 ultrabizweb joined #gluster
16:19 gauravp Hi .. I have a replicated gluster volume with a brick that I want to mount at a different location on each of the gluster servers. Is there an easy way to do that or do I have to blow away the volume and start over?
16:21 inodb joined #gluster
16:25 gmcwhistler joined #gluster
16:25 Humble joined #gluster
16:31 chouchins joined #gluster
16:42 sjoeboo_ joined #gluster
16:53 daMaestro joined #gluster
17:01 sashko joined #gluster
17:02 m0zes joined #gluster
17:09 mohankumar joined #gluster
17:12 tryggvil joined #gluster
17:20 JoeJulian gauravp: You can mount the volume as many times and places as you want. (or am I misreading the question?)
17:21 Mo___ joined #gluster
17:22 JoeJulian TekniQue: Some ways to speed up directory listings that do a stat on every entry would be: switch to using infiniband
17:22 JoeJulian buy lower latency ethernet switches
17:22 JoeJulian Don't use replicated volumes.
17:24 JoeJulian If you're more concerned with lookup() performance than actual throughput or redundancy, mount via nfs to utilize the kernel fscache.
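
A sketch of the NFS mount JoeJulian is suggesting; Gluster's built-in NFS server speaks NFSv3 over TCP, and the host and volume names are placeholders:

    # the kernel NFS client caches attributes, which speeds up stat-heavy
    # directory listings at the cost of strict coherency
    mount -t nfs -o vers=3,proto=tcp,nolock server1:/myvol /mnt/myvol
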
17:25 hagarth joined #gluster
17:25 omkar_ joined #gluster
17:25 omkar_ Hi Guys
17:26 omkar_ Need Help regarding gluster psql DB
17:28 JoeJulian You might want to ask the question you want answered. All I can really do so far is offer sympathy.
17:30 omkar_ so I basically installed gluster using ovirt
17:30 JoeJulian the-me: To have a patch backported, I suspect the best way to make that happen would be to file a bug report.
17:30 glusterbot http://goo.gl/UUuCq
17:31 johnmark JoeJulian: ah, there is a bug report
17:31 * johnmark tries to hunt it down
17:32 johnmark omkar_: oh interesting. not sure what that has to do with postgres though
17:32 JoeJulian Seems like we have to file bug reports asking for bugs to be fixed for the previous branches, or they only get applied to master.
17:33 johnmark aha, ok
17:35 y4m4 joined #gluster
17:39 x4rlos On that subject, was wondering if the debian packages had been updated recently :-/
17:48 JoeJulian Isn't semiosis only building 3.3?
17:52 semiosis huh?
17:54 semiosis JoeJulian: johnmark: Bug 856341
17:54 glusterbot Bug http://goo.gl/9cGAC high, medium, ---, security-response-team, CLOSED ERRATA, CVE-2012-4417 GlusterFS: insecure temporary file creation
17:54 gauravp JoeJulian: actually meant the brick mountpoint .. I was reading this post http://www.mail-archive.com/gluster-users@gluster.org/msg10657.html and i see the pros of adding each brick on a server as individual lv rather than concat, so i'll have to redo my mount names to add provisions for additional phys disks
17:54 glusterbot <http://goo.gl/w7k7x> (at www.mail-archive.com)
17:55 x4rlos Re: https://bugzilla.redhat.com/show_bug.cgi?id=895656
17:55 glusterbot <http://goo.gl/ZNs3J> (at bugzilla.redhat.com)
17:55 glusterbot Bug 895656: unspecified, unspecified, ---, csaba, NEW , geo-replication problem (debian) [resource:194:logerr] Popen: ssh> bash: /usr/local/libexec/glusterfs/gsyncd: No such file or directory
17:55 x4rlos "Will work on this once more info is available?"
17:55 x4rlos What info does he need?
17:56 theron joined #gluster
17:56 x4rlos The symlink works, but it's a bit hacky/dirty.
17:57 x4rlos Wouldn't it be better for gluster to point to a more local file, and have the package create the symlink in the appropriate place depending on the package? Rather than assume it's a redhat-variant and then create a symlink?
17:58 * x4rlos wonders if that made sens
17:58 * x4rlos ....e
17:58 kkeithley x4rlos: Louis Zuckerman is semiosis, he's a volunteer, not a Red Hat employee.
17:59 semiosis a ,,(volunteer)
17:59 glusterbot A person who voluntarily undertakes or expresses a willingness to undertake a service: as one who renders a service or takes part in a transaction while having no legal concern or interest or receiving valuable consideration.
17:59 kkeithley The real fix is in the works.
17:59 daMaestro joined #gluster
18:00 x4rlos that's cool, i don't mean to cause offence or anything. I would like to understand more about how this works. And help if i can.
18:02 zaitcev joined #gluster
18:02 ThunderTree joined #gluster
18:11 gauravp JoeJulian: So I have a LV lv_data allocated from a whole disk partition (sda1) and mounted at /exports/brick/data. This is then used as a brick in a gluster volume. Now when I add sdb1, I was planning to concat the second disk to lv_data, but after reading that thread, i understand the benefit of instead creating a second LV and adding a second brick to the gluster volume. To do this, I would have to mount sda1 at (say) /exports/bricks
18:15 semiosis while we're on the subject of backporting bugs... FyreFoX has been looking for bug 887098 to get backported into 3.3
18:15 glusterbot Bug http://goo.gl/QjeMP urgent, high, ---, vshastry, POST , gluster mount crashes
18:16 semiosis JoeJulian: agree it does seem like we have to ask (often) for bugfix releases of 3.2 and 3.3
18:17 semiosis JoeJulian: johnmark: anyone: would making a page on the community wiki with a "wish list" of patch backports & maintenance releases be helpful?
18:17 RicardoSSP joined #gluster
18:17 RicardoSSP joined #gluster
18:18 semiosis where people can add their request for a particular bugfix in master to be backported to release-3.2 or -3.3 or both & a new release made?
18:20 msgq joined #gluster
18:24 semiosis well... http://www.gluster.org/community/documentation/index.php/Backport_Wishlist
18:24 glusterbot <http://goo.gl/6LCcg> (at www.gluster.org)
18:30 * semiosis gbtw
18:31 hagarth semiosis: http://review.gluster.org/#change,4395
18:31 glusterbot Title: Gerrit Code Review (at review.gluster.org)
18:34 semiosis hagarth: \o/
18:35 semiosis any word on a 3.3.2 release?
18:37 hagarth semiosis: is there anything else you are looking for in 3.3.2?
18:38 semiosis not me personally, i just put together that wiki page based on what i remember people asking for recently
18:39 hagarth the wiki page is cool, it will help in figuring out what backports are needed
18:39 semiosis glad to hear that :)
18:39 johnmark semiosis: sweet!
18:39 * johnmark <3 's semiosis
18:40 hagarth since we haven't heard many complaints with 3.3.0 or 3.3.1, we have not set out backporting (m)any fixes
18:40 semiosis that makes sense, i appreciate how important feedback is
18:41 johnmark ps - I'm also arranging some docs in an Arch section: http://www.gluster.org/community/documentation/index.php/Arch
18:41 glusterbot <http://goo.gl/4LlkI> (at www.gluster.org)
18:41 johnmark just need to start arranging in a way that makes sense
18:41 semiosis johnmark: cool!
18:41 hagarth cool!
18:41 johnmark semiosis: yeah, we've only been asked for that a thousand times :)
18:42 semiosis lol, so true
18:42 hagarth johnmark: how about a prioritization board in trello?
18:43 sjoeboo_ joined #gluster
18:44 johnmark hagarth: that's something I spoke to rich about yesterday
18:44 johnmark if it helps, I say do it
18:45 hagarth johnmark: think so, will setup something over the weekend
18:45 johnmark hrm... apparently one of our SA's is about to post an SNMP monitoring script
18:45 johnmark hagarth: ok
18:45 * johnmark wonders how many people care about SNMP
18:46 elyograg being able to monitor gluster in opennms would be pretty awesome.
18:46 kkeithley hagarth: you want a whole board, or should it just be a card on the Refactoring Backlog board?
18:47 hagarth kkeithley: we could have a new board for the community
18:50 johnmark hagarth: that's what I was thinking
18:53 johnmark elyograg: ok
18:53 johnmark agreed. and Nagios, and... <insert tool here>
18:55 kkeithley semiosis, the-me: Re: CVE-2012-4417 (a.k.a. bz 856341)  want to give   http://kkeithle.fedorapeople.org/g327.patch a spin?
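
A rough sketch of test-driving that patch against a 3.2.7 source tree; the tarball name and -p level are assumptions:

    wget http://kkeithle.fedorapeople.org/g327.patch
    tar xf glusterfs-3.2.7.tar.gz && cd glusterfs-3.2.7
    patch -p1 --dry-run < ../g327.patch     # check it applies cleanly first
    patch -p1 < ../g327.patch
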
18:57 elyograg don't know that anyone will care, but you can hear what I'm listening to.  http://www.kcqnutah.com/
18:57 glusterbot Title: KCQN Utah - The Next Wave of New Wave (at www.kcqnutah.com)
18:58 dbruhn joined #gluster
18:58 dbruhn Does anyone have an idea why my rebalance operations would continually just stall out?
19:03 semiosis kkeithley: looks like i'm going to be busy with open source this weekend :)
19:03 semiosis i'll add that one to my TODO list.  thanks for the patch
19:22 daMaestro joined #gluster
19:31 tryggvil joined #gluster
19:37 dustint left #gluster
19:41 _br_ joined #gluster
19:43 chirino joined #gluster
19:49 johnmorr joined #gluster
19:50 _br_ joined #gluster
20:27 johnmorr i have a distributed (non-replicated) glusterfs mounted on several clients. i'm trying to make a directory on two of the clients, but mkdir returns EEXIST even though the directory doesn't exist
20:27 johnmorr on a third client, the directory creation is successful, and the directory looks fine on that host:
20:27 johnmorr drwxr-xr-x 2 jwm rc_admin 672 Jan 18 15:27 log
20:27 johnmorr but on the other two clients (the ones getting EEXIST):
20:27 johnmorr ?---------  ? ?        ?         ?            ? log
20:28 johnmorr when i do a directory listing of log's parent directory on the two 'broken' clients, i get some noise in the servers' brick logs about:
20:28 johnmorr [2013-01-18 20:27:42.430326] I [server3_1-fops.c:1707:server_stat_cbk] 0-hsph_gluster-server: 588585: STAT (null) (--) ==> -1 (No such file or directory)
20:28 daMaestro joined #gluster
20:29 omkar_ left #gluster
20:30 johnmorr if i remove the directory on the 'working' client, i can't find any files/directories named 'log' on any of the gluster servers' bricks
20:30 johnmorr any ideas on how to troubleshoot this further? i'm stumped at this point
20:32 jdarcy So the directory doesn't exist on *any* of the bricks and you still get EEXIST?
20:32 johnmorr jdarcy: it seems that way, yeah
20:33 johnmorr fwiw, this is with 3.3.1 on the clients and 3.3.0 on the servers
20:34 jdarcy Anything in the *server* logs when this occurs?
20:36 johnmorr i don't think so, but let me double check
20:37 johnmorr nothing in the servers' logs when i try to create the directory
20:38 johnmorr if i create the directory on the 'working' client and do a directory listing on one of the 'broken' clients, i get the STAT failure i pasted above
20:42 sjoeboo_ jdarcy: johnmark: johnmorr is one of the new guys on our team over here (we're taking over your channel!)
20:49 johnmark sjoeboo_: PHear
20:49 johnmark heh
20:50 * johnmorr smiles.
20:55 JoeJulian gauravp: Just got back. You can use replace-brick to change the mount path of a brick. What I do is kill the glusterfsd process for the brick that I'm moving, unmount, mount at the new path, then do the replace-brick with "commit force". The glusterfsd process is then started pointing at the new mountpoint.
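
Spelled out as commands, the procedure JoeJulian describes might look like this (volume name, host and paths are placeholders; 3.3 syntax):

    # 1. stop only the glusterfsd serving the old brick path (find it via ps|grep)
    kill <pid-of-that-glusterfsd>
    # 2. remount the underlying device at its new location
    umount /old/brick/path && mount /dev/sdb1 /new/brick/path
    # 3. point the volume at the new path; a glusterfsd is started for it
    gluster volume replace-brick myvol server1:/old/brick/path \
        server1:/new/brick/path commit force
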
20:59 JoeJulian hagarth: If I've reported a bug against 3.3, my expectation is that the bug fix will make it into 3.3. I wouldn't expect that I have to upgrade to 3.4 since every 3.x.0 has had some pretty significant new bugs.
21:02 jdarcy johnmorr: After you create the directory on the working client, can you check on all of the bricks and see if it's there?
21:02 JoeJulian johnmorr: Best practice is to ensure the servers are updated first. I don't know of any specific bugs that should cause that though.
21:04 sjoeboo_ johnmorr: i never realized those boxes were on 3.3.0, not 3.3.1 ...though i'm not surprised
21:09 jdarcy johnmorr: Oh, and where are my manners?  Welcome!
21:33 glusterbot New news from newglusterbugs: [Bug 895656] geo-replication problem (debian) [resource:194:logerr] Popen: ssh> bash: /usr/local/libexec/glusterfs/gsyncd: No such file or directory <http://goo.gl/ZNs3J>
21:34 johnmorr jdarcy: thanks!
21:35 johnmorr jdarcy: and if i create the directory on the working client, it *is* on all of the bricks
21:40 daMaestro joined #gluster
21:52 gauravp JoeJulian: Nice, so `service glusterfs stop` on one node, remount the device and then 'replace-brick; commit force' and then self heal will kick in to update all the hashes (i assume the hashes will all have to change since there's a new path?)
21:59 JoeJulian No, hashes shouldn't even have to change since they're all based on brick list offsets.
21:59 gauravp ah ok, i was under the impression they were based on filenames
21:59 JoeJulian And I don't do "service glusterfs stop" myself since I have like 60 bricks and I don't want them all dead at once. :D
22:00 JoeJulian You can look at them with 'getfattr -m . -d -e hex $brickpath/$filepath' to get a feeling for what they are.
22:01 gauravp thanks! yeah, makes sense, i'm still testing, so just 3 volumes. but it would make sense to do them one at a time
22:03 jdarcy johnmorr: If it is on all of the bricks, and seems normal from the one working client, then the problem is likely to be with the other clients.  Have you tried remounting from them?  Also, I guess it wouldn't hurt to dump the xattrs for the directory on each brick, see if there's anything funky there.
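
For that xattr dump, the getfattr incantation JoeJulian gave a few lines up works per brick; run it on every server and compare (the brick path is a placeholder):

    # look for a missing or mismatched trusted.gfid, or odd dht/afr entries
    getfattr -m . -d -e hex /export/brick1/path/to/parent/log
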
22:06 gauravp so getfattr returns trusted.gfid and trusted.glusterfs.volume-id ... was that command for determining the glusterfsd process to kill?
22:10 gauravp nvm, that was probably for someone else
22:13 JoeJulian To figure out the glusterfsd process to kill, I just use ps and grep.
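
Roughly, something like (the brick path is a placeholder):

    # each brick has its own glusterfsd, and the brick path shows up in its args
    ps aux | grep '[g]lusterfsd' | grep /exports/brick/data
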
22:22 gauravp Yup. You had also started helping me with a question yesterday, but I was away: My rsync to the replica 2 volume using gluster-fuse client from another server connected over GigE was running at ~30MB/s and you said there could be several variables ...
22:24 Humble joined #gluster
22:25 gauravp I was trying to understand how the writes from the gluster-fuse client work - does the data go from client to both nodes? And if so, I guess a limiting factor would be the theoretical max 120MB/s of GigE from the client, where each additional replica would further strain the throughput
22:29 daMaestro joined #gluster
22:29 jdarcy gauravp: Indeed.  Personally, I'm beginning to think that server-side replication would be preferable for many environments, because of just that issue.
22:33 koodough joined #gluster
22:34 hattenator joined #gluster
22:44 koodough joined #gluster
22:49 gauravp jdarcy: Ouch, so replica 4 would mean further halving of the speed I'm currently getting on writes. Good to know, now how do writes with NFS mount work?
22:51 jdarcy gauravp: If you were doing replica 4 (why?) that would mean one copy going from the client to the NFS server which is also a GlusterFS native client, then three copies going from there to the other replicas.
22:52 jdarcy gauravp: So you'd be getting 1/3 of your back-end bandwidth instead of 1/4 of your front-end (if the two are even different).
22:53 gauravp ok, (replica 4 just hypothetical), but even with replica 2, does nfs write to the nfs server, which then replicates to the other? is the replication in that case asynchronous? and finally, is there a way to get the gluster-fuse client to behave similarly
22:53 jdarcy gauravp: On the down side, your write latency would be higher.  So would your read latency, but you'd have better caching.
22:55 jdarcy Local replication (AFR) is always synchronous.  At the low levels it's possible to set up server-side replication (the I/O path can handle it) but there's no way to do it through the CLI and you're likely to end up in untested/unsupported territory.
22:56 gauravp jdarcy: I see. The reason i'm not worried about the back-end replication is that the two gluster servers are currently VMs on a single host, but even in a larger setup they would have less latency between them than from the client
22:57 gauravp So maybe like you said earlier, server-side replication could have its benefits
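
Back-of-the-envelope numbers for the trade-off discussed above, using the ~120 MB/s GigE figure mentioned earlier:

    client-side AFR, replica 2:  ~120 / 2 = ~60 MB/s write ceiling per client
    client-side AFR, replica 4:  ~120 / 4 = ~30 MB/s write ceiling per client
    NFS + server-side fan-out, replica 4:  one copy on the client link, the
        other three on the server's back-end link (~1/3 of back-end bandwidth)

rsync and FUSE overhead come out of those ceilings, so the ~30 MB/s observed for replica 2 sits below, but within sight of, the ~60 MB/s ceiling.
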
22:57 Humble joined #gluster
23:02 jdarcy If the VMs are on a single host, wouldn't the replicas be on the same actual storage?
23:13 gauravp i've got separate physical disks allocated to the two VMs which are then bricks that are replicated
23:14 gauravp not using the gluster volume as vm storage, the vms are on separate disks and are the gluster server nodes
23:37 raven-np joined #gluster
23:43 RicardoSSP joined #gluster
23:43 RicardoSSP joined #gluster
