
IRC log for #gluster, 2014-03-06


All times shown according to UTC.

Time Nick Message
00:00 elyograg trying to decipher, but this is where not knowing python may be a disadvantage. :)
00:01 JoeJulian S_ISREG(fmode) looks for regular files.
00:01 JoeJulian I'm pretty sure S_ISDIR is what you want.
00:01 JoeJulian ... and you can get rid of the getsize logic
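For reference, a minimal Python sketch of the check under discussion (the function and variable names are illustrative assumptions, not the actual script):

    import os
    import stat

    def is_directory(path):
        # hypothetical helper: S_ISREG(mode) would match regular files,
        # S_ISDIR(mode) matches directories, and no getsize() check is needed
        mode = os.lstat(path).st_mode
        return stat.S_ISDIR(mode)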
00:01 jiqiren joined #gluster
00:02 elyograg removing the 'if' line entirely and reducing the indent on the following lines looks like it has taken care of it.
00:03 vpshastry joined #gluster
00:04 elyograg very nice.  I can figure this out.
00:04 JoeJulian excellent
00:04 elyograg thank you.  you've been a lifesaver.
00:05 JoeJulian ... and just 10 Calories.
00:10 jbrooks JoeJulian: I eventually got it back in order :)
00:10 elyograg so that skips afr attributes that are all zeros?
00:10 jbrooks restarting glusterd everywhere was the start
00:11 elyograg did an strace, I seem to see a lot of entries with afr attributes, but looks like they are all zero.
00:11 wrale-josh joined #gluster
00:13 wrale-josh i'm using 3.5b3 .. i'd like to benchmark the on-wire compression options of gluster.  how can i compress on both sides, client and server, before sending to the opposite?  any tips?
00:15 velladecin when I do 'gluster vol stop <VOL>', reboot a server and while it's down i do 'gluster vol start <VOL>' then the gluster mount across the cluster is OK... interesting
00:16 velladecin when the rebooted server comes back up, it just 'puts' itself back into the cluster and no problems
00:16 semiosis when you stop a volume all bricks go down, clients can't operate
00:17 nightwalk joined #gluster
00:18 velladecin yep, my problem is that when I reboot a server the gluster mount across the cluster is unavailable. But when I stop the volume, reboot a server and while it's down I do volume start, then the gluster mount is OK
00:20 flrichar joined #gluster
00:21 haomaiwang joined #gluster
00:22 cp0k_ joined #gluster
00:23 tokik joined #gluster
00:24 velladecin rebooting a server and while it's down doing 'vol stop and vol start' makes the gluster mount just as inaccessible... it seems that the gluster reboot/shutdown is somehow screwing things up..?
00:25 elyograg how long are you waiting?  Gluster has a timeout that defaults to 42 seconds.  After that much time, it should be OK.
00:26 elyograg reducing the timeout is possible, but not advised because re-establishing everything is a very expensive operation, one that you don't want to do unless it *really* is down.
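For reference, the timeout elyograg describes is the network.ping-timeout volume option; a minimal sketch, with a placeholder volume name (reconfigured options show up in 'gluster volume info'):

    # 42 seconds is the default; lowering it is not generally advised, per the discussion above
    gluster volume set <VOL> network.ping-timeout 42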
00:28 JoeJulian additionally, the GlusterFS ,,(processes) should be stopped before the network does, preventing the ping-timeout from being a factor.
00:28 glusterbot The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/F6jqx for more information.
00:29 velladecin I've got web server and a loop that wgets from the web -> gluster mount. The server (VM) reboots in about 1 min or so, but during that 1min the mount across the cluster is unavailable. When the server comes back the mount starts working again
00:29 JoeJulian velladecin: Are you mounting via fuse (mount -t glusterfs) or nfs? Fuse is the only one that should survive a server reboot.
00:29 velladecin Ok, i'll make sure they're off before the network
00:29 velladecin mount -t glusterfs
00:29 velladecin fuse
00:30 JoeJulian Check your client log for clues.
00:30 elyograg there is a problem I've seen with CentOS where just rebooting a machine takes the network down before stopping the services.  That was on 3.3.x, not sure if it's been fixed in 3.4.x packages.
00:31 JoeJulian The init order has been right since 3.0 unless something else has changed it.
00:31 velladecin I'm using 3.4.2 and it would make sense when 'vol stop', reboot and while server down 'vol start' would work
00:31 elyograg if you were to shutdown instead of reboot, you'd probably find that the volume recovers after about as much time as it takes for the reboot to happen.
00:32 JoeJulian No, that wouldn't make sense. You can't start a volume unless all the bricks are present. I suppose it would be good to have a "force" option though ( there may be. Have you tried adding the word force to the end? )
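For reference, the CLI of this era does accept a trailing force keyword on volume start; a sketch, with a placeholder volume name:

    gluster volume start <VOL> force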
00:33 elyograg JoeJulian: my experience was that if I just typed 'reboot' then my volume was inaccessible for the timeout period.
00:33 velladecin yep, i'm using /sbin/shutdown -r now
00:33 velladecin I can actually start the volume while the server is down.
00:34 JoeJulian Not sure if "reboot" does the "init 6". I know "shutdown -r" does. Check /etc/rc.d/rc6.d and look at the K order.
00:34 velladecin W [socket.c:514:__socket_rwv] 0-management: readv failed (No data available)
00:34 velladecin W [socket.c:1962:__socket_proto_state_machine] 0-management: reading from socket failed. Error (No data available), peer (10.116.126.32:24007)
00:34 velladecin that's all from the logs :(
00:35 elyograg reboot is a binary file.  i wonder why.
00:35 JoeJulian K*glusterd K*glusterfsd should happen (numerically) before K*network
00:37 velladecin it does not in my case
00:37 velladecin K80 glusterfsd
00:37 velladecin S10 network
00:37 velladecin S20 glusterd
00:37 JoeJulian In rc6.d?
00:37 velladecin ls -l /etc/rc.d/rc5.d/ | grep -E 'network|gluster'
00:37 JoeJulian That's for booting up.
00:37 JoeJulian rc6.d is for reboot
00:37 JoeJulian and rc0.d is halt
00:38 velladecin aaah ok
00:39 velladecin K80 gluster*, K90network
00:40 elyograg looks right, but my experience was that it caused a timeout wait.  Haven't tried with 3.4.
00:43 velladecin I'll try to shut the server down instead of rebooting and see if gluster recovers after the timeout
00:44 JoeJulian Yay. Finally have my openstack deployment fully functional, puppetized and tested.
00:46 velladecin yes, gluster recovers after a while ~40secs or so
00:47 velladecin so basically, this is expected then? if server crashes (or becomes unavailable) there is a less then a minute outage window
00:47 JoeJulian That means that the K80glusterfsd isn't killing the bricks before the network connection is lost. The clients never receive the TCP FIN and closing handshake.
00:48 JoeJulian If the tcp connection is closed, the clients won't wait for it.
00:48 velladecin maybe a little 'sleep' when it takes down glusterfsd? From your answer I assume there should not be an outage?
00:49 elyograg if everything works right, no.  my experience is that it doesn't work right. :)
00:50 JoeJulian And/or check what else might be interfering in your rc.[06]. Maybe you're using some firewall thing that's blocking ports as its shut down?
00:50 elyograg if it suddenly disappears (power loss, switch failure, hardware fault), then the timeout would be expected.
00:50 velladecin :) I'll try adding sleep to glusterfsd. No, I've got local FW open 1 - 65335 for all members of the cluster
00:51 elyograg I don't use the firewall, but iptables gets shut down at K92 on CentOS 6.5, so I'd expect that to be fine.
01:03 nightwalk joined #gluster
01:09 glusterbot New news from newglusterbugs: [Bug 1073111] %post install warning for glusterfs-server that it can't find /etc/init.d/glusterfsd (on EL6) <https://bugzilla.redhat.com/show_bug.cgi?id=1073111>
01:11 velladecin yep, there are still plenty of glusterfsd processes when network stops...
01:17 cp0k_ joined #gluster
01:18 velladecin it seems that there is no '/var/lock/subsys/glusterfsd' file and so the K80 glusterfsd never actually executes
01:33 haomaiwa_ joined #gluster
01:34 sprachgenerator joined #gluster
01:36 harish joined #gluster
01:36 velladecin I think there is a bug in the glusterfsd init script. When I chkconfig add it then it runs at shutdown, but evaluates as 'glusterfsd is stopped' which is not true
01:39 glusterbot New news from newglusterbugs: [Bug 1073188] SIGHUP doesn't do anything for 'glusterfs' client processes anymore. <https://bugzilla.redhat.com/show_bug.cgi?id=1073188>
01:40 primechuck joined #gluster
01:41 primechuck joined #gluster
01:43 haomaiwang joined #gluster
01:43 sprachgenerator joined #gluster
01:45 haomaiwang joined #gluster
01:46 haomaiwa_ joined #gluster
01:49 nightwalk joined #gluster
02:02 velladecin no, the problem is the missing subsys file. If I do 'touch /var/lock/subsys/glusterfsd' then rebooting the server goes without problems. But the file is not created automagically...
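A minimal sketch of the workaround velladecin describes (automating it, e.g. from /etc/rc.local after the services start, is an assumption, not something verified here):

    # create the lock file so /etc/rc.d/rc actually runs the K80glusterfsd stop script at shutdown
    touch /var/lock/subsys/glusterfsd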
02:08 harish joined #gluster
02:20 andrewklau joined #gluster
02:20 prasanth joined #gluster
02:21 andrewklau Is there anything like garbd for gluster (garbd will fake a host in a mysql cluster for ie. quorum min 3)?
02:22 dusmant joined #gluster
02:23 elyograg andrewklau: there is a patch that someone created that adds arbiter nodes for quorum on replica2 volumes.  I don't know why it hasn't been accepted by the project, but it hasn't.  can't locate the bugid.
02:24 elyograg ah, here's something on it.  http://www.gluster.org/pipermail/gluster-users/2012-November/034747.html
02:24 andrewklau elyograg: aw, ok thanks for that. I remember reading about it somewhere but couldn't find it
02:24 glusterbot Title: [Gluster-users] Avoid Split-brain and other stuff (at www.gluster.org)
02:27 andrewklau hmm, but that will only enforce the min-2. So if one drops, you lose the write actions
02:28 elyograg if you want to guarantee you don't ever get split-brain, that's the only way.
02:29 sas_ joined #gluster
02:30 andrewklau well I was hoping to do more of a, if host B drops but host A and fake host are still up. Let the client write because we've got a min-2
02:30 elyograg the referenced blog post is good reading.  Jeff Darcy is the author of the arbiter concept.  Couldn't remember his name before seeing that. :)
02:30 elyograg he does not appear to be online here at the moment.
02:31 elyograg I have replica 2 volumes.  I'd really like this feature.  yesterday. :)
02:32 andrewklau It's common in mysql/mariadb galera, was hoping gluster had such a feature :(
02:38 elyograg enough info now to find the bug.  glusterbot, here we go.  bug 914804
02:38 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=914804 medium, unspecified, ---, jdarcy, POST , [FEAT] Implement volume-specific quorum
02:45 andrewklau elyograg: review in progress since 1 year ago :(
02:45 saurabh joined #gluster
02:50 andrewklau elyograg: hmm, now following this trail is where I'm lost. According to this unreviewed patch, wouldn't that mean the quorums are still based on cluster rather than volume. Meaning, an extra gluster-server as a peer would work as my arbiter node?
02:55 nightwalk joined #gluster
02:57 velladecin can you guys check the non/existence of /var/lock/subsys/glusterfsd file? According to me :) if it does not exist, and it does not for me without touch-ing it manually, there is an outage of gluster during a single server reboot. About 40secs, then everything comes back to normal. When the file does exist during reboot, then, there's no outage
02:58 velladecin this is for distributed/replicated setup, haven't tried any other
03:05 bharata-rao joined #gluster
03:09 kris joined #gluster
03:23 nightwalk joined #gluster
03:29 jporterfield joined #gluster
03:30 tomato joined #gluster
03:31 elyograg velladecin: i see the basic problem.  the glusterfsd init script is really only ever used to *STOP* glusterfsd processes.  The 'start' is never called, and the only place that the lockfile gets created is in the start() routine of the init script.  Do you want the honor of filing a bug (thank you in advance glusterbot for the link) or should I?
03:31 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
03:35 raghu joined #gluster
03:36 haomai___ joined #gluster
03:39 shylesh joined #gluster
03:41 velladecin elyograg: Yes, that's correct the start() never gets called, the file doesn't get created and so it does not attempt to stop() it when it's needed.
03:41 velladecin I can file the bug, but I will need a login?
03:43 velladecin yes I do... I'm working on it now
03:57 itisravi joined #gluster
03:58 tomato joined #gluster
04:03 velladecin https://bugzilla.redhat.com/show_bug.cgi?id=1073217
04:03 glusterbot Bug 1073217: medium, unspecified, ---, vraman, NEW , /var/lock/subsys/glusterfsd missing
04:08 nightwalk joined #gluster
04:08 elyograg just started re-watching Stargate SG-1.  There's an anomaly in the first five minutes.  How did Apophis dial the earth gate?  There's no DHD, and although they're really good with technology, they could NOT have figured out their dialing computer.
04:10 glusterbot New news from newglusterbugs: [Bug 1073217] /var/lock/subsys/glusterfsd missing <https://bugzilla.redhat.com/show_bug.cgi?id=1073217>
04:11 RameshN joined #gluster
04:15 haomaiwang joined #gluster
04:15 shubhendu joined #gluster
04:20 ndarshan joined #gluster
04:20 mattappe_ joined #gluster
04:21 jporterfield joined #gluster
04:21 CheRi joined #gluster
04:24 CheRi joined #gluster
04:26 latha joined #gluster
04:26 hagarth joined #gluster
04:28 vpshastry joined #gluster
04:29 nightwalk joined #gluster
04:32 kris joined #gluster
04:32 cjanbanan joined #gluster
04:39 ppai joined #gluster
04:57 bala joined #gluster
05:03 kdhananjay joined #gluster
05:09 sahina joined #gluster
05:09 davinder joined #gluster
05:11 snehal joined #gluster
05:12 deepakcs joined #gluster
05:12 satheesh joined #gluster
05:16 JoeJulian velladecin: Nope, that's invalid. The stop function is called by init prior to the network stop function.
05:16 JoeJulian velladecin: Like I said before. The glusterfsd is stopped before network according to your init scripts. There must be something else.
05:18 JoeJulian One thought that just came to mind... You're in runlevel 5. I assume that means you're running Xwindows and probably NetworkManager. Perhaps NetworkManager is releasing the dhcp acquired address when Xwindows shuts down, which I'm pretty sure is earlier than those init scripts.
05:18 nightwalk joined #gluster
05:21 velladecin no, only on a server in level3, no desktop. The stop() function gets ONLY called when the subsys file is present
05:21 JoeJulian why?
05:21 JoeJulian There's nothing in the scripts that would cause that.
05:21 elyograg mine are doing runlevel 3.  it does use network manager.  no dhcp.
05:22 JoeJulian "killproc [-p pidfile] [ -d delay] {program} [-signal]"
05:22 JoeJulian No mention of lockfile
05:22 velladecin http://www.redhat.com/archives/redhat-list/2008-December/msg00034.html
05:22 glusterbot Title: Re: Init script not called during system shutdown (at www.redhat.com)
05:22 rjoseph joined #gluster
05:22 larsks JoeJulian: I think /etc/rc.d/rc is what's checking for the subsys file and not bothering to call stop on the init script.
05:23 kanagaraj joined #gluster
05:23 velladecin I did test with it. When the subsys file is not present the init file is not even called. When you do chkconfig add glusterfsd, the init file gets called but stop() does not; when you remove glusterfsd from chkconfig but create the subsys file, then everything is good
05:24 velladecin at the time network is stopped, there are glusterfsd processes running (always) unless the subsys file is present
05:25 larsks This is the relevant logic in the "rc" script: https://gist.github.com/larsks/9383107
05:25 glusterbot Title: gist:9383107 (at gist.github.com)
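Paraphrasing the logic larsks is pointing at (an approximation of the EL6 /etc/rc.d/rc stop loop, not a verbatim quote of the gist):

    # for each K* script in the target runlevel:
    subsys=${i#/etc/rc$runlevel.d/K??}
    # the stop script is skipped entirely unless a matching lock file exists
    [ -f /var/lock/subsys/$subsys -o -f /var/lock/subsys/$subsys.init ] || continue
    $i stop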
05:25 JoeJulian larsks: Hmm, looks like you're right.
05:25 JoeJulian ... then how does mine stop correctly every time...
05:27 JoeJulian Thanks for proving me wrong, btw.. :)
05:27 JoeJulian I actually like it when that happens.
05:27 larsks Eh, I've been dealing with initscripts for a looooooooong time :)
05:27 velladecin you cannot see any difference (nothing in logs), only when you try to access the gluster mount do you see that you cannot see anything until about 40sec later
05:28 JoeJulian velladecin: yeah, I've got constant access to 15 volumes. I should have come across this by now.
05:28 mohankumar__ joined #gluster
05:33 JoeJulian Well, I would say that glusterd start() is going to have to create that lock/subsys file. I can't think of any other reasonably managed way of handling that.
05:34 elyograg the glusterd init script does define an env variable for glusterfsd.
05:34 JoeJulian I was noticing that too.
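A rough sketch of the kind of fix being discussed (variable names such as GLUSTERD and GLUSTERD_OPTS are assumptions; this is an illustration, not the patch that was eventually submitted):

    start() {
        daemon $GLUSTERD $GLUSTERD_OPTS
        RETVAL=$?
        # dropping a lock file for glusterfsd too is the key change: without
        # /var/lock/subsys/glusterfsd the K80glusterfsd script never runs at shutdown
        [ $RETVAL -eq 0 ] && touch /var/lock/subsys/glusterd /var/lock/subsys/glusterfsd
        return $RETVAL
    }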
05:35 aravindavk joined #gluster
05:35 velladecin when the start() runs it says 'glusterfsd is running' - also from my tests
05:35 JoeJulian which start?
05:36 velladecin in the init file it runs status() which returns 'glusterfsd is running' so the start does not execute. Does glusterd start it? These were just quick tests so I may be wrong about this..
05:37 JoeJulian The glusterfsd init is pretty ugly, too. It expects option variables to start with spaces. Ewww.
05:40 yhben joined #gluster
05:40 yhben left #gluster
05:46 JoeJulian Argh!
05:46 JoeJulian bug 1073071
05:46 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1073071 high, unspecified, ---, kkeithle, NEW , Glusterfs 3.4.2 data replication doesn't work on Fedora 20 two node cluster
05:47 JoeJulian Why do people try to write to the bricks! Gah!
05:57 rastar joined #gluster
05:59 cedric___ joined #gluster
06:04 JoeJulian One of my very rare code submissions made. Thanks for fighting for your cause, velladecin. I was so sure it's always been working I never would have looked deeper.
06:06 nightwalk joined #gluster
06:07 velladecin no worries, I'm glad I could help
06:16 benjamin_____ joined #gluster
06:20 edong23 joined #gluster
06:22 ricky-ti1 joined #gluster
06:23 nightwalk joined #gluster
06:23 nshaikh joined #gluster
06:25 JoeJulian bug 1041109
06:25 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1041109 urgent, unspecified, ---, csaba, NEW , structure needs cleaning
06:32 ProT-0-TypE joined #gluster
06:35 glusterbot joined #gluster
06:43 saurabh joined #gluster
06:44 dusmant joined #gluster
06:45 vikumar joined #gluster
06:53 cedric___ joined #gluster
06:55 rahulcs joined #gluster
06:58 ngoswami joined #gluster
06:59 kdhananjay joined #gluster
07:04 haomaiwang joined #gluster
07:05 glusterbot New news from newglusterbugs: [Bug 1073023] glusterfs mount crash after remove brick, detach peer and termination <https://bugzilla.redhat.com/show_bug.cgi?id=1073023> || [Bug 1073168] The Gluster Test Framework could use some initial sanity checks <https://bugzilla.redhat.com/show_bug.cgi?id=1073168> || [Bug 1066778] Make AFR changelog attributes persistent and independent of brick position <https://bugzilla.redhat.com/sh
07:05 ndarshan joined #gluster
07:12 nightwalk joined #gluster
07:13 __123_cyber hi. how can i repair volumes?
07:14 __123_cyber bug 1037511
07:14 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1037511 high, unspecified, ---, vbellur, NEW , Operation not permitted occurred during setattr of <nul>
07:14 kshlm joined #gluster
07:20 lalatenduM joined #gluster
07:21 jtux joined #gluster
07:24 rossi_ joined #gluster
07:26 rahulcs joined #gluster
07:27 ekuric joined #gluster
07:40 ProT-O-TypE joined #gluster
07:47 CheRi joined #gluster
07:48 ProT-0-TypE joined #gluster
07:51 Guest30207 joined #gluster
07:53 rgustafs joined #gluster
07:56 kdhananjay joined #gluster
07:59 cjanbanan joined #gluster
08:01 ctria joined #gluster
08:02 nightwalk joined #gluster
08:02 tjikkun joined #gluster
08:02 tjikkun joined #gluster
08:03 madhu joined #gluster
08:14 an_ joined #gluster
08:16 eseyman joined #gluster
08:21 ProT-0-TypE joined #gluster
08:26 hybrid512 joined #gluster
08:27 prasanth joined #gluster
08:32 keytab joined #gluster
08:41 ravindran joined #gluster
08:51 nightwalk joined #gluster
08:51 rastar joined #gluster
08:53 Frankl joined #gluster
08:55 Guest30207 joined #gluster
08:58 kdhananjay joined #gluster
09:01 an_ joined #gluster
09:02 keytab joined #gluster
09:06 liquidat joined #gluster
09:12 an__ joined #gluster
09:14 shubhendu joined #gluster
09:16 psharma joined #gluster
09:18 an_ joined #gluster
09:20 bharata-rao joined #gluster
09:21 Joe630 joined #gluster
09:22 an_ joined #gluster
09:23 rahulcs joined #gluster
09:26 an__ joined #gluster
09:33 prasanth joined #gluster
09:37 shubhendu joined #gluster
09:38 kanagaraj joined #gluster
09:39 rahulcs_ joined #gluster
09:42 nightwalk joined #gluster
09:46 Frankl joined #gluster
09:52 unlocksmith_ joined #gluster
09:52 dusmant joined #gluster
09:53 ndarshan joined #gluster
09:55 RameshN joined #gluster
09:57 latha joined #gluster
10:01 meghanam joined #gluster
10:01 meghanam_ joined #gluster
10:05 an_ joined #gluster
10:05 vpshastry joined #gluster
10:06 rastar joined #gluster
10:06 rwheeler joined #gluster
10:07 lalatenduM joined #gluster
10:10 Frankl joined #gluster
10:15 zingoto joined #gluster
10:15 ThatGraemeGuy joined #gluster
10:20 doekia ~php | doekia
10:20 glusterbot doekia: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH
10:20 glusterbot --negative-timeout=HIGH --fopen-keep-cache
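For reference, a hypothetical mount line using those options (the timeout values are arbitrary placeholders, and whether every option is accepted by a given mount.glusterfs version is worth verifying):

    mount -t glusterfs -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache \
        server1:/webvol /var/www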
10:26 al joined #gluster
10:27 zingoto joined #gluster
10:33 zingoto Hey guys, I'm a student doing my bachelor thesis in system and network administration. I'm running gluster on my storage servers and want to test the IOPS with fio from an ubuntu server that has the gluster folder mounted. However, the folder is defined as a directory, so what I tried was to create a block device file within the gluster folder; this gave me very low IOPS, like 30. Should I test this differently or is this the way
10:33 zingoto to go?
10:33 al joined #gluster
10:35 ccha there are files inside .glusterfs/indices/xattrop with old dates
10:42 nightwalk joined #gluster
10:44 ccha I don't find any gfid file from these names
10:44 doekia fuse.glusterfs performance is really bad compared to nfs + cachefilesd ... any way to tune that?
10:44 Guest30207 joined #gluster
10:45 doekia here is the pasty to the bench & conf
10:45 doekia http://ur1.ca/grq3x
10:45 glusterbot Title: #82891 Fedora Project Pastebin (at ur1.ca)
10:45 Frankl joined #gluster
10:53 ndarshan joined #gluster
10:54 kanagaraj joined #gluster
10:55 gdubreui joined #gluster
10:57 al joined #gluster
10:57 dusmant joined #gluster
11:11 al joined #gluster
11:12 hybrid512 joined #gluster
11:17 diegows joined #gluster
11:19 doekia fuse.glusterfs performance is really bad compared to nfs + cachefilesd ... any way to tune that? see http://ur1.ca/grq3x for benchmark conf & code
11:19 glusterbot Title: #82891 Fedora Project Pastebin (at ur1.ca)
11:31 gdubreui joined #gluster
11:34 nightwalk joined #gluster
11:34 RameshN joined #gluster
11:36 glusterbot New news from newglusterbugs: [Bug 1065551] Unable to add bricks to replicated volume <https://bugzilla.redhat.com/show_bug.cgi?id=1065551>
11:36 Pavid7 joined #gluster
11:42 rahulcs joined #gluster
11:42 edward1 joined #gluster
11:43 prasanth joined #gluster
11:44 hagarth joined #gluster
11:47 eseyman joined #gluster
11:54 cfeller joined #gluster
12:00 kkeithley1 joined #gluster
12:00 kkeithley1 left #gluster
12:01 kkeithley1 joined #gluster
12:03 lpabon joined #gluster
12:04 tokik joined #gluster
12:08 glusterbot New news from resolvedglusterbugs: [Bug 1070573] layout is missing when add-brick is done,new created files only locate on old bricks <https://bugzilla.redhat.com/show_bug.cgi?id=1070573>
12:12 tokik joined #gluster
12:12 ppai joined #gluster
12:18 CheRi joined #gluster
12:18 Pavid7 joined #gluster
12:19 Norky joined #gluster
12:20 vpshastry1 joined #gluster
12:21 cedric___ joined #gluster
12:23 mohankumar__ joined #gluster
12:25 nightwalk joined #gluster
12:28 an_ joined #gluster
12:36 glusterbot New news from newglusterbugs: [Bug 1073071] Glusterfs 3.4.2 data replication doesn't work for cinder backend in RDO Havana on Fedora 20 two node cluster <https://bugzilla.redhat.com/show_bug.cgi?id=1073071> || [Bug 1067852] Usage of Libgfapi and License Agreement <https://bugzilla.redhat.com/show_bug.cgi?id=1067852>
12:41 itisravi joined #gluster
12:42 sputnik13 joined #gluster
12:43 ravindran joined #gluster
12:47 Pavid7 joined #gluster
12:53 eseyman joined #gluster
12:54 al joined #gluster
12:57 an_ joined #gluster
12:58 bennyturns joined #gluster
12:59 al joined #gluster
12:59 purpleidea joined #gluster
13:00 haomaiwang joined #gluster
13:00 mohankumar__ joined #gluster
13:13 benjamin_____ joined #gluster
13:14 an_ joined #gluster
13:16 mattappe_ joined #gluster
13:16 rfortier1 joined #gluster
13:20 nightwalk joined #gluster
13:21 gmcwhistler joined #gluster
13:22 CheRi joined #gluster
13:22 rahulcs joined #gluster
13:24 jtux joined #gluster
13:29 khushildep joined #gluster
13:29 an_ joined #gluster
13:33 harish joined #gluster
13:41 rfortier1 joined #gluster
13:43 chirino joined #gluster
13:44 nightwalk joined #gluster
13:46 davinder joined #gluster
13:52 davinder joined #gluster
14:01 badone joined #gluster
14:02 RayS joined #gluster
14:04 al joined #gluster
14:05 nightwalk joined #gluster
14:06 qdk joined #gluster
14:06 glusterbot New news from newglusterbugs: [Bug 1073468] Cleanup and organise hook-scripts for smb. <https://bugzilla.redhat.com/show_bug.cgi?id=1073468>
14:07 japuzzo joined #gluster
14:09 khushildep_ joined #gluster
14:12 sroy joined #gluster
14:15 dobenshain joined #gluster
14:15 clutchk1 joined #gluster
14:16 bennyturns joined #gluster
14:17 dewey joined #gluster
14:19 ctria joined #gluster
14:19 DV__ joined #gluster
14:23 theron joined #gluster
14:25 theron_ joined #gluster
14:32 diegows joined #gluster
14:32 theron joined #gluster
14:37 jmarley joined #gluster
14:38 jmarley joined #gluster
14:40 harish_ joined #gluster
14:41 harish_ joined #gluster
14:42 jobewan joined #gluster
14:42 vpshastry joined #gluster
14:51 B21956 joined #gluster
14:52 rahulcs joined #gluster
14:54 aravindavk joined #gluster
14:55 emwav joined #gluster
14:55 benjamin_____ joined #gluster
14:56 kaptk2 joined #gluster
14:57 mtanner_ joined #gluster
14:58 tdasilva joined #gluster
15:01 chirino_m joined #gluster
15:01 emwav anyone available for a question?
15:02 lmickh joined #gluster
15:03 sjoeboo joined #gluster
15:05 primechuck joined #gluster
15:07 rahulcs joined #gluster
15:09 dewey joined #gluster
15:10 ctria joined #gluster
15:11 purpleidea emwav: you should just ask, and if the right person to answer your question is around, they will :)
15:11 purpleidea @justask
15:12 emwav I need to create a gluster volume and replicate it after the fact. In theory it seems it would work e.g. I use the gluster volume create command without the replica 2 option
15:12 emwav however, I can't seem to get the data to sync when I add an additional brick
15:13 emwav the reason is, I have 1 iscsi server housing data.   I have no room to move that data off & back.  So I would like to add a gluster server, copy data to that server, format the iscsi server and add it to the cluster
15:15 jobewan joined #gluster
15:16 bugs_ joined #gluster
15:17 Norky I'm unclear on how exactly you are doing this, emwav, could you elaborate?
15:18 emwav On the first server I run:  gluster volume create gv0  gluster01:/mnt/data/brick/
15:19 emwav On my kvm sever I mount it and write data to the gv0 volume
15:19 emwav works fine
15:19 RameshN joined #gluster
15:19 emwav hrm. i'm wondering if i need to stop the volume before adding the replica
15:19 emwav brb
15:20 chirino joined #gluster
15:21 elyograg emwav: when you add the second brick, you need to also include 'replica 2' so it knows that it needs to change the replica count rather than make it a distributed volume.
15:21 emwav i did that, however the data wasn't showing up on both bricks. but i didn't stop the volume before I added the replica.  i'm trying that now
15:22 elyograg that won't help.
15:23 elyograg did you stat everything via the mount point, or run a 'heal full' ?  Something has to tell gluster that files need healing - it won't just proactively start copying.
15:23 elyograg there is the self-heal daemon, but it only wakes up every ten minutes and I doubt it will look at the whole volume.
15:24 elyograg actually, it might only deal with things that have already been reported as needing healing via other mechanisms.  I'm not entirely sure.
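Putting the steps above together, a minimal sketch (the second server's hostname and brick path are assumptions following the earlier example):

    # change the volume from a single brick to replica 2, then trigger and watch a full heal
    gluster volume add-brick gv0 replica 2 gluster02:/mnt/data/brick
    gluster volume heal gv0 full
    gluster volume heal gv0 info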
15:26 DV__ joined #gluster
15:28 dusmant joined #gluster
15:28 dcmbrown joined #gluster
15:32 khushildep_ joined #gluster
15:35 emwav hrm. tried heal. still no go.
15:37 Norky just "heal" or "heal full"?
15:37 emwav just heal.
15:37 emwav i'll try heal full
15:37 Norky then do as elyograg says and run heal full.
15:38 emwav tada!
15:40 Norky it might take a while depending how much data you have, you can monitor it with "gluster vol heal gv0 info"
15:40 emwav just some test files
15:40 emwav so it was instant
15:40 emwav this wasn't on the live system yet w/ TB of data
15:40 Norky +1 for testing first
15:41 emwav :-D
15:41 emwav always.  test. document.  retest with documents. fix documentation.  retest. live.
15:41 Norky try it with non-empty files in (sub)directories as well
15:41 failshell joined #gluster
15:42 Norky assuming your test data is just a bunch of files in the 'root' of the volume
15:42 emwav cool. will do.  we plan on using this to host kvm images.  any problems seen with this?  our initial tests seemed to work fine but they weren't tested w/ production load.
15:42 emwav yea, they were just test data on the root of the volume
15:44 Norky I can't recall, but you might need to "gluster vol rebalance gv0 fix-layout" to synchronise directory structure in an empty brick
15:44 sprachgenerator joined #gluster
15:44 rpowell joined #gluster
15:45 DV__ joined #gluster
15:45 bennyturns joined #gluster
15:50 msp3k1 joined #gluster
15:52 msp3k1 Hi.  I'm running gluster 3.4.2-ubuntu2~precise6.  Is there a way to manually clear the heal-failed and split-brain lists once I've fixed the problems?
15:59 rpowell left #gluster
16:01 msp3k1 left #gluster
16:09 glusterbot New news from resolvedglusterbugs: [Bug 1073442] large NFS writes to Gluster slow down then stop <https://bugzilla.redhat.com/show_bug.cgi?id=1073442>
16:09 rwheeler joined #gluster
16:12 ekuric joined #gluster
16:14 daMaestro joined #gluster
16:16 hybrid512 joined #gluster
16:19 hagarth joined #gluster
16:19 aravindavk joined #gluster
16:20 ThatGraemeGuy joined #gluster
16:23 doekia joined #gluster
16:25 lpabon joined #gluster
16:30 doekia joined #gluster
16:37 zerick joined #gluster
16:39 griz1 partjoin #starcluster
16:39 griz1 left #gluster
16:41 cjanbanan joined #gluster
16:49 zaitcev joined #gluster
16:50 JonnyNomad joined #gluster
16:56 marcoceppi joined #gluster
16:56 marcoceppi joined #gluster
16:58 theron joined #gluster
17:00 Matthaeus joined #gluster
17:02 divbell joined #gluster
17:06 semiosis doekia: i'm here
17:07 doekia ;-) ... I made massive tests today... no problem w/ nfs client + gluster server
17:08 semiosis good
17:08 rahulcs joined #gluster
17:08 doekia I made other test in term of perf
17:08 semiosis i tried the localhost nfs mount at boot on wheezy... it failed
17:08 semiosis hung the boot
17:08 semiosis :(
17:08 doekia http://fpaste.org/82891/94102618/
17:08 glusterbot Title: #82891 Fedora Project Pastebin (at fpaste.org)
17:09 doekia you need to rpcbind & nfs-common + gluster volume set xxx nfs.disable off
17:10 semiosis doekia: i know all that, thanks.  nfs mount works fine after boot
17:10 semiosis actually i forgot the tcp,vers=3 opts in fstab! doh!
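For reference, a hypothetical fstab entry for the setup being discussed (server name, volume and mount point are placeholders):

    server1:/gv0  /mnt/gv0  nfs  defaults,_netdev,vers=3,tcp  0 0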
17:10 doekia got 3 system booting nfs mount actually
17:10 semiosis weird that it worked ok after boot
17:10 doekia lol
17:11 semiosis will give it another try tonight
17:11 doekia definitly performances w/ nfs client it nothing compare to fuse
17:12 REdOG joined #gluster
17:14 vpshastry joined #gluster
17:15 JoeJulian doekia: The sarcastic side of me wants to say, "yes, the developers purposefully configure the defaults to run as slow as possible in order to allow you to feel like you're doing something extraordinary by making it run at a reasonable speed again" but in reality, no. When you have clustered filesystems you have overhead that you don't have in a local filesystem. NFS + cfs simply caches locally, allowing you to use stale file data. Since cfs
17:15 JoeJulian doesn't support fuse based filesystems, if you want that level of inconsistency you have to provide it using some other tool.
17:16 JoeJulian (referring back to the old scrollback question about tuning fuse)
17:17 JoeJulian btw... you don't need to "gluster volume set xxx nfs.disable off". That's the default.
17:18 doekia ;-) actually I have run massive tests w/ nfs client + cachefilesd against gluster volume ... things go swimmingly ...
17:18 doekia defaults are prone to change ... ;-)
17:18 JoeJulian doekia: If that model works for you, great!
17:18 JoeJulian That's why it's there.
17:18 doekia well I can't find any other model
17:19 JoeJulian It all depends on use case.
17:19 doekia sure
17:20 doekia glusterfs is truly impressive eliminating the single point of failure
17:20 JoeJulian I couldn't use nfs because the loss of connection to the nfs server during the vip migration in the event of a server going down would break my systems.
17:21 RameshN joined #gluster
17:21 doekia The solution here is within same datacenter 1Gb/s link ... gives me ability to make small, yet expandable cluster for web app
17:22 doekia 6 entry level server amongst 1 is panel/supervision
17:22 doekia Business grows? we just add one node ... then another one ... etc
17:22 JoeJulian Mine too, but things like VM images and mysql innodb don't do well with filehandles closing arbitrarily.
17:23 doekia innodb is not on the gluster volume only plain php files
17:24 doekia for mysql ... the other key / smart piece of code is the galera cluster
17:24 rossi_ joined #gluster
17:24 JoeJulian Out of curiosity, is there a reason why you can't use apc?
17:24 doekia seems it does not quite work well with the fcgid
17:25 daMaestro joined #gluster
17:25 doekia I says seems but I haven't try it much
17:25 JoeJulian Odd, I use it.
17:25 doekia The only test I made gave me absolutly no speed up
17:26 JoeJulian I prefer fcgid + nginx. It uses less memory and gives me better page times than apache.
17:26 doekia have you look thru the mini bench I pastied
17:27 JoeJulian I had not.
17:27 doekia http://fpaste.org/82891/94102618/
17:27 glusterbot Title: #82891 Fedora Project Pastebin (at fpaste.org)
17:27 ndk joined #gluster
17:27 JoeJulian Ah, right, that wouldn't perform any better.
17:27 JoeJulian apc is about loading the php files themselves.
17:27 JoeJulian require/include
17:28 doekia the bench just look at raw number from the fs
17:28 JoeJulian Right. Did you perhaps try 50 clients doing this simultaneously? I'd be more curious about performance as a cluster vs one client.
17:29 T0aD doekia, interesting
17:29 T0aD i wanted to do some kind of similar setup for quite some time
17:30 doekia actually I'm running ab with concurrency 10 w/o any problem... the system will go live production over the we...
17:32 Mo_ joined #gluster
17:36 Matthaeus joined #gluster
17:44 mattappe_ joined #gluster
17:54 rossi_ joined #gluster
17:54 badone joined #gluster
17:59 lpabon joined #gluster
18:18 cjanbanan joined #gluster
18:27 Matthaeus joined #gluster
18:29 elyograg there's something I've been trying to get a handle on.  Let's say you're in a situation that requires a rebalance -- you've added storage because your bricks are very full.  You've done a fix-layout (or, like in my case, had a failed rebalance) so new data is going to the new bricks, but also the old bricks.
18:30 elyograg Is the min-free-disk option supposed to ensure that you don't completely fill up the old bricks?  If it is, it's not working.  3.4.2.  I've got it at 5%.  Should I give it an actual space value?
18:31 Matthaeus joined #gluster
18:32 rwheeler joined #gluster
18:32 rossi_ joined #gluster
18:37 semiosis elyograg: min free is supposed to place new files on other bricks which aren't as full as the brick that dht would normally place the file
18:37 JoeJulian elyograg: It's to prevent creating new *files* if the brick exceeds the min-free. Writing data to existing files will still be able to fill up the brick.
18:37 elyograg we only write to new files.
18:38 elyograg the default is supposed to be 10%.  Bricks got more than 90% full.  I set it to 5%.  We have now had some reach 96% full.  I've now set it to 230GB, as our most-full brick has 233GB left.
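For reference, a sketch of the option being discussed (volume name is a placeholder; per the thread, both percentage and absolute-size values are accepted):

    gluster volume set myvol cluster.min-free-disk 5%
    gluster volume set myvol cluster.min-free-disk 230GB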
18:39 elyograg final comment: http://hekafs.org/index.php/2012/03/glusterfs-algorithms-distribution/
18:40 asku joined #gluster
18:42 elyograg now I can't remember whether going above 95% full happened before or after the 3.4.2 upgrade.
18:43 elyograg probably before, but I've lost track. :)
18:45 Matthaeus joined #gluster
18:50 elyograg so now it might be working.  we've been getting space alarms from our monitoring, and i'm sleep deprived from figuring everything out on this heal problem.
18:51 JoeJulian I bet
18:52 elyograg i think i have enough information now that I can fix all the heal problems.  your script for finding dirty attributes will let me get rid of them, so I will have some assurance that any new problems from the rebalance actually ARE new problems.
18:53 mbukatov joined #gluster
18:54 elyograg email from a co-worker who's gotten more sleep: The xymon monitors were set up after the upgrade, and I believe the 95% alarm triggered over the weekend as we copied files into the volume to free space on the InforTrends.  That's why this concerns me.
18:54 JoeJulian btw... I made splitmount installable now instead of the half-baked download and run it from the directory crud.
18:54 elyograg nice.  i'll take a look.
18:55 rahulcs joined #gluster
18:55 khushildep joined #gluster
18:56 elyograg if you didn't already think of it, a nice feature would be a force option that causes it to use a dirty directory anyway - delete old tmp files, unmount existing mounts, and mount again.
18:56 elyograg not in that order. :)
19:02 swat30 hi folks
19:02 swat30 having an issue with deleted files not freeing up disk
19:03 swat30 du -hs . shows the disk as being freed on the brick
19:03 swat30 however df -h does not
19:04 swat30 the brick has hit 100% used
19:05 rotbeard joined #gluster
19:05 bc__ joined #gluster
19:06 bchilds left #gluster
19:07 elyograg swat30: are you deleting from the mount, or the brick?
19:07 swat30 deleted from the mount
19:07 JoeJulian swat30: "gluster volume heal $vol info" maybe? Perhaps your client isn't connected to all servers?
19:08 swat30 JoeJulian: both bricks that had the file are still showing 100% usage
19:08 JoeJulian Is the file still open by some application?
19:08 JoeJulian On your client, check lsof.
19:10 JoeJulian (or maybe it's open on some other client)
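A sketch of the check JoeJulian is suggesting: unlinked-but-still-open files keep their space allocated until the last file descriptor closes.

    # list open files whose link count is zero, i.e. deleted but still held open
    lsof +L1
    # on the brick servers, restrict to the brick export daemon
    lsof +L1 -c glusterfsd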
19:10 swat30 JoeJulian: looking now
19:10 swat30 I see glusterfs as having it open
19:11 elyograg there should be another process that has it open too.
19:11 elyograg glusterfs is the mount, usually.
19:11 swat30 alright, checking on my other clients
19:13 JoeJulian glusterfs shouldn't have the file open... glusterfsd would on the server, but not glusterfs.
19:13 swat30 yup, that's what I meant sorry
19:15 badone joined #gluster
19:15 swat30 JoeJulian: lsof is freezing.. could be b/c of the 100% usage on that brick pair?
19:16 JoeJulian Can you ctrl-c out of it?
19:16 swat30 yup
19:16 JoeJulian Then it's not what I was thinking... not sure then.
19:17 JoeJulian Found a new tool, not sure how to make use of it yet though... "gluster volume status $vol fd"
19:17 elyograg i often end up doing 'kill -9' on things that are talking to gluster and spinning.
19:18 swat30 JoeJulian: running 3.2.5, doesn't seem to be avail
19:18 swat30 Ubuntu LTS version
19:19 JoeJulian LTS is a myth, imho.
19:19 JoeJulian @ppa
19:19 glusterbot JoeJulian: The official glusterfs packages for Ubuntu are available here: 3.4 stable: http://goo.gl/u33hy -- 3.5 QA: http://goo.gl/Odj95k -- introducing QEMU with GlusterFS 3.4 support: http://goo.gl/7I8WN4
19:20 elyograg speaking of commands, did you see me say that I can't make 'gluster volume status $vol nfs clients' show me anything but zeros, even though I *know* that there are NFS connections that are active?
19:20 swat30 yea, the upgrade path is tough. we're running a lot of client VMs on this
19:21 JoeJulian elyograg: heh, I was about to try it - then remembered I have nfs disabled on all my volumes. :D
19:21 JoeJulian Did you file a bug report? :D
19:21 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
19:21 elyograg no.  too much going on.
19:28 elyograg I will get to it.  it's low on my priority list.
19:28 elyograg I should probably write that list down. :)
19:28 JoeJulian done
19:40 elyograg that 'rm -rf' which I complained about yesterday (on an old brick) is *still* deleting .glusterfs.  no actual space recovered yet, because it hasn't started deleting the 'real' files.
19:41 JoeJulian wow
19:44 elyograg I'm going to delete the other seven at once, load average be damned. :)
19:45 JoeJulian heh
19:45 elyograg better renice them, though. :)
19:46 theron joined #gluster
19:47 svalery joined #gluster
19:48 elyograg load is up over 15 now.  CPU usage (including iowait) is very low, though.
19:49 elyograg the rm processes aren't at the top of the 'top' list.
19:54 elyograg I'm not sure whether the load should worry me or not.
19:55 elyograg oh, now iowait is getting a little nuts.  but not for extended periods.
19:55 diegows joined #gluster
19:56 elyograg top is updating once a second and showing many cycles where iowait is zero.
20:03 lanning_ joined #gluster
20:03 JonnyNomad joined #gluster
20:03 JordanHackworth joined #gluster
20:08 JoseBravo joined #gluster
20:08 JoseBravo Hi
20:08 glusterbot JoseBravo: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
20:10 glusterbot New news from newglusterbugs: [Bug 1073616] Distributed volume rebalance errors due to hardlinks to .glusterfs/... <https://bugzilla.redhat.com/show_bug.cgi?id=1073616>
20:11 JoseBravo In, http://www.gluster.org/community/documentation/index.php/Getting_started_overview says that "Gluster does not support so called 'structured data', meaning live, SQL databases" So If I want to use gluster to save VM's volumes, and those VM's have mysql/postgresql/sql servers databases, Gluster is not what I'm looking for?
20:11 glusterbot Title: Getting started overview - GlusterDocumentation (at www.gluster.org)
20:12 samppah JoseBravo: if you are using gluster to store VM images, that's fine
20:15 JoseBravo What is the documentation talking about regarding the sql databases, then?
20:16 JoeJulian I host MariaDB innodb tables on a gluster volume. I've done some preliminary tests that show that sharding innodb data across multiple dht volumes could actually improve performance.
20:17 JoeJulian myisam tables would probably perform rather poorly though.
20:19 JoeJulian I've been arguing against that negative statement for years. I really don't understand why johnmark says that.
20:19 ultrabizweb joined #gluster
20:20 kanagaraj joined #gluster
20:22 chirino joined #gluster
20:23 samppah JoeJulian: btw.. why you are storing DB on glusterfs?
20:25 JoeJulian My mariadb server runs in a vm. If anything goes wrong with that vm I can boot a new one in moments without worrying about lost data.
20:25 JoeJulian I have replica 3 redundancy
20:25 samppah okay
20:25 samppah vm isn't on gluster?
20:26 JoeJulian It is.
20:26 JoeJulian Belt and suspenders. :D
20:26 samppah :D
20:27 samppah maybe you should add in Galera too ;)
20:27 JoeJulian Heh.
20:30 Matthaeus joined #gluster
20:33 junaid joined #gluster
20:33 bugs_ joined #gluster
20:35 dberry joined #gluster
20:35 dberry joined #gluster
20:47 jruggiero joined #gluster
20:52 khushildep joined #gluster
20:53 seapasulli joined #gluster
20:57 Pavid7 joined #gluster
21:04 andrewklau joined #gluster
21:09 sputnik13 joined #gluster
21:13 sputnik13 joined #gluster
21:15 wrale Anyone publish benchmarks for the compression-on-wire options yet? http://www.gluster.org/community/documentation/index.php/Features/On-Wire_Compression_%2B_Decompression
21:15 glusterbot Title: Features/On-Wire Compression + Decompression - GlusterDocumentation (at www.gluster.org)
21:22 xymox joined #gluster
21:22 badone joined #gluster
21:29 rahulcs joined #gluster
21:32 xymox joined #gluster
21:32 JoeJulian I'd be very surprised if anyone had.
21:33 rwheeler joined #gluster
21:36 wrale Can you recommend a good method of benchmarking GlusterFS volumes?  I've been struggling with fio, but it seems like the most powerful choice.
21:37 JoeJulian I've got a strong aversion to benchmarking tools as they seldom reflect any semblance of real world use cases, especially with regard to clustered systems.
21:38 wrale I see.  Makes sense.
21:39 wrale I was trying to run this kind of benchmark using gluster instead of ceph.. The staging for the benchmark is enormous... http://software.intel.com/en-us/blogs/2013/10/25/measure-ceph-rbd-performance-in-a-quantitative-way-part-i
21:39 glusterbot Title: Measure Ceph RBD performance in a quantitative way (part I) (at software.intel.com)
21:39 wrale (but i think it's a cool approach, either way)
21:40 JoeJulian Not bad at all, and a not uncommon use case.
21:43 edong23 joined #gluster
21:58 Sun^^ joined #gluster
22:09 YazzY joined #gluster
22:09 YazzY joined #gluster
22:10 rahulcs joined #gluster
22:12 sjoeboo joined #gluster
22:14 rahulcs joined #gluster
22:15 failshel_ joined #gluster
22:17 primechuck joined #gluster
22:21 rahulcs_ joined #gluster
22:23 sun^^^ joined #gluster
22:24 sun^^^^ joined #gluster
22:24 zerick joined #gluster
22:27 Sun^^ joined #gluster
22:28 sputnik13 joined #gluster
22:34 Matthaeus1 joined #gluster
22:38 tdasilva left #gluster
22:39 sputnik13 joined #gluster
22:39 seapasulli joined #gluster
22:43 RayS joined #gluster
22:44 irctc720 joined #gluster
22:44 balanced21 joined #gluster
22:48 wrale I wonder if anyone can instruct me on the most straight-forward and reliable way to mount a 4 x 3 volume.. Is this the correct method? https://www.gluster.org/2013/12/glusterfs-and-its-nature-of-configuration-high-availability/
22:50 wrale (using 3.5b3)
22:51 wrale (volumes are configure for server quorum)
22:51 cfeller joined #gluster
22:51 wrale s/configure/configured/
22:51 glusterbot What wrale meant to say was: (volumes are configured for server quorum)
22:57 velladecin In a distributed/replicated setup, what would be the best solution for adding replicas to the cluster over time? Would that be a complete 'rebalance', or is 'fix-layout' and then letting the files spread around more preferable?
22:57 semiosis you dont need to rebalance when you change the replica count
22:58 semiosis rebalancing is for changing the distribution count
22:58 velladecin Rebalance takes a long time, with a lot of content it's days, so you reckon 'fix-layout' and then wait for the files to spread around naturally, so to say :)
22:58 JoeJulian wrale: I use rrdns. Many use the "backup-volfile-servers" option. Both seem pretty straight-forward and reliable to me.
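For reference, a hypothetical fstab entry for the second approach (server names, volume and mount point are placeholders; some older releases spelled the option backupvolfile-server):

    server1:/bigvol  /mnt/bigvol  glusterfs  defaults,_netdev,backup-volfile-servers=server2:server3  0 0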
22:59 sputnik13 joined #gluster
22:59 JoeJulian I think "replicas" = "replica subvolumes to dht" in his question.
22:59 semiosis ahh right
23:01 velladecin yes, sorry -> adding replica subvolumes to dht would be the correct qustion
23:02 wrale JoeJulian: thanks.. I'm going to try the backup method via fstab.
23:03 cfeller joined #gluster
23:08 JoeJulian velladecin: The files won't spread themselves around after a fix-layout unless you have some sort of file churn.
23:09 purpleidea @vagrant
23:09 glusterbot purpleidea: (#1) Part 1 @ https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/, or (#2) Part 2 @ https://ttboj.wordpress.com/2013/12/21/vagrant-vsftp-and-other-tricks/, or (#3) Part 3 @ https://ttboj.wordpress.com/2014/01/02/vagrant-clustered-ssh-and-screen/, or (#4) Part 4 @ https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/,
23:09 glusterbot purpleidea: or (#5) https://ttboj.wordpress.com/2014/01/16/testing-glusterfs-during-glusterfest/
23:10 velladecin yep, sorry English is my second language :)
23:11 JoeJulian It's okay, I just want to ensure you're understanding the concepts correctly. Knowing that, whatever fits your use case. Personally, I'd do the full rebalance.
23:11 velladecin What I meant is that after the 'fix-layout' when adding new files they will start to spread to the new subvolume and technically If it's left long enough it will sort of rebalance itself
23:12 velladecin ic. I always do rebalance, but I thought if I could possibly avoid it I wouldn't mind, as the rebalance takes a long time
23:12 JoeJulian Well, no. It'll keep adding files evenly to the pre-existing bricks and the new ones until the pre-existing ones are full.
23:12 JoeJulian Does it taking a long time interfere with anything (it shouldn't).
23:12 velladecin Ok, well that means rebalance is the way to go then. To have nicely even bricks
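For reference, a sketch of that workflow for a replica 2 volume (host names, volume name and brick paths are placeholders):

    # add one new replica pair (a new distribute subvolume)
    gluster volume add-brick myvol server5:/bricks/b1 server6:/bricks/b1
    # either fix the layout only, or do the full rebalance as discussed
    gluster volume rebalance myvol fix-layout start
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status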
23:14 velladecin No, not really. I just feel a little uneasy leaving it running for some days while there are other operations done to the content. But it is designed to cope with it..
23:14 elyograg JoeJulian: my modified version of your dirty files script (which just removes the "is it a file, and is it nonzero in size" check) follows the symlinks in .glusterfs, which finds its way to the end file.  So I think I need to have it not follow symlinks.  If symlinks cannot have xattrs, it can probably skip them entirely.
23:15 elyograg haven't looked at the script again.  probably should have done that first. :)
23:17 wrale JoeJulian: do you run/recommend direct-io-mode=disabled for production use?
23:24 elyograg my tiny reference sample says that gluster probably does not set xattrs on symlinks, even though it's probably possible.  so I think I can just skip symlinks.
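A minimal Python 3 sketch of that approach (an illustration only, not the script from the paste link; reading trusted.* xattrs on a brick requires root):

    import os

    def dirty_entries(brick_root):
        # walk a brick and yield entries whose trusted.afr.* xattrs are non-zero,
        # skipping symlinks entirely as discussed above
        for dirpath, dirnames, filenames in os.walk(brick_root):
            for name in dirnames + filenames:
                path = os.path.join(dirpath, name)
                if os.path.islink(path):
                    continue
                for attr in os.listxattr(path):
                    if attr.startswith('trusted.afr.') and any(os.getxattr(path, attr)):
                        yield path
                        break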
23:24 edong23 joined #gluster
23:28 semiosis symlinks are just text files with the link bit set
23:28 semiosis i'd bet gluster does all the usual xattr stuff for them
23:29 seapasulli joined #gluster
23:30 semiosis hmm
23:30 semiosis i see a gfid but no afr xattrs
23:31 elyograg that was what I just saw when I looked at one.
23:31 theron joined #gluster
23:32 semiosis and if i change the target, i get a new gfid!?
23:33 semiosis btw i'm on 3.1.7 here
23:33 semiosis testing this on prod
23:33 semiosis can check on 3.4.2 later
23:35 elyograg JoeJulian: my changed script.  It doesn't spit out the xattrs, just dirty files.  http://paste.fedoraproject.org/83203/48870139
23:35 glusterbot Title: #83203 Fedora Project Pastebin (at paste.fedoraproject.org)
23:36 elyograg so the commented code line actually should have been just deleted.
23:36 elyograg since it probably wouldn't work there anyway.
23:36 theron_ joined #gluster
23:36 cfeller joined #gluster
23:47 sjoeboo joined #gluster
23:54 gdubreui joined #gluster
