
IRC log for #gluster, 2014-10-31


All times shown according to UTC.

Time Nick Message
00:06 plarsen joined #gluster
00:37 Pupeno_ joined #gluster
00:50 bennyturns joined #gluster
01:19 plarsen joined #gluster
01:30 haomaiwang joined #gluster
01:32 haomaiwang joined #gluster
01:38 elico joined #gluster
01:44 David_H__ joined #gluster
01:53 glusterbot New news from newglusterbugs: [Bug 1117509] Gluster peer detach does not cleanup peer records causing peer to get added back <https://bugzilla.redhat.com/show_bug.cgi?id=1117509>
01:59 wgao joined #gluster
02:11 msmith_ joined #gluster
02:17 glusterbot New news from resolvedglusterbugs: [Bug 1065616] Host name is not updating <https://bugzilla.redhat.com/show_bug.cgi?id=1065616> || [Bug 1115199] Unable to get lock for uuid, Cluster lock not held <https://bugzilla.redhat.com/show_bug.cgi?id=1115199>
02:19 elico joined #gluster
02:21 kshlm joined #gluster
02:25 Guest5348 joined #gluster
02:26 anoopcs joined #gluster
02:27 bala joined #gluster
02:29 anoopcs joined #gluster
02:32 MugginsM I'm seeing a lot of "Operation not permitted occurred during setattr of <nul>" errors :-/
02:35 David_H_Smith joined #gluster
02:52 haomai___ joined #gluster
02:56 kshlm joined #gluster
03:22 hollaus joined #gluster
03:32 kshlm joined #gluster
03:33 jiffin joined #gluster
03:35 shubhendu joined #gluster
03:35 haomaiwang joined #gluster
03:38 bala joined #gluster
03:39 DV joined #gluster
03:39 nbalachandran joined #gluster
03:44 haomaiw__ joined #gluster
03:44 kanagaraj joined #gluster
03:51 hagarth joined #gluster
04:03 bala joined #gluster
04:06 Pupeno joined #gluster
04:08 jiffin joined #gluster
04:23 RameshN joined #gluster
04:28 ppai joined #gluster
04:29 atinmu joined #gluster
04:37 rafi1 joined #gluster
04:41 harish_ joined #gluster
04:49 nishanth joined #gluster
05:00 itisravi joined #gluster
05:01 lalatenduM joined #gluster
05:05 kshlm joined #gluster
05:09 hagarth joined #gluster
05:11 atalur joined #gluster
05:14 ndarshan joined #gluster
05:14 Lee- joined #gluster
05:22 dusmant joined #gluster
05:23 kshlm joined #gluster
05:31 kdhananjay joined #gluster
05:35 sahina joined #gluster
05:35 saurabh joined #gluster
05:41 overclk joined #gluster
05:47 overclk joined #gluster
05:58 deepakcs joined #gluster
06:02 sputnik13 joined #gluster
06:03 aravindavk joined #gluster
06:09 ppai joined #gluster
06:27 ramteid joined #gluster
06:34 hchiramm_ joined #gluster
06:35 SOLDIERz joined #gluster
06:37 dusmant joined #gluster
06:37 Philambdo joined #gluster
06:54 glusterbot New news from newglusterbugs: [Bug 1159172] [rfe] geo-replication status command should provide slave host uuid. <https://bugzilla.redhat.com/show_bug.cgi?id=1159172>
06:56 kumar joined #gluster
06:56 shubhendu joined #gluster
06:58 rgustafs joined #gluster
07:14 soumya joined #gluster
07:17 atinmu joined #gluster
07:18 aravindavk joined #gluster
07:22 ricky-ti1 joined #gluster
07:24 shubhendu joined #gluster
07:26 atalur joined #gluster
07:31 ppai joined #gluster
07:34 bala joined #gluster
07:34 Fen1 joined #gluster
07:36 rgustafs joined #gluster
07:47 SOLDIERz joined #gluster
07:50 SOLDIERz joined #gluster
07:52 hollaus joined #gluster
08:01 SOLDIERz_ joined #gluster
08:03 SOLDIERz__ joined #gluster
08:08 SOLDIERz___ joined #gluster
08:13 SOLDIERz___ joined #gluster
08:19 RameshN joined #gluster
08:19 hollaus joined #gluster
08:22 rgustafs joined #gluster
08:25 SOLDIERz____ joined #gluster
08:37 ricky-ticky joined #gluster
08:38 haomaiwa_ joined #gluster
08:40 badone_ joined #gluster
08:41 SOLDIERz____ joined #gluster
08:42 badone__ joined #gluster
08:42 SOLDIERz_____ joined #gluster
08:47 karnan joined #gluster
08:48 deniszh joined #gluster
08:49 ricky-ticky1 joined #gluster
08:49 vikumar joined #gluster
08:49 SOLDIERz_____ joined #gluster
08:52 elico joined #gluster
08:54 SOLDIERz_____ joined #gluster
08:55 glusterbot New news from newglusterbugs: [Bug 1159209] Geo-Replication Passive node is not getting promoted to active when one node of replicated slave volume goes down <https://bugzilla.redhat.com/show_bug.cgi?id=1159209> || [Bug 1159213] Dist-geo-rep : geo-rep doesn't preserve the ownership while syncing entry operations to slave through mount-broker. <https://bugzilla.redhat.com/show_bug.cgi?id=1159213>
08:59 michael__ joined #gluster
09:02 ppai joined #gluster
09:02 SOLDIERz_____ joined #gluster
09:03 jiffin1 joined #gluster
09:03 SOLDIERz_____ joined #gluster
09:05 atalur joined #gluster
09:20 shubhendu joined #gluster
09:20 soumya joined #gluster
09:20 ndarshan joined #gluster
09:22 liquidat joined #gluster
09:24 atinmu joined #gluster
09:24 dusmant joined #gluster
09:25 glusterbot New news from newglusterbugs: [Bug 1159221] io-stats may crash the brick when loc->path is NULL in some fops <https://bugzilla.redhat.com/show_bug.cgi?id=1159221>
09:26 bala joined #gluster
09:33 Guest86619 we had a fatal error yesterday, does anybody have a little bit of time to help us understand what happened?
09:35 harish_ joined #gluster
09:40 Javad joined #gluster
09:41 partner back to yesterdays question, my clients get flooded with "disk layout missing / mismatching layouts" messages roughly 72 times per second causing /var mounts to fill up, is there something i can do to fix that (other than hide the INFO level messages by switching logging to WARNING) ?
09:44 partner this started to happen pretty much after 3.3.2 -> 3.4.5 upgrade, clients mostly/all wheezy and the volume is distributed
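A minimal sketch of what could be tried for the message flood above, assuming the volume is named VOLNAME (hypothetical); lowering the client log level silences the INFO noise (the workaround partner already mentions), and a fix-layout rebalance is the usual remedy when directory layouts are reported missing or mismatching after an upgrade, though neither is confirmed as the fix for this particular case:

    # reduce client-side log verbosity (reversible with "reset")
    gluster volume set VOLNAME diagnostics.client-log-level WARNING
    # recalculate directory layouts across the distributed volume
    gluster volume rebalance VOLNAME fix-layout start
    gluster volume rebalance VOLNAME status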
09:47 bala joined #gluster
09:56 harish_ joined #gluster
09:56 _weykent joined #gluster
10:03 harish_ joined #gluster
10:04 harish_ joined #gluster
10:05 DV_ joined #gluster
10:06 jiffin joined #gluster
10:07 overclk joined #gluster
10:09 andrea_ joined #gluster
10:09 ekuric joined #gluster
10:10 andrea_ Hello, I need help with glusterfs. Can I talk privately with someone?
10:10 shubhendu joined #gluster
10:14 atinmu joined #gluster
10:14 dusmant joined #gluster
10:15 ndarshan joined #gluster
10:23 kdhananjay joined #gluster
10:24 soumya joined #gluster
10:24 warci joined #gluster
10:25 glusterbot New news from newglusterbugs: [Bug 1159248] glupy compilation issue <https://bugzilla.redhat.com/show_bug.cgi?id=1159248> || [Bug 1159253] GlusterFS 3.6.1 tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1159253>
10:27 shubhendu joined #gluster
10:50 bala joined #gluster
10:55 glusterbot New news from newglusterbugs: [Bug 1159269] Random crashes when generating an internal state dump with signal USR1 <https://bugzilla.redhat.com/show_bug.cgi?id=1159269>
11:02 SOLDIERz_____ joined #gluster
11:03 shubhendu joined #gluster
11:04 mojibake joined #gluster
11:19 virusuy joined #gluster
11:19 virusuy joined #gluster
11:22 bene joined #gluster
11:25 elico joined #gluster
11:25 lalatenduM_ joined #gluster
11:26 Guest5348 joined #gluster
11:27 shubhendu joined #gluster
11:28 nbalachandran joined #gluster
11:29 chrisu joined #gluster
11:31 chrisu hi, I'm looking to see if there is a way to set client affinity to particular servers. scenario is 2 datacenters with a fast enough link to not require geo replication. each DC has 2 gluster servers that are part of a 4 node cluster. I want the clients in each datacenter to talk to its local gluster servers rather than traverse the link unnecessarily. Does anyone know if that's possible?
11:42 soumya joined #gluster
11:45 meghanam joined #gluster
11:45 meghanam_ joined #gluster
11:47 meghanam joined #gluster
11:47 meghanam_ joined #gluster
11:49 LebedevRI joined #gluster
11:49 theron joined #gluster
11:52 B21956 joined #gluster
11:54 pdrakeweb joined #gluster
11:55 hagarth left #gluster
11:55 glusterbot New news from newglusterbugs: [Bug 1159284] Random crashes when generating an internal state dump with signal USR1 <https://bugzilla.redhat.com/show_bug.cgi?id=1159284>
11:57 theron_ joined #gluster
11:58 hagarth1 joined #gluster
11:59 nbalachandran joined #gluster
12:01 jdarcy joined #gluster
12:04 SOLDIERz_____ joined #gluster
12:09 edward1 joined #gluster
12:12 plarsen joined #gluster
12:12 SOLDIERz_____ joined #gluster
12:14 soumya_ joined #gluster
12:22 SOLDIERz_____ joined #gluster
12:26 julim joined #gluster
12:41 elico joined #gluster
12:44 warci joined #gluster
12:44 diegows joined #gluster
12:44 sputnik13 joined #gluster
12:47 mariusp joined #gluster
12:47 calisto joined #gluster
12:56 bala joined #gluster
12:57 plarsen joined #gluster
12:58 nage joined #gluster
12:59 Norky joined #gluster
13:05 Fen1 joined #gluster
13:16 bene joined #gluster
13:25 UnwashedMeme1 left #gluster
13:29 elico joined #gluster
13:30 bennyturns joined #gluster
13:30 msmith joined #gluster
13:34 bene2 joined #gluster
13:41 Zordrak I'm using gluster under ovirt on CentOS7 and I struggle every reboot because I can't seem to work out what is starting up an NFS client and registering with portmap on boot.
13:43 Zordrak So every boot I have to have glusterd disabled, rpcinfo -d nfs 3; rpcinfo -d nfs 1; rpcinfo -d nfs_acl 3; rpcinfo -d nfs_acl 1; rpcinfo -d nlockmgr 4; rpcinfo -d nlockmgr 3; rpcinfo -d nlockmgr 1
13:43 ndevos Zordrak: without checking, you may want to 'systemctl disable nfs.target'
13:44 ndevos Zordrak: I think there is also a nfs-lock.service (or something) that should not be running
13:44 rgustafs joined #gluster
13:45 Zordrak Loaded: loaded (/usr/lib/systemd/system/nfs-lock.service; disabled)
13:45 getup joined #gluster
13:45 Zordrak Loaded: loaded (/usr/lib/systemd/system/nfs.target; disabled)
13:45 Zordrak already tried them :(
13:48 ndevos Zordrak: also nfs-server.service?
13:48 Zordrak Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled)
13:48 ndevos Zordrak: is the nfsd or lockd kernel module loaded?
13:49 Zordrak lockd yes
13:49 Zordrak not nfsd
13:50 Zordrak although.. i have just noticed autofs is enabled on boot .. hrrrm :-/
13:50 theron joined #gluster
13:50 ndevos hmm, depending on your autofs config, that could indeed be the cause
13:50 ndevos (well, I also dont know how the default config is...)
13:51 Zordrak it's not configured other than default.. I have puppet configging other autofs instances via sssd, but not on these boxes
13:51 Zordrak welp, at least i can disable that too
13:51 Zordrak then try on next boot of a host which should be shortly since i have a kernel upgrade pending
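A compact sketch of the checks Zordrak and ndevos walk through above, on a systemd host (CentOS 7 assumed; unit names can differ between distributions):

    # confirm none of the kernel-NFS units are enabled or active
    systemctl is-enabled nfs.target nfs-lock.service nfs-server.service
    systemctl is-active nfs.target nfs-lock.service nfs-server.service
    # check whether the kernel nfsd/lockd modules are loaded
    lsmod | grep -E '^(nfsd|lockd)'
    # see what is currently registered with rpcbind/portmap
    rpcinfo -p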
14:01 bene joined #gluster
14:03 deepakcs joined #gluster
14:13 georgeh joined #gluster
14:18 calisto joined #gluster
14:21 nbalachandran joined #gluster
14:21 nueces joined #gluster
14:28 nshaikh joined #gluster
14:35 _dist joined #gluster
14:43 sijis joined #gluster
14:56 Zordrak ndevos: Made NO difference
14:56 Zordrak ndevos: the box is in maintenance now, post-reboot, no ovirt services started and yet the nfs services are already registered with portmap
14:57 diegows joined #gluster
14:57 ndevos Zordrak: you could blacklist lockd to prevent it from loading
14:57 Zordrak no lockd kernel module loaded, no nfsd kernel module loaded
14:57 ndevos oh
14:57 Zordrak inorite
14:57 ndevos strange, I dont think I've seen that before
14:57 Zordrak i just literally have no idea what's registering the services
14:58 Zordrak :-O root      1496     1  0 14:55 ?        00:00:00 /usr/sbin/automount --pid-file /run/autofs.pid
14:58 rwheeler joined #gluster
14:58 Zordrak WTF I disabled it!
14:58 ndevos is there a process on one of the ports that 'rpcinfo -p' shows?
14:58 shubhendu joined #gluster
14:59 theron joined #gluster
14:59 Zordrak (doh) Notice: /Stage[main]/Nfs::Config/Service[autofs]/enable: enable changed 'false' to 'true'
15:00 Zordrak ndevos: not sure what you mean about a process
15:00 Zordrak oh i get you,
15:01 theron_ joined #gluster
15:01 Zordrak no, not according to netstat -anp
15:01 Zordrak i think it might be autofs
15:01 Zordrak let me kill it from puppet and reboot and check again
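To follow up ndevos' question, a quick sketch for mapping a registered RPC service back to a process, or confirming that nothing owns the port, using only standard tools; the port number is just an example:

    # list programs registered with rpcbind and the ports they claim
    rpcinfo -p
    # check whether any userspace process is listening on a given port, e.g. 2049 for nfs
    netstat -anp | grep 2049    # or: ss -tlnp | grep 2049
    # a registration with no owning process usually points at a stale rpcbind
    # entry or an in-kernel service rather than a daemon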
15:03 theron joined #gluster
15:03 ndevos I do not think autofs does that
15:04 ndevos Zordrak: can you check if rpcbind is running with the -w optin?
15:04 ndevos *option
15:05 ndevos there seems to be some status/cache file that it can use, maybe that is a new default
15:07 Zordrak yes it is using -w
15:08 Zordrak ExecStart=/sbin/rpcbind -w ${RPCBIND_ARGS}
15:10 Zordrak safe to remove do you think? as ugly a hack as it would  be
15:11 plarsen joined #gluster
15:12 theron joined #gluster
15:15 Zordrak_ joined #gluster
15:16 theron_ joined #gluster
15:17 Zordrak ndevos: very kind help sir!
15:22 ndevos Zordrak: yeah, it should be safe to remove the -w
15:23 ndevos I'm not sure why that would be used for starting... a hot-restart would use that, but upon booting?!
15:24 ndevos oh, I think the cache file should be located under /var/run and it should get removed upon boot - but thats my opinion
15:24 * ndevos has no idea where that file currently would be located
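If removing -w turns out to be the fix, a systemd drop-in keeps the change from being overwritten by package updates. A sketch, assuming the stock unit quoted above; on older systemd without "systemctl edit" the drop-in file is created by hand:

    mkdir -p /etc/systemd/system/rpcbind.service.d
    cat > /etc/systemd/system/rpcbind.service.d/no-warm-start.conf <<'EOF'
    [Service]
    # clear the packaged ExecStart, then redefine it without -w (no warm start)
    ExecStart=
    ExecStart=/sbin/rpcbind ${RPCBIND_ARGS}
    EOF
    systemctl daemon-reload
    systemctl restart rpcbind.service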
15:26 diegows joined #gluster
15:34 the-me joined #gluster
15:36 theron joined #gluster
15:38 theron joined #gluster
15:40 theron_ joined #gluster
15:45 lmickh joined #gluster
15:52 sputnik13 joined #gluster
16:04 hagarth joined #gluster
16:31 jobewan joined #gluster
16:34 davemc GlusterFS 3.6.0 is alive! Just in time for your holiday treat, bluster community has released 3.6.0. More at http://blog.gluster.org/?p=10661
16:34 hagarth davemc: s/bluster/gluster/
16:34 davemc damn, changed that once already
16:35 ndevos hagarth: bluster is short for blog.gluster.org ;)
16:35 davemc GlusterFS 3.6.0 is alive! Just in time for your holiday treat, gluster community has released 3.6.0. More at http://blog.gluster.org/?p=10661
16:35 hagarth ndevos: cool to have a community for blog.gluster.org ;)
16:38 kumar joined #gluster
16:52 deniszh left #gluster
16:52 fattaneh1 joined #gluster
17:03 meghanam_ joined #gluster
17:03 meghanam joined #gluster
17:04 drewskis1 joined #gluster
17:04 drewskis1 my system shows a df of 535gb but a du of 585gb. tried clearing all the locks and i can't get all the space to show up in df
17:06 diegows joined #gluster
17:16 DougBishop joined #gluster
17:18 JoeJulian drewskis1: Is this through a client mount or on a brick directly?
17:19 Pupeno_ joined #gluster
17:20 n-st joined #gluster
17:25 Pupeno joined #gluster
17:33 Pupeno_ joined #gluster
17:53 andreask joined #gluster
17:59 mbukatov joined #gluster
18:00 wgao joined #gluster
18:06 mojibake joined #gluster
18:06 gdavis331 joined #gluster
18:10 PeterA joined #gluster
18:23 samkottler joined #gluster
18:23 Lee- joined #gluster
18:31 fattaneh1 joined #gluster
18:31 magamo left #gluster
18:31 fattaneh1 left #gluster
18:48 hollaus joined #gluster
18:58 Pupeno joined #gluster
18:59 Pupeno joined #gluster
19:16 lalatenduM joined #gluster
19:20 rotbeard joined #gluster
19:23 alendorfme joined #gluster
19:24 alendorfme hi, I get the msg "failed to get the 'volume file' from server" in ubuntu 14.04, Gluster 3.4.2.
19:24 alendorfme Version 3.5 might fix this?
19:25 semiosis alendorfme: please put the client log file on pastie.org or similar
19:26 theron joined #gluster
19:26 alendorfme semiosis: http://pastie.org/9688430
19:26 glusterbot Title: #9688430 - Pastie (at pastie.org)
19:26 semiosis alendorfme: whats the mount command?
19:27 alendorfme in /etc/fstab: gluster01:11e5b847-93ca-40a3-9678-0d9cf8240921  /var/www  glusterfs  log-file=/var/log/data.vol,ro,defaults   0  0
19:27 alendorfme run mount -a
19:28 semiosis and please ,,(pasteinfo)
19:28 glusterbot Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
19:29 alendorfme semiosis: http://ur1.ca/ine5k
19:29 glusterbot Title: #146958 Fedora Project Pastebin (at ur1.ca)
19:32 alendorfme semiosis: using ubuntu 14.04 on two amazon ec2 instances; gluster01 and gluster02 point to cname hosts
19:37 altendorfme joined #gluster
19:38 altendorfme semiosis: *return, change irc client =)
19:45 semiosis altendorfme: you need to use the volume name, "brick", not the volume ID, in your fstab line, like this: gluster01:brick  /var/www  glusterfs  log-file=/var/log/data.vol,ro,defaults   0  0
19:45 semiosis "brick" is not a great name for a volume by the way
19:46 semiosis altendorfme: also note that you can find newer packages in the ,,(ppa)
19:46 glusterbot altendorfme: The official glusterfs packages for Ubuntu are available here: STABLE: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh -- QA: 3.4: http://goo.gl/B2x59y 3.5: http://goo.gl/RJgJvV 3.6: http://goo.gl/ncyln5 -- QEMU with GlusterFS support: http://goo.gl/e8IHnQ (3.4) & http://goo.gl/tIJziO (3.5)
19:47 altendorfme semiosis: owww, my problem is the "/": gluster01:/brick
19:48 semiosis you dont need the /
19:48 semiosis just gluster01:brick is good
19:48 semiosis not gluster01:/brick
19:48 altendorfme semiosis: hahha, my first try with gluster... sorry for the stupid volume name "brick" =)
19:48 semiosis :)
20:06 altendorfme semiosis: i created a vol file and a new fstab https://dpaste.de/Vzmi, is using max-read necessary?
20:07 semiosis why use a vol file?  that's not recommended
20:08 altendorfme semiosis: I saw this recommendation on blogs =/
20:09 semiosis they're either old & wrong, or just plain wrong :)
20:10 altendorfme semiosis: hahha, thx. I will set up the mount at /var/www and install ISPConfig, do you have any recommendations for php performance?
20:10 semiosis optimize your include path, use APC, use autoloading, see also ,,(php)
20:10 glusterbot (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH
20:10 glusterbot --fopen-keep-cache
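A sketch of what glusterbot's second suggestion looks like in practice, assuming the volume from earlier (named "brick"), the /var/www mount point, and 600-second timeouts standing in for "HIGH"; tune the values to taste and note that option handling varies a little between releases:

    glusterfs --volfile-server=gluster01 --volfile-id=brick \
        --attribute-timeout=600 --entry-timeout=600 \
        --negative-timeout=600 --fopen-keep-cache /var/www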
20:12 altendorfme semiosis: owwww... very thank you!!!!
20:12 semiosis yw
20:16 altendorfme semiosis: last question (promise): do you recommend installing 3.5 because of readdir-ahead?
20:16 semiosis idk about that
20:17 semiosis try it out
20:17 altendorfme semiosis: okay.
20:20 JoeJulian altendorfme: I'm recommending 3.5 now for various bug fixes that haven't been backported.
20:21 altendorfme JoeJulian:
20:21 altendorfme I will test the entire installation project with 3.4, and hopefully upgrade later. Thx!
20:29 coredump joined #gluster
20:39 Debolaz altendorfme: As a sidenote, Amazon EC2 performance with gluster self-healing will be absolutely horrible compared to many other virtual server providers.
20:40 JoeJulian ?
20:40 Debolaz EC2 has a really high latency between nodes.
20:40 JoeJulian semiosis: ^ any thoughts on that statement
20:41 Debolaz That's not a glusterfs problem though, it just affects anything that relies on low latency for performance.
20:41 semiosis heh
20:41 semiosis performance isn't everything
20:41 semiosis so it's not the fastest but it has other advantages
20:41 semiosis s/so/maybe/
20:41 glusterbot What semiosis meant to say was: maybe it's not the fastest but it has other advantages
20:42 JoeJulian I'm surprised to hear that about network latency. I know for a fact that they monitor that to the 10ns.
20:42 semiosis i wouldn't call it "really high latency" but that's a subjective term anyway.  inter-az times are under 1 ms
20:42 Debolaz If it turns a 300 ms webrequest into a 3s webrequest, it is everything. I doubt it's *that* bad on EC2, but it's not pretty. The problem magically goes away with providers like Linode though, where we run large crappy PHP applications without caching without serious performance hiccups on GlusterFS.
20:43 JoeJulian linode++
20:43 glusterbot JoeJulian: linode's karma is now 1
20:43 firemanxbr joined #gluster
20:44 Debolaz Performance was one of the things I was worried about when switching to GlusterFS, there weren't exactly lacking articles detailing how bad some worst case scenarios could be. But we've yet to hit any of those.
20:44 dfLessTdh joined #gluster
20:44 altendorfme Debolaz: The company I work for uses amazon for all applications; my task was to find a cluster solution for institutional sites and some blogs. Even with a latency problem I think it will be enough.
20:44 JoeJulian Have you since written said articles? ;)
20:46 dfLessTdh Hi Joe, I saw you using many 32 bit machines in your post.  Is that enthusiasm for tiny linux living, or just a painful budget?
20:47 Debolaz There are two issues we are struggling a bit with though, neither are performance related. 1) glusterd is horrible at managing processes, it seems to just start crying in the face of race conditions. Though this is the only significant source of work needed to maintain GlusterFS here.
20:48 Debolaz Because 2) Reading glusterfs logs is like looking through a haystack to find a needle. There's tons of info, and GlusterFS makes little to no effort to separate the "hi, i'm telling you ive connected to something now" from the "OH SHIT BIG PROBLEM!" entries.
20:48 dfLessTdh um, a hay nebula, Debolaz
20:49 dfLessTdh and if you think the logs are big, try reading a dump.  That said, I'll take verbose over dead silence anyday.
20:50 Debolaz A problem that has become smaller over time as I've learnt to infer what the various stuff it spews out means. But I'd rather it just had a "problem.log" where it could dump "OH SHIT BIG PROBLEM" log entries.
20:50 dfLessTdh amen there
20:50 dfLessTdh I wish all open source projects used a log facility level system with weighted priorities, and user-definable promote/demote strings
20:51 dfLessTdh maybe they call it the cloud because none of us can see where we are going
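In lieu of the wished-for problem.log, a rough sketch for pulling only the error- and critical-level entries out of the gluster logs; the bracketed severity letter (I/W/E/C) in each line is the thing to key on, and the paths assume the default log location on a server with brick logs (exact format and paths vary by version):

    # show only Error- and Critical-level messages across client and brick logs
    grep -E '\] [EC] \[' /var/log/glusterfs/*.log /var/log/glusterfs/bricks/*.log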
20:51 zerick joined #gluster
20:54 dfLessTdh Hey anyone, I have a problem with my replica 2 brick files.  I lost a brick, and rebuilt it from scratch.
20:54 dfLessTdh After DAYs of full healing, I get 535gb on df, but 585gb on du.  That is mind boggling.  Source brick
20:54 dfLessTdh Source brick shows 585gb, as do all nfs clients.
20:54 dfLessTdh Too scared to fail over... files may be locked or something but this doesn't help:
20:55 dfLessTdh gluster volume clear-locks ops / kind all posix
20:55 dfLessTdh gluster volume clear-locks ops / kind all entry
20:55 dfLessTdh gluster volume clear-locks ops / kind all inode
20:55 dfLessTdh ideas?
20:57 semiosis dfLessTdh: does the df/dh discrepancy appear on the other brick as well?
20:57 dfLessTdh no, only on the new replacement.  I'd calm down, otherwise
20:57 semiosis s/df\/dh/df\/du/
20:58 semiosis or if this were calc class, df/dt
20:58 Pupeno_ joined #gluster
20:58 dfLessTdh semiosis: funny, but hurts too much to laugh.  Any ideas what kind of gluster or posix locks need clearing?  even remounted 99-100% of the nfs clients
21:00 dfLessTdh 300 million files, 12.5 million inodes, 595G on 1tb+ ssd raid10 (good media).  8gb servers with 4 cores plus hyperthreading -- anything imbalanced here?  I ask that since many gluster commands fail until long after full healing
21:01 drewskis1 joined #gluster
21:15 JoeJulian dfLessTdh: couple of possibilities, different block sizes if the new brick is a 4k drive and the source brick is not. I assume you compare sparse vs not sparse. There may be stale files in $brick/.glusterfs/* look for gfid *files* (not symlinks) with only 1 link.
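A sketch of the stale-gfid check JoeJulian describes, assuming the brick lives at /data/brick (hypothetical path). Inside .glusterfs, regular gfid files are supposed to be hard links to real files on the brick (link count of 2 or more), while symlinks there represent directories; a regular file with a link count of 1 therefore flags a likely orphan:

    # list gfid files in .glusterfs that no longer have a matching file on the brick
    find /data/brick/.glusterfs -type f -links 1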
21:19 Pupeno joined #gluster
21:23 coredump joined #gluster
21:30 semiosis JoeJulian: know a fs that does blocks larger than 4k?
21:30 Pupeno_ joined #gluster
21:36 JoeJulian semiosis: I think you can configure both xfs and ext4 to do that. But when you go with defaults, they detect the block size of the drive (or raid stripe) and use that. I know for sure you can override the raid data on xfs.
21:37 semiosis interesting. i thought it only supported 4k on linux
21:37 semiosis will look into that
21:37 dfLessTdh xfs can be modded tons
21:37 dfLessTdh and sometimes the native tools detect something stupid for xfs stripe info
21:37 dfLessTdh http://jeremytinley.wordpress.com/2014/10/20/xfs-and-ext4-testing-redux/
21:38 dfLessTdh we are using ext4
21:38 dfLessTdh 4k drives
21:45 coredump joined #gluster
22:00 dfLessTdh JoeJulian: everything is 4k.  does gluster have a certain affinity for sparse files (or tools for?)
22:02 JoeJulian dfLessTdh: When it's healing, it will skip the empty parts which will create sparse files.
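A quick sketch for checking whether sparseness explains the gap, run against each brick (path hypothetical); plain du counts allocated blocks, --apparent-size counts logical file sizes, and df reports at the filesystem level:

    du -sh /data/brick                    # allocated blocks (holes in sparse files not counted)
    du -sh --apparent-size /data/brick    # logical file sizes
    df -h /data/brick                     # filesystem-level usage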
22:03 dfLessTdh hmmm... when it is done, will it "normal up" or stay skewed?
22:07 dfLessTdh JoeJulian: is my du > df something you've seen much? ever?
22:07 dfLessTdh I've only ever seen the exact opposite
22:09 JoeJulian Yep. There were files on the root of the mount that were not part of the brick.
22:22 dfLessTdh you mean the .glusterfs files?
22:23 JoeJulian maybe
22:25 coredump joined #gluster
22:33 JoeJulian dfLessTdh: Sorry, meant to finish answering when I said "maybe" but I got called away. https://botbot.me/freenode/gluster/2014-10-31/?msg=24616492&page=5
22:33 glusterbot Title: Logs for #gluster | BotBot.me [o__o] (at botbot.me)
22:43 dfLessTdh JoeJulian: maybe maybe..  ok.  hey -- on 3.3.1 will having replica 3 cause more overhead?  Increase read speed for ro files to parallel nfs clients?  (When did gluster introduce quorum?)
22:49 dfLessTdh JoeJulian: hey, thanks for pointing me in (a|the right) direction -- the du with sparse is suddenly coming back with "du: cannot access `./htdocs/topshop/html/img/tmp/userimage1.jpeg': No such file or directory"
22:51 dfLessTdh nice to see something new in this dimly lit problem.  Hoping that in a replica 2, since the file is absent from both bricks, the file was long gone before my event (both servers matched prior to the event, per logs, though I can't speak for this image)
23:06 siel joined #gluster
23:26 plarsen joined #gluster
23:40 Pupeno joined #gluster
23:54 Pupeno_ joined #gluster
