
IRC log for #gluster-dev, 2016-05-12


All times shown according to UTC.

Time Nick Message
00:14 jobewan joined #gluster-dev
00:40 opelhoward_ joined #gluster-dev
00:48 dlambrig_ joined #gluster-dev
01:30 EinstCrazy joined #gluster-dev
02:57 gem joined #gluster-dev
03:21 atinm joined #gluster-dev
03:40 kshlm joined #gluster-dev
03:41 josferna joined #gluster-dev
03:52 itisravi joined #gluster-dev
03:58 pranithk1 joined #gluster-dev
04:03 EinstCrazy joined #gluster-dev
04:11 hgowtham joined #gluster-dev
04:19 poornimag joined #gluster-dev
04:23 shubhendu joined #gluster-dev
04:24 ppai joined #gluster-dev
04:26 gem joined #gluster-dev
04:28 Saravanakmr joined #gluster-dev
04:33 hgowtham joined #gluster-dev
04:33 pranithk1 joined #gluster-dev
04:35 prasanth joined #gluster-dev
04:38 sakshi joined #gluster-dev
04:56 raghug joined #gluster-dev
05:00 hgowtham joined #gluster-dev
05:01 mchangir joined #gluster-dev
05:07 ndarshan joined #gluster-dev
05:07 nbalacha joined #gluster-dev
05:08 skoduri joined #gluster-dev
05:12 rastar joined #gluster-dev
05:15 rafi joined #gluster-dev
05:16 pkalever joined #gluster-dev
05:17 aspandey joined #gluster-dev
05:23 penguinRaider joined #gluster-dev
05:23 Manikandan joined #gluster-dev
05:25 kdhananjay joined #gluster-dev
05:26 overclk joined #gluster-dev
05:27 itisravi joined #gluster-dev
05:28 nishanth joined #gluster-dev
05:29 aravindavk joined #gluster-dev
05:34 hchiramm joined #gluster-dev
05:35 rafi1 joined #gluster-dev
05:36 jiffin joined #gluster-dev
05:39 Apeksha joined #gluster-dev
05:48 rafi joined #gluster-dev
05:49 vimal joined #gluster-dev
05:50 karthik___ joined #gluster-dev
06:00 jiffin1 joined #gluster-dev
06:13 ppai joined #gluster-dev
06:22 atalur joined #gluster-dev
06:23 pkalever1 joined #gluster-dev
06:23 pkalever1 left #gluster-dev
06:29 sabansal_ joined #gluster-dev
06:30 kotreshhr joined #gluster-dev
06:36 spalai joined #gluster-dev
06:41 k4n0 joined #gluster-dev
06:51 gem joined #gluster-dev
06:56 raghug joined #gluster-dev
06:57 jiffin1 joined #gluster-dev
06:59 mchangir joined #gluster-dev
07:01 foster joined #gluster-dev
07:04 pranithk1 joined #gluster-dev
07:04 atinm joined #gluster-dev
07:12 itisravi joined #gluster-dev
07:12 ndevos post-factum++ thanks for the mail on the releases :)
07:12 glusterbot ndevos: post-factum's karma is now 12
07:15 ndevos post-factum: also, dont forget to submit a talk on http://events.linuxfoundation.org/events/linuxcon-europe/program/cfp - I and others are happy to review your topic/abstract
07:16 spalai joined #gluster-dev
07:27 mchangir joined #gluster-dev
07:33 vimal joined #gluster-dev
07:33 rafi josferna++
07:33 glusterbot rafi: josferna's karma is now 4
07:42 atinm joined #gluster-dev
08:21 pranithk1 joined #gluster-dev
08:25 obnox ndevos: you pinged me yesterday in the community meeting... all our samba people are currently at the sambaXP conference ... and it was roughly when I was giving my talk... so I could not reply :-)
08:27 raghug joined #gluster-dev
08:28 obnox ndevos: also network routes here collide with vpn, so if anything, community works best to reach me until next week ;-)
08:31 karthik___ joined #gluster-dev
08:34 dlambrig_ joined #gluster-dev
08:38 aravindavk joined #gluster-dev
08:39 asengupt joined #gluster-dev
08:40 jiffin joined #gluster-dev
08:40 mchangir ndevos, please review my query at http://review.gluster.org/#/c/14294/
08:41 ndevos obnox: no problem, we just want to make sure that someone from the Samba guys working with Gluster attend the weekly meeting, we want news about Gluster+Samba tasks :)
08:42 ndevos obnox: jdarcy has the AI to contact you/ira about that
08:42 ndevos mchangir: sure, lets see
08:45 pranithk1 atalur: you will like http://review.gluster.org/14302
08:46 atalur pranithk1, Checking
08:46 hchiramm joined #gluster-dev
08:49 atalur pranithk1, awesome! Gave +1.
08:49 rraja joined #gluster-dev
08:50 atalur pranithk1, was this the bug kdhananjay was talking about earlier? heal info shows entries when i/o is going on?
08:51 pranithk1 atalur: yeah, it shows the entries only on 2nd and 3rd bricks :-)
08:51 pranithk1 atalur: It started happening because we recently made an optimization to give preference to local-brick
08:51 opelhoward_ joined #gluster-dev
08:52 pranithk1 atalur: I mean in selecting source
08:52 jiffin joined #gluster-dev
08:52 atalur pranithk1, hmm.. but it was a bug anyway :)
08:53 pranithk1 atalur: yeah, I am giving you background info as to why it was not seen till now :-)
08:53 atalur pranithk1, :)
08:58 kdhananjay pranithk1: you don't think the complexity in afr code is increasing? :) seemingly simple patches in one place are breaking functionality in other places. About time we had strict guidelines on what all values @ret can take (perhaps to keep things simple, only -1 and 0 should be allowed, everything else be passed by reference to the callee function).
08:58 atinm joined #gluster-dev
08:58 kdhananjay pranithk|afk I mean, ^^^
09:03 ppai joined #gluster-dev
09:13 pranithk kdhananjay: I think the problem here was variable naming. So I changed it too
09:15 pranithk kdhananjay: One thing we need to simplify is code duplication in read-children code
09:22 mchangir ndevos, patch corrected ... please review comment response and patch at http://review.gluster.org/#/c/14294/
09:42 mchangir joined #gluster-dev
09:44 pranithk1 joined #gluster-dev
09:47 ndevos mchangir: done, looks good to me - once you run a local test, you can mark it Verified+1 and the regressions will get started
09:48 ndevos *regression tests
09:49 ndevos mchangir: could you send a pull request for the glusterfind+bareos details? just edit https://github.com/gluster/glusterweb/edit/master/source/community/roadmap/3.8/index.md and pass me the url of the updated file
09:53 kotreshhr joined #gluster-dev
10:03 atalur joined #gluster-dev
10:04 ppai joined #gluster-dev
10:14 cholcombe joined #gluster-dev
10:18 kshlm joined #gluster-dev
10:27 pranithk1 joined #gluster-dev
10:31 EinstCrazy joined #gluster-dev
10:31 bfoster joined #gluster-dev
10:35 atinm joined #gluster-dev
10:39 kotreshhr joined #gluster-dev
10:48 prasanth_ joined #gluster-dev
11:06 atalur joined #gluster-dev
11:10 Saravanakmr joined #gluster-dev
11:16 raghug joined #gluster-dev
11:17 pur joined #gluster-dev
11:21 mchangir joined #gluster-dev
11:33 mchangir has anything changed for CentOS regressions ... they seem to be running quicker
11:35 misc do not think so
11:35 misc which one seems to be faster ?
11:39 mchangir https://build.gluster.org/job/rackspace-regression-2GB-triggered/20661/console    ... maybe my inference is wrong
11:41 itisravi joined #gluster-dev
11:52 shubhendu joined #gluster-dev
11:59 rastar mchangir: what tells you that it is quicker?
12:04 kkeithley_ joined #gluster-dev
12:05 raghug joined #gluster-dev
12:05 mchangir rastar, I was wondering about https://build.gluster.org/job/rackspace-regression-2GB-triggered/20661
12:09 spalai joined #gluster-dev
12:10 post-factum guys and girls, is that possible to set oom_score_adj for client process via mount options?
12:10 post-factum it seems mount.glusterfs script does not support that, but it could be useful
12:11 rastar mchangir: got the link, but nothing tells me that it is faster
12:12 post-factum ndevos: ^^ ?
12:12 rastar it is still running tests/basic tests after 1.5 hours
12:12 rastar mchangir: ^^
12:26 mchangir rastar, hehe ... I must've been half awake when I saw it
12:27 aravindavk joined #gluster-dev
12:28 rafi dlambrig_++
12:28 glusterbot rafi: dlambrig_'s karma is now 2
12:36 pranithk1 joined #gluster-dev
12:42 karthik___ joined #gluster-dev
12:48 spalai left #gluster-dev
12:58 ndevos post-factum: I dont think we have an option to set it, but we could add it... file a bug with a description of your use-case, and if possible send a patch :)
12:59 kotreshhr left #gluster-dev
13:01 shyam joined #gluster-dev
13:01 jiffin joined #gluster-dev
13:09 post-factum ndevos: ok :)
13:12 post-factum ndevos: one more question
13:12 post-factum what is the difference between xlators/mount/fuse/utils/mount_glusterfs.in and xlators/mount/fuse/utils/mount.glusterfs.in?
13:13 nbalacha joined #gluster-dev
13:16 ndevos post-factum: one of them is used on Mac OSX, I think
13:16 ndevos post-factum: the Makefile.am in that directory should be clear on that
13:17 post-factum ndevos: some uncertainty feel I
13:17 misc mhh, but we have builder for osx somewhere ?
13:17 misc or we count on free/netbsd to catch issues ?
13:17 ndevos misc: jclift had one, not sure what happened with it
13:17 post-factum oh, i see, mount.glusterfs is for Linux, mount_glusterfs is for everything else
13:18 post-factum dunno if everything else have oom adjusts
13:18 post-factum so i need linux one
13:18 ndevos yeah, keep it simple :)
13:19 ndevos kkeithley_: do you know more about building the FUSE client on Mac OSX?
13:19 misc I guess we could put a minimac in the DC, once we get the ip
13:20 ndevos if there is someone that cares enough about Mac OSX, and would want to fix issues that get found
13:21 misc I am not sure there is a lot more issues than on *bsd
13:21 misc most stuff I had to deal with were simple .h issue, etc
13:21 ndevos no idea, I'm not betting on anything
13:21 misc but it was also for full userspace stuff like telepathy
13:24 spalai joined #gluster-dev
13:24 post-factum ndevos: and one more. where could i find entry point (main()) for glusterfs executable?
13:27 kkeithley_ ndevos: I haven't tried building on Mac OS X in a long time.
13:28 kkeithley_ we already have a mac mini in the lab here in Westford.
13:29 gem joined #gluster-dev
13:29 kkeithley_ there are patches in gerrit – stagnating – for Mac OS X.
13:35 ndarshan joined #gluster-dev
13:37 kkeithley_ I'd like to get a VM in the DC with a public IP. How close are we to being able to do that?  I want to do things like run coverity and clang compiles and post the results (without having to scp files to download.gluster.org)
13:37 kkeithley_ misc: ^^^
13:38 ndevos post-factum: iirc main() is under the glusterfsd/ directory
13:38 ndevos post-factum: easiest is to 'git grep' for one of the parameters that get listed in the 'glusterfs --help' output
13:38 post-factum i believe oom_score_adj should be adjusted from inside glusterfs process (via /proc/self/oom_score_adj)
13:39 ndevos post-factum: yes, I agree with that
13:39 kkeithley_ glusterfs's and glusterfsd's main() is in …/glusterfsd/src/glusterfsd.c
13:39 post-factum hm, i thought those are separate entities
13:39 post-factum okay
13:40 kkeithley_ glusterfs is a symlink to glusterfsd.  Same executable, different vol files
13:40 ndevos post-factum: I think the option could be considered valid for any glusterfs(d) executable in any case
13:41 post-factum agree
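The adjustment post-factum and ndevos agree on — the process writing to /proc/self/oom_score_adj itself — could look roughly like the sketch below. This is an illustration only, not the actual patch; the helper name is made up, and the real change would land in the glusterfsd C code rather than Python.

```python
def set_oom_score_adj(score):
    """Write an OOM score adjustment for the current process.

    Valid range is -1000..1000. Lowering the score below its current
    value requires CAP_SYS_RESOURCE; raising it is allowed for any
    process, which is why this may fail for negative values when
    running unprivileged.
    """
    if not -1000 <= score <= 1000:
        raise ValueError("oom_score_adj must be in [-1000, 1000]")
    try:
        with open("/proc/self/oom_score_adj", "w") as f:
            f.write(str(score))
        return True
    except OSError:
        # Non-Linux (no /proc file) or insufficient privileges.
        return False
```

A mount-option plumbing patch would then just parse something like a hypothetical `-o oom-score-adj=-500` and call the equivalent of this early in process startup.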
13:45 misc kkeithley_: for now, I am working on bringing back the existing VM, but I can install another one if needed
13:45 misc kkeithley_: for a public ip, it is around 1 week to get it, based on previous tickets
13:47 misc kkeithley_: what OS do you want it to be ?
13:49 kkeithley_ Fedora 23 or 24 is fine
13:50 kkeithley_ hostname something like analysis.gluster.org, or ???
13:52 kkeithley_ it will need a good amount of disk. coverity results are big.
13:52 mchangir joined #gluster-dev
13:54 ndevos kkeithley_: could that maybe easily run in the CentOS CI?
13:55 misc define "good amount of disk" too, are we speaking in the order of 100G, 200G ?
13:55 kkeithley_ probably
13:55 ndevos kkeithley_: that would still require to copy the results somewhere, artifacts.ci.centos.org can be an option for that
13:56 kkeithley_ The VM I use now has ~50G, with about 8G free ATM. I only keep 10 days worth
14:02 kdhananjay joined #gluster-dev
14:07 * kkeithley_ has a slight preference for the results to be on $something.gluster.org.  Having it all on one machine that doesn't require lots of scp and ssh is a plus.
14:08 misc kkeithley_: so, just to understand, the VM will be running the jobs, or will be serving the result ?
14:08 kkeithley_ both
14:08 misc and coverty or clang ?
14:08 misc (IIRC, coverty need a license or something)
14:11 kkeithley_ yes, coverity needs a license. clang, AMD and Intel compiles, valgrind, etc.  Maybe I wouldn't actually run coverity there because of the license. But eliminating the scp/ssh to download.gluster.org is a goal.
14:12 misc and how would things be triggered ?
14:12 kkeithley_ cron jobs
14:13 kkeithley_ just like I do now.
14:13 kkeithley_ no connection to jenkins or gerrit is required.
14:15 misc mhh, and in term of CPU, how much are we speaking of ?
14:17 ndevos kkeithley_: instead of a cronjob, it would be nicer to use Jenkins as a scheduler. I'm planning to move the "bug status email" job there too
14:18 misc it is however simpler to use cron for now
14:18 misc if we do the VM on formicary, I am not sure I will not hit firewall issue :/
14:20 ndevos of course cron is easier, that is also why we use it for things :)
14:21 ndevos but it would be much nicer to see the different jobs that we have in Jenkins, and allow others to modify them
14:21 ndevos or monitor, at the very least
14:21 misc ndevos: that too
14:21 misc but this can be switched later is more what I wanted to convey
14:21 misc but I have soon a meeting
14:22 misc guess i can use the time to see how I can install a F23
14:23 misc you are lucky, the meeting got cancelled
14:23 ndevos right, we're all about improving things step by step, its great to see progress :)
14:23 misc nigelb: so, FYI i am gonna resume working on a ansible role for deploying VM, not sure if you did something
14:24 misc ndevos: meeting being cancelled is "step by step progress" :p ?
14:24 nigelb misc: I haven't gotten a chance yet, no.
14:24 wushudoin joined #gluster-dev
14:25 kshlm misc, Is the ansible role ready to deploy netbsd?
14:26 misc kshlm: which one ?
14:26 kshlm The jenkins_builder one.
14:26 misc didn't tested yet
14:26 kshlm So it just works for centos now then?
14:26 misc and freebsd
14:27 kshlm centos6 in particular.
14:27 kshlm Ah okay.
14:27 misc and fedora, afaik, but the fedora host was busted
14:27 misc (hence me working on a role to create VM and then test)
14:27 kshlm I tried it on a netbsd7 vm and it didn't work.
14:27 misc kshlm: mhh, interesting, can you give me the log ?
14:28 misc (cause there is a ton of linuxism all around :/)
14:28 misc in fact, the freebsd run didn't finish due to the patch issue I sent a email before
14:28 misc so maybe the role has issue after the git clone
14:29 kshlm It failed instantly, on nginx task.
14:29 misc good, so that's easy to verify :p
14:29 kshlm Also, ansible doesn't detect netbsd package manager yet.
14:30 misc great :/
14:30 misc ok so that, I can fix too, but more long term
14:30 kshlm I had to hack in one line into the facts module
14:30 pranithk1 joined #gluster-dev
14:30 misc kshlm: what version of ansible ?
14:31 kshlm 2.0.2
14:31 misc netbsd 6 or 7 ?
14:31 kshlm https://gist.github.com/kshlm/51f2833f08c8360233d2f678a04f5576 the log is here if you need.
14:31 misc (only have a 6 ready )
14:31 kshlm 7
14:31 misc oh, that
14:32 misc yeah, this one is "easy" to fix
14:32 kshlm I ran the playbook from a Archlinux master.
14:32 kshlm The roles worked well for centos machines.
14:32 misc https://github.com/gluster/gluster.org_ansible_configuration/blob/master/roles/nginx/vars/main.yml
14:32 misc the netbsd need to be added here
14:32 misc (once we know the right value)
14:32 kshlm Ah, I'll try that.
14:33 itisravi joined #gluster-dev
14:33 misc and so, the package manager is pkgin ?
14:33 kshlm Yup.
14:33 kshlm Should be available by default with netbsd7
14:34 kshlm Its available under /usr/pkg/bin/pkgin
14:34 kshlm THe role worked well CentOS 7 as well.
14:35 mchangir joined #gluster-dev
14:35 kshlm I'll test how much further I can get with netbsd.
14:35 misc so obviously, not having python by default is a issue :/
14:36 kshlm And report back later.
14:36 kshlm misc, I have a tiny playbook for that.
14:37 kshlm https://gist.github.com/kshlm/e9974acee3e0c1902271405fc9449e7e
14:37 kshlm Catch ya later.
14:37 misc kshlm: thanks
14:37 * misc was looking for python and python2 package
14:38 kshlm You'll need to set ansible_python_interpreter to /usr/pkg/bin/python2.7 after that.
14:38 kshlm Or before doesn't matter.
14:39 kshlm Now I'm leaving for real.
14:39 misc yeah, that's like we do for freebsd
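kshlm's bootstrap approach (install Python via the `raw` module, which needs no Python on the target, then point Ansible at the pkgsrc interpreter) could be sketched as below. The host group name and the `python27` package name are assumptions; the interpreter path is the one kshlm gave.

```yaml
# Hypothetical bootstrap play for NetBSD 7 builders.
- hosts: netbsd_builders
  gather_facts: no        # fact gathering itself needs Python on the target
  tasks:
    - name: install Python with pkgin (raw runs over plain ssh)
      raw: /usr/pkg/bin/pkgin -y install python27

    - name: use the pkgsrc Python for all later modules
      set_fact:
        ansible_python_interpreter: /usr/pkg/bin/python2.7
```

This mirrors the pattern misc mentions already being used for FreeBSD; once ansible's fact module detects pkgin natively, the `raw` step is the only part that remains necessary.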
14:39 rafi1 joined #gluster-dev
14:39 misc (but I am still just trying to see if ansible HEAD detect pkgin or not)
14:45 atalur joined #gluster-dev
14:58 shyam joined #gluster-dev
14:59 pranithk1 joined #gluster-dev
14:59 rafi joined #gluster-dev
15:01 atinm joined #gluster-dev
15:02 kkeithley1 joined #gluster-dev
15:06 kkeithley2 joined #gluster-dev
15:10 pranithk1 does anyone know about why https://build.gluster.org/job/rackspace-regression-2GB-triggered/20657/console kind of failures are happening?
15:11 kkeithley2 ndevos: it could be triggered by jenkins/gerrit. But, actually there's a lot of buts. Coverity runs only take about 10 minutes. But Coverity runs that report the same 1000 defects on every commit, or every Verify+1, don't seem useful if nobody is ever going to fix any of them.
15:12 kkeithley2 As it is, I'm not sure what the value of nightly coverity runs is, given that almost nothing is being done to address the issues it finds.
15:13 pranithk1 ndevos: https://build.gluster.org/job/rackspace-regression-2GB-triggered/20657/console
15:14 kkeithley_ Prasanna has a patch to add clang static analysis. Actually, clang analyze reports a lot of things  that aren't bugs. Just a clang compile will find real bugs (and a few non-bugs)
15:14 spalai joined #gluster-dev
15:15 kkeithley_ The value of being hooked into gerrit is if we can rely on, e.g. a clean clang compile to be worth a +1. But
15:15 kkeithley_ But we can't rely on that.
15:16 kkeithley_ I'm happy, for now, to just have daily cron jobs. Later on we can revisit hooking into gerrit and jenkins
15:29 misc pranithk1: this is being discussed on gluster-devel
15:29 misc pranithk1: I do not know more
15:30 pranithk1 misc: thanks misc :-)
15:30 misc I think this was fixed in the past
15:36 ndevos kkeithley_: oh, but the idea is to just use Jenkins for schedulding weekly/daily jobs, just like cron does, not based on a Gerrit trigger
15:37 * ndevos leaves for the day, bye!
15:38 kkeithley_ okay. I didn't know that was a use case for jenkins. Seems like it adds a teeny bit of complexity relative to cron, but doesn't seem like a big deal
15:53 skoduri joined #gluster-dev
15:59 Manikandan joined #gluster-dev
16:02 dlambrig_ joined #gluster-dev
16:04 ashiq joined #gluster-dev
16:10 mchangir kkeithley_, could you peek into https://build.gluster.org/job/rackspace-regression-2GB-triggered/20661/   and see why it's taking so long for the CentOS regressions to complete
16:24 pur joined #gluster-dev
16:31 dlambrig_ joined #gluster-dev
17:06 hchiramm joined #gluster-dev
17:10 shubhendu joined #gluster-dev
17:17 kkeithley_ mchangir: I don't have access to any of the jenkins slaves
17:17 kkeithley_ misc: ^^^
17:20 kkeithley_ based on the console log timestamps it seems to be running, maybe just really slowly.
17:21 kkeithley_ s/running/making progress/
17:23 mchangir kkeithley_, hmm ... really painfully slow
17:23 kkeithley_ yeah ;-/
17:23 kkeithley_ :-/
17:24 mchangir kkeithley_, I wanted ndevos or you to do the honors of merging it today so that I could get the downstream patch merged as well
17:27 rafi joined #gluster-dev
17:27 mchangir kkeithley_, are those systems physical machines or virtual machines?
17:27 kkeithley_ I merged the last one. Go ahead and submit a new one and I'll merge it.
17:27 kkeithley_ IIRC they're VMs
17:29 kkeithley_ I merged the last one. Go ahead and submit the hopefully final one downstream and I'll merge it.
17:29 kkeithley_ If you want
17:30 mchangir did you just merge the downstream patch?
17:31 mchangir kkeithley_, did you just merge the downstream patch?
17:31 kkeithley_ yesterday
17:32 kkeithley_ before ndevos' comment
17:32 kkeithley_ before ndevos' comment on the upstream patch
17:32 dlambrig_ joined #gluster-dev
17:37 misc kkeithley_: yeah, i also did see that the jenkins builder seems to have become a lot slower
17:37 mchangir kkeithley_, ouch ... that needed to incorporate ndevos suggestions ... and some downstream cleanup for glusterfs-devel package upgradeability
17:38 misc mchangir: all builders are VM, most running in rackspace cloud
17:38 kkeithley_ misc: yeah, over seven hours so far
17:38 misc kkeithley_: crap
17:39 * misc look
17:39 kkeithley_ mchangir: my optimism was too optimistic I guess. ;-)
17:39 misc so what is taking time is lvcreate
17:39 misc and running strace, just allocating memory
17:40 mchangir kkeithley_, then there's another patch to get the %post(un) {libs|api} -p /sbin/ldconfig    issue out of the way by undoing the "-p /sbin/ldconfig" optimization
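Undoing the `-p /sbin/ldconfig` shortcut mchangir describes means expanding the scriptlet one-liners back into full shell bodies, roughly as below. The sub-package names follow his `{libs|api}` shorthand; the exact spec-file change may differ.

```
# Before: scriptlet runs ldconfig directly as the interpreter.
#   %post libs -p /sbin/ldconfig
# After: a regular shell scriptlet, which other commands can share.
%post libs
/sbin/ldconfig

%postun libs
/sbin/ldconfig

%post api
/sbin/ldconfig

%postun api
/sbin/ldconfig
```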
17:40 misc I suspect that /etc/lvm/archive/ is full or starting to hit some limit
17:41 misc # ls /etc/lvm/archive/ |wc -l
17:41 misc 202293
17:43 kkeithley_ mchangir: this isn't the right channel for downstream....  But can you submit a patch or patches for those changes, and I'll merge them. For the deadline tomorrow.
17:44 kkeithley_ misc: should we have an lvm expert take a look and see if we're doing something suboptimal?
17:44 ashiq joined #gluster-dev
17:44 kkeithley_ either in the tests or in our VM config?
17:45 misc kkeithley_: there was a mail already on this
17:45 misc let me continue the discussion on the list, since I am about to go for the evening
17:45 kkeithley_ okay.
17:45 kkeithley_ sure
17:49 misc kkeithley_: also, is jeff darcy in the office (and are you), and can he tell us what is the value to change in lvm.conf, cf his mail 1 month and half ago ?
17:49 misc (I would have used a drone to poke him, but I do not have one, so ...)
17:49 spalai joined #gluster-dev
17:56 rafi1 joined #gluster-dev
18:00 kkeithley_ misc: I'm not in the office today, and jdarcy is very rarely in the office.
18:03 kkeithley_ carp. 7.5 hours to have it get a spurious failure.
18:04 mchangir damn
18:05 mchangir talk about due process
18:06 mchangir can we please convert the VMs into physical systems
18:07 mchangir this unpredictability is just nerve wracking
18:41 mchangir kkeithley_, can I do the /sbin/ldconfig changes to upstream as well?
18:50 rraja joined #gluster-dev
19:13 kkeithley_ mchangir: yes please
19:13 hagarth joined #gluster-dev
20:01 misc kkeithley_: that's ok, guess I need to see if I can use some "uber to ping people in real life"
20:09 hagarth joined #gluster-dev
20:18 kkeithley_ I'm not convinced we need to convert the VMs to bare metal. Bumping the activation/reserved_memory setting sounds promising
20:20 misc I kinda suspect cleaning the archive would do the trick, but I need to read about that before breaking everything, so this will wait tomorow at best
21:01 kkeithley_ Maybe ask Heinz Mauelshagen (lvmguy)!
21:15 kkeithley_ off hand though I'd say 200K+ files in /etc/lvm/archive is probably not doing anything good for us.  Does it get cleared out on a reboot?
21:19 dlambrig_ joined #gluster-dev
21:19 misc nope
21:20 misc far from it
21:20 misc just ls take time, so I can imagine that adding a file there also take time
21:20 misc hence slow tests
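Growth of /etc/lvm/archive is bounded by settings in the `backup` section of lvm.conf; `retain_min` and `retain_days` are real knobs there, though the values below are only examples, not a tested recommendation for the builders.

```
# lvm.conf fragment (backup section controls the metadata archive)
backup {
    archive = 1
    archive_dir = "/etc/lvm/archive"
    retain_min = 10       # always keep at least this many archive files
    retain_days = 30      # archives older than this may be pruned
}
```

With 200K+ files already present, a one-time manual cleanup of old archive files would still be needed; the retain settings only prevent the directory from filling up again.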
21:35 csaba joined #gluster-dev
21:36 lkoranda joined #gluster-dev
22:29 pranithk1 joined #gluster-dev
22:37 shyam joined #gluster-dev
