IRC log for #gluster, 2014-03-31


All times shown according to UTC.

Time Nick Message
00:19 Ponyo joined #gluster
00:19 Ponyo Is the native gluster client multi threaded?
00:45 yinyin joined #gluster
00:50 ryand joined #gluster
00:52 harish_ joined #gluster
00:55 jag3773 joined #gluster
00:56 cyberbootje joined #gluster
00:57 cyberbootje joined #gluster
01:24 ilbot3 joined #gluster
01:24 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:29 glusterbot New news from resolvedglusterbugs: [Bug 767545] [7235e5b1af090ffc9d87ac59daadf7926433b495] gluster volume statedump volname not working <https://bugzilla.redhat.com/show_bug.cgi?id=767545> || [Bug 769774] [7eed1d5ba51b65e865f79a392aff70048c702bf0] improve gluster volume heal vol cli outputs <https://bugzilla.redhat.com/show_bug.cgi?id=769774> || [Bug 771313] [29b9b0c72809456b1ba334a40adc1ed9929eca91]: brick crashed in posi
01:31 glusterbot New news from newglusterbugs: [Bug 882127] The python binary should be able to be overridden in gsyncd <https://bugzilla.redhat.com/show_bug.cgi?id=882127> || [Bug 908518] [FEAT] There is no ability to check peer status from the fuse client (or maybe I don't know how to do it) <https://bugzilla.redhat.com/show_bug.cgi?id=908518> || [Bug 915996] [FEAT] Cascading Geo-Replication Weighted Routes <https://bugzilla.redhat.com/s
01:33 gdubreui joined #gluster
01:35 swat30 joined #gluster
01:51 harish_ joined #gluster
02:00 tokik joined #gluster
02:01 glusterbot New news from newglusterbugs: [Bug 947774] [FEAT] Display additional information when geo-replication status command is executed <https://bugzilla.redhat.com/show_bug.cgi?id=947774> || [Bug 1058569] Openstack glance image corruption after remove-brick/rebalance on the Gluster nodes <https://bugzilla.redhat.com/show_bug.cgi?id=1058569> || [Bug 1005786] NFSv4 support <https://bugzilla.redhat.com/show_bug.cgi?id=1005786> ||
02:26 hagarth joined #gluster
03:01 nightwalk joined #gluster
03:08 kseifried joined #gluster
03:17 haomaiwa_ joined #gluster
03:25 mattappe_ joined #gluster
03:56 haomai___ joined #gluster
04:37 kseifried hey for gluster is xfs still the recommended file system?
04:38 Ponyo I'm not sure but I've used it successfully on xfs and zfs now
04:38 Ponyo :)
04:44 kseifried yeah I've used both, xfs seems to be the recommended way
04:44 kseifried never had problems either way
05:00 Ponyo i suggest noatime for xfs
05:00 Ponyo it seemed to make a noticeable difference for me
05:07 kseifried despite http://xfs.org/index.php/XFS_FAQ#Q:_Is_using_noatime_or.2Fand_nodiratime_at_mount_time_giving_any_performance_benefits_in_xfs_.28or_not_using_them_performance_decrease.29.3F
05:07 glusterbot Title: XFS FAQ - XFS.org (at xfs.org)
05:07 kseifried seems like it shouldn't make a significant hit
05:08 Ponyo it seems for me
05:08 kseifried main thing is I'm using local ephemeral storage instead of EBS which should help a lot (latency, latency latency!)
05:08 Ponyo every time i use it, from old platters, to usb media, to iscsi media, it makes a speed gain
05:08 kseifried Ponyo : like.. how much? 1%? 10%?
05:08 Ponyo i dunno i haven't measured in a while
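The noatime tweak being discussed is just a mount option on the brick filesystem. A minimal /etc/fstab sketch follows; the device and mount point are illustrative, not taken from this conversation:

    # mount the XFS brick with atime updates disabled (hypothetical device and path)
    /dev/sdb1  /export/brick1  xfs  noatime,nodiratime,inode64  0 0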
05:12 kseifried hmm I guess I could bonnie it and see
05:14 harish_ joined #gluster
05:16 benjamin_____ joined #gluster
05:20 kseifried wow. never seen bonnie take this long.
05:55 kseifried holy  heck.. still running
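A bonnie++ run along the lines kseifried describes might look like the sketch below; the mount point, data size and user are assumptions, and a large data set is exactly why the run takes so long over a network filesystem:

    # sequential write/rewrite/read benchmark against the gluster mount
    # -s is in MiB and should be at least 2x RAM so the page cache can't absorb it
    # -n 0 skips the small-file creation tests
    bonnie++ -d /mnt/glustervol/bench -s 16384 -n 0 -u root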
06:05 rahulcs joined #gluster
06:37 Philambdo joined #gluster
06:38 jtux joined #gluster
06:40 ekuric joined #gluster
06:47 keytab joined #gluster
06:51 ricky-ti1 joined #gluster
06:54 rgustafs joined #gluster
07:05 vimal joined #gluster
07:11 fidevo joined #gluster
07:14 ctria joined #gluster
07:14 eseyman joined #gluster
07:16 andreask joined #gluster
07:23 masterzen joined #gluster
07:27 hybrid512 joined #gluster
07:27 hybrid512 joined #gluster
07:37 Copez someone who could help me out with performance issues?
07:39 Alex Generally if you ask, someone will try :)
07:39 Ponyo did you try to turn it off and turn it back on again? :)
07:40 warci joined #gluster
07:44 fsimonce joined #gluster
08:00 DV joined #gluster
08:00 d-fence joined #gluster
08:15 wgao joined #gluster
08:16 X3NQ joined #gluster
08:16 badone joined #gluster
08:18 d-fence joined #gluster
08:21 qdk joined #gluster
08:23 Alex Well that was a fun chat
08:23 Copez Hello ALex
08:23 Copez Sorry was away
08:24 Copez Have a 3x Dell R515, 6cores, 32 GB, 8x 3TB in RAID10 based on ZFS
08:24 Copez Testing local on disk i'm getting 1,3 GByte/s
08:25 Ponyo Want my address? I'll happily take that thing off your hands, it must be reaaaal heavy :)
08:25 Copez But the speed over Gluster won't hit over the 230 MB/s (3x replica)
08:25 Ponyo how are they interconnected?
08:26 Copez With 10GBe
08:26 Copez The strange thing is that I have 2 VM-hosts
08:26 Copez connected with 10Gbe to the Storage-nodes
08:26 Copez over the GlusterFS protocol / no NFS!
08:27 Copez When I start a DD-test on the VM-host, I'm seeing 1.3 Gbit/s pushed to each storage node
08:27 Copez When I simultaneously start a DD-test on the other VM-host, I'm also seeing 1.3 Gbit/s pushed to each storage node
08:28 Copez When I check one storage-node I'm seeing 2x 1.3 Gbit/s as incoming traffic
08:28 Copez But when I test with DD on one VM-host, I'm not getting above the 1.3 Gbit/s
08:29 Copez Of course, the VM-host has to send 1.3 Gbit/s to each of the 3 storage nodes, but 3x 1.3 is only about 4 Gbit/s
08:30 Copez So I should easily hit 3.3 Gbit/s from 1 VM-host
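For context, a single-stream test like the one described is typically run with something along these lines (the path and sizes are illustrative); conv=fdatasync matters so cached writes don't inflate the number:

    # one writer thread against the gluster mount on the VM host
    dd if=/dev/zero of=/mnt/glustervol/ddtest.bin bs=1M count=8192 conv=fdatasync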
08:43 jbustos_ joined #gluster
08:43 sahina joined #gluster
08:47 ravindran1 joined #gluster
08:47 ravindran1 joined #gluster
08:55 andreask joined #gluster
08:57 monotek joined #gluster
08:57 yosafbridge joined #gluster
09:10 ninkotech joined #gluster
09:11 Copez Nobody?
09:11 Pavid7 joined #gluster
09:13 YazzY joined #gluster
09:13 haomaiwang joined #gluster
09:32 haomai___ joined #gluster
10:00 Norky joined #gluster
10:06 ProT-0-TypE joined #gluster
10:15 druuhl joined #gluster
10:21 calum_ joined #gluster
10:30 druuhl Hi, how to set up user/pass auth for each volume under debian/Gluster 3.2.7? I have this information -> https://github.com/gluster/glusterfs/blob/master/doc/authentication.txt . But I don't understand the config entries, specifically the protocol/server and protocol/client ones
10:30 glusterbot Title: glusterfs/doc/authentication.txt at master · gluster/glusterfs · GitHub (at github.com)
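The username/password scheme in that document is configured in the protocol/server and protocol/client volume files, but the more commonly used volume-level restriction is IP-based and can be set straight from the CLI. A sketch, with the volume name and network being assumptions:

    # allow only clients from this subnet to mount the volume
    gluster volume set myvol auth.allow 192.168.10.*
    # optionally reject a specific host
    gluster volume set myvol auth.reject 192.168.10.99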
10:47 sputnik1_ joined #gluster
10:56 sprachgenerator joined #gluster
11:08 keytab joined #gluster
11:11 shyam joined #gluster
11:25 diegows joined #gluster
11:30 m0zes joined #gluster
11:33 glusterbot New news from resolvedglusterbugs: [Bug 1017868] Gluster website needs better instructions for mailing list page <https://bugzilla.redhat.com/show_bug.cgi?id=1017868>
11:49 calston joined #gluster
11:49 calston where do I complain at people about bugs?
11:50 Norky on bugzilla?
11:51 calston my god, it's like falling into 1995, and drowning in horrible
11:51 dbruhn joined #gluster
11:58 hagarth joined #gluster
12:00 nightwalk joined #gluster
12:03 hagarth joined #gluster
12:06 jbustos_ joined #gluster
12:14 hagarth1 joined #gluster
12:17 benjamin_____ joined #gluster
12:17 ctria joined #gluster
12:25 japuzzo joined #gluster
12:32 Norky calston, that's not a terribly useful criticism :)
12:35 harish_ joined #gluster
12:36 shyam joined #gluster
12:40 jag3773 joined #gluster
12:42 calston Norky: "Get a nicer bug tracker" sounds like a useful criticism
12:43 calston it made me not bother to report the 3 bugs I just found
12:44 Norky are you running the 'community' version of Gluster or commercial Red Hat Storage?
12:45 dewey joined #gluster
12:47 swat30 joined #gluster
13:00 lmickh joined #gluster
13:00 Pavid7 joined #gluster
13:01 jag3773 joined #gluster
13:02 dbruhn joined #gluster
13:06 dewey quick question:  How do I rotate the gluster *client* logs (I.e. the system that has mounted GlusterFS)?
13:08 lalatenduM joined #gluster
13:09 dbruhn dewey most guys use the copy truncate option in log rotate
13:11 ctria joined #gluster
13:11 dewey Will the glusterfs process keep the file open?  Do I need to send it any kind of signal?
13:11 dbruhn Here is an example of how I have mine configured.
13:11 dbruhn http://fpaste.org/90207/62714841/
13:11 glusterbot Title: #90207 Fedora Project Pastebin (at fpaste.org)
13:12 dewey (belay that -- just read the man page on logrotate)
13:12 dbruhn If you send a signal it restarts the client
13:12 dewey Awesome, thanks.
13:12 dbruhn np
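The copytruncate approach dbruhn mentions looks roughly like the snippet below in /etc/logrotate.d/glusterfs; the retention values are illustrative, not his actual paste. copytruncate lets logrotate keep rotating without signalling (and therefore restarting) the glusterfs client process that holds the log open:

    /var/log/glusterfs/*.log /var/log/glusterfs/bricks/*.log {
        weekly
        rotate 4
        compress
        missingok
        notifempty
        copytruncate
    }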
13:14 primechuck joined #gluster
13:20 ricky-ticky joined #gluster
13:23 harish_ joined #gluster
13:23 bennyturns joined #gluster
13:45 tdasilva joined #gluster
13:47 mattappe_ joined #gluster
13:47 Copez Why can't I get more than 1.3 Gbit/s over 10GBe over a single thread?
13:48 Copez I've put 8x SSD's in R0 in each Gluster Nodes (3) but can't achieve more then 1.3 Gbit/s
13:49 jbustos_ what iperf results are you getting between Gluster nodes?
13:49 Copez Full 10 GBe performance
13:50 Copez If I start 2 DD sessions, my perfomance will go to 2x 1.3 Gbit/s
13:50 Copez Network is not the bottleneck
13:50 Copez (I believe)
13:51 jbustos_ On your switch no QoS or similiar right?
13:51 Copez The performance between SATA and SSD is at this moment equal. So the disks / SSDs aren't the problem
13:52 Copez No QoS is enabled
13:52 seapasulli joined #gluster
13:52 Copez I've enabled JumboFrames and saw the perfomance boosting from 1.1 Gbit/S to 1.3Gbit/s
13:52 Copez But I don't seem to get over the 1.3 bottleneck
13:53 Copez Also with a Distributed-Replicated over 6 Nodes I'm unable to achieve more then 1.3 Gbit
13:53 Copez Why are my threads capped at 1.3 Gbit?
13:54 jbustos_ If you are doing a dd, that is quite an amazing speed, I think you may be maxing out write performance.
13:55 jbustos_ example 1Gb eth will max at 120 Mb/s
13:55 jbustos_ on dds
13:55 Copez 120 Megabyte/s
13:55 jbustos_ yes with 1GbE
13:55 Copez I'm not getting more than 1.3 Gigabit a second
13:56 Copez the write performance is about 230 MB/s
13:56 Copez (over network)
13:56 Copez when I test RAW on disk, I'm hitting 800 Megabyte to 1.3 Gigabyte /s
13:57 Copez 1.3 Gigabit/s ≈ 160 Megabyte per second
13:58 Copez But when I start two threads, I'm hitting 2,7 Gigabit/s = 330 Megabyte per second
13:58 Copez So the disks can handle the speed / traffic
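To separate network limits from gluster limits, the single-stream vs multi-stream comparison jbustos_ hints at can be done with iperf; hostnames are placeholders:

    # on a storage node
    iperf -s
    # on the VM host: one stream, then four parallel streams
    iperf -c stor1 -t 30
    iperf -c stor1 -t 30 -P 4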
14:01 jag3773 joined #gluster
14:02 rpowell joined #gluster
14:04 rfortier joined #gluster
14:09 lpabon joined #gluster
14:09 rpowell left #gluster
14:11 monotek joined #gluster
14:12 seapasulli left #gluster
14:15 wushudoin joined #gluster
14:28 jmarley joined #gluster
14:28 jmarley joined #gluster
14:29 NeatBasis joined #gluster
14:39 ryand joined #gluster
14:50 ricky-ticky joined #gluster
14:54 lalatenduM joined #gluster
14:55 seapasulli joined #gluster
14:55 [o__o] left #gluster
14:58 [o__o] joined #gluster
14:59 kaptk2 joined #gluster
15:02 sroy_ joined #gluster
15:03 jbrooks joined #gluster
15:19 hagarth joined #gluster
15:22 benjamin_____ joined #gluster
15:23 hagarth1 joined #gluster
15:27 sprachgenerator joined #gluster
15:27 chirino joined #gluster
15:30 P0w3r3d joined #gluster
15:38 jobewan joined #gluster
15:38 ndk joined #gluster
15:41 seapasulli_ joined #gluster
15:44 Pavid7 joined #gluster
15:51 cyberbootje1 joined #gluster
15:59 xdexter joined #gluster
15:59 xdexter Hello, I have this error when creating a volume: "volume create: images: failed"
15:59 xdexter i use: "gluster volume create images replica 2 transport tcp app1:/mnt/glusterfs/images db1:/mnt/glusterfs/images"
15:59 xdexter somebody help me?
16:03 dbruhn xdexter, anything in your logs?
16:03 jag3773 joined #gluster
16:05 xdexter nothing
16:06 cyberbootje joined #gluster
16:07 cyberbootje joined #gluster
16:07 xdexter but when trying to create it again I got this error:
16:07 xdexter volume create: images: failed: /mnt/glusterfs/images or a prefix of it is already part of a volume
16:07 glusterbot xdexter: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
16:07 dbruhn do you have iptables or selinux enabled?
16:08 xdexter but in volume list is empty
16:08 xdexter no
16:08 dbruhn Yeah, it tried to create it and failed but it must have written out extended attributes, or .glusterfs directory while trying to make it
16:08 cyberbootje1 joined #gluster
16:09 dbruhn if you follow the information in that blog article that will help you clean that up
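The cleanup described in that article amounts to removing the volume metadata left on the brick by the failed create. A sketch using the brick path from above (run on each server, and verify against the article before running anything destructive):

    # strip the gluster extended attributes and the .glusterfs directory from the brick
    setfattr -x trusted.glusterfs.volume-id /mnt/glusterfs/images
    setfattr -x trusted.gfid /mnt/glusterfs/images
    rm -rf /mnt/glusterfs/images/.glusterfs
    service glusterd restart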
16:09 xdexter yes
16:09 idontknow joined #gluster
16:09 xdexter I follow the steps and I can create again, but the error always appears
16:10 dbruhn are all of your peers connected from both sides?
16:11 xdexter dbruhn, http://pastebin.com/vgNW7JK0
16:11 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
16:11 xdexter sorry
16:11 xdexter http://fpaste.org/90280/96282279/
16:11 glusterbot Title: #90280 Fedora Project Pastebin (at fpaste.org)
16:12 dbruhn distro?
16:12 xdexter CentOS release 6.5 (Final)
16:12 xdexter glusterfs-server-3.4.2-1.el6.x86_64
16:13 dbruhn nothing in the brick log
16:13 xdexter /var/log/glusterfs/bricks/ is empty
16:13 recidive joined #gluster
16:14 Guest89725 hi everybody, we ran into issues with lots of duplicate files & folders, any idea of the direction to fix that?
16:14 dbruhn what logs are in /var/log/glusterfs
16:14 xdexter bricks  cli.log  etc-glusterfs-glusterd.vol.log
16:14 dbruhn whats in the etc log?
16:14 xdexter hmm
16:15 xdexter i make create in app1
16:15 xdexter in db1 i have this:
16:15 xdexter [2014-03-31 16:14:51.095535] E [glusterd-utils.c:5214:glusterd_new_brick_validate] 0-management: Host db1 is not in 'Peer in Cluster' state
16:15 xdexter [2014-03-31 16:14:51.095603] E [glusterd-volume-ops.c:788:glusterd_op_stage_create_volume] 0-management: Host db1 is not in 'Peer in Cluster' state
16:15 xdexter [2014-03-31 16:14:51.095624] E [glusterd-op-sm.c:3719:glusterd_op_ac_stage_op] 0-management: Stage failed on operation 'Volume Create', Status : -1
16:15 dbruhn ahh ok
16:15 dbruhn is your dns resolving properly between then?
16:15 dbruhn them?
16:16 xdexter i make only "gluster peer probe db1" in app1
16:16 xdexter right?
16:16 dbruhn yep
16:16 dbruhn you could try and go to db1 and probe app1
16:16 dbruhn and see if that does anything for you
16:17 Mo__ joined #gluster
16:17 xdexter in db1: peer probe: success: host app1 port 24007 already in peer list
16:18 dbruhn Copez, might want to read this when you get back. http://download.intel.com/support/network/sb/fedexcasestudyfinal.pdf
16:19 cyberbootje joined #gluster
16:19 dbruhn xdexter, you are sure you've disabled iptables? and selinux on both servers?
16:19 xdexter yes
16:19 dbruhn did you just open ports on iptables? or actually disable it?
16:19 xdexter disable
16:19 xdexter all policies is ACCEPT
16:20 dbruhn http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
16:20 glusterbot Title: GlusterFS: {path} or a prefix of it is already part of a volume (at joejulian.name)
16:20 dbruhn use that to clean it up, and try again
16:20 dbruhn maybe you hit a network timeout or something
16:23 xdexter dbruhn, i found the error
16:23 xdexter in db1, host "db1" points to the external ip, not 127.0.0.1
16:24 xdexter ;/
16:24 dbruhn that will do it
16:24 xdexter i changed it to 127.0.0.1
16:24 LoudNoises joined #gluster
16:24 xdexter volume create: images: success: please start the volume to access data
16:25 dbruhn good to hear
16:25 verdurin joined #gluster
16:25 xdexter hehe
16:25 xdexter thank you for you help
16:25 dbruhn Of course :)
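The root cause here was name resolution: the hostname used in the peer probe resolved differently on db1 than glusterd expected. A common, illustrative /etc/hosts layout (addresses made up) keeps every peer's name pointing at the same address on all nodes:

    # identical on both app1 and db1
    192.168.1.11  app1
    192.168.1.12  db1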
16:29 sputnik1_ joined #gluster
16:31 hagarth joined #gluster
16:36 Guest89725 hi all, is there a faq or some good doc of fixing duplicates folders and files. many thanks
16:42 Guest89725 :) nobody?
16:43 Guest89725 so i guess the next step will be mailing list then we stack :)
16:43 Guest89725 ah gluster........
16:46 DV joined #gluster
16:48 [o__o] joined #gluster
16:53 jclift dbruhn: That Intel Fedex paper was interesting. (just scanned through it)
17:07 Pavid7 joined #gluster
17:28 rwheeler joined #gluster
17:30 Matthaeus joined #gluster
17:30 recidive joined #gluster
17:41 dbruhn Guest89725, whats the issue.
17:42 dbruhn jclift, agreed. I was doing some searching earlier, as Copez earlier was complaining about maxing out his single thread performance.
17:43 sputnik1_ joined #gluster
17:43 lalatenduM joined #gluster
18:02 sputnik1_ joined #gluster
18:04 shyam joined #gluster
18:14 elico joined #gluster
18:26 NeatBasis left #gluster
18:32 sputnik1_ joined #gluster
18:32 zaitcev joined #gluster
18:34 jruggiero joined #gluster
18:47 elyograg joined #gluster
18:48 elyograg in the "worked before but doesn't work now" category: trying to nfs mount from a solaris machine is returning RPC: Program not registered
18:48 elyograg we have other solaris machines that have the gluster volume mounted just fine right now.  can't get it mounted on another.
18:49 elyograg version 3.4.2
18:51 rpowell1 joined #gluster
18:57 lalatenduM elyograg, are you sure you required nfs services on the client side is started
18:57 lalatenduM s/you//
18:57 glusterbot What lalatenduM meant to say was: elyograg, are  sure you required nfs services on the client side is started
18:57 lalatenduM s/you sure you/you sure/
18:57 glusterbot What lalatenduM meant to say was: elyograg, are you sure required nfs services on the client side is started
18:57 elyograg the machine has other things mounted via NFS.
18:58 lalatenduM elyograg, hmm
18:58 elyograg but can't mount gluster.  which is mounted on linux and other solaris machines.
18:59 lalatenduM elyograg, can try try restarting the nfs services on the client side
19:00 elyograg I'll see if we are at a point where that can be done safely.  The machine is likely doing work on those other nfs mounts.
19:01 elyograg SunOS palace 5.9 Generic_118558-39 sun4u sparc SUNW,Sun-Fire-V210
19:04 seapasulli joined #gluster
19:05 elyograg looks like it got rebooted about an hour ago.  Not sure why.  we've had a problem where we must do an nfs mount via a linux machine immediately before we try to mount on a solaris machine.  Not sure if that is still required now that we've upgraded from 3.3.1 to 3.4.2, but it doesn't seem to be helping now.
19:06 elyograg after the upgrade, I know that we did get solaris machines mounted, and there are still some of those mounted even now.
19:09 lalatenduM elyograg, might be some some bug, you may want to mail gluster-users if it is a blocker
19:10 elyograg we've been having problem after problem with this thing.  it's production, not test.
19:18 _dist joined #gluster
19:18 divbell joined #gluster
19:40 B21956 joined #gluster
19:43 elyograg as was just pointed out to me (and I should have realized it) you don't get a more effective NFS client restart than rebooting the machine.
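A few checks that usually narrow down "RPC: Program not registered" against the Gluster NFS server are sketched below; hostnames are placeholders, and Gluster's built-in NFS server only speaks NFSv3 over TCP:

    # from the Solaris client: are the nfs/mountd programs registered on the server?
    rpcinfo -p glusterserver
    showmount -e glusterserver
    # Solaris mount syntax, forcing v3 over TCP
    mount -F nfs -o vers=3,proto=tcp glusterserver:/myvol /mnt/myvol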
19:54 bstr joined #gluster
20:07 Dga joined #gluster
20:11 rotbeard joined #gluster
20:14 cyberbootje joined #gluster
20:24 chirino joined #gluster
20:38 cyberbootje joined #gluster
20:41 mattappe_ joined #gluster
20:43 tdasilva left #gluster
20:49 nikk datadog web ui *very* slow still
20:51 mattappe_ joined #gluster
20:51 dbruhn nikk, why not run your own monitoring?
20:54 nikk oh hey this isn't #datadog haha
20:54 nikk /facepalm
20:55 dbruhn lol
20:58 cyberbootje joined #gluster
21:23 bennyturns purpleidea, ping!
21:25 Matthaeus left #gluster
21:25 Matthaeus joined #gluster
21:26 _dist Matthaeus: just thought you might like to know we've been live on gluster+proxmox for about a month now, no issues
21:27 Matthaeus Nice.  How's your performance?
21:28 _dist well, it's hard to say since we tweaked a lot of the VMs during the convert from vmware. Nothing is slower, that much I can say; 32 VMs use about 30% of 24 2.4GHz Xeon cores (if we run all on one node)
21:29 _dist disk i/o is much faster than before, but moving from low spindle hardware raid to gluster+zfs high spindle with SSDs is not a fair comparison
21:29 Matthaeus Oh, yeah.
21:29 Matthaeus No kidding.
21:30 Matthaeus My setup is basically scrounged hardware, so no SSDs, spindle drives are, well, probably outside their warranty already.
21:30 Matthaeus It's a bit slow.
21:30 _dist either way my fears of kernel panic, or other wonky kvm stuff cropping up are basically gone, but I'm still ready :)
21:31 _dist we get about 1200MBs/read and 700/write in a 4k random test
21:32 _dist yeah, I think you mentioned yours was more test lab, for fun/glory? :)
21:32 badone joined #gluster
21:51 andreask joined #gluster
22:06 marcoceppi joined #gluster
22:19 compbio joined #gluster
22:21 bstr joined #gluster
22:21 cyberbootje joined #gluster
22:23 semiosis Matthaeus: splunk can't afford some ssd?
22:25 Matthaeus semiosis: Splunk surely can.  My own private projects, funded out of my pocket, cannot.
22:25 Matthaeus At least, not until my stock vests. ;)
22:25 semiosis ah right
22:27 compbio is anybody else using network bonding on gluster servers? I'm filling the network pipes, but only seeing half as much disk traffic as network traffic, and can't figure out why the net traffic is doubled
22:27 Matthaeus I don't currently use gluster professionally.  Just as a hobby.
22:28 semiosis compbio: replica 2 would explain it
22:29 compbio good thought, but it's not a replicated volume (data is easily available elsewhere)
22:30 compbio we're using balance-alb as the interface bonding option, if that makes a difference (i'm not very familiar with these options)
22:31 elyograg I am bondin interfaces, but only for redundancy, not network capacity. so it's still 1Gb/s.
22:31 elyograg each of the cables is plugged into a different switch.
22:31 purpleidea bennyturns: pong!
22:32 compbio we do have two switches, but were hoping that we would get redundancy and performance out of it
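For the bonding question, the first thing to confirm is which mode the kernel actually negotiated and whether both slaves carry traffic; the interface names below are assumptions:

    # shows the bonding mode (e.g. adaptive load balancing), slave state and counters
    cat /proc/net/bonding/bond0
    # per-NIC byte counters, to see whether transmit traffic really splits across slaves
    cat /sys/class/net/eth0/statistics/tx_bytes /sys/class/net/eth1/statistics/tx_bytes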
22:33 JoeJulian http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
22:33 dbruhn joined #gluster
22:33 * JoeJulian pokes glusterbot
22:34 systemonkey Would the size of the cache-size bigger the better?
22:35 JoeJulian I should make glusterbot offer that link as a response to naked pings.
22:36 systemonkey directory browsing is slow and I would like to really tune it. Can someone give me some pointers?
22:38 systemonkey here is my current config. http://pastebin.com/64XLXPtA
22:38 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
22:39 systemonkey http://ur1.ca/gycwp
22:39 glusterbot Title: #90443 Fedora Project Pastebin (at ur1.ca)
22:40 JoeJulian Someone should fork GlusterFS... Call it a tuning fork.... mwahaha!
22:40 elyograg glusterbot is only mostly dead.  time to go through his pockets for loose change.
22:40 JoeJulian +1
22:41 JoeJulian And an extra +1 for PB references.
22:41 Matthaeus I thought PB had too many ads...?
22:41 JoeJulian :P
22:43 systemonkey @paste
22:43 glusterbot systemonkey: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
22:48 glusterbot joined #gluster
22:49 JoeJulian http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
22:51 primechuck joined #gluster
22:52 fidevo joined #gluster
22:52 systemonkey JoeJulian: thanks. probably that's why I haven't been getting any input.
22:55 JoeJulian systemonkey: wrt your question, do you have a kernel/fuse that supports readdirplus?
22:59 systemonkey JoeJulian: I'm not sure. Sorry, how do I check if the kernel/fuse supports readdirplus?
23:04 systemonkey JoeJulian: btw, the mount is shared through samba. From quick reading I see that readdirplus is for NFS? Most users here are on windows...
23:05 JoeJulian readdirplus was an enhancement for nfs that was added to fuse.
23:06 cyberbootje joined #gluster
23:07 systemonkey JoeJulian: ah... i see. well, I would like to try to see if it can improve the gluster. Could you point me to starting point on how I can implement readdirplus? like a website or blog?
23:10 primechu_ joined #gluster
23:12 Copez joined #gluster
23:14 cyberbootje joined #gluster
23:18 glusterbot New news from resolvedglusterbugs: [Bug 767545] [7235e5b1af090ffc9d87ac59daadf7926433b495] gluster volume statedump volname not working <https://bugzilla.redhat.com/show_bug.cgi?id=767545> || [Bug 769774] [7eed1d5ba51b65e865f79a392aff70048c702bf0] improve gluster volume heal vol cli outputs <https://bugzilla.redhat.com/show_bug.cgi?id=769774> || [Bug 771313] [29b9b0c72809456b1ba334a40adc1ed9929eca91]: brick crashed in posi
23:18 glusterbot New news from newglusterbugs: [Bug 882127] The python binary should be able to be overridden in gsyncd <https://bugzilla.redhat.com/show_bug.cgi?id=882127> || [Bug 908518] [FEAT] There is no ability to check peer status from the fuse client (or maybe I don't know how to do it) <https://bugzilla.redhat.com/show_bug.cgi?id=908518> || [Bug 915996] [FEAT] Cascading Geo-Replication Weighted Routes <https://bugzilla.redhat.com/s
23:23 JoeJulian systemonkey: I /think/ that readdirp is supposed to be used  by default, but to be sure I mount with use-readdirp=on. If it doesn't support readdirplus, it will fail to readdir.
23:24 JoeJulian s/fail to/fail over to/
23:24 glusterbot What JoeJulian meant to say was: systemonkey: I /think/ that readdirp is supposed to be used  by default, but to be sure I mount with use-readdirp=on. If it doesn't support readdirplus, it will fail over to readdir.
23:27 gdubreui joined #gluster
23:28 systemonkey JoeJulian: thank you so much. I'll go ahead and try that. I'm full of hope now :) thanks!
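The mount JoeJulian describes would look something like this; the server, volume name and mount point are placeholders:

    mount -t glusterfs -o use-readdirp=on server1:/myvol /mnt/myvol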
