IRC log for #gluster, 2014-01-22


All times shown according to UTC.

Time Nick Message
00:03 SpeeR joined #gluster
00:03 JoeJulian Nope, not yet.
00:03 ninkotech joined #gluster
00:27 swaT30 joined #gluster
00:28 swaT30 hi all
00:28 swaT30 we have a replicated distributed volume setup on three nodes w/ 6 disks each
00:28 swaT30 we've run into an issue where one set of bricks is at 90% usage, but others are between 20-40%
00:29 swaT30 not sure if running a rebalance will fix this, but would like to more evenly distribute the data
00:29 JoeJulian A rebalance should fix that. What version are you running and roughly how many files do you have?
00:31 swaT30 3.2.5, 517 files
00:31 ninkotech joined #gluster
00:32 JoeJulian 3.2.5 didn't rebalance very reliably. It may take more than one run, though with that few files it may be successful.
00:32 JoeJulian Keep an eye on memory usage and abort the rebalance if it gets out of hand.
00:33 JoeJulian ... and plan your upgrade. ;)
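A minimal sketch of the rebalance flow being discussed, assuming a volume named "myvol" (on 3.2.x the status output is more limited than on 3.3+):

    gluster volume rebalance myvol start    # redistribute existing files across the bricks
    gluster volume rebalance myvol status   # check progress; watch glusterfs memory usage alongside this
    gluster volume rebalance myvol stop     # abort if memory usage gets out of hand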
00:34 ninkotech_ joined #gluster
00:37 swaT30 JoeJulian: ok cool, tks! how is the upgrade path from 3.2.5 to latest stable?
00:37 swaT30 we're running on Precise via Ubuntu's PPA
00:41 marcoceppi joined #gluster
00:41 marcoceppi joined #gluster
00:42 TrDS left #gluster
00:42 sroy__ joined #gluster
00:44 JoeJulian @ppa
00:44 glusterbot JoeJulian: The official glusterfs packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.4 stable: http://goo.gl/u33hy -- 3.5 QA: http://goo.gl/Odj95k
00:44 zapotah joined #gluster
00:44 zapotah joined #gluster
00:45 swaT30 awesome
00:45 swaT30 I assume there are some good upgrade notes out there
00:51 ninkotech_ joined #gluster
00:52 JoeJulian @upgrade
00:52 glusterbot JoeJulian: I do not know about 'upgrade', but I do know about these similar topics: '3.3 upgrade notes', '3.4 upgrade notes'
00:52 JoeJulian @3.4 upgrade notes
00:52 glusterbot JoeJulian: http://vbellur.wordpress.com/2013/07/15/upgrading-to-glusterfs-3-4/
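A rough sketch of the upgrade flow those notes describe, per server; the package names are assumptions and the PPA to add is deliberately left as a placeholder, so follow the links glusterbot posted for the exact values. Note that the upgrade notes above also cover whether a rolling upgrade is possible: a jump from 3.2.x generally needs planned downtime.

    sudo service glusterfs-server stop                     # stop glusterd on this node
    sudo add-apt-repository <PPA from the links above>     # e.g. the 3.4 stable PPA
    sudo apt-get update
    sudo apt-get install glusterfs-server glusterfs-client
    sudo service glusterfs-server start                    # verify peers and volumes before moving on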
00:56 ninkotech__ joined #gluster
00:59 eshy joined #gluster
01:01 eshy joined #gluster
01:02 gdubreui joined #gluster
01:03 swaT30 JoeJulian: awesome
01:03 swaT30 thanks for your help
01:03 JoeJulian You're welcome
01:08 ninkotech__ joined #gluster
01:10 gdubreui joined #gluster
01:14 plarsen joined #gluster
01:15 ninkotech joined #gluster
01:15 y4m4 joined #gluster
01:16 purpleidea ,,(next)
01:16 glusterbot Another satisfied customer... NEXT!
01:21 swaT30 haha
01:49 mohankumar__ joined #gluster
01:56 sprachgenerator joined #gluster
02:12 harish joined #gluster
02:38 jporterfield joined #gluster
02:38 vpshastry joined #gluster
02:51 bharata-rao joined #gluster
03:01 smellis with the heal info command, i'm not sure if it means that brick has a file with data that needs to go to the other brick, or the other way around
03:06 wgao joined #gluster
03:16 nshaikh joined #gluster
03:18 kshlm joined #gluster
03:21 sprachgenerator joined #gluster
03:30 jag3773 joined #gluster
03:32 ababu joined #gluster
03:32 shubhendu joined #gluster
03:45 itisravi joined #gluster
03:48 ninkotech__ joined #gluster
03:49 vpshastry joined #gluster
04:10 shyam joined #gluster
04:26 shylesh joined #gluster
04:26 raghug joined #gluster
04:27 kanagaraj joined #gluster
04:27 RameshN joined #gluster
04:30 kdhananjay joined #gluster
04:32 ppai joined #gluster
04:33 smellis ok, figured out I ran into this same thing: http://www.gluster.org/pipermail/gluster-users/2013-August/036841.html
04:33 glusterbot Title: [Gluster-users] No active sinks for performing self-heal on file (at www.gluster.org)
04:35 ndarshan joined #gluster
04:53 dusmant joined #gluster
04:56 vpshastry1 joined #gluster
05:06 aravindavk joined #gluster
05:10 psharma joined #gluster
05:27 bala joined #gluster
05:32 bala joined #gluster
05:34 mohankumar__ joined #gluster
05:35 prasanth joined #gluster
05:51 raghug joined #gluster
05:51 ndarshan joined #gluster
05:55 shubhendu joined #gluster
05:57 hagarth joined #gluster
05:57 bala joined #gluster
05:58 benjamin__ joined #gluster
05:59 rjoseph1 joined #gluster
06:03 raghu joined #gluster
06:05 satheesh1 joined #gluster
06:06 rjoseph1 left #gluster
06:07 mohankumar__ joined #gluster
06:10 glusterbot New news from newglusterbugs: [Bug 990028] enable gfid to path conversion <https://bugzilla.redhat.com/show_bug.cgi?id=990028>
06:12 ricky-ti1 joined #gluster
06:13 mik3 joined #gluster
06:15 davinder joined #gluster
06:15 raghug joined #gluster
06:31 lalatenduM joined #gluster
06:37 raghug joined #gluster
06:45 pk1 joined #gluster
06:46 CheRi joined #gluster
06:51 ndarshan joined #gluster
06:52 bala joined #gluster
06:53 zapotah joined #gluster
06:53 zapotah joined #gluster
06:53 dusmant joined #gluster
06:54 shyam joined #gluster
06:57 shubhendu joined #gluster
06:58 kdhananjay joined #gluster
07:12 glusterbot New news from newglusterbugs: [Bug 1056406] DHT + add brick : Directory self heal is fixing hash layout for few Directories <https://bugzilla.redhat.com/show_bug.cgi?id=1056406>
07:18 kdhananjay joined #gluster
07:20 jtux joined #gluster
07:27 Shri joined #gluster
07:27 benjamin________ joined #gluster
07:29 mohankumar__ joined #gluster
07:30 zapotah joined #gluster
07:30 zapotah joined #gluster
07:33 ngoswami joined #gluster
07:50 benjamin________ joined #gluster
07:52 overclk joined #gluster
07:53 abyss^ JoeJulian: I've already read about extended attrs; it's interesting, but I don't see how it would help me ;)
07:57 mohankumar__ joined #gluster
08:03 ngoswami joined #gluster
08:05 eseyman joined #gluster
08:06 ndarshan joined #gluster
08:07 s2r2_ joined #gluster
08:11 franc joined #gluster
08:11 franc joined #gluster
08:22 jag3773 joined #gluster
08:22 dusmant joined #gluster
08:23 tobira_ Hi, in terms of brick config, which is more recommended, stripe or distributed, if I have lots of big video files (1 to 20 GB) and also lots of small/medium files (1 KB to 512 KB and 4 MB to 300 MB)? Directories could reach 70k to 120k files each.
08:25 samppah tobira_: i'd recommend distributed
08:25 samppah @stripe
08:25 glusterbot samppah: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
08:25 samppah tobira_: ^
08:29 tobira_ Ok thanks, would it be better to use stripe once raid6/5 support is out?
08:30 mohankumar__ joined #gluster
08:30 kdhananjay joined #gluster
08:31 samppah tobira_: i can't say about that
08:36 blook joined #gluster
08:39 saurabh joined #gluster
08:45 shyam joined #gluster
08:47 mohankumar__ joined #gluster
08:49 hagarth joined #gluster
08:52 raghug joined #gluster
08:54 dusmant joined #gluster
08:54 aravindavk joined #gluster
08:54 ngoswami joined #gluster
08:55 kanagaraj joined #gluster
08:56 ndarshan joined #gluster
08:59 mgebbe_ joined #gluster
09:01 sinatributos joined #gluster
09:02 psharma joined #gluster
09:03 bala joined #gluster
09:13 tryggvil joined #gluster
09:15 klaxa|work joined #gluster
09:23 sinatributos joined #gluster
09:27 aravindavk joined #gluster
09:37 b0e joined #gluster
09:42 ngoswami joined #gluster
09:44 RameshN joined #gluster
09:44 samsamm left #gluster
09:45 RameshN joined #gluster
09:46 kanagaraj joined #gluster
09:49 CheRi joined #gluster
09:54 vbhat joined #gluster
09:57 vbhat joined #gluster
09:59 aravindavk joined #gluster
09:59 ndarshan joined #gluster
10:00 harish joined #gluster
10:02 vbhat joined #gluster
10:05 sac`away joined #gluster
10:06 F^nor joined #gluster
10:07 bala joined #gluster
10:09 vbhat joined #gluster
10:11 vbhat joined #gluster
10:11 ells joined #gluster
10:20 ngoswami joined #gluster
10:24 mohankumar__ joined #gluster
10:27 hagarth joined #gluster
10:30 raghug joined #gluster
10:35 davinder joined #gluster
10:36 CheRi joined #gluster
10:39 purpleidea JoeJulian: requesting permission to paste in channel, will require 64 lines. reason? because awesome.
10:48 dusmant joined #gluster
10:51 ppai joined #gluster
10:52 kdhananjay joined #gluster
10:53 purpleidea joined #gluster
10:59 diegows joined #gluster
10:59 hagarth joined #gluster
11:00 edward1 joined #gluster
11:23 hybrid512 joined #gluster
11:28 dusmant joined #gluster
11:35 klaxa|work hi, i asked before, about a week ago, but we're still having huge performance issues with glusterfs 3.3.2 https://gist.github.com/anonymous/c589aca90960d5ef7c4f
11:35 glusterbot Title: gist:c589aca90960d5ef7c4f (at gist.github.com)
11:38 nshaikh joined #gluster
11:39 samppah klaxa|work: hey, i think i spoke with you about this earlier
11:39 samppah what kind of disk you have?
11:41 klaxa|work um... i'm not too sure what hardware exactly, but on one it's a raid0 on the other one a raid10, both produce... more write speed than we get within VMs on the mount
11:41 ndk joined #gluster
11:41 klaxa|work let me get the actual numbers, one sec
11:42 klaxa|work looks like we didn't write them down, it is far more than 100 mb/s writing speed though
11:43 klaxa|work within VMs we get ~5 mb/s
11:43 klaxa|work we're really out of ideas, this didn't happen with 3.2.2 and 3.1.4
11:44 klaxa|work we ran 3.2.2 and 3.1.4 on debian squeeze, upgraded to debian wheezy when we upgraded to 3.3.2
11:45 samppah klaxa|work: you were using nfs? or fuse?
11:45 klaxa|work fuse
11:46 klaxa|work we're kinda suspecting fuse too, it's only a vague guess, but that's the only thing that is between glusterfs and the harddrives
11:46 samppah klaxa|work: iirc there was a huge performance boost between glusterfs 3.3 and 3.4
11:47 samppah i'm not sure if those patches have been backported to 3.3
11:47 klaxa|work we've resolved to try running 3.4.2
11:47 klaxa|work but the debian backport packages for libvirt and qemu-kvm are not built against libgfapi
11:47 samppah https://bugzilla.redhat.com/show_bug.cgi?id=858850 also i think that this patch was a huge help in RHEL
11:47 glusterbot Bug 858850: high, urgent, rc, bfoster, CLOSED ERRATA, fuse: backport scatter-gather direct IO
11:48 samppah klaxa|work: 3.4 is a lot faster with fuse too
11:49 klaxa|work okay, thanks i guess we'll try that first
11:49 samppah only problem is that you seem to be using rdma?
11:49 klaxa|work hmmm yeah but that's by far not the bottleneck
11:50 klaxa|work we already discussed it, we'll run it with ip over rdma
11:50 samppah ahh
11:50 samppah okay :)
11:50 klaxa|work so it'll be tcp again
11:50 klaxa|work :)
11:50 samppah good
11:50 klaxa|work i mean, if write speed within VMs is 5 MB/s right now, it wouldn't matter whether we had a 40 Gbps or a 100 Mbps network
11:51 klaxa|work and it will probably still be a maximum throughput of at least 10 gbps
11:51 klaxa|work which is still way faster than our harddrives, so there's that
11:51 andreask joined #gluster
11:54 samppah klaxa|work: i have something like 70 MB/s per VM with 10GbE and fuse
11:54 samppah using 3.4.1
11:54 klaxa|work okay that sounds good, do you have comparison values for 3.3.X?
11:55 samppah not really, we started this environment with 3.4, but 5 MB/s sounds like something i have seen too
11:58 msvbhat_is_away joined #gluster
11:59 CheRi joined #gluster
11:59 hagarth joined #gluster
12:01 itisravi_ joined #gluster
12:08 kshlm joined #gluster
12:09 ells joined #gluster
12:09 dusmant joined #gluster
12:16 kdhananjay joined #gluster
12:21 ira joined #gluster
12:26 s2r2_ joined #gluster
12:36 ctria joined #gluster
12:38 GabrieleV joined #gluster
12:45 raghug joined #gluster
12:47 CheRi joined #gluster
12:50 Shri joined #gluster
12:56 sinatributos Hello. Is 3.4.2 functioning with rdma transport?
12:57 sinatributos I mean, for a production environment.
12:58 Shri left #gluster
13:04 keytab joined #gluster
13:08 Cenbe joined #gluster
13:12 benjamin________ joined #gluster
13:12 pk1 left #gluster
13:13 glusterbot New news from newglusterbugs: [Bug 957917] gluster create volume doesn't cleanup after its self if the create fails. <https://bugzilla.redhat.com/show_bug.cgi?id=957917>
13:16 tryggvil joined #gluster
13:19 shyam joined #gluster
13:39 sroy_ joined #gluster
13:44 kkeithley left #gluster
13:45 kaushal_ joined #gluster
13:45 mohankumar__ joined #gluster
13:45 kkeithley joined #gluster
13:45 Shri joined #gluster
13:46 jag3773 joined #gluster
13:46 B21956 joined #gluster
13:50 kaushal_ joined #gluster
13:51 tziOm joined #gluster
13:53 dusmant joined #gluster
13:58 japuzzo joined #gluster
14:14 power8 joined #gluster
14:16 power8 ubuntu 12.04 with gluster 3.4.  I want to mount the glusterfs volume at boot on the same server as a cluster node.  I am not having much luck with my own upstart script.  How would i use the mounting-glusterfs.conf? i just came across it and i am thinking it might help me out.
14:17 Krikke I disabled mounting-glusterfs and put mount /path into rc.local
14:17 Krikke seems to fix it for now
14:18 power8 yes, that worked for me on VMware, but it does not work well in amazon ec2; it appears the boot and startup time is too long, and by the time rc.local runs the mount, the glusterfs service is not yet running
14:18 Krikke oh
14:18 Krikke dunno then
14:20 shyam joined #gluster
14:21 kaushal_ joined #gluster
14:22 power8 thanks, this is the last item to figure out and I will have my setup ready for use :P
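A minimal sketch of the rc.local workaround Krikke describes, adjusted for the slow-boot case by waiting for glusterd before mounting; the volume name and mountpoint here are assumptions:

    # /etc/fstab entry so a slow gluster mount doesn't block boot on Ubuntu 12.04
    localhost:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,nobootwait  0  0

    # /etc/rc.local: wait up to ~60s for glusterd, then mount
    for i in $(seq 1 30); do
        pidof glusterd >/dev/null && break
        sleep 2
    done
    mount /mnt/myvol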
14:23 sinatributos Anyone willing to help? Trying to build a volume with rdma transport on 2 debian wheezy boxes with gluster 3.3.2. I have some questions as i can mount the volume with transport tcp but not rdma.
14:24 vpshastry joined #gluster
14:24 bennyturns joined #gluster
14:25 jskinner_ joined #gluster
14:29 rwheeler joined #gluster
14:30 ira joined #gluster
14:31 ira joined #gluster
14:34 vpshastry left #gluster
14:35 wica joined #gluster
14:36 wica Hi
14:36 glusterbot wica: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
14:36 power8 i might have found one mistake in my script that was causing me problems...
14:37 wica When using qemu+glusterfs, is it necessary to create the disk image with qemu-img on the gluster volume? It looks like it is not possible to access a disk image on glusterfs when it was not created with qemu-img.
14:38 samppah wica: no it's not necessary but qemu user must have rights to access that file
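One hedged way to give the qemu user those rights, if the volume is used only for VM images ("myvol" and uid/gid 107 are assumptions; check the qemu or libvirt-qemu uid on your distro):

    gluster volume set myvol storage.owner-uid 107
    gluster volume set myvol storage.owner-gid 107
    # or simply chown the image file to the qemu user on the mounted volume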
14:39 gork4life joined #gluster
14:39 gork4life hello all
14:39 gork4life I'm needing clarification
14:39 Dga joined #gluster
14:40 wica samppah: duhh, thnx
14:40 gork4life is storage tiering possible with gluster
14:40 Peanut I've uploaded some logfiles for the kvm live-migration issue: http://epboven.home.xs4all.nl/gluster-migrate.html
14:40 glusterbot Title: Gluster KVM migration issue (at epboven.home.xs4all.nl)
14:41 jdarcy joined #gluster
14:41 XpineX joined #gluster
14:41 chirino joined #gluster
14:42 gork4life is storage tiering possible with gluster
14:43 franc joined #gluster
14:43 franc joined #gluster
14:46 gork4life samppah: Is storage tiering possible with glusterfs
14:46 power8 i figured it all out! i have upstart script working and it worked in amazon ec2 (B) glusterfs is awesome
14:47 gork4life power8: Is storage tiering possible with glusterfs
14:47 tdasilva joined #gluster
14:47 jobewan joined #gluster
14:48 power8 gork4life: i do not know. I was here for help also.  i am using it with the mirror/distributed setup across hard disks
14:48 Peanut power8: nice, awesome :-)
14:49 power8 it is a tough problem to get a NAS/CIFS solution that is HA in amazon, and glusterFS has helped us reach that goal.
14:50 power8 we are going to figure out a way to give back to the community
14:50 gork4life power8: thanks for the reply
14:50 hagarth gork4life: what kind of tiering support are you looking for?
14:51 hagarth power8: awesome, very cool to hear that!
14:52 kanagaraj joined #gluster
14:53 theron joined #gluster
14:53 dbruhn joined #gluster
14:54 gork4life hagarth: I'm wondering if gluster can support any kind of storage tiering all the way to tier one
14:55 zaitcev joined #gluster
14:55 theron_ joined #gluster
14:55 XpineX I have not seen anything about tiering in the glusterfs documentation. I do not think it is something that is supported at the moment, maybe in a future release ?
14:56 CheRi joined #gluster
14:56 Technicool joined #gluster
14:56 hagarth gork4life: http://www.gluster.org/community/documentation/index.php/Features/data-classification -- something like this?
14:56 glusterbot Title: Features/data-classification - GlusterDocumentation (at www.gluster.org)
14:57 primechuck joined #gluster
14:57 jskinner_ joined #gluster
14:58 gork4life hagarth: Thanks for the link will
14:58 zapotah joined #gluster
15:00 gork4life XpineX: That's what I thought
15:03 kkeithley Reminder: backport wishlists for 3.3.3 and 3.4.3 are at http://www.gluster.org/community/documentation/index.php/Backport_Wishlist.
15:04 ells joined #gluster
15:05 sinatributos left #gluster
15:07 lpabon joined #gluster
15:08 sinatributos joined #gluster
15:08 klaxa samppah i ran a performance test at work, 3 mb/s in a vm, hypervisor load at 32, i'll run it with profiling enabled tomorrow
15:08 sinatributos Can anyone provide a link to download source code of gluster 3.3.2 to build it with rdma support on debian wheezy?
15:09 sinatributos The links here http://www.gluster.org/download/gluster-source-code/ do not bring me to any source code
15:10 kkeithley http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.2/glusterfs-3.3.2.tar.gz
15:10 kkeithley Or http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.3.2.tar.gz
15:12 gork4life Is gluster able to group all mounted disks into one volume
15:13 zapotah joined #gluster
15:13 zapotah joined #gluster
15:13 ells joined #gluster
15:14 glusterbot New news from newglusterbugs: [Bug 1056621] Move hosting to Maven Central <https://bugzilla.redhat.com/show_bug.cgi?id=1056621>
15:14 dbruhn gork4life, not sure what you are asking
15:15 kshlm joined #gluster
15:17 gork4life dbruhn: If I have 5 disks mounted on my system can gluster group these disks into one volume
15:18 dbruhn gork4life, it can indeed, it will not be a single block level volume, but accessible through a network mount
15:20 gork4life dbruhn: So if I mount it over nfs and attach it to a vm, it will see one volume?
15:20 dbruhn yep
15:20 gork4life dbruhn: thanks
15:20 dbruhn np
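A minimal sketch of what that looks like, assuming one brick per disk on a single server and a volume called "myvol":

    gluster volume create myvol server1:/bricks/disk1 server1:/bricks/disk2 \
        server1:/bricks/disk3 server1:/bricks/disk4 server1:/bricks/disk5
    gluster volume start myvol
    # clients then see one namespace, e.g. via gluster's built-in NFS server:
    mount -t nfs -o vers=3 server1:/myvol /mnt/myvol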
15:21 dbruhn are you planning on only ever using a single server for the storage?
15:21 gork4life dbruhn: no just in testing phase right now to see how gluster works
15:22 wushudoin joined #gluster
15:22 gork4life dbruhn: actually planning on having multiple servers using glusters distributed replication
15:23 dbruhn gork4life, welcome to the party, aixsyd has been doing some similar testing to you, I don't think he is using vmware though.
15:23 plarsen joined #gluster
15:24 sinatributos kkeithley: I've been in this folder 20 times! I think I'm blind. Thanks!!
15:25 smellis what's the best way to know if everything is consistent on a dist-repl volume?  When I've got a bunch of vms running on my volume, it's showing a lot of different images in volume heal info
15:25 smellis if I shut all of the vms down, things settle out and all of the bricks show 0
15:26 kkeithley sinatributos: yw
15:26 gork4life dbruhn: when do you think gluster will support storage tiering?
15:26 smellis but I'd like to be able to reboot a machine and know that it's healed, before I restart the next host
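A hedged way to check that on 3.3/3.4 is to wait for the heal counters to drop to zero on every brick before touching the next host ("myvol" is assumed); with live VMs the pending list churns, so look for it to settle rather than hit zero momentarily:

    gluster volume heal myvol info              # entries still pending self-heal, per brick
    gluster volume heal myvol info heal-failed  # entries that could not be healed
    gluster volume heal myvol info split-brain  # entries needing manual resolution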
15:26 dbruhn gork4life, last I heard was rumblings of 3.6, but I haven't gotten any confirmation on that.
15:34 andreask joined #gluster
15:35 gork4life dbruhn: Do you recommend running gluster on top of zfs
15:35 dbruhn There is a good write up on it, I am running it on XFS personally, there are a couple of people who have said the performance on ZFS is better.
15:36 gork4life dbruhn: Do you have a link for setting up zfs and gluster
15:38 dbruhn gork4life: http://www.gluster.org/community/documentation/index.php/GlusterOnZFS
15:38 glusterbot Title: GlusterOnZFS - GlusterDocumentation (at www.gluster.org)
15:38 wushudoin left #gluster
15:39 gork4life dbruhn: Would you happen to have one for ubuntu 12.04
15:41 dbruhn sorry I don't, but most of it should transfer over to ubuntu
15:41 dbruhn first glance the only thing that doesn't apply is the selinux stuff
15:41 gork4life dbruhn: Ok thanks you  have been wonderful
15:41 dbruhn good luck
15:41 gork4life dbruhn: :)
15:42 shyam joined #gluster
15:43 cyberbootje JoeJulian: You there?
15:44 psyl0n joined #gluster
15:44 psyl0n joined #gluster
15:44 glusterbot New news from newglusterbugs: [Bug 991084] No way to start a failed brick when replaced the location with empty folder <https://bugzilla.redhat.com/show_bug.cgi?id=991084>
15:50 bugs_ joined #gluster
15:53 sprachgenerator joined #gluster
15:56 kshlm joined #gluster
15:58 sinatributos Trying to build gluster 3.3.2 from source with --enable-ibverbs, I get an "ibverbs requested but not found" error. I have libibverbs1 and ibverbs-utils installed. I'm on wheezy. Any clue which package I am missing?
15:59 Alex libibverbs-dev?
16:00 sinatributos alex: package not found
16:00 Alex odd - looks to exist: http://packages.debian.org/wheezy/libibverbs-dev
16:00 glusterbot Title: Debian -- Details of package libibverbs-dev in wheezy (at packages.debian.org)
16:01 pravka joined #gluster
16:01 Alex I can install it on Wheezy - from main.
16:02 sinatributos Alex: it does. And it works. Typo.
16:02 sinatributos Thanks
16:03 Alex np, I typoed it as libibibibibiverbs the best part of a dozen times. :)
16:04 sinatributos Yes, the name comes from deep in hell.
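For reference, the missing piece was the -dev package; a minimal build sketch on wheezy (install prefix and other configure flags left at their defaults here):

    sudo apt-get install libibverbs-dev
    ./configure --enable-ibverbs
    make && sudo make install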
16:13 daMaestro joined #gluster
16:16 kkeithley ,,(zfs)
16:16 glusterbot I do not know about 'zfs', but I do know about these similar topics: 'zend'
16:17 morsik lol.
16:19 Dave2 zfs, zend, close enough
16:21 dbruhn http://www.gluster.org/community/documentation/index.php/GlusterOnZFS
16:21 glusterbot Title: GlusterOnZFS - GlusterDocumentation (at www.gluster.org)
16:21 dbruhn How does one make glusterbot remember that
16:24 zapotah joined #gluster
16:24 zapotah joined #gluster
16:25 kkeithley @learn zfs as  http://www.gluster.org/community/documentation/index.php/GlusterOnZFS
16:25 glusterbot kkeithley: The operation succeeded.
16:26 kkeithley ,,(zfs)
16:26 glusterbot http://www.gluster.org/community/documentation/index.php/GlusterOnZFS
16:27 kkeithley @forget zfs
16:27 glusterbot kkeithley: The operation succeeded.
16:28 kkeithley @learn zfs as GlusterOnZFS - GlusterDocumentation http://www.gluster.org/community/documentation/index.php/GlusterOnZFS
16:28 glusterbot kkeithley: The operation succeeded.
16:28 kkeithley ,,(zfs)
16:28 glusterbot GlusterOnZFS - GlusterDocumentation http://www.gluster.org/community/documentation/index.php/GlusterOnZFS
16:29 SpeeR joined #gluster
16:33 kaptk2 joined #gluster
16:35 sinatributos Any clue why I get these errors? I have OFED and an IB interface, but:
16:35 sinatributos [2014-01-22 16:32:51.432036] C [rdma.c:4110:gf_rdma_init] 0-rpc-transport/rdma: No IB devices found
16:35 sinatributos [2014-01-22 16:32:51.432124] E [rdma.c:4993:init] 0-vol-datosweb-client-3: Failed to initialize IB Device
16:35 sinatributos [2014-01-22 16:32:51.432149] E [rpc-transport.c:316:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
16:36 sinatributos TIA.
16:41 LoudNoises joined #gluster
16:42 msvbhat_ joined #gluster
16:44 radez joined #gluster
16:45 jclift joined #gluster
16:45 dblack joined #gluster
16:45 msvbhat joined #gluster
16:45 zerick joined #gluster
16:45 tdasilva joined #gluster
16:45 portante joined #gluster
16:46 ira joined #gluster
16:46 kkeithley joined #gluster
16:48 bfoster joined #gluster
16:53 crazifyngers joined #gluster
16:53 nocturn joined #gluster
17:01 dbruhn sinatributos, are you using infiniband?
17:01 SpeeR joined #gluster
17:03 SpeeR joined #gluster
17:06 tdasilva joined #gluster
17:06 radez joined #gluster
17:07 kkeithley joined #gluster
17:07 bfoster joined #gluster
17:07 portante joined #gluster
17:07 msvbhat joined #gluster
17:10 jclift joined #gluster
17:10 sinatributos dbruhn Yes, I have mellanox card installed.
17:10 sinatributos dbruhn, I think i have problems with the OFED sw stack, but not sure.
17:11 dbruhn from what those errors are telling you the IB cards are probably not functioning
17:11 dbruhn I would suggest testing out your IB system first
17:11 dbruhn also what version of Gluster are you trying to run on RDMA?
17:12 sinatributos 3.3.2, as I've heard it's the only stable one. Compiled from source.
17:12 dbruhn Why didn't you install the ubuntu packages for 3.3.2?
17:12 sinatributos I have ipoib working among the nodes and I can see the ib0 interface.
17:13 dbruhn And I am running 3.3.2 on my RDMA systems, there seems to be a lack of clarity on what doesn't work about it beyond that
17:13 sinatributos I am on wheezy. I was not sure whether the .debs included rdma support or not.
17:14 sinatributos I will keep on checking the IB software stack. It clearly seems the problem is there.
17:15 shyam joined #gluster
17:16 dbruhn is the RDMA service running
17:16 bfoster joined #gluster
17:17 dbruhn sinatributos, on redhat/centos there is a rdma daemon that runs, is that running for you?
17:17 jclift joined #gluster
17:18 sinatributos I think I have found it. There were some missing libs for my mellanox hca.
17:18 sinatributos apt-get install libmlx4-1
17:18 dbruhn ibutils libibverbs libnes opensm libibmad infiniband-diags libibverbs-utils libibverbs-devel perftest libmlx4 openmpi libmthca
17:18 sinatributos now ibv_devices lists my mlx4 device
17:18 dbruhn these are all of the IB packages I install
17:19 sinatributos many thanks dbruhn.
17:19 dbruhn no problem
17:22 radez joined #gluster
17:24 jclift joined #gluster
17:24 kkeithley joined #gluster
17:28 msvbhat joined #gluster
17:31 bala joined #gluster
17:32 Mo_ joined #gluster
17:32 tryggvil joined #gluster
17:35 diegows joined #gluster
17:44 tryggvil joined #gluster
17:49 ndk joined #gluster
18:07 s2r2_ joined #gluster
18:32 portante joined #gluster
18:34 ira joined #gluster
18:50 * JoeJulian is losing respect for MIT... <sigh>
18:51 semiosis ??
18:51 kkeithley what did they do now?
18:52 JoeJulian mailing list thread...
18:53 JoeJulian besides just doing things at random with a volume he seems to want to save, when asked for log files the client log starts in NOVEMBER!!!
18:53 JoeJulian 40 frigging megabyte log file... grrr.
18:54 kkeithley haha
19:01 zaitcev joined #gluster
19:03 gmi1456 joined #gluster
19:04 TrDS joined #gluster
19:10 dbruhn I wish my log files were only 40 meg from November...
19:10 JoeJulian amen
19:10 s2r2_ joined #gluster
19:12 zaitcev joined #gluster
19:13 JoeJulian I hope this doesn't read too harsh: http://permalink.gmane.org/gmane.comp.file-systems.gluster.user/14445
19:13 glusterbot Title: Gluster not recognizing available space (at permalink.gmane.org)
19:14 dbruhn Your email read fine
19:14 dbruhn I saw it in the list
19:15 zaitcev_ joined #gluster
19:15 JoeJulian I try really hard to filter out all the harsh that's really going on in my head sometimes... ;)
19:16 dbruhn And I will say I have probably appreciated that from time to time ;)
19:16 dbruhn I am with you there though, bite my young often
19:16 JoeJulian lol
19:16 JoeJulian poor kids... :D
19:17 andreask joined #gluster
19:18 JoeJulian Funniest part is that I did read that as you intended the first time and only saw the typo/autocorrect fail on second glance.
19:18 dbruhn haha
19:18 SpeeR joined #gluster
19:19 dbruhn Apple's auto correct has gotten really weird, and kind of quiet. Sometimes it waits till you are done with the sentence to correct.
19:20 JoeJulian That must be the right way to do it. That's why the walled garden exists.
19:29 johnbot11 joined #gluster
19:29 _dist joined #gluster
19:32 _dist hey there, I was wondering if anyone could walk me through A) a simple way to add back-end support (libgfapi) to a package install of qemu and libvirt (1.2.1), or B) if that's not easy, point me to a good repository (debian) that has an already-built version
19:32 _dist (or should I be bugging the qemu and libvirt guys instead)
19:33 Reikoshea1 joined #gluster
19:36 gmi1456 how quickly should a glusterfs client fail over to the second glusterfs server on a replicated volume?
19:38 _dist right away, do you have more info on the circumstance?
19:39 JoeJulian _dist: should already be built in.
19:40 _dist oh really? that's awesome. I'm using this ppa https://launchpad.net/~jacob/+archive/virtualisation right now
19:40 glusterbot Title: virtualisation : Jacob Zimmermann (at launchpad.net)
19:40 JoeJulian gmi1456, The client is always connected to each brick in the volume and handles replication from there. If a server is shutdown, the tcp connection is closed and the client will continue with the remaining servers, reconnecting when the missing server returns.
19:40 gmi1456 I have a Centos 5 x32 client that has a gluster replicated volume using the native client; the client server serves an ISO using Apache and I download that ISO while watching with iptraf to see from which gluster node the traffic is being retrieved, and then I reboot that node and my download errors
19:41 _dist are you using mount.glusterfs and not nfs?
19:41 JoeJulian If, however, the server goes away without closing the tcp connection (pulled plug), there's a 42 second ,,(ping-timeout).
19:41 glusterbot The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
19:42 gmi1456 how can I change the ping-timeout and retest?
19:42 JoeJulian Why do people always focus on the wrong problem?
19:42 JoeJulian gluster volume set help
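A minimal sketch of what that leads to, assuming a volume named "myvol" (the value is in seconds; 42 is the default discussed above):

    gluster volume set myvol network.ping-timeout 10
    gluster volume reset myvol network.ping-timeout   # put it back to the default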
19:43 gmi1456 and I don't pull the plug on the gluster node, I run "reboot" which should cleanly close all daemons and connections
19:43 JoeJulian I agree, and therein lies the real problem that needs fixed.
19:44 Reikoshea1 Are there any specific tuning options I should be looking at to reduce the gluster client's cpu usage for my PHP app? With a warm cache in front of the app, it's not a huge deal, but with a cold cache the load on the box jumps 20x. I'm not looking for my box to be idle, just under 100 load (32 cores).
19:44 JoeJulian @php
19:44 glusterbot JoeJulian: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH
19:44 glusterbot JoeJulian: --fopen-keep-cache
19:44 JoeJulian oops, trigger finger.
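A hedged example of the fuse mount options from that factoid, passing the glusterfs client directly; the server, volume and mountpoint are placeholders, and 600 is an arbitrary "HIGH" value:

    glusterfs --volfile-server=server1 --volfile-id=myvol \
        --attribute-timeout=600 --entry-timeout=600 --negative-timeout=600 \
        --fopen-keep-cache /mnt/myvol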
19:45 Reikoshea1 thanks JoeJulian, i read that article and implemented many of those suggestions
19:46 Reikoshea1 it was very helpful, just trying to prevent my clients from being unusable in the event of a cold cache
19:46 JoeJulian Reikoshea1: Check out the ,,(undocumented options). There's one for changing the behavior of "first server to respond" owning the FD.
19:46 glusterbot Reikoshea1: Undocumented options for 3.4: http://www.gluster.org/community/documentation/index.php/Documenting_the_undocumented
19:48 SpeeR joined #gluster
19:48 khushildep joined #gluster
19:50 JoeJulian Also, if you're sure the files will be created with the filename they're going to have forever, you can set cluster.lookup-unhashed off to save on negative lookup calls.
19:50 Reikoshea1 correct me if im wrong, i've only been working on this for a few days, but would that be driving up the load on my clients? i wouldn't think a read op would cause a fd to be owned.
19:50 JoeJulian Oh, this is client load? That's surprising.
19:51 Reikoshea1 yeah
19:51 SpeeR joined #gluster
19:51 Reikoshea1 i expected the server load im seeing
19:51 Reikoshea1 my client load is the one that worries me
19:51 JoeJulian sure
19:51 psyl0n joined #gluster
19:51 Reikoshea1 running replica 2, if that matters
19:51 JoeJulian perhaps test performance.client-io-threads?
19:52 SpeeR joined #gluster
19:53 Reikoshea1 maybe im looking in the wrong place, but i havent found any relevant documentation on client tuning params for 3.4.2...if you could point me in the right direction, i could come back with more...pointed questions.
19:54 Reikoshea1 most of what i've found is related to the old client, and many of the options suggested prevent the volumes from mounting
19:54 SpeeR_ joined #gluster
19:55 gmi1456 JoeJulian: I'm not sure I understand, is it something I did wrong in setting up Gluster? if my Gluster servers do not frequently die, should I leave the ping-timeout at its 42 sec default and expect 42 sec of downtime every time I reboot one Glusterfs node for planned maintenance?
19:56 SpeeR__ joined #gluster
19:56 JoeJulian gmi1456, No, figure out what's blocking the TCP FIN from reaching the network. I imagine it's something out of order in /etc/rc.d/rc0.d
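One quick way to eyeball that ordering on a CentOS 5 box (the K-numbers control shutdown order; glusterd/glusterfsd should be stopped before networking goes down):

    ls -l /etc/rc.d/rc0.d /etc/rc.d/rc6.d | egrep -i 'network|gluster'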
19:57 johnbot11 joined #gluster
19:58 SpeeR joined #gluster
19:58 dneary joined #gluster
19:59 JoeJulian Reikoshea1: I haven't seen any such guide. The devs try to make things as efficient as they can so there's not really any performance suggestions from that end since it should already be awesome. I haven't seen any good scientific user articles either.
19:59 SpeeR joined #gluster
20:01 Reikoshea1 Fair enough. I mean it's only 2-3 minutes. Worst case, i'll replicate traffic to a cold node before putting it in rotation
20:01 JoeJulian I still think setting cluster.lookup-unhashed off to save on negative lookup calls would help, if that's something your use case can allow for.
20:01 SpeeR joined #gluster
20:02 Reikoshea1 so if DHT works the same way in gluster as it does in torrents, wouldnt that essentially send all reads to all bricks in the replica?
20:03 JoeJulian See my article on dht misses are expensive to see why I think that would help.
20:03 Reikoshea1 will do
20:03 JoeJulian @lucky dht misses are expensive
20:03 glusterbot JoeJulian: http://joejulian.name/blog/dht-misses-are-expensive/
20:03 SpeeR_ joined #gluster
20:04 Reikoshea1 yeah im not using distribute in my architecture, only replica
20:04 Reikoshea1 i could DEFINITELY see the advantage in distribute
20:04 asku joined #gluster
20:04 JoeJulian Mmm, that certainly narrows down the possibilities.
20:05 SpeeR__ joined #gluster
20:05 Reikoshea1 yeah, the architecture we came up with is very isolated
20:05 Reikoshea1 but the http traffic is exceptionally high inside each isolation pool
20:06 JoeJulian Try testing disabling read-ahead and/or quick-read.
20:06 SpeeR joined #gluster
20:06 Reikoshea1 those options are definitely on :), ill give it a whirl
20:07 JoeJulian Those are the only two that I can think of that are worth testing though.
20:07 Reikoshea1 two is better than none
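A hedged sketch of trying the options mentioned in this exchange; they can be set and reset live with the volume running ("myvol" is assumed):

    gluster volume set myvol performance.read-ahead off
    gluster volume set myvol performance.quick-read off
    gluster volume set myvol performance.client-io-threads on
    gluster volume set myvol cluster.lookup-unhashed off   # only if files keep their original names
    gluster volume reset myvol performance.read-ahead      # undo any single option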
20:08 SpeeR_ joined #gluster
20:09 Reikoshea1 is there a better way to do this than updating the vols file and stop-starting the volume?
20:09 Reikoshea1 i feel lime im doing it wrong
20:09 Reikoshea1 like*
20:10 lpabon joined #gluster
20:12 JoeJulian Yeah.. gluster volume set help
20:13 semiosis Reikoshea1: ^^ & see also ,,(undocumented options)
20:13 glusterbot Reikoshea1: Undocumented options for 3.4: http://www.gluster.org/community/documentation/index.php/Documenting_the_undocumented
20:13 Reikoshea1 lol, thanks, i knew i had to be doing it wrong
20:16 gmi1456 JoeJulian:ok, thanks
20:20 s2r2__ joined #gluster
20:28 _dist left #gluster
20:28 johnbot11 joined #gluster
20:29 GLHMarmot left #gluster
20:30 Reikoshea1 read-ahead helped a ton, thanks for that, dropped the load on the boxes by 75%
20:34 Reikoshea1 JoeJulian: 5k rps on a default wordpress install running on gluster, on unconfigured apache, unconfigured mysql, and only updated the realpath cache to 64k and installed APC. Granted, wordpress was running on a beast of a box, but still pretty impressive for only tuning the back-end server.
20:35 Reikoshea1 no caching mechanism in front of the apache server for that test either
20:35 JoeJulian nice
20:35 JoeJulian Are you going to blog about your findings?
20:35 Reikoshea1 very likely, the amount of vendor supplied hardware i have is pretty substantial
20:36 JoeJulian excellent. Let me know when it's up and we'll get it syndicated.
20:36 Reikoshea1 md1200, fusion iodrive2, and emc vmax are a few of my storage arrays
20:36 Reikoshea1 so hopefully this coupled with pretty iozone graphs should make for a good read
20:50 badone joined #gluster
20:56 zapotah joined #gluster
20:56 zapotah joined #gluster
21:02 rwheeler joined #gluster
21:03 johnbot11 joined #gluster
21:03 JonnyNomad joined #gluster
21:23 marcinEF joined #gluster
21:58 zapotah joined #gluster
21:58 zapotah joined #gluster
22:07 ira joined #gluster
22:17 pravka joined #gluster
22:18 glusterbot New news from newglusterbugs: [Bug 1045309] "volfile-max-fetch-attempts" was not deprecated correctl.. <https://bugzilla.redhat.com/show_bug.cgi?id=1045309>
22:29 pravka joined #gluster
22:36 RicardoSSP joined #gluster
22:37 psyl0n joined #gluster
22:38 pravka joined #gluster
22:38 SpeeR joined #gluster
22:57 sroy joined #gluster
23:17 gdubreui joined #gluster
23:30 sroy joined #gluster
23:42 theron joined #gluster
23:46 rwheeler joined #gluster
23:48 raghug joined #gluster
