IRC log for #gluster, 2013-05-17


All times shown according to UTC.

Time Nick Message
00:32 yinyin joined #gluster
00:52 jag3773 joined #gluster
00:52 token joined #gluster
00:52 92AAAM1YK joined #gluster
00:52 nat joined #gluster
00:52 abelur joined #gluster
00:52 mriv joined #gluster
00:52 gluslog joined #gluster
00:52 __NiC joined #gluster
00:52 stoile joined #gluster
00:52 tru_tru joined #gluster
00:52 samppah joined #gluster
00:52 irk joined #gluster
00:52 mtanner joined #gluster
00:52 errstr joined #gluster
00:52 atrius joined #gluster
00:52 glusterbot joined #gluster
00:52 martin2_1 joined #gluster
00:52 kbsingh joined #gluster
00:52 Peanut joined #gluster
00:52 tjikkun_work joined #gluster
00:52 fleducquede joined #gluster
00:52 kkeithley joined #gluster
00:52 GLHMarmo1 joined #gluster
00:52 purpleidea joined #gluster
00:52 MattRM joined #gluster
00:52 H__ joined #gluster
00:52 lanning joined #gluster
00:52 larsks joined #gluster
00:52 johnmorr_ joined #gluster
00:52 bdperkin joined #gluster
00:52 avati joined #gluster
00:52 smellis joined #gluster
00:52 bstr_work joined #gluster
00:52 jiffe99 joined #gluster
00:52 helloadam joined #gluster
00:52 DEac- joined #gluster
00:52 tjikkun joined #gluster
00:52 NuxRo joined #gluster
00:52 mynameisbruce__ joined #gluster
00:52 theron joined #gluster
00:52 eightyeight joined #gluster
00:52 SteveCooling joined #gluster
00:52 sjoeboo_ joined #gluster
00:52 MinhP_ joined #gluster
00:52 xavih joined #gluster
00:52 NeatBasis joined #gluster
00:52 haakon_ joined #gluster
00:52 redsolar joined #gluster
00:52 x4rlos joined #gluster
00:52 waldner joined #gluster
00:52 dmojorydger joined #gluster
00:52 Shdwdrgn joined #gluster
00:52 cfeller joined #gluster
00:52 statix_ joined #gluster
00:52 ThatGraemeGuy joined #gluster
00:52 jurrien_ joined #gluster
00:52 premera joined #gluster
00:52 jcastle joined #gluster
00:52 hflai joined #gluster
00:52 phix joined #gluster
00:52 Ramereth joined #gluster
00:52 jiqiren joined #gluster
00:52 vex joined #gluster
00:52 hybrid512 joined #gluster
00:52 al joined #gluster
00:52 dxd828 joined #gluster
00:52 karoshi joined #gluster
00:52 bleon joined #gluster
00:52 yosafbridge joined #gluster
00:52 mohankumar joined #gluster
00:52 daMaestro joined #gluster
00:52 jthorne joined #gluster
00:52 abyss^ joined #gluster
00:52 m0zes joined #gluster
00:52 the-me joined #gluster
00:52 andrei__ joined #gluster
00:52 neofob joined #gluster
00:52 hchiramm__ joined #gluster
00:52 vigia joined #gluster
00:52 ninkotech__ joined #gluster
00:52 wN joined #gluster
00:52 lkoranda joined #gluster
00:52 duerF joined #gluster
00:52 fidevo joined #gluster
00:52 jclift_ joined #gluster
00:52 robos joined #gluster
00:52 foster_ joined #gluster
00:52 xymox joined #gluster
00:52 msvbhat_ joined #gluster
00:52 zykure joined #gluster
00:52 ehg_ joined #gluster
00:52 atrius- joined #gluster
00:52 Guest3022 joined #gluster
00:52 lh joined #gluster
00:52 war|child joined #gluster
00:52 wgao__ joined #gluster
00:52 Skunnyk joined #gluster
00:52 Nagilum_ joined #gluster
00:52 Guest82024 joined #gluster
00:52 cicero joined #gluster
00:52 sr71 joined #gluster
00:52 zwu joined #gluster
00:52 lkthomas joined #gluster
00:52 _Bryan_ joined #gluster
00:52 partner joined #gluster
00:52 stigchristian joined #gluster
00:52 mjrosenb joined #gluster
00:53 badone joined #gluster
00:53 warthog9 joined #gluster
00:53 chirino_m joined #gluster
00:53 Foo1 joined #gluster
00:53 jbrooks joined #gluster
00:53 hagarth joined #gluster
00:53 MrNaviPacho joined #gluster
00:53 soukihei joined #gluster
00:53 twx_ joined #gluster
00:53 lyang0 joined #gluster
00:53 Gugge joined #gluster
00:53 Goatbert joined #gluster
00:53 flrichar joined #gluster
00:53 Rorik_ joined #gluster
00:53 awheeler_ joined #gluster
00:53 tqrst joined #gluster
00:53 JZ_ joined #gluster
00:53 hchiramm_ joined #gluster
00:53 shanks` joined #gluster
00:53 jds2001 joined #gluster
00:53 msmith__ joined #gluster
00:53 snarkyboojum_ joined #gluster
00:53 balunasj|away joined #gluster
00:53 georgeh|workstat joined #gluster
00:53 VSpike joined #gluster
00:53 johnmark joined #gluster
00:53 eryc joined #gluster
00:53 Chiku|dc joined #gluster
00:53 jim` joined #gluster
00:53 dblack joined #gluster
00:53 DWSR joined #gluster
00:53 thekev joined #gluster
00:53 roo9 joined #gluster
00:53 semiosis joined #gluster
00:53 hagarth__ joined #gluster
00:53 chlunde_ joined #gluster
00:53 ingard__ joined #gluster
00:53 tdb- joined #gluster
00:53 arusso joined #gluster
00:53 JordanHackworth_ joined #gluster
00:53 VeggieMeat joined #gluster
00:53 avati_ joined #gluster
00:53 codex joined #gluster
00:53 pull joined #gluster
00:53 ndevos joined #gluster
00:53 Kins joined #gluster
00:53 NeonLich1 joined #gluster
00:53 cyberbootje1 joined #gluster
00:53 red_solar joined #gluster
00:53 juhaj joined #gluster
00:53 warthog9 joined #gluster
00:59 yinyin joined #gluster
01:16 hchiramm__ joined #gluster
01:20 awheeler joined #gluster
01:25 Uzix joined #gluster
01:32 kevein joined #gluster
01:33 majeff joined #gluster
01:47 hchiramm__ joined #gluster
01:49 FyreFoX joined #gluster
01:51 majeff joined #gluster
01:59 awheeler joined #gluster
02:02 majeff left #gluster
02:06 yinyin joined #gluster
02:07 harish joined #gluster
02:16 majeff joined #gluster
02:31 Supermathie I think the write-behind translator is leading to inconsistent files under 3.3.1.
02:32 Supermathie Exciting news for a Friday night
02:35 wNz joined #gluster
02:37 harish joined #gluster
02:44 bharata joined #gluster
02:46 awheeler joined #gluster
02:48 Supermathie If I set a gluster client mount option, does it take effect immediately or do you need to remount?
02:58 bala joined #gluster
03:03 vex so I added a couple of bricks to a volume - but the clients aren't seeing the additional space?
03:04 vex or rather - df shows that the space has been added - but only a very small % is available to the client
03:07 vex do I have to rebalance?
03:08 Supermathie vex: you'll need to rebalance and fix-layout ISTR
03:09 vex hmm
03:11 vex that seems to be doing something. Slowly.
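A minimal sketch of the rebalance sequence being discussed, using the 3.3-era CLI; the volume name "myvol" stands in for vex's actual volume:

    gluster volume rebalance myvol fix-layout start
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status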
03:19 saurabh joined #gluster
03:39 robos joined #gluster
03:46 yinyin joined #gluster
03:48 anands joined #gluster
03:52 awheeler joined #gluster
03:57 fidevo joined #gluster
03:59 awheeler joined #gluster
04:12 anands joined #gluster
04:37 pRTGv joined #gluster
04:40 hagarth joined #gluster
04:41 sgowda joined #gluster
04:51 deepakcs joined #gluster
04:55 vpshastry joined #gluster
04:56 raghu joined #gluster
04:57 B6i7s joined #gluster
05:03 anands joined #gluster
05:06 satheesh joined #gluster
05:06 satheesh1 joined #gluster
05:09 bulde joined #gluster
05:09 yinyin joined #gluster
05:09 awheeler joined #gluster
05:10 shylesh joined #gluster
05:14 kshlm joined #gluster
05:29 rotbeard joined #gluster
05:55 lalatenduM joined #gluster
05:56 ricky-ticky joined #gluster
06:10 puebele joined #gluster
06:16 jtux joined #gluster
06:18 glusterbot New news from newglusterbugs: [Bug 962350] leak in entrylk <http://goo.gl/9Y3JQ>
06:26 StarBeast joined #gluster
06:26 majeff1 joined #gluster
06:29 vshankar joined #gluster
06:29 puebele joined #gluster
06:35 satheesh joined #gluster
06:38 ngoswami joined #gluster
06:39 ollivera_ joined #gluster
06:47 anands joined #gluster
06:49 edong23 joined #gluster
06:49 hchiramm__ joined #gluster
06:52 ctria joined #gluster
06:53 andreask joined #gluster
06:57 guigui3 joined #gluster
06:59 jules_ joined #gluster
07:06 sgowda joined #gluster
07:06 vpshastry1 joined #gluster
07:14 andreask joined #gluster
07:16 hchiramm__ joined #gluster
07:24 satheesh1 joined #gluster
07:31 Uzix joined #gluster
07:31 ekuric joined #gluster
07:35 majeff joined #gluster
07:35 odXTx joined #gluster
07:35 vpshastry joined #gluster
07:50 Rhomber joined #gluster
07:51 sgowda joined #gluster
07:51 Rhomber Has anyone had any success setting up an NFS gluster mount in fstab?
07:51 Rhomber I get "requested NFS version or transport protocol is not supported" on boot, but mount -a works fine
07:51 glusterbot Rhomber: make sure your volume is started. If you changed nfs.disable, restarting your volume is known to work.
07:51 Rhomber the options im using are: defaults,_netdev,vers=3,mountproto=tcp
07:52 ekuric joined #gluster
07:52 Rhomber thanks bot, any real advice?
07:54 Rhomber this is on Centos 6.4 and gluster was installed from the repos as documented in Quick Start
07:54 puppen joined #gluster
07:56 redsolar joined #gluster
08:00 shireesh joined #gluster
08:01 andreask Rhomber: try "proto" instead of "mountproto"
08:02 Rhomber ok, will try :)
08:02 Rhomber thanks
08:03 Rhomber same issue
08:04 Rhomber any other ideas? :(
08:04 hchiramm__ joined #gluster
08:06 andreask how does your fstab entry exactly look like?
08:06 puebele joined #gluster
08:06 Rhomber localhost:/shopzgv0     /var/shopz/shared       nfs     defaults,_netdev,vers=3,proto=tcp 0 0
08:07 Rhomber (and as I say, once it's booted up.. mount -a works fine)
08:07 andreask oh ... don't do this with nfs
08:07 Rhomber umm?
08:07 Rhomber the FAQ says to
08:07 andreask a local mount
08:07 Rhomber i had terrible performance with glusterfs
08:08 Rhomber like 12 seconds to load vs 2
08:08 Rhomber apparently NFS is better for web servers.. lots of small files
08:08 andreask yeah, but local mounts of nfs still having the same problem as with any other NFS server
08:09 Rhomber to be honest, i completely scrapped gluster.. and tried to find another option.. but there was none.. so im only using it as i've got no choice (no offence)
08:09 Rhomber which is?
08:11 andreask it can deadlock under high systemload
08:11 andreask one sec ...
08:12 Rhomber ok, well it doesn't work anyway.. and i'm not really keen to put it in /etc/rc.local as a mount command.. seems wrong
08:12 Rhomber im happy to go with any option.. as long as it's fast
08:12 ninkotech joined #gluster
08:12 ninkotech_ joined #gluster
08:13 Rhomber I have 3 nodes.. and last time I setup a 4th node.. so I could do replica 2... this time.. I have just gone replica 3 with 3 nodes.. PRAYING that I get better read performance.. and praying that it serves files from the local disk.. All I really want is file replication .. not extra space
08:23 Rhomber sorry just wasted 3 days on this
08:23 Rhomber andreask: Any advice would be greatly appreciated :)
08:25 andreask Rhomber: here is a nice description oft the nfs mount problem ... http://goo.gl/SxfRb
08:25 glusterbot Title: Mounting NFS Over Loopback Result in Hang | HP® Support (at goo.gl)
08:26 GabrieleV joined #gluster
08:26 andreask Rhomber: hmm ... so aggressive caching was not sufficient for you?
08:31 satheesh joined #gluster
08:35 majeff joined #gluster
08:37 Rhomber andreask: Ok, i've switched back to regular gluster
08:37 Rhomber I haven't really tweaked the gluster settings..
08:37 Rhomber but will replica 3 with 3 servers.. mean the file is served locally?
08:38 Rhomber I have also changed magento from being stored on the gluster share to being on the local fs.. so only images are on gluster now (but there are 10 Gb of those... and the thumbnail cache)
08:38 Rhomber I am about to test this setup (with magento being local)
08:39 ramkrsna joined #gluster
08:39 ramkrsna joined #gluster
08:41 Rhomber seems ok :)
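For comparison with the NFS fstab line above, a native-client mount of the same volume would look roughly like this (same volume and mount point as in Rhomber's entry; requires the glusterfs-fuse package):

    localhost:/shopzgv0  /var/shopz/shared  glusterfs  defaults,_netdev  0 0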
08:41 Rhomber but what are these 'aggressive caching options', can you point me to a tutorial?
08:49 red_solar joined #gluster
08:50 satheesh joined #gluster
08:52 vpshastry left #gluster
08:56 andreask Rhomber: you can do a lot in your application and by using some sort of reverse proxies ... Joe Julians blog has a lot of good hints http://goo.gl/xRlIA
08:56 vpshastry joined #gluster
09:01 Foo1 Hi! Is a fail over time out of ~45 seconds normal?
09:01 Foo1 if one node crashes the client can't write data for 45 seconds to the other node...
09:02 Foo1 I wanted to use gluster for storing my VM's but with such a failover time it's crashing my VM's
09:08 andreask yes ... its 42s
09:09 Foo1 Is it possible to decrease the fail-over time? As I'm trying to use glusterfs as my storage for hosting VM's this requires it to be way lower
09:10 andreask sure ... not recommended but possible
09:10 Zengineer joined #gluster
09:10 clutchk1 joined #gluster
09:10 Foo1 How could I configure the fail-over time ?
09:12 airbear joined #gluster
09:13 andreask the tuning option is called network.ping-timeout
09:13 Guest26928 left #gluster
09:14 andreask but be sure to read about it in the admin guide and understand its implications
09:14 Foo1 where is this time-out, 42 seconds, based on?
09:14 dan_a joined #gluster
09:15 Foo1 Is it possible to have a failover VM storage with the glusterfs beta?
09:16 Airbear_ joined #gluster
09:16 anands joined #gluster
09:19 glusterbot New news from newglusterbugs: [Bug 961668] gfid links inside .glusterfs are not recreated when missing, even after a heal <http://goo.gl/4vuYc>
09:19 NuxRo Foo1: I believe what you're looking for is network.ping-timeout , but be advised the devs do not recommend modifying this, do thorough tests before using in production
09:20 andreask there are quite some improvements in 3.4 to work better with VM images ... but that timeout behavior is still the same
09:21 Foo1 Thanks a lot guys, gotten some more insight on it now :)
09:21 Foo1 Reading something about mounting shares with " --disable-direct-io-mode"  and setting the time-out to 10 seconds. Gonna give it a shot :)
09:22 NuxRo Foo1: care to share?
09:22 Foo1 sure
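The timeout Foo1 is tuning is a per-volume option; a sketch with "myvol" as a placeholder volume name (the 42-second default exists to avoid needless failovers, so lowering it is a trade-off, as noted above):

    gluster volume set myvol network.ping-timeout 10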
09:24 Rhomber andreask: Oh, I thought you meant in gluster.. sure.. my application is very aggressively cached.... varnish frontend and redis used internally as well
09:24 lh joined #gluster
09:24 lh joined #gluster
09:25 andreask Rhomber: there are  some gluster tunings too but not that much
09:25 Rhomber fair enough, it seems ok now.. having all of the .php classes on gluster was probably my main issue
09:26 Rhomber while it's nice to have them synced, it's not mandatory
09:26 puebele joined #gluster
09:43 Foo1 joined #gluster
09:46 puebele joined #gluster
09:54 Airbear_ Hi, I have an 8 node 3.3.1 cluster with a 16x2 distribute-replicate volume - I think that's how to describe it.  gluster volume status shows that the NFS server and Self-heal daemon are down on 6 out of 8 hosts.  If I issue `gluster volume heal datastore', I receive message: "Self-heal daemon is not running. Check self-heal daemon log file.".  ps on the same host shows that there is a glusterfs
09:54 Airbear_ process running the glustershd volfile.  The glustershd log on that node shows some errors for connections to bricks which are offline and 'W [socket.c:1876:socket_server_event_handler] 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported' and otherwise only 'I' level messages.
09:54 Airbear_ Can anybody offer advice on how to get self-heal daemon up and running?  Thanks.
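One approach commonly suggested for a stopped self-heal daemon on 3.3, though it is not offered in this log and so is only an assumption, is to force-start the volume, which respawns any missing NFS, self-heal and brick processes ("datastore" is the volume name Airbear_ mentions):

    gluster volume start datastore force
    gluster volume status datastore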
09:55 duerF joined #gluster
09:56 harish joined #gluster
09:57 Foo1 joined #gluster
09:59 Foo1 joined #gluster
10:01 Foo1 @NuxRo and other; we've succeeded in our implementation! With a timeout of 10 seconds the VM's still works! The VM's freeze for 10 seconds but after that they still function :)... Now we're gonna do some more testing.
10:01 Foo1 now lunch! brb
10:10 harish joined #gluster
10:11 anands joined #gluster
10:12 H__ joined #gluster
10:17 vshankar joined #gluster
10:18 andrei__ joined #gluster
10:20 hchiramm__ joined #gluster
10:28 dan_a Hi all, is it possible to find files requiring a self heal without triggering that self heal? I've read http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/ but can't figure out how to relate that to the attrs on the files on our bricks
10:28 glusterbot <http://goo.gl/Bf9Er> (at hekafs.org)
10:30 anands joined #gluster
10:30 andreask joined #gluster
10:32 ndevos dan_a: http://review.gluster.org/#/c/4216/1/doc/split-brain.txt,unified tries to explain it a little
10:32 glusterbot <http://goo.gl/udd7X> (at review.gluster.org)
10:34 dan_a ndevos: Looks perfect, thanks
10:34 ndevos dan_a: great!
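A sketch of inspecting pending-heal state without kicking off a heal, assuming a replica volume named "myvol" with a brick under /export/brick1 (both illustrative); non-zero trusted.afr.* counters on a brick copy indicate operations pending against the other replica:

    gluster volume heal myvol info
    getfattr -m . -d -e hex /export/brick1/path/to/file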
10:36 andrei__ hello guys
10:36 andrei__ could some one help me with the nfs over glusterfs setup?
10:37 andrei__ I seems to be hitting a bottleneck on the glusterfs process
10:37 andrei__ when I have several parallel reads from the clients
10:37 andrei__ that glusterfs process is responsible for nfs handling
10:37 andrei__ it consumes 100% cpu
10:37 andrei__ however, the server itself is around 80% idle
10:38 andrei__ is there a way to have multiple glusterfs nfs processes running at the same time?
10:55 piotrektt joined #gluster
10:56 lpabon joined #gluster
10:59 puebele1 joined #gluster
11:01 rastar joined #gluster
11:05 andreask joined #gluster
11:12 y4m4 joined #gluster
11:12 edward1 joined #gluster
11:31 dan_a joined #gluster
11:31 karoshi joined #gluster
11:37 dustint joined #gluster
11:38 andreask joined #gluster
11:39 fps left #gluster
11:43 puebele1 joined #gluster
11:51 rotbeard joined #gluster
11:55 bala1 joined #gluster
12:03 puebele1 joined #gluster
12:06 lbalbalba joined #gluster
12:08 vshankar joined #gluster
12:17 vpshastry1 joined #gluster
12:26 aliguori joined #gluster
12:39 shireesh joined #gluster
12:44 spider_fingers joined #gluster
12:46 andrei__ hello guys
12:47 andrei__ does anyone know why am I seeing these entries in the nfs.log file every second: [2013-05-17 13:46:31.632702] I [client.c:2090:client_rpc_notify] 0-secondary-ip-client-0: disconnected
12:47 andrei__ i have three volumes and i am seeing these entries for all three volumes every second?
12:49 andrei__ i am also seeing these in the etc-glusterfs-glusterd.vol.log file:
12:49 andrei__ [2013-05-17 13:48:44.261687] I [socket.c:1798:socket_event_handler] 0-transport: disconnecting now
12:49 harish joined #gluster
12:49 andrei__ also every second
12:54 awheeler joined #gluster
13:01 jack joined #gluster
13:03 lbalbalba andrei__: assuming you are using nfs ?
13:04 robos joined #gluster
13:05 awheeler_ joined #gluster
13:06 andrei__ yes, i am
13:06 andrei__ but this happens even if I disconnect all the nfs clients
13:09 purpleidea joined #gluster
13:09 purpleidea joined #gluster
13:12 plarsen joined #gluster
13:13 Airbear joined #gluster
13:28 lbalbalba andrei__: could it be this bug ? : https://bugzilla.redhat.com/show_bug.cgi?id=847821
13:28 glusterbot <http://goo.gl/gJor4> (at bugzilla.redhat.com)
13:28 glusterbot Bug 847821: low, medium, ---, rabhat, ASSIGNED , After disabling NFS the message "0-transport: disconnecting now" keeps appearing in the logs
13:33 clag_ joined #gluster
13:34 bennyturns joined #gluster
13:35 rwheeler joined #gluster
13:35 MrNaviPacho joined #gluster
13:42 majeff joined #gluster
13:43 kaptk2 joined #gluster
13:44 hchiramm__ joined #gluster
13:56 spider_fingers joined #gluster
13:56 harish joined #gluster
14:00 purpleid1a joined #gluster
14:01 enseven joined #gluster
14:01 plarsen joined #gluster
14:04 wushudoin joined #gluster
14:04 wushudoin left #gluster
14:06 enseven Hi all! My "/" filesystem ran full on one node. Now my peers/*, nfs/nfs-server.vol and glustershd/glustershd-server.vol files are size 0. glusterd does not start. How can I recover and rejoin the node after freeing "/"?
14:06 ricky-ticky joined #gluster
14:06 portante|ltp joined #gluster
14:06 mohankumar joined #gluster
14:08 JoeJulian enseven: just rsync peers and vols from another peer
14:09 JoeJulian hrm.. but..
14:09 enseven JoeJulian: yes?
14:09 JoeJulian you'll end up with a peer file for itself that you'll have to remove
14:10 JoeJulian And one will be missing for the peer that you copy from. If you have more than two peers you can rsync the missing one from another peer.
14:10 rwheeler joined #gluster
14:11 JoeJulian ... or just use a peer file as a template for creating the missing one.
14:11 enseven I have just 2 peers. One of them is dead now.
14:11 JoeJulian protip: Always put /var/log on it's own partition. ;)
14:11 enseven Are the .vol files allover the same?
14:12 JoeJulian yes
14:12 enseven Ok! So I just have to reconstruct one missing peer file?
14:13 JoeJulian yes
14:13 JoeJulian You might be able to leave peers empty and probe from the good server, but I've not tested that theory.
14:13 enseven I have got uuid=, state= and hostname1= in it. What does they mean?
14:13 johnmorr joined #gluster
14:14 JoeJulian uuid is in /var/lib/glusterd/glusterd.info
14:14 enseven Check! ;)
14:14 enseven What about state= ?
14:15 JoeJulian state... I don't know the meanings, just leave it as whatever's there.
14:15 enseven The alive node has state=3, so I set it alike?
14:16 JoeJulian yes
14:16 enseven Thanks a lot! I'll give that a try now! :-)
14:16 JoeJulian ~glossary | enseven
14:16 glusterbot enseven: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
14:21 jtux joined #gluster
14:23 enseven JoeJulian: Success! Thanks a lot! I consider this a procedure that should be documented in "Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf". Who is responsible for that? Do you know?
14:24 JoeJulian enseven: Well... It's in the source tree... See: ,,(hack)
14:24 glusterbot enseven: The Development Work Flow is at http://goo.gl/ynw7f
14:24 JoeJulian Though I want to find some time to pull it out of the source tree and reformat it in asciidoc.
14:25 manik joined #gluster
14:26 enseven Good Idea! :-) ... Thanks!
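A rough sketch of the recovery procedure JoeJulian walked through, with "good-node" as an illustrative hostname; check each step against your own layout before running it:

    service glusterd stop
    rsync -av good-node:/var/lib/glusterd/vols/  /var/lib/glusterd/vols/
    rsync -av good-node:/var/lib/glusterd/peers/ /var/lib/glusterd/peers/
    # remove the peer file that describes this node itself; its UUID is in /var/lib/glusterd/glusterd.info
    rm /var/lib/glusterd/peers/<this node's UUID>
    # recreate the peer file for good-node if it is missing, using another peer file as a template:
    #   uuid=<good-node's UUID, from its glusterd.info>
    #   state=3
    #   hostname1=good-node
    service glusterd start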
14:27 andrei__ hello guys
14:27 andrei__ does anyone know if you can use multiple threads to handle nfs over gluster?
14:27 andrei__ i am seeing 100% useage of the glusterfs process that handles nfs
14:28 andrei__ which limits the server speed
14:28 andrei__ however, the server itself is 70% idle
14:28 andrei__ it would be great if this process could use  multiple cpus
14:28 andrei__ or gluster could spawn multiple processes to handle nfs
14:28 andrei__ is that currently possible?
14:28 JoeJulian Supermathie had the same thoughts. I don't know of any way to do that though.
14:29 JoeJulian I wonder which translator is the bottleneck...
14:30 nueces joined #gluster
14:30 andrei__ JoeJulian: if I look at the top, there is one process using 100% cpu
14:30 andrei__ ps aux tells me that it's the nfs process that glusterfs uses
14:31 harold_ joined #gluster
14:31 andrei__ i think on a typical generic nfs server there is a way to specify a number of nfsd instances that are run at one time
14:31 andrei__ i am looking for something similar with gluster
14:35 andrewjs1edge joined #gluster
14:38 bugs_ joined #gluster
14:44 mgebbe_ joined #gluster
14:44 enseven left #gluster
14:45 dumbda joined #gluster
14:46 dumbda Hi, when trying to mount gluster share i get this error 0-glusterfs XDR decoding error, 0-mgmt: failed to fetch volume file (key:/autosupport)
14:47 failshell joined #gluster
14:47 dumbda autosupport is the name of the share on the server
14:47 dumbda share(volume)
14:50 rastar joined #gluster
14:53 spider_fingers left #gluster
14:58 JoeJulian dumbda: Looks like you're mixing versions.
14:59 dumbda hmm let me see
15:03 majeff joined #gluster
15:03 dumbda Thank you sir you were right.
15:03 dumbda on the client it is 3.2.5
15:03 dumbda and server 3.3.1
15:03 dumbda hm now i need to upgrade somehow
15:04 JoeJulian @yum repo
15:04 glusterbot JoeJulian: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
15:04 JoeJulian @ppa
15:04 glusterbot JoeJulian: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
15:05 dumbda yeah i need ubuntu
15:06 dumbda Thank you
15:06 JoeJulian You're welcome
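The mismatch dumbda hit (a 3.2.5 client against a 3.3.1 server) is quick to confirm before upgrading:

    glusterfs --version    # on the client
    gluster --version      # on the server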
15:08 wN joined #gluster
15:09 purpleidea joined #gluster
15:11 andrei__ guys, is it common to see glusterfsd process consuming over 200% cpu when you have data written to the volume?
15:12 lbalbalba ~ports | lbalbalba
15:12 glusterbot lbalbalba: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
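A sketch of matching iptables rules for a 3.3 server; the brick range 24009-24020 assumes roughly a dozen bricks and should be widened to match your own brick count:

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT
    iptables -A INPUT -p tcp --dport 24009:24020 -j ACCEPT
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT
    iptables -A INPUT -p udp --dport 111 -j ACCEPT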
15:12 lbalbalba lbalbalba he. glusterbot is cool
15:12 * lbalbalba hugs glusterbot
15:12 lbalbalba ~yum repo | lbalbalba
15:12 glusterbot lbalbalba: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
15:12 lbalbalba nifty
15:13 portante` joined #gluster
15:16 kkeithley just fyi, for 3.3.2 and 3.4.0 the fedora and epel repos will be on download.gluster.org and I'll retire my fedorapeople.org repo.
15:16 portante joined #gluster
15:17 jthorne joined #gluster
15:27 thebishop joined #gluster
15:29 rb2k joined #gluster
15:29 rb2k Hey, quick question:
15:29 rb2k This is from the docs "After peer probe, in the remote machine, the peer machine information is stored with IP address instead of hostname."
15:29 rb2k does this mean that in the machine that I issue the "peer probe" command, the peer status will display the other peer
15:29 rb2k 's hostname
15:30 rb2k and if I do a gluster peer status on the other machine, I will see the initial machine's IP?
15:35 Keawman joined #gluster
15:35 perfectsine joined #gluster
15:36 Keawman has anyone here had issues with gluster on the client side giving Transport endpoint is not connected errors?
15:36 ladd joined #gluster
15:37 semiosis Keawman: everyone's had that problem.  it's usually iptables or hostname resolution problems (or your volume isnt started)
15:37 semiosis Keawman: your client log file should have some indication of what the problem is
15:37 Keawman semiosis, ok
15:39 JoeJulian rb2k: That is correct. Probe the first server from any other peer to correct that.
15:39 rb2k and it won't complain that the server is already part of the pool?
15:39 JoeJulian correct
15:39 rb2k oh, interesting
15:41 ndevos @hostnames
15:41 glusterbot ndevos: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
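A sketch of the re-probe glusterbot describes, with "server1" standing in for the peer currently listed by IP; run it from any other peer and then confirm:

    gluster peer probe server1
    gluster peer status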
15:46 perfectsine Hello,  I've rebooted all my nodes and on 2 of them the glusterfsd process isn't starting and the volume status shows "N" for that brick.  Any ideas?  Gluster 3.3.1 on redhat 5.4
15:47 majeff1 joined #gluster
15:48 JoeJulian have you looked at your logs for clues/
15:48 JoeJulian ?
15:49 perfectsine JoeJulian: is there a specific log file it would be located in?
15:50 thebishop hi i'm having trouble creating a volume.  The gluster volume create command finishes without an error or confirmation.  can someone look at my logs here: http://tny.cz/cd15d11e ?
15:50 glusterbot Title: gluster_wtf - cd15d11e (at tny.cz)
15:51 JoeJulian thebishop: I've seen that behavior before. Restarting glusterd solved it.
15:52 JoeJulian perfectsine: /var/log/glusterfs has logs for glusterd and under bricks/ for the bricks. I'd probably check glusterd's log first, then if there was no decent hints there I'd try the brick log.
15:52 Rhomber not a great second impression of gluster.. i left the process run.. warming my site, which has some 30,000 URLs.. and generates thumbnails and smaller images and such as it goes.. and came back to find the website broken.. because not even the htaccess file could be read...
15:53 Rhomber [root@shopz-1 /]# ls /var/shopz/shared/nginx/htpasswd
15:53 Rhomber ls: cannot access /var/shopz/shared/nginx/htpasswd: Input/output error
15:53 Rhomber and so on, for any file/dir in the share
15:53 perfectsine JoeJulian:  0-storage0-posix: mismatching volume-id (XXXXXXXXXXX) received. already is a part of volume
15:53 perfectsine I found that
15:53 Rhomber what the heck?.. I assume gluster is too immature for prime time?
15:53 JoeJulian Rhomber: You know what happens when you assume.
15:54 mynameisbruce_ joined #gluster
15:54 mynameisbruce joined #gluster
15:54 Rhomber well, forgive me for making that assumption that 'anything' would just do that
15:54 JoeJulian perfectsine: What that's saying is that the volume-id extended attribute doesn't match the uuid of the defined volume. Perhaps there's a mount issue for the underlying storage?
15:54 Rhomber all the peers are contactable..  so, im not sure what happened
15:54 Rhomber doesn't inspire me with confidence at all
15:55 JoeJulian Rhomber: have you checked the logs?
15:55 Rhomber a little, but im new so im not sure exactly what to look for
15:55 Rhomber any pointers?
15:56 JoeJulian Start with " E " in the client log (/var/log/glusterd/{mount-point-directory}.log
15:57 JoeJulian Also check "gluster volume heal $vol info split-brain" (that's my guess what's wrong)
15:57 Rhomber i saw that log, but im kinda new so not sure what it all means
15:57 Rhomber i'll run that command
15:57 JoeJulian If it is split-brained, then you have to come back to the question, how did your volume get that way...
15:58 Rhomber to summarise, they all say: Number of entries: 0
15:58 Rhomber so im guessing it's not?
15:59 JoeJulian fpaste the client log (or at most the last 100 lines of it)
16:00 Rhomber http://pastebin.com/MnyjkUsn
16:00 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
16:00 Rhomber sorry i did it before you said 100
16:00 Rhomber if you need more i'll do-over
16:01 Rhomber err, is the pb ok still?
16:01 JoeJulian I'll do for now
16:01 Rhomber cool, ta
16:01 JoeJulian it... gah
16:03 Rhomber the warmer hits like.. http://shopz.com.au/the-dvd.html  ... http://shopz.co.nz/the-dvd.html   .. http://myshopz.com/the-dvd.html ...  in sequence on the same thread... so it shouldn't try and generate them at the same time on different nodes...  but.. related product images might be shared, so thats the only thing i could see creating a file on two nodes with the same name....  i saw errors in my app logs saying it couldn't find the image.. but when i looked it was there.. so no idea
16:03 Rhomber but.. is that what broke it?
16:04 JoeJulian Looks like it's trying to heal something that's not broken...
16:05 JoeJulian unless....
16:05 perfectsine JoeJulian: How would it be getting a mismatching UUID?  I checked it's gluster.info file against the other nodes and it has the right UUID.
16:06 JoeJulian Could multiple clients be doing the same mkdir? That might explain the "File exists" messges
16:06 JoeJulian perfectsine: gluster.info is the server's uuid.
16:06 JoeJulian perfectsine: getfattr -m . -d -e hex $brick-mount-path
16:07 Rhomber potentially, as i said.. a product page itself won't be done in parallel... but images on the page might occur on another product being warmed at the same time
16:07 Rhomber umm
16:07 JoeJulian Gah, we're leaving for Portland and the wife wants me to be helping get the car loaded. I'll be back from the car...
16:07 majeff joined #gluster
16:08 Rhomber and also..  /magento/static/media/catalog/product/cache/sz/thumbnail/50x70/17f82f742ffe127f42dca9de82fb58b1/ext/4/9/5/2  ... 49521 .. and 49522 would both need that directory
16:08 Rhomber haha, oh wow. thanks :)
16:08 JoeJulian True, my question was more along the lines of: does your app try to create the directory even if it exists?
16:09 perfectsine JoeJulian: you're right, that number is different from the other nodes
16:09 Rhomber I wouldn't.. but magento.. sure.. it might
16:10 thebishop JoeJulian, restarting gluster didn't fix my issue creating a volume
16:10 Rhomber (not the best software in the world)
16:10 perfectsine JoeJulian: It looks like it's pulled info from my original test volume which has been deleted for some months now, after the reboot from earlier today
16:12 perfectsine JoeJulian: would it be as easy as writing the correct UUID to that variable?
16:17 Rhomber JoeJulian: I just checked the code, the immediate code looks to only create the directory if it doesn't exist
16:25 perfectsine JoeJulian: Removed the attr and had gluster reinitialize it, thank you!!
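A sketch of what perfectsine appears to have done about the stale volume-id; the brick path and volume name are illustrative, and this is only safe when the brick really does belong to the volume:

    getfattr -m . -d -e hex /export/brick1            # shows trusted.glusterfs.volume-id
    setfattr -x trusted.glusterfs.volume-id /export/brick1
    gluster volume start myvol force                  # restarts the brick so the attribute is re-initialized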
16:30 semiosis Rhomber: glusterfs version?  linux distro/version?
16:30 hagarth joined #gluster
16:31 Rhomber 3.3.1-1 on Centos 6.4
16:31 Rhomber I removed /export/brick1/vdb1/magento/static/media/catalog/product/cache/sz/thumbnail/50x70/17f82f742ffe127f42dca9de82fb58b1/ext/4/9/5/2 and remounted... and the Input/Output errors are gone.. but it didn't heal that directory? it's still missing on node 1
16:32 semiosis brick filesystem?  xfs?
16:32 Rhomber ah, oops.. i ls'ed on the share and it healed
16:32 Rhomber yeah, xfs
16:32 Rhomber i wasn't sure.. i read up and i think i'm meant to use xfs?
16:33 semiosis +1
16:33 Rhomber but if you could help explain what happened so I can prevent this in production that would be.. awesome.. and less scary
16:37 brian_ joined #gluster
16:38 brian_ If I want to clear all of my "bricks" of data, how do I do that? Simply go into the brick directories and delete everything with an rm -rf *, or is there a way to format an entire gluster volume with one command?
16:40 Rhomber semiosis: any ideas? :)
16:41 semiosis Rhomber: idk what happened
16:41 semiosis brian_: stop the volume, then i would umount, mkfs, mount each brick before starting the volume again.
16:43 brian_ semiosis: Thanks. I was also wondering if when I first set this up, if I needed to format the bricks in the first place? I never did format them at all to start with.
16:44 semiosis brian_: thats unpossible
16:44 semiosis brian_: glusterfs bricks are directories, meaning they need to be on a mounted filesystem, meaning they need to be formatted
16:45 brian_ semiosis: yes they are on a Centos6 ext4 filesystem
16:45 semiosis well
16:45 brian_ So no need to unmount and run mkfs then right?
16:46 semiosis brian_: what version of glusterfs?
16:46 semiosis how are you not affected by the ,,(ext4) bug?
16:46 glusterbot (#1) Read about the ext4 problem at http://goo.gl/xPEYQ or (#2) Track the ext4 bugzilla report at http://goo.gl/CO1VZ
16:46 vpshastry joined #gluster
16:46 vpshastry left #gluster
16:47 semiosis brian_: as for what you need to do, i can't say.  but you asked how to delete all data from a volume, that's how i'd do it -- by running mkfs on the brick block devices
16:47 brian_ ok
16:47 semiosis brian_: it's a best practice to use dedicated block devices for your gluster bricks; not to place glusterfs bricks on your root mount
16:48 thebishop does gluster require the exact same build on every server? i've got "glusterfs 3.3.1 built on Oct 22 2012 07:54:24" and "glusterfs 3.3.1 built on Apr  2 2013 15:09:50" on another.  i'm wondering if that's the root of my silent failures
16:48 semiosis for the record, xfs with inode size 512 is recommended
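Putting semiosis' two suggestions together, a sketch of wiping a single brick; device, mount point and volume name are illustrative:

    gluster volume stop myvol
    umount /export/brick1
    mkfs.xfs -i size=512 /dev/sdb1
    mount /dev/sdb1 /export/brick1
    gluster volume start myvol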
16:48 semiosis thebishop: wellll..... that's a good question for kkeithley
16:48 semiosis or JoeJulian but he's afk for a bit
16:48 Rhomber ah ok
16:49 Rhomber so i assume you liked the versions im running then?
16:49 semiosis Rhomber: seems ok
16:49 Rhomber im going to tweak my URL warmer to shuffle the pages before it queues the jobs.. so as to not process them sequentially and avoid 'related' products showing up on two different product pages being warmed at the same time
16:50 Rhomber don't have a lot of other choices.. since no one knows what happend
16:50 semiosis Rhomber: you're using the fuse client right?  or nfs?
16:50 Rhomber question though, can I prevent these issues from locking up the mount?
16:50 Rhomber I was told localhost NFS mounts are bad.. so i went back to glusterfs (even though my use case is lots of small-ish files)
16:51 semiosis hmmm localhost mounts
16:51 Rhomber he pointed me to a HP article where they say it can cause deadlocks
16:51 semiosis that is true
16:51 semiosis localhost nfs mounts are bad
16:51 semiosis they fail under load
16:52 Rhomber which is what just happened to gluster :)
16:52 Rhomber though, to be fair.. not just load
16:52 semiosis localhost fuse mounts, less bad, though still dangerous in some situations
16:52 Rhomber that's what im doing..
16:52 Rhomber 3 nodes, each uses itself
16:53 Rhomber replica 3
16:53 semiosis Rhomber: do you have quorum enabled?
16:53 Rhomber how do I check?
16:53 semiosis you would have enabled it.  ,,(pasteinfo) to see the options you've configured
16:53 brian_ semiosis: So glusterfs can't be used with ext4 according to this bug?
16:53 glusterbot Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
16:54 semiosis brian_: not with certain kernels
16:54 Rhomber http://fpaste.org/12809/68809668/
16:54 glusterbot Title: #12809 Fedora Project Pastebin (at fpaste.org)
16:56 kkeithley if you're talking about the builds from my fedorapeople.org repo, with one or two minor exceptions, all the changes are to the UFO bits.
16:56 semiosis brian_: mainline linux kernel 3.3+ and the recent centos 6 kernels, even tho 2.6, got the new ext4 code backported
16:56 brian_ semiosis: my kernel version is: 2.6.32-358.6.1.el6.x86_64 and my gluster version is:glusterfs 3.3.1
16:57 semiosis kkeithley: so none of the patches you cherry picked could cause interop problems?
16:57 semiosis Rhomber: you do not have quorum enabled.  see this: http://gluster.helpshiftcrm.com/q/what-is-split-brain-in-glusterfs-and-how-can-i-cause-it/
16:57 glusterbot <http://goo.gl/Oi3AA> (at gluster.helpshiftcrm.com)
16:58 kkeithley the only cherry-picked patch was the rdma fix
16:58 semiosis oh cool
16:58 krishna-- joined #gluster
16:58 semiosis Rhomber: i wonder if write-behind could've caused inconsistency between your servers
17:02 Rhomber is that a cache mode?
17:02 semiosis not exactly
17:02 lpabon joined #gluster
17:02 andreask joined #gluster
17:03 semiosis imho the only safe way to do localhost mounts (on fuse) is read-only, and i just found out read-only mount doesnt work in 3.3.1 :(
17:03 semiosis (relatedly, imho there's no safe way to do localhost nfs mounts, as we talked about)
17:04 semiosis quorum certainly helps with the more common split-brain issues with localhost mounts
17:04 semiosis but if you didnt have an outage, then your issue is different
17:04 semiosis Supermathie: ping
17:05 Rhomber semiosis: Ok, so split-brain seems potential (the fix I am writing will minimise it, but no guarantee)... in which case.. if both files have the same checksum, wouldn't the resolution be obvious? and I would assume they would have the same checksum in my case... and also..  what is quorum and how do i enable it?.. i'll google the options again
17:05 daMaestro joined #gluster
17:05 semiosis Supermathie: iirc you have apps on two clients working on the same file.  are those clients also servers mounting fuse localhost?
17:06 Rhomber i should never have an outage
17:06 semiosis Rhomber: 'gluster volume set help' to see all the options available.  iirc it's cluster.quorum auto
17:06 Rhomber all servers are connected via a dedicated port on a local switch
17:06 Rhomber thanks :)
17:06 bulde joined #gluster
17:06 semiosis ,,(extended attributes) article explains what split brain means...
17:07 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://goo.gl/Bf9Er
17:07 semiosis checksums are not involved
17:09 Rhomber hmm, between doing a checksum compare and taking an entire share offline.... shouldn't they be involved?!
17:10 Rhomber or is gluster too 'new' ?
17:10 Rhomber also..  quorum doesn't seem to be mentioned in the docs for 3.3.. guess it was taken out?
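The option semiosis is recalling is spelled cluster.quorum-type in the 3.3 CLI ('gluster volume set help' lists it); a sketch using the volume name from Rhomber's earlier fstab entry:

    gluster volume set shopzgv0 cluster.quorum-type auto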
17:20 andrei__ I am struggling to understand something. I've got a server which could write through gluster nfs around 130mb/s cumulative using two dd threads. That is the client mounts nfs from the gluster server and writes data using two concurrent dd processes
17:20 andrei__ however, when I add this server to a replicated glusterfs setup I am only seeing about 50mb/s write performance
17:21 andrei__ the second replicating server is far more powerful than the first one and can handle around 350mb/s writes using nfs + gluster
17:21 andrei__ what am I missing here? why can't the replicated setup write as fast as the slowest server, which is around 130mb/s?
17:36 andrewjsledge joined #gluster
18:07 yinyin_ joined #gluster
18:09 cfeller joined #gluster
18:11 plarsen joined #gluster
18:15 robinr joined #gluster
18:16 lbalbalba joined #gluster
18:17 robinr Based on gluster volume heal info, I got an GFID that Gluster thinks it's not in sync. How to get gfid to file mapping most efficiently ? i'm walking file system tree to get those info now.
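One way to resolve a GFID without walking the whole tree, offered here as an assumption rather than something from this log: for regular files the entry under .glusterfs on each brick is a hard link, so it shares an inode with the real path (for directories it is a symlink, so ls -l on it shows the parent):

    ls -i /export/brick1/.glusterfs/ab/cd/abcdef01-2345-6789-abcd-ef0123456789   # GFID and brick path are illustrative
    find /export/brick1 -inum <inode number from above> -not -path '*/.glusterfs/*'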
18:18 thebishop kkeithley, semiosis mentioned you might know if nodes in a gluster cluster must be running the same exact build.  I have 3.3.1 installed on both machines, but one came from ubuntu's repo and another from the gluster.org tarball.  When I try to create a volume, gluster silently fails with no error message and nothing alarming in the logs
18:21 kkeithley built from the gluster.org tarball should be the same bits that are in semiosis' ppa.
18:21 thebishop ok
18:22 kkeithley when you indicated different build dates I think we inferred that you were using fedora or epel builds from my fedorapeople.org repo.
18:22 thebishop oh
18:22 thebishop no
18:22 kkeithley But even so, 3.3.1 is 3.3.1, regardless of the build dates
18:22 thebishop kkeithley, i put the same build on both machines anyway, and I still see the problem
18:23 thebishop doesn't seem like a firewall issue.  i can telnet to every open port on both machines
18:24 thebishop I saw JoeJulian's blog post about reusing volumes, but i've reformatted my bricks from ext4 to xfs since having the issue
18:25 thebishop and i don't see any entries under /var/lib/glusterd/vols anyway
18:26 ujjain joined #gluster
18:42 yinyin_ joined #gluster
18:53 warthog9 joined #gluster
18:59 nueces joined #gluster
19:04 krishna-- left #gluster
19:15 ladd left #gluster
19:20 thebishop joined #gluster
19:22 nueces joined #gluster
19:24 tjstansell joined #gluster
19:27 portante joined #gluster
19:28 awheeler_ joined #gluster
19:39 rwheeler joined #gluster
19:45 andreask joined #gluster
20:15 Guest79483 joined #gluster
20:19 brunoleon joined #gluster
20:22 rb2k joined #gluster
20:24 warthog19 joined #gluster
20:37 sprachgenerator joined #gluster
20:38 devoid joined #gluster
20:39 devoid anyone know where I can find glusterfs-swift* debian packages for 3.4?
20:39 devoid https://launchpad.net/~semiosis only has the core stuff.
20:39 glusterbot Title: semiosis in Launchpad (at launchpad.net)
20:40 semiosis devoid: there arent any
20:42 devoid semiosis: I know these instructions are out of date: http://www.gluster.org/2012/09/howto-using-ufo-swift-a-quick-and-dirty-setup-guide/ Are there better setup instructions?
20:42 glusterbot <http://goo.gl/Wf7Yx> (at www.gluster.org)
20:42 semiosis idk, i havent tried swift/ufo
20:43 semiosis JoeJulian: do you know?
20:43 semiosis (if he's still around)
20:48 perfectsine left #gluster
20:50 bstromski joined #gluster
20:51 doc|holliday joined #gluster
20:53 lbalbalba ~dev
20:54 doc|holliday in a striped volume if I am writing a file and the brick on the node runs out of space, does the file get automatically relocated to another node? or what happens?
20:54 lbalbalba crap. does anyone know where the howto devel setup pages are
21:00 lbalbalba ah, this : http://www.gluster.org/community/documentation/index.php/Development_Work_Flow
21:00 glusterbot <http://goo.gl/ynw7f> (at www.gluster.org)
21:01 thebishop joined #gluster
21:06 semiosis lbalbalba: ,,(hack)
21:06 glusterbot lbalbalba: The Development Work Flow is at http://goo.gl/ynw7f
21:08 lbalbalba semiosis: thanks ;)
21:09 lbalbalba im about to set that up for my vm. patches supplied to bug have not been noticed :( : https://bugzilla.redhat.com/show_bug.cgi?id=962226
21:09 glusterbot <http://goo.gl/J2qCz> (at bugzilla.redhat.com)
21:09 glusterbot Bug 962226: unspecified, unspecified, ---, vbellur, NEW , 'prove' tests failures
21:10 lbalbalba so ill guess ill gerrit review them ( whatever that means)
21:11 * lbalbalba hugs glusterbot
21:16 doc|holliday in a striped volume if I am writing a file and the brick on the node runs out of space, does the file get automatically relocated to another node? or what happens?
21:17 semiosis dont run out of space
21:17 semiosis also, dont use stripe (unless you've exhausted all other options)
21:17 lbalbalba code should handle error conditions ;)
21:18 semiosis lbalbalba: feel free to contribute patches
21:18 lbalbalba hahha. right thx
21:18 semiosis no filesystem handles a full disk well
21:18 semiosis if you run out of space, that's an administrative error... i.e. you messed up
21:18 semiosis no filesystem can save you from yourself
21:19 semiosis capacity planning is part of the job
21:19 doc|holliday semiosis: s/striped/distributed/
21:19 semiosis monitoring, etc
21:19 semiosis doc|holliday: oh distributed is good :)
21:19 doc|holliday sorry :)
21:22 doc|holliday semiosis: can I delete the files directly from underlying bricks?
21:22 semiosis doc|holliday: you shouldn't, but it's possible
21:23 doc|holliday basically, here is my scenario, which will hopefully clarify the question:
21:25 doc|holliday I will keep the cluster filled up to about 90% and delete earliest stuff if it gets above that. Now my question is: I start writing a file, a brick is picked automagically to write to, the file is really long and the brick gets full. What happens? Error?
21:26 lbalbalba gdb glusterd ;)
21:26 semiosis doc|holliday: yeah error.  the write will fail.
21:26 doc|holliday I can deal with it, if I am allowed to work directly on the underlying bricks, but I am not sure if it will screw with gluster's logic
21:26 semiosis dont let your bricks fill up
21:26 semiosis and don't work directly on the underlying bricks either
21:27 lbalbalba ah, bricks will fill up at some point
21:27 doc|holliday 8-/
21:27 semiosis if you're asking here for support, those are the answers you'll get, to keep you out of trouble
21:27 semiosis now, if you want to have some fun, go get into that trouble yourself on a test environment, figure out exactly how things work for your design, etc
21:28 semiosis but know that you're not using glusterfs the way it's intended, so you're on your own mostly
21:28 doc|holliday I am still in the fun stage, but at some point will be putting the cluster into production environment
21:29 lbalbalba so where's  the  'using glusterfs the way it's intended'  doc ?
21:30 semiosis ,,(rtfm)
21:30 glusterbot Read the fairly-adequate manual at http://goo.gl/E3Jis
21:30 semiosis various docs on gluster.org
21:31 doc|holliday semiosis: what's the magic behind gluster's picking of *the one* (the brick) which gets written to. is it trying to keep them fairly balanced? does load (number of concurrent connections) have any bearing on it?
21:31 lbalbalba ,,(rtfm)
21:31 glusterbot Read the fairly-adequate manual at http://goo.gl/E3Jis
21:31 lbalbalba glusterbot rocks
21:31 glusterbot lbalbalba: I do not know about 'rocks', but I do know about these similar topics: 'rrdns', 'repos'
21:31 semiosis glusterbot: awesome
21:31 glusterbot semiosis: ohhh yeeaah
21:32 semiosis doc|holliday: a hash of the file path. files will be evenly distributed over bricks, generally speaking.  file content (or size) has nothing to do with it.
21:33 lbalbalba glusterbot: awesome
21:33 glusterbot lbalbalba: ohhh yeeaah
21:33 lbalbalba w00t
21:34 lbalbalba im sorry :(
21:35 doc|holliday semiosis: ok, so theoretically if I keep deleting the earliest files (which are evenly distributed), the bricks should remain filled to about the same level, correct?
21:36 semiosis doc|holliday: all else being equal, yes that sounds right.
21:36 doc|holliday aha, thank you.
21:36 semiosis doc|holliday: but if your files sizes vary widely, or over time, ...
21:37 semiosis the all else being equal is important
21:37 semiosis theoretically ;)
21:37 doc|holliday :)
21:37 doc|holliday this brings up another question, if you don't mind:
21:40 doc|holliday I have a cluster of 20 servers (4TB each) and about 500 clients writing/reading. up to 300 can be writing at the same time. each server can handle 30-40 at most. if I add an empty brick, will all writes be directed to it? (other bricks being more than half full)
21:42 doc|holliday semiosis: ^
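On the new-brick question: DHT places files by filename hash rather than by free space, so an empty brick does not simply absorb all new writes; cluster.min-free-disk (a sketch, with an illustrative volume name) at least steers new files away from bricks that drop below the threshold:

    gluster volume set myvol cluster.min-free-disk 10%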
21:50 lbalbalba doc|holliday: whoa - cluster of (4TB each) 20 servers and 500 clients writing/reading ?
21:51 doc|holliday lbalbalba: going online in 1-2 weeks :)
21:52 lbalbalba doc|holliday: oh, the horror. maybe time to admit its time for a a red hat support contract ?
21:54 lbalbalba doc|holliday: if the redhat experts cant solve the problem,  then how are you supposed to ?
21:56 doc|holliday lbalbalba: I didn't say there was a problem. I am just trying to think of as many gotchas as possible ahead of time :)
21:56 lbalbalba doc|holliday: ah, good thinking +++
21:58 doc|holliday I really like gluster and want to make it work. don't mind getting my hands dirty. but always run out of time and get my ballz shot off. :)
21:59 lbalbalba this, take note :  <doc|holliday> I really like gluster and want to make it work.
22:00 doc|holliday s/make it work/make it work for my application/ :)
22:00 lbalbalba :)
22:01 doc|holliday sorry I am battling OpenLDAP on the other computer at the same time
22:02 nightwalk joined #gluster
22:13 jag3773 joined #gluster
22:14 GLHMarmot joined #gluster
22:18 kaptk2 joined #gluster
22:30 jag3773 joined #gluster
23:01 badone joined #gluster
23:07 harish joined #gluster
23:41 awheeler joined #gluster
23:54 jag3773 joined #gluster
