
IRC log for #gluster, 2013-06-04


All times shown according to UTC.

Time Nick Message
00:02 JordanHackworth joined #gluster
00:14 JordanHackworth joined #gluster
00:21 JordanHackworth joined #gluster
00:25 yinyin joined #gluster
00:28 JordanHackworth joined #gluster
00:31 JordanHackworth joined #gluster
00:34 JordanHackworth joined #gluster
00:37 JordanHackworth joined #gluster
00:57 bala joined #gluster
00:59 meunierd1 joined #gluster
01:24 majeff joined #gluster
01:34 kevein joined #gluster
01:40 RicardoSSP joined #gluster
01:40 RicardoSSP joined #gluster
02:18 _pol joined #gluster
02:40 john1000_ joined #gluster
02:43 brunoleon joined #gluster
03:09 majeff joined #gluster
03:14 saurabh joined #gluster
03:21 mohankumar__ joined #gluster
03:23 john1000 hi, I am trying to create a gluster replicated volume, mirrored between 2 servers.  I think I am close, because when I create the volume, the directory is created on the second system.  However, when I write any files to the first system, they do not show up in the mirrored directory on the second system.  What should I be checking to fix this?
03:27 john1000_ joined #gluster
03:42 aravindavk joined #gluster
03:45 shylesh joined #gluster
03:46 m0zes john1000: are you writing to the volume or the bricks? ,,(glossary)
03:46 glusterbot john1000: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
03:49 john1000 m0zes: I am just writing files to the directory on one of the servers directly in a shell, like "touch test.txt"
03:50 m0zes john1000: ,,(pasteinfo)
03:50 glusterbot john1000: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
03:51 john1000 m0zes: ok, i will, it will just take a minute, i destroyed the volumes I had to change a few things and make sure amazon dns isn't causing the issues
03:51 majeff joined #gluster
03:52 m0zes if I understand what you are doing correctly you did something like 'gluster volume create config replica 2 server1:/etc server2:/etc' started the volume and are writing directly to /etc on one of the servers.
03:53 john1000 m0zes: yes, that is correct
03:54 john1000 when I ran the "gluster volume info" command earlier (before I started changing it all) everything looked ok, I am recreating now so I can share it.
03:55 m0zes this isn't the way glusterfs is supposed to be accessed. ideally the bricks would be /mnt/glusterfs/config/brick1 and /mnt/glusterfs/config/brick2 on servers 1 and 2 respectively. you'd start the volume and mount it via 'mount -t glusterfs <oneofyourservers>:<volname> /etc/' then you can write to /etc/
03:56 john1000 the way I did was to put the volumes at /export/test on each server, then I would write to /export/test on one of the servers, and I was hoping it would mirror.   But you are saying I should be mounting first with the client software, and only accessing through the mount?
03:56 m0zes the writes need to go through the fuse layer to be properly tracked and replicated. your first attempt worked (sort of) because of the self-heal daemon scanning /etc and replicating what it assumed was a file that hadn't been replicated correctly.
03:57 m0zes john1000: for writes, definitely only through a client mount. for reads it *can* be through the brick
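A minimal sketch of the layout m0zes describes above (host names, brick paths and the volume name gv0 are examples, not taken from the log):

    # on each server, a dedicated brick directory instead of a live path like /etc
    mkdir -p /export/gv0/brick
    # from either server, create and start a two-way replicated volume
    gluster volume create gv0 replica 2 server1:/export/gv0/brick server2:/export/gv0/brick
    gluster volume start gv0
    # on every machine that needs the data, mount through the fuse client and
    # read/write only through the mount point, never directly inside a brick
    mount -t glusterfs server1:/gv0 /mnt/gv0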
03:58 john1000 m0zes: ok, thank you. that makes sense.  I just about have it re-setup, so we will see if that does the trick for me very shortly
04:22 john1000 m0zes: sorry, it took a little longer than I thought it would to redo my dns entries.  But, it works!  To make the cluster failover, do I need to use some kind of round-robin/failover DNS, or does the gluster client automatically use the other brick if the first one fails after the initial mount?
04:24 m0zes it uses whichever is first to respond. the only reason you would need rrdns is for the initial mount. the client downloads a volfile that tells it which servers to connect to.
04:25 john1000 m0zes, oh, i see, after the initial connect, then the client uses whichever of the servers is first to respond?  But it just needs an initial server to get the vol info.
04:27 m0zes yep, though I guess I glossed over one point: *normal* replication is done via the client process to both servers. reads are first to respond.
04:27 m0zes if a server crashes, the self-heal daemon will replicate from one server to the other when both are back up.
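To illustrate the volfile point: the server named in the mount command only matters while fetching the volume description at mount time (names are examples):

    mount -t glusterfs server1:/gv0 /mnt/gv0
    # if server1 is unreachable at mount time, mounting from any other peer
    # works just as well; afterwards the client talks to every brick listed
    # in the volfile and replication is done client-side
    mount -t glusterfs server2:/gv0 /mnt/gv0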
04:29 vpshastry joined #gluster
04:31 john1000 m0zes: ok, thank you.  That makes sense.  My goal is to use geo-replication because my servers are in different AWS availability zones, so I think now all I have to do is this "gluster volume geo-replication test5 server2:/export/test5 start" (ssh keyfiles are already setup), and it will switch from using normal replication to geo-replication, right?
04:32 john1000 From my reading, it looks like normal replication is not suitable when there is a lot of latency, or with heavy reads.
04:33 m0zes john1000: no. geo-replication is between a gluster volume and another server (possibly to another gluster volume). also geo-replication isn't multi-master. only one will be useful for writing.
04:34 bala joined #gluster
04:34 sgowda joined #gluster
04:37 hjmangalam1 joined #gluster
04:37 john1000 m0zes: that is ok if only 1 is available for writing.  In that case, can i just create a single-brick volume on the master, and then replicate to a directory or to another volume on the remote system(s)?
04:37 m0zes john1000: yep that will work.
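Roughly the setup john1000 ends up with, using the command syntax he quotes above (volume, host and path names are examples; ssh keys are assumed to already be in place):

    # single-brick volume on the master side
    gluster volume create test5 master1:/export/test5
    gluster volume start test5
    # geo-replicate it to a directory (or another volume) on the remote side
    gluster volume geo-replication test5 server2:/export/test5 start
    gluster volume geo-replication test5 server2:/export/test5 status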
04:41 vpshastry1 joined #gluster
04:44 rotbeard joined #gluster
04:46 bala joined #gluster
04:49 john1000 m0zes: my geo-replication is now working! Thank you so much for your help.  I have spent many hours trying to get this work, but I was missing a couple basic concepts that you cleared up for me.  Thank you so much.
04:49 m0zes no problem. I am glad it works for your use case :)
04:55 psharma joined #gluster
04:56 john1000 m0zes: yes, I am trying to build a HA cluster that is somewhat geographically distributed to support http servers.  Geo-replication is an important step.  Does it sound reasonable to geo-replicate over WAN to a volume that is a replica with other bricks inside of the LAN in that region?  (hopefully i am making sense)
04:58 m0zes john1000: absolutely. that would be ideal. it also doesn't have to be pure replicating volume on either side. you could do distributed-replication i.e. replica 2 with 4 bricks in 1 volume.
05:01 john1000 m0zes: I think I understand, that would give me 2 copies of the data on the slave end, but spread over 4 bricks, so I would get a sort of automatic load balancing.
05:01 majeff1 joined #gluster
05:02 m0zes yep
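A sketch of the distributed-replicated layout m0zes mentions, i.e. replica 2 with 4 bricks in one volume (names are examples); bricks are paired in the order given, so each consecutive pair forms one replica set:

    gluster volume create gv1 replica 2 \
        serverA:/export/brick1 serverB:/export/brick1 \
        serverA:/export/brick2 serverB:/export/brick2
    gluster volume start gv1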
05:04 hchiramm_ joined #gluster
05:04 john1000 cool, thank you.  I'm gonna call it a night, brain fog starting to set in from tiredness, but this has been extremely helpful.  Thanks again.
05:06 anands joined #gluster
05:20 rastar joined #gluster
05:23 hagarth joined #gluster
05:33 satheesh joined #gluster
05:38 deepakcs joined #gluster
05:44 majeff joined #gluster
05:48 majeff1 joined #gluster
06:02 bulde joined #gluster
06:16 rgustafs joined #gluster
06:19 jtux joined #gluster
06:20 brunoleon__ joined #gluster
06:27 StarBeast joined #gluster
06:28 glusterbot New news from newglusterbugs: [Bug 960985] G4S: HEAD Request for a file return 200(OK) in place of 204(No Content) <http://goo.gl/sfaly>
06:41 vimal joined #gluster
06:42 jtux joined #gluster
06:43 vpshastry1 joined #gluster
06:45 guigui3 joined #gluster
06:50 raghu joined #gluster
06:50 ollivera joined #gluster
06:52 vshankar joined #gluster
06:53 hchiramm_ joined #gluster
07:00 ricky-ticky joined #gluster
07:02 lalatenduM joined #gluster
07:04 ujjain joined #gluster
07:05 andreask joined #gluster
07:06 vpshastry1 joined #gluster
07:14 m0zes joined #gluster
07:15 thomaslee joined #gluster
07:17 datapulse joined #gluster
07:21 rb2k joined #gluster
07:22 Snowdrift joined #gluster
07:24 satheesh joined #gluster
07:29 glusterbot New news from newglusterbugs: [Bug 952029] Allow an auxiliary mount which lets users access files using only gfids <http://goo.gl/x5z1R>
07:29 ctria joined #gluster
07:29 ekuric joined #gluster
07:43 majeff joined #gluster
07:47 pkoro joined #gluster
07:48 ccha3 joined #gluster
07:51 hybrid512 joined #gluster
08:09 datapulse greetings
08:09 datapulse anyone knows a workaround or a patch for this bug? https://bugzilla.redhat.com/show_bug.cgi?id=874554
08:09 glusterbot <http://goo.gl/xbQQC> (at bugzilla.redhat.com)
08:09 glusterbot Bug 874554: unspecified, medium, ---, rtalur, ON_QA , cluster.min-free-disk not having an effect on new files
08:16 hchiramm_ joined #gluster
08:30 bulde datapulse: it seems to be fixed (and hence ON_QA)
08:31 dobber_ joined #gluster
08:33 datapulse so I guess I wait for the new version...
08:33 datapulse it's just that we hit that error on our production env and it created a big mess
08:37 atrius_ joined #gluster
08:38 datapulse I see it's fixed in version glusterfs-3.4.0qa8, but this is still a beta, correct?
08:38 Norky v3.4 is still in beta, yes
08:44 saurabh joined #gluster
08:44 datapulse as bulde well pointed out this is marked as ON_QA, I guess this means that the next 3.3 release will have it included, correct?
08:45 bulde datapulse: not very sure... the problem with it is, any bug by default gets fixed in the 'master' branch, which is always a moving target... for sure it is in the 3.4.0 release builds (GA is not out yet)
08:46 bulde 3.3.x i am not sure, have to see 'git log' once to confirm
08:49 ccha3 bulde: right it's beta2. Usually how many betas before the GA version?
08:52 bulde ccha3: planning to have GA *very soon*, may be another beta before GA
09:00 datapulse ty guys for your answers, I will check on git and wait for new versions… till that time I will try to make some free space on that brick
09:05 anands joined #gluster
09:18 puebele1 joined #gluster
09:19 Staples84 joined #gluster
09:27 hchiramm_ joined #gluster
09:33 mooperd joined #gluster
09:35 bala joined #gluster
09:47 js_ will i get better performance from a volume where a brick resides, than from a networked client?
09:48 spider_fingers joined #gluster
09:49 mgebbe_ joined #gluster
09:52 ccha3 where can I find all possible mount options for glusterfs-client ?
09:54 anands joined #gluster
09:54 saurabh joined #gluster
09:55 aravindavk joined #gluster
09:56 ccha3 or are they the same mount options as for all filesystems?
10:17 17WABQFX8 joined #gluster
10:20 DMooring joined #gluster
10:23 portante joined #gluster
10:25 guigui1 joined #gluster
10:25 glusterbot New news from resolvedglusterbugs: [Bug 764565] Prevent heterogeneous backend file systems <http://goo.gl/Ho4MA>
10:29 aravindavk joined #gluster
10:37 rotbeard joined #gluster
10:55 glusterbot New news from resolvedglusterbugs: [Bug 949890] Dbench errors out on add-brick with open failures <http://goo.gl/bPfQg>
11:08 kke does glusterfs offer some kind of transport security? is it ok to use it over public network?
11:10 aravindavk joined #gluster
11:12 H__ kke: not afaik
11:13 guigui3 joined #gluster
11:43 abyss^ Is it possible to make gluster (the gluster client) work over WAN (not geo-replication)?
11:44 abyss^ with replica
11:49 abyss^ because we have two glusters with internal IPs but not in the same network, so on the hosts we added external addresses for both glusters and set up port redirects on the router, but now we have a lot of errors like: cannot open shared object file, reading from socket failed, etc
11:50 andreask joined #gluster
11:50 kke joined #gluster
11:51 mooperd joined #gluster
12:00 glusterbot New news from newglusterbugs: [Bug 959069] A single brick down of a dist-rep volume results in geo-rep session "faulty" <http://goo.gl/eaoet>
12:02 aravindavk joined #gluster
12:02 majeff joined #gluster
12:03 abyss^ kke: if you want security over public network try georeplication or do some tunneling
12:03 abyss^ no one can answer on my question?;)
12:05 edward1 joined #gluster
12:06 bulde1 joined #gluster
12:07 yinyin_ joined #gluster
12:07 balunasj joined #gluster
12:09 bulde joined #gluster
12:17 H__ abyss^: i expect machine and portnames to be in the comm protocol. So set up a VPN.
12:17 H__ performance will suck btw.
12:22 abyss^ H__: of course VPN is the solution, but I wonder if it should work over WAN :)
12:26 lh joined #gluster
12:26 lh joined #gluster
12:28 js_ when using replicate on two bricks, are both of them a possible master?
12:28 js_ or will i still be bound to a single point of failure? if not, how does this relate to replicate + distribute?
12:28 ehg joined #gluster
12:29 samppah js_: native glusterfs client connects to all bricks, so it's using both bricks in a replicate pair
12:29 Norky abyss^, over WAN, yes. Over a network with two points of NAT... it would appear not.
12:30 yinyin_ joined #gluster
12:35 js_ samppah: all right, i think in my setup i'm going to have every brick act as a client (4 web nodes), using replicate + distribute
12:35 js_ what i'm asking is basically if things will work if the server i used for probing and volume activation goes down
12:35 js_ and what will happen if it comes back up
12:35 js_ (gluster 3.3)
12:39 Snowdrift joined #gluster
12:40 abyss^ Norky: You mean remote address --> port redirect --> internal address would not work properly? Sorry to ask, but I'm trying to understand this - my English isn't as good as I'd like but I'm working on it:)
12:43 Norky I don't know for certain, but the problems you have encountered suggest GlusterFS won't work with NAT (port redirection)
12:45 theron joined #gluster
12:46 abyss^ Norky: Yes, it's possible, but it's also possible that the customer who has control of those routers did something wrong and it's not necessarily a gluster issue... That's why I'm asking:) To make sure whether gluster works/doesn't work via port redirection etc
12:47 Norky hmm, I can only suggest you test it yourself (on some VMs)
12:47 jbourke joined #gluster
12:48 dastar_ joined #gluster
12:49 bennyturns joined #gluster
12:49 abyss^ Norky: OK. Thank you for your help:)
12:53 bulde joined #gluster
13:03 dxd828 joined #gluster
13:07 rob__ joined #gluster
13:11 theron joined #gluster
13:13 joelwallis joined #gluster
13:13 deepakcs joined #gluster
13:20 ccha3 there is no shell completion about volume name, sub commands... for gluster commands
13:20 ccha3 ?
13:22 jclift joined #gluster
13:22 Norky shell completion for various commands is, I believe, a function of the shell
13:23 Norky certainly on Fedora and RHEL/CentOS there's a package bash-completion which includes various 'modules' in /usr/share/bash-completion/completions/ . There's nothing in there for gluster on my systems.
13:24 Norky things might be different on a different distro/shell
13:25 mohankumar__ joined #gluster
13:27 hchiramm_ joined #gluster
13:27 ProT-0-TypE joined #gluster
13:34 ccha3 yes a gluster module for shell completion would be useful
13:35 vpshastry joined #gluster
13:36 ccha3 gluster vo<tab> <tab>(lists start/stop/info) <tab>(list all existed volume name)
13:37 Norky well I'm sure http://bash-completion.alioth.debian.org/ will gratefully receive contributed modules, have fun :)
13:37 glusterbot Title: Bash-Completion (at bash-completion.alioth.debian.org)
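Something like the module ccha3 is asking for could look like this (a rough sketch, not an existing package; volume names are pulled from "gluster volume info"):

    _gluster() {
        local cur=${COMP_WORDS[COMP_CWORD]} prev=${COMP_WORDS[COMP_CWORD-1]}
        local vols=$(gluster volume info 2>/dev/null | awk '/^Volume Name:/ {print $3}')
        case $prev in
            gluster) COMPREPLY=( $(compgen -W "volume peer" -- "$cur") ) ;;
            volume)  COMPREPLY=( $(compgen -W "create start stop info status" -- "$cur") ) ;;
            start|stop|info|status) COMPREPLY=( $(compgen -W "$vols" -- "$cur") ) ;;
        esac
    }
    complete -F _gluster gluster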
13:43 yinyin_ joined #gluster
13:44 bennyturns joined #gluster
13:46 rwheeler joined #gluster
13:50 hagarth joined #gluster
13:55 failshell joined #gluster
13:55 portante joined #gluster
13:56 failshell when i tcpdump my servers, i see a lot of attempts to connect to port tcp/1023. but i have nothing listening on that port. what am i missing?
14:02 bulde1 joined #gluster
14:02 lpabon joined #gluster
14:03 kaptk2 joined #gluster
14:04 wushudoin joined #gluster
14:05 tqrst joined #gluster
14:06 tqrst any idea why my glustershd.log is getting ~50 errors per second appended to it on a specific server? The other servers seem to be pretty quiet from what I've seen so far.
14:07 purpleidea joined #gluster
14:07 purpleidea joined #gluster
14:08 tqrst it's a whole bunch of "Unable to self-heal permissions/ownership of 'gfid:...>' (possible split-brain)"
14:09 aliguori joined #gluster
14:11 hjmangalam1 joined #gluster
14:13 JordanHackworth joined #gluster
14:13 bugs_ joined #gluster
14:15 anands joined #gluster
14:20 sjoeboo so, i'm in a situation where most gluster volume commands are failing to return anything (info works, but not status or anything that takes action)
14:20 sjoeboo and i've been seeing this in the logs:
14:20 sjoeboo [2013-06-04 10:19:08.234915] E [glusterd-utils.c:277:glusterd_lock] 0-glusterd: Unable to get lock for uuid: 757297b4-5648-4e31-88f4-00fc167a43e4, lock held by: 757297b4-5648-4e31-88f4-00fc167a43e4
14:20 sjoeboo these are the same uuid
14:21 sjoeboo so...i'm ASSUMING something has a lock and didn't let it go...but "forgot" about it? any thoughts on how to clear this up?
14:55 rwheeler joined #gluster
14:59 lh joined #gluster
14:59 lh joined #gluster
14:59 puebele joined #gluster
15:11 morse joined #gluster
15:13 hchiramm_ joined #gluster
15:18 jthorne joined #gluster
15:21 theron joined #gluster
15:30 ekuric left #gluster
15:31 Technicool joined #gluster
15:31 dobber_ joined #gluster
15:38 rubdos joined #gluster
15:38 rubdos Is there any intergration with IPA or Kerberos in gluster?
15:39 hjmangalam joined #gluster
15:40 theron joined #gluster
15:42 hjmangalam1 joined #gluster
15:55 nightwalk joined #gluster
15:57 bennyturns joined #gluster
16:08 vpshastry1 joined #gluster
16:15 ccha3 what is the meaning of this message ?
16:15 hjmangalam joined #gluster
16:15 ccha3 [2013-06-04 18:12:44.429762] I [client-handshake.c:1445:client_setvolume_cbk] 0-VOL_REPL1-client-1: Server and Client lk-version numbers are not same, reopening the fds
16:15 glusterbot ccha3: This is normal behavior and can safely be ignored.
16:15 ccha3 ok
16:16 devoid joined #gluster
16:24 dbruhn__ joined #gluster
16:26 jruggiero left #gluster
16:26 jbrooks joined #gluster
16:41 bala joined #gluster
16:44 45PAAJX3T joined #gluster
16:49 Mo_ joined #gluster
16:51 Airbear joined #gluster
16:52 Guest47954 Hi, does anyone know why I might see thousands of log messages such as: [2013-06-04 17:49:28.771761] W [client3_1-fops.c:5306:client3_1_finodelk] 0-datastore-client-7:  (1c9d75c7-441a-476f-8634-c1ba5af60b96) remote_fd is -1. EBADFD
16:52 _pol joined #gluster
16:53 Guest47954 ^ In the client logs?
16:53 ultrabizweb joined #gluster
16:53 _pol joined #gluster
17:01 Technicool Guest47954, one possibility is network issues
17:02 Technicool the client is simply exposing the issue for you, but in that case it would not be a gluster issue per se
17:02 Guest47954 Thanks. So it indicates the client is unable to reach a brick?
17:06 ProT-0-TypE joined #gluster
17:08 Technicool Guest47954, yes, but with the addition that it is unable to communicate properly with the brick for some reason....so you may be able to ping, but that won't necessarily show that there is no issue
17:08 sjoeboo anyone around to help me along with:
17:08 sjoeboo E [glusterd-utils.c:277:glusterd_lock] 0-glusterd: Unable to get lock for uuid: 0edce15e-0de2-4496-a520-58c65dbbc7da, lock held by: 0edce15e-0de2-4496-a520-58c65dbbc7da
17:09 sjoeboo (which i see on each node, when on THAT node i try to do any gluster volume <command> commands)
17:09 sjoeboo and get nothing back.
17:09 sjoeboo (i've mailed the list about it too)
17:10 saurabh joined #gluster
17:12 hjmangalam1 joined #gluster
17:25 zaitcev joined #gluster
17:25 bulde joined #gluster
17:25 hjmangalam1 joined #gluster
17:31 bennyturns joined #gluster
17:38 tziOm joined #gluster
17:52 mtanner_ joined #gluster
17:53 JoeJulian sjoeboo: Try restarting glusterd on that box.
17:54 JoeJulian Technicool: really? That's what EBADFD means?
17:54 andreask joined #gluster
17:54 edong23_ joined #gluster
17:56 Technicool JoeJulian, nope
17:56 m0zes_ joined #gluster
17:57 Technicool thats one possible way to read the client logs tho
17:57 Gugge_ joined #gluster
17:57 krishna-_ joined #gluster
17:57 JoeJulian Darn. I was hoping there was finally a definitive answer.
17:57 Technicool yes, because we are blessed with so many definitive answers in this business  ;)
17:57 JoeJulian hehe
17:57 brunoleon joined #gluster
17:57 chlunde_ joined #gluster
17:57 war|chil1 joined #gluster
17:57 smellis_ joined #gluster
17:58 puebele2 joined #gluster
17:58 snarkyboojum_ joined #gluster
17:59 tjikkun joined #gluster
17:59 tjikkun joined #gluster
18:01 sjoeboo JoeJulian: I've restarted it. it happens on all nodes. The even been rebooted to make sure all is clean.
18:03 JoeJulian my only guess, then, would be that somehow 0edce15e-0de2-4496-a520-58c65dbbc7da has itself in its peers list.
18:03 sjoeboo oh...damn you ARE right on that...
18:03 StarBeas_ joined #gluster
18:03 JoeJulian did somebody rsync /var/lib/glusterd from another server?
18:04 x4rlos joined #gluster
18:04 sjoeboo no, but this volume/cluster has been a bit of a mess.
18:05 ThatGraemeGuy_ joined #gluster
18:07 edong23 joined #gluster
18:07 sjoeboo okay, so the question is...how do i remove that...
18:07 mriv_ joined #gluster
18:07 _Dave2_ joined #gluster
18:08 kkeithley1 joined #gluster
18:08 wushudoin| joined #gluster
18:08 sjoeboo is it going into /var/lib/gluster/ on each node, and making sure it doesn't have itself in there and bouncing glusterd?
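Spelled out, the check sjoeboo describes looks roughly like this (paths as in a stock install; a sketch only, and glusterd should be stopped before changing anything):

    # this node's own UUID
    grep ^UUID= /var/lib/glusterd/glusterd.info
    # each file under peers/ describes one *other* node; none of them should
    # contain the UUID printed above
    grep uuid= /var/lib/glusterd/peers/*
    # if one does, move that peer file aside and restart glusterd on this node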
18:08 kkeithley1 joined #gluster
18:08 wushudoin| joined #gluster
18:08 kke_ joined #gluster
18:09 js_ joined #gluster
18:09 tg2 joined #gluster
18:09 eryc joined #gluster
18:09 mynameisbruce joined #gluster
18:09 eryc joined #gluster
18:09 hchiramm_ joined #gluster
18:09 aliguori joined #gluster
18:09 c4 joined #gluster
18:09 jag3773 joined #gluster
18:09 cfeller joined #gluster
18:11 _ilbot joined #gluster
18:11 Topic for #gluster is now  Gluster Community - http://gluster.org | Q&A - http://community.gluster.org/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - http://irclog.perlgeek.de/gluster/
18:11 johnmark joined #gluster
18:12 xavih joined #gluster
18:13 stopbit joined #gluster
18:13 jcastle joined #gluster
18:13 juhaj joined #gluster
18:13 paratai joined #gluster
18:15 nueces joined #gluster
18:16 kspaans joined #gluster
18:16 paratai joined #gluster
18:17 JonnyNomad joined #gluster
18:17 eryc_ joined #gluster
18:17 y4m4_ joined #gluster
18:18 jiffe98 joined #gluster
18:19 js__ joined #gluster
18:20 frakt joined #gluster
18:23 tg2 joined #gluster
18:23 johnmark joined #gluster
18:27 paratai_ joined #gluster
18:28 waldner_ joined #gluster
18:28 cyberbootje1 joined #gluster
18:28 mjrosenb_ joined #gluster
18:28 jurrien__ joined #gluster
18:29 joelwallis joined #gluster
18:29 VSpike_ joined #gluster
18:29 lanning_ joined #gluster
18:29 Bryan_ joined #gluster
18:30 twx_ joined #gluster
18:32 fleducquede joined #gluster
18:32 thekev` joined #gluster
18:32 mrEriksson joined #gluster
18:32 StarBeast joined #gluster
18:32 Rhomber joined #gluster
18:32 JoeJulian joined #gluster
18:36 _ilbot joined #gluster
18:36 Topic for #gluster is now  Gluster Community - http://gluster.org | Q&A - http://community.gluster.org/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - http://irclog.perlgeek.de/gluster/
18:36 JordanHackworth joined #gluster
18:36 twx joined #gluster
18:37 puebele joined #gluster
18:38 mynameisbruce joined #gluster
18:39 foster_ joined #gluster
18:39 cicero_ joined #gluster
18:41 mtanner_ joined #gluster
18:41 juhaj_ joined #gluster
18:44 _ilbot joined #gluster
18:44 Topic for #gluster is now  Gluster Community - http://gluster.org | Q&A - http://community.gluster.org/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - http://irclog.perlgeek.de/gluster/
18:44 theron_ joined #gluster
18:45 brunoleon joined #gluster
18:45 aliguori_ joined #gluster
18:45 eryc joined #gluster
18:45 eryc joined #gluster
18:46 tzi0m joined #gluster
18:47 Norky_ joined #gluster
18:47 johnmark joined #gluster
18:48 joelwallis joined #gluster
18:48 js__ joined #gluster
18:48 mriv joined #gluster
18:49 joelwallis joined #gluster
18:52 joelwallis joined #gluster
18:53 Chr1z joined #gluster
18:54 Chr1z What FS type is best to add to gluster ?  I'm assuming just ext4 or something or would LVM allow easier expansion of space later without adding additional servers?
18:55 semiosis xfs (with inode size 512) is recommended
18:55 semiosis although any posix fs with extended attributes should work (* except when there's a bug)
18:55 semiosis glusterfs is tested and used most with xfs though
18:56 Chr1z semiosis: ok.. :)  so LVM is not preferred to add storage.. the correct way is to just add 'bricks' ?
18:57 semiosis you can put lvm between xfs and your block devices, thats common
18:57 Chr1z Ok.. just checking… thanks.
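Putting semiosis' advice together, LVM under an XFS brick formatted with 512-byte inodes (device, volume group and mount names are examples):

    pvcreate /dev/sdb
    vgcreate vg_gluster /dev/sdb
    lvcreate -n brick1 -l 100%FREE vg_gluster
    mkfs.xfs -i size=512 /dev/vg_gluster/brick1
    mkdir -p /export/brick1
    mount /dev/vg_gluster/brick1 /export/brick1
    # growing later: lvextend -L +100G /dev/vg_gluster/brick1 && xfs_growfs /export/brick1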
18:58 devoid joined #gluster
19:01 krishna- joined #gluster
19:02 efries_ joined #gluster
19:02 MinhP_ joined #gluster
19:02 VeggieMeat_ joined #gluster
19:03 kke joined #gluster
19:03 Ramereth|home joined #gluster
19:03 root____5 joined #gluster
19:03 samppah_ joined #gluster
19:03 thekev joined #gluster
19:04 johnmorr_ joined #gluster
19:05 theron__ joined #gluster
19:05 mriv joined #gluster
19:07 DataBeaver joined #gluster
19:07 cyberbootje joined #gluster
19:07 _pol joined #gluster
19:08 eightyeight joined #gluster
19:09 jiqiren joined #gluster
19:10 morse joined #gluster
19:14 cyberbootje1 joined #gluster
19:16 mriv joined #gluster
19:17 kspaans joined #gluster
19:17 plarsen joined #gluster
19:20 ingard_ joined #gluster
19:26 bennyturns joined #gluster
19:31 rob__ joined #gluster
19:32 devoid joined #gluster
19:33 failshell joined #gluster
19:33 failshell sometimes, when we import data, i have two nodes that fill up faster than others
19:33 failshell all servers are speced the same
19:33 jiffe98 so taking a gluster mount and re-exporting via nfs seems to speed web access up quite a bit, probably due to differences in caching, and it also keeps me redundant, is there any way around that bug that causes files/attributes to go missing?
19:33 failshell why would that happen?
19:35 JoeJulian failshell: I detail how DHT works here: http://joejulian.name/blog/dht-misses-are-expensive/
19:35 glusterbot <http://goo.gl/A3mCk> (at joejulian.name)
19:36 semiosis jiffe98: which bug?
19:37 JoeJulian jiffe98: iirc, there was some race lockup that was the reason they moved away from that and built their own nfs service.
19:38 jiffe98 JoeJulian: that sounds familiar, it was something with the kernel
19:42 jiffe98 this will be 90% read so the more I can cache the better, I've followed your posts on optimizing web serving as much as I could for the content but I haven't been able to get near the speeds I'm seeing re-exporting with the kernel nfs server
19:42 devoid joined #gluster
19:43 jiffe98 plus it still uses the gluster client attaching to the gluster mirrors
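The re-export jiffe98 describes boils down to something like this (paths and host names are examples; note JoeJulian's caveat above about the lockup that made upstream move away from knfsd re-exports):

    # /etc/exports on the machine holding the fuse mount; an explicit fsid is
    # needed because the kernel NFS server cannot derive one for a FUSE filesystem
    /mnt/glustervol  *(rw,fsid=1,no_subtree_check)
    # on the web nodes
    mount -t nfs nfsgateway:/mnt/glustervol /var/www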
19:47 Chr1z joined #gluster
19:47 failshell JoeJulian: that describes reads more than writes
19:48 Chr1z Ok.. trying to run gluster volume create test replica 2 transport tcp server1:/data server2:/data2 and I'm getting an error saying I have overlapping export directories from the same peer.. what does that mean?
19:49 failshell JoeJulian: i have 16 nodes, we're importing data right now, only 2 out of 16 are filling up faster
19:49 failshell shouldn't it distribute the data evenly?
19:49 JoeJulian Depends on the hash
19:50 JoeJulian The hashing algorithm works the same either way.
19:51 failshell so what can i do then? stop the import? run a rebalance? start again, and so forth?
19:52 JoeJulian How does the import work?
19:53 failshell rsync
19:53 failshell into the mounted volume
19:53 JoeJulian Are you using --inplace:
19:53 JoeJulian er...
19:53 JoeJulian s/:/?
19:53 failshell lemme ask
19:54 failshell nope
19:54 failshell rsync -a
19:54 JoeJulian If you're not using the inplace switch, it's going to create temporary filenames then rename those to the correct filename. Those are going to hash improperly and leave a lot of sticky-pointers and performance issues.
19:55 failshell ok
19:55 failshell are those cleaned eventually?
19:55 JoeJulian Only if you rebalance or replace the file.
19:55 failshell ok
19:56 failshell is it a good practice to run a rebalance say weekly?
19:56 JoeJulian It's less efficient to move a file to a different brick just because the filename was changed.
19:56 JoeJulian Depends on the situation. I run one on a couple of my volumes annually. Some never get rebalanced.
19:58 failshell well, we stopped that rsync
19:58 failshell going to rebalance the volume
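In command form, what JoeJulian is suggesting (volume and path names are examples): write the files in place so they land on the brick their final name hashes to, then rebalance to clean up the sticky-pointers already created:

    rsync -a --inplace /import/data/ /mnt/glustervol/
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status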
20:05 semiosis @later tell Chr1z try hanging out in the channel for more than a few minutes at a time so we can help
20:05 glusterbot semiosis: The operation succeeded.
20:08 bstr_work joined #gluster
20:11 mtanner__ joined #gluster
20:12 devoid joined #gluster
20:14 theron_ joined #gluster
20:16 recidive joined #gluster
20:17 recidive joined #gluster
20:24 mriv joined #gluster
20:24 war|child joined #gluster
20:27 GabrieleV joined #gluster
20:28 DataBeaver joined #gluster
20:28 thekev joined #gluster
20:28 sysconfig joined #gluster
20:28 nueces joined #gluster
20:28 andreask joined #gluster
20:28 masterzen joined #gluster
20:28 zwu joined #gluster
20:28 al joined #gluster
20:30 brian__ joined #gluster
20:32 brian__ hello… I'm getting this error in /var/log/gluster/ when i try to mount, but I don't understand why because I installed everything using the same packages:  "Server and Client lk-version numbers are not same, reopening the fds"
20:32 glusterbot brian__: This is normal behavior and can safely be ignored.
20:32 brian__ thanks bot! lol
20:38 brian__ another question… Just as a test (my boss asked me to try this), I'm running two bonnie++ benchmarks simultaneously (two separate processes, with mounts from two different machines) on a gluster volume just to see the performance… and sometimes it will run (with horrible results, like 20M read/write), or one of them will completely fail giving no results at all. However if I run one bonnie++ benchmark at a time, the results are
20:38 brian__ ok… I don't understand why running both would cause problems since gluster (so I'm thinking), should be able to handle all the reading and writing from both of the bonnie++ tests at the same time…
20:39 semiosis bonnies working in separate directories or same?
20:45 brian__ separate directories
20:48 brian__ the set up is: I have a mount on one node (called head), and another mount on another node (called node01). my gluster volume consists of 3 bricks (on node02, node03 and node04)… It fails using both a distributed volume and a striped volume… but like I said, if I do just one bonnie++ process by itself, the results are fine… it's when I try to do two simultaneously that it seems they are interfering with each other somehow
20:49 y4m4 joined #gluster
20:50 brian__ was thinking that the "version numbers not the same…" error I was getting might have something to do with it, but I guess the bot answered that question… :)
20:56 neofob left #gluster
21:12 JoeJulian I haven't actually read the source around this, but I'm pretty sure that the lk-version is a serial that, when a client doesn't have the same serial, shows that the locks need to be refreshed on the client.
21:12 JonnyNomad joined #gluster
21:17 errstr left #gluster
21:19 brian__ JoeJulian: Hi Joe!.. That error happens when I mount… I don't get any errors in the logs when the simultaneous bonnie++ runs are going. The only error I get from the bonnie++ runs is output to the screen when the process fails and stops.. it says it can't read the file it's writing… don't have the exact error but I can get it if you think it will help diagnose
21:20 JoeJulian It couldn't hurt. I'm not a bonnie expert (or fan for that matter) but you never know...
21:21 brian__ so if I do a clear-locks on the volume, that what you mean?
21:22 JoeJulian No, I'm saying that since you're mounting a client it /should/ not have a matching lk-version. It hasn't propagated the locks yet. Now if this was happening all the time on an already mounted client, then I might be concerned.
21:24 rkeene joined #gluster
21:25 hjmangalam joined #gluster
21:26 rb2k joined #gluster
21:35 yinyin_ joined #gluster
21:40 brian__ k
21:41 edoceo joined #gluster
21:41 brian__ left #gluster
21:53 chirino joined #gluster
21:53 chirino what happened to http://community.gluster.org/q/how-can-i-cause-split-brain-in-glusterfs-when-cluster-quorum-type-is-set-to-auto
21:53 glusterbot <http://goo.gl/PU49D> (at community.gluster.org)
21:54 JoeJulian The service that was hosting the Q&A site went out of business.
21:54 chirino oh
21:54 chirino sad.
22:04 yinyin joined #gluster
22:05 _pol joined #gluster
22:06 _pol joined #gluster
22:06 devoid joined #gluster
22:07 theguidry joined #gluster
22:12 phox joined #gluster
22:12 hjmangalam1 joined #gluster
22:12 phox any suggestions on making local glusterfs mounts come up after glusterd is started, a-la-'_netdev' ?
22:16 balunasj joined #gluster
22:21 phox sorry, bit of clarification, because that was not totally clear:  any suggestions on making local glusterfs mounts automatically come up /after/ glusterd is started on boot, a-la-'_netdev' ?
22:26 JoeJulian phox: What distro?
22:26 phox Debian
22:27 c3 joined #gluster
22:28 JoeJulian The way I would do it would be to switch to CentOS. ;)
22:28 JoeJulian semiosis: has been looking at that recently. Not sure where he's at on it though.
22:30 semiosis hahaha
22:30 semiosis what debian?
22:31 semiosis phox: also could you pastie.org your client log file from a failed mount attempt at boot time?
22:33 charlescooke_ joined #gluster
22:43 ultrabizweb joined #gluster
22:44 phox semiosis: /var/log/glusterfs/cli.log ?
22:44 phox semiosis: _nothing_ in it from it trying to come up per fstab ; however I'm sure I can go mount it with mount -a right now
22:44 semiosis ah no that's the command line interface, not client
22:44 semiosis client log files are usually written to /var/log/glusterfs/the-mount-point.log
22:45 phox k.
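One possible workaround for phox's boot-order problem (an assumption, not something confirmed in the log): keep the mount out of the normal boot-time pass and trigger it from a hook that runs after glusterd, e.g.:

    # /etc/fstab (names are examples); noauto keeps early boot from racing glusterd
    localhost:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,noauto  0  0
    # then, from something that runs after glusterd is up (rc.local or an init script):
    mount /mnt/gv0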
22:58 charlescooke_ anybody ever seen an error like this on an NFS mounted gluster (3.3.1): [2013-06-04 22:57:22.330712] E [rpcsvc.c:203:rpcsvc_program_actor] 0-rpc-service: RPC Program procedure not available for procedure 5 in NLM4
22:58 portante joined #gluster
23:07 plarsen joined #gluster
23:08 soukihei joined #gluster
23:37 _pol joined #gluster
23:40 jiffe2 joined #gluster
23:42 Cenbe joined #gluster
23:49 theguidry joined #gluster
23:49 Technicool joined #gluster
23:56 kevein joined #gluster
23:58 StarBeast joined #gluster
