IRC log for #gluster, 2012-11-25

All times shown according to UTC.

Time Nick Message
00:13 hagarth joined #gluster
01:01 davdunc joined #gluster
01:23 MinhP joined #gluster
01:52 purpleidea joined #gluster
01:52 purpleidea joined #gluster
02:26 designbybeck joined #gluster
02:55 bala1 joined #gluster
03:36 quillo joined #gluster
04:06 bulde joined #gluster
04:56 quillo joined #gluster
05:16 quillo joined #gluster
06:09 SpeeR joined #gluster
06:11 Shdwdrgn joined #gluster
06:19 arusso joined #gluster
06:19 cyberbootje joined #gluster
06:19 thekev joined #gluster
06:19 samkottler joined #gluster
06:19 XmagusX joined #gluster
06:19 niv joined #gluster
06:19 zwu joined #gluster
06:19 juhaj joined #gluster
06:19 JoeJulian joined #gluster
06:19 johnmark joined #gluster
06:20 kkeithley joined #gluster
06:20 _br_- joined #gluster
06:20 bdperkin joined #gluster
06:20 crashmag_ joined #gluster
06:20 social___ joined #gluster
06:20 rz___ joined #gluster
06:20 rosco__ joined #gluster
06:20 Ramereth joined #gluster
06:20 hagarth_ joined #gluster
06:20 sensei_ joined #gluster
06:20 purpleidea joined #gluster
06:20 MinhP joined #gluster
06:20 davdunc joined #gluster
06:20 daMaestro joined #gluster
06:20 TSM2 joined #gluster
06:20 gm__ joined #gluster
06:20 primusinterpares joined #gluster
06:20 khushildep joined #gluster
06:20 Hymie joined #gluster
06:20 sac_ joined #gluster
06:20 badone_ joined #gluster
06:20 ekuric joined #gluster
06:20 Daxxial_ joined #gluster
06:20 pull joined #gluster
06:20 robo joined #gluster
06:20 bauruine joined #gluster
06:20 masterzen joined #gluster
06:20 mtanner joined #gluster
06:20 RobertLaptop joined #gluster
06:20 joeto joined #gluster
06:20 redsolar joined #gluster
06:20 JFK joined #gluster
06:20 chandank joined #gluster
06:20 Oneiroi joined #gluster
06:20 joaquim__ joined #gluster
06:20 xavih joined #gluster
06:20 stopbit joined #gluster
06:20 tjikkun joined #gluster
06:20 nick5 joined #gluster
06:20 circut joined #gluster
06:20 morse joined #gluster
06:20 tru_tru joined #gluster
06:20 n8whnp joined #gluster
06:20 saz joined #gluster
06:20 wNz joined #gluster
06:20 yosafbridge` joined #gluster
06:20 meshugga_ joined #gluster
06:20 H__ joined #gluster
06:20 GLHMarmot joined #gluster
06:20 UnixDev joined #gluster
06:20 snarkyboojum joined #gluster
06:20 mnaser joined #gluster
06:20 helloadam joined #gluster
06:20 madphoenix joined #gluster
06:20 layer3switch joined #gluster
06:20 chacken joined #gluster
06:20 clag_ joined #gluster
06:20 plantain_ joined #gluster
06:20 jdarcy joined #gluster
06:20 jiffe1 joined #gluster
06:20 tc00per joined #gluster
06:20 genewitch joined #gluster
06:20 stigchri1tian joined #gluster
06:20 samppah joined #gluster
06:20 bfoster joined #gluster
06:20 carrar joined #gluster
06:20 jiqiren joined #gluster
06:20 NuxRo joined #gluster
06:20 efries joined #gluster
06:20 kshlm|AFK joined #gluster
06:20 gmcwhistler joined #gluster
06:20 cbehm joined #gluster
06:20 JordanHackworth joined #gluster
06:20 ninkotech_ joined #gluster
06:20 Dave2 joined #gluster
06:20 unalt_ joined #gluster
06:20 tripoux joined #gluster
06:20 dec joined #gluster
06:20 vincent_vdk joined #gluster
06:20 frakt joined #gluster
06:20 VeggieMeat joined #gluster
06:20 linux-rocks joined #gluster
06:20 z00dax joined #gluster
06:20 maxiepax joined #gluster
06:20 jiffe98 joined #gluster
06:20 pdurbin joined #gluster
06:20 wintix joined #gluster
06:20 VisionNL joined #gluster
06:20 zoldar joined #gluster
06:20 Zengineer joined #gluster
06:20 smellis joined #gluster
06:20 jds2001 joined #gluster
06:20 haidz joined #gluster
06:20 SteveCooling joined #gluster
06:20 sadsfae joined #gluster
06:20 al joined #gluster
06:20 Daxxial_ joined #gluster
06:20 Ramereth joined #gluster
06:21 quillo joined #gluster
06:21 atrius joined #gluster
06:21 sr71_ joined #gluster
06:21 abyss__ joined #gluster
06:21 misuzu joined #gluster
06:21 er|c joined #gluster
06:21 a2 joined #gluster
06:21 flin joined #gluster
06:21 _Bryan_ joined #gluster
06:21 ackjewt_ joined #gluster
06:21 hagarth joined #gluster
06:21 wN joined #gluster
06:21 m0zes joined #gluster
06:21 the-dude2 joined #gluster
06:21 hackez joined #gluster
06:21 wica joined #gluster
06:21 Psi-Jack joined #gluster
06:21 gluslog joined #gluster
06:21 raghavendrabhat joined #gluster
06:25 nightwalk joined #gluster
06:38 kshlm|AFK joined #gluster
06:38 bdperkin joined #gluster
06:38 kkeithley joined #gluster
07:55 lh joined #gluster
08:32 rudimeyer_ joined #gluster
09:18 blendedbychris joined #gluster
09:18 blendedbychris joined #gluster
09:41 zr joined #gluster
09:41 zr hi
09:41 glusterbot zr: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:42 zr i have a question regarding geo replication over WAN
09:42 zr or internet
09:42 zr how can I tell glusterfs which type of deployment scenario it should use?
09:46 H__ zr: do you mean when creating the gluster filesystem ? separate from setting up geo replication ?
09:48 zr i have set up gluster file system already
09:48 zr now the problem is
09:49 zr i have two nodes apart from each other and the upload bandwidth is 512kbps at both
09:49 zr Both nodes should have read/write access
09:51 zr now when i copy a file of 500MB on node1, it starts replicating file to node2 at the same time the file is being copied to node1
09:52 zr hence a LAN user uploading to node1, who should have no limit to upload this file, gets limited to 500kbps
09:54 zr seems like synchronous replication instead of asynchronous
10:04 H__ are you sure you're not mixing up node-replication and geo-replication ?
10:11 bala joined #gluster
10:27 zr maybe.. i created node replication first and then created geo-replication
10:29 zr because when i tried " gluster volume geo-replication local node2:/rep config remote_gsyncd /usr/libexec/glusterfs/gsyncd "
10:29 zr it returned Volume local does not exist
10:29 zr so i used the already created replicated volume
10:31 zr followed this http://wiki.sysconfig.org.uk/display/howto/GlusterFS+on+CentOS+6.x+incl.+Geo+Replication+--+Short+Howto
10:31 glusterbot <http://goo.gl/ePIbA> (at wiki.sysconfig.org.uk)
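The setup zr describes mixes two different mechanisms: a replicated volume copies every write to both bricks synchronously, while geo-replication pushes changes to a slave asynchronously (typically over ssh). A minimal sketch of the two, using hypothetical host, brick, and volume names (node1, node2, myvol, /export/brick1, /data/slave-dir are not from the log), roughly per the 3.3-era syntax used in the howto linked above:

    # Synchronous replication: every client write goes to both bricks before it returns
    gluster volume create myvol replica 2 transport tcp node1:/export/brick1 node2:/export/brick1
    gluster volume start myvol

    # Asynchronous geo-replication: changes are pushed from the master volume to a slave directory
    gluster volume geo-replication myvol node2:/data/slave-dir start
    gluster volume geo-replication myvol node2:/data/slave-dir status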
11:59 TSM2 joined #gluster
14:29 TSM2 joined #gluster
15:14 khushildep joined #gluster
16:04 gbrand_ joined #gluster
16:12 inodb_ joined #gluster
16:26 zr in case of geo replication, do i need 3 boxes?
16:27 zr I want to achieve a master-master case
16:29 blendedbychris joined #gluster
16:29 blendedbychris joined #gluster
16:38 hackeracidburn joined #gluster
16:39 hackeracidburn left #gluster
16:45 lh joined #gluster
16:48 mnaser umm, anyone ever dealt with this?  php app, when it does an open(".." then runs flock() it just sits there forever
16:48 mnaser as if it either can't acquire a lock or something else has it locked (but nothing is reading that same file)
16:50 mnaser flock() does not lock files over NFS.
16:50 mnaser there we go
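mnaser's diagnosis matches the classic behaviour: flock() locks are only enforced across NFS clients when NLM locking works end to end. One hedged way to check from the shell, assuming a hypothetical NFS mount at /mnt/nfs (path and filename are not from the log):

    # Client A: take the lock and hold it for a while
    flock /mnt/nfs/locktest -c 'echo "A holds the lock"; sleep 60'

    # Client B: try the same lock without blocking; if this succeeds instantly
    # while A still holds it, locking is not being enforced across the mount
    flock -n /mnt/nfs/locktest -c 'echo "B got the lock"'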
17:10 m0zes zr, there is no master-master georeplication currently. I think it is on the roadmap *somewhere* but it is not an easy problem to solve. Your two current options with glusterfs are master->slave geo-replication (writes to slave will not be replicated to master, and may be lost if master updates the files), or a master-master replicated volume (writes will go to both masters simultaneously and latency will greatly affect all performance).
17:11 m0zes mnaser: locks on /any/ shared filesystem are spotty at best in my experience.
17:11 mnaser m0zes: yeah i guess i have no choice but to switch to gluster backend
17:11 mnaser but this is php shared hosting so i dont know how it'll cope with it
17:12 mnaser it sounds pretty bad as i've read but maybe it's not that bad..
17:12 m0zes ~php | mnaser
17:12 glusterbot mnaser: php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://goo.gl/uDFgg for details.
17:12 m0zes it can be pretty painful for un-tuned php apps.
17:12 mnaser m0zes: i've read that but the issue is this is a shared hosting platform so i don't "control" my environment..
17:13 mnaser i do have apc in there, i increased realpath_cache_size (might want to add that in there, basically caches those million stat calls)
17:13 mnaser i'll add that apc setting as well
17:13 m0zes good luck :)
17:13 mnaser or actually, doesn't sound like too good an idea
17:14 mnaser php-fpm backend that means their files will never get reloaded, boo
17:14 mnaser at least i'll know that my glusterfs is always properly healed, hah.
17:23 JoeJulian Good morning
17:32 mnaser JoeJulian: morning, might want to update your google post to mention realpath_cache_size :)
17:32 zwu joined #gluster
17:32 mnaser the default is 16k which is pretty low, bumping it up to 1M makes a decent diff
17:33 JoeJulian Cool!
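For reference, the php.ini knobs discussed above, as a hedged sketch: realpath_cache_size = 1M and the 16k default come from the conversation; realpath_cache_ttl is an illustrative addition, and the directive names are standard PHP/APC settings rather than anything gluster-specific.

    ; Cache resolved include paths so repeated includes skip most stat()/realpath() work
    realpath_cache_size = 1M      ; PHP's default was 16k at the time
    realpath_cache_ttl  = 300     ; how long (seconds) to trust a cached path

    ; apc.stat=0 would skip the per-include stat() entirely, but cached files are then
    ; not re-read until the cache is cleared -- unsuitable for shared hosting, as noted above
    apc.stat = 1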
17:43 saz joined #gluster
17:44 JoeJulian mnaser: are you vexxhost?
17:44 mnaser JoeJulian: yes
17:44 JoeJulian cool
17:44 mnaser :)
17:45 JoeJulian Added that and plugged your blog there.
17:46 mnaser Sweet, thank you!
17:53 mohankumar joined #gluster
18:28 ackjewt joined #gluster
19:03 rudimeyer_ joined #gluster
19:17 gbrand_ joined #gluster
19:20 daMaestro joined #gluster
19:22 rudimeyer_ joined #gluster
19:36 rudimeyer joined #gluster
19:50 daMaestro joined #gluster
19:59 50UAB1A31 joined #gluster
20:08 Bullardo joined #gluster
20:54 nullsign_ joined #gluster
20:54 nullsign_ hey guys.. what does it mean when you try to create a volume with gluster and it says the local host is "not a friend" ?
20:54 nullsign_ trying to make a shared volume between 2 servers, each shows the other in peer status.
20:55 nullsign_ but create fails with; "Host: gluster-fs1 not a friend"
20:55 nullsign_ very odd.
20:55 daMaestro well then introduce them and give them a bottle of scotch
20:55 daMaestro they'll be friends quickly ;-)
20:55 daMaestro basically it means the nodes don't have a trust
20:56 nullsign_ i did a peer probe.
20:56 nullsign_ on one another.
20:56 nullsign_ er. brb
20:56 daMaestro that should do it
20:56 nullsign_ how do i make them trust one another?
20:56 daMaestro did those succeed?
20:56 nullsign_ they succeed with peer probe
20:56 daMaestro then they should trust eachother
20:56 daMaestro active cluster?
20:57 daMaestro or can you reset the peers?
20:57 daMaestro http://community.gluster.org/q/gluster-volume-creation-isn-t-working/
20:57 glusterbot <http://goo.gl/NxFFw> (at community.gluster.org)
20:57 daMaestro nullsign_, also what does gluster peer status show?
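For context, the sequence daMaestro is walking nullsign_ through, as a hedged sketch with hypothetical hostnames, brick paths, and volume name (server1/server2.example.com, /export/brick1, myvol are not from the log):

    # On server1: probe the peer by the same name you will later use in volume create
    gluster peer probe server2.example.com
    gluster peer status        # expect: State: Peer in Cluster (Connected)

    # Create and start the volume, referring to every brick (including the local one)
    # by a name each peer can resolve
    gluster volume create myvol replica 2 transport tcp \
        server1.example.com:/export/brick1 server2.example.com:/export/brick1
    gluster volume start myvol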
21:23 jiffe2 joined #gluster
22:16 mnaser A shot in the dark here… can't use NFS because some scripts use flock(..) and NLM doesn't seem to be working (flock call gets stuck), if I use glusterfs, i get slow performance because of the insane amount of stat calls that wordpress does
22:17 daMaestro you can look to add the stat caching translator on the client side
22:17 gbrand_ joined #gluster
22:18 NuxRo http://docs.openstack.org/developer/cinder/api/cinder.volume.nfs.html#module-cinder.volume.nfs <- any chance we could use gluster in a similar way?
22:18 glusterbot <http://goo.gl/ZDjUh> (at docs.openstack.org)
22:19 mnaser NuxRo: i believe a driver would have to be written
22:21 NuxRo that would be nice
22:24 lh joined #gluster
22:24 lh joined #gluster
22:27 lh joined #gluster
22:27 mnaser what daMaestro mentioned before leaving.. is that the io-cache translator?
22:28 UnixDev joined #gluster
22:29 mnaser local fs vs over glusterfs is 1 second slower
22:44 lhawthor_ joined #gluster
22:45 robo joined #gluster
22:47 daMaestro joined #gluster
22:50 mnaser daMaestro: the stat cache = io-cache translator?
23:06 daMaestro yeah, that might be it
23:07 daMaestro i thought there was a cache translator specific to stat/inode caching
23:07 daMaestro but those might have been options on the io-cache translator
23:07 mnaser i only found two options and neither mentioned stat/inode
23:07 mnaser http://gluster.org/community/documentation/index.php/Translators/performance/io-cache
23:07 glusterbot <http://goo.gl/CqDKI> (at gluster.org)
23:07 mnaser unless im missing documentation
23:08 daMaestro mnaser, http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/
23:08 glusterbot <http://goo.gl/uDFgg> (at joejulian.name)
23:08 daMaestro mnaser, are you using APC?
23:08 daMaestro that will fix wordpress doing so many stats
23:09 mnaser daMaestro: yes, been there done that, and i can't set apc.stat=0 because it's a shared platform
23:09 mnaser i use php-fpm backend and enabling that means that customer sites remain unloaded
23:10 mnaser the strange thing is that even with nfs, 0 improvement..
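The stat/inode caching daMaestro is trying to remember is most likely the md-cache translator (historically called stat-prefetch), which caches lookup/stat metadata; io-cache caches file data. A hedged sketch of the volume options involved, with a hypothetical volume name and illustrative values (option names per the 3.3-era docs):

    gluster volume set myvol performance.stat-prefetch on           # md-cache: caches stat/lookup results
    gluster volume set myvol performance.cache-size 256MB           # io-cache: data cache size
    gluster volume set myvol performance.cache-refresh-timeout 10   # seconds before cached data is revalidated
    gluster volume info myvol                                       # confirm the options were applied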
23:24 nullsign_ hey DaMaestro
23:24 daMaestro nullsign_, sup?
23:26 nullsign_ im back
23:26 nullsign_ still having vol create issues, server1 and server2 both uses each other in the peer status lists, as connected
23:27 nullsign_ uses/shows/ typo
23:27 nullsign_ on server1; when i issue the create command, references paths on server1, and server2, it tells me that server1 is not trusted.
23:27 nullsign_ which is odd, why does server1 not trust itself?
23:28 nullsign_ trusted = not a friend
23:28 nullsign_ root@glusterfs-1:/var/lib/glusterd/peers# gluster volume create shared replica 2 transport tcp glusterfs-1.nestlabs.com:/exp glusterfs-2.nestlabs.com:/exp
23:28 nullsign_ Host glusterfs-1.nestlabs.com not a friend
23:29 nullsign_ ive reset and removed the peers, and reprobed one another, as suggested. no change
23:30 daMaestro what does gluster peer status show?
23:33 daMaestro heh, send me a nest device and i'll give you my ssh pub key for me to fix this for you ;-)
23:33 * daMaestro kidding
23:33 daMaestro but i might become a customer
23:34 daMaestro damn, not priced for an implus buy. i'll have to look into it more later ;-)
23:34 daMaestro impulse*
23:35 nullsign_ http://pastie.org/5433732
23:35 glusterbot Title: #5433732 - Pastie (at pastie.org)
23:35 nullsign_ lol
23:35 nullsign_ see above for mah pastie.
23:40 daMaestro ha, well i'm now a customer
23:40 daMaestro so i'm vested in getting your gluster cluster working so i get software updates ;-)
23:40 nullsign_ lol
23:40 nullsign_ can't argue with that.
23:41 daMaestro i loathe my thermostat to the point where i manually adjust it as i come and go, so i just bought a holiday present for myself
23:41 daMaestro anywhoo
23:41 nullsign_ they rock :D
23:41 nullsign_ smart little guys.
23:41 nullsign_ they adjust to you. :)
23:41 daMaestro so, see the issue?
23:41 daMaestro make sure you probe using FQDN
23:41 daMaestro that is your issue
23:41 nullsign_ er one thing, i removed the domain names from the pastie.
23:41 nullsign_ sorry, should have said that to you.
23:42 nullsign_ here is the real problem tho.
23:42 daMaestro well you left one
23:42 daMaestro which is why i spotted you didn't use FQDN on both
23:42 daMaestro use FQDN in everything with gluster
23:42 nullsign_ ah.. yeah.. ok im lazy there.
23:42 nullsign_ i did :)
23:42 nullsign_ i promise, both peer status's have FWDN
23:42 nullsign_ FQDN/
23:42 nullsign_ which are resolvable.
23:42 nullsign_ here is the weird part;
23:42 nullsign_ root@glusterfs-1:/var/lib/glusterd/peers# gluster volume create shared replica 2 transport tcp glusterfs-1.nestlabs.com:/exp glusterfs-2.nestlabs.com:/exp
23:42 nullsign_ Host glusterfs-1.nestlabs.com not a friend
23:43 nullsign_ why would fs-1 not think he is his own friend?
23:44 daMaestro what does your /etc/hosts look like?
23:44 daMaestro it's possible something is going wrong there
23:44 nullsign_ both have the same entries, ext ips.
23:45 daMaestro is fqdn listed first?
23:45 daMaestro or rather, just fqdn?
23:46 nullsign_ updated pastie;
23:46 gbrand_ joined #gluster
23:46 nullsign_ http://pastie.org/5433764
23:46 glusterbot Title: #5433764 - Pastie (at pastie.org)
23:46 nullsign_ just fqdn
23:46 nullsign_ it stands to reason that if hosts was borked i wouldnt get (connected) on each?
23:48 nullsign_ im stumped...
23:49 daMaestro root@glusterfs-2: # gluster peer probe glusterfs-2.fqdn
23:49 daMaestro root@glusterfs-1: # gluster peer probe glusterfs-1.fqdn
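A hedged sketch of what daMaestro is checking for, with hypothetical addresses and domain (10.0.0.x and example.com are not from the log). Since probes are one-way (as daMaestro notes at the end of the log), a common remedy when the first server is known only by IP is to probe it back from the second server by its FQDN:

    # /etc/hosts on both servers
    10.0.0.1  glusterfs-1.example.com
    10.0.0.2  glusterfs-2.example.com

    # Probe each direction by FQDN so both peers know each other by name rather than IP
    root@glusterfs-1:~# gluster peer probe glusterfs-2.example.com
    root@glusterfs-2:~# gluster peer probe glusterfs-1.example.com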
23:49 mnaser how safe is it to set 'lookup-unhashed off'
23:50 mnaser reading through http://joejulian.name/blog/dht-misses-are-expensive/
23:50 glusterbot <http://goo.gl/A3mCk> (at joejulian.name)
23:50 daMaestro mnaser, i refused to use it
23:50 mnaser daMaestro: any more docs about it? cant seem to find a lot about it
23:50 nullsign_ damaestro: run that command with the domain, or with .fqdn ?
23:50 mnaser and why?/
23:50 daMaestro with your correct fqdn
23:51 daMaestro mnaser, we tried it on a client and a lot of our stuff could not be accessed
23:51 daMaestro but again, we had a cluster that was ... an upgrade
23:51 mnaser daMaestro, i see, not a lot of docs about it though, bleh
23:51 mnaser from unify to dht?
23:51 nullsign_ damaestro: each one says the other is already in the peer list.
23:52 daMaestro notice i was adding itself
23:52 daMaestro i can't recall if it needs to trust itself or not
23:52 mnaser im thinking that lookup-unhashed is creating a massive amount of extra io
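For reference on the lookup-unhashed question, a hedged sketch with a hypothetical volume name; per the blog post linked above, turning it off skips the broadcast lookup on a DHT hash miss, at the risk of not finding files that sit off their hashed subvolume (e.g. renamed files or files awaiting a rebalance):

    gluster volume set myvol cluster.lookup-unhashed off    # skip the fan-out lookup on a hash miss
    gluster volume reset myvol cluster.lookup-unhashed      # revert to the default behaviour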
23:53 daMaestro nullsign_, looks like it should trust itself.
23:54 nullsign_ how do i do that?
23:54 daMaestro nullsign_, root@glusterfs-2: # gluster peer probe glusterfs-2.fqdn
23:54 nullsign_ everything i read says 'never probe yourself'
23:54 daMaestro hmm
23:54 daMaestro lol
23:54 daMaestro well then don't listen to me
23:54 nullsign_ that and i, i did probe myself earlier.
23:54 nullsign_ and it still was failing.
23:54 nullsign_ had to rm all the peers to start over.
23:54 daMaestro k
23:55 nullsign_ :(
23:56 daMaestro so you stopped glusterd and removed all the peers and then started glusterd and probed server 2 from sever 1?
23:56 daMaestro server*
23:57 daMaestro am i able to assume this cluster is not active?
23:57 daMaestro so we can put glusterd into debug mode so we can see wtf it is rejecting
23:58 m0zes joined #gluster
23:59 daMaestro ah right, probes are one way
