
IRC log for #gluster, 2012-11-05


All times shown according to UTC.

Time Nick Message
00:01 lh joined #gluster
00:01 lh joined #gluster
00:03 Nr18 joined #gluster
00:09 kevein joined #gluster
00:54 blendedbychris joined #gluster
00:54 blendedbychris joined #gluster
00:55 blendedbychris cyberbootje: you figure out if it's in sync?
00:56 blendedbychris wonder why it can't be in the volume status
01:57 JoeJulian cyberbootje, H__, jiffe1, TSM: gluster volume heal $vol info /should/ give a clear indication of whether there's any self-heal pending.
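For reference, a minimal sketch of that check on 3.3, assuming a volume named "myvol":

    # summary of entries still pending self-heal, per brick
    gluster volume heal myvol info
    # entries that could not be healed automatically
    gluster volume heal myvol info heal-failed
    gluster volume heal myvol info split-brain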
02:29 bharata joined #gluster
02:45 ika2810 joined #gluster
02:47 erik49__ joined #gluster
03:11 kevein joined #gluster
03:31 sunus joined #gluster
03:42 shylesh joined #gluster
03:55 blendedbychris joined #gluster
03:55 blendedbychris joined #gluster
04:48 ika2810 joined #gluster
04:53 vimal joined #gluster
05:05 quillo joined #gluster
05:19 vimal joined #gluster
05:26 mdarade joined #gluster
05:29 faizan joined #gluster
05:47 bulde joined #gluster
05:54 jays joined #gluster
05:54 pranithk joined #gluster
05:57 mohankumar joined #gluster
05:59 ramkrsna joined #gluster
05:59 ramkrsna joined #gluster
06:00 vimal joined #gluster
06:04 JoeJulian bug 872703
06:04 glusterbot Bug http://goo.gl/m3MHd unspecified, unspecified, ---, pkarampu, NEW , sticky-pointer with no trusted.dht.linkto after a replace-brick commit force, heal full migration
06:21 raghu joined #gluster
06:24 sunus joined #gluster
06:28 ngoswami joined #gluster
06:34 vpshastry joined #gluster
06:42 puebele joined #gluster
06:50 mdarade1 joined #gluster
07:04 guigui1 joined #gluster
07:05 glusterbot New news from newglusterbugs: [Bug 872703] sticky-pointer with no trusted.dht.linkto after a replace-brick commit force, heal full migration <http://goo.gl/m3MHd>
07:12 plantain joined #gluster
07:12 plantain joined #gluster
07:19 lkoranda joined #gluster
07:21 plantain joined #gluster
07:21 plantain joined #gluster
07:36 ctria joined #gluster
07:38 ctria joined #gluster
07:46 vimal joined #gluster
07:49 ankit9 joined #gluster
08:02 ekuric joined #gluster
08:03 Azrael808 joined #gluster
08:07 sshaaf joined #gluster
08:13 dobber joined #gluster
08:18 gbrand_ joined #gluster
08:18 Humble joined #gluster
08:24 joeto joined #gluster
08:29 manik joined #gluster
08:34 mohankumar joined #gluster
08:36 hagarth joined #gluster
08:38 ika2810 joined #gluster
08:45 Triade joined #gluster
08:51 shylesh joined #gluster
08:52 shylesh_ joined #gluster
08:53 lh joined #gluster
08:53 lh joined #gluster
08:54 badone joined #gluster
08:56 vpshastry joined #gluster
09:02 bulde1 joined #gluster
09:02 gbrand_ joined #gluster
09:15 TheHaven joined #gluster
09:15 ramkrsna joined #gluster
09:15 ramkrsna joined #gluster
09:16 Nr18 joined #gluster
09:18 shireesh joined #gluster
09:19 hurdman joined #gluster
09:19 hurdman hi
09:19 glusterbot hurdman: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:20 shylesh_ joined #gluster
09:23 joeto joined #gluster
09:27 hurdman are you planning to include the possibility to mount a subdirectory directly?
09:30 andreask joined #gluster
09:42 ndevos hurdman: you're referring to mounting /volume-name/a-sub-dir through glusterfs (fuse-client)? it's possible with NFS now already
09:44 * ndevos thinks he's seen the request before, but cant find it in bugzilla
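As a rough illustration of the NFS route ndevos mentions (hostname, volume name and paths below are placeholders), a subdirectory of a volume can be mounted through Gluster's built-in NFS server:

    # mount only a subdirectory of the volume; Gluster NFS speaks NFSv3 over TCP
    mount -t nfs -o vers=3,mountproto=tcp server1:/volume-name/a-sub-dir /mnt/subdir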
09:53 DaveS joined #gluster
10:00 vimal joined #gluster
10:15 Jippi joined #gluster
10:21 joeto1 joined #gluster
10:25 guigui3 joined #gluster
10:25 balunasj joined #gluster
10:27 wica Hi, I have a strange issue. We use glusterfs 3.3.1 on ubuntu. On the client I see the same file twice in the same dir.
10:28 wica file size is the same, and also the name
10:28 wica Why does this happen, and how can I solve this?
10:28 ndevos also the same inode? check with 'ls -li'
10:28 wica Yep
10:28 ndevos wow, weird
10:29 wica 10059405875149890901 -rw-rw-r-- 5091 524 500   2215182 Nov  3 07:59 filename.mp3
10:29 wica 10059405875149890901 -rw-rw-r-- 5091 524 500   2215182 Nov  3 07:59 filename.mp3
10:30 wica how can I see on which brick the file is?
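One way to answer that from the client side, sketched here on the assumption that the volume is fuse-mounted under /mnt/vol, is the pathinfo virtual xattr:

    # ask the fuse mount which brick(s) actually hold the file
    getfattr -n trusted.glusterfs.pathinfo /mnt/vol/path/to/filename.mp3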
10:31 hurdman ndevos: thx for your response, is nfs safe with a replicated volume? if one brick is down, does it switch over?
10:31 wica If you use a Virtual IP, it should be safe
10:32 joeto joined #gluster
10:32 ndevos hurdman: nfs doesn't have an auto-failover for the mount-point, the nfs-server itself is a glusterfs-client and will handle a failing brick just fine
10:32 manik joined #gluster
10:33 ndevos hurdman: so, the nfs-client <-> nfs-server connection is the one you need to worry about most, like hurdman says, a virtual-ip or something would be needed
10:34 hurdman ndevos: ok thx
10:34 puebele2 joined #gluster
10:34 wica hurdman: You can also use round-robin in the DNS. Then the NFS client tries to connect to a working server
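A hedged sketch of that idea; "storage.example.com" is an invented name assumed to resolve, round-robin, to all of the gluster servers:

    # the NFS client resolves the name and connects to whichever server answers
    mount -t nfs -o vers=3,mountproto=tcp storage.example.com:/volume-name /mnt/vol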
10:44 duerF joined #gluster
10:50 rcheleguini joined #gluster
11:01 NuxRo @ppa
11:01 glusterbot NuxRo: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
11:07 guigui3 joined #gluster
11:08 andreask1 joined #gluster
11:08 andreask joined #gluster
11:20 ika2810 left #gluster
11:33 jays left #gluster
11:33 andreask joined #gluster
11:34 faizan joined #gluster
11:39 shireesh joined #gluster
11:45 mohankumar joined #gluster
11:52 manik joined #gluster
12:02 edward1 joined #gluster
12:24 aliguori joined #gluster
12:27 crashmag joined #gluster
12:38 aliguori joined #gluster
13:07 glusterbot New news from newglusterbugs: [Bug 849526] High write operations over NFS causes client mount lockup <http://goo.gl/ZAcyz>
13:12 samkottler joined #gluster
13:30 manik joined #gluster
13:37 glusterbot New news from newglusterbugs: [Bug 843819] Fix statedump code in nfs xlator <http://goo.gl/5LDGL>
13:45 bennyturns joined #gluster
13:49 plarsen joined #gluster
13:58 bulde joined #gluster
14:29 gmcwhistler joined #gluster
14:31 nueces joined #gluster
14:40 sensei_ joined #gluster
14:44 sripathi joined #gluster
14:48 JoeJulian wica: Please "stat $file" and "getfattr -m . -d -e hex $file" for that duplicated file on all your bricks and fpaste the successes.
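Spelled out, that request would look roughly like this on each server, with the brick path below standing in for the real one:

    # run against the brick copy of the duplicated file on every brick
    stat /export/brick1/path/to/filename.mp3
    getfattr -m . -d -e hex /export/brick1/path/to/filename.mp3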
14:50 saz joined #gluster
15:02 balunasj joined #gluster
15:03 hagarth joined #gluster
15:10 jiffe98 so is there any way to do more client-side caching beyond per file descriptor
15:11 stopbit joined #gluster
15:16 JoeJulian Not that I've seen.
15:18 JoeJulian I know someone wrote a python script that went through and opened a bunch of files and directories and then went to sleep forever. I'm not sure what the overall implications would be from that though.
15:18 wushudoin joined #gluster
15:18 jiffe98 it would be handy for mostly read type loads like web
15:19 JoeJulian If, by web, you mean php apps, negative lookups are what probably kills you the most.
15:20 JoeJulian I went through and made all my requires reference absolute paths and it's nice and quick. Plus, of course, I take most of my own advice from my blog.
15:22 jiffe98 how do negative lookups affect performance over positive lookups?
15:27 JoeJulian When you require/include a file, php traverses the include path looking for it. For most pre-packaged apps, this includes a lot of module and library directories that don't have what it's looking for. The DHT algorithm sends the query to the predicted brick (based on filename hash). When the file's not there it queries all the bricks and waits for them all to respond.
15:27 jdarcy ...and it does that *every time* the script is loaded.
15:27 JoeJulian Good morning jdarcy.
15:28 jdarcy So if you look in N directories on average before you find what you're looking for, that's N network round trips without negative-lookup caching . . . or one with negative-lookup caching.
15:28 jdarcy JoeJulian: heya
15:31 jiffe98 ah, gotcha
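A rough way to see how hard a PHP app hits this, assuming strace is available and index.php stands in for the real entry point, is to count the lookups that come back ENOENT:

    # every ENOENT is a negative lookup that DHT had to fan out to all bricks
    strace -f -e trace=open,stat,lstat -o /tmp/php-trace.out php index.php
    grep -c ENOENT /tmp/php-trace.out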
15:37 * JoeJulian ponders bayesian predictive lookups
15:41 daMaestro joined #gluster
15:45 Humble joined #gluster
15:57 dshea left #gluster
16:01 balunasj joined #gluster
16:06 jbrooks joined #gluster
16:17 semiosis :O
16:26 blendedbychris joined #gluster
16:26 blendedbychris joined #gluster
16:29 semiosis blendedbychris: just saw your message about invoke-rc.d
16:29 blendedbychris semiosis: it was the installer that goofed up i think
16:29 semiosis blendedbychris: i booted up a precise VM, updated everything, then installed the glusterfs 3.3.1 package and it worked fine
16:29 semiosis ok, how'd you goof up? :)
16:30 blendedbychris no idea heh… i reloaded the os and it worked fine
16:30 blendedbychris my speculation lies with having 3.2 not cleanly uninstalled
16:30 semiosis when in doubt, wipe it out
16:30 semiosis could be
16:30 blendedbychris but i have my puppet script working now
16:31 semiosis sweet
16:31 blendedbychris i need to figure out how to make it create volumes though
16:32 blendedbychris or bricks rather
16:32 blendedbychris is glusterfs xml?
16:32 semiosis there's a couple ,,(puppet) modules out there already...
16:32 glusterbot (#1) https://github.com/semiosis/puppet-gluster, or (#2) https://github.com/purpleidea/puppet-gluster
16:32 semiosis i use mine, of course
16:33 blendedbychris1 joined #gluster
16:35 semiosis there's a couple ,,(puppet) modules out there already...
16:35 glusterbot (#1) https://github.com/semiosis/puppet-gluster, or (#2) https://github.com/purpleidea/puppet-gluster
16:35 semiosis i use mine, of course
16:35 semiosis blendedbychris1: ^^^
16:35 semiosis you might have missed that
16:36 blendedbychris1 ah
16:37 semiosis personally, i don't think automated management of glusterfs volumes is a good idea.  after much consideration I came to the conclusion that glusterfs volume configuration ought to be managed manually, though everything else around that (server & client setup, monitoring, etc) should be automated.
16:37 semiosis just my opinion
16:38 blendedbychris1 maybe the initial volume
16:38 blendedbychris1 i don't see an issue with adding additional bricks
16:41 blendedbychris1 semiosis: why didn't you just use the Mount resource out of curiosity?
16:43 semiosis blendedbychris1: line #?
16:45 semiosis i do use the mount resource... what do you mean?
16:47 blendedbychris1 o okay i just looked at the configuration
16:50 semiosis brb, rebooting for system updates
17:04 balunasj joined #gluster
17:08 balunasj joined #gluster
17:14 raghu joined #gluster
17:34 seanh-ansca joined #gluster
17:37 Mo_ joined #gluster
17:40 raghu joined #gluster
17:59 blendedbychris joined #gluster
17:59 blendedbychris joined #gluster
18:04 faizan joined #gluster
18:11 erik49__ joined #gluster
18:20 mohankumar joined #gluster
18:38 Nr18 joined #gluster
18:44 dblack joined #gluster
18:45 dblack joined #gluster
18:47 dblack joined #gluster
18:49 dblack joined #gluster
19:06 silopolis joined #gluster
19:09 gbrand_ joined #gluster
19:17 njTAP joined #gluster
19:17 njTAP Hi, can I setup multi cascading geo replication in the following way with 3 servers, A master of B, B master of C and C master of A?
19:17 Nr18 joined #gluster
19:20 misuzu i don't think so
19:21 misuzu furthermore, i can imagine a few scenarios that would result in changes being lost in such a setup
19:21 Triade joined #gluster
19:25 njTAP I want to have a 40x striped volume that is replicated to 2 other datacenters. But I also want to present it in the other datacenters as if it were a cluster in the same DC... what would be the way to make it happen, if any?
19:29 njTAP or do I have to wait for 3.4
19:29 raghu joined #gluster
19:33 hagarth joined #gluster
19:48 njTAP anybody?
19:54 semiosis njTAP: are you sure you want ,,(stripe) ?  chances are that it's not what you really want
19:54 glusterbot njTAP: Please see http://goo.gl/5ohqd about stripe volumes.
19:58 njTAP Very good document. I dont want to stripe, I want to distribute (between 40 servers in one datacenter) and replicated (in the other 2 datacenters)
19:59 semiosis whats the latency between these datacenters?
20:00 njTAP <5 to <10ms
20:01 semiosis well that's pushing it but depending on your use-case/workload you may get acceptable performance replicating between datacenters
20:01 semiosis i for example replicate between ec2 availability zones, which have ~1ms latency, and i'm happy with that
20:03 semiosis replication is sensitive to latency because it is synchronous
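For concreteness, the kind of volume being discussed, distributed across many servers with each file held once per datacenter, would be created along these lines (server names and brick paths are invented):

    # bricks are grouped into replica sets of 3, one brick per DC;
    # the sets are then distributed to spread the files
    gluster volume create homes replica 3 \
        dcA-srv1:/bricks/homes dcB-srv1:/bricks/homes dcC-srv1:/bricks/homes \
        dcA-srv2:/bricks/homes dcB-srv2:/bricks/homes dcC-srv2:/bricks/homes
    gluster volume start homes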
20:04 jdarcy OK, nilfs2 is a washout.  :(
20:06 njTAP Will 3.4's Multi-master GeoReplication provide what I am looking for?
20:09 jdarcy njTAP: The multi-master GeoReplication scheduled for 3.4 is just a tweak to our existing GeoReplication.  It's asynchronous so it won't include extra latency (like AFR does), but it's also unordered so the replicas can be pretty wildly inconsistent.
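For comparison, plain one-way geo-replication in 3.3 is configured per volume, roughly like this ("myvol", the slave host and the slave path are placeholders):

    # asynchronous, unordered replication of myvol to a remote slave directory
    gluster volume geo-replication myvol slavehost:/data/remote_dir start
    gluster volume geo-replication myvol slavehost:/data/remote_dir status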
20:12 semiosis njTAP: what's your use case?
20:14 * semiosis guesses vm images
20:15 njTAP I have 3 datacenters, with VMs for users... want to use gluster for /home to begin with. Users round-robin to the 3 datacenters.
20:16 njTAP our VM's are served from a Hitachi SAN for now... eventually i'd like to move them to Gluster or perhaps Swift, depends. But thats not planned yet
20:17 njTAP So the idea is, no matter which datacenter the user gets his VM from, their /home drive is mounted from a local Gluster
20:18 njTAP and their data is replicated across sites at all times, so if I am working in DC-A and log off and login to DC-B my data should be up to date
20:18 semiosis so the round-robin happens when the vm is provisioned, and that vm lives in the same dc for its whole life?
20:18 jdarcy njTAP: How up-to-date should that be?
20:19 njTAP VMs are stateless. All customization lives in /home
20:19 njTAP jdarcy: as up to date as possible.
20:19 semiosis of course
20:20 semiosis this sounds like a very complicated setup!
20:20 njTAP Yeah LOL.
20:30 jdarcy njTAP: Well, let's say that the two DCs had been disconnected for a while, and hadn't caught up yet.  Would it be OK if somebody moving from DC-A to DC-B saw their files go back in time several hours?  Would it be OK if *some* files were several hours behind but others (written earlier) were up to date?
20:30 mtanner joined #gluster
20:35 njTAP If the whole datacenter goes down, until Gluster is caught up, it will not be presented as a node
20:36 njTAP if one server out of 40 goes down, then I would like gluster to get the files stored on that one server to come from the corresponding servers in DC-B or DC-C
20:38 jdarcy njTAP: Sounds like you want replication *below* distribution.
20:54 Jippi joined #gluster
20:54 njTAP jdarcy: that would be correct
21:01 jdarcy njTAP: Well, that would require a more manual type of setup.  In fact, I'm not sure if gsyncd will allow replication at the brick level, since it usually operates at the volume level.
21:03 njTAP what would happen if i have 10ms latency and I created a distributed 3x replicated volume across the 3 DCs?
21:05 semiosis njTAP: as i suggested earlier, that should "work" though you may find the performance to be too poor
21:09 njTAP and that's because replication is synchronous, am i correct?
21:11 semiosis as i said before, "replication is sensitive to latency because it is synchronous"
21:13 njTAP thanks semiosis and jdarcy, very helpful!!
21:17 semiosis yw
21:22 badone joined #gluster
22:12 nueces joined #gluster
22:13 tryggvil joined #gluster
22:22 njTAP I am reading this post and it has me a little confused http://rackerhacker.com/2010/08/11/one-month-with-glusterfs-in-production/
22:22 glusterbot <http://goo.gl/M68Ht> (at rackerhacker.com)
22:23 njTAP If I have a double-replica volume, and I write a file to the volume, who is responsible for writing the mirror file, the server that the gluster client is connected to or the gluster client itself?
22:44 hattenator joined #gluster
22:45 elyograg njTAP: it's my understanding that the client manages this.  on 3.3, the self-heal daemon is supposed to take care of anything that doesn't get copied while things are down.
22:46 Jippi joined #gluster
22:51 stefanha joined #gluster
23:02 semiosis njTAP: that article is over 2 years old, it's a bit outdated in some regards
23:07 atrius joined #gluster
23:09 niv joined #gluster
23:24 njTAP joined #gluster
23:28 gbrand_ joined #gluster
23:44 duerF joined #gluster
23:45 elyograg anyone know enough about gluster-swift to make it work with a publicly signed cert that has an intermediate cert between it and the root?  I have added the CA cert to the file referenced in proxy-server.conf, but it doesn't seem to be working.  a quick grep through the swift python code doesn't seem to turn up any additional options to configure a cert chain, like apache has.
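One workaround often suggested for the eventlet-based swift proxy, offered here only as an assumption about how it loads its certificate, is to concatenate the server cert and the intermediate into the single cert_file that proxy-server.conf points at:

    # server certificate first, then the intermediate(s); paths are illustrative
    cat /etc/swift/server.crt /etc/swift/intermediate-ca.crt > /etc/swift/cert-chain.crt
    # point cert_file in proxy-server.conf at the concatenated file, then restart
    swift-init proxy restart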
23:47 layer3 joined #gluster
23:50 atrius joined #gluster
