
IRC log for #gluster, 2014-08-09


All times shown according to UTC.

Time Nick Message
00:02 siel joined #gluster
00:02 sputnik13 joined #gluster
00:09 sputnik13 joined #gluster
00:13 bala joined #gluster
00:14 sputnik13 joined #gluster
00:48 siel joined #gluster
00:49 Alex_____ joined #gluster
01:25 JoeJulian caiozanolla: You should just be able to restart glusterd.
01:25 JoeJulian that should restart the self-heal daemon.
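JoeJulian's suggestion amounts to restarting glusterd and then checking that the self-heal daemon came back. A hedged sketch; the service name varies by distro, and "myvol" is a placeholder volume name:

```shell
# Restarting the management daemon also respawns the self-heal daemon (glustershd).
systemctl restart glusterd

# Confirm a "Self-heal Daemon" line shows up as online for each node.
gluster volume status

# The heal command should respond once the daemon is running.
gluster volume heal myvol info
```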
01:26 JoeJulian BossR: I'm going to find out if I actually get the performance I engineer. I fully expect to.
01:27 BossR Yeah I won't lie, the thought that I could share my php and static files from GlusterFS has a lot of appeal to me...
01:27 JoeJulian You won't get local hard disk performance.
01:27 JoeJulian Plus, you should consider the advice at ,,(php)
01:27 glusterbot (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH --fopen-keep-cache
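The second option glusterbot mentions translates roughly into a mount invocation like the following; the timeout values, server name, volume name, and mountpoint are illustrative, not recommendations:

```shell
# FUSE mount with aggressive caching to soften php's stat()-per-include cost.
# All values here are examples; tune the timeouts to your tolerance for staleness.
mount -t glusterfs \
  -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache \
  server1:/webvol /var/www
```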
01:28 BossR hmm...
01:28 JoeJulian But the question of, "I'm going to build something and want to know if it'll meet my expectations." It won't. Nothing ever will when you haven't defined your expectations.
01:30 BossR oh that's fair... What I am dealing with is a CMS application called SocialEngine, which is built using Zend Framework 1.3.2 - so, I want to have the ability to keep the data consistent without using a multi-master lsyncd configuration...
01:30 JoeJulian Zend has some pretty good caching ability. You might fare pretty well.
01:30 BossR I of course will be using Varnish (specifically Fastly.com) - and nginx on the web servers...
01:30 BossR and memcached
01:31 JoeJulian Sounds like a winning combination.
01:32 JoeJulian Keep your directory sizes under 10k if you can and I expect you'll find it satisfactory.
01:32 BossR yeah... I just know WHAT GlusterFS is... I have not taken the dive into it yet... cause I wanted to know fundamentally if it will work or if the performance loss will be substantial
01:32 BossR what happens over 10k
01:32 BossR And I am guessing you mean 10,000 files/directories
01:34 JoeJulian The way the kernel and directory listings interface, the number of dirents within a single directory can exceed a single buffer. Once they do, the kernel code does some fancy circus acts to keep track of where you are in a directory listing that can be different every time you query it. That, apparently, doesn't perform well.
01:35 bit4man joined #gluster
01:35 JoeJulian Family's in the car. I'd best head off to dinner. I'll be around late if you're still here and have any more questions.
02:26 daxatlas joined #gluster
02:28 BossR joined #gluster
02:33 hagarth joined #gluster
02:49 Alex_____ joined #gluster
03:00 BossR joined #gluster
03:30 BossR Can you do mirroring between 3 GlusterFS Nodes?
03:31 Alex_____ joined #gluster
03:43 y4m4 joined #gluster
04:52 purpleidea BossR: it's called replica == 3, so yes. (also, g2g)
04:52 purpleidea night
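purpleidea's "replica == 3" answer corresponds to creating a volume with a replica count of 3; the hostnames, volume name, and brick paths below are made up for illustration:

```shell
# A 3-way mirrored volume: every file is stored on all three bricks.
gluster volume create mirrorvol replica 3 \
  node1:/bricks/mirrorvol node2:/bricks/mirrorvol node3:/bricks/mirrorvol
gluster volume start mirrorvol
```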
05:01 bala joined #gluster
05:06 nbalachandran joined #gluster
05:11 sputnik13 joined #gluster
05:12 daxatlas joined #gluster
05:34 dockbram joined #gluster
05:41 ramteid joined #gluster
05:45 LebedevRI joined #gluster
05:50 kumar joined #gluster
05:53 jiku joined #gluster
06:01 daxatlas joined #gluster
06:06 daxatlas joined #gluster
06:08 sputnik13 joined #gluster
06:12 ricky-ti1 joined #gluster
06:30 ThatGraemeGuy joined #gluster
06:38 nbalachandran joined #gluster
08:28 sputnik13 joined #gluster
08:48 hagarth joined #gluster
09:23 kumar joined #gluster
09:24 hagarth joined #gluster
09:25 sahina joined #gluster
10:15 nbalachandran joined #gluster
10:32 LebedevRI joined #gluster
10:37 qdk joined #gluster
11:32 ricky-ti1 joined #gluster
12:26 Gugge joined #gluster
12:32 diegows joined #gluster
12:44 coredump joined #gluster
13:21 diegows semiosis, are you around?
13:21 diegows I have a question about your script gfid-resolver.sh
13:21 diegows why do you execute find instead of directly access to the file?
14:02 DV_ joined #gluster
14:23 ws2k33 joined #gluster
14:55 sputnik13 joined #gluster
15:10 daxatlas joined #gluster
15:27 sputnik13 joined #gluster
15:45 kumar joined #gluster
16:01 sputnik13 joined #gluster
16:39 rotbeard joined #gluster
16:58 BossR joined #gluster
17:31 sputnik13 joined #gluster
17:32 BossR joined #gluster
17:39 JoeJulian diegows: because gfid files are hardlinks. The only way to find the other directory entry that points to the same inode is through find.
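The reason `find` is needed can be demonstrated without a real gluster brick: a gfid file under `.glusterfs/` is a hardlink, and the only way to locate the other directory entry pointing at the same inode is to scan for it. This sketch uses a throwaway directory standing in for a brick, with a fake gfid name:

```shell
# Set up a stand-in "brick" with one file and a hardlink playing the gfid role.
mkdir -p /tmp/brickdemo/.glusterfs
echo data > /tmp/brickdemo/realfile.txt
ln -f /tmp/brickdemo/realfile.txt /tmp/brickdemo/.glusterfs/fake-gfid

# Same inode, two names: resolve the "gfid" link back to the user-visible path.
find /tmp/brickdemo -samefile /tmp/brickdemo/.glusterfs/fake-gfid ! -path '*/.glusterfs/*'
```

This is why resolving a gfid is slow on large bricks: `find` has to walk the whole brick looking for the matching inode.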
17:44 BossR joined #gluster
17:46 diegows JoeJulian, thanks... this is going to take some time :)
* diegows is dealing with a split-brain issue
17:48 JoeJulian :/
18:01 diegows JoeJulian, maybe you can help... suppose I don't really know which is the right copy, but I have an old rsync backup and can recover all the data. Is it ok to reset the xattrs on one of them?
18:01 diegows I mean, on one of the nodes (there are two)
18:01 ricky-ti1 joined #gluster
18:03 JoeJulian yes
18:03 JoeJulian one of the bricks
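The xattr reset JoeJulian confirms is typically done with getfattr/setfattr on the brick whose copy you want to discard. The volume name, client indices, and path below are hypothetical; inspect your own `trusted.afr.*` keys first and zero only those:

```shell
# Inspect the AFR changelog xattrs on the copy to discard (run on that brick).
getfattr -d -m trusted.afr -e hex /bricks/vol0/path/to/file

# Zero the pending counters so the other brick's copy wins the self-heal.
setfattr -n trusted.afr.vol0-client-0 -v 0x000000000000000000000000 /bricks/vol0/path/to/file
setfattr -n trusted.afr.vol0-client-1 -v 0x000000000000000000000000 /bricks/vol0/path/to/file
```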
19:02 msmith joined #gluster
19:13 BossR joined #gluster
19:56 qdk joined #gluster
19:56 caiozanolla JoeJulian, I did that already, on both nodes; still the self-heal daemon won't start.
20:02 caiozanolla JoeJulian, don't know if there is a relation, but this shows in the logs upon "gluster volume status": "glusterd-op-sm.c:3404:glusterd_op_modify_op_ctx] 0-management: op_ctx modification failed"
20:02 caiozanolla JoeJulian, just restarted glusterd on both nodes; still, the self-heal daemon won't start.
20:10 BossR joined #gluster
20:44 sjm joined #gluster
21:15 ThatGraemeGuy joined #gluster
21:40 marcoceppi joined #gluster
21:47 marcoceppi joined #gluster
22:08 marcoceppi joined #gluster
22:20 hagarth joined #gluster
22:20 qdk joined #gluster
22:37 msmith joined #gluster
23:05 SpComb joined #gluster
23:34 goodwill joined #gluster
23:35 SpComb joined #gluster
23:35 goodwill is there a page with possible use cases for gluster? (trying to learn)
23:44 SpComb joined #gluster
