
IRC log for #gluster, 2014-04-05


All times are shown in UTC.

Time Nick Message
00:01 vpshastry joined #gluster
00:13 bala joined #gluster
00:54 haomaiwang joined #gluster
00:55 jag3773 joined #gluster
01:46 ilbot3 joined #gluster
01:46 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:59 wrale joined #gluster
02:13 RicardoSSP joined #gluster
02:39 hagarth joined #gluster
03:01 nightwalk joined #gluster
03:01 glusterbot New news from newglusterbugs: [Bug 1084653] tests/bugs/bug-865825.t needs to wait longer for self-heal daemon to start. It's failing in Rackspace due to this. <https://bugzilla.redhat.com/show_bug.cgi?id=1084653>
03:12 hagarth joined #gluster
03:13 eastz0r joined #gluster
03:22 Ark joined #gluster
03:43 joshin joined #gluster
03:52 shyam joined #gluster
04:18 gmcwhistler joined #gluster
04:25 sks joined #gluster
04:39 davinder joined #gluster
04:42 sks joined #gluster
04:53 tokik joined #gluster
05:02 glusterbot New news from newglusterbugs: [Bug 1081274] clang compilation fixes and other directory restructuring <https://bugzilla.redhat.com/show_bug.cgi?id=1081274>
05:12 vpshastry joined #gluster
05:18 jbrooks joined #gluster
05:42 hchiramm_ joined #gluster
05:45 wgao joined #gluster
06:06 tokik joined #gluster
06:47 ngoswami joined #gluster
07:04 hchiramm_ joined #gluster
07:20 RobertLaptop joined #gluster
07:34 ProT-0-TypE joined #gluster
07:54 vasilitch joined #gluster
08:01 vimal joined #gluster
08:07 ipshnik joined #gluster
08:09 hchiramm_ joined #gluster
08:30 Pavid7 joined #gluster
09:21 doekia joined #gluster
09:21 doekia_ joined #gluster
09:22 doekia_ left #gluster
09:26 tokik left #gluster
09:50 ctria joined #gluster
10:41 jiku joined #gluster
10:57 hagarth joined #gluster
10:59 vpshastry joined #gluster
11:33 vpshastry joined #gluster
11:54 ilbot3 joined #gluster
11:54 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
12:00 vpshastry joined #gluster
12:14 vpshastry left #gluster
12:35 ilbot3 joined #gluster
12:35 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
12:49 vpshastry joined #gluster
13:09 VerboEse joined #gluster
13:13 shyam joined #gluster
13:15 qdk joined #gluster
13:31 hchiramm_ joined #gluster
13:38 Ark joined #gluster
13:56 hchiramm_ joined #gluster
14:18 hchiramm_ joined #gluster
14:26 cyberbootje joined #gluster
15:44 droner joined #gluster
16:05 hchiramm_ joined #gluster
16:16 sputnik13 joined #gluster
16:25 bala joined #gluster
16:37 lalatenduM joined #gluster
17:09 ProT-O-TypE joined #gluster
17:16 droner joined #gluster
17:20 beatnix joined #gluster
17:21 beatnix wondering if fuse makes use of the operating system's VFS, meaning each gluster mount point has its own VFS cache?
17:21 beatnix couldn't that lead to consistency issues?
17:36 lalatenduM beatnix, yes, fuse makes use of the operating system's VFS, but that does not lead to consistency issues
17:36 lalatenduM beatnix, all VFS calls go back to gluster via /dev/fuse
17:42 beatnix thanks lalatenduM
17:42 uebera|| joined #gluster
17:48 gmcwhistler joined #gluster
17:50 imkiev joined #gluster
17:59 Ark joined #gluster
17:59 beatnix lalatenduM, but imagine this with me
18:00 beatnix if a write call was issued on one mount point
18:00 beatnix will that refresh the VFS caches of every mount point?
18:23 lalatenduM beatnix, I am not too familiar with the implementation details, but I have tested a similar scenario and I don't see any issue
18:23 lalatenduM beatnix, maybe you want to ask this question on the gluster-devel mailing list
18:23 lalatenduM @mailinglists
18:23 glusterbot lalatenduM: http://www.gluster.org/interact/mailinglists
18:23 lalatenduM @mailinglists | beatnix
18:24 lalatenduM beatnix | @mailinglists
18:25 beatnix yes, thanks lalatenduM :)
18:25 beatnix I might take a look at the code first
18:25 lalatenduM @irc
18:26 lalatenduM beatnix, you should join the #gluster-devel IRC channel for code discussions
18:27 beatnix okay thank you :)
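
A minimal sketch of the kind of cross-mount consistency test lalatenduM describes above, assuming the same gluster volume is FUSE-mounted at two hypothetical paths on one host (/mnt/gv0-a and /mnt/gv0-b); the paths and probe filename are illustrative, not taken from this log:

#!/usr/bin/env python3
# Cross-mount consistency probe: write through one FUSE mount of a
# gluster volume, then read the same file back through a second mount.
import os

MOUNT_A = "/mnt/gv0-a"  # hypothetical first mount of the volume
MOUNT_B = "/mnt/gv0-b"  # hypothetical second mount of the same volume
NAME = "consistency-probe.txt"
payload = b"written via mount A\n"

# Write through the first mount and force the data down to gluster.
with open(os.path.join(MOUNT_A, NAME), "wb") as f:
    f.write(payload)
    f.flush()
    os.fsync(f.fileno())

# Read back through the second mount. If mount B's kernel-side cache
# served stale data instead of going back through /dev/fuse, this
# assertion would fail.
with open(os.path.join(MOUNT_B, NAME), "rb") as f:
    assert f.read() == payload, "stale data seen through mount B"

print("both mount points see the same file contents")

If staleness were ever observed, the glusterfs FUSE client's attribute-timeout and entry-timeout mount options can be set to 0 to disable kernel-side attribute/entry caching.
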
18:33 Andyy2 I'm having problems with VM disk corruption on brick reboots (kvm-qemu-libgfapi virtual machines on a distributed-replicated gluster cluster). As soon as I reboot one of the bricks in a replica set, the virtual machines get disk corruption errors. Any ideas?
18:34 hchiramm_ joined #gluster
18:35 nightwalk joined #gluster
18:57 ProT-0-TypE joined #gluster
19:21 plarsen joined #gluster
19:25 qdk joined #gluster
20:18 shyam joined #gluster
20:18 shyam left #gluster
20:53 droner left #gluster
20:54 Durzo joined #gluster
21:28 cyberbootje joined #gluster
21:44 nightwalk joined #gluster
21:47 liquidn2o joined #gluster
21:56 atrius joined #gluster
22:08 siel joined #gluster
22:34 glusterbot New news from newglusterbugs: [Bug 1084721] GLuster RPMs for CentOS/etc from download.gluster.org are not consistently signed <https://bugzilla.redhat.com/show_bug.cgi?id=1084721>
22:42 diegows joined #gluster
23:26 owenmurr joined #gluster
23:30 mtanner joined #gluster
