IRC log for #gluster, 2014-09-23


All times shown according to UTC.

Time Nick Message
00:05 fubada joined #gluster
00:19 sprachgenerator joined #gluster
00:28 taco2 joined #gluster
00:30 gildub joined #gluster
00:36 plarsen joined #gluster
00:37 RicardoSSP joined #gluster
00:46 misuzu left #gluster
00:52 pdrakeweb joined #gluster
00:53 sprachgenerator joined #gluster
01:00 pdrakeweb joined #gluster
01:39 Liquid-- joined #gluster
01:56 msmith_ joined #gluster
02:10 haomaiwa_ joined #gluster
02:14 haomaiw__ joined #gluster
02:26 harish_ joined #gluster
02:38 msmith_ joined #gluster
02:49 sputnik13 joined #gluster
03:01 sputnik13 joined #gluster
03:06 churnd joined #gluster
03:10 jrcresawn joined #gluster
03:22 fubada joined #gluster
03:30 rejy joined #gluster
03:35 meghanam joined #gluster
03:35 meghanam_ joined #gluster
03:38 harish joined #gluster
03:56 itisravi joined #gluster
03:57 kanagaraj joined #gluster
03:58 cyber_si joined #gluster
03:59 nbalachandran joined #gluster
04:05 bharata-rao joined #gluster
04:06 ricky-ticky joined #gluster
04:06 ndarshan joined #gluster
04:09 aravindavk joined #gluster
04:09 nishanth joined #gluster
04:17 shubhendu joined #gluster
04:26 atinmu joined #gluster
04:36 anoopcs joined #gluster
04:37 lalatenduM joined #gluster
04:37 rjoseph joined #gluster
04:43 rafi joined #gluster
04:45 soumya_ joined #gluster
04:46 RameshN joined #gluster
05:04 spandit joined #gluster
05:07 m0zes joined #gluster
05:07 aravindavk joined #gluster
05:16 deepakcs joined #gluster
05:17 aravindavk joined #gluster
05:20 sputnik13 joined #gluster
05:21 ryan_clough joined #gluster
05:22 kumar joined #gluster
05:32 raghu joined #gluster
05:34 overclk joined #gluster
05:34 kdhananjay joined #gluster
05:37 m0zes joined #gluster
05:38 hagarth joined #gluster
05:39 jiffin joined #gluster
05:41 ryan_clough left #gluster
05:41 dusmant joined #gluster
05:42 ppai joined #gluster
05:48 karnan joined #gluster
05:56 atalur joined #gluster
05:56 bala joined #gluster
05:56 soumya_ joined #gluster
05:58 saurabh joined #gluster
06:03 gildub joined #gluster
06:05 soumya_ joined #gluster
06:05 dusmantkp_ joined #gluster
06:10 bala joined #gluster
06:15 atalur joined #gluster
06:19 hagarth joined #gluster
06:23 nbalachandran joined #gluster
06:27 rjoseph joined #gluster
06:30 pkoro joined #gluster
06:31 dusmantkp_ joined #gluster
06:32 glusterbot New news from newglusterbugs: [Bug 1145450] Fix for spurious failure <https://bugzilla.redhat.com/show_bug.cgi?id=1145450> || [Bug 1108448] selinux alerts starting glusterd in f20 <https://bugzilla.redhat.com/show_bug.cgi?id=1108448>
06:34 kshlm joined #gluster
06:39 deepakcs joined #gluster
06:46 nbalachandran joined #gluster
06:49 dusmantkp_ joined #gluster
06:49 atinmu joined #gluster
06:54 ekuric joined #gluster
06:58 saurabh joined #gluster
06:59 mjrosenb joined #gluster
07:01 mbukatov joined #gluster
07:05 deepakcs joined #gluster
07:11 milka joined #gluster
07:22 LebedevRI joined #gluster
07:24 elico joined #gluster
07:24 dmachi joined #gluster
07:25 kaushal_ joined #gluster
07:26 kaushal_ joined #gluster
07:33 RaSTar joined #gluster
07:33 Fen1 joined #gluster
07:38 fsimonce joined #gluster
07:40 hagarth joined #gluster
07:40 atinmu joined #gluster
07:44 Fen1 joined #gluster
07:46 Thilam joined #gluster
07:48 ramon_dl joined #gluster
07:51 R0ok_ joined #gluster
08:03 Alssi_ joined #gluster
08:07 liquidat joined #gluster
08:08 aravinda_ joined #gluster
08:16 ricky-ticky joined #gluster
08:29 kanagaraj joined #gluster
08:50 soumya joined #gluster
08:53 Slashman joined #gluster
08:54 soumya joined #gluster
09:01 kanagaraj joined #gluster
09:04 vimal joined #gluster
09:05 harish joined #gluster
09:06 haomaiwa_ joined #gluster
09:08 bala joined #gluster
09:11 kdhananjay ndevos++
09:11 glusterbot kdhananjay: ndevos's karma is now 2
09:14 haomaiwa_ joined #gluster
09:16 haomai___ joined #gluster
09:18 haomai___ joined #gluster
09:19 dusmantkp_ joined #gluster
09:20 shubhendu joined #gluster
09:22 elico joined #gluster
09:22 ndarshan joined #gluster
09:29 pkoro joined #gluster
09:37 pkoro joined #gluster
09:43 meghanam joined #gluster
09:43 meghanam_ joined #gluster
09:51 B21956 joined #gluster
09:56 sputnik13 joined #gluster
10:04 ws2k333 joined #gluster
10:20 soumya joined #gluster
10:20 shubhendu joined #gluster
10:20 atinmu joined #gluster
10:20 dusmant joined #gluster
10:20 hagarth joined #gluster
10:21 ndarshan joined #gluster
10:23 bala joined #gluster
10:23 diegows joined #gluster
10:31 pkoro joined #gluster
10:37 ppai joined #gluster
10:38 calum_ joined #gluster
10:45 kkeithley1 joined #gluster
10:46 pkoro joined #gluster
10:50 haomaiwang joined #gluster
10:51 haomaiwa_ joined #gluster
10:56 bene2 joined #gluster
11:04 atinmu joined #gluster
11:08 B21956 joined #gluster
11:09 B219561 joined #gluster
11:12 dusmant joined #gluster
11:12 edward1 joined #gluster
11:17 haomaiwang joined #gluster
11:18 chirino joined #gluster
11:19 ndarshan joined #gluster
11:21 haomai___ joined #gluster
11:22 nishanth joined #gluster
11:28 kanagaraj joined #gluster
11:28 soumya joined #gluster
11:33 ppai joined #gluster
11:37 meghanam joined #gluster
11:37 meghanam_ joined #gluster
11:39 hagarth joined #gluster
11:47 Thilam joined #gluster
11:49 Thilam hi, I've recently performed a volume rebalance, which ended successfully but with some skipped files
11:49 Thilam my problem is the following: skipped files are still present on 2 bricks
11:50 Thilam on one brick there is the good file
11:50 Thilam on the other, a corrupted file
11:51 Thilam is there a command I can run to clean up all that mess?
11:51 Thilam i.e. to delete copies that were started and then failed
11:55 soumya__ joined #gluster
11:59 Fen1 joined #gluster
12:00 pkoro joined #gluster
12:00 ndevos REMINDER: Gluster Bug Triage meeting starting in a bit in #gluster-meeting
12:01 Slashman_ joined #gluster
12:07 julim joined #gluster
12:10 Slashman joined #gluster
12:10 ppai joined #gluster
12:11 Thilam if someone could have a look at my 11:49 question and give me a workaround it would be great
12:13 virusuy joined #gluster
12:13 virusuy joined #gluster
12:18 itisravi joined #gluster
12:19 chirino joined #gluster
12:25 Humble ndevos++ kkeithley++ ppai++
12:25 glusterbot Humble: ndevos's karma is now 3
12:25 glusterbot Humble: kkeithley's karma is now 18
12:25 glusterbot Humble: ppai's karma is now 1
12:27 Thilam no one ?
12:29 msp3k1 left #gluster
12:36 Thilam ok, I'll file a new bug
12:36 ndevos Humble++ :)
12:36 glusterbot ndevos: Humble's karma is now 2
12:37 ricky-ticky1 joined #gluster
12:37 ndevos Thilam: I don't know about that, but there were some recent fixes for rebalance; those are not included in a release yet
12:38 hagarth joined #gluster
12:38 ndevos Thilam: are you mounting over NFS? if so, you could be hitting bug 1140338
12:39 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1140338 urgent, urgent, ---, gluster-bugs, POST , rebalance is not resulting in the hash layout changes being available to nfs client
12:39 Thilam it's strange because with a cifs mount, for example, if you refresh the explorer, one time it displays the file from one brick, the other time the file from the second
12:39 Thilam no
12:39 Thilam mount with gluster
12:39 Thilam and shared by samba for windows users
12:41 Thilam # ls -lrt /glusterfs/projets-brick1/projets/CALVAL/Calval/cancet/maree/alti_prediction/envisat_ESA/predictions/FES04/ENV_ESA_corrtide_fes04_750_104_red.dat
12:41 Thilam -rw-rw----+ 2 cancet calval 32302 Sep  5 10:34 /glusterfs/projets-brick1/projets/CALVAL/Calval/cancet/maree/alti_prediction/envisat_ESA/predictions/FES04/ENV_ESA_corrtide_fes04_750_104_red.dat
12:41 Thilam # ls -lrt /glusterfs/projets-brick2/projets/CALVAL/Calval/cancet/maree/alti_prediction/envisat_ESA/predictions/FES04/ENV_ESA_corrtide_fes04_750_104_red.dat
12:41 Thilam -rw-rw---T+ 2 cancet calval 32302 Sep 19 10:01 /glusterfs/projets-brick2/projets/CALVAL/Calval/cancet/maree/alti_prediction/envisat_ESA/predictions/FES04/ENV_ESA_corrtide_fes04_750_104_red.dat
12:41 glusterbot Thilam: -rw-rw--'s karma is now -1
12:41 glusterbot Thilam: -rw-rw-'s karma is now -1
12:41 Thilam sorry :/
12:41 Thilam same file on 2 diff. bricks
12:41 ndevos no, it's not the same file, it is a ,,(link file)
12:42 glusterbot I do not know about 'link file', but I do know about these similar topics: 'linkfile'
12:42 ndevos argh!
12:42 ndevos @linkfile
12:42 glusterbot ndevos: A zero-length file with mode T--------- on a brick is a link file. It has xattrs pointing to another brick/path where the file data resides. These are usually created by renames or volume layout changes.
12:42 ndevos uh, no, it's not a linkfile, the size is non-0
12:42 Thilam yes, but it is corrupted
12:42 Thilam in this case the good one is on project1
12:43 Thilam brick1
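A rough way to tell a DHT link file from a real data copy, following the glusterbot description above: run these as root directly against the brick directory (not the client mount); the path below is a placeholder for the file in question.

    # A link file has size 0 and only the sticky bit set (mode ---------T):
    stat /glusterfs/projets-brick2/<path-to-file>

    # Dump the trusted.* xattrs; a link file carries trusted.glusterfs.dht.linkto
    # naming the subvolume that actually holds the data:
    getfattr -d -m . -e hex /glusterfs/projets-brick2/<path-to-file>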
12:43 ndevos looks like the file is in a ,,(split brain)
12:43 glusterbot I do not know about 'split brain', but I do know about these similar topics: 'split-brain'
12:43 ndevos @split-brain
12:43 glusterbot ndevos: (#1) To heal split-brain, use splitmount. http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/, or (#2) For additional information, see this older article http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
12:43 Thilam I'm on a distributed vol
12:44 ndevos oh, interesting....
12:44 ndevos I don't know if/how rebalance can cause that, I guess it would really help if you have steps to reproduce it...
12:44 Thilam the files concerned are those which have been skipped during relancing process
12:45 ndevos anyway, you really want to file a bug for that so that someone that is more familiar with rebalance can chime in
12:45 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
12:45 Thilam not relancing but rebalancing
12:45 Thilam I'll do this right now
12:46 plarsen joined #gluster
12:46 Thilam in my case, the scenario is quite simple: remove a brick, commit the removal, add a new brick, launch rebalance
12:46 kanagaraj joined #gluster
12:47 Thilam once the rebalance finished, the skipped files remained in 2 locations
12:49 ndevos okay, sounds simple enough for the rebalance guys to check and verify
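For reference, a minimal sketch of the sequence Thilam describes, with placeholder volume and brick names (myvol, server1:/old-brick, server2:/new-brick); this is the standard workflow, not a fix for the skipped/duplicated files themselves.

    # drain and remove the old brick
    gluster volume remove-brick myvol server1:/old-brick start
    gluster volume remove-brick myvol server1:/old-brick status    # wait for "completed"
    gluster volume remove-brick myvol server1:/old-brick commit

    # add the replacement brick and rebalance the layout and data
    gluster volume add-brick myvol server2:/new-brick
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status    # reports rebalanced/failed/skipped counts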
12:51 jmarley joined #gluster
12:53 theron joined #gluster
12:54 dblack joined #gluster
13:01 Slashman joined #gluster
13:03 social joined #gluster
13:03 cristov joined #gluster
13:03 soumya__ joined #gluster
13:04 gmcwhistler joined #gluster
13:04 cristov joined #gluster
13:06 ekuric joined #gluster
13:10 plarsen joined #gluster
13:11 plarsen joined #gluster
13:11 glusterbot New news from resolvedglusterbugs: [Bug 1143905] Brick still there after removal <https://bugzilla.redhat.com/show_bug.cgi?id=1143905>
13:20 coredump joined #gluster
13:24 Thilam ndevos, is there a specific "component" for rebalancing ?
13:24 Thilam glusterd ?
13:26 bene2 joined #gluster
13:29 ndevos Thilam: not sure, but 'distribute' or 'dht' should be better than glusterd
13:29 Thilam k thx
13:30 Thilam I'm sorry, I'm always experiencing problems
13:30 Thilam it seems I'm a black cat :|
13:30 Thilam (if to be a black cat is an english expression)
13:40 R0ok_ joined #gluster
13:46 deepakcs joined #gluster
13:50 ekuric joined #gluster
13:52 msmith_ joined #gluster
13:53 lalatenduM joined #gluster
14:04 glusterbot New news from newglusterbugs: [Bug 1145681] Rebalancing Issue in distributed volume <https://bugzilla.redhat.com/show_bug.cgi?id=1145681>
14:06 fyxim__ joined #gluster
14:09 bala joined #gluster
14:13 wushudoin| joined #gluster
14:17 longshot902 joined #gluster
14:22 tdasilva joined #gluster
14:24 XpineX_ joined #gluster
14:38 hagarth joined #gluster
14:40 sputnik13 joined #gluster
14:47 R0ok_ gluster logs for a mount point on a client are over 3GB.
14:47 R0ok_ it's full of "[2014-09-22 10:02:46.100774] I [dict.c:370:dict_get] (-->/usr/lib64/glusterfs/3.5.2/xlator/performance/md-cache.so(mdc_lookup+0x318) [0x7f8be33c9518] (-->/usr/lib64/glusterfs/3.5.2/xlator/debug/io-stats.so(io_stats_lookup_cbk+0x113) [0x7f8be31aec63] (-->/usr/lib64/glusterfs/3.5.2/xlator/system/posix-acl.so(posix_acl_lookup_cbk+0x233) [0x7f8be2fa03d3]))) 0-dict: !this || key=system"
14:47 glusterbot R0ok_: ('s karma is now -39
14:47 glusterbot R0ok_: ('s karma is now -40
14:47 glusterbot R0ok_: ('s karma is now -41
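Two common ways to keep a flooded client log in check, assuming a volume called myvol (placeholder); this only cuts the noise, it does not address the "0-dict: !this || key=system" message itself.

    # raise the client-side log threshold so INFO messages like the one above are dropped
    gluster volume set myvol diagnostics.client-log-level WARNING

    # and/or rotate the mount log on the client, e.g. in /etc/logrotate.d/glusterfs-client:
    /var/log/glusterfs/*.log {
        weekly
        rotate 4
        compress
        missingok
        copytruncate
    }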
14:51 nbalachandran joined #gluster
14:52 VerboEse joined #gluster
14:54 rwheeler joined #gluster
15:04 jobewan joined #gluster
15:09 kshlm joined #gluster
15:12 hagarth joined #gluster
15:22 lmickh joined #gluster
15:25 theron joined #gluster
15:36 bennyturns joined #gluster
15:39 sprachgenerator joined #gluster
15:43 hagarth joined #gluster
15:44 failshell joined #gluster
15:47 dlambrig_ joined #gluster
15:49 soumya__ joined #gluster
15:58 R0ok_ joined #gluster
15:59 sputnik13 joined #gluster
16:06 tdasilva joined #gluster
16:08 daMaestro joined #gluster
16:09 elico joined #gluster
16:16 semiosis http://gluster.org/community/documentation/index.php/Adding_space_in_EC2_with_EBS
16:20 PeterA joined #gluster
16:30 sputnik13 joined #gluster
16:33 Slashman_ joined #gluster
16:33 sputnik1_ joined #gluster
16:36 Slashman joined #gluster
17:12 tdasilva joined #gluster
17:23 sputnik13 joined #gluster
17:32 _pol joined #gluster
17:37 longshot902 joined #gluster
17:43 nshaikh joined #gluster
17:45 PeterA joined #gluster
17:50 anoopcs joined #gluster
17:50 hagarth1 joined #gluster
17:59 dblack joined #gluster
18:09 RameshN joined #gluster
18:23 coolguy6699 joined #gluster
18:26 chirino joined #gluster
18:27 ThatGraemeGuy joined #gluster
18:27 pkoro joined #gluster
18:39 lalatenduM joined #gluster
18:46 bit4man joined #gluster
18:48 fattaneh1 joined #gluster
18:53 ekuric joined #gluster
19:00 ramon_dl joined #gluster
19:21 Kailyn_Quitzon joined #gluster
19:24 dmachi1 joined #gluster
19:29 milka joined #gluster
19:39 B21956 joined #gluster
19:48 gmcwhistler left #gluster
19:49 jackdpeterson joined #gluster
19:50 jackdpeterson Hey all, got a question regarding Geo-Replication. I'm a little confused by some of the differences in 3.5. From what I can tell the primary difference is removing single-threaded server bottlenecks. Things are still master-slave as far as I can tell. Is there a way to do master-master with asynchronous replication?
19:50 sputnik13 joined #gluster
19:51 jackdpeterson E.g., host two GlusterFS instances (one in AWS US-West and one in AWS US-East). Goal being geographic redundancy with active-active configuration
19:52 jackdpeterson and by two instances I really mean two replicated servers per region (focused purely on High-availability), totaling 4 initially.
19:53 rafi1 joined #gluster
19:54 zerick joined #gluster
20:00 gildub joined #gluster
20:06 ThatGraemeGuy joined #gluster
20:28 semiosis jackdpeterson: no
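As semiosis says, only one-way (master to slave) geo-replication is supported; there is no master-master mode. For reference, a rough 3.5-era setup sketch with placeholder names (mastervol, slavehost, slavevol):

    # on a master node: generate the pem keys and distribute them to the slave cluster
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem

    # start and monitor the session
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status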
20:50 dlambrig_ left #gluster
20:53 jbrooks joined #gluster
21:20 zerick joined #gluster
21:25 systemonkey joined #gluster
22:55 chirino joined #gluster
23:03 nothau joined #gluster
23:04 MacWinner joined #gluster
23:12 toordog_wrk joined #gluster
23:37 lutix joined #gluster
23:38 torbjorn__ joined #gluster
23:38 lutix curious if anyone else has had a problem with nfs.rpc-auth-allow, the documentation says it should reject everything by default, but I'm finding quite the opposite
23:39 lutix I can make it reject by setting nfs.rpc-auth-reject '*', but then my allow statements don't override
23:40 bala joined #gluster
23:40 lutix just curious if anyone else has seen this?
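For anyone comparing notes, these options are set per volume, roughly as below (myvol and the subnet are placeholders); the behaviour lutix describes, where reject '*' wins over the allow list, is worth filing as a bug if it persists.

    # allow only one subnet to mount over the built-in gluster NFS server
    gluster volume set myvol nfs.rpc-auth-allow '192.168.10.*'
    # explicitly reject everything else
    gluster volume set myvol nfs.rpc-auth-reject '*'

    # verify: reconfigured options are listed under "Options Reconfigured"
    gluster volume info myvol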
23:47 chirino joined #gluster
23:49 Lelia_McKenzie47 joined #gluster
