
IRC log for #gluster, 2014-11-07


All times shown according to UTC.

Time Nick Message
00:43 saurabh joined #gluster
00:46 bala joined #gluster
00:50 calisto joined #gluster
00:53 bunni_ joined #gluster
00:59 uebera|| joined #gluster
00:59 Andreas-IPO joined #gluster
00:59 rotbeard joined #gluster
01:02 capri joined #gluster
01:09 keds joined #gluster
01:29 calisto1 joined #gluster
02:07 bala joined #gluster
02:12 kalzz joined #gluster
02:25 harish joined #gluster
02:35 bala joined #gluster
03:03 David_H__ joined #gluster
03:16 kdhananjay joined #gluster
03:18 saurabh joined #gluster
03:33 nshaikh joined #gluster
03:42 plarsen joined #gluster
03:43 kshlm joined #gluster
03:48 calisto joined #gluster
03:52 shubhendu_ joined #gluster
03:56 bharata-rao joined #gluster
04:01 meghanam joined #gluster
04:01 meghanam_ joined #gluster
04:07 RameshN joined #gluster
04:16 kanagaraj joined #gluster
04:18 hagarth joined #gluster
04:21 nishanth joined #gluster
04:22 David_H_Smith joined #gluster
04:23 atinmu joined #gluster
04:24 David_H_Smith joined #gluster
04:25 David_H_Smith joined #gluster
04:25 itisravi joined #gluster
04:25 David_H_Smith joined #gluster
04:27 nbalachandran joined #gluster
04:36 anoopcs joined #gluster
04:39 dusmant joined #gluster
04:40 smohan joined #gluster
04:43 Rafi_kc joined #gluster
04:43 rafi1 joined #gluster
04:43 spandit joined #gluster
04:46 spandit_ joined #gluster
04:55 jiffin joined #gluster
04:59 prasanth_ joined #gluster
05:00 ndarshan joined #gluster
05:07 lalatenduM joined #gluster
05:11 ppai joined #gluster
05:16 topshare joined #gluster
05:19 soumya_ joined #gluster
05:37 anoopcs joined #gluster
05:38 sahina joined #gluster
05:39 anoopcs joined #gluster
05:44 hagarth joined #gluster
05:45 anoopcs joined #gluster
05:51 ramteid joined #gluster
05:54 glusterbot New news from newglusterbugs: [Bug 1161416] snapshot delete all command fails with --xml option. <https://bugzilla.redhat.com/show_bug.cgi?id=1161416> || [Bug 1161424] [RFE] snapshot configuration should have help option. <https://bugzilla.redhat.com/show_bug.cgi?id=1161424>
05:56 renopt joined #gluster
05:58 overclk joined #gluster
06:01 renopt hi there glusterfolk, is it possible to use an existing ext4 filesystem as a brick without reformatting it or losing the data on it?
06:02 renopt as in, could I just add it to a new volume with no problems
06:03 aravindavk joined #gluster
06:11 kdhananjay joined #gluster
06:11 bala joined #gluster
06:17 sahina joined #gluster
06:21 karnan joined #gluster
06:21 soumya joined #gluster
06:21 nishanth joined #gluster
06:23 meghanam joined #gluster
06:23 meghanam_ joined #gluster
06:28 dusmant joined #gluster
06:30 ctria joined #gluster
06:30 ppai joined #gluster
06:41 soumya joined #gluster
06:51 ira joined #gluster
06:53 mator where/how do i see volume default options?
06:53 mator "gluster volume info" shows only changed ones
06:54 mator thanks
06:56 atinmu joined #gluster
06:57 ricky-ticky joined #gluster
06:57 SOLDIERz joined #gluster
07:01 nbalachandran joined #gluster
07:03 rgustafs joined #gluster
07:08 kanagaraj joined #gluster
07:10 ProT-0-TypE joined #gluster
07:12 sahina joined #gluster
07:12 nishanth joined #gluster
07:14 ndarshan joined #gluster
07:15 aravindavk joined #gluster
07:17 renopt so it turns out the answer was yes, thx
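[Editor's note: a minimal sketch of what renopt describes, assuming a single-brick volume; "server1", "myvol", and the brick path are placeholders. Gluster may refuse a non-empty brick or one on the root partition unless "force" is appended, and pre-existing files only become visible through the mount once they are looked up.]

```shell
# Reuse an existing, already-populated ext4 filesystem as a brick
# without reformatting it (paths and names are hypothetical):
gluster volume create myvol server1:/mnt/existing-ext4/brick force
gluster volume start myvol
```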
07:20 dusmant joined #gluster
07:20 Pupeno joined #gluster
07:25 nshaikh joined #gluster
07:26 tom[] joined #gluster
07:36 Fen1 joined #gluster
07:37 calum_ joined #gluster
07:39 dusmant joined #gluster
07:40 ZhangHuan joined #gluster
07:42 ZhangHuan Hello guys, have a question about AFR performance, anyone available to help?
07:44 atinmu joined #gluster
07:51 ricky-ticky1 joined #gluster
07:53 ndevos mator: "gluster volume set help" should show the default values
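[Editor's note: the two commands being contrasted here, with "myvol" as a placeholder volume name.]

```shell
# Show all tunable options together with their default values:
gluster volume set help

# "volume info" only lists options that were changed from their defaults:
gluster volume info myvol
```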
08:00 ppai joined #gluster
08:17 deniszh joined #gluster
08:39 T0aD joined #gluster
08:40 liquidat joined #gluster
08:46 [Enrico] joined #gluster
08:47 vikumar joined #gluster
08:48 smohan_ joined #gluster
08:52 hagarth joined #gluster
08:53 dusmant joined #gluster
08:55 nishanth joined #gluster
09:03 ndarshan joined #gluster
09:04 karnan joined #gluster
09:11 Slashman joined #gluster
09:11 flu_ joined #gluster
09:15 shubhendu_ joined #gluster
09:17 sahina joined #gluster
09:21 RameshN joined #gluster
09:24 atalur joined #gluster
09:25 glusterbot New news from newglusterbugs: [Bug 1161502] filter quota internal xattrs in afr metadata self-heal <https://bugzilla.redhat.com/show_bug.cgi?id=1161502>
09:36 ramteid joined #gluster
09:36 nishanth joined #gluster
09:37 gildub joined #gluster
09:38 overclk joined #gluster
09:43 elico joined #gluster
09:48 atinmu joined #gluster
09:48 hagarth joined #gluster
09:54 ocellus joined #gluster
10:12 sahina joined #gluster
10:13 RameshN joined #gluster
10:15 ndarshan joined #gluster
10:15 dusmant joined #gluster
10:16 nishanth joined #gluster
10:18 choe joined #gluster
10:22 SteveCooling is there something special that needs to be done to get geo-replication to also do deletes in 3.5 ?
10:26 ccha joined #gluster
10:35 shubhendu joined #gluster
10:35 lyang0 joined #gluster
10:38 diegows joined #gluster
10:39 atinmu joined #gluster
10:42 meghanam joined #gluster
10:42 meghanam_ joined #gluster
10:43 SteveCooling I'm seeing errors in my geo-replication logs that seem to be this problem: https://bugzilla.redhat.com/show_bug.cgi?id=1046604
10:43 glusterbot Bug 1046604: medium, high, RHS 2.1.2, avishwan, CLOSED ERRATA, geo-replication fails with OSError when setting remote xtime
10:45 SteveCooling but i'm having trouble finding out if this is supposed to be fixed in 3.5.2
10:47 glusterbot New news from resolvedglusterbugs: [Bug 1073844] geo-replication fails with OSError when setting remote xtime <https://bugzilla.redhat.com/show_bug.cgi?id=1073844>
10:48 hagarth looks like it is fixed in 3.5.0 - https://bugzilla.redhat.com/show_bug.cgi?id=1073844
10:48 glusterbot Bug 1073844: medium, high, ---, khiremat, CLOSED CURRENTRELEASE, geo-replication fails with OSError when setting remote xtime
10:49 SteveCooling well
10:51 SteveCooling my traceback is not the same
10:51 karnan joined #gluster
10:55 SteveCooling completely
10:55 SteveCooling but is very similar
10:55 glusterbot New news from newglusterbugs: [Bug 1058300] VMs do not resume after paused state and storage connection to a gluster domain (they will also fail to be manually resumed) <https://bugzilla.redhat.com/show_bug.cgi?id=1058300>
11:07 [Enrico] joined #gluster
11:09 smohan joined #gluster
11:09 SteveCooling i'm gonna switch back to rsync and see if this only occurs with tar+ssh transferring
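[Editor's note: a hedged sketch of the switch SteveCooling describes, assuming 3.5-era geo-replication where the transfer method is controlled by the use_tarssh config option; volume and slave names are placeholders.]

```shell
# Switch the geo-replication session from tar+ssh back to rsync:
gluster volume geo-replication mastervol slavehost::slavevol \
    config use_tarssh false
```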
11:14 ProT-O-TypE joined #gluster
11:15 shubhendu joined #gluster
11:26 glusterbot New news from newglusterbugs: [Bug 1161573] Enhancement in tcp mount routine <https://bugzilla.redhat.com/show_bug.cgi?id=1161573>
11:38 soumya__ joined #gluster
11:45 shubhendu joined #gluster
11:51 SOLDIERz joined #gluster
11:53 rgustafs joined #gluster
11:56 diegows joined #gluster
11:59 rwheeler joined #gluster
12:00 B21956 joined #gluster
12:14 P0w3r3d joined #gluster
12:16 calisto joined #gluster
12:19 mariusp joined #gluster
12:20 mojibake joined #gluster
12:24 soumya__ joined #gluster
12:25 SteveCooling Confirmed what is failing (not sure why though)
12:25 SteveCooling It's not https://bugzilla.redhat.com/show_bug.cgi?id=1046604 but related
12:25 glusterbot Bug 1046604: medium, high, RHS 2.1.2, avishwan, CLOSED ERRATA, geo-replication fails with OSError when setting remote xtime
12:26 glusterbot New news from newglusterbugs: [Bug 1161588] ls -alR can not heal the disperse volume <https://bugzilla.redhat.com/show_bug.cgi?id=1161588>
12:26 SteveCooling it is instead failing on the code line above the one "fixed" in this commit: http://review.gluster.org/#/c/7207/2/geo-replication/syncdaemon/master.py
12:26 glusterbot Title: Gerrit Code Review (at review.gluster.org)
12:27 SteveCooling Traceback from log here: http://piratepad.net/XzS9zSSsIQ
12:27 glusterbot Title: PiratePad: XzS9zSSsIQ (at piratepad.net)
12:28 mariusp joined #gluster
12:31 SteveCooling sorry, i meant this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1073844
12:31 glusterbot Bug 1073844: medium, high, ---, khiremat, CLOSED CURRENTRELEASE, geo-replication fails with OSError when setting remote xtime
12:32 ndevos SteveCooling: sending an email to the mailinglist may be a good option, I do not know how many irc users are familiar with these details in geo-rep
12:32 ndevos you can keep on dropping notes here, maybe someone can help out - but a summary in an email is likely to get more reactions
12:35 SteveCooling thanks.
12:35 ProT-0-TypE joined #gluster
12:41 SOLDIERz joined #gluster
12:46 gildub joined #gluster
12:49 LebedevRI joined #gluster
12:56 diegows joined #gluster
13:01 Fen1 joined #gluster
13:07 keds joined #gluster
13:12 RameshN joined #gluster
13:14 elico joined #gluster
13:26 glusterbot New news from newglusterbugs: [Bug 1158008] Quota utilization not correctly reported for dispersed volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1158008>
13:28 hagarth joined #gluster
13:30 calisto1 joined #gluster
13:37 harish joined #gluster
13:37 edward1 joined #gluster
13:38 gildub joined #gluster
13:45 gildub joined #gluster
13:52 plarsen joined #gluster
13:56 glusterbot New news from newglusterbugs: [Bug 1161573] Enhancement in tcp mount routine, replace usage of glfs_* functions <https://bugzilla.redhat.com/show_bug.cgi?id=1161573> || [Bug 1161621] Possible file corruption on dispersed volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1161621>
14:01 XpineX joined #gluster
14:05 bennyturns joined #gluster
14:12 virusuy joined #gluster
14:14 _dist joined #gluster
14:15 karoshi joined #gluster
14:16 karoshi is a replace-brick as expensive as a self-heal, i.e. does it bring a server to its knees the same way?
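[Editor's note: a hedged sketch of the operation karoshi is asking about, assuming a replicated volume on 3.5-era gluster; server names and brick paths are placeholders. After the commit, the new brick is populated by self-heal, which is where the I/O cost comparable to a full heal comes from.]

```shell
# Replace a failed brick with a new one (names are hypothetical):
gluster volume replace-brick myvol \
    oldsrv:/bricks/b1 newsrv:/bricks/b1 commit force

# Trigger a full self-heal so the new brick gets the data:
gluster volume heal myvol full
```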
14:21 SOLDIERz joined #gluster
14:23 jmarley joined #gluster
14:26 ProT-0-TypE joined #gluster
14:31 calum_ joined #gluster
14:33 mariusp joined #gluster
14:45 nbalachandran joined #gluster
14:50 mariusp joined #gluster
14:58 jmarley joined #gluster
14:58 Pupeno_ joined #gluster
15:03 rotbeard joined #gluster
15:06 LebedevRI joined #gluster
15:13 jobewan joined #gluster
15:15 wushudoin joined #gluster
15:20 nueces joined #gluster
15:23 theron joined #gluster
15:24 theron joined #gluster
15:39 bala joined #gluster
15:40 coredump joined #gluster
15:41 bala1 joined #gluster
15:47 anoopcs joined #gluster
15:48 soumya__ joined #gluster
15:48 plarsen joined #gluster
15:52 mariusp joined #gluster
16:03 shubhendu joined #gluster
16:05 atrius joined #gluster
16:07 julim joined #gluster
16:17 ira joined #gluster
16:22 meghanam joined #gluster
16:22 meghanam_ joined #gluster
16:24 diegows joined #gluster
16:31 bala joined #gluster
16:51 Pupeno joined #gluster
16:59 necrogami joined #gluster
16:59 jiffin joined #gluster
17:12 lmickh joined #gluster
17:13 jiffin joined #gluster
17:14 julim joined #gluster
17:16 jiffin1 joined #gluster
17:16 daMaestro joined #gluster
17:24 soumya__ joined #gluster
17:27 ira joined #gluster
17:34 Pupeno_ joined #gluster
17:42 vimal joined #gluster
17:48 jiffin joined #gluster
17:52 XpineX joined #gluster
17:53 jiffin joined #gluster
17:59 jiffin joined #gluster
17:59 Pupeno joined #gluster
17:59 Pupeno joined #gluster
18:04 jiffin joined #gluster
18:16 jiffin1 joined #gluster
18:16 SOLDIERz joined #gluster
18:19 calisto joined #gluster
18:21 Pupeno_ joined #gluster
18:28 ira joined #gluster
18:31 jiffin joined #gluster
19:00 PeterA joined #gluster
19:22 _dist JoeJulian: While the version of gluster 3.5.2 I am using does properly report healing status (my main concern a while back), "info healed" still reports several heals taking place every second all day long on my vm volume. Perhaps that part of the command was missed in the fix?
19:24 _dist JoeJulian: correction, it only does it once, for every file, at the beginning of each crawl
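[Editor's note: the two heal-reporting commands _dist is comparing, with "vmvol" as a placeholder volume name.]

```shell
# Files currently pending or undergoing heal:
gluster volume heal vmvol info

# History of completed heals -- this is the listing that repeats
# every file at the beginning of each self-heal crawl:
gluster volume heal vmvol info healed
```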
19:29 ira joined #gluster
19:30 jiffin joined #gluster
19:32 nshaikh joined #gluster
19:35 Kedsta joined #gluster
19:41 osiekhan1 joined #gluster
19:43 lalatenduM joined #gluster
19:52 daMaestro joined #gluster
20:12 daMaestro joined #gluster
20:23 krullie joined #gluster
20:26 krullie joined #gluster
20:29 theron joined #gluster
20:34 chirino joined #gluster
20:54 Pupeno joined #gluster
21:04 jiffin joined #gluster
21:08 Pupeno joined #gluster
21:08 Pupeno joined #gluster
21:46 plarsen joined #gluster
21:55 calum_ joined #gluster
22:07 calisto joined #gluster
22:15 nueces_ joined #gluster
22:17 keds joined #gluster
22:18 chirino joined #gluster
22:32 theron joined #gluster
22:59 badone joined #gluster
23:09 David_H__ joined #gluster
23:13 Kedsta joined #gluster
23:14 Kedsta joined #gluster
23:48 SOLDIERz joined #gluster
