IRC log for #gluster, 2015-04-24


All times are shown in UTC.

Time Nick Message
00:39 plarsen joined #gluster
00:41 gildub joined #gluster
00:57 badone__ joined #gluster
00:59 badone_ joined #gluster
01:01 harish_ joined #gluster
01:05 meghanam joined #gluster
01:23 edwardm61 joined #gluster
01:51 jcastill1 joined #gluster
01:56 jcastillo joined #gluster
02:10 kbyrne joined #gluster
02:24 zerick_ joined #gluster
02:26 harish_ joined #gluster
02:28 bharata-rao joined #gluster
02:29 Jmainguy joined #gluster
02:30 wushudoin joined #gluster
02:37 Jmainguy joined #gluster
02:40 wushudoin left #gluster
02:46 Jmainguy joined #gluster
02:53 gem joined #gluster
03:01 tessier joined #gluster
03:06 raghug joined #gluster
03:09 kovshenin joined #gluster
03:17 rwheeler joined #gluster
03:24 gem joined #gluster
03:27 wushudoin joined #gluster
04:00 atinmu joined #gluster
04:06 kdhananjay joined #gluster
04:06 ppai joined #gluster
04:09 zerick_ joined #gluster
04:18 hflai joined #gluster
04:20 kanagaraj joined #gluster
04:26 nangthang joined #gluster
04:27 overclk joined #gluster
04:27 itisravi joined #gluster
04:29 rjoseph joined #gluster
04:30 sakshi joined #gluster
04:32 RameshN joined #gluster
04:36 shubhendu joined #gluster
04:36 soumya joined #gluster
04:41 anoopcs joined #gluster
04:45 nbalacha joined #gluster
04:46 kotreshhr joined #gluster
04:47 schandra joined #gluster
04:48 Bhaskarakiran joined #gluster
04:49 hagarth joined #gluster
04:49 kripper left #gluster
04:52 jiffin joined #gluster
04:55 Jmainguy joined #gluster
04:56 nbalacha joined #gluster
05:01 kshlm joined #gluster
05:03 spandit joined #gluster
05:03 lalatenduM joined #gluster
05:06 Jmainguy joined #gluster
05:12 Anjana joined #gluster
05:15 gem joined #gluster
05:16 ndarshan joined #gluster
05:17 deepakcs joined #gluster
05:18 poornimag joined #gluster
05:32 glusterbot News from newglusterbugs: [Bug 1214994] Disperse volume: Rebalance failed when plain disperse volume is converted to distributed disperse volume <https://bugzilla.redhat.com/show_bug.cgi?id=1214994>
05:35 gem_ joined #gluster
05:39 ashiq joined #gluster
05:41 anil joined #gluster
05:45 soumya joined #gluster
05:46 raghu joined #gluster
05:50 gem_ joined #gluster
05:54 Manikandan joined #gluster
05:55 Manikandan_ joined #gluster
06:01 atalur joined #gluster
06:02 glusterbot News from newglusterbugs: [Bug 1215002] glusterd crashed on the node when tried to detach a tier after restoring data from the snapshot. <https://bugzilla.redhat.com/show_bug.cgi?id=1215002>
06:14 Jmainguy joined #gluster
06:15 the-me joined #gluster
06:19 meghanam joined #gluster
06:19 jtux joined #gluster
06:20 gem__ joined #gluster
06:22 Jmainguy joined #gluster
06:22 bharata-rao joined #gluster
06:24 nishanth joined #gluster
06:37 Bhaskarakiran joined #gluster
06:39 Jmainguy joined #gluster
06:40 Guest75481 joined #gluster
06:42 ghenry joined #gluster
06:45 mbukatov joined #gluster
06:48 Philambdo joined #gluster
06:50 gem__ joined #gluster
06:58 schandra joined #gluster
07:00 Jmainguy joined #gluster
07:00 atalur joined #gluster
07:01 aravindavk joined #gluster
07:02 glusterbot News from newglusterbugs: [Bug 1215017] gf_msg not giving output to STDOUT. <https://bugzilla.redhat.com/show_bug.cgi?id=1215017>
07:02 glusterbot News from newglusterbugs: [Bug 1215018] [New] - gluster peer status goes to disconnected state. <https://bugzilla.redhat.com/show_bug.cgi?id=1215018>
07:03 nbalacha joined #gluster
07:03 Manikandan_ joined #gluster
07:03 Manikandan joined #gluster
07:09 [Enrico] joined #gluster
07:13 lifeofguenter joined #gluster
07:23 Slashman joined #gluster
07:27 liquidat joined #gluster
07:27 maveric_amitc_ joined #gluster
07:32 glusterbot News from newglusterbugs: [Bug 1215025] Disperse volume: server side heal doesn't work <https://bugzilla.redhat.com/show_bug.cgi?id=1215025>
07:32 glusterbot News from newglusterbugs: [Bug 1215033] Bitrot command line usage is not correct <https://bugzilla.redhat.com/show_bug.cgi?id=1215033>
07:32 glusterbot News from newglusterbugs: [Bug 1215022] Populate message IDs with recommended action. <https://bugzilla.redhat.com/show_bug.cgi?id=1215022>
07:32 glusterbot News from newglusterbugs: [Bug 1215026] Tracker bug for 3.7 Issues reported by Coverity static analysis tool - <https://bugzilla.redhat.com/show_bug.cgi?id=1215026>
07:35 gem__ joined #gluster
07:58 atalur joined #gluster
07:58 raghug joined #gluster
08:00 nbalacha joined #gluster
08:01 hgowtham joined #gluster
08:10 Pupeno joined #gluster
08:11 badone__ joined #gluster
08:13 zerick joined #gluster
08:20 jtux joined #gluster
08:26 argonius joined #gluster
08:26 nbalacha joined #gluster
08:27 argonius hi * i've created a distributed-replicated volume with 4 bricks. this is like a raid 10, so how can i find out which bricks do the raid 1 and which do the raid 0?
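
An aside on the question above, hedged since the actual volume definition never appears in the channel: gluster groups bricks into replica sets purely by the order they were listed at create time, so with replica 2 and four bricks, bricks 1-2 mirror each other and bricks 3-4 mirror each other, while the distribute layer hashes files across the two pairs. "gluster volume info" prints the bricks in that order. The volume and server names below are placeholders.

    # assuming a hypothetical volume "myvol" created as:
    #   gluster volume create myvol replica 2 serverA:/b1 serverB:/b1 serverA:/b2 serverB:/b2
    gluster volume info myvol
    # illustrative output:
    #   Type: Distributed-Replicate
    #   Number of Bricks: 2 x 2 = 4
    #   Brick1: serverA:/b1   <- replica pair 1 (the "raid 1" part)
    #   Brick2: serverB:/b1   <- replica pair 1
    #   Brick3: serverA:/b2   <- replica pair 2
    #   Brick4: serverB:/b2   <- replica pair 2
    # files are then distributed ("raid 0"-like) across pair 1 and pair 2
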
08:28 Norky joined #gluster
08:45 ndarshan joined #gluster
08:48 vikumar joined #gluster
08:50 zerick joined #gluster
08:50 ira_ joined #gluster
08:55 zerick joined #gluster
08:56 ktosiek joined #gluster
08:58 kdhananjay joined #gluster
09:03 glusterbot News from newglusterbugs: [Bug 1215078] Glusterd crashed when volume was stopped <https://bugzilla.redhat.com/show_bug.cgi?id=1215078>
09:03 RameshN joined #gluster
09:03 atalur joined #gluster
09:04 ThatGraemeGuy left #gluster
09:05 T0aD joined #gluster
09:10 jiffin joined #gluster
09:13 ndarshan joined #gluster
09:17 zerick joined #gluster
09:21 ira joined #gluster
09:22 raghug joined #gluster
09:22 raghug JustinClift: there?
09:23 atalur joined #gluster
09:35 glusterbot News from resolvedglusterbugs: [Bug 1215078] Glusterd crashed when volume was stopped <https://bugzilla.redhat.com/show_bug.cgi?id=1215078>
09:41 Slashman joined #gluster
09:41 ctria joined #gluster
09:42 haomaiwa_ joined #gluster
09:45 ashiq joined #gluster
09:52 itisravi_ joined #gluster
09:55 Norky joined #gluster
09:59 ppai joined #gluster
10:00 dusmant joined #gluster
10:01 hgowtham joined #gluster
10:01 _shaps_ joined #gluster
10:11 harish_ joined #gluster
10:24 sakshi joined #gluster
10:35 sakshi joined #gluster
10:38 mbukatov joined #gluster
10:48 nbalacha joined #gluster
11:00 ninkotech joined #gluster
11:00 ninkotech_ joined #gluster
11:01 kovshenin joined #gluster
11:02 raghug joined #gluster
11:03 glusterbot News from newglusterbugs: [Bug 1215114] gluster peer probe hangs <https://bugzilla.redhat.com/show_bug.cgi?id=1215114>
11:03 glusterbot News from newglusterbugs: [Bug 1215117] Disperse volume: rebalance and quotad crashed <https://bugzilla.redhat.com/show_bug.cgi?id=1215117>
11:03 glusterbot News from newglusterbugs: [Bug 1215120] Bitrot file crawling is too slow <https://bugzilla.redhat.com/show_bug.cgi?id=1215120>
11:05 glusterbot News from resolvedglusterbugs: [Bug 1147236] gluster 3.6 compatibility issue with gluster 3.3 <https://bugzilla.redhat.com/show_bug.cgi?id=1147236>
11:06 ndevos c=IN
11:06 ndevos urgh...
11:20 kovsheni_ joined #gluster
11:25 kovshenin joined #gluster
11:33 glusterbot News from newglusterbugs: [Bug 1215129] After adding/removing the bricks to the volume bitrot is crawling the all volumes bricks <https://bugzilla.redhat.com/show_bug.cgi?id=1215129>
11:33 glusterbot News from newglusterbugs: [Bug 1215122] Data Tiering: attaching a tier with non supported replica count crashes glusterd on local host <https://bugzilla.redhat.com/show_bug.cgi?id=1215122>
11:39 kovsheni_ joined #gluster
11:40 firemanxbr joined #gluster
11:41 lifeofgu_ joined #gluster
11:47 hagarth joined #gluster
11:48 kovshenin joined #gluster
11:50 ashiq joined #gluster
11:52 LebedevRI joined #gluster
11:52 kovshenin joined #gluster
11:55 bene2 joined #gluster
11:56 kovsheni_ joined #gluster
11:57 diegows joined #gluster
11:58 gem joined #gluster
11:58 anoopcs joined #gluster
11:59 rafi1 joined #gluster
12:01 kanagaraj joined #gluster
12:01 kovshenin joined #gluster
12:01 anrao joined #gluster
12:02 jmarley joined #gluster
12:03 kovshenin joined #gluster
12:05 glusterbot News from resolvedglusterbugs: [Bug 1213380] Data Tiering: Tracker bug for disallowing attach and detach brick on a tiered volume(for 3.7 only) <https://bugzilla.redhat.com/show_bug.cgi?id=1213380>
12:07 kovsheni_ joined #gluster
12:12 kovshenin joined #gluster
12:14 pdrakeweb joined #gluster
12:14 nishanth joined #gluster
12:15 rjoseph joined #gluster
12:15 gem joined #gluster
12:17 ashiq joined #gluster
12:18 LebedevRI joined #gluster
12:23 shaunm_ joined #gluster
12:26 kovshenin joined #gluster
12:28 mark_m joined #gluster
12:29 rafi joined #gluster
12:30 mark_m Hi folks - can we get some help here - we are trying to use http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-6/x86_64/ to install the latest version, but when we do a "yum install glusterfs-fuse" the repo tells us 3.6.2 is available, BUT the repo only contains the new 3.6.3 files. Please advise?
12:31 meghanam joined #gluster
12:33 glusterbot News from newglusterbugs: [Bug 1215152] [Data Tiering] : Attaching a replica 2 hot tier to a replica 3 volume changes the volume topology to nx2 - causing inconsistent data between bricks in the replica set <https://bugzilla.redhat.com/show_bug.cgi?id=1215152>
12:35 hagarth joined #gluster
12:35 kovsheni_ joined #gluster
12:37 kovsheni_ joined #gluster
12:39 kovshenin joined #gluster
12:43 Gill_ joined #gluster
12:47 soumya joined #gluster
12:47 kovshenin joined #gluster
12:48 ndevos mark_m: can you ,,(paste) the output of "yum list glusterfs-fuse" ?
12:48 glusterbot mark_m: For RPM based distros you can yum install fpaste, for debian, ubuntu, and arch it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
12:48 ndevos mark_m: and, also maybe  "yum repolist" ?
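
For anyone hitting the same mismatch mark_m describes (repo metadata still advertising 3.6.2 while only 3.6.3 RPMs are on the server), the usual suspect is stale yum metadata cached on the client; a sketch of the standard checks ndevos is heading toward, using only the package name from the conversation:

    yum clean metadata                            # drop cached repo metadata
    yum --showduplicates list glusterfs-fuse      # list every version each enabled repo now offers
    yum repolist -v                               # confirm which repos are enabled and when their metadata was fetched
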
12:52 kovshenin joined #gluster
12:58 rjoseph joined #gluster
13:01 kovsheni_ joined #gluster
13:04 kovshenin joined #gluster
13:05 atalur joined #gluster
13:14 RameshN joined #gluster
13:22 gem joined #gluster
13:24 soumya joined #gluster
13:28 kanagaraj joined #gluster
13:30 georgeh-LT2 joined #gluster
13:32 kovshenin joined #gluster
13:32 diegows joined #gluster
13:35 jmarley joined #gluster
13:41 hamiller joined #gluster
13:43 kovsheni_ joined #gluster
13:44 theron joined #gluster
13:46 kovsheni_ joined #gluster
13:48 kotreshhr left #gluster
13:54 yossarianuk hi - just to confirm, I have an existing 2-server replicated glusterfs setup
13:55 yossarianuk I want to change to geo-replication / async
13:55 yossarianuk do I actually have to delete the glusterfs volume first?
13:57 dberry joined #gluster
13:57 dberry joined #gluster
13:58 rjoseph joined #gluster
14:02 plarsen joined #gluster
14:04 glusterbot News from newglusterbugs: [Bug 1215173] Disperse volume: rebalance and quotad crashed <https://bugzilla.redhat.com/show_bug.cgi?id=1215173>
14:04 glusterbot News from newglusterbugs: [Bug 1215187] timeout/expiry of group-cache should be set to 300 seconds <https://bugzilla.redhat.com/show_bug.cgi?id=1215187>
14:08 bennyturns joined #gluster
14:11 rwheeler joined #gluster
14:13 wushudoin joined #gluster
14:19 kovshenin joined #gluster
14:24 kovsheni_ joined #gluster
14:27 yossarianuk or should I just delete the 'brick' on the new slave side ?
14:27 kovshenin joined #gluster
14:33 atalur joined #gluster
14:34 glusterbot News from newglusterbugs: [Bug 1215189] timeout/expiry of group-cache should be set to 300 seconds <https://bugzilla.redhat.com/show_bug.cgi?id=1215189>
14:37 bennyturns joined #gluster
14:38 kovshenin joined #gluster
14:49 ashiq joined #gluster
14:54 kovshenin joined #gluster
14:54 julim joined #gluster
15:03 kdhananjay joined #gluster
15:04 meghanam joined #gluster
15:05 kovsheni_ joined #gluster
15:08 kovsheni_ joined #gluster
15:08 nangthang joined #gluster
15:10 cholcombe joined #gluster
15:11 kovshenin joined #gluster
15:13 shubhendu joined #gluster
15:14 kovshenin joined #gluster
15:15 roost joined #gluster
15:17 kovsheni_ joined #gluster
15:17 yossarianuk anyone ?
15:17 yossarianuk I have an existing 2-server replicated glusterfs setup - I want to change to geo-replication / async, do I actually have to delete the glusterfs volume first?
15:17 yossarianuk or should I just delete the 'brick' on the new slave side ?
15:19 yossarianuk I am trying to follow this - https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md
15:19 kovshenin joined #gluster
15:22 kovshenin joined #gluster
15:23 Manikandan joined #gluster
15:23 Manikandan_ joined #gluster
15:24 yossarianuk running the command 'gluster system:: execute gsec_create'
15:24 yossarianuk I get
15:24 yossarianuk gsec_create not found.
15:24 yossarianuk ?
15:26 nbalacha joined #gluster
15:30 yossarianuk i think im stuck on the fact that I have no volume now
15:30 yossarianuk I deleted the replicated one
15:31 yossarianuk how do I create a master/slave volume ??
15:32 yossarianuk According to - > https://github.com/lpabon/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md
15:32 yossarianuk I need to 'And both the master and slave volumes should have been created and started before creating geo-rep session.'
15:32 yossarianuk How do I do that ^^^^^^^^
15:33 yossarianuk i.e. any example would do.
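
Since the volume-creation half of the question never really gets answered in the channel, here is a rough sketch of the sequence described in the admin guide linked above: create and start a volume on each cluster, then create the geo-rep session from the master side. The host names, volume names and brick paths (master1, slave1, mastervol, slavevol, /bricks/b1) are invented for illustration, the exact syntax differs between gluster releases, and, as comes up below, geo-rep is not available on EL5 at all.

    # on the master cluster
    gluster volume create mastervol replica 2 master1:/bricks/b1 master2:/bricks/b1
    gluster volume start mastervol
    # on the slave cluster
    gluster volume create slavevol replica 2 slave1:/bricks/b1 slave2:/bricks/b1
    gluster volume start slavevol
    # back on a master node: push ssh keys, then create and start the session
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slave1::slavevol create push-pem
    gluster volume geo-replication mastervol slave1::slavevol start
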
15:33 kkeithley maybe try 'gluster system:: execute peer_gsec_create'
15:33 kovsheni_ joined #gluster
15:33 kkeithley and @fileabug
15:33 yossarianuk ah 'peer_gsec_create not found.' - I guess I need to re-add the perr
15:33 yossarianuk peer
15:34 yossarianuk thats not it
15:34 daMaestro joined #gluster
15:35 yossarianuk am I getting ' 'peer_gsec_create not found.' ' because I do not have a volume ????
15:35 yossarianuk (replicated setup was easy and just worked..)
15:35 kkeithley dunno.  What distro?
15:35 yossarianuk centos5
15:35 kkeithley geo-rep doesn't work in el5. python is too old
15:36 yossarianuk ah thanks
15:36 yossarianuk will have to upgrade in the future then...
15:36 virusuy joined #gluster
15:37 soumya joined #gluster
15:40 kovshenin joined #gluster
15:54 Gill joined #gluster
15:57 yossarianuk kkeithley: One last thing - if I were to compile an updated python would geo-replication then work?
15:58 yossarianuk or is the el5 package missing that functionality?
16:03 ninkotech joined #gluster
16:03 ninkotech_ joined #gluster
16:10 yossarianuk https://bugzilla.redhat.com/show_bug.cgi?id=1074045  - looks like the package has had it removed
16:10 glusterbot Bug 1074045: low, unspecified, ---, jclift, CLOSED CURRENTRELEASE, Geo-replication doesn't work on EL5, so rpm packaging of it should be disabled
16:12 kovshenin joined #gluster
16:18 kovshenin joined #gluster
16:20 kovshenin joined #gluster
16:28 theron_ joined #gluster
16:37 jmarley joined #gluster
16:53 kovsheni_ joined #gluster
16:59 kovshenin joined #gluster
17:06 Rapture joined #gluster
17:18 kovsheni_ joined #gluster
17:29 VeggieMeat joined #gluster
17:30 pdrakeweb joined #gluster
17:31 fattaneh joined #gluster
17:35 ashiq joined #gluster
17:41 fattaneh left #gluster
17:51 redbeard joined #gluster
17:52 kovshenin joined #gluster
17:53 tessier joined #gluster
17:54 kovshen__ joined #gluster
17:55 jrdn joined #gluster
17:56 _pol joined #gluster
17:56 _pol How often should one run a gluster rebalance?
17:56 _pol Assuming that you aren't adding more bricks?
18:00 kovshenin joined #gluster
18:03 kovshenin joined #gluster
18:05 Gill joined #gluster
18:07 kovshenin joined #gluster
18:08 ekuric joined #gluster
18:09 kovsheni_ joined #gluster
18:11 kovshenin joined #gluster
18:12 shaunm_ joined #gluster
18:14 theron joined #gluster
18:22 kovshenin joined #gluster
18:29 ashiq joined #gluster
18:42 social joined #gluster
18:45 kovshenin joined #gluster
18:51 kovshenin joined #gluster
18:53 wkf joined #gluster
18:57 bene2 joined #gluster
19:03 coredump joined #gluster
19:10 coredump joined #gluster
19:12 kovshenin joined #gluster
19:16 kovshenin joined #gluster
19:17 kovshenin joined #gluster
19:21 srsc joined #gluster
19:23 kovsheni_ joined #gluster
19:39 kovshenin joined #gluster
19:40 kovshenin joined #gluster
19:42 kovshenin joined #gluster
19:45 lexi2 joined #gluster
19:45 kovshenin joined #gluster
19:49 kovshenin joined #gluster
19:51 kovshenin joined #gluster
19:53 kovshenin joined #gluster
20:01 kovshenin joined #gluster
20:02 theron joined #gluster
20:05 kovshenin joined #gluster
20:08 srsc ok...i have the split brain. gluster 3.4.1, distributed replicate, tcp, 4x2 bricks
20:09 srsc six of those bricks report split brain files in volume heal VOLUME info split-brain, and all six report 1023 files, which seems suspect
20:10 srsc also the vast majority of the split brain files are reported as gfids and not file paths, although a few file paths are reported
20:11 srsc also, one of the replicate brick pairs reports the same gfid over and over again with different time stamps for most of their 1023 entries
20:12 kovshenin joined #gluster
20:13 srsc i'm hoping to not have to look up 3k gfids and manually delete duplicates, so any insight is welcome
20:25 JoeJulian _pol: Theoretically, you would never have to. That's assuming you don't rename files.
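
For reference, when a rebalance is actually needed (after add-brick/remove-brick, or when heavy renaming has left stale layout and link files behind), it is a per-volume operation along these lines; the volume name is a placeholder:

    gluster volume rebalance myvol fix-layout start   # recompute the directory hash layout only
    gluster volume rebalance myvol start              # fix the layout and migrate misplaced files
    gluster volume rebalance myvol status             # watch progress per node
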
20:26 JoeJulian srsc: If you run a heal...full it will eventually resolve all of the gfid to filenames. Otherwise you might just have to pick one side and move on. You can use ,,(splitmount) to make that easier.
20:26 glusterbot srsc: https://github.com/joejulian/glusterfs-splitbrain
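
One way to turn the bare gfids from the heal output into paths without waiting for the full heal, assuming the entries are regular files: on a brick, .glusterfs/<xx>/<yy>/<gfid> is a hardlink to the real file, so a find by inode recovers the path. The brick path and gfid below are placeholders.

    BRICK=/export/brick1                               # placeholder brick path
    GFID=01234567-89ab-cdef-0123-456789abcdef          # placeholder gfid from the heal output
    GFILE="$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
    find "$BRICK" -path "$BRICK/.glusterfs" -prune -o -samefile "$GFILE" -print
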
20:31 kovsheni_ joined #gluster
20:32 kovshenin joined #gluster
20:44 bene2 joined #gluster
20:46 rshade98 joined #gluster
20:47 kovsheni_ joined #gluster
20:49 kovshenin joined #gluster
21:13 srsc JoeJulian: i've run the full...heal a few times, but the unresolved gfids are still present
21:14 srsc also, looking for a few of them in BRICK/.gluster, it looks like some of the gfid files are 0 bytes. does that mean anything?
21:14 theron joined #gluster
21:21 harish_ joined #gluster
21:22 srsc yeah, every gfid that won't resolve via heal..full is 0 bytes in BRICK/.gluster. can i just delete those 0 byte gfid files?
21:22 srsc that would bring the split brain files down to a number manageable with glusterfs-splitbrain
21:31 JoeJulian srsc: probably. Have you looked at the xattrs?
21:51 Gill_ joined #gluster
21:56 srsc JoeJulian: the 0 byte gfid files do have standard looking xattrs: http://www.fpaste.org/215334/99125591/
21:58 Gill joined #gluster
21:58 JoeJulian srsc: if they all have "trusted.glusterfs.dht.linkto" attributes, they can be safely deleted.
22:00 srsc JoeJulian: excellent, i'll script something up. thanks for your help!
22:29 srsc JoeJulian: i should have asked earlier, but...is it ok to delete *ALL* files under .glusterfs with size 0 and the trusted.glusterfs.dht.linkto xattr? or just those that show up in the heal..info split-brain output?
22:41 JoeJulian yes
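
A minimal sketch of the kind of script srsc mentions, based only on JoeJulian's rule above (zero bytes plus the trusted.glusterfs.dht.linkto xattr); it prints candidates rather than deleting anything, and the brick path is a placeholder.

    BRICK=/export/brick1                               # placeholder brick path
    find "$BRICK/.glusterfs" -type f -size 0 | while read -r f; do
        # keep only files that actually carry the dht linkto xattr
        if getfattr -n trusted.glusterfs.dht.linkto "$f" >/dev/null 2>&1; then
            echo "$f"                                  # candidate for removal after review
        fi
    done
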
22:54 Gill joined #gluster
23:03 Gill joined #gluster
23:47 srsc hmm, so even after deleting the 0 byte gfid files from both replicate bricks and triggering a heal, those entries remain in the heal...info split-brain output
23:48 RioS2 joined #gluster
23:48 RioS2 joined #gluster
23:52 srsc i see that the gfid entries still exist in BRICK/.glusterfs/indices/xattrop, and ls -l reports that they have differing numbers of hardlinks on the two bricks (e.g. 84 links vs 50557 links)
23:53 srsc should i delete those too?
23:55 srsc hmm, and one brick has 86 files in BRICK/.glusterfs/indices/xattrop, the other brick has 102490
23:59 jcastill1 joined #gluster
