
IRC log for #gluster, 2015-05-09


All times shown according to UTC.

Time Nick Message
00:07 dgandhi joined #gluster
00:22 micneon joined #gluster
00:22 micneon hi
00:22 glusterbot micneon: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
00:24 micneon can i change a glusterfs volume with some bricks into a glusterfs volume with those bricks and replication to a brick on another server?
00:26 joseki joined #gluster
00:27 joseki performing my first rebalance on a distributed volume!
00:27 joseki added a new brick, now rebalancing... both firsts
00:36 plarsen joined #gluster
00:48 joseki i was expecting all my files to stay available during the rebalance (this is a distributed 2-node setup), but when I connect from a client, not all my files are there (yet). is this as designed?
00:49 joseki seeing lots of activity in the rebalance status, counters are steadily climbing up
00:55 micneon joseki you didn't use replication ?
00:55 joseki no, just distributed
00:56 micneon then the files are split
00:56 micneon some on node 1, some on node 2
00:56 joseki right. but i assumed during this process, everything would show up under my mount
00:57 micneon how did you mount it ?
00:57 joseki native gluster
00:58 joseki i just umounted it for the time being, but i peeked and yeah, not everything is there
01:01 micneon the gluster mount should look complete, and without the gluster mount you have some files on node1 and some on node2, that should be right
01:02 joseki yeah, but the gluster mount isn't complete... let me check again
01:03 joseki yeah, gluster mount is not showing everything. I can see files without the gluster mount on one brick that are not on the mount.
01:07 joseki strange. i did a ls /dir for some directory that wasn't there and it returned back just fine
01:07 micneon did you copy the files into that dir via the gluster mount or directly into the brick dir ?
01:08 joseki i didn't copy it
01:08 joseki so, i mounted via glusterfs and did "ls"
01:08 joseki it showed some of the dirs
01:09 joseki but when i did "ls /foo", even though /foo wasn't in the above output, it worked
01:09 joseki very odd, but i guess things are there
01:09 micneon how did you put the files onto your bricks ?
01:10 micneon the ones you're missing
01:10 joseki i started off with a single brick
01:10 joseki and the second brick was completely empty
01:10 joseki i just added the second brick and started a rebalance
01:10 micneon and then you added a brick on the other server to that volume
01:11 joseki yes
01:11 micneon gluster peer status ?
01:12 joseki yeah, each host shows a single peer - which is the other host
01:12 micneon i have 8 bricks at one node
01:13 micneon is the rebalancing done or in progress
01:13 joseki in progress
01:13 joseki from the new empty node, obviously "rebalanced-files" is 0
01:13 joseki but from the old node, lots of "rebalanced-files"
01:14 micneon i don't know, but wait for it to complete, maybe then you'll have everything
01:14 joseki yeah, just unexpected
01:14 joseki is it okay to have clients connect during the rebalance?
01:14 micneon right
01:14 joseki ok, good. i will proceed to add my clients back then
01:15 micneon i think it's okay
01:15 micneon but i am not a pro with glusterfs, i just have one simple installation
01:15 micneon but when i rebalance, all files are there
01:17 micneon only when i remove a brick without rebalancing do i lose files
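
For reference, the add-brick and rebalance workflow joseki describes maps onto the gluster CLI roughly as follows. This is a minimal sketch; the volume name "myvol" and the brick paths are placeholders, not names taken from the log:

    # add a second, initially empty brick on the new peer to the distributed volume
    gluster volume add-brick myvol server2:/data/brick1

    # start migrating existing files onto the new brick
    gluster volume rebalance myvol start

    # check progress; the "rebalanced-files" counter climbs on the node
    # that files are being moved away from, and stays 0 on the empty node
    gluster volume rebalance myvol status

Rebalance is an online operation, so clients can stay mounted while it runs, as the conversation above concludes.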
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:17 Peppard joined #gluster
02:36 nangthang joined #gluster
02:37 Peppaq joined #gluster
02:38 kshlm joined #gluster
03:09 badone_ joined #gluster
03:19 kripper joined #gluster
03:19 kripper JoeJulian: hi
03:19 kripper what's the status of geo-replication for live VMs?
03:27 kshlm joined #gluster
03:33 JustinClift joined #gluster
03:40 kotreshhr joined #gluster
03:49 glusterbot News from newglusterbugs: [Bug 1211594] status.brick memory allocation failure. <https://bugzilla.redhat.com/show_bug.cgi?id=1211594>
03:51 TheSeven joined #gluster
03:56 gnudna joined #gluster
04:17 RameshN joined #gluster
04:18 kshlm joined #gluster
04:18 kshlm joined #gluster
04:25 atalur joined #gluster
04:32 atinmu joined #gluster
04:32 XpineX joined #gluster
04:33 schandra joined #gluster
04:40 gnudna left #gluster
04:42 anoopcs joined #gluster
04:52 anoopcs joined #gluster
04:53 schandra joined #gluster
05:03 atalur joined #gluster
05:12 anrao joined #gluster
05:13 jermudgeon joined #gluster
05:14 rjoseph|afk joined #gluster
05:15 gem joined #gluster
05:20 glusterbot News from newglusterbugs: [Bug 1220011] Force replace-brick lead to the persistent write(use dd) return Input/output error <https://bugzilla.redhat.com/show_bug.cgi?id=1220011>
05:27 anoopcs joined #gluster
05:34 anoopcs joined #gluster
05:51 harish_ joined #gluster
06:03 kripper left #gluster
06:04 nishanth joined #gluster
06:20 atinmu joined #gluster
06:26 anrao joined #gluster
06:30 kotreshhr joined #gluster
06:30 ashiq joined #gluster
06:38 atinmu joined #gluster
06:44 hchiramm joined #gluster
06:58 kshlm joined #gluster
06:58 schandra joined #gluster
07:06 PaulCuzner joined #gluster
07:09 hagarth joined #gluster
07:33 LebedevRI joined #gluster
07:46 PaulCuzner joined #gluster
07:46 kovshenin joined #gluster
07:50 glusterbot News from newglusterbugs: [Bug 1220021] bitrot testcases fail spuriously <https://bugzilla.redhat.com/show_bug.cgi?id=1220021>
07:50 glusterbot News from newglusterbugs: [Bug 1220020] status.brick memory allocation failure. <https://bugzilla.redhat.com/show_bug.cgi?id=1220020>
07:51 PaulCuzner joined #gluster
07:57 PaulCuzner joined #gluster
08:12 ghenry joined #gluster
08:20 glusterbot News from newglusterbugs: [Bug 1220022] package glupy as a subpackage under gluster namespace. <https://bugzilla.redhat.com/show_bug.cgi?id=1220022>
08:23 kovsheni_ joined #gluster
08:37 anrao joined #gluster
08:44 DV joined #gluster
08:44 kovshenin joined #gluster
08:45 rafi joined #gluster
08:48 kovshenin joined #gluster
09:14 atinmu joined #gluster
09:19 DV joined #gluster
09:25 gem joined #gluster
09:36 kripper1 joined #gluster
09:48 gem joined #gluster
09:51 kovsheni_ joined #gluster
09:53 kovshenin joined #gluster
09:58 micneon can i change a glusterfs volume with some bricks in distributed mode into a glusterfs volume with those bricks and replication to a brick on another server, without losing data?
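
The question goes unanswered in the log, but gluster can raise the replica count of an existing volume while adding bricks. A sketch, assuming a distributed volume "myvol" whose data should also be mirrored onto a second server; the volume and brick names are placeholders:

    # pair each existing brick with a new brick on the other server; going from
    # plain distribute to replica 2 needs one new brick per existing brick
    gluster volume add-brick myvol replica 2 server2:/data/brick1

    # populate the new replica bricks with the existing data via self-heal
    gluster volume heal myvol full

Done this way the existing bricks keep their data and the new bricks receive copies, so nothing is moved or removed.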
10:00 Debloper joined #gluster
10:02 kovsheni_ joined #gluster
10:05 kovshenin joined #gluster
10:10 anrao joined #gluster
10:15 kovshenin joined #gluster
10:18 kovsheni_ joined #gluster
10:22 kovshenin joined #gluster
10:29 kovsheni_ joined #gluster
10:33 kovshenin joined #gluster
10:36 Slashman joined #gluster
10:38 kovshenin joined #gluster
10:42 anrao_ joined #gluster
10:51 glusterbot News from newglusterbugs: [Bug 1220031] glusterfs-cli should depend on the glusterfs package <https://bugzilla.redhat.com/show_bug.cgi?id=1220031>
10:54 kovshenin joined #gluster
11:06 msvbhat joined #gluster
11:07 anrao joined #gluster
11:07 anrao_ joined #gluster
11:20 rafi1 joined #gluster
11:20 kovsheni_ joined #gluster
11:26 kovshenin joined #gluster
11:26 fattaneh joined #gluster
11:30 fattaneh joined #gluster
11:30 kovsheni_ joined #gluster
11:30 pdrakeweb joined #gluster
11:31 fattaneh left #gluster
11:32 kaushal_ joined #gluster
11:40 MF1 joined #gluster
11:44 kripper joined #gluster
11:49 kshlm joined #gluster
12:03 ira joined #gluster
12:06 kovshenin joined #gluster
12:10 atinmu joined #gluster
12:12 kovsheni_ joined #gluster
12:30 hchiramm_home joined #gluster
12:33 kovshenin joined #gluster
12:43 MF1 left #gluster
12:52 glusterbot News from newglusterbugs: [Bug 1220041] timer wheel and throttling in bitrot <https://bugzilla.redhat.com/show_bug.cgi?id=1220041>
12:59 Pupeno joined #gluster
13:05 harish joined #gluster
13:06 Twistedgrim joined #gluster
13:09 ron-slc joined #gluster
13:11 rjoseph|afk joined #gluster
13:18 kovshenin joined #gluster
13:22 glusterbot News from newglusterbugs: [Bug 1220047] Data Tiering:3.7.0:data loss:detach-tier not flushing data to cold-tier <https://bugzilla.redhat.com/show_bug.cgi?id=1220047>
13:22 glusterbot News from newglusterbugs: [Bug 1220050] Data Tiering:UI:when a user looks for detach-tier help, instead command seems to be getting executed <https://bugzilla.redhat.com/show_bug.cgi?id=1220050>
13:22 glusterbot News from newglusterbugs: [Bug 1220051] Data Tiering: Volume inconsistency errors getting logged when attaching uneven(odd) number of hot bricks in hot tier(pure distribute tier layer) to a dist-rep volume <https://bugzilla.redhat.com/show_bug.cgi?id=1220051>
13:22 glusterbot News from newglusterbugs: [Bug 1220052] Data Tiering:UI:changes required to CLI responses for attach and detach tier <https://bugzilla.redhat.com/show_bug.cgi?id=1220052>
13:23 kovshenin joined #gluster
13:31 kovshenin joined #gluster
13:36 kovshenin joined #gluster
13:37 kshlm joined #gluster
13:41 kovshenin joined #gluster
13:45 atrius joined #gluster
13:45 kovshenin joined #gluster
13:50 kovshenin joined #gluster
13:50 Supermathie joined #gluster
14:11 kovshenin joined #gluster
14:14 plarsen joined #gluster
14:38 poornimag joined #gluster
14:45 gem joined #gluster
15:13 DV__ joined #gluster
15:16 kovshenin joined #gluster
15:20 premera joined #gluster
15:20 Twistedgrim joined #gluster
15:38 kripper left #gluster
15:46 kovshenin joined #gluster
15:54 glusterbot News from resolvedglusterbugs: [Bug 1220057] glusterd crashes when brick option validation fails <https://bugzilla.redhat.com/show_bug.cgi?id=1220057>
16:02 ira joined #gluster
16:11 gem joined #gluster
16:16 kovsheni_ joined #gluster
16:22 glusterbot News from newglusterbugs: [Bug 1220059] Disable known bad tests <https://bugzilla.redhat.com/show_bug.cgi?id=1220059>
16:23 kovshenin joined #gluster
16:25 Supermathie joined #gluster
16:27 kovshenin joined #gluster
16:32 kovshenin joined #gluster
16:34 spiekey joined #gluster
16:37 kovshenin joined #gluster
16:42 kovshenin joined #gluster
16:47 kovsheni_ joined #gluster
17:03 hagarth joined #gluster
17:03 kovshenin joined #gluster
17:08 kovshenin joined #gluster
17:14 kovshenin joined #gluster
17:14 uebera|| joined #gluster
17:18 gem joined #gluster
17:19 kovsheni_ joined #gluster
17:24 chirino joined #gluster
17:24 kovshenin joined #gluster
17:24 glusterbot News from resolvedglusterbugs: [Bug 1165143] A restarted child can not clean files/directories which were deleted while down <https://bugzilla.redhat.com/show_bug.cgi?id=1165143>
17:27 kovshenin joined #gluster
17:32 kovsheni_ joined #gluster
17:36 kovshenin joined #gluster
17:40 kovsheni_ joined #gluster
17:50 lalatenduM joined #gluster
17:53 glusterbot News from newglusterbugs: [Bug 1211123] ls command failed with features.read-only on while mounting ec volume. <https://bugzilla.redhat.com/show_bug.cgi?id=1211123>
17:53 kovshenin joined #gluster
17:55 glusterbot News from resolvedglusterbugs: [Bug 1211915] Disperse volume could not mount through NFS when features.read-only on <https://bugzilla.redhat.com/show_bug.cgi?id=1211915>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1210193] Commands hanging on the client post recovery of failed bricks <https://bugzilla.redhat.com/show_bug.cgi?id=1210193>
18:08 kovsheni_ joined #gluster
18:13 kovshenin joined #gluster
18:21 kovsheni_ joined #gluster
18:23 glusterbot News from newglusterbugs: [Bug 1214912] Failure to recover disperse volume after add-brick failure <https://bugzilla.redhat.com/show_bug.cgi?id=1214912>
18:25 glusterbot News from resolvedglusterbugs: [Bug 1209113] Disperse volume: Invalid index errors in readdirp requests <https://bugzilla.redhat.com/show_bug.cgi?id=1209113>
18:25 kovshenin joined #gluster
18:29 kovshenin joined #gluster
18:37 julim joined #gluster
18:38 kovsheni_ joined #gluster
18:41 gem joined #gluster
18:41 kovshenin joined #gluster
18:49 kovshenin joined #gluster
18:52 kovsheni_ joined #gluster
18:55 glusterbot News from resolvedglusterbugs: [Bug 1210137] [HC] qcow2 image creation using qemu-img hits segmentation fault <https://bugzilla.redhat.com/show_bug.cgi?id=1210137>
18:56 kovshenin joined #gluster
18:59 kovsheni_ joined #gluster
19:06 kovshenin joined #gluster
19:09 kovsheni_ joined #gluster
19:13 kovshenin joined #gluster
19:22 JoeJulian @stats
19:22 glusterbot JoeJulian: I have 3 registered users with 0 registered hostmasks; 1 owner and 1 admin.
19:22 kovshenin joined #gluster
19:23 ndevos @register
19:23 glusterbot ndevos: Error: That operation cannot be done in a channel.
19:23 JoeJulian @channelstats
19:23 glusterbot JoeJulian: On #gluster there have been 412709 messages, containing 15595693 characters, 2556189 words, 9050 smileys, and 1276 frowns; 1811 of those messages were ACTIONs.  There have been 192326 joins, 4734 parts, 188007 quits, 29 kicks, 2558 mode changes, and 8 topic changes.  There are currently 218 users and the channel has peaked at 276 users.
19:25 glusterbot News from resolvedglusterbugs: [Bug 1198618] Refactor volume creation for tiering <https://bugzilla.redhat.com/show_bug.cgi?id=1198618>
19:25 glusterbot News from resolvedglusterbugs: [Bug 1206602] Data Tiering: Newly added bricks not getting tier-gfid <https://bugzilla.redhat.com/show_bug.cgi?id=1206602>
19:25 kovshenin joined #gluster
19:29 kovshenin joined #gluster
19:32 kovshenin joined #gluster
19:46 kovsheni_ joined #gluster
19:53 glusterbot News from newglusterbugs: [Bug 1220075] Fix duplicate entires in glupy makefile. <https://bugzilla.redhat.com/show_bug.cgi?id=1220075>
20:10 DV__ joined #gluster
20:12 chirino joined #gluster
20:18 chirino_m joined #gluster
20:23 glusterbot News from newglusterbugs: [Bug 1217722] Tracker bug for Logging framework expansion. <https://bugzilla.redhat.com/show_bug.cgi?id=1217722>
20:25 glusterbot News from resolvedglusterbugs: [Bug 1212368] Data Tiering:Clear tier gfid when detach-tier takes place <https://bugzilla.redhat.com/show_bug.cgi?id=1212368>
20:27 deniszh joined #gluster
20:33 chirino joined #gluster
21:00 kripper joined #gluster
21:08 hagarth joined #gluster
22:01 daMaestro joined #gluster
22:19 d0ugal joined #gluster
22:20 d0ugal left #gluster
22:23 nishanth joined #gluster
22:28 PaulCuzner joined #gluster
22:36 soumya joined #gluster
22:53 micneon joined #gluster
22:53 micneon how can i start a resync of geo-replication? i have started replication but it doesn't sync
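
The question goes unanswered in the log; for context, a geo-replication session is driven through the gluster CLI roughly like this (a sketch; "mastervol", "slavehost" and "slavevol" are placeholder names):

    # check whether the session is active and which crawl state it is in
    gluster volume geo-replication mastervol slavehost::slavevol status

    # stopping and restarting the session is the usual way to make it
    # re-crawl the master volume and catch up
    gluster volume geo-replication mastervol slavehost::slavevol stop
    gluster volume geo-replication mastervol slavehost::slavevol start

If status reports the session as Faulty, the geo-replication log on the master side usually explains why it is not syncing.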
