
IRC log for #gluster, 2014-04-11

| Channels | #gluster index | Today | | Search | Google Search | Plain-Text | summary

All times shown according to UTC.

Time Nick Message
00:03 chirino joined #gluster
00:13 vpshastry joined #gluster
00:15 B21956 joined #gluster
00:15 gdubreui joined #gluster
00:30 gtobon joined #gluster
00:31 gtobon Morning community Gurus.
00:31 gtobon I'm wondering if anyone can help me?
00:32 gtobon I have the following issue.
00:33 gtobon gluster volume heal  gv0_shares info
00:33 gtobon Gathering Heal info on volume gv0_shares has been successful
00:33 gtobon Brick 10.50.50.220:/shares
00:33 gtobon Number of entries: 7
00:33 gtobon I ran volume heal full
00:34 gtobon But still having 7 unsynced entries
00:35 gtobon How can I fix the files? I have read the documentation.
00:35 gtobon I'm Using Gluster 3.3
00:36 gtobon I'm unable to find any solution to this issue
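The diagnostic sequence for entries that survive a full heal can be sketched as follows. This is a sketch only, reusing the volume name from the log; it assumes a live Gluster 3.3 cluster and cannot run standalone:

```shell
# List the entries still pending heal
gluster volume heal gv0_shares info

# Check whether any are actually split-brain rather than just unsynced
gluster volume heal gv0_shares info split-brain
gluster volume heal gv0_shares info heal-failed

# Trigger another full self-heal crawl
gluster volume heal gv0_shares full

# Watch the self-heal daemon log for errors on the stuck entries
tail -f /var/log/glusterfs/glustershd.log
```

If `info split-brain` lists the same entries, they will not heal automatically and one copy has to be chosen by hand.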
00:41 jmarley joined #gluster
00:45 davinder joined #gluster
00:46 gtobon .
01:13 semiosis file a bug
01:13 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
01:13 semiosis yep
01:18 harish joined #gluster
01:23 glusterbot New news from newglusterbugs: [Bug 1086460] Ubuntu code audit results (blocking inclusion in Ubuntu Main repo) <https://bugzilla.redhat.com/show_bug.cgi?id=1086460>
01:32 chirino joined #gluster
01:45 vpshastry joined #gluster
01:45 MacWinner joined #gluster
01:48 harish joined #gluster
01:52 diegows joined #gluster
01:52 gdubreui joined #gluster
02:02 baojg joined #gluster
02:04 brodiem joined #gluster
02:14 brodiem left #gluster
02:39 ceiphas__ joined #gluster
02:40 badone joined #gluster
02:40 dbruhn joined #gluster
02:41 vpshastry joined #gluster
02:53 glusterbot New news from newglusterbugs: [Bug 1086493] [RFE] - Add a default snapshot name when creating a snap <https://bugzilla.redhat.com/show_bug.cgi?id=1086493> || [Bug 1086497] [RFE] - Upon snaprestore, immediately take a snapshot to provide recovery point <https://bugzilla.redhat.com/show_bug.cgi?id=1086497>
03:00 haomaiwang joined #gluster
03:00 haomaiwa_ joined #gluster
03:01 nightwalk joined #gluster
03:02 vpshastry joined #gluster
03:07 bharata-rao joined #gluster
03:10 badone joined #gluster
03:28 Durzo joined #gluster
03:32 Licenser joined #gluster
03:32 Licenser Hi guys :)
03:42 jruggier2 joined #gluster
03:45 itisravi joined #gluster
03:48 dbruhn Hi
03:48 glusterbot dbruhn: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
03:54 ceez joined #gluster
04:04 deepakcs joined #gluster
04:06 hchiramm__ joined #gluster
04:07 Durzo joined #gluster
04:12 spandit joined #gluster
04:13 shubhendu joined #gluster
04:15 pk joined #gluster
04:17 atinm joined #gluster
04:22 Ark joined #gluster
04:25 baojg__ joined #gluster
04:31 pk joined #gluster
04:34 ndarshan joined #gluster
04:38 chirino joined #gluster
04:38 ultrabizweb joined #gluster
04:41 kdhananjay joined #gluster
04:43 dusmant joined #gluster
04:43 pk left #gluster
04:44 sputnik13 joined #gluster
04:45 liquidity joined #gluster
04:45 liquidity Hi there Gluster!
04:46 liquidity I've been trying to create a glusterfs in docker these past 2 days and I'm banging my head against the wall, I can't get it to work
04:46 liquidity I can install the server, the client, issue mount -t glusterfs ip:/media /mnt, I can see the mount when I type "mount"
04:47 liquidity but then if I do: ls /mnt or touch /mnt/hello
04:47 liquidity I get "cannot access /mnt: Transport endpoint is not connected"
04:48 liquidity in my logs I can see :
04:48 liquidity [2014-04-11 04:35:39.391055] E [socket.c:1685:socket_connect_finish] 0-media-client-0: connection to  failed (Connection refused)
04:48 liquidity [2014-04-11 04:35:39.394050] I [fuse-bridge.c:3461:fuse_graph_setup] 0-fuse: switched to graph 0
04:48 liquidity [2014-04-11 04:35:39.394200] I [fuse-bridge.c:3049:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel 7.22
04:48 liquidity [2014-04-11 04:35:44.56056] W [fuse-bridge.c:466:fuse_attr_cbk] 0-glusterfs-fuse: 2: LOOKUP() / => -1 (Transport endpoint is not connected)
04:48 liquidity I'm stuck.
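The "connection to  failed (Connection refused)" line from the client log usually means a port the client needs is unreachable, which is a common failure mode inside Docker where ports must be published or shared explicitly. A minimal bash-only probe (using `/dev/tcp`) is sketched below; the port numbers are assumptions about typical defaults: 24007 for glusterd (management) and 24009+ (Gluster 3.3) or 49152+ (3.4 and later) for the brick processes, all of which must be reachable from the client container:

```shell
#!/usr/bin/env bash
# Bash-only TCP reachability probe; no netcat required.
probe() {
  local host=$1 port=$2
  # /dev/tcp/<host>/<port> succeeds only if the TCP connect succeeds
  if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port open"
  else
    echo "$host:$port closed"
  fi
}

probe 127.0.0.1 24007   # glusterd (management port)
probe 127.0.0.1 49152   # first brick port on 3.4+
```

Run it from inside the client container against the server's address; a "closed" on the brick port with an "open" on 24007 matches the symptom in the log above.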
04:49 kanagaraj joined #gluster
04:49 liquidity Has anybody ever used gluster in docker?
04:52 baojg joined #gluster
04:57 rastar joined #gluster
05:01 ravindran1 joined #gluster
05:05 itisravi liquidity: You could try GAnt, a work-in-progress by kshlm at https://forge.gluster.org/gant
05:06 glusterbot Title: GAnt - Gluster Community Forge (at forge.gluster.org)
05:07 liquidity itisravi: thanks! I'll check it out right now
05:08 itisravi liquidity: okay :)  I haven't tried it out myself yet though.
05:08 vpshastry joined #gluster
05:09 liquidity itisravi: I just realized that there are images for gluster in the docker registry
05:09 liquidity I'll check these out. Thanks to you I found them ;)
05:10 benjamin_____ joined #gluster
05:11 raghu joined #gluster
05:11 prasanth_ joined #gluster
05:14 meghanam_ joined #gluster
05:17 Philambdo joined #gluster
05:17 meghanam joined #gluster
05:22 saurabh joined #gluster
05:22 sputnik13 joined #gluster
05:23 ppai joined #gluster
05:29 bala joined #gluster
05:43 shylesh__ joined #gluster
05:44 Ark joined #gluster
05:51 prasanth_ joined #gluster
05:51 sahina joined #gluster
05:56 nishanth joined #gluster
05:58 klaas joined #gluster
06:06 lalatenduM joined #gluster
06:16 nishanth joined #gluster
06:16 nthomas joined #gluster
06:16 ndarshan joined #gluster
06:17 rahulcs joined #gluster
06:21 deepakcs joined #gluster
06:22 psharma joined #gluster
06:24 jtux joined #gluster
06:28 sputnik13 joined #gluster
06:30 zorgan joined #gluster
06:32 rahulcs joined #gluster
06:33 vimal joined #gluster
06:33 sticky_afk joined #gluster
06:34 stickyboy joined #gluster
06:44 ksingh1 joined #gluster
06:45 RameshN joined #gluster
06:46 rahulcs joined #gluster
06:49 ksingh1 left #gluster
06:57 nshaikh joined #gluster
06:58 eseyman joined #gluster
07:01 ctria joined #gluster
07:04 ekuric joined #gluster
07:06 rahulcs joined #gluster
07:07 Pavid7 joined #gluster
07:10 baojg joined #gluster
07:14 haomaiwa_ joined #gluster
07:14 sputnik13 joined #gluster
07:15 keytab joined #gluster
07:15 hybrid512 joined #gluster
07:18 deepakcs joined #gluster
07:19 hybrid512 joined #gluster
07:27 Durzo joined #gluster
07:28 rgustafs joined #gluster
07:30 baojg joined #gluster
07:44 keytab joined #gluster
07:46 fsimonce joined #gluster
07:50 andreask joined #gluster
07:55 Andyy2 mixing gluster 3.4.2 nodes with gluster 3.4.3 nodes: Is this a problem? It seems that 3.4.3 is only bugfixes. But are they compatible between versions?
07:57 rgustafs joined #gluster
07:57 Pavid7 joined #gluster
08:01 hybrid512 joined #gluster
08:01 ndevos Andyy2: those should be compatible, at least for all I know
08:01 hybrid512 joined #gluster
08:06 prasanth_ joined #gluster
08:07 knfbny joined #gluster
08:11 chirino joined #gluster
08:17 nishanth joined #gluster
08:18 nishanth joined #gluster
08:19 rgustafs_ joined #gluster
08:20 Andyy2 ndevos: thanks
08:21 dusmant joined #gluster
08:39 liquidat joined #gluster
08:40 ngoswami joined #gluster
08:43 raghu joined #gluster
08:45 Durzo joined #gluster
08:48 haomai___ joined #gluster
09:04 nishanth joined #gluster
09:09 ravindran1 joined #gluster
09:14 Durzo joined #gluster
09:17 rahulcs joined #gluster
09:20 baojg joined #gluster
09:46 nishanth joined #gluster
10:13 chirino joined #gluster
10:18 rahulcs joined #gluster
10:38 elico joined #gluster
10:39 tdasilva left #gluster
10:41 baojg_ joined #gluster
10:46 baojg joined #gluster
10:54 glusterbot New news from newglusterbugs: [Bug 1049981] 3.5.0 Tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1049981>
10:56 rahulcs joined #gluster
10:57 baojg joined #gluster
11:00 RameshN joined #gluster
11:04 prasanth_ joined #gluster
11:09 andreask joined #gluster
11:10 prasanth_ joined #gluster
11:18 shubhendu joined #gluster
11:25 diegows joined #gluster
11:27 jiku joined #gluster
11:31 nishanth joined #gluster
11:38 ira_ joined #gluster
11:39 ira_ joined #gluster
11:44 chirino joined #gluster
11:48 DV joined #gluster
11:52 rahulcs joined #gluster
11:54 Ark joined #gluster
11:58 itisravi_ joined #gluster
12:12 delhage joined #gluster
12:13 delhage joined #gluster
12:14 RameshN joined #gluster
12:16 delhage joined #gluster
12:17 rpowell joined #gluster
12:19 Pavid7 joined #gluster
12:20 delhage joined #gluster
12:23 rahulcs joined #gluster
12:25 glusterbot New news from newglusterbugs: [Bug 1086747] Add documentation for the Feature: Distributed Geo-Replication <https://bugzilla.redhat.com/show_bug.cgi?id=1086747> || [Bug 1086743] Add documentation for the Feature: RDMA-connection manager (RDMA-CM) <https://bugzilla.redhat.com/show_bug.cgi?id=1086743> || [Bug 1086745] Add documentation for the Feature: Support for NUFA translator <https://bugzilla.redhat.com/show_bug.cgi?id=
12:26 RobertLaptop joined #gluster
12:27 andreask joined #gluster
12:27 harish joined #gluster
12:34 williamj_ joined #gluster
12:35 williamj_ joined #gluster
12:35 nishanth joined #gluster
12:37 williamj_ Hi, got a distributed / replicated problem, sounds like split-brain but I think it is not. On brick 1 I have the file in perfect order but on brick 2 I have the file with zero bytes.
12:37 williamj_ Any  help?
12:39 williamj__ joined #gluster
12:40 williamj_ joined #gluster
12:46 shubhendu joined #gluster
12:48 benjamin_____ joined #gluster
12:50 John_HPC joined #gluster
12:51 williamj_ joined #gluster
12:51 Pavid7 joined #gluster
12:55 glusterbot New news from newglusterbugs: [Bug 1086762] Add documentation for the Feature: BD Xlator - Block Device translator <https://bugzilla.redhat.com/show_bug.cgi?id=1086762> || [Bug 1086764] Add documentation for the Feature: Duplicate Request Cache (DRC) <https://bugzilla.redhat.com/show_bug.cgi?id=1086764> || [Bug 1086765] Add documentation for the Feature: Server-Quorum <https://bugzilla.redhat.com/show_bug.cgi?id=1086765>
13:01 rpowell left #gluster
13:08 pdrakeweb joined #gluster
13:12 mgarcesMZ joined #gluster
13:12 mgarcesMZ hi there
13:13 mgarcesMZ I have a small question... can I have a GlusterFS with 2+1 nodes, in which the +1 is async with the others? What I need is not to delay the writes because of this node
13:15 rahulcs joined #gluster
13:17 bennyturns joined #gluster
13:17 kkeithley mgarcesMZ: you can use geo-replication to async replicate to the +1 node.
13:18 mgarcesMZ kkeithley: really? that is exactly what I need, to async to a offsite, but the link is slow. Can you point me to the docs where I can learn more?
13:18 japuzzo joined #gluster
13:20 williamj_ hi link : http://www.gluster.org/community/documentation/index.php/HowTo:geo-replication
13:20 glusterbot Title: HowTo:geo-replication - GlusterDocumentation (at www.gluster.org)
13:25 glusterbot New news from newglusterbugs: [Bug 1086774] Add documentation for the Feature: Access Control List - Version 3 support for Gluster NFS <https://bugzilla.redhat.com/show_bug.cgi?id=1086774> || [Bug 1086781] Add documentation for the Feature: Eager locking <https://bugzilla.redhat.com/show_bug.cgi?id=1086781> || [Bug 1086782] Add documentation for the Feature: oVirt 3.2 integration <https://bugzilla.redhat.com/show_bug.cgi?i
13:26 plarsen joined #gluster
13:27 mgarcesMZ williamj_: thank you
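The geo-replication setup kkeithley suggests can be sketched as below. This is a sketch only, with hypothetical names (master volume "gv0", offsite host "offsite", slave volume "gv0-dr"); it assumes a Gluster 3.4-era setup with passwordless SSH from a master node to the slave, and it requires a live cluster:

```shell
# Start async replication from the local volume to the remote slave volume;
# writes on the master are not delayed by the slow offsite link
gluster volume geo-replication gv0 offsite::gv0-dr start

# Check the session state and how far behind the slave is
gluster volume geo-replication gv0 offsite::gv0-dr status
```

Exact syntax and prerequisites should be checked against the admin guide linked above for the version in use.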
13:31 lpabon joined #gluster
13:34 rahulcs joined #gluster
13:36 theron joined #gluster
13:37 pdrakeweb joined #gluster
13:40 dbruhn joined #gluster
13:41 kkeithley @geo-replication
13:41 glusterbot kkeithley: See the documentation at http://download.gluster.org/pub/gluster/glusterfs/3.2/Documentation/AG/html/chap-Administration_Guide-Geo_Rep.html
13:42 chirino joined #gluster
13:50 B21956 joined #gluster
13:53 B21956 joined #gluster
13:55 glusterbot New news from newglusterbugs: [Bug 1086796] Add documentation for the Feature: Distributed Geo-Replication <https://bugzilla.redhat.com/show_bug.cgi?id=1086796>
13:59 rahulcs joined #gluster
13:59 sijis social: oh no worries. it just ended up being a hosts file typo.
14:03 Pavid7 joined #gluster
14:09 wushudoin joined #gluster
14:11 baojg joined #gluster
14:13 tdasilva joined #gluster
14:14 gdubreui joined #gluster
14:18 chirino joined #gluster
14:29 user_42 joined #gluster
14:30 jobewan joined #gluster
14:35 mgarcesMZ how can I create a volume, which will be distributed (replica), but for now I have only one node... I will add the extra node for replica later
14:38 mgarcesMZ i want to prepare everything on one server, and later add a second node, which might be geo-located also
14:39 systemonkey joined #gluster
14:40 vipulnayyar joined #gluster
14:41 pjschmitt joined #gluster
14:42 ms77 joined #gluster
14:43 ms77 gluster volume create warns not to create bricks on the root partition. what would be the implications of not heeding this warning?
14:43 wushudoin left #gluster
14:44 ms77 all I can find is that you'd better not, or how to force it in script mode etc., but what's the reason / what are the dangers
14:45 dbruhn ms77, If that volume ever becomes full there are all sorts of things that the system will do and it will stop working
14:45 dbruhn the config files are dynamic, and will corrupt
14:46 dbruhn the log files filling up a partition stop the system
14:46 dbruhn stuff like that
14:46 kkeithley mgarces: In gluster, distributed has a specific meaning. So does replica.  Start by creating the volume with a single server+brick. Later add another server+brick for ordinary replication (local).   Later on add your remote DR gluster server and use geo-replication to replicate from your local gluster to the remote gluster.
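The single-server-now, replica-later path kkeithley describes can be sketched as below. This is a sketch with hypothetical names ("myvol", "server1", "server2") and assumes a live cluster:

```shell
# Start with a plain single-brick volume on the first server
gluster volume create myvol server1:/bricks/brick1/storage
gluster volume start myvol

# Later, turn it into a 2-way replica by probing the new peer and
# adding its brick with the new replica count
gluster peer probe server2
gluster volume add-brick myvol replica 2 server2:/bricks/brick1/storage
```

After the add-brick, a self-heal populates the new brick from the existing one; the remote DR site is then attached with geo-replication rather than a third replica.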
14:46 dbruhn and it's just generally considered bad for
14:46 dbruhn s/for/form/
14:46 glusterbot What dbruhn meant to say was: and it's just generally considered bad form
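The danger dbruhn describes boils down to the brick sharing a filesystem with logs and configs: a full brick then also fills `/`. A minimal pre-flight check is sketched below; it assumes GNU `stat` (the `%m` format prints the mount point of the filesystem containing a path), and `/tmp` stands in for a proposed brick path:

```shell
#!/usr/bin/env bash
# Warn when a proposed brick path lives on the root filesystem.
brick_on_root() {
  # %m: mount point of the filesystem containing the given path (GNU stat)
  [ "$(stat -c %m "$1")" = "/" ]
}

echo "mount point of /tmp: $(stat -c %m /tmp)"
if brick_on_root /tmp; then
  echo "WARNING: /tmp sits on the root filesystem - use a dedicated brick mount"
fi
```

The same check, pointed at the intended brick directory, catches the mistake before `gluster volume create` does.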
14:47 ms77 dbruhn: thanks!
14:48 sjoeboo joined #gluster
14:49 kkeithley ms77: not only do we recommend not on the root partition, but also not on the root or top directory of the brick file system. E.g. if your brick is mounted at /bricks/brick1, then `mkdir /bricks/brick1/storage` and create your volume with `gluster volume create $myvol $myserver:/bricks/brick1/storage`
14:51 mgarcesMZ kkeithley: will do, trying to figure this out on the docs
14:51 mgarcesMZ I was able to create the volume (used a mount point, not a folder inside the mountpoint) but now I can't seem to mount it
14:52 mgarcesMZ I will test it with a folder inside the mountpoint
14:57 LoudNoises joined #gluster
14:58 Pavid7 joined #gluster
15:09 ctria joined #gluster
15:09 John_HPC left #gluster
15:14 benjamin_ joined #gluster
15:17 gmcwhistler joined #gluster
15:24 tziOm joined #gluster
15:28 bennyturns joined #gluster
15:37 daMaestro joined #gluster
15:42 rwheeler joined #gluster
15:48 chirino joined #gluster
15:56 Slash joined #gluster
15:58 Slashman joined #gluster
15:58 Slash joined #gluster
16:00 Slashman hello, I'm trying to replace a failed server on a distributed replicated setup with glusterfs 3.4.3 but the process is not documented when the new server doesn't have the same hostname as the old one (http://gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server), any help on how to do this ?
16:00 glusterbot Title: Gluster 3.4: Brick Restoration - Replace Crashed Server - GlusterDocumentation (at gluster.org)
16:03 Slashman not that smart this bot...
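One commonly-used route for Slashman's case on 3.4.x can be sketched as below; this is a sketch only, with hypothetical names ("myvol", "oldhost", "newhost"), it assumes a live cluster, and it should be verified against the documentation before use on real data:

```shell
# Bring the replacement server into the trusted pool under its new hostname
gluster peer probe newhost

# Swap the dead brick for the new one in place; commit force skips the
# (impossible) data migration from the dead server
gluster volume replace-brick myvol oldhost:/bricks/b1 newhost:/bricks/b1 commit force

# Repopulate the new brick from its surviving replica
gluster volume heal myvol full
```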
16:04 Mo__ joined #gluster
16:21 chirino joined #gluster
16:26 glusterbot New news from newglusterbugs: [Bug 1071800] 3.5.1 Tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1071800>
16:30 gmcwhistler joined #gluster
16:34 gmcwhistler joined #gluster
16:44 JonnyNomad joined #gluster
16:54 vpshastry joined #gluster
17:05 zaitcev joined #gluster
17:05 vpshastry left #gluster
17:05 sputnik13 joined #gluster
17:24 Pavid7 joined #gluster
17:26 user_42 joined #gluster
17:29 liquidity joined #gluster
17:30 brokeasshachi joined #gluster
17:38 asku left #gluster
17:50 chirino joined #gluster
17:51 lmickh joined #gluster
17:53 Matthaeus joined #gluster
17:53 zerick joined #gluster
17:54 saravanakumar1 joined #gluster
18:19 _dist joined #gluster
18:35 lalatenduM joined #gluster
18:41 sputnik13 noob question...  when I remove a brick from a replicated/striped set, will the data be rebalanced automatically?
18:44 lalatenduM sputnik13, yes, only if you do remove-brick <bricks> start, status, commit
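The start/status/commit sequence lalatenduM describes can be sketched as below, with hypothetical names; data on the removed brick is migrated off before the commit (requires a live cluster):

```shell
# Begin migrating data off the brick being removed
gluster volume remove-brick myvol server2:/bricks/b1 start

# Poll until the rebalance/migration reports completed
gluster volume remove-brick myvol server2:/bricks/b1 status

# Only then finalize the removal
gluster volume remove-brick myvol server2:/bricks/b1 commit
```

Skipping straight to `commit` (or using `force`) drops the brick without migration and can lose data on distributed volumes.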
18:53 Guest11738 joined #gluster
18:54 Guest11738 Hi I am trying to recover a crashed node and rebuild all the bricks
18:54 Guest11738 and the heal process stuck on a few files that been writing during the crash
18:54 Guest11738 Self Heal:  4/ 4   Heal backlog of 2 files
18:54 Guest11738 anyone experienced the same issue?
18:55 Guest11738 i am running 3.4.3
18:56 Guest11738 nfs.log:[2014-04-11 17:09:38.911558] I [afr-self-heal-data.c:655:afr_sh_data_fix] 0-gfsr2s1-replicate-1: no active sinks for performing self-heal on file /test4/test.peter7
18:56 Guest11738 Brick glusterdev003:/data/gfsr2s1/gfs
18:56 Guest11738 Number of entries: 1
18:56 Guest11738 /test4/test.peter7
18:56 Guest11738 Brick glusterdev004:/data/gfsr2s1/gfs
18:56 Guest11738 Number of entries: 1
18:56 Guest11738 /test4/test.peter7
19:04 Guest11738 Thanks for looking into this :)
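For a "no active sinks" message like the one in Guest11738's nfs.log, the usual next step is to inspect the AFR changelog xattrs of the stuck file on both bricks to see which copy each side blames. A sketch, run on the servers against the brick paths from the log (not the client mount):

```shell
# Dump all extended attributes of the file copy on each brick, in hex;
# look at the trusted.afr.* keys on both sides
getfattr -d -m . -e hex /data/gfsr2s1/gfs/test4/test.peter7
```

When both copies accuse each other, self-heal has no clean sink and cannot proceed on its own; one copy has to be chosen and the other's accusing xattr cleared (or the bad copy plus its gfid link removed) per the split-brain recovery documentation.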
19:24 Guest11738 alias peter
19:26 brokeasshachi joined #gluster
19:51 dbruhn joined #gluster
20:34 sijis left #gluster
20:39 zerick joined #gluster
20:43 MeatMuppet joined #gluster
20:43 MeatMuppet Matthaeus: Thx for the hand last evening.
20:48 Matthaeus Umm...sure!
21:02 andreask joined #gluster
21:06 cyberbootje joined #gluster
21:21 gmcwhistler joined #gluster
21:24 chirino joined #gluster
21:34 chirino joined #gluster
22:00 brokeasshachi joined #gluster
22:05 bennyturns joined #gluster
22:25 Matthaeus joined #gluster
22:30 gdubreui joined #gluster
22:31 Ark joined #gluster
22:32 gdubreui joined #gluster
22:55 chirino joined #gluster
22:59 gmcwhistler joined #gluster
23:26 hagarth joined #gluster
