
IRC log for #gluster, 2014-08-03


All times shown according to UTC.

Time Nick Message
00:08 zerick joined #gluster
00:27 Xanacas_ joined #gluster
00:36 sjm joined #gluster
01:00 qdk joined #gluster
02:04 bala joined #gluster
02:44 JoeJulian caiozanolla: that's normal. They all do that.
02:46 JoeJulian caiozanolla: the volume-id should not be different.
02:46 JoeJulian Xanacas_: 3 bricks.
02:46 JoeJulian ~brick-order | Xanacas_
02:46 glusterbot Xanacas_: I do not know about 'brick-order', but I do know about these similar topics: 'brick order'
02:46 caiozanolla JoeJulian, that is very odd.
02:46 JoeJulian ~brickorder | Xanacas_
02:46 glusterbot Xanacas_: I do not know about 'brickorder', but I do know about these similar topics: 'brick order'
02:46 JoeJulian ~brick order | Xanacas_
02:46 glusterbot Xanacas_: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
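[Annotation: a worked example of the pairing rule glusterbot states above; the hostnames and brick paths are illustrative, not taken from the log.]

    # replica 2: bricks are paired in the order they are listed
    gluster volume create myvol replica 2 \
        server1:/data/brick1 server2:/data/brick1 \
        server3:/data/brick1 server4:/data/brick1
    # replica set 1: server1:/data/brick1 <-> server2:/data/brick1
    # replica set 2: server3:/data/brick1 <-> server4:/data/brick1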
02:46 caiozanolla JoeJulian, gluster is running like this on the 1st node
02:47 caiozanolla JoeJulian, maybe the "gluster volume sync server1 all" exported wrong info back to the working node?
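[Annotation: for reference, the sync command caiozanolla mentions copies volume configuration from a named peer; a minimal sketch, with the peer hostname and volume name illustrative:]

    # pull volume definitions for all volumes from peer server1
    gluster volume sync server1 all
    # or sync just one volume
    gluster volume sync server1 myvol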
02:47 JoeJulian caiozanolla: I'm just saying they all do the "Unknown key" thing. It's normal, though it probably should be filed as a bug; I don't know if anyone's reported it. It may be fixed in 3.5 but I haven't checked.
02:48 JoeJulian But the volume-id is going to need to be set manually. That's a "feature".
02:48 caiozanolla JoeJulian, ok. but the other node is not joining
02:50 caiozanolla JoeJulian, there are 2 places for the volume-id to appear: /mnt/brick/data (it was built with a data folder as the brick instead of the whole drive), and the /var/lib/glusterd/vols/$vol/info file, correct?
02:51 caiozanolla all of my bricks show the same volume-id, which is different from the volume-id shown in /var/lib/glusterd/vols/$vol/info
02:51 caiozanolla here, I'll paste some info:
02:51 JoeJulian The /var/lib/glusterd/vols/$vol/info file should have been synced from another server. The brick would match that if you dumped it in hex, "-e hex".
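[Annotation: a sketch of the comparison JoeJulian describes, assuming a brick root at /mnt/brick/data as mentioned above; the volume-id lives in an extended attribute on the brick and in the volume's info file, and the two hex values should match.]

    # volume-id stored as an xattr on the brick root, dumped in hex
    getfattr -n trusted.glusterfs.volume-id -e hex /mnt/brick/data
    # volume-id recorded in glusterd's view of the volume
    grep volume-id /var/lib/glusterd/vols/$vol/info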
02:55 caiozanolla JoeJulian, here is info from the working node… http://pastie.org/9440997
02:55 glusterbot Title: #9440997 - Pastie (at pastie.org)
02:56 caiozanolla JoeJulian, here is brick info from the "non functional node"… http://pastie.org/9440999
02:56 glusterbot Title: #9440999 - Pastie (at pastie.org)
02:58 caiozanolla JoeJulian, and here is the glusterd log from the non working node… http://pastie.org/9441001
02:58 glusterbot Title: #9441001 - Pastie (at pastie.org)
03:01 caiozanolla JoeJulian, peering info seems ok too… http://pastie.org/9441007
03:01 glusterbot Title: #9441007 - Pastie (at pastie.org)
03:02 JoeJulian On the non-working server, try "glusterd -d"
03:07 caiozanolla JoeJulian, I can't make any sense of it… http://pastie.org/9441013
03:07 glusterbot Title: #9441013 - Pastie (at pastie.org)
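[Annotation: for anyone following along, glusterd can be run in the foreground with debug logging, which is what JoeJulian is asking for above; a sketch using the long-form flag, with the service name assumed to be glusterd (stop the running daemon first so it can bind its ports):]

    service glusterd stop    # or: systemctl stop glusterd
    glusterd --debug         # runs in the foreground at log level DEBUG; Ctrl-C to exit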
03:15 caiozanolla JoeJulian, ok, the hex dump shows all volume-ids match: all from A, all from B, and /var/lib/glusterd/vols/$vol/info
03:15 caiozanolla JoeJulian, still it won't work
03:23 caiozanolla JoeJulian, ok, it seems I made progress. On node B I got glusterfsd processes running on the bricks, and glusterd on A shows success on heals! But "gluster volume status" should show all bricks, right? (since the fs is replicated). Yet each node shows only its own bricks
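[Annotation: a sketch of the checks implied here, assuming the volume is named myvol (name illustrative); on a healthy replicated volume, every peer should report every brick, not just its own.]

    gluster peer status          # every peer should show State: Peer in Cluster (Connected)
    gluster volume status myvol  # should list the bricks from all nodes, not only the local ones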
03:37 bala joined #gluster
04:00 hagarth joined #gluster
04:30 firemanxbr joined #gluster
04:44 Guest95929 joined #gluster
06:29 elico joined #gluster
06:29 sputnik13 joined #gluster
06:55 ekuric joined #gluster
07:07 LebedevRI joined #gluster
07:11 sjm left #gluster
07:33 Xanacas_ glusterbot: so I have to do something like this if I have more than one brick: gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server1:/data/brick2 server2:/data/brick2? That way it will replicate between brick1 on both nodes and brick2 on both nodes?
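[Annotation: applying glusterbot's ordering rule from earlier in the log to the command in this question (hostnames and paths as given by Xanacas_): yes, adjacent pairs in the argument list form the replica sets.]

    gluster volume create myvol replica 2 \
        server1:/data/brick1 server2:/data/brick1 \
        server1:/data/brick2 server2:/data/brick2
    # replica set 1: server1:/data/brick1 <-> server2:/data/brick1
    # replica set 2: server1:/data/brick2 <-> server2:/data/brick2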
07:49 ramteid joined #gluster
07:50 ricky-ti1 joined #gluster
08:59 siel joined #gluster
09:22 Xanacas joined #gluster
09:23 Xanacas joined #gluster
10:14 edward1 joined #gluster
11:13 mhoungbo_ joined #gluster
11:37 diegows joined #gluster
12:34 DV__ joined #gluster
13:23 chirino joined #gluster
13:27 firemanxbr joined #gluster
13:42 mhoungbo joined #gluster
13:43 DV__ joined #gluster
13:50 Bardack joined #gluster
13:55 andreask joined #gluster
14:01 Xanacas joined #gluster
14:14 firemanxbr joined #gluster
14:56 glusterbot New news from newglusterbugs: [Bug 1122834] Issues reported by Coverity static analysis tool <https://bugzilla.redhat.com/show_bug.cgi?id=1122834>
15:21 cjhanks joined #gluster
15:21 firemanxbr joined #gluster
15:55 LebedevRI joined #gluster
16:08 luckyinva joined #gluster
16:20 rotbeard joined #gluster
16:45 cjhanks joined #gluster
16:49 edward1 joined #gluster
16:56 plarsen joined #gluster
17:19 RioS2 joined #gluster
17:34 andreask joined #gluster
17:50 luckyinva joined #gluster
19:10 Humble joined #gluster
19:27 decimoe left #gluster
19:34 DV__ joined #gluster
19:38 plarsen joined #gluster
19:51 firemanxbr joined #gluster
20:28 ThatGraemeGuy joined #gluster
21:07 qdk joined #gluster
21:22 LebedevRI joined #gluster
21:27 sputnik13 joined #gluster
21:57 kseifried joined #gluster
22:17 bala joined #gluster
22:26 _Bryan_ joined #gluster
22:44 pdrakeweb joined #gluster
23:11 plarsen joined #gluster
23:12 RicardoSSP joined #gluster
