
IRC log for #gluster, 2015-10-24


All times shown according to UTC.

Time Nick Message
00:01 calavera joined #gluster
00:33 calavera joined #gluster
00:41 RedW joined #gluster
01:46 vmallika joined #gluster
02:25 nangthang joined #gluster
02:28 Lee1092 joined #gluster
02:57 haomaiwa_ joined #gluster
03:01 haomaiwa_ joined #gluster
03:16 _Bryan_ joined #gluster
03:19 TheSeven joined #gluster
03:38 stickyboy joined #gluster
04:00 [7] joined #gluster
04:01 haomaiwa_ joined #gluster
04:12 Ludo- joined #gluster
04:13 haomaiwang joined #gluster
04:43 kovshenin joined #gluster
04:47 kovsheni_ joined #gluster
04:57 klaxa joined #gluster
05:01 haomaiwa_ joined #gluster
05:14 harish joined #gluster
05:19 calavera joined #gluster
05:22 klaxa joined #gluster
05:23 kovshenin joined #gluster
05:25 beeradb joined #gluster
05:41 F2Knight joined #gluster
05:42 klaxa joined #gluster
06:01 haomaiwa_ joined #gluster
06:23 RayTrace_ joined #gluster
06:33 ParsectiX joined #gluster
07:01 haomaiwang joined #gluster
07:09 overclk_ joined #gluster
07:28 ParsectiX joined #gluster
07:32 armyriad joined #gluster
07:46 Alex__ joined #gluster
07:46 Alex__ hello
07:46 glusterbot Alex__: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
07:46 Alex__ Is it possible to have a redundant connection to the storage in virt-manager when using libgfapi and kvm?
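(A rough sketch of the pieces behind this question; server1, server2, volume1 and the image path are placeholders, not from the log. A libgfapi client only contacts the named host to fetch the volfile and then connects to the bricks directly, so the single host in the URL is mostly a bootstrap dependency; a FUSE mount can also name a fallback volfile server explicitly.)

    # libgfapi: qemu fetches the volfile from server1, then talks to every brick itself
    # (server1, volume1 and vm1.qcow2 are placeholder names)
    qemu-system-x86_64 -drive file=gluster://server1/volume1/vm1.qcow2,format=qcow2,if=virtio

    # FUSE alternative: give mount.glusterfs a backup volfile server
    # (option is spelled backupvolfile-server or backup-volfile-servers depending on the version)
    mount -t glusterfs -o backupvolfile-server=server2 server1:/volume1 /var/lib/libvirt/images/gluster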
08:01 haomaiwa_ joined #gluster
08:20 ParsectiX joined #gluster
08:31 mlhamburg1 joined #gluster
08:39 tomatto joined #gluster
08:41 mhulsman joined #gluster
08:51 overclk joined #gluster
09:01 haomaiwang joined #gluster
09:11 mhulsman1 joined #gluster
09:12 mhulsman joined #gluster
09:21 bhuddah joined #gluster
09:23 nishanth joined #gluster
09:38 stickyboy joined #gluster
09:38 bluenemo joined #gluster
09:46 haomaiwang joined #gluster
09:47 haomaiwa_ joined #gluster
10:01 haomaiwa_ joined #gluster
10:12 shubhendu_ joined #gluster
10:15 vimal joined #gluster
10:30 LebedevRI joined #gluster
10:37 gem joined #gluster
10:55 overclk joined #gluster
10:55 rafi joined #gluster
11:01 haomaiwa_ joined #gluster
11:05 zhangjn joined #gluster
11:30 kovsheni_ joined #gluster
11:30 ParsectiX joined #gluster
11:31 ParsectiX joined #gluster
11:42 overclk joined #gluster
11:45 bhuddah joined #gluster
11:55 julim joined #gluster
11:58 julim_ joined #gluster
12:01 haomaiwa_ joined #gluster
12:20 overclk joined #gluster
12:24 overclk_ joined #gluster
12:27 overclk joined #gluster
12:27 julim joined #gluster
12:44 RayTrace_ joined #gluster
13:01 haomaiwa_ joined #gluster
13:13 ira joined #gluster
13:44 Nisroc joined #gluster
14:01 haomaiwa_ joined #gluster
14:13 muneerse2 joined #gluster
14:19 julim joined #gluster
14:21 DV__ joined #gluster
14:26 DV joined #gluster
14:39 EinstCrazy joined #gluster
14:45 haomaiwa_ joined #gluster
15:02 haomaiwang joined #gluster
15:07 gem joined #gluster
15:14 ekuric joined #gluster
15:19 EinstCrazy joined #gluster
15:27 adamaN joined #gluster
15:30 overclk joined #gluster
15:31 overclk_ joined #gluster
15:33 kotreshhr joined #gluster
15:39 stickyboy joined #gluster
16:10 haomaiwa_ joined #gluster
16:24 shyam joined #gluster
16:26 Peppard joined #gluster
16:29 EinstCrazy joined #gluster
16:39 Peppard joined #gluster
16:39 gem joined #gluster
16:45 dmnchild joined #gluster
16:52 Peppard joined #gluster
17:14 F2Knight joined #gluster
17:16 overclk joined #gluster
17:16 overclk_ joined #gluster
17:36 EinstCrazy joined #gluster
18:15 overclk joined #gluster
18:53 mhulsman joined #gluster
18:55 mhulsman1 joined #gluster
19:10 mhulsman joined #gluster
19:11 thoht_ I had to isolate one node for 2 minutes, and now I see "possibly undergoing heal" on a few files (VM files). Is that bad?
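(A sketch of the usual way to check; <volname> is a placeholder. Files flagged as "possibly undergoing heal" should drain out of the pending list once the self-heal daemon catches up after the isolated node rejoins.)

    # <volname> = your volume name; re-run this and the list should shrink to empty
    gluster volume heal <volname> info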
19:13 akik joined #gluster
19:30 EinstCrazy joined #gluster
19:35 armyriad joined #gluster
19:36 kovshenin joined #gluster
19:46 RayTrace_ joined #gluster
19:59 F2Knight joined #gluster
20:11 DV joined #gluster
20:13 armyriad2 joined #gluster
20:17 armyriad joined #gluster
20:24 armyriad joined #gluster
20:40 jonfatino joined #gluster
20:41 jonfatino root@content11:~# gluster volume heal volume1 info heal-failed|wc -l
20:41 jonfatino 1033
20:41 jonfatino how do I fix these heal-failed files?
20:41 thoht_ OMG!
20:42 thoht_ heal-failed? that isn't working on my gluster volume
20:43 thoht_ jonfatino: check the log
20:43 thoht_ in /var/log/glusterfs/glustershd.log
20:44 jonfatino [2015-10-24 20:44:01.924849] I [afr-self-heal-metadata.c:186:afr_sh_metadata_sync_cbk] 0-volume1-replicate-0: setting attributes failed for <gfid:5a31319f-0cd3-4d26-8f4f-52a36fadc264> on volume1-client-0 (Numerical result out of range)
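(A hedged sketch of the usual follow-up commands for lingering heal-failed entries on a 3.x-era replica volume; they re-trigger healing and surface split-brain files, though they will not by themselves explain the "Numerical result out of range" error above.)

    # list what is still pending and check for split-brain entries
    gluster volume heal volume1 info
    gluster volume heal volume1 info split-brain

    # ask the self-heal daemon to crawl the whole volume and retry
    gluster volume heal volume1 full

    # watch the self-heal daemon's verdict
    tail -f /var/log/glusterfs/glustershd.log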
20:45 thoht_ what did you do to end up in this disaster situation?
20:46 jonfatino well, I had two nodes set up with two bricks and replica 2. I added a third node with replica 3 and it took forever to rebuild 1 TB of data: 3+ days and only 300 GB was sent, and everything seemed to end up under .glusterfs in the brick rather than in the actual /var/www/raw/files
20:47 jonfatino anyway, long story short, I removed the third node from the peer list and changed replica back to 2
20:47 jonfatino I attempted a replace-brick and let it run for a bit, but nothing seemed to be happening (not even the .glusterfs folder was created):      gluster volume replace-brick volume1 content11:/gluster/volume1 content2:/gluster/volume1 start
20:48 jonfatino so I aborted that, and that seemed to cause all kinds of problems: gluster volume status kept telling me that another transaction was in progress even though it had been aborted
20:48 jonfatino so I rebooted one node and everything came back to normal
20:49 jonfatino So what I am trying to do is set up gluster in replica across 6 or 7 nodes around the world and run nginx on each as a content (CDN) server
20:50 jonfatino that way each CDN node can mount gluster volume1 (/gluster/brick etc.) via the gluster client on /var/www
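(A minimal sketch of the setup being described, reusing the /gluster/volume1 brick path from the log and placeholder node names cdn1..cdn3; the volume commands run on one node after peer-probing the others, the mount on every CDN node. Note that a plain replica volume is synchronous, so every write waits on the slowest WAN link, which is what makes the geo-replication question below relevant.)

    # cdn1..cdn3 are placeholder hostnames
    gluster peer probe cdn2
    gluster peer probe cdn3
    gluster volume create volume1 replica 3 cdn1:/gluster/volume1 cdn2:/gluster/volume1 cdn3:/gluster/volume1
    gluster volume start volume1

    # on each CDN node, mount the volume where nginx serves from
    mount -t glusterfs localhost:/volume1 /var/www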
21:00 thoht_ kewl
21:00 thoht_ sounds like a good plan
21:01 thoht_ hope your boxes are on Gbit links
21:01 jonfatino Yeah, hardware is not an issue, it's all top of the line
21:01 jonfatino should I use replica 2, 3, 4, etc. as I add more boxes
21:01 jonfatino or the geo replication?
21:01 thoht_ geo-replication is asynchronous
21:01 thoht_ it is a kind of rsync
21:02 jonfatino ok will stick with replica
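(For reference, a sketch of what the asynchronous alternative would look like; slavehost::slavevol is a placeholder and the commands assume the distributed geo-replication of GlusterFS 3.5 and later, with passwordless SSH to the slave already set up.)

    # slavehost::slavevol is a placeholder for the remote slave host and volume
    gluster volume geo-replication volume1 slavehost::slavevol create push-pem
    gluster volume geo-replication volume1 slavehost::slavevol start
    gluster volume geo-replication volume1 slavehost::slavevol status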
21:02 jonfatino thoht_: so, the self-heal failures aside.
21:02 jonfatino volume add-brick: failed: Replace brick is in progress on volume volume1. Please retry after replace-brick operation is committed or aborted
21:03 jonfatino I get that error when I try and add a 3rd node
21:03 jonfatino # gluster volume add-brick volume1 replica 3 content2:/gluster/volume1
21:04 thoht_ it sounds like your data is in a total crazy mess
21:04 thoht_ maybe the best option is to restore from your backup and start over
21:05 jonfatino I attempted to force-stop the "replace brick" and whatnot, and the volume status even says it's completed
21:06 jonfatino http://pastebin.com/raw.php?i=2kG5jeFX
21:06 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
21:21 jonfatino anyone know how to force-kill a gluster replace-brick task id?
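(A sketch of how a stuck replace-brick task is usually cleared, reusing the brick paths from the command quoted above; depending on the 3.x version, abort may already be deprecated and only commit force accepted. Note that commit force completes the brick replacement rather than cancelling it.)

    gluster volume replace-brick volume1 content11:/gluster/volume1 content2:/gluster/volume1 abort

    # if the pending task still shows up afterwards, force the commit
    gluster volume replace-brick volume1 content11:/gluster/volume1 content2:/gluster/volume1 commit force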
21:42 stickyboy joined #gluster
21:55 julim joined #gluster
22:08 DV joined #gluster
23:25 mlhamburg_ joined #gluster
23:48 calavera joined #gluster
