
IRC log for #gluster, 2018-02-06


All times shown according to UTC.

Time Nick Message
00:01 vbellur joined #gluster
00:05 Rakkin joined #gluster
00:13 xiubli joined #gluster
00:34 MrAbaddon joined #gluster
00:38 jstrunk joined #gluster
01:27 plarsen joined #gluster
01:38 waqstar joined #gluster
02:07 atinm joined #gluster
02:27 Rakkin joined #gluster
02:31 aravindavk joined #gluster
02:37 xiubli joined #gluster
02:56 ppai joined #gluster
02:58 ilbot3 joined #gluster
02:58 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:06 rastar joined #gluster
03:24 nbalacha joined #gluster
03:32 Vishnu joined #gluster
03:33 hgowtham joined #gluster
03:45 varshar joined #gluster
04:07 om2 joined #gluster
04:18 nbalacha joined #gluster
04:53 nbalacha joined #gluster
04:57 kotreshhr joined #gluster
04:59 karthik_us joined #gluster
05:19 Prasad joined #gluster
05:21 owlbot joined #gluster
05:39 risjain joined #gluster
05:41 sunny joined #gluster
05:43 kramdoss_ joined #gluster
05:58 skumar joined #gluster
06:18 itisravi joined #gluster
06:29 xavih joined #gluster
06:37 poornima_ joined #gluster
06:41 apandey joined #gluster
06:46 jiffin joined #gluster
07:05 jkroon joined #gluster
07:19 jtux joined #gluster
07:49 rouven joined #gluster
08:02 hgowtham joined #gluster
08:03 uli joined #gluster
08:06 uli hey there, I have a question about rebuilding one node of a two-node, replica 2 Gluster volume (about 3 TB in size)
08:06 uli I disconnected one brick and detached the peer
08:06 uli then I rebuilt the disconnected node (complete new setup including OS etc.)
08:07 uli now I have rejoined the node (peer probe, add-brick replica 2 ...)
08:07 uli the volume is still in use and has load
08:08 uli now the nodes are healing, but the sync between the nodes is running at about 2-3 MB per second (10Gbit interfaces...) and the load on both peers is around 18 with 28 cores per machine....
08:09 uli so I'm searching for a solution for this... in a test setup I tried copying the files from one brick folder into the brick folder on the other peer, then rejoining the brick and then starting vol heal full...
08:11 uli I'd like to do the same on the production system... but: the heal on the two production peers has been running the whole weekend and yesterday... if I remove the other brick, will there be inconsistency regarding the files added/changed during the time the heal was running?
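A rough sketch of the add-brick/heal workflow uli describes, with placeholder names (volume "gv0", brick path /data/brick/gv0, rebuilt node "node2" are all hypothetical); newer Gluster releases also provide "gluster volume reset-brick" for reusing the same brick path, so check the docs for the version in use:

    # on a surviving node, once the rebuilt peer is reinstalled
    gluster peer probe node2
    gluster volume add-brick gv0 replica 2 node2:/data/brick/gv0
    # trigger a full self-heal and monitor progress
    gluster volume heal gv0 full
    gluster volume heal gv0 info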
08:15 jri joined #gluster
08:24 ThHirsch joined #gluster
08:47 BitByteNybble110 joined #gluster
08:56 fsimonce joined #gluster
09:06 hvisage joined #gluster
09:08 rouven joined #gluster
09:23 jri joined #gluster
09:29 varshar joined #gluster
09:36 jri_ joined #gluster
09:45 kotreshhr left #gluster
09:55 kdhananjay joined #gluster
10:36 ThHirsch joined #gluster
10:36 imajing joined #gluster
10:53 jri joined #gluster
10:57 MrAbaddon joined #gluster
10:59 skumar_ joined #gluster
11:21 ppai joined #gluster
11:51 bfoster joined #gluster
11:58 Vishnu_ joined #gluster
12:23 poornima_ joined #gluster
12:27 kkeithley @ports
12:27 glusterbot kkeithley: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up. All ports must be reachable by both servers and clients. Additionally it will listen on 38465-38468/tcp for NFS. NFS also depends on rpcbind/portmap ports 111 and 2049.
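A minimal sketch of opening those ports with firewalld, assuming the default zone and a brick port range of 49152-49200 (adjust the range to the number of bricks actually hosted; the NFS and rpcbind lines are only needed if gluster NFS is in use):

    firewall-cmd --permanent --add-port=24007-24008/tcp
    firewall-cmd --permanent --add-port=49152-49200/tcp
    firewall-cmd --permanent --add-port=38465-38468/tcp
    firewall-cmd --permanent --add-port=111/tcp --add-port=111/udp
    firewall-cmd --permanent --add-port=2049/tcp
    firewall-cmd --reload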
12:50 serkan-ist joined #gluster
12:58 garbageyard joined #gluster
12:58 garbageyard left #gluster
13:00 serkanper joined #gluster
13:03 phlogistonjohn joined #gluster
13:09 buvanesh_kumar joined #gluster
13:17 gospod joined #gluster
13:18 gospod2 joined #gluster
13:22 aravindavk joined #gluster
13:26 KpuCko joined #gluster
13:30 rastar joined #gluster
13:35 nbalacha joined #gluster
13:36 poornima_ joined #gluster
13:38 jstrunk joined #gluster
13:48 poornima_ joined #gluster
13:56 plarsen joined #gluster
14:11 sunny joined #gluster
14:25 waqstar_ joined #gluster
14:30 skylar1 joined #gluster
14:40 mobeats joined #gluster
14:43 rwheeler joined #gluster
14:43 phlogistonjohn joined #gluster
14:47 RustyB joined #gluster
14:48 brayo joined #gluster
14:52 sunny joined #gluster
14:53 billputer joined #gluster
14:57 plarsen joined #gluster
15:03 Somedream joined #gluster
15:20 atinm joined #gluster
15:26 melliott joined #gluster
15:30 alvinstarr joined #gluster
15:37 PotatoGim joined #gluster
15:40 kpease_ joined #gluster
15:50 melliott joined #gluster
16:02 uli_ joined #gluster
16:07 prth joined #gluster
16:10 kotreshhr joined #gluster
16:15 wlmbasson joined #gluster
16:27 uli_ joined #gluster
16:43 kotreshhr joined #gluster
17:15 jkroon joined #gluster
17:23 WebertRLZ joined #gluster
17:29 jri joined #gluster
17:30 kotreshhr left #gluster
18:06 mobeats joined #gluster
18:08 sunny joined #gluster
18:22 mobeats joined #gluster
18:33 sunnyk joined #gluster
18:39 social joined #gluster
18:45 mobeats joined #gluster
19:23 rouven joined #gluster
19:31 buvanesh_kumar joined #gluster
19:34 melliott joined #gluster
19:54 prth joined #gluster
20:00 skylar1 joined #gluster
20:02 skylar1 joined #gluster
20:27 Vapez joined #gluster
20:34 scc_ joined #gluster
20:46 rouven joined #gluster
20:58 melliott joined #gluster
21:08 melliott joined #gluster
22:29 ThHirsch joined #gluster
22:39 alvinstarr I am running gluster 3.8.9 and trying to set up a geo-replicated volume over ssh, and it looks like the volume create is trying to directly access the server over port 24007. The docs imply that all communications are over ssh. What am I missing?
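For reference, a hedged sketch of the usual geo-replication setup commands (volume names "mastervol"/"slavevol" and host "slavehost" are placeholders, and whether the create step still needs the slave glusterd reachable on 24007 depends on the version, so this is not a definitive answer to the question above); some versions accept "no-verify" in place of "push-pem" to skip the slave-volume verification step:

    # passwordless ssh from the master node to the slave node is assumed
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status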
23:06 ThHirsch joined #gluster
23:18 MrAbaddon joined #gluster
23:24 jstrunk joined #gluster
