
IRC log for #gluster, 2018-02-08


All times shown according to UTC.

Time Nick Message
00:10 jstrunk joined #gluster
00:32 Tom___ joined #gluster
00:37 Tom___ I have a setup with 3 nodes running GlusterFS.
00:37 Tom___ Unfortunately one node got out of sync some weeks ago. So the data in the physical brick directories is very different on Node1 compared to Node2 / Node3.
00:37 Tom___ I simply did a "service glusterd restart" on the faulty Node1, hoping that it will sync again. But it did not sync, only the load on all nodes went up.
00:38 Tom___ How do I sync data from the healthy nodes Node2/Node3 back to Node1?
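For a replicated volume, the usual recovery path is the self-heal daemon rather than a glusterd restart. A minimal sketch of the standard commands, assuming a replicated volume named myvol (the volume name here is a placeholder):

```shell
# Confirm all peers are connected and the bricks are online first
gluster peer status
gluster volume status myvol

# List entries that still need healing
gluster volume heal myvol info

# Trigger a full self-heal: walks the whole volume and copies
# missing/out-of-date files onto the out-of-sync brick
gluster volume heal myvol full

# Re-check until the pending-heal list is empty
gluster volume heal myvol info
```

A full heal can drive significant load on all nodes, which matches the load spike Tom saw after the restart; it is expected while the bricks reconverge.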
01:04 shyam joined #gluster
01:35 jiffin joined #gluster
01:59 shyam joined #gluster
02:03 waqstar_ joined #gluster
02:56 ilbot3 joined #gluster
02:56 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:13 kramdoss_ joined #gluster
03:17 nbalacha joined #gluster
03:20 aravindavk joined #gluster
03:22 jiffin joined #gluster
03:22 varshar joined #gluster
03:28 atinmu joined #gluster
03:32 daMaestro joined #gluster
03:42 kotreshhr joined #gluster
03:42 Vishnu joined #gluster
03:43 itisravi joined #gluster
03:47 psony joined #gluster
03:48 xiubli joined #gluster
03:56 jiffin joined #gluster
03:58 sunnyk joined #gluster
03:59 varshar joined #gluster
04:04 kotreshhr joined #gluster
04:09 varshar joined #gluster
04:10 sunny joined #gluster
04:24 Vishnu joined #gluster
04:29 ppai joined #gluster
04:48 hgowtham joined #gluster
04:49 xiubli joined #gluster
05:06 skumar joined #gluster
05:08 skumar_ joined #gluster
05:14 Prasad joined #gluster
05:14 karthik_us joined #gluster
05:53 poornima_ joined #gluster
05:54 TBlaar joined #gluster
06:06 apandey joined #gluster
06:08 kdhananjay joined #gluster
06:16 xiubli joined #gluster
06:23 risjain joined #gluster
06:38 xavih joined #gluster
06:47 ppai joined #gluster
07:07 varshar joined #gluster
07:20 rastar joined #gluster
07:20 hvisage left #gluster
07:26 Saravanakmr joined #gluster
07:38 jtux joined #gluster
07:59 xiubli joined #gluster
08:02 xiubli-1 joined #gluster
08:18 lunaaa joined #gluster
08:19 ppai joined #gluster
08:20 kramdoss_ joined #gluster
08:27 kotreshhr joined #gluster
08:30 jri joined #gluster
08:33 jri joined #gluster
08:33 apandey joined #gluster
08:34 atinmu joined #gluster
08:34 Humble joined #gluster
08:36 ppai joined #gluster
08:46 itisravi joined #gluster
08:51 kotreshhr joined #gluster
08:54 ivan_rossi joined #gluster
08:59 kotreshhr joined #gluster
09:17 illwieckz joined #gluster
09:18 jesk Tom, that was similar to my question yesterday, unfortunately that channel is completely dead or deaf :D
09:20 hgowtham ppai++
09:20 glusterbot hgowtham: ppai's karma is now 5
09:35 nbalacha joined #gluster
09:51 poornima_ joined #gluster
09:54 jkroon joined #gluster
10:02 kramdoss_ joined #gluster
10:03 xiubli-1 joined #gluster
10:04 Humble joined #gluster
10:15 fsimonce joined #gluster
10:21 jesk jesk++
10:21 glusterbot jesk: Error: You're not allowed to adjust your own karma.
10:22 jesk glusterbot--
10:22 glusterbot jesk: glusterbot's karma is now 6
10:37 poornima joined #gluster
10:38 atinm joined #gluster
10:41 T-Bone84 joined #gluster
11:01 ppai joined #gluster
11:01 xiubli-1 joined #gluster
11:03 atinm_ joined #gluster
11:21 atinmu joined #gluster
11:29 Pet0r joined #gluster
11:31 Pet0r with round robin DNS, should the glusterfs client by default try the other IPs if it fails to connect to the first one? I have that setup but it seems like if it times out on the first IP then it just doesn't try another
11:33 apandey joined #gluster
11:34 tontsa at least on the 3.7 and 3.8 versions that has never worked
11:37 Pet0r hmm - is there a correct way to do that on 3.8 then?
11:39 tontsa let me know if you find one.. based on experimenting, if you use DNS names they are only resolved during the initial mount
11:42 Teraii joined #gluster
11:42 shellclear joined #gluster
11:43 Pet0r hm that sucks, does it work if the DNS name only lists working nodes, but the other nodes in the cluster are aware of a node that is currently down?
11:43 Pet0r ie. if there are 3 nodes in the cluster, one of them is dead, the DNS endpoint lists the 2 active nodes
11:48 shellclear joined #gluster
12:07 shyam joined #gluster
12:21 kkeithley @ports
12:21 glusterbot kkeithley: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up. All ports must be reachable by both servers and clients. Additionally it will listen on 38465-38468/tcp for NFS. NFS also depends on rpcbind/portmap ports 111 and 2049.
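On a firewalld-based distribution, the ports glusterbot lists can be opened along these lines (a sketch; the brick port range depends on how many bricks each server hosts, and the NFS ports apply only if gluster NFS is in use):

```shell
# glusterd management (24007) and RDMA (24008)
firewall-cmd --permanent --add-port=24007-24008/tcp
# brick (glusterfsd) ports, 49152 and up -- widen per brick count
firewall-cmd --permanent --add-port=49152-49162/tcp
# gluster NFS, plus rpcbind/portmap dependencies
firewall-cmd --permanent --add-port=38465-38468/tcp
firewall-cmd --permanent --add-port=111/tcp --add-port=111/udp
firewall-cmd --permanent --add-port=2049/tcp
firewall-cmd --reload
```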
12:23 renout joined #gluster
12:35 renout joined #gluster
12:57 Saravanakmr joined #gluster
13:08 skumar joined #gluster
13:42 sunnyk joined #gluster
13:44 rastar joined #gluster
13:49 shyam joined #gluster
13:50 nbalacha joined #gluster
13:54 jstrunk joined #gluster
13:58 kotreshhr left #gluster
14:00 bowhunter joined #gluster
14:04 sunkumar joined #gluster
14:06 jtux joined #gluster
14:14 pladd joined #gluster
14:17 jiffin joined #gluster
14:19 shyam joined #gluster
14:32 Saravanakmr joined #gluster
14:33 jiffin1 joined #gluster
14:43 skumar_ joined #gluster
14:43 shyam joined #gluster
14:55 ThHirsch joined #gluster
14:55 nbalacha joined #gluster
14:56 shyam joined #gluster
14:57 Saravanakmr joined #gluster
14:59 skumar__ joined #gluster
15:03 pladd_ joined #gluster
15:19 buvanesh_kumar joined #gluster
15:27 ws2k3 joined #gluster
15:28 ws2k3 joined #gluster
15:29 kramdoss_ joined #gluster
15:39 shyam joined #gluster
15:40 kkeithley @termbin
15:59 shyam joined #gluster
16:04 jri joined #gluster
16:08 shyam joined #gluster
16:14 ivan_rossi left #gluster
16:19 kpease joined #gluster
17:22 abrakadab joined #gluster
17:22 abrakadab hello, does Gluster still support integration with Hadoop instead of HDFS?
17:25 shellclear joined #gluster
17:35 ThHirsch joined #gluster
17:36 skylar1 joined #gluster
17:37 major sooo .. I think I likely caused a really awesome corruption in one of my GFS volumes
17:38 major is there any documentation already available for suggestions on cleaning things up and getting the volume back to sane?
17:45 major also .. what is the "best" way to monitor a volume to validate that no bricks have anything lying around?
17:58 * major sings, "It's the sound .. of silence..." :P
18:02 major oh .. "gluster volume heal <name> statistics heal-count" .. is that a summary of everything that needs fixing? like, if those are all 0 then the volume is fine?
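heal-count reports the number of pending heals per brick, so zeros everywhere generally means the replicas agree; split-brain entries are worth checking separately, since the self-heal daemon cannot resolve those on its own. A sketch, again assuming a volume named myvol:

```shell
# Pending heal count per brick (all zeros => nothing queued)
gluster volume heal myvol statistics heal-count

# Entries in split-brain, which need manual resolution
gluster volume heal myvol info split-brain
```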
18:15 CMax joined #gluster
18:26 cMax joined #gluster
18:31 phlogistonjohn joined #gluster
19:05 Rakkin joined #gluster
19:16 Humble joined #gluster
19:28 lunaaa joined #gluster
19:43 shyam joined #gluster
20:20 risjain joined #gluster
20:25 jkroon joined #gluster
20:41 kpease joined #gluster
20:53 shyam joined #gluster
21:17 shyam joined #gluster
21:26 earnThis joined #gluster
21:27 earnThis anyone know why red hat chose gluster for its HCI platform and not Ceph?
21:27 xniega joined #gluster
21:33 kpease joined #gluster
21:34 earnThis joined #gluster
21:34 earnThis anyone know why red hat chose gluster for its HCI platform and not Ceph?
21:36 samppah earnThis: afaik gluster was more integrated into ovirt when they started working on hci
21:37 samppah also i think that they are using ceph more with openstack
21:38 xavih_ joined #gluster
21:55 earnThis samppah, gotcha
21:55 earnThis thanks
22:01 skylar1 joined #gluster
22:15 earnThis_ joined #gluster
22:43 Klas joined #gluster
22:43 earnThis joined #gluster
22:44 Rakkin joined #gluster
23:12 misc joined #gluster
23:12 shyam joined #gluster
23:15 xiubli joined #gluster
23:23 Rakkin joined #gluster
23:54 xniega joined #gluster
23:55 MrAbaddon joined #gluster
23:59 bowhunter joined #gluster
