IRC log for #gluster, 2016-09-11

All times shown according to UTC.

Time Nick Message
01:30 aj__ joined #gluster
02:36 cholcombe joined #gluster
02:52 d0nn1e joined #gluster
02:56 kramdoss_ joined #gluster
03:23 sanoj joined #gluster
03:32 pedrogibson joined #gluster
03:33 pedrogibson JoeJulian: Gracias for the info.. I will check that dir out to see if there are existing py scripts I can emulate for my <vol>-index feature settings.
03:50 riyas joined #gluster
03:50 sanoj joined #gluster
03:51 pkalever joined #gluster
03:52 rjoseph|afk joined #gluster
03:55 sac joined #gluster
04:01 rastar joined #gluster
04:21 hackman joined #gluster
04:25 sanoj joined #gluster
04:27 ic0n joined #gluster
04:59 kramdoss_ joined #gluster
05:22 gem joined #gluster
06:11 kramdoss_ joined #gluster
06:20 Philambdo joined #gluster
06:33 ieth0 joined #gluster
06:35 ieth0 joined #gluster
06:43 kukulogy joined #gluster
06:44 armyriad joined #gluster
06:52 sanoj joined #gluster
07:04 ieth0 joined #gluster
07:16 atinm joined #gluster
07:32 Gnomethrower joined #gluster
07:51 ramky joined #gluster
07:58 mhulsman joined #gluster
08:25 Lee1092 joined #gluster
08:26 kukulogy joined #gluster
08:32 prth joined #gluster
08:33 Klas joined #gluster
08:51 riyas joined #gluster
09:06 jri joined #gluster
09:10 mhulsman joined #gluster
09:34 jri joined #gluster
09:38 Pupeno joined #gluster
09:38 Pupeno joined #gluster
09:42 Klas joined #gluster
09:44 partner joined #gluster
10:12 Klas joined #gluster
10:14 Gnomethrower joined #gluster
10:19 mhulsman joined #gluster
10:22 Pupeno joined #gluster
10:34 robb_nl joined #gluster
10:58 Wizek_ joined #gluster
11:07 mhulsman joined #gluster
11:19 kukulogy joined #gluster
11:19 Pupeno joined #gluster
11:31 baojg joined #gluster
11:59 [diablo] joined #gluster
12:01 nbalacha joined #gluster
12:32 mhulsman joined #gluster
12:47 social joined #gluster
12:49 kukulogy joined #gluster
13:01 kramdoss_ joined #gluster
13:18 nbalacha joined #gluster
13:32 victori joined #gluster
13:33 kukulogy joined #gluster
13:43 kukulogy joined #gluster
13:45 mhulsman joined #gluster
13:56 victori joined #gluster
13:57 ieth0 joined #gluster
14:08 Philambdo joined #gluster
14:24 nbalacha joined #gluster
14:26 skoduri joined #gluster
14:32 prth joined #gluster
14:33 kukulogy joined #gluster
14:35 atinm joined #gluster
14:49 johnmilton joined #gluster
15:00 victori joined #gluster
15:02 Pupeno joined #gluster
15:28 B21956 joined #gluster
15:32 ieth0 joined #gluster
15:42 john51 joined #gluster
15:42 kukulogy joined #gluster
15:45 victori joined #gluster
16:15 Pupeno joined #gluster
16:32 rafi joined #gluster
16:33 rafi1 joined #gluster
16:54 philiph joined #gluster
16:54 philiph I followed this tutorial https://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
16:54 glusterbot Title: How to expand GlusterFS replicated clusters by one server (at joejulian.name)
16:57 philiph The operation seems to have finished, but the new brick does not hold the same data as its replica
16:58 philiph I have tried running a heal command. That helped, but it still stops before the bricks are filled to an equal size
16:58 philiph Can anyone help me
16:58 philiph ?
16:58 philiph Thx
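A minimal sketch of the usual next step here, assuming a hypothetical volume name "myvol": trigger a full self-heal so the new brick is populated from its replica, then watch the pending-heal list drain. Brick disk usage rarely matches byte-for-byte even after a complete heal, so the heal info output is a better signal than comparing df on the bricks.

    # trigger a full crawl so the new brick is populated from its replica
    gluster volume heal myvol full
    # list entries still pending heal; this should shrink to zero over time
    gluster volume heal myvol info
    # confirm the self-heal daemon is online on every node
    gluster volume status myvol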
17:03 rafi joined #gluster
17:08 kukulogy joined #gluster
17:23 sandersr joined #gluster
17:23 victori joined #gluster
17:30 Pupeno joined #gluster
17:30 d0nn1e joined #gluster
17:41 rafi joined #gluster
17:43 jobewan joined #gluster
17:57 rafi joined #gluster
18:02 Philambdo joined #gluster
18:18 janlam7 joined #gluster
18:27 Pupeno joined #gluster
18:28 Wizek_ joined #gluster
18:37 victori joined #gluster
18:39 Philambdo1 joined #gluster
19:09 Philambdo joined #gluster
19:45 jobewan joined #gluster
20:13 ieth0 joined #gluster
20:45 tdasilva joined #gluster
20:48 ieth0 joined #gluster
20:52 congpine joined #gluster
21:19 congpine hi all, has anyone ever experienced this issue? all connections to port 24007 on one of the gluster servers show up as TIME_WAIT
21:20 congpine I restarted that server and the same connections come back. But I can still run gluster volume commands and peer status on that server
21:28 congpine I don't have any firewall on the server; I can see that the port is open and bound to 0.0.0.0
21:28 congpine tcp        0      0 0.0.0.0:24007           0.0.0.0:*               LISTEN
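To narrow down which hosts are cycling connections into TIME_WAIT on glusterd's management port, a minimal sketch (the log path shown is the usual default and may differ on other distributions):

    # count connection states involving port 24007
    ss -tan '( sport = :24007 or dport = :24007 )' | awk '{print $1}' | sort | uniq -c
    # confirm the other peers still see this node as connected
    gluster peer status
    # watch glusterd's log for repeated connect/disconnect cycles
    tail -f /var/log/glusterfs/etc-glusterfs-glusterd.vol.log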
22:08 MikeLupe joined #gluster
23:23 deangiberson joined #gluster
23:44 deangiberson I have a large disperse (5+1) array of drives. When running my distributed application, glfs_h_stat calls start out quick (between 0.2-1.5 s) but quickly slow to an absurd rate (40-90 s).
23:45 deangiberson There are no errors in the gluster logs that I can see. Can anyone suggest a method to debug what's going on?
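One way to see where the latency goes is gluster's built-in profiling, which reports per-brick FOP counts and latencies; a minimal sketch, assuming a hypothetical volume name "myvol":

    # start collecting per-brick latency statistics
    gluster volume profile myvol start
    # reproduce the slow glfs_h_stat calls, then dump the cumulative stats
    gluster volume profile myvol info
    # stop profiling when finished
    gluster volume profile myvol stop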
