
IRC log for #gluster, 2016-12-21

| Channels | #gluster index | Today | | Search | Google Search | Plain-Text | summary

All times shown according to UTC.

Time Nick Message
00:07 l2_ joined #gluster
00:08 siel joined #gluster
00:21 siel joined #gluster
00:33 siel joined #gluster
00:34 bluenemo joined #gluster
00:35 yonex joined #gluster
00:40 ofaq joined #gluster
00:40 ofaq what is the fastest configuration? stripe?
00:45 siel joined #gluster
00:56 shdeng joined #gluster
01:40 siel joined #gluster
01:52 siel joined #gluster
01:59 Gambit15 joined #gluster
01:59 yonex joined #gluster
02:05 siel joined #gluster
02:06 yonex joined #gluster
02:17 siel joined #gluster
02:30 siel joined #gluster
02:46 siel joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:00 siel joined #gluster
03:16 skoduri joined #gluster
03:17 shdeng joined #gluster
03:21 riyas joined #gluster
03:33 nbalacha joined #gluster
03:35 magrawal joined #gluster
03:47 atinmu joined #gluster
03:51 msvbhat joined #gluster
04:00 krk joined #gluster
04:00 nishanth joined #gluster
04:11 siel joined #gluster
04:11 itisravi joined #gluster
04:12 wushudoin joined #gluster
04:15 gem joined #gluster
04:24 siel joined #gluster
04:29 rafi joined #gluster
04:38 buvanesh_kumar joined #gluster
04:47 armin joined #gluster
04:48 siel joined #gluster
04:51 Shu6h3ndu joined #gluster
04:54 apandey joined #gluster
05:01 sanoj joined #gluster
05:03 jiffin joined #gluster
05:04 siel joined #gluster
05:05 masber joined #gluster
05:10 RameshN joined #gluster
05:12 ppai joined #gluster
05:12 yonex joined #gluster
05:14 prasanth joined #gluster
05:15 kdhananjay joined #gluster
05:18 Saravanakmr joined #gluster
05:19 siel joined #gluster
05:25 hgowtham joined #gluster
05:28 Philambdo joined #gluster
05:37 karthik_us joined #gluster
05:49 sona joined #gluster
05:55 hchiramm joined #gluster
05:56 skoduri joined #gluster
06:01 aravindavk joined #gluster
06:01 Saravanakmr joined #gluster
06:04 msvbhat joined #gluster
06:04 buvanesh_kumar_ joined #gluster
06:04 ashiq joined #gluster
06:06 ankitraj joined #gluster
06:08 siel joined #gluster
06:09 Muthu joined #gluster
06:11 Karan joined #gluster
06:13 krk joined #gluster
06:14 d0nn1e joined #gluster
06:23 siel joined #gluster
06:29 yonex_ joined #gluster
06:30 yonex joined #gluster
06:30 yonex_ joined #gluster
06:33 nbalacha joined #gluster
06:42 yonex joined #gluster
06:50 cvstealt1 joined #gluster
06:50 siel joined #gluster
06:54 ankitraj joined #gluster
07:02 siel joined #gluster
07:12 hackman left #gluster
07:14 siel joined #gluster
07:15 Wizek joined #gluster
07:19 mhulsman joined #gluster
07:20 asriram|mtg joined #gluster
07:21 jtux joined #gluster
07:22 sona joined #gluster
07:27 atinmu joined #gluster
07:35 siel joined #gluster
07:38 [diablo] joined #gluster
07:45 yonex joined #gluster
07:47 siel joined #gluster
07:48 Wizek joined #gluster
07:52 mhulsman joined #gluster
07:53 jtux joined #gluster
07:53 swebb joined #gluster
07:59 siel joined #gluster
08:10 Wizek joined #gluster
08:11 gnulnx joined #gluster
08:13 sona joined #gluster
08:14 sn-x joined #gluster
08:15 mattmcc- joined #gluster
08:17 siel joined #gluster
08:18 nbalacha joined #gluster
08:19 jesk joined #gluster
08:23 jri joined #gluster
08:23 lucasrolff joined #gluster
08:23 Telsin joined #gluster
08:23 semiosis joined #gluster
08:23 dgandhi joined #gluster
08:23 yonex joined #gluster
08:28 Wizek joined #gluster
08:29 kramdoss_ joined #gluster
08:34 sona joined #gluster
08:35 fsimonce joined #gluster
08:40 ofaq joined #gluster
08:48 kotreshhr joined #gluster
08:48 [diablo] joined #gluster
08:48 Wizek joined #gluster
08:54 apandey joined #gluster
08:55 siel joined #gluster
09:01 BuBU29 joined #gluster
09:05 susant joined #gluster
09:06 flying joined #gluster
09:11 Wizek joined #gluster
09:11 siel joined #gluster
09:11 nbalacha joined #gluster
09:15 ankitraj joined #gluster
09:16 lucasrolff joined #gluster
09:16 Telsin joined #gluster
09:16 semiosis joined #gluster
09:17 nishanth joined #gluster
09:17 pulli joined #gluster
09:23 siel joined #gluster
09:24 Slashman joined #gluster
09:27 msvbhat joined #gluster
09:28 Wizek joined #gluster
09:32 BuBU291 joined #gluster
09:38 siel joined #gluster
09:43 derjohn_mobi joined #gluster
09:44 BatS9 Is there a way to put self-heal on a lower priority? Currently it's killing performance on the gluster volume, and it's a standard replica 3 setup so finishing the self-heal isn't time-critical
09:49 itisravi BatS9: you could explore regulating the selfhealdaemon process using cgroups.
09:50 siel joined #gluster
09:52 jkroon joined #gluster
09:53 shyam joined #gluster
09:53 shyam left #gluster
09:53 shyam joined #gluster
09:54 BatS9 itisravi: Problem is, it doesn't seem to be the self-heal daemon doing the heavy lifting in the healing process but the regular brick PIDs
09:56 jtux joined #gluster
09:57 itisravi BatS9: can you try setting cluster.data-self-heal-algorithm to "full"  instead of "diff" ?
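For reference, that tuning is applied with the standard gluster CLI; the volume name `myvol` below is a placeholder:

```shell
# "diff" checksums each block so only changed ranges are healed (CPU-heavy
# on the bricks); "full" simply copies the whole file from the good copy.
gluster volume set myvol cluster.data-self-heal-algorithm full

# Confirm the active value.
gluster volume get myvol cluster.data-self-heal-algorithm
```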
10:01 nishanth joined #gluster
10:03 siel joined #gluster
10:06 RameshN joined #gluster
10:11 mb_ joined #gluster
10:14 yonex_ joined #gluster
10:15 siel joined #gluster
10:17 toredl left #gluster
10:26 rafi joined #gluster
10:27 siel joined #gluster
10:31 BuBU29 joined #gluster
10:32 apandey joined #gluster
10:32 jtux joined #gluster
10:38 BatS9 itisravi: same issue
10:40 siel joined #gluster
10:40 itisravi BatS9: is it the CPU consumption of the brick process that is high?
10:41 itisravi BatS9: and what version of gluster are you running?
10:43 kotreshhr left #gluster
10:47 derjohn_mobi joined #gluster
10:49 atinmu joined #gluster
10:53 sona joined #gluster
11:04 rastar joined #gluster
11:08 siel joined #gluster
11:10 msvbhat joined #gluster
11:10 rastar joined #gluster
11:13 Lee1092 joined #gluster
11:13 GoKule joined #gluster
11:17 GoKule Hello, I need help with the gluster volume heal test split-brain source-brick command
11:17 GoKule I'm using a gluster configuration with two bricks; one brick stopped working, and of course there were a lot of errors when I started the brick that had been down
11:18 GoKule with the command gluster volume heal volumename info I see 3800 bad entries
11:18 GoKule over two days the heal fixed around 2000 more
11:18 GoKule and then it stopped
11:19 GoKule now I'm getting only gfid entries that are corrupted
11:19 GoKule I tried gluster volume heal test split-brain source-brick but I get the error message "failed: Transport endpoint is not connected."
11:20 GoKule gluster volume info shows everything is ok
11:20 GoKule as does gluster peer status
11:20 BatS9 itisravi: sorry for the long response time; 3.7.18. The CPU consumption is high but manageable; it seems to be more an issue of maxing out the IO
11:21 GoKule Using gluster 3.7.4
11:22 siel joined #gluster
11:24 Philambdo joined #gluster
11:26 jtux joined #gluster
11:28 hchiramm joined #gluster
11:29 mhulsman joined #gluster
11:30 GoKule Full command is
11:30 GoKule gluster volume heal storage3132_fs split-brain source-brick Sb0:/brick2/fs gfid:c5f64fc1-696a-4026-8261-caa5429202fd
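When a heal command fails with "Transport endpoint is not connected" even though `gluster volume info` looks clean, the usual next step is to check whether every brick process and the self-heal daemon are actually online; a sketch using the volume name from the command above:

```shell
# Every brick and the "Self-heal Daemon" entries should show "Y" under Online.
gluster volume status storage3132_fs

# Both peers should report "Peer in Cluster (Connected)".
gluster peer status

# Re-list the entries the self-heal daemon can still reach.
gluster volume heal storage3132_fs info
```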
11:36 siel joined #gluster
11:37 sona joined #gluster
11:48 siel joined #gluster
11:49 Wizek joined #gluster
11:51 itisravi BatS9: Yes so if you throttle the self-heal daemon with cgroups or some other way, it would send less I/O to the bricks.
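A rough sketch of that cgroups approach (cgroup v1 blkio; the 8:0 device number and the 1 MiB/s cap are illustrative — check your brick's block device with lsblk — and the commands need root):

```shell
#!/bin/sh
# Create a blkio cgroup and throttle the self-heal daemon's disk bandwidth.
mkdir -p /sys/fs/cgroup/blkio/glustershd
echo "8:0 1048576" > /sys/fs/cgroup/blkio/glustershd/blkio.throttle.read_bps_device
echo "8:0 1048576" > /sys/fs/cgroup/blkio/glustershd/blkio.throttle.write_bps_device

# Move the running self-heal daemon into the cgroup (its command line
# contains "glustershd").
echo "$(pgrep -f glustershd)" > /sys/fs/cgroup/blkio/glustershd/tasks
```

Note this throttles only glustershd; as BatS9 observed above, heal I/O served by the brick processes themselves won't be affected.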
11:53 ppai joined #gluster
11:58 Wizek joined #gluster
12:00 siel joined #gluster
12:05 jdarcy joined #gluster
12:12 siel joined #gluster
12:15 atinmu joined #gluster
12:15 Wizek_ joined #gluster
12:16 project0 joined #gluster
12:21 project0 Hey guys, does anyone know why the NFS server stops responding during large file writes? Reading is fine, but writing over NFS seems impossible for me; the client gets stuck and the gluster NFS server no longer responds
12:23 mhulsman1 joined #gluster
12:31 rwheeler joined #gluster
12:34 siel joined #gluster
12:38 mhulsman joined #gluster
12:42 GoKule joined #gluster
12:49 Wizek_ joined #gluster
12:50 l2_ Hi. I have a question. When using replica 3 + JBOD with one arbiter, how can I ensure big files don't grow bigger than a brick and get distributed over other bricks? Could I create a distributed striped volume with replica 3 to get around this?
12:59 skoduri_ joined #gluster
13:03 jiffin1 joined #gluster
13:03 johnmilton joined #gluster
13:06 nishanth joined #gluster
13:06 l2__ joined #gluster
13:11 Muthu joined #gluster
13:17 Wizek_ joined #gluster
13:32 unclemarc joined #gluster
13:32 Wizek_ joined #gluster
13:38 B21956 joined #gluster
13:42 hchiramm joined #gluster
13:46 siel joined #gluster
13:51 Wizek_ joined #gluster
13:57 asriram|mtg joined #gluster
13:57 Shu6h3ndu joined #gluster
13:58 siel joined #gluster
14:03 jiffin1 joined #gluster
14:03 gem joined #gluster
14:06 skoduri joined #gluster
14:10 siel joined #gluster
14:13 kettlewell joined #gluster
14:15 riyas joined #gluster
14:19 shaunm joined #gluster
14:20 Wizek joined #gluster
14:26 siel joined #gluster
14:32 skylar joined #gluster
14:35 farhorizon joined #gluster
14:43 s-hell joined #gluster
14:45 bowhunter joined #gluster
14:46 Wizek joined #gluster
14:46 siel joined #gluster
14:50 Philambdo joined #gluster
14:52 nbalacha joined #gluster
14:58 siel joined #gluster
15:00 jiffin1 joined #gluster
15:01 rwheeler joined #gluster
15:07 Wizek joined #gluster
15:11 jiffin joined #gluster
15:18 siel joined #gluster
15:18 skoduri joined #gluster
15:18 Gambit15 joined #gluster
15:23 post-factum l2__, use sharding
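Sharding splits each large file into fixed-size chunks that DHT can place on different replica sets, so no single file has to fit on one brick. A minimal sketch; the volume name and the 512MB block size are placeholders, and only files created after enabling the feature are sharded:

```shell
# Enable sharding and set the chunk size on the volume.
gluster volume set myvol features.shard on
gluster volume set myvol features.shard-block-size 512MB
```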
15:31 siel joined #gluster
15:32 ira joined #gluster
15:32 Wizek joined #gluster
15:44 siel joined #gluster
15:48 yonex joined #gluster
15:50 ahino joined #gluster
15:52 bowhunter joined #gluster
15:53 Philambdo1 joined #gluster
16:01 siel joined #gluster
16:03 Philambdo joined #gluster
16:15 siel joined #gluster
16:19 aravindavk joined #gluster
16:20 bluenemo joined #gluster
16:21 ahino joined #gluster
16:25 hchiramm joined #gluster
16:36 wushudoin joined #gluster
16:36 susant joined #gluster
16:41 yonex joined #gluster
16:43 foster joined #gluster
16:44 prasanth joined #gluster
16:50 jvandewege_ joined #gluster
17:02 bowhunter joined #gluster
17:02 Intensity joined #gluster
17:11 ahino joined #gluster
17:11 yonex_ joined #gluster
17:15 farhorizon joined #gluster
17:30 ofaq what is the fastest configuration? stripe?
17:48 kotreshhr joined #gluster
17:59 rwheeler joined #gluster
18:04 JoeJulian ofaq: Depends on the use case. Never ,,(stripe)
18:04 glusterbot ofaq: (#1) Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes., or (#2) The stripe translator is deprecated. Consider enabling sharding instead.
18:10 Karan joined #gluster
18:10 hchiramm joined #gluster
18:14 siel joined #gluster
18:18 jiffin joined #gluster
18:25 kotreshhr left #gluster
18:28 hchiramm joined #gluster
18:32 jiffin joined #gluster
18:37 siel joined #gluster
18:38 jiffin joined #gluster
18:38 squizzi joined #gluster
18:50 siel joined #gluster
19:04 siel joined #gluster
19:17 siel joined #gluster
19:17 farhorizon joined #gluster
19:19 MidlandTroy joined #gluster
19:30 siel joined #gluster
19:43 siel joined #gluster
19:47 shaunm joined #gluster
19:58 bowhunter joined #gluster
19:58 siel joined #gluster
20:05 the-me joined #gluster
20:09 mhulsman joined #gluster
20:12 purpleidea joined #gluster
20:18 siel joined #gluster
20:35 d0nn1e joined #gluster
20:51 derjohn_mobi joined #gluster
20:57 plarsen joined #gluster
21:02 kpease joined #gluster
21:27 arpu joined #gluster
21:45 siel joined #gluster
21:47 skylar joined #gluster
21:53 panina joined #gluster
21:53 panina Hi. Has there been any work on tools to change a host's primary address? I need to move my hosts from IP addressing to FQDNs.
21:54 panina Reading the mailing lists from a couple of years ago, it seems I need to basically use sed on a lot of config files. Has a better tool been built?
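As of this log there doesn't appear to be a dedicated tool, so the usual approach is indeed a scripted rewrite of /var/lib/glusterd on every node with glusterd stopped. A hedged sketch; OLD_IP and NEW_NAME are placeholders, and the service steps are commented out so the rewrite can be rehearsed on a copy first:

```shell
#!/bin/sh
OLD_IP="192.0.2.10"            # placeholder: address being retired
NEW_NAME="node1.example.com"   # placeholder: new FQDN
GLUSTERD_DIR="${GLUSTERD_DIR:-/var/lib/glusterd}"

# On every node: stop glusterd and keep a backup before rewriting.
# systemctl stop glusterd
# cp -a "$GLUSTERD_DIR" "$GLUSTERD_DIR.bak"

# Rewrite every config file that still references the old address.
if [ -d "$GLUSTERD_DIR" ]; then
    grep -rl "$OLD_IP" "$GLUSTERD_DIR" | xargs -r sed -i "s/$OLD_IP/$NEW_NAME/g"
fi

# systemctl start glusterd
```

The FQDN must of course resolve to the same host on every node before glusterd is restarted.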
21:58 mhulsman joined #gluster
21:58 siel joined #gluster
22:01 kpease joined #gluster
22:10 siel joined #gluster
22:21 zpallin joined #gluster
22:22 zpallin joined #gluster
22:23 zpallin Hello.
22:23 glusterbot zpallin: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
22:24 siel joined #gluster
22:25 zpallin Hello. I have 2 nodes with 2 bricks each, and the bricks fail intermittently while glusterd is running. We think they are corrupt. I'm getting this error in the logs on restart: `Failed to verify stub directory [/mnt/brick1/brick/.glusterfs/quanrantine] [Input/output error]`. Anyone familiar with this?
22:27 zpallin Should mention: running 3.9 on CentOS 7; the bricks are XFS on LVM over RAID 6.
22:41 siel joined #gluster
22:54 zpallin I mean, would anyone have any hints as to what to do if a brick keeps going unresponsive with Input/Output errors?
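Persistent Input/output errors from a brick path usually point at the underlying XFS/LVM/RAID stack rather than gluster itself; one hedged checklist (the device path is a placeholder):

```shell
# Kernel-side errors from XFS, the RAID controller, or the disks.
dmesg | grep -iE 'xfs|i/o error|raid'

# Read the failing path directly on the brick, bypassing gluster.
ls -la /mnt/brick1/brick/.glusterfs/

# If XFS is suspect: unmount and dry-run check (-n makes no changes).
umount /mnt/brick1
xfs_repair -n /dev/mapper/vg0-brick1    # placeholder device path
```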
22:58 B21956 joined #gluster
23:01 siel joined #gluster
23:06 wushudoin| joined #gluster
23:15 siel joined #gluster
23:26 Klas joined #gluster
23:28 siel joined #gluster
23:42 arpu joined #gluster
23:48 Wizek joined #gluster
23:54 farhorizon joined #gluster