
IRC log for #gluster, 2014-02-03


All times shown according to UTC.

Time Nick Message
00:07 jporterfield joined #gluster
00:11 Jayunit100 joined #gluster
00:15 andrewklau joined #gluster
00:24 klaas joined #gluster
00:41 ilbot3 joined #gluster
00:41 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
00:41 T0aD joined #gluster
00:42 msvbhat_ joined #gluster
00:44 yosafbridge joined #gluster
00:46 zapotah joined #gluster
00:46 johnmark joined #gluster
00:46 sac`away joined #gluster
00:47 edoceo joined #gluster
00:57 ilbot3 joined #gluster
00:57 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
00:57 mattapperson joined #gluster
00:57 tg2 joined #gluster
00:58 saltsa joined #gluster
01:01 georgeh|workstat joined #gluster
01:03 morsik_ joined #gluster
01:03 tziOm joined #gluster
01:03 m0zes joined #gluster
01:03 avati joined #gluster
01:03 cfeller joined #gluster
01:03 purpleid1a joined #gluster
01:03 fidevo joined #gluster
01:03 jporterfield joined #gluster
01:03 haakon_ joined #gluster
01:03 msciciel joined #gluster
01:03 eastz0r joined #gluster
01:03 X3NQ joined #gluster
01:03 Amanda joined #gluster
01:03 qdk joined #gluster
01:03 Peanut joined #gluster
01:03 eryc_ joined #gluster
01:03 45PABAE03 joined #gluster
01:03 yosafbridge joined #gluster
01:03 johnmark joined #gluster
01:03 sac`away joined #gluster
01:03 edoceo joined #gluster
01:03 codex joined #gluster
01:03 iksik joined #gluster
01:03 lanning joined #gluster
01:03 ujjain joined #gluster
01:03 _NiC joined #gluster
01:03 92AAAT7KY joined #gluster
01:03 wgao joined #gluster
01:03 brosner joined #gluster
01:03 TheDingy joined #gluster
01:03 stigchristian joined #gluster
01:03 T0aD joined #gluster
01:03 Dave2_ joined #gluster
01:03 xavih joined #gluster
01:03 hybrid512 joined #gluster
01:03 morsik joined #gluster
01:03 mojorison joined #gluster
01:03 d-fence joined #gluster
01:03 FrodeS joined #gluster
01:03 harish joined #gluster
01:03 tjikkun_work joined #gluster
01:03 ron-slc joined #gluster
01:03 portante joined #gluster
01:03 lava joined #gluster
01:03 mkzero joined #gluster
01:03 dewey joined #gluster
01:03 divbell_ joined #gluster
01:03 _br_- joined #gluster
01:03 Dgamax joined #gluster
01:03 mibby- joined #gluster
01:03 smellis_ joined #gluster
01:03 primusinterpare1 joined #gluster
01:03 duerF^ joined #gluster
01:03 aurigus_ joined #gluster
01:03 msvbhat_ joined #gluster
01:03 17SAAEX0G joined #gluster
01:03 ofu___ joined #gluster
01:03 shapemaker joined #gluster
01:03 social joined #gluster
01:03 jikz joined #gluster
01:03 sarkis joined #gluster
01:03 NuxRo joined #gluster
01:03 askb joined #gluster
01:03 hagarth joined #gluster
01:03 XpineX joined #gluster
01:03 bennyturns joined #gluster
01:03 StarBeast joined #gluster
01:03 NeatBasis joined #gluster
01:03 spechal joined #gluster
01:03 marcoceppi joined #gluster
01:03 asku joined #gluster
01:03 Cenbe joined #gluster
01:03 crazifyngers joined #gluster
01:03 eshy joined #gluster
01:03 atrius joined #gluster
01:03 delhage joined #gluster
01:03 Gugge joined #gluster
01:03 davidjpeacock joined #gluster
01:03 gluslog joined #gluster
01:03 masterzen joined #gluster
01:03 glusterbot joined #gluster
01:03 eclectic_ joined #gluster
01:03 sticky_afk joined #gluster
01:03 Oneiroi joined #gluster
01:03 tjikkun_ joined #gluster
01:03 Slasheri joined #gluster
01:03 eightyeight joined #gluster
01:03 hflai joined #gluster
01:03 pdrakeweb joined #gluster
01:03 lawrie joined #gluster
01:03 ndevos joined #gluster
01:03 foster joined #gluster
01:03 verdurin joined #gluster
01:03 atrius` joined #gluster
01:03 paratai joined #gluster
01:03 cicero joined #gluster
01:03 abyss^ joined #gluster
01:03 partner joined #gluster
01:03 JoeJulian joined #gluster
01:03 ccha4 joined #gluster
01:03 osiekhan1 joined #gluster
01:03 samppah joined #gluster
01:03 Ramereth joined #gluster
01:04 mick271 joined #gluster
01:04 DV__ joined #gluster
01:05 paratai joined #gluster
01:06 verdurin_ joined #gluster
01:08 iksik joined #gluster
01:08 bfoster joined #gluster
01:08 jezier joined #gluster
01:08 _br_ joined #gluster
01:08 edong23 joined #gluster
01:08 partner_ joined #gluster
01:12 Humble joined #gluster
01:12 badone joined #gluster
01:12 sulky joined #gluster
01:12 morsik_ joined #gluster
01:12 harish joined #gluster
01:12 bfoster joined #gluster
01:12 juhaj joined #gluster
01:12 twx joined #gluster
01:12 jiqiren joined #gluster
01:12 sac`away joined #gluster
01:12 _NiC joined #gluster
01:14 wgao_ joined #gluster
01:14 sulky joined #gluster
01:15 xymox_ joined #gluster
01:17 xavih_ joined #gluster
01:18 abyss^ joined #gluster
01:18 jurrien_ joined #gluster
01:20 mkzero joined #gluster
01:21 iksik joined #gluster
01:21 eastz0r joined #gluster
01:21 sac`away joined #gluster
01:21 atrius` joined #gluster
01:21 tokik joined #gluster
01:21 badone joined #gluster
01:21 radez_g0n3 joined #gluster
01:21 paratai joined #gluster
01:21 23LAAZHYR joined #gluster
01:21 dblack_ joined #gluster
01:21 codex joined #gluster
01:21 k4nar joined #gluster
01:21 d-fence joined #gluster
01:21 hybrid512 joined #gluster
01:21 qdk joined #gluster
01:21 17SAAEX0G joined #gluster
01:21 sarkis joined #gluster
01:22 lkoranda joined #gluster
01:23 gluslog joined #gluster
01:24 pdrakeweb joined #gluster
01:24 mattappe_ joined #gluster
01:24 overclk joined #gluster
01:24 mick27 joined #gluster
01:24 mick271 joined #gluster
01:24 kshlm joined #gluster
01:24 cicero joined #gluster
01:24 yosafbridge` joined #gluster
01:26 mojorison joined #gluster
01:26 lyang0 joined #gluster
01:26 paratai joined #gluster
01:30 T0aD- joined #gluster
01:30 klaas joined #gluster
01:30 Kins joined #gluster
01:30 msciciel joined #gluster
01:30 portante joined #gluster
01:32 sulky joined #gluster
01:32 dblack_ joined #gluster
01:32 radez_g0n3 joined #gluster
01:32 sac`away joined #gluster
01:32 lkoranda joined #gluster
01:32 kshlm joined #gluster
01:32 zerick joined #gluster
01:34 T0aD joined #gluster
01:34 mick27 joined #gluster
01:34 brosner joined #gluster
01:34 14WAB0IP9 joined #gluster
01:34 _br_ joined #gluster
01:34 jclift joined #gluster
01:36 avati joined #gluster
01:36 hflai joined #gluster
01:36 mattappe_ joined #gluster
01:36 eryc joined #gluster
01:36 jporterfield joined #gluster
01:36 ujjain joined #gluster
01:36 asku joined #gluster
01:36 nage joined #gluster
01:36 wgao_ joined #gluster
01:36 dork joined #gluster
01:36 zapotah joined #gluster
01:36 tzi0m joined #gluster
01:36 cyberbootje joined #gluster
01:36 JonnyNomad joined #gluster
01:36 l0uis joined #gluster
01:36 Peanut joined #gluster
01:36 johnmark joined #gluster
01:36 Kins joined #gluster
01:38 msciciel joined #gluster
01:40 TheDingy joined #gluster
01:40 Amanda joined #gluster
01:40 T0aD-- joined #gluster
01:41 VerboEse joined #gluster
01:41 uebera|| joined #gluster
01:41 jiffe98 joined #gluster
01:41 al joined #gluster
01:42 JonnyNomad_ joined #gluster
01:44 mattapp__ joined #gluster
01:45 zerick joined #gluster
01:45 yosafbridge joined #gluster
01:46 lyang0 joined #gluster
01:48 mojorison joined #gluster
01:48 lkoranda joined #gluster
01:48 mkzero joined #gluster
01:48 FrodeS joined #gluster
01:48 portante joined #gluster
01:48 ofu___ joined #gluster
01:48 social joined #gluster
01:48 jikz joined #gluster
01:48 askb joined #gluster
01:48 StarBeast joined #gluster
01:48 yosafbridge joined #gluster
01:49 edoceo joined #gluster
01:50 T0aD joined #gluster
01:53 _NiC joined #gluster
01:53 Slasheri_ joined #gluster
01:53 T0aD-- joined #gluster
01:53 msciciel joined #gluster
01:53 divbell joined #gluster
01:53 T0aD--- joined #gluster
01:54 mojorison joined #gluster
01:54 lkoranda joined #gluster
01:54 mkzero joined #gluster
01:54 FrodeS joined #gluster
01:54 portante joined #gluster
01:54 ofu___ joined #gluster
01:54 jikz joined #gluster
01:54 StarBeast joined #gluster
01:55 T0aD--- joined #gluster
01:55 _br_ joined #gluster
01:57 twx joined #gluster
01:57 tziOm joined #gluster
01:57 Amanda joined #gluster
01:58 sulky_ joined #gluster
01:58 asku joined #gluster
01:58 eightyeight joined #gluster
01:58 nage joined #gluster
01:58 nage joined #gluster
01:58 zapotah joined #gluster
01:58 zapotah joined #gluster
01:59 jporterfield joined #gluster
02:00 solid_liq joined #gluster
02:00 solid_liq joined #gluster
02:01 badone joined #gluster
02:01 ron-slc joined #gluster
02:01 jezier joined #gluster
02:01 xavih joined #gluster
02:03 partner joined #gluster
02:03 social joined #gluster
02:03 Dave2 joined #gluster
02:03 Dga joined #gluster
02:05 partner joined #gluster
02:05 crazifyngers joined #gluster
02:09 sulky joined #gluster
02:09 Humble joined #gluster
02:11 T0aD joined #gluster
02:12 overclk joined #gluster
02:12 Kins joined #gluster
02:14 zerick joined #gluster
02:14 _NiC joined #gluster
02:14 JoeJulian joined #gluster
02:14 eightyeight joined #gluster
02:15 xavih joined #gluster
02:15 mick271 joined #gluster
02:15 juhaj joined #gluster
02:15 mick27 joined #gluster
02:16 dblack joined #gluster
02:16 jiqiren joined #gluster
02:17 lanning joined #gluster
02:18 TheDingy joined #gluster
02:20 edoceo joined #gluster
02:20 askb joined #gluster
02:21 T0aD joined #gluster
02:21 mattappe_ joined #gluster
02:22 iksik joined #gluster
02:23 14WAB0PP1 joined #gluster
02:23 14WAB0PTN joined #gluster
02:29 brosner joined #gluster
02:29 fidevo joined #gluster
02:30 radez_g0n3 joined #gluster
02:34 jiqiren joined #gluster
02:34 bfoster joined #gluster
02:34 tziOm joined #gluster
02:34 iksik joined #gluster
02:34 edoceo joined #gluster
02:34 _NiC joined #gluster
02:34 zerick joined #gluster
02:34 overclk joined #gluster
02:34 crazifyngers joined #gluster
02:34 ron-slc joined #gluster
02:34 jporterfield joined #gluster
02:34 twx joined #gluster
02:34 mojorison joined #gluster
02:34 portante joined #gluster
02:34 StarBeast joined #gluster
02:34 juhaj joined #gluster
02:37 johnmark joined #gluster
02:38 micu2 joined #gluster
02:38 FrodeS joined #gluster
02:42 cfeller joined #gluster
02:44 badone joined #gluster
02:44 VerboEse joined #gluster
02:44 msvbhat_` joined #gluster
02:44 JonnyNomad joined #gluster
02:44 purpleidea joined #gluster
02:44 dblack joined #gluster
02:45 lanning joined #gluster
02:48 l0uis joined #gluster
02:48 Humble joined #gluster
02:48 radez_g0n3 joined #gluster
02:49 gdubreui joined #gluster
02:50 nage joined #gluster
02:50 mick27 joined #gluster
02:51 xavih joined #gluster
02:53 divbell joined #gluster
02:54 zerick joined #gluster
03:01 Dga joined #gluster
03:04 jurrien joined #gluster
03:10 divbell joined #gluster
03:11 23LAAZ0A7 joined #gluster
03:11 zapotah joined #gluster
03:12 partner joined #gluster
03:14 nixpanic joined #gluster
03:14 zapotah joined #gluster
03:15 23LAAZ0WZ joined #gluster
03:25 hflai joined #gluster
03:26 social joined #gluster
03:26 stigchristian joined #gluster
03:26 solid_liq joined #gluster
03:26 solid_liq joined #gluster
03:28 bharata-rao joined #gluster
03:30 nage joined #gluster
03:31 s2r2 joined #gluster
03:32 badone joined #gluster
03:32 jezier joined #gluster
03:41 abyss^ joined #gluster
03:57 social joined #gluster
03:58 Dave2 joined #gluster
03:58 mojorison joined #gluster
04:00 iksik joined #gluster
04:05 ron-slc joined #gluster
04:05 slappers joined #gluster
04:05 badone joined #gluster
04:08 wgao_ joined #gluster
04:15 crazifyngers joined #gluster
04:18 juhaj joined #gluster
04:18 partner joined #gluster
04:19 a2 joined #gluster
04:21 iksik joined #gluster
04:22 d-fence_ joined #gluster
04:22 jezier joined #gluster
04:22 social joined #gluster
04:22 a2 joined #gluster
04:22 k4nar joined #gluster
04:22 itisravi joined #gluster
04:22 overclk joined #gluster
04:23 kanagaraj joined #gluster
04:26 Dga joined #gluster
04:28 badone joined #gluster
04:29 eryc joined #gluster
04:29 X3NQ joined #gluster
04:29 mkzero joined #gluster
04:29 Peanut joined #gluster
04:29 iksik joined #gluster
04:29 wgao_ joined #gluster
04:29 s2r2 joined #gluster
04:29 nage joined #gluster
04:29 ujjain joined #gluster
04:29 solid_liq joined #gluster
04:29 jclift joined #gluster
04:29 jurrien joined #gluster
04:29 Amanda joined #gluster
04:29 tziOm joined #gluster
04:29 kkeithley joined #gluster
04:29 cyberbootje joined #gluster
04:29 semiosis joined #gluster
04:30 ron-slc joined #gluster
04:30 twx joined #gluster
04:30 ndarshan joined #gluster
04:31 davinder joined #gluster
04:32 radez_g0n3 joined #gluster
04:32 crazifyngers joined #gluster
04:33 solid_liq joined #gluster
04:33 solid_liq joined #gluster
04:34 edong23 joined #gluster
04:36 eryc joined #gluster
04:36 eryc joined #gluster
04:36 ujjain joined #gluster
04:36 a2 joined #gluster
04:36 edong23 joined #gluster
04:36 Dga joined #gluster
04:37 shubhendu joined #gluster
04:37 social_ joined #gluster
04:37 _NiC joined #gluster
04:40 jiqiren joined #gluster
04:41 purpleidea joined #gluster
04:41 purpleidea joined #gluster
04:41 rastar joined #gluster
04:41 abyss^ joined #gluster
04:43 VerboEse joined #gluster
04:43 mojorison joined #gluster
04:43 social joined #gluster
04:43 l0uis joined #gluster
04:44 rastar joined #gluster
04:44 l0uis joined #gluster
04:45 dork joined #gluster
04:46 badone joined #gluster
04:48 social_ joined #gluster
04:49 morsik joined #gluster
04:50 natgeorg joined #gluster
04:51 rjoseph joined #gluster
04:51 codex_ joined #gluster
04:53 mkzero_ joined #gluster
04:55 jurrien joined #gluster
04:56 wgao_ joined #gluster
04:58 d-fence joined #gluster
04:58 jclift_ joined #gluster
04:59 stigchri1tian joined #gluster
04:59 Slasheri joined #gluster
05:00 harish_ joined #gluster
05:01 Dga joined #gluster
05:03 Peanut joined #gluster
05:04 bharata_ joined #gluster
05:04 VerboEse joined #gluster
05:04 fidevo joined #gluster
05:04 Dave2_ joined #gluster
05:04 45PABBOH5 joined #gluster
05:04 ujjain joined #gluster
05:04 jezier_ joined #gluster
05:04 CheRi joined #gluster
05:04 eryc joined #gluster
05:05 crazifyngers_ joined #gluster
05:05 jurrien joined #gluster
05:06 eryc joined #gluster
05:07 rastar joined #gluster
05:07 codex_ joined #gluster
05:07 abyss^ joined #gluster
05:09 overclk joined #gluster
05:10 stigchri1tian joined #gluster
05:10 jezier joined #gluster
05:11 stigchristian joined #gluster
05:13 a2 joined #gluster
05:14 Amanda joined #gluster
05:14 badone_ joined #gluster
05:14 kanagaraj_ joined #gluster
05:14 eryc joined #gluster
05:14 eryc joined #gluster
05:15 Dave2 joined #gluster
05:15 bala joined #gluster
05:15 tziOm joined #gluster
05:16 bharata-rao joined #gluster
05:16 Slasheri_ joined #gluster
05:16 CheRi joined #gluster
05:16 fidevo joined #gluster
05:19 natgeorg joined #gluster
05:19 natgeorg joined #gluster
05:21 xavih joined #gluster
05:21 dork joined #gluster
05:21 ccha4 joined #gluster
05:23 morsik joined #gluster
05:24 36DACCYK1 joined #gluster
05:25 partner joined #gluster
05:30 saurabh joined #gluster
05:30 shylesh joined #gluster
05:30 k4nar_ joined #gluster
05:32 davinder joined #gluster
05:32 edong23 joined #gluster
05:33 hagarth joined #gluster
05:35 nage joined #gluster
05:35 nage joined #gluster
05:36 crazifyngers joined #gluster
05:38 Dave2_ joined #gluster
05:39 ron-slc joined #gluster
05:48 zapotah joined #gluster
05:50 cfeller joined #gluster
05:52 l0uis joined #gluster
05:56 solid_liq joined #gluster
06:00 Slasheri joined #gluster
06:00 Slasheri joined #gluster
06:00 Dave2 joined #gluster
06:00 solid_liq joined #gluster
06:00 divbell joined #gluster
06:00 jiffe98 joined #gluster
06:00 al joined #gluster
06:00 bfoster joined #gluster
06:00 Humble joined #gluster
06:00 rjoseph joined #gluster
06:00 jclift_ joined #gluster
06:00 shylesh joined #gluster
06:00 hagarth joined #gluster
06:02 abyss^ joined #gluster
06:02 mohankumar joined #gluster
06:03 klaas joined #gluster
06:04 klaas joined #gluster
06:06 fidevo joined #gluster
06:06 ujjain joined #gluster
06:06 StarBeast joined #gluster
06:06 VerboEse joined #gluster
06:06 semiosis joined #gluster
06:06 cfeller joined #gluster
06:06 mojorison joined #gluster
06:06 cyberbootje joined #gluster
06:06 iksik joined #gluster
06:06 jurrien joined #gluster
06:06 Amanda joined #gluster
06:06 bala joined #gluster
06:06 social joined #gluster
06:06 xavih joined #gluster
06:06 itisravi_ joined #gluster
06:06 ppai joined #gluster
06:06 eastz0r_ joined #gluster
06:06 kkeithley joined #gluster
06:06 dork joined #gluster
06:06 jezier joined #gluster
06:06 k4nar joined #gluster
06:06 eryc joined #gluster
06:06 tziOm joined #gluster
06:06 twx joined #gluster
06:06 X3NQ joined #gluster
06:06 crazifyngers joined #gluster
06:06 d-fence joined #gluster
06:06 badone_ joined #gluster
06:06 edong23 joined #gluster
06:06 CheRi joined #gluster
06:06 bharata-rao joined #gluster
06:06 kanagaraj_ joined #gluster
06:06 zerick joined #gluster
06:06 askb joined #gluster
06:06 asku joined #gluster
06:06 pdrakeweb joined #gluster
06:06 hybrid512 joined #gluster
06:06 qdk joined #gluster
06:06 17SAAEX0G joined #gluster
06:06 sarkis joined #gluster
06:06 eastz0r joined #gluster
06:08 juhaj joined #gluster
06:09 45PABBZ7M joined #gluster
06:09 nage joined #gluster
06:09 morsik joined #gluster
06:09 semiosis joined #gluster
06:09 nage joined #gluster
06:09 eryc joined #gluster
06:09 shylesh joined #gluster
06:09 tzi0m joined #gluster
06:10 davinder joined #gluster
06:10 45PABB0S3 joined #gluster
06:10 _VerboEse joined #gluster
06:11 eryc joined #gluster
06:11 spandit joined #gluster
06:11 Peanut joined #gluster
06:11 eastz0r joined #gluster
06:12 klaas joined #gluster
06:12 jurrien_ joined #gluster
06:13 nixpanic joined #gluster
06:13 shylesh joined #gluster
06:13 ujjain joined #gluster
06:14 Dave2 joined #gluster
06:15 FrodeS joined #gluster
06:15 eastz0r joined #gluster
06:17 k4nar joined #gluster
06:17 badone_ joined #gluster
06:19 xavih joined #gluster
06:20 vpshastry joined #gluster
06:21 k4nar_ joined #gluster
06:22 itisravi_ joined #gluster
06:23 mkzero joined #gluster
06:23 l0uis joined #gluster
06:24 eryc joined #gluster
06:24 nixpanic_ joined #gluster
06:25 eastz0r_ joined #gluster
06:25 nixpanic_ joined #gluster
06:26 slappers joined #gluster
06:26 jiqiren joined #gluster
06:26 kdhananjay joined #gluster
06:26 Peanut__ joined #gluster
06:27 benjamin_____ joined #gluster
06:28 Slasheri joined #gluster
06:28 Slasheri joined #gluster
06:29 rjoseph joined #gluster
06:30 eryc joined #gluster
06:30 Amanda joined #gluster
06:31 eryc joined #gluster
06:31 jurrien joined #gluster
06:31 k4nar joined #gluster
06:31 zapotah joined #gluster
06:31 hflai joined #gluster
06:32 StarBeas_ joined #gluster
06:32 ppai joined #gluster
06:33 uebera|| joined #gluster
06:33 overclk joined #gluster
06:34 ujjain joined #gluster
06:35 social joined #gluster
06:35 mojorison joined #gluster
06:35 itisravi joined #gluster
06:36 nage joined #gluster
06:36 nage joined #gluster
06:36 eryc joined #gluster
06:36 eryc joined #gluster
06:36 k4nar joined #gluster
06:36 overclk joined #gluster
06:37 ron-slc joined #gluster
06:38 hflai joined #gluster
06:40 eastz0r joined #gluster
06:40 Peanut joined #gluster
06:42 eryc joined #gluster
06:42 eryc joined #gluster
06:48 Dave2 joined #gluster
06:49 Humble joined #gluster
06:49 semiosis joined #gluster
06:50 45PABB60N joined #gluster
06:51 ujjain joined #gluster
06:54 dork joined #gluster
06:54 eryc_ joined #gluster
06:54 _tziOm joined #gluster
06:54 eastz0r joined #gluster
06:55 juhaj joined #gluster
06:55 mohankumar joined #gluster
06:57 badone_ joined #gluster
06:58 social joined #gluster
06:58 overclk_ joined #gluster
06:59 hchiramm_ joined #gluster
07:00 johnmark joined #gluster
07:02 Dga joined #gluster
07:02 benjamin_____ joined #gluster
07:02 eryc joined #gluster
07:02 eryc joined #gluster
07:02 morsik_ joined #gluster
07:04 ron-slc joined #gluster
07:04 fidevo joined #gluster
07:06 rjoseph joined #gluster
07:07 eryc joined #gluster
07:08 Amanda joined #gluster
07:08 36DACC3DX joined #gluster
07:13 bfoster joined #gluster
07:14 ujjain joined #gluster
07:14 badone_ joined #gluster
07:14 Slasheri joined #gluster
07:14 FrodeS joined #gluster
07:14 zapotah joined #gluster
07:14 zapotah joined #gluster
07:14 ndarshan joined #gluster
07:16 klaas joined #gluster
07:16 codex joined #gluster
07:18 klaas joined #gluster
07:22 davinder joined #gluster
07:22 hagarth joined #gluster
07:22 pk1 joined #gluster
07:24 jurrien joined #gluster
07:24 micu joined #gluster
07:24 morsik joined #gluster
07:25 glusterbot New news from resolvedglusterbugs: [Bug 862082] build cleanup <https://bugzilla.redhat.com/show_bug.cgi?id=862082>
07:26 micu1 joined #gluster
07:32 partner joined #gluster
07:34 kdhananjay joined #gluster
07:34 ccha4 joined #gluster
07:38 davinder joined #gluster
07:41 johnmark joined #gluster
07:41 divbell joined #gluster
07:41 jiffe98 joined #gluster
07:41 al joined #gluster
07:42 ccha5 joined #gluster
07:43 mojorison joined #gluster
07:43 jezier joined #gluster
07:47 Philambdo joined #gluster
07:47 morsik joined #gluster
07:47 _tziOm joined #gluster
07:48 codex joined #gluster
07:48 harish joined #gluster
07:48 iksik_ joined #gluster
07:48 badone_ joined #gluster
07:51 ccha joined #gluster
07:51 jezier joined #gluster
07:51 kdhananjay joined #gluster
07:55 glusterbot New news from newglusterbugs: [Bug 1021686] refactor AFR module <https://bugzilla.redhat.com/show_bug.cgi?id=1021686>
07:59 bfoster joined #gluster
08:04 kdhananjay joined #gluster
08:04 iksik joined #gluster
08:04 morsik joined #gluster
08:04 itisravi joined #gluster
08:09 benjamin_____ joined #gluster
08:15 harish joined #gluster
08:15 mojorison joined #gluster
08:23 Slasheri joined #gluster
08:24 Philambdo joined #gluster
08:24 keytab joined #gluster
08:24 ujjain joined #gluster
08:24 dork joined #gluster
08:24 andreask joined #gluster
08:24 45PABCE6D joined #gluster
08:24 eryc joined #gluster
08:24 pk1 joined #gluster
08:24 hagarth joined #gluster
08:24 rjoseph joined #gluster
08:24 45PABB60N joined #gluster
08:24 kkeithley joined #gluster
08:24 bala joined #gluster
08:24 cyberbootje joined #gluster
08:24 itisravi joined #gluster
08:27 jtux joined #gluster
08:29 partner joined #gluster
08:30 badone_ joined #gluster
08:30 micu1 joined #gluster
08:30 16WAALQWH joined #gluster
08:30 ndarshan joined #gluster
08:30 zapotah joined #gluster
08:30 Dga joined #gluster
08:30 hchiramm_ joined #gluster
08:30 overclk_ joined #gluster
08:30 mohankumar joined #gluster
08:30 juhaj joined #gluster
08:30 hflai joined #gluster
08:30 StarBeas_ joined #gluster
08:30 jiqiren joined #gluster
08:30 xavih joined #gluster
08:30 abyss^ joined #gluster
08:30 solid_liq joined #gluster
08:30 stigchristian joined #gluster
08:30 rastar joined #gluster
08:30 jclift_ joined #gluster
08:30 purpleidea joined #gluster
08:32 tziOm joined #gluster
08:32 ujjain joined #gluster
08:32 ron-slc joined #gluster
08:32 45PABCKXI joined #gluster
08:32 twx joined #gluster
08:32 benjamin joined #gluster
08:35 hybrid512 joined #gluster
08:37 16WAALS5E joined #gluster
08:37 bfoster joined #gluster
08:37 badone_ joined #gluster
08:37 micu1 joined #gluster
08:37 16WAALQWH joined #gluster
08:37 ndarshan joined #gluster
08:37 zapotah joined #gluster
08:37 Dga joined #gluster
08:37 hchiramm_ joined #gluster
08:37 overclk_ joined #gluster
08:37 mohankumar joined #gluster
08:37 juhaj joined #gluster
08:37 hflai joined #gluster
08:37 StarBeas_ joined #gluster
08:37 jiqiren joined #gluster
08:37 xavih joined #gluster
08:37 abyss^ joined #gluster
08:37 solid_liq joined #gluster
08:37 stigchristian joined #gluster
08:37 rastar joined #gluster
08:37 jclift_ joined #gluster
08:37 purpleidea joined #gluster
08:39 rjoseph1 joined #gluster
08:39 hflai joined #gluster
08:39 solid_liq joined #gluster
08:39 solid_liq joined #gluster
08:40 StarBeast joined #gluster
08:41 ron-slc joined #gluster
08:42 eryc joined #gluster
08:42 johnmark joined #gluster
08:42 divbell joined #gluster
08:42 jiffe98 joined #gluster
08:42 al joined #gluster
08:45 purpleidea joined #gluster
08:45 purpleidea joined #gluster
08:46 Dga joined #gluster
08:47 spandit_ joined #gluster
08:48 jezier joined #gluster
08:48 mojorison joined #gluster
08:50 Peanut joined #gluster
08:51 iksik joined #gluster
08:54 FrodeS joined #gluster
08:56 glusterbot New news from newglusterbugs: [Bug 1060654] Regression test failure while mounting nfs <https://bugzilla.redhat.com/show_bug.cgi?id=1060654>
08:56 d-fence joined #gluster
08:56 micu1 joined #gluster
08:56 16WAALQWH joined #gluster
08:56 ndarshan joined #gluster
08:56 hchiramm_ joined #gluster
08:56 overclk_ joined #gluster
08:56 juhaj joined #gluster
08:56 xavih joined #gluster
08:56 abyss^ joined #gluster
08:56 stigchristian joined #gluster
08:56 rastar joined #gluster
08:56 jclift_ joined #gluster
08:56 CheRi_ joined #gluster
09:00 prasanth joined #gluster
09:00 zapotah joined #gluster
09:00 zapotah joined #gluster
09:01 jiqiren joined #gluster
09:01 Dave2_ joined #gluster
09:02 twx joined #gluster
09:03 ron-slc_ joined #gluster
09:03 nixpanic joined #gluster
09:03 ujjain joined #gluster
09:03 45PABCORS joined #gluster
09:03 ujjain joined #gluster
09:03 ron-slc joined #gluster
09:03 mohankumar joined #gluster
09:03 kdhananjay1 joined #gluster
09:03 mbukatov joined #gluster
09:03 Amanda_ joined #gluster
09:03 mkzero joined #gluster
09:03 k4nar joined #gluster
09:03 badone_ joined #gluster
09:03 eryc joined #gluster
09:03 social_ joined #gluster
09:04 ujjain joined #gluster
09:04 eryc joined #gluster
09:06 k4nar joined #gluster
09:06 blook2nd joined #gluster
09:06 Dga joined #gluster
09:06 edong23 joined #gluster
09:06 semiosis joined #gluster
09:06 kkeithley joined #gluster
09:06 blook joined #gluster
09:06 morsik joined #gluster
09:06 ekuric joined #gluster
09:06 Peanut joined #gluster
09:07 kanagaraj joined #gluster
09:07 vkoppad joined #gluster
09:08 badone_ joined #gluster
09:09 Philambdo1 joined #gluster
09:09 bfoster joined #gluster
09:09 codex joined #gluster
09:10 mick27 joined #gluster
09:10 blook3rd joined #gluster
09:10 sweedey joined #gluster
09:11 edong23_ joined #gluster
09:12 l0uis joined #gluster
09:14 uebera|| joined #gluster
09:14 uebera|| joined #gluster
09:14 marbu joined #gluster
09:16 kanagaraj joined #gluster
09:16 Amanda joined #gluster
09:17 cyberbootje1 joined #gluster
09:17 jezier joined #gluster
09:18 andreask joined #gluster
09:18 Amanda_ joined #gluster
09:18 eryc joined #gluster
09:18 mick27 joined #gluster
09:18 eryc joined #gluster
09:18 mkzero joined #gluster
09:19 ujjain joined #gluster
09:19 nixpanic joined #gluster
09:19 morsik_ joined #gluster
09:19 nixpanic joined #gluster
09:20 prasanth joined #gluster
09:21 ujjain joined #gluster
09:22 eryc joined #gluster
09:22 eryc joined #gluster
09:23 mkzero_ joined #gluster
09:23 masterzen joined #gluster
09:23 delhage joined #gluster
09:23 delhage joined #gluster
09:24 johnmwilliams_ joined #gluster
09:24 Dga joined #gluster
09:26 johnmark joined #gluster
09:26 divbell joined #gluster
09:26 jiffe98 joined #gluster
09:26 al joined #gluster
09:27 slappers joined #gluster
09:29 bala1 joined #gluster
09:29 eryc joined #gluster
09:29 vpshastry joined #gluster
09:29 eryc joined #gluster
09:29 marcoceppi joined #gluster
09:29 marcoceppi joined #gluster
09:33 social joined #gluster
09:34 prasanth joined #gluster
09:34 masterzen joined #gluster
09:41 ekuric joined #gluster
09:41 Dga joined #gluster
09:43 dblack joined #gluster
09:44 andreask joined #gluster
09:47 kanagaraj joined #gluster
09:49 shubhendu joined #gluster
09:50 ujjain joined #gluster
09:50 eryc joined #gluster
09:50 eryc joined #gluster
10:05 kdhananjay joined #gluster
10:05 andrewklau joined #gluster
10:05 prasanth joined #gluster
10:05 badone_ joined #gluster
10:05 partner joined #gluster
10:05 dork joined #gluster
10:05 nixpanic joined #gluster
10:05 cyberbootje1 joined #gluster
10:05 stickyboy joined #gluster
10:05 k4nar joined #gluster
10:05 andrewklau joined #gluster
10:05 semiosis joined #gluster
10:05 kkeithley joined #gluster
10:05 dork joined #gluster
10:05 semiosis joined #gluster
10:05 zmotok joined #gluster
10:05 nixpanic joined #gluster
10:06 nixpanic joined #gluster
10:07 zmotok hello everyone, I'm having the following problem (talked ~12h ago here about it for a bit): have upgraded glusterfs from 3.4.1 to 3.4.2 (Scientific Linux 6.1 amd64) and since then the bricks aren't coming online, glusterd isn't spawning glusterfsd processes; downgrading to 3.4.1 doesn't solve the problem, any ideas what I could try/check?
10:08 andrewklau Is there a config to have the gluster NFS server start on boot? I'm having this weird issue where, in a two-node replica (one node temporarily offline), the NFS server will show as Online: N. It'll only start up if I make a change to an option or the second host comes up. It's mountable through glusterfs though.
10:08 badone_ joined #gluster
10:10 saltsa joined #gluster
10:46 ilbot3 joined #gluster
10:46 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
10:47 samppah try to set 0x in the beginning
10:48 zmotok it's setting a different value
10:48 zmotok trusted.glusterfs.volume-id=0sAnYdSCMoR4eH2UpW3FVLhw==
10:48 FrodeS dump it with hex-encoding and you get the same as you put in
10:49 zmotok FrodeS: ah, yes, I see it
10:49 zmotok damn
10:49 zmotok the volume started now
10:50 zmotok ok, I'll redo the same for the other volumes/mountpoints..
10:50 zmotok bits crossed
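For anyone hitting the same symptom, a rough sketch of the xattr repair discussed above, assuming the getfattr/setfattr tools from the attr package; the brick path and hex value are placeholders:

    # dump the volume-id from a known-good brick, hex-encoded
    getfattr -n trusted.glusterfs.volume-id -e hex /bricks/brick1

    # write the same value onto the brick that lost it; the 0x prefix tells
    # setfattr the value is hex (0s would mean base64, as in the paste above)
    setfattr -n trusted.glusterfs.volume-id -v 0x<hex-value-from-good-brick> /bricks/brick1

The 0s prefix in the earlier paste is getfattr's base64 encoding; dumping with -e hex shows the same bytes in the 0x form that setfattr expects.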
10:52 xymox joined #gluster
10:52 Rydekull joined #gluster
10:53 Krikke joined #gluster
10:54 jporterfield joined #gluster
10:55 zmotok dear samppah and FrodeS, your suggestions worked, thank you very much!
10:56 tg2 joined #gluster
10:56 FrodeS cool
10:56 zmotok trying to mount one of the volumes on a client now
10:57 DV joined #gluster
10:57 edoceo joined #gluster
11:01 ells joined #gluster
11:04 jclift joined #gluster
11:04 partner joined #gluster
11:04 kkeithley joined #gluster
11:07 FrodeS while we're at it - I have another issue that might be a bit harder to resolve. we're building a replicated gluster setup on rhel6.5 with gluster 3.4.2 focused on HA, meaning that failover times etc should be pretty low
11:07 FrodeS hard rebooting a node, firewalling it, dropping network connections etc are handled pretty fine within ~10 seconds (which is the ping-timeout we've set)
11:08 FrodeS the clients can cope with that
11:08 FrodeS however, if I freeze IO on one of the bricks (out of two), nothing happens - it just waits for an unfreeze
11:09 FrodeS we freeze the device with dmsetup suspend to simulate a failure on the underlying IO system - issues we've seen on virtual nodes where the underlying storage system has gone fubar
11:10 FrodeS if course, the ideal situation would be that the replica would be kicked out just as when we just hard reboot it
11:10 FrodeS s/if/of/
11:10 glusterbot What FrodeS meant to say was: of course, the ideal situation would be that the replica would be kicked out just as when we just hard reboot it
11:11 semiosis joined #gluster
11:12 portante joined #gluster
11:12 FrodeS we've tried to debug it a bit and it seems to get stuck on some locks; eventually it runs out of IO threads
11:13 FrodeS linux-aio/posix-aio seems to handle it - but we have serious performance issues with it - from 250-300MB/s without linux-aio down to ~35-60MB/s with it on
11:13 crazifyngers joined #gluster
11:13 FrodeS any suggestions?
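A minimal sketch of the failure-injection test described above, assuming a device-mapper backed brick; the device and volume names are placeholders:

    # freeze all I/O on the brick's underlying device to simulate a hung
    # storage backend (device name is a placeholder)
    dmsetup suspend /dev/mapper/vg0-brick1
    # ... drive client I/O against the volume and watch whether it stalls ...
    dmsetup resume /dev/mapper/vg0-brick1

    # the ~10 second failover window mentioned above is the client-side ping
    # timeout (volume name is a placeholder)
    gluster volume set myvol network.ping-timeout 10

Note that network.ping-timeout only covers failures the client can see at the network level; a brick whose disk hangs while glusterfsd keeps responding is exactly the case described above that it does not catch.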
11:14 vpshastry1 joined #gluster
11:14 prasanth joined #gluster
11:14 atrius` joined #gluster
11:21 23LAA1JY8 joined #gluster
11:22 brosner joined #gluster
11:22 sticky_afk joined #gluster
11:23 morse joined #gluster
11:24 swaT30 joined #gluster
11:34 JonathanD joined #gluster
11:34 kdhananjay joined #gluster
11:34 nshaikh joined #gluster
11:34 T0aD joined #gluster
11:35 dblack joined #gluster
11:35 eightyeight joined #gluster
11:35 zapotah joined #gluster
11:35 semiosis joined #gluster
11:35 sac`away joined #gluster
11:35 davinder joined #gluster
11:37 k4nar joined #gluster
11:44 sac`away joined #gluster
11:48 DV joined #gluster
11:49 xymox joined #gluster
11:50 xymox joined #gluster
11:50 rjoseph joined #gluster
11:53 davinder joined #gluster
11:53 ninkotech joined #gluster
11:53 k4nar_ joined #gluster
11:53 T0aD joined #gluster
11:55 T0aD- joined #gluster
11:56 glusterbot New news from resolvedglusterbugs: [Bug 1058185] Build from glusterfs.spec fails on EPEL-5 because of BuildRequires lvm2-devel <https://bugzilla.redhat.com/show_bug.cgi?id=1058185>
12:20 Shri joined #gluster
12:21 edong23 joined #gluster
12:21 k4nar joined #gluster
12:21 sac`away joined #gluster
12:21 zapotah joined #gluster
12:21 zapotah joined #gluster
12:21 dblack joined #gluster
12:22 jporterfield joined #gluster
12:22 itisravi joined #gluster
12:22 Kins joined #gluster
12:23 kkeithley joined #gluster
12:23 davinder joined #gluster
12:23 Shri left #gluster
12:23 portante joined #gluster
12:24 rjoseph joined #gluster
12:26 semiosis joined #gluster
12:26 JonathanD joined #gluster
12:26 stickyboy joined #gluster
12:26 PatNarciso joined #gluster
12:26 23LAA1W6D joined #gluster
12:26 atrius` joined #gluster
12:26 ninkotech joined #gluster
12:26 23LAA1WSE joined #gluster
12:26 xymox joined #gluster
12:26 glusterbot New news from newglusterbugs: [Bug 1060703] client_t calls __sync_sub_and_fetch and causes link failures on EPEL-5-i386 <https://bugzilla.redhat.com/show_bug.cgi?id=1060703>
12:27 T0aD joined #gluster
12:27 eightyeight joined #gluster
12:27 eightyeight joined #gluster
12:28 saltsa joined #gluster
12:32 xymox joined #gluster
12:32 semiosis joined #gluster
12:32 brosner joined #gluster
12:33 ctria joined #gluster
12:36 kkeithley joined #gluster
12:38 Guest13436 joined #gluster
12:39 semiosis joined #gluster
12:46 xymox joined #gluster
12:46 xymox joined #gluster
12:47 mattapperson joined #gluster
12:47 andreask joined #gluster
12:48 andreask joined #gluster
12:49 l0uis joined #gluster
12:54 hagarth joined #gluster
12:55 xymox joined #gluster
12:55 xymox joined #gluster
13:01 jtux joined #gluster
13:03 andrewklau left #gluster
13:08 flrichar joined #gluster
13:14 marcoceppi joined #gluster
13:14 verdurin joined #gluster
13:14 tjikkun_work_ joined #gluster
13:14 VeggieMeat_ joined #gluster
13:14 fyxim joined #gluster
13:21 social when I have a failed heal on a gfid, where should I look first?
13:24 Ark_explorys joined #gluster
13:26 itisravi_ joined #gluster
13:28 atoponce joined #gluster
13:29 atoponce joined #gluster
13:33 espanhol joined #gluster
13:33 espanhol hello
13:34 mattapperson joined #gluster
13:34 haomaiwang joined #gluster
13:34 espanhol anyone from Weimar?
13:34 ninkotech_ joined #gluster
13:34 ninkotech__ joined #gluster
13:43 mattappe_ joined #gluster
13:43 rcaskey joined #gluster
13:51 rcaskey hey all, i've got 1-2TB i'd like georeplicated for DR, any suggestions on VM hosting options? It would also be nice if they had discount bandwidth so that I can feasibly keep 'offline' backups in glacier.
13:52 ira joined #gluster
13:52 ninkotech_ joined #gluster
13:52 T0aD joined #gluster
13:52 cyberbootje joined #gluster
13:52 dork joined #gluster
13:52 swaT30 joined #gluster
13:52 morse joined #gluster
13:57 glusterbot New news from newglusterbugs: [Bug 1035586] gluster volume status shows incorrect information for brick process <https://bugzilla.redhat.com/show_bug.cgi?id=1035586>
14:01 ^rcaskey joined #gluster
14:01 radez joined #gluster
14:02 T0aD joined #gluster
14:02 Ark_expl_ joined #gluster
14:04 edward2 joined #gluster
14:04 ninkotech joined #gluster
14:05 atrius` joined #gluster
14:05 semiosis_ joined #gluster
14:07 bennyturns joined #gluster
14:08 hchiramm_ joined #gluster
14:08 vkoppad joined #gluster
14:08 bfoster joined #gluster
14:08 marbu joined #gluster
14:08 johnmwilliams_ joined #gluster
14:08 ekuric|mtg joined #gluster
14:08 lkoranda joined #gluster
14:08 sac`away joined #gluster
14:08 dblack joined #gluster
14:08 portante joined #gluster
14:08 kkeithley joined #gluster
14:08 ira joined #gluster
14:08 radez joined #gluster
14:08 edward2 joined #gluster
14:11 13WABR6J7 joined #gluster
14:12 pat_ joined #gluster
14:12 ninkotech__ joined #gluster
14:12 swaT30_ joined #gluster
14:13 xymox joined #gluster
14:13 mattapp__ joined #gluster
14:14 ira joined #gluster
14:19 xymox_ joined #gluster
14:19 JonathanS joined #gluster
14:20 ninkotech_ joined #gluster
14:22 haomaiwang joined #gluster
14:22 stickyboy joined #gluster
14:24 brosner joined #gluster
14:25 ninkotech joined #gluster
14:26 theron joined #gluster
14:27 prasanth joined #gluster
14:27 robo joined #gluster
14:28 B21956 joined #gluster
14:28 ira joined #gluster
14:29 swaT30 joined #gluster
14:30 xymox joined #gluster
14:31 theron_ joined #gluster
14:31 xymox_ joined #gluster
14:34 semiosis joined #gluster
14:40 PatNarciso joined #gluster
14:42 atrius_ joined #gluster
14:43 xymox joined #gluster
14:51 semiosis joined #gluster
14:52 dneary joined #gluster
14:58 jmarley joined #gluster
14:59 jobewan joined #gluster
15:01 dneary joined #gluster
15:02 ninkotech joined #gluster
15:03 ninkotech_ joined #gluster
15:03 tru_tru joined #gluster
15:04 theron joined #gluster
15:05 nullck joined #gluster
15:06 haomaiwa_ joined #gluster
15:06 theron joined #gluster
15:06 rcaskey joined #gluster
15:06 dbruhn joined #gluster
15:06 saltsa joined #gluster
15:06 T0aD joined #gluster
15:06 B21956 joined #gluster
15:06 cyberbootje joined #gluster
15:06 jobewan joined #gluster
15:07 brosner joined #gluster
15:07 ira joined #gluster
15:07 ctria joined #gluster
15:08 zaitcev joined #gluster
15:09 semiosis joined #gluster
15:09 sticky_afk joined #gluster
15:17 ninkotech__ joined #gluster
15:18 sarkis joined #gluster
15:18 sarkis purpleidea: you around?
15:18 edong23_ joined #gluster
15:19 purpleidea sarkis: hi
15:20 saltsa joined #gluster
15:20 robo joined #gluster
15:20 johnmilton joined #gluster
15:21 dbruhn freenode being weird for everyone else this morning too?
15:21 purpleidea dbruhn: yeah
15:21 purpleidea probably a ddos
15:21 sarkis purpleidea: hey, so trying to set this up.. should i be specifying the "client" mountpoint with your puppet module or does it not support it... i.e. i have a brick at /opt/some/where/here and the clients should mount that to /etc/puppet using glusterfs filesystem
15:21 sarkis dbruhn: ya it just kicked me
15:21 sarkis and then reconnect was super slow
15:22 purpleidea sarkis: supported!
15:22 jmarley joined #gluster
15:22 dbruhn it took like 15 attempted connects before I got anywhere
15:22 sarkis ok ill search more :)
15:22 purpleidea (... and a cool new feature for this will be landing soon, i think...)
15:22 sarkis thanks
15:22 purpleidea sarkis: examples/ folder
15:22 sarkis oh duh, mount-example :)
15:22 purpleidea if it's not clear or something doesn't work, ping me
15:23 sarkis ah perfect, example is awesome
15:23 purpleidea great!
15:23 sarkis i think the only thing i am trying to wrap my head around is the distributed-replicate example, i have to specify all the bricks on all servers?
15:23 kshlm purpleidea: quick update. vagrant-puppet-gluster works fine with vagrant 1.4.3 with the latest vagrant-libvirt 0.0.15. The command line options don't work though.
15:24 purpleidea the one thing that this doesn't do (which isn't really possible anywhere in puppet yet actually) is check that the volume is up and working before mounting it... this feature will come soon
15:24 purpleidea sarkis: gluster::simple does this for you...
15:24 purpleidea kshlm: oh cool! what happens with the --gluster options?
15:25 purpleidea sarkis: but you can also specify the gluster specifics manually if you want to...
15:26 sarkis wait what is gluster simple doing?
15:26 purpleidea sarkis: okay, tell me when i lose you....
15:26 kshlm they're rejected as invalid options. so I edit the yaml file directly.
15:26 sarkis reading manifest while you tell me ;)
15:27 purpleidea kshlm: can you post the output/errors ?
15:27 purpleidea sarkis: for any gluster cluster (unrelated to puppet) you need hosts, bricks, volumes, and so on...
15:28 purpleidea sarkis: my puppet module defines types for each of these logical entities in puppet-gluster...
15:28 purpleidea sarkis: now, a sysadmin might choose to set each and every one of these manually in their puppet definition...
15:29 purpleidea sarkis: if you have a four host cluster with 2 bricks each, that means 4xhost type def's, 8xbrick def's, 1xserver class, 1xvolume def... and so on...
15:29 kshlm purpleidea: http://ix.io/air , for vagrant up --gluster-cachier=true puppet
15:29 purpleidea sarkis: it's good that it exists this way so that you can have the choice to set what you want....
15:30 purpleidea sarkis: but if you don't want to have to figure all that out yourself, gluster::simple will automatically generate all of that for you from a few simple lines of puppet code.
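A hedged sketch of the contrast being described; gluster::simple and the type names (host, brick, volume, mount) come from the conversation, but the exact parameter names and signatures below are assumptions to be checked against the module's examples/ folder:

    # the "simple" route: one class, and the module generates the host,
    # brick and volume definitions for you (parameter names assumed)
    class { '::gluster::simple':
      replica => 2,
      volume  => ['puppet'],
    }

    # the manual route: declare each logical entity yourself
    gluster::host { 'annex1.example.com': }
    gluster::host { 'annex2.example.com': }
    gluster::brick { 'annex1.example.com:/data/puppet': }
    gluster::brick { 'annex2.example.com:/data/puppet': }
    gluster::volume { 'puppet':
      replica => 2,
      bricks  => [
        'annex1.example.com:/data/puppet',
        'annex2.example.com:/data/puppet',
      ],
    }

    # client side, per the mount-example referenced earlier (names assumed)
    gluster::mount { '/etc/puppet':
      server => 'annex1.example.com:/puppet',
    }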
15:31 purpleidea kshlm: ah, like that... hmmm something must have changed in core... usually the --gluster options are removed by ruby before vagrant gets a chance to see them...
15:32 purpleidea kshlm: i'll test this when i get 1.4.3 installed... for now, glad you're able to workaround with the puppet-gluster.yaml file. any other issues?
15:32 purpleidea sarkis: does all that make sense now?
15:35 kshlm No issues for now. I'm getting the hang of puppet now.
15:35 kshlm BTW, I'll send out a patch for vagrant-cachier 0.5.1 for mount options support when I find some time.
15:36 sarkis oh ya
15:36 purpleidea kshlm: cool! yeah the dev eventually added a more elaborate mechanism for the option i needed...
15:36 sarkis purpleidea: yea unfortunately i know the ins and outs of gluster (at least way more than before) and am pretty good with puppet as it's my day job
15:36 purpleidea kshlm: i've actually disabled it by default because he wasn't super responsive, but i found another issue:
15:37 zaitcev_ joined #gluster
15:37 sarkis the thing that confuses me though, distributed-replicate example.. that entire manifest would be run on all the nodes.. i guess your module is smart enough to say ok this volume is already created, don't do anything or s/volume/brick
15:37 purpleidea kshlm: if you try out a beta like 3.5.0*, and then you destroy and rebuild the whole thing to do a 'latest version', then it will still use the 3.5.x rpms, because it sees them as latest in the cache... so remember to clear it first...
15:38 purpleidea sarkis: maybe this will explain how it's done: https://ttboj.wordpress.com/2012/08/23/how-to-avoid-cluster-race-conditions-or-how-to-implement-a-distributed-lock-manager-in-puppet/
15:39 purpleidea sarkis: and yes, the module is smart about figuring all this stuff out... that's why it's a very complicated module... not for the faint of heart :)
15:39 purpleidea but it does what's necessary to automate it all
15:39 sarkis that is what i needed! you rock.
15:39 purpleidea yw
15:39 sarkis even write a blog post explaining everything heh
15:39 purpleidea ,,(next)
15:39 glusterbot Another satisfied customer... NEXT!
15:39 purpleidea ;)
15:39 sarkis haha you guys got that from archlinux?
15:39 purpleidea sarkis: i got the idea from #git
15:40 purpleidea after i was a satisfied customer
15:40 sarkis oh cool :)
15:40 purpleidea maybe they got it from #arch or viceverse
15:40 sarkis yea who knows, was hoping there are arch users lol
15:40 ^rcaskey joined #gluster
15:40 purpleidea s/viceverse/vice-versa/
15:40 glusterbot What purpleidea meant to say was: maybe they got it from #arch or vice-versa
15:40 ninkotech__ joined #gluster
15:40 cyberbootje joined #gluster
15:40 robo joined #gluster
15:40 brosner joined #gluster
15:41 kaptk2 joined #gluster
15:41 nullck joined #gluster
15:41 sarkis nice feature
15:41 dork joined #gluster
15:41 sarkis s/nice/great
15:41 purpleidea trailing slash needed
15:41 sarkis s/nice/great/
15:41 glusterbot What sarkis meant to say was: great feature
15:41 purpleidea you know, like sed
15:42 Gluster joined #gluster
15:43 zerick joined #gluster
15:45 sarkis haha
15:45 johnmilton joined #gluster
15:45 sarkis ok this is great cause we already use keepalived, i have a vip for the puppetmaster cluster
15:46 sarkis and looks like if i specify vip =>, your module makes sure to do the one time things there.. perfect
15:46 purpleidea sarkis: yep
15:47 purpleidea sarkis: if you use vrrp => true, it will manage keepalived for you; with vrrp => false, you do it yourself
15:48 theron joined #gluster
15:48 bugs_ joined #gluster
15:48 bennyturns joined #gluster
15:48 B21956 joined #gluster
15:49 sarkis ya we're already managing it.. so i may false it
15:50 theron_ joined #gluster
15:50 purpleidea sarkis: see! puppet-gluster lets you do everything. if you add beverage => coffee, it will brew you a cup while you're waiting for the cluster to build
15:51 sarkis hahaha
15:51 purpleidea also {water,milk} are supported
15:51 dbruhn joined #gluster
15:54 Gluster joined #gluster
15:56 jbrooks joined #gluster
16:02 Ark_explorys joined #gluster
16:04 s2r2 joined #gluster
16:05 Technicool joined #gluster
16:06 T0aD joined #gluster
16:09 robo joined #gluster
16:11 johnmark joined #gluster
16:11 divbell joined #gluster
16:11 jiffe98 joined #gluster
16:11 al joined #gluster
16:13 zerick joined #gluster
16:13 Gluster left #gluster
16:27 wushudoin joined #gluster
16:34 k4nar_ joined #gluster
16:35 l0uis_ joined #gluster
16:40 LoudNoises joined #gluster
16:51 Rydekull joined #gluster
16:52 jporterfield joined #gluster
16:53 benjamin_____ joined #gluster
16:57 nullck joined #gluster
16:58 kaptk2 joined #gluster
17:00 radez joined #gluster
17:00 rcaskey joined #gluster
17:00 portante joined #gluster
17:00 theron joined #gluster
17:00 bennyturns joined #gluster
17:00 brosner joined #gluster
17:00 kl4m joined #gluster
17:00 ninkotech__ joined #gluster
17:00 saltsa joined #gluster
17:00 Ark_explorys joined #gluster
17:00 haomaiwa_ joined #gluster
17:00 jmarley joined #gluster
17:00 T0aD joined #gluster
17:00 jmarley joined #gluster
17:00 wushudoin joined #gluster
17:00 cyberbootje joined #gluster
17:01 tru_tru_ joined #gluster
17:01 johnmilton joined #gluster
17:02 zerick joined #gluster
17:02 dbruhn joined #gluster
17:06 plarsen joined #gluster
17:06 hagarth joined #gluster
17:08 zerick joined #gluster
17:10 Guest21219 joined #gluster
17:12 plarsen joined #gluster
17:12 nage joined #gluster
17:12 isuckatgluster joined #gluster
17:14 isuckatgluster left #gluster
17:15 Leon joined #gluster
17:15 dork joined #gluster
17:16 LeonSandcastle joined #gluster
17:17 jbrooks Hey guys, is there going to be another test day / weekend before 3.5 is released?
17:17 zapotah joined #gluster
17:18 partner joined #gluster
17:18 flrichar joined #gluster
17:20 hagarth jbrooks: yes, after beta3 is out sometime this week
17:20 jbrooks hagarth: Cool, thanks -- I want to mention it in a blog post I'm writing. You think it'll be this weekend?
17:22 nage joined #gluster
17:22 hagarth jbrooks: in all probability
17:22 jbrooks :) thanks
17:23 hagarth jbrooks: might be useful to mention that an announcement will follow on gluster mailing lists
17:23 jbrooks hagarth: Will do
17:25 tdasilva joined #gluster
17:25 LeonSandcastle Hello all.  Need a little help as I'm perplexed.  I am having a problem reattaching an out of date node to my pool.  Here is the fpaste.org output of gluster peer status and volume info: http://fpaste.org/73986/48312139/
17:25 glusterbot Title: #73986 Fedora Project Pastebin (at fpaste.org)
17:25 wushudoin left #gluster
17:25 LeonSandcastle 1c is the out of date node that I'm trying to re-attach.
17:27 dbruhn LeonSandcastle, what does "gluster peer status" return? and what do you mean by "out of date"
17:27 tziOm joined #gluster
17:28 cfeller joined #gluster
17:28 LeonSandcastle It was disconnected a month ago and reattached today.  I have to run.  I'll be back in a bit and re-ask.  Thanks.
17:30 Mo__ joined #gluster
17:31 MacWinner joined #gluster
17:32 zapotah joined #gluster
17:32 sarkis purpleidea: last question i promise.. so in the beginning i bring up the puppet masters 1 by 1, is there a way to edit the replica count
17:33 sarkis i'm checking out gluster::volume to see if it modifies the replica count if i change that later
17:35 sarkis doesnt look like it from the manifest :/
17:37 sarkis i guess technically that should be a feature of the gluster::brick definition as via the command line... i modify the replica # upon new brick creation
17:41 DV joined #gluster
17:42 kl4m joined #gluster
17:55 davinder joined #gluster
18:07 LeonSandcastle Sorry I had to leave quickly earlier.  Summary: trying to reattach a node, prod-1c, to my pool that only includes prod-1a.  Prod-1c was disconnected a month ago.  I am getting a peer status of "Accepted peer request (Disconnected)" for prod-1a when I run gluster peer status on prod-1c.  Here is the fpaste.org output of gluster peer status and volume info: http://fpaste.org/73986/48312139/
18:07 glusterbot Title: #73986 Fedora Project Pastebin (at fpaste.org)
18:12 dbruhn LeonSandcastle, what happened to cause it to disconnect? Also are the services running?
18:12 LeonSandcastle IP of the server changed.
18:13 LeonSandcastle glusterd is running on both servers.
18:13 semiosis LeonSandcastle: try restarting glusterd on both
18:13 dbruhn You should be able to re-probe the server from the running server
18:13 semiosis LeonSandcastle: also please ,,(pasteinfo)
18:13 glusterbot LeonSandcastle: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
18:14 LeonSandcastle http://fpaste.org/73986/48312139/
18:14 glusterbot Title: #73986 Fedora Project Pastebin (at fpaste.org)
18:14 dbruhn is your DNS resolution working properly between the servers?
18:15 LeonSandcastle DNS seems to be good.
18:17 Humble joined #gluster
18:17 LeonSandcastle When I try "gluster peer probe prod-1c" on prod-1a I get this message:  "Probe on host prod-1c port 24007 already in peer list"
18:17 dbruhn well that means it's talking at least
18:18 semiosis LeonSandcastle: did you restart glusterd on both?
18:18 purpleidea sarkis: just running out, bbl, but --gluster-replica=2 on the vagrant up puppet command.
18:19 LeonSandcastle Yes I restarted glusterd on both instances.
18:19 semiosis LeonSandcastle: iptables
18:19 semiosis ,,(ports)
18:19 glusterbot glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
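A quick sketch of connectivity checks matching the port list above; the hostnames are the ones from this conversation, and the brick-port range to test depends on the GlusterFS version (24009 and up before 3.4, 49152 and up from 3.4):

    # is glusterd listening on the management port?
    netstat -tlnp | grep 24007

    # can each peer reach the other one on it?
    telnet prod-1c 24007        # run from prod-1a
    telnet prod-1a 24007        # run from prod-1c

    # list any iptables rules that might be dropping the traffic
    iptables -L -n -v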
18:20 sac`away joined #gluster
18:20 B21956 joined #gluster
18:22 semiosis @seen JoeJulian
18:22 glusterbot semiosis: JoeJulian was last seen in #gluster 4 days, 21 hours, 2 minutes, and 52 seconds ago: <JoeJulian> rpc-auth-allow-insecure... "it's not 1980"
18:24 JonathanD joined #gluster
18:26 vpshastry joined #gluster
18:29 LeonSandcastle @semiosis I see established connections between the two servers on 24007 and 24009.
18:30 semiosis LeonSandcastle: try probing both ways again, then restarting glusterd on both hosts again.
18:30 semiosis brb, rebooting
18:32 failshell joined #gluster
18:33 failshell looking at my georeplication status, it's doing  a 'hybrid crawl'. what is that exactly?
18:35 failshell i see rsync running. i was under the impression it didnt use rsync anymore
18:35 failshell with 3.4
18:36 LeonSandcastle semiosis: I tried probing both ways again.. both times it said the other was already in the peer list.  Restarted and peer statuses are the same:  Accepted peer request (Disconnected) on one and "Sent and Received peer request (Connected)" on the other.
18:41 semiosis LeonSandcastle: check /var/log/glusterfs/etc-glusterfs-glusterd.log on both.  i'm sure this must be frustrating, but every time i've run into this it was resolved by restarting glusterd a few times (after verifying iptables, etc was not blocking anything)
18:41 LessSee__ joined #gluster
18:46 LeonSandcastle semiosis: I actually don't have a /var/log/glusterfs on either server.  Is there another place gluster would log?  I'm running 3.2.4 on both servers.  I didn't setup these servers nor install gluster so it's kind of a black box to me.  I very much appreciate the help.
18:47 semiosis LeonSandcastle: no idea.  try using lsof | grep gluster maybe?
19:05 chirino joined #gluster
19:06 sputnik13 joined #gluster
19:06 gmcwhistler joined #gluster
19:08 asku joined #gluster
19:21 drowe joined #gluster
19:25 vpshastry left #gluster
19:25 pixelgremlins joined #gluster
19:27 pixelgremlins hey-- my main server isn't connecting... when I run gluster peer status from the other nodes, I get State: Peer in cluster (Disconnected) and when I add/change files in var/export/www they don't get moved to other servers... The other two servers are showing Peer in Cluster (Connected)
19:27 stickyboy joined #gluster
19:29 theron joined #gluster
19:31 theron joined #gluster
19:31 madphoenix joined #gluster
19:31 sroy_ joined #gluster
19:33 theron joined #gluster
19:36 lalatenduM joined #gluster
19:39 lanning joined #gluster
19:40 madphoenix hello all.  i just upgraded my gluster cluster from 3.3.1 to 3.4.2, then added two new bricks to a distributed volume.  When I started the rebalance, 5 out of the 6 nodes immediately failed.  I'm not sure how to go about troubleshooting since the logs are rather cryptic.  Hoping that somebody could take a look and point me in the right direction: http://pastebin.com/GVa3vfMk
19:40 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
19:41 jporterfield joined #gluster
19:50 spiekey joined #gluster
19:50 rcaskey what's a reasonable minimum network.ping-timeout for hosts just using conventional hard drives?
19:50 spiekey Hello!
19:50 glusterbot spiekey: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
19:52 spiekey i am benchmarking glusterfs because i might use it with ovirt (kvm). However I'm getting really bad performance with replication.
19:53 spiekey my link is a dedicated gigabit link with an mtu of 9000. Iperf shows me 1GBit, but glusterfs performs very badly.
19:53 spiekey any idea where i start looking?
19:55 spiekey with dd i get a local write speed of 100MB/sec. With glusterfs i only get 30MB/sec
19:55 japuzzo joined #gluster
19:56 LeonSandcastle I finally got some usable log info for my issue http://fpaste.org/74041/14570511/.  I have restarted the service quite a bit and now have different peer statuses (both in cluster but one is disconnected).  More detailed info here: http://fpaste.org/74044/57326139/.
19:56 glusterbot Title: #74041 Fedora Project Pastebin (at fpaste.org)
19:56 jag3773 joined #gluster
20:03 denaitre joined #gluster
20:04 primusinterpares joined #gluster
20:07 dewey Oh Gods, of Gluster, I offer an obscure problem for your consideration:  I have a 2 node replicated cluster with a 3rd system acting as SAMBA front end.  DB backup processes (Litespeed) using said front end are receiving locking conflicts when trying to write.  When I point the same Samba configuration to a non-gluster file system, no problems.  The odd thing is...initially it looked like it...
20:07 dewey ...was working well.  Statedump shows no interesting locks in Gluster -- only a couple of inode locks for cwds for 2 processes.
20:10 semiosis spiekey: dd is a terrible benchmark tool.  it's not so bad as a load generator.  if you must, be sure to set bs=1M, and try running multiple dd on the same client in parallel.
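A sketch of the parallel-dd comparison being suggested; the mount point and file names are placeholders:

    # single stream, with a 1M block size as suggested
    dd if=/dev/zero of=/mnt/test1 bs=1M count=2048 conv=fdatasync

    # two streams at once; add the two reported rates for aggregate throughput
    dd if=/dev/zero of=/mnt/test1 bs=1M count=2048 conv=fdatasync &
    dd if=/dev/zero of=/mnt/test2 bs=1M count=2048 conv=fdatasync &
    wait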
20:11 semiosis comparing a distributed filesystem like glusterfs with a local disk is like comparing apples to orchards
20:11 gdubreui joined #gluster
20:11 spiekey semiosis: well, not performance wise
20:12 semiosis idk what you mean
20:12 spiekey what performance cani expect?
20:12 dewey spiekey:  Also, 100 mbyte/sec is about 1Gbit.  Gluster has to use some of that bandwidth to replicate between nodes.
20:12 spiekey if i get 100MB/s local?
20:12 dewey so you might indeed be network bound
20:12 spiekey no, 100MB/sec is local on lvm, 30MB/sec was on glusterfs
20:13 semiosis hard to say what you can expect... you need to test your equipment & design around what you need... this is capacity planning
20:13 calum_ joined #gluster
20:13 dewey OK, so you're getting 100MB/s transfer rate when on the gluster machine to a native file system, then getting 30MB/s when writing to the gluster mount, yes?
20:13 spiekey yes
20:14 semiosis if you have replica 3 then 30MB is about right, since the client's writes are sent in triplicate, giving you effectively 1/3 usable
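Rough arithmetic behind that estimate, for a client writing over a single 1GbE link: usable payload is on the order of 110-120 MB/s, and a FUSE client sends one copy per replica, so the ceiling is roughly 117/2 ≈ 58 MB/s with replica 2 and 117/3 ≈ 39 MB/s with replica 3, before any other overhead.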
20:14 dewey semiosis -- wouldn't those happen in parallel though?
20:14 spiekey i have two nodes in my case
20:15 dewey spiekey -- hang on a sec I'm going to do a quick replication of your issue in my gluster system.
20:15 semiosis dewey: parallel?  how many network links are there? :)
20:15 semiosis dewey: whats your dd command?
20:15 semiosis sorry, that was for spiekey
20:15 semiosis spiekey: : whats your dd command?
20:15 dewey Oh, I see what you're saying.  Yes, but if he has 2 nodes on a 1GBit link and he's doing the dd on 1 of them he should have the full 1Gbit available
20:16 semiosis spiekey: if you run two dd at a time, do you get better than 30MB/s aggregate perf?
20:16 spiekey i can test that
20:16 semiosis dewey: did spiekey say his test client is on one of the servers?  i missed that... thought client was a 3rd machine
20:17 spiekey i have two nodes. And i run dd on one of them
20:17 semiosis what's the dd command???
20:17 spiekey on a mounted glusterfs directory
20:17 spiekey one sec
20:18 spiekey dd bs=1M count=2048 if=/dev/zero of=/mnt/test conv=fdatasync
20:19 semiosis spiekey: and also ,,(pasteinfo) please
20:19 glusterbot spiekey: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
20:19 dewey Spiekey:  my results agree with yours.
20:20 spiekey http://fpaste.org/74062/91458804/
20:20 glusterbot Title: #74062 Fedora Project Pastebin (at fpaste.org)
20:20 dewey I used time dd if=/dev/zero bs=1M count=10k on both native and gluster fuse mounted file systems and see 1GB/s transfer on native, 391MB/s on gluster-fuse
20:21 semiosis dewey: what if you run two of those dd at the same time?  what's the aggregate performance?
20:22 dewey Good question.  Trying it out.
20:23 dewey FYI I'm right now benchmarking raids and filesystems to find optimal native performance for a new pair of gluster nodes.  I'm then going to benchmark gluster when I have the new nodes in place.  I'm intending to write up a post on the results when I have them -- likely in about 3 weeks (purchasing delay on the equipment).
20:24 semiosis cool!
20:24 pixelgremlins if I have everything setup Brick1 apollo:/var/export/www  Brick2: chronos:/var/export/www ..etc -- if I create a file @ apollo:/var/export/www/test.123 it should sync to chronos:/var/export/www/test.123  ...
20:24 spiekey with two dd processes i get 20MB/sec and 22MB/sec
20:25 spiekey i also found this: http://funwithlinux.net/2013/01/kvm-and-glusterfs-3-3-1-performance-on-centos-6/
20:25 semiosis so 42MB/s total, that's 40% more aggregate performance!
20:25 semiosis !!!
20:25 dewey semiosis:  on gluster, 1 dd transferred at 350 MB/s.  2 dds aggregate 465 MB/s
20:25 spiekey still a lot missing
20:26 semiosis spiekey: try aliasing each server's hostname to 127.0.0.1 on itself
20:26 dewey So again, very comparable results.  2 dds running on the native:  216 MB/s (seems odd). Note that there *is* some load on this cluster.
20:27 pixelgremlins okay, so I ran gluster peer status - all peers are connected, I have replica 3, 3 servers, and quorum-type:auto, transport type tcp, - I've auth allowed all 3 ip addresses of the other servers. but files aren't being synched...
20:27 semiosis pixelgremlins: are you writing through a client mount point?  can't write directly to the bricks!
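The distinction, sketched with placeholder volume and mount names: clients must write through a glusterfs mount of the volume, never into the brick directory itself.

    mount -t glusterfs apollo:/wwwvol /var/www
    cp test.123 /var/www/             # goes through gluster and replicates
    # cp test.123 /var/export/www/    # writing straight into the brick does NOT replicate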
20:27 dewey semiosis -- any idea what's the best channel to ask my locking question?  Gluster Users mailing list?  This is getting in the way of my company adopting gluster :-(
20:28 semiosis dewey: could always send a message to the users ML.  also probably should keep asking in here, different answers from different people at different times of day.
20:28 dewey 2nd run on the native:  very comparable.  Conclusion:  Gluster scales *MUCH* better than native :-)
20:28 semiosis dewey: oh hey have you tried the new gluster/samba integration?  maybe that's the way to go
20:29 semiosis samba with a native gluster client compiled in (no fuse)
20:29 dewey I have not.  where can I find some good info?
20:29 pixelgremlins blah!!! I should freakin' stay home on Mondays! of course then Tuesdays would be the new monday... and I'd have to stay home then too.. lol -- Did all the install/etc/ on Friday/saturday -- and forgot that /var/www is where the files go, not /var/export/www ...
20:30 pixelgremlins ls
20:30 semiosis pixelgremlins: progress!
20:30 pixelgremlins oops
20:30 semiosis @lucky glusterfs samba
20:30 glusterbot semiosis: http://gluster.org/community/documentation/index.php/Gluster_3.2:_Exporting_Gluster_Volumes_Through_Samba
20:30 semiosis hmm probably *not* that link
20:31 LeonSandcastle @semiosis:  I finally got some usable log info for my issue http://fpaste.org/74041/14570511/.  I have restarted the service quite a bit and now have different peer statuses (both in cluster but one is disconnected).  More detailed info here: http://fpaste.org/74044/57326139/.  Anything in the log file speak towards what might be happening?
20:31 dewey spiekey:  to summarize my dd results:  my single thread results were completely comparable with yours.  when I ran 2 dds, gluster gave me an increase in aggregate performance (each one was 60% of single thread speed for an aggregate of 120%).  Native gave me about 10% of single thread speed for an aggregate of 20%.
20:32 dewey Incidentally, I'm running this on an XFS file system with a 12 drive RAID50 with a 16k stripe size -- and all those *definitely* matter when it comes to performance.
20:32 theron joined #gluster
20:33 semiosis LeonSandcastle: idk what to say... double check that the required ,,(ports) are allowed in iptables on both machines... still looks like a network issue
20:33 glusterbot LeonSandcastle: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
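A sketch of matching iptables rules for 3.4, based on the list above (widen the brick range to cover however many bricks the server actually exports):

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT                          # glusterd management
    iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT                          # bricks on 3.4+
    iptables -A INPUT -p tcp -m multiport --dports 111,2049,38465:38468 -j ACCEPT   # rpcbind, NFS, NLM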
20:33 dewey semiosis -- that's effectively what I'm doing save that I'm using a 3rd system to mount the glusterFS and re-export via SAMBA.
20:34 semiosis dewey: do you have the gluster hostnames aliased to 127.0.0.1 on themselves?
20:34 dewey No.
20:34 semiosis dewey: that link is like 2 years old.  there's a new samba vfs module that links directly with glusterfs
20:34 semiosis dewey: https://forge.gluster.org/samba-glusterfs/samba-glusterfs-vfs
20:34 glusterbot Title: samba-glusterfs-vfs in Samba-Gluster Integration - Gluster Community Forge (at forge.gluster.org)
20:34 semiosis but idk where to find docs about how to use it
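For what it's worth, the vfs module is configured per share in smb.conf, roughly along these lines (share and volume names are placeholders; option names should be checked against the vfs_glusterfs manpage for the installed version):

    [gluster-share]
        vfs objects = glusterfs
        glusterfs:volume = myvol
        glusterfs:volfile_server = localhost
        path = /
        read only = no
        kernel share modes = no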
20:35 dewey I'm googling for it now.
20:37 semiosis spiekey: if your workload demands extreme single thread performance then you will want to look into better networking than 1GbE.  Infiniband/RDMA is the best performing network for glusterfs afaik
20:37 semiosis however if your workload can tolerate lower single thread perf. but use many parallel threads, then you can scale out glusterfs to many servers & clients with cheap old 1GbE and get lots more aggregate performance
20:37 semiosis thats the deal
20:38 semiosis i wouldn't waste any more time looking for another couple MB/s on your LAN :)
20:39 spiekey semiosis: well, still. the network is not my bottleneck i think
20:39 madphoenix semiosis: just curious, what is the purpose of aliasing each gluster host to 127.0.0.1 in /etc/hosts?  i've not seen that recommended before (but I don't hang out in here much)
20:39 spiekey i will come back tomorrow with documentation and performance tests ;)
20:39 semiosis spiekey: when the network is 1GbE, it's usually the bottleneck.
20:39 semiosis cool!
20:39 spiekey semiosis: i know. We use DRBD a lot. I am aware of those I/O Bottlenecks
20:41 semiosis madphoenix: maybe just superstition on my part, but i think it resolved some "peer not a friend" error.  i wonder if it will have an impact on performance, by moving local traffic onto lo interface instead of eth0
20:41 madphoenix interesting
20:42 semiosis otoh i could see how the kernel might already be optimized to handle local traffic on eth0, it might not made a difference at all
20:42 semiosis s/made/make/
20:42 glusterbot What semiosis meant to say was: otoh i could see how the kernel might already be optimized to handle local traffic on eth0, it might not make a difference at all
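The aliasing in question is just an /etc/hosts entry on each server mapping its own peer hostname to loopback (hostname "gluster1" is a placeholder):

    127.0.0.1   localhost gluster1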
20:51 madphoenix i'm having trouble with "transport endpoint not connected" errors after expanding my volume with 2 new bricks.  On the cluster side, nothing is logged.  On the client side, I see errors like "[client-rpc-fops.c:2541:client3_3_opendir_cbk] 0-<volname>-client-6: remote operation failed: No such file or directory. Path: <path>"
20:51 madphoenix i'm not really sure what to do next to figure out where the issue is coming from
20:52 semiosis wherever there's a "remote operation failed" message on a client, there's a corresponding message in a brick log.  find it
20:52 madphoenix on the server side?
20:52 semiosis right
20:52 semiosis other end of the "remote"
20:52 semiosis as in remote operation failed
20:52 madphoenix ok
20:53 semiosis @remote
20:53 glusterbot semiosis: I do not know about 'remote', but I do know about these similar topics: 'remote operation failed'
20:53 semiosis ooh ,,(remote operation failed)
20:53 glusterbot any time you see 'remote operation failed' in a client (or shd, or nfs) log file, you should look in the brick log files for a corresponding entry, that would be the other end of the remote operation
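A quick way to find the matching entry, assuming default log locations: grep the path (or timestamp) from the client error across the brick logs on each server.

    grep 'Path: /the/failing/path' /var/log/glusterfs/bricks/*.log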
20:53 semiosis sounds familiar!
20:53 semiosis @alias remote operation failed as remote
20:53 glusterbot semiosis: (alias [<channel>] <oldkey> <newkey> [<number>]) -- Adds a new key <newkey> for factoid associated with <oldkey>. <number> is only necessary if there's more than one factoid associated with <oldkey>. The same action can be accomplished by using the 'learn' function with a new key but an existing (verbatim) factoid content.
20:53 semiosis @alias remote operation failed remote
20:53 glusterbot semiosis: (alias [<channel>] <oldkey> <newkey> [<number>]) -- Adds a new key <newkey> for factoid associated with <oldkey>. <number> is only necessary if there's more than one factoid associated with <oldkey>. The same action can be accomplished by using the 'learn' function with a new key but an existing (verbatim) factoid content.
20:53 badone joined #gluster
20:53 semiosis @alias 'remote operation failed' remote
20:53 glusterbot semiosis: (alias [<channel>] <oldkey> <newkey> [<number>]) -- Adds a new key <newkey> for factoid associated with <oldkey>. <number> is only necessary if there's more than one factoid associated with <oldkey>. The same action can be accomplished by using the 'learn' function with a new key but an existing (verbatim) factoid content.
20:53 semiosis oh well
20:54 ultrabizweb joined #gluster
20:56 madphoenix okay, so all of the brick errors are on one server.  lots of messages like "0-server: inode for the gfid <gfid> is not found. anonymous fd creation failed"
20:57 madphoenix seems like this is a dht layout issue?
20:57 madphoenix and the server throwing errors is the one hosting the two new bricks
20:58 madphoenix more complete log here: http://fpaste.org/74080/14611141/
20:58 glusterbot Title: #74080 Fedora Project Pastebin (at fpaste.org)
20:59 semiosis you did add-brick, then you did rebalance?
20:59 madphoenix ah crap, nevermind that was misformatted
20:59 madphoenix semiosis: yes, i did add-brick then rebalance
20:59 madphoenix the thing is, the rebalance failed on all but one node
20:59 semiosis pretty sure you need that to succeed
20:59 madphoenix i think you're right, but i've sort of painted myself in a corner here.  the rebalance on the one node it's working on is still running
21:00 madphoenix i can't seem to do anything while that is running
21:00 madphoenix since "rebalance stop" really just seems to pause it
21:01 madphoenix actually the first rebalance looks like it finished, so i'll try running it again
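For reference, the sequence being retried here, with a status check between steps (volume and brick paths are placeholders):

    gluster volume add-brick myvol newserver:/export/brick1 newserver:/export/brick2
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status    # wait for 'completed' on every node before retrying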
21:02 Ark_explorys Anyone have a link to documentation on replacing a failed node on glusterfs 3.3?
21:07 madphoenix so my rebalance is failing on 5 of 6 servers with this log: http://fpaste.org/74082/14615971/
21:07 glusterbot Title: #74082 Fedora Project Pastebin (at fpaste.org)
21:08 madphoenix notably, it's telling me "1 subvolume(s) are down. Skipping fix layout.".  which is strange because i don't have any subvolumes
21:08 dbruhn Yes you do, those are the bricks
21:08 dbruhn so if you have one brick not functioning, that would be why
21:09 madphoenix but those bricks are functioning
21:09 Ark_explorys dbruhn: I kicked off the backup of the backups; I will be trying the remove vols/pdv fix tomorrow
21:10 dbruhn Ark_explorys, sounds good, hope it goes smoothly.
21:10 madphoenix dbruhn: would that be one brick on the server, or one brick within the volume?
21:10 dbruhn madphoenix, one brick within the volume
21:10 semiosis Ark_explorys: ,,(replace)
21:10 glusterbot Ark_explorys: Useful links for replacing a failed server... if replacement server has different hostname: http://web.archive.org/web/20120508153302/http://community.gluster.org/q/a-replica-node-has-failed-completely-and-must-be-replaced-with-new-empty-hardware-how-do-i-add-the-new-hardware-and-bricks-back-into-the-replica-pair-and-begin-the-healing-process/ ... or if replacement
21:10 glusterbot server has same hostname: http://goo.gl/rem8L
21:10 madphoenix gluster volume status reports all bricks as being online
21:10 dbruhn madphoenix, run "gluster volume status"
21:11 dbruhn hmm
21:11 dbruhn were you just saying one of them had a hosed file system?
21:12 madphoenix there's one new server with 2 bricks in this volume.  some data got rebalanced and/or written there.  when i try to delete some files, i get "transport not connected" and corresponding errors on the new 2 bricks
21:13 madphoenix the funny thing is, the files i'm trying to delete and getting "transport not connected" on really aren't on the filesystem of either new brick
21:16 madphoenix dbruhn: the other strange thing is that one of the six servers starts to rebalance just fine, which is strange given that the other five complain about a missing subvolume somewhere in the cluster
21:20 khushildep joined #gluster
21:20 dbruhn so was the one not complaining the one with the bad brick?
21:20 madphoenix no
21:20 madphoenix different server
21:21 mrfsl joined #gluster
21:22 mrfsl Hello fellows! I was wondering, my cluster (3.4.2) is in the middle of an ongoing full heal. Can I perform a brick-replace while this operation is in motion?
21:23 semiosis mrfsl: probably not a great idea, but also probably possible
21:23 [o__o] joined #gluster
21:24 mrfsl Ha! --- Is there a way to know something like "Percentage Complete" on these heal processes?
21:24 semiosis idk
21:30 mrfsl so ... is there a way to know when a triggered full self-heal is completed?
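One hedged way to watch a full heal on 3.4, short of a percentage figure (volume name "myvol" is a placeholder): poll the heal counters until they hold at zero for a while.

    gluster volume heal myvol info           # entries still pending heal
    gluster volume heal myvol info healed    # entries healed so far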
21:31 rcaskey is there a way to designate a pecking order so that one will go read-only and eliminate split brain?
21:34 StarBeas_ joined #gluster
21:34 iksik_ joined #gluster
21:35 johnmwilliams__ joined #gluster
21:37 jmarley joined #gluster
21:37 hflai_ joined #gluster
21:37 purpleid1a joined #gluster
21:37 delhage_ joined #gluster
21:38 lawrie joined #gluster
21:38 solid_li1 joined #gluster
21:38 jmarley joined #gluster
21:38 social_ joined #gluster
21:38 mkzero joined #gluster
21:40 spiekey joined #gluster
21:41 simon__ joined #gluster
21:41 ueberall joined #gluster
21:41 ueberall joined #gluster
21:42 Slasheri joined #gluster
21:42 Slasheri joined #gluster
21:43 masterzen joined #gluster
21:43 semiosis rcaskey: see the quorum options in 'gluster volume set help'
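A sketch of the relevant options (volume name "myvol" is a placeholder; roughly, client-side quorum 'auto' blocks writes to a replica set that has lost its majority, and server-side quorum stops bricks on a partitioned server):

    gluster volume set myvol cluster.quorum-type auto
    gluster volume set myvol cluster.server-quorum-type server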
21:44 mojorison joined #gluster
21:46 hybrid512 joined #gluster
21:46 jmarley joined #gluster
21:48 rcaskey thx
21:56 m0zes joined #gluster
22:01 chirino joined #gluster
22:02 purpleid1a i'd like to propose a motion: we find out who's ddos-ing freenode, and then we break their servers. we'll use the top secret redhat ddos tool to do so. can't people ddos apple, the bsa, google, or whoever the current evil companies are?
22:10 ells joined #gluster
22:16 semiosis nay
22:16 LeonSandcastle left #gluster
22:22 sarkis purpleidea: so i was wondering how to raise the replica count in a production setting not vagrant
22:23 daMaestro joined #gluster
22:27 failshel_ joined #gluster
22:32 Dave2 joined #gluster
22:33 yosafbridge joined #gluster
22:50 JoeJulian joined #gluster
22:55 theron joined #gluster
22:55 mrfsl trying to do a brick replace on 3.4.2, I am now faced with this prompt:
22:55 mrfsl All replace-brick commands except commit force are deprecated. Do you want to continue?
22:55 mrfsl can someone help explain what to do?
22:57 semiosis what is your goal?  why do you want to use replace-brick?
22:57 mrfsl i have a 3 node distributed replicated cluster. I am adding a fourth node
22:57 mrfsl so to balance the cluster I need to replace bricks with new bricks - add bricks - and rebalance
22:58 semiosis well i think a replace-brick commit force will cause the volume to switch to the new brick but it will be empty.  self heal should then fill it with data from its replica(s)
22:58 semiosis just guessing here
22:59 semiosis you might try preloading the brick with data from its replicas using rsync
22:59 semiosis again, just guessing at strategies
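A sketch of that commit-force route on 3.4.2 (volume, hostnames and brick paths are placeholders):

    gluster volume replace-brick myvol oldhost:/export/brick1 newhost:/export/brick1 commit force
    gluster volume heal myvol full    # let self-heal repopulate the empty brick from its replica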
22:59 marcoceppi_ joined #gluster
22:59 mrfsl does anyone have any documentation on why the replace-brick commands were deprecated?
23:00 semiosis when rsyncing bricks i always recommend --inplace --whole-file and -aHAX
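Putting those flags together, the preload would look roughly like this, run against the surviving replica brick before waiting on self-heal (hostnames and paths are placeholders):

    rsync -aHAX --inplace --whole-file oldhost:/export/brick1/ /export/brick1/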
23:03 qdk joined #gluster
23:05 sarkis purpleidea: u back or stuck in freenode hell
23:05 awheeler_ joined #gluster
23:09 LoudNoises joined #gluster
23:11 JoeJulian Oh good. I wasn't the only one in freenode hell huh?
23:18 zerick joined #gluster
23:23 zerick joined #gluster
23:23 plarsen joined #gluster
23:28 chirino joined #gluster
23:30 ells_ joined #gluster
23:31 chirino joined #gluster
23:32 zerick joined #gluster
23:42 chirino joined #gluster
23:56 mrfsl left #gluster
23:56 sarkis it's been bad all day
