
IRC log for #gluster, 2014-07-11


All times shown according to UTC.

Time Nick Message
00:02 cjanbanan joined #gluster
00:02 chirino joined #gluster
00:16 Peter3 is there a way to sort the gluster volume info output by volume??
00:28 purpleidea Peter3: you can parse it however you like with the --xml flag
00:28 purpleidea Peter3: for some examples see: https://github.com/purpleidea/puppet-gluster/blob/master/files/xml.py
00:28 glusterbot Title: puppet-gluster/files/xml.py at master · purpleidea/puppet-gluster · GitHub (at github.com)
00:29 Peter3 thanks!
00:29 purpleidea yw
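[Editor's note: the `--xml` approach above can be sketched in a few lines of Python. This is a minimal illustration, assuming the usual `cliOutput/volInfo/volumes/volume/name` layout of `gluster volume info --xml` output; the sample XML below is a hypothetical stand-in for the real command output.]

```python
import xml.etree.ElementTree as ET

def sorted_volume_names(xml_text):
    """Return volume names, sorted alphabetically, from
    `gluster volume info --xml` output."""
    root = ET.fromstring(xml_text)
    names = [v.findtext("name", "")
             for v in root.findall(".//volumes/volume")]
    return sorted(names)

# In practice the XML would come from something like:
#   subprocess.check_output(["gluster", "volume", "info", "--xml"])
# Hypothetical sample output for illustration:
sample = """<cliOutput><volInfo><volumes>
  <volume><name>vol-b</name></volume>
  <volume><name>vol-a</name></volume>
</volumes></volInfo></cliOutput>"""

print(sorted_volume_names(sample))  # → ['vol-a', 'vol-b']
```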
00:37 itisravi joined #gluster
00:46 theron joined #gluster
00:49 carrar joined #gluster
00:54 gildub joined #gluster
01:02 cjanbanan joined #gluster
01:09 Edddgy joined #gluster
01:14 gildub joined #gluster
01:26 harish__ joined #gluster
01:55 cjanbanan joined #gluster
01:56 SteveCooling joined #gluster
01:56 radez_g0n3 joined #gluster
01:56 Rydekull joined #gluster
01:56 FooBar joined #gluster
01:56 jiffe98 joined #gluster
01:56 samkottler joined #gluster
01:56 TheSov joined #gluster
01:56 ccha2 joined #gluster
01:56 huleboer joined #gluster
01:56 lanning joined #gluster
01:56 ws2k3 joined #gluster
01:56 l0uis joined #gluster
01:56 lezo_ joined #gluster
01:56 troj joined #gluster
01:56 dencaval joined #gluster
01:56 jcsp joined #gluster
01:56 elico joined #gluster
01:56 haomai___ joined #gluster
01:56 richvdh joined #gluster
01:56 abyss__ joined #gluster
01:56 weykent joined #gluster
01:56 nage joined #gluster
01:56 masterzen joined #gluster
01:56 avati joined #gluster
01:56 swebb joined #gluster
01:56 VeggieMeat joined #gluster
01:56 eryc joined #gluster
01:56 verdurin joined #gluster
01:56 fyxim_ joined #gluster
01:56 georgeh|workstat joined #gluster
01:56 Peanut joined #gluster
01:56 mjrosenb joined #gluster
01:56 glusterbot joined #gluster
01:56 mwoodson joined #gluster
01:56 kkeithley joined #gluster
01:56 uebera|| joined #gluster
01:56 ThatGraemeGuy joined #gluster
01:56 Diddi_ joined #gluster
01:56 xrsa joined #gluster
01:56 VerboEse joined #gluster
01:56 rturk|afk joined #gluster
01:56 SpeeR joined #gluster
01:56 edwardm61 joined #gluster
01:56 coredumb joined #gluster
01:56 k3rmat joined #gluster
01:56 jbrooks joined #gluster
01:56 cfeller_ joined #gluster
01:56 firemanxbr joined #gluster
01:56 dockbram joined #gluster
01:56 gomikemike joined #gluster
01:56 prasanth|offline joined #gluster
01:56 [o__o] joined #gluster
01:56 xymox joined #gluster
01:56 Slasheri joined #gluster
01:56 atrius joined #gluster
01:56 eclectic_ joined #gluster
01:56 decimoe joined #gluster
01:56 cicero joined #gluster
01:56 stigchristian joined #gluster
01:56 Dave2 joined #gluster
01:56 JustinClift joined #gluster
01:56 ninkotech__ joined #gluster
01:56 SpComb joined #gluster
01:56 qdk joined #gluster
01:56 bennyturns joined #gluster
01:56 fuz1on joined #gluster
01:56 tty00 joined #gluster
01:56 msvbhat_ joined #gluster
01:56 RicardoSSP joined #gluster
01:56 theron joined #gluster
01:56 klaas joined #gluster
01:56 silky joined #gluster
01:56 zerick joined #gluster
01:56 hagarth joined #gluster
01:56 gehaxelt joined #gluster
01:56 sac`away joined #gluster
01:56 JoeJulian joined #gluster
01:56 sijis joined #gluster
01:56 ninkotech_ joined #gluster
01:56 stickyboy joined #gluster
01:56 hybrid512 joined #gluster
01:56 mshadle joined #gluster
01:58 Intensity joined #gluster
01:58 jiqiren joined #gluster
01:58 dblack joined #gluster
01:58 anotheral joined #gluster
01:58 johnmwilliams__ joined #gluster
01:58 m0zes joined #gluster
01:58 yosafbridge joined #gluster
01:58 Alex joined #gluster
01:58 semiosis joined #gluster
01:58 sadbox joined #gluster
01:58 Georgyo joined #gluster
01:58 Nopik joined #gluster
01:58 eightyeight joined #gluster
01:58 fubada joined #gluster
01:58 JordanHackworth joined #gluster
01:58 pasqd joined #gluster
01:58 osiekhan3 joined #gluster
01:58 marcoceppi joined #gluster
01:58 sputnik13 joined #gluster
01:58 nixpanic_ joined #gluster
01:58 NCommander joined #gluster
01:58 msciciel joined #gluster
01:58 wgao joined #gluster
01:58 oxidane joined #gluster
01:58 _NiC joined #gluster
01:58 carrar joined #gluster
01:58 tom[] joined #gluster
01:58 systemonkey joined #gluster
01:58 gmcwhistler joined #gluster
01:58 pdrakeweb joined #gluster
01:58 DV joined #gluster
01:58 jezier joined #gluster
01:58 ghenry joined #gluster
01:58 n0de_ joined #gluster
01:58 sauce joined #gluster
01:58 tg2 joined #gluster
01:58 edong23 joined #gluster
01:58 tomased joined #gluster
01:58 saltsa joined #gluster
01:58 ninkotech joined #gluster
01:58 d-fence joined #gluster
01:58 atrius` joined #gluster
01:58 portante joined #gluster
01:58 johnmark joined #gluster
01:58 asku joined #gluster
01:58 hflai joined #gluster
01:58 overclk joined #gluster
01:58 coreping joined #gluster
01:58 vincent_vdk joined #gluster
01:58 sspinner joined #gluster
01:58 xavih joined #gluster
01:58 _jmp__ joined #gluster
01:59 haomaiwang joined #gluster
02:01 dblack joined #gluster
02:03 crashmag joined #gluster
02:03 fim joined #gluster
02:03 al joined #gluster
02:03 ackjewt joined #gluster
02:03 samppah joined #gluster
02:03 sman joined #gluster
02:03 Eco_ joined #gluster
02:03 koobs joined #gluster
02:03 eshy joined #gluster
02:04 T0aD joined #gluster
02:04 cristov joined #gluster
02:04 marmalodak joined #gluster
02:05 pvh_sa joined #gluster
02:05 DanF_ joined #gluster
02:05 mibby joined #gluster
02:05 Peter3 joined #gluster
02:05 sage joined #gluster
02:05 dtrainor joined #gluster
02:05 coredump joined #gluster
02:05 bfoster joined #gluster
02:05 Ch3LL_ joined #gluster
02:05 Bardack joined #gluster
02:05 tru_tru joined #gluster
02:05 azenk1 joined #gluster
02:05 delhage joined #gluster
02:05 cyberbootje joined #gluster
02:05 Kins joined #gluster
02:05 JonathanD joined #gluster
02:05 ultrabizweb joined #gluster
02:05 NuxRo joined #gluster
02:05 tziOm joined #gluster
02:05 partner joined #gluster
02:05 fraggeln joined #gluster
02:05 kke joined #gluster
02:06 torbjorn__ joined #gluster
02:06 the-me joined #gluster
02:06 itisravi joined #gluster
02:06 Gugge joined #gluster
02:06 foster joined #gluster
02:06 muhh joined #gluster
02:06 irated joined #gluster
02:08 ndevos joined #gluster
02:08 gildub joined #gluster
02:08 neoice joined #gluster
02:08 siel joined #gluster
02:08 Thilam|work joined #gluster
02:08 purpleidea joined #gluster
02:08 necrogami joined #gluster
02:08 codex joined #gluster
02:08 hchiramm_ joined #gluster
02:08 SNow joined #gluster
02:08 Ramereth joined #gluster
02:08 mkzero_ joined #gluster
02:08 samsaffron joined #gluster
02:08 morse joined #gluster
02:08 lyang0 joined #gluster
02:08 capri joined #gluster
02:08 Andreas-IPO joined #gluster
02:08 glusterbot joined #gluster
02:09 ndevos joined #gluster
02:09 Edddgy joined #gluster
02:10 ambish joined #gluster
02:19 bala joined #gluster
02:19 harish__ joined #gluster
02:27 dusmant joined #gluster
02:28 bharata-rao joined #gluster
02:35 haomai___ joined #gluster
02:47 lkoranda joined #gluster
02:51 Durzo joined #gluster
03:08 dusmant joined #gluster
03:10 Edddgy joined #gluster
03:20 davinder16 joined #gluster
03:25 aravindavk joined #gluster
03:32 cjanbanan joined #gluster
03:34 pureflex joined #gluster
03:40 itisravi joined #gluster
03:41 shubhendu joined #gluster
03:43 kshlm joined #gluster
03:44 kshlm joined #gluster
03:47 haomaiwa_ joined #gluster
03:48 RameshN joined #gluster
03:49 hagarth1 joined #gluster
03:53 atinmu joined #gluster
03:57 nbalachandran joined #gluster
03:58 Eco_ joined #gluster
04:00 kanagaraj joined #gluster
04:11 Edddgy joined #gluster
04:12 haomaiw__ joined #gluster
04:13 sahina joined #gluster
04:16 shubhendu joined #gluster
04:26 cjanbanan joined #gluster
04:27 ppai joined #gluster
04:27 dino82 joined #gluster
04:47 spandit joined #gluster
04:47 nbalachandran joined #gluster
04:48 hagarth joined #gluster
04:53 ndarshan joined #gluster
04:55 ramteid joined #gluster
05:04 cjanbanan joined #gluster
05:11 psharma joined #gluster
05:11 aravindavk joined #gluster
05:11 Edddgy joined #gluster
05:17 karnan joined #gluster
05:21 JoeJulian @later tell Eco__ Sorry, I wasn't actually near a computer. I was raging from my phone.
05:21 glusterbot JoeJulian: The operation succeeded.
05:27 raghu joined #gluster
05:27 vpshastry joined #gluster
05:28 lalatenduM joined #gluster
05:35 pureflex joined #gluster
05:36 prasanth joined #gluster
05:49 mbukatov joined #gluster
05:57 rjoseph joined #gluster
06:06 saurabh joined #gluster
06:10 cjanbanan joined #gluster
06:13 Edddgy joined #gluster
06:13 rastar joined #gluster
06:22 ppai joined #gluster
06:26 pvh_sa joined #gluster
06:32 cjanbanan joined #gluster
06:36 tty00 joined #gluster
06:40 monotek joined #gluster
06:42 davinder16 joined #gluster
06:45 cjanbanan joined #gluster
06:46 ricky-ti1 joined #gluster
06:47 jvandewege joined #gluster
06:50 pvh_sa joined #gluster
06:52 nbalachandran joined #gluster
06:57 glusterbot New news from newglusterbugs: [Bug 1118591] core: all brick processes crash when quota is enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1118591>
06:57 kdhananjay joined #gluster
07:03 hagarth joined #gluster
07:05 ctria joined #gluster
07:08 eseyman joined #gluster
07:09 nishanth joined #gluster
07:10 nishanth joined #gluster
07:11 ppai joined #gluster
07:13 doekia joined #gluster
07:13 doekia_ joined #gluster
07:16 keytab joined #gluster
07:23 rjoseph joined #gluster
07:26 hagarth joined #gluster
07:31 rgustafs joined #gluster
07:37 rgustafs joined #gluster
07:43 fsimonce joined #gluster
07:50 rjoseph joined #gluster
07:53 andreask joined #gluster
08:13 hagarth joined #gluster
08:23 siel joined #gluster
08:25 sputnik1_ joined #gluster
08:28 glusterbot New news from newglusterbugs: [Bug 1118629] Erasure coding translator <https://bugzilla.redhat.com/show_bug.cgi?id=1118629>
08:30 ndevos joined #gluster
08:30 Norky joined #gluster
08:35 RameshN joined #gluster
08:37 vikumar joined #gluster
08:39 cjanbanan joined #gluster
08:42 shubhendu|lunch joined #gluster
08:46 msciciel joined #gluster
09:02 harish_ joined #gluster
09:06 zerick joined #gluster
09:09 cjanbanan joined #gluster
09:10 vpshastry joined #gluster
09:10 harish__ joined #gluster
09:16 Pupeno joined #gluster
09:17 richvdh joined #gluster
09:24 haomaiwa_ joined #gluster
09:27 qdk joined #gluster
09:28 Pupeno_ joined #gluster
09:29 sputnik1_ joined #gluster
09:40 glusterbot New news from resolvedglusterbugs: [Bug 1042764] glusterfsd process crashes while doing ltable cleanup <https://bugzilla.redhat.com/show_bug.cgi?id=1042764>
09:46 vpshastry joined #gluster
09:49 fsimonce` joined #gluster
09:54 fsimonce joined #gluster
09:54 karnan_ joined #gluster
09:56 hagarth joined #gluster
10:04 Pupeno joined #gluster
10:14 vpshastry joined #gluster
10:25 fsimonce` joined #gluster
10:29 mdavidson joined #gluster
10:30 bene3 joined #gluster
10:33 stickyboy Took some notes last night on performance issues I'm having in my replica 2.
10:33 stickyboy https://gist.github.com/alanorth/2581413a81dad7cead9f
10:33 glusterbot Title: glusterfs-performance-forensics.md (at gist.github.com)
10:35 Slashman joined #gluster
10:40 DV joined #gluster
10:47 DV joined #gluster
10:50 calum_ joined #gluster
10:57 al joined #gluster
11:02 diegows joined #gluster
11:06 sjm joined #gluster
11:08 hagarth joined #gluster
11:18 chirino joined #gluster
11:24 julim joined #gluster
11:39 cjanbanan joined #gluster
11:40 harish__ joined #gluster
11:44 julim joined #gluster
11:44 andreask joined #gluster
11:46 TvL2386 joined #gluster
11:50 ambish left #gluster
11:50 homer5439 joined #gluster
11:51 homer5439 I want to add a new brick to a replicated volume with a lot of data
11:51 homer5439 from past experiences, I know that if I just "add-brick", self-healing just kills the machines
11:52 homer5439 is there a way to pre-populate the brick so that when I add it, self-healing needs to do almost nothing?
12:01 stickyboy Anyone using 10GbE SFP+?  I'm on copper and it sometimes makes me wanna poke my eye out.
12:04 sahina joined #gluster
12:06 itisravi_ joined #gluster
12:13 VerboEse joined #gluster
12:13 plarsen joined #gluster
12:15 theron joined #gluster
12:22 andreask joined #gluster
12:23 hagarth joined #gluster
12:24 dino82 joined #gluster
12:26 julim joined #gluster
12:28 LebedevRI joined #gluster
12:35 haomai___ joined #gluster
12:42 vpshastry joined #gluster
12:55 theron joined #gluster
12:59 plarsen joined #gluster
13:04 sahina joined #gluster
13:09 chirino joined #gluster
13:11 bennyturns joined #gluster
13:19 hagarth joined #gluster
13:20 japuzzo joined #gluster
13:25 firemanxbr joined #gluster
13:29 hchiramm_ joined #gluster
13:39 cjanbanan joined #gluster
13:58 tdasilva joined #gluster
14:04 vpshastry joined #gluster
14:06 daMaestro joined #gluster
14:07 jobewan joined #gluster
14:17 fsimonce joined #gluster
14:17 zerick joined #gluster
14:21 hchiramm_ joined #gluster
14:27 sahina_ joined #gluster
14:28 Pupeno_ joined #gluster
14:34 tdasilva_ joined #gluster
14:39 cjanbanan joined #gluster
14:47 ctria joined #gluster
14:49 chirino joined #gluster
14:52 Peter1 joined #gluster
14:52 Peter1 anyone had a log rotation issue on gluster where logs like nfs.log would not get written after the log rolled?
15:00 dtrainor joined #gluster
15:01 semiosis Peter1: i use logrotate's copytruncate option to avoid that problem
15:01 Peter1 cool thanks
15:02 semiosis yw
15:03 Peter1 this is my current set
15:03 Peter1 http://pastie.org/9378338
15:03 glusterbot Title: #9378338 - Pastie (at pastie.org)
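[Editor's note: the `copytruncate` suggestion above can be sketched as a logrotate stanza. This is an illustrative config fragment only; the path, schedule, and rotation count are assumptions, not Peter1's actual settings.]

```conf
# /etc/logrotate.d/glusterfs (sketch)
/var/log/glusterfs/*.log {
    weekly
    rotate 4
    compress
    missingok
    # copy the log, then truncate it in place, so the gluster
    # processes keep writing to the same open file descriptor
    # instead of a renamed, no-longer-written file
    copytruncate
}
```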
15:08 chirino joined #gluster
15:09 coredump joined #gluster
15:11 glusterbot New news from resolvedglusterbugs: [Bug 840349] Use consistent maximum volume name length <https://bugzilla.redhat.com/show_bug.cgi?id=840349> || [Bug 829170] gluster cli returning "operation failed" output for command "gluster v heal vol info" <https://bugzilla.redhat.com/show_bug.cgi?id=829170> || [Bug 812230] Quota: quota show wrong value and log full of "quota context not set in inode" <https://bugzilla.redhat.com/show_bug.cgi?id=812230>
15:13 davinder16 joined #gluster
15:19 lmickh joined #gluster
15:20 chirino_m joined #gluster
15:29 mortuar joined #gluster
15:29 glusterbot New news from newglusterbugs: [Bug 990028] enable gfid to path conversion <https://bugzilla.redhat.com/show_bug.cgi?id=990028>
15:34 xavih joined #gluster
15:39 cjanbanan joined #gluster
15:40 Pupeno joined #gluster
15:40 Intensity joined #gluster
15:41 dtrainor joined #gluster
15:41 glusterbot New news from resolvedglusterbugs: [Bug 857797] Quiesce xlator support for blocking all fops through reconfigure <https://bugzilla.redhat.com/show_bug.cgi?id=857797> || [Bug 903723] [RFE] Make quick-read cache the file contents in the open fop instead of lookup <https://bugzilla.redhat.com/show_bug.cgi?id=903723> || [Bug 864611] When brick directory doesn't exist, volume creation should fail <https://bugzilla.redhat.com/show_bug.cgi?id=864611>
15:44 RameshN joined #gluster
15:44 edward1 joined #gluster
15:59 glusterbot New news from newglusterbugs: [Bug 969461] RFE: Quota fixes <https://bugzilla.redhat.com/show_bug.cgi?id=969461>
16:00 Edddgy joined #gluster
16:09 davinder16 joined #gluster
16:09 ron-slc joined #gluster
16:11 chirino joined #gluster
16:11 glusterbot New news from resolvedglusterbugs: [Bug 960455] [FEAT] Implement geo-rep delete in a distributed geo-replication set-up <https://bugzilla.redhat.com/show_bug.cgi?id=960455> || [Bug 960527] [FEAT]Remove geo-rep log-rotate cli. <https://bugzilla.redhat.com/show_bug.cgi?id=960527> || [Bug 961342] [FEAT] Start gsyncd on only one node of a replica pair <https://bugzilla.redhat.com/show_bug.cgi?id=961342> || [Bug 924296] Glusterd hangs running volum
16:26 Mo__ joined #gluster
16:30 glusterbot New news from newglusterbugs: [Bug 948692] [FEAT] Make geo-rep start and stop distributed. <https://bugzilla.redhat.com/show_bug.cgi?id=948692> || [Bug 948698] [FEAT] Implement geo-rep start force and stop force <https://bugzilla.redhat.com/show_bug.cgi?id=948698> || [Bug 914804] [FEAT] Implement volume-specific quorum <https://bugzilla.redhat.com/show_bug.cgi?id=914804>
16:39 cjanbanan joined #gluster
16:42 glusterbot New news from resolvedglusterbugs: [Bug 865493] Metadata storage uses 254 byte key/value xattr pairs <https://bugzilla.redhat.com/show_bug.cgi?id=865493> || [Bug 868087] Multiple memcache key/value pairs are used to store metadata for account and containers <https://bugzilla.redhat.com/show_bug.cgi?id=868087> || [Bug 919007] After some time, transfer is slow and all writes are 4kb. Re-opening fds brings back fast transfer <https://bugzilla.redha
16:53 MacWinner joined #gluster
17:08 Ark joined #gluster
17:15 Peter1 is there a way to prevent locks error when multiple gluster command running?
17:17 Peter1 http://pastie.org/9378663
17:17 glusterbot Title: #9378663 - Pastie (at pastie.org)
17:24 hagarth joined #gluster
17:29 _Bryan_ joined #gluster
17:30 chirino joined #gluster
17:32 Edddgy1 joined #gluster
17:35 msvbhat joined #gluster
17:35 SpComb^ joined #gluster
17:36 Slasheri_ joined #gluster
17:36 fleducquede_ joined #gluster
17:36 cicero_ joined #gluster
17:38 stigchri1tian joined #gluster
17:39 cjanbanan joined #gluster
17:40 gomikemi1e joined #gluster
17:40 dtrainor_ joined #gluster
17:41 tty00 joined #gluster
17:41 atrius_ joined #gluster
17:42 hchiramm_ joined #gluster
17:43 [o__o] joined #gluster
17:45 JustinClift joined #gluster
17:45 chirino joined #gluster
17:47 prasanth|offline joined #gluster
17:47 Dave2 joined #gluster
17:47 eclectic joined #gluster
17:48 daMaestro joined #gluster
17:50 decimoe joined #gluster
17:58 dtrainor joined #gluster
18:06 P0w3r3d joined #gluster
18:14 theron joined #gluster
18:15 chirino joined #gluster
18:15 theron joined #gluster
18:18 theron joined #gluster
18:27 johnmark JoeJulian: know anything about this? https://botbot.me/freenode/gluster/
18:27 glusterbot Title: Logs for #gluster | BotBot.me [o__o] (at botbot.me)
18:27 johnmark [o__o]: howdy
18:33 stickyboy Realtime logs...
18:33 stickyboy Interesting. :)
18:42 glusterbot New news from resolvedglusterbugs: [Bug 1036539] Distributed Geo-Replication enhancements <https://bugzilla.redhat.com/show_bug.cgi?id=1036539> || [Bug 1005183] cluster wide lock not released when originating glusterd dies <https://bugzilla.redhat.com/show_bug.cgi?id=1005183> || [Bug 1022593] Dist-geo-rep : When node goes down and come back in master cluster, that particular session will be defunct. <https://bugzilla.redhat.com/show_bug.cgi?id=1
18:50 vimal joined #gluster
18:53 cjanbanan joined #gluster
18:53 al joined #gluster
19:00 glusterbot New news from newglusterbugs: [Bug 1069494] DHT - In rebalance(after add-brick or sub-vol per dir change) hash layout is not re distributed properly. It can be optimized to reduce file migration <https://bugzilla.redhat.com/show_bug.cgi?id=1069494>
19:12 glusterbot New news from resolvedglusterbugs: [Bug 1065655] [cdc] resetting compression on the volume leaves behind compression option on the volume <https://bugzilla.redhat.com/show_bug.cgi?id=1065655> || [Bug 1065658] [cdc] compression options are not clearly mentioned <https://bugzilla.redhat.com/show_bug.cgi?id=1065658> || [Bug 1061044] DHT - rebalance - during data migration , rebalance is migrating files to correct sub-vol but after that it creates l
19:15 LessSeen_ joined #gluster
19:26 ndk joined #gluster
19:30 glusterbot New news from newglusterbugs: [Bug 1081018] glusterd needs xfsprogs and e2fsprogs packages <https://bugzilla.redhat.com/show_bug.cgi?id=1081018> || [Bug 1028582] GlusterFS files missing randomly - the miss triggers a self heal, then missing files appear. <https://bugzilla.redhat.com/show_bug.cgi?id=1028582> || [Bug 1075611] [FEAT] log: enhance gluster log format with message ID and standardize errno reporting <https://bugzilla.redhat.com/show_bu
19:42 glusterbot New news from resolvedglusterbugs: [Bug 989541] Dist-geo-rep : 'gluster volume geo :: stop' gives wrong error message if session does not exist <https://bugzilla.redhat.com/show_bug.cgi?id=989541> || [Bug 999531] Dist-geo-rep : geo-rep create force succeeds even if the master doesn't have the passwd less ssh to slave host <https://bugzilla.redhat.com/show_bug.cgi?id=999531> || [Bug 994353] Dist-geo-rep: Worker in one of the master node keeps cra
19:47 cristov hi #gluster...
19:49 cristov anybody telling me... i was wondering if there are plans in the roadmap for gluster to support pNFS clients ?
19:52 MacWinner joined #gluster
19:54 tty00 joined #gluster
20:07 JoeJulian johnmark: Not much, besides someone asked if they could quite a while ago and we said yes.
20:09 cjanbanan joined #gluster
20:12 johnmark JoeJulian: cool
20:12 LessSeen_ joined #gluster
20:14 theron joined #gluster
20:15 stickyboy JoeJulian: If I add more nodes to my replica 2, I know throughput can/will go up, but is it also likely to exacerbate latency issues, like with `ls` type stuff?
20:16 JoeJulian It shouldn't, no.
20:16 JoeJulian Provided you leave it replica 2
20:17 stickyboy JoeJulian: Ah, because clients connect to replica_count of bricks at any given time.
20:17 theron joined #gluster
20:18 JoeJulian clients are always connected to all the bricks.
20:18 JoeJulian But there isn't any additional lag due to checking replication.
20:19 tdasilva__ joined #gluster
20:19 stickyboy JoeJulian: Nice.
20:19 tdasilva___ joined #gluster
20:19 stickyboy JoeJulian: I was troubleshooting my 10GbE performance issues last night, and I found my XFS stride width was mis-calculated for my RAID.
20:20 JoeJulian Nice
20:20 stickyboy Fixing that improved my read speeds by 5-10x.
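[Editor's note: the stride-width fix above comes down to simple arithmetic: XFS's `su` should match the RAID chunk size and `sw` should count only the *data* disks, not the parity disks. A sketch of that calculation, with hypothetical RAID geometry values (not stickyboy's actual setup):]

```python
def xfs_stripe_options(chunk_kib, total_disks, parity_disks):
    """Return (su, sw) for `mkfs.xfs -d su=...,sw=...`:
    su = RAID chunk size, sw = number of data disks."""
    data_disks = total_disks - parity_disks
    return f"{chunk_kib}k", data_disks

# e.g. a RAID6 (2 parity disks) of 10 disks with a 256 KiB chunk:
su, sw = xfs_stripe_options(256, 10, 2)
print(f"mkfs.xfs -d su={su},sw={sw} /dev/sdX")
# counting sw over all 10 disks instead of the 8 data disks is
# exactly the kind of miscalculation that misaligns I/O and
# hurts read throughput
```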
20:20 JoeJulian I guess it's time to format my desktop again.
20:20 stickyboy But writes are super slow because my controllers don't have BBU so they're writing in super safe mode apparently.
20:20 JoeJulian btrfs is not ready for even personal use.
20:22 JoeJulian You can usually override that. I always do. Dual power supplies on separate rails, there's no need for bbu on the controller.
20:22 stickyboy JoeJulian: I've been trying btrfs every year for like 4 years.  Laptop SSD, external USB backups, /home on desktop...
20:22 stickyboy I always end up switching back.
20:24 JoeJulian I keep thinking I should back up my home directory, then I look at all the crap I accumulate in it and decide not to.
20:33 stickyboy Overheard in #btrfs: "btrfs is not ready for even personal use."
20:33 stickyboy LOL
20:34 stickyboy Yep... just switched back from a btrfs RAID1 last night... personal desktop.
20:56 Pupeno joined #gluster
20:59 JoeJulian stickyboy: No, I said that here. It would have been rude to have said that in #btrfs.
21:01 Pupeno_ joined #gluster
21:08 Pupeno joined #gluster
21:09 cjanbanan joined #gluster
21:14 systemonkey joined #gluster
21:16 Pupeno joined #gluster
21:17 LessSeen_ joined #gluster
21:20 Pupeno joined #gluster
21:22 Pupeno_ joined #gluster
21:24 systemonkey joined #gluster
21:26 stickyboy JoeJulian: Oop, yes.
21:26 stickyboy irssi confusion. :P
21:27 ctria joined #gluster
21:27 stickyboy JoeJulian: In other news, I just enabled Write Back on my RAID controllers and write speeds increased by 3-4x.
21:28 stickyboy Now I need to pass by the server room tomorrow and see how my redundant power supplies are configured.  I think they're like yours, on separate rails.
21:29 vimal joined #gluster
21:33 wgoetz joined #gluster
21:36 Pupeno joined #gluster
21:44 Pupeno_ joined #gluster
21:45 Pupeno__ joined #gluster
21:45 purpleidea stickyboy: and maybe add a BBC
21:46 stickyboy purpleidea: Is that a BBU?
21:46 purpleidea battery backed cache
21:46 purpleidea unit or whatever
21:46 wushudoin joined #gluster
21:46 purpleidea when the power gets pulled on your raid, it flushes the writes to disk and stops acknowledging new ones (if it's working properly)
21:47 purpleidea flushes them from it's cache that is
21:47 purpleidea its*
21:47 LessSeen_ joined #gluster
21:48 stickyboy purpleidea: I think I've seen those... I was looking around on LSI's site earlier.
21:48 purpleidea stickyboy: it's standard fare
21:48 Pupeno joined #gluster
21:48 stickyboy purpleidea: Or I could just buy new controllers.
21:48 JoeJulian Or just never lose power.
21:50 wushudoin joined #gluster
21:50 semiosis gotta get me one of these... http://www.hitec-ups.com/
21:50 glusterbot Title: Hitec Power Protection - EN - Diesel Rotary UPS Systems (at www.hitec-ups.com)
21:52 stickyboy JoeJulian: Yeah, if I'm worried about power then that's a really different problem...
21:52 stickyboy If I lose power more than once a year... then we are doing something wrong. haha
21:53 stickyboy Anyways, shit happens, so I'll buy controllers with BBUs in the future.
21:53 purpleidea stickyboy: JoeJulian: the point is that things will eventually always break. so have redundancy. you *will* lose power.
21:53 JoeJulian Meh, that's what barrier is for.
21:53 JoeJulian besides, how safe do you need to be
21:54 JoeJulian I've never had a failure that a bbu would have prevented.
21:55 purpleidea JoeJulian: me, not safe at all :) you should see what i've done with my work laptop :P Fedora20+btrfs+Gnome3.12COPR+airlied-kernel-patches+idk what else :P
21:55 purpleidea JoeJulian: it's a good point. i wonder statistically how often BBU's have helped...
21:56 purpleidea JoeJulian: i bet it helps when people don't act on the low power signals from their UPS.
21:56 stickyboy purpleidea: That's kinda what I thought.
21:56 stickyboy purpleidea: We run alllllllllll our other systems without battery backups and they die all the time.  Workstations, laptops, phones, etc.
21:56 stickyboy Filesystems are getting better...
21:56 purpleidea lol
21:57 Pupeno joined #gluster
21:57 JoeJulian Mine have always been where the ups only has to last until the generators come on.
21:57 stickyboy Now, on "production", where I have redundancy up the wazoo (RAID + Gluster replica), I need BBU?
21:57 stickyboy Seems like overkill.
21:57 stickyboy But I'm not losing $1,000,000 per minute of down time.
22:00 Pupeno_ joined #gluster
22:01 Edddgy joined #gluster
22:03 gehaxelt joined #gluster
22:04 cjanbanan joined #gluster
22:05 Pupeno joined #gluster
22:10 Pupeno_ joined #gluster
22:20 chirino joined #gluster
22:21 Pupeno joined #gluster
22:21 qdk joined #gluster
22:25 LessSee__ joined #gluster
22:36 Edddgy joined #gluster
22:40 MacWinner joined #gluster
22:49 LessSeen_ joined #gluster
23:01 plarsen joined #gluster
23:09 cjanbanan joined #gluster
23:18 gehaxelt joined #gluster
23:19 LessSeen_ joined #gluster
23:20 chirino joined #gluster
23:23 Edddgy joined #gluster
23:31 vimal joined #gluster
23:34 LessSeen_ joined #gluster
23:38 Edddgy joined #gluster
23:43 LessSeen_ joined #gluster
23:56 gmcwhist_ joined #gluster
