
IRC log for #gluster, 2014-01-19


All times shown according to UTC.

Time Nick Message
00:25 jporterfield joined #gluster
00:32 mattappe_ joined #gluster
00:32 TrDS left #gluster
00:34 jporterfield joined #gluster
00:44 Technicool joined #gluster
00:52 mattappe_ joined #gluster
00:54 jporterfield joined #gluster
01:12 RameshN joined #gluster
01:21 sprachgenerator joined #gluster
01:22 GLHMarmot joined #gluster
01:32 jporterfield joined #gluster
01:37 mkzero joined #gluster
01:38 _pol_ joined #gluster
01:58 RicardoSSP joined #gluster
02:13 ZhangHuan joined #gluster
02:33 _pol joined #gluster
02:36 atrius joined #gluster
02:39 jporterfield joined #gluster
02:44 jporterfield joined #gluster
02:45 mattappe_ joined #gluster
02:52 pravka joined #gluster
03:02 sprachgenerator joined #gluster
03:03 jporterfield joined #gluster
03:41 jporterfield joined #gluster
04:08 ZhangHuan joined #gluster
04:11 SFLimey joined #gluster
04:14 mattappe_ joined #gluster
04:20 mattappe_ joined #gluster
04:26 davinder joined #gluster
04:28 SFLimey joined #gluster
04:32 SFLimey Is there a good site or doc on recovering from all of your gluster nodes being hard powered down? All four went down and now I'm unable to start the glusterd service; I'm a little stuck.
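(A minimal first-pass diagnostic sketch for glusterd refusing to start after a hard power-off, assuming a Red Hat-family layout; a common culprit is a truncated state file under /var/lib/glusterd:)

    # run glusterd in the foreground with debug logging to see where it bails out
    glusterd --debug

    # or inspect the daemon log for the failing step
    tail -n 100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log

    # a hard power-off often leaves zero-length files in glusterd's state
    # directory; any hit here is a likely cause of the startup failure
    find /var/lib/glusterd -type f -size 0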
04:42 raghug joined #gluster
05:02 dbruhn joined #gluster
05:03 sprachgenerator joined #gluster
05:12 gmcwhistler joined #gluster
05:38 Cenbe joined #gluster
05:47 mattapperson joined #gluster
05:48 ZhangHuan joined #gluster
05:52 ZhangHuan joined #gluster
05:56 ZhangHuan joined #gluster
06:15 DV joined #gluster
06:18 gmcwhistler joined #gluster
06:21 SFLimey joined #gluster
06:26 sprachgenerator joined #gluster
06:29 psyl0n joined #gluster
06:48 mattappe_ joined #gluster
07:08 dbruhn joined #gluster
07:15 RameshN joined #gluster
07:21 shapemaker joined #gluster
07:22 erik49_ joined #gluster
07:24 tbaror_ joined #gluster
07:31 klaxa joined #gluster
07:44 ekuric joined #gluster
07:58 RameshN joined #gluster
08:17 jporterfield joined #gluster
08:24 ZhangHuan joined #gluster
08:39 itisravi joined #gluster
08:42 SFLimey joined #gluster
08:45 jporterfield joined #gluster
08:50 jporterfield joined #gluster
08:51 mattapperson joined #gluster
08:51 RameshN joined #gluster
08:55 qdk joined #gluster
08:58 jporterfield joined #gluster
09:09 jporterfield joined #gluster
09:20 ZhangHuan joined #gluster
09:25 davinder joined #gluster
09:30 jporterfield joined #gluster
09:40 jporterfield joined #gluster
09:54 jporterfield joined #gluster
10:36 jporterfield joined #gluster
10:47 psyl0n joined #gluster
10:57 jporterfield joined #gluster
11:07 jporterfield joined #gluster
11:16 _pol joined #gluster
11:17 jporterfield joined #gluster
11:53 jporterfield joined #gluster
11:56 mattapperson joined #gluster
11:58 psyl0n joined #gluster
12:01 tryggvil joined #gluster
12:03 jporterfield joined #gluster
12:09 jporterfield joined #gluster
12:17 XpineX joined #gluster
12:30 qdk joined #gluster
13:03 NeatBasis joined #gluster
13:11 psyl0n joined #gluster
13:16 tobira_ joined #gluster
13:17 _pol joined #gluster
13:27 tjikkun_ joined #gluster
13:32 diegows joined #gluster
13:50 ZhangHuan joined #gluster
13:54 zapotah joined #gluster
14:00 mattappe_ joined #gluster
14:05 marcoceppi joined #gluster
14:13 ira joined #gluster
14:16 mattapperson joined #gluster
14:27 robo joined #gluster
14:28 pravka joined #gluster
14:31 tobira_ Hi, I am planning to build Gluster storage for high-load usage. It will consist of big files (1~10 GB), lots of small files (1~128 KB), and medium files (1 MB ~ 300 MB)
14:31 tobira_ the type of ops will be mixed sequential r/w with random reads and seeks along the big files, plus searches across folders with up to 70k~100k files per folder
14:31 tobira_ multiple sessions (20~30) will stream writes at the same time, from block 0, to 1~10 GB files at 120 Mb/s, and 10~20 sessions at 4~5 Mb/s will be writing/reading small-block files
14:31 tobira_ My question: is there any calculator, or rules of thumb I should follow, for hardware: disk type, number of disks recommended per node, controllers, number of nodes, and network (assuming I will use 10G, how many interfaces are needed on each node), CPU, memory, etc.?
14:31 tobira_ Please advise
14:31 tobira_ Thanks
14:37 sprachgenerator joined #gluster
14:55 johnmilton joined #gluster
15:09 dbruhn tobira_ a calculator like this doesn't exist.
15:14 tobira_ ok, so how can I estimate the hardware needed for my load? Is there any guide?
15:18 _pol joined #gluster
15:18 TrDS joined #gluster
15:39 dbruhn tobira_, if you know your IOP requirements, your throughput needs, and the amount of storage you need, that would be of more value to the conversation
15:40 mattappe_ joined #gluster
15:40 dbruhn also knowing the percentage of io that will be read vs write is helpful
15:44 ZhangHuan joined #gluster
15:44 dbruhn You haven't really defined what you want for a type of volume either; is it distributed, replicated, striped, or a combination?
15:57 mattappe_ joined #gluster
15:58 iksik_ joined #gluster
15:58 mattapperson joined #gluster
16:06 tobira_ Thanks dbruhn, I don't have the IO estimation, but I think the load will be around 70% sequential / 40% random; the total throughput estimate is 1200 MB/s, most of it writes
16:06 tobira_ total 150~180 TB of space
16:06 tobira_ the stream type I mentioned at 120 Mb/s is HD video format; the 4~5 Mb/s is a low-res format
16:06 tobira_ and as I wrote, a folder may contain 70~100k files, including small files; we need to browse files fast via a DB for metadata indexing, etc.
16:09 tobira_ I thought about striped mode, but I learned it's not like Isilon, where N+1 or N+2 can sustain 1 or 2 nodes down
16:11 jporterfield joined #gluster
16:12 pravka joined #gluster
16:13 sprachgenerator joined #gluster
16:22 pravka joined #gluster
16:28 mtanner joined #gluster
16:33 mattappe_ joined #gluster
16:36 tobira_ The hardware I had in mind for each node was either a Chenbro or Supermicro 3U chassis with 16 3.5" hot-plug bays, an Adaptec 71605Q controller loaded with 1x 500 GB SSD + 12 SATA III Seagate 3 TB Barracuda drives (ST3000DM001), a Core i5 CPU, 16 GB memory, and dual Intel 10G NICs
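(A back-of-envelope check of the numbers above; the replica level and node count are illustrative assumptions, not recommendations:)

    usable target:      180 TB; replica 2 => 360 TB raw
    3 TB drives:        360 / 3 = 120 drives, before any RAID or hot-spare overhead
    at 12 data drives:  120 / 12 = 10 nodes
    write throughput:   1200 MB/s from clients; replica 2 doubles it on the wire,
                        so ~2400 MB/s lands on the servers, ~240 MB/s (~2 Gbit/s)
                        per node
    network:            a single 10GbE link per node leaves headroom at that rate;
                        the second NIC then buys redundancy more than bandwidth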
16:41 flrichar joined #gluster
16:48 jporterfield joined #gluster
17:08 smellis I'm getting W [socket.c:514:__socket_rwv] 0-vpool1-client-5: readv failed (No data available) in glustershd.log, which I think is why I'm not seeing any healing happening
17:08 smellis can anyone point me in the right direction?
17:09 samppah smellis: is it able to connect to all servers?
17:09 smellis gluster volume status shows everything online
17:10 smellis does the self heal daemon need to talk directly to the other servers?
17:15 smellis also see this in the etc-gluster-... log:  
17:15 smellis I [glusterd-handler.c:3260:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vpool1
17:16 smellis sorry, I meant this: W [glusterd-op-sm.c:3237:glusterd_op_modify_op_ctx] 0-management: op_ctx modification failed
17:16 smellis in my lab environment healing worked like a champ
17:16 smellis not sure what the difference is here
17:18 jporterfield joined #gluster
17:18 _pol joined #gluster
17:19 smellis running 3.4.2 but glustershd.log has this message: I [client-handshake.c:1659:select_server_supported_programs] 0-vpool1-client-5: Using Program GlusterFS 3.3, Num (1298437), Version (330)
17:19 smellis what's that about?
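(The self-heal daemon connects to every brick as a client, so a readv failure in glustershd.log usually means one brick connection dropped. The "Using Program GlusterFS 3.3" line is, as far as I know, just the RPC protocol version, which did not change between 3.3 and 3.4; it is informational, not an error. The usual checks, reusing the volume name from the log:)

    # confirm the Self-heal Daemon shows Online for every node
    gluster volume status vpool1

    # list entries still pending heal, and entries that failed to heal
    gluster volume heal vpool1 info
    gluster volume heal vpool1 info heal-failed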
17:22 rotbeard joined #gluster
17:24 chirino joined #gluster
17:35 flrichar joined #gluster
17:38 vpshastry joined #gluster
17:47 jporterfield joined #gluster
18:04 smellis is anyone available to help me troubleshoot self heal?
18:06 RedShift joined #gluster
18:06 gmcwhistler joined #gluster
18:08 gmcwhistler joined #gluster
18:12 aurigus joined #gluster
18:16 TheDingy joined #gluster
18:25 TrDS left #gluster
18:29 vpshastry left #gluster
18:31 psyl0n joined #gluster
18:56 jporterfield joined #gluster
18:58 klaas joined #gluster
19:02 smellis well crap, that host has storage issues
19:02 sprachgenerator joined #gluster
19:19 _pol joined #gluster
19:45 robo joined #gluster
19:54 morsik hi... I have a possibly simple question
19:54 morsik how can I (correctly) edit a volfile?
19:54 morsik I would like to set the io-cache cache-timeout, but I don't see that it's possible from the 'gluster' command line
19:54 morsik should I stop the volume, edit the volfile, and start the volume? Or is there another way to do this?
19:59 Amanda joined #gluster
20:05 smellis morsik: I think you need to use gluster volume set <volname> <option> <value>
20:06 morsik smellis: i've tried...
20:07 smellis ah ok
20:07 morsik blind guessing actually... http://pastebin.com/nBpeV0ML
20:07 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
20:07 smellis yeah I don't see those options documented
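(The io-cache timeout should be settable through the normal volume-set interface, so no volfile editing is needed; hand-edited volfiles get regenerated by glusterd on the next configuration change anyway. A sketch, assuming the 3.4-era option name and an illustrative volume called myvol:)

    # io-cache's cache-timeout is exposed to the CLI as cache-refresh-timeout (seconds)
    gluster volume set myvol performance.cache-refresh-timeout 2

    # list the options the CLI knows about, with their defaults
    gluster volume set help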
20:16 Amanda joined #gluster
20:32 iksik joined #gluster
20:32 klaas joined #gluster
21:05 robo joined #gluster
21:20 _pol joined #gluster
21:25 smellis ok, having heal-failed issues?
21:25 smellis I am having heal-failed issues
21:25 smellis what should I look at?
21:30 SFLimey joined #gluster
21:33 smellis glustershd.log is showing this  W [socket.c:514:__socket_rwv] 0-vpool1-client-0: readv failed (No data available)
21:33 smellis not sure what that is
21:48 MugginsM joined #gluster
21:55 _pol joined #gluster
21:57 TrDS joined #gluster
22:03 badone joined #gluster
22:20 smellis ok, I think the self-heal daemon is taking a long time to crawl, and that's why I'm not seeing any self-heal happening
22:24 smellis anyone around?
22:28 MugginsM yeah, can't really help :)
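(If the index crawl is the bottleneck, a heal can be kicked off by hand; a sketch reusing the volume name from earlier in the log:)

    # heal only the files the index has marked as needing it
    gluster volume heal vpool1

    # or force a full crawl of the whole volume (slow on large volumes)
    gluster volume heal vpool1 full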
22:31 jporterfield joined #gluster
22:41 robo joined #gluster
23:16 jporterfield joined #gluster
23:32 robo joined #gluster
