
IRC log for #gluster, 2014-12-25


All times shown according to UTC.

Time Nick Message
00:29 social joined #gluster
01:21 bala joined #gluster
01:30 DV joined #gluster
01:46 msmith joined #gluster
02:04 itisravi joined #gluster
02:07 haomaiwa_ joined #gluster
02:45 itisravi joined #gluster
02:46 msmith joined #gluster
03:00 poornimag joined #gluster
03:24 itisravi joined #gluster
04:06 shubhendu joined #gluster
04:26 suman_d joined #gluster
04:35 msmith joined #gluster
04:39 shubhendu joined #gluster
04:41 shubhendu_ joined #gluster
04:47 msmith joined #gluster
04:50 shubhendu__ joined #gluster
04:59 msmith joined #gluster
05:50 shubhendu__ joined #gluster
05:59 NuxRo joined #gluster
06:00 stickyboy joined #gluster
06:01 SmithyUK joined #gluster
06:13 msmith joined #gluster
06:58 daMaestro|isBack joined #gluster
06:59 Tanay joined #gluster
06:59 suman_d Tanay, ping
06:59 glusterbot suman_d: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
07:19 Tanay_g joined #gluster
07:19 suman_d Tanay_g, ping
07:19 glusterbot suman_d: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
07:20 Tanay_g hi
07:20 glusterbot Tanay_g: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
07:22 suman_d joined #gluster
07:30 SOLDIERz joined #gluster
07:47 badone joined #gluster
08:15 badone joined #gluster
08:15 nbalacha joined #gluster
08:18 nbalacha joined #gluster
08:41 zerick joined #gluster
08:45 badone joined #gluster
09:17 SOLDIERz joined #gluster
09:29 Bardack joined #gluster
09:36 kovshenin joined #gluster
10:01 elico joined #gluster
10:22 SOLDIERz joined #gluster
10:51 badone joined #gluster
11:01 itisravi joined #gluster
11:04 elico joined #gluster
11:17 fsimonce joined #gluster
11:20 badone joined #gluster
11:35 elico joined #gluster
11:53 tdasilva joined #gluster
12:09 LebedevRI joined #gluster
13:17 suman_d joined #gluster
13:22 calisto joined #gluster
13:27 nbalacha joined #gluster
13:29 kovshenin joined #gluster
13:29 calisto joined #gluster
13:52 soumya joined #gluster
13:58 nbalacha joined #gluster
14:40 hagarth joined #gluster
14:54 ama Hello.  Are you around, semiosis?
14:58 ama I am prepared to re-create a gluster volume out of six bricks which already have data on them.  I need to know if the order of the bricks in the old volume was the one that was stored in the 'volume.info' file, please?
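[Editor's note: on glusterd-managed servers, the brick order of a volume is recorded in the volume's info file under /var/lib/glusterd, with bricks numbered in the order they were given at volume-create time. A minimal sketch, assuming a hypothetical volume named myvol; the exact path and key format may differ between gluster versions:]

```shell
# Brick order as glusterd recorded it: brick-0 is the first brick
# passed to 'gluster volume create', brick-1 the second, and so on.
grep '^brick-' /var/lib/glusterd/vols/myvol/info
```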
15:18 kovshenin joined #gluster
15:28 tdasilva joined #gluster
15:55 ama I'm trying to mount my gluster volume using the glusterfs protocol and it won't mount.  I can mount it using NFS though.  Any idea, please?
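[Editor's note: a sketch of the usual native-mount troubleshooting steps for this situation; server1, myvol, and the mount point are placeholders:]

```shell
# Typical native (FUSE) mount of a gluster volume.
mount -t glusterfs server1:/myvol /mnt/gluster

# When the native mount fails but NFS works, the client log usually
# says why (blocked port, version mismatch, missing fuse module).
# The log file is named after the mount point, slashes become dashes:
tail -n 50 /var/log/glusterfs/mnt-gluster.log
```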
16:04 calisto joined #gluster
16:23 hagarth joined #gluster
16:42 msmith_ joined #gluster
16:54 vimal joined #gluster
17:01 _polto_ joined #gluster
17:02 _polto_ Some mailing lists say that the BDB storage type was removed, but the current documentation still talks about BDB ...  http://www.gluster.org/community/documentation/index.php/Translators/storage
17:02 _polto_ any idea if it is really supported ?
17:03 _polto_ is it supported via CLI ?
17:11 calisto1 joined #gluster
17:12 hagarth _polto_: no, bdb has not been touched in a while and is not supported
17:17 _polto_ hagarth: thanks. are there any other DB storage solutions in Gluster ?
17:19 _polto_ I have millions of jpeg files, feature and descriptor files for each image, and some related GPS data.. I currently split images into directories of at most 90'000 files for performance reasons. doing "ls" in a directory takes a while...
17:20 msmith_ joined #gluster
17:58 hagarth _polto_: no other DB backends for Gluster, but there are ongoing enhancements to improve performance for small files & readdir is one area that is being looked into extensively. expect some of these enhancements to land in glusterfs 3.7.
17:59 hagarth _polto_: if you are using 3.5 or later, you can also enable readdir-ahead to improve performance for readdir operations. cluster.readdir-optimize tunable has also helped in improving performance for readdir in a few deployments.
18:23 _polto_ thank you hagarth
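[Editor's note: the two tunables hagarth mentions are set per volume via the gluster CLI. A minimal sketch, assuming a hypothetical volume named myvol on glusterfs 3.5 or later:]

```shell
# Prefetch directory entries so readdir() calls return faster.
gluster volume set myvol performance.readdir-ahead on

# Let each DHT subvolume skip entries it does not own during readdir.
gluster volume set myvol cluster.readdir-optimize on
```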
18:23 plarsen joined #gluster
18:58 msmith_ joined #gluster
19:52 kovshenin joined #gluster
20:11 Sunghost joined #gluster
20:21 Sunghost Hello and happy x-mas to all. I use glusterfs as a distributed volume on raid6 nodes. I just read about the new volume type disperse, but can't find any good documentation.
20:21 Sunghost As far as I understand it's like raid5 for now, with plans for raid6 and other modes later
20:22 Sunghost So my plan is to use each disk on my nodes for disperse, with no raid6 on the disks.
20:22 Sunghost But how does disperse work? Over nodes or bricks?
20:23 calisto joined #gluster
20:27 ndevos Sunghost: disperse is like raid5 over the network, you need at least 3 bricks, and it is advised to have those bricks on different storage servers
20:34 Sunghost ok, of course it's better to use different servers
20:34 Sunghost i currently have 2 servers, each with 12 hdds in raid6
20:35 Sunghost i want to rebuild that without losing much diskspace
20:35 Sunghost so i could set up each disk as a brick and put them all together into one disperse volume - will that work?
20:36 ndevos yes, that should work
20:36 ndevos see http://www.gluster.org/community/documentation/index.php/Features/disperse for a little more details
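[Editor's note: a sketch of how the setup discussed here would be created; volume name, brick paths, and server names are placeholders. With disperse 3 and redundancy 1 the volume has the usable capacity of 2 bricks and survives the loss of any single brick, raid5-style:]

```shell
# One brick per server, erasure-coded across three servers.
gluster volume create dispvol disperse 3 redundancy 1 \
    server1:/data/brick1 server2:/data/brick1 server3:/data/brick1
gluster volume start dispvol
```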
20:37 Sunghost found that already, but it doesn't have enough information for me
20:38 Sunghost how stable is disperse, since it is new in 3.6?
20:38 ndevos if you need more details, send an email to the list, several developers/users should be able to answer your questions
20:38 Sunghost i read somewhere that disperse with raid6-like redundancy will come soon - any information when?
20:38 Sunghost ok, will do if yours are not enough
20:39 ndevos quite some bugs have been found recently, and I think many backports have been done to get included in the upcoming 3.6.2 version
20:40 Sunghost ok, found some info on the list for raid6-like setups, which is decided by the number of bricks - if i understand it right
20:41 Sunghost has anyone used disperse mode? any experience with it - how does it perform e.g.
20:41 Sunghost yes found a bug report for renaming files
20:44 ndevos these are all the bugs for disperse in 3.6 (some have been closed already with current releases):
20:44 ndevos https://bugzilla.redhat.com/buglist.cgi?component=disperse&f1=version&list_id=3115310&o1=regexp&product=GlusterFS&query_format=advanced&v1=^3.6
20:45 ndevos I do not know if there are any production deployments using disperse, I only heard about people testing it
20:46 ndevos and well, performance is said to be relatively low, but I guess that is to be expected - no doubt that there are plans to improve it
20:46 ndevos and well, the performance you need and deem acceptable really depends on your own use-case and workload :)
20:48 Sunghost ok, understood - that buglist for disperse doesn't read like it's stable enough to use ;(. not good - so i will stay with my concept of raid6 on each server and everything in distribute. disperse sounds good and will hopefully get there in future
20:51 ndevos yeah, I think disperse is not very stable yet, but if you find the time to test it and report issues, that would help!
20:53 Sunghost yes i would like to help, but i sadly have no time for that. i like gluster and the community is mostly helpful ;) but time is not on my side
20:54 Sunghost is there any better way to build up a glusterfs than my current concept? maximum speed and disk usage with a low risk of losing data or diskspace to disk failure?
20:54 SOLDIERz joined #gluster
20:55 ndevos sure, no problem, just bookmark that bugzilla link, new disperse bugs will automatically get added to that list - and the status should get moved to CLOSED ;)
20:56 ndevos no, not really, 12 disks per raid6 and two servers would make up a 2-brick distribute volume, that sounds ok
20:57 ndevos unless you want to make sure to have the data available on both servers, then you would need to use 4 bricks, and create a distribute-replicate volume
20:58 ndevos well, or just replicate, no need to distribute in that case
21:00 Sunghost yes, for safety i think replicate would be a good choice too. ok, thanks for the information and help - good group - good project, good people - wish you all the best
21:02 ndevos thanks, and same to you!
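[Editor's note: the two layouts ndevos suggests would be created roughly as below; volume names, server names, and brick paths are placeholders. In a replica set, bricks are grouped in the order given, so consecutive bricks should sit on different servers:]

```shell
# Pure replication across the two servers (every file on both):
gluster volume create repvol replica 2 \
    server1:/data/brick1 server2:/data/brick1

# Distribute-replicate with two bricks per server (2x2):
# files are distributed over two replica pairs.
gluster volume create drvol replica 2 \
    server1:/data/brick1 server2:/data/brick1 \
    server1:/data/brick2 server2:/data/brick2
```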
21:50 diegows joined #gluster
22:27 sage_ joined #gluster
22:35 tdasilva joined #gluster
23:18 doekia joined #gluster
23:57 doubt joined #gluster
