IRC log for #gluster, 2014-12-20


All times shown according to UTC.

Time Nick Message
00:01 ninkotech joined #gluster
00:01 ninkotech_ joined #gluster
00:07 fandi joined #gluster
00:38 XpineX joined #gluster
01:10 XpineX joined #gluster
01:31 XpineX joined #gluster
02:05 an joined #gluster
02:09 elico joined #gluster
02:38 fandi joined #gluster
02:47 prg3 joined #gluster
02:56 XpineX joined #gluster
03:10 noddingham joined #gluster
03:17 XpineX joined #gluster
03:19 semiosis @ubuntu
03:22 semiosis @learn ubuntu as semiosis is gearing up to improve the ubuntu (and debian) packages. if you have an interest in glusterfs packages for ubuntu please ping semiosis. if you have an issue or bug to report regarding the ubuntu packages (even if you've already told semiosis about it) please open an issue on github: https://github.com/semiosis/glusterfs-debian
03:22 glusterbot semiosis: The operation succeeded.
03:22 semiosis @ubuntu
03:22 glusterbot semiosis: semiosis is gearing up to improve the ubuntu (and debian) packages. if you have an interest in glusterfs packages for ubuntu please ping semiosis. if you have an issue or bug to report regarding the ubuntu packages (even if you've already told semiosis about it) please open an issue on github: https://github.com/semiosis/glusterfs-debian
03:24 semiosis also I just now published glusterfs packages to the PPAs for glusterfs 3.4.6, 3.5.3, and 3.6.1 for ubuntu precise & utopic (trusty packages were already there)
03:25 semiosis which reminds me...
03:25 semiosis @ppa
03:25 glusterbot semiosis: The official glusterfs packages for Ubuntu are available here: STABLE: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh -- QA: 3.4: http://goo.gl/B2x59y 3.5: http://goo.gl/RJgJvV 3.6: http://goo.gl/ncyln5 -- QEMU with GlusterFS support: http://goo.gl/e8IHnQ (3.4) & http://goo.gl/tIJziO (3.5)
03:25 semiosis @forget ppa
03:25 glusterbot semiosis: The operation succeeded.
03:26 semiosis @learn ppa as The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster
03:26 glusterbot semiosis: The operation succeeded.
03:26 semiosis @ppa
03:26 glusterbot semiosis: The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster
03:26 semiosis @forget ppa
03:26 glusterbot semiosis: The operation succeeded.
03:27 semiosis @learn ppa as The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
03:27 glusterbot semiosis: The operation succeeded.
03:27 semiosis @ppa
03:27 glusterbot semiosis: The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
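Installing from one of those PPAs on Ubuntu is normally just a matter of adding the PPA and installing the package; a minimal sketch, assuming the PPA names follow the launchpad.net/~gluster convention (e.g. ppa:gluster/glusterfs-3.6):

    sudo add-apt-repository ppa:gluster/glusterfs-3.6   # PPA name assumed from launchpad.net/~gluster
    sudo apt-get update
    sudo apt-get install glusterfs-server                # use glusterfs-client on client-only machines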
03:32 XpineX joined #gluster
03:36 jaank joined #gluster
04:00 harish joined #gluster
04:02 bala joined #gluster
04:25 bala joined #gluster
04:31 XpineX joined #gluster
04:55 bala joined #gluster
05:01 XpineX joined #gluster
05:04 glusterbot News from resolvedglusterbugs: [Bug 1138897] NetBSD port <https://bugzilla.redhat.com/show_bug.cgi?id=1138897>
05:09 lalatenduM joined #gluster
05:15 shubhendu joined #gluster
05:50 newdave joined #gluster
05:57 XpineX joined #gluster
06:03 soumya joined #gluster
06:09 an joined #gluster
06:11 sputnik13 joined #gluster
06:13 bala joined #gluster
06:16 XpineX joined #gluster
06:25 Gorian joined #gluster
07:03 an joined #gluster
07:12 georgeh_ joined #gluster
07:12 frankS2 joined #gluster
07:13 morsik joined #gluster
07:14 an joined #gluster
07:18 masterzen joined #gluster
07:18 ultrabizweb joined #gluster
07:19 pdrakeweb joined #gluster
07:21 soumya joined #gluster
07:22 sputnik13 joined #gluster
07:24 ultrabizweb joined #gluster
07:26 JordanHackworth joined #gluster
07:27 ninkotech__ joined #gluster
07:28 johnnytran joined #gluster
07:30 sadbox joined #gluster
07:32 strata joined #gluster
08:09 rafi joined #gluster
08:15 Gorian joined #gluster
08:37 an joined #gluster
08:46 kovshenin joined #gluster
08:49 Arrfab joined #gluster
08:51 ricky-ticky joined #gluster
08:57 an joined #gluster
09:00 sputnik13 joined #gluster
09:23 an joined #gluster
09:25 sputnik13 joined #gluster
09:32 bala joined #gluster
09:32 XpineX joined #gluster
09:50 an joined #gluster
10:04 TrDS joined #gluster
10:35 glusterbot News from resolvedglusterbugs: [Bug 1176309] glfs_h_creat() leaks file descriptors <https://bugzilla.redhat.com/show_bug.cgi?id=1176309>
10:35 glusterbot News from resolvedglusterbugs: [Bug 1176310] glfs_h_creat() leaks file descriptors <https://bugzilla.redhat.com/show_bug.cgi?id=1176310>
10:35 glusterbot News from resolvedglusterbugs: [Bug 1176311] glfs_h_creat() leaks file descriptors <https://bugzilla.redhat.com/show_bug.cgi?id=1176311>
10:40 coredump joined #gluster
11:21 scottymeuk joined #gluster
11:21 scottymeuk Hey, just a quick question. I have around 160gb of 1gb+ files that i need to store and serve (can be big spikes in downloads, 8tb over 4 days for example). What setup do you guys suggest?
11:42 uebera|| joined #gluster
11:42 uebera|| joined #gluster
11:58 LebedevRI joined #gluster
12:11 ghenry joined #gluster
12:20 partner_ l0uis: i'm not saying gluster is lying, i said they all report the same numbers. the question is why the brick is full while os reports there is space and inodes, no open files
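The symptom partner_ describes (writes failing although the OS shows free blocks and inodes) is usually narrowed down with a few checks on the brick filesystem and on what gluster reports per brick; a hedged sketch, where the brick path /bricks/brick1 and the volume name myvol are invented for illustration:

    df -h /bricks/brick1                  # block usage on the brick filesystem
    df -i /bricks/brick1                  # inode usage
    lsof +D /bricks/brick1                # deleted-but-still-open files holding space
    gluster volume status myvol detail    # free space and inodes as gluster sees each brick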
12:48 sac joined #gluster
13:22 jaank joined #gluster
13:59 T3 joined #gluster
14:05 calum_ joined #gluster
14:06 deniszh joined #gluster
14:22 Gorian joined #gluster
14:22 M28_ joined #gluster
14:40 SOLDIERz joined #gluster
14:45 M28 joined #gluster
14:46 Danishman joined #gluster
14:47 kovshenin joined #gluster
14:48 M28 joined #gluster
15:02 plarsen joined #gluster
15:06 pcaruana joined #gluster
15:27 sputnik13 joined #gluster
15:33 ricky-ticky1 joined #gluster
15:47 elico joined #gluster
15:49 l0uis partner_: but you pasted a df output showing the os displaying the brick as being full
15:49 l0uis partner_: gluster doesn't report that, the os does. unless im mistaken on what you pasted
15:51 l0uis scottymeuk: by 'what setup' what do you mean exactly? :)
15:51 l0uis scottymeuk: distributed volume w/ replica 2 or more if you want redundancy? :)
15:52 scottymeuk Hey l0uis. Basically i have no experience with Gluster at all (other than the 2 server replica set i just set up to try it out), and i was looking for some advice. I need that 160gb to be available on all servers, but i dont want each server to have to have 160gb space. Want to try and shard it out or something
15:53 scottymeuk From what i have read, it sounds like i need replica + striped. But that would be a huge amount of storage space, so just trying to work it out
16:02 l0uis scottymeuk: you want a distributed volume
16:02 l0uis scottymeuk: does the storage system provide redundancy or do you want gluster to do it?
16:03 l0uis scottymeuk: if you want gluster to do it, you make it a replicate volume as well, and specify how many replicas... 2 or 3 or whatever.
16:03 scottymeuk I am starting from nothing, so which ever you think is easier/best
16:04 l0uis (I'm no expert)
16:05 l0uis I run a 5 node 10 brick gluster setup w/ about 25 TB usable.. in the grand scheme it's small, but we hit it pretty hard
16:05 l0uis we use distributed replica 2 ...
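A distributed replica 2 volume like the one l0uis describes is created by listing bricks in replica-set order (consecutive bricks form a replica pair); a minimal sketch, with host names and brick paths invented for illustration:

    gluster peer probe server2
    gluster volume create myvol replica 2 \
        server1:/bricks/b1 server2:/bricks/b1 \
        server1:/bricks/b2 server2:/bricks/b2
    gluster volume start myvol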
16:05 scottymeuk Thanks for your help by the way
16:05 scottymeuk So
16:06 scottymeuk Say i had that setup, would it be easy to add extra capacity should i need it?
16:06 l0uis if you are on a 1gbit network you can do ~ 120 MB / second read. to sustain 8 TB xfer over 4 days you need 25 MB / second. That's easily doable on 1 node, so on a distributed gluster setup it will be cake
16:06 l0uis yes
16:06 l0uis you just add bricks
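Bricks are added to a distributed replica 2 volume in multiples of the replica count, followed by a rebalance so existing data spreads onto the new bricks; a sketch using the same hypothetical names as above:

    gluster volume add-brick myvol server3:/bricks/b1 server4:/bricks/b1
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status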
16:06 scottymeuk and how do you handle adding of files? Do you just add to any server? Or do i need to pick one with lowest amount of data
16:06 scottymeuk Sorry for the silly questions
16:07 l0uis yes, gluster unifies this.
16:07 scottymeuk Going to be running this on Digital Ocean, say i needed 200gb to start with, what specs would you choose from: http://digitalocean.com/pricing
16:07 l0uis so my 5 node 10 brick gluster setup looks like a single volume
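Clients see that single volume by mounting it with the native FUSE client, pointing at any one of the peers; a sketch, names hypothetical:

    sudo mkdir -p /mnt/gluster
    sudo mount -t glusterfs server1:/myvol /mnt/gluster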
16:07 scottymeuk Ahh awesome. This sounds perfect. And what happens if one server dies in that case?
16:07 l0uis if you have replica > 1 then the other nodes continue to serve the data
16:08 l0uis and you take corrective action to bring the bricks back online or replace them.
16:08 l0uis then self-heal runs
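Once the failed brick is back, heal progress can be checked and triggered from the CLI; a sketch, volume name hypothetical:

    gluster volume status myvol      # confirm all brick processes are online
    gluster volume heal myvol info   # files still pending heal
    gluster volume heal myvol        # trigger an index self-heal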
16:08 scottymeuk That is awesome :)
16:09 scottymeuk How does gluster handle serving of files, in terms of speed etc
16:09 scottymeuk Get spikes of traffic, and bandwidth can be upwards of 5tb/day
16:09 l0uis gluster is just a file system. you serve the data however you want
16:10 scottymeuk yeah, i shall be doing it via nginx, but obviously it will need to pull data from multiple servers wont it?
16:10 l0uis i would not use a striped volume for a 1 GB file
16:10 l0uis there really isn't any benefit
16:10 l0uis so a single file is stored in whole on a single brick
16:10 l0uis we easily saturate our network (2x1gbit bonded)
16:11 l0uis if gluster is ever a bottleneck its on meta data operations
16:11 l0uis so be mindful of that.
16:11 l0uis also, when writing data your write bandwidth is divided by the # of replicas
16:11 scottymeuk Ok, so striping it would just saturate it even more
16:11 l0uis so if you have replica 2, you can write at most 60 MB/second since it must go to two servers
16:11 l0uis striping just complicates things from my point of view
16:11 l0uis im sure for some workloads it makes sense
16:11 scottymeuk Yeah, writing is not a huge issue, it's backgrounded so it can take as long as it needs really
16:11 l0uis we have 30-100 GB files and are happy w/ the performance we have atm
16:12 scottymeuk That's awesome, thank you
16:13 l0uis if you only have 160 GB of data is it really worth running gluster?
16:13 scottymeuk I am happy to look into other options
16:14 scottymeuk But i cannot think of another way, without having it all on 1 server, which seems like a bad idea
16:14 l0uis rsync, drbd, whatever.
16:14 scottymeuk Also needs to be totally automated
16:15 scottymeuk And splitting them out so that bandwidth is not all hammering one server
16:15 l0uis gluster will do just fine, but there might be simpler solutions. i dont know what you're trying to do so i cant make a value judgement
16:16 l0uis apart from: gluster has served me well
16:16 scottymeuk Basically
16:16 scottymeuk romhut.com
16:16 l0uis but it is more complicated than a periodic rsync or running drbd
16:16 scottymeuk Developers upload roms. And then people download them
16:16 scottymeuk For the past 2 years
16:16 scottymeuk we have run from a NAS, and have multiple nginx proxy servers which cache the files
16:16 scottymeuk But the NAS is being shut down, so we need to figure out a way to store the files, and scale it
16:16 l0uis how much data is written each day
16:16 M28_ joined #gluster
16:17 scottymeuk not a lot at all
16:17 l0uis why not store it and serve it from S3?
16:17 scottymeuk Huge costs
16:17 scottymeuk i have unlimited bandwidth from Digital ocean, so it makes much more financial sense to put on there
16:17 scottymeuk Romhut makes very little money (no profit) :(
16:18 coredump|br joined #gluster
16:18 l0uis ok so run multiple DO nodes, use the 320/mo plan, run 2 of them. run drbd or rsync to keep data in sync between a master and slave.
16:18 scottymeuk the issue there is bandwidth
16:18 l0uis nginx proxies to both master/slave to serve data, but writes always happen on master. simple setup, easy to debug.
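The master-to-slave sync l0uis sketches can be as simple as a periodic rsync run from the master; a minimal sketch, with paths and hostnames invented for illustration:

    # on the master, e.g. from cron every 5 minutes:
    # */5 * * * * rsync -a --delete /srv/roms/ slave.example.com:/srv/roms/
    rsync -a --delete /srv/roms/ slave.example.com:/srv/roms/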
16:18 scottymeuk Hmm ok, so just the same setup as we have now, but have DO acting as the NAS
16:19 DV joined #gluster
16:19 l0uis well, there is no nas. each of your servers will serve from local disk on DO
16:19 scottymeuk i was trying to work out a way to get rid of the proxy servers if possible
16:19 l0uis and you have a simple setup to replicate data as it is written
16:19 l0uis well you need to proxy traffic for download right?
16:19 l0uis so it doesn't all hit the same node
16:19 scottymeuk Yeah yeah, sorry i was just working it out in my head, didnt mean NAS specifically
16:19 l0uis i mean, however you want to distribute those requests doesn't matter
16:19 msciciel_ joined #gluster
16:20 _br_- joined #gluster
16:20 gothos1 joined #gluster
16:20 l0uis dont get me wrong, gluster will work fine too.. but i wouldn't run it for 160gb of data
16:20 Guest25082 joined #gluster
16:20 loki27_ joined #gluster
16:20 scottymeuk Yeah :)
16:20 DJCl34n joined #gluster
16:21 scottymeuk I guess i was just trying to combine lots of stuff, so get rid of the proxy servers, and have the ability to expand to bigger data easily
16:21 lanning_ joined #gluster
16:22 ndk_ joined #gluster
16:23 l0uis you still need proxies if you use gluster, assuming you want multiple nodes to serve data
16:23 side_con1rol joined #gluster
16:23 samppah_ joined #gluster
16:23 scottymeuk I was going to have local nginx instances and just round robin the DNS
16:24 scottymeuk or we currently have a consistent hashing algorithm that sends specific file requests to the same server all the time (as we cache them atm)
16:24 l0uis you can do that w/ an rsync/drbd setup too
16:24 l0uis if you're confident you wont have conflicts
16:24 l0uis you can probably have both nodes handle writes
16:25 scottymeuk and just rsync between them each way?
16:25 l0uis nod
16:26 l0uis but if write volume is low its easier and simpler to have 1 node do it
16:26 mikedep3- joined #gluster
16:26 dockbram_ joined #gluster
16:26 l0uis or run gluster and let it handle all that but do it knowing you will have the extra layer to admin
16:27 LebedevRI_ joined #gluster
16:27 scottymeuk Yeah i was going to say that, is that basically the same as having a volume with a replica on it?
16:27 scottymeuk But yeah rsync simplifies that
16:27 scottymeuk digitalocean does not seem to support drbd from what i can see
16:28 l0uis it would be the functional equivalent of replica 2
16:28 l0uis distributed replica 2
16:29 l0uis with DO, whats the latency between nodes? are you guaranteed to be near the other node? w/ gluster you pay a latency price for every single meta data op.
16:29 fandi_ joined #gluster
16:30 Arminder joined #gluster
16:30 coredump joined #gluster
16:31 glusterbot` joined #gluster
16:31 jvandewege_ joined #gluster
16:31 M28___ joined #gluster
16:33 frankS2_ joined #gluster
16:33 RobertLaptop_ joined #gluster
16:34 Andreas-IPO joined #gluster
16:34 kovsheni_ joined #gluster
16:34 glusterbot joined #gluster
16:35 tg2_ joined #gluster
16:36 mikedep333 joined #gluster
16:37 uebera|| joined #gluster
16:37 uebera|| joined #gluster
16:37 Bosse_ joined #gluster
16:37 scottymeuk l0uis: i think it varies
16:38 scottymeuk You are not guaranteed to be on the same node
16:38 scottymeuk but the internal transfer has always been pretty good
16:38 scottymeuk Like i said though, we could background it so it's not a huge issue. Download wont be marked as downloadable until the rsync has completed
16:39 edong23 joined #gluster
16:41 yoavz- joined #gluster
16:41 saltsa joined #gluster
16:42 Rydekull_ joined #gluster
16:43 DJCl34n joined #gluster
16:43 sac`away` joined #gluster
16:43 side_con2rol joined #gluster
16:43 SmithyUK joined #gluster
16:43 side_control joined #gluster
16:43 DJClean joined #gluster
16:44 pdrakeweb joined #gluster
16:44 vincent_vdk joined #gluster
16:44 ron-slc_ joined #gluster
16:44 msciciel joined #gluster
16:44 ghenry joined #gluster
16:45 lanning joined #gluster
16:45 ghenry joined #gluster
16:45 masterzen joined #gluster
16:45 tom[] joined #gluster
16:45 owlbot joined #gluster
16:45 georgeh_ joined #gluster
16:46 Intensity joined #gluster
16:46 sac`away joined #gluster
16:46 crashmag joined #gluster
16:46 hchiramm_ joined #gluster
16:49 CyrilPeponnet joined #gluster
16:49 ws2k3 joined #gluster
16:49 social joined #gluster
16:49 DV joined #gluster
16:49 samppah joined #gluster
16:49 ws2k3 joined #gluster
16:50 Intensity joined #gluster
16:50 m0zes joined #gluster
16:50 frankS2 joined #gluster
16:51 kalzz joined #gluster
16:51 malevolent joined #gluster
16:52 sac`away joined #gluster
17:20 dastar joined #gluster
17:46 T3 joined #gluster
18:06 TrDS1 joined #gluster
18:12 T3 joined #gluster
18:17 TrDS1 joined #gluster
18:18 TrDS1 left #gluster
18:30 newdave joined #gluster
18:39 shubhendu joined #gluster
18:43 hchiramm joined #gluster
18:44 fleducquede joined #gluster
18:54 Arminder joined #gluster
19:37 glusterbot News from newglusterbugs: [Bug 1094724] mkdir/rmdir loop causes gfid-mismatch on a 6 brick distribute volume <https://bugzilla.redhat.com/show_bug.cgi?id=1094724>
19:57 side_control joined #gluster
20:05 jaank joined #gluster
20:07 glusterbot News from newglusterbugs: [Bug 1175551] Intermittent open() failure after creating a directory <https://bugzilla.redhat.com/show_bug.cgi?id=1175551>
20:37 glusterbot News from newglusterbugs: [Bug 1101479] glusterfs process crash when creating a file with command touch on stripe volume <https://bugzilla.redhat.com/show_bug.cgi?id=1101479>
20:37 m0zes joined #gluster
20:44 TrDS left #gluster
20:54 dastar joined #gluster
20:57 ProT-0-TypE joined #gluster
21:22 DV joined #gluster
21:37 glusterbot News from newglusterbugs: [Bug 1065618] gluster 3.5 hostname shows localhostname as localhost in gluster pool list command <https://bugzilla.redhat.com/show_bug.cgi?id=1065618>
21:48 calum_ joined #gluster
21:58 ProT-0-TypE joined #gluster
22:07 glusterbot News from newglusterbugs: [Bug 1176354] "gluster pool list" does not show the real hostname of the host executing the command <https://bugzilla.redhat.com/show_bug.cgi?id=1176354>
22:07 glusterbot News from resolvedglusterbugs: [Bug 1115850] libgfapi-python fails on discard() and fallocate() due to undefined symbol <https://bugzilla.redhat.com/show_bug.cgi?id=1115850>
22:14 partner_ l0uis: i never pasted anything showing brick would be full. that would have been easy to resolve
22:15 partner_ touch: cannot touch `foo': No space left on device
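For an ENOSPC like this with apparently free blocks and inodes, one thing worth ruling out on ext-based bricks is the root-reserved block count (XFS bricks have no such reserve); a hedged check, device name illustrative:

    sudo tune2fs -l /dev/sdb1 | grep -i 'reserved block'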
22:42 chirino joined #gluster
23:18 T3 joined #gluster
23:37 glusterbot News from newglusterbugs: [Bug 1115648] Server Crashes on EL5/32-bit <https://bugzilla.redhat.com/show_bug.cgi?id=1115648>
23:38 jaank joined #gluster
