
IRC log for #gluster, 2016-11-25


All times shown according to UTC.

Time Nick Message
00:14 B21956 joined #gluster
00:18 farhorizon joined #gluster
00:19 Pupeno joined #gluster
00:36 Klas joined #gluster
00:40 arc0 joined #gluster
01:05 victori joined #gluster
01:08 shdeng joined #gluster
01:17 haomaiwang joined #gluster
01:34 farhorizon joined #gluster
02:00 fang64 joined #gluster
02:23 daMaestro joined #gluster
02:52 Wizek_ joined #gluster
02:53 derjohn_mobi joined #gluster
03:10 nbalacha joined #gluster
03:14 valkyrie joined #gluster
03:20 prth joined #gluster
03:21 Muthu joined #gluster
03:29 Lee1092 joined #gluster
03:37 prth joined #gluster
03:38 mb_ joined #gluster
03:48 atinm joined #gluster
03:50 itisravi joined #gluster
03:54 ppai joined #gluster
03:59 kramdoss_ joined #gluster
04:03 Shu6h3ndu joined #gluster
04:15 hackman joined #gluster
04:21 buvanesh_kumar joined #gluster
04:21 nbalacha joined #gluster
04:22 magrawal joined #gluster
04:25 armyriad joined #gluster
04:48 hchiramm joined #gluster
04:57 Prasad joined #gluster
05:00 jiffin joined #gluster
05:06 ankitraj joined #gluster
05:08 RameshN joined #gluster
05:10 karthik_us joined #gluster
05:11 prasanth joined #gluster
05:14 Karan joined #gluster
05:15 sbulage joined #gluster
05:22 msvbhat joined #gluster
05:30 rafi joined #gluster
05:32 Saravanakmr joined #gluster
05:34 farhorizon joined #gluster
05:41 apandey joined #gluster
05:42 riyas joined #gluster
05:43 hgowtham joined #gluster
05:45 prth joined #gluster
05:46 msvbhat joined #gluster
05:56 Karan joined #gluster
06:00 ashiq joined #gluster
06:03 dnorman joined #gluster
06:05 kdhananjay joined #gluster
06:08 sanoj joined #gluster
06:13 nishanth joined #gluster
06:14 newdave joined #gluster
06:14 newdave Hi all - What's the best practice procedure for increasing the size of a volume? Add additional bricks to the existing servers?
06:15 newdave Or is it possible to increase the size of the existing bricks?
06:17 atinm joined #gluster
06:17 kramdoss_ joined #gluster
06:19 jiffin newdave: add additional bricks to the volume
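
(A minimal sketch of the add-brick approach jiffin suggests, assuming a hypothetical replica-2 distributed-replicate volume "gv0" and made-up host/brick paths; bricks have to be added in multiples of the replica count, and a rebalance then spreads existing data onto them.)

    # add one new brick on each server of a replica pair (multiples of the replica count)
    gluster volume add-brick gv0 server3:/bricks/brick2/gv0 server4:/bricks/brick2/gv0
    # then spread existing data across the new bricks
    gluster volume rebalance gv0 start
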
06:19 newdave And is there some sort of recommended max number of bricks per server? We've currently got 4 servers serving a distributed-replicate volume. As our disk space requirements have grown we've been adding virtual disks to the servers (which are VM's on separate Xenserver nodes using local storage).
06:20 newdave So at the moment, I'm in the process of adding the third 250gb volume, which will give us 1.5T usable space.
06:21 shruti` joined #gluster
06:22 newdave I'd actually like to redo this structure so we're using 500gb bricks... Is it possible for me to add another brick per node (perhaps using gluster volume replace-brick?) that is larger than the existing bricks?
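
(For reference, the replace-brick command newdave mentions looks roughly like this in the 3.x series; all names are hypothetical. It swaps one brick for another in place and relies on self-heal from the surviving replica to repopulate the new brick, so the replacement can be larger than the brick it replaces.)

    gluster volume replace-brick gv0 server1:/bricks/250g/gv0 server1:/bricks/500g/gv0 commit force
    # self-heal then copies the data from the other replica onto the new brick
    gluster volume heal gv0 full
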
06:24 susant joined #gluster
06:25 prth joined #gluster
06:31 mhulsman joined #gluster
06:31 skoduri joined #gluster
06:50 prth joined #gluster
07:02 kramdoss_ joined #gluster
07:02 dnorman joined #gluster
07:05 jkroon joined #gluster
07:07 prth joined #gluster
07:08 msvbhat joined #gluster
07:13 mb_ joined #gluster
07:14 Pupeno joined #gluster
07:18 Pupeno_ joined #gluster
07:32 mhulsman joined #gluster
07:34 rastar joined #gluster
07:36 daMaestro newdave, what's the purpose of redoing the structure?
07:36 newdave daMaestro: less bricks but same amount of usable space
07:37 newdave the bricks are virtual disks that are LV's carved out of the server's local RAID-10 storage
07:37 newdave (handled by Xenserver)
07:37 atinm joined #gluster
07:38 daMaestro is a fewer-brick configuration more optimal?
07:38 atinm joined #gluster
07:38 newdave that's my question i guess... is there an inherent inefficiency/performance penalty from having 3+ bricks per server
07:39 Raide joined #gluster
07:41 daMaestro well that is a fundamental question (that sorry i will not be able to answer)
07:42 daMaestro do you want your failure domain (your replicated bricks) to be large or small?
07:42 daMaestro let's say one of your LVs goes bad... do you have any extra protections by having a smaller amount of data at risk?
07:42 daMaestro since it's all coming from the same raid-10 array i'm not sure you would
07:42 haomaiwang joined #gluster
07:43 daMaestro a hardware failure would take out all the bricks on that node, so having more bricks would not be of help
07:44 daMaestro speaking to the distribute translator, i don't think it really has to work harder for more bricks (ASSuME) as last i knew it hashes based on file path
07:45 daMaestro so it's more that replication between the bricks is where you might see some efficiency difference, but thinking back to your failure domains, you have 4
07:47 ivan_rossi joined #gluster
07:47 * daMaestro used to run a ~500TB distributed-replicate gluster cluster a while back (in the 1->2 transition, then we hit 3 and $dayjob moved to another solution when we hit a PB)
07:48 daMaestro i've experienced xfs under gluster going sideways due to inode corruption after a power outage; so i now take the approach of paying attention to your failure domains, not your efficiency
07:48 daMaestro newdave, so that would lead me to recommend a single brick per raid-10
07:49 daMaestro but i don't know your particular situation
07:52 newdave OK, some further background: We're using gluster to host quite a large number (~16m) of small files (couldn't tell you an average size, but we're talking small xml, pdf's, email attachments, etc).
07:52 newdave I'll pastebin some output, one second
07:53 newdave http://pastebin.com/Y3w7j7cW
07:53 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
07:54 newdave Sorry, this is a little more PC: http://paste.fedoraproject.org/489405/48006042/
07:54 glusterbot Title: #489405 • Fedora Project Pastebin (at paste.fedoraproject.org)
07:54 newdave This rebalance has taken a bloody long time
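
(Rebalance progress like the paste above can be checked per node with the status sub-command; the volume name here is hypothetical.)

    gluster volume rebalance gv0 status
    # shows rebalanced file counts, size, scanned/failed counts and elapsed time per node
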
07:58 [diablo] joined #gluster
08:05 newdave daMaestro: So how would you propose I migrate from 3 bricks per node to a single brick?
08:06 daMaestro newdave, i actually don't know what the latest and greatest way to do that is
08:06 newdave i'll settle for an older and passable way :P
08:07 daMaestro if you can spare the storage, create a new brick with your target total size and add it, then remove the others
08:07 daMaestro that would be how i'd approach it if that is possible with the current state of things
08:07 daMaestro it reduces risk as much as possible, assuming that when you remove a smaller brick it will correctly move content to the new, larger brick
08:08 daMaestro i don't know if that capability is there nowadays
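
(A rough sketch of the add-then-remove migration daMaestro describes, again with hypothetical names and assuming a replica-2 volume; remove-brick start migrates data off the old bricks, and the commit should only happen once status reports the migration as completed.)

    # add the new, larger bricks first (one per server of a replica pair)
    gluster volume add-brick gv0 server1:/bricks/big/gv0 server2:/bricks/big/gv0
    # drain an old replica pair; its data is moved onto the remaining bricks
    gluster volume remove-brick gv0 server1:/bricks/small1/gv0 server2:/bricks/small1/gv0 start
    gluster volume remove-brick gv0 server1:/bricks/small1/gv0 server2:/bricks/small1/gv0 status
    # only commit once status shows the data migration has completed
    gluster volume remove-brick gv0 server1:/bricks/small1/gv0 server2:/bricks/small1/gv0 commit
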
08:10 Karan joined #gluster
08:13 jtux joined #gluster
08:20 daMaestro|isBack joined #gluster
08:21 prth joined #gluster
08:30 jri joined #gluster
08:32 fsimonce joined #gluster
08:33 hackman joined #gluster
08:34 Sebbo1 joined #gluster
08:37 ahino joined #gluster
08:43 elastix joined #gluster
08:54 sanoj joined #gluster
08:54 flying joined #gluster
08:56 karthik_us joined #gluster
08:58 devyani7 joined #gluster
09:04 atinm joined #gluster
09:15 prth joined #gluster
09:32 Slashman joined #gluster
09:51 derjohn_mobi joined #gluster
09:52 d0nn1e joined #gluster
09:56 msvbhat joined #gluster
09:58 atinm joined #gluster
10:02 hchiramm joined #gluster
10:18 Muthu joined #gluster
10:33 prth joined #gluster
10:35 panina joined #gluster
10:35 panina joined #gluster
10:39 yalu I was thinking of a cheap trick to speed up the migration from NFS to GlusterFS (on a replicated brick), by writing directly to the backend and then letting GlusterFS' replication copy the missing files. I was told that an ls -lR from a client will trigger a replication on all files, but that doesn't seem to work out as I hoped
10:40 yalu for example a file created on node1 will sometimes show up on a client that has a mount on node2, but give I/O errors if you read it
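
(For what it's worth, the usual ways to trigger healing on a replicated volume are an explicit heal command on a server or a stat crawl through a client mount; the volume name and mount point below are hypothetical. Files written straight to the brick backend bypass gluster's own metadata, which is likely why they show up on clients but return I/O errors when read.)

    # on a server: ask the self-heal daemon to crawl the whole volume
    gluster volume heal gv0 full
    # or from a FUSE client mount: stat every file to trigger healing
    find /mnt/gv0 -print0 | xargs -0 stat > /dev/null
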
10:56 Karan joined #gluster
10:58 Prasad joined #gluster
11:01 kramdoss_ joined #gluster
11:03 prth joined #gluster
11:03 msvbhat joined #gluster
11:07 k4n0 joined #gluster
11:13 hchiramm joined #gluster
11:15 elastix joined #gluster
11:19 panina joined #gluster
11:24 atinm joined #gluster
11:33 jiffin1 joined #gluster
11:43 mhulsman joined #gluster
11:49 karthik_us joined #gluster
11:54 devyani7 joined #gluster
12:11 Wizek_ joined #gluster
12:19 kramdoss_ joined #gluster
12:21 jiffin1 joined #gluster
12:49 sanoj joined #gluster
13:00 jri_ joined #gluster
13:00 nbalacha joined #gluster
13:02 buvanesh_kumar joined #gluster
13:05 BitByteNybble110 joined #gluster
13:05 Saravanakmr joined #gluster
13:09 prth joined #gluster
13:23 ashka joined #gluster
13:23 ashka joined #gluster
13:23 atinm joined #gluster
13:27 msvbhat joined #gluster
13:28 plarsen joined #gluster
13:48 buvanesh_kumar joined #gluster
13:55 jiffin joined #gluster
14:05 msvbhat joined #gluster
14:16 Pupeno joined #gluster
14:22 Muthu joined #gluster
14:24 jri joined #gluster
14:29 TvL2386 joined #gluster
14:33 nbalacha joined #gluster
14:53 panina joined #gluster
15:00 hackman joined #gluster
15:03 jiffin joined #gluster
15:08 jkroon joined #gluster
15:16 hchiramm joined #gluster
15:19 Pupeno joined #gluster
15:20 prth joined #gluster
15:27 nbalacha joined #gluster
15:27 Gambit15 joined #gluster
15:34 prth joined #gluster
15:54 newdave joined #gluster
15:56 TvL2386 joined #gluster
15:57 jerrcs_ joined #gluster
16:02 jkroon joined #gluster
16:24 panina joined #gluster
16:30 msvbhat joined #gluster
16:35 Muthu joined #gluster
16:36 shyam left #gluster
16:38 RameshN joined #gluster
16:41 kramdoss_ joined #gluster
16:51 derjohn_mobi joined #gluster
16:56 dnorman joined #gluster
17:10 B21956 joined #gluster
17:27 dnorman joined #gluster
17:29 panina joined #gluster
17:35 newdave joined #gluster
17:45 jiffin joined #gluster
17:45 flying joined #gluster
17:46 ivan_rossi left #gluster
18:00 mhulsman joined #gluster
18:17 panina joined #gluster
18:20 dnorman joined #gluster
18:20 hackman joined #gluster
18:23 sage__ joined #gluster
18:27 B21956 joined #gluster
18:38 prth joined #gluster
19:02 riyas joined #gluster
19:51 dnorman joined #gluster
20:03 jkroon joined #gluster
20:33 arpu joined #gluster
20:44 dnorman joined #gluster
21:19 newdave joined #gluster
22:15 Wizek_ joined #gluster
22:38 panina joined #gluster
23:03 dnorman joined #gluster
23:13 plarsen joined #gluster
23:39 Marbug joined #gluster
23:47 Pupeno joined #gluster
23:51 hackman joined #gluster