IRC log for #gluster, 2016-08-20


All times shown according to UTC.

Time Nick Message
00:00 ZachLanich Say I start with 3 servers and a replica 3 volume. Each server has a 20GB HDD and I created a huge ~20GB Brick on each. I've now outgrown my 20GB and would like to expand vertically by adding a Block Volume of, say, 100GB onto each node. How do I vertically expand my volume across those 3 nodes?
00:01 ZachLanich Do I resize the Bricks? Add another brick onto each server, then add those Bricks to the volume?
00:01 ZachLanich I don't want maintenance to become hell over time by having a shit ton of Bricks because of poor planning lol.
00:02 JoeJulian You can use lvm
00:02 JoeJulian Some people like zfs (I thought it stunk)
00:02 JoeJulian You can enable shard
00:03 ZachLanich @JoeJulian Ok, I'm not super familiar with how LVM works, so give me a super quick run down of what I'm doing with it and the end goal and I'll do my own homework on how all the little pieces work.
00:03 JoeJulian Hmm.. interesting question that I don't know the answer to yet. If I enable shard and rebalance, does it break up the too-large file?
00:03 JoeJulian lvm takes your physical devices and creates logical partitions out of them.
00:04 ZachLanich Is that what Gluster's Docs tell you to use to create the Bricks?
00:04 JoeJulian I don't know. I haven't read most of the docs.
00:04 ZachLanich And I also recall seeing implications of using thinly provisioned volumes and not being able to use snapshots or something, so Idk what that's all about either.
00:04 JoeJulian When I started, the docs were worst than worthless.
00:04 JoeJulian s/worst/worse/
00:05 glusterbot What JoeJulian meant to say was: When I started, the docs were worse than worthless.
00:05 JoeJulian Ah, that's for the block device translator. That's not what we've been discussing.
00:05 ZachLanich So what exactly am I doing with lvm to accomplish my goal?
00:06 ZachLanich Am I resizing a Brick? Something else? I just need a summary/direction so I can go down my wormhole of figuring out how to make it work lol
00:06 JoeJulian Yep
00:07 JoeJulian Read up on lvm2, and the man page for xfs_growfs
00:07 JoeJulian That's your homework for the weekend. My wife is calling me, so I must leave.
00:07 ZachLanich Thanks!!!
00:07 ZachLanich I owe you a drink!
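For reference, a minimal sketch of the LVM route JoeJulian is pointing at, assuming each brick already sits on an XFS filesystem on an LVM logical volume and the new 100GB block device shows up as /dev/sdb (the device, volume group, logical volume and brick mount point names here are hypothetical):

    # add the new 100GB device to the volume group backing the brick
    pvcreate /dev/sdb
    vgextend gluster_vg /dev/sdb

    # grow the logical volume into the new space, then grow XFS online
    lvextend -l +100%FREE /dev/gluster_vg/brick1
    xfs_growfs /bricks/brick1

Repeated on each of the three replica nodes, every brick grows in place, so the volume gains capacity without add-brick, rebalance, or any increase in brick count.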
00:43 plarsen joined #gluster
01:09 siavash joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:49 ZachLanich joined #gluster
02:03 ahino joined #gluster
02:11 shyam joined #gluster
02:21 caitnop joined #gluster
02:43 pfactum joined #gluster
02:50 nishanth joined #gluster
02:53 * Gambit15 sees JoeJulian's comment on ZFS. Glares.
02:58 Gambit15 With the right resources, and properly configured, *nothing* beats ZFS for storage. If you need more flexibility than allowed by the vdevs, chuck LVM on top.
02:59 Gambit15 Every complaint I've heard about it came down to either it not being given enough resources or the user not understanding the architecture.
03:02 Gambit15 joined #gluster
03:10 siavash joined #gluster
03:11 ahino joined #gluster
03:54 kramdoss_ joined #gluster
04:11 aravindavk joined #gluster
04:18 ankitraj joined #gluster
04:25 om joined #gluster
04:47 ZachLanich joined #gluster
05:11 siavash joined #gluster
05:41 johnmilton joined #gluster
06:13 msvbhat joined #gluster
06:33 tjikkun Gambit15: well, ZFS on Linux has its fair share of bugs. But that is just one implementation, of course
06:39 arcolife joined #gluster
06:48 natgeorg joined #gluster
06:48 Alghost_ joined #gluster
06:48 sage___ joined #gluster
06:49 ndevos_ joined #gluster
06:49 mrten_ joined #gluster
06:49 lanning_ joined #gluster
06:49 Peppaq joined #gluster
06:50 gnulnx_ joined #gluster
06:50 siavash joined #gluster
06:51 d4n13L_ joined #gluster
06:52 PotatoGim_ joined #gluster
06:52 syadnom_ joined #gluster
06:52 cogsu joined #gluster
06:53 gluytium_ joined #gluster
06:53 Ulrar_ joined #gluster
06:53 DJCl34n joined #gluster
06:54 DJClean joined #gluster
06:55 RustyB joined #gluster
06:57 al joined #gluster
06:58 [o__o] joined #gluster
07:03 DV_ joined #gluster
07:05 gluytium joined #gluster
07:18 mhulsman joined #gluster
07:21 poornimag joined #gluster
07:34 moss joined #gluster
07:37 hchiramm joined #gluster
07:38 kovshenin joined #gluster
08:15 msvbhat joined #gluster
08:48 atalur joined #gluster
08:58 rafi joined #gluster
09:04 ZachLanich joined #gluster
09:15 derjohn_mob joined #gluster
09:17 msvbhat joined #gluster
09:48 atalur_ joined #gluster
10:20 wadeholler joined #gluster
10:24 Wizek_ joined #gluster
10:29 msvbhat joined #gluster
10:37 DV_ joined #gluster
10:49 atalur__ joined #gluster
11:05 robb_nl joined #gluster
11:05 MikeLupe joined #gluster
11:05 kramdoss_ joined #gluster
11:16 msvbhat joined #gluster
11:25 chirino joined #gluster
11:53 caitnop joined #gluster
12:29 shyam joined #gluster
12:58 rafi1 joined #gluster
13:02 robb_nl joined #gluster
13:14 msvbhat joined #gluster
13:46 mhulsman joined #gluster
13:50 shyam joined #gluster
14:03 Gambit15 tjikkun: BSD FTW!
14:15 msvbhat joined #gluster
14:24 shyam joined #gluster
14:42 Gnomethrower joined #gluster
15:02 robb_nl joined #gluster
15:02 kovshenin joined #gluster
15:03 lezo joined #gluster
15:03 Gambit15 joined #gluster
15:05 masber joined #gluster
15:11 masber joined #gluster
15:22 pdrakeweb joined #gluster
15:25 msvbhat joined #gluster
15:59 msvbhat joined #gluster
16:00 [diablo] joined #gluster
16:01 lezo joined #gluster
16:14 leucos joined #gluster
16:24 leucos left #gluster
16:48 JoeJulian Gambit15: Correct, not enough resources. I completely underestimated the amount of resources necessary. I don't like wasting hundreds of gigabytes of ram and a couple dozen cores just for home use.
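For context on the resource point: ZFS on Linux keeps most of its memory in the ARC cache, which by default is allowed to grow to roughly half of RAM. On a small box it can be capped with the zfs_arc_max module parameter; the 8GiB value below is only an example:

    # /etc/modprobe.d/zfs.conf -- cap the ARC at 8GiB (value in bytes)
    options zfs zfs_arc_max=8589934592

The setting takes effect the next time the zfs module is loaded (or after a reboot).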
16:49 hchiramm joined #gluster
16:55 hagarth1 joined #gluster
16:58 hagarth joined #gluster
17:15 Lee1092 joined #gluster
17:17 hagarth joined #gluster
17:25 msvbhat joined #gluster
18:12 [diablo] joined #gluster
18:16 ZachLanich joined #gluster
18:17 plarsen joined #gluster
18:53 msvbhat joined #gluster
19:02 johnmilton joined #gluster
19:42 David_Varghese joined #gluster
19:44 David_Varghese hi, i changed the host in fstab, but when trying to mount it fails because it still tries to connect to the old host. how do i fix that? the new host is defined in /etc/hosts
19:47 David_Varghese [2016-08-20 19:38:14.137652] E [name.c:242:af_inet_client_get_remote_sockaddr] 0-volume_david-client-0: DNS resolution failed on host data1
19:47 David_Varghese data3:/volume_david /old-pool glusterfs defaults,_netdev,backupvolfile-server=data4 0 0
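A likely reading of the error above: the server named in fstab (data3) is only used to fetch the volume definition; the client then connects to the brick hosts recorded in the volume itself, and this volume's first brick appears to be defined with the hostname data1, which the client cannot resolve. A sketch of how to check and work around it, using the names from the paste (the IP below is a placeholder):

    # on any server in the trusted pool: see which hostnames the bricks were defined with
    gluster volume info volume_david

    # quick client-side workaround: make the old brick hostname resolvable
    echo "192.0.2.10  data1" >> /etc/hosts

    # then retry the mount defined in fstab
    mount /old-pool

Permanently renaming the brick host has to be done on the servers themselves (for example via gluster volume replace-brick), not from the client's fstab.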
19:54 hackman joined #gluster
19:56 om joined #gluster
20:10 mhulsman joined #gluster
21:09 kovshenin joined #gluster
21:14 lezo joined #gluster
22:01 john51_ joined #gluster
22:03 twisted`_ joined #gluster
22:03 gbox_ joined #gluster
22:03 snila_ joined #gluster
22:03 zerick joined #gluster
22:04 malevolent joined #gluster
22:04 abyss^ joined #gluster
22:04 cliluw joined #gluster
22:04 lalatenduM joined #gluster
22:04 squeakyneb joined #gluster
22:05 unforgiven512 joined #gluster
22:08 Kins joined #gluster
22:08 rossdm joined #gluster
22:08 rossdm joined #gluster
22:08 ndk_ joined #gluster
22:11 Peppard joined #gluster
22:16 dgandhi joined #gluster
22:17 coreping joined #gluster
22:18 dgandhi joined #gluster
23:09 David_Varghese hi, i changed the host in fstab, but when trying to mount it fails because it still tries to connect to the old host. how do i fix that? the new host is defined in /etc/hosts
23:57 XpineX joined #gluster
