
IRC log for #gluster, 2017-11-10


All times shown according to UTC.

Time Nick Message
00:32 vbellur joined #gluster
00:34 vbellur joined #gluster
00:35 vbellur joined #gluster
00:36 vbellur joined #gluster
00:37 vbellur joined #gluster
00:38 vbellur joined #gluster
00:39 vbellur joined #gluster
00:40 vbellur joined #gluster
00:41 vbellur joined #gluster
00:43 vbellur joined #gluster
00:47 vbellur joined #gluster
00:50 kramdoss__ joined #gluster
00:53 atrius joined #gluster
00:53 vbellur joined #gluster
01:06 baber joined #gluster
01:35 jbrooks joined #gluster
01:42 jbrooks joined #gluster
02:03 magrawal joined #gluster
02:23 skoduri joined #gluster
02:56 ilbot3 joined #gluster
02:56 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:08 nbalacha joined #gluster
03:09 gyadav joined #gluster
03:13 major joined #gluster
03:40 pladd joined #gluster
03:46 mdeanda joined #gluster
03:52 psony joined #gluster
03:59 mdeanda hi all, i set up a cluster at home for my files, with the main intention being a 2nd copy of files in case of hard drive failure. in my case i set up 3 nodes with 1 being an arbiter, but i only intend to keep 2 of them running 24x7, with the last node only running a few times a week to sync. so far it works well, except one of the volumes only mounts when the 3rd machine is up. is it
03:59 mdeanda possible to correct this?
04:00 mdeanda so in my case arbiter + 1 node are up 24x7
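A common cause of that symptom is the mount command naming the part-time node as its volfile server; the mount then fails whenever that box is down, even though the volume itself still has quorum. A minimal sketch of the usual workaround, assuming hypothetical hostnames node1 (always-on data node), arb1 (arbiter) and node3 (the part-time node) and a volume called "files":

    # fetch the volfile from an always-on node, with fallbacks
    mount -t glusterfs -o backup-volfile-servers=arb1:node3 node1:/files /mnt/files

    # equivalent /etc/fstab entry
    node1:/files  /mnt/files  glusterfs  defaults,_netdev,backup-volfile-servers=arb1:node3  0 0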
04:03 test12345 joined #gluster
04:03 test12345 left #gluster
04:06 msvbhat joined #gluster
04:07 itisravi joined #gluster
04:17 susant joined #gluster
04:33 rafi joined #gluster
04:35 vishnu_sampath joined #gluster
04:36 sunny joined #gluster
04:46 sanoj joined #gluster
04:49 skumar joined #gluster
05:02 ppai joined #gluster
05:02 sanoj joined #gluster
05:06 atinm joined #gluster
05:16 apandey joined #gluster
05:17 hgowtham joined #gluster
05:19 sahina joined #gluster
05:21 aravindavk joined #gluster
05:22 karthik_us joined #gluster
05:24 ndarshan joined #gluster
05:34 gyadav_ joined #gluster
05:41 susant joined #gluster
05:46 susant joined #gluster
05:49 kdhananjay joined #gluster
05:59 gyadav__ joined #gluster
06:01 apandey joined #gluster
06:02 vishnuk joined #gluster
06:07 victori joined #gluster
06:08 vishnuk joined #gluster
06:17 poornima_ joined #gluster
06:20 xavih joined #gluster
06:20 rastar joined #gluster
06:25 Saravanakmr joined #gluster
06:32 itisravi joined #gluster
06:33 msvbhat joined #gluster
06:43 itisravi joined #gluster
06:47 dgandhi joined #gluster
06:51 marbu joined #gluster
06:54 msvbhat joined #gluster
06:57 kramdoss__ joined #gluster
06:58 jkroon joined #gluster
07:04 daMaestro joined #gluster
07:06 wushudoin joined #gluster
07:16 kramdoss__ joined #gluster
07:17 sanoj joined #gluster
07:18 apandey joined #gluster
07:19 nbalacha joined #gluster
07:22 jtux joined #gluster
07:26 rastar joined #gluster
07:35 sanoj joined #gluster
07:35 kdhananjay1 joined #gluster
07:50 ivan_rossi joined #gluster
08:17 [diablo] joined #gluster
08:21 fsimonce joined #gluster
08:34 kramdoss__ joined #gluster
08:39 apandey_ joined #gluster
08:42 rafi joined #gluster
08:51 rafi joined #gluster
08:53 kramdoss__ joined #gluster
08:58 itisravi joined #gluster
08:58 apandey__ joined #gluster
09:07 msvbhat joined #gluster
09:11 kramdoss__ joined #gluster
09:16 rafi1 joined #gluster
09:20 buvanesh_kumar joined #gluster
09:23 dimitris joined #gluster
09:30 aravindavk joined #gluster
09:31 itisravi joined #gluster
09:44 ppai joined #gluster
09:46 kramdoss_ joined #gluster
10:05 kramdoss_ joined #gluster
10:07 hgowtham joined #gluster
10:10 _KaszpiR_ joined #gluster
10:14 itisravi joined #gluster
10:20 aravindavk joined #gluster
10:22 agatineau joined #gluster
10:24 kramdoss_ joined #gluster
10:28 skumar_ joined #gluster
10:28 _KaszpiR_ joined #gluster
10:32 ppai joined #gluster
10:37 ^andrea^ joined #gluster
10:37 malevolent joined #gluster
10:41 kdhananjay joined #gluster
10:44 Saravanakmr joined #gluster
10:46 skumar__ joined #gluster
10:47 kramdoss_ joined #gluster
11:00 eryc joined #gluster
11:00 eryc joined #gluster
11:00 msvbhat joined #gluster
11:10 vishnuk joined #gluster
11:13 vishnu_kunda joined #gluster
11:13 purpleidea joined #gluster
11:13 purpleidea joined #gluster
11:28 nishanth joined #gluster
11:30 msvbhat joined #gluster
11:34 susant joined #gluster
11:48 kramdoss_ joined #gluster
11:49 buvanesh_kumar joined #gluster
12:09 msvbhat joined #gluster
12:10 karthik_us joined #gluster
12:12 kotreshhr joined #gluster
12:15 kramdoss_ joined #gluster
12:15 sinanp joined #gluster
12:15 sinanp Hello
12:15 glusterbot sinanp: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
12:16 sinanp I have a volume with 6 bricks, distributed+replicated. For some reason the new disks that I create are filling up 2 bricks, while the other bricks are almost empty.
12:17 sinanp Is there any setting for balancing?
12:17 Klas sinanp: https://www.tecmint.com/perform-self-heal-and-re-balance-operations-in-gluster-file-system/
12:18 Klas hmm, might be outdated
12:18 Klas but, yeah, rebalancing is a thing and it is the right term to google =)
12:18 Klas never done it myself, I'm running a non-distributed setup
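For reference, the rebalance workflow itself is a couple of CLI calls; a minimal sketch against sinanp's rhevm volume (named further down in the log):

    gluster volume rebalance rhevm fix-layout start   # recompute the layout only, no data moved
    gluster volume rebalance rhevm start              # fix layout and migrate existing files
    gluster volume rebalance rhevm status             # per-node progress
    gluster volume rebalance rhevm stop               # abort if needed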
12:19 sinanp Thanks, I will read about it, but to me it seems strange that it doesn't use the brick with the most free space...
12:19 Klas yeah, that is a separate issue from how to fix the situation =)
12:19 Klas (no idea why this would be)
12:20 Klas it should be "somewhat" balanced automatically
12:20 kramdoss__ joined #gluster
12:21 sinanp The bricks are 800G each; 2 bricks are 80% full, the other 4 bricks are 11% full. I created a new disk of 200G and it was placed on the 80% full bricks :/
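Background on why this happens: DHT picks the target brick by hashing the file name, not by free space, so a nearly full replica pair can keep receiving new files. One knob that may help (hedged; the 20% threshold here is just an example value) is cluster.min-free-disk, which tells DHT to place new file data on other bricks once a brick crosses the threshold:

    # redirect new files away from bricks with less than 20% free space
    gluster volume set rhevm cluster.min-free-disk 20%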
12:24 kotreshhr left #gluster
12:42 sinanp Hmm, since I started rebalance, I can't execute any commands on the volume(s) anymore.
12:42 sinanp rebalance status outputs: volume rebalance: rhevm: failed: Another transaction is in progress for rhevm. Please try again after sometime.
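That error usually means glusterd's cluster-wide lock is still held, often by the rebalance that was just started, sometimes by a stale lock. A hedged troubleshooting sketch (the exact glusterd log file name varies by version):

    gluster volume rebalance rhevm status           # retry after a minute or two; the lock is normally released
    grep -i lock /var/log/glusterfs/glusterd.log    # look for which node is holding the lock
    systemctl restart glusterd                      # on the node holding a stale lock; brick processes keep running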
13:04 phlogistonjohn joined #gluster
13:12 plarsen joined #gluster
13:27 Humble joined #gluster
13:32 Humble joined #gluster
13:42 sinanp Any suggestions?
13:50 nbalacha joined #gluster
14:03 kramdoss__ joined #gluster
14:07 msvbhat joined #gluster
14:10 shyam joined #gluster
14:13 ThHirsch joined #gluster
14:15 shyam joined #gluster
14:21 kramdoss__ joined #gluster
14:21 pladd joined #gluster
14:27 vishnu_kunda joined #gluster
14:40 ctria joined #gluster
14:48 kdhananjay joined #gluster
14:51 gyadav__ joined #gluster
14:53 rafi joined #gluster
14:55 marlinc joined #gluster
14:57 phlogistonjohn joined #gluster
14:59 kramdoss__ joined #gluster
14:59 aravindavk joined #gluster
15:03 psony joined #gluster
15:20 kpease joined #gluster
15:21 misc joined #gluster
15:33 farhorizon joined #gluster
15:39 hmamtora joined #gluster
16:12 wushudoin joined #gluster
16:15 jbrooks joined #gluster
16:23 m0zes joined #gluster
16:23 nishanth joined #gluster
16:33 louis_philippe_r joined #gluster
16:33 int-0x21 joined #gluster
16:36 ItsMe` joined #gluster
16:36 louis_philippe_r Any tips on certificate/CA management for large numbers of clients with encrypted IO?
16:37 louis_philippe_r Proven... that you actually use, not just theory. ;)
16:38 louis_philippe_r based on open source...
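For context, the Gluster side of encrypted I/O is a fixed set of files and volume options; how the certificates get issued and rotated at scale is up to whatever CA tooling you put around it. A minimal sketch, assuming the default /etc/ssl paths and a hypothetical volume name vol0:

    # on every server and client: certificate, key and CA bundle at the expected paths
    #   /etc/ssl/glusterfs.pem   /etc/ssl/glusterfs.key   /etc/ssl/glusterfs.ca
    touch /var/lib/glusterd/secure-access          # optional: also encrypt the management path
    gluster volume set vol0 client.ssl on          # encrypt client <-> brick I/O
    gluster volume set vol0 server.ssl on
    gluster volume set vol0 auth.ssl-allow 'client01.example.com,client02.example.com'   # CNs allowed to mount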
16:51 ThHirsch joined #gluster
16:51 Humble joined #gluster
16:53 buvanesh_kumar joined #gluster
16:58 gyadav__ joined #gluster
16:59 bowhunter joined #gluster
17:08 ivan_rossi left #gluster
17:09 Saravanakmr joined #gluster
17:15 jkroon joined #gluster
17:25 alan113696 joined #gluster
17:26 kdhananjay joined #gluster
17:28 lalatenduM joined #gluster
17:30 kdhananjay left #gluster
17:37 amye Final call on anything that people wanted to ask in the Gluster Summit Survey - https://github.com/gluster/community/issues/3
17:37 glusterbot Title: Gluster Summit 2017 - Prague, CZ · Issue #3 · gluster/community · GitHub (at github.com)
17:38 amye (also welcome to the community issue queue, where we do good things for community)
17:42 vishnuk joined #gluster
17:50 vishnuk joined #gluster
17:54 bit4man joined #gluster
17:56 vishnuk joined #gluster
17:59 vishnuk joined #gluster
18:02 kpease joined #gluster
18:05 boutcheee520 joined #gluster
18:06 farhorizon joined #gluster
18:07 farhorizon joined #gluster
18:07 boutcheee520 Hello all, I have a dumb question... how can I tell if I have Gluster set up properly? I just installed Gluster-3.12 on two different CentOS-7 servers. On node1 I ran gluster peer probe <node2>. Both bricks show up when I do gluster volume info
18:07 boutcheee520 but I can't seem to create a test file and have it replicate between the two
18:11 boutcheee520 my XFS partition has been created and I mounted it at /archive. I created /mnt/gluster/archive, ran gluster volume create, and the volume is now up
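A quick way to check a setup like that, hedged since the volume name and hostnames below (archive, node1, node2) are only guesses: the test file has to be written through a glusterfs mount of the volume, never directly into the brick directory.

    gluster peer status                          # both nodes should show "Peer in Cluster (Connected)"
    gluster volume status archive                # both bricks online, each with a port and PID
    mount -t glusterfs node1:/archive /mnt/gluster/archive
    echo hello > /mnt/gluster/archive/testfile   # write through the mount
    ls /archive                                  # the file should now appear under the brick path on both nodes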
18:12 kpease_ joined #gluster
18:18 louis_philippe_r with a two-node setup you might get split-brain problems. You might want to look into arbiter bricks or using replica 3.
18:22 boutcheee520 ahhh, it did mention the split-brain stuff... I just said continue... and used replica 2 :)
18:23 boutcheee520 guess I shouldn't have
18:24 louis_philippe_r Did the same once :) ... and lost data.
18:25 boutcheee520 so basically with the arbiter I need to have 3 servers available ?
18:28 louis_philippe_r yes, and ideally 3 zones... but it depends on what you are trying to protect against.
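For what it's worth, an existing replica 2 volume can usually be converted in place by adding an arbiter brick rather than rebuilding it; a sketch with hypothetical names (the arbiter stores only file names and metadata, so its brick can be small):

    gluster volume add-brick archive replica 3 arbiter 1 node3:/bricks/archive-arb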
18:28 ThHirsch joined #gluster
18:29 boutcheee520 I think my current setup in prod has client-quorum disabled because we only have two servers ha
18:29 boutcheee520 trying to update and clean up stuff past people did
18:31 boutcheee520 I want to have some sort of replication for around 7 terabytes (future state for prod)
18:32 louis_philippe_r i think it's the default... and with a two-node setup you don't have much of a choice. The only other option is to set quorum-type to auto, but if you lose one server it will go read-only. You have to choose between data consistency and availability
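The options being discussed, as a hedged sketch against a hypothetical volume name archive: with replica 2, quorum-type auto requires the first brick of the pair to be up, so losing that particular server makes the volume read-only instead of risking split brain.

    gluster volume set archive cluster.quorum-type auto           # client-side quorum
    gluster volume set archive cluster.server-quorum-type server  # optional: glusterd-level quorum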
18:33 Jacob843 joined #gluster
18:46 jcall joined #gluster
18:47 farhorizon joined #gluster
18:47 dlambrig joined #gluster
18:51 baber joined #gluster
18:58 louis_philippe_r I guess, boutcheee, your best bet (if not going full replica 3) is 3 replicas with one of them being an arbiter brick, or using a dispersed volume (never used that though - you will probably get a performance hit).
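A hedged sketch of roughly what those two layouts look like at create time, with hypothetical volume, host and brick names:

    # replica 3 with one arbiter brick per replica set
    gluster volume create newvol replica 3 arbiter 1 node1:/bricks/newvol node2:/bricks/newvol node3:/bricks/newvol-arb

    # dispersed: 3 bricks, any 1 may fail (erasure coding, with some CPU/performance cost)
    gluster volume create newvol-disp disperse 3 redundancy 1 node1:/bricks/d node2:/bricks/d node3:/bricks/d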
19:09 boutcheee520 cool, thanks for the advice! I'll have to dig in and see which method sounds best... I'd prefer not to spin up another server, but if I have to, so be it
19:43 jbrooks joined #gluster
19:50 msvbhat joined #gluster
19:52 dlambrig joined #gluster
20:00 louis_philippe_r left #gluster
20:03 jbrooks joined #gluster
20:10 baber joined #gluster
20:20 aronnax joined #gluster
20:36 rastar joined #gluster
20:44 farhorizon joined #gluster
20:45 _KaszpiR_ joined #gluster
20:56 dlambrig joined #gluster
20:59 gospod2 joined #gluster
21:04 plarsen joined #gluster
21:10 msvbhat joined #gluster
21:12 gospod2 joined #gluster
21:20 atrius joined #gluster
21:24 melliott_ joined #gluster
22:05 davidb2111 left #gluster
22:23 baber joined #gluster
22:43 farhorizon joined #gluster
22:55 amazingchazbono joined #gluster
22:58 hmamtora joined #gluster
23:12 marlinc joined #gluster
