IRC log for #gluster, 2017-01-16


All times shown according to UTC.

Time Nick Message
00:31 victori joined #gluster
00:54 Klas joined #gluster
00:58 shdeng joined #gluster
01:27 jackhill joined #gluster
01:31 BlackoutWNCT Hey Guys, I've got a few gluster volumes which aren't enabling the NFS server and I'm not sure why. Could anyone give me a hand in troubleshooting this?
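(For anyone hitting the same thing: a minimal sketch of the usual check, assuming these are 3.8+ volumes, where the built-in gNFS server ships disabled by default; the volume name "myvol" is a placeholder.)

    gluster volume get myvol nfs.disable      # see whether gNFS is disabled for this volume
    gluster volume set myvol nfs.disable off  # re-enable the built-in NFS server
    gluster volume status myvol nfs           # confirm the NFS server process came up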
01:49 Caveat___ joined #gluster
02:00 alvinstarr joined #gluster
02:03 Gugge joined #gluster
02:46 derjohn_mob joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:54 jackhill Is http://review.gluster.org/#/c/13478/2/under_review/Kerberos.md the latest on the Kerberos proposal? I'm not sure if it's waiting on more design work, or if it's ready for implementation.
02:54 glusterbot Title: Gerrit Code Review (at review.gluster.org)
02:55 prasanth joined #gluster
03:04 kramdoss_ joined #gluster
03:40 nbalacha joined #gluster
03:41 Shu6h3ndu joined #gluster
03:54 nthomas_ joined #gluster
04:05 vbellur jackhill: mostly design ready, awaiting implementation
04:10 hgowtham joined #gluster
04:15 atinmu joined #gluster
04:31 sankarshan joined #gluster
04:32 sankarshan joined #gluster
04:33 gyadav joined #gluster
04:39 jackhill vbellur: cool, thanks
04:53 Saravanakmr joined #gluster
04:56 ppai joined #gluster
04:58 Prasad joined #gluster
05:01 Lee1092 joined #gluster
05:07 apandey joined #gluster
05:10 ndarshan joined #gluster
05:27 aravindavk joined #gluster
05:29 LiftedKilt joined #gluster
05:32 skoduri joined #gluster
05:34 itisravi joined #gluster
05:39 alvinstarr joined #gluster
05:41 alvinstarr1 joined #gluster
05:42 riyas joined #gluster
05:54 kdhananjay joined #gluster
05:54 rjoseph joined #gluster
05:56 susant joined #gluster
06:00 karthik_us joined #gluster
06:02 msvbhat joined #gluster
06:04 jiffin joined #gluster
06:07 sanoj joined #gluster
06:09 hgowtham joined #gluster
06:10 Karan joined #gluster
06:12 apandey joined #gluster
06:13 sona joined #gluster
06:21 k4n0 joined #gluster
06:48 kdhananjay joined #gluster
06:53 tg2 joined #gluster
07:03 level7 joined #gluster
07:07 k4n0 joined #gluster
07:13 [diablo] joined #gluster
07:21 mhulsman joined #gluster
07:21 mhulsman joined #gluster
07:26 jtux joined #gluster
07:29 k4n0 joined #gluster
07:30 freepe joined #gluster
07:33 jkroon joined #gluster
07:33 msvbhat joined #gluster
07:44 ivan_rossi joined #gluster
08:04 k4n0 joined #gluster
08:13 ashiq joined #gluster
08:13 squizzi joined #gluster
08:16 rafi joined #gluster
08:23 ankit joined #gluster
08:24 msvbhat joined #gluster
08:28 ankit joined #gluster
08:30 ankit joined #gluster
08:42 nishanth joined #gluster
08:50 alezzandro joined #gluster
08:50 Humble joined #gluster
08:54 ankit joined #gluster
08:56 fsimonce joined #gluster
08:58 mb_ joined #gluster
08:58 jkroon joined #gluster
08:58 riyas joined #gluster
09:04 mbukatov joined #gluster
09:14 msvbhat joined #gluster
09:20 sbulage joined #gluster
09:22 rafi1 joined #gluster
09:25 saybeano joined #gluster
09:25 susant joined #gluster
09:26 nishanth joined #gluster
09:36 jtux joined #gluster
09:40 flying joined #gluster
09:43 msvbhat joined #gluster
09:43 atinmu joined #gluster
09:47 Slashman joined #gluster
09:48 derjohn_mob joined #gluster
09:48 susant joined #gluster
09:49 karthik_us joined #gluster
09:58 alvinstarr joined #gluster
09:58 alvinstarr1 joined #gluster
10:02 msvbhat joined #gluster
10:05 rastar joined #gluster
10:29 rjoseph joined #gluster
10:34 jri joined #gluster
10:43 ankit_ joined #gluster
10:52 ankit_ joined #gluster
10:56 Jacob843 joined #gluster
10:56 ankit__ joined #gluster
11:12 jkroon joined #gluster
11:14 pulli joined #gluster
11:25 Guest__ joined #gluster
11:37 ivan_rossi left #gluster
11:38 ashiq joined #gluster
11:51 kramdoss_ joined #gluster
12:15 jwd joined #gluster
12:28 apandey joined #gluster
12:30 kdhananjay joined #gluster
12:50 kramdoss_ joined #gluster
12:58 Karan joined #gluster
13:04 ankit_ joined #gluster
13:26 ws2k3 joined #gluster
13:27 unclemarc joined #gluster
13:28 ahino joined #gluster
13:29 guhcampos joined #gluster
13:34 f0rpaxe joined #gluster
13:51 rwheeler joined #gluster
13:52 ashiq joined #gluster
13:54 Guest____ joined #gluster
13:55 rwheeler joined #gluster
13:56 musa22 joined #gluster
14:00 plarsen joined #gluster
14:02 squizzi joined #gluster
14:04 kpease joined #gluster
14:05 kpease_ joined #gluster
14:07 Karan joined #gluster
14:08 atinmu joined #gluster
14:10 kpease_ joined #gluster
14:12 susant left #gluster
14:19 musa22 Hi All, is glusterfs 3.8 production ready or should i stick to 3.7.x?
14:20 cloph it's ready
14:20 musa22 anyone using in production? any issues?
14:23 skoduri joined #gluster
14:23 cloph using it in production in a small scale (four peers), and no issues observed (using for storing qemu VM images)
14:25 musa22 cloph: Thanks.
14:31 bluenemo joined #gluster
14:41 musa22 cloph: Small scale environments (2/3 peers): do you recommend a replicated volume over a distributed-replicated volume?
14:57 arpu joined #gluster
15:03 p7mo joined #gluster
15:05 shyam joined #gluster
15:07 ivan_rossi joined #gluster
15:08 mhulsman joined #gluster
15:08 vbellur joined #gluster
15:08 rwheeler joined #gluster
15:14 ankit joined #gluster
15:16 rwheeler joined #gluster
15:18 nbalacha joined #gluster
15:20 guhcampos joined #gluster
15:25 Gambit15 joined #gluster
15:25 farhorizon joined #gluster
15:36 mhulsman joined #gluster
15:36 k4n0 joined #gluster
15:44 mhulsman joined #gluster
15:45 farhoriz_ joined #gluster
15:50 shyam joined #gluster
15:53 mhulsman joined #gluster
15:57 jiffin joined #gluster
16:04 wushudoin joined #gluster
16:07 bowhunter joined #gluster
16:11 farhorizon joined #gluster
16:14 susant joined #gluster
16:14 wushudoin joined #gluster
16:16 Vide joined #gluster
16:16 Vide Hello, I'm trying to install gluster 3.8 on a fresh CentOS 7.3 but I get some weird dependency problems; it always worked with CentOS 7.2
16:17 Vide https://paste.fedoraproject.org/528369/48458344/
16:17 glusterbot Title: #528369 • Fedora Project Pastebin (at paste.fedoraproject.org)
16:17 Vide this is the yum output following this tutorial: https://wiki.centos.org/HowTos/GlusterFSonCentOS
16:17 glusterbot Title: HowTos/GlusterFSonCentOS - CentOS Wiki (at wiki.centos.org)
16:20 musa22 Vide: list the available yum repos on your system: # yum repolist
16:21 musa22 The output shows that you are installing packages from both the 3.8 and 3.7 repos.
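(A hedged way to untangle that; the repo id "centos-gluster37" is illustrative, check the real ids in the repolist output.)

    yum repolist                                                  # see which gluster repos are enabled
    yum --disablerepo=centos-gluster37 install glusterfs-server   # pull only from the 3.8 repo
    # or remove/disable the stale 3.7 release repo package entirely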
16:23 jiffin joined #gluster
16:23 vbellur joined #gluster
16:31 kramdoss_ joined #gluster
16:32 Karan joined #gluster
16:33 nishanth joined #gluster
16:37 Humble joined #gluster
16:43 Vide joined #gluster
17:02 alvinstarr joined #gluster
17:02 ivan_rossi left #gluster
17:02 alvinstarr1 joined #gluster
17:11 rafi joined #gluster
17:17 msvbhat joined #gluster
17:18 jiffin joined #gluster
17:19 Shu6h3ndu joined #gluster
17:20 Shu6h3ndu joined #gluster
17:27 shyam joined #gluster
17:30 rafi joined #gluster
17:36 cloph musa22: sorry, completely overlooked the question. Having only two peers is tricky, as you cannot guarantee quorum in this case (cannot distinguish a network split from a peer that went down)
17:37 cloph so it depends on the size of the disks/their connectivity. For three hosts I'd go for a replica 3, not much gained from using replica 2 with arbiter I guess.
17:38 sbulage joined #gluster
17:38 musa22 I'm currently facing exactly that in prod, a split-brain. I'm recommending the client use 3 nodes instead.
17:38 cloph with two peers, there's also no question - use replica 2. And configure/disable quorum as needed/as one can live with :-)
17:39 cloph you can also have servers that don't have bricks on them to maintain quorum
17:42 musa22 currently using a 2 node distributed-replicated volume. I will switch to 3 nodes to avoid split-brain. Do you recommend a replicated volume instead of a distributed-replicated volume?
17:42 cloph (but if that means adding a third one, one could also make it arbiter, to also have another level of quorum on bricks)
17:42 cloph 2 node distributed-replicate doesn't really make much sense to me - if you lose one server completely, you lose all the data on that host, right?
17:43 musa22 Not sure.
17:43 cloph or I'm failing to see how it is distributed/where the replication happens - but replicating across the two servers makes having it distributed between the two kinda pointless..
17:44 cloph and if the distribution happens on the bricks of one server, I'd rather have those disks combined in a raid or experiment with tiers instead.
17:46 Gambit15 cloph, "For three hosts I'd go for a replica 3, not much gained from using replica 2 with arbiter I guess.", care to elaborate what you mean by that?
17:47 cloph if the machines have similar specs, and the network capacity is available, you'll have three copies of the files spread across the three hosts.
17:47 musa22 cloph: I have limited gluster knowledge. It's best I keep it simple by using 3 nodes and replica 3.
17:47 cloph And three bricks participating in quorum.
17:47 Gambit15 I'm using rep 3 arb 1 here to reduce network load & economise space (I have backups, of course)
17:48 cloph arbiter would only store metadata, and thus will participate in quorum, but doesn't contain the actual data.
17:48 rastar joined #gluster
17:48 cloph so if you are low on diskspace on the third host, or don't have the network capacity, go for arbiter; otherwise you get the benefit of having a third copy :-)
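(A minimal sketch of the two layouts being compared; hostnames and brick paths are placeholders.)

    # full replica 3: three complete copies of the data
    gluster volume create gv0 replica 3 host1:/bricks/gv0 host2:/bricks/gv0 host3:/bricks/gv0
    # replica 3 arbiter 1: two data copies plus a metadata-only arbiter (the last brick listed)
    gluster volume create gv0 replica 3 arbiter 1 host1:/bricks/gv0 host2:/bricks/gv0 host3:/bricks/arbiter-gv0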
17:49 Gambit15 musa22, a distributed volume without replication is similar to RAID0!
17:49 cloph Also depends on the IO capabilities of your drives; if they don't have similar specs, making one an arbiter would help even out the field.
17:49 Gambit15 The only difference being you won't lose the *entire* volume
17:50 musa22 Gambit15: I meant distributed+replicated, not plain distributed, which is similar to RAID0.
17:50 cloph but distributed+replicated on two hosts?
17:51 Gambit15 Well, if you've only got 2 nodes/servers, you won't be able to do dist-rep anyway, either or
17:51 cloph To be able to cope with failure/loss of one of the hosts, you'll need to have the files on both, so where is the distributed part then? :-)
17:51 musa22 cloph: believe it or not, this is our current setup :) The architect's decision.
17:52 Gambit15 cloph, another side case use for arbiter would be where you've only got 2 main hosts, but are able to run a small host/VM on an independent server
17:52 cloph yeah, but either your distribution is an illusion, or your replica :-)
17:53 cloph see above "but if that means adding a third one, one could also make it arbiter, to also have another level of quorum on bricks)"
17:53 Gambit15 eg. I've got a small 2-node cluster with an arbiter on a small VM on a secondary cluster
17:54 atinmu joined #gluster
17:54 Gambit15 musa22, "believe or not", no, that doesn't make sense
17:54 bbooth joined #gluster
17:55 cloph it is technically possible, but doesn't make sense. Only reason for doing that is in anticipation of expanding that with more hosts :-)
17:55 musa22 Gambit15: sorry, baby feeding on one hand while typing on the other :)
17:57 bowhunter joined #gluster
17:58 Gambit15 Actually, I suppose you could have something like the following: 2 bricks on each server, replicating brick1a & brick1b & distributing to brick2a & brick2b. Might have some effect on performance perhaps...
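(A sketch of that layout; with replica 2 the bricks are paired in the order listed, so each replica pair spans both servers and the two pairs distribute the files between them. Names are placeholders.)

    gluster volume create gv0 replica 2 \
        server1:/bricks/b1 server2:/bricks/b1 \
        server1:/bricks/b2 server2:/bricks/b2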
17:58 Gambit15 musa22, have another look at the docs
17:58 Gambit15 https://gluster.readthedocs.io/en/latest/Quick-Start-Guide/Architecture/
17:58 glusterbot Title: Architecture - Gluster Docs (at gluster.readthedocs.io)
17:59 Gambit15 But regardless, I would be *very* careful about trying to do High-Availability replication with only 2 servers
17:59 cloph (I also "cheat" with our 2x(2+1) setup - the arbiters are on one of the hosts of the other replica pair (so x1+arbiter_y,x2 and y1+arbiter_x,y2) but on a different set of disks)
17:59 Gambit15 Split-brain will become a common headache
18:00 Gambit15 cloph, same here!
18:00 musa22 Gambit15: I'm dealing with my 2nd split-brain in 6 months :)
18:00 Gambit15 Every 3rd server is the arbiter for the previous pair
18:01 Gambit15 And the 1st server is the arbiter for the last pair
18:02 Gambit15 musa22, you're lucky, and I'll guess you've got a low-use system in a very stable environment then
18:03 musa22 Gambit15: i wish our cloud provider was stable but yes, i'm lucky.
18:03 Gambit15 If you were to suffer any power/network outages or were running a setup where both hosts frequently write to the same volumes, I'd expect problems *far* more frequently
18:04 cloph if you share the same network interface with your internet/other traffic, things get even more fragile. really should have a dedicated connection for gluster in that case to be at least somewhat stable...
18:05 cloph and if it really is low-use, and you don't mind losing an update, you can disable server quorum and always have one brick "win" the split-brain...
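(A sketch of the knobs being described; the volume name is a placeholder, and cluster.favorite-child-policy needs 3.7.12 or newer.)

    gluster volume set gv0 cluster.server-quorum-type none      # no server-side quorum
    gluster volume set gv0 cluster.quorum-type none             # no client-side quorum
    gluster volume set gv0 cluster.favorite-child-policy mtime  # auto-pick a split-brain "winner" by newest mtime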
18:05 Gambit15 Dedicated interfaces & switches preferably, or isolated VLANs at the very least...
18:05 cloph also depends on what you want to use gluster for between the hosts. You might also prefer to use geo-replication only.
18:06 cloph will be async, but again depends on the purpose for using gluster in the first place.
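(For reference, async geo-replication setup looks roughly like this; the master volume, slave host and slave volume names are placeholders.)

    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status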
18:07 musa22 cloph: / Gambit15: Following is my current config https://paste.fedoraproject.org/528419/59000614/
18:07 glusterbot Title: #528419 • Fedora Project Pastebin (at paste.fedoraproject.org)
18:07 Gambit15 Although be careful with geo-rep if you've got large files constantly being written to, eg. VMs
18:09 musa22 cloph: Gambit15: The first issue I need to resolve is to use 3 nodes to avoid split-brain.
18:09 cloph ok, replica between the two hosts, and I guess the different bricks are different disks? i.e. 6 disks on each server?
18:10 musa22 cloph: Gambit15: It doesn't make sense to use distributed-replicated with only 3 nodes, right?
18:10 cloph musa22: it might make sense to use distributed, but it depends on what you expect from gluster/what your needs for redundancy are.
18:11 jiffin joined #gluster
18:11 cloph you could distribute across the three servers, and have a raid10 or similar on the hosts and rely on the hosts not catching fire as a whole/or live with losing the data of one distribution unit.
18:14 cloph depending on the use of the volume, you can also consider a dispersed one. But if files change frequently it's not suitable.
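(A dispersed-volume sketch for comparison: 3 bricks with 1 brick's worth of redundancy; names are placeholders.)

    gluster volume create gv0 disperse 3 redundancy 1 host1:/bricks/gv0 host2:/bricks/gv0 host3:/bricks/gv0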
18:15 Gambit15 TBH, as long as there are regular external backups (which you should always have regardless), you could skip replication
18:16 musa22 cloph: Thanks. We'll use a 3 node, replica 3 volume.
18:16 cloph make sure to have the network bandwidth for that though.
18:17 Gambit15 I use RAID5 & RAID6 for our cluster, with each server providing 1 brick
18:17 Gambit15 Takes away a bit of the hassle if a drive dies
18:17 jiffin joined #gluster
18:17 musa22 users upload about 15-20 GB of files per day; each file can be from a few KB to 15 MB.
18:17 Gambit15 musa22, don't you have any other servers on the same network/same datacenter available?
18:18 musa22 We have another 2 nodes in a remote site and we use them for geo-replication.
18:18 Gambit15 You can add an arbiter to another existing server just to provide quorum, it doesn't need to be big or a dedicated server
18:19 musa22 I managed to recover from split-brain last time; this time I'm not so confident.
18:19 Gambit15 Heck, you could even use a RaspberryPi as an arbiter!
18:20 musa22 I was thinking of setting up a new 3 node glusterfs cluster, then migrating the existing cluster.
18:20 Gambit15 You don't need to migrate anything to go from rep 2 to rep 3
18:22 musa22 I can do that, but this doesn't fix the directories that are in split-brain.
18:23 musa22 I can add 1 additional node, then create a new 3-way replica volume, then migrate files from the existing volume?
18:24 cloph yes, provided you have enough space left for storing another copy/or are OK with moving files
18:24 Gambit15 That said, if you've only got 2 nodes & are serving 1000s of small files, then I'd use CRDB or Ceph. Gluster can be a bit slow with large file trees; its power is in scalability
18:25 Gambit15 musa22, fix the files that are in split-brain before adding the 3rd server
18:25 Gambit15 You'd have to do that anyway...
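(The CLI-based resolution available in recent 3.7/3.8 releases looks roughly like this; volume, brick and file paths are placeholders, and directories in split-brain generally need the source-brick form.)

    gluster volume heal gv0 info split-brain
    gluster volume heal gv0 split-brain latest-mtime /path/inside/volume
    gluster volume heal gv0 split-brain source-brick host1:/bricks/gv0 /path/inside/volume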
18:27 Gambit15 musa22, http://lists.gluster.org/pipermail/gluster-users/2016-March/026060.html
18:27 glusterbot Title: [Gluster-users] Convert replica 2 to replica 3 arbiter 1 (at lists.gluster.org)
18:27 Gambit15 https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/#replica-2-and-replica-3-volumes
18:27 glusterbot Title: Arbiter volumes and quorum options - Gluster Docs (at gluster.readthedocs.io)
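(The conversion in those links boils down to something like the following, version permitting; hosts and paths are placeholders.)

    # after clearing any existing split-brain:
    gluster volume add-brick gv0 replica 3 host3:/bricks/gv0                    # full third copy
    # or, metadata-only:
    gluster volume add-brick gv0 replica 3 arbiter 1 host3:/bricks/arbiter-gv0
    gluster volume heal gv0 full                                                # populate the new brick(s)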
18:30 musa22 Gambit15: Thanks, will read it.
18:55 farhoriz_ joined #gluster
18:56 farhoriz_ joined #gluster
19:49 om2 joined #gluster
19:52 farhorizon joined #gluster
19:53 MikeLupe joined #gluster
19:54 farhorizon joined #gluster
20:09 derjohn_mob joined #gluster
20:10 k4n0 joined #gluster
20:52 alezzandro joined #gluster
20:59 derjohn_mob joined #gluster
21:23 farhoriz_ joined #gluster
21:24 msvbhat joined #gluster
21:25 farhori__ joined #gluster
21:31 musa22 joined #gluster
21:39 derjohn_mob joined #gluster
21:40 squizzi joined #gluster
21:41 fcoelho joined #gluster
21:42 bowhunter joined #gluster
21:48 bbooth joined #gluster
21:48 nettlejam joined #gluster
21:48 nettlejam Hi - does anyone know where I could find the documentation for Gluster 3.5.4?
21:49 nettlejam We've inherited an older system that needs some maintenance, and we can only find very old (3.1) or very new documentation...
22:04 bbooth joined #gluster
22:24 yalu_ joined #gluster
22:26 msvbhat joined #gluster
22:56 farhorizon joined #gluster
23:03 jdossey joined #gluster
23:16 farhoriz_ joined #gluster
23:19 farhorizon joined #gluster
23:49 Vaelatern joined #gluster
23:58 plarsen joined #gluster
