IRC log for #gluster, 2017-09-26

All times shown according to UTC.

Time Nick Message
00:25 jiffin joined #gluster
00:35 protoporpoise joined #gluster
00:57 g_work joined #gluster
01:11 ic0n joined #gluster
01:15 kpease joined #gluster
01:55 ilbot3 joined #gluster
01:55 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:02 gospod2 joined #gluster
02:06 prasanth joined #gluster
02:52 kotreshhr joined #gluster
03:09 gospod2 joined #gluster
03:19 Limebyte_4 joined #gluster
03:30 gyadav joined #gluster
03:54 gospod2 joined #gluster
03:54 susant joined #gluster
03:55 kdhananjay joined #gluster
03:57 ppai joined #gluster
04:01 BlackoutWNCT Hey guys, getting an odd error in a samba log which is using the samba VFS module for gluster. Was wondering if anyone had any ideas.
04:01 BlackoutWNCT [2017-09-26 04:00:12.282086] E [socket.c:2327:socket_connect_finish] 0-gfapi: connection to ::1:24007 failed (Connection refused); disconnecting socket
04:02 BlackoutWNCT I've checked and the system does not have IPv6 disabled, I can also run 'ping6 ::1' without any issues.
04:02 BlackoutWNCT I can't telnet to ::1 24007 though, but I can telnet to 'localhost 24007'. It's likely the system is prioritising IPv4 for the hostname resolution though.
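The error above says gfapi resolved the host to ::1 while nothing answers on the IPv6 loopback; a quick check is to see what glusterd is actually bound to. The transport.address-family option below is a commonly used workaround for forcing IPv4, not something confirmed in this conversation:
    # check which addresses are listening on the glusterd port
    ss -tlnp | grep 24007
    # workaround (assumption: glusterd only binds IPv4): force IPv4 in
    # /etc/glusterfs/glusterd.vol, then restart glusterd
    #     option transport.address-family inet
    systemctl restart glusterd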
04:07 apandey joined #gluster
04:16 kdhananjay joined #gluster
04:20 kdhananjay1 joined #gluster
04:25 kdhananjay joined #gluster
04:27 kdhananjay1 joined #gluster
04:32 gyadav_ joined #gluster
04:33 skumar joined #gluster
04:34 atinm joined #gluster
04:40 nbalacha joined #gluster
04:47 gyadav__ joined #gluster
04:53 Saravanakmr joined #gluster
05:01 kdhananjay joined #gluster
05:07 ndarshan joined #gluster
05:12 itisravi joined #gluster
05:17 xavih joined #gluster
05:21 skoduri joined #gluster
05:23 BlackoutWNCT joined #gluster
05:24 poornima_ joined #gluster
05:25 aravindavk joined #gluster
05:25 BlackoutWNCT joined #gluster
05:26 hgowtham joined #gluster
05:27 mbukatov joined #gluster
05:30 kdhananjay joined #gluster
05:37 kdhananjay1 joined #gluster
05:39 karthik_us joined #gluster
05:41 susant joined #gluster
05:42 prasanth joined #gluster
05:48 Humble joined #gluster
05:49 kdhananjay joined #gluster
05:52 Anarka joined #gluster
05:54 Prasad joined #gluster
06:01 kdhananjay joined #gluster
06:03 sanoj joined #gluster
06:09 kdhananjay1 joined #gluster
06:12 rafi1 joined #gluster
06:13 psony joined #gluster
06:17 ThHirsch joined #gluster
06:19 jtux joined #gluster
06:22 kdhananjay joined #gluster
06:23 itisravi__ joined #gluster
06:32 kramdoss_ joined #gluster
06:34 kdhananjay joined #gluster
06:38 rouven_ joined #gluster
06:40 apandey joined #gluster
06:42 ivan_rossi joined #gluster
06:42 ivan_rossi left #gluster
06:45 atinm joined #gluster
06:47 msvbhat joined #gluster
06:55 asciiker left #gluster
07:03 sanoj joined #gluster
07:03 kdhananjay joined #gluster
07:04 shruti joined #gluster
07:06 ndarshan joined #gluster
07:06 hgowtham joined #gluster
07:25 kdhananjay joined #gluster
07:27 kdhananjay1 joined #gluster
07:33 kdhananjay joined #gluster
07:34 ndarshan joined #gluster
07:42 rastar joined #gluster
07:52 atinm joined #gluster
07:59 nh2 joined #gluster
08:05 weller joined #gluster
08:07 weller hi, I just found this: https://github.com/gluster/glusterdocs/blob/master/Administrator%20Guide/Accessing%20Gluster%20from%20Windows.md
08:07 glusterbot Title: glusterdocs/Accessing Gluster from Windows.md at master · gluster/glusterdocs · GitHub (at github.com)
08:07 weller is the info still up to date?
08:08 weller especially '4. If using Samba 4.X version add the following line in smb.conf for all gluster volume or in the global section'?
08:09 kdhananjay joined #gluster
08:09 weller the thing is: after a few days of usage, my samba/ctdb/gluster cluster becomes unhealthy, with smbd processes that cannot be killed (kill -9).
08:10 weller since the backend for everything is gluster volumes, maybe I should also search here for a solution ;-)
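For reference, the smb.conf line that step 4 of the linked doc refers to is, as far as I recall, the one below; treat it as an assumption and verify against the doc itself:
    # assumption: the line step 4 of that doc means;
    # goes in each gluster volume share, or once in [global]
    kernel share modes = no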
08:17 rouven joined #gluster
08:27 ahino joined #gluster
08:40 buvanesh_kumar joined #gluster
08:42 _KaszpiR_ joined #gluster
08:43 jkroon joined #gluster
08:54 Wizek_ joined #gluster
08:54 kdhananjay1 joined #gluster
08:58 weller joined #gluster
09:01 ThHirsch joined #gluster
09:02 rwheeler joined #gluster
09:06 sanoj joined #gluster
09:09 mbukatov joined #gluster
09:11 itisravi joined #gluster
09:16 asdf_ joined #gluster
09:18 hgowtham joined #gluster
09:19 buvanesh_kumar joined #gluster
09:20 rafi1 joined #gluster
09:22 anoopcs weller, When did you start facing issues?
09:23 anoopcs Or were you never able to set it up?
09:29 msvbhat joined #gluster
09:38 Saravanakmr joined #gluster
09:40 gyadav_ joined #gluster
09:43 mk-fg joined #gluster
09:43 mk-fg joined #gluster
09:52 jiffin joined #gluster
09:52 MrAbaddon joined #gluster
10:10 gyadav__ joined #gluster
10:18 kdhananjay joined #gluster
10:22 weller anoopcs: we configured/installed the system and tested it - everything was working perfectly. when we switched to the new system and users started adding actual load, we started facing the issue...
10:23 anoopcs weller, Can you explain one such issue?
10:25 kdhananjay1 joined #gluster
10:28 kdhananjay2 joined #gluster
10:28 weller we have a two-node samba/ctdb cluster as fileserver for windows clients. the backend is fuse-mounted gluster shares. users can access files, smbstatus gives info and returns to shell, everything is running without errors. at random times, smbstatus shows the info, but then times out before returning to the shell.
10:28 shyam joined #gluster
10:28 weller we then (usually? hard to say since it is every 1-2 days) have zombie smbd processes that cannot be killed (kill -9)
10:30 anoopcs weller, Are you able to run `pstack <smbd-pid>` and get a process stack?
10:30 weller no
10:30 weller a running process gives the info, but one of these 'zombies' does not
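A process that ignores kill -9 is usually stuck in uninterruptible sleep (D state), typically waiting on I/O such as a hung fuse mount; plain procfs (no extra tools assumed) can confirm that even when pstack returns nothing:
    grep State /proc/<smbd-pid>/status   # 'D (disk sleep)' means uninterruptible
    cat /proc/<smbd-pid>/stack           # kernel-side stack, needs root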
10:32 anoopcs weller, Is there a reason why you have not tried vfs module for GlusterFS instead of fuse mounted volume?
10:33 weller yep. windows users cannot mount subfolders (https://bugzilla.redhat.com/show_bug.cgi?id=1484427)
10:33 glusterbot Bug 1484427: unspecified, unspecified, ---, anoopcs, ASSIGNED , Cannot map subfolder of gluster/samba share when using vfs objects = glusterfs
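For contrast with the fuse-mount setup, a minimal share definition using the GlusterFS VFS module (the setup the subfolder bug above applies to); the option names are from the vfs_glusterfs manpage, while the share and volume names are placeholders:
    [data]
        # placeholder share name; 'data' is also the assumed gluster volume name
        vfs objects = glusterfs
        glusterfs:volume = data
        glusterfs:logfile = /var/log/samba/glusterfs-data.%M.log
        glusterfs:loglevel = 7
        # path is interpreted relative to the root of the gluster volume
        path = /
        kernel share modes = no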
10:36 apandey joined #gluster
10:38 rouven joined #gluster
10:39 anoopcs :-) Ah.. I remember this BZ.
10:39 weller and I will eventually respond to your request ;-)
10:39 anoopcs I was about to raise it
10:40 anoopcs he..he
10:42 weller if gluster 3.12 has some solution potential, do you have an ETA for when CentOS will 'officially' include the release? looks like the git repository https://github.com/CentOS-Storage-SIG/centos-release-gluster312 was finished a month ago already
10:42 glusterbot Title: GitHub - CentOS-Storage-SIG/centos-release-gluster312: dist-git like repository for centos-release-gluster312 (at github.com)
10:44 jkroon joined #gluster
10:44 anoopcs weller, http://lists.gluster.org/pipermail/packaging/2017-September/000377.html
10:44 glusterbot Title: [gluster-packaging] glusterfs-3.12.1 released (at lists.gluster.org)
10:44 anoopcs weller, But we still need to figure out where the problem lies.. updating to 3.12 may not fix the issue if it's not from the GlusterFS side
10:45 weller true, unfortunately
10:46 anoopcs weller, and you say that it happens every 1-2 days? around how many users are accessing the volume?
10:47 weller it seems unpredictable, every 1-2 days, around 20 users
10:49 itisravi joined #gluster
10:52 weller anoopcs: could metadata-caching be an issue here?
10:55 bfoster joined #gluster
10:58 anoopcs weller, Can you please paste the output of `gluster volume info <VOLNAME>`?
11:00 weller http://pasted.co/83556550
11:00 glusterbot Title: gluster info - 83556550 (at pasted.co)
11:06 weller anoopcs: 'good news' btw, we just got a stale process
11:08 anoopcs weller, and smbstatus is being timed out?
11:08 weller yes
11:09 weller still have to identify the right process
11:09 weller and smbstatus does not show the complete list of locked files
11:09 weller the timeout is 20 seconds
11:09 anoopcs weller, Are the nodes in HEALTHY state?
11:10 anoopcs `ctdb status`
11:10 weller all 'OK'
11:11 weller gluster volume status shows that 'data' is crashed on one node
11:11 weller that could be some indication at least
11:11 anoopcs Aha..
11:11 anoopcs What is 'data'?
11:11 mbukatov joined #gluster
11:11 anoopcs Ok..the brick name
11:11 weller sry, the volume shared to windows users
11:12 anoopcs also the volume name
11:13 anoopcs Do you have a coredump file for the crash?
11:14 weller don't know how to get one
11:14 weller in principle, the servers are still running, and users still can access files
11:15 anoopcs weller, Can you check what's there in brick log file on the node where it crashed?(/var/log/glusterfs/bricks/)
11:15 weller the brick log: http://pasted.co/6cd3194f
11:15 glusterbot Title: brick log - 6cd3194f (at pasted.co)
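In case the crash reproduces, a rough sketch of what to look for and how to get a core dump next time (default paths, nothing specific to this setup):
    # the backtrace of a brick crash usually lands in the brick log
    grep -A20 'signal received' /var/log/glusterfs/bricks/*.log
    # allow core dumps for future crashes
    ulimit -c unlimited
    cat /proc/sys/kernel/core_pattern   # shows where cores would be written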
11:19 mbukatov joined #gluster
11:20 rouven_ joined #gluster
11:22 msvbhat joined #gluster
11:24 pioto joined #gluster
11:34 prasanth joined #gluster
11:38 skoduri joined #gluster
11:47 itisravi__ joined #gluster
11:50 buvanesh_kumar joined #gluster
11:51 nh2 joined #gluster
11:52 kdhananjay joined #gluster
11:53 shyam joined #gluster
11:55 apandey_ joined #gluster
11:56 kdhananjay joined #gluster
12:04 vbellur joined #gluster
12:05 msvbhat joined #gluster
12:07 kdhananjay1 joined #gluster
12:08 kotreshhr joined #gluster
12:15 ahino joined #gluster
12:19 vbellur joined #gluster
12:20 vbellur joined #gluster
12:20 vbellur joined #gluster
12:21 poornima_ joined #gluster
12:21 vbellur joined #gluster
12:23 vbellur joined #gluster
12:24 vbellur joined #gluster
12:28 nbalacha joined #gluster
12:30 aravindavk joined #gluster
12:33 jiffin joined #gluster
12:40 kotreshhr left #gluster
12:41 kdhananjay joined #gluster
12:54 buvanesh_kumar joined #gluster
12:56 baber joined #gluster
12:58 jiffin joined #gluster
13:06 msvbhat joined #gluster
13:07 aravindavk joined #gluster
13:08 dominicpg joined #gluster
13:10 vbellur joined #gluster
13:14 ahino joined #gluster
13:15 jstrunk joined #gluster
13:21 plarsen joined #gluster
13:31 plarsen joined #gluster
13:33 rouven_ joined #gluster
13:38 skylar joined #gluster
13:49 Norky joined #gluster
13:50 vbellur joined #gluster
14:00 weller after upgrading to gluster 3.12 I have some obsolete settings for my volumes (e.g. nfs-ganesha). I cannot reset them manually... how can I clean them up?
14:02 rouven_ joined #gluster
14:03 kotreshhr joined #gluster
14:06 farhorizon joined #gluster
14:08 farhorizon joined #gluster
14:09 hmamtora joined #gluster
14:14 kramdoss_ joined #gluster
14:23 susant joined #gluster
14:25 nbalacha joined #gluster
14:26 kpease joined #gluster
14:26 msvbhat joined #gluster
14:28 jiffin1 joined #gluster
14:29 anoopcs weller++ :-)
14:29 glusterbot anoopcs: weller's karma is now 1
14:30 weller hooray (i guess)
14:38 weller anoopcs++
14:38 glusterbot weller: anoopcs's karma is now 2
14:42 rouven_ joined #gluster
14:43 BitByteNybble110 joined #gluster
14:45 primehaxor joined #gluster
14:47 plarsen joined #gluster
14:49 kotreshhr joined #gluster
14:50 primehaxor joined #gluster
14:53 farhorizon joined #gluster
14:59 gyadav__ joined #gluster
15:00 Saravanakmr joined #gluster
15:04 ashka joined #gluster
15:04 ashka joined #gluster
15:14 jiffin weller: there is a hacky way to do it
15:14 jiffin which no one will recommend
15:15 baber joined #gluster
15:18 wushudoin joined #gluster
15:21 apandey joined #gluster
15:23 weller jiffin: is there an alternative?
15:24 weller or is it just orphaned settings, which do no harm?
15:24 skumar_ joined #gluster
15:24 jiffin weller: yeah it won't harm
15:24 jiffin actually
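For reference, the supported command for dropping volume options, plus the kind of unsupported hack jiffin is probably alluding to (he never spells it out here, so the second part is purely an assumption):
    # supported: reset one option, or everything, on a volume
    gluster volume reset <VOLNAME> <option-name>
    gluster volume reset <VOLNAME> all
    # unsupported (assumed meaning of 'hacky'): stop glusterd on all nodes,
    # edit /var/lib/glusterd/vols/<VOLNAME>/info to drop the stale keys,
    # then start glusterd again -- easy to break the volume, hence not recommended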
15:26 skumar__ joined #gluster
15:27 msvbhat joined #gluster
15:28 aravindavk joined #gluster
15:39 jefarr joined #gluster
15:42 buvanesh_kumar joined #gluster
15:47 ppai joined #gluster
15:48 bowhunter joined #gluster
16:07 weller seems like we still have some trouble with gluster & samba/ctdb... we upgraded to gluster 3.12 and switched to vfs objects = glusterfs... before, we had shared the fuse-mounted locations. the issue is that the windows ACLs do NOT give users 'delete' permissions
16:10 msvbhat joined #gluster
16:14 baber joined #gluster
16:22 arpu joined #gluster
16:24 rafi joined #gluster
16:34 rafi joined #gluster
16:34 vbellur joined #gluster
16:42 farhoriz_ joined #gluster
16:46 bwerthmann joined #gluster
16:46 armyriad joined #gluster
16:56 farhorizon joined #gluster
16:59 farhoriz_ joined #gluster
17:06 farhorizon joined #gluster
17:15 baber joined #gluster
17:22 ahino joined #gluster
17:24 s34n joined #gluster
17:25 s34n I saw a sheepdog slide from a few years ago that paints gluster performance very poorly in comparison to sheepdog and ceph
17:26 s34n I'm sure gluster fans have their own benchmarks to champion
17:27 btspce joined #gluster
17:27 s34n is there some recent comparison of performance that would be helpful?
17:28 btspce What is the correct tune2fs command to change the timeout setting before fs goes readonly?
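tune2fs has no timeout knob as such; what usually gets changed is the filesystem's on-error behaviour, which may or may not be what is being asked here:
    # show the current setting ("Errors behavior:" line)
    tune2fs -l /dev/<brick-device> | grep -i 'errors behavior'
    # change it: continue | remount-ro | panic
    tune2fs -e continue /dev/<brick-device>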
17:30 buvanesh_kumar joined #gluster
17:41 rastar joined #gluster
17:44 Saravanakmr joined #gluster
18:04 jkroon joined #gluster
18:10 dominicpg joined #gluster
18:21 baber joined #gluster
18:30 bowhunter joined #gluster
18:33 vbellur1 joined #gluster
18:34 vbellur1 joined #gluster
18:34 vbellur1 joined #gluster
18:35 arpu joined #gluster
18:35 vbellur1 joined #gluster
18:36 vbellur1 joined #gluster
18:42 MrAbaddon joined #gluster
18:42 rafi joined #gluster
18:49 vbellur joined #gluster
18:50 vbellur joined #gluster
18:53 rafi1 joined #gluster
18:57 baber joined #gluster
19:07 threebean joined #gluster
19:09 threebean hi.  I just tried gluster on f26 (for a fedora infrastructure service) and hit a stack trace at startup.
19:09 threebean https://paste.fedoraproject.org/paste/lSltm9R3wFAUEQJtvcj0~g
19:09 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
19:09 threebean any pointers on where to go next?  (should I not even be trying this?)
19:11 jiffin1 joined #gluster
19:12 threebean this is glusterfs-server-3.10.5-1.fc26
19:14 threebean here is my /etc/glusterfs/glusterd.vol https://paste.fedoraproject.org/paste/tFYYO8fZx8UJu~vvrrVbkQ
19:14 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
19:20 threebean other journal output may be helpful?  https://paste.fedoraproject.org/paste/assV0qXJofgACttODglWkA
19:20 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
19:31 ndevos threebean: hmm, your glusterd.vol does not look normal, it should be a .vol file for the management daemon
19:32 ndevos threebean: since glusterfs 3.2 (I think?) the 'gluster' commandline should be used to create volumes, not manually (or ansible-)created .vol files
19:38 threebean oh - that's significant.
19:38 threebean i'm using an ansible role that we used to configure gluster on el7 for this.  will investigate and rework it.  thank you.
19:39 ndevos even the versions on el7 do not work like that...
19:40 ndevos there is a gluster module for ansible too, you may want to use that instead, or look into github.com/gluster/gdeploy
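A sketch of the CLI workflow ndevos is describing (hostnames, volume name and brick paths are placeholders); glusterd generates the volume's .vol files itself:
    gluster peer probe server2
    gluster peer probe server3
    gluster volume create myvol replica 3 \
        server1:/bricks/b1/brick server2:/bricks/b1/brick server3:/bricks/b1/brick
    gluster volume start myvol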
19:40 puiterwijk joined #gluster
19:44 skylar joined #gluster
20:04 vbellur joined #gluster
20:19 shyam joined #gluster
20:28 kotreshhr left #gluster
20:40 s34n I'm trying to prep my first gluster install
20:41 cloph s34n: make sure to use large enough inode size as described in docs, so it can also hold the extended attributes
20:42 s34n So I need a partition dedicated to gluster, right?
20:42 s34n cloph: I haven't read about inode sizing in the docs
20:42 cloph no, not needed.
20:42 cloph but if you have a dedicated partition, use a subdirectory for  the brick, not the toplevel/root
20:43 s34n bricks are subdirectories?
20:43 s34n err, directories?
20:44 cloph bricks are where the gluster volume's data is stored, and yes, for the server it is just a directory.
20:45 cloph https://gluster.readthedocs.io/en/latest/Quick-Start-Guide/Quickstart/#step-2-format-and-mount-the-bricks guess that docs should be improved to state what the -i size=512 is used for (i.e. allocate larger block for inode)
20:45 glusterbot Title: Quick Start Guide - Gluster Docs (at gluster.readthedocs.io)
20:45 cloph https://access.redhat.com/articles/1273933 is a little more verbose
20:46 glusterbot Title: Key Points to remember before you create your Red Hat Gluster Storage 3.0 trusted pool - Red Hat Customer Portal (at access.redhat.com)
20:46 _KaszpiR_ joined #gluster
20:47 cloph (you don't really need to use xfs though, any filesystem that supports the extended attributes would work, just xfs is kinda the goto filesystem (and I personally like it))
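The inode-size step from the quickstart cloph is pointing at, as a sketch (device, volume group and mount point are placeholders):
    # larger inodes so gluster's extended attributes fit inline
    mkfs.xfs -i size=512 /dev/vg0/brick1
    mkdir -p /bricks/brick1
    mount /dev/vg0/brick1 /bricks/brick1
    # use a subdirectory as the brick, not the mount point itself
    mkdir /bricks/brick1/brick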
20:48 prasanth joined #gluster
20:49 s34n the point of putting bricks on their own partition is to specialize inode size?
20:50 cloph also, when using lvm below, the ability to grow the volume if needed.
20:50 freephile joined #gluster
20:50 rouven_ joined #gluster
20:51 cloph and makes sense to not have other i/o on the volume - so if you can afford it, having dedicated volumes for gluster is preferred (but "partition" doesn't really cut it, can be raid, lvm, disk,.....)
21:00 uebera|| joined #gluster
21:00 uebera|| joined #gluster
21:07 s34n cloph: So I could set up lvm thin volumes for /, /var, /home, et al and one for /bricks, then let /bricks take as much space as it can on the harddrive? Format the /bricks with i=512, and fill it with gluster bricks?
21:08 s34n cloph: why do I need to put the brick dirs in a subdir of /bricks?
21:09 cloph ideally only one brick - but that really depends on what you intend to use gluster for/what you expect from gluster
21:10 s34n cloph: I was planning to put vm images on it
21:11 s34n So (and I'm just feeling my way through this) I figured each vm would get a gluster volume
21:13 cloph nah, no need for each vm to get a volume, that would not be the point of using gluster - you use gluster to have high availability and/or redundancy in the storage
21:13 s34n So that would mean one brick per host for each vm with a foot on that host, right? (for a replicated or striped gluster volume)
21:14 cloph not sure what you mean by a foot on the host, but if you don't have multiple peers/servers to combine into a gluster volume, then no point in using gluster to begin with.
21:15 s34n ok. so let's say I have 3 baremetal hosts to begin with
21:15 cloph and don't use striped volume, use sharding instead
21:16 cloph depending on how beefy they are/what the network looks like, I'd either use a replica 3 or a replica 2 with arbiter with that
21:16 s34n I set them each up with a /bricks thin lvm volume
21:17 s34n replica 3/2 meaning "mirrored"?
21:18 cloph replica meaning multiple copies of the data spread across the bricks, yes.
21:18 cloph arbiter is meta-data only, so doesn't need diskspace, but can prevent split-brain/provides quorum
21:20 s34n I still need to read about sharding, but why mirroring over striping?
21:20 cloph nah, sharding over striping, striping is considered obsolete.
21:20 cloph and mirror/replica to have a benefit of using gluster.
21:21 cloph You didn't write why you want to use gluster, so I assume the redundancy/high-availability aspect is the key factor.
21:22 cloph For that you need replica/redundancy in the setup, and with only three hosts, you need replica 3 or replica 2 with arbiter if you want any of the hosts to be able to go AWOL without interrupting operation of the volume
21:22 cloph @stripe
21:22 glusterbot cloph: (#1) Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes., or (#2) The stripe translator is deprecated. Consider enabling sharding instead.
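A sketch of the replica-2-plus-arbiter layout cloph suggests for three hosts (hostnames, volume name and brick paths are placeholders):
    # two full data copies plus a metadata-only arbiter brick on the third host
    gluster volume create vms replica 3 arbiter 1 \
        host1:/bricks/vms/brick host2:/bricks/vms/brick host3:/bricks/vms/arbiter
    gluster volume start vms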
21:22 s34n I'm thinking of striping like RAID, but what I just read doesn't look like that->http://docs.gluster.org/en/latest/Quick-Start-Guide/Architecture/
21:23 glusterbot Title: Architecture - Gluster Docs (at docs.gluster.org)
21:23 s34n cloph: yes. I'm looking for resiliency
21:25 s34n top ddg link on gluster sharding is dead: http://blog.gluster.org/2015/12/introducing-shard-translator/
21:27 cloph yeah, the website got revamped, and unfortunately gluster is notorious for breaking weblinks :-/
21:27 cloph try http://www.gluster.org/introducing-shard-translator/
21:29 cloph sharding is turned on by the default "storage for vm images" tuning/group https://github.com/gluster/glusterfs/blob/master/extras/group-virt.example
21:29 glusterbot Title: glusterfs/group-virt.example at master · gluster/glusterfs · GitHub (at github.com)
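Applying that tuning group is a one-liner; the volume name is a placeholder, and the second command just confirms sharding came on with it:
    gluster volume set vms group virt
    gluster volume get vms features.shard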
21:37 arpu joined #gluster
21:38 mattmcc joined #gluster
21:39 s34n cloph: is there striping in a RAID5/6 type concept for gluster?
21:39 s34n not just in a RAID0 way?
21:41 s34n cloph: something a bit less disk-expensive than mirroring?
21:42 cloph no, it doesn't have an impact on disk-saving, it merely splits larger files into chunks transparently.
21:42 cloph and those individual smaller chunks then get replicated
21:48 s34n cloph: I know I'm missing something here. A 10MB file on a replicated volume always consumes 20MB in the cluster?
21:53 cloph depends on what kind of replica you have - for replica 2 this is true (there are also dispersed volumes that work differently, but those are not suitable for VM images)
21:53 cloph replica x → data stored x times (on x different bricks)
21:53 s34n cloph: can't I stripe across 6 bricks, using two as parity for each stripe?
21:55 cloph you can use a distributed replica across 6 bricks, and you can somehow think of arbiter as parity, although not really the same
21:55 s34n but a distributed replica will need 2x diskspace for each file, right?
21:56 s34n whereas the 4+2-parity striping would only need 1.5x disk space for each file to achieve the same resilience
21:57 s34n or actually better resiliency
21:59 s34n a 10+2p cluster would only cost 1.2x with arguably better resiliency than distributed-replicated volumes
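The arithmetic above corresponds to gluster's dispersed (erasure-coded) volume type, which cloph already flagged as not suitable for VM images; a sketch of creating a 4+2 layout (hostnames and volume name are placeholders):
    # 6 bricks, 2 of them redundancy -> usable space is 4/6, i.e. 1.5x overhead
    gluster volume create ec1 disperse 6 redundancy 2 \
        host{1..6}:/bricks/ec1/brick    # bash brace expansion for host1..host6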
22:00 s34n what am I missing?
22:00 weller s34n: my guess is you would have to pay with performance
22:00 s34n it should be very nearly as fast
22:01 s34n with better confidence in the data integrity
22:02 s34n how does a replicated volume know if one of the replicas is corrupted?
22:03 s34n weller: I've seen the term 'self-healing' in the gluster docs, but haven't investigated it. When does gluster self-heal?
22:04 s34n when a host disappears?
22:04 weller s34n: sorry, no Idea. learning user myself ;-)
22:04 weller but I think that is a good question
22:05 weller atm we rely on a raid6 as storage for our bricks, and then on top we have 2 nodes with replica 2 :D
22:06 s34n weller: I think a parity computation would not be expensive and would be well worth it
22:07 weller depends on what the main cause of brick failure is... normally you just take one node offline (reboot), that is not a scenario for 'regular' raid!?
22:08 s34n when a node goes offline, a replica has to get rebuilt on a different node, right?
22:08 jbrooks joined #gluster
22:08 weller no
22:09 kharloss joined #gluster
22:09 s34n how do you maintain resiliency if you don't maintain your replicas?
22:10 weller my understanding is, you don't
22:10 s34n it seems to me you would have to re-replicate the missing part of the volume
22:11 weller maybe that is different for more than 2 nodes
22:11 weller cloph: ?
22:11 * s34n laughs at two ignorant users taking guesses
22:12 s34n poor cloph can't get anything done while getting pestered by me
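To close the open question: with replica volumes the data is not re-replicated to another node automatically; when the failed brick comes back, the self-heal daemon copies over whatever it missed (replacing a permanently dead node is a separate, manual step). The commands below are standard CLI, with the volume name as a placeholder:
    gluster volume heal <VOLNAME> info              # entries still pending heal
    gluster volume heal <VOLNAME> info split-brain
    gluster volume heal <VOLNAME>                   # trigger a heal manually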
22:22 farhorizon joined #gluster
22:52 jbrooks joined #gluster
