
IRC log for #gluster, 2017-02-09


All times shown according to UTC.

Time Nick Message
00:14 victori joined #gluster
00:32 ashiq joined #gluster
00:58 shdeng joined #gluster
01:16 pdrakeweb joined #gluster
01:16 rastar joined #gluster
01:37 rastar joined #gluster
01:39 pjrebollo joined #gluster
01:44 victori joined #gluster
01:50 rastar joined #gluster
01:52 pjrebollo joined #gluster
01:53 susant joined #gluster
01:55 farhorizon joined #gluster
01:58 PatNarciso any suggestions where to look when... a gluster volume appears to freeze?  I'm able to reproduce by performing a timemachine backup to a samba share on a gluster fuse mount.  all nodes with this volume mounted, freeze.  BUT all nodes/servers that don't have the volume mounted-- are fine.
01:58 glusterbot PatNarciso: mounted's karma is now -1
01:59 PatNarciso glusterbot: exactly.
02:00 pjrebollo joined #gluster
02:01 Gambit15 joined #gluster
02:08 loadtheacc joined #gluster
02:10 ShwethaHP joined #gluster
02:15 PatNarciso ahh, bug 1399015.
02:15 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1399015 medium, unspecified, ---, bugs, POST , performance.read-ahead on results in processes on client stuck in IO wait
02:17 PatNarciso POST, but in 3.9.1 release notes.
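The bug referenced above concerns the performance.read-ahead translator leaving client processes stuck in IO wait. A minimal sketch of how that option can be inspected and switched off while testing, assuming a hypothetical volume named "myvol" (whether this addresses the freeze PatNarciso describes is not confirmed in the log):

    # show the effective value of the option (defaults included)
    gluster volume get myvol performance.read-ahead
    # turn the read-ahead translator off for this volume
    gluster volume set myvol performance.read-ahead off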
02:23 derjohn_mob joined #gluster
02:25 rastar joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:54 gem joined #gluster
02:59 foster joined #gluster
03:01 niknakpaddywak joined #gluster
03:02 [o__o] joined #gluster
03:03 blu_ joined #gluster
03:04 nthomas joined #gluster
03:07 plarsen joined #gluster
03:08 midacts joined #gluster
03:17 victori joined #gluster
03:28 susant joined #gluster
03:32 riyas joined #gluster
03:44 magrawal joined #gluster
03:45 victori joined #gluster
03:47 buvanesh_kumar joined #gluster
03:48 atinm joined #gluster
04:07 victori joined #gluster
04:09 pjrebollo joined #gluster
04:17 nbalacha joined #gluster
04:29 ankit joined #gluster
04:30 victori joined #gluster
04:32 prasanth joined #gluster
04:36 susant left #gluster
04:38 Saravanakmr joined #gluster
04:38 victori joined #gluster
04:49 Prasad joined #gluster
04:54 skumar joined #gluster
04:59 ndarshan joined #gluster
05:01 Shu6h3ndu joined #gluster
05:04 ankit_ joined #gluster
05:20 Jacob843 joined #gluster
05:22 rafi joined #gluster
05:22 rastar joined #gluster
05:24 sbulage joined #gluster
05:28 gyadav joined #gluster
05:30 jiffin joined #gluster
05:32 k4n0 joined #gluster
05:33 apandey joined #gluster
05:35 jiffin1 joined #gluster
05:41 Karan joined #gluster
05:44 riyas joined #gluster
05:46 apandey joined #gluster
05:47 susant joined #gluster
05:47 Karan joined #gluster
05:54 rjoseph joined #gluster
05:56 Karan joined #gluster
05:58 Humble joined #gluster
06:08 victori joined #gluster
06:13 hgowtham joined #gluster
06:16 rafi joined #gluster
06:18 sanoj joined #gluster
06:19 msvbhat joined #gluster
06:27 jiffin1 joined #gluster
06:28 overyander joined #gluster
06:31 msvbhat_ joined #gluster
06:33 klaas joined #gluster
06:33 jbrooks_ joined #gluster
06:33 kshlm joined #gluster
06:34 skoduri joined #gluster
06:34 nbalacha joined #gluster
06:34 john51 joined #gluster
06:35 loadtheacc joined #gluster
06:39 victori joined #gluster
06:45 [diablo] joined #gluster
06:48 rafi1 joined #gluster
06:51 mhulsman joined #gluster
06:56 rafi joined #gluster
06:58 nthomas joined #gluster
07:16 poornima joined #gluster
07:20 jkroon joined #gluster
07:23 msvbhat joined #gluster
07:32 jtux joined #gluster
07:35 kotreshhr joined #gluster
07:39 mhulsman joined #gluster
07:49 kdhananjay joined #gluster
07:58 ivan_rossi joined #gluster
08:11 armyriad joined #gluster
08:25 fsimonce joined #gluster
08:28 itisravi joined #gluster
08:39 mhulsman joined #gluster
08:40 mhulsman joined #gluster
08:41 sanoj joined #gluster
08:42 skarlso_ joined #gluster
08:47 musa22 joined #gluster
08:51 ppai joined #gluster
09:03 Wizek_ joined #gluster
09:06 ahino joined #gluster
09:09 ShwethaHP joined #gluster
09:09 skoduri joined #gluster
09:09 pjrebollo joined #gluster
09:11 buvanesh_kumar joined #gluster
09:14 shdeng joined #gluster
09:16 jtux joined #gluster
09:17 shdeng joined #gluster
09:24 rafi1 joined #gluster
09:36 derjohn_mob joined #gluster
09:37 Seth_Karlo joined #gluster
09:37 Nuxr0 hi, for some reason we cannot start the nfs server on gluster 3.8.8. rpcbind is on, iptables and selinux off. nfs log here https://paste.fedoraproject.org/551427/66328931/raw/
09:37 Nuxr0 any pointers?
09:38 tallmocha joined #gluster
09:39 Seth_Karlo joined #gluster
09:40 Marbug_ joined #gluster
09:41 k4n0 joined #gluster
09:46 dspisla joined #gluster
09:49 Seth_Kar_ joined #gluster
09:51 Seth_Karlo joined #gluster
09:51 kotreshhr joined #gluster
10:00 Nuxr0 jesus christ, someone messed with /etc/hosts.allow ... once I added ALL:127.0.0.1 to it things got better ...
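For reference, a sketch of the kind of /etc/hosts.allow entry Nuxr0 describes; the loopback line matches the log, while the commented pool entry is an illustrative assumption and should follow local security policy:

    # /etc/hosts.allow
    ALL: 127.0.0.1
    # optionally also allow the storage pool's subnet, e.g.:
    # ALL: 192.168.10.0/24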
10:11 jiffin1 joined #gluster
10:16 TvL2386 joined #gluster
10:20 susant joined #gluster
10:29 skarlso_ Nuxr0: wow. that's something to remember for later on...
10:31 skarlso_ am I to understand that GlusterFS still doesn't provide soft limits on quotas?
10:31 skarlso_ I'm getting mixed information here, from red hat and from the gluster documentation.
10:31 skarlso_ # gluster volume quota VOLNAME default-soft-limit soft_limit
10:32 skarlso_ Yes, gluster says
10:32 skarlso_ Note For now, only Hard limits are supported. Here, the limit cannot be exceeded and attempts to use more disk space or inodes beyond the set limit is denied.
10:32 skarlso_ so... um... which one? :D
10:32 skarlso_ And then it goes on to say you can get alerts on soft limits..
10:32 skarlso_ Like.... WTF? :D
10:33 skarlso_ So is it now supported or NOT?
10:33 apandey joined #gluster
10:33 skarlso_ Setting Alert Time
10:33 skarlso_ Alert time is the frequency at which you want your usage information to be logged after you reach the soft limit.
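To reconcile the two documents skarlso_ quotes: only the hard limit is enforced (writes beyond it are denied), while the soft limit is used for alerting/logging once crossed. A minimal sketch of the related quota commands, assuming a hypothetical volume "myvol" and directory "/projects" (values are illustrative):

    gluster volume quota myvol enable
    # hard limit: enforced, writes beyond 10GB are denied
    gluster volume quota myvol limit-usage /projects 10GB
    # soft limit: only triggers alerts/log messages, here at 80% of the hard limit
    gluster volume quota myvol default-soft-limit 80%
    # how often usage is logged after the soft limit is crossed
    gluster volume quota myvol alert-time 1w
    gluster volume quota myvol list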
10:59 kotreshhr joined #gluster
11:29 ankit__ joined #gluster
11:32 ahino1 joined #gluster
12:00 rafi1 joined #gluster
12:10 pulli joined #gluster
12:18 Seth_Kar_ joined #gluster
12:19 Wizek_ joined #gluster
12:21 vbellur joined #gluster
12:23 shyam joined #gluster
12:26 ashiq joined #gluster
12:30 Philambdo joined #gluster
12:35 flying joined #gluster
12:46 ahino joined #gluster
12:51 musa22 joined #gluster
12:55 musa22 joined #gluster
12:59 sona joined #gluster
13:04 ankit__ joined #gluster
13:05 Seth_Karlo joined #gluster
13:06 Seth_Karlo joined #gluster
13:07 pdrakeweb joined #gluster
13:09 musa22 joined #gluster
13:10 nbalacha joined #gluster
13:13 musa22 joined #gluster
13:15 nthomas joined #gluster
13:16 musa22 joined #gluster
13:16 devops joined #gluster
13:17 devops Hi all, I have a question about split brain when you have a 3-replica distributed volume. According to the docs, there is a corner case where split brain can occur even then. https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/#replica-2-and-replica-3-volumes
13:17 glusterbot Title: Arbiter volumes and quorum options - Gluster Docs (at gluster.readthedocs.io)
13:17 devops Is this just applicable to arbiter volumes, or did i read that correctly in saying that it could apply to a volume with a replica set of 3?
13:18 devops If so, is this theoretical or has it actually happened?
13:21 nthomas joined #gluster
13:29 unclemarc joined #gluster
13:30 cloph devops: what kind of distributed replica do you have? arbiter and distributed volumes are different things
13:34 kotreshhr left #gluster
13:37 Philambdo joined #gluster
13:41 devops correct
13:42 devops i have a distributed volume -- not arbiter
13:42 devops the documentation was IMHO confusingly worded so i wasn't sure if it just applied to arbiter
13:42 devops cloph: does this just apply to arbiter?
13:42 cloph yeah, with replica 3, you'll always have split-brain if the wrong two nodes go down at the same time.
13:43 ira joined #gluster
13:43 devops what does the wrong two nodes mean exactly?
13:43 cloph and you can have split brain if writes to parts of a file happening at the same time fail on different hosts (that is the corner case mentioned)
13:44 cloph If the bricks are part of the same replica
13:44 devops oh okay
13:45 devops and scenarios where this could happen are: server dies during write, process dies during write, or network partition
13:45 devops something along the typical lines but the bad thing happens at the exact wrong time
13:45 devops did i get that right?
13:46 cloph server dies during write → yeah, then the brick will be out of sync, and in addition server-quorum is also questioned.
13:46 cloph process dies during write: not sure, gluster wouldn't care about the client process dying while writing, but if gluster itself dies, then yes
13:47 cloph for network partition, this also counts to server-quorum at the same time.
13:47 devops this would only be the case if this happened basically at the same time on two different nodes?
13:47 cloph (but in a distributed replicated one, server quorum is less of a problem, as you have the servers of the other distributed parts still available)
13:47 devops in other words, if the bad thing happened on a single node, would quorum protect me?
13:48 devops i would use client and server side quorum
13:49 cloph it is not quorum that would protect you, it is the replica type that would protect you. Quorum would prevent inconsistencies in case a split-brain is detected.
13:49 devops i am planning on using distributed - replicated volume with a replica set of 3
13:50 devops so if i do client+server side quorum with a replica set of 3, quorum would prevent inconsistency if the disaster scenario occurs on only one server?
13:50 cloph devops: how many bricks would you have, and how do you plan to handle the "distribute" part in this? For a replica 3 distributed setup, you need to have at least 6 servers
13:50 cloph (at least 6 bricks)
13:50 devops I was going to have three bricks, one brick on each server
13:51 devops I am still learning the basics so i may be missing something here
13:51 cloph yes, with replica 3 (or replica 2 with arbiter), you can continue with regular operation even when one brick goes down completely.
13:51 devops why would 6 servers be necessary?
13:51 devops in the scenario i described isn't that a replica 3 distributed setup?
13:51 cloph you would distribute between a replica set (a1 b1 c1) and another (a2 b2 c2)
13:52 cloph that would be distribute between two sets of replica 3
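A minimal sketch of the distributed replica-3 layout cloph describes, reusing the hypothetical host labels a1/b1/c1 and a2/b2/c2 (brick paths are placeholders); bricks are grouped into replica sets in the order they are listed:

    gluster volume create distrepvol replica 3 \
        a1:/bricks/brick1 b1:/bricks/brick1 c1:/bricks/brick1 \
        a2:/bricks/brick1 b2:/bricks/brick1 c2:/bricks/brick1
    # the first three bricks form one replica set, the next three the other;
    # files are then distributed between the two sets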
13:52 msvbhat joined #gluster
13:52 cloph devops: maybe you're using distributed differently than the gluster docs, that's why I keep asking what you think your layout will actually be...
13:52 devops cloph: understood
13:52 devops let me see if i can clarify
13:53 devops i was going off this https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/
13:53 glusterbot Title: Setting Up Volumes - Gluster Docs (at gluster.readthedocs.io)
13:54 devops based on that, i thought what i was doing qualified as distributed and replicated
13:54 devops since each brick was on a separate server and there were 3 replicas, one replica on each brick
13:54 cloph you still didn't tell *what* you were thinking about doing.
13:55 devops in regards to layout?
13:56 cloph yes, what you were thinking of when you labeled it as "distributed replica 3"
13:57 devops okay sure....1. logical volume 2. the volume has 3 bricks 3. each brick is on a different server 4. replica count is 3
13:57 devops that's what i thought that meant
13:57 cloph OK, if the volumes are on the same brick, then there is no point at all.
13:58 cloph or you're mixing terminology...
13:58 devops i guess i am missing something
13:58 devops there's a  volume with three bricks, and each brick is on a different server
13:58 saybeano joined #gluster
13:58 cloph you speak about "logical volume", which I take as an LVM volume.
13:58 devops oh no sorry
13:58 devops i am referring to gluster volume
13:58 devops i am definitely not putting this all on one server
13:59 cloph a gluster volume consisting of three bricks, and those bricks being on different servers in replica 3 mode is just a replica 3 volume.
13:59 cloph No distribution here.
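By contrast, a plain replica 3 volume like the one devops describes would look roughly like this (hostnames and brick paths are hypothetical):

    # every file is stored on all three bricks, no distribution
    gluster volume create repvol replica 3 \
        server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/brick1
    gluster volume start repvol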
13:59 devops ohhh
13:59 devops what does distribution mean then in the context of gluster?
13:59 cloph All data is stored on all of the three bricks
13:59 devops i do have my terminology mixed up
13:59 devops ohhhh
13:59 devops i see now why i was confusing you so badly
14:00 devops yikes
14:00 cloph distributed means: File A is stored on <part1>, and File B is stored on <part2> of the volume.
14:00 devops i took distributed to mean "file copied to multiple servers"
14:00 devops :(
14:00 devops my bad
14:01 cloph part can be a single brick (in case it is just a distributed volume), or a replica-set (then it would be distributed replica)
14:01 musa22 joined #gluster
14:01 devops so my layout is a replica 3 volume -- not distributed
14:02 devops is that enough info to circle back to the split brain question?
14:03 Seth_Karlo joined #gluster
14:03 cloph OK, so with replica 3 you can have one node going down without going into split brain.
14:03 devops ahhh
14:03 devops that's what i originally thought
14:03 cloph if you disable quorum, you could lose up to two (in this case the first brick you specify when creating the volume must be up)
14:03 kippi joined #gluster
14:03 devops i have no plans to disable quorum
14:04 devops i intend to have both client side and server side quorum and only permit a single node failure. After that everything is read only
14:04 devops and by node, i am referring to the gluster servers in the trusted pool
14:04 cloph OK, so gluster will set the volume to r/o once two bricks fail (or one fails shortly after the other, and the first one didn't fully recover yet)
14:05 devops yes i've seen that but i've also read red hat documentation that indicates i need to set properties on the volume
14:05 devops is that documentation out of date?
14:05 cloph peer and node are somewhat ambiguous, since you can have peers hosting bricks of different volumes
14:05 devops that's a good point
14:06 cloph if you're using xfs as filesystem for the bricks, you should format it with a large enough inode-size, so gluster doesn't need to store the extended attributes in an additional inode.
14:06 kippi Hi, I have a three node cluster, either side of America, and I am using geo-replication to go from Site A to Site B, however it seems to get stuck on the 'changelog crawl'. It's not copying any files, there is currently 16GB of data, but the other end is empty. We have used geo-rep elsewhere and it has worked well, however this time it really isn't working. Is there a way we can monitor what it is doing? There are no errors etc, it looks fine but
14:06 kippi doing about 6MB an hour
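A couple of ways to watch what a geo-replication session is doing (volume and host names below are placeholders, not kippi's actual setup, and log paths can vary by distro):

    # per-brick session state, crawl status, files synced/pending, etc.
    gluster volume geo-replication mastervol slavehost::slavevol status detail
    # follow the gsyncd worker logs on the master side
    tail -f /var/log/glusterfs/geo-replication/mastervol/*.log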
14:07 devops cloph: i am using xfs. is it 512?
14:07 devops that's what i saw in the gluster docs
14:07 cloph yep, that should be fine
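A sketch of the brick formatting cloph recommends, with the 512-byte inode size devops mentions (the device path is a placeholder):

    # larger inodes let gluster keep its extended attributes inside the inode
    mkfs.xfs -f -i size=512 /dev/vg_bricks/brick1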
14:07 devops https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/sect-Managing_Split-brain.html
14:07 glusterbot Title: 8.10. Managing Split-brain (at access.redhat.com)
14:07 sbulage joined #gluster
14:07 devops those are the options i was reading about that should be set
14:07 devops if i want quorum
14:09 cloph (also note that for replica you need to have enough network bandwidth, as you need to transfer the data to all three bricks at once)
14:09 devops gluster volume set all cluster.server-quorum-ratio 51%; gluster volume set VOLNAME cluster.server-quorum-type server; gluster volume set VOLNAME quorum-type auto
14:09 devops cloph: understood on the bandwidth. We aren't doing big data. We shouldn't exceed more than a terabyte and will have a dedicated backplane with a 10Gb NIC
14:11 devops what's a bit confusing is the default quorum behavior in gluster 3.8. I am assuming i need to set those options to achieve client-side and server-side quorum for a replica 3 setup. But i also read that it defaults to the "right" thing
14:12 devops my inclination is to set those properties regardless so that if someone does a gluster volume info they'd know the setup
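A sketch of how the resulting settings can be inspected, assuming the hypothetical volume name "myvol":

    # options that were explicitly set on the volume
    gluster volume info myvol
    # effective value of a single option, including defaults
    gluster volume get myvol cluster.quorum-type
    gluster volume get myvol cluster.server-quorum-type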
14:13 cloph note with server quorum 51%, the notion of "first brick is enough to be up" won't hold anymore. With 51% server quorum you can really only lose one node at a time.
14:13 devops you mean i can only suffer a single node failure?
14:13 devops and still have service availability?
14:14 cloph yes. That is what you tell it :-)
14:15 cloph if two nodes are not available, gluster cannot tell whether it is a network-split or the servers being down.
14:15 devops perfect
14:15 devops that's exactly what i want
14:15 pjrebollo joined #gluster
14:15 devops in other words, if we were being theoretical, I pick consistency over availability per the CAP theorem
14:15 devops i would much rather lose access to my data than introduce inconsistency into it
14:16 devops my company's applications can tolerate latency or loss of service....data inconsistency would be far far more catastrophic
14:17 devops Thank you for the guidance. This has been very helpful. And really glad you corrected me on my horrible misuse of gluster terminology. I was probably confusing so many people.
14:19 cloph you're welcome :-)
14:19 devops cloph: if i can try your patience a bit more. I have a question about geo-replication. The docs don't explicitly say it but the replication is only one way, right? There's no such thing as an active-active setup
14:20 cloph yes, it is from master → replication slave only, and async
14:20 devops right
14:20 devops it would be insane to allow writes on the slave
14:20 devops how could you ever guarantee data consistency
14:21 devops that's what i thought
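A minimal sketch of the one-way, asynchronous master → slave session being discussed (volume and host names are placeholders; prerequisites such as passwordless ssh between master and slave are omitted):

    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status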
14:25 skylar joined #gluster
14:34 sanoj joined #gluster
14:37 kpease joined #gluster
14:45 Seth_Karlo joined #gluster
14:46 Seth_Karlo joined #gluster
14:54 kdhananjay joined #gluster
14:55 ankit__ joined #gluster
14:56 plarsen joined #gluster
15:00 ahino joined #gluster
15:05 plarsen joined #gluster
15:13 sanoj joined #gluster
15:20 dspisla joined #gluster
15:21 dspisla joined #gluster
15:26 Gambit15 joined #gluster
15:36 sac joined #gluster
15:36 shruti joined #gluster
15:36 buvanesh_kumar joined #gluster
15:36 rafi joined #gluster
15:37 susant joined #gluster
15:38 rafi joined #gluster
15:38 lalatenduM joined #gluster
15:41 sac joined #gluster
15:43 shruti joined #gluster
15:43 buvanesh_kumar_ joined #gluster
15:45 pjrebollo joined #gluster
15:55 raghu joined #gluster
15:58 susant left #gluster
15:59 farhorizon joined #gluster
16:00 squizzi joined #gluster
16:06 wushudoin joined #gluster
16:14 atinm joined #gluster
16:17 farhorizon joined #gluster
16:28 Karan joined #gluster
16:29 jkroon joined #gluster
16:43 msvbhat joined #gluster
16:44 jrrivera joined #gluster
16:51 jdossey joined #gluster
16:53 rafi joined #gluster
16:56 farhoriz_ joined #gluster
17:06 riyas joined #gluster
17:06 Seth_Karlo joined #gluster
17:07 Seth_Karlo joined #gluster
17:09 pjreboll_ joined #gluster
17:15 susant joined #gluster
17:37 gyadav joined #gluster
17:42 susant left #gluster
17:49 ivan_rossi left #gluster
17:49 ahino joined #gluster
17:54 jbrooks joined #gluster
17:54 bbooth joined #gluster
17:59 [diablo] joined #gluster
18:07 sona joined #gluster
18:14 jbrooks joined #gluster
18:15 ankit__ joined #gluster
18:16 rafi joined #gluster
19:03 jbrooks joined #gluster
19:19 jrrivera_ joined #gluster
19:48 jrrivera_ joined #gluster
19:54 msvbhat joined #gluster
20:03 musa22 joined #gluster
20:09 farhorizon joined #gluster
20:18 amye joined #gluster
20:21 derjohn_mob joined #gluster
20:26 jwd joined #gluster
20:31 overclk joined #gluster
20:35 ankit__ joined #gluster
20:46 jbrooks joined #gluster
20:46 pdrakeweb joined #gluster
20:46 unclemarc joined #gluster
20:59 types joined #gluster
21:01 derjohn_mob joined #gluster
21:02 types Hey fellas. I would like to ask about gluster best practice. Should I always mount a client directory to have my web app read files from gluster?
21:03 types or can i directly use the brick path?
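For context, a minimal sketch of the native FUSE client mount the question refers to (server, volume and mount point are placeholders); applications normally read and write through such a mount rather than touching the brick directory, which is managed by the gluster processes:

    mount -t glusterfs server1:/repvol /var/www/shared
    # or persistently via /etc/fstab:
    # server1:/repvol  /var/www/shared  glusterfs  defaults,_netdev  0 0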
21:16 arpu joined #gluster
22:05 MadPsy left #gluster
22:08 Humble joined #gluster
22:35 musa22 joined #gluster
22:52 tallmocha joined #gluster
23:12 gluytium joined #gluster
23:32 ashka joined #gluster
23:32 ashka joined #gluster
23:47 bhakti joined #gluster
