
IRC log for #gluster, 2016-06-15


All times shown according to UTC.

Time Nick Message
00:01 hagarth joined #gluster
00:26 chirino joined #gluster
00:28 rafi joined #gluster
00:29 FrancisL joined #gluster
00:31 FrancisL Hello, I am wondering if this can improve small files performance on current gluster 3.7... http://stackoverflow.com/questions/32123033/writing-small-amount-of-data-to-large-number-of-files-on-glusterfs-3-7
00:31 glusterbot Title: cluster computing - Writing small amount of data to large number of files on GlusterFS 3.7 - Stack Overflow (at stackoverflow.com)
00:45 amye joined #gluster
00:52 chirino joined #gluster
00:58 JoeJulian FrancisL: Looks like you already have the answer. It did. Here's a link to the email thread if you want more information: https://www.gluster.org/pipermail/gluster-users/2015-August/023355.html
00:58 glusterbot Title: [Gluster-users] write(filename, ...) implementation (at www.gluster.org)
01:10 FrancisL @JoeJulian, it seems it might have improved it! I've been looking at the source code to see how hard it would be to implement a similar patch
01:12 FrancisL i'm only finding api/src/glfs-fops.c but i do not believe that is the right place to update the code. Looking for some guidance on implementing a similar patch
01:14 JoeJulian What code are you working on?
01:15 haomaiwang joined #gluster
01:15 FrancisL well, release-3.7 from github
01:16 JoeJulian Ah, the poster was improving *his* source to use the gluster api.
01:16 FrancisL hmm i see
01:18 FrancisL currently, i have been able to find a workaround, ie: create a zfs pool on a gluster volume and then mount it locally over nfs. since it's writing into a big file i believe the gluster metadata doesn't get impacted and small files get written at close to real drive speed, but i lose the gluster shared data across servers unless I expose an NFS server
01:19 JoeJulian You could just write to a ramdisk and rsync it to the gluster volume.
01:19 JoeJulian It would allow faster writes and leave you in just as big of a stale state.
01:20 FrancisL ;)
01:21 FrancisL That's why i'm trying to find a better option hehe
01:21 JoeJulian Don't close the files?
01:25 JoeJulian If you can keep files open, you avoid the whole lookup(),open(),write(),close() loop and can just write(),write(),write() (maybe sync() once in a while, too).
01:30 FrancisL Since it keeps them in its open-file cache.... i guess files would still need to get closed after a while to prevent too many open files
01:30 FrancisL but would that also help for creation of new files? I guess it does many lookups to set attributes
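The keep-files-open pattern JoeJulian describes above — pay lookup()/open() once, then just write() repeatedly with an occasional sync — might look roughly like the sketch below on the application side. This is only an illustration, not code from the discussion: the cache size and sync interval are assumed values, and the LRU eviction is one way to address FrancisL's too-many-open-files concern.

```python
# Illustrative sketch only (assumed values, not from the discussion): reuse open
# file descriptors for many small writes instead of open()/write()/close() per
# record, evicting the least recently used handle to stay under the fd limit.
import os
from collections import OrderedDict

MAX_OPEN = 1024      # assumed cap; keep below `ulimit -n`
SYNC_EVERY = 1000    # assumed: fsync a file after this many writes

class OpenFileCache:
    def __init__(self):
        self.handles = OrderedDict()   # path -> (fd, writes since last fsync)

    def write(self, path, data):
        """Append bytes to path, keeping the descriptor open for reuse."""
        if path in self.handles:
            fd, n = self.handles.pop(path)
        else:
            if len(self.handles) >= MAX_OPEN:            # evict the LRU handle
                _, (old_fd, _) = self.handles.popitem(last=False)
                os.fsync(old_fd)
                os.close(old_fd)
            fd, n = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_APPEND, 0o644), 0
        os.write(fd, data)
        n += 1
        if n >= SYNC_EVERY:
            os.fsync(fd)
            n = 0
        self.handles[path] = (fd, n)   # re-insert as most recently used

    def close_all(self):
        for fd, _ in self.handles.values():
            os.fsync(fd)
            os.close(fd)
        self.handles.clear()
```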
01:31 Lee1092 joined #gluster
01:34 FrancisL Question on a different topic. Is aggregate-size still an option that can be configured for write-behind in 3.7 ?
01:34 hackman joined #gluster
01:35 JoeJulian I don't see that setting in "gluster volume set help"
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:15 sage joined #gluster
02:34 amye joined #gluster
02:38 julim joined #gluster
03:09 bwerthmann joined #gluster
03:22 poornimag joined #gluster
03:24 msvbhat_ joined #gluster
03:30 RameshN joined #gluster
03:35 Apeksha joined #gluster
03:46 nishanth joined #gluster
03:53 atinm joined #gluster
04:01 sakshi joined #gluster
04:02 shubhendu joined #gluster
04:06 kramdoss_ joined #gluster
04:13 ramky joined #gluster
04:15 nehar joined #gluster
04:18 nbalacha joined #gluster
04:21 akay Does anyone know if there's a recommended limit on the size of a gluster volume?
04:32 amye joined #gluster
04:36 aspandey joined #gluster
04:36 aravindavk joined #gluster
04:44 skoduri joined #gluster
04:58 natarej joined #gluster
05:06 prasanth joined #gluster
05:07 karthik___ joined #gluster
05:07 amye joined #gluster
05:08 rafi joined #gluster
05:10 ppai joined #gluster
05:15 sakshi joined #gluster
05:15 atalur joined #gluster
05:17 rafi joined #gluster
05:19 cliluw joined #gluster
05:21 JoeJulian akay: iirc, it was something like 72 brontobytes
05:24 gowtham joined #gluster
05:25 kotreshhr joined #gluster
05:28 ndarshan joined #gluster
05:31 micw joined #gluster
05:31 micw hi
05:31 glusterbot micw: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
05:32 micw are communications between gluster nodes secured/encrypted in any way?
05:32 atalur joined #gluster
05:32 msvbhat_ joined #gluster
05:33 kshlm joined #gluster
05:34 Gnomethrower joined #gluster
05:37 JoeJulian micw: They can be.
05:37 micw I found this: https://kshlm.in/network-encryption-in-glusterfs/
05:37 glusterbot Title: Setting up network encryption in GlusterFS (at kshlm.in)
05:37 micw looks like what I'm looking for
05:38 rafi joined #gluster
05:38 hgowtham joined #gluster
05:38 JoeJulian https://gluster.readthedocs.io/en/latest/Administrator%20Guide/SSL/
05:38 jiffin joined #gluster
05:38 glusterbot Title: SSL - Gluster Docs (at gluster.readthedocs.io)
05:38 micw i'm going to set up a 3-node gluster for our backups
05:38 micw (2x4TB each node, 3x redundancy)
05:38 itisravi joined #gluster
05:39 JoeJulian cool
05:39 micw these are cheap boxes with only 1 nic (public ip address)
05:39 micw i can add an extra private interface but for the price i can get another 3 boxes ;-)
05:39 micw (almost)
05:40 JoeJulian Before it was native, I did it using ipsec
05:40 micw since the boxes are low-traffic i think i can go with the public ip network
05:40 micw (all on same rack/switch)
05:41 JoeJulian Oh, sure. Just firewall things (iptables should be fine)
05:41 micw does it make sense to mount the gluster volume on all 3 nodes? (i'd like to run the backup daemon on the nodes)
05:44 JoeJulian Whatever's convenient.
05:50 d0nn1e joined #gluster
05:50 micw if you say "Just firewall things" wouldn't you use tls in this scenario?
05:51 JoeJulian I'm just suggesting you drop packets from sources other than your own for gluster's ,,(ports)
05:51 glusterbot glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up. All ports must be reachable by both servers and clients. Additionally it will listen on 38465-38468/tcp for NFS. NFS also depends on rpcbind/portmap ports 111 and 2049.
05:52 JoeJulian All the encryption in the world doesn't stop a ddos.
05:53 satya4ever_ joined #gluster
05:53 atalur joined #gluster
05:56 aspandey joined #gluster
06:03 karnan joined #gluster
06:06 micw i see, thank you
06:08 ashiq joined #gluster
06:16 atalur joined #gluster
06:17 jtux joined #gluster
06:20 raghug joined #gluster
06:31 rafi joined #gluster
06:32 karnan joined #gluster
06:33 Apeksha joined #gluster
06:34 anil_ joined #gluster
06:37 kshlm joined #gluster
06:39 arif-ali joined #gluster
06:39 kdhananjay joined #gluster
06:49 aspandey_ joined #gluster
06:50 itisravi joined #gluster
06:51 atalur joined #gluster
06:54 Manikandan joined #gluster
06:57 arif-ali joined #gluster
07:06 jri joined #gluster
07:15 Gnomethrower joined #gluster
07:16 [Enrico] joined #gluster
07:19 cvstealth joined #gluster
07:29 robb_nl joined #gluster
07:29 fsimonce joined #gluster
07:30 satya4ever_ joined #gluster
07:42 kdhananjay joined #gluster
07:57 hackman joined #gluster
07:59 aspandey_ joined #gluster
08:01 karthik___ joined #gluster
08:11 ahino joined #gluster
08:15 [Enrico] joined #gluster
08:16 deniszh joined #gluster
08:19 MikeLupe11 joined #gluster
08:19 Slashman joined #gluster
08:20 Gnomethrower joined #gluster
08:28 muneerse joined #gluster
08:31 kramdoss_ joined #gluster
08:34 msvbhat_ joined #gluster
08:45 akay JoeJulian, has anyone done any real world testing at the 72 brontobyte level? :)
08:45 muneerse2 joined #gluster
08:46 aspandey_ joined #gluster
08:47 aspandey joined #gluster
08:48 kramdoss_ joined #gluster
08:49 muneerse joined #gluster
08:56 kovshenin joined #gluster
09:02 [Enrico] joined #gluster
09:02 msvbhat_ joined #gluster
09:16 Gnomethrower joined #gluster
09:21 kdhananjay joined #gluster
09:25 anil_ joined #gluster
09:27 karthik___ joined #gluster
09:29 kdhananjay joined #gluster
09:30 jiffin1 joined #gluster
09:31 itisravi joined #gluster
09:32 bb0x joined #gluster
09:39 arif-ali joined #gluster
09:40 Gnomethrower joined #gluster
09:44 Jules- joined #gluster
09:44 rastar joined #gluster
09:46 Jules- Hey Guys. Is it normal that objects starting with <gfid: or with a full path show up periodically when i run 'gluster volume heal netshare info' over the day?! All bricks seem up, but i see what are probably newly written files in the heal info all the time.
09:47 itisravi Jules-: what version of gluster are you running?
09:47 msvbhat_ joined #gluster
09:48 Jules- 3.7.11-1
09:48 Jules- i never noticed this before so i'm wondering if that is normal.
09:49 itisravi Hmm then you shouldn't see any spurious entries in  the command output. i.e. the files might be needing heal for real.
09:49 itisravi Even though the bricks might be up, the client might have lost connection to them during IO and hence the need for heal.
09:49 Jules- they seem to get healed since they disappear after some seconds
09:51 Jules- let's say i modify a file on one of the gfs shares mounted over nfs, it usually shouldn't show up in the heal info output after that?
09:52 Jules- currently this is what seems to happen on my setup, and i never saw this issue before v3.7
09:53 itisravi It shouldn't. See the client logs (nfs.log in this case) or glustershd logs on all the nodes for something like "Completed data selfheal on <gfid of the file>"
09:55 aspandey joined #gluster
09:58 pocketprotector joined #gluster
10:00 Jules- nope. all the logs seems fine.
10:00 Jules- very weird.
10:01 hybrid512 joined #gluster
10:01 rastar_ joined #gluster
10:02 rastar_ joined #gluster
10:07 itisravi That is strange. Did you also happen to check for  "Completed metadata selfheal.." or "Completed entry selfheal..." too?
10:13 jiffin1 joined #gluster
10:15 Gambit15 joined #gluster
10:20 gowtham joined #gluster
10:20 muneerse2 joined #gluster
10:25 Jules- yes i completely tailed all my logs from today looking for messages like this.
10:28 Jules- all i found was this: [2016-06-14 01:31:36.726124] E [MSGID: 113027] [posix.c:1427:posix_mkdir] 0-netshare-posix: mkdir of /storage/gfs/netshare/.glusterfs/9e/d7/9ed7d856-a208-4b4e-9232-93658080964e/Censored/Censored failed [File exists]
10:29 robb_nl joined #gluster
10:41 Norky joined #gluster
10:44 Gambit15 joined #gluster
10:44 atinm joined #gluster
10:47 rastar_ joined #gluster
10:55 gowtham joined #gluster
10:59 ata joined #gluster
11:00 om joined #gluster
11:03 Guest146 xavih: Is there any method to mount glusterfs without any formatted bricks?
11:04 xavih Guest146: no, you already need formatted bricks to create the volume. Otherwise the volume cannot be created
11:07 Guest146 xavih: thank you. what about this page: "http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Developer-guide/bd-xlator/"? you mean bd xlator should also format the bricks?
11:08 xavih Guest146: I don't know much about bd xlator. Maybe it can do things a bit different, but I'm not sure
11:09 Guest146 xavih: thanks xavih
11:14 atinm joined #gluster
11:16 johnmilton joined #gluster
11:23 arif-ali joined #gluster
11:28 kotreshhr left #gluster
11:33 johnmilton joined #gluster
11:34 robb_nl joined #gluster
11:38 johnmilton joined #gluster
11:43 Tord joined #gluster
11:46 rafi1 joined #gluster
11:47 ppai joined #gluster
11:48 kshlm Heads up! Weekly Community meeting starts in ~10 minutes in #gluster-meeting
11:54 Gnomethrower joined #gluster
11:54 hgowtham joined #gluster
11:56 ira_ joined #gluster
11:56 B21956 joined #gluster
11:58 rafi joined #gluster
11:58 hgowtham joined #gluster
11:59 kshlm Weekly Community meeting starts now in #gluster-meeting
12:01 om joined #gluster
12:06 aravindavk joined #gluster
12:06 kotreshhr joined #gluster
12:14 jdarcy joined #gluster
12:16 bb0x_ joined #gluster
12:21 glafouille joined #gluster
12:34 ppai joined #gluster
12:42 Tord Hi, we have 5 servers running GlusterFS. Each server has an 802.3ad 8Gbit/s port-channel via ifenslave; read speed per server is 1GB/s and write is ~800MB/s. But when we use gluster in disperse mode we get terribly slow file and replication speeds between the nodes, why?
12:43 arif-ali joined #gluster
12:43 Tord The networking checks out fine, iperf shows between 2-3Gbit/s
12:46 ben453 joined #gluster
12:56 Apeksha joined #gluster
12:58 guhcampos joined #gluster
12:59 julim joined #gluster
12:59 rwheeler joined #gluster
13:02 bb0x_ joined #gluster
13:13 RameshN joined #gluster
13:24 Apeksha joined #gluster
13:32 bb0x_ joined #gluster
13:34 shyam1 joined #gluster
13:35 jiffin joined #gluster
13:35 msvbhat_ joined #gluster
13:35 shyam1 left #gluster
13:38 shyam joined #gluster
13:40 nbalacha joined #gluster
13:40 plarsen joined #gluster
13:46 kpease joined #gluster
13:47 kxseven joined #gluster
13:48 arcolife joined #gluster
13:54 rafi joined #gluster
14:01 Manikandan joined #gluster
14:07 squizzi joined #gluster
14:11 [Enrico] joined #gluster
14:11 cholcombe joined #gluster
14:14 mcb30 joined #gluster
14:16 gnulnx Anyone using gluster on freebsd?  Can't find much about the state of gluster there.
14:17 gnulnx Looking to set up a volume between a linux gluster and a freebsd gluster server
14:21 johnmilton joined #gluster
14:25 misc iirc, we test that it builds on freebsd, but not sure if we run the regression test suite :/
14:30 Jules- anyone running tiering on SSDs in production and having experience with it yet?
14:37 nehar joined #gluster
14:51 wushudoin joined #gluster
14:54 kkeithley we _don't_ run the regression (or any other tests, including basic does it even run) on FreeBSD :-(.  We do run abbreviated regression tests on NetBSD and it is packaged for NetBSD by the NetBSD maintainer
14:54 Apeksha joined #gluster
14:56 Wizek joined #gluster
14:58 arif-ali joined #gluster
14:59 kkeithley which means there's an excellent opportunity for someone to get involved with the community. ;-)
15:00 robb_nl joined #gluster
15:01 archit_ joined #gluster
15:02 kovshenin joined #gluster
15:14 amye joined #gluster
15:16 cornfed78 joined #gluster
15:17 Wizek_ joined #gluster
15:18 robb_nl joined #gluster
15:25 nishanth joined #gluster
15:30 Elmo_ joined #gluster
15:31 Elmo_ Hi everyone :-)
15:31 Elmo_ JoeJulian: I don't know if you are there, I might have a question on https://joejulian.name/blog/one-more-reason-that-glusterfs-should-not-be-used-as-a-saas-offering/
15:31 glusterbot Title: One more reason that GlusterFS should not be used as a SaaS offering (at joejulian.name)
15:32 JoeJulian Yes?
15:32 Elmo_ Is it still the case today?
15:32 gnulnx kkeithley: I'd be happy to get involved in that regard.
15:33 Elmo_ And.. is this a security issue that can happen anywhere? or does it need a special component installed?
15:33 skylar joined #gluster
15:34 kkeithley gnulnx: excellent
15:35 JoeJulian Yes, it's still a problem. An untrusted server can use --remote-host to access a trusted pool.
15:37 Elmo_ crap
15:37 Elmo_ JoeJulian: and that's exposed by the Gluster Daemon?
15:38 JoeJulian At least they seem to have limited it to read-only though, which is an improvement.
15:38 Elmo_ you still need physical reach to the pool right? (If the Gluster pool is behind a private network, it should be fine I guess)
15:38 JoeJulian Right
15:39 Elmo_ But as soon as you have a proxy to access it, like if you want to have geo-replication, then you're wide open to attacks?
15:39 JoeJulian "as a service", though, would allow clients access and, therefore, would allow those clients remote-host access as well.
15:39 JoeJulian Anything that can reach port 24007.
15:40 Elmo_ Which is basically any client mounting the Gluster cluster (yikes)
15:40 Elmo_ Can it be a loophole to run ANY commands, or just Gluster commands?
15:41 bwerthmann joined #gluster
15:41 alvinstarr JoeJulian: That seems like a monster sized security hole. It implies that Gluster can only be run in a trusted environment
15:41 JoeJulian I'm not "black hat" enough to answer that question. I suppose there could potentially be an attack through that path.
15:42 JoeJulian alvinstarr: I agree.
15:42 chirino_m joined #gluster
15:46 JoeJulian I'd be happy to reopen bug 990284 if you guys would like to add your opinions to it.
15:46 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=990284 is not accessible.
15:47 JoeJulian What do you mean it's not accessible? Bite me.
15:47 alvinstarr JoeJulian: why would I bite you? I doubt you taste good.
15:47 shaunm joined #gluster
15:48 bwerthma1n joined #gluster
15:48 JoeJulian :P
15:51 misc JoeJulian: interesting, i can't access it either
15:51 JoeJulian I suspect because it's security related?
15:52 Elmo_ JoeJulian: Me neither, says I don't have access
15:52 misc JoeJulian: if you check the security group, yes
15:52 JoeJulian I don't have that option. I'm just a nobody there.
15:54 JoeJulian Here's a pdf print of the bug: https://drive.google.com/file/d/0B4p9a4V-9wr-M3Z6RGwyMFVqTEU/view?usp=sharing
15:54 glusterbot Title: bug990284.pdf - Google Drive (at drive.google.com)
15:55 Elmo_ Well thats nice
15:55 Elmo_ not much comment on why it was closed as not a bug
15:58 alvinstarr Well the first thing you do is peer with the server. If it is possible to limit peering then it would stop you from changing the volumes.
16:00 JoeJulian Like I said, with 3.7.11 it does seem to be read-only now. Not sure if that still poses a security risk or not.
16:01 JoeJulian The api still accepts strings from the client, so if there's any overflow vulnerabilities it may still be bad.
16:04 chirino joined #gluster
16:05 alvinstarr The gluster docs seem to claim that you can only probe  new servers from trusted servers.(http://www.gluster.org/community/documentation/index.php/Gluster_3.1:_Creating_Trusted_Storage_Pools) If that is the case then you should be prohibited from promoting your bogus server to a peer and breaking things.
16:06 JoeJulian That does seem to be correct, now. When I wrote that article, you could use that hole to probe your own machine and make it a trusted peer.
16:16 JoeJulian alvinstarr, Elmo_: Here's the patch that made remote-host more secure: https://github.com/gluster/glusterfs/commit/fc637b14cfad4d08e72bee7064194c8007a388d0
16:16 glusterbot Title: cli,glusterd: Changes to cli-glusterd communication · gluster/glusterfs@fc637b1 · GitHub (at github.com)
16:21 Elmo_ JoeJulian: Not sure what it does without diving too deep in it, but overall it does that read-only thing you talked about?
16:22 JoeJulian yes
16:22 plarsen joined #gluster
16:25 bb0x_ joined #gluster
16:25 guhcampos joined #gluster
16:26 Gambit15 joined #gluster
16:30 haomaiwang joined #gluster
16:37 gowtham joined #gluster
16:43 bb0x joined #gluster
16:45 arif-ali joined #gluster
16:55 mcb30 left #gluster
16:56 kotreshhr joined #gluster
16:57 karnan joined #gluster
17:00 kotreshhr left #gluster
17:14 Elmo_ I'm trying to run some local benchmark here with GlusterFS
17:14 Elmo_ just to make sure I understand the flow
17:14 Elmo_ with 2 replicas
17:15 Elmo_ a write will write on both bricks before returning
17:15 Elmo_ from the client to each brick?
17:15 Elmo_ so, that implies a factor of 2 in added latency vs a local write (since the 2 bricks will need to be written)
17:16 Elmo_ unless this is done in parallel? (doesn't look like it)
17:16 Elmo_ FYI, I tried a 1GB file
17:16 Elmo_ on a local folder, got 140MB/s
17:17 Elmo_ on my Gluster cluster (2x2 replicas), I had 55.4MB/s
17:17 Elmo_ reads on the other hand are similar between local and gluster in terms of performance (most likely because the replicas are not read during the process)
17:18 Elmo_ network connection is not an issue here since its 10Gbe
17:23 guhcampos joined #gluster
17:29 bb0x joined #gluster
17:38 JoeJulian Elmo_: Yes, writes are done "synchronously". Reads are done by "first to respond" but that behavior can be adjusted.
17:41 Elmo_ JoeJulian: and writes are done twice? (Like, one for brick A, then one for brick B (the replicant))
17:41 Elmo_ using the network twice
17:41 Elmo_ etc.
17:41 Elmo_ so replica 3 is even worse I assume?
17:41 JoeJulian Replication is managed by the client, so yes.
17:41 JoeJulian bandwidth / replica
17:41 Elmo_ I see
17:41 Elmo_ and is this configurable^
17:41 Elmo_ ?
17:42 Elmo_ I mean, well.. I can't really see how this could be solved easily
17:42 JoeJulian No, write behavior is not configurable.
17:42 Elmo_ Compared to DRBD which does it async
17:42 Elmo_ but since Primary/Slave config.. its a different story
17:42 Elmo_ completely different
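JoeJulian's bandwidth / replica point above can be turned into a rough back-of-the-envelope model: with client-side replication the client sends every write to each replica, so its usable write bandwidth is roughly its network bandwidth divided by the replica count, capped further by the slowest brick. The sketch below is only that rough model, using Elmo_'s own figures; measured numbers (like the 55MB/s above) are usually lower still because of FUSE overhead and per-write round trips.

```python
# Rough ceiling only: client-side (AFR) replication sends each write to every
# replica, so the client's link is divided by the replica count; the slowest
# brick disk caps it further. Real throughput adds FUSE and latency overhead.
def write_ceiling_mb_s(client_nic_mb_s, replica_count, brick_disk_mb_s):
    network_limited = client_nic_mb_s / replica_count
    return min(network_limited, brick_disk_mb_s)

# Elmo_'s setup from the log: 10GbE (~1250 MB/s), replica 2, local disk ~140 MB/s.
print(write_ceiling_mb_s(1250, 2, 140))   # 140 -> disk-bound in this model;
                                          # the measured ~55 MB/s shows per-write
                                          # overhead dominating in practice
```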
17:42 JoeJulian You're causing me anxiety again... ;)
17:42 Elmo_ hahahaha
17:43 Elmo_ Ghosts from the past haunting you forever
17:43 Elmo_ D....R.....BEEEE.DEEEE (ghostly voice)
17:44 JoeJulian @k1ck Elmo_
17:45 Elmo_ :P
17:45 Elmo_ Curious
17:46 Elmo_ are there any places on the web that explain the read/write flows of GlusterFS?
17:46 Elmo_ like.. GlusterClient to GlusterD bricks?
17:46 JoeJulian None that I've seen.
17:47 JoeJulian I've thought about it but finding the time...
17:47 Elmo_ Always about time
17:47 Elmo_ We need to create something to stop time
17:47 Elmo_ or.. split our brains
17:47 Elmo_ well
17:47 Elmo_ no.. no split-brains ;-)
17:48 JoeJulian Split-brains are bad, mmkay.
17:49 amye JoeJulian++
17:49 glusterbot amye: JoeJulian's karma is now 28
17:50 Elmo_ so many times happened with DRBD..
17:50 JoeJulian amye: Tell me you hear that in Mr Mackey's voice.
17:50 amye i so do
18:00 Elmo_ Aside from doing file stripping on a cluster, is there any way to improve GlusterFS performances?
18:01 Elmo_ (I know, I'm always coming back with my performance questions)
18:01 JoeJulian I don't know how stripping improves performance... I tend to find it distracting.
18:03 JoeJulian To improve performance, reduce things that take time. Program against libgfapi, use infiniband+rdma, reduce fault tolerance.
18:03 arif-ali joined #gluster
18:06 JoeJulian You can also head to Iceland to get your certificate in elf spotting (http://www.theelfschool.com/home) and hope that elf magic makes CAP theorem irrelevant.
18:06 glusterbot Title: Home | The Elfschool (at www.theelfschool.com)
18:12 Elmo_ That's some voodoo magic
18:12 Elmo_ the RDMA thing
18:17 bb0x joined #gluster
18:20 Elmo_ I think the main culprit here is the disk IO speed
18:20 Elmo_ unless, you put SSDs in there lol
18:22 shubhendu joined #gluster
18:22 JoeJulian Or raid0
18:22 JoeJulian I raid0 and count on my replication to provide fault tolerance.
18:23 arif-ali joined #gluster
18:23 Elmo_ hmmm good idea
18:23 Elmo_ defeats the purpose of cheap storage, but again, storage is so cheap these days
18:23 JoeJulian We don't guarantee write performance over 1Gb so we have plenty of room on our 10Gb network.
18:24 Elmo_ how many replicas you do normally
18:24 Elmo_ ?
18:24 JoeJulian 3
18:24 Elmo_ That seems like a magic number
18:24 JoeJulian raid0 doesn't reduce your cost per TB.
18:24 Elmo_ it's because 2 is really dangerous I guess?
18:25 Elmo_ no indeed, raid0 doubles your cost
18:25 Elmo_ per KiloBytes :P
18:27 jiffin joined #gluster
18:27 JoeJulian No, raid1 does. raid0 does not.
18:28 JoeJulian 2 isn't "really" dangerous. I build for 6 nines. It's a match equation.
18:28 JoeJulian s/match/math/
18:28 glusterbot What JoeJulian meant to say was: 2 isn't "really" dangerous. I build for 6 nines. It's a math equation.
18:31 Elmo_ 6 nines?
18:32 JoeJulian 99.9999% uptime
18:32 Elmo_ ha!
18:32 JoeJulian @reliability calculation
18:32 glusterbot JoeJulian: I do not know about 'reliability calculation', but I do know about these similar topics: 'reliability calculations'
18:32 Elmo_ 2 replicas gives how many nines?
18:32 JoeJulian @reliability calculations
18:32 glusterbot JoeJulian: Calculate your system reliability and availability using the calculations found at http://www.eventhelix.com/realtimemantra/faulthandling/system_reliability_availability.htm . Establish replica counts to provide the parallel systems to meet your SLA requirements.
18:33 JoeJulian Depends on your hardware MTBF and MTTR.
18:33 Elmo_ You saw me coming lol
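The availability math behind the ,,(reliability calculations) factoid is short enough to sketch: a node's availability follows from its MTBF and MTTR, and replicas act as parallel systems, so the data is unreachable only when every replica holding it is down at once. The MTBF/MTTR figures below are made-up placeholders for illustration, not measurements or recommendations.

```python
# Availability of one node from (made-up, illustrative) MTBF/MTTR figures, then
# availability of N replicas in parallel: an outage needs all replicas down.
def node_availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

def replicated_availability(node_avail, replicas):
    return 1 - (1 - node_avail) ** replicas

a = node_availability(mtbf_hours=10_000, mttr_hours=24)   # ~99.76% per node
for r in (2, 3):
    print(f"replica {r}: {replicated_availability(a, r) * 100:.7f}%")
# with these placeholder inputs: replica 2 lands around five nines,
# replica 3 around seven nines
```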
18:35 arif-ali_ joined #gluster
18:36 tom[] are there recommended mount opts for an xfs brick?
18:37 bb0x joined #gluster
18:39 Elmo_ you make me think tom, is xfs or ext4 better?
18:39 Elmo_ It seems I read somewhere not to use XFS with GlusterFS
18:39 Elmo_ then elsewhere, that it was prefered
18:40 JoeJulian Red Hat recommends xfs. I prefer xfs.
18:40 jiffin joined #gluster
18:40 JoeJulian Theodore Ts'o prefers ext4.
18:40 tom[] naturally
18:41 JoeJulian And he's a really nice guy. I have nothing bad to say about ext4. I think the code looks cleaner in xfs, so I have a greater degree of trust.
18:42 bb0x joined #gluster
18:44 Elmo_ ext4 is more widely used though no?
18:44 Elmo_ I mean, generally speaking
18:44 JoeJulian From what I've seen, xfs has the greatest use with Gluster. ext4, second and zfs a distant 3rd.
18:45 Elmo_ JoeJulian: Cool, I might go XFS then
18:45 Elmo_ Oh
18:45 * JoeJulian <-- not a zfs fan.
18:45 glusterbot JoeJulian: <'s karma is now -20
18:45 Elmo_ is there somewhere that explains the hashing algorithm (high level) also?
18:45 msvbhat_ joined #gluster
18:45 Elmo_ lol
18:45 Elmo_ poor "<"
18:46 Elmo_ I mean, for reads, how does the Gluster Client figures where the file is located?
18:46 JoeJulian https://joejulian.name/blog/dht-misses-are-expensive/
18:46 glusterbot Title: DHT misses are expensive (at joejulian.name)
18:46 Elmo_ in a distributed setup
18:46 JoeJulian There's the dht algorithm.
18:46 Elmo_ Your good JoeJulian :-)
18:47 Elmo_ s/Your/You're
18:47 tom[] are there recommended mount opts for a brick?
18:47 JoeJulian see "gluster volume help" and search for cluster.read-hash-mode for which replica brick is chosen
18:47 JoeJulian not that I've seen.
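For Elmo_'s question about how the client finds a file: each directory's layout carves the 32-bit DHT hash space into one range per distribute subvolume (stored in the trusted.glusterfs.dht xattr on each brick's copy of the directory), and the client hashes the file name to pick the brick directly, with no central index. The sketch below only shows the shape of that lookup — the hash is a stand-in (Gluster actually uses a Davies-Meyer hash) and the layout is hard-coded; misses, e.g. after renames, fall back to the wider lookup JoeJulian's DHT post above describes.

```python
# Simplified illustration, not Gluster's actual code: hash the file name into a
# 32-bit space and pick the brick whose assigned range contains it. zlib.crc32
# stands in for Gluster's Davies-Meyer hash; the layout is hard-coded here but
# really lives in each directory's trusted.glusterfs.dht xattr per brick.
import zlib

HASH_SPACE = 2 ** 32

layout = [                                   # hypothetical 3-way distribute layout
    ("brick-a", 0, HASH_SPACE // 3 - 1),
    ("brick-b", HASH_SPACE // 3, 2 * HASH_SPACE // 3 - 1),
    ("brick-c", 2 * HASH_SPACE // 3, HASH_SPACE - 1),
]

def brick_for(filename):
    h = zlib.crc32(filename.encode()) % HASH_SPACE   # stand-in hash
    for brick, start, end in layout:
        if start <= h <= end:
            return brick
    raise RuntimeError("layout does not cover the hash space")

for name in ("backup-2016-06-15.tar", "notes.txt", "vm-disk-01.img"):
    print(name, "->", brick_for(name))
```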
18:53 chirino joined #gluster
18:56 Elmo_ Is there an advantage of having a distributed system vs a big brick?
18:56 Elmo_ to reduce costs?
18:56 Elmo_ oh nevermind
18:56 Elmo_ kind of answered my own question in my head
18:57 Elmo_ because of rebalancing, it's also more reliable I guess
18:57 JoeJulian Also in the event of self-heals, smaller bricks are healthy faster.
19:02 bb0x joined #gluster
19:10 jiffin joined #gluster
19:21 bb0x joined #gluster
19:25 rjoseph joined #gluster
19:26 lalatenduM joined #gluster
19:28 shruti joined #gluster
19:29 sac joined #gluster
19:30 deniszh joined #gluster
19:30 bb0x joined #gluster
19:36 gowtham joined #gluster
19:43 bb0x joined #gluster
19:44 Ulrar Oh, 3.8 was released? So I don't need to wait for 3.7.12, I can go to 3.8 directly for VM storage?
19:47 post-factum Ulrar: good luck, dude
19:50 JoeJulian Ulrar: Be sure and offer feedback and file a bug report if you find any bugs.
19:50 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
19:51 Ulrar Sure
19:52 Ulrar JoeJulian: Could you just confirm that the bugs between AFR and shards are fixed in that release?
19:52 Ulrar No idea what I'm talking about, just know that's what is killing me on our production servers :)
19:53 JoeJulian I don't even know what bugs those are.
19:54 JoeJulian If you have the bug ids, you can just say them in channel, ie. bug 772808 , and glusterbot will tell you the status.
19:54 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=772808 unspecified, unspecified, ---, fharshav, CLOSED UPSTREAM, get_mem function support for platforms other than linux
19:56 Ulrar I don't know the id, I was just told that's what it was. Basically any VM running off a sharded volume on 3.7.11 got I/O errors, Krutika told me that was fixed in 3.7.12. I'm guessing there is no reason that fix hasn't been ported to 3.8?
19:58 rwheeler joined #gluster
19:58 Ulrar Well, guess I'll try it anyway, will quickly see if I get I/O errors or not :)
19:59 bb0x joined #gluster
20:00 JoeJulian Ulrar: My advice when getting solutions from developers to a problem you're having is to always ask them for the bug id, and add yourself to the CC list so you'll know what the status is.
20:01 JoeJulian I realize that won't help in the past, but it's free advice. If you don't like my advice I'll happily issue you a refund. :D
20:02 arcolife joined #gluster
20:03 Ulrar Ha ha, thanks
20:03 Ulrar I'll keep that in mind
20:03 chirino joined #gluster
20:04 Ulrar I was sent a patched version of 3.7.11 that seemed to work well, so I'm pretty confident about 3.7.12. Just didn't know 3.8 was to be released this early
20:07 sac` joined #gluster
20:07 lalatend1M joined #gluster
20:07 shruti` joined #gluster
20:07 rjoseph_ joined #gluster
20:07 archit_ joined #gluster
20:12 JoeJulian It's actually a week late, but that's still pretty good.
20:12 JoeJulian 3.8 will begin a new release cycle, iiuc.
20:17 ben453 I'm having an issue trying to recover from a brick failure due to "gluster volume heal <vol-name> full" not working. For my initial setup, I have a replication cluster with 3 bricks and all of the bricks have the same data when I start the volume. After I have mounted the volume I run "find . | xargs stat" to make sure gluster builds up all of the metadata correctly.
20:17 JoeJulian https://www.gluster.org/pipermail/gluster-users/2016-May/026697.html
20:17 glusterbot Title: [Gluster-users] [Gluster-devel] Idea: Alternate Release process (at www.gluster.org)
20:18 ben453 But it seems like sometimes trying to explicitly run a "full heal" even after the cluster is just created does not work, and fails with the message: Launching heal operation to perform full self heal on volume <volname> has been unsuccessful on bricks that are down
20:18 JoeJulian It probably won't. If all three have data, there's a likely possibility of conflicting metadata.
20:19 bb0x joined #gluster
20:19 JoeJulian That message suggests that bricks are down - or at least not connected to one or more glustershd clients.
20:19 JoeJulian Check the glustershd logs and the glusterd logs on all your servers.
20:19 ben453 Running gluster volume status shows that all of the brick processes are running
20:20 ben453 So you're saying that the tactic of building up metadata with "find . | xargs stat" might yield conflicting metadata =/
20:20 ben453 ?
20:21 JoeJulian Possibly. The client you run that command on won't necessarily be the only client going through your bricks. The self-heal daemons may also be doing that.
20:22 JoeJulian So if, for instance, two different clients touch two different replicas of a file first, they may assign different gfids.
20:22 ben453 Right that makes sense.
20:22 JoeJulian Or they may assign conflicting dht hash ranges to a directory.
20:22 JoeJulian If you're going to use pre-loaded data, I recommend only loading the 1st brick and letting self-heal handle the rest.
20:23 ben453 Yeah that seems like it would work.
20:23 ben453 I think I might try disabling the self heal daemons before mounting the volume and running the find . command, and then re-enable them after the command returns
20:24 JoeJulian That might be sufficient. Certainly worth a try.
20:24 JoeJulian Report back your discoveries.
20:25 ben453 Will do!
20:27 bwerthmann joined #gluster
20:29 JesperA- joined #gluster
20:34 chirino joined #gluster
20:37 d0nn1e joined #gluster
21:10 deniszh joined #gluster
21:37 wnlx joined #gluster
21:59 bb0x joined #gluster
22:04 bb0x joined #gluster
23:04 [1]akay joined #gluster
23:05 twisted`_ joined #gluster
23:06 PM_Me_Your_Cervi joined #gluster
23:07 abyss^_ joined #gluster
23:07 samppah_ joined #gluster
23:07 kevc_ joined #gluster
23:08 partner_ joined #gluster
23:08 bio_ joined #gluster
23:08 wiza joined #gluster
23:08 delhage_ joined #gluster
23:08 juhaj_ joined #gluster
23:08 sysanthrope_ joined #gluster
23:08 tom][ joined #gluster
23:11 sloop- joined #gluster
23:11 yoavz- joined #gluster
23:11 necrogami_ joined #gluster
23:11 JPau1 joined #gluster
23:11 Kins_ joined #gluster
23:14 sadbox joined #gluster
23:18 mdavidson joined #gluster
23:19 stopbyte joined #gluster
23:19 monotek joined #gluster
23:42 shyam joined #gluster
23:58 rafi joined #gluster
