
IRC log for #gluster, 2016-08-16


All times shown according to UTC.

Time Nick Message
00:59 Alghost joined #gluster
01:10 julim joined #gluster
01:24 Lee1092 joined #gluster
01:30 DV joined #gluster
01:39 Alghost joined #gluster
01:40 Alghost_ joined #gluster
01:51 Javezim Is there an official support address we can email to advise developers of some issues we've been seeing on Glusterfs 3.8?
02:00 wadeholler joined #gluster
02:03 harish_ joined #gluster
02:17 hagarth joined #gluster
02:21 nishanth joined #gluster
02:41 Peppard joined #gluster
02:46 shdeng joined #gluster
02:50 rafi joined #gluster
02:57 victori_ joined #gluster
03:00 kshlm joined #gluster
03:04 rafi1 joined #gluster
03:10 victori_ joined #gluster
03:10 ZachLanich joined #gluster
03:11 ZachLanich Hey guys, I'm trying to wrap my head around replication and Gluster's Arbiter option, and I need a tiny bit of help to figure out how many nodes I need, etc. Anyone around?
03:16 magrawal joined #gluster
03:33 auzty joined #gluster
03:39 rafi1 joined #gluster
03:40 skoduri joined #gluster
03:43 jnix left #gluster
04:01 itisravi joined #gluster
04:02 ZachLanich joined #gluster
04:04 hgowtham joined #gluster
04:06 shubhendu__ joined #gluster
04:07 atinm joined #gluster
04:16 ZachLanich Anyone around?
04:23 nbalacha joined #gluster
04:23 nbalacha joined #gluster
04:25 victori joined #gluster
04:27 victori joined #gluster
04:30 shubhendu__ joined #gluster
04:31 kramdoss_ joined #gluster
04:34 victori joined #gluster
04:35 victori joined #gluster
04:36 Vaelatern It's 2300 in the USA. If you hang around, someone who knows something may stop by, ZachLanich, but you may best be off asking your question then lurking.
04:37 ZachLanich What's lurking? lol
04:37 ZachLanich Vaelatern ^
04:38 Vaelatern Waiting around and reading things. Idling is the word I meant, which is leaving your connection to IRC open and just coming back every couple hours to check on things
04:39 aspandey joined #gluster
04:39 Vaelatern IRC works slowly. We all have real lives.
04:43 shubhendu joined #gluster
04:44 ZachLanich Vaelatern Yea, it's true. I'm on a laptop though, so it sucks cuz I have to shut it and carry it around haha.
04:46 Vaelatern That's why I IRC from a random server :)
04:47 unforgiven512 joined #gluster
04:48 unforgiven512 joined #gluster
04:48 unforgiven512 joined #gluster
04:49 eightyeight joined #gluster
04:49 ZachLanich Vaelatern Please explain? I'm just using an IRC client on Mac atm.
04:49 unforgiven512 joined #gluster
04:49 eightyeight would it be fair to call the "distributed", "replicated", "striped", and "dispersed" volume types as "basic volume types"?
04:49 unforgiven512 joined #gluster
04:50 eightyeight and "distributed replicated", "distributed striped", "distributed striped replicated", "distributed dispersed", and "striped replicated" as "hybrid volume types"?
04:50 unforgiven512 joined #gluster
04:51 unforgiven512 joined #gluster
04:51 Vaelatern ZachLanich: I remotely log in to a server and use an IRC client from the command line on that server.
04:51 RameshN joined #gluster
04:53 unforgiven512 joined #gluster
04:53 karthik_ joined #gluster
04:54 unforgiven512 joined #gluster
04:55 derjohn_mob joined #gluster
04:57 victori joined #gluster
04:58 victori joined #gluster
05:00 kdhananjay joined #gluster
05:01 vigumnov joined #gluster
05:02 tom[] joined #gluster
05:05 ZachLanich Vaelatern Ah, that makes sense. I should start doing that lol
05:06 shdeng joined #gluster
05:09 DV joined #gluster
05:10 poornimag joined #gluster
05:12 msvbhat joined #gluster
05:15 ankitraj joined #gluster
05:17 ndarshan joined #gluster
05:22 ramky joined #gluster
05:26 AdStar joined #gluster
05:27 AdStar hi guys, my /var/log/glusterfs/bricks/ log file is filling up with this... 2016-08-16 05:26:18.843398] W [dict.c:1282:dict_foreach_match] (-->/lib64/libglusterfs.so.0(dict_foreach_match+0x5c) [0x7f7b7caa9c1c] -->/usr/lib64/glusterfs/3.7.11/xlator/features/index.so(+0x3980) [0x7f7b6cc64980] -->/lib64/libglusterfs.so.0(dict_foreach_match+0xe3) [0x7f7b7caa9ca3] ) 0-dict: dict|match|action is NULL
05:27 AdStar [Invalid argument]
05:27 glusterbot AdStar: ('s karma is now -149
05:32 Philambdo joined #gluster
05:35 shruti` joined #gluster
05:37 satya4ever joined #gluster
05:44 sac joined #gluster
05:54 mhulsman joined #gluster
05:56 ashiq joined #gluster
06:00 Muthu joined #gluster
06:02 h4xrrr joined #gluster
06:02 Muthu_ joined #gluster
06:04 portante joined #gluster
06:10 [diablo] joined #gluster
06:19 siavash joined #gluster
06:31 hgowtham joined #gluster
06:35 jtux joined #gluster
06:35 Philambdo joined #gluster
06:37 sanoj joined #gluster
06:38 karthik_ joined #gluster
06:47 nishanth joined #gluster
06:49 arcolife joined #gluster
06:58 ju5t joined #gluster
07:01 ashiq joined #gluster
07:01 atalur joined #gluster
07:02 harish_ joined #gluster
07:05 jwaibel joined #gluster
07:08 noobs joined #gluster
07:08 jkroon joined #gluster
07:09 ppai joined #gluster
07:19 devyani7 joined #gluster
07:20 devyani7 joined #gluster
07:21 jri joined #gluster
07:24 kotreshhr joined #gluster
07:24 Slashman joined #gluster
07:38 hackman joined #gluster
07:42 Javezim Anyone know what this Error is? http://paste.ubuntu.com/23060776/
07:42 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
07:43 Javezim Seeing it on 3 different sites now, all running different versions
07:43 Javezim One - glusterfs 3.7.11 built on Apr 18 2016 12:34:01
07:44 Javezim Two - glusterfs 3.8.2 built on Aug 10 2016 16:09:15
07:44 Javezim Three - glusterfs 3.7.8 built on Feb 12 2016 13:08:20
07:52 LinkRage joined #gluster
07:52 fsimonce joined #gluster
07:54 LinkRage I'm trying to mount -t glusterfs . This error I get only on a host on a different network than the host where everything works with the same setup - https://gist.github.com/anonymous/440f4f8afcbe723f046afaae3f8c2f83
07:54 glusterbot Title: storage1.log · GitHub (at gist.github.com)
07:55 LinkRage Any ideas? all the host are reachable to each other and no firewalls in between etc.
07:57 derjohn_mob joined #gluster
08:02 Mmike joined #gluster
08:06 jkroon joined #gluster
08:12 mhulsman joined #gluster
08:14 ahino joined #gluster
08:14 robb_nl joined #gluster
08:16 Javezim Do you have performance.readdir-ahead set on the volume you are trying to mount?
08:18 MessedUpHare joined #gluster
08:18 Javezim @LinkRage
08:18 post-factum LinkRage: Cache size 1073741824 is greater than the max size of 513048576
08:18 post-factum LinkRage: fix that
08:21 LinkRage post-factum, thanks
08:22 Javezim @LinkRage Are you using Samba 4.3.9 with your Gluster 3.8.2? Do you use VFS Mount or Samba for Windows accessing machines? Have you noticed any performance issues since upgrading?
08:24 karnan joined #gluster
08:24 LinkRage Javezim, I think I do not have the performance.readdir-ahead in /etc/glusterfs/ . It's only on Linux. No NFS, no SMB
08:25 Javezim Fair enough, I've seen that error when I've set performance.readdir-ahead to a higher amount than what's actually available to the server/client
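
For context, the "Cache size 1073741824 is greater than the max size of 513048576" warning in LinkRage's log usually means performance.cache-size was set higher than the memory available on that client. A hedged one-liner to dial it back (volume name and size are placeholders):

    gluster volume set myvol performance.cache-size 256MB
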
08:26 msvbhat joined #gluster
08:29 rastar joined #gluster
08:38 [diablo] joined #gluster
08:39 hgowtham joined #gluster
08:42 hackman joined #gluster
08:49 MessedUpHare_ joined #gluster
08:50 rouven joined #gluster
08:51 aravindavk joined #gluster
08:54 unforgiven512 joined #gluster
09:00 Klas Would nfs-ganesha provide HA when mounting glusterfs in windows? Trying to wrap my head around if NFS-Ganesha or Samba+CTDB would be the "best" way. Not even sure if it will be used here, mostly curious.
09:05 Sebbo2 joined #gluster
09:07 kovshenin joined #gluster
09:07 jkroon Klas, NFS in my experience provides better performance than samba, but I haven't done those benchmarks on top of gluster.  That said, the CTDB thing looks very interesting and since it's got native support in Windows I'd personally go that way.
09:07 robb_nl joined #gluster
09:16 aravindavk joined #gluster
09:40 msvbhat joined #gluster
09:41 Klas nfs is native in windows as well, just not activated by default
09:41 Klas but thanks =)
09:42 wadeholler joined #gluster
09:45 ppai joined #gluster
09:46 harish_ joined #gluster
09:50 Wizek_ joined #gluster
09:52 Muthu joined #gluster
09:56 RameshN joined #gluster
09:56 kdhananjay joined #gluster
09:56 hgowtham joined #gluster
09:57 johnmilton joined #gluster
10:00 itisravi joined #gluster
10:24 arcolife joined #gluster
10:25 itisravi_ joined #gluster
10:27 mhulsman joined #gluster
10:28 Wizek_ joined #gluster
10:28 arcolife joined #gluster
10:30 kdhananjay joined #gluster
10:31 karnan joined #gluster
10:40 atalur joined #gluster
10:55 BitByteNybble110 joined #gluster
10:58 Muthu joined #gluster
10:59 ira joined #gluster
11:31 atalur joined #gluster
11:40 hackman joined #gluster
11:40 shellclear left #gluster
11:43 mhulsman joined #gluster
11:44 Saravanakmr joined #gluster
11:46 ankitraj #info Bug-triage meeting will be held in 12 minutes on #gluster-meeting
11:54 kshlm joined #gluster
11:54 aspandey joined #gluster
12:01 kkeithley REMINDER: Gluster Community Bug Triage Meeting starting now in #gluster-meeting
12:02 Muthu joined #gluster
12:02 Saravanakmr joined #gluster
12:09 johnmilton joined #gluster
12:16 atalur joined #gluster
12:16 johnmilton joined #gluster
12:19 Gnomethrower joined #gluster
12:21 shubhendu joined #gluster
12:24 unclemarc joined #gluster
12:33 hchiramm joined #gluster
12:34 nishanth joined #gluster
12:36 kdhananjay joined #gluster
12:50 dlambrig joined #gluster
12:59 julim joined #gluster
13:02 ppai joined #gluster
13:06 plarsen joined #gluster
13:16 atinm joined #gluster
13:18 DV_ joined #gluster
13:19 ahino joined #gluster
13:28 dlambrig left #gluster
13:30 squizzi joined #gluster
13:33 Philambdo joined #gluster
13:36 arcolife joined #gluster
13:38 glustin joined #gluster
13:43 ppai joined #gluster
13:44 dnunez joined #gluster
13:47 hackman joined #gluster
13:52 rwheeler joined #gluster
13:56 kramdoss_ joined #gluster
13:58 hagarth joined #gluster
14:01 dupondje joined #gluster
14:01 dupondje Hi. I got some small question about GlusterFS monitoring.
14:01 dupondje I see there are some glusterfs-nagios packages, but they seem to have a dependency on nagios/nrpe .. This while we are using icinga2 ourselves
14:03 ndevos dupondje: I think RameshN knows more about it, but I would not exepect him to be responsive anymore today, send an email to gluster-users@gluster.org maybe?
14:04 ahino joined #gluster
14:05 RameshN dupondje,  glusterfs-nagios packages is completely built on nagios, nrpe and nsca
14:09 Wizek__ joined #gluster
14:10 dupondje RameshN: Well I see indeed, but they could be perfectly usable for Icinga2 also for example?
14:10 dupondje or are there other/better alternatives then?
14:10 dupondje imo icinga2/nagios just needs a script that takes arguments and returns OK/CRITICAL/WARNING/UNKNOWN :)
14:12 Sebbo2 @glusterfs team: There is a missing dependency for glusterfs-client on Ubuntu 16.04 LTS: attr. Without this package, you'll receive this message when mounting: "WARNING: getfattr not found, certain checks will be skipped.."
14:13 Wizek_ joined #gluster
14:13 RameshN dupondje, yes. YOu are right. But we have to make sure all nrpe commands and arguments are working in the same way with icinga2
14:15 RameshN dupondje, I have never used icinga2. But theoretically it should work. But we may have to tweak the configs and commands
14:17 dupondje Well just need a clean & simple GlusterFS check :) but seems there aren't a lot ... :(
14:18 atinm joined #gluster
14:19 ppai joined #gluster
14:19 micke Even the nagios-plugins package in debian/ubuntu doesn't depend on nagios, it suggests nagios or icinga though
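
A minimal sketch of the kind of standalone check dupondje is after — a script that prints one line and exits 0/1/2/3 for OK/WARNING/CRITICAL/UNKNOWN, with no nagios/nrpe dependency. Hypothetical, not part of glusterfs-nagios; it assumes the gluster CLI is runnable by the monitoring user:

    #!/bin/bash
    # check_gluster_volume.sh <volume>  -- hypothetical plugin-style check
    VOL="$1"
    if ! command -v gluster >/dev/null 2>&1; then
        echo "UNKNOWN - gluster CLI not found"; exit 3
    fi
    # "gluster volume info" prints a "Status: Started/Stopped" line per volume
    STATUS=$(gluster volume info "$VOL" 2>/dev/null | awk -F': ' '/^Status/ {print $2}')
    case "$STATUS" in
        Started) echo "OK - volume $VOL is started"; exit 0 ;;
        Stopped) echo "CRITICAL - volume $VOL is stopped"; exit 2 ;;
        "")      echo "UNKNOWN - volume $VOL not found"; exit 3 ;;
        *)       echo "WARNING - volume $VOL status: $STATUS"; exit 1 ;;
    esac
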
14:24 Wizek joined #gluster
14:28 hagarth ndevos: we are going ahead with the call
14:37 kkeithley Sebbo2: which version are you seeing that on?
14:38 lpabon joined #gluster
14:39 Sebbo2 kkeithley: glusterfs-client and glusterfs-common 3.7.6-1ubuntu1
14:42 kkeithley attr was added in the 3.7.10 .debs.  specifically to glusterfs-common.
14:42 kkeithley you really should update
14:43 kkeithley @ppa
14:43 glusterbot kkeithley: The official glusterfs packages for Ubuntu are available here: 3.6: http://goo.gl/XyYImN, 3.7: https://goo.gl/aAJEN5, 3.8: https://goo.gl/gURh2q
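
If updating right away is not an option, installing the attr package by hand should also silence the getfattr warning on 16.04 (offered as a stopgap; the real fix is the newer packages above):

    sudo apt-get install attr
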
14:45 misc so I am looking on https://github.com/gluster/glusterdocs/pull/146 do we still offer debian package on download.gluster.org ?
14:45 glusterbot Title: Fixed issue with detection of Debian version by Sebi94nbg · Pull Request #146 · gluster/glusterdocs · GitHub (at github.com)
14:48 kkeithley http://download.gluster.org/pub/gluster/glusterfs/*/*/Debian/....
14:48 ndevos hagarth: ah, shall I still join? I was watching the other irc for a note
14:48 kkeithley Yup, looks like we do
14:50 eightyeight would it be fair (or even accurate) to call the "distributed", "replicated", "striped", and "dispersed" volume types as "basic volume types"?
14:50 eightyeight and "distributed replicated", "distributed striped", "distributed striped replicated", "distributed dispersed", and "striped replicated" as "hybrid volume types"?
14:51 ndevos hagarth: hmm, maybe you moved it? it seems to have dropped from my calendar
14:51 cloph remove striped from that list
14:51 eightyeight cloph: why?
14:51 rafi joined #gluster
14:51 cloph glusterbot: whatis striped
14:51 glusterbot cloph: I do not know about 'striped', but I do know about these similar topics: 'stripe'
14:51 cloph glusterbot: whatis stripe
14:51 glusterbot cloph: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
14:52 cloph glusterbot: whatis shard
14:52 glusterbot cloph: I do not know about 'shard', but I do know about these similar topics: 'sharding'
14:52 cloph glusterbot: whatis sharding
14:52 glusterbot cloph: for more details about sharding, see http://blog.gluster.org/2015/12/introducing-shard-translator
14:52 kkeithley @shard
14:52 glusterbot kkeithley: I do not know about 'shard', but I do know about these similar topics: 'sharding'
14:52 kkeithley @sharding
14:52 glusterbot kkeithley: for more details about sharding, see http://blog.gluster.org/2015/12/introducing-shard-translator
14:52 Sebbo2 kkeithley: Ah, ok. Great. That's fine, if it's already included.
14:52 eightyeight bot spam
14:52 * eightyeight blocks glusterbot
14:53 eightyeight i'm not claiming striped is best for anything, perfromance or otherwise. i'm only asking if it, along with the other 3, could be called a "basic volume type" and the others as "hybrid"
14:53 eightyeight or if there is better terminology
14:54 ndevos eightyeight: yes, see sharding and the other links glusterbot posted
14:54 cloph you're missing also replicated with arbiter in your "hybrid" list then
14:56 kkeithley strip is deprecated. If you're writing about basic volume types, don't even bother writing about stripe.  Use shard instead
14:56 kkeithley s/strip is/stripe is/
14:56 glusterbot What kkeithley meant to say was: stripe is deprecated. If you're writing about basic volume types, don't even bother writing about stripe.  Use shard instead
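
For anyone following along: sharding is enabled as a volume option rather than created as a separate volume type. A hedged example (volume name and block size are placeholders; check `gluster volume set help` on your release):

    gluster volume set myvol features.shard on
    gluster volume set myvol features.shard-block-size 64MB
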
15:02 dupondje somebody should just adjust the glusterfs-nagios package so it only installs the python files and libs, and nothing of nagios configs etc :)
15:02 dupondje would make it much more useful
15:04 ZachLanich joined #gluster
15:05 nbalacha joined #gluster
15:07 harish_ joined #gluster
15:08 poornimag joined #gluster
15:09 dswebb joined #gluster
15:10 wushudoin joined #gluster
15:10 dswebb hi all, quick question.  If you have a 4 brick distribute volume, and 1 brick is offline what happens if a client tries to create a new file that would normally get hashed to that brick?
15:11 johnmilton joined #gluster
15:15 hagarth joined #gluster
15:25 hackman joined #gluster
15:29 johnmilton joined #gluster
15:31 nbalacha joined #gluster
15:31 bkolden joined #gluster
15:48 Gambit15 Hey, what's the deal with dispersed volumes? Are they just a form of distributed+replicated where you're able to have a level of replication (or parity in this case...) that's less than the power of 2?
15:50 JoeJulian dswebb: The client gets an error.
15:50 dswebb crap
15:51 squizzi joined #gluster
15:54 Gambit15 Q2: Not worked with RDMA before. Is there any benefit of using RDMAoE in a 10GbE environment?
15:56 nishanth joined #gluster
15:56 JoeJulian Gambit15: Have you read https://github.com/gluster/glusterfs-specs/blob/master/done/GlusterFS%203.6/disperse.md
15:56 glusterbot Title: glusterfs-specs/disperse.md at master · gluster/glusterfs-specs · GitHub (at github.com)
15:56 JoeJulian ?
15:57 JoeJulian Gambit15: RDMAoE would be beneficial if it's handled by the NIC, bypassing a context switch.
15:59 ndevos JoeJulian: an error, really? I thought the file would be created on a different brick... (didnt try it though)
15:59 P0w3r3d joined #gluster
16:02 Gambit15 JoeJulian, need to check, is it common in modern 10GbE NICs & switches?
16:04 derjohn_mob joined #gluster
16:05 Gambit15 JoeJulian, WRT dispersed, I noted the warning for read-modify-write transactions in the docs. Is it likely to have a higher performance penalty than d+r? (I'm imagining RAID1 v RAID5)
16:05 ankitraj joined #gluster
16:07 victori Silly question, say you have 6 gluster nodes, replication factor of 2, all are down but one - are you still available? or just partially available on whatever that last partition ?
16:07 JoeJulian ndevos: yes. if file, foo, exists on a dht brick and that brick is down, you don't want to create foo on a different brick.
16:07 victori well reads, I suppose the data would not be there, but in terms of writes?
16:08 victori Is there something like hinted hand off for writes?
16:08 JoeJulian Gambit15: I've never seen RDMAoE in any NICs, but I haven't researched it.
16:09 Gambit15 So in this this hypothetical case, instead of having pairs of Distributed+Replicated*2 nodes, I'd use 3*Disperse nodes with a redundancy of 1
16:09 Gambit15 Cool, I'll leave the RDMA stuff to the side for now then
16:09 dswebb JoeJulian, thanks, I was hoping the file would just be created in a new brick.  kind of scuppers my plan
16:10 JoeJulian Gambit15: I would expect some, yes, but I think it would all be client-side, cpu and context switches.
16:11 Gambit15 JoeJulian, sorry, didn't grok that last comment?
16:11 JoeJulian victori: partially. Just what's on that one brick. Without quorum being configured you should be able to write to files that exist on that brick, and create files whose filenames match the dht allocation of that brick.
16:12 JoeJulian Gambit15: Referring to the performance penalty
16:13 victori JoeJulian thanks - and no hinted hand off? as in the brick will accept writes for other bricks and replay them once the other bricks become available?
16:14 Gambit15 Q3: Arbiters, as they're just a sort of journal, I presume additional arbiters don't have much impact on performace?
16:14 ndevos JoeJulian: hmm, but if that is not done, LOOKUP should only be needed on the hashed-subvol, by default it is done everywhere :-/
16:15 JoeJulian lookup is only done on the hashed subvol unless the file doesn't exist, then it's done everywhere.
16:15 victori well I guess I am falling into AP territory here (cap theorem)
16:15 JoeJulian ndevos: have you even read my blog? ;)
16:16 ndevos JoeJulian: hehe, at one point for sure
16:16 JoeJulian https://joejulian.name/blog​/dht-misses-are-expensive/
16:16 glusterbot Title: DHT misses are expensive (at joejulian.name)
16:17 JoeJulian That talks about how the dht algorithm works, specifically for gluster. (for anyone who's interested)
16:17 ndevos JoeJulian: so a create of a new file may then not need the lookup-everywhere, only after rebalance
16:18 JoeJulian You can even disable lookup-unhashed.
16:18 ndevos yes, I'm aware of that, but its enabled by default
16:19 JoeJulian Right, but that happens only if the lookup-hashed fails.
16:19 ndevos which happens for each new file that gets created
16:19 JoeJulian But if lookup-hashed fails because the brick is missing, the client cannot create a file.
16:19 JoeJulian It's been ages, but let me see if I can find that again...
16:20 ndevos I'm wondering why there is a need to do a lookup on the unhashed-subvols then, not doing so would improve the smallfile workloads a lot
16:21 JoeJulian rename
16:21 msvbhat joined #gluster
16:21 ndevos ... and raghug, nithya or shyam are not here either :-/
16:21 JoeJulian rename, rebalance fix-layout, hardlinks...
16:22 ndevos well, even on a rename the linkfile should get created, and if the hashed-subvol for the new-name isnt there, rename should fail imho
16:22 eightyeight joined #gluster
16:22 Gambit15 joined #gluster
16:22 JoeJulian It should, yes. I think originally the linkfile was a lazy process.
16:22 JoeJulian Was only created on lookup-unhashed
16:22 JoeJulian But fix-layout still doesn't create linkfiles.
16:23 ndevos right, fix-layout is one case, but that could be an xattr on the directory "needs lookup everywhere"
16:24 JoeJulian Perhaps when we get journalled replication that could ease all of that.
16:24 JoeJulian nsr, or whatever it's called now.
16:24 ndevos improving creation of files would be a huge benefit, and this sounds like low hanging fruit, relatively
16:24 ndevos jbr, yes
16:25 JoeJulian But yeah, if unhashed could be intelligently avoided, that would help small files and lookups of non-existant files.
16:25 ndevos but, jbr is an alternative for afr and disperse, not dht
16:26 ndevos hagarth, kkeithley: are you making a note of that ^^ and get shyam or raghug to respond to it?
16:26 * ndevos needs to leave for the day, bye!
16:27 JoeJulian Goodnight
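
The tunables touched on above, as they exist in the 3.7.x line (a sketch only; the volume name is a placeholder, and the caveats JoeJulian lists — renames, rebalance fix-layout, hardlinks — should be weighed before changing them on production data):

    gluster volume set myvol cluster.lookup-optimize on     # skip lookup-everywhere for directories with a complete layout
    gluster volume set myvol cluster.lookup-unhashed off    # skip the lookup-everywhere fallback entirely (more aggressive)
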
16:33 noobs joined #gluster
16:44 Gambit15 Reading the doc you linked to. When creating a dispersed volume, you can configure both dispersal & redundancy. As I understand it, redundancy # is akin to parity #. But why configure dispersal? I thought the dispersal # would be equal to the number of bricks in the volume?
16:46 JoeJulian I'll answer shortly. I have a work incident I need to manage first.
16:46 Gambit15 (Apologies for the beginner questions - I'm just trying to make sure I fully understand everything)
16:47 Gambit15 No worries, and much appreciated
16:56 Gambit15 JoeJulian, going AFK, will check replies when I get back.
17:00 Sebbo1 joined #gluster
17:09 ira joined #gluster
17:09 hchiramm joined #gluster
17:12 derjohn_mob joined #gluster
17:15 dnunez joined #gluster
17:27 bwerthmann joined #gluster
17:28 ZachLanich joined #gluster
17:37 kovsheni_ joined #gluster
17:50 kkeithley I'm not sure what ndevos wanted me to make note of
17:51 JoeJulian His ideas about how to avoid lookup-unhashed intelligently.
17:52 kkeithley so I can tell raghug and shyam? Why doesn't he tell them directly? They all work for the same company.
17:53 kkeithley strange
17:53 JoeJulian Maybe he's quitting in the morning.
17:56 hagarth kkeithley: hmm, ndevos feel free to route this feedback on gluster-devel ;)
17:56 hagarth kkeithley: the frequency of one of us meeting either of them in person is slightly better than ndevos'
17:57 JoeJulian I suspect it was more the "ooh, a good idea just got brainstormed here and I've got to run. I hope this doesn't get forgotten."
17:57 kkeithley yeah, I was just being circuitous
17:58 robb_nl joined #gluster
18:01 kpease joined #gluster
18:06 sandersr joined #gluster
18:11 kkeithley @later tell raghug see ndevos' comments about intelligently avoiding lookup-unhashed
18:11 glusterbot kkeithley: The operation succeeded.
18:11 kkeithley @later tell shyam see ndevos' comments about intelligently avoiding lookup-unhashed
18:11 glusterbot kkeithley: The operation succeeded.
18:18 hagarth joined #gluster
18:24 rwheeler joined #gluster
18:35 Gambit15 Hey all, will re-post my earlier question, if anyone's able to clarify?
18:35 Gambit15 When creating a dispersed volume, you can configure both dispersal & redundancy. As I understand it, redundancy # is akin to parity #. But why configure dispersal? I thought the dispersal # would be equal to the number of bricks in the volume?
18:36 rouven joined #gluster
18:41 rastar joined #gluster
18:42 ahino joined #gluster
18:52 ghollies joined #gluster
19:09 JoeJulian Ok, I can get that now, I think. So if you disperse 3 redundancy 1, you'll need 4 bricks and you can lose any 1 of those 4 bricks and have access to your data.
19:10 JoeJulian If you had 100 disks, you probably don't want disperse 99 redundancy 1 because the odds of losing more than one drive simultaneously is a factor of 10 greater.
19:10 JoeJulian (nearly)
19:11 Gambit15 Of course. I was just curious about the need to define the number of disperse nodes, when I expected that would be the total minus redundancy level
19:12 JoeJulian https://www.youtube.com/watch?v=GD6qtc2_AQA
19:12 Gambit15 Ah, it's either or...you define either dispersal or redundancy, but you don't need to define *both*...?
19:13 JoeJulian Wouldn't you want both?
19:13 JoeJulian Or at least couldn't?
19:14 JoeJulian Perhaps you want 3x1 or 5x2...
19:15 JoeJulian Or you can really live dangerously and do 14x1
19:15 Gambit15 Well you're already defining how many bricks there are by addressing all of them when creating the volume...
19:15 JoeJulian (and I won't name names)
19:15 JoeJulian No
19:17 JoeJulian So I've got 8 knox trays with 30 drives each on 8 servers for 240 bricks (for example). I add them all to the volume but I want a disperse 5 redundancy 2 so out of those 240 disks, I could theoretically lose 236 of them and still have access to my data (for any specific file).
19:17 madm1ke joined #gluster
19:17 JoeJulian Whereas if I only have redundancy 2, I can only lose 2 disks before I lose access.
19:19 ZachLanich joined #gluster
19:19 Jacob8432 joined #gluster
19:23 Gambit15 ...assuming all of the data fits on the 4 remainders...but I'm still not understanding that :/
19:24 Gambit15 Probably because my mind is trying to draw parallels with RAID, for which there is none...
19:24 JoeJulian Yeah, it's a hard mindset to break.
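
A concrete create command may make the split easier to picture; a hedged sketch with made-up host and brick names:

    gluster volume create dispvol disperse 6 redundancy 2 \
        server{1..6}:/bricks/brick1

Here 'disperse 6' is the total number of bricks in the set and 'redundancy 2' is how many of them may be lost, so each file is encoded across all 6 bricks with 4 bricks' worth of usable capacity.
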
19:59 johnmilton joined #gluster
20:00 Gambit15 Sorry, connection flapped, not sure if you added anything after the mindset comment
20:01 Gambit15 Can you explain or point me to something that explains the concept more clearly?
20:01 Gambit15 JoeJulian
20:02 Gambit15 The description in "/Administrator Guide/Setting Up Volumes/" hasn't helped much... :/
20:04 derjohn_mob joined #gluster
20:05 johnmilton joined #gluster
20:14 Jacob843 joined #gluster
20:33 DV_ joined #gluster
20:50 blah joined #gluster
20:52 BoBurnham joined #gluster
20:55 BoBurnham I have a 2 node cluster with an arbiter brick.  Planning to put MySQL on top of it to make it somewhat Hot-Cold (more like Hot-Hot).  Are there any known concerns that I should consider before heading towards direction (such as latency, file locking, etc)?  I found articles where it was recommended not to do this, but most of them were 4 or more years old.
20:56 post-factum BoBurnham: you do not want mysql over glusterfs
20:56 post-factum BoBurnham: you want mysql replication instead
20:59 JoeJulian post-factum, BoBurnham: actually I ran mysql on a dht volume and got better than native performance (on rackspace nova volumes).
21:00 post-factum JoeJulian: sounds crazy to me
21:01 JoeJulian The way I did it was to break up the innodb monolithic storage into 6 files and named them such that they would land on 3 different dht subvolumes (two each).
21:01 JoeJulian Not using file-per-table, obviously.
21:02 JoeJulian So I was essentially sharding my mysql data at the file level, rather than modifying an application or using a proxy.
21:02 post-factum JoeJulian: sounds twice crazier than I thought before
21:02 JoeJulian It's never had a problem in 7 years.
21:03 post-factum JoeJulian: i believe you do not load 72 cpu cores with mysqld ;)
21:03 post-factum JoeJulian: and do not do thousands of qps
21:06 BoBurnham Is it because of performance concerns or stability concerns?
21:06 post-factum BoBurnham: everything is about latency
21:07 JoeJulian I did test using some load testing tool, but it was several years ago and I never had time to finish my presentation.
21:07 BoBurnham why not using file-per-table?
21:07 post-factum BoBurnham: in normal deploy you must use file per table
21:07 post-factum BoBurnham: but
21:08 JoeJulian Because your bigger and/or more heavily used tables won't shard. They'll all sit on one dht subvolume.
21:08 post-factum BoBurnham: ^^
21:09 JoeJulian If you did want to store it all locally, you could do the same thing I did with innodb files on separate drives (or raid 10 stacks).
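
A rough my.cnf sketch of the file-level layout JoeJulian describes — one shared InnoDB tablespace split across several fixed-size files so DHT can hash them onto different subvolumes. File names, sizes, and the mount path are made up:

    [mysqld]
    innodb_file_per_table = 0
    innodb_data_home_dir  = /mnt/glustervol/mysql
    innodb_data_file_path = ibdata1:10G;ibdata2:10G;ibdata3:10G;ibdata4:10G;ibdata5:10G;ibdata6:10G:autoextend
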
21:10 BoBurnham pretty much using it only for replication purposes, not for striping or distributed
21:11 JoeJulian Then I'd probably use galera.
21:14 BoBurnham I wish that was an option.  Our backup EC2 boxes are pretty much useless, good enough only for file replication and I don't think management would want to increase EC2 expenses
21:14 BoBurnham :(
21:15 JoeJulian good, fast, or cheap - pick 2. ;)
21:30 BoBurnham good and cheap
21:32 BoBurnham as long as good means fast too .. j/k
21:38 DV_ joined #gluster
22:16 chirino joined #gluster
22:24 ira joined #gluster
22:45 plarsen joined #gluster
22:56 d0nn1e joined #gluster
23:17 BubonicPestilenc joined #gluster
23:17 BubonicPestilenc hey all
23:18 d0nn1e joined #gluster
23:18 BubonicPestilenc if i have same files (rsynced with -a) on 3 servers
23:18 BubonicPestilenc can i add bricks w/o syncing?
23:18 BubonicPestilenc got over 1.7TB of data
23:20 JoeJulian Adding bricks with data is not in their design scope. It's known to work with one populated brick but that behavior is undocumented. Technically it's "undefined behavior". What I do know is that you'll potentially have a race condition where multiple servers could add missing metadata simultaneously causing a split-brain situation.
23:21 BubonicPestilenc got it, better not to do it :)
23:21 JoeJulian You can probably resolve any such split-brain safely and easily, but ... yeah.
23:22 JoeJulian And 1.7TB isn't something I would even blink at allowing to self-heal from one to the other two.
23:22 BubonicPestilenc btw, is it okay, if i chose gluster for replicating video files?
23:22 JoeJulian I'm currently healing over 300 TB.
23:23 JoeJulian Yes, but...
23:23 BubonicPestilenc got 3 servers with video files (>10MB) and it's boring to run rsync every 15 minutes
23:23 JoeJulian gluster isn't a "replication service" it's a clustered filesystem. Writes will have to be done through the volume mount.
23:24 BubonicPestilenc yeah, i know :)
23:24 JoeJulian Ok, cool. A lot of people don't seem to get that right away.
23:24 BubonicPestilenc it syncs over "mount-point" and you r/w through mount-point2
23:25 JoeJulian @glossary
23:25 glusterbot JoeJulian: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
23:26 BubonicPestilenc re-phrase: it syncs over volumes/bricks, and users r/w through the mount point
23:26 JoeJulian It's especially important as you grow beyond your three servers to not think if it in terms of having the same files stored on each of your servers, but rather having the same file *available* on all of your servers.
23:27 BubonicPestilenc available/redudancy in #1, this is what we have atm. but gluster helps simplify this
23:27 BubonicPestilenc *is #1
23:27 BubonicPestilenc just to confirm:
23:27 JoeJulian precisely
23:28 JoeJulian replication provides data redundancy.
23:28 BubonicPestilenc i ran 3 bricks, on each server, and "peer-ed" them. in this case, while mounting to localhost:/volume
23:28 BubonicPestilenc if any server will go down, it will automatically work with other 2 ?
23:28 JoeJulian yes
23:28 BubonicPestilenc great
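
A hedged sketch of the setup being described — three servers, one brick each, replica 3, each server mounting the volume locally (host and path names are placeholders):

    gluster volume create vidvol replica 3 \
        srv1:/bricks/video srv2:/bricks/video srv3:/bricks/video
    gluster volume start vidvol
    mount -t glusterfs localhost:/vidvol /mnt/video    # run on each server

Because the FUSE client talks to every brick directly, losing one server only affects that server's own mount; clients on the other two keep reading and writing the volume.
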
23:35 AdStar I keep getting my bricks log's filled up with this.,
23:35 AdStar [2016-08-16 23:35:48.204092] W [dict.c:1282:dict_foreach_match] (-->/lib64/libglusterfs.so.0(dict_foreach_match+0x5c) [0x7f7b7caa9c1c] -->/usr/lib64/glusterfs/3.7.11/xlator/features/index.so(+0x3980) [0x7f7b6cc64980] -->/lib64/libglusterfs.so.0(dict_foreach_match+0xe3) [0x7f7b7caa9ca3] ) 0-dict: dict|match|action is NULL [Invalid argument]
23:35 AdStar [2016-08-16 23:35:48.204225] W [dict.c:1282:dict_foreach_match] (-->/lib64/libglusterfs.so.0(dict_foreach_match+0x5c) [0x7f7b7caa9c1c] -->/usr/lib64/glusterfs/3.7.11/xlator/features/index.so(+0x3980) [0x7f7b6cc64980] -->/lib64/libglusterfs.so.0(dict_foreach_match+0xe3) [0x7f7b7caa9ca3] ) 0-dict: dict|match|action is NULL [Invalid argument]
23:35 AdStar [2016-08-16 23:35:48.204337] W [dict.c:1282:dict_foreach_match] (-->/lib64/libglusterfs.so.0(dict_foreach_match+0x5c) [0x7f7b7caa9c1c] -->/usr/lib64/glusterfs/3.7.11/xlator/features/index.so(+0x3980) [0x7f7b6cc64980] -->/lib64/libglusterfs.so.0(dict_foreach_match+0xe3) [0x7f7b7caa9ca3] ) 0-dict: dict|match|action is NULL [Invalid argument]
23:35 glusterbot AdStar: ('s karma is now -150
23:36 glusterbot AdStar: ('s karma is now -151
23:36 glusterbot AdStar: ('s karma is now -152
23:36 AdStar any way to stop this?
23:41 JoeJulian ~pasteinfo | AdStar
23:41 glusterbot AdStar: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
23:43 AdStar https://paste.fedoraproject.org/409392/
23:43 glusterbot Title: #409392 Fedora Project Pastebin (at paste.fedoraproject.org)
23:45 AdStar It was a single node setup for a while (needed to build and transport the second node) so it is having to heal 31TB of data.. but it has killed the primary node due to filling the disk up with the brick log.
23:46 JoeJulian Huh... I'm not sure why it's using an index translator...
23:47 AdStar ok? Im not sure what that means?
23:48 JoeJulian Oh, I'm not sure yet either. I see that xlator's been added to my bricks as well. I just have never had need to notice it before.
23:54 wadeholler joined #gluster
23:57 masber joined #gluster
23:57 JoeJulian Wow, I'm observant. That xlator's been around for ages...
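
Until the cause of the dict_foreach_match flood is found, a logrotate stanza is a common stopgap so the brick logs cannot fill the disk again. A sketch only — paths and limits are guesses, and recent packages may already ship their own glusterfs logrotate config:

    /var/log/glusterfs/bricks/*.log {
        daily
        rotate 7
        size 100M
        missingok
        compress
        copytruncate
    }
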
