
IRC log for #gluster, 2014-06-04


All times shown according to UTC.

Time Nick Message
00:02 theron joined #gluster
00:19 cdunda joined #gluster
00:27 RicardoSSP joined #gluster
00:32 milu joined #gluster
00:33 _abhi joined #gluster
00:37 jag3773 joined #gluster
00:39 bala joined #gluster
00:45 theron joined #gluster
00:55 Ark joined #gluster
00:57 Pupeno_ joined #gluster
01:01 recidive joined #gluster
01:15 _abhi left #gluster
01:20 gildub purpleidea, ping - Would like to discuss python use in purpleidea/gluster
01:21 purpleidea gildub: o hai, 1/2 busy, but go ahead...
01:24 gildub purpleidea, won't go into details. OK, pulling a python xml parser inside puppet (to filter the exec of gluster --xml output) creates unnecessary deps.
01:25 purpleidea gildub: not possible. there's only one dep. python-xml what's wrong with that?
01:26 gildub purpleidea, well maybe because you're a fan of Execs instead of Types/Providers but we shouldn't have to use Python at all
01:27 gildub purpleidea, of course inside the Exec, you could even use C++ if you wanted to, but my point is this should ultimately be wrapped in a type/provider in ruby
01:28 purpleidea gildub: lol, no compiled code is going into puppet-gluster... that's sort of a silly idea...
01:28 gildub purpleidea, yeah, of course!
01:28 purpleidea s/sort of//
01:28 glusterbot What purpleidea meant to say was: gildub: lol, no compiled code is going into puppet-gluster... that's  a silly idea...
01:31 purpleidea gildub: so, um what's the issue?
01:32 vimal joined #gluster
01:33 gildub purpleidea, Even though it's not compiled, using python (or whatever scripting language), don't you see the issue from a Puppet standards viewpoint?
01:34 purpleidea gildub: no, i don't understand your issue (explain better)
01:35 gildub purpleidea, This is not an issue, this is a discussion about Exec vs Types/providers
01:35 purpleidea oh
01:36 gildub purpleidea, and down the road, having to pull python stuff in instead of a Type/Provider
01:36 purpleidea gildub: two different issues:
01:36 purpleidea 1) use of python. no reason not to use python. it works great. is there some reason not to?
01:38 purpleidea 2) use of types/providers. -- yuck. the exec stuff works why change it? and also the types/providers might not exist long term... also, i don't want to write in ruby :)
01:40 gildub 1) No issue with Python, as such.
01:42 purpleidea (brb)
01:44 gildub 2) To be able to go to puppet-labs or at least get a chance to, it'd be necessary to step back and wonder how to transform some Execs into suitable types/providers.
01:45 gildub I don't know the future of types/providers, but for now that's the way to go. It is what it is, but it has the advantage of being a standard framework. I don't think Execs are in the same class. Unless you want to migrate to Ansible.
01:48 gildub Regarding the 'exec stuff works why change it' question: this is a normal transition stage to create more generic puppet modules.
01:53 purpleidea gildub: i don't agree... you're of course welcome to patch this or fork it, but i won't be working on those features or merging them upstream. tbh, i doubt they will work the way you're expecting, and i don't see the issue with the current exec code. this brings no new features to puppet-gluster
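
For context, the Exec approach being argued about shells out to the gluster CLI and filters its XML output with python. A minimal sketch of that pattern (a hypothetical one-liner, not the actual puppet-gluster code; the XML element names are assumed from typical gluster 3.x output):

    # list volume names by parsing `gluster volume info --xml` with python's ElementTree
    gluster volume info --xml | python -c 'import sys, xml.etree.ElementTree as ET; print "\n".join(v.findtext("name") for v in ET.parse(sys.stdin).findall(".//volume"))'
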
01:56 purpleidea gildub: ls -lAh /etc/puppet/modules | fpaste
01:58 gildub purpleidea, I understand and respect that we disagree, it makes things more interesting; in this case it's about standards. I wanted to get started on the subject because I'm really worried about the module's acceptance upstream. Now if you're not interested in that path, then I don't know
01:59 gildub purpleidea, ^^^ ?
01:59 purpleidea gildub: what are you talking about "module acceptance upstream" i am upstream :P
01:59 gildub purpleidea, sorry but puppetlabs is your upstream
02:00 purpleidea gildub: no, puppetlabs is the puppet upstream. puppet-gluster upstream is me.
02:00 gildub purpleidea, I'm talking about puppetlabs forge
02:00 purpleidea gildub: nobody uses the forge except puppetlabs basically
02:01 purpleidea puppet-gluster is available from github, the glusterforge and in EPEL.
02:02 gildub purpleidea, I don't think so, the whole puppet openstack is using it, please have a look at this to get a sense of the scale of things: https://github.com/gildub/openstack-puppet-modules
02:02 glusterbot Title: gildub/openstack-puppet-modules · GitHub (at github.com)
02:03 gildub purpleidea, most of the submodules there are in either stackforge or the puppetlabs forge, and if not then they have an upstream one in progress
02:04 purpleidea gildub: my module is available on the puppet forge if you want to get it from there, but i don't routinely update it because they force you to click to put new versions
02:06 gildub purpleidea, come on James, you know what I mean, let's not play a git battle. I respect that you said "i won't be working on those features or merging them upstream", but that still raises the question: why not?
02:07 purpleidea gildub: if you propose a feature that will have a benefit to puppet-gluster, i'm happy to incorporate it upstream. simple as that.
02:07 purpleidea you haven't convinced me that what you're proposing fulfills that criteria.
02:07 wgao joined #gluster
02:07 gildub purpleidea, I'm not stalking feature, we're discussing structure, as simple as that
02:07 gildub ^ s/stalking/talking/
02:08 purpleidea gildub: in addition, i've been told by my boss to prioritize my other work before puppet-gluster, so unfortunately during work hours, i'm not working on this. so if you have patches that are sane, you can send them in, but i'm definitely not working on these requests in my personal time.
02:08 purpleidea there are other features that i'd rather do in my personal time...
02:09 gildub purpleidea, I don't think I have to convince you of anything
02:10 gildub purpleidea, like what, write puppet modules using your own Puppet philosophy approach?
02:10 purpleidea ???
02:11 gildub purpleidea, ok, I'll get advice from other puppet peers before passing that to mike, which I'm trying to help with.
02:12 gildub purpleidea, thanks for your time.
02:12 purpleidea gildub: i'm trying to understand what you're trying to accomplish or what you think should be patched, but i don't understand, sorry....
02:14 gildub purpleidea, as I said, I will get things reviewed by puppet peers, thanks.
02:16 Pupeno joined #gluster
02:22 gildub purpleidea, BTW, on the priorities, I'm confused didn't Vijay say "Both the projects you mention are very high priority"?
02:23 purpleidea gildub: he did not. but if you think this needs to take higher priority, feel free to email him and cc me to ask.
02:23 purpleidea i'm obviously happy to work on puppet-gluster :P
02:25 harish joined #gluster
02:26 gildub purpleidea, just fwd the email from Vijay, maybe I got it wrong
02:32 gildub purpleidea, are you on any RH IRC channel?
02:40 recidive joined #gluster
02:46 gildub purpleidea, ?
03:09 \malex\ joined #gluster
03:15 \malex\ joined #gluster
03:15 \malex\ joined #gluster
03:24 kkeithley1 joined #gluster
03:26 itisravi joined #gluster
03:36 rejy joined #gluster
03:37 saurabh joined #gluster
03:38 kshlm joined #gluster
03:39 haomaiwa_ joined #gluster
03:42 lalatenduM joined #gluster
03:43 bharata-rao joined #gluster
03:48 haomaiwa_ joined #gluster
03:52 shubhendu joined #gluster
04:05 haomaiw__ joined #gluster
04:10 spandit joined #gluster
04:12 davinder8 joined #gluster
04:15 ndarshan joined #gluster
04:21 haomaiwa_ joined #gluster
04:25 haomaiw__ joined #gluster
04:27 rjoseph joined #gluster
04:33 haomaiwang joined #gluster
04:38 kumar joined #gluster
04:39 psharma joined #gluster
04:45 ppai joined #gluster
04:46 rastar joined #gluster
04:48 kanagaraj joined #gluster
04:50 gildub purpleidea, do you have a pointer for any "types/providers gone on the long run" discussion?
04:52 bala joined #gluster
04:57 meghanam joined #gluster
05:03 rastar joined #gluster
05:11 deepakcs joined #gluster
05:13 karnan joined #gluster
05:16 prasanthp joined #gluster
05:16 aravindavk joined #gluster
05:18 glusterbot New news from newglusterbugs: [Bug 1104462] Disconnects of peer and brick is logged while snapshot creations were in progress during IO <https://bugzilla.redhat.com/show_bug.cgi?id=1104462>
05:27 nishanth joined #gluster
05:28 hagarth joined #gluster
05:30 Philambdo joined #gluster
05:40 dusmant joined #gluster
05:40 kdhananjay joined #gluster
05:50 haomaiwang joined #gluster
05:53 rjoseph joined #gluster
05:55 aravindavk joined #gluster
05:56 aravindavk joined #gluster
05:57 sjm left #gluster
05:57 kanagaraj joined #gluster
06:01 vpshastry joined #gluster
06:12 ktosiek joined #gluster
06:18 glusterbot New news from newglusterbugs: [Bug 858732] glusterd does not start anymore on one node <https://bugzilla.redhat.com/show_bug.cgi?id=858732>
06:32 vimal joined #gluster
06:34 ricky-ti1 joined #gluster
06:35 aravindavk joined #gluster
06:36 spiekey joined #gluster
06:37 nshaikh joined #gluster
06:37 rjoseph joined #gluster
06:38 hagarth joined #gluster
06:39 dusmant joined #gluster
06:44 nicolasbadia joined #gluster
06:44 raghu joined #gluster
06:58 ctria joined #gluster
07:01 shubhendu_ joined #gluster
07:05 mbukatov joined #gluster
07:12 dastar joined #gluster
07:13 eseyman joined #gluster
07:24 keytab joined #gluster
07:26 davinder9 joined #gluster
07:28 karnan joined #gluster
07:38 fsimonce joined #gluster
07:44 vikhyat joined #gluster
07:50 ngoswami joined #gluster
07:56 ProT-0-TypE joined #gluster
08:05 liquidat joined #gluster
08:31 shubhendu joined #gluster
08:34 edward1 joined #gluster
08:37 hagarth @channelstats
08:37 glusterbot hagarth: On #gluster there have been 311343 messages, containing 12382302 characters, 2046991 words, 7545 smileys, and 1041 frowns; 1579 of those messages were ACTIONs. There have been 134806 joins, 3880 parts, 131019 quits, 24 kicks, 409 mode changes, and 7 topic changes. There are currently 238 users and the channel has peaked at 239 users.
08:48 karnan joined #gluster
08:48 ctria joined #gluster
08:48 Norky joined #gluster
08:48 lezo joined #gluster
08:48 georgeh|workstat joined #gluster
08:49 karnan joined #gluster
08:49 rejy joined #gluster
08:49 rjoseph joined #gluster
08:49 gmcwhistler joined #gluster
08:49 kkeithley joined #gluster
08:49 vpshastry joined #gluster
08:49 tjikkun joined #gluster
08:49 tjikkun joined #gluster
08:49 lyang0 joined #gluster
08:50 dusmant joined #gluster
09:10 vikhyat joined #gluster
09:11 karnan joined #gluster
09:15 \malex\ joined #gluster
09:18 dusmant joined #gluster
09:21 nishanth joined #gluster
09:49 glusterbot New news from newglusterbugs: [Bug 1104592] heal info may give Success instead of transport end point not connected when a brick is down. <https://bugzilla.redhat.com/show_bug.cgi?id=1104592>
09:54 hagarth joined #gluster
09:55 kdhananjay joined #gluster
09:58 dusmant joined #gluster
09:58 \malex\ joined #gluster
09:58 \malex\ joined #gluster
10:06 haomaiwa_ joined #gluster
10:06 \malex\ joined #gluster
10:09 haomaiw__ joined #gluster
10:12 nishanth joined #gluster
10:26 haomaiwa_ joined #gluster
10:27 davinder9 joined #gluster
10:34 ndarshan joined #gluster
10:38 \malex\ joined #gluster
10:47 \malex\ joined #gluster
10:49 Slashman joined #gluster
11:00 liquidat joined #gluster
11:06 liquidat joined #gluster
11:13 andreask joined #gluster
11:16 ppai joined #gluster
11:22 diegows joined #gluster
11:26 nshaikh joined #gluster
11:29 harish_ joined #gluster
11:37 davinder9 joined #gluster
11:38 sputnik13 joined #gluster
11:51 itisravi joined #gluster
11:55 harish joined #gluster
11:55 qdk_ joined #gluster
11:59 kshlm joined #gluster
12:02 Tume|Sai any ideas why I am getting IO errors in openstack guests, when mirrored gluster volume is used and mounted with fuse?
12:03 Tume|Sai connectivity to gluster is done using IPoIB
12:03 Tume|Sai SDR
12:04 Tume|Sai remote-dio is on, so it's not that
12:04 ndevos Tume|Sai: 1st thing I would check, is to see if the guest running the fuse-mount can reach all the bricks
12:04 nicolasbadia left #gluster
12:04 Tume|Sai it does
12:04 Tume|Sai I have been trying everything for the past week to get this to work
12:05 Tume|Sai re-created volume couple of times, and no change
12:06 Tume|Sai I use enhanceIO for disk cache, but even when that is off I still get the same errors
12:06 kanagaraj joined #gluster
12:06 edward1 joined #gluster
12:07 ndevos Tume|Sai: in the mount logs from the fuse client (/var/log/glusterfs/<path-to-mount>.log), you can see what hostnames/IPs are used for the bricks, and if there are any timeouts
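
A rough way to act on that advice from the client side (a sketch only; the log file name mirrors the mount point, and the exact message wording varies by gluster version):

    # e.g. for a volume mounted at /mnt/gv0 the client log is mnt-gv0.log
    grep -iE 'connect|disconnect|timed out' /var/log/glusterfs/mnt-gv0.log | tail -n 20
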
12:21 kshlm joined #gluster
12:22 NCommander joined #gluster
12:31 kkeithley1 joined #gluster
12:40 haomaiwa_ joined #gluster
12:46 haomai___ joined #gluster
12:49 ekuric joined #gluster
12:50 glusterbot New news from newglusterbugs: [Bug 1104653] DHT + rebalance : rebalance process crashed + data loss + few Directories are present on sub-volumes but not visible on mount point + lookup is not healing directories <https://bugzilla.redhat.com/show_bug.cgi?id=1104653>
12:53 primechuck joined #gluster
12:55 giannello joined #gluster
12:55 kshlm joined #gluster
13:01 sjm joined #gluster
13:07 jmarley joined #gluster
13:19 prasanthp joined #gluster
13:19 [o__o] joined #gluster
13:20 glusterbot New news from newglusterbugs: [Bug 1070685] glusterfs ipv6 functionality not working <https://bugzilla.redhat.com/show_bug.cgi?id=1070685>
13:25 cdunda joined #gluster
13:26 sroy_ joined #gluster
13:27 _Bryan_ joined #gluster
13:36 Ark joined #gluster
13:36 japuzzo joined #gluster
13:37 theron joined #gluster
13:42 kkeithley1 joined #gluster
13:44 kkeithley1 joined #gluster
13:44 gmcwhist_ joined #gluster
13:45 jobewan joined #gluster
13:46 B21956 joined #gluster
13:47 davinder6 joined #gluster
14:00 hagarth joined #gluster
14:02 cdunda Hi i'm getting these in my logs. This started after doing a find replace to change my gluster servers from ip addresses to hostnames
14:02 cdunda /var/log/glusterfs/bricks/gluster-mktg-prod-corp-son-mktgorg-main-prod.log:153:[2014-06-04 13:48:15.210877] E [graph.c:272:glusterfs_graph_validate_options] 0-corp-son-mktgorg-main-prod-client-1: validation failed: option remote-host tech-mktg-glu1-prod.2u.com: 'tech-mktg-glu1-prod.2u.com'  is not a valid internet-address, it does not conform to standards.
14:02 cdunda Does it need to start with http://?
14:04 marbu joined #gluster
14:08 XpineX joined #gluster
14:12 kkeithley1 joined #gluster
14:16 Chewi joined #gluster
14:20 glusterbot New news from newglusterbugs: [Bug 1104707] Dist-geo-rep : some of the files not accessible on slave after the geo-rep sync from master to slave. <https://bugzilla.redhat.com/show_bug.cgi?id=1104707>
14:20 wushudoin joined #gluster
14:29 jag3773 joined #gluster
14:29 lmickh joined #gluster
14:31 pdrakeweb joined #gluster
14:32 deepakcs joined #gluster
14:36 brad[] if I specify no replica or distribute arguments to gluster volume create, what method is used by default?
14:36 ndevos brad[]: distribure
14:37 ndevos *distribute
14:37 brad[] what's the impact of that on a gluster filesystem if a node disappears? Are some files still available, others not?
14:37 brad[] Or is the whole thing trashed like a spanned LVM?
14:38 ndevos if you don't use replication, some files will be missing, the files on the other bricks will be available
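
In CLI terms (hostnames and brick paths below are placeholders): omitting the replica keyword gives a pure distribute volume, so a dead brick takes its files offline, while replica 2 keeps a copy of every file on both servers.

    # distribute (the default): files are spread across bricks, no redundancy
    gluster volume create distvol server1:/export/brick1 server2:/export/brick1
    # replicate: every file is stored on both bricks
    gluster volume create replvol replica 2 server1:/export/brick1 server2:/export/brick1
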
14:41 shubhendu joined #gluster
14:42 brad[] ndevos: thanks
14:43 mortuar joined #gluster
14:44 glusterbot New news from resolvedglusterbugs: [Bug 1033093] listStatus test failure reveals we need to upgrade tests and then : Fork code OR focus on MR2 <https://bugzilla.redhat.com/show_bug.cgi?id=1033093>
14:46 brad[] Also I'm getting df output that's a bit strange - a glusterfs mounted via FUSE with no contents is showing 60% full
14:47 brad[] 2 bricks, replica 2
14:47 brad[] 200GB drives, dedicated, the resulting gluster filesystem reports its size as 35GB
14:47 brad[] is that expected?
14:48 brad[] I'd have assumed 100GB or 200GB as expected numbers for available space
14:50 kkeithley_ brad[]: _usually_ we see that when you have forgotten to mount your drives at the brick mount point before starting the gluster volume. Are you sure everything is mounted where gluster thinks they are?
14:51 lpabon joined #gluster
14:52 brad[] kkeithley_: triple checking
14:53 brad[] kkeithley_: Sigh. I had forgotten to mount -a on one of the test hosts. Thanks.
14:54 samppah @fuse
14:54 glusterbot samppah: I do not know about 'fuse', but I do know about these similar topics: 'forge'
14:55 Laurent___ joined #gluster
14:58 brad[] kkeithley_: so when a user forgets to do that and then can't mount a disk on the mount point because gluster appears to be keeping it busy....
14:59 brad[] kkeithley_: I've stopped / deleted the volume I errantly created
14:59 kkeithley_ stop the volume. mount the brick disk, restart the volume. That ought to work.
15:00 kkeithley_ or in your case, delete and recreate the volume
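
The recovery kkeithley_ describes, roughly (volume name and device are placeholders):

    gluster volume stop testvol
    mount /dev/sdb1 /export/brick1    # or just `mount -a` if the brick is in fstab
    gluster volume start testvol
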
15:02 JustinClift *** Gluster Community Meeting time in #gluster-meeting ###
15:02 hagarth samppah: looking for anything around fuse?
15:02 brad[] kkeithley_: got it, was an unrelated prob
15:03 kdhananjay joined #gluster
15:03 lalatenduM joined #gluster
15:03 brad[] and it's 200GB in size. ok excellent
15:04 recidive joined #gluster
15:04 hagarth @channelstats
15:04 glusterbot hagarth: On #gluster there have been 311484 messages, containing 12386495 characters, 2047628 words, 7545 smileys, and 1041 frowns; 1579 of those messages were ACTIONs. There have been 134899 joins, 3881 parts, 131113 quits, 24 kicks, 411 mode changes, and 7 topic changes. There are currently 236 users and the channel has peaked at 239 users.
15:07 daMaestro joined #gluster
15:09 cdunda Probe unsuccessful
15:09 cdunda Probe returned with unknown errno 107
15:09 cdunda y
15:09 cdunda y
15:10 samppah hagarth: wondering if fuse is doing some kind of caching
15:10 samppah and if it's possible to tune that
15:10 hagarth samppah: attributes, entries ?
15:10 ProT-0-TypE joined #gluster
15:10 hagarth cdunda: looks like a firewall to port 24007 ?
15:11 cdunda @hagarth: Nope I typed the command wrong. lol
15:12 cdunda gluster peer probe tech-mktg-glu1-prod.xx.com
15:12 cdunda give me
15:12 cdunda Usage: peer probe <HOSTNAME>
15:13 cdunda That is a hostname
15:13 cdunda hrmmmmm
15:13 hagarth cdunda: interesting
15:15 cdunda I'm runningon gluster 3.3.. did it not support hostnames?
15:16 hagarth cdunda: it does support hostnames
15:16 _dist joined #gluster
15:16 hagarth cdunda: can you try probing with a shorter alias without the hyphens? (this alias needs to be dns resolvable on all nodes in the cluster)
15:16 samppah hagarth: i'm using gluster to store vm images.. when there is heavy io inside a vm i see that io is blocked every 4 or 5 seconds and my best guess is that it's writing data from cache to disk.. i have tried tuning /sys/block variables and such but can't see any noticeable difference
15:17 cdunda @hagarth: hmmm i can try and see what happends
15:17 cdunda sudo gluster peer probe tech.xx.com
15:17 cdunda Usage: peer probe <HOSTNAME>
15:17 cdunda @hagarth: idk
15:18 cdunda @hagarth: I've restared gluster on both servers, same results
15:18 hagarth cdunda: ping tech.xx.com works?
15:19 wushudoin joined #gluster
15:21 cdunda @hagarth: Yeah, its not a firewall issue. These were connected before
15:26 [o__o] joined #gluster
15:29 radez I have a 3 node gluster install, on only one of the nodes I get "Another transaction could be in progress. Please try again after sometime."
15:30 radez is there a way to clear this without shutting down all services and disconnecting all clients?
15:30 sputnik13 joined #gluster
15:30 radez or to see what the lock even is?
15:31 radez running 3.5.0
15:32 rjoseph joined #gluster
15:33 primechuck joined #gluster
15:34 jcsp joined #gluster
15:37 bala joined #gluster
15:37 radez a volume status on another node shows that one brick isn't coming online on the host that gives me that message, when I bounce glusterd on that node a different brick on that node doesn't come online
15:39 radez hm, bounce gluster again and they're all online but still get the message
15:39 hagarth radez: did you restart glusterd on all nodes?
15:41 radez hagarth: yes I've tried that, rolling though, I can't bring them all down together
15:49 pdrakeweb joined #gluster
15:53 zerick joined #gluster
16:00 shyam joined #gluster
16:00 spiekey left #gluster
16:00 cdunda joined #gluster
16:01 jskinner_ joined #gluster
16:04 cdunda if i try to attach a peer using a hostname i get
16:04 cdunda Usage: peer probe <HOSTNAME>
16:04 cdunda but if i use ip address it works
16:04 cdunda i'm on gluster 3.3
16:05 cdunda anyone ever experience this?
16:06 zerick joined #gluster
16:06 ndk joined #gluster
16:06 bala joined #gluster
16:06 vimal joined #gluster
16:07 vpshastry joined #gluster
16:07 _dist cdunda: Personally I always define separate hostnames for each brick inside of /etc/hosts. Initially I did this because my gluster setup was on a SAN without DNS, but I've found it useful when debugging and changing setups down the road
16:07 _dist you don't want to use IP because you can't change it later (in my experience)
16:07 rotbeard joined #gluster
16:08 _dist but I've never seen it where I can ping a hostname or the IP and not peer probe on both
16:08 _dist the only time I've seen peer probe IP work and not peer probe hostname is when DNS wasn't working properly
16:08 _dist (running 3.4.2 myself though)
16:09 cdunda _dist: thanks I'll give dns another look
16:10 _dist I use gluster for my VM storage, and I run the servers on the same physical machine as the hypervisor, that's why I opted to use separate hostnames, one set for gluster bricks, one set for hypervisor hostnames
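
A minimal sketch of that /etc/hosts convention (addresses and names are examples only): one set of names for the hypervisor side, one for the brick traffic, so peer probes and volume definitions never reference raw IPs.

    # /etc/hosts, kept identical on every node
    192.168.10.11   hv1        # hypervisor / management
    192.168.10.12   hv2
    192.168.20.11   gluster1   # used in `peer probe` and `volume create`
    192.168.20.12   gluster2
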
16:17 LoudNoises joined #gluster
16:20 cdunda _dist: idk, it might be a weird 3.3 bug, but i ended up attaching using the ip, then stopping the gluster service, updating the peers file, and starting them up again
16:20 cdunda seems to be working
16:21 semiosis cdunda: why dont you use 3.4?
16:21 semiosis or 3.5?
16:23 cdunda semiosis: That may happen... not sure we may be moving to s3 soon
16:27 Mo_ joined #gluster
16:27 samppah _dist: what hypervisor you are using? :)
16:43 Matthaeus joined #gluster
16:58 kanagaraj joined #gluster
17:02 _dist samppah: I'm using qemu-kvm
17:06 kumar joined #gluster
17:13 sjusthome joined #gluster
17:14 zerick joined #gluster
17:30 samppah _dist: using libgfapi?
17:31 shylesh__ joined #gluster
17:31 _dist samppah: I am
17:43 ramteid joined #gluster
17:43 bene2 joined #gluster
17:48 brad[] _dist: I wanted to ask you about your Oracle DB - a DB within a VM on top of a cluster filesystem - how is that working out? Is it performant?
17:50 tg2 joined #gluster
17:51 _dist brad[]: As long as we use the PV (virtio) drivers we find it pretty good. I'm sure it's not as good as if we ran it locally, but until it becomes a burden I'd prefer to have easy HA/shared storage vs storage migration
17:52 _dist The test numbers aren't too fresh in my head, but I think it was something like 1500/700 MBytes/sec r/w on large-block sequential, and it bottomed out at about 50/20 on 1k
17:53 _dist didn't do an iops test for it honestly. The DB is under heavy load though and hasn't been a problem yet
17:54 jcsp joined #gluster
17:54 monotek joined #gluster
18:14 lpabon_ joined #gluster
18:14 plarsen joined #gluster
18:16 ThatGraemeGuy joined #gluster
18:30 pdrakeweb joined #gluster
18:31 raghu` joined #gluster
18:35 pasqd Hi, does gluster replication work only as fast as the slowest brick? i'm considering replicating ssd data to a sata brick...
18:36 Matthaeus pasqd, only as fast as the slowest brick.
18:36 pasqd damn
18:36 Matthaeus No reason to do that unless you absolutely need the redundancy and those are literally the only two hard drives you have.
18:37 jcsp1 joined #gluster
18:37 klaas joined #gluster
18:37 _dist Well, the read speed would probably be better than two bricks on sata, but not write.
18:37 pasqd i had a dream where one of my storage nodes is fast as hell thanks to mixed ssd drives, and that data is replicated to the slowest storage node in case of failure :)
18:38 _dist pasqd: you might want to look into geo-replication
18:39 pasqd ok i'm looking at manuals now :)
18:40 sputnik13 joined #gluster
18:41 _dist I don't have any experience with it yet, but I believe it's for that kind of thing. You might also consider using an ssd based write/read cache on each brick to speed things up. Personally I've found networking is usually the bottleneck though
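
If pasqd does look at geo-replication, the 3.5-era workflow is roughly the following (names are placeholders, and the passwordless-ssh/pem prerequisites from the geo-replication docs are assumed to be in place):

    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status
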
18:43 pasqd i think a 1gbps network should be enough for transferring data from 2x ssd
18:44 Matthaeus pasqd, done the math on that?
18:45 Matthaeus Consider that drive read speeds are usually quoted in bytes per second, but network speeds in bits per second.
18:46 _dist pasqd: My best for gluster replication on 1gbps is somewhere around 80MBytes/sec which any desktop mechanical drive would do fine
18:46 Matthaeus SSDs on the low end have read speeds around 200 MB/s, which is 1.6 Gb/s.  A single SSD can saturate gigE.
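
Spelling out that arithmetic (assuming a low-end SSD and ignoring protocol overhead):

    # 200 MB/s * 8 bits/byte = 1600 Mb/s, vs roughly 1000 Mb/s usable on gigE
    echo $(( 200 * 8 ))   # -> 1600
    # and with replica 2 a fuse client writes to both bricks, doubling its outbound traffic
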
18:47 pasqd damn you know what?
18:47 _dist (sorry didn't mean to gang up there) :)
18:47 pasqd you're perfectly right
18:47 pasqd :D
18:47 Matthaeus I, too, once thought as you did.
18:48 _dist yeah, when I calculated that I needed at least 10gbps to saturate my array for replication it was annoying because the original plan was to bond 1Gbe together
18:49 radez I have a 3 node gluster install, on only one of the nodes I get "Another transaction could be in progress. Please try again after sometime."
18:49 radez is there a way to clear this without shutting down all services and disconnecting all clients?
18:49 pasqd 10GbE network cards are so-so, but 10GbE switches are faaar too expensive ;/
18:49 radez I bounced the nodes in a rolling fashion, can't bring it all down now, didn't help
18:50 radez running 3.5.0
18:52 _dist pasqd: we're running 10gbe switches that are > 1k and support vlan tagging (netgear prosafe). Only thing about them is you need to use an annoying windows app to "manage" them
18:53 _dist but you could just buy a 4 port 1gbps supermicro for like $100 and bond it
18:54 Matthaeus Careful there.
18:54 Matthaeus Depends entirely on what's on the other end of that bond, and what your usage scenario looks like.
18:54 Matthaeus Some bonding mechanisms fail utterly if you're using gluster as a SAN for a single machine.
18:55 _dist ? something poor in the advice? yeah, implementation becomes important
18:55 Matthaeus And also your switch needs to support the same bonding protocol, and not all switches, even fancy cisco ones, support all bonding protocols.
18:56 Matthaeus The one I had only supported bonding based on a hash of the sending and receiving mac addresses.  Since I was running a two-node gluster cluster, this basically guaranteed that all intra-gluster communication would occur over one link, no matter how many links I added to the bond.
18:58 _dist when I tested bonding for speed it was over lacp with switches that supported it. But if it's two node why even use a switch?
18:59 Matthaeus Well, the gluster servers need to also communicate with their clients.
18:59 pasqd bbl, many thx dist and Matthaeus
18:59 _dist yeah you're right, not everyone is doing my crazy gluster-server+hypervisor thing
18:59 Matthaeus But writes would be limited to the speed at which the two servers could talk to each other.
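
For reference, the bonding mode being contrasted with the MAC-hash one is LACP with a layer3+4 transmit hash, which lets separate TCP connections (e.g. one per brick) use different slave links, though a single connection still tops out at one link. A sketch of the RHEL/CentOS-style config (interface name is an example; the switch must support 802.3ad):

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
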
19:07 Matthaeus pasqd: no worries!
19:11 XpineX joined #gluster
19:11 ekuric joined #gluster
19:16 ekuric joined #gluster
19:16 ekuric joined #gluster
19:21 ctria joined #gluster
19:26 chirino joined #gluster
19:28 coredump joined #gluster
19:33 systemonkey joined #gluster
19:46 Matthaeus joined #gluster
19:50 Ark joined #gluster
19:55 mdavidson joined #gluster
19:56 edward1 joined #gluster
19:59 mdavidson joined #gluster
20:04 lmickh joined #gluster
20:07 JoeJulian file a bug
20:07 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
20:07 jag3773 joined #gluster
20:08 JoeJulian ... that was for me, btw...
20:10 tg2 joined #gluster
20:26 sjm left #gluster
20:30 DanishMan joined #gluster
20:39 jcsp joined #gluster
20:51 glusterbot New news from newglusterbugs: [Bug 1104861] AFR: self-heal metadata can be corrupted with remove-brick <https://bugzilla.redhat.com/show_bug.cgi?id=1104861>
21:02 andreask joined #gluster
21:09 marcoceppi joined #gluster
21:09 marcoceppi joined #gluster
21:12 Matthaeus joined #gluster
21:28 recidive joined #gluster
21:44 Matthaeus joined #gluster
22:00 Matthaeus1 joined #gluster
22:01 pdrakeweb joined #gluster
22:39 fidevo joined #gluster
22:40 chirino joined #gluster
22:41 Matthaeus joined #gluster
22:47 Matthaeus joined #gluster
22:50 recidive joined #gluster
23:00 Matthaeus joined #gluster
23:28 jcsp joined #gluster
23:35 gildub joined #gluster
23:40 juhaj joined #gluster
23:48 pdrakeweb joined #gluster
23:58 bala joined #gluster
