
IRC log for #gluster, 2015-01-26


All times shown according to UTC.

Time Nick Message
00:19 slanbuas joined #gluster
00:24 jobewan joined #gluster
01:25 Gill joined #gluster
01:41 Gill joined #gluster
01:56 elico joined #gluster
02:01 calum_ joined #gluster
02:05 nangthang joined #gluster
02:17 wkf joined #gluster
02:21 haomaiwang joined #gluster
02:22 jobewan joined #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:12 MugginsM joined #gluster
03:17 Gill joined #gluster
03:21 fandi joined #gluster
03:28 Mortem joined #gluster
03:28 Mortem Hello
03:28 glusterbot Mortem: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
03:28 Mortem Can someone help me with a Geo-Replication issue on Glusterfs 3.6.1?
03:29 Mortem My geo-replication is set up properly following the RH documentation (passwordless SSH works, etc...) but it goes faulty when I start it
03:31 Mortem I have very few details in the log
03:31 Mortem [monitor(monitor):214:monitor] Monitor: worker died before establishing connection
03:32 Mortem [repce(agent):92:service_loop] RepceServer: terminating on reaching EOF.
03:32 Mortem Any idea?
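(For reference, the usual first checks for a faulty 3.6 geo-replication session are the session status and the master-side worker/monitor logs; a minimal sketch, with MASTERVOL, slavehost and SLAVEVOL as placeholders rather than Mortem's actual names:)
    gluster volume geo-replication MASTERVOL slavehost::SLAVEVOL status detail
    less /var/log/glusterfs/geo-replication/MASTERVOL/*.log       # master-side worker/monitor logs
    less /var/log/glusterfs/geo-replication-slaves/*.log          # on the slave nodes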
04:06 Mortem Nobody can help?
04:28 Gill joined #gluster
04:36 fandi joined #gluster
04:54 Mortem So nobody can help for a geo-replication issue?
05:00 fandi joined #gluster
05:01 fandi joined #gluster
05:07 fandi joined #gluster
05:07 gem joined #gluster
05:09 fandi joined #gluster
05:10 bala joined #gluster
05:11 fandi joined #gluster
05:13 fandi joined #gluster
05:17 fandi joined #gluster
05:56 ramteid joined #gluster
06:06 ramteid joined #gluster
06:08 sputnik13 joined #gluster
06:22 ricky-ti1 joined #gluster
06:23 arash joined #gluster
06:24 arash Hello
06:24 glusterbot arash: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
06:31 zerick joined #gluster
06:34 nhayashi joined #gluster
06:50 bala joined #gluster
06:52 mbukatov joined #gluster
06:55 dusmant joined #gluster
06:58 mbukatov joined #gluster
07:01 nangthang joined #gluster
07:08 ctria joined #gluster
07:17 jtux joined #gluster
07:26 ricky-ticky joined #gluster
07:37 [Enrico] joined #gluster
07:50 soumya__ joined #gluster
08:09 lalatenduM joined #gluster
08:15 fsimonce joined #gluster
08:46 siel joined #gluster
08:46 liquidat joined #gluster
08:47 deniszh joined #gluster
08:51 bipin joined #gluster
08:52 bipin Could someone please explain the parameter performance.cache-size.
08:53 bipin what does it actually mean and how can we increase and decrease its value to see performance gain
08:53 bipin ?
08:53 arash joined #gluster
08:54 bipin ping #gluster : Could someone please explain the parameter performance.cache-size.
08:55 bipin ping #gluster :  ^^ what does it actually mean and how can we increase and decrease its value to see performance gain?
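(For context on bipin's question: performance.cache-size sets the data cache size of the client-side io-cache translator, and quick-read reads the same option; a minimal sketch of changing it, with VOL as a placeholder and 256MB purely illustrative:)
    gluster volume set VOL performance.cache-size 256MB
    gluster volume info VOL        # the "Options Reconfigured" section shows the new value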
08:59 Slashman joined #gluster
09:04 arash why is the Gluster mailing list not active enough?
09:12 rastar_afk joined #gluster
09:16 Pupeno joined #gluster
09:17 siel joined #gluster
09:19 fandi joined #gluster
09:22 fandi joined #gluster
09:29 lalatenduM joined #gluster
09:31 ricky-ticky1 joined #gluster
09:36 hagarth joined #gluster
09:41 ricky-ticky joined #gluster
09:41 jvandewege_ joined #gluster
09:43 jvandewege joined #gluster
09:43 sputnik13 joined #gluster
09:46 [Enrico] joined #gluster
09:46 jvandewege_ joined #gluster
09:48 maveric_amitc_ joined #gluster
09:49 fandi joined #gluster
09:53 jvandewege joined #gluster
09:58 jvandewege_ joined #gluster
10:00 jvandewege_ joined #gluster
10:03 jvandewege__ joined #gluster
10:03 soumya__ joined #gluster
10:07 maveric_amitc_ joined #gluster
10:16 ricky-ticky1 joined #gluster
10:28 ralala joined #gluster
10:37 gem joined #gluster
10:42 Norky joined #gluster
10:42 shaunm joined #gluster
10:43 jvandewege_ joined #gluster
10:51 calum_ joined #gluster
11:04 dusmant joined #gluster
11:08 rjoseph|afk joined #gluster
11:11 DV joined #gluster
11:19 raatti joined #gluster
11:26 glusterbot News from newglusterbugs: [Bug 1022759] subvols-per-directory floods client logs with "disk layout missing" messages <https://bugzilla.redhat.com/show_bug.cgi?id=1022759>
11:41 ctria joined #gluster
12:03 DV_ joined #gluster
12:28 calisto joined #gluster
12:33 ira joined #gluster
12:37 LebedevRI joined #gluster
12:47 Pupeno_ joined #gluster
12:51 ccha3 joined #gluster
12:53 malevolent joined #gluster
12:53 yosafbridge joined #gluster
13:02 maveric_amitc_ joined #gluster
13:10 puiterwijk joined #gluster
13:10 tetreis joined #gluster
13:12 soumya__ joined #gluster
13:13 jvandewege_ joined #gluster
13:14 kkeithley joined #gluster
13:16 T3 joined #gluster
13:18 ricky-ticky joined #gluster
13:19 jvandewege joined #gluster
13:23 jvandewege_ joined #gluster
13:24 calisto1 joined #gluster
13:25 vimal joined #gluster
13:41 Gill joined #gluster
13:42 gem joined #gluster
13:44 T3 joined #gluster
13:53 Pupeno joined #gluster
13:57 B21956 joined #gluster
13:59 sputnik13 joined #gluster
14:01 churnd joined #gluster
14:03 bennyturns joined #gluster
14:06 virusuy joined #gluster
14:06 bennyturns joined #gluster
14:10 julim joined #gluster
14:12 calisto joined #gluster
14:14 nangthang joined #gluster
14:23 bene2 joined #gluster
14:28 Gill joined #gluster
14:30 tdasilva joined #gluster
14:32 jvandewege_ joined #gluster
14:39 _Bryan_ joined #gluster
14:49 ckotil joined #gluster
14:50 tanuck joined #gluster
14:59 jvandewege_ joined #gluster
15:06 mbukatov joined #gluster
15:06 bennyturns joined #gluster
15:10 chirino joined #gluster
15:11 plarsen joined #gluster
15:11 LebedevRI joined #gluster
15:12 wushudoin joined #gluster
15:25 jmarley joined #gluster
15:26 lpabon joined #gluster
15:26 sauce joined #gluster
15:28 Mouser25 joined #gluster
15:29 kkeithley joined #gluster
15:31 roost joined #gluster
15:42 gem joined #gluster
15:49 roost joined #gluster
15:56 bala joined #gluster
16:12 jobewan joined #gluster
16:12 Mouser25 left #gluster
16:15 gem joined #gluster
16:26 lmickh joined #gluster
16:32 elico joined #gluster
16:32 hagarth joined #gluster
16:45 LebedevRI joined #gluster
16:55 purpleidea joined #gluster
16:57 glusterbot News from newglusterbugs: [Bug 1185950] adding replication to a distributed volume makes the volume unavailable <https://bugzilla.redhat.com/show_bug.cgi?id=1185950>
17:01 isquish joined #gluster
17:07 isquish Hi, can anyone help me out with an issue restoring bricks in a replica set?
17:07 isquish I have two RHEL6 VMs running gluster 3.6.1, sharing out several replica volumes
17:07 isquish both VMs are also gluster clients, just for testing purposes
17:07 ralalala joined #gluster
17:07 isquish When I remove a brick from a set and add a new one, I am unable to get gluster to push the existing files to the "replacement" brick
17:08 isquish new files get sync'd fine, and the same for modified files that were there previously
17:09 isquish but no "ls -Ra", "find . -noleaf", or "gluster volume heal <VOLNAME> full" trickery works to get the previously existing files to the new brick
17:10 isquish its very annoying
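(The triggers isquish lists, spelled out as a sketch; /mnt/VOL and VOL are placeholders:)
    ls -laR /mnt/VOL > /dev/null        # walk the mount so lookups trigger self-heal checks
    find /mnt/VOL -noleaf > /dev/null
    gluster volume heal VOL full        # ask the self-heal daemon to crawl the bricks
    gluster volume heal VOL info        # list entries still pending heal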
17:16 chirino joined #gluster
17:17 cfeller joined #gluster
17:20 JoeJulian What process are you using to "remove a brick from a set and add a new one"?
17:21 bennyturns joined #gluster
17:21 isquish gluster volume remove-brick <VOL> replica 1 <server1>:<brick1>
17:21 isquish then delete the <brick1> files from <server1>
17:22 isquish create a new brick directory, and add that to the same VOL via:
17:22 isquish gluster volume add-brick VOL replica 2 <server1>:<brick1>
17:23 isquish (i just used the same folder name for the new brick)
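(A sketch of the sequence isquish describes, with placeholder names; on 3.6, lowering the replica count with remove-brick may additionally require the force keyword:)
    gluster volume remove-brick VOL replica 1 server1:/bricks/b1 force
    rm -rf /bricks/b1 && mkdir /bricks/b1          # wipe and recreate the brick directory
    gluster volume add-brick VOL replica 2 server1:/bricks/b1
    gluster volume heal VOL full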
17:23 bennyturns joined #gluster
17:24 JoeJulian Hmm, well that *should* work, though I've never tried it that way myself. It's not scalable, of course, since if you distribute over any more replica pairs you will no longer be able to specify replica 1.
17:25 isquish I was just testing out the process for recovering from a server/partition failure
17:26 isquish but so far, haven't been able to get the replacement brick to get the existing files outside of manual transfers
17:26 isquish which isn't optimal :P
17:27 JoeJulian Yeah, that's just plain wrong.
17:27 isquish might it be related to only having 2 bricks?  like gluster can't decide which brick is correct for the auto healing?
17:27 JoeJulian Does gluster volume status show the newly added brick as running?
17:28 isquish yes, running and with the correct bricks included/connected
17:28 JoeJulian check /var/log/glusterfs/glustershd.log (on both servers) and look for clues.
17:31 JoeJulian What I do in the event of disk failure is kill glusterfsd for that brick, replace the failed disk (for your test, rm -rf $brick_root), set the volume_id xattr, then restart glusterd.
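(A sketch of JoeJulian's procedure, assuming the brick lives at /bricks/b1 on server1 with its healthy replica at the same path on server2; paths and the init system are assumptions:)
    gluster volume status VOL                      # note the PID of the glusterfsd for the dead brick
    kill <pid-of-that-glusterfsd>
    rm -rf /bricks/b1 && mkdir /bricks/b1          # stand-in for replacing the failed disk
    # on server2, read the volume id from the healthy brick:
    #   getfattr -n trusted.glusterfs.volume-id -e hex /bricks/b1
    # back on server1, stamp the same id onto the empty brick:
    setfattr -n trusted.glusterfs.volume-id -v 0x<value-from-server2> /bricks/b1
    service glusterd restart                       # respawns the brick process
    gluster volume heal VOL full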
17:31 isquish hmmm, okay, I'll try it that way
17:31 isquish I certainly wasn't killing off glusterfsd
17:32 isquish so maybe I'm just "doing it wrong"
17:32 JoeJulian No, that should have worked.
17:32 JoeJulian glustershd logs would be interesting. Wanna share them at fpaste.org?
17:34 isquish sure, one sec
17:35 isquish server1: http://ur1.ca/jjppt
17:37 isquish server2: http://ur1.ca/jjpqf
17:41 JoeJulian Too much noise. All those volumes and logs since Jan 8. I don't have enough spare time (working at $dayjob right now) to parse through that and find your actual test.
17:43 shaunm joined #gluster
17:44 isquish ha, I hear you
17:44 isquish I'll give things another try
17:44 isquish and ensure all the logs and such are captured for just that
17:45 isquish (I've been doing this off and on for the past week or so, so the logs are a mess)
17:45 chirino joined #gluster
17:45 JoeJulian thanks
17:46 isquish thanks for pointing a newbie in the right direction ;)
17:47 JoeJulian Happy to help
17:47 PeterA joined #gluster
17:56 julim joined #gluster
18:02 julim_ joined #gluster
18:07 badone joined #gluster
18:07 partner was tempted to say hello but bot would not agree so i'm just quiet :)
18:11 JoeJulian partner: The background behind that is, when I first started helping people here, it was annoying as could be that people would come in the channel, say "hello", and leave 2 minutes or less later.
18:12 partner JoeJulian: i know exactly what you mean
18:13 JoeJulian Besides, it just saves time. "Hello" 2 minutes later, "Hi", 2 minutes more, "I have a problem." 2 more, "Oh, what is it?"...
18:13 partner the phone culture just doesn't work here on irc
18:19 partner anyways, there is a good and unfortunate chance i will need to abandon the gluster community myself. i was just told today that i'm supposed to start on a new team next monday, and as our team was the only one using gluster i most likely cannot keep my skills up much longer on the topic :(
18:20 liquidat joined #gluster
18:21 JoeJulian Sounds like it's time to look for a new job. One in which you can use your newly developed specialization.
18:22 partner applying to that other company that is known to use gluster in this country? :-D
18:22 JoeJulian Never know
18:22 JoeJulian unless you look.
18:22 partner (honestly i have no idea, i wish they would have made a quick poll on last meetup just to get any ballpark figures)
18:23 JoeJulian Host more meetups. :)
18:23 partner thoh
18:24 partner oh well, i can of course still try to sell and pitch the technology to the rest of the company
18:25 partner what other "software defined storage" will be up and running with three commands.. apt-get install, gluster volume create, start..
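(Roughly the three commands partner means, with placeholder names, plus the peer probe a two-server setup would also need:)
    apt-get install glusterfs-server       # or the equivalent via $your_favourite_package_manager
    gluster peer probe server2             # once, from server1
    gluster volume create VOL replica 2 server1:/bricks/b1 server2:/bricks/b1
    gluster volume start VOL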
18:25 JoeJulian tbqh, this storage specialization I've fallen in to has been quite valuable. I highly encourage it.
18:25 partner should have used $your_favourite_package_manager to be more heterogeneous on that front
18:26 partner i'm still lightyears away from being an expert with this stuff
18:26 partner it's been pretty much exactly 2 years now in production, since version 3.3.1
18:26 partner i recall we fired it up (to prod) in january
18:26 JoeJulian Yeah... I remember the questions you were asking 2 years ago. You're expert enough to add value.
18:30 iamdatfatone joined #gluster
18:32 partner we all must start from somewhere. i'm happy to help anybody with questions i have answers to after these years of collecting the knowledge, one of the few things i can give back to the community
18:33 partner it's a pretty big decision to go forward with something completely unknown; usually there's not enough time to test all the possible things one might encounter along the way, especially not at such a large scale
18:33 iamdatfatone Greetings enthusiasts, engineers, and otherwise genius people. Experimenting with gluster on 3 nodes with 4 bricks per node in a replica 2 configuration with one volume. Working, but copying data to it is extraordinarily slow. For our current data cluster we run the OS off of a compact flash card to free up all the bays for spinning disks. We've never had speed problems in the past with this, but does Gluster not like bei
18:34 iamdatfatone Or am I barking up the wrong tree?
18:34 partner iamdatfatone: your line was cut at "not like bein$"
18:34 iamdatfatone Oh thanks, Not like being run from USB
18:35 partner do the logs and stuff go to usb? all the glusters metadata and such is with the bricks
18:36 jmills joined #gluster
18:36 iamdatfatone We set the logs, /tmp, and swap to go to a HDD
18:37 partner sounds good. what do you mean exactly by it being extraordinarily slow?
18:37 partner with the volume being replica 2 you will use double the network bandwidth from the client, writing the data twice (in parallel) to both replica bricks
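(A rough ceiling for what partner describes: with replica 2 on the native FUSE client, every byte is sent twice, so a 1 Gbps client NIC tops out around 125 MB/s / 2, roughly 60 MB/s, for large sequential writes, before any small-file and metadata overhead is counted.)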
18:39 Gill_ joined #gluster
18:39 iamdatfatone Seeing 10MB/s on a 1Gbps LAN
18:40 iamdatfatone Same subnet, not a ton of traffic
18:41 iamdatfatone Tried the NFS and the native connectors. They both work about the same
18:42 iamdatfatone trusty 14.04
18:42 partner hmm
18:42 iamdatfatone I expected some slowness because most of my data is in small files, just billions of them.
18:48 partner i assume it's due to the metadata overhead compared to the size of the file, but i'm not an expert in this area. i do also have hundreds and hundreds of millions of files on my volume, but the bottleneck is elsewhere so i haven't put much effort into the writing side
18:50 iamdatfatone ok. I'll keep playing around to see. What would you consider as an acceptable data write rate? 100MB/s?
18:51 partner it depends on so many variables; quickly testing in my test env (client and server being virtual machines, in a 10 gig network), i got the following numbers for a simple 1 GB file with dd: 1073741824 bytes (1.1 GB) copied, 13.024 s, 82.4 MB/s
18:52 partner oh, that was over nfs i see
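(A minimal sketch of the kind of dd test partner ran; the mount path and the conv=fdatasync flag are assumptions, not from the log:)
    dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=1024 conv=fdatasync
    # conv=fdatasync flushes the data before dd reports a rate, otherwise the
    # number mostly measures the client-side page cache
    rm /mnt/gluster/ddtest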
18:52 iamdatfatone Is that replicated?
18:52 michatotol joined #gluster
18:54 partner that was distributed, checking now with replicated over native client
18:55 partner oh, my env is all messed up :o
18:57 iamdatfatone hehe. Well thanks for that info. I'll keep looking
19:03 partner better wait for the real experts, at least the basic info is visible now so they are good to continue :)
19:04 B21956 joined #gluster
19:04 partner but that brings me back to my question on how to combine two separate clusters.. i got rsync as an answer but i'm not satisfied with that..
19:05 fubada purpleidea: how do you get the primary interface IP? i need to do something similar in my module
19:06 partner i've been thinking a bit and have a couple of options here.. 1) break the old peering (the one that is supposed to go away) and peer the hosts with the "new" one, then maybe copy some vol files somewhere and fire it up.. probably needs downtime on both ones, plus some uuid tuning on the volume files as well
19:06 partner 2) i forgot it already, something similar but with some different angle..
19:07 partner maybe it was something about forcing the new one to know about the old one, i.e. the peer files, but that will leave the volume still on its own.. i guess i really need to think this through better, but my fever is up and it's 9PM, maybe tomorrow
19:08 partner i just have a couple of separate instances here; the other was used to run a fresher version, but now they are equal and i want to get rid of a couple of boxes
19:08 partner instance as in two separate clusters
19:09 partner so the goal was to add-brick and then remove-brick
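(A sketch of the add-brick/remove-brick path partner has in mind, with placeholder names; the catch is that a host already in another trusted pool must be detached from it before it can be probed, which is exactly the wrinkle being discussed:)
    gluster peer probe newserver
    gluster volume add-brick VOL newserver:/bricks/b1
    gluster volume remove-brick VOL oldserver:/bricks/b1 start
    gluster volume remove-brick VOL oldserver:/bricks/b1 status    # wait for "completed"
    gluster volume remove-brick VOL oldserver:/bricks/b1 commit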
19:16 Iodun joined #gluster
19:20 LebedevRI joined #gluster
19:22 tdasilva joined #gluster
19:37 tanuck joined #gluster
19:37 R0ok_ joined #gluster
19:40 ricky-ticky1 joined #gluster
19:42 deniszh joined #gluster
19:43 JoeJulian Mmm, nice talk I just had with Red Hat. Looks like they're interested in shifting focus back to community.
19:45 JoeJulian semiosis: ^
19:57 julim joined #gluster
20:07 purpleidea fubada: what do you mean
20:07 purpleidea fubada: what do you mean "get the" ??
20:07 AaronGr joined #gluster
20:08 purpleidea fubada: do you mean this? https://github.com/purpleidea/puppet-gluster/blob/master/lib/facter/gluster_host.rb#L21 (it's a heuristic!)
20:08 jvandewege_ joined #gluster
20:11 jvandewege_ joined #gluster
20:16 semiosis JoeJulian++
20:16 glusterbot semiosis: JoeJulian's karma is now 18
20:17 DV_ joined #gluster
20:24 Teela just wanted to say thanks
20:24 Teela the fresh volume heal
20:24 Teela fixed up the all the splitbrains
20:24 Teela :)
20:25 Teela has anyone experienced the split brains due to vmotions?
20:25 lmickh joined #gluster
20:29 partner Teela: good you figured it out, though it sounds a bit illogical to have heal fixing those; maybe you interpreted the logs or gluster output incorrectly, as gluster has no means to tell which file is the good version in a split-brain situation?
20:30 partner i have forgotten all the details of the case already, so pardon me if i'm confusing something here
20:33 Teela yeah
20:33 Teela i know that
20:33 Teela so we just picked one
20:33 mbukatov joined #gluster
20:34 partner oh, so you just randomly trashed one side of the replica and let it heal from there?
20:34 partner i mean, from the remaining one of course
20:34 Teela pretty much
20:34 Teela because we had no way to tell which one was the correct one
20:35 Teela the other problem was we don't know how the split occurred, which was why i was trying to replicate
20:35 Teela a split
20:35 Teela now that i understand how it works a bit
20:35 Teela I was wondering if a vmotion could cause a split-brain
20:36 Teela and or if anyone has ever ran into that
20:37 calisto joined #gluster
20:38 partner hmm, i don't have any facts at hand, but i doubt it, as you know how much effort it takes to even manually produce the issue. maybe, throwing some wild guesses, there could have been enough load, network traffic and whatnot to cause interruptions in the network traffic and trigger the split-brain, but it's really a wild guess
20:39 partner but the main thing is to learn from this and maybe put some effort into monitoring the gluster / healing status and also the logs
20:40 Teela yeah
20:40 Teela im gonna setup some scripts
20:40 Teela to help monitor
20:40 Teela and heal
20:40 Teela i've seen some stuff floating around
20:40 Teela not very sclean
20:40 Teela clean
20:41 Teela however I should able to muddle my way throgh
20:41 Teela through
20:41 Teela thanks again
20:41 partner np, glad you solved the issue
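(A minimal sketch of the kind of heal-monitoring script Teela mentions; VOL, the mail command and the recipient address are assumptions:)
    #!/bin/sh
    # sum the "Number of entries:" lines from heal info and alert if any are pending
    ENTRIES=$(gluster volume heal VOL info | awk '/Number of entries:/ {sum += $4} END {print sum+0}')
    if [ "$ENTRIES" -gt 0 ]; then
        echo "VOL has $ENTRIES entries pending heal" | mail -s "gluster heal alert" admin@example.com
    fi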
20:52 mbukatov joined #gluster
21:07 rwheeler joined #gluster
21:14 purpleidea JoeJulian: more info... ?
21:19 JoeJulian They've assigned Tom Callaway to be our community advocate and he's asking a lot of questions to figure out what needs to be done to re-invigorate the community.
21:21 JoeJulian Tom (IRC:spot) seems to be approaching this task with a sense of urgency, so I'm hoping to see some changes coming RSN.
21:23 johnbot joined #gluster
21:26 johnbot Hey y'all. Ran into a problem that I'm hoping can be overcome without remounting on the client side since the system is in heavy use. Last night I had to ramp up a second gluster server in aws since I maxed out the available /dev/sd* devices that would attach. So I started up the second instance, did a peer probe from the initial server, finally adding 5TB of additional space on the second server, and did a 'gluster volume add-brick' for
21:26 johnbot each new brick. Everything worked great and 5 out of my six gluster clients saw the additional space within minutes. The problem is with one of the clients where it sees only 2TB out of the additional 5TB of space added. Is there a way to rectify this and why do you think it happened on one instance out of 6?
21:28 johnbot For example : gluster client 1 (found all space) 172.31.47.154:/gv1     20T   12T  8.1T  60% /storage, Gluster client 2 (doesn't see all of new space) 172.31.47.154:/gv1   16T   12T  5.0T  70% /storage
21:28 JoeJulian My guess would be a network connection issue.
21:28 JoeJulian But that's just a wag.
21:28 JoeJulian kill -HUP the client will trigger it to check for a vol file change.
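(A sketch of that suggestion on the affected client; the volume name comes from johnbot's df output, the grep pattern is an assumption:)
    ps ax | grep '[g]lusterfs.*gv1'    # find the FUSE client process for the gv1 mount
    kill -HUP <pid>                    # prompts the client to re-check its volfile
    df -h /storage                     # the size should now match the other five clients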
21:28 tg2 http://linux-iscsi.org/
21:28 tg2 lol
21:29 Gill_ joined #gluster
21:29 tg2 sigh
21:30 tanuck joined #gluster
21:30 julim joined #gluster
21:31 purpleidea JoeJulian: sweet... i know spot, he's cool :) let me know if i can help in some way. i was gluster before i was rh.
21:31 purpleidea (and before gluster was rh)
21:36 JoeJulian Totally.
21:37 partner at least he seemed energetic towards this stuff the couple of times i've met him online
21:38 partner nice. we seriously should keep the noise up
21:40 JoeJulian tg2: I wonder if nab's on his way to fosdem.
21:40 partner i mean you, as i'm bailing out, but i will say nice words about the community and glusterfs, as both have saved my ass and made life so much easier
21:41 JoeJulian partner: What if we could pay for you to go places and speak about Gluster?
21:41 JoeJulian Would you keep your hands in it?
21:41 JoeJulian (We, being the gluster community board)
21:42 partner i think my credibility would drop very quickly over time if i were no longer maintaining the infra or solving its issues
21:43 partner i really don't know what will happen next week; in the worst case i have to drag the storage with me and provide it as a service for my old team
21:43 johnbot JoeJulian: thanks for the tip, I'll try that. Just seemed strange to me that 2 out of the 5 new volumes on the new gluster server were seen. If it were network issues, wouldn't it have not seen any of the new volumes on the new server?
21:44 partner kind of hope that won't happen, but then again i am still some bytes away from the goal gluster promises to deliver..
21:45 bene2 joined #gluster
21:45 partner i should sleep on this; there are some things in the background that should perhaps not be discussed in here (even though i don't care, it's just not the topic of the channel)
21:45 JoeJulian johnbot: I was wondering if you're running out of <1024 ports, in which case you may need to set rpc-auth-allow-insecure
21:46 purpleidea partner: there's gluster-devel for these things
21:46 JoeJulian partner: Feel free to chat with me offline. Use google hangouts is easiest.
21:46 JoeJulian Or that
21:46 purpleidea something, something transparency
21:47 JoeJulian amen
21:47 Iodun hi :) i got a question... there is a separate chapter for "distributed geo-replication" in the admin guide... is the mountbroker part of the old geo-replication still applicable for that?
21:49 JoeJulian Iodun: Looks that way.
21:49 partner on the topic, any idea how far the /dev/sdX-letters will go btw?
21:49 partner i've got this on my system:
21:49 partner brw-rw---T 1 root disk 66,  64 Dec 17 21:16 /dev/sdak
21:49 glusterbot partner: brw-rw-'s karma is now -1
21:50 Iodun looooool
21:50 Iodun someone needs to rewrite this bot
21:51 partner at least someone with such nicks will get bad karma from the get go :DDD
21:51 purpleidea partner: very far past 26
21:51 JoeJulian They deserve it if they have such a nick.
21:52 partner purpleidea: i would imagine that, it was just something that johnbot actually said, "since I maxed out the available /dev/sd* devices that would attach"
21:53 purpleidea partner: i know this because i ensured more than 26 drives could be used/tested in vagrant-libvirt, specifically for the gluster simulation use case: https://github.com/pradels/vagrant-libvirt/commit/be2042537e4f8f3e59a7880bf3e09358252be252#diff-b34d2a33c102d0c3e664ab590a2119e7R3
21:53 purpleidea fwiw
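(For reference: the kernel's sd naming continues past sdz with two-letter suffixes, sdaa through sdzz, and then three letters, so partner's /dev/sdak above is the 37th sd device: 26 single-letter names plus 11.)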
21:56 tg2 JoeJulian, is he on irc?
21:58 tg2 i sent him an email but no reply
21:59 JoeJulian tg2: not that I'm aware of. Did you email him @kernel.org?
21:59 tg2 ya
21:59 tg2 and pinged marc on skype
21:59 tg2 i'm sure they're aware of it
21:59 JoeJulian I would hope so.
22:00 partner purpleidea: yeah, i got them visible on the system too; was just wondering where it ends up. probably hw limitations are hit first on how many disks one can direct-attach to a single box :o
22:00 tg2 > • LIO prototype for NVMe-RP dropping in 2015   mmmm, want to see this
22:00 JoeJulian partner: It will continue adding letters as long as necessary, up to buflen, which I believe is 32 bytes.
22:01 vimal joined #gluster
22:01 tg2 any gentoo pros in here?
22:01 partner JoeJulian: that would be quite a few bricks for gluster to handle on a single box :D
22:01 JoeJulian Yes it would. :D
22:01 johnbot JoeJulian: to clarify, I hit some type of OS limit/problem that prevented me from adding additional volumes, not anything with gluster, so the easiest option for me was to add an additional server and peer them
22:02 johnbot JoeJulian: Since performance is typically horrible with AWS mechanical EBS, I just added a bunch to a single gluster server since its network IO was much higher than what they could possibly put out.
22:09 JoeJulian johnbot: The client connects to all the bricks, and the bricks and the management daemon expect the client to connect from a port <1024, so depending on syn timeouts, it's entirely possible to use them up.
22:11 ralala joined #gluster
22:12 Pupeno_ joined #gluster
22:13 johnbot JoeJulian: got it. I'm going to attempt to set rpc-auth-allow-insecure and see if it has any effect. Here's my current port/brick mapping http://dpaste.com/3QE27PA
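(What that usually amounts to on 3.6, as a sketch; gv1 comes from johnbot's paste, and the exact option names are worth double-checking against the docs:)
    gluster volume set gv1 server.allow-insecure on
    # and in /etc/glusterfs/glusterd.vol on every server, followed by a glusterd restart:
    #     option rpc-auth-allow-insecure on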
22:18 partner hah, Reason: Max array count reached
22:20 partner stupid array controller, no jbod support, can only create 64 raid-0
22:36 ricky-ticky joined #gluster
22:38 tdasilva joined #gluster
22:40 T3 joined #gluster
22:41 Staples84 joined #gluster
22:44 garuda joined #gluster
23:10 vikumar joined #gluster
23:40 Pupeno joined #gluster
