IRC log for #gluster, 2014-02-05


All times shown according to UTC.

Time Nick Message
00:03 dbruhn joined #gluster
00:07 _dist andreask: thanks! that worked. It's actually using the api now, did I even need libvirt to understand gluster volumes for this to work? or just qemu
00:08 matclayton joined #gluster
00:08 andreask libvirt needs to support it, yes
00:09 _dist andreask: also, it appears my benchmarks are about 50% now, are there tuning options needed to make it as fast as the fuse mount? Honestly, I assumed it would just be faster
00:10 andreask should be faster than the fuse mount
00:10 asantoni_ left #gluster
00:11 andreask IIRC there have been such issues with gluster 3.4.0
00:12 _dist the one I moved over is a win7 install, crystaldiskmark comes in at roughly 50% of what it was testing before. I'm running 3.4.2
00:12 _dist also, is there any tool (other than ovirt) that will do that work for me? (the editing), or am I going to have to wait until virt-manager (if ever) includes it?
00:14 _dist andreask: thanks, now I can install my debs on my other server, and maybe live migration will work
00:14 JoeJulian _dist: Just got back... I use "virsh edit"
00:15 _dist yeah, that's what I did, I'm sure once I get used to it, it won't be a big deal
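For context, the disk stanza that "virsh edit" carries for a libgfapi-backed image looks roughly like the sketch below; the volume name, image path, and host here are placeholders, not values from this conversation, and libvirt support for the gluster protocol arrived around the 1.0/1.2 era mentioned below:

    <disk type='network' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source protocol='gluster' name='myvol/images/win7.qcow2'>
        <host name='gluster1.example.com' port='24007'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>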
00:15 tokik joined #gluster
00:16 criticalhammer left #gluster
00:16 _dist I was sure I got this to work before libvirt 1.2 was out though, but maybe I didn't
00:16 _dist still getting 50% io, well throughput anyway I didn't look at latency yet
00:17 JoeJulian Have you installed the virtio drivers in winblows?
00:18 _dist yeap but even if I hadn't why would before/after fuse/libgfapi without it be slower?
00:18 JoeJulian Skipping fuse typically gives a 6x performance improvement, from what other people have reported.
00:18 _dist wish that was the case :) I've gone from r/w 300/500 to 150/250
00:19 _dist better test with same emulator
00:19 _dist the old results were on 1.5.0 I'm running 1.7.0 now
00:19 andreask hmm ... have "stat-prefetch" enabled on the volume?
00:20 _dist andreask: if it's not on by default, no I don't
00:21 _dist turned it on
00:21 JoeJulian _dist: Since this is just for VM images, this page is relevant: https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.0/html/Quick_Start_Guide/chap-Quick_Start_Guide-Virtual_Preparation.html
00:22 glusterbot Title: Chapter 3. Managing Virtual Machine Images on Red Hat Storage Servers (at access.redhat.com)
00:22 JoeJulian more or less... do we have groups?
00:23 andreask isn't 2.1 the latest RHES?
00:23 JoeJulian heh, no groups in the gluster version... interesting...
00:25 JoeJulian The settings are what I was pointing to, though, regardless of how they're set.
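For reference, a hedged sketch of how such VM-image tuning options are applied from the CLI; the exact list in the Red Hat guide may differ, and <volname> is a placeholder:

    gluster volume set <volname> performance.quick-read off
    gluster volume set <volname> performance.read-ahead off
    gluster volume set <volname> performance.io-cache off
    gluster volume set <volname> performance.stat-prefetch off
    gluster volume set <volname> cluster.eager-lock enable
    gluster volume set <volname> network.remote-dio enable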
00:25 Fresleven joined #gluster
00:27 _dist seems like one of those options killed my vm :)
00:29 _dist left them on, power cycled it
00:30 _dist I'll admit, I'm surprised those options brought me back to nearly pre libgfapi speeds
00:31 _dist but I also admit I'm curious what they would have done to fuse :)
00:31 andreask happy testing ;-)
00:31 _dist I'll let you guys know in a sec, it'll only take a minute, just gotta make a backup on the xml
00:37 _dist wow, those settings cripple fuse, I've only done the read so far but it's 15mb/s
00:38 _dist 55 for write. Looks like I've got to read about what the hell those settings are to better understand them :)
00:45 KyleG joined #gluster
00:45 KyleG joined #gluster
00:56 andreask time for me ... good night
00:57 dbruhn joined #gluster
00:58 RobertLaptop joined #gluster
01:02 overclk joined #gluster
01:03 dbruhn joined #gluster
01:07 Humble joined #gluster
01:11 dbruhn_ joined #gluster
01:15 _dist left #gluster
01:22 dewey Having problems replacing a brick in a cluster:  I copy the name of the brick as reported by "gluster volume info" and it claims that the brick isn't in the volume.  Version = 3.4.2.  Any thoughts?
01:33 vpshastry joined #gluster
01:35 itisravi joined #gluster
01:35 JoeJulian ~pasteinfo | dewey
01:35 glusterbot dewey: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
01:36 JoeJulian And the exact command line you're attempting.
01:37 dewey http://fpaste.org/74525/39156421/
01:37 glusterbot Title: #74525 Fedora Project Pastebin (at fpaste.org)
01:40 dewey JoeJulian: http://fpaste.org/74525/39156421/
01:40 glusterbot Title: #74525 Fedora Project Pastebin (at fpaste.org)
01:42 dewey Question:  does this new "replace-brick" methodology do anything other than forcing a removal of the brick and adding a new one, thus losing my redundancy for the time period of the heal?
01:43 JoeJulian dewey: That's what it does, yes. I pointed that out as a concern when the idea was first floated.
01:43 dewey Well, for whatever it's worth (clearly not much), I agree with you :-)
01:43 dewey Can I rsync from old brick to new brick first or will that completely screw everything up?
01:44 JoeJulian I suppose, worst case, you still have your old brick as a backup...
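For reference, the 3.4-era replacement flow being discussed looks roughly like this (volume name and brick paths are placeholders); the full heal afterwards is what re-establishes redundancy:

    gluster volume replace-brick <volname> oldserver:/bricks/b1 newserver:/bricks/b1 commit force
    gluster volume heal <volname> full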
01:47 JoeJulian dewey: Interesting bug... Did you by any chance already probe those peers by name since you created that volume?
01:48 dewey I have not probed them by name.  I've been playing with this volume as my test case before moving my critical one, so I recently added (and replica'd) the brick I'm now trying to replace.
01:49 JoeJulian See if there's anything useful in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
01:50 dewey Received stage RJT from uuid: 741845a9-ccd3-4f83-a5cb-3d5725be4a04
01:51 vpshastry joined #gluster
01:51 dewey I'm assuming that stands for "reject".
01:53 JoeJulian Sounds pretty likely. That uuid should be able to be matched to a peer in "gluster peer status". Check that log on that peer.
01:54 dewey That is the UUID of the peer I'm trying to *add*.  Checking...
01:54 dewey Doesn't seem terribly helpful:   0-management: Stage failed on operation 'Volume Replace brick', Status : -1
01:55 JoeJulian lame
01:55 JoeJulian Peer status have all "State: Peer in Cluster (Connected)"?
01:55 JoeJulian (from more than one server)
01:56 dewey Ye
01:56 dewey yes
01:56 dewey One potentially interesting thing:  the new peer is on a different network than the old.  I have 2 NICs on each node and all nodes can reach the other nodes
01:57 dewey But the networks are *not* routable between each other.  (in other words, one of my goals is to move Gluster to its own network)
01:57 dewey All nodes report all other nodes as connected
01:58 JoeJulian I'd have to soak that in... Been awake since 3am on 2 hours of sleep so I'm not sure I'm wrapping my head completely around that.
01:59 JoeJulian Try "glusterd --debug" on that hostile server and see if it makes any more sense.
02:00 dewey Aiieeee!  Yeah, I get that kind of fog
02:00 JoeJulian dinner's ready so I'm going to go hang with the wife and kids. I'll be around tomorrow during GMT-8 business(ish) hours.
02:01 dewey OK.  Thanks for your help.
02:04 davinder joined #gluster
02:09 dbruhn joined #gluster
02:11 dbruhn_ joined #gluster
02:22 Fresleven joined #gluster
02:30 bala joined #gluster
02:34 kam270 joined #gluster
02:35 harish joined #gluster
02:37 Ark_explorys joined #gluster
02:44 kam270 Hi guys, can you please direct me to specific information on setting up and using QEMU and glusterfs 3.4.2
02:46 ilbot3 joined #gluster
02:46 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:54 jclift_ Anyone around?
02:54 jclift_ JoeJulian: This is looking a lot nicer now: https://github.com/justinclift/glusterflow
02:54 glusterbot Title: justinclift/glusterflow · GitHub (at github.com)
02:54 jclift_ :)
02:57 gdubreui joined #gluster
02:58 bharata-rao joined #gluster
03:04 mkzero joined #gluster
03:11 rastar joined #gluster
03:14 _Bryan_ joined #gluster
03:22 shubhendu joined #gluster
03:35 kanagaraj joined #gluster
03:40 itisravi joined #gluster
03:44 dewey I'm around
03:51 jporterfield joined #gluster
03:52 RameshN joined #gluster
03:55 JoeJulian jclift_: Dude! That's awesome!
04:02 Ark_explorys joined #gluster
04:10 cfeller jclift_: so I would need to run that on each node/peer of the gluster cluster, or just one?
04:13 cfeller jclift_: nvm, that seems to make sense reading it again.  I'll play with it on my test cluster tomorrow.
04:17 dbruhn joined #gluster
04:23 _dist joined #gluster
04:28 atrius joined #gluster
04:37 kdhananjay joined #gluster
04:40 Ark_explorys joined #gluster
04:45 dewey Can I use bricks of different sizes in a distributed (actually distributed/replicated) volume?
04:45 Ark_explorys yes
04:45 Ark_explorys you will need to set quotas on the smaller ones
04:48 dewey set quotas on the smaller bricks?
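Worth noting: GlusterFS quotas are set on directories of a volume rather than on individual bricks, so the advice above is approximate. A sketch of the quota CLI of that era (volume name and path are placeholders):

    gluster volume quota <volname> enable
    gluster volume quota <volname> limit-usage /some/directory 100GB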
04:51 vpshastry joined #gluster
04:52 kdhananjay joined #gluster
04:54 ppai joined #gluster
04:59 _dist I'm doing some testing, I'm wondering if anyone here is running vms with the libgfapi api, performance seems ok but live migration seems to hang, never actually start.
05:02 Fresleven joined #gluster
05:10 ndarshan joined #gluster
05:11 Humble joined #gluster
05:12 aravindavk joined #gluster
05:15 prasanth joined #gluster
05:16 nshaikh joined #gluster
05:18 rjoseph joined #gluster
05:18 jag3773 joined #gluster
05:28 saurabh joined #gluster
05:35 jporterfield joined #gluster
05:37 _dist left #gluster
05:39 CheRi joined #gluster
05:44 bala joined #gluster
05:49 bala joined #gluster
05:51 lalatenduM joined #gluster
05:51 jporterfield joined #gluster
05:58 vimal joined #gluster
05:58 raghu joined #gluster
06:01 benjamin_____ joined #gluster
06:10 hagarth joined #gluster
06:11 surabhi joined #gluster
06:14 rastar joined #gluster
06:14 FrodeS joined #gluster
06:22 mohankumar joined #gluster
06:25 FrodeS joined #gluster
06:41 prasanth joined #gluster
06:41 micu joined #gluster
06:46 denaitre joined #gluster
06:50 CheRi joined #gluster
06:58 vpshastry joined #gluster
07:17 jtux joined #gluster
07:25 zapotah joined #gluster
07:25 zapotah joined #gluster
07:36 Fenuks|2 joined #gluster
07:36 denaitre joined #gluster
07:38 ekuric joined #gluster
07:39 rossi_ joined #gluster
07:39 qdk joined #gluster
07:40 jtux joined #gluster
07:52 ctria joined #gluster
08:05 eseyman joined #gluster
08:18 awheeler_ joined #gluster
08:26 iksik joined #gluster
08:28 keytab joined #gluster
08:28 haomaiwa_ joined #gluster
08:31 awheeler_ joined #gluster
08:32 haomaiw__ joined #gluster
08:36 ngoswami joined #gluster
08:38 glusterbot New news from newglusterbugs: [Bug 1055037] Add-brick causing exclusive lock missing on a file on nfs mount <https://bugzilla.redhat.com/show_bug.cgi?id=1055037>
08:42 blook joined #gluster
08:44 andreask joined #gluster
08:49 surabhi_ joined #gluster
08:54 mgebbe_ joined #gluster
08:55 mgebbe joined #gluster
08:56 ctria joined #gluster
08:57 psharma joined #gluster
09:07 PatNarciso joined #gluster
09:07 surabhi joined #gluster
09:11 DV__ joined #gluster
09:28 Shri joined #gluster
09:31 ctria joined #gluster
09:36 pk1 joined #gluster
09:39 Shri joined #gluster
09:48 Shri joined #gluster
09:52 rjoseph joined #gluster
09:54 zapotah joined #gluster
09:54 zapotah joined #gluster
09:54 jclift_ cfeller: Yeah.  At the moment it's just having each server send its info to LogStash using a logfile.  That could be changed to using UDP/TCP/<something else>.
09:54 jclift_ I really went with the "easiest way to get this working" approach, as there were so many unknown bits that needed to come together.
09:55 jclift_ cfeller: I also found a few "blocker" type problems with the Glupy codebase atm, such that without some quick&easy renames, Glupy (in mainline gluster repo) is broken atm.
09:57 jclift_ cfeller: I'll try and get patches to fix the Glupy problems into Gluster today/tomorrow/fri.  Unlikely today though, as I'm at a conference and my boss + team will be ultra shitty at me if I don't go to their events today. ;)
09:58 matclayton joined #gluster
10:01 Shri left #gluster
10:10 jclift_ JoeJulian: Thx :)
10:13 ells joined #gluster
10:17 rwheeler joined #gluster
10:21 mohankumar__ joined #gluster
10:27 davinder2 joined #gluster
10:33 mohankumar joined #gluster
10:34 spstarr_work joined #gluster
10:34 spstarr_work hmmm any folks around?
10:35 spstarr_work im having very high CPU on a 3 node multi-master/replication cluster in AWS and trying to see why, i've set the performance cache size to 512MB but still high CPU
10:36 spandit joined #gluster
10:41 shubhendu joined #gluster
10:44 ujjain joined #gluster
10:44 spstarr_work joined #gluster
10:44 spstarr_work oops
10:44 spstarr_work im having very high CPU on a 3 node multi-master/replication cluster in AWS and trying to see why, i've set the performance cache size to 512MB but still high CPU
10:44 spstarr_work I am not using NFS but FUSE
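The cache tweak mentioned here is presumably something along these lines (volume name is a placeholder):

    gluster volume set <volname> performance.cache-size 512MB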
10:46 blook spstarr_work, you should try using a kernel higher than 3.10
10:59 mohankumar joined #gluster
11:05 rjoseph joined #gluster
11:15 shylesh joined #gluster
11:25 edoceo joined #gluster
11:26 inodb joined #gluster
11:29 dusmant joined #gluster
11:30 harish joined #gluster
11:34 kkeithley1 joined #gluster
11:36 edoceo joined #gluster
11:52 hagarth joined #gluster
11:55 surabhi joined #gluster
11:56 andreask joined #gluster
12:01 DV__ joined #gluster
12:03 pk1 joined #gluster
12:05 itisravi joined #gluster
12:07 shubhendu joined #gluster
12:08 vpshastry1 joined #gluster
12:12 spstarr_work blook, im stuck on LTS 12.04
12:14 diegows joined #gluster
12:17 zapotah joined #gluster
12:21 CheRi joined #gluster
12:30 calum_ joined #gluster
12:33 shubhendu joined #gluster
12:33 ThatGraemeGuy_ joined #gluster
12:38 vpshastry1 joined #gluster
12:42 ira joined #gluster
12:47 CheRi joined #gluster
12:54 benjamin_____ joined #gluster
13:03 keytab joined #gluster
13:04 ppai joined #gluster
13:06 DV__ joined #gluster
13:21 Ark_explorys joined #gluster
13:24 pk1 joined #gluster
13:25 dusmantkp_ joined #gluster
13:26 ppai joined #gluster
13:31 sroy joined #gluster
13:34 spstarr_work left #gluster
13:36 dusmantkp__ joined #gluster
13:49 matclayton joined #gluster
13:54 ktosiek joined #gluster
13:55 dusmantkp__ joined #gluster
13:59 japuzzo joined #gluster
14:03 Staples84 joined #gluster
14:03 matclayton joined #gluster
14:05 jmarley joined #gluster
14:13 bala joined #gluster
14:13 bennyturns joined #gluster
14:14 dusmantkp__ joined #gluster
14:18 CheRi joined #gluster
14:19 sarkis joined #gluster
14:19 rfortier joined #gluster
14:21 ThatGraemeGuy joined #gluster
14:23 dbruhn joined #gluster
14:26 pk1 left #gluster
14:28 B21956 joined #gluster
14:33 jkemp101 joined #gluster
14:33 jkemp101 left #gluster
14:34 duckdive joined #gluster
14:36 benjamin joined #gluster
14:41 dusmantkp__ joined #gluster
14:43 duckdive I am testing with a replicated two node configuration.  If I stop one node and run gluster volume status on the other node, it hangs for 2 minutes and then returns with no output.
14:44 duckdive Running it again then returns immediately with "Another transaction is in progress. Please try again after sometime." I need to be able to use the CLI immediately after a failure to diagnose what the problem is.
14:44 _Bryan_ joined #gluster
14:44 duckdive Any tips on how to make the gluster command more responsive after a failure?
14:45 jobewan joined #gluster
14:48 zaitcev joined #gluster
14:50 jruggiero joined #gluster
14:50 jruggiero left #gluster
14:52 vipulnayyar joined #gluster
14:54 vipulnayyar left #gluster
14:55 kaptk2 joined #gluster
15:02 Eco_ joined #gluster
15:03 gmcwhistler joined #gluster
15:05 lalatenduM duckdive, what kind of volume you have?
15:05 kdhananjay joined #gluster
15:06 gmcwhistler joined #gluster
15:07 duckdive Replicated
15:08 duckdive One brick on each server
15:10 lalatenduM duckdive, it is not usual behavior, which version of gluster are you using?
15:10 duckdive The problem seems to be touched on in https://bugzilla.redhat.com/show_bug.cgi?id=810944 but I think it says you have to wait 10 minutes before gluster will detect the failure of the other node.  Is this true?
15:10 glusterbot Bug 810944: low, low, ---, vagarwal, CLOSED CURRENTRELEASE, glusterfs nfs mounts hang when second node is down
15:10 Eco_ joined #gluster
15:11 lalatenduM duckdive, thats a very old bug, not likely to be in the recent releases
15:11 duckdive I am running gluster 3.4.2 on Centos 6.5 in KVM instances.  The two instances are both acting as a gluster server and client
15:13 duckdive Woops, I meant this bug https://bugzilla.redhat.com/show_bug.cgi?id=873549.
15:13 glusterbot Bug 873549: unspecified, medium, ---, kparthas, ASSIGNED , improvements in glusterd log and command failure message needed when "gluster volume status" cli command fails
15:14 duckdive After I wait ~10 minutes all of the commands come back fast
15:14 lalatenduM duckdive, I am currently in #gluster-meeting for weekly gluster meeting, so my replies will be slow
15:24 Humble joined #gluster
15:24 chirino joined #gluster
15:29 duckdive My issues seem to be connected to the ability of one node to detect that the other node is gone.  volume status doesn't work if peer status says the dead node is connected.  Once peer status shows them as disconnected everything works fine.  How do I tune my dead node detection?
15:31 plarsen joined #gluster
15:32 theron joined #gluster
15:38 calum_ joined #gluster
15:43 bugs_ joined #gluster
15:44 olisch joined #gluster
15:56 lalatenduM duckdive, that bug does not ay anything abt the slowness you are seeing right?
15:56 lalatenduM s/ay/say/
15:56 glusterbot What lalatenduM meant to say was: duckdive, that bug does not say anything abt the slowness you are seeing right?
15:56 dusmantkp__ joined #gluster
15:59 duckdive If you look at comment 8 of bug 873549, case 2 describes the behavior I am seeing.  I think that bug made the error messages consistent but it does not address why the errors come back in the first place.
15:59 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=873549 unspecified, medium, ---, kparthas, ASSIGNED , improvements in glusterd log and command failure message needed when "gluster volume status" cli command fails
16:00 duckdive How long should it take for node2 to detect that node1 is gone?  And why does the volume status command fail on node2 until it figures this out?
16:01 duckdive I can see node2 sending arps for node1 that never get any replies.  However, peer status shows node1 as connected.
16:01 dusmantkp_ joined #gluster
16:03 lalatenduM hagarth, the wiki bug triage page is good, when do we convert it to a gluster qa triage page #link http://www.mediawiki.org/wiki/Bug_management/How_to_triage
16:03 glusterbot Title: Bug management/How to triage - MediaWiki (at www.mediawiki.org)
16:04 hagarth lalatenduM: let us do that this week?
16:04 lalatenduM hagarth, ok
16:06 lalatenduM duckdive, regarding "How long should it take for node2 to detect that node1 is gone" I dont have information on this right now
16:06 lalatenduM duckdive, looks like the same bug, you can just add a comment and say you are also seeing the issue
16:07 lalatenduM duckdive, I would suggest you to ask the question in gluster-users mailing list
16:08 duckdive Ok, thanks
16:09 lalatenduM duckdive, actually node 2 should be aware of node 1 being gone asap, as that's the point of having a replica volume
16:10 lalatenduM duckdive, that being said, there are different processes for each of these things, e.g. glusterfsd (brick process) and glusterd (mgmt daemon)
16:11 lalatenduM duckdive, the cli talks to glusterd, but when a brick is down, that's the glusterfsd process
16:11 duckdive I figured it should be quick,  I see it sometimes taking minutes for peer status to report the other node is disconnected.
16:12 lalatenduM duckdive, yup, it is kind of confusing, I will also do a test on my system to see if it is reproducible in 3.5beta2. However we should start a mail on this
16:15 duckdive So if I stop glusterd service on node1, node2 immediately knows that node1 is gone.  volume status returns immediately also.  The problem occurs when I disconnect node1 from the network by doing a "service network stop" to simulate a complete failure of node1.  Now node2 seems to have problems detecting node1 is gone.
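This matches how failure detection works: stopping glusterd closes its TCP connections cleanly, so peers notice immediately, while pulling the network leaves connections to time out on their own. The client/brick-side knob for that timeout is network.ping-timeout (42 seconds by default in this era); glusterd-to-glusterd peer detection relies on TCP keepalive rather than this option, so "peer status" can lag further behind. A sketch, assuming 3.4 defaults:

    # lower the window before a silent peer is declared dead (default is 42s)
    gluster volume set <volname> network.ping-timeout 42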
16:16 ndk joined #gluster
16:17 vpshastry joined #gluster
16:18 vpshastry left #gluster
16:19 lalatenduM duckdive, hmm
16:20 lalatenduM duckdive, will leave now, will try to check it with my setup
16:26 rotbeard joined #gluster
16:32 Ark_explorys Hello, I have a distributed replicated volume with files on each brick, but they do not show up on the client. Any Advice?
16:32 GLHMarmot joined #gluster
16:34 dbruhn Ark_explorys, did you add the files through the mount point, or did you add the files directly through the brick?
16:38 criticalhammer joined #gluster
16:39 Ark_explorys it was put on gluster by the client
16:40 Ark_explorys also dbruhn removing the vols fix worked for that old volume
16:40 dbruhn nice
16:41 saltsa joined #gluster
16:43 tdasilva left #gluster
16:43 daMaestro joined #gluster
16:44 Ark_explorys gluster volume heal comes back ok
16:44 Ark_explorys both split brain and normal
16:44 Ark_explorys as no files are missing
16:44 dbruhn Ark_explorys, are you on a pre 3.3. version?
16:44 neofob joined #gluster
16:45 Ark_explorys no i am 3.3
16:45 dbruhn which distro?
16:45 Ark_explorys the vols dir under lib/glusterd have different time stamps for each replica set
16:45 Ark_explorys SL 6.4
16:45 Ark_explorys you recommend CentOS? Not that I want to rebuild them all lol
16:46 dbruhn I am running on red hat, have run on cent
16:46 dbruhn sec
16:47 dbruhn The files in question "getfattr -m . -d -e hex /path/to/file/on/brick/file"
16:47 dbruhn run that and lets see what the extended attributes say about the file
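Purely illustrative output of that getfattr command on a healthy replica-2 file; the brick path, volume name, and attribute values below are made up, and non-zero trusted.afr counters would indicate pending heal operations against that replica:

    # getfattr -m . -d -e hex /bricks/b1/path/to/file
    # file: bricks/b1/path/to/file
    trusted.afr.<volname>-client-0=0x000000000000000000000000
    trusted.afr.<volname>-client-1=0x000000000000000000000000
    trusted.gfid=0x<16 bytes of hex>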
16:49 bet_ joined #gluster
16:50 Ark_explorys sorry dbruhn I was confused about the information given to me
16:50 dbruhn No worries
16:50 rossi_ joined #gluster
16:50 Ark_explorys so /var/glusterfs/rdbms/prod-exmpi/2014-01-21_04-00-38 this directory on the client
16:51 Ark_explorys i cannot delete
16:51 Ark_explorys rm: cannot remove `2014-01-21_04-00-38': Directory not empty
16:51 dbruhn Oh, that's easy, just go into the directory on the bricks, and delete the files inside the bricks.
16:52 dbruhn since they are trying to get rid of them anyway
16:52 dbruhn I thought these were files you cared about
16:52 dbruhn I've seen this before
16:52 Ark_explorys this volume
16:53 dbruhn I think what happens is the .glusterfs entries get removed before the files, and then the files aren't completely removed.
16:53 dewey Hey JoeJulian, are you on?
16:54 Ark_explorys so to refresh your memory, i had 2 gluster servers in a trusted storage pool, added 2 more and made the volumes distributed replicated, then I replaced the first 2 servers with the same hostname, ip, and UUID. I had 2 volumes that would not stop / delete. I was able to remove the vols info for one of them, is there any way around that for this rdbms dir?
16:55 dbruhn Ark_explorys, you are going to need to start going through your log files to see where the errors are, and start correcting them.
16:55 jayo joined #gluster
16:58 Ark_explorys Alright I will need a little time to search
16:59 17WAA0PM2 joined #gluster
16:59 92AAAADIY joined #gluster
17:00 Ark_explorys dbruhn: removing everything inside the brick dirs and then issuing the RM on the client for the dir worked!
17:04 tdasilva joined #gluster
17:14 KyleG joined #gluster
17:14 KyleG joined #gluster
17:14 rossi_ joined #gluster
17:22 duckdive left #gluster
17:23 bala joined #gluster
17:29 JoeJulian dewey: You can use bricks of different sizes. The replica sets will be full when the smallest is full. When a distribute subvolume fills, new files will be created on other subvolumes. Existing files will not be able to grow. It's best to have all your bricks be the same size. Some (myself included) accomplish this using lvm to create multiple bricks out of the larger ones.
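A minimal sketch of the LVM approach JoeJulian describes, assuming a volume group named vg_bricks and XFS-formatted bricks (names and sizes are placeholders):

    lvcreate -L 1T -n brick1 vg_bricks
    mkfs.xfs -i size=512 /dev/vg_bricks/brick1
    mkdir -p /bricks/brick1
    mount /dev/vg_bricks/brick1 /bricks/brick1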
17:30 KyleG left #gluster
17:30 dewey Thanks.  My issue is that I originally set up the volume with a single monster brick and I want to divide it into smaller ones for a number of reasons.  To make that transition I'll have to add the smaller ones...
17:31 Mo__ joined #gluster
17:32 dewey JoeJulian: I actually wanted to let you know that I figured out the odd node problem -- it was some nasty interaction between using IPs as node names and having a node on a different network.
17:32 dewey I'm currently trying to repeat the behavior in a set of VMs so I can write up a decent bug report.
17:32 JoeJulian excellent on both counts.
17:33 JoeJulian Well, I'm off. I think I've got a 20mile bike ride today. Half a million people are all trying to get to downtown Seattle for the parade. I just want to get to my office...
17:41 Fresleven joined #gluster
17:53 diegows joined #gluster
18:01 ktosiek_ joined #gluster
18:01 jobewan joined #gluster
18:10 calum_ joined #gluster
18:13 rastar joined #gluster
18:13 jflilley joined #gluster
18:18 plarsen joined #gluster
18:18 jag3773 joined #gluster
18:28 quique joined #gluster
18:29 Ark_explorys I am going to upgrade from 3.3 to 3.4, any advice I should know ahead of time? http://184.106.200.248/2013/07/upgrading-to-glusterfs-3-4-0/
18:29 rcaskey semiosis, any word back on qemu-gluster for trusty?
18:29 rcaskey or vague readings of entrails or something similarly official?
18:29 semiosis lol
18:30 semiosis i think i'm subscribed to all the relevant things in launchpad, no updates yet
18:30 bala joined #gluster
18:36 mrfsl joined #gluster
18:37 gmcwhistler joined #gluster
18:38 mrfsl volume heal <volumename> info --- hangs for a minute, then goes back to gluster prompt. When trying to rerun the command it says there is already one in progress. Help?
18:43 dbruhn mrfs1, what does the status on the heal say?
18:47 lpabon joined #gluster
18:48 joshin joined #gluster
18:48 joshin joined #gluster
18:49 bala joined #gluster
18:51 abyss^ joined #gluster
18:53 ron-slc_ joined #gluster
18:54 sroy_ joined #gluster
19:04 jbrooks left #gluster
19:05 rcaskey hrmm, native performance is ~ 12.5 megs/sec and I'm seeing 7.0 megs a sec on a cluster with only a local brick as a member.... that normal?
19:09 mrfsl dbruhn what do you mean by status on the heal?
19:11 Matthaeus joined #gluster
19:11 dbruhn mrfsl, sorry I misread what you posted earlier
19:12 sroy_ joined #gluster
19:14 semiosis rcaskey: a volume with only a local brick as a member is not normal.  people usually use nfs for that trivial of a setup
19:14 rcaskey semiosis, I know I'm just testing
19:14 semiosis not a valid test then
19:14 semiosis comparing apples to orchards
19:15 rcaskey but i mean, you wouldnt expect it to be faster on multiple hosts would you?
19:15 rcaskey (without striping)
19:15 semiosis http://joejulian.name/blog/dont-get-stuck-micro-engineering-for-scale/
19:15 glusterbot Title: Dont Get Stuck Micro Engineering for Scale (at joejulian.name)
19:15 semiosis rcaskey: single thread performance can't possibly compare to local
19:16 semiosis but with glusterfs you can scale out & get much more aggregate performance with many bricks, servers, and clients, than you ever could with a single server
19:17 semiosis depending on your particular use case/application, with enough bricks & servers your client could saturate its NIC with traffic
19:18 rcaskey semiosis, I just wanted to make sure nothing was out of whack
19:18 rcaskey my test bench setup was reeeeely slow
19:18 rcaskey so i figured i'd try it on bare metal and see if it was because I wasnt paravirting my ipo
19:18 rcaskey err io
19:18 semiosis if your application is a single dd process, that's not going to happen, but if your app is something that does real work on many files, it has a better shot
19:19 andreask joined #gluster
19:19 rcaskey semiosis, and presumably qemu bypassing fuse as well?
19:20 semiosis fuse isn't usually a major bottleneck
19:20 semiosis for performance
19:20 semiosis if you've already scaled up your disks & network and you start getting cpu bound, then it is
19:21 rcaskey I've not, and it won't ever get that _big_, I'm just looking for good old redundancy and am willing to take a performance hit
19:21 semiosis there's other benefits to bypassing it though
19:22 rcaskey like what?
19:23 jbrooks joined #gluster
19:25 failshell joined #gluster
19:30 chirino_m joined #gluster
19:30 semiosis rcaskey: not needing root to access the volume, or any system software installed
19:30 rcaskey hrmm, the light still isn't turning on in my head
19:32 Ark_explorys is http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected the best way to resolve peer rejected?
19:32 glusterbot Title: Resolving Peer Rejected - GlusterDocumentation (at www.gluster.org)
19:32 semiosis if an app is statically linked to libgfapi & uses that to connect then it can run as an unpriv user
19:32 semiosis without needing any installed system libs
19:33 semiosis Ark_explorys: what's another choice?
19:33 rcaskey semiosis, so in that situation is it talkign right to gluster eschewing the need to even mount it?
19:33 Ark_explorys i dont know just updated from 3.3 to 3.4 and my trusted storage pool is in a bad state.
19:34 semiosis rcaskey: right
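That is the libgfapi path being described: a qemu built with the gluster block driver can open images by URL without any mount. A sketch, with server, volume, and image names as placeholders:

    qemu-img create -f qcow2 gluster://gluster1.example.com/myvol/vm.qcow2 20G
    qemu-system-x86_64 -m 2048 -drive file=gluster://gluster1.example.com/myvol/vm.qcow2,if=virtio,cache=none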
19:35 rcaskey ok, i'm astounded by what happened when I added the second node here...
19:35 rcaskey 2 hosts, we shall call them snail and rabbit to help keep their relative disk performance in mind
19:35 semiosis makin me hungry!
19:36 rcaskey started off with a single node on snail, 7.0 megs/second. Add in rabbit and mount gluster on each, pointing to their respective external ip addresses of 10.4.10.20 and 10.4.10.21
19:36 rcaskey snail then writes at 17.6 MB/s, rabbit writes at 9.2MB/s
19:37 rcaskey first, my naive expectation is that they would both write at the same speed since replica=2
19:37 ells joined #gluster
19:37 rcaskey and second, if one was faster, I'd expect it to be rabbit :P?
19:37 rcaskey (granted rabbit probably has at least some workload going on outside it)
19:48 rcaskey my thinking is there are 2 cluster nodes and 2 replicas required so how could one finish faster than the other?
19:48 LessSeen joined #gluster
19:57 ells_ joined #gluster
20:17 denaitre joined #gluster
20:27 DV__ joined #gluster
20:47 Matthaeus joined #gluster
20:53 ells left #gluster
20:58 chirino joined #gluster
21:08 ells joined #gluster
21:10 dneary joined #gluster
21:12 Ark_explorys I have a node stuck in Probe Sent to Peer any advice? I have removed /var/lib/glusterd/ and restarted several times
21:12 ells_ joined #gluster
21:14 Ark_explorys did a detach might fix it
21:17 semiosis Ark_explorys: probe in both directions & restart glusterd on both servers, repeat
21:18 semiosis you weren't supposed to remove all of /var/lib/glusterd -- the file /var/lib/glusterd/glusterd.info should have been left in place
21:18 semiosis if you lost it, you'll need to replace it, using the UUID for that server as known by the other servers (revealed by gluster peer status)
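For reference, /var/lib/glusterd/glusterd.info is a tiny key=value file, so restoring it is roughly a matter of writing back the UUID the other peers already know (the value below is a placeholder):

    # /var/lib/glusterd/glusterd.info
    UUID=<uuid reported for this server by "gluster peer status" on the other nodes>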
21:20 Ark_explorys got it!
21:20 jflilley exit
21:20 jflilley joined #gluster
21:33 Ark_explorys ok got all servers to show up in the trusted storage pool
21:33 Ark_explorys volumes came up good
21:34 Ark_explorys tried mounting and i get read fails no data available
21:34 semiosis check client log file
21:34 semiosis also maybe gluster volume status
21:35 Ark_explorys E [socket.c:2157:socket_connect_finish] 0-pdv-safe-client-2: connection to 10.218.26.118:49155 failed (Connection refused)
21:35 Ark_explorys self-heal daemons are not showing ports, but everything is online and all the bricks have ports
21:36 semiosis connection refused usually means either 1. brick export daemon (glusterfsd proc) is missing, or 2. iptables is rejecting (default on RH/cent iirc)
21:37 semiosis of course other things like ip conflicts or wrong hostname resolution can route the request to the entirely wrong server
21:37 semiosis just tossing out some ideas
21:37 ells joined #gluster
21:37 ells left #gluster
21:37 Ark_explorys selinux is on i restarted
21:37 Ark_explorys disabling it as a first start -_-
21:38 semiosis fine with me
21:38 semiosis to all those who say "Stop disabling SElinux," I say, "Start using Ubuntu"
21:38 semiosis lol
21:38 semiosis (or debian)
21:39 Ark_explorys restarted daemons and restarting the volumes
21:40 Ark_explorys haha
21:41 Ark_explorys did ports change on volumes?
21:41 Ark_explorys I dont have 49155 open in iptables either
21:41 Ark_explorys ill flush it for now
21:41 Ark_explorys backups start in 30 minutes
21:42 Ark_explorys well 15
21:42 semiosis ,,(ports)
21:42 glusterbot glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
21:43 semiosis each server has its own counter of bricks it has served, this counter goes up with each new brick on the server, and never goes down, it is used to assign a port to each brick as it is created
21:43 ells_ joined #gluster
21:52 semiosis @learn brick port as each server has its own counter of bricks it has served, this counter goes up with each new brick on the server, and never goes down, it is used to assign the port to each new brick as it is created
21:52 glusterbot semiosis: The operation succeeded.
21:52 semiosis @port
21:52 glusterbot semiosis: I do not know about 'port', but I do know about these similar topics: 'What ports does glusterfs use for nfs?', 'backport wishlist', 'brick port', 'ports', 'privileged port'
21:52 semiosis @brick port
21:52 glusterbot semiosis: each server has its own counter of bricks it has served, this counter goes up with each new brick on the server, and never goes down, it is used to assign the port to each new brick as it is created
21:52 semiosis @forget brick port
21:52 glusterbot semiosis: The operation succeeded.
21:53 semiosis @learn brick port as each server has its own counter of bricks it has hosted. this counter goes up with each new brick on the server, and never goes down. it is used to assign the port to each new brick as it is created (49152 + counter)
21:53 glusterbot semiosis: The operation succeeded.
21:53 semiosis pretty sure that's correct
21:57 zerick joined #gluster
22:09 ells joined #gluster
22:13 Ark_explorys semiosis:
22:13 Ark_explorys the ports changed in 3.4 so I needed to have firewall rules updated
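For the record, a sketch of iptables rules matching the 3.4 port layout glusterbot listed above; the upper bound of the brick range depends on how many bricks each server has ever hosted, so 49200 here is an arbitrary assumption:

    iptables -I INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management
    iptables -I INPUT -p tcp --dport 49152:49200 -j ACCEPT   # brick daemons (3.4+)
    iptables -I INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS
    service iptables save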
22:14 rossi_ joined #gluster
22:15 semiosis cool
22:21 failshel_ joined #gluster
22:22 Amanda joined #gluster
22:24 chirino joined #gluster
22:34 xymox joined #gluster
22:43 glusterbot New news from newglusterbugs: [Bug 1045309] "volfile-max-fetch-attempts" was not deprecated correctl.. <https://bugzilla.redhat.com/show_bug.cgi?id=1045309>
22:58 sarkis joined #gluster
22:59 NuxRo joined #gluster
23:01 kkeithley_ In between stints shoveling snow....   samba-4.1.3 RPMs with the glusterfs vfs plug-in are now available for RHEL6, CentOS6, etc., at http://download.gluster.org/pub/gluster/glusterfs/samba/
23:01 glusterbot Title: Index of /pub/gluster/glusterfs/samba (at download.gluster.org)
23:03 kkeithley_ Fair warning, this is a quick-and-dirty build, YMMV. Give them a spin. If they work, great, if not, let us know and I'll see what we can do to fix them.
23:03 P0w3r3d joined #gluster
23:04 semiosis cool!
23:04 semiosis or should I say, freezing!
23:06 kkeithley_ well, most of the day my digital outdoor thermometer said it was 32.5; not quite freezing.
23:07 StarBeast joined #gluster
23:09 semiosis you dont wanna know how hot it is here!
23:09 ccha joined #gluster
23:09 kkeithley_ no worries, I was in L.A. a few weeks ago and it was 70+
23:10 semiosis i'm really looking forward to SF in april... time to cool off
23:10 kkeithley_ oops, I missed a systemd Requires: somewhere. I'll have to respin.
23:13 kkeithley_ “The coldest winter I ever spent was a summer in San Francisco.”
23:14 JoeJulian I've got ice on my pool.
23:15 criticalhammer i woke up to -16 this morning
23:15 criticalhammer went outside to clean off my car and snot in my nose was freezing
23:15 JoeJulian I'd go back to bed.
23:27 gdubreui joined #gluster
23:35 ells joined #gluster
23:37 ells left #gluster
23:50 badone_ joined #gluster
