IRC log for #gluster, 2014-07-10

All times shown according to UTC.

Time Nick Message
00:08 cjanbanan joined #gluster
00:11 rolfb joined #gluster
00:11 rolfb hi all, seems like http://www.gluster.org/blog/ is having some issues. anyone looking into that?
00:11 glusterbot Title: GlusterFS 3.5 Unveiled | Gluster Community Website (at www.gluster.org)
00:15 rolfb johnmark: you awake?
00:20 edwardm61 joined #gluster
00:26 rolfb JustinClift: I don't suppose you are awake? :-)
00:31 Eco_ rolfb, what issues
00:31 plarsen joined #gluster
00:31 rolfb Eco_: from here it looks like stylesheets and assets are gone
00:31 Eco_ rolfb, did you try gluster.org/blog?
00:31 Eco_ oops you did
00:32 rolfb Eco_ :-)
00:32 Eco_ did you mean the stylesheets from the new site?
00:33 rolfb Eco_: try clicking any article
00:33 rolfb do you get a 404?
00:33 Eco_ indeed i did
00:33 rolfb for me it looks like everything but the index.php file is gone atm
00:34 rolfb which caused havoc on some other part of the web ;-)
00:34 JoeJulian stylesheet for that page is looking under wp-content
00:34 JoeJulian javascript under wp-includes
00:34 rolfb it broke a little over an hour ago
00:34 JoeJulian that's when he switched over
00:35 Eco_ rolfb, thats when we rolled out the new site but specifically we didn't touch the blog
00:35 Eco_ so not sure what is going on
00:35 rolfb aha
00:35 rolfb is everything on .org on wordpress?
00:35 rolfb or just /blog
00:36 Eco_ just blog and the legacy wiki
00:36 rolfb any permissions change?
00:36 rolfb can you still see the files on disk?
00:36 rolfb I saw a varnish error for a little while
00:37 Eco_ varnish error was when we disabled directory listing
00:37 Eco_ that should have only lasted a minute or two though
00:37 rolfb yeah, i got lucky then
00:37 rolfb ;-)
00:37 Eco_ lol
00:39 rolfb Eco_: but you have the files on disk still?
00:40 Eco_ rolfb, yes nothing changed there but we did disable directory listing for apache so that might be an issue here
00:40 Eco_ checking it now
00:40 JoeJulian looks like a redirect or an alias that's not happening
00:41 gildub joined #gluster
00:49 Eco_ JoeJulian, agreed but not sure why, rolled back the site
00:49 Eco_ blog is working again now
00:53 rolfb need to sleep, good luck on figuring it out
01:08 cjanbanan joined #gluster
01:19 bala joined #gluster
01:20 gmcwhistler joined #gluster
01:23 pureflex joined #gluster
01:45 Peter3 i have a problem that file delete via NFS does not release the quota
01:45 Peter3 thus df and du seems not matching :(
01:45 vpshastry joined #gluster
01:47 Peter3 is there a way i can tell the current open file on an NFS export from gluster?
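[No answer followed in the channel. For reference, one way to inspect what is held open behind a volume is the status/top commands; the volume name below is a placeholder, and whether gluster-NFS clients show up depends on the release, so treat this as a starting point only.]

    # open file descriptors, per brick, as tracked by the brick daemons
    gluster volume status myvol fd
    # most-frequently-opened files; check `gluster volume help` for an nfs variant in your release
    gluster volume top myvol open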
01:52 bala joined #gluster
02:16 SpComb joined #gluster
02:17 codex joined #gluster
02:37 plarsen joined #gluster
02:40 harish_ joined #gluster
02:42 Peter3 really need help on this quota not releasing space after file delete :(
02:43 Peter3 i umount and remount the NFS from gluster and still not releasing space
02:45 sjm joined #gluster
02:46 haomaiwa_ joined #gluster
02:50 Eco_ joined #gluster
03:01 haomaiw__ joined #gluster
03:05 Bullardo joined #gluster
03:08 cjanbanan joined #gluster
03:16 jbrooks joined #gluster
03:18 MacWinner joined #gluster
03:24 pureflex joined #gluster
03:26 vpshastry joined #gluster
03:31 bala joined #gluster
03:34 itisravi joined #gluster
03:46 ppai joined #gluster
03:53 kdhananjay joined #gluster
03:53 hagarth1 joined #gluster
03:54 JoeJulian @later tell Peter3 If you have the steps to reproduce the problem you're seeing with quota, please file a bug report.
03:54 glusterbot JoeJulian: The operation succeeded.
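[For anyone hitting the same df/du mismatch Peter3 describes, a rough way to collect the numbers for such a bug report — volume name and mount path are placeholders:]

    # usage as the quota translator sees it, per limited path
    gluster volume quota myvol list
    # versus what the client mount reports
    df -h /mnt/myvol
    du -sh /mnt/myvol/some-dir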
03:55 nbalachandran joined #gluster
03:57 atinmu joined #gluster
03:58 KORG_ joined #gluster
03:58 glusterbot New news from resolvedglusterbugs: [Bug 764655] NetBSD port <https://bugzilla.redhat.com/show_bug.cgi?id=764655>
04:00 sputnik1_ joined #gluster
04:01 MacWinner joined #gluster
04:08 vpshastry joined #gluster
04:11 RameshN_ joined #gluster
04:11 RameshN joined #gluster
04:12 shubhendu joined #gluster
04:17 sputnik1_ joined #gluster
04:33 ramteid joined #gluster
04:33 Bullardo_ joined #gluster
04:34 bharata-rao joined #gluster
04:38 cjanbanan joined #gluster
04:38 RameshN joined #gluster
04:42 sahina joined #gluster
04:43 nishanth joined #gluster
04:49 kanagaraj joined #gluster
04:50 rjoseph joined #gluster
04:54 vpshastry joined #gluster
04:55 ndarshan joined #gluster
05:08 cjanbanan joined #gluster
05:11 coredump joined #gluster
05:12 RameshN_ joined #gluster
05:13 Philambdo joined #gluster
05:14 prasanth joined #gluster
05:15 psharma joined #gluster
05:15 RameshN_ joined #gluster
05:17 cultavix joined #gluster
05:19 Pupeno joined #gluster
05:21 coredump joined #gluster
05:26 kshlm joined #gluster
05:26 NCommander joined #gluster
05:35 spandit joined #gluster
05:37 Peter1 joined #gluster
05:40 davinder16 joined #gluster
05:41 karnan joined #gluster
05:45 cultavix joined #gluster
05:55 sputnik1_ joined #gluster
05:56 hagarth joined #gluster
06:18 rgustafs joined #gluster
06:26 sauce joined #gluster
06:38 cjanbanan joined #gluster
06:41 haomaiwang joined #gluster
06:51 Pupeno GlusterFS 3.5 now also failing to mount in Ubuntu 14.04? *sigh*
06:55 ekuric joined #gluster
07:01 davinder16 joined #gluster
07:01 eseyman joined #gluster
07:04 kdhananjay joined #gluster
07:04 saurabh joined #gluster
07:04 lalatenduM joined #gluster
07:05 cjanbanan joined #gluster
07:05 marbu joined #gluster
07:06 Peter1 what do u mean failing to mount?
07:06 ctria joined #gluster
07:08 keytab joined #gluster
07:12 sputnik1_ joined #gluster
07:12 haomaiw__ joined #gluster
07:20 hagarth joined #gluster
07:26 pureflex joined #gluster
07:35 cultavix joined #gluster
07:49 sac`away joined #gluster
07:51 andreask joined #gluster
07:54 harish__ joined #gluster
08:02 pvh_sa joined #gluster
08:04 ricky-ticky joined #gluster
08:14 n0de_ joined #gluster
08:23 gehaxelt Is it possible to create a directory/brick with a specific size?
08:29 ghenry joined #gluster
08:29 ghenry joined #gluster
08:30 Norky well, yeah, create an (XFS/EXT4) filesystem of that size, create a directory within it and use that as your brick
08:35 doekia joined #gluster
08:35 doekia_ joined #gluster
08:38 gehaxelt Norky, ok, and if I can't create a separate partition?
08:39 gehaxelt okay, I could create a file with dd of that size and mount it using loopback?
08:42 Norky that's an option I suppose
08:42 Norky what are you trying to achieve?
08:43 gehaxelt Norky, I wanted to rent a small VPS (80gb hdd for 15$/yr) and set up replication with some other nodes.
08:43 gehaxelt As far as I know I can't create new partitions in an openVZ container...
08:44 Norky hmm, replicating *to* the VPS, or from it?
08:45 gehaxelt Norky, to it
08:46 Norky well if you control the size of the 'local' bricks, then gluster should not exceed that size
08:46 Norky e.g. have a local brick of ~70GB, no more than 70GB should be used on the VPS
08:47 gehaxelt hmm, okay
08:47 gehaxelt I heard that having bricks of different size can become a hassle...
08:47 Norky eh?
08:48 gehaxelt so e.g. having a brick with 100gb on one server
08:48 Norky oh, I see
08:48 gehaxelt and a 80gb brick on the vps
08:48 gehaxelt for example.
08:48 Norky read what I said
08:48 gehaxelt :)
08:48 Norky have a local brick of *less* than the total size available on the VPS
08:49 meghanam joined #gluster
08:50 gehaxelt Okay, but then I would have to limit the brick-size on every clients which connects to the volume, right?
08:50 Norky clients don't have bricks
08:50 Norky servers have bricks
08:51 gehaxelt ah, right.
08:51 meghanam_ joined #gluster
08:52 Norky the gluster volume would be limited by the size of the smallest brick which participates in it, in this case, something less than 80GB
08:52 gehaxelt okay :)
08:53 Norky so you'll have a volume of total size around, lets say, 75GB
08:54 gehaxelt In that case: Setting a quota limit on the volume of 75gb should be enough?
08:54 Norky you might want to have the VPS be a geo-replication 'target' rather than a simpler member of a gluster cluster
08:54 Norky should be, aye
08:55 Norky if the VPS is a normal brick, then clients will be writing directly to it as well as the other bricks, which might hurt perfermance
08:55 Norky performance*
08:56 vpshastry joined #gluster
08:56 Norky however, as the data is only going one way, you can use gluster's geo-replication, which might fit better with your intended use
08:56 gehaxelt okay, I have to read a bit about geo-replication
08:57 gehaxelt So all clients will connect to the master node which write backs the data to the slaves?
08:59 gehaxelt (PS: At the website admin. The google-links are broken. Searching for "gluster geo replication" pops the following link http://www.gluster.org/category/geo-replication/ which results in a 404). I think that should be fixed oO
08:59 Norky master nodes* plural (unless you're using a single Gluster server, in which case I wonder what the point of using gluster is at all)
09:00 gehaxelt Norky, okay. Yeah I'll have multiple master nodes. I think I'm going to read through the docs first.
09:00 gehaxelt Thanks for your help! :)
09:00 Norky no worries
09:01 Norky I'm not an expert myself btw, so other folks may be able to offer more detail :)
09:01 vimal joined #gluster
09:02 gehaxelt :)
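[A minimal sketch of the loopback-brick idea gehaxelt floats above, combined with the quota cap Norky suggests. Host names, paths and sizes are invented for illustration, and on an OpenVZ VPS loop devices may not be available at all, so this is only a sketch:]

    # on the VPS: a fixed-size filesystem image, loop-mounted, used as the brick
    truncate -s 75G /srv/brick.img        # sparse; use dd if=/dev/zero for a fully allocated file
    mkfs.xfs /srv/brick.img
    mkdir -p /bricks/vps && mount -o loop /srv/brick.img /bricks/vps
    mkdir /bricks/vps/brick

    # from a server in the trusted pool: replicate to it and cap usage below the smallest brick
    gluster volume create myvol replica 2 server1:/bricks/local/brick vps1:/bricks/vps/brick
    gluster volume quota myvol enable
    gluster volume quota myvol limit-usage / 75GB

[If the VPS ends up as a geo-replication target instead, as Norky suggests, the 3.5-era sequence is roughly "gluster volume geo-replication <master-vol> <slave-host>::<slave-vol> create push-pem" followed by "start", but check the geo-replication docs for the ssh key setup first.]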
09:03 Slashman joined #gluster
09:06 necrogami joined #gluster
09:06 Intensity joined #gluster
09:06 atinmu joined #gluster
09:15 purpleidea joined #gluster
09:15 purpleidea joined #gluster
09:17 pvh_sa joined #gluster
09:18 siel joined #gluster
09:22 pvh_sa yeah I'm new to gluster but from what I can see about how it works I wouldn't work with different sized bricks if I could avoid it
09:25 stickyboy Yah, I'd say add replica bricks in similar-size pairs...
09:26 pureflex joined #gluster
09:27 haomaiwa_ joined #gluster
09:33 marmalodak joined #gluster
09:35 ppai joined #gluster
09:40 qdk joined #gluster
09:45 Norky for the 'main' set of servers, most certainly, but having a geo-rep. target of potentially slightly larger size should not be a problem
09:46 pvh_sa so are there disaster recovery recommendations for a glusterfs store? I mean... what's crucial? what should I store so I can quickly build a replacement node when one dies? /etc and the list of installed RPMs?
09:53 karnan joined #gluster
10:04 Thilam|work joined #gluster
10:06 haomai___ joined #gluster
10:13 jezier joined #gluster
10:13 gehaxelt joined #gluster
10:13 DV joined #gluster
10:14 coredumb joined #gluster
10:16 stickyboy pvh_sa: I have yet to get into disaster recovery... I still need to get the hang of the geo-replication stuff.
10:16 stickyboy I dunno if I can use geo-replication on bricks where I'm already using replica.
10:20 pvh_sa with our new SANReN (10 Gb between all major universities in South Africa) I think geo-replication is coming our way too...
10:33 calum_ joined #gluster
10:36 atinmu joined #gluster
10:38 bala1 joined #gluster
10:40 tty00 joined #gluster
10:53 stickyboy pvh_sa: We do it for the corporate NetApp over a fiber link, apparently.
10:55 fyxim_ joined #gluster
11:17 stickyboy Anyone have advice about Infiniband hardware?  NICs, switches, etc?  I need to think about moving from 10GbE to Infiniband and not sure where to start.
11:26 LebedevRI joined #gluster
11:27 pureflex joined #gluster
11:30 hagarth joined #gluster
11:32 RameshN joined #gluster
11:33 RameshN_ joined #gluster
11:42 calum_ joined #gluster
11:42 bala joined #gluster
11:58 pvh_sa stickyboy, I know people that use the Mellanox stuff... seems to work well... got any specific questions?
11:58 rolfb joined #gluster
11:59 stickyboy pvh_sa: There was some talk on the mailing list last July, JustinClift mentioned some NICs he got for $100!  And a switch for $500 or so.
11:59 stickyboy Seems too good to be true.
11:59 pvh_sa yes that DOES seem too good to be true
12:00 stickyboy pvh_sa: I have the thread in my Thunderbird, but lemme get the web archive.
12:00 mbukatov joined #gluster
12:00 stickyboy pvh_sa: http://www.gluster.org/pipermail/gluster-users/2013-July/036412.html
12:00 glusterbot Title: [Gluster-users] working inifiniband HW ?? (at www.gluster.org)
12:01 stickyboy Oh, the prices were $ already, thought they were GBP.  Even cheaper. :D
12:01 hagarth joined #gluster
12:02 pvh_sa I've just cc'ed you into an email with one of the admins that I know that works at the CHPC, where they use a load of Infiniband stuff... hopefully they'll have useful input
12:02 stickyboy pvh_sa: Awesome.  Thanks.
12:05 mjfork joined #gluster
12:06 mjfork Can anyone point me to information on using Gluster as VMware / ESXi backing store?
12:10 rolfb left #gluster
12:17 pdrakeweb joined #gluster
12:18 ppai joined #gluster
12:20 cjanbanan What about the question of which brick is used for read access in a replicated volume? How is the decision taken?
12:21 torbjorn__ cjanbanan: AFAIK, the node that answers first will be the primary node for that file operation
12:23 cjanbanan So each read call is sent to all bricks, but all but the first reply are ignored?
12:23 glusterbot New news from newglusterbugs: [Bug 1118311] After enabling nfs.mount-udp mounting server:/volume/subdir fails <https://bugzilla.redhat.com/show_bug.cgi?id=1118311>
12:23 harish__ joined #gluster
12:25 torbjorn__ cjanbanan: something like that, I guess .. I think only the open() call goes to all replicas, then the first node that answers becomes the target for that file descriptors read() operations .. although I guess split-brain detection has to happen in there somewhere as well
12:25 pdrakeweb joined #gluster
12:25 torbjorn__ cjanbanan: as you can see, lots of guessing going on in my answer there .. it's been a while since I looked into the internals
12:26 chirino joined #gluster
12:29 lalatenduM joined #gluster
12:29 edward1 joined #gluster
12:31 cjanbanan OK, thanks. I've got some strange benchmark results that I'm trying to figure out an explanation for. That's why I ask. When reading large files I get higher read performance from a replicated volume than from a single brick glusterfs volume. It doesn't make any sense to me. When reading small files the performance is the same, which I would have expected.
12:33 cjanbanan But if multiple bricks can be involved it could make sense. However, the profiling tool doesn't show any evidence of read() calls being sent to more than one brick.
12:34 bene2 joined #gluster
12:38 japuzzo joined #gluster
12:52 ppai joined #gluster
12:54 theron joined #gluster
12:59 bennyturns joined #gluster
13:00 hagarth joined #gluster
13:02 andreask joined #gluster
13:04 stickyboy No. of entries healed: 1037739      ...     No. of heal failed entries: 514
13:04 stickyboy Not too bad.
13:04 stickyboy Still struggling to catch up after a brick failure two weeks ago.
13:06 marbu joined #gluster
13:06 irated joined #gluster
13:11 julim joined #gluster
13:11 vimal joined #gluster
13:14 kshlm joined #gluster
13:14 kshlm joined #gluster
13:20 rwheeler joined #gluster
13:28 julim joined #gluster
13:28 pureflex joined #gluster
13:30 julim joined #gluster
13:36 gmcwhistler joined #gluster
13:36 vpshastry joined #gluster
13:42 sjm joined #gluster
13:49 ctria joined #gluster
14:01 bala joined #gluster
14:02 jobewan joined #gluster
14:02 julim_ joined #gluster
14:04 tdasilva joined #gluster
14:05 davinder16 joined #gluster
14:14 bene2 joined #gluster
14:19 itisravi joined #gluster
14:22 mortuar joined #gluster
14:28 Pupeno_ joined #gluster
14:29 theron joined #gluster
14:33 shubhendu joined #gluster
14:35 qdk joined #gluster
14:37 rjoseph joined #gluster
14:38 bene3 joined #gluster
14:46 ambish joined #gluster
14:46 jbrooks joined #gluster
14:47 torbjorn__ cjanbanan: maybe the read-ahead feature works better on the large files ? .. you could disable the feature and test again to confirm
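[In case it helps with cjanbanan's benchmark, toggling the translator torbjorn__ mentions is a one-liner; the volume name is a placeholder, and cluster.read-hash-mode is an extra knob worth verifying against your version's `gluster volume set help` output before relying on it.]

    # disable client-side read-ahead and re-run the large-file read test
    gluster volume set myvol performance.read-ahead off
    # optionally influence which replica serves reads; availability and values vary by release
    gluster volume set myvol cluster.read-hash-mode 2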
14:48 ambish ok, I needed to extend the fs on the two bricks that compose a replicated volume. So I did "gluster volume myvol remove-brick brick1", extended the fs, and now I'm trying to readd it but it fails with "volume add-brick: failed:" and in the log I see "Changing the type of volume myvol from 'distribute' to 'replica'"
14:48 ambish the data is still intact on brick1, any idea how to readd it?
14:49 ambish I'm doing "gluster volume add-brick myvol  brick1"
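[ambish's failure looks like the missing replica count: adding a brick that turns the volume back from distribute into replica has to restate it. A hedged sketch, with host and path invented; the xattr step only applies if gluster then complains the path is already part of a volume, and should be double-checked before touching a brick with live data:]

    # the replica count has to be restated when the brick goes back in
    gluster volume add-brick myvol replica 2 server1:/export/brick1
    # if it complains the path is "already part of a volume", this xattr on the brick root is what it checks
    getfattr -m . -d -e hex /export/brick1
    # setfattr -x trusted.glusterfs.volume-id /export/brick1   # only if you are sure
    gluster volume heal myvol full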
14:59 deepakcs joined #gluster
15:03 itisravi joined #gluster
15:28 nishanth joined #gluster
15:29 pureflex joined #gluster
15:34 rjoseph joined #gluster
15:34 Peter1 joined #gluster
15:36 lmickh joined #gluster
15:41 RameshN joined #gluster
15:41 RameshN_ joined #gluster
15:47 kshlm joined #gluster
15:56 nbalachandran joined #gluster
16:06 _Bryan_ joined #gluster
16:09 rjoseph joined #gluster
16:14 systemonkey joined #gluster
16:24 jbrooks joined #gluster
16:24 lpabon_test joined #gluster
16:26 Mo__ joined #gluster
16:40 vpshastry joined #gluster
16:43 cfeller joined #gluster
16:45 ndk joined #gluster
16:56 lpabon_test joined #gluster
17:01 rjoseph joined #gluster
17:11 pvh_sa joined #gluster
17:30 pureflex joined #gluster
17:33 zerick joined #gluster
17:43 cjanbanan joined #gluster
17:43 ron-slc joined #gluster
17:50 richvdh joined #gluster
17:54 bene3 joined #gluster
17:55 calum_ joined #gluster
18:12 Peter1 joined #gluster
18:17 cjanbanan joined #gluster
18:25 glusterbot New news from newglusterbugs: [Bug 1118453] run-tests does not handle running single tests <https://bugzilla.redhat.com/show_bug.cgi?id=1118453>
18:38 cjanbanan joined #gluster
18:41 theron joined #gluster
18:55 glusterbot New news from newglusterbugs: [Bug 1105283] Failure to start geo-replication. <https://bugzilla.redhat.com/show_bug.cgi?id=1105283>
18:56 sonicrose joined #gluster
18:56 sonicrose joined #gluster
18:57 sonicrose hiyas glirc! what's the proper way to gracefully shutdown a brick server so that the clients switch to the replicate as seamlessly as possible?
18:58 sonicrose i do service glusterd stop  but that doesn't end the glusterfs processes
18:58 sonicrose i dunno if they're still busy... what if i just do shutdown -h now in the OS is that OK ?
19:01 semiosis sonicrose: fuse clients are connected to all replicas all the time
19:02 sonicrose right on...  so i want to gracefully shutdown one of my servers that has replicas elsewhere... sometimes i find just doing shutdown -h now on the server makes the fuse clients hang sometimes for a few seconds, sometimes for 30+ seconds, sometime indefinitely
19:02 semiosis sonicrose: you can kill the glusterfsd process (brick export daemon) or firewall its tcp port to cut it off from clients.  if you kill -1 or firewall with a reject then the clients should stop using it immediately.  If you kill -9 or firewall with a drop then clients will probably hang for the ping-timeout delay before giving up on the brick
19:03 semiosis sonicrose: if the network gets cut off before the processes are sent the kill signal that would happen
19:03 sonicrose semiosis, thanks, if i stop the network, will the glusterfs processes exit once there are no more connections?
19:03 semiosis no
19:04 stickyboy Crap.  I think my XFS stripe unit is wrong.  Maybe that's why my performance is crap -- all my IO operations are misaligned.
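[For stickyboy's alignment worry: the stripe geometry XFS was created with is visible without downtime (path is a placeholder); sunit/swidth in the output are expressed in filesystem blocks.]

    # show the stripe unit / stripe width of the brick filesystem
    xfs_info /bricks/brick1
    # compare against the RAID layout; e.g. a 256KiB chunk across 10 data disks would have been
    # created with something like: mkfs.xfs -d su=256k,sw=10 ...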
19:04 sonicrose is killall glusterfs safe?
19:04 semiosis sonicrose: see ,,(processes)
19:04 glusterbot sonicrose: The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal).
19:05 semiosis 'killall glusterfs' would unmount any fuse clients (forcibly) and stop a gluster-nfs server
19:05 sonicrose is killall glusterfsd safe?
19:05 sonicrose how about service glusterfsd stop
19:06 semiosis killall glusterfsd should gracefully stop all the bricks on a server, that's probably what you want
19:06 semiosis i dont know about service glusterfsd stop.  we dont have that on ubuntu/debian, i dont know about other distros
19:06 sonicrose ah i did service glusterfsd stop and the fsd processes are gone, now just 2 glusterfs processes :D
19:07 sonicrose and it appears my VMs didnt even notice
19:07 sonicrose thats cool
19:07 sonicrose centos rpms for gluster come with a glusterfsd in /etc/init.d
19:08 sonicrose service command in centos is just a shortcut for doing /etc/init.d/glusterfsd stop
19:09 semiosis well there you go
19:11 cjanbanan joined #gluster
19:12 sonicrose so i understand then why just doing shutdown -h now causes the hang... the clients will wait for ping timeout if it doesn't get a REJECT, so better to stop gluster before shutting down
19:12 sonicrose in case of a whole server unexpectedly failing, i have set my ping-timeout to 4 seconds and frame timeout to 8
19:13 sonicrose seems that if VMs lose their storage for the whole 42 seconds default that they may switch their / mount to RO
19:13 sonicrose an 8 second hang it seems the VMs can tolerate a little better
19:21 semiosis sonicrose: shutdown -h can cause the hang depending on the order services are stopped.  if the glusterfsd processes are stopped before the network interface, then no hang
19:21 semiosis you could set the glusterfsd service to be stopped before networking
19:21 semiosis make a symlink, something like this... ln -s ../init.d/glusterfsd /etc/rc3.d/K01glusterfsd
19:21 semiosis IIRC*
19:22 semiosis that would kill the glusterfsd service before anything else
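[Pulling semiosis's advice together, a rough shutdown sequence for a replicated brick server might look like this. The init-script name follows the CentOS packaging sonicrose describes; the K-number/runlevels and the brick port are guesses to verify on your own boxes.]

    # stop the brick daemons first so clients see a clean disconnect instead of waiting for ping-timeout
    service glusterfsd stop       # CentOS RPM init script, as sonicrose found above
    service glusterd stop
    shutdown -h now

    # or order it automatically for halt/reboot, per semiosis's symlink suggestion
    ln -s ../init.d/glusterfsd /etc/rc0.d/K01glusterfsd
    ln -s ../init.d/glusterfsd /etc/rc6.d/K01glusterfsd

    # alternative from the discussion above: reject the brick port so clients fail over at once
    iptables -I INPUT -p tcp --dport 49152 -j REJECT   # bricks default to ports 49152+ in 3.4/3.5

    # sonicrose's timeout tuning, for reference (seconds)
    gluster volume set myvol network.ping-timeout 4
    gluster volume set myvol network.frame-timeout 8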
19:28 chirino joined #gluster
19:31 pureflex joined #gluster
19:41 cjanbanan joined #gluster
19:58 andreask joined #gluster
19:59 pvh_sa joined #gluster
20:04 rwheeler joined #gluster
20:08 cjanbanan joined #gluster
20:16 diegows joined #gluster
20:23 chirino joined #gluster
20:32 jiffe98 should a 3.3.1 client be able to connect to a 3.4.2 server?  It was working but now I'm getting Client 10.251.188.64:1019 (1 -> 1) doesn't support required op-version (2). Rejecting volfile request.
20:33 dtrainor joined #gluster
20:40 semiosis jiffe98: no
20:40 semiosis jiffe98: just my opinion
20:41 zerick joined #gluster
20:42 jiffe98 why not?  wouldn't that be required for a rolling upgrade?
20:43 semiosis maybe
20:43 JoeJulian @op-version
20:43 cjanbanan joined #gluster
20:45 twx joined #gluster
20:47 semiosis @3.4 upgrade notes
20:47 glusterbot semiosis: http://vbellur.wordpress.com/2013/07/15/upgrading-to-glusterfs-3-4/
20:54 chirino joined #gluster
20:57 JoeJulian @learn op-version as The operating version represents the RPC and translator capabilities that should be agreed upon by the servers ( http://gluster.org/community/documentation/index.php/OperatingVersions ).  The clients are not part of this negotiation. To allow older version clients to connect to newer servers, reset any volume options that require the newer op-version.
20:57 glusterbot JoeJulian: The operation succeeded.
20:57 JoeJulian @forget op-version
20:57 glusterbot JoeJulian: The operation succeeded.
20:58 JoeJulian @learn op-version as The operating version represents the RPC and translator capabilities required to accommodate the volume settings ( http://gluster.org/community/documentation/index.php/OperatingVersions ). To allow older version clients to connect to newer servers, reset any volume options that require the newer op-version.
20:58 glusterbot JoeJulian: The operation succeeded.
21:04 semiosis that makes sense
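[In practice the fix JoeJulian describes would look something like this on the 3.4.2 servers; the volume name and "some.newer-option" are placeholders, and `gluster volume info` shows which options have been reconfigured and are pushing the op-version up.]

    # see which non-default options the volume carries
    gluster volume info myvol
    # reset the ones that require the newer op-version so 3.3 clients can fetch the volfile again
    gluster volume reset myvol some.newer-option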
21:08 sage joined #gluster
21:09 cjanbanan joined #gluster
21:09 sonicrose im running a gluster VM with 3 vCPUs... when i access it via NFS and transferring large sequential IO, only Core 0 maxes out to 100%, and the other 2 vCPUs are idle.  is there a way to balance this
21:10 sonicrose i added performance.nfs.io-threads: on
21:10 sonicrose no change
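[If the io-threads pool is the bottleneck, the thread count is a separate knob from the on/off switch sonicrose already set; whether it helps a single sequential NFS stream is another question, since one transfer can still serialize on one thread. Volume name is a placeholder.]

    gluster volume set myvol performance.nfs.io-threads on
    gluster volume set myvol performance.io-thread-count 32   # default is 16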
21:17 Peter3 joined #gluster
21:21 qdk joined #gluster
21:24 cristov joined #gluster
21:24 chirino joined #gluster
21:31 Eco__ joined #gluster
21:32 pureflex joined #gluster
21:33 Eco__ JoeJulian, responded to your concern on list but hopped on to answer questions in real time
21:34 Eco__ if needed of course, happy to keep it in list but clarity comes from direct communication
21:49 andreask joined #gluster
21:55 cjanbanan joined #gluster
21:55 glusterbot New news from newglusterbugs: [Bug 1116150] [DHT:REBALANCE]: Rebalance failures are seen with error message " remote operation failed: File exists" <https://bugzilla.redhat.com/show_bug.cgi?id=1116150>
22:04 siel joined #gluster
22:04 siel joined #gluster
22:13 bennyturns joined #gluster
22:23 Edddgy joined #gluster
22:26 Edddgy QQ about docs.. looks like the site was recently updated, google search for "glusterfs release notes" points to www.gluster.org/docs/ which returns 404.. /documentation/ works though, and I can't find the versions breakdown and the release notes per version
22:26 Edddgy any suggestions?
22:33 abyss__ joined #gluster
22:33 weykent joined #gluster
22:33 mibby joined #gluster
22:33 silky joined #gluster
22:33 tom[] joined #gluster
22:33 k3rmat joined #gluster
22:33 klaas joined #gluster
22:33 jbrooks joined #gluster
22:33 simulx joined #gluster
22:33 chirino joined #gluster
22:33 bennyturns joined #gluster
22:33 fuz1on joined #gluster
22:34 muhh joined #gluster
22:35 tty00 joined #gluster
22:35 T0aD joined #gluster
22:36 _NiC joined #gluster
22:36 oxidane joined #gluster
22:36 wgao joined #gluster
22:36 msciciel joined #gluster
22:36 NCommander joined #gluster
22:36 nixpanic_ joined #gluster
22:36 sputnik13 joined #gluster
22:36 marcoceppi joined #gluster
22:36 osiekhan3 joined #gluster
22:36 pasqd joined #gluster
22:36 JordanHackworth joined #gluster
22:36 fubada joined #gluster
22:36 eightyeight joined #gluster
22:36 Nopik joined #gluster
22:36 Georgyo joined #gluster
22:36 life_coach joined #gluster
22:36 semiosis joined #gluster
22:36 Alex joined #gluster
22:36 yosafbridge joined #gluster
22:36 m0zes joined #gluster
22:36 johnmwilliams__ joined #gluster
22:36 crashmag joined #gluster
22:36 fim joined #gluster
22:36 al joined #gluster
22:36 ackjewt joined #gluster
22:36 samppah joined #gluster
22:36 sman joined #gluster
22:38 neoice joined #gluster
22:38 cfeller_ joined #gluster
22:38 DanF_ joined #gluster
22:38 msvbhat_ joined #gluster
22:38 firemanxbr joined #gluster
22:38 foster joined #gluster
22:38 the-me joined #gluster
22:38 jiqiren joined #gluster
22:39 Intensity joined #gluster
22:39 dblack joined #gluster
22:40 cjanbanan joined #gluster
22:43 anotheral joined #gluster
22:43 sjm left #gluster
22:45 pvh_sa joined #gluster
22:52 Gugge joined #gluster
23:08 cjanbanan joined #gluster
23:10 plarsen joined #gluster
23:26 Edddgy joined #gluster
23:28 carrar joined #gluster
23:37 Edddgy joined #gluster