
IRC log for #gluster, 2014-02-27


All times shown according to UTC.

Time Nick Message
00:05 cjanbanan joined #gluster
00:05 jiqiren JoeJulian: just saw your glusterfs-splitbrain github/blog, although I hope to never need to use it - looks great!
00:05 JoeJulian Thanks
00:08 kmai007 joined #gluster
00:14 JoeJulian jiqiren: Is your web site supposed to work?
00:15 Matthaeus joined #gluster
00:17 jiqiren which one? :)
00:18 jiqiren i think right now it is just an nginx welcome screen
00:18 lava left #gluster
00:19 sroy_ joined #gluster
00:48 ilbot3 joined #gluster
00:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
00:53 elyograg if I have an entry in my heal info for the same file on both copies of the brick, but can't see anything wrong with either brick, what do I do?
00:53 elyograg stat via the mount didn't fix it.
00:53 sprachgenerator joined #gluster
00:54 JoeJulian Check the client log
00:54 JoeJulian heal $vol info heal-failed probably has an entry
00:55 elyograg nothing there.
00:55 elyograg client log i assume would be on the machine I'm mounting NFS from?
00:56 JoeJulian I've seen that caused by filesystem corruption (ext4), by a filesystem going read-only due to a controller issue, and by one brick having a directory and another having a file of the same name.
00:56 JoeJulian elyograg: If you're mounting nfs, then it will be the server you've mounted from. The log would be nfs.log.
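For reference, the heal-status commands being discussed look like this in GlusterFS 3.3/3.4 (VOLNAME is a placeholder):

    # entries that still need healing
    gluster volume heal VOLNAME info
    # entries the self-heal daemon failed to heal
    gluster volume heal VOLNAME info heal-failed
    # entries detected as split-brain
    gluster volume heal VOLNAME info split-brain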
00:58 elyograg I [afr-self-heal-data.c:655:afr_sh_data_fix] 0-mdfs-replicate-7: no active sinks for performing self-heal on file <gfid:3f629e64-ccf0-4d9b-b7ba-1782f8abeb2b>/WM/wmphotos/docs/077/296
00:58 elyograg that turned out ugly.
00:58 JoeJulian That claims that there's a file that needs healed, but the place it's supposed to heal TO is missing.
00:59 JoeJulian Check the ,,(extended attributes) on the bricks that host that file.
00:59 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
00:59 JoeJulian heal direction is source -> sink
01:02 elyograg file attributes look identical on both bricks.
01:02 seapasulli joined #gluster
01:02 elyograg ls -al looks the same too, on the bricks.
01:03 gmcwhistler joined #gluster
01:04 elyograg if I do a getfattr of the gfid in .glusterfs, it looks identical to the other getfattr.
01:04 Matthaeus joined #gluster
01:05 JoeJulian Yeah, the trusted.afr attributes are probably of much more interest.
01:05 elyograg stat of the real filename and the gfid shows they are the same inode.
01:07 elyograg that's the case on both bricks.
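The gfid hardlink elyograg is inspecting lives under the brick's .glusterfs directory, pathed by the first two byte-pairs of the gfid. A sketch, with a hypothetical brick path:

    BRICK=/bricks/mdfs7            # hypothetical brick path
    GFID=3f629e64-ccf0-4d9b-b7ba-1782f8abeb2b
    getfattr -m . -d -e hex "$BRICK/.glusterfs/3f/62/$GFID"
    stat "$BRICK/.glusterfs/3f/62/$GFID"   # should show the same inode as the real file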
01:17 Matthaeus joined #gluster
01:25 elyograg the trusted.afr look the same.
01:26 elyograg http://fpaste.org/80773/46438813/
01:26 glusterbot Title: #80773 Fedora Project Pastebin (at fpaste.org)
01:27 elyograg same if I use the real filename. as already established, in each case the inode is the same, so that's not surprising.
01:27 JoeJulian right
01:31 JoeJulian elyograg: That's split-brain. Note the non-zero value for both replicas on both servers. Each wants to heal the other one.
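Made-up values to illustrate the point (not the contents of the actual paste): the first 8 hex digits of each trusted.afr counter track pending data operations. When both bricks carry non-zero pending counters for each other, each replica claims to be the source and AFR cannot pick one:

    # identical on both bricks of the replica pair (clients 14 and 15 for replicate-7)
    trusted.afr.mdfs-client-14=0x000000010000000000000000
    trusted.afr.mdfs-client-15=0x000000010000000000000000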
01:31 cjanbanan joined #gluster
01:34 tokik joined #gluster
01:34 DV__ joined #gluster
01:35 elyograg ok.  so what do i do to fix?  do i just remove the real file and gfid hardlink from one brick, then stat via the mount?
01:36 JoeJulian This just in: http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/
01:36 glusterbot Title: GlusterFS Split-Brain Recovery Made Easy (at joejulian.name)
01:36 elyograg saw it, it's not clicking as to what I'd actually need to DO.
01:36 * JoeJulian grumbles as he sees a typo in his first sentence.
01:37 tjikkun joined #gluster
01:37 tjikkun joined #gluster
01:38 JoeJulian download, run "splitmount {server} {volume} {some temporary mount path}", under that mount path are r1 and r2 (just replica 2, right?). Pick one good version of the real filename. Delete the other.
01:39 elyograg ran it on one of my gluster servers.  can't import rpc.fetchvol
01:40 JoeJulian run it from the directory it's in, ie. ./splitmount
01:40 JoeJulian Looks like I need to make it installable.
01:41 elyograg oh, i need more than just the script.
01:41 elyograg how do i get the whole thing?  i went to the script, to Raw, then used 'wget' on the linux machine in /usr/local/sbin
01:42 JoeJulian git clone
01:42 JoeJulian I guess I know what I'll be working on when I get home...
01:42 JoeJulian Ok, gotta go now. See you soon.
01:42 elyograg now i need 'yum install git '
01:46 elyograg that's slick.  thanks.
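A sketch of the splitmount workflow JoeJulian just described (the repository path and file names are assumptions):

    git clone https://github.com/joejulian/glusterfs-splitbrain.git
    cd glusterfs-splitbrain
    # mounts each replica of the volume separately under the given path
    ./splitmount server1 myvol /mnt/split
    # compare the two copies, keep the good one, delete the bad one
    ls -l /mnt/split/r1/some/file /mnt/split/r2/some/file
    rm /mnt/split/r2/some/file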
01:55 DV joined #gluster
02:01 purpleidea @later tell sputnik13 here now, hi.
02:01 glusterbot purpleidea: The operation succeeded.
02:03 JMWbot joined #gluster
02:03 JMWbot I am JMWbot, I try to help remind johnmark about his todo list.
02:03 JMWbot Use: JMWbot: @remind <msg> and I will remind johnmark when I see him.
02:03 JMWbot /msg JMWbot @remind <msg> and I will remind johnmark _privately_ when I see him.
02:03 JMWbot The @list command will list all queued reminders for johnmark.
02:03 JMWbot The @about command will tell you about JMWbot.
02:06 tjikkun joined #gluster
02:06 tjikkun joined #gluster
02:12 harish_ joined #gluster
02:15 cjanbanan joined #gluster
02:17 badone_ joined #gluster
02:23 jporterfield joined #gluster
02:37 cjanbanan joined #gluster
02:42 cp0k_ joined #gluster
02:54 jporterfield joined #gluster
02:58 shubhendu joined #gluster
03:07 cjanbanan joined #gluster
03:12 lman482 joined #gluster
03:15 tjikkun joined #gluster
03:20 bharata-rao joined #gluster
03:21 sputnik13 joined #gluster
03:28 nightwalk joined #gluster
03:29 RayS joined #gluster
03:44 jag3773 joined #gluster
03:47 itisravi joined #gluster
03:52 CheRi joined #gluster
03:55 dusmant joined #gluster
03:55 sputnik13 anyone using gluster with openstack?
03:58 cjanbanan joined #gluster
04:00 glusterbot New news from newglusterbugs: [Bug 1070539] Very slow Samba Directory Listing when many files or sub-directories <https://bugzilla.redhat.com/show_bug.cgi?id=1070539>
04:13 [o__o] left #gluster
04:14 kdhananjay joined #gluster
04:15 [o__o] joined #gluster
04:23 satheesh joined #gluster
04:25 sputnik13 so...  finally have gluster up, but it's slow...  really slow...  iozone with 10mb record size returns roughly 1/3 the performance at best and 1/10 the performance at worst comparing gluster with 2 replicas and 8 peers to running iozone locally on one of those nodes
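For context, an iozone run along the lines sputnik13 describes might look like this (his exact invocation isn't shown in the log, so the flags are only an assumption):

    # -a auto mode, -r 10m record size, -s 1g file size, run against the gluster mount
    iozone -a -r 10m -s 1g -f /mnt/gluster/iozone.tmp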
04:26 satheesh joined #gluster
04:26 sputnik13 I kicked up the write-behind to 32mb...  are there any other commonly recommended set of tuning parameters?
04:34 hagarth joined #gluster
04:34 ndarshan joined #gluster
04:34 DV joined #gluster
04:35 cjanbanan joined #gluster
04:36 dusmant joined #gluster
04:44 DV joined #gluster
04:47 kanagaraj joined #gluster
04:56 aravindavk joined #gluster
04:57 rjoseph joined #gluster
05:00 nshaikh joined #gluster
05:03 spandit joined #gluster
05:07 cjanbanan joined #gluster
05:09 RameshN joined #gluster
05:11 seapasulli joined #gluster
05:12 davinder joined #gluster
05:17 sahina joined #gluster
05:20 bala1 joined #gluster
05:25 prasanth joined #gluster
05:31 lalatenduM joined #gluster
05:37 cjanbanan joined #gluster
05:52 jporterfield joined #gluster
05:55 DV joined #gluster
05:56 vimal joined #gluster
05:57 kris joined #gluster
05:58 jporterfield joined #gluster
06:03 rastar joined #gluster
06:04 raghu joined #gluster
06:05 cjanbanan joined #gluster
06:12 vpshastry joined #gluster
06:13 nithind1988 joined #gluster
06:14 nithind1988 Hi, is IPv6 supported in gluster 3.3.2?
06:22 vpshastry joined #gluster
06:31 glusterbot New news from newglusterbugs: [Bug 1070573] layout is missing when add-brick is done,new created files only locate on old bricks <https://bugzilla.redhat.com/show_bug.cgi?id=1070573>
06:34 hagarth joined #gluster
06:35 prasanth joined #gluster
06:42 haomaiwa_ joined #gluster
06:56 jporterfield joined #gluster
06:57 saurabh joined #gluster
07:00 cjanbanan joined #gluster
07:06 lalatenduM nithind1988, check this http://www.gluster.org/pipermail/gluster-users/2013-November/037824.html
07:06 glusterbot Title: [Gluster-users] Hi all! Glusterfs Ipv6 support (at www.gluster.org)
07:13 sputnik13 joined #gluster
07:16 DV joined #gluster
07:17 cjanbanan joined #gluster
07:19 CheRi joined #gluster
07:26 meghanam joined #gluster
07:26 meghanam_ joined #gluster
07:29 jtux joined #gluster
07:31 pkoro joined #gluster
07:35 rgustafs joined #gluster
07:35 cjanbanan joined #gluster
07:37 dusmant joined #gluster
07:40 ctria joined #gluster
07:40 hagarth joined #gluster
07:47 fyxim joined #gluster
07:51 elyograg linux is being psychotic.  had a problem earlier where memory that was being used by the disk cache was not showing up under "cached" ... but now it IS showing up there.
07:52 ngoswami joined #gluster
07:52 nithind1988 joined #gluster
07:54 kris joined #gluster
07:56 kris joined #gluster
08:00 eseyman joined #gluster
08:05 rahulcs joined #gluster
08:07 cjanbanan joined #gluster
08:11 bazzer any tips for speeding up geo-replication? I am geo-replicating to a gluster.fuse mount which I believe is a bottleneck.  For example if I replicate to straight xfs it's 4 times quicker.
08:11 bazzer I know I can't go directly to the brick as it buggers up the xattrs
08:12 bazzer and I can't use nfs as it doesn't support xattrs ( which geo-rep needs)
08:12 keytab joined #gluster
08:16 bazzer should I try using just rsync ? or is there are way to make the xattrs available via nfs or other local mountpoint?
08:18 sputnik13 joined #gluster
08:23 cjanbanan joined #gluster
08:30 al joined #gluster
08:36 nithind1988 lalatenduM, That thread didn't continue
08:36 CheRi joined #gluster
08:44 cjanbanan joined #gluster
08:47 ccha3 joined #gluster
08:49 SANMED joined #gluster
08:49 ndarshan joined #gluster
08:49 fsimonce joined #gluster
08:52 ppai joined #gluster
08:56 shubhendu joined #gluster
09:13 liquidat joined #gluster
09:23 qdk joined #gluster
09:39 21WACHYN0 joined #gluster
09:39 17SAAKRNV joined #gluster
09:46 harish_ joined #gluster
09:47 hagarth joined #gluster
09:57 cjanbanan joined #gluster
10:04 glusterbot New news from newglusterbugs: [Bug 1070623] Gluster Quotas implementation issue <https://bugzilla.redhat.com/show_bug.cgi?id=1070623>
10:09 harish_ joined #gluster
10:17 Pavid7 joined #gluster
10:20 ngoswami joined #gluster
10:21 gdubreui joined #gluster
10:29 ctria joined #gluster
10:30 DV joined #gluster
10:51 rfortier joined #gluster
10:54 harish_ joined #gluster
10:54 bazzles joined #gluster
10:56 marcoceppi_ joined #gluster
11:07 ngoswami joined #gluster
11:19 pkoro joined #gluster
11:22 SteveCooling joined #gluster
11:24 khushildep joined #gluster
11:25 jmarley joined #gluster
11:29 wica joined #gluster
11:34 glusterbot New news from newglusterbugs: [Bug 1070685] glusterfs ipv6 functionality not working <https://bugzilla.redhat.com/show_bug.cgi?id=1070685>
11:41 psyl0n joined #gluster
11:43 diegows joined #gluster
11:44 calum_ joined #gluster
11:57 qdk joined #gluster
12:00 ctria joined #gluster
12:00 tdasilva joined #gluster
12:06 cjanbanan joined #gluster
12:07 itisravi_ joined #gluster
12:11 kkeithley1 joined #gluster
12:12 CheRi joined #gluster
12:15 Slash joined #gluster
12:16 ppai joined #gluster
12:24 baoboa joined #gluster
12:27 rfortier1 joined #gluster
12:30 hagarth joined #gluster
12:30 Philambdo joined #gluster
12:35 kanagaraj_ joined #gluster
12:44 jporterfield joined #gluster
12:44 RameshN joined #gluster
12:48 Pavid7 joined #gluster
12:48 dusmant joined #gluster
12:49 ndarshan joined #gluster
12:57 jporterfield joined #gluster
13:13 glusterbot` joined #gluster
13:16 wica Has someone seen and maybe used the  Kinetic HDD from Seagate?
13:16 wica With ethernet connection
13:18 bazzles joined #gluster
13:18 ctria joined #gluster
13:19 rastar joined #gluster
13:23 kanagaraj joined #gluster
13:26 nithind1988 left #gluster
13:32 khushildep joined #gluster
13:33 jskinner_ joined #gluster
13:35 glusterbot New news from newglusterbugs: [Bug 1070734] Remove-brick:: Few files are not migrated from the decommissioned bricks even though status says completed, so commit results in data loss <https://bugzilla.redhat.com/show_bug.cgi?id=1070734>
13:35 liquidat joined #gluster
13:39 liquidat joined #gluster
13:47 shubhendu joined #gluster
13:48 ctria joined #gluster
13:50 japuzzo joined #gluster
13:51 rahulcs joined #gluster
13:58 social kkeithley_: from tests gluster does generate iowait when under load
13:58 social kkeithley_: I'm playing with sysbench and fio and I can say the results are interesting, random read causes gluster to eat a lot of user/system cpu
13:59 marcoceppi joined #gluster
13:59 marcoceppi joined #gluster
14:00 dewey joined #gluster
14:00 social kkeithley_: when writing randomly with fio vsync (which means the writes can be coalesced) it generates high cpu and no iowait, as with random read, and eats ram (like 50% in some cases); after that it drops so it's just cache
14:00 social kkeithley_: and random rewrite creates high iowait. all of these are what you see from the client
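A minimal fio invocation along the lines social describes, assuming a fuse mount at /mnt/gluster (parameters are illustrative, not his actual job files):

    # random read against the mount
    fio --name=randread --directory=/mnt/gluster --rw=randread --bs=4k --size=1g --ioengine=sync
    # random write through the vsync engine, which lets fio coalesce writes
    fio --name=randwrite --directory=/mnt/gluster --rw=randwrite --bs=4k --size=1g --ioengine=vsync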
14:00 bazzles joined #gluster
14:01 edward2 joined #gluster
14:02 kkeithley_ interesting
14:03 B21956 joined #gluster
14:04 sroy joined #gluster
14:07 edward2 joined #gluster
14:08 jclift ndevos kkeithley_: So, I've just finished installing a full CentOS 6.5 devel desktop, the "Development Libraries" and "Development Tools" groups (which somewhere in it includes redhat-rpm-config), and a few other Gluster dependencies.
14:08 jclift CentOS is definitely not being treated as RHEL in it.
14:08 hybrid512 joined #gluster
14:08 jclift eg my patch makes it work
14:09 jclift However, it sounds like the real question is "wtf isn't the redhat-rpm-config stuff working?"  Is that correct?
14:09 jclift Btw, have to jump in a meeting now
14:09 jclift back in a bit
14:10 kkeithley_ jclift: but %rhel doesn't come from redhat-rpm-config.
14:11 jclift kkeithley_: Ok, I'm just going by what ndevos was saying.  My takeaway is that there's a package that should be installed which makes the rhel stuff work for CentOS
14:11 lman4821 joined #gluster
14:11 * jclift thought it was redhat-rpm-config
14:16 gmcwhistler joined #gluster
14:16 kkeithley_ nope, it's in centos-distribution
14:17 jmarley joined #gluster
14:17 jmarley joined #gluster
14:17 jclift Aha, that's probably what's missing
14:17 jclift "No package centos-distribution available"
14:17 jclift Hmmm
14:17 kkeithley_ sorry, that's not the exact name, but it's something like that. hang on, loooking
14:18 kkeithley_ centos-release-6-5.el6.centos.11.2.x86_64
14:18 jclift k, 1 sec
14:18 lman4823 joined #gluster
14:18 kkeithley_ has /etc/rpm/macros.dist. That's where %rhel is defined
14:19 jclift "Package 10:centos-release-5-10.el5.centos.x86_64 already installed and latest version"
14:19 jclift Ahh, k.
14:19 * jclift looks at the macro stuff
14:19 kkeithley_ oh, centos 5
14:19 jclift No macros.dist
14:19 jclift Yeah
14:20 * kkeithley_ wonders how important RHEL5 (and CentOS5) really is
14:21 kkeithley_ Is this for the CentOS packaging SIG?
14:21 jruggiero joined #gluster
14:21 jclift No.  The only reason I'm interested, is because my Glupy patch is failing on the EL5 builds.
14:21 jclift So thus I installed EL5 to figure out how things work on this.
14:21 jclift It turns out "not very well"
14:22 jclift I'm completely ok with us dropping EL5 support.
14:22 * jclift strongly suspects that very few people would have a problem with it
14:23 jclift After all, we disable a shed load of stuff in configure for EL5 to even compile.
14:23 jclift Maybe we should ask for a general show of hands for "does anyone care about EL5 support" on the -devel mailing list?
14:24 kkeithley_ I suspect JoeJulian would complain vociferously if we completely dropped el5 support, but there are definitely things that don't, won't, can't ever be built. The dependencies don't exist. Some things just start falling off the end.
14:27 kkeithley_ What version of python is in 5 anyway? That alone seems a bit scary
14:28 kkeithley_ Well, not so scary that we don't build geo-rep on el5, but maybe we have to drop glupy from el5.
14:29 jclift It has Python 2.4.3.
14:29 jclift It's interesting that you say that.
14:29 jclift When trying to actually _run_ the glusterd that compiles from this, it fails with an error about geo-rep.
14:29 jclift I was going to mark that as a "needs to be looked at as well" thing
14:30 RayS joined #gluster
14:30 ctria joined #gluster
14:30 kkeithley_ oh?  yes, that needs to be looked at
14:30 jclift I'm not sure if it's due to this CentOS building issue, or if the same problem shows up on RHEL
14:31 jclift So.  For this CentOS building thing, what's your thinking?
14:31 jclift Should there already be a macro to make CentOS be treated the same as RHEL, in EL5?
14:32 jclift Actually, I have some of the CentOS guys in the other channel.  I can ask them. :)
14:32 kkeithley_ yeah, that seems like a good first step
14:33 theron joined #gluster
14:34 theron_ joined #gluster
14:34 kkeithley_ Without knowing more, I'd say some heuristic for when %rhel isn't defined and %centos_version == 5 then %global rhel %centos_version.  I'll leave it to you to figure out the details
14:37 jclift Understood.  So don't rationalise the >/>=/</<= signs at the same time?
14:37 jclift eg "just keep it simple" :)
14:37 kkeithley_ If you want to, but yes, I'd keep it simple
14:37 jclift Heh, you asked me to for the last one.
14:37 jclift But yeah, I'm all for keeping this simple for now.
14:38 jclift :)
14:38 Pavid7 joined #gluster
14:38 kkeithley_ last time you were touching all of them
14:38 jclift Good point
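The heuristic kkeithley_ sketches would translate to roughly this in the spec file (a sketch only; whether %centos_version is actually defined on a stock CentOS 5 box is itself part of what needs checking):

    %if 0%{?rhel} == 0 && 0%{?centos_version} == 5
    %global rhel %{centos_version}
    %endif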
14:40 seapasulli joined #gluster
14:45 plarsen joined #gluster
14:50 jobewan joined #gluster
14:51 bennyturns joined #gluster
14:52 dbruhn joined #gluster
14:55 jclift kkeithley_: At the very least, we'll need the extras/LinuxRPM/Makefile.am to include the mkdir -p rpmbuild/BUILD dir.  That's definitely missing. ;)
14:55 rpowell joined #gluster
14:59 bazzles joined #gluster
15:01 kkeithley_ I gather the el5 rpmbuild doesn't create it if it doesn't exist like it does on el6, fedora, and suse?
15:03 thefiguras joined #gluster
15:05 satheesh joined #gluster
15:06 thefiguras joined #gluster
15:08 jclift kkeithley_: Correct.  The make glusterrpms build just plain fails on CentOS 5 if it's not there, unlike the others.
15:08 cfeller joined #gluster
15:08 * jclift is updating the patch atm.
15:08 jclift Hopefully this one goes through.
15:08 kkeithley_ beautiful
15:09 jclift Then I can figure out the Glupy problem on EL5. ;)
15:11 diegows joined #gluster
15:12 jag3773 joined #gluster
15:12 wica joined #gluster
15:12 chirino joined #gluster
15:13 dbruhn_ joined #gluster
15:16 partner hey. any take on the current suggested hardware configuration? RH seems to suggest quite big boxes nowadays: for archival purposes 16 gigs already, but for anything more real 32/48 GB, which sounds like quite a lot to me
15:16 _VerboEse joined #gluster
15:16 solid_li1 joined #gluster
15:17 partner that being the minimum, "must be 2-socket", just wondering where they take these from
15:17 ctria joined #gluster
15:19 cjanbanan1 joined #gluster
15:19 dbruhn_ partner, I can't really speak to Redhat's requirements, or anything beyond 3.3.2... My servers have 16GB of ram in them.
15:19 Pavid7_ joined #gluster
15:19 partner its the only reference out there that i know of
15:19 dbruhn_ with another application running on top of gluster locally I am using 12GB
15:19 fsimonce` joined #gluster
15:19 dbruhn_ now that being said, rebalance, and full heal operations can be pretty memory intensive
15:20 partner i have dedicated storage boxes with 1x 6C intel, 8 gigs of memory and 12x 3/4 TB disks
15:20 partner well yes, i can't run the rebalance for this cluster at all due to memory usage going over the limits
15:20 crazifyngers_ joined #gluster
15:20 partner i have no idea how much i would need to actually rebalance the volume and whether that is a bug or feature or what..
15:20 dbruhn_ Just letting you know my experience
15:21 partner its always much appreciated, hence telling my story aswell :)
15:21 dbruhn_ rebalance ops in 3.3.2 had a memory leak, not sure if it's been fixed in later versions
15:21 partner my use case is rather simple for the big part, write once, read many, never editing, some deletions
15:21 partner indeed i have 3.3.2 for now, had to upgrade to it due to 3.3.1 having that filehandler leak bug
15:22 partner i hope to be able to upgrade shortly, maybe next week even
15:23 partner the difficult part here is the rebalance always starts from the beginning to my understanding so its using all the memory before it even starts to balance anything (as it was done already by previous rounds)
15:23 partner understanding based on the numbers seen.. ie. it does not set a marker and continue from there
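For reference, the rebalance operations partner is describing are driven by (volume name is a placeholder):

    gluster volume rebalance VOLNAME start
    gluster volume rebalance VOLNAME status
    gluster volume rebalance VOLNAME stop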
15:23 bugs_ joined #gluster
15:24 wushudoin| joined #gluster
15:27 cp0k_ joined #gluster
15:27 baoboa joined #gluster
15:28 bennyturns joined #gluster
15:31 rgustafs joined #gluster
15:35 seapasulli joined #gluster
15:36 mattappe_ joined #gluster
15:37 lmickh joined #gluster
15:39 psyl0n joined #gluster
15:42 cp0k_ joined #gluster
15:44 daMaestro joined #gluster
15:48 psyl0n joined #gluster
15:55 hagarth joined #gluster
15:57 bazzles joined #gluster
16:01 xavih joined #gluster
16:04 sprachgenerator joined #gluster
16:05 davinder joined #gluster
16:10 semiosis :O
16:15 nixpanic joined #gluster
16:15 nixpanic joined #gluster
16:16 ngoswami joined #gluster
16:19 hybrid512 joined #gluster
16:22 ctria joined #gluster
16:27 Slash_ joined #gluster
16:32 cfeller joined #gluster
16:32 zaitcev joined #gluster
16:38 zerick joined #gluster
16:44 jag3773 joined #gluster
16:47 sputnik13 joined #gluster
16:51 kkeithley_ ping purpleidea
16:51 kkeithley_ purpleidea: ping
16:53 bazzles joined #gluster
16:55 rahulcs joined #gluster
16:57 purpleidea kkeithley_: hey
16:57 purpleidea pong
16:58 kkeithley_ may I pm you?
16:58 purpleidea kkeithley_: sure
16:58 T0aD asl ?
17:00 samppah :D
17:03 semiosis lol
17:04 dbruhn_ and this is how members of the gluster community ended up on How to catch a predator
17:05 kkeithley_ dbruhn: say what?
17:05 sprachgenerator joined #gluster
17:05 dbruhn_ response to T0aD's comment
17:07 purpleidea it's starting to feel like icq in 1999
17:10 zerick joined #gluster
17:16 cjanbanan joined #gluster
17:17 kaptk2 joined #gluster
17:24 partner haven't heard that one for a looong time :o
17:34 sarkis joined #gluster
17:34 ctria joined #gluster
17:38 Mo_ joined #gluster
17:39 owenmurr_ joined #gluster
17:39 rotbeard joined #gluster
17:45 bazzles joined #gluster
17:46 haomaiwang joined #gluster
17:47 owenmurr joined #gluster
17:48 owenmurr joined #gluster
17:50 owenmurr joined #gluster
17:53 partner sputnik13: forgot to reply, we do
17:55 plarsen joined #gluster
17:59 JoeJulian kkeithley_: I would only mildly complain. My biggest issue with legacy is my @$%!#$% fedora 6 box.
18:01 diegows joined #gluster
18:04 zerick joined #gluster
18:06 haomaiwang joined #gluster
18:10 rahulcs joined #gluster
18:12 sputnik13 partner: are you using it for cinder or nova or both?
18:13 Philambdo joined #gluster
18:14 samppah sputnik13: did you already solve the performance problem?
18:15 sputnik13 samppah: nope
18:15 samppah sputnik13: what kind of setup you have and what kind of storage setup you are using?
18:15 sputnik13 8 node replicated distributed with 2 replica
18:16 sputnik13 11 disk raid 6 arrays with battery backed flash
18:16 sputnik13 tried both XFS and EXT4
18:17 samppah what test you are running?
18:17 sputnik13 running ubuntu 12.04.4 with gluster 3.4.2
18:17 samppah LSI card?
18:17 sputnik13 iozone
18:17 sputnik13 yes LSI megaraid 9260
18:17 sputnik13 no wait, that's my compute node
18:17 sputnik13 it's a dell perc 710h or something
18:17 sputnik13 rebranded LSI
18:18 samppah okay
18:18 samppah is that testing sequential or random io?
18:18 samppah i'm not very familiar with iozone
18:18 sputnik13 I think it does all of them
18:18 haomaiwang joined #gluster
18:18 sputnik13 and across the board the performance is much lower than running on local disk
18:19 sputnik13 I set the test to use 10mb "blocks"
18:19 samppah i'm not sure if an 11 disk raid6 array is optimal.. but also i'm not sure if it should affect that much
18:19 sputnik13 with 4k it's not even funny how bad the numbers are
18:19 samppah hmmh
18:20 sputnik13 well if the problem was raid configuration I would expect local runs of the test to be slow as well, which is not the case
18:20 samppah have you done any tuning for glusterfs?
18:20 sputnik13 the only things I've been able to find are on the gluster.org site
18:20 sputnik13 and I set gluster volume set bcf_gluster performance.cache-size 512MB
18:20 sputnik13 gluster volume set bcf_gluster performance.write-behind-window-size 32MB
18:21 sputnik13 sorry just pasted a portion of my setup script
18:21 sputnik13 32MB write behind, 512MB cache
18:21 samppah i have these options defined for rhev volume http://pastie.org/8809703
18:21 glusterbot Title: #8809703 - Pastie (at pastie.org)
18:22 mattappe_ joined #gluster
18:22 sputnik13 why is write-behind disabled?
18:26 samppah sputnik13: i have not found out the exact reason but if i have write-behind enabled and there is heavy io then it causes some buffer to fill up and once it's full it stops all io until everything has been written to disk
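For anyone wanting to experiment with the same trade-off, write-behind can be toggled per volume (volume name is a placeholder):

    gluster volume set VOLNAME performance.write-behind off
    # revert with
    gluster volume set VOLNAME performance.write-behind on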
18:26 sputnik13 ic
18:26 Philambdo joined #gluster
18:27 samppah i have only been able to cause that issue when testing performance with fio so i'm not sure if that's a real issue at all
18:27 nage joined #gluster
18:27 nage joined #gluster
18:27 sputnik13 wow, those settings made a big diff
18:28 sputnik13 I have ~ 100MB or so more per test
18:31 sputnik13 samppah: I have to run, really appreciate you sharing your tuning params with me
18:31 samppah sputnik13: no problem :)
18:31 samppah hope it helps
18:32 partner sputnik13: for cinder and glance
18:44 bennyturns joined #gluster
18:45 bazzles joined #gluster
18:45 jskinn___ joined #gluster
18:51 rahulcs joined #gluster
19:04 jesse joined #gluster
19:15 sputnik13 joined #gluster
19:22 sputnik13 partner: have you done any performance measurements of the cinder volumes?
19:23 uebera|| joined #gluster
19:33 partner not really i'm afraid, currently doing quite small env
19:37 bazzles joined #gluster
19:38 sputnik13 ic
19:40 sputnik13 where can I find the latest administration guide for gluster?
19:40 sputnik13 the latest one I can find on the website is for 3.3.
19:42 JoeJulian https://forge.gluster.org/gluster-docs-project/gluster-docs-project/trees/master
19:42 glusterbot Title: Tree for gluster-docs-project in Gluster Docs Project - Gluster Community Forge (at forge.gluster.org)
19:44 Guest48886 joined #gluster
19:45 sputnik13 is this the repository where the website's documentation is kept?
19:46 rahulcs joined #gluster
19:46 sputnik13 I'm looking for the set of volume tuning options available for 3.4, the only list I can find on the web is for 3.2, and then there's the admin guide for 3.3 in PDF form that has the tuning options updated for 3.3...  where can I find the list for 3.4?
19:46 nixpanic joined #gluster
19:46 nixpanic joined #gluster
19:47 semiosis sputnik13: 'gluster volume set help' also ,(undocumented options)
19:47 semiosis ,,(undocumented options)
19:47 glusterbot Undocumented options for 3.4: http://www.gluster.org/community/documentation/index.php/Documenting_the_undocumented
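The built-in listing semiosis mentions is simply:

    # prints the settable options with defaults and descriptions for this release
    gluster volume set help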
19:48 sputnik13 semiosis: thank you
19:53 semiosis yw
19:54 Matthaeus joined #gluster
20:03 sputnik13 when setting options that take byte numbers, it seems like it takes M G MB GB
20:04 sputnik13 I enter GH and it still takes it
20:04 sputnik13 how do I know what works and what doesn't?
20:06 giulivo joined #gluster
20:06 JoeJulian iirc, I read in the source that it just uses 1 letter.
20:07 giulivo sorry guys, there seems to be a limit on the length of a volume name but I'm not that skilled to find it in the code, can anyone help me figure it?
20:07 sputnik13 ah, ok, that seems to bear out in what the cli is complaining about
20:07 sputnik13 I tried 256J and it said "invalid format"
20:08 giulivo for instance, something like this "rhosqe-jenkins-9443a145-17-cinder_gluster_2014_02_27_20_25_38_555308" is not accepted as it is too long
20:11 kmai007 joined #gluster
20:12 JoeJulian @lucky tr -cd '\11\12\15\40-\176' <
20:12 glusterbot JoeJulian: Google found nothing.
20:12 JoeJulian dammit
20:12 kmai007 :-*(
20:12 JoeJulian @lucky _POSIX_PATH_MAX
20:12 kmai007 i tried to cut over to glusterfs in prod today on 3.4.2
20:12 glusterbot JoeJulian: http://lists.gnu.org/archive/html/bug-coreutils/2005-05/msg00217.html
20:13 kmai007 not good
20:13 kmai007 had to roll back to traditional NFS
20:13 kmai007 i don't know how you guys do it, this gets so depressing
20:14 xymox joined #gluster
20:14 JoeJulian kmai007: I just don't have problems. It makes it much easier.
20:14 kmai007 JoeJulian: (>_<)
20:14 JoeJulian giulivo: Looks like _POSIX_PATH_MAX = 256
20:15 JoeJulian So, kmai007, what's "not good"?
20:16 giulivo JoeJulian, thanks but I think my information was misleading as the name is shorter than that but still not working ... actually the error I get relates to the brick path not the volume name, http://hastebin.com/valupeyaji.md
20:16 glusterbot Title: hastebin (at hastebin.com)
20:16 kmai007 JoeJulian: i have 2 x 4 = 8 distr. repl servers
20:17 kmai007 i have a volume called /dynamic/coldfusion
20:17 kmai007 this volume has about 40 FUSE clients mounting it
20:18 kmai007 coldfusion web servers write to /dynamic/coldfusion right now contains 1.2 million files
20:19 JoeJulian in one directory?
20:19 kmai007 apache webservers would mount that as FUSE and read it
20:19 kmai007 no 1.2 recursively
20:20 kmai007 the only reason i know of that # is b/c i went to the bricks and counted it from there
20:20 kmai007 it hangs when i try to count it via client
20:20 kmai007 591709 + 597321 = total 1.2mill
20:21 kmai007 so it appears that when i've rsync'd the data for the past 2 weeks
20:21 kmai007 i didn't do a --delete, so i've been collecting files that have been purged
20:22 kmai007 when we put it into production it was struggling when it was trying to purge, write, and read to the same volume
20:22 kmai007 im only suspecting now
20:26 kmai007 could a busy volume impact other volumes hosted by gluster?
20:27 JoeJulian I'm not really sure what you're asking. If you host multiple volumes on each server, sure. A busy volume could be using up your bandwidth or cpu cycles.
20:27 kmai007 ok
20:29 kmai007 enabling performance.io-thread-count: when is it ideal to use this feature?
20:31 jruggiero left #gluster
20:37 bennyturns joined #gluster
20:39 bazzles joined #gluster
20:40 JoeJulian When you have sufficient io channels to utilize them.
20:42 mattappe_ joined #gluster
20:54 rwheeler joined #gluster
21:04 andreask joined #gluster
21:17 daMaestro joined #gluster
21:23 ninkotech joined #gluster
21:24 qdk joined #gluster
21:24 sputnik13 is anyone using striped volumes in production environments
21:24 sputnik13 ?
21:25 sputnik13 I had some weirdness with striped volumes a couple days ago that makes me wary
21:25 sputnik13 of using it
21:26 sputnik13 to be more specific...  when I mounted a volume the client saw nothing, while the servers had stuff in the bricks
21:30 sputnik13 weird, the other volume parameters I used all take just 1 letter, but stripe-block-size expects 2 letters :)
21:31 psyl0n joined #gluster
21:38 SuperYeti joined #gluster
21:39 SuperYeti Hey guys, hoping someone can answer a quick question. I have an upstart job I've created that needs to start on boot, but it depends on a glusterfs mount
21:39 SuperYeti what would my start stanza be to enforce that dependency?
21:40 SuperYeti http://paste.ubuntu.com/7007296/ is my current upstart script
21:40 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
21:41 SuperYeti http://paste.ubuntu.com/7007299/ is my dmesg output, which I can see it's starting too soon
21:41 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
21:41 SuperYeti i suspect that's why it's failing
21:42 sputnik13 there's a start after
21:43 sputnik13 what you would start after I don't know off hand
21:43 sputnik13 but there's presumably a job that handles mounting filesystems
21:43 sputnik13 make sure you're starting after that
21:43 sputnik13 alternatively you can just mount the glusterfs in your script
21:43 sputnik13 and take it out of fstab
21:44 elyograg on debian, which still uses the sysv init stuff, the tag is $remote_fs
21:45 SuperYeti so "start after $remote_fs"
21:45 SuperYeti ?
21:45 elyograg I don't know how you phrase it with upstart.  never looked at it.
21:45 sputnik13 maybe, but probably not
21:45 fidevo joined #gluster
21:45 SuperYeti hmm
21:45 sputnik13 yeah elyograg is talking about sysv init on debian, not upstart
21:46 ninkotech joined #gluster
21:46 sarkis joined #gluster
21:46 SuperYeti right
21:46 elyograg do you have postfix?  I got that from the postfix init script.
21:47 SuperYeti ah
21:47 SuperYeti i think i found it
21:47 SuperYeti http://softsolder.com/2012/05/07/ubuntu-12-04-nfs-mounts-vs-upstart/
21:48 elyograg nice.
21:48 SuperYeti start on filesystem and mounted MOUNTPOINT=/mnt/jobs
21:48 SuperYeti theoretically that might work
21:48 SuperYeti LOL
21:50 elyograg what that guy says about remote-filesystems seems like a bug.
21:50 SuperYeti yes
21:50 SuperYeti but mounted part would theoretically bypass that?
21:50 SuperYeti lol
21:51 elyograg a ton of things in debian depend on the remote filesystems being mounted.  ssh is one.  samba. rsync. ntp, which seems weird.
21:51 elyograg so if you look at the upstart config for those, maybe you can see a more generic way.
21:52 SuperYeti w00t that worked
21:52 SuperYeti i have a bug in my upstart script, but dmesg shows it starting after gluster now
21:53 kris joined #gluster
21:56 bazzles joined #gluster
21:59 SuperYeti ah
21:59 SuperYeti apparently there was an actual gluster upstart job mounting-glusterfs
21:59 SuperYeti that i could have also probably depended on
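Put together, the upstart job SuperYeti ended up with would look roughly like this (job name and paths are hypothetical):

    # /etc/init/myjob.conf
    description "job that needs the gluster mount"
    start on filesystem and mounted MOUNTPOINT=/mnt/jobs
    exec /usr/local/bin/myjob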
22:02 SuperYeti ok gotta run, thanks for pointing me in the correct direction
22:02 ninkotech joined #gluster
22:03 ninkotech_ joined #gluster
22:10 ninkotech joined #gluster
22:12 ninkotech joined #gluster
22:25 ninkotech joined #gluster
22:33 jskinner_ joined #gluster
22:34 zerick joined #gluster
22:37 ninkotech_ joined #gluster
22:38 ninkotech joined #gluster
22:40 wrale is there any reason that the gluster configure guide tells us to create an msdos (really, any) partition table?  if the brick drive holds nothing else, is there any benefit to having a one-partition MBR?  I'm using an SSD.  Is there any reason for not just doing   mkfs.xfs -i size=512 /dev/sdb ?
22:41 wrale never mind.. :) http://www.fhgfs.com/wiki/PartitionAlignment  says: A commonly used and very simple alternative to partition alignment is to create the file system directly on the storage device without any partitions. ... (Works for me)
22:41 glusterbot Title: FhGFS Wiki: Partition Alignment Guide (at www.fhgfs.com)
22:44 ninkotech_ joined #gluster
22:45 primechuck joined #gluster
22:49 ninkotech joined #gluster
22:50 al joined #gluster
22:53 ninkotech__ joined #gluster
22:57 al joined #gluster
22:59 bazzles joined #gluster
23:01 ninkotech_ joined #gluster
23:02 al joined #gluster
23:08 mattappe_ joined #gluster
23:08 YazzY joined #gluster
23:08 YazzY good evening guys
23:10 YazzY is it an intended behaviour that when I write to local file system on a gluster node, the changes won't be distributed to the other nodes?
23:10 YazzY only changes made on a mounted volume get distributed
23:10 elyograg YazzY: don't write directly to the bricks.  only access the filesystem through the gluster mount.
23:11 elyograg the gluster client writes to all copies simultaneously.
23:11 YazzY elyograg: should i mount bricks locally on a node if I wanted to be able to write to it?
23:11 YazzY i was thinking of running KVM on the nodes with failover of the VMs to the other node(s)
23:12 elyograg you need to mount the gluster filesystem, which you can do locally on the brick servers as well.  writing to the bricks directly won't work right.
23:13 YazzY elyograg: allright, so this work around will work then
23:13 YazzY thanks
23:15 YazzY is it recommended to use a virtual/floating IP when mounting bricks ?
23:17 purpleidea YazzY: yes
23:17 purpleidea see: ,,(puppet)
23:17 glusterbot see: https://github.com/purpleidea/puppet-gluster
23:17 purpleidea for an example
23:18 YazzY i usually use pacemaker for that
23:18 andreask why a floating IP?
23:18 elyograg if you're going to do NFS, it's the *only* way.  For the FUSE mount, the server you give it is only used at mount time.  after that, it connects to all the bricks directly.  if you need the mount to work with one hostname and you can't change the command to another host, then you would need either round-robin DNS or a shared IP.
23:20 andreask for native gluster mount you can use the "backupvolfile-server" mount option
23:21 YazzY andreask: you mean fuse mount?
23:21 YazzY i mount my resources like this:
23:21 andreask yes
23:21 YazzY /etc/glusterfs/virtual_servers.vol /virt glusterfs rw,allow_other,default_permissions,max_read=131072 0 0
23:21 ninkotech_ joined #gluster
23:21 YazzY and i put node definitions in my /etc/glusterfs/virtual_servers.vol
23:22 YazzY andreask: do you mount the volumes using fuse monunts in fstab ?
23:22 andreask yes
23:23 YazzY can you please show me how?
23:23 YazzY the parameters you give it
23:24 andreask like .. gluster1:/vol1 /mnt/gluster/vol1 glusterfs defaults,_netdev,backupvolfile-server=gluster2,gluster3,gluster4 0 0
23:27 YazzY right, you don't use /etc/glusterfs/virtual_servers.vol
23:31 jiqiren joined #gluster
23:33 ninkotech_ joined #gluster
23:33 YazzY cool stuff, mounted my file system locally and it syncs on every write
23:36 YazzY so which way is recommended to fuse mount volumes? With the /etc/ config or not?
23:37 andreask "not" ;-)
23:37 YazzY andreask: why not? :)
23:37 andreask don't fiddle around with vol files manually
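The equivalent one-off mount, with no hand-written vol file (hostnames and paths follow andreask's fstab example):

    mount -t glusterfs -o backupvolfile-server=gluster2 gluster1:/vol1 /mnt/gluster/vol1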
23:38 ninkotech joined #gluster
23:40 lmickh_ joined #gluster
23:40 glusterbot` joined #gluster
23:41 gdubreui joined #gluster
23:43 seapasulli_ joined #gluster
23:43 theron joined #gluster
23:44 sprachgenerator_ joined #gluster
23:46 asku joined #gluster
23:46 nixpanic joined #gluster
23:46 nixpanic joined #gluster
