
IRC log for #gluster, 2013-06-27


All times shown according to UTC.

Time Nick Message
00:49 tjikkun joined #gluster
01:01 aliguori joined #gluster
01:05 theron joined #gluster
01:08 kevein joined #gluster
01:12 bala joined #gluster
01:24 alias_willsmith joined #gluster
01:26 alias_willsmith Have a question about how to setup a gluster nfs mount point with HA.  I was thinking about using a VIP in some fashion but not exactly sure that will work.  I've searched and have not really seen much information elsewhere.
01:29 alias_willsmith Actually I'm looking at CTDB now... didn't see that before.
01:53 Eco_ alias_willsmith, http://download.gluster.org/pub/gluster/glusterfs/doc/Gluster_CTDB_setup.v1.pdf
01:53 glusterbot <http://goo.gl/95vXT> (at download.gluster.org)
01:53 Eco_ what glusterbot said
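
For readers landing here later: the CTDB setup in that PDF boils down to floating a VIP across the gluster/NFS servers, with CTDB's recovery lock kept on a small replicated gluster volume. A rough sketch with placeholder names, paths and addresses (not the values from the document), to be checked against the PDF before use:

    # small replicated volume to hold the CTDB lock, mounted on every node
    gluster volume create ctdb replica 2 server1:/bricks/ctdb server2:/bricks/ctdb
    gluster volume start ctdb
    mount -t glusterfs localhost:/ctdb /gluster/lock
    # /etc/sysconfig/ctdb        -> CTDB_RECOVERY_LOCK=/gluster/lock/lockfile
    # /etc/ctdb/nodes            -> one static IP per gluster server
    # /etc/ctdb/public_addresses -> the floating VIP, e.g. "10.0.0.200/24 eth0"
    # clients then mount NFS through the VIP:
    mount -t nfs -o vers=3 10.0.0.200:/myvol /mnt/myvol
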
02:18 vpshastry joined #gluster
02:23 alias_willsmith joined #gluster
02:24 mohankumar joined #gluster
02:33 mohankumar joined #gluster
02:33 vpshastry joined #gluster
02:35 bharata joined #gluster
02:48 lpabon joined #gluster
02:57 lalatenduM joined #gluster
03:39 itisravi joined #gluster
04:00 lpabon joined #gluster
04:01 rjoseph joined #gluster
04:06 krishnan_p joined #gluster
04:08 CheRi joined #gluster
04:08 hagarth joined #gluster
04:19 vpshastry joined #gluster
04:21 mooperd joined #gluster
04:38 anands joined #gluster
04:49 mohankumar joined #gluster
04:49 rjoseph joined #gluster
04:53 mohankumar__ joined #gluster
05:09 bala joined #gluster
05:10 shireesh joined #gluster
05:12 semiosis i have never been as happy to see "[2013-06-27 05:11:47.028359] E [rpcsvc.c:519:rpcsvc_handle_rpc_call] 0-glusterd: Request received from non-privileged port. Failing request"
05:12 semiosis as I am right now
05:12 semiosis java is driving libgfapi \o/
05:13 bala joined #gluster
05:20 aravindavk joined #gluster
05:22 nueces joined #gluster
05:24 shylesh joined #gluster
05:43 hagarth joined #gluster
05:44 hagarth semiosis: \o/!
05:44 semiosis yeah!
05:45 semiosis writing up instructions to reproduce my test now, then i'm going to push the code up to the glfsjni repo chirino set up yesterday
05:46 semiosis in the 30 min since my \o/ message i mapped a few more functions and successfully wrote "hello world" into a file in the volume -- from java
05:46 hagarth great!
05:47 hagarth the glfsjni is on forge?
05:48 semiosis it should be, but chirino just put up the skeleton project on github
05:48 semiosis ultimately it should be on the forge
05:48 hagarth no probs, I am more interested in seeing code :).
05:48 semiosis me too :)
05:48 semiosis want to keep up this momentum
05:49 hagarth yes, let's keep that going.
05:49 semiosis have you looked at chirino's hawtjni?
05:49 hagarth semiosis: not yet
05:49 semiosis being a java guy it looked much more approachable than swig, also helped that the author hangs out here :)
05:50 hagarth sure, whatever works rules :)
05:50 vpshastry joined #gluster
05:56 JoeJulian semiosis: Damn... that news almost makes me want to learn java...
05:56 JoeJulian ... almost
05:56 semiosis never a bad time to start
05:57 semiosis the learning curve with java never stops climbing
05:57 JoeJulian hehe
05:57 JoeJulian I just haven't thought of a project that I want to make that much more difficult yet.
05:57 semiosis lol
05:57 semiosis yeah
05:58 semiosis lets think of this as a *jvm* binding for glusterfs, not a *java* binding
05:58 JoeJulian Actually, I'm not being entirely honest... I have written java apps for android.
05:58 GabrieleV joined #gluster
05:58 semiosis once there is a JNI library for glusterfs, you can call it from any jvm language, even jython
05:58 semiosis or jruby
05:58 semiosis or jperl
05:58 semiosis ok not jperl
05:58 krishnan_p IIRC, scala could use java libraries
05:59 semiosis yes, scala is strictly a jvm language, as is clojure, groovy, ...
06:09 psharma joined #gluster
06:10 puebele joined #gluster
06:12 anands joined #gluster
06:15 semiosis https://github.com/chirino/glfsjni
06:15 glusterbot Title: chirino/glfsjni · GitHub (at github.com)
06:17 ngoswami joined #gluster
06:18 shylesh joined #gluster
06:18 hagarth semiosis: neat
06:19 semiosis thx
06:19 semiosis ok time for bed, it's past 2 am
06:20 hagarth semiosis: good night!
06:20 semiosis good night
06:21 sgowda joined #gluster
06:24 jtux joined #gluster
06:24 fubada joined #gluster
06:24 fubada folks i just broke my gluster mounts with a yum update
06:24 fubada can someone help pls
06:25 kshlm joined #gluster
06:25 fubada vol status is foos
06:25 fubada good
06:27 fubada all set
06:30 bala joined #gluster
06:32 rastar joined #gluster
06:33 bulde joined #gluster
06:34 ollivera joined #gluster
06:35 aravindavk joined #gluster
06:39 kevein_ joined #gluster
06:46 vimal joined #gluster
06:50 vpshastry joined #gluster
06:52 shylesh joined #gluster
06:58 dobber_ joined #gluster
06:59 ricky-ticky joined #gluster
07:01 tjikkun_work joined #gluster
07:01 rastar joined #gluster
07:04 deepakcs joined #gluster
07:05 mmalesa joined #gluster
07:07 bulde joined #gluster
07:12 vshankar joined #gluster
07:13 nueces joined #gluster
07:14 ctria joined #gluster
07:19 hybrid5121 joined #gluster
07:24 mmalesa joined #gluster
07:25 mtanner__ joined #gluster
07:28 edong23_ joined #gluster
07:28 kaushal_ joined #gluster
07:32 ekuric joined #gluster
07:34 shanks joined #gluster
07:35 kaushal_ joined #gluster
07:35 Oneiroi joined #gluster
07:36 saurabh joined #gluster
07:37 atoponce joined #gluster
07:37 xymox_ joined #gluster
07:37 romero_ joined #gluster
07:37 ctria joined #gluster
07:37 ollivera joined #gluster
07:37 ngoswami joined #gluster
07:37 mohankumar__ joined #gluster
07:37 krishnan_p joined #gluster
07:37 aliguori joined #gluster
07:37 nixpanic joined #gluster
07:37 ninkotech_ joined #gluster
07:37 18VABC04B joined #gluster
07:37 awheeler_ joined #gluster
07:37 gmcwhistler joined #gluster
07:37 JoeJulian joined #gluster
07:37 T0aD joined #gluster
07:37 y4m4 joined #gluster
07:37 harish joined #gluster
07:37 jag3773 joined #gluster
07:37 Debolaz joined #gluster
07:37 ofu_ joined #gluster
07:37 codex joined #gluster
07:37 eryc joined #gluster
07:37 abyss^__ joined #gluster
07:37 Chr1st1an joined #gluster
07:37 Shdwdrgn joined #gluster
07:37 dblack joined #gluster
07:37 Madkiss joined #gluster
07:37 ste76 joined #gluster
07:37 hflai joined #gluster
07:37 the-me joined #gluster
07:37 DWSR joined #gluster
07:37 morse joined #gluster
07:37 helloadam joined #gluster
07:37 ingard joined #gluster
07:37 RobertLaptop joined #gluster
07:37 jones_d joined #gluster
07:37 primusinterpares joined #gluster
07:37 It_Burns joined #gluster
07:37 NuxRo joined #gluster
07:37 coredumb joined #gluster
07:37 chlunde joined #gluster
07:37 bdperkin joined #gluster
07:37 partner joined #gluster
07:37 hagarth__ joined #gluster
07:37 semiosis joined #gluster
07:37 kbsingh joined #gluster
07:37 kspaans joined #gluster
07:37 anands joined #gluster
07:38 atoponce joined #gluster
07:40 andreask joined #gluster
07:42 bulde joined #gluster
07:42 rjoseph joined #gluster
07:43 mmalesa joined #gluster
07:47 rotbeard joined #gluster
08:07 rastar joined #gluster
08:15 arusso joined #gluster
08:15 yinyin joined #gluster
08:19 Staples84 joined #gluster
08:24 mmalesa joined #gluster
08:27 hagarth joined #gluster
08:28 hagarth1 joined #gluster
08:32 shylesh joined #gluster
08:35 Norky joined #gluster
08:40 X3NQ joined #gluster
08:44 mooperd joined #gluster
08:52 mmalesa joined #gluster
08:53 rastar joined #gluster
09:07 chlunde joined #gluster
09:11 Debolaz joined #gluster
09:14 yinyin joined #gluster
09:18 mmalesa joined #gluster
09:19 manik joined #gluster
09:39 ngoswami joined #gluster
09:47 jag3773 joined #gluster
09:56 ctria joined #gluster
10:00 ramkrsna joined #gluster
10:00 ramkrsna joined #gluster
10:00 atrius joined #gluster
10:00 spider_fingers joined #gluster
10:03 tziOm joined #gluster
10:04 jf001 joined #gluster
10:06 krokar joined #gluster
10:16 edward1 joined #gluster
10:18 manik joined #gluster
10:35 anands joined #gluster
10:39 ngoswami joined #gluster
10:44 kkeithley1 joined #gluster
10:49 duerF joined #gluster
10:52 piotrektt joined #gluster
10:52 piotrektt joined #gluster
10:56 satheesh joined #gluster
10:57 mohankumar__ joined #gluster
11:06 mooperd joined #gluster
11:09 z2013 joined #gluster
11:10 jcaputo joined #gluster
11:22 CheRi joined #gluster
11:23 rcheleguini joined #gluster
11:29 ultrabizweb joined #gluster
11:30 bulde joined #gluster
11:35 ghaering joined #gluster
11:43 andreask joined #gluster
11:49 kaushal_ joined #gluster
12:15 Debolaz joined #gluster
12:17 Koma joined #gluster
12:29 mooperd joined #gluster
12:36 bulde joined #gluster
12:41 jsheeren joined #gluster
12:42 CheRi joined #gluster
12:43 jthorne joined #gluster
12:48 anands joined #gluster
12:50 hagarth joined #gluster
12:53 ghaering left #gluster
12:54 bennyturns joined #gluster
12:54 Bryan_ joined #gluster
12:55 kkeithley joined #gluster
12:55 georgeh|workstat joined #gluster
12:56 lkoranda joined #gluster
12:57 plarsen joined #gluster
13:04 jsheeren joined #gluster
13:04 shireesh joined #gluster
13:05 lalatenduM joined #gluster
13:05 shireesh left #gluster
13:08 rwheeler joined #gluster
13:09 mohankumar__ joined #gluster
13:11 robo joined #gluster
13:11 jclift joined #gluster
13:21 manik joined #gluster
13:23 deepakcs joined #gluster
13:27 mmalesa joined #gluster
13:31 glusterbot New news from newglusterbugs: [Bug 953694] Requirements of Samba VFS plugin for glusterfs <http://goo.gl/v7g29>
13:40 joelwallis joined #gluster
13:48 rastar joined #gluster
13:48 mmalesa_ joined #gluster
13:49 ccha I want to change the dns of a node. What is best practice ? stop the vol and substitute all occurrences of the hostname in /var/lib/glusterd then restart the vol ?
13:49 ccha or any better way to do ?
13:51 spider_fingers joined #gluster
13:52 hagarth joined #gluster
13:55 mmalesa joined #gluster
13:55 mmalesa joined #gluster
13:56 mmalesa Hi guys. I have two questions. 1) would you advise against having a gluster volume on the same node as the hypervisor in an ovirt host? 2) will such a configuration be possible (gluster and vdsm on one host) when the gluster block storage domain becomes available in ovirt 3.3?
13:59 robos joined #gluster
14:00 mmalesa hmmm I guess this should be asked in #ovirt channel... :/
14:04 pkoro joined #gluster
14:12 andreask joined #gluster
14:14 JoeJulian ccha: That's probably how I would cheat, yeah.
14:15 JoeJulian mmalesa: Not sure about the ovirt bits, but I host kvm vms on one of my glusterfs servers.
14:15 yinyin joined #gluster
14:16 spider_fingers left #gluster
14:17 mmalesa JoeJulian: I was wondering if any host fencing, denying from cluster may negatively influence gluster volumes
14:19 kaptk2 joined #gluster
14:20 mmalesa_ joined #gluster
14:22 chirino joined #gluster
14:23 JoeJulian I guess that depends on how it's done. It should be possible, and if it's not, tell the upstream ovirt guys and they'll be quick to fix it. :D
14:23 JoeJulian They were demoing their management of gluster volumes at Summit.
14:25 ccha JoeJulian | ccha: That's probably how I would cheat, yeah. <-- and there is brick name as folder name in vols/bricks/
14:28 duerF joined #gluster
14:33 mmalesa_ JoeJulian: technically it is easy i guess. Worst thing are all those disaster scenarios when one have to deal not only with ovirt but also with the storage. That is why Red Hat does not support such environments...
14:38 chirino joined #gluster
14:39 bugs_ joined #gluster
14:40 piotrektt joined #gluster
14:40 piotrektt joined #gluster
14:42 aravindavk joined #gluster
14:42 jf001 having problems with geo-replication on gluster 3.3.  We ran into https://bugzilla.redhat.com/show_bug.cgi?id=886808 - and patched syncdaemon/resource.py as in http://review.gluster.org/#/c/4233/ - however the sync still doesn't complete, we just no longer get errors in the log
14:42 glusterbot <http://goo.gl/nfHvK> (at bugzilla.redhat.com)
14:42 glusterbot Bug 886808: high, urgent, ---, ndevos, MODIFIED , geo-rep: select() returning EBADF not handled correctly
14:45 failshell joined #gluster
14:45 semiosis chirino: i achieved hello world last night
14:46 semiosis and pushed changes to the repo
14:47 ccha arf the cheat doesn't work... the volume started but can mount it :(
14:47 chirino yay!
14:48 chirino ohh readme and everything!
14:50 semiosis chirino: so i'd like to move this project into the gluster community forge, perhaps rename it to some libgfapi-ish
14:50 semiosis how does that sound to you?
14:50 semiosis s/some/somthing/
14:50 glusterbot What semiosis meant to say was: chirino: so i'd like to move this project into the gluster community forge, perhaps rename it to somthing libgfapi-ish
14:50 chirino sounds fine to me.. you also need to fix up those license headers.
14:51 JoeJulian ccha: did you change the peer name too?
14:51 semiosis chirino: yes i know the licensing needs to be addressed
14:51 chirino your a hawtjni pro now :)
14:52 semiosis glusterbot: awesome
14:52 glusterbot semiosis: ohhh yeeaah
14:52 semiosis :)
14:52 chirino about the name, I'd keep the jni suffix on it.
14:53 chirino just so it's not confused /w the c lib
14:53 semiosis i was thinking something like libgfapi-java
14:53 chirino also perhaps one day someone will implement a pure java version of the lib
14:54 semiosis i thought about that but i think it's too high a bar... the client needs to load xlators per the volume configuration sent by the server
14:54 semiosis not just a simple network protocol
14:54 chirino yeah.. but you never know.
14:54 semiosis thats true
14:54 chirino you know someone re-wrote leveldb C++ in java right?
14:55 chirino that's a lot of complex code too.
14:55 semiosis didnt know about that
14:55 chirino so it happens. :)
14:55 robo joined #gluster
14:56 chirino you really hope it happens anyways.. folks loving libgfapi and java so much they want a pure java version.
14:56 portante joined #gluster
14:56 semiosis that would be neat
14:56 semiosis so i ran into what seemed at the time to be just a minor inconvenience -- glusterfs servers won't answer to clients coming from unprivileged ports
14:57 semiosis so i ran maven with sudo -E, and moved on
14:57 semiosis when i woke up this morning i realized this was actually devastating
14:57 semiosis no one runs tomcat as root
14:57 semiosis :O
14:57 chirino yeah. I'd talk to the gluster folks about that.
14:57 semiosis hagarth: ^^^
14:57 semiosis i plan to
14:57 chirino gotta run.. bbl
14:58 semiosis ok thanks again for all your help, ttyl
14:58 ccha JoeJulian | ccha: did you change the peer name too? <-- ok it works... I just forget to rename folder base on the hostname
14:59 ccha so the cheat works
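
The "cheat" ccha and JoeJulian are describing comes down to roughly the following; oldhost/newhost/myvol are placeholders, this is a sketch rather than a supported procedure, and /var/lib/glusterd should be backed up first:

    gluster volume stop myvol
    service glusterd stop                                   # on every server
    grep -rl oldhost /var/lib/glusterd | xargs sed -i 's/oldhost/newhost/g'
    # the brick definition files under vols/<vol>/bricks/ embed the hostname in
    # their file names, so rename those too (the step ccha initially missed)
    for f in /var/lib/glusterd/vols/myvol/bricks/oldhost:*; do
        mv "$f" "${f/oldhost/newhost}"
    done
    service glusterd start
    gluster volume start myvol
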
14:59 semiosis JoeJulian: before i go digging through BZ, do you happen to know if there is an open bug re: allowing connections from unprivileged ports?
15:00 lbalbalba joined #gluster
15:01 JoeJulian There may be a couple, but I'm not sure of the context.
15:02 JoeJulian There was just a big discussion about unprivileged ports in the devel mailing list.
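
For anyone hitting the same "Request received from non-privileged port" error with libgfapi: the usual knobs are reported to be the two "insecure" options below. Treat this as a sketch to verify rather than gospel; myvol is a placeholder:

    gluster volume set myvol server.allow-insecure on
    # and in /etc/glusterfs/glusterd.vol on each server, add:
    #     option rpc-auth-allow-insecure on
    # then restart glusterd
    service glusterd restart
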
15:02 lbalbalba hi. did someone mess up 'make install' or something ?
15:02 lbalbalba i compiled latest git with '--prefix=/usr/local/glusterfs', but when i run 'service glusterd start' it fails because of: 'Failed at step EXEC spawning /usr/sbin/glusterd: No such file or directory'
15:03 JoeJulian makes sense... You wouldn't have a /usr/sbin/glusterd you would have a /usr/local/glusterfs/sbin/glusterd
15:04 lbalbalba true
15:04 lbalbalba so why is the service start cmd looking there ? it didnt use to
15:04 JoeJulian So it's a bug. Probably pretty low hanging fruit if you wanted to submit a patch.
15:05 lbalbalba sorry, i have no idea how to fix it. other than creating a symlink /usr/sbin/glusterd -> /usr/local/glusterfs/sbin/glusterd    ;)
15:05 semiosis JoeJulian: thx i'll look through the ml too
15:07 * JoeJulian wants to harass lbalbalba about that, but can't figure out a funny way to word it.
15:07 lbalbalba harass is fine
15:08 lbalbalba i guess it requires figuring out how to translate the --prefix= location into the service script
15:08 lbalbalba but i dont know anything about autoconf
15:08 kkeithley_ I don't think anyone knows anything about autoconf
15:08 lbalbalba rofl
15:08 semiosis haha
15:12 failshell joined #gluster
15:13 vpshastry joined #gluster
15:13 kkeithley_ But I'm thinking if you're building/installing/running on a systemd system that the hardcoded /usr/sbin/glusterd in .../extras/systemd/glusterd.service.in has a lot to do with it
15:15 manik joined #gluster
15:15 JoeJulian So that should be changed to @prefix@ if I'm interpreting the other .in files correctly.
15:15 vpshastry left #gluster
15:15 kkeithley_ yes
15:16 lbalbalba hrm. guess youre right. '/etc/rc.d/init.d/glusterd' contains the right directory name. somehow this doesnt seem to be used anymore, and instead i end up with '/usr/lib/systemd/system/glusterd.service' instead now...
15:16 semiosis jclift: thanks for approving libgfapi-jni!
15:16 jclift semiosis: You're welcome. :)
15:17 kkeithley_ the goggles, they do nothing I guess. I could never see how to approve projects in the forge.
15:17 jclift kkeithley: Yeah, weirdly there no obvious link when an admin logs in to Gitorious to "project admin" page that lists stuff like this.
15:18 jclift kkeithley: The only time there's a link to approve projects in the forge is when the project's approval is still pending.
15:18 jclift kkeithley: What seems to happen is that after a project is approved by anyone, the _existing_ "new project propsal" emails on the forge website get updated to remove the link.
15:19 jclift kkeithley: Weird, but not a killer.
15:19 kkeithley_ lbalbalba: so yes, if you're on f17 I think and later now, the build installs systemd .service files, not /etc/init.d/ scripts
15:20 kkeithley_ Welcome to modern (Fedora) Linux. RHEL7 too eventually
15:20 lbalbalba kkeithley_: well thats new. thanks. now to fix the systemd .service file
15:22 JoeJulian ExecStart=@prefix@/sbin/glusterd ... I think.
15:22 kkeithley_ Are one of you going to submit a new BZ or shall I just throw it into the rest of my fedora/rpm/glusterfs.spec soup?
15:22 yinyin joined #gluster
15:22 ccha about geo-replication, you can create, start, stop what about delete ?
15:23 JoeJulian I really don't like glusterd being managed by systemd unless glusterd's no longer ever going to break.
15:23 kkeithley_ in glusterfs.service.in, yes.  That's run through sed during the build or install
15:23 * jf001 observes that since geo-replication doesn't appear to work that well, starting it might be unproductive
15:24 JoeJulian works well for some... and I haven't seen many bug reports from others.
15:24 kkeithley_ JoeJulian: ??? How is `systemctl start glusterd` better or worse than `service start glusterd` or `/etc/init.d/glusterd start`?
15:25 vpshastry joined #gluster
15:25 jf001 there's a bunch of duplicates for https://bugzilla.redhat.com/show_bug.cgi?id=886808 -- and the attached patch just supresses the error, doesn't in any way fix the problem
15:25 glusterbot <http://goo.gl/nfHvK> (at bugzilla.redhat.com)
15:25 glusterbot Bug 886808: high, urgent, ---, ndevos, MODIFIED , geo-rep: select() returning EBADF not handled correctly
15:25 kkeithley_ and note that I'm no more or less a fan of systemd that I was of init.d
15:25 JoeJulian kkeithley_: "systemctl stop glusterd" kills all forked processes.
15:25 lbalbalba yes, changing 'extras/systemd/glusterd.service.in' to 'ExecStart=@prefix@/sbin/glusterd' works
15:25 kkeithley_ fair point
15:25 jclift Ouch.  I wonder if there's an option to stop it from doing that?
15:26 jclift If not, maybe we need to add one?
15:26 jclift (doesn't sound super difficult, but I don't know the politics of upstream systemd community)
15:26 JoeJulian Isn't that community just Lennart?
15:27 jclift NFI :D
15:28 ccha ok no need to delete, just stop enough for geo
15:28 jclift We could email him to ask if there's a way for it to not kill forked processes.  Has anyone checked first? (so I don't look like (more?) of an idiot) :)
15:28 kkeithley_ I don't see anything obvious in the man page
15:29 * kkeithley_ hears Obiwan whispering "use the source, Luke"
15:30 jclift Meh, if it's not in the man page, the option doesn't exist. :)
15:30 JoeJulian I'm pulling it now...
15:34 jclift Just emailed Lennart, lets see if there's something useful already in place that can be done. :)
15:34 ndevos http://www.freedesktop.org/software/systemd/man/systemd.kill.html -> KillMode=process
15:34 glusterbot <http://goo.gl/wKQZ5> (at www.freedesktop.org)
15:35 jclift ndevos: Heh.  Time for an "oops, ignore that" email to Lennart
15:35 ndevos hehe
15:36 zaitcev joined #gluster
15:37 Mo___ joined #gluster
15:38 lbalbalba well in case anyone s still interested, there's the fix to  http://review.gluster.org/#/c/5265/6
15:39 lbalbalba make glusterd.service honor --prefix value
15:39 glusterbot Title: Gerrit Code Review (at review.gluster.org)
15:39 champtar joined #gluster
15:39 lbalbalba wow, thats some bad cut n paste foo. sorry :(
15:39 JoeJulian Ok ,KillMode=process or none and set a ExecStop to kill nfs and glustershd with glusterd.
15:42 JoeJulian ... but then how do we kill the bricks at shutdown... do a glusterfsd.service with a dummy ExecStart?
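
Pulling the two threads together, the unit file change being discussed would look something like this in extras/systemd/glusterd.service.in (only the relevant directives are shown; whether KillMode=process is the right final answer, versus an explicit ExecStop, is exactly what is being debated above):

    [Service]
    ExecStart=@prefix@/sbin/glusterd
    KillMode=process
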
15:42 champtar Hi, is there an estimation of the gluster 3.4 release date?
15:45 semiosis no issue tracker in gitorious?!  :(
15:46 anands joined #gluster
15:49 jclift semiosis: Yeah.  Brilliant isn't it. :/
15:49 semiosis having used gitlab *extensively* at work over the last 6 months i have barely any patience for gitorious
15:49 semiosis gitlab is so great
15:49 jclift We'll probably have to set up one of our own, and use it for all of the Gitorious projects.
15:50 jclift semiosis: Yeah, I reckon GitLab is coming along nicely.  Seems to be evolving pretty fast.
15:50 semiosis there is a hosted gitlab
15:50 semiosis but running it yourself is super easy -- trivial even
15:50 jclift semiosis: Are they running the "Public-GitLab" branch?
15:50 semiosis their docs are comprehensive & their source is clean
15:50 JonnyNomad joined #gluster
15:50 semiosis idk, just assumed they were
15:51 jclift https://github.com/ArthurHoaro/Public-GitLab
15:51 glusterbot Title: ArthurHoaro/Public-GitLab · GitHub (at github.com)
15:51 jclift If they're not, it would be a blocker
15:51 semiosis ohhh i see
15:51 semiosis i didnt know about that
15:52 semiosis i thought you mean is the hosted service running the same code as the open source project
15:52 semiosis which i assume they are
15:52 semiosis the mainline gitlab does have ability to serve the git repo via http publicly
15:53 jclift Some of the OSAS (group I work for) guys in RH have setup GitLab purely for internal sharing.  It's a pain that by _design_ the mainline gitlab won't allow any public sharing of a project other than just a git repo.  Definitely blocker to widescale usage.
15:53 jclift Thus that Public-GitLab fork.
15:54 semiosis thats a great point
15:54 semiosis i completely forgot about that, and it's definitely a blocker
15:55 yosafbridge joined #gluster
15:55 jclift Anyway, so we'll (Gluster Community) will probably need to setup an issue tracker for some sort, and then have that be used for all of the Gluster Forge projects.
15:55 jclift Absolutely NFI how though. :)
15:55 bala1 joined #gluster
15:58 x4rlos joined #gluster
15:59 lbalbalba well its been fun, but i gotta go. later
15:59 * semiosis stars his own repository
15:59 semiosis bye lbalbalba
16:00 jclift lbalbalba: Have a good night dude. :)
16:03 phox joined #gluster
16:03 phox hey.  so is there a complementary operation to 'volume set' that GETS me the values back?
16:03 phox I remember at some point being able to view the "access" list but now I don't seem to be able to get at it anywhere and there's nothing obvious in the online help (at least that I didn't miss)
16:04 semiosis gluster volume info
16:04 phox not seeing access info there =/
16:04 phox already looked
16:05 semiosis auth?
16:05 phox I've forgotten.  The open-ended-by-host setup.
16:05 aliguori joined #gluster
16:05 semiosis no i mean the options should be auth.* if i understand you right
16:05 semiosis auth.allow for example
16:06 JoeJulian ... and they won't be listed if they're not changed from the default.
16:06 phox sorry, yeah, auth.allow
16:06 phox JoeJulian: they are on one of these volumes and I'm not seeing them listed =/
16:06 phox which is weird.
16:07 phox hm.  actually.
16:07 semiosis whenever i get into an argument with software, the software wins
16:07 phox what's the default auth configuration?
16:07 phox unrestricted?
16:07 phox semiosis: ha.  so not true.
16:08 * phox looks at the about-to-be-decommissioned machine that still has a '/hmee #' prompt when I 'cd /home' ;)
16:08 phox so yeah I guess it got unset for the one I thought it was set for.  or recreated and not set.
16:08 JoeJulian semiosis: Who's the upstart expert you talk to?
16:09 semiosis SpamapS aka Clint Byrum (sp?)
16:09 JoeJulian who spamaps
16:09 JoeJulian gah
16:10 semiosis why?
16:10 JoeJulian I'm trying to get something to complete before dmpref
16:10 semiosis maybe i can help?
16:10 phox so I presume I need to include localhost in auth.allow if I want to mount locally, yeah?
16:11 * phox has forgotten these details apparently
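
To untangle the exchange above: gluster volume info only lists options that differ from their defaults, and auth.allow defaults to '*' (unrestricted), which is why a volume with no auth settings shows nothing. A sketch with a placeholder volume name; the note about local mounts is an inference to verify rather than a documented rule:

    gluster volume info myvol                       # only non-default options listed
    gluster volume set myvol auth.allow '192.168.1.*,127.0.0.1'
    gluster volume reset myvol auth.allow           # back to the default '*'
    # once auth.allow is restricted, the address a local mount connects from
    # (127.0.0.1 and/or the server's own IP) has to be in the list as well
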
16:11 semiosis JoeJulian: want to PM me if it's off topic?
16:12 recidive joined #gluster
16:13 jclift Oh wow.  Just wow: http://nostarch.com/mg_electricity.htm
16:13 glusterbot Title: Manga Guide to Electricity | No Starch Press (at nostarch.com)
16:13 * jclift wants a "Manga Guide to GlusterFS"
16:13 jclift WANT
16:13 jclift :)
16:14 daMaestro joined #gluster
16:14 phox I do tend to read HOWTOs in the wrong order o.O
16:15 * jclift can't read manuals unless it's critically important for something
16:15 jclift Way too boring. :(
16:17 yinyin joined #gluster
16:18 JoeJulian I can't read manuals, because when I start using stuff, they don't usually exist yet.
16:19 jclift Common problem in real world IT
16:20 phox jclift: software manuals tend to suck a lot less than manuals for physical stuff
16:20 phox motherboard manual AKA "reference guide when the thing you want to know isn't screened directly onto the board"
16:20 jclift heh
16:21 phox and if it's "which one is pin 1" it should have an entry saying "you're an idiot.  it's always the one closest to the back left corner" :P
16:22 * phox resumes manual configuration of stuff he doesn't have time to write and test scripts for yet
16:22 jclift :)
16:22 JoeJulian Except on some supermicro boards which threw a few at 90 degrees.
16:22 phox JoeJulian: yeah I've seen ones where the block runs front-to-back and it's at the front for some idiotic reason
16:22 phox we have a few SM boards here; not sure if it was one of those or one of the other ones I've owned
16:25 phox uh yay this is totally out of date http://gluster.org/community/documentation/index.php/Gluster_3.1:_Setting_Volume_Options
16:25 glusterbot <http://goo.gl/3RW2H> (at gluster.org)
16:25 phox "the following is an incomplete list of options you can set" seeing as it doesn't have read-ahead-page-count
16:26 phox anyways, the reason I was there: anyone recommend setting any other performance.* options to non-default values?
16:26 hagarth joined #gluster
16:26 JoeJulian Nope
16:26 jclift phox: This might be of interest: http://review.gluster.org/#/c/5232/
16:26 glusterbot Title: Gerrit Code Review (at review.gluster.org)
16:27 phox thinking e.g. cache-size
16:27 jclift That commit is where the admin-guide has been moved to markdown format.
16:27 JoeJulian In my fairly limited testing, I couldn't really get any of my tests to show any difference when modifying any of the performance settings.
16:27 * phox looks
16:27 JoeJulian yay
16:27 jclift phox: So, in theory it should be a bunch easier for people to update the docs
16:27 kkeithley_ bloody mock builds in rpm.t on build.g.o :-(
16:27 phox JoeJulian: our testing showed abysmal performance with the default read-ahead-page-count
16:27 phox JoeJulian: and I'm pretty sure being able to double or quadruple what we have it set to now would get us some gain
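
For concreteness, the kind of tuning phox is describing would be done like this; the option names follow what's mentioned above plus the stock cache-size option, the values are illustrative only, and (as JoeJulian notes) they're worth benchmarking before and after:

    gluster volume set myvol performance.read-ahead-page-count 16
    gluster volume set myvol performance.cache-size 256MB
    gluster volume set help          # full list of settable options and defaults
    gluster volume info myvol        # shows what you have actually changed
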
16:28 * jclift reckons we should chuck the markdown version admin guide on a wiki or something, so it can be live edited
16:28 jclift Haven't looked into that bit yet
16:29 JoeJulian jclift: That or the forge.
16:29 vimal joined #gluster
16:29 jclift JoeJulian: Yeah, I was thinking a wiki on the forge.  Need to find out if it supports markdown properly, as I don't remember
16:29 jclift Hmmmm, does anyone know if there's a programmatic JSON interface or something for gluster, that can be used for doing um... cli type of things?
16:30 JoeJulian I still think we should have used asciidoc...
16:30 jclift eg volume creation
16:30 JoeJulian xml
16:30 jclift JoeJulian: Ascidoc isn't off the cards
16:30 jclift xml?
16:30 JoeJulian gluster --xml volume info
16:31 jclift That's a cli tho.
16:31 jclift I'm kind of wanting something listening on a socket that I can send requests to.
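
A quick illustration of scripting against the --xml output JoeJulian mentions, since there is no separate JSON/socket API at this point; the XPath assumes the usual volInfo/volumes/volume/name layout and may need adjusting per release:

    gluster --xml volume info \
        | xmllint --xpath '//volume/name/text()' -      # list volume names (formatting is rough)
    gluster --xml volume status myvol > /tmp/status.xml # same idea for status
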
16:31 * phox wonders if there's a place on freenode to ask about squashfs... heh
16:32 JoeJulian later... got to head to p.t.
16:32 phox military?
16:32 jclift Have fun dude. :)
16:33 JoeJulian Physical Therapy. My shoulder's killing me.
16:33 phox ah.
16:33 phox sucky =/
16:33 phox good luck.  I know that kinda pain :l
16:37 bulde joined #gluster
16:43 vpshastry joined #gluster
16:48 satheesh joined #gluster
16:50 mmalesa joined #gluster
16:54 Mo___ joined #gluster
16:58 semiosis why didn't i see rpc-auth-allow-insecure in gluster volume set help last night.  was i too tired, or is it not there?
16:59 anands joined #gluster
17:00 failshell does gluster support logging to syslog?
17:01 semiosis not that i know of
17:07 vpshastry joined #gluster
17:18 ndevos failshell: yeah, check the 'gluster volume set help' options
17:19 aravindavk joined #gluster
17:23 rwheeler joined #gluster
17:26 yinyin joined #gluster
17:27 robo joined #gluster
17:33 andreask joined #gluster
17:40 rotbeard joined #gluster
17:40 chirino semiosis: ping
17:40 semiosis hey
17:40 msacks joined #gluster
17:41 chirino gonna transfer ownership of my repo to you so that your become the root.
17:41 semiosis cool
17:41 chirino that way issues are opened under you and stuff.
17:41 semiosis that's great
17:41 msacks Hallo… If I am setting up a test environment on virtual box, it says to use the virtio driver. Although possible, it seems this is more of a kvm thing. Is it necessary for me to use virtio when using virtual box on my laptop?
17:41 semiosis didnt know you could do that
17:41 chirino crap. it failed with: semiosis already has a repository in the chirino/glfsjni network
17:42 chirino perhaps you need to drop yours first.
17:42 semiosis ok i'll try
17:42 semiosis deleted
17:43 chirino ok got: Repository transfer to semiosis requested
17:44 semiosis there it is... https://github.com/semiosis/glfsjni
17:44 glusterbot Title: semiosis/glfsjni · GitHub (at github.com)
17:44 chirino coolio.
17:44 chirino you should be able to rename it I think.
17:44 semiosis renamed to libgfapi-jni
17:44 semiosis yes
17:45 chirino github rocks.
17:45 semiosis indeed
17:46 semiosis afk for a bit
17:54 gdavis33 joined #gluster
18:04 SpNg joined #gluster
18:06 bulde joined #gluster
18:28 dbruhn joined #gluster
18:34 kuroneko4891 joined #gluster
18:35 ccha if I created bricks with hostname and not with ip... does gluster need to resolve dns too often ? need it use nscd ?
18:40 failshell joined #gluster
18:43 vdrmrt joined #gluster
18:43 vdrmrt_ joined #gluster
18:47 bsaggy joined #gluster
18:59 yinyin joined #gluster
19:02 dberry joined #gluster
19:02 dberry joined #gluster
19:03 kushnir-nlm joined #gluster
19:06 kushnir-nlm Hi folks! I've got a problem with a production gluster 3.3.1 setup running on RHEL 6. I used to have a volume, then I deleted that volume, yet I am still getting errors such as: [2013-06-27 14:58:26.023125] E [afr-self-heald.c:409:_crawl_proceed] 0-<volname>-replicate-4: Stopping crawl for <volname>-client-14 , subvol went down
19:07 kushnir-nlm I've got no clients up, and that volume has been deleted...
19:11 JoeJulian Stop glusterd and any other glusterfs processes on the server (not glusterfsd processes if you still have other volumes running), then start glusterd again.
19:11 JoeJulian kushnir-nlm: ^
19:11 kushnir-nlm Hi Joe. Thanks. I've tried that. Doesn't help. I've even rebooted the server
19:12 kushnir-nlm gluster volume info and gluster volume status do not show the old volume.
19:12 JoeJulian Interesting. Which version?
19:13 semiosis 3.3.1
19:13 * semiosis sends coffee
19:13 kushnir-nlm 3.3.1 from RPMS on the gluster.org website.
19:14 chirino fyi: tried to: ./configure --prefix /home/chirino/opt/glusterfs
19:14 JoeJulian lol
19:14 chirino but make install fails cause it's trying to install stuff to /sbin
19:14 JoeJulian I had to make some overnight configuration changes. Haven't slept.
19:15 chirino on the release-3.4 branch
19:16 gdavis33 left #gluster
19:16 JoeJulian kushnir-nlm: Only one server?
19:16 jclift Hmmm, Avati's idea for auto-selection of networks instead of connection groups is interesting.
19:16 kushnir-nlm Used to be more than one server, but when I deleted the old volume, I also shut down the other servers. So, right now, only one server, with a different new volume.
19:17 jclift Less controllable, but more dynamic
19:17 JoeJulian chirino: please file a bug report
19:17 glusterbot http://goo.gl/UUuCq
19:18 kushnir-nlm Joe: Will do. But in the meantime, any fix you can recommend? I've tried to delete /var/lib/glusterd... Anywhere else this could be coming from?
19:18 kkeithley_ chirino: actually "we" know about it.
19:18 jclift Not sure if it could allow for different networks to be selected for different volume, but also not sure that's needed much
19:18 chirino still need gug?
19:18 chirino bug even?
19:18 JoeJulian kushnir-nlm: Well somewhere the self-heal daemon is picking up configuration for the other servers. You can wipe /var/lib/glusterd/vols if you're trying to start over.
19:19 kushnir-nlm Joe: Yep. Did that. Cleared out all the logs just in case... Server has no peers.
19:20 JoeJulian jclift: yep. PCI data on one network, other data on another, is one use case that comes to mind.
19:21 kkeithley_ chirino: depends on whether you'll use 3.4 when it's released. If you're going to stay on 3.3.x there's some other stuff that needs to a backport to 3.3 first to really fix it there.
19:21 jclift PCI data == ?
19:21 JoeJulian @lucky pci
19:21 glusterbot JoeJulian: https://www.pcisecuritystandards.org/
19:21 jclift thx.  looking. :)
19:22 jclift JoeJulian: Gotcha.  Networks with different levels of trust, etc.
19:22 chirino opened https://bugzilla.redhat.com/show_bug.cgi?id=979164
19:22 glusterbot <http://goo.gl/mHbDC> (at bugzilla.redhat.com)
19:22 glusterbot Bug 979164: unspecified, unspecified, ---, amarts, NEW , Can't 'make install' as non-root
19:22 jclift Interesting.  The connection groups approach would easily allow for that.  (and we could optionally add SSL certs per connection group too)
19:22 JoeJulian kushnir-nlm: What you're describing is impossible, so you must have installed gluster on your quantum computer. :)
19:23 jclift JoeJulian: Is there any chance kushnir-nlm might still have left over processes running in memory using old data, that need to be killed/restarted?
19:23 kushnir-nlm Joe: Haha. That's great. Any path to follow for tracking down where that info is coming from?
19:24 aliguori joined #gluster
19:24 kushnir-nlm jclift: I rebooted, errors are coming back.
19:25 JoeJulian With no configuration and no peers, there's no place for glustershd to pick up configuration after you rebooted. If it's making stuff up, either you've missed something or you're using skynet.
19:25 jclift kushnir-nlm: What's the output from gluster peer status?
19:25 jclift fpaste it?
19:25 failshell you can disable NFS exports per volume. i dont seem to find the option to disable samba
19:25 failshell i call upon the intertubes' help
19:27 jclift Damn, I think the intertubes is ignoring you? :(
19:27 semiosis there is no spoon, nor samba
19:27 failshell *crickets*
19:29 JoeJulian failshell: Are you being facetious?
19:30 kushnir-nlm Yep. One second. Need to get on the box.
19:30 kkeithley_ gluster volume set
19:30 kkeithley_ gah
19:30 failshell i looked at all options with gluster volume set help
19:30 mmalesa joined #gluster
19:30 jclift We really need to upgrade our "gluster help" cli's capabilities too
19:30 kkeithley_ gluster volume set $vol nfs.disable true
19:30 failshell all the volumes are mounted at /mnt/samba/foo
19:30 JoeJulian So you're serious. There is no native cifs. You need sambe for that.
19:31 JoeJulian s/sambe/samba/
19:31 glusterbot What JoeJulian meant to say was: So you're serious. There is no native cifs. You need samba for that.
19:31 jclift failshell: You're needing to look in /etc/samba ?
19:31 failshell uh that's coming from RHS
19:31 failshell ok nvm :)
19:32 failshell nothing to see here folks, move along
19:32 glusterbot New news from newglusterbugs: [Bug 979164] Can't 'make install' as non-root <http://goo.gl/mHbDC>
19:36 kkeithley_ chirino: I thought you said you were using 3.3.1?
19:36 chirino kkeithley_: I'm building from source now.
19:37 chirino kkeithley: I can be flexible.
19:37 kkeithley_ The fix for 3.4 is already in the pipeline.
19:38 kkeithley_ http://review.gluster.com/5263
19:38 glusterbot Title: Gerrit Code Review (at review.gluster.com)
19:38 kkeithley_ and http://review.gluster.com/5264
19:38 glusterbot Title: Gerrit Code Review (at review.gluster.com)
19:41 kkeithley_ hence my question about whether you'll use 3.3.2 or 3.4.0; determines whether I put the effort into backporting stuff to release-3.3 for the forthcoming 3.3.2 or not
19:41 JoeJulian Ahem...
19:41 kkeithley_ yes....
19:42 JoeJulian I use 3.3
19:42 JoeJulian btw... where were you? I was hoping you'd at least join us for dinner or something while we were out there.
19:42 kkeithley_ do you install with --prefix? No? I didn't think so.
19:42 JoeJulian ok... like I said... no sleep...
19:43 kkeithley_ I tried to get down. My wife and I were competing and by the time I could get away you guys were already gone
19:43 JoeJulian I've got the attention span of a goldfish.
19:43 kkeithley_ np.
19:48 l0uis joined #gluster
19:57 NcA^_ joined #gluster
20:02 jclift Hmmm, the glusterd doesn't maintain a host cache (UUID="somehost.mydomain.org") between reboots does it?
20:04 ndevos only the /var/lib/glusterd/peers/*
20:04 semiosis every peer knows all hostname/uuid associations for the whole cluster
20:04 jclift Cool. :)
20:17 jclift JoeJulian: Stole your PCI compliance mention for response to Avati.
20:18 jclift It'll be interesting to see what we eventually decide to use.
20:18 * jclift is just glad we're looking to solve the hostname/interface problem (finally I guess) :D
20:27 rcoup joined #gluster
20:31 mooperd joined #gluster
20:38 RangerRick6 joined #gluster
20:40 yinyin joined #gluster
20:42 kushnir-nlm Hi Joe, jclift: sorry for the long response, had to put out a fire... Gluster peer probe says "No peers present"... Just to be sure, I even disconnected the network, still getting those errors about the old volume
20:43 kushnir-nlm Fresh ones, with fresh time stamps
20:43 jclift kushnir-nlm: Which OS are you on?
20:43 kushnir-nlm RHEL 6
20:44 jclift Cool.  Ok, are you able to shut down glusterd and glusterfsd without causing people problems?  i.e. is this a production server or not, etc? :D
20:44 kushnir-nlm yep
20:44 jclift k, shut them both down for now
20:44 kushnir-nlm there's only one
20:45 jclift err... one what?
20:45 jclift :D
20:45 kushnir-nlm One server, you meant shut down both processes... :)
20:45 kushnir-nlm Ok, both processes off.
20:45 jclift k, do a "ps -ef|grep -i gluster"
20:46 jclift Saying that because even though _in theory_ it sounds like gluster shouldn't be running, it also sounds like _in practise_ it's not working like theory says. :)
20:46 jclift kushnir-nlm: Are there any more gluster process showing up from the ps -ef ?
20:47 kushnir-nlm Yep. One sec, let me post the output
20:47 jclift thx
20:49 kushnir-nlm sorry, can't paste in, too many layers of vpn and rdp...
20:49 jclift email it to me?
20:49 jclift the output that is
20:50 jclift jclift@redhat.com
20:50 jclift kushnir-nlm: Basically, do anything you can to get the output where someone can see it :D
20:50 kushnir-nlm here: /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterf/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S /tmp......socket
20:50 jclift k, the NFS server is still running.
20:51 kushnir-nlm this is all
20:51 jclift k.  do a "kill" (not kill -9) on the process for that, so it shuts down too
20:52 jclift Double check with the ps-ef again, to make sure nothing from gluster is still around. :D
20:52 kushnir-nlm Ok. It's gone.
20:53 JoeJulian @processes
20:53 glusterbot JoeJulian: The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/F6jqx for more information.
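
The walk-through that follows boils down to a checklist like this; OLDVOL is a placeholder, and the rm is only appropriate if that volume's definition really is meant to be gone for good:

    service glusterd stop
    ps -ef | grep -i gluster               # any leftover glusterfs (nfs/shd) or glusterfsd?
    kill <pid>                             # plain kill, not -9, for each leftover daemon
    grep -ri OLDVOL /var/lib/glusterd /etc/glusterfs
    rm -rf /var/lib/glusterd/vols/OLDVOL
    mv /var/log/glusterfs /var/log/glusterfs.bak && mkdir /var/log/glusterfs
    service glusterd start
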
20:53 jclift k.  As root, do a grep -ri for that "missing volume" in /var/lib/glusterd/
20:53 JoeJulian ... though I reboot should have cleared them all before, too.
20:53 kushnir-nlm Ok. Nothing.
20:53 jclift JoeJulian: Yeah, we're working outside of "how things are supposed to be working" :D
20:54 jclift Hmmm
20:54 JoeJulian hehe
20:54 chirino semiosis: pushed an updated.
20:54 JoeJulian ..."rm -rf /" ;)
20:54 jclift :p
20:54 chirino export GLFS_HOME should be optional now
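
Going by the conversation, trying the binding out presumably looks something like the following; the repository URL reflects the rename above, and GLFS_HOME plus the sudo -E workaround only matter until the unprivileged-port issue is sorted out:

    git clone https://github.com/semiosis/libgfapi-jni.git
    cd libgfapi-jni
    export GLFS_HOME=/usr/local/glusterfs   # only if glusterfs lives outside the default paths
    sudo -E mvn install
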
20:54 kushnir-nlm I'm telling you :)
20:54 jclift kushnir-nlm: ok, backup the contents of your /var/log/glusterfs/ directory (eg all the log files), then empty out the directory so there's nothing left in it
20:55 jclift kushnir-nlm: Tell me when you've done that
20:56 a2 jclift, the way i'm thinking of hostmapper is something like this:
20:56 jclift kushnir-nlm: Also do the grep -ri for the missing volume in /etc/glusterfs/
20:57 a2 jclift, each system is identified by a UUID, and can have multiple IPs associated with the uuid
20:57 jclift a2: Yep, that makes sense
20:57 a2 jclift, for each of the IP which can be potentially used by clients, we perform a 'gluster peer probe IP'
20:57 jclift k, to build up the list
20:57 a2 so if there is a 10g/e, 1g/g AND IB on a host, and we want to potentially use all of them, then we perform three peer probs, one for each IP
20:58 jclift Sure.
20:58 a2 and this knowledge is spread across the trusted pool, that the UUID can be reached through three IPs
20:58 a2 (so this way, backup networks are completely kept blind)
20:58 jclift How do we then control which clients have access to only some volumes on the cluster, and how to we enforce only some networks to be used by some volumes?
20:58 JoeJulian Before we go too far down that road, I just now though about integrating with Open vSwitch / Openflow... Thoughts?
20:59 kushnir-nlm jclift: nothing in /etc/glusterfs, moved /var/lib/glusterd to safe place
20:59 a2 the way you do access control then is the same way you do access control today - specify allow and deny ranges per volume
20:59 a2 *allow and deny IP ranges
20:59 jclift JoeJulian: I don't yet have any understanding of how Open vSwitch works
20:59 jclift kushnir-nlm: Ugh.  Not /var/lib/glusterd/
20:59 JoeJulian No, it's okay.
20:59 jclift That's the data dir
20:59 JoeJulian He's not trying to save that.
20:59 jclift kushnir-nlm: /var/log/glusterfsd/
21:00 jclift kushnir-nlm: /var/log/glusterfs/ I mean :D
21:00 JoeJulian /var/log/glusterfs/
21:00 jclift snap
21:00 a2 jclift, we are just "formalizing" the different paths you reach the brick.. nothing prevents a user from doing a sed on the volfile and reach the server from a different interface - we are just making that more convenient and removing inconvenience.. in terms of security we are no better or worse than today
21:00 kushnir-nlm sorry, misread, fixed /var/lib/glusterd, backed up and emptied /var/log/glusterfsd
21:01 kushnir-nlm err /var/log/glusterfs
21:01 a2 jclift, so for eg. if you want access only from 10.1.2.x (10g/e), add it to allow list of the volume, and to disable infiniband on volume B, add 10.1.3.x (the IB subnet) to the reject list, etc.
21:01 a2 all that remains the same
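
Concretely, the per-volume access control a2 is pointing at already exists as volume options today; a sketch with made-up subnets matching his example:

    gluster volume set volA auth.allow  '10.1.2.*'   # volume A: 10g/e subnet only
    gluster volume set volB auth.reject '10.1.3.*'   # volume B: keep the IB subnet out
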
21:01 jclift a2: k.  I think the connection groups approach would enable better security and control, but it's probably not as well suited to "cloud scale" environments as what you're suggesting
21:02 JoeJulian If you're planning to start over, I'd still kill /var/lib/gluserd...
21:02 a2 jclift, i'm trying to minimize new user visible concepts and commands
21:02 a2 jclift, so the "new" thing, would be to ask the user to peer probe every interface on the server
21:02 kushnir-nlm I can always restore /var/lib/glusterd right?
21:02 JoeJulian right
21:02 a2 and everything else is "dynamically figured out"
21:03 jclift a2: A host mapper daemon could make for some really nifty automatic scaling.  Well, as long as it doesn't become a pain point. :D
21:04 jclift kushnir-nlm: Oh, if you're really just wanting to wipe the gluster installation and "start over", then yeah, nuke the contents of /var/lib/glusterd/ and /etc/glusterfs/ (but leave the directory itself there to be populated in a minute)
21:04 a2 jclift, the hostmapper can be implicit in the getspec too.. it can realize that the client is anyways going to perform queries for all the UUIDs in the volfile, and perform an in-flight mapping operation to a client-specific in-memroy copy of the volfile and return that to the client.. this way there are no explicit hostmap queries across the wire
21:05 JoeJulian both would work. A default 'group' that automatically has all the interfaces, but the ability to reassign them to other groups. That group would be accessed like {volume}[.default] or {volume}.backup where the newly defined groupname is "backup"
21:06 nueces joined #gluster
21:06 JoeJulian jclift: no, not /etc/glusterfs unless you're going to reinstall the packages.
21:06 jclift Really?
21:06 JoeJulian really. /etc are static configuration files.
21:06 Eco_ joined #gluster
21:06 JoeJulian /var/lib are program state.
21:06 jclift k
21:07 a2 JoeJulian, the thing about groups is that it is too "rigid".. your current set of 16 servers have 1g/e interface.. and the next 8 servers you add on have 1g/e and 10g/e.. so you want to exploit 10g/e from clients where possible
21:07 JoeJulian (according to FHS)
21:07 jclift a2: ??? That's not making sense to me. :)
21:07 JoeJulian That's where I'd rather just plug in to openflow.
21:08 jclift kushnir-nlm: Ok, if you've already nuked /etc/glusterfs/ I guess you'd better reinstall the gluster rpms then too. :(
21:08 JoeJulian jclift: 1000 1 gig clients all being fed from the 10gig port.
21:08 a2 jclift, what if a half your cluster has 1g/e ports, and the other half (because they are newer hardware which was bought and added much later) has 1/ge AND 10g/e
21:08 kushnir-nlm Joe, jclift: No, just removed the old logs and restarted... a new glusterfsd.log is not there
21:08 a2 jclift, or 1g/e AND IB
21:09 a2 jclift, as long as the cluster is homogenous, groups work just fine
21:09 jclift a2: I'm really not seeing how groups would be hampered by any of what you mention?
21:10 jclift Oh... you're meaning that with your approach it would attempt to match bandwith to the # of clients appropriately?
21:10 jclift eg some kind of "this interface has more bandwidth, so get more clients on this one" ?
21:10 kushnir-nlm Side question, are there rpms of 3.3.1 newer than 3.3.1-1 out there?
21:11 a2 jclift, right.. for clients which do have 10g/e, or IB, each client can potentially use a separate/appropriate faster interface
21:11 JoeJulian Yes, there in the super-secret kkeithley_ fedorapeople repo... ;)
21:11 a2 (in hetergeneous cluster)
21:11 jclift kushnir-nlm: Which source of rpms are you using at the moment?  kkeithley has a repo with newer builds of 3.3.1 in it
21:12 kushnir-nlm I'm using the gluster.org 3.3.1 latest RPMs... Are the kkeithley RPMs an ok drop in replacement?
21:12 JoeJulian http://repos.fedorapeople.org/repos/kkeithle/glusterfs/
21:12 glusterbot <http://goo.gl/EyoCw> (at repos.fedorapeople.org)
21:12 jclift a2: k, I can see your point there about automatic scaling.  There's not reason that couldn't also be in the groups approach too.
21:12 kkeithley_ My super secret retired repo.
21:12 JoeJulian kushnir-nlm: there were the defacto standard. I'm not sure why kkeithley_ isn't putting them in d.g.o
21:13 jclift a2: After all, it's just a matter of gluster having some concept of which network interfaces can handle more clients, and sending more as appropriate.  The same code could work for either
21:13 JoeJulian Just not feeling like pulling critical bugfixes anymore?
21:13 kkeithley_ Didn't think anyone was interested, but I can move the old stuff over.
21:13 * jclift shrugs :D
21:14 kushnir-nlm Joe: Ok. Is it safe to just install those and start my volume back up? I don't need to delete an existing 3.3.1 volume and re-build it, do I?
21:14 jclift a2: Note, I don't hate your idea at all.  I just think the "more controllable" approach is better for SysAdmins
21:14 * jclift has been bitten by two many corner cases of the years where "automatic" software makes dumb choices
21:14 a2 jclift, if we're adding new multinetwork infrastructure like this anyways, we should rather do it in a way which allows more flexibility, like network failover (if 10g/e or IB switch dies, fall back by reconnecting via 1g/e etc.) pre-defining the connection group at mount time would be kind of limiting for such things, no?
21:14 JoeJulian kushnir-nlm: perfectly.
21:15 jclift s/of the/over the
21:15 kushnir-nlm Well, the gluster.org stuff does say to use the 3.3.1-1 RPMS for production builds ;)
21:15 * JoeJulian prefers -15.
21:16 kushnir-nlm Ok, so, no point beating a dead horse. I'll install the -15, and then keep an eye for the same errors
21:16 jclift a2: Interesting point.  A fully automatic/dynamic approach could be better for that kind of scenario
21:17 a2 jclift, so if the hostmapper returned the list of all IPs from which the server was peer probed, the client can try different IPs after a network disconnect to see if other paths to the same server are available
21:17 kkeithley_ Most of the 3.3.1-1 thru 3.3.1-15 changes are to UFO. And there's an RDMA fix in there
21:17 kushnir-nlm UFO?
21:17 kushnir-nlm Aliens!
21:17 kushnir-nlm I knew it. :)
21:17 a2 and if multi paths exist, it can figure out which is the fastest (latency, bandwidth, user configured etc.) to use in normal cases
21:17 kkeithley_ Unified File and Object, i.e. OpenStack Swift.
21:18 kushnir-nlm Ahh
21:18 kkeithley_ Being renamed to G4S (gluster for swift)
21:18 jclift a2: That seems sensible.
21:19 * jclift points out the exact same code would work for connection groups too  (ie sysadmin specifies multiple IPs on a server, client gets to choose which one it wants to use based on the same metrics) :D
21:19 jclift k, being devils advocate here  Heh
21:20 a2 jclift, i thought the connection group would get chosen at mount point (and kind of limits the ability to dynamically fallback to an IP in a different group if the first network fails?)
21:20 jclift Maybe the problem is that I'm coming from libvirt, where we control everything from the command line with virsh, and we added new commands to virsh all the time
21:21 a2 jclift, or did you mean a connection-group itself would have multiple IPs for a server?
21:21 jclift a2: Yeah, I meant a connection group itself could have multiple IPs for a server.
21:21 a2 jclift, then what is the use of the connection group abstraction?
21:21 jclift a2: Like, I hadn't really thought of that before, but it does make sense now we're thinking about it :D
21:22 jclift a2: Because it's all controllable _per volume_.
21:22 kushnir-nlm Meanwhile one of two other questions-- Here's the setup I'm going for: Gluster servers are VMs on hyper-V. Each gluster server has 4 x 512GB SSD, 4 vCPU, 16GB of RAM. SSDs are pass through. Each SSD is a brick. I will have 3 such VMs, each in a different host. Network between is 10GbE. Network is dedicated to Gluster... A couple of times I've tried to create a distributed volume, populate the
21:22 kushnir-nlm volume with data, add a second server bump up my replica count to 2, and then I watch the replication start, sputter along with lots of CPU activity, and then kind of go nowhere... ~20GB replicated after 2 days...
21:22 a2 jclift, for the purpose of? security?
21:22 a2 oh you mean volume X must be accessible over 10g/e, volume B over 1g/e?
21:23 kushnir-nlm Right now, no other VMs exist on the hosts, so it's not a resource contention issue...
21:23 jclift a2: SysAdmin can basically say "ok volume X, you get to use interfaces 1 2 3".  "and volume Y, you're on interfaces 4 5 6"
21:24 a2 hmm, i see the use case
21:24 a2 ok, how about this -
21:24 a2 gluster peer probe 10.1.1.17 10gig
21:24 kushnir-nlm So, do I need more than 1 vCPU/CPU core/logical processor per brick? Do I need more RAM? I can't figure out what the problem could be.
21:25 a2 gluster peer probe 10.1.2.17 1gig
21:25 a2 gluster peer probe 10.1.3.17 IB
21:25 jclift a2: Security is one aspect of things, but it could also be used for general capacity planning.  eg to ensure the more business important volumes go over known-to-not-be-crowded network, and other ones (eg developer volumes) get to fight it out on some lower capacity network
21:25 a2 (give a nickname or tag or groupname, whatever we want to call it, for each interface we peer probe)
21:26 a2 at the time of mount, ask for the preferred nickname/tag/groupname/networkname, and "prefer" that entry in the hostmapper output
21:26 a2 (and still fallback to other IPs returned by hostmapper in case of network failure)
21:27 a2 all volumes can potentailly be mounted via any networkname (provided access control permits it)
21:27 jclift a2: In your example, are those 3 peer probes different hosts, or different interfaces on the same host, or ?
21:27 a2 jclift, different IPs on the same host
21:27 JoeJulian configurable, force maybe? Don't want PCI or HIPA data going across the wrong network connection just because it's there.
21:28 jclift a2: k, got it, that's the "building up the list of potential network interfaces" bit,
21:28 a2 jclift, yes, building up the list of potential network interfaces, and giving them "names" at the same time
21:29 badone joined #gluster
21:29 a2 i feel that's the most "natural" way to introduce this concept in a non-intrusive way.. if you don't specify a name we pick "default" by default
21:29 jclift a2: Yeah, we'd need to think of some kind of access control/permission layer too
21:29 a2 jclift, the only place to perform access control is *in the brick*
21:30 * jclift doesn't really like the "all volumes can be mounted by anyone" concept
21:30 jclift a2: Why not on the server?
21:30 kkeithley_ grrr.  what should be a no-op change to the glusterfs.spec.in file and two times in a row I've had the regression fail in 887145 due to "mount.nfs mount system call failed". Same change on master works fine. wft
21:30 a2 jclift, nothing prevents a user from handcrafting a volfile to reach the brick through the other interface..
21:31 a2 jclift, brick is the server..
21:31 jclift a, k.  Yeah, server definitely
21:31 l0uis /topic
21:31 jclift No way security on a client is effective
21:32 badone joined #gluster
21:32 a2 glusterd provides convenience, brick provides data and security.. so this whole multi-network is purely matter of convenience.
21:32 a2 brb, meeting
21:32 jclift a2: With your concept, you might need to explicitly include how self-heal data between servers should operate.  eg select their interfaces
21:32 jclift a2: np
21:34 jclift a2: From your email "You can already do that by limiting allowing or rejecting access to each volume from a certain subnet".  Cool, I didn't you that was already possible.
21:34 jclift s/you/know/
21:34 glusterbot What jclift meant to say was: a2: From knowr email "You can already do that by limiting allowing or rejecting access to each volume from a certain subnet".  Cool, I didn't you that was already possible.
21:34 jclift My typing today is really non optimal :)
21:35 jclift a2: Is it in the volume set options?
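It is; the "certain subnet" control a2's email refers to is presumably the existing per-volume auth options, which take address patterns (the volume name and addresses below are illustrative):

    gluster volume set myvol auth.allow 10.1.1.*
    gluster volume set myvol auth.reject 10.1.2.*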
21:36 yinyin joined #gluster
21:37 jclift kushnir-nlm: Yeah, that does seem weird.
21:38 jclift kushnir-nlm: Out of curiosity, are there other VMs on the hyper-v hosts?
21:40 kushnir-nlm jclift: No other VMs. Hosts are Dell PE R720xd, Perc H710P 1GB RAID cards, 256GB RAM, 2 x E5-2680, Intel x520 NICs... They're beasts, not overtaxed at all by that one VM :)
21:40 jclift Why the hyperv layer then, instead of bare metal?
21:41 kushnir-nlm All I have are those 3 servers, I need gluster, 6 production web servers, 2 stage servers, 1 alpha server, some auxiliary functions.
21:41 kushnir-nlm For this project anyway
21:42 jclift Hmmm, I haven't really touched the perf side of things yet, so I can't personally give useful advice here. :(
21:42 jclift kushnir-nlm: If you don't get a good answer here on IRC in the next few hours, maybe email the gluster-users mailing list to ask too.  If you haven't already that is. :D
21:43 kushnir-nlm I'll do gluster volume add-brick <blah> replica 2 brick1 brick2 brick3 brick4... it'll sputter, CPU will be high, some stuff will replicate, but not a whole lot...
21:44 kushnir-nlm I'd think with SSDs, 10G, and 4 CPUs it'd be like... woosh.
21:44 kushnir-nlm I had a similar setup with DRBD and OCFS... was woosh :)
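For reference, the add-brick syntax looks roughly like this; bricks have to be added in multiples of the volume's replica count (volume name and brick paths are illustrative):

    gluster volume add-brick myvol \
        server1:/export/brick3 server2:/export/brick4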
21:44 recidive joined #gluster
21:47 jclift Yeah, my 3 node setup (2 gluster nodes, 1 client) using either RDMA or IPoIB is like "woosh" compared to 20GB in 2 days.
21:47 jclift 20GB is like line noise for your setup, so there's something not right there :(
21:48 kushnir-nlm So far, I'm leaning toward hyper-v sucks...
21:48 jclift kushnir-nlm: Have you done perf testing (throughput, latency, etc) between the gluster VMs, just to make sure nothing is artificially throttling things?
21:49 jclift kushnir-nlm: That's possible too.  But, it should suck so much it makes everything 1% as fast as possible. :(
21:49 jclift s/should/shouldn't/
21:49 glusterbot What jclift meant to say was: kushnir-nlm: That's possible too.  But, it shouldn't suck so much it makes everything 1% as fast as possible. :(
21:49 kushnir-nlm Certainly. In the VM I get bi-di iperfs of like 7Gbps, pings are all fine, bonnie++ results on the disks and the distributed volume are superb
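A quick way to reproduce that kind of check between two gluster VMs (hostnames are illustrative; -d runs the test in both directions at once):

    # on the first VM:
    iperf -s
    # on the second VM, a 30-second bidirectional run:
    iperf -c gluster-vm1 -d -t 30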
21:50 jclift kushnir-nlm: Are the files in the VMs tonnes of small files, or are they big files like video files, or ?
21:50 kushnir-nlm tons of 100-200k files
21:50 jclift Lots of files per directory?
21:50 kushnir-nlm 10k
21:50 kushnir-nlm 10,000 files per directory that is
21:50 jclift Ugh. That's probably the issue
21:51 jclift To me that sounds like your data is in one of the "horror cases" for Gluster
21:52 jclift Classic "sweet spot" for gluster is medium to large size files, and not tonnes per directory
21:53 jclift Classic "works like crap" scenario for gluster is news/mail servers which have bazillions of small files (like 1k each) type of thing, and thousands of files per directory
21:54 jclift kushnir-nlm: Are you able to influence the layout of data on disk?  e.g. break it up into a bunch more directories, with fewer files per dir?
21:55 kushnir-nlm Correction... 4 directories (1 for each size); within those we have 401 directories, in each of those ~1000 directories, and in each of those 1-10 images
21:55 kushnir-nlm That was what we changed to after waiting for weeks for ls on the 10,000-file directories :)
21:58 jclift :)
21:58 * phox fixed this problem by isolating people's piles-of-tiny-shit to user-mountable squashfsen
21:58 phox and now I can ignore the problem :)
21:58 jclift k.  I don't know what the problem is then.
21:58 Koma joined #gluster
21:59 phox 1000 entries in a directory is enough to annoy gluster
21:59 jclift phox: Ahh, so gluster just sees the user volumes as 1 file each, and the user's client works with mounting that and reading inside it?
21:59 phox jclift: well, not user's client, but yeah that's the idea
22:00 phox mounted locally per development server
22:00 phox also these are tabular text garbage data
22:00 jclift Sounds like a nifty approach. :D
22:00 phox so they're xz'd squashfs
22:00 phox aka "FOAD stupid little files"
22:00 phox plus the backup systems never see them, etc.
22:00 jclift phox: Any interest in writing that up into a blog post, so other people learn how to do that too?
22:00 phox for *real* purposes they're just monolithic blobs.  only stupid people have to look at the disaster.
22:01 phox jclift: that's a point
22:01 kushnir-nlm I can't do that. My glusterfs is the storage for web servers serving images and for search engine indexes; the goal is high availability and updating all three servers at once.
22:01 jclift phox: Do you have a blog already? :)
22:01 * jclift really doesn't know
22:01 phox "kinda"
22:01 phox I have one I use infrequently
22:01 phox musingsandbruisings.blogspot.com
22:02 jclift Well, it'd be nifty to have that info out on the net somewhere.  Really, because it's a good idea. :)
22:02 phox I was hoping to be using squashfuse for this, but it's not really out there as packages and I don't feel like building it to see how stable it is
22:02 phox also FUSE would be annoying because those mounts would be more user-managed; as it is, it's in fstab and people can't fsck up how it's mounted for everyone else using those files
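A sketch of the approach phox describes, assuming squashfs-tools and a kernel with squashfs support (paths and names are illustrative): pack the pile of tiny files into one xz-compressed image that gluster and the backups see as a single large file, then loop-mount it read-only from fstab on each development server.

    mksquashfs /scratch/alice/tiny-files /gluster/archive/alice-tiny.squashfs -comp xz

    # /etc/fstab entry on each dev server (kernel squashfs via a loop
    # device, so no FUSE and no user-managed mounts):
    /gluster/archive/alice-tiny.squashfs  /mnt/alice-tiny  squashfs  loop,ro  0  0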
22:04 kushnir-nlm jclift: So, how many CPUs per brick do you allocate?
22:05 jclift kushnir-nlm: Well, I'm only doing stuff personally in my dev/test lab.
22:05 kushnir-nlm Ahh
22:05 jclift And I just bought the cheapest cpu I could get away with (dual core G2120)
22:05 jclift So, copying me there isn't a good idea. :D
22:05 kushnir-nlm Haha.
22:05 kushnir-nlm Yeah, in my test lab I run on crapola too sometimes. :)
22:06 kushnir-nlm I'm trying to figure out whether 4 independent bricks are going to hammer CPU more than one RAID0 brick
22:09 kushnir-nlm Off topic, but I rebuilt a thecus N5200 as an ubuntu server in my test lab. Glorious single core celeron. :)
22:09 jclift On machines like that, the CPU cost of RAIDing should be almost unnoticeable
22:10 kushnir-nlm Yeah, I'm not worried about the cost of RAID... I think recoverability from a disaster would be better with 3 servers made of 4 independent bricks than with 3 servers with RAID0s
22:11 kushnir-nlm With a RAID0 setup, if I lose one disk, I lose an entire server... with 4 independent bricks per server, if I lose an SSD, it's just one brick gone.
22:15 phox yay cheap celerons :D
22:15 * phox is looking forward to ZFS supporting DISCARD so it's useful on SSDs
22:16 kushnir-nlm Yeah, I thought ZFS was still not stable on Linux?
22:16 phox dunno
22:16 phox we're using it in production
22:16 phox working just fine.
22:16 phox I'd also point out that ext4 is not stable and ate more people's data more recently
22:16 kushnir-nlm http://zfsonlinux.org/?
22:16 phox so depends what metric you use for calling something "stable"
22:16 glusterbot Title: ZFS on Linux (at zfsonlinux.org)
22:16 phox kushnir-nlm: yeah
22:17 phox or #zfsonlinux, here
22:17 phox heh
22:17 kushnir-nlm Will have to try it out...
22:17 phox use a very shiny, new kernel
22:17 phox we're using I think 3.9.1... whatever
22:18 phox whatever was "stable" when I started deploying these machines.
22:19 kushnir-nlm So is 3.3.1-15 ext4 compatible?
22:26 rwheeler joined #gluster
22:27 phox I see no reason why not
22:27 phox AFAIK it's pretty filesystem-agnostic apart from needing xattrs... I think.
22:28 phox not something I've really committed the details of to memory but I think that's about the summary there
22:28 phox ours is on top of ZFS though.
22:28 kushnir-nlm I was under the impression that some kernel changes broke compatibility with ext4
22:28 kushnir-nlm ?
22:28 phox dunno either way, sorry
22:28 kushnir-nlm gluster on top of zfs is pretty hard core
22:28 phox it's pretty slick
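A minimal sketch of a ZFS-backed brick, assuming ZFS on Linux is installed (pool, dataset, and hostnames are illustrative; gluster mostly just needs a local filesystem with xattr support underneath):

    zpool create tank mirror /dev/sdb /dev/sdc
    zfs create -o xattr=sa tank/brick1
    gluster volume create myvol replica 2 \
        server1:/tank/brick1 server2:/tank/brick1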
22:28 kushnir-nlm how many bricks per server? how many CPUs?
22:28 Koma joined #gluster
22:28 phox some unknown number
22:29 phox we're not using it as a distributed filesystem
22:29 phox we're using it to replace NFS because NFS on Linux is the most amazing pile of shit ever
22:29 phox give it too much throughput?  it wedges itself up permanently somehow.
22:29 phox don't know or care how.  it's definitely a pile of garbage.
22:29 phox and NFS/RDMA is buggy as anything
22:30 phox RH versions thereof might actually work but we're not switching distros for that benefit
22:30 phox and these machines do other things, so WAY more CPUs than are necessary for the task we're discussing.  16-24 virtual cores per machine
22:31 phox well, for the most part, 24
22:31 phox really looking forward to RDMA support reappearing.  gluster without RDMA is kinda lame.  too many context switches = crap performance.
22:33 kushnir-nlm So 10GbE is not the way to go, I take it?
22:34 phox if you're going over IP primarily you're in and out of kernel like ~twice as often to do the same thing
22:35 phox vs RDMA working without need for kernel
22:35 phox RDMA means the Gluster FUSE driver (and server) can talk without blocking on some kernel req (other than real I/O)
22:35 phox also QDR IB costs about the same or less vs 10GbE
22:36 phox of course the cable lengths suffer a bit but that's irrelevant for a few racks of machines...
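For reference, the transport is chosen at volume-create time and again at mount time (volume name and brick paths are illustrative; as phox notes, RDMA support in the 3.3/3.4 era was in flux, so treat this as a sketch):

    gluster volume create fastvol transport tcp,rdma \
        server1:/export/brick1 server2:/export/brick1
    mount -t glusterfs -o transport=rdma server1:/fastvol /mnt/fastvol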
22:37 ctria joined #gluster
22:38 phox bbl or something, almost going home here..
22:38 phox left #gluster
22:38 kushnir-nlm l8er... thanks for advice
22:59 recidive joined #gluster
23:15 Eco_ joined #gluster
23:19 portante joined #gluster
23:27 robo joined #gluster
23:32 mooperd left #gluster
23:50 Eco_ joined #gluster
