
IRC log for #gluster, 2013-10-18


All times shown according to UTC.

Time Nick Message
00:37 glusterbot New news from newglusterbugs: [Bug 1007509] Add Brick Does Not Clear xttr's <http://goo.gl/Qx4F4w>
01:29 kPb_in_ joined #gluster
01:36 jbrooks joined #gluster
01:39 jbrooks joined #gluster
01:40 vynt joined #gluster
01:47 _ilbot joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:53 kevein joined #gluster
02:03 bala joined #gluster
02:04 jbrooks joined #gluster
02:06 nasso joined #gluster
02:09 jbrooks joined #gluster
02:10 jag3773 joined #gluster
02:19 JoeJulian rob99: probably one of the brick logs. etc-glusterfs-glusterd.vol.log might have a clue which brick to check.
02:20 vpshastry joined #gluster
02:23 jbrooks joined #gluster
02:40 atrius joined #gluster
03:00 harish joined #gluster
03:04 atrius joined #gluster
03:16 shylesh joined #gluster
03:19 kshlm joined #gluster
03:25 wgao joined #gluster
03:27 kevein_ joined #gluster
03:31 davinder2 joined #gluster
03:31 bharata-rao joined #gluster
03:42 kanagaraj joined #gluster
03:47 sgowda joined #gluster
03:52 itisravi joined #gluster
04:02 davinder joined #gluster
04:07 rjoseph joined #gluster
04:11 spandit joined #gluster
04:19 ndarshan joined #gluster
04:20 CheRi joined #gluster
04:22 vpshastry joined #gluster
04:31 harish joined #gluster
04:31 meghanam joined #gluster
04:31 meghanam_ joined #gluster
04:35 RameshN joined #gluster
04:41 jbrooks joined #gluster
04:42 shruti joined #gluster
04:42 lalatenduM joined #gluster
04:44 ppai joined #gluster
04:49 harish joined #gluster
04:59 dusmant joined #gluster
05:01 nasso joined #gluster
05:07 anands joined #gluster
05:15 aravindavk joined #gluster
05:21 mohankumar joined #gluster
05:30 bala joined #gluster
05:35 vshankar joined #gluster
05:41 raghu joined #gluster
05:41 MrNaviPacho joined #gluster
05:44 ajha joined #gluster
05:45 ababu joined #gluster
05:45 kkeithley joined #gluster
05:45 bfoster joined #gluster
05:45 esalexa|gone joined #gluster
05:46 bdperkin joined #gluster
05:46 mattf joined #gluster
05:46 hagarth joined #gluster
05:47 portante joined #gluster
06:07 sgowda joined #gluster
06:08 glusterbot New news from newglusterbugs: [Bug 1018308] GlusterFS installation on CentOS 6.4 fails with "No package rsyslog-mmcount available." <http://goo.gl/46kz8H>
06:11 blook joined #gluster
06:12 rjoseph joined #gluster
06:16 lalatenduM joined #gluster
06:18 jtux joined #gluster
06:20 lalatenduM joined #gluster
06:20 spandit joined #gluster
06:20 hagarth joined #gluster
06:33 satheesh1 joined #gluster
06:34 kPb_in joined #gluster
06:44 ngoswami joined #gluster
06:47 ekuric joined #gluster
06:47 davinder joined #gluster
06:48 ekuric joined #gluster
06:54 vimal joined #gluster
06:58 ctria joined #gluster
07:01 blook joined #gluster
07:02 eseyman joined #gluster
07:07 eseyman joined #gluster
07:12 keytab joined #gluster
07:20 anands joined #gluster
07:20 harish joined #gluster
07:25 spandit joined #gluster
07:25 rjoseph joined #gluster
07:32 hagarth joined #gluster
07:49 mbukatov joined #gluster
08:03 sgowda joined #gluster
08:03 ekuric joined #gluster
08:06 mgebbe_ joined #gluster
08:22 itisravi joined #gluster
08:23 blook joined #gluster
08:24 harish joined #gluster
08:31 fyxim joined #gluster
08:36 psharma joined #gluster
08:40 ProT-0-TypE joined #gluster
08:47 wgao joined #gluster
08:48 sgowda joined #gluster
08:50 spandit joined #gluster
08:52 rjoseph joined #gluster
09:10 eseyman joined #gluster
09:12 kshlm joined #gluster
09:17 vshankar joined #gluster
09:17 satheesh joined #gluster
09:29 davinder joined #gluster
09:33 ProT-0-TypE joined #gluster
09:33 bma joined #gluster
09:34 bma hey
09:36 bma i'm testing glusterFS and i wish to create a binding for YCSB to benchmark it. i already created one for Hadoop HDFS, but HDFS has a library for access via java. i wish to know if there is a similar way to do it with glusterFS
09:38 kanagaraj_ joined #gluster
09:43 satheesh joined #gluster
09:45 RameshN joined #gluster
09:46 itisravi joined #gluster
09:48 ngoswami joined #gluster
09:51 Shri joined #gluster
09:53 ndevos bma: maybe https://forge.gluster.org/hadoop helps, or one of the other projects on https://forge.gluster.org/ ?
09:53 glusterbot Title: Apache Hadoop enablement on GlusterFS - Gluster Community Forge (at forge.gluster.org)
09:54 hagarth bma: are you looking for java bindings to access glusterfs?
09:59 rastar joined #gluster
10:02 newb_sysadmin joined #gluster
10:03 newb_sysadmin purpleidea: how do i get started with puppet-gluster ?
10:04 kbsingh hello! hello!
10:04 psharma joined #gluster
10:05 RameshN joined #gluster
10:05 kbsingh humm reading backlog, i think the user who said that CentOS ships 3.2 is actually wrong - we dont ship any GlusterFS packages into production at all
10:05 kbsingh would be a great issue to resolve
10:08 ndevos kbsingh: we cant change the epel version (hekafs depends on glusterfs-3.2), but I guess you can include the latest version in centosplus if you like?
10:09 kbsingh so, with the centos project side of things - we are way more flexible with things, the overall aim is to make it possible for the project ( you guys ) to be able to come in and implement policy that works best for you ( after all, glusterfs is used by glusterfs users :D )
10:10 kbsingh the xen guys decided it worked best ( and given the level of complexity involved ) to setup their own repo, that then allows them to implement their own policies w.r.t updates / deps / third party requirements / exposing api's etc
10:11 kbsingh its still a low barrier to entry, since users can 'yum install centos-release-glusterfs' and then 'yum install <components>' ( we can push centos-release-glusterfs into centos-extras, which is enabled by default )
10:11 kbsingh would that help ?
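A minimal sketch of the install flow kbsingh is describing, assuming the proposed centos-release-glusterfs package ends up in centos-extras (the package and component names here are illustrative, not yet agreed):

    # the release RPM only drops a .repo file; centos-extras is enabled by default
    yum install centos-release-glusterfs
    # after that the GlusterFS components install like any other package
    yum install glusterfs glusterfs-server glusterfs-fuse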
10:12 ndevos hmm, thats an idea, we have a repo on http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/... already
10:12 glusterbot <http://goo.gl/86bsmh> (at download.gluster.org)
10:13 ndevos kkeithley: what do you think about that ^^ ?
10:13 ndevos (he's in the Boston area and probably not reading this now)
10:13 mattf joined #gluster
10:13 shireesh joined #gluster
10:14 purpleidea fyi newb_sysadmin was me fooling around for screenshot purposes: https://ttboj.wordpress.com/2013/10/18/desktop-notifications-for-irssi-in-screen-through-ssh-in-gnome-terminal/
10:14 glusterbot <http://goo.gl/DS4FdT> (at ttboj.wordpress.com)
10:15 purpleidea kbsingh: would you be interested in a packaged version of puppet-gluster too ?
10:17 ndevos I think the plan johnmark mentioned once to me, to create a repository full of the stuff from forge.gluster.org projects, fits in the centos-release-glusterfs package nicely
10:20 jclift joined #gluster
10:22 kbsingh purpleidea: that would be nice, but we dont ship puppet either :)
10:23 kbsingh purpleidea: i believe the main blocker for that is ruby193 - which may or may not be solved with the SCL stack for ruby193, I'll need to check
10:23 purpleidea kbsingh: good point about puppet !
10:24 kbsingh ndevos: the main concern with something like 'latest' is that we then need to test, relatively extensively, that yum updates are going to work and stay working ( we dont care that much about deps changing, but stuff should update in place - way too many users have cron'd yum updates )
10:24 purpleidea kbsingh: i was looking into packaging it, because someone else requested this, so if it ever makes sense for centos, ping me and i'm happy to get on that. it sure makes gluster setup easy, but i'm biased of course.
10:26 kbsingh i dont think its hard at the moment :)
10:26 purpleidea ndevos: kbsingh: fyi, i also expect a non zero # of gluster users are using 3.3 and 3.4 in the same clusters
10:26 kbsingh what gluster needs is a avahi/zeroconf sort of 'find me all my bricks'
10:26 purpleidea kbsingh: https://github.com/purpleidea/puppet-gluster/blob/master/examples/gluster-simple-example.pp
10:26 glusterbot <http://goo.gl/5mxOIJ> (at github.com)
10:27 purpleidea "avahi/zeroconf sort of" thing.
10:27 purpleidea (but i'm working on something cooler...)
10:27 kbsingh nice
10:27 ndevos kbsingh: yes, I understand it may be better to stick with a release (say 3.4), the repo is there on download.gluster.org too, we could add a disable repo for latest/beta/whatever for users that like more experimental versions
10:28 kbsingh ndevos: right
10:28 kbsingh otoh, i've so far had no issues going from 3.2 to 3.3 to 3.4 in production ( either i dont do much, or i'm just lucky! )
10:28 * kbsingh now waits for his array to fail in a ball of fire
10:30 ndevos I guess you're lucky ;)
10:30 ndevos or, you happen to know what you're doing
10:32 F^nor joined #gluster
10:36 badone joined #gluster
10:37 bma hagarth: yes... a java binding for glusterfs would be great
10:37 hagarth bma: there are a cpl of projects attempting to do that
10:38 bma hagarth: do you know any? or is best for me to look on google?
10:38 purpleidea bma: hagarth: fwiw semiosis was working on some java stuff to natively use libgfapi... maybe he has other plans too
10:38 RedShift joined #gluster
10:38 hagarth bma: one is here - https://forge.gluster.org/libgfapi-jni
10:38 glusterbot Title: libgfapi-jni - Gluster Community Forge (at forge.gluster.org)
10:40 ninkotech joined #gluster
10:40 bma hagarth: thanks.... i will give it a look
10:40 hagarth bma: here's yet another - https://github.com/avati/glusterfs/commits/java
10:41 glusterbot Title: Commits · avati/glusterfs · GitHub (at github.com)
10:41 hagarth kbsingh: that almost sounds wow! :)
10:42 ninkotech_ joined #gluster
10:45 kbsingh hagarth: :)
10:51 fracky joined #gluster
10:52 ngoswami joined #gluster
10:54 ccha2 why do I get some of these messages?
10:54 ccha2 [2013-10-18 10:53:52.145283] W [glusterd-geo-rep.c:1024:glusterd_op_gsync_args_get] 0-: master not found
10:54 ccha2 geo replication has not started yet
10:55 shireesh joined #gluster
10:57 rjoseph joined #gluster
10:59 spandit joined #gluster
11:04 sgowda joined #gluster
11:08 morse joined #gluster
11:09 psharma joined #gluster
11:10 kanagaraj joined #gluster
11:11 harish joined #gluster
11:14 X3NQ joined #gluster
11:19 anands joined #gluster
11:31 shireesh joined #gluster
11:39 glusterbot New news from newglusterbugs: [Bug 1020848] Enable per client logging for gluster shares served by Samba <http://goo.gl/i7zH1R>
11:42 rgustafs joined #gluster
11:47 vynt joined #gluster
11:49 davinder joined #gluster
11:53 ricky-ticky joined #gluster
11:53 shireesh joined #gluster
11:57 saurabh joined #gluster
11:59 kopke joined #gluster
12:09 glusterbot New news from newglusterbugs: [Bug 969461] RFE: Quota fixes <http://goo.gl/XFSM4>
12:10 kkeithley ndevos, kbsingh: I'm here now. wrt 3.2.x in CentOS, AFAIK people are getting that from EPEL. Seems like the first thing a lot of people with RHEL and CentOS do is turn on EPEL, although maybe I'm wrong about that.
12:11 ndevos oh, I dont know about that either...
12:13 kkeithley ndevos: not sure what you're asking about? This: "...since users can 'yum install centos-release-glusterfs' and then 'yum install <components>' ( we can push centos-release-glusterfs into centos-extras, which is enabled by default )" ?
12:14 kkeithley I presume centos-release-glusterfs is an RPM that adds a new /etc/yum.repos.d/$something.repo for glusterfs?
12:14 ndevos kkeithley: that way centos users would have a really easy way of installing the community packages - yes
12:14 ndevos (at least, that is my understanding)
12:15 kkeithley IOW we add a centos-release-glusterfs RPM to the centos-extras packages/YUM with a glusterfs.repo that points at the download.gluster.org YUM repo. (Just so I'm clear; I can be kinda slow this early in the morning)
12:16 ndevos kkeithley: yes, that is what I understand kbsingh is offering us
12:17 kkeithley and, specifically not any other glusterfs packages or YUM repos anywhere in the CentOS galaxy of YUM repos and packages
12:17 kkeithley yeah, I'm good with that
12:17 kkeithley seems like a fairly light touch way to handle it
12:18 * ndevos likes it too
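For reference, a hedged sketch of what such a release RPM might install -- the file name, repo id and baseurl below are assumptions, not an agreed layout:

    # hypothetical payload of centos-release-glusterfs
    cat > /etc/yum.repos.d/glusterfs-community.repo <<'EOF'
    [glusterfs-community]
    name=GlusterFS community packages from download.gluster.org
    # illustrative path only; the real repo layout on download.gluster.org may differ
    baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/epel-$releasever/$basearch/
    enabled=1
    gpgcheck=0
    EOF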
12:19 kkeithley did we lose kbsingh? Let's proceed with that.
12:19 edward1 joined #gluster
12:19 uebera|| joined #gluster
12:19 uebera|| joined #gluster
12:20 kkeithley Perhaps we should do that for EPEL too?
12:21 kkeithley I.e. we don't ship gluster, instead ship a community gluster YUM repo. Would Fedora policy allow that instead? (wrt EPEL not shipping packages that are in RHEL or some RH* channel)
12:22 ndevos I do not think we can put that in EPEL, but we can put that on download.gluster.org
12:22 kkeithley oh. :-(
12:23 kkeithley really?
12:24 ndevos well, not sure, trying to find a hint in the guidelines
12:27 kkeithley If you see anything about how to withdraw a package, in particular from epel, let me know. I think I saw something once but will have to search for it again.
12:30 mohankumar joined #gluster
12:32 Nev joined #gluster
12:32 Nev i have an issue when writing and reading small files in a fast task
12:32 Nev the program complains that the files are not there
12:33 Nev when we do an ls the files are there
12:33 Nev is there a way to optimize this?
12:37 kkeithley ,,(small files)
12:37 glusterbot See http://goo.gl/5IS4e
12:38 kkeithley Nev: ^^^
12:39 cyberbootje joined #gluster
12:48 ndevos kkeithley: https://fedoraproject.org/wiki/How_to_remove_a_package_at_end_of_life
12:48 glusterbot <http://goo.gl/trp1LJ> (at fedoraproject.org)
12:48 kkeithley thanks.
12:50 vpshastry left #gluster
12:53 kkeithley oh, that's harsh. "... After the package was[sic] retired in package DB, you will not be able to commit changes to GIT..."
12:54 kkeithley But I'll want to do Koji builds for EPEL for our repo on download.gluster.org.
12:54 kkeithley That's what I've been doing all along
12:55 kkeithley I could do --scratch builds, but those are not guaranteed to be the same as !--scratch builds.
12:56 ndevos why? we could build them in sync with the release job that generates the 'make dist' tar.gz - no manual epel building needed
12:57 kkeithley Yes, building a set of RPMs on a RHEL box is probably good enough.
12:58 kkeithley Or even in a rhel-6 mock build on a CentOS box
12:58 kkeithley on an up to date RHEL box
12:59 wgao joined #gluster
12:59 kkeithley That's probably as good or better than doing koji --scratch builds
13:00 bennyturns joined #gluster
13:01 kkeithley although it means doing something different than what I do for Fedora. Different steps, going to a different machine. But it's not the end of the world...
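A minimal sketch of the mock approach under discussion; the chroot name and SRPM filename are placeholders:

    # rebuild the source RPM in a clean EL6 buildroot on any Fedora or CentOS box
    mock -r epel-6-x86_64 --rebuild glusterfs-3.4.1-1.el6.src.rpm
    # the binary RPMs end up under /var/lib/mock/epel-6-x86_64/result/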
13:03 sgowda joined #gluster
13:03 ndevos kkeithley: I was more thinking of the Jenkins 'release' job to build the el-5 and el-6 packages when the release manager creates the versioned tarball
13:04 ndevos that way, you would only need to copy the rpms to download.gluster.org (just like is done with the tarball, I assume)
13:05 ndevos of course, build in mock, thats the most sane environment to do things (koji does that too)
13:05 Shri joined #gluster
13:06 kkeithley That's only good for the first RPM release. If we tweak the fedora glusterfs.spec independent of the upstream glusterfs.spec.in, as has happened on many occasions, a release build in jenkins doesn't do that for us
13:07 kkeithley Although we're getting better and not doing that as much any more
13:07 ndevos ah, yes, you got a point there
13:07 diegows joined #gluster
13:07 kkeithley But, e.g. if we wanted to patch 3.4.1 for the libvirt live migration, we'd do that independent.
13:08 kkeithley do that independent of upstream too
13:12 kkeithley well, to be clear, it's never completely independent of upstream. Patches such as a theoretical fix for the libvirt live migration issue are, or would be, in upstream, they're just not in a release yet, and we patch the RPM to get the fix to the field early.
13:12 ndevos yes, those things can happen - I was actually waiting for a fix for that libvirt issue...
13:12 kkeithley But enough rambling
13:12 cyberbootje joined #gluster
13:14 kkeithley yes, I'm kinda puzzled by that. Raghavan has a patch in downstream, and hagarth told him to submit it upstream, but the latest word is we're not going going to provide a configurable port?
13:16 kkeithley s/to the field/to the community/
13:16 glusterbot What kkeithley meant to say was: well, to be clear, it's never completely independent of upstream. Patches such as a theoretical fix for the libvirt live migration issue are, or would be, in upstream, they're just not in a release yet, and we patch the RPM to get the fix to the community early.
13:16 kkeithley s/going going/going/
13:16 glusterbot What kkeithley meant to say was: yes, I'm kinda puzzled by that. Raghavan has a patch in downstream, and hagarth told him to submit it upstream, but the latest word is we're not going to provide a configurable port?
13:17 kkeithley Any, again, enough rambling
13:20 ndevos kkeithley: lets get back to the point, we're not updating glusterfs in epel because hekafs depends on the older version
13:20 ndevos is the solution not to retire hekafs?
13:21 * ndevos prepares to get yelled at
13:21 kkeithley I'm not going to yell. ;-)
13:22 kkeithley Yes, I'll have to retire HekaFS at the same time. It's past its "sell by" date. And actually should have retired glusterfs too because it's in a RH* channel
13:23 kkeithley HekaFS is already dead in Fedora
13:24 chirino joined #gluster
13:25 ndevos we can keep glusterfs in epel, even though it is in a rh* channel, glusterfs is not in one of the base rh* channels
13:25 ndevos oh wait, the client may get there, that will be causing issues
13:26 kkeithley yes and yes, although some people argued that it was enough that it was in an rh* channel that it ought not to be in epel
13:26 kkeithley The client definitely will get in the base RHEL channel and that's what's forcing this issue
13:27 ndevos okay, now I see
13:29 _pll_ joined #gluster
13:29 _pll_ hi all
13:30 kkeithley the only reason we won the "it's in an rh* channel, it ought not to be in epel" argument is because the product associated with the rh* channel does not allow add-ons like epel.
13:31 ndevos oh, I'm well aware whats allowed and not :D
13:31 kkeithley yep
13:31 Nev ok, so i tried this, copied program and data to the local fileserver where my glusterfs is exported, an afr2 export btw. and if i run this program there the command can find the local files. but over nfs another machine cannot run this program, but ls for the directory works
13:31 Nev as well the fileserver which mounted the glfs on its own mnt/dir is not able to run the command/script
13:32 Nev but if i use the local disk, directly its working
13:32 kkeithley ndevos: yep, I'm just saying that's how we won the argument.
13:32 Nev is there any way to debug this issue?!
13:37 _pll_ I am trying to create a volume, but volume creation fails (it just says failed), and I can't find anything useful in /var/log/gluster or system logs. any suggestions to debug this issue?
13:37 ndevos kkeithley: well, as long as you have the fedora package, you can still create branches in the git repo and build those in koji for epel
13:38 _pll_ I have 8 servers running glusterd, each with 2 disks (2 bricks), and I was trying to create the volume using stripe 2 replica 2
13:41 ndevos Nev: sounds like http://review.gluster.com/3739 ?
13:41 glusterbot Title: Gerrit Code Review (at review.gluster.com)
13:42 ndevos Nev: what permissions do those executable files have? maybe it works if you add read-permissions too
13:42 monotek hi, short question... from kernel 3.11 is xfs still the preferred file system in gluster 3.4 or is the ext4 bug fixed?
13:42 kkeithley ndevos: this goes back to my `...harsh. "... After the package was[sic] retired in package DB, you will not be able to commit changes to GIT..."' comment. I think I can finagle things to do builds in branches, I just wouldn't be able to commit the changes to the spec file in that branch for the next time. Just an additional level of (unnecessary) complexity.
13:43 kkeithley ,,(ext4)
13:43 glusterbot The ext4 bug has been fixed in 3.3.2 and 3.4.0. Read about the ext4 problem at http://goo.gl/Jytba or follow the bug report here http://goo.gl/CO1VZ
13:43 ndk joined #gluster
13:43 monotek many thanks :-)
13:44 Nev the executable is actually on the local filesystem, the files which should be processed are on the glfs mount, and their perms are 0644
13:44 kkeithley sheesh, my typing is attrocious
13:45 ndevos kkeithley: yes, not the standard branch, but you can call the branch upstream-3.4-el-6 or something like that
13:46 Nev in short, the executable is on the local filesystem: test 1: client uses data on glfs and the local executable -> not working. Test 2: server uses the executable locally, uses data on the locally mounted glfs -> not working. Test 3: server uses the executable locally, and data locally in a different dir which is not shared -> it's working
13:47 Nev and the client uses the nfs mount version, and the server uses the native glusterfs option to mount the exported dir
13:47 Nev but both don't work
13:47 kkeithley ndevos: but will fedpkg do the correct thing if I'm in a branch with a name like that? As I'm sure you're aware, if I do a `fedpkg build` in the el6 branch, it knows to build el6 packages.
13:48 kkeithley there's probably some git magic tricks that I don't know about that would make this simple
13:48 ndevos kkeithley: I think you can make that to work if you append the branch name with _el-6 or something
13:49 kkeithley okay, that's good to know
13:49 ndevos you need to test that first of course :) it also may help if the branch tracks origin/el-6, fedpkg should check that too
13:52 kkeithley although I think it'd just be easier if the glusterfs "package" for epel turned into a single RPM: glusterfs-community-yum-*.noarch.rpm that contains the /etc/yum.repos.d/glusterfs-community.repo file.
13:52 failshell joined #gluster
13:53 kkeithley Or something along those lines
13:53 ndevos yes, but I think that is not allowed, I've only found https://fedorahosted.org/fesco/ticket/671 as reference so far
13:53 kkeithley Obsoleting glusterfs, etc.
13:53 glusterbot Title: #671 (Packages packaging yum repo files?) – FESCo (at fedorahosted.org)
13:54 ndevos we could do that for centosplus, and provide a glusterfs-community-release-3.4-1.el6.noarch.rpm on download.gluster.org
13:56 kkeithley right
13:56 vynt joined #gluster
13:58 danci1973 I have Infiniband and IPoIB is working..How can I check if RDMA is working? I found something about using 'ib_write_bw', but I can't find that command ...
13:58 danci1973 How do I even start setting RDMA up?
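The question goes unanswered in this log; a rough sketch of one way to start, assuming stock EL packages (host names are placeholders):

    # ib_write_bw is shipped in the perftest package on RHEL/CentOS
    yum install perftest
    ib_write_bw                 # on the server node, waits for a peer
    ib_write_bw server1         # on the client node, runs the RDMA bandwidth test
    # gluster only uses RDMA if the volume was created with an rdma transport, e.g.
    gluster volume create testvol transport rdma server1:/bricks/b1 server2:/bricks/b1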
13:58 kkeithley Alternatively, what if for EPEL we turned it into a single glusterfs-community-*.noarch.rpm that installs a glusterfs.README with instructions pointing to the download.gluster.org repo?
14:00 kkeithley a couple people on fedora-devel just weighed in saying it's against policy.
14:00 kbsingh kkeithley: ndevos: so, the actual rpms would need to be in .centos.org; but we can likely shadow and build from download.gluster.org ( for us to ship a centos-release-glusterfs )
14:00 kbsingh we cant ship a .repo file pointing to an external resource, it just opens up too big a can of worms
14:00 ndevos kbsingh: sure, that sounds good either way
14:01 kbsingh ndevos: also, i think we should target centos-extras/ rather than centosplus/ since centos-extras/ is already enabled and ready to go by default, everywhere ( including cloud images etc )
14:01 kbsingh the only *must* case for centosplus is when we overwrite or replace something that is in-distro, i dont believe we need to do that here right ?
14:02 ndevos kbsingh: ah, okay, that makes sense
14:02 rjoseph joined #gluster
14:02 ndevos kbsingh: uhm, the glusterfs client is not in standard rhel *yet*, but that will likely come as more components (QEMU, Samba) start to have bindings for it
14:03 kbsingh ndevos: kkeithley: so, can we have all this done by monday ? Should I start drafting up an announcement email:D
14:03 kbsingh right
14:03 kbsingh ndevos: i think we might be able to handle that with obsoletes and conflicts against centos-release-glusterfs ( in the future when some of these things show up in RHEL )
14:05 ndevos kbsingh, kkeithley: I do not see an issue why centos-extras can not include the 3.4.1 rpms from download.gluster.org
14:06 johnmark oh oh... this is a conversation I want to have
14:06 johnmark kbsingh: are you in EDI next week?
14:06 kbsingh unfortunately not :( I'm only really just back home from hospital and won't be able to move around / travel much for at least another month
14:07 johnmark kbsingh: ouch! I'm sorry :(  well, I'll also be in London the following week - 28th and 29th
14:07 johnmark ndevos: it's not just the glusterfs rpms, it's the libvirt and qemu packages as well
14:08 johnmark ndevos: not to mention a gluster-fied SAMBA 3.6.x
14:08 johnmark kbsingh: so your recommendation is centos-extras
14:09 kbsingh not with libvirt and samba in there :)
14:09 johnmark shit. ok
14:09 wahly joined #gluster
14:09 kbsingh how about we setup a glusterfs-testing repo, put everything we need into that one place ?
14:09 johnmark kbsingh: that sounds reasonable
14:09 johnmark kbsingh: and you could host this on centos.org?
14:09 ndevos johnmark: yes, those packages will require glusterfs-libs and similar, but that is not the case *yet*
14:10 johnmark ndevos: ok
14:10 kbsingh and ask for comments, write a few user stories on how we expect (ab)users to consume things, take it from there
14:10 johnmark kbsingh: yes, gotta please hte abusers
14:10 kbsingh is the libvirt stuff tested to replace in-distro libvirt cleanly ?
14:10 johnmark kbsingh: no idea. all I know is later versions support libgfapi
14:10 kbsingh johnmark: absolutely, we can host it on centos.org
14:10 johnmark kbsingh: ok
14:11 johnmark kbsingh: so what's the best way to create this repo and add packages?
14:11 ndevos that will be the standard libvirt+qemu from rhel, making glusterfs-client support available to all (not just Red Hat Storage customers)
14:11 kbsingh ideally, point me at a set of srpms, and i can do the first run internally, push to testing.
14:11 wahly i'm sure this is stated here all the time, but i'm seeing lower throughput than i had hoped for with my current deployment. i'm using gluster 3.4 on centos 6.4 for openstack presistant storage. i have 3 nodes (for testing) each with 2x300GB disks in raid0. everything is running on a gigabit network. i'm seeing about 52M/s sequentail writes and about 230M/s sequential reads. does that sound like reasonable numbers for gluster, or is there tuning i
14:12 ndevos and supported only against RHS-servers
14:12 kbsingh longer term, send me ssh and gpg keys for a few people, and I'll setup acls allowing people to request builds and push to testing automagically
14:14 kaptk2 joined #gluster
14:18 ndevos kbsingh: http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.1/RHEL/epel-6/SRPMS/ would do it, but you're warned that some of those sub-packages will land in rhel one day
14:18 glusterbot <http://goo.gl/mUEvoC> (at download.gluster.org)
14:22 DV joined #gluster
14:22 daMaestro joined #gluster
14:23 bugs_ joined #gluster
14:24 KORG|2 joined #gluster
14:27 hagarth joined #gluster
14:28 StarBeast joined #gluster
14:29 kbsingh ndevos: ok, let me have a crack at it
14:30 ndevos kbsingh: all yours!
14:33 johnmark kbsingh: oh cool
14:34 johnmark wahly: the reads sound about right, although perhaps a bit low
14:34 johnmark wahly: the writes though... how many replicas?
14:35 wahly 3 replicas
14:35 johnmark wahly: oh, so 52 * 3 = 156
14:35 johnmark still a bit low
14:35 wahly the reads are great. anything over 100M/s would be more than sufficient
14:36 johnmark wahly: in fact, sounds like the read speed is beyond what I would expect - because the number you quoted is MBs, right? not mbits
14:36 wahly MB, not mb
14:37 wahly as reported from bonnie++
14:37 johnmark right - and if you multiply that number x 8 you get 1.6 gib
14:37 johnmark the read number, that is
14:37 johnmark I'm thinking you're saturating the network
14:38 wahly for the reads, possibly. but those really aren't the issue. i'm happy with the read speed
14:38 wahly it's the write speed that needs some TLC
14:38 kbsingh btw, thinking about epel - an upgrade there should still leave hekafs with a usable older glusterfs right ?
14:38 kbsingh or is epel policy to only have one, the latest greatest version available ?
14:38 * kbsingh brb
14:39 _pol joined #gluster
14:39 ndevos kbsingh: I think kkeithley will retire hekafs from epel, and glusterfs will be dropped from epel too (because some of those sub-packages will land in rhel)
14:40 johnmark wahly: right, but my point was that 150 * 8 is > 1gb
14:40 johnmark which means that I don't think your write speed can increase much
14:40 glusterbot New news from newglusterbugs: [Bug 986775] file snapshotting support <http://goo.gl/ozgmO>
14:40 kkeithley I'm trying to see if I can morph the epel glusterfs packages into a single glusterfs-community.doc RPM
14:41 wahly johnmark: both numbers are in MB. i guess i don't understand how i read speed can be so much faster than write speed. or is it striping the reads and triplicating the writes?
14:41 Nev left #gluster
14:44 wahly johnmark: looks like you are correct. i'm hitting 900Mbps (per iftop) during my stress tests
14:44 wahly might be time to bond some interfaces
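The arithmetic behind johnmark's point, as a hedged sketch (exact numbers depend on protocol overhead and caching):

    # replica 3 with the fuse client: every write is sent to all three bricks
    # over the same NIC, so write throughput is roughly wire speed / replica count
    #   1 Gbit/s ~ 117 MB/s on the wire
    #   117 / 3  ~ 39 MB/s theoretical; the observed ~52 MB/s is in that ballpark
    # reads are served from a single replica, so they can approach full wire speed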
14:48 _pol_ joined #gluster
14:48 F^nor iftop
14:48 F^nor woops
14:49 krishna_ joined #gluster
14:49 _pol__ joined #gluster
14:50 LoudNois_ joined #gluster
14:51 kkeithley ndevos, kbsingh: specifically, the glusterfs RPMs that will appear in the base RHEL channel are glusterfs, glusterfs-api, glusterfs-fuse, glusterfs-libs, and glusterfs-rdma.
14:51 ndevos kkeithley: what about -devel and -api-devel?
14:52 wrale joined #gluster
14:52 ndevos oh, maybe in Workstation or something...
14:52 kkeithley that's what's in Server. I haven't looked at Workstation
14:52 kkeithley but I'd be surprised
14:53 kkeithley if they're there
14:53 wrale is it a bad idea to use btrfs across two drives (raid 1-style) for the backing store for a gluster brick, versus ext4 on each drive?
14:53 bala joined #gluster
14:53 ndevos Workstation has more -devel stuff than Server, including fortran compilers and things
14:54 ndevos wrale: a bad idea not really, well tested is a different story - xfs + hardware-raid is the general advice
14:54 kkeithley nope, not there, not in the latest snapshot. Probably worth trying to make that happen though
14:55 wrale ndevos: thanks for the feedback.. that makes sense.. i won't have hw raid available, unfortunately
14:55 wrale i'm resisting zfs, because of its license
14:56 ndevos wrale: personally I am more comfortable with mdraid and lvm-mirrors than btrfs, but others used btrfs successfully
14:56 kkeithley using btrfs is pretty bold. you could use lvm stripe and ext4 or xfs. Using zfs makes it semi-hard for some of us to help you
14:57 kkeithley because of the license
14:57 wrale thanks guys.. i think mdraid sounds like a good compromise ..
14:57 kkeithley I'm still rambling and not getting work I need to do done
14:58 wrale and i'm definitely a fan of xfs
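A minimal sketch of the mdraid + xfs brick layout being suggested; device names and mount points are placeholders:

    # mirror two drives with mdraid, then put xfs on top for the brick
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    mkfs.xfs -i size=512 /dev/md0    # 512-byte inodes leave room for gluster's xattrs
    mkdir -p /bricks/brick1
    mount /dev/md0 /bricks/brick1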
14:58 ndevos kkeithley: no, you too?! and that on a Friday :)
14:58 jclift left #gluster
15:00 _pll_ joined #gluster
15:01 vshankar joined #gluster
15:02 bala1 joined #gluster
15:04 _pll_ If I have 2 bricks (2 disks)  per node, and create a volume with stripe 2 (and replica 2), does gluster try to place each stripe on each disk of the same node, or will each stripe end up on different nodes?
15:08 _pll_ Is there anywhere other than the glusterd source where the data placement strategy is documented in detail? Or is there a tool I can use to see how data is actually placed in my gluster cluster?
15:08 hateya joined #gluster
15:08 l0uis _pll_: dunno how it works w/ stripe, but w/ replica they are placed in the order in which the bricks are specified
15:08 cyberbootje joined #gluster
15:09 l0uis _pll_: so replica 2 will place 1 copy on the first brick, 1 copy on the 2nd brick, etc. so if you want your replicas on different nodes, you need to alternate them as you add them to the volume
15:09 _pll_ l0uis, yes, for replica it will even warn you if you do not do a smart ordering when creating the volume (i.e. if replicas are going to end up on the same node)
15:10 _pll_ l0uis, thanks. Although I was wondering how ordering influences striping, and what gluster's strategy usually is
15:11 mgebbe joined #gluster
15:11 l0uis _pll_: i'd be surprised if ordering does not influence striping. but unfortunately i don't know that answer for sure.
15:11 ndevos ~stripe | _pll_
15:11 glusterbot _pll_: Please see http://goo.gl/5ohqd about stripe volumes.
15:11 ndevos _pll_: stripe is often not what you are looking for...
15:13 _pll_ ndevos, my intuition before actually seeing how well it fares is that striping will benefit random access but will penalize sequential access
15:13 _pll_ but it would be nice to know in detail how the data is placed
15:14 semiosis @java
15:14 glusterbot semiosis: https://github.com/semiosis/libgfapi-jni & http://goo.gl/KNsBZ
15:15 semiosis hagarth: these ^
15:15 ndevos _pll_: I don't know if there are more details documented nicely, maybe this helps: https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html-single/Administration_Guide/index.html#sect-User_Guide-Setting_Volumes-Striped
15:15 glusterbot <http://goo.gl/KkYQnC> (at access.redhat.com)
15:15 _pll_ ndevos, thanks!
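A hedged sketch of how the brick ordering plays out for _pll_'s 8-node, 2-bricks-per-node case (host and brick paths are placeholders): replica pairs are formed from adjacent bricks in the list and the stripe is laid across those pairs, so listing bricks from different nodes next to each other keeps both copies off the same machine:

    gluster volume create myvol stripe 2 replica 2 \
        server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1 server4:/bricks/b1 \
        server5:/bricks/b1 server6:/bricks/b1 server7:/bricks/b1 server8:/bricks/b1 \
        server1:/bricks/b2 server2:/bricks/b2 server3:/bricks/b2 server4:/bricks/b2 \
        server5:/bricks/b2 server6:/bricks/b2 server7:/bricks/b2 server8:/bricks/b2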
15:19 jclift joined #gluster
15:19 eseyman joined #gluster
15:19 stickyboy joined #gluster
15:20 ricky-ticky joined #gluster
15:24 shylesh joined #gluster
15:26 ricky-ticky joined #gluster
15:26 ProT-0-TypE joined #gluster
15:31 zerick joined #gluster
15:31 ricky-ticky joined #gluster
15:35 tg2 is replace-brick being removed in future versions?
15:35 tg2 just do a remove-brick and add-brick?
15:36 _pol joined #gluster
15:36 tg2 also, can you do multiple 'remove-brick start' on the same command so that you can remove an entire server with 4 bricks at the same time without it balancing to its own bricks?
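On the second question, a hedged sketch of draining one server from a plain distribute volume -- volume and brick names are placeholders; passing all of the server's bricks to a single remove-brick start is what keeps data from migrating onto the bricks that are leaving:

    gluster volume remove-brick myvol server4:/b1 server4:/b2 server4:/b3 server4:/b4 start
    gluster volume remove-brick myvol server4:/b1 server4:/b2 server4:/b3 server4:/b4 status
    # once status shows the migration completed on all four bricks:
    gluster volume remove-brick myvol server4:/b1 server4:/b2 server4:/b3 server4:/b4 commit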
15:37 ricky-ticky joined #gluster
15:40 StarBeas_ joined #gluster
15:44 _pol joined #gluster
15:47 ricky-ticky joined #gluster
15:49 sprachgenerator joined #gluster
16:02 aliguori joined #gluster
16:08 davidbierce joined #gluster
16:10 davidbierce Is anyone running gluster as a backend for QCOW2 images?
16:11 Skaag joined #gluster
16:11 davidbierce Or more specifically, the fuse client as a shared storage backend for Qcow images for KVM hosts
16:12 wushudoin joined #gluster
16:12 Skaag hey there, I just setup a volume on two bricks, they can communicate with each other, I tried telnetting to the ports and it connected, but when I try to mount it fails, and the log says "socket_connect] 0-management: connection attempt failed (Connection refused)"
16:13 ProT-0-TypE joined #gluster
16:13 Skaag I set this: auth.allow: 1.1.1.1,1.1.1.2
16:13 Skaag could that interfere somehow?
16:29 Mo__ joined #gluster
16:34 lmickh joined #gluster
16:36 ctria joined #gluster
16:51 GabrieleV joined #gluster
16:53 jbrooks joined #gluster
17:11 mdjunaid joined #gluster
17:20 anands joined #gluster
18:04 bdperkin joined #gluster
18:07 cyberbootje joined #gluster
18:09 rcheleguini joined #gluster
18:09 ProT-0-TypE joined #gluster
18:16 _pol_ joined #gluster
18:41 JoeJulian Skaag: ipsec? selinux? some other firewall?
18:41 JoeJulian davidbierce: I do that.
18:41 Skaag no firewalls... I can telnet and it works
18:41 JoeJulian Skaag: Are you mounting as root?
18:41 Skaag yes
18:42 Skaag is that a problem?
18:42 JoeJulian Nope
18:42 Skaag oh ok
18:42 Skaag I checked that /dev/fuse exists
18:42 Skaag crw-rw-rwT 1 root 10, 229 Oct 18 19:54 /dev/fuse
18:42 JoeJulian Mounting as an unpriviliged user is trickier is all.
18:43 JoeJulian So what's the client log show? /var/log/glusterfs/{mount path with '/' converted to '-'}.log
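A small concrete example of that naming convention, with a hypothetical mount point:

    mount -t glusterfs server1:/ghome /mnt/ghome
    # the fuse client for this mount then logs to /var/log/glusterfs/mnt-ghome.log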
18:44 davidbierce Sweet.  Are there additional settings that should be configured to run VMs with Distributed+Replication?  It appears the VM disks are being constantly healed.  While it doesn't appear to be causing corruption, it pegs CPUs and the Bricks on the nodes it heals the VM image on.
18:45 JoeJulian davidbierce: I don't see that behavior.
18:45 JoeJulian And I haven't done anything interesting....
18:57 davidbierce I haven't in other setups, but there is a volume that just has VMs on it where the nodes are now running at a load of 30, with gluster taking it all…but still being reasonably responsive.
18:58 MrNaviPacho joined #gluster
19:15 glusterbot New news from resolvedglusterbugs: [Bug 953694] Requirements of Samba VFS plugin for glusterfs <http://goo.gl/v7g29>
19:18 Skaag JoeJulian, the log shows this: [socket.c:2080:socket_connect] 0-management: connection attempt failed (Connection refused)
19:19 Skaag and this: W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (1.1.1.2)
19:19 glusterbot Skaag: That's just a spurious message which can be safely ignored.
19:19 Skaag oh I see
19:19 JoeJulian Skaag: truncate the log. attempt your mount. fpaste the log.
19:20 Skaag ok
19:22 dbruhn weird issue
19:22 dbruhn lots of files showing up twice in my file system with the same inodes
19:22 dbruhn any idea what the hell could be doing that?
19:22 Skaag there's nothing new or special in that log file but nfs.log shows this: I [client.c:1883:client_rpc_notify] 0-ghome-client-0: disconnected
19:24 Skaag https://paste.fedoraproject.org/47847/
19:24 glusterbot Title: #47847 Fedora Project Pastebin (at paste.fedoraproject.org)
19:25 JoeJulian Skaag: Started running /usr/sbin/glusterfs version 3.2.5
19:28 Skaag should I attempt the latest instead?
19:28 Skaag it's the version that comes with the OS (Ubuntu 12.04)
19:29 JoeJulian @latest
19:29 glusterbot JoeJulian: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
19:29 JoeJulian @ppa
19:29 glusterbot JoeJulian: The official glusterfs packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.4 stable: http://goo.gl/u33hy
19:29 Skaag thanks! will update now
19:29 JoeJulian Not sure if that'll cure your problem, but it's a lot more mature.
19:30 JoeJulian The other problem might be that you forgot to start your volume (guessing).
19:35 Skaag it was started
19:35 Skaag almost done upgrading
19:39 MrNaviPacho joined #gluster
19:41 ngoswami joined #gluster
19:52 Skaag done with the upgrade. recreating the volume and trying again.
19:54 DV joined #gluster
19:54 Skaag hm. now when I try to create the volume it says this: volume create: ghome: failed: Glusterfs is not supported on brick: xgnt-vps-001:/ghomedata
19:56 dbruhn there is a hidden .glusterfs file on the root of the brick you were using, you'll need to remove that
19:56 dbruhn bricks
19:57 Skaag ah
19:57 Skaag I can't find it
19:58 Skaag going to purge it, reinstall, and try again
20:00 Skaag damn it, doesn't work
20:00 Skaag also, a purge didn't help
20:00 Skaag it somehow persists the data / configs, between installs
20:00 dbruhn There is another command JoeJulian has given me before to help clear it, I just don't remember it.
20:01 Skaag must be glusterfs-common package that I didn't remove
20:02 Skaag nope... :-(
20:03 Skaag where does it save this data? I erased /etc/glusterd/
20:04 aliguori joined #gluster
20:08 dneary joined #gluster
20:35 daMaestro|isBack joined #gluster
20:35 johnmark semiosis: I just pimped your PPA on the blog. Hope you don't mind :)
20:48 JoeJulian Skaag: /var/lib/glusterd
20:48 * JoeJulian just got back from lunch...
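For reference, a hedged sketch of the usual clean-up when re-using a brick directory that belonged to an earlier volume -- the path is Skaag's brick, and the commands assume glusterfs 3.3/3.4 on-brick metadata:

    # remove the volume markers from the brick itself
    setfattr -x trusted.glusterfs.volume-id /ghomedata
    setfattr -x trusted.gfid /ghomedata
    rm -rf /ghomedata/.glusterfs
    # wipe the old daemon state (this is where the config persists between reinstalls)
    rm -rf /var/lib/glusterd/*
    service glusterd restart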
20:54 johnbot11 joined #gluster
21:01 jclift left #gluster
21:14 [o__o] joined #gluster
21:19 _pol joined #gluster
21:20 kr1ss left #gluster
21:23 Xunil joined #gluster
21:48 failshel_ joined #gluster
21:48 _pol joined #gluster
22:03 _pol joined #gluster
22:23 zaitcev joined #gluster
22:33 _pol_ joined #gluster
22:46 badone joined #gluster
23:43 [o__o] joined #gluster
