IRC log for #gluster, 2013-07-15


All times shown according to UTC.

Time Nick Message
00:05 joelwallis joined #gluster
00:27 kedmison joined #gluster
00:49 yinyin joined #gluster
01:06 bala joined #gluster
01:46 harish joined #gluster
01:50 raghug joined #gluster
02:08 harish joined #gluster
02:25 sprachgenerator joined #gluster
02:33 sprachgenerator_ joined #gluster
02:45 vshankar joined #gluster
02:51 jag3773 joined #gluster
03:21 badone joined #gluster
03:25 mohankumar joined #gluster
03:39 bharata joined #gluster
03:41 raghug joined #gluster
03:49 hagarth joined #gluster
04:01 sgowda joined #gluster
05:03 lalatenduM joined #gluster
05:06 kshlm joined #gluster
05:06 netgaroo joined #gluster
05:10 bulde joined #gluster
05:10 saurabh joined #gluster
05:12 netgaroo joined #gluster
05:16 rgustafs joined #gluster
05:19 vpshastry joined #gluster
05:23 sgowda joined #gluster
05:23 rjoseph joined #gluster
05:37 Humble joined #gluster
05:46 hagarth joined #gluster
05:52 bala joined #gluster
05:55 sgowda joined #gluster
05:58 ppai joined #gluster
06:02 _pol joined #gluster
06:05 shylesh joined #gluster
06:07 rastar joined #gluster
06:11 raghu joined #gluster
06:15 psharma joined #gluster
06:16 ngoswami joined #gluster
06:19 sgowda joined #gluster
06:24 FinnTux_ joined #gluster
06:28 jtux joined #gluster
06:30 pkoro joined #gluster
06:30 Recruiter joined #gluster
06:34 FinnTux joined #gluster
06:40 deepakcs joined #gluster
06:41 guigui3 joined #gluster
06:47 piotrektt joined #gluster
06:47 piotrektt joined #gluster
06:51 ramkrsna joined #gluster
06:51 ramkrsna joined #gluster
06:56 dobber_ joined #gluster
06:58 ricky-ticky joined #gluster
07:03 raghug joined #gluster
07:08 jtux joined #gluster
07:08 hybrid512 joined #gluster
07:09 bip`away joined #gluster
07:13 satheesh joined #gluster
07:14 sgowda joined #gluster
07:17 bip`away joined #gluster
07:18 ctria joined #gluster
07:18 harish joined #gluster
07:18 bip`away joined #gluster
07:30 ujjain joined #gluster
07:31 andreask joined #gluster
07:39 glusterbot New news from newglusterbugs: [Bug 961892] Compilation chain isn't honouring CFLAGS environment variable <http://goo.gl/xy5LX>
07:43 bip`away joined #gluster
07:47 sac`away joined #gluster
07:49 sac`away joined #gluster
07:50 sac joined #gluster
07:50 Recruiter joined #gluster
07:51 mooperd joined #gluster
07:56 harish joined #gluster
08:00 hagarth joined #gluster
08:36 edong23 joined #gluster
08:48 raghug joined #gluster
08:48 cenit joined #gluster
08:49 satheesh joined #gluster
08:49 msvbhat joined #gluster
08:49 cenit hi! I need some help with "Transport endpoint is not connected" messages, is this a proper place to ask or the mailing list is better?
08:52 hagarth joined #gluster
08:54 cenit hi! is there anyone? Sorry I'm not used to irc...
08:54 sac`away joined #gluster
08:55 andreask don't expect messages within seconds all the time
08:55 andreask people are spread over quite some timezones
08:56 cenit ok sure! Sorry! Japan here :)
08:56 andreask Austria here ;-)
08:57 cenit great! Good morning!
08:57 andreask hi
08:57 glusterbot andreask: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:57 andreask nice
08:57 andreask cenit: you get this error on your client? had a network problem?
08:58 cenit @andreask no network problem, at least not out of gluster
08:59 cenit the setup is 10 bricks replica-2 split across 2 servers, connected by Gb ethernet.
08:59 andreask native gluster mount?
08:59 cenit yes
08:59 andreask tried to remount?
08:59 cenit ubuntu 12.10 and gluster 3.4
08:59 cenit yes it happens very often
09:00 andreask hmm ... 3.4
09:00 cenit and it happens (not as often) also in another setup I did, very different (distributed, non replica, connected through infiniband)
09:00 cenit it happened (and was even worse) on 3.2 and 3.3
09:01 cenit each release seems to go better, but still not very reliable. There must be something wrong in the very basic setup that I'm doing, I think...
09:01 andreask looks like, never saw this without a network problem on my setups
09:02 andreask any special network setup?
09:04 cenit don't think so: the two servers are under the same switch, even if their IP addresses are from different pools. But for example mpi worked perfectly for a long time between the same servers, and no runs ever got shut down because of network problems
09:04 cenit how would you test for network problems?
09:05 bala joined #gluster
09:05 andreask looking for dropped packets on the interfaces, doing a benchmark like iperf, having a look at the netstat statistics
09:06 andreask kernel error messages ... things like that
09:06 cenit ok thanks
09:06 cenit sorry for the very basic question but even if I have to manage it I'm not really a proper sysadmin
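A minimal sketch of the checks andreask describes, assuming eth0 is the interface in question and iperf is installed on both servers:

    ip -s link show eth0        # RX/TX dropped and error counters
    netstat -s                  # protocol statistics: retransmits, failed connections
    dmesg | tail -50            # recent kernel messages about the NIC or link
    iperf -s                    # on one server
    iperf -c <other-server>     # on the other, to benchmark raw throughput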
09:06 vimal joined #gluster
09:07 sac joined #gluster
09:08 andreask np
09:10 cenit yes many RX dropped packets
09:11 glusterbot New news from newglusterbugs: [Bug 984444] Avoid logs related to printers from samba <http://goo.gl/9KElB>
09:13 sprachgenerator joined #gluster
09:18 ramkrsna joined #gluster
09:20 andreask cenit: bonding setup?
09:21 cenit mmm what does it mean?
09:22 cenit googled it, if you mean using multiple NIC per server, no, just one
09:24 netgaroo hi, is there a way to turn a directory into a gluster brick?
09:24 netgaroo my problem is:
09:25 netgaroo I have nearly a terabyte of data that shall be in a gluster mount
09:25 andreask cenit: then you should also look at the "netstat -s" result
09:26 netgaroo is there a better way than just copy the data into a new gluster mount?
09:26 netgaroo a way to somehow add the gluster metadata to an existing dir?
09:28 ngoswami joined #gluster
09:44 T0aD i guess nobody here has tried to use usrquota with gluster ? :)
09:45 manik joined #gluster
09:47 lisca joined #gluster
09:48 lisca hi. can anyone recommend a good, possibly rather in-depth, book on glusterfs design (and maintenance)? I couldn't succeed in finding any, but it might be that I'm not that good with search engines really
09:48 bharata-rao joined #gluster
09:54 ndevos T0aD: page 63 in the admin guide (from http://www.gluster.org/community/documentation/index.php/Main_Page#GlusterFS_3.3 ) explains how to use quota
09:54 glusterbot <http://goo.gl/wuhOc> (at www.gluster.org)
09:55 T0aD ndevos, usrquota, that is quota per uid
09:55 ndevos lisca: I guess you can start with http://www.gluster.org/community/documentation/index.php/Arch
09:55 glusterbot <http://goo.gl/4LlkI> (at www.gluster.org)
09:56 ndevos T0aD: yeah, I thought thats in the admin guide... let me check again
09:56 T0aD that would be wonderful
09:56 T0aD but i think you re confused with gluster quota per directory
09:58 ndevos right, looks like it isnt in 3.3, maybe its in 3.4, or I've seen it somewhere else....
09:59 T0aD really ?
09:59 cenit joined #gluster
10:00 T0aD http://www.gluster.org/community/documentation/index.php/Features34
10:00 glusterbot <http://goo.gl/4MvOh> (at www.gluster.org)
10:00 T0aD dont see anything
10:04 piotrektt_alpha joined #gluster
10:06 ndevos hmm, I cant find clear references either, I guess I've only seen the request for usrquota before
10:08 bulde ndevos: usrquota is not supported in glusterfs
10:08 bulde the best way to do it is, have /home (or similar) as glusterfs mount, and each user directory getting quota
10:08 bulde :-)
10:08 ndevos bulde: right :)
10:09 T0aD hey bulde :)
10:09 T0aD well in my case thats gonna be hard (12,000 websites, going for a 200,000 uids infra)
10:09 bulde T0aD: long time :-)
10:09 T0aD yeah *very* long time :)
10:09 T0aD i saw you sold out to redhat ?
10:09 harish joined #gluster
10:10 bulde T0aD: yep, a *very* old story now :-)
10:10 T0aD oops :P sorry wasnt following gluster for quite some time now
10:10 T0aD rooty is still in the game ?
10:11 hagarth joined #gluster
10:11 bulde rooty is around, busy with bigdata and stuff
10:11 T0aD oh, so the old team is still alive and kicking  ? :)
10:14 bulde very much, trying to grow big as a community :-)
10:14 T0aD well apparently its working, gluster is very popular now it seems, and you have 200 people in here!
10:16 T0aD well maybe you re right, i should give glusterfs builtin quotas a try
10:22 badone joined #gluster
10:23 ngoswami joined #gluster
10:28 T0aD cp: cannot create regular file `/var/gluster/users/toad/glusterfs-3.3.1.tar.gz': Disk quota exceeded
10:28 T0aD nice :)
10:28 T0aD thx bulde, thats probably the thing i should do
10:29 T0aD will probably be a problem on /tmp until i give each user his own directory
10:29 bulde T0aD: which version of gluster are you trying?
10:29 T0aD the one of the tarball :P
10:33 spider_fingers joined #gluster
10:40 sac`away joined #gluster
10:41 rgustafs joined #gluster
10:42 sac`away joined #gluster
10:43 sac`away joined #gluster
10:44 ramkrsna joined #gluster
10:48 sac joined #gluster
10:52 bala joined #gluster
10:58 T0aD hmm
10:58 T0aD there is no opposite of set ? :)
10:58 T0aD # gluster volume get users features.quota-timeout
10:58 T0aD unrecognized word: get (position 1)
10:58 chirino joined #gluster
10:59 ndevos no, but it is listed in "gluster volume info users"
10:59 T0aD yeah i see it
10:59 lalatenduM joined #gluster
10:59 T0aD well its only listed if it was user-defined, seems the defaults are hardcoded
11:00 T0aD # gluster volume info users | grep ^features
11:00 T0aD features.limit-usage: /toad:10MB,/jack:10MB,/jack2:10MB
11:00 T0aD features.quota: on
11:01 T0aD im wondering how the features.limit-usage will look with 12,000 entries.
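For reference, the limit-usage entries shown above are managed through the quota sub-commands; a sketch using the volume and paths from this session:

    gluster volume quota users enable
    gluster volume quota users limit-usage /toad 10MB
    gluster volume quota users list
    gluster volume quota users remove /toad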
11:02 hagarth @ports
11:02 glusterbot hagarth: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
11:02 hagarth @3.4 ports
11:02 ndevos you'll need to update that :)
11:02 hagarth ndevos: yeah, remembered while writing release notes :)
11:03 ndevos good catch!
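A hedged firewall sketch for the pre-3.4 port layout glusterbot lists above (the brick range must cover however many bricks the node hosts; 3.4 moves bricks up to 49152 and above, which is why the factoid needs updating):

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (tcp, rdma)
    iptables -A INPUT -p tcp --dport 24009:24030 -j ACCEPT   # bricks, pre-3.4 numbering
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS and NLM
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # rpcbind/portmap
    iptables -A INPUT -p udp --dport 111 -j ACCEPT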
11:04 T0aD {"features.quota-timeout",               "features/quota",            "timeout", "0", DOC,
11:04 T0aD 0},
11:04 T0aD funny, seems its 0 by default
11:06 Deebs_at_work joined #gluster
11:07 Deebs_at_work hi folks
11:07 Deebs_at_work can anybody confirm if glusterfs transmits all changes made to the gfs down to every connected gfs client (we are seeing this happen)
11:08 T0aD hmpf, so basically for a value of 0, it will crawl through all subdirectories to compute the total size of files ? doesnt sound very efficient on drives
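If the default of 0 forces a fresh crawl too often, the option T0aD quotes can be raised per volume (a sketch; the value is in seconds and trades accuracy of the cached directory sizes for fewer crawls):

    gluster volume set users features.quota-timeout 30
    gluster volume info users | grep quota-timeout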
11:18 DWSR joined #gluster
11:18 DWSR joined #gluster
11:25 DWSR joined #gluster
11:26 chirino joined #gluster
11:33 T0aD bulde, hmm apparently i should use a higher version of glusterfs as there are a couple of problems with 3.3.1 and quotas (like quotas not taking effect when reconfiguring them without having to remount the client)
11:34 skyw joined #gluster
11:34 edward1 joined #gluster
11:36 piotrektt joined #gluster
11:44 lisca left #gluster
11:45 chirino joined #gluster
11:49 T0aD yeah works perfect with 3.4.0 :)
12:00 hagarth joined #gluster
12:05 bulde joined #gluster
12:17 ctria joined #gluster
12:30 vpshastry joined #gluster
12:34 raghug joined #gluster
12:36 fleducquede joined #gluster
12:36 shylesh joined #gluster
12:38 bulde joined #gluster
12:38 xavih joined #gluster
12:39 bet_ joined #gluster
12:44 xavih joined #gluster
12:44 yinyin joined #gluster
12:45 jdarcy joined #gluster
12:55 dblack joined #gluster
12:55 pjameson joined #gluster
12:59 raghug joined #gluster
13:00 harish joined #gluster
13:01 guigui1 joined #gluster
13:04 pjameson Hey all. I'm testing gluster 3.4, and I'm trying to see if there's a way to prefer one node over the node given by the hash function for a file during a rebalance. We were testing with some VMs, and we'd like to be able to get data locality if we move a VM from one host to another. Is there any magic xattr or anything that I'm missing that would help here?
13:06 kedmison joined #gluster
13:08 rcheleguini joined #gluster
13:16 tw I would also like to know the answer to that question.
13:18 jdarcy pjameson, tw: there are some ways to do this, but (regrettably) they're a bit of a usability nightmare.
13:18 aliguori joined #gluster
13:19 tw Could you provide references? Everything I've read seems to indicate gluster client is not rack/bw/latency aware in any way.
13:19 jdarcy In 3.4 and 3.3.2, there's a way to define your own "layout" (the set of attributes defining how files are placed) for a directory.
13:19 pjameson Well, I don't mind coding up something to automate it to a degree if I could get a pointer in the right direction.
13:19 jdarcy Unfortunately, it requires both understanding the format of the trusted.glusterfs.dht extended attributes, and then setting them directly on the bricks (something we normally discourage).
13:21 jdarcy https://github.com/gluster/glusterfs/blob/master/extras/rebalance.py demonstrates one way to do this with a Python script, but there's a lot of mojo involved.
13:21 glusterbot <http://goo.gl/ZMOaa> (at github.com)
13:21 pjameson Are there any docs on those xattrs? I've only been able to find a listing in the wiki on the gluster site. I'm reading through the gluster source right now, but the only ones that I have seen have not worked as I'd hoped
13:21 netgaroo hi gluster experts, hopefully one of you can help me with my little problem:
13:21 netgaroo is there a way to turn a directory into a gluster brick?
13:21 netgaroo my problem is:
13:21 netgaroo I have nearly a terabyte of data that shall be in a gluster mount
13:21 netgaroo is there a better way than just copy the data into a new gluster mount?
13:21 netgaroo a way to somehow add the gluster metadata to an existing dir?
13:22 xavih_ joined #gluster
13:22 jdarcy Also, http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/ (just noticed that was on my birthday)
13:22 glusterbot <http://goo.gl/Bf9Er> (at hekafs.org)
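As a starting point, the layout jdarcy is describing can at least be read directly on a brick directory; interpreting and rewriting it safely is what the script and article above cover:

    getfattr -n trusted.glusterfs.dht -e hex /path/to/brick/some/dir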
13:23 jdarcy netgaroo: Unfortunately, it only *mostly* works (I wish I could do a good impression of Miracle Max from Princess Bride here)
13:24 jdarcy netgaroo: Are you using replication?
13:25 netgaroo yes, I've set up two hosts with the files already replicated
13:26 netgaroo and wanted to move from poor rsync to gluster
13:26 jdarcy We really need a good import script, to be *sure* we get all of the internal metadata set up for an existing directory to be a brick.
13:27 netgaroo jdarcy: basically there is none as of now?
13:28 xavih joined #gluster
13:28 jdarcy netgaroo: I'm afraid my own view of what would happen is incomplete.  I know that both AFR and DHT will deal with this in the sense of finding files and regenerating their own xattrs on the files themselves, but I'm not entirely sure of the consequences when the corresponding links etc. in .glusterfs aren't there.
13:29 tw I'm pretty sure you would set the brick as replicated, then stat all of the files to force a self-heal.
13:29 jdarcy netgaroo: There's also a definite problem when the directory structure is not common across all DHT (distribution/sharding) subvolumes (single bricks or replica sets).
13:29 tw Not exactly sure what would happen though, but it would synchronize >.>
13:29 jclift joined #gluster
13:29 l0uis I'm investigating using gluster in a smallish 5 node setup. I'm currently trying to understand the various failure scenarios and recovery options. Something I am curious about and haven't been able to find anything on: let's say I lose a brick, and that data is not mirrored. Is there a way to know which files have been lost and need to be restored from tape?
13:30 jdarcy tw: As I said, *mostly* works.  ;)  Unfortunately I can not, with full diligence, recommend solutions that mostly work.
13:30 tw jdarcy: fair enough.
13:30 tw esp in production environments >.>
13:31 T0aD hmm im starting to love gluster quotas, maybe i can even implement quotas on a mysql storage directory
13:32 tw mysql on gluster? that sounds like not-a-good-idea.
13:32 failshell joined #gluster
13:32 T0aD haha
13:32 plarsen joined #gluster
13:32 jdarcy l0uis: AFAIK no, there is no way to get such a list of what's not there.  We don't have any sort of global directory (that's how we achieve scalability across many servers).
13:32 T0aD thats what i said to rooty 6 years ago, it seems its quite ok actually
13:32 netgaroo thanks jdarcy and tw for the info, in that case I'll bite the bullet and wait for all the files to be copied.
13:33 l0uis jdarcy: so operationally... one needs to keep track of every file we care about so that in the case of failure we can go look for it and restore if necessary?
13:33 jdarcy As of 3.4, I'd say GlusterFS is OK for light-to-medium database use.  I'm working on stuff to make it OK for medium-to-heavy use, but we'll probably never be great at the very high end.
13:34 jdarcy l0uis: I'd say you need to use a restore method that can efficiently skip over files that are already present.
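A sketch of such a restore, assuming the tape restore lands in a staging directory first and the volume is mounted at /mnt/vol:

    rsync -av --ignore-existing /restore/staging/ /mnt/vol/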
13:34 tw jdarcy: would you say the xattrs listed in your post (trusted.*) are internal and version specific or are they pretty formalized at this point?
13:34 l0uis jdarcy: the bulk of this data is written once and then read numerous times... it doesn't change often
13:34 T0aD tw, you had some issues with the setup ?
13:34 l0uis jdarcy: makes sense...
13:35 jdarcy tw: They're not part of an official API, but the DHT ones haven't changed in a long time except to add the user-defined-layout support, and that support kind of implicitly represents a contract to do what the user said.
13:36 jdarcy tw: I would personally harass anyone on the team who failed to use the already-present format ID to ensure backward compatibility.
13:36 T0aD tw, here is the answer from rooty for my concern http://www.bpaste.net/raw/7dcVi1RyEvfy0zkn5o0U/
13:36 hagarth joined #gluster
13:36 ctria joined #gluster
13:36 jdarcy tw: ...and nobody on the team wants me harassing them.  ;)
13:39 T0aD <jdarcy> As of 3.4, I'd say GlusterFS is OK for light-to-medium database use. hm hm.
13:40 jdarcy T0aD: I'm sure users would like me to say more.  Developers would like me to say less.  I'm trying to walk a fine line here.  ;)
13:40 tw Ok, final question. My understanding is gluster client pretty much goes to whichever replica server replies first. Does it have discretion to pick amongst a pool of replies (say if it waited a little) or would that be out of spec?
13:41 T0aD jdarcy, maybe ill give it a shot on a db hosting 12,000 little databases
13:42 jdarcy tw: There are a few deviations from that rule.  A client will *definitely* pick a local replica if there is one.  There's also an option to select a specific subvolume, overriding the first-to-respond choice.  Lastly there's an option to select a subvolume by hashing the GFID instead, so all clients will be guaranteed to converge on a single copy for consistency.
13:42 jdarcy T0aD: That's "light to medium" for you?
13:43 T0aD no idea, i dont consider myself as 'high' (yet!)
13:43 jdarcy T0aD: You sound pretty high to me.
13:43 T0aD and rooty was quite cheerful at this time (check my link) so i dont know what to think.. my guts would say to avoid it
13:44 jdarcy Never a substitute for trying it, I guess.  Maybe it works.  Maybe it's hilarious.  Either way, fun will be had.
13:44 tw jdarcy: ok, thank you very much for all the info. You've given me a pretty good idea where to look and what approach to take.
13:44 jdarcy tw: You're welcome, and good luck.
13:44 xavih_ joined #gluster
13:45 mtanner_ joined #gluster
13:49 xavih joined #gluster
13:52 yinyin joined #gluster
13:52 pjameson jdarcy: I'm looking at that script, and it seems like it's splitting the hash space up by how many bricks I have (doing distributed/replicate w/ nufa, so I've got two replicate bricks; that script gives me a hash at 2^31 - 1 and 2^31-2^32), however I need a bit of clarification on where these attributes are set. Can I stick them on subdirectories, or does it have to be the root dir of the brick? I had tried it on a subdirectory with a hash from 0-
13:52 pjameson on one of the bricks, and 0-0 on the other in hopes of getting the files to move on a rebalance, but it didn't work
13:52 pjameson sorry for the wall of text :)
13:53 jdarcy pjameson: Those attributes are per-directory.  If you set them on one directory, it won't affect another directory (even a subdirectory).
13:54 jdarcy pjameson: I do have some plans to make these layouts "inheritable" so we don't have to keep re-setting them the same way on thousands of directories, but that's off in the future.
13:57 manik joined #gluster
13:58 jclift left #gluster
13:58 jclift joined #gluster
13:58 skyw joined #gluster
13:58 pjameson jdarcy: That's fine. The layout we're going to have for VMs will probably be pretty shallow, so the one directory underneath the root of the volume for each, and a handful of files under that. The problem seems to be, though, that rebalancing is resetting the layout. Is there a way to get it to just do data migration?
13:59 hagarth joined #gluster
13:59 lalatenduM joined #gluster
14:00 pjameson jdarcy: We're basically trying to make it so that we can move VMs between nodes using the network, but if we permanently move them, we'd like to be able to force them to a specific replica, so we're not running their main drives entirely over the network
14:01 mohankumar joined #gluster
14:02 jdarcy pjameson: Understood.  The other possibility is to use the same mechanism that rebalance itself does to relocate data, but I should warn you that it's even uglier than what we've discussed before.
14:03 zaitcev joined #gluster
14:03 tw pjameson: just curious, what hypervisor are you using?
14:04 pjameson tw: kvm; we're going to be using the new libgfapi block driver in qemu
14:04 jdarcy pjameson: I have a patch in the queue (http://review.gluster.org/#/c/5233/) to make this much more palatable, but there seems to be little interest so far.
14:05 pjameson jdarcy: Mother of god, that's exactly what I need I think
14:06 jdarcy pjameson: If you create a file xxx as xxx@dht-volume:subvolume it will be created on "subvolume" but there are caveats.
14:06 tw Ah, I was going to say kvm-qemu recently added migration w/ storage which would handle the storage for you if you were using a normal backend in addition to hostnode-specific VM storage directories as jdarcy suggested.
14:06 jdarcy pjameson: One caveat is that you need to know the internal names for the DHT volume and the subvolume you want, which means getting and parsing the volfile.
14:07 jdarcy pjameson: The other caveat is that it's not guaranteed to *stay* where you put it, across the next rebalance.
14:07 satheesh joined #gluster
14:07 pjameson jdarcy: Yeah, that shouldn't be a huge deal, I don't expect. I'm going to give that patch a spin today to see what it yields.
14:08 jdarcy pjameson: If it's not too much trouble, would you mind adding a comment on the patch?  User feedback is very helpful for getting the Powers That Be to consider these things.
14:08 pjameson jdarcy: Definitely. I'll try to get it looked at this morning. I'll shoot something out early afternoon probably
14:08 tw I was going to ask that. "does user commentary go in the patch notes or on the ML"
14:09 tw *patch comments
14:09 jdarcy tw: It *really* should go on an enhancement request in bugzilla.redhat.com, but that doesn't exist yet in this case.
14:14 jdarcy https://bugzilla.redhat.com/show_bug.cgi?id=984602
14:14 glusterbot <http://goo.gl/4GcIv> (at bugzilla.redhat.com)
14:14 glusterbot Bug 984602: unspecified, unspecified, ---, sgowda, NEW , [FEAT] Add explicit brick affinity
14:14 jdarcy Comments (or even followers) either place are more than welcome.
14:15 glusterbot New news from newglusterbugs: [Bug 984602] [FEAT] Add explicit brick affinity <http://goo.gl/4GcIv>
14:15 jdarcy glusterbot: Meh, you're repeating yourself.
14:18 samppah jdarcy: do you know what is status of bug 953887?
14:18 glusterbot Bug http://goo.gl/tw8oW high, high, ---, pkarampu, MODIFIED , [RHEV-RHS]: VM moved to paused status due to unknown storage error while self heal and rebalance was in progress
14:19 samppah it's clone of 922183 which is private :/
14:19 jdarcy Grrr.  Let me look.
14:21 chirino joined #gluster
14:23 jdarcy samppah: According to the private bug, it hasn't been reproducible since http://review.gluster.org/4568
14:23 glusterbot Title: Gerrit Code Review (at review.gluster.org)
14:25 jdarcy samppah: Unfortunately, that seems to be in neither 3.4 nor 3.3.2  :(
14:25 samppah doh :(
14:25 samppah that's only included in RHS?
14:26 jdarcy It's in upstream, but hasn't been backported to either of the release branches.
14:26 samppah it's not duplicate of this? http://review.gluster.org/#/c/4868/
14:26 glusterbot Title: Gerrit Code Review (at review.gluster.org)
14:28 jdarcy samppah: Ah, yes it is.  Looks like Pranith used a different bug number, that's why it didn't show up in my search.
14:29 jdarcy Our bug tracking is crazy.  Multiple per-release bugs both upstream and downstream for the same underlying issue, clones everywhere.
14:30 hagarth samppah: http://www.gluster.org/community/documentation/index.php/Backport_Wishlist#Requested_Backports_for_3.4.1 <-- anything you want covered for 3.4.1
14:30 glusterbot <http://goo.gl/lTLaK> (at www.gluster.org)
14:31 samppah cool
14:31 tw jdarcy: regarding brick affinity, in the commit message you say 'setfattr -n distribute.migrate-data ...' but the code reads/translates *.affinity. commit typo or am I not understanding this patch properly?
14:31 jdarcy Yay, another patent.  :(
14:31 samppah i'll try to do some testing with GA release also
14:33 hagarth samppah: cool, please keep your feedback coming.
14:33 jdarcy tw: You use *.affinity to set where it should be, but then you also need distribute.migrate-data (which already existed) to actually move it there.  Otherwise it will stay where it is until you rebalance.
14:34 jdarcy tw: And yes, the commit message is confusing.  My bad.
14:34 * jdarcy shows he has been in New England too long.  "My bad" indeed.
14:35 failshell joined #gluster
14:36 bala joined #gluster
14:36 tw ah, thank you for explaining that. Your explanation implies I can set affinity without a rebalance/migrate? Does that ever make practical sense?
14:37 jdarcy tw: Probably not.  It's more to do with how things work internally.  The promised script (when I get to it) will do both.
14:42 puebele joined #gluster
14:42 deepakcs joined #gluster
14:45 bsaggy joined #gluster
14:46 vpshastry joined #gluster
14:46 chirino joined #gluster
14:49 chirino joined #gluster
14:52 guigui1 joined #gluster
14:57 rwheeler joined #gluster
14:59 chirino joined #gluster
15:08 dbruhn joined #gluster
15:16 spider_fingers left #gluster
15:17 chirino joined #gluster
15:19 daMaestro joined #gluster
15:20 dewey joined #gluster
15:20 coredumb joined #gluster
15:21 joelwallis joined #gluster
15:26 dewey_ joined #gluster
15:26 Technicool joined #gluster
15:27 sprachgenerator joined #gluster
15:37 jebba joined #gluster
15:38 rcedillo1 joined #gluster
15:39 bala joined #gluster
15:42 rcedillo1 nfs Vs glusterfs, i wonder which one has better performance
15:42 rcedillo1 ??
15:42 Technicool joined #gluster
15:43 NuxRo rcedillo1: depends
15:43 NuxRo one case: nfs doesnt scale, glusterfs does, hence glusterfs has better performance :)
15:44 rcedillo1 ok
15:45 rcedillo1 NuxRo: any points against?
15:46 chirino joined #gluster
15:47 NuxRo against what?
15:48 rcedillo1 any points against glusterfs?
15:51 _pol joined #gluster
15:58 edong23 joined #gluster
15:59 NuxRo it adds more latency and complexity compared to a classical nfs setup
15:59 ke4qqq joined #gluster
16:00 NuxRo especially if you do replication
16:04 jag3773 is there an ETA for gluster 3.4?  I looked online the other day but i didn't see an expected release date
16:04 rcedillo1 would you recommend changing from nfs to glusterfs? the servers are redhat and centos
16:05 rcedillor joined #gluster
16:05 jag3773 rcedillo1, i would recommend yes if you need scalability -- but you should test first to make sure your software will perform as expected
16:06 rcedillo1 left #gluster
16:06 lpabon joined #gluster
16:08 rcedillor jag3773, it thinks
16:08 jag3773 rcedillor, if you *only* need HA then you may be better with a DRBD setup, but it depends on your environment -- glusterfs is easier to setup ;)
16:20 vpshastry joined #gluster
16:22 rcedillor we currently have nfs implemented; we are asked to evaluate the change only for redhat support. i think it is a stupid idea, but ok.
16:23 jag3773 pretty sure redhat supports nfs don't they rcedillor ?
16:25 rcedillor i have no idea
16:30 cfeller joined #gluster
16:37 semiosis jag3773: 3.4 was announced today
16:37 semiosis ,,(latest)
16:37 glusterbot The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
16:37 ctria joined #gluster
16:37 jag3773 awesome, just upgraded to 3.3 last night :(
16:38 jag3773 that's okay though, i'll let other ppl upgrade to 3.4 first and if everything looks okay i'll take the plunge ;) --  thanks semiosis
16:38 semiosis yw
16:39 semiosis jag3773: fwiw there's also a new 3.3.2 which was announced today as well
16:39 * semiosis updates the ,,(ppa)
16:39 glusterbot The official glusterfs 3.3 packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.3 QA: http://goo.gl/5fnXN -- and 3.4 QA: http://goo.gl/u33hy
16:40 kkeithley @yum
16:40 glusterbot kkeithley: The official community glusterfs packages for RHEL (including CentOS, SL, etc), Fedora 17 and earlier, and Fedora 18 arm/armhfp are available at http://goo.gl/s077x. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates repository.
16:42 jag3773 http://download.gluster.org/pub/gluster/glusterfs/repos/YUM/glusterfs-3.4/ is blank... but http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/ is good
16:42 glusterbot <http://goo.gl/x1fe6> (at download.gluster.org)
16:45 kkeithley try again ;-)
16:45 jag3773 looks good kkeithley, symlinks work wonders ;)
16:45 sjoeboo joined #gluster
17:03 cfeller What are "best practices", regarding GlusterFS and SELinux?
17:04 cfeller I'm attaching a GlusterFS volume as part of my web server directory structure, and Apache gets a permission denied.  I tracked it down to SELinux, and realized the contexts were wrong.  I tried changing the context to httpd_sys_content_t from client and couldn't do so.  (It remains fusefs_t).  As it stands right now I'm running the webserver with SELinux permissive, but I would like to know...
17:04 cfeller ...the 'best' way to approach this (w/out permanently disabling SELinux)
17:09 samppah cfeller: what distro you are running?
17:10 cfeller Servers are running RHEL6, client is Fedora 18.
17:10 cfeller GlusterFS 3.3.1 on both sides.
17:10 cfeller ('client' being the web server)
17:11 cfeller ('server' being the GlusterFS servers)
17:13 jag3773 cfeller, have you tried running the log entries through audit2allow?
17:13 samppah cfeller: iirc, at least fedora 19 has selinux boolean for httpd_use_fusefs
17:14 jag3773 that sounds like a good way to go samppah, if it's avail in 18
17:16 cfeller I haven't tried that yet.  I just tried changing the context via chcon and got the permissions error.  I thought about the audit2allow route, but I figured I'd ask what the 'best practices' were, here first.
17:19 vpshastry left #gluster
17:26 cfeller OK.  I'll check to see if it is in 18.  If not, that box is on my list to upgrade to 19 anyway, so I may move it to the front of the list if need be.
17:27 kkeithley FYI, f19 comes with glusterfs-3.4.0!
17:27 cfeller that is, I'll check the setsebool route.  thanks all.
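Two hedged options, depending on what the client release ships: the boolean samppah mentions, or a local policy module built from the denials jag3773 suggested feeding to audit2allow:

    # if the boolean exists on this release:
    setsebool -P httpd_use_fusefs on
    getsebool httpd_use_fusefs

    # otherwise build a local module from the audit log:
    grep httpd /var/log/audit/audit.log | audit2allow -M httpd_gluster
    semodule -i httpd_gluster.pp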
17:28 cfeller It does come with 3.4.0. and according to JoeJulian's blog (http://joejulian.name/blog/fedora-19-with-legacy-glusterfs-33/), backwards compatible with 3.3 (at least for the clients).
17:28 glusterbot <http://goo.gl/AFBrL> (at joejulian.name)
17:29 cfeller has anyone here seen any issues with this?
17:30 puebele1 joined #gluster
17:35 bit4man joined #gluster
17:41 staykov joined #gluster
17:42 staykov hi - something went wrong and it seems i have files that are not split brained, but they are older than what they should be
17:42 staykov when i try to access them i get Input/Output error - the timestamp is a few weeks old
17:43 staykov but the rest of the files are good on both bricks - nothing really in the log
17:43 staykov trying to heal shows that healing went file
17:44 staykov also the two bricks sync with each other - not sure where i should go from here and if i can recover the data
17:52 Technicool joined #gluster
17:53 JoeJulian cfeller: Been running a fedora 19 box running the 3.4.0 client with my home directory on a 3.3.1 volume for a week now. No problems.
17:54 kaptk2 joined #gluster
17:55 dbruhn staykov, I have run into this before. I usually manually pick a file from one of the bricks, pull it out to a temp location. Go through the process in JoeJulian's split brain blog write up, once the file is fixed, remove it from the file system (mount point) and then re add it.
17:56 skyw joined #gluster
17:58 JoeJulian staykov: There was an old bug that used to do that. Are you running a current version?
17:59 staykov hmm running glusterfs 3.3.0 built on Jun 12 2012 16:43:24
18:00 JoeJulian 3.3.2 is current (as of today)
18:00 staykov i dont see a .glusterfs dir though
18:00 JoeJulian It's on the brick.
18:00 staykov ahh ill update it when i get a chance
18:01 staykov its different from your blog though, it doesnt see them as split brained i dont think
18:01 staykov http://pastie.org/8143275
18:02 glusterbot Title: #8143275 - Pastie (at pastie.org)
18:03 JoeJulian "Heal operation on volume DATA has been successful" is a misstatement, by the way. It actually means that the query of the self-heal daemons was successful.
18:04 staykov i thought that if it was split brained the file would be different on each brick, they seem to be the same
18:04 JoeJulian I'd double check the client log when you stat that file through a client mount. You're probably right though.
18:05 JoeJulian dbruhn was just saying that's what he does to ensure everything's the way he expects it. It might be a little overkill, but some people like overkill. :D
18:06 staykov ahh no they are different according to stat
18:07 staykov so now that i know which one i want, ill look up what to do
18:07 JoeJulian Make sure you check ,,(extended attributes) too. That'll tell you a bit also.
18:07 staykov thanks a lot
18:07 JoeJulian Hey... what happened to the extended attributes factoid?
18:08 staykov aye according to the access/modify times i know which version i want
18:08 JoeJulian cool
18:08 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://goo.gl/Bf9Er
18:09 staykov ahh i tried that before but i get no output
18:09 JoeJulian Oh, wow... that was lagged....
18:09 JoeJulian @reconnect
18:09 glusterbot JoeJulian: Error: You don't have the owner capability. If you think that you should have this capability, be sure that you are identified before trying again. The 'whoami' command can tell you if you're identified.
18:09 JoeJulian @reconnect
18:10 glusterbot joined #gluster
18:10 JoeJulian has to be run on the brick, not the client.
18:10 staykov i am reading this http://blog.oneiroi.co.uk/linux/gluster-resolving-a-split-brain-in-a-replicated-setup/
18:10 glusterbot <http://goo.gl/voAV0> (at blog.oneiroi.co.uk)
18:10 tqrst joined #gluster
18:10 staykov the stripxattr script 404s though
18:11 JoeJulian What the???
18:11 JoeJulian Talk about overkill.
18:11 JoeJulian I don't think I would do that myself.
18:12 JoeJulian unless you have a completely fubar brick, anyway.
18:14 staykov ok so if i understand it right, i should move the file out of the brick and then gluster should heal itself from the good one?
18:14 JoeJulian The file and it's gfid hardlink
18:14 JoeJulian @split-brain
18:14 glusterbot JoeJulian: To heal split-brain in 3.3, see http://goo.gl/FPFUX .
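The 3.3 procedure behind that link boils down to removing the stale copy and its gfid hardlink from the affected brick, then letting self-heal pull the good copy back; a sketch with placeholder paths:

    # on the brick holding the bad copy:
    getfattr -n trusted.gfid -e hex /brick/path/to/file     # note the gfid, e.g. 0x4fe48a2f...
    rm /brick/path/to/file
    rm /brick/.glusterfs/4f/e4/4fe48a2f-...                 # hardlink named after that gfid
    # from a client mount, trigger self-heal:
    stat /mnt/volume/path/to/file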
18:15 JoeJulian Guess I'd better cause some split-brain in 3.4 and see if that still works.
18:17 staykov ahh ok now i get it lol
18:17 dbruhn staykov, JoeJulian, it might indeed be a bit overkill. The issue I run into is that I can't actually do anything with the file till the Input/Output error is corrected. It's what I need to do to fix the file system so I can actually work against the data in it.
18:19 tqrst (congrats on 3.3.2 and 3.4!)
18:22 staykov should i do this with gluster running or off?
18:26 JoeJulian running is fine
18:26 dbruhn Yep, sorry running is how I do it as well.
18:31 Recruiter joined #gluster
18:31 cfeller JoeJulian: Thanks. That has been my experience in my (albeit limited) testing up to this point too.
18:33 theron_ joined #gluster
18:35 staykov hmm, i now see the file replicated and looks the same, i still get Input/output errors on the bad brick but on the good brick i can read the file it seems
18:36 dbruhn staykov, did you remove it from the mount point and recopy it back into the file system after?
18:36 JoeJulian staykov: That's a bug in 3.3.0. Remount.
18:36 dbruhn That process should completely remove it from the system, then re adding it treats it like a new file.
18:37 JoeJulian 3.3.0 doesn't ever forget the file was split-brain.
18:41 tqrst is there an admin guide for 3.4? I couldn't find it on http://www.gluster.org/community/documentation/index.php/Main_Page#GlusterFS_3.4
18:41 glusterbot <http://goo.gl/1M3xe> (at www.gluster.org)
18:42 JoeJulian yes... Now I just need to figure out where it is...
18:42 JoeJulian I think it's supposed to be at forge.gluster.org somewhere...
18:43 tqrst oh, another wiki
18:45 staykov ahh very cool, it worked for one of my files
18:45 staykov thanks a lot!
18:46 JoeJulian tqrst: No, it's a git repo... The plan is to get the admin guide in a format that can be edited easily thus allowing community involvement.
18:46 tqrst JoeJulian: sounds like a good idea
18:47 JoeJulian so-far, though, it looks like a scrape of the wiki...
18:53 JoeJulian Argh... the documentation is still in the source tree. At least it's markdown now. It's in doc/admin-guide/en-US/markdown
18:53 JoeJulian @git repo
18:53 glusterbot JoeJulian: https://github.com/gluster/glusterfs
18:54 hagarth JoeJulian: the idea is to continue editing in markdown and periodically publish on gluster.org. I will send out an announcement of the process shortly.
18:56 JoeJulian hagarth: I want the documentation moved to the forge, rather than having it in the source git.
18:56 JoeJulian It doesn't really require CI testing to make documentation changes.
18:57 hagarth JoeJulian: having it in source git enables developers to update documentation as and when a feature changes
18:57 JoeJulian ok
18:58 hagarth also gerrit enables reviews better (we need to extend gerrit to forge repos too).
18:59 JoeJulian Is there a way to avoid wasting jenkins cycles for documentation changes?
19:00 hagarth JoeJulian: probably we could exclude Jenkins from being triggered for any patch that is exclusively in the doc domain.
19:01 tqrst jenkins supports path-based ignores, so yes
19:01 sprachgenerator joined #gluster
19:02 lpabon_ joined #gluster
19:02 hagarth tqrst: yes, we should enable that for doc only patches.
19:02 semiosis @learn 3.4 upgrade notes as http://vbellur.wordpress.com/2013/07/15/upgrading-to-glusterfs-3-4/
19:02 glusterbot semiosis: The operation succeeded.
19:03 semiosis @learn 3.4 release notes as https://github.com/gluster/glusterfs/blob/release-3.4/doc/release-notes/3.4.0.md
19:03 glusterbot semiosis: The operation succeeded.
19:03 JoeJulian @alias "3.4 upgrade notes" "3.4 release notes"
19:03 glusterbot JoeJulian: An error has occurred and has been logged. Check the logs for more informations.
19:03 nagis joined #gluster
19:04 JoeJulian pfft...
19:04 semiosis not quick enough
19:04 semiosis @3.4 notes
19:04 JoeJulian Oh, they're different anyway. Cool.
19:05 semiosis no related factoids on ,,(3.4 notes) :(
19:05 glusterbot semiosis: Error: No factoid matches that key.
19:06 tqrst "replace-brick operation does not work fine in all cases in this release" sounds a bit ominous
19:07 JoeJulian @learn 3.4 as 3.4 sources and packages are available at http://download.gluster.org/pub/gluster/glusterfs/LATEST/ Also see "@3.4 release notes" and "@3.4 upgrade notes"
19:07 glusterbot JoeJulian: The operation succeeded.
19:09 JoeJulian Heh, the markdown docs are in master but not in release-3.4
19:09 semiosis tqrst: replace-brick has never worked fine in all cases, which afaict is logically equivalent to there have been cases where replace brick has not worked fine
19:09 JoeJulian oh, nevermind
19:09 JoeJulian They are, I just missed them.
19:15 rotbeard joined #gluster
19:21 vpshastry joined #gluster
19:22 sprachgenerator so I'm trying to mount some RDMA volume (3.4.0) and mount -t glusterfs appears to be calling 3.3v of the client: http://pastie.org/8143471 - anyone seen this before?
19:22 glusterbot Title: #8143471 - Pastie (at pastie.org)
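A quick hedged way to check whether an older client package or binary is still being picked up by mount.glusterfs on that machine:

    glusterfs --version
    which mount.glusterfs glusterfs
    rpm -qa | grep -i glusterfs     # or: dpkg -l | grep glusterfs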
19:24 tqrst are there any path changes in gluster 3.4 other than /var/run/gluster being used instead of /tmp for statedumps? I'm still stuck with a shared root system, so I need to make sure /etc/statetab is up to date.
19:24 tqrst (coming from 3.3.1)
19:30 rcedillor left #gluster
19:48 glusterbot New news from newglusterbugs: [Bug 977543] RDMA Start/Stop Volume not Reliable <http://goo.gl/hRlNr>
19:51 JoeJulian tqrst: Not sure yet. I probably won't have any time to do any indepth admin testing reviews 'till after OSCON.
19:51 T0aD peer probe: failed: Peer 192.168.122.154 is already at a higher op-version
19:51 * T0aD wants his money back!
19:51 JoeJulian lol
19:52 JoeJulian T0aD: Might want to make sure your servers are all at the same version.
19:52 T0aD they are
19:52 T0aD 3.4.0
19:52 tqrst T0aD: at least it's not 5pm on a friday
19:52 T0aD im wondering whats wrong..
19:54 dbruhn is 3.4.0 GA yet?
19:54 T0aD nope but it fixes an issue with quotas
19:55 T0aD just getting (re)started on glusterfs
19:55 T0aD last version i used was probably 1.3
19:55 T0aD hm apparently it needs to have a dns entry
19:56 semiosis dbruhn: yes
19:56 dbruhn has anyone upgraded from 3.3.1 to 3.4 yet with RDMA?
19:57 T0aD oh the 2 VMs have the same hostname, maybe thats the problem.
19:57 JoeJulian Heh, that could be it.
19:58 T0aD of course thats it
19:58 * T0aD never gets it wrong.
19:58 JoeJulian Though if you're using all ip addresses, I don't know why
19:58 T0aD true true.
19:59 __Bryan__ joined #gluster
19:59 T0aD peer probe: failed: Peer gluster2 is already at a higher op-version
19:59 T0aD oh the dirty moth... !
20:03 T0aD pff i dont get it.
20:03 jclift dbruhn: Errr.... why would you want to upgrade to 3.4.x with RDMA?
20:03 jclift RDMA on 3.4 isn't properly working. :(
20:04 dbruhn well that answers my question on if I should
20:04 dbruhn sorry been out of touch for several weeks
20:04 jclift dbruhn: No worries.
20:04 dbruhn what's not working properly?
20:04 JoeJulian T0aD: Is this an upgrade or a new install?
20:04 T0aD new install
20:05 T0aD should i add a volume right away on the second node ?
20:05 dbruhn jclift, I thought there was a ton of effort going into 3.4 for IB/RDMA
20:05 jclift dbruhn: Well, I kept on hitting basic issues when trying out distributed+replicated setups.
20:05 jclift dbruhn: (With RDMA )
20:05 jclift dbruhn: Yeah, there's been a large amount of code improvements
20:05 T0aD the strace is way too long to get anything useful out of it
20:05 JoeJulian T0aD: Looks like in /var/lib/glusterd/glusterd.info one of them has a "operating-version" line. The other probably doesn't.
20:06 jclift dbruhn: However, we've hit several bugs with rdma stuff when testing, even though the tcp side seemed to not have issues at the same point
20:06 T0aD JoeJulian, ah yeah indeed
20:06 T0aD @paste
20:06 glusterbot T0aD: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
20:06 JoeJulian My guess would be that you installed an old version on one of them, started glusterd. Realized your mistake and upgraded but that file was left behind.
20:07 jclift dbruhn: So, the thought process was pretty much "k, lets get 3.4.0 out the door so non-rdma users have some goodness, and we'll keep on working on rdma for (hopefully) 3.4.1"
20:07 T0aD http://www.bpaste.net/show/ZkYkkDluCc2y9usQVXGq/
20:07 glusterbot <http://goo.gl/o0aDk> (at www.bpaste.net)
20:07 dbruhn jclift, so what is needed for this to move past it?
20:07 T0aD JoeJulian, oh my god how did you find out ?!
20:07 T0aD GET OUT OF MY BOX
20:07 JoeJulian It's the only logical progression of circumstances that I could think of to get you to that state. Elementary my dear T0aD .
20:08 jclift dbruhn: There are a couple of points to it.  Note, this is based on my opinion atm... :)
20:08 T0aD # mv /var/lib/glusterd{,.0}
20:08 T0aD how to quickly fix everything.
20:08 * JoeJulian likes that bit of laziness. :)
20:08 jclift dbruhn: Firstly, we need to get the Gluster Test Framework updated so it can test RDMA stuff.
20:08 T0aD JoeJulian, yeah well you have to know one version is adding that line while the other does not :)
20:08 T0aD good catch anyway, thanks dude
20:08 JoeJulian T0aD: I had to read the source to get me there.
20:08 T0aD good work
20:09 T0aD oh my god, i lost my volumes !
20:09 JoeJulian srsly?
20:09 T0aD (just kidding :P)
20:09 T0aD well yeah if i mv the config :P
20:09 JoeJulian :P
20:09 jclift dbruhn: That's in progress, but is kind of blocked by an upstream Linux kernel bug, that kernel panics in the fuse module.  Niels has a proposed patch to fix it:
20:09 jclift dbruhn: https://lkml.org/lkml/2013/7/15/203
20:09 glusterbot Title: LKML: Niels de Vos: [PATCH] fuse: fix occasional dentry leak when readdirplus is used (at lkml.org)
20:09 dbruhn jclift, do you have a lead on 10x cheap IB cards (DDR or QDR)? I can get my testing environment up and make it accessible
20:10 jclift dbruhn: Hmmm, which country are you in?
20:10 dbruhn USA
20:10 jclift dbruhn: Actually, I'll be lazy.  Grab this: http://justinclift.fedorapeople.org/CentOS_Dojos/2013-07-12_Aldershot/Slides-Getting_started_with_Infiniband.odp
20:10 glusterbot <http://goo.gl/3idr9> (at justinclift.fedorapeople.org)
20:10 JoeJulian We should just add rdma to those 1U former Gluster, Inc. boxes that we used at Summit.
20:10 jclift dbruhn: Er.... do you have LibreOffice?
20:11 dbruhn yeah
20:11 T0aD JoeJulian, you re working with those crazy indians ?
20:11 jclift dbruhn: I gave a presentation at a CentOS Dojo last week, and in the slides (there's not many) it has direct links to current ebay auctions with the bits. :D
20:11 JoeJulian T0aD: No, I was just at summit and got those machines configured to do a hands-on demo.
20:12 jclift JoeJulian: Yeah, if they've got a spare PCIe slot each, that would be good
20:12 JoeJulian jclift: They (more or less) do. They currently have ethernet cards in them, but they're unnecessary.
20:12 JoeJulian johnmark had them, last I knew.
20:13 T0aD JoeJulian, you lost all my attention.
20:14 jclift dbruhn: What does the postage costs to US look like for you, for these? http://www.ebay.co.uk/itm/360657396651
20:14 glusterbot Title: HP 452372-001 Infiniband PCI-E 4X DDR Dual Port Storage Host Channel Adapter HCA | eBay (at www.ebay.co.uk)
20:14 jclift dbruhn: Otherwise, just look for this model number:  MHGH28-XTC
20:14 dbruhn £3.90
20:15 jclift US$3.90?  Wow, that's pretty good
20:15 dbruhn wow used prices are nuts on us priced ones?
20:16 JoeJulian $60 refurb - http://www.serversupply.com/products/part_search/pid_lookup.asp?pid=191877&gclid=CLOK47WosrgCFa9eQgodriAAVg
20:16 glusterbot <http://goo.gl/XxKJZ> (at www.serversupply.com)
20:16 jclift dbruhn: It's just one of those things where people in "selling to corporates via ebay" price them at stupidly high levels still
20:16 jclift dbruhn: Whereas people with a tonne of them just trying to move them, sell them at low prices
20:17 dbruhn I see that
20:18 JoeJulian kkeithley: glusterd.service Wants=glusterfsd.service ?
20:18 jclift dbruhn: Cables -> http://www.ebay.co.uk/itm/251200441924
20:18 glusterbot Title: New Infiniband 10GBs 4X CX4 to CX4 Cable 1M/3.3FT SAS M/M | eBay (at www.ebay.co.uk)
20:18 kkeithley JoeJulian: ???
20:19 kkeithley Did something revert between beta4 and GA?
20:19 jclift dbruhn: I use that specific cable vendor all the time.  Had one issue where China customs delayed stuff without warning, so the vendor reposted a new lot via Fedex courier at his own expense.  Arrived in 2 days. ;)
20:19 JoeJulian Dunno... The endless saga email thread that I've been on pasted that....
20:19 jclift dbruhn: The original cables eventually turned up a couple of weeks later, and the vendor said to hang onto them as postage back was too much of a pita and I'm a good customer anyway. :D
20:21 edong23 joined #gluster
20:21 jclift JoeJulian: Oh oh... don't tell me there's an error at install time about glusterfsd.service (or similar name) being missing?
20:21 jclift JoeJulian: For the 3.4.0 rpms' that is
20:21 kedmison joined #gluster
20:22 JoeJulian No, this guy's got glusterfsd.service starting (and failing) at boot. grepping the source though, it doesn't look like it should be there.
20:23 kkeithley The glusterd.service in the git source has no reference to a glusterfsd.service
20:23 JoeJulian yeah, false alarm...
20:25 jclift Cool
20:26 JoeJulian kkeithley: Unless that's in the 3.3.1 systemd files
20:26 JoeJulian lunch time...
20:26 kkeithley 3.3.1? Or 3.3.2?
20:26 jclift dbruhn: Something kind of nifty with these cards (for anyone wanting 10GbE), is that you can set them to run in 10GbE mode instead.
20:26 JoeJulian He hasn't said.... One more reason I hate doing email support.
20:27 jclift dbruhn: Loses the IB advantages, but as a way to get cheapo 10GbE cards, it's pretty good. :D
20:27 dbruhn Is there a reasonably priced switch that will bridge to a 10GB Ethernet environment?
20:27 JoeJulian lunch time... bbl.
20:30 jclift dbruhn: Not sure.  I've not had to do that.
20:31 jclift dbruhn: People on the "Serve the Home" forums (network section, they're into this stuff), talk about a model of the Voltaire 4036 switch that does this.  The 4036E.
20:31 jclift dbruhn: Normally they do so in the tones of "waiting for a 4036E to become available at a decent price" though. ;)
20:31 jclift dbruhn: And some of the guys on there build some _very_ full on systems.
20:33 jclift dbruhn: If you get bored, this is an interesting place to spend some time: http://forums.servethehome.com/f20/
20:35 dbruhn yeah, that's the price tag I was afraid of
20:35 kkeithley JoeJulian: (when you get back from lunch),  3.3.1 and 3.3.2 get glusterd.service and glusterfsd.service from the fedora_scm repo and they're both in the 3.3.1 and 3.3.2 packages for f17, e.g.
20:38 jclift dbruhn: It's probably cheaper to run a minimal extra box as a gateway then.  Not great. :(
20:39 jclift dbruhn: Ahhh, this is one killer server: http://www.openida.com/the-dirt-cheap-data-warehouse-an-introduction/
20:39 glusterbot <http://goo.gl/boCLW> (at www.openida.com)
20:39 _pol joined #gluster
20:42 badone joined #gluster
20:47 T0aD funny
20:47 T0aD gluster volume delete doesnt remove volumes from nfs.vol file
20:57 netgaroo joined #gluster
21:02 nixpanic joined #gluster
21:02 Peanut joined #gluster
21:02 nixpanic joined #gluster
21:11 puebele1 joined #gluster
21:21 T0aD i have a funny issue: i try to create a new volume and it complains that the path is already part of the volume while its not the case
21:22 JoeJulian path or prefix
21:22 T0aD yeah
21:22 JoeJulian @path or prefix
21:22 glusterbot JoeJulian: I do not know about 'path or prefix', but I do know about these similar topics: 'path-or-prefix'
21:22 T0aD it doesnt know dude
21:22 JoeJulian @path-or-prefix
21:22 glusterbot JoeJulian: http://goo.gl/YUzrh
21:22 T0aD thx
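The linked fix amounts to clearing the gluster metadata left on the old brick path (here /home/users, the brick used later in this session) and on any parent directory the error names:

    setfattr -x trusted.glusterfs.volume-id /home/users
    setfattr -x trusted.gfid /home/users
    rm -rf /home/users/.glusterfs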
21:22 mooperd joined #gluster
21:23 JoeJulian @alias path-or-prefix "path or prefix"
21:23 glusterbot JoeJulian: The operation succeeded.
21:23 T0aD oh oh . a .glusterfs
21:23 T0aD thx dude
21:23 T0aD whats that directory for ?
21:23 JoeJulian You're welcome
21:24 JoeJulian http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
21:24 glusterbot <http://goo.gl/j981n> (at joejulian.name)
21:24 T0aD looks like hard links
21:24 andreask joined #gluster
21:24 T0aD you have an answer for everything, dont you ? :)
21:24 JoeJulian I try
21:28 rcoup joined #gluster
21:33 T0aD never played with attributes
21:35 T0aD funny still doesnt work and i removed the directory, didnt remove the attributes though but there doesnt seem to be any
21:36 bit4man joined #gluster
21:36 JoeJulian There are.
21:37 T0aD lies
21:37 JoeJulian hehe
21:37 T0aD shouldnt i see them with getfattr -d * ?
21:37 JoeJulian nope
21:37 puebele joined #gluster
21:38 T0aD really
21:38 T0aD weird.
21:38 T0aD thats terrible.
21:38 JoeJulian Not really weird. The system and trusted attributes are filtered out by default.
21:38 T0aD ah ah
21:38 JoeJulian That's why that blog article tells you EXACTLY how to do it. :P
21:38 T0aD so i should name them explicitely
21:38 netgaroo left #gluster
21:39 JoeJulian @extended attributes
21:39 glusterbot JoeJulian: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://goo.gl/Bf9Er
21:39 T0aD there is no tool to fix that ?
21:40 T0aD # file: home/users/glusterfs-3.4.0.tar.gz
21:40 T0aD trusted.gfid=0sT+SKL55QQlmf3VFWug7ISA==
21:40 T0aD oh nice
21:40 T0aD # find /home/users/ -type f -exec getfattr -n trusted.gfid  {} \;
21:40 T0aD i can see the matrix!
21:40 cicero O______________________O
21:43 T0aD thx JoeJulian
21:43 T0aD great help
21:44 T0aD i never thought that would happen on IRC
21:44 T0aD you make me believe in mankind again
21:44 JoeJulian lol
21:45 JoeJulian T0aD: It could have been you... ;) I didn't get involved until 2.0.
21:45 T0aD true
21:45 T0aD maybe you read one of my sexy articles
21:45 T0aD with the sexy graphics
21:50 T0aD JoeJulian, i have extra attributes as well (quota)
21:55 nagis I have been trying to get the fuse client working using -o acl when mounting and now I am seeing fun errors like setting xattrs on /bricks/store/files3 failed (Operation not supported) and  Extended attributes not supported (try remounting brick with 'user_xattr' flag)
21:55 nagis shouldn't this just work if my server has everything formatted with xfs?
22:03 JoeJulian My understanding is that you have to enable user_xattr for your bricks.
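A hedged way to sanity-check both ends: confirm the brick filesystem accepts extended attributes at all, and mount the client with ACL support (the brick path is the one from the error above; the volume name is a placeholder):

    # on the server, against the brick path from the error:
    touch /bricks/store/files3/.xattr-test
    setfattr -n user.test -v 1 /bricks/store/files3/.xattr-test && echo "xattrs ok"
    # on the client:
    mount -t glusterfs -o acl server:/volname /mnt/volname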
22:04 T0aD http://bpaste.net/show/W7lokeUaOxH8gUzN8vuA/
22:04 glusterbot Title: Paste #W7lokeUaOxH8gUzN8vuA at spacepaste (at bpaste.net)
22:04 T0aD here a gift for you JoeJulian
22:04 T0aD its quite horrible but hey, someday it ll probably get better !
22:04 JoeJulian T0aD: you should put that in a gist
22:05 T0aD never !
22:05 T0aD well im using my little python script to pastebin files quickly
22:06 JoeJulian Ah
22:07 JoeJulian I was just thinking that with gist, it won't get lost into obscurity.
22:07 T0aD Could not submit your paste because your paste contains spam.
22:08 T0aD ah who cares
22:08 T0aD the day i improve it i might make a better version and host it on github yeah
22:08 nagis Joe : user_xattr is not an option for mounting with xfs.. at this point I am going to try ext3
22:08 JoeJulian nagis: Interesting, I didn't realize that.
22:09 JoeJulian I wonder if the ext fix made it into 3.3.2
22:09 semiosis JoeJulian: what purpose does .glusterfs serve on a non-replicated volume?
22:10 T0aD can stat() returns extended attributes ?
22:10 JoeJulian semiosis: deletion and rename persistence
22:11 JoeJulian T0aD: Not afaik.
22:11 T0aD hmpf.
22:12 T0aD trusted.sexy.boy="toad"
22:13 * T0aD manages to harvest the secret of extended attributes.
22:14 aliguori joined #gluster
22:16 JoeJulian ** Ding! ** T0aD levels up
22:18 JoeJulian "("Glusterd and the Bricks" is the name of my Cocteau Twins cover band)" - gluster-user email... well, at least I think it's funny... :)
22:35 jag3773 joined #gluster
22:35 T0aD im trying to make a sexier version using ctypes but its hard.
22:51 jebba joined #gluster
23:17 T0aD JoeJulian, here for you: https://gist.github.com/T0aD/6004343
23:17 glusterbot Title: Sexy script to remove GlusterFS extended attributes (at gist.github.com)
23:17 JoeJulian Hehe, I feel honored. :D
23:17 T0aD now im not using os.system() anymore, much sexier.
23:17 * T0aD is touching himself as we speak.
23:19 fidevo joined #gluster
23:19 JoeJulian eww
23:22 T0aD # gluster volume create users replica 2 gluster1:/home/users gluster2:/home/users
23:22 T0aD volume create: users: failed: /home/users or a prefix of it is already part of a volume
23:22 glusterbot T0aD: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
23:22 T0aD rraahh !
23:22 * T0aD raises his arms to the sky 'Damn you gluster ! damn youuuuu !'
23:23 tg2 joined #gluster
23:28 tg2 joined #gluster
23:31 fidevo joined #gluster
23:31 sprachgenerator joined #gluster
23:45 jag3773 joined #gluster
23:52 T0aD ah jesus my script aint complete
23:52 T0aD i forgot to take care of the root directory
23:59 jag3773 joined #gluster
23:59 T0aD updated https://gist.github.com/T0aD/6004343
23:59 glusterbot Title: Sexy script to remove GlusterFS extended attributes (at gist.github.com)
