
IRC log for #gluster, 2012-10-05


All times shown according to UTC.

Time Nick Message
14:03 _ilbot joined #gluster
14:03 Topic for #gluster is now  Gluster Community - http://gluster.org | Q&A - http://community.gluster.org/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - http://www.gluster.org/interact/chat-archives/
14:04 abyss^ but if I mount a gluster volume via nfs (mount -t nfs) do I still have failover and LB? Or only replication?
14:06 moritz left #gluster
14:07 mgebbe_ joined #gluster
14:11 jdarcy abyss^: If you use NFS or SMB, you're dependent on their (weak to nonexistent) ability to detect and respond to failures.
14:11 jdarcy abyss^: Most people implement HA failover or at least RRDNS in those cases.
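
RRDNS here just means publishing one name with several A records so that NFS/SMB clients land on different servers; a minimal BIND-style zone snippet might look like the following, where the name and addresses are invented for illustration:

    ; round-robin DNS: one name, multiple A records, handed out in rotating order
    gluster    IN  A    192.0.2.11
    gluster    IN  A    192.0.2.12

This spreads new mounts across servers, but it does nothing for a client already attached to a server when that server fails, which is the point being made above.
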
14:13 stopbit joined #gluster
14:15 bullardo joined #gluster
14:15 xinkeT joined #gluster
14:19 abyss^ jdarcy: yes, but if I export CIFS shares which are mounted via the native gluster client then I should have both of these: failover and LB...
14:19 abyss^ I don't know how it is via Gluster NFS
14:21 kkeithley Behind Gluster NFS there is a gluster native client as well. It's just built into gluster. Eventually Samba will work the same way as the Gluster NFS.
14:21 jdarcy abyss^: You'll have those things within the storage cluster, but what about SMB clients connected to a failed/overloaded server?
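
To make the two access paths in this exchange concrete, a rough sketch of both mount styles; the server and volume names are made up:

    # native FUSE mount: the client talks to every brick itself, so replication,
    # failover and load balancing happen on the client side
    mount -t glusterfs server1:/myvol /mnt/myvol

    # Gluster's built-in NFS server (NFSv3): the server side handles replication,
    # and HA for the mount itself needs RRDNS, a floating IP, or similar in front
    mount -t nfs -o vers=3,tcp server1:/myvol /mnt/myvol
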
14:21 johnmark all: DNS switchover just took place this am. Please report any download, mailing list and web server errors pronto
14:21 wushudoin joined #gluster
14:23 DataBeaver joined #gluster
14:23 kkeithley Make that: eventually we'll have our own smbd that works much the same way that the Gluster NFS works.
14:24 abyss^ jdarcy: but this is another story, yes?:) The native gluster client doesn't care about that either:) I am not asking about HA :)
14:27 ndevos joined #gluster
14:29 semiosis joined #gluster
14:29 dblack joined #gluster
14:29 glusterbot joined #gluster
14:29 dblack joined #gluster
14:32 hagarth joined #gluster
14:33 JoeJulian "if I export mounted gluster mount point via samba that I still have failover and LB?" failover=HA
14:37 ngoswami_ joined #gluster
14:39 dobber joined #gluster
14:39 jdarcy abyss^: Native clients do care about that.  If one server fails they can use another just fine, transparently to the user.  That's why I and others prefer it to access via NFS/SMB.
14:40 vikumar joined #gluster
14:40 ngoswami__ joined #gluster
14:49 kkeithley I had to read that a few times to figure it out. At first I thought you were telling us you like NFS/SMB better than gluster native.
14:55 jdarcy kkeithley: Never!  ;)
14:55 jdarcy What was that phrase I used to describe NFS in email yesterday?  Oh yeah, it's a "lousy protocol" that forces all sorts of contortions to make it work properly.
14:55 BigEndianBecause joined #gluster
14:55 kkeithley I know. I was a bit surprised
14:58 mspo you prefer cifs?
14:59 kkeithley My first parsing  was "I ... prefer [the client] to access via NFS". Three rereads later I realized you meant "I ... prefer [gluster native over] access via NFS"
14:59 ramkrsna joined #gluster
14:59 ctria joined #gluster
14:59 kkeithley Sorry to muddy the waters here. I'm sure jdarcy prefers gluster native over NFS and Samba.
15:01 pdurbin people here don't like gluster's implementation of NFS or they don't like the NFS protocol at all?
15:04 neofob joined #gluster
15:08 JoeJulian From what I've encountered, nfs has always just been tolerated.
15:08 JoeJulian Not just here.
15:09 duerF joined #gluster
15:13 jdarcy pdurbin: I spent most of 1990-1992 and 1998-2002 working on NFS full time, wrote most of the pNFS-block spec, and I don't like NFS at all.
15:15 pdurbin sadness
15:15 JoeJulian heh, a familiar story.  I spent most of 1988-1992 working on Macs. I don't like Macs at all.
15:15 jdarcy pdurbin: Why sadness?
15:15 pdurbin well, good to work on something you love, i would think
15:15 pdurbin :)
15:16 jdarcy My issues with NFS might not be everyone's.  From a user perspective, it has always been unfriendly to HA/LB/scaling.  From a developer perspective, every implementation I've ever seen violates the specs in dozens of ways and the protocol makes it too easy to get away with that.
15:16 usrlocalsbin joined #gluster
15:16 jdarcy For a lot of people, neither of those matter.
15:17 pdurbin jdarcy: i have an nfs problem... when i give users sudo to root on a vm that has my home directory mounted on it, they can su to me and get at all my goodies. so i don't mount home directories. would like to use kerberos to solve this problem, i guess, per http://nfsworld.blogspot.com/2006/02/real-authentication-in-nfs.html
15:17 glusterbot Title: Eisler's NFS Blog: Real Authentication in NFS (at nfsworld.blogspot.com)
15:17 jdarcy pdurbin: Sometimes you have to do something you hate so people will let you do something else related that you love.  That was certainly the case with MPFS in 1998-2002.  There were some cool parts, but getting to them required wading through NFS.
15:18 johnmark pdurbin: yeah, I discovered that neat trick when I worked at a company that had maildirs NFS mounted
15:18 flowouf joined #gluster
15:18 flowouf hello everyone
15:19 johnmark pdurbin: that was pretty scary to know that not only was my email not entirely secure, it was wide open for the world to see
15:19 ondergetekende joined #gluster
15:19 pdurbin johnmark: yes, scary
15:19 flowouf i got a quick question about the geo-replication.indexing option
15:20 jdarcy pdurbin: At first blush, that seems like something root_squash should fix.  Am I missing something?
15:20 flowouf i dont get what it's for
15:20 flowouf coz u need to destroy the sessions before disabling it ..
15:20 pdurbin jdarcy: root is already squashed. home directories are nfs mounted from some emc something or other
15:20 jdarcy flowouf: Is that the one that keeps an index of files that have changed, so they can be re-replicated without having to scan the volume?
15:21 flowouf u mean what's in .glusterfs folder ?
15:21 pdurbin jdarcy: another guy i told this story to said the same thing. "shouldn't root squash fix that?"
15:21 jdarcy Oh right, this isn't squashing root, this is preventing su to *another* ID.  Ick.
15:21 JoeJulian @.glusterfs
15:21 pdurbin jdarcy: ick ick
15:21 flowouf but u can disable it while sessions are open xD
15:21 flowouf geo-replication.indexing cannot be disabled while geo-replication sessions exist
15:21 JoeJulian @gfid
15:21 glusterbot JoeJulian: The gfid is a uuid that's assigned to represent a unique inode that can be identical across replicas. It's stored in extended attributes and used in the .glusterfs tree. See http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/ and http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
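
As a quick illustration of what glusterbot describes, the gfid can be read straight off a brick with getfattr; the brick path and the value shown here are invented:

    # run this against a file on the brick itself, not on a client mount
    getfattr -m . -d -e hex /export/brick1/some/file
    # among the output:
    # trusted.gfid=0x5f2d...   <- the same uuid shows up again under
    #                             /export/brick1/.glusterfs/5f/2d/5f2d...
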
15:21 jdarcy Yeah, you'd need to use some sort of principal-based authentication (like Kerberos) instead of UID-based.
15:22 jdarcy I'm trying to think if our uid-mapping functionality could help with that.
15:22 flowouf "geo-replication.indexing cannot be disabled while geo-replication sessions exist
15:23 flowouf "
15:23 pdurbin jdarcy: plz help ;)
15:23 flowouf :)
15:23 pdurbin jdarcy: we're actually thinking of starting to set up individual nfs shares for each user. but... we have a lot of users ;)
15:24 jdarcy You could set things up for each mount so that $this_user on the client maps to $this_user on the servers, and anything else on the client maps into a totally made-up user-specific range.
15:24 flowouf jdarcy: i guess so
15:24 pdurbin jdarcy: in some emc config, you mean. i know very little about emc stuff
15:24 flowouf but i'm not sure
15:25 flowouf as written in the documentation, it stops the sync between master and slave(s)
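
Piecing together the error flowouf quotes, the option can apparently only be changed once no geo-replication session references the volume; a hedged sketch of that order of operations, with volume and slave names as placeholders:

    # stop (or delete) the geo-replication session first...
    gluster volume geo-replication myvol slavehost:/data/backup stop
    # ...then the indexing option can be turned off; with a session still defined
    # this is refused with "geo-replication.indexing cannot be disabled while
    # geo-replication sessions exist"
    gluster volume set myvol geo-replication.indexing off
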
15:25 jdarcy EMC would need to have similar UID-mapping/multitenancy support, and I don't know if they do.  Kaleb, still here?
15:25 kkeithley if you can su to another id on the client you're going to have access to the files on the server owned by the remapped id.
15:25 jdarcy kkeithley: Of which there would be none, in a HekaFS-like context.  ;)
15:25 pdurbin jdarcy: what if the NFS server were RHEL? easy to do the uid mapping?
15:26 jdarcy If the server were GlusterFS, yes.  If it were the RHEL in-kernel nfsd, I don't think so.
15:26 pdurbin jdarcy: interesting. ok
15:27 pdurbin jdarcy: so, talk to emc or do kerberos. i'm not sure we're going to jump on glusterfs for home directories in the near term
15:27 flowouf to jdarcy: thx
15:27 jdarcy pdurbin: If you want to explore the UID-mapping further, I'd be glad to come in and whiteboard it for you some time.
15:28 pdurbin or! or! you could /join #crimsonfu where it'll get logged
15:28 * pdurbin searches logs
15:28 JoeJulian It gets logged here. ;)
15:28 elyograg I imagine that glusterfs would have a similar vulnerability, if someone can su to your uid, they can access your files.  Am I wrong?
15:29 JoeJulian Nope, not wrong.
15:29 pdurbin jdarcy: please see http://irclog.perlgeek.de/crimsonfu/2012-08-16#i_5903234 and the links from there
15:29 glusterbot Title: IRC log for #crimsonfu, 2012-08-16 (at irclog.perlgeek.de)
15:30 pdurbin JoeJulian: i like the #crimsonfu logs better :)
15:30 JoeJulian I know, hence the winking emoticon.
15:31 pdurbin :)
15:32 jdarcy Yep, looks like Kerberos is the only "pure" NFS solution.  :(
15:32 pdurbin jdarcy: ok. that's what i thought. thanks
15:32 johnmark pdurbin: looks like #gluster and #gluster-dev will be added to the list of logged channels there soon :)
15:32 pdurbin jdarcy: but i'd be fine with leveraging some emc feature, if it exists
15:33 pdurbin johnmark: well i do declare! good job! http://irclog.perlgeek.de/gluster/2012-10-05
15:33 glusterbot Title: IRC log for #gluster, 2012-10-05 (at irclog.perlgeek.de)
15:34 pdurbin moritz is the man
15:34 johnmark heh heh
15:34 johnmark pdurbin: he is!
15:34 mohankumar joined #gluster
15:35 semiosis fyi: http://irclog.perlgeek.de/gluster/2012-10-05
15:35 glusterbot Title: IRC log for #gluster, 2012-10-05 (at irclog.perlgeek.de)
15:36 pdurbin semiosis: ^^ :)
15:37 pdurbin i copy the #crimsonfu logs to http://crimsonfu.github.com/irclogs with https://github.com/crimsonfu/crimsonfu.github.com/blob/master/bin/logfetch.pl
15:37 glusterbot Title: IRC logs (at crimsonfu.github.com)
15:37 pdurbin i really like having a local copy i can grep/ack
15:38 pdurbin that is to say, they're in our git repo: https://github.com/crimsonfu/crimsonfu.github.com/tree/master/irclogs
15:38 glusterbot Title: crimsonfu.github.com/irclogs at master · crimsonfu/crimsonfu.github.com · GitHub (at github.com)
15:39 kkeithley I wish I could get the client_t stuff through gerritt. Once that happens then I can start working on adding the HekaFS uidmap xlator to gluster.
15:42 kkeithley I'm about ready to start updating it every hour just to keep it on the top of the gerritt review stack.
15:46 * kkeithley notices after all this time that there's only one 't' in Gerrit
15:49 mohankumar joined #gluster
15:53 blendedbychris joined #gluster
15:53 blendedbychris joined #gluster
16:02 xinkeT joined #gluster
16:16 rz___ heyo
16:16 rz___ any advice? -> http://pastie.org/4915458
16:16 glusterbot Title: #4915458 - Pastie (at pastie.org)
16:26 Mo___ joined #gluster
16:26 chandank|work JoeJulian, thanks for posting that article on your blog.  http://joejulian.name/blog/nfs-mount-for-glusterfs-gives-better-read-performance-for-small-files/
16:26 sashko joined #gluster
16:26 glusterbot Title: NFS mount for GlusterFS gives better read performance for small files? (at joejulian.name)
16:27 JoeJulian You're welcom. Glad it was useful.
16:27 JoeJulian s/welcom/welcome/
16:27 glusterbot What JoeJulian meant to say was: You're welcome. Glad it was useful.
16:28 JoeJulian rz___: The useful bits would probably be what's in the glusterd logs with regard to the abort.
16:32 mohankumar joined #gluster
16:35 elyograg is it a big problem to have unequal size bricks in a volume?  What I had envisioned for my cluster was starting off with disks of say 3TB, then later adding 4TB disks.  We even have a large stockpile of 1TB disks.  It just occurred to me that this may be a problem, because if everything is distributed equally, the smaller disks may fill up.  If a brick fills up, can the system allocate new data to other bricks, or will it fail if the filename hashin
16:37 JoeJulian elyograg: It will, it'll just be less efficient. Additionally, if a file grows that resides on the smaller disk, it won't be moved, so it will fill the disk.
16:38 JoeJulian I can't believe 3TB is only $150. I remember when 10MB was more than twice that.
16:39 elyograg JoeJulian: hmm.  ok, so if I have file growth on existing files, that could be a worry.  but if I were to always have a drive bay available, and worked on doing replace-brick with larger drives before any bricks got too full, I could possibly stay ahead of it.
16:39 JoeJulian Yes
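
The stay-ahead-of-it approach elyograg describes maps onto the replace-brick workflow; roughly, with the volume name, server and brick paths as placeholders:

    # migrate data from the small brick to the new, larger one
    gluster volume replace-brick myvol server1:/bricks/1tb server1:/bricks/3tb start
    # poll until the migration reports complete
    gluster volume replace-brick myvol server1:/bricks/1tb server1:/bricks/3tb status
    # make the swap permanent
    gluster volume replace-brick myvol server1:/bricks/1tb server1:/bricks/3tb commit
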
16:40 geggam joined #gluster
16:41 noob2 joined #gluster
16:41 elyograg JoeJulian: I don't think enterprise drives can be had for $150.  The powers that be want to use desktop drives for half of the initial deployment.  I can't get them to understand that desktop drives are badly badly tuned for this sort of thing.
16:41 noob2 has anyone noticed a slowdown in performance using xfs as you add more files to the gluster?
16:41 noob2 for example, ls , or tab completion seems to be slowing down as i add more and more files to the cluster
16:42 JoeJulian Oh, I know... Just bought one for my media server though. I don't mind the lower-end drives for that.
16:43 JoeJulian noob2: More files in a single directory will slow down ls. Not sure how the tab completion works though, so not sure about that one.
16:43 elyograg noob2: is the new data nicely separated into its own directories, or are you just putting more files in existing directories?  Lots of files in a directory goes slow on any filesystem that I've ever tried.
16:44 rz___ JoeJulian: I completed my pastie http://pastie.org/4915458
16:44 noob2 the files are separated into directories.  there's some that have a few thousand but that's the max i've seen so far
16:44 glusterbot Title: #4915458 - Pastie (at pastie.org)
16:44 rz___ JoeJulian: I can see : a staging failed
16:44 noob2 elyograg: hard to say, i'm going to turn on some volume profiling to see what it is doing.  iostat shows a TPS of anywhere from 150-900
16:45 JoeJulian rz___: And that was for an ABORT?
16:45 rz___ I guess it was for a 'start' command
16:45 JoeJulian I can see I'm going to need my espresso... bbiab...
16:45 rz___ I'm searching for the abort error
16:47 * JoeJulian grumbles about PCI testing... still...
16:47 rz___ pastie updated
16:48 seanh-ansca joined #gluster
16:48 rz___ it fails prolly cause of that : I [glusterd-replace-brick.c:1231:rb_update_srcbrick_port] 0-: adding src-brick port no
16:48 noob2 elyograg: volume profiling shows about 53% latency in STATFS and 46% in LOOKUP
16:50 rz___ JoeJulian: can it be a wrong port number ?
16:52 rz___ https://bugzilla.redhat.com/show_bug.cgi?id=822338 seems prolly related
16:52 glusterbot Bug 822338: urgent, unspecified, ---, kparthas, VERIFIED , Replace-brick status fails after the start operation.
16:52 elyograg noob2: you're getting into low-level specifics that don't mean much to me.  I'd love to be able to take that information and tell you something useful, but I can't.
16:53 noob2 that's ok :)
16:53 noob2 i let the profile run a little longer and 80% of the latency is being caused by mkdir
16:53 noob2 so i think that file copy i have going on is causing it
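
For anyone wanting to reproduce what noob2 is looking at, the profiling commands are roughly as follows; the volume name is a placeholder:

    gluster volume profile myvol start   # begin collecting per-brick statistics
    gluster volume profile myvol info    # dump the fop/latency breakdown (LOOKUP, STATFS, MKDIR, ...)
    gluster volume profile myvol stop    # stop collecting when done
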
17:00 mdarade left #gluster
17:03 elyograg does anyone here have experience with desktop-class drives in server situations that really should be using enterprise drives?
17:05 elyograg I'm curious whether the long error timeout characteristics have ever bitten anyone and caused service outages.
17:08 kkeithley Dunno. When I worked at the big Three Letter Storage company on their object-storage system, it used "commodity hardware." Every node had four IDE/PATA drives. (SATA came later) I don't remember ever hearing about long error timeouts as being a problem.
17:08 kkeithley Lots of other problems, but long timeouts weren't one of them AFAIK
17:09 noob2 wow.. running the volume status inode command pegged my cpu's to the roof haha
17:12 noob2 my gluster is going into meltdown
17:12 noob2 is that normal after running that command? lol
17:14 noob2 anyone know how to kill that command?
17:16 noob2 looks like /etc/init.d/glusterd stop won't stop them
17:20 JoeJulian killall -9 glusterd (on all your servers)
17:20 JoeJulian Then you can start glusterd again. This is a known bug and the fix will be in 3.3.1
17:20 noob2 ok
17:21 noob2 wow that is a nasty bug
17:21 JoeJulian I know. It even stops all file access.
17:21 noob2 exactly
17:21 noob2 everyone started reporting their filesystem was frozen haha
17:21 * JoeJulian wasn't a happy camper when he found that.
17:22 johnmark can anyone verify if they've received email from gluster-users today?
17:22 JoeJulian yep
17:22 johnmark cool
17:22 johnmark some mail servers are rejecting mail because we don't have a PTR record at the moment
17:22 JoeJulian Huh... I'm surprised that I didn't then.
17:23 JoeJulian Well, the last one I received was yours at 7:25.
17:23 noob2 JoeJulian: that fixed it.  I'll make a note not to run that again haha.
17:26 _djs joined #gluster
17:27 johnmark JoeJulian: crap
17:27 johnmark ok
17:31 bulde joined #gluster
17:32 noob2 JoeJulian: is 3.3.1 supposed to drop soon?  i saw things in august about it being in QA
17:34 elyograg When you do a replace-brick, does gluster only copy the data from the old to the new brick?  I just want to be sure it's not going to also be copying from other bricks, which would slow the process down.  Also, what happens with new data while it's happening?  Does it get written to both, or just the new brick?
17:34 kkeithley noob2: Six months from 3.3.0 will be in November sometime.
17:34 gbrand_ joined #gluster
17:34 noob2 cool
17:34 vikumar__ joined #gluster
17:34 noob2 that's not too far off now
17:35 noob2 just in time for my birthday :D
17:38 noob2 kkeithley: question about the gluster fuse client
17:39 noob2 am i correct in thinking that the gluster fuse client is writing to both bricks on different machines at the same time?  whereas with nfs it puts the network load back on the servers to handle replication?
17:41 Nr18 joined #gluster
17:42 rz___ JoeJulian: any advice ? :|
17:47 kkeithley the client writes sequentially, I believe, to each replica-set in turn when you're using native fuse mounts.
17:47 tryggvil_ joined #gluster
17:48 kkeithley when you use NFS, the gluster nfs server is, in turn, a client itself; it writes to each replica-set.
17:48 kkeithley make sense?
17:51 kkeithley you can look at the vol files for the client. At the "bottom" you'll see a pair (or more) of protocol/client xlators that reference your replica bricks.
17:52 JoeJulian elyograg: I'm pretty sure that it turns the new brick into a replica of the old brick then runs a self-heal against that. The clients will treat that as a replica brick so they'll both receive updates until the "commit"
17:52 kkeithley Then look at the nfs.vol file on your server. At the "top" is the nfs server xlator. At the bottom you'll see one, two, or more protocol/client xlators that talk to your gluster bricks. Usually the first one references the "local" brick.
17:53 kkeithley HTH
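
A trimmed-down sketch of the protocol/client xlators kkeithley is pointing at, as they might appear in a client vol file for a 1x2 replicated volume; all names and paths here are invented:

    volume myvol-client-0
        type protocol/client
        option remote-host server1
        option remote-subvolume /export/brick1
    end-volume

    volume myvol-client-1
        type protocol/client
        option remote-host server2
        option remote-subvolume /export/brick1
    end-volume

    # the replicate xlator sits on top of both clients and sends writes to each
    volume myvol-replicate-0
        type cluster/replicate
        subvolumes myvol-client-0 myvol-client-1
    end-volume
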
17:54 pdurbin kkeithley: oh, you're the Kaleb jdarcy was talking to. sorry, i didn't realize you were talking root squash too
17:54 * pdurbin scrolls
17:54 JoeJulian rz___: Any advice at all? "It's okay to date nuns, as long as you don't get into the habit."
17:54 kkeithley ;-)
17:55 kkeithley I'll start boiling the spaghetti
17:55 elyograg JoeJulian: always preferable to let multiple features share a code path.  that sounds pretty awesome.
17:56 JoeJulian pdurbin: Careful about using that word. Bethesda likes to sue people for using "scrolls".
17:56 kkeithley s/;-)/;-) yes, that's me/
17:56 glusterbot kkeithley: Error: '/^(?!s([^A-Za-z0-9\\}\\]\\)\\>\\{\\[\\(\\<\\\\]).+\\1.+[ig]*).*;-).*/' is not a valid regular expression.
17:56 kkeithley glusterbot ftw
17:56 JoeJulian hah
17:56 rz___ JoeJulian: I'm sad, my cluster is in an error state and I can't find how to reset or abort the replace-brick status
17:57 kaisersoce joined #gluster
17:57 rz___ :[
17:57 JoeJulian rz___: Have you tried restarting all your glusterd daemons?
17:58 rz___ JoeJulian: yes multiple times
17:59 elyograg I'm curious about monitoring.  Specifically, which gluster commands would be ok to run once a minute, and which might be better to run every five minutes, once an hour, once a day, and once a week.
18:00 JoeJulian rz___: Can I see "gluster volume status backup" please?
18:00 kaisersoce gents, I've installed gluster on two test servers, added a 2TB brick to each, mkfs-xfs'd them, but when I try to join them using "gluster peer probe" I get Probe unsuccessful
18:00 kaisersoce Probe returned with unknown errno 107 - I'm using DNS and verified it all works fwd and reverse lookups
18:00 kkeithley kaisersoce: firewall?
18:01 kaisersoce nope, they are connected to the same switch, a cisco 3750 Catalyst
18:01 kkeithley firewall as in iptables.
18:02 rz___ JoeJulian: http://pastie.org/4915458
18:02 glusterbot Title: #4915458 - Pastie (at pastie.org)
18:03 kaisersoce kkeithley: ok, first post, and already feel the fool - I'd chkconfigged iptables off, but it was running on one of the servers. Probe was successful.
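
For reference, the iptables openings usually needed for a 3.3-era probe and mount look roughly like this; the brick range below assumes one brick per server counting up from 24009, so adjust it to your layout:

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management
    iptables -A INPUT -p tcp --dport 24009:24010 -j ACCEPT   # brick ports, one per brick from 24009
    iptables -A INPUT -p tcp --dport 38465:38467 -j ACCEPT   # gluster NFS server
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # portmapper, needed by NFS clients
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
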
18:03 Nr18 joined #gluster
18:12 JoeJulian rz___: Can you start glusterd with --log-level=DEBUG and try aborting again? Should tell us a bit more about why it's failing.
18:13 rz___ JoeJulian: I do it right now
18:19 rz___ pastie updated
18:25 johnmark JoeJulian: please confirm if you got the most recent email via gluster-users
18:25 johnmark semiosis: you, too
18:25 johnmark pretty please :)
18:26 borei joined #gluster
18:26 JoeJulian johnmark: got it
18:27 johnmark *whew*
18:27 rz___ JoeJulian: 0-management: Abort operation failed / 0-management: Commit failed
18:27 JoeJulian rz___: "getfattr -m . -d -e hex /export/vol1" for both node3 and 4
18:29 sr71 joined #gluster
18:31 rz___ heya
18:31 rz___ JoeJulian: pastie updated
18:31 rz___ now I finally know where the replace-brick info is stored!
18:31 JoeJulian me too
18:32 rz___ I didn't know it was an attr
18:32 rz___ did u think I can try remove it safely ?
18:33 JoeJulian So, theoretically you should be able to setfattr -x them, yeah.
18:34 JoeJulian I don't have trusted.afr.archives-io-threads, trusted.afr.archives-replace-brick, or trusted.glusterfs.pump-path
18:34 JoeJulian so I'm *guessing* that those are all part of replace-brick.
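
In other words, the guess on the table is to strip the replace-brick-looking attributes by hand on node3 and node4 and restart glusterd; something like the following, reusing the names JoeJulian lists above, and very much at-your-own-risk:

    # on both node3 and node4
    setfattr -x trusted.glusterfs.pump-path /export/vol1
    setfattr -x trusted.afr.archives-replace-brick /export/vol1
    # trusted.afr.archives-io-threads may belong to the same leftover state
    service glusterd restart
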
18:35 y4m4 joined #gluster
18:39 bennyturns joined #gluster
18:41 johnmark pdurbin: are you on gluster-users?
18:43 lkoranda joined #gluster
18:48 johnmark Technicool: ping
18:54 Daxxial_ joined #gluster
19:07 neofob joined #gluster
19:09 neofob so i have this WD 3TB drive; if i put it along with two other 1 & 2TB drives then my linux box doesn't recognize the 3TB
19:09 neofob but if i unplug the 1 and 2TB then it can recognize the 3TB, esata connection; what is the possible cause of this?
19:10 neofob the hd enclosure is Mediasonic Probox 4bay esata usb 3.0
19:14 nightwalk joined #gluster
19:16 rz___ neofob: the topic of this channel is not (WD SUPPORT) :)
19:17 lh joined #gluster
19:17 lh joined #gluster
19:17 neofob rz___: well, i'm building my glusterfs with them
19:18 rz___ it's still not related :>
19:20 plarsen joined #gluster
19:22 JoeJulian Maybe not, but there's still a lot of us that know hard drives. It's not unreasonable to ask a bunch of like-minded professionals.
19:23 JoeJulian It's far less off-topic than our discussion of Technicool's mullet last night.
19:30 semiosis neofob: two suggestions for where else to look... does another OS recognize all three drives at once?  contact mediasonic support or check the manual, forums, etc for info about that.
19:31 semiosis johnmark: not on gluster-users
19:32 neofob semiosis: the OS can only recognize all three with USB 3.0 connection, not with esata; i suspect that it's the port multiplier issue w/ mediasonic
19:32 neofob thanks for the tip though
19:32 semiosis i've had better luck with usb than esata myself
19:33 semiosis esata sounded like a good idea but i've not been impressed with it in practice
19:33 * semiosis is not impressed
19:34 JoeJulian @meh
19:34 glusterbot JoeJulian: I'm not happy about it either
19:34 semiosis hmmm... an idea!
19:35 semiosis @learn impressed as /me is not impressed
19:35 glusterbot semiosis: The operation succeeded.
19:35 semiosis ,,(impressed)
19:35 glusterbot /me is not impressed
19:35 semiosis oh well
19:35 semiosis @forget impressed
19:35 glusterbot semiosis: The operation succeeded.
19:36 JoeJulian @learn impressed as [do is not impressed.]
19:36 * glusterbot is not impressed.
19:36 JoeJulian @impressed
19:36 JoeJulian hmm, not quite...
19:36 semiosis ,,(impressed)
19:36 glusterbot semiosis: Error: No factoid matches that key.
19:37 semiosis ,,(impressed)
19:37 glusterbot semiosis: Error: No factoid matches that key.
19:37 semiosis what the
19:37 JoeJulian @impressed
19:37 semiosis oh well
19:37 JoeJulian @forget impressed
19:37 glusterbot JoeJulian: Error: There is no such factoid.
19:37 * JoeJulian boggles
19:37 semiosis !!!!
19:38 semiosis @[do boggles]
19:38 glusterbot semiosis: Error: You must be registered to use this command. If you are already registered, you must either identify (using the identify command) or add a hostmask matching your current hostmask (using the "hostmask add" command).
19:38 * semiosis gbtw
19:38 semiosis sorry 'bout the botspam
19:38 JoeJulian @mp add "^@impressed" "do is not impressed."
19:38 glusterbot JoeJulian: The operation succeeded.
19:39 JoeJulian @impressed
19:40 semiosis johnmark: looking at the g-u archives page, to which message were you referring?
19:41 TheHaven joined #gluster
19:42 JoeJulian He was just trying to confirm that the messages were getting delivered.
19:42 wN joined #gluster
19:42 johnmark JoeJulian: +1
19:42 * glusterbot tightens his screws.
19:43 johnmark semiosis: there were a couple of messages sent to gluster-infra and gluster-users
19:43 semiosis ah, ok
19:44 semiosis johnmark: not sure if you can do anything about this but the first result in a google search for "gluster-users" is broken
19:44 johnmark gah
19:44 * johnmark needs to put in a 301 redirect
19:45 semiosis sounds good
20:03 kaisersoce After a "gluster peer detach <HOST>" command, how can that peer be re-attached?
20:06 semiosis probe?
20:13 kaisersoce says it's already there: Probe on host az-gluster02 port 24007 already in peer list
20:22 JoeJulian kaisersoce: Could it be that the hostname az-gluster02 might be resolving to an ip address that's already a peer?
20:23 kaisersoce nope, a reboot seemed to fix it….
20:24 JoeJulian Ah, the Microsoft solution.
20:26 kaisersoce however, I'm coming into an issue - once I add a "brick" to a volume, then remove it, I have to re-format or it says that the mount-point "or a prefix of it is already part of a volume" - I have no volumes set up. Am I destroying these volumes incorrectly? Today is my first day with it, sorry for the questions - but really appreciate the promptness
20:26 glusterbot kaisersoce: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
20:26 faizan joined #gluster
20:27 JoeJulian GlusterFS errors on the side of data preservation.
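
From memory, the cleanup that blog post walks through amounts to clearing the volume-related xattrs and the .glusterfs directory on the brick before reusing it; a hedged sketch, with the brick path as a placeholder (check the post itself for the authoritative steps):

    setfattr -x trusted.glusterfs.volume-id /bricks/brick1
    setfattr -x trusted.gfid /bricks/brick1
    rm -rf /bricks/brick1/.glusterfs
    service glusterd restart    # or /etc/init.d/glusterd restart
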
20:29 kaisersoce as it should. The fix didn't do it...
20:29 semiosis errs
20:29 kaisersoce there is no .glusterfs dir
20:30 semiosis kaisersoce: what glusterfs version are you using?
20:30 semiosis distro?
20:31 semiosis oh n/m, 3.3.0 obvs
20:33 JoeJulian Ok, so far everyone that's said it doesn't work hasn't followed directions completely. That's not an accusation, just an observation. If there's an error with those instructions, feel free to tell me and I'll correct them.
20:33 kaisersoce 3.3, yep. I found my issue, and I think I need to stop for the day. Fat fingers and glassy eyes!
20:33 JoeJulian :)
20:34 semiosis what'd you call me?
20:34 semiosis hehe
20:34 JoeJulian hehe
20:34 kaisersoce This is nicer, I think, than zfs, and even tho there is a port for linux, I'd prefer a home-grown system.
20:34 JoeJulian semiosis: Not sure if you're "Fat Fingers" or "Glassy Eyes".
20:57 bennyturns joined #gluster
21:11 tc00per Hi guys... is there a howto somewhere on adding a new host pair to a replica 2 config to 'double' capacity?
21:13 elyograg tc00per: gluster volume add-brick volname server1:/brick/path server2:/brick/path
21:13 elyograg tc00per: then it would be a good idea to rebalance when you will be least affected by the I/O storm.
21:14 tc00per elyograg: check... and this would 'migrate' the gluster cluster (better term?) from simple replicated to distributed-replicated right...?
21:24 elyograg tc00per: I haven't actually tried it, as I've only ever set up distributed-replicated, but if there's any logic to this, that's what would happen.  so far everything has seemed pretty logical.
21:24 H__ is there a way to throttle the rebalance IO ?
21:29 tc00per elyograg: I have read the steps and understand 'in principle'. I will try to test but wondered if the entire process had been summarized in a 'howto' yet. Trying to visualize the 'grow' process as we move from 2x18TB to 4x36TB to 8x72TB. Our data needs will always grow and grow and grow... good for the HD/Chassis vendors I guess.
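
Putting elyograg's two steps together, growing a replica 2 volume into distributed-replicated looks roughly like this; names are placeholders, and bricks have to be added in multiples of the replica count:

    # add a new replica pair; the volume becomes distribute-replicate
    gluster volume add-brick myvol server3:/bricks/b1 server4:/bricks/b1
    # then spread the existing data onto the new bricks when the I/O storm is acceptable
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status
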
21:33 tc00per Any plans to implement finer grained quota management than by volume/directory?
21:34 elyograg H__: A google search doesn't turn anything up for throttling I/O.  I'm still very new to all this, and I don't have a production volume yet.  I am still working on my testbed, testing out failure scenarios and migration scenarios that I might encounter.
21:41 moxie joined #gluster
21:42 tc00per jdarcy: Are the Translator 101 tutorials still available?
21:44 moxie woo finally get to use glusterfs
21:45 moxie have two servers with two sas drives to make a redundant nfs share
21:45 moxie but open-e wont let you put the OS on the storage volume and raid wont let me chop it up
21:45 atrius so... gluster on top of iscsi shared between two machines with VM images on them... terrible idea or no?
21:45 seanh-ansca joined #gluster
21:50 moxie how does redundancy and failover work on this?
21:50 JoeJulian tc00per: hekafs.org should have the tutorials still
21:52 JoeJulian moxie: Not familiar with open-e. The fuse client connects directly with every server that's part of the volume. If you use replication, then the redundancy is built-in that way.
21:52 badone_ joined #gluster
22:01 JoeJulian atrius: Not "terrible" per se, but networked storage over networked storage doesn't sound like the optimum efficiency. If it suits your needs, however, then more power to you.
22:38 tc00per JoeJulian: Thanks
22:49 rferris joined #gluster
23:09 sashko joined #gluster
23:13 bullardo joined #gluster
23:39 Daxxial_ joined #gluster
23:40 penglish joined #gluster
23:43 tripoux joined #gluster
23:46 tryggvil joined #gluster
23:51 Nr18 joined #gluster
