IRC log for #gluster, 2012-11-02


All times shown according to UTC.

Time Nick Message
00:00 TSM eventually i'm sure there will be a gui
00:00 nodots1 joined #gluster
00:00 nodots1 left #gluster
00:00 TSM im sure a webmin module would not be too hard
00:01 NuxRo TSM: true, but still, I think asking people who just want some gui to install oVirt is unreasonable
00:03 TSM well most peeps using gluster will understand the cli
00:04 NuxRo TSM: very true and i am perfectly comfortable with the cli, just saying a decent webmin module (or any reasonable gui) might go a long way for popularising glusterfs
00:04 Jippi joined #gluster
00:04 NuxRo the idea with glusterfs, and it's certainly the idea i get, is that you don't need to be a rocket scientist to run it
00:04 NuxRo gluster peer probe, create volume, voila!
00:05 JoeJulian I won't want to support gui users. When they had the gui before, some real morons thought they could design systems.
00:06 NuxRo hehe
00:06 NuxRo look at computers, when they started building graphical interfaces all sorts of idiots started using computers :-))
00:07 JoeJulian I don't support them either. I have people for that.
00:07 NuxRo TSM: any chance you were today at the Dev Day on the South Bank?
00:07 JoeJulian I don't mind hanging out here and helping people because for the most part, we get higher-end sysadmins that have a clew.
00:08 NuxRo JoeJulian: i understand..
00:08 JoeJulian (spelled clew because I've been reading Sir Arthur Conan Doyle lately)
00:08 TSM which south bank
00:09 TSM london
00:09 NuxRo TSM: http://gb.redhat.com/about/events-webinars/events/developer-day-london-2012
00:09 glusterbot Title: Red Hat | Red Hat Developer Day comes to London (at gb.redhat.com)
00:10 TSM unfortunately not, am too busy at the mo at work
00:10 TSM head down in code
00:10 NuxRo maybe next time
00:10 NuxRo or just go for beers sometimes
00:11 TSM reminds me i should get my rhel cert done, just lacking some of the bits it requires as ive no need for it
00:11 NuxRo yeah, should do the same, while I'm still young :>
00:12 JoeJulian I should try that too. I've never been much of a believer in certs but it might be fun.
00:12 TSM :p
00:12 TSM its all that NIS stuff they want, never got around to learning it much
00:12 NuxRo they still want that? omg
00:13 TSM when i last looked
00:13 TSM was a while ago
00:13 TSM i guess now prob ldap
00:13 TSM i hate ldap
00:13 NuxRo ldap makes sense
00:13 NuxRo yeah, it can be awkward
00:13 TSM prob i have is that i work in a small it dept and deal with so much i cant spend time on specifics
00:14 NuxRo i know how it is.. been there
00:14 TSM i'd say im not too bad, considering, mostly php/perl code now
00:14 TSM play with asterisk
00:15 TSM mysql/sphinx
00:15 NuxRo oh, perl! so you could write that webmin module
00:15 NuxRo would look nice on CV ;-)
00:15 TSM next on the agenda is gluster
00:15 TSM :p
00:15 TSM haa
00:15 TSM erm i dont use webmin, the only install we have of it was not done by me and was to manage iptables on a remote server
00:16 NuxRo okay :-)
00:16 TSM and im too busy
00:16 TSM :p
00:16 NuxRo well chaps, I'll call it a day, been up since 6
00:17 TSM laters
00:17 NuxRo talk to you tomorrow
00:17 NuxRo night
00:19 blendedbychris joined #gluster
00:19 blendedbychris joined #gluster
01:05 nodots joined #gluster
01:12 bala joined #gluster
01:20 erik49_ joined #gluster
01:29 zhashuyu joined #gluster
01:43 nodots1 joined #gluster
01:54 kevein joined #gluster
01:59 daddmac1 left #gluster
02:27 blendedbychris joined #gluster
02:27 blendedbychris joined #gluster
02:50 sunus joined #gluster
02:53 seanh-ansca joined #gluster
02:54 nodots joined #gluster
03:25 Bullardo joined #gluster
03:32 bharata joined #gluster
03:44 anti_user joined #gluster
03:44 anti_user hello all
03:52 anti_user i want to know the difference between Lustre and gluster
03:53 anti_user i understand that Lustre is a kernel filesystem, but what is the architecture difference?
03:53 pdurbin anti_user: gluster runs in user space so you don't have to recompile with every kernel update
03:54 anti_user i understand that
03:55 anti_user but i want to know the difference in how those filesystems work
03:55 anti_user what gluster can do that Lustre cannot
03:56 nhm joined #gluster
04:01 hagarth joined #gluster
04:17 vpshastry joined #gluster
04:36 ika2810 joined #gluster
04:37 vimal joined #gluster
04:39 jays joined #gluster
04:41 faizan joined #gluster
04:46 faizan joined #gluster
05:08 JoeJulian anti_user: I know I don't compare them. When I last looked at lustre, it had separate metadata, stored files in blocks spread all over different servers, didn't have any reliable redundancy methods and, worst of all, is owned by a litigious company that patent trolls.
05:10 anti_user thank you Joe
05:10 anti_user i want to know more about " didn't have any reliable redundancy methods"
05:13 Humble joined #gluster
05:17 faizan joined #gluster
05:21 bala joined #gluster
05:32 sripathi joined #gluster
05:36 glusterbot New news from newglusterbugs: [Bug 871987] Split-brain logging is confusing <https://bugzilla.redhat.com/show_bug.cgi?id=871987>
05:59 ankit9 joined #gluster
06:02 raghu joined #gluster
06:07 sripathi joined #gluster
06:08 blendedbychris joined #gluster
06:08 blendedbychris joined #gluster
06:12 sunus joined #gluster
06:12 mdarade joined #gluster
06:17 mohankumar joined #gluster
06:18 shylesh joined #gluster
06:20 mohankumar joined #gluster
06:29 anti_user as i understand it, gluster cannot write more data than the free space of one node out of the 6 nodes
06:47 ramkrsna joined #gluster
06:47 ramkrsna joined #gluster
06:51 overclk joined #gluster
06:55 ngoswami joined #gluster
07:06 pkoro joined #gluster
07:06 mdarade left #gluster
07:16 rgustafs joined #gluster
07:23 lkoranda joined #gluster
07:30 ekuric joined #gluster
07:36 glusterbot New news from newglusterbugs: [Bug 842549] getattr command from NFS xlator does not make hard link file in .glusterfs directory <https://bugzilla.redhat.com/show_bug.cgi?id=842549> || [Bug 835034] Some NFS file operations fail after upgrading to 3.3 and before a self heal has been triggered. <https://bugzilla.redhat.com/show_bug.cgi?id=835034> || [Bug 871986] [RFE] Striping should have optional parity/checksum <https://bug
07:43 JoeJulian anti_user: I don't understand your last statement. It's possible for a GlusterFS volume to be as large as around 8 brontobytes.
07:56 shireesh joined #gluster
07:59 sshaaf joined #gluster
08:06 glusterbot New news from newglusterbugs: [Bug 872490] "remote operation failed: No such file or directory" warning messages reported continuously in bulk around 4839 times <https://bugzilla.redhat.com/show_bug.cgi?id=872490>
08:10 Nr18 joined #gluster
08:15 ndevos 20:26 < JoeJulian> ndevos: regarding wireshark. A presentation that shows several connectivity problems and how they can be diagnosed would probably be good. When I  use it I'm usually trying to isolate some bug I've found.
08:15 ndevos thanks :)
08:16 hagarth joined #gluster
08:25 JoeJulian ndevos: When you do create slides, could you try doing them in showoff? I'd like to put together a collection of presentations that can be collaboratively edited.
08:25 JoeJulian https://github.com/joejulian/intro-to-glusterfs.showoff
08:25 glusterbot Title: joejulian/intro-to-glusterfs.showoff · GitHub (at github.com)
08:27 ndevos JoeJulian: hmm, I'll check it out, but it needs to be convertable to .odp too
08:27 JoeJulian why?
08:28 ndevos I'll probably use it for an internal training too, and we use brainshark.com for that
08:34 JoeJulian Ah, brainshark accepts pdf as well and showoff does do pdf dumps.
08:35 JoeJulian ndevos: If you scrape together any spare time, could you give that a try and see if it works?
08:35 overclk joined #gluster
08:36 mdarade joined #gluster
08:36 mdarade left #gluster
08:37 ndevos JoeJulian: it looks promising, and has pdf output, I'll see if I can find some extra time to try it out
08:41 TheHaven joined #gluster
08:43 sunus hi, how do i monitor the status of glusterfs?
08:43 sunus from one node
08:45 ramkrsna joined #gluster
08:46 sunus like who is connecting or something like that
08:46 sunus i am running glusterfsd --debug, but it's kinda weird
08:51 cyberbootje joined #gluster
08:55 puebele joined #gluster
08:58 Triade joined #gluster
09:08 sunus another strange thing happened: after mounting a stripe volume and copying a 500M file into the mount point, the copy failed
09:08 sunus then network is down..
09:11 Jippi joined #gluster
09:15 puebele1 joined #gluster
09:26 hagarth joined #gluster
09:34 NuxRo http://supercolony.gluster.org/pipermail/gluster-users/2012-November/034655.html <- nice! I wonder where the SRPM is (would like to read the changelog, what is new)
09:34 glusterbot Title: [Gluster-users] glusterfs-3.4.0qa2 released (at supercolony.gluster.org)
09:36 NuxRo kkeithley: ping?
09:36 bulde joined #gluster
09:36 ndevos NuxRo: you can download the .tar.gz and run: rpmbuild -ts glusterfs-3.4.0qa2.tar.gz
09:37 ndevos at least, that should work
09:38 NuxRo ndevos: thanks, I don't want to rebuild, I trust kkeithley (famous last words :D)
09:38 NuxRo i just want to know what's new in this build
09:38 NuxRo i see 3.4 and it itches :)
09:39 NuxRo i see there's a ChangeLog file in the archive, hope fully it's up to date
09:41 Azrael808 joined #gluster
09:43 frakt joined #gluster
09:43 sunus why did the client go down when it failed to copy a file to a stripe volume?
09:44 sunus client's network went down
09:46 ankit9 joined #gluster
09:54 DaveS_ joined #gluster
10:17 bulde joined #gluster
10:25 mdarade2 joined #gluster
10:25 mdarade2 left #gluster
10:29 vpshastry joined #gluster
10:34 Kmos joined #gluster
10:36 vpshastry_ joined #gluster
10:40 gbrand_ joined #gluster
10:56 ctria joined #gluster
10:59 jays joined #gluster
11:07 hagarth joined #gluster
11:08 bulde joined #gluster
11:11 dobber joined #gluster
11:17 ankit9 joined #gluster
11:27 mgebbe_ joined #gluster
11:33 ika2810 left #gluster
11:35 rosco joined #gluster
11:46 manik1 joined #gluster
11:49 hagarth joined #gluster
11:55 balunasj joined #gluster
12:04 mdarade joined #gluster
12:07 glusterbot New news from newglusterbugs: [Bug 847619] [FEAT] NFSv3 pre/post attribute cache (performance, caching attributes pre- and post fop) <https://bugzilla.redhat.com/show_bug.cgi?id=847619>
12:08 rgustafs joined #gluster
12:30 HeMan joined #gluster
12:32 ankit9 joined #gluster
12:33 shireesh joined #gluster
12:35 JoeJulian wtf? why will it mount from the command line but coredumps from mount.glusterfs?
12:37 JoeJulian Ah, right... LD_LIBRARY_PATH.
12:37 glusterbot New news from newglusterbugs: [Bug 847626] [FEAT] nfsv3 cluster aware rpc.statd for NLM failover <https://bugzilla.redhat.com/show_bug.cgi?id=847626>
12:37 Alpinist joined #gluster
12:38 mdarade left #gluster
13:16 hagarth joined #gluster
13:28 aliguori joined #gluster
13:44 bennyturns joined #gluster
13:52 JoeJulian file a bug
13:52 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
14:07 stopbit joined #gluster
14:12 stopbit joined #gluster
14:19 wushudoin joined #gluster
14:29 18VAABQI4 joined #gluster
14:37 manik1 joined #gluster
14:38 glusterbot New news from newglusterbugs: [Bug 872601] split-brain caused by %preun% script if server rpm is upgraded during self-heal <https://bugzilla.redhat.com/show_bug.cgi?id=872601>
14:51 raghu joined #gluster
14:57 badone joined #gluster
14:58 hagarth joined #gluster
14:58 ekuric1 joined #gluster
15:02 HeMan joined #gluster
15:03 semiosis :O
15:04 JoeJulian Hey there semiosis.
15:04 semiosis no, i'm not going to write a webmin module for glusterfs :)
15:04 JoeJulian lol
15:05 hateya joined #gluster
15:09 tc00per joined #gluster
15:14 RNZ_ joined #gluster
15:19 badone joined #gluster
15:20 NuxRo :-)
15:23 badone_ joined #gluster
15:25 tc00per So, there's an update sitting in kkeithley's repo (3.3.1-1 -> 3.3.1-2). Saw @JoeJulian's comment about auto-update and the bug filed for split-brain caused by the %preun% script if the server rpm is upgraded during self-heal. How do I prevent that? How can I check the 'status' of the server trusted-pool to know it's OK to update?
15:26 JoeJulian gluster volume heal $vol info
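(A minimal sketch of the pre-update check JoeJulian suggests above; the volume name "myvol" and the package glob are assumptions, not details of tc00per's setup.)

    # run on any server in the trusted pool
    gluster volume heal myvol info
    # each brick should report "Number of entries: 0" before touching the packages;
    # then update the servers one node at a time, e.g.:
    service glusterd stop
    yum update 'glusterfs*'
    service glusterd start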
15:26 tc00per Thanks JoeJulian... did you get a split-brain on auto-update or did I read wrong?
15:28 JoeJulian tc00per: Yeah. I accidentally left "ensure => latest" for the gluster packages in puppet.
15:30 tc00per Dang! My VOL (test environment) is currently stopped so upgrade should be no problem. Friday has become glusterfs testing day while we work out potential issues before diving in with a production system.
15:30 semiosis i never use ensure => latest.  present is the only ensure i'll use on a package, unless i'm holding one at a specific version
15:31 puebele1 joined #gluster
15:31 tc00per I wish I had that problem... == I wish I had time to setup puppet.
15:31 badone__ joined #gluster
15:31 JoeJulian Since I'm running a "stable" distro, most of the time "latest" doesn't bite me.
15:32 andrewbogott joined #gluster
15:32 kkeithley No need to rush to apply that update. All it adds is some patches to the openstack-swift part of UFO, and systemd .system files instead of init.d files for UFO on Fedora mainly (and RHEL-7 Alpha). The glusterfs bits haven't changed.
15:32 kkeithley s/adds is/adds are/
15:32 glusterbot What kkeithley meant to say was: No need to rush to apply that update. All it adds are some patches to the openstack-swift part of UFO, and systemd .system files instead of init.d files for UFO on Fedora mainly (and RHEL-7 Alpha). The glusterfs bits haven't changed.
15:33 JoeJulian tc00per: Just start using puppet. Configure one thing at a time. As you progress, you'll have more time.
15:33 tc00per kkeithley: I saw that on the build system. Is there a better way to get the changelog than going there?
15:34 andrewbogott One of my cluster logfiles just keeps saying this over and over:  I [client.c:2090:client_rpc_notify] 0-fundraising-awight-project-client-3: disconnected
15:34 NuxRo semiosis: I've contacted Jamie from Webmin project, maybe he'd be willing to write a module :)
15:34 andrewbogott Does that mean anything to someone?  It repeats that error for two other systems
15:34 andrewbogott And has been flooding the log with that for many hours.
15:34 semiosis andrewbogott: sounds like you may have a dead brick export daemon or a network issue
15:35 semiosis NuxRo: ok
15:35 andrewbogott semiosis:  I suspect not a network issue since other systems are communicating ok (occasionally the log is interrupted with cries of success.)
15:35 kkeithley I put a CHANGELOG file in the repo with a link to the changelog on the build system. Is more than that necessary?
15:35 andrewbogott semiosis:  Could misbehavior on the part of the clients cause that behavior?
15:36 andrewbogott That is:  should I be blaming the three systems that the error mentions, or should I be debugging my gluster bricks?
15:36 kkeithley you can also do `rpm -q --changelog glusterfs` after you've installed.
15:37 tc00per Nope... that's exactly how I found out. Just wondering if there is any other mechanism that distributes this info that I don't know about. I scanned the irc logs and only saw JoeJulian's comment. Something like RHN Errata Alert list?
15:37 kkeithley or `rpm -q --changelog --file glusterfs-3.3.1-2.fc16.x86_64.rpm` before you install
15:38 kkeithley er, not --file, hang on a sec
15:39 tc00per kkeithley... thanks for rpm-for-dummies lesson... :)
15:39 kkeithley --package
15:39 * kkeithley actually hates rpm
15:39 mtanner joined #gluster
15:39 semiosis andrewbogott: http://community.gluster.org/q/what-is-a-subvolume-what-does-subvolume-myvol-client-1-mean/
15:39 glusterbot Title: Question: What is a subvolume? What does "subvolume myvol-client-1" mean? (at community.gluster.org)
15:39 kkeithley but it's better than debian packaging
15:40 semiosis andrewbogott: to see how to identify which brick exactly "0-fundraising-awight-project-client-3" is referring to
15:40 semiosis andrewbogott: then go check the log file for that brick, feel free to pastie it
15:40 andrewbogott ok, thank you.
15:41 semiosis andrewbogott: also you can check using "ps ax" for that brick path on the server to see which port it's using, then try to telnet to that port from one of the clients complaining it can't connect to the brick
15:41 semiosis to test for network issues
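(A rough sketch of the checks semiosis describes; the volume name, brick path, hostname and port below are placeholders rather than details of andrewbogott's cluster.)

    # 1. map the name in the client log (myvol-client-3) to a brick:
    gluster volume info myvol        # bricks are listed in order; client-0 is the first brick
    # 2. on the server holding that brick, find the brick daemon and the port it was given:
    ps ax | grep glusterfsd | grep /export/brick3
    # 3. from a client that logs the disconnects, test the TCP path to that port:
    telnet server3 24011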
15:41 tc00per Looks like I have to download the rpm first, can't query a package that is still on the repo but not local... ;)
15:43 Technicool joined #gluster
15:49 m0zes joined #gluster
15:50 blendedbychris joined #gluster
15:50 blendedbychris joined #gluster
15:53 JoeJulian rpm -q --changelog --package http://repos.fedorapeople.org/repos/kkeithle/glusterfs/epel-5/SRPMS/glusterfs-3.3.1-2.el5.src.rpm worked.
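(Putting kkeithley's corrections together, a short sketch of the changelog queries; the file name is an example.)

    # installed package:
    rpm -q --changelog glusterfs | head
    # downloaded rpm, before installing (-p is short for --package):
    rpm -qp --changelog glusterfs-3.3.1-2.el6.x86_64.rpm | head
    # rpm will also query a remote package by URL, as JoeJulian shows above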
15:54 tc00per Server rpms updated. Warnings on all peers about creation of rpmsave files in /var/lib/glusterd/vols/VOL directory. Is this expected and/or what is the purpose of doing this?
15:54 JoeJulian It's expected.
15:55 JoeJulian glusterd builds new vol files from the info file. Since this could overwrite your .vol files, they're saved as a matter of course.
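(For reference, roughly where those generated and saved files live; the volume name "myvol" is assumed and exact file names vary by version.)

    ls /var/lib/glusterd/vols/myvol/
    # info                        <- what glusterd regenerates the .vol files from
    # myvol-fuse.vol              <- client volfile
    # *.vol, *.rpmsave, bricks/   <- brick volfiles plus the copies saved during the rpm update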
15:56 andrewbogott semiosis:  Here's the last bit of that log file:  http://pastebin.com/JHAKRQDT
15:56 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
15:56 andrewbogott ok, sorry glusterbot
15:57 andrewbogott semiosis:  take two… http://fpaste.org/oPK7/
15:57 glusterbot Title: Viewing Paste #248884 (at fpaste.org)
15:58 andrewbogott I'm guessing that 'could not unregister with portmap' is the interesting part, although I don't know what that means
15:58 kkeithley those rpmsave warnings are normal
15:59 ekuric joined #gluster
15:59 kkeithley you can use `yumdownloader ...` to get them, then `yum localinstall ...` to install the files you just downloaded
15:59 kkeithley tc00per: ^^^
16:02 semiosis andrewbogott: is that a brick log?
16:03 andrewbogott semiosis:  It is /var/log/glusterfs/bricks/a-fundraising-awight-project.log  <- is that not what we were looking for?
16:04 andrewbogott (On the host that corresponds to 0-fundraising-awight-project-client-3
16:04 andrewbogott )
16:04 tc00per kkeithley: thanks. I did it with yum-plugin-downloadonly... poor cat.
16:05 semiosis andrewbogott: yeah just checking... idk what to make of it
16:05 semiosis will take another look maybe something will jump out at me
16:08 andrewbogott semiosis:  The port for that brick is indeed closed.  But it looks like the brick is shut down and that's on purpose, so the question is why we're still trying to access it...
16:10 semiosis andrewbogott: is there a glusterfsd process running for that brick?
16:10 semiosis andrewbogott: there should be one for every brick if the volume is started
16:10 JoeJulian andrewbogott: you're running 3.3.0
16:11 JoeJulian I'm mostly sure this is part of bug 846619
16:11 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=846619 urgent, high, ---, vbellur, ASSIGNED , Client doesn't reconnect after server comes back online
16:11 andrewbogott semiosis:  Looks like no process for that brick.
16:11 JoeJulian andrewbogott: It does that a whole bunch of times, then finally stops and never tries again. Ever.
16:12 andrewbogott JoeJulian:  That seems likely, although we are running hundreds of mounts, weird that I only see the bug for three of them.
16:12 andrewbogott Is there a workaround?
16:13 semiosis andrewbogott: if that brick is dead no clients can reach it.  you can restart glusterd on that server and it will respawn the missing brick process
16:13 semiosis andrewbogott: or gluster volume start force or something like that
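(A sketch of both options, assuming the dead brick lives on one known server and the volume is called "myvol"; syntax may vary slightly between releases.)

    # on the server whose glusterfsd process is missing:
    service glusterd restart          # respawns any brick daemons that should be running
    # or, from any server in the pool:
    gluster volume start myvol force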
16:13 andrewbogott will restarting glusterd interrupt service for other mounts?
16:15 JoeJulian it will not
16:16 andrewbogott ok, let's see!
16:17 andrewbogott Those bricks are moving to different ports, that seems promising...
16:21 andrewbogott JoeJulian, semiosis:  Now only one of the three complaining mounts is still complaining… and its brick is on a different host.  So 'service glusterd restart' seems to've done the trick.
16:21 andrewbogott I guess I'll just need to do that… periodically :(
16:21 andrewbogott Thanks for your help!
16:21 JoeJulian I hope not.
16:22 JoeJulian If bricks are dying it might be better to find out why.
16:33 seanh-ansca joined #gluster
16:53 Azrael808 joined #gluster
16:54 Mo__ joined #gluster
17:01 phreek joined #gluster
17:01 phreek hi guys
17:02 semiosis hello
17:02 glusterbot semiosis: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
17:02 unalt joined #gluster
17:02 semiosis phreek: ^^
17:03 phreek i am setting up a 2 node glusterfs distributed volume, i was wondering whats the best way to do HA, was thinking about using uCARP + NFS
17:03 morse_ left #gluster
17:03 semiosis if you can use the glusterfs native FUSE client you get HA automatically... see ,,(mount server)
17:03 glusterbot (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrnds, or (#2) Learn more about the role played by the server specified on the mount command here: http://goo.gl/0EB1u
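(A minimal native-client mount with made-up hostnames and volume name; the backupvolfile-server option, available in newer releases, only matters if the named server is down at mount time, since the client talks to all bricks once the volfile is fetched.)

    mount -t glusterfs server1:/myvol /mnt/myvol
    # or in /etc/fstab:
    server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,backupvolfile-server=server2  0 0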
17:04 phreek Would there be any performance difference between using NFS or the glusterfs native client?
17:04 semiosis possibly
17:05 NuxRo phreek: NFS is generally a bit faster from what I've seen
17:05 semiosis NuxRo: ...for some workloads
17:05 NuxRo yes, true
17:05 phreek Most going to be big files
17:05 phreek mostly*
17:05 NuxRo I use the native client, free bult-in HA
17:06 phreek when you config your glusterfs native client, and set the 'volume replicate type cluster/replicate' does that have anything to do with the type of volume you configured on the server?
17:06 phreek or is that something else
17:07 semiosis phreek: whoa you're talking very old stuff
17:07 semiosis you shouldn't be doing that kind of manual configuration since 3.0
17:07 semiosis almost 2 years ago
17:07 phreek lol!
17:07 morse joined #gluster
17:07 phreek damn
17:07 semiosis @rtfm
17:07 glusterbot semiosis: Read the fairly-adequate manual at http://gluster.org/community/documentation//index.php/Main_Page
17:07 semiosis @latest
17:07 glusterbot semiosis: The latest version is available at http://download.gluster.org/pub/gluster/glusterfs/LATEST/ . There is a .repo file for yum or see @ppa for ubuntu.
17:07 semiosis 3.3.1 is the latest
17:07 phreek :)
17:07 phreek sorry
17:08 * phreek reads
17:08 semiosis vols are configured through a nice CLI, clients are managed centrally & automatically
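(For contrast with the old hand-written volfiles, a sketch of the CLI-driven workflow; hostnames, brick paths and the volume name are made up.)

    gluster peer probe server2
    gluster volume create myvol replica 2 server1:/export/brick1 server2:/export/brick1
    gluster volume start myvol
    # clients need no config files at all:
    mount -t glusterfs server1:/myvol /mnt/myvol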
17:10 phreek oh, wow
17:10 phreek i was using a really old version
17:11 phreek semiosis: thanks :)
17:12 phreek Also, i will be using NFS because ill have to mount the volume on a ESXi host and it doesnt support the gluster native client
17:12 semiosis yw
17:12 semiosis seems like lots of people lately want to use glusterfs for vmware like that
17:12 semiosis UnixDev: ^^^
17:13 phreek Yeah its awesome
17:13 Humble joined #gluster
17:13 phreek it seems really fast also
17:13 phreek i just now got the other server for the second gluster node
17:13 phreek :P
17:13 phreek hence the HA question
17:15 phreek Right now i have like 8 VMs running on that esxi host with the glusterfs vol as storage
17:15 phreek ^_^
17:16 NuxRo phreek: if you only have few machines, switch to KVM on a proper linux host, then you can use the native client ;-)
17:17 JoeJulian I think after people have paid all that money for vmware, they don't want to switch to something that has more options and is free. Feels like they're losing money that way.
17:20 semiosis it's almost as if vmware has some kind of special relationship with an enterprise storage vendor
17:20 semiosis nah, couldn't be
17:23 mohankumar joined #gluster
17:23 NuxRo :-)
17:25 JoeJulian semiosis: Were you at a buzzword sales conference? https://twitter.com/pragmaticism/status/264413952270823425
17:25 glusterbot Title: Twitter / pragmaticism: OH: its 360 degree blue sky ... (at twitter.com)
17:25 semiosis JoeJulian: !!!
17:26 semiosis no, ##infra-talk
17:28 nightwalk joined #gluster
17:33 pdurbin guys around me are asking what the best filesystem is for small files
17:33 kkeithley A tar/cpio/zip file?
17:34 pdurbin whatever files. generated by big science :)
17:34 JoeJulian post-it notes.
17:34 pdurbin heh
17:35 semiosis how large is small?
17:35 pdurbin uh. 4 kilobytes. i'm making that up
17:36 JoeJulian I hate to say it, but ntfs would probably handle that the best. It can put the whole file in the inode.
17:36 semiosis pdurbin: these guys around you... are they trolling you?
17:36 semiosis hehe
17:36 pdurbin huh. interesting
17:36 pdurbin they're talking. i don't have a private office
17:36 semiosis "best" is ambiguous without much more qualtifications
17:36 JoeJulian trye
17:37 JoeJulian s/y/u/
17:37 glusterbot What JoeJulian meant to say was: true
17:40 JoeJulian Dirty rotten sticky-pointers! Gah!
17:40 JoeJulian Why do they keep showing up in my client?
17:48 Nr18 joined #gluster
18:06 manik joined #gluster
18:23 andrewbogott joined #gluster
18:24 andrewbogott joined #gluster
18:35 Nr18 joined #gluster
18:38 badone joined #gluster
18:53 JoeJulian Ah, ok... so a "(trusted.glusterfs.dht.linkto) ==> -1 (Permission denied)" means that the dht link is missing on that brick.
18:54 JoeJulian file a bug
18:54 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
18:56 Kmos left #gluster
18:58 aliguori joined #gluster
19:08 glusterbot New news from newglusterbugs: [Bug 872703] sticky-pointer with no trusted.dht.linkto after a replace-brick commit force, heal full migration <https://bugzilla.redhat.com/show_bug.cgi?id=872703>
19:09 manik joined #gluster
19:38 NuxRo semiosis: Jamie agreed to making a module for glusterfs, i'm going to help him with what i know + some test bed
19:49 jdarcy Hm.  I wonder why ext4 with journal on a ramdisk is faster than ext4 with no journal at all.
19:51 a2 wouldn't it?
19:52 a2 wouldn't a journal convert a lot of random IO into sequential IO (into the journal itself) before acknowledging?
19:52 jdarcy Not for O_SYNC writes.
19:52 a2 does ext4 journal data?
19:53 jdarcy Depends on mount options.  By default, I think not.
20:52 foster hm, no journal mode is faster on this box, granted that is with 1 byte writes to a 16mb ramdisk ;)
21:00 jdarcy Just for fun I'm going to give NILFS2 a try.
21:01 jdarcy When writes are synchronous anyway, journaling as well is kind of a waste.
21:02 jdarcy Why would anyone do that?  When what they're doing is already a journal, such as for async replication.  ;)
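(Not from the discussion above, just a crude way to reproduce the comparison jdarcy and foster are making; device and mount-point names are placeholders.)

    # ext4 with its journal on an external ramdisk device:
    mke2fs -O journal_dev /dev/ram0
    mkfs.ext4 -J device=/dev/ram0 /dev/sdb1
    # ext4 with no journal at all:
    mkfs.ext4 -O ^has_journal /dev/sdb1
    # after mounting, time a stream of small synchronous writes:
    dd if=/dev/zero of=/mnt/test/file bs=4k count=10000 oflag=sync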
21:08 nightwalk joined #gluster
21:11 hattenator joined #gluster
21:15 hateya joined #gluster
21:21 JoeJulian jdarcy, a2: Here's a question for you... what's the point of having 4 out of 5 peers scan when only one of them actually does anything? http://fpaste.org/qgjJ/
21:22 glusterbot Title: Viewing Paste #249009 (at fpaste.org)
21:31 jdarcy JoeJulian: In theory, files on each old brick might be destined for a new one and parallel scanning would make rebalance faster.  In practice, because of the way we assign hash ranges, it's highly likely that all of the files to move are on only one of the old servers.
21:36 JoeJulian I just realized I said 4 out of 5...... I only have 4 peers.
21:37 JoeJulian Apparently localhost does double duty.
21:39 JoeJulian 3 of those peers are replicas (joej-linux is just my desktop box so I can manage the volumes without shelling into a server) so all the servers had files that could have been moved. I suppose it wouldn't have gone any faster, anyway, since they're all sharing the same network, but still... It just kind-of seems odd.
21:42 JoeJulian jdarcy: What ever happened to your concentric ring rebalance anyway... ;)
21:46 a2 JoeJulian, why shouldn't all the peers scan? all of them store some of the files, so all of them do "their job" during rebalance..
21:46 jdarcy Still just a gleam in my eye.
21:46 a2 or i misunderstood your question?
21:48 JoeJulian a2: if there was some result to them scanning, I would agree. But since they scanned the whole tree but didn't move anything it seems like a waste of cycles.
21:49 a2 they scanned the full tree to inspect if anything had to be moved. that cannot be avoided.
21:49 a2 without scanning you wouldn't know there was nothing to do
21:50 blendedbychris how do i add a new server to a replicateed volume?
21:50 a2 of course, this is in the narrow-ish view that the purpose of rebalance is to squash linkfiles (and move the actual file over)
21:50 blendedbychris the volume is currently running
21:51 JoeJulian Apparently it can if the bricks they're scanning are replica subvolumes. Then 3 servers all have identical files. Any file that's moved by ewcs2 would have shown up in the same scan on ewcs7 or ewcs10. Since 7 and 10 didn't move anything, my guess would be that's because they weren't the first brick in the replica subvolume.
21:52 JoeJulian a2: in this case, that's really the only reason I'm rebalancing. The linkfiles were replicating without trusted.dht.linkto. :(
21:52 a2 yeah, rebalance can do better in the presence of replicate
21:52 a2 JoeJulian, really?
21:52 a2 JoeJulian, did you check if trusted.dht.linkto was missing in the backend?
21:52 JoeJulian yep bug 872703
21:52 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=872703 unspecified, unspecified, ---, pkarampu, NEW , sticky-pointer with no trusted.dht.linkto after a replace-brick commit force, heal full migration
21:55 a2 "(trusted.glusterfs.dht.linkto) ==> -1 (Permission denied)"
21:55 Melsom joined #gluster
21:55 a2 what was the full line?
21:55 JoeJulian sorry, I know better than to do that... :(
21:55 a2 ??
21:55 a2 don't you have the full log line?
21:56 JoeJulian I do. I should have pasted the whole thing.
21:56 a2 file number? line number? function name?
21:56 a2 *file name
21:56 JoeJulian [2012-11-02 09:53:50.127373] I [server3_1-fops.c:823:server_getxattr_cbk] 0-home-server: 7831143: GETXATTR /alw/.thunderbird.default/prefs.js (trusted.glusterfs.dht.linkto) ==> -1 (Permission denied)
21:56 JoeJulian That's 3.3.1
21:57 a2 so it's a permission issue.. linkto is probably there in the backend
21:57 JoeJulian That was for a linkfile that didn't exist on that brick.
21:57 a2 what about the xattr on the backend?
21:57 JoeJulian It was subsequently self-healed.
21:57 a2 getfattr -d -m . linkfile
21:57 JoeJulian The file wasn't on the backend.
21:58 JoeJulian Not at the point where that error happened anyway.
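(For anyone following along: what a DHT link file looks like on a brick and how to dump its xattrs, per a2's suggestion; the brick path prefix is an example.)

    # link files are zero-length, mode ---------T entries on the brick:
    ls -l /export/brick1/alw/.thunderbird.default/prefs.js
    # dump all xattrs in hex (run as root to see trusted.*) and look for trusted.glusterfs.dht.linkto:
    getfattr -d -m . -e hex /export/brick1/alw/.thunderbird.default/prefs.js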
21:58 blendedbychris okay i got this … i added the brick but is there a quick way to "touch" all the files so it'll transfer to the second brick?
21:58 JoeJulian blendedbychris: That's what rebalance is for.
21:58 blendedbychris o
21:59 JoeJulian gluster volume rebalance $vol start
21:59 JoeJulian You can then check the status with "gluster volume rebalance $vol status"
21:59 glusterbot joined #gluster
21:59 blendedbychris should i use "migrate-data" ?
21:59 semiosis ~pasteinfo | blendedbychris
21:59 glusterbot blendedbychris: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
21:59 JoeJulian Ah, you're on an older version.
22:00 blendedbychris what's the diff? seems like one might be more intensive?
22:01 blendedbychris bleh one didn't work, that's the diff heh
22:02 Daxxial_ joined #gluster
22:02 tryggvil joined #gluster
22:10 tryggvil joined #gluster
22:11 Daxxial_ joined #gluster
22:12 glusterbot joined #gluster
22:28 blendedbychris JoeJulian: i did rebalance start and it didn't copy all of the files to the brick for some reason
22:28 blendedbychris it copied some
22:28 JoeJulian That's what's expected with a distributed volume.
22:28 blendedbychris It's replica
22:29 blendedbychris oh crap it's not replica
22:29 blendedbychris can i change the volume type  ?
22:29 blendedbychris (i realized that i was using this to do geo replication)
22:29 JoeJulian Not in < 3.3
22:30 blendedbychris bummer
22:30 JoeJulian prior to 3.3 you had to remove the volume and recreate it.
22:30 JoeJulian You're also kind-of screwed in that now half your files are on the new brick...
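(For the record, on 3.3 and later the replica count can be changed while adding bricks; a sketch with made-up names, since blendedbychris's actual layout isn't known.)

    # turn a one-brick distribute volume into a two-way replica:
    gluster volume add-brick myvol replica 2 server2:/export/brick1
    gluster volume heal myvol full      # then populate the new replica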
22:31 JoeJulian What's your stance on upgrading today?
22:31 blendedbychris is there a ppa with 3.3 in it?
22:32 blendedbychris i mean all the files are still on the original brick
22:32 JoeJulian @ppa
22:32 glusterbot JoeJulian: The official glusterfs 3.3 packages for Ubuntu are available here: https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.3
22:32 blendedbychris what's the upgrade process?
22:36 TSM2 joined #gluster
22:36 blendedbychris oi i'll read :)
22:47 lkoranda joined #gluster
23:03 blendedbychris is the upstarted version still necessary?
23:08 andrewbogott_afk joined #gluster
23:10 lh joined #gluster
23:10 lh joined #gluster
23:43 palmtown joined #gluster
23:43 palmtown hello, having problems with gluster
23:43 palmtown it is not syncing
23:43 palmtown it says started
23:43 palmtown but when I create a file
23:43 palmtown it doesn't sync
23:46 semiosis blendedbychris: the packages in the ubuntu-glusterfs-3.3 repo are upstartified, there's no other choice now
23:46 semiosis blendedbychris: though i seem to have made a mistake uploading the quantal package, but aside from the name there's no difference between the packages in that repo
23:46 semiosis one was built for quantal the other for precise
23:47 semiosis i'll fix the names tho, that is confusing
23:47 palmtown any ideas
23:47 palmtown ?
23:47 palmtown gluster info shows it is started and replicated
23:52 semiosis palmtown: likely a network issue, probably host name resolution.  check your client log file.  sorry but i have to run
23:52 semiosis maybe ,,(mount server) will help
23:52 glusterbot (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrnds, or (#2) Learn more about the role played by the server specified on the mount command here: http://goo.gl/0EB1u
23:52 semiosis good luck
23:53 palmtown hey semiosis
23:53 palmtown odd
23:53 palmtown in the nfs log I see
23:53 palmtown (Connection refused)
23:53 palmtown nothing is listing on 24009
23:54 JoeJulian palmtown: Try gluster volume status $vol
23:54 TSM2 when starting a fresh gluster replicated  pair where the first server is full of data, can you start off by rsyncing the data first before you start gluster up, would that be faster than stat'ing all the files across the gluster client?
23:55 palmtown unrecognized word: status (position 1)
23:55 JoeJulian The network connection is the same bottleneck either way.
23:55 JoeJulian palmtown: If you're just getting started, why not use the latest version? Which distro?
23:57 JoeJulian TSM2: Also, same thing. Use the latest version and there's no find..stat you just can do "gluster volume heal $vol full"
23:57 palmtown running 3.2.7
23:57 TSM2 ahh colios
23:57 JoeJulian Sorry, that's not what I was asking. I wanted to know what distribution you prefer.
