IRC log for #gluster, 2012-10-18


All times shown according to UTC.

Time Nick Message
00:29 bala joined #gluster
01:04 lng joined #gluster
01:04 lng Hi! What does upstarted build mean?
01:08 JoeJulian lng: upstart is the replacement for SysV Init in Ubuntu and Fedora 14.
01:13 lng JoeJulian: what is SysV Init then?
01:13 lng :-)
01:14 lng what is the diff?
01:14 JoeJulian http://en.wikipedia.org/wiki/Init
01:14 glusterbot Title: init - Wikipedia, the free encyclopedia (at en.wikipedia.org)
01:14 lng I know about Init
01:14 lng but SysV is new to me
01:14 JoeJulian hehe, it's actually REALLY oldschool.
01:15 lng thanks for the link
01:15 * JoeJulian is really old, apparently.
01:16 lng JoeJulian: so next time, when I install Gluster, I go for upstarted one?
01:16 JoeJulian Depends on the distro
01:16 JoeJulian and release.
01:16 lng JoeJulian: Latest Ubuntu Server
01:18 JoeJulian Then yes. Upstart was first included in Ubuntu in the 6.10 (Edgy Eft) release in late 2006, replacing sysvinit. Ubuntu 9.10 (Karmic Koala) introduced native Upstart bootup.
01:18 lng JoeJulian: thanks
01:18 JoeJulian Ah, I remembered the Fedora release incorrectly. Fedora 9 was the only one that used it.
01:19 JoeJulian s/only/first/
01:19 glusterbot What JoeJulian meant to say was: Ah, I remembered the Fedora release incorrectly. Fedora 9 was the first one that used it.
01:22 lng JoeJulian: for Gentoo it's standard SysV builds, right?
01:23 JoeJulian I've never even seen Gentoo installed anywhere. Don't really know anything about it.
01:23 lng JoeJulian: it's the best distro
01:23 lng :-)
01:23 kevein joined #gluster
01:24 JoeJulian hehe
01:24 lng JoeJulian: http://www.gentoo.org/main/en/about.xml
01:24 glusterbot Title: Gentoo Linux -- About Gentoo (at www.gentoo.org)
01:24 JoeJulian It's got to be every bit as good as Slackware.
01:24 JoeJulian Considering the popularity of it.
01:24 lng JoeJulian: it's high
01:25 lng but not as high as Ubuntu's because the user should have some knowledge
01:26 lng if you visit the Gentoo IRC channel on Freenode, there aren't as many lamers as in the Ubuntu channel
01:26 JoeJulian Well, that's always a good thing.
01:26 lng JoeJulian: I use Gentoo
01:26 JoeJulian One of the things I enjoy about this channel. :D
01:26 lng at home
01:26 lng haha
01:28 y4m4 joined #gluster
01:28 foo_ Exherbo >> Gentoo :)
01:28 * foo_ goes back to rustling about in the shadows
01:28 JoeJulian :)
01:29 y4m4 joined #gluster
01:30 foo_ oh since links are about, http://exherbo.org
01:30 glusterbot Title: Welcome to Exherbo Linux (at exherbo.org)
01:30 lng foo_: never used Exherbo
01:30 lng but seen on web
01:31 lng Exherbo: "[W]e don't particularly want you to try it because we don't want to deal with you whining when you find that absolutely nothing works. Exherbo isn't in a fit state for users. We might get there one day, but it's not a priority. Right now, all we care about is getting it into a fit state for a small number of developers."
01:31 foo_ Yes and this is nice.
01:31 lng foo_: do you use it?
01:32 foo_ indeed
01:32 foo_ in fact I was going to get it up on my SSD-laden laptop
01:32 lng foo_: just going?
01:32 lng never installed?
01:32 foo_ I've used it heavily
01:33 foo_ no other distro made working with packages from scm so nice [like working on X stuff]
01:37 foo_ http://halffull.org/2009/03/07/review-of-exherbo-linux-from-a-users-perspective/
01:50 Bullardo joined #gluster
02:08 Bullardo joined #gluster
02:09 sashko joined #gluster
02:40 Bullardo joined #gluster
02:41 seanh-ansca joined #gluster
02:58 ika2810 joined #gluster
03:05 lanning anyone hit a case where they can't add a brick? gluster says it is already part of the volume. I have rm -rf'd and mkdir'd the brick directory and ran the path through setfattr -x all the way down to /; shut down every glusterd, then restarted every glusterd
03:06 bharata joined #gluster
03:07 lanning gluster 3.3.1 on centos-release-6-2.el6.centos.7.x86_64
03:08 lanning http://paste.org/55745
03:08 glusterbot Title: Your code. Your site. Use it. - paste.org (at paste.org)
03:12 lanning http://paste.org/55747
03:12 glusterbot Title: Your code. Your site. Use it. - paste.org (at paste.org)
03:21 vpshastry joined #gluster
03:22 shylesh joined #gluster
03:24 vshastry joined #gluster
03:25 sunus joined #gluster
03:27 _Marvel_ joined #gluster
03:31 JoeJulian ~pastestatus | lanning
03:31 glusterbot lanning: Please paste the output of "gluster peer status" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
03:31 JoeJulian From more than one server please.
03:32 lanning just a sec. (I am recreating the volume from scratch in one step, instead of add-brick)
03:33 lanning well, that seemed to work.
03:34 JoeJulian Darn.
03:34 JoeJulian Would have been nice to know why.
03:35 lanning ya, but I want to go home... :)
03:35 lanning plus it was a brand new volume, so no problems with a wipe and restart.
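For reference, a hedged sketch of the cleanup lanning describes above for reusing a brick path that glusterd rejects as already part of a volume; the brick path is hypothetical, and on 3.3 the attributes that matter are the volume-id and gfid xattrs plus the hidden .glusterfs directory:

    # hypothetical brick path
    setfattr -x trusted.glusterfs.volume-id /export/brick1
    setfattr -x trusted.gfid /export/brick1
    rm -rf /export/brick1/.glusterfs
    # parent directories may carry the xattrs too ("or a prefix of it"),
    # which is why lanning walked setfattr -x all the way up to /
    service glusterd restart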
03:45 seanh-ansca joined #gluster
03:51 sripathi joined #gluster
03:56 sgowda joined #gluster
04:00 _Marvel_ joined #gluster
04:05 seanh-ansca joined #gluster
04:17 _Marvel_ joined #gluster
04:26 vpshastry joined #gluster
04:36 _Marvel_ joined #gluster
04:38 seanh-ansca joined #gluster
04:49 crashmag joined #gluster
04:56 zwu joined #gluster
04:57 bala joined #gluster
05:01 hagarth joined #gluster
05:12 faizan joined #gluster
05:19 crashmag joined #gluster
05:23 Nr18 joined #gluster
05:24 faizan joined #gluster
05:26 zwu joined #gluster
05:28 bala joined #gluster
05:33 vpshastry1 joined #gluster
05:35 sripathi joined #gluster
05:35 sripathi joined #gluster
05:39 deepakcs joined #gluster
05:46 raghu joined #gluster
05:47 guigui1 joined #gluster
05:52 flowouffff joined #gluster
05:52 lng One of our 4 EC2 Gluster instances is scheduled for a stop event. It means I need to start a new server and attach the EBS volumes taken from Server 1 to it. Any info on how to do it?
05:54 lng It is problematic to re-attach bricks, I remember
05:54 lng should I just rsync the volumes?
05:57 lng what if the Gluster versions are different?
06:08 lng JoeJulian: here?
06:15 seanh-ansca joined #gluster
06:29 puebele joined #gluster
06:29 Pushnell__ joined #gluster
06:30 rgustafs joined #gluster
06:39 glusterbot New news from resolvedglusterbugs: [Bug 764890] Keep code more readable and clean <https://bugzilla.redhat.com/show_bug.cgi?id=764890>
06:47 sunus hi how can i print gluster info to console in the server?
06:47 sunus i want to see that info if a client mount a volume
06:47 puebele joined #gluster
06:48 ekuric joined #gluster
06:48 ngoswami joined #gluster
06:48 Shdwdrgn joined #gluster
06:52 bala1 joined #gluster
06:52 ramkrsna joined #gluster
06:52 ramkrsna joined #gluster
06:57 Azrael808 joined #gluster
07:02 ramkrsna joined #gluster
07:03 Psi-Jack joined #gluster
07:04 pkoro joined #gluster
07:04 lng sunus: # gluster volume info
07:08 lng how to attach a peer?
07:08 tjikkun_work joined #gluster
07:08 lng unrecognized word: attach (position 1)
07:10 lng should I probe it?
07:12 ctria joined #gluster
07:14 lng is it possible to stop writes to a brick to be replaced?
07:15 JoeJulian lng: I wouldn't delete and recreate the volume. I would use the ,,(replace) instructions for using the same hostname (assuming you're using elastic whachamacallits).
07:15 glusterbot lng: Useful links for replacing a failed server... if replacement server has different hostname: http://community.gluster.org/q/a-replica-node-has-failed-completely-and-must-be-replaced-with-new-empty-hardware-how-do-i-add-the-new-hardware-and-bricks-back-into-the-replica-pair-and-begin-the-healing-process/ ... or if replacement server has same hostname:
07:15 glusterbot http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Brick_Restoration_-_Replace_Crashed_Server
07:16 lng JoeJulian: nope, I don't use EIP
07:17 JoeJulian sunus: With 3.3 you can use "gluster volume status $volname clients"
07:17 JoeJulian lng: Then you're doing it the hard way. :(
07:17 JoeJulian Ok, I'm off to bed. Goodnight.
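For sunus' question above, a quick recap of the two commands mentioned (the volume name myvol is hypothetical); the second form needs 3.3 or later:

    gluster volume info myvol
    gluster volume status myvol clients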
07:24 puebele joined #gluster
07:25 andreask joined #gluster
07:28 lng glusterfsd consumes 100% of CPU!
07:28 lng after vol migration
07:28 lng I paused it
07:28 lng still the same load
07:30 bulde lng: can you put a gdb to the process consuming 100% CPU, and get a bt
07:30 bulde ?
07:35 TheHaven joined #gluster
07:35 sunus volume create distvol 192.168.10.4:/gfs_vm3 192.168.10.3:/gfs_vm2  it says brick maybe containing or contained by an existing brick why
07:35 sunus ?
07:35 saurabh1 joined #gluster
07:40 lng bulde: I detached peer
07:40 lng cpu is ok now
07:40 lng bulde: but soon I need to continue migration
07:47 lng gdb was not installed
07:48 lng JoeJulian: the link you provided is _stale_ as there's no `gluster peer attach` command
07:49 sripathi joined #gluster
07:49 overclk joined #gluster
07:49 lkoranda joined #gluster
07:52 lng I need to restart the Server (EC2), but after that its IP will change and I don't use hostnames
07:53 lng is it possible to reconfigure the cluster with the new IP?
07:53 lng I had some problems before, I remember
07:53 lng when IP changed
07:53 lng please anybody help
07:56 gbrand_ joined #gluster
08:15 gbrand_ joined #gluster
08:15 sripathi1 joined #gluster
08:27 Humble joined #gluster
08:29 vpshastry joined #gluster
08:35 clag_ hi, on glusterfs 3.2.5, once a node has been stopped for maintenance (system upgrade) and restarted, how do I know that self-heal has finished?
08:36 clag_ I have some difficulty with this part... being sure my cluster is OK
08:41 zwu joined #gluster
08:41 Azrael808 joined #gluster
08:55 Jippi joined #gluster
08:56 ctria joined #gluster
08:56 ctria joined #gluster
09:00 dobber joined #gluster
09:02 zwu joined #gluster
09:05 linux-rocks joined #gluster
09:07 vpshastry joined #gluster
09:19 oneiroi joined #gluster
09:22 puebele1 joined #gluster
09:23 HavenMonkey joined #gluster
09:23 bala joined #gluster
09:28 deepakcs joined #gluster
09:29 ndevos lng: you really should use hostnames on amazon ec2, I think others use services like dyndns.org to do that
09:41 lng ndevos: just hostnames? not EIP?
09:41 sripathi joined #gluster
09:42 lng ndevos: does it help if I need to reboot the instance and amazon changed ip?
09:42 lng of the instance after reboot
09:42 ndevos lng: hostnames are sufficient for all I know, reboot might change the IP and thats where the dyndns service comes in
09:43 lng why dyndns?
09:43 ndevos lng: EIPs would work too I guess, just depends on what you prefer
09:43 lng is not Route 53 suffice?
09:43 ndevos lng: a service like dyndns.org
09:47 raghu 14:58 *** deepakcs QUIT Ping timeout: 265 seconds
09:49 lng ndevos: I have been mistaken - Amazon doesn't change Instance DNS after reboot
09:49 lng it happens only if you stop / start instance
09:50 ndevos lng: ah, right, you can use that host/domainname too then
09:51 deepakcs joined #gluster
09:51 lng I don't know how to migrate to it if I create named based gluster
09:52 lng as no downtime is acceptable
09:52 ndevos @hostnames
09:52 glusterbot ndevos: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
09:53 lng ndevos: I tried replace-brick today and one of the nodes' cpu util. became 100%
09:53 lng ndevos: so it's easy to reconfigure gluster to use hostnames, right?
09:56 ndevos lng: yeah, just 'gluster peer probe $HOSTNAME_OF_OTHER_SERVER'
09:56 lng ndevos: great
09:56 ndevos lng: I wouldnt know why replace-brick causes 100% usage
09:57 lng ndevos: have you ever tried replace-brick?
09:57 ndevos lng: you can check with 'gluster peer status' and it should list the hostnames afterwards (check on all servers)
09:58 ndevos lng: yeah, tried it and did not notice anything weird, all worked fine afterwards
09:58 Humble joined #gluster
09:59 lng ndevos: so after I switch to hostnames, an instance IP change should not affect Gluster?
10:00 rwheeler joined #gluster
10:00 ndevos lng: no, the glusterd and glusterfs binaries will resolve the host - you can grep for the IP under /var/lib/glusterd and verify they are gone
10:01 hagarth joined #gluster
10:05 bala joined #gluster
10:11 berend joined #gluster
10:13 jmara joined #gluster
10:15 lng ndevos: `gluster peer status` shows hostnames, but IPs are still in /var/lib/glusterd/
10:15 lng do I have to edit these files manually?
10:22 lng ndevos: shall I just replace all the IPs with sed?
10:23 arti_t joined #gluster
10:26 lng hostnames appeared only in /var/lib/glusterd/peers/
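A sketch of the IP-to-hostname conversion glusterbot and ndevos describe above, with hypothetical names server1/server2 and a hypothetical old IP prefix; note, as lng observes, that probing by name only rewrites the peer records under /var/lib/glusterd/peers/, not bricks that were originally defined by IP:

    # from server2 (or any other peer), probe server1 by name
    gluster peer probe server1
    # verify the peer list now shows hostnames
    gluster peer status
    # check whether volume definitions still reference the old IPs
    grep -r '10.0.0.' /var/lib/glusterd/vols/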
10:34 gcbirzan_ joined #gluster
10:35 gcbirzan_ http://bpaste.net/show/1pVwwdtxjCeS6WIfdvpX/ I'm getting this when trying to heal a volume
10:35 glusterbot Title: Paste #1pVwwdtxjCeS6WIfdvpX at spacepaste (at bpaste.net)
10:36 ekuric joined #gluster
10:37 gcbirzan_ And, I know that's not a question, so, I guess. What do I do now? :P
10:39 Azrael808 joined #gluster
10:48 ctria joined #gluster
10:51 bala joined #gluster
10:56 mdarade joined #gluster
11:07 ika2810 left #gluster
11:10 hagarth joined #gluster
11:12 manik joined #gluster
11:15 kkeithley1 joined #gluster
11:17 vpshastry joined #gluster
11:17 Alpinist joined #gluster
11:18 nocturn joined #gluster
11:20 nocturn Hi all
11:22 nocturn We keep hitting "Too many levels of symbolic links" errors on our gluster 3.2 setup. We have a replicated and a distributed-replicated volume that both get this problem.
11:31 Pushnell_ joined #gluster
11:32 hagarth__ joined #gluster
11:33 ndevos_ joined #gluster
11:33 koaps_ joined #gluster
11:36 adechiaro1 joined #gluster
11:40 Psi-Jack joined #gluster
11:44 lkoranda joined #gluster
11:45 joeto joined #gluster
11:45 wN joined #gluster
11:45 hagarth joined #gluster
11:46 ramkrsna joined #gluster
11:46 rgustafs joined #gluster
11:47 sunus joined #gluster
11:54 bulde joined #gluster
11:55 mdarade1 joined #gluster
11:57 kevein joined #gluster
12:01 sunus joined #gluster
12:12 Triade joined #gluster
12:15 Triade1 joined #gluster
12:24 edward1 joined #gluster
12:27 manik joined #gluster
12:33 balunasj joined #gluster
12:34 balunasj joined #gluster
12:36 Azrael808 joined #gluster
12:48 ctria joined #gluster
12:50 clag_ joined #gluster
12:58 DaveS joined #gluster
13:07 hagarth joined #gluster
13:12 foo_ joined #gluster
13:13 bennyturns joined #gluster
13:20 Nr18 joined #gluster
13:28 Alpinist Hello guys
13:29 Alpinist We still have a problem with too many symbolic links and it's getting worse
13:30 Alpinist On some mounts we get the too-many-symlinks error, on others it's OK, but it seems more and more files are affected
13:31 Alpinist We are running Gluster 3.2.7 on 6 Debian machines, with a 3x2 replicated volume
13:37 saurabh1 left #gluster
13:42 Azrael808 joined #gluster
13:42 NuxRo Hi, if I want to store a file that is bigger than any of the bricks, I must use a striped mode, yes? Any other way I can store files larger than the bricks?
13:45 kkeithley1 There's no compression xlator yet, so stripe mode seems like your only option. You probably shouldn't use xfs, at least not without the stripe coalescing fixes.
13:46 johnmark kkeithley1: oh? first time I've heard that
13:47 kkeithley1 sparse files on xfs? You haven't heard about bfoster's work to fix the
13:47 kkeithley1 s/the/that/
13:47 glusterbot What kkeithley1 meant to say was: sparse files on xfs? You haven't heard about bfoster's work to fix that
13:50 Alpinist Anybody have an idea how we can fix the problem? Even an "ls" in some directories is impossible
13:53 davdunc joined #gluster
13:53 davdunc joined #gluster
13:57 jdarcy Alpinist: Where are these symbolic links coming from?  GlusterFS doesn't add any on its own.
13:57 NuxRo kkeithley_wfh: thanks
13:58 kkeithley_wfh xfs does aggressive pre-allocation of blocks, and when you write sparse files with big holes -- as stripe mode does -- xfs will defeat you. bfoster can chime in with more info
13:59 NuxRo Right. There were some problems with EXT4 on RHEL 6 last time I checked; do you know if they have been fixed?
13:59 jdarcy NuxRo: Unfortunately, the ext4 problem is even worse - can cause infinite looping on directory traversals.  :(
13:59 NuxRo Currently i'm using XFS cause that's the recommended way.. good thing I didn't put it in production, I'll switch to EXT4
13:59 kkeithley_wfh I don't remember hearing about problems with ext4
13:59 NuxRo omg ...
14:00 jdarcy kkeithley_wfh: This is the 64-bit-d_off problem.
14:01 NuxRo any idea if we'll see that bfoster thing backported in EL6?
14:01 kkeithley_wfh yes, just realized
14:01 jdarcy I think the safe thing to do with striping over XFS right now is to set allocsize on the XFS volumes, which unfortunately does impact performance.
14:01 stopbit joined #gluster
14:02 NuxRo jdarcy: I do use that, allocsize=4096 <- some of the gluster staff recommended it for quota support
14:03 pdurbin jdarcy: our plan for kvm on gluster: http://irclog.perlgeek.de/crimsonfu/2012-10-18#i_6075403
14:03 glusterbot Title: IRC log for #crimsonfu, 2012-10-18 (at irclog.perlgeek.de)
14:04 bfoster NuxRo: you should be able to use the stripe-coalesce option to get around the preallocation space usage issue with stripe.
14:04 jdarcy bfoster: Which releases of GlusterFS/RHS have that?
14:04 bfoster NuxRo: allocsize is also a way around it, just note that you might increase the likelihood of fragmentation. Though if you're using traditional stripe format, you're pretty fragmented anyways I suppose
14:05 bfoster jdarcy: I thought it went into release-3.3, but let me try and check...
14:05 Alpinist jdarcy, we don't make symlinks, it seems like gluster is making them
14:05 * NuxRo is using 3.3.1
14:05 NuxRo is that a volume creation option or a mount option?
14:05 Alpinist we also see that the files with too many symlinks do not appear on all mounts
14:05 NuxRo stripe-coalesce that is
14:06 bfoster jdarcy: I see it in my release-3.3 branch
14:06 bfoster NuxRo: it's a stripe xlator option... 'gluster volume set stripe-coalesce on' or some such command should work :P
14:07 NuxRo bfoster: cheers
14:07 jdarcy NuxRo: If the command fails, you don't have the feature.  (I'm so helpful)
14:07 NuxRo jdarcy: no. 1
14:07 NuxRo now, do you guys actually recommend using the stripe xlator at all?
14:08 NuxRo if it's against recommended practice I might try to not use files larger than the bricks
14:08 jdarcy NuxRo: Not generally, except to get around the limitations of large files on small bricks.
14:08 NuxRo aha
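A sketch of the stripe-coalesce toggle bfoster mentions above, with a hypothetical volume name; the exact option key is an assumption (it may need the cluster. prefix), and as jdarcy notes, if the set fails the build simply doesn't have the feature:

    gluster volume set myvol cluster.stripe-coalesce on
    gluster volume info myvol    # confirm the option shows up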
14:10 manik joined #gluster
14:10 JoeJulian @stripe
14:11 glusterbot JoeJulian: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
14:11 * jdarcy has to disappear for a while to test some (other people's) patches.
14:11 NuxRo JoeJulian: thanks, I was actually reading your article now! :)
14:12 NuxRo stripe+replicate sounds nice
14:12 pkoro joined #gluster
14:13 JoeJulian All my fonts are blocky today... https://plus.google.com/+Scobleizer/posts/MJZ7M3SDRjq
14:14 Alpinist Hi JoeJulian
14:14 NuxRo another question, unrelated to striping, I have a 4 server setup, each with 2 bricks. How can I set up a distributed+replicated volume with replica 2 in such a way that no 2 copies will be on the same server?
14:15 NuxRo is it even possible?
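For NuxRo's question, a hedged sketch: with replica 2, replica pairs are formed from consecutive bricks in the order they are listed at create time, so listing bricks in cross-server pairs keeps both copies of any file on different machines (server names and brick paths below are hypothetical):

    gluster volume create myvol replica 2 \
        srv1:/bricks/a srv2:/bricks/a \
        srv3:/bricks/a srv4:/bricks/a \
        srv1:/bricks/b srv2:/bricks/b \
        srv3:/bricks/b srv4:/bricks/b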
14:15 Alpinist Do you remember our problem from yesterday? We did the rebalance, but after it even got worse
14:16 JoeJulian Alpinist: Yeah, was reading the scrollback. What command do you use to get that symlink error?
14:17 Alpinist we do a normal "ls -l" on the directory which is mounted
14:17 Alpinist " ls -l /mnt/pve/clusterfs/images/*"
14:18 Alpinist for example: http://fpaste.org/YYcg/
14:18 glusterbot Title: Viewing Paste #244394 (at fpaste.org)
14:18 pdurbin "A cluster is a group of linked computers" according to http://gluster.org/community/documentat​ion/index.php/Gluster_3.2:_Terminology but do you really call it a gluster "computer"? what's the term, please, for the server running gluster that's part of a cluster?
14:19 dialt0ne joined #gluster
14:21 dialt0ne thanks for Ubuntu.README ... was wondering what the status of ubuntu packages were when download.gluster.com dissolved
14:21 dialt0ne my question for today is... can you replace a brick in-place?
14:21 neofob joined #gluster
14:22 dialt0ne i am working on a setup like this one http://community.gluster.org/q/amazon-autoscaling-and-glusterfs/
14:22 glusterbot Title: Question: Amazon AutoScaling and GlusterFS (at community.gluster.org)
14:22 JoeJulian ~glossary | pdurbin
14:22 glusterbot pdurbin: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
14:23 jdarcy I don't see any place where we'll generate an ELOOP error ourselves, though we probably should for linkfile cycles.
14:23 pdurbin JoeJulian: thanks! master?!? i thought gluster was master-less
14:23 JoeJulian dialt0ne: Interesting question. I'd test it first but theoretically you could do replace-brick ... commit force
14:23 JoeJulian That's a geo-replication thing.
14:24 jdarcy pdurbin: GlusterFS within a site is masterless, but geosync is master/slave (which I think sucks).
14:24 dialt0ne and putting it through the paces... i've killed an ec2 instance and recovered. i'm now killing individual ebs volumes (there's 1 brick per volume) and i'm wondering how to recover once "gluster volume status" shows that the brick is offline
14:24 ekuric joined #gluster
14:24 JoeJulian gluster volume start $volname force
14:24 JoeJulian or restart glusterd
14:24 pdurbin @lucky gluster geosync
14:24 glusterbot pdurbin: http://gluster.org/community/documentation/index.php/Gluster_3.2:_Checking_Geo-replication_Minimum_Requirements
14:25 jdarcy Alpinist: I can think of two possibilities.  One is that somehow the rebalance changed the context in which the links are interpreted (not the links themselves) in a way that creates a loop.
14:25 dialt0ne hm
14:25 * dialt0ne gives it a try
14:25 jdarcy Alpinist: The other is that we created a loop in the hidden .glusterfs directory (where we create them because we can't use hardlinks to directories).
14:25 JoeJulian Alpinist: Are you mounting NFS or fuse?
14:26 jdarcy Alpinist: Both possibilities seem very remote to me, but when likely answers are ruled out then unlikely ones must be considered.
14:26 Alpinist JoeJulian, We are mounting with NFS
14:26 wushudoin joined #gluster
14:27 Alpinist jdarcy, the problem existed before the rebalance
14:27 JoeJulian jdarcy: Any chance it could be something like this: https://bugzilla.redhat.com/show_bug.cgi?id=739222
14:27 glusterbot Bug 739222: urgent, unspecified, rc, nfs-maint, ASSIGNED , Duplicate files on NFS shares
14:27 pdurbin jdarcy: thanks. i do want geosync, i guess. maybe i could do it manually, as well? but geosync sounds like a nice feature, based on this: http://gluster.org/community/documentation/index.php/Gluster_3.2:_Exploring_Geo-replication_Deployment_Scenarios
14:28 Alpinist the idea of hardlink in .glusterfs might be posible, i'll check that
14:28 jdarcy JoeJulian: I don't see how it could be.  I'm a bit surprised that nobody has identified that as the ext4 d_offset issue yet, since it seems highly likely to me.
14:29 jdarcy pdurbin: IIRC, bidirectional geosync is supposed to be in GlusterFS 3.4 and RHS 2.1
14:29 nocturn jdarcy: isn't .glusterfs new in 3.3, we are still on 3.2 (the symlink issue is on 3.2)
14:29 jdarcy nocturn: OK, so it's definitely not that.
14:30 nocturn jdarcy: we're also  not using automounter which is the only reference we could find to the symlink issue  on google
14:30 jdarcy I guess what I'd do is find a file that seems to be affected, then manually try to trace all of the symlinks and linkfiles that might be involved to see if there really is a loop.
14:30 Alpinist proxmox might be using automounter, we'll check that
14:31 jdarcy If such a loop does exist, then a dump of the symlinks/linkfiles might suggest how it came to be.
14:31 * jdarcy thought there were other problems with proxmox.
14:31 pdurbin jdarcy: ok. hmm, we're using gluster 3.3 right now and i was planning on sticking with that
14:32 dialt0ne hm. no love on replace-brick ... commit force
14:33 dialt0ne i detached the underlying ebs volume to simulate failure
14:33 dialt0ne i attached a brand new one
14:33 dialt0ne and mounted in the same spot
14:33 johnmark pdurbin: there was a nasty memory leak tha twas fixed in 3.3.1, just FYI
14:33 Alpinist proxmox is not using automounter
14:33 dialt0ne i know if i mount in a different mount point then replace-brick works
14:33 ndevos joined #gluster
14:33 dialt0ne but then the setup isn't consistent :-\
14:34 18VAADK4P joined #gluster
14:34 jdarcy dialt0ne: If you mount a new volume in the same place, you shouldn't need replace-brick.  Just self-heal.
14:35 dialt0ne heal .. info says "Status: Brick is Not connected"
14:35 JoeJulian jdarcy: I wonder, though, if an in-place replace-brick would work for converting bricks defined with IPs to hostnames.
14:35 jdarcy JoeJulian: Maybe.
14:35 dialt0ne volume status says the brick is offline
14:35 jdarcy dialt0ne: Did you try "gluster volume start $volname force"?
14:37 Alpinist jdarcy, JoeJulian another thing that can help finding the cause: on all 6 servers glusterfs is using more than 100% CPU, even if there isn't much load on the cluster
14:37 dialt0ne ah hm. i missed that in the scrolling... yeah, that looks ok
14:37 puebele joined #gluster
14:38 pdurbin johnmark: thanks for the heads up
14:39 dialt0ne yeah "sudo gluster volume start testvol0 force; sudo gluster volume heal testvol0 full"
14:39 pdurbin johnmark: what do you think of my whiteboarding? :) http://irclog.perlgeek.de/crimsonfu/2012-10-18#i_6075404
14:39 glusterbot Title: IRC log for #crimsonfu, 2012-10-18 (at irclog.perlgeek.de)
14:40 JoeJulian pdurbin: I can see why you type for a living.
14:40 dialt0ne which underlying filesystem is the preferred one? ext4? xfs? i've seen both mentioned
14:40 pdurbin :)
14:40 dialt0ne i'm testing now with ext4, but not opposed to xfs
14:41 pdurbin JoeJulian: don't tempt me to spend all day making ascii art with App::Asciio :)
14:41 vipkilla joined #gluster
14:41 vipkilla ahh now im in the right channel
14:41 vipkilla someone should redirect #glusterfs to here
14:42 vipkilla anyway... i'm looking for an outline of how glusterfs works.... i have setup a cluster and 1 client, i took down one server and the files were still on the client, how is this possible?
14:43 JoeJulian Assuming you made a replicated volume, you were reading from (and writing to) the one that was still up.
14:43 atrius_away joined #gluster
14:44 vipkilla yes it was replicated.... but when i did ls -altr /var/lib/volume_name on the one that was still up, it said ls: cannot access /var/lib/volume_name/: Invalid argument
14:44 vipkilla also, how does the client know to use the replicated one?
14:45 JoeJulian It's part of the volume definition. The client is actually always connected to all the bricks in the volume.
14:46 vipkilla woah that's sweet... so if i bring up the one that was down, it will auto-sync the files?
14:46 JoeJulian So if a brick goes away, it continues happily along, marking the one it's using as "dirty" so when the missing brick returns, it'll self-heal.
14:47 JoeJulian ~split-brain | vipkilla
14:47 glusterbot vipkilla: (#1) learn how to cause split-brain here: http://goo.gl/nywzC, or (#2) To heal split-brain in 3.3, see http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/ .
14:47 JoeJulian #1's a good "don't do this" article.
14:47 vipkilla i'm setting up a second client to see if this is setup correctly
14:48 dialt0ne_ joined #gluster
14:48 ekuric joined #gluster
14:50 vipkilla see i dont think i need to worry about split brain since only 1 client will write/delete, while other clients will write files but each file will have a unique filename
14:51 vipkilla am i correct in this assumption?
14:51 JoeJulian Network partitions, or partitions in time can cause split brain as well.
14:51 jdarcy joined #gluster
14:51 vipkilla hmmm
14:54 JoeJulian jdarcy: did you spot the BS in that email thread you were replying on?
14:54 bfoster joined #gluster
14:54 JoeJulian jdarcy: The gluster-users one regarding stripe?
14:55 jdarcy JoeJulian: You mean the last couple of emails I sent?  ;)
14:55 JoeJulian yeah... "As a test I created as large of a volume as I could using ramdisks."
14:57 jdarcy JoeJulian: I don't think that's an invalid test.  Factoring out one thing (disk time) to concentrate on another (networking) is a time honored benchmarking technique.
14:57 jdarcy Heck, I've done that one myself to test AFR changes.
14:58 JoeJulian Really? Which ramdisk now has xattr support?
14:58 JoeJulian I'm glad I didn't call him out then. Last time I looked there wasn't one.
14:59 jdarcy JoeJulian: I had to use loopback over tmpfs, because tmpfs itself doesn't support xattrs.
15:00 JoeJulian Ah, sure... ok.
15:00 jdarcy In fact, I have a couple of those running now.  Very handy for testing ENOSPC problems.  :)
15:03 vipkilla JoeJulian: as long as the network connection stays up between the nodes then that could not happen right?
15:06 vipkilla_ joined #gluster
15:08 Alpinist I found in the brick logs http://fpaste.org/gCM8/ can that explain the problems?
15:08 glusterbot Title: Viewing Paste #244429 (at fpaste.org)
15:08 JoeJulian If you understand what split-brain is and how it's created, you can best determine your risk level. Since these are plain files on readable filesystems, at least split-brain is fixable should it happen.
15:09 sashko joined #gluster
15:12 vipkilla joined #gluster
15:12 vipkilla how long does it take for gfs to "self" heal?
15:12 vipkilla i powered one server off, added a file from client. powered on the other server, then powered off the other and the new file was not there...
15:13 vipkilla also, i have two clients, one sees the contents of a file and the other doesn't. why would this be?
15:13 JoeJulian The self-heal daemon wakes every 10 minutes. You can trigger it sooner with "gluster volume heal $volname"
15:14 vipkilla JoeJulian: it would be nice if it did that on startup
15:14 vipkilla root@gfs02:~# gluster volume heal ampswitch
15:14 vipkilla unrecognized word: heal (position 1)
15:14 vipkilla what does that mean?
15:14 clag_ JoeJulian: self-heal daemon , in 3.3 ? or 3.2 has it ?
15:14 vipkilla ampswitch is name of the volume
15:15 JoeJulian Ah, forget all that. It means you're running an older version.
15:15 JoeJulian clag_: 3.3
15:15 JoeJulian ~repair | vipkilla
15:15 glusterbot vipkilla: http://www.gluster.org/community/documentation/index.php/Gluster_3.1:_Triggering_Self-Heal_on_Replicate
15:16 JoeJulian @repos
15:16 glusterbot JoeJulian: I do not know about 'repos', but I do know about these similar topics: 'repository', 'yum repository'
15:16 JoeJulian @repo
15:16 glusterbot JoeJulian: I do not know about 'repo', but I do know about these similar topics: 'repository', 'yum repo', 'yum repository', 'git repo', 'ppa repo', 'yum33 repo', 'yum3.3 repo'
15:17 vipkilla find /var/lib/ampswitch -print0 | xargs --null stat >/dev/null     <-------- did nothing
15:18 vipkilla client1 sees a file as blank, while client2 sees "TEST" in the file
15:18 vipkilla how could this be?
15:19 rwheeler joined #gluster
15:19 clag_ vipkilla: can you try find /var/lib/ampswitch -noleaf -print0 | xargs --null head -n0 >/dev/null
15:19 vipkilla clag_: on the clients right?
15:20 clag_ vipkilla: yes
15:20 vipkilla does nothing
15:20 Bullardo joined #gluster
15:20 clag_ we've found sometime head better stat...
15:22 vipkilla so what's the deal here? why is this happening?
15:22 zoldar I have 2 geo-replication sessions in "OK" state. I'm able to start/stop one of them without problems. The other one fails to stop with a message that session is not active and on attempt to call start signals that session is already started. How to clean it up?
15:25 JoeJulian vipkilla: Maybe you don't really have the client mounted? Sorry, gotta run and catch a bus into Seattle. Be back later.
15:25 vipkilla JoeJulian: but it sees the other files...
15:27 clag_ vipkilla: client1 sees a file as blank, while client2 sees "TEST" in the file: like a split brain... does the file exist on each brick?
15:27 vipkilla how can i tell that?
15:29 clag_ look in the two brick directories and see the files. Remove the bad file... then do the stat/head find.
15:29 vipkilla clag_: on the server right? the file exists and is correct on both servers
15:31 clag_ strange...
15:32 vipkilla so if i rewrite the file it's correct.... hmmmm
15:33 vipkilla i have a question about fstab now because we want the gfs  to mount on boot, correct?
15:33 vipkilla root@gfs01:~# mount        ----> gfs01:/ampswitch on /var/lib/ampswitch type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)
15:33 vipkilla how does that translate to fstab entry?
15:35 vipkilla gfs01:/ampswitch /var/lib/ampswitch gfs (rw,allow_other,default_permissions,max_read=131072)                    -------------> gives me "mount: unknown filesystem type 'gfs'"
15:37 clag_ mount type : glusterfs
15:37 vipkilla thanks clag_ that did it
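For the record, an fstab line matching the mount output vipkilla pasted above, with the filesystem type spelled glusterfs rather than gfs; the defaults,_netdev options are an assumption, not from the log:

    gfs01:/ampswitch  /var/lib/ampswitch  glusterfs  defaults,_netdev  0  0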
15:38 ika2810 joined #gluster
15:39 vipkilla Mounting local filesystems...fusermount: fuse device not found, try 'modprobe fuse' first
15:39 vipkilla fusermount: fuse device not found, try 'modprobe fuse' first
15:39 vipkilla Mount failed. Please check the log file for more details.
15:39 vipkilla failed.
15:39 vipkilla that's what it says on boot
15:41 vipkilla but "mount -a" works.... what's up with that
15:44 geek65535 joined #gluster
15:48 vipkilla sorry, here's the pb: http://paste2.org/p/2350756
15:48 glusterbot Title: Paste2 - Viewing Paste 2350756 (at paste2.org)
15:50 ekuric left #gluster
15:55 ctria joined #gluster
15:56 csaba joined #gluster
16:10 kkeithley_wfh omg, it worked
16:13 VisionNL joined #gluster
16:14 VisionNL 'ello
16:14 VisionNL did anybody ever experienced glusterfs process at 2GB of memory allocation?
16:14 VisionNL 5 S root     25950     1  1  76   0 - 2000000 futex_ 7867012 1 Oct03 04:49:45 /opt/glusterfs/3.2.5/sbin/glusterfs --log-level=INFO --volfile-id=/atlas3 --volfile-server=agri /glusterfs/atlas3
16:14 VisionNL The effect is that all client interactions on the mount point get stuck in a lock
16:15 VisionNL it's also interesting that it is *exactly* 2GB of memory
16:16 VisionNL The only way to unclog the system is to kill the process with -9. It only affects one client node
16:16 lkoranda_ joined #gluster
16:16 VisionNL The rest of the clients are fine.
16:16 VisionNL We are using rpm: glusterfs-core-3.2.5-1.x86_64, glusterfs-fuse-3.2.5-1.x86_64. source: glusterfs.org
16:17 VisionNL any help in how to debug this would be helpful :-)
16:28 dialt0ne__ joined #gluster
16:28 kkeithley_wfh okay, glusterfs-3.3.1 .debs for Debian squeeze are on download.gluster.org, along with the ones that are already there for wheezy
16:28 kkeithley_wfh johnmark: ^^^
16:30 semiosis kkeithley_wfh: did you rename wheezy.repo to something less wheezy?  i'm pretty sure packages for both distros can live in the same repo
16:31 kkeithley_wfh can they?
16:32 kkeithley_wfh wouldn't they all land in .../pool/main/g/glusterfs/ ?
16:33 semiosis ah, right... they'd have to be named differently, like glusterfs_3.3.1-1squeeze and glusterfs_3.3.1-1wheezy.  i'm sure there's a convention for how to do that, but would have to look it up
16:33 semiosis just guessing here
16:34 kkeithley_wfh exactly. They're all glusterfs-foo_3.3.1-1_amd64.deb.
16:37 kkeithley_wfh If you're interested in knowing the two things I had to do different from your recipe for building on wheezy....
16:37 johnmark kkeithley_wfh: w00t. thank you!
16:38 semiosis kkeithley_wfh: sure i'm interested
16:38 kkeithley_wfh ;-)
16:39 geek65535 left #gluster
16:39 kkeithley_wfh first, to get around that error I mentioned yesterday, I had to add DEB_PYTHON_SYSTEM=pysupport to the top of .../debian/rules
16:39 kkeithley_wfh or pycentral, either one seems to work.
16:41 kkeithley_wfh second was to add --extrapackages "python-support python-central" (the quotes are important) to the pbuilder --create
16:41 kkeithley_wfh you could probably just use python-support or python-central depending on what you added o .../debian/rules
16:44 kkeithley_wfh s/added o/added to/
16:44 glusterbot What kkeithley_wfh meant to say was: you could probably just use python-support or python-central depending on what you added to .../debian/rules
16:44 semiosis did you get my message yesterday afternoon about simply removing the include from rules?
16:45 semiosis do we even know what that python stuff is doing?  anything?
16:46 kkeithley_wfh nope, I had to leave. I was going to say I'd see it in the scrollback but I heard we had a power hit at the office.
16:46 kkeithley_wfh so you think if I just take out the include from rules that it'll work?
16:46 semiosis (replay) kkeithley: removing the python-module.mk include from the rules file let debuild proceed, but i dont know at what cost
16:47 kkeithley_wfh I'll give it a try
16:48 kkeithley_wfh I need coffee, biab
16:49 semiosis :)
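A sketch of the two squeeze build tweaks kkeithley_wfh lists above; beyond the exact values he quotes, the pbuilder invocation framing (and the --distribution flag) is an assumption, and semiosis' alternative of simply dropping the python-module.mk include from debian/rules may work as well:

    # 1. at the top of debian/rules
    DEB_PYTHON_SYSTEM=pysupport        # or pycentral

    # 2. when creating the pbuilder chroot (the quotes are important)
    sudo pbuilder --create --distribution squeeze \
        --extrapackages "python-support python-central"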
16:49 hagarth joined #gluster
16:50 vimal joined #gluster
17:03 vipkilla is anybody running glusterfs 3.2.7 on debian squeeze? I can't get the fs to mount on boot in the fstab
17:04 dialt0ne__ on 3.3, adding "fetch-attempts=10" helped on 3.3.1
17:05 dialt0ne__ i also added "backupvolfile-server"
17:05 vipkilla getting this error: http://paste2.org/p/2350756
17:05 glusterbot Title: Paste2 - Viewing Paste 2350756 (at paste2.org)
17:20 johnmark kkeithley_wfh: ping
17:20 johnmark what distro did you use for your ARM stuff?
17:20 johnmark kkeithley_wfh: I just got 2 mk802 mini pcs and am itching to try it out
17:23 JoeJulian Dammit... volume migration has stopped progressing.
17:24 Alpinist doesn't sound good
17:24 dialt0ne left #gluster
17:27 Alpinist joined #gluster
17:28 JoeJulian jdarcy, hagarth, a2: Why could this take over 30 minutes? http://www.fpaste.org/FGhg/
17:28 glusterbot Title: Viewing Paste #244508 (at www.fpaste.org)
17:30 JoeJulian Yeah, and it's just plain stuck there. :(
17:31 JoeJulian uuid_unpack (in=0x11d75c0 "\v?.~\335qC?\214\340Nσ\365F\031", uu=0x7fff87669f00) at ../../contrib/uuid/unpack.c:44
17:31 JoeJulian 44        tmp = (tmp << 8) | *ptr++;
17:31 vipkilla im becoming displeased with gluster.... anybody recommend a better distributed filesystem for debian squeeze?
17:32 JoeJulian This is TOTALLY the right place to be asking THAT question...
17:32 samppah :)
17:32 vipkilla lol
17:32 samppah i'd probably take a look at Ceph if something
17:33 vipkilla what about hadoop?
17:33 JoeJulian Have you been burned by drbd yet?
17:33 vipkilla JoeJulian: no
17:33 vipkilla glusterfs looks promising but i'm having issues with it
17:33 samppah vipkilla: sorry, i haven't been following, but what are the issues with glusterfs?
17:33 JoeJulian Oh! You should do that. It gives you a new appreciation for all of this.
17:34 * jdarcy wakes up, looks.
17:34 vipkilla samppah: can't mount the gfs on boot in debian squeeze
17:34 vipkilla but if i run 'mount -a'  it mounts, the fstab is correct
17:35 vipkilla here is error on boot -----------> http://paste2.org/p/2350756
17:35 glusterbot Title: Paste2 - Viewing Paste 2350756 (at paste2.org)
17:35 jdarcy So did you try 'modprobe fuse' like it says, or have fuse built in to your kernel?
17:36 vipkilla like i said, 'mount -a' works without doing modprobe
17:36 jdarcy JoeJulian: Stuck in uuid_unpack?
17:36 JoeJulian yep
17:37 vipkilla my question is, does anybody run glusterfs on debian squeeze if so how?
17:37 JoeJulian jdarcy: 100% cpu too.
17:37 jdarcy vipkilla: So apparently fuse isn't loaded when it tries to mount gluster, but is later.  That's a Debian-specific sequencing problem that I think I've seen semiosis and others here discuss before.
17:37 jdarcy JoeJulian: Still looking.
17:37 JoeJulian @ppa
17:37 glusterbot JoeJulian: semiosis' Launchpad PPAs have 32 & 64-bit binary packages of the latest Glusterfs for Ubuntu Lucid - Precise. http://goo.gl/DzomL (for 3.1.x) and http://goo.gl/TNN6N (for 3.2.x) and http://goo.gl/TISkP (for 3.3.x). See also @upstart
17:38 hagarth JoeJulian: do the input parameters to uuid_unparse look sane?
17:38 JoeJulian Also apparently just now there's new packages at download.gluster.org
17:38 JoeJulian hagarth: Not sure how I would know. :D
17:40 jdarcy JoeJulian: Just looked at uuid_unpack, it doesn't even have any loops.  Not sure how it could get stuck like that as long as it's being scheduled.
17:40 jdarcy JoeJulian: If you reattach at different times, is it the same UUID, or different UUIDs each time?
17:40 vipkilla JoeJulian: should i use CentOS then?
17:41 jdarcy vipkilla: Not necessarily.  I'm pretty sure there's a solution having to do with extra params in fstab and/or upstart magic, but I don't know enough about the Debian init system personally to provide an answer.
17:41 vipkilla jdarcy: what do you use?
17:41 hagarth JoeJulian: x /16b  in
17:42 jdarcy vipkilla: Like everyone else at Red Hat, mostly Fedora.  ;)
17:42 hagarth in frame 0 of thread 1
17:42 vipkilla ahhh cool.
17:42 vipkilla jdarcy: get me a job?
17:42 jdarcy vipkilla: That's for desktop and development, for QA or production obviously RHEL.
17:43 * jdarcy barely got *himself* a job.
17:43 vipkilla jdarcy: does redhat use gluster?
17:44 jdarcy vipkilla: You mean internally?  Couldn't say.
17:45 johnmark vipkilla: yes
17:45 johnmark vipkilla: most obvious use is on fedorahosted.org
17:46 jdarcy JoeJulian, hagarth: I wonder if it's repeated calls to uuid_compare, not a single call.
17:46 JoeJulian jdarcy, hagarth: Yep, it's looping through the same uuid from the list_for_each_entry in __inode_find
17:46 johnmark vipkilla: if you're an expert on packaging, and particularly wiht debian packaging, I know of a job that will start in a month or two
17:47 johnmark kkeithley_wfh: ping
17:47 jdarcy johnmark: OK, so it's the same UUID.  Is it the same *call*?  Try setting a breakpoint on uuid_compare, then continue and see if it gets hit again.
17:47 y4m4 joined #gluster
17:47 johnmark jdarcy: I think that was meant for JoeJulian
17:47 johnmark :)
17:48 * jdarcy is wondering about loops in the inode list.
17:48 vipkilla johnmark: what's this job?
17:48 johnmark jdarcy: eddies in the space-time continuum
17:48 johnmark jdarcy: oh, is he???
17:48 JoeJulian right, cont breaks there.
17:48 jdarcy OK, same UUID but different calls.  Looks grim.
17:48 johnmark vipkilla: packaging guru for all of Red Hat's upstream projects
17:49 johnmark including glusterfs, ovirt, aeolus, etc. etc.
17:49 johnmark jboss-*
17:49 vipkilla sorry i dont think qualified though
17:49 vipkilla i'm a voip guy
17:49 johnmark vipkilla: bummer :(  w
17:49 johnmark ah, ok
17:49 jdarcy johnmark: Aieeee!  Do not speak the language of Mordor here.
17:50 vipkilla i'd like to use a distributed fs for this new voip cluster i'm going to setup
17:50 vipkilla i may just load centos and try gluster on it since jdarcy says he uses it
17:50 JoeJulian So would that mean that it's likely that table->inode_hash[hash] links back on itself?
17:51 jdarcy JoeJulian: That's what I'm worried about.  Shouldn't happen, obviously, but the symptoms point that way.  How strong is your gdb-fu?
17:52 johnmark jdarcy: ah hahahaha
17:52 JoeJulian pretty damned weak but I follow directions well.
17:52 johnmark JoeJulian: you do as you're told?
17:52 * johnmark files that away for future use
17:53 ctria joined #gluster
17:54 jdarcy JoeJulian: OK, let's walk through this off-channel.
17:55 ctria joined #gluster
18:02 andreask joined #gluster
18:05 gbrand_ joined #gluster
18:05 hagarth joined #gluster
18:07 sashko joined #gluster
18:11 wiqd the Debian Squeeze .debs on download.gluster.org are not working
18:12 wiqd they depend on libssl1.0.0 which is only available in wheezy (testing), not in squeeze (stable)
18:14 wiqd http://paste2.org/p/2351170
18:14 glusterbot Title: Paste2 - Viewing Paste 2351170 (at paste2.org)
18:14 johnmark wiqd: ah, ok
18:14 johnmark wiqd: we had that same issue before
18:14 wiqd http://packages.debian.org/search?keywords=libssl1.0.0
18:14 glusterbot Title: Debian -- Package Search Results -- libssl1.0.0 (at packages.debian.org)
18:14 johnmark wiqd: you'll have to ask kkeithley_wfh
18:14 wiqd 1.0.1c-4 in wheezy
18:15 wiqd johnmark: thanks, will do
18:16 kkeithley_wfh I built the squeeze .debs on squeeze with libssl0.9.8/squeeze uptodate 0.9.8o-4squeeze13
18:17 wiqd Depends: libc6 (>= 2.8), libibverbs1 (>= 1.1.2), libssl1.0.0 (>= 1.0.0), libssl0.9.8
18:17 wiqd (from glusterfs-common)
18:18 kkeithley_wfh oh, crap, pbuild's aptcache ahs ./aptcache/libssl1.0.0_1.0.1c-4_amd64.deb
18:18 kkeithley_wfh s/ahs/has/
18:18 glusterbot What kkeithley_wfh meant to say was: oh, crap, pbuild's aptcache has ./aptcache/libssl1.0.0_1.0.1c-4_amd64.deb
18:18 kkeithley_wfh Let me fix that, hang on
18:18 kkeithley_wfh why did pbuild do that.
18:18 kkeithley_wfh ?
18:19 wiqd "There are only two hard things in Computer Science: cache invalidation and naming things. -- Phil Karlton" :P
18:20 wiqd no rush, just thought i'd test them out for you
18:21 kkeithley_wfh that's good, I'm glad someone's actually using/testing after all the work it took to build them
18:26 kkeithley_wfh how do I get apt-get to really download the file when it's already installed? apt-get install --download-only just tells me I've already got the latest installed.
18:31 johnmark kkeithley_wfh: should be in /var/apt/ somewhere
18:32 kkeithley_wfh the libssl-dev0.9.8 .deb is, but not the libssl0.9.8 .deb.
18:51 adechiaro joined #gluster
18:56 wN joined #gluster
18:58 semiosis vipkilla: looks like you'll need to add 'fuse' on a line by itself to /etc/modules... to be honest though never really used glusterfs on debian squeeze.  i use ubuntu, which has fuse compiled in.
18:59 vipkilla thanks semiosis
18:59 vipkilla i'll give it a try
19:00 semiosis yw, good luck & let us know how it goes
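A minimal sketch of semiosis' squeeze fix, plus the 3.3.x mount options dialt0ne mentioned earlier today (applying them to this particular 3.2.7 setup is an assumption):

    echo fuse >> /etc/modules    # load fuse at boot so the fstab mount can succeed
    # on 3.3.x clients, mount.glusterfs also accepts options such as
    #   fetch-attempts=10,backupvolfile-server=<other-server>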
19:02 Staples85 joined #gluster
19:05 vipkilla semiosis: you use ubuntu in enterprise server environment?
19:07 semiosis i use it in my server environment, yes... but what do you mean "enterprise" ?
19:07 semiosis ubuntu all the things :)
19:09 stickyboy joined #gluster
19:10 semiosis kkeithley_wfh: you will have to bump the package version number, to 3.3.1-2 for example, if you're building new ones
19:11 hagarth1 joined #gluster
19:13 semiosis brb
19:14 semiosis joined #gluster
19:19 semiosis back
19:24 H__ rebalance layout at 1335133, chewing forward for 6 days now
19:26 jdarcy Any moment now.
19:30 vipkilla semiosis: screw it i'll just use ubuntu then :)
19:31 kkeithley_wfh semiosis: why?  The few people who might have dl-ed those, can't they just remove and reinstall? If I regen the repo from scratch?
19:32 VisionNL did anybody try to fill up Gluster with 100k - 1M files with roughly 50TB of data and do a find . on one of the client nodes?
19:33 H__ I have not, and i fear its 'performance'
19:33 Technicool joined #gluster
19:33 VisionNL nope, performance is excellent and stays that way
19:34 VisionNL we have 8 boxes, each with 10Gbit per node to the core and to a cluster. The glusterfs scales pretty well
19:34 VisionNL but...
19:34 kkeithley_wfh pbuilder insists on populating the chroot env with libssl1.0.0
19:34 VisionNL ... one of the clients could fill up the cache locally
19:34 VisionNL and then the fuse mounted process hangs and blocks in a lock at exactly 2GB of memory size
19:35 VisionNL the glusterfs process to be exact
19:35 stickyboy VisionNL: Nice setup.  I love hearing about peoples' setups.
19:35 JoeJulian Weren't you talkint to Vijay about this?
19:35 JoeJulian s/int/ing/
19:35 glusterbot What JoeJulian meant to say was: Weren't you talking to Vijay about this?
19:36 VisionNL stickyboy: :-) we use it at www.nikhef.nl to keep performance when our physicists are bunch the storage
19:36 VisionNL bunch = punching
19:37 JoeJulian hmm, maybe not.
19:37 * pdurbin swoons. glusterbot, you have so many features. you put my bot to shame
19:37 VisionNL JoeJulian: other then a chat/talk on twitter with somebody else, no
19:37 VisionNL I did mention it here earlier today, but I was out of luck (no response) :-)
19:38 JoeJulian VisionNL: So I see you're using 3.2.5. 3.2's up to .7. Might be worth checking to see if that's already fixed.
19:38 VisionNL It's a Red Hat system, rpm: glusterfs-core-3.2.5-1.x86_64, glusterfs-fuse-3.2.5-1.x86_64.
19:38 JoeJulian Also I presume you've checked your client log for errors.
19:38 VisionNL source: glusterfs.org
19:39 VisionNL ps output: 5 S root     25950     1  1  76   0 - 2000000 futex_ 7867012 1 Oct03 04:49:45 /opt/glusterfs/3.2.5/sbin/glusterfs --log-level=INFO --volfile-id=/atlas3 --volfile-server=agri /glusterfs/atlas3
19:39 JoeJulian @latest
19:39 glusterbot JoeJulian: The latest version is available at http://goo.gl/TI8hM and http://goo.gl/8OTin See also @yum repo or @yum3.3 repo or @ppa repo
19:39 VisionNL it's funny that it is at exactly 2GB
19:39 JoeJulian Is that broken....
19:39 JoeJulian yep
19:39 JoeJulian @forget latest
19:39 glusterbot JoeJulian: The operation succeeded.
19:40 VisionNL JoeJulian: the client logs show nothing special. It just locked up at this state. The gluster cluster was fine, btw. Only one node locked up
19:41 steven joined #gluster
19:41 VisionNL JoeJulian: the glusterbot links go to an NginX 404
19:41 steven howdy ya'll.  Can someone point me to some clear documentation on how to bring a brick back after a drive failure?
19:41 steven gluster 3.3.0
19:42 JoeJulian @learn latest as The latest version is available at http://download.gluster.org/pub/gluster/glusterfs/LATEST/ . There is a .repo file for yum or see @ppa for ubuntu.
19:42 glusterbot JoeJulian: The operation succeeded.
19:43 VisionNL JoeJulian: thank you. Do you recall if this is a known bug? the cache fill-up?
19:43 JoeJulian Sorry, nope.
19:44 VisionNL JoeJulian: ok. If it was, it would be easier to sell the upgrade from 3.2.5 to 3.3 to our production services.
19:45 steven I replaced the drive, but disk IO never came back on it
19:45 steven volume status shows it up and a PID, but it doesnt appear to actually be working
19:47 kkeithley_wfh hmm, looks like pbuilder is also populating the chroot env with gcc-4.7.x which has its own libssl.so.1.0.0 in /usr/lib/x86_64; what a fscking maze of crap
19:48 jdarcy VisionNL: Is this a 32-bit system?
19:48 JoeJulian jdarcy: Well, is it good news that I can repro? ;)
19:49 jdarcy JoeJulian: I don't know.  Which version, and what architecture?
19:50 JoeJulian jdarcy: No, no... repro my replace-brick issue.
19:50 jdarcy JoeJulian: Oh, that.  Well, short term it's bad news, long term might be good.
19:51 davdunc joined #gluster
19:51 davdunc joined #gluster
19:54 VisionNL jdarcy: no, 64bit
19:55 VisionNL jdarcy: granted, I could double check which libraries are installed. Most likely it's 64bit.
19:57 JoeJulian VisionNL: git log v3.2.5 v3.2.7 | grep -c leak = 129
19:59 JoeJulian Granted, sometime's it's thinks like "to make it easier to prevent leaks" or it's mentioned more than once in the same commit log.
19:59 JoeJulian thinks? really fingers? That's not what I heard in my head.
20:00 * VisionNL is running towards the git and forwards the upstream
20:04 tc00per Any thoughts as to why the FUSE client in CentOS6 might operate 'faster' than the one in CentOS5 for the same test case?
20:09 semiosis kkeithley_wfh: bumping the package version number is the right way.  otherwise people would have to remove the package, and clear their apt cache, and reinstall, if they ever found out they needed to in the first place
20:10 Alpha64 joined #gluster
20:10 semiosis kkeithley_wfh: are you running pbuilder on a squeeze vm or trying to cross build for squeeze from wheezy?
20:11 VisionNL JoeJulian: I'm having trouble matching the funky problem we have with the leak fixes. Sometimes it is more or less related when combined with a FUSE or client-side issue.
20:11 VisionNL JoeJulian: but, it does motivate to upgrade :-)
20:12 VisionNL tc00per: define 'faster'
20:12 semiosis tc00per: newer kernel
20:13 semiosis tc00per: linux kernel that is
20:13 semiosis VisionNL: doing the same thing in less time? :P
20:14 VisionNL tc00per: could be because better context switching is implemented in the newer kernels. We've seen that in OpenVPN performance tests too
20:15 tc00per Yes... still testing to be sure but CentOS5 looks to be ~20% slower on same write operation to same glusterfs volume.
20:15 VisionNL tc00per: protip: mount with root privs on CentOS5 and CentOS6 and repeat the test to bypass the obvious difference
20:15 tc00per Mounted as root on both already.
20:16 VisionNL tc00per: ah :-) Then it's probably something else that is just 'better' ;-)
20:16 tc00per Still need to rule out drive/interface speed.... different hardware possible (ie. SATA version).
20:17 VisionNL tc00per: ah, different hardware? Complicated
20:17 tc00per However, with same hardware this might be an interesting argument for us to upgrade our compute cluster OS.
20:18 VisionNL BTW: can we mix and match a 3.2.5 Gluster server with 3.3 Gluster clients?
20:18 VisionNL tc00per: if I'm not mistaken we are using RHEL 6 servers and Scientific Linux clients.
20:18 VisionNL SL5
20:18 JoeJulian VisionNL: No, it'll require downtime.
20:19 VisionNL JoeJulian: darn. We'll make a parallel setup to certify the new version. Cant stop science :-)
20:19 semiosis tc00per: what are the kernel versions on your cent5 & cent6 systems?
20:21 tc00per latest on both... CentOS 5.8 2.6.18-308.16.1.el5, CentOS 6.3 2.6.32-279.11.1.el6
20:22 semiosis wow i had no idea cent6.3 used such old kernels!
20:23 semiosis someone should tell them linux 3.6 was just released
20:24 hchiramm_ joined #gluster
20:24 tc00per Not sure anyone at Redhat is listening... :)
20:24 johnmark VisionNL: nope. can never stop science
20:24 johnmark tc00per: heh he
20:25 tryggvil joined #gluster
20:28 JoeJulian semiosis: That's what Fedora's for: testing those new kernels.
20:29 semiosis i dont always test new kernels, but when i do, it's on production servers too.
20:33 steven lets say you have 4 servers with 12 drives each, would you run 48 bricks with XFS, or some other setup? (mail storage)
20:34 JoeJulian no
20:34 steven It seems that 48 bricks slows it down to 1/3 of the speed of just running 3
20:34 VisionNL sudden meme is sudden
20:34 JoeJulian No, I wouldn't run 48 bricks or any other setup for mail storage. ;)
20:35 JoeJulian mail storage is broken. :(
20:35 steven test showed ~300IOPS with filebench for 3 bricks, and only like ~130 with all the bricks
20:35 JoeJulian Someone needs to reinvent it.
20:35 semiosis steven: how many instances of filebench?  how many machines was it running on?
20:35 steven so you're saying gluster won't work for a mail storage backend?
20:35 steven semiosis I was just running varmail with defaults
20:36 steven one client server for now
20:36 steven eventually 2 load balanced postfix mail servers with gluster mount was the plan
20:37 JoeJulian Oh, it'll work. But performance is not going to be what even I would like.
20:39 JoeJulian @Joe's performance metric
20:39 glusterbot JoeJulian: nobody complains.
20:39 semiosis ha
20:39 VisionNL lol
20:41 jdarcy semiosis: You are aware that Debian stable is also on 2.6.32, without even backports?
20:42 JoeJulian Pay no attention to the kernel behind the curtain!
20:42 semiosis not specifically, but i am aware that debian stable is pretty old
20:42 semiosis like i said, i use ubuntu :)
20:43 jdarcy I use mostly Fedora, mostly because of Newer Stuff.  But if I wanted ten-year support it'd be a bit different.
20:44 jdarcy BTW, finally got my new Razr to recognize its SIM card.   :)
20:45 sashko joined #gluster
20:45 * jdarcy looks back to see what happened with VisionNL's issue.
20:57 jdarcy VisionNL: If you're still here, I strongly encourage you to file a bug on bugzilla.redhat.com for your issue.  It's going to require more time to investigate, and having a bug # makes it easier to assign someone.
20:57 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
20:58 kkeithley_wfh semiosis: yes, I agree that's the canonical, pedantic right way, and if I thought more than two people had installed from it I'd do that.
20:58 jdarcy VisionNL: Also, it would be really helpful to get a stack trace.  Attach with gdb and "thread apply all bt full" then include the output in the bug report.
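A sketch of gathering the trace jdarcy asks for, attaching gdb to the stuck client (the PID is the one from VisionNL's ps output; installing the matching glusterfs debuginfo packages first, so the backtrace has symbols, is an assumption rather than something stated in the log):

    gdb -p 25950                      # attach to the hung glusterfs client process
    (gdb) thread apply all bt full    # full backtrace of every thread, for the bug report
    (gdb) detach
    (gdb) quit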
20:59 kkeithley_wfh and it's a squeeze vm, not cross-built on the wheezy vm. (remember me trying to figure out getting squeeze to run yesterday?)
21:00 lanning joined #gluster
21:00 JoeJulian jdarcy: Are you able to add 3.3.1 to the Version list on bugzilla?
21:01 y4m4 joined #gluster
21:01 steven JoeJulian so, i know you said performance is gonna be "meh", but still, what's better, fewer bricks? that's what my test seems to show
21:01 JoeJulian @meh
21:01 glusterbot JoeJulian: I'm not happy about it either
21:01 steven I assume the overhead of keeping up with all those bricks slows it way down...
21:02 y4m4 joined #gluster
21:02 JoeJulian More clients, more bricks. Fewer clients, fewer bricks is probably better.
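To make the trade-off concrete, a sketch of the two layouts under discussion; host names, brick paths, and volume names are illustrative, and you would create only one of the two:

    # one brick per drive (only the first two of the 12 drives shown)
    gluster volume create mailvol48 replica 2 \
        srv1:/bricks/d01 srv2:/bricks/d01 \
        srv1:/bricks/d02 srv2:/bricks/d02

    # fewer bricks: aggregate each server's 12 drives (RAID/LVM) into one brick
    gluster volume create mailvol4 replica 2 \
        srv1:/bricks/pool srv2:/bricks/pool \
        srv3:/bricks/pool srv4:/bricks/pool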
21:02 jdarcy JoeJulian: Nope.  I don't even know who does have access to that.  Probably nobody in the actual Gluster team, need to fill out a form etc.
21:02 JoeJulian hagarth /was/ able to do that. Dunno any more.
21:03 jdarcy hagarth is a wizard, he can do lots of things the rest of us couldn't dream of
21:07 semiosis kkeithley_wfh: yeah i remember that, but then why would pbuilder have gcc4.7 & libssl1.0?
21:07 kkeithley_wfh ya got me
21:10 kkeithley_wfh I just don't have that much (i.e. none) experience with Debian to know what the Debian way is. And I've forgotten most everything I ever knew about Slackware and SuSE.
21:10 JoeJulian Ooh, got glusterd in D state.
21:12 kkeithley_wfh semiosis: I can't figure out which .deb I need for /usr/bin/deb to add my repos to my Debian boxes.
21:12 kkeithley_wfh and my google fu was weak when I searched earlier.
21:13 sashko_ joined #gluster
21:13 kkeithley_wfh and `apt-file search /usr/bin/deb` is showing lots of things that aren't just deb
21:13 Technicool kkeithley_wfh, i sent you an email earlier
21:14 JoeJulian Well, replace-brick just hit my performance metric. I'll have to wait until overnight to continue migration.
21:14 Technicool of my "not very awesome" build
21:14 kkeithley_wfh oh, haven't looked at email in an hour
21:14 kkeithley_wfh hang on
21:14 Technicool with that, i was on a fresh install of squeeze and used the gluster tarball
21:17 kkeithley_wfh ah, okay. did you follow semiosis' recipe of debuild then pbuilder? I finally gave up on the pbuilder part and used the .debs from the debuild to create a repo and AFAICT it depends on libssl0.9.8.
21:17 JoeJulian mmkay, glusterd was stuck waiting for /var/lib/glusterd/vols/home/rb_mount which I had issued an abort on.
21:17 semiosis kkeithley_wfh: /usr/bin/deb?  what's that?
21:18 Technicool no, i followed the technicool method of "GAH!! you *WILL* work <swear words>"
21:18 kkeithley_wfh otherwise yeah, I used the tarball and semiosis' debian-gluster bits on github for the packaging
21:18 Technicool its the technical equivalent of winning in Street Fighter II by mashing buttons
21:18 kkeithley_wfh semiosis: well, or wherever `which deb` lives. For adding a repo to apt.
21:19 JoeJulian I thought that was how you played that game.
21:19 kkeithley_wfh as I see in numerous results on google for how to add a repo to apt
21:19 Technicool joe, its a valid method....have seen more than one "l44t expart" go down to the novice button mash
21:20 kkeithley_wfh l44t? is that 11 more than l33t?
21:20 kkeithley_wfh or 1337
21:20 Technicool yes, its like leet++
21:20 JoeJulian Not like the old days where you could memorize the ghost pattern.
21:20 Technicool and using ++ is really like saying you are in constant -- ...
21:21 Technicool <--- had the magazine that showed how to memorize the ghost pattern
21:21 Technicool <--- should not admit that
21:22 tryggvil joined #gluster
21:22 kkeithley_wfh amusingly enough, when I moved here almost 20 years ago, I got xxx-xxx-1338 as our land line #. So close.
21:22 JoeJulian We'll have to have a rubik's cube race some day. I bet you still have that book somewhere too.
21:23 Technicool i do not
21:23 Technicool but
21:23 Technicool one of my co-workers is a rubiks master
21:23 Technicool wins bar bets with it
21:24 Technicool average is less than 1 minute
21:24 semiosis kkeithley_wfh: well there's add-apt-repository but i think that's ubuntu specific.  it's super easy to add a ppa with that
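On Ubuntu that amounts to a one-liner per repo; a sketch with an illustrative PPA name (not necessarily the real one):

    # add-apt-repository fetches the signing key and writes the sources entry
    sudo add-apt-repository ppa:semiosis/glusterfs-3.3    # example PPA name only
    sudo apt-get update
    sudo apt-get install glusterfs-server glusterfs-client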
21:24 JoeJulian <--- ~50 seconds on average.
21:25 kkeithley_wfh okay, well, I'll just chalk that up as another Debian mystery and leave it at that
21:25 semiosis kkeithley_wfh: look at how 10gen, for example, instructs people to add the mongodb repo: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-debian/
21:25 glusterbot Title: Install MongoDB on Debian MongoDB Manual (at docs.mongodb.org)
21:25 semiosis very manual
21:25 Technicool JoeJulian, next time you are in the Bay, let me know and i will arrange a match between you two
21:26 Technicool then, after, you can both have fun watching me uberphail at it
21:26 JoeJulian hehe
21:26 Technicool the only way I am getting < 1 minute is with spray paint
21:28 kkeithley_wfh maybe that's all the `deb` utility that I see references to does?
21:28 semiosis what deb utility?
21:29 semiosis link?
21:29 semiosis where do you see references?
21:29 kkeithley_wfh google. Let me see if I can find one
21:30 semiosis i suspect your confusing a line for the /etc/apt/sources.list config file with a command to be run
21:31 semiosis s/your/you're/
21:31 glusterbot What semiosis meant to say was: i suspect you're confusing a line for the /etc/apt/sources.list config file with a command to be run
21:31 kkeithley_wfh no, I don't think so. Here's one http://www.isotton.com/software/debian/docs/repository-howto/repository-howto.html
21:31 glusterbot Title: Debian Repository HOWTO (at www.isotton.com)
21:32 kkeithley_wfh here's another (both near the bottom of the page). This one seems rather authoritative http://wiki.debian.org/SettingUpSignedAptRepositoryWithReprepro
21:32 glusterbot Title: SettingUpSignedAptRepositoryWithReprepro - Debian Wiki (at wiki.debian.org)
21:33 semiosis yeah that's not a command, it's a line for the apt sources.list config file
21:33 semiosis /etc/apt/sources.list and /etc/apt/sources.list.d/*.list
21:34 kkeithley_wfh ah, brainfart
21:34 semiosis those files contain lines beginning with either 'deb' or 'deb-src' followed by a URL followed by a distro release name (stable, unstable, precise, quantal, etc) followed by a repo name (main, universe, etc)
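Put together, an example of such a file with a placeholder repo URL (not the real download.gluster.org layout):

    # /etc/apt/sources.list.d/gluster.list -- illustrative URL and suite name
    deb     http://repo.example.org/debian squeeze main
    deb-src http://repo.example.org/debian squeeze main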
21:34 semiosis :D
21:38 kkeithley_wfh fixed the readme.txt
21:38 kkeithley_wfh thanks
21:38 semiosis @repo
21:38 glusterbot semiosis: I do not know about 'repo', but I do know about these similar topics: 'repository', 'yum repo', 'yum repository', 'git repo', 'ppa repo', 'yum33 repo', 'yum3.3 repo'
21:38 semiosis kkeithley_wfh: you're welcome
21:38 semiosis @debian
21:38 glusterbot semiosis: I do not know about 'debian', but I do know about these similar topics: 'files edited with vim become unreadable on other clients with debian 5.0.6'
21:38 semiosis @latest
21:38 glusterbot semiosis: The latest version is available at http://download.gluster.org/pub/gluster/glusterfs/LATEST/ . There is a .repo file for yum or see @ppa for ubuntu.
21:40 kkeithley_wfh and now my wife says there's a free beer tasting at one of our favorite restaurants, so I'm off
21:40 semiosis kkeithley_wfh: ok another thing people often do in their "how to add this repo" readmes is to give a whole command to add the key... usually of the form "wget -O - http://path.to.key | apt-key add"
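So a readme's full recipe tends to look roughly like this, with a placeholder key URL and repo entry standing in for the real ones:

    # import the repo's signing key, add the source line, then refresh apt
    wget -O - http://repo.example.org/debian/pubkey.gpg | sudo apt-key add -
    echo 'deb http://repo.example.org/debian squeeze main' | \
        sudo tee /etc/apt/sources.list.d/gluster.list
    sudo apt-get update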
21:40 JoeJulian Nice!
21:40 semiosis kkeithley_wfh: yes, go to the free beer
21:40 semiosis you deserve it
21:41 JoeJulian Go with the wife.
21:41 kkeithley_wfh It's a trap, we'll end up buying food I'm sure
21:43 glusterbot New news from resolvedglusterbugs: [Bug 764966] gerrit integration fixes <https://bugzilla.redhat.com/show_bug.cgi?id=764966>
21:54 vimal joined #gluster
22:11 Bullardo joined #gluster
22:28 jbrooks joined #gluster
22:47 hattenator joined #gluster
23:45 y4m4 joined #gluster
