
IRC log for #gluster, 2012-10-10


All times shown according to UTC.

Time Nick Message
00:02 blendedbychris1 joined #gluster
00:17 Technicool joined #gluster
00:30 puebele joined #gluster
00:36 usrlocalsbin joined #gluster
00:58 moodeque joined #gluster
00:59 aliguori joined #gluster
01:48 kevein joined #gluster
02:29 sashko joined #gluster
02:36 sgowda joined #gluster
03:03 hagarth joined #gluster
03:22 glusterbot New news from resolvedglusterbugs: [Bug 845213] [FEAT] Need O_DIRECT support in translators <https://bugzilla.redhat.com/show_bug.cgi?id=845213>
03:22 glusterbot New news from newglusterbugs: [Bug 857673] write-behind: implement causal ordering of requests for better VM performance <https://bugzilla.redhat.com/show_bug.cgi?id=857673>
03:22 kshlm joined #gluster
03:35 shylesh joined #gluster
03:36 raghu joined #gluster
03:37 _Bryan_ joined #gluster
03:44 Humble joined #gluster
03:53 sripathi joined #gluster
03:57 ngoswami joined #gluster
03:57 puebele1 joined #gluster
04:08 deepakcs joined #gluster
04:09 ramkrsna joined #gluster
04:28 jays joined #gluster
04:37 vpshastry joined #gluster
04:42 overclk joined #gluster
04:48 faizan joined #gluster
04:52 mdarade1 joined #gluster
04:58 ramkrsna joined #gluster
05:05 McLev joined #gluster
05:05 McLev just a quick one: would anybody here recommend running kvm via glusterfs mounts?
05:07 McLev I'm just seeing a lot of contrasting advice via the net. some people say gluster stores kvm guests pretty well. some don't.
05:38 mdarade joined #gluster
05:48 bala joined #gluster
05:53 mohankumar joined #gluster
05:55 layer3 joined #gluster
06:06 mohankumar joined #gluster
06:19 layer3 joined #gluster
06:25 Tekni joined #gluster
06:28 sashko joined #gluster
06:30 lkoranda joined #gluster
06:32 mo joined #gluster
06:36 ctria joined #gluster
06:43 atrius joined #gluster
06:48 samppah McLev: i think that biggest problems are with performance
06:49 mdarade left #gluster
06:50 McLev yeah, totally. just ran some tests, and I'm getting mad read/write.
06:50 McLev there's no way around that, is there?
06:50 bala joined #gluster
06:51 samppah McLev: http://lists.gnu.org/archive/html/qemu-devel/2012-06/msg01745.html this seems promising but it's still under development
06:51 glusterbot Title: [Qemu-devel] [RFC PATCH 0/3] GlusterFS support in QEMU (at lists.gnu.org)
06:52 McLev thanks man. really, really appreciated.
06:52 samppah it's possible to use glusterfs over nfs.. performance is probably better but you lose some other functions of glusterfs
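
A minimal sketch of the two mount styles samppah is comparing (server name, volume name, and mount point are placeholders); gluster's built-in NFS server only speaks NFSv3 over TCP, hence the options:

    # native FUSE client: replication and failover logic lives in the client
    mount -t glusterfs server1:/vmvol /mnt/vms

    # gluster's built-in NFS server: kernel-side caching often helps small-I/O
    # performance, but you give up client-side failover and other FUSE-only features
    mount -t nfs -o vers=3,tcp server1:/vmvol /mnt/vms
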
06:52 glusterbot New news from newglusterbugs: [Bug 864786] Eager locking lk-owner decision is taken before transaction type is set <https://bugzilla.redhat.com/show_bug.cgi?id=864786>
06:52 stickyboy joined #gluster
06:53 ondergetekende joined #gluster
06:55 vimal joined #gluster
06:58 McLev yeah. I kind of lost xattr, but I'm not sure if that's just my mount options.
07:02 ankit9 joined #gluster
07:03 bulde joined #gluster
07:07 dobber joined #gluster
07:07 samppah McLev: lost xattr?
07:08 McLev oh, I was running darwin calendar on the glusterfs mount. it kind of blew up with xattr errors when I mounted it as NFS.
07:08 McLev it's cool. I don't think v3 supports it (fuse client does, though)
07:13 Nr18 joined #gluster
07:18 shireesh joined #gluster
07:23 stickyboy McLev: Darwin calendar?  ie, `cal` on Mac OS X?  Or some other app I've never heard of?
07:25 McLev oh, darwin calendar server
07:25 McLev http://trac.calendarserver.org/
07:25 glusterbot Title: Calendar and Contacts Server (at trac.calendarserver.org)
07:25 stickyboy McLev: Ah, hehe.  Thanks.
07:25 McLev it's cool bro. :D
07:30 Nr18 joined #gluster
07:32 mohankumar samppah: McLev: fyi: glusterfs support for qemu patches merged in upstream qemu
07:32 McLev oh, sweet
07:34 samppah mohankumar: woohooo \o/
07:35 mohankumar :)
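
The patches mohankumar mentions let QEMU open images through a gluster:// URI instead of a FUSE mount; roughly, usage looks like this (server, volume, and image names are placeholders):

    # create a guest image directly on the volume
    qemu-img create gluster://server1/vmvol/guest1.img 10G

    # boot a guest from that image
    qemu-system-x86_64 -m 2048 -drive file=gluster://server1/vmvol/guest1.img,if=virtio
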
07:36 sshaaf joined #gluster
07:44 ctria joined #gluster
07:51 gbrand_ joined #gluster
08:05 Daxxial_ joined #gluster
08:33 mdarade joined #gluster
08:36 dastar_ joined #gluster
08:58 hurdman joined #gluster
08:59 hurdman hi, is it safe to upgrade one brick of a Replicate volume from 3.2.4 to 3.2.6 without stopping the service?
09:03 sripathi joined #gluster
09:04 Qten joined #gluster
09:06 nueces joined #gluster
09:12 ondergetekende_ joined #gluster
09:20 hurdman where can i find the 3.2.7 changelog?
09:21 hurdman http://download.gluster.com/pub/gluster/glusterfs/3.2/3.2.7/ <= it's not there like it is for the other releases
09:21 glusterbot Title: Index of /pub/gluster/glusterfs/3.2/3.2.7 (at download.gluster.com)
09:36 gcbirzan joined #gluster
09:42 gcbirzan Hum. I'm trying to understand some log messages I'm getting from gluster. The log messages are full of [2012-10-08 00:30:40.296641] I [afr-common.c:1189:afr_detect_self_heal_by_iatt] 0-dis-magnetics-2-181901-replicate-0: size differs for <gfid:6e1dd6d0-f0bb-4ccd-9419-202911c90d27> every ten minutes. It started like 4 days ago, and there was no self heal triggered. It also magically stopped after 2 days, but I believe the file was corrupted somehow
09:50 kshlm joined #gluster
09:52 gcbirzan Ah. Duh. It seems there's a problem with one of the clients...
09:55 kshlm joined #gluster
10:01 badone_home joined #gluster
10:06 gcbirzan left #gluster
10:14 ntt joined #gluster
10:14 ntt hi
10:14 glusterbot ntt: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:15 shireesh joined #gluster
10:15 tryggvil joined #gluster
10:18 ntt I have n storage servers with IPs (on eth0) 10.10.1.1, 10.10.1.2, etc. Each server has 2 ethernet cards, where eth1 is in 192.168.1.0/24. If 10.10.1.0/24 goes down, can i automatically switch my storage servers to 192.168.1.0/24 ?
10:23 glusterbot New news from newglusterbugs: [Bug 825569] [enhancement]: Add support for noatime, nodiratime <https://bugzilla.redhat.com/show_bug.cgi?id=825569> || [Bug 857220] [enhancement]: Add "system restart" command for managing version upgrades <https://bugzilla.redhat.com/show_bug.cgi?id=857220> || [Bug 858275] Gluster volume status doesn't show disconnected peers <https://bugzilla.redhat.com/show_bug.cgi?id=858275> || [Bug 847
10:32 Humble joined #gluster
10:34 sripathi joined #gluster
10:47 ctria joined #gluster
10:52 badone_home joined #gluster
10:53 glusterbot New news from newglusterbugs: [Bug 848556] glusterfsd apparently unaware of brick failure. <https://bugzilla.redhat.com/show_bug.cgi?id=848556> || [Bug 830121] Nfs mount doesn't report "I/O Error" when there is GFID mismatch for a file <https://bugzilla.redhat.com/show_bug.cgi?id=830121> || [Bug 830665] self-heal of files fails when simulated a disk replacement <https://bugzilla.redhat.com/show_bug.cgi?id=830665> || [Bug
10:55 bala joined #gluster
10:57 dastar joined #gluster
11:02 duerF joined #gluster
11:02 badone_home joined #gluster
11:05 tryggvil_ joined #gluster
11:08 dobber how can I change the volume transport on the fly ?
11:09 shireesh joined #gluster
11:30 Triade joined #gluster
11:34 badone_home joined #gluster
11:41 mdarade1 joined #gluster
11:42 mdarade1 joined #gluster
12:03 Humble joined #gluster
12:04 sac joined #gluster
12:04 kshlm joined #gluster
12:05 overclk joined #gluster
12:06 vpshastry joined #gluster
12:09 hagarth joined #gluster
12:10 bulde joined #gluster
12:16 aliguori joined #gluster
12:21 shireesh joined #gluster
12:34 balunasj joined #gluster
12:35 Humble joined #gluster
12:47 verywiseman joined #gluster
12:51 RNZ joined #gluster
12:52 shireesh joined #gluster
12:52 gmcwhistler joined #gluster
13:17 kerkis joined #gluster
13:18 kerkis hi
13:18 glusterbot kerkis: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:19 Humble joined #gluster
13:26 verywiseman if i have several mysql servers working behind a load balancer, and 2 or more mysql servers access the same database at the same time, does glusterfs manage these requests to ensure that every mysql server's operations complete successfully, or may the database get corrupted?
13:27 dobber database may corrupt
13:29 manik joined #gluster
13:31 Nr18_ joined #gluster
13:37 bennyturns joined #gluster
13:40 mweichert joined #gluster
13:42 verywiseman dobber, why?
13:44 Humble joined #gluster
13:48 lh joined #gluster
13:50 dobber try starting two mysql daemons over one datadir
13:51 Nr18 joined #gluster
13:51 verywiseman dobber, i get your point
13:51 verywiseman but how can i solve this problem ?
13:52 hagarth joined #gluster
13:52 dobber master/slave replication ?
13:53 tryggvil joined #gluster
13:54 glusterbot New news from resolvedglusterbugs: [Bug 765450] txbufs and rxbufs must be freed in nfs_rpcsvc_conn_destroy <https://bugzilla.redhat.com/show_bug.cgi?id=765450>
13:54 glusterbot New news from newglusterbugs: [Bug 864963] Heal-failed and Split-brain messages are not cleared after resolution of issue <https://bugzilla.redhat.com/show_bug.cgi?id=864963>
13:55 H__ having 1 writer and multiple read slaves on the same datadir might work, but i suggest not doing that and going with replication instead
13:57 rwheeler joined #gluster
13:58 deepakcs joined #gluster
14:06 stopbit joined #gluster
14:09 wushudoin joined #gluster
14:11 ntt joined #gluster
14:13 pkoro joined #gluster
14:13 ntt hi
14:13 glusterbot ntt: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
14:16 bulde1 joined #gluster
14:16 ntt I have 2 storage servers, each with 2 NICs. The "storage network" uses eth0 with IPs 10.10.1.1 and 10.10.1.2 respectively. Eth1 is 192.168.2.81 and 192.168.2.82 respectively. If 10.10.1.0/24 goes down, can I "switch" gluster to 192.168.2.0/24 ??
14:18 ekuric joined #gluster
14:23 ntt someone can help me?
14:23 dobber are there implications if I delete .glusterfs ?
14:29 stopbit dobber: you should do that while the volume is offline, but otherwise it should be safe... i've done it a number of times myself recently on a large cluster
14:29 stopbit what is the problem you are attempting to solve by deleting that directory?
14:30 dobber i want to change the transport type from tcp to rdma
14:30 dobber and the only way i see is to create the volume from scratch
14:30 stopbit ah, okay... yeah, just offline the volume and you can clear those out
14:30 dobber so maybe the procedure - stop volume; delete volume ; create volume transport rdma ; start volume
14:30 dobber and hope the data is ok
14:30 stopbit if you're recreating the cluster from scratch, you'll also want to clear all the extended attributes
14:30 dobber yes
14:30 dobber http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
14:30 glusterbot Title: GlusterFS: {path} or a prefix of it is already part of a volume (at joejulian.name)
14:31 dobber I got the "already part of a volume"
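
Roughly, the procedure dobber outlines above combined with the cleanup from the linked post; the volume name and brick paths are placeholders, and removing the xattrs plus .glusterfs is what clears the "already part of a volume" check:

    gluster volume stop myvol
    gluster volume delete myvol
    # on every server, for each brick directory:
    setfattr -x trusted.glusterfs.volume-id /data/brick1
    setfattr -x trusted.gfid /data/brick1
    rm -rf /data/brick1/.glusterfs
    # recreate with the new transport (add replica/stripe options to match the old layout) and start it
    gluster volume create myvol transport rdma server1:/data/brick1 server2:/data/brick1
    gluster volume start myvol
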
14:31 stopbit that'd be the same post i followed a little while back :)
14:32 dobber sweet
14:32 dobber i hope i don't have dataloss :(
14:33 stopbit as long as you don't delete anything other than .glusterfs on each brick, you shouldn't
14:34 stopbit that's just metadata that gluster will recreate automatically
14:34 dobber it looked to me as a hard link to the actual file
14:34 dobber not sure
14:35 ntt stopbit can help me?
14:36 red_solar joined #gluster
14:36 stopbit ntt: unfortunately, i don't have any experience moving peers across interfaces -- the bricks in the clusters i've worked on have all been on single-NIC systems
14:39 ntt ok.... I do not understand if glusterfs natively support this
14:40 flind_ joined #gluster
14:40 abyss^_ joined #gluster
14:41 Nuxr0 joined #gluster
14:44 RobertLaptop_ joined #gluster
14:44 kaisersoce joined #gluster
14:46 kkeithley jdarcy: did you fall off the call?
14:51 a2 joined #gluster
14:52 tripoux joined #gluster
14:52 bennyturns joined #gluster
14:54 mdarade1 joined #gluster
14:59 JoeJulian dobber, stopbit: http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
14:59 glusterbot Title: What is this new .glusterfs directory in 3.3? (at joejulian.name)
15:01 dobber If that gfid file is broken, the filename file will be as well.
15:01 dobber so if i delete it
15:01 dobber it will delete the file as well
15:02 ankit9 joined #gluster
15:03 JoeJulian yep
15:03 dobber cool
15:04 JoeJulian ntt: the servers listen on the global address (:::) so your only problem is one of routing and dns resolution.
15:05 dobber JoeJulian: and i would get empty files if i delete all .glusterfs ?
15:05 JoeJulian Shouldn't, no.
15:05 JoeJulian If you make a file and hardlink it, then delete one of the links, the other remains.
15:06 ntt JoeJulian: I know what you're saying, but is there an automated way to "switch" between the 2 networks?
15:06 JoeJulian That's a #networking question.
15:07 ntt ok... Technically, if i edit /etc/hosts on each server, it works (tested few minutes ago)
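
A sketch of that hosts-file approach, assuming the volume was defined against hostnames rather than raw IPs (the names are placeholders); failing over then means repointing the names on every server and client:

    # /etc/hosts during normal operation (storage network on eth0)
    10.10.1.1     gluster1
    10.10.1.2     gluster2

    # /etc/hosts after the storage network fails (fall back to the eth1 addresses)
    192.168.2.81  gluster1
    192.168.2.82  gluster2
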
15:08 sashko joined #gluster
15:10 ntt JoeJulian: do you know of other solutions?
15:11 JoeJulian dobber: What I would do is create a test volume. Look at /var/lib/glusterd/vols/$volname/info. Create that test volume again with the different transport and look at the info file again to see what's different. Theoretically, if you stop your volume, make that change, and start the volume again, it /should/ change the transport.
15:12 JoeJulian ntt: ospf, rip, rip2, weighted routing, even rrdns might work.
15:12 dobber i did that but no change happened
15:13 dobber i also tested with create, copy files, delete , re-create , files still exist
15:13 dobber so i did that to the production server now
15:15 ondergetekende joined #gluster
15:16 dobber but i think i broke it now
15:16 davdunc joined #gluster
15:18 dobber nope, it's working fine
15:18 dobber thanks guys
15:18 ntt JoeJulian: really Thank You!
15:19 mdarade1 left #gluster
15:19 JoeJulian You're welcome.
15:19 tryggvil joined #gluster
15:20 ctria joined #gluster
15:22 ntt JoeJulian: is it possible to use Gluster remotely? Can I mount a remote gluster filesystem over the internet?
15:22 JoeJulian You /can/ but it's going to have latency issues. If I were forced to do that I'd probably use an nfs mount.
15:23 daMaestro joined #gluster
15:25 ntt what's your advice for remote filesystems?
15:25 JoeJulian vnc to a vm that's not remote.
15:26 ntt :) remote filesystems are bad...
15:26 ndevos sshfs?
15:27 flowouffff hi guys
15:27 * JoeJulian waves
15:27 flowouffff quick question, any problem with glusterFS and virtual interfaces ?
15:28 flowouffff i've got Transport Endpoint is not connected
15:28 flowouffff :/
15:28 JoeJulian Nope
15:29 JoeJulian This is from the client?
15:29 ndevos flowouffff: just make sure that when you mount over glusterfs, the client system is able to contact the hostname/ip of the bricks directly
15:31 ndevos some people tend to use gluster-internal IPs and hostnames which the clients can not use/resolve; in that case the clients will not be able to contact the bricks and will fail
15:31 jdarcy kkeithley: Missed that the call was today.  :(
15:32 * jdarcy grumbles about reminders less than four waking hours before an event, during non-work hours.
15:35 flowouffff ndevos, thx for your answer
15:35 neofob joined #gluster
15:35 flowouffff i've got a volume with only one brick, i've tried to mount it locally
15:35 flowouffff but the mount is always failing
15:36 flowouffff i'm wondering if there's any problem with virtual interfaces
15:36 JoeJulian fpaste the client log
15:37 neofob after rebalancing my gluster for hrs... i got the message "VFS file-limit some number... reached"
15:37 neofob what's wrong?
15:37 flowouffff i added the inet family option
15:38 kkeithley jdarcy: not sure why we thought you had been on the call in the first place.
15:40 JoeJulian neofob: too many open files. What version are you running?
15:40 neofob 3.3 compiled from sources
15:41 neofob it's on arm 32bit, some sort of lower number file limit?
15:42 JoeJulian Could be I suppose. /proc/sys/fs/file-max is where that limit resides.
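
For reference, checking and raising that limit looks roughly like this (the value is only an example):

    cat /proc/sys/fs/file-max                           # current system-wide open-file limit
    sysctl -w fs.file-max=1048576                       # raise it on the running kernel
    echo 'fs.file-max = 1048576' >> /etc/sysctl.conf    # persist across reboots
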
15:43 neofob have you seen this with x64 system?
15:43 jdarcy kkeithley: An impersonator?  Cool!
15:43 kkeithley huh?
15:43 JoeJulian neofob: Never heard of it before. You're the first! Don't you feel special?!
15:44 neofob i have 1.6TB of data (mostly my camera raws around 50-70MB) in a gluster 5.7 TB
15:44 neofob yeah, i'm the pioneer in gluster on ARM
15:44 kkeithley oh, no, there was a  lot of noise on the line. Perhaps Doug thought he heard you chime in at some point.
15:45 JoeJulian neofob: I'd actually lean toward suspecting a compiler or library bug more than anything. What's the number in /proc/sys/fs/file-max?
15:46 kkeithley neofob: maybe. What are you running, Fedora?
15:46 neofob JoeJulian: i don't have access to my cubox now (i'm in my office); i run debian wheezy 3.5 kernel
15:47 kkeithley Oh, then you wouldn't be interested in my ARM RPMs in my fedorapeople.org repo.
15:47 JoeJulian https://twitter.com/JoeCyberGuru/status/254720620691091456
15:47 glusterbot Title: Twitter / JoeCyberGuru: You cant have Fedora People ... (at twitter.com)
15:49 kkeithley That's..... unfortunate.
15:50 neofob does gluster officially support 32bit? or i'm on my own here?
15:51 blendedbychris joined #gluster
15:51 JoeJulian Official support is through Red Hat as Red Hat Storage. Everything here is unofficial. I still have 32 bit boxes that I run glusterfs on.
15:53 * ndevos has 2 arm boxes that provide some volumes
15:55 ekuric left #gluster
15:56 neofob btw, my question the other day about the port multiplier with the mediasonic; it turned out that i was confused between my cubox and my intel mb when i plugged/unplugged the cable
15:57 neofob the cubox has a port multiplier so it can recognize all the drives; my intel mb dq77kb doesn't
15:59 neofob to make it worse; i initially didn't have EFI enabled on my arm cubox kernel so it didn't recognize the GPT partition of my 3TB drive
15:59 johnmark semiosis: lulz
15:59 neofob => cable management is important!
15:59 JoeJulian +1
15:59 johnmark semiosis: please don't feed the trolls :)  I made that mistake once
15:59 JoeJulian Oooh, which trolls?
16:01 sshaaf joined #gluster
16:02 Humble joined #gluster
16:07 mo joined #gluster
16:08 geggam joined #gluster
16:09 seanh-ansca joined #gluster
16:17 saz_ joined #gluster
16:21 Humble_afk joined #gluster
16:24 bulde1 joined #gluster
16:25 Ramereth joined #gluster
16:27 samkottler joined #gluster
16:30 Nr18 joined #gluster
16:34 ctria joined #gluster
16:35 Mo___ joined #gluster
17:01 Nr18 joined #gluster
17:16 hagarth joined #gluster
17:21 edward1 joined #gluster
17:32 mohankumar joined #gluster
17:40 raghu joined #gluster
17:42 gbrand_ joined #gluster
17:49 davdunc joined #gluster
17:53 mo joined #gluster
18:03 redsolar joined #gluster
18:03 redsolar_office joined #gluster
18:11 daMaestro joined #gluster
18:16 tqrst joined #gluster
18:18 tqrst we have a volume where every brick has a replica (2*N) and just had a single hard drive failure. We replaced the dead hard drive with a new one and are currently running a targeted self-heal (http://community.gluster.org/a/howto-targeted-self-heal-repairing-less-than-the-whole-volume/). At this rate, it should take, oh...about 3 weeks. Is there a faster way to do this? (rsync + "fix all the xattr"?)
18:18 glusterbot Title: Article: HOWTO: Targeted self-heal (repairing less than the whole volume) (at community.gluster.org)
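
The linked howto boils down to stat()ing the affected files through a client mount so replicate notices the missing copies and heals only those; roughly, with the mount point and path as placeholders:

    # walk just the subtree that needs repair and stat every entry
    find /mnt/glustervol/path/to/heal -noleaf -print0 | xargs -0 stat > /dev/null
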
18:19 adechiaro joined #gluster
18:20 davdunc joined #gluster
18:22 y4m4 joined #gluster
18:24 tqrst related: http://download.gluster.com/pub/gluster/RHSSA/3.2/Documentation/UG/html/sect-Gluster_Administration_Guide-Managing_Volumes-Self_heal.html claims that 3.3 now has proactive self-heal. In the case of my brick failure, would the self-heal have been run on the whole volume, or would it have been targeted only to the files that were on that brick?
18:24 glusterbot Title: 12.8. Triggering Self-Heal on Replicate (at download.gluster.com)
18:26 tqrst (to clarify my first question: in our case, one hard drive = one brick)
18:26 neofob tqrst: you should check your network
18:27 neofob if you use gigabit ethernet, enable jumbo frame will help maxing out the throughput
18:30 JoeJulian tqrst: 3.3's self-heal keeps track of which files need healed and handles it. It's triggered on a 10 minute timer so keep that in mind.
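
In 3.3 the heal can also be driven by hand from the CLI; a minimal sketch, with the volume name as a placeholder:

    gluster volume heal myvol          # heal entries the self-heal daemon has already queued
    gluster volume heal myvol full     # force a crawl of the whole volume
    gluster volume heal myvol info     # list entries still pending heal
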
18:31 tqrst neofob: the thing is there is no speed issue for rsync (and yes, it's gigabit ethernet; not sure if jumbo frame is on)
18:32 neofob tqrst: you can check ifconfig to see the value of mtu; this could be 3.2 problem; 3.3 healing can max out my own disk io
18:32 neofob from my experience
18:35 tqrst mtu's 1500
18:35 adechiaro joined #gluster
18:36 neofob try setting it to 9000
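
Enabling jumbo frames, roughly; the interface name is an assumption, and the switch plus every other host on the segment must accept the larger MTU too:

    ip link set dev eth0 mtu 9000                                   # immediate, not persistent
    echo 'MTU=9000' >> /etc/sysconfig/network-scripts/ifcfg-eth0    # persist on RHEL/CentOS-style systems
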
18:38 _Bryan_ Are there RPMs for RHEL4 for 3.2.5?
18:39 gmcwhistler joined #gluster
18:41 gmcwhistler joined #gluster
18:46 adechiaro joined #gluster
18:47 JoeJulian @yum repo
18:47 glusterbot JoeJulian: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
18:47 JoeJulian Oh, for EL4?
18:47 JoeJulian No.
18:47 JoeJulian EL4 doesn't provide some prerequisites.
18:47 JoeJulian Or, at least 1 prereq.
18:50 nueces joined #gluster
18:50 kkeithley and my repo only has -3.3.0 RPMs anyway.
18:56 semiosis :O
18:57 JoeJulian Hey there. What troll were you feeding?
18:57 semiosis meh
18:57 semiosis ari p.n.r.
19:00 johnmark haha
19:00 johnmark semiosis: I told him that if he showed up here, we would treat him with respect
19:00 johnmark so let's just drop it - not important anyway
19:00 semiosis yeah totally
19:01 adechiaro joined #gluster
19:03 JoeJulian Always have.
19:06 JoeJulian I'm going to have to head down to portland now...
19:06 johnmark JoeJulian: oh?
19:06 johnmark family fun?
19:07 JoeJulian No, gunna have to stop by puppetlabs and brick some microbrews.
19:07 semiosis trouble at the plant?
19:07 JoeJulian s/brick/bring/
19:07 glusterbot What JoeJulian meant to say was: No, gunna have to stop by puppetlabs and bring some microbrews.
19:07 johnmark ooooh... microbrews
19:07 johnmark JoeJulian: you're going to puppetlabs? cool
19:08 johnmark JoeJulian: what's the occasion?
19:08 semiosis cleaning up my mess it seems :)
19:08 JoeJulian hehe
19:09 JoeJulian no cleaning, but I've been there before. I know people there. I'm in the neighborhood.
19:09 JoeJulian ... and we do have friends that we're overdue to go visit.
19:09 JoeJulian It's sure easier to get over things when you're face to face socially.
19:11 _Bryan_ Thanks for confirming that JoeJulian
19:15 adechiaro joined #gluster
19:28 adechiaro joined #gluster
19:39 rwheeler joined #gluster
19:40 adechiaro joined #gluster
19:41 elyograg with gluster-swift, is there any way in the config to change the uid/gid that gets used for creating files?  I know that I could probably just change the user under which the software gets started, but then it would have a hard time listening on port 443.
19:42 davdunc the container and object servers don't listen on port 443, only the proxy server does.
19:42 elyograg the reason that I want to do this is that it is likely that I will need to access (both read and write) the same system via Samba as well as potentially NFS and native mount, and I would very much not want these to be using root.
19:44 davdunc elyograg, I haven't tried that, but it is routine (from what I understand) in the openstack community.
19:46 elyograg some google searches turn up uid=swift and gid=swift, but this seems to be related to using swift's native storage stuff, and probably wouldn't apply when gluster is the storage.
19:46 davdunc come to think of it though, the container server has to mount the gluster volume doesn't it?
19:47 elyograg davdunc: I have no idea what does what.  I believe that the gluster volume is linked to the account, though.
19:48 dshea What would the theoretical limit of a file system be for a cluster?  The underlying fs size limit?
19:49 davdunc elyograg, there is a mount_dirs method in the gluster plugin that mounts the directory.  The container server is responsible for mounting that at $reseller_prefix_$account
19:51 davdunc s/mounts the directory/mounts the volume/
19:51 glusterbot What davdunc meant to say was: elyograg, there is a mount_dirs method in the gluster plugin that mounts the volume.  The container server is responsible for mounting that at $reseller_prefix_$account
19:52 dshea And what determines the max number of inodes that I can have for a given cluster?  (Our current use case is going to involve about 30 million files and we'd like to be able to scale that up by a factor of 10 for starters)
19:52 elyograg dshea: what I can find says there are no theoretical limits other than available ports, because each brick requires two open ports, making for a limit of about 1000 bricks.  The maximum size of any one file is the size of your brick filesystems.  I am not sure whether that should be further qualified as your smallest brick filesystem.
19:52 elyograg http://community.gluster.org/q/how-big-can-i-build-a-gluster-cluster/
19:52 glusterbot Title: Question: How big can I build a Gluster cluster? (at community.gluster.org)
19:54 dshea thx
19:55 elyograg The data that we are going to put on the gluster volume that we intend to build is about 76 million objects, and each object consists of 5 or more individual files.  I have not seen anything to suggest we are going to have any trouble with this.
19:56 elyograg That's just where the system is now, the business guys would like to see three times that, and I'm sure 10x would tickle them to death.
19:57 H__ i think the size will quickly be limited by the fact that every brick must have the entire directory tree on it. Or did I misunderstand that part ?
20:02 dshea I have a lab I'm going to use for my pilot that has 10TB and 30 million files, we're still trying to determine if we want to go with a ginormus volume for the full 8PB of data though or if we want to restructure how our data is sorted and stored.
20:03 jdarcy dshea: How many files are you likely to have in one directory?
20:05 dshea That is a good question.  I would need to look at the actual lab's data.  As of now, I've only been given randomized data files to mimic the number, but the sanmple data I have is 3000 files per directory. 1 data file, 1 sha256 and 1 md5sum
20:05 dshea s/sanmple/sample/
20:05 glusterbot What dshea meant to say was: That is a good question.  I would need to look at the actual lab's data.  As of now, I've only been given randomized data files to mimic the number, but the sample data I have is 3000 files per directory. 1 data file, 1 sha256 and 1 md5sum
20:06 dshea Is there a recommended limit regarding files per directory?
20:06 jdarcy That's not too bad.  The reason I ask is that our readdir performance isn't that great, so very large directories (tens of thousands up to millions) don't do well for some workloads.
20:07 dshea I have a sneaking suspicion I will get access to the lab fs and it will be one enormous directory.
20:07 jdarcy Heh.  That would be lovely.
20:07 jdarcy And by "lovely" I mean a total nightmare that will rot everyone's brains for a hundred-mile radius.
20:08 dshea these guys are great biologists in their defense, just lousy computer scientists. ;-)
20:09 Eco__ joined #gluster
20:10 dshea expect the worst and hope for the best seems to be the best approach
20:10 jdarcy dshea: I used to work in HPC, and saw that with just about every kind of scientist.
20:11 jdarcy There's no reason a biologist or physicist *should* know much about computers.  Plenty of other stuff to fill their heads.
20:11 dshea yep
20:12 jdarcy dshea: The other worry I'd have is whether these files are going to be accessed concurrently from many machines as part of one big MPI job or anything like that.
20:12 dshea jdarcy: the most realistic use case for these guys is "fire and forget" storage
20:13 dshea jdarcy: they are recording maybe 1GB of data per file run and then storing off to the file server where it might be accessed for post processing at a later date
20:14 jdarcy That sounds pretty ideal, then.
20:15 andreask joined #gluster
20:15 dshea At least for this one lab, I think it will be.  I'm trying to make the case of moving away from trying to design a one size fits all cluster.  Given the widely varying types of research going on, it is better for us to move to tailoring solutions for various groups based on what they are actually trying to accomplish.
20:16 dshea We'll see how that goes. ;-)
20:24 mo joined #gluster
20:24 badone joined #gluster
20:26 m0zes dshea: heterogenous clusters ftw :D
20:42 kaisersoce joined #gluster
20:44 Shaun-- joined #gluster
20:48 benner has anyone got any recommendation on the IO scheduler when having lots of small files in a gluster volume?
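
The question went unanswered here; for reference, inspecting and switching a brick disk's scheduler looks roughly like this (the device name is an assumption):

    cat /sys/block/sda/queue/scheduler              # the active scheduler is shown in brackets
    echo deadline > /sys/block/sda/queue/scheduler  # switch it at runtime
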
20:48 y4m4 joined #gluster
20:51 Shaun-- I'm getting this error: "gluster peer probe 192.168.1.66" returns "Probe unsuccessful: Probe returned with unknown errno 107"
20:51 Shaun-- I'm getting the same error on all of my bricks
20:52 mspo Shaun--: firewall problem?
20:52 JoeJulian No idea why it's unknown... that's "transport endpoint is not connected"
20:53 mspo JoeJulian: you had that header file ready to go
20:53 JoeJulian @lucky errno 107
20:53 glusterbot JoeJulian: http://stackoverflow.com/questions/2131561/recv-with-errno-107transport-endpoint-connected
20:53 Shaun-- I double checked that the ports where open in iptables
20:53 mspo there's a range
20:53 JoeJulian @ports
20:53 glusterbot JoeJulian: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
20:54 JoeJulian For the peer probe, though, 24007 should be the only port in play.
20:55 Shaun-- 24007 to 24050 are open
20:55 JoeJulian I would flush iptables and see if it works. If it does, then you know it's your firewall.
20:55 Shaun-- I'm not using rdma
20:55 Shaun-- I tried that already
20:55 Shaun-- the bricks are VMs on the same host
20:56 Shaun-- so there's no switch error
20:56 JoeJulian Ah, you were here yesterday... :D
20:56 mspo just turn off iptables
20:56 mspo then let's see netstat -anp|grep glu and some telnet
20:57 JoeJulian can you telnet to port 24007 from one to the other?
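
The checks being suggested, spelled out; the IP is the one from the failed probe, the rest is generic:

    iptables -F                   # flush the firewall rules to rule them out, as suggested above
    netstat -anp | grep gluster   # confirm glusterd is listening on 24007 on both VMs
    telnet 192.168.1.66 24007     # confirm this node can actually reach the other glusterd
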
21:03 mspo all that iptables stuff got him, I'll bet ;)
21:08 johnmark mspo: it usually does :)
21:09 JoeJulian That happened last time. It sure makes it hard to help.
21:10 JoeJulian @later tell Shaun-- Since you left before you could get any help, be sure that glusterd is running on both vms, and see if you can telnet from one vm to port 24007 on the other vm.
21:10 glusterbot JoeJulian: The operation succeeded.
21:10 mspo fancy bot
21:10 JoeJulian I will not be denied.
21:11 mspo glusterbot: eggdrop?
21:11 JoeJulian @version
21:11 glusterbot JoeJulian: The current (running) version of this Supybot is 0.83.4.1+limnoria 2012-07-13T14:08:55+0000. The newest versions available online are 2012-10-10T10:39:28+0700 (in testing), 2012-09-03T11:00:41+0700 (in master).
21:13 tryggvil joined #gluster
21:13 mspo neat
21:14 mspo I wrote an ircbot a while ago but it just did keyword/response stuff for bugs
21:14 tryggvil_ joined #gluster
21:14 mspo like a single plugin of supybot, probably
21:15 badone_home joined #gluster
21:15 JoeJulian Yep, Factoids plugin. Cool though. :)
21:15 mspo perl anyevent::irc made it stupidly easy
21:34 mweichert joined #gluster
21:37 hattenator joined #gluster
21:38 JordanHackworth @load
21:38 glusterbot JordanHackworth: Error: You don't have the owner capability. If you think that you should have this capability, be sure that you are identified before trying again. The 'whoami' command can tell you if you're identified.
21:40 hattenator Anyone doing anything with snapshots?  Maybe snapshotting the brick LVM or the geo-replication FS/LVM?
22:07 crashmag joined #gluster
22:13 balunasj joined #gluster
22:14 octi joined #gluster
22:17 octi Say I have a gluster cluster that becomes split brain, and application code operates on each copy of the split brain independently. Upon reconnection of the gluster, is it possible to have a program get called that can resolve the conflicts? Basically I would end up reading both versions of the file and creating a new file that merges the contents of the two.
22:17 octi Say they are logfiles or something similar
22:24 semiosis octi: that sounds dangerous
22:25 semiosis in pre-3.3.0 you would just have to merge the two into one, then delete the other
22:25 semiosis but since 3.3.0 resolving split brain is a bit more involved
22:25 semiosis @split-brain
22:25 glusterbot semiosis: (#1) learn how to cause split-brain here: http://goo.gl/nywzC, or (#2) To heal split-brain in 3.3, see http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/ .
22:26 semiosis more involved, but the idea should still work
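
In outline, the 3.3 procedure from the post glusterbot links: on the brick holding the copy you want to discard, remove the file and its gfid hard link, then let self-heal copy the good replica back. Paths and the gfid below are placeholders:

    # on the server with the bad copy (paths are on the brick, not the client mount)
    rm /data/brick1/path/to/file
    rm /data/brick1/.glusterfs/ab/cd/abcd1234-ef56-7890-abcd-ef1234567890   # the file's gfid link
    # then trigger a heal from any server
    gluster volume heal myvol full
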
22:36 octi So what about having a custom script that would know how to resolve the files be executed automatically?
22:39 hattenator Is there an api of sorts to get a copy of a split-brained file from a specific brick over fuse rather than just an API error?
22:39 hattenator s/API/I\/O/
22:41 octi Exactly, I understand the best thing to do is just keep split-brain from ever happening, so that if the two replicas get separated the application code just kind of freezes and stops performing work until it's fixed or one set is chosen, but it would be nice to be able to keep running and resolve conflicts at a later date; that way a sys admin can fix it during the day instead of having to wake up at 1AM
22:42 hattenator Can you get the fuse mount to freeze?  I'm used to NFS and other filesystems freezing, but it seems like gluster always throws immediate I/O errors.
22:43 tryggvil_ joined #gluster
22:49 tc00per left #gluster
23:01 hattenator But yes, I think they were talking about new split-brain support for 3.4 that's supposed to blow your mind, but it would be nice in the meantime if you could just type eg.
23:01 hattenator cp mount/file1@brick1 mount/file1
23:01 hattenator and it would fix your split-brain from the fuse client.
23:11 tc00per-afk joined #gluster
23:26 tc00per joined #gluster
