IRC log for #gluster, 2013-08-20

All times shown according to UTC.

Time Nick Message
00:10 glusterbot New news from resolvedglusterbugs: [Bug 796195] Saved frames unwind is priniting wrong message for fxattrop <http://goo.gl/RJXvab> || [Bug 815483] extras/init.d/glusterd-Redhat stop action kills glusterfsd? <http://goo.gl/hlhJd> || [Bug 809583] Posix test failures on cifs mount <http://goo.gl/Z9OZ0> || [Bug 811539] volume status o/p incorrect <http://goo.gl/ivW8u> || [Bug 771221] init.d script for RedHat doesn't create PID file <http://goo.g
00:23 hagarth joined #gluster
00:45 chirino joined #gluster
00:55 asias joined #gluster
01:21 awheeler joined #gluster
01:26 kevein joined #gluster
01:34 harish joined #gluster
01:43 NuxRo joined #gluster
01:53 bala joined #gluster
02:27 harish joined #gluster
02:38 guy28 joined #gluster
02:39 guy28 hi, does anyone know where i can download the glusterfs virtual appliance ova? the link on the website is not working. thanks
02:47 guy28 anyone out there?
02:48 MugginsM yeah, but I have no idea
02:56 shubhendu joined #gluster
03:05 atrius joined #gluster
03:05 MugginsM should I be concerned about a billion of these: E [afr-self-heal-common.c:2160:afr_self_heal_completion_cbk] 0-storage-replicate-5: background entry self-heal failed on /
03:05 MugginsM (on a client)
03:06 MugginsM said client has glusterfs taking 2.6GB of RAM also
03:06 MugginsM which is hurty
03:08 MugginsM well, 600M RES
03:19 lalatenduM joined #gluster
03:23 saurabh joined #gluster
03:27 sprachgenerator joined #gluster
03:27 bharata joined #gluster
03:32 MugginsM drop_caches is making no difference, and the machine is swapping badly :(
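For context, drop_caches only flushes kernel-side caches (page cache, dentries and inodes); it does not release memory held by the glusterfs client process itself, which may be why it makes no difference here. The usual invocation, for reference (requires root):

    sync                                 # flush dirty pages first
    echo 3 > /proc/sys/vm/drop_caches    # 1 = page cache, 2 = dentries/inodes, 3 = both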
03:38 awheeler joined #gluster
03:40 65MAAVQ9G joined #gluster
03:40 lala_ joined #gluster
03:45 hagarth joined #gluster
03:47 atrius joined #gluster
03:52 itisravi joined #gluster
03:55 chirino joined #gluster
03:56 ppai joined #gluster
04:05 sgowda joined #gluster
04:14 matthewh joined #gluster
04:16 matthewh Hi. I've been experimenting with some gluster volumes and would like to test it using ext4. However, I've read of a nasty kernel change that breaks gluster. Is there a way around it? Has it been fixed in newer kernels, gluster versions, etc.? Am I advised to steer clear of ext4 with gluster for now?
04:17 RameshN joined #gluster
04:22 bulde joined #gluster
04:27 ngoswami joined #gluster
04:33 satheesh1 joined #gluster
04:35 aravindavk joined #gluster
04:35 shylesh joined #gluster
04:38 shruti joined #gluster
04:41 glusterbot New news from resolvedglusterbugs: [Bug 838784] DHT: readdirp goes into a infinite loop with ext4 <http://goo.gl/CO1VZ>
04:42 chirino joined #gluster
04:44 CheRi joined #gluster
04:51 vijaykumar joined #gluster
04:52 wgao joined #gluster
05:04 ngoswami joined #gluster
05:06 rjoseph joined #gluster
05:06 MugginsM so is gluster 3.4 client to a 3.3.1 server reasonably ok?
05:07 MugginsM we've got a large storage setup and can't duplicate it so have to upgrade piece by piece, testing as we go
05:07 ngoswami joined #gluster
05:10 ngoswami joined #gluster
05:29 mohankumar joined #gluster
05:29 ngoswami joined #gluster
05:34 kanagaraj joined #gluster
05:38 lalatenduM joined #gluster
05:38 hagarth joined #gluster
05:40 lalatenduM joined #gluster
05:42 satheesh1 joined #gluster
05:44 satheesh2 joined #gluster
05:45 ababu joined #gluster
05:51 raghu joined #gluster
05:55 ppai joined #gluster
05:55 shruti joined #gluster
05:57 RameshN joined #gluster
06:06 ndarshan joined #gluster
06:07 darshan joined #gluster
06:10 nshaikh joined #gluster
06:10 mooperd joined #gluster
06:16 lalatenduM joined #gluster
06:16 satheesh5 joined #gluster
06:20 jtux joined #gluster
06:28 bala joined #gluster
06:46 darshan joined #gluster
06:47 ndarshan joined #gluster
06:59 ricky-ticky joined #gluster
07:03 hybrid512 joined #gluster
07:08 timothy joined #gluster
07:10 shruti joined #gluster
07:12 tziOm joined #gluster
07:13 guigui1 joined #gluster
07:14 vimal joined #gluster
07:20 andreask joined #gluster
07:21 ababu joined #gluster
07:40 shruti joined #gluster
07:40 badone joined #gluster
07:44 vshankar joined #gluster
07:45 ngoswami joined #gluster
07:46 wgao joined #gluster
07:48 mooperd joined #gluster
08:10 bulde joined #gluster
08:16 rastar joined #gluster
08:28 mmalesa joined #gluster
08:30 ricky-ticky joined #gluster
08:37 atrius joined #gluster
08:39 ppai joined #gluster
08:41 CheRi joined #gluster
08:58 ababu joined #gluster
09:04 ujjain joined #gluster
09:13 mooperd joined #gluster
09:16 ndarshan joined #gluster
09:16 darshan joined #gluster
09:17 sgowda joined #gluster
09:21 ppai joined #gluster
09:22 duerF joined #gluster
09:23 bulde joined #gluster
09:30 harish joined #gluster
09:40 spider_fingers joined #gluster
09:46 deepakcs joined #gluster
09:47 ndarshan joined #gluster
09:50 NeatBasis joined #gluster
09:52 sgowda joined #gluster
09:54 bulde joined #gluster
09:55 mmalesa joined #gluster
09:57 toad joined #gluster
10:02 toad joined #gluster
10:10 CheRi joined #gluster
10:20 X3NQ joined #gluster
10:34 kkeithley1 joined #gluster
10:55 lpabon joined #gluster
11:13 cicero shoot, i accidentally remove-brick w/o 'start' so it dropped the brick w/o migrating data
11:15 bala joined #gluster
11:16 bala joined #gluster
11:17 hagarth joined #gluster
11:22 clag__ joined #gluster
11:23 ndarshan joined #gluster
11:24 cicero JoeJulian, semiosis: any idea how to revert a remove-brick in 3.3.1? the data is still intact
11:30 clag_ joined #gluster
11:31 clag_ left #gluster
11:34 satheesh joined #gluster
11:49 lpabon joined #gluster
11:50 bala joined #gluster
11:52 samsamm joined #gluster
12:08 clag_ joined #gluster
12:10 chirino joined #gluster
12:10 clag_ left #gluster
12:14 meghanam joined #gluster
12:14 meghanam_ joined #gluster
12:17 ricky-ticky joined #gluster
12:17 cicero oh well
12:17 cicero i figured out an alternative way to migrate the data
12:26 mohankumar joined #gluster
12:37 rcheleguini joined #gluster
12:42 mmalesa_ joined #gluster
12:56 social kkeithley_: ping
12:58 codex joined #gluster
12:59 bulde1 joined #gluster
12:59 kkeithley_ social: pong
13:00 social kkeithley_: 998967 I'm struggling with ACLs and entry-timeout=0, seems like client mount is setting different acls than server. any idea what to trace now?
13:10 mmalesa joined #gluster
13:12 kkeithley_ No, not off the top of my head. Might be a dupe of 994392
13:15 * social looks
13:17 social nah, see list of applied patches, we already have that one
13:18 hagarth social: have you disabled stat-prefetch?
13:20 social sure
13:25 robo joined #gluster
13:25 B21956 joined #gluster
13:25 B21956 left #gluster
13:25 B21956 joined #gluster
13:28 TSM joined #gluster
13:28 TSM has anyone used 3.4 with vmware clusters? seems there has been a tail-off recently in deployments using gluster
13:29 robo joined #gluster
13:40 rwheeler joined #gluster
13:40 mmalesa joined #gluster
13:41 partner is there a way to target rebalance manually at certain parts of a volume (say, check a certain dir structure and move stuff if needed)?
13:41 partner i have a distributed volume which has lots of files, and due to the 3.3.1 file-handle bug i am reaching some bricks' free-space limits...
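For reference, the 3.3.x CLI only exposes volume-wide rebalance; there does not appear to be a way to point it at a specific directory tree. A minimal sketch of the standard invocations (the volume name is illustrative):

    gluster volume rebalance myvol fix-layout start   # recalculate the layout only
    gluster volume rebalance myvol start              # fix layout and migrate data
    gluster volume rebalance myvol status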
13:47 failshell joined #gluster
13:49 spider_fingers left #gluster
13:51 bugs_ joined #gluster
13:53 nightwalk joined #gluster
13:58 social kkeithley_: imho acls are managed on the server by standard filesystem acls, am I right? and on the client they are set incorrectly, and after a lookup they are fixed to the correct value. my guess is that posix_acl_inherit_mode must be doing something wrong
13:59 social kkeithley_: but I can also be completely wrong :)
14:02 toad joined #gluster
14:03 aliguori joined #gluster
14:05 plarsen joined #gluster
14:12 B21956 left #gluster
14:16 lpabon joined #gluster
14:21 theron joined #gluster
14:25 B21956 joined #gluster
14:26 B21956 left #gluster
14:28 Technicool joined #gluster
14:28 kaptk2 joined #gluster
14:31 toad joined #gluster
14:37 [o__o] joined #gluster
14:38 [o__o] joined #gluster
14:41 [o__o] joined #gluster
14:52 zetheroo1 joined #gluster
14:52 zetheroo1 ceph or gluster? or are these two totally separate animals?
14:55 lpabon zetheroo1: afaik, they are both open source projects and both support similar features..
14:56 lpabon ceph started as an Object Store, but gluster started as a local file access store (if i'm correct)
15:02 rwheeler joined #gluster
15:07 ryant read the docs on ceph.  They are backing off of supporting a cluster filesystem and want to concentrate on the object store
15:09 ryant I'm still struggling with an AWOL command line interface for gluster.  What prevents "gluster volume heal $VOLNAME info" from running correctly?  It just returns with exit value 110
15:09 ryant anyone else see this?
15:10 daMaestro joined #gluster
15:12 awheeler joined #gluster
15:12 awheeler joined #gluster
15:13 johnmark @channelstats
15:13 glusterbot johnmark: On #gluster there have been 171641 messages, containing 7273460 characters, 1214517 words, 4852 smileys, and 642 frowns; 1065 of those messages were ACTIONs. There have been 65774 joins, 2043 parts, 63735 quits, 21 kicks, 164 mode changes, and 7 topic changes. There are currently 205 users and the channel has peaked at 226 users.
15:13 zetheroo1 how does Object Store differ from local file access store?
15:20 jebba joined #gluster
15:26 zetheroo1 left #gluster
15:27 LoudNoises joined #gluster
15:30 sprachgenerator joined #gluster
15:42 hagarth joined #gluster
16:06 dusmant joined #gluster
16:09 bulde joined #gluster
16:10 lpabon let me know if zetheroo1 shows up again, i can answer that question ^^
16:13 kkeithley_ lpabon: you can use the glusterbot to send a reply when someone rejoins or posts something
16:13 kkeithley_ @later tell lpabon the answer is 42
16:13 glusterbot kkeithley_: The operation succeeded.
16:14 neofob left #gluster
16:15 lpabon sweet, thanks kkeithley_
16:17 mtanner_ joined #gluster
16:17 robo joined #gluster
16:28 jbrooks joined #gluster
16:36 awheeler joined #gluster
16:36 andreask joined #gluster
16:49 samsamm joined #gluster
16:54 Mo__ joined #gluster
17:12 bulde joined #gluster
17:19 bulde joined #gluster
17:23 bala joined #gluster
17:32 robos joined #gluster
17:37 JoeJulian ryant: 110 would be a timeout
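For anyone hitting the same thing, the exit value is easy to capture; 110 corresponds to a timeout per JoeJulian's reply above (volume name as in ryant's question):

    gluster volume heal $VOLNAME info
    echo $?    # 110 here indicates the CLI timed out, per the discussion above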
17:40 NuxRo hi, is glusterfs' nfs server fs-cache aware in 3.4?
17:54 zerick joined #gluster
17:55 kkeithley_ no more so than it was in 3.3 I'd say
18:00 abassett left #gluster
18:06 SteveWatt joined #gluster
18:06 JoeJulian No more than it's ever been. Why would the server be aware?
18:07 SteveWatt left #gluster
18:07 SteveWatt joined #gluster
18:10 SteveWatt left #gluster
18:10 lalatenduM joined #gluster
18:12 _pol joined #gluster
18:20 andreask joined #gluster
18:25 B21956 joined #gluster
19:05 _pol joined #gluster
19:09 awheeler joined #gluster
19:09 andreask joined #gluster
19:25 Recruiter joined #gluster
19:40 jayunit100 joined #gluster
19:50 B21956 left #gluster
19:54 mattf johnmark, ping re hcfs gluster
20:02 johnmark mattf: pong
20:02 johnmark chat in 30 mins?
20:04 jag3773 joined #gluster
20:18 daMaestro joined #gluster
20:28 MugginsM joined #gluster
20:31 xdexter joined #gluster
20:31 mattf johnmark, sure, that's now!
20:31 xdexter Hello, my ~/.glusterfs folder is very big, can i erase subfolders?
20:33 johnmark mattf: heya
20:34 johnmark mattf: why yes it is
20:34 mattf hey, i'm curious why you guys went w/ the name gluster-swift-plugin
20:35 johnmark mattf: oh, that was decided by the devs
20:35 johnmark not sure why
20:35 mattf i'm pondering what the hcfs feature rpm would be
20:35 johnmark excellent question
20:36 mattf it had a horrible name of org.apache.blah.blah.blah-blahblah and gluster-hadoop
20:36 johnmark ha, yeah
20:36 johnmark so gluster-hcfs isn't kosher?
20:36 mattf err glusterfs-hadoop
20:36 mattf there should be some sort of scheme for these names
20:36 johnmark I mean, at some point, we have to stop naming all projects in gluster.org gluster-foo
20:36 mattf do we?
20:36 mattf i was thinking gluster-hdfs-plugin would be consistent, but *shrug*
20:37 johnmark well, it will get comical soon - if we have 30+ projects that all start with gluster-
20:38 johnmark any of gluster-hdfs or gluster-hdfs-plugin or gluster-hcfs* would be fine by me
20:38 SteveWatt joined #gluster
20:38 mattf johnmark, will you shake the gluster community and figure out a naming scheme that works for both up and downstream?
20:44 johnmark mattf: I can certainly try
20:45 mattf johnmark, ok, as soon as you can get a name, i can make a package for hcfs. a real package, not an rpm wrapping a binary thunk.
20:46 johnmark but sounds like you're ok with gluster-hdfs-plugin
20:46 johnmark what do you think re: hdfs vs. hcfs in the name?
20:46 hagarth1 joined #gluster
20:48 mattf makes little difference for me. i just don't want to have to rename the thing later
20:50 Twinkies_ joined #gluster
20:51 pono joined #gluster
20:51 Twinkies_ hello all, question: I have a gluster volume with 3 x 2 brick pairs for distributed replication
20:51 Twinkies_ I have data distributed on all 3 sets
20:52 Twinkies_ if I remove one set of servers,  where will the data go?
20:53 Twinkies_ can I migrate data to other server sets before removing one set?
20:55 JoeJulian "gluster volume remove-brick $vol $brick1 $brick2 start" will begin a migration process that's like a rebalance in reverse. The bricks are re-masked without the target bricks and a rebalance is forced. Once the files are moved off the brick(s) the status will show complete and you can commit the removal.
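A sketch of the full sequence JoeJulian describes, with placeholder volume and brick names (this start/status/commit flow exists in 3.3 and later):

    gluster volume remove-brick VOLNAME server1:/brick server2:/brick start
    gluster volume remove-brick VOLNAME server1:/brick server2:/brick status   # wait for the status to show completed
    gluster volume remove-brick VOLNAME server1:/brick server2:/brick commit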
20:58 Twinkies_ ok, let me try that, because if it does that then....SWEEET!
21:07 Twinkies_ it looks like a manual rebalance needs to be done for this to take place.
21:07 JoeJulian No
21:07 Twinkies_ im using 3.2.7
21:07 JoeJulian Oh! Then you're hosed.
21:07 JoeJulian Upgrade
21:08 Twinkies_ ok, so then i'm correct, it needs to be done manually after removing the bricks
21:08 JoeJulian Right
21:08 Twinkies_ it appears the data stays on the removed bricks, and issuing the rebalance command moves the data to the remaining bricks
21:08 JoeJulian Or upgrade and do it the easy way. :D
21:08 JoeJulian No
21:08 Twinkies_ it appears this is done automatically in 3.4
21:09 JoeJulian The data does stay on the removed bricks, but they're removed. There's no way for the rebalance client to get to that data.
21:09 Twinkies_ i did it and it moved it
21:09 * JoeJulian doesn't trust those results...
21:10 Twinkies_ before the rebalance the data was still on the old bricks; afterwards it was moved. i watched the command status
21:10 Twinkies_ i watched it move the data
21:10 Twinkies_ rebalance completed: rebalanced 15 files of size 5821497 (total files scanned 35)
21:11 Twinkies_ i also did a directory listing on the existing bricks and saw some of the new files there
21:11 JoeJulian As long as you're satisfied, that's all that matters. I just find the results suspect.
21:11 Twinkies_ let me check the client side
21:11 JoeJulian It's possible that you encountered some sort of happy bug. :D
21:12 Twinkies_ according to the docs, the prereq for this is that the server has fuse installed, and it does
21:12 Twinkies_ happy bug? lol
21:14 Twinkies_ ill copy more data via the client and do more testing
21:14 Twinkies_ but not sure if it's necessary, since 3.4 does this automatically...
21:14 JoeJulian Right
21:20 mrEriksson joined #gluster
21:25 mooperd joined #gluster
21:28 jporterfield joined #gluster
21:32 jag3773 joined #gluster
21:36 SteveWatt joined #gluster
21:42 kkeithley_ gluster-swift-plugin? Yeah, a long time ago. More recently its RPM package name has been glusterfs-ufo.  lpabon is getting ready to submit the new packaging to Fedora review and if it makes it through that "unscathed" the RPM package name will be glusterfs-openstack-swift.
21:43 andreask joined #gluster
21:44 jporterfield joined #gluster
21:54 fidevo joined #gluster
21:55 Ramereth johnmark: ping
21:56 basic` joined #gluster
21:58 basic` Ramereth: small files :(
21:59 JoeJulian small files don't exist.
22:03 basic` JoeJulian: any tips on getting small file performance improved?  We have git repos on glusterfs that are taking like 30 seconds to do things like 'git status'
22:04 JoeJulian make bigger files... ;)
22:05 JoeJulian Use a new kernel that has the fuse improvements.
22:05 JoeJulian a2_: Any idea which kernel version has your patches?
22:06 m0zes fuse readdirplus works with gluster 3.4?
22:06 JoeJulian Yes
22:07 m0zes did that get backported to the 3.3 line?
22:09 JoeJulian no
22:09 basic` what kernel?
22:10 m0zes 3.8 mainline iirc. not sure if it got backported in centos/redhat.
22:11 basic` gotcha
22:13 JoeJulian Looks like it was incorporated as of 3.8-rc3
22:13 robo joined #gluster
22:14 JoeJulian bug 841514
22:14 glusterbot Bug http://goo.gl/7OJaS1 is not accessible.
22:14 JoeJulian pfft...
22:15 JoeJulian I take "[fs] fuse: implement NFS-like readdirplus support (Brian Foster) [841514]" to mean it was backported as of 2.6.32-335.el6
22:15 SteveWatt joined #gluster
22:16 JoeJulian bfoster: ^^ ?
22:17 foster JoeJulian: "Patch(es) available on kernel-2.6.32-335.el6"
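So on RHEL/CentOS 6, comparing the running kernel against 2.6.32-335.el6 (the version foster cites) should indicate whether the readdirplus backport is present; a minimal check:

    uname -r    # >= 2.6.32-335.el6 should carry the fuse readdirplus backport, per the bug above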
22:17 JoeJulian excellent
22:18 foster though there were some follow on fixes from ndevos
22:18 foster let me see if I can find that bug
22:20 JoeJulian Right... this now bumps up my upgrade schedule... I may have to jump to 3.4 tonight...
22:22 foster bug 981741
22:22 glusterbot Bug http://goo.gl/gWyDIT high, high, rc, ndevos, ON_QA , BUG on dentry still in use when unmounting fuse
22:22 foster and bug 994492
22:22 glusterbot Bug http://goo.gl/t8szDx is not accessible.
22:27 basic` JoeJulian: awesome, i will need to read up on that
22:27 basic` is that automatic with 3.4?
22:28 JoeJulian looks like it is
22:28 JoeJulian Unless I'm reading the source wrong... but it looks like if fuse supports it, it uses it.
22:29 basic` that's great
22:29 basic` any idea if ubuntu 12.04 gets the backport?  I have a feeling it doesn't :)
22:29 basic` Ramereth: ^^
22:30 foster JoeJulian: FYI, iirc the following gluster commit adds a mount option: 61b0956 mount/fuse: Provide option to use/not use kernel-readdirp
22:31 foster dinner time, bbl
22:31 JoeJulian Hmm, not in mount-glusterfs
22:31 JoeJulian er, mount.glusterfs
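Assuming the option name from commit 61b0956 carries through to the client binary as use-readdirp (an assumption; it is evidently not plumbed into mount.glusterfs at this point), a hedged sketch of passing it directly, with placeholder server, volume and mount point:

    # hypothetical flag name based on the commit message; verify against 'glusterfs --help'
    glusterfs --volfile-server=server1 --volfile-id=myvol --use-readdirp=no /mnt/myvol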
22:32 JoeJulian @yum repo
22:32 glusterbot JoeJulian: The official community glusterfs packages for RHEL (including CentOS, SL, etc), Fedora 17 and earlier, and Fedora 18 arm/armhfp are available at http://goo.gl/s077x. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates repository.
22:32 Ramereth basic`: lol, so we should rebuild the workstations with CentOS 6 after all eh? ;)
22:33 SteveWatt joined #gluster
22:33 tg2 joined #gluster
22:43 a2_ JoeJulian, rhel 6.4 should have it backported
22:51 andreask joined #gluster
22:55 andreask joined #gluster
23:05 mooperd__ joined #gluster
23:18 plarsen joined #gluster
23:19 duerF joined #gluster
23:19 SteveWatt left #gluster
23:28 an joined #gluster
23:30 matthewh joined #gluster
23:31 matthewh Hi I have a question regarding placement of data in gluster.
23:32 matthewh If you export multiple bricks from a node, use all the nodes and bricks for a volume, and are using distributed/replicated, will the files be on both nodes or just on 2 separate bricks, possibly on the same node?
23:34 verdurin joined #gluster
23:37 matthewh I'm thinking of using some AWS instances for storage and using ephemeral storage as they provide more stable performance than the EBS disks. But ephemeral disks get wiped when your instance reboots, so I need to be sure that the data is at least replicated to other nodes. I will also be backing up.
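On the placement question: in a distribute-replicate volume, replica sets are formed from consecutive bricks in the order they are passed to volume create, so ordering the bricks so that each pair spans two servers keeps both copies on different nodes. A minimal sketch (hostnames and paths are illustrative):

    gluster volume create myvol replica 2 \
        server1:/export/brick1 server2:/export/brick1 \
        server1:/export/brick2 server2:/export/brick2
    # replica pairs: (server1 brick1, server2 brick1) and (server1 brick2, server2 brick2)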
23:41 sprachgenerator joined #gluster
23:44 JoeJulian @3.4
23:44 glusterbot JoeJulian: (#1) 3.4 sources and packages are available at http://goo.gl/zO0Fa Also see @3.4 release notes and @3.4 upgrade notes, or (#2) To replace a brick with a blank one, see http://goo.gl/bhbwd2
23:44 JoeJulian @3.4 upgrade notes
23:44 glusterbot JoeJulian: http://goo.gl/SXX7P
23:52 jporterfield joined #gluster
23:52 JoeJulian F!
23:54 jebba joined #gluster
23:56 JoeJulian ... well that didn't work...
23:59 JoeJulian dammit... how did subvolume hashes suddenly start failing when I upgraded a single server to 3.4.0 from 3.3.1
