
IRC log for #gluster, 2012-12-28


All times are shown in UTC.

Time Nick Message
00:00 duffrecords hopefully when this is all restored it should be blazing fast compared to the old system
00:39 raven-np joined #gluster
00:40 swinchen_ hmmm how am I going to fix this nonlocal bind problem
00:42 semiosis swinchen_: could you please file a bug about it
00:42 glusterbot http://goo.gl/UUuCq
00:46 swinchen_ semiosis: I can... the only problem is I can't take haproxy offline to test that setting nonlocal_bind to 0 fixes it :/
00:46 andreask joined #gluster
00:47 swinchen_ I just downgraded to 3.2.7 and it is working fine.  So there is certainly something wrong with my setup and 3.3.1
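(The sysctl under suspicion can be checked and toggled at runtime without touching the haproxy configuration itself; a minimal sketch of such a test, assuming ip_nonlocal_bind is the only variable and accepting that haproxy would lose any floating-IP binds while it is set to 0:

    # check the current value
    sysctl net.ipv4.ip_nonlocal_bind
    # temporarily disable non-local binds and retry the peer operation
    sysctl -w net.ipv4.ip_nonlocal_bind=0
    gluster peer probe server2
    # restore the original setting afterwards
    sysctl -w net.ipv4.ip_nonlocal_bind=1

The server name is hypothetical; whether this isolates the 3.3.1 regression is exactly what the bug filed below is meant to establish.)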
00:47 semiosis well you could still file a bug :)
00:47 glusterbot http://goo.gl/UUuCq
00:48 swinchen_ will do
00:51 swinchen_ semiosis: any idea what component it would be?  glusterd?
00:51 semiosis sounds good
00:51 semiosis someone will triage the bug & correct any of those details
00:52 swinchen_ or core.
00:52 swinchen_ Ok
01:01 swinchen_ https://bugzilla.redhat.com/show_bug.cgi?id=890587
01:01 glusterbot <http://goo.gl/xoC4p> (at bugzilla.redhat.com)
01:01 glusterbot Bug 890587: high, unspecified, ---, kparthas, NEW , peers thought to be localhost when net.ipv4.ip_nonlocal_bind = 1
01:01 swinchen_ there she be
01:09 glusterbot New news from newglusterbugs: [Bug 890587] peers thought to be localhost when net.ipv4.ip_nonlocal_bind = 1 <http://goo.gl/xoC4p>
01:42 mynameisdeleted joined #gluster
01:50 mynameisdeleted joined #gluster
02:00 kevein joined #gluster
02:03 mynameisdeleted joined #gluster
02:36 raven-np joined #gluster
02:38 raven-np joined #gluster
02:44 raven-np joined #gluster
02:56 mohankumar joined #gluster
03:49 raven-np joined #gluster
04:06 jermudgeon joined #gluster
04:42 vpshastry joined #gluster
04:44 vijaykumar joined #gluster
04:44 Humble joined #gluster
05:05 overclk joined #gluster
05:06 sgowda joined #gluster
05:09 melanor9 joined #gluster
05:17 greylurk joined #gluster
05:18 bulde joined #gluster
05:20 bfoster joined #gluster
05:20 melanor91 joined #gluster
05:32 shylesh joined #gluster
05:32 shylesh__ joined #gluster
05:32 shylesh_ joined #gluster
05:33 raghu joined #gluster
05:38 melanor9 joined #gluster
05:43 shylesh joined #gluster
05:43 shylesh__ joined #gluster
05:43 shylesh_ joined #gluster
05:44 rastar joined #gluster
05:59 hagarth joined #gluster
06:06 melanor9 joined #gluster
06:13 shylesh joined #gluster
06:20 isomorphic joined #gluster
06:40 glusterbot New news from newglusterbugs: [Bug 890618] misleading return values of some functions. <http://goo.gl/WsVnD>
06:45 ramkrsna joined #gluster
06:45 ramkrsna joined #gluster
06:47 ngoswami joined #gluster
06:49 hata_ph joined #gluster
06:51 hata_ph hi there...i am using glusterfs 3.3.1 on centos6.3 and i want to setup as storage for my email server...using postfix and dovecot
06:52 hata_ph everything went well on the glusterfs configuration but i cannot log on to my webmail...it complains about a subscriptions.lock file error
06:52 hata_ph has anyone used glusterfs on an email server before?
06:52 hata_ph mainly dovecot
06:52 hata_ph anyone?
06:56 hata_ph anyone with experience using glusterfs with dovecot?
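(Dovecot's documentation for mail stores on shared or clustered filesystems suggests tuning its locking and caching behaviour; whether these settings resolve the subscriptions.lock error reported here is an assumption, not something confirmed in the channel. A sketch of the dovecot.conf settings usually pointed at in this situation:

    # dovecot.conf -- settings often suggested for shared/clustered storage;
    # a starting point for testing, not a confirmed fix for this error
    mmap_disable = yes        # avoid mmap where coherence across nodes is weak
    mail_fsync   = always     # fsync after writes so other nodes see them
    lock_method  = dotlock    # dotlock instead of fcntl/flock for index locking
)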
07:06 rgustafs joined #gluster
07:15 bala1 joined #gluster
07:20 jtux joined #gluster
07:22 overclk joined #gluster
07:29 vpshastry joined #gluster
07:46 hata_ph anyone use glusterfs with dovecot?
07:59 andreask joined #gluster
08:03 melanor9 joined #gluster
08:04 shylesh_ joined #gluster
08:04 shylesh joined #gluster
08:05 ctria joined #gluster
08:12 jtux joined #gluster
08:26 3JTAACZHY joined #gluster
08:28 shireesh joined #gluster
08:49 dobber joined #gluster
08:50 melanor9 joined #gluster
08:53 vimal joined #gluster
08:59 tjikkun_work joined #gluster
09:13 Dave2 joined #gluster
09:17 ctria joined #gluster
09:22 hagarth joined #gluster
09:24 bala1 joined #gluster
09:36 rastar1 joined #gluster
09:38 Humble joined #gluster
10:05 bala1 joined #gluster
10:15 Tekni joined #gluster
10:15 melanor9 joined #gluster
10:34 elyograg joined #gluster
10:40 hagarth joined #gluster
11:17 morse joined #gluster
11:23 vpshastry joined #gluster
11:34 vpshastry1 joined #gluster
11:52 rastar joined #gluster
11:56 vimal joined #gluster
12:18 vimal joined #gluster
13:21 rastar joined #gluster
13:31 duerF joined #gluster
13:32 shylesh joined #gluster
13:54 shylesh joined #gluster
13:55 jtux joined #gluster
14:05 raven-np joined #gluster
14:06 hagarth joined #gluster
14:21 vijaykumar left #gluster
14:21 tru_tru joined #gluster
14:36 a_key joined #gluster
14:36 a_key Hi Guys.
14:36 a_key I have a question about xattr
14:37 a_key I know that it's possible to use/set/read the "user." namespace/domain with gluster.
14:37 a_key It doesn't seem to be possible to use the security. namespace tho.
14:38 a_key Could you please advise on why this hasn't been implemented?
14:39 a_key The background for this question is that samba4 is using security.NTACL xattr which is not replicated by glusterfs so it's impossible to have samba4 shares with proper ACL implementation.
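(The behaviour a_key describes is easy to demonstrate from a client mount; a minimal sketch with hypothetical paths and attribute values:

    # on a FUSE-mounted gluster volume, user. xattrs round-trip fine
    setfattr -n user.test -v hello /mnt/gluster/somefile
    getfattr -n user.test /mnt/gluster/somefile
    # the security. namespace is not passed through, so samba4's ACL xattr
    # (security.NTACL) fails -- expected to return an error per the report above
    setfattr -n security.NTACL -v dummy /mnt/gluster/somefile
)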
14:47 stopbit joined #gluster
14:50 chirino joined #gluster
15:21 wushudoin joined #gluster
15:24 chirino joined #gluster
15:41 3JTAACZHY left #gluster
15:46 Kins joined #gluster
15:49 dhsmith joined #gluster
16:17 chirino joined #gluster
16:55 Humble joined #gluster
17:02 sunus joined #gluster
17:06 sunus joined #gluster
17:22 copec Has anybody gotten Gluster 3.3 to build on Solaris 11.1 in here...by chance?
17:29 Mo___ joined #gluster
17:29 melanor9 joined #gluster
17:33 Bullardo joined #gluster
18:02 JoeJulian @glossary
18:02 glusterbot JoeJulian: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
18:02 isomorphic joined #gluster
18:08 duffrecords joined #gluster
18:12 vimal joined #gluster
18:16 JoeJulian Good morning.
18:17 JoeJulian copec: Not that I've heard about. There's been some effort made in that direction some time back, but I haven't heard anything in a long time.
18:19 copec JoeJulian, I'll let you know if I make progress on it. I'm not a developer but would like to head more in that direction, so I'm working through getting it to build myself
18:21 JoeJulian copec: If you could, do the stuff on the ,,(hack) page and submit your patches that'd be awesome.
18:21 glusterbot copec: The Development Work Flow is at http://goo.gl/ynw7f
18:21 copec okay, thanks
18:24 copec JoeJulian, sorry if this has been asked before:  Is there any sort of distributed file locking that could work in combination with georeplication?
18:25 copec so that two datasets could replication to each other with both having files that are opened and used
18:25 copec *replicate
18:31 dbruhn Joe I am seeing this in my NFS log, any clue what I am seeing
18:31 dbruhn Server and Client lk-version numbers are not same, reopening the fds
18:31 glusterbot dbruhn: This is normal behavior and can safely be ignored.
18:34 JoeJulian copec: Not yet. That will come with two-way georeplication.
18:34 copec thanks
18:34 copec sweet
18:36 duffrecords our old GlusterFS system used software RAID 10 but only because there were LVs on top, so we could divide the storage into OS partitions and one big brick directory.  I've since moved the OS to a new, separate disk, freeing up the storage disks to be used as bricks only.  does software RAID make sense anymore now that there's no LVM or is that just a layer of unnecessary overhead at this point?
18:39 dbruhn Does anyone have any idea why I would be seeing this many errors? http://dpaste.org/6Nx1x/
18:39 glusterbot Title: dpaste.de: Snippet #215733 (at dpaste.org)
18:44 JoeJulian duffrecords: personal preference, it seems. I like having lvm on drives and carving out individual bricks per drive using gluster replication to provide the fault tolerance. Some don't.
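(A minimal sketch of the layout JoeJulian describes, one LVM volume and one brick per data disk, with hypothetical device, volume group, and mount names:

    # one PV/VG/LV per data disk, one XFS filesystem per LV, one brick per mount
    pvcreate /dev/sdb
    vgcreate vg_brick1 /dev/sdb
    lvcreate -n lv_brick1 -l 100%FREE vg_brick1
    mkfs.xfs -i size=512 /dev/vg_brick1/lv_brick1
    mkdir -p /data/brick1
    mount /dev/vg_brick1/lv_brick1 /data/brick1

Fault tolerance then comes from gluster replication across servers rather than from RAID underneath each brick.)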
18:46 JoeJulian dbruhn: Looks to me like those files are split-brain.
18:48 melanor9 joined #gluster
18:50 dbruhn my log is just overflowing with that, it never stops
18:53 dbruhn Here is my split brain info http://pastie.org/5590003
18:53 glusterbot Title: #5590003 - Pastie (at pastie.org)
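(In 3.3 the self-heal daemon can list the affected entries directly, which is normally the first step before deciding which copy of each file to keep; a minimal sketch, with a hypothetical volume name:

    # list files gluster currently considers split-brain
    gluster volume heal myvol info split-brain
    # overall self-heal backlog for the volume
    gluster volume heal myvol info
)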
19:11 carlosmp left #gluster
20:05 daMaestro joined #gluster
20:16 melanor9 joined #gluster
20:26 tziOm joined #gluster
20:28 melanor91 joined #gluster
20:28 tziOm I am just starting to look at gluster for backing fs for cifs nas. How does replication and scaling work with glusterfs?
20:28 H__ strace of a find on my local glusterfs mount shows that some files get newfstatat over a 100 times. Why would this happen ?
20:36 duffrecords I just increased the read-ahead value for each block device in my software RAID array.  should I change the value for the RAID device as well (i.e. /dev/md0)?
20:37 duffrecords because it is now lower than the value of each block device
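(Block-layer read-ahead can be inspected and set per device; a minimal sketch with hypothetical device names:

    # read-ahead is reported and set in 512-byte sectors
    blockdev --getra /dev/sdb
    blockdev --getra /dev/md0
    # set the md device's read-ahead (here 4096 sectors = 2 MiB)
    blockdev --setra 4096 /dev/md0
)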
20:37 H__ tziOm: gluster scales by adding bricks, on either the same server(s) or on other servers
20:39 JoeJulian H__: symlinks?
20:40 H__ JoeJulian: not in the directory tree I was running the find in. (I do have those elsewhere on the fs)
20:40 JoeJulian duffrecords: depends on likelihood that the next read will be sequential.
20:42 JoeJulian H__: I havent read the source for find...
20:44 duffrecords JoeJulian: I understand that part but I'm wondering if the md0 device should match the individual block devices or be a multiple of them.  before I changed anything it was 12x each block device's read-ahead value
20:45 H__ I have the same tree on a raid6 xfs box. I'll compare the behaviour
20:45 melanor9 joined #gluster
20:49 mohankumar joined #gluster
20:51 tziOm H__, and a brick is a filesystem?
20:52 tziOm H__, How does one configure gluster to replicate twice to different machines, for example?
20:55 H__ tziOm: yes, a brick is typically a single disk with a filesystem on it. Can also be a raided set of disks with 1 filesystem.
20:56 H__ the setup commands let you specify 2 copies on two separate machines if you want.
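(A minimal sketch of the two-copy setup H__ describes, with hypothetical server and brick names; the rule for how bricks pair up into replica sets is spelled out by glusterbot a little further down:

    # two-way replication across two servers
    gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1
    gluster volume start myvol
    # scale out later by adding bricks in replica-sized sets
    gluster volume add-brick myvol server3:/data/brick1 server4:/data/brick1
)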
20:56 JoeJulian duffrecords: gluster's readahead is a fd read so the only way a block-level readahead will matter is if that block level readahead cache is full of the next gluster read. If the next gluster readahead is more likely than not to fetch the next sequential block, then having the same size readahead for the block device would be beneficial. If you're using multiple clients, then it seems less likely.
20:56 JoeJulian @order
20:57 JoeJulian @replicate
20:57 glusterbot JoeJulian: I do not know about 'replicate', but I do know about these similar topics: 'replace'
20:57 andreask joined #gluster
20:57 JoeJulian damn. I thought I added that.
21:01 JoeJulian @learn brick order as Replicas are defined in the order bricks are listed in the volume create command. So \"gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1\" will replicate between server1 and server2 and replicate between server3 and server4.
21:01 glusterbot JoeJulian: The operation succeeded.
21:01 JoeJulian ~brick order | tziOm
21:01 glusterbot tziOm: Replicas are defined in the order bricks are listed in the volume create command. So \"gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1\" will replicate between server1 and server2 and replicate between server3 and server4.
21:01 JoeJulian @forget brick order
21:01 glusterbot JoeJulian: The operation succeeded.
21:02 JoeJulian @learn brick order as Replicas are defined in the order bricks are listed in the volume create command. So "gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1" will replicate between server1 and server2 and replicate between server3 and server4.
21:02 glusterbot JoeJulian: The operation succeeded.
21:04 duffrecords JoeJulian: thanks.  that makes more sense
21:04 melanor91 joined #gluster
21:06 duffrecords I'll just adjust the RAID volume and leave the block devices the way they were.  there will be many clients in this system
21:06 GLHMarmot joined #gluster
21:07 tziOm About gluster and quota, does it support per-directory quotas?
21:09 JoeJulian Let me ,,(rtfm) and see if I can make heads or tails of what it says about that...
21:09 glusterbot Read the fairly-adequate manual at http://goo.gl/E3Jis
21:10 JoeJulian looks like that's how it works.
21:11 JoeJulian I can't remember if it checks quotas from the top down or the bottom up though.
21:14 tziOm Yeah, thats what I wonder..
21:15 m0zes root, then up. iirc.
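(Directory quotas in 3.3 are configured per path, relative to the volume root; a minimal sketch with hypothetical volume and directory names:

    gluster volume quota myvol enable
    # limit a directory (path is relative to the volume root)
    gluster volume quota myvol limit-usage /projects 10GB
    # show configured limits and current usage
    gluster volume quota myvol list
)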
21:23 melanor9 joined #gluster
21:40 duffrecords does anyone have recommendations for calculating sunit and swidth for XFS on RAID 10?  all I can find on the web are suggestions for RAID 5
21:47 duffrecords never mind, I just found some
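(For mkfs.xfs the usual rule of thumb is su = the array's chunk size and sw = the number of data-bearing drives, which for RAID 10 is half the drive count since each stripe member is mirrored; a worked sketch, assuming a hypothetical 12-disk RAID 10 with a 512 KiB chunk:

    # RAID 10, 12 drives, 512 KiB chunk -> 6 data-bearing drives
    # su = chunk size, sw = data drives (full stripe = su * sw = 3 MiB)
    mkfs.xfs -d su=512k,sw=6 /dev/md0
)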
21:59 nueces joined #gluster
22:19 y4m4 joined #gluster
22:57 duffrecords I noticed in our old fstab we used direct-io-mode=disable but I can't remember why.  what does that option do?
22:59 JoeJulian duffrecords: That errored more quickly back when fuse didn't support direct-io-mode rather than silently hanging for some arbitrary timeout. It's no longer necessary with current kernels.
23:01 duffrecords as in 2.6.32?
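(For reference, the option being discussed goes in the mount-options field of the fstab entry; a sketch with hypothetical server, volume, and mount-point names:

    # /etc/fstab -- glusterfs native (FUSE) mount with direct-io disabled
    server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,direct-io-mode=disable  0 0
)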
23:02 H__ JoeJulian: 'good' news : I see the same weird behaviour when running find on a non-gluster system (which holds an rsynced copy of the directory tree)
23:09 daMaestro joined #gluster
23:34 isomorphic joined #gluster
23:48 Ryan_Lane joined #gluster
23:49 Ryan_Lane I have one host that is having trouble with a gluster mount while the rest are fine...
23:49 Ryan_Lane when I try to access the mount, a bunch of directories come back with Invalid argument
23:50 Ryan_Lane other gluster mounts work fine. it's one specific mount
23:50 Ryan_Lane my logs show constant disconnects
23:51 Ryan_Lane and I'm getting log entries like: E [dht-common.c:1372:dht_lookup] 0-editor-engagement-home-dht: Failed to get hashed subvol for /fran
23:52 Ryan_Lane servers don't show any errors in the logs
23:58 Ryan_Lane hm. weird. it shows two of the bricks as down
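(A minimal sketch of the usual first checks when a client shows bricks down while other clients are fine; volume name and client log path are hypothetical, since the FUSE log is named after the mount point:

    # from any server: are all brick processes and the NFS/self-heal daemons online?
    gluster volume status myvol
    # is the peer cluster itself healthy?
    gluster peer status
    # on the affected client, the mount log usually names the bricks it cannot reach
    tail -f /var/log/glusterfs/mnt-myvol.log
)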
