
IRC log for #gluster, 2013-10-01


All times shown according to UTC.

Time Nick Message
00:00 harish_ joined #gluster
00:10 tryggvil joined #gluster
00:31 sprachgenerator joined #gluster
00:49 lpabon joined #gluster
00:49 chirino joined #gluster
00:57 rcheleguini joined #gluster
00:58 rcheleguini_ joined #gluster
01:23 sprachgenerator joined #gluster
01:33 mohankumar joined #gluster
01:36 daMaestro joined #gluster
01:38 tryggvil joined #gluster
01:53 johnbot11 joined #gluster
01:54 johnbot11 Somewhat of a rookie with Ubuntu and trying to install semiosis' 3.4 version of glusterfs-client on an Ubuntu machine. It keeps installing an older version of 3.2 from what I assume is the standard Ubuntu repo. Any tips?
01:54 johnbot11 add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.4
01:55 johnbot11 apt-get install glusterfs-client
01:56 johnbot11 glusterfs 3.2.5 built on Jan 31 2012 07:39:58
01:56 johnbot11 My understanding is that it would install the newest available
01:56 johnbot11 from the ppa
01:57 johnbot11 On Ubuntu 12.04.2 LTS
01:59 semiosis johnbot11: you need to run apt-get update after adding (or removing) a ppa
01:59 johnbot11 duh thanks!
01:59 semiosis yw
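
The fix here is just a refreshed package index: after adding (or removing) a PPA, apt only sees the new packages once apt-get update has run. A minimal sketch of the whole sequence, assuming the same semiosis 3.4 PPA as above:

    sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.4
    sudo apt-get update                 # refresh the package index so the PPA's packages are visible
    sudo apt-get install glusterfs-client
    glusterfs --version                 # should now report 3.4.x instead of the distro's 3.2.5
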
02:02 kshlm joined #gluster
02:10 jesse joined #gluster
02:20 sjoeboo joined #gluster
02:30 DV joined #gluster
02:52 ProT-0-TypE joined #gluster
03:29 satheesh joined #gluster
03:30 davinder joined #gluster
03:32 Guest34797 joined #gluster
03:34 lkthomas joined #gluster
03:35 Durzo joined #gluster
03:58 johnbot11 joined #gluster
04:00 johnbot11 joined #gluster
04:10 Technicool joined #gluster
04:21 raz joined #gluster
04:23 kPb_in_ joined #gluster
04:45 raz joined #gluster
04:53 ngoswami joined #gluster
04:55 CheRi_ joined #gluster
04:57 CheRi_ joined #gluster
04:58 jskinner joined #gluster
05:17 Technicool joined #gluster
05:23 harish joined #gluster
05:42 social_ joined #gluster
05:42 nshaikh joined #gluster
06:02 jskinner joined #gluster
06:02 ngoswami joined #gluster
06:11 psharma joined #gluster
06:20 jtux joined #gluster
06:57 Dga joined #gluster
06:57 eseyman joined #gluster
06:58 davinder joined #gluster
07:01 ekuric joined #gluster
07:11 ctria joined #gluster
07:12 andreask joined #gluster
07:15 hybrid512 joined #gluster
07:21 DV joined #gluster
07:29 kPb_in_ joined #gluster
07:45 ricky-ticky joined #gluster
07:59 mgebbe_ joined #gluster
08:00 raz joined #gluster
08:01 jcsp joined #gluster
08:02 mbukatov joined #gluster
08:10 harish joined #gluster
08:12 TomKa joined #gluster
08:14 GabrieleV joined #gluster
08:17 duerF joined #gluster
08:41 atrius joined #gluster
08:43 StarBeast joined #gluster
08:53 pkoro joined #gluster
08:54 ngoswami joined #gluster
09:01 rgustafs joined #gluster
09:16 Maxence joined #gluster
09:16 purpleidea avati: a2: fyi, i appreciate your email response, i'm working on some example code to better explain my response. thanks for your patience!
09:18 Dga joined #gluster
09:48 quique_ joined #gluster
10:01 stickyboy joined #gluster
10:38 edward2 joined #gluster
10:43 pkoro joined #gluster
10:46 stickyboy joined #gluster
10:46 satheesh1 joined #gluster
10:47 kkeithley1 joined #gluster
10:49 ngoswami joined #gluster
10:51 social_ Hi, I probably asked this before but I didn't see an answer: can one call mount -o remount on gluster? Is there a way for the native mount to do a remount after a package update?
10:59 kkeithley_ social_: do you mean package update on the servers?
11:06 NeatBasis joined #gluster
11:10 social_ kkeithley_: no, client package update
11:18 andreask joined #gluster
11:36 kkeithley_ Hmm, I don't know whether -o remount will kill and restart the glusterfs daemon (fuse bridge) or not. I'll have to try it and see.
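
One way to answer kkeithley_'s question empirically is to compare the PID of the client-side glusterfs process before and after the remount; if it changes, the fuse bridge was restarted (and the new binary is in use). A minimal sketch, assuming a native mount at /mnt/gluster (hypothetical path):

    pgrep -f -l 'glusterfs.*/mnt/gluster'   # note the PID of the client process
    mount -o remount /mnt/gluster
    pgrep -f -l 'glusterfs.*/mnt/gluster'   # an unchanged PID means the old binary is still serving the mount
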
11:37 TomKa joined #gluster
11:41 jcsp joined #gluster
11:51 NeatBasis joined #gluster
11:52 social_ kkeithley_: in the ideal case I think gluster could benefit from fuse and do downtime-less updates. It's a very high-level idea so it won't be easy, but since you keep versions compatible you could just spawn a new process, handle new fds from there, and either slowly drain the old fds or just migrate them to the new process
12:09 fyxim joined #gluster
12:27 dmojoryder joined #gluster
12:29 dmojoryder gluster is reporting a split-brain on the top-level directory of a brick. Is there any easier way to clear this up than deleting that directory and .glusterfs?
12:30 rcheleguini joined #gluster
12:31 social_ dmojoryder: isn't it just xattr split brain?
12:31 social_ dmojoryder: for example different permissions
12:33 dmojoryder social_: it does say meta-data self-heal failed
12:40 social_ dmojoryder: I think I always used something like this http://gluster.org/pipermail/gluster-users/2010-November/005943.html
12:40 glusterbot <http://goo.gl/v8yVdQ> (at gluster.org)
12:41 social_ sec I'll look for my notes
12:43 social_ dmojoryder: ./clusterExec.py -m gluster01 gluster02 gluster03 gluster04 gluster05 gluster06  -- 'getfattr -d -e hex -m trusted.afr  /mnt/gluster/Exporter  | grep trusted.afr | cut -d"=" -f1 | while read afr; do setfattr -x ${afr} /mnt/gluster/Exporter ; done ; getfattr -d -e hex -m trusted.afr  /mnt/gluster/Exporter/table  | grep trusted.afr | cut -d"=" -f1 | while read afr; do setfattr -x ${afr} /mnt/gluster/Exporter/table ; done' << in other words on all b
12:45 dmojoryder social_: thanks. I do see the inconsistencies in the trusted.afr values between the 2 bricks in a replica
12:47 social_ I'm not sure if you shouldn't just delete it on one of the bricks, I would ask someone more "experienced"
12:47 B21956 joined #gluster
12:47 dmojoryder social_: if I understand correctly this uses setfattr to make the trusted.afr between two bricks in a replica consistent
12:49 social_ no, it just removes the attr - see the setfattr man page: Remove the named extended attribute entirely.
12:49 social_ imho selfheal should fix it afterwards
12:58 dmojoryder social_: thanks. I removed the trusted.afr. split brain message has gone away and the new trusted.afr was created and is consistent.
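
For a single brick, the one-liner social_ pasted above boils down to the following; a minimal sketch assuming /mnt/gluster/Exporter is the split-brain directory on that brick (hypothetical path). Run it on each brick of the replica and let self-heal recreate consistent trusted.afr attributes, as happened here:

    getfattr -d -e hex -m trusted.afr /mnt/gluster/Exporter \
      | grep trusted.afr | cut -d'=' -f1 \
      | while read afr; do setfattr -x "$afr" /mnt/gluster/Exporter; done
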
13:00 manik1 joined #gluster
13:11 chirino joined #gluster
13:12 jskinner_ joined #gluster
13:17 johnmark JoeJulian: 'tis true. I'm trying to hire a XX chromosome person as we speak - for community management
13:23 bennyturns joined #gluster
13:25 kkeithley_ Hey, just because someone's X chromosomes is broken is no reason to discriminate.
13:26 kkeithley_ All men are XX, just one of the Xs is busted.
13:26 kkeithley_ s/chromosomes/chromosome/
13:26 glusterbot What kkeithley_ meant to say was: Hey, just because someone's X chromosome is broken is no reason to discriminate.
13:28 Norky joined #gluster
13:34 giannello joined #gluster
13:35 Norky joined #gluster
13:35 giannello hi everyone, I have a strange problem using glusterfs nfs translator (gluster 3.4 on Ubuntu 12.04 LTS)
13:38 giannello trying to copy a folder (using cp -a) to the nfs mountpoint results in wrong permissions on the target folder
13:38 giannello the source folder is 0755, the destination is 0700
13:38 giannello this applies also to all the subfolders
13:38 giannello (and files)
13:40 Norky cp won't necessarily preserve file mode. use rsync
13:40 Norky ahh, you were using the -a option
13:41 giannello i can try with rsync too
13:41 giannello but this is quite strange
13:41 NeatBasis joined #gluster
13:41 Norky sorry, that should have done the right thing
13:41 giannello chmod'ing them works correctly
13:42 jcsp joined #gluster
13:43 giannello rsync works as expected
13:44 giannello and cp -rp doesn't, I tried again
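
A quick way to reproduce and compare what giannello is describing; a minimal sketch assuming the gluster NFS mount is at /mnt/nfs (hypothetical path):

    mkdir -p /tmp/src/sub && chmod 0755 /tmp/src /tmp/src/sub
    cp -a /tmp/src /mnt/nfs/via-cp
    rsync -a /tmp/src/ /mnt/nfs/via-rsync/
    stat -c '%a %n' /mnt/nfs/via-cp /mnt/nfs/via-rsync   # both should show 755; here only the rsync copy did
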
13:46 zaitcev joined #gluster
13:48 bala joined #gluster
13:53 mgebbe_ joined #gluster
13:54 giannello ll
13:58 social_ kkeithley_: aaah service glusterd restart does not restart brick daemons as well
13:58 giannello please note that I'm using ZFS as brick filesystem, using xattr=sa, and bricks are mounted with options rw,noatime,xattr
14:00 kkeithley_ huh? it does for me.
14:01 theron joined #gluster
14:02 social_ kkeithley_: http://paste.fedoraproject.org/43381/80636110/ after I did service glusterd stop
14:02 glusterbot Title: #43381 Fedora Project Pastebin (at paste.fedoraproject.org)
14:02 social_ I have several of these :/
14:03 theron_ joined #gluster
14:04 social_ I can see in the init script that it does killproc on glusterd, that's all. Does glusterd forward this to bricks?
14:04 kkeithley_ stopping glusterd shouldn't stop the volume
14:04 manik1 joined #gluster
14:05 social_ so when I update package what's the correct way to ensure new version of gluster is running?
14:05 kkeithley_ but restarting glusterd, e.g. after a software update that kills the glusterfsds and glusterfs processes, should restart the volume
14:06 social_ kkeithley_: it doesn't look like bricks were restarted after update
14:06 kkeithley_ this is on RHEL or CentOS?
14:07 social_ scientific linux
14:07 social_ 3.4.1 rpm
14:07 kkeithley_ stopping glusterd does not stop any volumes, it does not kill glusterfsd or glusterfs processes.
14:08 kkeithley_ The rpm update scripts do kill the glusterfsd and glusterfs processes before installing the new bits.
14:09 social_ I just reinstalled gluster package, no change
14:09 social_ same process is still running
14:09 kkeithley_ restarting glusterd after an update should automatically restart the volume(s) because they were running previously and not overtly stopped via the gluster cli
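
In other words, the update flow kkeithley_ describes on a server looks roughly like this; a minimal sketch (the brick daemons should come back under the new version once glusterd is restarted):

    yum update glusterfs\*                # the RPM scriptlets stop glusterfsd/glusterfs
    service glusterd restart              # glusterd restarts the bricks of volumes that were running
    gluster volume status                 # confirm the bricks are online again
    ps -C glusterfsd -o pid,lstart,args   # start times should show freshly spawned brick daemons
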
14:11 kaptk2 joined #gluster
14:11 kkeithley_ File a bug. JoeJulian runs a lot of CentOS. If he saw this I expect he would have yelled about it.
14:11 kkeithley_ let me fire up my SL VM
14:12 badone joined #gluster
14:12 social_ file /etc/init.d/glusterfsd
14:12 social_ is in scripts of glusterfs-server
14:12 social_ but not provided by any package
14:13 eseyman joined #gluster
14:13 kkeithley_ % rpm -q --whatprovides /etc/init.d/glusterfsd
14:13 kkeithley_ glusterfs-server-3.4.0-2.el6.x86_64
14:14 social_ I'm working on 3.4.1 from qa release
14:14 social_ rebuilt in our koji but that should not be different (I just imported the package)
14:15 kkeithley_ yes, I'm updating, hang on
14:15 social_ hmm it should be there I can see it in spec
14:15 jlec joined #gluster
14:15 jlec Hi
14:15 glusterbot jlec: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
14:16 kkeithley_ % cat /etc/redhat-release
14:16 kkeithley_ Scientific Linux release 6.4 (Carbon)
14:16 kkeithley_ % rpm -qa | egrep gluster
14:16 kkeithley_ glusterfs-fuse-3.4.1-1.el6.x86_64
14:16 kkeithley_ glusterfs-server-3.4.1-1.el6.x86_64
14:16 kkeithley_ glusterfs-3.4.1-1.el6.x86_64
14:16 kkeithley_ glusterfs-libs-3.4.1-1.el6.x86_64
14:16 kkeithley_ glusterfs-cli-3.4.1-1.el6.x86_64
14:16 kkeithley_ I'm going to get yelled at for multi-line pastes
14:16 jlec do I get it correct that it is impossible to use to subdirs of a mounted partition as different bricks in different volumes?
14:17 Guest34797 joined #gluster
14:17 jlec *two subdirs of course
14:17 social_ kkeithley_: rpm -q --whatprovides /etc/init.d/glusterfsd works fine for you?
14:18 jbautista joined #gluster
14:18 social_ kkeithley_: I have the same packages installed but rebuilt in our koji, so if it works fine for you then the issue is probably in our buildsystem
14:20 kkeithley_ no, now I get error: file /etc/init.d/glusterfsd: No such file or directory
14:22 kkeithley_ I created and started a volume. then did service glusterd stop, and glusterfsd is still running.
14:22 jclift joined #gluster
14:23 kkeithley_ yum erase gluster\*, glusterfsd is still running
14:23 kkeithley_ kill -9 glusterfsd
14:24 kkeithley_ yum install, service glusterd start, glusterfsd is running again w/ a new pid
14:25 kkeithley_ I'll have to reconstruct why /etc/init.d/glusterfsd is not installed.
14:25 bugs_ joined #gluster
14:28 social_ I'm not sure about -9
14:29 social_ it should work with kill shouldn't it?
14:29 kkeithley_ seems so
14:29 social_ kkeithley_: will you make ticket for this so I could track it and remove hack for this?
14:30 kkeithley_ ordinary kill of glusterfsd, service glusterd restart, new glusterfsd is running with new pid
14:31 kkeithley_ and then close it with "WORKSFORME"?
14:31 kkeithley_ ;-)
14:31 social_ =]
14:31 sprachgenerator joined #gluster
14:32 dneary joined #gluster
14:33 Technicool joined #gluster
14:33 kkeithley_ I used the bits from (,,yum)
14:34 kkeithley_ I used the rpms from @yum
14:34 kkeithley_ @yum
14:34 glusterbot kkeithley_: The official community glusterfs packages for RHEL (including CentOS, SL, etc) are available at http://goo.gl/42wTd5 The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates (or updates-testing) repository.
14:36 kkeithley_ I can't imagine why the RPMs you built in your Koji would be different than the ones built in the Fedora Koji, but it might be worth trying them, just to be sure
14:39 sprachgenerator joined #gluster
14:39 phox any thoughts on this?  [rdma.c:1079:gf_rdma_cm_event_handler] 0-rdma-attempt-client-0: cma event RDMA_CM_EVENT_REJECTED, error 8
14:40 phox got the daemon apparently binding to something, which is good, but now this
14:42 harish joined #gluster
14:48 plarsen joined #gluster
14:50 glusterbot New news from newglusterbugs: [Bug 1014242] /etc/init.d/glusterfsd not provided by any package in 3.4.1 on rhel <http://goo.gl/RSSnY5>
14:52 social_ kkeithley_: ^^ it's the same in the packages from your link. They want to use an init script which they don't provide; imho that's a bug
14:53 ctria joined #gluster
14:53 giannello ok, looks like there's a problem with xattrs -> 0-ftp-posix: Extended attributes not supported (try remounting brick with 'user_xattr' flag)
14:54 giannello this is strange, btw: zfs _does_ support xattrs
14:54 phox you can also turn them off
14:54 phox bu they're on by default, yes
14:54 phox so one would assume you have them on
14:55 phox 'zfs get xattr <whatever>'
14:56 giannello by default, they are set to "on", right
14:56 giannello but I changed it to "sa"
14:57 phox on or yes or something
14:57 phox yeah mine are set to 'sa' as recommended, works fine
14:57 phox so, weird
14:57 giannello it's a fresh install
14:57 giannello less than 4 hours :)
14:57 phox dunno, then.  one would probably want to trace gluster and see what it's calling that's actually failing and then if it's legitimately failing, reproduce that against ZFS and complain @ ZFS if that's the case
14:58 lpabon joined #gluster
14:59 giannello does gluster need xattrs?
15:01 kkeithley_ yes, gluster needs xattrs
15:01 phox you can try setting your own on something on the FS to see if they're alive
15:01 phox setfattr
15:01 phox package 'attr'
15:01 giannello there's another pretty strange thing. one of the machines of my cluster (the only one I rebooted to test failover) has no "xattr" option in zfs mount
15:02 phox RTFM for the rest because I don't actually use it for anything myself
15:02 phox yeah my ZFSen are mounted xattr
15:02 phox so that'd probably be related
15:02 giannello weird
15:02 giannello because that error I have in the logs is on a node _with_ xattr in the options
15:03 recidive joined #gluster
15:04 phox hah.
15:04 phox well, as I said: isolate.  test xattrs manually with setfattr yourself.
15:04 phox well, and I guess getfattr as well just to confirm
15:04 giannello getfattr on the mounted nfs or on the brick?
15:05 phox on the brick, where gluster would be trying to use them
15:08 giannello well, it returns nothing
15:09 giannello and setfattr returns an error
15:09 giannello yay!
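
The manual test phox suggests, spelled out; a minimal sketch assuming the brick lives at /bricks/ftp (hypothetical path). The user. namespace is enough to prove xattrs work at all; gluster itself stores trusted.* attributes:

    touch /bricks/ftp/xattr-test
    setfattr -n user.test -v works /bricks/ftp/xattr-test
    getfattr -n user.test /bricks/ftp/xattr-test   # should print user.test="works"
    zfs get xattr <pool>/<dataset>                 # and confirm the dataset property is 'on' or 'sa'
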
15:09 bode joined #gluster
15:12 bode I am having a problem with my gluster. I have it set up with 16 drives in distributed-replicated mode. One of my drives died, so I used replace-brick, but it seems to be broken at that point... I was going to try to remove the replica (vid9 and vid10) but it says the replace is in progress on the volume. Error below:
15:12 bode [root@v12 vidstor]# gluster volume remove-brick vidstor replica 2 v12:/vidstor/vid9 v12:/vidstor/vid10 force Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y Replace brick is in progress on volume vidstor. Please retry after replace-brick operation is committed or aborted
15:14 phox giannello: yep.  bisection.  "is problem this side or that side of _______"
15:14 bode It will not let me abort the replace-brick operation, because vid10 is offline. Is there a way to force kill the replace-brick process so I can just remove the gluster replica and re-create it?
15:19 manik joined #gluster
15:29 bode anyone here?
15:30 sticky_afk joined #gluster
15:30 stickyboy joined #gluster
15:31 zerick joined #gluster
15:35 phox I am, but I have no idea as I've never used replace-brick
15:35 phox heh
15:37 JuanBre joined #gluster
15:38 jclift left #gluster
15:39 JuanBre hi, I am having a strange problem. Somehow one of the nodes changed its uuid. I changed it back again but now executing "gluster peer status" on the problematic server I get..
15:39 JuanBre "Peer is connected and Accepted (Connected)"
15:40 JuanBre at one of the other nodes...
15:40 JuanBre everything is normal aside from that...
15:40 LoudNoises joined #gluster
15:42 cfeller joined #gluster
15:47 johnmark bode: hmm... it's unusually quiet today
15:47 kPb_in joined #gluster
15:48 johnmark bode: I would send a note to the list
15:51 kPb_in___ joined #gluster
15:55 bode Ok, I'll probably try that... Thanks johnmark
15:55 jlec I want to use two subdirs of a mounted partition as bricks, but I always get the lovely "Brick may be containing or be contained by an existing brick" error.
15:55 jlec I already tried clearing the xattr, removing and recreating the folder and restarting the daemon.
15:55 jlec But nothing helps
15:55 jlec Any additional help.
15:55 jlec Any ideas?
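
If the two sibling subdirectories genuinely don't nest inside any registered brick path, one thing worth checking is leftover gluster metadata from an earlier volume on those directories or their parents, which also triggers this family of errors. A minimal sketch of the usual cleanup, assuming /export/brick-a is one of the subdirectories (hypothetical path); remove the attributes only where they are actually present:

    getfattr -d -m trusted -e hex /export/brick-a /export   # look for trusted.glusterfs.volume-id / trusted.gfid
    setfattr -x trusted.glusterfs.volume-id /export/brick-a
    setfattr -x trusted.gfid /export/brick-a
    rm -rf /export/brick-a/.glusterfs
    service glusterd restart
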
15:56 JuanBre what might cause a "peer is connected and accepted" status?
16:07 sticky_afk joined #gluster
16:07 stickyboy joined #gluster
16:14 jbautista joined #gluster
16:14 fyxim joined #gluster
16:16 RedShift joined #gluster
16:17 RedShift hi all, how can I control at what point, when one gluster server can't reach another, it's considered dead and normal operations continue?
16:21 RedShift test
16:29 ndk joined #gluster
16:40 jbrooks joined #gluster
16:51 Mo__ joined #gluster
16:56 semiosis RedShift: quorum?
16:56 semiosis see 'gluster volume set help' for related options, also ,,(rtfm) may talk about that
16:56 glusterbot Read the fairly-adequate manual at http://goo.gl/E3Jis
16:59 RedShift I don't think that's what I'm looking for, I'm looking for the case where the backend connection fails
17:00 RedShift (the link that does the replication)
17:09 rotbeard joined #gluster
17:14 bennyturns joined #gluster
17:15 semiosis RedShift: what kind of client are you using?  fuse or nfs?
17:15 semiosis also... why do you want to control this?  is gluster not doing what you expect it to?
17:19 RedShift nfs
17:20 RedShift no it's not, when I break the backend connection I expected gluster to resume operations after it determined the other node to be unreachable
17:32 semiosis did you wait for the ,,(ping-timeout)?
17:32 glusterbot The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
17:35 RedShift yes, it was way beyond that. I'll try lowering that value and see what it does
17:35 semiosis and, just to be sure, was your client connected to the machine that stayed up?
17:37 RedShift yes, both nodes were up, but their backend link failed
17:37 RedShift so I have one client that connects to node1, I break the backend connection between node1 and node2
17:38 RedShift (pulled the cable as a test scenario)
17:39 semiosis well that would cause ping timeout
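
For reference, the timeout glusterbot describes is a per-volume option; a minimal sketch with a hypothetical volume name. Lowering it trades tolerance of brief network blips for faster failover, which is exactly the trade-off being discussed:

    gluster volume set myvol network.ping-timeout 10   # seconds; the default is 42
    gluster volume info myvol                          # the change shows up under 'Options Reconfigured'
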
17:39 SpeeR joined #gluster
17:39 davinder joined #gluster
17:44 jskinner_ joined #gluster
17:49 noseyb joined #gluster
17:50 noseyb I'm trying to use webdav on glusterfs, and the write performance is horrible, I was hoping someone could help
17:50 noseyb I wrote a post here: https://www.centos.org/modules/newbb/viewtopic.php?topic_id=44901
17:50 glusterbot <http://goo.gl/B6hd3w> (at www.centos.org)
17:51 phox are you using lots of little files with webdav?
17:51 noseyb I confirmed that I can write to a non-gluster mount far above the 25MB/s problem
17:51 noseyb No, I am writing 2GB, 8GB and 16GB test files (all zeros)
17:51 phox huh.  yeah, that's pretty unimpressive =/
17:51 noseyb But I thought that perhaps webdav was writing in 2k chunk sizes or something
17:52 noseyb So I was trying to use some performance translators
17:52 phox it could be doing something like that.  Gluster is not so hot with small transactions.
17:52 phox Yeah.  Sounds like a plan.... unfortunately not something I am familiar with.
17:53 noseyb I had no luck, I still cannot write that fast even after changing writebehind in the Translator
17:53 giannello how could glusterfs work if xattrs are not working correctly?
17:53 noseyb I'm out of ideas, I can SCP the same file much faster, at over 60MB/s
17:54 noseyb to the same destination, so I know gluster should be faster - it's somehow the combination of webdav and gluster that is slow
17:55 kkeithley_ giannello: it can't, gluster absolutely requires xattrs.
17:55 semiosis noseyb: maybe try to strace apache?  http://edoceo.com/exemplar/strace-multiple-processes
17:55 glusterbot <http://goo.gl/GnNWo> (at edoceo.com)
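
The strace approach boils down to attaching to all of the httpd workers at once and looking at the size of the writes; a minimal sketch (paths are examples):

    strace -f -tt -T -e trace=write,writev -o /tmp/dav-trace \
        $(pgrep httpd | sed 's/^/-p /')
    # consistently small (e.g. 2k) writes in the trace would point at the DAV block size rather than at gluster
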
17:55 giannello so there's something absolutely wrong, kkeithley_
17:55 giannello it _works_, I can see data split into distributed/replicated bricks
17:55 giannello I can mount it with both glusterfs native client and nfs
17:56 manik joined #gluster
17:56 giannello but still, setfattr on a zfs volume holding the brick fails.
17:56 kkeithley_ You can't even create a volume if you don't have xattrs. If you created a volume then there must be xattrs.
17:56 semiosis noseyb: where is your dav lock database?  maybe try moving that off gluster just to see if that changes performance... http://httpd.apache.org/docs/2.2/mod/mod_dav_fs.html#davlockdb
17:56 glusterbot <http://goo.gl/YA8qxa> (at httpd.apache.org)
17:57 noseyb Oh, the dav lock db is on gluster yes
17:57 noseyb the idea was that since I write from multiple places and webdav exposes the same share from each server I wanted them to share the davlockdb
17:58 noseyb I'll try that and see if it helps
17:58 semiosis a nice idea :)
17:58 giannello ok no wait, I was using setfattr badly
17:58 giannello :D
17:59 noseyb oh, no its in /var/lib/dav/lockdb
17:59 giannello but I still can't explain the "Extended attributes not supported (try remounting brick with 'user_xattr' flag)" error
17:59 JoeJulian The default DAV_READ_BLOCKSIZE is 2k
17:59 kkeithley_ giannello: this is with ZFS?
17:59 giannello yes
18:00 giannello zfs with xattr = sa
18:00 * phox wonders if this is an issue with ZFS 0.6.2
18:00 phox <- 0.6.1 here and 0.6.0 worked fine with Gluster as well
18:00 giannello I don't know if the same error happens with  xattr = on
18:01 phox you can switch it whenever, so I'd try that.
18:01 giannello I'm using gluster 3.4, trying to get nfs failover working
18:01 JoeJulian noseyb: If you want a bigger DAV_READ_BLOCKSIZE, you'll have to change it and recompile the plugin.
18:01 phox and did you confirm if playing with setfattr works?
18:01 giannello and actually it works :)
18:01 giannello phox, yes, "setfattr -n user.something -v something somefile" works
18:01 kkeithley_ Don't know much about ZFS and the license makes it impossible for us, as Red Hat employees, to really do much of anything with it. But ISTR that ZFS allows system and user xattrs to be enabled and disabled separately.
18:01 phox ok cool
18:02 phox kkeithley_: well, you could build it :P
18:02 phox but that's about it
18:02 phox anyways, works fine with Gluster on top of it here
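
For reference, the ZFS property being discussed; a minimal sketch assuming a dataset named tank/brick (hypothetical). It can be switched at any time, as phox notes, and only affects how newly written xattrs are stored:

    zfs get xattr tank/brick      # 'on' = directory-based xattrs, 'sa' = stored in the dnode
    zfs set xattr=sa tank/brick
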
18:02 JoeJulian noseyb: It appears with a cursory glance over PUT that it uses that same buffer size there as well.
18:02 giannello phox, what version of gluster?
18:02 phox 3.4.1 now.  was 3.3.1 -> 3.4.0 -> 3.4.1
18:03 phox giannello: ZFS 0.6.1 though.  Not upgrading right now because I don't have time.
18:03 giannello k, I moved to 3.4 because of the improvements with nfs
18:03 kkeithley_ I could, and then I'd have to fend off the license zealots
18:03 phox kkeithley_: pff
18:03 giannello but I'll try downgrading tonight, to see if 3.2 works as good as 3.4 for my needs
18:03 phox you just can't ship it and shouldn't look at the code
18:03 giannello (floating IP with nfs mount)
18:03 theron johnmark, ping
18:04 phox pretty much anyone not using ZFS to store lots of crap on Linux is living in backwards-I-hate-myself land :P
18:04 giannello phox, not really, we have impressive results with databases
18:04 giannello both mysql/mariadb and postgresql
18:04 phox giannello: you're saying vs using ZFS, or on top of ZFS, or on top of other filesystems?
18:05 * phox confused by statement
18:06 phox I have disks in the mail that will be arriving and turning into 385T of usable ZFS space :d  yum
18:06 giannello phox, running mariadb with dataset on zfs against mariadb on xfs
18:07 giannello 3x to 6x improvement in query speed, using compression
18:07 phox giannello: ZFS should be mostly boring.  there's a bit of a hit because it has its own internal block cache, which is stupid
18:07 phox as in ZFS is faster?  I presume with lz4?
18:07 andreask joined #gluster
18:07 giannello yes, zfs is _way_ faster, also using gzip compression
18:07 phox huh
18:07 phox nice
18:08 JoeJulian license zealots roam around chanting, "Pie GNU Domine, plebis licentiam libero." and whacking themselves in the head with a copy of the GPL on stone tablets, right?
18:08 giannello btw, gotta go home, I'll let you know something about this
18:08 * giannello <3 gluster
18:08 phox JoeJulian: or a copy of the O'Reilly acid book :P
18:08 giannello and I really need to push this to production ASAP
18:09 kkeithley_ yeah, something like that, along with the anti-{iPhone,Mac,$anything-from-Apple} people
18:09 phox JoeJulian: you can keep them out of your office by using a data projector that projects incompatibly-licensed code into the air, so it projects on their tablets if they enter :d
18:09 phox that counts as linking!  GTFO hypocrite!
18:09 jlec left #gluster
18:10 noseyb JoeJulian: I am writing the file, that parameter affects that too?
18:12 JoeJulian noseyb: As far as I can tell without becoming an expert. Might either test that or ask in #httpd
18:13 noseyb Thank you very much, Can you tell me which file you found that in?  I'll look at downloading the source
18:19 lalatenduM joined #gluster
18:20 kkeithley_ JoeJulian: I'm trying to remember why I would have taken the /etc/init.d/glusterfsd out of the el[56] packages. Do you remember anything or was it a brainfart on my part?
18:20 JoeJulian It boggled me.
18:21 kkeithley_ ndevos: ^^^
18:21 kkeithley_ And you didn't say anything? ;-)
18:22 kkeithley_ new packages for el[56] with /etc/init.d/glusterfsd back in are in the repo on d.g.o.
18:22 kkeithley_ Sigh
18:26 johnmark theron: pong
18:27 johnmark theron: find me on hangout
18:28 theron johnmark, ok
18:30 * phox prefers el{5,6} although it's less portable (:
18:54 JoeJulian Dammit. I did it again. I thought I had my puppet manifests set to a specific version. Last night the servers all upgraded and caused a split-brain.
19:02 Guest34797 joined #gluster
19:02 kkeithley_ ouch
19:09 giannello joined #gluster
19:11 sprachgenerator joined #gluster
19:18 ofu_ joined #gluster
19:20 Gugge_ joined #gluster
19:20 VeggieMeat_ joined #gluster
19:21 arusso- joined #gluster
19:22 _br_- joined #gluster
19:22 raz[zZzz] joined #gluster
19:22 pull_ joined #gluster
19:22 Ramereth|home joined #gluster
19:22 Ramereth|home joined #gluster
19:22 haakon__ joined #gluster
19:22 the-me_ joined #gluster
19:22 ingard__ joined #gluster
19:23 torbjorn1__ joined #gluster
19:23 xymox joined #gluster
19:23 mjrosenb_ joined #gluster
19:24 masterzen_ joined #gluster
19:25 raz_ joined #gluster
19:25 ueberall joined #gluster
19:25 ueberall joined #gluster
19:25 mharriga1 joined #gluster
19:27 wirewater joined #gluster
19:28 yosafbridge joined #gluster
19:28 arusso joined #gluster
19:28 avati joined #gluster
19:29 [o__o] joined #gluster
19:45 semiosis JoeJulian: were you running gitlab on glusterfs?
19:46 JoeJulian no, it's on my co-lo box.
19:46 semiosis ah
19:55 semiosis trying to run it out of a gluster volume on an ec2 micro instance
19:55 semiosis if it works, will be just barely
19:55 georgeh|workstat joined #gluster
19:58 kkeithley joined #gluster
20:05 marcoceppi joined #gluster
20:05 TDJACR joined #gluster
20:05 noseyb JoeJulian: Been trying to recompile apache with the 8k block size, going crazy on it
20:05 TDJACR joined #gluster
20:06 JoeJulian Heh, never done that myself so I'm not sure if I'd be any help.
20:07 noseyb Just saying I haven't given up, still working on it
20:08 noseyb I have apache working, but futzing with webdav again, it wasn't enabled by default
20:08 NeatBasis joined #gluster
20:08 mrEriksson joined #gluster
20:12 a2 semiosis, is that for hosting a git repo on glusterfs?
20:13 giannello joined #gluster
20:20 Guest34797 joined #gluster
20:21 semiosis yes, we use gitlab at work for internal projects.  it's been running on a vm in our office but the vm host machine died so i want to move it to ec2
20:21 semiosis anything i should be aware of?
20:22 semiosis a2: ^
20:27 noseyb JoeJulian: it appears that parameter isn't for writes, only reads
20:32 a2 semiosis, try mounting clients with glusterfs --fopen-keep-cache --attribute-timeout=5 --entry-timeout=5 and use a client with readdirplus
20:33 a2 readdirplus in FUSE
20:33 a2 git should run well with all those settings
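
Spelled out, the client invocation a2 suggests looks roughly like this; a minimal sketch with hypothetical server/volume/mountpoint names (recent mount.glusterfs versions may also accept these as -o options):

    glusterfs --volfile-server=server1 --volfile-id=gitlab-vol \
        --fopen-keep-cache --attribute-timeout=5 --entry-timeout=5 \
        /mnt/gitlab
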
20:33 semiosis will all that work on 3.1.7? :)
20:33 a2 ooo
20:33 a2 3.4.1
20:33 a2 please :)
20:33 semiosis yeah i know
20:34 a2 you also need rhel 6.4+ for readdirplus in fuse
20:34 semiosis ubuntu?
20:34 a2 sorry i really don't know :( i don't follow ubuntu kernel patches
20:35 semiosis ok thanks.  right now i'm not even getting the ruby on rails app to start up
20:35 semiosis so still a few things to solve before getting to git
20:35 a2 ok
20:48 zerick joined #gluster
20:54 badone joined #gluster
20:57 noseyb left #gluster
21:00 semiosis @upgrade
21:00 glusterbot semiosis: I do not know about 'upgrade', but I do know about these similar topics: '3.3 upgrade notes', '3.4 upgrade notes'
21:01 semiosis @3.3 upgrade notes
21:01 semiosis @3.4 upgrade notes
21:01 glusterbot semiosis: http://goo.gl/qOiO7
21:01 glusterbot semiosis: http://goo.gl/SXX7P
21:01 zerick joined #gluster
21:03 semiosis so the 3.4 notes say you can jump from 3.1 to 3.4 by following the 3.3 upgrade procedure (with vers 3.4 instead)
21:03 semiosis a2: any thoughts on that?
21:12 theron_ joined #gluster
21:23 khushildep joined #gluster
21:23 ccha joined #gluster
21:24 daMaestro joined #gluster
21:24 zaitcev joined #gluster
21:24 khushildep left #gluster
21:25 Shdwdrgn joined #gluster
21:26 B21956 joined #gluster
21:31 jskinner joined #gluster
21:32 a2 semiosis, in theory it should..i must admit i haven't tried
21:32 B21956 left #gluster
21:42 semiosis ok ruby on rails on glusterfs doesn't seem to be working
21:43 semiosis now thinking i'll just rsync from local disk to the gluster volume every 5 minutes, and when puppet makes a new gitlab machine it will rsync from the gluster volume to local
21:43 semiosis is that hackey enough?!
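
For completeness, the fallback semiosis describes would amount to something like the following; a minimal sketch with hypothetical paths:

    # on the gitlab machine: push local state to the gluster volume every 5 minutes (crontab entry)
    */5 * * * * rsync -a --delete /home/git/repositories/ /mnt/gluster/gitlab-backup/
    # on first boot of a replacement machine (e.g. provisioned by puppet): pull it back
    rsync -a /mnt/gluster/gitlab-backup/ /home/git/repositories/
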
21:54 Remco "doesnt seem to be working" is not very descriptive
21:55 Remco Lots of things to think about when you do distributed storage
21:57 semiosis Remco: trying to run gitlab out of a gluster volume.  it just makes the gluster client burn cpu, strace shows tons of tiny iops afaict
21:57 semiosis performance kills
21:57 Remco Mount as NFS
21:57 semiosis good idea
21:58 Remco Had the same thing for a PHP setup
21:58 Remco But then that gave other trouble, so still thinking about it
21:58 semiosis yeah not generally a fan of nfs mounts
21:59 Remco You basically need to make sure that you cache locally
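
The NFS route Remco suggests means pointing the kernel NFS client at gluster's built-in NFS server, which serves NFSv3 over TCP; a minimal sketch with hypothetical server/volume names:

    mount -t nfs -o vers=3,mountproto=tcp server1:/gitlab-vol /mnt/gitlab
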
21:59 a2 semiosis, the options i specified make gluster work MUCH BETTER with small files, especially git kind of workload
22:00 a2 but unfortunately you need a rhel6.4+ client and 3.4.1 glusterfs
22:00 semiosis a2: great, i'll try running gitlab right out of gluster after i upgrade
22:00 semiosis oh right, the rhel
22:00 * Remco saves those hints for later
22:00 semiosis is that readdirplus fuse patch in a mainline kernel?
22:00 a2 a recent version of ubuntu kernel might have picked up readdirplus
22:00 a2 yes
22:01 semiosis ok that works
22:01 Remco Basically to have nice gluster you need to be on the bleeding edge
22:02 Remco *lots* of fixes in recent versions, but there is slow pickup of those
22:03 a2 the first version of readdirplus was in upstream kernel 3.9
22:03 a2 however lots of fixes have gone in since..
22:03 a2 3.12 has all the fixes
22:03 Remco So it'll be a while :/
22:03 a2 3.11 has most actually
22:05 a2 yeah.. no readdirplus fixes after 3.11
22:05 semiosis Remco: saucy salamander will be out at the end of this month and you can install 3.11 from the kernel ppa - http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.11.3-saucy/
22:05 glusterbot <http://goo.gl/iH9src> (at kernel.ubuntu.com)
22:06 semiosis and thats just the easy way to do it :)
22:07 a2 oh cool.. that should be a good release for gluster clients
22:10 FooBar joined #gluster
22:10 Remco I don't do ubuntu
22:10 Remco Debian here
22:13 Remco Ah well, time for bed. Good luck and good night
22:14 semiosis later
22:19 social_ a2: entry timeout does what? clients won't know if file was deleted on server for 5 sec if they used it before?
22:28 pdrakewe_ joined #gluster
23:05 d-fence joined #gluster
23:50 jskinner joined #gluster
