IRC log for #gluster, 2013-01-17


All times shown according to UTC.

Time Nick Message
00:00 rhys well i enabled it. i'll have to force them to reboot to recreate the bug. i've added it to /etc/sysfs.conf so it should get applied at boot.
00:00 rhys i should say, i disabled it.
00:00 JoeJulian ok
00:00 JoeJulian I hope that's it.
00:02 rhys JoeJulian, i appreciate this. i'm using proxmox, a KVM frontend. these are like their appliance installs, they don't change much but they do roll their own kernel. i know RH has been working on making gluster work for exactly this usecase, but last job I couldn't get their engineers to sell it with support to use with RHEV
00:02 rhys so I've been afraid of deeply buried bugs
00:09 glusterbot New news from resolvedglusterbugs: [Bug 764679] do not tamper with libexecdir in rpm spec <http://goo.gl/iRHLY> || [Bug 764623] Avoid hardcoding libexecdir <http://goo.gl/M60T6>
00:11 glusterbot New news from newglusterbugs: [Bug 895831] auth.allow limited to 1024 chars in 3.2.5 and perhaps later versions, can you increase to something much bigger or allow unlimited length or see bug 861932 <http://goo.gl/2H6wW> || [Bug 895656] geo-replication problem (debian) [resource:194:logerr] Popen: ssh> bash: /usr/local/libexec/glusterfs/gsyncd: No such file or directory <http://goo.gl/ZNs3J> || [Bug 878663] mount_ip and re
00:11 JoeJulian Looks like bugzilla must be back up.
00:20 copec What do I need to get gluster to build with georeplication enabled?
00:22 JoeJulian yum install glusterfs-georeplication
00:22 copec What if I am building it from source on Solaris :)
00:23 JoeJulian They I think Larry is standing by to help. ;)
00:23 JoeJulian s/They/Then/
00:23 glusterbot What JoeJulian meant to say was: Then I think Larry is standing by to help. ;)
00:25 copec hehe
00:26 rhys JoeJulian, it did it again
00:27 rhys JoeJulian, though I did find out why the "gluster volume status" returns operation failed
00:27 rhys i used essentially clusterssh. Issuing the same command on two boxes at the same time will cause one to fail
00:27 JoeJulian Oh, right! Can't acquire the lock...
00:28 rhys now i can't be sure its gluster at all.
00:30 rhys i'll pull this from the back of my memory. kvm allows you to change the disk caching options of virtual machines. previously I turned on write-back cache (though how it's different from write-back unsafe I don't know) and it might fix this problem
00:30 rhys no. it has to be gluster. i can't go to the actual NFS mount and touch a file from the command line
00:47 manik joined #gluster
00:47 JoeJulian Could the load be to the point where ucarp is failing over?
00:48 rhys JoeJulian, no.
00:49 rhys well.
00:50 rhys well i'll be damned
00:50 rhys yup
00:50 rhys puts it into split brain
01:00 rhys how on earth do i do HA then? the fuse client?
01:01 rhys is the fuse client faster/slower than NFS?
01:01 JoeJulian yes
01:01 JoeJulian hehe
01:01 rhys slower for some things, faster for others?
01:02 JoeJulian It's faster for throughput but slower for directory lookups due to the kernel caching them for nfs. Since you're not going to be doing directory lookups, fuse is overall going to provide better throughput.
01:02 kevein joined #gluster
01:02 JoeJulian Even better, with 3.4 qemu will provide direct library interface to the volume eliminating fuse or nfs.
01:03 rhys and the fuse client only looks at the 'volumeserver' at mount time to get the volume information, from then on it knows there are multiple peers providing the data?
01:04 JoeJulian Right. The fuse client connects directly to all the servers in the volume.
01:04 rhys with these two machines, what i need to be able to do is have a VM running, power off one of the peers, and never have the VM notice until I can get the other peer back up.
01:04 JoeJulian I do that with fuse all the time.
01:04 JoeJulian Well, not all the time, I wouldn't get much else done, but frequently enough to be confident.
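A minimal sketch of the fuse mount being discussed here, with placeholder server, volume, and mount-point names:

    # Native fuse mount; the named server is only consulted at mount time to
    # fetch the volume layout, then the client talks to every brick directly.
    mount -t glusterfs server1:/myvolume /mnt/myvolume

    # Roughly equivalent fstab entry; backupvolfile-server (where the client
    # version supports it) gives a fallback server for that initial fetch.
    server1:/myvolume  /mnt/myvolume  glusterfs  defaults,backupvolfile-server=server2  0 0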
01:05 rhys :D got it. baaaah. now i have to figure out how to make this GUI do what I want and change out its 'storage' object
01:06 rhys thank you so much, i had a hypertensive manager over my shoulder thinking the world was ending even though this is a test system not even in production.
01:06 JoeJulian Heh, I'm sure he's got things riding on it as well.
01:08 spn joined #gluster
01:16 jmpf joined #gluster
01:33 edong23_ joined #gluster
01:41 glusterbot New news from newglusterbugs: [Bug 895831] auth.allow limited to 1024 chars in 3.2.5 and perhaps later versions, can you increase to something much bigger or allow unlimited length or see bug 861932 <http://goo.gl/2H6wW> || [Bug 895656] geo-replication problem (debian) [resource:194:logerr] Popen: ssh> bash: /usr/local/libexec/glusterfs/gsyncd: No such file or directory <http://goo.gl/ZNs3J> || [Bug 878663] mount_ip and re
01:43 glusterbot New news from resolvedglusterbugs: [Bug 764679] do not tamper with libexecdir in rpm spec <http://goo.gl/iRHLY> || [Bug 764623] Avoid hardcoding libexecdir <http://goo.gl/M60T6>
01:45 dhsmith joined #gluster
01:48 shireesh joined #gluster
02:03 nik__ joined #gluster
02:13 Oneiroi joined #gluster
02:28 lala joined #gluster
02:29 rastar joined #gluster
03:04 raven-np joined #gluster
03:14 sashko joined #gluster
03:32 bharata joined #gluster
03:37 jag3773 joined #gluster
03:44 dustint_ joined #gluster
03:58 dustint joined #gluster
04:07 jag3773 joined #gluster
04:11 overclk joined #gluster
04:19 shylesh joined #gluster
04:34 sgowda joined #gluster
04:47 Humble joined #gluster
05:16 raghu joined #gluster
05:24 bharata joined #gluster
05:31 ramkrsna joined #gluster
05:31 ramkrsna joined #gluster
05:31 sashko joined #gluster
05:32 hagarth joined #gluster
05:32 vpshastry joined #gluster
05:33 decci joined #gluster
05:33 decci Hi
05:33 glusterbot decci: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
05:35 ngoswami joined #gluster
05:35 decci My systems are currently hosted in AWS EC2.Right now I am using load balancers, and spinning up instances based on need. The problem when we do that is, we just added another independent file system which will have to be updated manually. We need a file server for all of our servers to use, but we can't afford a single point of failure. It needs to be capable of scaling laterally, as load demands. We run 2 web environments, 1 is the r
05:35 rastar joined #gluster
05:36 decci login (Ubuntu, Apache, Python/Django), and the other is the content (Debian, NGINX, Wordpress), database is Mysql using RDS. We are not using Infiniband. I need to set up GlusterFS. Can anyone suggest?
05:38 raven-np1 joined #gluster
05:44 raven-np joined #gluster
05:51 bala joined #gluster
06:00 bharata joined #gluster
06:02 JoeJulian Sounds good to me...
06:03 sgowda joined #gluster
06:05 ramkrsna joined #gluster
06:10 vpshastry joined #gluster
06:10 sripathi joined #gluster
06:11 sashko joined #gluster
06:27 hagarth joined #gluster
06:28 sgowda joined #gluster
06:30 vpshastry joined #gluster
06:41 msgq joined #gluster
06:42 vimal joined #gluster
06:47 sripathi joined #gluster
06:51 y4m4 joined #gluster
07:11 rgustafs joined #gluster
07:20 layer3switch joined #gluster
07:24 jtux joined #gluster
07:28 Nevan joined #gluster
07:35 lala joined #gluster
07:35 sripathi joined #gluster
07:52 stickyboy joined #gluster
07:53 stickyboy Is kkeithly's .repo still the recommended one?
07:53 stickyboy For CentOS 6 RPMs?
07:54 sripathi joined #gluster
07:54 ekuric joined #gluster
07:55 JoeJulian yep
07:55 stickyboy Awesome.  Thanks, JoeJulian.
07:55 JoeJulian Since he's the package maintainer for fedora and epel, unless it's in one of those then it'll be his fedorapeople repo.
07:56 Azrael808 joined #gluster
07:56 stickyboy Ah, I see.  Then the one in EPEL is the same one anyways.
07:57 JoeJulian Isn't the one in epel 3.2?
07:58 stickyboy I don't think so... lemme check.
07:58 guigui1 joined #gluster
07:58 JoeJulian I think 3.3 is the one slated for epel-7 but 3.2's in epel-6, iirc.
07:59 Azrael808 joined #gluster
08:00 JoeJulian Anyway... off to bed with me... Have a good one.
08:00 stickyboy JoeJulian: Yeah, you're right.  3.2.7 is in EPEL-6.
08:00 stickyboy JoeJulian: Alright.  Night, dude.
08:02 jtux joined #gluster
08:07 hagarth joined #gluster
08:09 ctria joined #gluster
08:19 puebele1 joined #gluster
08:20 tjikkun_work joined #gluster
08:22 xavih joined #gluster
08:25 deepakcs joined #gluster
08:25 deepakcs Hi, if i am getting " /usr/sbin/gluster: unrecognized option '--xml' - it means the gluster installed is older, rite ?
08:26 deepakcs after installing latest gluster from gluster.org.. i see this...
08:26 deepakcs <zhshzhou> deepakcs: This time I get XML error\nerror: <cliOutput><opRet>0</opRet><opErrno>0</opErrno><opErrstr /><volInfo><volumes><volume><name>testvol</name><id>853fe7a7-d760-4f5d-9128-42b460247495</id><type>0</type><status>1</status><brickCount>1</brickCount><distCount>1</distCount><stripeCount>1</stripeCount><replicaCount>1</replicaCount><transport>0</transport><bricks><brick>zhshzhouf17:/teststorage</brick></bricks><optCount>1</optCount><option
08:26 deepakcs s><option><nam
08:26 deepakcs <zhshzhou> deepakcs: Then I try to call glusterVolumesList from vdscli, it gives the same error.
08:29 ndevos deepakcs: when gluster is compiled, it checks for libxml2-devel, if that is not available, support for xml will not be included
08:29 ndevos I'm not sure if the --xml option gives an error that way, or if it just returns nothing...
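A quick way to check both points ndevos raises, assuming the binary lives at the standard path and using a placeholder volume name:

    # Was the CLI built against libxml2?
    ldd /usr/sbin/gluster | grep -i libxml2

    # Does this build accept --xml at all?
    gluster --xml volume info testvol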
08:30 sripathi joined #gluster
08:33 ultrabizweb joined #gluster
08:33 deepakcs ndevos, Ah. thanks
08:33 deepakcs ndevos, so maybe i need to recompile after installing libxml2-devel
08:34 deepakcs <zhshzhou1> deepakcs: I use the yum repo from gluster.org
08:34 deepakcs ndevos, ^^
08:34 deepakcs the person installed it from gluster.org
08:34 deepakcs so it was not compiled
08:35 ndevos deepakcs: right, then it may be an older version
08:35 deepakcs ndevos, its throwing error in xml.. the error code is 0.. errorStr is empty..
08:35 deepakcs ndevos, yum install will get a older version ?
08:35 deepakcs <zhshzhou1> deepakcs: glusterfs-3.3.1-1.fc17.x86_64
08:35 deepakcs <zhshzhou1> glusterfs-fuse-3.3.1-1.fc17.x86_64
08:35 deepakcs <zhshzhou1> glusterfs-server-3.3.1-1.fc17.x86_64
08:35 deepakcs <zhshzhou1> vdsm-gluster-4.10.3-0.84.git29f2048.fc17.edward1358394831.noarch
08:35 deepakcs ndevos, ^^
08:36 puebele joined #gluster
08:37 ndevos deepakcs: hmm, that version should support --xml, I have a 3.3.0 here that supports it too
08:37 ndevos deepakcs: does "ldd /usr/sbin/gluster" list a dependency on libxml2?
08:38 deepakcs <zhshzhou1> deepakcs: Yes, there is a dependency
08:38 deepakcs ndevos, ^
08:40 ndevos deepakcs: and that glusterfs-3.3.1-1.fc17.src.rpm contains a BuildReq on libxml2-devel, so all should be good
08:42 deepakcs ndevos, so any reason why calling the gluster API from vdsm is throwing that xml error ?
08:42 * deepakcs looks for bala
08:43 ndevos deepakcs: not sure, I do not know how the API is implemented... is there a gluster binary on the system where the API is called?
08:43 deepakcs ndevos, ya.. its just a python binding over gluster binary
08:43 deepakcs i will try to debug this
08:44 ndevos deepakcs: it is possible that the API executes a local gluster binary like 'gluster --xml --remote-host=zhshzhouf17 volume info testvol'
08:45 deepakcs ndevos, possible
08:45 ndevos deepakcs: that would mean that the local gluster binary needs to support --xml, and not only the binary on the storage server itself
08:45 deepakcs ndevos, this is not on the storage server. the test runs on the localhost, where the gluster binary was installed
08:45 deepakcs ndevos, but good point, let me check with him
08:46 deepakcs ndevos, no, that's not the case.. gluster server, client and vdsm are all on the same machine.. and we have a test to validate the gluster storage domain in vdsm
08:46 deepakcs which fails with the xml error exception from gluster.py
08:46 ndevos hmm
08:47 ndevos deepakcs: and there is only one gluster binary in the path? not /usr/local/sbin/gluster and /usr/sbin/gluster?
08:48 ndevos deepakcs: maybe add a line for debugging in gluster.py, and execute 'gluster --version' to verify?
08:49 deepakcs ndevos, will check.. need to run for a mtg now.. will try to debug.. thanks
08:49 ndevos deepakcs: sure, cya
08:56 duerF joined #gluster
09:02 gbrand_ joined #gluster
09:03 gbrand_ joined #gluster
09:05 bauruine joined #gluster
09:12 glusterbot New news from newglusterbugs: [Bug 896408] Gluster CLI does not allow setting root squashing <http://goo.gl/JgRlg>
09:15 sripathi joined #gluster
09:17 avati joined #gluster
09:23 hagarth joined #gluster
09:42 glusterbot New news from newglusterbugs: [Bug 896410] gnfs-root-squash: write success with "nfsnobody", though file created by "root" user <http://goo.gl/F764A> || [Bug 896411] gnfs-root-squash: read successful from nfsnobody for files created by root <http://goo.gl/6o6pf>
09:46 ekuric joined #gluster
10:03 bzf130_mm joined #gluster
10:10 jjnash joined #gluster
10:10 nightwalk joined #gluster
10:12 Azrael808 joined #gluster
10:25 dobber joined #gluster
10:28 kshlm joined #gluster
10:28 ramkrsna joined #gluster
10:30 Elendrys joined #gluster
10:30 Elendrys hi
10:30 glusterbot Elendrys: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:31 Elendrys I would need some help to solve a problem about the geo-replication that is failing.
10:33 Elendrys We set up two servers. The main server has the original brick, which is synced to the slave server in a simple setup. Everything went ok until yesterday. Now the geo-replication daemon restarts over and over.
10:33 deepakcs ndevos, <zhshzhou> deepakcs: It seems that the output XML of gluster misses a 'typeStr' node. It only contains a 'type' node.
10:33 deepakcs ndevos, that's the latest on the debugging of the xml error
10:34 Elendrys When i look at the volume logfile, i see a lot of "[repce:188:__call__] RepceClient: call 27528:140015194752768:1358416566.89 (xtime) failed on peer with OSError"
10:34 Elendrys OSError: [Errno 12] Cannot allocate memory
10:35 Elendrys If someone can help me to track the problem, it would be nice. thank you
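A first pass at narrowing down a failing geo-replication session, assuming standard 3.3 log paths; MASTER_VOL and SLAVE are placeholders for whatever was used when the session was created:

    # Session state as glusterd reports it
    gluster volume geo-replication MASTER_VOL SLAVE status

    # Master-side gsyncd logs usually land under this directory
    ls /var/log/glusterfs/geo-replication/MASTER_VOL/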
10:35 ndevos deepakcs: okay, but I've got no experience with that :-/
10:36 deepakcs ndevos, np, hope bala was here
10:38 ndevos deepakcs: well, if the xml structure is not as it should be, it sounds like a bug. It may be fixed in a newer version already
10:40 deepakcs ndevos, possible, i am asking him to try compiling the gluster code and use it
10:42 shireesh joined #gluster
10:44 Guest74184 joined #gluster
10:44 glusterbot New news from resolvedglusterbugs: [Bug 814052] nfs:bonnie fails during delete files operation <http://goo.gl/AfvK5>
10:45 overclk Elendrys: mind sending/attaching the logs (both geo-rep and gluster client mount)
10:46 overclk also which version of glusterfs are you using?
10:50 Elendrys i am using version 3.3.0-1
10:52 Elendrys i check the logs
10:54 bala joined #gluster
10:57 Elendrys joined #gluster
11:06 bala joined #gluster
11:19 _br_ joined #gluster
11:25 _br_ joined #gluster
11:38 _br_- joined #gluster
11:41 jtux joined #gluster
11:43 _br_ joined #gluster
11:43 tryggvil joined #gluster
11:53 edward1 joined #gluster
12:06 isomorphic joined #gluster
12:07 puebele joined #gluster
12:26 kkeithley1 joined #gluster
12:26 Guest74184 joined #gluster
12:30 hagarth joined #gluster
12:33 robert__ joined #gluster
12:45 shireesh joined #gluster
12:48 manik joined #gluster
12:50 andreask joined #gluster
12:54 aliguori joined #gluster
13:19 shireesh joined #gluster
13:20 chirino joined #gluster
13:37 Joda joined #gluster
13:38 puebele joined #gluster
13:41 Joda I'm having problem with a gluster volume where some folders have become corrupted. "Input/output error" how can i fix this?
13:50 ndevos Joda: that is probably caused by a ,,(split brain)
13:50 glusterbot Joda: I do not know about 'split brain', but I do know about these similar topics: 'split-brain'
13:50 ndevos ~split-brain | Joda
13:50 glusterbot Joda: (#1) learn how to cause split-brain here: http://goo.gl/nywzC, or (#2) To heal split-brain in 3.3, see http://goo.gl/FPFUX .
13:51 Joda thanks
13:51 Joda about self healing
13:52 Joda i was reading up and found the find <path> -noleaf -print0 | xargs -0 stat &> f.log cmd
13:52 Joda do you know which part of the cmd chain triggers healing ?
13:52 Joda is it the stat call?
13:54 ndevos yeah, the stat call does that
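Spelled out as a runnable sketch (mount path and log file are placeholders), together with the 3.3 commands that drive the self-heal daemon directly:

    # Walk the client mount and stat every file to trigger self-heal
    find /mnt/myvolume -noleaf -print0 | xargs -0 stat > /dev/null 2> heal-errors.log

    # On 3.3, the self-heal daemon can be asked to do the same work
    gluster volume heal myvolume full
    gluster volume heal myvolume info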
13:55 Joda hmm. its now working for me. Do you know what "healing" actually does? Is it settings attributes on the files/inodes? or is it fixing some metadata existing elsewhere?
13:55 ndevos but self-healing may not be possible, depending on the kind of split-brain - sometimes gluster can not tell what the 'good' copy is
13:56 Joda it's not split brain. Its not a replicated volume
13:56 lala joined #gluster
13:57 dustint joined #gluster
13:58 puebele1 joined #gluster
13:58 Joda some details: it's a 500TB volume split across 6 hosts with 6 bricks each. A problem is that some bricks are full and have even written into reserved space. Could this be what is causing the issues, that some bricks are full?
13:58 bronaugh joined #gluster
14:21 abkenney joined #gluster
14:26 jtux joined #gluster
14:30 lh joined #gluster
14:38 Gugge Joda: you want rebalance then, not healing
14:38 Gugge as far as i can tell
14:40 Joda Hi Gugge, the volume is not replicated. Do you mean to rebalance how the bricks stored the data?
14:43 Gugge yes
14:43 plarsen joined #gluster
14:43 Gugge that is what reblance does, move data between bricks
14:43 Joda ah ok. i'll look into that
14:45 Joda Could you move say a subset of data from one brick to another? For instance i have a brick which is 100% full. and another with 500GB free. Is there any way of rebalancing that could help me move say 300 GB to the other brick?
14:45 Gugge only manually
14:45 Gugge Dont do it :)
14:46 Gugge rebalance moves the data to where it should be, based on filename hash
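A sketch of the rebalance flow Gugge describes, with a placeholder volume name:

    # Recalculate the layout so new files hash onto the emptier bricks
    gluster volume rebalance myvolume fix-layout start

    # Then migrate existing files to where their name hash says they belong
    gluster volume rebalance myvolume start
    gluster volume rebalance myvolume status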
14:47 Joda hehe, well i'm not able to change the names of the files. Would it be possible to free up space on the full brick without say moving the whole brick to a new and bigger host?
14:49 Gugge i would think it would be possible to move data manually, but i cant help you how :)
14:49 nueces joined #gluster
14:49 Joda ok thanks
14:54 rwheeler joined #gluster
14:55 ninkotech_ joined #gluster
15:00 aliguori joined #gluster
15:03 QuentinF_ joined #gluster
15:03 xavih_ joined #gluster
15:03 hagarth joined #gluster
15:07 Joda joined #gluster
15:08 mnaser joined #gluster
15:09 stopbit joined #gluster
15:10 lkoranda joined #gluster
15:10 vimal joined #gluster
15:11 puebele1 joined #gluster
15:11 bugs_ joined #gluster
15:11 bulde joined #gluster
15:12 tjikkun_work joined #gluster
15:12 ultrabizweb joined #gluster
15:32 jbrooks joined #gluster
15:33 wushudoin joined #gluster
15:34 erik_ joined #gluster
15:40 vpshastry joined #gluster
15:56 plarsen joined #gluster
16:00 rastar joined #gluster
16:09 bala joined #gluster
16:36 dbruhn left #gluster
16:49 elyograg irc backlog - tl;dr
16:58 manik joined #gluster
17:01 rhys JoeJulian, did you mention about how to change gluster peer IP addresses easily?
17:01 smellis joined #gluster
17:02 sashko joined #gluster
17:03 ninkotech_ joined #gluster
17:05 raghu joined #gluster
17:12 raven-np joined #gluster
17:24 obryan joined #gluster
17:32 bauruine joined #gluster
17:34 ppinatti1 joined #gluster
17:36 ppinatti1 hello everyone, I just installed the gluster packages on two rhel nodes but cannot get the peer probe command to work, it says Probe returned with unknown errno -1
17:37 ppinatti1 firewall is turned off and from what I could see in the log, it's due to an error "XDR decoding failed"
17:38 DaveS_ joined #gluster
17:45 _br_ joined #gluster
17:47 RicardoSSP joined #gluster
17:49 sashko ppinatti1: do you have access list configured?
17:51 ppinatti1 sashko: I guess not. Just followed the admin guide
17:51 ppinatti1 sashko: do I need to configure it first?
17:51 sashko no
17:51 sashko ppinatti1: this is not an upgrade you are doing from an older version correct?
17:52 sashko brand new setup?
17:52 ppinatti1 sashko: yes, brand new
17:52 sashko hmm ok
17:52 sashko which guide are you following?
17:52 JoeJulian rhys: In a volume definition? The only way is to stop the volumes and glusterd, change every instance in any file that has it under /var/lib/glusterd, then start everything back up again -- or just delete and recreate the volumes. That's why we always recommend using ,,(hostnames).
17:52 glusterbot rhys: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
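A minimal sketch of the re-probe procedure glusterbot describes, with hypothetical hostnames:

    # From any other peer, probe the IP-addressed peer by name to update its entry
    gluster peer probe server1.example.com

    # For a new pool: probe the others by name from the first server, then probe
    # the first back by name from one of the others so it is stored by hostname too
    gluster peer probe server2.example.com   # run on server1
    gluster peer probe server1.example.com   # run on server2
    gluster peer status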
17:53 ppinatti1 sashko: http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf
17:53 glusterbot <http://goo.gl/bzF5B> (at www.gluster.org)
17:53 rhys got it.
17:53 JoeJulian ppinatti1: Did you use the ,,(yum repo)? XDR decoding error suggests differing versions.
17:53 glusterbot ppinatti1: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
17:58 _br_ joined #gluster
17:58 ppinatti1 JoeJulian: I am using the ppc packages from epel
17:58 ppinatti1 rpm -q glusterfs
17:58 ppinatti1 glusterfs-3.2.7-1.el6.ppc64
17:59 ppinatti1 JoeJulian: both machines are using this one
18:00 JoeJulian kkeithley: ^^ looks like you should build for ppc too! :O
18:00 ppinatti1 I am starting to think it might be a ppc specific problem
18:00 JoeJulian I'm wondering about that as well.
18:00 ppinatti1 let me try on x86
18:01 _br_- joined #gluster
18:01 JoeJulian If you are able to isolate a problem specific to ppc, please file a bug report.
18:01 glusterbot http://goo.gl/UUuCq
18:02 JoeJulian Hmm, I wonder if kkeithley is gone for fudcon.
18:03 _br_ joined #gluster
18:03 Mo___ joined #gluster
18:03 ppinatti1 JoeJulian: will do
18:04 JoeJulian btw... what's the hardware?
18:09 ppinatti1 JoeJulian: ibm power
18:15 erik_ joined #gluster
18:17 JoeJulian Don't they have any 1U? I'd consider it for one of my co-lo boxes but 2U increases the cost of ownership...
18:25 ppinatti1 JoeJulian: just tested running the probe from x86 and it works fine. It seems to be ppc only
18:25 ppinatti1 JoeJulian: will open a bz for that
18:29 kkeithley1 Nope, I'm not going to fudcon.
18:32 kkeithley1 There are el6/ppc rpms available from my builds in Koji @ http://koji.fedoraproject.org/koji/buildinfo?buildID=377452. This is the first time I've heard anyone asking for ppc.
18:32 glusterbot <http://goo.gl/8JJF8> (at koji.fedoraproject.org)
18:33 ppinatti1 kkeithley1: I am possibly hitting a ppc specific bug
18:34 dustint joined #gluster
18:34 Teknix joined #gluster
18:34 JoeJulian If you could try it with the koji build, we can see if the bug is still valid with current versions.
18:35 ppinatti1 JoeJulian: ok, will try
18:41 kkeithley1 JoeJulian, semiosis: Any further word on bz 895656 last night? Did hagarth or avati weigh in with anything?
18:45 JoeJulian Nope. Didn't hear from either of them.
18:45 danishman joined #gluster
18:46 kkeithley1 :-(
18:49 ppinatti1 JoeJulian: kkeithley: fyi, it works with the koji build :)
18:49 kkeithley1 <montyburns>excellent</montyburns>
18:50 ppinatti1 lol
18:50 ppinatti1 kkeithley: JoeJulian: thanks guys
18:52 dec joined #gluster
18:55 y4m4 joined #gluster
18:57 Tekni joined #gluster
19:04 chouchins joined #gluster
19:05 sashko joined #gluster
19:09 z00dax kkeithley1: ping
19:11 bennyturns joined #gluster
19:13 kkeithley1 z00dax: yes?
19:20 kkeithley1 JoeJulian, semiosis, ndevos: http://review.gluster.org/#change,4392 (I couldn't find the right username to add JoeJulian to reviewers)
19:20 glusterbot Title: Gerrit Code Review (at review.gluster.org)
19:24 maxiepax_ joined #gluster
19:26 bfoster joined #gluster
19:26 nhm joined #gluster
19:27 ndevos joined #gluster
19:27 jiffe98 joined #gluster
19:28 Hymie joined #gluster
19:28 z00dax kkeithley1: pondering what to do with the gluster*swift* subpackages.
19:29 z00dax kkeithley1: we could leave them there, in case anyone wants it - but it creates problems in our qa setup with dangling bits
19:29 kkeithley1 dangling bits?
19:29 Nicolas_Leonidas joined #gluster
19:29 Nicolas_Leonidas yo gluster I'm new to you
19:30 niv_ joined #gluster
19:32 z00dax kkeithley1: we dont ship anything that can use it
19:33 z00dax as long as the dep loop closes, repoclosure is happy, we can leave it there i guess
19:33 tryggvil joined #gluster
19:33 johnmark z00dax: I would strongly advocate its inclusion
19:33 johnmark because we're going to be doing a lot more with Swift down the road
19:33 kkeithley1 er, so what uses glusterfs then that makes that okay?
19:33 Nicolas_Leonidas I'm trying to have one folder on four amazon instances, something like /r_images, that all instances can write to and read from, with its content shared in real time among all of them
19:33 Nicolas_Leonidas is gluster something I can use for that?
19:34 johnmark Nicolas_Leonidas: so 1 folder replicated on 4 instances?
19:34 Nicolas_Leonidas johnmark: yes, and synced at all times
19:36 z00dax kkeithley1: I sure hope someone uses it, it would be quite a waste if noone did
19:37 kkeithley1 But what satisfies the dependency criteria?
19:37 johnmark kkeithley1: I think z00dax meant that they don't ship anything that can use gluster-swift
19:37 al joined #gluster
19:37 kkeithley1 yeah, I got that part
19:37 al joined #gluster
19:38 johnmark Nicolas_Leonidas: so we usually recommend that replication be 2-way. you *can* do something like 4-way, but I wouldn't recommend that for something that requires high performance
19:38 z00dax kkeithley1: so, at this point the target is 'glusterfs', if it's a case of apps - then opennebula and to some extent cloudstack is what we're targeting
19:39 johnmark Nicolas_Leonidas: I would try two things. 1. set up a 4-way synchronously replicated volume and see if it satisfies your performance requirements
19:39 johnmark if not, then 2. set up a 2-way synchronously replicated volume, and then set up geo-rep (asynchronous replication) for 2 slave nodes
19:40 johnmark the potential problem is that async rep isn't "real time" but it depends on how much latency you can tolerate there
19:40 Nicolas_Leonidas johnmark: so we have 4 instances, people can upload images to them, so the images need to be on every server, I guess it must be 4 ways
19:42 johnmark Nicolas_Leonidas: oh wait, you mean uploading to 4 instances - ah, that's easy
19:42 johnmark because each of them can mount (and write to) the same replicated Gluster volume(s)
19:43 johnmark the 4 instances would host web and/or proxy servers?
19:43 Nicolas_Leonidas johnmark: they do yes
19:43 johnmark ok
19:43 Nicolas_Leonidas they host the same website
19:44 johnmark Nicolas_Leonidas: then all you need is one replicated Gluster volume mounted by each of the 4 instances
19:44 Nicolas_Leonidas right, so that gluster volume will be on  one server, then they all mount that volume?
19:44 johnmark Nicolas_Leonidas: or you can have it distributed over two machines
19:45 johnmark that way, you won't be hitting the same instance every time you do a write
19:45 Nicolas_Leonidas distributed means master slave replication?
19:45 johnmark no
19:45 johnmark distributed means one mountable namespace that encompasses multiple servers
19:46 johnmark replicated means you take that same namespace and repeat it on other instances
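A hedged sketch of the layout johnmark outlines, with placeholder hostnames and brick paths:

    # One volume, kept in sync across two servers (replica 2); more brick pairs
    # can be added later to also distribute the namespace across machines.
    gluster volume create images replica 2 server1:/bricks/images server2:/bricks/images
    gluster volume start images

    # Each of the four web instances mounts the same volume
    mount -t glusterfs server1:/images /r_images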
19:46 Nicolas_Leonidas right, does gluster use TCP?
19:46 kkeithley1 Well, it's not going to hurt my feelings if CentOS decides not to ship glusterfs-swift-*. People can always get it from EPEL. But I do agree with johnmark that you should just ship it. At some point I'll be adding the Hadoop plug-in to the glusterfs package(s) too. It's all part of glusterfs.
19:46 johnmark Nicolas_Leonidas: yes. and UDP
19:46 Nicolas_Leonidas I was thinking of s3fs, but when I realized hadoop is using gluster, I figured it must be way more mature
19:47 kkeithley1 Packaged the way it is, people can decide to install it or not. I could argue that CentOS should not be making the decision for them, let them decide for themselves.
19:47 kkeithley1 s/Packaged the way it is/Packaged the way it is in Fedora/
19:47 glusterbot kkeithley1: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
19:48 Nicolas_Leonidas ls
19:50 Nicolas_Leonidas so what makes gluster different than nfs?
19:55 semiosis imo the most important thing that distinguishes gluster from nfs is ability to provide automatic replication & failover (HA) across servers
19:55 semiosis other people may have other priorities
19:55 semiosis there are many capabilities gluster has that nfs (on its own) does not
19:56 semiosis kkeithley1: checking in re: gsyncd path... i see your patch, and i think in order to review i will need to apply that to the 3.3.1 source tree, build my packages, and see if it works... does that sound about right?  any other input?
19:59 z00dax kkeithley1: thats ok, I just dont want to ship something that noone is testing
19:59 z00dax kkeithley1: there is quite a bit of a mindset difference between CentOS and Fedora
20:00 z00dax https://github.com/gluster/glusterfs/blob/master/glusterfs-hadoop/0.20.2/src/test/java/org/apache/hadoop/fs/glusterfs/AppTest.java
20:00 glusterbot <http://goo.gl/a32E2> (at github.com)
20:01 z00dax that too looks well tested
20:01 kkeithley1 semiosis: re: gsyncd patch, sounds right to me.
20:03 semiosis ship things no one is testing?  that's the debian way!
20:03 semiosis but srsly, i <3 debian
20:03 kkeithley1 z00dax: CentOS vs. Fedora isn't the point. My concern is what constitutes glusterfs and if you're shipping glusterfs are you shipping some of it or all of it.
20:04 semiosis but yeah with 30k packages some things dont get tested :/
20:04 elyograg i'd like debian to fix mrtg daemon mode.
20:05 kkeithley1 Nicolas_Leonidas: With NFS you can't aggregate multiple volumes into a seamless single namespace.
20:05 semiosis elyograg: contribute a patch! :)
20:05 elyograg semiosis: I don't know mrtg code at all.  I did file a bug - in march of last year.
20:05 glusterbot http://goo.gl/UUuCq
20:05 elyograg glusterbot: heh.
20:06 glusterbot elyograg: I do not know about 'heh.', but I do know about these similar topics: 'hack'
20:06 elyograg the problem started sep 2011.
20:10 kkeithley1 WRT to "tested", glusterfs is extensively tested before it's released. Packaging it in Fedora and EPEL comes after it's released, not before.
20:11 phox joined #gluster
20:11 pithagorians joined #gluster
20:11 phox hi.  trying to figure out what sort of, you know, access control gluster offers... looking at http://gluster.org/community/documentation/index.php/Gluster_3.2:_Manually_Mounting_Volumes and not seeing any mention in this or previous docs as to how to control who can access it o.O
20:12 glusterbot <http://goo.gl/qjbyR> (at gluster.org)
20:13 kkeithley1 NFS ACL will be in 3.4 IIRC.
20:13 kkeithley1 But that's not related to mounting volumes.
20:13 kkeithley1 Not sure which one you are asking about.
20:14 pithagorians hey all. today i encountered an issue. quite intensive operations on a glusterfs remote partition put it down, claiming http://pastebin.com/YzzwCizn
20:14 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
20:14 pithagorians when i try to remount it - nothing happens
20:14 pithagorians any clue?
20:15 pithagorians @pase
20:15 glusterbot pithagorians: I do not know about 'pase', but I do know about these similar topics: 'paste'
20:15 pithagorians @paste
20:15 glusterbot pithagorians: For RPM based distros you can yum install fpaste, for debian and ubuntu it's dpaste. Then you can easily pipe command output to [fd] paste and it'll give you an url.
20:15 phox kkeithley1: well, I presume this ends up having client-determined file permissions... so, what controls who can be a client there?
20:25 kkeithley1 phox: I confess I don't know all I should about it. I believe most of the auth/access control right now is for native, i.e. fuse, mounts. NFS w/ krb5 auth is targeted for 3.5 or later.
20:29 phox kkeithley1: yeah I'm just wondering what's in place to prevent some arbitrary other machine from going "hey, free files!"
20:31 kkeithley1 You can lock down who is allowed to mount, just like you can with /etc/exports for kernel-based nfs.
20:32 phox for both NFS and native gluster client?  didn't see it anywhere in docs
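The options usually pointed at for this are per-volume settings; the addresses below are placeholders, so check the docs for your version:

    # Restrict which client addresses may mount via the native fuse client
    gluster volume set myvolume auth.allow 10.0.0.*

    # Restrict Gluster's built-in NFS server in a similar way
    gluster volume set myvolume nfs.rpc-auth-allow 10.0.0.*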
20:35 jiffe98 any status on 3.4?  been a while since a release now
20:36 kkeithley1 phox: yes, the community docs leave a lot to be desired.
20:36 kkeithley1 We're working on solving that
20:37 phox heh
20:37 johnmark jiffe98: beta should be out in a couple of weeks
20:37 johnmark we hope :)
20:38 jiffe98 good deal
20:47 gbrand_ joined #gluster
20:51 sjoeboo joined #gluster
20:57 JoeJulian z00dax, kkeithley: imho as a CentOS and Fedora user, and as someone who has a fairly strong interest in open-source software as well as packaging: if the srpm builds a set of packages, include all those packages.
21:07 Hymie joined #gluster
21:07 hagarth joined #gluster
21:14 aliguori joined #gluster
21:16 aliguori_ joined #gluster
21:17 JoeJulian johnmark: ... sponsorship reminder... :)
21:17 andreask joined #gluster
21:22 Hymie joined #gluster
21:24 kkeithley1 JoeJulian: but what the srpm makes is a function of the .spec file used to create it. I don't know of any requirement for CentOS's glusterfs.spec or srpm to exactly match the one used for Fedora/EPEL. (Semiosis's ppa/.debs don't have ufo/swift in them either as far as that goes.)
21:26 jiffe98 anyone see anything glaring in here that would prevent glusterd from starting? http://nsab.us/public/gluster
21:27 jiffe98 this is a fresh install of gluster, I just did this on 4 boxes, 3 of them worked fine this one did not.  I completely wiped everything gluster related from the box, rebuilt and reinstalled and same thing
21:28 bugs_ jiffe98 - run a memtest on that box lately?
21:29 jiffe98 no but I can if this is probably a problem with memory
21:30 gauravp joined #gluster
21:32 gauravp Hi All, I'm rsync'ing a series of files ~600-800MB each from a gluster-fuse client to a replica 2 gluster volume and seeing ~30MB/s throughput. Is that par for the course?
21:35 jiffe98 must be python, I'm seeing problems elsewhere too
21:40 rwheeler joined #gluster
21:43 johnmark JoeJulian: thanks for the reminder. will take a day or two to get a response
21:44 chouchins well, another giant power failure before we can move to our new datacenter.  One of our gluster volumes by status says its not started but when you start it says its already started.  Any thoughts?
21:46 sashko joined #gluster
21:48 chouchins heh, joe julian helped me with a post to the glusterfs website a year ago.  gluster volume start <blah> force
21:49 chouchins so um belated thanks :)
22:14 gbrand_ joined #gluster
22:31 JoeJulian :)
22:32 JoeJulian gauravp: That question requires too many variables to answer accurately.
22:34 raven-np joined #gluster
22:37 greylurk joined #gluster
22:43 greylurk joined #gluster
23:14 hattenator joined #gluster
23:28 raven-np joined #gluster
23:45 JoeJulian jdarcy: What's an "active sink"?
23:48 a2 w.r.t self-heal?
23:49 JoeJulian yes.
23:49 JoeJulian I have a whole slew of "no active sinks" and a bunch of vm images that are not healed after 2 days.
23:49 a2 when self-heal kicks in, the set of copies are classified into two partitions - sources and sinks
23:49 JoeJulian Well... bunch = 4.
23:49 a2 sources are "good" copies, sinks are "bad" copies
23:50 a2 and those names indicate the direction of transfer of data for the healing operations
23:50 errstr joined #gluster
23:50 JoeJulian Ok, so now to try to figure out why it thinks that they're not "active"...
23:51 a2 does the message read "no active sinks for performing self-heal on file ..."
23:51 a2 ?
23:51 a2 what's the file/function/line?
23:53 a2 maybe the case that all your "sink" bricks are down and not connected to the client/self-heal daemon
23:53 JoeJulian [2013-01-17 15:45:37.211499] I [afr-self-heal-data.c:712:afr_sh_data_fix] 0-vmimages-replicate-0: no active sinks for performing self-heal on file <gfid:1279df76-28ab-4fec-ba55-bc9ef007725c>
23:53 JoeJulian Status shows them all up...
23:54 JoeJulian ps shows them up as well.
23:55 JoeJulian 3.3.1
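For reference, the self-heal backlog JoeJulian is digging into can be queried per volume on 3.3 like this (volume name taken from the log line above):

    # Entries the self-heal daemon still thinks need healing
    gluster volume heal vmimages info

    # Entries flagged as split-brain (conflicting copies with no clear source)
    gluster volume heal vmimages info split-brain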
