IRC log for #gluster, 2013-06-06

All times shown according to UTC.

Time Nick Message
00:00 duerF joined #gluster
00:10 c4 joined #gluster
00:11 xavih joined #gluster
00:12 dbruhn joined #gluster
00:13 rob__ joined #gluster
00:15 lkoranda joined #gluster
00:37 bala joined #gluster
00:38 rcoup tg2: my setup is 2x replica, so i do remove-brick nodea:/brick nodeb:/brick
00:38 rcoup but I'm not sure if remove-brick nodea:/brick1 nodeb:/brick1 nodea:/brick2 nodeb:/brick2 would work or not
00:38 rcoup to remove two replica pairs
00:39 tg2 its just distributed
00:39 tg2 2 nodes 2 bricks each
00:40 tg2 trying to move everything from node 2 to node 1, so I can power down node 2 and do some hardware changes.  I also need to know if, after committing, I can just power it off or if I have to modify the clients.
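A minimal sketch of the drain workflow being discussed, assuming a volume named "myvol" and node 2 exposing /brick1 and /brick2 (both names are placeholders); all bricks being removed, or the whole replica pair in rcoup's case, go into a single remove-brick command rather than separate ones:

    # start migrating data off node 2's bricks
    gluster volume remove-brick myvol node2:/brick1 node2:/brick2 start
    # watch progress and wait for "completed" on every node
    gluster volume remove-brick myvol node2:/brick1 node2:/brick2 status
    # detach the bricks once migration has finished
    gluster volume remove-brick myvol node2:/brick1 node2:/brick2 commit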
01:18 kevein joined #gluster
01:21 rcoup tg2: i'm just trying to replace a brick on two nodes with another one where the filesystem is setup better
01:38 majeff joined #gluster
01:58 joelwallis joined #gluster
02:01 Chr1z joined #gluster
02:02 Chr1z gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 <- would that mean server1 and server2 are replicas, and servers 3 and 4 are used for distribution?  Is that right?
02:05 hagarth joined #gluster
02:21 m0zes Chr1z: no, server1 and server2 would be copies of each other, server3 and server4 would be copies of each other, and it would distribute the files between those replica pairs.
02:21 _pol joined #gluster
02:23 Chr1z m0zes: ok… so if all 4 were 100gb -- 100gb total is still what I would have usable?
02:24 m0zes if each had 100GB of storage, in that configuration your volume size would be 200GB
02:26 Chr1z m0zes: Ok.. so each replicated set is added together… then load is just distributed between the sets ?  For a system to share apache web files (apache cluster behind a load balancer) is that the way you would go or is striped or something else best?
02:28 m0zes Chr1z: I would do it like that, but you should probably: A) test it and B) check out ,,(php)
02:28 glusterbot Chr1z: php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://goo.gl/uDFgg for details.
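As a rough sketch of the layout m0zes describes (hostnames and brick paths come from the question above), bricks pair up in the order they are listed, and usable capacity is the raw total divided by the replica count:

    # server1/server2 form one replica pair, server3/server4 the other;
    # files are then distributed across the two pairs
    gluster volume create test-volume replica 2 transport tcp \
        server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
    # 4 bricks x 100GB, replica 2  =>  roughly 200GB usable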
02:34 plarsen joined #gluster
02:35 hjmangalam1 joined #gluster
02:44 harish joined #gluster
02:47 mtanner__ joined #gluster
02:47 vshankar joined #gluster
02:51 majeff joined #gluster
03:19 mohankumar__ joined #gluster
03:32 hjmangalam1 joined #gluster
03:36 bala joined #gluster
03:40 majeff joined #gluster
03:54 brunoleon__ joined #gluster
03:59 saurabh joined #gluster
04:05 hjmangalam1 joined #gluster
04:06 vpshastry joined #gluster
04:08 sgowda joined #gluster
04:17 shylesh joined #gluster
04:25 bharata joined #gluster
04:40 badone_ joined #gluster
04:41 anands joined #gluster
05:01 badone_ joined #gluster
05:05 rotbeard joined #gluster
05:12 bulde joined #gluster
05:13 aravindavk joined #gluster
05:16 _pol joined #gluster
05:18 hjmangalam3 joined #gluster
05:26 hchiramm_ joined #gluster
05:27 hagarth joined #gluster
05:28 lalatenduM joined #gluster
05:29 arusso_znc joined #gluster
05:31 satheesh joined #gluster
05:32 satheesh1 joined #gluster
05:35 bala joined #gluster
05:56 psharma joined #gluster
05:57 ricky-ticky joined #gluster
05:58 raghu joined #gluster
06:00 cfeller joined #gluster
06:01 rastar joined #gluster
06:23 jtux joined #gluster
06:23 vimal joined #gluster
06:44 dobber_ joined #gluster
06:49 tg2 so yeah, confirmed that while doing a remove-brick start
06:49 tg2 the brick still accepts files from the clients
06:49 guigui3 joined #gluster
06:49 tg2 how the hell do you completely remove a brick lol
06:50 mooperd joined #gluster
06:51 vimal joined #gluster
06:55 tg2 http://pastie.org/pastes/8013523/text?key=p3mwirzlwteoidkpbnspw
06:55 glusterbot <http://goo.gl/0xNmG> (at pastie.org)
06:55 tg2 am I crazy, or is it filling up while it's doing a remove-brick
06:57 brunoleon__ joined #gluster
06:57 puebele1 joined #gluster
06:59 badone_ joined #gluster
07:12 saurabh joined #gluster
07:18 ctria joined #gluster
07:28 manik joined #gluster
07:30 saurabh joined #gluster
07:39 jiffe98 joined #gluster
07:47 GLHMarmot joined #gluster
08:01 ProT-0-TypE joined #gluster
08:04 kevein joined #gluster
08:09 Stephan_ joined #gluster
08:09 Stephan_ Good morning all :)
08:10 Stephan_ I need help with my gluster is anyone available?
08:11 Stephan_ left #gluster
08:13 Stephan_George_1 joined #gluster
08:13 Stephan_George_1 Good day all
08:14 Stephan_George_1 I need help with my gluster, anyone available?
08:16 mooperd joined #gluster
08:16 stephan_ joined #gluster
08:21 satheesh joined #gluster
08:22 Stephan_George_1 hello
08:22 glusterbot Stephan_George_1: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an
08:22 glusterbot answer.
08:23 Guest8062 joined #gluster
08:24 Stephan_George_1 When I run the command "gluster volume rebalance backup status" I get no response
08:24 StarBeast joined #gluster
08:29 tziOm joined #gluster
08:34 edoceo joined #gluster
08:36 andreask joined #gluster
08:38 Stephan_George_1 When I run the command "gluster volume rebalance backup status" I get no response. Can someone please suggest a solution?
08:38 tziOm Is it advised that each disk is a brick, or to do raid and brick?
08:39 Stephan_George_1 I raid each box and each box is a brick
08:39 Stephan_George_1 your choice
08:40 Stephan_George_1 then I raid 10 boxes 5 X 2 = 1 gluster
08:45 Stephan_George_1 man, why is no one chatting?
08:47 spider_fingers joined #gluster
08:49 vimal joined #gluster
08:51 Stephan_George_1 When I run the command "gluster volume rebalance backup status" I get no response.
08:52 Stephan_George_1 root@gluster1:~# gluster volume set backup diagnostics.brick-log-level debug Set volume unsuccessful
08:53 Oneiroi joined #gluster
08:55 lh joined #gluster
08:55 lh joined #gluster
08:55 shylesh Stephan_George_1: can you check whether glusterd is running on all the nodes
08:56 Stephan_George_1 Done and yes, all services are running
08:56 Stephan_George_1 root@gluster1:~# gluster peer status Number of Peers: 9  Hostname: gluster5.george.org.za Uuid: 7c671cd6-ef67-40dd-9bcc-9c8077e57926 State: Peer in Cluster (Connected)  Hostname: 5.5.5.2 Uuid: ee99eff0-4bb9-4c0d-9150-c7a21abb1809 State: Peer in Cluster (Connected)  Hostname: gluster9.george.org.za Uuid: 1851e71d-bff0-4b84-a98f-679b882f8179 State: Peer in Cluster (Connected)  Hostname: gluster4.george.org.za Uuid: 6679ee3b-3
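A few checks that are commonly useful when a rebalance status query returns nothing (the log file path is an assumption based on typical 3.3 layouts):

    # confirm glusterd is up on every peer and the peers are connected
    service glusterd status
    gluster peer status
    # re-run the query and look at the rebalance log for errors
    gluster volume rebalance backup status
    less /var/log/glusterfs/backup-rebalance.log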
09:32 ujjain joined #gluster
09:34 bharata joined #gluster
09:37 kke what happens if i add a brick that already has files on it?
09:38 satheesh1 joined #gluster
09:46 portante joined #gluster
09:54 Guest22396 joined #gluster
09:57 satheesh joined #gluster
10:00 duerF joined #gluster
10:15 eightyeight joined #gluster
10:16 rastar joined #gluster
10:18 ricky-ticky joined #gluster
10:21 hagarth joined #gluster
10:22 zz_stickyboy joined #gluster
10:23 hagarth joined #gluster
10:25 andreask joined #gluster
10:26 harish joined #gluster
10:27 mohankumar__ joined #gluster
10:42 glusterbot New news from newglusterbugs: [Bug 952029] Allow an auxiliary mount which lets users access files using only gfids <http://goo.gl/x5z1R>
10:46 rcoup joined #gluster
10:52 kkeithley1 joined #gluster
10:54 saurabh joined #gluster
11:01 hagarth joined #gluster
11:03 lpabon joined #gluster
11:16 nueces joined #gluster
11:24 edward1 joined #gluster
11:49 andreask joined #gluster
11:52 StarBeast joined #gluster
11:57 DEac- joined #gluster
11:57 DEac- servus
11:58 puebele3 joined #gluster
12:00 jclift joined #gluster
12:06 majeff joined #gluster
12:07 ngoswami joined #gluster
12:11 puebele1 joined #gluster
12:21 ricky-ticky joined #gluster
12:25 mohankumar__ joined #gluster
12:28 kke can i somehow mark bricks to be not written on anymore?
12:37 puebele joined #gluster
12:38 anti_lp joined #gluster
12:40 jbourke joined #gluster
12:40 lbalbalba joined #gluster
12:42 manik joined #gluster
12:54 bala joined #gluster
12:57 puebele joined #gluster
13:00 bennyturns joined #gluster
13:00 bennyturns couple mins late, leaving here in a few
13:01 rb2k joined #gluster
13:01 rb2k hey semiosis ! Did you try compiling the recent 3.3 release branch?
13:02 rb2k I imported the "debian" part of your scripts
13:02 rb2k and the current 3.3 release branch ends up with a
13:02 rb2k objcopy: 'debian/tmp/usr/lib/glusterfs/*/xlator/debug/trace.so': No such file
13:08 sjoeboo joined #gluster
13:08 lbalbalba hrm. running gcov/lcov on latest git gluster i get this: geninfo: ERROR: /usr/local/src/glusterfs/cli/src/cli-rpc-ops.gcno: reached unexpected end of file
13:08 lbalbalba huh ?
13:10 dbruhn joined #gluster
13:10 abyss^ joined #gluster
13:10 lh joined #gluster
13:11 ctria joined #gluster
13:11 rob__ joined #gluster
13:11 johnmorr joined #gluster
13:11 mjrosenb joined #gluster
13:12 majeff joined #gluster
13:12 ngoswami joined #gluster
13:12 portante joined #gluster
13:15 jiffe98 joined #gluster
13:15 ingard_ joined #gluster
13:15 GLHMarmo1 joined #gluster
13:16 ProT-0-TypE joined #gluster
13:18 ujjain2 joined #gluster
13:18 bfoster joined #gluster
13:19 yosafbridge joined #gluster
13:19 deepakcs joined #gluster
13:19 Cenbe joined #gluster
13:19 badone_ joined #gluster
13:19 mohankumar__ joined #gluster
13:20 the-me joined #gluster
13:20 vpshastry joined #gluster
13:26 andreask joined #gluster
13:26 saurabh joined #gluster
13:27 hagarth joined #gluster
13:39 rob__ joined #gluster
13:41 stigchristian joined #gluster
13:41 nueces joined #gluster
13:41 jtux joined #gluster
13:44 rwheeler joined #gluster
13:46 vpshastry joined #gluster
13:46 ultrabizweb joined #gluster
13:47 failshell joined #gluster
13:50 cfeller joined #gluster
13:52 hybrid512 joined #gluster
13:53 nueces joined #gluster
13:53 ultrabizweb joined #gluster
13:54 plarsen joined #gluster
13:58 codex joined #gluster
14:04 failshell Number of Bricks: 8 x 2 = 16
14:05 failshell that means i have 2 copies of each file right? not 8 copies?
14:08 lbalbalba failshell: replica 2 ?
14:11 failshell lbalbalba: yes
14:12 dbruhn failshell: You are correct 2 copies, not 8
14:12 failshell ok, had a moment of doubt
14:12 dbruhn We all have them sometimes
14:12 failshell we have this issue where some bricks fill up faster
14:12 failshell we now use --inplace to rsync data to it
14:12 failshell its kinda weird
14:14 dbruhn Joe Julian was talking about DHT one day and made the comment that if all of your files start with the same letter, or are very similar, the way DHT works means it will fill up certain bricks first.
14:14 dbruhn I don't remember the specifics of it all, but made sense when he was talking about it
14:15 failshell yeah i talked about it with him
14:16 failshell but its kinda freaky, because we get paged for some nodes
14:16 failshell while there's ~200GB left in the cluster
14:16 failshell my coworkers are like 'do something about it' :)
14:17 kkeithley_ short, similar names will usually hash to the same brick. If those files are large you stand a good chance of filling up those bricks faster.
14:18 failshell is there a way to tell gluster, don't fill the brick past N% ?
14:19 kkeithley_ there is, don't remember the volume option off the top of my head. hang on
14:20 kkeithley_ cluster min-free-disk
14:21 kkeithley_ gluster volume set $volname cluster.min-free-disk 30     should leave 30% free, i.e. 70% full max
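A minimal sketch of the option kkeithley_ describes (the volume name "myvol" is a placeholder; its exact behaviour is debated just below):

    # try to keep at least 30% free on each brick, i.e. roughly 70% full at most
    gluster volume set myvol cluster.min-free-disk 30
    # the current value shows up under "Options Reconfigured"
    gluster volume info myvol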
14:21 failshell is that going to trigger a rebalance?
14:22 dbruhn kkeithley, is that a per brick setting or a global setting?
14:22 aliguori joined #gluster
14:22 vpshastry1 joined #gluster
14:23 dbruhn The documentation is a bit ambiguous, I always thought it was per brick, but didn't want to assume.
14:23 kkeithley_ it's listed as  a volume option...
14:24 Norky joined #gluster
14:25 joelwallis joined #gluster
14:26 karoshi joined #gluster
14:26 karoshi when mounting a replicated volume by nfs, who does the replication?
14:27 dbruhn Yeah, that's why I think it's ambiguous. Is that saying do not let the entire volume fill over the mark, or is it saying do not let each brick fill over the mark.
14:27 kkeithley_ oh, sorry, I didn't get that. I agree, it's ambiguous
14:28 failshell dbruhn: even if you set that, once all bricks reach the threshold, its still going to allow writes
14:28 failshell https://bugzilla.redhat.com/show_bug.cgi?id=889334
14:28 kkeithley_ karoshi: the gluster nfs server, i.e. the glusterfs process, does the replication.
14:28 glusterbot <http://goo.gl/eOt3c> (at bugzilla.redhat.com)
14:28 glusterbot Bug 889334: high, medium, ---, asengupt, ON_QA , cluster.min-free-disk does not appear to work as advertised
14:28 Norky kkeithley, that problem I was talking about yesterday (RHS, memory leak in the NFS function), known problem it seems and I was being stupid (was sure I wasn't using NFS... when I was)
14:28 Norky all fixed (I hope)
14:29 kkeithley_ Norky: good to know
14:29 Norky I'll give it a day to see that it behaves then close my RHN case with "sorry, known bug/pebcak"
14:35 bennyturns joined #gluster
14:40 failshell im facing a dilemma in terms of scaling. right now, i have 16 bricks of 128GB. so that's 16 VMs. should i add more bricks per VM? Or just keep adding VMs?
14:42 StarBeast joined #gluster
14:45 dbruhn failshell: what does your memory and cpu usage look like?
14:46 failshell RAM is full at 2GB, CPU is fine, roughly 10%
14:47 manik joined #gluster
14:47 dbruhn I would tweak your ram and see how much each server needs before making a decision either way
14:48 failshell yeah going to bump up ram and add 2 nodes for now
14:49 failshell wish i could build a proper gluster cluster
14:49 dbruhn Are all of your servers running in vm's?
14:49 failshell with dedicated hardware, SSDs, 10Gbps network
14:49 failshell yes
14:49 dbruhn may I ask the purpose?
14:50 failshell centralized storage for various stuff like backups, local RPMs repos, sharing hundreds of gigs of images for various websites
14:50 failshell i also use it a another facility to share data between frontend servers for a website
14:51 dbruhn Ahh cool, the vm thing I am assuming is just because of the resources you had on hand?
14:51 failshell yeah we run everything in vmware
14:51 failshell was running a Gitlab proof of concept on it too
14:51 failshell but it turned out gitlab is a POS
14:52 dbruhn I wish I had the luxury of being able to run my stuff in vm's, lol.
14:52 dbruhn Less stress
14:53 failshell i cant recall the last time i interacted with a physical machine
14:53 failshell except at home that is
14:57 dbruhn I am running three gluster systems. I am in a storage read intensive environment so I am squeezing every little bit I can out of these systems: QDR InfiniBand, a couple hundred 15K SAS drives, RDMA, etc.
14:58 bugs_ joined #gluster
14:59 failshell maybe we'll build a proper cluster, but its unlikely
15:00 failshell but writes are so damn slow
15:00 puebele joined #gluster
15:00 failshell so much contention caused by all the other VMs
15:00 dbruhn I am lucky enough that I am 1% write and 99% read in my environment. I am assuming you are using replication? it slows the writes down even more
15:01 failshell yes distributed replicated
15:01 failshell maybe that's overkill and replicated only would be enough
15:02 hjmangalam1 joined #gluster
15:04 dbruhn What kind of storage are you using under your VM system?
15:06 kaptk2 joined #gluster
15:06 failshell IBM XiV SAN + some Datacore caching fronting it
15:07 failshell for large files, its fine, but my colleague is trying to sync millions of tiny files to our cluster
15:07 failshell its been days and its not complete
15:07 failshell we experiment with it
15:08 jclift That's pretty much gluster's nightmare scenario too
15:08 failshell i know
15:09 dbruhn That's every storage systems worst nightmare
15:09 JoeJulian If you split it among thousands of clients it wouldn't be so bad.
15:10 failshell JoeJulian: what do you mean? split the rsync jobs?
15:10 JoeJulian Sure. If you had thousands of clients writing thousands of tiny files, it should be more efficient than one doing millions.
15:11 JoeJulian especially the further you distribute
15:11 failshell we'll take that into account
15:12 daMaestro joined #gluster
15:13 dbruhn I have done testing with rsync on a lot of things, if you can break out the jobs into multiple jobs even from one client the performance increases
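A hedged sketch of that split-the-job idea (source and destination paths are made up): one rsync per top-level subdirectory keeps several transfers in flight instead of one long serial walk, and --inplace avoids the create-then-rename pattern mentioned later in the log:

    # run up to 4 rsyncs in parallel, one per top-level directory
    cd /data/images
    ls -d */ | xargs -P 4 -I{} rsync -a --inplace {} /mnt/gluster/images/{}
    # note: a real script should handle directory names containing spaces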
15:14 failshell will pass that info to my team :)
15:14 failshell what i copy is large backup files, so i see a 30% decrease maybe compared to regular local disks
15:15 dbruhn What are you backing up? The vm's themselves? or the files in the vm's?
15:15 failshell compressed mysql and mongodb backups
15:15 failshell in my case
15:16 failshell my coworker is syncing images from one of websites
15:16 failshell so that instead of having several copies for our devs, we have one
15:17 failshell 250GB of images, so that's a lot
15:17 dbruhn are they resized or anything?/
15:18 failshell during the copy?
15:18 dbruhn on storage
15:18 failshell its whatever we serve on the site
15:18 failshell to be honest, never really looked
15:18 dbruhn I have a customer I do storage work for and am running about 500GB of images with a maximum constraint of 800x800
15:19 dbruhn it's a bear to work with
15:19 failshell on gluster?
15:20 dbruhn No, I am implementing gluster before too long, but we have developed a tiered system for toarge where only warm data not hot data will be stored in the gluster system
15:20 dbruhn sp/toarge/storage
15:20 failshell images are no biggie, we mostly serve them with a CDN anyway
15:24 bulde joined #gluster
15:24 rb2k Has anybody in here managed to package deb files from the current release-3.3 branch? (e.g. semiosis ;) )
15:24 rb2k I'm a complete noob when it comes to deb packaging, so I just grabbed the control files from his launchpad repo
15:24 JoeJulian Isn't that already in the ,,(ppa) ?
15:24 glusterbot The official glusterfs 3.3 packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.3 QA: http://goo.gl/5fnXN -- and 3.4 QA: http://goo.gl/u33hy
15:24 rb2k Yeah, but I need more up to date versions
15:25 rb2k Been talking with redhat and they want me to test a patch they committed yesterday
15:25 rb2k For a problem that we see sometimes
15:27 jthorne joined #gluster
15:27 social__ does anyone deploy gluster via puppet? do you have some dynamic node adding?
15:28 hlieberman joined #gluster
15:28 jbourke See rb2k for that question social.  :)
15:28 rb2k Ha
15:29 rb2k We deploy a ruby script via puppet
15:29 rb2k that takes care of everything else
15:29 rb2k Not the "cleanest" way to do it, but it works (tm)
15:29 portante joined #gluster
15:30 social__ rb2k: do you do some dynamic brick adding?
15:30 rb2k yes
15:30 social__ rb2k: I'm considering whether to use static definition of bricks, peers etc
15:30 rb2k we do manage that stuff in a central database though
15:31 rb2k and basically just do stuff like comparing that DB with "gluster peer status"
15:31 rb2k if it's missing --> add it
15:31 rb2k if it's there but not in the DB --> remove it
15:31 rb2k same for bricks and volumes
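A rough shell rendering of that reconcile loop (peers.txt and the hostnames in it are placeholders; the real version lives in a ruby script driven by a central database):

    # probe any desired peer that is not already in the pool
    while read peer; do
        gluster peer status | grep -q "Hostname: $peer" || gluster peer probe "$peer"
    done < peers.txt
    # the opposite direction would use: gluster peer detach <hostname>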
15:33 social__ rb2k: when removing bricks you migrate data off and rebalance? do you rebalance on brick add?
15:33 rb2k yes
15:33 rb2k although we run fully replicated
15:33 Norky who here is successfully using RDMA transport for glusterfs?
15:33 rb2k so there is no real rebalancing
15:34 Norky I've tried and failed
15:34 hlieberman Norky, We are.
15:34 Norky gluster 3.3?
15:34 social__ hmm I'd like to have replica 2 and rebalance but I'm wondering how expensive it is if I have for example 2TB there
15:34 arusso joined #gluster
15:35 hlieberman We're using RDMA, but the transport type is tcp, rdma, and we force it at mount time.
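A hedged example of forcing the transport at mount time on a tcp,rdma volume (server, brick, and volume names are placeholders; the ".rdma" volume-name suffix is the convention the docs describe and may vary by version):

    # create the volume with both transports (done once, on a server)
    gluster volume create rdmavol transport tcp,rdma server1:/brick server2:/brick
    gluster volume start rdmavol
    # on the client, the .rdma suffix asks the native client to use RDMA
    mount -t glusterfs server1:/rdmavol.rdma /mnt/rdmavol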
15:35 failshell hmm i need to look into RDMA. looks like it could improve my writes
15:35 rb2k social__: good question. Since the data is always replicated for me, I don't know
15:35 Norky hlieberman, over InfiniBand, or some 10GbE that supports RDMA?
15:35 hlieberman Over QDR.
15:37 lh joined #gluster
15:37 lh joined #gluster
15:37 Norky pretty much what I tried  (with RHS = glusterfs 3.3), mixed transport volumes over QDR IB (though some older DDR machines too), though even forcing it, it always seemed to fall back to using tcp
15:39 hlieberman We're using 3.4-beta1.
15:39 social__ last thing, I'm debugging https://bugzilla.redhat.com/show_bug.cgi?id=844584 any ideas where to look?
15:39 Norky this setup is fairly conservative, hence the $upported RHS, beta is not an option :)
15:39 glusterbot <http://goo.gl/z72b6> (at bugzilla.redhat.com)
15:39 glusterbot Bug 844584: medium, medium, ---, spalai, NEW , logging: Stale NFS messages
15:40 Norky hlieberman, did you try earlier versions?
15:41 hlieberman 3.3 has a hardcoded port that is incorrect.
15:42 hlieberman RDMA will not work on 3.3, from what my engineers said.
15:42 Norky yeah, that's the conclusion I arrived at
15:42 JoeJulian Well for official non-beta not-patched RHS, it's broken and won't work. Once 3.3.2 is released and RHS pulls from upstream you'll be good.
15:42 hlieberman Yeah.
15:42 Norky JoeJulian, thank you, that kinda confirms what I was hoping
15:42 JoeJulian Though you could probably hack it using filter.
15:45 Norky "filter"? Net filter to redirect the wrong port ti the right one?
15:45 * Norky is uncomprehending
15:47 kkeithley_ FWIW, that RDMA bug is fixed in my 3.3.1 rpms!
15:47 kkeithley_ @yum repos
15:47 glusterbot kkeithley_: I do not know about 'yum repos', but I do know about these similar topics: 'yum repository'
15:47 kkeithley_ @yum repository
15:47 glusterbot kkeithley_: Joe Julian's repo with compiler optimizations (-O2) enabled having 3.1 and 3.2 versions for both x86_64 and i686 are at http://joejulian.name/yumrepo/
15:47 kkeithley_ grr
15:48 Norky hehe
15:48 kkeithley_ @yum
15:48 glusterbot kkeithley_: The official glusterfs packages for RHEL/CentOS/SL are available here: http://goo.gl/s077x
15:48 hlieberman kkeithley_, If only you were nearby, I would pat you on the head.
15:49 hlieberman Does anyone here use the Hadoop plugin for glusterfs?
15:49 JoeJulian @forget yum repository
15:49 glusterbot JoeJulian: The operation succeeded.
15:52 JoeJulian Norky: http://www.gluster.org/community/documentation/index.php/Glusterfs-filter
15:52 glusterbot <http://goo.gl/dMhlL> (at www.gluster.org)
15:53 * JoeJulian is amazed, sometimes, at the hacks people will implement to stay with "supported" non-working stuff. ;)
15:53 JoeJulian bbl
15:53 Norky yeah, that seems as bad to me as replacing the package with newer versions, JoeJulian :)
15:54 Norky for this particular installation, I am leaving the 4 servers running official RHS packages, with no unsupported changes
15:54 Norky just means I have to wait for the next update
15:56 kkeithley_ @forget yum
15:56 glusterbot kkeithley_: The operation succeeded.
15:56 rob__ joined #gluster
15:57 kkeithley_ @learn yum The official community glusterfs packages for RHEL/CentOS/SL (and Fedora 17 and earlier) are available here http://goo.gl/s077x
15:57 glusterbot kkeithley_: (learn [<channel>] <key> as <value>) -- Associates <key> with <value>. <channel> is only necessary if the message isn't sent on the channel itself. The word 'as' is necessary to separate the key from the value. It can be changed to another word via the learnSeparator registry value.
15:57 kkeithley_ @learn yum as The official community glusterfs packages for RHEL/CentOS/SL (and Fedora 17 and earlier) are available here http://goo.gl/s077x
15:57 glusterbot kkeithley_: The operation succeeded.
15:58 kkeithley_ @yum
15:58 glusterbot kkeithley_: The official community glusterfs packages for RHEL/CentOS/SL (and Fedora 17 and earlier) are available here http://goo.gl/s077x
15:59 kkeithley_ @learn rpm as The official community glusterfs packages for RHEL/CentOS/SL (and Fedora 17 and earlier) are available here http://goo.gl/s077x
15:59 glusterbot kkeithley_: The operation succeeded.
16:00 sgowda joined #gluster
16:00 m0zes joined #gluster
16:11 devoid joined #gluster
16:21 pkoro joined #gluster
16:22 semiosis social__: ,,(puppet)
16:22 glusterbot social__: (#1) https://github.com/semiosis/puppet-gluster, or (#2) https://github.com/purpleidea/puppet-gluster
16:24 nueces joined #gluster
16:24 semiosis rb2k: i only package glusterfs releases (and QA releases)
16:24 rb2k yeah, those work fine for me :)
16:24 semiosis rb2k: describe the problem you're having and i'll try to help you make your own packages
16:24 rb2k I already package the qa branches fine
16:24 semiosis but i'm going to be busy today so i hope you have patience :)
16:24 rb2k hm shoot, I have to run anyway
16:24 rb2k (GMT+1)
16:24 semiosis oh ok
16:25 rb2k But I'll be sure to hunt you down and nag you about the release-3.3 branch
16:25 rb2k which is usually what the qa releases should be based on
16:25 semiosis leave me a message in here when you have time describing your problem
16:25 rb2k but something seems different
16:25 rb2k ok, cool
16:25 rb2k thanks
16:25 rb2k bye
16:25 semiosis yw
16:26 daMaestro joined #gluster
16:27 failshell how frequently does georepl sync files?
16:28 vimal joined #gluster
16:29 Mo_ joined #gluster
16:32 hjmangalam joined #gluster
16:39 bulde joined #gluster
16:44 daMaestro joined #gluster
16:46 bstromski joined #gluster
16:53 rwheeler joined #gluster
16:53 aravindavk joined #gluster
17:01 jiqiren joined #gluster
17:01 failshell in my newly created georepl setup, the master is reporting a split brain, and yet on the slave cluster, i see no messages mentioning split brains
17:01 failshell im a bit confused
17:01 the-me joined #gluster
17:17 Technicool joined #gluster
17:18 sjoeboo failshell, is the "master" a replicated volume itself ?
17:19 failshell yup
17:19 sjoeboo right, so the split brain would be within that volume, not across the geo-rep
17:21 failshell but the error is mentioning the volume name on the slave
17:21 failshell not the one on the master
17:24 failshell https://gist.github.com/failshell/54038042c57d662f47d1
17:24 glusterbot <http://goo.gl/CvuCG> (at gist.github.com)
17:24 failshell i get that non-stop
17:29 sjoeboo hm, is that from the "client" log for the geo-rep? just an idea, i haven't done geo-rep really, but i know the master acts as a client of the slave volume, and i know in split brain situations, clients will complain..?
17:31 tg2 I didn't see anything in the documentation about removing multiple bricks at once?
17:31 tg2 if I do two separate remove-brick commands
17:31 tg2 the second one says one is already in progress
17:32 failshell sjoeboo: that's from the slave logs
17:32 n8d2 joined #gluster
17:32 failshell im gonna let it finish its initial sync and see if that error goes away
17:35 rb2k joined #gluster
17:39 sonne joined #gluster
17:43 mooperd joined #gluster
17:49 hagarth joined #gluster
17:56 eightyeight joined #gluster
18:13 glusterbot New news from newglusterbugs: [Bug 971528] Gluster fuse mount corrupted <http://goo.gl/504II>
18:16 devoid joined #gluster
18:42 kke when i create a file, is the idea that the same file (but with size 0) is found on every brick?
18:47 kke or what are these 0b files with perm 1000 all over the place
18:48 kke so if i got 10 bricks, i kind of need 10 times the inodes i would "normally"
18:49 kkeithley_ kke: that's not normal
18:53 kke i wonder what the 0b files are then
18:54 kke "link files" i think, whatever that means
18:54 kkeithley_ what version of gluster, what $linuxdist, what $fstype for the bricks?
18:54 kke http://www.gluster.org/pipermail/gluster-users/2011-January/029366.html
18:54 glusterbot <http://goo.gl/NQ8RU> (at www.gluster.org)
18:54 kkeithley_ It should only create link files when the fs is full
19:11 lbalbalba joined #gluster
19:15 StarBeast joined #gluster
19:19 portante joined #gluster
19:21 flakrat joined #gluster
19:21 flakrat joined #gluster
19:25 thomaslee joined #gluster
19:27 lpabon joined #gluster
19:39 MrNaviPacho joined #gluster
19:43 lhawthor_ joined #gluster
19:46 tstclair joined #gluster
19:52 vpshastry2 joined #gluster
19:56 rwheeler joined #gluster
20:16 majeff1 joined #gluster
20:21 y4m4 joined #gluster
20:24 portante joined #gluster
20:27 n8d2 left #gluster
20:34 JoeJulian kke, kkeithley_: or if you create and rename a file, like if you use rsync without --inplace.
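A hedged way to spot those DHT link files on a brick (paths are placeholders): they are 0-byte files with only the sticky bit set (mode 1000) and carry a linkto xattr naming the subvolume that really holds the data:

    # candidate link files: zero bytes, permission bits exactly 1000
    find /export/brick1 -type f -perm 1000 -size 0
    # confirm by reading the DHT linkto attribute on one of them
    getfattr -n trusted.glusterfs.dht.linkto -e text /export/brick1/path/to/file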
20:42 andreask joined #gluster
20:44 duffrecords joined #gluster
20:44 JoeJulian failshell: left a comment on your gist.
20:45 duffrecords some network hiccup just occurred and one of my gluster boxes is reporting "0-guests-replicate-1: failing writev due to lack of quorum"  what does that mean?
20:45 failshell JoeJulian: thanks. thing is, im a bit confused to get that in the slave logs on the master. not sure where to run that
20:46 JoeJulian duffrecords: That means that quorum is enabled and the client cannot write because it is not connected to a minimum number of replicas. If you didn't specify that number, it's $replicas/2+1
20:47 JoeJulian failshell: I'd check all the brick roots for that volume.
20:47 failshell JoeJulian: ah right, was assuming volume
20:48 failshell i can do that while the volume is started?
20:48 JoeJulian yes
20:48 duffrecords I turned on quorum a few months back because of a data corruption issue.  looks like it has some side effects
20:48 failshell thanks. will do that tomorrow
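A hedged sketch of the brick-root check JoeJulian suggests (brick path is a placeholder): dump the extended attributes of the slave volume's brick roots on each server and compare the trusted.afr changelog entries; when both replicas hold non-zero counters blaming each other, that directory is the split-brain:

    # run on every server that hosts a brick of the slave volume
    getfattr -d -m . -e hex /export/brick1
    # inspect the trusted.afr.<volume>-client-* values in the output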
20:48 JoeJulian duffrecords: Sounds like it's working as intended. :D
20:51 duffrecords ok, well that's good news I guess
20:52 duffrecords this is a replicated distributed setup with 4 nodes.  apparently write operations are on hold until quorum is reached.  I'm thinking I should shut down all the VMs that are hosted on this volume
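For reference, a hedged sketch of the client-quorum knobs involved, using the volume name "guests" taken from the log message above:

    # with auto, writes need more than half of each replica set to be up
    # (ties go to the set's first brick)
    gluster volume set guests cluster.quorum-type auto
    # or require an explicit number of live replicas per set
    gluster volume set guests cluster.quorum-type fixed
    gluster volume set guests cluster.quorum-count 2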
