IRC log for #gluster, 2013-03-21

All times shown according to UTC.

Time Nick Message
00:01 robo joined #gluster
00:12 y4m4 joined #gluster
00:18 jskinne__ joined #gluster
00:26 jskinner joined #gluster
00:43 \_pol joined #gluster
00:50 yinyin joined #gluster
00:58 jdarcy joined #gluster
01:03 lkthomas joined #gluster
01:10 jules_ joined #gluster
01:23 yinyin joined #gluster
01:39 mooperd joined #gluster
01:42 sahina joined #gluster
01:42 jdarcy joined #gluster
01:53 isomorphic joined #gluster
01:55 jdarcy joined #gluster
02:03 isomorphic joined #gluster
02:41 mohankumar joined #gluster
02:46 shylesh joined #gluster
02:52 bala1 joined #gluster
02:55 bharata joined #gluster
03:15 edong23 joined #gluster
03:19 glusterbot New news from newglusterbugs: [Bug 916372] NFS3 stable writes are very slow <http://goo.gl/Z0gaJ>
03:28 madphoenix joined #gluster
03:49 anmol joined #gluster
04:15 vpshastry joined #gluster
04:17 harshpb joined #gluster
04:19 sgowda joined #gluster
04:20 yinyin joined #gluster
04:27 sjoeboo_ joined #gluster
04:36 harshpb joined #gluster
04:42 pai joined #gluster
04:54 harshpb joined #gluster
04:55 yinyin joined #gluster
05:01 lalatenduM joined #gluster
05:02 saurabh joined #gluster
05:04 bulde joined #gluster
05:04 sahina joined #gluster
05:08 aravindavk joined #gluster
05:08 yinyin joined #gluster
05:09 sripathi joined #gluster
05:10 rastar joined #gluster
05:19 hagarth joined #gluster
05:24 nhm_ joined #gluster
05:26 helloadam joined #gluster
05:26 bala1 joined #gluster
05:41 raghu joined #gluster
05:44 harshpb joined #gluster
05:48 bala1 joined #gluster
05:50 aravindavk joined #gluster
05:50 mohankumar joined #gluster
05:51 sripathi joined #gluster
05:52 sjoeboo_ joined #gluster
05:54 y4m4 joined #gluster
06:18 vshankar joined #gluster
06:25 yinyin joined #gluster
06:26 bala1 joined #gluster
06:34 vimal joined #gluster
06:35 sripathi joined #gluster
06:38 vpshastry joined #gluster
06:38 sripathi1 joined #gluster
06:40 guigui3 joined #gluster
06:41 ngoswami joined #gluster
06:43 bulde joined #gluster
06:45 kevein joined #gluster
06:45 kevein_ joined #gluster
06:46 satheesh joined #gluster
06:47 ramkrsna joined #gluster
06:47 ramkrsna joined #gluster
06:47 harshpb joined #gluster
06:53 ngoswami joined #gluster
06:53 sahina joined #gluster
07:02 timothy_ joined #gluster
07:03 timothy_ joined #gluster
07:09 bala1 joined #gluster
07:20 jtux joined #gluster
07:26 yinyin joined #gluster
07:31 ekuric joined #gluster
07:32 isomorphic joined #gluster
07:32 sahina joined #gluster
07:43 harshpb joined #gluster
07:54 andreask joined #gluster
07:59 tjikkun_work joined #gluster
08:13 mooperd joined #gluster
08:14 timothy_ joined #gluster
08:19 rotbeard joined #gluster
08:19 joeto joined #gluster
08:22 ctria joined #gluster
08:27 yinyin joined #gluster
08:30 vpshastry joined #gluster
08:31 Norky joined #gluster
08:34 ProT-0-TypE joined #gluster
08:36 sgowda joined #gluster
08:47 tryggvil__ joined #gluster
08:49 vpshastry joined #gluster
09:01 ujjain joined #gluster
09:05 yinyin joined #gluster
09:14 timothy_ joined #gluster
09:28 glusterbot New news from resolvedglusterbugs: [Bug 923580] ufo: `swift-init all start` fails <http://goo.gl/F73bO>
09:29 deepakcs joined #gluster
09:31 bulde joined #gluster
09:33 hagarth @channelstats
09:33 mooperd joined #gluster
09:33 glusterbot hagarth: On #gluster there have been 102840 messages, containing 4473008 characters, 751514 words, 3068 smileys, and 387 frowns; 692 of those messages were ACTIONs. There have been 36803 joins, 1181 parts, 35636 quits, 14 kicks, 110 mode changes, and 5 topic changes. There are currently 213 users and the channel has peaked at 215 users.
09:34 samppah :O
09:38 harshpb joined #gluster
09:40 kanagaraj joined #gluster
09:40 deepakcs kanagaraj, for ovirt UI issue, where do i open a bug report ?
09:41 deepakcs kanagaraj, i want to open one for the 'custom property' issue when i create a new VM.
09:41 deepakcs kanagaraj, if u remember, i had shown this problem during the gluster summit demo
09:41 kanagaraj deepakcs, i remember the issue you had
09:41 deepakcs kanagaraj, i still see the issue even today
09:42 deepakcs using latest engine
09:42 deepakcs kanagaraj, itamar had asked me to open a bug report for same.. whats the link for the bug report ?
09:42 kanagaraj deepakcs, ok, you can file it here https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt.
09:42 glusterbot <http://goo.gl/ELzFc> (at bugzilla.redhat.com)
09:42 deepakcs kanagaraj, ok, thanks
09:43 kanagaraj deepakcs, select the webadmin component, and attach the screenshot also if possible.
09:51 Nevan joined #gluster
09:52 ekuric joined #gluster
09:52 glusterbot New news from newglusterbugs: [Bug 924132] reports a 503 error when download a container <http://goo.gl/pDZ8M>
09:54 hagarth samppah: saw your email on gluster-users, do you plan to geo-replicate live vm image files?
09:56 samppah hagarth: yeah, i was hoping to use that for disaster recovery.. is that a bad idea?
09:58 dobber_ joined #gluster
09:58 glusterbot New news from resolvedglusterbugs: [Bug 918427] Disable data-self-heal on mount points for virt sample file <http://goo.gl/nmT0I>
09:59 shylesh joined #gluster
09:59 HaraldJensas joined #gluster
10:00 hagarth samppah: since geo-replication is eventually consistent, the geo-replicated image file might not be totally consistent
10:00 hagarth samppah: taking a snapshot and geo-replicating that might be a better model
10:03 samppah hagarth: you are correct.. is there a way to use geo replication only on snapshots?
10:03 sripathi joined #gluster
10:04 isomorphic comparing geo-replication vs ordinary replicated volumes - the user guide seems to suggest the only difference is that it is asynchronous.  Is that correct?
10:04 unlocksmith joined #gluster
10:04 hagarth samppah: there's work in progress to geo-rep selective directories/files. Till then, you can probably use an alternate volume (other than image store) for geo-replicating. You'll have to script/manually copy snapshots on to that volume.
10:05 samppah hagarth: thanks, i'll see if that's possible
10:06 hagarth isomorphic: yes, that is the primary difference.
10:06 isomorphic hagarth:  I'm currently using gluster for home directories between servers on a fast connection
10:07 isomorphic but the delay is very significant for jobs on many small files - eg: ld takes significantly longer.    Is switching to geo replication likely to improve this due to the weaker eventual consistency guarantee?
10:07 samppah we have separate backup program that can take backup of whole vm but it has to be installed in the vm and it's not installed to all vms
10:08 samppah hagarth: btw do you know if there are any known issues in geo replication in 3.4? :)
10:09 hagarth isomorphic: geo-replication is primarily designed for disaster recovery not for high availability. synchronous replication provides high availability.
10:09 hagarth samppah: i am not aware of any; one of the geo-rep developers mentioned that he's looking into the issue reported in your email.
10:10 hagarth @channelstats
10:10 glusterbot hagarth: On #gluster there have been 102881 messages, containing 4476000 characters, 751987 words, 3069 smileys, and 387 frowns; 692 of those messages were ACTIONs. There have been 36813 joins, 1181 parts, 35642 quits, 14 kicks, 110 mode changes, and 5 topic changes. There are currently 217 users and the channel has peaked at 217 users.
10:10 hagarth yay! we have peak usage atm :O
10:11 samppah \o/
10:11 isomorphic hagarth:  Understood - I guess what I'm trying to get a handle on is whether things like linking would improve speedwise.  The reason for replication in this system is more DR than HA, but the tradeoff really depends on whether small file performance improves
10:12 harshpb joined #gluster
10:12 hagarth isomorphic: geo-replication is unidirectional right now - you cannot perform any updates on the slave site.
10:13 isomorphic ah - that answers my question, thanks :)
10:13 unlocksmith1 joined #gluster
10:14 unlocksmith1 left #gluster
10:14 isomorphic i assume there's no good way to enable asynchronous updates on replicated volumes
10:15 hagarth isomorphic: not as of now. there are some thoughts in that direction.
10:19 isomorphic oh - incidentally - the NFS server built into glusterfs doesn't seem to support things like krb5.   Is the intention there to expand it to be a more full client, or is the model you're aiming for different to that type of use case?
10:21 hagarth isomorphic: the nfs stack in gluster will also mature to provide more capabilities.
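For readers wanting to try the disaster-recovery setup hagarth describes above, a minimal sketch of driving geo-replication from the CLI follows; the master volume name vms and the slave backuphost::vms-dr are hypothetical, and the exact slave URL format depends on the release in use:
    # on a server in the master cluster: start replicating volume vms to the slave
    gluster volume geo-replication vms backuphost::vms-dr start
    # check session health and progress
    gluster volume geo-replication vms backuphost::vms-dr status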
10:28 glusterbot New news from resolvedglusterbugs: [Bug 765442] duplicate file names on replicated volume <http://goo.gl/jNzqh>
10:30 rastar1 joined #gluster
10:31 lalatenduM joined #gluster
10:32 sripathi1 joined #gluster
10:34 stickyboy joined #gluster
10:35 stickyboy Anyone used the WD Red drives?  They're much cheaper than what we are currently using (Seagate), and appear to be for "RAID".
10:36 stickyboy I know Hitachi's recent drives are also highly recommended, just a bit pricy.
10:36 \_pol joined #gluster
10:37 \_pol joined #gluster
10:41 camel1cz joined #gluster
10:48 kshlm joined #gluster
10:48 kshlm joined #gluster
10:49 HaraldJensas joined #gluster
10:49 vpshastry1 joined #gluster
10:56 jdarcy joined #gluster
10:59 camel1cz left #gluster
11:01 mooperd joined #gluster
11:05 \_pol joined #gluster
11:06 \_pol joined #gluster
11:07 edong23 joined #gluster
11:07 camel1cz joined #gluster
11:15 fleducquede joined #gluster
11:20 kkeithley joined #gluster
11:21 mooperd joined #gluster
11:28 harshpb joined #gluster
11:32 harshpb joined #gluster
11:35 camel1cz joined #gluster
11:38 saurabh joined #gluster
11:40 lpabon joined #gluster
11:44 manik joined #gluster
11:46 sripathi joined #gluster
11:46 sgowda joined #gluster
11:49 aravindavk joined #gluster
11:50 harshpb joined #gluster
11:51 akshatcy joined #gluster
11:51 akshatcy Hi. I needed help with an error.
11:51 akshatcy When I run "gluster peer status"
11:51 akshatcy I get the following error "gluster: symbol lookup error: gluster: undefined symbol: gf_xdr_from_cli_defrag_vol_req"
11:52 akshatcy This used to work earlier. I had updated my ubuntu server with security updates and this started coming up after that
11:52 akshatcy Can some one please help me. Thanks
11:53 mooperd joined #gluster
11:59 camel1cz joined #gluster
12:02 harshpb joined #gluster
12:04 akshatcy can any one help please?
12:05 harshpb joined #gluster
12:05 sripathi joined #gluster
12:06 rwheeler joined #gluster
12:08 rastar joined #gluster
12:09 hybrid512 joined #gluster
12:22 shireesh joined #gluster
12:23 sahina joined #gluster
12:31 dblack joined #gluster
12:33 edward1 joined #gluster
12:36 HaraldJensas joined #gluster
12:37 bennyturns joined #gluster
12:48 awheeler I had this same issue, it seems to be an x64 problem.  Not sure why it's happening, but I put /usr/lib64 in a file in /etc/ld.so.conf.d/ and that fixed it.
12:49 yinyin joined #gluster
12:49 awheeler This was on a centos box, but my x64 ubuntu box doesn't have a /usr/lib64
12:50 vpshastry joined #gluster
12:51 awheeler Disregard, seems my solution was for a different symbol.
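Though awheeler notes his workaround was for a different symbol, the step he describes amounts to something like the following (the file name is arbitrary):
    # tell the dynamic linker to also search /usr/lib64, then rebuild its cache
    echo "/usr/lib64" > /etc/ld.so.conf.d/glusterfs-lib64.conf
    ldconfig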
12:54 andreask joined #gluster
13:02 timothy joined #gluster
13:04 robo joined #gluster
13:17 camel1cz left #gluster
13:17 mooperd joined #gluster
13:20 mbmvianna joined #gluster
13:21 manik joined #gluster
13:23 glusterbot New news from newglusterbugs: [Bug 924265] [FEAT] Support "DHT over DHT" configurations <http://goo.gl/VN1e2>
13:36 shireesh joined #gluster
13:37 rwheeler joined #gluster
13:40 hagarth joined #gluster
13:42 timothy joined #gluster
13:49 vimal joined #gluster
13:49 ngoswami joined #gluster
13:55 dustint joined #gluster
14:01 mohankumar joined #gluster
14:03 bennyturns joined #gluster
14:13 harshpb joined #gluster
14:25 plarsen joined #gluster
14:30 jdarcy joined #gluster
14:35 Nagilum_ joined #gluster
14:36 ProT-0-TypE joined #gluster
14:37 manik joined #gluster
14:38 manik joined #gluster
14:38 jdarcy joined #gluster
14:38 harshpb joined #gluster
14:45 matt_hm joined #gluster
14:54 manik joined #gluster
14:54 daMaestro joined #gluster
14:59 Martin|2 joined #gluster
15:00 manik joined #gluster
15:11 jskinner joined #gluster
15:15 lh joined #gluster
15:18 partner was there any command / trick / oneliner available for cleaning up some sticky-pointers that are preventing rebalancing "properly" ?
15:19 nueces joined #gluster
15:19 red_solar joined #gluster
15:20 satheesh joined #gluster
15:20 partner i guess i could do some find /foo -perm 1000 -delete and something similar perhaps for .glusterfs and then rebalance but thought to ask first if i'm all lost?-)
15:22 partner maybe i can just leave them as is too, some 7% or so.. new brick on dist-repl only got 35% of stuff while rest do 65% so not exactly in balance?
15:23 satheesh joined #gluster
15:25 satheesh joined #gluster
15:25 robinr joined #gluster
15:26 robinr hi, i read that redhat 6.3 should not be used for Gluster Server (due to ext4 changes). Reformatting into xfs does not seem to be feasible for me at this point. What about redhat 6.3 gluster *client* ? Can I use Gluster client on Redhat 6.3 ?
15:27 semiosis probably yes
15:28 Norky you can run the server on RHEL6.3 and 6.4, so long as you buy XFS add-on. Or run RHS instead of RHEL
15:28 Norky and yes, the client works fine
15:28 robinr thanks semiosis and Norky for the feedback
15:28 Mo_ joined #gluster
15:37 bala joined #gluster
15:47 theron joined #gluster
15:48 satheesh joined #gluster
15:52 dustint joined #gluster
15:57 rastar joined #gluster
16:07 aliguori joined #gluster
16:11 GLHMarmot joined #gluster
16:12 nat left #gluster
16:19 vpshastry joined #gluster
16:24 manik joined #gluster
16:25 JoeJulian partner: I don't think that would work as the <gfid> file would still exist for those sticky pointers. What I think you're looking to do, though, can be accomplished by "rebalance ... start force".
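Written out with a hypothetical volume name, the command JoeJulian is pointing at is:
    # with force, rebalance also migrates files it would otherwise skip
    gluster volume rebalance myvol start force
    gluster volume rebalance myvol status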
16:26 hagarth joined #gluster
16:29 shylesh joined #gluster
16:34 bulde joined #gluster
16:49 flrichar joined #gluster
16:52 manik joined #gluster
16:58 GreyFoxx joined #gluster
16:58 partner rgr, i'll try what happens with force
17:05 soukihei joined #gluster
17:13 robinr joined #gluster
17:13 partner seems to be touching the ones it skipped on the previous round without force
17:17 aravindavk joined #gluster
17:17 partner not exactly sure what this does now but at least its making it even less balanced :)
17:19 rwheeler joined #gluster
17:23 vpshastry joined #gluster
17:32 harshpb joined #gluster
17:34 robinr joined #gluster
17:37 partner finished. the three bricks are now 68%, 63%, 34% so i guess my files just happens to be more suitable for the old bricks than for the new one..
17:38 manik1 joined #gluster
17:38 partner or i am just _assuming_ i would get something more close to having 1/3 on each..
17:39 harshpb joined #gluster
17:53 kbsingh joined #gluster
17:57 vpshastry joined #gluster
18:02 y4m4 joined #gluster
18:02 harshpb joined #gluster
18:06 NuxRo joined #gluster
18:11 _pol does glusterd benefit from lots of RAM?
18:12 _pol is there a recommended RAM/GB ratio?
18:12 _pol or RAM/brick or something...
18:19 ujjain joined #gluster
18:21 harshpb joined #gluster
18:25 harshpb joined #gluster
18:28 harshpb joined #gluster
18:31 harshpb joined #gluster
18:38 JoeJulian _pol: performance.cache-size defaults to 32Mb. That measurement is used for several cache size defaults resulting in that being multiplied per-brick. I set mine to 8Mb and with 60 bricks, I use 16Gb.
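For reference, the tuning JoeJulian mentions is a per-volume option; the volume name below is hypothetical:
    # lower the cache size from the 32MB default to 8MB, as in JoeJulian's example
    gluster volume set myvol performance.cache-size 8MB
    # verify the option took effect
    gluster volume info myvol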
18:46 dustint joined #gluster
18:46 luckybambu joined #gluster
18:49 stickyboy joined #gluster
18:50 camel1cz joined #gluster
18:51 stickyboy I'm trying to find bottlenecks in my new Gluster deployment.  Not sure if it's network, disk, or Gluster overhead.
18:54 camel1cz stickyboy: Hi, what's your setup?
18:54 stickyboy Hey, camel1cz.
18:56 _pol JoeJulian: 60 bricks in a volume?
18:56 stickyboy camel1cz: I have a few replicate + distribute volumes, running on dedicated 1GbE switch, on two servers with 12-disk hardware RAID5 arrays.
18:56 _pol JoeJulian: or just 60 bricks present on a single server?
18:57 camel1cz stickyboy: Nice... is it a production system? (sorry for the questionary - I'm curious and going to deploy glfs soon :)
18:58 _pol_ joined #gluster
18:58 JoeJulian _pol: 60 per server
18:58 stickyboy camel1cz: No problem.  Not production yet.  Still migrating production data in and testing.
19:01 JoeJulian stickyboy: Potential bottlenecks include context switching overhead through fuse, tcp overhead for incomplete packets, network bandwidth, and disk i/o. The order of those depends on typical usage, and volume configuration.
19:03 semiosis typically network & disk are the bottlenecks, however if you had infiniband & ssd then maybe fuse/cpu would be
19:05 stickyboy Infiniband + SSD :P
19:06 stickyboy I guess a good test would be to scp from client -> server versus copying straight into the fuse mount?
19:07 dendazen joined #gluster
19:07 dendazen quit
19:08 _pol joined #gluster
19:09 semiosis stickyboy: what i'd do is start with one brick per server, and one client, then add clients until aggregate performance levels off, then add a brick to each server (more distribute) and add more clients until performance levels off again
19:10 semiosis repeat until you have enough bricks per server and clients to saturate the network interfaces, at which point adding bricks & clients wont return any greater aggregate performance, then start spreading out over more servers
19:10 semiosis until you saturate the switch
19:10 semiosis ...
19:11 semiosis bbl
19:12 dendazen Hi guys.
19:12 stickyboy semiosis: Cool, thanks.  Chat later.
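A quick way to separate network throughput from Gluster/FUSE overhead, along the lines stickyboy suggests, is to time the same transfer both ways; the file, mount point, and hostname are hypothetical:
    # raw network + remote disk, bypassing gluster
    time scp testfile server1:/tmp/
    # the same data written through the gluster client mount
    time cp testfile /mnt/gluster/
    # or generate a stream directly onto the mount
    dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=1024 conv=fsync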
19:12 camel1cz Does anyone host KVM images on glfs on two node cluster?
19:15 dendazen I have one node of glusterfs with 2 bricks on the same server, which are in Distribute type right now, but I want to create another server with glusterfs on it and replicate the current bricks to that new server.
19:15 dendazen Is it possible?
19:16 dendazen Or i will have to rebuild the bricks?
19:18 semiosis dendazen: i think you can do add-brick replica 2
19:18 dendazen on the new server?
19:19 semiosis right, first you'll need to probe the new server from the existing one, then you can do the add-brick from either
19:19 semiosis you really should try this out on test vms before trying it with your real volume
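On GlusterFS 3.3 or later, the sequence semiosis outlines looks roughly like this; hostnames, the volume name, and brick paths are hypothetical, and each existing brick needs a new brick to pair with:
    # from the existing server, add the new server to the pool
    gluster peer probe newserver
    # raise the replica count to 2, pairing the two existing bricks with two new ones
    gluster volume add-brick myvol replica 2 newserver:/data/brick1 newserver:/data/brick2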
19:20 dendazen Sorry, still did not get it.. so i spin up a new server, install glusterfs on it, and then will i have to rsync the data from the existing one at first?
19:20 dendazen and then add those as a replica brick?
19:20 semiosis no i dont think you need to rsync
19:20 semiosis what version of glusterfs are you using?
19:20 dendazen one sec.
19:24 dendazen sorry
19:24 dendazen 3.2.5
19:27 semiosis ahh, ok you dont have the ability to add replicas in that version :(
19:27 dendazen oh wow.
19:27 dendazen i have ubuntu 12.04
19:28 semiosis see ,,(ppa)
19:28 glusterbot The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
19:28 dendazen which is quite recent distro
19:28 dendazen is there a way to upgrade?
19:28 dendazen without downtime?
19:28 semiosis no
19:28 dendazen sigh..
19:28 semiosis downtime is required, see ,,(3.3 upgrade notes)
19:28 glusterbot http://goo.gl/qOiO7
19:29 dendazen Thanks, i will look into it.
19:29 semiosis on 3.2.x you would need downtime to change replica count anyway, it would require taking volume down, deleting it & recreating it
19:29 semiosis which is actually pretty safe & fairly easy, but requires downtime
19:29 dendazen To bring the server down for 15 minutes, it is better to unmount all the active mounts, right?
19:30 sjoeboo_ joined #gluster
19:30 dendazen Oh okay, Thanks . But for replication to work on the server i will have to upgrade anyway, correct?
19:32 semiosis hard for me to say whats better
19:32 semiosis idk your situation
19:35 jdarcy joined #gluster
19:37 kedmison joined #gluster
19:43 dustint joined #gluster
19:44 jdarcy joined #gluster
19:44 wushudoin joined #gluster
19:44 dendazen Well you said yourself that replicas can only be available in 3.3 and not 2.x
19:45 semiosis sorry to confuse you, what i mean is that in 3.2.x you can't change the replica count on an existing volume
19:46 semiosis meaning you can't add replication to a volume without replication, or change how many replicas there are on a volume that does have it
19:46 semiosis when you create a volume you can enable replication and set the number of replicas, then it stays the same for the life of the volume
19:46 dendazen Oh i see
19:46 dendazen .
19:47 semiosis thus if you had to change the replica count (or enable replication) you would have to delete & recreate the volume
19:47 dendazen Right, but not delete the data itslef
19:47 dendazen just the gluster meta volume structure, correct?
19:48 semiosis no the data stays on the bricks.  for example, if you had a 2-brick distributed volume, lets call the bricks a & b, then you could delete the volume and make a new one, giving the bricks on the create command as a a' b b'
19:48 semiosis where the ' (prime) bricks are the new empty bricks that will be replicas of your existing bricks
19:48 semiosis interleave the existing bricks with the new ones, old new old new old new ...
19:49 semiosis when you create the new volume, for each replica pair one of the bricks (old) would be full of data, the other (new) would be empty
19:49 dendazen oh i see.
19:49 semiosis then self-heal would sync the data from old to new
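As a concrete sketch of the interleaving semiosis describes (server names, volume name, and brick paths are hypothetical), the new create command pairs each old, full brick with a new, empty one:
    # old new old new: each adjacent pair in the list becomes one replica set
    gluster volume create myvol replica 2 \
        server1:/bricks/a server2:/bricks/a-copy \
        server1:/bricks/b server2:/bricks/b-copy
    gluster volume start myvol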
19:51 dendazen so i do remember that i can fiddle with the metadata manually by hand in the /etc/glusterd folder
19:51 dendazen That's how i changed the ip address when it got changed, as the first admin had bound the volumes to lan IPs
19:51 dendazen and not to hostnames.
19:51 JoeJulian camel1cz: I think hosting images on a Global Logistical Freight Service might have some disadvantages. ;)
19:52 dendazen Or it is always better remove volumes and recreate them.
19:52 dendazen ?
19:53 JoeJulian Does your risk model vary?
19:54 camel1cz JoeJulian: Heh :-) what have I asked? :-D
19:55 JoeJulian Typically the replica count isn't changed, though I could see if you're going to have read loads maxing out your servers that adding replicas could be advantageous. But that would have to be analyzed against redistribution.
19:56 JoeJulian camel1cz: You asked about hosting images on glfs. I googled that and that seemed the most likely result. Unless you're trying to get the Great Lakes Forecasting System or Grande Loge Feminine de Suisse to host them...
19:57 _pol_ joined #gluster
19:57 JoeJulian But to be a little more serious, lots of people host kvm images on glusterfs.
19:57 zaitcev joined #gluster
19:58 camel1cz JoeJulian: So I go with the Grande Loge Feminine de Suisse - sounds interesting :-P :-D
20:01 camel1cz JoeJulian: Do you possibly have some links to readings about this type of deployment?
20:18 camel1cz left #gluster
20:20 awheeler So I've replaced a server, and heal-failed is showing some files.  How do I resolve that?
20:21 camel1cz joined #gluster
20:21 kedmison joined #gluster
20:28 camel1cz left #gluster
20:38 hateya joined #gluster
20:38 camel1cz joined #gluster
20:39 camel1cz nite guys
20:40 camel1cz left #gluster
20:53 y4m4 joined #gluster
20:59 glusterbot New news from newglusterbugs: [Bug 924481] gluster nfs sets unrequested bits in ACCESS reply <http://goo.gl/g6zWA>
20:59 andreask joined #gluster
21:05 k7_ joined #gluster
21:14 ricky-ticky joined #gluster
21:29 glusterbot New news from newglusterbugs: [Bug 924488] NUFA and switch don't initialize DHT data structures properly <http://goo.gl/CEFNF> || [Bug 924490] Use a proper method table for init/fini <http://goo.gl/mWr8o>
21:34 ricky-ticky Hello, can someone help me with remove brick problem?
21:36 semiosis hello
21:36 glusterbot semiosis: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
21:36 semiosis ricky-ticky: ^^^
21:37 ricky-ticky semiosis: thanks )
21:38 ricky-ticky semiosis: i have strange problem with distributed replicated volume
21:39 ricky-ticky here is a info: Volume Name: distributed-replicated-pdf-storage
21:39 ricky-ticky Type: Distributed-Replicate
21:39 ricky-ticky Volume ID: 26a4a439-a3ee-492f-b548-b08bec0b3c69
21:39 ricky-ticky Status: Started
21:39 ricky-ticky Number of Bricks: 4 x 2 = 8
21:39 ricky-ticky Transport-type: tcp
21:39 ricky-ticky Bricks:
21:39 ricky-ticky Brick1: 10.99.1.10:/data/brick03-1
21:39 ricky-ticky Brick2: 10.99.1.11:/data/brick04-1
21:39 ricky-ticky Brick3: 10.99.1.10:/data/brick03-2
21:39 ricky-ticky Brick4: 10.99.1.11:/data/brick04-2
21:39 ricky-ticky was kicked by glusterbot: message flood detected
21:40 ricky-ticky joined #gluster
21:40 dendazen use pastebin
21:40 dendazen next time
21:40 ricky-ticky sorry, using irc not so often
21:42 cw joined #gluster
21:42 ricky-ticky so, when i try remove bricks i got mesage: Rebalance is in progress. Please retry after completion
21:43 ricky-ticky but there is no rebalance
21:43 ricky-ticky and command  "gluster volume rebalance distributed-replicated-pdf-storage status" returns nothing
21:45 ricky-ticky so i'm stuck and don't know what to do
21:52 JoeJulian I wonder if "gluster volume rebalance distributed-replicated-pdf-storage stop" would work...
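Spelled out for a 3.3-era release, JoeJulian's suggestion and the remove-brick workflow it would unblock look like this; which replica pair to remove depends on the intended layout, so the bricks below are placeholders:
    # clear the (stuck) rebalance state, then retry the removal
    gluster volume rebalance distributed-replicated-pdf-storage stop
    gluster volume remove-brick distributed-replicated-pdf-storage BRICK1 BRICK2 start
    gluster volume remove-brick distributed-replicated-pdf-storage BRICK1 BRICK2 status
    # once the data has been migrated off, commit the removal
    gluster volume remove-brick distributed-replicated-pdf-storage BRICK1 BRICK2 commit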
21:56 ricky-ticky joined #gluster
22:04 red_solar joined #gluster
22:15 fendrychl joined #gluster
22:29 glusterbot New news from newglusterbugs: [Bug 924504] print the in-memory graph rather than volfile in the logfile <http://goo.gl/6rd7E>
22:33 fendrychl left #gluster
22:42 Nagilum_ what would be the best way to re-IP (change the IPs of) a gluster?
22:42 JoeJulian Use ,,(hostnames)
22:43 glusterbot Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
22:43 JoeJulian But since you didn't, generally people delete and recreate the volume.
22:43 JoeJulian If you want to figure out how to brute-force it, look at the files under /var/lib/glusterd.
22:43 Nagilum_ JoeJulian: well, I used hostnames, but the first host is always shown by IP
22:44 JoeJulian Re-read that factoid carefully.
22:44 Nagilum_ but glusterbot just gave the answer..
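Concretely, the fix in glusterbot's factoid is just a re-probe by name; the hostname below is hypothetical:
    # from any other peer, probe the peer that still shows up by IP, using its hostname
    gluster peer probe server1
    # confirm the peer is now listed by hostname
    gluster peer status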
22:49 _pol joined #gluster
22:54 Gualicho joined #gluster
22:55 Gualicho hello all
22:57 disarone joined #gluster
22:59 jdarcy joined #gluster
23:04 fendrychl joined #gluster
23:08 fendrychl left #gluster
23:58 dendazen joined #gluster
