
IRC log for #gluster, 2014-07-01


All times shown according to UTC.

Time Nick Message
00:00 JoeJulian The error was that it couldn't resolve ec2-23-21-168-59.compute-1.amazonaws.com to a local server but that hostname is assigned to a brick. None of its peers matched that hostname so it panicked. Now it will resolve that hostname to itself, getting past that error.
00:01 realdannys ok excellent
00:01 realdannys fingers crossed then
00:05 realdannys excellent it worked :D
00:05 realdannys thanks JoeJulian!
00:05 realdannys thats one down
00:05 realdannys now lets see what permissions the folder has got...
00:07 realdannys excellent, its started as nginx:nginx for now
00:07 realdannys i’ll shut both servers down and see what happens
00:10 Peter1 anyone seen this?
00:10 gildub joined #gluster
00:10 Peter1 quota: quota context is not present in inode (gfid:00000000-0000-0000-0000-000000000001)
00:10 JoeJulian I've seen it pasted here before... ;)
00:11 JoeJulian I would assume there's no quota defined for the root of the volume.
00:11 Peter1 right
00:11 Peter1 do we need to?
00:11 JoeJulian Not as far as I'm concerned.
00:12 Peter1 just the amount of logging on gluster is  A LOT
00:12 Peter1 trying to see if we can eliminate those logs
00:12 JoeJulian Turn it up from info to error.
00:12 Peter1 where can i do that?
00:13 JoeJulian gluster volume set help
00:17 Peter1 all upper case? ERROR ?
00:18 Peter1 or CRITICAL ?
00:19 Peter1 how about the cli.log, nfs.log and etc-glusterfs-glusterd.vol.log?
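
(A rough sketch of the log-level change being discussed, assuming a hypothetical volume named myvol and the diagnostics options present in 3.4/3.5-era releases; levels are usually given in upper case, e.g. ERROR:)

    # quiet the brick-side and client-side (fuse/nfs) logs down to errors only
    gluster volume set myvol diagnostics.brick-log-level ERROR
    gluster volume set myvol diagnostics.client-log-level ERROR
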
00:20 TaiSHi joined #gluster
00:21 TaiSHi Hi all, I'm receiving an error when executing the following: volume remove-brick dalepumas replica 2 gfs03-test:/dalepumas
00:21 TaiSHi volume remove-brick commit force: failed: number of bricks provided (1) is not valid. need at least 2 (or 2xN)
00:21 realdannys great, it's stayed as nginx:nginx between reboots, although on the actual gluster server it's going to git:494 - no idea why but as long as the client is correct that's all that matters for me
00:24 TaiSHi oh yay it switched itself to replicated-distributed
00:26 TaiSHi Number of Bricks: 1 x 2 = 3 because of science
00:34 realdannys ok excellent, all seems well now - only one shutdown and cold start on the client caused gluster not to mount automatically, I'm guessing it's due to the fact that I wasn't able to assign the elastic IP it's expecting in time, as it works across reboots more reliably. Hopefully that's the issue as once the load balancer is setup the IP will always be set when an instance boots I think
00:34 realdannys at least I think thats the way it’ll work internally and connecting to gluster anyway
00:39 JoeJulian Clients don't connect through a load balancer to the gluster server. That won't work.
00:40 TaiSHi_ joined #gluster
00:40 bala joined #gluster
00:42 TaiSHi joined #gluster
00:47 realdannys hmmm ok, well I guess I'll tackle how each EC2 instance auto connects to gluster on boot then :/
00:47 realdannys bed now, that’ll take some reading tomorrow I suppose, if anyone wants to leave a link
00:47 realdannys feel free
00:50 Peter1 any developer here know about this error?
00:50 Peter1 http://fpaste.org/114480/14041676/
00:50 glusterbot Title: #114480 Fedora Project Pastebin (at fpaste.org)
00:53 TaiSHi Hi all
00:54 TaiSHi I'm working on a schema where I would need to add/delete 'masters' of a replica on demand
00:54 TaiSHi Is gluster currently capable of doing that?
00:54 bennyturns joined #gluster
01:00 drsekula joined #gluster
01:01 bennyturns Is it a bug that I can set a quota larger than the size of my volume?  Should it error out or something?
01:05 hagarth joined #gluster
01:08 TaiSHi I guess Gluster can't do it =(
01:36 bala joined #gluster
01:38 gildub joined #gluster
01:53 JoeJulian ~brick order | TaiSHi
01:53 glusterbot TaiSHi: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
01:53 JoeJulian Therefore, to remove-bricks you have to remove an entire replica.
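
(Following glusterbot's example volume above, a hedged sketch of what removing a whole replica set looks like; myvol and the server names are the factoid's hypothetical ones, and the number of bricks removed must stay a multiple of the replica count:)

    gluster volume remove-brick myvol server3:/data/brick1 server4:/data/brick1 start
    gluster volume remove-brick myvol server3:/data/brick1 server4:/data/brick1 status   # wait for data migration to finish
    gluster volume remove-brick myvol server3:/data/brick1 server4:/data/brick1 commit
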
01:56 JoeJulian TaiSHi: If you're looking to have automatic dynamic growth, look at the ,,(puppet) module
01:56 glusterbot TaiSHi: https://github.com/purpleidea/puppet-gluster
01:59 Peter1 i got this error: replicate-0:  metadata self heal  failed
01:59 Peter1 but when i did a volume status, there are no active volume task
01:59 Peter1 what should i do?
01:59 JoeJulian stuff preceding that matters
02:00 Peter1 ?
02:00 JoeJulian That error all by itself is completely useless.
02:01 Peter1 http://ur1.ca/hn58y
02:01 glusterbot Title: #114504 Fedora Project Pastebin (at ur1.ca)
02:02 Peter1 ah…so i see the following line saying selfheal is completed....
02:02 JoeJulian yeah
02:02 Peter1 should be all good?
02:02 JoeJulian looks that way
02:03 Peter1 so gluster does retry itself....
02:03 Peter1 thank you !
02:11 harish_ joined #gluster
02:29 TaiSHi JoeJulian: heh, I'm a salt user but gonna take a look at that
02:29 TaiSHi Still, current infrastructure is only 1 server (yuck) with 1 mounted ceph volume
02:30 TaiSHi pranithk said that 3.6 will have a better management
02:31 TaiSHi Also, JoeJulian I was doing replica 2 serverx:/x servery:/x
02:31 TaiSHi then add-brick and remove-brick of a serverz:/x
02:31 JoeJulian salt is supposed to be able to process puppet manifests, isn't it?
02:31 TaiSHi Not really sure
02:32 JoeJulian TaiSHi: Right. Can't do that. When you're doing replicas, adds and removes have to be complete replicas.
02:32 mortuar joined #gluster
02:33 TaiSHi So I basically have no way of 'growing' the way I want ?
02:33 JoeJulian Really? I thought rule based replication wasn't going to be in before 3.7.
02:33 TaiSHi That's what he said, waiting for a ticket to be solved
02:33 JoeJulian @lucky expanding a gluster volume by one server
02:33 glusterbot JoeJulian: http://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
02:34 TaiSHi I have an offtopic question: If I have a 3-server gluster cluster
02:34 TaiSHi And a 4th one with a client, and mount gfs01:/mount
02:34 JoeJulian @mount server
02:34 glusterbot JoeJulian: The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns
02:34 TaiSHi Will it read ONLY from gfs01 or will distribute the readings ?
02:34 TaiSHi Love you
02:34 TaiSHi @rrdns
02:34 glusterbot TaiSHi: You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/ktI6p
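
(Since the mount server is only used to fetch the volume definition, mount-time failover can also be handled by listing fallback servers instead of rrdns; a sketch with hypothetical host and volume names - the option spelling varies by release, backupvolfile-server on older clients and backup-volfile-servers on newer ones:)

    mount -t glusterfs -o backupvolfile-server=gfs02 gfs01:/myvol /mnt/myvol
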
02:34 TaiSHi You seem to be the gluster guru
02:35 JoeJulian That's what I get paid for (now).
02:35 TaiSHi What if I create a gluster with 4 servers and replica 1 ?
02:35 JoeJulian Principal Cloud Architect for IO as of about a month ago.
02:36 TaiSHi Nice!
02:36 JoeJulian That would be a distributed volume.
02:36 TaiSHi I'm stuck in this work as a sysadmin for a crappy infrastructure
02:36 JoeJulian I feel that this is a pretty good description of how distribute works: ,,(lucky dht misses are expensive)
02:36 glusterbot JoeJulian: Error: No factoid matches that key.
02:36 glusterbot JoeJulian: Error: No factoid matches that key.
02:36 JoeJulian @lucky dht misses are expensive
02:36 glusterbot JoeJulian: http://joejulian.name/blog/dht-misses-are-expensive/
02:37 JoeJulian Oh, my new toys are very cool.
02:37 TaiSHi lol
02:37 TaiSHi Yeah, I kind of want to have ALL the data on ALL of the servers
02:37 TaiSHi (hey, it's YOUR blog lol)
02:37 JoeJulian hehe
02:37 JoeJulian Also read the one about replication.
02:38 dockbram hi all, I've setup gluster 3.5.1 and successfully shared a folder between two nodes. Adding files works fine however deleting a file does not. When I delete a file it stays on one of the nodes. How do I get delete to sync?
02:38 TaiSHi Yeah, I read that one back a few days when searching
02:38 JoeJulian You really don't necessarily want all the data on all the servers. You want a predicted failure rate +1.
02:38 JoeJulian Calculate your uptime guarantee, don't just guess at it. :D
02:39 TaiSHi Thing is I'm migrating over from a single-server with MANY sites
02:39 * JoeJulian beats dockbram with the "don't access the bricks directly" stick.
02:39 TaiSHi to a 4 (min) server per site
02:39 TaiSHi (1 load balancer, 1 db server, 2+n webservers)
02:40 TaiSHi And eventually, if webservers go over a load threshold, another one is fired up
02:40 TaiSHi Which is where my mess with gluster begins
02:40 JoeJulian gotta run. timer just went off for dinner...
02:40 dockbram JoeJulian :) I don't...
02:40 dockbram I've mounted them (locally) over nfs
02:41 dockbram somehow that sounds wrong?
02:41 TaiSHi lol
02:41 TaiSHi JoeJulian: I'll ping you later on when I get home, I feel like you might provide me some insight I've been missing
02:41 TaiSHi And tell you about an idea I've been chewing
02:42 dockbram thank you JoeJulian :)
02:42 dockbram for later reference: no nfs but mount -t glusterfs ... can't try right now as my node is inaccessible but guess that will work :)
02:43 TaiSHi I do mount it that way
02:43 TaiSHi (actually -t glusterfs isn't needed)
02:43 TaiSHi But my guess is, if file is local, will it still request other servers for files?
02:43 dockbram TaiSHi depends on what you put in /etc/fstab, right?
02:43 TaiSHi Nope, it detected it automatically
02:43 dockbram hmm
02:43 dockbram what's your mount command?
02:44 TaiSHi I have to run, public bus rates are going up 20% in 17 minutes
02:44 dockbram ok later
02:44 TaiSHi mount localhost(or whateverhost):/volume /mnt/mountpoint
02:44 dockbram strange how it knows about that fs being gluster
02:44 dockbram whatever, I'll play with it when my node is up :)
02:44 TaiSHi Well in 1 hour I can give you some insight on if it's loading it with gluster
02:44 TaiSHi or nfs perhaps :P
02:44 dockbram tnx :)
02:45 TaiSHi ping me then
02:45 TaiSHi <3
02:45 dockbram will do
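
(For reference on the mount discussion above, a minimal /etc/fstab line for a native FUSE mount; host, volume and mountpoint are hypothetical:)

    gfs01:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0 0
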
03:14 AaronGr joined #gluster
03:24 gildub joined #gluster
03:25 kdhananjay joined #gluster
03:32 gildub joined #gluster
03:36 MacWinner joined #gluster
03:38 itisravi joined #gluster
03:40 sputnik13 joined #gluster
03:41 gildub joined #gluster
03:47 rastar joined #gluster
03:51 TaiSHi dockbram: ping
03:51 glusterbot TaiSHi: Please don't naked ping. http://blogs.gnome.org/mark​mc/2014/02/20/naked-pings/
03:51 * TaiSHi hides
03:57 TaiSHi JoeJulian: let me know when you're back, or if I can bother you tomorrow. I'd like some insight on my particular case
03:59 TaiSHi dockbram: I'm deploying an instance to give you that info
03:59 TaiSHi Give it a couple mins to fully deploy
03:59 ndarshan joined #gluster
04:02 bharata-rao joined #gluster
04:08 davinder16 joined #gluster
04:14 TaiSHi dockbram: If you don't specify a fs, it'll use nfs
04:14 shubhendu_ joined #gluster
04:18 jiffe98 joined #gluster
04:22 T0aD joined #gluster
04:22 MacWinner joined #gluster
04:22 kdhananjay joined #gluster
04:22 sage_ joined #gluster
04:22 Slasheri joined #gluster
04:22 firemanxbr joined #gluster
04:22 qdk joined #gluster
04:22 Der_Fisch2 joined #gluster
04:22 Philambdo joined #gluster
04:22 17SAAL276 joined #gluster
04:22 Bardack joined #gluster
04:22 RobertLaptop joined #gluster
04:22 burn420 joined #gluster
04:22 Kins joined #gluster
04:22 _jmp__ joined #gluster
04:22 Intensity joined #gluster
04:22 ninkotech joined #gluster
04:22 stickyboy joined #gluster
04:22 semiosis joined #gluster
04:22 the-me_ joined #gluster
04:22 xavih joined #gluster
04:22 dblack_ joined #gluster
04:22 ry joined #gluster
04:22 kkeithley joined #gluster
04:22 weykent joined #gluster
04:22 lyang0 joined #gluster
04:22 ccha joined #gluster
04:22 marmalodak joined #gluster
04:22 pdrakeweb joined #gluster
04:22 silky joined #gluster
04:22 georgeh|workstat joined #gluster
04:22 sauce joined #gluster
04:22 sspinner joined #gluster
04:22 vincent_vdk joined #gluster
04:22 coreping joined #gluster
04:22 overclk joined #gluster
04:22 hflai joined #gluster
04:22 asku joined #gluster
04:22 prasanth|brb joined #gluster
04:22 sac`away` joined #gluster
04:22 xymox joined #gluster
04:22 johnmark joined #gluster
04:22 portante joined #gluster
04:26 samkottler joined #gluster
04:27 Nopik joined #gluster
04:30 lalatenduM joined #gluster
04:31 nishanth joined #gluster
04:32 firemanxbr joined #gluster
04:32 semiosis joined #gluster
04:32 semiosis joined #gluster
04:33 dblack joined #gluster
04:34 Peter1 hi Anyone know why this happen?
04:34 Peter1 http://fpaste.org/114480/14041676/
04:34 glusterbot Title: #114480 Fedora Project Pastebin (at fpaste.org)
04:36 Intensity joined #gluster
04:43 vimal joined #gluster
04:45 gildub joined #gluster
04:47 ramteid joined #gluster
04:48 dtrainor joined #gluster
04:57 spandit joined #gluster
04:59 psharma joined #gluster
05:05 ppai joined #gluster
05:07 RameshN joined #gluster
05:07 kdhananjay joined #gluster
05:13 prasanthp joined #gluster
05:32 kshlm joined #gluster
05:33 davinder16 joined #gluster
05:34 vpshastry joined #gluster
05:35 bala1 joined #gluster
05:37 saurabh joined #gluster
05:48 fraggeln [5919129.379820] Out of memory: Kill process 63382 (glusterfs) score 666 or sacrifice child
05:48 fraggeln [5919129.380065] Killed process 63382 (glusterfs) total-vm:3594116kB, anon-rss:2842036kB, file-rss:196kB
05:48 fraggeln Hehe :)
05:48 fraggeln thats often bad right? :D
05:50 fraggeln JoeJulian: quick-read <- safe to use?
05:54 dusmant joined #gluster
06:04 suj joined #gluster
06:07 suj Noob questions: Does Gluster use any special devices for changelogs (like in far)? Like NVRAM / flash?
06:07 suj s/far/afr
06:08 suj anyone?
06:11 marbu joined #gluster
06:11 ppai joined #gluster
06:14 dtrainor joined #gluster
06:28 mbukatov joined #gluster
06:29 ricky-ti1 joined #gluster
06:34 purpleid1a 118215
06:34 purpleid1a 624668
06:34 purpleid1a 626060
06:34 purpleid1a 768270
06:34 purpleid1a 035102
06:34 purpleid1a 469281
06:34 purpleid1a 212338
06:34 purpleid1a 076481
06:34 purpleid1a sorry ^
06:36 kdhananjay joined #gluster
06:37 vkoppad joined #gluster
06:43 kasturi joined #gluster
06:49 dtrainor joined #gluster
06:52 dusmant joined #gluster
06:54 ctria joined #gluster
06:58 ekuric joined #gluster
07:05 itisravi_ joined #gluster
07:21 lalatenduM fraggeln, yes. I think you should report it to gluster-devel mailing list
07:21 lalatenduM @mailinglists
07:21 glusterbot lalatenduM: http://www.gluster.org/interact/mailinglists
07:21 vpshastry joined #gluster
07:31 fraggeln lalatenduM: after some investigation, I think its related to quick-read and only 4gb ram on the client
07:31 vpshastry joined #gluster
07:48 shylesh__ joined #gluster
07:52 andreask joined #gluster
07:58 ppai joined #gluster
07:58 fsimonce joined #gluster
08:01 calum_ joined #gluster
08:05 lalatenduM fraggeln, ok cool
08:06 kanagaraj joined #gluster
08:16 monotek left #gluster
08:30 liquidat joined #gluster
08:35 ktosiek joined #gluster
08:41 kanagaraj_ joined #gluster
08:43 [o__o] joined #gluster
08:48 andreask joined #gluster
08:48 [o__o] joined #gluster
08:49 harish_ joined #gluster
08:52 [o__o] joined #gluster
08:54 pkliczew_ joined #gluster
08:59 rjoseph joined #gluster
09:06 vpshastry joined #gluster
09:07 keytab joined #gluster
09:08 vpshastry joined #gluster
09:09 realdannys joined #gluster
09:14 ppai joined #gluster
09:14 davent joined #gluster
09:15 davent Could anyone help me with a glusterfsd issue? I am trying to reset the TCP port that a brick listens on as I am running it behind a firewall
09:15 davent the port selected appears to increase if I have created and deleted the volume before
09:30 andreask joined #gluster
09:33 ramteid joined #gluster
09:41 [ilin] joined #gluster
09:44 [ilin] hi, we have two gluster servers, one volume with two bricks - because we probably deleted files directly from the bricks and not the gluster mount we ended up with different disk usage on both bricks, one shows 60% use, the other one 80%
09:44 [ilin] is there a way to fix this?
09:47 dusmant joined #gluster
09:47 ctria joined #gluster
09:49 [o__o] joined #gluster
09:50 [o__o] joined #gluster
09:52 karnan joined #gluster
10:00 davent left #gluster
10:02 sahina joined #gluster
10:05 RameshN joined #gluster
10:06 elico joined #gluster
10:09 lalatenduM joined #gluster
10:10 dusmant joined #gluster
10:12 rastar joined #gluster
10:17 prasanthp joined #gluster
10:19 kkeithley1 joined #gluster
10:32 torbjorn1_ joined #gluster
10:34 dusmant joined #gluster
10:36 samkottler joined #gluster
10:39 ctria joined #gluster
10:44 ppai joined #gluster
10:51 realdannys joined #gluster
11:00 LebedevRI joined #gluster
11:02 SNow joined #gluster
11:02 SNow hi ppl, I have an nginx server serving small imgs (banners) and I'm running out of space and need to scale. What's the best solution to do the scaling between a few servers (same rack, same switch) but with the highest performance for nginx serving imgs? Is GlusterFS good for that or too slow?
11:09 fraggeln SNow: I have done a lot of testing with images; glusterfs is not lightning-fast with small files, but I guess it really depends on how good your cache-layer is.
11:09 gildub joined #gluster
11:10 SNow i have nginx in front
11:10 SNow working on the caching layer
11:10 SNow I have 460 GB of img files
11:10 fraggeln My advice is to test :)
11:11 fraggeln there is a lot of tuning that can be done.
11:12 Pupeno joined #gluster
11:22 ppai joined #gluster
11:24 ndk joined #gluster
11:28 diegows joined #gluster
11:32 andreask joined #gluster
11:37 rwheeler joined #gluster
11:40 B21956 joined #gluster
11:50 SNow joined #gluster
11:50 SNow joined #gluster
11:51 lalatenduM joined #gluster
11:57 lava_ joined #gluster
12:07 vikumar joined #gluster
12:08 psharma_ joined #gluster
12:08 doekia joined #gluster
12:08 itisravi joined #gluster
12:08 prasanth|afk joined #gluster
12:09 vpshastry1 joined #gluster
12:09 sac`away joined #gluster
12:09 vikumar joined #gluster
12:09 psharma_ joined #gluster
12:09 klaas joined #gluster
12:09 itisravi joined #gluster
12:09 prasanth|afk joined #gluster
12:09 vpshastry1 joined #gluster
12:09 sac`away joined #gluster
12:09 spandit_ joined #gluster
12:09 darshan joined #gluster
12:09 shubhendu__ joined #gluster
12:09 dusmantkp_ joined #gluster
12:10 vpshastry joined #gluster
12:10 sac`away` joined #gluster
12:10 kdhananjay joined #gluster
12:10 lalatenduM_ joined #gluster
12:10 itisravi_ joined #gluster
12:10 spandit__ joined #gluster
12:10 dusmantkp__ joined #gluster
12:10 kasturi_ joined #gluster
12:10 RameshN_ joined #gluster
12:10 psharma__ joined #gluster
12:10 kshlm joined #gluster
12:10 vikumar__ joined #gluster
12:10 hchiramm__ joined #gluster
12:11 ndarshan joined #gluster
12:11 prasanth|offline joined #gluster
12:11 shubhendu joined #gluster
12:23 prasanthp joined #gluster
12:25 edward1 joined #gluster
12:25 edward1 joined #gluster
12:26 bala1 joined #gluster
12:27 chirino joined #gluster
12:44 rwheeler joined #gluster
12:52 julim joined #gluster
13:00 Ark joined #gluster
13:03 rwheeler joined #gluster
13:07 kanagaraj joined #gluster
13:08 mdavidson joined #gluster
13:08 ctria joined #gluster
13:08 rjoseph joined #gluster
13:09 japuzzo joined #gluster
13:16 dusmantkp__ joined #gluster
13:20 kanagaraj joined #gluster
13:22 realdannys joined #gluster
13:25 tdasilva joined #gluster
13:33 coredump joined #gluster
13:37 jmarley joined #gluster
13:37 jmarley joined #gluster
13:43 mjsmith2 joined #gluster
13:44 hagarth joined #gluster
13:46 davinder16 joined #gluster
13:47 plarsen joined #gluster
13:49 plarsen joined #gluster
13:57 _Bryan_ joined #gluster
13:58 glusterbot New news from newglusterbugs: [Bug 962169] prove ./tests/basic/rpm.t fails on non x86_64 architectures <https://bugzilla.redhat.com/show_bug.cgi?id=962169>
14:10 ramteid joined #gluster
14:11 jobewan joined #gluster
14:15 mortuar joined #gluster
14:15 andreask joined #gluster
14:19 kanagaraj joined #gluster
14:20 ctria joined #gluster
14:21 wushudoin joined #gluster
14:22 andreask joined #gluster
14:23 suj joined #gluster
14:23 suj left #gluster
14:40 andreask1 joined #gluster
14:48 bala1 joined #gluster
14:48 vpshastry joined #gluster
14:49 mambru joined #gluster
14:50 mambru n00b question
14:50 mambru I HAD a gluster setup
14:51 mambru and accidentally reinstalled all the nodes but one
14:51 mambru the config in /var/lib/glusterd is gone for all but one of them
14:51 mambru the bricks are still there
14:51 mambru how can I recreate the volume?
14:52 mambru my plan was to set the uuid of each node back to its old value
14:54 mambru probe the peers and let the magic happen
14:54 mambru will this work?
14:54 [o__o] joined #gluster
14:55 ricky-ticky joined #gluster
15:05 hchiramm_ joined #gluster
15:07 deepakcs joined #gluster
15:10 daMaestro joined #gluster
15:15 ndk joined #gluster
15:16 jdarcy joined #gluster
15:27 burnalot joined #gluster
15:36 JonathanD joined #gluster
15:47 Ark joined #gluster
15:48 ndevos mambru: you might try to copy the /var/lib/glusterd/* to all the servers, and manually create/update the /var/lib/glusterd/peers/* files
15:49 ndevos mambru: the UUID from /var/lib/glusterd/glusterd.info for all the missing servers should be in /var/lib/glusterd/peers/*
15:49 ndevos mambru: /var/lib/glusterd/peers/* does not contain a file for the 'localhost' server
15:49 ndevos mambru: so, some manual work is needed, peer-probing would not work
15:51 ndevos mambru: if all the peer files on all servers look good, you should be able to start glusterd on all servers and the volumes should get back
15:51 * ndevos wishes you good luck!
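
(For reference, each file under /var/lib/glusterd/peers/ is named after the remote server's UUID and holds a few key=value lines; a sketch with made-up values:)

    # /var/lib/glusterd/peers/<uuid-of-remote-server>
    uuid=5e9ae0e9-0000-0000-0000-000000000000
    state=3
    hostname1=server2.example.com
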
15:57 Kins joined #gluster
16:10 Pupeno_ joined #gluster
16:25 Mo_ joined #gluster
16:26 zorgan joined #gluster
16:33 Kins joined #gluster
16:36 rastar joined #gluster
16:38 pureflex joined #gluster
16:51 Kins joined #gluster
17:04 Kins joined #gluster
17:06 mortuar joined #gluster
17:09 plarsen joined #gluster
17:10 dtrainor joined #gluster
17:13 Matthaeus joined #gluster
17:13 dtrainor joined #gluster
17:16 Peter1 joined #gluster
17:19 Kins joined #gluster
17:19 mambru ndevos:thanks it worked!
17:19 mambru I re-created  /var/lib/glusterd/glusterd.info for all the missing peers
17:19 mambru with the info I had
17:20 mambru (had to build some scripts for that)
17:20 mambru probed all the peers
17:20 mambru and the volume info got replicated to all the peers and i managed to start the volume
17:20 mambru working fine
17:20 mambru thaks!
17:20 mambru thanks, sorry :)
17:21 jcsp1 joined #gluster
17:21 ramteid joined #gluster
17:29 Diddi joined #gluster
17:30 jcsp joined #gluster
17:34 sputnik13 joined #gluster
17:35 Kins joined #gluster
17:35 sputnik13 joined #gluster
17:39 TaiSHi dockbram: did you read what I said to you last night ?
17:41 Ark joined #gluster
17:53 vpshastry joined #gluster
17:58 Kins joined #gluster
18:04 Kins joined #gluster
18:10 sputnik13 joined #gluster
18:25 edward1 joined #gluster
18:29 sijis joined #gluster
18:35 sijis i'm seeing a 'failed to get extended attribute trusted.glusterfs.volume-id for brick <brick path>. Reason: No data available'
18:35 sijis are the steps suggested here acceptable http://www.gluster.org/pipermail/gluster-users/2013-August/036989.html ?
18:35 glusterbot Title: [Gluster-users] Replacing a failed brick (at www.gluster.org)
18:35 B21956 joined #gluster
18:38 redbeard joined #gluster
18:45 andreask joined #gluster
18:48 Matthaeus joined #gluster
18:50 JoeJulian sijis: That could mean two things. One is that your brick didn't mount. That's a feature to prevent your volume replicating to your root drive if that happens. The other, as it sounds like is the case, is that you've replaced a failed brick.
18:51 JoeJulian To set the volume-id, get it from another brick: getfattr -n trusted.glusterfs.volume-id -e hex $brick
18:51 JoeJulian then set it on the new brick: setfattr -n trusted.glusterfs.volume-id -v $value_from_above $new_brick
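
(Those two commands put together, with hypothetical brick paths, run as root on the respective servers:)

    # on a server with a healthy brick: read the volume id as hex
    getfattr -n trusted.glusterfs.volume-id -e hex /data/healthy-brick
    # copy the printed 0x... value onto the replacement brick, then start the volume again
    setfattr -n trusted.glusterfs.volume-id -v <value printed above> /data/new-brick
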
18:53 sijis JoeJulian: so this is really odd. on the working node, i'm seeing this: /opt/gluster-data/brick-dotcom1: trusted.glusterfs.volume-id: No such attribute
18:54 sijis doing volume info dotcom1 .. i'm seeing status: started and its looks ok
18:55 JoeJulian I'm not sure why your printer even has that directory.
18:56 * JoeJulian is sarcastic in reference to "node" again...
18:56 JoeJulian @define node
18:56 JoeJulian @dictinary node
18:56 JoeJulian hmm, I thought I had that plugin...
18:56 JoeJulian @google define node
18:57 glusterbot JoeJulian: Node (networking) - Wikipedia, the free encyclopedia: <http://en.wikipedia.org/wiki/Node_(networking)>; Node | Define Node at Dictionary.com: <http://dictionary.reference.com/browse/node>; Node - Definition and More from the Free Merriam-Webster Dictionary: <http://www.merriam-webster.com/dictionary/node>; node - definition of node by The Free Dictionary:
18:57 glusterbot JoeJulian: <http://www.thefreedictionary.com/node>; Node - Computer Networking Definition for Node: <http://compnetworking.about.com/od/itinformationtechnology/l/bldef_node.htm>; What is node? - Definition from WhatIs.com - SearchNetworking.com: <http://searchnetworking.techtarget.com/definition/node>; Node Definition - The Tech Terms Computer Dictionary: (1 more message)
18:57 JoeJulian ewww.
18:57 JoeJulian that was far messier than I thought it would be.
18:57 sijis haha. supybot?
18:57 JoeJulian yeah
18:57 JoeJulian Anyway... We like to use the ,,(glossary) terms to avoid confusion.
18:57 glusterbot A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
18:57 sijis i think there's a param to just output 1 result
18:57 JoeJulian @lucky define node
18:58 glusterbot JoeJulian: http://en.wikipedia.org/wiki/Node_(networking)
18:58 JoeJulian yeah, that's probably what I should have done... was being sarcastic anyway though.
18:58 sijis ok.. so server gfs-11, seems to have a working brick
18:58 sijis i just mounted it an added a file.
18:59 JoeJulian did you run that getfattr as root?
18:59 JoeJulian (probably not)
18:59 sijis sudo ...
18:59 JoeJulian That's usually the mistake that I make.
18:59 * JoeJulian raises an eyebrow
19:00 sijis let me try it again
19:02 JoeJulian Try getfattr -m . -d -e hex
19:03 sijis http://paste.fedoraproject.org/114758/42414091/
19:03 glusterbot Title: #114758 Fedora Project Pastebin (at paste.fedoraproject.org)
19:06 sijis JoeJulian: in the brick-dotcom1 running the latest command you sent, i'm getting a "Usage: getfattr ..." response
19:06 JoeJulian sorry, try "getfattr -m . -d -e hex /opt/gluster-data/brick-dotcom1/dotcom1"
19:08 sijis hh. i was missing path.
19:08 sijis i did try '.'
19:08 sijis (btw, '.' does not work)
19:12 sijis ok. i'll try your previous suggestion on copying that id from working server to non-working server
19:14 sijis JoeJulian: i just realized. it didn't return a volume-id
19:15 sijis http://paste.fedoraproject.org/114762/14042421/
19:15 glusterbot Title: #114762 Fedora Project Pastebin (at paste.fedoraproject.org)
19:20 JoeJulian did you just upgrade from an older version?
19:21 sijis nope. this is a new brick and volume i'm trying to setup
19:21 sijis version shows 3.4.2
19:21 JoeJulian which is an older version, but should still have the volume-id
19:22 JoeJulian Oh! You followed that email...
19:22 sijis i already have another brick/volume on there that i setup late last week
19:22 sijis well. i was starting to
19:22 JoeJulian "setfattr -x" deletes extended attributes.
19:25 JoeJulian grep volume-id /var/lib/glusterd/vols/$volname/info | sed 's/-//' | sed 's/^/0x/' | xargs setfattr -n trusted.glusterfs.volume-id -v
19:25 JoeJulian hmm....
19:25 JoeJulian grep volume-id /var/lib/glusterd/vols/$volname/info | sed 's/-//' | sed 's/^/0x/' | xargs setfattr -n trusted.glusterfs.volume-id $brickpath -v
19:25 JoeJulian I think that might work
19:25 JoeJulian otherwise...
19:25 JoeJulian grep volume-id /var/lib/glusterd/vols/$volname/info | sed 's/-//' | sed 's/^/0x/' | xargs -I{} setfattr -n trusted.glusterfs.volume-id -v {} $brickpath
19:27 * sijis trying
19:29 sijis i assume the output of this: grep volume-id /var/lib/glusterd/vols/dotcom1/info | sed 's/-//' | sed 's/^/0x/'
19:29 sijis should not be 0xvolumeid=0219575f-9482-4ab8-ac67-bd865cb2df9a but 0x0219575f-9482-4ab8-ac67-bd865cb2df9a
19:29 JoeJulian gah
19:30 sijis (replace ^volumeid with 0x)
19:30 JoeJulian grep volume-id /var/lib/glusterd/vols/$volname/info | cut -d= -f 2 | sed 's/-//g' | sed 's/^/0x/' | xargs -I{} setfattr -n trusted.glusterfs.volume-id -v {} $brickpath
19:30 JoeJulian or that
19:30 JoeJulian grep volume-id /var/lib/glusterd/vols/$volname/info | sed 's/-//g' | sed 's/^volume-id=/0x/' | xargs -I{} setfattr -n trusted.glusterfs.volume-id -v {} $brickpath
19:31 JoeJulian Also, note the "g". I forgot to delete ALL the -
19:31 sijis ahh. ok. retries
19:33 sijis alright i see a value now
19:33 sijis trusted.glusterfs.volume-id=0x0219575f94824ab8ac67bd865cb2df9a
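
(The working form of that pipeline with shell variables, plus a verification step; volname and brickpath here are placeholders, not verbatim from the log:)

    volname=myvol; brickpath=/data/brick
    grep volume-id /var/lib/glusterd/vols/$volname/info | cut -d= -f2 | sed 's/-//g' | sed 's/^/0x/' \
        | xargs -I{} setfattr -n trusted.glusterfs.volume-id -v {} $brickpath
    # confirm the xattr is now set
    getfattr -n trusted.glusterfs.volume-id -e hex $brickpath
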
19:35 sijis goes retries the swap stuff from earlier
19:43 redbeard joined #gluster
19:45 sijis
19:48 sijis JoeJulian: so, i did the step: setfattr -n trusted.glusterfs.volume-id -v 0x0219575f94824ab8ac67bd865cb2df9a /opt/gluster-data/brick-dotcom1
19:50 sijis still seeing Failed to get extended attribute trusted.glusterfs.volume-id for brick dir /opt/gluster-data/brick-dotcom1/dotcom1. Reason : No data available
19:51 JoeJulian You set it on /opt/gluster-data/brick-dotcom1 but your brick is /opt/gluster-data/brick-dotcom1/dotcom1
19:53 sijis i thought i setup brick-dotcom as brick, and dotcom1 as volume
19:53 * sijis scratches head
19:53 JoeJulian gluster volume info will tell you for sure
19:54 JoeJulian (but the error tells you as well)
19:54 sijis it does show dotcom1 as it
19:54 sijis wtf am i doing!
19:55 JoeJulian Sounds like you need more caffeine. ;)
19:56 sijis everything i've touched today has been borked ;/
19:56 sijis its just one of those days
19:56 sijis should i delete that attr from the brick-dotcom?
19:57 sijis volume start: dotcom1: failed -- ?? :(
19:57 * sijis looking at logs
19:57 JoeJulian You probably should, yes.
19:57 JoeJulian Otherwise it'll come back to bite you in the future.
19:58 sputnik13 joined #gluster
19:59 gildub joined #gluster
19:59 [o__o] joined #gluster
20:00 [o__o] joined #gluster
20:01 lmickh joined #gluster
20:03 * TaiSHi waves at JoeJulian
20:03 JoeJulian o/
20:03 sputnik13 joined #gluster
20:04 TaiSHi Mind if I bug you a bit today ?
20:04 TaiSHi Like, 10 minutes tops
20:04 JoeJulian If I did mind, I wouldn't be here. :D
20:04 JoeJulian Or I'd ban you... ;)
20:07 TaiSHi Now I feel scared
20:07 TaiSHi Well seems our infrastructure is near to failure (my day job's )
20:07 theron joined #gluster
20:07 JoeJulian :(
20:08 dtrainor hmmm, i'm trying to 'gluster volume set' and my changes aren't sticking, I don't see them in /etc/gluster/glusterd.vol
20:08 dtrainor Nothing in the logs to indicate why
20:09 sijis JoeJulian: is /var/log/glusterfs/bricks/ the place to see?
20:09 JoeJulian dtrainor: cli doesn't make any changes to that file.
20:09 dtrainor oh.  i didn't know that.  where are these 'set' changes stored persistently?
20:09 JoeJulian /var/lib/glusterd
20:09 dtrainor oh, i see them through 'gluster volume info'
20:09 dtrainor got it.  thanks.
20:10 TaiSHi Hmm, seems I can't bother you right NOW :P something might explode here
20:10 TaiSHi brb
20:10 Pupeno joined #gluster
20:11 mjsmith2 joined #gluster
20:11 JoeJulian dtrainor: I didn't realize you were in Phoenix. We could have had lunch when I was down there.
20:12 JoeJulian next time...
20:12 sijis JoeJulian: where would i find the log of why it didn't start?
20:13 ghenry joined #gluster
20:14 JoeJulian either /var/log/glusterfs/bricks/ or /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
20:14 dtrainor Ah, damn!  Next time for sure, that would be great.  How often are you out here?
20:15 JoeJulian dtrainor: as seldom as possible... hehe. Especially in summer heat. I work for IO now so I can go down whenever I want to work from the office.
20:17 Philambdo joined #gluster
20:19 facemangus joined #gluster
20:19 dtrainor Oh cool.  I worked for a few places that had space in the Princess Dr. location.
20:19 facemangus Just set up a brand new gluster cluster (god it feels good to run that first copy test). One thing I am wondering, will it auto gluster volume start? One of our older servers has a cron hack that checks if it is started and starts it if it isn't, is this necessary? I can't find anything on it so I considered ignoring it. Everything else is automated perfectly fine
20:20 facemangus only worried because it is production and if the volume doesn't start on boot...
20:20 sijis JoeJulian: this is what i see in those files http://paste.fedoraproject.org/114775/14042460/
20:20 facemangus *will be production
20:20 glusterbot Title: #114775 Fedora Project Pastebin (at paste.fedoraproject.org)
20:20 JoeJulian facemangus: "volume start" is a one-time operation unless you "volume stop" again.
20:21 facemangus awesome
20:21 Ark joined #gluster
20:21 facemangus I have never worked with 3.5 only... lets say archaic versions that I guess required more teasing to keep running
20:21 JoeJulian Hey, I started in the 2.0 days myself.
20:21 facemangus that and the old admin was an idiot
20:22 plarsen joined #gluster
20:23 facemangus JoeJulian: I am pretty sure you have even helped me deal with pre-3.0 brain splits before
20:23 facemangus I think it was you
20:23 sijis JoeJulian: i was trying a few things like volume heal dotcom1 but it just said 'volume not started'
20:24 andreask joined #gluster
20:24 JoeJulian ~pasteinfo | sijis
20:24 glusterbot sijis: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
20:25 facemangus the one version I have to maintain:
20:25 facemangus glusterfs 2.0.9 built on Apr 11 2010 20:45:02
20:25 facemangus client won't let me rebuild it yet
20:26 facemangus even though it irreparably failed a while ago
20:26 JoeJulian omg... client needs to overcome irrational fear and face the very real ones.
20:27 facemangus well they actually have good reason
20:27 facemangus but luckily
20:27 facemangus that reason is about to dry up
20:27 facemangus its for a school district
20:27 sijis JoeJulian: http://paste.fedoraproject.org/114779/04246412/
20:27 glusterbot Title: #114779 Fedora Project Pastebin (at paste.fedoraproject.org)
20:27 facemangus and they have to wait until final grades + paperwork + school out etc
20:27 JoeJulian I disagree. There is no good reason for maintaining 2.0
20:27 facemangus There is also no way to bring it offline for some needed maintenance either
20:27 facemangus :( it's not my server
20:27 JoeJulian You can take those same bricks and create a volume using them on a current version.
20:28 facemangus I wanted them to add new block devices for the bricks, so I can run xfs
20:28 JoeJulian sijis: line 4 should say started... "gluster volume start dotcom1"
20:28 facemangus but they don't know what xfs is and thus are afraid
20:29 JoeJulian Well, that's a sales problem.
20:29 dtrainor bah.  docs say to use host:port:/volume to use a specific port number.  When I try to mount that, I get "DNS resolution failed on host .....:"  'man mount.glusterfs' doesn't give any mount options for specifying alternate ports.
20:29 JoeJulian stupid docs...
20:29 dtrainor haha
20:29 dtrainor hey, i did my homework...
20:30 sijis JoeJulian: right, when i try to start, i get a 'volume start: dotcom1: failed'
20:30 JoeJulian You have glusterd listening on something other than 24007?
20:30 dtrainor i think that's how 3.5 comes by default
20:30 dtrainor maybbbbeee i should update my 3.4 client....
20:30 JoeJulian maybe.
20:30 sijis JoeJulian: stock. nothing like that changed
20:31 JoeJulian sorry sijis, was referring to dtrainor.
20:31 sijis oh. no prob
20:31 sputnik13 joined #gluster
20:31 JoeJulian sijis: Ok, so it looks like it's still saying the xattr is missing
20:32 dtrainor na, same thing.
20:32 dtrainor lies.  i don't *have* to specify a port number now
20:33 facemangus jeeze, 3.5 is so slick
20:33 JoeJulian dtrainor: yeah, I just confirmed that. 24007 still.
20:33 julim joined #gluster
20:34 Peter1 found an issue on 3.5
20:35 JoeJulian dtrainor: If you *had* changed the port for glusterd (would do that in /etc/glusterfs/glusterd.vol) the mount option is "server-port"
20:35 Peter1 volume quota vol1 limit-usage /path 5GB, df -h /path 5GB
20:35 JoeJulian So the documentation is wrong.
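
(A hedged illustration of the server-port option JoeJulian mentions; host, port and volume are hypothetical, and it is only needed if glusterd was moved off 24007 in /etc/glusterfs/glusterd.vol:)

    mount -t glusterfs -o server-port=24008 gfs01:/myvol /mnt/myvol
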
20:35 Peter1 but df -h /path/subpath saw the entire capacity of the volume
20:36 JoeJulian Peter1: nifty feature... hehe. Can you please file a bug report?
20:36 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
20:36 obelix_ joined #gluster
20:36 theron joined #gluster
20:36 dtrainor thanks JoeJulian
20:36 dtrainor i like 3.5, a lot.
20:36 sijis JoeJulian: i'll dump the attr again
20:37 obelix_ hi every body
20:37 * JoeJulian still isn't convinced.
20:37 obelix_ i am new with gluster
20:37 obelix_ so just a little question
20:37 obelix_ may i share a gluster volume using iscsi?
20:38 JoeJulian obelix_: I think you probably could using the block device translator...
20:39 obelix_ :-|
20:39 JoeJulian http://www.gluster.org/category/block-device-translator/
20:41 obelix_ thanx!
20:41 obelix_ :)
20:42 JoeJulian obelix_: out of curiosity, what's your use case?
20:42 obelix_ i have 2 esxi servers
20:42 Peter1 https://bugzilla.redhat.com/show_bug.cgi?id=1115197
20:42 glusterbot Bug 1115197: medium, unspecified, ---, rgowdapp, NEW , Directory quota does not apply on it's sub-directories
20:42 obelix_ actally  both are connected to a single san
20:42 obelix_ via iscsi
20:42 JoeJulian obelix_: Most people juse use the native nfs
20:43 obelix_ ok,do you think the performance beteewn nfs vs iscsi is similar?
20:46 JoeJulian I would expect so. TBQH, I would never use esxi since it forces you to use hacks to get access to your images instead of a more reliable method.
20:46 JoeJulian but that's just me.
20:47 obelix_ i understand you
20:48 obelix_ actually i try to move to another solution
20:48 obelix_ i have a xen server 6.2 and i'm trying to learn from it so i can migrate
20:50 sijis JoeJulian: gfs-11 is working, but i'm not seeing the trusted.glusterfs.volume-id attr. gfs-12 isn't working but it does have that attr. the output of the earlier grep are the same
20:52 JoeJulian gfs-11 is working because the brick was still running when you deleted the volume-id.
20:52 JoeJulian It doesn't periodically re-check.
20:52 JoeJulian Unless that's why the volume won't start now... wow, that's really screwed up... ;)
20:52 sijis so shoiuld i add that attr to gfs-11?
20:52 JoeJulian yes
20:53 sijis ok. added
20:54 JoeJulian see if it will start now...
20:55 TaiSHi YAY, found the issue, Layer8/pebcak
20:56 TaiSHi site manager decided that 6 pm was a good time as any to truncate a really, really big table
20:59 sijis JoeJulian: nope.
20:59 sijis volume start: dotcom1: failed
20:59 glusterbot New news from newglusterbugs: [Bug 1115199] Unable to get lock for uuid, Cluster lock not held <https://bugzilla.redhat.com/show_bug.cgi?id=1115199> || [Bug 1115197] Directory quota does not apply on it's sub-directories <https://bugzilla.redhat.com/show_bug.cgi?id=1115197>
21:00 jtdavies joined #gluster
21:01 sijis JoeJulian: i'm seeing this in the logs 'opt/gluster-data/brick-dotcom1/dotcom1 or a prefix of it is already part of a volume' .. i've seen that before and your blog post fixes it for me :)
21:01 glusterbot sijis: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
21:01 sijis let me tackle that
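
(The gist of the linked fix, for reference; only appropriate when the brick directory really is being reused, and $brick here is a placeholder for the brick path:)

    setfattr -x trusted.glusterfs.volume-id $brick
    setfattr -x trusted.gfid $brick
    rm -rf $brick/.glusterfs
    # then retry the volume create/start
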
21:03 srjb joined #gluster
21:04 jtdavies Hello? I have a quick question regarding Gluster setup. I am currently running two stateless CentOS nodes through Warewulf, each of which has Gluster set up. The bricks are set up on hard drives attached to the nodes; however, the volume configuration is wiped out on reboot. Is there an easy way to reconstruct the volume from the bricks, guaranteeing
21:04 jtdavies that they will be linked up properly? Will clearing the xattrs on each brick and simply running [gluster volume create gv0 replica 2 n0:/export/sda1/brick0 n1:/export/sda1/brick1] again always put the bricks together correctly?
21:04 srjb Are changelogs used in afr written to disk or only in memory?
21:09 sijis JoeJulian: yay! fixed. i think adding that attr on gfs-11 helped. as i started seeing the /prefix/ error and i knew how to deal with that
21:09 sijis JoeJulian: although, i have no clue how this brick got into this state.
21:15 ctria joined #gluster
21:19 sijis JoeJulian: thanks again for the help.
21:20 * sijis owe you a beer
21:29 TaiSHi JoeJulian: Let me know when you have a couple minutes so I can ask you some advice on an ongoing migration
21:33 codex joined #gluster
21:46 edong23 joined #gluster
21:47 andreask joined #gluster
21:50 semiosis TaiSHi: don't ask to ask, just ask
21:50 TaiSHi That's 3 ask right there!
21:51 TaiSHi Now I fell into a mind pool because I feel cornered :P
21:52 TaiSHi If I have 3 masters replicated and mount the volume in each one
21:52 TaiSHi Will they read from a remote or always from their local brick ?
21:52 TaiSHi (all replicated, no distributed)
21:53 semiosis i think it automatically uses the local brick but not sure about that
21:53 semiosis you could test it
21:53 semiosis watch the tcp connections to the brick ,,(ports)
21:53 glusterbot glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
21:53 semiosis from the client ,,(processes)
21:53 glusterbot The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal).
21:53 semiosis and see where the data comes from
21:53 bennyturns joined #gluster
21:54 semiosis or if it's a quiet network otherwise, you could just look at the blinkenlights on the switch
21:54 TaiSHi Don't think there will be much latency anyway, they're all on the local network
21:54 TaiSHi Not possible sadly, it'll be on digital ocean VMs :P
21:54 semiosis right, then tcpdumping on the brick ports
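
(One way to do the check semiosis suggests, from the client side; the interface name is an assumption, and brick ports start at 49152 on 3.4+:)

    # see which servers the fuse client holds connections to
    ss -tnp | grep glusterfs
    # or watch actual read traffic arriving at one brick port on a server
    tcpdump -i eth0 -nn port 49152
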
22:07 jcsp1 joined #gluster
22:38 sputnik13 joined #gluster
22:50 mortuar joined #gluster
22:54 theron joined #gluster
22:58 fidevo joined #gluster
23:00 glusterbot New news from newglusterbugs: [Bug 1075611] [FEAT] log: enhance gluster log format with message ID and standardize errno reporting <https://bugzilla.redhat.com/show_bug.cgi?id=1075611>
23:08 StarBeast joined #gluster
23:09 TaiSHi semiosis: sorry, something exploded around here
23:09 theron joined #gluster
23:12 StarBeast Hi all. I am using gluster as openstack storage. When VMs write something to their disks or volumes, gluster starts a healing process. I always see some files not in sync when I do gluster heal info. Sometimes it even corrupts volumes or VMs. Are there some special params I have to change to make it more robust? I am using gluster 3.5 on CentOS 6.5.
23:12 jcsp joined #gluster
23:20 mortuar joined #gluster
23:32 JoeJulian StarBeast: It's always going to show files in the "heal info"; that doesn't mean they're healing, it means they have extended attributes that mark them as having pending changes. There's a subtle difference.
23:33 TaiSHi JoeJulian: Yesterday you said to calculate based on risk
23:33 JoeJulian yes
23:34 JoeJulian @lucky uptime calculation formula
23:34 glusterbot JoeJulian: http://monitive.com/blog/2012/02/2-approaches-on-uptime-calculation/
23:34 StarBeast JoeJulian: Thanks for the info. But it is also corrupting my volumes and images sometimes. How can I debug the root cause of it? I see no network or disk errors or anything.
23:34 TaiSHi I currently plan to have 1 balancer + n webservers
23:35 TaiSHi + 1 (+) webservers automatically created on demand based on load
23:35 TaiSHi I still can't find the best approach
23:36 StarBeast I have 10Gbe network with 15k disks on all nodes (currently I have only 4), all configs and hardware are absolutely the same.
23:36 TaiSHi Was thinking a fully replicated cluster between lb + webservers and let the auto-fired webservers act as clients
23:36 JoeJulian http://www.eventhelix.com/realtimemantra/faulthandling/system_reliability_availability.htm#.U7NGBHVdVvA
23:36 glusterbot Title: System Reliability and Availability Calculation (at www.eventhelix.com)
23:37 TaiSHi JoeJulian: if I mount a volume, will it use my local brick ?
23:37 TaiSHi Or act as a client?
23:38 JoeJulian You always have to mount the volume. The machine you mount the volume on will be a client. It may also be a server, but that's not a necessity.
23:39 srjb joined #gluster
23:40 JoeJulian Your client will connect to all the servers that are part of the volume. This is what gives parallel reliability.
23:40 JoeJulian ... or at least can give parallel reliability as long as there's some level of replication.
23:40 TaiSHi And it'll use them with whatever criteria it has, right ?
23:40 TaiSHi I want to have full replication on all servers
23:40 JoeJulian Then you want to waste cpu, io, and network time.
23:41 TaiSHi Then what do you suggest? Right now I'm looking at the smallest site that will be 3 servers (lb + 2 ws)
23:42 TaiSHi Yes, I saw your post yesterday, basically 1 node can fail (each node will contain 2 bricks)
23:42 JoeJulian Let's say you have 100 servers and you replicate the same file to all 100. When you start to open that file, a lookup function is called. That lookup function triggers a check that ensures you're not reading from a broken copy of the file. To do that, it queries all 100 replicas to see if any of them think they have writes that are pending for the other replicas.
23:43 JoeJulian What do I suggest? What is your SLA requirement?
23:43 JoeJulian (or OLA)
23:43 TaiSHi I have none, this is an important project for me since I'm helping out a friend's company
23:44 TaiSHi And was looking for the best solution so the webservers can have the same data at all times w/o using rsync
23:44 JoeJulian Then define a need. Do you require 2 nines of yourself? 4 nines? 5?
23:44 TaiSHi Perhaps gluster was an overkill
23:44 TaiSHi 4 would be awesome :) (I suck at project management lol)
23:44 JoeJulian Hehe
23:44 mortuar joined #gluster
23:45 JoeJulian So that's 52 minutes of downtime a year.
23:45 TaiSHi Well coming from ~1-2 hours a day lol...
23:45 JoeJulian Now... What's the availability of any single storage server (regardless of whatever other operations they're performing).
23:46 TaiSHi They're Digital Ocean, they didn't provide that info but so far I've had no issues with currently created ones
23:47 JoeJulian DO's SLA is 3 nines
23:48 TaiSHi ~ 9 hours / year
23:48 * TaiSHi is looking at whiskeypedia
23:49 JoeJulian To get 4 nines from that, replica 2 is sufficient. That'll actually give you 6 nines.
23:50 JoeJulian 31 seconds/year
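
(The arithmetic behind those numbers, as a quick sketch; 0.999 is DO's stated SLA and the replicas are assumed to fail independently:)

    # availability of N replicas in parallel = 1 - (1 - A)^N
    awk 'BEGIN { a = 0.999;
                 printf "replica 2: %.6f (~%.0f s/yr down)\n", 1 - (1-a)^2, (1-a)^2 * 365.25*24*3600;
                 printf "replica 3: %.9f\n", 1 - (1-a)^3 }'
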
23:50 TaiSHi JoeJulian: tbh, infrastructure per site will grow (automatically with auto-scaling) only by 1 or 2 servers
23:50 TaiSHi But I'd need 2n servers, right? Can't do with 3
23:51 JoeJulian I would manage web servers separately from storage if I were doing it. They're going to reach load capacity separately.
23:52 JoeJulian And the usual crap about putting caches as close to the user as possible to avoid even hitting storage if you can.
23:52 JoeJulian Ram is way faster than disks.
23:52 TaiSHi Yeah, the load balancer acts as proxy cache and we plan on using cdn for static content
23:53 chirino joined #gluster
23:53 TaiSHi Yeah my idea was to separate storage from other servers but it would imply more money upfront and, well, their company doesn't want to
23:54 JoeJulian You could dual purpose the initial servers. You'll more likely than not need to add web servers before you need more storage servers.
23:55 JoeJulian Especially since it sounds like you're just using storage for replicating the application.
23:56 TaiSHi Application and just a little content uploaded by editors / users (like, news and stuff)
23:56 TaiSHi Sorry, english isn't my native language. In your sentence "You'll more likely..." you mean I'll probably need webservers before storage?
23:57 TaiSHi If so, that's right, we plan on webservers to grow dynamically based on current load
23:57 JoeJulian Go with replica 3 and 9 nines of availability. Should be plenty. Watch your storage load. If you're maxing out io for a very small subset of files, then more replication may be beneficial. If you max out io for a very wide set of files, distribution is the key.
23:58 JoeJulian Your interpretation of my sentence is correct.
23:58 TaiSHi Sec, let me give you a more accurate file count for this particular site
23:59 JoeJulian The file count is probably not all that relevant as compared to actual use.
23:59 TaiSHi So many variables I've never pondered before
