IRC log for #gluster, 2016-09-28

All times shown according to UTC.

Time Nick Message
00:01 wushudoin joined #gluster
00:19 barajasfab joined #gluster
00:35 kimmeh joined #gluster
00:38 plarsen joined #gluster
00:42 natarej__ joined #gluster
00:43 suliba joined #gluster
00:47 unforgiven512 joined #gluster
00:52 shdeng joined #gluster
00:53 om joined #gluster
01:15 scooby2 joined #gluster
01:15 Pupeno joined #gluster
01:19 kramdoss_ joined #gluster
01:46 barajasfab joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:58 harish joined #gluster
02:07 misc joined #gluster
02:19 caitnop joined #gluster
02:29 natarej__ joined #gluster
02:36 kimmeh joined #gluster
03:03 cliluw joined #gluster
03:05 barajasfab joined #gluster
03:06 gem_ joined #gluster
03:09 Gambit15 joined #gluster
03:16 magrawal joined #gluster
03:31 kramdoss_ joined #gluster
04:01 atinm joined #gluster
04:06 ppai joined #gluster
04:08 itisravi joined #gluster
04:17 kdhananjay joined #gluster
04:33 Guest85806 joined #gluster
04:37 kimmeh joined #gluster
04:44 nbalacha joined #gluster
04:53 rafi joined #gluster
05:03 mhulsman joined #gluster
05:03 rafi joined #gluster
05:13 apandey joined #gluster
05:15 satya4ever joined #gluster
05:15 prasanth joined #gluster
05:17 ndarshan joined #gluster
05:17 itisravi joined #gluster
05:24 nbalacha joined #gluster
05:28 om joined #gluster
05:29 ramky joined #gluster
05:30 ppai joined #gluster
05:31 devyani7_ joined #gluster
05:31 devyani7_ joined #gluster
05:33 aravindavk joined #gluster
05:34 ankitraj joined #gluster
05:39 Muthu joined #gluster
05:43 nishanth joined #gluster
05:49 nbalacha joined #gluster
05:50 hgowtham joined #gluster
05:56 Lee1092 joined #gluster
05:57 RameshN joined #gluster
05:57 nbalacha joined #gluster
05:58 hchiramm joined #gluster
06:00 skoduri joined #gluster
06:06 Bhaskarakiran joined #gluster
06:07 kotreshhr joined #gluster
06:11 Caveat4U joined #gluster
06:13 hchiramm joined #gluster
06:18 spalai joined #gluster
06:19 jiffin joined #gluster
06:20 jwd joined #gluster
06:21 [diablo] joined #gluster
06:31 Pupeno joined #gluster
06:32 derjohn_mobi joined #gluster
06:34 karnan joined #gluster
06:35 jtux joined #gluster
06:37 kimmeh joined #gluster
06:42 hchiramm joined #gluster
06:44 [diablo] joined #gluster
06:44 nbalacha joined #gluster
06:45 kovshenin joined #gluster
06:48 rafi joined #gluster
06:57 k4n0 joined #gluster
06:59 marlinc joined #gluster
07:14 deniszh joined #gluster
07:24 rastar joined #gluster
07:26 msvbhat joined #gluster
07:27 marlinc joined #gluster
07:35 jkroon joined #gluster
07:38 owlbot joined #gluster
07:40 Muthu joined #gluster
07:51 kimmeh joined #gluster
07:51 Gnomethrower joined #gluster
08:03 devyani7_ joined #gluster
08:11 mhulsman1 joined #gluster
08:14 _nixpanic joined #gluster
08:14 _nixpanic joined #gluster
08:17 k4n0 joined #gluster
08:18 karnan_ joined #gluster
08:19 hchiramm joined #gluster
08:22 jiffin joined #gluster
08:22 itisravi joined #gluster
08:24 marlinc joined #gluster
08:27 [diablo] joined #gluster
08:31 karthik joined #gluster
08:32 gvandewe1er Hi, is it possible to de-replicate a gluster volume (from 3x2 to 3x1)? If not, would it be possible to shut down all replica bricks, create a new 3x1 volume and copy all data to the new volume from the 'crippled' existing volume? Then afterwards remove the original volume and add its bricks to the new volume.
08:34 Slashman joined #gluster
08:47 Bhaskarakiran joined #gluster
08:48 derjohn_mob joined #gluster
08:51 Muthu joined #gluster
08:52 ashiq joined #gluster
08:54 ppai joined #gluster
08:54 rastar joined #gluster
08:55 kotreshhr joined #gluster
09:02 Bhaskarakiran joined #gluster
09:06 prth joined #gluster
09:06 hackman joined #gluster
09:08 k4n0 joined #gluster
09:23 itisravi joined #gluster
09:27 riyas joined #gluster
09:33 lanning joined #gluster
09:38 skoduri_ joined #gluster
09:41 jkroon joined #gluster
09:45 nishanth joined #gluster
09:49 petan joined #gluster
09:54 Wizek joined #gluster
09:56 aravindavk joined #gluster
09:59 kimmeh joined #gluster
10:05 derjohn_mob joined #gluster
10:07 Lee1092 joined #gluster
10:21 jkroon gvandewe1er, why do you want to take away the replicate?
10:22 jkroon more out of curiosity than to be helpful - I'm afraid I've never reduced the replica factor, but when adding bricks some of the other options can be updated.
10:22 jkroon replica is perhaps one of those.
10:23 ankitraj atinm, ping
10:23 glusterbot ankitraj: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
10:23 jkroon gvandewe1er, https://www.gluster.org/pipermail/gluster-users/2013-June/013347.html might work.
10:23 glusterbot Title: [Gluster-users] Ability to change replica count on an active volume (at www.gluster.org)
10:24 atinm ankitraj, hi
10:24 ppai joined #gluster
10:24 ankitraj do I need to abandon the patch http://review.gluster.org/#/c/15582/1
10:24 glusterbot Title: Gerrit Code Review (at review.gluster.org)
10:25 ankitraj since we can't relax the hostname check, it's against the RFC rules
10:27 atinm ankitraj, yes and probably close the bug too with the review comment copied into the bugzilla?
10:28 ankitraj ok..
10:31 atinm joined #gluster
10:31 nishanth joined #gluster
10:31 ankit-raj joined #gluster
10:32 devyani7_ joined #gluster
10:34 jkroon gvandewe1er, https://www.gluster.org/pipermail/gluster-users/2013-June/013237.html
10:34 glusterbot Title: [Gluster-users] Removing bricks from a replicated setup completely brakes volume on Gluster 3.3 (at www.gluster.org)
10:35 jkroon the commands you need are there - whether it'll work and do what you expect I don't know.
10:37 ankitraj joined #gluster
10:43 kimmeh joined #gluster
10:46 kotreshhr joined #gluster
10:49 ahino joined #gluster
10:54 msvbhat joined #gluster
11:00 DoubleJ joined #gluster
11:08 kotreshhr joined #gluster
11:10 Philambdo joined #gluster
11:12 DoubleJ In the logs of a newly added brick in a distribute-replicate setup, I see a lot of "setting xattrs on <filename> failed (Operation not supported)" entries. These are marked as errors. Is this critical? Also, glusterfs(d) consumes multiple CPU cores.
11:13 DoubleJ (gluster 3.4 on Ubuntu 12.04)
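A quick way to check whether the brick's filesystem accepts extended attributes - the usual cause of "Operation not supported" when setting xattrs - is to set a throwaway attribute directly on the brick path. A minimal sketch, assuming a hypothetical brick at /bricks/brick1 and the attr tools installed; run it on the brick's server:

    setfattr -n user.test -v hello /bricks/brick1   # set a test xattr on the brick directory
    getfattr -n user.test /bricks/brick1            # read it back
    setfattr -x user.test /bricks/brick1            # clean up
    # on ext3/ext4 bricks, make sure the mount supports extended attributes (e.g. user_xattr)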
11:16 msvbhat joined #gluster
11:25 azilian joined #gluster
11:27 aravindavk joined #gluster
11:31 arcolife joined #gluster
11:32 samikshan REMINDER: Gluster community meeting to take place in ~30 minutes on #gluster-meeting
11:33 gvandewe1er jkroon: thanks, the reason is budget: size is more important than redundancy, so it was decided to double the storage by removing the replicas and back up some critical data to an off-site system
11:35 jkroon eesh.  that's fair but how much storage are you talking?
11:35 gvandewe1er now we have 60 TB in replica
11:35 jkroon we're running on SATA drives (Seagate Constellation) and those aren't that expensive.
11:36 jkroon ah ok.  We're talking a rather big volume.
11:36 gvandewe1er yes :-)
11:36 jkroon are the bricks themselves at least on some form of RAID?
11:36 gvandewe1er yeah, all bricks are on raid5 systems
11:38 jkroon redundant power?
11:40 gvandewe1er yup
11:40 cloph from my (very limited) experience: reducing the replica count is easy - use remove-brick with replica <one-lower>; increasing the replica count was not so smooth / didn't work out (on 3.7 with striping) - there, re-adding bricks and increasing the replica count brought the volume down to read-only/split-brain...
11:40 gvandewe1er it's all in racks, two power supplies, one on main, one on generator.
11:41 gvandewe1er cloph: ah, that's good. Thanks for the info.
11:42 [diablo] joined #gluster
11:44 cloph not sure whether gluster had some bookkeeping that got in the way (I was trying to re-add, on the same hosts, the same paths that had previously been part of the replica I removed - reinstalled those hosts and gave them the same UUID, so clean brick directories, but it didn't work out for me)
11:50 gvandewe1er cloph: was that also a distributed system? if so, does gluster decide which replica to retain?
11:50 Muthu joined #gluster
11:50 cloph no, in this case it was only a plain replica one.
11:50 gvandewe1er hmm ok.
11:51 cloph but the replication is always in sequence, i.e. for distributed replica 2: brick1 brick2 brick3 brick4
11:51 cloph brick1 and brick2 would be replicas of each other, and brick3 and brick4 would be another set.
11:51 cloph so you could remove brick2 and brick4
11:51 cloph but not brick3 and brick4
11:52 cloph (well, you could if you tell it to rebalance first)
11:52 cloph but I didn't try that, and of course that doesn't apply to the 'I want more disk space' case, which I read as: the removed bricks will be re-added as additional distribution endpoints
11:52 gvandewe1er cloph: and you can specify multiple bricks (2 and 4) in a single remove-brick statement?
11:54 gvandewe1er cloph: well, we would reinstall the brick servers and start a new volume to have a fresh start. It'll take a while, but it's a good time to upgrade the OS and all
11:54 [diablo] joined #gluster
11:54 gvandewe1er so new volume, receive all data (whole 60 TB copy), reinstall the original remaining bricks, add them to the new volume
11:55 cloph and yes, you should be able to remove multiple bricks at once.
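For the 3x2-to-3x1 reduction discussed above, the remove-brick approach cloph describes would look roughly like this - a sketch with hypothetical host and brick names, assuming the second brick of each replica pair is the one being dropped (check the actual pairing with gluster volume info first):

    gluster volume info VOLNAME      # bricks are listed in replica-set order: (b1,b2) (b3,b4) (b5,b6)
    gluster volume remove-brick VOLNAME replica 1 \
        host2:/bricks/b2 host4:/bricks/b4 host6:/bricks/b6 force
    gluster volume info VOLNAME      # should now report a 3 x 1 distribute volume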
12:00 samikshan REMINDER: Gluster community meeting starting now on #gluster-meeting
12:02 johnmilton joined #gluster
12:03 masber joined #gluster
12:03 johnmilton joined #gluster
12:05 skoduri_ joined #gluster
12:09 hchiramm joined #gluster
12:10 johnmilton joined #gluster
12:16 unclemarc joined #gluster
12:21 hackman joined #gluster
12:22 magrawal joined #gluster
12:26 B21956 joined #gluster
12:29 rastar joined #gluster
12:30 ira joined #gluster
12:31 itisravi joined #gluster
12:37 pdrakewe_ joined #gluster
12:45 rwheeler joined #gluster
12:46 ppai joined #gluster
12:51 kotreshhr joined #gluster
12:58 ahino1 joined #gluster
13:03 mhulsman joined #gluster
13:04 snila joined #gluster
13:05 mhulsman joined #gluster
13:06 mhulsman1 joined #gluster
13:11 bowhunter joined #gluster
13:15 harish joined #gluster
13:20 gem_ joined #gluster
13:35 skylar joined #gluster
13:38 jiffin1 joined #gluster
13:38 spalai left #gluster
13:41 anrao joined #gluster
13:41 anrao amye: ping
13:41 glusterbot anrao: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
13:42 anrao amye: this is regarding the gnome Outreachy program
13:48 DoubleJ joined #gluster
13:53 Caveat4U joined #gluster
13:54 Philambdo joined #gluster
13:54 skoduri_ joined #gluster
13:58 kramdoss_ joined #gluster
14:02 plarsen joined #gluster
14:09 k4n0 joined #gluster
14:14 shyam joined #gluster
14:19 atinm joined #gluster
14:20 msvbhat joined #gluster
14:21 kotreshhr joined #gluster
14:30 iopsnax joined #gluster
14:32 nbalacha joined #gluster
14:35 hackman joined #gluster
14:35 satya4ever joined #gluster
14:36 DoubleJ joined #gluster
14:37 DoubleJ hi, does anyone know what causes warnings of the type "inode for the gfid (a6a19f66-dbf6-48f3-92b1-5cef06617068) is not found. anonymous fd creation failed" in the logs?
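One way to see what that gfid refers to is to look it up under the brick's .glusterfs directory, where regular files are hard-linked under the first two pairs of hex digits of the gfid. A sketch, assuming a hypothetical brick at /bricks/brick1; run it on a brick server:

    GFID=a6a19f66-dbf6-48f3-92b1-5cef06617068
    ls -l /bricks/brick1/.glusterfs/a6/a1/$GFID                            # missing here matches the warning
    find /bricks/brick1 -samefile /bricks/brick1/.glusterfs/a6/a1/$GFID    # real path(s), if the file still exists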
14:38 caleb_ joined #gluster
14:40 kkeithley1 joined #gluster
14:41 kkeithley1 left #gluster
14:43 hackman joined #gluster
14:46 caleb_ Hi everybody - I have a quick question that hopefully somebody can chime in on. We want to use gluster as part of our web deployment process: basically 8 replicated web servers. Right now our sandboxes are taking about 10 minutes to sync out about 200 MB of files. Is that a typical time? It seems slow to us. These servers are fast (gigabit network, high-speed SAN disks). Is anybody here doing something like this?
14:48 cloph replica 8 likely is overkill, do you really need 8 copies?
14:53 caleb_ We'd like to have a local copy on each web server. All 8 are public, serving traffic
14:54 caleb_ Are we using this system entirely the wrong way? We just want to sync them out to all the servers automatically
14:54 nathwill joined #gluster
14:57 kpease joined #gluster
15:01 kpease joined #gluster
15:02 BitByteNybble110 joined #gluster
15:09 cloph caleb_: pretty much, yeah - unless you force gluster to use the local peer for lookups, that isn't what gluster will do. It decides which peer to ask for the file based on a computed hash
15:09 cloph so even if you have a local copy, it might connect to another one to fulfill that request.
15:10 cloph and replica is synchronous anyway, so no matter where the file is stored physically
15:12 caleb_ Is telling gluster to use the local peer for lookup not recommended?
15:14 cloph it is pretty pointless I'd say. And for writes it won't help anyway, as the file needs to be written on all replicas before it is considered complete.
15:16 cloph and of course for reads it also needs to retrieve meta-data from all the other replicas to make sure the file is in sync/there is no splitbrain..
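For reference, the "read from the local peer" knob being discussed is a replicate (AFR) volume option; a sketch with a hypothetical volume name, assuming 3.x option names (confirm against gluster volume set help on your version):

    gluster volume set VOLNAME cluster.choose-local on   # prefer the local brick for reads when it is clean
    gluster volume set help | grep -A2 read-subvolume    # related options for pinning reads to a subvolume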
15:16 caleb_ But thats kind of what we want. We want to ensure that all web servers have the exact same copies of the files.
15:16 caleb_ I see..
15:17 caleb_ I appreciate your insight
15:17 wushudoin joined #gluster
15:18 tm__ joined #gluster
15:20 cloph yeah, but for that you don't need to have 8 copies of the file. four copies is enough; if it is the same files that are frequently accessed, then using a cache will avoid having to read the actual content from another peer.
15:21 gem_ joined #gluster
15:21 cloph so it is a question of whether all the metadata checks, waiting for syncs, and having the opportunity to read every file locally is really better than doing fewer metadata checks but needing to transfer the actual file data from another host over the network.
15:21 Caveat4U joined #gluster
15:22 cloph I cannot really help out with that, as it depends on the characteristics of the load (read-to-write ratio, how many different files, how frequently they are accessed, network performance, …) - others with more experience can probably give more hints
15:23 cloph but with replica 8 and a webserver workload (i.e. small files) you should try increasing the client thread count, so it can actually communicate with all replicas at once
15:25 caleb_ Understood. The critical performance is 100% reads (from web traffic). We'll try some of this stuff out, but I'm wondering if we need to pursue a different solution
15:25 caleb_ I appreciate it
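The thread-count and caching suggestions above map onto volume options roughly like this; a sketch with a hypothetical volume name, assuming GlusterFS 3.7 option names:

    gluster volume set VOLNAME client.event-threads 4        # more parallel network threads on clients
    gluster volume set VOLNAME server.event-threads 4        # and on the bricks
    gluster volume set VOLNAME performance.cache-size 256MB  # larger io-cache for frequently read files
    gluster volume set VOLNAME performance.stat-prefetch on  # cache metadata lookups (md-cache)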
15:28 johnmilton joined #gluster
15:47 kotreshhr left #gluster
15:47 ivan_rossi left #gluster
15:57 xavih joined #gluster
15:57 malevolent joined #gluster
15:57 hchiramm joined #gluster
16:00 ttkg joined #gluster
16:09 nbalacha joined #gluster
16:11 tdasilva joined #gluster
16:19 ashp joined #gluster
16:19 ashp Hey guys, I have an awful question and I feel bad for even bringing it here
16:19 ashp We have a small gluster cluster running on 3.5.5 (and it's all broken and in a sad state)
16:20 ashp I'm trying to bring up some ubuntu 16.04 boxes that ship with 3.7.x, and I can't get these to connect back to 3.5.x after enabling some server.allow-insecure option
16:20 ashp [2016-09-28 16:15:25.231545] E [glusterfsd-mgmt.c:1603:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:/qa-lockbox)
16:20 ashp Is this just going to be impossible?  I'm not at all confident we could do a rolling upgrade without the world collapsing
16:24 msvbhat joined #gluster
16:24 Gambit15 joined #gluster
16:31 Vaelatern joined #gluster
16:36 ashp https://gist.github.com/apenney/9dc86359439ac670bc6b67d3a5aac5d9 - the full log in case anyone takes pity
16:36 glusterbot Title: gist:9dc86359439ac670bc6b67d3a5aac5d9 · GitHub (at gist.github.com)
16:36 jkroon ashp, should be possible assuming that your cluster is healthy.
16:36 jkroon you sure it's not a firewall problem?
16:37 jkroon are you able to at least probe the peers so that they can join?
16:37 ashp we're in AWS and it's set to allow 'all traffic' in from this machine.  I can telnet to 24007, etc
16:37 ashp jkroon: I don't know how to probe peers :(
16:37 ashp this is the first time I've ever used gluster I'm afraid
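For the record, peer probing is run from a server that is already in the trusted pool, not from a client; a sketch with a hypothetical hostname:

    gluster peer status                        # list the nodes already in the trusted pool
    gluster peer probe gluster04.example.com   # add a new server node to the pool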
16:37 jkroon have you ever set up gluster from scratch?
16:37 ashp no, I just inherited some horrible mess
16:37 jkroon where exactly are you getting stuck?
16:38 ashp the server is running 3.5.5 (3 nodes) and has been working fine, we've got a bunch of ubuntu 14.04 nodes talking to it
16:38 ashp I'm trying to use 16.04 which ships with gluster 3.7 instead of the 3.5 we're using
16:38 ashp i just wanted to get a client with 3.7 to talk to the 3.5.5 server
16:38 jkroon ok, and you want to migrate that to 3.7 glusterfs
16:38 ashp no, I don't even want to be brave enough to do that yet
16:38 ashp I just wanted to get a 3.7 client talking to it
16:38 ashp literally I'm doing 'mount -a' after adding it to /etc/fstab
16:38 jkroon two strategies, upgrade in-place, or use replace-brick (which I haven't personally done het).
16:38 ashp and then those logs were the output of the attempted mount
16:39 jkroon you looking to replace the glusterfs servers?
16:39 ashp not yet, I'm just looking to have a client running 3.7 talk to the existing cluster
16:39 jkroon for that you don't need to mount anything on the server other than the bricks' underlying storage.
16:39 jkroon ah ok.  what does your fstab entry look like?
16:40 ashp pgluster01.internal.fitzy.co:/qa-lockbox /mnt/lockbox glusterfs defaults,_netdev 0 0
16:40 jkroon ok, can it resolve pgluster01.internal.fitzy.co?
16:42 ashp yes, it even tries the connection
16:43 ashp but fails to fetch the volume key for some reason
16:43 jkroon odd
16:43 ashp all the stuff I saw online was about 3.7.3 changes, and allowing insecure connections
16:43 ashp I've set that on the volume but it made no difference
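For what it's worth, the allow-insecure change usually has two halves - the volume option and a glusterd setting - and glusterd must be restarted for the latter to take effect. A sketch with a hypothetical volume name; no guarantee it resolves a 3.7-client-to-3.5-server mismatch:

    gluster volume set VOLNAME server.allow-insecure on
    # on every server, add the following line inside the "volume management" block
    # of /etc/glusterfs/glusterd.vol, then restart glusterd:
    #     option rpc-auth-allow-insecure on
    service glusterfs-server restart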
16:44 jkroon [2016-09-28 16:31:34.700157] W [socket.c:588:__socket_rwv] 0-glusterfs: readv on 10.1.27.7:24007 failed (No data available)
16:44 ashp yeah, that's so strange, if I telnet to 10.1.27.7 24007 it connects fine
16:45 jkroon gluster volume status; gluster volume info; gluster volume heal ${volname} info; please.
16:45 jkroon on one of the existing server nodes.
16:47 ashp https://gist.github.com/apenney/d3e762aaa4746f5f1e76d504444390fc
16:47 glusterbot Title: gist:d3e762aaa4746f5f1e76d504444390fc · GitHub (at gist.github.com)
16:49 jkroon what happens if you remove the _netdev from the mount options?  i know ... just entertain me there quickly please.
16:49 ashp i'm willing to try anything :) two seconds
16:49 jkroon https://www.gluster.org/pipermail/gluster-users/2015-October/023808.html
16:49 glusterbot Title: [Gluster-users] 3.7 client with 3.5 server (at www.gluster.org)
16:50 jkroon someone else had the same problem as you - no resolution on that mail thread.
16:50 ashp it doesn't help that aws keeps killing instances for being unhealthy as I troubleshoot (autoscaling groups are a pain)
16:50 om joined #gluster
16:50 ashp i'm just waiting for the mount to fail so I can try it
16:51 jkroon i'm not exactly an aws fan to begin with.
16:52 ashp didn't seem to help
16:53 ashp still hangs :/
16:53 ashp I might try forcing 3.6 on, see if that helps
16:53 ashp as upgrading is unlikely :/
16:53 jkroon yea well, take it from me.  glusterfs 3.7 == MUCH IMPROVED over 3.5 and even 3.6.
16:53 ashp I'd love to upgrade to 3.7 but it would be hard to coordinate all the client updates at the same time
16:54 ashp i might setup a 3.7 test cluster and try to connect to it from the 3.5 clients, see if that works at least
16:54 jkroon you WANT to make this happen.  unfortunately I've got to head out so won't be able to assist further.  Based on what I've read now you might actually be in for a bit of trouble.
16:54 jkroon it looks like a 3.5 client will work against a 3.7 server, but not the other way round.
16:54 jkroon I'd suggest you set up the three new servers as a test box.
16:55 jkroon well, create a 3.7 based cluster, then try and use a 3.5 client to mount that volume and perform a few tests to make sure it works to your liking.
16:55 jkroon then perhaps try an in-place upgrade.
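The in-place route is usually done one server at a time, waiting for self-heal to finish between servers. A rough sketch for a Debian/Ubuntu-style server with a hypothetical volume name, assuming the glusterfs-server package; check the official 3.5-to-3.7 upgrade notes for version-specific caveats:

    # on one server at a time:
    service glusterfs-server stop
    killall glusterfsd glusterfs 2>/dev/null   # make sure brick and self-heal daemons are gone
    apt-get install glusterfs-server           # pull in the newer packages from your repo
    service glusterfs-server start
    gluster volume heal VOLNAME info           # wait until no entries remain before moving on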
16:55 ashp yeah, I think I'd have to do that
16:55 jkroon i see your config is a 1x3 - which means you have three copies of everything.
16:56 jkroon anyway, seriously gone now :)
17:10 mhulsman joined #gluster
17:12 derjohn_mob joined #gluster
17:24 kpease joined #gluster
17:26 nathwill joined #gluster
17:28 arcolife joined #gluster
17:30 kimmeh joined #gluster
17:34 om joined #gluster
17:36 hchiramm joined #gluster
17:41 mhulsman1 joined #gluster
17:42 riyas joined #gluster
17:44 kpease joined #gluster
17:45 jiffin joined #gluster
17:48 kpease joined #gluster
17:51 amye joined #gluster
18:01 nathwill joined #gluster
18:04 amye joined #gluster
18:04 om joined #gluster
18:05 gem joined #gluster
18:08 raghu joined #gluster
18:15 msvbhat joined #gluster
18:16 tdasilva joined #gluster
18:17 robb_nl joined #gluster
18:17 nathwill joined #gluster
18:20 bowhunter joined #gluster
18:23 jiffin joined #gluster
18:27 jiffin1 joined #gluster
18:33 kpease joined #gluster
18:54 msvbhat joined #gluster
18:55 kpease joined #gluster
19:04 msvbhat joined #gluster
19:05 k4n0 joined #gluster
19:12 johnmilton joined #gluster
19:13 hackman joined #gluster
19:16 ashp So, if I do a rolling upgrade from 3.5 to 3.7... will the 3.7 nodes be able to peer with the 3.5 ones during the process?
19:22 msvbhat joined #gluster
19:38 amye joined #gluster
19:46 ahino joined #gluster
19:52 bowhunter joined #gluster
19:57 a2 joined #gluster
20:11 msvbhat joined #gluster
20:16 derjohn_mob joined #gluster
20:18 nathwill joined #gluster
20:24 nathwill joined #gluster
20:32 kimmeh joined #gluster
20:57 nathwill joined #gluster
21:06 glusterN00b joined #gluster
21:06 glusterN00b hi all
21:07 glusterN00b anyone around that can help with some geo-replication questions?
21:15 glusterN00b left #gluster
21:16 jgarcia_ joined #gluster
21:17 jgarcia_ is anyone alive !
21:21 misc joined #gluster
21:23 jgarcia_ wtb gluster expert
21:27 jgarcia_ left #gluster
21:27 jgarcia_ joined #gluster
21:32 jgarcia_ hagarth hey do you know who might be able to help me out with some questions regarding geo-replication??
21:38 jgarcia_ left #gluster
22:33 kimmeh joined #gluster
22:41 Caveat4U joined #gluster
22:47 ashp If I add a new peer (is that the right word) to a volume, do the clients that have it mounted magically learn about it without a remount?
22:47 ashp I am thinking of adding 3 new boxes and retiring the 3 old ones I have completely
22:48 harish joined #gluster
22:56 plarsen joined #gluster
23:02 plarsen joined #gluster
23:35 Klas joined #gluster
23:55 jeremyh joined #gluster
23:55 cloph_away joined #gluster
