
IRC log for #gluster, 2016-04-27


All times shown according to UTC.

Time Nick Message
00:08 dfaro joined #gluster
00:08 dfaro hi all
00:12 dav joined #gluster
00:12 dav hi all
00:13 dav i'm david
00:14 dav i've a question pertinent to the new version 3.7.11
00:19 Logos01 Hrm... reducing from replica 3 to replica 2 increased the dd MB/s to about 2x what it was.
00:20 Logos01 ... once. Grr.
00:21 jbrooks joined #gluster
00:23 dav :D
00:25 akay Thanks JoeJulian - I'm seeing a lot of Stale file handles and folders with ?????????? permissions when doing a ls -l listing. I've tried remounting and using use-readdirp=no but they still exist. any ideas?
00:31 dav does anyone know if it's necessary to stop the volume when upgrading glusterfs-server from version 3.7.8 to version 3.7.11? i can't find this info
00:31 dav thanks so much
00:34 Logos01 Okay now this is bizarre. The clients are also on the same host...
00:34 Logos01 When I use one of the brick hosts *AS* the client and perform a mount -t glusterfs localhost:volume /mnt/gluster/volume
00:34 Logos01 And then do my dd command ... I get 1.1GB/s write.
00:34 Logos01 But a non-brick client gets 3.1MB/s write.
00:34 Logos01 That's with all of the optimizations I've been able to pull off.
00:35 Logos01 Does the gluster client have mount options?
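
Logos01's question about client mount options goes unanswered in the log. For reference, a minimal sketch of common glusterfs FUSE mount options; the host and volume names (gfs1..gfs3, myvol) and the mount point are placeholders:

    # mount by hand
    mount -t glusterfs \
        -o backup-volfile-servers=gfs2:gfs3,log-level=WARNING,use-readdirp=no \
        gfs1:/myvol /mnt/gluster/myvol

    # roughly equivalent /etc/fstab entry
    # gfs1:/myvol  /mnt/gluster/myvol  glusterfs  defaults,_netdev,backup-volfile-servers=gfs2:gfs3  0 0
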
00:39 dav anyone?
00:43 Logos01 dav: Safest to assume the answer is yes you ought to do so.
00:48 dav thanks logo01, i supposed  :(
00:48 dav even with upgrades on the same branch, 3.7.x?
00:49 dav Logos01 sorry :D
00:50 Logos01 No idea, sorry
00:50 dav Ok don't worry, thanks so much
00:50 dav for the response
01:08 russoisraeli joined #gluster
01:12 bluenemo joined #gluster
01:24 EinstCrazy joined #gluster
01:53 DV__ joined #gluster
01:54 davpostpunk joined #gluster
01:56 davpostpunk hi all
02:00 davpostpunk i need to know if it's possible to upgrade gluster from 3.7.8 to 3.7.11 without having to stop the volume.
02:02 EinstCra_ joined #gluster
02:05 poing joined #gluster
02:05 cholcombe joined #gluster
02:07 poing Can someone help me understand something about Distributed Striped Glusterfs Volumes?
02:08 poing How are volumes grouped?
02:08 DV joined #gluster
02:09 poing There is mention of create order, but it's not really clear.
02:10 davpostpunk see this link https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/
02:10 glusterbot Title: Setting Up Volumes - Gluster Docs (at gluster.readthedocs.org)
02:11 davpostpunk that link explains all the volume modes
02:11 poing I understand the modes.  But the grouping is not explained clearly.
02:11 poing Let me pose it a different way.
02:12 poing I have 4 servers, 2 bricks on each.
02:12 davpostpunk ah ok sorry
02:13 poing In a replicated scenario (2 replicas across the 8 bricks) - how do I prevent the same data from being stored on a single server?
02:14 davpostpunk what is your gluster version?
02:14 EinstCrazy joined #gluster
02:15 poing The groupings in a Distributed Striped Glusterfs Volume are unclear.  But I want to avoid having the same data on one server.
02:15 poing Version is 3.7.11
02:15 davpostpunk it should do that automatically
02:16 davpostpunk test with the rebalance command
02:18 davpostpunk i have the same version on a 2x2 distributed-replicated setup, also geo-replicated on my side
02:18 poing "Should" is not very convincing.  I hope that data with 2 replicas would end up on different servers, where the bricks are s1:/a s1:/b s2:/a ...
02:19 poing So rebalancing will show the grouping and distribution across the servers?  How the layout is actually implemented?
02:21 davpostpunk :D i'm just a another user :D
02:22 poing Got'cha.
02:22 poing The thing that is missing is the result.  Docs say: "the order in which bricks are specified has a great effect on data protection. Each replica_count consecutive bricks in the list you give will form a replica set, with all replica sets combined into a volume-wide distribute set."
02:23 davpostpunk see this link http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Rebalancing_Volumes, different version but you can get ideas
02:23 poing But I can't see how the sets are defined.
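
For reference, a sketch of how the replica sets fall out of brick order for poing's 4-servers, 2-bricks-each case; the server names (s1..s4) and brick paths here are assumptions. Each group of replica-count consecutive bricks in the create command becomes one replica set, so interleaving servers keeps both copies of any file off a single host:

    # replica 2 across 8 bricks = 4 replica sets of 2 bricks each;
    # consecutive pairs in this list become the sets:
    #   set 1: s1:/bricks/a s2:/bricks/a
    #   set 2: s3:/bricks/a s4:/bricks/a
    #   set 3: s1:/bricks/b s2:/bricks/b
    #   set 4: s3:/bricks/b s4:/bricks/b
    gluster volume create myvol replica 2 \
        s1:/bricks/a s2:/bricks/a s3:/bricks/a s4:/bricks/a \
        s1:/bricks/b s2:/bricks/b s3:/bricks/b s4:/bricks/b

    # 'gluster volume info myvol' then lists the bricks in this order,
    # which is exactly how the sets are grouped
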
02:24 davpostpunk umm, i never had that problem
02:24 davpostpunk but i've seen that the docs lack info about some things
02:25 davpostpunk for example, in my case i need to know whether it's possible to upgrade from 3.7.8 to 3.7.11 without stopping the gluster volume
02:26 davpostpunk you need to know how the data is distributed across the servers, i suppose
02:26 poing Yeah, that would be important to know.
02:27 davpostpunk we feel a little abandoned by the community
02:27 davpostpunk :(
02:27 poing Oh!
02:28 davpostpunk :D
02:28 poing Well, thanks for the input.  I 'hope' it works as expected.  But that does alter my use case.  Thanks!
02:28 davpostpunk only with some issues
02:29 davpostpunk i had so many problems with geo-replication for example, but i finally got it working.
02:29 poing Does the upgrade solve an issue you have?
02:30 poing In production environments, upgrades are not always helpful.  Unless you are looking to resolve some specific issue.
02:31 poing Gotta run, but thanks for your input!
02:31 davpostpunk yeah, i solved a bug on the geo-replication servers with the last upgrade
02:32 davpostpunk but i also need to upgrade the PROD servers
02:33 davpostpunk and i need to know if it's possible to upgrade without stopping the volume :D
02:33 mpietersen joined #gluster
02:33 davpostpunk it's very important
02:33 davpostpunk thanks for asking
02:35 davpostpunk and good luck with your problem
02:35 Logos01 Well this is irritating as all get-out.
02:36 Logos01 4 client-hosts, 3 replica-bricks (not in that list of clients)
02:36 Logos01 Each replica-brick when mounting via the FUSE client gets 1.1GB/s write speed for files inside the volume.
02:36 Logos01 The client-hosts on the other hand get 3MB/s
02:36 Logos01 All 7 machines are on the same ESXi host.
02:37 Logos01 Why the devil is this happening?!
02:40 tswartz left #gluster
02:43 blu_ joined #gluster
02:44 ramteid joined #gluster
02:44 EinstCra_ joined #gluster
02:49 RameshN joined #gluster
02:50 MugginsM joined #gluster
02:51 julim joined #gluster
02:52 vmallika joined #gluster
03:05 gem joined #gluster
03:06 harish_ joined #gluster
03:08 Siavash joined #gluster
03:08 Siavash joined #gluster
03:11 DV joined #gluster
03:16 bennyturns joined #gluster
03:23 shubhendu joined #gluster
03:36 cholcombe joined #gluster
03:39 overclk joined #gluster
03:40 nehar joined #gluster
03:44 Siavash joined #gluster
03:58 davpostpunk i need to know if it's possible to upgrade gluster from 3.7.8 to 3.7.11 without having to stop the volume
04:01 nbalacha joined #gluster
04:02 Intensity joined #gluster
04:05 MugginsM joined #gluster
04:08 brandon joined #gluster
04:20 aspandey joined #gluster
04:22 Intensity joined #gluster
04:30 ashiq joined #gluster
04:39 atinm joined #gluster
04:49 MugginsM joined #gluster
04:55 prasanth joined #gluster
04:56 nehar joined #gluster
05:00 brandon joined #gluster
05:00 hchiramm joined #gluster
05:08 ndarshan joined #gluster
05:17 Manikandan joined #gluster
05:19 Manikandan joined #gluster
05:21 Apeksha joined #gluster
05:21 davpostpunk i need to know if it's possible to upgrade gluster from 3.7.8 to 3.7.11 without having to stop the volume
05:21 davpostpunk ?¿
05:21 davpostpunk anyone?
05:23 davpostpunk joined #gluster
05:23 davpostpunk i need to know if it's possible to upgrade gluster from 3.7.8 to 3.7.11 without having to stop the volume
05:23 davpostpunk anyone?
05:24 davpostpunk thanks
05:25 MugginsM joined #gluster
05:26 anoopcs davpostpunk, Please be patient until someone replies.
05:27 davpostpunk sorry, i was re-posting the message for the people who just connected, sorry
05:27 karthik___ joined #gluster
05:29 poornimag joined #gluster
05:31 vmallika joined #gluster
05:31 davpostpunk people are starting to come online for work :D
05:33 ppai joined #gluster
05:35 Bhaskarakiran joined #gluster
05:35 ic0n joined #gluster
05:36 kshlm joined #gluster
05:40 Wizek joined #gluster
05:51 spalai joined #gluster
05:51 spalai left #gluster
05:53 karnan joined #gluster
05:59 MugginsM joined #gluster
05:59 gowtham joined #gluster
06:01 rafi joined #gluster
06:04 hgowtham joined #gluster
06:04 anil_ joined #gluster
06:05 jiffin joined #gluster
06:05 Saravanakmr joined #gluster
06:06 hgowtham joined #gluster
06:10 Lee1092 joined #gluster
06:12 spalai joined #gluster
06:12 jiffin joined #gluster
06:13 jtux joined #gluster
06:17 RameshN joined #gluster
06:18 DV__ joined #gluster
06:24 RameshN joined #gluster
06:24 poing joined #gluster
06:27 Saravanakmr joined #gluster
06:28 poing joined #gluster
06:33 poing joined #gluster
06:41 kdhananjay joined #gluster
06:43 poing joined #gluster
06:48 atinm joined #gluster
06:48 atalur joined #gluster
06:49 nathwill joined #gluster
06:51 nathwill hmmm, curious if i should be alarmed about https://gist.github.com/nathwill/445090098673dc6740c3a320ec78bb12; this is a new gluster cluster with a whole pile of pre-existing files, so i'm thinking that that is OK?
06:52 Gnomethrower joined #gluster
06:52 tyler274 joined #gluster
06:54 Chr1st1an joined #gluster
06:55 anil_ joined #gluster
06:55 DV joined #gluster
06:57 jiffin nathwill: does the cluster contain only files?
06:57 post-factum JoeJulian: should I set cluster.data-self-heal, cluster.metadata-self-heal and cluster.entry-self-heal to off before upgrading the cluster?
06:58 nathwill jiffin, files and folders
06:58 nathwill one of the nodes *was* an NFS server, i converted it to a gluster server and added two nodes with two extra bricks for quorum
06:59 nathwill one of the new bricks was empty, the other was a drbd clone of the first brick
06:59 jiffin nathwill: oh
06:59 nathwill but it looks like the drbd clone had all the content moved to .glusterfs/landfill and is now being backfilled
06:59 nathwill :/
06:59 nangthang joined #gluster
07:00 billputer joined #gluster
07:00 nathwill so i'm *guessing* that the buf->ia_gfid is null message is because those files don't have a gfid, so gluster's just adding them?
07:00 jiffin nathwill: if it contains folders, it may end up in split brains
07:00 jiffin and there is great chance for that
07:01 nathwill hmmm
07:01 jiffin if the new bricks were empty , then that would be fine
07:01 jiffin what is ur volume configuration
07:01 jiffin ?
07:01 Pintomatic joined #gluster
07:01 twisted` joined #gluster
07:02 nathwill https://gist.github.com/nathwill/15af98f8a96d703de4c61dfca4405726 << brick status
07:02 glusterbot Title: gist:15af98f8a96d703de4c61dfca4405726 · GitHub (at gist.github.com)
07:02 nathwill https://gist.github.com/nathwill/e44f28b5f94717e2c8c1c442f726c529 << config
07:02 glusterbot Title: gist:e44f28b5f94717e2c8c1c442f726c529 · GitHub (at gist.github.com)
07:02 sc0 joined #gluster
07:03 nathwill i set quorum type to auto straight away, hoping to avoid split-brains by requiring majority consensus
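
For reference, a sketch of the quorum options nathwill is describing; the volume name is a placeholder:

    gluster volume set myvol cluster.quorum-type auto           # client-side quorum: writes need a majority of replicas
    gluster volume set myvol cluster.server-quorum-type server  # optional server-side quorum enforced by glusterd
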
07:03 anil_ left #gluster
07:04 scubacuda joined #gluster
07:05 fyxim__ joined #gluster
07:08 scobanx Hi, is there any limit on file name size for gluster? We have 100-150 character file names, will this be a problem?
07:11 nathwill my main concern at this point is just to be able to replicate the data off of the old servers onto new servers while keeping the data accessible to clients
07:12 jiffin nathwill: are you trying for a 1x3 volume?
07:13 nathwill jiffin: yep; want to have enough to maintain quorum while taking one node down for snapshotting
07:13 ivan_rossi joined #gluster
07:14 jiffin scobanx: if u can create a file with a name that long then it is fine
07:14 scobanx are there any limits?
07:15 jiffin scobanx: #define PATH_MAX        4096    /* # chars in a path name including nul */
07:15 jiffin scobanx: #define NAME_MAX         255    /* # chars in a file name */
07:15 jiffin from limits.h
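
Those limits can also be checked directly against a mounted volume; a small sketch, assuming the volume is mounted at /mnt/glustervol:

    getconf NAME_MAX /mnt/glustervol   # max file name length, typically 255
    getconf PATH_MAX /mnt/glustervol   # max path length, typically 4096
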
07:16 Debloper joined #gluster
07:16 jiffin nathwill: k thats fine
07:16 MugginsM joined #gluster
07:16 jiffin nathwill: it is not recommended to have pre-existing files and folders in glusterfs volumes
07:17 jiffin nathwill: as u mentioned earlier, glusterfs tries to set gfids for those files and folders
07:18 jiffin nathwill: but it may end up in split brains
07:18 poing joined #gluster
07:20 spalai joined #gluster
07:20 jiffin nathwill: don't forget to perform heal on that volume
07:22 nathwill jiffin: yeah, i know it's not ideal; we have some 1.5M+ top-level dirs with some 5->1,000 files per dir, and unfortunately rsync'ing clean from NFS to a new gluster cluster was not going to work as it would have to be done offline to ensure consistent replication
07:22 nathwill which would have meant several days of downtime at the rate we calculated
07:22 nathwill :/
07:23 nathwill jiffin: and thanks, i'll be sure to heal. i know the heal info currently shows a big list of files
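
For reference, a sketch of the heal commands being discussed; the volume name is a placeholder:

    gluster volume heal myvol              # trigger healing of entries known to need it
    gluster volume heal myvol info         # list entries still pending heal
    gluster volume heal myvol statistics   # per-brick heal counters
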
07:28 GoKule joined #gluster
07:30 scobanx jiffin: thanks
07:30 GoKule I have a problem with the speed between two Gluster nodes... I have a 10GbE connection between them, but when I check the speed with iftop the max is only 40Mbit/s.
07:31 GoKule At first I only had one node, and then added another one as a replica.
07:31 GoKule I have around 4TB of data, and the estimated time for the sync between them is 1 month?!?!?!
07:31 jri joined #gluster
07:33 ctria joined #gluster
07:33 aravindavk joined #gluster
07:36 atinm joined #gluster
07:38 fsimonce joined #gluster
07:45 muneerse joined #gluster
07:46 RameshN joined #gluster
07:50 karthik___ joined #gluster
07:51 muneerse2 joined #gluster
07:53 vmallika joined #gluster
07:56 muneerse joined #gluster
07:58 rouven joined #gluster
08:04 ahino joined #gluster
08:07 DV joined #gluster
08:16 aspandey joined #gluster
08:20 alghost joined #gluster
08:20 aspandey_ joined #gluster
08:27 Norky joined #gluster
08:27 itisravi joined #gluster
08:28 spalai joined #gluster
08:28 DV joined #gluster
08:30 robb_nl joined #gluster
08:33 alghost Hi all, do you know why I can't access download.gluster.org ??
08:33 alghost server said I don't have permission (Forbidden)
08:35 DV joined #gluster
08:36 Slashman joined #gluster
08:38 skoduri joined #gluster
08:39 MikeLupe @alghost: same here
08:39 harish_ joined #gluster
08:40 aravindavk joined #gluster
08:40 alghost @MikeLupe: Thank you for checking that
08:40 MikeLupe my pleasure
08:46 daveys110 joined #gluster
08:48 daveys110 Is there a problem with the glusterfs-epel.repo ? (We've started getting HTTP 403 Forbidden on http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.1/EPEL.repo/glusterfs-epel.repo )
08:51 post-factum yes
08:52 daveys110 @post-factum - sorry, I should have read back in the channel logs first. Thanks for the reply
08:57 harish_ joined #gluster
08:58 Jules- joined #gluster
09:03 misc there was a cloud issue, the block device was having problems, things are back to normal
09:04 pur joined #gluster
09:12 rastar joined #gluster
09:19 [Enrico] joined #gluster
09:31 arcolife joined #gluster
09:36 daveys110 thx @misc
09:38 atinm joined #gluster
09:44 ppai joined #gluster
09:44 johnmilton joined #gluster
09:46 rastar joined #gluster
09:46 spalai joined #gluster
09:46 atalur joined #gluster
10:00 mpietersen joined #gluster
10:02 davpostpunk joined #gluster
10:02 johnmilton joined #gluster
10:11 refj joined #gluster
10:12 refj How do I find out what the default {client,server}.event-threads settings are?
10:18 poing joined #gluster
10:20 kalzz joined #gluster
10:38 refj ok, gluster volume get <vol> <setting>
10:39 DV joined #gluster
10:41 post-factum also, gluster volume set help
10:43 robb_nl joined #gluster
10:45 ppai joined #gluster
10:48 post-factum @options
10:48 glusterbot post-factum: See config options and their defaults with 'gluster volume set help'; you can see the current value of an option, if it has been modified, with 'gluster volume info'; see also this page about undocumented options: http://goo.gl/mIAe4E
10:51 refj post-factum: Thanks!
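
For reference, a sketch of the commands refj and post-factum mention; the volume name is a placeholder:

    gluster volume get myvol client.event-threads   # value of a single option (default if unset)
    gluster volume get myvol server.event-threads
    gluster volume set help                         # every option with its default and description
    gluster volume info myvol                       # shows only options changed from their defaults
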
10:52 rastar joined #gluster
10:55 atalur joined #gluster
11:03 Bhaskarakiran joined #gluster
11:04 mpietersen joined #gluster
11:07 bluenemo joined #gluster
11:13 atinm joined #gluster
11:19 RameshN joined #gluster
11:24 johnmilton joined #gluster
11:32 rouven joined #gluster
11:37 shdeng joined #gluster
11:37 nishanth joined #gluster
11:40 russoisraeli joined #gluster
11:50 wnlx joined #gluster
11:53 Manikandan_ joined #gluster
11:54 kshlm Weekly community meeting starts in 5 minutes in #gluster-meeting
11:56 DV joined #gluster
11:57 hagarth /j #gluster-meeting
12:06 bennyturns joined #gluster
12:08 bennyturns joined #gluster
12:16 ppai joined #gluster
12:19 prasanth joined #gluster
12:21 Wizek joined #gluster
12:24 harish joined #gluster
12:25 EinstCrazy joined #gluster
12:29 DV joined #gluster
12:36 [Enrico] joined #gluster
12:41 itisravi joined #gluster
12:52 chirino_m joined #gluster
12:52 Wizek joined #gluster
12:53 nbalacha joined #gluster
12:57 DV joined #gluster
13:00 EinstCra_ joined #gluster
13:02 skoduri_ joined #gluster
13:05 mpietersen joined #gluster
13:11 plarsen joined #gluster
13:18 DV joined #gluster
13:24 DV joined #gluster
13:27 prasanth joined #gluster
13:39 julim joined #gluster
13:46 muneerse2 joined #gluster
13:47 shyam joined #gluster
13:55 bennyturns joined #gluster
13:59 [Enrico] joined #gluster
14:05 skylar joined #gluster
14:05 sysanthrope joined #gluster
14:06 MikeLupe Hi again. It shouldn't be a problem (by design) that on a "replica 3" volume the fuse.glusterfs mounts for the gluster volumes on each member host all point to the same host, should it?
14:06 plarsen joined #gluster
14:09 MikeLupe btw, thanks @JoeJulian and @postfactum for helping me out when I was logged on as "glugg" or smthing ;)
14:11 nbalacha joined #gluster
14:14 post-factum MikeLupe: don't miss the dash for me :)
14:14 sysanthrope Hi.  I currently have two gluster servers, sharing out two volumes.  One of the volumes was created with two bricks, one on each server, but with the short name of the server specified.  I would like to rename the brick so the FQDN is reflected instead of the short name (just host name).  Is there an easy way to do a rename?  So to go from hostname:/gfs/brick to hostname.domain.name:/gfs/brick...
14:14 MikeLupe I'm so sorry, again :) I already messed up you nick some weeks ago...you remember? ;)
14:17 post-factum of course, i'm unforgiving
14:20 shyam joined #gluster
14:20 sysanthrope Will this do what I want (a rename)?  volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK>?  Will gluster interpret that as a rename or will it fail if it's actually the same brick, but just with a new FQDN instead of the short name?
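
For reference, the replace-brick form sysanthrope is asking about looks roughly like this in 3.7 (hostnames and paths are placeholders). This is only a syntax sketch: since replace-brick normally treats the target as a fresh brick to be healed into, whether it behaves as a pure rename for the same on-disk path is worth testing on a throwaway volume first.

    # syntax sketch only -- test on a scratch volume before touching production
    gluster volume replace-brick myvol \
        hostname:/gfs/brick \
        hostname.domain.name:/gfs/brick \
        commit force
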
14:39 social joined #gluster
14:39 gowtham joined #gluster
14:39 kbyrne joined #gluster
14:44 level7 joined #gluster
14:46 rwheeler joined #gluster
14:47 itisravi joined #gluster
14:55 arcolife joined #gluster
14:56 kpease joined #gluster
15:01 kpease joined #gluster
15:05 kpease joined #gluster
15:08 aravindavk joined #gluster
15:08 rafi joined #gluster
15:13 atalur joined #gluster
15:16 shyam joined #gluster
15:16 itisravi_ joined #gluster
15:21 nehar joined #gluster
15:22 hagarth joined #gluster
15:24 Bhaskarakiran joined #gluster
15:26 muneerse joined #gluster
15:35 MikeLupe Is it possible to nfs export a subdirectory of a replicated volume without using NFS-Ganesha?
15:35 muneerse joined #gluster
15:37 spalai left #gluster
15:38 kkeithley someone is (was?) working on subdir exports from a Gluster Native (FUSE), but AFAIK nobody is doing anything with gnfs.
15:38 MikeLupe ok thanks
15:39 kkeithley For NFS, all focus is on NFS-Ganesha.
15:40 MikeLupe ok.. Is this guide ok? https://gluster.readthedocs.org/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Intergration/
15:40 glusterbot Title: Configuring NFS-Ganesha server - Gluster Docs (at gluster.readthedocs.org)
15:41 kkeithley You can start with that. I (humbly) suggest my blog at http://blog.gluster.org/2015/10/linux-scale-out-nfsv4-using-nfs-ganesha-and-glusterfs-one-step-at-a-time/
15:41 glusterbot Title: Linux scale out NFSv4 using NFS-Ganesha and GlusterFS — one step at a time | Gluster Community Website (at blog.gluster.org)
15:42 skoduri joined #gluster
15:43 MikeLupe I humbly accept it :) thanks
15:43 MikeLupe Will I run into problems with hyperconverged ovirt using the same data volume?
15:44 muneerse2 joined #gluster
15:44 kkeithley you shouldn't. YMMV
15:45 MikeLupe ok thanks again
15:47 muneerse joined #gluster
15:49 ivan_rossi left #gluster
16:11 level7_ joined #gluster
16:18 shyam joined #gluster
16:20 F2Knight joined #gluster
16:26 vmallika joined #gluster
16:29 hackman joined #gluster
16:31 atalur joined #gluster
16:32 shaunm joined #gluster
16:33 rafi joined #gluster
16:36 rafi joined #gluster
16:42 mhulsman joined #gluster
16:42 gem joined #gluster
16:44 mhulsman joined #gluster
16:45 overclk joined #gluster
16:59 MikeLupe @kkeithley: great guide so far. Topic 6. - ganesha-ha.conf . I'm a bit confused about the Virtual IPs. Why can't I use FQDNs here?
17:00 MikeLupe Even if it's "virtual"
17:00 mhulsman joined #gluster
17:01 rnowling joined #gluster
17:01 MikeLupe # Virtual IPs for each of the nodes specified above.
17:01 MikeLupe VIP_node0="172.16.3.140"
17:01 MikeLupe VIP_node1="172.16.3.141"
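
For context, the surrounding keys in ganesha-ha.conf from the guide being followed look roughly like this (cluster name, node names and addresses are placeholders); the VIPs are literal addresses because they are floating IPs handed to pacemaker's IPaddr resource agent, which is the point kkeithley makes below:

    HA_NAME="ganesha-ha-demo"
    HA_VOL_SERVER="node0"
    HA_CLUSTER_NODES="node0,node1"
    VIP_node0="172.16.3.140"
    VIP_node1="172.16.3.141"
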
17:02 brandon joined #gluster
17:02 rnowling joined #gluster
17:03 rnowling hi all.  I have some questions about using Swift on Gluster (mostly about features, use cases).  Is this an appropriate place to ask about that?
17:08 Gnomethrower joined #gluster
17:10 rnowling joined #gluster
17:13 ahino joined #gluster
17:15 Bhaskarakiran joined #gluster
17:16 kkeithley MikeLupe: Not sure if you can. I don't know if the pacemaker IPaddr resource agent will use it.
17:17 kkeithley s/virtual/floating/
17:17 glusterbot kkeithley: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
17:18 MikeLupe kkeithley: thanks, I just realized I need a fully functional IPV6 env, so I'll just create a new disk for a glustered VM and nfs export that.
17:19 rafi joined #gluster
17:20 kkeithley if you have FQDN (e.g. in DNS) for those floating IPs, the clients can certainly use them.
17:22 kkeithley the FQDNs that is
17:22 kkeithley me.  The clients can use the FQDNs
17:23 skoduri joined #gluster
17:35 rafi joined #gluster
17:39 mhulsman joined #gluster
17:47 MikeLupe ok
17:54 DV joined #gluster
17:54 rafi joined #gluster
18:01 shubhendu joined #gluster
18:06 rnowling joined #gluster
18:07 rnowling joined #gluster
18:08 rnowling joined #gluster
18:12 Wizek joined #gluster
18:16 bowhunter joined #gluster
18:22 theron joined #gluster
18:26 mhulsman joined #gluster
18:29 post-factum JoeJulian: are you here?
18:41 nishanth joined #gluster
18:44 kpease joined #gluster
18:57 rtalur joined #gluster
19:03 mhulsman joined #gluster
19:03 nigelb joined #gluster
19:07 jiffin joined #gluster
19:14 rafi joined #gluster
19:16 _Bryan_ joined #gluster
19:17 rafi joined #gluster
19:17 dorvan joined #gluster
19:17 dorvan hi all
19:18 rafi joined #gluster
19:20 jiffin dorvan: hi
19:21 dorvan i need to use gluster with zfs as the brick fs, with a particular network configuration, but i need to review the network configuration to get maximum bandwidth for backend and frontend access. can anyone help me?
19:23 jiffin dorvan: u can use zfs as the backend, but i am not the right person for recommending a network configuration
19:25 dorvan jiffin: clear ;-)
19:25 dorvan for zfs I've set it up already
19:25 post-factum jiffin: would you please review my upgrade plan for replica 2?
19:26 dorvan post-factum: seems interesting...
19:28 dorvan can i rebuild the config of a gluster volume at runtime, if i have a brick that's already populated?
19:29 post-factum jiffin: https://public.pad.fsfe.org/p/376_to_3711
19:29 glusterbot Title: FSFE Etherpad: public instance (at public.pad.fsfe.org)
19:36 JoeJulian I'm not currently convinced that step 4 is necessary.
19:36 JoeJulian or even desired
19:38 post-factum why?
19:38 post-factum those options are on by default
19:39 JoeJulian That's just client-side self-heals. Originally there was no self-heal daemon so that was the only way files would get healed.
19:39 JoeJulian In fact, the file had to be looked up for even that to work.
19:39 post-factum pkill glusterfsd → systemctl stop glusterfsd :)
19:40 post-factum JoeJulian: so that won't introduce split-brains or other issues?
19:40 JoeJulian Have you tested that? On most systems it's unlikely to work since it's never been implemented correctly.
19:41 post-factum you mean client-side heals?
19:41 post-factum oh. systemd
19:41 JoeJulian systemd
19:41 post-factum ExecStop=/bin/sh -c "/bin/killall --wait glusterfsd || /bin/true"
19:41 post-factum why not?
19:41 JoeJulian All the necessary heal checks are done to ensure your application is getting healthy data, just the client won't actively perform a self-heal of a file.
19:41 JoeJulian because it's never been started.
19:42 JoeJulian arch doesn't even have glusterfsd.service
19:42 post-factum it is active for me
19:42 JoeJulian ok, cool
19:42 post-factum as told by systemctl status
19:42 post-factum anyway, I've got the point. kill -15
19:43 post-factum so, why client-side heals are not disabled by default?
19:43 JoeJulian legacy more than anything at this point, I think.
19:43 post-factum hmmm
19:43 JoeJulian If it was all smaller files, I'd just leave it.
19:44 post-factum there are lots of small files
19:44 post-factum mail, web etc
19:44 JoeJulian Once you start healing larger files, it's just too inefficient to read both copies across the network, generate and compare hashes, then copy data if necessary.
19:44 post-factum also, should i start heal manually after each node upgrade?
19:45 post-factum or it will be triggered automatically?
19:45 JoeJulian It will trigger automatically.
19:45 post-factum so, not clear about client-side heals so far... there are lots of different files
19:45 post-factum like iso images as well as php-files and mailboxes
19:49 post-factum if self-heal will do the trick, we do not need client-side heal at all even for small files. am i correct?
19:49 JoeJulian correct
19:49 post-factum and you also disable those options in your production deployment?
19:49 JoeJulian I have, yes.
19:50 post-factum ok, darn it :). we have backups
19:50 JoeJulian lol
19:50 post-factum :D
19:50 post-factum JoeJulian: thanks!
19:50 post-factum JoeJulian++
19:50 glusterbot post-factum: JoeJulian's karma is now 26
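
For reference, a sketch of turning off the client-side heal options discussed above, leaving healing to the self-heal daemon; the volume name is a placeholder:

    gluster volume set myvol cluster.data-self-heal off
    gluster volume set myvol cluster.metadata-self-heal off
    gluster volume set myvol cluster.entry-self-heal off

    # the self-heal daemon stays enabled and keeps healing in the background
    gluster volume get myvol cluster.self-heal-daemon
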
19:53 whallz joined #gluster
19:53 whallz joined #gluster
19:54 whallz hello
19:54 glusterbot whallz: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
19:54 whallz im not sure gluster is a good solution for me, can anyone please point me in the right direction?
19:55 post-factum whallz: the very first question you should answer prior to asking this one is "what is my workload?"
19:55 whallz im using aws; N instances from an autoscaling group should be able to read from a single source of data
19:56 whallz the data is an ebs volume, which one instance has mounted, and it's the only instance that actually writes the data
19:56 post-factum JoeJulian: also, i forgot about bumping opversion. would like to do that at the end
19:56 whallz but i cannot mount the same ebs volume on all instances who need to read it
19:56 * JoeJulian cringes at having all your eggs in an ebs volume
19:57 whallz its not all my eggs
19:57 JoeJulian whew
19:57 whallz its just some eggs they need to eat
19:57 whallz only one instance cooks the eggs, N instances eat them
19:58 JoeJulian ebs volumes have the same reliability as a single hard drive.
19:58 post-factum ideal for pub-sub scheme
19:58 whallz JoeJulian: so whats the alternative?
19:59 JoeJulian Is this scientific analysis? Web serving? Rendering?
19:59 whallz web server
20:01 JoeJulian The alternative would be two, maybe even three (I would use three) servers each with ebs volumes with a replica 3 gluster volume on top. Probably serving nfs to your web hosts.
20:01 whallz there's this instance who runs crons and stuff, i want it to "host" the data for the N other instances who need to read it
20:02 whallz is a replica kind of a software RAID 1 ?
20:03 JoeJulian That comparison has been made. I don't like to call it that because they're actually completely different.
20:03 whallz that ok, the vague concept of it is similar
20:03 JoeJulian But you would have the same data live on 3 storage devices and if one goes down, all the web hosts keep working.
20:05 JoeJulian aws and ebs each only have 99.9% uptime guarantees. Doing a replica 3 volume would give you five nines.
20:05 nathwill hmmm, has anyone seen very high (400K+) fd use of glusterfsd during heal in a replica?
20:05 whallz right. let's imagine i already have 3 replicated ebs volumes acting as a single gluster volume; do the instances which need to read that data need to be peers too?
20:05 nathwill curious if this is a bug, or something i can do to fix; this already sent the machine non-responsive once, now watching it creep closer again
20:05 JoeJulian nathwill: No, but I haven't really looked.
20:06 JoeJulian nathwill: if it's non-responsive, I'd look at disabling client-side heals. cluster.data-self-heal off
20:06 JoeJulian whallz: no, they can just mount the data either using the native fuse mount or (more probably) using nfs.
20:08 JoeJulian whallz: I would use a set of floating ips so if a server goes down, its ips are assigned to the other two and the load is distributed. (6 ips, two per server, where each of the two would fail-over to a different server)
20:08 JoeJulian ... that's only for nfs mounts though.
20:09 JoeJulian fuse mounts are already HA, but the thousands of lookups most web sites do for a single page add too much overhead.
20:11 plarsen joined #gluster
20:12 post-factum JoeJulian: we use fuse for web with extensive local caching
20:13 JoeJulian That's my preference as well.
20:14 JoeJulian But since he's talking about autoscaling, you would then require a cache preload time before you add your new web servers to the LB.
20:15 whallz yes
20:15 whallz that's why i expect something like a "live" filesystem i can mount from
20:16 whallz if i dont want to replicate or distribute my volume
20:16 whallz can i create it in a single server instance?
20:16 level7 joined #gluster
20:16 JoeJulian Sure, but you lose all that reliability.
20:16 whallz i know
20:17 whallz how would that gluster volume create command look like?
20:17 JoeJulian You should just spin up a vm and try it out.
20:17 whallz i mean, i can only find examples in which they set the location based on host:path
20:17 JoeJulian It's pretty easy.
20:17 JoeJulian Yes, host:path
20:18 JoeJulian that's how it works.
20:18 whallz so should it be localhost:/mnt/data ?
20:18 whallz 127.0.0.1:/mnt/data ?
20:18 JoeJulian No, you can't use localhost.
20:18 JoeJulian There's good reasons for that.
20:18 JoeJulian Just set a hostname for your server in /etc/hosts and use that.
20:19 JoeJulian Or use systemd-resolved
20:19 ahino joined #gluster
20:20 whallz ok, i'v mapped a hostname in /etc/hosts
20:21 whallz now: gluster volume create data data:/mnt/data
20:21 whallz volume create: data: failed: Host data is not in 'Peer in Cluster' state
20:21 bowhunter joined #gluster
20:21 whallz am i missing something?
20:22 JoeJulian You're running this on the one machine you want to be the server?
20:23 JoeJulian You didn't use 127.0.0.1 for the address, did you?
20:23 whallz i did
20:23 whallz in /etc/hosts i put: data 127.0.0.1
20:23 JoeJulian Actually, let's not waste time.
20:23 JoeJulian If you don't want the reliability, just use nfs.
20:24 JoeJulian There's no point in using any sort of clustered filesystem software if you're going to run it all from just one single machine with a storage device.
20:25 whallz really?
20:25 whallz but i need to set this up now, and it will need to scale and it will need to be clustered and i will add replicas on several devices
20:26 JoeJulian Ah, ok, well then that's different.
20:26 whallz i just thought of this starting point as a quick way to test and get to know gluster
20:26 whallz for my head-sake too
20:26 JoeJulian in production. A man after my own heart.
20:26 post-factum how many replicas would you plan to have?
20:27 whallz at least 2
20:27 JoeJulian I recommend no more than three.
20:27 post-factum i'd say "not more than 2+arbiter"
20:27 JoeJulian Unless you're going to be netflix.
20:27 whallz im not going to be netflix
20:27 whallz I AM
20:27 whallz muahaha
20:28 post-factum :D
20:28 whallz nope
20:28 whallz so, to start off quickly
20:28 JoeJulian I think I'm going to cause a split-brain with a 2+A config this weekend.
20:28 whallz there's a concept i'm not getting i think
20:28 post-factum JoeJulian: ??
20:28 JoeJulian I think I know of a way to do it.
20:28 post-factum how?
20:28 JoeJulian split-brain over time.
20:29 JoeJulian If I'm successful, I'll blog about it.
20:29 JoeJulian Hell, if I'm not I'll still blog about it.
20:29 whallz yay
20:30 post-factum okay. intriguing
20:30 whallz what am i getting wrong?
20:30 whallz the server has a device
20:30 whallz i create a gluster volume on that device
20:30 brandon joined #gluster
20:30 kalzz joined #gluster
20:30 JoeJulian So, whallz, you need a real ip address. For now it doesn't need to be reachable so you'll use a hostname. You can create a dummy ethernet link if you like and assign it to that.
20:30 whallz gluster volume create data data:/mnt/data
20:30 whallz volume create: data: failed: Host data is not in 'Peer in Cluster' state
20:31 JoeJulian You cannot use 127.
20:31 whallz i have an elastic ip
20:31 whallz will be the productive one
20:32 JoeJulian Ok, then use that.
20:32 JoeJulian The servers, once you have more, will need to be able to find each other.
20:33 whallz same thing not in 'Peer in Cluster' state
20:33 JoeJulian The clients, if you were mounting with fuse, would need to be able to find *all* the servers. Since you're mounting with nfs, they only need to find one (any one).
20:33 whallz ok
20:33 jiffin joined #gluster
20:33 JoeJulian Ok, stop glusterd. rm -rf /var/lib/glusterd/*. Start glusterd.
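
A sketch of the sequence JoeJulian describes plus the create/start that follows, assuming an /etc/hosts entry mapping the name "data" to a real (non-127.x) address and a brick directory under the mounted device:

    # destructive: wipes glusterd's state (peers, volume definitions)
    systemctl stop glusterd
    rm -rf /var/lib/glusterd/*
    systemctl start glusterd

    # single-brick volume; gluster may ask you to append 'force'
    # if the brick sits directly on a mount point or the root filesystem
    gluster volume create data data:/mnt/data/brick
    gluster volume start data
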
20:34 post-factum warning: destructive operation. would you like to continue [y/n]?
20:34 JoeJulian That's why I always type /bin/rm ;)
20:35 post-factum (:
20:35 JoeJulian I also type the path first and always go back and add the -rf after.
20:35 post-factum alias rm="rm -i"
20:35 JoeJulian alias pinkslip="/bin/rm -rf /"
20:36 whallz -i++
20:36 glusterbot whallz: -i's karma is now 1
20:36 whallz lol
20:36 post-factum -i++ let it be
20:36 glusterbot post-factum: -i's karma is now 2
20:36 post-factum -i++ ++ -i--
20:36 glusterbot post-factum: -i's karma is now 3
20:36 glusterbot post-factum: -i's karma is now 2
20:36 post-factum stupid bot
20:37 whallz glusterbot --version
20:37 glusterbot whallz: The current (running) version of this Supybot is 0.83.4.1+limnoria 2015.05.23, running on Python 2.7.9 (default, Jul 29 2015, 20:23:41)  [GCC 4.4.7 20120313 (Red Hat 4.4.7-11)].  The newest versions available online are 2016.04.24 (in testing), 2016.03.21 (in master).
20:37 post-factum glusterbot --exit
20:37 post-factum no?
20:37 whallz glusterbot --kill
20:37 whallz nono
20:38 whallz same thing guys
20:38 JoeJulian hmm, odd. I thought I was auto-updating glusterbot.
20:38 whallz is not in 'Peer in Cluster' state
20:39 JoeJulian whallz: ,,(paste) /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
20:39 glusterbot whallz: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
20:41 whallz ook
20:41 whallz nice
20:41 whallz got it working
20:41 papamoose joined #gluster
20:42 whallz my bad
20:42 whallz runs smoothly
20:42 whallz much obliged
20:42 whallz many thanks
20:42 whallz merry times
20:42 whallz regards
20:42 whallz left #gluster
20:43 JoeJulian excellent
20:44 JoeJulian what was it?
20:44 The_Pugilist joined #gluster
20:46 ctria joined #gluster
20:49 swebb joined #gluster
21:15 julim joined #gluster
21:24 bowhunter joined #gluster
21:28 atrius joined #gluster
21:33 skylar joined #gluster
21:44 level7 joined #gluster
21:50 julim joined #gluster
21:53 hagarth joined #gluster
22:38 cliluw joined #gluster
22:41 MugginsM joined #gluster
22:48 MugginsM joined #gluster
22:50 bowhunter joined #gluster
23:04 MugginsM joined #gluster
23:07 MikeLupe what was it???
23:07 brandon left #gluster
23:14 davpostpunk so many thanks atinm
23:16 MugginsM joined #gluster
23:26 luizcpg joined #gluster
23:27 post-factum how should i cope with gfids in heal info that are not healed?
23:29 JoeJulian Rage, slam doors on the way out, have a beer, come back in the morning and see that they're done?
23:30 JoeJulian There's a couple of options. The one I use is to do a state dump on one of the bricks which has that file. Look for "self-heal"; that should show you a lock and the position within the file of that lock.
23:30 JoeJulian That should give you a clue how far along it is.
23:30 post-factum JoeJulian: will do now
23:31 JoeJulian The other is to find the filename that belongs to it and make guesses.
23:31 JoeJulian @gfid-resolver
23:31 glusterbot JoeJulian: I do not know about 'gfid-resolver', but I do know about these similar topics: 'gfid resolver'
23:31 JoeJulian @gfid resolver
23:31 glusterbot JoeJulian: Use /usr/lib*/glusterfs/glusterfs/gfind_missing_files/gfid_to_path.py to resolve a gfid to a filename.
23:32 JoeJulian btw, if the gfid file under .glusterfs is a symlink, it's a directory and it shouldn't be marked for heal. imho that's a bug but I don't know how to duplicate it.
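
For reference, one way to resolve a gfid from heal info to a path by hand on a brick; the brick path is a placeholder and the gfid is just the example that appears later in this log. For regular files the gfid entry under .glusterfs is a hard link to the real file, so find -samefile locates it (directories show up as symlinks instead, as JoeJulian notes):

    BRICK=/path/to/brick
    GFID=1056e95b-d32b-4206-a98c-3a5d651b731d   # example gfid

    # gfid files live under .glusterfs/<first 2 hex chars>/<next 2>/<gfid>
    find "$BRICK" -samefile "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID" \
        -not -path '*/.glusterfs/*'
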
23:33 post-factum ok, i've resolved gfid to filename
23:34 post-factum the file is present in fuse mount and could be read
23:35 post-factum it looks like all the affected gfids shown by heals are the files that were written to from different clients simultaneously
23:35 post-factum but how could i resolve that?
23:35 JoeJulian They should resolve themselves.
23:35 JoeJulian heal statistics might tell you something?
23:37 post-factum No. of heal failed entries: 68 for one of the bricks
23:37 post-factum but on next crawler launch it is 0
23:37 JoeJulian not uncommon
23:37 post-factum No. of heal failed entries: 1
23:37 post-factum and 1 on next as well
23:38 JoeJulian You could check the glustershd.log
23:38 JoeJulian They usually resolve themselves though.
23:40 post-factum see nothing there
23:40 post-factum should i try to launch full heal?
23:41 JoeJulian The only problem I have with that is how to know if/when it is/will be done.
23:41 shyam joined #gluster
23:41 post-factum i will try for small volume first
23:42 post-factum hm, it did the trick
23:43 post-factum now launched for large volume
23:45 post-factum and the amount decreases
23:45 post-factum good sign
23:46 post-factum i see metadata heals happening in /var/log/glusterfs/glustershd.log
23:46 post-factum like this
23:46 post-factum [2016-04-27 23:45:49.016728] I [MSGID: 108026] [afr-self-heal-common.c:770:afr_log_selfheal] 0-mail_boxes-replicate-1: Completed metadata selfheal on 1056e95b-d32b-4206-a98c-3a5d651b731d. sources=[0]  sinks=1
23:48 post-factum unless i fcked up smth hidden, the upgrade is quite successful
23:49 s-hell joined #gluster
23:49 theron_ joined #gluster
23:51 JoeJulian yay
23:53 PaulCuzner joined #gluster
