
IRC log for #gluster, 2016-10-25


All times shown according to UTC.

Time Nick Message
00:00 Pupeno joined #gluster
00:07 hchiramm__ joined #gluster
00:40 Pupeno joined #gluster
00:40 Pupeno joined #gluster
00:41 squizzi_ joined #gluster
00:41 hagarth joined #gluster
00:49 jerrcs_ joined #gluster
00:50 jerrcs_ joined #gluster
00:51 jerrcs_ hi all, just had a quick question about high availability glusterfs. im using a 3 node replicated set up and i want to be able to take nodes out of service, one at a time, at any time (graceful shutdown). unfortunately, when i reboot any of the hosts, i notice a lag on my fuse fs gluster client/mount for about 15-20 seconds while it renegotiates against a different, working node. is this normal or am i doing something wrong?
00:52 jerrcs_ i basically don't want that lag on the client side. i want it to be a seamless transition to one of the other nodes that are available.
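The stall jerrcs_ describes is commonly the FUSE client waiting out GlusterFS's network.ping-timeout (42 seconds by default) before it gives up on the rebooted brick and carries on with the surviving replicas; shutting the brick processes down cleanly before the network drops should also let clients fail over immediately, since the connection is closed rather than timed out. A minimal sketch of shortening the timeout, assuming a volume named gv0 (the name is illustrative); a very low value risks spurious disconnects on a flaky network:

    # shorten how long clients wait before giving up on an unresponsive brick
    gluster volume set gv0 network.ping-timeout 10

    # confirm the setting (available on recent releases)
    gluster volume get gv0 network.ping-timeout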
01:07 hchiramm_ joined #gluster
01:26 kramdoss_ joined #gluster
01:34 daMaestro joined #gluster
01:37 ic0n_ joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:54 arc0 joined #gluster
02:00 gem joined #gluster
02:08 hchiramm__ joined #gluster
02:17 victori joined #gluster
02:23 harish joined #gluster
02:28 Muthu joined #gluster
02:28 john51 joined #gluster
02:33 john51 joined #gluster
02:38 john51 joined #gluster
02:43 victori joined #gluster
02:43 john51 joined #gluster
02:53 arc0 joined #gluster
03:04 arpu joined #gluster
03:08 shdeng joined #gluster
03:09 hchiramm_ joined #gluster
03:14 nishanth joined #gluster
03:23 magrawal joined #gluster
03:23 RameshN joined #gluster
03:23 Pupeno joined #gluster
03:24 nbalacha joined #gluster
03:26 hagarth joined #gluster
03:28 jiffin joined #gluster
03:31 DV_ joined #gluster
03:34 prth joined #gluster
03:54 Lee1092 joined #gluster
04:00 itisravi joined #gluster
04:05 hgowtham joined #gluster
04:09 hchiramm__ joined #gluster
04:10 atinm joined #gluster
04:16 malevolent joined #gluster
04:19 om joined #gluster
04:20 sanoj joined #gluster
04:30 gem joined #gluster
04:55 gem joined #gluster
04:55 sanoj_ joined #gluster
04:56 sanoj_ joined #gluster
05:01 daMaestro|isBack joined #gluster
05:02 karthik_us joined #gluster
05:09 aravindavk joined #gluster
05:10 hchiramm_ joined #gluster
05:15 ndarshan joined #gluster
05:20 kramdoss_ joined #gluster
05:21 prasanth joined #gluster
05:22 DV_ joined #gluster
05:23 Pupeno joined #gluster
05:35 Bhaskarakiran joined #gluster
05:39 kdhananjay joined #gluster
05:44 riyas joined #gluster
05:46 gem joined #gluster
05:54 mhulsman joined #gluster
05:57 prth joined #gluster
05:57 karnan joined #gluster
06:02 hchiramm_ joined #gluster
06:04 Muthu_ joined #gluster
06:21 hchiramm_ joined #gluster
06:25 apandey joined #gluster
06:39 jeremyh1 joined #gluster
06:39 kramdoss_ joined #gluster
06:39 Susant joined #gluster
06:41 aravindavk_ joined #gluster
06:41 ebbex_ joined #gluster
06:41 mhulsman1 joined #gluster
06:41 snixor joined #gluster
06:41 markd_ joined #gluster
06:41 d-fence joined #gluster
06:42 the-me_ joined #gluster
06:43 thwam joined #gluster
06:44 rafi joined #gluster
06:44 squeakyneb joined #gluster
06:50 johnmilton joined #gluster
06:52 kotreshhr joined #gluster
06:56 ashiq joined #gluster
06:56 msvbhat joined #gluster
06:58 Philambdo joined #gluster
06:58 shdeng joined #gluster
06:59 prth joined #gluster
07:00 nishanth joined #gluster
07:00 hchiramm joined #gluster
07:13 jiffin joined #gluster
07:18 Saravanakmr joined #gluster
07:20 prth joined #gluster
07:25 [diablo] joined #gluster
07:29 Gnomethrower joined #gluster
07:32 fsimonce joined #gluster
07:33 devyani7_ joined #gluster
07:39 ivan_rossi joined #gluster
07:42 ppai joined #gluster
07:42 nishanth joined #gluster
07:43 bobrebyc1 joined #gluster
07:43 bobrebyc1 hello
07:43 glusterbot bobrebyc1: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
07:43 bobrebyc1 London is a capital of great britain
07:48 greynix joined #gluster
07:50 rafi2 joined #gluster
07:52 prth joined #gluster
08:00 devyani7_ joined #gluster
08:06 Acinonyx joined #gluster
08:07 rafi joined #gluster
08:12 harish joined #gluster
08:13 natarej joined #gluster
08:22 Slashman joined #gluster
08:22 hackman joined #gluster
08:23 panina joined #gluster
08:27 itisravi *the* capital.
08:29 derjohn_mob joined #gluster
08:35 hchiramm joined #gluster
08:45 Pupeno joined #gluster
08:50 karthik_us joined #gluster
08:52 arc0 joined #gluster
08:58 prth joined #gluster
08:59 flying joined #gluster
09:15 Acinonyx joined #gluster
09:27 ppai joined #gluster
09:28 Pupeno joined #gluster
09:37 kdhananjay joined #gluster
10:05 derjohn_mob joined #gluster
10:12 ankitraj joined #gluster
10:16 hackman joined #gluster
10:23 ankit-raj joined #gluster
10:34 arc0 joined #gluster
10:44 Maf joined #gluster
10:44 Maf ý
10:47 msvbhat joined #gluster
10:50 Philambdo joined #gluster
10:56 hchiramm joined #gluster
11:01 om joined #gluster
11:03 Maf left #gluster
11:09 msvbhat joined #gluster
11:19 luizcpg joined #gluster
11:22 partner hey.. i've never used glusterfs with thin-provisioned files on the bricks but that (unfortunately) allows overcommitting over the brick size.. are there any options i could set to prevent this from happening (like, a 2TB brick cannot have 3x 1TB files which are 0-sized at creation time)?
11:27 yalu joined #gluster
11:28 partner actually i have 2.8T reported by the OS, of which 2.5T is used; i have a 2TB and a 1000G file on it..
11:31 Wizek joined #gluster
11:36 Muthu_ joined #gluster
11:37 Gnomethrower joined #gluster
11:39 hchiramm joined #gluster
11:39 partner and since i have empty bricks (from the beginning) on the volume, does rebalance even help here, given all the files are already at their hash-based location, to get more room into that one full brick?
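GlusterFS will not cap how far a sparse file can grow, but cluster.min-free-disk tells DHT to steer new file placement away from bricks below a free-space threshold, and a rebalance spreads existing files across all bricks, including ones that have sat empty. A sketch of both, assuming a distributed volume named datavol (the name is illustrative); note that min-free-disk only affects where new files land, not files that later grow in place on an already full brick:

    # avoid placing new files on bricks with less than 10% free space
    gluster volume set datavol cluster.min-free-disk 10%

    # redistribute existing files across all bricks and watch progress
    gluster volume rebalance datavol start
    gluster volume rebalance datavol status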
11:41 Susant left #gluster
11:42 rafi1 joined #gluster
11:43 ashiq joined #gluster
11:45 hgowtham joined #gluster
11:46 arc0 joined #gluster
11:53 Pupeno joined #gluster
11:55 kkeithley bug triage in five minutes in #gluster-meeting.
11:59 johnmilton joined #gluster
12:01 Philambdo joined #gluster
12:03 kramdoss_ joined #gluster
12:05 johnmilton joined #gluster
12:13 riyas joined #gluster
12:15 jerrcs_ joined #gluster
12:20 jerrcs_ joined #gluster
12:22 jerrcs_ joined #gluster
12:26 Muthu joined #gluster
12:26 nishanth joined #gluster
12:27 partner rrright, i was just expecting some i/o errors but when filling the brick the client put the mount into readonly.. cannot remount rw, let's see if reboot helps
12:32 ira_ joined #gluster
12:33 arc0life_ joined #gluster
12:36 partner ok, so i have one 100% full and one 0% full brick now.. can i somehow get the files moving to the empty one?
12:40 panina joined #gluster
12:49 abyss^_ partner: you've added new brick to existing volume?
12:53 partner my goal is to figure out if there is any means to solve this puzzle with a full brick and a bunch of less-full/empty bricks (which have been in the volume from the start)
12:54 unclemarc joined #gluster
12:55 partner since with this thin-provisioning, no matter what amount of bricks i have it is still possible to get any brick overcommitted
12:55 shyam joined #gluster
13:09 hagarth joined #gluster
13:10 partner are there any tools that could be used for such manual operations? say, give it source and destination and it will make sure all the required metadata gets along plus sets the sticky pointer?
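There is no obvious supported "move this file from brick A to brick B" tool; the sticky pointer partner mentions corresponds to the trusted.glusterfs.dht.linkto extended attribute that DHT leaves on the hashed brick when the data actually lives on another subvolume. A read-only sketch for inspecting it directly on a brick (the brick path is illustrative):

    # dump the xattrs of a file as stored on a brick
    getfattr -m . -d -e hex /bricks/brick1/path/to/file
    # trusted.glusterfs.dht.linkto, if present, names the subvolume that holds the real data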
13:10 rastar joined #gluster
13:11 partner i've long since lost the internals from my head..
13:17 atinm joined #gluster
13:23 guhcampos joined #gluster
13:34 kpease joined #gluster
13:37 partner well, i'll get back to this once the US wakes up :)
13:38 Sebbo2 joined #gluster
13:38 Rasathus joined #gluster
13:38 Muthu joined #gluster
13:38 kotreshhr left #gluster
13:44 ahino joined #gluster
13:45 Lee1092 joined #gluster
13:48 kpease joined #gluster
13:49 kpease_ joined #gluster
13:52 rastar joined #gluster
13:58 snehring joined #gluster
14:04 plarsen joined #gluster
14:08 rwheeler joined #gluster
14:09 wushudoin joined #gluster
14:11 skylar joined #gluster
14:13 squizzi joined #gluster
14:14 magrawal joined #gluster
14:15 arif-ali joined #gluster
14:17 atinm joined #gluster
14:19 shyam joined #gluster
14:26 fcami joined #gluster
14:27 Gnomethrower joined #gluster
14:29 Pupeno joined #gluster
14:33 alvinstarr I am trying to geo-replicate 382719 files but it looks like the replication stops after about 6861 files. How would I start to find out what is happening?
14:43 jkroon joined #gluster
14:43 farhorizon joined #gluster
14:43 LinkRage joined #gluster
14:51 plarsen joined #gluster
14:52 bluenemo joined #gluster
15:03 snehring joined #gluster
15:07 guhcampos joined #gluster
15:14 shyam joined #gluster
15:14 jeremyh joined #gluster
15:25 farhorizon joined #gluster
15:32 freepe joined #gluster
15:52 aravindavk_ joined #gluster
16:00 luizcpg joined #gluster
16:22 shyam joined #gluster
16:25 nbalacha joined #gluster
16:30 Pupeno joined #gluster
16:31 jkroon joined #gluster
16:31 DV_ joined #gluster
16:33 haomaiwang joined #gluster
16:48 jiffin joined #gluster
16:53 hackman joined #gluster
16:55 guhcampos joined #gluster
16:59 tmirks joined #gluster
17:02 shyam joined #gluster
17:03 shaunm joined #gluster
17:03 ahino joined #gluster
17:09 jeremyh joined #gluster
17:13 jeremyh joined #gluster
17:13 atinm joined #gluster
17:17 Philambdo joined #gluster
17:18 shaunm joined #gluster
17:21 hagarth joined #gluster
17:26 mhulsman joined #gluster
17:35 jeremyh joined #gluster
17:38 unclemarc joined #gluster
17:40 jeremyh joined #gluster
17:42 jiffin joined #gluster
17:43 mhulsman joined #gluster
17:46 tom[] joined #gluster
17:50 Rasathus_ joined #gluster
17:53 rwheeler joined #gluster
17:56 jeremyh joined #gluster
17:57 jiffin1 joined #gluster
17:59 panina joined #gluster
18:06 jeremyh joined #gluster
18:10 cliluw joined #gluster
18:12 cliluw joined #gluster
18:12 jiffin joined #gluster
18:15 kernel joined #gluster
18:17 Guest98789 Does anyone know if gluster has a /usr/doc/share/gluster*/example system documentation? I'm going on exam (EX236) shortly, but worried that I will forget options when configuring shares. System documentation would be very helpful!
18:20 cloph "gluster help" command?
18:23 Guest98789 yup, which is very helpful, but I'm looking for examples, e.g. when configuring samba, which options to set. Without examples I need to remember them.
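For remembering option names without external documentation, the CLI's built-in help goes a fair way; these commands should exist on a stock install, though the output format varies by version:

    gluster help                 # top-level command list
    gluster volume help          # volume subcommands
    gluster volume set help      # every settable option with a short description and its default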
18:29 [diablo] joined #gluster
18:32 JoeJulian Man, I would probably fail that test.
18:33 JoeJulian There's never enough context to answer them so my answer to nearly everything would be, "It depends..."
18:33 alvinstarr When I run a geo-replication should I see rsync running on both sides?
18:34 jiffin joined #gluster
18:36 Guest98789 I'm aiming to pass it :D
18:43 Guest98789 alvinstarr I would only expect to see gsyncd running on the master, but you can check status with gluster volume geo-replication status
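For reference, the status check Guest98789 mentions takes the master volume and the slave endpoint; a sketch with illustrative names (master volume mastervol, slave volume slavevol on host slavehost):

    gluster volume geo-replication mastervol slavehost::slavevol status
    # "status detail" adds per-brick counters such as files synced and files pending
    gluster volume geo-replication mastervol slavehost::slavevol status detail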
18:44 alvinstarr I am seeing an active status but nothing is being transferred after an initial burst of data.
18:45 Guest98789 do you see the brick on the slave being filled with files?
18:46 alvinstarr Guest98789: at first but then things just stop. I have about 6,000 of 300,000 files copied
18:47 alvinstarr Guest98789: It seems that I can replicate smallish volumes but anything of size just never seems to complete or keep running after the first flurry of work.
18:49 alvinstarr What would happen if I rsynced the bricks and then started the replication? Would that work at all or is there some context sensitive data in the bricks?
18:49 Guest98789 what is the crawl status?
18:50 alvinstarr CRAWL STATUS == Hybrid Crawl
18:51 Guest98789 looks ok, but you don't see data coming in on the slave from the master volume?
18:52 tmirks left #gluster
18:52 alvinstarr no data that I can see. Is there a way to log files transferred?
18:55 Guest98789 /var/log/gluster
18:56 alvinstarr I have been looking at the gluster/geo-replication/* files with little luck
18:57 jeremyh joined #gluster
18:57 alvinstarr I am not seeing any error messages other than a couple about the link dropping while running overnight.
18:58 luizcpg joined #gluster
19:06 Guest98789 ls -l /var/log/glusterfs/geo-replication-slaves/, are there no logs on the slave?
19:06 W_v_D joined #gluster
19:07 alvinstarr yes the slave has a log file but just "I" (informational?) messages.
19:08 alvinstarr Sorry. I just found a log file with errors.
19:09 sloop joined #gluster
19:09 Guest98789 what errors?
19:10 alvinstarr on looking at it more, the errors were from this morning when I forcibly killed everything and restarted.
19:10 alvinstarr I am getting Transport endpoint is not connected
19:10 alvinstarr I will try restarting all again with clean logs
19:15 Guest98789 ok good luck with that!
19:17 alvinstarr 0-production-stripe-0: Failed to get stripe-size
19:17 hagarth joined #gluster
19:19 ndevos joined #gluster
19:21 alvinstarr So I am getting an error.
19:21 prasanth joined #gluster
19:30 om joined #gluster
19:37 shyam joined #gluster
19:38 rwheeler joined #gluster
19:48 ndevos joined #gluster
19:48 ndevos joined #gluster
19:55 johnmilton joined #gluster
19:59 JoeJulian Ugh, stripe...
20:03 farhoriz_ joined #gluster
20:05 shyam joined #gluster
20:08 alvinstarr a clean reinstall of gluster on the remote client does not seem to have helped. Getting the stripe error again and things stop copying after a while.
20:17 hagarth joined #gluster
20:24 Rasathus_ joined #gluster
20:25 Pupeno joined #gluster
20:25 Pupeno joined #gluster
20:31 JoeJulian In the future, if you post the entire log line I don't have to waste time finding the error in the source.
20:32 JoeJulian Failure to get the stripe size comes from a missing ,,(extended attribute)
20:32 glusterbot To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}
20:32 JoeJulian sprintf (key, "trusted.%s.stripe-size", this->name);
20:32 JoeJulian I'm guessing this->name is the volume name.
20:33 JoeJulian So, theoretically, if you find a file on a brick without that attribute, that should be the one causing the problem.
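One way to act on that suggestion is to walk the brick and flag regular files whose xattr dump contains no stripe-size attribute at all, which avoids having to guess the exact attribute name. A sketch, run on the brick host and assuming the brick lives at /bricks/brick1 (the path is illustrative):

    # list brick files carrying no stripe-size xattr, skipping gluster's internal .glusterfs tree
    find /bricks/brick1 -path '*/.glusterfs' -prune -o -type f -print | while read -r f; do
        getfattr -m . -d -e hex "$f" 2>/dev/null | grep -q stripe-size \
            || echo "no stripe-size xattr: $f"
    done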
20:34 alvinstarr JoeJulian: sorry for not posting the logfile.
20:35 JoeJulian At least the one full line. It has the source file and line number to look for.
20:37 JoeJulian fyi, stripe hasn't been very commonly used and has, therefore, seen the least amount of testing. It's quite inefficient and, I suspect, is going to be deprecated soon.
20:38 alvinstarr [2016-10-25 20:05:46.072879] E [stripe-helpers.c:358:stripe_ctx_handle] 0-edocs-production-stripe-0: Failed to get stripe-size
20:39 alvinstarr JoeJulian: The odd thing is that this appears on the geo-remote device and not on the main servers.
20:39 JoeJulian Is edocs-production master or slave?
20:40 alvinstarr There is one on the master and one on the slave.
20:40 JoeJulian Well that doesn't tell us much then. :/
20:41 JoeJulian Are they both stripe volumes?
20:41 alvinstarr Sadly. Consistent naming can be a hindrance.
20:41 alvinstarr the master side is striped(2) x replicated(2) the slave is just striped(2)
20:42 alvinstarr I can change the names on the slave and rebuild/reinstall once again.
20:44 JoeJulian If you're willing to do that, you should also change the volume type. Since you were going with a simple striped slave, make it a simple distribute and enable sharding (volume set $volname features.shard on)
20:45 JoeJulian If you have a chance to change the master, same thing. Build a replicated-distributed volume and enable sharding.
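A minimal sketch of the kind of slave volume JoeJulian is suggesting, plain distribute with sharding enabled, using illustrative hostnames, brick paths, and volume name:

    gluster volume create edocs-slave slave1:/bricks/b1 slave2:/bricks/b2
    gluster volume set edocs-slave features.shard on
    gluster volume set edocs-slave features.shard-block-size 64MB   # the usual default; tune to the workload
    gluster volume start edocs-slave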
20:46 squizzi joined #gluster
20:50 alvinstarr I was kind of under the impression (mostly from example configs) that stripe/replicate was the generally accepted method for distributing data.
20:52 JoeJulian @stripe
20:52 glusterbot JoeJulian: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
20:52 alvinstarr I could rebuild the slave but rebuilding the master at this point will be hard.
20:52 JoeJulian Figured
20:52 alvinstarr I am reading it right now
20:58 alvinstarr your blog makes sense.
20:59 alvinstarr I will give making the bricks unstriped a try.
21:08 luizcpg joined #gluster
21:09 alvinstarr sharding looks to be tuned for large files. I have a large number of small files, so it looks like I would be as well off with distributed bricks.
21:13 squizzi joined #gluster
21:17 JoeJulian Sounds likely. I only recommended sharding because you used striping and I assumed you had a reason for that.
21:23 alvinstarr I used striping for no good reason.  Clearly distributed would have been better for this application.
21:23 hagarth joined #gluster
21:23 alvinstarr Strangely, removing and recreating the slave volume as distributed has not helped. I am now getting:
21:24 alvinstarr /var/log/glusterfs/geo-replication-slaves/966a3305-2641-4727-bd2f-bcecd58d02ef:gluster%3A%2F%2F127.0.0.1%3Aedocs-production.gluster.log:[2016-10-25 21:14:33.462474] E [dht-helper.c:1597:dht_inode_ctx_time_update] (-->/usr/lib64/glusterfs/3.7.11/xlator/protocol/client.so(client3_3_lookup_cbk+0x707) [0x7ff98ce56417] -->/usr/lib64/glusterfs/3.7.11/xlator/cluster/distribute.so(dht_lookup_dir_cbk+0x359) [0x7ff98cbe40f9] -->/usr/lib64/glusterfs/3.7.11/xlator/c
21:24 glusterbot alvinstarr: ('s karma is now -164
21:24 alvinstarr guess I should have pastebin'd that.
21:27 JoeJulian Yeah, too long for IRC.
21:32 farhorizon joined #gluster
21:32 wiza joined #gluster
21:55 Rasathus joined #gluster
22:04 jeremyh joined #gluster
22:11 jeremyh joined #gluster
22:15 Pupeno joined #gluster
22:37 caitnop joined #gluster
22:51 farhoriz_ joined #gluster
23:13 hackman joined #gluster
23:14 cholcombe joined #gluster
23:17 partner joined #gluster
23:17 Pupeno joined #gluster
23:20 jeremyh joined #gluster
23:23 plarsen joined #gluster
23:26 Rasathus_ joined #gluster
23:30 Rasathus joined #gluster
23:32 Rasathus_ joined #gluster
23:37 Rasathus joined #gluster
23:40 Rasathus_ joined #gluster
23:41 Rasathus_ joined #gluster
23:48 jeremyh joined #gluster
23:50 om2 joined #gluster
23:52 om joined #gluster
23:57 om joined #gluster
23:57 om joined #gluster
23:58 om joined #gluster
