IRC log for #gluster, 2016-07-12

All times shown according to UTC.

Time Nick Message
00:16 level7 joined #gluster
00:24 rafaels joined #gluster
00:28 cloph_away joined #gluster
01:01 shdeng joined #gluster
01:04 Javezim With Server Quorum, does it help if the bandwidth is not enough to keep two bricks in sync? I.e. will it wait for one if it sees it as offline, until it can begin replicating again, so as to avoid split brain?
01:12 JoeJulian Javezim: replication is synchronous. If the bandwidth is so bad that the client loses connection to one server, but the servers can maintain a ping connection with each other, enough that they don't lose quorum, server quorum will do nothing.
01:13 Javezim @JoeJulian Bummer, Thanks. Trying to reduce the amount of split brains we are currently getting
01:13 JoeJulian Are your servers also clients?
01:14 Javezim @JoeJulian - No, Servers are just servers and we have a few other Ubuntu machines that act as clients around the place
01:15 JoeJulian It may sound counter-intuitive, but have you tried disabling client-side self-heal?
01:16 Javezim @JoeJulian Our backups have metadata files that are constantly read/written to, and we find it is these that constantly get split brain. I am guessing that because of the constant read/write a brick will fall off for a short period of time and then won't recover the file; it just keeps getting written to one of the bricks only and not replicated
01:17 JoeJulian I understand, but have you tried disabling client-side self-heal?
01:17 Javezim @JoeJulian, No I can't say I have
01:18 JoeJulian I suspect that might either cure or at least mitigate the problem.
01:18 Javezim @JoeJulian, Okay I'll take a look into it :) Out of curiosity why would they cause issues?
01:19 JoeJulian That way the servers will handling the self-heal locks and less traffic will go between the clients and the servers. You could also do volume-quorum.
01:20 JoeJulian s/will handling/will be handling/
01:20 glusterbot What JoeJulian meant to say was: That way the servers will be handling the self-heal locks and less traffic will go between the clients and the servers. You could also do volume-quorum.
01:21 Javezim Do we need to reconnect a client after making changes to the volume? ie. Disabling Client Side Self Heal?
01:25 JoeJulian no
01:31 Javezim Ok so how does one disable Client side self heal?
01:32 JoeJulian cluster.data-self-heal off (same for metadata and entry)
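A minimal sketch of those client-side self-heal settings as volume options (the volume name "myvol" is hypothetical):
    gluster volume set myvol cluster.data-self-heal off
    gluster volume set myvol cluster.metadata-self-heal off
    gluster volume set myvol cluster.entry-self-heal off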
01:40 Javezim @JoeJulian Thanks Joe :)
01:41 level7 joined #gluster
01:42 JoeJulian You're welcome.
01:46 Javezim @JoeJulian I'll also look into that Volume-Quorum after making these changes
01:46 Javezim @JoeJulian, Just to confirm, those will only stop Clients self healing right, not the server cluster?
01:46 JoeJulian Correct
01:46 JoeJulian There's a different setting for that.
01:46 Javezim @JoeJulian, Different setting to disable Server Cluster Self Heals?
01:46 JoeJulian Right
01:46 JoeJulian Not that I would ever recommend doing so.
01:46 JoeJulian But it is nice to know it's there just in case.
01:46 Javezim @JoeJulian, Haha yeah, I was about to try those commands and thought, better make sure Servers will keep self healing :P Otherwise that could be catastrophic
01:47 JoeJulian I don't like catastrophes.
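As an aside on the two quorum mechanisms discussed above (server quorum and volume quorum), a hedged sketch of the usual volume options; the volume name and values are illustrative only:
    # server-side (glusterd) quorum
    gluster volume set myvol cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 51%
    # client-side volume quorum for replica sets
    gluster volume set myvol cluster.quorum-type auto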
01:52 B21956 joined #gluster
01:59 Javezim Just reading up on 3.8, The Cluster.favorite-child-policy seems cool :D
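For context, cluster.favorite-child-policy lets the self-heal daemon resolve split-brain files automatically according to a chosen policy (size, ctime, mtime or majority). A hedged sketch, with a hypothetical volume name:
    gluster volume set myvol cluster.favorite-child-policy mtime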
02:38 poornimag joined #gluster
02:47 magrawal joined #gluster
03:00 aravindavk joined #gluster
03:42 ppai joined #gluster
03:57 sanoj joined #gluster
03:58 itisravi joined #gluster
04:01 nishanth joined #gluster
04:04 nbalacha joined #gluster
04:07 hchiramm__ joined #gluster
04:10 javi404 joined #gluster
04:12 Saravanakmr joined #gluster
04:20 auzty joined #gluster
04:24 itisravi joined #gluster
04:25 karnan joined #gluster
04:27 itisravi derjohn_mob: so did you manage to figure out the permission issue?
04:28 kdhananjay joined #gluster
04:33 Manikandan joined #gluster
04:35 Telsin joined #gluster
04:39 kshlm joined #gluster
04:40 Telsin joined #gluster
04:40 shubhendu joined #gluster
04:43 rafi joined #gluster
04:50 hchiramm__ joined #gluster
04:50 RameshN joined #gluster
04:55 Manikandan joined #gluster
04:59 Gnomethrower joined #gluster
05:02 jiffin joined #gluster
05:03 sakshi joined #gluster
05:04 prasanth joined #gluster
05:05 ndarshan joined #gluster
05:06 aravindavk joined #gluster
05:07 atinm joined #gluster
05:07 ramky joined #gluster
05:15 karthik_ joined #gluster
05:24 Bhaskarakiran joined #gluster
05:27 MikeLupe joined #gluster
05:32 nehar joined #gluster
05:35 gnulnx Can anyone tell me what this message means?  It is all over my log files - looks like for each new directory, a new log entry: https://gist.github.com/kylejohnson/4f3c4af2dfabdca4c47a0d4a5223eab9
05:35 glusterbot Title: gist:4f3c4af2dfabdca4c47a0d4a5223eab9 · GitHub (at gist.github.com)
05:35 gnulnx This is a distributed volume, not sure why selfheal is being mentioned.
05:40 satya4ever joined #gluster
05:41 Muthu joined #gluster
05:42 poornimag joined #gluster
05:42 jiffin gnulnx: I hope this will help: http://review.gluster.org/8173 - read the comment given before the for loop, http://review.gluster.org/#/c/8173/6/xlators/cluster/dht/src/dht-common.c
05:42 glusterbot Title: Gerrit Code Review (at review.gluster.org)
05:48 atalur joined #gluster
05:48 gnulnx jiffin: Ah, thank you. So the message itself does not indicate an issue; it would only help to debug future issues.
05:49 ashiq joined #gluster
05:49 atalur derjohn_mob, hello.. the volume issues that you were facing yesterday, are they resolved?
05:51 hchiramm_ joined #gluster
05:54 aspandey joined #gluster
05:56 gnulnx I guess it is the Err: -1 that concerned me.
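If these INFO-level layout messages are too noisy, log verbosity can be reduced per volume; a hedged sketch (volume name hypothetical, and note this also hides other INFO messages):
    gluster volume set myvol diagnostics.client-log-level WARNING
    gluster volume set myvol diagnostics.brick-log-level WARNING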
06:01 itisravi atalur: looks like he's  AFK, I was asking him the same thing in the morning.
06:01 atalur itisravi, I see. I think like Nithya mentioned it is a split-brain. Likely in a replica 2 setup.
06:02 itisravi atalur: maybe..
06:10 skoduri joined #gluster
06:14 nbalacha itisravi, atalur, am not sure what issue it was - just thought AFR team should check as it is a 1x2 vol
06:15 itisravi nbalacha: hmm let's see if he responds..
06:16 rafi1 joined #gluster
06:16 satya4ever joined #gluster
06:16 kshlm joined #gluster
06:19 jtux joined #gluster
06:22 karnan joined #gluster
06:25 kramdoss_ joined #gluster
06:27 msvbhat joined #gluster
06:28 Klas why is sqlite needed while building glusterfs?
06:28 devyani7_ joined #gluster
06:32 hackman joined #gluster
06:35 anoopcs Klas, For tiering support.
06:36 msvbhat joined #gluster
06:38 anoopcs You can pass --disable-tiering while configuring to get rid of this dependency.
06:43 azilian joined #gluster
06:43 Klas yup, it tells me that =)
06:48 level7_ joined #gluster
06:50 atalur joined #gluster
06:55 rafi joined #gluster
06:57 Klas anyone know which sqlite package in ubuntu is needed? trying to find it but can't so far
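For reference, a hedged sketch of the two routes mentioned above; the package name is an assumption for Ubuntu/Debian:
    # assumed Ubuntu/Debian package providing the sqlite3 headers
    sudo apt-get install libsqlite3-dev
    # or build without the dependency (and without tiering support)
    ./configure --disable-tiering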
06:59 Saravanakmr joined #gluster
07:01 rafi1 joined #gluster
07:04 jri joined #gluster
07:04 rafi joined #gluster
07:10 [Enrico] joined #gluster
07:16 arcolife joined #gluster
07:17 atinm joined #gluster
07:17 satya4ever joined #gluster
07:18 archit_ joined #gluster
07:19 deniszh joined #gluster
07:26 Muthu joined #gluster
07:27 hgowtham joined #gluster
07:27 jwd joined #gluster
07:29 kdhananjay joined #gluster
07:31 rafi1 joined #gluster
07:35 rafi joined #gluster
07:36 Klas mmm, after building it, it segfaults =P
07:49 poornimag joined #gluster
07:50 atalur_ joined #gluster
08:01 hersim joined #gluster
08:03 fsimonce joined #gluster
08:07 ivan_rossi joined #gluster
08:07 karthik_ joined #gluster
08:10 level7 joined #gluster
08:11 derjohn_mob joined #gluster
08:18 Slashman joined #gluster
08:22 hersim Hi All, I'm running a two-node master and two-node slave glusterfs geo-replication setup using glusterfs version 3.7.11. When I start geo-replication, the CPU spikes to around 80% on all 4 cores.
08:23 hersim Can someone please advise how to reduce the CPU usage when using geo-replication?
08:24 hersim When I stop geo-replication, CPU usage goes back to around 2-5% usage.
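The question goes unanswered below; a hedged starting point for inspecting and tuning a geo-replication session (volume and host names are hypothetical, and the config option names are assumptions that may differ between releases):
    gluster volume geo-replication mastervol slavehost::slavevol status detail
    gluster volume geo-replication mastervol slavehost::slavevol config
    # assumed knobs: fewer parallel sync jobs, tar+ssh instead of rsync
    gluster volume geo-replication mastervol slavehost::slavevol config sync_jobs 1
    gluster volume geo-replication mastervol slavehost::slavevol config use_tarssh true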
08:26 PaulCuzner joined #gluster
08:29 om joined #gluster
08:30 devyani7_ joined #gluster
08:32 archit_ joined #gluster
08:33 ashiq joined #gluster
08:33 kdhananjay joined #gluster
08:33 shruti joined #gluster
08:33 sakshi joined #gluster
08:33 aspandey joined #gluster
08:34 itisravi joined #gluster
08:34 prasanth joined #gluster
08:36 anil joined #gluster
08:36 jwd joined #gluster
08:36 om joined #gluster
08:39 nishanth joined #gluster
08:39 sbulage joined #gluster
08:39 lalatenduM joined #gluster
08:39 hgowtham joined #gluster
08:39 jiffin joined #gluster
08:39 sanoj joined #gluster
08:39 shubhendu joined #gluster
08:39 sac joined #gluster
08:39 kramdoss_ joined #gluster
08:41 satya4ever_ joined #gluster
08:42 devyani7_ joined #gluster
08:42 rafi joined #gluster
08:43 Manikandan joined #gluster
08:43 satya4ever_ joined #gluster
08:44 level7 joined #gluster
08:47 shruti joined #gluster
08:48 ashiq joined #gluster
08:51 atalur__ joined #gluster
08:54 rafi1 joined #gluster
08:54 jiffin joined #gluster
08:55 rafi joined #gluster
08:55 karthik_ joined #gluster
08:58 skoduri joined #gluster
08:58 kdhananjay joined #gluster
08:58 anil joined #gluster
08:59 ramky joined #gluster
09:00 itisravi joined #gluster
09:01 titansmc left #gluster
09:02 satya4ever joined #gluster
09:03 kramdoss_ joined #gluster
09:03 nehar joined #gluster
09:03 Bhaskarakiran joined #gluster
09:04 kdhananjay1 joined #gluster
09:08 kdhananjay joined #gluster
09:16 Muthu joined #gluster
09:21 hchiramm joined #gluster
09:27 atinm joined #gluster
09:29 armyriad joined #gluster
09:38 ira joined #gluster
09:46 satya4ever joined #gluster
09:49 ashiq joined #gluster
09:52 atalur_ joined #gluster
10:06 javi404 joined #gluster
10:11 nbalacha joined #gluster
10:22 hchiramm_ joined #gluster
10:25 devyani7_ joined #gluster
10:31 msvbhat joined #gluster
10:31 kramdoss_ joined #gluster
10:33 nbalacha joined #gluster
10:38 kshlm joined #gluster
10:41 bfoster joined #gluster
10:45 devyani7_ joined #gluster
10:47 Gnomethrower joined #gluster
10:50 arif-ali joined #gluster
10:53 hackman joined #gluster
10:54 abyss^ should the nfs protocol work with different gluster server and client versions? Of course with the native client it won't work, but nfs is nfs, so if I use gluster nfs 3 on the server and the same on the client, everything should work fine?
10:57 Bhaskarakiran joined #gluster
10:58 kaushal_ joined #gluster
11:00 pur joined #gluster
11:01 rastar joined #gluster
11:02 johnmilton joined #gluster
11:05 jesk what do you mean by "native won't work"? just curious...
11:05 julim joined #gluster
11:05 jiffin abyss^: in case of nfs
11:05 jiffin nfs client --> gluster nfs server/nfs-ganesha --> gluster volume (server)
11:06 jiffin here gluster nfs server/nfs-ganesha is another gluster client instance which can talk nfs client
11:07 jiffin *with nfs client
11:09 Manikandan joined #gluster
11:23 hchiramm__ joined #gluster
11:24 abyss^ jesk: when I have gluster client 3.3.1 and gluster server 3.7.2 then there will be a problem :)
11:25 abyss^ jiffin: ok, but when I have for example a gluster 3.3.1 client and a 3.7.7 gluster server, will nfs work or not? It should, because it's the nfs protocol, not glusterfs, yes?
11:27 post-factum nfs client does not care about glusterfs client, obviously
11:28 jiffin abyss^: yes, it will work; what I intended to say was that your gluster nfs server and gluster volume will be the same version
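A hedged sketch of the kind of NFSv3 mount being discussed, independent of the glusterfs client version (server, volume and mountpoint names are hypothetical):
    mount -t nfs -o vers=3,proto=tcp,mountproto=tcp server1:/myvol /mnt/myvol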
11:34 atalur joined #gluster
11:40 ashiq joined #gluster
11:41 derjohn_mob joined #gluster
11:41 ju5t joined #gluster
11:42 B21956 joined #gluster
11:43 ju5t hello, if i remember correctly gluster has been aware of individual bricks' disk usage since a particular version, but i can't remember which version that is, does anyone here know?
11:47 jri joined #gluster
11:50 Manikandan joined #gluster
11:52 level7 joined #gluster
11:54 snila hi, how can i add the features/filters fixed-uid and fixed-gid to an existing gluster volume?
11:55 jesk abyss^: ah I see
11:55 jesk question from gluster noob
11:55 jesk just set it up and now I'm copying my home directory onto it
11:56 jesk it's quite slow and the logfile is filling up all the time with stuff like this:
11:56 jesk [2016-07-12 11:55:07.870247] I [MSGID: 109036] [dht-common.c:8824:dht_log_new_layout_for_dir_selfheal] 0-gv1-dht: Setting layout of /home/jesk/Downloads/opennms-18.0.0-1/features/topology-map/org.opennms.features.topology.app/src/main/java/org/opennms/features/topology/app/internal/operations/icons with [Subvol_name: gv1-replicate-0, Err: -1 , Start: 0 , Stop: 4294967295 , Hash: 1 ],
11:56 jesk is that normal?
11:58 rwheeler joined #gluster
11:59 post-factum jesk: yes
12:00 jesk does it do that for every directory?
12:00 post-factum correct
12:00 robb_nl joined #gluster
12:01 jesk it's also funny that the disk space is of course decreasing but not linearly; "df" sometimes shows for a short moment that the space increased :-)
12:01 jesk now its super fast, big files...
12:04 chirino_m joined #gluster
12:06 guhcampos joined #gluster
12:08 Manikandan joined #gluster
12:09 hackman joined #gluster
12:10 abyss^ jiffin: thank you
12:12 Seth_Karlo joined #gluster
12:19 jesk I think I'm running into a bug
12:20 jesk I mounted two different volumes via gfs on a gfs server itself
12:22 kkeithley why do you think that's a bug?
12:23 jesk when I write into volume1, the gluster1 directory is not filling up but the gluster2 directory is
12:23 hchiramm_ joined #gluster
12:23 jesk "df" shows also that both mount points are filling
12:23 jesk thats odd
12:24 kkeithley indeed.
12:27 Gnomethrower joined #gluster
12:45 rafaels joined #gluster
12:54 MrAbaddon joined #gluster
12:57 derjohn_mob joined #gluster
12:58 jri joined #gluster
12:59 skoduri joined #gluster
12:59 ahino joined #gluster
13:05 itisravi joined #gluster
13:09 poornimag joined #gluster
13:21 julim joined #gluster
13:22 ahino joined #gluster
13:24 hchiramm__ joined #gluster
13:25 squizzi joined #gluster
13:27 plarsen joined #gluster
13:30 jri joined #gluster
13:33 ic0n joined #gluster
13:36 robb_nl joined #gluster
13:36 shaunm joined #gluster
13:37 bluenemo joined #gluster
13:39 nbalacha joined #gluster
13:40 chrisg joined #gluster
13:48 rwheeler joined #gluster
13:49 skoduri joined #gluster
13:51 dnunez joined #gluster
13:53 kpease joined #gluster
13:58 derjohn_mob joined #gluster
13:58 baojg joined #gluster
14:05 itisravi joined #gluster
14:09 ahino joined #gluster
14:15 ron-slc joined #gluster
14:15 TheBall joined #gluster
14:19 plarsen joined #gluster
14:19 nehar joined #gluster
14:22 kramdoss_ joined #gluster
14:24 hackman joined #gluster
14:24 hchiramm_ joined #gluster
14:28 ahino joined #gluster
14:31 julim joined #gluster
14:37 bowhunter joined #gluster
14:41 chirino_ joined #gluster
14:41 kramdoss_ joined #gluster
14:43 JoeJulian jesk: When your bricks share a mount, changes to one volume are going to affect the free space on the other. If that's not your desired behavior, partition your block device and separate your bricks.
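A minimal sketch of the separate-filesystem-per-brick layout JoeJulian is suggesting (device, path and volume names are hypothetical):
    # one filesystem per brick, so each volume reports its own free space
    mkfs.xfs /dev/vg0/brick1
    mkfs.xfs /dev/vg0/brick2
    mount /dev/vg0/brick1 /data/brick1
    mount /dev/vg0/brick2 /data/brick2
    gluster volume create vol1 server1:/data/brick1/brick
    gluster volume create vol2 server1:/data/brick2/brick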
14:52 shubhendu joined #gluster
15:03 skylar joined #gluster
15:05 baojg joined #gluster
15:07 wushudoin joined #gluster
15:21 Dave_____ joined #gluster
15:24 plarsen joined #gluster
15:31 hagarth joined #gluster
15:31 kpease joined #gluster
15:33 chrisg joined #gluster
15:33 chrisg joined #gluster
15:33 kpease joined #gluster
15:35 kpease joined #gluster
15:49 aravindavk joined #gluster
16:06 hackman joined #gluster
16:07 nickage joined #gluster
16:10 Bhaskarakiran joined #gluster
16:30 hagarth joined #gluster
16:50 ivan_rossi left #gluster
17:05 jiffin joined #gluster
17:09 guhcampos joined #gluster
17:13 guhcampos joined #gluster
17:18 JoeJulian file a bug
17:18 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
17:21 morgbin joined #gluster
17:24 rafi joined #gluster
17:40 alvinstarr joined #gluster
17:48 skoduri joined #gluster
17:51 jri joined #gluster
17:55 rafi joined #gluster
17:56 bowhunter joined #gluster
18:02 a2 joined #gluster
18:13 jiffin joined #gluster
18:14 robb_nl joined #gluster
18:24 takarider joined #gluster
18:39 robb_nl joined #gluster
18:45 nickage joined #gluster
18:46 jaake joined #gluster
18:46 jaake hello
18:46 glusterbot jaake: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
18:47 hagarth joined #gluster
19:07 ahino joined #gluster
19:08 jaake left #gluster
19:10 hchiramm joined #gluster
19:12 bowhunter joined #gluster
19:26 ben453 joined #gluster
19:40 blu_ joined #gluster
19:52 johnmilton joined #gluster
19:59 julim joined #gluster
20:05 PaulCuzner joined #gluster
20:10 johnmilton joined #gluster
20:11 hchiramm_ joined #gluster
20:28 bowhunter joined #gluster
20:48 tdasilva joined #gluster
21:02 deniszh joined #gluster
21:11 hchiramm__ joined #gluster
21:20 gabe_ joined #gluster
21:22 ghollies joined #gluster
21:24 ghollies Hey, I was wondering if anyone knew of an easy way to get the volume id on a box that is mounting the volume.
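The question goes unanswered in the log. For reference, a hedged sketch of the usual server-side ways to read a volume's id (names and paths hypothetical); whether it is exposed directly on the mounting client is not settled here:
    # "Volume ID" line, run on any server in the trusted pool
    gluster volume info myvol
    # the id is also stored as an xattr on each brick root
    getfattr -n trusted.glusterfs.volume-id -e hex /data/brick1/brick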
22:11 Gnomethrower joined #gluster
22:12 hchiramm_ joined #gluster
22:16 hackman joined #gluster
22:17 guhcampos joined #gluster
22:41 Gnomethrower joined #gluster
23:12 hchiramm__ joined #gluster
