
IRC log for #gluster, 2016-10-19


All times shown according to UTC.

Time Nick Message
00:01 Alghost_ joined #gluster
00:09 nathwill joined #gluster
00:40 nathwill joined #gluster
00:41 Alghost joined #gluster
00:42 Alghost_ joined #gluster
00:42 nathwill joined #gluster
00:55 vodik joined #gluster
00:57 vodik i have three hosts that i have set up identically (centos 7, up-to-date, same version of gluster). two of the boxes perform well, but for some reason, the third is super slow
00:57 vodik creating volumes takes on the order of 5 minutes
00:57 vodik and i don't know where to begin debugging it
00:58 vodik i don't see anything weird in the logs, but maybe i'm missing something important
01:04 shdeng joined #gluster
01:06 Alghost joined #gluster
01:11 Alghost joined #gluster
01:11 vodik ...
01:11 vodik wrong nameserver
01:11 vodik failing over to ipv6
01:11 vodik my bad
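For anyone hitting the same symptom, a quick way to rule out name resolution before digging into gluster itself (a sketch; the peer hostname is a placeholder):
    cat /etc/resolv.conf           # confirm the expected nameservers are listed
    getent ahosts gluster-peer1    # show what the resolver returns, IPv4 and IPv6
    time gluster peer status       # a multi-second response here usually points at DNS, not gluster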
01:18 hagarth joined #gluster
01:21 sysanthrope joined #gluster
01:42 Javezim Has anyone tried setting gluster's op-version for 3.7.16 and had it fail with - /etc/ctdb# gluster volume set all cluster.op-version 30716
01:46 Javezim volume set: failed: Required op_version (30716) is not supported
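For context on that error: cluster.op-version can only be raised to a value that every peer's installed glusterd supports, so 30716 is rejected until all peers run 3.7.16 or later. A hedged way to check each node (paths are the defaults):
    gluster --version                                        # confirm the installed release on every peer
    grep operating-version /var/lib/glusterd/glusterd.info   # op-version this glusterd is currently running at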
01:46 hagarth joined #gluster
01:46 derjohn_mobi joined #gluster
01:48 Acinonyx joined #gluster
01:51 harish_ joined #gluster
01:59 cliluw joined #gluster
02:03 Acinonyx joined #gluster
02:04 Wizek__ joined #gluster
02:05 nathwill joined #gluster
02:05 Gambit15 joined #gluster
02:08 luizcpg joined #gluster
02:10 ashiq joined #gluster
02:14 Acinonyx joined #gluster
02:21 nathwill joined #gluster
02:31 Acinonyx joined #gluster
02:41 Lee1092 joined #gluster
02:56 magrawal joined #gluster
03:07 mmckeen joined #gluster
03:20 jiffin joined #gluster
03:35 kramdoss_ joined #gluster
03:41 RameshN joined #gluster
03:43 sanoj joined #gluster
03:57 hchiramm joined #gluster
03:57 Gnomethrower joined #gluster
04:00 Gnomethrower joined #gluster
04:00 magrawal joined #gluster
04:01 atinm joined #gluster
04:19 kramdoss_ joined #gluster
04:20 buvanesh_kumar joined #gluster
04:34 ppai joined #gluster
04:35 riyas joined #gluster
04:40 shubhendu joined #gluster
05:01 ankitraj joined #gluster
05:03 kramdoss_ joined #gluster
05:05 prasanth joined #gluster
05:06 msvbhat joined #gluster
05:08 Javezim Anyone have an issue with Gluster 3.7 where one core is maxed at 100%? - /usr/sbin/glusterfs --log-level=WARNING --log-file=/var/log/gluster.log --direct-io-mode=disable --volfile-server=cb-syd-01 --volfile-id=/gv0syd /cluster01syd
05:09 Javezim It's this process here; if we kill it, the mount drops and we 'mount -a' it. Problem is, within 5 minutes it's maxing a core again
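One way to see what a spinning fuse client is doing before killing it is a statedump; glusterfs processes write one when sent SIGUSR1 (a sketch; the pid is a placeholder, and the dump goes to the statedump directory, /var/run/gluster by default):
    pgrep -f 'volfile-id=/gv0syd'   # find the client pid for that mount
    kill -USR1 <pid>                # <pid> is a placeholder; asks the client to write a statedump
    ls /var/run/gluster/            # look for the new glusterdump.<pid>.dump.* file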
05:11 raginbaj- joined #gluster
05:12 gem joined #gluster
05:14 ndarshan joined #gluster
05:15 Gnomethrower joined #gluster
05:17 karthik_us joined #gluster
05:19 itisravi joined #gluster
05:20 jiffin joined #gluster
05:21 aravindavk joined #gluster
05:27 msvbhat joined #gluster
05:30 mhulsman joined #gluster
05:32 [diablo] joined #gluster
05:35 hgowtham joined #gluster
05:46 rafi joined #gluster
05:49 Bhaskarakiran joined #gluster
05:51 sanoj joined #gluster
06:01 karnan joined #gluster
06:07 msvbhat joined #gluster
06:08 Saravanakmr joined #gluster
06:09 jtux joined #gluster
06:11 a2batic joined #gluster
06:21 sage_ joined #gluster
06:37 Sebbo3 joined #gluster
06:37 kdhananjay joined #gluster
06:44 devyani7 joined #gluster
06:44 sage_ joined #gluster
06:44 Muthu|afk joined #gluster
06:47 devyani7__ joined #gluster
06:50 nishanth joined #gluster
07:00 sage_ joined #gluster
07:01 jiffin joined #gluster
07:02 apandey joined #gluster
07:05 inodb joined #gluster
07:07 Sebbo2 joined #gluster
07:10 rafi joined #gluster
07:11 wyklq joined #gluster
07:14 kdhananjay1 joined #gluster
07:14 anrao joined #gluster
07:16 msvbhat joined #gluster
07:25 rafi2 joined #gluster
07:37 fsimonce joined #gluster
07:37 shdeng joined #gluster
07:40 ankitraj joined #gluster
07:40 mhulsman joined #gluster
07:47 devyani7__ joined #gluster
07:50 derjohn_mobi joined #gluster
07:51 gem joined #gluster
07:53 side_control joined #gluster
07:57 kxseven joined #gluster
08:02 jtux joined #gluster
08:16 derjohn_mobi joined #gluster
08:16 gem joined #gluster
08:17 shdeng joined #gluster
08:17 shdeng joined #gluster
08:21 rouven joined #gluster
08:25 jiffin joined #gluster
08:33 devyani7__ joined #gluster
08:36 derjohn_mobi joined #gluster
08:54 karthik_us joined #gluster
08:56 derjohn_mobi joined #gluster
09:00 social joined #gluster
09:05 jiffin1 joined #gluster
09:09 rouven joined #gluster
09:11 Muthu|afk joined #gluster
09:13 rouven_ joined #gluster
09:18 bluenemo joined #gluster
09:18 derjohn_mobi joined #gluster
09:29 david____ joined #gluster
09:33 anrao joined #gluster
09:33 jkroon joined #gluster
09:34 shdeng joined #gluster
09:36 david____ hi all, i have a question regarding brick size in a Distributed-Replicate volume. We have 4 x 2 bricks. All brick pairs are the same size: 10TB. If we upgrade one pair to 30TB, will it be recognised and used by GlusterFS?
09:36 david____ so we will have 3 pairs: 10TB, 1 pair: 30TB
09:36 david____ Total volume: 60TB
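A hedged note on the question above: the larger bricks are generally picked up once the underlying filesystems are grown, but DHT places files by hash rather than by free space, so uneven pairs usually call for a rebalance afterwards (a sketch; "myvol" is a placeholder volume name):
    gluster volume rebalance myvol start    # redistribute existing files across the bricks
    gluster volume rebalance myvol status   # watch progress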
09:36 derjohn_mobi joined #gluster
09:44 sage_ joined #gluster
09:45 nishanth joined #gluster
09:46 rafi joined #gluster
09:48 luizcpg joined #gluster
09:50 jiffin1 joined #gluster
09:56 derjohn_mobi joined #gluster
10:04 fsimonce joined #gluster
10:18 derjohn_mobi joined #gluster
10:36 derjohn_mobi joined #gluster
10:38 raghug joined #gluster
10:49 karnan joined #gluster
10:53 shyam joined #gluster
10:55 ankitraj joined #gluster
10:56 derjohn_mobi joined #gluster
10:57 luizcpg joined #gluster
10:57 luizcpg left #gluster
10:58 msvbhat joined #gluster
11:02 karnan joined #gluster
11:15 arc0 joined #gluster
11:16 derjohn_mobi joined #gluster
11:24 anrao joined #gluster
11:25 skoduri joined #gluster
11:37 derjohn_mobi joined #gluster
11:46 B21956 joined #gluster
11:52 B21956 joined #gluster
11:56 kkeithley Gluster Community Meeting in five minutes in #gluster-meeting
11:56 derjohn_mobi joined #gluster
11:58 Saravanakmr joined #gluster
12:00 k4n0 joined #gluster
12:07 jdarcy joined #gluster
12:12 harish_ joined #gluster
12:12 Slashman joined #gluster
12:16 derjohn_mobi joined #gluster
12:17 skoduri joined #gluster
12:17 skoduri joined #gluster
12:31 luizcpg joined #gluster
12:36 uebera|| joined #gluster
12:36 uebera|| joined #gluster
12:36 derjohn_mobi joined #gluster
12:38 shyam joined #gluster
12:38 johnmilton joined #gluster
12:38 luizcpg joined #gluster
12:45 msvbhat joined #gluster
12:46 ankitraj joined #gluster
12:54 unclemarc joined #gluster
12:56 derjohn_mobi joined #gluster
13:18 derjohn_mobi joined #gluster
13:18 wadeholler joined #gluster
13:18 wadeholler left #gluster
13:28 hagarth joined #gluster
13:28 luizcpg joined #gluster
13:29 arpu joined #gluster
13:30 Schenker joined #gluster
13:30 kramdoss_ joined #gluster
13:32 skylar joined #gluster
13:34 msvbhat joined #gluster
13:36 derjohn_mobi joined #gluster
13:40 ashiq joined #gluster
13:42 riyas joined #gluster
13:44 snila joined #gluster
13:46 plarsen joined #gluster
13:48 nisroc joined #gluster
13:49 rastar joined #gluster
13:49 raghu` joined #gluster
13:53 Schenker hi guys!
13:53 Schenker I wanted to ask something about the glusterfs/fuse/mmap support
13:53 Schenker does anyone know if mmapping is supported under glusterfs?
13:53 squizzi joined #gluster
13:53 Schenker I'm trying to use the tokyocabinet library, which uses mmap calls, and I get lots of mmap errors ...
13:55 MadPsy is mmap not for memory rather than filesystem :|
13:56 MadPsy just ignore me, nevermind
13:56 derjohn_mobi joined #gluster
13:57 MadPsy I would assume it's down to the FUSE support in the kernel rather than glusterfs itself
14:01 nishanth joined #gluster
14:03 Schenker yes, makes sense, will check
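Related to the mmap question: on FUSE mounts, mmap generally needs the kernel page cache, so a client mounted with direct-io-mode=enable tends to break it. Keeping direct I/O disabled on the mount is the usual first check (a sketch; the server and volume names are placeholders):
    mount -t glusterfs -o direct-io-mode=disable server1:/myvol /mnt/myvol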
14:09 wadeholler joined #gluster
14:11 arpu hello i use find /dist/chunksglu -mtime +1 -name *.chk -type f -delete
14:12 arpu after find and delete is finished the memory from glusterfs mount is not  released
14:12 arpu glusterfs 3.8.5
14:13 rwheeler joined #gluster
14:18 derjohn_mobi joined #gluster
14:20 arpu root     15135 13.5 10.1 1235840 820248 ?      Ssl  15:03  10:17 /usr/sbin/glusterfs --volfile-server=gluster0 --volfile-id=volume1 /dist/chunksglu
14:36 derjohn_mobi joined #gluster
14:39 aravindavk joined #gluster
14:40 blu__ joined #gluster
14:41 arpu is there a stable release with this change now? http://review.gluster.org/#/c/15593/
14:41 glusterbot Title: Gerrit Code Review (at review.gluster.org)
14:48 farhorizon joined #gluster
14:49 cholcombe joined #gluster
14:57 derjohn_mobi joined #gluster
15:08 f0rpaxe joined #gluster
15:16 raghug joined #gluster
15:16 derjohn_mobi joined #gluster
15:17 shyam joined #gluster
15:22 ira joined #gluster
15:33 farhorizon joined #gluster
15:37 derjohn_mobi joined #gluster
15:46 arpu hmm maybe i should not use the find -delete on a glusterfs mount?
15:46 morse joined #gluster
15:47 arpu is there any best way to delete old files on the cluster?
15:47 post-factum arpu, memory issue is known
15:47 kshlm joined #gluster
15:51 arpu ok
15:51 arpu now i'm testing with performance.readdir-ahead: off
15:51 arpu any other known workaround?
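For reference, that workaround is an ordinary volume set (a sketch, reusing the volume1 name from the mount above; gluster volume get needs 3.8+):
    gluster volume set volume1 performance.readdir-ahead off
    gluster volume get volume1 performance.readdir-ahead    # confirm the new value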
15:56 derjohn_mobi joined #gluster
15:58 post-factum does that help?
15:59 arpu find is running now,...
16:00 nathwill joined #gluster
16:01 arpu takes long time running 30min
16:02 JoeJulian arpu: Not a "fix" but a potential workaround... Create a temporary mount that you use for your find which you umount when you're done.
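JoeJulian's workaround in shell form (a sketch; the mount point is a placeholder, the server and volume names are taken from the client process line above):
    mkdir -p /mnt/gluster-scratch
    mount -t glusterfs gluster0:/volume1 /mnt/gluster-scratch
    find /mnt/gluster-scratch -mtime +1 -name '*.chk' -type f -delete
    umount /mnt/gluster-scratch    # dropping the temporary client releases whatever memory it grew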
16:02 raghu joined #gluster
16:14 jiffin joined #gluster
16:14 hackman joined #gluster
16:16 a2batic joined #gluster
16:16 derjohn_mobi joined #gluster
16:26 shyam joined #gluster
16:27 ivan_rossi left #gluster
16:32 arpu hm find running now 1h :/
16:33 arpu JoeJulian: oh good idea, i will try this next
16:37 Lee1092 joined #gluster
16:39 derjohn_mobi joined #gluster
16:43 hagarth joined #gluster
17:06 kpease joined #gluster
17:10 nathwill joined #gluster
17:19 jiffin joined #gluster
17:26 glustin joined #gluster
17:29 elastix joined #gluster
17:43 arpu JoeJulian, yes this works fine, but i need that memory freed
17:50 derjohn_mobi joined #gluster
17:59 ahino joined #gluster
18:00 Philambdo joined #gluster
18:14 ahino joined #gluster
18:26 mhulsman joined #gluster
18:38 prth joined #gluster
18:44 johnnyNumber5 joined #gluster
18:44 * johnnyNumber5 feels stupid. i have 3 app servers and 3 glusterfs servers
18:45 johnnyNumber5 when i try mounting the glusterfs volume on one of my glusterfs servers things work and its read write
18:45 johnnyNumber5 and i see everything replicate.
18:45 johnnyNumber5 when i run mount -t glusterfs gluster1:/appfs /root/testmnt on my application server though, it mounts it as readonly
18:45 johnnyNumber5 am i being dumb?
18:47 johnnyNumber5 joined #gluster
18:47 johnnyNumber5 crap got DCd
18:47 johnnyNumber5 whoops
18:49 * johnnyNumber5 is very confused
18:49 JoeJulian johnnyNumber5: The the client log, but my suspicion is that the client cannot reach all the servers and is losing quorum.
18:49 JoeJulian s/The/Check/
18:49 glusterbot What JoeJulian meant to say was: johnnyNumber5: Check the client log, but my suspicion is that the client cannot reach all the servers and is losing quorum.
18:50 johnnyNumber5 sounds good, i will take a look. also that bot is awesome
18:50 JoeJulian :)
18:54 johnnyNumber5 damn firewall
18:54 johnnyNumber5 i know exactly what it was i added the rules to gluster1 but not 2 and 3
18:54 johnnyNumber5 facepalm
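For anyone else who hits this: the client has to reach every server in the replica set, not just the one named in the mount command, so the firewall rules need to exist on all of them. A sketch with firewalld and the usual defaults for gluster 3.4+ (translate to ufw or iptables as appropriate; widen the brick range to cover your brick count):
    firewall-cmd --permanent --add-port=24007-24008/tcp   # glusterd management
    firewall-cmd --permanent --add-port=49152-49156/tcp   # brick ports, one per brick starting at 49152
    firewall-cmd --reload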
18:57 johnnyNumber5 this is beautiful, its alive thanks JoeJulian !
18:57 JoeJulian You're welcome. :)
18:57 johnnyNumber5 can i ask some general glusterfs questions?
18:57 JoeJulian People often do.
18:58 johnnyNumber5 if a glusternode goes temporarily offline, is it a nightmare to resync?
18:59 nathwill joined #gluster
19:00 JoeJulian No. If a client node goes offline, there's no healing necessary. If a server node goes offline, there are protocols in place by which gluster tracks which files were changed. When the server comes back, the self-heal daemon that runs on each server will manage healing those files.
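A hedged example of watching that healing happen once the server is back, using the appfs volume name from the mount command above:
    gluster volume heal appfs info                      # files still pending heal, listed per brick
    gluster volume heal appfs statistics heal-count     # just the pending counts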
19:01 johnnyNumber5 i would want to run at least 4 gluster servers to have that self-heal ability, right? if i drop from 3 to 2, i'm assuming it goes into read-only mode?
19:04 JoeJulian replication determines fault-tolerance. Replica 2 is prone to split-brain and having at least a 3rd server with an arbiter brick is recommended. Replica 2, on most common server hardware, will give you 5 nines (99.999%) of uptime. Replica 3 or 4 will only get you 6 nines.
19:04 JoeJulian So for the cost, replica 2 with an arbiter or replica 3 is the most efficient.
19:05 JoeJulian From there, you would add multiples of your replica count to add capacity.
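For the arbiter variant mentioned above, the create syntax looks roughly like this (a sketch; the volume name, hosts, and brick paths are placeholders, and the arbiter brick holds only metadata so it can be much smaller than the data bricks):
    gluster volume create myvol replica 3 arbiter 1 \
        gluster1:/data/brick1 gluster2:/data/brick1 gluster3:/data/arbiter1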
19:06 johnnyNumber5 sounds pretty great. thanks JoeJulian i'm sure i'm already going overkill but meh :)
19:06 JoeJulian Better overkill than under.
19:06 JoeJulian Well, not always...
19:06 johnnyNumber5 an app of this caliber has already run fine for years on a single server. i just like the idea of giving my clients something better and reducing the odds of me having to wake up at 4am
19:07 johnnyNumber5 right now i have just been a big fan of running one instance on aws with frequent backups and an elastic ip to cover my ass if there is an issue
19:07 JoeJulian For instance, replica 4 will generally (depending on use case) be less performant than a replica 3.
19:07 johnnyNumber5 ill probably run replica 3 then
19:08 johnnyNumber5 been following this guide, which has it for replica 3 https://www.linode.com/docs/websites/host-a-website-with-high-availability
19:08 glusterbot Title: Host a Website with High Availability (at www.linode.com)
19:09 nathwill joined #gluster
19:11 johnnyNumber5 i think they did a pretty decent job no?^
19:12 JoeJulian For a web app, if you're using gluster 3.8+ I recommend turning up the performance.md-cache-timeout a lot. 30 minutes or more, imho.
19:12 JoeJulian Other than that, yeah, not a bad writeup.
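The md-cache tuning JoeJulian mentions would look something like this (a sketch; appfs is the volume name from earlier, and the accepted maximum for md-cache-timeout depends on the release and on cache invalidation being enabled, so check gluster volume set help on your version):
    gluster volume set appfs features.cache-invalidation on
    gluster volume set appfs features.cache-invalidation-timeout 600
    gluster volume set appfs performance.md-cache-timeout 600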
19:13 nathwill joined #gluster
19:13 nathwill joined #gluster
19:14 prth joined #gluster
19:14 nathwill joined #gluster
19:15 nathwill joined #gluster
19:16 nathwill joined #gluster
19:19 johnnyNumber5 good call
19:22 johnmilton joined #gluster
19:28 derjohn_mobi joined #gluster
19:40 sage_ joined #gluster
19:40 mlhess joined #gluster
20:05 nathwill joined #gluster
20:22 sage_ joined #gluster
20:47 msvbhat joined #gluster
20:47 johnnyNumber5 joined #gluster
20:58 prth joined #gluster
20:59 farhoriz_ joined #gluster
21:04 mhulsman joined #gluster
21:34 johnnyNumber5 joined #gluster
22:13 johnnyNumber5 joined #gluster
22:35 cliluw joined #gluster
22:36 johnnyNumber5 joined #gluster
22:48 farhorizon joined #gluster
23:09 skoduri joined #gluster
23:10 johnnyNumber5 joined #gluster
23:32 plarsen joined #gluster
23:36 johnnyNumber5 joined #gluster
23:53 farhoriz_ joined #gluster
23:58 B21956 joined #gluster
