
IRC log for #gluster, 2017-02-14


All times shown according to UTC.

Time Nick Message
00:05 cholcombe is everything that's in the .indices/xattrop dir that doesn't start with xattrop a file needing healing?
00:07 cacasmacas joined #gluster
00:12 cholcombe seems so
00:30 kramdoss_ joined #gluster
00:32 ankit joined #gluster
00:36 gyadav_ joined #gluster
00:45 baber joined #gluster
00:50 JoeJulian cholcombe: It should be, yes.
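[A minimal sketch of the check cholcombe is describing, assuming a brick at the hypothetical path /data/brick1. The per-brick index lives under .glusterfs/indices/xattrop; entries other than the base xattrop-<uuid> file are gfid links for files with pending heals.]

    # Hypothetical brick path; adjust to your layout.
    BRICK=/data/brick1
    # Everything here that does not start with "xattrop" is a gfid link
    # pointing at a file with a pending heal.
    ls $BRICK/.glusterfs/indices/xattrop | grep -v '^xattrop' | wc -l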
00:55 gem joined #gluster
00:55 shdeng joined #gluster
00:58 pjrebollo joined #gluster
01:06 cyberbootje1 Hi all, any clue if there is documentation of the latest stable glusterFS downloadable in PDF format?
01:11 cacasmacas joined #gluster
01:14 jwd joined #gluster
01:16 john51 joined #gluster
01:16 amye uh, if you want to 'print to PDF' from ReadTheDocs, you could, but no current PDF copies exist.
01:18 cyberbootje1 amye: Correct me if i'm wrong but in that case i would print one page? not the whole "ReadTheDocs" base
01:18 amye cyberbootje1, correct. However, everything's in a github repo.
01:19 JoeJulian Which needs an editor... if you feel like volunteering for something.. ;)
01:19 amye https://github.com/gluster/glusterdocs
01:19 glusterbot Title: GitHub - gluster/glusterdocs: This repo had it's git history re-written on 19 May 2016. Please create a fresh fork or clone if you have an older local clone. (at github.com)
01:19 amye JoeJulian, fully agreed
01:19 pjrebollo joined #gluster
01:20 cyberbootje1 thing is, it would be nice to have hardcopy laying around just in case
01:21 john51 joined #gluster
01:21 JoeJulian Not me... I never have paper around anymore. I used to print everything though. Even printed the perl manual once upon a time.
01:22 cyberbootje1 yeah i know but then again, murphy's law, glusterFS fails and internet/other systems at the same time, i would love to have hardcopy somewhere in that case
01:23 amye cyberbootje1, pull down the github repo locally?
01:23 cyberbootje1 yeah, plan B
01:23 cyberbootje1 plan A: was hoping for PDF
01:25 amye plan c: generate PDF from repo
01:26 john51 joined #gluster
01:26 amye https://github.com/jobisoft/wikidoc may be what you're looking for.
01:26 glusterbot Title: GitHub - jobisoft/wikidoc: Create PDF file from github wiki documentation (at github.com)
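[A rough sketch of the "clone the repo and build it locally" route discussed above, assuming glusterdocs is a standard MkDocs project (the ReadTheDocs site is built from it); the PDF conversion step is illustrative only.]

    git clone https://github.com/gluster/glusterdocs.git
    cd glusterdocs
    pip install mkdocs
    mkdocs build                 # renders the docs as static HTML under ./site
    # A per-page PDF could then be produced with a tool such as wkhtmltopdf:
    # wkhtmltopdf site/index.html gluster-docs-index.pdf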
01:31 john51 joined #gluster
01:36 john51 joined #gluster
01:38 ankit joined #gluster
01:40 jkroon joined #gluster
01:42 cacasmacas joined #gluster
01:54 bwerthmann joined #gluster
01:57 arpu joined #gluster
02:03 baber joined #gluster
02:07 Gambit15 joined #gluster
02:09 pjrebollo joined #gluster
02:11 cholcombe JoeJulian, cool thanks
02:17 riyas joined #gluster
02:23 cacasmacas joined #gluster
02:27 gem joined #gluster
02:28 Shu6h3ndu joined #gluster
02:28 jkroon joined #gluster
02:30 derjohn_mob joined #gluster
02:31 daMaestro joined #gluster
02:34 skoduri joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:50 Wizek joined #gluster
02:55 RameshN joined #gluster
03:06 cacasmacas joined #gluster
03:14 RameshN joined #gluster
03:15 jwd joined #gluster
03:18 unlaudable joined #gluster
03:22 unlaudable joined #gluster
03:25 kramdoss_ joined #gluster
03:27 unlaudable joined #gluster
03:32 unlaudable joined #gluster
03:39 nbalacha joined #gluster
03:53 haomaiwang joined #gluster
03:54 cacasmacas joined #gluster
03:58 magrawal joined #gluster
03:59 aravindavk joined #gluster
04:03 gyadav_ joined #gluster
04:10 PaulCuzner joined #gluster
04:12 sbulage joined #gluster
04:13 haomaiwang joined #gluster
04:18 nbalacha joined #gluster
04:19 skumar joined #gluster
04:31 farhorizon joined #gluster
04:41 skumar joined #gluster
04:44 mb_ joined #gluster
04:48 skoduri joined #gluster
04:49 riyas joined #gluster
04:59 rafi joined #gluster
05:04 sage joined #gluster
05:06 ppai joined #gluster
05:13 haomaiwang joined #gluster
05:15 ndarshan joined #gluster
05:16 jwd joined #gluster
05:16 cacasmacas_ joined #gluster
05:20 Humble joined #gluster
05:21 gem joined #gluster
05:24 skumar_ joined #gluster
05:25 karthik_us joined #gluster
05:25 Jacob843 joined #gluster
05:26 shdeng joined #gluster
05:33 kdhananjay joined #gluster
05:39 susant joined #gluster
05:42 Prasad joined #gluster
05:45 jiffin joined #gluster
05:52 riyas joined #gluster
05:53 k4n0 joined #gluster
05:55 Karan joined #gluster
05:57 Guest37230 joined #gluster
05:58 cacasmacas joined #gluster
05:58 prasanth joined #gluster
06:06 ashiq joined #gluster
06:08 sanoj joined #gluster
06:08 susant joined #gluster
06:08 jiffin1 joined #gluster
06:12 jwd joined #gluster
06:13 haomaiwang joined #gluster
06:20 ankit_ joined #gluster
06:22 cacasmacas joined #gluster
06:24 sanoj joined #gluster
06:33 hgowtham joined #gluster
06:35 Humble joined #gluster
06:53 hgowtham joined #gluster
06:57 sona joined #gluster
06:57 shutupsquare joined #gluster
07:03 armyriad joined #gluster
07:03 kotreshhr joined #gluster
07:07 msvbhat joined #gluster
07:09 mlhamburg Hello, I mounted a gluster volume with two different clients using the Gluster Native Client (with fstab). The system user IDs on the clients are not identical. Is there a client-side option to do some kind of user-ID mapping so the user names end up the same on all clients?
07:12 shutupsquare joined #gluster
07:13 shutupsq_ joined #gluster
07:13 haomaiwang joined #gluster
07:14 unlaudable joined #gluster
07:16 Philambdo joined #gluster
07:17 jiffin1 joined #gluster
07:22 jwd joined #gluster
07:23 jtux joined #gluster
07:23 sbulage joined #gluster
07:29 Humble joined #gluster
07:30 mhulsman joined #gluster
07:32 mhulsman joined #gluster
07:38 flomko joined #gluster
07:40 mb_ joined #gluster
07:41 BatS9 joined #gluster
07:56 Humble joined #gluster
07:57 cacasmacas joined #gluster
08:05 ivan_rossi joined #gluster
08:06 [diablo] joined #gluster
08:06 kdhananjay joined #gluster
08:06 nishanth joined #gluster
08:13 haomaiwang joined #gluster
08:23 jkroon joined #gluster
08:24 atinm joined #gluster
08:35 Reventlov joined #gluster
08:36 Reventlov Hi. I read there http://lists.gluster.org/pipermail/gluster-devel/2014-February/028104.html that some effort was "ongoing" in 2014 to use raft for synchronous replication. Is there anywhere I can read more about the current replication // consensus algorithm used in gluster?
08:36 glusterbot Title: [Gluster-devel] raft consensus algorithm and glusterfs (at lists.gluster.org)
08:36 nbalacha joined #gluster
08:41 suliba joined #gluster
08:44 fsimonce joined #gluster
08:47 jiffin1 joined #gluster
08:48 musa22 joined #gluster
08:49 skoduri joined #gluster
09:04 nh2 joined #gluster
09:11 derjohn_mob joined #gluster
09:13 haomaiwang joined #gluster
09:15 ShwethaHP joined #gluster
09:18 ahino joined #gluster
09:30 jiffin1 joined #gluster
09:31 poornima joined #gluster
09:32 Saravanakmr joined #gluster
09:40 gem joined #gluster
09:43 cacasmacas joined #gluster
09:44 buvanesh_kumar joined #gluster
09:49 saintpablo joined #gluster
10:08 percevalbot joined #gluster
10:10 jiffin Reventlov: http://lists.gluster.org/pipermail/gluster-devel/2017-February/052015.html
10:10 glusterbot Title: [Gluster-devel] Leader Election Xlator Design Document (at lists.gluster.org)
10:12 AG joined #gluster
10:12 shutupsquare joined #gluster
10:13 Reventlov thanks :
10:13 Reventlov )
10:13 haomaiwang joined #gluster
10:15 jiffin Reventlov: sorry did u mean current replication algorithm ?
10:16 Reventlov jiffin: well, why not both? :) So, yeah, if you have things on the current algorithm, i'll take them too!
10:19 pulli joined #gluster
10:26 Homastli joined #gluster
10:26 sona joined #gluster
10:45 shutupsquare joined #gluster
11:10 buvanesh_kumar joined #gluster
11:16 shutupsquare joined #gluster
11:27 skoduri joined #gluster
11:28 sbulage joined #gluster
11:33 derjohn_mob joined #gluster
11:34 cacasmacas joined #gluster
11:40 jiffin Reventlov: check this one as well https://github.com/gluster/glusterfs/blob/master/doc/developer-guide/afr.md
11:40 glusterbot Title: glusterfs/afr.md at master · gluster/glusterfs · GitHub (at github.com)
11:42 kotreshhr left #gluster
11:46 ilsanto joined #gluster
11:46 ilsanto hi all!
11:49 pulli joined #gluster
11:49 susant left #gluster
11:51 pjrebollo joined #gluster
12:01 mb_ joined #gluster
12:02 haomaiwang joined #gluster
12:05 jiffin ilsanto: Hi
12:18 Homastli After upgrading Gluster, I can see my volumes and files, but when I try to access a file it's not found! Any ideas?
12:19 Seth_Karlo joined #gluster
12:20 Seth_Karlo joined #gluster
12:20 pulli joined #gluster
12:22 bfoster joined #gluster
12:25 unlaudable joined #gluster
12:26 nishanth joined #gluster
12:27 Seth_Karlo joined #gluster
12:31 Wizek joined #gluster
12:38 cacasmacas joined #gluster
12:46 Homastli I can open the same file on one node, but not another. The log file for the volume states "remote operation failed. Path... (no such file or directory)". Strange because the file is there on the brick
12:50 buvanesh_kumar joined #gluster
13:06 Reventlov So, architecture question: I currently read that it does not have any meta-data server
13:06 Reventlov so, the use of etcd (and raft through that) will kinda centralize glusterFS, no?
13:18 ahino joined #gluster
13:22 jkroon joined #gluster
13:22 skoduri joined #gluster
13:26 atm0sphere joined #gluster
13:28 jeffspeff joined #gluster
13:29 pulli joined #gluster
13:37 pulli joined #gluster
13:38 shutupsquare joined #gluster
13:39 shutupsquare joined #gluster
13:40 arpu joined #gluster
13:41 pulli joined #gluster
13:46 cacasmacas joined #gluster
13:50 unclemarc joined #gluster
13:51 Jacob843 joined #gluster
13:53 pulli joined #gluster
13:55 bwerthma1n joined #gluster
13:57 baber joined #gluster
14:00 ankitr joined #gluster
14:01 ahino joined #gluster
14:05 k4n0 joined #gluster
14:07 Prasad joined #gluster
14:08 jiffin joined #gluster
14:09 ebbex joined #gluster
14:16 riyas joined #gluster
14:17 kramdoss_ joined #gluster
14:19 Reventlov (as raft does not scale well with the number of consensus participating nodes)
14:19 derjohn_mob joined #gluster
14:19 ppai joined #gluster
14:19 atinm joined #gluster
14:19 squizzi joined #gluster
14:20 buvanesh_kumar joined #gluster
14:34 gem joined #gluster
14:35 ahino joined #gluster
14:40 gem joined #gluster
14:46 PotatoGim joined #gluster
14:46 AG joined #gluster
14:48 rideh joined #gluster
14:52 cacasmacas joined #gluster
14:54 skumar joined #gluster
14:54 nbalacha joined #gluster
14:55 ahino joined #gluster
14:56 skylar joined #gluster
14:57 JoeJulian Reventlov: Since that's more of a developer design question, I recommend the gluster-devel mailing list. http://lists.gluster.org/mailman/listinfo/gluster-devel
14:57 glusterbot Title: Gluster-devel Info Page (at lists.gluster.org)
14:58 ppai joined #gluster
15:04 Reventlov JoeJulian: Ok, I may send an email, thank you for the hint
15:06 klaas joined #gluster
15:13 Gambit15 joined #gluster
15:22 skumar joined #gluster
15:24 skumar joined #gluster
15:27 jtux joined #gluster
15:32 shyam joined #gluster
15:33 farhorizon joined #gluster
15:34 nh2 JoeJulian: I wrote that parallel copying program, but it's only 2x as fast as cp in the best case, so I think I found a limitation in gluster:
15:36 nh2 I observed that even though I copy with 1000 threads at the same time, strace() on glusterd (and also gluster vol profile) shows that at no point are there more than 4 fds open at the same time
15:36 nh2 so there seems to be something linearising my parallel writes
15:36 nh2 but I can't really figure out what it is!
15:41 kshlm nh2, how many bricks does your volume have? glusterfs scales more, the more bricks it has available to distribute data over.
15:43 kshlm You could also possibly try tuning the client.event-threads and server.event-threads options to allow more requests to be submitted and processed in parallel
15:44 kshlm And glusterd isn't the process that handles IO. You should be stracing the brick processes. These will be glusterfsd processes, whose PIDs you can get from `gluster volume status <volname>`
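[A short sketch of the tuning and tracing kshlm suggests; the volume name, thread count, and PID are placeholders, and the brick PID comes from the volume status output.]

    gluster volume set <volname> client.event-threads 8
    gluster volume set <volname> server.event-threads 8

    # Brick (glusterfsd) PIDs are listed in the status output; strace one of them:
    gluster volume status <volname>
    strace -f -p <brick-pid> -e trace=open,openat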
15:57 haomaiwang joined #gluster
15:59 Gambit15 Hi guys
15:59 cacasmacas joined #gluster
15:59 Gambit15 kshlm, you still around?
16:00 Gambit15 I've got an odd issue where every now & then, some of my nodes lose contact with each other & fence themselves
16:01 Gambit15 I've got some basic monitoring pinging the storage interfaces, and they never go down
16:04 Gambit15 I'm going to add a tcp check on the glusterd ports now, but I'd love to be able to narrow down what's going on. It occurs 0-4 times a day & keeps causing my VMs to get paused
16:05 wushudoin joined #gluster
16:05 msvbhat joined #gluster
16:06 dgandhi joined #gluster
16:06 Gambit15 The disconnects only ever last a few seconds and the only signal I ever get is from parsing the logs, it never lasts long enough to catch the issue by querying gluster on the CLI
16:07 Gambit15 JoeJulian, a cheeky ping if you're around :) IIRC, I initially discussed this with you a week or so back
16:07 kpease joined #gluster
16:08 kpease_ joined #gluster
16:13 kshlm Gambit15, I've heard of this before, but I don't have any clue as to why this happens.
16:14 kshlm The network is always fine, but for some reason the gluster connections disconnect.
16:14 Gambit15 Any recollections of where? Perhaps it'd help shine some light
16:14 Gambit15 I'm using 3.8.8
16:14 Gambit15 (2+1)x2 across 4 servers (for now)
16:17 kshlm Not sure. I guess I saw it a few times in internal qe tests.
16:17 Gambit15 Another thing that frequently happens around this time is entries often appear in the heal info. They almost always come in pairs, with the same file being "healed" on 2 of the 3 bricks in its rep pair - I found that extremely odd, as that should be a quorum-affecting split-brain situation, although no files are ever listed as in split-brain
16:18 Gambit15 These heal processes can hang around anywhere from 20 minutes to 6+ hours, even when one of the bricks being healed is a metadata-only arbiter...
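[The commands typically used to watch the behaviour Gambit15 describes; the volume name is a placeholder.]

    gluster volume heal <volname> info                    # per-brick list of entries with pending heals
    gluster volume heal <volname> info split-brain        # files actually flagged as split-brain
    gluster volume heal <volname> statistics heal-count   # quick count without the full listing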
16:24 farhorizon joined #gluster
16:27 kpease joined #gluster
16:31 cacasmacas joined #gluster
16:40 haomaiwang joined #gluster
16:42 shortdudey123 joined #gluster
16:46 jdossey joined #gluster
16:51 jiffin joined #gluster
16:55 ShwethaHP joined #gluster
16:55 aravindavk joined #gluster
17:15 skumar_ joined #gluster
17:18 kpease joined #gluster
17:18 chawlanikhil24 joined #gluster
17:19 chawlanikhil24 .hello
17:19 chawlanikhil24 I was setting up gluster
17:19 chawlanikhil24 need help at the "make install" step
17:24 mhulsman joined #gluster
17:26 Gambit15 chawlanikhil24, wouldn't you be best using a repo & package manager? Maintaining updates for services compiled from source is a real PITA...
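[A sketch of the packaged route Gambit15 is suggesting, assuming a CentOS/RHEL host using the Storage SIG packages; other distributions have their own gluster.org repositories.]

    yum install centos-release-gluster   # enables the CentOS Storage SIG repo
    yum install glusterfs-server
    systemctl enable --now glusterd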
17:39 jbrooks joined #gluster
17:42 mhulsman joined #gluster
17:44 skumar joined #gluster
17:49 bbooth joined #gluster
17:52 cacasmacas joined #gluster
17:56 JoeJulian Gambit15: If I were trying to diagnose that, I would first monitor the ports with tcpdump looking for RST packets.
18:00 Rasathus joined #gluster
18:00 Rasathus left #gluster
18:05 kpease_ joined #gluster
18:06 pulli joined #gluster
18:06 snehring joined #gluster
18:07 Gambit15 JoeJulian, all of the glusterfsd ports, or just glusterd (24007)?
18:07 Gambit15 ...and what exactly should I be looking for?
18:07 JoeJulian Whichever you're losing connection to.
18:08 Gambit15 One glusterfsd process is opened per brick, correct?
18:08 JoeJulian Actually... hmm...
18:08 JoeJulian If you receive an RST, the connection should close and that log entry should be logged.
18:09 JoeJulian If you don't, but the connection closes anyway, you'll *send* an RST as you close the connection.
18:10 JoeJulian If the latter is happening, that's a bug on the side that things the connection closed without the tcp connection closing.
18:10 JoeJulian s/things/thinks/
18:10 glusterbot JoeJulian: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
18:10 * JoeJulian raises an eyebrow
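[One possible tcpdump invocation for the RST hunt JoeJulian describes; the interface name and brick port range are assumptions, so check `gluster volume status` for the ports actually in use (glusterd itself listens on 24007).]

    tcpdump -nn -i eth0 \
        'tcp[tcpflags] & tcp-rst != 0 and (port 24007 or portrange 49152-49251)'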
18:13 ahino joined #gluster
18:15 Seth_Karlo joined #gluster
18:16 nh2 joined #gluster
18:16 nh2 kshlm: 2 bricks (2-replica). I changed `event-threads` from 2 to 32 but the behaviour didn't change. That's why I'm suspecting something else is linearising my open()s.
18:16 nh2 kshlm: regarding strace, you're right, and I did strace gluster*fs*d, but confused them when typing the messages
18:17 nh2 I did notice that when I turn off `open-behind` and put a `sleep(10 seconds)` after the open()s, the open fd count does go up, but still way too slowly
18:19 nh2 where's the person (chawlanikhil24) gone that wanted help with `make install`? I'm suspecting that their problem is the missing gluster eventsd-Debian file error that happens on Ubuntu (which can be solved by disabling the events feature at the ./configure step)
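[The workaround nh2 is referring to, roughly; it assumes the configure switch is spelled --disable-events in the release being built.]

    ./autogen.sh
    ./configure --disable-events
    make
    sudo make install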
18:20 mhulsman joined #gluster
18:31 stomith joined #gluster
18:31 Gambit15 JoeJulian, ok, cheers Joe. Will monitor it this afternoon
18:33 stomith Hi, new here. I’ve got a two node gluster cluster that’s pretty stable, but I need to remove one of the servers. Is this possible to do live?
18:33 JoeJulian What version?
18:33 stomith rpm -qa says 3.7.11-2
18:34 JoeJulian iirc, quorum enforcement didn't come until 3.8 so you should be ok. Make sure self-heal finishes after bringing it back up before doing the other one.
18:36 stomith do I just remove bricks on the secondary server? turn off the secondary server?
18:38 JoeJulian Is this a temporary removal?
18:38 JoeJulian Like server maintenance or something?
18:38 stomith No, permanent removal.
18:39 JoeJulian Oh, well that's different then. Yes, remove-brick $volname replica 1 blah:/deblah
18:40 stomith Thanks. :)
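[A concrete form of JoeJulian's remove-brick command, with hypothetical host and brick path; on 3.7 reducing the replica count generally needs the trailing "force", and the departing peer can be detached afterwards.]

    gluster volume remove-brick <volname> replica 1 server2:/bricks/brick1 force
    gluster peer detach server2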
18:52 derjohn_mob joined #gluster
18:55 Seth_Karlo joined #gluster
18:56 haomaiwang joined #gluster
18:56 Wizek joined #gluster
18:58 unlaudable joined #gluster
19:05 cacasmacas joined #gluster
19:08 [diablo] joined #gluster
19:11 kpease joined #gluster
19:26 stomith joined #gluster
19:43 derjohn_mob joined #gluster
19:58 vbellur joined #gluster
20:00 cholcombe joined #gluster
20:01 farhoriz_ joined #gluster
20:04 irated joined #gluster
20:05 cholcombe joined #gluster
20:06 Seth_Karlo joined #gluster
20:10 msvbhat joined #gluster
20:13 mb_ joined #gluster
20:14 cholcombe joined #gluster
20:21 cacasmacas joined #gluster
20:22 stomith joined #gluster
20:24 baber joined #gluster
20:32 Jacob843 joined #gluster
20:36 buvanesh_kumar joined #gluster
20:42 nh2 joined #gluster
20:44 mhulsman joined #gluster
20:45 haomaiwang joined #gluster
20:48 pulli joined #gluster
20:51 aronnax Does anyone know if I can create snapshots on a dispersed volume while a brick is down? I tried to issue a snapshot create force but got back a "snapshot create: failed: One or more bricks may be down."
20:52 irated joined #gluster
20:52 vbellur joined #gluster
20:53 aronnax The down brick will be replaced in a few days, but until then, I'd like to continue taking snapshots so users can recover accidentally deleted files in the meantime.
20:57 jkroon joined #gluster
21:00 stomith joined #gluster
21:02 nmreis joined #gluster
21:03 Wizek_ joined #gluster
21:04 nmreis joined #gluster
21:12 unlaudable joined #gluster
21:16 baber joined #gluster
21:18 niknakpaddywak joined #gluster
21:36 cacasmacas joined #gluster
21:44 Duke3d joined #gluster
21:45 farhorizon joined #gluster
21:56 farhorizon joined #gluster
21:59 Gambit15 joined #gluster
22:04 farhorizon joined #gluster
22:15 cacasmacas joined #gluster
22:20 Gambit15 joined #gluster
22:23 f0rpaxe joined #gluster
22:30 bwerthmann joined #gluster
22:33 akay_ joined #gluster
22:33 haomaiwang joined #gluster
22:34 p7mo_ joined #gluster
22:35 cacasmacas joined #gluster
22:35 RustyB_ joined #gluster
22:35 telius_ joined #gluster
22:36 mrpops2ko joined #gluster
22:37 mrpops2ko hai guys, quick question - could you use glusterfs as a jbod type setup?
22:37 sage_ joined #gluster
22:38 niknakpa1dywak joined #gluster
22:38 squeakyneb joined #gluster
22:38 overclk_ joined #gluster
22:38 samppah joined #gluster
22:38 side_con1rol joined #gluster
22:41 MadPsy_ joined #gluster
22:41 MadPsy_ joined #gluster
22:42 gluytium_ joined #gluster
22:42 DJCl34n joined #gluster
22:42 DJClean joined #gluster
22:43 nthomas joined #gluster
22:46 thatgraemeguy joined #gluster
22:46 thatgraemeguy joined #gluster
22:46 mober joined #gluster
22:46 cholcombe joined #gluster
22:47 moss joined #gluster
22:47 jeffspeff joined #gluster
22:47 kenansulayman joined #gluster
22:47 squeakyneb joined #gluster
22:48 jackhill joined #gluster
22:48 yosafbridge joined #gluster
22:48 rideh joined #gluster
22:48 varesa_ joined #gluster
22:49 [o__o] joined #gluster
22:50 fyxim_ joined #gluster
22:50 PotatoGim joined #gluster
22:51 aronnax joined #gluster
22:51 wushudoin joined #gluster
22:51 joshin joined #gluster
22:51 joshin joined #gluster
22:52 Peppard joined #gluster
22:52 Intensity joined #gluster
22:52 akay joined #gluster
22:52 fsimonce joined #gluster
22:52 shaunm joined #gluster
22:52 caitnop joined #gluster
22:52 amye joined #gluster
22:52 mrEriksson joined #gluster
22:52 masber joined #gluster
22:52 cvstealth joined #gluster
22:52 edong23 joined #gluster
22:52 mattmcc_ joined #gluster
22:52 lanning joined #gluster
22:52 xMopxShell joined #gluster
22:52 Vaelatern joined #gluster
22:52 jarbod_ joined #gluster
22:52 Ramereth joined #gluster
22:52 koma joined #gluster
22:52 inodb joined #gluster
22:52 saali joined #gluster
22:54 baber joined #gluster
22:54 snehring joined #gluster
23:00 scubacuda joined #gluster
23:10 john51 joined #gluster
23:12 shyam joined #gluster
23:24 foster joined #gluster
23:36 vbellur joined #gluster
23:56 Limebyte joined #gluster
23:58 anoopcs joined #gluster
