
IRC log for #gluster, 2016-11-10


All times shown according to UTC.

Time Nick Message
00:24 Caveat4U joined #gluster
00:41 Caveat4U joined #gluster
00:49 shdeng joined #gluster
00:51 ashp joined #gluster
00:52 ashp Hey guys, long shot but... We're running gluster 3.5.5 (you don't want to ask) with the agents on 3.5.2
00:52 ashp We had an issue where a brick was struggling due to a disk error
00:52 ashp and the mount point hung on all the servers and processes flailed out
00:52 ashp Do later versions handle this kind of thing way better? I would assume with 2/3 of our gluster servers being up it should have just ignored the brick and moved on
00:58 bowhunter joined #gluster
00:59 luizcpg joined #gluster
01:07 kramdoss_ joined #gluster
01:15 hagarth joined #gluster
01:22 daMaestro joined #gluster
01:27 helsinki joined #gluster
01:29 helsinki joined #gluster
01:31 suliba joined #gluster
01:50 haomaiwang joined #gluster
02:03 magrawal joined #gluster
02:06 Gambit15 joined #gluster
02:10 prth joined #gluster
02:14 plarsen joined #gluster
02:14 PatNarciso_ ashp: I feel for ya.  but, what was the disk error?  the underlying disk/filesystem must be *trustworthy*.  umm... what was your setup?    distributed?  replicated?  answers to these questions may help better answer your question -- however, I'm not sure by how much.  (personally, I'm not up to date with 3.5.x)
02:14 * PatNarciso_ wishes xchat had spell check.
02:16 luizcpg_ joined #gluster
02:38 XpineX joined #gluster
02:46 Muthu joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:15 Lee1092 joined #gluster
03:16 snixor joined #gluster
03:37 shubhendu joined #gluster
03:38 blu__ joined #gluster
03:39 JoeJulian PatNarciso_: xchat does have speel chek. In arch linux, I just had to install hunspell and hunspell-en.
03:42 kramdoss_ joined #gluster
03:42 masber joined #gluster
03:48 nishanth joined #gluster
03:55 atinm joined #gluster
04:05 ashp hmm, i had to replace a failing server, i dragged over /var/lib/gluster and /etc/gluster and all my filesystems but one
04:05 ashp now for the bricks on the wiped filesystem it shows N for online
04:06 ashp but Y for online for all the old filesystems
04:06 ashp is there a way to force gluster online for volumes
04:07 ashp a heal info shows: Status: Transport endpoint is not connected
04:08 ashp http://www.gluster.org/pipermail/gluster-devel/2016-June/049933.html similar to this but a restart didn't help
04:08 glusterbot Title: [Gluster-devel] new brick Status shows "Transport endpoint is not connected" in heal info after replace-brick on V3.7.11 (at www.gluster.org)
04:12 itisravi joined #gluster
04:16 satya4ever joined #gluster
04:27 JoeJulian ashp: "gluster volume start $volname force" for the volumes with the newly created bricks. Double check to make sure the filesystem you want mounted is mounted before you do that.
04:27 karthik_us joined #gluster
04:27 JoeJulian You don't want gluster marking your root filesystem as the brick and replicating everything there. :)
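As a minimal sketch of the sequence JoeJulian describes, assuming an illustrative volume name "myvol" and brick mount point "/bricks/brick1":

    # confirm the brick filesystem is mounted, so gluster does not treat
    # the empty mount point on the root filesystem as the brick
    mountpoint -q /bricks/brick1 || echo "brick filesystem not mounted"

    # force-start brings up the brick processes currently showing N under Online
    gluster volume start myvol force

    # verify the replaced bricks now report Y under Online
    gluster volume status myvol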
04:28 ashp ahhhhhh
04:28 ashp just a simple start!
04:29 Logos01 joined #gluster
04:29 Logos01 JoeJulian: ashp is a buddy of mine. Long time no talk. :)
04:30 ndarshan joined #gluster
04:30 JoeJulian Very. How's it goin'?
04:31 Logos01 I've changed jobs since we last talked. AWS shop now. No more mysterious "what the flying F just happened" network issues blocking gluster usage. S3/EFS on the other hand...
04:31 Logos01 <_<
04:31 Logos01 'sall good.
04:31 * Logos01 was about to point ashp to see if he'd had transport.socket.bind-address set
04:31 Logos01 So ... thanks. Dude's up *WAAAY* too late for this.
04:31 JoeJulian Hehe, so am I. :)
04:32 * Logos01 is on UT time.
04:33 ashp ok, my first volume is actually restoring
04:33 Logos01 Woot
04:33 ashp thanks! what a painful night
04:34 JoeJulian Good thing I checked. :) I'm supposed to be watching TV with my wife. :)
04:34 Logos01 Hahahaha
04:34 Logos01 JoeJulian: Welp, I'd have gotten him there *eventually*
04:34 ashp kicked off the full repair on the huge filesystem
04:34 Logos01 This is with a new EBS volume underneath right?
04:35 ashp should take a week or so, who knows, with these crappy machines
04:35 ashp yeah, new ebs
04:35 ashp only 1TB to copy
04:35 Logos01 And the number of files involved.
04:35 magrawal joined #gluster
04:35 Logos01 ashp: Check out EFS sometime.
04:35 ashp we were going to use efs
04:35 Logos01 Well you already know about it. But ... yeah.
04:35 ashp but the end solution is for them to put all these files in s3
04:36 ashp and stop giving me a headache
04:36 ashp no more gluster 3.5.5
04:36 ashp with 3.5.2 clients
04:36 Logos01 Can you *use* s3 w/ subdirectories?
04:36 Logos01 I'm fuzzy on that.
04:36 buvanesh_kumar joined #gluster
04:37 ashp you can :)
04:37 ashp off to bed for me, gotta be up in 5 hours
04:37 ashp Logos01: THANK YOU for helping/hanging with me!
04:38 Logos01 :D
04:38 Logos01 We need to hang out at DefCon I think
04:38 Logos01 But for now go to sleep
04:39 Logos01 JoeJulian: On ashp's behalf because his matters more: thanks, again.
04:39 Logos01 left #gluster
05:00 ankitraj joined #gluster
05:10 aravindavk joined #gluster
05:12 sanoj joined #gluster
05:18 hchiramm joined #gluster
05:18 nbalacha joined #gluster
05:19 Debloper joined #gluster
05:21 ankitraj joined #gluster
05:27 prasanth joined #gluster
05:31 Saravanakmr joined #gluster
05:34 shubhendu joined #gluster
05:41 skoduri joined #gluster
05:43 Philambdo joined #gluster
05:44 sathees joined #gluster
05:45 satya4ever joined #gluster
05:46 Caveat4U joined #gluster
05:47 kdhananjay joined #gluster
05:58 jiffin joined #gluster
05:59 jiffin joined #gluster
06:07 rastar joined #gluster
06:12 Debloper joined #gluster
06:13 arc0 joined #gluster
06:14 buvanesh_kumar_ joined #gluster
06:14 msvbhat joined #gluster
06:23 hgowtham joined #gluster
06:23 gem joined #gluster
06:26 rafi joined #gluster
06:33 ashah joined #gluster
06:45 buvanesh_kumar joined #gluster
06:47 devyani7 joined #gluster
06:47 prth joined #gluster
06:49 [diablo] joined #gluster
06:51 suliba joined #gluster
06:52 jiffin joined #gluster
06:53 k4n0 joined #gluster
07:01 karnan joined #gluster
07:02 nishanth joined #gluster
07:21 ajneil joined #gluster
07:21 jtux joined #gluster
07:27 derjohn_mobi joined #gluster
07:28 mhulsman joined #gluster
07:36 jtux joined #gluster
07:43 mhulsman joined #gluster
07:59 Gnomethrower joined #gluster
08:03 jkroon joined #gluster
08:24 riyas joined #gluster
08:38 flying joined #gluster
08:43 prasanth joined #gluster
08:44 mb_ joined #gluster
08:46 jri joined #gluster
08:54 renout would it be bad to do a gluster volume status every minute for monitoring purposes? Sometimes glusterd crashes (like once every month) with signal 6 and core is dumped. glusterfsd processes keep running. All is good after a glusterd restart. This is gluster 3.7.6.1 on centos7.
08:57 rafi joined #gluster
08:58 rouven joined #gluster
08:58 kshlm renout, Every minute should be okay. Just don't lower it to something like every 10s.
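For illustration, a once-a-minute poll like the one renout describes could take the shape of a cron entry (the volume name and alert mechanism are placeholders):

    # /etc/cron.d/gluster-mon -- sketch only
    * * * * * root gluster volume status myvol >/dev/null 2>&1 || logger -t gluster-mon "volume status check failed"

Note that post-factum reports below that even status polling has triggered crashes for him, so this shows the mechanism, not a guarantee of safety.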
09:02 gem joined #gluster
09:04 Saravanakmr joined #gluster
09:08 ahino joined #gluster
09:13 marcvw joined #gluster
09:13 panina joined #gluster
09:15 renout kshlm: ok thanks. What would be the best way forward to debug the reason for the crash? From the logs I can't make out much, apart from it being killed with signal 6.
09:18 Slashman joined #gluster
09:21 rouven hey, i would like to create a special device (urandom) on a gluster volume (3.7.12) for a chroot environment on the glusterfs. whenever i try to mknod it, i get a "permission denied" error. is there any way to achieve such a device on a gluster volume? the volume in question is mounted via nfs, but the error also appears when the volume is mounted via fuse. any hints are greatly appreciated.
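rouven's question goes unanswered in this log; for context, a common workaround for device nodes inside a chroot on a filesystem that refuses mknod is to bind-mount the host's device instead (paths are illustrative):

    # create an ordinary file on the gluster volume as the bind target
    touch /mnt/glustervol/chroot/dev/urandom

    # bind the host's device node over it; the chroot then sees a working urandom
    mount --bind /dev/urandom /mnt/glustervol/chroot/dev/urandom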
09:26 prth joined #gluster
09:27 poornima_ joined #gluster
09:27 jiffin joined #gluster
09:29 buvanesh_kumar joined #gluster
09:30 mhulsman1 joined #gluster
09:36 post-factum kshlm, in fact, no :)
09:36 post-factum kshlm, i had issues monitoring gluster via status every 5 mins
09:37 post-factum kshlm, but i guess that is related to retrieving the list of connected clients. i have a bug report filed against this issue
09:38 post-factum renout, you might find coredump generated somewhere
09:40 renout post-factum: what would you recommend for monitoring?
09:41 post-factum renout, what do you need to monitor?
09:42 renout post-factum: basically whether gluster is running ok. Best would be end-to-end: check whether the volume is writable or not
09:43 post-factum renout, we gave up on querying gluster via scripts or cli because of crashes
09:43 post-factum renout, the only thing we monitored is whether bricks are up
09:47 renout post-factum: what's the reason gluster crashes while monitoring? I would assume you should be able to monitor it, right? :)
09:49 post-factum https://bugzilla.redhat.com/show_bug.cgi?id=1353561
09:49 glusterbot Bug 1353561: medium, unspecified, ---, rgowdapp, ASSIGNED , Multiple bricks could crash after TCP port probing
09:49 post-factum https://bugzilla.redhat.com/show_bug.cgi?id=1353529
09:49 glusterbot Bug 1353529: medium, unspecified, ---, bugs, NEW , Multiple bricks could crash after invoking status command
09:49 post-factum renout, ^^
09:49 abyss^ joined #gluster
09:50 satya4ever joined #gluster
09:50 post-factum those are epic bugs, especially the one with tcp probing
09:50 post-factum in fact, it is a remote DoS
09:51 post-factum we've set up tcp probing via zabbix, and bricks started crashing :)
09:53 renout post-factum: thanks, i will have a read. As for my crash i'll locate the core dump and see where I get from there
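A sketch of inspecting such a core once located; where cores land depends on kernel.core_pattern (CentOS 7 commonly routes them through abrt into /var/spool/abrt):

    # see where this machine routes core dumps
    cat /proc/sys/kernel/core_pattern

    # pull a full backtrace from the dump (install the glusterfs
    # debuginfo packages first so the symbols are readable)
    gdb /usr/sbin/glusterd /path/to/core -batch -ex 'thread apply all bt full'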
09:54 thatgraemeguy joined #gluster
09:57 social joined #gluster
10:01 Jacob843 joined #gluster
10:01 arc0 joined #gluster
10:04 rafi joined #gluster
10:07 derjohn_mobi joined #gluster
10:07 zat1 joined #gluster
10:18 rastar joined #gluster
10:19 jbrooks joined #gluster
10:35 sanoj joined #gluster
10:39 jri joined #gluster
10:39 msvbhat joined #gluster
10:41 buvanesh_kumar_ joined #gluster
10:42 panina joined #gluster
10:48 rastar joined #gluster
10:49 derjohn_mobi joined #gluster
10:54 gem joined #gluster
10:57 jiffin joined #gluster
11:05 jiffin joined #gluster
11:15 hybrid512 joined #gluster
11:16 guhcampos joined #gluster
11:21 satya4ever joined #gluster
11:23 yalu_ joined #gluster
11:25 yalu_ I read settings like performance.io-cache on mailing lists, but the administrator guide does not mention it. Is there a more complete list of tunables somewhere?
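One way to get a complete list from the CLI itself (not an answer given in this log): "gluster volume set help" prints every settable option with a short description, for example:

    # enumerate all settable volume options with descriptions
    gluster volume set help

    # the tunable yalu_ mentions, toggled on an illustrative volume "myvol"
    gluster volume set myvol performance.io-cache off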
11:26 Philambdo joined #gluster
11:29 bfoster joined #gluster
11:33 haomaiwang joined #gluster
11:36 bfoster joined #gluster
11:52 hackman joined #gluster
12:03 jiffin joined #gluster
12:13 shubhendu joined #gluster
12:15 jri_ joined #gluster
12:23 panina joined #gluster
12:23 yalu_ joined #gluster
12:25 Caveat4U joined #gluster
12:27 Anarka hey :) is there a standard way to make gluster use a specific network and/or interface? i see plans for it, but atm i mostly find advice about adding specific entries in /etc/hosts
12:30 jiffin joined #gluster
12:30 kkeithley nope, it listens on IP_ADDR_ANY
12:41 Anarka kkeithley: meh, tnx :)
12:42 guhcampos joined #gluster
12:43 kramdoss_ joined #gluster
12:44 Anarka creating host1-gluster entries in /etc/hosts, for example, and using those for peers: is that ok, or will that be more trouble than it's worth?
12:44 Anarka hosts or ofc dns
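A sketch of the /etc/hosts approach Anarka describes, with illustrative names and addresses on a dedicated storage network:

    # /etc/hosts on every node: pin peer names to the storage interfaces
    10.10.0.11  host1-gluster
    10.10.0.12  host2-gluster

    # probe peers and place bricks using those names, so peer and client
    # traffic resolves onto the storage network
    gluster peer probe host2-gluster
    gluster volume create myvol replica 2 host1-gluster:/bricks/b1 host2-gluster:/bricks/b1

As kkeithley notes above, glusterd itself still listens on all addresses; this only steers which interface peers and clients connect to.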
12:45 skoduri joined #gluster
12:45 [diablo] joined #gluster
12:51 johnmilton joined #gluster
12:53 arpu joined #gluster
12:53 Muthu joined #gluster
12:56 Saravanakmr joined #gluster
12:59 ira joined #gluster
12:59 prth joined #gluster
13:02 luizcpg joined #gluster
13:16 unclemarc joined #gluster
13:26 TvL2386 joined #gluster
13:28 f0rpaxe joined #gluster
13:31 ndarshan joined #gluster
13:32 mhulsman joined #gluster
13:33 nbalacha joined #gluster
13:49 B21956 joined #gluster
13:53 shyam joined #gluster
13:54 d0nn1e joined #gluster
13:55 gem joined #gluster
14:00 k4n0 joined #gluster
14:05 bowhunter joined #gluster
14:06 satya4ever joined #gluster
14:08 ankitraj joined #gluster
14:20 shubhendu joined #gluster
14:21 kdhananjay joined #gluster
14:23 dgandhi joined #gluster
14:28 skylar joined #gluster
14:28 B21956 joined #gluster
14:32 mhulsman joined #gluster
14:34 hagarth joined #gluster
14:35 kkeithley joined #gluster
14:42 guhcampos joined #gluster
14:44 dgandhi joined #gluster
14:44 haomaiwang joined #gluster
14:45 gnulnx left #gluster
14:49 nbalacha joined #gluster
14:52 skylar1 joined #gluster
14:52 mhulsman1 joined #gluster
14:53 hchiramm joined #gluster
15:05 abyss^ JoeJulian: Are you there?:)
15:06 mhulsman1 joined #gluster
15:11 plarsen joined #gluster
15:18 mhulsman joined #gluster
15:31 Gambit15 joined #gluster
15:36 Philambdo joined #gluster
15:42 ashiq joined #gluster
15:45 abyss^ JoeJulian: so, tommorow?;)
15:46 farhorizon joined #gluster
15:52 post-factum abyss^, just leave the question
15:53 abyss^ post-factum: JoeJulian knows what I'd like to ask him :) I have trouble with split-brain on dirs
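The thread ends here; for context, diagnosing a directory split-brain typically starts with heal info and the afr changelog xattrs on each brick (volume and paths illustrative):

    # list entries gluster itself flags as split-brain
    gluster volume heal myvol info split-brain

    # compare the trusted.afr changelog xattrs for the directory across bricks
    getfattr -d -m . -e hex /bricks/b1/path/to/dir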
15:54 shyam joined #gluster
16:04 bluenemo joined #gluster
16:05 wushudoin joined #gluster
16:07 farhorizon joined #gluster
16:08 vbellur joined #gluster
16:08 hagarth left #gluster
16:13 titansmc joined #gluster
16:15 titansmc Hi guys, would anyone recommend how to set up a two-node tomcat cluster with glusterFS? Would you guys share the webapps directory?
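titansmc gets no reply; one possible shape of such a setup, purely as a sketch (names are illustrative, and a plain replica 2 without an arbiter is prone to exactly the split-brain discussed above):

    # a two-node replica volume holding the shared webapps directory
    gluster volume create webapps replica 2 node1:/bricks/webapps node2:/bricks/webapps
    gluster volume start webapps

    # each tomcat node mounts it over its webapps path via fuse
    mount -t glusterfs node1:/webapps /opt/tomcat/webapps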
16:20 marcvw left #gluster
16:25 cholcombe nixpanic, you around?
16:26 kpease joined #gluster
16:29 farhoriz_ joined #gluster
16:29 Caveat4U joined #gluster
16:30 annettec joined #gluster
16:31 skoduri joined #gluster
16:35 nage joined #gluster
16:41 ehermes left #gluster
17:00 Caveat4U joined #gluster
17:00 hchiramm joined #gluster
17:03 nathwill joined #gluster
17:06 Caveat4U joined #gluster
17:06 Philambdo joined #gluster
17:07 prth joined #gluster
17:38 scubacuda_ joined #gluster
17:44 [diablo] joined #gluster
17:50 Caveat4U joined #gluster
18:06 ivan_rossi left #gluster
18:18 hagarth joined #gluster
18:18 vbellur joined #gluster
18:29 shyam left #gluster
18:29 jkroon joined #gluster
18:31 akanksha__ joined #gluster
18:31 hackman joined #gluster
18:42 shaunm joined #gluster
18:44 rastar joined #gluster
19:02 farhorizon joined #gluster
19:03 derjohn_mobi joined #gluster
19:19 hagarth joined #gluster
19:20 vbellur joined #gluster
19:21 msvbhat joined #gluster
19:22 ahino joined #gluster
19:23 Caveat4U joined #gluster
19:44 Pintomatic joined #gluster
19:47 rouven joined #gluster
19:52 wushudoin joined #gluster
19:52 Caveat4U joined #gluster
19:58 farhoriz_ joined #gluster
20:04 rouven hey, i would like to create a special device (urandom) on a gluster volume (3.7.12) for a chroot environment on the glusterfs. whenever i try to mknod it, i get a "permission denied" error. is there any way to achieve such a device on a gluster volume? the volume in question is mounted via nfs, but the error also appears when the volume is mounted via fuse. any hints are greatly appreciated
20:25 cholcombe joined #gluster
20:26 MidlandTroy joined #gluster
20:39 JoeJulian @later tell rouven next time stick around. You asked and left during lunch.
20:39 glusterbot JoeJulian: The operation succeeded.
20:47 moss1 joined #gluster
20:49 moss joined #gluster
20:50 moss joined #gluster
20:50 moss joined #gluster
20:58 mb_ joined #gluster
21:03 farhoriz_ joined #gluster
21:18 Caveat4U joined #gluster
21:18 zat joined #gluster
21:20 skylar joined #gluster
21:20 Caveat4U_ joined #gluster
21:43 PatNarciso_ thanks @JoeJulian, hunspell installed.  should be rockin on next app-start.
21:44 panina joined #gluster
21:45 JoeJulian :)
21:53 farhorizon joined #gluster
21:56 Caveat4U joined #gluster
22:00 Caveat4U joined #gluster
22:11 Caveat4U joined #gluster
22:13 Caveat4U_ joined #gluster
22:14 nage left #gluster
22:19 gem joined #gluster
22:34 jeremyh joined #gluster
22:39 daMaestro joined #gluster
22:45 bluenemo joined #gluster
22:47 swebb left #gluster
22:52 Wizek_ joined #gluster
23:06 daMaestro joined #gluster
23:10 derjohn_mobi joined #gluster
23:12 arpu joined #gluster
23:26 jvandewege joined #gluster
23:33 Caveat4U joined #gluster
23:36 mb_ joined #gluster
23:43 timotheus1 joined #gluster
23:47 moss joined #gluster
23:47 moss joined #gluster
23:56 Caveat4U joined #gluster
