
IRC log for #gluster, 2016-09-12


All times shown according to UTC.

Time Nick Message
00:16 baojg joined #gluster
00:44 jbrooks joined #gluster
00:55 kramdoss_ joined #gluster
01:04 shdeng joined #gluster
01:29 derjohn_mobi joined #gluster
01:34 itisravi joined #gluster
01:45 caitnop joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:54 beemobile joined #gluster
02:09 Gnomethrower joined #gluster
02:26 kshlm joined #gluster
03:00 magrawal joined #gluster
03:00 prasanth joined #gluster
03:14 kramdoss_ joined #gluster
03:20 Gnomethrower joined #gluster
03:28 jobewan joined #gluster
03:29 Philambdo joined #gluster
03:32 nbalacha joined #gluster
03:43 baojg joined #gluster
03:54 riyas joined #gluster
03:55 riyas joined #gluster
03:58 atinm joined #gluster
04:05 RameshN joined #gluster
04:22 ppai joined #gluster
04:31 aravindavk joined #gluster
04:46 ramky joined #gluster
04:49 ashiq joined #gluster
04:53 poornima joined #gluster
04:56 Bhaskarakiran joined #gluster
04:59 kkeithley joined #gluster
05:01 jiffin joined #gluster
05:04 ndarshan joined #gluster
05:10 Pupeno joined #gluster
05:14 PaulCuzner joined #gluster
05:18 rwheeler joined #gluster
05:18 mhulsman joined #gluster
05:19 Philambdo joined #gluster
05:20 hgowtham joined #gluster
05:20 itisravi joined #gluster
05:22 Pupeno joined #gluster
05:28 Gnomethrower joined #gluster
05:32 Jeremy joined #gluster
05:32 Lee1092 joined #gluster
05:35 karnan joined #gluster
05:37 nishanth joined #gluster
05:49 jeremyh joined #gluster
05:50 jeremyh Can somebody please point me in the right direction here? I'd like remote users, whose local user names are not known to the Gluster server, to mount a gluster volume with their Active Directory credentials, which the Gluster server does know. The files would then be written as the AD user. How would I mount the volume to achieve this?
05:52 rastar joined #gluster
05:52 ppai joined #gluster
05:53 skoduri joined #gluster
05:53 Klas jeremyh: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Integrating_RHGS_AD.html this guide (on this and following pages) seems to do what you want (only skimmed briefly)
05:53 glusterbot Title: Chapter 8. Integrating Red Hat Gluster Storage with Windows Active Directory (at access.redhat.com)
05:56 om joined #gluster
05:59 jeremyh @Klas Thanks for the link, but the gluster server is already a domain member (using sssd), however, the linux client machine isn't (and can't be) a domain member. I'm looking for the functional equivalent of "mount --uid=abc --gid=xyz glusterserver:/vol /mnt/mymount" as either an interactive command or fstab method.
06:01 Klas ah, ok
06:05 aspandey joined #gluster
06:05 ankitraj joined #gluster
06:15 ndevos jeremyh: you might be able to achieve that when mounting with cifs (or NFSv4), no such option for fuse
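A sketch of the cifs route ndevos mentions (all names and ids are placeholders, and it assumes the volume is exported via Samba):

```shell
# Interactive mount, forcing ownership of created files to a known uid/gid:
mount -t cifs //glusterserver/vol /mnt/mymount \
    -o username=aduser,domain=EXAMPLE,uid=1001,gid=1001

# Roughly equivalent fstab entry (credentials file keeps the password out of fstab):
# //glusterserver/vol /mnt/mymount cifs credentials=/etc/cifs.creds,uid=1001,gid=1001,_netdev 0 0
```

The uid/gid options here are mount.cifs options, which is why no equivalent exists for the fuse mount.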
06:15 kukulogy joined #gluster
06:16 kdhananjay joined #gluster
06:19 shdeng joined #gluster
06:19 Klas sounds reasonable
06:23 satya4ever joined #gluster
06:24 prth joined #gluster
06:26 Muthu_ joined #gluster
06:26 derjohn_mobi joined #gluster
06:33 kdhananjay joined #gluster
06:34 shdeng joined #gluster
06:34 jtux joined #gluster
06:35 itisravi joined #gluster
06:41 skoduri joined #gluster
06:47 devyani7 joined #gluster
06:47 RameshN joined #gluster
06:47 devyani7 joined #gluster
06:49 kukulogy joined #gluster
06:54 prth joined #gluster
06:56 deniszh joined #gluster
06:56 lalatenduM joined #gluster
06:57 fsimonce joined #gluster
07:00 jri joined #gluster
07:01 sandersr joined #gluster
07:02 ieth0 joined #gluster
07:03 d0nn1e joined #gluster
07:09 ivan_rossi joined #gluster
07:14 jeremyh @ndevos & @Klas Thanks for the suggestions! I was hoping to find a native glusterfs solution, but the more I look the less I find.
07:16 magrawal_ joined #gluster
07:17 mbukatov joined #gluster
07:25 Klas how do you check which version of glusterfs you are currently running (actively, not just installed version)
07:26 Klas just after I asked, I finally found: glusterfs --version
07:26 Klas "gluster -version" seems to be correct
07:27 gem joined #gluster
07:32 kdhananjay joined #gluster
07:33 itisravi_ joined #gluster
07:34 shdeng joined #gluster
07:34 masber joined #gluster
07:43 robb_nl joined #gluster
07:47 RameshN joined #gluster
07:50 derjohn_mob joined #gluster
07:57 kukulogy joined #gluster
08:01 hgowtham joined #gluster
08:03 Pupeno_ joined #gluster
08:04 hybrid512 joined #gluster
08:19 kukulogy joined #gluster
08:20 lalatenduM joined #gluster
08:21 ndevos Klas: a better way would be to check the logs, when a process starts it writes its version out, the "gluster" command will most likely always maych what is installed
08:21 ndevos s/maych/match/
08:22 glusterbot What ndevos meant to say was: Klas: a better way would be to check the logs, when a process starts it writes its version out, the "gluster" command will most likely always match what is installed
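ndevos's log-based check can be scripted; the exact log line format below is an assumption based on 3.7-era glusterd logs, demonstrated against a sample file:

```shell
# Write a sample of the start-up line glusterd logs (format assumed from
# gluster 3.7), then pull the running version out of it:
cat > /tmp/glusterd.log.sample <<'EOF'
[2016-09-12 10:56:12.479721] I [MSGID: 100030] [glusterfsd.c:2318:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.7.6
EOF
grep -o 'version [0-9.]*' /tmp/glusterd.log.sample | tail -n 1
```

Against a real node, point the grep at the glusterd log under /var/log/glusterfs instead of the sample file.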
08:22 Klas ndevos: oh
08:22 kukulogy joined #gluster
08:28 karnan joined #gluster
08:32 jkroon joined #gluster
08:32 Slashman joined #gluster
08:35 aspandey joined #gluster
08:35 kukulogy joined #gluster
08:37 shdeng joined #gluster
08:38 kukulogy joined #gluster
08:39 philiph joined #gluster
08:40 harish joined #gluster
08:41 kukulogy joined #gluster
08:53 jwd joined #gluster
08:55 kotreshhr joined #gluster
08:59 Muthu_ joined #gluster
09:00 itisravi joined #gluster
09:03 rwheeler joined #gluster
09:04 Bhaskarakiran joined #gluster
09:09 Bhaskarakiran joined #gluster
09:18 mhulsman joined #gluster
09:20 Wizek_ joined #gluster
09:20 mhulsman joined #gluster
09:22 Abhay__ joined #gluster
09:23 Abhay__ Currently running smbtorture tests on samba share created out of gluster volume. Getting following errors: https://lists.samba.org/archive/samba/2014-November/186740.html
09:23 glusterbot Title: [Samba] smbtorture tests errors (at lists.samba.org)
09:23 kukulogy joined #gluster
09:28 nbalacha joined #gluster
09:31 Pupeno joined #gluster
09:31 Jacob843 joined #gluster
09:31 arcolife joined #gluster
09:31 ankitraj joined #gluster
09:33 kukulogy joined #gluster
09:36 TZaman joined #gluster
09:45 ankitraj joined #gluster
09:49 harish joined #gluster
09:53 nbalacha joined #gluster
09:56 Bhaskarakiran joined #gluster
10:10 jiffin joined #gluster
10:34 [diablo] joined #gluster
10:37 Pupeno joined #gluster
10:38 webmind joined #gluster
10:38 webmind hello
10:38 glusterbot webmind: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:39 webmind thanks :)
10:40 webmind I've got a 2 node glusterfs setup with samba on top. I've got moments where glusterfs takes all the cpu and samba becomes unreachable. Any suggestions on how to debug this?
10:40 webmind it's not happening now, but has happened in the past few days
10:41 kukulogy joined #gluster
10:46 MadPsy_ joined #gluster
10:48 MadPsy_ Having trouble getting gluster to start properly after a reboot. The result of a 'service glusterfs-server restart' is in this pastebin:  http://pastebin.com/G0Dy30Jm
10:48 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
10:49 MadPsy_ http://paste.fedoraproject.org/427134/77360147/
10:49 glusterbot Title: #427134 • Fedora Project Pastebin (at paste.fedoraproject.org)
10:50 MadPsy_ maybe being blind but there's nothing I see obvious in those logs apart from a pile of failures
10:52 MadPsy_ (the result is 'glusterd' is running but no bricks, nfs server etc.)
10:54 robb_nl joined #gluster
11:04 kukulogy joined #gluster
11:08 kukulogy joined #gluster
11:10 kukulogy_ joined #gluster
11:12 devyani7 joined #gluster
11:12 kukulogy joined #gluster
11:13 devyani7 joined #gluster
11:26 nbalacha MadPsy_, what does gluster v info return?
11:28 kukulogy joined #gluster
11:28 MadPsy_ nbalacha, sorry, I managed to get it started but had to run 'gluster volume stop <vol>' & 'gluster volume start <vol>', then all the processes magically started. I presume this is related to the initial boot process, possibly something network related.
11:28 nbalacha MadPsy_, ok. Glad to hear it is working now
11:29 MadPsy_ It's a bit confusing that glusterfs' service doesn't touch the bricks themselves
11:29 MadPsy_ so if the bricks fail to start on boot then a 'service glusterfs-server restart' will /never/ start them
11:31 kdhananjay joined #gluster
11:38 nbalacha atinm, does the gluster service start the bricks? i thought it did?
11:42 atinm nbalacha, if the volume is in started state and the bricks are not then restarting glusterd service should be bringing up the bricks
11:43 nbalacha atinm, MadPsy_ reported a problem where glusterd appears to start without issues but bricks did not. Any ideas?
11:43 atinm MadPsy_, do you see any errors reported in the brick logs?
11:44 atinm MadPsy_, is this gluster 3.8.2 you are using?
11:44 MadPsy_ atinm, the pastebin (http://paste.fedoraproject.org/427134/77360147/) is every log it generated so I believe that's a 'no'
11:44 glusterbot Title: #427134 • Fedora Project Pastebin (at paste.fedoraproject.org)
11:45 atinm MadPsy_, that's the glusterd log
11:45 MadPsy_ 3.7.6 (ubuntu 16.04)
11:45 atinm MadPsy_, you should be having brick logs in /var/log/glusterfs as well
11:46 MadPsy_ it was a tail of /var/log/glusterfs/*.log /var/log/glusterfs/bricks/*.log
11:46 MadPsy_ in other words, the brick logs were empty, suggesting it was doing nothing
11:47 atinm MadPsy_, does 'ps' show any glusterfsd process running?
11:47 MadPsy_ it does now it's fixed, but no, the only process was glusterd
11:49 MadPsy_ (i.e. fixed with 'gluster volume stop/start [vol]')
11:52 ppai joined #gluster
11:53 atinm MadPsy_, ideally we should have seen brick logs with errors indicating why the process couldn't come up
11:53 atinm MadPsy_, can you share the brick logs?
11:55 MadPsy_ the first brick log is from [2016-09-12 10:56:12.479721]
11:55 MadPsy_ which is when I ran the stop/start on the volume, rather than the glusterfs-server service
11:57 MadPsy_ http://paste.fedoraproject.org/427149/68143214/
11:57 glusterbot Title: #427149 • Fedora Project Pastebin (at paste.fedoraproject.org)
12:03 kukulogy joined #gluster
12:11 kukulogy joined #gluster
12:17 itisravi joined #gluster
12:19 kukulogy joined #gluster
12:32 jiffin1 joined #gluster
12:37 atrius joined #gluster
12:43 ira joined #gluster
12:44 hchiramm joined #gluster
12:44 social joined #gluster
12:47 shyam joined #gluster
12:55 unclemarc joined #gluster
12:55 ashiq joined #gluster
12:55 ic0n joined #gluster
13:03 sanoj joined #gluster
13:05 edong23 joined #gluster
13:18 Wizek_ joined #gluster
13:23 skylar joined #gluster
13:30 hchiramm joined #gluster
13:34 squizzi joined #gluster
13:34 k4n0 joined #gluster
13:42 nbalacha joined #gluster
13:43 JoeJulian @learn heal overload as Try turning off client-side self-heals and leaving it up to the self-heal daemon. Set cluster.data-self-heal, cluster.metadata-self-heal, and cluster.entry-self-heal off.
13:43 glusterbot JoeJulian: The operation succeeded.
13:43 JoeJulian ~heal overload | webmind
13:43 glusterbot webmind: Try turning off client-side self-heals and leaving it up to the self-heal daemon. Set cluster.data-self-heal, cluster.metadata-self-heal, and cluster.entry-self-heal off.
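glusterbot's factoid maps to three volume options; applied to a hypothetical volume named myvol (these need a live cluster):

```shell
gluster volume set myvol cluster.data-self-heal off
gluster volume set myvol cluster.metadata-self-heal off
gluster volume set myvol cluster.entry-self-heal off
```

The self-heal daemon stays on, so heals still happen; they just move out of the client's I/O path.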
13:43 bluenemo joined #gluster
13:45 JoeJulian MadPsy_: How are you determining the bricks aren't started?
13:46 JoeJulian nevermind, I just found that
13:46 MadPsy_ :)
13:47 JoeJulian The thing is, the "Started thread with index" lines at the end of the glusterd log are those bricks starting.
13:48 JoeJulian and restarting glusterd should also start bricks if they're not started (aka, restarting the gluster-server upstart job)
13:48 kotreshhr left #gluster
13:48 MadPsy_ but my assumption that if the brick log is empty that nothing actually tried to start ?
13:49 MadPsy_ is correct yeah*
13:50 gem joined #gluster
13:50 JoeJulian Not necessarily. The log file is created by the brick process. It shouldn't exist if the brick didn't attempt to start.
13:50 Slashman joined #gluster
13:51 JoeJulian The only other thought is that logrotate just happened to run at just the right time to make it empty when you looked.
13:52 MadPsy_ the file itself already existed so doubt it's logrotate
13:52 MadPsy_ (was just empty after the previous day's logrotate)
13:52 MadPsy_ I guess I can see if it happens again
13:53 MadPsy_ one thing I didn't mention was 'gluster volume info [vol]' stated 'Status: Started', but not sure if that has anything to do with processes running or not.
13:54 JoeJulian If it was not "Started" then that would have been the cause.
13:55 MadPsy_ it was 'started' when it was broken
13:55 JoeJulian Yeah, if you can try it again and it comes up without the bricks, try stopping glusterfs-server and running "glusterd -d". That'll start glusterd in the foreground with debug logging. Maybe there will be some clue there.
13:56 MadPsy_ will do thanks... it's a production system but I'll have a play elsewhere
13:56 webmind JoeJulian: thnx
13:56 karnan joined #gluster
13:58 kdhananjay joined #gluster
13:59 MadPsy joined #gluster
14:02 sandersr joined #gluster
14:02 kramdoss_ joined #gluster
14:06 mreamy joined #gluster
14:08 Gambit15 joined #gluster
14:14 johnmilton joined #gluster
14:15 johnmilton joined #gluster
14:27 atinm joined #gluster
14:27 baojg joined #gluster
14:30 Bhaskarakiran joined #gluster
14:31 bowhunter joined #gluster
14:32 riyas joined #gluster
14:37 wushudoin joined #gluster
14:38 jiffin joined #gluster
14:44 [diablo] joined #gluster
14:48 level7 joined #gluster
14:53 derjohn_mob joined #gluster
15:04 jbrooks joined #gluster
15:05 skoduri joined #gluster
15:07 riyas joined #gluster
15:33 plarsen joined #gluster
15:35 Slashman joined #gluster
15:46 xavih joined #gluster
15:47 malevolent joined #gluster
15:53 derjohn_mob joined #gluster
15:55 shaunm joined #gluster
15:56 malevolent joined #gluster
15:56 bkolden joined #gluster
16:06 xavih joined #gluster
16:06 malevolent joined #gluster
16:07 jiffin joined #gluster
16:11 derjohn_mob joined #gluster
16:21 kdhananjay joined #gluster
16:27 plarsen joined #gluster
16:32 Gambit15 joined #gluster
16:34 Gambit15 Hey guys, we had a hiccup in the datacenter & all of the gluster hosts went offline. When they came back up, none of the gluster volumes came back automatically.
16:35 Gambit15 Before I start everything again manually, I'd like to find out why they didn't come back up by themselves
16:36 Gambit15 Could anyone point me to the key logs I should be looking at? There are a lot of different logs, and each with a lot of info, so any pointers for the best place to look for the needle in the haystack would be much appreciated
16:41 jobewan joined #gluster
16:43 aspandey joined #gluster
16:47 jwd joined #gluster
16:51 janlam7 joined #gluster
16:51 hagarth Gambit15: you would need to start with log files of glusterd
16:53 Gambit15 glustershd.log?
16:53 Gambit15 That's not been updated since the volumes went down
16:54 hagarth *glusterd.log
16:57 kdhananjay joined #gluster
17:07 kpease joined #gluster
17:07 tom[] joined #gluster
17:13 kpease joined #gluster
17:41 ieth0 joined #gluster
18:18 social joined #gluster
18:26 gem joined #gluster
18:38 jiffin joined #gluster
19:03 JoeJulian Gambit15: etc-glusterfs-glusterd.log (< 3.8) or glusterd.log (>=3.8)
19:09 B21956 joined #gluster
19:11 Gambit15 JoeJulian, the glusterd service seems to be refusing because quorum isn't met, but all servers are pingable
19:13 Gambit15 But it's a bit of a bootstrap situation then. I can't start the glusterd service on the servers because the glusterd service isn't active on the peers
19:14 Gambit15 Why would glusterd refuse to start without quorum? AFAIK, it should start the service, but with the volumes in a failed state until quorum is regained
19:14 JoeJulian You can start glusterd, it's just the bricks that won't start until you have enough servers to make quorum.
19:16 mhulsman joined #gluster
19:17 Gambit15 Hmm...
19:17 Gambit15 Starting GlusterFS, a clustered file-system server... glusterd.service: control process exited, code=exited status=1  Failed to start GlusterFS, a clustered file-system server.
19:17 Gambit15 ...but generating no logs
19:18 JoeJulian gotta love upstart
19:19 JoeJulian So try starting it by hand in the foreground: glusterd -d
19:19 JoeJulian s/-d/--debug/
19:19 glusterbot JoeJulian: s/-d/'s karma is now -1
19:19 glusterbot What JoeJulian meant to say was: So try starting it by hand in the foreground: glusterd --debug
19:19 JoeJulian lol
19:37 squizzi_ joined #gluster
19:41 julim_ joined #gluster
19:44 squizzi joined #gluster
19:45 Gambit15 heh
19:47 Gambit15 JoeJulian: https://paste.fedoraproject.org/427337/14737096/
19:47 glusterbot Title: #427337 • Fedora Project Pastebin (at paste.fedoraproject.org)
19:50 JoeJulian Gambit15: 'strcpy (template, "/tmp/tmp.XXXXXX"); tmp_fd = mkstemp (template);' is failing. Do you have no /tmp or is it full or read only?
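JoeJulian's diagnosis (mkstemp failing) comes down to /tmp being missing, full, or read-only; a quick way to check all three:

```shell
df -h /tmp                     # a full filesystem shows 100% here
mount | grep /tmp || true      # 'ro' in the options means read-only
touch /tmp/.rwtest && rm /tmp/.rwtest && echo '/tmp is writable'
```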
19:54 Gambit15 oops, that'd be the VHD I copied over this morning!
19:55 Gambit15 Just tried it again, and it failed - port in use. Upstart still reporting that the service is offline, ps argues otherwise
19:55 * Gambit15 grumbles
19:56 Gambit15 And on the subsequent hosts, it's starting normally, manually
19:57 Gambit15 Why would the service not have restarted at boot?
19:57 deniszh joined #gluster
19:58 Gambit15 /tmp was only filled about an hour ago. The servers came back up yesterday...
20:01 Philambdo joined #gluster
20:03 slunatecqo joined #gluster
20:06 slunatecqo Hi - I have a question about the client. If I have high latency to the server and the client is mounted, will reads on the client have to wait out that latency, or does the latency only affect how quickly server-side changes show up?
20:06 Gambit15 Ah, so reading the logs it looks like it automatically shuts down the glusterd service if it's unable to start the volumes...?
20:07 ira joined #gluster
20:07 Gambit15 slunatecqo, "latency"
20:07 Gambit15 It depends on the size & number of files you're reading
20:07 slunatecqo Gambit15: ping servername shows high round-trip times
20:07 Gambit15 But yes, latency affects everything
20:08 slunatecqo So there is not something like cache on client?
20:09 Gambit15 Not sure if gluster has a cache, but it'd still need to constantly verify that the data you're reading in the cache is current
20:09 JoeJulian There is a cache for open files on the client, but this is a clustered filesystem. File operations on shared files should use locks. Locks have to be acquired from a server or else other clients would have no way of knowing that your client has that lock.
20:09 Gambit15 The last thing you want is a filesystem giving different results to different hosts.
20:09 JoeJulian +1
20:11 slunatecqo Sometimes an old answer straight away is better than a fresh answer after a long wait...
20:11 slunatecqo Could I solve it by using replicated bricks and accessing bricks directly?
20:13 Gambit15 slunatecqo, you can't have an efficient filesystem distributed across the network if the network is crap
20:14 Gambit15 What's the design of your network? Why are the peerage links so poor?
20:14 Gambit15 JoeJulian, https://paste.fedoraproject.org/427345/11128147/
20:14 glusterbot Title: #427345 • Fedora Project Pastebin (at paste.fedoraproject.org)
20:14 slunatecqo Gambit15: Well, the bad thing is that I can't change the network. My job is to solve this problem; coming up with the design is my job
20:15 Gambit15 slunatecqo, well fix the network before you try to run services over it!
20:15 Gambit15 What is the latency you're getting?
20:15 Gambit15 Is it all a local network?
20:15 slunatecqo Not at all
20:16 slunatecqo They gave me example of Czech Republic and Australia
20:16 JoeJulian Are they aware of the speed of light?
20:16 JoeJulian Is this a read-only need, or are you expecting to write?
20:16 slunatecqo But they don't mind being served an "old" file
20:17 Gambit15 JoeJulian, WRT that last paste, I'm trying to work out what the killer was that prevented the service coming back up automatically.   ...or in this case, why was the service killed after connection problems?
20:18 Gambit15 slunatecqo, well then create your volumes on local bricks & use geo-replication.
20:18 slunatecqo So accessing the bricks directly instead of mounting them should be the solution?
20:18 Gambit15 (I've not used or read much into geo-rep, but I expect that'll be far kinder on latency issues)
20:18 JoeJulian Unless you don't need to write, then just have a master volume that uses geo-replication to feed data to your remotes.
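The master-plus-geo-replication shape JoeJulian suggests, roughly (hostnames and volume names are placeholders; syntax as in the 3.7 admin guide, so double-check against your version):

```shell
gluster volume geo-replication mastervol slavehost::slavevol create push-pem
gluster volume geo-replication mastervol slavehost::slavevol start
gluster volume geo-replication mastervol slavehost::slavevol status
```

Writes go only to the master volume; the remote sites get an asynchronously replicated, effectively read-only copy, which sidesteps the wide-area latency on reads.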
20:19 JoeJulian slunatecqo: no, bricks belong to gluster. You cannot write to them directly.
20:20 Gambit15 slunatecqo, *never* manually make changes to your bricks
20:21 slunatecqo So I will have a gluster server on every node, and I will access the shared volume from the host?
20:21 Gambit15 ...unless everything's already foobared
20:21 JoeJulian slunatecqo: can you ensure that a write will only happen in one place? What if the file is edited in Czech Republic and Australia simultaneously? Whose data do you throw away?
20:21 Gambit15 slunatecqo, have you read the documentation?
20:21 Gambit15 https://gluster.readthedocs.io/en/latest/
20:21 glusterbot Title: Gluster Docs (at gluster.readthedocs.io)
20:22 slunatecqo I did spent few hours there
20:22 JoeJulian Gambit15: The "resolve brick failed in restore" was due to the brick hostname being unresolvable (dns failures).
20:23 Gambit15 slunatecqo: Read it well, and then read it again. You need to understand Gluster far better if you want to use it in production
20:24 slunatecqo So what you are trying to say is that the server should be in one place, and if I have nodes that use data far away, they will just have to wait out the latency? (I will read it, but I need to know what I am looking for)
20:24 Gambit15 Not to be offensive, but you just asked a *very* basic question, so you need to do far more homework
20:24 JoeJulian imho, slunatecqo and Gambit15, make some volumes and play with them. See how they work.
20:25 JoeJulian slunatecqo: you have never answered the question about where the writes will occur, so I cannot answer your last question.
20:25 slunatecqo JoeJulian: I would tell you if I knew...
20:25 Gambit15 slunatecqo, the idea is that gluster runs on every server you want to utilise. If you want to access the data on a gluster volume, you should use gluster's FUSE client, or its NFS server.
20:26 slunatecqo But let me as diferently..
20:26 ieth0 joined #gluster
20:28 slunatecqo I have a docker cluster with containers spread around the world. I need to share their changes and state as if they were one application. Is gluster a good solution? Do you have any ideas on what I should try (another program, topology, anything)?
20:28 Gambit15 JoeJulian, is there a way to stop the gluster service shutting itself down because of a network issue though? I'd far prefer it to stay online & keep trying
20:30 JoeJulian No. With systemd you can set the restart on a service to keep restarting on a failure (a feature I love) but upstart isn't so smart. I guess if you're going to use a non-systemd distro you could use supervisord to make it smarter.
20:30 Gambit15 CentOS 7
20:30 JoeJulian Oh! Well then...
20:33 om joined #gluster
20:35 JoeJulian Gambit15: https://gist.github.com/joejulian/7d7e733406e725c2bdbf77a803502fdf
20:35 glusterbot Title: Make glusterd restart continuously if it fails: /etc/systemd/system/glusterd.service.d/10_restart.conf · GitHub (at gist.github.com)
20:41 * Gambit15 hands JoeJulian a beer
20:43 Gambit15 Still getting into this systemd stuff. Seems many of the good ol' unix tools of the past have been sent to the cemetery.
20:44 * Gambit15 waves farewell to netstat
20:44 JoeJulian I still use netstat...
20:44 JoeJulian I'm loving systemd though. It has so many well thought out features.
20:44 JoeJulian And creating a service is so easy.
20:45 Gambit15 I don't know enough about it to critique it, but it's certainly a big step away from the old rc methodology of yore
20:46 Gambit15 netstat & ifconfig are no longer available in CentOS 7 & Ubuntu 16
20:46 * post-factum removed net-tools yesterday
* post-factum shames JoeJulian
20:46 JoeJulian hehe
20:47 JoeJulian Ok, how do I netstat -tlnp now?
20:47 post-factum ss -tnlp?
20:47 snehring ss -tlnp
20:47 JoeJulian Oh, well that's better. I'm lazy.
20:48 JoeJulian but it sure takes a lot longer.
20:48 post-factum do not hesitate to google^Wask us here
20:48 Gambit15 heh
20:49 JoeJulian Meh, it was germane to the conversation.
20:49 Gambit15 ss was actually fairly simple, it's mostly a drop-in replacement. Everything else has been absorbed into "ip"
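For reference, the usual net-tools to iproute2 mappings (the ss flags mirror netstat's):

```shell
ss -tlnp    # netstat -tlnp : listening TCP sockets with owning process
ip addr     # ifconfig
ip route    # route -n
ip neigh    # arp -a
```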
20:51 Gambit15 eesh, going to have to get around to overhauling my old vault of trusty scripts
20:59 Gambit15 Goo^W^WJoeJulian, /etc/systemd/system/glusterd.service.d doesn't exist. Is it enough to just create the dir & file, or do I need to poke systemd?
21:00 misc you need to reload systemd (systemctl daemon-reload)
21:00 misc and check if that's not in /usr/lib too
21:00 post-factum Gambit15: mkdir/create file, the systemctl daemon-reload
21:00 post-factum *then
21:01 post-factum Gambit15: once you do this, you'll see drop-in available in systemctl status glusterd
21:01 Gambit15 Aha! it's in /usr/lib
21:02 Gambit15 ...or is it best practice to "override" in /etc rather than update /usr/lib?
21:02 snehring yeah
21:02 misc it is best practice
21:02 snehring local changes should go in /etc
21:03 misc well, you can also use the dro-in configuration system
21:03 post-factum Gambit15: do not touch /usr/lib
21:03 misc create /etc/systemd/system/foo.service.d and place a file bar.conf there
21:03 misc (with bar.conf having the change you want, like [Service]\n User=bar
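Putting misc's steps together for the glusterd case discussed earlier (run as root; the Restart values are an assumption, since the gist's contents aren't quoted in the log):

```shell
mkdir -p /etc/systemd/system/glusterd.service.d
cat > /etc/systemd/system/glusterd.service.d/10_restart.conf <<'EOF'
[Service]
Restart=on-failure
RestartSec=5
EOF
systemctl daemon-reload
systemctl status glusterd   # the drop-in appears under the "Drop-In:" header
```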
21:06 Gambit15 Perfect, cheers all!
21:07 webmind left #gluster
21:11 hchiramm joined #gluster
21:13 ChrisHolcombe joined #gluster
21:19 jobewan joined #gluster
21:37 shyam joined #gluster
21:38 Wizek_ joined #gluster
21:44 om joined #gluster
21:44 hackman joined #gluster
21:45 john51 joined #gluster
21:50 hackman joined #gluster
22:16 mreamy joined #gluster
22:21 Pupeno joined #gluster
22:38 kukulogy joined #gluster
23:45 prth joined #gluster
