
IRC log for #gluster, 2017-09-06


All times shown according to UTC.

Time Nick Message
00:41 jkroon_ joined #gluster
00:46 bit4man joined #gluster
01:08 MrAbaddon joined #gluster
01:21 shyu joined #gluster
01:51 ilbot3 joined #gluster
01:51 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:52 jbrooks joined #gluster
02:02 gospod2 joined #gluster
02:27 bit4man joined #gluster
02:39 gyadav joined #gluster
02:43 aravindavk joined #gluster
02:55 vbellur joined #gluster
02:58 aravindavk joined #gluster
03:31 Saravanakmr joined #gluster
03:44 itisravi joined #gluster
03:51 Guest9038 joined #gluster
03:53 susant joined #gluster
04:07 nbalacha_ joined #gluster
04:09 karthik_us joined #gluster
04:20 atinm|brb joined #gluster
04:28 jkroon__ joined #gluster
04:29 atinm joined #gluster
04:53 nbalacha_ joined #gluster
04:56 dijuremo joined #gluster
04:58 aravindavk joined #gluster
05:02 omie888777 joined #gluster
05:03 itisravi joined #gluster
05:03 dijuremo joined #gluster
05:11 riyas joined #gluster
05:24 ndarshan joined #gluster
05:29 ilbot3 joined #gluster
05:29 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
05:36 nbalacha_ joined #gluster
05:46 buvanesh_kumar joined #gluster
05:49 Saravanakmr joined #gluster
05:51 hgowtham joined #gluster
05:53 nbalacha_ joined #gluster
05:59 apandey joined #gluster
06:00 Saravanakmr joined #gluster
06:02 skumar joined #gluster
06:02 anthony25 joined #gluster
06:04 kotreshhr joined #gluster
06:04 kotreshhr left #gluster
06:05 jiffin joined #gluster
06:13 Prasad joined #gluster
06:18 aravindavk joined #gluster
06:21 nbalacha joined #gluster
06:22 psony joined #gluster
06:29 kotreshhr joined #gluster
06:30 anthony25 joined #gluster
06:30 msvbhat joined #gluster
06:35 apandey_ joined #gluster
06:36 apandey_ joined #gluster
06:43 poornima joined #gluster
06:48 zcourts joined #gluster
06:49 edong23 joined #gluster
06:49 rastar joined #gluster
06:54 psony joined #gluster
07:01 nbalacha joined #gluster
07:15 ivan_rossi joined #gluster
07:15 riyas joined #gluster
07:23 fsimonce joined #gluster
07:27 shdeng joined #gluster
07:38 nbalacha joined #gluster
07:45 [diablo] joined #gluster
07:55 apandey__ joined #gluster
07:56 prasanth joined #gluster
08:01 jtux joined #gluster
08:04 mohan joined #gluster
08:08 msvbhat joined #gluster
08:12 _KaszpiR_ joined #gluster
08:15 susant joined #gluster
08:22 prasanth joined #gluster
08:32 _KaszpiR_ joined #gluster
08:43 nbalacha joined #gluster
08:54 XpineX joined #gluster
08:56 mrw___ joined #gluster
08:57 mrw___ Hi, is there a solution for the following problem: give a filesystem an owner (user, group) by name, because the uid and gid are random and differ from server to server?
08:59 mrw___ e.g. group docker should be owner and have write access to a mounted brick, but its groupid is 117 on one node, 138 on another, 999 on the third, …
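(One common workaround for the mismatched-GID problem, not specific to gluster, is to pin the group to the same numeric GID on every node before data is written; a minimal sketch, assuming GID 5000 is free on all nodes and /mnt/myvol is a client mount of the volume, both hypothetical:)

    # run on every node
    getent group docker          # shows the node's current, differing GID
    groupmod -g 5000 docker      # renumber the group to one common GID
    # then, once, re-own existing data through a client mount (not directly on the bricks):
    chgrp -R docker /mnt/myvol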
09:01 gyadav_ joined #gluster
09:03 msvbhat joined #gluster
09:08 apandey_ joined #gluster
09:09 apandey joined #gluster
09:13 jtux joined #gluster
09:18 cyberbootje joined #gluster
09:23 kdhananjay joined #gluster
09:24 Larsen_ joined #gluster
09:29 MrAbaddon joined #gluster
09:31 _KaszpiR_ joined #gluster
09:31 kotreshhr left #gluster
09:32 poornima joined #gluster
09:33 shdeng joined #gluster
09:37 gyadav__ joined #gluster
09:37 _KaszpiR_ joined #gluster
09:42 ndevos mrw___: I think Heketi and gluster-kubernetes solve that, but I don't know how - maybe ask in #heketi ?
09:42 nbalacha joined #gluster
09:48 atinm joined #gluster
09:51 Guest9038 joined #gluster
09:59 foster joined #gluster
10:04 cloph joined #gluster
10:18 nbalacha joined #gluster
10:34 atinm joined #gluster
10:35 baber joined #gluster
10:39 shyam joined #gluster
10:52 Klas mrw___: ldap?
10:54 gyadav_ joined #gluster
11:07 anthony25 joined #gluster
11:12 itisravi joined #gluster
11:12 gyadav__ joined #gluster
11:15 anthony25 joined #gluster
11:19 riyas joined #gluster
11:21 buvanesh_kumar joined #gluster
11:41 mrw___ Klas, the user is created by the system, LDAP will be available after and on top of docker + gluster
11:41 Klas I'm just saying, ldap is very much intended to solve these issues =)
11:43 mrw___ I know …
11:44 mrw___ But often you need gluster first, before any additional services can run, so I'd expect a solution within gluster… ;)
11:44 Klas not sure myself =)
11:45 mrw___ I have LDAP, but the data is stored on gluster. ;)
11:45 mrw___ so a chicken and egg problem
11:46 Klas oh, you store the catalogue in gluster?
11:46 mrw___ catalog = LDAP-DB? Yes.
11:46 mrw___ All data of all services,
11:47 mrw___ Due to (a) redundancy and (b) distribution (which is needed on failover)
11:51 mrw___ as far as I see, option gid is not supported for glusterfs mount, that would have been an easy solution
11:52 mrw___ → Other way to ask the same question: how can I mount a glusterfs for a specific user/group on a specific client (and other users/groups on other clients)?
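(The native FUSE client indeed has no uid=/gid= mount option, so one possible workaround is to re-export the gluster mount through bindfs, which can force ownership per client; a rough sketch with placeholder names:)

    # mount the volume normally, then overlay it with forced ownership for this client
    mount -t glusterfs server1:/myvol /mnt/gluster-raw
    bindfs --force-user=appuser --force-group=docker /mnt/gluster-raw /mnt/gluster
    # processes on this client now see every file as appuser:docker,
    # regardless of the numeric IDs on the servers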
12:30 jiffin joined #gluster
12:35 shyam joined #gluster
12:48 Humble joined #gluster
13:00 skumar_ joined #gluster
13:03 baber joined #gluster
13:08 marin[m] i am looking at the docs and there is this phrase about striped replicated volumes: In this release, configuration of this volume type is supported only for Map Reduce workloads.
13:08 marin[m] any idea what it means?
13:08 marin[m] i just set up a striped replicated volume and everything works normally
13:09 marin[m] and as expected
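(For reference, a striped replicated volume of that era was created roughly as below, with placeholder server and brick names; note that the stripe translator has since been deprecated in favour of sharding:)

    # 2 x 2 = 4 bricks: stripe across two replica pairs
    gluster volume create stripe-rep-vol stripe 2 replica 2 transport tcp \
        server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/b3 server4:/bricks/b4
    gluster volume start stripe-rep-vol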
13:18 atinm joined #gluster
13:18 nbalacha joined #gluster
13:44 msvbhat joined #gluster
13:50 jiffin joined #gluster
13:51 aravindavk joined #gluster
13:54 Wayke91 joined #gluster
13:57 farhorizon joined #gluster
13:57 nbalacha joined #gluster
14:02 rafi1 joined #gluster
14:06 jstrunk joined #gluster
14:15 atrius joined #gluster
14:27 bit4man joined #gluster
14:35 MrAbaddon joined #gluster
14:43 ilbot3 joined #gluster
14:43 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
14:44 cyberbootje joined #gluster
14:48 ilbot3 joined #gluster
14:48 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
14:49 wushudoin joined #gluster
14:49 wushudoin joined #gluster
14:57 X-ian_ hi
14:57 glusterbot X-ian_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an
14:58 X-ian_ after updating from 3.10.4-1 to 3.10.5-1 on debian 8 the gfs mount fails.
14:59 X-ian_ logfiel says something like "received signum (1), shutting down" and "0-fuse: Unmounting '/data'"
14:59 X-ian_ s/logfiel/logfile/
14:59 glusterbot What X-ian_ meant to say was: logfile says something like "received signum (1), shutting down" and "0-fuse: Unmounting '/data'"
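(A generic way to debug a failing FUSE mount is to retry it by hand with verbose client logging; a sketch with placeholder server and volume names:)

    # re-try the mount manually with debug logging
    mount -t glusterfs -o log-level=DEBUG server1:/myvol /data
    # the client log is typically named after the mount point, e.g. for /data:
    less /var/log/glusterfs/data.log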
15:04 rafi1 joined #gluster
15:06 atinm joined #gluster
15:09 X-ian_ guys i'm really in trouble now, the estimated downtime was 1h here
15:11 rafi1 joined #gluster
15:12 gyadav__ joined #gluster
15:14 dominicpg joined #gluster
15:15 fury having trouble deploying glusterfs to my kubernetes cluster - running kubernetes 1.7.5, docker 1.11.2 on the worker nodes, liveness probe failed: https://paste.fedoraproject.org/paste/xL2GE20bH93nhGhb7K4jJw
15:15 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
15:16 fury glusterfs daemonset created from https://github.com/heketi/heketi/blob/master/extras/kubernetes/glusterfs-daemonset.json
15:16 glusterbot Title: heketi/glusterfs-daemonset.json at master · heketi/heketi · GitHub (at github.com)
15:16 mbrandeis joined #gluster
15:17 X-ian_ what's in /var/log/glusterfs ?
15:18 fury i have a cli.log, cmd_history.log, and glusterd.log, as well as a few dirs - bricks, geo-replication and geo-replication-slaves
15:18 fury want glusterd.log?
15:19 alvinstarr joined #gluster
15:19 X-ian_ seems the logical choice
15:20 fury grabbed both cli.log and glusterd.log, saw an error in cli.log not sure if it means something - https://paste.fedoraproject.org/paste/q8XwNDMoFJuHJyOvm91Kig
15:20 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
15:21 rafi1 joined #gluster
15:21 fury oof
15:22 fury those logs are from when i set up glusterfs on the nodes outside of kubernetes
15:22 fury prolly should take down the glusterfs nodes before attempting to run them in kubernetes i guess
15:23 X-ian_ don't know much about that.
15:24 fury well i notice the glusterfs-daemonset.json file is configuring it to mount the same host paths that the glusterfs-server apt package i had previously installed is using
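(A containerized glusterd sharing /var/lib/glusterd and the brick paths with a host-installed package is a known source of conflicts; a cautious cleanup sketch for each worker node, assuming the old host-side install and its volumes are disposable:)

    systemctl stop glusterfs-server || systemctl stop glusterd   # Debian package vs. upstream unit name
    systemctl disable glusterfs-server glusterd 2>/dev/null
    # only if the old configuration really is disposable -- this wipes the host's gluster state:
    # rm -rf /var/lib/glusterd/*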
15:28 farhorizon joined #gluster
15:30 X-ian_ sorry man. i've been tracking down some weird problem which just got worse (will not start any more)
15:30 fury no worries. mine isn't starting either :D
15:31 Guest9038 joined #gluster
15:31 fury i just deleted the one i'd installed through apt, glusterd.log now says it can't find some files: https://paste.fedoraproject.org/paste/nHjxXktF7vVbNseqYQ~YdA
15:31 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
15:31 jiffin joined #gluster
15:38 nbalacha joined #gluster
15:39 X-ian_ downgraded to 3.10.4.
15:39 X-ian_ after commenting out the invoke-rc command it finally started
15:39 Wayke91_ joined #gluster
15:40 X-ian_ i do not need this kinda stuff
15:42 buvanesh_kumar joined #gluster
15:43 X-ian_ I think I'm done with gluster for good.
15:43 X-ian_ bye, and thanks for the fish
15:46 dijuremo Are there any best practices for setting network.OPTIONS ? For example, the default of network.ping-timeout=42 seconds is very high for hosting VMs. What would be the recommended value for VM hosting? How about the other network.OPTIONS, what adjustments should be made?
15:48 buvanesh_kumar_ joined #gluster
15:50 kpease joined #gluster
15:51 kpease joined #gluster
15:52 dominicpg joined #gluster
15:55 dominicpg joined #gluster
16:09 mbrandeis joined #gluster
16:16 PatNarciso joined #gluster
16:21 Muthu joined #gluster
16:26 msvbhat joined #gluster
16:33 decayofmind joined #gluster
16:34 rafi joined #gluster
16:35 ivan_rossi left #gluster
16:38 cyberbootje joined #gluster
16:50 JoeJulian dijuremo: ,,(ping-timeout)
16:50 glusterbot dijuremo: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. With an average MTBF of 45000 hours for a server, even just a replica 2 would result in a 42 second MTTR every 2.6 years, or 6 nines of uptime.
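(For reference, the option is inspected and changed per volume with the standard CLI; the volume name below is a placeholder:)

    gluster volume get myvol network.ping-timeout      # show the current value (default 42)
    gluster volume set myvol network.ping-timeout 42   # change it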
16:54 dijuremo JoeJulian: MTBF does not cover reboots, so when you reboot a server you have to wait 42 seconds for I/O to continue?
16:54 JoeJulian no
16:55 JoeJulian When you reboot, the process is sent a sigterm from which it properly shuts down the tcp connections. This avoids the ping-timeout whose job is to allow tcp recovery during momentary network interruptions.
16:55 rafi1 joined #gluster
16:56 JoeJulian The only reason that wouldn't be so is if your distro stupidly (imho) stops the network as part of the shutdown process.
16:56 Wayke91 joined #gluster
16:57 Wayke91_ joined #gluster
16:58 Wayke91_ joined #gluster
16:59 dijuremo JoeJulian: OK, this is running on RHEL 7.x with their official 3.8.4 gluster, so I will keep the 42 seconds. It is a replica 3 setup. Doing a test now while running some write benchmarks on a windows VM to see what happens...
17:02 dijuremo JoeJulian: Looks good, i/o did not stop, so then only if a server goes down uncleanly or if there is a network failure (my 3 servers are located in different buildings), then I would see a VM freeze issue. Is there a minimum value I should not exceed if I decide to change it? Given my higher risk of network disconnections?
17:03 JoeJulian The desire is to allow intermittent network drops to recover sanely, so whatever works best for your use case. You might also consider changing ext4's behavior within your VMs to just prevent them going read-only.
17:03 JoeJulian https://joejulian.name/blog/keeping-your-vms-from-going-read-only-when-encountering-a-ping-timeout-in-glusterfs/
17:03 glusterbot Title: Keeping your VMs from going read-only when encountering a ping-timeout in GlusterFS (at joejulian.name)
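(The usual ext4-side tweak is to change the filesystem's error behaviour inside the guest so an I/O stall does not flip it read-only; a minimal sketch, with an illustrative device name:)

    # inside the VM, for each ext4 filesystem
    tune2fs -e continue /dev/vda1                        # or set errors=continue in /etc/fstab
    tune2fs -l /dev/vda1 | grep -i 'errors behavior'     # verify the new setting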
17:04 dijuremo JoeJulian: I found your post about timeouts. How about VMs running with XFS?
17:05 JoeJulian I could not find a way to make xfs more tolerant at that time.
17:06 jiffin joined #gluster
17:06 dijuremo Given that this won't happen too often, is going down to 3 or 5 seconds too bad?
17:07 dijuremo Do you know if the freeze will bother Windows VMs badly?
17:07 msvbhat joined #gluster
17:15 rafi joined #gluster
17:16 jiffin joined #gluster
17:16 alok_ joined #gluster
17:17 alok_ hello here
17:17 alok_ got a quick question, does anyone know of professional technical support providers for Gluster File System?
17:43 rafi1 joined #gluster
17:46 Saravanakmr joined #gluster
17:46 dijuremo JoeJulian: Seems like even on regular reboots the VM hung for ~42 seconds.
17:48 dijuremo JoeJulian: I had rebooted my server gluster-srv03 a few times. If I reboot that one again, I do not see a VM freeze. However, I rebooted gluster-srv02 for the first time and the Windows VM I am testing froze completely for ~42+ seconds. Once gluster-srv02 came back up and all files had been completely healed, I rebooted gluster-srv01 and also noticed the ~42+ second freeze.
17:51 jiffin joined #gluster
17:53 jiffin joined #gluster
18:19 Larsen_ joined #gluster
18:22 baber joined #gluster
18:35 JoeJulian dijuremo: I'd probably do some wireshark traces. That shouldn't be happening, of course.
18:38 dijuremo JoeJulian: What is the penalty (high cost) of changing network.ping-timeout to say 3 or 5 seconds?
18:39 jiffin joined #gluster
18:39 _KaszpiR_ joined #gluster
18:40 JoeJulian high cpu utilization on the servers and slow response. This has been known to cause a complete lack of response for periods of time that have exceeded short ping-timeouts causing a cascading fault.
18:43 dijuremo Say the server CPUs are overkill, would it still matter? As in, these were originally purchased to try and do hyperconvergence (so dual Xeon 10 cores / 20 threads each), but that did not quite work out, so now I have lots of CPU and RAM just for gluster...
18:44 JoeJulian I can only suggest trying it with your anticipated load.
18:45 JoeJulian In my own implementations, I've never changed ping-timeout.
18:45 _KaszpiR_ joined #gluster
18:46 dijuremo Is it possibly an issue of systemd? Not shutting down things in proper order?
18:46 dijuremo These machines are running RHEL 7.4 with RHEL's gluster 3.8.4 as I had stated earlier.
18:46 omie888777 joined #gluster
18:47 JoeJulian It's possible, yes. I've been running Arch for about 4 years now, so I haven't even touched RHEL7.
18:47 dijuremo I had read in one of the posts where they suggested modifying network.ping-timeout that things needed to be shut down in a very specific order...
18:47 dijuremo I think it was something like glusterfs then glusterfsd then glusterd
18:49 JoeJulian Doesn't really matter as long as glusterfsd processes are terminated before it loses connection with the network. It was my understanding that RHEL7's configuration doesn't ever stop the network during shutdown so that should be ok. Maybe it's firewalld?
18:51 JoeJulian I guess my opinion on this is that shortening ping-timeout is masking tech debt.
18:52 JoeJulian If I have time to find the source of a problem and fix it, it always saves me time in the long run. Sometimes you just don't have time for that and masking makes your boss happy. <shrug>
18:59 major joined #gluster
19:03 dijuremo So no distinction between glusterfsd and glusterd services in systemd on RHEL gluster 3.8.4 packages... it is all handled from glusterd.
19:07 blue joined #gluster
19:10 cyberbootje joined #gluster
19:20 baber joined #gluster
19:24 JoeJulian dijuremo: Actually, when that service stops, all the glusterfsd (bricks) are left running. It's the final TERM (before the eventual KILL) that stops glusterfsd.
19:26 JoeJulian One potential workaround would be to have a service whose start command does nothing, but the stop command does 'pkill glusterfsd'. Enable and start the service and when it's stopped, the bricks will stop.
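(A rough sketch of such a unit, with a hypothetical name, ordered so its stop action runs while the network is still up during shutdown:)

    # /etc/systemd/system/glusterfsd-stop.service
    [Unit]
    Description=Stop gluster brick processes before shutdown
    After=network-online.target glusterd.service

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/bin/true
    ExecStop=/usr/bin/pkill -TERM glusterfsd

    [Install]
    WantedBy=multi-user.target

(After creating it, run systemctl daemon-reload and systemctl enable --now glusterfsd-stop.service; on shutdown systemd stops the unit, and thus runs the pkill, before tearing down the network.)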
19:26 dijuremo joined #gluster
19:33 guhcampos joined #gluster
19:36 dijuremo JoeJulian, guess I can just make a "special" procedure for reboots: 1. manually stop glusterd, then reboot. Then leave the 42-second timeout
20:01 JoeJulian dijuremo: Well if you're going to do that, manually stop glusterd; pkill glusterfsd; reboot
20:03 farhorizon joined #gluster
20:04 msvbhat joined #gluster
20:22 ThHirsch joined #gluster
20:56 Peppard joined #gluster
20:59 farhorizon joined #gluster
21:15 MrAbaddon joined #gluster
21:20 farhorizon joined #gluster
21:28 major joined #gluster
21:32 zcourts joined #gluster
21:45 farhorizon joined #gluster
22:05 msvbhat joined #gluster
22:25 zcourts_ joined #gluster
22:59 zcourts joined #gluster
23:11 zcourts_ joined #gluster
23:15 zcourts joined #gluster
23:30 Telsin joined #gluster
23:47 Telsin joined #gluster
