
IRC log for #gluster, 2017-07-11


All times shown according to UTC.

Time Nick Message
00:03 masber joined #gluster
00:09 jiffin1 joined #gluster
00:24 jiffin1 joined #gluster
00:34 jiffin1 joined #gluster
00:37 niknakpaddywak joined #gluster
00:39 Alghost_ joined #gluster
00:58 Alghost joined #gluster
01:05 gyadav__ joined #gluster
01:09 victori joined #gluster
01:09 caitnop joined #gluster
01:11 pioto_ joined #gluster
01:16 Jacob8432 joined #gluster
01:19 victori joined #gluster
01:23 gyadav__ joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:02 major joined #gluster
02:09 riyas joined #gluster
02:09 kramdoss_ joined #gluster
02:18 prasanth joined #gluster
02:23 victori joined #gluster
02:33 Alghost joined #gluster
02:37 Alghost_ joined #gluster
02:39 Alghost joined #gluster
02:45 MrAbaddon joined #gluster
02:48 rastar joined #gluster
03:08 ankitr joined #gluster
03:13 victori joined #gluster
03:47 victori joined #gluster
03:52 mbukatov joined #gluster
03:56 psony joined #gluster
03:59 atinmu joined #gluster
04:11 karthik_us joined #gluster
04:18 nbalacha joined #gluster
04:23 ankitr joined #gluster
04:30 skoduri joined #gluster
04:32 ppai joined #gluster
04:34 gyadav__ joined #gluster
04:34 ankitr joined #gluster
04:36 Shu6h3ndu joined #gluster
04:37 Alghost joined #gluster
04:37 itisravi joined #gluster
04:43 gyadav_ joined #gluster
04:46 gyadav__ joined #gluster
04:49 winrhelx joined #gluster
04:53 skumar joined #gluster
04:57 msvbhat joined #gluster
04:58 susant joined #gluster
05:03 rastar joined #gluster
05:07 buvanesh_kumar joined #gluster
05:09 sanoj joined #gluster
05:11 amarts joined #gluster
05:12 lalatenduM joined #gluster
05:18 sahina joined #gluster
05:20 daMaestro joined #gluster
05:20 aravindavk joined #gluster
05:23 ashiq joined #gluster
05:31 victori joined #gluster
05:44 godas_ joined #gluster
05:44 Prasad joined #gluster
05:46 Shu6h3ndu joined #gluster
05:48 apandey joined #gluster
05:48 prasanth joined #gluster
05:50 jiffin joined #gluster
05:50 kramdoss_ joined #gluster
05:55 kdhananjay joined #gluster
05:57 apandey_ joined #gluster
05:58 armyriad joined #gluster
06:04 itisravi joined #gluster
06:11 sona joined #gluster
06:11 victori joined #gluster
06:12 Alghost_ joined #gluster
06:24 buvanesh_kumar_ joined #gluster
06:32 skoduri joined #gluster
06:32 karthik_us joined #gluster
06:37 mlg9000 joined #gluster
06:38 victori joined #gluster
06:41 ndarshan joined #gluster
06:45 godas joined #gluster
06:48 ayaz joined #gluster
06:51 Karan joined #gluster
06:59 msvbhat joined #gluster
07:00 itisravi joined #gluster
07:02 mlg9000 joined #gluster
07:11 mlg9000 joined #gluster
07:21 jiffin joined #gluster
07:21 karthik_us joined #gluster
07:35 fsimonce joined #gluster
07:38 ivan_rossi joined #gluster
07:52 guest2 joined #gluster
07:55 guest2 I have glusterfs storage; if I want to enhance performance, should I follow the recommendations in this link: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/chap-Configuring_Red_Hat_Storage_for_Enhancing_Performance.html
07:55 glusterbot Title: Chapter 9. Configuring Red Hat Storage for Enhancing Performance (at access.redhat.com)
07:59 dominicpg joined #gluster
08:01 [diablo] joined #gluster
08:05 victori joined #gluster
08:07 jkroon joined #gluster
08:15 kramdoss_ joined #gluster
08:18 Shu6h3ndu joined #gluster
08:19 ndarshan joined #gluster
08:23 jiffin1 joined #gluster
08:28 ankitr joined #gluster
08:29 rafi1 joined #gluster
08:29 ahino joined #gluster
08:32 apandey__ joined #gluster
08:47 ahino joined #gluster
08:47 ankitr joined #gluster
08:54 jiffin1 joined #gluster
08:58 sanoj joined #gluster
08:58 social joined #gluster
09:00 nbalacha joined #gluster
09:06 dominicpg joined #gluster
09:10 dominicpg joined #gluster
09:17 dominicpg joined #gluster
09:18 bluenemo joined #gluster
09:25 jiffin joined #gluster
09:36 karthik_us joined #gluster
09:56 ankitr joined #gluster
09:57 jiffin1 joined #gluster
10:05 karthik_us joined #gluster
10:07 csaba joined #gluster
10:08 jiffin1 joined #gluster
10:11 jiffin1 joined #gluster
10:16 Robin_ joined #gluster
10:17 Robin_ Hi !
10:18 Robin_ Is there a forum to ask for some help about GlusterFS ?
10:18 Robin_ Or I may ask directly from here ?
10:30 apandey__ joined #gluster
10:30 apandey joined #gluster
10:33 msvbhat joined #gluster
10:39 jiffin1 joined #gluster
10:44 atinmu joined #gluster
10:51 amarts joined #gluster
10:55 msvbhat joined #gluster
10:57 nbalacha Robin_, go ahead
10:58 kramdoss_ joined #gluster
11:10 victori joined #gluster
11:19 itisravi joined #gluster
11:27 guest2 what parameters should be edited on a glusterfs volume for large files?
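(A minimal sketch of options commonly tuned for large, mostly-sequential files; the volume name "myvol" and the values are illustrative assumptions only, not advice given in this channel:)

    # illustrative tuning for large sequential I/O on a hypothetical volume "myvol"
    gluster volume set myvol performance.cache-size 1GB
    gluster volume set myvol performance.write-behind-window-size 4MB
    gluster volume set myvol performance.io-thread-count 32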
11:28 poornima joined #gluster
11:36 itisravi joined #gluster
11:37 lkoranda joined #gluster
11:38 baber joined #gluster
11:42 ndarshan joined #gluster
11:49 amarts joined #gluster
11:54 kramdoss_ joined #gluster
11:59 ahino1 joined #gluster
12:27 Robin_ nbalacha, thank you for offering to help (I was away from my keyboard for lunch). First, sorry if my English isn't perfect.
12:28 Robin_ So I have to set up a "highly available" storage. I just bought 3 storage servers directly in a datacenter, each with 2 network interfaces (1 private, and 1 directly connected to the internet). I need to configure the glusterFS client from outside. Everything works so far BUT when I simulate a failure of a node (reboot or ifdown eth{0..1}) I have two problems.
12:28 Robin_ First, the in-progress copies are blocked even if only one of the three nodes is cut. The goal was to have continuous service.
12:28 Robin_ Then, when the server connects again, the data is no longer sent to it directly; it is transmitted later over the private network between the servers. In the client logs, the server still appears disconnected.
12:28 Robin_ Here is my server-side configuration: gfs-01-02:/mnt/zfspool/LM-HA-01 gfs-01-01:/mnt/zfspool/LM-HA-01 gfs-01-03:/mnt/zfspool/LM-HA-01
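(For reference, a replica 3 volume over those bricks would typically be created roughly as below; this is only a sketch, and the exact brick paths are assumptions based on the configuration quoted above:)

    # assumed layout: one brick per server on the ZFS pool
    gluster volume create LM-HA-01 replica 3 \
        gfs-01-01:/mnt/zfspool/LM-HA-01 \
        gfs-01-02:/mnt/zfspool/LM-HA-01 \
        gfs-01-03:/mnt/zfspool/LM-HA-01
    gluster volume start LM-HA-01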
12:35 Shu6h3ndu joined #gluster
12:41 atinm joined #gluster
12:44 mbukatov joined #gluster
12:58 ankitr joined #gluster
13:00 kramdoss_ joined #gluster
13:01 karthik_us joined #gluster
13:01 Robin_ Can someone help me?
13:04 cloph Robin_: you didn't say how you mounted it or what type your volume is.
13:06 Robin_ OK Cloph, I mount with: sudo mount -t glusterfs gfs-01-01:/LM-HA-01 /LM-HA-01
13:07 Robin_ the volume is on 3 ZFS storages (replica 3)
13:08 Robin_ and I add gfs-01-01 on my /etc/hosts
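(A side note as a sketch: since the client names only gfs-01-01 on the mount line, listing the other servers as backup volfile servers lets the mount still come up if gfs-01-01 is unreachable. This only affects fetching the volume file at mount time, not the data path, and it assumes gfs-01-02 and gfs-01-03 also resolve on the client:)

    sudo mount -t glusterfs -o backup-volfile-servers=gfs-01-02:gfs-01-03 \
        gfs-01-01:/LM-HA-01 /LM-HA-01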
13:08 victori joined #gluster
13:10 kotreshhr left #gluster
13:16 atinm joined #gluster
13:16 jiffin joined #gluster
13:25 cloph ok, so with a glusterfs mount (as opposed to using nfs), the volume should be available no matter which host goes down. and with three servers and default quorum settings, having one of the servers down should also not turn a replica 3 volume r/o or unavailable.
13:26 cloph so when you disconnect one of the servers, do a gluster volume status and gluster peer status from the remaining hosts / make sure they can still talk to each other
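(A sketch of those checks, assuming the volume is named LM-HA-01 as in the mount line above; run them on each server that is still up:)

    gluster peer status                  # the downed peer should show as Disconnected
    gluster volume status LM-HA-01       # the remaining bricks should still be listed as online
    gluster volume heal LM-HA-01 info    # entries pending heal accumulate while a brick is down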
13:28 ahino joined #gluster
13:29 lkoranda joined #gluster
13:29 Robin_ Yes, that's why we chose replica 3, but changing the settings "network.ping-timeout = 1" and "network.frame-timeout = 120" does not change the fact that a server going down stops the copies. And when the server is up again, in zabbix I can see that the data is no longer transmitted directly from the client.
13:32 WebertRLZ joined #gluster
13:38 vbellur joined #gluster
13:40 vbellur joined #gluster
13:42 vbellur joined #gluster
13:43 vbellur joined #gluster
13:46 Robin_ Gluster peer status on each server shows me that one of the nodes is disconnected, and volume status detects the change. On the client side, the logs "talk" about a disconnected peer. All works fine, but when the server is back, auto-heal starts and the client never sends data directly to this node again
13:53 skylar joined #gluster
13:55 Klas heal should go between servers, right?
13:55 Klas would be pretty strange if clients were in charge of changelogs
13:56 jiffin joined #gluster
13:56 humblec joined #gluster
14:01 Robin_ My problem is that when the server is synchronized again, it no longer receives any data from the client while the other two receive the data.
14:01 vbellur joined #gluster
14:02 vbellur joined #gluster
14:02 vbellur joined #gluster
14:05 vbellur joined #gluster
14:07 victori joined #gluster
14:08 vbellur joined #gluster
14:24 nbalacha joined #gluster
14:30 ahino joined #gluster
14:33 kramdoss_ joined #gluster
14:38 riyas joined #gluster
14:40 atinm joined #gluster
14:49 JoeJulian ~ping timeout | Robin_
14:49 glusterbot Robin_: I do not know about 'ping timeout', but I do know about these similar topics: 'ping-timeout'
14:49 JoeJulian ~ping-timeout | Robin_
14:49 glusterbot Robin_: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. With an average MTBF of 45000 hours for a server, even just a replica 2 would result in a 42 second MTTR every 2.6 years, or 6 nines of uptime.
14:50 JoeJulian Secondly, if you shut down sanely, glusterfsd will be stopped and will close the tcp connections (RST/ACK). This will prevent the ping-timeout from being used and operation will continue without interruption.
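(As a sketch of where that 42-second value lives: it is the network.ping-timeout volume option, which can be inspected and, with the caveats above, changed per volume. LM-HA-01 is the assumed volume name, and "volume get" is only available on reasonably recent gluster releases:)

    gluster volume get LM-HA-01 network.ping-timeout     # default is 42 seconds
    gluster volume set LM-HA-01 network.ping-timeout 42  # restore the default after experiments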
14:53 JoeJulian The last part of your issue looks interesting. I'd have to see the client logs.
14:55 farhorizon joined #gluster
14:58 nirokato joined #gluster
14:59 JoeJulian @php
14:59 glusterbot JoeJulian: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH
14:59 glusterbot JoeJulian: --fopen-keep-cache
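(A sketch of what the factoid's second suggestion looks like as actual fuse mount options; "HIGH" is not a literal value, so 600 seconds is an arbitrary illustrative choice, and server:/volume is a placeholder:)

    mount -t glusterfs \
        -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache \
        server:/volume /mnt/volume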
14:59 ahino joined #gluster
15:02 pschultz joined #gluster
15:04 buvanesh_kumar_ joined #gluster
15:05 Robin_ Thank you JoeJulian, you're right: using the reboot command is not a good test. With ifdown then ifup, if I run a new copy, it is sent correctly to all servers. But my first problem is still present: as long as the node is down and not back up again, the current copy does not resume on its own.
15:05 Shu6h3ndu joined #gluster
15:06 ahino joined #gluster
15:06 JoeJulian Let me rephrase to see if I'm understanding correctly. If you stop a server and start it again, the client will not reconnect open FDs to that server. If you open a new FD, that does connect to all servers?
15:11 _KaszpiR_ joined #gluster
15:14 m0zes joined #gluster
15:15 Robin_ Not really. If I stop a server during a copy (from a client to the cluster), the copy freezes and never restarts until the server starts again. The aim was to have network storage that never stops working even if a server goes down. Why doesn't the client continue copying to the servers that are up?
15:17 Robin_ Can I copy/paste logs directly in the IRC without being banned?
15:17 cloph @paste
15:17 glusterbot cloph: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
15:18 plarsen joined #gluster
15:18 JoeJulian It will start again as long as the client can connect to at least two servers and (if the tcp connection hasn't closed) after ping-timeout has elapsed.
15:18 vbellur joined #gluster
15:18 JoeJulian (Thanks for asking first)
15:20 Robin_ I'm using the IRC web client (webchat.freenode.net)
15:20 Robin_ maybe in private message ?
15:20 JoeJulian The information cloph triggered creates a link that you can share.
15:21 Robin_ | test
15:21 Robin_ ok, not sure I understand how it works :-/
15:21 JoeJulian like "cat /var/log/glusterfs/foo.log | nc termbin.com 9999"
15:28 Robin_ like I said, I can't use "cat" because I'm using a web browser IRC client. But you can access the log file directly here: https://pastebin.com/rxeLsVfG
15:28 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
15:29 Robin_ like I said, I can't use "cat" because I'm using a web browser IRC client. But you can access the log file directly here: http://paste.ubuntu.com/25068817/
15:29 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
15:30 JoeJulian You would use cat on the gluster client.
15:30 JoeJulian Anyway...
15:34 JoeJulian So it looks like the client is trying to connect, but the connection times out. This would imply a firewall problem. If it were anything else, you would get a different connection error.
15:39 Robin_ This is not a firewall issue (we disabled the firewalls for the duration of testing). The node is disconnected manually, but the current copy has stopped when it should continue on the other nodes.
15:42 Robin_ When we simulate a failure of a node, the current copies (from clients to the glusterfs volume) freeze. They should continue on the nodes still online, shouldn't they?
15:43 shyam joined #gluster
15:43 JoeJulian Try waiting 45 seconds.
15:44 Robin_ I waited more than 30 minutes :)
15:46 JoeJulian Ooookay... I've not seen that. paste the output of "gluster volume info" and "gluster volume status"
15:46 Robin_ And just now, it came back up. We shut down eth0/1 at 17:02 and the copy resumed just now (17:47).
15:47 JoeJulian That would be consistent with tcp_retries (a kernel setting) being used up. ping-timeout should have happened sooner.
15:49 Robin_ how can we reduce this ? (I will send you the output of "gluster volume info" and "gluster volume status")
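(JoeJulian most likely means the TCP retransmission limit; as a sketch, on Linux that is the net.ipv4.tcp_retries2 sysctl, whose default of 15 lets a dead connection linger for well over ten minutes. The value 5 below is an illustrative, cautious reduction, not a recommendation from the channel:)

    sysctl net.ipv4.tcp_retries2         # check the current limit (default 15)
    sysctl -w net.ipv4.tcp_retries2=5    # declare dead connections sooner, e.g. on the client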
15:51 Robin_ http://termbin.com/w0j0
15:51 Robin_ http://termbin.com/9o7y
15:52 JoeJulian am I correct in assuming that gfs-01-02 is the one you've disabled the network on?
15:53 humblec joined #gluster
15:54 Robin_ yes
15:54 pschultz left #gluster
15:56 JoeJulian I see one bit that triggers a very vague memory. researching...
15:58 ankitr joined #gluster
15:58 msvbhat joined #gluster
15:58 Robin_ Many thanks for your help !
15:59 JoeJulian btw, Robin_, is there some reason you need to use an old unsupported version of gluster?
16:00 kpease joined #gluster
16:01 Robin_ Absolutely no reason. In fact, we are going to run our tests with glusterfs 3.11 tomorrow. I would like to know if you will be around tomorrow :)
16:01 JoeJulian I'll be around during my daytime. I'm in GMT-7.
16:03 JoeJulian Robin_: Looks like that was fixed in 3.8.6
16:03 JoeJulian Pretty sure it's this one https://bugzilla.redhat.com/show_bug.cgi?id=1387976
16:03 glusterbot Bug 1387976: high, high, ---, moagrawa, CLOSED CURRENTRELEASE, Continuous warning messages getting when one of the cluster node is down on SSL setup.
16:06 gyadav joined #gluster
16:07 susant joined #gluster
16:10 jiffin1 joined #gluster
16:19 baber joined #gluster
16:19 jkroon joined #gluster
16:24 humblec joined #gluster
16:30 baber joined #gluster
17:06 msvbhat joined #gluster
17:09 gyadav joined #gluster
17:14 Jacob843 joined #gluster
17:18 victori joined #gluster
17:20 sona joined #gluster
17:28 ivan_rossi left #gluster
17:35 rafi joined #gluster
17:41 susant joined #gluster
17:42 rastar joined #gluster
17:58 ahino joined #gluster
18:01 msvbhat joined #gluster
18:20 shyam joined #gluster
18:25 hchiramm__ joined #gluster
18:47 DoubleJ_ joined #gluster
18:48 colm4 joined #gluster
18:49 al_ joined #gluster
18:49 rideh joined #gluster
19:03 varesa joined #gluster
19:03 skylar joined #gluster
19:05 plarsen joined #gluster
19:13 hchiramm__ joined #gluster
19:18 XpineX joined #gluster
19:23 farhorizon joined #gluster
19:26 baber joined #gluster
19:30 farhoriz_ joined #gluster
19:31 vbellur joined #gluster
19:41 Karan joined #gluster
19:54 nick_g joined #gluster
19:55 nick_g Hello guys ;) After issuing "gluster volume rebalance VOLUME-NAME start" and waiting out the "Estimated time left for rebalance to complete:", I still get the "in progress" status, although the "Estimated time left for rebalance to complete:" field is now gone. I thought that when there is no time left for the rebalance to complete, that would mean the rebalance has been completed. It looks like it does not work that way?
19:57 farhorizon joined #gluster
20:00 nick_g How can I tell when the rebalance task has been completed?
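(A sketch of the usual way to check: the rebalance status output reports one line per node, and each node's status column moves from "in progress" to "completed" when that node has finished:)

    gluster volume rebalance VOLUME-NAME status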
20:12 baber joined #gluster
20:16 farhorizon joined #gluster
20:21 ashiq joined #gluster
20:35 farhoriz_ joined #gluster
20:35 skylar joined #gluster
20:52 DoubleJ joined #gluster
21:02 vbellur joined #gluster
21:03 vbellur joined #gluster
21:04 vbellur joined #gluster
21:04 vbellur joined #gluster
21:05 vbellur joined #gluster
21:28 mridlen joined #gluster
21:31 mridlen I have a real predicament on my hands, and I'm hoping someone can assist me with it: "Peer in Cluster (Disconnected)"
21:32 mridlen from the problematic brick, it still shows "Peer in Cluster (Connected)", but viewed from the other side it shows disconnected
21:37 shyam joined #gluster
21:46 mridlen nevermind, I finally figured it out... this was a dns problem related to the hostname I was using
21:46 mridlen changed the order in the peer file and it works
21:47 mridlen under /var/lib/glusterd/peers I found the file that had my hostname in it
21:48 mridlen and then changed hostname1 to point to the ip address and hostname2 to point to the dns name
21:48 mridlen so it must get hung up when it can't reach one of them
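(A sketch of the file mridlen is describing: each file under /var/lib/glusterd/peers is named after the peer's UUID and lists the addresses glusterd will try; the address values below are purely illustrative:)

    # file: /var/lib/glusterd/peers/<peer-uuid>  (illustrative contents)
    uuid=<peer-uuid>
    state=3
    hostname1=192.0.2.11
    hostname2=storage2.example.com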
21:59 farhorizon joined #gluster
22:00 farhorizon joined #gluster
22:01 farhorizon joined #gluster
22:25 jkroon joined #gluster
22:40 vbellur joined #gluster
22:42 Jacob843 joined #gluster
23:20 Alghost joined #gluster
23:26 daMaestro joined #gluster
