
IRC log for #gluster, 2016-06-09


All times shown according to UTC.

Time Nick Message
00:38 [o__o] joined #gluster
01:37 jbrooks joined #gluster
01:41 F2Knight joined #gluster
01:52 [o__o] joined #gluster
02:04 shyam joined #gluster
02:06 dlambrig joined #gluster
02:26 hagarth joined #gluster
02:39 harish joined #gluster
03:20 F2Knight joined #gluster
03:28 plarsen joined #gluster
03:30 raghug joined #gluster
03:35 nishanth joined #gluster
03:44 itisravi joined #gluster
03:54 deniszh joined #gluster
03:55 shubhendu joined #gluster
03:58 deniszh1 joined #gluster
03:59 nhayashi joined #gluster
04:03 atinm joined #gluster
04:14 nhayashi joined #gluster
04:15 Lee1092 joined #gluster
04:23 nbalacha joined #gluster
04:24 nehar joined #gluster
04:35 ppai joined #gluster
04:35 prasanth joined #gluster
04:52 Gnomethrower joined #gluster
04:57 nbalacha joined #gluster
05:07 gem joined #gluster
05:08 kshlm joined #gluster
05:15 skoduri joined #gluster
05:18 kotreshhr joined #gluster
05:22 Apeksha joined #gluster
05:24 lezo joined #gluster
05:28 Pintomatic joined #gluster
05:29 fyxim joined #gluster
05:30 tyler274 joined #gluster
05:31 Chr1st1an joined #gluster
05:31 jiffin joined #gluster
05:31 lh joined #gluster
05:32 Lee1092 joined #gluster
05:32 twisted` joined #gluster
05:33 sc0 joined #gluster
05:34 AppStore joined #gluster
05:35 billputer joined #gluster
05:35 scubacuda joined #gluster
05:36 hgowtham joined #gluster
05:36 Pintomatic joined #gluster
05:37 ppai joined #gluster
05:41 ndarshan joined #gluster
05:42 Manikandan joined #gluster
05:49 aspandey joined #gluster
05:53 raghug joined #gluster
05:54 ashiq joined #gluster
06:01 gowtham joined #gluster
06:03 kdhananjay joined #gluster
06:07 spalai joined #gluster
06:17 karnan joined #gluster
06:20 atalur joined #gluster
06:23 jtux joined #gluster
06:23 msvbhat_ joined #gluster
06:23 overclk joined #gluster
06:26 itisravi joined #gluster
06:27 ramky joined #gluster
06:30 gem joined #gluster
06:37 Ulrar So is 3.7.12 still scheduled to be released today ?
06:38 nathwill joined #gluster
06:40 rafi joined #gluster
06:41 msvbhat_ joined #gluster
06:46 rastar joined #gluster
06:55 Siavash joined #gluster
06:55 Siavash joined #gluster
06:55 pur__ joined #gluster
06:57 level7_ joined #gluster
06:58 d-fence joined #gluster
07:05 d-fence Hi all. I've been using glusterfs 3.2.7 on Debian for a long time now. When the version changed to 3.5, I was not able to upgrade, so I pinned the packages to stay at 3.2.7. Now, I'm experiencing a lot of "Transport endpoint is not connected".
07:05 d-fence I would like to try again an upgrade.
07:05 spalai left #gluster
07:05 d-fence I cannot make a backup of the data, because the storage is too large.
07:06 deniszh joined #gluster
07:08 autostatic joined #gluster
07:12 autostatic Why does one of the bricks of a 2x2 GlusterFS setup still lock all the memory that was used during a self-heal process? All bricks are Ubuntu 14.04, GlusterFS 3.5.9. Is there a way to free the memory again?
07:13 hchiramm joined #gluster
07:14 autostatic Tried echo 2 > /proc/sys/vm/drop_caches to no avail
07:16 om joined #gluster
07:19 PaulCuzner joined #gluster
07:23 ctria joined #gluster
07:34 Saravanakmr joined #gluster
07:38 jri joined #gluster
07:40 [Enrico] joined #gluster
07:42 hackman joined #gluster
07:42 anil joined #gluster
07:54 Guest20916 joined #gluster
07:59 ghenry joined #gluster
07:59 ghenry joined #gluster
08:06 ivan_rossi joined #gluster
08:15 Slashman joined #gluster
08:25 d-fence Anyone that can point me to an upgrade procedure from 3.2 to 3.5 ?
08:28 DV joined #gluster
08:28 kdhananjay joined #gluster
08:28 Guest20916 joined #gluster
08:29 nbalacha joined #gluster
08:41 jri joined #gluster
08:52 Slashman joined #gluster
08:52 gem joined #gluster
08:58 kovshenin joined #gluster
09:06 Apeksha joined #gluster
09:07 kdhananjay joined #gluster
09:10 hackman joined #gluster
09:10 om joined #gluster
09:11 om2 joined #gluster
09:11 om3 joined #gluster
09:14 ninkotech_ joined #gluster
09:14 ninkotech joined #gluster
09:14 msvbhat__ joined #gluster
09:16 nishanth joined #gluster
09:17 shubhendu joined #gluster
09:30 kramdoss_ joined #gluster
09:39 atalur joined #gluster
09:42 om3 joined #gluster
09:47 raunoV joined #gluster
09:47 raunoV Hi
09:47 glusterbot raunoV: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:49 raunoV I have a simple problem that many other users have: actions on small files are terribly slow. Any key notes to follow? At the moment I have tuned vm.dirty_ratio and vm.dirty_background_ratio, and also tuned volume options: performance.io-thread-count: 24 server.event-threads: 3 client.event-threads: 3 performance.cache-refresh-timeout: 1 performance.cache-size: 1073741824 performance.readdir-ahead: on. Is it even possible to get some bet
09:52 raunoV At the moment i'm trying to write about 38 000 files with unix cp command. It takes about 7-9 minutes. I think it should be better but I may be wrong..
09:53 glafouille joined #gluster
09:54 Slashman joined #gluster
09:57 msvbhat_ joined #gluster
09:57 hackman joined #gluster
10:02 msvbhat__ joined #gluster
10:05 arcolife joined #gluster
10:12 Apeksha_ joined #gluster
10:13 ndarshan joined #gluster
10:27 ppai joined #gluster
10:27 ivan_rossi left #gluster
10:32 raghug joined #gluster
10:41 skoduri joined #gluster
10:43 atinm joined #gluster
10:44 autostatic Why does one of the bricks of a 2x2 GlusterFS setup still lock all the memory that was used during a self-heal process? All bricks are Ubuntu 14.04, GlusterFS 3.5.9. Is there a way to free the memory again? Tried echo 2 > /proc/sys/vm/drop_caches to no avail.
10:46 itisravi autostatic: what is the process that consumes memory?
10:47 itisravi The brick process?
10:51 autostatic /usr/bin/glusterfsd
10:53 autostatic The two bricks of that specific replica set were out of sync so the self-heal daemon fixed that
10:53 itisravi okay, you tried the drop_caches on the server?
10:53 autostatic Yes I did, to no avail
10:54 itisravi hmm..an ugly hack is to kill the brick and restart glusterd on the server which will restart the brick.
10:55 autostatic Yes I know but I'd like to avoid that
10:55 autostatic I'm wondering why that specific glusterfsd process doesn't free the memory
10:56 itisravi could be a memory leak.
10:59 autostatic There are some errors though:
10:59 autostatic [2016-06-09 10:56:05.713571] E [marker.c:2573:marker_removexattr_cbk] 0-datavolume-marker: No data available occurred while creating symlinks
10:59 autostatic [2016-06-09 10:58:38.007879] E [index.c:271:check_delete_stale_index_file] 0-datavolume-index: Base index is not createdunder index/base_indices_holder
10:59 autostatic Lunchtime, bbl, thanks for your time already!
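A quick way to confirm where that memory actually sits, plus the restart workaround itisravi mentions, sketched against a hypothetical volume named datavolume (drop_caches only frees kernel caches, never the glusterfsd heap):

    ps -o pid,rss,cmd -C glusterfsd                    # high RSS here means the brick process itself holds the memory
    gluster volume statedump datavolume                # dump allocator stats (typically under /var/run/gluster/) for leak hunting
    kill <brick-pid from 'gluster volume status'>      # workaround: stop the bloated brick process...
    gluster volume start datavolume force              # ...and have glusterd respawn it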
11:02 ndarshan joined #gluster
11:04 johnmilton joined #gluster
11:07 om joined #gluster
11:08 om2 joined #gluster
11:17 nishanth joined #gluster
11:18 micke joined #gluster
11:19 ira joined #gluster
11:20 ndarshan joined #gluster
11:20 snila joined #gluster
11:23 snila i found this on redhat.com:
11:23 snila The total number of ports required to be open depends on the total number of bricks exported on the machine.
11:23 snila is this only between the gluster nodes/servers?
11:24 snila or is it for the clients as well
11:24 snila starting from 49152
11:34 autostatic snila: We do it on both servers and clients
11:34 autostatic $IPTABLES -A INPUT -p tcp -m multiport -s 0.0.0.0/24 --dports 24007:24010,49152:49153 -j ACCEPT
11:34 autostatic $IPTABLES -A OUTPUT -p tcp -m multiport -d 0.0.0.0/24 --dports 24007:24010,49152:49153 -j ACCEPT
11:35 autostatic (24009:24010 is for server/clients < 3.4)
11:37 nottc joined #gluster
11:37 snila ok, thank you :)
11:38 shubhendu_ joined #gluster
11:40 atinm joined #gluster
11:45 level7 joined #gluster
11:50 msvbhat_ joined #gluster
11:58 R0ok_ joined #gluster
12:02 om joined #gluster
12:02 robb_nl joined #gluster
12:08 nottc joined #gluster
12:14 msvbhat_ joined #gluster
12:17 raunoV I have a simple problem that many other users have: actions on small files are terribly slow. Any key notes to follow? At the moment I have tuned vm.dirty_ratio and vm.dirty_background_ratio, and also tuned volume options: performance.io-thread-count: 24 server.event-threads: 3 client.event-threads: 3 performance.cache-refresh-timeout: 1 performance.cache-size: 1073741824 performance.readdir-ahead: on. Is it even possible to get some bet
12:22 karnan joined #gluster
12:23 msvbhat_ joined #gluster
12:25 Gnomethrower joined #gluster
12:26 PaulCuzner joined #gluster
12:35 dlambrig joined #gluster
12:38 ppai joined #gluster
12:40 guhcampos joined #gluster
12:50 plarsen joined #gluster
12:50 luizcpg joined #gluster
12:51 julim joined #gluster
12:52 ben453 joined #gluster
13:00 shubhendu_ joined #gluster
13:07 dgandhi joined #gluster
13:21 jbrooks joined #gluster
13:26 Elmo_ joined #gluster
13:32 chirino_m joined #gluster
13:33 satya4ever_ joined #gluster
13:35 robb_nl joined #gluster
13:35 shyam joined #gluster
13:40 rwheeler joined #gluster
13:41 Manikandan joined #gluster
13:46 kramdoss_ joined #gluster
13:46 squizzi joined #gluster
13:51 nbalacha joined #gluster
13:56 om joined #gluster
13:57 om2 joined #gluster
13:57 harish joined #gluster
14:00 Gnomethrower joined #gluster
14:01 Guest3535 joined #gluster
14:03 hackman joined #gluster
14:06 cscf joined #gluster
14:07 jri joined #gluster
14:07 cscf We want to set up a Gluster DFS, where each service/client encrypts their data before sending it to Gluster.  What is the best way to do this?
14:08 cscf Can I thin provision large LUKS volumes on gluster, for example?
14:11 arcolife joined #gluster
14:15 jri joined #gluster
14:16 deniszh joined #gluster
14:20 autostatic If you want clients to encrypt their data before sending it to Gluster you need TLS/SSL
14:20 autostatic I guess :)
14:23 autostatic And yes, you can use LUKS for volumes but this will not encrypt the traffic between client and server
14:24 kotreshhr joined #gluster
14:24 B21956 joined #gluster
14:28 squizzi joined #gluster
14:28 cscf autostatic, TLS/SSL would encrypt over the network only.  LUKS on the client would encrypt both in transit and at rest, without the DFS servers having the keys
14:29 autostatic kraakman.eu
14:29 autostatic Oops, wrong window ;)
14:30 kpease joined #gluster
14:30 autostatic Ah so LUKS on the client would also encrypt data in transit? Didn't know that
14:31 cscf autostatic, if the cipher block device is stored on gluster, and shared to the client, then you mount it on the client with LUKS, yes.  Think about it.
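A minimal sketch of what cscf describes, with hypothetical paths (/mnt/gluster is the client's FUSE mount, secret.img the container file); the sparse file gives the thin provisioning, and the gluster servers only ever see ciphertext:

    truncate -s 500G /mnt/gluster/secret.img     # sparse image file on the gluster mount = thin provisioning
    losetup /dev/loop0 /mnt/gluster/secret.img   # attach it as a block device on the client
    cryptsetup luksFormat /dev/loop0             # keys never leave the client
    cryptsetup luksOpen /dev/loop0 secretvol
    mkfs.xfs /dev/mapper/secretvol
    mount /dev/mapper/secretvol /mnt/secret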
14:35 dgandhi1 joined #gluster
14:36 skoduri joined #gluster
14:36 dgandhi joined #gluster
14:37 hagarth joined #gluster
14:39 dgandhi joined #gluster
14:40 dgandhi joined #gluster
14:42 om joined #gluster
14:42 dgandhi joined #gluster
14:43 dgandhi joined #gluster
14:43 ank joined #gluster
14:51 B21956 joined #gluster
14:52 Jules- joined #gluster
14:56 kotreshhr joined #gluster
15:07 om2 joined #gluster
15:07 wushudoin joined #gluster
15:16 atinm joined #gluster
15:21 om2 joined #gluster
15:27 overclk joined #gluster
15:27 ivan_rossi joined #gluster
15:32 aspandey joined #gluster
15:59 F2Knight joined #gluster
16:00 F2Knight joined #gluster
16:04 aspandey joined #gluster
16:18 shubhendu_ joined #gluster
16:22 om2 joined #gluster
16:24 kramdoss_ joined #gluster
16:38 dlambrig joined #gluster
16:43 kramdoss_ joined #gluster
16:44 om joined #gluster
16:59 ben453 Has anyone ever seen gluster only healing file metadata and not the file contents when running a replace-brick command? I'm on gluster version 3.7.11 and followed the instructions for replacing a replicate only brick given by the docs. The only difference with my configuration is that the new brick is on a different node (so I had to probe it first)
17:00 ben453 when I run gluster volume heal $volname info, it shows all of the files that need to be healed which is correct, but the file contents are not being copied over to my new node
17:12 luizcpg joined #gluster
17:13 julim joined #gluster
17:14 ivan_rossi left #gluster
17:18 nehar joined #gluster
17:26 PaulCuzner joined #gluster
17:27 rwheeler joined #gluster
17:33 JoeJulian Ulrar: No, 3.7.12 looks to be delayed a week.
17:34 cliluw joined #gluster
17:34 Siavash joined #gluster
17:34 Siavash joined #gluster
17:36 hagarth joined #gluster
17:38 JoeJulian d-fence: https://vbellur.wordpress.com/2012/05/31/upgrading-to-glusterfs-3-3/ should work for upgrading all the way up to 3.7 from 3.2.
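The linked post has the specifics; the rough offline shape of it on Debian, assuming the stock glusterfs-server/glusterfs-client packages and a window where the volume can be taken down, is something like:

    umount /mnt/gluster                    # on every client first
    service glusterfs-server stop          # on every server
    apt-get update && apt-get install glusterfs-server glusterfs-client
    service glusterfs-server start
    gluster peer status                    # confirm the pool reconnects before remounting clients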
17:40 PaulCuzner joined #gluster
17:43 JoeJulian raunoV: latency is your biggest killer with small files. Fix that problem first.
17:44 JoeJulian autostatic: There's been a lot of work lately in tracking down memory leaks. I doubt most of those leaks have been backported to 3.5. Perhaps try 3.7.11+.
17:45 JoeJulian ~ports | snila
17:45 glusterbot snila: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up. All ports must be reachable by both servers and clients. Additionally it will listen on 38465-38468/tcp for NFS. NFS also depends on rpcbind/portmap ports 111 and 2049.
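Turned into iptables rules, that factoid comes out roughly as below; 10.0.0.0/24 stands in for the storage network, and the brick range should cover however many bricks each server exports:

    iptables -A INPUT -p tcp -s 10.0.0.0/24 -m multiport --dports 24007:24008,49152:49160 -j ACCEPT   # glusterd + bricks
    iptables -A INPUT -p tcp -s 10.0.0.0/24 -m multiport --dports 111,2049,38465:38468 -j ACCEPT      # gluster NFS + portmap
    iptables -A INPUT -p udp -s 10.0.0.0/24 --dport 111 -j ACCEPT                                     # rpcbind also uses UDP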
17:45 dlambrig joined #gluster
17:46 JoeJulian ben453: Nope, haven't seen that. Perhaps it's in the process of ensuring all the directories exist first?
17:47 atinm joined #gluster
17:47 raunoV JoeJulian: you mean the latency between storage node and client ?
17:50 nathwill joined #gluster
17:54 JoeJulian yes
17:56 hackman joined #gluster
18:01 micke joined #gluster
18:02 F2Knight joined #gluster
18:03 raunoV JoeJulian: I think google gce latency is quite good between instances, isn't it ?
18:04 micke_ joined #gluster
18:07 hagarth joined #gluster
18:08 JoeJulian A quick search suggests 20ms average. So with approximately 3 I/O operations per file just to open the file, at a 20ms round trip you're already at 60ms per file.
18:09 JoeJulian It starts adding up quick.
18:11 JoeJulian So, raunoV, the trick to that is to try and cache things you know you're not worried about another node changing and avoid doing file ops that you don't actually need.
18:12 JoeJulian Things like, don't lookup filenames that don't exist. Don't pull all the metadata for every file in every directory when you're just trying to open a file you already know the name of...
18:12 JoeJulian Stuff like that helps a lot.
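One hedged way to act on that from the client side is to let the FUSE mount and md-cache hold attributes and negative lookups for a while; the option names below exist in GlusterFS of this era but the values are only illustrative, and longer timeouts trade cross-client consistency for fewer round trips (check mount.glusterfs(8) for your version):

    mount -t glusterfs -o entry-timeout=60,attribute-timeout=60,negative-timeout=10 server:/myvol /mnt/myvol
    gluster volume set myvol performance.md-cache-timeout 60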
18:13 micke joined #gluster
18:18 raunoV JoeJulian: Okey understood. At the moment i'm worried about the adding files. For example copying about 38k files to disk takes about 3m30s. It was over 7m before, got it down with some volume options but still seems a bit too much, isn't it?
18:19 raunoV JoeJulian: Okey understood. At the moment i'm worried about the adding files. For example copying about 38k files to gfs volume takes about 3m30s. It was over 7m before, got it down with some volume options but still seems a bit too much, isn't it?
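Back-of-the-envelope, JoeJulian's numbers against that copy (assuming strictly serial operations):

    3 round trips/file x 20 ms = 60 ms per file
    38 000 files x 60 ms ≈ 2 280 s ≈ 38 minutes

so the observed 3m30s already implies a much lower round trip and/or some pipelining; the remaining gains come from cutting per-file operations rather than from bandwidth.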
18:19 overclk joined #gluster
18:30 plarsen joined #gluster
18:49 dgandhi joined #gluster
18:53 ben453 I found a fix to my issue of gluster only replicating the metadata after running replace-brick and trying to heal. It looks like as long as I detach the old node that no longer has a brick from the cluster before trying to run a full heal command, everything works as expected.
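For anyone hitting the same thing, the sequence ben453 describes looks roughly like this with hypothetical names (volume myvol, old brick on oldnode, new brick on newnode):

    gluster peer probe newnode
    gluster volume replace-brick myvol oldnode:/bricks/b1 newnode:/bricks/b1 commit force
    gluster peer detach oldnode          # the step that unblocked the data heal here
    gluster volume heal myvol full
    gluster volume heal myvol info       # watch the pending-heal list drain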
18:56 nigelb joined #gluster
18:57 overclk joined #gluster
18:59 pedahzur joined #gluster
19:01 pedahzur I have an odd one here. I'm getting the "peer probe: failed: Probe returned with Transport endpoint is not connected" error message when trying to do peer probes. However, when I go to the two peer, and do "gluster peer status," both list number of peers as one, both list the other host as the host name, and both say the state is "Peer in Cluster (Connected)"  The logs aren't helping much at this point. What would I be looking for to figure out the peer
19:01 pedahzur probe error message?
19:02 pedahzur Oh...more detail. The peer probe from the "master" (the machine that originally ran the probe) works fine. Peer probe from the "probed" machine to the master does not.
19:03 hagarth pedahzur: look for firewalls blocking port 24007 and errors in glusterd's log file on the probed machine
19:06 pedahzur hagarth: no firewall rules. (policy is accept)  Which log file? There are lots of logs in /var/log/gluster.  No new messages in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log when I do a probe from the other machine.  Should I be looking in another file?
19:08 hagarth pedahzur: mostly looks like a network problem to me .. make sure that port 24007 is reachable from both nodes
19:08 Ulrar JoeJulian: Aw, that's unfortunate. Well, I guess I'd rather delay everything a week than discover that 3.7.12 makes my problem worse, so that's fine! Thanks for the news anyway
19:09 pedahzur hagarth: Yeah. I can reach 24007 on the other machine from each machine.
19:10 pedahzur hagarth: Like I said, peer probe succeeds from B -> A, but not A -> B. "B" is the machine that originally did the probe.
19:11 pedahzur hagarth: And Both machines show the other machine in their peer list if I do "gluster peer status." So it *looks* like it's OK, but just wanted to be sure...
19:12 pedahzur hagarth: Oh! Interesting! Server "A" has "B" in its list as the IP address, and not the host name. If I do peer probe from A -> B using the IP address...it works!
19:13 pedahzur I wonder how "A" ended up with B's IP address and not the host name, as it was given the host name during the initial probe.
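The reverse probe by name is the documented way to get the hostname recorded on the node that only knows its peer by IP; if it fails the way pedahzur's did, checking name resolution on the probing side is the usual first step (hostnames here are hypothetical):

    getent hosts B.example.com           # make sure the name resolves on A at all
    gluster peer probe B.example.com     # run on A; re-probing by name replaces the IP-only entry with the hostname
    gluster peer status                  # both sides should now list hostnames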
19:19 cscf left #gluster
20:02 chirino joined #gluster
20:05 hagarth joined #gluster
20:11 DV joined #gluster
20:16 muneerse2 joined #gluster
20:22 B21956 joined #gluster
20:32 PaulCuzner joined #gluster
20:59 johnmilton joined #gluster
21:16 johnmilton joined #gluster
21:18 F2Knight joined #gluster
21:24 pedahzur joined #gluster
21:45 bb0x joined #gluster
21:46 bb0x hi guys
21:47 bb0x i'm trying to deploy gluster using ansible gluster_volume module
21:47 bb0x and I'm getting this during the first run: failed: Host gfs822 is not in 'Peer in Cluster' state\n
21:48 bb0x if checking from hosts it says that peers are connected
21:48 bb0x any ideas?
21:51 dlambrig joined #gluster
21:52 JoeJulian bb0x: I've heard that before and not heard a fix.
21:56 bb0x JoeJulian, probably it's related to a timeout or something, on the second run it complains that the brick is already part of the volume even though I don't have a volume
21:56 bb0x volume create: gv800: failed: /glusterfs/brick01/gv800 is already part of a volume\n"}
21:56 bb0x after the first run, if I delete the brick01 folder and re-run, the second run works fine
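Rather than deleting the brick directory between runs, the usual cleanup for "is already part of a volume" is to strip the gluster xattrs and the .glusterfs directory from the brick path (path taken from the error above; setfattr comes with the attr package):

    setfattr -x trusted.glusterfs.volume-id /glusterfs/brick01/gv800
    setfattr -x trusted.gfid /glusterfs/brick01/gv800
    rm -rf /glusterfs/brick01/gv800/.glusterfs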
22:02 DV joined #gluster
22:07 dlambrig joined #gluster
22:07 DV joined #gluster
22:08 guhcampos joined #gluster
22:13 Jules- joined #gluster
22:18 arif-ali joined #gluster
22:23 arif-ali Hi all, hopefully someone can help, I have been looking and searching for a while, to figure out the problem. I have been trying to re-install machines that are in the cluster, and following the link https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/sect-Replacing_Hosts.html#Replacing_a_Host_Machine_with_the_Same_Hostname section 8.6.2, to put it back into the cluster. It
22:23 arif-ali works mostly, but gluster volume status hangs, with the message "Error : Request timed out", any ideas on debugging would be very helpful
22:23 glusterbot Title: 8.6. Replacing Hosts (at access.redhat.com)
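Some hedged starting points for debugging that hang, assuming the reinstalled node was brought back per that section:

    tail -f /var/log/glusterfs/etc-glusterfs-glusterd.vol.log   # on every node, watch glusterd while the command hangs
    gluster peer status                                          # run on each node; all of them must agree on the pool
    grep operating-version /var/lib/glusterd/glusterd.info       # after a reinstall, this must match the rest of the pool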
22:24 dlambrig joined #gluster
22:27 Jules--1 joined #gluster
22:27 Jules--1 left #gluster
22:28 Jules--1 joined #gluster
22:28 Jules--1 left #gluster
22:29 pedahzur joined #gluster
23:08 dlambrig joined #gluster
23:12 om joined #gluster
23:19 amye joined #gluster
23:27 dlambrig joined #gluster
23:34 jlockwood joined #gluster
23:50 plarsen joined #gluster
