IRC log for #gluster, 2016-06-21

All times shown according to UTC.

Time Nick Message
00:09 muneerse2 joined #gluster
00:33 Alghost joined #gluster
01:07 shdeng joined #gluster
01:46 plarsen joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:48 Lee1092 joined #gluster
02:10 haomaiwang joined #gluster
02:14 DV_ joined #gluster
02:46 skoduri joined #gluster
02:46 ramteid joined #gluster
03:04 nishanth joined #gluster
03:11 Gambit15 joined #gluster
03:17 magrawal joined #gluster
03:19 suliba joined #gluster
03:21 raghug joined #gluster
03:22 julim joined #gluster
03:36 F2Knight joined #gluster
03:45 rafi joined #gluster
03:56 suliba joined #gluster
03:59 nbalacha joined #gluster
04:01 suliba joined #gluster
04:05 atinm joined #gluster
04:06 RameshN joined #gluster
04:07 suliba joined #gluster
04:12 aspandey joined #gluster
04:12 suliba joined #gluster
04:14 hchiramm joined #gluster
04:16 nehar joined #gluster
04:17 itisravi joined #gluster
04:25 jiffin joined #gluster
04:29 nbalacha joined #gluster
04:31 ramky joined #gluster
04:37 gem joined #gluster
04:38 shubhendu joined #gluster
04:39 shdeng joined #gluster
04:40 nishanth joined #gluster
04:59 ppai joined #gluster
05:01 aspandey joined #gluster
05:03 karthik___ joined #gluster
05:05 ndarshan joined #gluster
05:15 prasanth joined #gluster
05:18 skoduri joined #gluster
05:19 gowtham__ joined #gluster
05:21 Apeksha joined #gluster
05:23 aravindavk joined #gluster
05:28 hgowtham joined #gluster
05:31 poornimag joined #gluster
05:33 suliba joined #gluster
05:39 Manikandan joined #gluster
05:41 Bhaskarakiran joined #gluster
05:45 Manikandan joined #gluster
05:50 kotreshhr joined #gluster
05:51 prasanth joined #gluster
05:51 nehar joined #gluster
05:56 kshlm joined #gluster
05:58 nehar_ joined #gluster
06:07 karnan joined #gluster
06:09 kovshenin joined #gluster
06:10 satya4ever joined #gluster
06:12 prasanth joined #gluster
06:23 nbalacha joined #gluster
06:25 DV_ joined #gluster
06:27 jtux joined #gluster
06:29 ashiq joined #gluster
06:32 Gnomethrower joined #gluster
06:34 kramdoss_ joined #gluster
06:52 nhayashi joined #gluster
06:54 jri joined #gluster
06:57 rastar joined #gluster
07:00 Klas joined #gluster
07:04 Bhaskarakiran joined #gluster
07:06 msvbhat joined #gluster
07:06 nbalacha joined #gluster
07:07 F2Knight joined #gluster
07:08 pur_ joined #gluster
07:09 F2Knight joined #gluster
07:09 Gnomethrower joined #gluster
07:10 atinm joined #gluster
07:12 hackman joined #gluster
07:16 null1 joined #gluster
07:17 voidspacexyz[m] joined #gluster
07:34 raghug joined #gluster
07:42 ahino joined #gluster
07:46 deniszh joined #gluster
07:50 fsimonce joined #gluster
07:51 Gnomethrower joined #gluster
08:03 Jules- joined #gluster
08:08 Slashman joined #gluster
08:11 karthik___ joined #gluster
08:16 micw joined #gluster
08:27 micw hello. i have serious performance issues on my gluster with many small files and 3 replica
08:27 micw there are 3 servers, connected with gbit, ping time ~0.35ms
08:27 micw time find .git | wc -l
08:27 micw 117
08:27 micw real    0m2.907s
08:28 micw that's for just a few files
08:29 micw performance increases a lot if i kill one node
08:31 shubhendu joined #gluster
08:32 wnlx joined #gluster
08:42 atalur joined #gluster
08:45 post-factum micw: if you kill all nodes but one and switch to nfs the performance will increase dramatically. that is the price for high availability
08:47 micw i've run a find on a single node on the raw brick to a directory structure with ~70000 files
08:47 micw takes ~30 seconds
08:47 micw same find on gluster is running since ~10 minutes, still no result
08:47 micw is it really _that_ slow? or is this probably a config problem?
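
To see where the time goes, the volume's own profiling counters are a quick check here. A minimal sketch, assuming the volume is called gv0 and is mounted at /mnt/gv0 (both names are placeholders):

    gluster volume profile gv0 start        # enable per-brick io-stats counters
    time find /mnt/gv0/some-dir | wc -l     # repeat the slow metadata-heavy workload
    gluster volume profile gv0 info         # latency per FOP (LOOKUP, STAT, ...) on each brick
    gluster volume profile gv0 stop

If LOOKUP/STAT dominate the per-FOP latency, the time is going into per-file network round trips rather than disk, which is exactly what the replies below point to.
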
08:49 post-factum micw: it is all about network latency
08:50 micw should a 1gbit network provide low enough latency to have it run faster?
08:50 micw atm the 3 servers are in different rooms
08:50 micw but i could place them together
08:51 post-factum micw: 10GbE at min, infiniband better
08:51 post-factum micw: or you will suffer
08:53 Klas between servers or all the way to the clients?
08:53 micw between servers, clients are local on the servers
08:53 micw those are backup servers that share a filesystem
08:53 micw 3 nodes, 4 disks each, 3 replica
08:54 micw backup is done via ssh/rsync
08:54 micw the initial sync was reasonably fast
08:54 Klas yeah, I meant as a general recommendation (currently trying out glusterfs for a project at work)
08:54 micw but now everything takes hours
08:54 Klas and post-factum's recommendation seems to be highly relevant
08:55 post-factum micw: backup strategy should involve reading from underlying bricks directly
08:55 micw post-factum, why would it be faster with nfs?
08:55 Klas micw: nfs is way simpler
08:55 micw i'm not backing up bricks, i back up other servers into the bricks
08:56 post-factum micw: because nfs client is in kernel space, and nfs protocol is much simpler
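
For reference, a sketch of the two access paths being compared, assuming a volume gv0 served from server1 and Gluster's built-in NFSv3 server left enabled (nfs.disable off); host, volume and mount point are placeholders:

    # FUSE client: the client itself talks to every replica, so each lookup pays the latency to all three bricks
    mount -t glusterfs server1:/gv0 /mnt/gv0

    # built-in gNFS server (NFSv3): the kernel NFS client talks to one server, which fans out to the replicas,
    # and kernel-side attribute caching absorbs a lot of the small-file stat() traffic
    mount -t nfs -o vers=3,nolock server1:/gv0 /mnt/gv0
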
08:56 Klas backups to gluster seems a strange fit
08:56 micw but the nfs server of gluster fs would still query all nodes for metadata, right?
08:56 post-factum Klas: no, we backup to gluster as well :)
08:56 ndevos micw: 'find' uses stat() on each file and that triggers a self-heal check, which is very costly
08:56 micw ndevos, rsync does as well, i cannot avoid this
08:56 Klas post-factum: yeah, I can see it as well, just saying that it's not really where it seems to shine
08:56 ndevos micw: just listing all the files could be done with '/bin/ls -R' or something like that
08:57 ndevos micw: yeah, it is one of the reasons we have geo-replication, it builds intelligent rsync commands without the expensive crawling
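
A rough sketch of the geo-replication setup ndevos mentions, assuming a master volume mastervol and a slave volume slavevol on slavehost (all names are placeholders; passwordless SSH between the nodes is a prerequisite, see the admin guide):

    gluster system:: execute gsec_create                                          # generate the pem keys for the session
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status
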
08:58 micw can i somehow avoid that self-healing stuff on ls?
08:58 micw on stat i mean
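
stat() on a replica volume always involves a consistency check, but the healing work itself can be left to the self-heal daemon instead of the client's lookup path. A hedged sketch of the knobs involved (gv0 is a placeholder; the effect on the stat() path varies between 3.7.x releases, so measure before and after):

    gluster volume set gv0 cluster.metadata-self-heal off   # client no longer heals metadata on lookup
    gluster volume set gv0 cluster.data-self-heal off       # client no longer heals file data
    gluster volume set gv0 cluster.entry-self-heal off      # client no longer heals directory entries
    gluster volume set gv0 cluster.lookup-optimize on       # cheaper lookups during crawls like find/rsync

glustershd keeps healing in the background, so consistency is not given up, only moved off the client's hot path.
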
08:59 ndevos micw: if you want to create backups from the Gluster volume, you may be interested in Bareos, the 16.2 version can use glusterfind to efficiently backup changed contents
08:59 atinm joined #gluster
08:59 micw i don't want to take backups of my cluster
08:59 micw i want to backup to glusterfs
08:59 micw (in fact i want to backup to a replicated filesystem)
09:00 ndevos oh, you can use Bareos for that too, it has native Gluster support
09:00 Klas post-factum: btw, what I was wondering was whether 10 Gb/s is essential only between servers or between servers and clients as well?
09:00 ndevos I'm just wondering why you would want to run 'find' or 'rsync' on a Gluster volume, there most likely are better solutions
09:00 post-factum Klas: sure, as clients do replication
09:02 micw ndevos, does bareos support "continuous backups" (i.e. doing only one full backup and then increments, always deleting the oldest backups)?
09:02 micw without the need to repeat the full backup from time to time
09:02 ndevos micw: yes, it can do that
09:03 ndevos micw: http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Bareos/ should explain the basics
09:03 glusterbot Title: Configuring Bareos to store backups on Gluster - Gluster Docs (at gluster.readthedocs.io)
09:07 Manikandan joined #gluster
09:11 DV_ joined #gluster
09:11 om joined #gluster
09:29 atalur joined #gluster
09:38 DV_ joined #gluster
09:41 DV_ joined #gluster
09:43 rafi1 joined #gluster
09:50 Guest41482 joined #gluster
09:52 om2 joined #gluster
09:59 nbalacha joined #gluster
10:01 hgowtham joined #gluster
10:13 rhqq left #gluster
10:26 Gnomethrower joined #gluster
10:27 Manikandan joined #gluster
10:33 Ru57y joined #gluster
10:35 Ulrar So do we know when 3.7.12 is supposed to be released? Since 3.8 doesn't really work with proxmox
10:43 d0nn1e joined #gluster
10:49 post-factum Ulrar: ask hagarth
10:49 bfoster joined #gluster
10:50 Ulrar hagarth: Hi, any idea when 3.7.12 should be released?
10:50 Ulrar Is it more in the coming days or the coming months? :)
10:52 kshlm joined #gluster
10:56 Ru57y joined #gluster
11:01 msvbhat joined #gluster
11:04 pfactum joined #gluster
11:06 johnmilton joined #gluster
11:10 chirino joined #gluster
11:13 rafi joined #gluster
11:15 Jules- Ulrar: Have you filed a bug report with the Proxmox bug tracker yet? Perhaps they can make it work again.
11:25 hgowtham joined #gluster
11:27 ira joined #gluster
11:27 jwd joined #gluster
11:39 armyriad joined #gluster
11:42 Ulrar Jules-: Yeah, I created a thread on the forum, I'm clearly not the only one affected! It'll be updated one of these days, but it might be better to wait for 3.7.12 anyway, for production
11:43 Ulrar I don't know, I'll try both anyway
11:46 ppai joined #gluster
11:52 B21956 joined #gluster
11:55 Manikandan REMINDER: Gluster Community Bug Triage Meeting at #gluster-meeting will start in ~ 5 minutes
12:04 unclemarc joined #gluster
12:12 arcolife joined #gluster
12:20 DV_ joined #gluster
12:23 plarsen joined #gluster
12:30 squizzi joined #gluster
12:34 ben453 joined #gluster
12:44 DV_ joined #gluster
12:47 om2 joined #gluster
12:51 guhcampos joined #gluster
12:53 skoduri joined #gluster
12:55 msvbhat joined #gluster
13:03 ahino1 joined #gluster
13:07 rafaels joined #gluster
13:10 shaunm joined #gluster
13:16 ben453 joined #gluster
13:19 DV_ joined #gluster
13:30 gem joined #gluster
13:30 nbalacha joined #gluster
13:30 muneerse joined #gluster
13:31 ben453 Has anyone had issues with scripting gluster commands? I'm trying to create a replica 3 cluster and when I create it manually it has worked every time, but trying to script the volume creation and mounting causes it to fail 4 out of 5 times.
13:32 ahino joined #gluster
13:33 ben453 One other thing to note is that one of my bricks has data on it to begin with, and I run a "find . | xargs stat &> /dev/null" to build up the gluster metadata after starting and mounting the volume
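
Since the manual procedure works every time and the scripted one fails intermittently, the usual suspect is a race: the script issues the next command before glusterd, the bricks or the self-heal daemons have finished registering. A minimal sketch of a more defensive script, with hostnames, brick paths and the volume name as placeholders:

    #!/bin/bash
    set -e
    VOL=vol1

    for n in gfs2 gfs3; do gluster peer probe "$n"; done

    # wait until both other peers report "Peer in Cluster (Connected)"
    until [ "$(gluster peer status | grep -c 'Peer in Cluster (Connected)')" -eq 2 ]; do sleep 1; done

    gluster volume create "$VOL" replica 3 \
        gfs1:/data/brick1/"$VOL" gfs2:/data/brick1/"$VOL" gfs3:/data/brick1/"$VOL"
    gluster volume start "$VOL"

    # wait for the brick and self-heal daemon processes to show up before mounting and healing
    until gluster volume status "$VOL" | grep -q 'Self-heal Daemon'; do sleep 1; done

    mkdir -p /mnt/"$VOL"
    mount -t glusterfs gfs1:/"$VOL" /mnt/"$VOL"
    gluster volume heal "$VOL" full

Polling on actual state like this is kinder than fixed sleeps, and checking the exit status of every gluster command makes the 1-in-5 failures visible instead of silent.
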
13:35 atinm ben453, and what does the CLI failure point to?
13:36 om2 joined #gluster
13:40 kramdoss_ joined #gluster
13:44 ben453 Everything looks fine when I run "gluster volume status" and "gluster volume heal <volname> info", however running "gluster volume heal <volname> full" gives an unsuccessful message
13:45 atinm ben453, what is that message? that would help us to identify what might have gone wrong
13:48 DV_ joined #gluster
13:49 haomaiwang joined #gluster
13:53 ben453 Launching heal operation to perform full self heal on volume vol1 has been unsuccessful on bricks that are down. Please check if all brick processes are running.
13:54 ben453 then I try running "gluster volume status" and it shows that all of the brick processes are running
13:54 atinm atalur, ^^
13:54 atalur checking
13:55 pwa joined #gluster
13:55 voidspacexyz[m] joined #gluster
13:55 ben453 I checked "gluster volume status" from all 3 of my nodes, and they all seem to be happy
13:56 atalur ben453, is self-heal-daemon enabled?
13:56 atalur you can see it in volume status
13:57 ben453 Yup all of the self heal daemons look like they're online
13:57 ben453 one per brick
13:57 atalur could you share self-heal-daemon logs and volume status please?
13:58 atalur ben453, I had seen this issue when self-heal-daemon was off and had raised a bug : https://bugzilla.redhat.com/show_bug.cgi?id=1314646. I thought it might be related.
13:58 glusterbot Bug 1314646: unspecified, unspecified, ---, ravishankar, NEW , Incorrect error message on CLI on launching heal
13:59 ben453 this is the output of the etc-glusterfs-glusterd.vol.log when I try a full heal: [2016-06-21 13:57:53.835064] I [MSGID: 106533] [glusterd-volume-ops.c:857:__glusterd_handle_cli_heal_volume] 0-management: Received heal vol req for volume vol1 [2016-06-21 13:57:53.840188] E [MSGID: 106153] [glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Commit failed on <ip-addr>. Please check log file for details.
13:59 atalur also, what is the version of gluster being used?
13:59 ben453 version 3.7.11
14:00 dnunez joined #gluster
14:00 ben453 It doesn't look like there's any output to the "glustershd.log" file when I explicitly try to do a full heal
14:00 atalur okay
14:00 muneerse2 joined #gluster
14:02 atalur what does the log say in that specific ip?
14:03 atalur log as in etc-glusterfs-glusterd.vol.log
14:03 ben453 there's no output in the specific IP's log. It looks like they're not connected
14:04 ben453 I'm tailing all of the logs under /var/log/glusterfs for the node I'm running the command on and that IP it specifies
14:05 ben453 It looks like in the local logs the only real message is to check the logs on the other node, but those logs did not have anything
14:06 rafi joined #gluster
14:07 voidspacexyz[m] joined #gluster
14:11 ben453 In the brick log <brick-path>.log, it looks like when it tries to self heal it is failing. I'm getting a lot of these messages "[client-rpc-fops.c:2974:client3_3_lookup_cbk] 0-vol1-client-0: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]"
14:12 ben453 which come right after trying to do a self heal: "[afr-self-heal-entry.c:589:afr_selfheal_entry_do] 0-vol1-replicate-0: performing entry selfheal on 38d8b701-f2e0-469f-8775-b5a9c9b2e2ae"
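
A few hedged things to check on the node named in that "Commit failed on <ip-addr>" message (vol1 stands in for the volume name):

    gluster peer status                           # is that node still "Peer in Cluster (Connected)" from every other node?
    gluster volume status vol1                    # are its brick and Self-heal Daemon entries online with valid ports?
    gluster volume heal vol1 info                 # does heal info reach every brick without errors?
    tail -f /var/log/glusterfs/glustershd.log     # on the failing node, while re-running 'gluster volume heal vol1 full'
    service glusterd restart                      # on that node; this restarts management only, the bricks keep serving I/O
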
14:16 kotreshhr joined #gluster
14:16 om2 joined #gluster
14:20 TvL2386 joined #gluster
14:22 om joined #gluster
14:22 om2 joined #gluster
14:23 aravindavk joined #gluster
14:23 DV_ joined #gluster
14:24 om2 joined #gluster
14:24 jwd joined #gluster
14:30 muneerse joined #gluster
14:30 kotreshhr joined #gluster
14:30 plarsen joined #gluster
14:32 fcoelho joined #gluster
14:32 atalur ben453, sorry.. was away. It's late here and I need to leave. If you send a mail to gluster-users@gluster.org one of us will look into it and help you.
14:36 ben453 atalur: thanks for your time! I will do that
14:46 papamoose joined #gluster
14:46 msvbhat joined #gluster
14:51 kpease joined #gluster
14:56 rafi joined #gluster
14:59 julim joined #gluster
15:02 B21956 joined #gluster
15:11 om2 joined #gluster
15:12 msvbhat joined #gluster
15:13 jbrooks joined #gluster
15:19 hackman joined #gluster
15:21 hchiramm joined #gluster
15:21 hackman joined #gluster
15:24 jbrooks joined #gluster
15:28 F2Knight joined #gluster
15:29 uosiu joined #gluster
15:29 DV_ joined #gluster
15:30 F2Knight joined #gluster
15:32 kshlm joined #gluster
15:33 DV_ joined #gluster
15:38 dnunez joined #gluster
15:42 kshlm joined #gluster
15:43 om2 joined #gluster
15:51 nhayashi joined #gluster
16:05 om2 joined #gluster
16:12 F2Knight joined #gluster
16:12 jiffin joined #gluster
16:26 om joined #gluster
16:27 alvinstarr joined #gluster
16:31 Gambit15 joined #gluster
16:39 DV_ joined #gluster
16:40 jwd joined #gluster
16:44 Jules- joined #gluster
16:48 kshlm joined #gluster
16:58 kshlm joined #gluster
17:00 rafaels joined #gluster
17:03 kshlm joined #gluster
17:04 nishanth joined #gluster
17:10 rafaels joined #gluster
17:12 Gnomethrower joined #gluster
17:13 ic0n_ joined #gluster
17:16 ic0n joined #gluster
17:21 natarej joined #gluster
17:27 B21956 joined #gluster
17:32 kovshenin joined #gluster
17:32 Leildin joined #gluster
17:33 Leildin hey guys, just updated my dev gluster on ubuntu and have a strange problem.
17:33 Leildin went from 3.5 to 3.6 then 3.6 to 3.7
17:34 Leildin during 3.6 to 3.7 something went wrong
17:34 Leildin I can't mount the volume and it says it's doing a rebalance even though I didn't initiate it, and I can't stop the rebalance
17:34 Leildin also rebalance status is: volume rebalance: gv0: failed: error
17:39 Leildin 'failed to fetch volume file (key:/gv0)' in mnt.log
17:40 Leildin I have no idea what to do :(
17:42 Leildin did the same in centos on prod and had no problems
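
A sketch of the usual checks after a 3.5 -> 3.6 -> 3.7 rolling upgrade on Ubuntu; the service name, op-version value and volume name are assumptions to verify against the 3.7 upgrade notes:

    glusterfs --version                               # confirm every node is actually on 3.7.x
    cat /var/lib/glusterd/glusterd.info               # operating-version should be identical on all nodes
    service glusterfs-server restart                  # restart glusterd after the package upgrade
    gluster volume set all cluster.op-version 30707   # bump the cluster op-version (pick the value matching your 3.7.x)
    gluster volume rebalance gv0 status
    gluster volume rebalance gv0 stop                 # clear the stale rebalance state if one is still recorded
    mount -t glusterfs localhost:/gv0 /mnt/gv0        # retry; "failed to fetch volume file" usually means glusterd was unreachable or not yet serving volfiles
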
17:53 mlhess joined #gluster
17:53 kotreshhr left #gluster
17:53 jwd joined #gluster
17:55 Jules- joined #gluster
17:59 amye joined #gluster
18:18 Jules- joined #gluster
18:19 gem joined #gluster
18:26 hackman joined #gluster
18:27 om joined #gluster
18:30 Jules- joined #gluster
18:39 merlink joined #gluster
18:41 kotreshhr joined #gluster
18:46 ashiq joined #gluster
18:52 deniszh joined #gluster
19:09 raghu joined #gluster
19:16 hchiramm joined #gluster
19:23 ahino joined #gluster
19:24 rafaels joined #gluster
19:29 muneerse2 joined #gluster
19:30 d-fence joined #gluster
19:31 bfoster joined #gluster
19:31 juhaj joined #gluster
19:32 telmich joined #gluster
19:34 shyam joined #gluster
19:35 ashka joined #gluster
19:35 ashka joined #gluster
19:35 arif-ali joined #gluster
19:35 Champi joined #gluster
19:40 deniszh joined #gluster
19:42 Norky joined #gluster
19:42 glafouille joined #gluster
19:46 papamoose joined #gluster
20:12 DV_ joined #gluster
20:17 F2Knight joined #gluster
20:34 amye joined #gluster
20:36 ackjewt joined #gluster
20:39 gnulnx Hi everyone. When dealing with a distributed volume, if one of the servers in the volume goes offline, is all data inaccessible, or only the data which resides on that server?
20:40 gnulnx Also, how would one move all data off of one server and onto the other (e.g. if I were planning on taking the second server down for a rebuild and wanted to move all of its data over to the first server during that rebuild)?
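
Neither question gets an answer in the log, so a hedged note: on a pure distribute volume, only the files that live on the offline server become unavailable; everything on the other bricks stays readable. To drain a server before a rebuild, remove-brick does the data migration. A sketch with placeholder volume and brick names:

    gluster volume remove-brick myvol server2:/data/brick1/myvol start    # begins migrating its files to the remaining bricks
    gluster volume remove-brick myvol server2:/data/brick1/myvol status   # wait until this reports completed
    gluster volume remove-brick myvol server2:/data/brick1/myvol commit   # only then drop the brick from the volume
    # after the rebuild, bring it back and spread the data out again:
    gluster volume add-brick myvol server2:/data/brick1/myvol
    gluster volume rebalance myvol start

The remaining bricks obviously need enough free space to absorb the migrated data.
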
21:24 deniszh joined #gluster
22:07 Sue joined #gluster
22:07 Sue joined #gluster
22:08 DV_ joined #gluster
22:21 wnlx joined #gluster
22:57 plarsen joined #gluster
23:37 johnmilton joined #gluster
