
IRC log for #gluster, 2017-09-01


All times shown according to UTC.

Time Nick Message
00:06 KuzuriAo I’m a little nervous about all this self healing going on.  It’s been healing like mad for a long time and I don’t think I’ve got that many files.
00:06 KuzuriAo Oh damn, yeah, I think it’s just going over and over these files.
00:07 KuzuriAo root@node-01:~# cat /var/log/glusterfs/www.log | grep "performing metadata selfheal on" | cut -f12 -d' ' | sort | uniq -c | sort -n | tail -3
00:07 KuzuriAo 806 a7d93b6b-e1bc-40e4-ac79-6c37004ec292
00:07 KuzuriAo 824 279d7e93-957c-4d79-89b4-e1f5e174556a
00:07 KuzuriAo 1342 ce806823-cfd0-4345-b12b-dd31d5b861a4
00:14 KuzuriAo :(
00:14 KuzuriAo root@node-01:~# cat /var/log/glusterfs/www.log | grep "performing metadata selfheal on" | cut -f12 -d' ' | sort | uniq -c | sort -n | tail -3
00:14 KuzuriAo 910 a7d93b6b-e1bc-40e4-ac79-6c37004ec292
00:14 KuzuriAo 930 279d7e93-957c-4d79-89b4-e1f5e174556a
00:14 KuzuriAo 1545 ce806823-cfd0-4345-b12b-dd31d5b861a4
00:14 KuzuriAo It’s just self healing the same stuff over and over again and not actually doing anything but wrecking the CPU
00:15 KuzuriAo On the plus side at least the load average is below 6, so it’s improved there. :)
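[A hedged aside for readers hitting the same loop: assuming the volume is named www (guessed from the log file name above; it may differ), these are the stock commands for seeing what the self-heal daemon still considers pending:]

    gluster volume heal www info                     # list entries still queued for heal, per brick
    gluster volume heal www statistics heal-count    # just the per-brick counts
    gluster volume heal www info split-brain         # entries the daemon cannot resolve on its own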
00:21 shyam joined #gluster
00:50 mbrandeis joined #gluster
00:53 kotreshhr joined #gluster
01:03 zcourts joined #gluster
01:30 luizcpg joined #gluster
01:51 ilbot3 joined #gluster
01:51 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:02 gospod2 joined #gluster
02:22 skoduri joined #gluster
02:26 baber joined #gluster
03:21 jiffin joined #gluster
03:22 aravindavk joined #gluster
03:29 gyadav joined #gluster
03:50 riyas joined #gluster
03:53 itisravi joined #gluster
04:10 Humble joined #gluster
04:22 skumar joined #gluster
04:25 Guest9038 joined #gluster
04:29 aravindavk joined #gluster
04:30 nbalacha joined #gluster
04:38 atinm joined #gluster
04:39 Shu6h3ndu joined #gluster
04:50 Prasad joined #gluster
04:58 jiffin joined #gluster
04:59 msvbhat joined #gluster
05:00 ThHirsch joined #gluster
05:04 gyadav_ joined #gluster
05:10 karthik_us joined #gluster
05:13 kdhananjay joined #gluster
05:13 aravindavk joined #gluster
05:16 ankitr joined #gluster
05:17 ndarshan joined #gluster
05:17 ankitr joined #gluster
05:21 msvbhat joined #gluster
05:25 kraynor5b joined #gluster
05:27 karthik_ joined #gluster
05:31 Saravanakmr joined #gluster
05:31 skumar_ joined #gluster
05:32 gyadav joined #gluster
05:35 skumar_ joined #gluster
05:46 karthik_ joined #gluster
05:48 hgowtham joined #gluster
05:52 atinm joined #gluster
05:53 prasanth joined #gluster
05:59 nbalacha joined #gluster
06:04 jtux joined #gluster
06:11 nbalacha joined #gluster
06:13 sanoj joined #gluster
06:13 ThHirsch joined #gluster
06:16 karthik_ joined #gluster
06:22 susant joined #gluster
06:23 cloph KuzuriAo: "01:03 <JoeJulian> 3.11 is the "lts" version" - announcements ( http://lists.gluster.org/pipermail/gluster-users/2017-May/031298.html ) say otherwise though...  (Short term maintenance) - and I'd be surprised if ubuntu picked such a version for ubuntu's LTS (but that might of course be possible)
06:23 glusterbot Title: [Gluster-users] Announcing GlusterFS release 3.11.0 (Short Term Maintenance) (at lists.gluster.org)
06:25 cloph see also https://www.gluster.org/release-schedule/#release-status (seems website got a facelift, but apparently blog.gluster.org not restored yet)
06:25 glusterbot Title: Gluster » Release Schedule (at www.gluster.org)
06:25 ThHirsch joined #gluster
06:29 kotreshhr joined #gluster
06:30 jiffin joined #gluster
06:33 skumar__ joined #gluster
06:42 aravindavk joined #gluster
06:43 nishanth joined #gluster
06:49 buvanesh_kumar joined #gluster
06:50 aardbolreiziger joined #gluster
07:07 atinm joined #gluster
07:11 rastar joined #gluster
07:19 aardbolreiziger joined #gluster
07:26 JoeJulian Thanks, cloph. Now I feel bad. :/
07:27 JoeJulian @ppa
07:27 glusterbot JoeJulian: The GlusterFS Community packages for Ubuntu are available at: 3.8: https://goo.gl/MOtQs9, 3.10: https://goo.gl/15BCcp
07:27 JoeJulian @forget ppa
07:27 glusterbot JoeJulian: The operation succeeded.
07:29 JoeJulian @learn ppa as  The GlusterFS Community packages for Ubuntu are available at: 3.10 (LTS) https://goo.gl/15BCcp ; 3.12 https://goo.gl/JCCbwg
07:29 glusterbot JoeJulian: The operation succeeded.
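[The goo.gl links in the factoid are shortened; they typically resolve to the Launchpad PPAs of the gluster team, so a sketch of installing from one of them (PPA name assumed from the version numbers above) would look like:]

    sudo add-apt-repository ppa:gluster/glusterfs-3.12   # community PPA, name assumed
    sudo apt-get update
    sudo apt-get install glusterfs-server glusterfs-client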
07:38 fsimonce joined #gluster
07:43 coredumb joined #gluster
07:43 coredumb Hello
07:43 glusterbot coredumb: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
07:43 coredumb I have a two-node replicated setup, where one node crashed during a major delete operation
07:44 coredumb after recovering the server, it still has the data in it
07:44 coredumb apparently running gluster volume heal fails
07:44 coredumb what are my options?
07:47 coredumb ah seems it recovered by itself actually
07:49 coredumb \o/
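[For readers landing in the same situation, a minimal sketch (volume name hypothetical) of the usual sequence once a crashed replica comes back:]

    gluster volume heal myvol          # trigger an index heal of the pending entries
    gluster volume heal myvol full     # or force a full sweep if the index looks incomplete
    gluster volume heal myvol info     # watch the pending-entry list drain to zero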
07:57 ankitr joined #gluster
07:58 karthik_|lunch joined #gluster
08:07 ankitr joined #gluster
08:10 itisravi joined #gluster
08:16 ankitr joined #gluster
08:20 mohan joined #gluster
08:20 Prasad joined #gluster
08:27 ankitr joined #gluster
08:28 ankitr joined #gluster
08:32 aardbolreiziger joined #gluster
08:33 poornima joined #gluster
08:33 ThHirsch joined #gluster
08:34 itisravi__ joined #gluster
08:35 ankitr joined #gluster
08:45 sanoj joined #gluster
08:50 skumar_ joined #gluster
08:50 karthik_us joined #gluster
08:52 Prasad joined #gluster
08:57 gyadav_ joined #gluster
09:16 gospod2 joined #gluster
09:26 gyadav__ joined #gluster
09:47 shyu joined #gluster
09:51 skumar__ joined #gluster
09:56 Saravanakmr joined #gluster
09:57 jockek joined #gluster
10:02 msvbhat joined #gluster
10:11 aardbolreiziger joined #gluster
10:12 prasanth joined #gluster
10:17 shyam joined #gluster
10:24 kotreshhr joined #gluster
10:27 foster joined #gluster
10:29 aardbolreiziger joined #gluster
10:36 aravindavk joined #gluster
10:50 msvbhat joined #gluster
10:58 aardbolreiziger joined #gluster
10:59 buvanesh_kumar joined #gluster
11:21 buvanesh_kumar joined #gluster
11:23 aravindavk joined #gluster
11:32 hosom joined #gluster
11:35 aravindavk joined #gluster
11:41 dijuremo joined #gluster
11:49 aravindavk joined #gluster
11:50 baber joined #gluster
11:56 kotreshhr joined #gluster
11:58 shyam joined #gluster
12:07 plarsen joined #gluster
12:14 Humble joined #gluster
12:24 skumar joined #gluster
12:27 shyu joined #gluster
12:27 jstrunk joined #gluster
12:34 plarsen joined #gluster
12:39 aravindavk joined #gluster
12:45 mbrandeis joined #gluster
12:45 msvbhat joined #gluster
12:50 msvbhat_ joined #gluster
12:55 nbalacha joined #gluster
12:58 aravindavk joined #gluster
13:19 kotreshhr left #gluster
13:33 hvisage joined #gluster
13:49 bEsTiAn Hi, is there a way to delegate rights to one specific user, to perform a snapshot, and to consolidate it later, on a specific volume ? (except using sudoers file)
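[For context, the operations being asked about are plain gluster CLI calls, which normally require root on a node running glusterd; a sketch with hypothetical snapshot and volume names:]

    gluster snapshot create nightly myvol    # take a snapshot of the volume
    gluster snapshot list myvol              # see which snapshots exist
    gluster snapshot restore nightly         # roll the volume back later (volume must be stopped)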
13:52 skumar joined #gluster
14:04 baojg joined #gluster
14:18 msvbhat joined #gluster
14:30 nbalacha joined #gluster
14:34 ThHirsch joined #gluster
14:49 farhorizon joined #gluster
15:02 omie888777 joined #gluster
15:05 mbrandeis joined #gluster
15:08 wushudoin joined #gluster
15:10 ThHirsch joined #gluster
15:19 idn joined #gluster
15:21 MadPsy joined #gluster
15:21 MadPsy joined #gluster
15:22 idn Hey folks. Trying to track down resolution of a mount-on-boot issue. There seem to be mixed opinions about this having been fixed in v3.2.5, but I'm running 3.9.1 and seeing failures to mount on reboot. Is there a recommended fix that I'm missing?
15:36 rdanter joined #gluster
15:37 cloph if the system you're trying to mount on is not the (only) server/brick of the volume, then it should not be a problem.
15:38 cloph But AFAIK there is no clean solution to mount a gluster volume on the (only) server itself without resorting to local after-boot jobs.
15:41 idn cloph: No, it's one of four in a distributed-replicated setup (2 + 2)
15:42 cloph then just make sure you specify one of the other hosts in the mount, to avoid trying to mount before the local glusterd service is started.
15:43 idn cloph: Hmm, I suspect I'm doing something dumb in that case. Only have one specified.
15:43 * idn goes to look up the syntax for that
15:44 cloph only one is enough, if that one is not localhost (and you can be pretty sure the other server is up when you reboot this machine)
15:44 cloph other than that there are backup-server options, but I don't remember the exact option name
15:45 idn That's gonna bugger up my ansible playbook. Other option seems to be retry the mount -a once the local server comes up
15:45 cloph (and of course specify that you need network for the mount)
15:45 idn cloph: Thanks for the pointers, much appreciated
15:47 cloph problem is exactly that: at least with systemd there is no signal for when the local server is *ready* - you can mount it after the glusterd unit is triggered, but that might be too early for the mount to succeed :-/ - but fetching the volume info from the other host should work in your case - waiting for the network should work fine in systemd
15:48 idn I see what you mean :/
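[A minimal /etc/fstab sketch of what cloph describes, with hypothetical host and volume names; backup-volfile-servers supplies fallback peers and _netdev keeps the mount ordered after the network comes up:]

    # fetch the volfile from a peer rather than localhost so boot does not
    # race the local glusterd; extra peers act as fallbacks
    node-02:/www  /mnt/www  glusterfs  defaults,_netdev,backup-volfile-servers=node-03:node-04  0 0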
15:48 dijuremo I have configured a 3-node gluster setup using thin provisioning. However, I am now wondering whether this may be a problem. It seems to me the maximum pool size is limited by the maximum metadata size. I provisioned a 10TB pool with a 16GB (maximum allowed) metadata size. Will that be an issue as the volume fills up?
15:48 dijuremo I should have said thin LVM...
15:50 ThHirsch joined #gluster
15:54 dijuremo Oh, never mind, I botched the math originally. The recommended metadata size should be 0.1% of the thin pool, so in my case the pool is 10TB and 0.1% of 10TB is 10GB of metadata. So in theory the maximum data size would be 16TB, right? That would equate to 16GB of metadata.
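[A sketch of the LVM layout being described, with hypothetical VG/LV names; passing --poolmetadatasize makes the 16GiB ceiling explicit rather than relying on lvcreate's computed default:]

    # 10TB thin pool with the maximum 16GiB of pool metadata
    lvcreate --size 10T --poolmetadatasize 16G --thinpool gluster_pool vg_bricks
    # thin LV for the brick, carved out of the pool
    lvcreate --virtualsize 10T --thin vg_bricks/gluster_pool --name brick1
    mkfs.xfs -i size=512 /dev/vg_bricks/brick1   # 512-byte inodes, the usual gluster brick recommendation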
16:04 rastar joined #gluster
16:18 Guest9038 joined #gluster
16:19 mbrandeis joined #gluster
16:31 ivan_rossi left #gluster
17:04 msvbhat joined #gluster
17:25 kraynor5b_ joined #gluster
17:26 ThHirsch joined #gluster
18:00 rafi joined #gluster
18:19 rafi joined #gluster
18:40 rafi joined #gluster
18:43 rastar joined #gluster
18:44 rastar joined #gluster
18:50 rafi1 joined #gluster
18:56 mlhess joined #gluster
19:06 kpease_ joined #gluster
19:25 Gambit15 joined #gluster
19:27 bowhunter joined #gluster
19:40 msvbhat joined #gluster
19:41 kpease joined #gluster
19:43 zcourts joined #gluster
20:22 caitnop joined #gluster
20:40 msvbhat joined #gluster
20:57 PatNarciso_ joined #gluster
21:00 PatNarciso_ joined #gluster
21:07 shyam joined #gluster
21:22 farhorizon joined #gluster
21:28 nuk3 joined #gluster
21:30 nuk3 hello, I recently upgraded to 3.11.1 on an arch linux system and I've just noticed that my nfs (legacy) shares are no longer working.
21:31 nuk3 yes - should have rtfm when upgrading, but has legacy gnfs been officially replaced with ganesha in 3.11.1?
21:56 JoeJulian nuk3: yes, but you can re-enable gnfs explicitly.
21:56 JoeJulian But plan on migrating.
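[For reference, the explicit re-enable JoeJulian mentions is a per-volume setting (volume name hypothetical):]

    gluster volume set myvol nfs.disable off   # bring the legacy gnfs server back for this volume
    gluster volume status myvol nfs            # confirm the NFS server process shows as running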
22:54 gospod3 joined #gluster
23:28 shyam joined #gluster
23:31 _KaszpiR_ joined #gluster
23:36 plarsen joined #gluster
23:50 shyam left #gluster
