IRC log for #gluster, 2018-01-02


All times are shown in UTC.

Time Nick Message
00:13 ic0n joined #gluster
00:29 ic0n joined #gluster
00:53 ic0n joined #gluster
01:12 protoporpoise had a weird bug (possibly due to the puppet module that automates the creation of volumes) where the volumes were being created as replica 3 rather than replica 3 arbiter 1 hahahaha
01:12 protoporpoise thanks kale for pointing me towards that
01:13 protoporpoise had to do this https://github.com/sammcj/scripts/blob/master/gluster/gluster_convert_replica_to_arbiter_brick.sh
01:13 glusterbot Title: scripts/gluster_convert_replica_to_arbiter_brick.sh at master · sammcj/scripts · GitHub (at github.com)
01:14 kale protoporpoise: did you gain more speed?
01:14 protoporpoise haven't tested yet
01:14 protoporpoise it used to be correct, so something's gone backwards
01:14 protoporpoise just looking at another thing, then I'll try
01:15 protoporpoise kale: Number of Bricks: 1 x (2 + 1) = 3
01:15 protoporpoise now, though
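
The script linked above follows the usual pattern for converting a plain replica 3 volume into replica 3 arbiter 1: remove one data brick, wipe it, then re-add it as the arbiter. A minimal sketch of those steps, with a hypothetical volume name (myvol), host (server3), and brick path:

    # Drop the third data brick, reducing the volume to plain replica 2
    gluster volume remove-brick myvol replica 2 server3:/bricks/brick1 force

    # Wipe the brick so gluster will accept it again (data plus the .glusterfs dir;
    # the trusted.glusterfs.volume-id xattr on the brick root may also need clearing)
    rm -rf /bricks/brick1/* /bricks/brick1/.glusterfs

    # Re-add the same brick, this time flagged as the arbiter
    gluster volume add-brick myvol replica 3 arbiter 1 server3:/bricks/brick1

    # Kick off a full heal so the arbiter gets populated with metadata
    gluster volume heal myvol full

Once this completes, "gluster volume info" should report "Number of Bricks: 1 x (2 + 1) = 3", as shown above.
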
01:38 ic0n joined #gluster
01:46 Rakkin_ joined #gluster
01:46 sankarshan joined #gluster
01:50 yosafbridge joined #gluster
03:02 ilbot3 joined #gluster
03:02 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:06 ic0n joined #gluster
03:13 psony joined #gluster
03:20 gyadav joined #gluster
03:21 plarsen joined #gluster
03:24 ic0n joined #gluster
03:34 rafi joined #gluster
03:47 ic0n joined #gluster
03:48 ppai joined #gluster
03:53 kdhananjay joined #gluster
03:56 itisravi joined #gluster
03:58 protoporpoise man, whenever I try to add a brick as an arbiter to a volume I end up with "ls: cannot access volnamehere: Transport endpoint is not connected" garhhh
04:00 protoporpoise is there something other than a heal you're meant to do when adding an arbiter brick? like some sort of rebalance or something?
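
For reference, after adding an arbiter brick the usual follow-up is just a heal, not a rebalance (rebalance only applies to the distribute layer, which a pure 1x3 replica volume doesn't use). The "Transport endpoint is not connected" error typically means the client lost its connection to a brick, so brick status is the first thing to check. A sketch, volume name hypothetical:

    gluster volume status myvol        # every brick, including the new arbiter, should be Online: Y
    gluster volume heal myvol full     # populate the arbiter with file metadata
    gluster volume heal myvol info     # pending heal counts should drain to zero
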
04:02 ic0n joined #gluster
04:05 atinm_ joined #gluster
04:18 nigelb Ban list cleared out for the year :)
04:21 amye Don't hold that thought. :P
04:24 nigelb well, I just cleared out old bans.
04:24 nigelb happy to hand out new ones
04:26 krk joined #gluster
04:28 nbalacha joined #gluster
04:50 karthik_us joined #gluster
04:52 rejy joined #gluster
04:53 kdhananjay joined #gluster
04:57 Shu6h3ndu joined #gluster
04:59 skumar joined #gluster
05:18 nbalacha joined #gluster
05:38 ndarshan joined #gluster
05:39 Prasad joined #gluster
05:45 itisravi__ joined #gluster
05:49 itisravi__ joined #gluster
05:51 msvbhat joined #gluster
05:54 Saravanakmr joined #gluster
05:55 hgowtham joined #gluster
06:02 Vishnu_ joined #gluster
06:15 prasanth joined #gluster
06:25 xavih joined #gluster
06:58 drymek joined #gluster
07:05 kotreshhr joined #gluster
07:12 Rakkin_ joined #gluster
07:14 DV joined #gluster
07:15 jtux joined #gluster
07:22 sunnyk joined #gluster
07:30 msvbhat joined #gluster
07:33 rafi2 joined #gluster
07:50 sunnyk joined #gluster
08:16 Vishnu__ joined #gluster
08:22 Vishnu_ joined #gluster
08:36 misc mhh, I wonder why supybot from RH was banned
08:47 ahino joined #gluster
08:50 fsimonce joined #gluster
08:54 victori joined #gluster
09:03 ahino1 joined #gluster
09:07 buvanesh_kumar joined #gluster
09:09 [diablo] joined #gluster
09:11 krk joined #gluster
09:12 msvbhat joined #gluster
09:36 ahino joined #gluster
09:41 msvbhat joined #gluster
09:44 itisravi joined #gluster
09:45 hgowtham joined #gluster
09:52 karthik_us joined #gluster
10:04 MrAbaddon joined #gluster
10:05 drymek_ joined #gluster
10:12 anthony25 joined #gluster
10:14 drymek_ joined #gluster
10:17 buvanesh_kumar joined #gluster
10:27 smremde joined #gluster
10:30 smremde can anyone help me before I enable geo-replication? my backup is already populated with a copy of my live data, but I have read that the GFIDs need to match before enabling geo-replication, and I can't seem to set the GFIDs on the backup. I have tried using gsync-sync-gfid and manually using setfattr -n glusterfs.gfid.heal -v "ff2a1e83-1b09-4f35-95ef-aac9313e2065\0file\0" path
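
One thing worth checking here (a guess, not a confirmed diagnosis): the glusterfs.gfid.heal virtual xattr is implemented by the gfid-access translator, which is only active on a client mount made with the aux-gfid-mount option; setting it through an ordinary mount, or directly on a brick, has no effect. A sketch with hypothetical host, volume, and paths:

    # Mount the backup volume with the gfid-access translator enabled
    mount -t glusterfs -o aux-gfid-mount backup-host:/backupvol /mnt/backupvol

    # Compare GFIDs on both sides by reading the trusted.gfid xattr off a brick backend
    getfattr -n trusted.gfid -e hex /bricks/backupvol-brick1/some/file

Note that the NUL separators in the glusterfs.gfid.heal value are awkward to pass from an interactive shell, which is part of why the gsync-sync-gfid helper exists.
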
10:34 drymek joined #gluster
10:43 jiffin joined #gluster
10:48 rafi joined #gluster
10:52 rafi1 joined #gluster
11:06 msvbhat joined #gluster
11:20 hgowtham joined #gluster
11:24 ThHirsch joined #gluster
11:25 hgowtham joined #gluster
11:28 Rakkin_ joined #gluster
11:31 kotreshhr left #gluster
11:32 rafi joined #gluster
11:35 atinmu joined #gluster
11:37 rafi2 joined #gluster
11:41 Prasad_ joined #gluster
11:51 Prasad__ joined #gluster
12:06 rafi1 joined #gluster
12:20 hgowtham joined #gluster
12:23 jiffin1 joined #gluster
12:27 rafi1 joined #gluster
12:33 bfoster joined #gluster
12:36 jiffin joined #gluster
12:43 arpu joined #gluster
12:50 msvbhat joined #gluster
12:58 ahino1 joined #gluster
13:15 psony|afk joined #gluster
13:24 Prasad joined #gluster
13:31 nbalacha joined #gluster
13:38 DV joined #gluster
13:49 ahino joined #gluster
14:00 plarsen joined #gluster
14:02 plarsen joined #gluster
14:17 krk joined #gluster
14:20 shyam joined #gluster
14:23 jobewan joined #gluster
14:24 phlogistonjohn joined #gluster
14:25 msvbhat joined #gluster
14:25 shyam joined #gluster
14:38 ic0n joined #gluster
14:49 skylar1 joined #gluster
15:07 ic0n joined #gluster
15:08 ahino1 joined #gluster
15:16 jstrunk joined #gluster
15:20 jobewan joined #gluster
15:20 melliott joined #gluster
15:23 Rakkin_ joined #gluster
15:24 dominicpg joined #gluster
15:25 brettnem joined #gluster
15:35 krk joined #gluster
15:38 jbrooks joined #gluster
15:42 shyam1 joined #gluster
15:47 shyam joined #gluster
15:52 ic0n joined #gluster
15:54 DV joined #gluster
15:54 Rakkin_ joined #gluster
16:01 [fre] joined #gluster
16:01 [fre] Hey Guys, Happy New Year & Best Wishes!
16:03 [fre] Could somebody enlighten me on a glusterfs-client-issue, related to memory-consumption?
16:07 gyadav joined #gluster
16:07 * cloph likely cannot - and it is rather quiet here, people likely are still on vacation - but if you don't post the actual question surely nobody can enlighten you.
16:08 [fre] :)
16:08 [fre] right! :)
16:09 [fre] I was trying to gather the most detailed evidence possible.
16:10 jobewan joined #gluster
16:11 [fre] We are actually running Alfresco on gluster volumes, mounted with default params. Thing is, memory consumption increases over time to many gigabytes, filling up swap space. After an unmount/remount, things get back to normal (400 MB tops). :)
16:12 kpease joined #gluster
16:13 rouven joined #gluster
16:17 ic0n joined #gluster
16:22 rouven joined #gluster
16:27 [fre] I noticed some memory leaks in the past and am wondering how things have evolved. Servers are running 3.8.4-18.4, clients 3.7.9.
16:28 major joined #gluster
16:34 nisroc_ joined #gluster
16:36 buvanesh_kumar_ joined #gluster
16:46 ic0n joined #gluster
16:48 rouven joined #gluster
16:50 cholcombe [fre]: check your sysctls
16:50 cholcombe linux caches pages for gluster and your kernel params are probably too aggressive
16:51 cholcombe there's a sysctl to prefer caching. I forget the name off the top of my head
16:51 cholcombe what's your swappiness setting?
16:54 Asako_ yeah, it sounds like cache not getting flushed
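
The sysctls being alluded to are most likely vm.swappiness (how eagerly the kernel swaps application memory out in favour of page cache) and vm.vfs_cache_pressure (how aggressively dentry and inode caches are reclaimed). A quick way to inspect and experiment; the values here are illustrative, not recommendations:

    sysctl vm.swappiness vm.vfs_cache_pressure    # defaults are typically 60 and 100

    # Temporarily make the kernel prefer dropping cache over swapping application memory
    sysctl -w vm.swappiness=10

    # Persist it (assuming a distro that reads /etc/sysctl.d)
    echo 'vm.swappiness = 10' > /etc/sysctl.d/90-gluster.conf
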
17:02 snehring joined #gluster
17:03 ic0n joined #gluster
17:21 sunny joined #gluster
17:32 msvbhat joined #gluster
17:37 ic0n joined #gluster
17:37 jiffin joined #gluster
17:42 arpu FUSE inited with protocol versions: glusterfs 7.24 kernel 7.23
17:43 arpu do I need to reboot the server?
17:45 ThHirsch joined #gluster
18:02 rafi1 joined #gluster
18:06 ic0n joined #gluster
18:08 jiffin joined #gluster
18:13 ThHirsch joined #gluster
18:20 guhcampos joined #gluster
18:22 ic0n joined #gluster
18:43 ic0n joined #gluster
19:02 rouven joined #gluster
19:10 MrAbaddon joined #gluster
19:21 dominicpg joined #gluster
19:22 zerick joined #gluster
19:31 major joined #gluster
19:31 ic0n joined #gluster
19:34 ThHirsch joined #gluster
19:40 ahino joined #gluster
19:51 jbrooks joined #gluster
19:56 ThHirsch joined #gluster
19:57 rwheeler joined #gluster
19:59 ic0n joined #gluster
20:05 kettlewell joined #gluster
20:13 Vapez joined #gluster
20:14 ic0n joined #gluster
20:16 kettlewell joined #gluster
20:25 kettlewell joined #gluster
20:44 ic0n joined #gluster
20:47 kettlewe_ joined #gluster
20:50 shellclear joined #gluster
20:54 kettlewell joined #gluster
20:55 JoeJulian [fre]: You can test the theory offered with "echo 3 >> /proc/sys/vm/drop_caches". I suspect 3.7.9 has memory leaks and you should use a newer client.
20:56 JoeJulian arpu: not necessarily. I seldom see those match.
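
For the record, the drop_caches test suggested above only releases clean kernel caches, so it helps to sync first and compare memory before and after; if usage stays high afterwards, the memory is being held by the glusterfs client process itself (pointing at a leak) rather than by the kernel:

    free -m                              # note "used" before
    sync                                 # flush dirty pages so they become droppable
    echo 3 > /proc/sys/vm/drop_caches    # drop page cache, dentries and inodes
    free -m                              # little change here implicates the client process
    ps -o rss,vsz,cmd -C glusterfs       # check the FUSE client's resident size directly
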
21:16 shellclear joined #gluster
21:19 kettlewell joined #gluster
21:20 major joined #gluster
21:27 DV joined #gluster
21:30 shellclear joined #gluster
21:31 ic0n joined #gluster
21:37 jbrooks joined #gluster
21:58 ic0n joined #gluster
22:13 ic0n joined #gluster
22:16 protoporpoise joined #gluster
22:18 protoporpoise kale: re: did I gain more speed by fixing the arbiter: no extra write speed, but yes to read.
22:24 edong23 joined #gluster
22:25 brettnem joined #gluster
22:41 protoporpoise question: the self-heal daemon - I've noticed that sometimes it'll be running on one host but not the other two hosts in the cluster. I'm monitoring this and thus getting alerts, but I'm trying to determine whether it's normal behaviour? (the self-heal daemon is enabled in the volume config)
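
For anyone hitting the same question: a quick way to see which servers are actually running a self-heal daemon, and whether the option is enabled, is sketched below (volume name hypothetical):

    gluster volume status myvol                        # look for the "Self-heal Daemon on <host>" rows
    gluster volume get myvol cluster.self-heal-daemon  # should report "on"
    pgrep -af glustershd                               # on each server: is the shd process running?
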
23:07 jobewan joined #gluster
23:08 ic0n joined #gluster
23:28 ic0n joined #gluster
23:31 ThHirsch joined #gluster
23:34 jbrooks joined #gluster
23:37 jbrooks joined #gluster
23:48 ic0n joined #gluster
23:55 jbrooks joined #gluster
