
IRC log for #gluster, 2016-06-02


All times shown according to UTC.

Time Nick Message
00:00 MikeLupe11 JoeJulian: I just found a note I made some days ago:
00:00 MikeLupe11 <post-factum>you want to issue only one add-brick command with 3 additional bricks specified and only one rebalance command on any node from your trusted pool
00:01 MikeLupe11 I'll try tomorrow.
00:05 MikeLupe11 bye
00:06 F2Knight joined #gluster
00:12 hi11111 joined #gluster
00:23 chirino joined #gluster
00:28 chirino joined #gluster
00:47 luizcpg joined #gluster
00:58 haomaiwang joined #gluster
01:01 haomaiwang joined #gluster
01:05 XpineX joined #gluster
01:09 hagarth joined #gluster
01:11 JesperA- joined #gluster
01:30 hagarth joined #gluster
01:33 shyam joined #gluster
01:46 haomaiwang joined #gluster
01:51 haomaiwang joined #gluster
02:01 haomaiwang joined #gluster
02:01 kdhananjay joined #gluster
02:02 shyam left #gluster
02:02 alghost joined #gluster
02:09 Javezim Are there any options to help reduce the likelihood of split-brains on continuously written files
02:10 Javezim Ie. slow the cluster down slightly?
02:34 Lee1092 joined #gluster
02:54 JoeJulian Javezim: 1st, unless "gluster volume heal $vol info" says split-brain, it's not.
02:55 JoeJulian 2nd, did you look in the client log for errors or connection issues?
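A minimal sketch of the two checks JoeJulian suggests, assuming a volume named gv0mel (the name that appears later in this log) and a FUSE mount at /mnt/gv0mel — the mount path is an assumption:

    # 1. Only entries listed here are genuine split-brains
    gluster volume heal gv0mel info split-brain
    # 2. The FUSE client log is named after the mount point; look for errors
    #    and disconnects around the time the affected files were written
    grep -E "\] E \[|disconnect" /var/log/glusterfs/mnt-gv0mel.log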
03:01 haomaiwang joined #gluster
03:04 nehar joined #gluster
03:05 julim joined #gluster
03:05 kenansulayman joined #gluster
03:08 Che-Anarch joined #gluster
03:10 luizcpg joined #gluster
03:34 jiffin joined #gluster
03:43 F2Knight joined #gluster
03:44 PaulCuzner joined #gluster
03:47 F2Knight joined #gluster
03:48 cliluw joined #gluster
03:49 natarej_ joined #gluster
03:50 PaulCuzner left #gluster
03:53 nbalacha joined #gluster
03:55 Javezim @JoeJulian What do you mean by client logs, sorry?
03:55 nathwill joined #gluster
03:56 shubhendu joined #gluster
03:56 atinm joined #gluster
03:59 itisravi joined #gluster
03:59 ramteid joined #gluster
04:01 haomaiwang joined #gluster
04:02 rafi joined #gluster
04:04 kenansulayman joined #gluster
04:08 kotreshhr joined #gluster
04:24 F2Knight joined #gluster
04:25 shortdudey123 joined #gluster
04:25 JoeJulian joined #gluster
04:26 mlhess joined #gluster
04:28 aspandey joined #gluster
04:31 Javezim Was looking through the brick logs and found this for one of the files that was having issues
04:31 Javezim http://paste.ubuntu.com/16914639/
04:31 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
04:45 kotreshhr left #gluster
04:46 gem joined #gluster
04:52 jiffin joined #gluster
04:55 nathwill joined #gluster
04:58 Javezim Am seeing a lot of these - [2016-06-01 00:40:44.640764] E [MSGID: 108006] [afr-common.c:4042:afr_notify] 0-gv0mel-replicate-3: All subvolumes are down. Going offline until atleast one of them comes back up.
05:01 haomaiwang joined #gluster
05:12 atalur joined #gluster
05:14 ndarshan joined #gluster
05:16 Pupeno joined #gluster
05:18 pur__ joined #gluster
05:24 aravindavk joined #gluster
05:26 rastar joined #gluster
05:26 ppai joined #gluster
05:27 sakshi joined #gluster
05:29 Gnomethrower joined #gluster
05:29 Apeksha joined #gluster
05:30 kdhananjay joined #gluster
05:32 hgowtham joined #gluster
05:34 karthik___ joined #gluster
05:38 Pupeno joined #gluster
05:39 PaulCuzner joined #gluster
05:39 rafi joined #gluster
05:41 overclk joined #gluster
05:41 karnan joined #gluster
05:41 Manikandan joined #gluster
05:42 kotreshhr joined #gluster
05:45 itisravi joined #gluster
05:48 aspandey joined #gluster
05:59 shruti joined #gluster
06:01 haomaiwang joined #gluster
06:06 anil_ joined #gluster
06:09 poornimag joined #gluster
06:11 Acinonyx joined #gluster
06:15 spalai joined #gluster
06:15 Bhaskarakiran joined #gluster
06:18 Bhaskarakiran joined #gluster
06:20 RameshN joined #gluster
06:23 jtux joined #gluster
06:28 kenansulayman joined #gluster
06:28 ashiq joined #gluster
06:28 nishanth joined #gluster
06:36 ramky joined #gluster
06:51 Javezim Has anyone performed any tuning on their Ubuntu systems that has made gluster run better?
06:52 [Enrico] joined #gluster
07:01 haomaiwang joined #gluster
07:04 spalai joined #gluster
07:06 spalai1 joined #gluster
07:07 spalai left #gluster
07:07 deniszh joined #gluster
07:08 claude_ joined #gluster
07:13 hackman joined #gluster
07:17 ctria joined #gluster
07:17 prasanth joined #gluster
07:25 Saravanakmr joined #gluster
07:30 kovshenin joined #gluster
07:38 ctria joined #gluster
07:55 jri joined #gluster
07:56 karthik___ joined #gluster
08:01 haomaiwang joined #gluster
08:12 Wizek_ joined #gluster
08:16 Pupeno joined #gluster
08:25 Wizek_ joined #gluster
08:27 Slashman joined #gluster
08:29 sabansal_ joined #gluster
08:36 purpleidea joined #gluster
08:36 purpleidea joined #gluster
08:41 claude_ joined #gluster
08:49 nbalacha joined #gluster
08:56 moka111 joined #gluster
08:59 moka111 Hello, is anyone here who might be able to help me? We have a serious problem on our production server. GlusterFS is using nearly 100% of our RAM, even after "echo 2 > /proc/sys/vm/drop_caches"
09:01 haomaiwang joined #gluster
09:04 post-factum moka111: english please :(
09:08 moka111 Hello. Our GlusterFS on our production system is using nearly 100% of RAM; the system is heavily swapping, even after executing "echo 2 > /proc/sys/vm/drop_caches"
09:08 moka111 Is there a way to fix it without stopping glusterfs or is there a patch? I think it's a memory leak
09:09 moka111 glusterfs 3.7.11 built on Apr 18 2016 14:19:11
09:10 post-factum moka111: would be nice to take statedumps first
09:12 moka111 could you tell me what exactly you mean? I'm new to this. Our hoster did the initial setup but it's not managed...
09:12 post-factum first of all, what leaks? client?
09:16 moka111 I think, it is the daemon: https://pl.vc/15ubuu
09:17 post-factum oh, glusterd leaks...
09:17 itisravi joined #gluster
09:17 post-factum what is your usage pattern?
09:22 moka111 the system holds all our images on two webservers (nginx) as backend/origin for a cdn
09:25 TvL2386 joined #gluster
09:25 post-factum ok, take a statedump
09:25 moka111 How can I do this?
09:25 post-factum send the glusterd process a USR1 signal and look for the statedump in the /var/run/gluster folder
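A rough sketch of what that looks like in practice, assuming glusterd's default statedump directory of /var/run/gluster:

    # Ask glusterd to write a statedump by sending it SIGUSR1
    kill -USR1 $(pidof glusterd)
    # The dump shows up as glusterdump.<pid>.dump.<timestamp>
    ls -lt /var/run/gluster/ | head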
09:29 moka111 http://expirebox.com/download/6bbbc93359b6f345b7b55b13eb524abc.html
09:29 glusterbot Title: ExpireBox | glusterdump.1359.dump.1464859580 (at expirebox.com)
09:30 jiffin1 joined #gluster
09:32 Pupeno joined #gluster
09:32 post-factum moka111: now you would like to join #gluster-dev and catch some dev :)
09:33 moka111 ok, i'll try :)
09:33 post-factum [mgmt/glusterd.management - usage-type gf_common_mt_memdup memusage]
09:33 post-factum size=825126254
09:33 post-factum looks too much to me
09:34 post-factum also
09:34 post-factum [mgmt/glusterd.management - usage-type gf_common_mt_char memusage]
09:34 post-factum size=932274270
09:34 post-factum and
09:34 post-factum [mgmt/glusterd.management - usage-type gf_common_mt_mem_pool memusage]
09:34 post-factum size=3881152672
09:34 post-factum and some integer overflow:
09:35 post-factum [mgmt/glusterd.management - usage-type (null) memusage]
09:35 post-factum size=4294966920
09:35 moka111 are these sizes values that "can be used" or are they "actually used"?
09:35 post-factum i believe those are "actually used"
09:41 moka111 ok, but that's only about 9 GB if the values are in bytes, yet the actual RAM usage is above 20GB. Is this because of the leak?
09:42 post-factum probably, some allocations are not reflected by statedump, or i missed something
09:43 Jules- joined #gluster
09:47 atalur moka111, you might get a wider audience if you mail this statedump, along with a short description of the memory leak issue you observe, to gluster-users@gluster.org.
09:47 anil_ left #gluster
09:55 DV_ joined #gluster
10:01 haomaiwang joined #gluster
10:04 nbalacha joined #gluster
10:04 kotreshhr joined #gluster
10:06 hgowtham joined #gluster
10:07 luizcpg joined #gluster
10:13 jiffin1 joined #gluster
10:15 purpleidea joined #gluster
10:17 Pupeno joined #gluster
10:21 Pupeno joined #gluster
10:33 jiffin1 joined #gluster
10:39 hgowtham joined #gluster
10:44 atinm|brb joined #gluster
10:46 kotreshhr joined #gluster
10:52 Pupeno joined #gluster
11:01 haomaiwang joined #gluster
11:05 ira joined #gluster
11:10 DV_ joined #gluster
11:15 johnmilton joined #gluster
11:21 Pupeno joined #gluster
11:25 claude__ joined #gluster
11:31 julim joined #gluster
11:32 chirino joined #gluster
12:01 haomaiwang joined #gluster
12:06 JesperA joined #gluster
12:12 atinm|brb joined #gluster
12:14 luizcpg joined #gluster
12:19 itisravi_ joined #gluster
12:29 nottc joined #gluster
12:32 primehaxor joined #gluster
12:39 kdhananjay joined #gluster
12:41 julim joined #gluster
12:43 hi11111 joined #gluster
12:46 hi111111 joined #gluster
12:48 d0nn1e joined #gluster
12:48 plarsen joined #gluster
12:50 unclemarc joined #gluster
12:52 nbalacha joined #gluster
12:56 jri joined #gluster
12:57 shyam joined #gluster
12:59 luis_silva joined #gluster
12:59 goretoxo joined #gluster
13:01 haomaiwang joined #gluster
13:02 Pupeno_ joined #gluster
13:04 plarsen joined #gluster
13:05 bluenemo joined #gluster
13:07 guhcampos joined #gluster
13:14 julim joined #gluster
13:26 Drankis joined #gluster
13:28 Gnomethrower joined #gluster
13:30 jri joined #gluster
13:33 robb_nl joined #gluster
13:43 haomaiwang joined #gluster
13:48 B21956 joined #gluster
13:49 skylar joined #gluster
13:50 haomaiwang joined #gluster
13:51 haomaiwang joined #gluster
14:01 haomaiwang joined #gluster
14:06 arcolife joined #gluster
14:10 dienes joined #gluster
14:11 dienes hi, I have a question for you! Is anyone online?
14:12 jri_ joined #gluster
14:16 post-factum dienes: just ask and wait
14:17 misc yeah, order is important, cause wait, then ask is less efficient :p
14:21 dienes ok.. So my problem is the .glusterfs directory. After I deleted many files from the gluster-mounted directory, .glusterfs still seems to contain them (I guess). Do you know any method for getting back the storage capacity?
14:22 jri joined #gluster
14:23 shyam joined #gluster
14:25 shyam joined #gluster
14:29 gem joined #gluster
14:30 amye joined #gluster
14:36 wushudoin joined #gluster
14:40 jrwr joined #gluster
14:41 jrwr So I have a 5-server gluster that I need to move to a single machine, but I would like to use the fact that it's on 4 servers now to get max speed, since the target server has a faster connection. Will geo-replication be able to do that, or do I need to use some other method?
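If geo-replication does fit this migration, the session setup looks roughly like the following; the volume and host names are placeholders, and it assumes passwordless root SSH from one of the current servers to the target machine:

    # Generate and distribute the geo-replication ssh keys
    gluster system:: execute gsec_create
    # Create and start a session from the existing volume to a volume on the target
    gluster volume geo-replication mastervol target.example.com::targetvol create push-pem
    gluster volume geo-replication mastervol target.example.com::targetvol start
    # Monitor sync progress
    gluster volume geo-replication mastervol target.example.com::targetvol status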
14:54 rwheeler joined #gluster
14:55 haomaiwang joined #gluster
15:01 haomaiwang joined #gluster
15:05 Pupeno joined #gluster
15:06 shubhendu joined #gluster
15:07 PotatoGim joined #gluster
15:09 tyler274 joined #gluster
15:13 rwheeler joined #gluster
15:18 zzal joined #gluster
15:21 zzal Is it a good idea to put your apache webroot on glusterfs? Apache is serving multiple PHP applications.
15:23 dienes zzal: I don't think that's a good idea.. what do you want to do?
15:24 JoeJulian zzal: It /can/ be, but you need to be aware of how to be successful with that. Clustered filesystems are less efficient when performing hundreds of needless operations per page load. See ,,(php)
15:24 glusterbot zzal: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH --fopen-keep-cache
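As a hedged illustration of factoid #2, a FUSE mount with those options might look like this; the concrete timeout values, server, volume, and mount path are arbitrary placeholders, not recommendations:

    # Relax FUSE cache timeouts for a PHP-heavy read workload (values are illustrative)
    mount -t glusterfs \
        -o attribute-timeout=30,entry-timeout=30,negative-timeout=30,fopen-keep-cache \
        server1:/webvol /var/www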
15:25 JoeJulian dienes: the files under .glusterfs are all hardlinks. They don't take up any additional space.
15:25 zzal I have multiple apache servers serving the same php applications with haproxy in front of them. I would like to sync the code of all applications and apache vhost config files
15:26 JoeJulian gluster is not a sync tool. It is a clustered filesystem.
15:26 JoeJulian I make the distinction to help you change your mindset and avoid the same pitfalls everyone else falls into.
15:28 jri joined #gluster
15:32 dienes JoeJulian: I don't see that.. the .glusterfs directory contains 77G, but the main directory (without .glusterfs) contains 53K
15:33 jri_ joined #gluster
15:36 jiffin joined #gluster
15:37 hagarth joined #gluster
15:42 kpease joined #gluster
15:52 jiffin joined #gluster
15:55 hackman joined #gluster
15:57 deangiberson joined #gluster
16:05 Pupeno joined #gluster
16:12 JoeJulian dienes: Well that's not right... You can use find to find files with only one link under ".glusterfs/[0-9a-f][0-9a-f]/". There shouldn't be any.
16:17 dienes JoeJulian: I thought so too.. it's not good. I had 90G of data in that dataset. After I deleted everything, .glusterfs/ still contains a lot of data.. (Yes, those are hard links)
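A small sketch of the check JoeJulian describes, assuming the brick directory is /data/brick1 (the path is an assumption):

    # Regular files under .glusterfs with a link count of 1 have lost their
    # counterpart in the brick's namespace and account for the orphaned space
    find /data/brick1/.glusterfs/[0-9a-f][0-9a-f] -type f -links 1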
16:20 rafi1 joined #gluster
16:21 gem joined #gluster
16:37 robb_nl joined #gluster
16:45 atalur joined #gluster
16:54 Philambdo joined #gluster
16:56 Drankis joined #gluster
17:09 rafi joined #gluster
17:10 DV_ joined #gluster
17:17 David_H_Smith joined #gluster
17:26 F2Knight joined #gluster
17:34 kpease joined #gluster
17:38 kpease joined #gluster
17:42 Drankis joined #gluster
17:46 kpease joined #gluster
17:56 nathwill joined #gluster
17:59 luizcpg joined #gluster
18:14 Philambdo joined #gluster
18:18 nishanth joined #gluster
18:23 julim joined #gluster
18:52 cliluw joined #gluster
18:56 deniszh joined #gluster
18:58 shyam1 joined #gluster
19:00 shyam joined #gluster
19:04 Pupeno joined #gluster
19:04 renout_away joined #gluster
19:06 PatNarciso joined #gluster
19:08 PatNarciso Hello all.  Hello Gluster Bot.
19:09 DV_ joined #gluster
19:11 PatNarciso I was just reading the Red Hat Gluster Admin guide.  I see Tier + SMB is not supported in this release.  Is there any issue with Samba via GlusterFuse on a tiered volume?
19:43 robb_nl joined #gluster
19:44 chirino joined #gluster
19:49 Wizek__ joined #gluster
19:54 deniszh1 joined #gluster
19:55 jri joined #gluster
20:00 ira PatNarciso: That we never support Samba + FUSE? :)
20:01 PatNarciso heh, good point.
20:03 PatNarciso aside from performance considerations, is there any reason why a Samba server should not run on a tiered volume?
20:05 shyam joined #gluster
20:11 Chaot_s joined #gluster
20:12 johnmilton joined #gluster
20:20 kovshenin joined #gluster
20:21 JoeJulian PatNarciso: I can't see why it wouldn't work. The vfs just uses libgfapi, and the translators do all the work with tiering, so I have no idea why, other than the usual Red Hat practice of not "supporting" things they don't have trained support people to handle and that they haven't built in the lab and hammered on sufficiently. ira?
20:23 ira We actually hit bugs, that's why we didn't support it.
20:23 ira Honestly.
20:23 ira PatNarciso, JoeJulian: ^^
20:24 JoeJulian So since it's all the same translators, I read that as there are bugs in the tiering graph.
20:24 ira JoeJulian: Depends on whether FUSE covers up the issues or not.
20:26 JoeJulian I don't suppose you could point me at the bugs?
20:28 PatNarciso If it's public, I'd also be curious to read what the bugs impacted.
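For contrast with the FUSE route discussed above, a minimal libgfapi-backed Samba share (the combination Red Hat's guide declines to support on tiered volumes) looks roughly like this; the share name, volume name, and log path are assumptions:

    # Append an illustrative vfs_glusterfs share definition to smb.conf
    cat >> /etc/samba/smb.conf <<'EOF'
    [tiered-share]
        path = /
        vfs objects = glusterfs
        glusterfs:volume = tiervol
        glusterfs:logfile = /var/log/samba/glusterfs-tiervol.log
        read only = no
    EOF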
20:37 deniszh joined #gluster
20:54 johnmilton joined #gluster
20:59 shersi_ joined #gluster
20:59 shersi_ Hi All, has anyone used the gluster_volume module in Ansible?
21:07 shersi_ I'm using the gluster_volume module in Ansible 2.1. I'm trying to create a glusterfs volume and the first Ansible run always fails with the following error: "Host xxxxx is not in 'Peer in Cluster' state\n".  If I re-run the task, it completes successfully.
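One common workaround for the first-run race shersi_ describes is to wait until the peers have actually reached the 'Peer in Cluster' state before creating the volume; a rough shell sketch, assuming a three-node pool with placeholder host and brick names:

    # Wait for the other two peers to report 'Peer in Cluster'
    until [ "$(gluster peer status | grep -c 'Peer in Cluster')" -ge 2 ]; do
        sleep 2
    done
    gluster volume create vol0 replica 3 host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/b1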
21:09 JoeJulian I'm working on the salt module for gluster. I have philosophical differences with Ansible. @purpleidea would suggest you look at https://github.com/purpleidea/mgmt too.
21:09 glusterbot Title: GitHub - purpleidea/mgmt: A next generation config management prototype! (at github.com)
21:11 purpleidea +1
21:11 purpleidea i hear the mgmt thing is really sweet
21:27 Pupeno joined #gluster
22:06 shyam joined #gluster
22:09 luizcpg joined #gluster
22:37 kovshenin joined #gluster
22:39 Pupeno joined #gluster
23:00 David_H_Smith joined #gluster
23:31 moka111 joined #gluster
23:34 suliba joined #gluster
23:49 gnulnx left #gluster
23:51 Pupeno joined #gluster
