
IRC log for #gluster, 2018-01-17


All times shown according to UTC.

Time Nick Message
00:04 susant joined #gluster
00:15 gospod3 joined #gluster
00:27 Rakkin__ joined #gluster
00:44 MrAbaddon joined #gluster
00:48 shellclear joined #gluster
01:11 kramdoss_ joined #gluster
01:20 gospod3 joined #gluster
01:39 vbellur joined #gluster
01:51 ic0n joined #gluster
01:59 jkroon joined #gluster
02:02 gospod2 joined #gluster
02:10 Shu6h3ndu joined #gluster
02:11 atinm joined #gluster
02:13 ppai joined #gluster
02:18 susant joined #gluster
02:24 kotreshhr joined #gluster
02:26 gospod2 joined #gluster
02:27 susant joined #gluster
02:35 hgowtham joined #gluster
02:43 jiffin joined #gluster
02:55 kotreshhr joined #gluster
02:57 ilbot3 joined #gluster
02:57 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:57 nbalacha joined #gluster
03:08 ppai joined #gluster
03:09 ndarshan joined #gluster
03:15 cyberbootje joined #gluster
03:31 gospod2 joined #gluster
03:38 ndarshan joined #gluster
03:38 Shu6h3ndu joined #gluster
03:43 mbukatov joined #gluster
03:44 kpease joined #gluster
03:45 rwheeler joined #gluster
03:47 ACiDGRiM joined #gluster
03:51 major joined #gluster
03:51 vbellur joined #gluster
04:04 psony|afk joined #gluster
04:28 poornima_ joined #gluster
04:34 ppai joined #gluster
04:36 naisanza joined #gluster
04:36 plarsen joined #gluster
04:37 gospod2 joined #gluster
04:37 Prasad joined #gluster
04:39 Humble joined #gluster
04:42 Rakkin__ joined #gluster
04:52 susant joined #gluster
04:53 ppai joined #gluster
04:57 ACiDGRiM Anyone know how to make a distributed volume report the total size of all bricks when mounted, and not the size of one brick?
05:04 skumar joined #gluster
05:05 nbalacha ACiDGRiM, it should do that already
05:05 nbalacha ACiDGRiM, what are you seeing?
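A distributed volume mounted with the native client is expected to show the summed capacity of its bricks in df. A small sketch for checking where the numbers diverge, assuming an illustrative volume named distvol served by server1/server2 and mounted at /mnt/distvol:

    # Size reported through the Gluster mount (should be roughly the sum of the bricks)
    df -h /mnt/distvol

    # Size of each brick's filesystem, checked on the servers themselves
    ssh server1 df -h /bricks/brick1
    ssh server2 df -h /bricks/brick1

    # Confirm the volume type really is Distribute and that all bricks are listed
    gluster volume info distvol

If the mount reports only one brick's worth of space, comparing these numbers usually narrows down whether a brick is missing from the layout or the bricks are sharing a filesystem.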
05:05 sunny joined #gluster
05:06 Rakkin__ joined #gluster
05:07 hgowtham aravindavk, ppai, kshlm can any of you take a look at https://github.com/gluster/glusterd2/pull/490
05:07 glusterbot Title: quota plugin stubs by sanoj-unnikrishnan · Pull Request #490 · gluster/glusterd2 · GitHub (at github.com)
05:08 ppai hgowtham, will do
05:08 hgowtham ppai, thanks :)
05:08 susant joined #gluster
05:10 Prasad_ joined #gluster
05:12 varshar joined #gluster
05:13 Prasad__ joined #gluster
05:15 rafi joined #gluster
05:18 varsha_ joined #gluster
05:18 Saravanakmr joined #gluster
05:21 susant joined #gluster
05:22 ACiDGRiM joined #gluster
05:24 Prasad_ joined #gluster
05:32 Humble joined #gluster
05:32 ACiDGRiM joined #gluster
05:42 gospod2 joined #gluster
05:44 jason-ma joined #gluster
05:47 jason-ma could anyone advise: when Heketi creates a volume, how does it determine the brick size? Thanks.
05:48 nigelb Humble: ^^
05:49 daMaestro joined #gluster
05:52 ACiDGRiM_ joined #gluster
05:54 apandey joined #gluster
05:54 Humble jason-ma, heketi volume create request has to be filled with size param
05:54 Humble that decides the brick size
05:55 jason-ma @Humble  I mean the brick size, not the volume size
05:55 ACiDGRiM_ joined #gluster
05:56 jason-ma for example, in my test environment: 3 nodes, 1 big RAID6 (36T) per node. When I created a volume bigger than 4T, heketi split it into bricks of about 2.5T each... and if the volume size was 10T or 15T, the brick size was about 3.7T or bigger. Are there any rules for Heketi's behavior?
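One way to see what Heketi actually decides, rather than guessing at its internal rules, is to create a test volume and inspect the brick layout it produces. A rough sketch with heketi-cli, where the server URL and sizes are only placeholders:

    export HEKETI_CLI_SERVER=http://heketi.example.com:8080

    # Ask for a volume by total size; Heketi chooses how to split it into bricks
    heketi-cli volume create --size=100 --replica=3

    # Inspect the bricks it allocated (use the volume id printed by the create)
    heketi-cli volume info <volume-id>

    # Full picture of devices and bricks across the cluster
    heketi-cli topology info

Comparing the volume info output for a few different sizes, as jason-ma did, is the quickest way to see the brick-splitting behavior of the Heketi version in use.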
06:10 kdhananjay joined #gluster
06:10 armyriad joined #gluster
06:13 buvanesh_kumar joined #gluster
06:26 ACiDGRiM joined #gluster
06:29 kdhananjay joined #gluster
06:30 varsha__ joined #gluster
06:33 xavih joined #gluster
06:36 ACiDGRiM_ joined #gluster
06:38 Vishnu_ joined #gluster
06:39 ACiDGRiM_ joined #gluster
06:40 ACiDGRiM_ joined #gluster
06:42 msvbhat joined #gluster
06:44 ACiDGRiM joined #gluster
06:45 kotreshhr joined #gluster
06:48 gospod2 joined #gluster
06:52 ACiDGRiM joined #gluster
06:53 jkroon joined #gluster
06:57 vbellur joined #gluster
07:10 [diablo] joined #gluster
07:17 jtux joined #gluster
07:19 raghug joined #gluster
07:20 raghug nh2[m], ping
07:20 glusterbot raghug: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
07:24 kotreshhr joined #gluster
07:30 kotreshhr left #gluster
07:36 msvbhat joined #gluster
07:53 gospod2 joined #gluster
08:00 ndarshan joined #gluster
08:03 Rakkin__ joined #gluster
08:03 gospod2 joined #gluster
08:11 jri joined #gluster
08:12 ivan_rossi joined #gluster
08:12 ivan_rossi left #gluster
08:15 msvbhat joined #gluster
08:17 jbrooks joined #gluster
08:29 Humble joined #gluster
08:33 sanoj joined #gluster
08:34 Humble joined #gluster
08:58 atinm_ joined #gluster
08:59 gospod2 joined #gluster
09:12 msvbhat joined #gluster
09:15 jbrooks joined #gluster
09:19 ThHirsch joined #gluster
09:21 mattmcc joined #gluster
09:23 ThHirsch1 joined #gluster
09:42 varshar joined #gluster
09:43 ppai joined #gluster
09:47 buvanesh_kumar joined #gluster
09:52 atinm joined #gluster
10:04 gospod2 joined #gluster
10:05 karthik_us joined #gluster
10:07 poornima_ joined #gluster
10:13 hgowtham joined #gluster
10:16 ndarshan joined #gluster
10:18 MrAbaddon joined #gluster
10:35 rafi joined #gluster
10:36 poornima_ joined #gluster
10:36 Rakkin__ joined #gluster
10:38 LucaBoss74_ joined #gluster
10:46 msvbhat joined #gluster
10:49 jri joined #gluster
10:57 kotreshhr joined #gluster
11:09 gospod2 joined #gluster
11:15 purpleidea joined #gluster
11:15 purpleidea joined #gluster
11:18 buvanesh_kumar joined #gluster
11:19 kdhananjay LucaBoss74_: there?
11:19 LucaBoss74_ yes
11:20 kdhananjay LucaBoss74_: ok. so you say you see this with libgfapi and 3.12.4. correct?
11:20 LucaBoss74_ yes it's correct
11:20 LucaBoss74_ tried with different qemu-kvm versions
11:20 LucaBoss74_ actually I'm using 2.9.0
11:20 LucaBoss74_ but verified on 2.6.0 too
11:21 ThHirsch joined #gluster
11:32 gospod2 joined #gluster
11:37 ws2k3 joined #gluster
11:44 shyam joined #gluster
11:48 aravindavk joined #gluster
11:50 msvbhat joined #gluster
12:00 susant joined #gluster
12:05 jri Hi there, maybe you can help me... :) I've got performance issues with web requests going through nginx > Python/Django > GlusterFS. I'm not using the NFS client due to another issue, so I'm using the Gluster client, and my web requests take about 3-6 sec to finish instead of 1-2 sec with the NFS client. I've disabled Direct I/O in my fstab and I get 230MB/s.
12:06 jri Do you know anything else I can tune in my Gluster config?
12:06 jri or do those request times look normal to you?
12:07 jiffin1 joined #gluster
12:07 jri (230MB/s speed is for read )
12:15 gospod2 joined #gluster
12:18 jiffin1 joined #gluster
12:22 buvanesh_kumar joined #gluster
12:51 msvbhat joined #gluster
12:57 atinm_ joined #gluster
12:59 ic0n joined #gluster
13:04 Vapez joined #gluster
13:11 phlogistonjohn joined #gluster
13:13 DV__ joined #gluster
13:13 Klas jri: glusterfs has several performance issues with statting files, they are primarily an issue with the FUSE client since it verifies with all servers at once, sequentially
13:18 jri @Klas : is it possible (and safe) to disable statting ?
13:20 gospod2 joined #gluster
13:26 Klas nope
13:26 Klas gluster is based on quorum
13:27 Klas asking all of the servers about the file status is fundamental to the entire concept of how file integrity in gluster works
13:27 Klas this basically means that the slowest brick defines the latency of all operations
13:27 Klas for every single transaction
13:28 Klas NFS works a bit differently, so statting generally goes faster, not sure what it means for stability tbh
13:37 kkeithley stat()ing a file also triggers a self-heal check
13:37 _nixpanic joined #gluster
13:37 _nixpanic joined #gluster
13:37 susant joined #gluster
13:38 Klas ah, that I didn't know
13:39 Klas jri: a generally positive performance enhancement is not statting contents of dirs unless necessary
13:39 kkeithley it's one of the reasons why using PHP really slows gluster down. PHP is infamous for stat()ing every 'include' file
13:40 Klas *brrr*
13:40 ndevos @php
13:40 glusterbot ndevos: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH
13:40 glusterbot ndevos: --fopen-keep-cache
13:40 Klas apache in general with .htaccess would probably be a noticeable cost as well
13:45 jstrunk joined #gluster
13:46 jri Thx Klas and kkeithley it's interesting
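Following glusterbot's hint above, the caching-related options can be set on the mount and on the volume. The snippet below is only a sketch: the volume name, paths, and timeout values are illustrative (the bot's "HIGH" is not a literal value), and long timeouts trade consistency for latency, so they mainly suit mostly-read workloads:

    # /etc/fstab - native FUSE mount with more aggressive metadata caching
    server1:/webvol  /var/www/shared  glusterfs  defaults,_netdev,attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache  0 0

    # Volume-side metadata caching that is often tuned for stat-heavy, small-file loads
    gluster volume set webvol performance.stat-prefetch on
    gluster volume set webvol features.cache-invalidation on
    gluster volume set webvol performance.md-cache-timeout 600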
13:49 sunny joined #gluster
13:56 psony|afk joined #gluster
14:11 nbalacha joined #gluster
14:18 buvanesh_kumar joined #gluster
14:26 gospod2 joined #gluster
14:33 TBlaar joined #gluster
14:39 msvbhat joined #gluster
14:43 skylar1 joined #gluster
14:53 atinm_ joined #gluster
14:57 guhcampos joined #gluster
14:59 FuzzyVeg joined #gluster
15:00 prasanth_ joined #gluster
15:02 legreffier joined #gluster
15:07 rtalur_afk[m] joined #gluster
15:12 jkroon joined #gluster
15:16 jbrooks joined #gluster
15:24 yoavz joined #gluster
15:30 major joined #gluster
15:31 gospod2 joined #gluster
15:42 msvbhat joined #gluster
15:44 Anarka joined #gluster
15:45 rizonz joined #gluster
15:45 rizonz hi guys!
15:47 rizonz side_control: :)
16:00 WebertRLZ hey gius
16:01 WebertRLZ s/gius/guys
16:01 WebertRLZ I'm seeing glusterfsd consuming 90% cpu on a near-idle filesystem. I'm very new to glusterfs, can't figure it out
16:03 WebertRLZ there's nothing going on on the logs
16:04 WebertRLZ there is no process writing in the gluster volue either
16:04 Rakkin joined #gluster
16:04 WebertRLZ s/volue/volume
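Some first things to look at when a brick process (glusterfsd) spins on an otherwise idle volume; the volume name here is illustrative:

    # Background self-heal or rebalance can keep bricks busy with no client I/O
    gluster volume heal myvol info
    gluster volume rebalance myvol status

    # Which brick process and which threads are hot
    gluster volume status myvol
    top -H -p "$(pgrep -d, glusterfsd)"

    # Per-FOP counters show what the brick is actually doing
    gluster volume profile myvol start
    gluster volume profile myvol info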
16:06 msvbhat joined #gluster
16:20 gospod2 joined #gluster
16:31 vbellur joined #gluster
16:36 gospod2 joined #gluster
16:40 guhcampos joined #gluster
16:42 plarsen joined #gluster
16:42 johnnyNumber5 joined #gluster
16:49 tom[] joined #gluster
17:00 d-fence joined #gluster
17:00 d-fence_ joined #gluster
17:03 mdan joined #gluster
17:04 mdan Hi, anyone here can help with Gluster healing ?
17:07 jbrooks joined #gluster
17:22 Peppard joined #gluster
17:32 kramdoss_ joined #gluster
17:35 jiffin joined #gluster
17:40 raghug joined #gluster
17:41 johnnyNumber5 joined #gluster
17:42 jbrooks joined #gluster
17:42 gospod2 joined #gluster
17:47 Vapez joined #gluster
17:54 Humble joined #gluster
18:01 skylar1 joined #gluster
18:03 arpu_ joined #gluster
18:18 vbellur joined #gluster
18:19 vbellur joined #gluster
18:47 gospod2 joined #gluster
18:57 mallorn After we upgraded from 3.10 to 3.13 our gluster filesystem became *very* slow.  It seems to hang on initial stat() calls, and then we get timeouts.  Any ideas?
19:25 major joined #gluster
19:33 major is it allowed to add in an arbiter with replica 4?
19:39 major durp, found it: Note: Volumes using the arbiter feature can only be replica 3 arbiter
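For reference, the arbiter is requested at volume creation time; the syntax is roughly as below, with illustrative hostnames and brick paths (the third brick of each replica set becomes the metadata-only arbiter):

    gluster volume create myvol replica 3 arbiter 1 \
        server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/arbiter1
    gluster volume start myvol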
19:45 mallorn I should add that we're doing disperse distributed volumes ( 5 x (2 + 1) ), and even doing a 'df' command on the system will hang.  If I do an ls on the mountpoint it does a stat() call and takes 2m41s to complete (there are ten files in the directory).
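When lookups crawl like this, a few low-risk checks usually narrow down whether it is a dead brick, pending heals, or something in the network path (volume name illustrative):

    # Are all bricks up and accepting connections?
    gluster volume status myvol

    # Pending heals on a disperse volume can make every lookup expensive
    gluster volume heal myvol info

    # Per-brick, per-FOP latency counters; LOOKUP/STAT times stand out quickly
    gluster volume profile myvol start
    gluster volume profile myvol info

    # Time a single metadata operation from a client to get a baseline
    time stat /mnt/myvol/somefile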
19:49 jkroon joined #gluster
19:51 major wonder how you replace an arbiter brick w/ a non-arbiter brick..
19:53 gospod2 joined #gluster
19:57 mallorn Does anyone know when CentOS is going to publish 3.13.1?  They have it on the buildlogs server (dated Dec 21), but it's still not on the mirror server.
19:59 skylar1 joined #gluster
20:09 MrAbaddon joined #gluster
20:27 kkeithley I don't see any email indicating anyone has tested the 3.13.1 packages. ndevos usually packages them and he won't tag them for release (push to mirror.centos.org) until someone tests them.
20:28 kkeithley these are "community" packages — it would be nice if someone in the community stepped up to the plate occasionally.
20:28 major soo .. has no one replaced an arbiter brick w/ a data brick before .. like .. on purpose?
20:28 kkeithley stepped up to the plate and pitched in.
20:30 * major makes some test volumes to see if he can't figure out if there is some special magic that needs to be done.
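One approach that gets suggested for converting an arbiter brick into a full data brick is to drop to replica 2 by removing the arbiter and then add a normal brick back. This is exactly the kind of thing worth trying on a scratch volume first, as major is doing, and the volume and brick paths here are only placeholders:

    # Remove the arbiter, reducing the replica count to 2
    gluster volume remove-brick myvol replica 2 server3:/bricks/arbiter1 force

    # Add a full data brick back, returning to a plain replica 3 volume
    gluster volume add-brick myvol replica 3 server3:/bricks/brick1

    # Let self-heal populate the new brick, then watch the pending entries drain
    gluster volume heal myvol full
    gluster volume heal myvol info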
20:35 MrAbaddon joined #gluster
20:38 mallorn OK.  I just tested them and it's a major disaster.  How do I let him know?
20:42 Asako_ you're not the first one I've seen reporting performance issues
20:44 melliott joined #gluster
20:48 mallorn Our whole Openstack cluster is slowly timing out and failing because of the performance problems.
20:52 Asako_ hopefully you're not running this in prod
20:53 major and here I was toying w/ upgrading to 3.13..
20:55 mallorn We are.  We were having 3.10 issues that we had to get away from.  Swap one set of issues for another.  :)
20:58 gospod2 joined #gluster
21:03 jbrooks joined #gluster
21:09 msvbhat joined #gluster
21:10 kkeithley Sorry, that was a little bit unfair of me. ndevos only sent the mail to the {packagers,maintainers}@gluster.org mailing lists.
21:11 kkeithley mallorn: I suppose you could consider ndevos as having been notified now.
21:11 kkeithley Is it just poor performance? Or are there other issues?
21:12 kkeithley Other issues that are not a function of the poor performance?
21:13 kkeithley Regardless, I would certainly raise the performance issue over on #gluster-dev
21:14 kkeithley And out of curiosity, have you tried 3.12?
21:16 melliott the poor performance eventually results in file access timeouts that cascade up the openstack stack to fatal errors.
21:16 mallorn I was having problems with dependencies.  It wanted liburcu-cds.so.1 and liburcu-bp.so.1, but the userspace-rcu package (installed from the CentOS repository) has liburcu-cds.so.6 and liburcu-bp.so.6 which makes it fail the dependency.
21:17 kkeithley And the poor performance does or doesn't happen with 3.12?
21:19 kkeithley wrt liburcu libs, are you on CentOS6 or CentOS7?
21:20 kkeithley And just as a sanity check, you're not using EPEL, right?
21:21 mallorn We're using CentOS7, not EPEL.
21:22 mallorn We went straight from 3.10 to 3.13, so we never tested 3.12.  We only get downtime for updates twice a year or so, so we couldn't try 3.12.
21:22 kkeithley okay
21:22 tuxxie joined #gluster
21:29 ThHirsch joined #gluster
21:35 ThHirsch joined #gluster
21:47 kkeithley dunno what to say about the liburcu libs.  The packages have Requires: liburcu-{bp,cds}.so.6   ldd of /usr/lib64/glusterfs/3.13.0/xlator/mgmt/glusterd.so shows it wants liburcu-{bp,cds}.so.6
21:47 kkeithley and the liburcu.rpm from storage sig contains  liburcu-{bp,cds}.so.6
21:47 kkeithley everything looks right to me
21:48 kkeithley The glusterfs-server-3.13.0 rpm has Requires: liburcu-{bp,cds}.so.6
21:53 kkeithley maybe a fresh set of eyes will see something I'm not seeing. When ndevos comes on line tomorrow perhaps
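A quick way to look at both sides of that dependency mismatch on the affected node, using the paths already mentioned above:

    # What the installed glusterfs-server package says it requires
    rpm -q --requires glusterfs-server | grep liburcu

    # What the installed userspace-rcu package actually provides
    rpm -q --provides userspace-rcu | grep liburcu
    rpm -ql userspace-rcu | grep liburcu

    # What glusterd's management xlator is linked against at runtime
    ldd /usr/lib64/glusterfs/3.13.0/xlator/mgmt/glusterd.so | grep liburcu

Comparing the requires and provides lines should show whether the mismatch is in the glusterfs packages themselves or in which repository the node's userspace-rcu came from.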
21:55 rafi joined #gluster
22:04 gospod2 joined #gluster
22:06 cholcombe ndevos: i have a new repo i'm in the process of open sourcing at work and I was wondering if i could move it under the gluster github org like my rust gfapi bindings
22:08 s34n I have a peer on which the network dropped and came back. But the peer still has a status of disconnected.
22:08 s34n How do I nudge it back?
22:09 amye cholcombe, we don't have a ton of process around this at the moment, I would like there to be more.
22:09 cholcombe amye: yeah i figured
22:09 amye That being said, if you drop me an email, I can make sure this gets in front of the right people
22:09 amye amye@redhat.com
22:09 cholcombe sounds good to me!  I think we spoke before
22:10 amye Ideally, there should be 'hey I made a thing, can I put it in Gluster's stuff' and we can just do it.
22:10 amye Yes, yes we have. :)
22:10 amye Right now the process is manual and depends on people checking their email.
22:10 cholcombe i'm not sure why but work has some stupid new policy where they don't want to open source things under their own org
22:10 amye Eh, everyone's got reasons, but if you want to put it under Gluster.org, I'm sure we can work with that.
22:11 cholcombe cool
22:12 cholcombe i think people will like it.  it fakes out OpenShift into thinking directories are volumes and slaps a quota on them
22:15 s34n peer1 says peer2 is disconnected; peer2 says peer1 is connected. How can this be?
22:18 s34n peer3 also says peer2 is disconnected
22:18 s34n restarting glusterd on peer2 hasn't helped
22:20 vbellur amye, cholcombe: right now, bugs logged against infra can possibly result in a project under the gluster github org..
22:20 vbellur cholcombe: your project sounds pretty cool!
22:20 cholcombe vbellur: thanks :)
22:21 amye vbellur, that's one way to do it, but we should make official documentation on this stuff. 'I made a thing, now I want to give it to you.'
22:21 amye Bugs are good in short term.
22:21 amye I'll put it on the community working group list.
22:22 vbellur amye: agree on having a better process, would be pretty cool to have the process in official documentation!
22:22 * amye is making an issue as we speak
22:23 amye https://github.com/gluster/community/issues/21 has my rough notes
22:23 glusterbot Title: Process and Documentation for accepting new projects · Issue #21 · gluster/community · GitHub (at github.com)
22:24 amye It's so that I don't forget that we need this and other people can tell us what we're forgetting too.
22:47 s34n how do I restart a node?
22:54 s34n I need to bring a node back into the cluster. I'm not sure why it's disconnected.
22:54 s34n Is anyone willing to help with this?
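A rough checklist for a peer that stays Disconnected after a network blip; the hostnames are illustrative and the commands are the usual low-risk ones:

    # On every node: is the management daemon running, and what does it think of its peers?
    systemctl status glusterd
    gluster peer status

    # From a node that reports peer2 as disconnected, check basic reachability
    ping -c 3 peer2
    nc -zv peer2 24007        # 24007 is glusterd's management port

    # Restart glusterd on the node the others cannot see, then re-check peer status
    systemctl restart glusterd
    gluster peer status

    # glusterd.log usually states why the peer handshake is failing
    less /var/log/glusterfs/glusterd.log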
23:09 gospod2 joined #gluster
23:46 e1z0 joined #gluster
23:56 major how do you manually wipe out a file from the backend storage ?
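In general terms, and with the usual caveat that removing files through a client mount is always the safer option: every file on a brick has a matching hard link under .glusterfs keyed by its gfid, and both copies have to go, on every brick that holds them, or the leftover gfid link can linger and confuse heals. A sketch with illustrative paths and volume name:

    BRICK=/bricks/brick1
    F=path/to/unwanted-file

    # Read the file's gfid from the brick copy (a UUID stored in a trusted xattr)
    getfattr -n trusted.gfid -e hex "$BRICK/$F"

    # For a gfid of, say, 0xaabbccdd-..., the hard link lives at
    #   $BRICK/.glusterfs/aa/bb/aabbccdd-...
    # Remove the named file and that gfid link on each brick that has them
    rm "$BRICK/$F"
    rm "$BRICK/.glusterfs/aa/bb/aabbccdd-..."

    # Afterwards, check that nothing is left pending
    gluster volume heal myvol info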
