
IRC log for #gluster, 2017-02-27


All times shown according to UTC.

Time Nick Message
00:08 nishanth joined #gluster
00:24 nishanth joined #gluster
00:35 javi404 joined #gluster
00:51 jbrooks joined #gluster
00:54 major joined #gluster
01:05 kraynor5b joined #gluster
01:07 derjohn_mob joined #gluster
01:07 shdeng joined #gluster
01:21 cyberbootje1 major, when i tweak the volume on gluster, is it real time or do i need to restart things?
01:23 saali joined #gluster
02:14 major Real time
02:36 cyberbootje1 so i figured out that if i use cache=writethrough in KVM it works; as soon as i use cache=none or directsync it just won't work
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:48 pioto joined #gluster
02:50 squizzi joined #gluster
03:04 kramdoss_ joined #gluster
03:06 sbulage joined #gluster
03:09 jeffspeff joined #gluster
03:36 magrawal joined #gluster
03:38 nbalacha joined #gluster
03:48 Seth_Karlo joined #gluster
03:49 atinm joined #gluster
04:10 Wizek_ joined #gluster
04:12 itisravi joined #gluster
04:17 plarsen joined #gluster
04:17 ankitr joined #gluster
04:19 jkroon joined #gluster
04:22 sanoj joined #gluster
04:28 karthik_us joined #gluster
04:35 ankitr joined #gluster
04:40 Shu6h3ndu joined #gluster
04:45 plarsen joined #gluster
04:48 ankitr joined #gluster
04:52 kdhananjay joined #gluster
04:55 buvanesh_kumar joined #gluster
04:56 Prasad joined #gluster
05:06 ankitr joined #gluster
05:06 msvbhat joined #gluster
05:08 BitByteNybble110 joined #gluster
05:09 prasanth joined #gluster
05:20 kotreshhr joined #gluster
05:22 RameshN joined #gluster
05:22 rafi joined #gluster
05:34 hgowtham joined #gluster
05:35 hgowtham joined #gluster
05:36 XpineX joined #gluster
05:37 rjoseph joined #gluster
05:37 skumar joined #gluster
05:38 k4n0 joined #gluster
05:39 Karan joined #gluster
05:39 ankitr joined #gluster
05:43 itisravi joined #gluster
05:43 hgowtham joined #gluster
05:44 ndarshan joined #gluster
05:51 apandey joined #gluster
05:53 riyas joined #gluster
06:00 prasanth joined #gluster
06:02 Saravanakmr joined #gluster
06:03 apandey joined #gluster
06:13 susant joined #gluster
06:16 ppai joined #gluster
06:17 sona joined #gluster
06:20 Prasad joined #gluster
06:21 nishanth joined #gluster
06:23 msvbhat joined #gluster
06:27 skumar joined #gluster
06:29 jiffin joined #gluster
06:32 Philambdo joined #gluster
06:32 Seth_Karlo joined #gluster
06:33 susant joined #gluster
06:36 kdhananjay joined #gluster
06:38 Humble joined #gluster
06:42 ashiq joined #gluster
06:48 RameshN joined #gluster
07:00 msvbhat joined #gluster
07:03 mhulsman joined #gluster
07:18 sbulage joined #gluster
07:23 jtux joined #gluster
07:25 skoduri joined #gluster
07:31 BatS9 joined #gluster
07:32 mbukatov joined #gluster
07:37 kdhananjay joined #gluster
07:45 XpineX joined #gluster
07:48 MikeLupe joined #gluster
07:51 Telsin joined #gluster
07:52 msvbhat joined #gluster
07:52 RameshN joined #gluster
07:53 ivan_rossi joined #gluster
08:05 k4n0 joined #gluster
08:10 itisravi joined #gluster
08:11 d0nn1e joined #gluster
08:13 saintpablo joined #gluster
08:19 jkroon_ joined #gluster
08:25 [diablo] joined #gluster
08:28 jtux joined #gluster
08:29 jkroon_ joined #gluster
08:36 hgowtham joined #gluster
08:43 fsimonce joined #gluster
08:46 k4n0 joined #gluster
08:46 rastar joined #gluster
08:54 skoduri_ joined #gluster
08:54 Guest99775 joined #gluster
08:56 jkroon_ joined #gluster
09:04 Jules- anybody know what can cause glusterfsd memory usage to keep growing until max mem is reached?
09:08 sona joined #gluster
09:10 k4n0 joined #gluster
09:10 msvbhat joined #gluster
09:24 prasanth joined #gluster
09:28 skumar_ joined #gluster
09:31 rafi1 joined #gluster
09:41 skumar__ joined #gluster
09:41 pjrebollo joined #gluster
09:45 karthik_us joined #gluster
09:47 Jacob843 joined #gluster
09:57 ankitr joined #gluster
10:10 kotreshhr joined #gluster
10:35 itisravi joined #gluster
10:39 jkroon_ joined #gluster
10:52 Seth_Karlo joined #gluster
10:52 rafi joined #gluster
10:53 sona joined #gluster
10:56 Seth_Kar_ joined #gluster
10:58 derjohn_mob joined #gluster
11:04 kotreshhr joined #gluster
11:04 jkroon__ joined #gluster
11:10 itisravi_ joined #gluster
11:13 sona joined #gluster
11:14 poornima_ joined #gluster
11:14 jkroon__ joined #gluster
11:17 atinm joined #gluster
11:17 cloph cyberbootje1: there's a bug in qemu/kvm when using cache=none and sparse files.. https://access.redhat.com/articles/40643 - in other words: when using cache=none/aio=native, use preallocated files...
11:17 glusterbot Title: Using native AIO with qemu-kvm can cause filesystem corruption with sparse images on EXT4 - Red Hat Customer Portal (at access.redhat.com)
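The sparse-vs-preallocated distinction behind cloph's advice can be sketched with stock Linux tools; all paths below are throwaway placeholders, and the qemu-img line (a real option of `qemu-img create`) is shown as a comment for reference:

```shell
# Sparse vs. preallocated image files: the corruption described in the
# Red Hat article above affects sparse images used with cache=none/aio=native.
truncate  -s 100M /tmp/sparse.img     # sparse: blocks allocated lazily on write
fallocate -l 100M /tmp/prealloc.img   # preallocated: blocks reserved up front
du -k /tmp/sparse.img /tmp/prealloc.img   # sparse shows ~0K, prealloc ~102400K
# qemu-img can preallocate at image-creation time, e.g.:
#   qemu-img create -f raw -o preallocation=full disk.img 20G
```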
11:19 nishanth joined #gluster
11:19 cyberbootje1 cloph, that's from 2012 ?
11:19 cyberbootje1 not fixed ?
11:19 cloph not fixed.
11:20 cloph at least with the version shipped with debian 8 it is still an issue.
11:20 cyberbootje1 i'm using glusterfs 8.x now
11:20 cloph maybe less likely to hit it nowadays, as every newly created VM is more likely to have aligned i/o
11:21 cyberbootje1 with debian it's shipped with 5.2 i think
11:21 cyberbootje1 3.5.2
11:21 itisravi_ joined #gluster
11:21 cyberbootje1 i'm forgetting the 3 :-)
11:23 cloph symptoms are a little different though (vm will not lose the disk, but rather after installation package files won't match md5sum, and likely won't boot afterwards). We had issues with vms losing the disk (aka remounting it read-only) when trying to re-add a brick after reinstallation of one of the peers. gluster at the time didn't like that/didn't heal properly and thus i/o stalled....
11:24 cyberbootje1 now it does like it?
11:24 cloph at that point (after multiple attempts) we took a different approach and just created a new volume and migrated the existing ones over to it...
11:27 nh2 joined #gluster
11:43 kotreshhr joined #gluster
11:49 pjrebollo joined #gluster
11:51 prasanth joined #gluster
11:52 karthik_us joined #gluster
11:53 mindgaze joined #gluster
11:53 pjrebollo joined #gluster
11:55 msvbhat joined #gluster
12:00 chawlanikhil24 joined #gluster
12:02 chawlanikhil24 hello people
12:02 chawlanikhil24 I would like to report a bug on CentOS 7
12:03 chawlanikhil24 I don't know if it's an OS bug or a gluster bug,
12:03 chawlanikhil24 I am sharing the log
12:06 Philambdo joined #gluster
12:06 ppai chawlanikhil24, yes please
12:07 cloph (please share it using a pastebin website :-))
12:08 chawlanikhil24 cloph, yess using pastebin
12:08 chawlanikhil24 creating the paste
12:10 chawlanikhil24 http://pastebin.com/iBd0truF
12:10 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
12:10 chawlanikhil24 here is the log for ./configure
12:10 chawlanikhil24 error : configure: error: liburcu-bp not found
12:13 apandey joined #gluster
12:17 chawlanikhil24 ppai, cloph , any idea ?
12:17 chawlanikhil24 i even tried to install the library manually, but the terminal says it's already installed
12:17 chawlanikhil24 https://rpmfind.net/linux/rpm2html/search.php?query=liburcu-bp.so.1()(64bit) ..
12:18 glusterbot Title: RPM resource liburcu-bp.so.1( (at rpmfind.net)
12:18 cloph you need the corresponding dev / devel package
12:18 chawlanikhil24 cloph, Tried all permutations, in fact using <libname>*
12:18 cloph (and next time you should paste the full log, the bottom/end typically is the most interesting bit)
12:19 chawlanikhil24 cloph, let me do it again, apologies for the trouble
12:19 Klas we are having issues with backups of a large number of files (think rsync)
12:19 skumar joined #gluster
12:20 chawlanikhil24 cloph, here's the link
12:20 chawlanikhil24 http://pastebin.com/DbftC7gj
12:20 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
12:20 Klas could we use geo-replication to copy all data to a separate node to not have to bother about the overhead
12:21 cloph Klas: geo-replication still seems to have issues with symlinks :-/
12:21 Klas symlinks is no issue in this usecase
12:22 cloph and hybrid crawl sucks at catching up when there's a lot of rotation in the files :-(
12:22 Klas the files are seldomly changed in most of these cases
12:22 cloph userspace-rcu-devel.x86_64 : Development files for userspace-rcu
12:22 chawlanikhil24 ppai, http://pastebin.com/DbftC7gj
12:22 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
12:23 chawlanikhil24 cloph, and when I install glusterfs using yum, there is no issue
12:24 Klas cloph: but, can geo-replication be made to just one node?
12:25 cloph you pasted the same thing again, I doubt configure will really stop after that. and it doesn't include the invocation either. But in any case: If you install a binary package, you don't need the development packages of course. If you want to compile yourself, you need the devel packages. I already pasted the name under which you should be able to find it in CentOS
12:25 Klas basically, I just want a real-time backup to a secondary node with which I can run the sync-job way quicker
12:25 skumar joined #gluster
12:25 Klas basically, our issue is that we will reach about 12 hours of statting files every night
12:25 Klas 3.5 million files
12:25 cloph geo-replication to just one node is the default, so sure
12:26 Klas hmm, interesting =)
12:26 cloph it is not real-time backup though. It is async.
12:27 cloph (should rather write: geo-replication is to one other volume by default - so depends on how your other volume is setup/whether that in turn would use multiple peers :-))
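The single-slave-volume setup cloph describes looks roughly like this with the 3.x geo-replication CLI; `mastervol`, `backuphost`, `backupvol`, and the brick path are all placeholders:

```shell
# On the slave side: a one-brick volume is enough for a one-node backup target.
gluster volume create backupvol backuphost:/data/brick
gluster volume start backupvol

# On the master side: create, start, and monitor the (async) geo-rep session.
gluster volume geo-replication mastervol backuphost::backupvol create push-pem
gluster volume geo-replication mastervol backuphost::backupvol start
gluster volume geo-replication mastervol backuphost::backupvol status
```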
12:29 nthomas joined #gluster
12:30 rjoseph joined #gluster
12:33 rafi1 joined #gluster
12:36 chawlanikhil24 cloph, thanq very much :)
12:36 rafi joined #gluster
12:42 mattmcc_ joined #gluster
12:43 Ramereth joined #gluster
12:46 jwd joined #gluster
12:48 Alghost joined #gluster
12:49 arpu joined #gluster
12:53 ashka hi, can anyone answer a question about an older gluster version (3.5.2)? I am trying to know if it is possible to cache writes instead of iowait the writing process until the file is written
12:54 cloph somewhat - gluster has a writeback option, but that doesn't play nice with qemu/kvm afaict
12:55 cyberbootje1 are there any best practice guidelines for tweaking gluster when using it with KVM ?
12:56 cloph there's the virt group/set of suggested config switches for use as virt-storage.
12:57 ashka cloph: it's only flat files, so this shouldn't be an issue for my use case
12:57 cyberbootje1 where can i find these?
12:58 cyberbootje1 and general ones? for example, throttle the self-heal so it won't saturate the network making the storage unavailable...
12:58 cloph https://github.com/gluster/glusterfs/blob/master/extras/group-virt.example
12:58 glusterbot Title: glusterfs/group-virt.example at master · gluster/glusterfs · GitHub (at github.com)
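The group file can be applied in one shot; around the 3.x releases it contained roughly the options below (check the linked file for the exact list in your version), and the volume name is a placeholder:

```shell
# Apply the whole "virt" option group to a volume:
gluster volume set myvol group virt

# Roughly what the group sets (from extras/group-virt.example, 3.x era):
#   performance.quick-read=off
#   performance.read-ahead=off
#   performance.io-cache=off
#   performance.stat-prefetch=off
#   cluster.eager-lock=enable
#   network.remote-dio=enable
#   cluster.quorum-type=auto
#   cluster.server-quorum-type=server
```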
13:01 atinm joined #gluster
13:02 unclemarc joined #gluster
13:04 k4n0 joined #gluster
13:09 Wizek_ joined #gluster
13:13 kotreshhr left #gluster
13:18 skumar_ joined #gluster
13:20 danielitit_ joined #gluster
13:28 kramdoss_ joined #gluster
13:30 baber joined #gluster
13:31 atm0sphere joined #gluster
13:37 rastar joined #gluster
13:41 Akram joined #gluster
13:41 Akram hi guys, is there a way to change the default owner-uid for all volumes to be created ?
13:48 msvbhat joined #gluster
13:49 RameshN joined #gluster
13:58 ira joined #gluster
14:00 pulli joined #gluster
14:01 skumar_ joined #gluster
14:01 dominicpg joined #gluster
14:05 Seth_Karlo joined #gluster
14:06 skumar_ joined #gluster
14:07 kpease joined #gluster
14:08 kpease_ joined #gluster
14:18 fyxim_ joined #gluster
14:21 plarsen joined #gluster
14:26 melliott joined #gluster
14:37 skylar joined #gluster
14:39 tallmocha joined #gluster
14:42 kramdoss_ joined #gluster
14:47 sona joined #gluster
14:50 pulli joined #gluster
14:52 nbalacha joined #gluster
14:54 RameshN joined #gluster
14:57 susant left #gluster
15:05 squizzi joined #gluster
15:06 atm0sphere joined #gluster
15:10 jkroon_ joined #gluster
15:19 oajs_ joined #gluster
15:31 jdossey joined #gluster
15:34 atm0sphere joined #gluster
15:43 farhorizon joined #gluster
15:46 RameshN joined #gluster
15:53 farhoriz_ joined #gluster
16:02 wushudoin joined #gluster
16:11 Philambdo1 joined #gluster
16:14 pioto joined #gluster
16:18 ahino joined #gluster
16:21 atm0sphere joined #gluster
16:29 sanoj|afk joined #gluster
16:37 Shu6h3ndu joined #gluster
16:42 sbulage joined #gluster
16:44 Clone heya, does anyone know what the reason is that http://lists.gluster.org/pipermail/gluster-users/2016-August/027995.html actually speeds up the healing process? I don
16:44 glusterbot Title: [Gluster-users] Fwd: disperse heal speed up (at lists.gluster.org)
16:44 Clone I don't see any attr on the client mounts at all..
16:47 Clone so why the find on trusted.ec.heal attr?
16:48 Seth_Karlo joined #gluster
16:48 Saravanakmr joined #gluster
16:48 Seth_Karlo joined #gluster
16:50 cloph the find is not on the heal attribute, the find is on all files, and the attempt to read that attribute then triggers heal.
16:52 Clone yes.. and that works because the client would normally see a file that has this attribute and initiate the heal?
16:53 Clone I don't see the connection between the reading of the attribute and the subsequent healing.
16:56 cloph gluster will heal files when they are accessed, so if you force accessing them, you trigger heals and don't wait for the file's getting accessed in regular fashion.
16:57 Clone ah, so it's an arbitrary value.
16:58 csuka joined #gluster
16:59 csuki joined #gluster
16:59 cloph rather a special way to have this for dispersed volumes.
17:00 cloph for regular/replicated ones a "stat" or ls -l (anything that doesn't just read the directory entry, but the file's stats) will trigger the inconsistency check/self-heal
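The two trigger-by-access commands cloph contrasts look roughly like this; `MOUNT` is a stand-in for the real FUSE mount point (a throwaway directory is used so the sketch runs anywhere):

```shell
MOUNT=/tmp/heal-demo   # stand-in for the real FUSE mount, e.g. /mnt/glustervol
mkdir -p "$MOUNT/dir" && touch "$MOUNT/dir/file"

# Replicated volumes: a stat per file is enough to kick off self-heal.
find "$MOUNT" -exec stat {} \; > /dev/null

# Dispersed volumes (the mailing-list trick above): reading the
# trusted.ec.heal xattr on every entry forces the heal; on a non-gluster
# filesystem the attribute is simply absent, so errors are ignored here.
find "$MOUNT" -depth -exec getfattr -h -n trusted.ec.heal {} \; 2>/dev/null || true
```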
17:17 farhorizon joined #gluster
17:24 susant joined #gluster
17:40 farhorizon joined #gluster
17:40 susant left #gluster
17:49 mhulsman joined #gluster
17:52 ivan_rossi left #gluster
18:16 vbellur joined #gluster
18:31 Humble joined #gluster
18:40 ksandha_ joined #gluster
18:41 atinm joined #gluster
19:08 mhulsman joined #gluster
19:19 mhulsman joined #gluster
19:20 mhulsman joined #gluster
19:39 chris349 joined #gluster
19:39 Clone cloph: thnx!
19:43 mhulsman joined #gluster
19:45 jeffspeff joined #gluster
19:48 mhulsman joined #gluster
19:54 riyas joined #gluster
20:13 Jacob843 joined #gluster
20:14 major joined #gluster
20:16 mhulsman joined #gluster
20:17 major joined #gluster
20:26 rastar joined #gluster
20:26 pulli joined #gluster
20:28 kpease joined #gluster
20:29 danielitit_ joined #gluster
20:33 pjreboll_ joined #gluster
20:46 mhulsman joined #gluster
21:05 major curious .. when doing a replica 2 with arbiter .. if the arbiter is offline .. everything is still fine?
21:05 plarsen joined #gluster
21:06 major hadn't played with this option yet and I am sort of trying to wrap my head around all the implications
21:13 oajs_ joined #gluster
21:14 jdossey_ joined #gluster
21:22 Seth_Karlo joined #gluster
21:28 JoeJulian major: yes. When the arbiter comes back online, the metadata is healed from the two replicas.
21:28 major so it is just a way to balance the quorum?
21:28 JoeJulian Yes. Helps prevent split-brain.
21:29 major yah .. makes sense .. was just curious if there was any sort of extra burden being placed on the arbiter that would require it to be up when there is no other failures
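For reference, the arbiter setup being discussed is created like this (hostnames and brick paths are placeholders); the third brick stores only metadata and exists to break quorum ties:

```shell
# "replica 3 arbiter 1": two data bricks plus one metadata-only arbiter brick.
gluster volume create demovol replica 3 arbiter 1 \
    host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/arb
gluster volume start demovol
```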
21:30 cyberbootje1 i'm playing with the mtu size, when i set the client to 9000 i will get a transport endpoint not connected, if i set it to 8000, it's no problem... i'm sure all devices are set to 9000, any clue? (testing with mtu 9000 works as well, just not with the glusterfs client)
21:31 JoeJulian I run 9000 with no issues.
21:31 cyberbootje1 hmm
21:32 cyberbootje1 the reason i'm playing with mtu is that i cannot get any more speed than 170MB/s on the client while i know the network is full 10G and the storage can do way more than 170MB/s, am i missing a gluster tuning option?
21:34 shyam joined #gluster
21:34 major this tanker rollover on I-5 is causing traffic hell
21:39 ndevos cyberbootje1: throughput also depends a lot on how you're testing, if you use 'dd', try with a larger 'bs=..' option, or run multiple i/o processes at the same time to simulate threading, possibly have multiple fuse mounts in the same client too
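ndevos's two suggestions (larger `bs=`, parallel writers) can be sketched as below; `/tmp` stands in for the gluster mount point, and `conv=fdatasync` keeps the numbers honest by forcing data to stable storage:

```shell
TARGET=/tmp/gluster-bench   # replace with a directory on the FUSE mount
mkdir -p "$TARGET"

# A large block size amortizes the per-request (FUSE round-trip) overhead:
dd if=/dev/zero of="$TARGET/one" bs=1M count=64 conv=fdatasync

# Several writers in parallel approximate threaded i/o:
for i in 1 2 3 4; do
    dd if=/dev/zero of="$TARGET/p$i" bs=1M count=16 conv=fdatasync &
done
wait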
21:40 cyberbootje1 ndevos, yeah i know, i'm trying everything, also queue depth 32, i'm going to do an NFS benchmark just to be sure so i can exclude things
21:42 ndevos cyberbootje1: a fuse mount can well be the bottleneck, if that is the case, nfs will likely do a little better
21:42 misc joined #gluster
21:42 cyberbootje1 let me guess, i cannot get around fuse if i want the gluster client...
21:44 ndevos if you want a filesystem interface it is fuse, nfs or smb, if you have applications you could think about using libgfapi (in C and several bindings in other languages)
21:47 jdossey joined #gluster
21:47 cyberbootje1 ndevos, nfs would be fine but i guess i won't have the HA failover that the gluster client has...
21:52 ndevos cyberbootje1: indeed, not by default, but you can configure nfs-ganesha + pacemaker/corosync to do HA
21:53 cyberbootje1 just ucarp not enough ?
21:56 misc joined #gluster
22:12 ndevos cyberbootje1: no, you still need to send a signal to nfs-ganesha so that it starts a procedure to get clients to recover locks
22:37 d0nn1e joined #gluster
22:45 pjrebollo joined #gluster
22:45 plarsen joined #gluster
22:52 MidlandTroy joined #gluster
22:54 nathwill joined #gluster
23:02 amye Happy 3.10 release day!
23:02 amye https://blog.gluster.org/2017/02/announcing-gluster-3-10/
23:02 glusterbot Title: Announcing Gluster 3.10 | Gluster Community Website (at blog.gluster.org)
23:06 farhorizon joined #gluster
23:24 pulli1 joined #gluster
23:25 oajs_ joined #gluster
23:26 plarsen joined #gluster
23:38 oajs_ joined #gluster
23:43 farhorizon joined #gluster
