
IRC log for #gluster, 2017-02-28


All times shown according to UTC.

Time Nick Message
00:01 glusterbot` joined #gluster
00:05 major JoeJulian, soo .. hope you didn't have any plans for getting around in the Seattle area today
00:07 Utoxin joined #gluster
00:11 zakharovvi[m] joined #gluster
00:12 jdossey joined #gluster
00:12 glusterbot joined #gluster
00:16 snehring joined #gluster
00:16 vbellur joined #gluster
00:17 MidlandTroy joined #gluster
00:43 atm0sphere joined #gluster
00:50 nthomas joined #gluster
00:52 moneylotion joined #gluster
00:56 jbrooks joined #gluster
01:01 pjrebollo joined #gluster
01:01 nthomas joined #gluster
01:07 cyberbootje1 is it normal to see a lot of "no data available" in combination with "shard_common_lookup_shards_cbk" in the gluster logs ?
01:08 cyberbootje1 just to mention, for testing purposes i have set up gluster with one server and one brick
01:14 misc kkeithley: I see that coverity logs are not uploaded on weekend, shall I safely assume that there is no cron to do it ?
01:18 shdeng joined #gluster
02:00 bitonic joined #gluster
02:18 auzty joined #gluster
02:29 rastar joined #gluster
02:47 riyas joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:53 kramdoss_ joined #gluster
02:54 overyander joined #gluster
03:24 unclemarc joined #gluster
03:30 moneylotion joined #gluster
03:35 sbulage joined #gluster
03:38 dominicpg joined #gluster
03:50 nishanth joined #gluster
03:51 atinm joined #gluster
03:58 ahino joined #gluster
04:00 plarsen joined #gluster
04:07 RameshN joined #gluster
04:11 buvanesh_kumar joined #gluster
04:16 kramdoss_ joined #gluster
04:29 ankitr joined #gluster
04:37 Shu6h3ndu joined #gluster
04:39 plarsen joined #gluster
04:41 nbalacha joined #gluster
04:46 skumar joined #gluster
04:49 k0nsl joined #gluster
04:49 k0nsl joined #gluster
04:51 kramdoss_ joined #gluster
04:56 rafi1 joined #gluster
05:03 kraynor5b_ joined #gluster
05:08 BitByteNybble110 joined #gluster
05:10 Jacob843 joined #gluster
05:14 kraynor5b__ joined #gluster
05:15 ndarshan joined #gluster
05:16 prasanth joined #gluster
05:17 moneylotion joined #gluster
05:21 Shu6h3ndu joined #gluster
05:22 jiffin joined #gluster
05:26 kraynor5b_ joined #gluster
05:30 kdhananjay joined #gluster
05:31 kenansulayman joined #gluster
05:31 susant joined #gluster
05:33 ppai joined #gluster
05:36 kraynor5b_ joined #gluster
05:38 aravindavk joined #gluster
05:40 rastar joined #gluster
05:42 k4n0 joined #gluster
05:43 itisravi joined #gluster
05:48 Prasad joined #gluster
05:50 sbulage joined #gluster
05:51 sanoj joined #gluster
05:52 ankitr joined #gluster
05:56 apandey joined #gluster
05:57 danielitit_ joined #gluster
05:57 Saravanakmr joined #gluster
06:01 prasanth joined #gluster
06:01 kraynor5b__ joined #gluster
06:06 rjoseph joined #gluster
06:10 nbalacha joined #gluster
06:15 BlackoutWNCT Hey guys, quick question about Gluster 3.10. Is it backwards compatible with the 3.8 client?
06:15 BlackoutWNCT As in, can a 3.10 client mount a 3.8 mount point?
06:19 hgowtham joined #gluster
06:21 kotreshhr joined #gluster
06:22 nbalacha_ joined #gluster
06:24 jkroon joined #gluster
06:25 rastar joined #gluster
06:29 Karan joined #gluster
06:32 Saravanakmr joined #gluster
06:33 skumar joined #gluster
06:36 [diablo] joined #gluster
06:39 apandey joined #gluster
06:41 sona joined #gluster
06:46 nishanth joined #gluster
06:47 Humble joined #gluster
06:54 cesar_ joined #gluster
06:54 Guest8934 Hi
06:54 glusterbot Guest8934: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
06:57 Guest8934 Can I configure a gluster server and a gluster client on the same machine with two separate disks?
06:57 Guest8934 E.g. I have a web server and I have a partition that I need to replicate to have high availability, can I install gluster on that web server
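[Editor's note: the short answer to Guest8934's question is yes — a host can run glusterd as a server and also mount the volume as a FUSE client. A minimal sketch of the usual two-node pattern, with placeholder names (web1/web2, the brick and mount paths) that are not from the log:]

```shell
# Sketch (hypothetical names): web1 and web2 each contribute a brick on
# their spare disk, and web1 also mounts the volume as a client.
gluster peer probe web2                      # run once, from web1
gluster volume create webdata replica 2 \
    web1:/bricks/webdata/brick web2:/bricks/webdata/brick
gluster volume start webdata
# Mount the volume on web1 itself; the FUSE client and the brick can
# coexist on one host as long as the mount point is not the brick path.
mount -t glusterfs localhost:/webdata /var/www/shared
```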
07:04 skoduri joined #gluster
07:05 RameshN joined #gluster
07:10 mhulsman joined #gluster
07:10 msvbhat joined #gluster
07:17 itisravi joined #gluster
07:17 itisravi joined #gluster
07:20 nbalacha_ joined #gluster
07:24 jtux joined #gluster
07:32 aravindavk joined #gluster
07:38 k4n0 joined #gluster
07:43 aravindavk joined #gluster
07:44 sona joined #gluster
07:47 msvbhat joined #gluster
07:47 circ-user-6O5Su joined #gluster
07:50 mbukatov joined #gluster
07:54 atrius_ joined #gluster
07:59 atrius joined #gluster
08:07 ashiq joined #gluster
08:10 ivan_rossi joined #gluster
08:10 gk-1wm-su joined #gluster
08:10 gk-1wm-su left #gluster
08:14 kraynor5b joined #gluster
08:14 sona joined #gluster
08:23 rastar joined #gluster
08:24 masber joined #gluster
08:33 jbrooks joined #gluster
08:40 pulli joined #gluster
08:46 pulli joined #gluster
08:49 ahino joined #gluster
08:54 flying joined #gluster
08:56 fsimonce joined #gluster
09:03 nh2 joined #gluster
09:18 atinm joined #gluster
09:18 apandey joined #gluster
09:26 sona joined #gluster
09:30 ppai joined #gluster
09:31 atrius joined #gluster
09:38 poornima joined #gluster
09:48 prasanth joined #gluster
09:49 p7mo joined #gluster
09:49 Seth_Karlo joined #gluster
09:49 RameshN joined #gluster
09:52 Seth_Karlo joined #gluster
09:58 kotreshhr joined #gluster
10:08 Seth_Karlo joined #gluster
10:09 jkroon_ joined #gluster
10:11 jkroon__ joined #gluster
10:17 rjoseph joined #gluster
10:18 hybrid512 joined #gluster
10:18 nh2 ndevos: I noticed that I didn't get any email when people commented on the (first) patch I submitted to Gerrit, do I need to enable that I want to be notified somewhere?
10:18 Limebyte joined #gluster
10:21 Seth_Karlo joined #gluster
10:22 rastar joined #gluster
10:25 Seth_Karlo joined #gluster
10:27 ppai joined #gluster
10:27 RameshN joined #gluster
10:30 msvbhat joined #gluster
10:34 DV__ joined #gluster
10:36 zakharovvi[m] joined #gluster
10:36 rjoseph joined #gluster
10:37 BatS9 joined #gluster
10:47 nishanth joined #gluster
10:49 ppai joined #gluster
10:51 Seth_Kar_ joined #gluster
10:57 arpu joined #gluster
11:00 msvbhat joined #gluster
11:01 itisravi_ joined #gluster
11:01 MidlandTroy71 joined #gluster
11:10 DV__ joined #gluster
11:28 k4n0 joined #gluster
11:29 riyas joined #gluster
11:30 MidlandTroy joined #gluster
11:36 jkroon we just bumped into a situation where we had a cluster with multiple machines.
11:36 jkroon two of those were queued for decom.
11:40 nbalacha_ joined #gluster
11:46 devyani7 joined #gluster
11:48 kotreshhr joined #gluster
11:51 kramdoss_ joined #gluster
11:54 derjohn_mob joined #gluster
11:55 jkroon all the bricks are intact (got moved prior) but now we're unable to unlink those two peers from the rest of the cluster.
11:56 jkroon the force option seems to be deprecated but it seems Pieter has managed to temporarily get those two nodes back up.
11:56 jkroon so we're sorted.
11:58 ShwethaHP joined #gluster
12:00 kpease joined #gluster
12:00 jiffin Gluster Bug Triage started on gluster-meeting
12:11 itisravi joined #gluster
12:27 Seth_Karlo joined #gluster
12:30 Seth_Kar_ joined #gluster
12:34 susant joined #gluster
12:40 buvanesh_kumar joined #gluster
12:46 ira joined #gluster
12:47 Saravanakmr joined #gluster
12:49 sona joined #gluster
12:57 ahino1 joined #gluster
13:11 glusterbot joined #gluster
13:15 nishanth joined #gluster
13:19 Saravanakmr joined #gluster
13:19 susant joined #gluster
13:20 skoduri joined #gluster
13:28 d0nn1e joined #gluster
13:31 unclemarc joined #gluster
13:32 kotreshhr left #gluster
13:35 nh2 hi, the Fedora vagrant box linked to in the repo for running tests doesn't seem to have `bc` installed which is used by the tests
13:38 msvbhat joined #gluster
13:41 R0ok_ joined #gluster
13:42 BatS9 joined #gluster
13:50 apandey joined #gluster
14:01 Seth_Karlo joined #gluster
14:03 buvanesh_kumar joined #gluster
14:05 Seth_Kar_ joined #gluster
14:09 ahino joined #gluster
14:12 fyxim joined #gluster
14:12 baber joined #gluster
14:12 vbellur joined #gluster
14:28 jiffin joined #gluster
14:31 moneylotion joined #gluster
14:33 aravindavk joined #gluster
14:33 ankitr joined #gluster
14:35 Clone joined #gluster
14:36 Humble joined #gluster
14:41 skylar joined #gluster
14:46 nbalacha_ joined #gluster
14:51 susant left #gluster
14:54 bitonic_ joined #gluster
15:01 Humble joined #gluster
15:03 sanoj joined #gluster
15:04 plarsen joined #gluster
15:05 msvbhat joined #gluster
15:08 rafi1 joined #gluster
15:10 Wizek_ joined #gluster
15:11 unclemarc joined #gluster
15:12 victori joined #gluster
15:16 buvanesh_kumar joined #gluster
15:16 riyas joined #gluster
15:22 k4n0 joined #gluster
15:24 skoduri joined #gluster
15:30 derjohn_mob joined #gluster
15:32 sona joined #gluster
15:32 buvanesh_kumar joined #gluster
15:34 baber joined #gluster
15:34 Seth_Karlo joined #gluster
15:38 Karan joined #gluster
15:39 vbellur joined #gluster
15:39 farhorizon joined #gluster
15:41 TvL2386 joined #gluster
15:41 arpu joined #gluster
15:53 kramdoss_ joined #gluster
15:55 buvanesh_kumar joined #gluster
16:02 Seth_Kar_ joined #gluster
16:09 Abazigal joined #gluster
16:13 jiffin joined #gluster
16:13 wushudoin joined #gluster
16:22 Shu6h3ndu joined #gluster
16:28 serg_k joined #gluster
16:31 poxbat joined #gluster
16:33 phileas joined #gluster
16:34 vbellur joined #gluster
16:48 jdossey joined #gluster
16:53 jkroon joined #gluster
16:57 oajs joined #gluster
16:59 farhorizon joined #gluster
17:03 farhorizon joined #gluster
17:06 zakharovvi[m] joined #gluster
17:09 csuka joined #gluster
17:09 csuka Hi!
17:09 glusterbot csuka: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
17:11 csuka I am running 3.8.5 on CentOS 7.3.1611. I cannot re-mount the brick. The log file states: https://paste.ubuntu.com/24085373/
17:11 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
17:12 level7 joined #gluster
17:13 serg_k joined #gluster
17:17 buvanesh_kumar joined #gluster
17:21 baber joined #gluster
17:22 ankitr joined #gluster
17:26 vbellur joined #gluster
17:29 Vapez joined #gluster
17:29 Vapez joined #gluster
17:32 ivan_rossi left #gluster
17:34 sona joined #gluster
17:38 StormTide joined #gluster
17:46 Akram joined #gluster
17:47 unclemarc joined #gluster
17:49 StormTide Anyone know if there are official packages for gluster 3.9+ client for ubuntu 14.04? Am planning a new deployment and while we can run 16.04 on the gluster servers, some clients will be 14.04... I'm hoping to use the new cache feature to improve small file performance. The application is to hold original images for a cloud/scaling web server group (files not directly served, theres a thumbnail cache in front)... Total size is about
17:49 StormTide 500gb of small-ish read-primary files..
17:49 sbulage joined #gluster
17:51 zakharovvi[m] joined #gluster
17:52 buvanesh_kumar joined #gluster
17:53 atinm joined #gluster
18:05 JoeJulian @ppa
18:05 glusterbot JoeJulian: The GlusterFS Community packages for Ubuntu are available here: 3.6: http://goo.gl/XyYImN, 3.7: https://goo.gl/aAJEN5, 3.8: https://goo.gl/gURh2q
18:05 JoeJulian StormTide: ^
18:05 JoeJulian Oh, heh, guess we need to update that.
18:06 StormTide JoeJulian: yah. saw that. no 3.9+
18:06 JoeJulian We build it.
18:06 JoeJulian https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.9
18:06 glusterbot Title: glusterfs-3.9 : “Gluster” team (at launchpad.net)
18:06 JoeJulian @forget ppa
18:06 glusterbot JoeJulian: The operation succeeded.
18:06 StormTide JoeJulian: for trusty.
18:09 JoeJulian huh... I'm not sure why trusty's not built. I'll look in to it.
18:09 Humble ndevos, centos-release-gluster310 is this available ?
18:09 StormTide JoeJulian: thanks. We only need the client for older machines to make use of the small file improvements....
18:09 JoeJulian btw, 3.9 is eol. 3.10 is the current bleeding edge release. 3.8 is supposed to be the long-term release.
18:10 StormTide yah, happy to use 3.10 ....
18:10 JoeJulian 3.10 also isn't built...
18:10 StormTide nope ;)
18:11 StormTide i mean i get it that 14.04 is kinda dated... but its still within our app server setup as its lts and php5.6/7 transition in xenial... yada yada....
18:13 JoeJulian @learn ppa as The GlusterFS Community packages for Ubuntu are available at: 3.8: https://goo.gl/MOtQs9, 3.10: https://goo.gl/15BCcp
18:13 glusterbot JoeJulian: The operation succeeded.
18:15 Karan joined #gluster
18:17 cyberbootje1 well i give up, zfs on linux + gluster is just awful
18:18 JoeJulian some people seem to love it. My experience sounds like it was closer to yours.
18:19 cyberbootje1 JoeJulian: really? i'm using it for KVM
18:19 JoeJulian That and media were my two use cases I tried.
18:20 cyberbootje1 i'm struggling with it for over a week now
18:21 JoeJulian I now use btrfs for my kvm bricks, and xfs for my media (since it's drop and forget).
18:21 cyberbootje1 JoeJulian, you mind telling me what the case was? and maybe details like how it had been setup ?
18:22 JoeJulian I just remember it being ok for the first 3 or 4 hours, then performance dropped off precipitously.
18:23 cyberbootje1 i'm not even able to install a normal linux distro within a VM, when using default KVM cache=none i'm not even able to start the vm...
18:26 major I distinctly remembering looking at bd code the other day and thinking it was pretty lvm specific...
18:27 cyberbootje1 well even with cache=throughput i'm not able to run a vm normally
18:28 cyberbootje1 at the end the grub boot loader doesn't want to get installed
18:28 major yah .. I dunno .. it was just one of the items I added to my TODO list for touching the code for added btrfs support
18:30 cyberbootje1 just to be sure
18:30 cyberbootje1 i can install glusterfs with only one server and one brick right?
18:31 cyberbootje1 that's the way i'm testing, so is it worth trying it with replica 2 and 2 servers?
18:31 Humble kkeithley, Hi..........
18:31 Humble a quick question ,, "centos-release-gluster310" is this available ?
18:48 amye Humble, isn't that a nixpanic question?
18:49 Humble amye, yes :)
18:50 Humble btw I follow gluster dev chat and kind of have the answer
18:50 Humble I have to wait for centos storage sig rpms for building docker image.
18:56 msvbhat joined #gluster
19:06 MidlandTroy71 joined #gluster
19:06 ahino joined #gluster
19:16 cholcombe joined #gluster
19:22 k4n0 joined #gluster
19:23 rastar joined #gluster
19:26 baber joined #gluster
19:35 sona joined #gluster
19:35 susant joined #gluster
19:49 rastar joined #gluster
19:50 PTech joined #gluster
19:53 PTech Is this a good place to ask for help with Gluster?
19:53 JoeJulian Yep
19:54 ira joined #gluster
19:54 PTech Great, thank you. I'm standing up a test node before we dive further into gluster and i'm having some trouble getting NFS to work.
19:55 PTech I have three bricks that are distributed, and the volume is started and they're all online. However I cannot connect via NFS. FUSE works fine.
19:56 PTech Checking the NFS log, after the startup entry it says :  0-glusterfs: connection to ::1:24007 failed (Connection refused)
19:57 PTech followed by :  0-glusterfsd-mgmt: failed to connect with remote-host: localhost (Transport endpoint is not connected)
19:57 PTech later down it says [2017-02-28 00:12:33.616451] E [socket.c:793:__socket_server_bind] 0-socket.nfs-server: binding to  failed: Address already in use [2017-02-28 00:12:33.616482] E [socket.c:796:__socket_server_bind] 0-socket.nfs-server: Port is already in use
19:58 PTech I am unclear of what is preventing it from starting NFS correctly.
20:06 JoeJulian PTech: sounds like your kernel nfs has the port already.
20:08 JoeJulian PTech: The gluster path for nfs is now to use ganesha-nfs. If you must use the integrated nfs, you'll now have to un-disable it (kind of backwards, I know): gluster volume set $vol nfs.disable no
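[Editor's note: JoeJulian's diagnosis — the kernel NFS server holding the port — can be checked and fixed roughly like this. A hedged sketch; `$vol` is a placeholder, the kernel NFS service name varies by distro (`nfs-server` on RHEL/CentOS, `nfs-kernel-server` on Debian/Ubuntu):]

```shell
# Is the kernel nfsd already registered with the portmapper / holding
# the NFS port?
rpcinfo -p | grep nfs
ss -tlnp | grep 2049
# Stop and disable the kernel NFS server so it releases the port:
systemctl stop nfs-server
systemctl disable nfs-server
# Re-enable Gluster's integrated NFS for the volume and restart glusterd
# so its NFS translator can bind:
gluster volume set $vol nfs.disable no
systemctl restart glusterd
```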
20:09 baber joined #gluster
20:14 PTech JoeJulian: I found that out yesterday. I have these options currently set on that Volume. nfs.register-with-portmap: on nfs.disable: off performance.readdir-ahead: on transport.address-family: inet nfs.acl: off
20:14 PTech I'm running down the kernel NFS now
20:19 PTech @JoeJulian Yeah, that looks like it was it. Thank you sooooo much!
20:21 JoeJulian You're welcome. :)
20:25 Vapez joined #gluster
20:25 Vapez joined #gluster
20:41 baber joined #gluster
20:43 Seth_Karlo joined #gluster
20:43 armyriad joined #gluster
20:43 jbrooks joined #gluster
20:53 cyberbootje2 joined #gluster
20:56 Seth_Kar_ joined #gluster
21:10 level7 joined #gluster
21:15 Iouns joined #gluster
21:35 jdossey joined #gluster
21:36 Seth_Karlo joined #gluster
21:40 arpu joined #gluster
21:48 ankitr joined #gluster
21:51 buvanesh_kumar joined #gluster
21:52 skylar joined #gluster
22:12 panina joined #gluster
22:15 derjohn_mob joined #gluster
22:31 Seth_Karlo joined #gluster
22:33 StormTide left #gluster
22:41 rastar joined #gluster
22:46 cyberbootje2 JoeJulian, you there?
22:50 cyberbootje2 or maybe anyone else... i'm getting the following in the brick logs:
22:50 cyberbootje2 [2017-02-28 22:48:34.538402] E [MSGID: 113107] [posix.c:1051:posix_seek] 0-pool-posix: seek failed on fd 24 length 3221946368 [No such device or address]
22:51 cyberbootje2 i'm using gluster with zfs
22:54 JoeJulian Sounds like a brick filesystem error. Check dmesg?
22:54 JoeJulian maybe
22:54 cyberbootje2 already did...
22:54 cyberbootje2 last entry is the fuse init
22:55 cyberbootje2 it's a fresh install
22:56 cyberbootje2 only thing is, i updated gluster from a deb package on some proxmox machines
22:56 cyberbootje2 they already had gluster on them
22:56 JoeJulian Is this a sharded volume?
22:56 cyberbootje2 yes
22:56 cyberbootje2 oh wait
22:57 cyberbootje2 i read shared... no it's replicated
22:57 cyberbootje2 replica 2 on 2 servers
23:00 Klas joined #gluster
23:03 cyberbootje2 ok, just for the sake of testing i created a new pool that's in the root dir and not on zfs
23:03 cyberbootje2 same posix error
23:06 cyberbootje2 JoeJulian, i'm using version 3.8.9 should i use another one?
23:08 geoff1 joined #gluster
23:09 geoff1 left #gluster
23:10 rastar joined #gluster
23:11 ankitr joined #gluster
23:11 da4975 joined #gluster
23:12 jbrooks joined #gluster
23:22 vbellur joined #gluster
23:23 cyberbootje2 and i checked, shard is off
23:23 JoeJulian Digging in the code. No clue so far.
23:24 cyberbootje2 just to point out, on zfs it will give that error also when i'm formatting a VM disk and then the formatting will hang
23:25 cyberbootje2 JoeJulian, i just set shard to on and no error so far
23:25 JoeJulian There's nothing I can do to diagnose zfs.
23:26 JoeJulian I don't know anything about it and have no interest in reading their source.
23:26 cyberbootje2 ok, let's keep it to normal disks
23:26 major cyberbootje2, you are using a single file for the VM's disk right?
23:26 cyberbootje2 for now same errors
23:26 cyberbootje2 qow2
23:26 cyberbootje2 qcow2*
23:26 major basically treating a single file as the VM's block device?
23:27 JoeJulian So you're using a copy on write virtual disk on top of a copy on write filesystem? Seems redundant.
23:27 cyberbootje2 major, i suppose.... it's 1 file, a qcow2 image
23:27 major I wouldn't use the qcow2 image myself..
23:27 major seems like it would add significant performance problems
23:28 major that and .. its qcow2
23:28 JoeJulian Me neither. raw has given me better performance.
23:28 cyberbootje2 JoeJulian, yes i know but i need snapshotting functions with ram  :-) would love to use it as a raw disk...
23:28 major I would use raw I think
23:28 JoeJulian I thought you wanted zfs for the snapshotting.
23:28 cyberbootje2 for the speed :-)
23:29 cyberbootje2 and the error correcting
23:29 major well .. don't put qcow2 on it then :(
23:29 JoeJulian Hey, I heard you like snapshots so I took a snapshot of your snapshot.
23:29 major http://www.linux-kvm.org/page/Qcow2
23:29 glusterbot Title: Qcow2 - KVM (at www.linux-kvm.org)
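[Editor's note: the advice above — don't stack qcow2's copy-on-write on top of a CoW filesystem — translates to creating raw images and letting the filesystem do the snapshotting. A hedged sketch; the image path is a placeholder, and `preallocation` values are per qemu-img's documented raw-format options:]

```shell
# Create a raw image instead of qcow2; snapshots come from the
# filesystem (zfs/btrfs), not from the image format.
qemu-img create -f raw /bricks/vms/vm1.img 40G
# If sparse-file probing (SEEK_DATA/SEEK_HOLE) is the suspected problem,
# a fully preallocated raw image sidesteps holes entirely:
qemu-img create -f raw -o preallocation=full /bricks/vms/vm1.img 40G
```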
23:30 cyberbootje2 i could live with not having qcow2 but it's the same with raw
23:30 JoeJulian I've never heard of someone using a cow filesystem for performance.
23:30 JoeJulian but anyway...
23:30 cyberbootje2 JoeJulian, it's the read cache and SLOG
23:31 major I suspect that zfs and btrfs would be prone to fragmentation over time when used for this sort of thing..
23:31 JoeJulian ENXIO  whence is SEEK_DATA or SEEK_HOLE, and the file offset is beyond the end of the file.
23:31 major still not fully groking the error that is happening though
23:32 major sparse file?
23:32 major first 16TB of drives just arrived at the house down south ..
23:32 major and the new 10G switch ..
23:32 JoeJulian http://man7.org/linux/man-pages/man2/lseek.2.html doesn't list zfs as supporting SEEK_DATA or SEEK_HOLE, but it may just be because zfs is not in linux.
23:32 glusterbot Title: lseek(2) - Linux manual page (at man7.org)
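[Editor's note: the ENXIO that posix_seek reports is exactly the lseek(2) case quoted above — SEEK_DATA/SEEK_HOLE beyond the end of the data. A quick way to see the sparse-file behavior qemu is probing, using only coreutils (file path is a placeholder):]

```shell
# Create a wholly-sparse 1 GiB file: the apparent size is 1 GiB but no
# data blocks are allocated, so from offset 0 the file is one big hole.
dd if=/dev/zero of=/tmp/sparse.img bs=1 count=0 seek=1G 2>/dev/null
# Compare apparent size against allocated 512-byte blocks:
stat -c 'size=%s blocks=%b' /tmp/sparse.img
# On a sparse-capable filesystem this prints size=1073741824 blocks=0
```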
23:32 major and I am 100 miles away :(
23:33 major boooo
23:33 cyberbootje2 hmm
23:33 JoeJulian At least you don't have to deal with a propane truck today.
23:34 major in serious
23:34 major zfs added support for SEEK_HOLE in 2005 it looks like
23:35 major hmmm.. https://github.com/zfsonlinux/zfs/issues/4306
23:35 glusterbot Title: lseek(..., ..., SEEK_HOLE) extremely slow under heavy I/O load. · Issue #4306 · zfsonlinux/zfs · GitHub (at github.com)
23:36 cyberbootje2 ugh
23:37 major the last comment is telling
23:37 major cyberbootje2, you may have found a related problem
23:37 cyberbootje2 yeah i see
23:37 major https://www.mail-archive.com/qemu-devel@nongnu.org/msg427170.html
23:37 glusterbot Title: [Qemu-devel] [qcow2] how to avoid qemu doing lseek(SEEK_DATA/SEEK_HOLE)? (at www.mail-archive.com)
23:38 major yah .. I would certainly gut qcow/qcow2 from the picture ..
23:38 cyberbootje2 going to try now
23:38 major sparse cow block device ontop of a cow filesystem
23:39 major and add lseek() ontop of a sparse file to boot
23:39 major bleh
23:41 cyberbootje2 for now also no errors
23:42 cyberbootje2 after i set shard to on
23:43 major JoeJulian, is there any sort of docs related to tuning gluster ontop of the various filesystems (btr/zfs/xfs/ext[234])?
23:43 cyberbootje2 what is the most stable recommended gluster version at the moment?
23:47 JoeJulian xfs is highly tested by Red Hat and is the required filesystem for certified installations.
23:47 JoeJulian And no, major, there are no such docs.
23:48 JoeJulian I argued years ago that the defaults should be changed to the least efficient so people would feel like they've discovered some magic tuning.
23:50 pioto joined #gluster
23:54 major heh
23:54 major I was more thinking of the tunings for the underlying filesystems
23:56 major in some cases it might be useful to simply be redundant .. E.g. recommending specific options when creating btrfs filesystems on Ubuntu 14.04 in order to make certain the created filesystem matches the later defaults adopted in newer versions of btrfs
23:56 JoeJulian There was a theory that making 1k inodes would increase performance. Actual testing didn't back that up.
23:59 major well, I was thinking of stuff like the btrfs leafsize (default is now 16K, default used to be 4K), extref's (default only since 3.12, supported since 3.7), skinny-metadata (default since 3.18, supported since 3.10)
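[Editor's note: major's list of changed btrfs defaults can be pinned explicitly at mkfs time, so an older btrfs-progs produces the same layout a modern one would. A hedged sketch; `/dev/sdb` is a placeholder, and flag availability depends on the btrfs-progs version:]

```shell
# Spell out the newer btrfs defaults explicitly: 16K node/leaf size
# (old default was 4K) and the extref + skinny-metadata features.
mkfs.btrfs -n 16384 -O extref,skinny-metadata /dev/sdb
```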
