
IRC log for #gluster, 2017-07-17


All times shown according to UTC.

Time Nick Message
00:01 Wizek_ joined #gluster
00:09 Teraii joined #gluster
00:43 Alghost joined #gluster
02:00 kpease joined #gluster
02:19 auzty joined #gluster
02:35 Jacob843 joined #gluster
02:55 AshishS joined #gluster
03:04 prasanth joined #gluster
03:15 Shu6h3ndu joined #gluster
03:20 kramdoss_ joined #gluster
03:21 jiffin joined #gluster
03:39 riyas joined #gluster
03:48 shdeng joined #gluster
03:49 Gambit15 joined #gluster
03:49 Gambit15_ joined #gluster
03:49 ppai joined #gluster
03:50 atinm joined #gluster
03:51 psony joined #gluster
03:55 nbalacha joined #gluster
04:01 AshishS joined #gluster
04:12 dominicpg joined #gluster
04:15 AshishS_ joined #gluster
04:16 itisravi joined #gluster
04:20 Jacob843 joined #gluster
04:22 Saravanakmr joined #gluster
04:38 AshishS__ joined #gluster
04:44 Saravanakmr joined #gluster
04:45 gyadav joined #gluster
04:54 buvanesh_kumar joined #gluster
04:56 amarts joined #gluster
05:12 kdhananjay joined #gluster
05:14 hgowtham joined #gluster
05:16 ankitr joined #gluster
05:20 shdeng joined #gluster
05:27 karthik_us joined #gluster
05:33 prasanth joined #gluster
05:34 poornima joined #gluster
05:43 Prasad joined #gluster
05:44 skumar joined #gluster
05:45 prasanth joined #gluster
05:46 apandey joined #gluster
05:56 Karan joined #gluster
05:57 kramdoss_ joined #gluster
06:12 Saravanakmr joined #gluster
06:18 sona joined #gluster
06:29 ndarshan joined #gluster
06:30 atinm_ joined #gluster
06:34 ndarshan joined #gluster
06:47 mbukatov joined #gluster
06:47 Shu6h3ndu_ joined #gluster
06:49 prasanth joined #gluster
06:52 kotreshhr joined #gluster
06:52 ndarshan joined #gluster
06:57 Wizek_ joined #gluster
07:00 amarts joined #gluster
07:07 msvbhat joined #gluster
07:20 ivan_rossi joined #gluster
07:22 Karan joined #gluster
07:30 mbukatov joined #gluster
07:37 mbukatov joined #gluster
07:39 kramdoss_ joined #gluster
07:40 fsimonce joined #gluster
07:47 atinm_ joined #gluster
07:48 AshishS joined #gluster
07:50 rastar joined #gluster
07:50 AshishS_ joined #gluster
07:53 AshishS joined #gluster
08:01 ashiq joined #gluster
08:02 jkroon joined #gluster
08:09 Jacob843 joined #gluster
08:10 Saravanakmr joined #gluster
08:27 fcami joined #gluster
08:35 Saravanakmr joined #gluster
08:46 nbalacha joined #gluster
08:50 itisravi joined #gluster
08:54 awoelfel joined #gluster
09:00 p7mo joined #gluster
09:06 Shu6h3ndu__ joined #gluster
09:08 msvbhat joined #gluster
09:13 Jacob843 joined #gluster
09:17 ashiq joined #gluster
09:21 nbalacha joined #gluster
09:28 ashiq joined #gluster
09:31 kdhananjay joined #gluster
09:36 amarts joined #gluster
09:36 msvbhat joined #gluster
09:56 p7mo joined #gluster
10:01 mbukatov joined #gluster
10:10 jsierles joined #gluster
10:10 jsierles hey
10:10 jsierles is it possible to enable aggressive caching for a gluster volume? My goal is to mount a volume over the internet read-only and have it lazily, but permanently, cache things.
10:22 jsierles i saw this article but it's not clear if this functionality is available. it's not mentioned in the docs: http://blog.gluster.org/author/dlambrig/
10:29 kramdoss_ joined #gluster
10:33 ndevos jsierles: the best approach would be to use fs-cache when mounting the volume over nfs or Samba
10:34 ndevos jsierles: it is not *that* efficient yet, there are some enhancements we need to make in Gluster to make it better (https://bugzilla.redhat.com/show_bug.cgi?id=1318493)
10:34 glusterbot Bug 1318493: unspecified, unspecified, ---, rkavunga, ASSIGNED , Introduce ctime-xlator to return correct (client-side set) ctime
10:35 jsierles ndevos: ok. what I would actually be mounting is gigabytes of software, lots of small files, shared libraries etc. so wondering if that will slow things down because of excessive LOOKUP activity
10:36 ndevos jsierles: fs-cache is not available in combination with fuse mounts, that is something that may come in the future (either as enhancement to fuse, or a fs-cache xlator)
10:36 jsierles what is fs-cache?
10:36 ndevos fs-cache is a Linux mechanism to cache data from network filesystems on a local disk
10:37 ndevos if you are worried about lookups, md-cache with a large timeout might be sufficient
10:37 jsierles ok, didn't know about that one.
10:37 jsierles ndevos: can the timeout be higher than 5 minutes?
10:37 jsierles ideally it would never timeout, since the things written to this volume never change
10:38 ndevos more than 5 minutes should not be a problem for md-cache
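A minimal sketch of the md-cache tuning being discussed, assuming a volume named gv0; the option names are from the md-cache/upcall documentation, and 600 seconds is, as far as I recall, the ceiling for performance.md-cache-timeout (cache invalidation keeps entries usable beyond that):

    # serve stat/xattr metadata from the client-side md-cache for up to 10 minutes
    gluster volume set gv0 performance.stat-prefetch on
    gluster volume set gv0 performance.md-cache-timeout 600
    # let the servers invalidate cached entries when another client changes them
    gluster volume set gv0 features.cache-invalidation on
    gluster volume set gv0 features.cache-invalidation-timeout 600
    gluster volume set gv0 performance.cache-invalidation on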
10:39 jsierles alright. would I be able to use fs-cache on coreos you think?
10:40 ndevos you need the fscache kernel module and cachefilesd daemon on the OS for fs-cache, I dont know if coreos provides that
10:41 jsierles ok, maybe i can get cachefilesd working on docker
10:41 jsierles looks like coreos does support cachefs
10:41 jsierles so it will only work for NFS volumes?
10:43 buvanesh_kumar_ joined #gluster
11:01 ndevos either NFS or Samba, not for FUSE mounts
11:02 ndevos you need to mount with the "fsc" option to enable it (and configure cachefilesd)
11:02 ndevos cachefs is only useful in combination with fs-cache, I think, so if there is one, the other should be there too
11:04 ndevos see https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/filesystems/caching/cachefiles.txt for some more details
11:04 glusterbot Title: cachefiles.txt\caching\filesystems\Documentation - kernel/git/torvalds/linux.git - Linux kernel source tree (at git.kernel.org)
11:05 ndevos and https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/filesystems/caching/fscache.txt if you want more technical stuff
11:05 glusterbot Title: fscache.txt\caching\filesystems\Documentation - kernel/git/torvalds/linux.git - Linux kernel source tree (at git.kernel.org)
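A minimal sketch of the fs-cache setup ndevos describes, assuming the client runs cachefilesd and mounts the volume over Gluster's NFSv3 server (hostname, volume name and paths are placeholders):

    # local disk cache for network filesystems; the cache directory is set in
    # /etc/cachefilesd.conf (default /var/cache/fscache)
    systemctl enable --now cachefilesd
    # the "fsc" mount option routes NFS reads through fs-cache
    mount -t nfs -o vers=3,fsc server1:/gv0 /mnt/gv0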
11:06 jsierles ndevos: thanks. will take a look
11:08 mbukatov joined #gluster
11:09 Jacob843 joined #gluster
11:12 jsierles i need at least two nodes to setup gluster?
11:16 atinmu joined #gluster
11:17 gyadav_ joined #gluster
11:19 gyadav__ joined #gluster
11:20 atinm_ joined #gluster
11:22 WebertRLZ joined #gluster
11:23 Jacob843 joined #gluster
11:25 atinmu joined #gluster
11:25 baber joined #gluster
11:27 atinm_ joined #gluster
11:32 gyadav_ joined #gluster
11:37 amarts joined #gluster
11:38 aravindavk joined #gluster
11:44 Jacob843 joined #gluster
12:00 amarts joined #gluster
12:11 kramdoss_ joined #gluster
12:11 kotreshhr left #gluster
12:12 dominicpg joined #gluster
12:12 msvbhat joined #gluster
12:27 nbalacha joined #gluster
12:38 jstrunk joined #gluster
12:58 samppah_ is Lindsay Mathieson here by any chance? :)
13:01 fsimonce joined #gluster
13:10 dubs joined #gluster
13:22 skylar joined #gluster
13:24 plarsen joined #gluster
13:42 shyam joined #gluster
13:48 ksandha_ joined #gluster
14:01 rwheeler joined #gluster
14:03 ksandha__ joined #gluster
14:03 amarts joined #gluster
14:08 Jacob843 joined #gluster
14:17 DV__ joined #gluster
14:18 mbukatov joined #gluster
14:37 skylar joined #gluster
14:38 kdhananjay joined #gluster
14:40 ankitr joined #gluster
14:42 gyadav_ joined #gluster
14:49 ten10 joined #gluster
14:50 ten10 so I ran into this a while ago but figured I'd revisit it, but it doesn't appear to be fixed. I set up a gluster vol and implemented iscsi using user:glfs..
14:50 ten10 when I try to add the LUN as a volume in vmware it fails
14:52 ten10 2017-07-17T14:46:55.716Z cpu1:33937 opID=66aa61a3)LVM: 7611: LVMProbeDevice failed on (4098015552, naa.6001405ac31ddd77182407c9f64b199d:1): Device does not contain a logical volume
14:57 farhorizon joined #gluster
15:00 DV joined #gluster
15:07 kpease joined #gluster
15:15 atinm_ joined #gluster
15:21 major joined #gluster
15:42 kramdoss_ joined #gluster
15:42 cholcombe joined #gluster
15:43 amarts joined #gluster
15:50 pdrakeweb joined #gluster
15:57 nirokato joined #gluster
15:59 gyadav_ joined #gluster
16:03 hasi joined #gluster
16:07 hasi Hi guys, do you know what happens to GlusterFS if a brick runs out of space in distributed mode while the other bricks still have space?
16:19 baber joined #gluster
16:19 pkalever joined #gluster
16:22 major is 'start' no longer valid w/ replace-brick?
16:23 rafi1 joined #gluster
16:30 pkalever left #gluster
16:30 vbellur joined #gluster
16:31 vbellur joined #gluster
16:44 AshishS joined #gluster
16:45 decayofmind joined #gluster
16:46 pocketprotector joined #gluster
16:49 vbellur joined #gluster
16:50 msvbhat joined #gluster
17:03 JoeJulian major: it is not. :(
17:05 JoeJulian hasi: If there are still inodes available, creation of new files will create "dht pointer" files with metadata pointing at a different brick where there is room. Growing a file on that brick will, however, fail with ENOSPC.
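As a hedged sketch of the knobs related to this (the volume name is a placeholder): cluster.min-free-disk steers new file creation away from nearly-full bricks, and a rebalance redistributes existing data after bricks are added.

    # stop placing new files on bricks with less than 10% free space
    gluster volume set gv0 cluster.min-free-disk 10%
    # after adding bricks, migrate existing data onto them
    gluster volume rebalance gv0 start
    gluster volume rebalance gv0 status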
17:07 ten10 joined #gluster
17:07 ten10 can anyone tell me where to set global options for glusterd?
17:07 baber joined #gluster
17:07 riyas joined #gluster
17:08 major no status either
17:08 major just 'commit force' ?
17:08 major feels like a risky date
17:08 major blind marriage more like
17:08 JoeJulian major: correct - and I completely agree. My arguments against the process went unheeded.
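For reference, the surviving form of the command looks like this (servers and brick paths are placeholders); commit force swaps the brick in one step, and on a replicated volume self-heal then rebuilds the new brick from the remaining copies:

    gluster volume replace-brick gv0 oldserver:/bricks/b1 newserver:/bricks/b1 commit force
    # watch self-heal repopulate the replacement brick
    gluster volume heal gv0 info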
17:09 major bleh
17:09 vbellur joined #gluster
17:09 major soo .. dispersed volumes ..
17:10 major I take it there is no "migration" path to getting there .. just "create new volume and copy it over"
17:10 JoeJulian ten10: There are no global defaults (I assume you mean for creating new volumes). If your use case is heketi - default volume settings have been added fairly recently.
17:14 ten10 well, I was playing around with gluster using a replicated volume and performance is pretty poor
17:14 JoeJulian major: If you add the disperse option, all newly created files will use the option. Existing files will not be changed.
17:14 JoeJulian ten10: I have not found that to be true for my use cases.
17:14 Intensity joined #gluster
17:14 major is there a way to force it to re-disperse?
17:14 major forced heal maybe?
17:14 JoeJulian copy/rename
17:14 major hmm
17:14 JoeJulian That's all I've found.
17:14 ten10 9126805504 bytes (9.1 GB, 8.5 GiB) copied, 125.64 s, 72.6 MB/s ... this would be on SSDs with local 10G network
17:14 major so you can flag a replica to be dispersed?
17:14 major do arbiters still play a role w/ dispersed?
17:14 major man .. all sorts of questions all of a sudden
17:14 JoeJulian "a" replica? Among a distributed set of replicas? No.
17:14 JoeJulian arbiters, yes.
17:14 JoeJulian Wait.. disperse... am I thinking of something else...
17:16 bwerthmann joined #gluster
17:16 major think you are thining of sharding
17:16 major thinking even
17:16 JoeJulian Crap, I am.
17:17 major heh
17:17 major its okay .. I think its still monday
17:17 hasi Thanks @JoeJulian
17:18 ankitr joined #gluster
17:18 JoeJulian ten10: well then... I guess I would look at my network if it was me. I would suspect latency more than anything. Perhaps you're getting traffic shaped?
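A quick way to sanity-check the latency/shaping theory between the two peers (the peer hostname is a placeholder and iperf3 has to be installed on both ends):

    # round-trip latency between the gluster peers
    ping -c 100 media2-be
    # raw TCP throughput; run "iperf3 -s" on the peer first
    iperf3 -c media2-be -t 30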
17:18 major I am thinking of playing with a disperse 4, replica 2 volume to see how it behaves
17:19 georgeangel[m] joined #gluster
17:19 ten10 this is in a homelab, 2 nodes.. i'll play around with it.. Glusterfs 3.10 on fedora 26
17:19 major erm .. disperse 4 redundancy 2..
17:19 major or something
17:20 ten10 1 of the nodes only has a 2 core celeron CPU.. load average is well over 3 :)
17:20 ten10 was hoping maybe I could increase caching somehow or something
17:20 major though .. there is a note about disperse 4/2 being no better than a replica-2...
17:20 major bleh
17:20 major boring
17:22 major is sharding even still supported?
17:23 vbellur major: sharding is supported for block accesses .. not for general purpose use cases
17:23 major I feel like I am being denied all the fun toys
17:25 major ...
17:25 major when was the lookup-optimize added?
17:25 major I feel like I missed something
17:26 major or was it removed?
17:26 ankitr joined #gluster
17:27 vbellur ten10: could you share more details about your setup where you encountered problems with esxi?
17:27 ten10 vbellur, absolutely.. is there any specific you would want to see first?
17:28 vbellur major: lookup-optimize came about in 3.7 or 3.8 IIRC
17:29 major man .. why did I just locate the docs on it :(
17:29 major I thought I read all this stuff .. twice
17:29 vbellur ten10: what version of tcmu-runner are you running?
17:29 ten10 tcmu-runner 1.1.3
17:30 ten10 when I add the volume on a single esx lab server it finds the volume but fails to finally add it
17:31 vbellur ten10: a higher version is better - something like 1.2.0 or higher would be better for testing
17:31 major do dispersed volumes gracefully handle distribution of the erasure encodings between bricks such that they don't land on 2 bricks of the same node? I.e. if I have 4 nodes w/ 2 bricks per node and do a disperse 8, redundancy 2, will the translator avoid using 2 bricks from 1 node?
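The create command for that layout would look roughly like this (hostnames and brick paths are placeholders). With exactly 8 bricks there is a single disperse set spanning all of them, so each node holds 2 of the 8 fragments regardless of ordering; brick order only comes into play for distributed-disperse, where each consecutive group of 8 bricks forms one set:

    # 4 nodes x 2 bricks, one disperse set of 8 (6 data + 2 redundancy);
    # the CLI may warn about multiple bricks of a set sharing a server
    gluster volume create dvol disperse 8 redundancy 2 \
        node1:/bricks/b1 node1:/bricks/b2 \
        node2:/bricks/b1 node2:/bricks/b2 \
        node3:/bricks/b1 node3:/bricks/b2 \
        node4:/bricks/b1 node4:/bricks/b2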
17:31 ten10 and then I tried to zero out the backing file and that's when I found that the actual volume seems a bit slow
17:32 shyam joined #gluster
17:32 vbellur ten10: do you have tcmu-runner running on the same nodes as gluster daemons?
17:33 ten10 yes
17:33 ten10 I only have 2 nodes for right now with testing
17:35 ten10 i'm just wondering if this spare server I have my SSDs in just isn't cutting it because of the CPU
17:35 ten10 load average: 3.38, 3.49, 2.67
17:36 vbellur ten10: do you notice any errors in tcmu log files or in dmesg?
17:37 Karan joined #gluster
17:38 MrAbaddon joined #gluster
17:39 ten10 not that I can see, nothing in /var/log/messages
17:42 vbellur ten10: would it be possible to describe your problem on gluster-users? tcmu/gluster-block devs are not online atm and will respond on that thread
17:44 major joined #gluster
17:44 ten10 what about general slowness first... anything I can check with that?
17:45 ten10 is there a specific CPU limitation I might be running into using a celeron?
17:46 ten10 if I write to the disks where the blocks are I get like 1.2 GB/s
17:46 ten10 5368709120 bytes (5.4 GB, 5.0 GiB) copied, 4.50709 s, 1.2 GB/s
17:49 ten10 specifying conv=fdatasync i see 5368709120 bytes (5.4 GB, 5.0 GiB) copied, 47.7788 s, 112 MB/s
17:49 vbellur ten10: are you writing through fuse?
17:49 ten10 I did a mount -t glusterfs media1-be:/gv0 /mnt
17:49 sona joined #gluster
17:50 vbellur and what's the block size being used with dd?
17:51 ten10 I guess I am: media1-be:gv0 on /gv0 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
17:51 ten10 I lied I did /gv0 not /mnt
17:51 ten10 currently using this: dd if=/dev/zero of=iscsi.disk.1 bs=512M count=200
17:51 ten10 for a 100g file
17:53 vbellur ten10: can you get volume profile info? that will help in determining where time is being spent
17:53 ten10 sure. what command should I run?
17:54 vbellur gluster volume profile <volname> start; run test; gluster volume profile <volname> info
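Spelled out for ten10's volume (gv0) as a sketch, with a dd run standing in for the workload; conv=fdatasync keeps the page cache from flattering the numbers, as in the earlier test:

    gluster volume profile gv0 start
    # run the workload being measured, e.g. a 10 GiB sequential write
    dd if=/dev/zero of=/gv0/iscsi.disk.1 bs=1M count=10240 conv=fdatasync
    # per-brick latency breakdown by FOP (LOOKUP, WRITE, FXATTROP, ...)
    gluster volume profile gv0 info
    gluster volume profile gv0 stop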
17:56 jsierles joined #gluster
17:56 ten10 ok cool, yeah i got the stats
17:56 ten10 1 thing that jumps out is the 1 brick is showing much higher for this stat:
17:56 ten10 40.08   43396.70 us      23.00 us  220613.00 us           1045    FXATTROP
17:58 ten10 i can paste the whole thing, what site do you guys use.. don't want to flood the channel even more
17:58 ten10 it looks like the server with the celeron has a ton of latency for some reason
17:59 ten10 I am using MTU 9000
18:00 tamalsaha[m] joined #gluster
18:00 siel joined #gluster
18:18 vbellur ten10: @paste
18:18 ten10 @paste
18:18 glusterbot ten10: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
18:18 vbellur ten10: you can use fpaste
18:20 ten10 https://pastebin.com/y5e1LWvH
18:20 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
18:21 ten10 https://paste.fedoraproject.org/paste/4FQ4y2DepPBtkAvSVCI8Zg
18:21 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
18:22 ten10 heh now it just bombs out
18:23 ten10 [root@media1 gv0]# dd if=/dev/zero of=iscsi.disk.1 bs=512M count=200
18:23 ten10 dd: failed to open 'iscsi.disk.1': Input/output error
18:36 amarts joined #gluster
18:38 _KaszpiR_ joined #gluster
18:42 rafi joined #gluster
18:44 jeffspeff joined #gluster
18:45 ten10 hmm well deleting the iscsi backing file and starting over seemed to improve speed
18:45 ten10 guess I'll have to post to the group if i want to get the user:glfs thing fixed
18:46 shyam joined #gluster
18:50 dubs joined #gluster
18:55 Jacob843 joined #gluster
18:58 ten10 this sounds like the issue I'm having:
18:58 ten10 https://lists.fedorahosted.org/archives/list/targetcli-fb-devel@lists.fedorahosted.org/thread/RUTXXS4DH2T57C2BPJXXYIIWC46UBH5Y/
18:58 glusterbot Title: how to add user backed storage object (file example) as a lun to iscsi target? - targetcli-fb-devel - Fedora Mailing-Lists (at lists.fedorahosted.org)
18:59 ten10 i can't even access the target on windows
18:59 ten10 says it's not ready
19:02 vbellur ten10: the file example backend is not ready with tcmu-runner
19:02 nirokato joined #gluster
19:03 ten10 ok, then I'm not sure why I can't get it to work
19:03 om2 joined #gluster
19:07 nirokato joined #gluster
19:15 nirokato joined #gluster
19:20 nirokato joined #gluster
19:23 sona joined #gluster
19:39 bowhunter joined #gluster
19:44 ivan_rossi left #gluster
19:51 jbrooks joined #gluster
19:56 msvbhat joined #gluster
20:02 Acinonyx joined #gluster
20:02 Urania joined #gluster
20:03 fcami joined #gluster
20:27 baber joined #gluster
20:57 Acinonyx joined #gluster
21:08 msvbhat joined #gluster
21:13 vbellur joined #gluster
21:14 vbellur joined #gluster
21:16 vbellur joined #gluster
21:19 vbellur joined #gluster
21:39 major joined #gluster
21:43 Peppard joined #gluster
21:44 vbellur joined #gluster
21:47 edong23 joined #gluster
22:00 farhorizon joined #gluster
22:49 gospod3 joined #gluster
22:56 farhorizon joined #gluster
22:57 pl3bs joined #gluster
22:58 pl3bs hi. I am testing gluster on CentOS 7.3. Is it best to keep glusterd under systemd or should I use a pacemaker resource agent?
22:59 pl3bs If the latter, does the 5 year old commit from glusterfs/gluster/extras/ocf still work with 3.11?
23:23 cloph_away joined #gluster
23:32 vbellur joined #gluster
23:45 Alghost joined #gluster
