
IRC log for #gluster, 2017-10-18


All times shown according to UTC.

Time Nick Message
00:03 baber joined #gluster
00:20 plarsen joined #gluster
00:32 _KaszpiR_ joined #gluster
01:01 baber joined #gluster
01:08 Wizek__ joined #gluster
01:54 shdeng joined #gluster
01:56 ilbot3 joined #gluster
01:56 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:58 blu_ joined #gluster
03:23 kramdoss_ joined #gluster
03:31 DoubleJ joined #gluster
03:31 DoubleJ joined #gluster
03:33 weller joined #gluster
03:33 nigelb joined #gluster
03:44 Humble joined #gluster
03:52 Shu6h3ndu joined #gluster
04:04 fiyawerx_ joined #gluster
04:09 shyu joined #gluster
04:10 Chinorro joined #gluster
04:16 itisravi joined #gluster
04:18 atinm joined #gluster
04:21 susant joined #gluster
04:34 skumar joined #gluster
04:37 nbalacha joined #gluster
04:49 karthik_us joined #gluster
04:55 map1541 joined #gluster
04:56 sanoj joined #gluster
04:57 mattmcc joined #gluster
05:12 poornima_ joined #gluster
05:18 Prasad joined #gluster
05:25 kdhananjay joined #gluster
05:26 susant joined #gluster
05:30 xavih joined #gluster
05:31 ppai joined #gluster
05:33 susant joined #gluster
05:34 ndarshan joined #gluster
05:37 skumar joined #gluster
05:51 rouven joined #gluster
05:54 poornima_ joined #gluster
06:03 ndarshan joined #gluster
06:24 msvbhat joined #gluster
06:24 msvbhat__ joined #gluster
06:24 msvbhat_ joined #gluster
06:29 shdeng joined #gluster
06:29 marbu joined #gluster
06:30 jtux joined #gluster
06:41 kotreshhr joined #gluster
06:42 karthik_us joined #gluster
06:48 rafi joined #gluster
06:54 ivan_rossi joined #gluster
06:57 rastar joined #gluster
07:04 bEsTiAn joined #gluster
07:12 jkroon joined #gluster
07:18 rouven joined #gluster
07:22 jtux joined #gluster
07:26 fsimonce joined #gluster
07:34 skoduri joined #gluster
07:35 skoduri rastar, jiffin ..request you to merge https://review.gluster.org/18523 , https://review.gluster.org/18524
07:35 glusterbot Title: Gerrit Code Review (at review.gluster.org)
07:47 rwheeler joined #gluster
07:57 buvanesh_kumar joined #gluster
08:04 armyriad joined #gluster
08:08 bfoster joined #gluster
08:09 kkeithley joined #gluster
08:09 portante joined #gluster
08:09 crag joined #gluster
08:22 xavih_ joined #gluster
08:22 jiffin1 joined #gluster
08:25 kenansul- joined #gluster
08:25 ackjewt__ joined #gluster
08:26 mrcirca__ joined #gluster
08:26 Shu6h3ndu_ joined #gluster
08:27 fsimonce` joined #gluster
08:28 fsimonce joined #gluster
08:28 Gambit15 joined #gluster
08:28 john51 joined #gluster
08:31 owlbot` joined #gluster
08:42 ThHirsch joined #gluster
08:52 Shu6h3ndu_ joined #gluster
09:18 _KaszpiR_ joined #gluster
09:21 msvbhat_ joined #gluster
09:21 msvbhat__ joined #gluster
09:21 msvbhat joined #gluster
09:23 buvanesh_kumar joined #gluster
09:40 poornima_ joined #gluster
09:44 buvanesh_kumar joined #gluster
09:56 sanoj joined #gluster
10:06 kramdoss_ joined #gluster
10:10 major joined #gluster
10:23 mattmcc joined #gluster
10:24 kenansulayman joined #gluster
10:25 Klas mrcirca__: what is it that you want to do?
10:26 Klas glusterfs can be used in several ways for proxmox
10:27 Klas in general, I wouldn't advise a quorum-based setup on an even number of nodes, since that makes it impossible to be fully HA and still fully protected from split-brain
10:27 Klas and both proxmox and gluster are quorum-based
10:39 shyam joined #gluster
10:46 _KaszpiR_ joined #gluster
10:47 Klas there shouldn't be any issues removing changelogs from a volume which is no longer geo-replicating, right?
10:56 msvbhat joined #gluster
10:57 msvbhat_ joined #gluster
10:57 baber joined #gluster
10:57 msvbhat__ joined #gluster
11:01 rastar joined #gluster
11:04 susant joined #gluster
11:11 cyberbootje joined #gluster
11:17 toshywoshy joined #gluster
11:27 Wizek__ joined #gluster
11:31 map1541 joined #gluster
11:50 mrcirca__ Klas: and what do you prefer?
11:51 mrcirca__ Klas: i want to make an HA cluster on 3 nodes with replica 3
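For reference, a minimal sketch of how such a replica 3 volume is usually created across three dedicated nodes; the hostnames (node1..node3) and brick paths are placeholders, not anything taken from this discussion:

    # run from node1, after glusterd is up on all three nodes
    gluster peer probe node2
    gluster peer probe node3
    gluster volume create gv0 replica 3 node1:/data/brick1/gv0 node2:/data/brick1/gv0 node3:/data/brick1/gv0
    gluster volume start gv0

An odd node count like this lets the cluster keep quorum with one node down, which is the property Klas is pointing at above.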
12:03 ic0n joined #gluster
12:12 nbalacha joined #gluster
12:30 dr-gibson joined #gluster
12:48 decayofmind joined #gluster
12:49 shyam joined #gluster
12:56 Klas planning on running it on dedicated nodes or on the same nodes as proxmox?
12:56 Klas recommendation: separate networks as much as possible in proxmox
12:59 pdrakeweb joined #gluster
12:59 blu__ joined #gluster
13:02 baber joined #gluster
13:06 rouven joined #gluster
13:07 ndarshan joined #gluster
13:12 NoctreGryps joined #gluster
13:17 major joined #gluster
13:21 nbalacha joined #gluster
13:21 dspisla joined #gluster
13:22 dspisla Does anybody know how much performance I will lose if I use an object store on top of a gluster volume?
13:26 skylar1 joined #gluster
13:47 ndarshan joined #gluster
13:57 hmamtora joined #gluster
13:57 farhorizon joined #gluster
13:58 _KaszpiR_ joined #gluster
14:17 nbalacha joined #gluster
14:18 kotreshhr left #gluster
14:27 vbellur1 joined #gluster
14:28 vbellur joined #gluster
14:29 vbellur joined #gluster
14:29 susant joined #gluster
14:30 vbellur joined #gluster
14:32 jkroon joined #gluster
14:36 _KaszpiR_ joined #gluster
14:38 mrcirca__ planning on running it on dedicated nodes
14:41 dspisla exit
14:44 atrius joined #gluster
14:45 omie888777 joined #gluster
14:45 rwheeler joined #gluster
14:48 susant joined #gluster
14:55 farhorizon joined #gluster
14:57 tom[] i've a set of servers that use replicated gluster to share files. it's a simple setup much like the getting started tutorial. i'm introducing btrfs to the next build. what are the options?
14:58 tom[] use a btrfs subvolume as a gluster brick? or does btrfs have something like a zvol into which i can put xfs? or should i reserve a hw device partition for the xfs brick?
14:58 tom[] anything else?
15:07 wushudoin joined #gluster
15:07 jtux left #gluster
15:08 vbellur joined #gluster
15:08 vbellur joined #gluster
15:09 vbellur joined #gluster
15:10 vbellur1 joined #gluster
15:14 dominicpg joined #gluster
15:16 vbellur joined #gluster
15:28 major tom[], btrfs supports pools, but not entirely in the way people expect
15:28 tom[] pools?
15:28 major basically, if you build the btrfs filesystem across multiple devices, then any one of those devices is an entry point into the overall pool, and you can choose to mount the pool as a whole somewhere (off in /btr/ or something) in order to construct subvolumes (/btr/@brick1) and then mount those somewhere
15:29 major Ubuntu uses that scheme when dealing with btrfs on root
15:29 major so root being subvolume=@, subvolume=@home, etc
15:30 tom[] in the current plan, the btrfs filesystems each use only one device
15:30 major metadata replication only?
15:30 major or .. metadata mirroring on a single device
15:31 tom[] i don't understand the question
15:31 major btrfs supports data and metadata level replication and checksumming
15:31 tom[] yes
15:32 major if you run only 1 device within the filesystem it often defaults to mirroring the metadata across the single device to act as metadata backups, but you get no replication/recovery of the data blocks themselves
15:32 major w/out something like gluster atop of it all of course
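A minimal sketch of the single-device case major describes, assuming /dev/sdb as a placeholder device and /btr as the pool mount point (on many mkfs.btrfs versions DUP metadata is already the default for a single rotational device, so -m dup only makes it explicit):

    mkfs.btrfs -m dup -d single /dev/sdb   # metadata kept twice on the one device, data kept once
    mount /dev/sdb /btr
    btrfs filesystem df /btr               # shows the Data: single and Metadata: DUP profiles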
15:32 tom[] data isn't being replicated. each physical machine is entirely redundant
15:33 dgandhi joined #gluster
15:33 tom[] so disaster recovery is: get a new machine and load it up
15:33 major well, it will work, but I would still mount the filesystem as a whole off on /btr/ and use that mountpoint as your volume pool
15:33 tom[] ok
15:34 major UUID=c544de7a-259a-4150-9670-cb02929ce64d /btr            btrfs   defaults 0       2
15:34 major UUID=c544de7a-259a-4150-9670-cb02929ce64d /srv/brick1     btrfs   defaults,subvol=@brick1 0       2
15:35 major thus /btr/ is the mount point for managing the btr as a whole (including snapshots), and you mount your desired subvolumes elsewhere
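A sketch of that layout as shell commands; the UUID, subvolume name and brick path simply follow the fstab lines above and are placeholders:

    mount UUID=c544de7a-259a-4150-9670-cb02929ce64d /btr        # whole-pool mount for management/snapshots
    btrfs subvolume create /btr/@brick1                         # subvolume that will hold the brick
    mkdir -p /srv/brick1
    mount -o subvol=@brick1 UUID=c544de7a-259a-4150-9670-cb02929ce64d /srv/brick1
    # the brick path handed to gluster then lives under /srv/brick1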
15:35 tom[] ok
15:35 dr-gibson joined #gluster
15:35 dr-gibson joined #gluster
15:36 major you "can" mount the raw device as say .. /srv/brick1/ .. but it can cause some interesting issues if you get into doing snapshots
15:36 tom[] but my first question was, can i use a btrfs subvol as a gluster brick? it looks like the answer is yes, under an appropriate btr setup
15:36 major yes, you can, but the official code path does not yet include the btrfs snapshot patches
15:37 major so far as I am aware at least
15:37 tom[] i'm not going to snapshot the gluster stuff
15:37 major interesting
15:37 major well, then it should work fine
15:37 tom[] \o/
15:38 * tom[] new to btr
15:39 tom[] the motivation to use it is unrelated to gluster, for lxc backing
15:39 major just be certain to periodically run the btrfs scrub and email the results to an admin account, or wrap up the logs in a logging filter to be shipped out to your monitoring system
15:39 major lxc on gluster on btr?
15:39 tom[] that's a familiar practice from zfs
15:39 tom[] lxc on btr
15:40 vbellur joined #gluster
15:40 major yah, I have lxc on btr and some of my containers have gluster volumes
15:40 major well .. lxd really
15:41 major working on the lxd-formula for saltstack atm .. it is sort of messed up
15:41 tom[] same thing, different cli
15:41 tom[] :)
15:41 tom[] salt!
15:41 major yah, I am trying to fix up the container migration and the like
15:41 tom[] ubuntu?
15:41 major and remote lxd secret management
15:41 major yah, on 16.04
15:42 tom[] debian stretch here. so i am using lxc via the salt state module
15:42 major also working on finishing up and publishing salt-lxd-proxy to be able to manage containers w/out running a minion or ssh on the container itself (manage them from the host)
15:43 major lets me dump containers into the DMZ which have no access to the salt master or any special secrets
15:43 major and shaves off some of the resource usage
15:43 tom[] fancy stuff
15:43 major tedious really ..
15:43 major very tedious
15:44 major :)
15:44 tom[] i know. i mean, i'm happy to have a minion in the container to keep life simple
15:45 major anyway .. yes .. btr should work fine, just make certain to periodically 'btrfs scrub' and monitor the output .. btr doesn't notify the OS regarding invalid checksums or anything via the normal logging path, it is only really reported during a scrub (when you are expected to be monitoring it I guess)
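One way to follow that advice, sketched as an /etc/cron.d entry; the mount point, schedule and mail address are assumptions:

    MAILTO=admin@example.com
    # weekly scrub; -B stays in the foreground so the per-device stats (-d)
    # end up in the mail cron sends when the job produces output
    0 3 * * 0  root  btrfs scrub start -B -d /btr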
15:45 tom[] are you involved in gluster dev?
15:46 major tom[], kind of? I wrote the btrfs snapshot support, and rewrote the zfs support (previous patch was totally incomplete and didn't compile)
15:46 major I am sort of at a stand-still in finishing the btrfs support as I can't find a clean way to track specific key=value data at the gluster level that I need to properly support restoring btrfs snapshots :(
15:47 major and I am sort of distracted with lxd and salt this week :P
15:47 tom[] i am sort of distracted with lxc and salt this week
15:47 tom[] and last
15:47 tom[] and next
15:47 major heh
15:47 major right
15:49 tom[] i'm not a sysop. but i need servers for my app. every few years i need new servers and i have to learn all the new tech and re-learn all the old stuff i forgot over those years
15:49 tom[] it's so painful
15:51 major ouch
15:51 major I just do this stuff as a sort of hobby I guess...
15:52 major have a stack of blades in the basement on 1G fiber .. mostly just because ;)
15:52 tom[] wouldn't, e.g. rh, pay you to do this?
15:53 major I dunno
15:53 major I never asked, and they didn't offer :)
15:54 major sides .. they seem to be steering more towards XFS and Ansible sort of things
15:54 mallorn I'm wondering if someone can help me understand the healing process for a distributed disperse volume.  It's a 5 x (2+1), 60TB volume that's been healing for about 14 days now and doesn't seem to be making progress.  The heal is consuming all resources to the point that reads are at 200KB/s on our 10GigE link.  We're running 3.10.
15:54 mallorn I'll watch a file on all three bricks in the set and, as expected, two will be static and the other will be updating.  Once the updating one reaches the size of the other two it stops and resets itself to about 4GB smaller, then starts copying again.
15:55 tom[] major: ansible ruined my life last go around with servers
15:56 major tom[], yah .. I have done puppet, salt, ansible, chef (and even pickt back in the day) and .. IMHO .. salt is the only complete solution .. the rest seem to be a subset of functionality .. not to mention slower
15:57 major I mean .. it isn't necessarily a fair comparison .. salt isn't designed to be a configuration manager .. it just has that functionality atop the core system .. the rest of salt is sort of spectacular
15:57 major in the words of Sony "It simply does everything"
15:58 tom[] "simply" is arguable
15:58 tom[] because in this case, doing everything cannot be simple
15:59 tom[] but, yeah, salt is impressive
16:00 tom[] but i have only one complaint: naming. too many terrible puns: stacks, pillars, grains etc.
16:00 major as soon as I am done with the lxd-proxy and refactoring lxd-formula I plan on writing a salt-engine to interface the event bus to gluster
16:01 major tom[], HAH .. TRUTH! Salt is the punniest ..
16:01 major don't forget pepper
16:01 major https://github.com/saltstack/pepper
16:01 glusterbot Title: GitHub - saltstack/pepper: A library and stand-alone CLI tools to access a salt-api instance (at github.com)
16:01 tom[] and "salt" is a bad keyword for web search
16:02 major yah .. I always use 'saltstack'
16:02 dr-gibson joined #gluster
16:02 dr-gibson joined #gluster
16:02 misc +1, that was quite painful to find doc due to that :/
16:02 tom[] me too. it's not optimal
16:02 major anyway .. I am hoping to be able to monitor and manage gluster via a salt-engine before the year is up
16:03 major want to be able to spin up containers on demand, make their volumes, monitor them, and migrate them around the network
16:04 major even manage their snapshots :)
16:04 major mallorn, you try the mailing list?
16:08 mallorn I didn't find anything in the archives, but haven't sent anything there yet.
16:10 major mallorn, and there are no log events kicking up when the file is truncated?
16:10 major server logs, dmesg, etc?
16:13 buvanesh_kumar joined #gluster
16:14 vbellur joined #gluster
16:14 mallorn I haven't seen anything in the logs.  It just rewinds about 4GB and starts over.  We had this problem a while back and clearing the locks would resolve the problem, but this time a clear ends up putting a bunch more systems in the heal list.
16:20 mallorn I do see a huge number of 'page allocation errors' in dmesg for that server, similar to this old bug:  https://bugzilla.redhat.com/show_bug.cgi?id=842206
16:20 glusterbot Bug 842206: high, medium, ---, rabhat, CLOSED CURRENTRELEASE, glusterfsd: page allocation failure
16:20 major o.O
16:23 baber joined #gluster
16:24 mallorn I'm watching another file that will reach completion soon.  I'll try to get a statedump then and will watch the logs.
16:24 major are you also using ACL's on gluster and the underlying filesystem (similar to this issue) ?
16:26 major the issue you linked to looks to be related to pressure on the memory allocator which is causing fragmented memory problems, and subsequently is causing problems with sync'ing out contiguous chunks of memory to the filesystem
16:26 major it "sounds" like the writes are being buffered, the heal finishes, and the buffer is attempting to flush to disk and is failing due to fragmented memory
16:27 major like .. someone failed to mmap() a chunk of memory to guarantee it was going to be contiguous and then tried to treat it as always being contiguous
16:28 major and then blamed the memory manager ...
16:28 mallorn That makes sense.
16:29 pdrakeweb joined #gluster
16:29 major one of the posts in that issue talks about disabling ACLs on the underlying filesystem (ext2 in their case) and in Gluster as it was contributing to memory pressure
16:30 major which .. is an interesting work around, but doesn't feel like a solution really
16:31 dr-gibson joined #gluster
16:31 dr-gibson joined #gluster
16:31 major I wonder if using huge tlb's would help/hinder this
16:32 major or is that even a thing anymore with 64bit and everyone having dog piles of memory
16:32 mallorn We're not using acls within Gluster, but I don't know about the underlying filesystem (ZFS).  Checking...
16:33 mallorn I was thinking of enabling huge pages because we have memory to spare (128GB on that server , 49GB free).
16:34 major poking around the gluster code atm to see if someone was trying to mmap() a MAP_FIXED region for something like this
16:34 major MAP_FIXED used to have some .. unexpected behavior depending on the sort of things you were interacting with
16:35 major though .. the problem could be in the underlying fs and how it is being interacted with
16:35 major fun
16:37 mallorn I just realized that I wasted the last ten minutes of your time; I ran dmesg -T to get real timestamps and see that it hasn't appeared since April (which is when we upgraded from 3.7 to 3.10).  I'm sorry about that.  :(
16:37 major heh
16:37 major that happens
16:38 major you haven't rebooted the node since then?
16:39 mallorn It also has iSCSI connections, so no.
16:41 ThHirsch joined #gluster
16:43 msvbhat joined #gluster
16:43 msvbhat_ joined #gluster
16:43 msvbhat__ joined #gluster
16:43 major fun
16:45 mallorn We were planning to do a rolling upgrade on all gluster nodes, but then Something Terrible happened and everything had to heal first (I know they're not supported with distributed-disperse volumes).
16:52 kramdoss_ joined #gluster
16:53 ivan_rossi left #gluster
17:04 skylar1 joined #gluster
17:04 mallorn Another file came to completion, then 'rewound' itself by 4GB (it's always 4GB).  I don't see anything unusual in dmesg, the system logs, the gluster logs, or the statedump.
17:12 dgandhi joined #gluster
17:15 major joined #gluster
17:22 mallorn Different question...  Running 'gluster volume heal [volume] info' takes a huge amount of time to return anything (like 15 minutes or so).  Then it comes back and tells us that each set of five in our distributed-disperse volume has between 3 and 40 files to heal.
17:24 mallorn Is there anything that could be causing it to be slow?  If I remove all locks the output is instantaneous, but that has adverse effects on other stuff.
17:28 rastar joined #gluster
17:30 pdrakeweb joined #gluster
17:34 farhorizon joined #gluster
17:35 dgandhi joined #gluster
17:39 pdrakeweb joined #gluster
17:41 xavih joined #gluster
17:44 shyam joined #gluster
17:44 major joined #gluster
18:02 tom[] not quite gluster but since major was so helpful before ... what's a better way of getting a btr fs uuid into a shell var?
18:02 tom[] uuid=$(btrfs fi sh /dev/sdb | grep uuid | cut -d' ' -f5)
18:03 skylar1 joined #gluster
18:04 snehring joined #gluster
18:04 rouven joined #gluster
18:05 MrAbaddon joined #gluster
18:08 baber joined #gluster
18:13 rouven joined #gluster
18:23 victori joined #gluster
18:27 Jacob843 joined #gluster
18:29 msvbhat joined #gluster
18:29 msvbhat_ joined #gluster
18:34 msvbhat__ joined #gluster
18:39 major tom[], blkid cmd?
18:42 rouven joined #gluster
18:43 major tom[], in a shell script you can do: eval "$(blkid -o udev <device path>)
18:43 major erm
18:43 major eval "$(blkid -o udev "${device_path}")"
18:43 major if you set device_path first
18:44 major and that will dump the UUID into the environment variable
18:44 major or into a whole stack of variables
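Two equivalent ways to do what major describes, with /dev/sdb as a placeholder device:

    uuid=$(blkid -s UUID -o value /dev/sdb)   # prints just the UUID, nothing to parse

    eval "$(blkid -o udev /dev/sdb)"          # exports ID_FS_UUID, ID_FS_TYPE, ... into the shell
    uuid=$ID_FS_UUID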
18:44 tom[] major: tnx
18:45 farhorizon joined #gluster
18:47 tom[] jeepers
18:48 rouven joined #gluster
18:50 vbellur joined #gluster
18:59 msvbhat joined #gluster
18:59 msvbhat_ joined #gluster
18:59 msvbhat__ joined #gluster
19:05 Wizek_ joined #gluster
19:07 baber joined #gluster
19:42 rouven joined #gluster
19:48 rouven joined #gluster
19:49 baber joined #gluster
19:57 vbellur joined #gluster
19:59 dgandhi joined #gluster
20:19 msvbhat joined #gluster
20:19 msvbhat_ joined #gluster
20:22 rouven joined #gluster
20:29 pdrakeweb joined #gluster
20:35 dgandhi joined #gluster
20:36 rouven joined #gluster
20:41 ThHirsch joined #gluster
20:45 gbox joined #gluster
20:49 pdrakeweb joined #gluster
20:59 plarsen joined #gluster
21:02 rouven joined #gluster
21:06 rouven joined #gluster
21:12 rouven joined #gluster
21:16 rouven joined #gluster
21:19 msvbhat joined #gluster
21:22 farhorizon joined #gluster
21:27 baber joined #gluster
21:30 farhoriz_ joined #gluster
21:30 pdrakeweb joined #gluster
21:34 plarsen joined #gluster
21:40 kpease joined #gluster
21:56 omie888777 joined #gluster
22:04 _KaszpiR_ joined #gluster
22:17 rouven joined #gluster
22:20 vbellur joined #gluster
22:20 msvbhat joined #gluster
22:20 msvbhat_ joined #gluster
22:21 vbellur joined #gluster
22:21 msvbhat__ joined #gluster
22:21 rouven joined #gluster
22:22 vbellur joined #gluster
22:22 vbellur1 joined #gluster
22:23 vbellur joined #gluster
22:24 vbellur joined #gluster
22:24 vbellur joined #gluster
22:25 vbellur joined #gluster
22:43 rouven joined #gluster
22:46 farhorizon joined #gluster
23:03 rouven joined #gluster
23:07 rouven_ joined #gluster
23:10 ahino joined #gluster
23:16 vbellur joined #gluster
23:19 vbellur1 joined #gluster
23:22 vbellur joined #gluster
23:25 vbellur joined #gluster
23:37 vbellur joined #gluster
23:45 vbellur joined #gluster
23:55 map1541 joined #gluster
23:59 vbellur joined #gluster
