
IRC log for #gluster, 2017-10-16


All times shown according to UTC.

Time Nick Message
00:00 kenansulayman joined #gluster
00:01 rouven_ joined #gluster
01:41 baber joined #gluster
01:54 shyam joined #gluster
01:56 ilbot3 joined #gluster
01:56 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:01 prasanth joined #gluster
02:02 blu_ joined #gluster
02:19 rastar joined #gluster
02:35 ahino joined #gluster
02:41 nimda_ joined #gluster
02:42 nimda_ Hi everyone, I'm using gluster 3.10.6 with 2 replicated nodes. I see a lot of log entries like this: [2017-10-16 02:40:10.974315] E [index.c:605:index_link_to_base] 0-images-index: /storage/public/.glusterfs/indices/xattrop/841c7af5-a28d-4cc4-aa89-c946f2257d43: Not able to add to index (Too many links)
02:43 nimda_ in /var/log/glusterfs/bricks/storage-public.log
02:43 nimda_ Does anybody know what is wrong with that?
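The "Too many links" in that message is the EMLINK errno: the index translator records entries that need heal as hard links to a base file under .glusterfs/indices/xattrop on the brick, and the link() call fails once the brick filesystem's hard-link limit is hit (around 65000 on ext4), which usually points to a very large backlog of pending-heal entries. A rough check, assuming the brick path from the log line and that the volume is named "images" as the 0-images-index prefix suggests:

    # number of pending-heal index entries on this brick (one hard link each)
    ls /storage/public/.glusterfs/indices/xattrop | wc -l

    # what the self-heal daemon thinks is still pending
    gluster volume heal images info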
03:06 nbalacha joined #gluster
03:18 kramdoss_ joined #gluster
03:20 masber joined #gluster
03:24 kraynor5b joined #gluster
03:26 shdeng joined #gluster
03:29 msvbhat joined #gluster
03:29 msvbhat__ joined #gluster
03:29 msvbhat_ joined #gluster
03:32 psony joined #gluster
03:48 itisravi joined #gluster
03:54 jkroon joined #gluster
04:10 shdeng joined #gluster
04:13 msvbhat joined #gluster
04:13 msvbhat_ joined #gluster
04:14 Humble joined #gluster
04:15 msvbhat__ joined #gluster
04:36 ppai joined #gluster
04:39 kdhananjay joined #gluster
04:43 Shu6h3ndu joined #gluster
04:48 rouven joined #gluster
04:53 rouven joined #gluster
04:55 map1541 joined #gluster
05:02 karthik_us joined #gluster
05:03 ndarshan joined #gluster
05:04 sanoj joined #gluster
05:05 skumar joined #gluster
05:10 aravindavk joined #gluster
05:11 xavih joined #gluster
05:17 Prasad joined #gluster
05:21 cloph_away joined #gluster
05:25 omie888777 joined #gluster
05:25 jiffin joined #gluster
05:27 jiffin joined #gluster
05:29 Prasad_ joined #gluster
05:32 Prasad__ joined #gluster
05:34 shdeng joined #gluster
05:39 prasanth joined #gluster
05:52 rafi joined #gluster
05:55 susant joined #gluster
06:01 mbukatov joined #gluster
06:09 nbalacha joined #gluster
06:22 nbalacha joined #gluster
06:32 ilbot3 joined #gluster
06:32 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
06:36 jtux joined #gluster
06:40 kdhananjay joined #gluster
06:41 prasanth joined #gluster
06:48 prasanth joined #gluster
06:52 poornima_ joined #gluster
06:53 shdeng joined #gluster
06:53 ivan_rossi joined #gluster
06:54 kotreshhr joined #gluster
07:12 dominicpg joined #gluster
07:17 fsimonce joined #gluster
07:24 rafi1 joined #gluster
07:45 Prasad__ joined #gluster
07:47 deskpot joined #gluster
07:49 jkroon joined #gluster
07:50 sanoj joined #gluster
07:52 msvbhat_ joined #gluster
07:52 msvbhat joined #gluster
07:52 msvbhat__ joined #gluster
08:01 Humble joined #gluster
08:06 kramdoss_ joined #gluster
08:06 rastar joined #gluster
08:08 sanoj joined #gluster
08:18 _KaszpiR_ joined #gluster
08:19 buvanesh_kumar joined #gluster
08:24 rwheeler joined #gluster
08:31 Prasad__ joined #gluster
08:51 jap joined #gluster
08:53 jap hi, I'm trying to understand how async replication works in gluster and have a hard time finding architecture docs
08:54 jap (like for example how the change detection works, or what is supposed to happen if I change a single byte in a 20G file)
08:59 jap if I understand correctly both xsync and changelog are only used to keep track of which files have changed, and then rsync is used to replicate the changes
09:00 jap I'm afraid that this single byte change will then lead to a full rescan of the 20G files on both sides, or am I reading this wrong?
09:13 rouven joined #gluster
09:18 msvbhat joined #gluster
09:23 msvbhat_ joined #gluster
09:23 msvbhat__ joined #gluster
09:24 skoduri joined #gluster
09:25 omie888777 joined #gluster
09:31 Prasad__ joined #gluster
09:42 skoduri joined #gluster
09:45 Wizek_ joined #gluster
10:19 ndevos jap: underneath geo-replication there is standard rsync (with loads of fancy options), I think it calculates checksums for data-ranges and only transfers the changed blocks
10:20 ndevos kotreshhr: ^
10:55 jap ndevos: thanks, that makes sense (but is a bit of a disappointment, I was hoping for something like zfs send/receive or ceph rbd where the delta between snapshots can be extracted and applied to another filesystem very easily)
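For the 20G-file question above: rsync's delta-transfer algorithm means only the changed blocks cross the network, but both ends still read and checksum the whole file, so a one-byte change is cheap in bandwidth yet not free in I/O. The behaviour can be seen with plain rsync outside gluster (paths and hostname are illustrative):

    # delta transfer: update the destination in place, sending only changed blocks
    rsync --inplace --no-whole-file --stats /data/bigfile remotehost:/data/bigfile

    # for comparison, --whole-file disables the delta algorithm and resends everything
    rsync --whole-file --stats /data/bigfile remotehost:/data/bigfile

(--no-whole-file matters mainly when testing on one machine, because rsync defaults to whole-file copies for local-to-local transfers.)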
10:56 ndevos jap: you can make snapshots of a volume (not per file) and mount the snapshot to create an off-site backup
10:57 ndevos jap: that uses lvm-thinp in the backend, so it is more like the other approaches, I think
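A minimal sketch of that snapshot-and-mount approach, assuming the bricks sit on thin-provisioned LVM and using made-up names ("myvol" for the volume, "server1" for a node reachable from the backup client, "backuphost" for the off-site target):

    # create and activate a volume-level snapshot
    gluster snapshot create nightly myvol no-timestamp
    gluster snapshot activate nightly

    # mount the activated snapshot (read-only) and copy it off-site
    mount -t glusterfs server1:/snaps/nightly/myvol /mnt/snap
    rsync -a /mnt/snap/ backuphost:/backups/myvol/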
10:57 deskpot joined #gluster
10:58 jap let me see if I can find some documentation on that, it might be an option in our case
11:05 gospod2 joined #gluster
11:37 Prasad_ joined #gluster
11:39 Prasad__ joined #gluster
11:40 psony joined #gluster
12:05 Norky joined #gluster
12:25 psony_ joined #gluster
12:34 pdrakeweb joined #gluster
12:37 ThHirsch joined #gluster
12:40 major joined #gluster
12:55 jstrunk joined #gluster
13:00 jstrunk joined #gluster
13:11 baber joined #gluster
13:14 shyam joined #gluster
13:17 pdrakeweb joined #gluster
13:19 shyam joined #gluster
13:22 pdrakeweb joined #gluster
13:24 nbalacha joined #gluster
13:46 bitchecker_ joined #gluster
13:52 Champi joined #gluster
13:54 kkeithley @paste
13:54 glusterbot kkeithley: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
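So, for example, a volume status dump can be shared as a URL with:

    gluster volume status | nc termbin.com 9999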
13:54 farhorizon joined #gluster
14:00 hmamtora joined #gluster
14:00 hmamtora_ joined #gluster
14:06 pdrakeweb joined #gluster
14:08 nbalacha joined #gluster
14:08 pdrakewe_ joined #gluster
14:12 masber joined #gluster
14:20 skylar1 joined #gluster
14:31 pdrakeweb joined #gluster
14:35 susant joined #gluster
14:37 bit4man joined #gluster
14:43 farhorizon joined #gluster
14:51 mattmcc joined #gluster
14:57 farhoriz_ joined #gluster
15:01 mallorn joined #gluster
15:06 wushudoin joined #gluster
15:08 kpease joined #gluster
15:28 farhorizon joined #gluster
15:29 shyam joined #gluster
15:43 rouven joined #gluster
15:43 shyam joined #gluster
15:43 hmamtora joined #gluster
15:56 ivan_rossi left #gluster
15:58 kshlm joined #gluster
16:04 mallorn I'm wondering if someone can help me understand the healing process for a distributed disperse volume.  It's a 5 x (2+1), 60TB volume that's been healing for about 12 days now and doesn't seem to be making progress.
16:06 atrius joined #gluster
16:08 rastar joined #gluster
16:09 rwheeler joined #gluster
16:13 jiffin joined #gluster
16:16 sanoj joined #gluster
16:19 baber joined #gluster
16:26 mrx1 joined #gluster
16:28 pdrakeweb joined #gluster
16:34 jkroon joined #gluster
16:41 kpease joined #gluster
16:43 kpease_ joined #gluster
16:45 ij_ joined #gluster
16:49 rouven joined #gluster
16:53 farhorizon joined #gluster
16:55 buvanesh_kumar joined #gluster
16:57 bitchecker joined #gluster
16:59 farhorizon joined #gluster
17:01 farhorizon joined #gluster
17:02 xavih joined #gluster
17:04 baber joined #gluster
17:04 pdrakeweb joined #gluster
17:10 rouven joined #gluster
17:15 rouven joined #gluster
17:20 rouven joined #gluster
17:25 rouven joined #gluster
17:32 ThHirsch joined #gluster
17:45 rouven joined #gluster
17:45 NoctreGryps joined #gluster
17:48 NoctreGryps Good afternoon, I have inherited an already-set-up gluster set of nodes that are in all kinds of wonky mess: four nodes total. One sees three peers, only one in actual "Peer in Cluster" status and the other two in "Accepted peer request" state; the second node sees three peers, with one in "Accepted peer request", one in "Peer Rejected", and one in "Peer in Cluster"; the third node only sees two peers, both in "Accepted peer request" state; and the fourth sees its peers
17:49 NoctreGryps in "Accepted peer request" state. I'm looking for tools or documentation on how to maintain and administer an existing system, so I can figure out how terribly knackered this is. Any suggestions?
17:49 NoctreGryps I find lots of materials on how to set up and configure gluster, but not so much for those who have to figure out how to give care and feeding to an existing system.
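The usual starting points for a mixed bag of "Accepted peer request" / "Peer Rejected" states are the peer CLI on every node plus the on-disk peer records; the upstream admin docs also describe a recovery procedure for a rejected peer ("Resolving Peer Rejected State"), which is worth following verbatim rather than improvising. A sketch of the read-only checks:

    # how this node sees the others
    gluster peer status
    gluster pool list

    # the persisted peer records behind that output
    ls /var/lib/glusterd/peers/
    cat /var/lib/glusterd/peers/*

    # (recovery for one rejected peer, per the docs, is roughly: stop glusterd on that
    # node, clear /var/lib/glusterd except glusterd.info, start glusterd, peer probe
    # a healthy node, then restart glusterd once more -- one node at a time)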
17:49 major gluster needs care and feeding?
17:50 msvbhat joined #gluster
17:50 94KAABFFI joined #gluster
17:50 msvbhat_ joined #gluster
17:52 NoctreGryps Definition of care and feeding: the providing of what is needed for sustenance, well-being, or efficient operation
17:53 NoctreGryps Unless you meant that in the sense that the person before me clearly didn't set it up correctly if it needs attention of any sort.
18:01 major just .. normally I find that it does a pretty good job of taking care of itself.  About the only time it has a hiccup is when I am off mucking with something..
18:01 major mostly I just focus on normal monitoring of logs and space and the automatic healing interface
18:02 NoctreGryps Yeah, that's what started my snowball investigation: the root volume keeps filling up, and after putting some proactive monitoring around that, I noticed only one of the four volumes is filling up, which implied only one was doing heavy work... which led me to check the peer status.
18:03 Ulrar joined #gluster
18:03 major ahhh
18:03 NoctreGryps Gotta love inheriting a job after somebody else, right?
18:03 major I would offer you a beer...
18:03 major or whiskey
18:04 NoctreGryps Thanks man, I appreciate the empathy. :) The best part is that this is all on production, of course.
18:04 major bleh
18:04 NoctreGryps p much
18:04 major yah .. that sucks
18:04 major well .. I mean .. you can swap the bricks live ;)
18:04 major swap out bricks one at a time for bigger bricks, let it heal itself
18:05 major even if you bring in a new node or whatever
18:06 major swap node/brick, wait for heal, rinse repeat
18:06 major did that once already
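What major is describing maps onto the replace-brick command: swap one brick for a new, larger one, let self-heal repopulate it, confirm the heal backlog is empty, then move to the next. A sketch with made-up host and brick paths:

    # replace a single brick with a bigger one (data is rebuilt by self-heal)
    gluster volume replace-brick myvol server2:/bricks/old/brick server2:/bricks/new/brick commit force

    # wait until this reports nothing left to heal before touching the next brick
    gluster volume heal myvol info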
18:07 NoctreGryps Thanks for that suggestion, I'll add it to the list of things to evaluate. I've actually got two implementations of gluster up in AWS, on two different VPCs for two different purposes, and given that the other one is healthy and this one with the weird peer statuses had a problem a week ago that it shouldn't, in theory, have had, right now I'm just trying to focus on getting it to appear healthy to status checks like the other one.
18:07 major hmmm
18:07 major wonder if it has deleted files that aren't being reclaimed
18:07 NoctreGryps At the moment, I've been dealing with the out of space issue with gluster's volume log rotate command
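Log rotation renames the live brick/daemon logs so the old ones can be compressed or removed; pairing it with the distro's logrotate on /var/log/glusterfs is the usual way to keep logs from filling the root volume. The exact CLI syntax varies slightly by release, so check `gluster volume help` on the installed version; a sketch with a hypothetical volume name:

    # rotate the logs for one volume (optionally limited to a single brick)
    gluster volume log rotate myvol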
18:08 major this on btr or something?
18:08 NoctreGryps BTR? I'm sorry?
18:08 NoctreGryps I don't recognize that
18:08 major gluster volumes ontop of btrfs
18:09 NoctreGryps I don't think so. To be honest, I've literally only been trying to start wrapping my head around gluster for a couple of days; I had no prior exposure to it. It's up on Ubuntu up at AWS.
18:10 major hmmm
18:11 major well .. it is worth looking at then.  It isn't a gluster problem so much as a potential "gotcha" of COW filesystems not freeing space when a snapshot/cow in the middle is deleted
18:11 major and 'cp' and other COW filesystem aware tools can do unexpected things as well
18:12 NoctreGryps Fair. Thank you for giving me the thought to keep in mind. I'd rather have a list of possible problems to investigate that turn out to be not applicable than a problem I have no leads to investigate on.
18:13 major yah .. with btrfs there should be a periodic "btrfs scrub" happening with the logs being sent to your log monitoring interface
18:13 major the scrub is the only interface that will tell you if you have corrupt inodes as part of regular monitoring/maintenance
18:14 major kinda exciting that :)
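For reference, the periodic scrub and space checks major is talking about look roughly like this on a btrfs-backed brick (mount point is illustrative):

    # start a scrub in the background, then poll its progress / error counts
    btrfs scrub start /bricks/brick1
    btrfs scrub status /bricks/brick1

    # space accounting: what is allocated vs. actually used, per block group
    btrfs filesystem df /bricks/brick1
    btrfs filesystem usage /bricks/brick1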
18:14 NoctreGryps haha made myself a note. I was trying to decide earlier if I needed to see if there was a way to do maybe Nagios monitoring on peer status or something (assuming it's even possible). But when all four volumes started showing different peer statuses, I decided it could go on the wish list for later.
18:15 major the IP's change on the peers?
18:16 NoctreGryps Given I don't know how long the peers have been perturbed with each other, I really don't know.
18:16 rouven_ joined #gluster
18:17 major hmm
18:17 NoctreGryps I would guess most likely not, as it's assigned EIPs.
18:17 major they are still messed up?
18:17 NoctreGryps But I can't 100% say for sure no, it hasn't changed.
18:17 baber joined #gluster
18:18 NoctreGryps yeah, I'm currently reading the quick start guide and some of the other docs before I try to work on it. Right now we're not seeing production symptoms, but we've got a lot of critical prod stuff referencing this set of volumes, so I am trying to feel like I know maybe a half an idea of what I'm doing before I do anything.
18:19 NoctreGryps It'd be utterly my luck if I tried to resync something and took prod down
18:21 major know the feeling
18:22 NoctreGryps Plus side: by the other side of this fiasco, I'll have new experience and skills on the resume, I reckon. That's what being uncomfortable is, that's growing happening, right? (or something.)
18:23 major It's the growing pains that make me nervous ;)
18:23 NoctreGryps amen!
18:41 baber joined #gluster
18:46 jiffin1 joined #gluster
19:00 pdrakeweb joined #gluster
19:05 _KaszpiR_ joined #gluster
19:09 buvanesh_kumar joined #gluster
19:09 pdrakeweb joined #gluster
19:10 plarsen joined #gluster
19:13 cyberbootje joined #gluster
19:17 anthony25 joined #gluster
19:19 cyberbootje joined #gluster
19:20 mallorn I'm wondering if someone can help me understand the healing process for a distributed disperse volume.  It's a 5 x (2+1), 60TB volume that's been healing for about 12 days now and doesn't seem to be making progress.  The heal is consuming all resources to the point that reads are at 200KB/s on our 10GigE link.  We're running 3.10.
19:29 NoctreGryps no help from me mallorn, sorry
19:29 WebertRLZ joined #gluster
19:36 msvbhat joined #gluster
19:36 msvbhat_ joined #gluster
19:36 msvbhat__ joined #gluster
19:37 cyberbootje joined #gluster
19:43 mallorn I'll watch a file on all three bricks in the set and, as expected, two will be static and the other will be updating.  Once it reaches the size of the other two it stops and resets itself to about 4GB smaller, then starts copying again.
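For a dispersed volume like this one, the heal backlog and the self-heal daemon's aggressiveness are usually inspected and tuned per volume. The option names below are from the 3.10-era disperse option set and the values are only illustrative, so check what the installed version actually supports with `gluster volume get`; "myvol" stands in for the real volume name:

    # what each brick still has queued for heal
    gluster volume heal myvol info

    # current throttle settings for disperse self-heal
    gluster volume get myvol disperse.shd-max-threads
    gluster volume get myvol disperse.background-heals

    # dial healing back so client I/O isn't starved (illustrative values)
    gluster volume set myvol disperse.shd-max-threads 1
    gluster volume set myvol disperse.background-heals 0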
19:53 farhorizon joined #gluster
20:18 baber joined #gluster
20:28 maia joined #gluster
20:28 maia Hello folks
20:28 maia Is it viable to use gluster for database volumes?
20:44 farhorizon joined #gluster
20:49 kpease joined #gluster
20:58 baber joined #gluster
21:43 farhorizon joined #gluster
22:22 omie888777 joined #gluster
22:26 masber joined #gluster
22:26 zcalusic when is gluster 3.12 for debian stretch expected?
22:26 zcalusic https://download.gluster.org/pub/gluster/glusterfs/3.12/LATEST/Debian/stretch/apt/ is empty :(
22:26 glusterbot Title: Index of /pub/gluster/glusterfs/3.12/LATEST/Debian/stretch/apt (at download.gluster.org)
22:59 Jacob843 joined #gluster
23:09 catphish joined #gluster
23:11 squeakyneb joined #gluster
23:13 farhorizon joined #gluster
23:34 catphish would anyone be able to look at a small file workload (from strace) and tell me if there's anything obvious that can be done to improve it? https://paste.ubuntu.com/25756007/
23:34 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
23:34 catphish essentially i'm extracting a tar file with lots of small files, which seems like a worst-case workload for gluster, but i'm wondering what i might do to improve it
23:38 catphish my network RTT is 0.5ms so i suppose unfortunately these timings are reasonable
23:39 major catphish, upcall caching?
23:40 catphish i think i'm already using it
23:41 catphish at least features.cache-invalidation, features.cache-invalidation-timeout, performance.md-cache-timeout
23:41 catphish however i'm very new to gluster, so there will be plenty i don't understand
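The options catphish lists belong to the md-cache/upcall group; the combination usually documented for metadata-heavy small-file workloads enables cache invalidation on the server side and long metadata cache timeouts on the client side. The values below are the commonly documented ones for this tuning, not something measured against this volume, and "myvol" is a placeholder:

    # server-side upcall / cache invalidation
    gluster volume set myvol features.cache-invalidation on
    gluster volume set myvol features.cache-invalidation-timeout 600

    # client-side metadata caching
    gluster volume set myvol performance.stat-prefetch on
    gluster volume set myvol performance.cache-invalidation on
    gluster volume set myvol performance.md-cache-timeout 600

    # keep enough inodes cached for invalidation to pay off
    gluster volume set myvol network.inode-lru-limit 50000

That said, an untar is dominated by synchronous create/mkdir/write round trips, which metadata caching cannot hide, so with a 0.5ms RTT the per-file timings in the strace are largely network-bound, as catphish suspects.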
