IRC log for #gluster, 2017-05-26


All times shown according to UTC.

Time Nick Message
00:00 doc|work thanks, I've been trying to solve it even longer than that :/ I appreciate the response
00:00 doc|work seems like this is yet another docker problem
00:00 shyam Search for fuse here: https://docs.docker.com/engine/reference/run/
00:00 glusterbot Title: Docker run reference | Docker Documentation (at docs.docker.com)
00:01 doc|work will do, thanks!
00:42 pioto_ joined #gluster
00:53 doc|work shyam, thanks! that set me on the right path.
00:54 doc|work For anyone else who sees this in the chat logs: I could mount it using the mount command by adding securityContext: \n \t privileged: true to the statefulset container definition
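Expanded into an abridged, hypothetical StatefulSet manifest, that change would look roughly like the fragment below. Only the securityContext block comes from the message above; the container name and image are illustrative.

    # excerpt of a StatefulSet container definition; names/image are illustrative
    spec:
      template:
        spec:
          containers:
          - name: app                   # hypothetical container name
            image: example/app:latest   # hypothetical image
            securityContext:
              privileged: true          # lets the FUSE mount succeed inside the container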
01:05 Teraii joined #gluster
01:08 shdeng joined #gluster
01:13 plarsen joined #gluster
01:14 pdrakeweb joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:02 susant joined #gluster
02:08 derjohn_mob joined #gluster
02:38 k0nsl joined #gluster
02:38 k0nsl joined #gluster
02:42 kpease joined #gluster
02:52 k0nsl joined #gluster
02:52 k0nsl joined #gluster
03:10 kramdoss_ joined #gluster
03:26 k0nsl joined #gluster
03:26 k0nsl joined #gluster
03:28 ppai joined #gluster
03:30 riyas joined #gluster
03:35 nbalacha joined #gluster
03:36 k0nsl joined #gluster
03:36 k0nsl joined #gluster
03:54 aravindavk joined #gluster
03:55 buvanesh_kumar joined #gluster
03:58 itisravi joined #gluster
04:29 skumar joined #gluster
04:37 rafi joined #gluster
04:44 doc|work joined #gluster
04:45 gyadav joined #gluster
04:45 gyadav__ joined #gluster
04:46 ankitr joined #gluster
04:57 Karan joined #gluster
04:57 k0nsl joined #gluster
04:57 k0nsl joined #gluster
04:57 Shu6h3ndu joined #gluster
05:08 rafi joined #gluster
05:13 hgowtham joined #gluster
05:30 ndarshan joined #gluster
05:40 karthik_us joined #gluster
05:40 sanoj joined #gluster
05:49 kramdoss_ joined #gluster
05:50 Saravanakmr joined #gluster
05:53 k0nsl joined #gluster
05:53 k0nsl joined #gluster
05:54 apandey joined #gluster
05:57 sona joined #gluster
06:01 jiffin joined #gluster
06:04 Humble joined #gluster
06:05 rafi joined #gluster
06:06 hgowtham joined #gluster
06:10 kdhananjay joined #gluster
06:14 ashiq joined #gluster
06:15 rafi1 joined #gluster
06:17 apandey_ joined #gluster
06:21 Jacob843 joined #gluster
06:31 susant joined #gluster
06:36 jtux joined #gluster
06:37 k0nsl joined #gluster
06:37 k0nsl joined #gluster
06:42 ankitr joined #gluster
06:47 poornima_ joined #gluster
06:47 Shu6h3ndu joined #gluster
06:48 ankitr joined #gluster
06:49 rastar joined #gluster
06:51 ndarshan joined #gluster
06:56 itisravi joined #gluster
06:57 ivan_rossi joined #gluster
06:57 ivan_rossi left #gluster
06:58 _KaszpiR_ doc|work afair you can just allow given cgroups to mount fuse within containers without running them in privileged mode
06:58 Prasad joined #gluster
06:59 ivan_rossi joined #gluster
06:59 ivan_rossi left #gluster
06:59 _KaszpiR_ I remember I had to update my lxc container configs to allow fuse mounts
06:59 _KaszpiR_ https://github.com/moby/moby/issues/514 should help
06:59 glusterbot Title: Can't use Fuse within a container · Issue #514 · moby/moby · GitHub (at github.com)
07:00 _KaszpiR_ so no need for running in privileged mode
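The alternative discussed in that issue is, roughly, to pass the FUSE device and the capability that mount needs instead of running fully privileged. A sketch, with illustrative image, server and volume names:

    # unprivileged container, but with /dev/fuse and CAP_SYS_ADMIN added
    docker run -it --device /dev/fuse --cap-add SYS_ADMIN example/gluster-client bash

    # then, inside the container:
    mount -t glusterfs server1:/myvolume /mnt/gluster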
07:01 _KaszpiR_ ah it was mentioned twice .. (/me slowpoke)
07:03 marbu joined #gluster
07:09 ankitr joined #gluster
07:10 ankitr_ joined #gluster
07:14 skumar joined #gluster
07:26 kramdoss_ joined #gluster
07:34 buvanesh_kumar joined #gluster
07:35 skumar joined #gluster
07:38 riyas joined #gluster
07:51 buvanesh_kumar joined #gluster
07:55 fsimonce joined #gluster
07:57 aravindavk joined #gluster
08:01 Shu6h3ndu joined #gluster
08:03 [diablo] joined #gluster
08:17 _KaszpiR_ joined #gluster
08:25 kramdoss_ joined #gluster
08:42 buvanesh_kumar joined #gluster
08:48 ndarshan joined #gluster
08:49 poornima_ joined #gluster
08:49 skumar joined #gluster
09:12 MrAbaddon joined #gluster
09:13 Wizek_ joined #gluster
09:19 poornima_ joined #gluster
09:33 hgowtham joined #gluster
09:35 derjohn_mob joined #gluster
09:36 ayaz joined #gluster
09:37 skumar joined #gluster
09:41 ankitr joined #gluster
10:06 rafi1 joined #gluster
10:57 poornima_ joined #gluster
10:58 mahdi_adnan joined #gluster
11:02 ndarshan joined #gluster
11:10 mahdi_adnan left #gluster
11:19 Karan joined #gluster
11:24 MadPsy joined #gluster
11:24 MadPsy joined #gluster
11:32 jkroon joined #gluster
11:33 MadPsy joined #gluster
11:33 MadPsy joined #gluster
11:39 itisravi joined #gluster
11:43 MadPsy joined #gluster
11:43 MadPsy joined #gluster
11:45 pioto_ joined #gluster
11:56 ankitr joined #gluster
11:56 rafi joined #gluster
12:02 hgowtham joined #gluster
12:14 atinm joined #gluster
12:17 plarsen joined #gluster
12:21 craig0990 joined #gluster
12:27 craig0990 Hello folks. You are my last, best hope ;) I have a GlusterFS cluster (3.8.4 - bog-standard setup) which has filled a brick. All (6) bricks are homogeneous (500GB drives). I assume the DHT is weighting files to that particular brick. Is there a command to "unweight" the layout? Push files over to some of the bricks with free space?
12:29 Klas https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/sect-User_Guide-Managing_Volumes-Rebalancing.html
12:29 glusterbot Title: 10.5. Rebalancing Volumes (at access.redhat.com)
12:29 Klas gluster volume rebalance VOLNAME start
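For reference, the usual sequence looks like this (volume name is illustrative). A fix-layout run only recomputes the DHT layout for new files, while a plain start also migrates existing data:

    # rebalance data across all bricks and watch progress
    gluster volume rebalance myvolume start
    gluster volume rebalance myvolume status

    # lighter variant: only recompute the layout, without moving files
    gluster volume rebalance myvolume fix-layout start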
12:29 gem joined #gluster
12:29 Klas craig0990: is that what you are looking for?
12:31 craig0990 Klas: I've rebalanced, and it says completed (in seconds), but "/dev/sdb1" is still 100% used on Node 1 :'(
12:37 Klas how full are the other bricks?
12:39 craig0990 Erm, two bricks at 100%, rest are 26%/26%, and 56%/57% (3x2 replicas)
12:40 Klas wait, are you saying that a single volume, with a dedicated disk, is full?
12:42 Klas (that would be hard to fix)
12:42 craig0990 My terminology may be a bit off, so apologies if it is. 6 servers, 3x2 replica setup. Bricks are allocated on /dev/sdb1 on each server. Bricks 1 and 2 are full. Bricks 4-6 have space. Rebalancing does not help.
12:42 craig0990 If this is a case of "you are in serious trouble" let me know, and I can stop trying to fix something I don't yet fully understand ;)
12:43 Klas ah, you have a replica 2 over 6 nodes?
12:43 Klas that should mean that rebalancing should work, at least as I understand it
12:43 Klas I'm no expert btw
12:45 craig0990 Me neither - our resident expert is on leave (oh the joys)
12:45 craig0990 This may simplify the explanation: https://ibb.co/fH8yyF
12:46 glusterbot Title: brick fuzz — imgbb.com (at ibb.co)
12:48 Klas in our setup, I would add more disk =P
12:48 kramdoss_ joined #gluster
12:49 craig0990 Yeah, my thought too :D That might be the route we take...thanks for the assistance - at least I know it's not a "simple" fix :)
12:52 EdwardIII joined #gluster
12:53 derjohn_mob joined #gluster
12:53 baber joined #gluster
12:56 gem_ joined #gluster
13:06 mahdi_adnan joined #gluster
13:07 mahdi_adnan Hi, I deleted some large files from my volume, but on one brick I can see they're still there in the unlink dir. Can I just delete those files from inside the brick?
13:16 skylar1 joined #gluster
13:23 susant joined #gluster
13:29 plarsen joined #gluster
13:44 MrAbaddon joined #gluster
13:57 pdrakeweb joined #gluster
14:00 nbalacha joined #gluster
14:01 shyam joined #gluster
14:02 pdrakeweb joined #gluster
14:08 jiffin joined #gluster
14:16 craig0990 left #gluster
14:29 jiffin joined #gluster
14:30 kpease joined #gluster
14:31 farhorizon joined #gluster
14:32 jiffin joined #gluster
14:38 sanoj joined #gluster
14:43 farhorizon joined #gluster
14:58 masuberu joined #gluster
15:04 shyam joined #gluster
15:05 wushudoin joined #gluster
15:05 wushudoin joined #gluster
15:07 kramdoss_ joined #gluster
15:12 gyadav_ joined #gluster
15:12 gyadav joined #gluster
15:26 shyam joined #gluster
15:27 susant joined #gluster
15:28 nbalacha joined #gluster
15:36 Shu6h3ndu joined #gluster
15:39 _KaszpiR_ mahdi_adnan it should be handled by gluster
15:44 mahdi_adnan But the brick is filling up and it's reaching 93%
15:44 mahdi_adnan I deleted one file from one of the bricks, though
15:45 kramdoss_ joined #gluster
15:45 mahdi_adnan the file is just not being used by the client anymore
15:54 _KaszpiR_ adjust trash settings
15:54 _KaszpiR_ then
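Assuming the trash translator is what is holding on to the deleted files (available since GlusterFS 3.7), the relevant knobs look roughly like this; the volume name and size limit are illustrative:

    # enable/limit the trash feature; kept files live under .trashcan on the volume
    gluster volume set myvolume features.trash on
    gluster volume set myvolume features.trash-max-filesize 500MB

    # old entries can then be removed through the mount point rather than the brick, e.g.
    # rm -rf /mnt/gluster/.trashcan/<path>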
16:00 susant joined #gluster
16:02 pdrakeweb joined #gluster
16:04 farhorizon joined #gluster
16:10 doc|work joined #gluster
16:13 doc|work _KaszpiR_, thanks, I saw how to do CAP_SYS_ADMIN under kubernetes, but while looking into how to do an equivalent to docker's --device I saw the privileged flag. Is that what you're talking about? Just going back to --device with CAP_SYS_ADMIN?
16:14 doc|work btw, does docker still use lxc? I vaguely remember a comment on a thread over the last couple of days saying it doesn't.
16:17 MrAbaddon joined #gluster
16:17 doc|work not supported in kubernetes? https://github.com/kubernetes/kubernetes/issues/5607
16:17 glusterbot Title: add support for host devices · Issue #5607 · kubernetes/kubernetes · GitHub (at github.com)
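For comparison, adding just the capability on the Kubernetes side would be a securityContext like the hypothetical fragment below; per the issue above there was no equivalent of docker's --device at the time, which is why the privileged: true route came up:

    # container-level securityContext granting CAP_SYS_ADMIN without full privileged mode
    securityContext:
      capabilities:
        add:
        - SYS_ADMIN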
16:20 jstrunk joined #gluster
16:21 mahdi_adnan left #gluster
16:28 Gambit15 joined #gluster
16:34 _KaszpiR_ idk, I don't use kube
16:34 gyadav_ joined #gluster
16:34 nbalacha joined #gluster
16:35 gyadav joined #gluster
16:35 baber joined #gluster
16:35 buvanesh_kumar joined #gluster
16:39 pdrakeweb joined #gluster
16:41 baber joined #gluster
16:43 doc|work _KaszpiR_, ok, thanks
17:03 nbalacha joined #gluster
17:10 rafi1 joined #gluster
17:17 susant joined #gluster
17:23 rastar joined #gluster
17:37 alvinstarr joined #gluster
17:37 _KaszpiR_ joined #gluster
17:38 nbalacha joined #gluster
17:47 farhoriz_ joined #gluster
17:49 shyam joined #gluster
17:51 susant left #gluster
17:56 edong23 joined #gluster
18:05 nirokato joined #gluster
18:17 riyas joined #gluster
18:19 shyam joined #gluster
18:45 derjohn_mob joined #gluster
18:54 JoeJulian doc|work: docker's built on an lxc foundation, but it's evolved well beyond those humble roots.
18:55 doc|work hi JoeJulian, looks like support for it was removed in 1.10, though: http://news.softpedia.com/news/docker-1-10-linux-container-engine-brings-over-100-changes-removes-lxc-support-499945.shtml
18:55 glusterbot Title: Docker 1.10 Linux Container Engine Brings over 100 Changes, Removes LXC Support (at news.softpedia.com)
19:00 JoeJulian Oh, doc|work, I just scrolled back to see what you're working on. Have you looked at http://docs.ganeti.org/ganeti/2.15/html/design-glusterfs-ganeti-support.html ?
19:00 glusterbot Title: GlusterFS Ganeti support — Ganeti 2.15.2 documentation (at docs.ganeti.org)
19:03 doc|work JoeJulian, sorry, this isn't my area. Have looked at ganeti.org and got a quick idea of what it is, but not sure what that page gives me. Can you clarify?
19:04 doc|work Are you suggesting to replace kubernetes with Ganeti?
19:05 doc|work I've actually gotten it to mount now, so... yey! Just someone suggested there might be a more locked down way of doing the same thing.
19:05 doc|work I'm working on something else at the moment so haven't had a chance to dig into the kubernetes way of doing that same thing
19:22 jbrooks joined #gluster
19:24 farhorizon joined #gluster
19:32 farhoriz_ joined #gluster
19:59 shyam joined #gluster
20:00 rastar joined #gluster
20:04 JoeJulian doc|work: ganeti works on top of kubernetes to provide gluster volumes as persistent storage to kubernetes pods.
20:04 doc|work JoeJulian, ok, thanks. I'll look into it a bit more then.
20:05 JoeJulian The folks downstairs from me, rook.io, also have a product to provide persistent storage to kubernetes. Currently they use ceph, but they're talking about adding gluster support.
20:06 JoeJulian s/product/project/
20:06 glusterbot What JoeJulian meant to say was: The folks downstairs from me, rook.io, also have a project to provide persistent storage to kubernetes. Currently they use ceph, but they're talking about adding gluster support.
20:06 doc|work thanks, will look into that too.
20:07 doc|work Trying to do this right is a Pandora's box of new stuff to learn
20:07 JoeJulian That's the best part of being on the bleeding edge. There's no one right way so you can't do it wrong.
20:08 doc|work yeah, but as someone who's trying to just build a quick app and had someone offer kubernetes as a way to solve a problem I was hitting with docker, I'm deep in the rabbit hole right now. It's a bit counterproductive. At some stage I actually need to build the app :)
20:09 doc|work I'll give these a read and decide if they add anything. Thanks!
20:11 gyadav joined #gluster
20:11 gyadav_ joined #gluster
20:12 _KaszpiR_ :D
21:09 farhorizon joined #gluster
22:17 arpu_ joined #gluster
22:17 arpu_ joined #gluster
