
IRC log for #gluster, 2017-08-08


All times shown according to UTC.

Time Nick Message
00:07 vbellur joined #gluster
00:20 baojg joined #gluster
00:30 kramdoss_ joined #gluster
00:51 daMaestro joined #gluster
01:22 baojg joined #gluster
01:26 bwerthmann joined #gluster
01:52 ilbot3 joined #gluster
01:52 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:52 kpease joined #gluster
01:56 ic0n joined #gluster
02:14 amosbird hello
02:14 glusterbot amosbird: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer
02:15 amosbird I have 4 machines with 8 disks each
02:15 amosbird is strip 8 replica 2 ok ?
02:18 _KaszpiR_ joined #gluster
02:19 jarbod_ joined #gluster
02:19 samppah joined #gluster
02:25 ankitr joined #gluster
02:25 MrAbaddon joined #gluster
02:30 WebertRLZ joined #gluster
02:42 amosbird should I use strip to store large files?
02:53 ppai joined #gluster
02:58 msvbhat joined #gluster
03:07 plarsen joined #gluster
03:12 vaboston joined #gluster
03:12 itisravi joined #gluster
03:27 baojg joined #gluster
03:30 amosbird hmm
03:30 amosbird I cannot remove a non-empty directory in glusterfs
03:49 riyas joined #gluster
04:03 jiffin joined #gluster
04:08 aravindavk joined #gluster
04:11 susant joined #gluster
04:25 bwerthmann joined #gluster
04:25 plarsen joined #gluster
04:33 skumar joined #gluster
04:36 sanoj joined #gluster
04:40 baojg joined #gluster
04:43 JoeJulian ~stripe | amosbird
04:43 glusterbot amosbird: (#1) Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes., or (#2) The stripe translator is deprecated. Consider enabling sharding instead.
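
To make glusterbot's suggestion concrete: instead of a stripe volume, large files can be sharded across an ordinary distributed-replicated volume. A minimal sketch for a 4-machine, 8-disk layout like amosbird's — the volume name "bigvol", the hostnames server1..server4, and the brick paths are all hypothetical:

    # Distributed-replicated volume, replica 2, one brick per disk
    # (only the first two disks per server shown; repeat for disks 3..8):
    gluster volume create bigvol replica 2 \
        server1:/bricks/disk1 server2:/bricks/disk1 \
        server3:/bricks/disk1 server4:/bricks/disk1 \
        server1:/bricks/disk2 server2:/bricks/disk2 \
        server3:/bricks/disk2 server4:/bricks/disk2
    # Shard large files instead of striping them:
    gluster volume set bigvol features.shard on
    gluster volume set bigvol features.shard-block-size 64MB
    gluster volume start bigvol
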
04:43 kramdoss_ joined #gluster
04:48 atinm joined #gluster
04:48 hgowtham joined #gluster
05:00 nbalacha joined #gluster
05:02 karthik_us joined #gluster
05:06 gyadav joined #gluster
05:08 buvanesh_kumar joined #gluster
05:13 ndarshan joined #gluster
05:22 msvbhat joined #gluster
05:22 apandey joined #gluster
05:23 rafi1 joined #gluster
05:27 nbalacha joined #gluster
05:32 mahendratech joined #gluster
05:35 Saravanakmr joined #gluster
05:39 susant joined #gluster
05:41 skoduri joined #gluster
05:43 ppai joined #gluster
05:44 baojg joined #gluster
05:48 ahino joined #gluster
05:58 rafi joined #gluster
06:02 jyasveer joined #gluster
06:02 prasanth joined #gluster
06:04 rafi2 joined #gluster
06:06 jtux joined #gluster
06:13 rafi joined #gluster
06:13 rtnpro joined #gluster
06:13 rtnpro Humble, Hi
06:14 rtnpro I want to set up a SAN using gluster in my private cloud to use it as a persistent backend for a kubernetes cluster
06:14 rtnpro Humble, can you point me to the steps of setting up a SAN using gluster?
06:16 rafi joined #gluster
06:19 kotreshhr joined #gluster
06:25 Kassandry joined #gluster
06:37 bartden joined #gluster
06:38 bartden Hi, Can i combine Glusterfs with some form of local caching on a local disk, like cachefs?
06:45 baojg joined #gluster
06:47 msvbhat joined #gluster
07:00 jyasveer1 joined #gluster
07:03 jkroon joined #gluster
07:11 amosbird JoeJulian: hmm, it's 502 Bad Gateway
07:11 [diablo] joined #gluster
07:13 rtnpro nigelb, around? could you point me to resources where I can learn how to set up an HA gluster storage backend for kubernetes
07:15 nigelb rtnpro: Take a look at https://github.com/heketi/heketi/wiki/Usage-Guide
07:15 glusterbot Title: Usage Guide · heketi/heketi Wiki · GitHub (at github.com)
07:16 rtnpro nigelb, got that, thanks :)
07:16 nigelb Humble or rastar are the best people to ask.
07:16 jkroon_ joined #gluster
07:17 Humble nigelb,
07:17 Humble rtnpro, hey :)
07:17 rtnpro Humble, hey
07:18 Humble just noticed ur query
07:18 Humble so, if u want to use gluster as a PV and if you want to make use of it with dynamic provisioning
07:18 Humble u need to use heketi
07:18 Humble https://github.com/gluster/gluster-kubernetes
07:18 glusterbot Title: GitHub - gluster/gluster-kubernetes: GlusterFS + heketi on Kubernetes (at github.com)
07:19 Humble rtnpro, above repo is our upstream repo for configuring the same
07:19 rtnpro Humble, got that :)
07:19 Humble it has kube and openshift artifacts
07:20 rtnpro nigelb, Humble, I will try to set it up tonight, I will ping you if I run into some trouble :D
07:20 buvanesh_kumar joined #gluster
07:20 mbukatov joined #gluster
07:20 Humble sure rtnpro
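
For readers following rtnpro's thread: once heketi fronts the gluster cluster, Kubernetes provisions volumes dynamically through a StorageClass. A minimal sketch, assuming a hypothetical heketi endpoint and a pre-created secret holding the heketi admin key:

    # glusterfs-sc.yaml -- apply with: kubectl apply -f glusterfs-sc.yaml
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: glusterfs-storage
    provisioner: kubernetes.io/glusterfs
    parameters:
      resturl: "http://heketi.example.com:8080"  # hypothetical heketi URL
      restuser: "admin"
      secretName: "heketi-secret"                # hypothetical secret name
      secretNamespace: "default"
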
07:27 jkroon_ joined #gluster
07:37 edong23 joined #gluster
07:48 baojg joined #gluster
07:57 level7 joined #gluster
07:58 Saravanakmr joined #gluster
08:00 wiza joined #gluster
08:09 jkroon joined #gluster
08:41 Manikandan joined #gluster
08:52 msvbhat joined #gluster
09:02 jiffin1 joined #gluster
09:07 ashiq joined #gluster
09:13 poornima_ joined #gluster
09:24 jiffin1 joined #gluster
09:40 jyasveer1 joined #gluster
09:57 baojg joined #gluster
10:00 msvbhat joined #gluster
10:11 jkroon_ joined #gluster
10:15 fcami joined #gluster
10:17 mdavidson joined #gluster
10:22 jkroon_ joined #gluster
10:41 DV joined #gluster
10:42 atinm joined #gluster
10:47 kramdoss_ joined #gluster
10:56 itisravi joined #gluster
10:59 baojg joined #gluster
11:11 baber joined #gluster
11:29 baojg joined #gluster
11:33 atinm joined #gluster
11:40 TBlaar joined #gluster
11:42 msvbhat joined #gluster
11:42 thatgraemeguy joined #gluster
11:46 jyasveer2 joined #gluster
11:57 vbellur joined #gluster
12:00 vbellur joined #gluster
12:00 vbellur joined #gluster
12:02 ahino joined #gluster
12:03 vaboston joined #gluster
12:16 pdrakeweb joined #gluster
12:21 vbellur joined #gluster
12:22 vbellur joined #gluster
12:23 vbellur joined #gluster
12:23 vbellur joined #gluster
12:24 vbellur joined #gluster
12:28 vbellur joined #gluster
12:34 mbukatov joined #gluster
12:37 rwheeler joined #gluster
12:59 kpease joined #gluster
13:00 guhcampos joined #gluster
13:08 federicoaguirre joined #gluster
13:08 federicoaguirre Hi there! Is there any configuration parameter to improve the IO?
13:09 federicoaguirre I've configured performance.cache-size: 1GB
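
For context, I/O tuning like this is applied per volume with gluster volume set. A minimal sketch with a hypothetical volume name "myvol" (the values are illustrative, not tuned recommendations):

    gluster volume set myvol performance.cache-size 1GB
    gluster volume set myvol performance.io-thread-count 32
    gluster volume set myvol performance.write-behind-window-size 4MB
    # inspect the current settings:
    gluster volume get myvol all | grep performance
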
13:14 skumar joined #gluster
13:20 mbukatov joined #gluster
13:26 buvanesh_kumar joined #gluster
13:28 skylar joined #gluster
13:48 vbellur joined #gluster
13:57 JoeJulian amosbird: Thanks for letting me know. I really need to spend some time and get my blog off of that site.
14:16 Anarka joined #gluster
14:17 plarsen joined #gluster
14:19 Anarka joined #gluster
14:21 vbellur joined #gluster
14:21 Anarka joined #gluster
14:23 Anarka joined #gluster
14:24 federicoaguirre joined #gluster
14:41 baojg joined #gluster
14:42 major joined #gluster
14:42 baojg joined #gluster
14:43 baojg joined #gluster
14:44 baojg joined #gluster
14:48 susant joined #gluster
14:49 farhorizon joined #gluster
14:51 amosbird hello
14:51 glusterbot amosbird: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer
14:51 amosbird does glusterfs use page cache?
14:54 baojg joined #gluster
14:58 vbellur joined #gluster
15:02 DJClean hiya, currently running into an issue with 2 gluster nodes, it seems the SHD refuses to work on either node: https://paste.openttdcoop.org/pycgeohca/b1facl
15:02 glusterbot Title: #openttdcoop - Paste (at paste.openttdcoop.org)
15:02 DJClean and for some reason it doesn't keep the bricks in sync properly because of it (or at least heal tells me that all files are not in sync)
15:04 federicoaguirre hi there... on Ubuntu 16.04, what parameters can I modify in the kernel to get the best performance?
15:04 vbellur DJClean: looks like you have IPv6 enabled. gluster doesn't do IPv6 well, so you would need to disable that or force IPv4 resolution ahead of IPv6.
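
A minimal sketch of forcing IPv4 as vbellur suggests, assuming a stock install path:

    # Add this option inside the "volume management" block of
    # /etc/glusterfs/glusterd.vol on each server:
    #     option transport.address-family inet
    # then restart the management daemon:
    systemctl restart glusterd
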
15:07 wushudoin joined #gluster
15:07 DJClean well that's the fun thing, i read that, and tried that...
15:07 DJClean and it just shows the same, only with 127.0.0.2
15:07 nbalacha joined #gluster
15:08 DJClean 127.0.0.1*
15:19 vbellur DJClean: does gluster volume status list all bricks as running?
15:20 DJClean yep
15:20 amosbird ..
15:20 DJClean advantage we've got: it's a small dataset, so it's "easy" to set up again
15:27 baojg joined #gluster
15:35 bowhunter joined #gluster
15:35 federicoaguirre hi there.... Any performance tweak to change in the kernel (Ubuntu) to improve the perf?
15:40 msvbhat joined #gluster
15:42 cholcombe joined #gluster
15:51 baber joined #gluster
15:56 jiffin joined #gluster
16:14 msvbhat joined #gluster
16:16 Gambit15 joined #gluster
16:20 JoeJulian amosbird: yes and no. The servers, of course, use page cache. FUSE clients do not (FUSE in general does not). The client has its own caching built in.
16:20 JoeJulian amosbird: You really wouldn't want page caching in the kernel anyway. How would it know when another client changed a file? You need some form of remote cache invalidation.
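
The remote cache invalidation JoeJulian mentions exists as of the GlusterFS 3.8 era: server-side upcalls that the client's md-cache can honor. A minimal sketch with a hypothetical volume name "myvol":

    gluster volume set myvol features.cache-invalidation on
    gluster volume set myvol features.cache-invalidation-timeout 600
    gluster volume set myvol performance.cache-invalidation on  # md-cache honors upcalls
    gluster volume set myvol performance.stat-prefetch on
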
16:21 JoeJulian DJClean: debian?
16:22 baber joined #gluster
16:23 DJClean nop, centos
16:23 DJClean 7
16:23 JoeJulian @pasteinfo
16:23 glusterbot JoeJulian: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
16:24 DJClean hold on, will need to fire up the laptop for that :)
16:24 JoeJulian or your own paste site
16:24 JoeJulian speaking of... the cert is showing as invalid.
16:24 DJClean guess i need to poke the person that got us the cert :)
16:25 DJClean hmmm...
16:25 DJClean it is valid... till jan
16:25 DJClean why it showing invalid for you? chain?
16:25 JoeJulian yes, chain.
16:25 DJClean ah, should be easy fix then
16:25 DJClean will check later, thnx
16:25 JoeJulian :+1:
16:28 baojg joined #gluster
16:30 DJClean https://paste.openttdcoop.org/pd26rrrel/jmcqz2
16:31 glusterbot Title: #openttdcoop - Paste (at paste.openttdcoop.org)
17:03 JoeJulian DJClean: Well that wasn't it... Do both hostnames resolve correctly from both servers?
17:03 skoduri joined #gluster
17:04 kraynor5b joined #gluster
17:08 DJClean JoeJulian: yep they do, as far as i'm aware it worked before without problems (we have like 3-4 other clusters working without problem)
17:09 JoeJulian oh, centos... selinux/
17:09 JoeJulian ?
17:09 DJClean we are currently at the point considering wiping the gluster config, backup data and set gluster up from scratch
17:09 DJClean tried setenforce 0 already
17:09 DJClean but other clusters also have it enabled without issues :)
17:10 DJClean we have learned to embrace selinux to some extent, even though it comes with pain :)
17:10 JoeJulian yeah, but if selinux blocked the server from getting the port, you may have to restart the volume.
17:10 JoeJulian I totally agree. I encourage selinux. Just trying to troubleshoot.
17:10 DJClean i know :)
17:11 DJClean can try again, as i know no-one is working on it now :)
17:12 JoeJulian Another test would be to remove the hostname from the localhost addresses.
17:12 JoeJulian in /etc/hosts, of course.
17:14 DJClean nop wasn't selinux :)
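
For anyone retracing this: a quick way to rule selinux in or out, sketched with a hypothetical volume name "myvol" (requires the audit and policycoreutils tools):

    ausearch -m avc -ts recent       # recent selinux denials, if any
    semanage port -l | grep gluster  # port labels the policy expects
    # per JoeJulian, a blocked port may require a volume restart:
    gluster volume stop myvol && gluster volume start myvol
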
17:16 jiffin joined #gluster
17:17 DJClean ok... seems removing it from the hosts file didn't make it kill itself....
17:18 DJClean and error not showing up.....
17:19 DJClean JoeJulian: does look like that did it... now to figure out why it was in there... cause i guess puppet will put it back :)
17:20 DJClean and also indeed disable ipv6
17:20 DJClean seems i'll be having a fun day tomorrow :)
17:22 msvbhat joined #gluster
17:24 MikeLupe joined #gluster
17:30 JoeJulian :)
17:30 amosbird hi, why do I get this error https://la.wentropy.com/9MLg  when removing a directory in glusterfs?
17:30 JoeJulian Somehow something got abandoned in that directory on one of the bricks.
17:31 JoeJulian If I'm guessing, it would be a dht-link file. If it is, it'll be mode 1000 with an ,,(extended attribute) trusted.glusterfs.dht-linkto
17:31 glusterbot To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}
17:32 JoeJulian (though I would drop the `-e hex` if you want to read the text of the dht-linkto attribute)
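
A minimal sketch of hunting for the leftover JoeJulian describes: dht link files are zero-byte, sticky-bit (mode 1000) entries on the bricks, and the attribute usually appears under the dotted name trusted.glusterfs.dht.linkto. The brick path and directory here are hypothetical:

    # run as root on each brick server:
    find /data/brick1/path/to/dir -type f -perm -1000 -size 0c \
        -exec getfattr -m . -d -e hex {} \;
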
17:34 jkroon joined #gluster
17:34 baojg joined #gluster
17:38 amosbird JoeJulian: um....
17:38 amosbird so how can I fix it
17:39 JoeJulian find the file, decide if it's valuable to you, back it up if it is, delete it if it's not
17:39 amosbird how can I find the file then?
17:39 amosbird hmm https://la.wentropy.com/Kgno
17:39 amosbird it's empty
17:40 JoeJulian You missed the line that said, "on one of the bricks".
17:40 amosbird oh ....
17:42 amosbird well, those bricks all contain some files
17:42 amosbird do I need to delete them all?
17:42 amosbird I have 45 bricks
17:42 amosbird strip 15 * rep 3
17:42 major joined #gluster
17:42 JoeJulian I don't know. I don't know what those files are or why they were left behind.
17:43 JoeJulian I would check the extended attributes, permissions, and possibly the contents. If I had any doubt, I'd copy them off somewhere before deleting them.
17:44 JoeJulian amosbird: I assume by strip you mean stripe; that's only recommended for a very small subset of use cases, and even then the use of ,,(stripe) is deprecated.
17:44 glusterbot amosbird: (#1) Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes., or (#2) The stripe translator is deprecated. Consider enabling sharding instead.
17:45 amosbird JoeJulian: well, i have already placed so much data into this gluster cluster
17:45 amosbird can I switch to sharding automaticallys
17:45 JoeJulian No, you would have to build a new volume.
17:46 JoeJulian It may not be feasible, but it's something to put as a goal.
17:46 amosbird ok
17:46 amosbird how does stripe work in gluster?
17:47 JoeJulian see that blog post
17:51 jkroon joined #gluster
17:57 farhorizon joined #gluster
18:07 Jacob843 joined #gluster
18:10 farhorizon joined #gluster
18:22 ic0n joined #gluster
18:24 rafi1 joined #gluster
18:32 ahino joined #gluster
18:33 kraynor5b Can someone tell me where the configuration file that records whether ganesha is "enabled" in gluster is located?
18:34 kraynor5b I had this information but unfortunately I lost it
18:35 [diablo] joined #gluster
18:37 kraynor5b Ah, nevermind I think I found it... /var/lib/glusterd/options
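
For the record, that file is a plain key=value store; a minimal sketch of checking and toggling the setting, assuming the 3.7-era ganesha integration:

    grep nfs-ganesha /var/lib/glusterd/options  # e.g. nfs-ganesha=enable
    gluster nfs-ganesha disable                 # or: gluster nfs-ganesha enable
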
18:38 major joined #gluster
19:05 ThHirsch joined #gluster
19:28 kpease joined #gluster
19:29 vbellur joined #gluster
19:30 vbellur joined #gluster
19:30 msvbhat joined #gluster
19:31 vbellur1 joined #gluster
19:31 vbellur joined #gluster
19:32 vbellur1 joined #gluster
19:33 vbellur joined #gluster
19:35 kraynor5b joined #gluster
19:37 baojg joined #gluster
19:48 vbellur joined #gluster
19:55 vbellur joined #gluster
19:56 vbellur joined #gluster
19:58 bwerthmann joined #gluster
20:38 baojg joined #gluster
20:57 farhoriz_ joined #gluster
21:38 baojg joined #gluster
22:40 baojg joined #gluster
22:40 j4z joined #gluster
22:42 j4z Hi - I'm running some tests to see how we can use Gluster to replicate some data. Does Gluster have a maximum number of replication bricks a volume can have? My experiments suggest it does: 16
22:50 crag joined #gluster
22:52 j4z I've searched the internet and can't find any mention of a maximum. But every time I try to add a 17th brick, the gluster mount stops showing files and operations get an input/output error.
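
What j4z describes is presumably an incremental increase of the replica count; a sketch of the command form involved, with hypothetical volume, server, and brick names:

    # grow a pure replica volume from 16 to 17 copies:
    gluster volume add-brick myvol replica 17 server17:/data/brick
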
23:21 JoeJulian 16??? holy cow that would be ... interesting.
23:21 JoeJulian How much resiliency do you actually need?
23:23 j4z We are trying to use gluster to keep several read-only shares up to date - updated by a single server. Not really what gluster is made for, but could work well for us if the numbers are there.
23:24 j4z We will want at least 40
23:25 JoeJulian You should use georeplication.
23:26 j4z Oh yeah? We're hoping to have them sync within a second of changes on the master.
23:26 JoeJulian Hmm, not sure on that.
23:26 JoeJulian But replica is certainly not designed for that.
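
A minimal sketch of the geo-replication JoeJulian suggests, assuming a master volume "mastervol" and a slave volume "slavevol" on "slavehost" (all hypothetical). Note that geo-replication is asynchronous, so the one-second freshness j4z wants is not guaranteed:

    # requires passwordless root SSH from the master to slavehost first:
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status
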
23:27 JoeJulian I suppose you could write your own volfile and tree out replicas. I'm assuming you're planning on reading directly from the bricks?
23:27 j4z Yeah. It was weirdly the simplest setup I could quickly implement (assuming it did). Things like btsync (or whatever they're calling it now) and rsync require a lot of setup or in some cases money. Gluster is so simple to set up.
23:27 h4rry joined #gluster
23:28 j4z Yeah, reading directly from bricks.
23:28 JoeJulian You'll lose a lot of the live-management capabilities if you write your own volfile, but it could work.
23:29 j4z Hmm. I don't know a lot about things that deep. I wonder if that would be simpler than my other alternatives (rsync, btsync, etc.).
23:31 JoeJulian They all seem pretty simple from my perspective. If I were doing it, I would probably let the remotes pull content from some source (s3 maybe) and use a message bus (rabbitmq probably) to inform them when it's time to pull.
23:32 j4z Yeah, that's the "right" way to do it for sure. I was just hoping Gluster could give me a quick win.
23:33 j4z But 16. Man.
23:33 JoeJulian I don't know.. I've never heard of anyone replicating further.
23:33 JoeJulian I don't know of any hard limits.
23:34 j4z I'm just glad we hadn't decided to use it yet. All of our small-scale tests worked great. It's only after we hit that 16 limit that we saw a problem. Would have been a much bigger problem if we had started with 14 servers expecting it to work forever.
