
IRC log for #gluster, 2016-03-31


All times shown according to UTC.

Time Nick Message
00:01 haomaiwa_ joined #gluster
00:14 sage joined #gluster
00:25 nathwill joined #gluster
00:37 cliluw joined #gluster
00:49 bwerthmann joined #gluster
01:01 haomaiwa_ joined #gluster
01:11 plarsen joined #gluster
01:20 calavera joined #gluster
01:25 coredump joined #gluster
01:26 Lee1092 joined #gluster
01:37 vmallika joined #gluster
01:46 baojg joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:50 vmallika joined #gluster
01:58 Javezim joined #gluster
01:59 Javezim pasted.co/86397482 ---- We are using Gluster with Samba, yet samba keeps crashing on us when we are reading/writing a file with the error shown in the pasted link
01:59 glusterbot Javezim: --'s karma is now -4
01:59 Javezim Anyone seen this before and know how to fix?
02:07 atrius joined #gluster
02:25 kshlm joined #gluster
02:29 nehar joined #gluster
02:35 ronrib joined #gluster
02:38 haomaiwa_ joined #gluster
02:41 jhyland joined #gluster
02:49 Javezim joined #gluster
02:50 DV joined #gluster
03:01 baojg joined #gluster
03:01 haomaiwa_ joined #gluster
03:02 DV joined #gluster
03:05 camg joined #gluster
03:15 nehar joined #gluster
03:25 carnil joined #gluster
03:26 necrogami joined #gluster
03:28 overclk joined #gluster
03:29 kaushal_ joined #gluster
03:29 marlinc_ joined #gluster
03:29 wistof_ joined #gluster
03:31 lord4163_ joined #gluster
03:33 sagarhan| joined #gluster
03:34 crashmag joined #gluster
03:36 atinm joined #gluster
03:36 steveeJ joined #gluster
03:41 jhyland joined #gluster
03:44 ramteid joined #gluster
03:50 aspandey joined #gluster
03:51 ronrib joined #gluster
03:52 nehar joined #gluster
03:55 nbalacha joined #gluster
03:56 shubhendu joined #gluster
04:01 haomaiwa_ joined #gluster
04:01 itisravi joined #gluster
04:06 ashiq_ joined #gluster
04:08 Javezim joined #gluster
04:08 nehar joined #gluster
04:11 Bhaskarakiran joined #gluster
04:12 kshlm joined #gluster
04:27 jiffin joined #gluster
04:27 sakshi joined #gluster
04:33 EinstCrazy joined #gluster
04:34 calavera joined #gluster
04:35 Bhaskarakiran joined #gluster
04:35 gem joined #gluster
04:38 aravindavk joined #gluster
04:39 skoduri joined #gluster
04:47 karthik___ joined #gluster
04:48 Manikandan joined #gluster
04:48 theron joined #gluster
04:49 prasanth joined #gluster
04:50 ndarshan joined #gluster
04:52 vmallika joined #gluster
04:57 EinstCra_ joined #gluster
05:01 haomaiwa_ joined #gluster
05:05 aravindavk joined #gluster
05:07 kotreshhr joined #gluster
05:09 ppai joined #gluster
05:10 camg joined #gluster
05:16 Bhaskarakiran joined #gluster
05:21 DV joined #gluster
05:22 hgowtham joined #gluster
05:22 Bhaskarakiran joined #gluster
05:30 DV joined #gluster
05:30 poornimag joined #gluster
05:31 Apeksha joined #gluster
05:37 nishanth joined #gluster
05:45 gowtham joined #gluster
05:46 ramky joined #gluster
05:51 DV joined #gluster
05:59 karnan joined #gluster
05:59 anil_ joined #gluster
06:00 baojg joined #gluster
06:01 haomaiwa_ joined #gluster
06:05 Gaurav_ joined #gluster
06:07 rafi1 joined #gluster
06:08 ahino joined #gluster
06:09 kore left #gluster
06:10 nehar joined #gluster
06:13 beeradb_ joined #gluster
06:13 atalur joined #gluster
06:16 karnan joined #gluster
06:21 ramky joined #gluster
06:23 kdhananjay joined #gluster
06:26 skoduri joined #gluster
06:30 RameshN joined #gluster
06:30 rouven joined #gluster
06:31 rwheeler joined #gluster
06:33 spalai joined #gluster
06:35 DV joined #gluster
06:35 spalai joined #gluster
06:37 [Enrico] joined #gluster
06:38 Gaurav_ joined #gluster
06:38 mhulsman joined #gluster
06:40 anti[Enrico] joined #gluster
06:43 Bhaskarakiran joined #gluster
06:49 coredump joined #gluster
06:50 unlaudable joined #gluster
06:55 vmallika joined #gluster
06:56 om joined #gluster
06:56 Saravanakmr joined #gluster
07:01 haomaiwa_ joined #gluster
07:08 mhulsman1 joined #gluster
07:10 itisravi joined #gluster
07:31 kshlm joined #gluster
07:38 ramky joined #gluster
07:42 ctria joined #gluster
07:47 kdhananjay joined #gluster
07:50 rouven joined #gluster
07:52 Bhaskarakiran joined #gluster
07:54 ahino joined #gluster
07:57 fsimonce joined #gluster
07:58 hackman joined #gluster
07:59 sakshi joined #gluster
08:01 haomaiwa_ joined #gluster
08:01 ivan_rossi joined #gluster
08:02 ivan_rossi left #gluster
08:02 jri joined #gluster
08:04 sjohnsen left #gluster
08:05 rwheeler joined #gluster
08:07 dariol joined #gluster
08:10 Wizek joined #gluster
08:10 om2 joined #gluster
08:10 aravindavk joined #gluster
08:13 madnexus_ joined #gluster
08:13 jri_ joined #gluster
08:23 Saravanakmr_ joined #gluster
08:47 kotreshhr joined #gluster
08:50 johnmilton joined #gluster
08:53 MrAbaddon joined #gluster
08:54 Saravanakmr joined #gluster
08:54 Wizek joined #gluster
09:01 haomaiwa_ joined #gluster
09:09 Rasathus joined #gluster
09:09 mhulsman joined #gluster
09:20 harish__ joined #gluster
09:33 TvL2386 joined #gluster
09:39 ahino joined #gluster
10:01 haomaiwa_ joined #gluster
10:01 aspandey joined #gluster
10:11 d0nn1e joined #gluster
10:19 Bhaskarakiran joined #gluster
10:36 marbu joined #gluster
10:40 mhulsman joined #gluster
10:46 sakshi joined #gluster
10:49 caitnop joined #gluster
10:53 kasturi joined #gluster
11:00 overclk joined #gluster
11:01 haomaiwa_ joined #gluster
11:06 rafi1 joined #gluster
11:06 arcolife joined #gluster
11:10 spalai joined #gluster
11:11 ahino joined #gluster
11:15 robb_nl joined #gluster
11:16 gem joined #gluster
11:16 prasanth joined #gluster
11:16 nehar joined #gluster
11:17 beeradb_ joined #gluster
11:22 julim_ joined #gluster
11:22 hackman joined #gluster
11:22 madnexus__ joined #gluster
11:23 David_H_Smith joined #gluster
11:23 lkoranda joined #gluster
11:23 foster joined #gluster
11:24 [o__o] joined #gluster
11:26 kkeithley joined #gluster
11:38 johnmilton joined #gluster
11:40 karnan joined #gluster
11:42 ira_ joined #gluster
11:48 Apeksha joined #gluster
11:55 gem_ joined #gluster
11:57 Debloper joined #gluster
11:57 unclemarc joined #gluster
12:01 haomaiwa_ joined #gluster
12:08 kdhananjay joined #gluster
12:08 bwerthmann joined #gluster
12:13 marbu joined #gluster
12:14 shaunm joined #gluster
12:15 atalur joined #gluster
12:15 spalai1 joined #gluster
12:15 karnan joined #gluster
12:19 EinstCrazy joined #gluster
12:28 dscastro joined #gluster
12:28 dscastro hello..  what if i need to run hundreds of volumes? is there a limit for it?
12:32 ashiq_ joined #gluster
12:34 ramky joined #gluster
12:35 moss joined #gluster
12:35 DV joined #gluster
12:36 nishanth joined #gluster
12:37 DV joined #gluster
12:41 mhulsman joined #gluster
12:44 ira_ joined #gluster
12:46 hamiller joined #gluster
12:46 shaunm joined #gluster
12:48 kkeithley dscastro: no, there's no limit in gluster. There might be limits in what your hardware can do.
12:49 dscastro kkeithley: yeah.. looks like firing up hundreds of them consumes a lot of memory
12:51 Gnomethrower joined #gluster
12:51 kkeithley It is possible to hand craft a vol file for the server that serves/exports multiple volumes from a single glusterfsd.  See, e.g. HekaFS from a few years ago.
12:52 madnexus joined #gluster
12:56 dscastro kkeithley: ohh.. nice
13:00 shyam joined #gluster
13:05 hchiramm joined #gluster
13:05 hchiramm_ joined #gluster
13:06 atinm joined #gluster
13:11 chirino joined #gluster
13:11 spalai1 left #gluster
13:12 dlambrig_ joined #gluster
13:12 shubhendu joined #gluster
13:14 nishanth joined #gluster
13:15 karnan joined #gluster
13:15 RameshN joined #gluster
13:16 ndarshan joined #gluster
13:16 dscastro kkeithley: besides hekafs, do you have a guide for this kind of setup?
13:19 tswartz joined #gluster
13:21 kkeithley I don't. You can look at old docs, pre-3.2 I think, that describe writing volfiles. (back before gluster/glusterd did it for you).
13:21 kshlm Pre 3.1
13:22 kshlm GlusterD started generating volfiles in 3.1
13:23 kkeithley yeah, what kshlm said. ;-)
13:25 dscastro kkeithley: kshlm ok.. tks
13:26 dscastro so you think it is possible to have one volume and multiple exports on it?
13:26 kshlm dscastro, For glusterfs a volume is an export.
13:27 kshlm But glusterfs can serve multiple volumes from a single glusterfsd process.
13:27 dscastro kshlm: that's my case :)
13:27 kshlm Users could do this pre glusterfs-3.1, when they would write their own volfiles.
13:28 kshlm But glusterd cannot generate concatenated volfiles (for the lack of a better term) which can serve multiple volumes from a single process.
13:28 dscastro kshlm: oh.. so i have to use pre 3.1 ?
13:29 kshlm The support still exists in glusterfsd though.
13:29 dscastro so it can't generate it, but it understands it
13:29 kshlm Yup.
13:30 hagarth joined #gluster
13:30 kshlm We're aiming to get this support for glusterfs-4.0 though.
13:30 dscastro kshlm: ok..  one more question
13:30 dlambrig_ joined #gluster
13:30 Lee1092 joined #gluster
13:31 dscastro kshlm: once i have my hand crafted vol file, how do i import it to gluster servers
13:31 kshlm you can't import it into the volumes glusterd handles.
13:31 kshlm You'll have to manually manage those processes.
13:32 theron joined #gluster
13:32 dscastro kshlm: not sure if i follow you
13:35 kshlm You'll have to start a glusterfsd process yourself by pointing it to the vol file, `glusterfsd --volfile=<path>`
13:36 kshlm And you'll have to manually distribute the volfiles to all the servers and clients.
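A minimal sketch of what such a hand-written server volfile might look like, serving two exports from one glusterfsd process in the pre-3.1 style kkeithley and kshlm describe. All file names, volume names, and brick paths below are hypothetical, not taken from the discussion:

    # /etc/glusterfs/multi-export.vol -- hypothetical hand-written server volfile
    volume posix-a
      type storage/posix
      option directory /data/brick-a        # backing directory for the first export
    end-volume

    volume locks-a
      type features/locks
      subvolumes posix-a
    end-volume

    volume posix-b
      type storage/posix
      option directory /data/brick-b        # backing directory for the second export
    end-volume

    volume locks-b
      type features/locks
      subvolumes posix-b
    end-volume

    volume server
      type protocol/server
      option transport-type tcp
      option auth.addr.locks-a.allow *      # which clients may attach to each export
      option auth.addr.locks-b.allow *
      subvolumes locks-a locks-b            # one glusterfsd process, two exports
    end-volume

It would then be started by hand on each server, roughly as kshlm says:

    glusterfsd --volfile=/etc/glusterfs/multi-export.vol

Clients would need matching hand-written client volfiles whose protocol/client translator points at the right remote-subvolume, which is the manual distribution step mentioned above.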
13:37 dscastro kshlm: ohh... that's bad
13:37 Gnomethrower joined #gluster
13:37 kshlm That was how it was being done previous to glusterfs-3.1
13:39 kshlm Improving scalability is one of the things we're aiming for in 4.0, and this is a scalability problem.
13:39 kshlm But the earliest 4.0 will arrive is the end of this year.
13:39 dscastro kshlm: ok.. i'll take a look on it
13:40 goretoxo joined #gluster
13:40 kshlm dscastro, Just test glusterfs out.
13:40 dscastro kshlm: i am
13:40 kshlm I've not generally seen setups where users try to run hundreds of volumes on a very small cluster.
13:41 dscastro kshlm: but my requirements are very strict (i own a cloud provider which offers openshift )
13:41 kshlm The larger the cluster, the larger the number of volumes, and they get distributed among the cluster.
13:41 kshlm So you're trying to provide storage to openshift?
13:41 dscastro yeah
13:42 dscastro it does work very well
13:42 kshlm So you want to provide a unique share to each gear (what do they call them now?)
13:42 dscastro kshlm: pods ( its docker based)
13:42 dscastro docker / kubernetes
13:42 dscastro each volume (kubernetes) points to an export or volume (gluster)
13:43 kshlm There is some work going on in this direction for 3.8, which should be out in May/June
13:43 kshlm Part of that is supporting sub-directory mounts with native glusterfs volumes. This plus some access control can provide separate shares for many clients.
13:44 dscastro kshlm: ohhh.. nice !
13:44 kshlm You could also use NFS mounts right now to get the same.
13:44 kshlm Either the inbuilt gluster-nfs server or the nfs-ganesha integration can do it.
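For the gluster-nfs route, a hedged sketch of giving each pod its own share out of a single volume by exporting subdirectories; the volume name, directory, and client address below are made up for illustration:

    # allow subdirectory exports on the (hypothetical) volume gv0,
    # and restrict one subdirectory to one client address
    gluster volume set gv0 nfs.export-dirs on
    gluster volume set gv0 nfs.export-dir "/pods/pod-42(10.0.200.5)"

    # on the client host, mount just that subdirectory over NFSv3
    mount -t nfs -o vers=3 gluster-server:/gv0/pods/pod-42 /mnt/pod-42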
13:44 haomaiwa_ joined #gluster
13:45 dscastro kshlm: yeah i know, but i lose HA/LB in the case of gluster-nfs, right?
13:45 ppai joined #gluster
13:45 plarsen joined #gluster
13:45 DV__ joined #gluster
13:46 kshlm Yup.
13:47 dscastro ganesha does seems to be the right choice, but i'm the management over read makes me suffer, there's to many moving parts
13:47 dscastro s/over read/overhead/
13:47 glusterbot What dscastro meant to say was: ganesha does seems to be the right choice, but i'm the management overhead makes me suffer, there's to many moving parts
13:47 kshlm there's been a lot of work in getting ganesha to work seamlessly with gluster.
13:47 ashiq_ joined #gluster
13:48 kshlm Gluster can automatically set up and manage ganesha shares.
13:49 kshlm ndevos (who's surprisingly not around right now) can help with that.
13:49 dscastro kshlm: what about the HA work? i found a doc that shows how to use it + Pacemaker
13:49 dscastro cool... i'll ping him later
13:49 kshlm dscastro, Not much idea about that. Again ndevos should be able to get you answers.
13:50 dscastro kshlm: regarding 3.8 work, what is the best place to keep track of it ?
13:51 kshlm https://www.gluster.org/community/roadmap/3.8/
13:51 glusterbot Title: Release Schedule Gluster (at www.gluster.org)
13:52 kshlm You can follow this page. It should be kept updated.
13:53 kkeithley setting up ganesha can seem daunting, but it's not that hard.  I have a blog post about it at http://blog.gluster.org/2015/10/linux-scale-out-nfsv4-using-nfs-ganesha-and-glusterfs-one-step-at-a-time/
13:53 glusterbot Title: Linux scale out NFSv4 using NFS-Ganesha and GlusterFS — one step at a time | Gluster Community Website (at blog.gluster.org)
13:55 kshlm Ah I forgot, kkeithley can help as well
13:55 kshlm dscastro, ^
13:56 jhyland joined #gluster
13:56 dscastro kkeithley: i had it earlier today, so pacemaker is definitely required right?
13:57 dscastro i'll give it a try... looks like it is the best option (for now)
13:58 kkeithley You can probably build your own HA solution using something else besides pacemaker, but using pacemaker is our preferred solution.
13:59 nbalacha joined #gluster
14:00 dscastro kkeithley: since i run on Azure, i'll look for the options :)
14:00 moss joined #gluster
14:01 skoduri joined #gluster
14:01 haomaiwa_ joined #gluster
14:05 kotreshhr joined #gluster
14:07 nehar joined #gluster
14:08 plarsen joined #gluster
14:08 ndarshan joined #gluster
14:09 nishanth joined #gluster
14:10 camg joined #gluster
14:17 shyam joined #gluster
14:21 Manikandan joined #gluster
14:22 nathwill joined #gluster
14:23 techsenshi joined #gluster
14:26 Wojtek [2016-03-31 14:26:12.240396] D [MSGID: 0] [dht-common.c:2356:dht_lookup] 0-gv0-dht: Calling fresh lookup for /files/fe/92 on gv0-replicate-0
14:26 Wojtek [2016-03-31 14:26:12.240850] D [MSGID: 0] [dht-common.c:1944:dht_lookup_cbk] 0-gv0-dht: fresh_lookup returned for /files/fe/92 with op_ret 0
14:27 Wojtek I have these debug messages that loop endlessly and glusterfsd eats up 100% cpu and makes the cluster unusable
14:29 Wojtek https://paste.fedoraproject.org/347864/59434555/
14:29 glusterbot Title: #347864 Fedora Project Pastebin (at paste.fedoraproject.org)
14:29 shyam Wojtek: These are debug messages (the 'D' after the time stamp), so if you change your log level to INFO (which is the default) this should go away
14:30 shyam Wojtek: unless of course you are in the middle of debugging something using the logs
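Assuming the volume is gv0 (as the pasted log suggests), the log level could be dropped back to the default along these lines once the debugging session is done:

    # return brick- and client-side logging to the default INFO level
    gluster volume set gv0 diagnostics.brick-log-level INFO
    gluster volume set gv0 diagnostics.client-log-level INFO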
14:31 lkoranda joined #gluster
14:40 post-factum JoeJulian:
14:40 post-factum JoeJulian: are you here?
14:41 Gnomethrower joined #gluster
14:45 DV joined #gluster
14:51 harish__ joined #gluster
14:56 moss joined #gluster
14:58 shyam1 joined #gluster
15:00 squaly joined #gluster
15:01 mhulsman joined #gluster
15:01 haomaiwa_ joined #gluster
15:10 kotreshhr joined #gluster
15:13 [Enrico] joined #gluster
15:14 plarsen joined #gluster
15:17 baojg joined #gluster
15:18 robb_nl joined #gluster
15:21 nishanth joined #gluster
15:21 tswartz joined #gluster
15:25 ira_ joined #gluster
15:26 bowhunter joined #gluster
15:28 Wojtek shyam - yes I enabled Debug to try to understand what's going on that's generating all this cpu load when nothing is going on
15:29 Wojtek https://paste.fedoraproject.org/347885/59438151/
15:29 glusterbot Title: #347885 Fedora Project Pastebin (at paste.fedoraproject.org)
15:29 Wojtek Here's the options I currently have on the cluster
15:30 Wojtek All heals are off, so I don't quite understand what's going on. I tried the lookup-optimize but that didn't have any impact
15:31 ahino joined #gluster
15:31 Manikandan joined #gluster
15:31 kshlm joined #gluster
15:31 kpease joined #gluster
15:37 klfwip joined #gluster
15:37 dscastro kkeithley: ping
15:37 glusterbot dscastro: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
15:37 kkeithley yes?
15:37 dscastro kkeithley: does corosync need to be configured?
15:38 dscastro kkeithley: your post doesn't mention it, but pcsd is complaining about it: Cannot read config 'corosync.conf' from '/etc/corosync/corosync.conf'
15:38 dlambrig_ joined #gluster
15:38 kkeithley No, it should get everything it needs as a side effect of setting up pacemaker
15:40 dscastro kkeithley: weird, those are steps 8, 9 and 10, right?
15:44 kkeithley dscastro: to start with, yes, but then `gluster nfs-ganesha enable` does a lot more
15:45 kkeithley see /usr/libexec/ganesha/ganesha-ha.sh
15:47 dscastro kkeithley: gluster nfs-ganesha enable is the same as "systemctl enable nfs-ganesha"
15:47 dscastro wondering if gluster couldn't start ganesha , let me see
15:48 kkeithley no, `gluster nfs-ganesha enable` sets up the cluster and HA, and then starts nfs-ganesha.  Don't use systemctl to start or stop nfs-ganesha.
15:49 kkeithley It's all supposed to be seamless through gluster.
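As a rough sketch of the CLI-driven flow kkeithley describes (the volume name is hypothetical; the full prerequisite steps are in the blog post linked above):

    # prerequisite: shared storage volume used by the ganesha HA scripts
    gluster volume set all cluster.enable-shared-storage enable

    # create the pacemaker/corosync HA cluster and start nfs-ganesha on all nodes
    gluster nfs-ganesha enable

    # export an individual volume through nfs-ganesha
    gluster volume set myvol ganesha.enable on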
15:50 coredump joined #gluster
15:50 baojg joined #gluster
15:51 dscastro kkeithley: http://www.fpaste.org/347899/59439482/
15:51 glusterbot Title: #347899 Fedora Project Pastebin (at www.fpaste.org)
15:52 kkeithley apart from the "Error: One of the client 10.0.200.5:1023 is running with op-version 30703 and doesn't support the required op-version 30707. This client needs to be upgraded or disconnected before running this command again" errors....
15:52 glusterbot kkeithley: set the desired op-version using ''gluster volume set all cluster.op-version $desired_op_version''.
15:53 kkeithley What does `pcs status` show?
15:53 dscastro Error: cluster is not currently running on this node
15:53 kkeithley paste your /etc/ganesha/ganesha-ha.conf please?
15:54 dscastro kkeithley: http://www.fpaste.org/347901/14594396/ (pacemaker config)
15:54 glusterbot Title: #347901 Fedora Project Pastebin (at www.fpaste.org)
15:55 dscastro s/pacemaker config/pacemaker log/
15:55 glusterbot What dscastro meant to say was: kkeithley: http://www.fpaste.org/347901/14594396/ (pacemaker log)
15:55 kkeithley and all the hosts are resolvable in dns or /etc/hosts?
15:55 kkeithley on all nodes?
15:55 dscastro kkeithley: ohh.. i might be wrong, but
15:56 dscastro i created a dns entry (glsuter.mydomain.com) pointing to all four VIPS
15:56 kkeithley I don't think that should matter.  paste your /etc/ganesha/ganesha-ha.conf please?
15:57 dscastro kkeithley: http://www.fpaste.org/347903/39827145/
15:57 glusterbot Title: #347903 Fedora Project Pastebin (at www.fpaste.org)
15:58 kkeithley there's a typo in line 10:   HA_CLUSTER_NODES="gluterbr0,glusterbr1,glusterbr2,glusterbr3"
15:59 kkeithley gluterbr0 -> glusterbr0.
15:59 dscastro kkeithley: fuu .. you are right
16:00 dscastro kkeithley: ok, fixed. how do i reapply the config?
16:00 kkeithley you should be able to just `gluster nfs-ganesha disable` followed by `gluster nfs-ganesha enable`
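For reference, a corrected /etc/ganesha/ganesha-ha.conf for this four-node setup might look roughly like the following; only the node names come from the discussion, the cluster name and VIP addresses are placeholders:

    # /etc/ganesha/ganesha-ha.conf (sketch; addresses are placeholders)
    HA_NAME="ganesha-ha-cluster"
    HA_VOL_SERVER="glusterbr0"
    HA_CLUSTER_NODES="glusterbr0,glusterbr1,glusterbr2,glusterbr3"
    VIP_glusterbr0="10.0.200.101"
    VIP_glusterbr1="10.0.200.102"
    VIP_glusterbr2="10.0.200.103"
    VIP_glusterbr3="10.0.200.104"

After fixing the file, the `gluster nfs-ganesha disable` / `gluster nfs-ganesha enable` cycle kkeithley mentions re-reads it and rebuilds the pacemaker cluster.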
16:01 dscastro kkeithley: ahh.. the cluster came up... let me test
16:01 dscastro :)
16:01 haomaiwa_ joined #gluster
16:02 kkeithley good.  `pcs status` should show good things.
16:03 dscastro kkeithley: it does! now i have to figure out why azure doesn't let me use the VIPs :)
16:05 kkeithley kewl.   (I'm going to step out for some lunch. Back in a bit)
16:09 shubhendu joined #gluster
16:13 shyam joined #gluster
16:15 calavera joined #gluster
16:16 atalur joined #gluster
16:19 plarsen joined #gluster
16:27 shaunm joined #gluster
16:28 Rasathus_ joined #gluster
16:28 camg joined #gluster
16:29 lkoranda joined #gluster
16:32 Rasathus joined #gluster
16:41 nishanth joined #gluster
16:43 techsenshi joined #gluster
16:51 shubhendu joined #gluster
16:57 kotreshhr left #gluster
17:00 madnexus joined #gluster
17:01 haomaiwang joined #gluster
17:09 bluenemo joined #gluster
17:09 atrius joined #gluster
17:14 lkoranda joined #gluster
17:21 dlambrig_ joined #gluster
17:28 jwd joined #gluster
17:31 B21956 joined #gluster
17:32 nehar joined #gluster
17:32 ghenry joined #gluster
17:38 calavera joined #gluster
17:46 jri joined #gluster
17:56 atalur joined #gluster
17:58 dlambrig_ joined #gluster
18:01 haomaiwa_ joined #gluster
18:08 vmallika joined #gluster
19:01 haomaiwa_ joined #gluster
19:06 theron_ joined #gluster
19:12 MrAbaddon joined #gluster
19:14 joshin joined #gluster
19:35 nishanth joined #gluster
19:43 lord4163 joined #gluster
19:43 edong23 joined #gluster
19:48 nathwill joined #gluster
20:01 haomaiwang joined #gluster
20:16 Rasathus joined #gluster
20:18 theron joined #gluster
20:19 plarsen joined #gluster
20:37 robb_nl joined #gluster
21:01 haomaiwa_ joined #gluster
21:16 Rasathus joined #gluster
21:22 coredump joined #gluster
21:47 shyam joined #gluster
22:01 haomaiwa_ joined #gluster
22:07 masterzen joined #gluster
22:43 plarsen joined #gluster
22:46 bwerthmann joined #gluster
22:47 madnexus joined #gluster
23:01 haomaiwang joined #gluster
23:14 d0nn1e joined #gluster
23:30 Doyle left #gluster
23:42 yosafbridge joined #gluster
