
IRC log for #gluster, 2016-09-29


All times shown according to UTC.

Time Nick Message
00:27 jeremyh joined #gluster
00:33 jeremyh joined #gluster
00:42 arcolife joined #gluster
00:47 shyam joined #gluster
00:55 kramdoss_ joined #gluster
01:02 jeremyh joined #gluster
01:10 derjohn_mobi joined #gluster
01:13 Lee1092 joined #gluster
01:19 zen0n joined #gluster
01:24 zen0n hey guys, I've been running my first gluster instance for a couple of months now, and have just noticed I must have been having some split brain/auto heal issues for some time. Running gluster volume heal $volname info, I have determined all the affected files are useless cache and tmp files and can be deleted without worry. Is there an easy way of just deleting all the files now that I have a list of affected gfids/filenames?
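
(A rough sketch of one common cleanup approach, not an answer from the channel; VOLNAME, the brick path and the file names are placeholders, and the brick-side deletion is the route described in the split-brain documentation for files that are known to be disposable:)

    # list what still needs healing / is in split-brain
    gluster volume heal VOLNAME info split-brain
    # documented manual fix for disposable files: delete the bad copy from the
    # brick(s) together with its gfid hardlink under .glusterfs
    rm /bricks/brick1/path/to/cachefile
    rm /bricks/brick1/.glusterfs/ab/cd/<full-gfid-of-that-file>
    # newer CLIs can also resolve split-brain per file, e.g.
    gluster volume heal VOLNAME split-brain latest-mtime /path/inside/volume/cachefile
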
01:38 auzty joined #gluster
01:46 om joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:54 shdeng joined #gluster
02:05 scuttle|afk joined #gluster
02:06 om joined #gluster
02:11 masber joined #gluster
02:28 prth joined #gluster
02:33 gem joined #gluster
02:35 kimmeh joined #gluster
02:38 daMaestro joined #gluster
02:53 arcolife joined #gluster
02:59 om joined #gluster
03:04 Gambit15 joined #gluster
03:07 k4n0 joined #gluster
03:17 shubhendu joined #gluster
03:18 kramdoss_ joined #gluster
03:18 om joined #gluster
03:22 Muthu_ joined #gluster
03:22 magrawal joined #gluster
03:32 nbalacha joined #gluster
03:43 masuberu joined #gluster
03:45 barajasfab joined #gluster
03:48 nbalacha joined #gluster
03:48 Dave___ joined #gluster
03:49 amye joined #gluster
03:49 misc joined #gluster
03:49 Ramereth joined #gluster
03:50 prth joined #gluster
03:57 om joined #gluster
03:57 scuttle|afk joined #gluster
04:01 derjohn_mobi joined #gluster
04:06 abyss^ joined #gluster
04:10 itisravi joined #gluster
04:18 itisravi joined #gluster
04:20 eightyeight joined #gluster
04:22 sanoj joined #gluster
04:26 hchiramm joined #gluster
04:28 Lee1092 joined #gluster
04:35 rafi joined #gluster
04:37 ppai joined #gluster
04:38 gem joined #gluster
04:45 sanoj joined #gluster
04:47 atinm joined #gluster
04:51 prth joined #gluster
04:57 sanoj joined #gluster
05:01 prasanth joined #gluster
05:04 ndarshan joined #gluster
05:04 ankitraj joined #gluster
05:05 buvanesh_kumar joined #gluster
05:09 skoduri_ joined #gluster
05:09 RameshN joined #gluster
05:10 karthik joined #gluster
05:13 nbalacha joined #gluster
05:18 Muthu joined #gluster
05:20 kotreshhr joined #gluster
05:29 karnan joined #gluster
05:39 k4n0 joined #gluster
05:46 apandey joined #gluster
05:47 mhulsman joined #gluster
05:49 mhulsman1 joined #gluster
05:49 Bhaskarakiran joined #gluster
05:51 mhulsman2 joined #gluster
05:52 aravindavk joined #gluster
05:53 mhulsman joined #gluster
05:54 jiffin joined #gluster
05:57 prth joined #gluster
06:03 hgowtham joined #gluster
06:06 satya4ever joined #gluster
06:06 prth joined #gluster
06:09 Saravanakmr joined #gluster
06:13 jtux joined #gluster
06:13 nishanth joined #gluster
06:21 mhulsman1 joined #gluster
06:22 jtux joined #gluster
06:24 mhulsman joined #gluster
06:28 kdhananjay joined #gluster
06:38 devyani7_ joined #gluster
06:42 jkroon joined #gluster
06:44 poornima_ joined #gluster
06:51 arcolife joined #gluster
07:01 xavih joined #gluster
07:01 malevolent joined #gluster
07:02 msvbhat joined #gluster
07:09 ivan_rossi joined #gluster
07:18 msvbhat joined #gluster
07:29 [diablo] joined #gluster
07:34 deniszh joined #gluster
07:36 derjohn_mob joined #gluster
07:41 mhulsman joined #gluster
07:44 ashiq joined #gluster
07:54 kimmeh joined #gluster
08:01 jtux joined #gluster
08:05 Javezim Trying to install nfs-ganesha, getting "nfs-ganesha : Depends: libntirpc1 but it is not installable", running Ubuntu 16.04 - anyone know how to fix this?
08:07 Mattias left #gluster
08:13 [diablo] joined #gluster
08:16 snixor joined #gluster
08:17 Slashman joined #gluster
08:25 karnan joined #gluster
08:32 itisravi joined #gluster
08:32 ramky joined #gluster
08:46 jkroon joined #gluster
08:50 Javezim Any reason Gluster 3.8 doesn't have a Xenial Release? - https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.8
08:50 glusterbot Title: glusterfs-3.8 : “Gluster” team (at launchpad.net)
08:50 karnan joined #gluster
08:57 anoopcs Javezim, You may need to ask the GlusterFS package maintainer for Ubuntu about this...
09:01 ndevos Javezim: Kaleb sent an email to gluster-devel with a nice table containing all versions, but the archived mail looks horrible - http://www.gluster.org/pipermail/gluster-devel/2016-September/051054.html
09:01 glusterbot Title: [Gluster-devel] Community Gluster Package Matrix, updated (at www.gluster.org)
09:04 edvin joined #gluster
09:08 riyas joined #gluster
09:14 hgowtham joined #gluster
09:15 anoopcs ndevos, What do you think of putting up that matrix onto gluster.org?
09:16 ndevos anoopcs: I think Kaleb was going to send a pull request
09:16 ndevos anoopcs: https://github.com/gluster/glusterdocs/pull/170
09:16 glusterbot Title: add table of packages built by the community by kalebskeithley · Pull Request #170 · gluster/glusterdocs · GitHub (at github.com)
09:17 ndevos kkeithley++ thanks!
09:17 glusterbot ndevos: kkeithley's karma is now 28
09:17 anoopcs ndevos, Cool...That would be very helpful..
09:17 karnan joined #gluster
09:19 prth joined #gluster
09:20 ndevos Javezim: the work-in-progress table is also on https://github.com/kalebskeithley/glusterdocs/blob/8a224cd8f77685a6d5d407be472036d36a4d14ea/Install-Guide/Community_Packages.md
09:20 glusterbot Title: glusterdocs/Community_Packages.md at 8a224cd8f77685a6d5d407be472036d36a4d14ea · kalebskeithley/glusterdocs · GitHub (at github.com)
09:20 ramky joined #gluster
09:34 aravindavk joined #gluster
09:34 atinm joined #gluster
09:34 ivan_rossi1 joined #gluster
09:35 nigelb ndevos: I recommend putting it in one place and making that the single source of truth.
09:35 nigelb Otherwise, we're going to end up having one out of sync with the other very soon.
09:35 ndevos nigelb: yes, it's a pull request for glusterdocs
09:35 nigelb Yeah, then let's point from gluster.org -> docs
09:35 ndevos but not merged yet...
09:35 nigelb Rather than duplicating.
09:35 jiffin1 joined #gluster
09:35 ndevos nigelb: yes, of course
09:48 edvin I have a volume with 3 nodes, 3 bricks on each node, running distributed replication in a 3x3 scenario. When I add a 4th node with another 3 bricks it turns into 4x3 as expected, but the three bricks on node 4 all share the same data (as expected in a replica-3 scenario). The problem is I want to move the bricks around to get the redundancy - is there a simple way to do this?
09:49 edvin And if there isn't a simple way, would replacing earlier bricks with bricks from node4 to create 3 available bricks on 3 different servers, and then adding those to the gluster, force it to run a redundant 4x3 setup?
09:50 karnan joined #gluster
09:54 jkroon joined #gluster
09:57 jiffin1 joined #gluster
10:08 rastar joined #gluster
10:12 edvin Guess it's a nonstandard setup and question, judging by the silence ;)
10:13 hackman joined #gluster
10:14 ndevos edvin: I think you can use replace-brick to move two bricks to the new server, and then add-brick the previous two and another one on the new server
10:15 ndevos edvin: after each replace-brick, you will need to make sure the data has moved completely and the 'old' bricks are not in use anymore
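
A minimal sketch of that sequence; the volume name, hostnames and brick paths are made up for illustration (node4 being the new server), and the heal checks in between are the "make sure the data has moved" step:

    # move two bricks from existing replica sets onto the new node, one at a time
    gluster volume replace-brick myvol node1:/bricks/b3 node4:/bricks/b1 commit force
    gluster volume heal myvol info          # wait until nothing is pending
    gluster volume replace-brick myvol node2:/bricks/b3 node4:/bricks/b2 commit force
    gluster volume heal myvol info
    # then add a new replica-3 set: the two freed locations (cleaned out first)
    # plus one more brick on the new node
    gluster volume add-brick myvol node1:/bricks/b3 node2:/bricks/b3 node4:/bricks/b3
    gluster volume rebalance myvol start
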
10:21 kimmeh joined #gluster
10:25 rastar joined #gluster
10:26 msvbhat joined #gluster
10:29 edvin ndevos: Alright, welp, I was hoping there was an automated way, but I guess a little manual work doesn't hurt too much
10:33 ndevos edvin: I'm not aware of an automated way, sorry
10:38 slunatecqo joined #gluster
10:39 slunatecqo Hi - is there any easy way to geo-replicate all volumes, and auto geo-replicate new ones?
10:44 msvbhat joined #gluster
10:48 hgowtham joined #gluster
10:50 rastar joined #gluster
11:03 k4n0 joined #gluster
11:05 goretoxo joined #gluster
11:07 jiffin joined #gluster
11:16 tomaz__ joined #gluster
11:18 atinm joined #gluster
11:19 kotreshhr left #gluster
11:21 nisroc joined #gluster
11:23 Wizek_ joined #gluster
11:24 Caveat4U joined #gluster
11:26 kkeithley [04:50:16] <Javezim> Any reason Gluster 3.8 doesn't have a Xenial Release? - https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.8
11:26 glusterbot Title: glusterfs-3.8 : “Gluster” team (at launchpad.net)
11:26 gem joined #gluster
11:27 kkeithley because Launchpad deleted the xenial 3.8 build when I tried to build for yakkety.  3.8 is rebuilding for xenial. It'll be back in a few minutes
11:33 jkroon joined #gluster
11:34 Rasathus joined #gluster
11:38 shubhendu joined #gluster
11:42 arcolife joined #gluster
11:46 edvin joined #gluster
11:47 ira joined #gluster
11:48 harish joined #gluster
11:52 ramky joined #gluster
12:00 kramdoss_ joined #gluster
12:03 ankitraj joined #gluster
12:05 flying joined #gluster
12:06 johnmilton joined #gluster
12:06 flying ls: cannot access ROOT/uatcontent: Transport endpoint is not connected
12:06 flying hi guys, can you help me check what's happened?
12:07 johnmilton joined #gluster
12:28 plarsen joined #gluster
12:29 jiffin1 joined #gluster
12:31 B21956 joined #gluster
12:31 jiffin1 flying: is it for only one file
12:32 flying for one entire directory
12:36 jiffin for one directory, what about the others?
12:38 jiffin flying: for only one directory, what about the other entries?
12:38 flying the others are fine
12:38 flying is there a gluster command to check the state?
12:45 unclemarc joined #gluster
12:46 shyam joined #gluster
12:48 plarsen joined #gluster
12:51 skoduri joined #gluster
12:54 rastar joined #gluster
13:08 magrawal joined #gluster
13:10 kpease joined #gluster
13:18 gevatter_ joined #gluster
13:18 shyam joined #gluster
13:30 ppai joined #gluster
13:31 guhcampos joined #gluster
13:32 guhcampos Hi, I messed up a gluster config by deleting my ovirt's hosted engine volume with "gluster volume delete"...
13:32 guhcampos ... so after a few moments of mindful meditation, I just re-created the volume with the same name, using the same bricks... which seemed to have worked
13:33 guhcampos but the mounted volume on clients is empty, even if the bricks still have all the data in them, what am I missing?
13:34 derjohn_mob joined #gluster
13:34 rastar joined #gluster
13:39 shaunm joined #gluster
13:41 nbalacha joined #gluster
13:41 Muthu_ joined #gluster
13:44 edvin_ joined #gluster
13:50 tdasilva joined #gluster
14:04 shyam joined #gluster
14:10 Caveat4U joined #gluster
14:11 jiffin guhcampos: can u do ls -ltr * on mount
14:11 jiffin ?
14:11 guhcampos Sorry, I was able to recover it now
14:11 guhcampos I was stupid, really
14:11 jiffin guhcampos: cool
14:12 guhcampos I recreated the volume with server:/gluster/brickX, while the actual old bricks were on server:/gluster/engine/brickX
14:12 guhcampos deleted the volume again, recreated it again, and I was able to bring my hostedengine back online
14:13 guhcampos I have to stop doing this kind of thing in the middle of the night when I get insomnia
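
For reference, a sketch of the re-creation with the corrected paths; the hostnames and replica count are assumptions, only the /gluster/engine/brickX layout comes from the conversation, and the 'force' plus xattr reset may be needed because the directories were already bricks of the deleted volume:

    # only if 'volume create' refuses with "already part of a volume":
    setfattr -x trusted.glusterfs.volume-id /gluster/engine/brick1   # on each server
    gluster volume create engine replica 3 \
        srv1:/gluster/engine/brick1 srv2:/gluster/engine/brick1 srv3:/gluster/engine/brick1 force
    gluster volume start engine
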
14:14 MadPsy does anyone have any idea why an NFS mounted gluster share is creating all files as root (no sign of setuid)
14:15 jiffin MadPsy: u mean gluster nfs/ nfs ganesha/ kernel nfs?
14:15 MadPsy gluster nfs sorry
14:16 rastar joined #gluster
14:17 jiffin did u enable gluster nfs-acl option?
14:19 MadPsy let me check
14:22 rafi1 joined #gluster
14:22 ashp jkroon: turns out setting 'option rpc-auth-allow-insecure on' in /etc/glusterfs/glusterd.vol allows 3.7+ to talk to 3.5 just fine
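
A sketch of where that option goes, assuming a stock glusterd.vol; the restart command depends on the distro, and the per-volume server.allow-insecure companion is a common pairing rather than something confirmed above:

    # /etc/glusterfs/glusterd.vol (on each server)
    volume management
        type mgmt/glusterd
        option rpc-auth-allow-insecure on
        # existing options stay as they are
    end-volume

    # restart the management daemon afterwards, e.g.
    systemctl restart glusterd
    # often paired with:
    gluster volume set VOLNAME server.allow-insecure on
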
14:22 MadPsy doesn’t look like it no
14:24 jiffin MadPsy: which gluster version ?
14:24 jiffin i guess by default it should be on
14:25 MadPsy 3.7.6
14:25 MadPsy if I mount with glusterfs directly it’s fine
14:26 jkroon ashp, that's good to know thanks.  what was wrong?
14:26 satya4ever joined #gluster
14:26 jiffin anyway, can you try gluster v set <volname> nfs-acl on and try again?
14:27 MadPsy didn’t like nfs-acl
14:30 MadPsy is that not a mount option instead
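
A hedged aside for readers hitting the same error: the volume-set key is most likely nfs.acl (with a dot) rather than nfs-acl, which would explain the CLI rejecting it; the acl/noacl pair MadPsy is thinking of is an NFS client mount option and is separate from the volume setting:

    gluster volume set VOLNAME nfs.acl on
    # client side (Gluster's built-in NFS server speaks NFSv3)
    mount -t nfs -o vers=3 server:/VOLNAME /mnt/VOLNAME
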
14:33 slunatecqo joined #gluster
14:33 slunatecqo Hi - is there any easy way to geo-replicate all volumes?
14:34 cloph_away just create target volumes and setup corresponding geo-rep entries for those.
14:36 slunatecqo So if I want to create a new volume, I have to create it on the master, create it on the slave, and set up geo-replication?
14:37 cloph_away you can only replicate a gluster volume to another gluster volume
14:38 jiffin joined #gluster
14:38 slunatecqo cloph_away: And there is no way to set somehow, that when I create new volume, it will be automatically replicated to another server?
14:39 cloph_away well, you can use a helper script that does all the steps for you, but yeah, no single gluster command to do it.
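
A minimal sketch of the per-volume steps such a helper script would wrap; mastervol, slavehost and slavevol are placeholders, not names from the log:

    # on the slave side: create and start the target volume first
    gluster volume create slavevol slavehost:/bricks/slavevol
    gluster volume start slavevol
    # on the master side: distribute ssh keys, then create and start the session
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
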
14:39 ashp jkroon: it was that 3.8->3.5 connectivity thing we talked about yesterday, with the weird handshake error, that was the fix to it :)
14:40 slunatecqo OK - thank you
14:44 jkroon ashp, i thought we were looking at 3.7.3 -> 3.5
14:44 nbalacha joined #gluster
14:46 slunatecqo cloph_away: and what if I just had replicated volumes between geographically separated servers?
14:47 cloph_away then it would not be using the geo-replication feature (async), but synchronous replication (which, due to the separation, likely has high latency/bad performance)
14:49 slunatecqo cloph_away: Ok - thank you
14:54 theron joined #gluster
14:57 wushudoin joined #gluster
14:57 arcolife joined #gluster
15:02 MadPsy jiffin: I just realised that it's not just NFS, it's glusterfs (FUSE) too... but only when they are mounted on the same machines as the bricks
15:03 rastar joined #gluster
15:04 nbalacha joined #gluster
15:07 Caveat4U joined #gluster
15:11 ashp jkroon: yeah, sorry, 3.7, it fixed that issue (sorry, i've been looking into what it takes for an upgrade to 3.8.4)
15:12 Caveat4U joined #gluster
15:12 jkroon ah ok
15:13 jkroon well, I thought you already said yesterday that you set that option?
15:13 jkroon never mind, it was a different option name :)
15:14 jkroon glad you're sorted.  to migrate the servers, basically prepare new bricks on the new servers.
15:15 jkroon from one of the old servers, run "gluster peer probe newservername" for each of the new servers (it might be that you need to do this from the new to the old - can't remember, but it'll err out if you get it wrong).
15:15 theron_ joined #gluster
15:16 jkroon https://www.gluster.org/pipermail/gluster-users/2012-October/011502.html contains some useful info on the process to replace individual bricks.
15:16 glusterbot Title: [Gluster-users] 'replace-brick' - why we plan to deprecate (at www.gluster.org)
15:16 theron_ joined #gluster
15:16 Caveat4U joined #gluster
15:16 jkroon experience has shown that this kind of migration can result in HUGE CPU spikes - so do this at a quiet time.
15:16 jkroon also make sure you have backups.
15:17 Caveat4U joined #gluster
15:18 jkroon all your instances are newer than 3.3, so you should be OK.
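
A compressed sketch of those first steps, with placeholder hostnames; checking peer status before moving bricks is an assumption, not something jkroon spelled out:

    # from an existing (old) server, add each new server to the trusted pool
    gluster peer probe newserver1
    gluster peer probe newserver2
    gluster peer status      # every peer should show 'Peer in Cluster (Connected)'
    # then migrate bricks one at a time, waiting for heals to finish in between
    gluster volume replace-brick VOLNAME oldserver1:/bricks/b1 newserver1:/bricks/b1 commit force
    gluster volume heal VOLNAME info
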
15:19 Caveat4U joined #gluster
15:20 Caveat4U joined #gluster
15:21 Caveat4U_ joined #gluster
15:27 slunatecqo left #gluster
15:27 arcolife joined #gluster
15:44 slunatecqo joined #gluster
15:45 slunatecqo Hi - if I have a gluster volume mounted on my computer, is the data somehow cached on my computer, or am I connecting to the gluster server every time I try to read something?
15:51 jheckman joined #gluster
15:54 slunatecqo left #gluster
15:55 jheckman Hi, I am currently looking at Gluster and there are a lot of things being thrown around the documentation that I don't understand. I am working on setting up a Docker cluster and I need a way to share all volumes between all hosts. Due to the setup I won't need to worry much about multiple containers accessing the same volume, I just may not know what host a container is on at any point. I am curious: is there
15:55 jheckman an option when setting up Gluster where it will not keep a copy of the entire storage on every server, but will instead, with some intelligence, store files based on which host accesses them more, while still replicating between hosts?
15:57 msvbhat joined #gluster
16:03 hagarth joined #gluster
16:09 jheckman_ joined #gluster
16:09 jheckman_ left #gluster
16:09 jheckman_ joined #gluster
16:20 ndevos jheckman_: you can use some servers as storage servers, and others as docker hosts, or combine and mix'n'match
16:21 ndevos jheckman_: there is no requirement to have the docker hosts also be storage servers, gluster is a network filesystem, similar to NFS
16:22 ndevos jheckman_: there are ways to figure out what storage servers have the files, and in that case you could run a docker instance there
16:23 ndevos jheckman_: but I do not think that knowledge is built in any of the docker managers/schedulers
16:24 ndevos jheckman_: so, it is more the other way around from what you are asking: not locating the files next to the container, but running the container where the file is
16:25 ndevos jheckman_: that is assuming the container uses (mainly) a single file, and not many, and sharding should not be enabled either
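
One way to do the "figure out what storage servers have the files" step ndevos mentions - hedged, since he does not spell out a command: the trusted.glusterfs.pathinfo virtual xattr on a glusterfs (FUSE) mount reports the bricks backing a file:

    getfattr -n trusted.glusterfs.pathinfo /mnt/VOLNAME/path/to/file
    # the output contains POSIX(...) entries naming host:/brick/path for each copy
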
16:28 jheckman_ So if the docker hosts are not storage servers, are there copies of the files on the docker hosts, or is it all over the network like a traditional NFS share? In this case the container cares about the contents of a directory. I guess a possibly better way to look at it is to ignore Docker: if a host is not a storage server and is instead just accessing the share, does gluster have any mechanism to cache the files
16:28 jheckman_ commonly used on a given host, on that host? Right now I am worried about the performance of these files not living on the host, without requiring a close to 3TB (we think) share to be on all of these hosts
16:29 ndevos Gluster would not keep a local copy cached
16:29 theron joined #gluster
16:29 ndevos it would just be like 'normal' NFS
16:29 Gambit15 joined #gluster
16:30 theron joined #gluster
16:30 ndevos but, if you mount over NFS, you would be able to use fs-cache which is a read-only cache
16:31 ndevos however, mounting NFS on a storage server is not recommended, (multi-host) locking will not work correctly in that case
16:32 jheckman_ hmm, ok thank you. I will need to do some more looking around at various options for this problem.
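
A sketch of the fs-cache setup ndevos describes, on a client that is not a storage server; the cachefilesd package/service name and the Ubuntu defaults are assumptions:

    apt-get install cachefilesd          # then set RUN=yes in /etc/default/cachefilesd
    systemctl start cachefilesd
    # mount over NFSv3 with 'fsc' to enable the read-only FS-Cache
    mount -t nfs -o vers=3,fsc storage1:/VOLNAME /mnt/VOLNAME
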
16:41 ivan_rossi1 left #gluster
16:42 msvbhat joined #gluster
16:42 hackman joined #gluster
16:43 prth joined #gluster
16:45 dataio joined #gluster
16:45 mhulsman joined #gluster
16:53 guhcampos joined #gluster
17:00 BitByteNybble110 joined #gluster
17:13 mhulsman joined #gluster
17:19 kimmeh joined #gluster
17:20 jdarcy joined #gluster
17:32 atinm joined #gluster
17:34 karnan joined #gluster
17:36 squizzi joined #gluster
17:51 bowhunter joined #gluster
17:55 Guest71203 joined #gluster
18:14 deniszh joined #gluster
19:03 kpease joined #gluster
19:24 Wizek_ joined #gluster
19:24 kpease joined #gluster
19:42 twisted` joined #gluster
19:42 rideh- joined #gluster
19:47 Trefex_ joined #gluster
19:48 Chinorro joined #gluster
19:49 tom][ joined #gluster
19:50 Mmike joined #gluster
19:51 marlinc joined #gluster
19:54 johnmilton joined #gluster
20:00 wushudoin joined #gluster
20:05 bowhunter joined #gluster
20:09 prth joined #gluster
20:11 wushudoin joined #gluster
20:11 jheckman joined #gluster
20:20 kimmeh joined #gluster
20:21 sage__ joined #gluster
20:23 om joined #gluster
20:27 shyam joined #gluster
20:28 MadPsy does anyone know why, when mounting a volume with glusterfs’s built-in NFS server, any user who writes to the share is effectively ‘root’?
20:28 derjohn_mob joined #gluster
20:38 rwheeler joined #gluster
20:58 derjohn_mob joined #gluster
21:02 glustin joined #gluster
21:05 malevolent joined #gluster
21:13 petan joined #gluster
21:30 hagarth joined #gluster
21:37 stopbyte joined #gluster
21:58 MadPsy for anyone else having problems with NFS and every user effectively being ‘root’, sec=sys as a mount option seems to cause this behaviour
22:01 MadPsy which is very strange, given sec=sys is the default (and in fact ‘mount -a’ shows it, yet specifying it as a mount option causes that strange behaviour)
22:02 MadPsy s/mount -a/mount/
22:02 glusterbot What MadPsy meant to say was: which is very strange, given sec=sys is the default (and in fact ‘mount’ shows it, yet specifying it as a mount option causes that strange behaviour)
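
For later readers, the two mount invocations being compared, with placeholder names; the behaviour (an explicit sec=sys mapping all writes to root against Gluster's built-in NFS server) is MadPsy's report above, not something verified here:

    # reported to behave as expected (files owned by the writing user)
    mount -t nfs -o vers=3 server:/VOLNAME /mnt/VOLNAME
    # reported to create everything as root
    mount -t nfs -o vers=3,sec=sys server:/VOLNAME /mnt/VOLNAME
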
22:21 kimmeh joined #gluster
22:33 Caveat4U joined #gluster
22:57 bowhunter joined #gluster
23:06 nathwill joined #gluster
23:14 jheckman joined #gluster
23:20 fang64 joined #gluster
23:33 shyam joined #gluster
23:48 Klas joined #gluster
