
IRC log for #gluster, 2016-09-13


All times shown according to UTC.

Time Nick Message
00:06 cliluw joined #gluster
00:07 harish joined #gluster
00:08 jobewan joined #gluster
00:11 johnmilton joined #gluster
00:17 johnmilton joined #gluster
00:37 itisravi joined #gluster
00:47 plarsen joined #gluster
01:06 shdeng joined #gluster
01:15 suliba joined #gluster
01:19 siel joined #gluster
01:21 kramdoss_ joined #gluster
01:29 shaunm joined #gluster
01:35 plarsen joined #gluster
01:48 harish joined #gluster
02:02 shdeng joined #gluster
02:12 prth joined #gluster
02:18 prth joined #gluster
02:44 Javezim NFS-Ganesha is compatible with v 3.8 right?
02:44 Bhaskarakiran joined #gluster
02:44 Javezim and Ubuntu 16.04
02:54 shdeng joined #gluster
02:59 skoduri joined #gluster
03:07 RameshN joined #gluster
03:18 magrawal joined #gluster
03:19 prth joined #gluster
03:26 jobewan joined #gluster
03:33 aspandey joined #gluster
03:40 hchiramm joined #gluster
03:44 atinm joined #gluster
03:50 RameshN joined #gluster
04:04 kkeithley Javezim: yes to both
04:07 prth joined #gluster
04:07 ppai joined #gluster
04:08 jiffin joined #gluster
04:14 nbalacha joined #gluster
04:14 RameshN joined #gluster
04:14 itisravi joined #gluster
04:19 prasanth joined #gluster
04:26 ramky joined #gluster
04:32 baudster- joined #gluster
04:33 sanoj_ joined #gluster
04:33 baudster- Hi everyone
04:34 baudster- I have a little problem. I have 2 gluster nodes with 1 brick each, configured for replication.
04:34 baudster- when I do "gluster peer status" each node see the other one
04:35 baudster- when i do "gluster volume info" both bricks are shown
04:35 baudster- when i create or delete a file on the gluster volume in one node, it reflects on the other node. So i know it's working
04:36 baudster- but when i do "gluster volume status" it shows one node as offline
04:37 baudster- so i'm not entirely sure why it's working even if it shows as offline
04:38 karthik_ joined #gluster
04:39 baudster- anyone?
04:39 jiffin baudster-: the issue is not related to the nodes, it's related to the bricks
04:39 jiffin just do gluster v start <volume> force
04:40 jiffin if it still shows offline, check the brick logs
04:41 baudster- yeah, i did the force start already
04:41 baudster- but still the same
04:41 baudster- checking the logs
04:41 baudster- [2016-09-13 04:41:47.763619] E [rpcsvc.c:521:rpcsvc_handle_rpc_call] 0-glusterd: Request received from non-privileged port. Failing request
04:42 baudster- I see that on the brick log
04:47 jiffin baudster-: https://www.gluster.org/pipermail/gluster-users/2012-December/012064.html
04:47 glusterbot Title: [Gluster-users] gluster command Request received from non-privileged port (at www.gluster.org)
04:47 jiffin baudster-: http://thr3ads.net/gluster-users/2013/12/2712958-Error-when-trying-to-connect-to-a-gluster-volume-with-libvirt-libgfapi
04:47 glusterbot Title: thr3ads.net - Gluster users - Error when trying to connect to a gluster volume (with libvirt/libgfapi) [Dec 2013] (at thr3ads.net)
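
Aside: the fix described in both threads amounts to letting gluster accept connections from non-privileged ports (> 1024). A minimal sketch, assuming a volume named vol1:

    gluster volume set vol1 server.allow-insecure on   # per-volume option
    # for glusterd itself, add "option rpc-auth-allow-insecure on" to the
    # management section of /etc/glusterfs/glusterd.vol on every server,
    # then restart the management daemon:
    service glusterd restart
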
04:48 karnan joined #gluster
04:49 kkeithley er, what do you mean by "when I create ... a file on the volume on one node"?  You shouldn't be creating files on the bricks.  Mount the volume somewhere (a client) and write to the volume from the client.
04:51 jiffin joined #gluster
04:51 baudster- kkeithley: that's what i meant, when i create a file on the client that accesses GlusterNodeA, I can see the file on a different client that accesses GlusterNodeB and vice-versa
04:54 itisravi joined #gluster
04:55 baudster- jiffin: I did what the link you gave said but i'm still having the same issue
05:00 jiffin baudster-: you turned on rpc-auth-allow-insecure and are still facing the issue?
05:02 baudster- correct
05:06 jiffin baudster-: did you start the volume again
05:06 jiffin after setting the option
05:06 baudster- yes
05:06 jiffin baudster-: hmm
05:06 rastar baudster-: what is the Gluster version?
05:07 baudster- glusterfs 3.7.15 built on Aug 31 2016 13:50:05
05:07 rastar baudster-: and all the clients also use the same version?
05:07 ndarshan joined #gluster
05:09 baudster- rastar: correct
05:10 rastar baudster-: The error says Glusterd rejected connection from a client.
05:13 baudster- what's weird is that it's working.
05:13 atinm joined #gluster
05:13 baudster- as i mentioned, if a file is created on GlusterNodeA through ClientA, ClientB which is connected to GlusterNodeB sees the file
05:14 RameshN_ joined #gluster
05:14 baudster- if ClientB deletes the file, it also reflects on ClientA
05:14 rastar baudster-: What that means is that the volume is working fine
05:14 rastar baudster-: only the glusterd management connection is not working
05:15 rastar baudster-: please share the output which says node is offline
05:15 rastar @paste
05:15 glusterbot rastar: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
05:17 baudster- sorry, it's not the node that's offline, but the brick
05:18 baudster- Gluster process                             TCP Port  RDMA Port  Online  Pid
05:18 baudster- ------------------------------------------------------------------------------
05:18 baudster- Brick dc1-cch-smb-clust-01-1:/vol1/brick1   49152     0          Y       1840
05:18 baudster- Brick dc1-cch-smb-clust-01-2:/vol1/brick1   N/A       N/A        N       N/A
05:18 glusterbot baudster-: ----------------------------------------------------------------------------'s karma is now -18
05:23 baudster- any idea?
05:23 skoduri joined #gluster
05:25 hgowtham joined #gluster
05:30 aspandey joined #gluster
05:30 itisravi joined #gluster
05:32 ppai joined #gluster
05:35 Klas haha, bad karma from posting all those - =P, buggy bot!
05:37 ankitraj joined #gluster
05:38 baudster- rastar?
05:38 gem joined #gluster
05:39 rastar atinm: baudster- sees this when he performs gluster volume status
05:39 rastar atinm: what could be the reason for failure of communication between brick and Glusterd?
05:39 rastar baudster-: did you upgrade recently?
05:39 rastar baudster-: have you restarted Glusterd after upgrade?
05:40 baudster- no upgrade has been done
05:41 baudster- my next question is, i'm not going to lose any data because of this right?
05:42 mhulsman joined #gluster
05:43 ankitraj joined #gluster
05:43 RameshN_ joined #gluster
05:44 Klas are you running quorum or not?
05:44 Klas and are both servers agreed on the status?
05:45 Klas and, are you running SSL?
05:45 Klas I had severe issues which created situations much like this when I did
05:45 rastar baudster-: if the file is showing up on all mounts then all the bricks are up and you won't lose data
05:46 rastar baudster-: Klas has a valid point, have you turned on SSL?
05:47 baudster- we're not running SSL
05:47 baudster- firewalls are off
05:47 ieth0 joined #gluster
05:47 atinm baudster-, rastar : if brick is up and glusterd is not showing the port, the only reason I see here is a missing PMAP_SIGNIN
05:47 kotreshhr joined #gluster
05:47 Saravanakmr joined #gluster
05:48 atinm baudster-, output of 'gluster volume get <volname> cluster.op-version' ?
05:48 rastar atinm: Oh yes, that is perfectly possible, I think baudster- did a volume start force
05:48 baudster- rastar: that's exactly what i think. everything seems to be working as expected except when we do a "gluster volume status"
05:49 baudster- atinm, value for that on both nodes is 2
05:50 baudster- cluster.op-version                      2
05:50 RameshN__ joined #gluster
05:50 ndarshan joined #gluster
05:50 atinm baudster-, so this means you are running with an old op-version
05:50 atinm baudster-, gluster --version from both the nodes?
05:51 baudster- glusterfs 3.7.15 on both nodes
05:51 atinm baudster-, ehh!! that's not possible
05:52 baudster- just so you know, both nodes are running Ubuntu
05:52 derjohn_mob joined #gluster
05:53 darshan joined #gluster
05:54 baudster- atinm: what do you mean that's not possible?
05:56 baudster- atinm?
05:58 [diablo] joined #gluster
05:58 ankitraj joined #gluster
05:58 aravindavk joined #gluster
05:59 atinm baudster-, in that case you have upgraded the gluster version
05:59 baudster- atinm: what should we do to rectify this?
06:08 jtux joined #gluster
06:08 nishanth joined #gluster
06:09 prth joined #gluster
06:10 Saravanakmr joined #gluster
06:12 mhulsman1 joined #gluster
06:13 atinm baudster-, sorry I was afk for some time
06:13 baudster- atinm: that's ok.
06:13 atinm baudster-, what has happened here is that you might have started with an older glusterfs install and upgraded to 3.7.15; in gluster you need to bump up the op-version manually
06:13 baudster- atinm: my concern is that this is a production server and we don't want to lose any data.
06:13 atinm baudster-, your data wouldn't get impacted
06:14 baudster- how can we upgrade op-version
06:14 atinm baudster-, gluster v set all cluster.op-version 30712
06:14 atinm baudster-, once you do it, I believe you shouldn't be experiencing this issue
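
Aside: the sequence atinm describes, as a sketch; 30712 is the op-version corresponding to 3.7.12 and is specific to this cluster's target version:

    gluster volume get vol1 cluster.op-version        # check the current value (vol1 is a placeholder)
    gluster volume set all cluster.op-version 30712   # bump it once every node runs the new binaries
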
06:15 baudster- do i need to restart anything?
06:17 jkroon joined #gluster
06:18 baudster- atinm: ok, i did that and now I see this error on both nodes instead of just seeing it on one:  0-glusterd: Request received from non-privileged port. Failing request
06:19 atinm baudster-, quick question, what was your last upgrade path?
06:22 baudster- hmm.. sorry, i can see that it was automatically upgraded from 3.4.2
06:23 Mattias joined #gluster
06:26 baudster- atinm: does that info help?
06:27 arcolife joined #gluster
06:28 rastar baudster-: I would recommend that you follow the upgrade steps.
06:28 rastar baudster-: it is very well possible that your clients are still running on 3.4.2
06:29 baudster- is there a way to check this aside from checking it through the packaging system?
06:30 nishanth joined #gluster
06:31 kdhananjay joined #gluster
06:31 rastar baudster-: packaging system is the right way
06:32 rastar baudster-: are your server nodes the client nodes as well?
06:32 baudster- yes
06:32 kramdoss_ joined #gluster
06:32 rastar baudster-: ok, is it then possible that you have not remounted them after the automatic upgrade?
06:34 [diablo] joined #gluster
06:35 baudster- rastar: highly possible.
06:35 baudster- rastar: that's a good point
06:41 Mattias I've had glusterfs stay untouched (nobody using the servers) for 4 days now and glusterfs is still using 100% cpu... It synced the files instantly when I first set it up (3 servers, 3 replicas). Why is it still using 100% cpu? The cpu is shared between glusterfsd and glusterfs, with glusterfsd using 60% of it.
06:41 Mattias It was only about 500mb of data.
06:42 Mattias profile after 60s: https://gist.github.com/mattias/7b5e8d333abd51fde28afadbf7531bed
06:42 glusterbot Title: gluster 60s profile · GitHub (at gist.github.com)
06:42 Mattias profile after 120s: https://gist.github.com/mattias/26e8b4d88b4c0f7d3eb1cc8e8279702b
06:42 glusterbot Title: gluster 120s profile · GitHub (at gist.github.com)
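
Aside: those gists come from gluster's built-in profiler; roughly how such a capture is taken (myvol is a placeholder):

    gluster volume profile myvol start
    sleep 60
    gluster volume profile myvol info   # cumulative and per-interval FOP stats for each brick
    gluster volume profile myvol stop
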
06:42 mhulsman joined #gluster
06:47 devyani7 joined #gluster
06:49 ankitraj joined #gluster
06:57 jri joined #gluster
06:57 TvL2386 joined #gluster
07:04 Philambdo joined #gluster
07:05 k4n0 joined #gluster
07:07 satya4ever joined #gluster
07:13 mhulsman1 joined #gluster
07:15 jiffin joined #gluster
07:24 robb_nl joined #gluster
07:24 jiffin joined #gluster
07:45 jkroon joined #gluster
07:45 derjohn_mob joined #gluster
07:52 level7 joined #gluster
07:53 fsimonce joined #gluster
08:04 jwd joined #gluster
08:06 harish joined #gluster
08:08 deniszh joined #gluster
08:12 ankitraj joined #gluster
08:13 mhulsman joined #gluster
08:15 webmind joined #gluster
08:15 webmind can anyone explain to me what this means? [2016-09-13 08:09:56.264733] W [client-rpc-fops.c:1232:client3_3_removexattr_cbk] 0-bijlagen-client-0: remote operation failed: No data available
08:17 webmind also getting this one: [2016-09-13 08:09:56.265789] W [fuse-bridge.c:1172:fuse_err_cbk] 0-glusterfs-fuse: 59025949: REMOVEXATTR() /bijlagen//31a985c8a87658f2c8c49b59fe04e751 => -1 (No data available)
08:18 webmind but can't find anything when looking for these messages
08:22 ndevos webmind: removexattr gets called when a client tries to remove an extended attribute, and in this case, it seems that the extended attribute was not set at all
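
Aside: that warning is easy to reproduce on any filesystem with xattr support; removing an attribute that was never set returns ENODATA, which is rendered as "No data available":

    $ touch demo
    $ setfattr -x user.missing demo
    setfattr: demo: No data available
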
08:24 Slashman joined #gluster
08:26 hchiramm joined #gluster
08:26 webmind so not something important right?
08:27 webmind ah, this is an actual error I think: [2016-09-13 08:19:22.445238] E [afr-self-heal-data.c:1270:afr_sh_data_open_cbk] 0-bijlagen-replicate-0: open of <gfid:ebc59336-cbe4-4652-a290-df10fd1ec0c2> failed on child bijlagen-client-1 (No such file or directory)
08:38 aspandey joined #gluster
08:41 muneerse2 joined #gluster
08:44 mhulsman joined #gluster
08:47 mhulsman joined #gluster
08:49 legreffier webmind: messages starting with W are warnings, with E are errors.
08:49 legreffier warnings are usually no big deal, unless you get a lot of them
08:50 webmind legreffier: I just guessed it, thanks :)
08:50 webmind well, that warning is rather common in the logs
08:50 ndevos webmind: well, the time of those messages is different
08:51 ndevos webmind: depending on the application, maybe it needs a fix to not remove xattrs that do not exist
08:51 webmind check
08:53 webmind self-heal is about removing inconsistencies right? so if I upload a file to one server, that's what it does to move the file to the other server as well?
08:53 webmind I'm seeing a high load and entries in the heal info list
08:55 webmind how do you get settings in gluster? there's a set command, but i'd like to verify some settings
08:56 baudster- rastar: just letting you know that unmounting and remounting the volume worked
08:56 baudster- rastar: thanks for that!
08:56 baudster- atinm: thanks for the help as well!
08:57 baudster- jiffin: you too! thanks!
08:58 derjohn_mob joined #gluster
09:06 Jules- joined #gluster
09:06 ndarshan joined #gluster
09:07 arcolife joined #gluster
09:08 ndarshan joined #gluster
09:26 webmind I'd like to test these heal settings that were recommended
09:26 webmind but would like to be able to return to the original values, anyone know how to get those?
09:32 jiffin webmind: gluster volume get
09:33 ndevos webmind: you can also do 'gluster volume reset <volume> <option>'
09:33 webmind unrecognized word: get (position 1)
09:34 webmind ndevos: what will it reset to?
09:34 ndevos webmind: healing should not be needed for normal access, just make sure you always access the volume/files through a glusterfs client (fuse mount, or nfs)
09:34 ndevos webmind: it resets the option to the default value
09:35 webmind so if it was changed before, that gets lost?
09:35 ndevos yes
09:35 webmind ah, hmm
09:35 webmind it's a production system I've not setup.
09:35 arcolife joined #gluster
09:35 webmind not sure if that's a good idea :)
09:35 ndevos make sure to keep track of the changed options by copy/pasting the output of 'gluster volume info' somewhere :)
09:36 webmind gluster volume info should list changed settings?
09:36 robb_nl joined #gluster
09:36 webmind so if it gives no settings, it is all default values?
09:36 ndevos yes, it only lists the changed options, not the ones with the default values
09:37 ndevos if you are on a recent release, 'gluster volume get <volume> all' should list everything, but you seem to have an old(er) version
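
Aside: on releases that have 'gluster volume get', the round trip looks like this sketch (myvol and the option are only examples):

    gluster volume get myvol cluster.self-heal-daemon     # read one option
    gluster volume set myvol cluster.self-heal-daemon off
    gluster volume reset myvol cluster.self-heal-daemon   # back to the default
    gluster volume info myvol                             # lists only options changed from their defaults
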
09:37 webmind yep
09:37 webmind 3.4.2
09:38 rastar baudster-: glad it worked, please follow the guide at http://gluster.readthedocs.io/en/latest/Upgrade-Guide/Upgrade%20to%203.7/ to verify that your upgrade is safe
09:38 glusterbot Title: Upgrade to 3.7 - Gluster Docs (at gluster.readthedocs.io)
09:38 webmind thnx
09:39 ndevos oh, wow, 3.4 does not get any updates anymore, you really should think about upgrading to a more up-to-date version
09:39 ndevos see https://www.gluster.org/community/release-schedule/ for some details
09:39 glusterbot Title: Release Schedule — Gluster (at www.gluster.org)
09:40 Wizek_ joined #gluster
09:45 webmind ndevos: I think there is one planned.
09:49 webmind someone is uploading a file to the server, io/cpu load spikes up and I get this in gluster volume heal <name> info
09:50 webmind can't copy paste, anyway, the file gets listed as being on one of the bricks
09:50 jri_ joined #gluster
09:51 webmind ndevos: did I understand correctly that this should not happen?
09:52 webmind it's btw packaged with ubuntu 14.04
09:52 ndevos webmind: oh, the detection of files in need of healing is not always correct; when a file is actively used the detection can think it is in split-brain because not all operations are done on both/all copies at the exact same time
09:53 webmind ok
09:53 ndevos once the file is not in use anymore, it should not be listed as needing healing
09:53 webmind any idea what could cause the high load here?
09:54 ndevos you should try to find the process that causes it, the full commandline of the process identifies the service and can point you to the log file
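
Aside: something like this shows the busiest gluster processes along with their full command lines:

    ps -C glusterfsd,glusterfs -o pid,pcpu,args --sort=-pcpu | head
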
09:57 arcolife joined #gluster
09:58 webmind I do get: 0-management: Received heal vol req for volume bijlagen in the log btw
09:58 webmind ndevos: it's glusterfsd
09:59 ndevos webmind: whats the full commandline? it would tell you if it is a brick, fuse-mount, gluster/nfs, self-heal or anything else
09:59 webmind there's a lot of "0-bijlagen-marker: No data available occurred while creating symlinks" in the log
09:59 webmind /usr/sbin/glusterfsd -s 192.168.151.21 --volfile-id bijlagen.192.168.151.21.mnt-glusterdata-bijlagen -p /var/lib/glusterd/vols/bijlagen/run/192.168.151.21-mnt-glusterdata-bijlagen.pid -S /var/run/655af2bd2d17b046122013c3fdbbab75.socket --brick-name /mnt/glusterdata/bijlagen -l /var/log/glusterfs/bricks/mnt-glusterdata-bijlagen.log --xlator-option *-posix.glusterd-uuid=cbac6441-a075-4b62-abbf-e163aeca1cc6
10:00 webmind --brick-port 49152 --xlator-option bijlagen-server.listen-port=49152
10:00 webmind so a brick it seems
10:04 ndevos yes, that is a brick process
10:04 ndevos they normally should not need a lot of cpu... I bet it'll be fixed in a newer version :-/
10:06 webmind hmm, should I contact the package maintainer then? as ubuntu still claims to support this :)
10:06 webmind but the error I posted on symlinks is not relevant?
10:14 rwheeler joined #gluster
10:25 [diablo] joined #gluster
10:42 jiffin joined #gluster
10:45 ndevos webmind: not sure if the symlink warnings are relevant... maybe some directory is missing that should be there and the symlink cannot be placed in the dir?
10:45 [diablo] joined #gluster
10:49 ndevos webmind: maybe you can find some hints in the commit messages for 3.4 - https://github.com/gluster/glusterfs/commits/release-3.4
10:49 glusterbot Title: Commits · gluster/glusterfs · GitHub (at github.com)
10:50 ndevos webmind: I suggest to search for "marker", most messages should have a link to a BUG, just open https://bugzilla.redhat.com/1234567
10:51 glusterbot Title: Bug 1234567 – Package should not ship a separate emacs sub-package (at bugzilla.redhat.com)
10:51 ndevos haha
10:51 ppai joined #gluster
10:55 satya4ever joined #gluster
10:56 nashvil22 joined #gluster
10:57 kramdoss_ joined #gluster
11:01 nashvil22 Hi All, i'm trying to diagnose an issue with glusterfs geo-replication. When i run the geo-replication status command, all 6 of my bricks except brick2 are in Changelog Crawl status. Brick2 has been in History Crawl since 21st August 2016, and its status flips from Active to Faulty and back to Active.
11:01 nashvil22 Could someone please direct me where to look to fix this issue?
11:02 nashvil22 I've tried removing the geo-replication session and re-creating it, and that didn't fix the issue.
11:03 nashvil22 I've checked the obvious causes, such as brick status and whether the brick ports are open.
11:07 Lee1092 joined #gluster
11:08 webmind ndevos: thanks
11:18 mhulsman1 joined #gluster
11:20 ndevos nashvil22: maybe kotreshhr or aravindavk can help out, otherwise it would be best to send an email to gluster-users@gluster.org
11:21 aravindavk ndevos: thanks, I will look into this issue
11:22 aravindavk nashvil22: please share the log from Faulty node
11:22 ndevos aravindavk++ thank you
11:22 glusterbot ndevos: aravindavk's karma is now 1
11:23 Philambdo joined #gluster
11:23 aravindavk nashvil22: It can't complete the History crawl because that worker keeps going Faulty and coming back. Every worker restart begins with a History crawl to process the backlog of changelogs, then switches to Changelog crawl
11:29 nashvil22 aravindavk: Thanks. can you pls guide me on which log file i need to look at? there are several log files.
11:29 jtux joined #gluster
11:30 aravindavk nashvil22: /var/log/glusterfs/geo-replication/<master>_<slavehost>_<slavevol>/*.log from faulty node
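
Aside: a sketch of the usual checks, with placeholder names for the master volume and the slave:

    gluster volume geo-replication mastervol slavehost::slavevol status detail
    tail -f /var/log/glusterfs/geo-replication/mastervol_slavehost_slavevol/*.log
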
11:31 Mattias left #gluster
11:31 nashvil22 aravindavk: Thanks, will check.
11:33 hgowtham joined #gluster
11:35 Saravanakmr #REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC ( start ~in 25 minutes) in #gluster-meeting
11:39 gem joined #gluster
11:41 nashvil22 <aravindavk>: http://imgur.com/a/HEEtv
11:41 glusterbot Title: Imgur: The most awesome images on the Internet (at imgur.com)
11:41 nashvil22 Its screenshot of log file, sorry i can't copy/paste as it's restricted env.
11:44 mhulsman joined #gluster
11:44 ira joined #gluster
11:45 mhulsman1 joined #gluster
11:49 mchangir joined #gluster
11:49 aravindavk joined #gluster
11:50 aravindavk nashvil22: which version?
11:51 aravindavk nashvil22: that issue is already fixed, let me know which version you are using
11:51 aravindavk nashvil22: any other tracebacks in log file?
11:52 nashvil22 version 3.7.11
11:52 unclemarc joined #gluster
11:53 nashvil22 No, all Tracebacks errors point to Brick2.
11:54 aravindavk nashvil22: same traceback repeating?
11:54 aravindavk nashvil22: or different tracebacks?
11:54 aravindavk nashvil22: this one is fixed in 3.7.13 https://bugzilla.redhat.com/show_bug.cgi?id=1348085
11:55 glusterbot Bug 1348085: high, unspecified, ---, avishwan, CLOSED CURRENTRELEASE, [geo-rep]: Worker crashed with "KeyError: "
11:55 nashvil22 yes traceback repeats
11:57 TvL2386 joined #gluster
11:58 robb_nl joined #gluster
12:00 nashvil22 <aravindavk> : Thanks, i'll try to reproduce the issue locally then upgrade to 3.7.13
12:01 aravindavk nashvil22: ok, let me know if you need any help
12:07 Bhaskarakiran joined #gluster
12:13 d0nn1e joined #gluster
12:15 devyani7 joined #gluster
12:19 webmind ah, nice there is an ubuntu ppa :)
12:20 webmind 3.8 is not stable, do I read that correctly?
12:20 ndevos 3.8 should be stable, and the ,,(ppa) has packages for it
12:20 glusterbot The official glusterfs packages for Ubuntu are available here: 3.6: http://goo.gl/XyYImN, 3.7: https://goo.gl/aAJEN5, 3.8: https://goo.gl/gURh2q
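
Aside: on Ubuntu that typically means something like the following; the PPA name matches the 3.8 link above:

    sudo add-apt-repository ppa:gluster/glusterfs-3.8
    sudo apt-get update
    sudo apt-get install glusterfs-server
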
12:21 plarsen joined #gluster
12:39 johnmilton joined #gluster
12:40 webmind thnx
12:40 ieth0 joined #gluster
12:59 shaunm joined #gluster
13:00 ZachLanich joined #gluster
13:03 rwheeler joined #gluster
13:14 atinm joined #gluster
13:21 arcolife joined #gluster
13:28 devyani7 joined #gluster
13:38 jobewan joined #gluster
13:45 mhulsman joined #gluster
13:45 prth joined #gluster
13:47 squizzi_ joined #gluster
13:48 mhulsman1 joined #gluster
13:50 Gambit15 joined #gluster
13:55 skylar joined #gluster
13:57 hagarth joined #gluster
14:01 ieth0 joined #gluster
14:01 baojg joined #gluster
14:02 baojg joined #gluster
14:02 baojg joined #gluster
14:03 baojg joined #gluster
14:04 baojg joined #gluster
14:07 mreamy joined #gluster
14:10 BitByteNybble110 joined #gluster
14:15 armin joined #gluster
14:16 baojg joined #gluster
14:25 squizzi_ joined #gluster
14:29 jwd joined #gluster
14:42 bowhunter joined #gluster
14:56 mhulsman joined #gluster
14:56 nbalacha joined #gluster
14:57 hagarth joined #gluster
15:01 wushudoin joined #gluster
15:01 wushudoin joined #gluster
15:03 baudster- joined #gluster
15:10 glustin joined #gluster
15:11 d-fence_ joined #gluster
15:12 zerick joined #gluster
15:13 mlhess joined #gluster
15:14 aravindavk joined #gluster
15:14 kpease joined #gluster
15:15 bluenemo joined #gluster
15:16 owlbot joined #gluster
15:20 Saravanakmr joined #gluster
15:30 hagarth joined #gluster
15:32 derjohn_mob joined #gluster
15:38 aravindavk joined #gluster
15:39 bfoster joined #gluster
15:42 bkolden joined #gluster
15:45 jkroon joined #gluster
15:45 squizzi_ joined #gluster
15:46 xMopxShell joined #gluster
15:57 kotreshhr left #gluster
16:09 prth joined #gluster
16:13 armyriad joined #gluster
16:21 jockek joined #gluster
16:23 mchangir joined #gluster
16:23 Gambit15 joined #gluster
16:27 aravindavk joined #gluster
16:32 prth joined #gluster
16:46 squizzi_ joined #gluster
16:51 mhulsman joined #gluster
16:58 cholcombe joined #gluster
17:25 guhcampos joined #gluster
17:25 plarsen joined #gluster
17:38 sankarshan_ joined #gluster
17:40 prth joined #gluster
17:44 rastar joined #gluster
17:51 deniszh joined #gluster
17:59 cholcombe joined #gluster
18:12 jobewan joined #gluster
18:16 bowhunter joined #gluster
18:19 pedrogibson joined #gluster
18:21 robb_nl joined #gluster
18:23 pedrogibson > JoeJulian : I had looked in the .../1/hooks directory for a sample py script that might show us how to set a feature/attribute in all of our /var/lib/glusterd/<vol>.vol configuration files, but did not have any luck.  We have 18 nodes, so it is a lot of vol files.  Other than .py hook scripts is there another suggestion on how to implement these feature/option changes in our countless .vol files? (we are not opposed to taking
18:42 JoeJulian pedrogibson: The default is to manage volumes through the cli. If you need to modify the vol file (and you know why) then hooks are the only way to do that. I have never had the need to alter the volume via hooks, so I would probably have to read the source to figure out which processes need to be hooked and whether it needs to be pre or post.
18:43 JoeJulian You don't need to alter vol files on your clients; they retrieve them from the servers, so you may not need to manage as many nodes as you think.
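
Aside: hooks are plain executables placed under /var/lib/glusterd/hooks/1/<event>/{pre,post}/ on each server; a minimal, untested sketch of a post-start hook:

    #!/bin/sh
    # /var/lib/glusterd/hooks/1/start/post/S99-example.sh (must be executable;
    # the leading "S" is what makes glusterd run it)
    # glusterd invokes hooks with arguments like --volname=<vol>
    for arg in "$@"; do
        case $arg in
            --volname=*) VOL=${arg#--volname=} ;;
        esac
    done
    logger "hook: volume $VOL started"
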
18:57 ira joined #gluster
18:57 robb_nl joined #gluster
19:08 level7 joined #gluster
19:16 MmikeM joined #gluster
19:17 bitchecker joined #gluster
19:17 Acinonyx joined #gluster
19:17 syadnom joined #gluster
19:17 mbukatov joined #gluster
19:17 galeido joined #gluster
19:17 repnzscasb joined #gluster
19:18 xavih joined #gluster
19:19 armin joined #gluster
19:23 john51 joined #gluster
19:23 micke joined #gluster
19:23 XpineX joined #gluster
19:23 ws2k3 joined #gluster
19:23 suliba joined #gluster
19:23 ira joined #gluster
19:23 juhaj joined #gluster
19:23 squeakyneb joined #gluster
19:23 om joined #gluster
19:23 martinetd joined #gluster
19:23 pocketprotector joined #gluster
19:23 Ramereth joined #gluster
19:23 bio__ joined #gluster
19:23 dataio_ joined #gluster
19:23 lanning joined #gluster
19:23 DJClean joined #gluster
19:23 snila_ joined #gluster
19:23 abyss^ joined #gluster
19:23 eryc joined #gluster
19:23 alvinstarr joined #gluster
19:23 csaba joined #gluster
19:23 portante joined #gluster
19:23 Iouns joined #gluster
19:23 shruti joined #gluster
19:23 sysanthrope joined #gluster
19:23 malevolent joined #gluster
19:23 social joined #gluster
19:23 fsimonce joined #gluster
19:23 arcolife joined #gluster
19:23 d-fence_ joined #gluster
19:23 owlbot joined #gluster
19:23 cvstealth joined #gluster
19:23 amye joined #gluster
19:23 eKKiM joined #gluster
19:23 ahino joined #gluster
19:23 Larsen_ joined #gluster
19:23 Gugge joined #gluster
19:23 mattmcc joined #gluster
19:23 samppah joined #gluster
19:23 JoeJulian joined #gluster
19:23 _fortis joined #gluster
19:23 purpleidea joined #gluster
19:23 post-factum joined #gluster
19:23 inodb joined #gluster
19:23 pkalever joined #gluster
19:23 shaunm joined #gluster
19:23 kpease joined #gluster
19:23 gluytium joined #gluster
19:23 coreping joined #gluster
19:23 rjoseph|afk joined #gluster
19:23 ackjewt joined #gluster
19:23 Dasiel joined #gluster
19:23 cliluw joined #gluster
19:23 jockek joined #gluster
19:23 yawkat joined #gluster
19:23 tomaz__ joined #gluster
19:23 semiosis joined #gluster
19:23 _nixpanic joined #gluster
19:23 Vaelatern joined #gluster
19:23 Kins joined #gluster
19:23 stopbyte joined #gluster
19:23 yoavz joined #gluster
19:23 nhayashi joined #gluster
19:23 side_control joined #gluster
19:23 partner joined #gluster
19:23 webmind joined #gluster
19:23 skylar joined #gluster
19:23 armyriad joined #gluster
19:23 squizzi_ joined #gluster
19:23 level7 joined #gluster
19:23 legreffier joined #gluster
19:23 Champi joined #gluster
19:23 ivan_rossi joined #gluster
19:23 markd_ joined #gluster
19:23 [o__o] joined #gluster
19:23 xMopxShell joined #gluster
19:23 dgandhi joined #gluster
19:23 pvi joined #gluster
19:23 scuttle` joined #gluster
19:23 thwam joined #gluster
19:23 RustyB joined #gluster
19:23 billputer joined #gluster
19:23 Trefex_ joined #gluster
19:23 nohitall joined #gluster
19:23 morse joined #gluster
19:23 ron-slc joined #gluster
19:23 d4n13L joined #gluster
19:23 mlhess joined #gluster
19:23 prth joined #gluster
19:23 yalu_ joined #gluster
19:23 gbox joined #gluster
19:23 JonathanD joined #gluster
19:23 thatgraemeguy joined #gluster
19:23 scc joined #gluster
19:23 glusterbot joined #gluster
19:23 jvandewege joined #gluster
19:23 Anarka joined #gluster
19:23 Klas joined #gluster
19:23 shyam joined #gluster
19:23 unclemarc joined #gluster
19:23 dupondje_ joined #gluster
19:23 fus_ joined #gluster
19:23 Ulrar joined #gluster
19:23 verdurin joined #gluster
19:23 moss joined #gluster
19:23 arif-ali joined #gluster
19:23 cloph_away joined #gluster
19:23 crag joined #gluster
19:23 coredumb joined #gluster
19:23 rossdm joined #gluster
19:23 Guest35265 joined #gluster
19:23 Nebraskka joined #gluster
19:23 rastar joined #gluster
19:23 bkolden joined #gluster
19:23 mreamy joined #gluster
19:23 muneerse2 joined #gluster
19:23 jbrooks joined #gluster
19:23 edong23 joined #gluster
19:23 Jacob843 joined #gluster
19:23 ehermes joined #gluster
19:23 Peppard joined #gluster
19:23 Javezim joined #gluster
19:23 JPaul joined #gluster
19:23 the-me joined #gluster
19:23 DV joined #gluster
19:23 burn joined #gluster
19:23 d-fence joined #gluster
19:23 kevc joined #gluster
19:23 tg2 joined #gluster
19:23 misc joined #gluster
19:23 al joined #gluster
19:23 devyani7_ joined #gluster
19:23 mmckeen joined #gluster
19:23 shortdudey123 joined #gluster
19:23 jwd joined #gluster
19:23 bfoster joined #gluster
19:23 Guest31342 joined #gluster
19:23 sloop joined #gluster
19:23 robb_nl joined #gluster
19:23 Gambit15 joined #gluster
19:23 atrius joined #gluster
19:23 kkeithley joined #gluster
19:23 Vaizki joined #gluster
19:23 rideh joined #gluster
19:23 ndk- joined #gluster
19:23 klaas joined #gluster
19:23 LiftedKilt joined #gluster
19:23 m0zes joined #gluster
19:23 ndevos joined #gluster
19:23 gnulnx joined #gluster
19:23 plarsen joined #gluster
19:23 glustin joined #gluster
19:23 wushudoin joined #gluster
19:23 hchiramm joined #gluster
19:23 siel joined #gluster
19:23 MadPsy joined #gluster
19:23 lalatenduM joined #gluster
19:23 masber joined #gluster
19:23 victori joined #gluster
19:23 sac joined #gluster
19:23 overclk joined #gluster
19:23 snehring joined #gluster
19:23 sage_ joined #gluster
19:23 virusuy joined #gluster
19:23 loadtheacc joined #gluster
19:23 saltsa joined #gluster
19:23 Telsin joined #gluster
19:23 eightyeight joined #gluster
19:23 anoopcs joined #gluster
19:23 uebera|| joined #gluster
19:23 wiza joined #gluster
19:23 sankarshan_away joined #gluster
19:23 tru_tru joined #gluster
19:23 logan- joined #gluster
19:23 Bardack joined #gluster
19:23 Guest89761 joined #gluster
19:23 ashka joined #gluster
19:23 d0nn1e joined #gluster
19:23 Wizek_ joined #gluster
19:23 johnmilton joined #gluster
19:23 gem joined #gluster
19:23 ic0n joined #gluster
19:23 eMBee joined #gluster
19:23 delhage joined #gluster
19:23 ebbex_ joined #gluster
19:23 frakt joined #gluster
19:23 The_Ball joined #gluster
19:23 gvandeweyer joined #gluster
19:23 jesk joined #gluster
19:23 javi404 joined #gluster
19:23 ItsMe`` joined #gluster
19:23 wistof joined #gluster
19:23 swebb joined #gluster
19:23 rjoseph|afk joined #gluster
19:23 decay joined #gluster
19:23 amye joined #gluster
19:23 mbukatov joined #gluster
19:23 marlinc joined #gluster
19:23 valkyr1e joined #gluster
19:24 armin joined #gluster
19:24 sadbox joined #gluster
19:24 amye left #gluster
19:24 lkoranda joined #gluster
19:25 unforgiven512 joined #gluster
19:25 yosafbridge joined #gluster
19:26 BitByteNybble110 joined #gluster
19:26 Philambdo joined #gluster
19:26 guhcampos joined #gluster
19:26 Arrfab joined #gluster
19:26 Mmike joined #gluster
19:26 primusinterpares joined #gluster
19:26 john51 joined #gluster
19:27 jiffin joined #gluster
19:27 Mmike joined #gluster
19:28 steveeJ joined #gluster
19:29 Dave joined #gluster
19:31 jwd joined #gluster
19:33 fyxim joined #gluster
19:33 raghu` joined #gluster
19:36 AppStore joined #gluster
19:45 gluytium joined #gluster
19:48 hchiramm joined #gluster
19:48 level7_ joined #gluster
19:50 robb_nl joined #gluster
19:56 samsaffron___ joined #gluster
19:58 scubacuda joined #gluster
20:01 prth joined #gluster
20:01 telius joined #gluster
20:03 davidj joined #gluster
20:03 jobewan joined #gluster
20:05 lh joined #gluster
20:06 twisted` joined #gluster
20:16 bluenemo joined #gluster
20:17 hagarth joined #gluster
20:18 billputer joined #gluster
20:18 PotatoGim joined #gluster
20:18 tyler274 joined #gluster
20:18 cholcombe joined #gluster
20:24 l3vi joined #gluster
20:25 prth joined #gluster
20:25 l3vi hello. i was going to try and set up haproxy <-> gluster servers, but I don't think I need to do that. I read somewhere that it has automatic failover, but does it offer any load balancing like round robin or leastconn?
20:28 JoeJulian The default is first-to-respond, but there are other options. See cluster.read-hash-mode and cluster.read-subvolume (in "gluster volume set help")
20:28 l3vi JoeJulian: thank you very much
20:28 JoeJulian Typically first-to-respond is going to be more efficient.
20:28 l3vi i appreciate you pointing me in the right direction
20:29 JoeJulian You want as many clients as possible reading the same file from the same server to take advantage of disk seek and kernel caching.
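
Aside: for reference, those knobs are ordinary per-volume options; a sketch with a placeholder volume name:

    gluster volume set help | grep -A3 read-hash-mode   # list the option and its description
    gluster volume set myvol cluster.read-hash-mode 1   # e.g. pick the read child by hashing the file's gfid
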
20:29 l3vi brilliant
20:34 l3vi JoeJulian: thanks again, i'm going to take off now. have a good day
20:37 derjohn_mob joined #gluster
20:41 jwd joined #gluster
20:42 skylar joined #gluster
20:50 hchiramm joined #gluster
20:52 slunatecqo joined #gluster
20:56 slunatecqo Hi - I am trying to create a dockerized virtual gluster cluster. I managed to create the cluster and start a volume, but I cannot connect to it. Here are the commands I used http://pastebin.com/9nGenJn8 The log says just "0-glusterfs: failed to get the 'volume file' from server" Any ideas?
20:56 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
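
Aside: "failed to get the 'volume file'" usually means the client never reached glusterd on the volfile server; a hedged sketch of the checks, run from inside the client container (server1 and myvol are placeholders):

    ping -c1 server1                 # does the name resolve inside the container?
    nc -zv server1 24007             # is glusterd's management port reachable?
    mount -t glusterfs server1:/myvol /mnt
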
21:03 galeido joined #gluster
21:03 d0nn1e joined #gluster
21:22 bluenemo joined #gluster
21:22 cliluw joined #gluster
21:22 skylar joined #gluster
21:29 ilbot3 joined #gluster
21:29 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
21:29 tom[] joined #gluster
21:30 steveeJ joined #gluster
21:30 rwheeler joined #gluster
21:30 kpease joined #gluster
21:33 AppStore joined #gluster
21:35 lh joined #gluster
21:37 scubacuda joined #gluster
21:42 slunatecqo left #gluster
21:43 markd_ joined #gluster
21:57 Chinorro joined #gluster
22:06 samikshan joined #gluster
22:06 tdasilva joined #gluster
22:28 fcoelho joined #gluster
22:38 plarsen joined #gluster
23:06 jeremyh joined #gluster
23:15 fang64 joined #gluster
23:42 eightyeight I'm writing some training material for staff, and am curious about good troubleshooting lab ideas for them to solve, but i'm coming up blank.
23:43 eightyeight Any ideas on some simple labs staff could do for troubleshooting various GlusterFS scenarios?
23:43 eightyeight Each staff member will have 5 VMs, each with 2 virtual disks to play with.
23:46 misc network partition
23:46 misc disk corruption
23:46 misc disk full
23:48 misc slowness of the link, that's a bit harder to simulate (but you can do it with tc)
23:48 eightyeight Network partition? How would you do disk corruption?
23:49 misc so network partition, cut the link between the VMs
23:49 misc like 2 on 1 side, 3 on the other
23:50 eightyeight Ah
23:50 misc and for disk corruption, I think just writing random stuff on the file with dd would corrupt it
23:51 eightyeight If you do that from the mount, it will be considered consistent though.
23:52 eightyeight GlusterFS doesn't care what data is in the file.
23:52 eightyeight Although, I guess I could corrupt data on one brick
23:52 eightyeight But what could I get back from Gluster that shows something is broken, and needs repair?
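
Aside: hedged sketches of the scenarios misc suggested; interfaces, peer addresses, and brick paths are placeholders:

    # network partition: drop traffic from the peers on the other side
    iptables -A INPUT -s 192.0.2.11 -j DROP

    # silent corruption: scribble on a file directly on one brick, not via the mount
    dd if=/dev/urandom of=/bricks/brick1/somefile bs=4k count=1 conv=notrunc
    gluster volume heal myvol info       # replica mismatches show up here once gluster notices them
    gluster volume bitrot myvol enable   # 3.7+ scrubber can flag silent on-disk corruption

    # slow link: add latency with netem
    tc qdisc add dev eth0 root netem delay 200ms
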
