
IRC log for #gluster, 2017-05-29


All times shown according to UTC.

Time Nick Message
00:19 shyam joined #gluster
01:04 Jacob843 joined #gluster
01:15 Jacob843 joined #gluster
01:41 masber joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:50 Jacob843 joined #gluster
01:56 Jacob843 joined #gluster
01:59 Jacob843 joined #gluster
02:00 Jacob843 joined #gluster
02:05 derjohn_mob joined #gluster
02:31 ankitr joined #gluster
02:45 shyam joined #gluster
02:51 prasanth joined #gluster
03:06 pioto_ joined #gluster
03:07 Gambit15 joined #gluster
03:11 kramdoss_ joined #gluster
03:15 BlackoutWNCT Hey Guys, I've got a split braining issue on a replica 2 arbiter 1 volume. I was hoping someone would be able to help me out with how to resolve this.
03:16 BlackoutWNCT Files aren't displaying in a gluster volume heal info
03:16 BlackoutWNCT And nothing's displaying in a gluster volume heal info split-brain
03:17 ppai joined #gluster
03:18 BlackoutWNCT A gluster volume heal info {healed,heal-failed} outputs "Gathering list of healed entries on volume * has been unsuccessful on bricks that are down. Please check if all brick processes are running."
03:20 BlackoutWNCT A gluster volume status however shows that all bricks are online.
03:20 BlackoutWNCT The only thing offline is a single NFS server.
03:20 BlackoutWNCT But that shouldn't be causing me these issues right?
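For readers hitting the same symptom, the usual diagnostic sequence looks roughly like the following (VOLNAME is a placeholder; the split-brain resolution policies at the end are the standard ones, shown here only as examples):

```shell
# Hypothetical volume name; substitute your own.
VOLNAME=myvol

# Entries pending heal, and entries in split-brain specifically:
gluster volume heal "$VOLNAME" info
gluster volume heal "$VOLNAME" info split-brain

# Confirm every brick (and its self-heal daemon) is actually online:
gluster volume status "$VOLNAME"

# If a file is genuinely split-brained, pick a resolution policy, e.g.:
# gluster volume heal "$VOLNAME" split-brain latest-mtime <path-on-volume>
# gluster volume heal "$VOLNAME" split-brain bigger-file <path-on-volume>
```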
03:28 susant left #gluster
03:29 Prasad joined #gluster
03:30 BlackoutWNCT Ok, fixed the issue with the NFS server. still the same result.
03:36 susant joined #gluster
03:37 susant left #gluster
03:42 riyas joined #gluster
03:51 itisravi joined #gluster
03:57 Shu6h3ndu joined #gluster
04:03 nbalacha joined #gluster
04:08 om2 joined #gluster
04:13 atinm joined #gluster
04:35 gyadav joined #gluster
04:38 ankitr joined #gluster
04:44 susant joined #gluster
04:48 ankitr I need a bit of help. I am facing an issue while logging into Gerrit via GitHub.. it says server issue
04:49 ankitr any idea how to fix it? Yesterday I cleared my cookies, and since then signing in throws an error
04:54 om2 joined #gluster
04:56 apandey joined #gluster
05:02 susant joined #gluster
05:04 masber joined #gluster
05:11 poornima joined #gluster
05:11 hgowtham joined #gluster
05:13 gem joined #gluster
05:16 buvanesh_kumar joined #gluster
05:17 rastar joined #gluster
05:26 kotreshhr joined #gluster
05:31 ndarshan joined #gluster
05:41 gem_ joined #gluster
05:41 skoduri joined #gluster
05:44 om2 joined #gluster
05:46 Peppard joined #gluster
05:47 karthik_us joined #gluster
05:51 Humble joined #gluster
05:56 kdhananjay joined #gluster
06:01 rafi joined #gluster
06:04 deniszh joined #gluster
06:09 rafi1 joined #gluster
06:18 aravindavk joined #gluster
06:23 atinm joined #gluster
06:27 ashiq joined #gluster
06:30 sona joined #gluster
06:35 jtux joined #gluster
06:46 Gambit15 joined #gluster
06:47 prasanth joined #gluster
07:02 deniszh joined #gluster
07:07 deniszh1 joined #gluster
07:15 ivan_rossi joined #gluster
07:19 hgowtham_ joined #gluster
07:19 mbukatov joined #gluster
07:21 prasanth joined #gluster
07:50 fsimonce joined #gluster
07:52 gem joined #gluster
08:23 jkroon joined #gluster
08:27 mb_ joined #gluster
08:44 victori joined #gluster
08:47 ivan_rossi joined #gluster
08:47 atinm joined #gluster
08:49 nbalacha rastar, ping
08:49 glusterbot nbalacha: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
08:49 rastar nbalacha: pong
08:49 nbalacha rastar, can you take a look at https://review.gluster.org/17349 when you have time
08:49 glusterbot Title: Gerrit Code Review (at review.gluster.org)
08:49 nbalacha and merge if you have no concerns
08:52 rastar nbalacha: merge
08:52 rastar *merged
08:52 nbalacha rastar, thanks
08:52 nbalacha rastar++
08:52 glusterbot nbalacha: rastar's karma is now 4
08:56 gem joined #gluster
08:59 victori joined #gluster
08:59 ahino joined #gluster
09:17 atinm joined #gluster
09:27 ahino joined #gluster
09:38 ahino joined #gluster
09:45 ndarshan joined #gluster
09:45 ppai joined #gluster
09:47 ahino joined #gluster
09:51 buvanesh_kumar_ joined #gluster
09:52 kramdoss_ joined #gluster
10:05 hgowtham joined #gluster
10:22 scobanx joined #gluster
10:25 kdhananjay joined #gluster
10:26 scobanx Hi, I am watching a heal operation and am confused. This is a 16+4 EC volume. One disk was replaced and is being healed now. I was thinking that the SHD on the failed disk's node would read from the other 16 nodes and write its chunk to the new disk. So for a 2GB file it would read 2GB from the network and write 200MB to its disk, right? I am seeing this is not the case. The failed disk is being healed at 7-8MB/s and there is only 7-8MB/s network in. What am I missing?
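For reference, a quick back-of-envelope for a 16+4 dispersed (erasure-coded) volume, ignoring EC metadata overhead: each brick stores roughly file_size / 16, so for a 2 GiB file the expected write to the healed brick is ~128 MiB (a little less than the 200 MB in the question), while the network read is indeed on the order of the full 2 GiB.

```python
# Rough heal-traffic estimate for a dispersed (erasure-coded) 16+4 volume.
# Each file is split into 16 data fragments plus 4 redundancy fragments,
# so every brick stores roughly file_size / 16.

GiB = 1024 ** 3
MiB = 1024 ** 2

def ec_heal_traffic(file_size, data_fragments=16):
    """Return (bytes read over the network, bytes written to the healed
    brick) to rebuild one lost fragment: data_fragments fragments are
    read from surviving bricks, and one fragment is re-encoded locally."""
    fragment = file_size // data_fragments
    return data_fragments * fragment, fragment

read_bytes, write_bytes = ec_heal_traffic(2 * GiB)
print(read_bytes // MiB, write_bytes // MiB)  # 2048 128
```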
10:32 ahino joined #gluster
10:46 susant joined #gluster
11:09 kovshenin joined #gluster
11:11 tallmocha joined #gluster
11:36 buvanesh_kumar_ joined #gluster
12:00 kovshenin joined #gluster
12:03 pioto_ joined #gluster
12:04 Prasad joined #gluster
12:05 buvanesh_kumar joined #gluster
12:12 p7mo joined #gluster
12:33 sanoj joined #gluster
12:40 loadtheacc joined #gluster
12:48 fokas joined #gluster
12:49 fokas hi all, quick question about gluster + NFS, in a production environment, would you use gluster internal NFS capability or mount volumes as native gluster volumes locally on each node and export through the system NFS ?
12:56 _KaszpiR_ joined #gluster
13:06 Jules- is there possibly a bug in the ACL logic of glusterfs? On the latest release I try to remove ACLs using setfacl -b directory, but it instantly adds the previous ACL again?!
13:06 ankitr joined #gluster
13:12 skoduri Jules-, are you trying it on gluster-NFS or fuse-mount?
13:13 Jules- both
13:14 Jules- none of it works
13:14 Jules- on nfs share i see that it was shortly removed then gets readded
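For anyone reproducing Jules-'s report: on a plain ACL-capable local filesystem, setfacl -b is expected to drop all extended ACL entries, which makes the reported reappearance on gluster mounts look like caching or translator behaviour. A minimal local sanity check (assumes the acl tools are installed and the filesystem supports POSIX ACLs):

```shell
f=$(mktemp)
setfacl -m u:0:rw "$f"   # add an extended ACL entry for uid 0
getfacl -c "$f"          # shows a "user:root:rw-" line
setfacl -b "$f"          # remove ALL extended ACL entries
getfacl -c "$f"          # only owner/group/other lines should remain
rm -f "$f"
```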
13:23 ndevos fokas: either use the old Gluster/NFS, or the more fully featured nfs-ganesha
13:27 _KaszpiR_ joined #gluster
13:31 kramdoss_ joined #gluster
13:34 arpu joined #gluster
13:36 arpu_ joined #gluster
13:48 fokas is nfs-ganesha bringing anything when parallel NFS is not really mandatory? I'm doing a very limited setup here (2 nodes + 1 arbiter) with the commonly described CTDB + DNS Round Robin
13:52 shyam joined #gluster
13:53 kramdoss_ joined #gluster
13:54 Jules- ndevos: can you tell me why setfacl -b file/directory doesn't function with either the glusterfs fuse-client or nfs?
14:03 nbalacha joined #gluster
14:17 ndevos Jules-: I don't know... I would capture a tcpdump and check with wireshark whether any (f)getxattr returns something after removing the ACLs
14:17 ndevos fokas:
14:18 Jules- ndevos: might it be possible to change permissions on the glusterfs bricks directly? or will it break my gluster?
14:18 ndevos fokas: nfs-ganesha is useful for non-pnfs workloads too, it is the recommended nfs server and can be used better with pacemaker than gluster/nfs with ctdb
14:19 ndevos Jules-: yes, it might give you unexpected results, it is not recommended
14:20 ndevos Jules-: something might be caching the ACL, it is just the question where that happens, changing the ACL on the bricks will not remove the cached version
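ndevos's tcpdump suggestion, sketched out. The port list is an assumption: 24007 is glusterd's management port, and brick ports commonly sit in the 49152+ range; adjust to your deployment.

```shell
# On the client, capture gluster traffic while reproducing the setfacl -b:
tcpdump -i any -s 0 -w /tmp/acl.pcap 'port 24007 or portrange 49152-49251'

# Then open /tmp/acl.pcap in wireshark (which has a GlusterFS dissector)
# and look for (f)getxattr replies still carrying the POSIX ACL xattr,
# e.g. with a display filter along the lines of:
#   glusterfs && frame contains "system.posix_acl_access"
```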
14:25 sona joined #gluster
14:25 percevalbot joined #gluster
14:49 pioto joined #gluster
15:01 tallmocha joined #gluster
15:18 saali joined #gluster
15:21 saali joined #gluster
15:31 mlessard joined #gluster
15:32 mlessard Hi guys, if I want to grow the filesystem on a replicated brick, what is the procedure?
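mlessard's question went unanswered in-channel. A common approach, assuming the typical LVM-backed XFS brick layout (paths and sizes here are hypothetical; adjust for your stack): grow the underlying logical volume and filesystem on each replica in turn, and gluster picks up the new brick size automatically.

```shell
# Run on EACH node holding a replica of the brick:
lvextend -L +100G /dev/vg_bricks/lv_brick1   # grow the logical volume
xfs_growfs /bricks/brick1                    # grow XFS online (mountpoint)

# Verify the volume sees the new capacity:
df -h /bricks/brick1
gluster volume status VOLNAME detail
```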
15:40 humblec joined #gluster
15:42 ankitr joined #gluster
15:45 plarsen joined #gluster
15:57 skoduri joined #gluster
16:11 Karan joined #gluster
16:26 Gambit15 joined #gluster
16:29 MarkAllasread left #gluster
16:31 d-fence joined #gluster
16:35 ivan_rossi left #gluster
16:35 gyadav joined #gluster
16:56 Gambit15 Hi all
16:56 Gambit15 JoeJulian, are you around per chance?
17:04 Ashutto joined #gluster
17:05 Ashutto @help nopaste
17:05 glusterbot Ashutto: Error: There is no command "nopaste".
17:05 Ashutto help nopaste
17:05 Ashutto nopaste
17:05 Ashutto grr
17:07 Gambit15 I think I've encountered a bug in 3.8.12, however I'm not too sure
17:07 Gambit15 Here's what I wrote in the dev channel
17:08 Gambit15 Hi all, I'm running an arbiter with Gluster 3.8.12 & CentOS 7, however it has recently stopped activating its bricks
17:08 Gambit15 glusterd runs without issue, however no glusterfsd processes are started
17:08 Gambit15 If I execute a glusterfsd brick process manually, the "gluster volume" commands all continue to time out, however other peers still seem to be able to communicate with it, as entries start appearing in their heal logs
17:08 Gambit15 DEBUG shows no errors & provides no insight as to why the brick processes aren't being started
17:10 Ashutto Hello, what does this log mean? https://nopaste.me/view/072764c2
17:10 glusterbot Title: gluster - Nopaste.me (at nopaste.me)
17:14 Gambit15 Ashutto, do you get anything back from "getfattr -d -m . -e hex /bricks/vol-gaz-homes/safe/bricks/brick0/classifica_marcatori.json" ?
17:20 Ashutto what are we looking for?
17:20 Ashutto I'll paste it in a second, btw :)
17:24 Ashutto https://nopaste.me/view/016590df
17:24 Ashutto these are the attributes of each brick's file
17:24 Ashutto I have a 3 node replica
17:31 _KaszpiR_ fokas native gluster client
17:31 _KaszpiR_ or ganesha
17:31 _KaszpiR_ but if you have 2 node + arbiter, then just nfs should be enough
17:31 _KaszpiR_ don't overkill it
17:33 saali joined #gluster
17:44 social joined #gluster
18:01 susant joined #gluster
18:10 ajph joined #gluster
18:20 kovshenin joined #gluster
18:40 saali joined #gluster
18:41 rastar joined #gluster
18:50 Jacob843 joined #gluster
19:21 Jacob843 joined #gluster
19:34 social joined #gluster
19:35 shyam joined #gluster
19:38 ahino joined #gluster
19:38 mlessard joined #gluster
20:01 Telsin joined #gluster
20:12 rastar joined #gluster
20:13 tallmocha joined #gluster
20:37 jkroon joined #gluster
20:52 Ashutto joined #gluster
21:32 armyriad joined #gluster
21:39 guhcampos joined #gluster
21:39 plarsen joined #gluster
22:06 shyam joined #gluster
22:10 sanoj joined #gluster
22:30 gospod joined #gluster
22:32 gospod reboot testing: the 2nd node occasionally doesn't come up fine (only Brick N Online), glusterd.log is spamming useless sh!t, which log file should I check for debugging?
22:50 MrAbaddon joined #gluster
23:02 Gambit15 Gluster only provides "info" level logging by default. Start it with -LDEBUG -l/var/log/glusterd/debug.log
23:03 MrAbaddon joined #gluster
23:03 Gambit15 ...or --debug to not daemonize & log to stderr
