
IRC log for #gluster, 2015-05-14


All times shown according to UTC.

Time Nick Message
00:00 ShaunR left #gluster
00:01 akay joined #gluster
00:03 rafaelcapucho joined #gluster
00:10 Slashman joined #gluster
00:16 akay hey guys, can anyone recommend the best ways to monitor performance of gluster? i've got two gluster setups, one runs fast, the other slow, but they have very similar setups. i've tried using the 'gluster volume profile' command to see the number of opens/writes/etc but is there anything else i can do?
00:47 mat1010 joined #gluster
00:55 jbrooks joined #gluster
01:02 theron joined #gluster
01:21 harish__ joined #gluster
01:33 itisravi joined #gluster
01:45 DV joined #gluster
02:11 kevein joined #gluster
02:37 msmith_ joined #gluster
02:44 PaulCuzner joined #gluster
03:05 bharata-rao joined #gluster
03:33 shubhendu_ joined #gluster
03:46 [7] joined #gluster
03:46 RameshN joined #gluster
03:48 PaulCuzner joined #gluster
03:55 kanagaraj joined #gluster
04:00 kumar joined #gluster
04:03 ndarshan joined #gluster
04:08 ndarshan joined #gluster
04:17 RameshN joined #gluster
04:23 spandit joined #gluster
04:26 rjoseph joined #gluster
04:30 jiffin joined #gluster
04:32 dusmant joined #gluster
04:32 JoeJulian akay: profile and top, that's about it.
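For reference, the two built-in tools JoeJulian mentions can be driven like this (a minimal sketch; the volume name `myvol` and the brick path are placeholders, not names from this channel):

```shell
# Per-brick I/O statistics: start collection, inspect, stop.
gluster volume profile myvol start
gluster volume profile myvol info      # latency and call counts per file operation
gluster volume profile myvol stop

# "top" shows current hot spots, e.g. most-opened files on one brick
# and measured read throughput:
gluster volume top myvol open brick server1:/path/to/brick list-cnt 10
gluster volume top myvol read-perf bs 4096 count 1024
```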
04:34 JoeJulian akay: Had a talk from a facebook dev at gluster summit about how they dump stats to a text file using some custom modifications they made. They'll be submitting those patches and there's been interest in getting them merged in. Hopefully, tools from users like Facebook who are very good at optimizing every little thing will help the rest of us.
04:36 ppai joined #gluster
04:36 ChrisHolcombe joined #gluster
04:38 Bhaskarakiran joined #gluster
04:47 milkyline joined #gluster
04:53 milkyline Is there any way to convert a Distributed volume to a Dispersed volume? Or is the suggested path to create a new volume?
04:54 deepakcs joined #gluster
04:55 schandra|WFH joined #gluster
04:56 atalur joined #gluster
05:00 pppp joined #gluster
05:00 Manikandan joined #gluster
05:08 sakshi joined #gluster
05:08 meghanam joined #gluster
05:08 akay great, thanks JoeJulian
05:11 ashiq joined #gluster
05:15 glusterbot News from newglusterbugs: [Bug 1221457] nfs-ganesha+posix: glusterfsd crash while executing the posix testuite <https://bugzilla.redhat.com/show_bug.cgi?id=1221457>
05:19 Apeksha joined #gluster
05:24 anil_ joined #gluster
05:26 Manikandan joined #gluster
05:27 gem joined #gluster
05:30 kevein_ joined #gluster
05:39 rafi joined #gluster
05:39 sage joined #gluster
05:50 gem_ joined #gluster
05:55 maveric_amitc_ joined #gluster
05:59 tessier_ joined #gluster
06:05 smohan joined #gluster
06:22 kshlm joined #gluster
06:25 atalur joined #gluster
06:29 anrao joined #gluster
06:44 meghanam joined #gluster
06:46 glusterbot News from newglusterbugs: [Bug 1221470] dHT rebalance: Dict_copy log messages when running rebalance on a dist-rep volume <https://bugzilla.redhat.com/show_bug.cgi?id=1221470>
06:46 glusterbot News from newglusterbugs: [Bug 1221473] BVT: Posix crash while running BVT on 3.7beta2 build on rhel6.6 <https://bugzilla.redhat.com/show_bug.cgi?id=1221473>
06:48 glusterbot News from resolvedglusterbugs: [Bug 1212377] "Transport Endpoint Error" seen when hot tier is unavailable <https://bugzilla.redhat.com/show_bug.cgi?id=1212377>
06:48 glusterbot News from resolvedglusterbugs: [Bug 1221032] Directories are missing post tier attach <https://bugzilla.redhat.com/show_bug.cgi?id=1221032>
06:48 glusterbot News from resolvedglusterbugs: [Bug 1201621] RDMA: Crash seen during smallfile read test. <https://bugzilla.redhat.com/show_bug.cgi?id=1201621>
06:56 meghanam joined #gluster
07:04 Anjana joined #gluster
07:07 harish__ joined #gluster
07:09 nangthang joined #gluster
07:16 glusterbot News from newglusterbugs: [Bug 1221476] Data Tiering:rebalance fails on a tiered volume <https://bugzilla.redhat.com/show_bug.cgi?id=1221476>
07:16 glusterbot News from newglusterbugs: [Bug 1221477] The tiering feature requires counters. <https://bugzilla.redhat.com/show_bug.cgi?id=1221477>
07:17 spalai joined #gluster
07:18 glusterbot News from resolvedglusterbugs: [Bug 1214249] I/O (fresh writes) failure of failure of hot tier <https://bugzilla.redhat.com/show_bug.cgi?id=1214249>
07:25 nbalacha joined #gluster
07:31 kkeithley1 joined #gluster
07:46 glusterbot News from newglusterbugs: [Bug 1221489] nfs-ganesha +dht :E [server-rpc-fops.c:1048:server_unlink_cbk] 0-vol2-server: 2706777: UNLINK (Permission denied) <https://bugzilla.redhat.com/show_bug.cgi?id=1221489>
07:46 glusterbot News from newglusterbugs: [Bug 1221490] fuse: check return value of setuid <https://bugzilla.redhat.com/show_bug.cgi?id=1221490>
07:48 glusterbot News from resolvedglusterbugs: [Bug 763398] GlustNFS is incompatible with Windows 7 NFS client. <https://bugzilla.redhat.com/show_bug.cgi?id=763398>
07:48 gem_ joined #gluster
07:49 ctria joined #gluster
07:52 meghanam joined #gluster
08:15 hgowtham joined #gluster
08:16 glusterbot News from newglusterbugs: [Bug 1179179] When an unsupported AUTH_* scheme is used, the RPC-Reply should contain MSG_DENIED/AUTH_ERROR/AUTH_FAILED <https://bugzilla.redhat.com/show_bug.cgi?id=1179179>
08:18 lord4163 joined #gluster
08:31 gem joined #gluster
08:31 sas_ joined #gluster
08:31 PaulCuzner joined #gluster
08:37 kovshenin joined #gluster
08:46 glusterbot News from newglusterbugs: [Bug 1221511] nfs-ganesha: OOM killed for nfsd process while executing the posix test suite <https://bugzilla.redhat.com/show_bug.cgi?id=1221511>
08:46 glusterbot News from newglusterbugs: [Bug 1221507] NFS-Ganesha: ACL should not be enabled by default <https://bugzilla.redhat.com/show_bug.cgi?id=1221507>
08:47 dusmant joined #gluster
08:50 karnan joined #gluster
08:50 gem joined #gluster
08:59 soumya joined #gluster
09:08 itisravi_ joined #gluster
09:12 PaulCuzner joined #gluster
09:23 nangthang joined #gluster
09:27 itisravi_ joined #gluster
09:33 atalur joined #gluster
09:45 spandit_ joined #gluster
09:50 ghenry joined #gluster
09:53 gem joined #gluster
09:55 hgowtham joined #gluster
10:04 Manikandan joined #gluster
10:06 ira joined #gluster
10:06 shubhendu joined #gluster
10:07 PaulCuzner joined #gluster
10:07 dusmant joined #gluster
10:09 atalur joined #gluster
10:10 LebedevRI joined #gluster
10:11 gem_ joined #gluster
10:13 jkroon joined #gluster
10:14 PaulCuzner joined #gluster
10:16 glusterbot News from newglusterbugs: [Bug 1221534] rebalance failed after attaching the tier to the volume. <https://bugzilla.redhat.com/show_bug.cgi?id=1221534>
10:25 Norky joined #gluster
10:39 spandit_ joined #gluster
10:43 lord4163 Very exciting software, gluster. Testing it in some VMs now. Just some questions. I see that there is a way to authenticate, but that is based on the source IP. What about malformed packets (IP spoofing)? How secure is this really? Are there different ways to authenticate?
10:45 haomaiw__ joined #gluster
10:46 lord4163 Also the following messages appear from time to time when the machines are stressed, "failed command write fpdma queued". I think it is just an informational message saying that it is waiting for the underlying storage devices?
10:47 glusterbot News from newglusterbugs: [Bug 1221544] [Backup]: Unable to create a glusterfind session <https://bugzilla.redhat.com/show_bug.cgi?id=1221544>
10:52 ppai joined #gluster
10:54 dusmant joined #gluster
10:56 spalai joined #gluster
11:10 nsoffer joined #gluster
11:11 ppai joined #gluster
11:29 harish__ joined #gluster
11:36 spalai joined #gluster
11:37 hagarth joined #gluster
11:40 spandit_ joined #gluster
11:44 al joined #gluster
11:47 glusterbot News from newglusterbugs: [Bug 1221560] `-bash: fork: Cannot allocate memory' error seen regularly on nodes on execution of any command <https://bugzilla.redhat.com/show_bug.cgi?id=1221560>
11:51 rafi joined #gluster
11:52 diegows joined #gluster
12:01 Prilly joined #gluster
12:04 pdrakeweb joined #gluster
12:11 aaronott joined #gluster
12:13 shubhendu joined #gluster
12:17 glusterbot News from newglusterbugs: [Bug 1221577] glusterfsd crashed on a quota enabled volume where snapshots were scheduled <https://bugzilla.redhat.com/show_bug.cgi?id=1221577>
12:17 glusterbot News from newglusterbugs: [Bug 1221578] nfs-ganesha: cthon general category test fails with vers=4 <https://bugzilla.redhat.com/show_bug.cgi?id=1221578>
12:17 glusterbot News from newglusterbugs: [Bug 1221584] Disperse volume: gluster volume heal info lists entries of all bricks <https://bugzilla.redhat.com/show_bug.cgi?id=1221584>
12:40 shaunm joined #gluster
12:42 ndarshan joined #gluster
12:47 rafi1 joined #gluster
12:47 glusterbot News from newglusterbugs: [Bug 1221605] Scrub.log grows rapidly and the size increases upto 24GB in a span of 10 hours <https://bugzilla.redhat.com/show_bug.cgi?id=1221605>
12:53 lpabon joined #gluster
12:56 _shaps_ joined #gluster
12:59 Apeksha joined #gluster
13:00 Slashman joined #gluster
13:00 dgandhi joined #gluster
13:03 theron joined #gluster
13:08 Apeksha_ joined #gluster
13:08 theron_ joined #gluster
13:10 theron_ joined #gluster
13:10 DV joined #gluster
13:11 vikumar joined #gluster
13:15 dusmant joined #gluster
13:16 ppai joined #gluster
13:17 glusterbot News from newglusterbugs: [Bug 1221620] Bitd crashed on tier volume <https://bugzilla.redhat.com/show_bug.cgi?id=1221620>
13:17 glusterbot News from newglusterbugs: [Bug 1221629] Bitd crashed <https://bugzilla.redhat.com/show_bug.cgi?id=1221629>
13:18 nbalacha joined #gluster
13:23 msmith_ joined #gluster
13:24 georgeh-LT2 joined #gluster
13:26 aaronott joined #gluster
13:28 halfinhalfout joined #gluster
13:36 hamiller joined #gluster
13:39 freakzz joined #gluster
13:39 joseki joined #gluster
13:40 joseki does changing the performance. options require any sort of restart of the volume or daemons?
13:41 plarsen joined #gluster
13:44 aaronott joined #gluster
13:45 DV joined #gluster
13:46 rafi joined #gluster
14:01 vimal joined #gluster
14:03 bturner joined #gluster
14:09 elico joined #gluster
14:09 DV joined #gluster
14:10 lord4163 joined #gluster
14:15 msmith_ joined #gluster
14:17 vimal joined #gluster
14:17 glusterbot News from newglusterbugs: [Bug 1221656] rebalance failing on one of the node <https://bugzilla.redhat.com/show_bug.cgi?id=1221656>
14:20 kovshenin joined #gluster
14:22 nsoffer joined #gluster
14:24 georgeh-LT2 joined #gluster
14:24 kovshenin joined #gluster
14:24 firemanxbr joined #gluster
14:29 lexi2 joined #gluster
14:32 yossarianuk joined #gluster
14:33 yossarianuk hi - can anyone help me get started with geo-replication?
14:33 yossarianuk I have 2 servers (centos7) -latest stable glusterfs (3.6.x)
14:34 yossarianuk trying to follow - https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md
14:34 yossarianuk used 'gluster system:: execute gsec_create '
14:34 yossarianuk however the next stage fails...
14:35 yossarianuk i.e  - gluster volume geo-replication master-vol repository3::slave-vol create push-pem
14:35 yossarianuk -> I get 'Volume master-vol does not exist'
14:35 yossarianuk so do I have to create a volume first ? if so how?
14:35 yossarianuk I did have a 2 server replicated  volume
14:36 yossarianuk but removed it as I wanted geo-replication - when I create a vol don't I need 2 servers (min)?
14:38 yossarianuk i.e to do georeplication do I first need to do something like
14:39 yossarianuk 'gluster volume create vol1 replica 2 peer1:/vol1 peer2:/vol2'
14:39 yossarianuk first ?
14:44 yossarianuk i.e - do I have to create a vol on both master + slave *before* I use 'gluster volume geo-replication master-vol repository3::slave-vol create push-pem'
14:47 halfinhalfout1 joined #gluster
14:49 jcastill1 joined #gluster
14:49 yossarianuk is there an examples anywhere (apart from https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md i can't find anything about the new georeplication)
14:50 jobewan joined #gluster
14:52 rouge2507 joined #gluster
14:54 jcastillo joined #gluster
14:57 georgeh-LT2 joined #gluster
14:59 Pupeno joined #gluster
15:00 yossarianuk ok - messed up  completely
15:01 yossarianuk removed all /var/log/gluster + the volume info
15:01 yossarianuk now when I try again and use
15:01 yossarianuk 'gluster system:: execute gsec_create'
15:01 yossarianuk i get ->> Staging failed on 192.168.254.121. Error: gsec_create not found.
15:04 yossarianuk anyone ? (I think i have some basic misunderstanding regarding geo-replication setup)
15:05 yossarianuk ok ignore that, the slave server didn't have the geo-replication rpm installed.
15:06 yossarianuk now -> gluster system:: execute gsec_create --> shows '' Common secret pub file present at /var/lib/glusterd/geo-replication/common_secret.pem.pub
15:06 yossarianuk --> is this right ?
15:06 yossarianuk how do I create a geo-replcation vol though ?????
15:07 yossarianuk gluster volume geo-replication master-vol repository3::slave-vol create push-pem -> gives 'Volume master-vol does not exist'
15:07 yossarianuk but if I create a vol that's replicated, not geo-replicated - I'm in a catch-22!
15:09 yossarianuk when I create the vol do I just do it on the master - then create georeplication ?
15:09 yossarianuk any help would be welcomed...........
15:12 yossarianuk i.e what I need to know are the steps  - i.e step 1 - do I create a vol on the master - what flags/options to use ?
15:13 jobewan joined #gluster
15:13 yossarianuk i.e say the master's hostname is repo2 and I want geo-replication, do I first have to create a local vol ????
15:13 yossarianuk i.e 'gluster volume create master-vol repo2:/exports/gluster/gvHANA/
15:13 yossarianuk >??
15:16 rouge2507 left #gluster
15:17 soumya joined #gluster
15:18 glusterbot News from newglusterbugs: [Bug 1221696] rebalance failing on one of the node <https://bugzilla.redhat.com/show_bug.cgi?id=1221696>
15:19 yossarianuk anyone?
15:19 lord4163 joined #gluster
15:20 yossarianuk I guess my question is - if i am creating volumes for geo-replication - when I create the volume on the master what options do I use
15:20 yossarianuk ?
15:20 yossarianuk gluster volume create gv0 replica 2  - i.e for a replicated one I used
15:20 yossarianuk gluster volume create gv0 replica 2 transport tcp repository3:/exports/gluster/gvHANA/ repository1:/exports/gluster/gvHANA/ force
15:20 yossarianuk what do I use for geo-replication ???
15:20 ivan_rossi joined #gluster
15:21 jobewan joined #gluster
15:22 jobewan joined #gluster
15:22 jobewan joined #gluster
15:28 ivan_rossi left #gluster
15:28 yossarianuk its ok
15:28 yossarianuk volume create: slave-vol: success: please start the volume to access data  - whoop !
15:29 haomaiwa_ joined #gluster
15:29 haomaiwa_ joined #gluster
15:33 jobewan joined #gluster
15:34 CyrilPeponnet yossarianuk https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_geo-replication.md
15:34 B21956 joined #gluster
15:37 CyrilPeponnet One question, let say I have 3 nodes using replicated volumes served as nfs for around 4k clients
15:38 elico joined #gluster
15:38 CyrilPeponnet would it be worth using a Load balancer to dispatch connections between the 3 nodes ?
15:38 CyrilPeponnet I not that behind the scene it doesn't matter but for now using nfs, I can see that all my clients are connected to one node
15:39 CyrilPeponnet s/not/know
15:40 rwheeler joined #gluster
15:41 yossarianuk CyrilPeponnet: thanks my geo-replication volume is showing as 'faulty'....
15:42 Twistedgrim joined #gluster
15:42 CyrilPeponnet google is your friend, plenty of info everywhere
15:42 yossarianuk If anyone has a second here is a pastebin link with the log (its fairly small...
15:42 yossarianuk http://pastebin.com/59G5ie1P
15:42 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
15:43 yossarianuk ok here - http://fpaste.org/221959/61820214/
15:43 yossarianuk -> [2015-05-14 16:41:02.139528] W [syncdutils(/exports/gluster/gvHANA):251:log_raise_exception] <top>: !!! getting "No such file or directory" errors is most likely due to MISCONFIGURATION, please consult https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Geo_Rep-Preparation-Settingup_Environment.html
15:44 CyrilPeponnet yeah consult the doc
15:44 yossarianuk and '[2015-05-14 16:41:02.140120] E [resource(/exports/gluster/gvHANA):225:logerr] Popen: ssh> bash: /nonexistent/gsyncd: No such file or directory'
15:44 CyrilPeponnet you've certainly done something wrong
15:47 yossarianuk i tried following 'https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Geo_Rep-Preparation-Settingup_Environment.html'
15:47 yossarianuk CyrilPeponnet: am I right in thinking I have to create vol (brick) each side first ?
15:48 yossarianuk (I wish there was an example somewhere - can't find any for 3.5+)
15:48 yossarianuk and from this bugger up what should I do - i.e should I delete all volumes and start again ?
15:49 CyrilPeponnet stop the geo-rep, remove the geo-rep, clean certs from  /var/lib/glusterd/geo-replication/, clean the keys set in .authorized_keys, and restart following the doc
15:49 yossarianuk thank you - just to confirm I need normal vols setup before geo-replication /
15:50 CyrilPeponnet sure you can use your current vol
15:50 CyrilPeponnet and create another on your dest cluster
15:50 CyrilPeponnet (empty vol)
15:50 yossarianuk ok thanks
15:50 CyrilPeponnet and then, follow the doc :) step by step
15:51 yossarianuk my slave vol did have files in it - this is possibly the issue.
15:51 coredump joined #gluster
15:51 CyrilPeponnet no
15:51 CyrilPeponnet but It's not recommended
15:51 yossarianuk and I created (master + slave) using gluster volume create slave-vol repository3:/exports/gluster/gvHANA/ force - does this look about right ?
15:51 yossarianuk (the host was different each side)
15:52 CyrilPeponnet looks good, start the vol at the end
15:52 yossarianuk ok and one last thing (i promise) - when you say clean certs  - is that all files in /var/lib/glusterd/geo-replication/
15:52 yossarianuk or just common_secret.pem.pub
15:53 CyrilPeponnet just the pub and tgz
15:53 CyrilPeponnet not the gsyncfile
15:53 CyrilPeponnet do it everywhere
15:53 CyrilPeponnet on each node
15:53 CyrilPeponnet it will be recreated using gsec_create cmd
15:54 B21956 joined #gluster
15:54 yossarianuk there is no tgz file just  tar_ssh.pem secret.pem , etc
15:54 CyrilPeponnet then pushed to your slave with the push-pem when you will create the georep
15:54 CyrilPeponnet fine
15:54 aaronott joined #gluster
15:54 yossarianuk i should leave  tar_ssh.pem secret.pem  or delete ?
15:54 CyrilPeponnet delete
15:54 CyrilPeponnet pem and pub
15:54 yossarianuk thanks!
15:54 CyrilPeponnet and folder if any
15:55 CyrilPeponnet delete your geo-rep before
15:55 yossarianuk done - cheers
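For reference, the working sequence this thread converges on looks roughly like this (a sketch pieced together from the conversation above, not an official recipe; host, volume, and brick names follow the examples used in the channel):

```shell
# 1. On the master cluster: a normal, started volume must exist first.
#    ("Volume master-vol does not exist" means this step was skipped.)
gluster volume create master-vol repository1:/exports/gluster/gvHANA/ force
gluster volume start master-vol

# 2. On the slave: create and start a separate (ideally empty) volume.
gluster volume create slave-vol repository3:/exports/gluster/gvHANA/ force
gluster volume start slave-vol

# 3. On the master: generate the common secret pem keys
#    (requires the geo-replication package on every node).
gluster system:: execute gsec_create

# 4. Create the session, pushing the keys to the slave, then start it.
gluster volume geo-replication master-vol repository3::slave-vol create push-pem
gluster volume geo-replication master-vol repository3::slave-vol start
gluster volume geo-replication master-vol repository3::slave-vol status
```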
15:55 bturner joined #gluster
15:56 bturner joined #gluster
15:57 CyrilPeponnet Hey bturner :) Regarding the issue I got yesterday
15:57 bturner CyrilPeponnet, hi yas!
15:58 CyrilPeponnet it looks like the hangs match when gluster was hitting the roof (for a few seconds).
15:59 CyrilPeponnet after a reboot of the node, it's better but I still don't understand why it was using so much CPU for a few secs
15:59 bturner CyrilPeponnet, you mean in top CPU hits 100%
15:59 CyrilPeponnet yep, more like 1200%
15:59 CyrilPeponnet as it's a 12 core system
15:59 CyrilPeponnet only for 1 or 2 s
15:59 CyrilPeponnet but during this time every connection hangs
16:00 CyrilPeponnet now it looks better but I have a theory
16:00 bturner CyrilPeponnet, it could be the hot thread issue that was fixed with MT epoll in 3.7
16:00 CyrilPeponnet we have a 3 node setup with replica 3, I use a vip (active/passive) for nfs clients.
16:01 bturner k
16:01 CyrilPeponnet but this means that every client gets connected to one machine (and then gluster does its stuff)
16:01 CyrilPeponnet would it be worth using a Load balancer to dispatch connections between the 3 nodes ?
16:01 bturner yes
16:01 CyrilPeponnet any recommendation?
16:01 bturner I normally distribute my clients across the servers
16:01 CyrilPeponnet Yeah i'd like to do that as well
16:02 CyrilPeponnet RR style
16:02 CyrilPeponnet with failover if one of the node is down
16:02 bturner yeah I do my stuff through scripts
16:02 bturner but a load balancer would be the way to go
16:02 CyrilPeponnet that's what I thought
16:03 bturner yep
16:03 CyrilPeponnet do you know how many concurrent nfs connections a single node could handle ?
16:03 CyrilPeponnet (12 Core, 64GB RAM, 10Gbe)
16:04 bturner depends on what kind of throughput you need per client and how active the clients are
16:04 Gill joined #gluster
16:04 bturner I find that 2 or 3 to 1 will hit line speed / RAID speed depending on which is slower
16:05 CyrilPeponnet very active, small files in general (QA testing machines, 1GBe each - around 1200 of them)
16:05 bturner 3 client 1 server
16:05 bturner so anything after 3 or so the throughput per client will start to drop
16:05 CyrilPeponnet 3 client for 1 server ? doh !
16:06 bturner then keep adding till you hit the point where the throughput per client is unacceptable and you'll have your limit
16:06 bturner CyrilPeponnet, just 1 client 1 server won't usually saturate things unless you do multiple mounts
16:06 CyrilPeponnet I think the throughput in my case is fine. I think I hit a compute limit before
16:06 bturner by saturate I mean 100% utilized
16:07 CyrilPeponnet according to munin, disk IO are fine, throughput is fine, but load average is high
16:07 ChrisHolcombe joined #gluster
16:08 bturner CyrilPeponnet, the event listener pre-3.7 is single threaded and is a hot spot in the code
16:08 CyrilPeponnet (at least as I have one single point as nfs entry for all the clients)
16:08 bturner in 3.7 we made it multi threaded
16:08 bturner CyrilPeponnet, you may be running into that.  high load is one of the symptoms
16:09 CyrilPeponnet generally it's glusterfsd handling nfs which consumes a lot of cpu
16:09 bturner CyrilPeponnet, I would start wth distributing the clients across servers
16:09 CyrilPeponnet yep
16:09 CyrilPeponnet It will be a good start
16:10 bturner maybe get a core from one of the pegged glusterd processes:
16:10 CyrilPeponnet Thanks again :)
16:10 bturner kill -SIGQUIT process_id
16:10 CyrilPeponnet well I guess the best move will be to bump the release
16:10 bturner CyrilPeponnet, if you have a dev env you should try 3.7, the smallfile improvements will really help you I think
16:10 CyrilPeponnet what is the latest supported release for centos7?
16:11 bturner 3.7 just dropped today
16:11 CyrilPeponnet (yes but a dev env is not representative of production, hard to find 4k nfs clients to test it :p)
16:11 bturner so if you have a dev env try it there and make sure everything is OK with your workload
16:11 bturner :)
16:12 CyrilPeponnet anyway I have work to do
16:12 CyrilPeponnet :) thanks again bturner
16:12 bturner kk lmk how it goes
16:12 CyrilPeponnet sure
16:12 bturner np any time!
16:12 CyrilPeponnet say hello to James :) (if you two are working together)
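For reference, the "distribute clients across servers via scripts" approach bturner describes can be sketched as a deterministic hash of the client hostname, so each client always mounts the same node and the population spreads roughly evenly with no shared state. This is a hypothetical illustration: the `pick_server` helper and the node names gluster1/gluster2/gluster3 are placeholders, not anything from this channel.

```shell
# Pick an NFS server for a given client, stably, by hashing its hostname.
pick_server() {
    client="$1"
    servers="gluster1 gluster2 gluster3"   # assumed node names
    n=$(echo "$servers" | wc -w)
    # cksum gives a stable CRC of the hostname; mod it into a 1-based index
    sum=$(printf '%s' "$client" | cksum | cut -d' ' -f1)
    idx=$(( (sum % n) + 1 ))
    echo "$servers" | tr ' ' '\n' | sed -n "${idx}p"
}

# e.g. when generating /etc/fstab on each client:
#   $(pick_server "$(hostname)"):/myvol  /mnt/gluster  nfs  vers=3  0 0
pick_server "qa-client-0042"
```

Unlike a VIP or round-robin DNS, this gives no automatic failover when a node dies; that is the part a real load balancer (or a health-checking wrapper around this) would add.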
16:20 jobewan joined #gluster
16:21 nbalacha joined #gluster
16:25 Manikandan_ joined #gluster
16:25 Manikandan__ joined #gluster
16:25 alexcrow joined #gluster
16:27 hamiller joined #gluster
16:44 ivan_rossi joined #gluster
16:44 ivan_rossi left #gluster
16:55 Pintomatic joined #gluster
17:03 jobewan joined #gluster
17:03 spalai joined #gluster
17:06 shaunm joined #gluster
17:07 shaunm joined #gluster
17:07 CyrilPeponnet Hey guys, any recommendation for setting up a Load Balancer to dispatch traffic across several nodes ?
17:09 Rapture_ joined #gluster
17:09 joseki joined #gluster
17:09 joseki does changing the performance. options require any sort of restart of the volume or daemons?
17:09 jobewan joined #gluster
17:16 jobewan joined #gluster
17:18 glusterbot News from newglusterbugs: [Bug 1221737] Multi-threaded SHD support <https://bugzilla.redhat.com/show_bug.cgi?id=1221737>
17:18 spalai left #gluster
17:28 pppp joined #gluster
17:33 rafi joined #gluster
17:34 jobewan joined #gluster
17:37 elico joined #gluster
17:38 jobewan joined #gluster
17:42 vimal joined #gluster
17:47 elico joined #gluster
17:48 bene2 joined #gluster
17:50 glusterbot News from resolvedglusterbugs: [Bug 1164559] writev, fsync callback use truncate_rsp for decoding <https://bugzilla.redhat.com/show_bug.cgi?id=1164559>
17:50 glusterbot News from resolvedglusterbugs: [Bug 1171615] AFR + Snapshot : Read operation on  file in split-brain is successful in USS <https://bugzilla.redhat.com/show_bug.cgi?id=1171615>
17:50 glusterbot News from resolvedglusterbugs: [Bug 1199545] mount.glusterfs uses /dev/stderr and fails if the device does not exist <https://bugzilla.redhat.com/show_bug.cgi?id=1199545>
17:50 glusterbot News from resolvedglusterbugs: [Bug 1180424] mismatch between arguments and parameters in functions glusterfs_uuid_buf_get, glusterfs_lkowner_buf_get <https://bugzilla.redhat.com/show_bug.cgi?id=1180424>
17:50 glusterbot News from resolvedglusterbugs: [Bug 1181543] glusterd  crashing  with SIGABRT  if  rpc connection is failed in debug mode <https://bugzilla.redhat.com/show_bug.cgi?id=1181543>
17:50 glusterbot News from resolvedglusterbugs: [Bug 1186993] "gluster volume set help" for server.statedump-path has wrong description <https://bugzilla.redhat.com/show_bug.cgi?id=1186993>
17:50 glusterbot News from resolvedglusterbugs: [Bug 1117921] Wrong releaseversion picked up when doing 'make -C extras/LinuxRPM glusterrpms' <https://bugzilla.redhat.com/show_bug.cgi?id=1117921>
17:50 glusterbot News from resolvedglusterbugs: [Bug 1117951] [HC] - Use C-locale for numerics (strtod and friends) <https://bugzilla.redhat.com/show_bug.cgi?id=1117951>
17:50 glusterbot News from resolvedglusterbugs: [Bug 1123004] Automounter maps with 'localdir -fstype=glusterfs host:/remote/dir' fails <https://bugzilla.redhat.com/show_bug.cgi?id=1123004>
17:50 glusterbot News from resolvedglusterbugs: [Bug 1130892] Marking pending xattrs for new entry creations <https://bugzilla.redhat.com/show_bug.cgi?id=1130892>
17:50 glusterbot News from resolvedglusterbugs: [Bug 1139230] AFR: invalid deletion of entry from indices/xattrop directory <https://bugzilla.redhat.com/show_bug.cgi?id=1139230>
17:50 glusterbot News from resolvedglusterbugs: [Bug 1145471] AFR: Fix memory leaks in self-heal daemon <https://bugzilla.redhat.com/show_bug.cgi?id=1145471>
17:50 glusterbot News from resolvedglusterbugs: [Bug 1146812] AFR : Self-heal daemon incorrectly retries healing <https://bugzilla.redhat.com/show_bug.cgi?id=1146812>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1154599] Create a document on how "heal" commands work <https://bugzilla.redhat.com/show_bug.cgi?id=1154599>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1163804] Change in volume heal info command output <https://bugzilla.redhat.com/show_bug.cgi?id=1163804>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1183019] Change in heal info split-brain command <https://bugzilla.redhat.com/show_bug.cgi?id=1183019>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1191396] AFR: Enable users to analyse and resolve split-brain <https://bugzilla.redhat.com/show_bug.cgi?id=1191396>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1219388] Do not let an inode evict during split-brain resolution process. <https://bugzilla.redhat.com/show_bug.cgi?id=1219388>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1129702] [Dist-geo-rep]: While rm -rf on master mount-point, the shutting down, wait and bringing up active node results few files not getting removed from slave. <https://bugzilla.redhat.com/show_bug.cgi?id=1129702>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1200733] [geo-rep]: some of the pre existing files before geo-rep session did not sync to slave <https://bugzilla.redhat.com/show_bug.cgi?id=1200733>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1203650] tools/glusterfind: Ignore .trashcan during Brick Crawl <https://bugzilla.redhat.com/show_bug.cgi?id=1203650>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1203656] tools/glusterfind: pre command fails if output file path is not absolute <https://bugzilla.redhat.com/show_bug.cgi?id=1203656>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1205057] tools/glusterfind: Provide API for testing status of session creation <https://bugzilla.redhat.com/show_bug.cgi?id=1205057>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1206127] tools/glusterfind: Changelog Init before Changelog Register <https://bugzilla.redhat.com/show_bug.cgi?id=1206127>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1206547] [Backup]: Glusterfind create session unable to correctly set passwordless ssh to its peer(s) <https://bugzilla.redhat.com/show_bug.cgi?id=1206547>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1217927] [Backup]: Crash observed when multiple sessions were created for the same volume <https://bugzilla.redhat.com/show_bug.cgi?id=1217927>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1219938] Running status second time shows no active sessions <https://bugzilla.redhat.com/show_bug.cgi?id=1219938>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1192378] Disperse volume: client crashed while running renames with epoll enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1192378>
17:51 glusterbot News from resolvedglusterbugs: [Bug 983317] add 'get' option to view all volume options <https://bugzilla.redhat.com/show_bug.cgi?id=983317>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1005344] duplicate entries in volume property <https://bugzilla.redhat.com/show_bug.cgi?id=1005344>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1091902] [barrier] fsync on NFS mount was not barriered, when barrier was enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1091902>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1121870] cli getspec asks volid instead of volname in usage <https://bugzilla.redhat.com/show_bug.cgi?id=1121870>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1134822] core :  setting a volume ready-only needs a brick restart to make the volume read-only <https://bugzilla.redhat.com/show_bug.cgi?id=1134822>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1135691] [barrier] features.barrier should be a NO_DOC option <https://bugzilla.redhat.com/show_bug.cgi?id=1135691>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1139682] statedump support in glusterd <https://bugzilla.redhat.com/show_bug.cgi?id=1139682>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1146902] Stopping or restarting glusterd on another node when volume start is in progress gives error messages but volume is started <https://bugzilla.redhat.com/show_bug.cgi?id=1146902>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1152890] Peer probe during rebalance causing "Peer rejected" state for an existing  node in trusted cluster <https://bugzilla.redhat.com/show_bug.cgi?id=1152890>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1154635] glusterd: Gluster rebalance status returns failure <https://bugzilla.redhat.com/show_bug.cgi?id=1154635>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1161037] cli command does not show anything when command times out <https://bugzilla.redhat.com/show_bug.cgi?id=1161037>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1162987] cli log is consolidated for cluster regression tests <https://bugzilla.redhat.com/show_bug.cgi?id=1162987>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1168803] [USS]: When snapd is crashed gluster volume stop/delete operation fails making the cluster in inconsistent state <https://bugzilla.redhat.com/show_bug.cgi?id=1168803>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1168910] duplicate dirfd calls in opendir <https://bugzilla.redhat.com/show_bug.cgi?id=1168910>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1173414] glusterd: remote locking failure when multiple synctask transactions are run <https://bugzilla.redhat.com/show_bug.cgi?id=1173414>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1180972] glusterd socket files should reside in gluster sub directory in /var/run/gluster <https://bugzilla.redhat.com/show_bug.cgi?id=1180972>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1183463] Deleting a volume should follow ref counting mechanism <https://bugzilla.redhat.com/show_bug.cgi?id=1183463>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1191486] daemons abstraction & refactoring <https://bugzilla.redhat.com/show_bug.cgi?id=1191486>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1201203] glusterd OOM killed, when repeating volume set operation in a loop <https://bugzilla.redhat.com/show_bug.cgi?id=1201203>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1204727] Maintainin local transaction peer list in op-sm framework <https://bugzilla.redhat.com/show_bug.cgi?id=1204727>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1206655] glusterd crashes on brick op <https://bugzilla.redhat.com/show_bug.cgi?id=1206655>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1210627] glusterd ping-timeout value available in glusterd statedump is incorrect <https://bugzilla.redhat.com/show_bug.cgi?id=1210627>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1215517] [New] - Distribute replicate volume type is shown as Distribute Stripe in  the output of gluster volume info <volname> --xml <https://bugzilla.redhat.com/show_bug.cgi?id=1215517>
17:51 DV joined #gluster
17:51 glusterbot News from resolvedglusterbugs: [Bug 1215518] Glusterd crashed after updating to 3.8 nightly build <https://bugzilla.redhat.com/show_bug.cgi?id=1215518>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1215547] 'volume get' invoked on a non-existing key fails with zero as a return value <https://bugzilla.redhat.com/show_bug.cgi?id=1215547>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1220012] regression failure in volume-snapshot-clone.t <https://bugzilla.redhat.com/show_bug.cgi?id=1220012>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1220016] bitrot testcases fail spuriously <https://bugzilla.redhat.com/show_bug.cgi?id=1220016>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1119256] [glusterd] glusterd crashed when it failed to create geo-rep status file. <https://bugzilla.redhat.com/show_bug.cgi?id=1119256>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1138577] [SNAPSHOT]: glusterd crash while snaphshot creation was in progress <https://bugzilla.redhat.com/show_bug.cgi?id=1138577>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1162462] [USS] : Snapd crashed while trying to access the snapshots under .snaps directory <https://bugzilla.redhat.com/show_bug.cgi?id=1162462>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1164711] [USS] : Rebalance process tries to connect to snapd and in case when snapd crashes it might affect rebalance process <https://bugzilla.redhat.com/show_bug.cgi?id=1164711>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1175700] glusterd : 'gluster volume status <vol_name>' is showing 'N/A' under Port column for all volumes. - same result after gluster volume start <vol_name> force <https://bugzilla.redhat.com/show_bug.cgi?id=1175700>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1181418] [SNAPSHOT]: Snapshot restore fails after adding a node to master with geo-replication involved <https://bugzilla.redhat.com/show_bug.cgi?id=1181418>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1184344] [SNAPSHOT]: In a n-way replica volume, snapshot should not be taken, even if one brick is down. <https://bugzilla.redhat.com/show_bug.cgi?id=1184344>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1194538] [Snapshot]: Glusterd crashed during snap restore after adding a new node without hostname resolution <https://bugzilla.redhat.com/show_bug.cgi?id=1194538>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1198027] [SNAPSHOT]: Schedule snapshot creation with frequency ofhalf-hourly ,hourly,daily,weekly,monthly and yearly <https://bugzilla.redhat.com/show_bug.cgi?id=1198027>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1198076] CIFS: [USS]: After upgrade gluster volume status command times out if 256 snapshots preexists <https://bugzilla.redhat.com/show_bug.cgi?id=1198076>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1208097] [SNAPSHOT] : Appending time stamp to snap name while using scheduler to create snapshots should be removed. <https://bugzilla.redhat.com/show_bug.cgi?id=1208097>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1209120] [Snapshot] White-spaces are not handled properly in Snapshot scheduler <https://bugzilla.redhat.com/show_bug.cgi?id=1209120>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1209408] [Snapshot] Scheduler should accept only valid crond schedules <https://bugzilla.redhat.com/show_bug.cgi?id=1209408>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1218576] Regression failures in tests/bugs/snapshot/bug-1162498.t <https://bugzilla.redhat.com/show_bug.cgi?id=1218576>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1218585] [Snapshot] Snapshot scheduler show status disable even when it is enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1218585>
17:51 glusterbot News from resolvedglusterbugs: [Bug 1219782] Regression failures in tests/bugs/snapshot/bug-1112559.t <https://bugzilla.redhat.com/show_bug.cgi?id=1219782>
17:52 glusterbot News from resolvedglusterbugs: [Bug 928648] RFE: cleaner log message is required for better understanding <https://bugzilla.redhat.com/show_bug.cgi?id=928648>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1093692] Resource/Memory leak issues reported by Coverity. <https://bugzilla.redhat.com/show_bug.cgi?id=1093692>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1119328] Remove libgfapi python example code from glusterfs source <https://bugzilla.redhat.com/show_bug.cgi?id=1119328>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1120646] rfc.sh transfers patches with whitespace problems without warning [provide coding guidelines check] <https://bugzilla.redhat.com/show_bug.cgi?id=1120646>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1122443] Symlink mtime changes when rebalancing <https://bugzilla.redhat.com/show_bug.cgi?id=1122443>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1122533] tests/bug-961307.t: Echo output string in case of failure for easy debug <https://bugzilla.redhat.com/show_bug.cgi?id=1122533>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1125814] Fix error code to return of posix_fsync <https://bugzilla.redhat.com/show_bug.cgi?id=1125814>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1126048] crash on fsync <https://bugzilla.redhat.com/show_bug.cgi?id=1126048>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1129708] rdma: glusterfsd SEGV at volume start <https://bugzilla.redhat.com/show_bug.cgi?id=1129708>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1132105] Outdated  glusterfs-hadoop Install Instructions <https://bugzilla.redhat.com/show_bug.cgi?id=1132105>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1132913] Add tests for metadata self-heal <https://bugzilla.redhat.com/show_bug.cgi?id=1132913>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1139506] Core: client crash while doing rename operations on the mount <https://bugzilla.redhat.com/show_bug.cgi?id=1139506>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1140084] quota: bricks coredump while creating data inside a subdir and lookup going on in parallel <https://bugzilla.redhat.com/show_bug.cgi?id=1140084>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1142419] tests: regression, can't run `prove $t` in {basic,bugs,encryption,features,performance} subdirs <https://bugzilla.redhat.com/show_bug.cgi?id=1142419>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1143835] dht crashed on running regression with floating point exception <https://bugzilla.redhat.com/show_bug.cgi?id=1143835>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1157839] link(2) corrupts meta-data of encrypted files <https://bugzilla.redhat.com/show_bug.cgi?id=1157839>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1158751] Dentries with trailing '/' are added when quota is enabled in bricks <https://bugzilla.redhat.com/show_bug.cgi?id=1158751>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1159571] DHT: Rebalance- Rebalance process crash after remove-brick <https://bugzilla.redhat.com/show_bug.cgi?id=1159571>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1160900] cli segmentation fault with remote ssl (3.6.0) <https://bugzilla.redhat.com/show_bug.cgi?id=1160900>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1164775] Glusterd segfaults  on gluster volume status ... detail <https://bugzilla.redhat.com/show_bug.cgi?id=1164775>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1170825] GlusterFS logrotate config complains about missing files <https://bugzilla.redhat.com/show_bug.cgi?id=1170825>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1174783] [readdir-ahead]: indicate EOF for readdirp <https://bugzilla.redhat.com/show_bug.cgi?id=1174783>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1178008] /usr/share/glusterfs/run-tests.sh should not check for rpmbuild <https://bugzilla.redhat.com/show_bug.cgi?id=1178008>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1178079] [AFR] getfattr on fuse mount gives error : Software caused connection abort <https://bugzilla.redhat.com/show_bug.cgi?id=1178079>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1181367] rmdir changes permission of directory when rmdir fails with ENOTEMPTY <https://bugzilla.redhat.com/show_bug.cgi?id=1181367>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1187952] dht-common.c GF_XATTR_LOCKINFO_KEY compare done wrongly <https://bugzilla.redhat.com/show_bug.cgi?id=1187952>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1188557] gfapi symbol versions in markdown and move to doc dir <https://bugzilla.redhat.com/show_bug.cgi?id=1188557>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1195120] DHT + epoll : client crashed <https://bugzilla.redhat.com/show_bug.cgi?id=1195120>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1195415] glusterfsd core dumps when cleanup and socket disconnect routines race <https://bugzilla.redhat.com/show_bug.cgi?id=1195415>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1196584] RDMA: [RFE] Cleaner log messages when RDMA volumes fail to mount. <https://bugzilla.redhat.com/show_bug.cgi?id=1196584>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1200255] libglusterfs event library does not compile in Centos5 <https://bugzilla.redhat.com/show_bug.cgi?id=1200255>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1203557] gluster rpm build failing for snapshot scheduler install <https://bugzilla.redhat.com/show_bug.cgi?id=1203557>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1203637] Disperse volume: glfsheal crashed <https://bugzilla.redhat.com/show_bug.cgi?id=1203637>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1204604] [Data-tiering] :  Tiering error during configure even if tiering is disabled. <https://bugzilla.redhat.com/show_bug.cgi?id=1204604>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1207603] Persist file size and block count of sharded files in the form of xattrs <https://bugzilla.redhat.com/show_bug.cgi?id=1207603>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1212182] nfs : racy condition in export/netgroup feature <https://bugzilla.redhat.com/show_bug.cgi?id=1212182>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1214247] sharding - Implement remaining fops <https://bugzilla.redhat.com/show_bug.cgi?id=1214247>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1214248] Persist file size and block count of sharded files in the form of xattrs <https://bugzilla.redhat.com/show_bug.cgi?id=1214248>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1215173] Disperse volume: rebalance and quotad crashed <https://bugzilla.redhat.com/show_bug.cgi?id=1215173>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1216302] Symlink heal leaks 'linkname' memory <https://bugzilla.redhat.com/show_bug.cgi?id=1216302>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1216303] Fixes for data self-heal in ec <https://bugzilla.redhat.com/show_bug.cgi?id=1216303>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1217406] glusterfsd crashed after directory was removed from the mount point, while self-heal and rebalance were running on the volume <https://bugzilla.redhat.com/show_bug.cgi?id=1217406>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1218593] ec test spurious failures <https://bugzilla.redhat.com/show_bug.cgi?id=1218593>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1218884] NFS-Ganesha : Add-node and delete-node should start/stop NFS-Ganesha service <https://bugzilla.redhat.com/show_bug.cgi?id=1218884>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1218940] Spurious failures in fop-sanity.t <https://bugzilla.redhat.com/show_bug.cgi?id=1218940>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1218959] dht/rebalancer: Marking tiering migration fops <https://bugzilla.redhat.com/show_bug.cgi?id=1218959>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1218963] NFS-Ganesha : Locking of global option file used by NFS-Ganesha. <https://bugzilla.redhat.com/show_bug.cgi?id=1218963>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1219027] DHT/Tiering/Rebalancer: The Client PID set by tiering migration is getting reset by dht migration <https://bugzilla.redhat.com/show_bug.cgi?id=1219027>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1219066] Data Tiering : Adding performance to unlink/link/rename in CTR Xlator <https://bugzilla.redhat.com/show_bug.cgi?id=1219066>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1219075] Named lookup heal of pre-existing files, before ctr was ON <https://bugzilla.redhat.com/show_bug.cgi?id=1219075>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1219744] [SNAPSHOT]: activate and deactivate doesn't do a handshake when a glusterd comes  back <https://bugzilla.redhat.com/show_bug.cgi?id=1219744>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1220011] Force replace-brick lead to the persistent write(use dd) return Input/output error <https://bugzilla.redhat.com/show_bug.cgi?id=1220011>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1220041] timer wheel and throttling in bitrot <https://bugzilla.redhat.com/show_bug.cgi?id=1220041>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1220058] Disable known bad tests <https://bugzilla.redhat.com/show_bug.cgi?id=1220058>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1220059] Disable known bad tests <https://bugzilla.redhat.com/show_bug.cgi?id=1220059>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1132796] client3_3_readdir - crash on NULL local <https://bugzilla.redhat.com/show_bug.cgi?id=1132796>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1147107] Cannot set distribute.migrate-data xattr on a file <https://bugzilla.redhat.com/show_bug.cgi?id=1147107>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1190734] Enhancement to readdir for tiered volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1190734>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1207343] SQL query failed during tiering rebalancer and write/read frequency thresolds not work <https://bugzilla.redhat.com/show_bug.cgi?id=1207343>
17:52 glusterbot News from resolvedglusterbugs: [Bug 1198615] geo-replication create command must have an option to avoid slave verification. <https://bugzilla.redhat.com/show_bug.cgi?id=1198615>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1208676] No support for mounting volumes with volume files <https://bugzilla.redhat.com/show_bug.cgi?id=1208676>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1087487] DHT - rebalance - output of  'gluster volume rebalance <volname> start/start force/fix-layout start ' is ambiguous and poorly formatted <https://bugzilla.redhat.com/show_bug.cgi?id=1087487>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1091935] Inappropriate error message generated when non-resolvable hostname is given for peer in 'gluster volume create' command for distribute-replicate volume creation <https://bugzilla.redhat.com/show_bug.cgi?id=1091935>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1099369] [barrier] null gfid is shown in state-dump file for respective barrier fops when barrier is enable and state-dump has taken <https://bugzilla.redhat.com/show_bug.cgi?id=1099369>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1101382] [RFE] glusterd log could also add hostname or ip along with host's UUID <https://bugzilla.redhat.com/show_bug.cgi?id=1101382>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1121584] remove-brick stop & status not validating the bricks to check whether the rebalance is actually started on them <https://bugzilla.redhat.com/show_bug.cgi?id=1121584>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1140162] Volume option set <vol> <file-snapshot> or <features.encryption> <value> command input not consistent and corrupting all other valid option <https://bugzilla.redhat.com/show_bug.cgi?id=1140162>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1163108] gluster accepts invalid values when changing cluster.min-free-disk option <https://bugzilla.redhat.com/show_bug.cgi?id=1163108>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1177132] glusterd: when there is loss in quorum then it should block all operation <https://bugzilla.redhat.com/show_bug.cgi?id=1177132>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1179175] gluster vol  set testvol  features.uss  command accepts  invalid random values <https://bugzilla.redhat.com/show_bug.cgi?id=1179175>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1199451] gluster command should retrieve current op-version of the NODE <https://bugzilla.redhat.com/show_bug.cgi?id=1199451>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1209751] BitRot :- on restarting glusterd values for tunables are reset to default <https://bugzilla.redhat.com/show_bug.cgi?id=1209751>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1211576] Gluster CLI crashes when volume create command is incomplete <https://bugzilla.redhat.com/show_bug.cgi?id=1211576>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1218033] BitRot :- If scrubber finds bad file then it should log as a 'ALERT' in log not 'Warning' <https://bugzilla.redhat.com/show_bug.cgi?id=1218033>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1218036] BitRot :- volume info should not show 'features.scrub: resume' if scrub process is resumed <https://bugzilla.redhat.com/show_bug.cgi?id=1218036>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1218048] BitRot :- changing log level to DEBUG doesn't have any impact on bitrot log files (scrub/bitd logs) <https://bugzilla.redhat.com/show_bug.cgi?id=1218048>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1218596] BitRot :- scrub pause/resume should give proper error message if scrubber is already paused/resumed and Admin tries to perform same operation <https://bugzilla.redhat.com/show_bug.cgi?id=1218596>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1219057] cli: While attaching tier cli sholud always ask question whether you really want to attach a tier or not. <https://bugzilla.redhat.com/show_bug.cgi?id=1219057>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1220068] BitRot :- Tunable (scrub-throttle, scrub-frequency, pause/resume) for scrub functionality don't have any impact on scrubber <https://bugzilla.redhat.com/show_bug.cgi?id=1220068>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1220338] unable to start the volume with the latest beta1 rpms <https://bugzilla.redhat.com/show_bug.cgi?id=1220338>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1139598] The memories are exhausted quickly when handle the message which has multi fragments in a single record <https://bugzilla.redhat.com/show_bug.cgi?id=1139598>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1211900] package glupy as a subpackage under gluster namespace. <https://bugzilla.redhat.com/show_bug.cgi?id=1211900>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1220075] Fix duplicate entires in glupy makefile. <https://bugzilla.redhat.com/show_bug.cgi?id=1220075>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1155328] GlusterFS allows insecure SSL modes <https://bugzilla.redhat.com/show_bug.cgi?id=1155328>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1158614] Client fails to pass xflags for unlink call <https://bugzilla.redhat.com/show_bug.cgi?id=1158614>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1158628] [FEAT] Add GF_FOP_IPC for inter-translator communication <https://bugzilla.redhat.com/show_bug.cgi?id=1158628>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1158648] [FEAT] Make own-thread option configurable separately from SSL <https://bugzilla.redhat.com/show_bug.cgi?id=1158648>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1214220] Crashes in logging code <https://bugzilla.redhat.com/show_bug.cgi?id=1214220>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1219026] glusterd crashes when brick option validation fails <https://bugzilla.redhat.com/show_bug.cgi?id=1219026>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1143880] [FEAT] Exports and Netgroups Authentication for Gluster NFS mount <https://bugzilla.redhat.com/show_bug.cgi?id=1143880>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1157223] nfs mount via symbolic link does not work <https://bugzilla.redhat.com/show_bug.cgi?id=1157223>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1157381] mount fails for nfs protocol in rdma volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1157381>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1158831] gnfs : nfs mount fails if the connection between nfs server and bricks is not established <https://bugzilla.redhat.com/show_bug.cgi?id=1158831>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1116263] [SNAPSHOT]: setting config valuses doesn't delete the already created snapshots,but wrongly warns the user that it might delete <https://bugzilla.redhat.com/show_bug.cgi?id=1116263>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1207867] Are not distinguishing internal vs external FOPs in tiering <https://bugzilla.redhat.com/show_bug.cgi?id=1207867>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1110916] fix remaining *printf format warnings on 32-bit <https://bugzilla.redhat.com/show_bug.cgi?id=1110916>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1126832] glusterfs.spec.in: deprecate *.logrotate files in dist-git in favor of the upstream logrotate files <https://bugzilla.redhat.com/show_bug.cgi?id=1126832>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1128192] extras/LinuxRPMS: typo in Makefile.am <https://bugzilla.redhat.com/show_bug.cgi?id=1128192>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1146426] glusterfs-server and the regression tests require the 'killall' command <https://bugzilla.redhat.com/show_bug.cgi?id=1146426>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1160709] libgfapi: use versioned symbols in libgfapi.so for compatibility <https://bugzilla.redhat.com/show_bug.cgi?id=1160709>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1213934] common-ha: delete-node implementation <https://bugzilla.redhat.com/show_bug.cgi?id=1213934>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1215488] configure: automake defaults to Unix V7 tar, w/ max filename length=99 chars <https://bugzilla.redhat.com/show_bug.cgi?id=1215488>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1216128] Autogenerated files delivered in tarball <https://bugzilla.redhat.com/show_bug.cgi?id=1216128>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1218400] glfs.h:46:21: fatal error: sys/acl.h: No such file or directory <https://bugzilla.redhat.com/show_bug.cgi?id=1218400>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1109741] glusterd operating version falls back to the operating version of an invalid friend <https://bugzilla.redhat.com/show_bug.cgi?id=1109741>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1122186] Compilation fails if configured with --disable-xml-output option <https://bugzilla.redhat.com/show_bug.cgi?id=1122186>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1122398] Bump op-version for 3.7.0 <https://bugzilla.redhat.com/show_bug.cgi?id=1122398>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1157979] Executing volume status for 2X2 dis-rep volume leads to "Failed to aggregate response from node/brick " errors in logs <https://bugzilla.redhat.com/show_bug.cgi?id=1157979>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1218031] GlusterD crashes on NetBSD when running mgmt_v3-locks.t test <https://bugzilla.redhat.com/show_bug.cgi?id=1218031>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1103577] Dist-geo-rep : geo-rep doesn't log the list of skipped gfid after it failed to process the changelog. <https://bugzilla.redhat.com/show_bug.cgi?id=1103577>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1122037] [Dist-geo-rep] : In a cascaded setup, after hardlink sync, slave level 2 volume has sticky bit files found on mount-point. <https://bugzilla.redhat.com/show_bug.cgi?id=1122037>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1129008] Dist-geo-rep: An API to check any active geo-rep session for the volume. <https://bugzilla.redhat.com/show_bug.cgi?id=1129008>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1139196] dist-geo-rep: Few files are not synced to slave when files are being created during geo-rep start <https://bugzilla.redhat.com/show_bug.cgi?id=1139196>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1146823] dist-geo-rep: Session going into faulty with "Can no allocate memory" backtrace when pause, rename and resume is performed <https://bugzilla.redhat.com/show_bug.cgi?id=1146823>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1159119] glusterd-georep: In multinode slave setup, glusterd crashes during geo-rep create push-pem. <https://bugzilla.redhat.com/show_bug.cgi?id=1159119>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1168108] cli/georep: Geo-rep can't set rsync_options through cli <https://bugzilla.redhat.com/show_bug.cgi?id=1168108>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1187140] [RFE]: geo-rep: Tool to find missing files in slave volume <https://bugzilla.redhat.com/show_bug.cgi?id=1187140>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1203086] Dist-geo-rep: geo-rep create fails with slave not empty <https://bugzilla.redhat.com/show_bug.cgi?id=1203086>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1203293] Dist-geo-rep: geo-rep goes to faulty trying to sync .trashcan directory <https://bugzilla.redhat.com/show_bug.cgi?id=1203293>
17:53 glusterbot News from resolvedglusterbugs: [Bug 1204641] [geo-rep] stop-all-gluster-processes.sh fails to stop all gluster processes <https://bugzilla.redhat.com/show_bug.cgi?id=1204641>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1207201] Dist-geo-rep: Missing hook-script S56glusterd-geo-rep-create-post.sh during source install <https://bugzilla.redhat.com/show_bug.cgi?id=1207201>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1207643] [geo-rep]: starting the geo-rep causes "Segmentation fault" and core is generated by "gsyncd.py" <https://bugzilla.redhat.com/show_bug.cgi?id=1207643>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1213048] [Geo-replication] cli crashed and core dump was observed while running gluster volume geo-replication vol0 status command <https://bugzilla.redhat.com/show_bug.cgi?id=1213048>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1214561] [Backup]: To capture path for deletes in changelog file <https://bugzilla.redhat.com/show_bug.cgi?id=1214561>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1217935] [Backup]: To capture path for deletes in changelog file <https://bugzilla.redhat.com/show_bug.cgi?id=1217935>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1217939] Have a fixed name for common meta-volume for nfs, snapshot and geo-rep and mount it at a fixed mount location <https://bugzilla.redhat.com/show_bug.cgi?id=1217939>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1217944] Changelog: Changelog should be treated as discontinuous only on changelog enable/disable <https://bugzilla.redhat.com/show_bug.cgi?id=1217944>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1218381] rpc: Memory corruption  because rpcsvc_register_notify interprets opaque mydata argument as xlator pointer <https://bugzilla.redhat.com/show_bug.cgi?id=1218381>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1218383] [Backup]: To capture path for deletes in changelog file <https://bugzilla.redhat.com/show_bug.cgi?id=1218383>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1219444] Rsync Hang and Georep fails to Sync files <https://bugzilla.redhat.com/show_bug.cgi?id=1219444>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1219457] [Backup]: Packages to be installed for glusterfind api to work <https://bugzilla.redhat.com/show_bug.cgi?id=1219457>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1219475] tools/glusterfind: Use Changelogs more effectively for GFID to Path conversion <https://bugzilla.redhat.com/show_bug.cgi?id=1219475>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1219823] [georep]: Creating geo-rep session kills all the brick process <https://bugzilla.redhat.com/show_bug.cgi?id=1219823>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1066529] 'op_ctx modification failed' in glusterd log after gluster volume status <https://bugzilla.redhat.com/show_bug.cgi?id=1066529>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1144282] Documentation for meta xlator <https://bugzilla.redhat.com/show_bug.cgi?id=1144282>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1181203] glusterd/libglusterfs: Various failures when multi threading epoll, due to racy state updates/maintenance <https://bugzilla.redhat.com/show_bug.cgi?id=1181203>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1199944] readv on /var/run/6b8f1f2526c6af8a87f1bb611ae5a86f.socket failed when NFS is disabled <https://bugzilla.redhat.com/show_bug.cgi?id=1199944>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1205592] [glusterd-snapshot] - Quorum must be computed using existing peers <https://bugzilla.redhat.com/show_bug.cgi?id=1205592>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1206134] glusterd :- after volume create command time out, deadlock has been observed among glusterd and all command keep failing with error "Another transaction is in progress" <https://bugzilla.redhat.com/show_bug.cgi?id=1206134>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1132469] [AFR-V2] - Type mismatch of inode does not report EIO <https://bugzilla.redhat.com/show_bug.cgi?id=1132469>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1134221] [AFR-V2] - dict_t leaks <https://bugzilla.redhat.com/show_bug.cgi?id=1134221>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1140613] [AFR-V2] - SHD doesn't remember selfheal failures before updating pending entry heal count <https://bugzilla.redhat.com/show_bug.cgi?id=1140613>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1170913] [AFR-V2] - Eliminate inodelks taken by shd during metadata self-heal in self-heal domain <https://bugzilla.redhat.com/show_bug.cgi?id=1170913>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1179169] tar on a gluster directory gives message "file changed as we read it" even though no updates to file in progress <https://bugzilla.redhat.com/show_bug.cgi?id=1179169>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1194305] [AFR-V2] - Do not count files which did not need index heal in the first place as successfully healed <https://bugzilla.redhat.com/show_bug.cgi?id=1194305>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1200670] Convert quota size from n-to-h order before using it <https://bugzilla.redhat.com/show_bug.cgi?id=1200670>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1205661] Sharding translator - bug fixes <https://bugzilla.redhat.com/show_bug.cgi?id=1205661>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1218475] [FEAT] - Sharding xlator <https://bugzilla.redhat.com/show_bug.cgi?id=1218475>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1195907] RDMA mount fails for unprivileged user without cap_net_bind_service <https://bugzilla.redhat.com/show_bug.cgi?id=1195907>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1121062] nfs-ganesha dumps core on running pynfs tests OPDG10 , OPDG11 <https://bugzilla.redhat.com/show_bug.cgi?id=1121062>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1124711] nfs-ganesha.enable off fails to stop nfs-ganesha due to a dbus error in upstream nfs-ganesha. <https://bugzilla.redhat.com/show_bug.cgi?id=1124711>
17:54 CyrilPeponnet glusterbot what a spam bot you are
17:54 glusterbot News from resolvedglusterbugs: [Bug 1125804] SMB:While running command user.cifs enable/disable there are error messages in log file for hook script. <https://bugzilla.redhat.com/show_bug.cgi?id=1125804>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1202316] NFS-Ganesha : Starting NFS-Ganesha independent of platforms <https://bugzilla.redhat.com/show_bug.cgi?id=1202316>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1207629] nfs-ganesha: feature.ganesha enable options fails <https://bugzilla.redhat.com/show_bug.cgi?id=1207629>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1217793] NFS-Ganesha: Handling GlusterFS CLI commands when NFS-Ganesha related commands are executed and other additonal checks <https://bugzilla.redhat.com/show_bug.cgi?id=1217793>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1218857] Clean up should not empty the contents of  the global config file <https://bugzilla.redhat.com/show_bug.cgi?id=1218857>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1218858] nfs-ganesha: Multi-head nfs  need Upcall Cache invalidation support <https://bugzilla.redhat.com/show_bug.cgi?id=1218858>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1158746] client process will hang if server is started to send the request before completing connection establishment. <https://bugzilla.redhat.com/show_bug.cgi?id=1158746>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1197548] RDMA:crash during sanity test <https://bugzilla.redhat.com/show_bug.cgi?id=1197548>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1205596] [SNAPSHOT]: Output message when a snapshot create is issued when multiple bricks are down needs to be improved <https://bugzilla.redhat.com/show_bug.cgi?id=1205596>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1207132] Snapshot:snapshot list should display in sorted order based on timestamp <https://bugzilla.redhat.com/show_bug.cgi?id=1207132>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1218741] [USS] : statfs call fails on USS. <https://bugzilla.redhat.com/show_bug.cgi?id=1218741>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1086228] Remove-brick: File permission (setuid) changes after migration of the file <https://bugzilla.redhat.com/show_bug.cgi?id=1086228>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1111554] SNAPSHOT[USS]:gluster volume set for uss doesnot check any boundaries <https://bugzilla.redhat.com/show_bug.cgi?id=1111554>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1138602] DHT: nfs.log getting filled with "I" logs <https://bugzilla.redhat.com/show_bug.cgi?id=1138602>
17:54 glusterbot News from resolvedglusterbugs: [Bug 955753] NFS SETATTR call with a truncate and chmod 440 fails <https://bugzilla.redhat.com/show_bug.cgi?id=955753>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1099645] Unchecked strcpy and strcat in gf-history-changelog.c <https://bugzilla.redhat.com/show_bug.cgi?id=1099645>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1124858] building on EPEL-5 fails with unittest/dht_layout_mock.c <https://bugzilla.redhat.com/show_bug.cgi?id=1124858>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1138621] Wrong error handling for mmap() syscall in gf-changelog-process.c FILE <https://bugzilla.redhat.com/show_bug.cgi?id=1138621>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1149863] Option transport.socket.bind-address ignored <https://bugzilla.redhat.com/show_bug.cgi?id=1149863>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1149943] duplicate librsync code should likely be linked removed and linked as a library <https://bugzilla.redhat.com/show_bug.cgi?id=1149943>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1164503] After readv, md-cache only checks cache times if read was empty <https://bugzilla.redhat.com/show_bug.cgi?id=1164503>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1170643] Potential race while checking/updating graph->used <https://bugzilla.redhat.com/show_bug.cgi?id=1170643>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1174017] Unchecked buffer fill by gf_readline in gf_history_changelog_next_change <https://bugzilla.redhat.com/show_bug.cgi?id=1174017>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1215382] Bricks fail to start with tiering related logs on the brick <https://bugzilla.redhat.com/show_bug.cgi?id=1215382>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1215385] rmtab file is a bottleneck when lot of clients are accessing a volume through NFS <https://bugzilla.redhat.com/show_bug.cgi?id=1215385>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1219089] glusterfs-server is a requirements for glusterfs rpm <https://bugzilla.redhat.com/show_bug.cgi?id=1219089>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1220022] package glupy as a subpackage under gluster namespace. <https://bugzilla.redhat.com/show_bug.cgi?id=1220022>
17:54 glusterbot News from resolvedglusterbugs: [Bug 1099752] Gluster Man Page: Incorrect indentation for " Log Commands" section <https://bugzilla.redhat.com/show_bug.cgi?id=1099752>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1131272] who-wrote-glusterfs.sh doesn't use its defined GITDM_REPO variable <https://bugzilla.redhat.com/show_bug.cgi?id=1131272>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1093594] Glfs_fini() not freeing the resources <https://bugzilla.redhat.com/show_bug.cgi?id=1093594>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1088649] Some newly created folders have root ownership although created by unprivileged user <https://bugzilla.redhat.com/show_bug.cgi?id=1088649>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1115907] Use common loc-touchup in fuse/server/gfapi <https://bugzilla.redhat.com/show_bug.cgi?id=1115907>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1117733] gfid-access should print real-inode-gfid in statedump <https://bugzilla.redhat.com/show_bug.cgi?id=1117733>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1128721] Tracker bug for afrv1 changelog support in afrv2 <https://bugzilla.redhat.com/show_bug.cgi?id=1128721>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1129529] Find <mnt> | xargs stat leads to mismatching gfids on files without gfid <https://bugzilla.redhat.com/show_bug.cgi?id=1129529>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1132102] creating special files like device files is leading to pending data changelog when one of the brick is down <https://bugzilla.redhat.com/show_bug.cgi?id=1132102>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1132461] Metadata sync in self-heal is not happening inside metadata locks <https://bugzilla.redhat.com/show_bug.cgi?id=1132461>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1136159] Open fails with ENOENT while renames/readdirs are in progress <https://bugzilla.redhat.com/show_bug.cgi?id=1136159>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1141167] include.rc has typo for 3rd mount <https://bugzilla.redhat.com/show_bug.cgi?id=1141167>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1142601] files with open fd's getting into split-brain when bricks goes offline and comes back online <https://bugzilla.redhat.com/show_bug.cgi?id=1142601>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1147462] implement gluster volume heal <volname> info using gfapi on afrv2 <https://bugzilla.redhat.com/show_bug.cgi?id=1147462>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1153935] fix static analysis checks in io-threads <https://bugzilla.redhat.com/show_bug.cgi?id=1153935>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1159221] io-stats may crash the brick when loc->path is NULL in some fops <https://bugzilla.redhat.com/show_bug.cgi?id=1159221>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1160509] bulk remove xattr should not fail if removexattr fails with ENOATTR/ENODATA <https://bugzilla.redhat.com/show_bug.cgi?id=1160509>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1161106] quota xattrs are exposed in lookup and getxattr <https://bugzilla.redhat.com/show_bug.cgi?id=1161106>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1164051] Dictionary datastructure refactoring <https://bugzilla.redhat.com/show_bug.cgi?id=1164051>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1168189] entry self-heal in 3.5 and 3.6 are not compatible <https://bugzilla.redhat.com/show_bug.cgi?id=1168189>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1170407] self-heal: heal issue, E [afr-self-heal-data.c:1613:afr_sh_data_open_cbk] 0-vol0-replicate-2: open of <gfid:ee96b8a3-de43-48e0-8920-325a3890bb3e> failed on child vol0-client-4 (Permission denied) <https://bugzilla.redhat.com/show_bug.cgi?id=1170407>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1172477] Incorrect error code in the warning message : W [client-rpc-fops.c:1732:client3_3_xattrop_cbk] 0-vol1-client-1: remote operation failed: Success. <https://bugzilla.redhat.com/show_bug.cgi?id=1172477>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1177601] [FEAT[ Implement proactive self-heal daemon feature for disperse subvolumes <https://bugzilla.redhat.com/show_bug.cgi?id=1177601>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1178688] Internal ec xattrs are allowed to be modified <https://bugzilla.redhat.com/show_bug.cgi?id=1178688>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1179180] When the volume is in stopped state/all the bricks are down mount of the volume hangs <https://bugzilla.redhat.com/show_bug.cgi?id=1179180>
17:55 pppp joined #gluster
17:55 glusterbot News from resolvedglusterbugs: [Bug 1180986] Symlink creation doesn't happen with intended gid <https://bugzilla.redhat.com/show_bug.cgi?id=1180986>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1187858] make client-bind-insecure option configurable using gluster volume set <https://bugzilla.redhat.com/show_bug.cgi?id=1187858>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1187885] Enable quorum for replica with odd number of bricks <https://bugzilla.redhat.com/show_bug.cgi?id=1187885>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1199382] [epoll] Typo in the gluster volume set help message for server.event-threads and client.event-threads <https://bugzilla.redhat.com/show_bug.cgi?id=1199382>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1199406] Spurious failures with tests/bugs/geo-replication/bug-877293.t <https://bugzilla.redhat.com/show_bug.cgi?id=1199406>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1199431] Spurious failure of tests/bugs/quota/bug-1038598.t <https://bugzilla.redhat.com/show_bug.cgi?id=1199431>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1200372] Geo-rep fails with disperse volume <https://bugzilla.redhat.com/show_bug.cgi?id=1200372>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1203581] Disperse volume: No output with gluster volume heal info <https://bugzilla.redhat.com/show_bug.cgi?id=1203581>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1207085] ec heal improvements <https://bugzilla.redhat.com/show_bug.cgi?id=1207085>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1207712] Input/Output error with disperse volume when geo-replication is started <https://bugzilla.redhat.com/show_bug.cgi?id=1207712>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1207939] georep commands are listed as <slave_volume>::<slave_volume> instead of <slave_host>::<slave_volume> <https://bugzilla.redhat.com/show_bug.cgi?id=1207939>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1215265] Fixes for data self-heal in ec <https://bugzilla.redhat.com/show_bug.cgi?id=1215265>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1218485] spurious failure bug-908146.t <https://bugzilla.redhat.com/show_bug.cgi?id=1218485>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1122028] Unlink fails on files having no trusted.pgfid.<gfid> xattr when linkcount>1 and build-pgfid is turned on. <https://bugzilla.redhat.com/show_bug.cgi?id=1122028>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1119628] [SNAPSHOT] USS: The .snaps directory shows does not get refreshed immediately if snaps are taken when I/O is in progress <https://bugzilla.redhat.com/show_bug.cgi?id=1119628>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1145475] add documentation  for inode and dentry management <https://bugzilla.redhat.com/show_bug.cgi?id=1145475>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1151004] [USS]: deletion and creation of snapshots with same name causes problems <https://bugzilla.redhat.com/show_bug.cgi?id=1151004>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1172262] glusterfs client crashed while migrating the fds <https://bugzilla.redhat.com/show_bug.cgi?id=1172262>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1176393] marker: inode_path is being called on the inodes not yet linked <https://bugzilla.redhat.com/show_bug.cgi?id=1176393>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1179663] CIFS:[USS]: glusterfsd OOM killed when 255 snapshots were browsed at CIFS mount and Control+C is issued <https://bugzilla.redhat.com/show_bug.cgi?id=1179663>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1184366] make sure pthread keys are used only once <https://bugzilla.redhat.com/show_bug.cgi?id=1184366>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1210338] file copy operation fails on nfs <https://bugzilla.redhat.com/show_bug.cgi?id=1210338>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1067256] ls on some directories takes minutes to complete <https://bugzilla.redhat.com/show_bug.cgi?id=1067256>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1196615] [dht]: Failed to rebalance files when a replica-brick-set was removed <https://bugzilla.redhat.com/show_bug.cgi?id=1196615>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1128648] SMB:On Cifs mount creating files and doing list or running arequal checksum fills client log with "found anomalies for /.. gfid" issue <https://bugzilla.redhat.com/show_bug.cgi?id=1128648>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1177767] make uninstall leaves two symlink files <https://bugzilla.redhat.com/show_bug.cgi?id=1177767>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1209729] Disperse volume: Fix memory leak in truncate calls <https://bugzilla.redhat.com/show_bug.cgi?id=1209729>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1120647] Converting a volume from replicate to distribute by reducing the replica count to one  fails. <https://bugzilla.redhat.com/show_bug.cgi?id=1120647>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1127148] Regression test failure while running bug-918437-sh-mtime.t <https://bugzilla.redhat.com/show_bug.cgi?id=1127148>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1134691] Need ability to heal mismatching user extended attributes without any changelogs <https://bugzilla.redhat.com/show_bug.cgi?id=1134691>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1135514] setfattr/getfattr of a 'key' fails when 'value' is null <https://bugzilla.redhat.com/show_bug.cgi?id=1135514>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1136769] AFR: Provide a gluster CLI for automated resolution of split-brains. <https://bugzilla.redhat.com/show_bug.cgi?id=1136769>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1139327] dicts are not freed leading to memory leaks <https://bugzilla.redhat.com/show_bug.cgi?id=1139327>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1166020] self-heal-algorithm with option "full" doesn't heal sparse files correctly <https://bugzilla.redhat.com/show_bug.cgi?id=1166020>
17:55 glusterbot News from resolvedglusterbugs: [Bug 1169335] gluster vol heal <VOLNAME> info takes long time to run even when there are no pending heals. <https://bugzilla.redhat.com/show_bug.cgi?id=1169335>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1176089] AFR coverity fixes <https://bugzilla.redhat.com/show_bug.cgi?id=1176089>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1214168] While running i/o's from cifs mount huge logging errors related to quick_read performance xlator : invalid argument:iobuf <https://bugzilla.redhat.com/show_bug.cgi?id=1214168>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1217689] [RFE] arbiter for 3 way replication <https://bugzilla.redhat.com/show_bug.cgi?id=1217689>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1145450] Fix for spurious failure <https://bugzilla.redhat.com/show_bug.cgi?id=1145450>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1146479] [SNAPSHOT]: Need logging correction during the lookup failure case. <https://bugzilla.redhat.com/show_bug.cgi?id=1146479>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1151933] Quota: features.quota-deem-statfs is "on" even after disabling quota. <https://bugzilla.redhat.com/show_bug.cgi?id=1151933>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1161015] [USS]: snapd process is not killed once the glusterd comes back <https://bugzilla.redhat.com/show_bug.cgi?id=1161015>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1167580] [USS]: Non root user who has no access to a directory, from NFS mount, is able to access the files under .snaps under that directory <https://bugzilla.redhat.com/show_bug.cgi?id=1167580>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1190108] RFE: Quota for inode count <https://bugzilla.redhat.com/show_bug.cgi?id=1190108>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1123950] Rename of a file from 2 clients racing and resulting in an error on both clients <https://bugzilla.redhat.com/show_bug.cgi?id=1123950>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1139812] DHT: Rebalance process crash after add-brick and `rebalance start' operation <https://bugzilla.redhat.com/show_bug.cgi?id=1139812>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1161156] DHT: two problems, first rename fails for a file, second rename failures give different error messages <https://bugzilla.redhat.com/show_bug.cgi?id=1161156>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1161311] DHT + rebalance :- DATA LOSS -  while file is in migration, creation of Hard-link and unlink of original file ends in data loss(both files are missing from mount and backend <https://bugzilla.redhat.com/show_bug.cgi?id=1161311>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1174205] use different names for getting volfiles <https://bugzilla.redhat.com/show_bug.cgi?id=1174205>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1180231] glusterfs-fuse: Crash due to race in FUSE notify when multiple epoll threads invoke the routine <https://bugzilla.redhat.com/show_bug.cgi?id=1180231>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1192114] edge-triggered epoll breaks rpc-throttling <https://bugzilla.redhat.com/show_bug.cgi?id=1192114>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1197118] Perf: Crash seen during IOZone performance regression runs. <https://bugzilla.redhat.com/show_bug.cgi?id=1197118>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1200271] Upcall: xlator options for Upcall xlator <https://bugzilla.redhat.com/show_bug.cgi?id=1200271>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1217711] Upcall framework support along with cache_invalidation usecase handled <https://bugzilla.redhat.com/show_bug.cgi?id=1217711>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1114557] DHT : - In case of race between two mkdir(creating same Directory) from different mount, both are failing with error even though Directory is created. FUSE mount gave "Input/output error" <https://bugzilla.redhat.com/show_bug.cgi?id=1114557>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1117923] DHT :- rm -rf is not removing stale link file and because of that unable to create file having same name as stale link file <https://bugzilla.redhat.com/show_bug.cgi?id=1117923>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1157974] Warning message to restore data from removed bricks, should not be thrown when 'remove-brick force' was used <https://bugzilla.redhat.com/show_bug.cgi?id=1157974>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1217386] Crash in dht_getxattr_cbk <https://bugzilla.redhat.com/show_bug.cgi?id=1217386>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1217949] Null check before freeing dir_dfmeta and tmp_container <https://bugzilla.redhat.com/show_bug.cgi?id=1217949>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1219579] DHT Rebalance : Provide options to control the maximum number of files being migrated at a time( Throttling) <https://bugzilla.redhat.com/show_bug.cgi?id=1219579>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1207624] BitRot :- scrubber is not detecting rotten data and not marking file as 'BAD' file <https://bugzilla.redhat.com/show_bug.cgi?id=1207624>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1119641] [SNAPSHOT]: error message for invalid snapshot status should be aligned with error messages of info and list <https://bugzilla.redhat.com/show_bug.cgi?id=1119641>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1122816] [SNAPSHOT]: In mixed cluster with RHS 2.1 U2 & RHS 3.0, newly created volume should not contain snapshot related options displayed in 'gluster volume info' <https://bugzilla.redhat.com/show_bug.cgi?id=1122816>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1132451] [SNAPSHOT]: If the snapshoted brick has xfs options set as part of its creation, they are not automount upon reboot <https://bugzilla.redhat.com/show_bug.cgi?id=1132451>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1132946] [SNAPSHOT]: snapshoted volume is read only but it shows rw attributes in mount <https://bugzilla.redhat.com/show_bug.cgi?id=1132946>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1133456] [SNAPSHOT]: nouuid is appended for every snapshoted brick which causes duplication if the original brick has already nouuid <https://bugzilla.redhat.com/show_bug.cgi?id=1133456>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1136352] [SNAPSHOT]: snapshot create fails with error in log "Failed to open directory <xyz>, due to many open files" <https://bugzilla.redhat.com/show_bug.cgi?id=1136352>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1147378] Enabling Quota on existing data won't create pgfid xattrs <https://bugzilla.redhat.com/show_bug.cgi?id=1147378>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1159840] [USS]: creating file/directories under .snaps shows wrong error message <https://bugzilla.redhat.com/show_bug.cgi?id=1159840>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1160236] [USS]: Typo error in the description for USS under "gluster volume set help" <https://bugzilla.redhat.com/show_bug.cgi?id=1160236>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1160534] [USS]: All uss related logs are reported under /var/log/glusterfs, it makes sense to move it into subfolder <https://bugzilla.redhat.com/show_bug.cgi?id=1160534>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1162498] [USS]: Unable to access .snaps after snapshot restore after directories were deleted and recreated <https://bugzilla.redhat.com/show_bug.cgi?id=1162498>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1164613] [USS]: If the snap name is same as snap-directory than cd to virtual snap directory fails <https://bugzilla.redhat.com/show_bug.cgi?id=1164613>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1166197] [USS]:After deactivating a snapshot trying to access the remaining activated snapshots from NFS mount gives 'Invalid argument' error <https://bugzilla.redhat.com/show_bug.cgi?id=1166197>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1168643] [USS]: data unavailability for a period of time when USS is enabled/disabled <https://bugzilla.redhat.com/show_bug.cgi?id=1168643>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1184885] Quota: Build ancestry in the lookup <https://bugzilla.redhat.com/show_bug.cgi?id=1184885>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1188636] RFE: Quota marker needs to be re-factored by replacing WIND and UNWIND with SYNCOP <https://bugzilla.redhat.com/show_bug.cgi?id=1188636>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1203629] DHT:Quota:- brick process crashed after deleting .glusterfs from backend <https://bugzilla.redhat.com/show_bug.cgi?id=1203629>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1207967] heal doesn't work for new volumes to reflect the 128 bits changes in quota after upgrade <https://bugzilla.redhat.com/show_bug.cgi?id=1207967>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1212348] quota: inode quota not healing after upgrade <https://bugzilla.redhat.com/show_bug.cgi?id=1212348>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1215907] cli should return error with inode quota cmds on cluster with op_version less than 3.7 <https://bugzilla.redhat.com/show_bug.cgi?id=1215907>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1218243] quota/marker: turn off inode quotas by default <https://bugzilla.redhat.com/show_bug.cgi?id=1218243>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1144527] log files get flooded when removexattr() can't find a specified key or value <https://bugzilla.redhat.com/show_bug.cgi?id=1144527>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1174087] logging improvements in marker translator <https://bugzilla.redhat.com/show_bug.cgi?id=1174087>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1188196] Change order of translators in brick <https://bugzilla.redhat.com/show_bug.cgi?id=1188196>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1208784] Load md-cache on the server <https://bugzilla.redhat.com/show_bug.cgi?id=1208784>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1122417] Writing data to a dispersed volume mounted by NFS fails <https://bugzilla.redhat.com/show_bug.cgi?id=1122417>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1122581] Sometimes self heal on disperse volume crashes <https://bugzilla.redhat.com/show_bug.cgi?id=1122581>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1122586] Read/write speed on a dispersed volume is poor <https://bugzilla.redhat.com/show_bug.cgi?id=1122586>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1125166] Current implementation depends on Intel's SSE2 extensions <https://bugzilla.redhat.com/show_bug.cgi?id=1125166>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1125312] Disperse xlator issues in a 32 bits environment <https://bugzilla.redhat.com/show_bug.cgi?id=1125312>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1126932] Random crashes while writing to a dispersed volume <https://bugzilla.redhat.com/show_bug.cgi?id=1126932>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1127653] Memory leaks of xdata on some fops of protocol/server <https://bugzilla.redhat.com/show_bug.cgi?id=1127653>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1140396] ec tests fail on NetBSD <https://bugzilla.redhat.com/show_bug.cgi?id=1140396>
17:56 glusterbot News from resolvedglusterbugs: [Bug 1140861] A new xattr is needed to store ec parameters <https://bugzilla.redhat.com/show_bug.cgi?id=1140861>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1144108] Spurious failure on disperse tests (bad file size on brick) <https://bugzilla.redhat.com/show_bug.cgi?id=1144108>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1146903] New 32 bits issues introduced by a recent patch <https://bugzilla.redhat.com/show_bug.cgi?id=1146903>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1147563] Update documentation for dispersed volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1147563>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1148010] Add support for dumping private state of ec xlator <https://bugzilla.redhat.com/show_bug.cgi?id=1148010>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1148520] Memory leaks in ec while traversing directories <https://bugzilla.redhat.com/show_bug.cgi?id=1148520>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1149723] Self-heal on dispersed volumes does not restore the correct date <https://bugzilla.redhat.com/show_bug.cgi?id=1149723>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1149726] An 'ls' can return invalid contents on a dispersed volume before self-heal repairs a damaged directory <https://bugzilla.redhat.com/show_bug.cgi?id=1149726>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1152902] Rebalance on a dispersed volume produces multiple errors in logs <https://bugzilla.redhat.com/show_bug.cgi?id=1152902>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1156404] geo-replication fails with dispersed volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1156404>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1158008] Quota utilization not correctly reported for dispersed volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1158008>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1161588] ls -alR can not heal the disperse volume <https://bugzilla.redhat.com/show_bug.cgi?id=1161588>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1161621] Possible file corruption on dispersed volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1161621>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1161886] rename operation leads to core dump <https://bugzilla.redhat.com/show_bug.cgi?id=1161886>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1162805] A disperse 2 x (2 + 1) = 6 volume, kill two glusterfsd program, ls  mountpoint abnormal. <https://bugzilla.redhat.com/show_bug.cgi?id=1162805>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1163760] when replace one brick on disperse volume, ls sometimes goes wrong <https://bugzilla.redhat.com/show_bug.cgi?id=1163760>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1167419] EC_MAX_NODES is defined incorrectly <https://bugzilla.redhat.com/show_bug.cgi?id=1167419>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1168167] Change licensing of disperse to dual LGPLv3/GPLv2 <https://bugzilla.redhat.com/show_bug.cgi?id=1168167>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1170254] Fix mutex problems reported by coverity scan <https://bugzilla.redhat.com/show_bug.cgi?id=1170254>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1179050] gluster vol clear-locks vol-name path kind all inode return IO error in a disperse volume <https://bugzilla.redhat.com/show_bug.cgi?id=1179050>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1182267] compile warnings with gcc 5.0 <https://bugzilla.redhat.com/show_bug.cgi?id=1182267>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1187474] Disperse volume mounted through NFS doesn't list any files/directories <https://bugzilla.redhat.com/show_bug.cgi?id=1187474>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1190581] Detect half executed operations on disperse volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1190581>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1123646] [SNAPSHOT]: Snapshot of volume with thick provisioned LV as bricks does not give proper error message <https://bugzilla.redhat.com/show_bug.cgi?id=1123646>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1151412] Geo-Replication Passive node is not getting promoted to active when one node of replicated slave volume goes down <https://bugzilla.redhat.com/show_bug.cgi?id=1151412>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1176934] Changing mtime using touch doesn't copy master file to slave with tar+ssh <https://bugzilla.redhat.com/show_bug.cgi?id=1176934>
17:57 Anjana joined #gluster
17:57 glusterbot News from resolvedglusterbugs: [Bug 1206065] [Backup]:  Crash seen when 'glusterfind create' command is run on a non-existent volume <https://bugzilla.redhat.com/show_bug.cgi?id=1206065>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1217930] Geo-replication very slow, not able to sync all the files to slave <https://bugzilla.redhat.com/show_bug.cgi?id=1217930>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1125824] rebalance is not resulting in the hash layout changes being available to nfs client <https://bugzilla.redhat.com/show_bug.cgi?id=1125824>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1158226] Inode infinite loop leads to glusterfsd segfault <https://bugzilla.redhat.com/show_bug.cgi?id=1158226>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1158262] Rebalance failed to rebalance files <https://bugzilla.redhat.com/show_bug.cgi?id=1158262>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1168875] [USS]: browsing .snaps directory with CIFS fails with "Invalid argument" <https://bugzilla.redhat.com/show_bug.cgi?id=1168875>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1176008] Directories not visible anymore after add-brick, new brick dirs not part of old bricks <https://bugzilla.redhat.com/show_bug.cgi?id=1176008>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1193757] Inode infinite loop leads to glusterfsd segfault <https://bugzilla.redhat.com/show_bug.cgi?id=1193757>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1202669] Perf:  readdirp in replicated volumes causes performance degrade <https://bugzilla.redhat.com/show_bug.cgi?id=1202669>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1219048] Data Tiering:Enabling quota command fails with "quota command failed : Commit failed on localhost" <https://bugzilla.redhat.com/show_bug.cgi?id=1219048>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1219600] Use tiering only if all nodes are capable of it at proper version <https://bugzilla.redhat.com/show_bug.cgi?id=1219600>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1219845] tiering: cksum mismach for tiered volume. <https://bugzilla.redhat.com/show_bug.cgi?id=1219845>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1219846] Data Tiering: glusterd(management) communication issues seen on tiering setup <https://bugzilla.redhat.com/show_bug.cgi?id=1219846>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1220052] Data Tiering:UI:changes required to CLI responses for attach and detach tier <https://bugzilla.redhat.com/show_bug.cgi?id=1220052>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1206517] Data Tiering:Distribute-replicate type Volume not getting converted to a tiered volume on attach-tier <https://bugzilla.redhat.com/show_bug.cgi?id=1206517>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1219547] I/O failure on attaching tier <https://bugzilla.redhat.com/show_bug.cgi?id=1219547>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1207532] BitRot :- gluster volume help gives insufficient and ambiguous information for bitrot <https://bugzilla.redhat.com/show_bug.cgi?id=1207532>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1219787] package glupy as a subpackage under gluster namespace. <https://bugzilla.redhat.com/show_bug.cgi?id=1219787>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1155489] Hook script S31ganesha-reset.sh accounts for  100% CPU usage and doesn't terminate <https://bugzilla.redhat.com/show_bug.cgi?id=1155489>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1118591] core: all brick processes crash when quota is enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1118591>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1166232] Libgfapi symbolic version breaking Samba Fedora rawhide (22)  koji builds <https://bugzilla.redhat.com/show_bug.cgi?id=1166232>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1176242] glfs_h_creat() leaks file descriptors <https://bugzilla.redhat.com/show_bug.cgi?id=1176242>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1196650] dht-common.h:911:1: warning: inline function 'dht_lock_count' declared but never defined <https://bugzilla.redhat.com/show_bug.cgi?id=1196650>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1141539] data loss when rebalance + renames are in progress and bricks from replica pairs goes down and comes back <https://bugzilla.redhat.com/show_bug.cgi?id=1141539>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1220064] Gluster small-file creates do not scale with brick count <https://bugzilla.redhat.com/show_bug.cgi?id=1220064>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1112238] Dist-geo-rep : geo-rep syncs files through hybrid crawl after history crawl is finished even though changelogs are available. <https://bugzilla.redhat.com/show_bug.cgi?id=1112238>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1128093] [Dist-geo-rep] after restore of symlink snapshot in geo-rep setup, few files fail to sync to slave. <https://bugzilla.redhat.com/show_bug.cgi?id=1128093>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1143853] geo-rep: file with same name has diffrent gfid in master and slave <https://bugzilla.redhat.com/show_bug.cgi?id=1143853>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1162057] dist-geo-rep: zero byte files created in slave even before creating any data in master by processing stale changelogs in working dir <https://bugzilla.redhat.com/show_bug.cgi?id=1162057>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1169331] Geo-replication slave fills up inodes <https://bugzilla.redhat.com/show_bug.cgi?id=1169331>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1177527] Geo-Replication : many files are missing in slave volume <https://bugzilla.redhat.com/show_bug.cgi?id=1177527>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1180459] Dist-geo-rep : In geo-rep mount-broker setup, the status doesn't show, to which user geo-rep relationship is established on slave. <https://bugzilla.redhat.com/show_bug.cgi?id=1180459>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1181117] ssh AuthorizedKeyFiles are assumed to be in $HOME/.ssh/authorized_keys ignoring any configuration from /etc/ssh/sshd_config <https://bugzilla.redhat.com/show_bug.cgi?id=1181117>
17:57 glusterbot News from resolvedglusterbugs: [Bug 1196690] Dist-geo-rep: setxattr to files on master are not getting synced to slave <https://bugzilla.redhat.com/show_bug.cgi?id=1196690>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1217928] [georep]: Transition from xsync to changelog doesn't happen once the brick is brought online <https://bugzilla.redhat.com/show_bug.cgi?id=1217928>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1218166] [Backup]: User must be warned while running the 'glusterfind pre' command twice without running the post command <https://bugzilla.redhat.com/show_bug.cgi?id=1218166>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1218586] dist-geo-rep : all the bricks of a node shows faulty in status if slave node to which atleast one of the brick connected goes down. <https://bugzilla.redhat.com/show_bug.cgi?id=1218586>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1165996] cmd log history should not be a hidden file <https://bugzilla.redhat.com/show_bug.cgi?id=1165996>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1202745] glusterd crashed on one of the node <https://bugzilla.redhat.com/show_bug.cgi?id=1202745>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1145989] package POSTIN scriptlet failure <https://bugzilla.redhat.com/show_bug.cgi?id=1145989>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1123768] mem_acct : Check return value of xlator_mem_acct_init() <https://bugzilla.redhat.com/show_bug.cgi?id=1123768>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1136702] Add a warning message to check the removed-bricks for any files left post "remove-brick commit" <https://bugzilla.redhat.com/show_bug.cgi?id=1136702>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1146279] Compilation on OSX is broken with upstream git master and release-3.6 branches <https://bugzilla.redhat.com/show_bug.cgi?id=1146279>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1207709] trash: remove_trash_path broken in the internal case <https://bugzilla.redhat.com/show_bug.cgi?id=1207709>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1216310] Disable rpc throttling for glusterfs protocol <https://bugzilla.redhat.com/show_bug.cgi?id=1216310>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1218170] [Quota] : To have a separate quota.conf file for inode quota. <https://bugzilla.redhat.com/show_bug.cgi?id=1218170>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1218922] [dist-geo-rep]:Directory not empty and Stale file handle errors in geo-rep logs during deletes from master in history/changelog crawl <https://bugzilla.redhat.com/show_bug.cgi?id=1218922>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1219412] Geo-Replication - Fails to handle file renaming correctly between master and slave <https://bugzilla.redhat.com/show_bug.cgi?id=1219412>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1219608] IO touched a file undergoing migration fails for tiered volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1219608>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1207547] BitRot :- If bitrot is not enabled for given volume then scrubber should not crawl bricks of that volume and should not update vol file for that volume <https://bugzilla.redhat.com/show_bug.cgi?id=1207547>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1218602] Remove replace-brick with data migration support from gluster cli <https://bugzilla.redhat.com/show_bug.cgi?id=1218602>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1219785] bitrot: glusterd is crashing when user enable bitrot on the volume <https://bugzilla.redhat.com/show_bug.cgi?id=1219785>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1109180] Issues reported by Cppcheck static analysis tool <https://bugzilla.redhat.com/show_bug.cgi?id=1109180>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1136201] RFE: Rebalance shouldn't start when clients older than glusterfs-3.6 are connected <https://bugzilla.redhat.com/show_bug.cgi?id=1136201>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1191030] Use rcu to protect concurrent access to data structures in GlusterD <https://bugzilla.redhat.com/show_bug.cgi?id=1191030>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1207611] peer probe with additional network address fails <https://bugzilla.redhat.com/show_bug.cgi?id=1207611>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1215018] [New] - gluster peer status goes to disconnected state. <https://bugzilla.redhat.com/show_bug.cgi?id=1215018>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1125843] geo-rep: changelog_register fails when geo-rep started after session creation. <https://bugzilla.redhat.com/show_bug.cgi?id=1125843>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1125918] Dist-geo-rep : geo-rep fails to do first history crawl after the volume restored from snap. <https://bugzilla.redhat.com/show_bug.cgi?id=1125918>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1183229] Geo-Replication creation of common_secret.pem.pub file with gsec_create <https://bugzilla.redhat.com/show_bug.cgi?id=1183229>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1187021] Geo-replication not replicating ACLs to target <https://bugzilla.redhat.com/show_bug.cgi?id=1187021>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1191413] Geo-Replication : shows faulty with lots of error messages "ssh> tar: .gfid/<real GFID>: Cannot open: Structure needs cleaning" <https://bugzilla.redhat.com/show_bug.cgi?id=1191413>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1196632] dist-geo-rep: Concurrent renames and node reboots results in slave having both source and destination of file with destination being 0 byte sticky file <https://bugzilla.redhat.com/show_bug.cgi?id=1196632>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1130242] brick failure detection does not work for ext4 filesystems <https://bugzilla.redhat.com/show_bug.cgi?id=1130242>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1187456] Performance enhancement for RDMA <https://bugzilla.redhat.com/show_bug.cgi?id=1187456>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1105082] DHT  : - two directories has same gfid <https://bugzilla.redhat.com/show_bug.cgi?id=1105082>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1129787] file locks are not released within an acceptable time when a fuse-client uncleanly disconnects <https://bugzilla.redhat.com/show_bug.cgi?id=1129787>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1130969] NFS interoperability problem: stripe-xlator removes EOF at end of READDIR <https://bugzilla.redhat.com/show_bug.cgi?id=1130969>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1134773] cli-rpc-ops.c fails to compile with -Werror=format-security <https://bugzilla.redhat.com/show_bug.cgi?id=1134773>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1169005] RPM building of glusterfs-3.6.1-3.fc22 src RPM on el5 is failing <https://bugzilla.redhat.com/show_bug.cgi?id=1169005>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1197253] NFS logs are filled with system.posix_acl_access messages <https://bugzilla.redhat.com/show_bug.cgi?id=1197253>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1205579] gluster nfs server process was crashed multiple time while mounting volume and starting volume using force option <https://bugzilla.redhat.com/show_bug.cgi?id=1205579>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1211837] glusterfs-api.pc versioning breaks QEMU <https://bugzilla.redhat.com/show_bug.cgi?id=1211837>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1215189] timeout/expiry of group-cache should be set to 300 seconds <https://bugzilla.redhat.com/show_bug.cgi?id=1215189>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1113960] brick process crashed when rebalance and rename was in progress <https://bugzilla.redhat.com/show_bug.cgi?id=1113960>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1130888] Renaming file while rebalance is in progress causes data loss <https://bugzilla.redhat.com/show_bug.cgi?id=1130888>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1199436] glfs_fini- The pending per xlartor resource frees. <https://bugzilla.redhat.com/show_bug.cgi?id=1199436>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1202290] [epoll+Snapshot] : Snapd crashed while trying to list snaps under .snaps folder <https://bugzilla.redhat.com/show_bug.cgi?id=1202290>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1215787] [HC] qcow2 image creation using qemu-img hits segmentation fault <https://bugzilla.redhat.com/show_bug.cgi?id=1215787>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1195668] Perf:  DHT errors filling logs when perf tests are run. <https://bugzilla.redhat.com/show_bug.cgi?id=1195668>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1204140] "case sensitive = no" is not honored when "preserve case = yes" is present in smb.conf <https://bugzilla.redhat.com/show_bug.cgi?id=1204140>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1122399] [SNAPSHOT]: man or info page of gluster needs to be updated with snapshot commands <https://bugzilla.redhat.com/show_bug.cgi?id=1122399>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1129038] If snapshot is attempted when geo-replication session is live, error must be signaled. <https://bugzilla.redhat.com/show_bug.cgi?id=1129038>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1145069] [SNAPSHOT]: man or info page of gluster needs to be updated with snapshot commands <https://bugzilla.redhat.com/show_bug.cgi?id=1145069>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1185259] cli crashes when listing quota limits with xml output <https://bugzilla.redhat.com/show_bug.cgi?id=1185259>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1202436] [SNAPSHOT]: After a volume which has quota enabled is restored to a snap, attaching another node to the cluster is not successful <https://bugzilla.redhat.com/show_bug.cgi?id=1202436>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1163161] With afrv2 + ext4, lookups on directories with large offsets could result in duplicate/missing entries <https://bugzilla.redhat.com/show_bug.cgi?id=1163161>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1125180] [SNAPSHOT] Snap mount doesn't inheret mount options from parent brick <https://bugzilla.redhat.com/show_bug.cgi?id=1125180>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1197587] quota: quotad.socket in /tmp <https://bugzilla.redhat.com/show_bug.cgi?id=1197587>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1202292] quota: 'Usage crossed soft limit' not generated in the log <https://bugzilla.redhat.com/show_bug.cgi?id=1202292>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1166284] Directory fd leaks in index translator <https://bugzilla.redhat.com/show_bug.cgi?id=1166284>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1125168] ec-method.c fails to compile in function 'ec_method_encode' due to unknown register name 'xmm7' <https://bugzilla.redhat.com/show_bug.cgi?id=1125168>
17:58 glusterbot News from resolvedglusterbugs: [Bug 1131502] Fuse mounting of a tcp,rdma volume with rdma as transport type always mounts as tcp without any fail <https://bugzilla.redhat.com/show_bug.cgi?id=1131502>
17:58 glusterbot News from resolvedglusterbugs: [Bug 764827] [RFE] Show progress during transfer - Effective bandwidth usage at a give point in time <https://bugzilla.redhat.com/show_bug.cgi?id=764827>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1177722] [RFE] Unittest framework for geo-replication <https://bugzilla.redhat.com/show_bug.cgi?id=1177722>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1197433] Geo-rep: Duplicate public keys in authorized_keys on each run of create command <https://bugzilla.redhat.com/show_bug.cgi?id=1197433>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1198101] dist-geo-rep : gsyncd crashed in syncdutils.py while removing a file. <https://bugzilla.redhat.com/show_bug.cgi?id=1198101>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1217929] ignore_deletes option is not something you can configure <https://bugzilla.redhat.com/show_bug.cgi?id=1217929>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1114469] Dist-geo-rep : geo-rep throws wrong error messages when incorrect commands are executed. <https://bugzilla.redhat.com/show_bug.cgi?id=1114469>
17:59 glusterbot News from resolvedglusterbugs: [Bug 906763] SSL code does not use OpenSSL multi-threading interface <https://bugzilla.redhat.com/show_bug.cgi?id=906763>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1128165] [HC] - mount.glusterfs fails to check return of mount command. <https://bugzilla.redhat.com/show_bug.cgi?id=1128165>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1135348] regression tests fail on osx due to delay - requires explicit ``sleep `` <https://bugzilla.redhat.com/show_bug.cgi?id=1135348>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1151303] Excessive logging in the self-heal daemon after a replace-brick <https://bugzilla.redhat.com/show_bug.cgi?id=1151303>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1153610] libgfapi crashes in glfs_fini for RDMA type volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1153610>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1168809] logging improvement in glusterd/cli <https://bugzilla.redhat.com/show_bug.cgi?id=1168809>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1197260] segfault trying to call ibv_dealloc_pd on a null pointer if ibv_alloc_pd failed <https://bugzilla.redhat.com/show_bug.cgi?id=1197260>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1198963] set errno if gf_strdup() failed <https://bugzilla.redhat.com/show_bug.cgi?id=1198963>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1199003] Avoid possibility of segfault if xl->ctx is  NULL. <https://bugzilla.redhat.com/show_bug.cgi?id=1199003>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1199053] list , wr memory has to be verified <https://bugzilla.redhat.com/show_bug.cgi?id=1199053>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1202492] Rewrite glfs_new function for better error out scenarios. <https://bugzilla.redhat.com/show_bug.cgi?id=1202492>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1209380] Tracker bug for the documentation of glusterfs-nfs-ganesha intergration <https://bugzilla.redhat.com/show_bug.cgi?id=1209380>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1218032] Effect of Trash translator over CTR translator <https://bugzilla.redhat.com/show_bug.cgi?id=1218032>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1218653] rdma: properly handle memory registration during network interruption <https://bugzilla.redhat.com/show_bug.cgi?id=1218653>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1111774] FreeBSD port for GlusterFS <https://bugzilla.redhat.com/show_bug.cgi?id=1111774>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1131713] Port glusterfs regressions on MacOSX/Darwin <https://bugzilla.redhat.com/show_bug.cgi?id=1131713>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1121518] xml output needed for geo-rep CLI commands <https://bugzilla.redhat.com/show_bug.cgi?id=1121518>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1218039] BitRot :- If peer in cluster doesn't have brick then its should not start bitd on that node and should not create partial volume file <https://bugzilla.redhat.com/show_bug.cgi?id=1218039>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1146519] Wrong summary in GlusterFs spec file <https://bugzilla.redhat.com/show_bug.cgi?id=1146519>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1208118] gf_log_inject_timer_event can crash if the passed ctx is null. <https://bugzilla.redhat.com/show_bug.cgi?id=1208118>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1179208] Since 3.6; ssl without auth.ssl-allow broken <https://bugzilla.redhat.com/show_bug.cgi?id=1179208>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1214563] [FEAT] Trash translator <https://bugzilla.redhat.com/show_bug.cgi?id=1214563>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1155421] Typo in replace-brick...pause example <https://bugzilla.redhat.com/show_bug.cgi?id=1155421>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1141659] OSX LaunchDaemon plist file should be org.gluster... instead of com.gluster... <https://bugzilla.redhat.com/show_bug.cgi?id=1141659>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1141682] The extras/MacOSX directory is no longer needed, and should be removed <https://bugzilla.redhat.com/show_bug.cgi?id=1141682>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1149982] dist-geo-rep: geo-rep status in one of rebooted node remains at "Stable(paused)" after session is resumed. <https://bugzilla.redhat.com/show_bug.cgi?id=1149982>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1179638] Dist-geo-rep : replace-brick/remove-brick wont work untill the geo-rep session is deleted. <https://bugzilla.redhat.com/show_bug.cgi?id=1179638>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1217938] Dist-geo-rep: Too many "remote operation failed: No such file or directory" warning messages in auxilary mount log on slave while executing "rm -rf" <https://bugzilla.redhat.com/show_bug.cgi?id=1217938>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1219479] [Dist-geo-rep] after snapshot in geo-rep setup, empty changelogs are  generated in the snapped brick. <https://bugzilla.redhat.com/show_bug.cgi?id=1219479>
17:59 glusterbot News from resolvedglusterbugs: [Bug 816915] abort after an interrupted replace-brick operation causes glusterd to hang <https://bugzilla.redhat.com/show_bug.cgi?id=816915>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1121920] AFR : fuse,nfs mount hangs when directories with same names are created and deleted continuously <https://bugzilla.redhat.com/show_bug.cgi?id=1121920>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1146377] Quota-crawlers of all volumes write to the same log file <https://bugzilla.redhat.com/show_bug.cgi?id=1146377>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1126802] glusterfs logrotate config file pollutes global config <https://bugzilla.redhat.com/show_bug.cgi?id=1126802>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1195336] There is no information about GlusterFS development work flow in the README file <https://bugzilla.redhat.com/show_bug.cgi?id=1195336>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1161092] nfs: ls shows "Permission denied" with root-squash <https://bugzilla.redhat.com/show_bug.cgi?id=1161092>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1188232] RFE: Reduce the initial RDMA protocol check log level from E to W or I <https://bugzilla.redhat.com/show_bug.cgi?id=1188232>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1199894] RFE: Clone of a snapshot <https://bugzilla.redhat.com/show_bug.cgi?id=1199894>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1204636] [SNAPSHOT]:Adding new peer to the cluster result a check sum mismatch due to wrong snap info file in newly probed peer <https://bugzilla.redhat.com/show_bug.cgi?id=1204636>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1205037] [SNAPSHOT]: "man gluster" needs modification for few snapshot commands <https://bugzilla.redhat.com/show_bug.cgi?id=1205037>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1118311] After enabling nfs.mount-udp mounting server:/volume/subdir fails <https://bugzilla.redhat.com/show_bug.cgi?id=1118311>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1164506] md-cache checks for modification using whole seconds only <https://bugzilla.redhat.com/show_bug.cgi?id=1164506>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1182934] Do not have tmpfiles snippet for /var/run/gluster <https://bugzilla.redhat.com/show_bug.cgi?id=1182934>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1183547] glfs_set_volfile_server() should accept NULL as transport <https://bugzilla.redhat.com/show_bug.cgi?id=1183547>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1185654] Improved support for POSIX ACLs <https://bugzilla.redhat.com/show_bug.cgi?id=1185654>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1205785] dht linkfile are created with different owner:group than that source(data) file in few cases <https://bugzilla.redhat.com/show_bug.cgi?id=1205785>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1179640] Enable quota(default) leads to heal directory's xattr failed. <https://bugzilla.redhat.com/show_bug.cgi?id=1179640>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1174625] nfs server restarts when a snapshot is deactivated <https://bugzilla.redhat.com/show_bug.cgi?id=1174625>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1120136] Excessive logging of warning message "remote operation failed: No data available"  in samba-vfs logfile <https://bugzilla.redhat.com/show_bug.cgi?id=1120136>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1199075] iobuf: Ref should be taken on iobuf through proper functions. <https://bugzilla.redhat.com/show_bug.cgi?id=1199075>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1190069] Entries in indices/xattrop directory not removed appropriately <https://bugzilla.redhat.com/show_bug.cgi?id=1190069>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1113476] [SNAPSHOT] : gluster volume info should not show the value which is not set explicitly <https://bugzilla.redhat.com/show_bug.cgi?id=1113476>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1133426] [RFE] Add confirmation dialog to to snapshot restore operation <https://bugzilla.redhat.com/show_bug.cgi?id=1133426>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1155042] [USS] : don't display the snapshots which are not activated <https://bugzilla.redhat.com/show_bug.cgi?id=1155042>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1186713] syncop: Support to set and pass lkowner to GlusterFS server <https://bugzilla.redhat.com/show_bug.cgi?id=1186713>
17:59 glusterbot News from resolvedglusterbugs: [Bug 1157991] [SNAPSHOT]: snapshot should be deactivated by default when created <https://bugzilla.redhat.com/show_bug.cgi?id=1157991>
18:00 glusterbot News from resolvedglusterbugs: [Bug 1197585] quota: regex in logging message <https://bugzilla.redhat.com/show_bug.cgi?id=1197585>
18:00 glusterbot News from resolvedglusterbugs: [Bug 1136312] geo-rep mount broker setup has to be simplified. <https://bugzilla.redhat.com/show_bug.cgi?id=1136312>
18:00 glusterbot News from resolvedglusterbugs: [Bug 1193893] [FEAT] Tool to find incremental changes from GlusterFS Volume <https://bugzilla.redhat.com/show_bug.cgi?id=1193893>
18:00 glusterbot News from resolvedglusterbugs: [Bug 1142045] THANKS message in git repo has typos <https://bugzilla.redhat.com/show_bug.cgi?id=1142045>
18:00 glusterbot News from resolvedglusterbugs: [Bug 1184627] Community Repo RPMs doesn't include attr package as a dependency <https://bugzilla.redhat.com/show_bug.cgi?id=1184627>
18:00 glusterbot News from resolvedglusterbugs: [Bug 1130462] glusterd fails to get the inode size for a brick <https://bugzilla.redhat.com/show_bug.cgi?id=1130462>
18:00 glusterbot News from resolvedglusterbugs: [Bug 1178685] Move testcases into their main component directories <https://bugzilla.redhat.com/show_bug.cgi?id=1178685>
18:00 glusterbot News from resolvedglusterbugs: [Bug 1183538] Prevent including automake/autoconf cache files in the 'make dist' tarball <https://bugzilla.redhat.com/show_bug.cgi?id=1183538>
18:00 glusterbot News from resolvedglusterbugs: [Bug 1200174] Make it easier to identify regression test failures <https://bugzilla.redhat.com/show_bug.cgi?id=1200174>
18:00 glusterbot News from resolvedglusterbugs: [Bug 1206587] Replace contrib/uuid by a libglusterfs wrapper that uses the uuid implementation from the OS <https://bugzilla.redhat.com/show_bug.cgi?id=1206587>
18:00 glusterbot News from resolvedglusterbugs: [Bug 1217176] Replace contrib/uuid by a libglusterfs wrapper that uses the uuid implementation from the OS <https://bugzilla.redhat.com/show_bug.cgi?id=1217176>
18:00 glusterbot News from resolvedglusterbugs: [Bug 1199388] glfs_new does not check if volname is NULL <https://bugzilla.redhat.com/show_bug.cgi?id=1199388>
18:00 glusterbot News from resolvedglusterbugs: [Bug 1168207] Change license of arequal checksum.c to include GPL v2 <https://bugzilla.redhat.com/show_bug.cgi?id=1168207>
18:00 glusterbot News from resolvedglusterbugs: [Bug 1112613] [SNAPSHOT] : gluster snapshot delete doesnt provide option to delete all / multiple snaps of a given volume <https://bugzilla.redhat.com/show_bug.cgi?id=1112613>
18:00 glusterbot News from resolvedglusterbugs: [Bug 1104462] RFC: make epoll multithreaded <https://bugzilla.redhat.com/show_bug.cgi?id=1104462>
18:00 glusterbot News from resolvedglusterbugs: [Bug 1197593] quota: limit set cli issues with setting in Bytes(B) or without providing the type(size) <https://bugzilla.redhat.com/show_bug.cgi?id=1197593>
18:00 glusterbot News from resolvedglusterbugs: [Bug 1206432] quota: setting limit to 16384PB shows wrong stat with list commands <https://bugzilla.redhat.com/show_bug.cgi?id=1206432>
18:15 coredump joined #gluster
18:19 kudude joined #gluster
18:19 kudude hello everyone
18:19 kudude i have glusterfs configured in my network, df -h is hanging on the client, any ideas?
18:34 redbeard joined #gluster
18:36 papamoose kudude: you have not provided enough information
18:47 lord4163 left #gluster
19:02 PaulCuzner joined #gluster
19:05 edong23 joined #gluster
19:16 scooby2 does anyone here have control of download.gluster.org?
19:16 scooby2 the repodata files are hosed up for some of the gluster files
19:16 scooby2 http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-6   almost everything under here
19:21 scooby2 error importing repomd.xml for glusterfs-epel: Damaged repomd.xml file
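[Editor's aside: not a fix for the server side, but a quick way to confirm what yum is choking on. `check_repomd` is a helper name made up here, not a yum command; the curl URL is the directory scooby2 posted and may need an arch suffix appended. Assumes python3 on the client.]

```shell
# repomd.xml must at least be well-formed XML; a truncated file or an HTML
# error page fails the parse and the function returns non-zero.
check_repomd() {
    python3 -c 'import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1])' "$1" 2>/dev/null
}

# Usage against the repo in question (adjust the path for your arch):
# curl -s http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-6/repodata/repomd.xml -o /tmp/repomd.xml
# check_repomd /tmp/repomd.xml && echo "repomd OK" || echo "repomd damaged"

# Once the server side is fixed, stale client metadata can be cleared with:
# yum clean metadata --disablerepo='*' --enablerepo=glusterfs-epel
```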
19:27 uebera|| joined #gluster
19:31 daMaestro joined #gluster
19:33 uebera|| joined #gluster
20:19 uebera|| joined #gluster
20:21 Prilly joined #gluster
20:33 morgan_ joined #gluster
20:35 morgan_ hi - trying to set up a geo-rep gluster share, using
20:35 morgan_ gluster volume geo-replication master-vol rhel7-2::slave-vol create push-pem force
20:36 morgan_ I get 'Passwordless ssh login has not been setup with rhel7-2 for user root.' but I can ssh (passwordless) from rhel7->rhel7-2
20:36 morgan_ what do I need to set up?
20:38 morgan_ i have done 'gluster system:: execute gsec_create'
20:44 LebedevRI joined #gluster
20:48 morgan_ i have copied the ssh key across
20:48 morgan_ using -  cat /var/lib/glusterd/geo-replication/secret.pem.pub | ssh root@192.168.122.11 "cat >> ~/.ssh/authorized_keys"
20:49 morgan_ I get 'Passwordless ssh login has not been setup with rhel7-2 for user root.'
20:49 morgan_ even though i can ssh (without pass)
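[Editor's aside: the create-time check runs non-interactively and can fail even when interactive ssh works. Without morgan_'s logs these are guesses, but the usual culprits are an unanswered host-key prompt, `rhel7-2` resolving differently for glusterd, or the connection being tested with the geo-replication key rather than root's default identity. A verification sketch, hostnames taken from the discussion, all run on the master node where `create push-pem` is issued:]

```shell
# Key that gsyncd itself uses; produced by 'gluster system:: execute gsec_create'
KEY=/var/lib/glusterd/geo-replication/secret.pem

# A host-key prompt counts as failure in a non-interactive check;
# accept the slave's host key once:
# ssh -oStrictHostKeyChecking=no root@rhel7-2 true

# Batch mode forbids any prompt, mimicking what an automated check sees:
# ssh -oBatchMode=yes root@rhel7-2 true && echo "default key OK"

# Repeat with the geo-replication key:
# ssh -oBatchMode=yes -i "$KEY" root@rhel7-2 true && echo "secret.pem OK"
```

If the secret.pem check fails, the authorized_keys copy morgan_ did to 192.168.122.11 may have landed on a different host than the one `rhel7-2` resolves to for glusterd.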
20:56 _nixpanic joined #gluster
21:04 ira joined #gluster
21:08 haomaiwang joined #gluster
21:19 jfdoucet joined #gluster
21:19 jfdoucet Hi
21:19 glusterbot jfdoucet: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
21:19 jfdoucet I have a glusterfs volume in distributed-replicate with 12 nodes, and one of the nodes shows 2751 gfid files when running "gluster volume heal"; if I check one I see that it is not a hard link. I read over the web that they are orphaned. My question is: is it safe to just remove them (rm [FILE]), or is there a better way of dealing with them?
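[Editor's aside: context for jfdoucet's question. A healthy gfid file under a brick's .glusterfs directory is a hard link to the real file (link count >= 2); an orphan's only name is the gfid path (link count 1), which is exactly what jfdoucet observed. A local simulation of that check - BRICK here is a temp dir and the file names are made up, not real gfids:]

```shell
# Simulate a brick's .glusterfs layout to show the link-count test.
BRICK=$(mktemp -d)
mkdir -p "$BRICK/.glusterfs/aa/bb"

echo data > "$BRICK/healthy-file"
ln "$BRICK/healthy-file" "$BRICK/.glusterfs/aa/bb/aabb-healthy"   # link count 2
echo stale > "$BRICK/.glusterfs/aa/bb/aabb-orphan"                # link count 1

# Orphans are regular files whose only name is under .glusterfs
# (-type f also skips the symlinks that represent directories):
find "$BRICK/.glusterfs" -type f -links 1
```

On a real brick the same `find <brick>/.glusterfs -type f -links 1` lists the candidates; the usual advice is to confirm self-heal has finished (nothing pending in `gluster volume heal <vol> info`) before removing them, since a file mid-heal can transiently look orphaned.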
21:23 verdurin joined #gluster
21:50 neofob joined #gluster
21:53 halfinhalfout joined #gluster
22:04 gem joined #gluster
22:43 coredump joined #gluster
23:06 xaeth_afk joined #gluster
23:18 and` joined #gluster
23:28 and` joined #gluster
23:31 Intensity joined #gluster
23:35 plarsen joined #gluster
23:37 aaronott joined #gluster
23:38 Gill joined #gluster
23:39 cholcombe joined #gluster
23:50 premera joined #gluster
23:52 Prilly joined #gluster