IRC log for #gluster, 2015-06-03

All times shown according to UTC.

Time Nick Message
00:05 adzmely joined #gluster
00:13 baoboa joined #gluster
00:30 badone_ joined #gluster
00:35 Jandre joined #gluster
00:39 doubt joined #gluster
00:49 [7] oops. is that a segfault triggered by remote CLI action!? http://pastie.org/10220218
00:53 [7] "volume replace-brick: success: replace-brick commit force operation successful", but output of "gluster volume info ove" doesn't change!?
00:54 [7] "[2015-06-03 00:52:53.705641] E [glusterd-rpc-ops.c:1132:__glusterd_stage_op_cbk] 0-management: Received stage RJT from uuid: 90392614-0a02-4f3c-925c-4245e2c4765f"
00:54 * [7] interprets that as "reject" and wonders about the "successful" message...
00:56 [7] ...and repeated the segfault
01:10 doubt joined #gluster
01:11 doubt_ joined #gluster
01:14 doubt_ joined #gluster
01:25 victori joined #gluster
01:33 nangthang joined #gluster
01:35 al joined #gluster
01:38 harish joined #gluster
02:18 victori joined #gluster
02:21 theron joined #gluster
02:23 bharata-rao joined #gluster
02:41 Twistedgrim joined #gluster
02:55 jfdoucet joined #gluster
02:59 jbrooks joined #gluster
03:03 aaronott joined #gluster
03:07 TheSeven joined #gluster
03:11 bennyturns joined #gluster
03:19 overclk joined #gluster
03:21 victori joined #gluster
03:27 rejy joined #gluster
03:28 kdhananjay joined #gluster
03:29 victori joined #gluster
03:46 nbalacha joined #gluster
03:50 David_H__ joined #gluster
03:53 nbalacha joined #gluster
03:53 shubhendu joined #gluster
04:02 atinmu joined #gluster
04:04 RameshN joined #gluster
04:23 David_H_Smith joined #gluster
04:23 hagarth joined #gluster
04:25 David_H__ joined #gluster
04:29 doubt_ joined #gluster
04:30 kanagaraj joined #gluster
04:32 soumya joined #gluster
04:33 yazhini joined #gluster
04:36 poornimag joined #gluster
04:36 sakshi joined #gluster
04:40 ppai joined #gluster
04:46 anil joined #gluster
04:47 hgowtham joined #gluster
04:47 atalur joined #gluster
04:49 ramteid joined #gluster
04:50 schandra joined #gluster
04:54 Manikandan joined #gluster
04:54 gem joined #gluster
04:54 ashiq joined #gluster
05:01 Jandre joined #gluster
05:03 RajeshReddy joined #gluster
05:10 jiffin joined #gluster
05:11 17WAB0EN6 joined #gluster
05:12 ashiq- joined #gluster
05:23 spandit joined #gluster
05:25 Bhaskarakiran joined #gluster
05:28 baoboa joined #gluster
05:30 vimal joined #gluster
05:38 glusterbot News from resolvedglusterbugs: [Bug 1218717] Files migrated should stay on a tier for a full cycle <https://bugzilla.redhat.com/show_bug.cgi?id=1218717>
05:39 sripathi joined #gluster
05:54 deepakcs joined #gluster
05:59 spandit joined #gluster
06:02 DV joined #gluster
06:10 Pupeno joined #gluster
06:11 jtux joined #gluster
06:11 nsoffer joined #gluster
06:13 rafi joined #gluster
06:13 hagarth joined #gluster
06:16 raghu joined #gluster
06:24 ToMiles joined #gluster
06:27 rgustafs joined #gluster
06:28 atalur joined #gluster
06:29 spalai joined #gluster
06:34 nishanth joined #gluster
06:36 pppp joined #gluster
06:36 liquidat joined #gluster
06:46 kdhananjay joined #gluster
06:58 arcolife joined #gluster
07:02 ashiq joined #gluster
07:02 [Enrico] joined #gluster
07:03 ashiq- joined #gluster
07:16 kdhananjay1 joined #gluster
07:18 kdhananjay joined #gluster
07:21 c0m0 joined #gluster
07:33 shubhendu_ joined #gluster
07:37 shubhendu__ joined #gluster
08:01 shubhendu_ joined #gluster
08:04 LebedevRI joined #gluster
08:09 nangthang joined #gluster
08:17 ctria joined #gluster
08:17 jcastill1 joined #gluster
08:19 johnnytran joined #gluster
08:21 jtux joined #gluster
08:22 jcastillo joined #gluster
08:26 kdhananjay joined #gluster
08:26 fsimonce joined #gluster
08:29 Slashman joined #gluster
08:38 dusmant joined #gluster
08:40 kevein joined #gluster
08:42 deniszh joined #gluster
08:45 Norky joined #gluster
08:49 atinmu joined #gluster
08:50 Bhaskarakiran joined #gluster
08:53 c0m0 joined #gluster
08:57 nishanth joined #gluster
08:59 glusterbot News from newglusterbugs: [Bug 1227654] linux untar hanged after the bricks are up in a 8+4 config <https://bugzilla.redhat.com/show_bug.cgi?id=1227654>
08:59 glusterbot News from newglusterbugs: [Bug 1227656] Glusted dies when adding new brick to a distributed volume and converting to replicated volume <https://bugzilla.redhat.com/show_bug.cgi?id=1227656>
09:02 Manikandan joined #gluster
09:10 Prilly joined #gluster
09:11 nishanth joined #gluster
09:11 mator joined #gluster
09:12 lenz__ joined #gluster
09:12 lenz__ hi
09:12 glusterbot lenz__: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:14 lenz__ Gluster 3.7.1 crashes on DigitalOcean when doing a rebalance. After this the server never comes back up
09:14 lenz__ See http://pastebin.ca/3016791
09:14 lenz__ Using stock RPMs on Centos7
09:15 lenz__ gluster peer probe 46.101.184.191
09:15 lenz__
09:15 lenz__ gluster volume create livebackup replica 2 transport tcp 46.101.160.245:/opt/gluster_brick1 46.101.184.191:/opt/gluster_brick2 force
09:15 lenz__ gluster volume start livebackup
09:15 lenz__ gluster volume add-brick livebackup 46.101.160.245:/opt/gluster_brick2 46.101.184.191:/opt/gluster_brick1 force
09:15 lenz__
09:16 lenz__ [root@glu2 ~]# gluster volume info
09:16 lenz__
09:16 lenz__ Volume Name: livebackup
09:16 lenz__ Type: Distributed-Replicate
09:16 lenz__ Volume ID: 55cf62a0-099f-4a5e-ae4a-0ddec29239b4
09:16 lenz__ Status: Started
09:16 lenz__ Number of Bricks: 2 x 2 = 4
09:16 lenz__ Transport-type: tcp
09:16 lenz__ Bricks:
09:16 lenz__ Brick1: 46.101.160.245:/opt/gluster_brick1
09:16 lenz__ Brick2: 46.101.184.191:/opt/gluster_brick2
09:16 lenz__ Brick3: 46.101.160.245:/opt/gluster_brick2
09:16 lenz__ Brick4: 46.101.184.191:/opt/gluster_brick1
09:16 lenz__ Options Reconfigured:
09:16 lenz__ performance.readdir-ahead: on
09:16 lenz__
09:16 lenz__ mount -t glusterfs localhost:/livebackup /mnt
09:16 lenz__
09:16 lenz__ cp /var/log/* /mnt
09:16 lenz__
09:16 lenz__ gluster volume rebalance livebackup fix-layout start
09:16 lenz__
09:16 lenz__ [root@glu2 ~]# gluster volume rebalance livebackup fix-layout start
09:16 lenz__ Connection failed. Please check if gluster daemon is operational.
09:18 mator lenz__, it's in your logs
09:18 mator *** buffer overflow detected ***: glusterd terminated
09:18 lenz__ Yes I know
09:18 lenz__ I was wondering if I did something wrong
09:18 dcroonen joined #gluster
09:19 lenz__ Should I post this as a bug?
09:20 mator well, you can try... but your logs do not say anything about your OS, version or installed updates
09:20 mator probably, if you update your server (yum update) first, this bug is already fixed ...
09:21 mator ahh, it's in the first line of the log
09:21 mator then try to file a bug report
09:21 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
09:21 anoopcs lenz__, You can use fpaste rather than flooding the channel :)
09:21 lenz__ I am now compiling from source and it’s behaving differently
09:22 lenz__ consider that I am using the version that is used by everybody on centos
09:28 PaulCuzner joined #gluster
09:29 glusterbot News from newglusterbugs: [Bug 847821] After disabling NFS the message "0-transport: disconnecting now" keeps appearing in the logs <https://bugzilla.redhat.com/show_bug.cgi?id=847821>
09:33 dusmant joined #gluster
09:45 nsoffer joined #gluster
09:48 poornimag joined #gluster
09:56 dusmant joined #gluster
09:59 glusterbot News from newglusterbugs: [Bug 1227677] Glusterd crashes and cannot start after rebalance <https://bugzilla.redhat.com/show_bug.cgi?id=1227677>
09:59 glusterbot News from newglusterbugs: [Bug 1227674] Honour afr self-heal volume set options from clients <https://bugzilla.redhat.com/show_bug.cgi?id=1227674>
10:00 PaulCuzner joined #gluster
10:04 schandra joined #gluster
10:07 harish_ joined #gluster
10:21 nishanth joined #gluster
10:33 mdavidson joined #gluster
10:42 TheSeven hm, I have a few more questions
10:43 TheSeven I read somewhere that gluster now determines the size of a brick and adjusts the hashing algorithm accordingly
10:43 TheSeven I guess that might be troublesome if I have multiple bricks on the same filesystem
10:44 TheSeven but more importantly: what does that mean for the distribution of data on distribute-replicate volumes?
10:44 TheSeven previously on a replica3 volume every set of 3 consecutive bricks had the same data, so shouldn't be on the same server
10:45 TheSeven how do I now determine which bricks need to be on different machines?
10:45 TheSeven I guess the total number of bricks still needs to be an integer multiple of the replica count?
10:50 dusmant joined #gluster
10:50 monotek1 joined #gluster
10:50 lalatenduM joined #gluster
10:52 lalatenduM joined #gluster
10:53 Philambdo joined #gluster
11:04 dcroonen joined #gluster
11:06 ira joined #gluster
11:06 julim joined #gluster
11:14 harish_ joined #gluster
11:16 B21956 joined #gluster
11:17 poornimag joined #gluster
11:18 R0ok_ joined #gluster
11:33 nbalacha joined #gluster
11:35 nbalacha joined #gluster
11:40 lenz__ joined #gluster
11:46 ToMiles joined #gluster
11:56 aaronott joined #gluster
11:58 rafi1 joined #gluster
12:00 rafi joined #gluster
12:02 lpabon joined #gluster
12:06 RameshN joined #gluster
12:13 atalur joined #gluster
12:14 TheOtter joined #gluster
12:14 poornimag joined #gluster
12:16 TheOtter hi - day 1 with gluster.  I followed the quick start and then mounted localhost:/gv0 on server1.  When I bounce server2 I cannot write to the directory on server1, is that expected behaviour?
12:17 TheSeven TheOtter: I'm like "1 week with gluster", but I'd tend to say yes
12:17 TheOtter :)
12:17 ppai joined #gluster
12:18 TheSeven you need a majority of bricks to be reachable if you want to write, to avoid split-brain situations
12:18 TheSeven so usually 2 of 3
12:18 TheOtter ahhhhh
12:18 TheOtter ok great, makes sense
12:18 TheOtter I'll add another node and see how it performs. thanks v much for your help
12:20 TheSeven with 2 bricks there's a special case: the first brick is considered the majority ;)
12:21 mcpierce joined #gluster
12:21 TheSeven so the second one can go offline without bringing write access down, but not the first one
12:25 rafi joined #gluster
12:25 TheOtter ok, I'll test that too
12:25 TheOtter thanks again
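For reference, the quorum behaviour TheSeven describes is governed by a few volume options; a minimal sketch, assuming a test volume named gv0 (the option names are real GlusterFS settings, the values are only examples):

    # client-side quorum: writes require a majority of the replica bricks;
    # with replica 2, "auto" treats the set containing the first brick as the majority
    gluster volume set gv0 cluster.quorum-type auto
    # server-side quorum: glusterd stops its local bricks when it can no longer
    # see more than the configured share of the trusted pool
    gluster volume set gv0 cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 51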
12:27 chirino joined #gluster
12:29 TheOtter ok so I tested rebooting the other node (in my 2 node cluster) and writes hang on that one too
12:31 TheOtter anyway, I'll add a third node and see what that does
12:33 klaxa|work joined #gluster
12:43 ghenry joined #gluster
12:45 geaaru joined #gluster
12:45 kdhananjay joined #gluster
12:50 theron joined #gluster
12:54 Norky_ joined #gluster
13:04 wkf joined #gluster
13:04 spalai left #gluster
13:06 social joined #gluster
13:06 Norky_ joined #gluster
13:06 B21956 joined #gluster
13:08 ghenry joined #gluster
13:08 ghenry joined #gluster
13:10 aaronott joined #gluster
13:12 kkeithley joined #gluster
13:20 hagarth joined #gluster
13:21 ndevos kkeithley_bat: there is an email with subject "Support for SLES 11 SP3" on the gluster-users list, I guess that's for you ;-)
13:23 dgandhi joined #gluster
13:24 kkeithley_bat I should put a README.txt file on d.g.o
13:27 kkeithley_bat but first I should remember why
13:28 georgeh-LT2 joined #gluster
13:29 kkeithley_bat ah, no libacl(-devel), no liburcu(-devel) > 0.7, no libuuid(-devel), no sqlite3(-devel)
13:30 meghanam joined #gluster
13:36 theron joined #gluster
13:39 meghanam joined #gluster
13:42 firemanxbr joined #gluster
13:45 DV joined #gluster
13:45 monotek joined #gluster
13:52 firemanxbr joined #gluster
13:58 vovcia o/ i asked this on #ganesha but maybe someone here will know - is there any chance for nfs-ganesha 4.1 to provide proper locking, e.g. for sqlite3 database?
13:58 B21956 joined #gluster
14:01 ndevos vovcia: I think you need to ask the sqlite people for support on network filesystems like nfs? https://www.sqlite.org/wal.html explains a little about it
14:02 vovcia ndevos: sqlite is just an example :)
14:02 lenz__ left #gluster
14:02 vovcia i need locking for many things
14:03 ndevos vovcia: well, normal locking should work fine?
14:06 vovcia i need fcntl :(( maybe native fuse client supports them?
14:09 kkeithley joined #gluster
14:11 DV joined #gluster
14:12 pppp joined #gluster
14:29 chirino joined #gluster
14:30 aaronott joined #gluster
14:30 glusterbot News from newglusterbugs: [Bug 1227803] tiering: tier status shows as " progressing " but there is no rebalance daemon running <https://bugzilla.redhat.com/show_bug.cgi?id=1227803>
14:40 glusterbot News from resolvedglusterbugs: [Bug 1086460] Ubuntu code audit results (blocking inclusion in Ubuntu Main repo) <https://bugzilla.redhat.com/show_bug.cgi?id=1086460>
14:40 glusterbot News from resolvedglusterbugs: [Bug 1109180] Issues reported by Cppcheck static analysis tool <https://bugzilla.redhat.com/show_bug.cgi?id=1109180>
14:40 glusterbot News from resolvedglusterbugs: [Bug 1091677] Issues reported by Cppcheck static analysis tool <https://bugzilla.redhat.com/show_bug.cgi?id=1091677>
14:42 ndevos vovcia: see "man 2 fcntl" about locking over NFS?
14:47 vovcia ndevos: i see.. i might have some bigger issues because even git clone on nfs mount is hanging
14:58 jiffin joined #gluster
14:58 deepakcs joined #gluster
15:00 nbalacha joined #gluster
15:02 jbrooks joined #gluster
15:08 ToMiles joined #gluster
15:10 glusterbot News from resolvedglusterbugs: [Bug 1226038] openSuse 13.2 rdma.so missing <https://bugzilla.redhat.com/show_bug.cgi?id=1226038>
15:15 ToMiles_ joined #gluster
15:19 ndevos does any of the Ubuntu users have an idea how the glusterd service is supposed to get started? please reply to http://www.gluster.org/pipermail/gluster-users/2015-June/022184.html
15:20 jbrooks joined #gluster
15:25 Leildin ndevos, "sudo service glusterd start" worked fine when I was on ubuntu. since moved to centos though.
15:25 Leildin I think he had a problem during install that he has to repair
15:27 bennyturns joined #gluster
15:27 ndevos Leildin: thanks, I dont know, and do not have an Ubuntu system available for testing either...
15:28 Leildin we're full centos here too for ctdb
15:29 jiffin ndevos: ping
15:29 glusterbot jiffin: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
15:29 jiffin ping ndevos
15:29 ndevos jiffin: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
15:30 jiffin ndevos: understood :)
15:30 ndevos JoeJulian: maybe we need to train glusterbot more? ping <-> nick?
15:30 ndevos jiffin: more ACLs?
15:31 jiffin ndevos: Yup
15:31 ndevos jiffin: what about them?
15:31 jiffin ndevos: i think i figured out the issue
15:32 ndevos jiffin: oh, thats cool, remind me again which issue that was?
15:33 jiffin ndevos: even if the user has write permission on the directory, he cannot create files inside it
15:33 ndevos jiffin: ah, right
15:34 ndevos jiffin: I wanted to ask you if it makes a difference if the ACE and the owner are not the same
15:34 ndevos jiffin: but do tell me what you figured out!
15:36 jiffin ndevos: the user also needs execute permission on the directory to create files
15:36 jiffin ndevos: luckily i found the bug in my code
15:37 ndevos jiffin: oh, write+execute, I assume?
15:37 jiffin will send patch for that pretty soon
15:37 jbrooks joined #gluster
15:38 jiffin ndevos: i didn't try it  without write permission
15:38 ndevos jiffin: indeed: cd /tmp ; mkdir hello ; chmod -x hello ; touch hello/world -> Permission denied
15:38 jiffin ndevos: hmm
15:39 ndevos jiffin: I did not know that execute was required too, seems to be the case for local filesystems as well
15:39 Intensity joined #gluster
15:39 jiffin ndevos: me too
15:40 ndevos jiffin: nice catch!
15:41 monotek1 joined #gluster
15:41 vovcia execute is required to enter directory
15:41 anoopcs vovcia, yes.. I was about to tell the same
15:42 anoopcs ndevos, +x is required even for doing cd into the directory.
15:42 jiffin anoopcs: oh
15:43 ndevos I always thought execute was needed to list directory contents, never thought about creating files in there
15:43 vovcia +r is for listing :)
15:43 ndevos hehe, yeah, I guess that makes sense, but without +x that wont do you any good then
15:44 anoopcs ndevos, You are right.
15:45 anoopcs jiffin, Anyway nice catch..
15:45 ndevos hmm, mkdir hello ; touch hello/world ; chmod -x hello ; ls hello -> returns both -EPERM *and* "world"
15:46 jiffin ndevos: that's interesting
15:46 soumya joined #gluster
15:46 ndevos but "/bin/ls hello" does not return the error, only "hello", I guess stat("hello/world") is not permitted
15:46 vovcia +x means that dir can be used as part of path
15:47 ndevos uh, returns "world"
15:47 ndevos jiffin: enjoy combining that info with ACL conversion! :D
15:47 jiffin ndevos: sure
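The permission behaviour jiffin and ndevos worked out above is plain POSIX and easy to reproduce on any local filesystem (run as a non-root user, since root bypasses these checks; the path is arbitrary):

    mkdir /tmp/acltest
    chmod u=w /tmp/acltest        # write but no execute (search) bit on the directory
    touch /tmp/acltest/file       # fails: Permission denied
    chmod u=wx /tmp/acltest       # write + execute is enough to create entries
    touch /tmp/acltest/file       # succeeds
    chmod u=x /tmp/acltest
    ls /tmp/acltest               # fails: the read bit is what allows listing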
15:51 madphoenix I'm having a strange issue with rebalance where one server out of three just sits at 0 runtime in seconds.  There isn't anything useful written to the rebalance log, and the gluster services have been restarted as well as the host.  Any idea where I could look to figure out what is going on?
16:01 cholcombe joined #gluster
16:04 rafi joined #gluster
16:07 atinmu joined #gluster
16:09 rafi joined #gluster
16:09 squizzi joined #gluster
16:18 RameshN joined #gluster
16:26 wushudoin| joined #gluster
16:28 rafi1 joined #gluster
16:32 wushudoin| joined #gluster
16:37 coredump joined #gluster
16:43 neoice joined #gluster
16:43 StrangeWill1 joined #gluster
16:44 StrangeWill1 Anyone running gluster for a master/slave setup in Postgres? There are papers on using DRBD but not much on doing it with gluster instead.
16:51 maveric_amitc_ joined #gluster
16:53 bene2 joined #gluster
16:54 dcroonen joined #gluster
16:54 plarsen joined #gluster
16:54 madphoenix if anybody would be willing to look at the output of glusterd --debug for me i'd be appreciative, not sure why initialization of 'management' is failing: https://gist.github.com/brandentimm/98bdbbcf37691d6b3895
16:56 davy_ joined #gluster
16:56 JoeJulian port is in use. That would suggest that 24007 is busy.
16:56 madphoenix is that just because I ran 'glusterd --debug' when the glusterd service was already running?
16:57 JoeJulian Why it says "binding to  failed:" instead of "binding to 24007 failed:" I have no idea.
17:01 madphoenix i'm thinking of changing GLUSTERD_LOGLEVEL="NORMAL" to DEBUG in /etc/sysconfig/glusterd, would that give me more output
17:02 JoeJulian No more output than glusterd --debug
17:02 JoeJulian How about netstat -tlnp | grep 24007
17:03 madphoenix glusterd processes listening on all interfaces:24007
17:03 JoeJulian That would be it.
17:03 madphoenix I'm confused - should glusterd not be bound to 24007?
17:04 TheSeven I read somewhere that gluster now determines the size of a brick and adjusts the hashing algorithm accordingly
17:04 TheSeven I guess that might be troublesome if I have multiple bricks on the same filesystem
17:04 TheSeven but more importantly: what does that mean for the distribution of data on distribute-replicate volumes?
17:04 TheSeven previously on a replica3 volume every set of 3 consecutive bricks had the same data, so shouldn't be on the same server
17:04 TheSeven how do I now determine which bricks need to be on different machines?
17:04 TheSeven I guess the total number of bricks still needs to be an integer multiple of the replica count?
17:04 JoeJulian It is, but if it is bound to 24007 you can't start it again and have it running twice.
17:04 madphoenix Right, ok - so what you're saying is I should stop the glusterd service, then run glusterd --debug ?
17:06 JoeJulian TheSeven: to enable that feature (in >= 3.6.0) you would have to set cluster.weighted-rebalance
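The option JoeJulian names is an ordinary volume setting; a hedged sketch with a hypothetical volume name:

    # >= 3.6: weight dht hash ranges by brick size instead of splitting them evenly
    gluster volume set myvol cluster.weighted-rebalance on
    # the new layout only takes effect after a fix-layout / rebalance
    gluster volume rebalance myvol fix-layout start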
17:06 TheSeven ok, probably going to leave that off then
17:07 JoeJulian madphoenix: going back to your original question, "not sure why initialization of 'management' is failing" isn't failing if glusterd is running.
17:07 TheSeven but out of curiosity, how would that affect the placement of distribute-replicate data if the bricks that would usually hold the same data have different sizes?
17:08 madphoenix joejulian: right, that makes sense.  i stopped glusterd and then ran glusterd --debug manually, and am watching output now
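For reference, the sequence madphoenix is following (CentOS 7 with systemd assumed; only standard commands):

    systemctl stop glusterd          # free port 24007
    netstat -tlnp | grep 24007       # confirm nothing is still bound to it
    glusterd --debug                 # run glusterd in the foreground with debug logging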
17:09 madphoenix what i'm seeing now is "[socket.c:2353:socket_event_handler] 0-transport: disconnecting now" and "[socket.c:590:__socket_rwv] 0-socket.management: EOF on socket" over and over again, but it's only listed as a debug message not an error
17:09 JoeJulian TheSeven: I haven't actually looked at the code, but based on my prior knowledge of how dht works, the hash allocation will just be smaller on smaller bricks and larger on larger ones probably based on a df of the brick at the time of fix-layout.
17:10 TheSeven does that happen above or below the replicate layer?
17:10 JoeJulian madphoenix: something is trying to connect over and over again I think.
17:10 JoeJulian TheSeven: above. replica bricks are subvolumes to dht.
17:10 JoeJulian (or below, depends which way you hold the map)
17:11 TheSeven so the three replicas still contain the same data, arbitrarily picking the space of one of the replica bricks at fix-layout time?
17:13 JoeJulian replica-0 is 100G, replica-1 is 10G, ... , replica-10 is 10G. In this scenario (200G total), replica-0 will have 50% of the dht hash, the rest with 5% each. Half of the total number of files will (probably) go to replica-0.
17:13 JoeJulian The bricks in replica-0 will all have the same data.
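JoeJulian's percentages, checked with a throwaway shell loop (the sizes are his hypothetical ones, in GB):

    sizes="100 10 10 10 10 10 10 10 10 10 10"                 # one 100G set plus ten 10G sets
    total=0; for s in $sizes; do total=$((total + s)); done   # 200
    for s in $sizes; do
        echo "$s GB -> $((100 * s / total))% of the dht hash range"
    done
    # prints 50% for the 100G replica set and 5% for each of the 10G sets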
17:16 nsoffer joined #gluster
17:19 bennyturns joined #gluster
17:20 georgeh-LT2 joined #gluster
17:20 TheSeven "replica-N" being replicate subvolumes here, not actually replicas?
17:21 TheSeven I'm wondering what happens if the bricks *inside* such a subvolume are non-equal in size... smallest one wins?
17:21 JoeJulian yep
17:22 TheSeven also, how do I determine which bricks belong to which replicate subvolume? are these always N consecutive bricks (as listed in volume info) for a volume with N replicas?
17:22 aaronott joined #gluster
17:23 JoeJulian Filling up one brick in a replica set whose bricks have different amounts of free space is a horror. (I'm dealing with one [mental judgement deleted] customer that created a cinder volume bigger than the brick and a stupid layout scheme that leaves things mismatched).
17:23 atinmu joined #gluster
17:23 JoeJulian @brick order
17:23 glusterbot JoeJulian: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
17:24 TheSeven I guess a replace-brick inserts the new one where the old one was?
17:24 JoeJulian correct
17:27 TheSeven so if I don't want my node count to always be a multiple of the replica count, but want to scale by adding single nodes as required, I should probably have N bricks on every node (N being the replica count), and then, when adding a node, replace-brick N-1 bricks from different nodes and different replicate subvolumes over to the new node, and add new bricks where the old ones were and one new one to the new node?
17:27 meghanam joined #gluster
17:27 TheSeven (of course always taking care that the N replica bricks are on different nodes)
17:27 hagarth joined #gluster
17:35 JoeJulian @lucky expanding glusterfs from two to three
17:35 glusterbot JoeJulian: https://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
17:35 JoeJulian TheSeven: ^
17:36 JoeJulian Essentially, yes.
17:36 atinmu joined #gluster
17:36 TheSeven thanks :)
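A hedged sketch of the brick shuffle described above, for a replica 3 volume gaining a fourth server (all names are hypothetical and which bricks to move depends on the existing layout; these are only the command shapes, not a recipe):

    # move one existing brick from server1 over to the new server4
    gluster volume replace-brick myvol server1:/bricks/b2 server4:/bricks/b2 commit force
    # then grow the volume with a fresh replica set that reuses the freed slot on server1
    gluster volume add-brick myvol server1:/bricks/b4 server2:/bricks/b4 server4:/bricks/b4
    gluster volume rebalance myvol start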
17:38 TheSeven in the meantime I tried replace-brick commit force to move a brick on replica3 (all 3 bricks are created on the same host, now I want to move one to another host)
17:39 TheSeven it doesn't seem to work, I'm getting errors in glustershd.log every second on both hosts
17:39 TheSeven [2015-06-03 17:38:09.610258] I [rpc-clnt.c:1807:rpc_clnt_reconfig] 0-ove-client-0: changing port to 49161 (from 0)
17:39 TheSeven [2015-06-03 17:38:09.613683] E [socket.c:2332:socket_connect_finish] 0-ove-client-0: connection to 10.0.0.100:49161 failed (Connection refused)
17:40 atinmu left #gluster
17:40 TheSeven both clients spam those messages into the logs, exactly the same content!
17:40 TheSeven the ip address, on both sides, is the one of th source host
17:40 atinmu joined #gluster
17:40 madphoenix is it strange to be seeing this message in the logs on a glusterfs 3.6.3 system? [client-handshake.c:1413:select_server_supported_programs] 0-bigdata2-client-15: Using Program GlusterFS 3.3, Num (1298437), Version (330)
17:40 TheSeven firewalls on both hosts are disabled as far as I can tell
17:41 JoeJulian madphoenix: normal
17:42 JoeJulian TheSeven: gluster volume status might tell you something
17:43 TheSeven http://pastie.org/10222001
17:43 fink joined #gluster
17:43 TheSeven same on both sides (aside from the NFS server lines)
17:45 plarsen joined #gluster
17:46 fink Hi. Can someone help me in understanding self-heal? I was under the impression that it only runs when healing is needed. I'm seeing that the heal info command shows a lot of gfids, but split-brain shows none
17:47 fink the heal logs don't have anything useful either.
17:48 JoeJulian TheSeven: there is no 49161. I bet it's the former brick that you replaced.
17:48 JoeJulian TheSeven: If it is, I'd call that a bug.
17:48 JoeJulian fink: what version are you using?
17:48 TheSeven let me take a look if I can reconstruct what that port was
17:48 fink This is rhs 3.0
17:49 fink so believe 3.6
17:49 fink *i believe
17:49 JoeJulian There are differences in the downstream version, so I don't know the answer.
17:49 JoeJulian Even their versions don't match.
17:50 fink oh ok. In any case, is self heal supposed to kick in if the files are in sync ?
17:50 fink i noticed this because nagios's self_heal status was critical
17:50 fink and then i ran the info command
17:50 fink which just shows a list of gfids. no file names
17:50 fink i don't think any of the bricks were down. So i find it unlikely that files are not in sync
17:51 TheSeven no, as far as I can tell that port belonged to brick00, and brick01 is the one that I've replaced
17:51 JoeJulian No, self-heal does not happen if the files are in sync.
17:51 JoeJulian On older versions they would show up in the heal...info if they were in the process of being written to
17:51 JoeJulian They can show up if your client cannot connect to all the bricks.
17:53 fink that's interesting. Is it possible to get anything more from the log by increasing the verbosity ? Right now, the self heal log has "2015-06-03 17:52:05.986243] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1"
17:53 fink over and over again
17:53 fink nothing more than that
17:53 JoeJulian That would just say that it's not doing anything.
17:53 fink which sort of confirms my belief that files are in sync
17:54 fink btw, the heal info command itself doesn't do any redundant work right ?
17:54 fink like launching a volume wide self heal
17:54 JoeJulian "gluster volume heal $vol" and see if that changes. Also, of course, check "gluster volume status" and ensure that everything is actually in the state you think it is.
17:54 JoeJulian Right, it doesn't. It only tries to heal things that are marked.
17:55 fink vol status shows everything is up and online.
17:57 fink heal was submitted successfully. heal info still has a long list of gfids
17:57 jbautista- joined #gluster
17:58 fink nagios still has the socket timeout self heal
17:58 JoeJulian Also, you'd need to look at all the glustershd.log. Some other shd may be active.
17:58 JoeJulian socket timeout?
17:59 fink the nagios entry for Volume Self-Heal
17:59 fink which has a timeout of 10 seconds
17:59 fink and that is timing out since the heal info is not returning within 10 seconds i guess
18:00 TheSeven JoeJulian: are these port numbers ever changed automatically?
18:01 TheSeven this looks like 2 out of 4 volumes have been moved to new port numbers after replace-brick was run
18:01 TheSeven I basically have a hole of 6 ports there (3 bricks per volume)
18:02 TheSeven I can't seem to find any reference to the old port number aside from log files though
18:02 TheSeven so no idea why it's trying to connect to that
18:02 JoeJulian Yes, ports can change whenever there's a state change on the volume.
18:03 JoeJulian I've seen something where it tried to connect to an old port after a replace-brick. Something to do with not completely releasing the prior graph.
18:04 JoeJulian I thought I saw a fix for that, but maybe not.
18:04 JoeJulian I can't even remember if I filed a bug report.
18:05 TheSeven well, if I can't find any reference to that port number I'd have expected it to go away after restarting the glusterd instances, but it doesn't seem so
18:05 JoeJulian Right, because it's a client (could be glustershd or nfs as well) that are still trying to use it.
18:06 TheSeven who starts/stops shd? is it safe to restart that manually?
18:06 TheSeven I'd have expected that to stop automatically if I stop/restart glusterd on that node
18:06 fink so i have narrowed down the self heal entries to two bricks. This is a 3x2 dist+repl setup, and the heal info shows the same number of entries for one replica pair.
18:06 JoeJulian It is. I think it's supposed to stop/start on a restart of glusterd, but it's safe to kill and restart glusterd.
18:06 fink however, the number of entries doesn't seem to reduce
18:07 JoeJulian @gfid
18:07 glusterbot JoeJulian: The gfid is a uuid that's assigned to represent a unique inode that can be identical across replicas. It's stored in extended attributes and used in the .glusterfs tree. See http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/ and http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
18:08 JoeJulian That last link, iirc, should help you find the hardlink of the file associated with the gfid reference you're seeing.
18:08 firemanxbr joined #gluster
18:08 JoeJulian @gfid resolver
18:09 glusterbot JoeJulian: https://gist.github.com/4392640
18:09 JoeJulian And you can use that to find the filename associated with the gfid.
18:10 JoeJulian Before I run that, though, I'd stat the gfid file. If the link count is 1 the gfid is (somehow) stale and can be deleted.
18:10 fink @JoeJulian, interesting. thanks. Let me try that.
18:11 JoeJulian The corresponding entry under .glusterfs/indices/xattrop is what's showing up in the heal..info so that would need to be removed as well.
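A minimal sketch of the checks JoeJulian describes, run directly on a brick; the brick path and gfid are hypothetical, and the linked gfid-resolver script automates roughly the same lookup:

    BRICK=/bricks/b1                                   # hypothetical brick path
    GFID=d1b0f0a2-1111-2222-3333-444444444444          # hypothetical entry from heal info
    GPATH=$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID
    stat -c '%h' "$GPATH"        # hard-link count of 1 => stale gfid entry
    # resolve the gfid back to the real file name (same inode, different hard link)
    find "$BRICK" -samefile "$GPATH" -not -path '*/.glusterfs/*'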
18:16 jbautista- joined #gluster
18:16 Rapture joined #gluster
18:20 TheSeven hm looks like another round of glusterd restarts made that bad port number go away
18:21 TheSeven and apparently something also put some data on that brick in the meantime
18:21 TheSeven I'm a bit confused by du -sh output of the bricks, but there are sparse files involved here so who knows
18:22 TheSeven now I tried doing the same on another volume, and while I see files showing up in heal info, it doesn't seem to make any attempt to actually self-heal them
18:22 TheSeven manually kicking off a heal didn't seem to do anything either
18:23 fink @JoeJulian, by eyeballing the gfids, it looks like they either don't exist or are broken links. i.e the dest doesn't exist
18:23 TheSeven shd says:
18:23 TheSeven [2015-06-03 18:19:51.192907] I [client-handshake.c:1203:client_setvolume_cbk] 0-ove-client-1: Server and Client lk-version numbers are not same, reopening the fds
18:23 TheSeven [2015-06-03 18:19:51.193075] I [client-handshake.c:187:client_set_lk_version_cbk] 0-ove-client-1: Server lk version = 1
18:23 glusterbot TheSeven: This is normal behavior and can safely be ignored.
18:23 TheSeven hahaha, glusterbot... but I guess *not* if I expect that thing to actually *do* something ;)
18:24 JoeJulian No, glusterbot is correct.
18:24 TheSeven well, it's more the non-existence of any other messages that worries me...
18:24 JoeJulian lk_versions change as locks are added or removed. When a client starts up, its lk_version is 0 so it needs to catch up.
18:25 TheSeven I'd expect it to say anything else though if I trigger a heal on a volume
18:25 TheSeven there's also no disk activity, so it doesn't seem to make any attempt to heal things
18:26 TheSeven (despite heal info showing 4 entries on both other replicas)
18:29 JoeJulian TheSeven: you may be in the same boat as fink.
18:29 theron joined #gluster
18:31 glusterbot News from newglusterbugs: [Bug 1227894] Increment op-version requirement for lookup-optimize configuration option <https://bugzilla.redhat.com/show_bug.cgi?id=1227894>
18:31 glusterbot News from newglusterbugs: [Bug 1227904] Memory leak in marker xlator <https://bugzilla.redhat.com/show_bug.cgi?id=1227904>
18:33 hgowtham joined #gluster
18:33 ashiq joined #gluster
18:35 fink i think the original files have been deleted.
18:35 fink i can't find any proper file names
18:35 fink the ones that i found look like they were deleted
18:38 bennyturns joined #gluster
18:38 fink actually no. not all of them are invalid. The script though hasn't returned. It's still trying to look up the file number
18:38 fink *file name
18:42 B21956 joined #gluster
18:42 coredump|br joined #gluster
18:51 fink is this affecting correctness/performance right now ?
18:51 fink i don't see the heal daemon consuming cpu cycles right now
18:51 fink not sure if data integrity is at risk right now
18:52 JoeJulian no
18:52 coredump joined #gluster
18:56 rotbeard joined #gluster
19:05 fink is there anything else that i can try ?
19:07 jcastill1 joined #gluster
19:07 coredump joined #gluster
19:12 jcastillo joined #gluster
19:14 hgowtham_ joined #gluster
19:14 ashiq- joined #gluster
19:20 wushudoin| joined #gluster
19:25 wushudoin| joined #gluster
19:26 CyrilPeponnet Hey guys :) one question, I mount a root-squashed vol as RO (client side). I get plenty of warnings like [client-rpc-fops.c:2150:client3_3_setattr_cbk] 0-usr_global-client-1: remote operation failed: Operation not permitted
19:26 CyrilPeponnet metadata self heal is successfully completed
19:27 CyrilPeponnet each time I browse the mount point
19:27 CyrilPeponnet are they legitimate ?
19:28 nsoffer joined #gluster
19:33 ghenry joined #gluster
19:33 ghenry joined #gluster
19:34 StrangeWill1 left #gluster
19:35 Rapture joined #gluster
19:37 deniszh joined #gluster
19:41 CyrilPeponnet anybody ?
19:42 rafi joined #gluster
19:44 ndevos CyrilPeponnet: well, if the volume is mounted read-only, setattr (update atime and the like) should not be permitted
19:44 CyrilPeponnet ok maybe I should mount them with noatime
19:44 ndevos CyrilPeponnet: logging it in that case, or even trying to do a setattr sounds wrong to me
19:45 ndevos CyrilPeponnet: oh, yes, maybe "noatime" would be a workaround
19:45 CyrilPeponnet great I'll try that
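The exact mount line isn't shown in the log; it would look something like this (volume name taken from the error message above, server name and mount point hypothetical):

    mount -t glusterfs -o ro,noatime server1:/usr_global /mnt/usr_global
    # or the equivalent /etc/fstab entry:
    # server1:/usr_global  /mnt/usr_global  glusterfs  ro,noatime,_netdev  0 0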
19:45 CyrilPeponnet I am also trying to find a document to understand a heal matrix
19:46 CyrilPeponnet like [ [ 0 22 22 ] [ 22 8 22 ] [ 22 22 4 ] ],
19:46 CyrilPeponnet I know each [] represents a node but I have no clue what the numbers inside mean
19:46 ndevos s/node/brick/
19:46 glusterbot What ndevos meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
19:47 TheSeven lol
19:47 ndevos and you're one step further than me on the matrix bit :)
19:47 ndevos maybe pranithk explained it to me once, but I surely can not remember
19:47 CyrilPeponnet lol
19:48 CyrilPeponnet So I need a matrix guru
19:48 CyrilPeponnet :p
19:48 CyrilPeponnet or dealing with source code
19:53 TheSeven [2015-06-03 19:51:59.301330] W [socket.c:3059:socket_connect] 0-glustershd: Ignore failed connection attempt on , (No such file or directory)
19:56 joshin joined #gluster
19:57 CyrilPeponnet @ndevos noatime does not change anything. but I only get this warning on some folders, not on all files
19:58 TheSeven [2015-06-03 18:11:31.802310] E [rpc-transport.c:512:rpc_transport_unref] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7f26fcff5ee6] (--> /lib64/libgfrpc.so.0(rpc_transport_unref+0xa3)[0x7f26fc5135a3] (--> /lib64/libgfrpc.so.0(rpc_clnt_unref+0x5c)[0x7f26fc5168ec] (--> /lib64/libglusterfs.so.0(+0x21761)[0x7f26fcff2761] (--> /lib64/libglusterfs.so.0(+0x216f5)[0x7f26fcff26f5] ))))) 0-rpc_transport: invalid argument: this
19:58 glusterbot TheSeven: ('s karma is now -81
19:58 glusterbot TheSeven: ('s karma is now -82
19:58 glusterbot TheSeven: ('s karma is now -83
19:58 glusterbot TheSeven: ('s karma is now -84
19:58 glusterbot TheSeven: ('s karma is now -85
19:58 TheSeven er?
19:58 TheSeven oh, I see what's going on, lol
19:58 theron_ joined #gluster
20:01 ndevos CyrilPeponnet: maybe the setattr only gets triggered for a self-heal... I guess self-heal should be turned off automatically when the fuse-client is in read-only mode
20:02 samppah !seen
20:02 samppah !seen joonas
20:03 samppah hmmh, oh well
20:03 ndevos CyrilPeponnet: btw, how much do you want that 32-bit inode fix in 3.5? I can include it now and do the release
20:03 JoeJulian @ not !
20:03 glusterbot JoeJulian: I do not know about 'not !', but I do know about these similar topics: 'node'
20:03 samppah @seen joonas
20:03 glusterbot samppah: I have not seen joonas.
20:03 jbautista- joined #gluster
20:03 samppah thank you JoeJulian
20:04 CyrilPeponnet @ndevos can I disable the self-healing client side ?
20:04 ndevos CyrilPeponnet: I was wondering about that too
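For reference, the AFR options that control client-side (mount-triggered) self-heal look like this; the volume name is taken from the log above, and the glustershd daemon is governed separately by cluster.self-heal-daemon:

    gluster volume set usr_global cluster.data-self-heal off
    gluster volume set usr_global cluster.metadata-self-heal off
    gluster volume set usr_global cluster.entry-self-heal off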
20:04 CyrilPeponnet @ndevos well for which release will it be ?
20:05 ndevos CyrilPeponnet: 3.6 and 3.7 have the patch merged already, I would put it in 3.5.4 if you want it
20:05 social joined #gluster
20:05 CyrilPeponnet @ndevos I'm switching from nfs to gfs for most of our mount points as we have a lot of trouble with nfs server overload
20:05 CyrilPeponnet Is it safe to migrate from 3.5.2 to 3.5.4 without downtime ?
20:05 * ndevos always feels a little awkward +2'ing his own patches, but well, it's been merged 3x already
20:06 ndevos CyrilPeponnet: yes, there should not be an issue updating any 3.5.x to 3.5.y versions
20:06 CyrilPeponnet **should**
20:06 CyrilPeponnet :p
20:06 ndevos CyrilPeponnet: it's only client-side too
20:06 CyrilPeponnet so go ahead
20:06 ndevos yes, *SHOULD*
20:06 CyrilPeponnet Oh you're right
20:07 CyrilPeponnet go aheard
20:07 CyrilPeponnet ahead
20:07 CyrilPeponnet I have pinned my clients' release but I can bump them if needed for everyone (thanks to puppet)
20:08 ndevos ah, so the patch is well tested, no reason to not include it then
20:08 TheSeven hm, how can a server that hosts 2 of 3 bricks of a replica3 volume *ever* lose quorum!?
20:08 CyrilPeponnet One last thing, how can I change the log level for the nfs.log file? it's growing super fast.... like several GB / day
20:09 ndevos CyrilPeponnet: nfs is a client like fuse, you can set diagnostics.client-log-level to WARNING or something, but all clients will be affected
20:09 CyrilPeponnet I want it
20:10 CyrilPeponnet so I already set diagnostics.client-log-level to CRITICAL
20:10 CyrilPeponnet should avoid growing so fast
20:10 ndevos I thought so, yes...
20:10 CyrilPeponnet and maybe improve some performance by not constantly writing logs....
20:11 ndevos are there any particular messages repeated a lot? there might have been some fixes to change the log level of some messages
20:13 ndevos https://github.com/gluster/glusterfs/blob/v3.5.3/doc/release-notes/3.5.3.md has the changes for 3.5.2 -> 3.5.3
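The log-level change being discussed is an ordinary volume option; for reference (volume name from the log above, and as ndevos notes it affects every client of that volume, the gluster NFS server included):

    gluster volume set usr_global diagnostics.client-log-level WARNING
    # brick-side logging has its own knob:
    gluster volume set usr_global diagnostics.brick-log-level WARNING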
20:13 TheSeven hm, looks like this breaks way too easily to be considered for any serious use
20:14 TheSeven how on earth can the majority ever lose quorum
20:14 TheSeven interestingly it seems to have forgotten about the only alive brick as well (which is up, but read-only as it's a minority of course)
20:15 ToMiles joined #gluster
20:16 ndevos TheSeven: I'm only guessing, but could it be that the load on the system is high and the bricks can not respond in time?
20:16 TheSeven not at all
20:17 TheSeven a simple testing setup with 2 completely idle servers
20:17 TheSeven and it fails to do the most basic things without exploding into pieces
20:18 ndevos thats really strange, I wouldnt know why it would fail
20:18 TheSeven there's a brick completely missing now in gluster volume status
20:18 TheSeven yet it doesn't bring the remaining two local bricks online
20:19 ndevos the rpc_transport_unref error you pasted earlier suggests that the network connection gets disconnected, could it be some local firewall issues or network changes?
20:20 jbautista- joined #gluster
20:21 TheSeven unlikely, and even if that would be the cause, how on earth can that bring a volume completely down if it has a majority of bricks locally on the server!?
20:21 n-st joined #gluster
20:21 TheSeven 1 of 3 replicas missing killed the whole FS. that's completely unacceptable.
20:22 TheSeven especially that this replica was just missing because there is no sane way to migrate an existing replica
20:26 fink so i have located some of the files from the gfids of self heal info, and they seem to be in sync on the replicas at least from an md5sum
20:26 fink yet, the self heal doesn't seem to do anything about it
20:27 fink the number of entries in self heal info is still the same
20:27 fink is there anyway to restart self heal
20:27 fink without bringing the volume down
20:27 hgowtham_ joined #gluster
20:28 ashiq- joined #gluster
20:29 CyrilPeponnet @JoeJulian do you know anything about the meaning of the heal-matrix ?
20:34 fink anyone around ?
20:34 * TheSeven is still confused by the results of "du -sh" being vastly different on different replicas of the same volume
20:36 JoeJulian CyrilPeponnet: sort-of... they're numbers indicating changes that are pending.
20:36 JoeJulian @extended attributes
20:36 glusterbot JoeJulian: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://pl.atyp.us/hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
20:36 JoeJulian That last article has some details.
20:37 CyrilPeponnet thx
20:39 fink wow, the number of self heal entries has reduced significantly on its own. No idea what's going on.
20:40 JoeJulian TheSeven: I'm trying to get information out of the rant that can be used to file a bug report. Are you saying that quorum breaks if you replace-brick? Server quorum or volume quorum?
20:40 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
20:40 fink i guess it is working, just taking a while. Still hard to figure out what triggered the heal in the first place. AFAIK, none of the bricks were offline.
20:41 TheSeven JoeJulian: I'm not quite sure what happened there
20:41 JoeJulian network partition?
20:41 TheSeven [2015-06-03 18:32:31.946999] W [glusterd-locks.c:653:glusterd_mgmt_v3_unlock] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7f6533baeee6] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x461)[0x7f6528aa3b21] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162)[0x7f6528a20842] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c
20:41 TheSeven )[0x7f6528a17ecc] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x90)[0x7f653397f610] ))))) 0-management: Lock for vol ove not held
20:41 glusterbot TheSeven: ('s karma is now -86
20:41 glusterbot TheSeven: ('s karma is now -87
20:41 glusterbot TheSeven: ('s karma is now -88
20:41 glusterbot TheSeven: ('s karma is now -89
20:41 glusterbot TheSeven: ('s karma is now -90
20:41 JoeJulian heh
20:41 TheSeven I think that's where it all started
20:41 JoeJulian Gotta fix that someday.
20:42 JoeJulian An attempt to unlock something that wasn't locked?
20:43 TheSeven immediately before [2015-06-03 18:32:31.947270] C [MSGID: 106002] [glusterd-server-quorum.c:356:glusterd_do_volume_quorum_action] 0-management: Server quorum lost for volume ove. Stopping local bricks.
20:43 TheSeven which then caused the bricks on the majority server to go dow
20:43 TheSeven down*
20:43 JoeJulian Ah, server-quorum.
20:44 JoeJulian So 50% or more of the servers (glusterd) went down.
20:44 TheSeven well, there are only 2 servers right now
20:44 TheSeven but wouldn't the first one get an additional vote if there are exactly two, like with bricks?
20:44 JoeJulian If that's not what you intended, you may wish to add another peer that doesn't necessarily participate in the volume.
20:44 hamiller joined #gluster
20:45 TheSeven actually I'm in the process of spreading these volumes out to 3 servers
20:45 ndevos CyrilPeponnet: I hope someone will have time to build packages, v3.5.4 is now available: http://thread.gmane.org/gmane.comp.file-systems.gluster.packaging/2
20:45 TheSeven this just caught me before I could even get the bricks migrated to the second server successfully
20:45 TheSeven interestingly this only killed 2 out of 4 bricks
20:46 TheSeven s/bricks/volumes/
20:46 glusterbot What TheSeven meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
20:46 ndevos CyrilPeponnet: and https://github.com/gluster/glusterfs/blob/release-3.5/doc/release-notes/3.5.4.md for the release notes
20:50 fink ok, so calling heal several times healed everything
20:51 fink looks like the called heal process doesn't do a proper job of healing all the required files, it stops after processing some files
20:51 fink calling it multiple times healed the entries in batches
20:52 fink and yet server side log has nothing
20:53 TheSeven am I correct that if I bring down a server with a brick on it, heavily change volume contents affecting that brick, then bring it up again and mount the gluster volume locally on that server, I will see outdated contents of that brick until self-heal completes?
20:53 badone_ joined #gluster
20:53 TheSeven it seems like mounting a volume on a server hosting a brick (cleverly) accesses only the local copy for reads, but apparently it does so even if that copy isn't up to date :/
21:01 kkeithley semiosis: ping. I did a `dput ppa:gluster/glusterfs-3.6 glusterfs_3.6.3-ubuntu1~trusty4_source.changes` from a trusty box.  It ran successfully. I'm not seeing that it built though. Or how to initiate a build.
21:05 Pupeno joined #gluster
21:09 CyrilPeponnet @ndevos thanks a lot !
21:12 * ndevos warns for an incoming glusterbot closed bug flood
21:12 glusterbot News from resolvedglusterbugs: [Bug 1161102] self heal info logs are filled up with messages reporting split-brain <https://bugzilla.redhat.com/show_bug.cgi?id=1161102>
21:12 glusterbot News from resolvedglusterbugs: [Bug 1160711] libgfapi: use versioned symbols in libgfapi.so for compatibility <https://bugzilla.redhat.com/show_bug.cgi?id=1160711>
21:12 glusterbot News from resolvedglusterbugs: [Bug 1177339] entry self-heal in 3.5 and 3.6 are not compatible <https://bugzilla.redhat.com/show_bug.cgi?id=1177339>
21:12 glusterbot News from resolvedglusterbugs: [Bug 1186121] tar on a gluster directory gives message "file changed as we read it" even though no updates to file in progress <https://bugzilla.redhat.com/show_bug.cgi?id=1186121>
21:12 glusterbot News from resolvedglusterbugs: [Bug 1141158] GlusterFS 3.5.4 Tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1141158>
21:12 glusterbot News from resolvedglusterbugs: [Bug 1168173] Regression tests fail in quota-anon-fs-nfs.t <https://bugzilla.redhat.com/show_bug.cgi?id=1168173>
21:12 glusterbot News from resolvedglusterbugs: [Bug 1222150] readdirp return 64bits inodes even if enable-ino32 is set <https://bugzilla.redhat.com/show_bug.cgi?id=1222150>
21:12 glusterbot News from resolvedglusterbugs: [Bug 1162226] bulk remove xattr should not fail if removexattr fails with ENOATTR/ENODATA <https://bugzilla.redhat.com/show_bug.cgi?id=1162226>
21:12 glusterbot News from resolvedglusterbugs: [Bug 1184528] Some newly created folders have root ownership although created by unprivileged user <https://bugzilla.redhat.com/show_bug.cgi?id=1184528>
21:12 glusterbot News from resolvedglusterbugs: [Bug 1190633] self-heal-algorithm with option "full" doesn't heal sparse files correctly <https://bugzilla.redhat.com/show_bug.cgi?id=1190633>
21:12 glusterbot News from resolvedglusterbugs: [Bug 1200764] [AFR] Core dump and  crash observed during disk replacement case <https://bugzilla.redhat.com/show_bug.cgi?id=1200764>
21:12 glusterbot News from resolvedglusterbugs: [Bug 1115197] Directory quota does not apply on it's sub-directories <https://bugzilla.redhat.com/show_bug.cgi?id=1115197>
21:12 glusterbot News from resolvedglusterbugs: [Bug 1166275] Directory fd leaks in index translator <https://bugzilla.redhat.com/show_bug.cgi?id=1166275>
21:12 glusterbot News from resolvedglusterbugs: [Bug 1192832] log files get flooded when removexattr() can't find a specified key or value <https://bugzilla.redhat.com/show_bug.cgi?id=1192832>
21:12 glusterbot News from resolvedglusterbugs: [Bug 1202675] Perf:  readdirp in replicated volumes causes performance degrade <https://bugzilla.redhat.com/show_bug.cgi?id=1202675>
21:12 glusterbot News from resolvedglusterbugs: [Bug 1162150] AFR gives EROFS when fop fails on all subvolumes when client-quorum is enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1162150>
21:12 glusterbot News from resolvedglusterbugs: [Bug 1177928] Directories not visible anymore after add-brick, new brick dirs not part of old bricks <https://bugzilla.redhat.com/show_bug.cgi?id=1177928>
21:12 glusterbot News from resolvedglusterbugs: [Bug 1162767] DHT: Rebalance- Rebalance process crash after remove-brick <https://bugzilla.redhat.com/show_bug.cgi?id=1162767>
21:12 glusterbot News from resolvedglusterbugs: [Bug 1092037] Issues reported by Cppcheck static analysis tool <https://bugzilla.redhat.com/show_bug.cgi?id=1092037>
21:12 glusterbot News from resolvedglusterbugs: [Bug 1159968] glusterfs.spec.in: deprecate *.logrotate files in dist-git in favor of the upstream logrotate files <https://bugzilla.redhat.com/show_bug.cgi?id=1159968>
21:12 glusterbot News from resolvedglusterbugs: [Bug 1211841] glusterfs-api.pc versioning breaks QEMU <https://bugzilla.redhat.com/show_bug.cgi?id=1211841>
21:12 glusterbot News from resolvedglusterbugs: [Bug 1162230] quota xattrs are exposed in lookup and getxattr <https://bugzilla.redhat.com/show_bug.cgi?id=1162230>
21:12 glusterbot News from resolvedglusterbugs: [Bug 1174250] Glusterfs outputs a lot of warnings and errors when quota is enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1174250>
21:12 glusterbot News from resolvedglusterbugs: [Bug 1101138] meta-data split-brain prevents entry/data self-heal of dir/file respectively <https://bugzilla.redhat.com/show_bug.cgi?id=1101138>
21:12 glusterbot News from resolvedglusterbugs: [Bug 1173515] [HC] - mount.glusterfs fails to check return of mount command. <https://bugzilla.redhat.com/show_bug.cgi?id=1173515>
21:12 glusterbot News from resolvedglusterbugs: [Bug 1191006] Building argp-standalone breaks nightly builds on Fedora Rawhide <https://bugzilla.redhat.com/show_bug.cgi?id=1191006>
21:14 cholcombe gluster: does vol add-brick work for all translator types?  It only shows <stripe|replica> in the CLI docs
21:20 virusuy joined #gluster
21:27 hgowtham_ joined #gluster
21:27 deniszh joined #gluster
21:40 wkf joined #gluster
21:46 wushudoin| joined #gluster
21:52 wushudoin| joined #gluster
22:04 CyrilPeponnet Is mount-rmtab option vol wide ?
22:04 CyrilPeponnet of I need to set to on each volume with the same path
22:06 CyrilPeponnet s/of/or
22:19 theron joined #gluster
22:30 ashiq joined #gluster
22:36 hgowtham__ joined #gluster
22:41 Rapture joined #gluster
23:05 sysconfig joined #gluster
23:08 Jandre joined #gluster
23:18 plarsen joined #gluster
23:18 plarsen joined #gluster
23:20 gildub joined #gluster
23:23 doubt_ joined #gluster
23:33 victori joined #gluster
23:47 maveric_amitc_ joined #gluster
23:52 vincent_vdk joined #gluster
