IRC log for #gluster, 2015-04-28

All times shown according to UTC.

Time Nick Message
00:09 jobewan joined #gluster
00:18 nangthang joined #gluster
00:27 nangthang joined #gluster
00:31 kripper joined #gluster
00:36 kripper hi, a glusterfs mount is acting funny: "cannot create regular file" + "No such file or directory" when creating a file
00:37 kripper These are the logs http://pastebin.com/DHpvvQKR
00:37 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
00:38 kripper the reason is that the client doesn't have access to one of the replica bricks
00:38 kripper (firewall)
00:40 Le22S joined #gluster
00:40 kripper I wonder if error messages could be more elaborate (e.g. "ERROR: Brick ... is down")
00:46 lalatenduM joined #gluster
00:52 kripper is there something like "gluster peer probe" but for adding firewall rules that give external clients access to all gluster peers?
00:53 kripper since clients require access to replica brick hosts, it makes sense to have a gluster cmd for this
00:54 atalur joined #gluster
00:55 atinmu joined #gluster
00:59 jbrooks joined #gluster
01:12 JoeJulian kripper: Unfortunately, there's nothing but the pre-defined error messages that can be returned as an error from the standpoint of the filesystem. In the error logs, there's always room for improvement.
01:13 JoeJulian You might be able to add firewall rules under /var/lib/glusterd/hooks somewhere.
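A minimal sketch of what such a hook could look like, assuming an iptables-based firewall; the client subnet, script name and port range below are placeholders, and the path follows the usual /var/lib/glusterd/hooks/1/<event>/post layout:
    #!/bin/sh
    # Hypothetical hook script, e.g. /var/lib/glusterd/hooks/1/start/post/S30firewall.sh,
    # run by glusterd after a volume is started. Opens the management and brick ports
    # to a trusted client subnet.
    CLIENT_NET="192.168.1.0/24"     # assumption: subnet of the external clients
    BRICK_PORTS="49152:49251"       # assumption: brick port range in use
    iptables -I INPUT -s "$CLIENT_NET" -p tcp --dport 24007:24008 -j ACCEPT
    iptables -I INPUT -s "$CLIENT_NET" -p tcp --dport "$BRICK_PORTS" -j ACCEPT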
01:20 Gill joined #gluster
01:27 kripper JoeJulian: thanks
01:30 atalur joined #gluster
01:39 halfinhalfout joined #gluster
01:53 DV joined #gluster
01:55 ilbot3 joined #gluster
01:55 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:04 Pupeno joined #gluster
02:10 halfinhalfout joined #gluster
02:13 raghug joined #gluster
02:24 kdhananjay joined #gluster
02:29 lalatenduM joined #gluster
02:30 kripper left #gluster
02:48 gildub joined #gluster
02:54 jiku joined #gluster
03:17 jobewan joined #gluster
03:27 bharata-rao joined #gluster
03:30 rjoseph joined #gluster
03:34 raghug joined #gluster
03:40 shubhendu joined #gluster
03:44 itisravi joined #gluster
03:47 halfinhalfout1 joined #gluster
03:52 hagarth joined #gluster
03:53 saurabh_ joined #gluster
03:54 Pupeno joined #gluster
03:57 smohan joined #gluster
03:57 lalatenduM joined #gluster
03:59 nbalacha joined #gluster
04:04 atinmu joined #gluster
04:04 shubhendu joined #gluster
04:05 RameshN joined #gluster
04:13 kumar joined #gluster
04:16 poornimag joined #gluster
04:17 Pupeno joined #gluster
04:18 nishanth joined #gluster
04:18 spandit joined #gluster
04:19 sakshi joined #gluster
04:19 AGTT left #gluster
04:21 ndarshan joined #gluster
04:26 sripathi joined #gluster
04:30 ashiq joined #gluster
04:35 anoopcs joined #gluster
04:36 jiffin joined #gluster
04:38 overclk joined #gluster
04:41 jiku joined #gluster
04:45 vimal joined #gluster
04:49 smohan joined #gluster
04:50 rjoseph joined #gluster
04:50 jobewan joined #gluster
04:50 pppp joined #gluster
04:51 shubhendu joined #gluster
04:54 gem joined #gluster
04:56 Anjana joined #gluster
04:59 hagarth joined #gluster
05:08 Bhaskarakiran joined #gluster
05:09 jbrooks joined #gluster
05:12 rafi joined #gluster
05:15 lalatenduM joined #gluster
05:16 anil joined #gluster
05:18 Apeksha joined #gluster
05:23 kanagaraj joined #gluster
05:32 ashiq joined #gluster
05:38 maveric_amitc_ joined #gluster
05:40 kshlm joined #gluster
05:41 hgowtham joined #gluster
05:53 Manikandan joined #gluster
05:53 Manikandan_ joined #gluster
05:54 glusterbot News from newglusterbugs: [Bug 1215896] Typos in the messages logged by the CTR translator <https://bugzilla.redhat.com/show_bug.cgi?id=1215896>
05:54 kdhananjay joined #gluster
05:55 Manikandan__ joined #gluster
06:00 raghu joined #gluster
06:00 mbukatov joined #gluster
06:01 ashiq joined #gluster
06:09 tg2 joined #gluster
06:09 badone__ joined #gluster
06:11 kaushal_ joined #gluster
06:12 soumya joined #gluster
06:12 jiku joined #gluster
06:16 shubhendu joined #gluster
06:17 csim joined #gluster
06:18 rjoseph joined #gluster
06:18 SOLDIERz joined #gluster
06:19 DV__ joined #gluster
06:20 Anjana joined #gluster
06:23 jtux joined #gluster
06:24 glusterbot News from newglusterbugs: [Bug 1215907] cli should return error with inode quota cmds on cluster with op_version less than 3.7 <https://bugzilla.redhat.com/show_bug.cgi?id=1215907>
06:26 badone__ joined #gluster
06:26 ktosiek joined #gluster
06:35 smohan joined #gluster
06:40 cholcombe joined #gluster
06:42 SOLDIERz joined #gluster
06:44 gem joined #gluster
06:50 Anjana joined #gluster
06:53 gvandeweyer joined #gluster
06:53 gvandeweyer good morning all
06:54 gvandeweyer small question: If I access a gluster volume by NFS, does the client still retrieve files directly from the different servers/bricks, or does all traffic pass through the server the NFS-gluster volume is mounted from?
06:57 TvL2386 joined #gluster
06:59 gem joined #gluster
07:00 _Bryan_ joined #gluster
07:01 harish_ joined #gluster
07:03 ppai joined #gluster
07:06 atalur joined #gluster
07:09 [Enrico] joined #gluster
07:12 Jeeves__ left #gluster
07:13 kshlm joined #gluster
07:18 ndevos gvandeweyer: all traffic goes through the nfs-server, similar to a gateway/proxy
07:18 gvandeweyer ok. thanks for the info
07:19 sahina joined #gluster
07:21 o5k_ joined #gluster
07:26 hgowtham_ joined #gluster
07:32 pcaruana joined #gluster
07:34 fsimonce joined #gluster
07:35 SOLDIERz joined #gluster
07:35 Pupeno joined #gluster
07:36 Pupeno joined #gluster
07:40 Anjana joined #gluster
07:44 gvandeweyer ndevos: is there a risk in letting some clients use NFS access, and others the fuse-gluster protocol?
07:44 gvandeweyer we currently see a severe impact on our clients because big files that are read repeatedly are not cached
07:44 gvandeweyer nfs access seems to perform page caching better
07:45 liquidat joined #gluster
07:47 ndevos gvandeweyer: the (Linux) nfs-client does indeed have better caching than the fuse module, but it is more work to make the mountpoint highly available
07:48 ndevos gvandeweyer: access over both nfs and fuse is not a problem during normal operations, locking is passe on to the bricks where they are checked/enforced
07:48 ndevos s/is passe/is passed/
07:48 glusterbot What ndevos meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
07:48 * ndevos pokes glusterbot
07:48 gvandeweyer :-)
07:49 gvandeweyer fine. We'll evaluate the availability of gluster.nfs mounts under high load, but keep the lack of failover in mind.
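For reference, the two access methods being compared look roughly like this; server and volume names are placeholders, and Gluster's built-in NFS server speaks NFSv3 only:
    mount -t glusterfs server1:/myvol /mnt/myvol        # FUSE native client
    mount -t nfs -o vers=3 server1:/myvol /mnt/myvol    # built-in Gluster NFS (NFSv3)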
07:49 ashiq joined #gluster
07:49 ndevos samba uses its own locking, and those locks are not (completely?) shared with the bricks, so you can have an issue when you use files over both fuse/nfs and smb/cifs
07:50 ndevos for high-available nfs, you need virtual ip-addresses and relocate them on failures, something like pacemaker would do that for you
07:51 ndevos that does not address all issues, you would lose the NLM status (nfs locking) though :-/
07:56 ndevos gvandeweyer: there is much work being done to improve the nfs solution, see http://www.gluster.org/community/documentation/index.php/Features/HA_for_ganesha for more details
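As a rough illustration of the virtual-IP idea with pacemaker, one floating address could be defined like this (resource name, address and netmask are made up; this is only the VIP piece, not a complete HA setup):
    pcs resource create nfs_vip ocf:heartbeat:IPaddr2 ip=192.168.0.200 cidr_netmask=24 op monitor interval=10s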
07:57 anrao joined #gluster
07:58 ron-slc joined #gluster
07:58 anrao_ joined #gluster
08:01 ctria joined #gluster
08:02 Pupeno joined #gluster
08:02 Pupeno joined #gluster
08:05 morse joined #gluster
08:06 anil left #gluster
08:09 gvandeweyer ndevos, thank you for the pointers
08:09 ndevos sure, np!
08:09 ndevos m=,
08:24 mbukatov joined #gluster
08:24 kovshenin joined #gluster
08:25 Norky joined #gluster
08:26 DJClean joined #gluster
08:26 ron-slc joined #gluster
08:28 sac joined #gluster
08:31 SOLDIERz joined #gluster
08:33 deniszh joined #gluster
08:34 atalur joined #gluster
08:34 deniszh joined #gluster
08:35 smohan joined #gluster
08:36 Bhaskarakiran joined #gluster
08:37 raghug joined #gluster
08:37 hagarth joined #gluster
08:49 bootc joined #gluster
08:49 bootc Hi folks, I have a 2-node cluster with a split brain situation I'm trying to resolve
08:49 bootc I've looked at https://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
08:50 bootc I'm running 3.5 though (from Debian jessie)
08:51 bootc http://paste.debian.net/169542/ is from "gluster volume heal hafw info"
08:56 Pupeno joined #gluster
08:59 hagarth joined #gluster
09:03 [Enrico] joined #gluster
09:03 [Enrico] joined #gluster
09:04 nhayashi joined #gluster
09:07 samsaffron___ joined #gluster
09:08 sadbox joined #gluster
09:08 o5k joined #gluster
09:08 SOLDIERz joined #gluster
09:11 SOLDIERz joined #gluster
09:15 gem joined #gluster
09:18 Leildin joined #gluster
09:19 SOLDIERz joined #gluster
09:30 RayTrace_ joined #gluster
09:37 itisravi_ joined #gluster
09:40 Bardack joined #gluster
09:43 Slashman joined #gluster
09:46 kkeithley1 joined #gluster
09:48 atalur joined #gluster
09:50 sripathi left #gluster
09:54 cholcombe joined #gluster
10:00 gem joined #gluster
10:07 atinmu joined #gluster
10:13 RayTrace_ joined #gluster
10:24 oli_144 joined #gluster
10:25 oli_144 gday
10:25 glusterbot News from newglusterbugs: [Bug 1166862] rmtab file is a bottleneck when lot of clients are accessing a volume through NFS <https://bugzilla.redhat.com/show_bug.cgi?id=1166862>
10:26 oli_144 left #gluster
10:28 ktosiek joined #gluster
10:31 Peppard joined #gluster
10:48 halfinhalfout joined #gluster
10:51 firemanxbr joined #gluster
10:59 ndevos REMINDER: Gluster Bug Triage meeting is starting in 1 hour from now in #gluster-meeting
11:04 DV__ joined #gluster
11:06 Apeksha joined #gluster
11:10 LebedevRI joined #gluster
11:10 atalur joined #gluster
11:12 atinmu joined #gluster
11:13 itisravi joined #gluster
11:14 Manikandan joined #gluster
11:14 Anjana joined #gluster
11:15 hagarth joined #gluster
11:15 gem joined #gluster
11:16 RameshN joined #gluster
11:17 rafi joined #gluster
11:17 B21956 joined #gluster
11:20 SOLDIERz joined #gluster
11:25 cholcombe joined #gluster
11:30 rafi1 joined #gluster
11:30 DV__ joined #gluster
11:31 kovshenin joined #gluster
11:34 kovshenin joined #gluster
11:37 kovsheni_ joined #gluster
11:40 kovshenin joined #gluster
11:42 gildub joined #gluster
11:42 atalur joined #gluster
11:46 soumya_ joined #gluster
11:51 hagarth joined #gluster
11:54 ndevos REMINDER: Gluster Bug Triage meeting is starting in ~5 minutes from now in #gluster-meeting
11:59 ira joined #gluster
12:02 DV joined #gluster
12:03 soumya joined #gluster
12:13 jcastill1 joined #gluster
12:15 atalur joined #gluster
12:16 DV__ joined #gluster
12:16 rjoseph joined #gluster
12:18 sahina joined #gluster
12:20 raghug joined #gluster
12:24 Gill joined #gluster
12:25 glusterbot News from newglusterbugs: [Bug 1216052] Spelling errors again <https://bugzilla.redhat.com/show_bug.cgi?id=1216052>
12:25 glusterbot News from newglusterbugs: [Bug 1216039] nfs-ganesha: Discrepancies with lock states recovery during migration <https://bugzilla.redhat.com/show_bug.cgi?id=1216039>
12:25 glusterbot News from newglusterbugs: [Bug 1216051] nfs-ganesha: More parameters to be added to default export config file <https://bugzilla.redhat.com/show_bug.cgi?id=1216051>
12:30 jcastillo joined #gluster
12:30 rafi1 joined #gluster
12:30 sahina joined #gluster
12:47 mmbash joined #gluster
12:47 mmbash hi
12:47 glusterbot mmbash: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
12:48 mmbash testing quotas
12:48 mmbash it's working fine so far
12:48 mmbash but docs say: You can set the disk limit on the directory even if it is not created
12:49 mmbash when i try this i get: quota command failed : Failed to get trusted.gfid attribute on path /testtest4. Reason : No such file or directory
12:49 mmbash glusterfs 3.5.2
12:50 mmbash gluster vol quota testvol limit-usage /testtest4 100MB
12:50 lalatenduM joined #gluster
12:51 ndevos mmbash: I thought quotas were stored in an xattr for the directory, so that would require the directory to exist beforehand - which docs are you using?
12:51 mmbash http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Managing_Directory_Quota
12:51 mmbash laste Note
12:52 ndevos ah, well, those docs are for 3.2, and you are on 3.5, I think things have changed a little
12:54 ndevos https://github.com/gluster/glusterfs/blob/release-3.5/doc/features/quota-scalability.md seems to have some description, and further links
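A minimal sketch of the workflow that works on 3.5, assuming the directory has to exist before the limit is set; the mount point is a placeholder, and the volume and directory names follow mmbash's example:
    mount -t glusterfs server1:/testvol /mnt/testvol
    mkdir /mnt/testvol/testtest4
    gluster volume quota testvol limit-usage /testtest4 100MB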
12:54 lalatenduM_ joined #gluster
12:56 julim joined #gluster
12:56 DV__ joined #gluster
12:57 glusterbot News from resolvedglusterbugs: [Bug 1216065] Fails to build on x32 <https://bugzilla.redhat.com/show_bug.cgi?id=1216065>
13:05 kkeithley_ the-me: ping. do you have an account in the new gerrit?
13:10 wkf joined #gluster
13:15 mmbash checking the docs but seeing anything related yet
13:17 hamiller joined #gluster
13:18 jmarley joined #gluster
13:19 mmbash not
13:19 ashiq joined #gluster
13:19 bene2 joined #gluster
13:21 cholcombe joined #gluster
13:25 theron joined #gluster
13:26 theShiz joined #gluster
13:26 theShiznit joined #gluster
13:30 theron joined #gluster
13:31 theShiz joined #gluster
13:32 theShiz joined #gluster
13:32 NuxRo joined #gluster
13:34 ashiq joined #gluster
13:35 theShiz joined #gluster
13:36 ricky-ti1 joined #gluster
13:37 SOLDIERz joined #gluster
13:41 Anjana joined #gluster
13:48 coredump joined #gluster
13:58 soumya joined #gluster
14:00 squizzi joined #gluster
14:06 jcastill1 joined #gluster
14:06 wushudoin joined #gluster
14:08 wushudoin joined #gluster
14:13 ricky-ticky1 joined #gluster
14:23 jcastillo joined #gluster
14:24 ira joined #gluster
14:28 ricky-ticky joined #gluster
14:29 T0aD joined #gluster
14:29 hagarth joined #gluster
14:30 SOLDIERz joined #gluster
14:31 plarsen joined #gluster
14:31 [Enrico] joined #gluster
14:37 DV__ joined #gluster
14:42 kdhananjay joined #gluster
14:46 Anjana joined #gluster
14:54 atinmu joined #gluster
14:54 halfinhalfout hey folks. should I expect to see heals happening on a heavily used volume all the time as a matter of course?
14:57 halfinhalfout or is that indicative of a performance problem? this is on a replicate volume w/  3 nodes across 3 bricks.
14:58 halfinhalfout I've seen no problems w/ the underlying XFS file systems
14:58 bene2 joined #gluster
14:58 jcastill1 joined #gluster
15:01 shaunm_ joined #gluster
15:03 bene3 joined #gluster
15:03 jcastillo joined #gluster
15:06 wkf joined #gluster
15:10 necrogami joined #gluster
15:11 uxbod joined #gluster
15:12 uxbod Hello, have an issue with geo-replication and gluster 3.6.3
15:12 uxbod it seems to be stuck in history crawl and nothing gets replicated ?
15:19 roost__ joined #gluster
15:22 jbrooks joined #gluster
15:24 uxbod how may I debug this please ?
15:25 lalatenduM joined #gluster
15:27 lalatenduM joined #gluster
15:32 uxbod have tried setting indexing but receive failed: geo-replication.indexing cannot be disabled while geo-replication sessions exist
15:32 uxbod but I have stopped the geo-rep
15:40 jvandewege_ joined #gluster
15:41 ricky-ticky joined #gluster
15:45 DJClean joined #gluster
15:45 jotun joined #gluster
15:48 bfoster joined #gluster
15:51 JordanHackworth joined #gluster
15:56 jobewan joined #gluster
16:01 Guest65503 joined #gluster
16:07 meghanam joined #gluster
16:11 papamoose1 joined #gluster
16:14 dbruhn joined #gluster
16:23 haomaiwang joined #gluster
16:30 chirino joined #gluster
16:41 papamoose1 joined #gluster
16:41 papamoose1 joined #gluster
16:43 Anjana joined #gluster
16:44 papamoose1 left #gluster
16:45 kovshenin joined #gluster
16:47 kovshenin joined #gluster
16:50 kovsheni_ joined #gluster
16:53 JoeJulian uxbod: have you tried restarting glusterd?
16:55 uxbod yep
16:55 uxbod on all nodes
16:56 uxbod even stopped and deleted the geo-rep
16:56 uxbod recreated
16:56 JoeJulian What's the exact text of the error message?
16:56 uxbod and now it sits at
16:56 uxbod Active     N/A                  Changelog Crawl    235
16:56 uxbod if I chmod a file it syncs but ignores all other content
16:57 JoeJulian That would make sense if it had already synced.
16:57 uxbod it hadn't
16:57 julim joined #gluster
16:58 uxbod zero files were in remote vol
16:58 uxbod chmod a file and it syncd
16:58 JoeJulian is there a backstory to this problem?
17:07 Anjana joined #gluster
17:10 JoeJulian bootc: ,,(splitmount)
17:10 glusterbot bootc: https://github.com/joejulian/glusterfs-splitbrain
17:13 jiffin joined #gluster
17:14 nbalacha joined #gluster
17:15 JoeJulian halfinhalfout: what version? prior to 3.6: yes. The problem is that the report shows entries in .glusterfs/indices which are created when the change process is started and deleted when the change is completed. If you're in the middle of a write when that directory is checked, it will show a file in need of self-heal. That misinformation has been mitigated in 3.6.
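For reference, the pending-heal markers JoeJulian describes live on each brick; the brick path below is a placeholder:
    ls /bricks/mybrick/.glusterfs/indices/xattrop/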
17:18 bootc JoeJulian: I used that, and the versions on both sides were the same; removed the file entirely and put it back, and glusterfs volume heal still complains
17:18 bootc interestingly enough that gfid was nowhere to be found on either of my bricks
17:19 bootc it's now no longer complaining about the gfid, but I still see the 2x entries on 'clemens'
17:19 chirino joined #gluster
17:20 JoeJulian Is it creating new entries?
17:20 JoeJulian Note the timestamps.
17:22 kdhananjay joined #gluster
17:25 halfinhalfout left #gluster
17:25 Rapture joined #gluster
17:26 glusterbot News from newglusterbugs: [Bug 1216187] readdir-ahead needs to be enabled by default for new volumes on rhs-3.0 <https://bugzilla.redhat.com/show_bug.cgi?id=1216187>
17:28 glusterbot News from resolvedglusterbugs: [Bug 1096616] [RFE] Allow options to be enabled on new volumes based on op-version <https://bugzilla.redhat.com/show_bug.cgi?id=1096616>
17:28 shubhendu joined #gluster
17:31 Gill joined #gluster
17:32 dblack joined #gluster
17:33 RayTrace_ joined #gluster
17:34 ir2ivps5 joined #gluster
17:37 ktosiek joined #gluster
17:44 bootc JoeJulian: no
17:45 bootc nothing new since this morning when it happened
17:46 JoeJulian sounds like it's fixed then.
17:48 bootc right, but what about the entries in that list? do I just ignore them?
17:49 gnudna joined #gluster
17:57 RameshN joined #gluster
18:05 uxbod @JoeJulian: waiting to see what happens since re-establishing the geo-rep
18:06 kovshenin joined #gluster
18:06 uxbod still sat at change log crawl
18:08 uxbod in /var/log/glusterfs/geo-replication/{volname}/*.brick-changes.log I see
18:08 uxbod 0-glusterfs: processing changelog: /mnt/san/data/brick/.glusterfs/changelogs/CHANGELOG.1430244443
18:08 kovsheni_ joined #gluster
18:08 uxbod and it just keeps going through more and more of them
18:09 uxbod never a full sync occurs
18:11 kovshenin joined #gluster
18:13 kovshen__ joined #gluster
18:15 halfinhalfout joined #gluster
18:16 ricky-ticky1 joined #gluster
18:23 ir2ivps5 joined #gluster
18:25 uxbod what a pain ... ended up putting the geo-rep on stop
18:26 uxbod tar'd up the gluster mnt, removed everything, started the geo-rep, and then restored that tar
18:26 uxbod now the files have syncd
18:26 uxbod so how does one perform a full initial sync ???
18:27 uxbod Active     N/A                  Changelog Crawl    13137
18:27 uxbod that is, 13137 files synced versus the ~250 from earlier
18:34 kanagaraj joined #gluster
18:36 RayTrace_ joined #gluster
18:38 shubhendu joined #gluster
18:46 lalatenduM joined #gluster
18:50 cholcombe joined #gluster
18:56 halfinhalfout JoeJulian: thanks for the reply RE frequent heals on busy volume. yeah, I'm running v 3.5.3
18:57 halfinhalfout so prolly the only way to reduce those spurious heal messages on 3.5.3 is to find the primary bottleneck to write performance and mitigate it to reduce the likelihood of the health checking hitting at the same time … it's a nagios check, so it happens all day long
19:00 JoeJulian Or just run heal twice in a row and remove anything that's not in both?
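A rough sketch of that suggestion; the volume name and the delay are placeholders, and the header lines of the heal-info output would still need to be filtered out:
    gluster volume heal myvol info | sort > /tmp/heal.1
    sleep 60
    gluster volume heal myvol info | sort > /tmp/heal.2
    comm -12 /tmp/heal.1 /tmp/heal.2    # only entries that appear in both runs are worth investigating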
19:20 tessier_ joined #gluster
19:22 Arminder joined #gluster
19:30 RayTrace_ joined #gluster
19:34 deniszh joined #gluster
19:36 Arminder joined #gluster
19:38 _zerick_ joined #gluster
19:40 Arminder joined #gluster
19:59 soumya joined #gluster
20:03 lexi2 joined #gluster
20:05 shaunm_ joined #gluster
20:08 kripper joined #gluster
20:08 kripper hi again, one of my bricks doesn't start
20:08 kripper here are the logs: http://pastebin.com/aaaxVU9U
20:08 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
20:09 deniszh joined #gluster
20:14 kripper hmm...brick dir is empty, probably it was recreated because the parent disk mount failed
20:15 shpank left #gluster
20:16 Pupeno_ joined #gluster
20:16 kripper I keep bricks in /mnt/disk1/gluster-bricks/mybrick
20:17 kripper I noticed that when /mnt/disk1 isn't mounted, gluster recreates the /mnt/disk1/gluster-bricks directory
20:17 kripper isn't it wrong to do so?
20:18 kripper AFAIK it is also recommended to create a subdir to avoid these problems
20:19 JoeJulian which version?
20:19 gnudna left #gluster
20:19 kripper glusterfs-cli-3.8dev-0.58.gitf692757.el7.centos.x86_64
20:19 kripper glusterfs-server-3.8dev-0.58.gitf692757.el7.centos.x86_64
20:19 kripper glusterfs-fuse-3.8dev-0.58.gitf692757.el7.centos.x86_64
20:19 kripper glusterfs-3.8dev-0.58.gitf692757.el7.centos.x86_64
20:19 kripper glusterfs-geo-replication-3.8dev-0.58.gitf692757.el7.centos.x86_64
20:19 kripper vdsm-gluster-4.17.0-724.git04631f8.el7.noarch
20:19 kripper glusterfs-libs-3.8dev-0.58.gitf692757.el7.centos.x86_64
20:19 kripper glusterfs-api-3.8dev-0.58.gitf692757.el7.centos.x86_64
20:19 kripper glusterfs-rdma-3.8dev-0.58.gitf692757.el7.centos.x86_64
20:24 JoeJulian 3.8? Sure it's wrong. File a bug report.
20:24 Pupeno joined #gluster
20:24 JoeJulian If you're running a pre-release and something doesn't work the way you think it should, always file a bug report.
20:24 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
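For context, the layout kripper describes, with the brick one level below the mount point so that a failed mount leaves the brick path missing instead of silently writing to the root filesystem, looks roughly like this; device, server and volume names are placeholders:
    mount /dev/sdb1 /mnt/disk1
    mkdir -p /mnt/disk1/gluster-bricks/mybrick
    gluster volume create myvol replica 2 serverA:/mnt/disk1/gluster-bricks/mybrick serverB:/mnt/disk1/gluster-bricks/mybrick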
20:27 o5k_ joined #gluster
20:28 DV joined #gluster
20:32 o5k__ joined #gluster
20:48 kripper JoeJulian: sure, I want to contribute to gluster too
20:48 kripper JoeJulian: another question for you
20:48 JoeJulian excellent
20:49 kripper JoeJulian: AFAIU, writes are synchronous, so if you have 2 replica bricks (one local, one remote) it will be slower than just 1 local brick, right?
20:50 JoeJulian probably
20:50 JoeJulian well, certainly
20:50 kripper JoeJulian: If I want to increase write performance temporarily at the cost of security, can I remove the brick and add it back later?
20:51 kripper JoeJulian: or add and remove to make a backup?
20:51 kripper sorry, have to go, will read later
20:54 JoeJulian kripper: If you want to increase write performance at the cost of security, just enable performance.flush-behind and set the performance.write-behind-window-size to a size that's acceptable for that purpose.
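In command form, JoeJulian's suggestion would look something like this; the volume name and the window size are placeholders:
    gluster volume set myvol performance.flush-behind on
    gluster volume set myvol performance.write-behind-window-size 4MB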
21:13 chirino joined #gluster
21:15 badone__ joined #gluster
21:28 B21956 left #gluster
21:36 kripper JoeJulian: cool!
21:37 kripper JoeJulian: Thanks, professor
21:37 JoeJulian :)
21:38 JoeJulian as an aside, kripper, even if you're a contributor you need to start with a bug report. :D
21:42 wkf joined #gluster
21:47 zerick joined #gluster
21:51 jbrooks joined #gluster
21:52 deniszh joined #gluster
22:01 Twistedgrim joined #gluster
22:05 Twistedgrim joined #gluster
22:07 Twistedgrim joined #gluster
22:34 Peppard joined #gluster
22:59 redbeard joined #gluster
23:08 plarsen joined #gluster
23:18 RayTrace_ joined #gluster
23:29 gildub joined #gluster
23:38 marbu joined #gluster
23:45 marbu joined #gluster
23:46 lkoranda joined #gluster
23:47 kripper left #gluster
23:58 chirino joined #gluster
