
IRC log for #gluster, 2014-05-06


All times shown according to UTC.

Time Nick Message
00:03 lpabon joined #gluster
00:05 yinyin joined #gluster
00:42 johnmwilliams__ joined #gluster
00:43 bala joined #gluster
00:49 gdubreui joined #gluster
01:03 theron joined #gluster
01:08 gmcwhistler joined #gluster
01:10 tryggvil joined #gluster
01:11 harish joined #gluster
01:53 jmarley joined #gluster
01:53 jmarley joined #gluster
02:08 Franklu joined #gluster
02:09 harish joined #gluster
02:11 gmcwhistler joined #gluster
02:15 sprachgenerator joined #gluster
02:17 qdk joined #gluster
02:52 kdhananjay joined #gluster
03:04 hagarth joined #gluster
03:10 shubhendu joined #gluster
03:29 kanagaraj joined #gluster
03:29 RameshN joined #gluster
03:46 itisravi joined #gluster
03:50 rastar joined #gluster
03:59 ppai joined #gluster
04:05 nshaikh joined #gluster
04:07 glusterbot New news from newglusterbugs: [Bug 1094557] [SNAPSHOT] Glusterd restore path prematurely exits (silently) when /var/lib/glusterd is on xfs backend <https://bugzilla.redhat.com/show_bug.cgi?id=1094557>
04:11 gmcwhist_ joined #gluster
04:16 haomaiwa_ joined #gluster
04:19 ndarshan joined #gluster
04:23 bharata-rao joined #gluster
04:26 haomaiwang joined #gluster
04:29 ngoswami joined #gluster
04:35 nishanth joined #gluster
04:43 hagarth joined #gluster
04:45 decimoe joined #gluster
04:47 rejy joined #gluster
04:50 yinyin_ joined #gluster
04:52 aravindavk joined #gluster
04:53 deepakcs joined #gluster
04:54 saurabh joined #gluster
05:10 vpshastry1 joined #gluster
05:13 prasanthp joined #gluster
05:15 kanagaraj joined #gluster
05:15 kdhananjay joined #gluster
05:16 ravindran1 joined #gluster
05:16 ravindran1 left #gluster
05:23 haomaiwang joined #gluster
05:25 ricky-ti1 joined #gluster
05:27 ricky-ticky1 joined #gluster
05:29 lalatenduM joined #gluster
05:31 bala joined #gluster
05:31 surabhi joined #gluster
05:34 rjoseph joined #gluster
05:52 kanagaraj joined #gluster
05:53 hagarth joined #gluster
06:02 psharma joined #gluster
06:05 yinyin joined #gluster
06:07 haomaiwa_ joined #gluster
06:13 davinder joined #gluster
06:15 ndarshan joined #gluster
06:18 naveed joined #gluster
06:23 haomaiw__ joined #gluster
06:26 hagarth joined #gluster
06:30 rjoseph joined #gluster
06:35 lalatenduM joined #gluster
06:44 dusmant joined #gluster
06:51 basso joined #gluster
06:51 rahulcs joined #gluster
06:52 lkoranda joined #gluster
06:57 dusmant joined #gluster
06:59 edward1 joined #gluster
07:00 eseyman joined #gluster
07:01 kdhananjay joined #gluster
07:02 ekuric joined #gluster
07:04 kanagaraj joined #gluster
07:05 ctrianta joined #gluster
07:08 deepakcs joined #gluster
07:10 davinder2 joined #gluster
07:11 tziOm joined #gluster
07:16 rahulcs_ joined #gluster
07:20 hagarth joined #gluster
07:32 fsimonce joined #gluster
07:36 liquidat joined #gluster
07:36 FrankLu joined #gluster
07:39 FrankLu Hi, I could reproduce the gfid-mismatch in 3.4.2 using this kind of benchmark: https://gist.github.com/mflu/9f7322d4161fda752851.
07:40 FrankLu I am wondering whether this commit could help: http://review.gluster.org/#/c/5240/
07:40 glusterbot Title: Gerrit Code Review (at review.gluster.org)
07:41 rahulcs joined #gluster
07:47 lalatenduM FrankLu, that patch is an old one that was merged into master long back, so it should already be in 3.4.2
07:49 lalatenduM FrankLu, I think you should file a bug for the issue
07:49 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
07:52 FrankLu @lalatenduM I don't think 3.4.2 has this change: https://github.com/gluster/glusterfs/blob/v3.4.2/xlators/storage/posix/src/posix.c#L131
07:53 andreask joined #gluster
07:57 ktosiek joined #gluster
07:59 Lethalman joined #gluster
07:59 Lethalman hi, does the new geo-replication in gluster 3.5 support master-master replication?
08:03 andreask joined #gluster
08:11 jhonnynem joined #gluster
08:11 jhonnynem hello world
08:12 jhonnynem is there some way to make failover work through NFS?
08:13 jhonnynem I need 4 clients to stay connected to one of my 2 nodes at all times, and if that node fails, the clients should retry the connection to the other node
08:13 jhonnynem thanks in advance
08:14 rahulcs joined #gluster
08:24 harish joined #gluster
08:24 naveed joined #gluster
08:37 ninkotech joined #gluster
08:38 ninkotech_ joined #gluster
08:41 MrAbaddon joined #gluster
08:46 foobar joined #gluster
08:46 foobar anyone have any idea why a gluster node won't start: http://paste.sigio.nl/phdddoo71/py9xap
08:46 glusterbot Title: Sticky Notes (at paste.sigio.nl)
08:49 saravanakumar1 joined #gluster
08:50 nshaikh joined #gluster
08:51 lalatenduM foobar, are you sure selinux and iptables are not causing this?
08:52 lalatenduM Lethalman, msvbhat would know
08:53 Chewi joined #gluster
08:53 foobar lalatenduM: both are off
08:54 msvbhat Lethalman: Not yet.
08:54 msvbhat Lethalman: By master-master, I assume you mean Active-Active right?
08:54 vpshastry joined #gluster
08:55 foobar I had one brick losing a filesystem / I-O errors... stopped gluster on that node, unmounted and rescanned the disk, remounted and tried restarting gluster... didn't work... rebooted, but still no joy
08:55 msvbhat foobar: Do you have any stale bricks around?
08:56 foobar don't know / don't think so ... how do I check?
08:56 msvbhat foobar: You mounted the brick at the same path, right?
08:57 lalatenduM FrankLu, this commit id "acf8cfdf698aa3ebe42ed55bba8be4f85b751c29" is present in the 3.4 branch; here are the steps I tried to verify that
08:57 foobar msvbhat: yup... (/export/sdd1/ in this case, mounted on UUID)
08:57 Lethalman msvbhat, yes, both servers can write files
08:58 lalatenduM FrankLu, clone the git repo , git checkout -b release-3.4 origin/release-3.4, git show acf8cfdf698aa3ebe42ed55bba8be4f85b751c29
08:59 msvbhat Lethalman: That's not there yet :(
08:59 Lethalman msvbhat, :( ok thanks
09:00 andreask joined #gluster
09:00 jhonnynem is it possible to make failover work with NFS?
09:00 lkoranda joined #gluster
09:01 msvbhat foobar: Looks like your glusterd volfile init failed... kshlm would know better.
09:05 FrankLu @lalatenduM ok, let me check. What I did was verify the file in v3.4.2: https://github.com/gluster/glusterfs/blob/v3.4.2/xlators/storage/posix/src/posix.c#L131. That file doesn't contain the changes from http://review.gluster.org/#/c/5240/
09:08 glusterbot New news from newglusterbugs: [Bug 1094655] Peer is disconnected and reconnected every 30 seconds <https://bugzilla.redhat.com/show_bug.cgi?id=1094655>
09:11 harish joined #gluster
09:14 foobar kshlm: any idea what could be wrong here: http://paste.sigio.nl/phdddoo71/py9xap
09:14 glusterbot Title: Sticky Notes (at paste.sigio.nl)
09:14 tryggvil joined #gluster
09:16 FrankLu @lalatenduM, I did git checkout -b release-3.4 origin/release-3.4 and git shortlog | grep Revert | grep 'Remove the interim fix that handles the gfid race' and got nothing back, so I can confirm that the 3.4 release doesn't have this change.
09:17 FrankLu git show <commit_id> only displays the changes introduced by that commit; it doesn't mean the commit is present in the current branch.
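A quick way to settle this kind of question is to ask git directly whether the commit is an ancestor of the branch or tag in question. A minimal sketch, assuming a local clone of the glusterfs repo:

    git branch -r --contains acf8cfdf698aa3ebe42ed55bba8be4f85b751c29    # remote branches that contain the commit
    git merge-base --is-ancestor acf8cfdf698aa3ebe42ed55bba8be4f85b751c29 v3.4.2 \
        && echo "in v3.4.2" || echo "not in v3.4.2"                      # test a specific release tag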
09:20 foobar msvbhat: looks like It's working now... one of the peer files was empty...
09:20 kshlm foobar, I'll get back to you in a little while.
09:20 foobar used this: http://comments.gmane.org/gmane.comp.file-systems.gluster.user/13278
09:20 glusterbot Title: Gluster filesystem users () (at comments.gmane.org)
09:20 foobar kshlm: looks like I got it fixed
09:20 Lethalman left #gluster
09:20 foobar at least.. the processes have started now...
09:21 foobar gluster> volume status
09:21 foobar Another transaction could be in progress. Please try again after sometime.
09:22 harish joined #gluster
09:27 lalatenduM FrankLu, yes, you're right
09:27 lalatenduM FrankLu, you should list this patch for backport in 3.4.4 in http://www.gluster.org/community/documentation/index.php/Backport_Wishlist
09:27 glusterbot Title: Backport Wishlist - GlusterDocumentation (at www.gluster.org)
09:29 lalatenduM FrankLu, so that it will be backported to 3.4.4; otherwise you can send a patch for the backport yourself
09:31 lalatenduM FrankLu, just checked the bug and found new patches in master for this issue e.g. http://review.gluster.org/#/c/7662/
09:31 glusterbot Title: Gerrit Code Review (at review.gluster.org)
09:32 lalatenduM FrankLu, Please put a comment in the bug that you are seeing this in 3.4.2
09:34 foobar kshlm / msvbhat: looks like something isn't right yet... load is 80 on one of the nodes
09:37 keytab joined #gluster
09:38 glusterbot New news from newglusterbugs: [Bug 1077452] Unable to setup/use non-root Geo-replication <https://bugzilla.redhat.com/show_bug.cgi?id=1077452>
09:38 jhonnynem hello, good morning
09:39 rahulcs joined #gluster
09:39 jhonnynem I have the following problem: I have a farm of webservers with the gluster client mounted and failover working
09:40 jhonnynem can I improve reads?
09:40 jhonnynem or, if I were to use NFS, could I make failover from clients to nodes work?
09:41 jhonnynem thanks in advance
09:44 FrankLu @lalatenduM, is the change http://review.gluster.org/#/c/7662/ another story? Maybe I should test concurrent mkdir & rmdir
09:46 kshlm foobar, what is taking up that much load?
09:46 nikk_ joined #gluster
09:50 foobar kshlm: I have no idea...
09:50 foobar disks aren't doing too much
09:50 kshlm Is it glusterd? or some other gluster process?
09:51 foobar glusterfsd's are eating all 100%+ cpu
09:52 ctrianta joined #gluster
09:52 foobar i'm seeing some: 0-gv0-replicate-6: Non Blocking entrylks failed for <gfid:ID> in the shd-log on one node
09:53 kshlm Then it most probably isn't related to your earlier issue.
09:53 foobar load 80+ on 2 nodes... 6 on the 3rd node
09:53 kshlm Ah, self-heal daemon is healing.
09:53 foobar is there any way to make the SHD run with way lower priority...
09:53 kshlm That generally does use some resources.
09:53 foobar as it's killing all performance, and now our site can't run at all
09:54 foobar it's making simple file uploads take minutes instead of 1 second
09:54 kshlm I remember someone else complaining about the same and they found a solution.
09:54 kshlm I think cgroups was used in that case.
09:55 kshlm @cgroups
09:55 foobar would that also explain: gluster volume status gv0
09:55 foobar Another transaction could be in progress. Please try again after sometime.
09:56 foobar or is there a way to stop the SHD ... and run it later ... when I don't need the performance
09:56 foobar so I can run it at night... instead of now
09:57 kshlm There is a volume set option to do that, 'cluster.self-heal-daemon' or something along those lines.
09:57 smithyuk1_ joined #gluster
09:58 kshlm cluster.self-heal-daemon is the option.
09:58 kshlm set it to off to turn shd off.
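For reference, a minimal sketch of toggling the self-heal daemon, assuming the volume name gv0 used above; note that cluster.self-heal-daemon controls the daemon itself, while cluster.data-self-heal (used below) only controls client-side data heals:

    gluster volume set gv0 cluster.self-heal-daemon off   # stop shd-driven background heals
    # later, off-peak:
    gluster volume set gv0 cluster.self-heal-daemon on
    gluster volume heal gv0 full                          # kick off a full heal when load allows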
09:58 foobar gluster> volume set gv0 cluster.data-self-heal off
09:58 foobar volume set: failed: Another transaction could be in progress. Please try again after sometime.
09:59 eshy joined #gluster
09:59 kshlm could you try the command from another node?
09:59 eshy joined #gluster
10:00 kshlm if you are getting that error constantly it means that one of the peers is holding a stale lock.
10:00 kshlm you can get the identity of the peer from the glusterd logs. restarting glusterd on just that peer would clear the lock.
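A rough sketch of that procedure, assuming a typical RPM-style install where glusterd logs to /var/log/glusterfs/etc-glusterfs-glusterd.vol.log (the path and service command may differ on your distribution):

    grep -i lock /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | tail -20   # find the peer UUID holding the lock
    gluster peer status                                                          # map that UUID to a hostname
    # then, on the offending peer only:
    service glusterd restart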
10:03 foobar ok... restarted glusterd on the node that I think had the lock
10:05 foobar gluster> volume set gv0 cluster.data-self-heal off
10:05 foobar volume set: success
10:08 foobar and locked again...
10:10 rahulcs_ joined #gluster
10:18 DzU joined #gluster
10:20 DzU Hi all. A question about something I don't understand:
10:20 DzU I have a replica volume
10:20 jhonnynem some advice for NFS failover with gluster?
10:20 jhonnynem thanks
10:21 DzU heal info gives me a file that needs healing
10:21 andreask jhonnynem: use a loadbalancer like haproxy
10:21 DzU triggering the heal process doesn't do anything
10:22 FrankLu @lalatenduM, even with the commit http://review.gluster.org/#/c/5240/ applied, I could still reproduce a gfid-mismatch using 10 clients (each in its own VM) concurrently creating 70000 directories
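The linked gist isn't reproduced here, but the general shape of such a reproducer is a loop like the one below, run in parallel from each client's FUSE mount (the mount point, directory prefix and count are illustrative assumptions):

    # run concurrently on each of the ~10 client VMs against the same volume
    for i in $(seq 1 70000); do
        mkdir -p /mnt/test_volume/dir_$i    # racing mkdirs from many clients can expose the gfid mismatch
    done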
10:22 DzU the file seems ok in both bricks
10:23 andreask DzU: so no split-brain messages in the logs?
10:23 DzU no split-brain
10:24 andreask and how do you trigger the healing?
10:25 DzU andreask: gluster volume heal gluster01
10:25 jhonnynem are there any existing cases of using haproxy for this?
10:25 jhonnynem Could it work?
10:25 andreask jhonnynem: it does, yes
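A minimal illustrative haproxy snippet for the NFS case, assuming two gluster nodes at 192.168.1.11/.12 and that only the NFS port matters here (a real setup also has to handle the portmapper/mountd ports and NFS client retry behaviour):

    listen gluster-nfs
        bind *:2049
        mode tcp
        option tcplog
        server node1 192.168.1.11:2049 check
        server node2 192.168.1.12:2049 check backup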
10:26 DzU andreask: "gluster volume heal gluster01 info" keeps listing the file in both bricks
10:27 kkeithley1 joined #gluster
10:27 andreask DzU: and "info healed" and "info heal-failed" ?
10:30 lalatenduM FrankLu, please put your comments (i.e. test scenarios) in the bug, and a mail to gluster-devel should get your issue in front of the devs
10:30 DzU andreask, heal-failed shows nothing at all. info healed lists an entry for the file for each heal I triggered
10:32 andreask DzU: big file?
10:34 DzU not really. The volume is used for assets (img, js and css) for a wordpress website
10:34 jhonnynem otherwise I think it is better to use a simpler failover storage. I like how the glusterfs native client handles failover, but I am worried about reads. Are there any tips and tricks for tuning up gluster client read performance, or some type of cache?
10:35 jhonnynem sorry, my english is not very good
10:35 jhonnynem You understand?
10:36 jhonnynem thanks in advance
10:37 edward1 joined #gluster
10:38 jhonnynem Are there any tips and tricks for tuning up gluster client read performance, or some type of cache?
10:38 FrankLu @lalatenduM, ok I will
10:38 glusterbot New news from newglusterbugs: [Bug 1094708] gsyncd binary crash due to missing memory accounting <https://bugzilla.redhat.com/show_bug.cgi?id=1094708>
10:39 andreask DzU: is the file already in sync and only the display wrong?
10:41 ctrianta joined #gluster
10:41 DzU andreask, yes, the file is in sync and has the same md5sum on both bricks, but it keeps being listed in heal info
10:41 rahulcs joined #gluster
10:43 jhonnynem please, can somebody help me?
10:44 jhonnynem I have a production environment with gluster; I'd appreciate your help
10:45 andreask DzU: have you tried restarting glusterd on all nodes?
10:51 naveed joined #gluster
10:54 X3NQ joined #gluster
10:56 DzU andreask, not tried. I'll test it soon (in a hour) and report back, here.
10:59 d-fence joined #gluster
11:03 nshaikh joined #gluster
11:08 glusterbot New news from newglusterbugs: [Bug 1094720] [SNAPSHOT]: snapshot creation fails but df -h shows the snapshot brick as mounted <https://bugzilla.redhat.com/show_bug.cgi?id=1094720>
11:09 vikhyat joined #gluster
11:09 FrankLu @lalatenduM, posted the comment. In our production deployment, gfid-mismatch is a critical issue: it can make our web servers hang in D state, and then our service becomes unavailable.
11:11 cyber_si joined #gluster
11:11 lalatenduM FrankLu, thanks, appreciate it. Is there any specific reason you are using a pure distribute volume? In practice, distribute-replicate volumes are less error prone
11:12 diegows joined #gluster
11:14 FrankLu @lalatenduM, I don't use pure distribute volume. I use a Distributed-Replicate one:
11:14 FrankLu Volume Name: test_volume
11:14 FrankLu Type: Distributed-Replicate
11:14 lalatenduM FrankLu, one last suggestion: have you tried the same on 3.4.3 (latest)? I am pretty sure it would be reproducible
11:14 lalatenduM FrankLu, ohh, ;(
11:15 FrankLu @LalatenduM: no, I haven't tried 3.4.3, I will try if I have time.
11:15 lalatenduM FrankLu, I have cloned the original bug (which was on mainline) to branch 3.4.3: https://bugzilla.redhat.com/show_bug.cgi?id=1094724
11:15 glusterbot Bug 1094724: high, high, ---, nsathyan, NEW , mkdir/rmdir loop causes gfid-mismatch on a 6 brick distribute volume
11:22 chirino joined #gluster
11:32 andreask joined #gluster
11:34 rahulcs joined #gluster
11:36 andreask joined #gluster
11:38 glusterbot New news from newglusterbugs: [Bug 1094724] mkdir/rmdir loop causes gfid-mismatch on a 6 brick distribute volume <https://bugzilla.redhat.com/show_bug.cgi?id=1094724>
11:39 andreask joined #gluster
11:48 ekuric joined #gluster
11:50 partner joined #gluster
11:54 d-fence_ joined #gluster
12:00 B21956 joined #gluster
12:01 gdubreui joined #gluster
12:02 jmarley joined #gluster
12:02 jmarley joined #gluster
12:02 d-fence joined #gluster
12:05 itisravi joined #gluster
12:05 itisravi joined #gluster
12:17 rahulcs joined #gluster
12:28 rjoseph1 joined #gluster
12:31 DzU andreask, restarted glusterfs-server on both servers
12:31 DzU nothing changed
12:32 jhonnymad2 joined #gluster
12:32 andreask DzU: I meant only the glusterd .... but I assume that was also restarted?
12:32 jhonnymad2 I found this tip http://www.gluster.org/community/documentation/index.php/Translators/performance/quick-read
12:32 glusterbot Title: Translators/performance/quick-read - GlusterDocumentation (at www.gluster.org)
12:32 jhonnymad2 where must I define that? On the node, the client, or both?
12:32 sroy_ joined #gluster
12:40 Honghui joined #gluster
12:44 haomaiwang joined #gluster
12:46 DzU andreask, yes. I restarted all the server components (service glusterfs-server restart) on each node
12:48 andreask DzU: hmm ... seeing quite some bugs regarding the self-heal information in RH bugzilla
12:49 DzU andreask, mmm ok. I'm on 3.4.3 server version
12:50 davinder joined #gluster
12:50 partner joined #gluster
12:50 DzU andreask, if you have any link to share I'll be happy to read
12:53 naveed joined #gluster
12:54 andreask DzU: search like "gluster-afr heal" on bugzilla.redhat.com
12:55 sadbox joined #gluster
12:56 Intensity joined #gluster
12:57 haomaiw__ joined #gluster
12:57 ira joined #gluster
13:01 DzU andreask, thanks!
13:04 rahulcs joined #gluster
13:06 plarsen joined #gluster
13:07 hagarth joined #gluster
13:19 calum_ joined #gluster
13:22 dusmant joined #gluster
13:30 primechuck joined #gluster
13:30 primechuck joined #gluster
13:31 bennyturns joined #gluster
13:32 mjsmith2 joined #gluster
13:34 mjsmith2 joined #gluster
13:35 kaptk2 joined #gluster
13:38 kmai007 joined #gluster
13:46 B21956 joined #gluster
13:51 bala joined #gluster
13:54 tdasilva joined #gluster
13:54 failshell joined #gluster
13:55 vpshastry joined #gluster
13:57 vpshastry left #gluster
13:58 triode3 joined #gluster
14:01 dusmant joined #gluster
14:03 triode3 I see 3.5 has been out since April 17, but it is not GA. Should we use it in production?
14:04 kmai007 you should test it before you decide to go prod.
14:04 kmai007 maybe wait for 3.5.1 ?
14:05 triode3 kmai007, my question is twofold. In the past I have often waited, only to find out that gluster version x.y is not backwards compatible with gluster x.n... so I am skeptical about installing 3.4.3 and waiting for 3.5.1
14:14 kmai007 I saw somewhere that it is backwards compatible: 3.5 -> 3.4
14:14 kmai007 but you should ask the pros to be sure
14:14 triode3 kmai007, thanks.
14:16 chirino joined #gluster
14:17 jobewan joined #gluster
14:17 wushudoin joined #gluster
14:21 calum_ joined #gluster
14:27 dbruhn joined #gluster
14:32 haomaiwang joined #gluster
14:35 TvL2386 joined #gluster
14:36 nikk_ joined #gluster
14:39 glusterbot New news from newglusterbugs: [Bug 1094815] [FEAT]: User Serviceable Snapshot <https://bugzilla.redhat.com/show_bug.cgi?id=1094815> || [Bug 1094822] Add documentation for the Feature: User Serviceable snapshot <https://bugzilla.redhat.com/show_bug.cgi?id=1094822>
14:39 TvL2386_ joined #gluster
14:41 [o__o] joined #gluster
14:42 recidive joined #gluster
14:42 hchiramm_ joined #gluster
14:45 chirino joined #gluster
14:49 B21956 joined #gluster
14:55 LoudNoises joined #gluster
15:00 shubhendu joined #gluster
15:00 jag3773 joined #gluster
15:03 sprachgenerator joined #gluster
15:28 scuttle_ joined #gluster
15:30 haomaiwang joined #gluster
15:34 sprachgenerator joined #gluster
15:37 hansd joined #gluster
15:38 vpshastry joined #gluster
15:40 plarsen joined #gluster
15:42 Matthaeus joined #gluster
15:46 vpshastry left #gluster
15:50 marcoceppi joined #gluster
15:58 semiosis anyone know how, if it's possible, to reduce the amount of glustershd's logging?
15:58 semiosis running a full heal on one volume produced a glustershd.log file so big it exhausted the free space on / :(
15:59 dbruhn ouch
16:00 dbruhn could you set log rotate up to rotate hourly temporarily?
16:00 semiosis meh.  would rather just use cron to truncate the file :)
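A hypothetical /etc/crontab entry for that approach, assuming the default log location (adjust the path and interval as needed):

    0 * * * * root truncate -s 0 /var/log/glusterfs/glustershd.log   # hourly truncation of the self-heal daemon log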
16:01 vpshastry1 joined #gluster
16:01 dbruhn lol true
16:01 semiosis afaict all the lines are Debug level
16:02 davinder joined #gluster
16:02 semiosis maybe ,,(undocumented options)
16:02 glusterbot Undocumented options for 3.4: http://www.gluster.org/community/documentation/index.php/Documenting_the_undocumented
16:05 John_HPC joined #gluster
16:07 rwheeler joined #gluster
16:08 davinder joined #gluster
16:09 japuzzo joined #gluster
16:09 glusterbot New news from newglusterbugs: [Bug 1094860] Puppet-Gluster should support building btrfs bricks <https://bugzilla.redhat.com/show_bug.cgi?id=1094860>
16:11 coredump joined #gluster
16:17 Mo__ joined #gluster
16:19 arya joined #gluster
16:19 davinder joined #gluster
16:26 vpshastry1 left #gluster
16:26 bennyturns joined #gluster
16:30 Chewi left #gluster
16:44 anotheral left #gluster
16:49 hchiramm__ joined #gluster
16:49 chirino joined #gluster
16:50 kanagaraj joined #gluster
16:51 somepoortech semiosis: I used diagnostics.brick-log-level Error; there's also diagnostics.client-log-level, see: http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options
16:51 glusterbot Title: Gluster 3.2: Setting Volume Options - GlusterDocumentation (at gluster.org)
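For example (the volume name is an assumption; since glustershd runs the client-side translators, diagnostics.client-log-level may be the one that actually quiets glustershd.log):

    gluster volume set myvol diagnostics.client-log-level ERROR   # fuse clients / self-heal daemon
    gluster volume set myvol diagnostics.brick-log-level ERROR    # brick processes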
16:56 anotheral joined #gluster
17:03 Slashman joined #gluster
17:13 hchiramm_ joined #gluster
17:17 kanagaraj joined #gluster
17:23 zerick joined #gluster
17:46 Scott6 joined #gluster
17:48 chirino joined #gluster
17:51 rahulcs joined #gluster
17:52 zerick joined #gluster
17:56 dusmant joined #gluster
17:59 * semiosis afk for the next 1.5 weeks
18:00 semiosis leave me a message by glusterbot, i'll try to check in from time to time.
18:00 MeatMuppet joined #gluster
18:03 Matthaeus joined #gluster
18:03 purpleidea semiosis: you better be on a beach somewhere :)
18:05 semiosis when one lives on a beach, one vacations in the mountains :)
18:13 dbruhn As long as it involves some rest, that's all that matters. And for a flatlander living through bipolar seasons... it's all about motorcycles
18:14 nishanth joined #gluster
18:14 purpleidea semiosis: damn... so if i live in an igloo, i guess i can vacation just about anywhere!
18:14 dbruhn Don't you already live in Canada? ;)
18:25 purpleidea dbruhn: yeah
18:25 dbruhn I live in MN, so not much different by way of weather
18:26 dbruhn How do people run windows on servers? Even with powershell.... it's still so slow to do some things
18:31 zaitcev joined #gluster
18:32 edoceo are there any recommended options for creating the XFS that make gluster "better"?
18:32 dbruhn inode size of 512
18:35 davinder joined #gluster
18:36 jag3773 joined #gluster
18:40 MeatMuppet joined #gluster
18:45 maduser joined #gluster
18:52 edoceo I'm on RAID6; is there any value for me in figuring out this stripe-alignment stuff for the XFS options?
18:55 dbruhn if your subsystems perform better due to it, gluster will perform better as well.
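A hedged example of what those mkfs.xfs options can look like; the inode size matches the suggestion above, while the device, su and sw values are illustrative and must be derived from the actual RAID6 chunk size and data-disk count:

    # e.g. a 12-disk RAID6 (10 data disks) with a 256k chunk size -- adjust to your array
    mkfs.xfs -f -i size=512 -d su=256k,sw=10 /dev/sdX1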
18:59 DV joined #gluster
19:02 davinder2 joined #gluster
19:04 edward1 joined #gluster
19:15 kanagaraj joined #gluster
19:16 Philambdo joined #gluster
19:26 dusmant joined #gluster
19:34 jag3773 joined #gluster
19:38 purpleidea edoceo: just look at ,,(puppet) it does it automatically for you!
19:38 glusterbot edoceo: https://github.com/purpleidea/puppet-gluster
19:38 purpleidea edoceo: and without these things, i've heard of gluster being 40% slower.
19:38 edoceo thanks!
19:39 maduser joined #gluster
19:42 mjsmith2 joined #gluster
19:44 purpleidea edoceo: yw
19:48 saravanakumar1 joined #gluster
19:48 Matthaeus joined #gluster
20:03 Matthaeus joined #gluster
20:04 gdubreui joined #gluster
20:06 maduser joined #gluster
20:27 calum_ joined #gluster
20:36 gdubreui joined #gluster
20:41 gdubreui joined #gluster
20:54 hchiramm_ joined #gluster
21:20 ilbot3 joined #gluster
21:20 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
21:31 refrainblue joined #gluster
21:33 ktosiek joined #gluster
21:40 jcsp joined #gluster
21:40 andreask joined #gluster
21:47 B21956 joined #gluster
21:59 fidevo joined #gluster
22:13 zerick joined #gluster
22:24 Jonynemonic joined #gluster
22:24 Jonynemonic Hello
22:24 glusterbot Jonynemonic: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
22:25 MrAbaddon joined #gluster
22:25 Jonynemonic How can I improve performance for smaller files with the gluster native client?
22:26 Jonynemonic Is there a suggested vol file for small-read performance?
22:26 Jonynemonic or config?
22:27 [o__o] joined #gluster
22:31 glusterbot New news from resolvedglusterbugs: [Bug 874554] cluster.min-free-disk not having an effect on new files <https://bugzilla.redhat.com/show_bug.cgi?id=874554>
23:00 Matthaeus joined #gluster
23:09 mjsmith2 joined #gluster
23:15 arya joined #gluster
23:16 plarsen joined #gluster
23:27 arya joined #gluster
23:29 arya joined #gluster
23:30 mshadle joined #gluster
23:31 mshadle can gluster 3.3 be upgraded to 3.5 (simple setup, only 2 servers, 2 total volumes) seamlessly?
23:33 arya joined #gluster
23:57 jbrooks joined #gluster
23:57 jbrooks left #gluster
