
IRC log for #gluster, 2014-11-03


All times shown according to UTC.

Time Nick Message
00:25 Gugge joined #gluster
00:52 Guest5348 joined #gluster
00:55 rajesh joined #gluster
01:01 Guest58542 joined #gluster
01:39 harish joined #gluster
02:05 coreping joined #gluster
02:10 overclk joined #gluster
02:38 dmyers joined #gluster
03:05 bharata-rao joined #gluster
03:43 vimal joined #gluster
03:44 RameshN joined #gluster
03:44 kanagaraj joined #gluster
03:46 meghanam_ joined #gluster
03:46 meghanam joined #gluster
03:47 kdhananjay joined #gluster
03:57 itisravi joined #gluster
04:03 nbalachandran joined #gluster
04:03 ppai joined #gluster
04:05 RameshN joined #gluster
04:10 meghanam_ joined #gluster
04:10 meghanam joined #gluster
04:14 atinmu joined #gluster
04:16 JijoJohn joined #gluster
04:20 hagarth joined #gluster
04:22 overclk joined #gluster
04:36 hchiramm_ joined #gluster
04:37 jiffin joined #gluster
04:39 Guest58542 joined #gluster
04:40 anoopcs joined #gluster
04:44 nishanth joined #gluster
04:47 bala joined #gluster
04:54 smohan joined #gluster
04:56 soumya joined #gluster
05:03 lalatenduM joined #gluster
05:03 dusmant joined #gluster
05:06 ndarshan joined #gluster
05:07 meghanam_ joined #gluster
05:07 meghanam joined #gluster
05:10 spandit joined #gluster
05:11 anoopcs joined #gluster
05:14 atinmu joined #gluster
05:16 Guest58542 joined #gluster
05:17 prasanth_ joined #gluster
05:28 atalur joined #gluster
05:33 glusterbot New news from newglusterbugs: [Bug 1152957] arequal-checksum mismatch between before and after successful heal on a replaced disk <https://bugzilla.redhat.com/show_bug.cgi?id=1152957>
05:36 Guest58542 joined #gluster
05:37 anoopcs joined #gluster
05:38 overclk joined #gluster
05:47 sahina joined #gluster
05:49 saurabh joined #gluster
05:50 ramteid joined #gluster
05:53 Anuradha joined #gluster
05:55 shubhendu joined #gluster
05:58 calisto joined #gluster
06:02 aravindavk joined #gluster
06:03 karnan joined #gluster
06:06 nshaikh joined #gluster
06:10 soumya joined #gluster
06:11 atalur joined #gluster
06:13 haomaiwang joined #gluster
06:19 shubhendu joined #gluster
06:19 Guest58542 joined #gluster
06:23 shubhendu_ joined #gluster
06:27 msmith_ joined #gluster
06:27 atinmu joined #gluster
06:28 atalur joined #gluster
06:31 shubhendu joined #gluster
06:36 shubhendu_ joined #gluster
06:37 ppai joined #gluster
06:37 shubhendu__ joined #gluster
06:54 shubhendu_ joined #gluster
06:58 kumar joined #gluster
06:59 Arrfab joined #gluster
07:00 nishanth joined #gluster
07:02 Guest58542 joined #gluster
07:03 soumya joined #gluster
07:08 gobbe left #gluster
07:10 dark_lord joined #gluster
07:16 atinmu joined #gluster
07:17 ctria joined #gluster
07:20 soumya joined #gluster
07:33 atinmu joined #gluster
07:33 Philambdo joined #gluster
07:34 Guest58542 joined #gluster
07:40 Fen2 joined #gluster
07:41 Slydder joined #gluster
07:42 rafi1 joined #gluster
07:44 nishanth joined #gluster
07:51 ppai joined #gluster
08:01 mariusp joined #gluster
08:09 shubhendu joined #gluster
08:16 msmith_ joined #gluster
08:20 deniszh joined #gluster
08:25 rgustafs joined #gluster
08:26 shubhendu_ joined #gluster
08:28 smohan_ joined #gluster
08:29 ricky-ticky joined #gluster
08:29 cultav1x joined #gluster
08:29 cultav1x gooood morning gluster friends
08:34 TvL2386 joined #gluster
08:34 rgustafs joined #gluster
08:38 aravindavk joined #gluster
08:44 hagarth joined #gluster
08:44 vikumar joined #gluster
08:47 shubhendu__ joined #gluster
08:59 Thilam joined #gluster
09:05 liquidat joined #gluster
09:19 kdhananjay joined #gluster
09:21 shubhendu joined #gluster
09:29 shubhendu_ joined #gluster
09:32 nishanth joined #gluster
09:33 Philambdo joined #gluster
09:33 Philambdo joined #gluster
09:51 mbukatov joined #gluster
09:52 dark_lord joined #gluster
09:59 shubhendu_ joined #gluster
10:05 msmith_ joined #gluster
10:05 haomaiwa_ joined #gluster
10:06 partner cluster.min-free-disk - should that prevent writing to a brick once the value is met? at least earlier it used to "redirect" writes to bricks which had free space available above the limit
10:08 Slashman joined #gluster
10:09 harish joined #gluster
10:09 partner now it seems all the bricks have met the limit and all have gone pretty much equally below the value. not sure if this is a bug or a feature or what (this is 3.4.5 on wheezy)
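
The option partner refers to is cluster.min-free-disk. A minimal sketch of setting and inspecting it, with a hypothetical volume name and brick path:

    # reserve free space per brick (a percentage of brick size, or an absolute size)
    gluster volume set myvol cluster.min-free-disk 10%
    # confirm the option is recorded
    gluster volume info myvol | grep min-free-disk
    # compare against actual per-brick usage
    df -h /bricks/brick1
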
10:10 Nowaker joined #gluster
10:19 Guest58542 joined #gluster
10:24 hagarth joined #gluster
10:25 rgustafs joined #gluster
10:39 SOLDIERz joined #gluster
10:40 SOLDIERz hey everyone, does anybody have some good tips about performance tuning on glusterfs?
10:40 SOLDIERz I'm working with small files, just for the info, so 1-8MB files
10:41 ricky-ticky1 joined #gluster
10:42 aravindavk joined #gluster
10:44 smohan joined #gluster
10:44 smohan_ joined #gluster
10:50 mariusp joined #gluster
10:57 SOLDIERz joined #gluster
10:57 ctria joined #gluster
10:58 Philambdo joined #gluster
11:00 hchiramm__ joined #gluster
11:05 glusterbot New news from newglusterbugs: [Bug 1157223] nfs mount via symbolic link does not work <https://bugzilla.redhat.com/show_bug.cgi?id=1157223>
11:06 Guest58542 joined #gluster
11:08 virusuy joined #gluster
11:08 virusuy joined #gluster
11:13 nshaikh joined #gluster
11:17 dark_lord joined #gluster
11:18 shubhendu__ joined #gluster
11:20 haomaiwa_ joined #gluster
11:21 meghanam_ joined #gluster
11:21 meghanam joined #gluster
11:21 LebedevRI joined #gluster
11:25 bala joined #gluster
11:42 SOLDIERz is there a way to make glusterfs parameters permanent
11:42 SOLDIERz so they are also present after a reboot?
11:43 ndevos SOLDIERz: volume options should all be permanent? or what parameters are you talking about?
11:43 SOLDIERz if i say gluster volume set gv0 <param> <value>
11:44 SOLDIERz it will not survive a reboot or a failure of all nodes right?
11:45 ndevos that option should be permanent, after a reboot it should still be set, even after rebooting all servers at once
11:46 SOLDIERz oh okay then thx
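
To illustrate ndevos's answer: options set with "gluster volume set" are recorded in glusterd's on-disk volume metadata, which is how they survive reboots. A sketch, reusing gv0 from the conversation and assuming the default glusterd state directory of the 3.x packages:

    gluster volume set gv0 nfs.disable on
    gluster volume info gv0    # shows the option under "Options Reconfigured"
    # the same setting is persisted server-side:
    grep nfs.disable /var/lib/glusterd/vols/gv0/info
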
11:46 meghanam joined #gluster
11:46 meghanam_ joined #gluster
11:47 doekia joined #gluster
11:47 soumya_ joined #gluster
11:53 msmith_ joined #gluster
11:55 calum_ joined #gluster
12:14 B21956 joined #gluster
12:20 itisravi joined #gluster
12:20 krullie joined #gluster
12:23 krullie How is everyone installing their gluster setup? I'm using the EPEL packages on CentOS 6, but there is no service file for the glusterfsd daemon, and if I run it directly it asks for a volfile. The docs are quite confusing, saying that setting up volfiles is an advanced developer topic. If so, how are admins supposed to set it all up?
12:25 chirino joined #gluster
12:28 haomaiwa_ joined #gluster
12:38 ndevos krullie: you do not start glusterfsd from a service script, glusterd does that for you
12:38 krullie glusterd also does not exist, ndevos
12:39 ndevos krullie: have you installed the glusterfs-service package?
12:39 ndevos uh, glusterfs-server
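
For context: glusterd, shipped in the glusterfs-server package, is the only daemon an admin starts; it generates the volfiles itself and spawns one glusterfsd process per brick. A sketch on CentOS 6, with a hypothetical volume name:

    service glusterd start
    gluster volume start myvol
    # glusterd has now launched the brick processes, volfiles included:
    ps ax | grep glusterfsd
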
12:40 krullie ndevos: that package is not installable from the epel repo
12:40 krullie installable = available
12:42 krullie ndevos: looks like it exists in this repo file http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
12:43 krullie but that package has conflicts
12:44 krullie yum output: https://gist.github.com/Rio/a536cf5da55ba0df0784
12:44 glusterbot Title: glusterfs install on centos6 (at gist.github.com)
12:44 SOLDIERz joined #gluster
12:44 hagarth joined #gluster
12:46 ppai joined #gluster
12:47 ndevos krullie: right, the current RHEL/CentOS packages have a higher version than what is available in our upstream :-/
12:47 krullie interesting...
12:48 ndevos we're hoping to fix that soon... http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019333.html
12:48 glusterbot Title: [Gluster-users] RHEL6.6 provides Gluster 3.6.0-28.2 ? (at supercolony.gluster.org)
12:48 krullie so... how do I deploy then?
12:49 ndevos krullie: so, you need to uninstall/blacklist the packages from CentOS, and install the ones from the community repository
12:50 ndevos krullie: something like "exclude=glusterfs*"
12:51 ndevos in yum.conf, and add "includepkgs=glusterfs*" in the gluster.repo file
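
A sketch of the arrangement ndevos describes; the section name follows the upstream glusterfs-epel.repo file:

    # /etc/yum.conf -- never take glusterfs packages from the distro repos
    exclude=glusterfs*

    # /etc/yum.repos.d/glusterfs-epel.repo -- take them only from here
    [glusterfs-epel]
    # (keep the existing baseurl/gpgkey lines)
    includepkgs=glusterfs*
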
12:52 krullie ndevos: ah right, thanks. will play with it some more. Could you point me to a good/complete working version of the docs as well? half the links from gluster.org are dead, so I keep switching to community docs and back. It looks like quite a mess really...
12:53 ndevos krullie: there are some guys working on getting the website in order and put the documentation up there - personally I mostly use the rhs documentation on access.redhat.com
12:54 krullie ndevos: thanks, didn't know redhat also served them. will check them out
12:57 ndevos krullie: yeah, red hat storage is based on glusterfs and the docs are pretty good - unfortunately those docs are not available on gluster.org yet
12:58 social joined #gluster
13:00 krullie ndevos: that's nice. I hope everything will get straightened out. We are running a 2-node distributed replication setup, and one node got a high load because of a hanging rsync, making it unresponsive. However, the load balancing to node2 didn't work, so everything stalled. I've got to figure out what went wrong with the setup; that's why I'm building a network with docker containers/VMs.
13:01 edward1 joined #gluster
13:02 Fen1 joined #gluster
13:04 bala joined #gluster
13:06 Thilam hi guys, I have a question about the structure of the .glusterfs folder present in each brick of glusterfs
13:07 Thilam in a brick, I have 2 million files
13:07 Thilam and 1 million in the .glusterfs folder
13:07 Thilam is it normal?
13:07 _Bryan_ joined #gluster
13:08 Thilam when I'm performing a backup of the brick, where there is 1.5TB (df -h shows this amount)
13:08 Thilam the backup lists 3TB to back up
13:08 Thilam (the backup software)
13:09 ndevos Thilam: .glusterfs/ contains hard-links to access contents of a file by gfid or filename - the .glusterfs/ directory is used for gfid access
13:10 Thilam oh
13:10 Thilam so this behaviour is totally normal
13:10 ndevos counting files yes, a backup solution that does not know how to handle hard-links less so
13:10 diegows joined #gluster
13:11 Thilam It's not a well-known backup solution, but it works well, except on that =)
13:11 Thilam thx ndevos
13:11 ndevos :)
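
To make the hard-link layout concrete: each regular file's trusted.gfid xattr names a second hard link under .glusterfs/, so both names share one inode, and backup tools that do not detect hard links count the data twice. A sketch on a brick, with illustrative paths:

    # read the file's gfid
    getfattr -n trusted.gfid -e hex /bricks/b1/dir/file.txt
    # its twin lives at .glusterfs/<hex 1-2>/<hex 3-4>/<full gfid>;
    # the link count is 2 and the inode numbers match:
    stat -c '%h %i %n' /bricks/b1/dir/file.txt
    find /bricks/b1/.glusterfs -samefile /bricks/b1/dir/file.txt
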
13:14 B21956 joined #gluster
13:18 SOLDIERz joined #gluster
13:22 virusuy morning
13:22 Thilam lalatenduM, do you have an idea of when the Debian packages for version 3.5.3 will be released?
13:24 lalatenduM Thilam, you mean 3.5.3beta2?
13:25 Thilam I don't know
13:25 Thilam at this time I'm using the 3.5.2-4 deb packages
13:25 Thilam and I'm waiting for fixes that are integrated in version 3.5.3
13:26 hollaus joined #gluster
13:27 Thilam I've stopped my migration to my new cluster because of this bug, so I'd just like to have an idea of when I'll be able to go forward
13:28 JustinClift Probably a good idea. :)
13:28 JustinClift Frustrating, but yeah, best to wait
13:29 lalatenduM Thilam, 3.5.3 is not released till now. it is beta2 now
13:32 lalatenduM Thilam, ndevos is the maintainer for the 3.5 branch, he would have a better idea when 3.5.3 is going to be released?
13:32 Thilam oh, maybe I've misunderstood some discussions, because I thought it was planned for 2 weeks ago
13:32 lalatenduM s/?//
13:32 glusterbot lalatenduM: Error: u's/?// Thilam, ndevos is the maintainer for the 3.5 branch, he would have a better idea when 3.5.3 is going to be released?' is not a valid regular expression.
13:32 bene joined #gluster
13:36 social joined #gluster
13:38 virusuy gents, quota implementation in gluster isn't per-user, right
13:38 virusuy i mean, i cannot set quotas per user in gluster
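
For reference, gluster's quota feature is indeed per directory rather than per user; a sketch with hypothetical names:

    gluster volume quota myvol enable
    gluster volume quota myvol limit-usage /projects/alice 10GB
    gluster volume quota myvol list
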
13:38 kkeithley 3.4.6beta2 and 3.5.3beta2 were released on Friday. IIRC the plan is to release GA after a week of testing. That presumes nothing is found in testing that would warrant a beta3 release.
13:38 khoj92 joined #gluster
13:38 ndevos Thilam, lalatenduM: 3.5.3 beta2 should be available soon, I hope to have a final 3.5.3 later this week, or early next one
13:39 kkeithley <hint>get testing</hint>
13:39 ndevos yes, THAT ^^ hint
13:39 kkeithley ndevos++
13:39 glusterbot kkeithley: ndevos's karma is now 5
13:40 ndevos kkeithley++
13:40 glusterbot ndevos: kkeithley's karma is now 20
13:40 kkeithley where'd all your karma go?
13:40 ndevos just wondering about that too... maybe we have different karma levels in different channels?
13:40 kkeithley I believe that is correct, but I still thought you had more here.
13:41 ndevos @karma ndevos
13:41 glusterbot ndevos: Karma for "ndevos" has been increased 5 times and decreased 0 times for a total karma of 5.
13:41 Thilam ndevos, kkeithley, thx a lot for your answers
13:41 ndevos does not look like someone -- me...
13:42 kkeithley nope
13:42 meghanam joined #gluster
13:43 meghanam_ joined #gluster
13:46 khoj1994 joined #gluster
13:50 mojibake joined #gluster
13:55 theron joined #gluster
14:05 glusterbot New news from newglusterbugs: [Bug 1159840] [USS]: creating file/directories under .snaps shows wrong error message <https://bugzilla.redhat.com/show_bug.cgi?id=1159840>
14:08 John_HPC joined #gluster
14:10 mojibake joined #gluster
14:10 bennyturns joined #gluster
14:12 mojibake joined #gluster
14:18 msmith_ joined #gluster
14:19 SOLDIERz joined #gluster
14:20 msmith_ joined #gluster
14:25 dark_lord joined #gluster
14:33 John_HPC I have some "heal" issues. http://paste.ubuntu.com/8803392/
14:33 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
14:33 John_HPC No matter how many full heals I do or find the files, gluster02 always reports those gfids
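
One common way to map a gfid reported by heal info back to a filename: for regular files the .glusterfs entry is a hard link, so another name on the brick shares its inode. A sketch with hypothetical volume and brick paths; <gfid> stands for the reported id and aa/bb for its first two hex pairs:

    gluster volume heal myvol info
    # locate the other name sharing the gfid entry's inode on the brick:
    find /bricks/b1 -samefile "/bricks/b1/.glusterfs/aa/bb/<gfid>" \
        -not -path '*/.glusterfs/*'
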
14:36 soumya_ joined #gluster
14:39 davemc joined #gluster
14:41 bennyturns joined #gluster
14:41 coredump joined #gluster
14:44 dmyers joined #gluster
14:44 dmyers joined #gluster
14:45 jmarley joined #gluster
15:02 meghanam_ joined #gluster
15:02 meghanam joined #gluster
15:08 RameshN joined #gluster
15:09 jobewan joined #gluster
15:15 RaSTarl joined #gluster
15:25 davemc joined #gluster
15:25 21WAAFAF6 joined #gluster
15:27 Guest58542 joined #gluster
15:28 jiffin joined #gluster
15:30 _dist joined #gluster
15:33 kumar joined #gluster
15:33 Bardack joined #gluster
15:38 bala1 joined #gluster
15:39 bene joined #gluster
15:40 afics joined #gluster
15:42 DJClean joined #gluster
15:45 siel joined #gluster
15:46 calisto joined #gluster
15:54 jiffin joined #gluster
16:01 lmickh joined #gluster
16:01 nbalachandran joined #gluster
16:05 Fen1 @karma Fen1
16:05 glusterbot Fen1: Fen1 has neutral karma.
16:05 Fen1 Fen1++
16:05 glusterbot Fen1: Error: You're not allowed to adjust your own karma.
16:06 johnmark ha
16:08 nishanth joined #gluster
16:12 ndevos @karma coffee
16:12 glusterbot ndevos: coffee has neutral karma.
16:12 ndevos coffee++
16:12 glusterbot ndevos: coffee's karma is now 1
16:16 mojibake joined #gluster
16:18 RameshN joined #gluster
16:21 glusterbot New news from resolvedglusterbugs: [Bug 913699] Conservative merge fails on client3_1_mknod_cbk <https://bugzilla.redhat.com/show_bug.cgi?id=913699>
16:25 cfeller joined #gluster
16:25 jiffin left #gluster
16:25 jiffin joined #gluster
16:27 plarsen joined #gluster
16:28 mojibake joined #gluster
16:30 RameshN_ joined #gluster
16:33 virusuy guys, has somebody made a web interface to admin gluster's quotas?
16:37 haomaiwang joined #gluster
16:54 semiosis virusuy: no
16:54 virusuy semiosis: :( , well time to be the first one :-)
16:55 hagarth joined #gluster
16:56 cultav1x joined #gluster
17:09 meghanam joined #gluster
17:09 meghanam_ joined #gluster
17:12 ultrabizweb joined #gluster
17:13 DV joined #gluster
17:16 JoeJulian virusuy: When you're ready, consider putting your web interface on forge.gluster.org
17:16 mbukatov joined #gluster
17:17 virusuy JoeJulian: thanks! my boss asked me if there is something like that already done, but i think i'll take some time to code it myself (obviously i'll post it on forge.gluster.org)
17:38 calisto joined #gluster
17:42 n-st joined #gluster
17:46 hchiramm joined #gluster
17:46 msmith_ joined #gluster
17:46 hchiramm joined #gluster
17:47 hchiramm joined #gluster
17:48 lalatenduM joined #gluster
17:50 haomaiwang joined #gluster
17:54 jvandewege joined #gluster
17:59 hchiramm joined #gluster
18:10 xrubbit joined #gluster
18:18 hchiramm joined #gluster
18:22 mojibake joined #gluster
18:30 mariusp joined #gluster
18:32 jmarley joined #gluster
18:36 glusterbot New news from newglusterbugs: [Bug 1159968] glusterfs.spec.in: deprecate *.logrotate files in dist-git in favor of the upstream logrotate files <https://bugzilla.redhat.com/show_bug.cgi?id=1159968>
18:40 mojibake JoeJulian: Can you tell me what "forge.gluster.org" is supposed to provide, or how it enhances what is available at www.gluster.org?
18:42 MugginsM joined #gluster
18:43 mojibake NVM...It always helps to read the about.. https://forge.gluster.org/about
18:43 glusterbot Title: About - Gluster Community Forge (at forge.gluster.org)
18:48 mariusp joined #gluster
18:49 mariusp joined #gluster
18:58 marbu joined #gluster
18:59 theron joined #gluster
19:01 nueces joined #gluster
19:07 glusterbot New news from newglusterbugs: [Bug 1159970] glusterfs.spec.in: deprecate *.logrotate files in dist-git in favor of the upstream logrotate files <https://bugzilla.redhat.com/show_bug.cgi?id=1159970>
19:11 mojibake joined #gluster
19:13 sjohnsen 19
19:14 sjohnsen woops
19:15 rotbeard joined #gluster
19:17 gomikemike can i have multiple dirs on a volume and mount the dirs instead of the volume?
19:18 semiosis gomikemike: not with the fuse client, but maybe with an nfs client
19:19 gomikemike urgh
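
A sketch of the NFS route semiosis mentions: gluster's built-in NFSv3 server can export a subdirectory of a volume. Names are hypothetical, and depending on the version the nfs.export-dir option may be needed first:

    gluster volume set myvol nfs.export-dir /dir1
    mount -t nfs -o vers=3,mountproto=tcp server1:/myvol/dir1 /mnt/dir1
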
19:26 mojibake joined #gluster
19:30 chirino joined #gluster
19:30 rsanchez joined #gluster
19:33 chirino joined #gluster
19:36 failshell joined #gluster
19:39 clutchk joined #gluster
19:45 hchiramm joined #gluster
19:47 rsanchez joined #gluster
19:50 rsanchez joined #gluster
19:50 rshott joined #gluster
19:52 MugginsM ok I'm seeing lots of duplicate files from the client, with the ---------T perms. link files leaking through?
19:52 glusterbot MugginsM: -------'s karma is now -2
19:52 * MugginsM glares at glusterbot
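
The ---------T entries are DHT link files, zero-length sticky-bit pointers left behind by renames and rebalances; they normally stay hidden from clients, so seeing them usually points at an interrupted rebalance. A sketch of listing them on a brick, with a hypothetical brick path:

    find /bricks/b1 -type f -perm 1000 -size 0 \
        -exec getfattr --absolute-names -n trusted.glusterfs.dht.linkto {} \;
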
20:02 nueces joined #gluster
20:03 chirino_m joined #gluster
20:06 MugginsM argh, I think rebalance is eating our files :(
20:16 plarsen joined #gluster
20:27 xrubbit joined #gluster
20:30 MugginsO joined #gluster
20:32 claytonk joined #gluster
20:33 claytonk I am troubleshooting a glusterfs installation on Ubuntu 14.04.
20:34 claytonk copying a 156MB git repo with a gluster mount takes 2.5 minutes
20:35 claytonk it takes less than a second to copy the repo directly on one of the nodes hosting the bricks
20:37 claytonk here's the replica config   http://paste.ubuntu.com/8807871/
20:37 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
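
One way to see where those 2.5 minutes go is gluster's built-in profiler, which reports per-brick counts and latencies by operation type; a sketch with a hypothetical volume name and mount point:

    gluster volume profile myvol start
    cp -a repo /mnt/glustermount/       # reproduce the slow copy
    gluster volume profile myvol info   # LOOKUP/CREATE/WRITE latencies per brick
    gluster volume profile myvol stop
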
20:54 theron joined #gluster
21:05 rsanchez joined #gluster
21:12 JonathanD joined #gluster
21:25 mariusp joined #gluster
21:28 side_control joined #gluster
21:44 hollaus joined #gluster
21:45 rsanchez joined #gluster
22:34 badone joined #gluster
22:34 MugginsM I have a lot of link files showing up on the client as duplicates, how should I fix this?
22:44 firemanxbr joined #gluster
22:51 calisto1 joined #gluster
22:55 nage joined #gluster
23:03 marbu joined #gluster
23:16 MugginsM I don't think I'm ever going to do a rebalance again
23:16 haomaiwa_ joined #gluster
23:16 MugginsM each time, on 3.2, 3.3, and 3.4, it has led to data loss and lots of errors
23:17 MugginsM and days of trying to fix them manually
23:17 * MugginsM sighs
23:17 MugginsM I do like gluster, but wow I've had a bad time with it
23:17 hollaus joined #gluster
23:18 semiosis MugginsM: i've never felt comfortable with rebalance
23:18 semiosis too risky
23:20 MugginsM thing is without it, add-brick is not all that useful
23:21 MugginsM I wish distributed filesystems weren't so incredibly hard to make work :)
23:21 MugginsM Hard Problems
23:37 rsanchez joined #gluster
