IRC log for #gluster, 2015-06-19


All times are shown in UTC.

Time Nick Message
00:03 haomaiwang joined #gluster
00:06 sage joined #gluster
00:17 Pupeno joined #gluster
00:25 funfunctor joined #gluster
00:25 funfunctor Hi
00:25 glusterbot funfunctor: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
00:26 funfunctor I'm trying to evaluate glusterfs vs. gfs2+drbd, it's actually pretty non-trivial to think about, i'm looking to bounce some ideas back and forth..
00:27 funfunctor oh angry bot has a regex on Hi
00:40 Pupeno joined #gluster
00:48 jdossey joined #gluster
00:55 atrius joined #gluster
01:01 atrius joined #gluster
01:09 haomaiwa_ joined #gluster
01:13 hagarth joined #gluster
01:35 haomaiwa_ joined #gluster
01:37 chirino joined #gluster
01:41 harish joined #gluster
01:47 nangthang joined #gluster
01:55 chirino joined #gluster
02:03 akay1 hey guys does the ubuntu ppa have the problem that i saw where rebalance doesn't work properly?
02:12 kdhananjay joined #gluster
02:16 bharata-rao joined #gluster
02:23 Twistedgrim joined #gluster
02:46 nangthang joined #gluster
02:52 maveric_amitc_ joined #gluster
03:24 overclk joined #gluster
03:27 spalai joined #gluster
03:28 DV joined #gluster
03:28 DV__ joined #gluster
03:30 spalai left #gluster
03:43 itisravi joined #gluster
03:47 [7] joined #gluster
03:48 funfunctor left #gluster
03:48 gem joined #gluster
03:58 sakshi joined #gluster
04:01 overclk joined #gluster
04:03 TheCthulhu joined #gluster
04:05 poornimag joined #gluster
04:16 nbalacha joined #gluster
04:17 vimal joined #gluster
04:17 atinm joined #gluster
04:23 nbalacha joined #gluster
04:23 ndarshan joined #gluster
04:24 RameshN joined #gluster
04:24 poornimag joined #gluster
04:24 overclk joined #gluster
04:26 shubhendu joined #gluster
04:40 soumya joined #gluster
04:42 anil joined #gluster
04:51 hgowtham joined #gluster
04:51 Humble_ joined #gluster
04:54 zeittunnel joined #gluster
05:08 pppp joined #gluster
05:09 poornimag joined #gluster
05:12 m0zes joined #gluster
05:15 prg3 joined #gluster
05:19 soumya joined #gluster
05:20 meghanam joined #gluster
05:25 jiffin joined #gluster
05:25 kovshenin joined #gluster
05:29 ashiq joined #gluster
05:29 Bhaskarakiran joined #gluster
05:32 Manikandan joined #gluster
05:35 haomaiw__ joined #gluster
05:38 kdhananjay joined #gluster
05:39 spandit joined #gluster
05:49 raghu joined #gluster
05:49 arcolife joined #gluster
05:51 spalai joined #gluster
05:52 atalur joined #gluster
05:54 kotreshhr joined #gluster
05:54 Pupeno joined #gluster
05:55 Bhaskarakiran joined #gluster
05:56 Pupeno joined #gluster
06:00 nbalacha joined #gluster
06:00 kshlm joined #gluster
06:01 Pupeno joined #gluster
06:03 shubhendu joined #gluster
06:10 pppp joined #gluster
06:11 teknologeek joined #gluster
06:11 teknologeek Hi all
06:12 teknologeek I need some help with my glusterfs setup if someone can help
06:13 nbalacha joined #gluster
06:15 spalai joined #gluster
06:15 Saravana joined #gluster
06:18 autoditac joined #gluster
06:29 rp_ joined #gluster
06:30 rjoseph joined #gluster
06:32 Trefex joined #gluster
06:36 rp_ joined #gluster
06:39 spalai joined #gluster
06:41 ramteid joined #gluster
06:43 maveric_amitc_ joined #gluster
06:45 nsoffer joined #gluster
06:46 autoditac joined #gluster
06:48 anrao joined #gluster
06:50 Philambdo joined #gluster
06:51 jtux joined #gluster
06:53 anrao joined #gluster
06:56 side_control tessier: just ask
06:57 SOLDIERz joined #gluster
07:01 Manikandan_ joined #gluster
07:01 gfranx joined #gluster
07:18 joshin joined #gluster
07:19 liquidat joined #gluster
07:20 zeittunnel joined #gluster
07:24 [Enrico] joined #gluster
07:25 fsimonce joined #gluster
07:27 gfranx joined #gluster
07:30 deepakcs joined #gluster
07:33 Manikandan joined #gluster
07:33 Manikandan_ joined #gluster
07:39 al joined #gluster
07:42 Manikandan gem++
07:42 glusterbot Manikandan: gem's karma is now 3
07:49 glusterbot News from newglusterbugs: [Bug 1233559] libglusterfs: avoid crash due to ctx being NULL <https://bugzilla.redhat.com/show_bug.cgi?id=1233559>
07:52 teknologeek joined #gluster
07:52 teknologeek Hello !
07:53 teknologeek I am experiencing an issue with my glusterfs setup.
07:54 teknologeek I use glusterfs 3.7.1 with replica 2 and enable-ino32 volume option
07:54 teknologeek rpcbind is running on my glusterfs servers and the volumes are accessed by NFS
07:55 teknologeek the volume filesystem is XFS
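
For reference, a setup like the one teknologeek describes could be created roughly as follows; host, brick, and volume names are placeholders, not taken from the log:

    # two-node replica 2 volume, exposing 32-bit inode numbers to NFS clients
    gluster volume create myvol replica 2 server1:/bricks/b1 server2:/bricks/b1
    gluster volume set myvol nfs.enable-ino32 on
    gluster volume start myvol
    # on the client, mount via the built-in gluster NFS server (NFSv3)
    mount -t nfs -o vers=3 server1:/myvol /mnt/myvol
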
07:55 teknologeek the problem is that i have got a strange message when using tar on my NFS share
07:56 teknologeek "file changed as we read it"
07:56 ctria joined #gluster
07:57 teknologeek when i try it a few times, once the NFS caches are warm the message disappears
07:57 teknologeek Is there a way to get over this issue ?
08:01 Slashman joined #gluster
08:01 anoopcs teknologeek, Looks like this bug https://bugzilla.redhat.com/show_bug.cgi?id=1212842
08:01 glusterbot Bug 1212842: high, unspecified, ---, bugs, NEW , tar on a glusterfs mount displays "file changed as we read it" even though the file was not changed
08:05 teknologeek almost
08:05 teknologeek i don't have the problem with glusterfs fuse client
08:05 teknologeek i have the problem with NFS client
08:08 anoopcs teknologeek, How often on the NFS client? Because reading through the comments on the bug, what I got is that it doesn't always happen (on a fuse mount).
08:09 teknologeek actually it only happens the first time i try tar
08:09 teknologeek an easy way to reproduce is to drop caches
08:09 teknologeek sync && echo 3 > /proc/sys/vm/drop_caches
08:09 teknologeek then i reproduce every time i call tar
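
Putting the reproduction steps above together (the mount point and paths are placeholders):

    # on the NFS client: flush the page cache, dentries and inodes...
    sync && echo 3 > /proc/sys/vm/drop_caches
    # ...then tar anything on the share; the first run after the flush
    # prints "file changed as we read it"
    tar -cf /tmp/backup.tar -C /mnt/myvol somedir
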
08:11 teknologeek moreover, I can't really understand it, because stat gives me inode times that are not the same as on any of my replicas
08:12 anoopcs teknologeek, Do you have quotas enabled?
08:12 teknologeek actually i don't think so
08:12 teknologeek didn't disable it, but didn't ask for it either, so it may be the default
08:13 anoopcs teknologeek, Ok. The bug explains that it's seen more often when quotas are enabled. Just need to check on that.
08:13 teknologeek checking right now
08:15 teknologeek quota command failed : Quota is disabled, please enable quota
08:15 teknologeek quota is already disabled
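
A quick way to confirm the quota state as checked above (the volume name is a placeholder):

    # fails with "Quota is disabled, please enable quota" when quotas are off
    gluster volume quota myvol list
    # features.quota only appears here if it was ever set explicitly
    gluster volume info myvol | grep -i quota
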
08:15 ndevos teknologeek: maybe see http://www.gluster.org/pipermail/gluster-devel/2014-December/043356.html
08:20 LebedevRI joined #gluster
08:21 kotreshhr left #gluster
08:21 teknologeek it doesn't resolve the problem
08:21 teknologeek it happens only on directories
08:21 teknologeek btw
08:25 ndevos hmm, not sure how it would impact directories...
08:26 teknologeek i really have no clue
08:26 Trefex1 joined #gluster
08:26 teknologeek i am trying all the volume options that i can find but it doesn't help
08:36 teknologeek no more ideas ?
08:41 sysconfig joined #gluster
08:49 glusterbot News from newglusterbugs: [Bug 1200364] longevity: Incorrect log level messages in posix_istat and posix_lookup <https://bugzilla.redhat.com/show_bug.cgi?id=1200364>
08:51 teknologeek i did some strace and it actually seems that the lstat syscalls have strange behaviour, yes
08:51 teknologeek will read further into the bug
08:54 teknologeek the bug has not been fixed yet ?
09:15 spandit joined #gluster
09:15 teknologeek is there a workaround ?
09:18 meghanam_ joined #gluster
09:20 anrao joined #gluster
09:20 ghenry joined #gluster
09:23 atalur joined #gluster
09:23 teknologeek actually it doesn't matter if the volume is replicated or not
09:24 teknologeek the same error happens with no replica
09:25 liquidat joined #gluster
09:25 nbalacha joined #gluster
09:29 nbalacha joined #gluster
09:29 poornimag joined #gluster
09:29 glusterbot News from resolvedglusterbugs: [Bug 1219358] Disperse volume: client crashed while running iozone <https://bugzilla.redhat.com/show_bug.cgi?id=1219358>
09:32 soumya joined #gluster
09:32 deniszh joined #gluster
09:33 ashiq joined #gluster
09:34 ashiq joined #gluster
09:35 nbalacha joined #gluster
09:44 stickyboy joined #gluster
09:49 ndevos teknologeek: you mentioned the issue was with directories, right? directories are replicated on distribute-only volumes too
09:51 teknologeek yeah true
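
One way to see the mismatch teknologeek mentioned is to stat the directory directly on each brick and compare the inode times; the hosts and brick paths here are placeholders:

    for h in server1 server2; do
        echo "== $h =="
        ssh "$h" stat /bricks/b1/somedir   # compare Access/Modify/Change per brick
    done
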
09:53 nbalacha joined #gluster
09:54 sabansal_ joined #gluster
10:01 atalur joined #gluster
10:07 hagarth atinm: kudos on getting 3.7.2 out!
10:09 Manikandan joined #gluster
10:10 atinm hagarth, thanks
10:15 elico joined #gluster
10:20 dusmant joined #gluster
10:20 glusterbot News from newglusterbugs: [Bug 1233624] nfs-ganesha: ganesha-ha.sh --refresh-config not working <https://bugzilla.redhat.com/show_bug.cgi?id=1233624>
10:23 soumya joined #gluster
10:23 meghanam joined #gluster
10:30 glusterbot News from resolvedglusterbugs: [Bug 1192378] Disperse volume: client crashed while running renames with epoll enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1192378>
10:30 Manikandan joined #gluster
10:31 poornimag joined #gluster
10:37 overclk joined #gluster
10:43 soumya joined #gluster
10:45 gfranx joined #gluster
10:46 haomaiwa_ joined #gluster
10:50 glusterbot News from newglusterbugs: [Bug 1233632] Disperse volume: client crashed while running iozone <https://bugzilla.redhat.com/show_bug.cgi?id=1233632>
10:52 hchiramm joined #gluster
10:53 hchiramm 021040
10:53 ndevos oh, let me try your pin, hchiramm
10:53 hchiramm ndevos, good thought :P
10:55 gfranx joined #gluster
11:02 harish joined #gluster
11:04 hagarth joined #gluster
11:06 gfranx joined #gluster
11:07 atalur joined #gluster
11:18 arcolife joined #gluster
11:20 glusterbot News from newglusterbugs: [Bug 1233651] pthread cond and mutex variables of fs struct has to be destroyed conditionally. <https://bugzilla.redhat.com/show_bug.cgi?id=1233651>
11:22 rjoseph joined #gluster
11:33 gfranx joined #gluster
11:34 spalai left #gluster
11:35 atalur joined #gluster
11:43 [Enrico] joined #gluster
11:43 autoditac joined #gluster
11:43 TheSeven joined #gluster
11:44 pppp joined #gluster
11:49 [Enrico] joined #gluster
11:50 Suckervil1e joined #gluster
11:51 Suckervil1e hey. I just set up a first replicated gluster volume for testing, and now the data on the two VMs is different. I haven't been able to find any information about how to troubleshoot such a problem, can you please point me in the right direction?
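
A few first checks for the situation Suckervil1e describes, assuming a plain replica 2 volume (the volume name is a placeholder):

    gluster volume status myvol                 # are both bricks and the self-heal daemon up?
    gluster volume heal myvol info              # entries pending heal on each brick
    gluster volume heal myvol info split-brain  # entries the heal daemon cannot resolve alone
    gluster volume heal myvol full              # kick off a full self-heal
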
11:59 zeittunnel joined #gluster
12:00 jcastill1 joined #gluster
12:00 sakshi joined #gluster
12:03 B21956 joined #gluster
12:04 gem_ joined #gluster
12:04 elico joined #gluster
12:05 jcastillo joined #gluster
12:06 itisravi joined #gluster
12:06 rjoseph joined #gluster
12:15 poornimag joined #gluster
12:22 gem joined #gluster
12:24 kovshenin joined #gluster
12:28 lalatenduM joined #gluster
12:31 haomaiwa_ joined #gluster
12:42 poornimag joined #gluster
12:44 wkf joined #gluster
12:45 hagarth joined #gluster
12:46 unclemarc joined #gluster
12:48 chirino joined #gluster
12:58 kanagaraj joined #gluster
12:58 jyoung joined #gluster
13:01 jyoung joined #gluster
13:01 jrm16020 joined #gluster
13:02 jyoung joined #gluster
13:03 jyoung joined #gluster
13:04 jyoung joined #gluster
13:05 julim joined #gluster
13:05 jyoung joined #gluster
13:06 jyoung joined #gluster
13:07 RameshN joined #gluster
13:08 julim joined #gluster
13:09 jrm16020 joined #gluster
13:10 jrm16020 joined #gluster
13:13 DV__ joined #gluster
13:20 aaronott joined #gluster
13:20 ashiq joined #gluster
13:24 ekuric joined #gluster
13:29 Manikandan joined #gluster
13:29 georgeh-LT2 joined #gluster
13:29 jrm16020 joined #gluster
13:31 rwheeler joined #gluster
13:31 firemanxbr joined #gluster
13:33 dgandhi joined #gluster
13:35 jrm16020 joined #gluster
13:38 jrm16020 joined #gluster
13:53 kanagaraj joined #gluster
13:58 plarsen joined #gluster
13:58 shaunm joined #gluster
13:59 soumya joined #gluster
14:07 theron joined #gluster
14:12 elico joined #gluster
14:18 abrt joined #gluster
14:20 atinm joined #gluster
14:44 shaunm joined #gluster
14:51 krink joined #gluster
14:59 jdossey joined #gluster
15:01 cholcombe joined #gluster
15:01 hagarth joined #gluster
15:11 B21956 left #gluster
15:11 hchiramm joined #gluster
15:12 B21956 joined #gluster
15:18 ira joined #gluster
15:31 glusterbot News from resolvedglusterbugs: [Bug 1066511] Enhancement - glusterd should be chkconfig-ed on <https://bugzilla.redhat.com/show_bug.cgi?id=1066511>
15:31 glusterbot News from resolvedglusterbugs: [Bug 1159221] io-stats may crash the brick when loc->path is NULL in some fops <https://bugzilla.redhat.com/show_bug.cgi?id=1159221>
15:39 gfranx joined #gluster
15:47 cholcombe joined #gluster
15:52 bene2 joined #gluster
16:01 glusterbot News from resolvedglusterbugs: [Bug 884597] dht linkfile are created with different owner:group than that source(data) file in few cases <https://bugzilla.redhat.com/show_bug.cgi?id=884597>
16:13 rwheeler joined #gluster
16:20 Intensity joined #gluster
16:23 autoditac joined #gluster
16:29 bennyturns joined #gluster
16:32 nangthang joined #gluster
16:38 lkoranda joined #gluster
16:39 nsoffer joined #gluster
16:42 jdossey joined #gluster
16:45 Rapture joined #gluster
17:04 lkoranda joined #gluster
17:10 lkoranda joined #gluster
17:10 rotbeard joined #gluster
17:29 diegows joined #gluster
17:30 pppp joined #gluster
17:34 TvL2386 joined #gluster
17:47 gfranx joined #gluster
17:48 haomaiwang joined #gluster
18:08 ashiq joined #gluster
18:16 cuqa_ joined #gluster
18:17 ashiq joined #gluster
18:19 jiffin joined #gluster
18:31 ndk joined #gluster
18:51 diegows joined #gluster
18:59 smohan joined #gluster
19:01 TheSeven joined #gluster
19:24 gfranx joined #gluster
19:51 jrm16020 joined #gluster
19:55 dbruhn joined #gluster
20:26 DV joined #gluster
20:27 DV__ joined #gluster
20:35 TheSeven hm, looks like http://download.gluster.org/pub/gluster/glusterfs/samba/CentOS/epel-7/x86_64/ is missing quite a few packages...
20:35 TheSeven is there a reason for that (build failure?) or is that just a glitch?
20:37 TheSeven so where could I get a gluster-enabled samba 4.2 build for centos 7.1?
20:52 plarsen joined #gluster
20:58 frostyfrog joined #gluster
20:59 frostyfrog Weee, breaking stuff trying to figure out things! xp
21:00 frostyfrog Does anyone know how to list all of the directories in a brick?
21:06 TheSeven frostyfrog: IIUC every directory is in every brick, just the files within those dirs are distributed
21:10 frostyfrog Ah that makes sense. Then I guess what I was asking was... If I have a Distributed-Replicate volume, can I get a mapping for where each directory replicates to?
21:12 TheSeven frostyfrog: a distribute-replicate volume is basically a distribute volume consisting of a set of replicate volumes
21:12 TheSeven if you have N replicas, every N consecutive bricks will form one replicate subvolume of the distribute volume
21:12 TheSeven i.e. every N consecutive bricks will have the same data
21:13 TheSeven which of those sets a file ends up on depends on its path and file name
21:14 frostyfrog TheSeven:  Ah, so if I have a replicate of 3, the first 3 bricks are replicated, then the next 3, and so on. Am I correct?
21:15 TheSeven yes
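
To illustrate the ordering TheSeven describes (names are placeholders): with replica 3, the first three bricks listed at create time form one replica set and the next three form the second.

    gluster volume create demo replica 3 \
        s1:/bricks/b1 s2:/bricks/b1 s3:/bricks/b1 \
        s1:/bricks/b2 s2:/bricks/b2 s3:/bricks/b2
    # gluster volume info then reports "Number of Bricks: 2 x 3 = 6":
    # two distribute subvolumes of three replicated bricks each
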
21:15 frostyfrog Thank you very much TheSeven. That helps a lot! :D
21:15 * frostyfrog goes to try to repair their broken cluster.
21:21 octaviovelasco joined #gluster
21:21 lexi2 joined #gluster
21:34 frostyfrog Aha! Interesting. :) If I try to rebalance my bricks when "Number of Bricks: 2 x 2 = 5", then glusterd will continually crash as it tries to rebalance when it starts up. Good to know.
21:36 TheSeven heh, that seems like a weird state indeed ;) were you in the process of adding/removing bricks when that happened?
21:38 frostyfrog When I tried adding a brick, it failed on the second node but didn't revert its current progress. So I ended up with 5 bricks where the replication count was 2. I was messing around trying to remove the extra brick so I could try again.
21:39 TheSeven yeah, that's about the only way that I could imagine how it could end up with that replica count
21:39 TheSeven s/replica/brick/
21:39 glusterbot What TheSeven meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
21:39 TheSeven glusterbot--
21:39 glusterbot TheSeven: glusterbot's karma is now 7
21:40 TheSeven does the bad brick contain any data already?
21:42 frostyfrog Doesn't look like it did. Only the .glusterfs stuff that had been there (and that I keep removing while I try to add this set of bricks)
21:42 TheSeven frostyfrog: I'd be tempted to just kill gluster and remove that bad brick from the volume definition by hand then
21:42 TheSeven not sure if that's good advice though, I'm not really experienced with this kind of stuff yet ;)
21:43 frostyfrog I was somehow able to get it cleaned up when I ran: service glusterd start; gluster volume rebalance TestVolume stop
21:44 frostyfrog The quick succession of commands got it fixed.
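
A more deliberate cleanup for a dangling brick left behind by a failed add-brick might look like the sketch below; the brick path is a placeholder, and force only makes sense here because the stray brick held no data:

    gluster volume remove-brick TestVolume badnode:/bricks/extra force
    gluster volume info TestVolume   # confirm "Number of Bricks" is consistent again
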
21:44 TheSeven you probably caught it before it actually fired up the rebalance
21:45 frostyfrog That's my assumption. :)
21:48 wkf joined #gluster
21:50 * frostyfrog is just barely starting to play with gluster and is trying out real-life scenarios. So you've been a big help in my debugging efforts TheSeven. Thank you :D
21:50 * TheSeven was in that kind of situation (and partially still is) a week ago
21:59 jdossey joined #gluster
22:13 ctria joined #gluster
22:34 smoothbutta joined #gluster
22:38 PatNarciso so... if I wanted to take a brick offline, without using remove-brick for the redistribute process... and instead use rsync to deliver the files into an existing brick... is there a process (rebalance? fix-layout?) that would identify the changes?
22:41 PatNarciso (distributed volume)
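
For reference, the commands PatNarciso is asking about (the volume name is a placeholder); whether they pick up files rsynced straight into a brick behind gluster's back is exactly the open question, so this is a sketch rather than a recommendation:

    gluster volume rebalance myvol fix-layout start   # recompute directory layouts only
    gluster volume rebalance myvol start              # full rebalance, also migrates files
    gluster volume rebalance myvol status
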
22:47 Rapture joined #gluster
23:05 stickyboy joined #gluster
23:13 nsoffer joined #gluster
23:21 marcoceppi joined #gluster
23:22 siel joined #gluster
23:24 pjschmitt joined #gluster
23:25 stickyboy joined #gluster
23:31 jermudgeon joined #gluster
23:35 virusuy joined #gluster
