IRC log for #gluster, 2014-07-04


All times shown according to UTC.

Time Nick Message
00:27 gildub joined #gluster
00:34 Norky joined #gluster
00:45 mortuar joined #gluster
00:52 mortuar_ joined #gluster
01:00 gmcwhistler joined #gluster
01:10 tjikkun_ joined #gluster
01:23 harish_ joined #gluster
01:26 mortuar joined #gluster
01:34 mortuar_ joined #gluster
01:59 gildub joined #gluster
02:02 gildub joined #gluster
02:27 PLATOSCAVE joined #gluster
02:43 PLATOSCAVE joined #gluster
02:57 PLATOSCAVE joined #gluster
03:09 PLATOSCAVE joined #gluster
03:17 bharata-rao joined #gluster
03:23 kshlm joined #gluster
03:26 PLATOSCAVE joined #gluster
03:32 kshlm joined #gluster
03:35 coredump joined #gluster
03:42 PLATOSCAVE joined #gluster
04:05 SFLimey joined #gluster
04:06 saurabh joined #gluster
04:08 itisravi joined #gluster
04:10 kanagaraj joined #gluster
04:12 RameshN_ joined #gluster
04:28 dusmant joined #gluster
04:28 rejy joined #gluster
04:37 ndarshan joined #gluster
04:44 glusterbot New news from newglusterbugs: [Bug 1116236] [DHT-REBALANCE]: Few files are missing after add-brick and rebalance <https://bugzilla.redhat.com/show_bug.cgi?id=1116236>
04:47 nishanth joined #gluster
04:52 kdhananjay joined #gluster
05:05 vkoppad joined #gluster
05:11 prasanthp joined #gluster
05:13 PLATOSCAVE joined #gluster
05:14 nbalachandran joined #gluster
05:17 ramteid joined #gluster
05:17 rjoseph joined #gluster
05:20 lalatenduM joined #gluster
05:30 ppai joined #gluster
05:32 PLATOSCAVE joined #gluster
05:36 bala joined #gluster
05:37 necrogami joined #gluster
05:38 psharma joined #gluster
05:40 vimal joined #gluster
05:42 hagarth joined #gluster
05:49 raghu joined #gluster
05:57 sahina joined #gluster
05:59 meghanam joined #gluster
05:59 meghanam_ joined #gluster
06:01 kumar joined #gluster
06:14 rastar joined #gluster
06:15 ron-slc joined #gluster
06:23 rjoseph1 joined #gluster
06:42 davinder16 joined #gluster
06:50 RameshN joined #gluster
06:54 ekuric joined #gluster
06:58 rjoseph joined #gluster
07:00 Pavid7 joined #gluster
07:00 atinmu joined #gluster
07:00 sijis joined #gluster
07:10 ctria joined #gluster
07:14 DV joined #gluster
07:16 glusterbot New news from newglusterbugs: [Bug 1092840] Glusterd crashes and core-dumps when starting a volume in FIPS mode. <https://bugzilla.redhat.com/show_bug.cgi?id=1092840>
07:20 ktosiek joined #gluster
07:31 deepakcs joined #gluster
07:38 hybrid512 joined #gluster
07:39 ppai joined #gluster
07:52 monotek left #gluster
07:54 andreask joined #gluster
08:02 Pavid7 joined #gluster
08:02 liquidat joined #gluster
08:18 Intensity joined #gluster
08:27 ppai joined #gluster
08:28 hagarth joined #gluster
08:52 shubhendu joined #gluster
09:00 goerk joined #gluster
09:06 RameshN joined #gluster
09:07 pureflex joined #gluster
09:09 ppai joined #gluster
09:10 glusterbot New news from resolvedglusterbugs: [Bug 1102989] [libgfapi] glfs_open doesn't works for O_CREAT flag <https://bugzilla.redhat.com/show_bug.cgi?id=1102989>
09:25 Norman_M joined #gluster
09:31 goerk left #gluster
09:46 stickyboy My self-heal daemon isn't running... replica 2.
09:46 stickyboy I see this message in one of the glustershd logs: [2014-07-04 09:39:00.834323] I [afr-self-heal-entry.c:2321:afr_sh_entry_fix] 0-data-replicate-0: <gfid:00000000-0000-0000-0000-000000000001>: Performing conservative merge
09:53 stickyboy Hmmm.
10:03 stickyboy joined #gluster
10:10 stickyboy Ah, I restarted the glusterd service on the node where it was down... and now glustershd is running.
10:10 stickyboy w00t
10:15 stickyboy Can't wait to move from 3.5 to 3.5.1...
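For anyone hitting the same symptom, a rough sketch of how to confirm the self-heal daemon is up and bring it back by restarting glusterd; the volume name "data" is inferred from stickyboy's log snippet and may differ on other setups:

    # the Self-heal Daemon should show as online for every node
    gluster volume status data

    # glustershd is spawned by glusterd, so restarting glusterd on the
    # node where it is down normally brings glustershd back with it
    service glusterd restart        # or: systemctl restart glusterd

    # re-trigger healing and watch the pending-heal list shrink
    gluster volume heal data
    gluster volume heal data info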
10:21 Der_Fisch joined #gluster
10:25 RameshN_ joined #gluster
10:38 Norman_M joined #gluster
10:40 edward1 joined #gluster
10:40 Norman_M Hey guys, we're running into some trouble with Gluster over NFS and seeing some strange error messages in nfs.log
10:42 Norman_M http://pastebin.com/aZGG3y2r
10:42 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
10:43 Norman_M http://fpaste.org/115548/47060314/
10:43 glusterbot Title: #115548 Fedora Project Pastebin (at fpaste.org)
10:43 Norman_M @paste
10:43 glusterbot Norman_M: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
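A minimal example of the workflow glusterbot describes, piping a log excerpt straight to a paste service; the log path is only an illustration:

    # Fedora / RHEL / CentOS
    yum install fpaste
    tail -n 100 /var/log/glusterfs/nfs.log | fpaste

    # Debian / Ubuntu
    apt-get install pastebinit
    tail -n 100 /var/log/glusterfs/nfs.log | pastebinit

Both tools print a URL that can then be shared in the channel.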
10:46 Norman_M The system load is increasing fast while these errors occur
10:46 LebedevRI joined #gluster
10:50 hagarth joined #gluster
10:56 social joined #gluster
11:04 JustinClift Norman_M: Hmmm, looks like no-one around atm.  Maybe ask on gluster-users?
11:04 JustinClift The mailing list I mean. :)
11:04 Norman_M Yeah I will tell our admin, but he is quite busy atm ;)
11:05 JustinClift ;)
11:05 Slashman joined #gluster
11:07 SpComb https://bugzilla.redhat.com/show_bug.cgi?id=971805 contains the asswer
11:07 glusterbot Bug 971805: high, high, ---, pkarampu, CLOSED CURRENTRELEASE, nfs: "rm -rf"  throws "E [client3_1-fops.c:5214:client3_1_inodelk]"  Assertion failed
11:07 SpComb *assert
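For triaging errors like the ones in Norman_M's paste, a few commands commonly used to inspect the Gluster NFS server; the volume name "myvol" is a placeholder:

    # confirm the gluster NFS server is online and note its port
    gluster volume status myvol nfs

    # watch the NFS translator log while reproducing the load spike
    tail -f /var/log/glusterfs/nfs.log

    # review the options currently set on the volume (including NFS ones)
    gluster volume info myvol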
11:11 keytab joined #gluster
11:21 Philambdo joined #gluster
11:23 Norman_M cat someone tell me what to do, if the gfid for one file on the servers differ?
11:23 Norman_M *can
11:24 Norman_M will it be resolved automatically?
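One way to check whether the gfids really differ is to compare the trusted.gfid extended attribute of the file on each brick directly; the brick path below is a placeholder:

    # run on each server, against the file's path inside the brick
    getfattr -n trusted.gfid -e hex /export/brick1/path/to/file

If the hex values differ between replicas the file is in a gfid split-brain, which the 3.5-era self-heal daemon generally does not resolve on its own; the usual manual fix is sketched a bit further down.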
11:39 rastar joined #gluster
11:48 harish_ joined #gluster
11:55 andreask joined #gluster
12:10 itisravi_ joined #gluster
12:17 Slashman joined #gluster
12:19 kanagaraj_ joined #gluster
12:19 ppai joined #gluster
12:20 theron joined #gluster
12:53 diegows joined #gluster
12:59 edwardm61 joined #gluster
13:01 Norman_M Hi everyone! Is there a tool to batch fix the "gfid differs on subvolume" issue?
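As far as I know there was no dedicated batch tool in the 3.5 line; the usual manual procedure, per affected file, was to pick the brick whose copy should be kept, remove the other copy together with its .glusterfs hard link, and re-trigger a heal. A hedged sketch, with the brick path, volume name and gfid as placeholders:

    BRICK=/export/brick1
    GFID=d0d0d0d0-1111-2222-3333-444444444444    # from getfattr or the log message

    # on the brick holding the copy you do NOT want to keep
    rm -f $BRICK/path/to/file
    rm -f $BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID

    # then, from any server, re-trigger a full heal of the volume
    gluster volume heal myvol full

Double-check which copy is the good one before deleting anything.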
13:32 burn420 joined #gluster
13:34 narhen joined #gluster
13:36 narhen Hi. I'm having some issues with my gluster test-setup. I have a replicated striped volume with 4 bricks over 4 nodes, but when I create files they just disappear
13:37 narhen I can, however, see that the files exist on the actual brick mountpoints, but not at the volume mountpoint
13:37 narhen I tried restarting the gluster daemon on all nodes, but the issue still persists
13:38 narhen does anyone have a clue on what is going on?
13:48 torbjorn1_ narhen: just to make sure, you're doing the writes through the Gluster client, right? You're not writing directly to the brick directories on the server side?
13:48 hagarth joined #gluster
13:48 narhen nope. I mounted the volume like this: '# mount <host>:/testvol /mnt' then created the files in /mnt
13:49 narhen I mean yes. I did
13:51 narhen hm, ok I think I might have resolved it. I removed all the files manually, from all bricks. then restarted glusterd on all nodes
13:51 narhen remounted, then tried to create a file. it seems to be fine now. but it would be nice to know what was wrong
13:54 torbjorn1_ narhen: I'm guessing the logs would be the next place to look for clues
13:58 narhen yes. "unable to self-heal contents of '<gfid:0....>'. (possible split-brain). please delete the file from all but the preferred subvolume"
13:58 narhen I'm assuming this means there were inconsistencies between some of the bricks
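For reference, the heal status can be inspected per volume, and a test volume like narhen's can be rebuilt as below; host names and brick paths are placeholders, and the brick order determines which bricks form a replica pair:

    # list files the self-heal daemon considers split-brained
    gluster volume heal testvol info split-brain

    # a 4-brick stripe 2 x replica 2 layout over 4 nodes
    gluster volume create testvol stripe 2 replica 2 \
        node1:/export/brick1 node2:/export/brick1 \
        node3:/export/brick1 node4:/export/brick1
    gluster volume start testvol
    mount -t glusterfs node1:/testvol /mnt

Writing into the brick directories directly, instead of through a mount of the volume, is one common way to end up with the kind of inconsistency described above.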
14:25 DV_ joined #gluster
14:25 redbeard joined #gluster
14:25 hagarth joined #gluster
15:34 hagarth joined #gluster
15:53 kshlm joined #gluster
16:27 ctria joined #gluster
16:28 plarsen joined #gluster
16:48 sputnik13 joined #gluster
17:50 zerick joined #gluster
18:11 coredump joined #gluster
18:25 diegows joined #gluster
18:37 gmcwhistler joined #gluster
18:51 plarsen joined #gluster
19:15 diegows joined #gluster
20:00 chirino joined #gluster
20:36 zerick joined #gluster
22:17 Philambdo joined #gluster
23:00 fidevo joined #gluster
23:07 MrAbaddon joined #gluster
23:21 gmcwhistler joined #gluster
23:57 gmcwhist_ joined #gluster
