
IRC log for #gluster, 2014-08-14


All times shown according to UTC.

Time Nick Message
00:10 hchiramm_ joined #gluster
00:12 gEEbusT Hi guys, one of my volumes is playing up after one of my nodes rebooted (4 nodes, replica 2) - self-heal says it's done 1023 files on all the bricks, but heal info keeps saying there's 100 on the 1st & 2nd bricks, 93 on the 3rd & 4th. Any ideas?
00:30 gEEbusT It looks like my self-heal is looping :(
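(For reference, the commands usually used to inspect a heal that seems stuck look like the following; this is only a sketch, and the volume name "myvol" is a placeholder.)

    gluster volume heal myvol info              # entries still pending heal, per brick
    gluster volume heal myvol info split-brain  # entries the daemon cannot reconcile
    gluster volume heal myvol info heal-failed  # entries the daemon tried and failed to heal
    gluster volume heal myvol full              # force a full self-heal crawl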
00:30 DV joined #gluster
00:37 systemonkey joined #gluster
01:05 sputnik13 joined #gluster
01:09 sputnik13 joined #gluster
01:26 bala joined #gluster
01:26 sputnik13 joined #gluster
01:27 cristov_mac joined #gluster
01:37 sputnik13 joined #gluster
01:45 sputnik13 joined #gluster
01:47 gildub joined #gluster
01:53 harish_ joined #gluster
01:53 haomaiwa_ joined #gluster
01:56 overclk joined #gluster
02:19 sputnik13 joined #gluster
02:25 sputnik13 joined #gluster
02:31 sputnik13 joined #gluster
02:33 harish_ joined #gluster
02:40 haomai___ joined #gluster
03:09 bharata-rao joined #gluster
03:35 hchiramm_ joined #gluster
03:48 kdhananjay joined #gluster
03:49 jvandewege_ joined #gluster
03:49 simulx2 joined #gluster
03:50 VeggieMeat_ joined #gluster
03:51 msvbhat_ joined #gluster
03:51 torbjorn1_ joined #gluster
03:52 delhage_ joined #gluster
03:52 nixpanic_ joined #gluster
03:52 nixpanic_ joined #gluster
03:53 atrius- joined #gluster
03:53 spandit joined #gluster
03:53 silky joined #gluster
03:53 DV joined #gluster
03:53 tg2 joined #gluster
03:53 tty00 joined #gluster
03:53 radez_g0n3 joined #gluster
03:55 ilbot3 joined #gluster
03:55 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
04:00 glusterbot New news from newglusterbugs: [Bug 1129939] NetBSD port <https://bugzilla.redhat.com/show_bug.cgi?id=1129939>
04:10 ilbot3 joined #gluster
04:10 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
04:26 sputnik13 joined #gluster
04:39 anoopcs joined #gluster
04:41 jbrooks left #gluster
04:45 ppai joined #gluster
04:51 atinmu joined #gluster
04:52 kdhananjay joined #gluster
04:53 Rafi_kc joined #gluster
04:53 nishanth joined #gluster
04:55 jiffin joined #gluster
04:58 bala joined #gluster
05:08 itisravi_ joined #gluster
05:09 edong23_ joined #gluster
05:14 RameshN joined #gluster
05:22 ramteid joined #gluster
05:24 karnan joined #gluster
05:26 prasanth_ joined #gluster
05:31 ekuric joined #gluster
05:31 haomaiwa_ joined #gluster
05:32 glusterbot New news from resolvedglusterbugs: [Bug 764655] NetBSD port <https://bugzilla.redhat.com/show_bug.cgi?id=764655>
05:33 lalatenduM joined #gluster
05:35 karnan joined #gluster
05:37 atalur joined #gluster
05:38 overclk joined #gluster
05:42 haomaiw__ joined #gluster
05:43 meghanam joined #gluster
05:43 meghanam_ joined #gluster
05:44 hagarth joined #gluster
05:57 wangjun joined #gluster
06:03 raghu joined #gluster
06:03 ppai joined #gluster
06:04 ghghz wc
06:04 ghghz left #gluster
06:05 dusmant joined #gluster
06:18 simulx joined #gluster
06:21 karnan joined #gluster
06:30 sputnik13 joined #gluster
06:34 tty00 joined #gluster
06:36 LebedevRI joined #gluster
06:47 nbalachandran joined #gluster
06:49 ppai joined #gluster
06:49 sputnik13 joined #gluster
06:53 tom[] joined #gluster
06:53 mariusp joined #gluster
06:54 ctria joined #gluster
07:12 RameshN joined #gluster
07:12 haomaiwa_ joined #gluster
07:13 sputnik13 joined #gluster
07:15 haomai___ joined #gluster
07:18 sahina joined #gluster
07:22 fsimonce joined #gluster
07:23 keytab joined #gluster
07:24 rgustafs joined #gluster
07:27 sputnik13 joined #gluster
07:27 ricky-ti1 joined #gluster
07:32 sage joined #gluster
07:36 mariusp joined #gluster
07:40 sputnik13 joined #gluster
07:42 RameshN joined #gluster
07:44 ppai joined #gluster
07:46 karnan joined #gluster
07:53 ppai joined #gluster
07:58 calum_ joined #gluster
08:04 R0ok_ joined #gluster
08:05 haomaiwa_ joined #gluster
08:07 mariusp joined #gluster
08:11 haomaiw__ joined #gluster
08:20 ppai joined #gluster
08:26 Slashman joined #gluster
08:31 glusterbot New news from newglusterbugs: [Bug 1130023] A small patch for enabling client side applications to gather io statistics from mounted volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1130023>
08:33 bharata-rao joined #gluster
08:39 ppai joined #gluster
08:40 nbalachandran joined #gluster
08:40 rolfb joined #gluster
08:43 lyang0 joined #gluster
08:49 foobar Hi, I'm having a weird problem... 6 servers mounting a glusterfs via fuse will report 'stale file handle' on some directories... all are running the 3.5.1 client... when I then run a find on the parent directory from another host that runs 3.4.2, it works, and then it also works for the 6 machines that were giving 'stale file handle' first
08:49 foobar servers are also running 3.5.1 btw
08:54 wangjun joined #gluster
08:56 foobar even just running 'ls' on that dir in 3.4.2 fixes the issue for the 3.5.1 clients
09:10 vimal joined #gluster
09:11 haomaiwang joined #gluster
09:12 karnan joined #gluster
09:14 foobar updated the 3.4 box to 3.5.... and the issue persists... I just can't trigger the fix now by running 'ls' on the 3.4 client
09:14 foobar thinking of downgrading
09:19 cultav1x joined #gluster
09:20 nbalachandran joined #gluster
09:22 dusmant joined #gluster
09:27 foobar downgraded 1 client to 3.4.5 ... seeing if that client no longer shows the problem... (which could take a while)
09:34 mdavidson joined #gluster
09:35 bharata-rao joined #gluster
09:47 qdk joined #gluster
09:57 nbalachandran joined #gluster
10:10 qdk joined #gluster
10:14 foobar yup... looks like reverting to 3.4 clients fixes the 'stale file handle' problem
10:24 haomai___ joined #gluster
10:31 edward1 joined #gluster
10:34 dusmant joined #gluster
10:36 ctria joined #gluster
10:37 ira joined #gluster
10:45 kkeithley1 joined #gluster
10:51 haomaiwa_ joined #gluster
10:55 mojibake joined #gluster
11:04 mojibake Can someone explain the relationship between the performance.cache settings and the direct.io settings? If direct.io is enabled by default, does that mean the performance.cache settings are ignored?
11:08 mojibake If those options are set on the volume using the gluster command, are those options then passed to clients when the volume is mounted, or does each client need a gluster.vol config file to implement the various cache settings? (Still new to gluster; "gluster volume info" does not look like it displays what options have been set. What command can be used to see the options that have been set? Is it written to gluster.vol on the gluster master?)
11:14 coredump joined #gluster
11:19 prasanth_ joined #gluster
11:33 julim joined #gluster
11:41 power8 joined #gluster
11:44 cultav1x joined #gluster
11:48 power8 I think I might have found why files are continuously showing up in my heal status. I have no idea how to actually replicate it, but I will try to find info in my gluster logs. With this being IRC and limited buffer, this might take a few lines...
11:49 foobar power8: if so... use a pastebin-like site ;)
11:49 foobar !paste?
11:50 power8 ah good idea
11:50 foobar pastie.org for example
11:50 power8 brb
11:52 ninthBit joined #gluster
11:53 ninthBit ok, i have so many questions and issues that things are going to seem all over the place.  let's start with this one.  I have a test environment where I have an ftp server connected with gluster-fuse and windows servers connected with samba to the volume.
11:54 ninthBit when i copy files with windows to samba -> fuse -> gluster volume the files appear to be distributed among the two replica-sets.  When using the ftp server ftp->fuse->volume the files seem to all go to a single replica-set, and the test data set is large enough to run a single replica-set out of space.
11:55 ninthBit and that is what the error is right now. my one replica-set ran out of space but the other replica-set is basically untouched. :P  but windows has no problem pushing this amount of data to the volume.   I am trying to replicate issues we see in production only to have different issues to deal with.  gluster is eating a hella lot of time :P
11:56 ninthBit i might branch my virtual machines and test out 3.5.x and see how things go....
12:02 ninthBit is it proper to report gluster.org links that are 404 to bugzilla ? :)
12:02 rastar joined #gluster
12:03 ndevos ninthBit: 404 links can get sent to gluster-infra@gluster.org
12:05 the-me kkeithley_: sorry for my late answer. -dev: please do not do this, just put it in glusterfs-common. -geo-rep: is there something missing in the current packaging for geo-rep? never tried it out..
12:07 kkeithley_ the-me: yeah, already decided not to. Debian seems to not care about package bloat or pulling in a bunch of dependencies; not like the Fedora people do.
12:09 kkeithley_ If it ain't broke, don't fix it.
12:10 C_Kode Hi, I'm getting "Connection failed. Please check if gluster daemon is operational."  I show both nodes have port 24007 open and systemctl status glusterd is running on both.  Using tcpdump I don't see any network activity attempting to connect to node two
12:10 C_Kode CentOS7 btw
12:10 kkeithley_ ninthBit: oh, please don't open BZs for 404s. Just let us know here, or maybe in #gluster-dev.
12:11 the-me kkeithley_: the problem is, that gluster does not care about SONAME versions, ok, since it is not a real library :/
12:11 ndevos C_Kode: there are other ,,(ports) than 24007 needed
12:11 glusterbot C_Kode: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
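(Translating glusterbot's port list into firewall rules typically looks like the sketch below, using firewalld on CentOS 7; widen the brick range to cover however many bricks the node actually hosts.)

    firewall-cmd --permanent --add-port=24007-24008/tcp   # glusterd management (+ rdma)
    firewall-cmd --permanent --add-port=49152-49200/tcp   # brick (glusterfsd) ports, 3.4+
    firewall-cmd --permanent --add-port=38465-38468/tcp   # gluster NFS + NLM
    firewall-cmd --permanent --add-port=111/tcp --add-port=111/udp --add-port=2049/tcp   # rpcbind + NFS
    firewall-cmd --reload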
12:11 the-me I do not find the related bug number now..
12:12 kkeithley_ the-me: for xlators that's true. And we're planning an SONAME bump for libgfapi in 3.6.
12:12 C_Kode Hmm.
12:12 ndevos C_Kode: 'gluster volume status' should show the ports that your bricks use
12:13 C_Kode That returns the same error.   I'm sure there is some service I haven't started.
12:13 ndevos kkeithley_, the-me: 3.6 is the 1st version that provides a versioned libgfapi.so
12:13 C_Kode Let me look at the howtos again as I clearly must have missed something
12:13 saurabh joined #gluster
12:14 ndevos C_Kode: glusterd should be the only service that you need to start, maybe firewalld or SElinux is preventing something?
12:14 C_Kode SELinux maybe.  I already disabled firewalld to ensure that wasn't the issue.
12:15 tryggvil joined #gluster
12:15 ndevos C_Kode: in the SElinux case, you should be able to see something in the /var/log/audit/ logs, or try 'setenforce 0' and restart the service
12:15 julim joined #gluster
12:16 C_Kode nods, I already used setenforce to disable it. (and updated sysconfig/selinux)
12:16 ndevos if some processes failed to start, or listen on ports, a restart of the service is probably needed
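(A quick way to test whether SELinux is the blocker, as suggested above — a sketch assuming the default audit log location.)

    getenforce                                    # Enforcing / Permissive / Disabled
    setenforce 0                                  # temporarily permissive
    systemctl restart glusterd
    grep -i avc /var/log/audit/audit.log | tail   # look for denials mentioning gluster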
12:17 mbukatov joined #gluster
12:17 prasanth_ joined #gluster
12:17 C_Kode Hmm.  CentOS 7 doesn't show anything when you restart a service like the old init.d scripts did
12:20 diegows joined #gluster
12:24 itisravi joined #gluster
12:25 C_Kode Okay, it took a reboot after the selinux changes, but it was now successful in probing for the peer node.  Thanks ndevos
12:26 rsavage_ joined #gluster
12:26 rsavage_ howdy
12:26 C_Kode Hi.
12:26 glusterbot C_Kode: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
12:26 ndevos C_Kode: ah, good to know! it would be nice to have SElinux functional though
12:27 rsavage_ Question.  Say I have a 4 node gluster cluster, and all of the volumes are replica volumes.  And say I want to remove a server.  After I remove the server, do I have to remove the brick?
12:27 C_Kode Agreed
12:29 C_Kode rsavage_, I'm not that far along yet, but I would think removing a server would remove its bricks too.
12:29 C_Kode Though, that I can only assume at this point.
12:29 rsavage_ C_Kode: it doesn't
12:30 rsavage_ C_Kode: I stopped glusterd on the server1, and then powered it down.
12:30 C_Kode Well, assuming is generally a bad idea.
12:30 tdasilva joined #gluster
12:30 C_Kode Well, that isn't removing it from the cluster
12:30 rsavage_ C_Kode: The other servers 2,3,4 still think there's a server1:Brick
12:30 C_Kode I would think you would have to tell the cluster that you are removing that node.  Not just shutting it down.
12:30 rsavage_ C_Kode: True
12:31 rsavage_ gluster peer detach NFS-SERVER1
12:31 rsavage_ peer detach: failed: Brick(s) with the peer NFS-SERVER1 exist in cluster
12:32 glusterbot New news from newglusterbugs: [Bug 1130023] [RFE] Make I/O stats for a volume available at client-side <https://bugzilla.redhat.co​m/show_bug.cgi?id=1130023>
12:32 rsavage_ this is why I ask if I have to remove the brick first
12:33 rsavage_ And then when I try to remove the brick, I get this pretty message
12:33 rsavage_ WARNING: running remove-brick commands without an explicit option is deprecated, and will be removed in the next version of GlusterFS.
12:33 rsavage_ To forcibly remove a brick in the next version of GlusterFS, you will need to use "remove-brick force".
12:33 C_Kode Sorry, I'm not that far along yet.    I still wonder how that works when you have a mirror setup and removing one of the nodes causes breakage in the mirror
12:35 rsavage_ C_Kode: no worries
12:36 mbukatov joined #gluster
12:38 C_Kode It says that I shouldn't create a volume in the root partition.  I'm guessing this means I should attach it to something like the /var partition rather than a folder on the primary root partition?
12:40 rsavage_ Looks like I just remove the brick and change the replica count from 4 to 3, and it works.
12:40 rsavage_ then I can do the peer detach
12:41 cultav1x joined #gluster
12:41 chirino joined #gluster
12:48 bennyturns joined #gluster
12:49 coredump joined #gluster
12:55 nbalachandran joined #gluster
12:56 coredump joined #gluster
12:56 qdk joined #gluster
13:00 rsavage_ C_Kode: So here's what I ended up doing.  1. Stop glusterd on the server.  2. Power down the server. 3. Removed the brick and changed the replica count to servers-1.  (originally I had 4 bricks and went to 3.).  4. Then did peer detach
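(rsavage_'s steps 3 and 4 as commands — a sketch; the volume name "myvol" and the brick path are placeholders for a replica-4 volume being shrunk to replica-3.)

    gluster volume remove-brick myvol replica 3 NFS-SERVER1:/export/brick force
    gluster peer detach NFS-SERVER1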
13:00 C_Kode Ahh
13:01 C_Kode I suppose I will be testing that soon enough.  I've got the cluster set up now and I can mount the share, but I can't write to it.  Trying to find where I set permissions now
13:01 B21956 joined #gluster
13:01 andreask joined #gluster
13:02 mojibake I am having trouble trying to set up io-cache in gluster.vol on the client.  I have a 3x2 volume, which I think I have set up in the gluster.vol properly, as it will mount. But when I add the following it will fail to mount.
13:02 mojibake volume io-cache
13:02 mojibake type performance/io-cache
13:02 mojibake option cache-size 64MB
13:02 mojibake option cache-timeout 180
13:02 mojibake subvolumes replicate
13:02 mojibake end-volume
13:02 mojibake But if I comment out the "option cache-timeout 180" line it appears to mount OK.
13:05 bene2 joined #gluster
13:05 imad_VI joined #gluster
13:06 sage joined #gluster
13:11 bala joined #gluster
13:13 hagarth joined #gluster
13:14 cultav1x joined #gluster
13:15 chirino joined #gluster
13:18 imad_VI Hi everyone, I'm trying to set up redundancy between two nodes, but when I create a file in one folder it doesn't copy it to the other. Here are the commands I used ==> http://pastebin.com/TXFj1F5i
13:18 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
13:19 theron joined #gluster
13:20 imad_VI http://fpaste.org/125558/14080223/
13:20 glusterbot Title: #125558 Fedora Project Pastebin (at fpaste.org)
13:20 andreask joined #gluster
13:21 kkeithley1 joined #gluster
13:21 theron joined #gluster
13:22 imad_VI Do I have to mount the volume on each node?
13:25 ctria joined #gluster
13:25 kkeithley_ imad_VI: do not write in the backing brick volumes.  mount your datastore_1 volume somewhere. Writes to the mounted volume will be replicated to both servers.
13:28 imad_VI kkeithley_: the problem is that I'm not setting this up to let clients access the volume. It's only to sync two folders between a master and its slave.
13:29 kkeithley_ then use rsync. This is not what gluster is for
13:30 imad_VI kkeithley_: I know but I don't have the choice...
13:32 kkeithley_ you can mount the gluster datastore_1 volume on both servers. Writes will be replicated to both volumes. Make sure you write and read from the mounted volumes, not from the back end storage (the bricks)
13:36 imad_VI kkeithley_: Thank you, but gluster doesn't let me mount it because the folder name is already in use by datastore_1. Maybe I'll try using a symlink.
13:38 kkeithley_ I'm not sure what that means. mount the volume locally, e.g. `mount -t glusterfs srv2:datastore_1 /mnt`
13:38 kkeithley_ then do your writes and reads to /mnt
13:42 msmith joined #gluster
13:44 imad_VI kkeithley_: on srv2 I was trying to mount srv1:/datastore_1 on /var/gluster_share
13:45 kkeithley_ yes, that should work
13:45 kkeithley_ srv1% mount -t glusterfs srv2:datastore_1 /var/gluster_share
13:46 imad_VI the name /var/gluster_share is already in use by the datastore_1 volume, so it does not let me mount it
13:47 imad_VI ERROR: /var/gluster_share is in use as a brick of a gluster volume
13:47 imad_VI that's why I was thinking about a symlink, but I'm not sure about it.
13:48 kkeithley_ your volume create command was `gluster volume create datastore_1 replica 2 transport tcp srv1:/var/glust_share srv2:/var/glust_share`
13:49 kkeithley_ srv2:/var/glust_share is your brick, i.e. part of the volume backing store
13:49 kkeithley_ you can't mount the volume on top of the backing store
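(The brick/mount separation kkeithley_ describes usually ends up looking like this sketch — brick directories that are not themselves the mount point, with all I/O going through the mounted volume; the paths are illustrative, not imad_VI's exact layout.)

    gluster volume create datastore_1 replica 2 transport tcp \
        srv1:/var/glust_share/brick srv2:/var/glust_share/brick
    gluster volume start datastore_1
    mkdir -p /mnt/datastore_1
    mount -t glusterfs srv2:/datastore_1 /mnt/datastore_1   # read and write only under /mnt/datastore_1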
13:52 plarsen joined #gluster
13:53 bennyturns joined #gluster
13:57 bala joined #gluster
13:57 mojibake joined #gluster
14:00 imad_VI kkeithley_: Ok, but my volume is made from a folder; I need to duplicate files when each srv writes to it, and because of the folder names used in the backing volume I can't mount it locally or remotely.
14:01 getup- joined #gluster
14:07 siel joined #gluster
14:10 luckyinva joined #gluster
14:11 wushudoin| joined #gluster
14:18 sjm joined #gluster
14:23 mbukatov joined #gluster
14:32 bala joined #gluster
14:33 luckyinva I'm looking for a recommendation for RAID level and replica count.  Can anyone here offer any guidance?  I'm not a storage guru by any stretch of the imagination.
14:34 luckyinva In my meeting with Red Hat last week they recommended RAID 6 and/or RAID 10
14:34 foobar that sounds right...
14:35 foobar i'm running on no-raid... but that's all kinds of misery
14:35 luckyinva that was my original plan, distributed replica 2 no raid
14:35 rsavage_ foobar: love the name
14:36 foobar luckyinva: i have triple-replicate no raid, and a 4th offsite replica as backup
14:36 foobar but disk failures cause many issues with self-healing and rebuilding...
14:36 foobar mostly because it's 500M+ files
14:36 luckyinva so I have read
14:37 luckyinva I'm looking to use gluster with OpenStack, particularly as instance / volume and object storage
14:37 foobar with big files it should be ok I guess
14:38 witlessb joined #gluster
14:40 witlessb left #gluster
14:40 luckyinva other than here or the gluster website is there any other source that is reliable for gluster information?
14:40 ws2k3 hello, i have a question regarding glusterfs: how does the distributed replicated volume exactly work? i already read the creating distributed replicated volumes article on gluster.org but i still don't understand how this volume type works exactly
14:40 luckyinva I’ve read most of what I can google for including Joe Julians site
14:41 luckyinva @ws2K3 have you read the concepts page on gluster.org
14:42 ws2k3 no i did not
14:42 mbukatov joined #gluster
14:42 ws2k3 i found the page now let me read it thank you luckyinva
14:43 luckyinva np im just learning it all myself, first time I could help someone else
14:45 ws2k3 luckyinva is this the page you mean http://gluster.org/community/documentation/index.php/GlusterFS_Concepts ?
14:45 glusterbot Title: GlusterFS Concepts - GlusterDocumentation (at gluster.org)
14:45 nshaikh joined #gluster
14:46 luckyinva yes, that's a start
14:48 luckyinva there is also a redhat page that is pretty good
14:49 luckyinva https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/sect-User_Guide-Setting_Volumes-Distributed_Striped_Replicated.html
14:49 glusterbot Title: 8.9. Creating Distributed Striped Replicated Volumes (at access.redhat.com)
14:50 luckyinva basically sections 8.3 - 8.9
14:51 luckyinva if that doesn’t help you i’m sure someone else here can offer you some explanation - good luck
14:52 ws2k3 so if i make a duplicated volume with 2 machines they both have exactly the same data right? and is it still possible to write to both machines?
14:52 luckyinva yes
14:52 getup- joined #gluster
14:52 ws2k3 and how exactly do i need to read "distributed replicated"
14:53 luckyinva do you mean how do you access the volume?
14:54 ws2k3 i understand duplicated means all data is the same on all machines and you can write to all machines; with distributed it's like stripe, each machine gets a part of the data. but how about duplicated and distributed combined?
14:54 ws2k3 no, how is the data spread out when using distributed and duplicated at the same time
14:55 ws2k3 and what happens if a server fails
14:55 luckyinva I think the term in gluster is distributed / replica 2
14:56 ws2k3 luckyinva i think so too but how does it work then exactly?
14:56 luckyinva replication vs duplicate.. the diagrams I linked you to demonstrate the file distribution
14:56 luckyinva beyond that I’ll defer to the room here as many here have more experience
14:57 luckyinva the agent handles the replication
14:58 overclk joined #gluster
14:59 luckyinva The server that the files are written to is calculated by hashing the filename. If the filename changes, a pointer file is written to the server that the new hash code would point to, telling the distribute translator which server the file is actually on.
14:59 luckyinva that’s the text from the Distribute diagram
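(The hash layout the distribute translator assigns can be inspected on the bricks with the getfattr invocation quoted later in this log — a sketch; the brick path is a placeholder.)

    # each brick stores the hash range it owns for a directory in trusted.glusterfs.dht
    getfattr -m . -d -e hex /export/brick1/somedir
    # a renamed file whose hash now lands elsewhere leaves behind a zero-byte pointer
    # entry (sticky-bit "T" mode) on the old brick, pointing at the brick that holds the data
    ls -l /export/brick1/somedir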
15:01 mojibake joined #gluster
15:04 C_Kode Where exactly do you set file permissions for the gluster shares?  NFS is usually in exports, but I can't figure out how to set it where I can write to the share.  (I can mount it, but can't read or write)
15:04 B21956 joined #gluster
15:07 mojibake joined #gluster
15:08 TvL2386 joined #gluster
15:08 ws2k3 what happens when i mount a glusterfs server using the glusterfs client and that glusterfs server goes down, can it automatically change the mount to another server in the cluster? or how does that work
15:09 ctria joined #gluster
15:10 luckyinva that is handled, the peers probe each other, much of that is also detailed in the documentation, you shouldn’t need to change the mount
15:10 deepakcs joined #gluster
15:11 luckyinva you mount the volume you create , then probe peers, if a node goes down, the other nodes continue to work, replicate etc
15:11 sputnik13 joined #gluster
15:11 luckyinva actually the order is peer probe then mount
15:11 TvL2386 ndevos, don't know if you remember the chat from yesterday, but I had the ownership issues on auth.allow changes. I am running version 3.4.1
15:13 ws2k3 so if i understand correctly it automatically changes the mount when a volume goes down
15:13 ndevos TvL2386: ah, yes, I recommended the storage.owner-uid and storage.owner-gid options, right?
15:13 ndevos TvL2386: they were included in 3.4.3, I think
15:14 ndevos TvL2386: you'd be facing bug 1040275
15:14 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1040275 high, high, ---, vbellur, CLOSED CURRENTRELEASE, Stopping/Starting a Gluster volume resets ownership
15:15 ndevos and my 'git describe --contains 046cb49' says it was fixed with version v3.4.2qa4~1, so 3.4.2 should have the fix too
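(The options ndevos refers to are plain volume options — a sketch; the volume name and the uid/gid values are placeholders.)

    gluster volume set myvol storage.owner-uid 1000
    gluster volume set myvol storage.owner-gid 1000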
15:16 luckyinva with a single node going down the volume doesn't go down
15:17 luckyinva the volume remains available via the other node(s)
15:19 ws2k3 no i know that, but does a glusterfs client automatically change server?
15:21 XpineX joined #gluster
15:24 ws2k3 how should i create a volume that is only duplicated?
15:29 ghenry joined #gluster
15:29 ghenry joined #gluster
15:31 getup- joined #gluster
15:34 ninthBit is it correct and OK to use a DNS entry with all IP addresses for the gluster peers of the volume that is being mounted with the gluster-fuse client?  the DNS entry is round-robin of all peers. the idea is that when the client mounts it will connect with at least one peer in the cluster.
15:37 edward2 joined #gluster
15:40 bala joined #gluster
15:43 theron_ joined #gluster
15:50 PeterA joined #gluster
15:53 semiosis ninthBit: yes.  ,,(rrdns)
15:53 glusterbot ninthBit: You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/ktI6p
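(A typical fstab entry for the rrdns approach — a sketch; the round-robin name and mount point are placeholders, and the backup-server option is spelled backupvolfile-server in 3.4 but backup-volfile-servers in newer releases.)

    gluster.example.com:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,backupvolfile-server=server2.example.com  0 0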
15:55 Guest10509 joined #gluster
15:57 msmith joined #gluster
15:57 dtrainor joined #gluster
15:59 emitor joined #gluster
15:59 bjuanico joined #gluster
15:59 dtrainor joined #gluster
15:59 bjuanico HIIIIII
16:00 semiosis hi
16:00 glusterbot semiosis: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:00 semiosis bjuanico: ^^
16:02 TvL2386 ndevos, thanks! very helpful!
16:03 ndevos TvL2386: ah, good :)
16:03 emitor hi!, I'm trying to connect a 3.3 glusterfs client to a 3.5 glusterfs server, but it's not working. Here are the logs on the client http://pastebin.com/dhVA2unq
16:03 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
16:05 emitor the logs again.. http://ur1.ca/hz1ve
16:05 glusterbot Title: #125611 Fedora Project Pastebin (at ur1.ca)
16:07 ctria joined #gluster
16:12 jackdpeterson joined #gluster
16:14 semiosis emitor: that's probably not going to work
16:16 daMaestro joined #gluster
16:16 guillaume_94404 joined #gluster
16:16 nishanth joined #gluster
16:19 emitor semiosis: it shouldn't be backward compatible?
16:20 JoeJulian "Started running /usr/sbin/glusterfs version 3.2.7" Nopr
16:20 semiosis regardless of what should or should not be, i'm pretty sure those versions are not compatible
16:20 JoeJulian s/r$/e/
16:20 glusterbot What JoeJulian meant to say was: "Started running /usr/sbin/glusterfs version 3.2.7" Nope
16:25 luckyinva joined #gluster
16:27 guillaume_94404 Hi, I have system of 2 servers running glusterfs.  I use that system for nagios and syslog.  Only one server at a time is having nagios and syslog running.  I'm using glusterfs to share nagios and syslog data between both server.  Under normal operation, the system works fine, however during failover and recovery, glusterfs is not able to heal properly.  I find many file in split brain condition (especially .rrd file and .dat file).  Is there
16:27 guillaume_94404 any solution/recommendation for running nagios and syslog on a glusterfs system?
16:27 siel joined #gluster
16:28 semiosis guillaume_94404: use replica 3 & quorum
16:28 JoeJulian Add a 3rd replica and turn on quorum?
16:28 semiosis JoeJulian++
16:28 glusterbot semiosis: JoeJulian's karma is now 10
16:29 emitor joined #gluster
16:29 guillaume_94404 thanks, besides adding a 3rd replica, is there another option or configuration in gluster?
16:30 rsavage_ left #gluster
16:30 bjuanico joined #gluster
16:31 ninkotech__ joined #gluster
16:31 JoeJulian So you want the ability to write to each server without regard to what's on the other server and have them magically be in sync without engineering a way for that to happen?
16:32 emitor JoeJulian: You are correct, it's running glusterfs version 3.2.7. I pasted the wrong log from another server; here is the correct one http://ur1.ca/hz20m
16:32 glusterbot Title: #125614 Fedora Project Pastebin (at ur1.ca)
16:35 pdrakeweb joined #gluster
16:35 guillaume_94404 JoeJulian: I thought that since I only have 1 server running (the other server has no application running, just os/gluster) at any time, the other server just replicates what is on the active server.
16:35 semiosis guillaume_94404: how do you control which server is "active" for nagios/syslog?
16:36 guillaume_94404 keepalived
16:36 JoeJulian emitor: "Transport endpoint is not connected), peer (10.11.131.11:24007"
16:39 JoeJulian guillaume_94404: You have one server running. You write to it. It tags the files you've written to as having pending updates for the other server. You switch servers. Unless you wait for the self-heal queue to finish, if you disconnect the server with the data there's no way for the server without the data to get that data. Now you write to those files. Each server now has differing files and no way to resolve the difference, ie. split-brain.
16:39 ghenry joined #gluster
16:40 pdrakeweb joined #gluster
16:40 JoeJulian The only ways to prevent that is to enforce a quorum where the volume goes read-only if you cannot meet the quorum. With replica 2 that means both servers have to be on or it goes read-only. Replica 3 and only 2 servers need be up.
16:41 JoeJulian Or be super careful not to let that happen.
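(The quorum being recommended is turned on with volume options — a sketch; "myvol" is a placeholder. With client quorum set to auto, writes are refused whenever a majority of a replica set is unreachable, which is what keeps the two sides from diverging.)

    gluster volume set myvol cluster.quorum-type auto
    gluster volume set myvol cluster.server-quorum-type server   # optional, pool-wide server quorum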
16:41 semiosis maybe disable write-behind?
16:42 guillaume_94404 JoeJulian: Thanks, adding a 3rd replica makes sense.
16:43 jackdpeterson Is there a way to share the NFS state in a shared volume to the effect of assisting NFS failover?
16:44 guillaume_94404 semiosis: Thanks, I don't know about write-behind, I will check on google
16:44 semiosis guillaume_94404: ,,(options)
16:44 glusterbot guillaume_94404: See config options and their defaults with 'gluster volume set help'; you can see the current value of an option, if it has been modified, with 'gluster volume info'; see also this page about undocumented options: http://goo.gl/mIAe4E
16:48 emitor JoeJulian: I'm not sure how could I fix the problem with your hint. What could cause that warning?
16:49 C_Kode I cannot find any information about configuring access for the shares.   I'm mounting them over NFS using autofs, and it mounts fine, but nobody can read/write to the directory.  I've added auth.allow and even set the ownership on the share.
16:50 siel joined #gluster
16:53 sputnik13 joined #gluster
16:54 semiosis @3.4 upgrade notes
16:54 glusterbot semiosis: http://vbellur.wordpress.com/2013/07/15/upgrading-to-glusterfs-3-4/
16:54 louisbenoit joined #gluster
16:55 guillaume_94404 semiosis: Thanks, I will check that too.
16:56 louisbenoit left #gluster
16:58 msmith joined #gluster
17:01 theron joined #gluster
17:03 msmith_ joined #gluster
17:08 Philambdo joined #gluster
17:19 Humble joined #gluster
17:20 Guest10509 joined #gluster
17:25 clyons joined #gluster
17:27 bala joined #gluster
17:36 JoeJulian emitor: That's no warning, that's the failure. It's a standard network error stating that the tcp connection was dropped. Check your network, glusterd log and if those don't help, wireshark.
17:36 JoeJulian Though why glusterfs calls it a warning I have no idea.
17:37 zerick joined #gluster
17:39 gmcwhistler joined #gluster
17:39 rotbeard joined #gluster
17:39 andreask joined #gluster
17:45 mojibake Hope it is not rude to post the same question from earlier in the morning. (It was early.)
17:45 mojibake Can someone explain the relationship between the performance.cache settings and the direct.io settings? If direct.io is enabled by default, does that mean the performance.cache settings are ignored?
17:45 mojibake If those options are set on the volume using the gluster command, are those options then passed to clients when the volume is mounted, or does each client need a gluster.vol config file to implement the various cache settings? (Still new to gluster; "gluster volume info" does not look like it displays what options have been set. What command can be used to see the options that have been set? Is it written to gluster.vol on the gluster master?)
17:45 clyons|2 joined #gluster
17:46 JoeJulian To the first part, I think that's correct, but I haven't looked at the code to make sure.
17:47 JoeJulian And yes. If you set a setting for the volume through the cli, that does immediately apply to all the clients. Options that are still at default are not displayed in gluster volume info.
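(Put together, the usual workflow is something like this sketch; the volume name and value are placeholders.)

    gluster volume set help                           # all options with their defaults
    gluster volume set myvol performance.cache-size 256MB
    gluster volume info myvol                         # only options changed from their defaults are listed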
17:49 theron joined #gluster
17:49 MacWinner joined #gluster
17:51 mojibake JoeJulian++
17:51 glusterbot mojibake: JoeJulian's karma is now 11
17:52 mojibake I just wanted to see glusterbot raise JoeJulian's karma.
17:55 _dist joined #gluster
18:03 tryggvil joined #gluster
18:13 mojibake hmmmm. Maybe someone can help diagnose what went wrong here.. I had a 3x2 volume, which I mounted on client in the format svr:web-content /mnt/gfs/web-content glusterfs defaults 0 0
18:13 mojibake But then i tried using a /etc/glusterfs/gluster.vol file on client. Content of file roughly
18:13 mojibake volume route1
18:14 mojibake volume remote2
18:14 mojibake volume remote3
18:14 mojibake volume remote4
18:14 mojibake volume remote5
18:14 mojibake volume remote6
18:14 mojibake volume replicate
18:14 mojibake subvolume remote1 remote2 remote3 remote4 remote5 remote6
18:14 mojibake (shortend for brevity)..
18:16 mojibake Mounted it using the gluster.vol.. but then went back to mounting the original way, and now it looks like I have a copy of all the files on all the bricks.. except for a 50mb file which, by file size, I can see is only on one set of replica bricks.
18:18 mojibake my guess is I screwed it up on the
18:18 mojibake volume replicate
18:18 mojibake type cluster/replicate
18:18 mojibake subvolumes remote1 remote2 remote3 remote4 remote5 remote6
18:18 mojibake end-volume
18:20 mojibake The reason for attempting to use the gluster.vol file was to try to set the performance.cache options on the client.. But I will take JoeJulian's info and set the volume options on the master.
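(Setting those cache options server-side, as decided above, is a couple of CLI calls — a sketch; web-content is the volume named earlier in this conversation and the values are illustrative.)

    gluster volume set web-content performance.cache-size 64MB
    gluster volume set web-content performance.cache-refresh-timeout 10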
18:20 Pupeno joined #gluster
18:25 siel joined #gluster
18:34 bala joined #gluster
18:35 getup- joined #gluster
18:37 siel joined #gluster
18:38 _Bryan_ joined #gluster
18:39 Spiculum joined #gluster
18:40 Spiculum Hi, is anyone running vmware vm's on gluster storage via nfs?
18:40 JoeJulian Yes. Somebody is.
18:41 Spiculum Obviously I meant anyone here.
18:41 JoeJulian Not necessarily obvious. Could just be looking for justification to try it. That happens a lot.
18:42 Spiculum ...
18:46 Spiculum Anyways, I'm having performance issues deploying Cloud Foundry on a vCenter environment.  I was wondering if anyone who deploys templates and such on a glusterfs mount has any tips or performance options I could set on the glusterfs volume.
18:47 Spiculum *uses vcenter/gluster to deploy templates..
18:51 clyons joined #gluster
18:59 theron joined #gluster
19:03 siel joined #gluster
19:04 B21956 joined #gluster
19:07 bjuanico left #gluster
19:14 emitor JoeJulian: Thanks! it was a network issue..
19:15 emitor Is there any way to mount a subdirectory of a brick using the glusterfs-client 3.5?
19:15 JoeJulian excellent. Glad you got it sussed.
19:15 JoeJulian Only through nfs.
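(Mounting a subdirectory through gluster's built-in NFS, as suggested, is roughly the sketch below; names are placeholders, gluster NFS speaks NFSv3, and depending on the version the subdirectory may first need to be exported with the nfs.export-dir volume option.)

    mount -t nfs -o vers=3,mountproto=tcp server1:/myvol/subdir /mnt/subdir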
19:19 emitor oh... too bad... thanks
19:20 msmith joined #gluster
19:20 caiozanolla how do I fix a split brain issue when the target is a directory?
19:21 magicrobotmonkey joined #gluster
19:21 magicrobotmonkey i have a glusterfs process using port 443 for some reason
19:21 magicrobotmonkey is there anything i can do to prevent that?
19:21 caiozanolla for files I just find the gfid, remove both the file and the .glusterfs entry and stat the file from the client. but the directory is present on all bricks!
19:21 JoeJulian caiozanolla: Have to reset the trusted.glusterfs.afr.* attributes on one brick for just that directory.
19:22 JoeJulian Or, rather, all but one brick (or all bricks if you prefer).
19:23 JoeJulian magicrobotmonkey: Shouldn't be happening
19:23 JoeJulian Well, I guess that's not entirely true.
19:23 magicrobotmonkey lol i agree
19:23 JoeJulian It could use that on an outgoing connection.
19:23 magicrobotmonkey yea its a client
19:23 JoeJulian There was a bug for that...
19:24 caiozanolla JoeJulian, ok and how do I do that? also, all bricks have this directory.
19:24 JoeJulian bug 762989
19:24 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=762989 low, high, ---, rabhat, CLOSED CURRENTRELEASE, Possibility of GlusterFS port clashes with reserved ports
19:24 JoeJulian ~extended attributes | caiozanolla
19:24 glusterbot caiozanolla: (#1) To read the extended attributes on the server: getfattr -m . -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
19:25 JoeJulian Write them with setfattr -n <attribute name> -v <value, in your case 0x0>
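(The whole directory split-brain procedure being described, end to end — a sketch run against brick paths, not the client mount; the volume name, brick path, and client-N indices are placeholders, and the all-zero value is written out in full.)

    # inspect the afr changelog attributes on each brick's copy of the directory
    getfattr -m . -d -e hex /export/brick1/path/to/dir
    # on every brick except the one you trust, clear the pending flags for that directory
    setfattr -n trusted.afr.myvol-client-0 -v 0x000000000000000000000000 /export/brick1/path/to/dir
    setfattr -n trusted.afr.myvol-client-1 -v 0x000000000000000000000000 /export/brick1/path/to/dir
    # then stat the directory through a client mount to retrigger self-heal
    stat /mnt/myvol/path/to/dir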
19:27 magicrobotmonkey hmm im on 3.4.5 which means i should have that, correct?
19:28 msmith joined #gluster
19:29 JoeJulian yep
19:29 JoeJulian Did you reserve the ports per the notes.
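(The port reservation referred to is normally a sysctl — a sketch, assuming the kernel exposes net.ipv4.ip_local_reserved_ports; reserving 443 keeps gluster's outgoing connections from binding to it.)

    sysctl -w net.ipv4.ip_local_reserved_ports=443
    echo "net.ipv4.ip_local_reserved_ports = 443" >> /etc/sysctl.conf   # persist across reboots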
19:31 siel joined #gluster
19:32 msmith_ joined #gluster
19:34 caiozanolla JoeJulian I don't have this attribute, only trusted.afr.volumename-client-N, trusted.gfid and trusted.glusterfs.dht
19:35 caiozanolla JoeJulian, did you mean trusted.afr.* ? if so, I have 2 of them on this directory
19:48 theron joined #gluster
19:50 JoeJulian Yes.
19:50 JoeJulian It always throws me off because they sometimes use trusted.glusterfs.something, and sometimes leave off the glusterfs.
19:52 JoeJulian If anyone has $0.02 that they want to throw in about split-brain resolution, you might want to add your feedback to this thread in gluster-devel: https://www.mail-archive.com/gluster-devel@gluster.org/msg01406.html
19:52 glusterbot Title: [Glusterdevel] Automated splitbrain resolution (at www.mail-archive.com)
19:53 siel joined #gluster
19:56 caiozanolla JoeJulian, I'm not sure I've set it correctly. I've reset both attributes to: trusted.afr.FileServerCacheVolume0-client-0="0x0" … had to resort to ' "0x0" ' because of "bad input encoding"
19:57 caiozanolla still, no dice. logs show "metadata self heal  failed, entry self heal  failed,"
19:57 JoeJulian I wonder if it requires the full complement of zeros...
19:57 caiozanolla trusted.afr.FileServerCacheVolume0-client-0="0x0" and trusted.afr.FileServerCacheVolume0-client-1="0x0"
19:57 JoeJulian I've never tried using it short like that.
19:58 caiozanolla but I do have to reset both, right?
19:58 JoeJulian Right
19:58 caiozanolla ok
19:58 JoeJulian One of the devs mentioned 0x0 once, and I always wanted a chance to try it.
19:58 magicrobotmonkey left #gluster
19:58 JoeJulian Sorry to make you my guinea pig.
19:59 caiozanolla JoeJulian, np :)
20:00 caiozanolla but...
20:00 caiozanolla trusted.afr.FileServerCacheVolume0-client-0=0sAAAAAAAAAAA=
20:00 caiozanolla trusted.afr.FileServerCacheVolume0-client-1=0sAAAAAAAAAAA=
20:00 caiozanolla and still I have IO error
20:00 luckyinva joined #gluster
20:01 JoeJulian And you did that on all but one brick (or all bricks), adjusting the client-N to match each distribute subvolume?
20:03 caiozanolla no, did that on just one brick.
20:03 caiozanolla oh, just re-read that, you mentioned all but one.
20:03 caiozanolla ok, lemme try that
20:03 glusterbot New news from newglusterbugs: [Bug 1130307] MacOSX/Darwin port <https://bugzilla.redhat.com/show_bug.cgi?id=1130307> || [Bug 1130308] FreeBSD port for GlusterFS <https://bugzilla.redhat.com/show_bug.cgi?id=1130308>
20:06 caiozanolla JoeJulian, ok, did it in all but one brick. still its unable to self heal
20:07 caiozanolla is "0sAAAAAAAAAAA=" correct?
20:07 JoeJulian Just do it on all then.
20:07 JoeJulian I always do it in hex, but that looks familiar.
20:09 msmith joined #gluster
20:15 ricky-ticky joined #gluster
20:24 caiozanolla JoeJulian, reset all of them. still io error. ok, if it makes a difference, it's 2 nodes replicated, 7 bricks on each node, so on each brick I've reset the "trusted.afr.volumename-client-N" attributes; on each brick I had different client numbers, correct? so I reset those.
20:26 JoeJulian That's correct.
20:26 caiozanolla JoeJulian, maybe the parent dir is screwed too?
20:26 JoeJulian And you're getting an io error accessing that directory?
20:26 JoeJulian maybe
20:26 caiozanolla yes
20:27 JoeJulian Does the client log still say anything about split-brain for that directory?
20:27 JoeJulian What version?
20:28 caiozanolla ahhhhhh, it's the parent!!! been misreading the log because the dir name repeats! like /gluster/folder/folder  just a sec.
20:34 caiozanolla JoeJulian, strange. I ran it on the parent dir then ls'd that dir. saw 2 "entry self heal  is successfully completed" messages, with an IO error, then on the 2nd ls one more successful heal and an IO error.
20:37 caiozanolla JoeJulian, u mind taking a look? http://pastie.org/9474178
20:37 glusterbot Title: #9474178 - Pastie (at pastie.org)
20:39 JoeJulian "Conflicting entries" suggests that ownership or permissions may not match.
20:40 JoeJulian Or maybe one's a file instead of a directory?
21:31 msmith joined #gluster
22:07 Lee- joined #gluster
22:24 msmith joined #gluster
22:34 chirino joined #gluster
22:40 chirino_m joined #gluster
22:51 jobewan joined #gluster
22:56 siel joined #gluster
23:02 qdk joined #gluster
23:06 B21956 joined #gluster
23:22 luckyinva joined #gluster
23:29 gildub joined #gluster
23:34 Pupeno_ joined #gluster
23:43 andreask joined #gluster
23:43 andreask joined #gluster
