
IRC log for #gluster, 2014-01-24


All times shown according to UTC.

Time Nick Message
00:06 jansel hello
00:06 glusterbot jansel: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
00:06 jansel for our project (lots of small files) we are seeing pretty terrible performance from gluster
00:06 jansel `git clone` takes 13+ minutes on gluster and 24.7 seconds on local disk (30x slower)
00:07 jansel `git status` takes 41 seconds on gluster and 0.1 seconds on local disk (400x slower)
00:07 jansel is there any tuning that I can do to improve this?
00:08 jansel our setup is on amazon ec2,  gluster servers are large SSD backed machines
00:08 jansel have tried both mirror and distribute setups
00:09 jansel thanks
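(For anyone hitting the same small-file problem: the usual starting points are client-side attribute caching and the volume's md-cache/io-thread knobs. Option names below are from the 3.4-era docs, the volume name "gv0" and the values are only examples to benchmark, not a guaranteed fix.)

```shell
# longer FUSE attribute/entry caching helps stat-heavy tools like git
mount -t glusterfs -o attribute-timeout=30,entry-timeout=30 server:/gv0 /mnt/gv0

# volume-side caching and concurrency knobs worth testing one at a time
gluster volume set gv0 performance.cache-size 256MB
gluster volume set gv0 performance.io-thread-count 32
gluster volume set gv0 performance.stat-prefetch on
```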
00:15 jansel_ joined #gluster
00:23 khushildep_ joined #gluster
00:32 purpleidea semiosis: for fun, i patched some low hanging fruit on my todo list: https://github.com/purpleidea/puppet-gluster/commit/b0f905c751ded462c1c4400c3a7c1d1513608a5b
00:32 glusterbot Title: Add support for specifying the base-port option in glusterd.vol · b0f905c · purpleidea/puppet-gluster · GitHub (at github.com)
00:33 purpleidea semiosis: this patch (branch) adds support for a simple glusterd.vol file option... adding your feature is probably equally simple. I didn't test that patch, but it should give you the idea.
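(For reference, the hand-edited equivalent of what that patch templates is a single option line in glusterd.vol — the port value here is just an example:)

```
# /etc/glusterfs/glusterd.vol -- base-port sets the first port bricks listen on
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option base-port 49152
end-volume
```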
00:39 theron joined #gluster
00:45 jporterfield joined #gluster
00:59 TrDS left #gluster
01:05 hagarth joined #gluster
01:22 ira joined #gluster
01:24 SpeeR joined #gluster
01:38 klaxa is the lack of /usr/include/glusterfs/api/glfs.h in the community debian packages for wheezy intended?
01:44 diegows joined #gluster
01:48 johnbot11 joined #gluster
01:49 hagarth joined #gluster
01:49 johnbot__ joined #gluster
01:50 johnbot__ left #gluster
01:51 primechuck joined #gluster
02:00 mohankumar joined #gluster
02:02 johnbot11 joined #gluster
02:04 SpeeR joined #gluster
02:04 gmcwhistler joined #gluster
02:05 _Bryan_ joined #gluster
02:05 mohankumar joined #gluster
02:06 purpleidea @debian
02:06 glusterbot purpleidea: I do not know about 'debian', but I do know about these similar topics: 'files edited with vim become unreadable on other clients with debian 5.0.6'
02:06 purpleidea klaxa: semiosis might know ^
02:07 klaxa i see, well i just took the header file from a different package and built my stuff
02:07 klaxa but that feels really awkward
02:07 klaxa and all the other headers are in the glusterfs-common package
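(A quick way to check whether the installed package ships the header, and to compile against libgfapi once it does — this assumes the packages install a glusterfs-api pkg-config file, which may not hold on every distro, hence the fallback flags:)

```shell
# does glusterfs-common ship the libgfapi header?
dpkg -L glusterfs-common | grep glfs.h || echo "glfs.h missing from package"

# build against libgfapi, with explicit flags as a fallback if there is no .pc file
gcc myapp.c -o myapp \
  $(pkg-config --cflags --libs glusterfs-api 2>/dev/null || echo "-I/usr/include/glusterfs -lgfapi")
```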
02:11 bharata-rao joined #gluster
02:13 radez_g0n3 joined #gluster
02:14 semiosis purpleidea: i just dropped in for a few, missed the beginning of that.  what might I know?
02:15 semiosis actually i can check chan logs, give me a few mins, brb
02:19 semiosis ah i see, glfs.h missing
02:20 jflilley joined #gluster
02:20 jflilley left #gluster
02:20 semiosis klaxa: that was not intentional.  i will fix it.  pretty swamped the next few days so no promises, but i'll try to update the packages as soon as I can
02:20 klaxa ah cool, good to hear i was not wrong :)
02:20 klaxa thanks!
02:20 semiosis klaxa: curious, what are you doing with libgfapi???
02:21 klaxa we want to run libvirt and qemu with libgfapi
02:21 mohankumar joined #gluster
02:21 semiosis cool
02:21 klaxa and the backport packages are not compiled with direct support, so we compile it ourselves and needed the headers
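(What klaxa describes — qemu built with native gluster support so disks bypass FUSE via libgfapi — ends up looking roughly like this; the volume and disk names are made up:)

```shell
# when building qemu yourself, enable the gluster block driver
./configure --enable-glusterfs && make

# disk images can then live on a gluster volume, accessed over libgfapi
qemu-img create gluster://server1/vmvol/disk1.qcow2 20G
qemu-system-x86_64 -drive file=gluster://server1/vmvol/disk1.qcow2,if=virtio
```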
02:24 semiosis ok so i have a lot to do to pack for a trip, but going to try and get you a fix in the next 10 min before i get started packing.  if it works the first time you'll have packages in a few, if it doesnt work, will prob. be days before I can try again
02:24 semiosis brb
02:26 semiosis klaxa: what version?  3.4.2 or 3.5beta?
02:26 klaxa 3.4.2
02:26 klaxa ah i'm not at the system right now. i'll be back at work on wednesday
02:27 klaxa like i said, the libvirt and qemu packages are already compiled, i just took the header from some other package, just wanted to make sure i was not crazy :)
02:27 semiosis well i'm not at the system now either but that's not stopping me :D
02:31 semiosis purpleidea: still around?
02:36 mohankumar joined #gluster
02:43 semiosis klaxa: success.  new packages (3.4.2-2) published to download.gluster.org
02:44 semiosis glusterfs-common now installs glfs.h into /usr/include/glusterfs/api/glfs.h
02:44 klaxa nice, i will update those packages once i'm at work
02:44 semiosis ok, let me know how it goes
02:44 semiosis have a good one
02:44 semiosis i'm out
02:44 klaxa o/
02:44 semiosis \m/
03:03 kshlm joined #gluster
03:14 sputnik13 joined #gluster
03:27 purpleidea semiosis: back now
03:28 purpleidea semiosis: maybe you're out. i'm going to sleep soon, but check out the backlog i highlighted you on and we'll hack tomorrow
03:29 badone__ joined #gluster
03:41 itisravi joined #gluster
03:49 RameshN joined #gluster
03:52 primechuck joined #gluster
03:59 andreask joined #gluster
04:08 SpeeR joined #gluster
04:09 ppai joined #gluster
04:10 SpeeR joined #gluster
04:11 SpeeR joined #gluster
04:13 raghug joined #gluster
04:14 SpeeR joined #gluster
04:15 SpeeR_ joined #gluster
04:17 SpeeR_ joined #gluster
04:17 bala joined #gluster
04:21 SpeeR joined #gluster
04:22 SpeeR joined #gluster
04:24 pk joined #gluster
04:24 SpeeR joined #gluster
04:25 bala joined #gluster
04:26 SpeeR joined #gluster
04:27 kanagaraj joined #gluster
04:28 SpeeR joined #gluster
04:30 SpeeR joined #gluster
04:30 satheesh1 joined #gluster
04:42 prasanth joined #gluster
04:42 SpeeR joined #gluster
04:44 SpeeR joined #gluster
04:46 SpeeR joined #gluster
04:46 shylesh joined #gluster
04:59 abhi_ joined #gluster
05:00 SpeeR joined #gluster
05:01 raghug joined #gluster
05:02 SpeeR_ joined #gluster
05:03 vpshastry joined #gluster
05:04 SpeeR_ joined #gluster
05:05 SpeeR joined #gluster
05:07 SpeeR joined #gluster
05:08 dusmant joined #gluster
05:09 SpeeR_ joined #gluster
05:09 aravindavk joined #gluster
05:11 SpeeR_ joined #gluster
05:11 bala joined #gluster
05:13 SpeeR joined #gluster
05:14 CheRi joined #gluster
05:14 SpeeR_ joined #gluster
05:16 SpeeR joined #gluster
05:18 SpeeR_ joined #gluster
05:20 ndarshan joined #gluster
05:20 SpeeR__ joined #gluster
05:21 saurabh joined #gluster
05:22 SpeeR joined #gluster
05:23 SpeeR joined #gluster
05:24 psharma joined #gluster
05:25 SpeeR joined #gluster
05:26 kdhananjay joined #gluster
05:48 bala joined #gluster
05:49 SpeeR joined #gluster
05:50 SpeeR joined #gluster
05:52 SpeeR joined #gluster
05:53 primechuck joined #gluster
05:54 SpeeR_ joined #gluster
05:55 rastar joined #gluster
05:56 SpeeR joined #gluster
05:56 raghu joined #gluster
05:57 nshaikh joined #gluster
05:57 SpeeR joined #gluster
05:58 overclk joined #gluster
05:59 lalatenduM joined #gluster
05:59 SpeeR joined #gluster
06:01 hagarth joined #gluster
06:10 raghug joined #gluster
06:11 bala joined #gluster
06:12 SpeeR_ joined #gluster
06:14 SpeeR_ joined #gluster
06:15 SpeeR joined #gluster
06:17 SpeeR joined #gluster
06:19 [o__o] left #gluster
06:21 SpeeR joined #gluster
06:21 [o__o] joined #gluster
06:23 SpeeR_ joined #gluster
06:23 benjamin__ joined #gluster
06:24 [o__o] left #gluster
06:24 SpeeR joined #gluster
06:25 ngoswami joined #gluster
06:26 SpeeR joined #gluster
06:27 [o__o] joined #gluster
06:27 askb joined #gluster
06:28 SpeeR joined #gluster
06:29 [o__o] left #gluster
06:30 aravindavk joined #gluster
06:30 SpeeR joined #gluster
06:31 [o__o] joined #gluster
06:32 SpeeR joined #gluster
06:33 SpeeR joined #gluster
06:34 [o__o] left #gluster
06:35 SpeeR joined #gluster
06:37 [o__o] joined #gluster
06:37 SpeeR joined #gluster
06:38 anoopcs joined #gluster
06:39 SpeeR_ joined #gluster
06:39 [o__o] left #gluster
06:41 SpeeR joined #gluster
06:42 [o__o] joined #gluster
06:42 anoopcs left #gluster
06:42 SpeeR joined #gluster
06:44 pk joined #gluster
06:44 SpeeR joined #gluster
06:46 ricky-ti1 joined #gluster
06:46 SpeeR joined #gluster
06:47 DV joined #gluster
06:48 SpeeR joined #gluster
06:50 SpeeR_ joined #gluster
06:51 SpeeR joined #gluster
06:52 hagarth joined #gluster
07:08 davinder joined #gluster
07:09 SpeeR_ joined #gluster
07:11 pk joined #gluster
07:19 kdhananjay joined #gluster
07:20 SpeeR_ joined #gluster
07:20 aravindavk joined #gluster
07:40 jporterfield joined #gluster
07:40 SpeeR joined #gluster
07:42 SpeeR joined #gluster
07:42 ekuric joined #gluster
07:42 shyam joined #gluster
07:43 ctria joined #gluster
07:44 SpeeR_ joined #gluster
07:45 pureflex joined #gluster
07:45 SpeeR joined #gluster
07:46 bala joined #gluster
07:47 SpeeR joined #gluster
07:49 SpeeR joined #gluster
07:50 s2r2_ joined #gluster
07:51 SpeeR_ joined #gluster
07:52 SpeeR joined #gluster
07:54 primechuck joined #gluster
07:54 SpeeR joined #gluster
07:56 SpeeR_ joined #gluster
07:56 hagarth joined #gluster
07:58 SpeeR__ joined #gluster
08:00 SpeeR joined #gluster
08:01 eseyman joined #gluster
08:16 itisravi joined #gluster
08:16 SpeeR joined #gluster
08:18 SpeeR_ joined #gluster
08:19 SpeeR_ joined #gluster
08:21 blook joined #gluster
08:21 keytab joined #gluster
08:39 glusterbot` joined #gluster
08:41 SpeeR_ joined #gluster
08:42 TrDS joined #gluster
09:02 franc joined #gluster
09:02 franc joined #gluster
09:03 SpeeR joined #gluster
09:04 SpeeR joined #gluster
09:05 benjamin__ joined #gluster
09:06 SpeeR joined #gluster
09:08 SpeeR joined #gluster
09:10 SpeeR joined #gluster
09:10 benjamin__ joined #gluster
09:12 SpeeR_ joined #gluster
09:13 SpeeR joined #gluster
09:14 Frankl joined #gluster
09:15 SpeeR joined #gluster
09:16 RameshN joined #gluster
09:18 davinder2 joined #gluster
09:19 SpeeR joined #gluster
09:19 hagarth joined #gluster
09:20 vpshastry joined #gluster
09:20 kanagaraj joined #gluster
09:20 SpeeR joined #gluster
09:21 ndarshan joined #gluster
09:31 dusmant joined #gluster
09:41 mgebbe_ joined #gluster
09:50 meghanam joined #gluster
09:51 SpeeR joined #gluster
09:53 SpeeR_ joined #gluster
09:54 andreask joined #gluster
09:55 primechuck joined #gluster
09:55 SpeeR joined #gluster
09:55 mkzero joined #gluster
09:56 SpeeR_ joined #gluster
09:57 psyl0n joined #gluster
09:58 glusterbot New news from newglusterbugs: [Bug 923540] features/compress: Compression/DeCompression translator <https://bugzilla.redhat.com/show_bug.cgi?id=923540>
09:58 SpeeR joined #gluster
10:00 SpeeR joined #gluster
10:02 SpeeR joined #gluster
10:02 ricky-ticky joined #gluster
10:17 ells joined #gluster
10:18 satheesh4 joined #gluster
10:18 SpeeR joined #gluster
10:20 SpeeR joined #gluster
10:22 SpeeR joined #gluster
10:24 SpeeR joined #gluster
10:25 ndarshan joined #gluster
10:25 SpeeR joined #gluster
10:27 SpeeR_ joined #gluster
10:27 geewiz joined #gluster
10:28 glusterbot New news from newglusterbugs: [Bug 1047416] Feature request (CLI): Add options to the CLI that let the user control the reset of stats <https://bugzilla.redhat.com/show_bug.cgi?id=1047416>
10:28 geewiz Does anyone know what could cause a client log full of "LOOKUP() /path/to/some/file => -1 (Permission denied)" errors?
10:29 SpeeR joined #gluster
10:30 dusmant joined #gluster
10:31 SpeeR joined #gluster
10:31 RameshN joined #gluster
10:32 SpeeR joined #gluster
10:34 SpeeR_ joined #gluster
10:36 SpeeR joined #gluster
10:38 SpeeR joined #gluster
10:40 SpeeR joined #gluster
10:40 ndevos SpeeR: you seem to be reconnecting every minute, could you fix that?
10:41 ndevos geewiz: sounds as if a user for that volume tried to access <VOLUME>/path/to/some/file, and does not have permissions for the file or its parent dirs?
10:41 SpeeR joined #gluster
10:42 geewiz ndevos: Gluster logs all regular permissions violations?
10:43 ndevos geewiz: yeah, I think so, but maybe the log-level has been changed in recent versions
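(A quick way to chase these down is to walk the path directly on a brick and check every parent directory's mode and ownership — the brick path below is an example:)

```shell
# on a server, look for a directory the client's uid can't traverse
namei -l /bricks/gv0/path/to/some/file

# raising the client log level shows more detail about the failing lookup
gluster volume set gv0 diagnostics.client-log-level DEBUG
```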
10:43 anoopcs joined #gluster
10:43 SpeeR joined #gluster
10:45 benjamin__ joined #gluster
10:45 SpeeR joined #gluster
10:45 sahina joined #gluster
10:47 SpeeR joined #gluster
10:47 StarBeast joined #gluster
10:49 SpeeR joined #gluster
10:49 aravindavk joined #gluster
10:49 benjamin__ joined #gluster
10:50 SpeeR_ joined #gluster
10:52 SpeeR joined #gluster
10:54 SpeeR joined #gluster
10:56 SpeeR joined #gluster
10:57 masterzen joined #gluster
10:58 SpeeR joined #gluster
10:59 SpeeR joined #gluster
11:01 SpeeR joined #gluster
11:02 diegows joined #gluster
11:03 SpeeR joined #gluster
11:05 SpeeR_ joined #gluster
11:06 dusmant joined #gluster
11:06 aravindavk joined #gluster
11:07 SpeeR joined #gluster
11:08 SpeeR joined #gluster
11:10 SpeeR_ joined #gluster
11:12 SpeeR joined #gluster
11:12 vpshastry1 joined #gluster
11:14 SpeeR_ joined #gluster
11:16 SpeeR joined #gluster
11:17 SpeeR joined #gluster
11:19 SpeeR_ joined #gluster
11:20 hagarth joined #gluster
11:21 SpeeR joined #gluster
11:23 SpeeR joined #gluster
11:25 SpeeR_ joined #gluster
11:25 benjamin__ joined #gluster
11:26 SpeeR joined #gluster
11:28 SpeeR joined #gluster
11:30 edward1 joined #gluster
11:30 SpeeR joined #gluster
11:32 SpeeR_ joined #gluster
11:32 dusmant joined #gluster
11:42 TonySplitBrain joined #gluster
11:43 SpeeR joined #gluster
11:44 Frankl lots of repetitive errors:
11:44 Frankl [2014-01-24 19:43:57.267344] E [marker-quota-helper.c:230:mq_dict_set_contribution] (-->/usr/lib64/glusterfs/3.3.0.5rhs_iqiyi_6/xlator/debug/io-stats.so(io_stats_lookup+0x13e) [0x7fe3f5a88a3e] (-->/usr/lib64/glusterfs/3.3.0.5rhs_iqiyi_6/xlator/features/marker.so(marker_lookup+0x300) [0x7fe3f5c9e170] (-->/usr/lib64/glusterfs/3.3.0.5rhs_iqiyi_6/xlator/features/marker.so(mq_req_xattr+0x3c) [0x7fe3f5ca76ec]))) 0-marker: invalid argument: loc->parent
11:44 SpeeR joined #gluster
11:44 Frankl who knows what happened?
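(The marker/quota translator complaining about loc->parent usually warrants checking whether quota is meant to be enabled at all on that volume; a hedged sketch, volume name assumed:)

```shell
gluster volume info gv0 | grep -i quota   # is the quota feature enabled?
gluster volume quota gv0 list             # current limits, if any
# if quota was enabled by accident, disabling it takes the marker xlator out of the graph
gluster volume quota gv0 disable
```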
11:46 SpeeR joined #gluster
11:49 bolaz joined #gluster
11:50 SpeeR joined #gluster
11:52 SpeeR joined #gluster
11:52 dusmant joined #gluster
11:53 hagarth joined #gluster
11:53 SpeeR joined #gluster
11:54 giannello joined #gluster
11:55 SpeeR joined #gluster
11:56 primechuck joined #gluster
11:57 SpeeR joined #gluster
11:58 SpeeR joined #gluster
12:25 ira joined #gluster
12:33 ells joined #gluster
12:44 pk left #gluster
12:57 dusmant joined #gluster
12:59 Frankl joined #gluster
13:01 sinatributos joined #gluster
13:02 sinatributos Hello. Is the default value for performance.io-thread-count still 16 on the 3.4.2 version?
13:03 sinatributos I ask because I tried 32 to do some perf tests and then  switched back to 16, now when i check volume info I see:
13:03 sinatributos Options Reconfigured:
13:03 sinatributos performance.io-thread-count: 16
13:04 sinatributos I wonder if being 16 the default value this option should be listed as reconfigured or just disappear from the list.
13:04 sinatributos Thanks.
13:04 SpeeR joined #gluster
13:07 vpshastry1 left #gluster
13:09 Frankl_ joined #gluster
13:10 benjamin__ joined #gluster
13:33 sroy joined #gluster
13:37 plarsen joined #gluster
13:41 gork4life joined #gluster
13:43 [o__o] joined #gluster
13:43 xavih sinatributos: this is normal behavior. Even if you set the value to the same as the current default, it is a forced value that will not change even if the default value changes in a future version
13:43 gluslog_ joined #gluster
13:44 xavih sinatributos: if you really want to use the default value, whatever it is, even if it changes in the future, you need to reset it with 'gluster volume reset <volname> <option>'
13:44 jiqiren joined #gluster
13:44 lalatenduM sinatributos, yup the default  is 16
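(The difference xavih describes, shown concretely with a hypothetical volume gv0:)

```shell
gluster volume set gv0 performance.io-thread-count 16   # pins the value; stays under "Options Reconfigured"
gluster volume reset gv0 performance.io-thread-count    # back to tracking the built-in default
gluster volume info gv0                                 # the option should no longer be listed
```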
13:45 Amanda joined #gluster
13:46 B21956 joined #gluster
13:49 swaT30 joined #gluster
13:49 Kins joined #gluster
13:49 sac`away joined #gluster
13:49 RobertLaptop joined #gluster
13:49 atrius joined #gluster
13:49 brosner joined #gluster
13:50 compbio joined #gluster
13:51 smellis joined #gluster
13:56 diegows joined #gluster
13:56 primechuck joined #gluster
13:57 davinder joined #gluster
13:58 rwheeler joined #gluster
14:05 ira joined #gluster
14:06 ira joined #gluster
14:08 andreask joined #gluster
14:20 sinatributos lalatenduM: thanks
14:20 lalatenduM sinatributos, :)
14:21 sinatributos xavih: thanks! volume reset is the answer :-)
14:22 japuzzo joined #gluster
14:22 Jayunit100 johnmark: is james shubin in here a lot ?
14:24 primechuck joined #gluster
14:29 theron joined #gluster
14:33 malteo joined #gluster
14:34 jskinner joined #gluster
14:43 eryc joined #gluster
14:43 eryc joined #gluster
14:43 wica joined #gluster
14:46 malteo left #gluster
14:47 bennyturns joined #gluster
14:48 purpleidea Jayunit100: sometimes
14:48 purpleidea Jayunit100: please use 'purpleidea' so that my full name doesn't over-proliferate the internets, thanks
14:49 purpleidea Jayunit100: what can i help you with?
14:52 hagarth joined #gluster
14:53 Alpinist joined #gluster
14:53 Jayunit100 was gonna try to use vagrant to provision your cluster recipes to EC2.  any thoughts on if it will work?
14:54 Jayunit100 and great talk by the way purpleidea ..
14:56 benjamin__ joined #gluster
14:58 purpleidea Jayunit100: glad you enjoyed the talk. i think it would be very easy to make work
14:58 purpleidea ... on ec2
14:59 glusterbot New news from newglusterbugs: [Bug 1057645] ownership of diskimage changes during livemigration, livemigration with kvm/libvirt fails <https://bugzilla.redhat.com/show_bug.cgi?id=1057645>
15:01 khushildep joined #gluster
15:04 rprice left #gluster
15:14 gork4life hello all
15:15 gork4life I'm wanting to know how to set up failover with gluster does anyone have a link that could point me in the right direction
15:16 bolaz joined #gluster
15:16 purpleidea gork4life: set up a test cluster with ,,(vagrant) and take down nodes yourself to see how it works. gluster makes storage highly available by replicating data across hosts. usually people use N=2 repl. factor
15:16 glusterbot gork4life: (#1) Part 1 @ https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/, or (#2) Part 2 @ https://ttboj.wordpress.com/2013/12/21/vagrant-vsftp-and-other-tricks/, or (#3) Part 3 @ https://ttboj.wordpress.com/2014/01/02/vagrant-clustered-ssh-and-screen/, or (#4) Part 4 @
15:16 glusterbot https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/, or (#5) https://ttboj.wordpress.com/2014/01/16/testing-glusterfs-during-glusterfest/
15:17 gork4life purpleidea: I have two servers replicating, but I'm trying to mount nfs but each server is on different ip. I need to learn how to set up a virtual ip
15:18 gork4life purpleidea: Do you have any links that could help me
15:20 purpleidea gork4life: if you look at the above links you'll see that tht setup sets up a VIP with keepalived. it does it automatically, and you can then copy the setup.
15:21 gork4life purpleidea: Ok I'll check these out and post back thanks
15:21 purpleidea gork4life: if you just want the code, look at ,,(puppet) #1
15:21 glusterbot gork4life: (#1) https://github.com/purpleidea/puppet-gluster, or (#2) semiosis' unmaintained puppet module: https://github.com/semiosis/puppet-gluster
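(The keepalived piece that the setup above generates boils down to a VRRP config like this — the interface, password, router id, and address are placeholders:)

```
# /etc/keepalived/keepalived.conf on node1
# node2 is identical except state BACKUP and a lower priority
vrrp_instance gluster_vip {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        192.168.1.250/24
    }
}
```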
15:22 primechuck Has anyone tried NFS 4.1 genesha with a gluster backend?
15:23 purpleidea primechuck: i'm sure some have. it might be more useful if you ask a specific question
15:24 _Bryan_ joined #gluster
15:28 bugs_ joined #gluster
15:30 nullck joined #gluster
15:35 primechuck Mostly, more specific to NFS questions, but mainly does it actually work and is it stable. Haven't had time to test it yet since we committed wholeheartedly to FUSE and libgfapi :)
15:36 tg2 joined #gluster
15:37 gork4life joined #gluster
15:39 purpleidea primechuck: well, according to the maintainers file, it is maintained by anand git://github.com/nfs-ganesha/nfs-ganesha.git
15:42 dbruhn joined #gluster
15:51 benjamin__ joined #gluster
15:52 yosafbridge joined #gluster
15:55 bolaz joined #gluster
15:57 sinatributos joined #gluster
15:59 eryc joined #gluster
16:04 gork4life joined #gluster
16:04 theron_ joined #gluster
16:08 lpabon joined #gluster
16:19 jag3773 joined #gluster
16:21 LoudNoises joined #gluster
16:28 kaptk2 joined #gluster
16:31 glusterbot New news from resolvedglusterbugs: [Bug 849630] client_t implementation <https://bugzilla.redhat.com/show_bug.cgi?id=849630>
16:36 Slashmane joined #gluster
16:36 SpeeR joined #gluster
16:36 b0e joined #gluster
16:37 jmarley__ joined #gluster
16:37 pravka joined #gluster
16:38 jmarley joined #gluster
16:38 ells joined #gluster
16:39 rotbeard joined #gluster
16:45 zerick joined #gluster
16:58 ron-slc joined #gluster
16:59 khushildep_ joined #gluster
17:08 qdk joined #gluster
17:14 jag3773 joined #gluster
17:26 theron joined #gluster
17:37 b0e joined #gluster
17:38 Mo__ joined #gluster
17:40 jag3773 joined #gluster
17:42 b0e Hi, i have some questions about geo-replication. We use glusterfs-3.4.1 and geo-replication (ssh and file) for a volume with 17TB of files (mostly small files and many directories and subdirectories).  The geo-replica is very outdated (i think a couple of days; the checkpoint from thursday 4pm is not reached yet).
17:42 b0e i tried to set geo-replication.indexing to off, but w
17:43 theron joined #gluster
17:43 b0e but when i start the geo-replication, the parameter is set to on
17:43 b0e is this parameter only available if i use a gluster volume as slave?
17:46 KyleG joined #gluster
17:46 KyleG joined #gluster
17:47 lalatenduM joined #gluster
17:57 KyleG1 joined #gluster
17:59 gork4life I have two nodes that are replicated. I want to mount nfs on it so I can use it as a drive for vsphere. Do I have to create the nfs on both nodes to see the drives even if one fails
18:02 glusterbot New news from newglusterbugs: [Bug 1035586] gluster volume status shows incorrect information for brick process <https://bugzilla.redhat.com/show_bug.cgi?id=1035586>
18:04 b0e gork4life: what do you mean with 'create the nfs'. you have to enable nfs on the gluster nodes.
18:09 gork4life b0e: I have, but what I'm needing is basically to use the nfs as a drive for vmware. I'm wanting to pull the plug on one server and have the drive still connected in vsphere.
18:10 KyleG joined #gluster
18:10 KyleG joined #gluster
18:11 b0e gork4life: then you will need something like a service-ip which is switched e.g. with pacemaker if one node is down.
18:11 aixsyd joined #gluster
18:11 aixsyd dbruhn: !!
18:12 gork4life b0e: Do you mean virtual ip
18:12 aixsyd jclift: you around bro?
18:12 b0e gork4life: the nfs-file handles are the same on the glusterfs-nodes. and you should decrease the value from network.ping-timeout e.g to 5 seconds.
18:13 b0e gork4life: yes. one ip which you use for mounting nfs... if one node fails, you can move the ip to the other node
18:13 gork4life b0e: And do you assign the same virtual ip on both nodes
18:14 b0e gork4life: not on the same time.
18:14 pixelgremlins joined #gluster
18:15 gork4life b0e: Ok I'll look at pacemaker and I'll post the results
18:15 b0e gork4life: if everything is ok, the ip is up at node1. if node1 fails, the ip is moved to node2. you need software like heartbeat or pacemaker to switch the ip.
18:15 b0e gork4life: hf.
18:16 gork4life b0e: thanks
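(Putting b0e's advice together — shorten the failover window and mount through the floating address; the 5-second timeout is his suggestion and should be tuned with care, since very short timeouts can cause spurious disconnects:)

```shell
gluster volume set gv0 network.ping-timeout 5

# vSphere (or any NFS client) mounts via the VIP, never a node's own address;
# gluster's built-in NFS server speaks NFSv3 over TCP
mount -t nfs -o vers=3,proto=tcp 192.168.1.250:/gv0 /mnt/vmstore
```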
18:16 aixsyd anyone know anything about Voltaire IB switches?
18:17 pixelgremlins I have two servers: Apollo and Chronos -- can ssh/scp.. they're mapped to local lan ip addresses communicating easily enough... --- I installed glusterfs on both, ran gluster peer probe chronos -- from apollo, then gluster peer status... it says num peers: 1, hostname:chronos.. UUid... State: Accepted peer request(Connected)
18:18 pixelgremlins when I run : # gluster volume create www replica 2 transport tcp apollo:/var/export/www chronos:/var/export/www I get "Host chronos not connected"
18:19 zaitcev joined #gluster
18:21 semiosis pixelgremlins: ,,(hostnames)
18:21 glusterbot pixelgremlins: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
18:21 semiosis probe both ways
18:21 semiosis by name
18:21 pixelgremlins so I should probe apollo from chronos too?
18:21 nage joined #gluster
18:21 nage joined #gluster
18:21 semiosis for the 3rd time, yes ;)
18:22 semiosis also note peer status lists other peers not the local machine
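(The two-way probe semiosis describes, spelled out with the hostnames from this setup:)

```shell
# on apollo
gluster peer probe chronos

# on chronos -- this second probe is what replaces apollo's IP with its hostname
gluster peer probe apollo

# on either node, the peer should now show "Peer in Cluster (Connected)"
gluster peer status
gluster volume create www replica 2 transport tcp apollo:/var/export/www chronos:/var/export/www
```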
18:22 pixelgremlins I was using csync2 + lsyncd to mirror linodes, but caused some issues, and failed at times..trying this as a method to better replicate
18:23 semiosis you really should use replica 3 with quorum if your servers are also clients
18:23 semiosis or make the clients read-only & just use one mount to write into the volume, if using replica 2
18:23 semiosis these are ways to surely prevent split-brain
18:24 semiosis there are other higher level ways to do that, but i dont know anything about your use case
18:26 pixelgremlins I have two app servers... split via loadbalancer -- each has local caches/file uploads which need synching between servers.. I'm working to eventually put those all on a centralized storage server--anything that is write-intensive. But for now I need a solution for mirroring both servers-- ie content upload  in chronos:/www must immediately appear on apollo:/www and vice versa
18:27 semiosis as long as you never have the same file written on both servers you should be ok
18:27 semiosis split brain could occur if the servers cant reach each other, but they can be reached from the load balancer, and a request comes in to both servers causing the same file to be created/modified
18:28 semiosis when the servers reconnect to each other that file will be split-brained, unable to be healed automatically
18:28 semiosis best to avoid this situation by design, to make it impossible, using quorum
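(Quorum as semiosis suggests: with replica 3, client-side quorum refuses writes unless a majority of replicas are reachable, which closes the conflicting-write window. Option names are per the 3.4-era docs:)

```shell
gluster volume set www cluster.quorum-type auto           # client-side: writes need a majority of replicas
gluster volume set www cluster.server-quorum-type server  # server-side: bricks stop if glusterd loses majority
```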
18:29 pixelgremlins ok, thanks - will have to research that option. I'm new to devops, and glusterfs.
18:30 KyleG left #gluster
18:31 blook joined #gluster
18:33 diegows joined #gluster
18:33 aixsyd I have no idea what I'm doing with this switch v.v
18:42 dbruhn aixsyd, what do you mean?
18:45 aixsyd dbruhn: i got that voltaire in, and I cannot seem to factory reset it, connect via management, etc
18:45 aixsyd its laughing at me v.v
18:46 aixsyd theres TWO reset buttons on the front, one actually feels like a reset, the other is much smaller and doesnt feel like theres anything in there to press
18:48 aixsyd its got a mini-usb port for a console port o.O
18:48 semiosis managed switches often have a keyboard interrupt during boot which can be used to reset
18:48 semiosis via serial console
18:49 aixsyd it has no RS-232 port
18:49 semiosis aixsyd: usb serial adapter
18:49 semiosis guessing here
18:49 semiosis have you tried... RTFM?
18:50 aixsyd semiosis: yep, not a thing in it about a factory reset
18:50 aixsyd nor keyboard interrupt
18:50 semiosis :(
18:50 semiosis i'd call the mfg
18:50 semiosis demand support
18:55 SpeeR joined #gluster
18:57 Gugge joined #gluster
18:57 sroy__ joined #gluster
19:08 dbruhn Sorry stepped away for the sec.
19:09 dbruhn is it an HP branded one? or what is the brand that was slapped on it
19:10 dbruhn what model is that again?
19:11 dbruhn looks like the 4036's reset is to hold the reset switch during boot for 60 sec.
19:32 blook joined #gluster
19:38 Staples84 joined #gluster
19:44 SpeeR joined #gluster
19:45 Gugge joined #gluster
19:58 diegows joined #gluster
20:06 SpeeR joined #gluster
20:13 thefiguras joined #gluster
20:16 SpeeR joined #gluster
20:19 sroy_ joined #gluster
20:27 aixsyd joined #gluster
20:27 aixsyd dbruhn: I found its IP - but theres a password for the enable command. its not default. the reset button doesnt reset the configs. wat do?
20:28 dbruhn hey bud
20:28 dbruhn is that an HP branded one?
20:28 dbruhn or a mellenox branded one?
20:29 aixsyd Voltaire
20:29 B21956 joined #gluster
20:30 blook joined #gluster
20:30 aixsyd i checked all the manuals and I called Mellanox, and no one there knew of a way to reset factory defaults without the admin password
20:30 aixsyd which is SHOCKING to me.
20:31 aixsyd essentially, they told me that i have a bricked device.
20:31 dbruhn ISR 9024?
20:31 aixsyd yes
20:32 dbruhn is the switch not passing data?
20:32 aixsyd well a) its on a completely wrong subnet.
20:32 aixsyd so i'd like to config it from the ground up
20:32 dbruhn Ahh ok
20:32 aixsyd cant w/o enable password
20:33 sroy_ joined #gluster
20:33 dbruhn I'll be honest the one I bought I never even logged into, I just plugged it in, and it was passing traffic
20:33 dbruhn seeing if I can find something
20:34 dbruhn Does that not have a serial port on the back of it?
20:34 aixsyd no serial, has 2x ethernet management ports and a mini usb serial port
20:34 aixsyd i'm IN it. I'm connected to it
20:34 SpeeR joined #gluster
20:34 aixsyd the admin password was default, but the enable password wasnt
20:35 dbruhn what IP address is it at?
20:35 aixsyd 192.168.20.72
20:36 dbruhn are you sure both of those ports are ethernet? or is one ethernet, and the other a rj45 console port?
20:36 dbruhn Mine is a 45 min drive away, otherwise I'd go grab it and take a look
20:37 aixsyd 1000% sure theyre both ethernet.
20:37 dbruhn I am actually certain mine has a DB9 connector though, because it came and was a little bent up
20:37 aixsyd yeah, no db9 for me
20:38 aixsyd dbruhn: this is (i hope) a silly question - i can pass IPoIB through this switch, yes?
20:39 SpeeR_ joined #gluster
20:39 dbruhn Yep, the IPoIB is independent of the switch
20:39 aixsyd okay - so i can ibping, but not tcp ping
20:40 dbruhn gah, I am sitting here looking at the manual and it mentions a serial port connector too
20:40 dbruhn http://h20628.www2.hp.com/km-ext/kmcsdirect/emr_na-c02013942-1.pdf
20:40 aixsyd there IS a serial port - but it wants a mini usb plug to rs-232
20:41 dbruhn Oh gross
20:41 aixsyd yup
20:41 aixsyd how would that help me?
20:41 aixsyd if i was able to use that
20:42 dbruhn typically switches and other networking devices can be interrupted on boot and reset to factory defaults
20:42 aixsyd thats exactly right
20:42 aixsyd but... how dafuq?
20:45 pravka joined #gluster
20:46 aixsyd i dont get it - how the heck can i ibping but not tcp ping? D:
20:46 andreask joined #gluster
20:46 dbruhn that is weird
20:46 aixsyd would it help to uninstall opensm? XD
20:47 dbruhn Are you sure the SM is running on the switch?
20:47 aixsyd nope. no way to know
20:48 aixsyd er wait
20:48 aixsyd it says "has SM: No"
20:48 aixsyd SMState: Not Active
20:48 dbruhn So enable your server run SM and see if you can ping then
20:49 aixsyd i cant.. i have no access to enable
20:49 aixsyd i'm completely stuck
20:49 dbruhn No I mean the subnet manager you were running on your server
20:49 dbruhn not the one on the switch
20:49 aixsyd i ran opensm on two nodes, still cant ping
20:49 davidjpeacock joined #gluster
20:49 SpeeR joined #gluster
20:49 dbruhn but you can run ibping?
20:50 aixsyd yep
20:50 dbruhn are you sure the IP interfaces are up? Mine won't come up without a SM running on the network
20:50 dbruhn if you run ifup ib0
20:50 dbruhn or whatever starts the interface for ethernet
20:50 aixsyd one sec...
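[editor's note: the interface bring-up dbruhn is describing looks roughly like the following; the interface name ib0 and the address are assumptions, chosen to match the 192.168.20.x subnet mentioned above, and the commands need root.]

```shell
# Rough sketch of bringing up an IPoIB interface by hand.
# ib0 and the address below are assumptions, not from the log.
modprobe ib_ipoib                      # load the IPoIB driver if not loaded
ip link set ib0 up                     # what "ifup ib0" amounts to
ip addr add 192.168.20.10/24 dev ib0   # address on the switch's subnet
ip -br link show ib0                   # link won't go fully up without an active SM
```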
20:52 rwheeler joined #gluster
20:55 aixsyd if i have 5 IB nodes on this switch, how many of those nodes should be running opensm?
20:55 dbruhn You need one subnet manager per network
20:55 aixsyd so only on one node?
20:55 dbruhn Yep
20:56 dbruhn I'm sure I've shared this with you, but this is a good reference resource for IB, http://pkg-ofed.alioth.debian.org/howto/infiniband-howto-4.html
20:56 glusterbot Title: Infiniband HOWTO: Setting up a basic infiniband network (at pkg-ofed.alioth.debian.org)
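[editor's note: whether a subnet manager is active can be read from the port state in `ibstat` output - ports sit in Initializing until an SM brings them Active. A small sketch of checking that; the helper name and sample text are hypothetical illustrations.]

```shell
# sm_active: succeed if ibstat-style output shows the port Active,
# which requires a running subnet manager somewhere on the fabric.
sm_active() {
    echo "$1" | grep -q 'State: Active'
}

# Without an SM, ports stay in Initializing; with one, they go Active.
no_sm='Port 1:
        State: Initializing
        Physical state: LinkUp'
with_sm='Port 1:
        State: Active
        Physical state: LinkUp'

sm_active "$no_sm"   || echo "no subnet manager: port not active"
sm_active "$with_sm" && echo "subnet manager running: port active"
```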
20:59 pravka joined #gluster
20:59 aixsyd opensm is in standby state
21:02 aixsyd interfaces configured. theyre up. opensm only on one node, says in standby mode.
21:02 aixsyd cant ping.
21:03 aixsyd i can ibping though
21:06 aixsyd jesus christ, if I could only reset the DAMN SWITCH like any other enterprise-grade network device
21:09 aixsyd am I crazy for thinking that resetting a switch to defaults should be possible?
21:10 semiosis aixsyd: Could you please keep your voice down--this is a family restaurant.
21:14 dbruhn aixsyd, what do you get when you run ifconfig?
21:15 pravka joined #gluster
21:15 aixsyd it shows that its up
21:16 aixsyd with the right IP
21:16 dbruhn and the server you are trying to ping?
21:16 aixsyd same deal
21:17 aixsyd dbruhn: i'm giving up for now. ill come back to it monday. too stressed out.
21:17 dbruhn kk
21:18 aixsyd jclift: if you know of a way to factory reset an ISR 9024M, e-mail me! aixsyd@gmail.com
21:24 smellis i've got a bunch of vm images on a volume, while the vms are running there is always something in the output of volume heal info, is this normal?  If so, how do I know when it's ok to reboot one of the brick hosts?
21:27 pravka joined #gluster
21:28 smellis if I shut all of the vms down, it eventually clears out, but it would be nice to know for sure about the consistency across the bricks
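[editor's note: the heal state smellis is watching comes from the heal info commands; a sketch, where "vmstore" is a made-up volume name. Entries listed by the first command still have pending heals; the second shows whether any are actually split-brain.]

```shell
# Per-brick list of entries that still need healing
gluster volume heal vmstore info
# Entries that are genuinely split-brain (not just pending writes)
gluster volume heal vmstore info split-brain
```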
21:28 semiosis smellis: nfs or fuse client?
21:28 smellis fuse client
21:28 smellis the brick hosts are also the clients
21:28 semiosis are you *sure* it's connected to all the bricks?
21:29 smellis I am
21:29 semiosis maybe due to write behind?
21:29 semiosis not sure
21:29 smellis I have write behind enabled on the hardware raid controllers
21:29 semiosis glusterfs write behind
21:29 smellis oh, gotcha
21:30 smellis is that option documented somewhere
21:30 smellis i've looked at performance.flush-behind
21:30 smellis but on or off doesn't seem to make a difference
21:31 smellis oh crap, i see it now
21:31 smellis don't know how I missed that
21:31 smellis what kind of implications does that have for performance?
21:32 smellis is it pretty severe if I disable it?
21:36 pravka joined #gluster
21:37 77CAA6PE5 joined #gluster
21:41 smellis looks like 3.4 introduced performance enhancements for the write-behind translator, which sounds good.
21:41 smellis do the fuse clients need to remount in order for that option to take effect or can I just change it on the fly?
21:46 rnathuji joined #gluster
21:47 qdk joined #gluster
21:48 semiosis smellis: config changes are dynamic, online
21:48 semiosis smellis: does disabling flush-behind resolve your heal issue?
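[editor's note: the toggle semiosis suggests is a one-liner; as noted above, volume options apply online with no client remount. "vmstore" is a made-up volume name.]

```shell
# Disable flush-behind and confirm the change took
gluster volume set vmstore performance.flush-behind off
gluster volume info vmstore | grep flush-behind
```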
21:49 smellis semiosis: understood, thanks for the info.
21:49 smellis semiosis: I am worried about doing it right now, because I don't want to destroy performance
21:49 semiosis see also ,,(undocumented options)
21:49 glusterbot Undocumented options for 3.4: http://www.gluster.org/community/documentation/index.php/Documenting_the_undocumented
21:49 semiosis understood
21:51 SpeeR joined #gluster
21:54 smellis i've looked at that page, wonder what the default is for strict-write-ordering and what the implications are there.   The 3.4 release notes mention causal ordering for the write behind translator
21:55 purpleidea semiosis: o hai
21:55 semiosis hey
21:57 purpleidea you want to send me info on that option you need?
22:02 semiosis https://github.com/semiosis/libgfapi-jni#compiling-and-testing
22:02 glusterbot Title: semiosis/libgfapi-jni · GitHub (at github.com)
22:02 semiosis "First we need to manually fix glusterd. Edit /etc/glusterfs/glusterd.vol and add this line...."
22:03 semiosis that part specifically
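[editor's note: the glusterd.vol change that README step describes amounts to adding one option line inside the management volume block and restarting glusterd; the block below is abbreviated, a real glusterd.vol may carry more options.]

```
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    # line the libgfapi-jni README asks you to add:
    option rpc-auth-allow-insecure on
end-volume
```

Per the same README, unprivileged libgfapi clients typically also need `gluster volume set <vol> server.allow-insecure on` on the volume itself.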
22:03 semiosis brb coffee
22:04 purpleidea semiosis: which line?
22:04 purpleidea nm
22:07 semiosis purpleidea: get what i'm saying/
22:07 semiosis ?
22:09 SpeeR joined #gluster
22:12 purpleidea semiosis: ya
22:13 beeeerock joined #gluster
22:13 purpleidea semiosis: give me ten min to patch puppet-gluster...
22:14 purpleidea semiosis: what do you think the option name should be called? "rpcauthallowinsecure" ?
22:19 ricky-ti1 joined #gluster
22:31 jflilley joined #gluster
22:32 jflilley left #gluster
22:37 semiosis don't ask me about puppet module design! :P
22:38 semiosis my idea of puppet module design is "copy this file from puppetmaster to the instance"
22:38 semiosis option names, pfft
22:38 semiosis whatever you want to call it is fine with me
22:38 semiosis i gotta split
22:38 semiosis afk for a couple days
22:38 semiosis thanks!
22:40 purpleidea semiosis: cool. i'll ping you something later
22:49 blook joined #gluster
23:06 blook2nd joined #gluster
23:16 jbrooks joined #gluster
23:33 theron joined #gluster