
IRC log for #gluster, 2014-10-21


All times shown according to UTC.

Time Nick Message
00:11 David_H_Smith joined #gluster
00:14 David_H_Smith joined #gluster
00:24 David_H_Smith joined #gluster
00:28 David_H_Smith joined #gluster
00:40 David_H_Smith joined #gluster
00:43 David_H_Smith joined #gluster
00:46 David_H_Smith joined #gluster
00:52 meghanam_ joined #gluster
00:52 meghanam joined #gluster
01:00 calisto joined #gluster
01:20 David_H_Smith joined #gluster
01:21 msmith_ joined #gluster
01:22 David_H_Smith joined #gluster
01:26 David_H_Smith joined #gluster
01:34 David_H_Smith joined #gluster
01:39 David_H_Smith joined #gluster
01:45 David_H_Smith joined #gluster
01:46 David_H_Smith joined #gluster
01:47 msmith joined #gluster
01:54 David_H_Smith joined #gluster
01:55 dtrainor joined #gluster
01:58 David_H_Smith joined #gluster
01:59 David_H_Smith joined #gluster
02:05 harish joined #gluster
02:17 glusterbot New news from newglusterbugs: [Bug 1149943] duplicate librsync code should likely be removed and linked as a library <https://bugzilla.redhat.com/show_bug.cgi?id=1149943>
02:36 David_H_Smith joined #gluster
02:54 dtrainor joined #gluster
03:11 David_H__ joined #gluster
03:12 sijis joined #gluster
03:19 kdhananjay joined #gluster
03:31 bala joined #gluster
03:36 rejy joined #gluster
03:52 itisravi joined #gluster
03:56 overclk joined #gluster
03:58 dtrainor joined #gluster
04:06 kshlm joined #gluster
04:06 dtrainor joined #gluster
04:06 RobertLaptop joined #gluster
04:14 elico joined #gluster
04:15 harish joined #gluster
04:15 cjanbanan joined #gluster
04:22 bharata-rao joined #gluster
04:24 David_H_Smith joined #gluster
04:25 David_H_Smith joined #gluster
04:27 prasanth_ joined #gluster
04:36 kanagaraj joined #gluster
04:37 smohan joined #gluster
04:38 rafi1 joined #gluster
04:38 Rafi_kc joined #gluster
04:41 kdhananjay joined #gluster
04:45 kumar joined #gluster
04:48 spandit joined #gluster
04:51 dtrainor joined #gluster
04:52 lalatenduM joined #gluster
04:54 deepakcs joined #gluster
04:55 ramteid joined #gluster
04:58 dtrainor joined #gluster
04:58 spandit joined #gluster
05:04 ndarshan joined #gluster
05:05 topshare joined #gluster
05:05 saurabh joined #gluster
05:13 anoopcs joined #gluster
05:13 anoopcs joined #gluster
05:18 bala joined #gluster
05:23 jiffin joined #gluster
05:25 nbalachandran joined #gluster
05:25 kdhananjay joined #gluster
05:25 kshlm joined #gluster
05:25 hagarth joined #gluster
05:26 raghu joined #gluster
05:29 vimal joined #gluster
05:31 soumya joined #gluster
05:47 aravindavk joined #gluster
05:50 nishanth joined #gluster
05:53 atalur joined #gluster
05:56 msmith joined #gluster
06:08 rgustafs joined #gluster
06:17 RaSTar joined #gluster
06:21 Slydder joined #gluster
06:21 Slydder morning all
06:22 atinmu joined #gluster
06:26 dusmant joined #gluster
06:32 ppai joined #gluster
06:36 topshare joined #gluster
06:36 Fen2 joined #gluster
06:39 kshlm joined #gluster
06:39 Slydder ndevos: you there? I need that bind address patch and a link to the debian source package if you happen to know where it is. If not a debian .dsc would work.
06:40 rolfb joined #gluster
06:40 kshlm joined #gluster
06:43 kaushal_ joined #gluster
06:47 VeggieMeat joined #gluster
06:50 karnan joined #gluster
06:51 meghanam joined #gluster
06:53 ndevos Slydder: you need 2 patches:
06:54 ndevos http://review.gluster.org/#/c/8910/
06:54 glusterbot Title: Gerrit Code Review (at review.gluster.org)
06:54 ndevos http://review.gluster.org/#/c/8908/
06:54 glusterbot Title: Gerrit Code Review (at review.gluster.org)
06:54 ndevos those commit ids match the ones in the glusterfs source repository
06:56 ndevos Debian packages are here, not sure if the sources are there too: http://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.2/Debian/
06:56 glusterbot Title: Index of /pub/gluster/glusterfs/3.5/3.5.2/Debian (at download.gluster.org)
06:58 Slydder ndevos: thanks
06:59 kshlm joined #gluster
07:00 dtrainor joined #gluster
07:03 nellemann left #gluster
07:07 dusmant joined #gluster
07:09 cjanbanan joined #gluster
07:14 ctria joined #gluster
07:15 soumya joined #gluster
07:20 soumya_ joined #gluster
07:20 nellemann joined #gluster
07:21 kanagaraj joined #gluster
07:22 ndevos Slydder: oh, at least one of those patches does not cleanly apply to 3.5, I'll post the 3.5 versions in a bit
07:25 prasanth_ joined #gluster
07:45 msmith joined #gluster
07:46 ndevos Slydder: wget http://paste.fedoraproject.org/143718/14138775/raw/ to get the two patches for 3.5
07:48 glusterbot New news from newglusterbugs: [Bug 1149857] Option transport.socket.bind-address ignored <https://bugzilla.redhat.com/show_bug.cgi?id=1149857>
07:53 shubhendu joined #gluster
08:00 liquidat joined #gluster
08:00 Slydder ndevos: I did a quilt patch with the unified 8910 + 8908 version of glusterd-utils.c so didn't need the 2 patches. generated a single patch.
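(A rough sketch of that quilt workflow inside the unpacked Debian source tree; the patch name is illustrative, the file path is from the glusterfs tree:)

    quilt new bind-address.patch
    quilt add xlators/mgmt/glusterd/src/glusterd-utils.c
    # apply the combined 8910+8908 changes to the file, then:
    quilt refresh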
08:02 Slydder oh oh. fatal error: glusterd-messages.h: No such file or directory
08:03 kdhananjay joined #gluster
08:08 ndevos Slydder: that file gets generated when you run ./autogen.sh (I think)
08:08 Slydder hmmm. then maybe autogen.sh should be called during the debian build.
08:08 msmith joined #gluster
08:09 ndevos maybe, depends on how the tar.gz was created, I think the glusterd-messages.h should be included by default...
08:10 ndevos Slydder: hmm, I don't have any references to glusterd-messages.h in my 3.5 source tree?
08:14 smohan_ joined #gluster
08:15 smohan_ joined #gluster
08:19 Slydder looks like the unified patches shown are based on a newer release.
08:24 nshaikh joined #gluster
08:26 cjanbanan joined #gluster
08:27 Slydder ndevos: how the hell do you just get the patch from that damn site?
08:28 Slydder ndevos: just saw your link above. thanks
08:29 Slashman joined #gluster
08:31 ndevos Slydder: gerrit isn't the most friendly, but it lists a download command in the patch details
08:31 ndevos but well, I thought the fpaste url would be easier for you ;)
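(For reference, gerrit's download command boils down to fetching the change ref; a minimal sketch for change 8910 — the patchset number 1 is an assumption, check the change page:)

    # from inside a glusterfs checkout:
    git fetch http://review.gluster.org/glusterfs refs/changes/10/8910/1
    git format-patch -1 FETCH_HEAD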
08:41 anands joined #gluster
08:43 Slydder ndevos: which version did you patch against? I am trying to patch 3.5.2. Note that the offsets are, well, off.
08:47 rjoseph joined #gluster
08:47 Slydder patch seems to have taken. building now.
08:48 nellemann left #gluster
08:51 Slydder so. build works now.
08:59 vimal joined #gluster
09:00 karnan joined #gluster
09:04 overclk joined #gluster
09:07 cjanbanan Is there any way to determine the proper path to use for filters? None of my filters are called so I guess my path is wrong.
09:11 cjanbanan Any verbose level to use in order to get some info in a log file?
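(If memory serves, glusterd runs executables from a version-specific filter directory, passing each one the generated volfile path as its argument; a sketch — the libdir varies by distro and build, so the path here is an assumption:)

    # hypothetical location for a 3.5.2 build on a lib64 distro:
    install -m 0755 myfilter.sh /usr/lib64/glusterfs/3.5.2/filter/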
09:20 harish joined #gluster
09:25 smohan joined #gluster
09:27 RaSTar joined #gluster
09:28 giannello joined #gluster
09:30 tryggvil joined #gluster
09:38 ndevos Slydder: it's against the current 3.5 branch, so 3.5.3beta1 and a little beyond
09:38 ndevos Slydder: I plan to include those patches in the next 3.5 release, so any feedback on them would be good :)
09:39 64MAAT5JI joined #gluster
09:45 ndarshan joined #gluster
09:46 Slydder patches and builds fine with 3.5.2. deb package is built and installed on 3 systems atm.
09:46 Slydder binding works fine but still need to create a new volume to see if shd and all works as hoped.
09:49 glusterbot New news from newglusterbugs: [Bug 1155027] Excessive logging in the self-heal daemon after a replace-brick <https://bugzilla.redhat.com/show_bug.cgi?id=1155027>
09:53 dusmant joined #gluster
09:57 Slydder volume creation works great and shd, nfs and all are running as they should
09:58 rwheeler joined #gluster
10:02 anoopcs1 joined #gluster
10:02 anoopcs1 joined #gluster
10:06 ndevos Slydder++ cool, thanks for testing!
10:07 glusterbot ndevos: Slydder's karma is now 1
10:07 Slydder ndevos: seems to be working great. now just have to find actual info on extending a running replicated volume without shutting it down.
10:08 dusmant joined #gluster
10:09 ndevos Slydder: you can use 'gluster volume add-brick $volume $new_brick_a $new_brick_b'
10:09 ndevos Slydder: well, you probably need to use 'gluster --remote-host=$IP ...'
10:10 Slydder you mean I have to add 2 new bricks? why that?
10:11 Slydder got it.
10:12 Slydder have to tell it the new replica count as well.
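(Put together, growing a 2-way replica to 3-way looks roughly like this; volume and brick names are illustrative:)

    gluster volume add-brick myvol replica 3 server3:/export/brick1
    gluster volume info myvol   # "Number of Bricks" should now read 1 x 3 = 3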
10:16 VeggieMeat joined #gluster
10:18 ppai joined #gluster
10:22 kshlm joined #gluster
10:27 kshlm joined #gluster
10:28 VeggieMeat joined #gluster
10:28 kshlm joined #gluster
10:35 overclk joined #gluster
10:41 Slydder ndevos: just tested the HA gluster cluster. works perfectly now that the address binding is as it should be. ;)
10:42 RaSTar joined #gluster
10:42 Slydder corosync controls IP and GlusterFS resources on 2 gfs servers on the backend using a raid 10 subsystem.
10:44 Slydder now we just need a sync_with option (defaults to all) so that we can force a single sync stream to a single server that then syncs to the others in the cluster.
10:52 calum_ joined #gluster
10:55 kkeithley1 joined #gluster
10:57 kshlm joined #gluster
10:58 overclk_ joined #gluster
11:06 dusmant joined #gluster
11:07 virusuy joined #gluster
11:10 Fen1 joined #gluster
11:20 ppai joined #gluster
11:25 overclk joined #gluster
11:26 ndevos Slydder: nice!
11:27 DV_ joined #gluster
11:28 ndevos REMINDER: Gluster Bug Triage meeting starts in 30 minutes, see https://public.pad.fsfe.org/p/gluster-bug-triage
11:29 glusterbot Title: FSFE Etherpad: public instance (at public.pad.fsfe.org)
11:36 DV__ joined #gluster
11:37 marcoceppi joined #gluster
11:37 _NiC joined #gluster
11:37 marcoceppi joined #gluster
11:37 churnd joined #gluster
11:37 dblack joined #gluster
11:37 kanagaraj joined #gluster
11:37 ndevos joined #gluster
11:37 ndevos joined #gluster
11:37 overclk joined #gluster
11:37 RioS2 joined #gluster
11:39 smohan_ joined #gluster
11:41 edward1 joined #gluster
11:42 calisto joined #gluster
11:43 prasanth_ joined #gluster
11:43 anoopcs joined #gluster
11:44 diegows joined #gluster
11:46 dblack joined #gluster
11:46 ndevos joined #gluster
11:46 ndevos joined #gluster
11:46 edward1 joined #gluster
11:46 prasanth_ joined #gluster
11:46 anoopcs joined #gluster
11:49 anoopcs joined #gluster
11:55 overclk joined #gluster
12:00 ndevos REMINDER: Gluster Bug Triage meeting starting now in #gluster-meeting
12:00 ira joined #gluster
12:00 meghanam joined #gluster
12:01 soumya_ joined #gluster
12:04 _dist joined #gluster
12:04 bennyturns joined #gluster
12:05 anands joined #gluster
12:10 jbrooks joined #gluster
12:10 dusmant joined #gluster
12:13 ppai joined #gluster
12:18 rjoseph joined #gluster
12:21 mojibake joined #gluster
12:24 hagarth joined #gluster
12:25 LebedevRI joined #gluster
12:36 dusmant joined #gluster
12:40 B21956 joined #gluster
12:45 Guest53473 joined #gluster
12:54 rgustafs joined #gluster
12:54 diegows joined #gluster
12:54 theron joined #gluster
12:57 julim joined #gluster
13:00 bala joined #gluster
13:14 coredump joined #gluster
13:19 smohan joined #gluster
13:48 theron joined #gluster
13:50 freemanbrandon joined #gluster
13:51 msmith joined #gluster
13:56 spandit joined #gluster
13:59 freemanbrandon joined #gluster
14:00 theron joined #gluster
14:04 freemanbrandon joined #gluster
14:11 tdasilva joined #gluster
14:13 daxatlas joined #gluster
14:14 glusterbot New news from resolvedglusterbugs: [Bug 950048] [RHEV-RHS]: "gluster volume sync" command not working as expected <https://bugzilla.redhat.com/show_bug.cgi?id=950048> || [Bug 765401] Add a method to resolve peers in rejected state due to volume checksum difference <https://bugzilla.redhat.com/show_bug.cgi?id=765401>
14:17 bene joined #gluster
14:18 jbrooks joined #gluster
14:19 dusmant joined #gluster
14:20 glusterbot New news from newglusterbugs: [Bug 1155181] Lots of compilation warnings on OSX. We should probably fix them. <https://bugzilla.redhat.com/show_bug.cgi?id=1155181> || [Bug 1152617] Documentation bug for glfs_set_volfile_server <https://bugzilla.redhat.com/show_bug.cgi?id=1152617>
14:25 jbautista- joined #gluster
14:28 failshell joined #gluster
14:32 davidhadas joined #gluster
14:33 jdarcy joined #gluster
14:34 topshare joined #gluster
14:34 Pupeno joined #gluster
14:35 topshare joined #gluster
14:46 theron joined #gluster
14:47 theron joined #gluster
14:51 DoctorO joined #gluster
14:51 jobewan joined #gluster
14:52 bennyturns joined #gluster
14:53 ninkotech__ joined #gluster
14:53 ninkotech_ joined #gluster
14:53 topshare joined #gluster
14:57 calisto joined #gluster
14:59 PeterA joined #gluster
15:02 merlink joined #gluster
15:04 bala joined #gluster
15:10 Pupeno joined #gluster
15:13 kanagaraj joined #gluster
15:17 kumar joined #gluster
15:19 plarsen joined #gluster
15:20 dtrainor joined #gluster
15:22 bene2 joined #gluster
15:24 bala joined #gluster
15:31 Slydder joined #gluster
15:33 Slydder hey all. have a strange situation. added a new brick to a 2 brick replication (now 3) but it doesn't seem to want to start on the new node. glusterd starts but no shd, fuse or nfs. ideas?
15:35 Slydder sorry. I am officially declaring myself an idiot. never mind. it helps if you actually read what gluster volume status actually tells you.
15:38 DoctorO joined #gluster
15:43 Philambdo joined #gluster
15:51 _Bryan_ joined #gluster
15:56 andreask joined #gluster
16:03 tryggvil joined #gluster
16:14 meghanam joined #gluster
16:15 R0ok_|kejani joined #gluster
16:16 rjoseph joined #gluster
16:26 thermo44 joined #gluster
16:27 elico joined #gluster
16:30 Pupeno_ joined #gluster
16:33 Pupeno joined #gluster
16:36 anoopcs joined #gluster
16:45 hagarth joined #gluster
16:45 lmickh joined #gluster
16:46 fattaneh1 joined #gluster
16:48 Pupeno joined #gluster
16:53 sputnik13 joined #gluster
17:03 dtrainor joined #gluster
17:04 theron joined #gluster
17:13 zerick joined #gluster
17:14 dtrainor joined #gluster
17:15 brettnem joined #gluster
17:40 davidhadas joined #gluster
17:46 failshell joined #gluster
17:48 17SAAN94R joined #gluster
17:50 nellemann joined #gluster
17:50 Pupeno_ joined #gluster
17:52 neofob joined #gluster
17:53 soumya_ joined #gluster
18:07 freemanb_ joined #gluster
18:09 mojibake joined #gluster
18:10 davemc joined #gluster
18:14 ekuric joined #gluster
18:21 glusterbot New news from newglusterbugs: [Bug 1152265] Documentation Update to Warn about localhost NFS mounts <https://bugzilla.redhat.com/show_bug.cgi?id=1152265>
18:23 theron joined #gluster
18:35 semiosis https://twitter.com/vbellur/status/524461322218401793
18:35 glusterbot Title: Vijay Bellur on Twitter: "From @chitika: "In short, Gluster is the crux of our storage architecture" - http://t.co/CuTO1LEiLa" (at twitter.com)
18:41 lpabon joined #gluster
18:45 hchiramm_ joined #gluster
18:46 freemanbrandon joined #gluster
19:09 theron joined #gluster
19:15 16WAAAEAQ joined #gluster
19:15 1JTAAQ5CI joined #gluster
19:24 freemanbrandon joined #gluster
19:26 failshell im currently investigating a really odd issue. we're using 3.5.2 and ctdb 1.0.114. our servers are configured using DHCP. and they keep losing their DNS. Why? Because when they need to renew their lease, they send a DHCPDECLINE to the DHCP server. something in ctdb is messing with dhcp.
19:29 semiosis failshell: this may not be the best place to find help with ctdb/dhcp
19:30 semiosis but idk where else to suggest you ask
19:30 failshell semiosis: i figured since its the recommended way to run samba over gluster
19:30 semiosis maybe ask samba people?
19:30 failshell and its documented on gluster.org
19:30 semiosis oh
19:30 semiosis link?
19:30 failshell https://download.gluster.org/pub/gluster/glusterfs/doc/Gluster_CTDB_setup.v1.pdf
19:31 failshell https://download.gluster.org/pub/gluster/glusterfs/doc/HA%20and%20Load%20Balancing%20for%20NFS%20and%20SMB.odt
19:31 failshell its also part of RHS
19:33 semiosis i know the mtime on that pdf is from last year, but i suspect that document is many years old
19:33 failshell its still current as that's the official way of having a VIP for NFS or Samba
19:33 semiosis craig carl hasn't worked at gluster since 2011
19:33 failshell its still packaged that way in RHS
19:34 failshell https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/ch09s04.html
19:34 glusterbot Title: 9.4. Configuring Automated IP Failover for NFS and SMB (at access.redhat.com)
19:34 failshell newer document, if you have access to the site
19:34 semiosis are you a RHS customer?
19:34 semiosis not that it matters
19:34 failshell our licenses are going to expire soon
19:34 failshell we're switching to the OSS version
19:35 failshell pricing is way too expensive. and it was limiting us to have one setup
19:35 semiosis ok, that's all well and good, and i wish i could help, but to be honest, ctdb doesnt come up much here
19:35 semiosis maybe someone lurking can help you, but i doubt the regulars have much (if any) experience with it
19:36 failshell yeah sadly we have some windows machines that need access to some of that data
19:36 semiosis so i'm just suggesting that, in addition to asking here, you might want to try a samba channel/forum as well
19:36 failshell the bane of my life :)
19:36 failshell yeah that sounds like good advice
19:39 theron joined #gluster
19:40 theron joined #gluster
19:40 MacWinner joined #gluster
19:51 glusterbot New news from newglusterbugs: [Bug 1155285] twitter link on community page broken <https://bugzilla.redhat.com/show_bug.cgi?id=1155285>
19:52 clutchk joined #gluster
19:52 clutchk Hey anyway to get gluster heals to happen server side only?
19:53 clutchk I'm trying to use gluster replication to setup HA kvm backend storage and the heals are killing the host.
19:53 semiosis clutchk: since version 3.3 there's a server side self heal daemon which will heal files automatically.  however, if a client accesses a file that is out of sync before the self heal daemon gets to it, then the client will heal the file
19:54 semiosis why are there so many heals?
19:54 semiosis have you configured your volume for VM workload?
19:54 clutchk Well we were having some network flakeyness that is now fixed.
19:55 semiosis see the 'group virt' options here: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Quick_Start_Guide/chap-Quick_Start_Guide-Virtual_Preparation.html
19:55 glusterbot Title: Chapter 3. Managing Virtual Machine Images on Red Hat Storage Servers (at access.redhat.com)
19:58 clutchk awesome! Thanks for that. Will this tuning feature help cut down the amount of heals needed?
19:59 clutchk "group virt" that is.
20:00 semiosis no idea :)
20:00 calisto1 joined #gluster
20:00 clutchk ok, I'll give it a try and see. Thanks for the lead.
20:01 semiosis yw
20:01 semiosis let us know how it goes
20:01 semiosis oh btw, i think the 'group virt' command is only on RHS.  you'll probably have to set those options individually
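(For reference, setting them individually amounts to roughly the following — this is the set shipped in RHS's /var/lib/glusterd/groups/virt around that time, so double-check against your release; volume name is illustrative:)

    gluster volume set myvol quick-read off
    gluster volume set myvol read-ahead off
    gluster volume set myvol io-cache off
    gluster volume set myvol stat-prefetch off
    gluster volume set myvol eager-lock enable
    gluster volume set myvol remote-dio enable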
20:03 clutchk I'm using CentOS, is that good enough?
20:04 n-st joined #gluster
20:04 nshaikh joined #gluster
20:09 DoctorO left #gluster
20:09 clutchk I can go rhel if need be.
20:11 fattaneh1 joined #gluster
20:29 gpmidi joined #gluster
20:31 gpmidi Is RDMA still unsupported with 3.5? I'm getting between 10MB/s and 15MB/s when I use RDMA only for a volume. When I recreate the volume with TCP instead I get around 400MB/s. The TCP connection is going over an IPOIB interface.
20:31 Slydder am getting really bad performance using gluster with magento. load is currently over 30 on a 6 core box.
20:32 Slydder well php-fpm and magento.
20:36 badone joined #gluster
21:15 failshell semiosis: i replaced ctdb with keepalived and the initial tests seems to be ok
21:15 semiosis great
21:16 failshell it also resolves another issue we had with ctdb: the ARP propagation between 2 VLANs took almost an hour. now it's under a minute
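(A minimal keepalived VRRP sketch for a gluster VIP, assuming a stock keepalived; interface, router id and address are illustrative:)

    cat >/etc/keepalived/keepalived.conf <<'EOF'
    vrrp_instance gluster_vip {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 100
        advert_int 1
        virtual_ipaddress {
            192.168.10.100/24
        }
    }
    EOF
    service keepalived restart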
21:16 semiosis clutchk: centos should be fine, plenty of people use that.
21:16 failshell why would red hat even use something this flaky ...
21:17 semiosis failshell: well, i think red hat wouldn't, which is why they're changing to pacemaker/corosync (iirc)
21:17 failshell semiosis: they do right now :)
21:17 semiosis failshell: remember, that ctdb doc you showed me was from pre-acquisition
21:17 MugginsM joined #gluster
21:17 failshell well, its in their current product
21:17 semiosis but i can't speak for them
21:17 failshell anyway
21:18 failshell i have a solution, that's all that matters now
21:18 semiosis great
21:19 MugginsM hi all.   I'm about to go through a fairly large gluster replica set, migrating bricks onto a newly formatted xfs with file type support (for readdir() d_type). This means that while I'm migrating, clients will get a mix of good readdir() and half-incomplete readdir()
21:20 MugginsM anyone done this before, and were there any problems?
21:20 MugginsM 's about 20TB of data so will take a while to go through them all
21:20 semiosis Slydder: have you optimized your include path?  enabled APC?
21:20 semiosis Slydder: does magento use autoloading, or lots of require calls?  if the latter, then you might want to disable stat with APC.  see also ,,(php)
glusterbot Slydder: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details, or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH --fopen-keep-cache
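(Spelled out, that second suggestion is a fuse mount with long-lived caches; a sketch using the glusterfs binary directly — server, volume, mountpoint and the 600-second values are illustrative, "HIGH" being workload-dependent:)

    glusterfs --volfile-server=server1 --volfile-id=myvol \
        --attribute-timeout=600 --entry-timeout=600 \
        --negative-timeout=600 --fopen-keep-cache /var/www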
21:22 semiosis MugginsM: what is 'file type support'?
21:22 Slydder semiosis: what about tuning the performance cache size?
21:22 MugginsM semiosis: means when you do a readdir() the d_type field contains the type of file (directory, link, etc)
21:22 MugginsM rather than having to do a stat for each file in the directory
21:23 semiosis MugginsM: cool!  how do you enable that?
21:23 MugginsM it's a big performance win for some of our use cases
21:23 MugginsM you need kernel >= 3.12 and xfsprogs >= 3.2
21:23 MugginsM and format the xfs partition with an option
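(The mkfs option in question is the naming ftype flag; a sketch — device path and brick mountpoint are illustrative:)

    mkfs.xfs -n ftype=1 /dev/sdX1
    # verify on an existing filesystem:
    xfs_info /bricks/brick1 | grep ftype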
21:23 semiosis Slydder: doubt it will help, but shouldn't hurt to try.  let me know if it makes a difference for you
21:24 semiosis MugginsM: very nice, thx for the info
21:24 MugginsM I intend to do a blog post with benchmarks *after* :)
21:24 MugginsM just concerned about the period when half the bricks have it and half don't
21:24 semiosis please do!
21:24 Slydder semiosis: do I have to actually use bytes or does it accept other units?
21:24 semiosis i would expect it to be fine, since gluster is working ok without it
21:24 semiosis MugginsM: ^
21:25 MugginsM gluster just passes the results from the underlying fs through to the client
21:25 semiosis Slydder: idk
21:25 MugginsM but is a bit unique in that the "underlying fs" might be across multiple bricks
21:26 semiosis i "upgraded" my bricks from ext4 to xfs one at a time, live on production, with no issues
21:26 semiosis fwiw
21:26 MugginsM cool, that's probably a similar kind of change
21:26 JoeJulian @mount server
21:26 glusterbot JoeJulian: The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns
21:33 Slydder semiosis: higher cache setting plus a higher refresh timeout works wonders for it. load is now a quarter of what it was before.
21:33 semiosis Slydder: ,,(pasteinfo) ?
21:33 glusterbot Slydder: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
21:34 Slydder but will still need to enable apc and disable the stat calls though. that will be the big bringer
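(A sketch of the knobs discussed; volume name is illustrative, performance.cache-size accepts unit suffixes like 256MB — answering the units question above — and cache-refresh-timeout is in seconds. The APC ini path varies by distro:)

    gluster volume set myvol performance.cache-size 256MB
    gluster volume set myvol performance.cache-refresh-timeout 10
    # on the php side, stop APC from stat()ing every include:
    echo "apc.stat = 0" >> /etc/php5/conf.d/apc.ini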
21:40 tryggvil joined #gluster
21:42 jbrooks joined #gluster
21:52 glusterbot New news from newglusterbugs: [Bug 1155328] GlusterFS allows insecure SSL modes <https://bugzilla.redhat.com/show_bug.cgi?id=1155328>
21:52 MugginsM "surprise"  :)
22:03 MugginsM ok how do I do what replace-brick did pre 3.4?
22:03 MugginsM if I remove or add I get replica-num warnings
22:03 semiosis you dont have a replace-brick command?
22:04 semiosis i dont understand
22:04 MugginsM gives me "depracated" warnings and doesn't seem to work
22:04 MugginsM deprecated
22:04 MugginsM gluster 3.4.5
22:05 MugginsM worked fine on 3.3 :-/
22:05 semiosis wow
22:05 semiosis news to me
22:06 zerick joined #gluster
22:08 MugginsM woah, worked the second time I tried
22:09 badone joined #gluster
22:09 [o__o] joined #gluster
22:14 JoeJulian MugginsM:  It says to just do "commit force"
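(For the record, the post-3.4 form is a single commit force, after which self-heal repopulates the new brick; names are illustrative:)

    gluster volume replace-brick myvol server1:/bricks/old server1:/bricks/new commit force
    gluster volume heal myvol full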
22:17 firemanxbr joined #gluster
22:28 davemc reminder Gluster community meeting is 22-Oct at 0:00 UTC on #gluster-meeting
22:31 doekia joined #gluster
22:39 samsaffron___ joined #gluster
22:40 MugginsM ah, if you do commit force after doing commit, it doesn't work :)
22:41 MugginsM works fine just doing commit force from the start
22:41 MugginsM gonna be all day refilling the new bricks
22:42 MugginsM guess it's too late to tell heal to pull them from the same server :)
22:48 coredump joined #gluster
22:49 samsaffron___ joined #gluster
22:53 yooo joined #gluster
22:54 yooo running glusterfs in production for a couple of months now with no problems apart
22:54 yooo from two identical problems which i encountered
22:55 yooo i had an I/O error on a specific log file in one of the two clients which write on this file
22:55 yooo i couldn't find any logs for this but i suspect both clients trying to write on the same file at the same time or something
22:55 yooo anyone experienced this before?
22:56 yooo my version is 3.2.7-3+deb7u1
22:57 yooo i had to umount/remount in order to fix the problem for this file... i am using the gluster client on the clients and not nfs
22:58 gpmidi left #gluster
23:03 cjanbanan joined #gluster
23:25 theron joined #gluster
23:28 theron joined #gluster
