
IRC log for #gluster, 2014-08-29


All times shown according to UTC.

Time Nick Message
00:00 Pupeno joined #gluster
00:08 Pupeno joined #gluster
00:17 jermudgeon Updated yesterday from 3.3 to 3.5. All seemed well until today. I've got a 2x2 four-brick cluster. Rebalance is in progress. Mounted native client. I can create files and directories, but actually writing data to files fails with an I/O error. Volume status  is online for all bricks, no hardware-level IO errors.
00:18 jermudgeon Doesn't appear to be a) directory dependent or b) user dependent [other than normal permissions requirements]. Haven't tried NFS client yet.
00:19 jermudgeon No obvious (to me) errors in brick logs.
00:31 bala joined #gluster
00:32 gildub joined #gluster
00:36 jermudgeon OK, resetting options to default fixed it.
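    [For anyone hitting the same post-upgrade write failures, a minimal sketch of the "reset options to default" fix jermudgeon describes; "myvol" is a placeholder volume name:]
        gluster volume info myvol      # review which options were reconfigured before the upgrade
        gluster volume reset myvol     # return all reconfigured options to their defaults
        gluster volume info myvol      # confirm the "Options Reconfigured" list is now empty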
00:44 topshare joined #gluster
00:50 vxitch left #gluster
00:53 _Bryan_ joined #gluster
01:01 ws2k33 joined #gluster
01:34 vimal joined #gluster
01:45 _pol joined #gluster
01:59 harish joined #gluster
02:03 Pupeno joined #gluster
02:04 Lee- joined #gluster
02:09 sjm joined #gluster
02:35 bala joined #gluster
02:49 topshare joined #gluster
03:01 haomaiwa_ joined #gluster
03:13 haomai___ joined #gluster
03:17 haomaiwa_ joined #gluster
03:23 haomaiwa_ joined #gluster
03:41 haomaiw__ joined #gluster
03:46 anoopcs joined #gluster
03:48 anoopcs1 joined #gluster
03:51 coredump joined #gluster
03:59 clyons joined #gluster
04:11 sjm left #gluster
04:12 topshare joined #gluster
04:21 glusterbot New news from newglusterbugs: [Bug 1132766] ubuntu ppa: 3.5 missing hooks and files for new geo-replication <https://bugzilla.redhat.com/show_bug.cgi?id=1132766>
04:48 topshare joined #gluster
04:54 ramteid joined #gluster
04:56 shylesh__ joined #gluster
05:19 haomaiwa_ joined #gluster
05:20 haomaiw__ joined #gluster
05:21 glusterbot New news from newglusterbugs: [Bug 1135321] common_secret.pem.pub handling prevents multiple geo-replication sessions <https://bugzilla.redhat.com/show_bug.cgi?id=1135321>
05:35 haomaiwa_ joined #gluster
05:37 sputnik13 joined #gluster
05:48 simulx2 joined #gluster
05:53 hchiramm_ joined #gluster
06:01 purpleidea joined #gluster
06:06 anoopcs joined #gluster
06:08 hagarth joined #gluster
06:13 fyxim_ joined #gluster
06:16 capri joined #gluster
06:27 anoopcs1 joined #gluster
06:35 ricky-ticky1 joined #gluster
06:43 sputnik13 joined #gluster
06:52 glusterbot New news from newglusterbugs: [Bug 917901] Mismatch in calculation for quota directory <https://bugzilla.redhat.com/show_bug.cgi?id=917901> || [Bug 1135348] regression tests fail on osx due delay in FUSE mount <https://bugzilla.redhat.com/show_bug.cgi?id=1135348>
06:52 ricky-ti2 joined #gluster
06:52 ThatGraemeGuy_ joined #gluster
07:06 sputnik13 joined #gluster
07:22 glusterbot New news from newglusterbugs: [Bug 1135358] Update licensing and move all MacFUSE references to OSXFUSE <https://bugzilla.redhat.com/show_bug.cgi?id=1135358>
07:32 fsimonce joined #gluster
08:16 nbalachandran joined #gluster
08:36 elico joined #gluster
08:37 richvdh joined #gluster
08:47 Slashman joined #gluster
08:51 hchiramm_ joined #gluster
09:00 richvdh joined #gluster
09:09 haomaiwa_ joined #gluster
09:16 glusterbot New news from resolvedglusterbugs: [Bug 894355] spelling mistake? <https://bugzilla.redhat.com/show_bug.cgi?id=894355>
09:17 elico joined #gluster
09:17 maxxx2014 joined #gluster
09:17 T0aD joined #gluster
09:17 hflai joined #gluster
09:17 sage joined #gluster
09:17 nated joined #gluster
09:17 fignews joined #gluster
09:17 osiekhan4 joined #gluster
09:19 nbalachandran joined #gluster
09:22 twx joined #gluster
09:22 foobar joined #gluster
09:22 bfoster joined #gluster
09:22 glusterbot New news from newglusterbugs: [Bug 960752] Update to 3.4-beta1 kills glusterd <https://bugzilla.redhat.com/show_bug.cgi?id=960752>
09:22 neoice joined #gluster
09:22 JustinClift joined #gluster
09:22 SteveCooling joined #gluster
09:24 haomaiw__ joined #gluster
09:24 sickness joined #gluster
09:25 gildub joined #gluster
09:27 gildub joined #gluster
09:30 ramteid joined #gluster
09:30 cmtime joined #gluster
09:30 avati joined #gluster
09:30 lyang0 joined #gluster
09:30 d-fence joined #gluster
09:30 jvandewege joined #gluster
09:30 atrius joined #gluster
09:30 coredumb joined #gluster
09:30 sauce joined #gluster
09:30 al joined #gluster
09:30 Intensity joined #gluster
09:30 Intensity joined #gluster
09:30 jiqiren joined #gluster
09:30 dblack joined #gluster
09:46 glusterbot New news from resolvedglusterbugs: [Bug 1043548] Cannot compile glusterfs on OpenSuse 13.1 <https://bugzilla.redhat.com/show_bug.cgi?id=1043548>
09:47 nage joined #gluster
09:47 verdurin_ joined #gluster
09:48 juhaj joined #gluster
09:49 skippy_ joined #gluster
09:50 muhh_ joined #gluster
09:50 haomaiwang joined #gluster
09:50 lezo__ joined #gluster
09:51 dockbram joined #gluster
09:52 aulait joined #gluster
09:52 glusterbot New news from newglusterbugs: [Bug 1005616] glusterfs client crash (signal received: 6) <https://bugzilla.redhat.com/show_bug.cgi?id=1005616>
09:53 atrius` joined #gluster
09:55 _jmp_ joined #gluster
09:56 JordanHackworth_ joined #gluster
09:56 cfeller_ joined #gluster
09:56 Moe-sama joined #gluster
09:56 prasanth|afk joined #gluster
09:56 johnnytran joined #gluster
09:57 tru_tru_ joined #gluster
09:58 Dave2_ joined #gluster
09:58 Ramereth|home joined #gluster
09:58 Ramereth|home joined #gluster
09:58 lkthomas_ joined #gluster
09:58 siel_ joined #gluster
09:59 delhage_ joined #gluster
09:59 ultrabizweb_ joined #gluster
09:59 churnd- joined #gluster
10:03 AaronGr joined #gluster
10:04 sadbox_ joined #gluster
10:04 masterzen joined #gluster
10:05 semiosis_ joined #gluster
10:09 semiosis joined #gluster
10:10 yosafbridge joined #gluster
10:13 johnmwilliams__ joined #gluster
10:13 pasqd joined #gluster
10:14 yosafbridge joined #gluster
10:14 [o__o] joined #gluster
10:22 glusterbot New news from newglusterbugs: [Bug 1127457] Setting security.* xattrs fails <https://bugzilla.redhat.com/show_bug.cgi?id=1127457>
10:24 simulx joined #gluster
10:25 silky joined #gluster
10:25 dockbram_ joined #gluster
10:26 neoice_ joined #gluster
10:26 ricky-ticky joined #gluster
10:26 calum_ joined #gluster
10:27 coredump|br joined #gluster
10:30 tru_tru joined #gluster
10:31 and`_ joined #gluster
10:32 atrius_ joined #gluster
10:32 hagarth1 joined #gluster
10:33 Philambdo joined #gluster
10:33 crashmag_ joined #gluster
10:34 jvandewege joined #gluster
10:38 Slashman joined #gluster
10:39 gildub joined #gluster
10:40 ackjewt joined #gluster
10:48 Norky joined #gluster
10:49 fyxim__ joined #gluster
10:50 edong23_ joined #gluster
10:51 l0uis_ joined #gluster
10:51 bfoster_ joined #gluster
10:51 mkzero_ joined #gluster
10:51 saltsa_ joined #gluster
10:51 siel_ joined #gluster
10:52 Andreas-IPO joined #gluster
10:52 gomikemike joined #gluster
10:52 samsaffron_ joined #gluster
10:53 foster_ joined #gluster
10:53 Lee-- joined #gluster
10:55 rturk|af` joined #gluster
10:55 uebera|| joined #gluster
10:55 uebera|| joined #gluster
10:56 JoeJulian joined #gluster
10:56 XpineX joined #gluster
10:57 kalzz joined #gluster
10:57 asku joined #gluster
10:58 cyberbootje joined #gluster
10:59 masterzen joined #gluster
11:01 m0zes joined #gluster
11:02 nbalachandran joined #gluster
11:03 uebera|| joined #gluster
11:15 Pupeno_ joined #gluster
11:16 kkeithley1 joined #gluster
11:20 topshare joined #gluster
11:21 gts__ joined #gluster
11:22 xavih_ joined #gluster
11:22 sspinner joined #gluster
11:22 muhh joined #gluster
11:22 lava joined #gluster
11:23 chirino joined #gluster
11:23 DJClean joined #gluster
11:23 torbjorn__ joined #gluster
11:23 Zordrak joined #gluster
11:23 Zordrak joined #gluster
11:23 eightyeight joined #gluster
11:24 georgeh|workstat joined #gluster
11:28 mrErikss1n joined #gluster
11:29 Moe-sama joined #gluster
11:29 msciciel_ joined #gluster
11:29 Chr1s1an_ joined #gluster
11:30 tru_tru_ joined #gluster
11:30 siXy_ joined #gluster
11:30 tobias- joined #gluster
11:30 tty00 joined #gluster
11:31 simulx joined #gluster
11:31 ccha2 joined #gluster
11:34 mikedep333 joined #gluster
11:34 tg2 joined #gluster
11:34 mibby joined #gluster
11:34 codex joined #gluster
11:37 andreask joined #gluster
11:40 Slashman joined #gluster
11:40 doekia joined #gluster
11:44 asku joined #gluster
11:45 mjrosenb joined #gluster
11:46 swebb joined #gluster
11:48 diegows joined #gluster
11:49 getup- joined #gluster
11:53 glusterbot New news from newglusterbugs: [Bug 1127140] memory leak <https://bugzilla.redhat.com/show_bug.cgi?id=1127140>
12:03 edward1 joined #gluster
12:03 topshare joined #gluster
12:05 todakure joined #gluster
12:10 Bardack joined #gluster
12:12 T0aD- joined #gluster
12:12 sputnik13 hi, I have a volume where there is free space shown when I do df -h
12:12 sputnik13 but when I try to write to the volume I get an out of space error
12:13 siXy do a df -i
12:13 sputnik13 yup, it shows 990K used 4.4G free
12:13 sputnik13 with df -ih
12:13 topshare joined #gluster
12:13 sputnik13 I'm baffled by this
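    [A sketch of another check worth doing here, not from the conversation: in a distributed volume each write lands on one brick chosen by hash, so a single full brick can return ENOSPC even while the aggregate df looks healthy. Volume name and brick paths are placeholders:]
        gluster volume status myvol detail               # per-brick free space and free inodes
        df -h /bricks/brick1 ; df -ih /bricks/brick1     # repeat on every server, for every brick path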
12:13 qdk joined #gluster
12:13 osiekhan4 joined #gluster
12:14 georgeh|workstat joined #gluster
12:14 saltsa joined #gluster
12:21 ira joined #gluster
12:25 LebedevRI joined #gluster
12:27 DV__ joined #gluster
12:28 pdrakeweb joined #gluster
12:30 harish_ joined #gluster
12:33 topshare joined #gluster
12:43 topshare joined #gluster
12:44 topshare joined #gluster
12:44 gmcwhistler joined #gluster
12:50 B21956 joined #gluster
12:54 mojibake joined #gluster
12:58 recidive joined #gluster
12:59 LHinson joined #gluster
13:02 bene2 joined #gluster
13:07 pdrakeweb joined #gluster
13:08 kshlm joined #gluster
13:08 churnd joined #gluster
13:10 bennyturns joined #gluster
13:12 theron joined #gluster
13:25 firemanxbr joined #gluster
13:27 merlink joined #gluster
13:29 ira joined #gluster
13:54 kanagaraj joined #gluster
13:57 theron joined #gluster
14:01 theron_ joined #gluster
14:03 andreask joined #gluster
14:13 davinder16 joined #gluster
14:19 wushudoin joined #gluster
14:22 sprachgenerator joined #gluster
14:22 LHinson joined #gluster
14:23 LHinson1 joined #gluster
14:23 glusterbot New news from newglusterbugs: [Bug 1014242] /etc/init.d/glusterfsd not provided by any package in 3.4.1 on rhel <https://bugzilla.redhat.com/show_bug.cgi?id=1014242>
14:25 xleo joined #gluster
14:33 tdasilva joined #gluster
14:36 gmcwhistler joined #gluster
14:39 _Bryan_ joined #gluster
14:43 chucky_z joined #gluster
14:43 asku joined #gluster
14:43 chucky_z hello, can someone tell me what *should* happen when a server drops that is part of a replicated brick?
14:44 osiekhan4 joined #gluster
14:46 asku left #gluster
14:52 jbrooks joined #gluster
14:52 AdrianH joined #gluster
14:53 glusterbot New news from newglusterbugs: [Bug 1135548] Error in quick start: start volume and specify mount point <https://bugzilla.redhat.com/show_bug.cgi?id=1135548>
14:55 AdrianH Hello, I have a question for you guys: I have set up a gluster of 4 servers, distributed and replicated; each server has 2 bricks of 150GB each. So when I mount Gluster on a client I should have a Gluster of 300GB ??
14:55 cyberbootje joined #gluster
14:56 skippy AdrianH: that sounds correct.
14:57 AdrianH I am asking because when I mount df -h says I have 600GB
14:57 AdrianH sudo gluster volume info
14:57 AdrianH Volume Name: gluster-volume
14:57 AdrianH Type: Distributed-Replicate
14:57 AdrianH Volume ID: 09ec1eaa-1328-441a-a6c1-3b36327582e5
14:57 AdrianH Status: Started
14:57 AdrianH Number of Bricks: 4 x 2 = 8
14:57 AdrianH Transport-type: tcp
14:57 AdrianH Bricks:
14:57 AdrianH Brick1: gluster1:/home/ec2-user/brick1/share
14:57 AdrianH Brick2: gluster3:/home/ec2-user/brick1/share
14:57 AdrianH Brick3: gluster2:/home/ec2-user/brick1/share
14:57 AdrianH Brick4: gluster4:/home/ec2-user/brick1/share
14:57 skippy chucky_z: there will be a delay while the remaining server(s) try to verify whether the failed server is there or not. This will block IO. Then everything will carry on working as expected.
14:57 AdrianH Brick5: gluster1:/home/ec2-user/brick2/share
14:57 AdrianH Brick6: gluster3:/home/ec2-user/brick2/share
14:57 AdrianH Brick7: gluster2:/home/ec2-user/brick2/share
14:58 AdrianH df -h gluster1:/gluster-volume         590G  238M  560G   1% /var/gluster
14:58 AdrianH ok so if I wait it will go down to 300GB ????
14:58 AdrianH (sorry didnt see you were talking to chucky)
14:58 chucky_z skippy: OK, in my situation it split-brain about 1500 files. :(
14:58 chucky_z i'm just trying to figure out why
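    [The blocking delay skippy mentions is governed by the client-side ping timeout, 42 seconds by default; lowering it shortens the hang but risks spurious disconnects. A sketch, with "myvol" as a placeholder volume name:]
        gluster volume set myvol network.ping-timeout 42   # seconds a client waits before declaring a brick dead
        gluster volume info myvol                          # set options show up under "Options Reconfigured"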
14:59 skippy chucky_z: you can avoid split-brain by adding another server to the pool. This server does not need to host any bricks.  It's just a quorum server,
14:59 skippy chucky_z: https://github.com/gluster/glusterfs/blob/master/doc/features/server-quorum.md
14:59 lmickh joined #gluster
14:59 glusterbot Title: glusterfs/server-quorum.md at master · gluster/glusterfs · GitHub (at github.com)
15:00 chucky_z fantastic!
15:01 skippy AdrianH: sorry, I mis-read your setup.  You have 8X150GB bricks total = 1200GB raw, divided by replica 2 = 600GB usable
15:01 AdrianH yes but it is distributed and replicated
15:01 nbvfuel joined #gluster
15:02 AdrianH so shouldn't it be 1200GB / 4 ???
15:02 AdrianH or am I getting confused...
15:02 skippy 10:57 < AdrianH> Number of Bricks: 4 x 2 = 8
15:02 skippy you have replica 2, it seems.
15:02 mojibake replica 2 means 2 copies of each piece of data.. so 1200GB / 2
15:02 skippy did you want replica 4?  That would seem overkill.
15:03 AdrianH oh ok, I must have missread the docs
15:03 skippy unless you have specific reasons for wanting to ensure that each server contains a full set of the data
15:03 chucky_z skippy: could you point me to some documentation on setting this up?
15:03 nbvfuel I'm planning two gluster nodes.  Both nodes will also locally mount the volumes.  Will the gluster native client prefer the local peer?
15:04 skippy chucky_z: there's not much.  Just install Gluster on another server and `peer probe` that new server from either of the existing servers.
15:04 chucky_z seriously?
15:04 AdrianH Thanks guys for the info :)
15:04 semiosis AdrianH: next time please use pastie.org, instead of pasting in channel
15:05 swebb joined #gluster
15:05 chucky_z sorry if i'm questioning things... it just blows me away how simple this stuff seems to be
15:05 skippy then `gluster volume set <volume name> cluster.server-quorum-type server` and `gluster volume set <volume name> cluster.server-quorum-ratio 51`
15:05 skippy I think that should do it.
15:05 skippy "Note that these are cluster-wide flags. All volumes served by the cluster will be affected"  <-- from the docs
15:05 glusterbot skippy: <'s karma is now -4
15:05 skippy but I suspect that's what you want.
15:06 chucky_z that is exactly what I want.
15:06 skippy you'll probably need to stop and re-start the volume after you make that change?  Dunno.  The docs don't make it clear.
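    [Putting skippy's quorum suggestion together as one hedged sketch; "myvol" and "arbiter-host" are placeholders, and since the ratio option is documented as cluster-wide it is typically set on "all":]
        gluster peer probe arbiter-host                            # brick-less third peer, quorum only
        gluster volume set myvol cluster.server-quorum-type server
        gluster volume set all cluster.server-quorum-ratio 51%
        gluster peer status                                        # all three peers should show Connected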
15:06 AdrianH sorry for the pasting
15:06 ndevos nbvfuel: mostly yes, the reads will be done from the bricks that respond quickest on a LOOKUP - local bricks tend to be quicker than remote ones
15:06 chucky_z another question on split-brain.  i have about 300~400 files with 'heal-failed' but they are good in the actual brick.  i've used the python split-brain heal utility before but it didn't work quite as expected.
15:07 chucky_z is there a way i can simply cp/rsync files into the gluster mounted volume and have it go 'yeah ok these are fine'
15:07 semiosis chucky_z: ,,(split-brain)
15:07 glusterbot chucky_z: (#1) To heal split-brain, use splitmount. http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/, or (#2) For additional information, see this older article http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
15:07 chucky_z yes, splitmount didn't resolve this exact situation last time i tried
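    [Before reaching for splitmount, the state chucky_z describes can be inspected from the gluster CLI; a sketch with "myvol" as a placeholder, using the heal sub-commands available in the 3.4/3.5 era:]
        gluster volume heal myvol info               # entries still pending heal
        gluster volume heal myvol info split-brain   # entries gluster itself considers split-brain
        gluster volume heal myvol info heal-failed   # entries that failed to heal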
15:08 nbvfuel ndevos: That makes sense, thanks!  Is there a document anywhere that describes the gluster protocol at all?  Ie, to get a sense of "read chattiness" when accessing files?
15:08 AdrianH left #gluster
15:09 ndevos nbvfuel: nah, its not really documented... current versions of wireshark will happily show you the details though
15:09 nbalachandran joined #gluster
15:09 chucky_z um -- it gets very chatty.
15:10 skippy how chatty, chucky_z ?
15:10 chucky_z at least depending on your network setup.  one of our clients actually has all of gluster traffic filtered out
15:10 chucky_z they are running a very old version though
15:10 PsionTheory joined #gluster
15:10 ndevos nbvfuel: well, actually, this could be seen as 'some' documentation: http://people.redhat.com/ndevos/talks/debugging-glusterfs-with-wireshark.d/debugging-with-wireshark.pdf
15:11 chucky_z skippy: if you're curious i could monitor one of our live servers with a newer version :)
15:11 ndevos nbvfuel: and, maybe http://people.redhat.com/ndevos/talks/gluster-wireshark-201211.pdf
15:12 nbvfuel ndevos: Taking a peek now, thanks!
15:13 ndevos nbvfuel: I'm happy to answer any follow up questions, but they'll have to wait until next week, I'm leaving for the day :)
15:13 ndevos nbvfuel: or, send an email to the mailinglist and you may get a response sooner
15:14 nbvfuel ndevos: Will do, take care
15:20 ekuric joined #gluster
15:24 skippy chucky_z: I'm marginally curious, but don't put yourself out for it.
15:26 haomaiwang joined #gluster
15:43 Spiculum joined #gluster
15:43 sputnik13 joined #gluster
15:47 cultavix joined #gluster
15:47 cultavix afternoon lads
15:47 cultavix loving gluster so far, I've put together a prototype at home for a system which I am trying to get my company to adopt
15:48 cultavix anyway, just trying out all of the different features, one of the key features will be the ability to replicate to other parts of the world using geo-replication.
15:48 cultavix Also, any sort of snapshotting/backup capability is good too
15:49 cultavix so yea, it's good.... I'm just now trying to add some more disk space and just trying to see what is the best way of doing that
15:49 RobBBBB joined #gluster
15:49 cultavix do I expand the LV that it's currently using for one of it's volumes
15:49 semiosis that's a good option imho
15:49 cultavix (Logical Vol. form LVM)
15:49 cultavix or do I just add another disk and then expand it somehow
15:50 cultavix which is the best way ?
15:50 semiosis add-brick requires rebalancing, which is expensive (and sometimes buggy)
15:50 cultavix I've got vol data1 and vol data2, one is just for my own personal use, I want to expand it by about 300GB
15:50 semiosis resizing the underlying brick fs is safe & easy
15:50 RobBBBB cultavix: I missed the beginning of your question? Are you using Gluster on top of LVM?
15:50 cultavix do I just add a virtual disk on each gluster node
15:51 cultavix expand the LV, and then what?
15:51 cultavix I am using Gluster on top of LVM yes
15:51 cultavix I always use LV's anyway
15:51 semiosis cultavix: lvextend then xfs_resize
15:51 cultavix Yes, that's what I am thinking
15:51 cultavix semiosis: ill add the disk to each node on this cluster (only have 2)
15:52 cultavix also, another question, I've got a slave on AWS for a second volume
15:52 cultavix it works great, when you put data on the vol it replicates it to the slave
15:52 cultavix my question is, what about replicating from the slave back to the master?
15:52 semiosis imo it's best to avoid add-brick/rebalance by design.  if you think you might need to scale out beyond two servers for performance then make several bricks per server, you can use replace-brick to move them onto new servers later
15:52 semiosis that way you wont have to rebalance
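    [The scale-out path semiosis describes, several bricks per server now and moved later with replace-brick, would look roughly like this; hostnames and brick paths are made up:]
        gluster volume replace-brick myvol server1:/bricks/b3 server3:/bricks/b3 commit force
        gluster volume heal myvol full    # let self-heal repopulate the new brick from its replica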
15:52 cultavix semiosis: xfs_resize, is that like resize2fs ?
15:53 semiosis cultavix: yes, for XFS, which you should be using
15:53 cultavix semiosis: Yes, I am :)
15:53 cultavix ok great, ill start doing that then.... thanks
15:53 cultavix and for the geo-replication stuff?
15:53 cultavix the way I've set it up, it seems to be a one-way system
15:53 semiosis if you just need to add space, then use LVM or similar to grow the bricks
15:53 semiosis geo-rep is one way
15:53 semiosis also, what distro are you on?
15:54 cultavix semiosis: ill expand via LVM and then hopefully, gluster will just have more space right? No need to tell Gluster about it ?
15:54 cultavix CentOS 6.5 Minimal
15:54 semiosis yes thats right
15:54 cultavix awesome
15:55 wushudoin joined #gluster
15:55 RobBBBB So growing the size of bricks = growing size of the LV ? No need to tell gluster? I guess all bricks need to be the same size?
15:55 skippy there's nothing to tell Gluster when more space appears in the LVM.
15:55 semiosis RobBBBB: it's a good idea to keep all bricks same size, but not a requirement.  gluster will be limited to the smaller of the replicas.
15:55 skippy it does make sense to keep bricks the same size.
15:56 semiosis so you can grow one replica, then the other
15:56 RobBBBB this is a nice feature
15:56 cultavix skippy: yea, that's good...
15:57 * Spiculum has set away! (auto away after idling [15 min]) [Log:ON] .gz.
15:57 cultavix what is the best way to have geo-replication, I'd like to have a master gluster cluster in the UK, and then have slaves in different AWS regions as well as 2 regional offices, one in the USA and one in Korea
15:57 cultavix so latency/connectivity could be shaky
15:57 cultavix I'm hoping Gluster will cope
15:58 semiosis Spiculum: please disable the auto-away message :)
15:59 semiosis idk much about geo-rep best practices, maybe someone else can weigh in on that
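    [For the geo-rep question, a rough sketch of the one-way master-to-slave setup in the 3.5 series; volume and host names are placeholders, and each additional region would get its own session from the master:]
        gluster volume geo-replication mastervol slavehost::slavevol create push-pem
        gluster volume geo-replication mastervol slavehost::slavevol start
        gluster volume geo-replication mastervol slavehost::slavevol status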
16:01 plarsen joined #gluster
16:02 jobewan joined #gluster
16:02 cultavix semiosis: I dont have that command for XFS
16:02 semiosis sorry it's xfs_growfs
16:03 cultavix semiosis: ok thanks
16:03 semiosis yw
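    [The grow-in-place procedure discussed above, as a sketch with made-up VG/LV names and paths, run on each server holding a replica of the brick:]
        lvextend -L +300G /dev/vg_bricks/lv_data1   # grow the logical volume under the brick
        xfs_growfs /bricks/data1                    # grow the XFS filesystem to fill the LV
        df -h /bricks/data1                         # the gluster volume picks up the extra space on its own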
16:05 LHinson joined #gluster
16:06 RobBBBB does anyone know if EXT4 should still be avoided?
16:06 RobBBBB Do you all recommend XFS?
16:06 cultavix RobBBBB: like the plague
16:06 cultavix (not serious, just thought like it would sound funny)
16:07 RobBBBB ok thank you for advice
16:09 julim joined #gluster
16:12 getup- joined #gluster
16:12 getup- joined #gluster
16:14 Spiculum oopse sorry!
16:16 semiosis RobBBBB: XFS is recommended, although ext4 should work.
16:16 semiosis glusterfs is tested & used most commonly with XFS
16:18 RobBBBB I've got everything set up with Ansible, all seems to be working fine, but I didn't see the bit about XFS being recommended, so my bricks are LVs with ext4 disks...
16:19 RobBBBB I'll look into changing it to XFS
16:19 semiosis my big complaints about ext4 in production are 1) time to fsck, and 2) running out of inodes
16:20 semiosis fsck on xfs is O(1) and there's practically infinite inodes
16:20 RobBBBB semiosis: might be a good idea to change then as I'll be having nearly 30TB of data
16:22 semiosis lol -- http://linux.die.net/man/8/fsck.xfs
16:22 glusterbot Title: fsck.xfs(8): do nothing, successfully - Linux man page (at linux.die.net)
16:29 hagarth semiosis: good one!
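    [If RobBBBB does rebuild the bricks on XFS, the commonly recommended format of that era uses 512-byte inodes so gluster's extended attributes fit in the inode; a sketch with placeholder device and mount paths:]
        mkfs.xfs -i size=512 /dev/vg_bricks/lv_brick1
        mount -t xfs /dev/vg_bricks/lv_brick1 /bricks/brick1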
16:33 cultavix added the new disk, gluster expanded... thank you :)
16:43 RobBBBB left #gluster
16:57 todakure joined #gluster
17:04 LHinson1 joined #gluster
17:07 sprachgenerator joined #gluster
17:09 clyons joined #gluster
17:13 Jay3 does anyone have a good cheat sheet for iozone testing on Gluster?
17:13 Jay3 or some initial expectations?
17:18 rotbeard joined #gluster
17:19 chirino joined #gluster
17:31 zerick joined #gluster
17:33 _dist joined #gluster
17:36 jermudgeon Jay3: I've used bonnie++; not exactly iozone but quick and dirty
17:36 glusterbot jermudgeon: bonnie's karma is now 7
17:40 necrogami joined #gluster
17:45 _dist My favourite quick test that's easy is gnome disk utility in kali (palimpsest from cli)
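    [For Jay3's iozone question, a minimal starting point, not from the conversation; the mount path is a placeholder, and the file size should exceed client RAM so the run measures gluster rather than the page cache:]
        iozone -i 0 -i 1 -r 128k -s 2g -f /mnt/glustervol/iozone.tmp   # sequential write then read, single thread
        iozone -i 0 -i 1 -r 128k -s 1g -t 4 -F /mnt/glustervol/f1 /mnt/glustervol/f2 /mnt/glustervol/f3 /mnt/glustervol/f4   # 4-thread throughput run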
17:47 PeterA joined #gluster
17:49 PeterA anyone experienced directory quota mismatch with du?
17:54 glusterbot New news from newglusterbugs: [Bug 1023134] Used disk size reported by quota and du mismatch <https://bugzilla.redhat.com/show_bug.cgi?id=1023134>
18:01 Layne-Koelpin joined #gluster
18:05 skippy finally got permission from the powers that be to publish this Gluster Puppet module: https://github.com/covermymeds/puppet-gluster
18:05 glusterbot Title: covermymeds/puppet-gluster · GitHub (at github.com)
18:05 skippy it's a work in progress, but hopefully useful to some.
18:16 curlingbiathlon joined #gluster
18:25 getup- joined #gluster
18:33 semiosis bonnie++--
18:33 glusterbot semiosis: bonnie's karma is now 8
18:34 glusterbot semiosis: bonnie++'s karma is now -1
18:34 semiosis JoeJulian: ^^!!
18:34 skippy ha!
18:35 Jamoflaw joined #gluster
18:49 B21956 joined #gluster
18:53 semiosis JoeJulian++++++++++++++++++++++++++++++++++++++++++++++++++++++++
18:53 glusterbot semiosis: JoeJulian++++++++++++++++++++++++++++++++++++++++++++++++++++++'s karma is now 1
18:53 semiosis aww
18:58 PeterA how do i sync up quota with du usages??
18:58 PeterA my Dir quota is off with du
18:59 PeterA it keeps accumulating changes and using up quota space...
18:59 PeterA while du shows empty...
18:59 PeterA how do I trigger the recalculation of Directory quota?
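    [A sketch of comparing the two views PeterA mentions; "myvol" and the directory path are placeholders, and quota paths are given relative to the volume root:]
        gluster volume quota myvol list /projects/foo   # usage as gluster's quota accounting sees it
        du -sh /mnt/myvol/projects/foo                  # usage as seen through the client mount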
19:17 theron joined #gluster
19:26 PeterA help!! directory is filling up :(
19:27 PeterA seems like some sort of file locking
19:27 PeterA keep filling up the quota
19:33 XpineX joined #gluster
19:33 uebera|| joined #gluster
19:33 uebera|| joined #gluster
19:38 clyons joined #gluster
19:42 m0zes joined #gluster
19:52 jermudgeon joined #gluster
20:10 chirino_m joined #gluster
20:27 jermudgeon me is happy: got qvm+libgfapi working on Centos 6.5
20:46 Gu_______ joined #gluster
21:04 T0aD joined #gluster
21:33 getup- joined #gluster
21:35 _Bryan_ joined #gluster
21:44 T0aD- joined #gluster
21:45 chirino joined #gluster
21:47 sauce_ joined #gluster
21:47 _jmp_ joined #gluster
21:47 siXy_ joined #gluster
21:48 mkzero joined #gluster
21:48 l0uis_ joined #gluster
21:48 l0uis_ joined #gluster
21:51 AaronGreen joined #gluster
21:51 atrius- joined #gluster
21:52 d-fence_ joined #gluster
21:52 sauce joined #gluster
21:54 lava joined #gluster
22:00 avati joined #gluster
22:00 qdk joined #gluster
22:01 zerick joined #gluster
22:03 cmtime joined #gluster
22:03 firemanxbr joined #gluster
22:07 chirino joined #gluster
22:10 lyang0 joined #gluster
22:11 dblack_ joined #gluster
22:11 _Bryan_ joined #gluster
22:11 mkzero_ joined #gluster
22:11 l0uis joined #gluster
22:11 AaronGr joined #gluster
22:11 dblack joined #gluster
22:11 17SAAXJZS joined #gluster
22:11 coredumb joined #gluster
22:11 al joined #gluster
22:11 dblack_ joined #gluster
22:12 jiqiren joined #gluster
22:13 Intensity joined #gluster
22:13 Intensity joined #gluster
22:15 PeterA1 joined #gluster
22:19 daxatlas_ joined #gluster
22:22 MacWinner joined #gluster
22:46 rkelley joined #gluster
22:48 rkelley joined #gluster
22:55 rkelley hello
22:55 glusterbot rkelley: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
23:02 chucky_z joined #gluster
23:08 swebb joined #gluster
23:44 B21956 joined #gluster
