
IRC log for #gluster, 2014-12-16


All times are shown in UTC.

Time Nick Message
00:00 jonybravo30 there are 2 already, but i don't know what command might help me
00:00 jonybravo30 to add this host03
00:05 ninkotech joined #gluster
00:05 ninkotech_ joined #gluster
00:09 misko_ joined #gluster
00:09 daMaestro joined #gluster
00:15 fandi joined #gluster
00:17 TrDS left #gluster
00:27 MacWinner joined #gluster
00:32 SmithyUK joined #gluster
00:35 fandi joined #gluster
00:41 bala joined #gluster
00:52 jonybravo30 gluster volume add-brick gluster_data host03:/data/glusterfs
00:52 jonybravo30 Incorrect number of bricks supplied 1 for type REPLICATE with count 2
00:52 jonybravo30 this should be the new third node
00:52 jonybravo30 any advice, please?
01:02 badone joined #gluster
01:11 daMaestro joined #gluster
01:22 _pol_ joined #gluster
01:31 jbrooks joined #gluster
01:38 theron joined #gluster
01:47 harish joined #gluster
01:53 bala joined #gluster
01:57 haomaiwa_ joined #gluster
02:05 diegows joined #gluster
02:09 rjoseph joined #gluster
02:12 _pol joined #gluster
03:14 David_H_Smith joined #gluster
03:24 _Bryan_ joined #gluster
03:26 RameshN joined #gluster
03:28 kanagaraj joined #gluster
03:34 rejy joined #gluster
04:02 kshlm joined #gluster
04:04 nbalacha joined #gluster
04:14 haomaiwa_ joined #gluster
04:15 itisravi joined #gluster
04:16 atinmu joined #gluster
04:18 kdhananjay joined #gluster
04:20 shubhendu joined #gluster
04:21 ppai joined #gluster
04:38 ws2k3 joined #gluster
04:39 verdurin joined #gluster
04:40 karnan joined #gluster
04:47 nishanth joined #gluster
04:51 ndarshan joined #gluster
04:53 bala joined #gluster
05:03 sage_ joined #gluster
05:07 spandit joined #gluster
05:09 bala joined #gluster
05:13 lalatenduM joined #gluster
05:15 prasanth_ joined #gluster
05:17 sahina joined #gluster
05:20 gildub joined #gluster
05:25 tg2 joined #gluster
05:26 PeterA1 joined #gluster
05:29 sac_ joined #gluster
05:48 Paul-C left #gluster
05:49 rjoseph joined #gluster
05:52 soumya joined #gluster
05:59 harish joined #gluster
06:00 karnan joined #gluster
06:02 karnan joined #gluster
06:06 raghu joined #gluster
06:06 kshlm joined #gluster
06:10 soumya joined #gluster
06:13 nshaikh joined #gluster
06:19 shubhendu joined #gluster
06:21 dusmant joined #gluster
06:21 anil_ joined #gluster
06:22 wgao joined #gluster
06:23 overclk joined #gluster
06:24 meghanam joined #gluster
06:24 meghanam_ joined #gluster
06:27 atalur joined #gluster
06:28 aravindavk joined #gluster
06:28 saurabh joined #gluster
06:43 shubhendu joined #gluster
06:43 kanagaraj joined #gluster
06:47 nishanth joined #gluster
06:49 bala joined #gluster
06:51 dusmant joined #gluster
06:55 Debloper joined #gluster
06:57 ricky-ti1 joined #gluster
07:04 ctria joined #gluster
07:12 glusterbot News from newglusterbugs: [Bug 1174625] nfs server restarts when a snapshot is deactivated <https://bugzilla.redhat.com/show_bug.cgi?id=1174625>
07:15 pcaruana joined #gluster
07:17 aravindavk joined #gluster
07:18 nishanth joined #gluster
07:20 bala joined #gluster
07:20 nshaikh joined #gluster
07:31 LebedevRI joined #gluster
07:31 dusmant joined #gluster
07:32 jtux joined #gluster
07:38 [Enrico] joined #gluster
07:48 kovshenin joined #gluster
07:49 rgustafs joined #gluster
07:58 mbukatov joined #gluster
08:05 [Enrico] joined #gluster
08:05 [Enrico] joined #gluster
08:05 ricky-ticky1 joined #gluster
08:34 fsimonce joined #gluster
08:42 glusterbot News from newglusterbugs: [Bug 1174639] [SNAPSHOT]: Need logging correction during the lookup failure case. <https://bugzilla.redhat.com/show_bug.cgi?id=1174639>
08:43 Fetch joined #gluster
08:44 lalatenduM joined #gluster
08:47 Dw_Sn joined #gluster
08:48 ghenry joined #gluster
08:48 ghenry joined #gluster
08:55 smohan joined #gluster
08:57 tvb joined #gluster
08:57 tvb anyone who can help me with:
08:57 tvb [2014-12-16 09:57:10.182988] W [client3_1-fops.c:1059:client3_1_getxattr_cbk] 0-STATIC-DATA-client-1: remote operation failed: No such file or directory. Path: <gfid:941acccd-e33a-43b4-a679-fc900ec4f10e> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
08:58 tvb when I run a heal
09:02 theron joined #gluster
09:03 fandi joined #gluster
09:08 ppai joined #gluster
09:08 Dw_Sn joined #gluster
09:09 aulait joined #gluster
09:15 jonybravo30_ joined #gluster
09:15 jonybravo30_ in glusterfs, can the replica count be 3, 5, 7, etc.?
09:16 jonybravo30_ or only even numbers?
09:18 liquidat joined #gluster
09:26 dizquierdo joined #gluster
09:27 mrEriksson jonybravo30_: You can have any number of replicas
09:29 Slashman joined #gluster
09:40 SOLDIERz_ joined #gluster
09:47 sahina joined #gluster
09:50 mrEriksson joined #gluster
09:51 atalur joined #gluster
09:54 atalur joined #gluster
09:55 elico joined #gluster
10:04 Norky joined #gluster
10:13 sahina joined #gluster
10:18 theron joined #gluster
10:19 [Enrico] joined #gluster
10:20 deniszh joined #gluster
10:24 DV joined #gluster
10:38 bala joined #gluster
10:50 jonybravo30_ sorry, but i am stuck. i have two nodes with working replication, but when i add a new brick to a volume running replica 2, the output is
10:50 jonybravo30_ Incorrect number of bricks supplied 1 for type REPLICATE with count 2
10:51 jonybravo30_ the result should be 3 replicated nodes
10:51 jonybravo30_ with one brick on each
10:53 M28 if the replica count of the volume is 2, each set of 2 bricks will "merge"
10:54 M28 changing the replica count is tricky
10:55 B21956 joined #gluster
10:55 M28 I think that in 3.3 there was something added for that, but it's probably not documented well
10:56 M28 if there isn't anything in the volume, it's easier to just destroy it and create a new one
11:03 jonybravo30_ thanks M28, i will try this command
11:03 partner_ hmm, haven't done that.. i have made a replica from a distributed volume and vice versa, but not gone from 2 to 3..
11:05 jonybravo30_ gluster volume create gluster_data replica 3 transport tcp host01:/data/glusterfs host02:/data/glusterfs host03:/data/glusterfs
11:06 jonybravo30_ this command should create one brick on each node and they will be replicated, right?
11:07 M28 yes, those 3 bricks will be identical
11:07 jonybravo30_ is there another way without destroying the existing replica 2 (composed of 2 nodes)?
11:07 jonybravo30_ this is a production environment
11:11 M28 well
11:11 M28 is the current volume just 2 bricks?
11:11 jonybravo30_ yeah
11:11 jonybravo30_ 1 brick for each node
11:11 M28 you can create another volume, copy the contents, and delete the old volume once you verify that the contents have been copied correctly
11:11 kkeithley1 joined #gluster
11:12 sahina joined #gluster
11:13 ndevos jonybravo30_: possibly like this: gluster volume add-brick $VOLNAME replica 3 host03:/data/glusterfs
11:14 M28 ndevos, I think that will try to create a new set of bricks and fail because it needs 3 bricks
11:14 jonybravo30_ # gluster volume add-brick gluster_data replica 3 host03:/data/glusterfs wrong brick type: replica, use <HOSTNAME>:<export-dir-abs-path>
11:14 meghanam__ joined #gluster
11:14 jonybravo30_ out:
11:14 jonybravo30_ wrong brick type: replica, use <HOSTNAME>:<export-dir-abs-path>
11:15 ndevos hmm, that should work for moving from a distribute-only to a replica-2 set, maybe replica-3 isn't possible that way :-/
11:15 jonybravo30_ but, # gluster volume add-brick gluster_data host03:/data/glusterfs
11:15 jonybravo30_ Incorrect number of bricks supplied 1 for type REPLICATE with count 2
11:16 meghanam joined #gluster
11:16 ndevos yes, by default add-brick expects N x "the replica" number of bricks
11:17 jonybravo30_ there is no command to update replica count?
11:18 ndevos yes there is, add-brick can do that for replica 1 -> 2, but maybe not for 2 -> 3
11:18 jonybravo30_ i will try to create another brick, but it would be so much easier if that worked
11:18 jonybravo30_ i see
11:19 jonybravo30_ TY!
11:20 ndevos you can send an email to gluster-users@gluster.org and see if someone knows of a trick
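(A note for later readers: on GlusterFS 3.3 and newer, the syntax ndevos suggested is the documented way to raise a replica count; the "wrong brick type: replica" error above usually means the CLI in use is too old to understand the replica keyword on add-brick. A minimal sketch, assuming the volume and host names from this log and a 3.3+ install:

    # add one brick and raise the replica count from 2 to 3 in the same step
    gluster volume add-brick gluster_data replica 3 host03:/data/glusterfs

    # then trigger a full self-heal so existing data is copied onto the new brick
    gluster volume heal gluster_data full
)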
11:20 purpleidea fubada: oh! what version of gluster are you using? 3.6 ?
11:22 lalatenduM joined #gluster
11:28 anil_ joined #gluster
11:31 calisto joined #gluster
11:34 hybrid512 joined #gluster
11:37 ndevos REMINDER: Gluster Community Bug Triage meeting starts in ~25 minutes in #gluster-meeting
11:38 ndarshan joined #gluster
11:38 diegows joined #gluster
11:42 soumya_ joined #gluster
11:44 calum_ joined #gluster
11:48 sahina joined #gluster
11:50 shubhendu joined #gluster
11:52 anil joined #gluster
11:54 DV joined #gluster
11:57 saurabh joined #gluster
11:57 badone joined #gluster
12:00 jdarcy joined #gluster
12:05 lpabon joined #gluster
12:05 RameshN joined #gluster
12:05 DV joined #gluster
12:07 theron joined #gluster
12:10 M28_ joined #gluster
12:13 glusterbot News from resolvedglusterbugs: [Bug 764063] Debian package does not depend on fuse <https://bugzilla.redhat.com/show_bug.cgi?id=764063>
12:14 soumya joined #gluster
12:14 crashmag_ joined #gluster
12:14 anti[Enrico] joined #gluster
12:16 feeshon joined #gluster
12:17 diegows joined #gluster
12:17 ppai joined #gluster
12:18 ppai joined #gluster
12:19 LebedevRI joined #gluster
12:20 edward1 joined #gluster
12:21 eclectic_ joined #gluster
12:22 Andreas-IPO joined #gluster
12:22 feeshon joined #gluster
12:23 cfeller joined #gluster
12:24 ryao_ joined #gluster
12:26 siel_ joined #gluster
12:27 DV joined #gluster
12:28 ur__ joined #gluster
12:29 karnan joined #gluster
12:29 ndarshan joined #gluster
12:29 ndarshan joined #gluster
12:30 Andreas-IPO_ joined #gluster
12:30 T3 joined #gluster
12:31 ramteid joined #gluster
12:32 shubhendu joined #gluster
12:34 JonathanD joined #gluster
12:34 pdrakeweb joined #gluster
12:35 RameshN joined #gluster
12:36 ppai joined #gluster
12:36 uebera|| joined #gluster
12:36 uebera|| joined #gluster
12:37 smohan joined #gluster
12:39 itisravi_ joined #gluster
12:43 glusterbot News from newglusterbugs: [Bug 1173437] [RFE] changes needed in snapshot info command's xml output. <https://bugzilla.redhat.com/show_bug.cgi?id=1173437>
12:43 glusterbot News from newglusterbugs: [Bug 1075417] Spelling mistakes and typos in the glusterfs source <https://bugzilla.redhat.com/show_bug.cgi?id=1075417>
12:43 glusterbot News from newglusterbugs: [Bug 1172348] new installation of glusterfs3.5.3-1; quota not displayed to client. <https://bugzilla.redhat.com/show_bug.cgi?id=1172348>
12:43 glusterbot News from newglusterbugs: [Bug 1172458] fio: end_fsync failed for file FS_4k_streaming_writes.1.0 unsupported operation <https://bugzilla.redhat.com/show_bug.cgi?id=1172458>
12:43 glusterbot News from newglusterbugs: [Bug 1173732] Glusterd fails when script set_geo_rep_pem_keys.sh is executed on peer <https://bugzilla.redhat.com/show_bug.cgi?id=1173732>
12:43 glusterbot News from newglusterbugs: [Bug 1173909] glusterd crash after upgrade from 3.5.2 <https://bugzilla.redhat.com/show_bug.cgi?id=1173909>
12:43 glusterbot News from resolvedglusterbugs: [Bug 1173953] typos <https://bugzilla.redhat.com/show_bug.cgi?id=1173953>
12:47 DV joined #gluster
12:49 rgustafs joined #gluster
12:50 _Bryan_ joined #gluster
12:50 ira joined #gluster
13:01 Slashman_ joined #gluster
13:12 lalatenduM joined #gluster
13:13 glusterbot News from newglusterbugs: [Bug 1170786] man mount.glusterfs <https://bugzilla.redhat.com/show_bug.cgi?id=1170786>
13:13 glusterbot News from newglusterbugs: [Bug 1171477] Hook scripts are not installed after make install <https://bugzilla.redhat.com/show_bug.cgi?id=1171477>
13:13 bennyturns joined #gluster
13:13 glusterbot News from newglusterbugs: [Bug 1174765] Hook scripts are not installed after make install <https://bugzilla.redhat.com/show_bug.cgi?id=1174765>
13:13 glusterbot News from newglusterbugs: [Bug 1135358] Update licensing and move all MacFUSE references to OSXFUSE <https://bugzilla.redhat.com/show_bug.cgi?id=1135358>
13:13 glusterbot News from newglusterbugs: [Bug 1065620] in 3.5 hostname resuluation issues <https://bugzilla.redhat.com/show_bug.cgi?id=1065620>
13:13 glusterbot News from resolvedglusterbugs: [Bug 1169999] Debian 7 wheezy, /sbin/mount.glusterfs fails to automount and print help <https://bugzilla.redhat.com/show_bug.cgi?id=1169999>
13:18 nishanth joined #gluster
13:43 elico joined #gluster
13:43 elico joined #gluster
13:43 elico joined #gluster
13:43 glusterbot News from newglusterbugs: [Bug 1174783] [readdir-ahead]: indicate EOF for readdirp <https://bugzilla.redhat.com/show_bug.cgi?id=1174783>
13:44 elico joined #gluster
13:46 dusmant joined #gluster
13:48 julim joined #gluster
13:51 elico joined #gluster
13:52 theron joined #gluster
13:52 theron joined #gluster
13:55 ppai joined #gluster
13:56 theron joined #gluster
13:58 nbalacha joined #gluster
13:58 soumya joined #gluster
14:05 coredump joined #gluster
14:06 asku joined #gluster
14:12 virusuy joined #gluster
14:13 meghanam__ joined #gluster
14:13 meghanam joined #gluster
14:14 bala joined #gluster
14:17 DV joined #gluster
14:20 pdrakewe_ joined #gluster
14:21 coredump|br joined #gluster
14:22 kkeithley1 joined #gluster
14:23 kaii_ joined #gluster
14:24 saltsa joined #gluster
14:26 bene joined #gluster
14:29 ryao_ joined #gluster
14:31 johndescs joined #gluster
14:31 ur_ joined #gluster
14:37 fubada purpleidea: yes sir, 3.6
14:37 vimal joined #gluster
14:46 feeshon joined #gluster
14:46 primusinterpares joined #gluster
14:47 DV joined #gluster
14:49 m0zes joined #gluster
14:50 m0zes joined #gluster
14:53 afics__ joined #gluster
14:55 calum_ joined #gluster
15:05 wushudoin joined #gluster
15:06 tdasilva joined #gluster
15:10 cmtime joined #gluster
15:10 DV joined #gluster
15:14 wushudoin joined #gluster
15:24 purpleidea fubada: that's a bug... want to write the patch or should i?
15:25 purpleidea fubada: additionally, this should be ported to use data-in-modules... https://github.com/purpleidea/puppet-gluster/blob/master/manifests/host.pp#L100
15:25 bene joined #gluster
15:25 RameshN joined #gluster
15:25 shubhendu joined #gluster
15:30 nishanth joined #gluster
15:30 Philambdo joined #gluster
15:32 dizquierdo left #gluster
15:32 jbrooks joined #gluster
15:36 jobewan joined #gluster
15:43 bennyturns joined #gluster
15:47 bala joined #gluster
16:03 _pol joined #gluster
16:12 T3 joined #gluster
16:16 sac_ joined #gluster
16:21 harish joined #gluster
16:25 RameshN joined #gluster
16:33 fubada purpleidea: i dont think ill be able to write this patch
16:33 fubada as im a busy n00b
16:33 fubada :(
16:43 dberry joined #gluster
16:43 dberry joined #gluster
16:46 lmickh joined #gluster
16:46 vimal joined #gluster
17:05 T3 what's with the high number of gfid entries I see in this output?
17:05 T3 root@web4:~# gluster volume heal site-images info
17:05 T3 Brick web3:/export/images1-1/brick/
17:05 T3 <gfid:04fe3e9f-b3d0-49f3-bdfd-2c00cea24e16>
17:05 T3 (and goes, thousands of them)
17:06 T3 it goes forever, never ends
17:08 Norky joined #gluster
17:09 T3 I can see this binary file with same name as the hash: /export/images1-1/brick/.glusterfs/04/fe/04fe3e9f-b3d0-49f3-bdfd-2c00cea24e16
17:11 T3 glusterbot, split-brain
17:12 glusterbot T3: (#1) To heal split-brain, use splitmount. http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/, or (#2) For additional information, see this older article http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
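(An aside on the <gfid:...> entries above: on the brick itself, the .glusterfs entry for a regular file is a hard link to the real file, so a gfid can usually be mapped back to a path by searching for the same inode. A rough sketch using the brick path and gfid from this log; it only works for regular files, since directory gfids are stored as symlinks:

    BRICK=/export/images1-1/brick
    GFID=04fe3e9f-b3d0-49f3-bdfd-2c00cea24e16
    # .glusterfs stores the entry under <first 2 hex chars>/<next 2 hex chars>/<full gfid>
    GFIDFILE=$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID
    # find any other path sharing that inode, skipping .glusterfs itself
    find "$BRICK" -path "$BRICK/.glusterfs" -prune -o -samefile "$GFIDFILE" -print
)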
17:12 calum_ joined #gluster
17:15 purpleidea fubada: did you look at the link??
17:16 purpleidea fubada: the easy version of the patch is a one-liner to add the correct operating version...
17:16 purpleidea fubada: if you don't want to do that, then point me to the correct data... i think there was a wiki page
17:16 purpleidea fubada: actually: http://gluster.org/community/documentation/index.php/OperatingVersions
17:17 fubada 30600
17:17 fubada :)
17:17 purpleidea fubada: can you confirm in 3.6 that's what you saw when you install gluster manually?
17:18 fubada purpleidea: in  /var/lib/glusterd/glusterd.info
17:18 fubada ?
17:19 fubada I just see a uuid
17:20 wushudoin left #gluster
17:20 Norky joined #gluster
17:21 daMaestro joined #gluster
17:21 TrDS joined #gluster
17:26 purpleidea fubada: before puppet-gluster removed whatever value was there, there was a value there... if you did a clean install you should see 30600 and it would be great if you confirmed this theory :)
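(For context on that 30600 value: a clean glusterfs 3.6 install writes both a UUID and an operating-version line to /var/lib/glusterd/glusterd.info, roughly like the following; the UUID here is made up:

    UUID=de305d54-75b4-431b-adb2-eb6b9e546014
    operating-version=30600
)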
17:27 Norky joined #gluster
17:33 purpleidea fubada: patched in git master. please test and confirm happiness :)
17:33 purpleidea fubada: https://github.com/purpleidea/puppet-gluster/commit/097fab7904e9fb3715d8f884a0e58f7a9d28190c
17:37 Norky joined #gluster
17:38 deniszh joined #gluster
17:43 lalatenduM joined #gluster
17:53 R0ok_|kejani joined #gluster
17:55 SOLDIERz_ joined #gluster
17:58 fubada purpleidea: awesome, testing! thank you
18:02 purpleidea fubada: and....
18:03 purpleidea fubada: even better, test this version: https://github.com/purpleidea/puppet-gluster/tree/feat/operating-version
18:03 fubada purpleidea: give me a second
18:03 fubada is master not good? i'd have to change my r10k
18:03 purpleidea fubada: i generalized the code so that adding new gluster versions is easy and hopefully makes this bug easier to patch in the future
18:04 purpleidea fubada: master should fix your issue. the feature branch should generalize that fix. i'd love to have it tested :) ... it should *also* solve your problem, and keep solving it in the future instead of only now.
18:05 fubada thanks man, testing master
18:05 purpleidea fubada: thanks. lmk
18:06 fandi joined #gluster
18:10 R0ok_|kejani joined #gluster
18:11 fubada purpleidea: works!
18:11 fubada awesome no more service restart
18:12 d-fence joined #gluster
18:17 elico joined #gluster
18:18 purpleidea fubada: sweet! as a bonus, care to test the feature branch i added?
18:18 purpleidea fubada: if we know it works now, i'll merge it into master, and then it will be less likely (or easier to fix) when gluster 3.7 or 3.6.next comes out
18:22 zerick joined #gluster
18:26 R0ok_|kejani joined #gluster
18:27 theron joined #gluster
18:33 TrDS left #gluster
18:40 n-st joined #gluster
18:42 MacWinner joined #gluster
18:47 rjoseph joined #gluster
18:51 chirino joined #gluster
18:55 bennyturns joined #gluster
19:13 DougBishop joined #gluster
19:15 Paul-C joined #gluster
19:20 daxatlas joined #gluster
19:20 lpabon joined #gluster
19:23 daxatlas Question, does anyone geo-replicate a single 40TB Distributed-replicate volume in production? If so, have you run into issues? We have a 4 node cluster that hosts a 72TB volume (40TB used) and geo-replicates that volume over a WAN to another set of 4 nodes. It appears to be functioning normally, except that if you update the contents of a file, it doesn't update in datacenter 2 (the geo-replica target) until after you nuke the indexes. Has anyone had this
19:33 semiosis iirc there's a delay of several minutes before changes are synced to the slave
19:34 semiosis did you wait 10+ minutes?
19:34 semiosis that should be long enough
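(If changes still haven't appeared after that delay, the geo-replication session state is the first thing to check. A sketch with placeholder master volume and slave names:

    # overall session state per brick (should be Active/Passive, not Faulty)
    gluster volume geo-replication mastervol slavehost::slavevol status
    # more detail, including crawl status and pending entries
    gluster volume geo-replication mastervol slavehost::slavevol status detail
)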
20:04 edong23 joined #gluster
20:10 rwheeler joined #gluster
20:15 m0ellemeister joined #gluster
20:19 ira joined #gluster
20:21 coredump joined #gluster
20:49 calum_ joined #gluster
20:51 y4m4_ joined #gluster
20:54 gildub joined #gluster
20:55 Philambdo joined #gluster
20:57 y4m4_ left #gluster
21:02 ricky-ti1 joined #gluster
21:03 _pol joined #gluster
21:10 calum_ joined #gluster
21:10 theron joined #gluster
21:44 T3 I read somewhere that it's not recommended to run other services on the same server as a gluster brick. Anyone know why?
21:45 glusterbot News from newglusterbugs: [Bug 1005344] duplicate entries in volume property <https://bugzilla.redhat.com/show_bug.cgi?id=1005344>
21:45 semiosis "You can't believe everything you read on the Internet." -Abraham Lincoln
21:47 coredump joined #gluster
21:49 badone joined #gluster
21:51 devilspgd T3: I did a small test with Apache2, gluster client and bricks all on one machine (well, all on each of three), and the performance was, well, lackluster.
21:51 semiosis T3: seriously though, i've run plenty of other stuff alongside gluster bricks.  as long as you have sufficient cpu & memory for the workload you should be OK.  unless you want to run POP3S/IMAPS, then you might run into a port conflict :(
21:52 devilspgd But that might speak more to my hardware specs than gluster itself.
21:53 chirino joined #gluster
21:54 devilspgd I'm in a VPS environment so I can shuffle; I moved to a 2x Apache plus 2x Gluster brick setup with the same total number of cores, and things are much snappier.
21:55 devilspgd For my load, NFS was much snappier than the gluster client too, although I'm seeing some weird file ownership issues since switching to NFS... Can't figure out how they'd be related though.
21:58 _pol joined #gluster
21:58 T3 ok, that's comforting guys haha
21:59 devilspgd Also keep in mind that Apache2 is hosting tons of small files (html, css, js, php, etc) -- This is a very different workload than user documents, or large/bulk file storage. Test in your own environment.
21:59 devilspgd My setup is less than ideal hardware too :)
22:01 T3 my gluster data is a lot of tiny files too.. pdf, .doc and images (jpg, png)
22:01 T3 I'm on cloud, plenty of room to increase
22:01 T3 and money for that shouldn't be a problem either
22:02 T3 I already noticed that glusterfs consumes cpu and ram, which shouldn't be a problem, unless it keeps insanely consuming more and more
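(Two commands that help with the co-hosting questions above: volume status lists the TCP port and PID of each brick, NFS, and self-heal process, which is handy when checking for clashes with other services as semiosis mentioned, and the mem sub-command gives a rough memory snapshot per process. The volume name is reused from earlier in the log:

    gluster volume status site-images
    gluster volume status site-images mem
)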
22:05 diegows joined #gluster
22:13 badone joined #gluster
22:15 gildub joined #gluster
22:25 devilspgd Even with bricks on dedicated hardware, I still occasionally see gluster on the client side chewing tons and tons of CPU when there doesn't seem to be anything else happening.
22:29 MugginsM joined #gluster
22:51 calum_ joined #gluster
23:21 _pol joined #gluster
23:28 jaank joined #gluster
23:35 partner_ just looking at one client box, 21% of cpu goes to the glusterfs process
23:35 partner_ i guess it's working hard :)
23:38 fandi joined #gluster
