
IRC log for #gluster, 2014-05-02


All times shown according to UTC.

Time Nick Message
00:06 nikk_ joined #gluster
00:09 sprachgenerator joined #gluster
00:14 gdubreui joined #gluster
00:29 tomato joined #gluster
00:32 tomato Just upgraded from 3.4 to 3.5 and although I have 4 servers in a distributed-replicated configuration, the mount becomes unavailable when checking from a client. Is this to be expected?
00:53 mjsmith2 joined #gluster
00:57 semiosis joined #gluster
01:03 itisravi_afk joined #gluster
01:06 semiosis joined #gluster
01:12 hagarth joined #gluster
01:15 semiosis :O
01:21 bala joined #gluster
01:27 semiosis joined #gluster
01:41 jmarley joined #gluster
01:41 jmarley joined #gluster
01:53 elico So basically the front-servers should have lots of bandwidth towards the replicated volume hosts.
01:53 elico How is the grade of my homework?
01:59 sks joined #gluster
02:23 nishanth joined #gluster
02:23 nthomas joined #gluster
02:35 bharata-rao joined #gluster
02:37 harish joined #gluster
02:40 ppai joined #gluster
02:46 itisravi_afk joined #gluster
02:58 kanagaraj joined #gluster
03:05 nthomas joined #gluster
03:06 nishanth joined #gluster
03:20 jag3773 joined #gluster
03:21 shubhendu joined #gluster
03:45 itisravi_afk joined #gluster
03:49 RameshN joined #gluster
04:09 rastar joined #gluster
04:19 saurabh joined #gluster
04:22 atinmu joined #gluster
04:24 lalatenduM joined #gluster
04:28 ndarshan joined #gluster
04:30 surabhi joined #gluster
04:39 dusmant joined #gluster
04:46 kdhananjay joined #gluster
04:47 pdrakeweb joined #gluster
04:52 lyang0 joined #gluster
05:00 vpshastry joined #gluster
05:05 TvL2386 joined #gluster
05:09 ravindran1 joined #gluster
05:17 nthomas joined #gluster
05:18 nishanth joined #gluster
05:18 hagarth joined #gluster
05:19 nshaikh joined #gluster
05:22 prasanthp joined #gluster
05:25 purpleidea joined #gluster
05:33 raghu joined #gluster
05:37 aravindavk joined #gluster
05:40 bala joined #gluster
05:44 dusmantkp_ joined #gluster
05:52 davinder joined #gluster
06:14 psharma joined #gluster
06:15 nishanth joined #gluster
06:15 rjoseph joined #gluster
06:20 aravindavk joined #gluster
06:28 edward2 joined #gluster
06:32 nishanth joined #gluster
06:42 Psi-Jack_ joined #gluster
06:46 eseyman joined #gluster
06:58 ekuric joined #gluster
06:58 Michal__ joined #gluster
06:58 ctria joined #gluster
06:59 Michal__ hi, i have a problem with a distributed volume. when i create one on 2 bricks on 2 servers and write 200 1MB files from a client, all the files end up on brick 2
06:59 Michal__ any ideas ? :) or is it normal ?
07:00 Michal__ im using 3.5.0
07:07 harish__ joined #gluster
07:15 lalatenduM Michal__, are you creating the files from the mount point?
07:16 Michal__ yes, and the strange thing is when i create them from client 1 they all go to brick1 and from client 2 they all go to brick2
07:17 Michal__ i mounted gluster with mount.glusterfs pc2:/gv0 /mnt/gluster
07:17 Michal__ on both clients
07:20 lalatenduM Michal__, that is not the usual behavior ...which document did you follow for doing all this?
07:22 Michal__ i have used this tutorial http://www.sohailriaz.com/glusterfs-howto-on-centos-6-x/ and documentation for 3.3 http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf
07:23 Michal__ Volume Name: gv0
07:23 Michal__ Type: Distribute
07:23 Michal__ Volume ID: eb8acc71-6ab1-4986-bb94-90b01eb7c27e
07:23 Michal__ Status: Started
07:23 Michal__ Number of Bricks: 2
07:23 Michal__ Transport-type: tcp
07:23 Michal__ Bricks:
07:23 Michal__ Brick1: pc2:/export/sda2/brick1
07:23 Michal__ Brick2: pc3:/export/sda2/brick2
07:25 Michal__ hmm i have just deleted the volume and created a new one and it works fine
07:25 kevein joined #gluster
07:25 Michal__ what i did differently was name the bricks brick1 and brick2 instead of just brick for both
07:26 Michal__ pc2:/export/sda2/brick pc3:/export/sda2/brick previously
07:27 bala joined #gluster
07:30 Michal__ it looks fine now, should have tried to recreate the volume first, sorry
07:33 glusterbot New news from resolvedglusterbugs: [Bug 1091372] Behaviour of glfs_fini() affecting QEMU <https://bugzilla.redhat.com/show_bug.cgi?id=1091372>
07:33 lalatenduM Michal__, the same brick name should not be an issue, at least on my setup
07:36 lalatenduM Michal__, just try creating files named a1 to a100 from any client and check the bricks
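
A rough sketch of the test lalatenduM suggests, using the mount point and brick paths Michal__ showed above (all paths assumed from that output); on a healthy distribute (DHT) volume the files should spread roughly evenly across both bricks:

    # on a client, create 100 small files through the gluster mount
    for i in $(seq 1 100); do echo test > /mnt/gluster/a$i; done
    # then count what landed on each brick, on the respective servers
    ls /export/sda2/brick1 | wc -l    # on pc2
    ls /export/sda2/brick2 | wc -l    # on pc3
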
07:37 fsimonce joined #gluster
07:47 glusterbot New news from newglusterbugs: [Bug 1093594] Glfs_fini() not freeing the resources <https://bugzilla.redhat.com/show_bug.cgi?id=1093594>
07:58 DV joined #gluster
08:00 chirino joined #gluster
08:00 ngoswami joined #gluster
08:11 haomaiwa_ joined #gluster
08:16 micu1 joined #gluster
08:17 systemonkey2 joined #gluster
08:17 glusterbot New news from newglusterbugs: [Bug 1093602] geo-rep/glusterd: Introduce pause and resume cli command for geo-replication <https://bugzilla.redhat.com/show_bug.cgi?id=1093602>
08:18 pdrakewe_ joined #gluster
08:19 jcsp1 joined #gluster
08:20 avati joined #gluster
08:20 natgeorg joined #gluster
08:20 natgeorg joined #gluster
08:21 Michal_ joined #gluster
08:21 smithyuk1_ joined #gluster
08:21 d-fence_ joined #gluster
08:21 tru_tru_ joined #gluster
08:21 nikk__ joined #gluster
08:22 hflai_ joined #gluster
08:22 rwheeler_ joined #gluster
08:46 DV_ joined #gluster
08:48 churnd- joined #gluster
08:53 saravanakumar joined #gluster
08:53 hflai joined #gluster
08:53 ekuric joined #gluster
08:53 NuxRo joined #gluster
08:53 hybrid512 joined #gluster
08:53 crashmag joined #gluster
08:53 klaas joined #gluster
08:53 the-me joined #gluster
08:53 chirino joined #gluster
08:53 Philambdo joined #gluster
08:53 Mneumonik joined #gluster
08:53 nthomas joined #gluster
08:53 eseyman joined #gluster
08:53 fsimonce` joined #gluster
08:53 uebera|| joined #gluster
08:53 siel joined #gluster
08:53 twx_ joined #gluster
08:53 samppah_ joined #gluster
08:53 nikk___ joined #gluster
08:53 Uguu joined #gluster
08:53 johnmark_ joined #gluster
08:53 eclectic_ joined #gluster
08:53 nixpanic_ joined #gluster
08:53 churnd- joined #gluster
08:53 DV_ joined #gluster
08:53 rwheeler_ joined #gluster
08:53 tru_tru_ joined #gluster
08:53 d-fence_ joined #gluster
08:53 smithyuk1_ joined #gluster
08:53 Michal_ joined #gluster
08:53 natgeorg joined #gluster
08:53 avati joined #gluster
08:53 jcsp1 joined #gluster
08:53 pdrakewe_ joined #gluster
08:53 systemonkey2 joined #gluster
08:53 micu1 joined #gluster
08:53 haomaiwa_ joined #gluster
08:53 ngoswami joined #gluster
08:53 harish__ joined #gluster
08:53 ctria joined #gluster
08:53 Psi-Jack joined #gluster
08:53 nishanth joined #gluster
08:53 edward2 joined #gluster
08:53 aravindavk joined #gluster
08:53 rjoseph joined #gluster
08:53 davinder joined #gluster
08:53 purpleidea joined #gluster
08:53 prasanthp joined #gluster
08:53 nshaikh joined #gluster
08:53 hagarth joined #gluster
08:53 ravindran1 joined #gluster
08:53 TvL2386 joined #gluster
08:53 vpshastry joined #gluster
08:53 lyang0 joined #gluster
08:53 kdhananjay joined #gluster
08:53 surabhi joined #gluster
08:53 ndarshan joined #gluster
08:53 lalatenduM joined #gluster
08:53 saurabh joined #gluster
08:53 rastar joined #gluster
08:53 RameshN joined #gluster
08:53 itisravi joined #gluster
08:53 shubhendu joined #gluster
08:53 kanagaraj joined #gluster
08:53 ppai joined #gluster
08:53 bharata-rao joined #gluster
08:53 semiosis joined #gluster
08:53 badone_ joined #gluster
08:53 MacWinner joined #gluster
08:53 tg2 joined #gluster
08:53 coredump joined #gluster
08:53 gmcwhistler joined #gluster
08:53 Licenser joined #gluster
08:53 asku joined #gluster
08:53 [o__o] joined #gluster
08:53 ron-slc joined #gluster
08:53 tdasilva joined #gluster
08:53 cvdyoung joined #gluster
08:53 qdk joined #gluster
08:53 ThatGraemeGuy joined #gluster
08:53 AaronGr joined #gluster
08:53 cyberbootje joined #gluster
08:53 cogsu joined #gluster
08:53 ghenry joined #gluster
08:53 nhm joined #gluster
08:53 sadbox joined #gluster
08:53 xymox joined #gluster
08:53 mkzero joined #gluster
08:53 Slasheri joined #gluster
08:53 hchiramm_ joined #gluster
08:53 Debolaz joined #gluster
08:53 ninkotech joined #gluster
08:53 GabrieleV joined #gluster
08:53 verdurin joined #gluster
08:53 d3vz3r0 joined #gluster
08:53 cyber_si_ joined #gluster
08:53 jvandewege joined #gluster
08:53 VeggieMeat joined #gluster
08:53 sauce joined #gluster
08:53 johnmwilliams__ joined #gluster
08:53 masterzen joined #gluster
08:53 refrainblue joined #gluster
08:53 basso joined #gluster
08:53 glusterbot joined #gluster
08:53 VerboEse joined #gluster
08:53 edong23 joined #gluster
08:53 xiu joined #gluster
08:53 sputnik13 joined #gluster
08:53 jiffe98 joined #gluster
08:53 Humble joined #gluster
08:53 RobertLaptop joined #gluster
08:53 JoseBravo joined #gluster
08:53 XpineX joined #gluster
08:53 marcoceppi joined #gluster
08:53 Intensity joined #gluster
08:53 msp3k1 joined #gluster
08:53 somepoortech joined #gluster
08:53 nullck joined #gluster
08:53 ninkotech_ joined #gluster
08:53 tjikkun_work joined #gluster
08:53 velladecin joined #gluster
08:53 txmoose joined #gluster
08:53 seddrone joined #gluster
08:53 suliba joined #gluster
08:53 elico joined #gluster
08:53 abyss^ joined #gluster
08:53 ultrabizweb joined #gluster
08:53 doekia joined #gluster
08:53 social joined #gluster
08:53 yosafbridge joined #gluster
08:53 necrogami joined #gluster
08:53 dblack joined #gluster
08:53 jbd1_away joined #gluster
08:53 Joe630 joined #gluster
08:53 Peanut joined #gluster
08:53 Arrfab joined #gluster
08:53 foster joined #gluster
08:53 tziOm joined #gluster
08:53 Bardack joined #gluster
08:53 Amanda joined #gluster
08:53 JonathanD joined #gluster
08:53 jhp joined #gluster
08:53 tjikkun joined #gluster
08:53 overclk joined #gluster
08:53 sticky_afk joined #gluster
08:53 Georgyo joined #gluster
08:53 codex joined #gluster
08:53 silky joined #gluster
08:53 n0de joined #gluster
08:53 T0aD joined #gluster
08:53 decimoe joined #gluster
08:53 mjrosenb joined #gluster
08:53 lanning joined #gluster
08:53 JustinClift joined #gluster
08:53 atrius joined #gluster
08:53 cfeller joined #gluster
08:53 divbell joined #gluster
08:53 auganov joined #gluster
08:53 kshlm joined #gluster
08:53 osiekhan1 joined #gluster
08:53 portante joined #gluster
08:53 msvbhat joined #gluster
08:53 xavih joined #gluster
08:53 NCommander joined #gluster
08:53 _NiC joined #gluster
08:53 eightyeight joined #gluster
08:53 goerk joined #gluster
08:53 JoeJulian joined #gluster
08:53 efries_ joined #gluster
08:53 msciciel3 joined #gluster
08:53 radez_g0n3 joined #gluster
08:53 mwoodson joined #gluster
08:53 fyxim_ joined #gluster
08:53 sulky joined #gluster
08:53 DanF joined #gluster
08:53 neoice joined #gluster
08:53 tomased joined #gluster
08:53 sac`away joined #gluster
08:53 atrius` joined #gluster
08:53 aurigus joined #gluster
08:53 lkoranda joined #gluster
08:53 Gugge joined #gluster
08:53 muhh joined #gluster
08:53 mibby joined #gluster
08:53 Dave2 joined #gluster
08:53 Ramereth joined #gluster
08:53 ndevos joined #gluster
08:53 Rydekull joined #gluster
08:53 jezier_ joined #gluster
08:53 saltsa_ joined #gluster
08:53 al joined #gluster
08:53 juhaj joined #gluster
08:53 l0uis joined #gluster
08:53 coredumb joined #gluster
08:53 delhage joined #gluster
08:53 mtanner_ joined #gluster
08:53 eshy joined #gluster
08:53 eryc joined #gluster
08:53 sac_ joined #gluster
08:53 brosner joined #gluster
08:53 Kins joined #gluster
08:53 SteveCooling joined #gluster
08:53 partner joined #gluster
08:53 vincent_vdk joined #gluster
08:53 stigchristian joined #gluster
08:53 jurrien joined #gluster
08:53 m0zes joined #gluster
08:53 georgeh|workstat joined #gluster
08:53 ackjewt joined #gluster
08:53 morse joined #gluster
08:56 primusinterpares joined #gluster
09:07 tryggvil joined #gluster
09:12 nueces joined #gluster
09:15 dusmant joined #gluster
09:16 vpshastry joined #gluster
09:24 pkoro joined #gluster
09:27 bala joined #gluster
09:34 scuttle_ joined #gluster
09:41 ravindran1 joined #gluster
09:49 bala joined #gluster
10:06 gdubreui joined #gluster
10:07 ade_b joined #gluster
10:07 ade_b should I be worried about messages like - E [glusterd-utils.c:153:glusterd_lock] 0-management: Unable to get lock for uuid: afd1c0f5-129b-4856-abdc-d6b15567dd51, lock held by: afd1c0f5-129b-4856-abdc-d6b15567dd51
10:10 kumar joined #gluster
10:15 bala joined #gluster
10:23 andreask joined #gluster
10:31 surabhi joined #gluster
10:34 glusterbot New news from resolvedglusterbugs: [Bug 1080970] SMB:samba and ctdb hook scripts are not present in corresponding location after installation of 3.0 rpm's <https://bugzilla.redhat.com/show_bug.cgi?id=1080970>
10:45 nshaikh joined #gluster
10:52 prasanthp joined #gluster
10:59 davinder2 joined #gluster
11:03 saurabh joined #gluster
11:13 nishanth joined #gluster
11:13 nthomas joined #gluster
11:18 glusterbot New news from newglusterbugs: [Bug 1093688] Memory corruption issues reported by Coverity <https://bugzilla.redhat.com/show_bug.cgi?id=1093688>
11:19 kkeithley joined #gluster
11:25 hagarth joined #gluster
11:31 churnd joined #gluster
11:33 Norky joined #gluster
11:35 kkeithley joined #gluster
11:37 ira joined #gluster
11:41 RameshN joined #gluster
11:48 glusterbot New news from newglusterbugs: [Bug 1093690] Illegal memory accesses issues reported by Coverity <https://bugzilla.redhat.com/show_bug.cgi?id=1093690> || [Bug 1093692] Resource/Memory leak issues reported by Coverity. <https://bugzilla.redhat.com/show_bug.cgi?id=1093692>
11:57 nthomas_ joined #gluster
11:59 andreask joined #gluster
12:01 nthomas joined #gluster
12:07 jmarley joined #gluster
12:07 jmarley joined #gluster
12:11 nthomas_ joined #gluster
12:11 nishanth joined #gluster
12:21 itisravi joined #gluster
12:22 edward2 joined #gluster
12:31 nthomas_ joined #gluster
12:32 nishanth joined #gluster
12:32 DV_ joined #gluster
12:33 ndarshan joined #gluster
12:40 ade_b I'm seeing a few "Unable to get lock" messages in my log, should I be concerned? http://fpaste.org/98592/13990343/
12:40 glusterbot Title: #98592 Fedora Project Pastebin (at fpaste.org)
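
A hedged sketch of how one might dig into those "Unable to get lock" messages (standard paths assumed); in ade_b's paste the lock is reported as held by the same UUID that is requesting it, which often just means two gluster CLI commands ran at the same time, and if it keeps happening a glusterd restart on the node holding the lock usually clears a stale cluster lock:

    # match the UUID from the log message to a node
    cat /var/lib/glusterd/glusterd.info    # local node's UUID
    gluster peer status                    # UUIDs of the other peers
    # if the messages persist, restart the management daemon on the node
    # that holds the lock (brick processes keep running)
    service glusterd restart               # or: systemctl restart glusterd
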
12:44 sroy joined #gluster
12:47 GabrieleV joined #gluster
12:48 surabhi joined #gluster
12:51 nishanth joined #gluster
12:53 bala1 joined #gluster
13:10 lalatenduM joined #gluster
13:15 kkeithley and he's back.
13:29 DV joined #gluster
13:35 primechuck joined #gluster
13:37 dbruhn joined #gluster
13:44 vpshastry left #gluster
13:52 qdk joined #gluster
13:58 mjsmith2 joined #gluster
14:07 jbrooks joined #gluster
14:10 _Bryan_ joined #gluster
14:12 y4m4 joined #gluster
14:12 lmickh joined #gluster
14:18 jag3773 joined #gluster
14:27 jbrooks joined #gluster
14:27 aravindavk joined #gluster
14:29 jobewan joined #gluster
14:33 vpshastry joined #gluster
14:38 vpshastry left #gluster
14:52 kmai007 joined #gluster
14:53 kmai007 has anybody seen this in the gluster storage brick log? http://fpaste.org/98626/13990423/
14:53 glusterbot Title: #98626 Fedora Project Pastebin (at fpaste.org)
14:53 fsimonce joined #gluster
14:53 kmai007 i don't know what to make of the gibberish characters when it's trying to do an lstat?
14:53 kmai007 on gluster3.4.2
15:02 davinder joined #gluster
15:08 LoudNoises joined #gluster
15:10 aravindavk joined #gluster
15:15 Mneumonik joined #gluster
15:16 scuttle_ joined #gluster
15:16 jmarley joined #gluster
15:16 jmarley joined #gluster
15:18 sprachgenerator joined #gluster
15:19 ndevos kmai007: that looks like a very strange filename?
15:21 nage joined #gluster
15:39 hagarth joined #gluster
15:39 daMaestro joined #gluster
15:49 glusterbot New news from newglusterbugs: [Bug 1093768] Comment typo in gf-history.changelog.c <https://bugzilla.redhat.com/show_bug.cgi?id=1093768>
15:55 vpshastry joined #gluster
15:56 somepoortech I have a distributed replicated gluster volume, when I blow away a server to test failure recovery and try to check status all I get is 'Another transaction could be in progress. Please try again after sometime.' anyone have experience with this?
15:57 somepoortech this would be for running - gluster volume heal gfs info
16:08 eseyman joined #gluster
16:09 jmarley joined #gluster
16:09 jmarley joined #gluster
16:18 semiosis somepoortech: never seen that before, but could you ,,(paste) part of the glusterd log around when you ran that command?
16:18 glusterbot somepoortech: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
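
For example, on an RPM-based distro the glusterd log can be piped straight to fpaste as glusterbot describes (the log file name below is the usual default and may differ on other setups):

    # grab the tail of the glusterd log from around the failed command and paste it
    tail -n 200 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | fpaste
    # on debian/ubuntu the equivalent would be: ... | pastebinit
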
16:19 Mo__ joined #gluster
16:21 SFLimey joined #gluster
16:24 zaitcev joined #gluster
16:25 MeatMuppet joined #gluster
16:28 jag3773 joined #gluster
16:36 B21956 joined #gluster
16:36 [o__o] joined #gluster
16:42 jbd1 joined #gluster
16:49 glusterbot New news from newglusterbugs: [Bug 1089668] DHT - rebalance - gluster volume rebalance status shows output even though User hasn't run rebalance on that volume (it shows remove-brick status) <https://bugzilla.redhat.com/show_bug.cgi?id=1089668>
16:49 aravindavk joined #gluster
17:02 somepoortech semiosis: http://paste.ubuntu.com/7380779/
17:02 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
17:09 semiosis somepoortech: please pastie the output of 'gluster peer status' from each of your servers
17:10 semiosis bbiab
17:18 Matthaeus joined #gluster
17:23 VerboEse joined #gluster
17:38 somepoortech semiosis: one of my servers has no peers present
17:40 somepoortech semiosis: http://paste.ubuntu.com/7380968/
17:40 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
17:42 daMaestro joined #gluster
17:50 gmcwhistler joined #gluster
18:04 Matthaeus joined #gluster
18:12 TvL2386 joined #gluster
18:34 jrcresawn joined #gluster
18:35 prasanth|offline joined #gluster
18:37 semiosis somepoortech: check out ,,(replace)
18:37 glusterbot somepoortech: Useful links for replacing a failed server... if replacement server has different hostname: http://web.archive.org/web/20120508153302/http://community.gluster.org/q/a-replica-node-has-failed-completely-and-must-be-replaced-with-new-empty-hardware-how-do-i-add-the-new-hardware-and-bricks-back-into-the-replica-pair-and-begin-the-healing-process/ ... or if replacement
18:37 glusterbot server has same hostname: http://goo.gl/rem8L
18:37 semiosis probably that second link
18:42 dfs joined #gluster
18:52 sjoeboo joined #gluster
18:53 dfs left #gluster
18:55 ron-slc joined #gluster
19:04 edward1 joined #gluster
19:06 lpabon joined #gluster
19:18 theron joined #gluster
19:25 theron joined #gluster
19:25 ThatGraemeGuy joined #gluster
19:28 vpshastry left #gluster
19:32 theron_ joined #gluster
19:35 social joined #gluster
19:35 scuttle_ joined #gluster
19:37 theron joined #gluster
19:40 somepoortech semiosis: I ended up doing this to reconnect my bricks after re-attaching the peer: http://www.joejulian.name/blog/replacing-a-brick-on-glusterfs-340 waiting for a couple hundred gigs to self-heal now
19:40 glusterbot Title: Replacing a brick on GlusterFS 3.4.0 (at www.joejulian.name)
19:41 semiosis glad to hear it
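
The recovery somepoortech describes roughly amounts to re-probing the rebuilt peer, stamping the empty replacement brick with the volume's volume-id xattr so the brick process will start, and then kicking off a full self-heal; a rough sketch rather than an exact transcription of the linked post, where the volume name gfs comes from earlier in the log and the brick path /export/brick is a placeholder:

    # from a healthy server: re-attach the rebuilt peer
    gluster peer probe <new-server>
    # read the volume-id from a surviving brick on a good server...
    getfattr -n trusted.glusterfs.volume-id -e hex /export/brick
    # ...and stamp it onto the empty replacement brick on the new server
    setfattr -n trusted.glusterfs.volume-id -v 0x<volume-id-hex> /export/brick
    # restart glusterd so it starts the brick, then heal everything
    service glusterd restart
    gluster volume heal gfs full
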
20:03 chirino joined #gluster
20:12 jruggiero joined #gluster
20:23 P0w3r3d joined #gluster
20:51 MrAbaddon joined #gluster
21:01 diegows joined #gluster
21:24 qdk joined #gluster
21:30 Debolaz2_ joined #gluster
21:34 kaptk2 joined #gluster
21:53 badone_ joined #gluster
21:58 edoceo joined #gluster
21:59 edoceo So, XFS is the preferred filesystem for Gluster - correct?
21:59 edoceo Any preference on building that on top of RAID# or LVM or whichever?
22:01 Debolaz2_ XFS is the recommended filesystem.
22:06 Joe630 edoceo: I *think* it is more efficient to have gluster do the replication over multiple drives than to use raid, but I am not sure.
22:07 Joe630 so if you have a raid 1 with 1 brick, I think you would do better having 2 bricks at raid 0
22:07 Joe630 but that is complete conjecture
22:13 edoceo I'll have two bricks
22:13 edoceo I've got like 12 disks in each, but I want the full size of each, so I'm thinking I just stripe in both machines
22:14 edoceo Or, maybe, sacrifice a little size to get raid6 + one warm disk
22:22 gmcwhist_ joined #gluster
22:49 semiosis edoceo: smaller bricks will heal faster
22:50 semiosis less complexity is better, imho
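
As a rough illustration of the advice above, a brick is normally just an XFS filesystem mounted on each server, with replication handled at the volume layer rather than (or in addition to) RAID; the device, hostnames, and paths below are placeholders, and -i size=512 was the commonly recommended inode size so gluster's extended attributes fit inline:

    # on each server (device assumed; could be a raw disk or an md RAID6)
    mkfs.xfs -i size=512 /dev/md0
    mkdir -p /export/brick1
    mount /dev/md0 /export/brick1
    mkdir /export/brick1/gv0    # use a subdirectory of the mount as the brick
    # from either server, once both are in the trusted pool
    gluster volume create myvol replica 2 server1:/export/brick1/gv0 server2:/export/brick1/gv0
    gluster volume start myvol
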
23:05 zerick joined #gluster
23:42 mjsmith2 joined #gluster
23:59 sprachgenerator joined #gluster
