IRC log for #gluster, 2014-11-15


All times shown according to UTC.

Time Nick Message
00:01 snowboarder04 that worked JoeJulian, thanks! (btw the link to Jeff Darcy's blog post is dead)
00:14 calisto joined #gluster
00:51 JoeJulian I know... sad...
00:53 sputnik13 joined #gluster
00:59 David_H_Smith joined #gluster
01:05 David_H__ joined #gluster
01:09 David_H_Smith joined #gluster
01:20 David_H_Smith joined #gluster
01:21 David_H_Smith joined #gluster
01:23 David_H_Smith joined #gluster
01:24 sputnik13 joined #gluster
01:34 David_H_Smith joined #gluster
01:41 David_H_Smith joined #gluster
01:47 David_H_Smith joined #gluster
01:48 David_H_Smith joined #gluster
01:49 David_H__ joined #gluster
01:53 David_H_Smith joined #gluster
02:07 sputnik13 joined #gluster
02:28 bala joined #gluster
02:39 smallbig_ joined #gluster
03:00 rjoseph joined #gluster
03:08 JustinClift joined #gluster
03:23 David_H_Smith joined #gluster
03:37 meghanam_ joined #gluster
03:38 meghanam joined #gluster
03:51 Nowaker left #gluster
03:59 B21956 joined #gluster
04:00 David_H_Smith joined #gluster
04:06 nishanth joined #gluster
04:12 zerick joined #gluster
04:16 georgeh joined #gluster
04:41 B21956 joined #gluster
04:45 pp joined #gluster
04:53 lalatenduM joined #gluster
05:05 vimal joined #gluster
05:17 lalatenduM joined #gluster
05:42 David_H_Smith joined #gluster
05:44 David_H_Smith joined #gluster
06:00 meghanam_ joined #gluster
06:00 meghanam joined #gluster
06:04 haomaiwang joined #gluster
06:26 anoopcs joined #gluster
06:27 soumya joined #gluster
07:12 badone joined #gluster
07:52 ekuric joined #gluster
08:09 badone joined #gluster
08:21 ctria joined #gluster
08:30 ProT-0-TypE joined #gluster
08:44 d4nku joined #gluster
09:06 nishanth joined #gluster
09:07 spandit joined #gluster
09:13 ekuric left #gluster
09:26 badone joined #gluster
09:28 andreask joined #gluster
09:30 pradeepto joined #gluster
09:40 pradeepto joined #gluster
10:05 andreask joined #gluster
10:21 andreask joined #gluster
10:27 Philambdo joined #gluster
10:37 T0aD joined #gluster
10:43 soumya joined #gluster
10:46 lalatenduM joined #gluster
11:16 calum_ joined #gluster
11:19 andreask joined #gluster
11:27 lalatenduM joined #gluster
11:48 Tool_Fan joined #gluster
11:55 lalatenduM joined #gluster
12:08 nbalachandran joined #gluster
12:18 lalatenduM joined #gluster
12:35 hagarth joined #gluster
12:44 anand joined #gluster
12:44 anand left #gluster
12:45 Tool_Fan joined #gluster
12:55 LebedevRI joined #gluster
13:07 meghanam joined #gluster
13:08 meghanam_ joined #gluster
13:56 tdasilva joined #gluster
14:00 ekuric joined #gluster
14:05 doekia joined #gluster
14:31 anoopcs joined #gluster
14:37 sonicrose joined #gluster
14:40 sonicrose hi #gluster!!!  i’m looking for some ideas on why rebalance doesn’t finish?  I have about 20 million pictures taking up about 22TB of a 72TB gluster volume… 36 x 4TB bricks in distribute 9 x replica 2…  when I run rebalance it gets thru scanning a few hundred thousand pictures and a few hundred GB, but then it stops.. the run time keeps going, it still says running, but the file counters in rebalance volume status dont count up
14:40 sonicrose anymore.  if i do rebalance stop it says stopped but the runtime keeps going.  if i try to do rebalance start or start force it says rebalance is already running.  seems I have to stop glusterd on all nodes, then restart it… any ideas  is there a rebalancing log?
14:42 sonicrose the bricks are “very” unbalanced. we initially started with just 24 TB total cap… we got to about 75% before i added more bricks… now i’ve still got a bunch of bricks at 75% use and a ton of others at 5% use
14:43 sonicrose i believe i’m trying to use rebalance the right way to get it to balance out the freespace on all the bricks?
14:43 sonicrose also what happens if a brick fills up?  will it find the free space on another brick?
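(For reference, a hedged sketch of where to look when a rebalance appears stuck, assuming the volume name pictage-ftp that appears later in this log: the counters sonicrose refers to come from

    gluster volume rebalance pictage-ftp status

and each server writes a rebalance log under /var/log/glusterfs, typically named pictage-ftp-rebalance.log.)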
14:53 meghanam joined #gluster
14:53 _Bryan_ joined #gluster
14:53 meghanam_ joined #gluster
15:01 calisto joined #gluster
15:01 JustinClift joined #gluster
15:13 DanielGluster joined #gluster
15:50 social joined #gluster
16:02 Philambdo joined #gluster
16:04 tdasilva joined #gluster
16:05 vimal joined #gluster
16:06 deepakcs joined #gluster
16:09 mikedep333 joined #gluster
16:13 lalatenduM joined #gluster
16:35 DanielGluster joined #gluster
16:54 cyberbootje joined #gluster
17:36 sonicrose ^^ is it ok to repeat my question after 30 or 40 people have come and gone?
17:38 JustinClift sonicrose: Sure
17:39 JustinClift sonicrose: It's probably a better idea to email the gluster-users mailing list though
17:39 JustinClift Some weekends have a bunch of people on IRC here.  Other weekends have almost no-one.
17:39 JustinClift This seems to be a "no-one" weekend. ;)
17:40 JustinClift (it is shortly after release though, so people are probably taking the weekend off to recover for once :>)
17:40 sonicrose ooh. speaking of releases… i noticed a problem between 352 and 360
17:40 JustinClift sonicrose: I'm kinda doubtful you'll get a timely answer this weekend. :(
17:40 sonicrose what's the new release?
17:40 JustinClift 3.6.1
17:40 sonicrose ah.
17:41 sonicrose i think the centos yum repos were missing gluster server 3.6.0
17:41 JustinClift 3.6.0 had some conflicts with the RH released "glusterfs-3.6.0" packages (part of the "Red Hat Storage" product)
17:41 JustinClift Heh
17:41 JustinClift This is exactly that bug
17:41 sonicrose well that is not the problem i was having
17:41 JustinClift So, we pushed out a 3.6.1, which has a higher release #.  Overrides theirs ;)
17:42 sonicrose ah i see cool.  i’ll have to try to bring mine up soon
17:42 JustinClift What's the problem you're having, just in case I happen to have a clue about it (doubtfuly)
17:42 sonicrose i’m having a problem with rebalance though
17:42 sonicrose on 3.5.2
17:42 JustinClift s/doubtfuly/doubtful
17:42 JustinClift Yeah, no idea personally
17:42 JustinClift I've become extremely non-technical over the last few months
17:42 sonicrose i’ll probably open a ticket monday if i dont find some answers this weekend
17:42 sonicrose i have a bug report account
17:42 JustinClift Yeah, it's a good idea. :)
17:51 JustinClift joined #gluster
18:05 dataio how can i make gluster start again.. its not in /etc/init.d/glusterd start
18:06 dataio .. /usr/sbin/glusterd
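(A hedged sketch of the usual ways to start the gluster management daemon for the situation dataio describes; the service name and init system are assumptions about the distro, not taken from this log:

    service glusterd start        # SysV init (e.g. RHEL/CentOS 6), if the init script is installed
    systemctl start glusterd      # systemd-based systems
    /usr/sbin/glusterd            # launching the daemon binary directly, as dataio notes
)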
18:33 B21956 joined #gluster
18:33 B21956 left #gluster
18:45 johnnytran joined #gluster
18:47 ccha2 joined #gluster
18:47 ws2k3 joined #gluster
18:49 stigchristian joined #gluster
18:50 sman_ joined #gluster
18:51 sickness joined #gluster
18:51 l0uis_ joined #gluster
18:52 dastar joined #gluster
18:54 tom[] joined #gluster
18:54 tomased joined #gluster
18:54 atrius joined #gluster
18:55 prasanth|brb joined #gluster
18:55 ocellus joined #gluster
18:57 rastar_afk joined #gluster
18:57 sage__ joined #gluster
18:58 cmorandi1 joined #gluster
18:59 Ramereth|home joined #gluster
19:00 Bosse joined #gluster
19:00 social joined #gluster
19:01 crashmag joined #gluster
19:02 Andreas-IPO_ joined #gluster
19:04 johnnytran joined #gluster
19:05 capri joined #gluster
19:07 dockbram joined #gluster
19:09 Arrfab joined #gluster
19:22 msciciel joined #gluster
19:22 lyang0 joined #gluster
19:23 the-me joined #gluster
19:23 _NiC joined #gluster
19:23 eclectic joined #gluster
19:23 kke joined #gluster
19:24 Rydekull joined #gluster
19:25 wgao joined #gluster
19:26 m0zes joined #gluster
19:26 ccha2 joined #gluster
19:30 johndescs joined #gluster
19:34 doekia joined #gluster
19:36 plarsen joined #gluster
19:49 aulait joined #gluster
19:54 tom[] joined #gluster
19:57 VeggieMeat_ joined #gluster
19:59 delhage_ joined #gluster
19:59 nixpanic_ joined #gluster
19:59 morsik_ joined #gluster
20:00 nixpanic_ joined #gluster
20:00 eryc_ joined #gluster
20:00 ryao_ joined #gluster
20:00 schrodinger_ joined #gluster
20:03 jaymtee joined #gluster
20:03 frankS2_ joined #gluster
20:03 and`_ joined #gluster
20:03 mikedep3- joined #gluster
20:04 plarsen joined #gluster
20:04 RobertLaptop_ joined #gluster
20:05 calisto joined #gluster
20:06 [o__o] joined #gluster
20:07 nixpanic joined #gluster
20:07 Guest33713 joined #gluster
20:07 XpineX joined #gluster
20:07 gburiticato joined #gluster
20:07 coreping joined #gluster
20:07 verdurin joined #gluster
20:07 samkottler joined #gluster
20:07 JordanHackworth joined #gluster
20:08 ultrabizweb joined #gluster
20:11 georgeh joined #gluster
20:11 purpleidea joined #gluster
20:11 purpleidea joined #gluster
20:11 kke joined #gluster
20:11 necrogami joined #gluster
20:12 cfeller joined #gluster
20:13 Maitre joined #gluster
20:15 mikedep333 joined #gluster
20:15 free_amitc_ joined #gluster
20:16 asku joined #gluster
20:16 codex joined #gluster
20:16 tdjb joined #gluster
20:17 and` joined #gluster
20:24 plarsen joined #gluster
20:24 harish joined #gluster
20:28 JoeJulian sonicrose: Sounds to me like the rebalance client crashed. There is a log, look in /var/log/glusterfs on all the servers.
20:28 sonicrose thx joe i’ll check there
20:29 JoeJulian sonicrose: rebalance has been a long-running (imho) disaster and one that I've never felt comfortable using as a solution to anything.
20:29 JoeJulian sonicrose: I would, however, recommend doing the rebalance...fix-layout which would at least allow your clients to use the new bricks.
20:31 JoeJulian sonicrose: With regard to full bricks, if a brick is full and a *new* file is intended to be created there, it will create that file on a different brick and create a dht pointer to let that file be found.
20:31 JoeJulian If you're filling up existing files, you're up the proverbial creek without the necessary means of propulsion.
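(For context on the dht pointer JoeJulian describes: DHT leaves a zero-byte link file on the brick the filename hashes to, with the sticky bit set and an xattr naming the subvolume that actually holds the data. A hedged way to inspect one directly on a brick, with a purely illustrative brick path:

    ls -l /export/brick1/some/file                                            # mode ---------T and size 0 mark a DHT link file
    getfattr -n trusted.glusterfs.dht.linkto -e text /export/brick1/some/file
)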
20:31 sonicrose is fix-layout still an option i kept getting errors
20:32 JoeJulian It is.
20:33 JoeJulian "volume rebalance <VOLNAME> [fix-layout] {start|stop|status}"
20:34 Telsin joined #gluster
20:44 sonicrose joined #gluster
20:45 sonicrose @joejulian  E [glusterfsd.c:1531:glusterfs_pidfile_setup] 0-glusterfsd: pidfile /var/lib/glusterd/vols/pictage-ftp/rebalance/f8f02856-83f8-48de-adb6-2368af291494.pid lock error (Resource temporarily unavailable)
20:46 vimal joined #gluster
20:47 sonicrose Usage: volume rebalance <VOLNAME> [fix-layout] {start|stop|status} [force]
20:47 sonicrose so fix-layout start or fix-layout start force ?
20:47 sonicrose i tried stopping but every time i try to restart it says theres one already running
20:50 Bosse joined #gluster
20:51 foster joined #gluster
21:09 sonicrose i stop it, it says stopped, but the runtime counters keep incrementing … and if i try to do either fix-layout start or just start it says rebalance is already running
21:10 sonicrose can i safely cleanup any .pid files in /var/lib/glusterd/vols/pictage-ftp/rebalance/ ?
21:25 sonicrose left #gluster
21:28 rjoseph joined #gluster
21:34 calisto1 joined #gluster
21:41 B21956 joined #gluster
21:42 B21956 left #gluster
21:53 glusterbot New news from newglusterbugs: [Bug 1158129] After readv, md-cache only checks cache times if read was empty <https://bugzilla.redhat.com/show_bug.cgi?id=1158129>
22:08 al joined #gluster
22:13 DV joined #gluster
22:26 glusterbot New news from resolvedglusterbugs: [Bug 1158622] SELinux denial when mounting glusterfs nfs volume when using base-port option <https://bugzilla.redhat.com/show_bug.cgi?id=1158622>
22:54 glusterbot New news from newglusterbugs: [Bug 1158126] md-cache checks for modification using whole seconds only <https://bugzilla.redhat.com/show_bug.cgi?id=1158126>
23:20 Philambdo joined #gluster
23:36 DV joined #gluster
23:59 bunni_ left #gluster
