IRC log for #gluster, 2013-09-09

All times shown according to UTC.

Time Nick Message
00:25 johnsonetti joined #gluster
01:01 jporterfield joined #gluster
01:20 asias joined #gluster
01:26 diegows_ joined #gluster
01:27 jporterfield joined #gluster
01:33 kevein joined #gluster
01:34 harish joined #gluster
01:38 StarBeast joined #gluster
01:48 piotrektt joined #gluster
01:48 glusterbot New news from newglusterbugs: [Bug 1005616] glusterfs client crash (signal received: 6) <http://goo.gl/FN1Yxb>
02:02 jporterfield joined #gluster
02:08 jporterfield joined #gluster
02:23 harish joined #gluster
02:26 \_pol joined #gluster
02:41 jporterfield joined #gluster
02:53 jporterfield joined #gluster
03:14 wgao joined #gluster
03:26 davinder joined #gluster
03:35 bstr_ joined #gluster
03:46 bala joined #gluster
03:49 jporterfield joined #gluster
04:07 jporterfield joined #gluster
04:19 jporterfield joined #gluster
04:29 RameshN joined #gluster
04:29 harish joined #gluster
04:30 kanagaraj joined #gluster
04:40 hchiramm_ joined #gluster
05:01 jporterfield joined #gluster
05:27 hchiramm_ joined #gluster
05:43 hchiramm_ joined #gluster
05:54 vrg joined #gluster
06:15 shylesh joined #gluster
06:29 jtux joined #gluster
06:39 hchiramm_ joined #gluster
06:40 rgustafs joined #gluster
06:49 glusterbot New news from newglusterbugs: [Bug 996746] Reading from glusterfs drive takes twice as much data transfer as what you read <http://goo.gl/vzf5d5>
06:52 ctria joined #gluster
06:57 hchiramm_ joined #gluster
07:05 jporterfield joined #gluster
07:05 saurabh joined #gluster
07:06 tjikkun_work joined #gluster
07:06 kanagaraj joined #gluster
07:08 eseyman joined #gluster
07:24 jporterfield joined #gluster
07:36 jporterfield joined #gluster
07:44 jporterfield joined #gluster
07:45 tziOm joined #gluster
07:46 puebele1 joined #gluster
07:47 mgebbe_ joined #gluster
07:50 andreask joined #gluster
07:53 wgao joined #gluster
08:01 selphloathing joined #gluster
08:01 selphloathing left #gluster
08:04 puebele1 joined #gluster
08:07 ProT-0-TypE joined #gluster
08:09 tryggvil joined #gluster
08:12 hchiramm_ joined #gluster
08:14 vimal joined #gluster
08:29 msd joined #gluster
08:33 lalatenduM joined #gluster
08:35 glusterbot New news from resolvedglusterbugs: [Bug 996324] possible fdleak on unlink <http://goo.gl/0ZqiYO>
08:36 jporterfield joined #gluster
08:46 hchiramm_ joined #gluster
08:58 21WABGLQ0 joined #gluster
09:04 ricky-ticky joined #gluster
09:08 msciciel_ joined #gluster
09:18 hchiramm_ joined #gluster
09:25 stickyboy Ok, one of my gluster servers has a disk down.  It's a hardware RAID, so I don't have to replace any bricks.
09:26 stickyboy The volumes on that server are replicated... can I simply bring it down and clients shouldn't notice?
09:29 hchiramm_ joined #gluster
09:32 hchiramm__ joined #gluster
09:33 davinder joined #gluster
09:36 manik joined #gluster
09:38 mooperd_ joined #gluster
09:45 RameshN joined #gluster
09:46 hchiramm_ joined #gluster
10:00 hchiramm_ joined #gluster
10:00 wgao_ joined #gluster
10:02 jtux joined #gluster
10:02 wgao joined #gluster
10:09 edward2 joined #gluster
10:10 eseyman joined #gluster
10:12 mika left #gluster
10:26 hchiramm_ joined #gluster
10:33 andreask joined #gluster
10:42 lalatenduM stickyboy, u r still around?
10:45 lalatenduM stickyboy, if it is a replicated volume and you bring down one of the bricks (i.e. the server on which the brick is present), clients should not notice. That means as long as at least one brick from the replica pair is available, it is fine
10:48 timothy joined #gluster
10:50 glusterbot New news from newglusterbugs: [Bug 1005754] du -shc over 15 GB directory with 150k files takes 75 min clock <http://goo.gl/Pt0zFR>
10:53 hchiramm__ joined #gluster
10:55 jtux joined #gluster
11:02 andrewk joined #gluster
11:04 andrewkember joined #gluster
11:10 tesuki joined #gluster
11:15 haritsu joined #gluster
11:16 tryggvil joined #gluster
11:19 micu3 joined #gluster
11:21 mbukatov joined #gluster
11:21 davinder joined #gluster
11:28 stickyboy lalatenduM: Yeah, I'm here.
11:29 timothy joined #gluster
11:29 lalatenduM stickyboy, did u get the answer
11:29 stickyboy lalatenduM: Yeah, I saw your reply.
11:29 stickyboy So then presumably when the replica comes back up it will self heal.
11:30 stickyboy Still, I will make the change when my users aren't very active. :D
11:31 lalatenduM stickyboy, yes, automatic self-heal happens every 10 mins, or on any change to the fs
11:31 lalatenduM stickyboy, yeah, that would be good
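
A rough sketch of the checks behind this exchange, assuming a replica-2 volume named "homes" (a made-up name) on GlusterFS 3.3; exact output varies between versions:

    # before taking the server down: confirm both peers and all bricks are up
    gluster peer status
    gluster volume status homes

    # after the server comes back: watch the self-heal backlog drain
    gluster volume heal homes info
    # or trigger a full heal instead of waiting for the periodic crawl
    gluster volume heal homes full
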
11:33 stickyboy lalatenduM: In other news, I'm running gluster 3.3.1 and trying to find a strategy to bring my machines to 3.3.2 and then 3.4.
11:34 stickyboy If I recall correctly I do servers first.
11:35 lalatenduM stickyboy, I haven't done that myself :), but I read a blog about it somewhere
11:37 stickyboy True, I think there's an official one somewhere
11:38 stickyboy Lemme look
11:38 lalatenduM stickyboy, http://vbellur.wordpress.com/2013/07/15/upgrading-to-glusterfs-3-4/
11:38 glusterbot <http://goo.gl/SXX7P> (at vbellur.wordpress.com)
11:38 stickyboy That's the one
11:41 haritsu joined #gluster
11:44 lalatenduM stickyboy, you might want to look at this too http://vbellur.wordpress.com/2012/05/31/upgrading-to-glusterfs-3-3/
11:44 glusterbot <http://goo.gl/qOiO7> (at vbellur.wordpress.com)
11:48 stickyboy lalatenduM: Interesting, he says GlusterFS 3.4 is compatible with 3.3.  I assume he means 3.3 clients -> 3.4 servers.
11:49 niximor left #gluster
11:49 lalatenduM stickyboy, let me check
11:51 lalatenduM stickyboy, it means you can take a gluster 3.3 server to 3.4; 3.4 is compatible with 3.3
11:51 hchiramm_ joined #gluster
11:52 stickyboy lalatenduM: Seems like I'd be shocked if that *wasn't* the case?
11:53 lalatenduM stickyboy, Yup, but sometimes drastic code changes make a version incompatible with the previous one
11:54 stickyboy Ah
11:55 stickyboy So in this case it's a straightforward upgrade.
11:55 lalatenduM stickyboy, yeah..
11:55 stickyboy "GlusterFS 3.3.0 is not compatible with any earlier released versions. Please make sure that you schedule a downtime before you upgrade."
11:56 stickyboy That's what the 3.1 / 3.2 -> 3.3 guide says. :P
12:00 ndevos stickyboy: that's because there were incompatible changes in the rpc-protocol between those versions; communication will just fail
12:01 stickyboy Is that to say that 3.4 clients can talk to 3.3 servers?
12:01 haritsu joined #gluster
12:01 ndevos yes, that should work, but 3.4 contains a lot of bugfixes, so updating clients is advised
12:02 lalatenduM :)
12:02 ndevos uh, well, the other way around too
12:02 stickyboy ndevos: Definitely
12:02 ndevos stickyboy: at least, I think that should work, I have not tested it myself, so do try before mass-updating :)
12:02 stickyboy haha
12:03 stickyboy I think I'll move from 3.3.1 to 3.3.2 first. :P
12:03 stickyboy I'm in no hurry...
12:03 hchiramm_ joined #gluster
12:04 kkeithley fortune favors the bold. ;-) But there's no harm in being conservative in this case.
12:04 lalatenduM agree with kkeithley !! ;)
12:04 sprachgenerator joined #gluster
12:08 stickyboy kkeithley: Yah, unplanned downtime when something goes wrong is a real drag :P
12:09 stickyboy Like, having to tell your hot date you can't make it cuz "it will only take an hour" turned into four hours. :P
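
A rough sketch of the server-first rolling update being discussed, assuming RPM-based servers and a replicated volume; the package names and the VOLNAME placeholder are assumptions, and the linked vbellur.wordpress.com posts remain the authoritative procedure:

    # one server at a time, while its replica peer keeps serving clients
    service glusterd stop
    killall glusterfsd                                     # stop this server's brick processes
    yum update glusterfs glusterfs-server glusterfs-fuse   # assumed package names
    service glusterd start

    # let self-heal catch this server up before moving on to the next one
    gluster volume heal VOLNAME info
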
12:12 hchiramm__ joined #gluster
12:20 jcsp joined #gluster
12:20 glusterbot New news from newglusterbugs: [Bug 1005786] NFSv4 support <http://goo.gl/rjiysR>
12:23 B21956 joined #gluster
12:24 diegows_ joined #gluster
12:26 tryggvil joined #gluster
12:27 tryggvil joined #gluster
12:27 RameshN joined #gluster
12:35 pkoro joined #gluster
12:44 mattf joined #gluster
12:47 hagarth joined #gluster
12:48 hchiramm_ joined #gluster
12:52 ctria joined #gluster
13:01 aurigus joined #gluster
13:09 harish joined #gluster
13:09 rcheleguini joined #gluster
13:10 lalatenduM joined #gluster
13:16 robo joined #gluster
13:19 StarBeast joined #gluster
13:19 shylesh joined #gluster
13:20 tesuki joined #gluster
13:21 hchiramm_ joined #gluster
13:24 xavih joined #gluster
13:25 bennyturns joined #gluster
13:25 dkorzhevin joined #gluster
13:27 haritsu joined #gluster
13:29 timothy joined #gluster
13:30 sjoeboo joined #gluster
13:33 chirino joined #gluster
13:37 haritsu joined #gluster
13:38 hchiramm_ joined #gluster
13:55 jruggiero joined #gluster
13:56 jruggiero left #gluster
13:58 robo joined #gluster
13:58 bcdonadio joined #gluster
13:59 jruggiero joined #gluster
14:00 bugs_ joined #gluster
14:02 bcdonadio I have two LTSP servers with identical configs, and I need to mirror their homes (the disks are on the servers themselves, not in an external NAS). The objective is to achieve disk-mirroring between servers (there will be no data redundancy inside a single server). There will be no master/slave relation; instead they will be peers, and both must be able to write concurrently. Question: is Gluster for me, or should I try GFS2?
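
For reference, a minimal sketch of the two-node replica layout described here; the hostnames ltsp1/ltsp2 and the brick path /export/homes are made up:

    # on ltsp1, with glusterd running on both servers
    gluster peer probe ltsp2
    gluster volume create homes replica 2 ltsp1:/export/homes ltsp2:/export/homes
    gluster volume start homes

    # each server mounts the volume (not the raw brick) and serves /home from it
    mount -t glusterfs localhost:/homes /home
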
14:05 jruggiero joined #gluster
14:05 timothy joined #gluster
14:06 jruggiero left #gluster
14:09 rwheeler joined #gluster
14:11 jcsp joined #gluster
14:15 andreask joined #gluster
14:21 glusterbot New news from newglusterbugs: [Bug 1005860] GlusterFS: Can't add a third brick to a volume - "Number of Bricks" is messed up <http://goo.gl/cPBrIK>
14:34 ricky-ticky joined #gluster
14:37 timothy joined #gluster
14:48 robo joined #gluster
14:51 glusterbot New news from newglusterbugs: [Bug 1005862] GlusterFS: Can't add a new peer to the cluster - "Number of Bricks" is messed up <http://goo.gl/YF4t1e>
14:56 piotrektt joined #gluster
15:01 andreask joined #gluster
15:07 \_pol joined #gluster
15:21 \_pol joined #gluster
15:21 daMaestro joined #gluster
15:24 kanagaraj joined #gluster
15:24 neofob joined #gluster
15:25 johnbot11 joined #gluster
15:26 zaitcev joined #gluster
15:28 criticalhammer joined #gluster
15:31 hchiramm_ joined #gluster
15:33 Technicool joined #gluster
15:41 Excolo joined #gluster
15:43 Excolo Hey, quick question. I have a 2-server replication setup, and at 4am this morning they started throwing a fit and all my servers connected to them would not add new files. One of the two servers shows glusterfsd at 400% CPU in top. I have taken gluster out of our production environment for the time being. Anyone have any suggestions?
15:45 manik joined #gluster
15:46 jag3773 joined #gluster
15:49 _zerick_ joined #gluster
15:50 zerick joined #gluster
15:55 LoudNoises joined #gluster
15:59 criticalhammer left #gluster
16:08 sjoeboo is there any way to force a gluster volume to re-calculate quota info, without removing quotas?
16:08 sjoeboo one of our volumes seems to suddenly have accounting info way off from reality.
16:12 awheeler joined #gluster
16:18 manik joined #gluster
16:23 samkottler joined #gluster
16:23 Mo__ joined #gluster
16:24 ThatGraemeGuy joined #gluster
16:27 andreask joined #gluster
16:40 RedShift joined #gluster
16:42 StarBeast joined #gluster
17:16 RedShift joined #gluster
17:17 kanagaraj joined #gluster
17:27 andreask joined #gluster
17:30 [o__o] left #gluster
17:32 [o__o] joined #gluster
17:44 kPb_in joined #gluster
17:51 diegows_ joined #gluster
17:54 lpabon joined #gluster
17:55 sprachgenerator joined #gluster
17:57 neofob joined #gluster
18:07 \_pol joined #gluster
18:08 manik joined #gluster
18:10 haritsu joined #gluster
18:22 JoeJulian sjoeboo: The only suggestion I could think of is to restart bricks under the assumption that only one of them has it wrong. Changing settings that would affect a reload of the graph could work. To come up with anything more concrete I'd have to read through source and I don't have time to do that right now.
18:22 sjoeboo hm, okay.
18:23 sjoeboo the number of hard kills/restarts of bricks we seem to have to do is... disconcerting to say the least (service glusterd restart, for example, never gets all the procs)
18:25 Excolo joined #gluster
18:26 GLHMarmot joined #gluster
18:26 flrichar joined #gluster
18:26 JoeJulian service glusterd restart isn't supposed to restart all the procs.
18:27 JoeJulian I, for one, don't want all my bricks restarting just because I want to restart the management daemon.
18:34 sjoeboo right... but aside from manual kills, is there a supported way to restart JUST the bricks?
18:36 JoeJulian Depending on the distro, "service glusterfsd stop ; service glusterd restart" should do it. I have 60 bricks per server so I never want to restart all my bricks simultaneously.
18:36 JoeJulian Of course, "killall glusterfsd" does the same thing.
18:37 JoeJulian er, same thing as "service glusterfsd stop"
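
Putting those two suggestions side by side, assuming a distro whose init scripts ship a separate glusterfsd service:

    # restart only the management daemon; running bricks are left untouched
    service glusterd restart

    # restart the bricks as well: stop them, then let glusterd spawn them again
    service glusterfsd stop      # equivalent to: killall glusterfsd
    service glusterd restart
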
18:54 semiosis :O
19:00 haritsu joined #gluster
19:00 hchiramm_ joined #gluster
19:02 JoeJulian :O
19:02 JoeJulian You back from vacation?
19:09 rwheeler_ joined #gluster
19:13 rbennacer joined #gluster
19:13 rbennacer left #gluster
19:17 GLHMarmot joined #gluster
19:18 hchiramm_ joined #gluster
19:18 semiosis JoeJulian: yes.  came back from vacation with a cold.  back to health (and work) now though :)
19:32 robo joined #gluster
19:39 failshell joined #gluster
19:42 failshell joined #gluster
19:44 vimal joined #gluster
19:47 jasson joined #gluster
19:58 \_pol_ joined #gluster
19:59 robo joined #gluster
20:01 Jasson joined #gluster
20:06 lpabon joined #gluster
20:07 awheeler joined #gluster
20:12 johnbot11 joined #gluster
20:18 manik joined #gluster
20:21 arusso joined #gluster
20:28 chirino joined #gluster
20:31 arusso joined #gluster
20:32 johnmark joined #gluster
20:34 awheele__ joined #gluster
20:35 \_pol joined #gluster
20:50 socinoS joined #gluster
20:52 socinoS I'm getting the following error on Ubuntu 12.04 x64 with gluster 3.3 from semiosis repo: gluster: symbol lookup error: gluster: undefined symbol: gf_xdr_from_cli_defrag_vol_req
20:53 socinoS any ideas
20:54 \_pol_ joined #gluster
21:00 socinoS left #gluster
21:03 badone joined #gluster
21:19 haritsu joined #gluster
21:19 JoeJulian @later tell socinoS Yes, gf_xdr_from_cli_defrag_vol_req comes from a 3.2 binary. You have mixed your versions.
21:19 glusterbot JoeJulian: The operation succeeded.
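
A quick way to spot that kind of version mix on Ubuntu 12.04; the grep pattern is just a filter, check whatever packages are actually installed:

    # every glusterfs package should report the same version
    dpkg -l | grep -i gluster

    # and the installed binaries should agree with the packages
    glusterfs --version
    gluster --version
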
21:21 P0w3r3d joined #gluster
21:23 johnbot11 joined #gluster
21:52 hchiramm_ joined #gluster
22:02 _Bryan_ joined #gluster
22:08 glusterbot New news from resolvedglusterbugs: [Bug 907072] concurrent mkdir results in GFID mismatch <http://goo.gl/ijZsw>
22:19 haritsu joined #gluster
22:20 awheeler joined #gluster
22:38 tryggvil joined #gluster
22:48 robo joined #gluster
23:07 edong23 joined #gluster
23:09 StarBeas_ joined #gluster
23:09 robos joined #gluster
23:09 toad- joined #gluster
23:09 hchiramm_ joined #gluster
23:10 gluslog_ joined #gluster
23:20 haritsu joined #gluster
23:26 childish left #gluster
23:38 ninkotech joined #gluster
23:44 harish joined #gluster
