
IRC log for #gluster, 2013-01-12


All times are shown in UTC.

Time Nick Message
00:22 andreask joined #gluster
00:35 xmltok_ joined #gluster
00:37 jiffe1 joined #gluster
00:57 mohankumar joined #gluster
01:06 nik_ joined #gluster
01:10 gbrand_ joined #gluster
01:16 xmltok joined #gluster
01:17 xmltok joined #gluster
01:30 bala1 joined #gluster
01:50 chirino joined #gluster
02:17 hagarth joined #gluster
02:23 stopbit joined #gluster
02:45 nik_ joined #gluster
03:08 lh joined #gluster
03:18 jvyas joined #gluster
04:49 yinyin joined #gluster
05:09 hagarth joined #gluster
05:48 hagarth joined #gluster
06:11 mohankumar joined #gluster
06:19 hagarth joined #gluster
06:20 raven-np joined #gluster
06:39 sound joined #gluster
06:39 Guest11478 hi guys, question: I reinstalled gluster (3.3.1) and want to recreate a volume using the old bricks with data in them
06:39 phreek is that possible?
06:41 phreek any help would be greatly appreciated, thanks in advance
06:47 phreek anyone :(
06:52 hagarth joined #gluster
06:58 cyr_ joined #gluster
07:04 yinyin joined #gluster
07:27 JoeJulian phreek: Should work as long as you specify the bricks in the same order.
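For reference, a minimal sketch of the recreate-from-existing-bricks step JoeJulian describes, assuming GlusterFS 3.3.x, hypothetical hostnames and brick paths (server1:/export/brick1 and so on), and a replica-2 layout; the xattr cleanup is the commonly documented workaround for the "already part of a volume" error and leaves the data on the bricks untouched:

    # On each server, clear the old volume-id so 3.3.x will accept the brick again
    # (hypothetical brick path; the data in the brick is left in place).
    setfattr -x trusted.glusterfs.volume-id /export/brick1

    # Recreate the volume listing the bricks in the same order as before,
    # so the replica pairs line up the same way they originally did.
    gluster volume create myvol replica 2 \
        server1:/export/brick1 server2:/export/brick1 \
        server3:/export/brick1 server4:/export/brick1
    gluster volume start myvol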
07:40 theron joined #gluster
07:47 hagarth joined #gluster
07:55 ctria joined #gluster
08:04 theron joined #gluster
08:54 yinyin joined #gluster
09:02 bala joined #gluster
09:03 phreek JoeJulian: any idea why i get duplicate files after i recreate the volume
09:04 phreek seeing the same file like 20 times
10:25 bala joined #gluster
11:35 H___ joined #gluster
11:47 gbrand_ joined #gluster
11:57 yinyin joined #gluster
12:13 rags_ joined #gluster
12:14 cyr_ joined #gluster
12:15 ultrabizweb joined #gluster
12:20 mohankumar joined #gluster
12:53 khushildep joined #gluster
13:15 mohankumar joined #gluster
14:58 yinyin joined #gluster
15:54 tjikkun joined #gluster
15:54 tjikkun joined #gluster
15:58 yinyin joined #gluster
16:11 tjikkun_ joined #gluster
16:16 m0zes joined #gluster
16:58 yinyin joined #gluster
17:01 _br_- joined #gluster
17:09 _br_ joined #gluster
17:59 yinyin joined #gluster
18:04 khushildep joined #gluster
18:17 rags_ joined #gluster
18:59 yinyin joined #gluster
19:11 designbybeck_ joined #gluster
19:43 _NiC Is anyone running KVM vm's on top of gluster? How is your io performance?
19:56 _NiC How would gluster work as a serving filesystem for websites? I.e. mount /home/*/public_html as a gluster filesystem?
19:56 semiosis i use it to serve web sites and it works great for me
19:57 _NiC semiosis, how high is your traffic?
19:58 semiosis what difference does it make?  you can scale glusterfs to meet your needs
19:59 _NiC just curious.
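For reference, a minimal sketch of the web-serving setup being discussed, with hypothetical names (volume "homes", server gluster1) and the native FUSE client:

    # One-off mount of the volume over the web content path
    mount -t glusterfs gluster1:/homes /home

    # Or persistently via /etc/fstab (the _netdev option delays the mount
    # until networking is up):
    # gluster1:/homes  /home  glusterfs  defaults,_netdev  0 0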
20:00 NashTrash joined #gluster
20:00 NashTrash Hello Gluster'ers
20:01 NashTrash I have a drive migration planned, and I am hoping you all can guide me a bit.
20:01 NashTrash We want to move all of our disks from slow disks to fast disks but we have to keep the cluster in operation the whole time.
20:02 NashTrash I was planning on adding all of the new drives, rebalancing, then slowly removing the old drives with a periodic rebalance.
20:02 lkoranda joined #gluster
20:02 NashTrash Reasonable?
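For reference, a minimal sketch of the plan NashTrash outlines (add the fast bricks, rebalance, then drain and drop the slow ones), with hypothetical hostnames and brick paths and a replica count of 3 to match the volume described below:

    # Add the new fast bricks in multiples of the replica count
    gluster volume add-brick myvol \
        new1:/fast/brick1 new2:/fast/brick1 new3:/fast/brick1

    # Spread existing data across the enlarged volume
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status

    # Drain the old slow bricks, then remove them once migration completes
    gluster volume remove-brick myvol \
        old1:/slow/brick1 old2:/slow/brick1 old3:/slow/brick1 start
    gluster volume remove-brick myvol \
        old1:/slow/brick1 old2:/slow/brick1 old3:/slow/brick1 status
    gluster volume remove-brick myvol \
        old1:/slow/brick1 old2:/slow/brick1 old3:/slow/brick1 commit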
20:02 semiosis NashTrash: if you use replication you can just replace one replica at a time, i've done that
20:02 semiosis one replica brick at a time that is
20:03 NashTrash Ok.  Can you please provide some additional details?
20:03 NashTrash I am running with replication set to 3
20:03 semiosis kill its glusterfsd process to block any access to the brick, then replace the disk keeping the mount point the same, then restart the glusterd service on the server and it will respawn the glusterfsd process you killed earlier
20:04 semiosis see ,,(processes)
20:04 glusterbot the GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/hJBvL for more information.
20:04 yinyin joined #gluster
20:04 semiosis self heal should fill the new empty brick in with data from its surviving replica(s)
20:05 semiosis of course i suggest testing this on a test env before doing it on prod :)
20:05 NashTrash Cool.  I will look into that.
20:06 NashTrash I have also been testing rebalance and I see lots of errors in the rebalance log.  But, I found a mail thread saying that this was just a known issue and that these weren't really failures.
20:06 NashTrash Do you happen to know anything about this?
20:06 semiosis keep an eye on your clients to make sure they reconnect to the brick -- they should, but just keep an eye on their logs to be sure
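For reference, a minimal sketch of the per-brick swap semiosis describes, with hypothetical device names, paths, and volume name; it only restates the steps above and assumes the surviving replicas hold the data:

    # Find and stop the brick's export daemon (PID comes from the status output)
    gluster volume status myvol
    kill 12345                       # example PID of the brick's glusterfsd

    # Swap the disk, keeping the same mount point (example device and path)
    umount /export/brick1
    mkfs.xfs /dev/sdb1
    mount /dev/sdb1 /export/brick1

    # Restarting glusterd respawns the glusterfsd killed above.  On some
    # 3.3/3.4 setups the new, empty brick root also needs the
    # trusted.glusterfs.volume-id xattr copied from a surviving brick first.
    service glusterd restart

    # Kick off and watch self-heal filling the new brick from its replicas
    gluster volume heal myvol full
    gluster volume heal myvol info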
20:07 elyograg NashTrash: by chance is your volume more than half full?  (I have a hammer, everything looks like a nail... ;)
20:07 semiosis tbh i'm wary of rebalance and have managed to get by without it
20:07 NashTrash elyograg: Nope.  Mostly empty
20:07 elyograg ok, my thought wouldn't likely be the case, then.
20:10 _NiC semiosis, what are your webservers? physical or virtual machines?
20:11 semiosis virtual all the way
20:11 semiosis ec2 :)
20:12 _NiC semiosis, :)
20:12 rags_ joined #gluster
21:00 ultrabizweb joined #gluster
21:04 yinyin joined #gluster
21:57 DataBeaver joined #gluster
22:05 yinyin joined #gluster
22:10 JuanBre joined #gluster
22:19 khushildep joined #gluster
23:02 theron joined #gluster
23:05 yinyin joined #gluster
23:16 Ramereth joined #gluster
23:17 Ramereth joined #gluster
23:22 raven-np joined #gluster
23:25 JuanBre joined #gluster
23:30 theron joined #gluster
23:52 Ramereth joined #gluster
