
IRC log for #gluster, 2015-10-17


All times shown according to UTC.

Time Nick Message
00:01 haomaiwa_ joined #gluster
00:04 dgandhi joined #gluster
00:14 shyam joined #gluster
00:14 shyam left #gluster
00:23 plarsen joined #gluster
00:26 kminooie i guess the more important question is whether it is safe, while upgrading, to skip a few versions? I am talking about the actual cluster, not the client stuff. And if so, would it still be safe to do that one node at a time (rolling upgrade), or does the cluster need to be shut down?
00:34 Chr1st1an I did an upgrade from 3.4 to 3.7; it did not go that well
00:35 Chr1st1an Should have gone for 3.6 first
00:38 JoeJulian I did the 3.4->3.7 upgrade. No problems.
00:39 JoeJulian I'll be doing it again soon.
00:39 JoeJulian What problem did you have?
00:39 zhangjn joined #gluster
00:42 Chr1st1an Well, I upgraded from Red Hat Storage 2.1 to the 3.1U1 release following their guide
00:43 Chr1st1an Had 1 node fail during yum update with some weird error message; a known issue when running on blade servers, apparently
00:44 Chr1st1an Then, after a reboot of the system, 12 of the nodes failed to start glusterd
00:46 Chr1st1an Then I got some UUID issues due to peer probe (my mistake)
00:47 Chr1st1an And when I finally got the volume up and running, I did a gluster volume stop volumename and then rebooted all the nodes, just to check that they would come back up after a reboot
00:48 Chr1st1an But now I see a lot of "volume start vol0 : FAILED : Locking failed on"
00:50 Chr1st1an There was a bug in 3.7.0 but it should have been fixed in 3.7.1 from what I can find
00:56 JoeJulian Ah, ok. I waited until the critical bugs were fixed. 3.7.4.
00:57 Chr1st1an Didn't know there were that many bugs in 3.7; I've been running it for a while on another cluster without issues
00:58 Chr1st1an But this cluster didn't like it :(
01:01 haomaiwa_ joined #gluster
01:11 JoeJulian Hang out in here all day helping people on the side and you see a lot more bugs than you do testing it. :)
01:12 Chr1st1an Think I found my error
01:17 Chr1st1an When running "gluster volume set all cluster.op-version 30703" it fails, but when you do "gluster volume get vol0 op-version" it returns: cluster.op-version  30703
01:17 Chr1st1an But when looking at the volume info file
01:18 Chr1st1an # cat /var/lib/glusterd/vols/vol0/info | grep op-version
01:18 Chr1st1an op-version=2
01:18 Chr1st1an client-op-version=2
01:20 Chr1st1an Unsure if that can just be changed in the config files while glusterd is stopped
01:32 JoeJulian That's not it. Mine are still at 2.
01:37 Chr1st1an Thanks , then I will have to look at other places tomorrow :)
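The op-version checks Chr1st1an walks through above can be gathered in one place. A minimal sketch, assuming a volume named vol0 and GlusterFS 3.7 (where the "gluster volume get" subcommand exists); as JoeJulian notes, op-version=2 in the on-disk info file is normal for volumes created on older releases and is not by itself the cause of the locking failures:

```shell
# Ask the running cluster what op-version it believes it is using
gluster volume get vol0 cluster.op-version

# Compare against what glusterd has persisted on disk for the volume
grep op-version /var/lib/glusterd/vols/vol0/info

# Attempt to raise the cluster-wide operating version
# (30703 corresponds to the 3.7.3 release)
gluster volume set all cluster.op-version 30703
```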
01:39 theron joined #gluster
01:45 dgbaley joined #gluster
02:01 cliluw joined #gluster
02:01 haomaiwa_ joined #gluster
02:04 beeradb joined #gluster
02:13 cliluw joined #gluster
02:23 Lee1092 joined #gluster
02:27 plarsen joined #gluster
02:30 plarsen joined #gluster
02:35 maveric_amitc_ joined #gluster
03:01 haomaiwa_ joined #gluster
03:03 beeradb joined #gluster
03:12 vmallika joined #gluster
03:23 dgandhi joined #gluster
03:27 [7] joined #gluster
03:40 stickyboy joined #gluster
03:51 haomaiwa_ joined #gluster
04:22 F2Knight joined #gluster
04:24 haomaiwa_ joined #gluster
04:29 maveric_amitc_ joined #gluster
04:41 hagarth joined #gluster
04:51 vmallika joined #gluster
04:53 woakes070048 joined #gluster
04:57 skoduri joined #gluster
05:01 haomaiwa_ joined #gluster
05:32 beeradb joined #gluster
05:37 rafi joined #gluster
06:01 haomaiwa_ joined #gluster
06:01 beeradb joined #gluster
06:05 dusmant joined #gluster
06:06 kotreshhr joined #gluster
06:20 kotreshhr left #gluster
06:26 RayTrace_ joined #gluster
06:45 Philambdo joined #gluster
06:46 haomaiwa_ joined #gluster
06:46 cppking joined #gluster
07:01 haomaiwa_ joined #gluster
07:07 RayTrace_ joined #gluster
07:07 Pupeno joined #gluster
07:11 haomaiwang joined #gluster
07:22 haomaiwa_ joined #gluster
07:38 LebedevRI joined #gluster
07:55 rafi joined #gluster
08:01 haomaiwa_ joined #gluster
08:30 Philambdo joined #gluster
09:09 deni #firefox
09:09 deni ups...sorry about that
09:27 hos7ein joined #gluster
09:28 deniszh joined #gluster
09:33 bluenemo joined #gluster
09:36 Philambdo joined #gluster
09:36 stickyboy joined #gluster
09:38 rafi joined #gluster
09:58 RayTrac__ joined #gluster
09:59 RayTrace_ joined #gluster
10:14 Pupeno joined #gluster
10:23 Philambdo joined #gluster
10:27 social joined #gluster
10:52 stickyboy joined #gluster
11:02 kotreshhr joined #gluster
11:02 kotreshhr left #gluster
11:25 kotreshhr joined #gluster
11:25 kotreshhr left #gluster
11:41 Lee1092 joined #gluster
11:46 ghenry joined #gluster
11:46 ghenry joined #gluster
12:14 Pupeno joined #gluster
12:19 bluenemo joined #gluster
12:26 maveric_amitc_ joined #gluster
12:33 haomaiwa_ joined #gluster
12:34 mhulsman joined #gluster
12:38 RayTrace_ joined #gluster
13:01 zhangjn joined #gluster
13:01 haomaiwa_ joined #gluster
13:04 zhangjn joined #gluster
13:06 zhangjn joined #gluster
13:07 TheSeven joined #gluster
13:12 zhangjn joined #gluster
13:13 zhangjn joined #gluster
13:19 mhulsman joined #gluster
13:22 zhangjn joined #gluster
13:27 mhulsman joined #gluster
13:27 haomaiwa_ joined #gluster
13:39 EinstCrazy joined #gluster
13:45 mhulsman joined #gluster
13:53 theron joined #gluster
14:32 zhangjn joined #gluster
14:33 zhangjn joined #gluster
14:35 zhangjn joined #gluster
14:37 cvstealth joined #gluster
14:46 maveric_amitc_ joined #gluster
14:56 side_control joined #gluster
14:57 Philambdo joined #gluster
14:58 plarsen joined #gluster
15:00 haomaiwang joined #gluster
15:01 haomaiwa_ joined #gluster
15:08 deniszh joined #gluster
15:10 haomaiwang joined #gluster
15:29 nbalacha joined #gluster
15:33 maveric_amitc_ joined #gluster
15:39 stickyboy joined #gluster
15:45 haomaiwa_ joined #gluster
16:01 haomaiwa_ joined #gluster
16:11 maveric_amitc_ joined #gluster
16:55 skoduri joined #gluster
16:56 Lee1092 joined #gluster
17:01 haomaiwa_ joined #gluster
17:05 sysconfig joined #gluster
17:29 social joined #gluster
18:09 Pupeno joined #gluster
18:12 Manikandan joined #gluster
18:22 woakes070048 joined #gluster
18:23 EinstCrazy joined #gluster
18:37 rafi joined #gluster
19:08 haomaiwa_ joined #gluster
19:36 Sunghost joined #gluster
19:40 Sunghost Hello, I have a problem with my GlusterFS distributed volume. I had a RAID crash because of a disk with lots of bad blocks.
19:40 Sunghost Now I have repaired as much as I can, but on the volume, mounted via NFS, I no longer see some files, even though they are still in the folder on the brick itself.
19:41 Sunghost The question is: can I simply take the files from the brick and move them back onto the volume, or is there some repair logic for this?
20:06 dlambrig joined #gluster
20:10 DV joined #gluster
20:15 Sunghost joined #gluster
20:15 Sunghost any idea
20:17 dlambrig joined #gluster
20:20 Pupeno joined #gluster
20:50 hamiller joined #gluster
21:04 haomaiwang joined #gluster
21:09 theron joined #gluster
21:38 stickyboy joined #gluster
21:49 Guest70685 joined #gluster
21:51 halloo joined #gluster
21:58 haomaiwang joined #gluster
22:06 social joined #gluster
22:17 halloo Hi, I would like to verify my understanding of how Gluster works. Could someone confirm it for me?
22:17 sage joined #gluster
22:18 halloo I have a simple 2-node replicated volume, i.e. 2 servers acting as a virtual "RAID-1" disk.
22:18 halloo Both have "brick" directories like this:  server1:/brick,  server2:/brick
22:20 halloo Both servers are running Gluster v3.2
22:20 halloo under Centos 6
22:20 Pupeno joined #gluster
22:21 halloo I would like to upgrade to Gluster v3.6
22:22 halloo Can I just uninstall glusterd on both servers, install the new v3.6 packages, and create a new volume using the old /brick directories?
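Recreating a volume over used bricks, as halloo asks about, normally fails with a "path is already part of a volume" error, because Gluster stamps each brick directory with a volume-id extended attribute. A hedged sketch of that path (the volume name is an assumption, and an in-place package upgrade is generally preferable to destroying and recreating the volume):

```shell
# On EACH server: clear the old volume's identity from the brick root
setfattr -x trusted.glusterfs.volume-id /brick
setfattr -x trusted.gfid /brick
rm -rf /brick/.glusterfs

# Then, from ONE server, recreate the replica pair over the old data
gluster volume create myvol replica 2 server1:/brick server2:/brick
gluster volume start myvol
```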
22:24 social joined #gluster
22:50 plarsen joined #gluster
23:10 theron joined #gluster
23:33 mlhamburg joined #gluster
23:42 mlhamburg1 joined #gluster
23:45 haomaiwang joined #gluster
23:52 Pupeno joined #gluster
