
IRC log for #gluster, 2015-05-10


All times shown according to UTC.

Time Nick Message
00:38 edwardm61 joined #gluster
01:45 plarsen joined #gluster
01:53 badone_ joined #gluster
02:27 cholcombe joined #gluster
02:35 aravindavk joined #gluster
02:44 rjoseph|afk joined #gluster
02:48 yosafbridge joined #gluster
03:07 kotreshhr joined #gluster
03:25 atinmu joined #gluster
03:48 rajesh joined #gluster
03:50 [7] joined #gluster
03:57 glusterbot News from resolvedglusterbugs: [Bug 1112531] Dist-geo-rep : deletion of files on master, geo-rep fails to propagate to slaves. <https://bugzilla.redhat.com/show_bug.cgi?id=1112531>
03:57 glusterbot News from resolvedglusterbugs: [Bug 1136296] Option to specify a keyfile needed for Geo-replication create push-pem command. <https://bugzilla.redhat.com/show_bug.cgi?id=1136296>
04:01 kotreshhr joined #gluster
04:10 rafi joined #gluster
04:11 nbalacha joined #gluster
04:20 haomaiwa_ joined #gluster
04:27 glusterbot News from resolvedglusterbugs: [Bug 1201631] Dist-geo-rep: With new Active/Passive switching logic, mgmt volume mountpoint is not cleaned up. <https://bugzilla.redhat.com/show_bug.cgi?id=1201631>
04:29 gem joined #gluster
04:31 jiffin joined #gluster
04:34 jiffin joined #gluster
04:39 kotreshhr joined #gluster
04:52 nbalacha joined #gluster
05:12 rafi joined #gluster
05:21 rafi joined #gluster
05:24 rafi joined #gluster
05:25 glusterbot News from newglusterbugs: [Bug 1220100] Typos in the messages logged by the CTR translator <https://bugzilla.redhat.com/show_bug.cgi?id=1220100>
05:44 jiffin joined #gluster
05:55 glusterbot News from newglusterbugs: [Bug 1216940] Disperse volume: glusterfs crashed while testing heal <https://bugzilla.redhat.com/show_bug.cgi?id=1216940>
06:12 haomaiwa_ joined #gluster
06:15 gem joined #gluster
06:42 jiffin1 joined #gluster
06:46 rafi joined #gluster
06:52 rafi joined #gluster
07:00 kripper joined #gluster
07:04 jiffin joined #gluster
07:25 glusterbot News from newglusterbugs: [Bug 1218304] Intermittent failure of basic/afr/data-self-heal.t <https://bugzilla.redhat.com/show_bug.cgi?id=1218304>
07:27 rafi1 joined #gluster
07:29 kripper check logs
07:29 kripper google errors
07:42 anrao joined #gluster
07:57 kovshenin joined #gluster
08:02 Leildin joined #gluster
08:08 Pupeno joined #gluster
08:14 rafi joined #gluster
08:15 fsimonce joined #gluster
08:15 fattaneh1 joined #gluster
08:22 jiffin joined #gluster
08:34 ghenry joined #gluster
08:50 jiffin joined #gluster
09:03 kotreshhr left #gluster
09:06 jiffin joined #gluster
09:12 fattaneh1 left #gluster
09:15 gem joined #gluster
09:18 gem joined #gluster
09:41 saltsa joined #gluster
09:42 DV joined #gluster
09:45 archers joined #gluster
09:55 soumya joined #gluster
09:56 jiffin joined #gluster
10:08 kripper joined #gluster
10:25 anrao joined #gluster
10:33 MrAbaddon joined #gluster
10:36 swebb joined #gluster
11:03 kumar joined #gluster
11:10 nangthang joined #gluster
11:16 user joined #gluster
11:22 alexcrow Hi, a quick question. Any ideas why I get bursts of very high CPU (e.g. 1100%) only on the gluster node that the client used in the mount command during high IO, but no high CPU on the other nodes?
11:23 LebedevRI joined #gluster
11:40 jiffin joined #gluster
11:45 Slashman joined #gluster
11:46 fattaneh1 joined #gluster
11:46 fattaneh1 left #gluster
11:48 kron4eg joined #gluster
11:57 MrAbaddon joined #gluster
12:06 kron4eg left #gluster
12:17 nbalacha joined #gluster
12:17 ira joined #gluster
12:40 fattaneh1 joined #gluster
12:41 fattaneh1 left #gluster
12:42 kdhananjay joined #gluster
12:45 MrAbaddon joined #gluster
12:55 julim joined #gluster
12:57 nbalacha joined #gluster
13:03 hagarth joined #gluster
13:03 nangthang joined #gluster
13:09 fattaneh1 joined #gluster
13:11 kovshenin joined #gluster
13:15 fattaneh1 left #gluster
13:18 kkeithley1 joined #gluster
13:20 kkeithley1 joined #gluster
13:23 al joined #gluster
13:27 RameshN joined #gluster
13:34 rafi joined #gluster
13:37 rbazen joined #gluster
13:39 rbazen Hi, I have a problem with my gluster setup. I have a two-node replicated gluster with some volumes. After a reboot of one of the machines, the volumes won't come back up. Can anyone assist me in finding out what is wrong? Pretty please
13:40 rbazen Node1 was rebooted and now does not see its peer, its bricks are N/A
13:41 rbazen Node2 is still up and shows the volumes, but on two volumes its bricks are also N/A
13:42 rbazen on Node2 the localhost nfs is up..
13:43 ndevos rbazen: you should verify that the filesystems for the bricks are mounted, is that the case?
13:43 ndevos rbazen: also, you can check the /var/log/glusterfs/bricks/path-to-brick.log logfile (replace the "path-to-brick")
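A minimal sketch of those two checks, assuming a brick at /path/to/brick (a placeholder, as in the line above):

    # confirm the brick filesystem is actually mounted
    mount | grep /path/to/brick
    # look at the tail of the corresponding brick log
    tail -n 50 /var/log/glusterfs/bricks/path-to-brick.log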
13:44 rbazen ndevos: The underlying filesystems are mounted.
13:44 ndevos rbazen: anything useful at the end of those log files?
13:45 rbazen Hmm I see a volume definition and then "accepted client from <same host>, then a Warning:  [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (15), shutting down
13:45 glusterbot rbazen: ('s karma is now -70
13:46 rbazen on the other node its the same
13:47 ndevos rbazen: can you ,,(paste) those lines, with the ~40 ones before that?
13:47 glusterbot rbazen: For RPM based distros you can yum install fpaste, for debian, ubuntu, and arch it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
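As glusterbot notes, both tools read standard input, so command output can be piped straight to a paste service, for example:

    # on RPM-based distros
    gluster volume status | fpaste
    # on debian, ubuntu, and arch
    gluster volume status | pastebinit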
13:48 MrAbaddon joined #gluster
13:48 rbazen ndevos: sure, just a sec
13:49 ndevos rbazen: don't paste them here, but on fpaste.org or something :)
13:49 rbazen Will do :)
13:50 rbazen http://ur1.ca/kc2pa
13:50 badone_ joined #gluster
13:52 rbazen http://ur1.ca/kc2pk
13:55 rbazen ndevos: those are the fpastes, btw
13:56 ndevos rbazen: hmm, those really do not show a lot :-/
13:56 ndevos rbazen: you can try to start the missing processes with: gluster volume start engine force
13:57 ndevos if "engine" is the name of the volume :)
13:57 RameshN joined #gluster
13:58 rbazen that... sort of works, i think
13:59 rbazen this is the current status: http://ur1.ca/kc2qn http://ur1.ca/kc2qp
13:59 ndevos rbazen: I'm not sure why those brick processes got stopped though, there is nothing that suggests an issue to me
14:02 ndevos rbazen: of course, you can start the other brick processes for the other volumes like that too ;-)
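If several volumes are affected, the same force-start can be applied to each in turn; a hedged sketch, relying on gluster volume list printing one volume name per line:

    for vol in $(gluster volume list); do
        gluster volume start "$vol" force
    done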
14:04 rbazen Thanks so far :)
14:04 rbazen Now the bricks are up, but the peers still are not
14:04 rbazen wait...
14:04 rbazen that is a different problem...
14:06 rbazen vdsm scrambles my network config files, both the bridge and the interface have the same ip 0_o
14:11 rbazen ndevos: thanks for helping me, man. Really appreciate it. Was losing it for a second -_-
14:11 ndevos oh, yes, I think vdsm can be tricky with things like that
14:12 ndevos rbazen: possibly glusterd decided to stop those brick processes because it noticed the network issue
14:12 rbazen Aye, I think so too.
14:12 ndevos rbazen: that would be in the /var/log/glusterfs/etc-....log file, I guess
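A hedged sketch of checking that glusterd log; the exact filename varies (hence the glob), and the search terms are only a guess at what a network-triggered shutdown would leave behind:

    grep -iE 'disconnect|received signum' /var/log/glusterfs/etc-*.log | tail -n 20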
14:13 rbazen Everything is working again. Gluster wise that is..
14:17 nsoffer joined #gluster
14:21 rbazen Ok, another question. Now that this works. :)
14:21 rbazen I want to add a third peer and set it as a third replica for the existing volumes.
14:22 rbazen On the main node, I probe it, then do gluster volume add-brick engine thirdnode:/path/to/brick ?
14:23 rbazen or do I have to set replica 3 as well?
14:26 ndevos rbazen: yes, gluster volume add-brick replica 3 engine thirdnode:/path/to/brick
14:26 ndevos at least, I think that is the right command
14:26 * ndevos will be back later
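Putting the exchange together as a sketch (thirdnode and the brick path are placeholders from the conversation; note that in the gluster CLI the volume name comes before the replica count):

    # from an existing node: probe the new peer, then grow the replica set
    gluster peer probe thirdnode
    gluster volume add-brick engine replica 3 thirdnode:/path/to/brick
    # optionally trigger a full self-heal so the new brick gets populated
    gluster volume heal engine full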
14:27 xiu /b 27
14:56 rafi joined #gluster
15:06 alexcrow Hi, a quick question. Any ideas why I get bursts of very high CPU (e.g. 1100%) on only one brick node every so often during high IO, but no high CPU on the other nodes? The high CPU seems to move between nodes randomly every few minutes.
15:16 badone_ joined #gluster
15:24 wushudoin joined #gluster
15:26 haomaiwang joined #gluster
15:26 haomaiwang joined #gluster
15:27 fattaneh1 joined #gluster
15:27 fattaneh1 left #gluster
15:57 meghanam joined #gluster
16:15 fattaneh joined #gluster
16:23 mbukatov joined #gluster
16:24 rbazen Anyone here with a bit of knowledge of vdsm? How do I keep it from breaking my network config?
16:44 kumar joined #gluster
16:45 kripper joined #gluster
16:45 bennyturns joined #gluster
16:57 RameshN joined #gluster
16:58 kripper left #gluster
17:07 fattaneh left #gluster
17:08 rafi joined #gluster
17:12 eljrax joined #gluster
17:16 aaronott joined #gluster
17:20 RameshN joined #gluster
17:33 fattaneh1 joined #gluster
17:40 rafi joined #gluster
17:58 glusterbot News from newglusterbugs: [Bug 1220173] SEEK_HOLE support (optimization) <https://bugzilla.redhat.com/show_bug.cgi?id=1220173>
18:05 aaronott joined #gluster
18:19 gem joined #gluster
18:34 nsoffer joined #gluster
18:37 atrius joined #gluster
18:46 kripper joined #gluster
18:46 kripper JoeJulian: hi
19:16 kripper left #gluster
19:33 fattaneh joined #gluster
19:33 fattaneh left #gluster
19:34 MrAbaddon joined #gluster
19:47 harish joined #gluster
19:47 hagarth joined #gluster
19:48 harish joined #gluster
20:10 plarsen joined #gluster
20:19 nsoffer joined #gluster
20:49 rafi joined #gluster
21:07 MrAbaddon joined #gluster
21:16 wushudoin joined #gluster
21:17 wushudoin joined #gluster
21:22 badone_ joined #gluster
21:38 vimal joined #gluster
21:40 vimal joined #gluster
22:03 vimal joined #gluster
22:04 plarsen joined #gluster
22:08 vimal joined #gluster
22:24 soumya joined #gluster
22:32 kovshenin joined #gluster
22:34 n-st joined #gluster
22:40 nishanth joined #gluster
22:40 DV joined #gluster
23:30 mike25de joined #gluster
23:45 plarsen joined #gluster
23:48 MrAbaddon joined #gluster
