IRC log for #gluster, 2014-04-06


All times shown according to UTC.

Time Nick Message
00:33 gmcwhistler joined #gluster
00:34 johnbot11111 joined #gluster
00:40 XpineX_ joined #gluster
01:03 qdk joined #gluster
01:05 diegows joined #gluster
01:06 avati joined #gluster
01:22 qdk joined #gluster
01:46 ilbot3 joined #gluster
01:46 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:52 qdk joined #gluster
02:13 ndevos joined #gluster
02:13 ndevos joined #gluster
02:34 ndevos joined #gluster
02:34 ndevos joined #gluster
02:50 ndevos joined #gluster
02:50 ndevos joined #gluster
03:01 nightwalk joined #gluster
03:14 atrius joined #gluster
04:28 RameshN joined #gluster
04:36 hagarth joined #gluster
05:01 ngoswami joined #gluster
05:33 hchiramm_ joined #gluster
06:10 plarsen joined #gluster
06:35 andrewklau joined #gluster
06:38 ngoswami joined #gluster
06:39 andrewklau Does anyone know of any special options I need to add to have nfs fail over to another server using keepalived?
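(No direct answer appears in the log. For context, NFS failover with keepalived is usually done by floating a virtual IP between the Gluster servers and having clients mount the VIP. The sketch below is an assumption-laden illustration, not from this conversation: the interface name, virtual_router_id, VIP, and health-check command are placeholders and would need adjusting.)

    # /etc/keepalived/keepalived.conf  (hypothetical sketch)
    vrrp_script chk_gluster_nfs {
        script "rpcinfo -t 127.0.0.1 nfs"   # assumed check that Gluster's NFS server answers; path/command may differ
        interval 2
        fall 2
        rise 2
    }

    vrrp_instance NFS_VIP {
        state MASTER                 # BACKUP on the second server
        interface eth0               # placeholder interface
        virtual_router_id 51         # must match on both servers
        priority 100                 # lower priority on the BACKUP node
        advert_int 1
        virtual_ipaddress {
            192.168.1.100/24         # placeholder VIP that NFS clients mount
        }
        track_script {
            chk_gluster_nfs
        }
    }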
06:52 qdk joined #gluster
06:56 RameshN joined #gluster
07:00 ngoswami joined #gluster
07:08 qdk joined #gluster
07:17 qdk joined #gluster
08:00 hagarth joined #gluster
08:14 hchiramm_ joined #gluster
08:47 hchiramm_ joined #gluster
08:52 qdk joined #gluster
08:56 DV__ joined #gluster
09:39 Pavid7 joined #gluster
09:42 nixpanic joined #gluster
09:42 nixpanic joined #gluster
09:47 ctria joined #gluster
10:08 davinder joined #gluster
10:42 ndevos joined #gluster
10:42 ndevos joined #gluster
10:50 harish joined #gluster
10:52 sputnik1_ joined #gluster
10:55 DV joined #gluster
11:11 hchiramm_ joined #gluster
11:55 DV joined #gluster
12:05 Pavid7 joined #gluster
12:19 varuntv joined #gluster
12:19 varuntv gluster peer probe returns Connection failed. Please check if gluster daemon is operational.
12:20 varuntv log file says DNS resolution failed on host localhost
12:20 varuntv why?
12:20 varuntv what am I doing wrong?
12:20 varuntv tried the probe with the IP address as well, still the same
13:00 sputnik1_ joined #gluster
13:38 jag3773 joined #gluster
14:17 Pavid7 joined #gluster
14:58 Ark joined #gluster
15:12 hagarth joined #gluster
15:57 jbrooks joined #gluster
16:45 sputnik13 joined #gluster
17:10 diegows joined #gluster
17:39 JonathanD joined #gluster
18:16 JoeJulian varuntv: Do you have the localhost entries in /etc/hosts?
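(For reference, a sketch of the kind of entries JoeJulian is referring to; the peer hostname is a placeholder:)

    # /etc/hosts on each node should at least resolve localhost:
    127.0.0.1   localhost localhost.localdomain
    ::1         localhost localhost.localdomain

    # then probe peers by a name or address every node can resolve:
    gluster peer probe server2.example.com
    gluster peer status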
18:18 JoeJulian Andyy2: My guess is that the network is shutting down while glusterfsd is still running when you reboot. Test that theory by running killall glusterfsd just before you reboot and see if your problem still exists.
18:20 Andyy2 JoeJulian: I don't follow you. If the network is down on a brick, should it not affect a client (running a VM on another server)?
18:29 Andyy2 btw: my backlog is almost never "empty". Sometimes there's a disk image on brick 1, sometimes on brick 2, and sometimes on both that needs healing. Any ideas, and could this be related to the other problem?
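(The test JoeJulian is suggesting, roughly; the point is the ordering, so the bricks close their TCP connections before the network drops:)

    # on the server about to be rebooted
    killall glusterfsd    # bricks close their TCP connections cleanly
    reboot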
18:32 Andyy2 JoeJulian: ?
18:36 Pavid7 joined #gluster
18:37 JoeJulian Andyy2: The client (libgfapi in your case) connects to all the bricks and interacts with the replicas in real time. This is done over TCP connections. If a network connection goes away, the TCP connection is not closed and the client will wait for ping-timeout for the connection to continue. If the brick is stopped first, then the normal tcp fin handshake is done, the connection closed, and the client will not wait for the brick.
18:37 JoeJulian @ping-timeout
18:37 glusterbot JoeJulian: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
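(The knob being discussed is the per-volume option network.ping-timeout, which Andyy2 mentions having set to 10 seconds below; the volume name here is a placeholder:)

    gluster volume set myvol network.ping-timeout 10
    gluster volume info myvol    # changed options appear under "Options Reconfigured"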
18:41 Andyy2 I've set the ping timeout to 10 seconds, and I'm trying to test for a "server dying" situation via reboots. The network is one of the last things to go down on a reboot, while glusterfsd is one of the first. I'm puzzled that the qemu machine running on libgfapi will "die" even though the other gluster replica is still up.
18:42 Andyy2 In my case, the problem persists well beyond the 10 seconds. In fact, even after 60+ seconds I cannot "reboot" the qemu virtual machine. I have to kill qemu/kvm and start a new instance, or the simple reboot will find no disks to boot from. Reinitializing qemu from scratch with one replica offline works and the virtual machine is able to boot. It does not look like a timeout problem.
18:45 Andyy2 could it be because of missing cluster.eager-lock:enable, network.remote-dio:enable or performance.{stat-prefetch,io-cache,read-ahead,quick-read}: off ?
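(The options Andyy2 lists are the commonly recommended settings for volumes holding VM images. A sketch of applying them, with the volume name as a placeholder; whether they fix this particular problem is exactly the open question in the log:)

    gluster volume set myvol cluster.eager-lock enable
    gluster volume set myvol network.remote-dio enable
    gluster volume set myvol performance.stat-prefetch off
    gluster volume set myvol performance.io-cache off
    gluster volume set myvol performance.read-ahead off
    gluster volume set myvol performance.quick-read off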
19:54 Andyy2 JoeJulian: would you recommend setting the timeout to something like 2-3 seconds on volumes holding virtual disk images? Anything is better than having KVM guests die.
19:56 Andyy2 And I'm talking about Windows guests, to which there's no access (third-party images).
20:42 _VerboEse joined #gluster
20:43 sijis_ joined #gluster
20:44 neoice_ joined #gluster
20:47 radez_g0` joined #gluster
20:47 ThatGraemeGuy_ joined #gluster
20:48 rshade98 joined #gluster
20:49 Jakey joined #gluster
20:51 badone joined #gluster
20:54 DV joined #gluster
20:54 joshin joined #gluster
20:55 coredumb joined #gluster
20:55 Durzo joined #gluster
20:55 ron-slc joined #gluster
20:56 elico joined #gluster
21:02 cyberbootje joined #gluster
21:10 Durzo joined #gluster
21:11 badone joined #gluster
21:15 ron-slc joined #gluster
21:46 badone joined #gluster
21:54 andrewklau joined #gluster
22:03 askb joined #gluster
22:08 sputnik13 joined #gluster
22:11 plarsen joined #gluster
22:26 Durzo joined #gluster
22:34 vincent_vdk joined #gluster
22:47 nightwalk joined #gluster
23:38 diegows joined #gluster
23:57 fitted joined #gluster
