IRC log for #gluster, 2014-07-06

| Channels | #gluster index | Today | | Search | Google Search | Plain-Text | summary

All times shown according to UTC.

Time Nick Message
00:04 firemanxbr joined #gluster
00:09 julim joined #gluster
00:19 coredump joined #gluster
00:43 gmcwhistler joined #gluster
00:55 obelix_ joined #gluster
01:08 theron joined #gluster
02:31 ndevos joined #gluster
02:31 ndevos joined #gluster
02:45 lanning joined #gluster
02:51 bala joined #gluster
03:34 gmcwhistler joined #gluster
03:53 sputnik13 joined #gluster
04:12 plarsen joined #gluster
04:19 ricky-ticky1 joined #gluster
04:20 bala joined #gluster
04:34 burnalot joined #gluster
04:39 sputnik13 joined #gluster
04:45 sputnik13 joined #gluster
04:45 sage joined #gluster
04:49 tru_tru_ joined #gluster
04:49 lanning_ joined #gluster
04:49 bala joined #gluster
05:00 Moe-sama joined #gluster
05:01 sauce joined #gluster
05:01 Intensity joined #gluster
05:05 dblack joined #gluster
05:07 pureflex joined #gluster
05:11 julim joined #gluster
05:28 MrAbaddon joined #gluster
05:35 hagarth joined #gluster
05:45 Intensity joined #gluster
07:05 marcoceppi joined #gluster
07:11 julim joined #gluster
07:40 ctria joined #gluster
08:19 ramteid joined #gluster
08:26 jiqiren joined #gluster
08:46 ricky-ticky joined #gluster
08:52 redbeard joined #gluster
09:15 davinder16 joined #gluster
09:49 elico joined #gluster
10:40 keichii joined #gluster
10:57 whopawho joined #gluster
11:11 rwheeler joined #gluster
11:45 LebedevRI joined #gluster
12:06 doekia joined #gluster
12:06 doekia_ joined #gluster
12:14 DV joined #gluster
12:55 sijis joined #gluster
12:55 sijis joined #gluster
12:56 gehaxelt joined #gluster
12:56 gehaxelt Hi
12:56 glusterbot gehaxelt: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
12:59 gehaxelt I have a question about node failures. Take the following situation: I set up glusterfs on two storage nodes and configure them to replicate one volume. Then a client mounts the volume from the first storage node. (mount -t glusterfs serverA:/vol1 /mnt/gfs). Let's assume that the storage node serverA fails completely. What happens with the mountpoint on the client? Will the failure be detected? Will it automatically switch to the 2nd storage server?
13:14 gehaxelt Google tells me that rrdns is a solution for that problem, right?
13:15 diegows joined #gluster
13:25 obelix_ joined #gluster
13:32 Gurdeep joined #gluster
13:48 jiffe98 man, self heal takes forever. It's been sitting on the same 20 files for 10 minutes now, and those files are no more than 1-2kb in size
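For readers following along: self-heal progress can be inspected from the gluster CLI. A minimal sketch, assuming a replicated volume named gv0 (the volume name is illustrative, not from the log):

```shell
# List the files the self-heal daemon still considers pending, per brick
gluster volume heal gv0 info

# Show heal statistics (subcommand availability varies by GlusterFS release)
gluster volume heal gv0 statistics
```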
13:48 RioS2 gehaxelt, If you are using the gluster client for the mount it will be aware of both nodes
13:50 gehaxelt RioS2, okay, what do you mean by "gluster client" ?
13:55 Gurdeep Hello all, I have gluster set up in replicate type and it's working fine (replication happens when files are created on either server). I am seeing a lot of RPC call/reply being constantly exchanged between the servers, is that a common behavior? It's using around 200K/s bandwidth, which is consuming my allocated VPS bandwidth unnecessarily. Any way we can fine tune this communication?
13:57 jiffe98 how does one go about listing options defaults again?
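jiffe98's question about listing option defaults goes unanswered in the log; the usual answer is the CLI's built-in help, which prints every settable option with its default and a short description:

```shell
# Print all settable volume options with their default values and descriptions
gluster volume set help

# Show a volume's details, including any options reconfigured away from
# their defaults ("Options Reconfigured" section); gv0 is a placeholder name
gluster volume info gv0
```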
14:02 RioS2 gehaxelt, you would need to install glusterfs on the client, and mount with type glusterfs instead of nfs
14:03 RioS2 mount -t glusterfs <nodeA/B>:/gv0 /mnt/point
14:05 gehaxelt RioS2, I think I'm using that option. (I've installed glusterfs-client on the client). I think I'll set up a bunch of virtualboxes locally and fiddle around with failures :)
14:05 gehaxelt Thanks for the answers!
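To summarize the exchange above: with the native FUSE client, the server named in the mount command is only contacted to fetch the volume definition; after that the client talks to all replica bricks directly, so a later failure of serverA does not break the mount. The remaining single point of failure is mount time itself, which rrdns or a backup-volfile mount option addresses. A sketch, using the server/volume names from gehaxelt's question:

```shell
# Native client mount; serverA is only needed to fetch the volfile at mount
# time -- afterwards the client connects to every brick in the replica set
mount -t glusterfs serverA:/vol1 /mnt/gfs

# Fall back to serverB if serverA is down at mount time (the option name
# varies by release: backupvolfile-server or backup-volfile-servers)
mount -t glusterfs -o backupvolfile-server=serverB serverA:/vol1 /mnt/gfs
```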
14:13 sputnik13 joined #gluster
14:13 davinder16 joined #gluster
14:28 neoice joined #gluster
14:29 n0de joined #gluster
14:32 pdrakewe_ joined #gluster
14:32 azenk1 joined #gluster
14:33 n0de_ joined #gluster
14:34 ccha2 joined #gluster
14:34 ccha2 joined #gluster
14:36 sijis_ joined #gluster
14:40 _weykent joined #gluster
14:48 georgeh|workstat joined #gluster
14:48 marmalodak joined #gluster
14:48 Slasheri joined #gluster
14:53 Oyeaussie joined #gluster
15:26 gmcwhistler joined #gluster
15:33 huleboer joined #gluster
15:36 dblack joined #gluster
15:36 Intensity joined #gluster
15:37 jiqiren joined #gluster
15:44 n0de joined #gluster
15:45 diegows joined #gluster
15:54 huleboer joined #gluster
15:55 neoice joined #gluster
16:15 sputnik13 joined #gluster
16:19 ninkotech_ joined #gluster
16:20 sputnik13 joined #gluster
16:55 dencaval joined #gluster
17:01 obelix_ joined #gluster
17:06 coredump joined #gluster
17:10 mortuar joined #gluster
17:47 n0de joined #gluster
18:01 coredump joined #gluster
18:33 neoice joined #gluster
18:54 qdk joined #gluster
19:26 davinder16 joined #gluster
19:53 bfoster joined #gluster
19:53 foster joined #gluster
20:21 theron joined #gluster
20:26 gehaxelt I have another question: I read about transparent "encryption". Is that already implemented?
20:30 tru_tru joined #gluster
20:45 theron_ joined #gluster
20:56 redbeard joined #gluster
21:06 sputnik13 joined #gluster
21:11 theron joined #gluster
21:17 sputnik13 joined #gluster
21:24 sputnik13 joined #gluster
21:30 Pupeno joined #gluster
21:37 theron joined #gluster
22:07 japuzzo joined #gluster
22:09 sputnik13 joined #gluster
22:13 coredump joined #gluster
22:28 Gurdeep left #gluster
22:31 mjsmith2 joined #gluster
23:08 ThatGraemeGuy joined #gluster
23:58 5EXABOMEP joined #gluster
23:58 ninkotech__ joined #gluster
