
IRC log for #gluster, 2013-10-20


All times shown according to UTC.

Time Nick Message
00:12 khushildep joined #gluster
00:42 Shdwdrgn joined #gluster
01:14 DV joined #gluster
01:33 sgowda joined #gluster
02:00 MrNaviPacho joined #gluster
02:23 vynt joined #gluster
04:31 plarsen joined #gluster
05:58 davinder joined #gluster
06:34 bayamo joined #gluster
06:36 bayamo i need some help with a gluster 3.4.1 2-node replicate setup, can't connect via nfs
06:36 bayamo had a server go down and now cant connect to either node
06:45 bayamo 0-xen-client-0: Server and Client   lk-version numbers are not same, reopening the fds
06:45 glusterbot bayamo: This is normal behavior and can safely be ignored.
06:45 bayamo ?
06:45 bayamo why do you say that, i can't connect, nothing is working currently
06:49 vynt joined #gluster
07:02 shubhendu joined #gluster
07:04 mohankumar joined #gluster
07:05 crashmag joined #gluster
07:22 vynt joined #gluster
07:24 samppah bayamo: you can connect via nfs but not native glusterfs right?
07:24 samppah any other error messages in log files
07:27 bayamo actually, i just realised, i can connect via nfs, my issue is nfs3_fh_resolve_inode_lookup_cbk] 0-nfs-nfsv3: Lookup failed:
07:39 blook joined #gluster
08:06 davinder joined #gluster
08:11 davinder joined #gluster
08:52 m0zes joined #gluster
09:11 StarBeast joined #gluster
10:27 davinder joined #gluster
10:53 dusmant joined #gluster
11:37 hagarth joined #gluster
11:44 jmaes So I have a replicated volume working across two nodes - and have it mounted via DNS round robin (ie fs-i goes to fs1-i or fs2-i) during mounts - and now I am into the testing HA phase
11:47 jmaes right now - when I kill a node - it seems like the entire share stops working (frozen from the client's point of view) until that node comes back online
11:47 jmaes is that correct behavior?
11:47 jmaes our goal is HA
11:48 jmaes so I need the clients to be able to continue working with the shares even when a server node drops out for a while (ie, lose a DC)
11:52 manik joined #gluster
12:02 jmaes i am seeing things recover -after around a 10 second delay - is that a configurable parameter?
12:03 samppah jmaes: you can configure that with network.ping-timeout parameter
12:03 samppah it should be 42 seconds by default
12:03 jmaes samppah - thanks
12:04 samppah but you shouldn't use too low timeout
12:04 jmaes what is the timeout doing?  allowing a node to fall out of the group - and then forcing a heal when it comes back in?
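The network.ping-timeout option samppah mentions is set per volume from any server in the trusted pool. A minimal sketch, assuming a hypothetical volume named "myvol" (the timeout value shown is illustrative; as samppah warns, setting it too low can trigger spurious disconnects):

```shell
# Lower the failover delay from the 42-second default (value is an example).
gluster volume set myvol network.ping-timeout 10

# Verify: reconfigured options appear at the bottom of "volume info".
gluster volume info myvol
```

These commands configure a live cluster, so there is no standalone way to test them; the second command simply confirms the option took effect.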
12:36 dusmant joined #gluster
12:43 StarBeast joined #gluster
13:34 ninkotech_ joined #gluster
13:34 ninkotech joined #gluster
14:36 jmaes well - shucks - down to my last issue I think
14:36 jmaes none of the client machines will mount the glusterfs mount at boot
14:36 jmaes they have no issue mounting it once the machine is up (ie, mount -a)
14:37 jmaes but not mounting it during boot (here is my entry in /etc/fstab - fileshare-i.infopluswms.com:/web-i_ecomshare /mnt/web-i_ecomshare glusterfs defaults,_netdev 0 0)
14:37 jmaes not seeing anything in the boot.msg
14:37 jmaes is there a trick I am missing
14:37 jmaes google seems to show many others with problems *like* this - but no clear solutions
14:48 jmaes what boot service should be mounting the glusterfs mount?
14:48 jmaes network-remotefs?
15:05 Remco http://joejulian.name/blog/glusterfs-volumes-not-mounting-in-debian-squeeze-at-boot-time/
15:05 glusterbot <http://goo.gl/t6PY4> (at joejulian.name)
15:05 Remco Different distro, but will tell you where to look
15:06 jmaes ya - read that - tried all the reorders- etc.. - weird thing - nothing in the logs - I am finishing up a one-off init script now to force a mount at the end of the boot - will see if that works and then work back from there
15:19 jmaes my own init script has no trouble at boot - so not sure what that means
15:19 jmaes it will work - but still not sure why network-remotefs isn't picking it up
15:20 jmaes willing to assume it's something special due to being on SLES
15:20 jmaes thank you all the same!
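A sketch of the kind of one-off init script jmaes describes as his workaround: mount the glusterfs share late in boot, after networking and local filesystems are up. The server, volume, and mount point are taken from his fstab entry; the script name and runlevels are assumptions for a SLES-style LSB init script:

```shell
#!/bin/sh
### BEGIN INIT INFO
# Provides:          mount-glusterfs
# Required-Start:    $network $remote_fs
# Required-Stop:
# Default-Start:     3 5
# Default-Stop:
# Short-Description: Mount glusterfs share after networking is up
### END INIT INFO

case "$1" in
  start)
    # Skip if something is already mounted there (e.g. fstab worked this boot).
    mountpoint -q /mnt/web-i_ecomshare || \
      mount -t glusterfs fileshare-i.infopluswms.com:/web-i_ecomshare \
        /mnt/web-i_ecomshare
    ;;
  stop)
    umount /mnt/web-i_ecomshare 2>/dev/null || true
    ;;
esac
```

This is a workaround sketch, not a fix for whatever keeps network-remotefs from handling the _netdev fstab entry on his system.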
16:00 ricky-ticky joined #gluster
16:21 anands joined #gluster
16:32 dneary joined #gluster
17:04 edward2 joined #gluster
17:48 ricky-ticky joined #gluster
17:55 ricky-ticky1 joined #gluster
18:00 ricky-ticky joined #gluster
18:06 ricky-ticky joined #gluster
18:08 ricky-ticky1 joined #gluster
18:25 zerick joined #gluster
19:10 DV joined #gluster
19:14 juhaj_ joined #gluster
19:15 juhaj_ Hi! "gluster volume info all" says there are no volumes, but "gluster volume create ... /path" says "/path or a prefix of it is already part of a volume". What's going on?
19:15 glusterbot juhaj_: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
19:22 juhaj_ Heh, that was convenient
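The instructions glusterbot links (Joe Julian's fix for "path or a prefix of it is already part of a volume") amount to clearing the gluster extended attributes left on a brick directory that was previously part of a volume. A hedged sketch, where "/path" stands in for the brick directory from juhaj_'s create command; run as root on the server holding the brick:

```shell
# Remove the leftover volume markers from the old brick directory.
setfattr -x trusted.glusterfs.volume-id /path
setfattr -x trusted.gfid /path
rm -rf /path/.glusterfs

# Restart glusterd, then retry the "gluster volume create" command.
```

These attributes live in the trusted.* namespace, which is why root is required; the exact steps may vary by version, so the linked page remains the authoritative reference.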
19:34 anands joined #gluster
19:41 purpleidea joined #gluster
19:55 DV joined #gluster
19:58 purpleidea joined #gluster
20:01 sprachgenerator joined #gluster
20:13 cyberbootje1 joined #gluster
20:21 purpleidea @tell later dbruhn indeed puppet-gluster will help you manage UUID's. You can either manually choose UUID's for each node, or puppet-gluster can automatically decide and keep each one consistent. Even on backup/restore of your filesystem. If you want to initialize a puppet-gluster managed cluster "part way through", then I'd recommend setting the UUID's manually with puppet, and then after they're set once, puppet will remember them and you can g
20:21 glusterbot purpleidea: Error: I haven't seen later, I'll let you do the telling.
20:23 purpleidea @later tell dbruhn indeed puppet-gluster will help you manage UUID's. You can either manually choose UUID's for each node, or puppet-gluster can automatically decide and keep each one consistent. Even on backup/restore of your filesystem. If you want to initialize a puppet-gluster managed cluster "part way through", then I'd recommend setting the UUID's manually with puppet, and then after they're set once, puppet will remember them and you can g
20:23 glusterbot purpleidea: The operation succeeded.
20:28 mibby hi is anyone about?
21:07 jkroon joined #gluster
21:20 jmaes joined #gluster
21:36 dneary joined #gluster
21:57 fidevo joined #gluster
22:21 MrNaviPa_ joined #gluster
22:26 sjoeboo joined #gluster
22:52 failshell joined #gluster
