IRC log for #gluster, 2013-01-05

All times shown according to UTC.

Time Nick Message
01:02 stopbit joined #gluster
01:03 haidz so im doing RR DNS with the frontend servers going back to gluster.. does it make sense that the client servers have connections open to each of the backend gluster storage servers?
01:04 sunus joined #gluster
01:06 semiosis yes, over the glusterfs ,,(ports)
01:06 glusterbot glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
01:06 semiosis see also ,,(mount server)
01:06 glusterbot (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrnds, or (#2) Learn more about the role played by the server specified on the mount command here: http://goo.gl/0EB1u
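The port list glusterbot gives above translates directly into firewall rules. A minimal sketch, assuming iptables and a 3.x-era server that has (ever) hosted two bricks; widen the brick range to match your own brick count, since deleted volumes do not reset the counter:

```shell
iptables -A INPUT -p tcp --dport 24007 -j ACCEPT        # glusterd management
iptables -A INPUT -p tcp --dport 24008 -j ACCEPT        # only needed for rdma
iptables -A INPUT -p tcp --dport 24009:24010 -j ACCEPT  # brick (glusterfsd) ports, one per brick
iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT  # gluster NFS + NLM (3.3.0+)
iptables -A INPUT -p tcp --dport 111 -j ACCEPT          # rpcbind/portmap
iptables -A INPUT -p udp --dport 111 -j ACCEPT
```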
01:07 haidz ,,(rrnds)
01:07 glusterbot I do not know about 'rrnds', but I do know about these similar topics: 'rrdns'
01:07 haidz ,,(rrdns)
01:07 glusterbot You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/ktI6p
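What the rrdns factoid describes, sketched with hypothetical hostnames: a round-robin A record pointing at all servers, so the mount succeeds as long as any one of them answers. Per the "mount server" factoid, the named host is only used to fetch the volume definition; after that the client connects to every brick directly.

```shell
# gluster.example.com is a hypothetical round-robin record resolving to all servers
mount -t glusterfs gluster.example.com:/myvol /mnt/myvol

# Or in /etc/fstab, with an explicit fallback volfile server for mount-time failover
# (option name as of the 3.x era):
# gluster.example.com:/myvol  /mnt/myvol  glusterfs  defaults,backupvolfile-server=server2.example.com  0 0
```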
01:10 haidz semiosis, so it has to connect to all servers? if i understand this right it does the replication at the client side?
01:11 semiosis yep
01:11 haidz is the exception to that geo-replication?
01:11 semiosis writes get replicated, reads get load balanced, client gets HA
01:12 haidz i see
01:12 semiosis can survive loss of either server
01:12 semiosis geo-rep is different, yeah
01:12 haidz so the client replicates the writes... who initiates the read load balancing? glusterd?
01:21 semiosis assuming this hasnt changed since 3.1... when the client wants to open a file it polls all online replicas and whichever responds first serves the reads for that file -- or something like that
01:23 haidz oh thats pretty cool
01:24 haidz semiosis, so when nodes come out due to an outage of some kind.. either box reboots, or network interruption, glusterd appears to replicate the new data to the servers that were down
01:24 haidz is that only true for recovery? since the client is doing the replication generally
01:25 semiosis afaik yes that's the role of the 'self heal daemon' which runs on the gluster servers
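The self-heal daemon's progress after a server comes back can be inspected from the 3.3+ CLI. A sketch with a hypothetical volume name:

```shell
# List entries the self-heal daemon still needs to replicate
gluster volume heal myvol info

# Trigger a full heal instead of waiting for the background crawl
gluster volume heal myvol full
```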
01:25 haidz ah ok
01:26 haidz semiosis, is there a way to re-order the bricks in a volume when adding new ones?
01:26 haidz to force gluster to write files across additional racks
01:28 semiosis um, you have to do a rebalance if you change the number of distribution units
01:28 haidz ah i see
01:29 m0zes to re-order replicas you need to do a replace-brick from an old to a new, then you can add the old bricks back in.
01:29 semiosis "adding new bricks" is ambiguous... you could be changing the replica count, or the distribution count, or both
01:29 semiosis m0zes: thx for clearing that up i was confused by 'reorder'
01:29 semiosis :)
01:29 haidz ah thanks m0zes
01:30 m0zes np :)
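The replace-brick dance m0zes describes, collected as a sketch with hypothetical host and brick names (syntax as of the 3.3 era): replace-brick migrates the data to the new brick, which frees the old one to be re-added in the desired position.

```shell
# Migrate data from the old brick to the new one, then commit
gluster volume replace-brick myvol server1:/bricks/a server3:/bricks/a start
gluster volume replace-brick myvol server1:/bricks/a server3:/bricks/a status
gluster volume replace-brick myvol server1:/bricks/a server3:/bricks/a commit

# The freed brick can now be added back where you want it; changing the
# number of distribution units means a rebalance afterwards
gluster volume add-brick myvol server1:/bricks/a server2:/bricks/b
gluster volume rebalance myvol start
```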
01:31 haidz thanks semiosis, you've been a great help
01:31 semiosis yw
02:07 badone joined #gluster
03:32 duffrecords left #gluster
03:40 hagarth joined #gluster
03:46 y4m4 joined #gluster
04:00 nueces joined #gluster
04:51 zhuyb joined #gluster
05:22 zhuyb joined #gluster
06:15 zhuyb joined #gluster
06:52 dhsmith joined #gluster
06:55 hateya joined #gluster
07:20 isomorphic joined #gluster
08:06 dhsmith joined #gluster
08:08 dhsmith_ joined #gluster
08:12 dhsmith joined #gluster
08:19 dhsmith_ joined #gluster
08:57 kore joined #gluster
08:58 kore left #gluster
09:32 hateya joined #gluster
11:46 Oneiroi joined #gluster
11:46 Oneiroi joined #gluster
12:00 mohankumar joined #gluster
13:15 hateya joined #gluster
14:50 edward1 joined #gluster
15:38 dhsmith joined #gluster
16:14 isomorphic joined #gluster
17:23 dhsmith joined #gluster
17:38 dustint joined #gluster
19:33 andrei_ joined #gluster
20:43 andreask joined #gluster
20:50 designbybeck joined #gluster
21:21 dhsmith joined #gluster
22:17 hateya joined #gluster
22:18 mtanner ,,(rdma)
22:18 glusterbot GlusterFS uses the kernel provided APIs to utilize the RDMA features so, technically, glusterfs is hardware agnostic (similar to the backend FS)
22:19 mtanner ,,(nfs rdma)
22:19 glusterbot mtanner: Error: No factoid matches that key.
22:19 mtanner is the gluster's nfs server rdma capable as well?
22:20 mtanner as in, can I mount the nfs exports through rdma to clients over IB fabric?
22:24 m0zes not natively. you would need to use ipoib
22:25 mtanner thanks! that means I'll need to use kernel's nfs server then for rdma... mhm, one complication more
22:25 m0zes does a normal nfs client support rdma?
22:25 hagarth joined #gluster
22:26 mtanner you need to 'modprobe xprtrdma' on client, then mount with '-o rdma,port=xxx'
22:26 mtanner but yes, kernel's nfs server and client support nfs-rdma
22:28 mtanner and on server side you need to 'modprobe svcrdma' and register a port for nfs-rdma with rpc
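The kernel NFS-over-RDMA steps mtanner outlines, gathered into one sketch. Port 20049 is the conventional NFS/RDMA port and `server-ib` is a hypothetical IB-fabric hostname; paths assume a mainline kernel with knfsd.

```shell
# Server side: load the RDMA transport for knfsd and register a listener port
modprobe svcrdma
echo "rdma 20049" > /proc/fs/nfsd/portlist

# Client side: load the RDMA client transport and mount over the IB fabric
modprobe xprtrdma
mount -t nfs -o rdma,port=20049 server-ib:/export /mnt/export
```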
22:50 andrei__ joined #gluster
23:41 nueces joined #gluster