IRC log for #gluster, 2012-10-27

All times shown according to UTC.

Time Nick Message
01:31 Daxxial_ joined #gluster
01:31 atrius_away joined #gluster
01:36 bala1 joined #gluster
02:04 atrius joined #gluster
02:11 stopbit joined #gluster
02:29 ika2810 joined #gluster
02:30 saz joined #gluster
02:42 init_ joined #gluster
02:43 init__ joined #gluster
02:44 init_ joined #gluster
03:02 dmachi joined #gluster
03:38 Bullardo joined #gluster
03:44 ika2810 left #gluster
04:08 Bullardo joined #gluster
04:10 Bullardo joined #gluster
04:39 Bullardo joined #gluster
04:52 Bullardo joined #gluster
05:11 Bullardo joined #gluster
05:20 bulde1 joined #gluster
05:35 Bullardo joined #gluster
05:51 bulde1 joined #gluster
06:06 dsj joined #gluster
06:08 dsj Question: I've got a four-node gluster 3.3 cluster with a distribute-replicate volume.  The four nodes are peered just fine but one of the nodes can't be reached by most clients because of DNS resolution.  I'd like to change the node's address from hostname to IP without disrupting the cluster/volume.  Is this possible?
06:42 raghu joined #gluster
07:40 ekuric joined #gluster
08:10 andreask joined #gluster
08:54 lkoranda joined #gluster
08:55 lkoranda joined #gluster
09:50 oneiroi joined #gluster
09:58 Triade joined #gluster
10:07 Triade joined #gluster
10:15 Triade1 joined #gluster
10:18 Triade joined #gluster
11:04 tryggvil_ joined #gluster
11:26 tryggvil joined #gluster
11:33 randomcamel joined #gluster
12:26 layer3 joined #gluster
12:58 layer3 joined #gluster
13:00 UnixDev joined #gluster
13:51 tryggvil joined #gluster
14:16 UnixDev I have 500GB LVM volumes that are stored on a replicated gluster vol. I set 'cluster.data-self-heal-algorithm' to diff. Anything else that I should do to make sure the vols are in sync and working optimally?
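For reference, that option is set per volume with the gluster CLI; a minimal sketch, assuming a replicated volume named `myvol`:

```shell
# Switch the self-heal data algorithm to rsync-style diffs - useful for large,
# frequently-rewritten files like LVM/VM images, since only changed blocks
# travel over the wire during heals
gluster volume set myvol cluster.data-self-heal-algorithm diff

# Confirm the option was recorded
gluster volume info myvol
```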
14:20 Daxxial_1 joined #gluster
15:02 dmachi1 joined #gluster
16:21 sazified joined #gluster
16:57 NuxRo dsj-afk-till-mon: I think you can simply remove that brick and add it back as new with the correct IP/hostname
17:11 tryggvil joined #gluster
17:18 JoeJulian No, not really. The better solution would be for dsj-afk-till-mon to fix his DNS. That's one of the best things about using hostnames.
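If fixing DNS properly isn't possible right away, a static hosts entry on each affected client gets the same effect without touching the volume; the hostname and address below are made-up placeholders:

```shell
# Hypothetical brick hostname/IP - map the unresolvable name on each client
echo "192.0.2.14  gluster4.example.com" | sudo tee -a /etc/hosts
```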
17:29 wintix _Bryan_: ping
17:36 NuxRo JoeJulian: best practice question. Can I have 2 bricks, parts of different volumes, sharing the same filesystem/partition, or should I chop up that partition with LVM?
17:42 JoeJulian Either way should be fine. I prefer the lvm approach just for potential flexibility and resource management.
17:44 JoeJulian If both volumes share the same filesystem then, of course, adding files to one volume will affect the free space on the other.
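The LVM approach JoeJulian prefers can be sketched like this (the VG name, size, and mount point are assumptions for illustration):

```shell
# One logical volume per brick, so each gluster volume's free space and
# resource limits stay independent of the other's
lvcreate -L 500G -n brick_vol1 vg_gluster
mkfs.xfs -i size=512 /dev/vg_gluster/brick_vol1
mkdir -p /export/brick_vol1
mount /dev/vg_gluster/brick_vol1 /export/brick_vol1
```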
17:48 NuxRo aha, I'll play around, see what suits me best, cheers
17:51 lh joined #gluster
17:51 lh joined #gluster
18:11 andreask joined #gluster
18:44 z00dax NuxRo: coming to the RH devday thing next week ?
19:01 TSM joined #gluster
19:02 TSM does gamin or the like work with gluster? I guess not
19:07 HeMan joined #gluster
19:10 tryggvil joined #gluster
19:14 JoeJulian Besides being a street urchin, what's gamin?
19:14 _Bryan_ wintix: still there?
19:15 TSM gamin is like inotify but works across nfs
19:15 TSM gamin / fam
19:17 JoeJulian hmm, well I haven't heard of any success or failures. I'm not sure where in the kernel notification comes from either. It'd be worth trying and blogging about. I see it has a "poll" mode, so with that I would guess that even if the notify method doesn't work, the poll would.
19:21 TSM does gluster do any internal caching of metadata on the client, or do all stat calls result in a stat on the bricks? I'm wondering if I'll see the issue I have with NFS where sometimes, if you stat a file that has just been changed via a different server, you don't see that it's been updated
19:42 JoeJulian nfs attribute caching is done by the kernel. The fuse client shouldn't see that behavior.
19:46 TSM best I use the fuse client then if I want to get away from that issue
19:47 TSM I'll only really be shifting files that are between 1-3MB, max 10MB
19:47 TSM when the wiki says "if you access lots of small files", what size are they talking about?
19:47 TSM I mean using nfs for small files etc
19:48 NuxRo z00dax: url? (re devday)
19:52 wintix _Bryan_: now again. Just wanted to let you know that switching off offloading on the 10GE interface indeed brings heavy throughput gains.
19:53 wintix _Bryan_: also tried it with xen nodes that have storage attached via iscsi and drbd on the backend storage. Was kind of weird: with the xen/iscsi/drbd setup I saw a huge performance loss upon disabling offloading
19:57 wintix seems a bit strange that gluster benefits so much while iscsi/drbd applications suffer.
19:59 wintix _Bryan_: what I also found to increase performance was to set option transport.socket.nodelay on in glusterd.vol
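A sketch of the two tweaks wintix describes; the interface name and file path are assumptions for illustration:

```shell
# 1) Turn off TCP offloads on the 10GbE NIC (ethtool settings are lost on
#    reboot, so persist them via your distro's network config)
ethtool -K eth2 tso off gso off gro off

# 2) In /etc/glusterfs/glusterd.vol, inside the "volume management" block, add:
#        option transport.socket.nodelay on
#    then restart glusterd for the option to take effect.
```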
20:01 wintix _Bryan_: did you find that tweaking other gluster options gave you benefits?
20:06 andreask joined #gluster
20:36 Alpinist joined #gluster
20:43 tryggvil joined #gluster
20:46 balunasj joined #gluster
21:37 UnixDev how can you ever tell if a file is in sync between two mirrors? also, what if that file keeps changing, like an LVM vol?
21:40 TSM actually, good question: is there a program to check consistency?
21:41 TSM from what I've read, a write in a redundant setup will only be confirmed if all bricks associated with the write confirm
21:41 TSM where it gets iffy for me is when one brick is down etc. - what is the effect on write speeds?
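In 3.3 the closest thing to a consistency check is the self-heal daemon's own bookkeeping, queried through the heal subcommand; a sketch assuming a volume named `myvol`:

```shell
gluster volume heal myvol info              # entries with heals still pending
gluster volume heal myvol info heal-failed  # entries the daemon could not heal
gluster volume heal myvol info split-brain  # entries with conflicting copies
```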
21:41 tryggvil joined #gluster
23:11 randomcamel I saw a Gluster talk at work, but my memory is hazy. do I have this right: in Stripe mode, files sync from other nodes if/only if they're accessed by a node that doesn't have a copy of them?
23:34 18VAAFY57 joined #gluster
23:34 18VAAFY57 left #gluster
23:50 johnmark TSM: we are working on the concept of triggers for 3.4
23:51 JoeJulian randomcamel: Nope. See ,,(stripe). Replication self-heal WAS triggered only by a lookup() of a file (if self-heal was necessary). Now that will trigger it but there's also a proactive self-heal daemon.
23:51 glusterbot randomcamel: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
23:51 johnmark TSM: driven by the marker API, which is the foundation for geo-rep
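The lookup()-triggered heal JoeJulian mentions can also be forced by hand by stat'ing every file through a client mount; the mount point below is an assumption:

```shell
# Walk the fuse mount: each stat is a lookup, which queues any needed self-heal
find /mnt/myvol -noleaf -print0 | xargs -0 stat > /dev/null 2>&1
```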
