
IRC log for #gluster, 2013-01-27


All times shown according to UTC.

Time Nick Message
00:17 Shdwdrgn joined #gluster
00:43 sashko joined #gluster
00:46 layer3switch joined #gluster
02:09 DWSR So, from earlier, can I not use gluster on uneven bricks?
02:33 daMaestro joined #gluster
03:00 jjnash DWSR: "uneven bricks"?
03:00 DWSR uneven sized bricks.
03:00 jjnash space-wise?
03:00 DWSR yes.
03:01 jjnash I've done it in the past
03:01 jjnash It's not that wise since replication will fail when you go above and beyond the storage capacity of the smallest brick
03:01 jjnash but it works
03:02 DWSR don't want replication.
03:02 DWSR I just want distribution.
03:02 DWSR All the redundancy and whatnot is being done below the FS level.
03:02 jjnash It still should work
03:03 jjnash You'd just have to be mindful about how full it is
03:04 DWSR lol, that's a no, then.
03:05 DWSR I want pure brick concatenation.
03:05 DWSR That's all I want.
03:05 jjnash What is your use case?
03:06 DWSR I have 2 servers, 1 I rolled on my own and another was a gift.
03:06 layer3switch joined #gluster
03:06 DWSR Drive relocation is impossible.
03:06 DWSR 1 is 5TB, 1 is 1.5TB
03:08 jjnash Maybe you could shrink the filesystem on the 5TB side so that you have enough space to create a matching 1.5TB filesystem for gluster?
03:08 DWSR rofl.
03:08 DWSR Then I don't get a solution to my problem.
03:08 layer3switch joined #gluster
03:08 DWSR I want all of the disk space available in 1 large pool.
03:08 DWSR both servers have redundancy built in below the FS level, as stated, so I don't care about gluster providing it.
03:09 jjnash Ok. Let's back up again for a sec
03:09 jjnash What is the use case FOR this space? vm storage? mail? multimedia? etc?
03:09 DWSR media
03:10 jjnash Then gluster should work. If you'd said vm storage I would have pointed you to Ceph instead
03:13 DWSR So how do I handle this then?
03:13 jjnash Just a second. I'm trying to find you a link
03:22 jjnash DWSR: I'm thinking it might be this translator that I was thinking of: http://europe.gluster.org/community/documentation/index.php/Translators/cluster/unify
03:22 glusterbot <http://goo.gl/Tkehz> (at europe.gluster.org)
03:23 jjnash ALU + disk-usage in particular
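(For context: cluster/unify with the ALU scheduler was configured through a hand-written volfile in the old 1.3/2.x releases. A rough sketch is below; the volume name, subvolume names and option values are illustrative, and the option names are recalled from the legacy documentation rather than checked against a specific release, so treat them as assumptions.)

    volume unify0
      type cluster/unify
      option namespace brick-ns               # unify needs a dedicated namespace subvolume
      option scheduler alu                    # adaptive least-usage scheduler
      option alu.order disk-usage             # place new files on the least-full brick first
      option alu.disk-usage.entry-threshold 2GB
      option alu.disk-usage.exit-threshold 128MB
      subvolumes brick1 brick2
    end-volume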
03:23 DWSR http://europe.gluster.org/community/documentation/index.php/Understanding_Unify_Translator
03:23 glusterbot <http://goo.gl/HnGgW> (at europe.gluster.org)
03:24 jjnash It doesn't exist. What of it?
03:25 jjnash s/it/the page/
03:25 glusterbot What jjnash meant to say was: It doesn't exist. What of the page?
03:25 jjnash Ugh. Don't listen to glusterbot
03:29 badone joined #gluster
03:32 DWSR hrm.
03:32 DWSR Seems the pages were merged.
03:32 DWSR Anyway, that seems to be the right choice.
03:32 DWSR Awesome that I can do the Switch Scheduler.
03:35 jjnash There's also cluster/distribute as described here: http://hekafs.org/index.php/2012/03/glusterfs-algorithms-distribution/
03:35 glusterbot <http://goo.gl/MLB8a> (at hekafs.org)
03:35 jjnash However, it seems like I recall reading somewhere that unify is the preferred translator now
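(For reference: on the 3.x CLI, a plain distributed volume, i.e. brick concatenation with no replication, is what you get when no replica count is given, which matches what DWSR asks for above. A minimal sketch, with hypothetical hostnames and brick paths:)

    gluster peer probe server2                     # run on server1; joins the two servers into one pool
    gluster volume create media server1:/export/brick server2:/export/brick
    gluster volume start media
    mount -t glusterfs server1:/media /mnt/media   # clients see one combined namespace

(Distribute places whole files by hash, so a single file can never be larger than the free space on the brick it lands on; the cluster.min-free-disk volume option can be set to make new files prefer other bricks once a brick crosses the threshold, which helps with the "be mindful about how full it is" caveat.)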
04:07 hagarth joined #gluster
04:43 badone joined #gluster
04:53 badone joined #gluster
05:41 __Bryan__ joined #gluster
06:30 ekuric joined #gluster
06:59 melanor9 joined #gluster
07:01 melanor9 hi gents, how can I find which replace-bricks are in progress right now? I can't stop the volume since it tells me a replace-brick is in progress
07:01 melanor9 and I don't really know which brick it is
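(For anyone hitting the same thing: in the 3.2/3.3-era CLI the replace-brick state is queried per source/destination brick pair, so you first need the brick list and then have to work out the pair from it or from the glusterd logs. A minimal sketch with a placeholder volume name and brick paths:)

    gluster volume info myvol                  # lists the volume's bricks
    gluster volume replace-brick myvol server1:/old-brick server2:/new-brick status
    gluster volume replace-brick myvol server1:/old-brick server2:/new-brick abort    # if you need to stop it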
07:10 ekuric joined #gluster
07:21 ekuric joined #gluster
08:06 __Bryan__ left #gluster
08:18 __Bryan__ joined #gluster
08:23 ekuric left #gluster
09:16 melanor91 joined #gluster
09:22 melanor9 joined #gluster
09:53 DaveS joined #gluster
11:16 sgowda joined #gluster
11:43 gbrand_ joined #gluster
12:24 martoss joined #gluster
12:26 martoss hey folks. I wonder if glusterfs is usable in the following scenario: I have a NAS on which I can install gluster. Further, there's one laptop and two workstations. The workstations and the NAS should have the data locally available, while the laptop can just use the other bricks.
12:29 martoss I thought about setting up a glusterfs volume with 3 bricks, all replicas, on the NAS and the two workstations, and mounting it from the workstations and the laptop. When I turn on a workstation, it should first trigger self-heal and then mount the filesystem. Is it a problem, e.g. if the laptop or a workstation doesn't have all bricks online during mount?
12:36 hagarth joined #gluster
12:42 ndevos martoss: that is not really the intended use-case, it may work, but you may also run into split-brains when files change on two disconnected bricks
12:43 ndevos martoss: I think coda is something that targets offline usage sync-later
12:44 martoss oh, well the NAS is "always on" - so one replica is always "available". What is coda?
12:45 martoss ah, "Coda is an advanced networked filesystem"
12:48 JusHal joined #gluster
12:53 JusHal testing nfs failover by moving the vip; connection-wise it works, but it does not seem to be completely transparent. A client writing a file every second gets "Too many levels of symbolic links" during fail-over. Is this expected?
12:55 ndevos martoss: yeah, so it may work, but ideally you would have all the bricks available all the time
12:58 martoss ok, I'll try it out. Thx for your judgement...
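(A minimal sketch of the setup martoss describes, with hypothetical hostnames and paths; whether it copes well with bricks going offline regularly is exactly the split-brain caveat ndevos raises above:)

    gluster volume create shared replica 3 nas:/export/shared ws1:/export/shared ws2:/export/shared
    gluster volume start shared
    # on the laptop (no local brick), mounting from the always-on NAS:
    mount -t glusterfs nas:/shared /mnt/shared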
12:58 ndevos JusHal: that is not expected... are you just writing a file, or also doing a readdir/ls kind of thing?
12:59 JusHal ndevos: while true; do touch `date +%s`; echo -n .; sleep 1 ; done
13:00 ndevos JusHal: hmm, do you get errors in /var/log/message or dmesg like "NFS: directory X contains a readdir loop.Please contact your server vendor."
13:00 ndevos that could be Bug 845330
13:00 glusterbot Bug http://goo.gl/JIuPA unspecified, high, ---, kkeithle, ASSIGNED , RHS volume mounted as NFS causing a lot of readdir loop messages
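(To check for the message ndevos mentions, look at the NFS client's kernel log, e.g.:)

    dmesg | grep -i 'readdir loop'
    grep -i 'readdir loop' /var/log/messages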
13:02 ndevos JusHal: I'm leaving for the day, maybe you can send an email to the gluster-devel list, or file a bug in bugzilla
13:02 glusterbot http://goo.gl/UUuCq
13:02 ndevos JusHal: if you can reproduce it with a current glusterfs version that is, describe your setup, how your fail-over works and how you do the testing
13:03 JusHal ndevos: thank you, I will do that
13:03 ndevos if others can reproduce that problem too, it should be relatively easy to identify (not necessarily fix) the issue
13:04 ndevos thanks, looking forward to the details
13:04 * ndevos signs out
13:14 DaveS_ joined #gluster
14:53 JusHal joined #gluster
15:28 tryggvil joined #gluster
15:28 tryggvil_ joined #gluster
16:11 _NiC joined #gluster
17:11 elyograg joined #gluster
18:05 __Bryan__ joined #gluster
18:16 melanor9 joined #gluster
18:16 melanor9 hi gents, how can I find which replace-bricks are in progress right now? I can't stop the volume since it tells me a replace-brick is in progress
18:16 melanor9 and I don't really know which brick it is
18:19 sashko joined #gluster
18:21 tjikkun joined #gluster
19:04 tomsve joined #gluster
19:18 daMaestro joined #gluster
19:21 JusHal joined #gluster
20:27 tomsve joined #gluster
21:12 ninkotech_ joined #gluster
21:42 y4m4 joined #gluster
22:13 RicardoSSP joined #gluster
23:11 melanor9 joined #gluster
23:11 polenta|gone joined #gluster
23:34 dcmbrown joined #gluster
23:44 dcmbrown /who
23:44 JusHal joined #gluster
