
IRC log for #gluster, 2013-11-11


All times shown according to UTC.

Time Nick Message
00:12 KORG Hey guys
00:13 KORG Please advise - I have 2 PowerDNS 3.3 servers under CentOS 6.4, both configured to use MySQL as the backend
00:13 KORG So, 2 servers with PowerDNS 3.3, and MySQL configured master-slave on both
00:15 KORG I have 1 domain in the database, with records. If I ask the first NS server 'dig @myns1 domain' from a remote computer, I get a correct answer. But if I do the same for the second, ns2, I get an empty answer
00:15 KORG If I do it locally on ns2, I get a correct answer
00:15 KORG Where can I find the problem?
00:16 KORG Ok, please just ignore my question :)
00:16 KORG It was iptables :)
00:17 KORG *iptables rules
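For reference, a minimal sketch of the kind of iptables rules that resolve this on CentOS 6 - DNS needs port 53 open for both UDP (normal queries) and TCP (large answers and zone transfers); chain placement is an assumption about the local ruleset:

    # allow DNS queries to reach the nameserver
    iptables -I INPUT -p udp --dport 53 -j ACCEPT
    iptables -I INPUT -p tcp --dport 53 -j ACCEPT
    # persist the rules across reboots on CentOS 6
    service iptables save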
01:01 yinyin joined #gluster
02:03 harish joined #gluster
02:37 satheesh1 joined #gluster
02:39 zwu joined #gluster
03:01 vshankar joined #gluster
03:05 _br_ joined #gluster
03:09 kshlm joined #gluster
03:21 aravindavk joined #gluster
03:29 shubhendu joined #gluster
03:33 sgowda joined #gluster
03:34 RameshN joined #gluster
03:35 dbruhn left #gluster
03:50 vshankar joined #gluster
03:50 rjoseph joined #gluster
03:53 kanagaraj joined #gluster
03:58 itisravi joined #gluster
03:59 saurabh joined #gluster
03:59 lalatenduM joined #gluster
04:08 ababu joined #gluster
04:12 aravindavk joined #gluster
04:38 shyam joined #gluster
04:43 psharma joined #gluster
04:47 hagarth joined #gluster
04:49 ppai joined #gluster
04:49 mohankumar joined #gluster
04:55 dusmant joined #gluster
05:13 ndarshan joined #gluster
05:15 T0aD joined #gluster
05:21 davinder joined #gluster
05:22 spandit joined #gluster
05:27 DV__ joined #gluster
05:30 vshankar joined #gluster
05:32 vpshastry joined #gluster
05:37 vshankar joined #gluster
05:44 rastar joined #gluster
06:05 DV__ joined #gluster
06:07 davinder joined #gluster
06:16 nshaikh joined #gluster
06:17 bala joined #gluster
06:34 vpshastry joined #gluster
06:43 vshankar joined #gluster
06:47 dasfda joined #gluster
06:49 asias_ joined #gluster
06:52 ricky-ticky joined #gluster
06:53 meghanam joined #gluster
06:56 bulde joined #gluster
06:56 CheRi joined #gluster
07:09 ngoswami joined #gluster
07:11 kPb_in_ joined #gluster
07:15 sticky_afk joined #gluster
07:15 stickyboy joined #gluster
07:22 jtux joined #gluster
07:24 T0aD joined #gluster
07:34 ndarshan joined #gluster
07:45 vpshastry joined #gluster
07:54 T0aD joined #gluster
07:59 rastar joined #gluster
07:59 jtux joined #gluster
07:59 glusterbot New news from newglusterbugs: [Bug 1026143] Gluster rebalance --xml doesn't work <http://goo.gl/hVyRoP>
08:03 ctria joined #gluster
08:04 XpineX joined #gluster
08:11 keytab joined #gluster
08:17 tziOm joined #gluster
08:30 glusterbot New news from newglusterbugs: [Bug 1028672] BD xlator <http://goo.gl/y6DSIl>
08:31 kbut joined #gluster
08:31 blook joined #gluster
08:33 ThatGraemeGuy joined #gluster
08:35 kbut ndevos: hi, have you replicated my problem with ACLs and 32-bit systems?
08:41 mgebbe joined #gluster
09:10 pkoro joined #gluster
09:13 ndevos kbut: no I have not tried, and I do not know if I can find time for that soon
09:18 DV__ joined #gluster
09:24 shubhendu joined #gluster
09:30 glusterbot New news from newglusterbugs: [Bug 923540] features/compress: Compression/DeCompression translator <http://goo.gl/l5Y0Z> || [Bug 808073] numerous entries of "OPEN (null) (--) ==> -1 (No such file or directory)" in brick logs when an add-brick operation is performed <http://goo.gl/zQN2F>
09:39 bulde1 joined #gluster
09:47 Norky joined #gluster
09:57 shubhendu joined #gluster
10:30 shubhendu joined #gluster
10:33 ppai joined #gluster
10:45 X3NQ joined #gluster
10:47 bnh2 joined #gluster
10:47 bnh2 morning All
10:48 bnh2 I have a problem with glusterFS again, but this time with what may be compatibility issues with glusterFS
10:49 bnh2 the problem is: "I'm having problems with Apache Lucene indexing". Has anyone seen any problems with Lucene indexing in a glusterfs environment?
10:50 bnh2 Caused by: java.lang.RuntimeException: mergeFields produced an invalid result: docCount is 9640 but fdx file size is 65536 file=_fcz.fdx file exists?=true; now aborting this merge to prevent index corruption
10:50 bnh2 at org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:265)
10:50 bnh2 at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
10:50 bnh2 at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4273)
10:50 bnh2 at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3917)
10:50 bnh2 at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:388)
10:50 bnh2 at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:456)
10:50 bnh2 Exception in thread "Lucene Merge Thread #570" org.apache.lucene.index.MergePolicy$MergeException: java.lang.RuntimeException: mergeFields produced an invalid result: docCount is 190 but fdx file size is 0 file=_fd1.fdx file exists?=true; now aborting this merge to prevent index corruption
10:51 bnh2 joined #gluster
10:51 bnh2 left #gluster
10:51 bnh2 joined #gluster
10:53 calum_ joined #gluster
11:04 bnh2 Can we use glusterfs to replicate indexed files? We are using Apache Lucene
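A rough way to check whether errors like the one pasted above come from replica inconsistency rather than from Lucene itself is to compare the index file as seen through the gluster mount with the copies sitting on each brick; a sketch with hypothetical paths:

    # size of the file through the gluster mount
    stat -c '%s %n' /mnt/gluster/index/_fd1.fdx
    # size of the copy on each replica brick (run on every brick server)
    stat -c '%s %n' /bricks/brick1/index/_fd1.fdx
    # non-zero trusted.afr.* values in the xattrs indicate pending self-heal
    getfattr -d -m . -e hex /bricks/brick1/index/_fd1.fdx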
11:15 shubhendu joined #gluster
11:25 partner nice, got /var filled up with self-heal logs and then gluster was also unable to write its configs (I assume); now it does not recognize its bricks at all and won't start up the volume
11:29 partner the other part of the replica keeps kicking me out
11:32 edward1 joined #gluster
11:39 ppai joined #gluster
11:47 partner ok, now they both are busted
12:05 MiteshShah joined #gluster
12:09 itisravi joined #gluster
12:15 kkeithley joined #gluster
12:18 partner hmm, I wonder if the bricks have lost their port configs.. all bricks are saying listen-port=0
12:19 partner or rather, what I'm wondering is what I should be putting there instead. 24009, probably, for starters
12:20 partner nvm, it's 0 for the replica counterpart.
12:24 partner getting a faint hint from that; cleaning up the peer info fixed the first issue and I've got one side up again
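The per-brick metadata being poked at here lives under /var/lib/glusterd, and the ports the brick processes are actually bound to can be cross-checked with the CLI; a sketch with a placeholder volume name:

    # brick config files containing the listen-port entries
    grep -r listen-port /var/lib/glusterd/vols/myvol/bricks/
    # ports the running brick processes report
    gluster volume status myvol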
12:27 davidbierce joined #gluster
12:29 partner at least the glustershd.log is getting filled with lots of entries, it will probably run out of disk again due to that :)
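A hedged sketch of keeping gluster logs from filling /var again - the diagnostics log-level options and the log rotate command exist in gluster of this era, though defaults vary by version:

    # reduce client- and brick-side log verbosity
    gluster volume set myvol diagnostics.client-log-level WARNING
    gluster volume set myvol diagnostics.brick-log-level WARNING
    # rotate brick logs from the CLI (or cover /var/log/glusterfs with logrotate)
    gluster volume log rotate myvol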
12:55 davidbierce joined #gluster
13:00 davidbierce joined #gluster
13:01 ctria joined #gluster
13:01 anonymus joined #gluster
13:03 T0aD joined #gluster
13:11 T0aD joined #gluster
13:14 T0aD joined #gluster
13:21 vpshastry joined #gluster
13:22 dusmant joined #gluster
13:43 rastar joined #gluster
13:46 vpshastry left #gluster
13:55 hagarth joined #gluster
13:56 dusmant joined #gluster
14:00 anonymus hi
14:00 glusterbot anonymus: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
14:01 glusterbot New news from newglusterbugs: [Bug 990028] enable gfid to path conversion <http://goo.gl/1HwiQc> || [Bug 969461] RFE: Quota fixes <http://goo.gl/XFSM4>
14:06 bennyturns joined #gluster
14:08 failshell joined #gluster
14:10 anonymus guys, please tell me: is it possible to use glusterfs in a production environment?
14:14 ira joined #gluster
14:15 ira joined #gluster
14:16 tziOm joined #gluster
14:22 ctria joined #gluster
14:23 ndevos anonymus: sure, and you can even get a company to support it: http://www.redhat.com/storage/
14:23 glusterbot Title: Red Hat | Red Hat Storage Server | Open, software-defined storage (at www.redhat.com)
14:25 asias joined #gluster
14:27 davinder joined #gluster
14:34 anonymus ndevos: is it reliable enough?
14:34 ndevos anonymus: sure, but it also depends on the use-case and workload
14:35 anonymus I want to use it for replication
14:35 anonymus but not sure
14:35 anonymus that it would be what I need
14:36 B21956 joined #gluster
14:36 anonymus because the configuration is not common
14:36 ndevos well, replication is one thing, but certain workloads are not a very good match, like storing databases
14:37 anonymus my configuration is for small file storage
14:37 ndevos or, things like git repositories (many small files) can give you trouble too
14:37 anonymus it is located at the railway terminal
14:38 B21956 joined #gluster
14:38 ndevos for some people and their workloads with small files it works well, for others it does not work sufficiently; you probably should give it a try and see for yourself
14:38 anonymus and the client nodes are located in railway wagons
14:39 anonymus I will try of course
14:39 anonymus I do not know what to use: NFS + rsync or gluster
14:39 anonymus the storage to sync will be about 20 Gigs
14:40 anonymus the increment per 20-minute sync is about 1 Gig
14:40 kkeithley If there's a lot of latency between the terminal and the wagons (carriages, cars; moving) then geo-replication is probably better than gluster AFR synchronous replication.
14:40 anonymus and about 12 wagons
14:40 anonymus the replication will start only at the end terminal
14:40 kkeithley s/moving/and they're moving/
14:40 glusterbot What kkeithley meant to say was: If there's a lot of latency between the terminal and the wagons (carriages, cars; and they're moving) then geo-replication is probably better than gluster AFR synchronous replication.
14:41 anonymus geo-replication?
14:41 ndevos a gluster volume that is mounted will not be usable in case there is a network disconnect; would that be a common situation?
14:41 anonymus yes
14:42 anonymus it will be automatically connected at the end station by wifi
14:42 ndevos with geo-replication your terminals would be their own gluster trusted pool, and geo-replication can be used to sync the volume from one pool to another (the one at the railway terminal)
14:43 anonymus to the main node with useful content
14:44 anonymus I will try to find out how to do it, thank you, guys.
14:44 ndevos well, geo-replication is currently one-way (only writable at the master volume, synced to slave volumes) - where does the content change?
14:45 anonymus at the main node there are some files that have to be synced to the wagon nodes after arriving at the end station
14:45 anonymus the terminal node can be read-only
14:45 anonymus so I need to read about geo-replication?
14:45 ricky-ticky joined #gluster
14:46 ndevos yes, that would probably make most sense
14:46 anonymus thank you!
14:46 ndevos good luck :)
14:46 anonymus I'll try :)
14:47 anonymus bb
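For the archive, a minimal sketch of the geo-replication setup suggested above, roughly as the gluster 3.3-era CLI worked - volume, host, and directory names are placeholders, and exact syntax varies by version, so check the geo-replication docs for yours:

    # start syncing mastervol (the main node's content) to a slave directory
    # (newer versions also take a slave volume as host::volname)
    gluster volume geo-replication mastervol wagon1:/data/sync start
    gluster volume geo-replication mastervol wagon1:/data/sync status

As ndevos notes, the sync is one-way: changes are only picked up on the master volume and pushed out to the slaves.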
14:49 mohankumar joined #gluster
14:50 dbruhn joined #gluster
15:00 harish joined #gluster
15:03 jag3773 joined #gluster
15:04 bugs_ joined #gluster
15:05 plarsen joined #gluster
15:08 jdarcy joined #gluster
15:10 rjoseph joined #gluster
15:10 kaptk2 joined #gluster
15:14 wushudoin joined #gluster
15:20 rcheleguini joined #gluster
15:21 lpabon joined #gluster
15:26 fyxim joined #gluster
15:37 sprachgenerator joined #gluster
15:39 zaitcev joined #gluster
15:40 vpshastry joined #gluster
15:50 frostyfrog joined #gluster
15:51 bulde joined #gluster
15:55 vpshastry left #gluster
16:12 pravka joined #gluster
16:16 daMaestro joined #gluster
16:32 Technicool joined #gluster
16:34 davinder joined #gluster
16:37 hchiramm__ joined #gluster
16:37 aliguori joined #gluster
16:38 LoudNoises joined #gluster
16:49 bulde joined #gluster
16:54 chirino joined #gluster
17:07 bala joined #gluster
17:18 dusmant joined #gluster
17:32 lalatenduM joined #gluster
17:47 dusmant joined #gluster
17:51 satheesh1 joined #gluster
17:53 osiekhan joined #gluster
18:10 satheesh joined #gluster
18:22 bulde joined #gluster
18:23 pravka joined #gluster
18:34 vpshastry joined #gluster
18:37 vpshastry left #gluster
19:14 SpeeR joined #gluster
19:37 Mo__ joined #gluster
20:18 gmcwhistler joined #gluster
20:18 gmcwhistler joined #gluster
20:22 ira joined #gluster
21:54 lava joined #gluster
22:11 cyberbootje joined #gluster
22:27 sweeper joined #gluster
22:28 sweeper sup folks. I'd like to use gluster to manage replication across a heterogeneous JBOD, but it makes me add bricks according to the replica count
22:29 sweeper i.e. I have a bunch of disks of varying sizes, I'd like to have 2 copies of any given file across the array
22:31 elyograg sweeper: gluster replication is explicit - each brick in a replica set replicates to other specific brick(s).
22:32 sweeper ah, I see :/
22:33 elyograg actually, the client handles most replication by writing to all bricks in a set.
22:33 sweeper I'm looking for something more along the lines of round-robin 'spreading out' with the caveat that there must be 2 of any given file
22:35 sweeper don't suppose you have any ideas? this is all on one server
22:35 elyograg gluster is more rigid than that.  If you set it up as replica 2, you must add bricks in pairs.  If you set it up as replica 3, you must add bricks three at a time.
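Concretely, that rigidity looks like this in the CLI - bricks are supplied in replica-set-sized groups, and the order given determines which bricks pair up; names below are placeholders:

    # replica 2: disk1 pairs with disk2, disk3 with disk4
    gluster volume create homevol replica 2 \
        srv:/bricks/disk1 srv:/bricks/disk2 \
        srv:/bricks/disk3 srv:/bricks/disk4
    # growing the volume also takes a full pair at a time
    gluster volume add-brick homevol srv:/bricks/disk5 srv:/bricks/disk6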
22:35 elyograg there's no redundancy if you've only got one server.  that server dies, and everything's gone.
22:35 sweeper oh, I don't need redundancy so much as I need durability
22:36 sweeper this is a home fileserver
22:37 elyograg for a single server, RAID is probably just as good at durability, and far more performant.
22:37 sweeper ya but the disks are heterogeneous
22:40 elyograg you'll have problems with gluster in that regard too - for proper operation, all bricks must be the same size, because files typically get distributed between them roughly equally.  When a brick fills up, new files can end up on different bricks, but performance drops.  If you *append* to an existing file on a completely full brick, it will fail, even though the filesystem as a whole says there's plenty of space.
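There is a knob that softens, but does not solve, the uneven-fill problem described here: cluster.min-free-disk tells the distribution layer to prefer other bricks for new files once a brick drops below the threshold; a sketch with a placeholder volume name:

    # stop placing new files on bricks with less than 10% free space
    gluster volume set homevol cluster.min-free-disk 10%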
22:40 sweeper yar. this seems like a problem that would have been solved
22:40 B21956 joined #gluster
22:41 elyograg can't be solved without an entirely new software design.
22:41 sweeper oh, not in gluster, I get that those design choices have been made, I just mean in general
22:43 elyograg I think that commercial storage solutions like Isilon and Compellent have solved that problem.  There's no open source version of that, you get to pay them a lot of money for it.
23:01 nage someone in our local LUG was working on something like that, but I'm not sure how far they got with it: http://www.youtube.com/watch?v=MC9vQVkclUg
23:01 glusterbot Title: PLUG - 9/17/2013 - Steve Meyers - "Drobo-style RAID" - YouTube (at www.youtube.com)
23:06 ninkotech joined #gluster
23:11 davidbierce joined #gluster
23:11 sweeper nage: I'm about to write a python fusefs that will do this :/
23:12 nage sweeper: he has a perl script for part of it (https://github.com/stevecoug/expandable-raid), a python fusefs would be nice
23:12 glusterbot Title: stevecoug/expandable-raid · GitHub (at github.com)
23:45 dbruhn joined #gluster
23:48 davidbierce joined #gluster
