
IRC log for #gluster, 2015-03-16


All times shown according to UTC.

Time Nick Message
00:02 T3 joined #gluster
00:39 T3 joined #gluster
00:48 tuxcrafter joined #gluster
00:58 mbukatov joined #gluster
01:04 topshare joined #gluster
01:09 hagarth joined #gluster
01:30 asku joined #gluster
01:36 bala joined #gluster
01:38 lalatenduM joined #gluster
01:53 plarsen joined #gluster
01:59 nangthang joined #gluster
02:45 soumya joined #gluster
02:59 harish joined #gluster
03:00 kshlm joined #gluster
03:04 sputnik13 joined #gluster
03:10 glusterbot News from newglusterbugs: [Bug 1199003] Avoid possibility of segfault if xl->ctx is  NULL. <https://bugzilla.redhat.com/show_bug.cgi?id=1199003>
03:16 bharata-rao joined #gluster
03:18 topshare_ joined #gluster
03:20 edong23 joined #gluster
03:20 zerick joined #gluster
03:22 Hemanth1 joined #gluster
03:23 Pupeno joined #gluster
03:35 T3 joined #gluster
03:37 topshare joined #gluster
03:40 ppai joined #gluster
03:44 tetreis joined #gluster
03:55 shubhendu joined #gluster
03:58 lalatenduM joined #gluster
04:00 meghanam joined #gluster
04:04 atinmu joined #gluster
04:06 hagarth joined #gluster
04:16 haomaiwa_ joined #gluster
04:16 nbalacha joined #gluster
04:31 meghanam joined #gluster
04:33 RameshN joined #gluster
04:34 anoopcs joined #gluster
04:36 topshare joined #gluster
04:37 kanagaraj joined #gluster
04:38 nbalacha joined #gluster
04:39 spandit joined #gluster
04:42 rafi joined #gluster
04:42 jiffin joined #gluster
04:44 ndarshan joined #gluster
04:52 nishanth joined #gluster
04:58 schandra joined #gluster
04:58 kovshenin joined #gluster
04:58 jiffin1 joined #gluster
05:09 meghanam joined #gluster
05:10 glusterbot News from newglusterbugs: [Bug 1194640] Tracker bug for Logging framework expansion. <https://bugzilla.redhat.com/show_bug.cgi?id=1194640>
05:12 Pupeno joined #gluster
05:12 R0ok_ joined #gluster
05:13 Apeksha joined #gluster
05:17 gem joined #gluster
05:25 ppp joined #gluster
05:27 bala joined #gluster
05:29 kdhananjay joined #gluster
05:30 zerick_ joined #gluster
05:38 atinmu joined #gluster
05:44 zerick_ joined #gluster
05:45 _zerick_ joined #gluster
05:46 ramteid joined #gluster
05:46 anrao joined #gluster
05:47 ashiq joined #gluster
05:49 Bhaskarakiran joined #gluster
05:55 nangthang joined #gluster
05:59 vimal joined #gluster
06:00 T3 joined #gluster
06:06 ppp joined #gluster
06:15 lalatenduM joined #gluster
06:17 nshaikh joined #gluster
06:23 sripathi1 joined #gluster
06:25 meghanam joined #gluster
06:25 raghu joined #gluster
06:30 ppai joined #gluster
06:31 atalur joined #gluster
06:33 SOLDIERz_ joined #gluster
06:41 glusterbot News from newglusterbugs: [Bug 1202209] RFE: Sync the time of logger with that of system <https://bugzilla.redhat.com/show_bug.cgi?id=1202209>
06:41 glusterbot News from newglusterbugs: [Bug 1202212] Performance enhancement for RDMA <https://bugzilla.redhat.com/show_bug.cgi?id=1202212>
06:51 atinmu joined #gluster
07:03 gomikemike joined #gluster
07:07 papamoose1 joined #gluster
07:11 glusterbot News from newglusterbugs: [Bug 1202218] Disperse volume: Input/output error on nfs mount after the volume start force <https://bugzilla.redhat.com/show_bug.cgi?id=1202218>
07:24 nangthang joined #gluster
07:26 [Enrico] joined #gluster
07:27 lifeofguenter joined #gluster
07:33 jtux joined #gluster
07:41 _polto_ joined #gluster
07:41 meghanam joined #gluster
07:44 ppai joined #gluster
07:58 schandra_ joined #gluster
07:59 schandra joined #gluster
08:14 Philambdo joined #gluster
08:16 maveric_amitc_ joined #gluster
08:18 fsimonce joined #gluster
08:21 kovshenin joined #gluster
08:22 itisravi joined #gluster
08:29 sripathi2 joined #gluster
08:32 o5k joined #gluster
08:41 glusterbot News from newglusterbugs: [Bug 1202244] [Quota] : To have a separate quota.conf file for inode quota. <https://bugzilla.redhat.com/show_bug.cgi?id=1202244>
08:41 glusterbot News from newglusterbugs: [Bug 1202250] [Quota] : Handle file count and directory count, introduced as part of inode quota feature, in AFR <https://bugzilla.redhat.com/show_bug.cgi?id=1202250>
08:45 smohan joined #gluster
08:50 ppai joined #gluster
08:52 liquidat joined #gluster
08:53 topshare_ joined #gluster
08:55 elico joined #gluster
09:01 ashiq joined #gluster
09:01 T3 joined #gluster
09:03 ctria joined #gluster
09:04 _polto_ joined #gluster
09:05 shaunm joined #gluster
09:08 jflf joined #gluster
09:08 mbukatov joined #gluster
09:13 shubhendu joined #gluster
09:14 32NABJXY3 joined #gluster
09:14 ninkotech joined #gluster
09:20 ndarshan joined #gluster
09:22 Slashman joined #gluster
09:22 deepakcs joined #gluster
09:25 smohan_ joined #gluster
09:28 bene2 joined #gluster
09:35 fsimonce joined #gluster
09:38 m0ellemeister joined #gluster
09:39 ndarshan joined #gluster
09:41 glusterbot News from newglusterbugs: [Bug 1202270] Disperse volume: trusted.ec.version xattr lost when heal is invoked from the client <https://bugzilla.redhat.com/show_bug.cgi?id=1202270>
09:48 karnan joined #gluster
09:49 ctria joined #gluster
09:53 dusmant joined #gluster
09:55 shaunm joined #gluster
09:56 topshare joined #gluster
10:07 aravindavk joined #gluster
10:08 sripathi1 joined #gluster
10:10 sripathi1 joined #gluster
10:12 atalur_ joined #gluster
10:15 gildub joined #gluster
10:23 topshare joined #gluster
10:24 bala joined #gluster
10:29 anil joined #gluster
10:30 harish_ joined #gluster
10:30 o5k_ joined #gluster
10:34 jflf Good morning everyone, assuming that you're in a European timezone, otherwise good whatever moment of the day it is for you!
10:35 jflf While having a look at my logs I noticed some lines like this:
10:35 jflf 0-xlator: /usr/lib64/glusterfs/3.6.2/xlator/rpc-transport/socket.so: cannot open shared object file: No such file or directory
10:35 jflf Has anyone seen this already? What's the impact?
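A quick way to triage that message is to check whether the shared object exists at the path the log names, and whether the version in that path matches what is actually installed; stale xlator directories left over from an upgrade are a common cause. A minimal sketch, assuming an RPM-based install (paths and package names may differ):

    # Which glusterfs packages/version are actually installed?
    $ rpm -q glusterfs glusterfs-server

    # Which xlator trees exist on disk? More than one version here
    # usually means leftovers from a previous install.
    $ ls -d /usr/lib64/glusterfs/*/

    # Does the specific object from the log line exist?
    $ ls -l /usr/lib64/glusterfs/3.6.2/xlator/rpc-transport/socket.so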
10:44 bene2 joined #gluster
10:47 topshare joined #gluster
10:48 topshare joined #gluster
10:51 firemanxbr joined #gluster
10:52 soumya joined #gluster
10:55 ppai joined #gluster
10:57 deniszh joined #gluster
11:03 T3 joined #gluster
11:04 Pupeno joined #gluster
11:04 Pupeno joined #gluster
11:21 pkoro joined #gluster
11:35 T3 joined #gluster
11:38 bala joined #gluster
11:38 mcblady joined #gluster
11:42 glusterbot News from newglusterbugs: [Bug 1188184] Tracker bug :  NFS-Ganesha new features support for  3.7. <https://bugzilla.redhat.com/show_bug.cgi?id=1188184>
11:42 glusterbot News from newglusterbugs: [Bug 1202316] NFS-Ganesha : Starting NFS-Ganesha independent of platforms <https://bugzilla.redhat.com/show_bug.cgi?id=1202316>
11:42 glusterbot News from resolvedglusterbugs: [Bug 1065644] With compression translator for a volume fuse mount I/O is returning input/output error <https://bugzilla.redhat.com/show_bug.cgi?id=1065644>
11:43 mcblady hello guys, i got a quick question about geo replication - the slave server has two interfaces, eth0 with a private IP and eth1 with a public one. The server connects on port 22 on the public interface but then tries to connect on port 24007 on the private one. The slave advertises the wrong IP - can I force it to stay on the same interface, or bind to a specific interface only?
11:43 mcblady glusterfs version 3.6.2 running on centos 6.6
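One workaround (not confirmed in this log) is to register the slave by hostname and pin that hostname to the public address on the master side, so both the ssh and the 24007 connections resolve to eth1. A sketch; the hostname, IP, and volume names below are placeholders:

    # On the master: force the slave's hostname to resolve to its
    # public (eth1) address.
    $ echo "203.0.113.10  slave.example.com" >> /etc/hosts

    # Create the session against the hostname, not an IP:
    $ gluster volume geo-replication mastervol slave.example.com::slavevol create push-pem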
11:45 Norky joined #gluster
11:48 nishanth joined #gluster
12:00 chirino joined #gluster
12:01 o5k__ joined #gluster
12:04 smohan joined #gluster
12:04 ppai joined #gluster
12:04 sripathi1 joined #gluster
12:06 LebedevRI joined #gluster
12:11 dblack joined #gluster
12:13 nbalacha joined #gluster
12:15 nbalacha joined #gluster
12:16 ashiq joined #gluster
12:17 dusmant joined #gluster
12:19 dblack joined #gluster
12:23 malevolent joined #gluster
12:23 xavih joined #gluster
12:23 anoopcs joined #gluster
12:27 ramteid joined #gluster
12:28 _polto_ joined #gluster
12:31 anoopcs joined #gluster
12:31 theron joined #gluster
12:33 theron_ joined #gluster
12:33 meghanam joined #gluster
12:37 bennyturns joined #gluster
12:42 Apeksha joined #gluster
12:55 harish_ joined #gluster
12:57 ppp joined #gluster
13:00 Slashman_ joined #gluster
13:02 andreask1 joined #gluster
13:03 asku joined #gluster
13:03 bala joined #gluster
13:07 nishanth joined #gluster
13:09 julim joined #gluster
13:10 smohan joined #gluster
13:12 _polto_ joined #gluster
13:20 Slashman joined #gluster
13:22 topshare joined #gluster
13:22 SOLDIERz_ joined #gluster
13:23 topshare joined #gluster
13:23 hagarth joined #gluster
13:27 rwheeler joined #gluster
13:27 dusmant joined #gluster
13:27 hamiller joined #gluster
13:30 georgeh-LT2 joined #gluster
13:30 ppp joined #gluster
13:35 o5k_ joined #gluster
13:36 nishanth joined #gluster
13:38 jiffin joined #gluster
13:42 glusterbot News from newglusterbugs: [Bug 1200262] Upcall framework support along with cache_invalidation usecase handled <https://bugzilla.redhat.com/show_bug.cgi?id=1200262>
13:42 glusterbot News from newglusterbugs: [Bug 1200267] Upcall: Cleanup the expired upcall entries <https://bugzilla.redhat.com/show_bug.cgi?id=1200267>
13:42 glusterbot News from newglusterbugs: [Bug 1200268] Upcall: Support for lease_locks <https://bugzilla.redhat.com/show_bug.cgi?id=1200268>
13:43 plarsen joined #gluster
13:47 lalatenduM joined #gluster
13:50 o5k joined #gluster
13:50 lpabon joined #gluster
13:51 B21956 joined #gluster
13:56 luis_silva joined #gluster
14:00 anoopcs joined #gluster
14:07 monotek1 joined #gluster
14:07 virusuy joined #gluster
14:14 martin__ joined #gluster
14:16 martin__ Hello everyone
14:17 wushudoin joined #gluster
14:17 martin__ I need some help using glusterfs used on a webserver
14:17 _Bryan_ joined #gluster
14:18 martin__ I have a problem with DHT lookups for non existent files
14:18 martin__ I believe the problem is described here http://joejulian.name/blog/dht-misses-are-expensive/
14:19 martin__ since that blog post is more than two years old, I thought there might be some ways of mitigating the problem by now?
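The mitigation that blog post discusses is disabling the fallback lookup that DHT broadcasts to every brick when the hashed subvolume misses. A minimal sketch (volume name is a placeholder); the usual caveat is that with the option off, files sitting away from their hashed location (after renames or an incomplete rebalance) may not be found:

    # Skip the expensive all-brick lookup on a hashed-subvolume miss:
    $ gluster volume set myvol cluster.lookup-unhashed off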
14:20 dusmant joined #gluster
14:20 hagarth joined #gluster
14:25 dastar_ hi
14:25 glusterbot dastar_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
14:25 dastar_ i have a lot of files in the .glusterfs directory, is that normal?
14:28 nbalacha joined #gluster
14:28 sripathi joined #gluster
14:30 pdrakeweb joined #gluster
14:34 nbalacha joined #gluster
14:36 Apeksha joined #gluster
14:36 asku joined #gluster
14:37 raging-dwarf guys, a quick question: i have two gluster servers in replicated mode, and on one of the servers i have a secondary interface (multi-homed) which clients should use for their gluster mounts. Should this just work, or does gluster expect clients to talk to the same interface?
14:38 andreask joined #gluster
14:38 hamiller raging-dwarf, On a replicated Volume the client must be able to reach both peers if using the glusterfs fuse mount
14:38 hagarth joined #gluster
14:39 raging-dwarf hamiller: thanks, that explains some of my issues
14:39 Apeksha joined #gluster
14:39 hamiller raging-dwarf, My pleasure
14:40 raging-dwarf is there a way to make two gluster servers run in two different NAT environments serving local clients on their side?
14:40 corretico joined #gluster
14:40 raging-dwarf i kind of worked around it by exporting a local gluster mount as an nfs share
14:40 raging-dwarf but it is not how i like to see it
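To make hamiller's point above concrete: a fuse client first fetches the volfile from glusterd on 24007, then opens a connection to every brick on that brick's own port, which is what a one-sided NAT breaks. A sketch of checking what must be reachable (volume and host names are placeholders):

    # The Port column shows each brick's listening port (49152+ on 3.4+):
    $ gluster volume status myvol

    # From the client, every brick host:port pair must be reachable:
    $ nc -zv server2.example.com 49153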
14:41 DV__ joined #gluster
14:42 _polto_ joined #gluster
14:44 jflf dastar_: yes it's normal, the .glusterfs directory contains a lot of GlusterFS metadata
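For illustration: each file on a brick carries a trusted.gfid xattr, and .glusterfs stores a hard link to the same inode under a path derived from that GFID, so the directory grows in step with the brick's contents. A sketch with placeholder paths and a made-up GFID:

    # Read a file's GFID straight off the brick:
    $ getfattr -n trusted.gfid -e hex /bricks/b1/somefile
    # trusted.gfid=0x6a2f3b4c5d6e7f8091a2b3c4d5e6f708

    # The same inode appears under .glusterfs/<byte1>/<byte2>/<full gfid>;
    # note the matching inode number and link count:
    $ ls -li /bricks/b1/.glusterfs/6a/2f/6a2f3b4c-5d6e-7f80-91a2-b3c4d5e6f708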
14:46 Apeksha joined #gluster
14:46 andreask left #gluster
14:47 karnan joined #gluster
14:49 roost joined #gluster
14:49 T3 joined #gluster
14:52 dastar_ jflf: thanks
14:56 raging-dwarf raging-dwarf: does anyone know if i can make a client talk to only one of the replicated-gluster-servers?
14:56 bene2 joined #gluster
14:56 raging-dwarf *whoops didn't mean to talk to myself*
14:58 shubhendu joined #gluster
15:03 DV__ joined #gluster
15:11 topshare joined #gluster
15:11 o5k_ joined #gluster
15:12 _polto_ joined #gluster
15:13 lmickh joined #gluster
15:21 kanagaraj joined #gluster
15:22 jmarley joined #gluster
15:25 martin__ does a gluster replicated volume use DHT the same way when mounted with the native client (glusterfs) as with NFS?
15:26 deniszh1 joined #gluster
15:30 elico joined #gluster
15:31 theron joined #gluster
15:32 smohan_ joined #gluster
15:49 anil joined #gluster
15:50 soumya joined #gluster
15:52 atinmu joined #gluster
15:54 jmarley joined #gluster
16:01 jiffin joined #gluster
16:11 jobewan joined #gluster
16:14 jiffin joined #gluster
16:15 soumya joined #gluster
16:21 ctria joined #gluster
16:22 ildefonso joined #gluster
16:39 Pupeno joined #gluster
16:39 Pupeno joined #gluster
16:41 bala joined #gluster
16:41 harish_ joined #gluster
16:45 wkf joined #gluster
17:06 Rapture joined #gluster
17:08 maveric_amitc_ joined #gluster
17:09 _polto_ I have broken directories on my glusterfs that I cannot remove. It says that the directory is not empty, but it is empty, and in the log I see
17:09 _polto_ E [stripe-helpers.c:358:stripe_ctx_handle] 0-data-stripe-2: Failed to get stripe-size
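Since the client reports "not empty" for a directory that looks empty, one hedged first step is to inspect the directory on each brick directly; leftover linkfiles or partial stripe members on a brick will block the client-side rmdir. Paths below are placeholders:

    # On each brick, list the directory the client cannot remove:
    $ ls -la /bricks/b1/path/to/brokendir

    # Inspect the xattrs of anything left behind (stripe/DHT metadata):
    $ getfattr -d -m . -e hex /bricks/b1/path/to/brokendir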
17:13 glusterbot News from newglusterbugs: [Bug 1202463] nfs : dynamic netgroup/export authentication fails <https://bugzilla.redhat.com/show_bug.cgi?id=1202463>
17:17 smohan joined #gluster
17:21 shubhendu joined #gluster
17:24 plarsen joined #gluster
17:24 o5k_ joined #gluster
17:31 shaunm joined #gluster
17:38 kanagaraj joined #gluster
17:40 hchiramm_ joined #gluster
17:42 hybrid5122 joined #gluster
18:02 _polto_ E [stripe-helpers.c:358:stripe_ctx_handle] 0-data-stripe-2: Failed to get stripe-size
18:02 _polto_ does not look good. :(
18:02 karnan_ joined #gluster
18:07 ashiq joined #gluster
18:09 _polto_ I'm starting to agree with this blog : https://nileshgr.com/2014/07/18/failed-experiment-glusterfs
18:09 lalatenduM joined #gluster
18:09 _polto_ :(
18:10 _polto_ I am executing heavy file processing on the same machines as the gluster nodes. Can this cause split-brains? And how do I avoid them?
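The question goes unanswered in this log; split-brain normally requires the replicas to diverge while disconnected, though heavy load can contribute by causing client-brick disconnects. A minimal way to check whether any entries are actually in split-brain (volume name is a placeholder):

    # Entries pending heal, and entries in split-brain specifically:
    $ gluster volume heal myvol info
    $ gluster volume heal myvol info split-brain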
18:25 harish_ joined #gluster
18:30 jermudgeon joined #gluster
18:31 _polto_ joined #gluster
18:41 vipulnayyar joined #gluster
18:43 glusterbot News from newglusterbugs: [Bug 1202492] Rewrite glfs_new function for better error out scenarios. <https://bugzilla.redhat.com/show_bug.cgi?id=1202492>
18:45 jermudgeon joined #gluster
18:51 Philambdo1 joined #gluster
18:54 SOLDIERz_ joined #gluster
19:04 free_amitc_ joined #gluster
19:39 dbruhn joined #gluster
19:39 lifeofguenter joined #gluster
19:45 SOLDIERz_ joined #gluster
19:47 deniszh joined #gluster
19:48 jermudgeon joined #gluster
19:50 _polto_ joined #gluster
19:56 sputnik13 joined #gluster
20:20 coredump joined #gluster
20:21 _polto_ joined #gluster
20:28 DV joined #gluster
20:40 deniszh joined #gluster
20:42 DV joined #gluster
20:43 deniszh joined #gluster
20:58 deniszh joined #gluster
21:09 theron joined #gluster
21:10 Rapture joined #gluster
21:20 bala joined #gluster
21:22 SOLDIERz_ joined #gluster
21:51 rotbeard joined #gluster
21:54 tetreis joined #gluster
22:06 jermudgeon joined #gluster
22:09 _polto_ joined #gluster
22:09 merlink joined #gluster
22:16 plarsen joined #gluster
22:24 gildub joined #gluster
22:31 bennyturns joined #gluster
22:33 _polto_ joined #gluster
22:57 theron joined #gluster
23:03 elico joined #gluster
23:18 plarsen joined #gluster
23:18 plarsen joined #gluster
23:22 lnr joined #gluster
23:24 lnr left #gluster
23:38 p0licy joined #gluster
23:39 p0licy has anyone had issuer peering a probe across an IPSec VPN connection?
23:39 p0licy s/issuer/issues/
23:39 glusterbot What p0licy meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
23:41 JoeJulian p0licy: I have done that with no issue.
23:41 JoeJulian Just make sure to keep your MTU below the fragmentation threshold.
23:44 p0licy joejulian: I'm seeing errors in the logs from the IP address of the openswan server
23:44 p0licy JoeJulian: Rejecting management handshake request from unknown peer <vpnserver Ip>:<port>
23:46 CyrilPeponnet yo guys
23:47 CyrilPeponnet setting up a brand-new geo-rep between two gluster pools
23:47 JoeJulian p0licy: you have to probe the *new* server from the *existing* pool. Random servers cannot join your trusted pool.
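A sketch of both of JoeJulian's points (hostnames are placeholders): the probe has to be issued from a server already in the trusted pool, and over IPSec it is worth confirming the path MTU so 24007 traffic is not fragmented:

    # From an existing pool member, not from the new server:
    $ gluster peer probe newserver.example.com
    $ gluster peer status

    # 1372-byte payload + 28-byte headers = 1400-byte packet; adjust
    # for your tunnel's overhead. -M do forbids fragmentation.
    $ ping -M do -s 1372 newserver.example.com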
23:47 CyrilPeponnet I'm still in hybrid-crawl (10TB of data)...
23:47 CyrilPeponnet but I have some errors in the slave log file
23:47 CyrilPeponnet [2015-03-16 23:24:59.448923] E [client-rpc-fops.c:5341:client3_3_inodelk] (-->/usr/lib64/glusterfs/3.5.2/xlator/cluster/replicate.so(+0x49c0f) [0x7f5c9c123c0f] (-->/usr/lib64/glusterfs/3.5.2/xlator/cluster/replicate.so(afr_lock_blocking+0x993) [0x7f5c9c123a23] (-->/usr/lib64/glusterfs/3.5.2/xlator/protocol/client.so(client_inodelk+0x9e) [0x7f5c9c362bbe]))) 0-: Assertion failed: 0
23:48 glusterbot CyrilPeponnet: ('s karma is now -63
23:48 glusterbot CyrilPeponnet: ('s karma is now -64
23:48 glusterbot CyrilPeponnet: ('s karma is now -65
23:48 CyrilPeponnet :p
23:48 * JoeJulian whacks glusterbot
23:48 CyrilPeponnet Not sure if these are truly errors...
23:48 CyrilPeponnet the remote vol was empty before starting the georeplication process
23:50 CyrilPeponnet it should take around 30 days for the geo-replication first crawl to finish, but I'd like to be sure the volume will be consistent at the end despite these errors
23:50 JoeJulian CyrilPeponnet: it's an error all right. When you see "assertion failed" errors it's because that shouldn't have been possible and would have been a segfault otherwise.
23:50 JoeJulian Please file a bug report
23:50 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
23:50 CyrilPeponnet Doh.
23:51 CyrilPeponnet will this lead to geo-rep failure ?
23:51 sputnik13 joined #gluster
23:51 ildefonso ++
23:51 ildefonso --
23:52 JoeJulian I don't think so. Looks more like a replication bug on the remote volume.
23:52 CyrilPeponnet because I have like 150k line in my logs since 1h
23:52 JoeJulian yeesh.
23:52 JoeJulian Can you stop and start that remote volume?
23:52 CyrilPeponnet I mean 150K of " E "
23:53 CyrilPeponnet sure, it's not used for now
23:53 JoeJulian Also can you run 3.5.3
23:53 CyrilPeponnet erf
23:53 CyrilPeponnet I'm kind of stuck with 3.5.2 for now
23:53 JoeJulian No problem. Some people like eating bugs.
23:53 CyrilPeponnet :)
23:54 CyrilPeponnet If it was easy I could give a try...
23:54 CyrilPeponnet can I start a georep between a 3.5.2 and a 3.5.3 ?
23:54 JoeJulian yes
23:54 CyrilPeponnet of the op-version MUST be concistent
23:54 CyrilPeponnet Hmm.
23:54 JoeJulian what?
23:54 JoeJulian where did you see that?
23:55 CyrilPeponnet I got issue in the past with different gluster version and op-version
23:55 JoeJulian hmm
23:55 CyrilPeponnet like deadlock issues...
23:55 JoeJulian yeah, I'll double check the source. I can't think of any reason why it would matter.
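For reference, each peer records the cluster's operating version in glusterd's info file, which is a quick way to compare the two pools (the path below is the default on CentOS; a 3.5.x cluster shows 30501 or similar):

    $ grep operating-version /var/lib/glusterd/glusterd.info
    # operating-version=30501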
23:58 CyrilPeponnet is 3.5.3 worth the migration ?
23:58 CyrilPeponnet because I already have a production node running 3.5.2
23:58 CyrilPeponnet and this node should join the 2 others when the geo-rep will be done
