
IRC log for #gluster, 2015-02-27


All times shown according to UTC.

Time Nick Message
00:00 kkeithley_ there's a changelog for each release in .../doc/release-notes
00:01 snewpy kkeithley_: ok, thanks
00:01 kkeithley_ it was fixed in the master branch on Mar 17 2013 in commit e0616e9314c8323dc59fca7cad6972f08d72b936
00:02 gh5046 kkeithley: Thank you.
00:09 gildub joined #gluster
00:17 plarsen joined #gluster
00:19 Pupeno joined #gluster
00:26 Pupeno joined #gluster
00:28 ninkotech joined #gluster
00:29 ackjewt joined #gluster
00:31 masterzen joined #gluster
00:32 sadbox joined #gluster
00:36 glusterbot News from newglusterbugs: [Bug 1196904] RDMA mount fails for unprivileged user without cap_net_bind_service <https://bugzilla.redhat.com/show_bug.cgi?id=1196904>
00:59 skyice joined #gluster
01:07 MugginsM joined #gluster
01:09 bala joined #gluster
01:12 wkf joined #gluster
01:14 Pupeno joined #gluster
01:28 skyice left #gluster
01:29 ackjewt joined #gluster
01:36 Pupeno joined #gluster
01:36 Pupeno joined #gluster
01:38 rafi joined #gluster
01:53 T3 joined #gluster
01:56 nangthang joined #gluster
01:56 gh5046 left #gluster
02:00 sprachgenerator joined #gluster
02:03 MugginsM joined #gluster
02:07 haomaiwang joined #gluster
02:07 haomaiwa_ joined #gluster
02:13 zwevans joined #gluster
02:34 geerlingguy1 joined #gluster
02:35 geerlingguy1 Are there any logo usage guidelines for Gluster? e.g. If I wanted to include the logo in a blog post about Gluster configuration, is that allowed? (Also, is the ant on http://www.gluster.org/ the official mascot/logo at this point?)
02:38 hagarth joined #gluster
02:49 ilbot3 joined #gluster
02:49 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:49 DV joined #gluster
02:50 bharata-rao joined #gluster
03:02 B21956 left #gluster
03:05 bharata-rao joined #gluster
03:10 Folken_ joined #gluster
03:12 victori joined #gluster
03:36 hagarth joined #gluster
03:40 shubhendu joined #gluster
03:46 spandit joined #gluster
03:47 RameshN joined #gluster
03:48 kanagaraj joined #gluster
03:51 nbalacha joined #gluster
03:54 dgandhi joined #gluster
04:16 ppai joined #gluster
04:25 ndarshan joined #gluster
04:37 anoopcs joined #gluster
04:37 jiffin joined #gluster
04:43 gem joined #gluster
04:46 nbalacha joined #gluster
04:48 deepakcs joined #gluster
04:48 karnan joined #gluster
04:51 T3 joined #gluster
04:51 soumya joined #gluster
05:01 anrao joined #gluster
05:07 glusterbot News from newglusterbugs: [Bug 1196949] Posix : Invalid chek in posix_get_ancestry() <https://bugzilla.redhat.com/show_bug.cgi?id=1196949>
05:08 rafi joined #gluster
05:09 kumar joined #gluster
05:11 schandra joined #gluster
05:12 rjoseph joined #gluster
05:17 harish_ joined #gluster
05:24 atalur joined #gluster
05:25 meghanam joined #gluster
05:26 Apeksha joined #gluster
05:31 prasanth_ joined #gluster
05:34 gildub joined #gluster
05:40 itpings hi guys
05:40 itpings same question again
05:46 hagarth joined #gluster
05:46 lalatenduM joined #gluster
05:47 itpings how to check if my howto was proof read and posted
05:47 itpings i just want to know if i was successful in helping the community
05:49 raghu joined #gluster
05:51 cornus_ammonis joined #gluster
05:53 vimal joined #gluster
05:54 anil joined #gluster
05:56 itpings thanks guys
05:56 itpings i replied to the mails
06:06 maveric_amitc_ joined #gluster
06:10 kdhananjay joined #gluster
06:13 overclk joined #gluster
06:20 rjoseph joined #gluster
06:21 soumya joined #gluster
06:24 shubhendu joined #gluster
06:26 victori joined #gluster
06:27 necrogami joined #gluster
06:29 nbalacha joined #gluster
06:31 kovshenin joined #gluster
06:32 ppai joined #gluster
06:34 necrogami joined #gluster
06:42 vipulnayyar joined #gluster
06:43 atalur_ joined #gluster
06:45 bala joined #gluster
06:50 doekia joined #gluster
06:54 smohan joined #gluster
07:03 soumya joined #gluster
07:06 bala joined #gluster
07:07 kovshenin joined #gluster
07:12 Bhaskarakiran joined #gluster
07:13 anil_ joined #gluster
07:16 Bhaskarakiran I am getting dependency errors while installing master nightly builds i.e. "Error: Package: glusterfs-server-3.7dev-0.611.git729428a.el6.x86_64 (/glusterfs-server-3.7dev-0.611.git729428a.el6.x86_64)           Requires: liburcu-cds.so.1()(64bit)"
07:16 Bhaskarakiran The details are at http://www.fpaste.org/191231/25021329/
07:17 Bhaskarakiran Does anyone have any idea about it? lalatenduM ^^
07:20 lalatenduM Bhaskarakiran, looking
07:20 lalatenduM ndevos, ^^
07:23 jtux joined #gluster
07:23 lalatenduM Bhaskarakiran, can you please install/enable epel repo and try it again
07:23 Bhaskarakiran_ joined #gluster
07:24 lalatenduM Bhaskarakiran_,  can you please install/enable epel repo and try it again
07:24 Bhaskarakiran_ lalatenduM++
07:24 glusterbot Bhaskarakiran_: lalatenduM's karma is now 5
07:24 lalatenduM Bhaskarakiran_, for EPEL repo install http://ftp.riken.jp/Linux/fedora/epel/6/i386/epel-release-6-8.noarch.rpm
07:24 rjoseph joined #gluster
07:26 LebedevRI joined #gluster
07:31 kovshenin joined #gluster
07:35 awerner joined #gluster
07:36 Bhaskarakiran_ thanks lalatenduM, it got fixed..
07:47 deepakcs joined #gluster
07:52 T3 joined #gluster
07:57 jtux joined #gluster
08:00 [Enrico] joined #gluster
08:03 bala joined #gluster
08:03 ppai joined #gluster
08:16 Manikandan joined #gluster
08:27 kevein joined #gluster
08:37 glusterbot News from newglusterbugs: [Bug 1117655] 0-mem-pool: invalid argument with fio --thread <https://bugzilla.redhat.com/show_bug.cgi?id=1117655>
08:55 Pupeno joined #gluster
08:55 Pupeno joined #gluster
08:58 Slashman joined #gluster
09:00 DV joined #gluster
09:05 T0aD joined #gluster
09:05 ctria joined #gluster
09:08 glusterbot News from newglusterbugs: [Bug 847821] After disabling NFS the message "0-transport: disconnecting now" keeps appearing in the logs <https://bugzilla.redhat.com/show_bug.cgi?id=847821>
09:08 badone_ joined #gluster
09:20 Norky joined #gluster
09:27 liquidat joined #gluster
09:30 jiffin1 joined #gluster
09:35 jiffin joined #gluster
09:37 meghanam joined #gluster
09:38 glusterbot News from resolvedglusterbugs: [Bug 762184] Support mandatory locking in glusterfs <https://bugzilla.redhat.com/show_bug.cgi?id=762184>
09:45 social joined #gluster
09:46 rjoseph joined #gluster
09:48 shubhendu joined #gluster
09:53 hagarth joined #gluster
09:58 maveric_amitc_ joined #gluster
10:00 o5k joined #gluster
10:08 glusterbot News from newglusterbugs: [Bug 1065639] Crash in nfs with encryption enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1065639>
10:08 glusterbot News from newglusterbugs: [Bug 916406] NLM failure against Solaris NFS client <https://bugzilla.redhat.com/show_bug.cgi?id=916406>
10:08 glusterbot News from newglusterbugs: [Bug 1105883] Enabling DRC for nfs causes memory leaks and crashes <https://bugzilla.redhat.com/show_bug.cgi?id=1105883>
10:08 glusterbot News from resolvedglusterbugs: [Bug 847619] [FEAT] NFSv3 pre/post attribute cache (performance, caching attributes pre- and post fop) <https://bugzilla.redhat.com/show_bug.cgi?id=847619>
10:08 glusterbot News from resolvedglusterbugs: [Bug 847626] [FEAT] nfsv3  cluster aware rpc.statd for NLM failover <https://bugzilla.redhat.com/show_bug.cgi?id=847626>
10:31 hagarth joined #gluster
10:34 mbukatov joined #gluster
10:38 glusterbot News from newglusterbugs: [Bug 916375] Incomplete NLMv4 spec compliance: asynchronous requests and responses <https://bugzilla.redhat.com/show_bug.cgi?id=916375>
10:38 glusterbot News from newglusterbugs: [Bug 962450] POSIX ACLs fail display / apply / set on NFSv3 mounted Gluster filesystems <https://bugzilla.redhat.com/show_bug.cgi?id=962450>
10:38 glusterbot News from newglusterbugs: [Bug 1010747] cp of large file from local disk to nfs mount fails with  "Unknown error 527" <https://bugzilla.redhat.com/show_bug.cgi?id=1010747>
10:38 ira joined #gluster
10:39 vipulnayyar joined #gluster
10:44 rafi joined #gluster
10:46 SOLDIERz joined #gluster
10:48 kovshenin joined #gluster
10:52 ctria joined #gluster
10:54 meghanam joined #gluster
10:54 soumya joined #gluster
10:56 badone_ joined #gluster
11:08 firemanxbr joined #gluster
11:10 SOLDIERz joined #gluster
11:19 hchiramm joined #gluster
11:27 spandit joined #gluster
11:36 _shaps_ joined #gluster
11:37 _shaps_ joined #gluster
11:43 kumar joined #gluster
11:50 maveric_amitc_ joined #gluster
11:54 diegows joined #gluster
11:55 jiffin1 joined #gluster
12:01 badone_ joined #gluster
12:12 hagarth joined #gluster
12:17 vipulnayyar joined #gluster
12:31 vipulnayyar joined #gluster
12:58 awerner joined #gluster
12:58 spandit joined #gluster
13:04 tdasilva joined #gluster
13:18 rjoseph joined #gluster
13:20 tigert left #gluster
13:20 Philambdo joined #gluster
13:34 B21956 joined #gluster
13:35 tanuck joined #gluster
13:46 SOLDIERz joined #gluster
13:51 ctria joined #gluster
13:58 T3 joined #gluster
13:59 plarsen joined #gluster
13:59 wkf joined #gluster
13:59 rjoseph joined #gluster
14:16 dgandhi joined #gluster
14:17 dgandhi joined #gluster
14:18 harish_ joined #gluster
14:18 dgandhi joined #gluster
14:23 cyberbootje1 joined #gluster
14:28 bennyturns joined #gluster
14:32 ctria joined #gluster
14:36 georgeh-LT2 joined #gluster
14:39 squizzi joined #gluster
14:42 RameshN joined #gluster
14:47 T3 joined #gluster
14:51 T3 joined #gluster
14:51 coredump joined #gluster
14:55 ctria joined #gluster
15:03 plarsen joined #gluster
15:11 Gill_ joined #gluster
15:22 Philambdo joined #gluster
15:29 R0ok_ joined #gluster
15:35 jobewan joined #gluster
15:43 victori joined #gluster
15:44 soumya joined #gluster
15:44 luis_silva joined #gluster
15:52 B21956 left #gluster
15:53 B21956 joined #gluster
16:00 jennawaha joined #gluster
16:04 jennawaha I have a quick question, if I could get pointed in the right direction for troubleshooting (having already checked http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Troubleshooting_Geo-replication). I am having issues with geo-replication, where it starts fine, continues to give its status as OK, but only does an initial sync. What I mean is, the slave volume gets updated when starting the sync, or if I delete the index, but never g
16:12 jennawaha left #gluster
16:20 Gill_ joined #gluster
16:21 Gill_ joined #gluster
16:22 virusuy joined #gluster
16:22 virusuy joined #gluster
16:39 jennawaha joined #gluster
16:41 PeterA joined #gluster
16:44 kkeithley1 joined #gluster
16:47 jennawaha Ah nevermind figured it out, file changes were being made to the underlying brick and not via the glusterfs mount and (obvs) that doesn't work.
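The failure mode jennawaha hit is general to journal-driven replication: only operations that pass through the glusterfs mount get recorded, so a sync driven by that record never sees writes made directly to the backing brick. A toy Python sketch of the idea (illustrative only, not Gluster's geo-replication code; all names here are invented):

```python
# Toy model: writes through the "mount" are journaled; writes straight
# to the backing "brick" bypass the journal and are invisible to sync.

class Brick:
    def __init__(self):
        self.files = {}

class Mount:
    """Front-end that records every write in a journal."""
    def __init__(self, brick):
        self.brick = brick
        self.journal = []

    def write(self, name, data):
        self.brick.files[name] = data
        self.journal.append((name, data))

def geo_sync(journal, slave):
    """Replay the journal onto the slave; direct brick writes never appear."""
    for name, data in journal:
        slave[name] = data

master = Brick()
mount = Mount(master)
slave = {}

mount.write("a.txt", "via mount")        # journaled -> will replicate
master.files["b.txt"] = "direct write"   # bypasses the journal

geo_sync(mount.journal, slave)
```

After the sync, `a.txt` exists on the slave but `b.txt` does not, even though both are on the master brick, which matches the "only an initial sync" symptom described above.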
16:52 nitro3v joined #gluster
17:00 Gill joined #gluster
17:01 Gill joined #gluster
17:05 squizzi joined #gluster
17:09 glusterbot News from resolvedglusterbugs: [Bug 1196898] nfs: crash with nfs process <https://bugzilla.redhat.com/show_bug.cgi?id=1196898>
17:17 sputnik13 joined #gluster
17:21 jennawaha left #gluster
17:22 geerlingguy1 joined #gluster
17:25 PeterA We keep having brick crash everyday
17:25 PeterA http://pastie.org/9987794
17:25 PeterA wondering why it got the Error " 0-management: Failed to remove /var/run/f422793f928763c541562cd141488c0c.socket error: No such file or directory"
17:25 PeterA "E [glusterd-utils.c:4124:glusterd_nodesvc_unlink_socket_file] 0-management: Failed to remove /var/run/f422793f928763c541562cd141488c0c.socket error: No such file or directory"
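The "Failed to remove ... No such file or directory" line above is an unlink() that found the socket file already gone, which by itself is a benign race rather than the cause of a crash. A minimal Python sketch of the idempotent-cleanup pattern involved (the file name below is a stand-in, not the actual glusterd path):

```python
import errno
import os
import tempfile

def unlink_socket_file(path):
    """Remove a stale socket file; treat 'already gone' as success."""
    try:
        os.unlink(path)
        return True           # file existed and was removed
    except OSError as e:
        if e.errno == errno.ENOENT:
            return False      # already gone -- worth a log line at most
        raise                 # permission errors etc. are real failures

# Demo with a throwaway file standing in for the .socket path
path = os.path.join(tempfile.mkdtemp(), "example.socket")  # hypothetical name
open(path, "w").close()
first = unlink_socket_file(path)    # file exists -> removed
second = unlink_socket_file(path)   # already gone -> ENOENT tolerated
```

The second call hitting ENOENT is exactly the situation the log message reports; the actual crash cause would have to come from elsewhere in the backtrace.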
17:27 geerlingguy1 left #gluster
17:29 PeterA file a bug
17:29 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
17:32 jobewan joined #gluster
17:39 sputnik13 joined #gluster
17:39 glusterbot News from newglusterbugs: [Bug 1197185] Brick/glusterfsd crash randomly once a day on a replicated volume <https://bugzilla.redhat.com/show_bug.cgi?id=1197185>
17:50 nitro3v joined #gluster
18:02 T3 joined #gluster
18:12 nitro3v joined #gluster
18:15 lalatenduM joined #gluster
18:26 Rapture joined #gluster
18:29 tetreis joined #gluster
18:29 victori joined #gluster
18:33 chirino joined #gluster
18:43 Gill joined #gluster
18:44 Gill joined #gluster
18:45 Gill joined #gluster
19:07 kminooie joined #gluster
19:12 B21956 left #gluster
19:20 Pupeno_ joined #gluster
19:20 dbruhn joined #gluster
19:30 tetreis joined #gluster
19:32 huleboer joined #gluster
19:35 Pupeno joined #gluster
19:40 T3 joined #gluster
19:43 kminooie JoeJulian: hey, can I bug a bit more about the splitmount
19:44 kminooie ... bug you a bit ...
19:51 huleboer joined #gluster
19:51 kminooie http://ur1.ca/jtd13   in line 24 there is this 'ProgVers: 2,'  I think a few days ago while I was going through the log files I saw something similar in my cluster ( 3.6.2 ) log files but it was saying that it was 1 ( prog_ver = 1) but I can't find it again. I am not sure in which log file I saw that in the first place. can that be the issue here? cause I upgraded this cluster from 3.2
19:54 jbrooks joined #gluster
19:57 lalatenduM joined #gluster
20:05 JoeJulian Shouldn't be a problem. That just tells the rpc which version of itself to reply with so that the structure of the reply matches.
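The negotiation JoeJulian describes — the client names a program version, the server shapes its reply to match — can be sketched as a tiny dispatcher. This is illustrative only, not Gluster's actual RPC code; the reply fields are invented:

```python
# Illustrative ProgVers-style negotiation: one reply builder per supported
# program version, so the reply structure always matches what the client
# asked for and both sides agree on the wire format.

def reply_v1(status):
    # hypothetical v1 reply: a bare status field
    return {"status": status}

def reply_v2(status):
    # hypothetical v2 reply: same data plus extra fields v2 clients expect
    return {"prog_ver": 2, "status": status, "flags": 0}

REPLY_BUILDERS = {1: reply_v1, 2: reply_v2}

def handle_request(prog_ver, status):
    try:
        build = REPLY_BUILDERS[prog_ver]
    except KeyError:
        # a real RPC layer would answer PROG_MISMATCH with its supported range
        raise ValueError("unsupported program version %d" % prog_ver)
    return build(status)

r1 = handle_request(1, "ok")   # v1-shaped reply
r2 = handle_request(2, "ok")   # v2-shaped reply
```

Seeing `ProgVers: 2` in one log and `prog_ver = 1` in another just means different clients negotiated different versions, which is why it is not by itself a problem.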
20:07 JoeJulian kminooie: Can you check /var/log/glusterfs/etc-glusterfs-glusterd.vol.log on srv3.office at 19:44:06 and see why it failed to fetch the volfile for adsimages?
20:28 DV joined #gluster
20:30 nitro3v joined #gluster
20:31 jackdpeterson joined #gluster
21:06 ws2k3 joined #gluster
21:10 nitro3v joined #gluster
21:11 ws2k3 joined #gluster
21:13 ws2k3 joined #gluster
21:14 T3 joined #gluster
21:14 badone_ joined #gluster
21:22 theron joined #gluster
21:28 mat1010 joined #gluster
21:30 andreask joined #gluster
21:30 luis_silva joined #gluster
21:58 kminooie JoeJulian: sorry I had to go away for a bit. so I checked the log file and there was no entry for that time stamp. I ran the splitmount again ( got the same error ) but nothing got added to the log file so I then increased the log level for diagnostics.brick-log-level and diagnostics.client-log-level to DEBUG and run the code again and ( http://ur1.ca/jtdvy ) it worked. does this make sense to you?
21:59 mat1010 joined #gluster
21:59 kminooie btw there is still no entry in etc-glusterfs-glusterd.vol.log ( on srv3 ) for any of this
22:03 kminooie ok so no it is still not working. though r1 and r2 directories have been created, they are empty and nothing is mounted on them :(
22:12 kminooie let restart everything and try again I am getting   Remote I/O error
22:12 kminooie let me ...
22:16 bala joined #gluster
22:16 mat1010 joined #gluster
22:19 mat1010 joined #gluster
22:27 nitro3v joined #gluster
22:28 JoeJulian Maybe run glusterd from the command line as "glusterd -d" so you can watch the debug?
22:28 JoeJulian kminooie: ^
22:28 Rapture joined #gluster
22:37 jbrooks_ joined #gluster
22:37 jbrooks_ joined #gluster
23:08 rypervenche joined #gluster
23:19 kminooie this might not be related directly but I get this a lot ( this is after I ran gluster volume status all detail )  http://ur1.ca/jtef6
23:20 kminooie gluster volume status output ^^^^ http://ur1.ca/jteg8
23:22 social joined #gluster
23:22 JoeJulian Interesting. Are your bricks ext4?
23:24 victori joined #gluster
23:26 kminooie on brick-2 yes. on brick-1 ext3
23:30 kminooie and this the output of splitmount http://ur1.ca/jtej9  nothing new on srv3 command line output or the log file  after I ran the command
23:31 kminooie btw srv3 ( with tune2fs error ) is the one with ext3
23:34 kminooie although I get the same error msg ( tune2fs ) on the one with ext4 as well
23:37 kminooie I guess at this point all I need to do is whether these are 3.6 issues or my issues :)  these are all my staging servers I don't particularity care what happens to them. I am doing these to prepare before upgrading our data centers
23:37 kminooie .. all I need to know is whether ....
23:39 kminooie could it be that upgrading from 3.2 directly to 3.6 was too far of a jump ?  I didn't notice anything specific about this in the change logs
23:41 glusterbot News from newglusterbugs: [Bug 1197260] segfault trying to call ibv_dealloc_pd on a null pointer if ibv_alloc_pd failed <https://bugzilla.redhat.com/show_bug.cgi?id=1197260>
23:47 jackdpeterson Read-only file system (30) rsync error: error in file IO (code 11) at receiver.c(389) [receiver=3.1.0] < -- was testing out client-side quorum and failures
23:47 jackdpeterson how do I fix that?
23:47 jackdpeterson (all nodes are back in rotation)
23:49 plarsen joined #gluster
23:52 luis_silva joined #gluster
23:55 PorcoAranha joined #gluster
