
IRC log for #gluster, 2015-05-25

| Channels | #gluster index | Today | | Search | Google Search | Plain-Text | summary

All times shown according to UTC.

Time Nick Message
00:04 prilly_ joined #gluster
01:24 edong23 joined #gluster
01:53 glusterbot News from resolvedglusterbugs: [Bug 806841] object-storage: GET for large data set fails <https://bugzilla.redhat.com/show_bug.cgi?id=806841>
02:07 kevein joined #gluster
02:10 nangthang joined #gluster
02:52 msvbhat joined #gluster
02:54 harish joined #gluster
02:55 kdhananjay joined #gluster
03:03 wushudoin joined #gluster
03:11 lexi2 joined #gluster
03:23 TheSeven joined #gluster
03:23 bharata-rao joined #gluster
03:31 hagarth joined #gluster
03:34 kanagaraj joined #gluster
03:35 RameshN joined #gluster
03:43 sakshi joined #gluster
04:09 yazhini joined #gluster
04:11 coredump joined #gluster
04:14 shubhendu joined #gluster
04:16 ndarshan joined #gluster
04:36 meghanam joined #gluster
04:45 rafi joined #gluster
05:03 karnan joined #gluster
05:05 Bhaskarakiran joined #gluster
05:08 pppp joined #gluster
05:09 deepakcs joined #gluster
05:13 hgowtham joined #gluster
05:14 ashiq joined #gluster
05:15 Manikandan joined #gluster
05:16 atalur joined #gluster
05:19 rjoseph joined #gluster
05:19 aravindavk joined #gluster
05:21 anil joined #gluster
05:22 schandra joined #gluster
05:31 kotreshhr joined #gluster
05:40 overclk joined #gluster
05:41 scubacuda joined #gluster
05:41 Manikandan joined #gluster
05:50 an joined #gluster
05:57 spandit joined #gluster
05:58 jiffin joined #gluster
06:00 kdhananjay joined #gluster
06:03 Bhaskarakiran joined #gluster
06:06 itisravi joined #gluster
06:09 kshlm joined #gluster
06:10 Anjana joined #gluster
06:10 raghu joined #gluster
06:13 an joined #gluster
06:14 maveric_amitc_ joined #gluster
06:16 Saravana joined #gluster
06:23 poornimag joined #gluster
06:31 nangthang joined #gluster
06:47 kotreshhr1 joined #gluster
06:49 bharata_ joined #gluster
06:57 rafi1 joined #gluster
06:58 sripathi joined #gluster
07:00 gildub joined #gluster
07:06 rotbeard joined #gluster
07:06 LebedevRI joined #gluster
07:07 nsoffer joined #gluster
07:15 Manikandan ashiq, thanks
07:15 Manikandan ashiq++
07:15 glusterbot Manikandan: ashiq's karma is now 2
07:19 ashiq thanks
07:30 spalai joined #gluster
07:33 fsimonce joined #gluster
07:36 nsoffer joined #gluster
07:44 glusterbot News from newglusterbugs: [Bug 1224624] cli: Excessive logging <https://bugzilla.redhat.com/show_bug.cgi?id=1224624>
08:05 al joined #gluster
08:10 kotreshhr joined #gluster
08:13 akay1 joined #gluster
08:28 coredump joined #gluster
08:44 shubhendu joined #gluster
08:46 rafi joined #gluster
08:47 ndarshan joined #gluster
08:49 Anjana joined #gluster
08:57 mbukatov joined #gluster
09:13 coredump joined #gluster
09:39 owlbot joined #gluster
09:43 shubhendu joined #gluster
09:44 ndarshan joined #gluster
09:51 poornimag joined #gluster
10:02 an_ joined #gluster
10:09 ashiq joined #gluster
10:17 schandra joined #gluster
10:39 s19n joined #gluster
10:45 poornimag joined #gluster
10:47 an joined #gluster
10:47 hgowtham joined #gluster
10:59 nishanth joined #gluster
10:59 Manikandan joined #gluster
11:13 haomaiwa_ joined #gluster
11:20 atalur joined #gluster
11:21 nsoffer joined #gluster
11:27 atalur joined #gluster
11:28 schandra joined #gluster
11:42 anrao joined #gluster
11:49 firemanxbr joined #gluster
11:49 jcastill1 joined #gluster
11:54 jcastillo joined #gluster
11:55 glusterbot News from resolvedglusterbugs: [Bug 1075611] [FEAT] log: enhance gluster log format with message ID and standardize errno reporting <https://bugzilla.redhat.com/show_bug.cgi?id=1075611>
12:00 itisravi joined #gluster
12:00 rafi1 joined #gluster
12:02 kotreshhr left #gluster
12:15 glusterbot News from newglusterbugs: [Bug 1220347] Read operation on a file which is in split-brain condition is successful <https://bugzilla.redhat.com/show_bug.cgi?id=1220347>
12:15 glusterbot News from newglusterbugs: [Bug 1224709] Read operation on a file which is in split-brain condition is successful <https://bugzilla.redhat.com/show_bug.cgi?id=1224709>
12:23 sripathi joined #gluster
12:28 spalai left #gluster
12:58 rafi joined #gluster
13:02 wushudoin joined #gluster
13:03 wushudoin joined #gluster
13:04 wushudoin left #gluster
13:13 meghanam joined #gluster
13:29 nangthang joined #gluster
13:31 georgeh-LT2 joined #gluster
13:32 tugrik joined #gluster
13:45 elico joined #gluster
13:47 haomai___ joined #gluster
13:47 cyberbootje joined #gluster
13:49 haomaiw__ joined #gluster
13:51 kshlm joined #gluster
14:23 Prilly joined #gluster
14:34 soumya joined #gluster
14:34 DV joined #gluster
14:43 DV_ joined #gluster
15:07 shubhendu joined #gluster
15:11 atinmu joined #gluster
15:15 kdhananjay joined #gluster
15:59 poornimag joined #gluster
16:05 coredump joined #gluster
16:08 coredump joined #gluster
16:10 RameshN joined #gluster
16:15 d-fence joined #gluster
16:26 wowaslansin joined #gluster
16:26 spieke Hello!
16:26 glusterbot spieke: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:27 spieke i have a three node gluster with replica 3. After rebooting one node the command "gluster volume heal RaidVolB info" takes forever... :-/
16:28 spieke in my logs i get: E [glusterd-op-sm.c:225:glusterd_get_txn_opinfo] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7fce2ce43f16] (--> /usr/lib64/glusterfs/3.7.0/xlator/mgmt/glusterd.so(glusterd_get_txn_opinfo+0x1b7)[0x7fce21cbe9a7] (--> /usr/lib64/glusterfs/3.7.0/xlator/mgmt/glusterd.so(__glusterd_handle_stage_op+0x1a2)[0x7fce21cb06f2] (--> /usr/lib64/glusterfs/3.7.0/xlator/mgmt/glusterd.so(glusterd_big_locked_handler+0x30)[0x7fce21caecf0] (--> /lib64/libglusterfs.so.0(synctask_wrap+0x12)[0x7fce2ce7f4a2] ))))) 0-management: Unable to get transaction opinfo for transaction ID : cd739f66-5109-4278-9dc7-b1d1c9dfd35
16:28 glusterbot spieke: ('s karma is now -76
16:28 glusterbot spieke: ('s karma is now -77
16:28 glusterbot spieke: ('s karma is now -78
16:28 glusterbot spieke: ('s karma is now -79
16:28 glusterbot spieke: ('s karma is now -80
16:35 spieke kshlm, dude, are you there? :)
16:39 s19n left #gluster
16:39 kshlm Yeah,
16:41 kshlm spieke, Can you give some more details? version, what you were doing etc.
16:41 elico joined #gluster
16:41 spieke kshlm, i was able to solve it by restarting glusterd. i saw you helped here, too: http://irclog.perlgeek.de/gluster/2014-11-17
16:43 spieke http://fpaste.org/225360/57218414/ => versions
16:44 jalmada joined #gluster
16:45 kshlm Awesome that I could help you!
16:45 spieke kshlm, hehe, well yes. But why does glusterd need a restart here?
16:45 spieke should it not handle the problem itself?
16:45 kshlm In the case you linked, it was due to a bug which prevented the lock from being freed.
16:46 kshlm It was probably something different in your case.
16:47 kshlm The golden rule 'turning it off and on again' works to solve many glusterd issues.
16:47 RameshN joined #gluster
16:48 spieke ok :)
16:49 ndevos you The IT Crowd? https://www.youtube.com/watch?v=nn2FB1P_Mn8
16:49 ndevos s/you /you know/
16:49 glusterbot What ndevos meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
16:50 kshlm Wow. glusterbot can read minds!
16:50 kshlm ndevos, I do.
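[Editor's note: the diagnosis and recovery spieke describes above can be sketched roughly as follows. The volume name RaidVolB is from the conversation; the log path and the systemd service name are assumptions for a typical GlusterFS 3.7 install, so adjust to your distribution.]

```shell
# Check self-heal status; on the affected node this hung indefinitely.
gluster volume heal RaidVolB info

# Look for the "Unable to get transaction opinfo" errors in the glusterd
# log (default location on most packages; may differ per distribution).
grep glusterd_get_txn_opinfo /var/log/glusterfs/etc-glusterfs-glusterd.vol.log

# Restarting the management daemon cleared the stuck transaction here.
# This restarts only glusterd, not the glusterfsd brick processes,
# so client I/O keeps flowing while it bounces.
systemctl restart glusterd

# Retry the heal query once glusterd is back up.
gluster volume heal RaidVolB info
```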
16:51 TheSeven joined #gluster
17:01 hagarth kshlm: is this the same frame->cookie problem that Atin has addressed?
17:06 kshlm hagarth, not sure. I didn't go into much details as the problem was solved by a glusterd restart.
17:07 hagarth kshlm: seems to point to that, more reasons to do 3.7.1 soon.
17:07 kshlm Also, AFAIK the cookie problem is hard to analyze without gdb-ing.
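[Editor's note: for the kind of live inspection kshlm mentions, one rough way to capture state from a running glusterd is sketched below; it assumes the glusterfs debuginfo packages are installed so the backtraces have symbols.]

```shell
# Attach gdb non-interactively to the running glusterd and dump the
# backtrace of every thread to a file for later analysis.
gdb -p "$(pidof glusterd)" -batch \
    -ex "thread apply all bt full" > glusterd-backtrace.txt

# Alternatively, SIGUSR1 makes gluster daemons write a statedump
# (by default under /var/run/gluster) without attaching a debugger.
kill -USR1 "$(pidof glusterd)"
```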
17:08 kshlm hagarth, there are a lot of patches waiting to get merged solving a lot of issues. All we need is your go-ahead to start merging.
17:08 kshlm When are you hoping to give the signal?
17:13 hagarth kshlm: as soon as we fix all outstanding issues for tests :)
17:13 hagarth there is no point merging further patches until we fix these problems .. we are going to be slowed down further if we don't fix tests.
17:35 plarsen joined #gluster
17:42 aaronott joined #gluster
17:42 nishanth joined #gluster
17:45 tugrik left #gluster
17:55 verdurin joined #gluster
18:00 elico joined #gluster
18:16 barnim joined #gluster
18:29 ws2k3 joined #gluster
18:34 aaronott joined #gluster
18:46 hchiramm joined #gluster
18:59 aaronott joined #gluster
19:11 plarsen joined #gluster
19:22 anrao joined #gluster
19:31 minfig404 joined #gluster
19:35 siel joined #gluster
19:54 plarsen joined #gluster
20:03 uebera|| joined #gluster
20:03 uebera|| joined #gluster
20:11 autoditac joined #gluster
21:00 minfig404 Looks like I've backed myself into a bit of a corner. Fedora 21, Gluster 3.7. Trying to set up a disperse _ redundancy 2 volume between 2 servers, each with 3 bricks. Tried to create that volume, but the firewall blocked it.
21:01 minfig404 Fixed the firewall (not sure if that will persist), but then there were remnants of the volume create floating around. Restarted the affected glusterd, and now I've got bricks that are part of a volume that doesn't exist. How can I clear up the bricks?
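[Editor's note: the usual answer to minfig404's question is to strip the Gluster metadata that makes glusterd believe a brick directory still belongs to a volume. A rough sketch, where /bricks/brick1, host1/host2, and the volume name RaidVolC are all stand-ins, not names from the log:]

```shell
# On each server, for every leftover brick directory:
BRICK=/bricks/brick1   # stand-in path; substitute the real brick

# Remove the volume-id and gfid markers so glusterd no longer thinks
# the directory (or a prefix of it) is already part of a volume.
setfattr -x trusted.glusterfs.volume-id "$BRICK"
setfattr -x trusted.gfid "$BRICK"

# Remove the internal metadata directory left behind by the old volume.
rm -rf "$BRICK/.glusterfs"

# The bricks can then be reused, e.g. for a 6-brick dispersed volume
# (4 data bricks + 2 redundancy):
gluster volume create RaidVolC disperse 6 redundancy 2 \
    host1:/bricks/brick1 host1:/bricks/brick2 host1:/bricks/brick3 \
    host2:/bricks/brick1 host2:/bricks/brick2 host2:/bricks/brick3
```

Note that with only two servers, losing one host takes out three bricks of the disperse set, which exceeds redundancy 2; the CLI warns about multiple bricks of one set being on the same server for exactly this reason.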
21:57 glusterbot News from resolvedglusterbugs: [Bug 1217576] [HC] Gluster volume locks the whole cluster <https://bugzilla.redhat.com/show_bug.cgi?id=1217576>
21:57 glusterbot News from resolvedglusterbugs: [Bug 1220623] Seg. Fault during yum update <https://bugzilla.redhat.com/show_bug.cgi?id=1220623>
22:08 plarsen joined #gluster
22:19 badone_ joined #gluster
22:21 coredump joined #gluster
22:36 minfig404 Anyone here now?
23:06 lexi2 joined #gluster
23:30 gildub joined #gluster
23:36 badone joined #gluster
23:46 pdrakeweb joined #gluster
