
IRC log for #gluster, 2013-08-02


All times shown in UTC.

Time Nick Message
00:10 ultrabizweb joined #gluster
00:20 ultrabizweb joined #gluster
00:38 vpshastry joined #gluster
00:38 rcoup joined #gluster
00:50 chirino joined #gluster
00:56 delcast joined #gluster
00:56 johnmorr joined #gluster
01:03 kevein joined #gluster
01:12 bala joined #gluster
01:12 yinyin joined #gluster
01:13 RicardoSSP joined #gluster
01:13 RicardoSSP joined #gluster
01:20 semiosis hagarth: ping
01:20 semiosis if you're around, or anyone else is who knows these things, i'm wondering...
01:20 semiosis how is "transport endpoint not connected" handled with libgfapi?
01:21 semiosis i am trying to simulate it by killing the brick process, or using iptables to drop packets, and my program just crashes with no chance to catch the error
01:21 semiosis thoughts?
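
In plain C, "transport endpoint is not connected" reaches a libgfapi caller as ENOTCONN in errno, so it can in principle be caught rather than crash the process; whether the JNI layer gets a chance to see it is the open question above. A minimal sketch, with made-up volume, server, and file names (not semiosis's code):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
        glfs_t *fs = glfs_new("myvol");           /* hypothetical volume name */
        if (!fs)
            return 1;
        glfs_set_volfile_server(fs, "tcp", "server1", 24007);
        if (glfs_init(fs) != 0) {
            fprintf(stderr, "init: %s\n", strerror(errno));
            return 1;
        }
        glfs_fd_t *fd = glfs_open(fs, "/some/file", O_RDONLY);
        if (fd) {
            char buf[4096];
            ssize_t n = glfs_read(fd, buf, sizeof(buf), 0);
            if (n < 0 && errno == ENOTCONN)       /* bricks unreachable */
                fprintf(stderr, "transport endpoint not connected\n");
            glfs_close(fd);
        }
        glfs_fini(fs);
        return 0;
    }
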
01:23 semiosis on the bright side, i figured out how to fetch errno & strerror through jni.
01:23 semiosis (by figured out i mean copied leveldbjni :)
01:23 semiosis ^ /cc chirino
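
The leveldbjni approach semiosis is borrowing boils down to two tiny native methods: one to read errno, one to turn it into a message via strerror(). A sketch, with a hypothetical Java class name (Errno) rather than his actual code:

    #include <errno.h>
    #include <string.h>
    #include <jni.h>

    /* Java side (hypothetical):
     *   class Errno { static native int get(); static native String describe(int e); } */

    JNIEXPORT jint JNICALL
    Java_Errno_get(JNIEnv *env, jclass cls)
    {
        return (jint) errno;  /* read before another libc call clobbers it */
    }

    JNIEXPORT jstring JNICALL
    Java_Errno_describe(JNIEnv *env, jclass cls, jint err)
    {
        return (*env)->NewStringUTF(env, strerror((int) err));
    }
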
01:24 manik joined #gluster
01:30 raghug joined #gluster
01:37 harish_ joined #gluster
01:51 bharata joined #gluster
01:53 chirino semiosis: sweetness!
01:53 johnmorr joined #gluster
01:56 semiosis chirino: you're up late!  didn't expect you to see that until tomorrow
01:58 manik joined #gluster
01:58 manik joined #gluster
02:06 manik joined #gluster
02:24 harish_ joined #gluster
02:27 yinyin joined #gluster
03:13 vpshastry joined #gluster
03:23 shubhendu joined #gluster
03:25 MACscr left #gluster
03:26 rjoseph joined #gluster
03:31 sgowda joined #gluster
03:42 itisravi joined #gluster
03:45 shylesh joined #gluster
03:47 dusmant joined #gluster
03:57 CheRi joined #gluster
04:04 Tangram joined #gluster
04:09 lalatenduM joined #gluster
04:10 lalatenduM joined #gluster
04:19 nagenrai joined #gluster
04:30 vpshastry joined #gluster
04:33 mohankumar joined #gluster
04:46 satheesh joined #gluster
04:55 chirino joined #gluster
05:07 ajha joined #gluster
05:08 kshlm joined #gluster
05:12 raghu joined #gluster
05:19 bala joined #gluster
05:32 shireesh joined #gluster
05:35 bulde joined #gluster
05:35 nagenrai joined #gluster
05:42 ndarshan joined #gluster
05:44 dusmant joined #gluster
05:45 hagarth joined #gluster
05:55 psharma joined #gluster
05:56 aravindavk joined #gluster
05:57 nagenrai joined #gluster
06:01 bulde joined #gluster
06:06 atrius joined #gluster
06:08 skyw joined #gluster
06:09 shubhendu joined #gluster
06:10 vpshastry1 joined #gluster
06:17 karthik joined #gluster
06:19 shubhendu joined #gluster
06:20 sgowda joined #gluster
06:23 bala1 joined #gluster
06:23 shruti joined #gluster
06:28 vimal joined #gluster
06:40 aravindavk joined #gluster
06:41 ricky-ticky joined #gluster
06:41 ndarshan joined #gluster
06:41 bulde joined #gluster
06:43 dusmant joined #gluster
06:48 raghug joined #gluster
06:49 Dga joined #gluster
06:55 ctria joined #gluster
07:07 vpshastry1 joined #gluster
07:09 rastar joined #gluster
07:10 ngoswami joined #gluster
07:27 skyw joined #gluster
07:32 dobber joined #gluster
07:36 ricky-ticky joined #gluster
07:43 vshankar joined #gluster
07:45 raghug joined #gluster
07:46 harish_ joined #gluster
08:09 ninkotech joined #gluster
08:21 shruti joined #gluster
08:21 shubhendu joined #gluster
08:21 deepakcs joined #gluster
08:22 hybrid5122 joined #gluster
08:22 bala joined #gluster
08:25 aravindavk joined #gluster
08:28 nagenrai joined #gluster
08:29 nagenrai joined #gluster
08:29 mohankumar joined #gluster
08:33 kanagaraj joined #gluster
08:34 SynchroM joined #gluster
08:45 vimal joined #gluster
08:45 shubhendu joined #gluster
08:48 rjoseph joined #gluster
08:51 harish_ joined #gluster
08:59 shruti joined #gluster
09:02 ipalaus joined #gluster
09:02 ipalaus joined #gluster
09:04 Alpinist joined #gluster
09:04 wgao joined #gluster
09:10 longsleep whois longsleep
09:10 longsleep oops, sorry
09:14 aravindavk joined #gluster
09:14 ajha joined #gluster
09:14 ppai joined #gluster
09:16 sahina joined #gluster
09:18 ninkotech_ joined #gluster
09:21 skyw joined #gluster
09:24 bharata joined #gluster
09:28 bala joined #gluster
09:34 ricky-ticky1 joined #gluster
09:36 kanagaraj joined #gluster
09:37 shubhendu joined #gluster
09:38 piotrektt joined #gluster
09:43 X3NQ joined #gluster
09:44 spresser joined #gluster
09:47 sahina joined #gluster
09:51 aravindavk joined #gluster
09:53 mooperd joined #gluster
09:56 dusmant joined #gluster
09:56 bala joined #gluster
09:58 mbukatov joined #gluster
09:58 longsleep left #gluster
10:04 nagenrai joined #gluster
10:04 shruti joined #gluster
10:05 ajha joined #gluster
10:06 zetheroo joined #gluster
10:06 zetheroo is there a way to restart/reset a gluster brick without restarting the host?
10:10 samppah zetheroo: afaik the only way is to kill the glusterfs process for the brick in question; restarting glusterd should then start it again
10:13 bstr joined #gluster
10:13 msvbhat zetheroo: Try gluster volume start <volname> force
10:18 al joined #gluster
10:19 bulde1 joined #gluster
10:20 skyw joined #gluster
10:22 DataBeaver joined #gluster
10:24 fcami joined #gluster
10:25 edward1 joined #gluster
10:28 ProT-0-TypE joined #gluster
10:35 ThatGraemeGuy joined #gluster
10:36 ctria joined #gluster
10:43 sgowda joined #gluster
10:48 basicer joined #gluster
10:51 chirino joined #gluster
10:51 skyw joined #gluster
10:51 vpshastry joined #gluster
10:54 harish joined #gluster
10:56 zetheroo any commands done within the gluster brick take ages to complete ...
10:56 zetheroo msvbhat: I have tried that but the issue persists
11:00 msvbhat zetheroo: Let me understand your problem. You have a brick which is down and now you want to restart it, right?
11:00 msvbhat zetheroo: start force will not restart a process which is already running
11:00 zetheroo no - the brick is up but is very slow ...
11:00 zetheroo restarting the host will probably sort it out ...
11:00 zetheroo but I would rather not have to restart the host
11:01 zetheroo ok I seem to have sorted it out by stopping the gluster volume and then starting it up again
11:02 msvbhat zetheroo: If you kill the glusterfsd process which is exporting that brick, you can then run start force again
11:02 msvbhat zetheroo: Okay.
11:02 msvbhat zetheroo: That's the better way to restart the whole volume :)
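
Spelled out as commands, the two routes discussed here look like this; a sketch, where the volume name myvol and the PID are illustrative:

    # find the PID of the glusterfsd process exporting the misbehaving brick
    gluster volume status myvol
    # kill just that brick's export daemon, then respawn it in place
    kill <brick-pid>
    gluster volume start myvol force

    # or the heavier route zetheroo took: bounce the whole volume
    gluster volume stop myvol
    gluster volume start myvol
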
11:02 zetheroo what is the diff between glusterfs and glusterfsd? I see both of them running....
11:03 msvbhat glusterfsd is a daemon process exporting a brick.
11:04 zetheroo ok
11:04 msvbhat glusterfs is either the gluster fuse mount process, the gluster nfs process, or the self heal daemon
11:04 ndevos @processes
11:04 glusterbot ndevos: The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/F6jqx for more information.
11:07 msvbhat Awesome :)
11:07 msvbhat ndevos: But the link seems to be broken
11:08 ndevos msvbhat: hmm, yeah, do you know where the q&a forum moved to?
11:09 msvbhat ndevos: Not sure. Will check
11:10 msvbhat ndevos: Are you the contact person to update the link?
11:11 ndevos msvbhat: I think you can update it yourself with "@forget ..." and "@learn ... as ..."
11:12 msvbhat ndevos: Okay. Will check and update. Thanks :)
11:12 ndevos msvbhat: cool :)
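
Since the three daemon names in glusterbot's factoid differ only by a suffix, an exact-match pgrep keeps them apart on a server; a quick sketch:

    pgrep -lx glusterd     # management daemon, one per server
    pgrep -lx glusterfsd   # brick export daemon, one per brick
    pgrep -lx glusterfs    # fuse mount, nfs server, or self-heal daemon
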
11:12 kkeithley joined #gluster
11:16 basicer_ joined #gluster
11:17 mbukatov joined #gluster
11:22 Rocky joined #gluster
11:23 ajha joined #gluster
11:28 manik joined #gluster
11:29 ngoswami joined #gluster
11:34 manik joined #gluster
11:36 skyw joined #gluster
11:49 sgowda joined #gluster
11:55 shruti joined #gluster
11:55 ctria joined #gluster
12:01 tziOm joined #gluster
12:01 lpabon joined #gluster
12:01 plarsen joined #gluster
12:04 neofob joined #gluster
12:05 rcheleguini joined #gluster
12:05 nagenrai joined #gluster
12:10 ricky-ticky joined #gluster
12:11 kwevers joined #gluster
12:32 manik joined #gluster
12:37 chirino joined #gluster
12:41 hybrid512 joined #gluster
12:43 vimal joined #gluster
12:47 ajha joined #gluster
12:56 shruti joined #gluster
12:56 nagenrai joined #gluster
12:57 vpshastry joined #gluster
13:07 bluefoxxx joined #gluster
13:09 bennyturns joined #gluster
13:17 aliguori joined #gluster
13:30 nagenrai joined #gluster
13:31 harish joined #gluster
13:37 ngoswami joined #gluster
13:38 bugs_ joined #gluster
13:38 spider_fingers joined #gluster
13:45 bennyturns joined #gluster
13:45 ricky-ticky joined #gluster
13:48 ziiin joined #gluster
13:50 ziiin joined #gluster
13:51 shanks joined #gluster
13:52 ziiin joined #gluster
13:54 ziiin joined #gluster
13:54 ziiin joined #gluster
13:54 ziiin joined #gluster
13:57 ziiin joined #gluster
14:07 kaptk2 joined #gluster
14:09 glusterbot New news from resolvedglusterbugs: [Bug 950083] Merge in the Fedora spec changes to build one single unified spec <http://goo.gl/tajoiQ>
14:21 lsouljacker joined #gluster
14:21 spider_fingers left #gluster
14:22 lsouljacker I've got a little problem I can't figure out.  Self Healing is failing silently on one of my bricks.  Initiating a self heal on this brick results in "Launching Heal operation on volume archive has been unsuccessful"
14:23 lsouljacker Launching from the other brick is fine however.  And in the glustershd.log file on the affected brick I see "Stopping crawl for archive-client-1 , subvol went down" and "Stopping crawl as < 2 children are up" along with a bunch of "inode link failed on the inode"
14:25 lsouljacker I'm running glusterfs 3.3.1 built on Oct 11 2012 21:49:36
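
"Stopping crawl as < 2 children are up" means glustershd on that server can reach fewer than two bricks of the replica, so connectivity is worth checking before the heal itself; a sketch using the volume name from the report:

    # is every peer connected, and is every brick process running?
    gluster peer status
    gluster volume status archive
    # what does each side think still needs healing?
    gluster volume heal archive info
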
14:39 Mr_SH4RK joined #gluster
14:44 Mr_SH4RK joined #gluster
14:48 skyw joined #gluster
14:56 nagenrai joined #gluster
14:58 Mr_SH4RK joined #gluster
15:01 Mr_SH4RK can i erase the .glusterfs directory? how can i rebuild it?
15:02 rastar joined #gluster
15:03 zetheroo left #gluster
15:05 stephane_|work joined #gluster
15:06 stephane_|work how is it possible to avoid the mount SPOF :: glu1:/volume_id when glu1 is down?
15:13 jebba joined #gluster
15:14 manik joined #gluster
15:19 nagenrai joined #gluster
15:23 semiosis stephane_|work: ,,(mount server)
15:23 glusterbot stephane_|work: The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrnds
15:23 semiosis so that's not a SPOF
15:25 stephane_|work semiosis, great, i don't see that in man mount.glusterfs
15:25 stephane_|work glusterbot, thx
15:25 glusterbot stephane_|work: you're welcome
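
To remove even the mount-time dependency on glu1, there are two common options; a sketch, where glu2 and the mount point are illustrative and backupvolfile-server assumes a mount.glusterfs new enough to support it:

    # the named server is only used to fetch the volume definition;
    # backupvolfile-server gives the client a fallback for that first contact
    mount -t glusterfs -o backupvolfile-server=glu2 glu1:/volume_id /mnt/volume

    # alternatively, mount via a round-robin DNS name that resolves to
    # several of the servers (the rrdns approach glusterbot refers to)
    mount -t glusterfs gluster.example.com:/volume_id /mnt/volume
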
15:25 * semiosis looks at glusterbot
15:28 Guest2430 joined #gluster
15:28 sprachgenerator joined #gluster
15:31 nagenrai joined #gluster
15:33 semiosis glusterbot: thanks
15:33 glusterbot semiosis: you're welcome
15:33 semiosis glusterbot: thank you
15:33 semiosis @alias thank you as thanks
15:33 glusterbot semiosis: (alias [<channel>] <oldkey> <newkey> [<number>]) -- Adds a new key <newkey> for factoid associated with <oldkey>. <number> is only necessary if there's more than one factoid associated with <oldkey>. The same action can be accomplished by using the 'learn' function with a new key but an existing (verbatim) factoid content.
15:33 semiosis @alias thanks "thank you"
15:33 glusterbot semiosis: The operation succeeded.
15:33 semiosis glusterbot: thank you
15:33 glusterbot semiosis: you're welcome
15:33 semiosis heh
15:33 kkeithley thanks
15:34 semiosis you have to address glusterbot or use one of the triggers, @ ,,() etc
15:34 glusterbot (whatis [<channel>] [--raw] <key> [<number>]) -- Looks up the value of <key> in the factoid database. If given a number, will return only that exact factoid. If '--raw' option is given, no variable substitution will take place on the factoid. <channel> is only necessary if the message isn't sent in the channel itself.
15:34 _pol joined #gluster
15:35 semiosis kkeithley: or if you were thanking me, then you're welcome
15:35 bala joined #gluster
15:35 chirino joined #gluster
15:40 johnmark lol
15:40 johnmark glusterbot: thanks
15:40 glusterbot johnmark: you're welcome
15:41 johnmark heh
15:41 johnmark glusterbot: @chanstats
15:41 johnmark glusterbot: @stats
15:41 johnmark @chanstats
15:41 glusterbot johnmark: You've given me 5 invalid commands within the last minute; I'm now ignoring you for 10 minutes.
15:41 johnmark doh
15:41 kkeithley ;-)
15:42 kkeithley glusterbot: bite me
15:42 johnmark haha!
15:45 semiosis @channelstats
15:45 glusterbot semiosis: On #gluster there have been 164270 messages, containing 6970986 characters, 1164998 words, 4693 smileys, and 622 frowns; 1032 of those messages were ACTIONs. There have been 62559 joins, 1960 parts, 60628 quits, 20 kicks, 163 mode changes, and 7 topic changes. There are currently 182 users and the channel has peaked at 217 users.
15:45 semiosis glusterbot: do bite bite bite
15:45 * glusterbot bite bite bite
15:48 nagenrai joined #gluster
15:51 SynchroM_ joined #gluster
15:52 bennyturns joined #gluster
15:53 nagenrai joined #gluster
15:54 sprachgenerator joined #gluster
15:54 nagenrai joined #gluster
15:57 sprachgenerator joined #gluster
15:58 mohankumar joined #gluster
16:02 abassett joined #gluster
16:02 bennyturns joined #gluster
16:03 manik joined #gluster
16:10 semiosis joined #gluster
16:16 nagenrai joined #gluster
16:16 Mo__ joined #gluster
16:18 itisravi joined #gluster
16:19 karthik joined #gluster
16:26 LoudNoises joined #gluster
16:28 nagenrai joined #gluster
16:30 stopbit joined #gluster
16:36 nagenrai joined #gluster
16:37 chirino joined #gluster
16:55 manik joined #gluster
16:59 zaitcev joined #gluster
17:02 satheesh joined #gluster
17:04 _pol joined #gluster
17:05 hagarth joined #gluster
17:07 thomaslee joined #gluster
17:14 chirino joined #gluster
17:38 skyw joined #gluster
17:48 aliguori joined #gluster
18:09 lpabon joined #gluster
18:15 neofob joined #gluster
18:43 mooperd joined #gluster
18:46 neofob joined #gluster
18:48 bluefoxxx joined #gluster
19:25 _ilbot joined #gluster
19:25 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
19:37 dbruhn joined #gluster
19:40 tjstansell joined #gluster
19:40 ultrabizweb joined #gluster
19:40 al joined #gluster
19:59 pea_brain joined #gluster
20:11 ricky-ticky joined #gluster
20:18 pea_brain joined #gluster
20:20 ricky-ticky joined #gluster
20:43 pea_brain joined #gluster
20:43 plarsen joined #gluster
20:44 edong23 joined #gluster
20:44 recidive joined #gluster
20:53 pea_brain joined #gluster
21:14 bluefoxxxx joined #gluster
21:15 pea_brain joined #gluster
21:20 pea_brain joined #gluster
21:24 _pol do multiple calls to gluster volume set vol1 auth.allow <IP> clobber or append?
21:24 _pol And is that the preferred way to restrict access to a gluster volume?
21:34 Technicool _pol, it looks like it clobbers; you can use comma-separated values or ranges with a wildcard, e.g., 192.168.122.4,192.168.122.5 or 192.168.*
21:36 _pol Technicool: can you use CIDR?
21:38 Technicool _pol, no, it will tell you 'is not a valid internet-address-list'
21:38 _pol Ok
21:38 semiosis there is, or has been, a feature request bug about CIDR
21:44 Technicool this i think - https://bugzilla.redhat.com/show_bug.cgi?id=764843
21:45 glusterbot <http://goo.gl/6AiwG> (at bugzilla.redhat.com)
21:45 glusterbot Bug 764843: medium, low, ---, kaushal, ASSIGNED , [FEAT] allow.auth only allows wild cards
21:55 pea_brain left #gluster
22:47 brosner joined #gluster
23:02 NuxRo CIDR would be really cool, I have volumes with 32 IPs in auth.allow, looks ridiculous
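
Since each set call replaces the previous value, the whole allow list has to be passed in one go, which is exactly what makes a 32-address list so unwieldy; a sketch with illustrative addresses and volume name:

    # each 'set' clobbers the previous list, so give the full list at once
    gluster volume set vol1 auth.allow 192.168.122.4,192.168.122.5,192.168.123.*
    # confirm what is currently in effect
    gluster volume info vol1
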
23:22 jhkappert__ joined #gluster
23:28 jhkappert____ joined #gluster
23:30 badone joined #gluster
23:31 badone joined #gluster
23:58 plarsen joined #gluster
