
IRC log for #gluster, 2016-10-29


All times shown according to UTC.

Time Nick Message
00:26 haomaiwang joined #gluster
01:10 ankitraj joined #gluster
01:19 Lee1092 joined #gluster
01:35 aj__ joined #gluster
01:36 shyam joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:15 haomaiwang joined #gluster
02:25 derjohn_mobi joined #gluster
02:47 Jacob843 joined #gluster
02:56 ic0n joined #gluster
03:05 bluenemo joined #gluster
03:51 plarsen joined #gluster
03:52 ankitraj joined #gluster
04:08 hchiramm joined #gluster
04:37 mchangir joined #gluster
04:48 haomaiwang joined #gluster
05:09 ankitraj joined #gluster
05:35 kdhananjay joined #gluster
06:08 ankitraj joined #gluster
06:25 social joined #gluster
06:53 gem joined #gluster
07:48 haomaiwang joined #gluster
07:58 d0nn1e joined #gluster
08:33 Philambdo joined #gluster
09:09 hchiramm joined #gluster
10:14 purpleidea joined #gluster
10:14 purpleidea joined #gluster
10:54 kdhananjay1 joined #gluster
11:04 ankitraj joined #gluster
11:19 luizcpg joined #gluster
11:49 arpu joined #gluster
12:06 f0rpaxe joined #gluster
12:24 mchangir joined #gluster
12:48 haomaiwang joined #gluster
13:31 nbalacha joined #gluster
13:54 haomaiwang joined #gluster
14:05 ankitraj joined #gluster
14:34 tom[] joined #gluster
14:44 karolyi joined #gluster
14:44 karolyi hey, what's the minimal setup for glusterfs, can I use only one machine?
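For reference: a single machine is enough for a minimal test setup, with one server, one brick and no replication. A minimal sketch, assuming a hypothetical host node1 and brick path /data/brick1/gv0 ('force' is only needed if the brick sits on the root filesystem):

    mkdir -p /data/brick1/gv0
    gluster volume create gv0 node1:/data/brick1/gv0 force
    gluster volume start gv0
    mount -t glusterfs node1:/gv0 /mnt/gv0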
14:53 ankitraj joined #gluster
15:15 plarsen joined #gluster
15:23 riyas joined #gluster
15:40 ahino joined #gluster
15:42 karolyi left #gluster
15:53 riyas joined #gluster
15:57 hchiramm joined #gluster
15:58 Gnomethrower joined #gluster
16:08 tbas joined #gluster
16:15 tbas hi, i was testing gluster with a trusted storage pool of two nodes. node A had a volume configured and a client was connected via NFS to node B. i saw that node B was proxying the traffic to node A.
16:15 tbas 1. what is this feature called? 2. in which version was it introduced?
16:35 post-factum tbas: it is how the built-in nfs server works
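For context: gluster's built-in NFS server (gNFS, NFSv3) runs on every node of the trusted pool and serves any started volume, so a client can mount through whichever node it points at and the server routes the I/O to the bricks internally. A minimal sketch, assuming hypothetical hosts nodeA/nodeB and a volume gv0 whose bricks live on nodeA:

    # the client mounts via nodeB even though the bricks are on nodeA
    mount -t nfs -o vers=3,mountproto=tcp nodeB:/gv0 /mnt/gv0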
16:48 jiffin joined #gluster
16:53 shyam joined #gluster
16:56 Jules- joined #gluster
16:57 Jules- having strange behavior with the latest glusterfs release, 3.7.16-1. If i try to use the fuse client, the share freezes on accessing it. Is that one more regression bug?
16:58 hchiramm joined #gluster
17:14 jkroon joined #gluster
17:17 panina joined #gluster
17:18 panina joined #gluster
17:19 jkroon joined #gluster
17:33 Philambdo joined #gluster
17:38 haomaiwang joined #gluster
17:46 tbas post-factum: so this behavior exists in 3.5.1, too?
17:47 post-factum tbas: dunno for sure, but suppose so
17:50 hchiramm joined #gluster
18:34 gem joined #gluster
19:02 Jules- nobody knows?
19:02 Jules- is it possible to roll back from 3.7.16-1 to the previous release without destroying anything?
19:05 Jules- or will it blow up my metadata due to changes you made?
19:06 Jules- i'm stuck. i don't understand the testing concept for new releases at all. one release fixes one bug and brings a new one
19:06 post-factum Jules-: why?
19:07 Jules- because 3.7.16-1 made fuse mounts inaccessible
19:08 Jules- fuse mounts freeze when accessing.
19:11 post-factum you'd better find out why
19:11 post-factum and probably file a bugreport
19:11 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
19:11 post-factum glusterbot: thanks man, go get some sleep
19:11 Jules- no time for that
19:11 Jules- since the site's down
19:12 post-factum Jules-: come on. it is saint saturday, relax, grab some beer
19:12 post-factum Jules-: i guess you can downgrade to .15 with no issues
19:12 Jules- you guess or know?
19:12 Jules- its an emergency
19:12 DV joined #gluster
19:12 post-factum Jules-: an emergency is when someone dies. and this is a trivial downgrade. it should just work
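A hedged sketch of such a package downgrade; the exact version strings (3.7.15-1 here) and available repositories are assumptions, gluster services should be stopped first, and dependent subpackages (libs, api, cli) may need to be listed as well:

    # RPM-based systems (CentOS/RHEL), assuming the 3.7.15-1 packages are still reachable
    systemctl stop glusterd
    yum downgrade glusterfs-server-3.7.15-1 glusterfs-fuse-3.7.15-1 glusterfs-3.7.15-1
    systemctl start glusterd

    # Debian/Ubuntu equivalent, pinning the older version explicitly
    apt-get install glusterfs-server=3.7.15-1 glusterfs-client=3.7.15-1 glusterfs-common=3.7.15-1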
19:13 Jules- maybe i will, after having too much trouble with glusterfs and getting overworked because one bug hits another
19:14 post-factum Jules-: yeah. gluster maintenance is not the easiest thing in the world
19:15 post-factum Jules-: but now you may think you are closer to being *the* real man
19:15 Jules- finally, i wish the redhat team would test their releases better before pushing them to the crowd
19:17 post-factum Jules-: wait, man, you are talking to RH employee
19:17 post-factum Jules-: and you are talking about *community* release
19:17 post-factum Jules-: aren't you?
19:18 Jules- sure, it's a community release, but that shouldn't exclude quality
19:18 post-factum Jules-: no warranty, eh?
19:18 post-factum Jules-: just go and file a bug instead
19:18 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
19:18 post-factum glusterbot-- shut up
19:18 glusterbot post-factum: glusterbot's karma is now 9
19:24 Jules- it looks like it's glusterfsd that is corrupt
19:25 Jules- i've downgraded just now, and after glusterfsd restarted the mounts work again
19:33 Jules- where do i find a diff of what has changed in glusterfsd since the last release?
19:44 plarsen joined #gluster
19:47 post-factum Jules-: you have to inspect logs
19:47 post-factum Jules-: for errors, warnings, stacktraces etc
19:48 Jules- there's a reason i'm asking for the diff: the log doesn't say anything useful.
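One way to answer the "what changed since the last release" question from the source rather than the logs, assuming the usual upstream tag naming (v3.7.15, v3.7.16):

    git clone https://github.com/gluster/glusterfs.git && cd glusterfs
    git log --oneline v3.7.15..v3.7.16                 # every commit that went into 3.7.16
    git diff v3.7.15 v3.7.16 -- glusterfsd/ xlators/   # restrict the diff to particular directories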
19:52 haomaiwang joined #gluster
20:13 post-factum Jules-: let me check
20:15 post-factum Jules-: what is your volume layout?
20:19 Jules- 3 replica bricks
20:20 Jules- replicate
20:20 post-factum Jules-: volume status and volume info please
20:20 post-factum @paste
20:20 glusterbot post-factum: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
20:20 post-factum glusterbot++
20:20 glusterbot post-factum: glusterbot's karma is now 10
20:28 Jules- there ya go: http://termbin.com/nuvnc
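The paste above amounts to piping the requested commands through netcat, as glusterbot suggested, e.g.:

    gluster volume info   | nc termbin.com 9999
    gluster volume status | nc termbin.com 9999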
20:30 post-factum Jules-: you have cluster.*-self-heal on. why?
20:32 Jules- due to the 2-node problem
20:32 post-factum Jules-: huh?
20:33 post-factum Jules-: also, you have ridiculous server quorum options
20:33 post-factum Jules-: do you know what those mean?
20:36 Jules- see ticket
20:36 Jules- #1347329
20:39 post-factum kinda shit
20:39 post-factum 2-node cluster is possible
20:39 post-factum just don't touch quorum :)
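For reference, quorum settings can be inspected and put back to their defaults from the CLI; a sketch, assuming a hypothetical volume name gv0 and a gluster version with 'volume get' (3.7+):

    gluster volume get gv0 all | grep quorum             # show the current quorum-related options
    gluster volume reset gv0 cluster.quorum-type         # client-side quorum back to default
    gluster volume reset gv0 cluster.server-quorum-type  # server-side quorum back to default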
20:39 Jules- an untouched quorum brought me these issues
20:42 Jules- that had worked just fine before
20:42 post-factum meh
20:46 Jules- on a 2-node setup after that release, rebooting one machine will shut down gluster on the other machine too
20:50 post-factum wow
20:51 post-factum such enterprise
20:51 post-factum much fun
20:51 Jules- #1352277
20:52 post-factum Jules-: anyway, i won't help you with quorum too much. i used a 2-node setup w/o quorum with no issues
20:52 post-factum Jules-: anyway, you have to check logs
20:53 Jules- well, as i said, i didn't have issues until someone patched this security feature in and the trouble began.
20:58 post-factum Jules-: just revert that 1 patch. i had to maintain a separate gluster tree for my production cluster anyway
20:58 post-factum Jules-: because of surprises like that
20:59 Jules- btw, where can i check if the patch set has made it into any release yet?
20:59 Jules- for example the one from my bug report
21:00 Jules- http://review.gluster.org/#/c/14848/
21:00 glusterbot Title: Gerrit Code Review (at review.gluster.org)
21:09 post-factum Change 14848 - Merged
21:09 post-factum it is merged
21:09 post-factum but it is merged into master
21:09 post-factum you can grep the git log to find out if it was backported to some stable release
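A sketch of that grep against the upstream repository; the search string below is a placeholder, since the actual subject and Change-Id of change 14848 are not in this log:

    git clone https://github.com/gluster/glusterfs.git && cd glusterfs
    # locate the commit on master by its Gerrit Change-Id or subject (placeholder)
    git log --oneline --grep='<Change-Id or subject of change 14848>' origin/master
    # then list the release branches and tags that already contain it
    git branch -r --contains <sha>
    git tag --contains <sha>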
22:04 om2_ joined #gluster
22:14 Jacob843 joined #gluster
22:27 kpease joined #gluster
23:45 haomaiwang joined #gluster
