IRC log for #gluster-dev, 2015-08-17

All times shown according to UTC.

Time Nick Message
01:00 dlambrig joined #gluster-dev
01:02 lkoranda joined #gluster-dev
01:02 xavih joined #gluster-dev
01:05 mikedep333 left #gluster-dev
01:28 topshare joined #gluster-dev
01:29 samikshan joined #gluster-dev
01:38 dlambrig joined #gluster-dev
01:45 yliu joined #gluster-dev
02:08 overclk joined #gluster-dev
02:38 dlambrig joined #gluster-dev
03:28 byreddy_ joined #gluster-dev
03:37 nbalacha joined #gluster-dev
03:43 msvbhat joined #gluster-dev
03:45 shubhendu joined #gluster-dev
03:47 itisravi joined #gluster-dev
03:53 overclk joined #gluster-dev
03:58 nishanth joined #gluster-dev
04:03 atinm joined #gluster-dev
04:10 gem joined #gluster-dev
04:14 ndevos kkeithley: soumya and I started to put some things together in https://public.pad.fsfe.org/p/gluster-nfs-todos
04:19 kshlm joined #gluster-dev
04:23 kkeithley1 joined #gluster-dev
04:23 sakshi joined #gluster-dev
04:24 kanagaraj joined #gluster-dev
04:28 ndevos kkeithley_blr: soumya and I started to put some things together in https://public.pad.fsfe.org/p/gluster-nfs-todos
04:36 aravindavk joined #gluster-dev
04:41 Manikandan joined #gluster-dev
04:43 ashiq joined #gluster-dev
04:48 baojg joined #gluster-dev
04:52 vimal joined #gluster-dev
04:56 aspandey joined #gluster-dev
04:57 deepakcs joined #gluster-dev
05:10 ndarshan joined #gluster-dev
05:12 overclk joined #gluster-dev
05:17 Bhaskarakiran joined #gluster-dev
05:23 kkeithley1 joined #gluster-dev
05:24 atinm ndevos, Week 31 & 32 news are out now, just FYI :)
05:25 atinm ndevos, so now we are up to date
05:25 overclk joined #gluster-dev
05:27 anekkunt joined #gluster-dev
05:30 ndevos atinm++ great, thanks!
05:30 glusterbot ndevos: atinm's karma is now 26
05:31 hgowtham joined #gluster-dev
05:37 ggarg joined #gluster-dev
05:38 rafi joined #gluster-dev
05:40 aspandey :q
05:46 dlambrig joined #gluster-dev
05:56 Manikandan joined #gluster-dev
05:59 Bhaskarakiran joined #gluster-dev
06:10 kkeithley_blr who, if anyone, is looking at the netbsd regression failures on the master branch?
06:13 vmallika joined #gluster-dev
06:14 raghu joined #gluster-dev
06:18 poornimag joined #gluster-dev
06:21 Manikandan joined #gluster-dev
06:21 overclk joined #gluster-dev
06:23 Saravana_ joined #gluster-dev
06:37 rafi1 joined #gluster-dev
06:40 atinm kkeithley, recently one of the geo-rep patches broke netbsd, Avra will be sending a patch to revert the test temporarily till we get it fixed
06:40 atinm kkeithley, hope that will bring in some stability
06:41 kkeithley_blr thanks atinm++
06:41 glusterbot kkeithley_blr: atinm's karma is now 27
06:43 atinm anekkunt, http://review.gluster.org/#/c/10262/
06:47 asengupt joined #gluster-dev
06:49 nbalacha joined #gluster-dev
06:54 krishnan_p joined #gluster-dev
07:01 jcsp joined #gluster-dev
07:03 baojg joined #gluster-dev
07:12 jiffin joined #gluster-dev
07:19 ndevos jiffin: nbslave77 is busted, logins don't work there :-/
07:20 atinm anekkunt, could you ack http://review.gluster.org/#/c/11886/1 ?
07:21 anekkunt atinm, ok
07:22 anekkunt atinm, done
07:23 atinm thanks anekkunt++
07:23 glusterbot atinm: anekkunt's karma is now 4
07:24 rafi joined #gluster-dev
07:26 rafi2 joined #gluster-dev
07:32 jiffin ndevos: ok, can we try another slave machine?
07:33 ndevos jiffin: when I come back from lunch?
07:34 jiffin ndevos: sure
07:49 rafi joined #gluster-dev
07:50 badone ndevos: do you know if it's safe to run /usr/sbin/glfsheal from the command line? It looks okay
07:51 badone krishnan_p: would you know?
07:57 vipul_ joined #gluster-dev
07:58 nbalacha joined #gluster-dev
08:09 itisravi badone: yes it is fine. When you run the `gluster volume heal` related commands, the gluster CLI just invokes glfsheal with the appropriate arguments.
08:11 badone itisravi: looks like it passes the volume name only
08:12 badone itisravi: it expects argc == 2
08:12 itisravi badone: It also passes the file name, source-brick etc for split-brain resolution commands.
08:12 itisravi badone: ah is this not on 3.7?
08:13 overclk joined #gluster-dev
08:13 badone itisravi: no, sorry, this is an older version
08:13 itisravi badone: more options have been added subsequently.
08:13 badone itisravi: I see, thanks
08:13 itisravi badone: np
08:13 badone itisravi: one more question if I may?
08:13 itisravi badone: sure
08:13 badone itisravi: when you run that what is it actually doing on the peers?
08:14 badone itisravi: afraid this is 3.4 and it is being run with just the volume name as argument
08:15 itisravi badone: glfsheal is basically a client process and thus has AFR loaded, which scans .glusterfs/indices/xattrop on all peers and lists the files that need healing.
08:15 badone itisravi: okay, so if the index directory were large it could take a while?
08:15 itisravi badone: yes
08:16 badone itisravi: many thanks, I appreciate it
08:16 itisravi badone: welcome :)
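
For reference, running glfsheal by hand as discussed above might look like the following sketch; the volume name "testvol" is a placeholder, and on 3.4 the binary accepts only the volume name (later releases add arguments for split-brain resolution):

    # what the CLI effectively runs for "gluster volume heal testvol info";
    # glfsheal is a client process with AFR loaded, so it scans
    # .glusterfs/indices/xattrop on the bricks and lists entries needing heal
    /usr/sbin/glfsheal testvol
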
08:24 rjoseph joined #gluster-dev
08:39 baojg joined #gluster-dev
08:40 krishnan_p badone, I didn't see your message on time. Sorry. I guess itisravi has answered.
08:43 krishnan_p itisravi, I have added ./tests/basic/afr/self-heald.t to the spurious failures etherpad.
08:43 itisravi krishnan_p: okay
08:44 krishnan_p itisravi, what is the criterion for escalating a test from spurious failure to 'bad_test'?
08:45 itisravi krishnan_p: I think that if the test case fails multiple times, we need to add it to bad tests.
08:45 krishnan_p ndevos, I have updated http://review.gluster.com/#/c/11911/ (marking event_pool as 'dispatched' patch). It is failing on netbsd (spurious). Please take a look when you can.
08:46 krishnan_p itisravi, OK. If it fails again I will send a patch adding it to bad_tests. Does that make sense?
08:46 itisravi krishnan_p: okay.
08:47 krishnan_p itisravi, thanks.
08:47 itisravi krishnan_p: np
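
A hypothetical sketch of what such a patch could add, assuming the known-bad tests are kept as a shell case statement in run-tests.sh (the exact mechanism in the tree at the time may differ):

    # hypothetical: mark a spuriously failing test as known-bad so the
    # regression run skips it until the underlying issue is fixed
    is_bad_test ()
    {
        case "$1" in
            ./tests/basic/afr/self-heald.t) return 0 ;;  # spurious failures
            *) return 1 ;;
        esac
    }
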
08:49 overclk joined #gluster-dev
09:09 skoduri joined #gluster-dev
09:09 kkeithley1 joined #gluster-dev
09:17 kbyrne joined #gluster-dev
09:19 overclk joined #gluster-dev
09:20 krishnan_p Has anyone here tried out libmill [libmill.org]? It is a library that provides Go-style routines and channels in C (with some stack black magic that I don't understand too well).
09:21 overclk joined #gluster-dev
09:21 krishnan_p Having done asynchronous callback (hell?) based network communication in C, libmill looks refreshing.
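
A minimal sketch of the style krishnan_p is describing, based on libmill's published API (coroutine, go, chmake/chs/chr); build with -lmill:

    #include <stdio.h>
    #include <libmill.h>

    /* a coroutine that sends one int over a channel, Go-style */
    coroutine void worker(chan ch) {
        chs(ch, int, 42);   /* channel send */
        chclose(ch);        /* release this handle's reference */
    }

    int main(void) {
        chan ch = chmake(int, 0);   /* unbuffered channel of int */
        go(worker(chdup(ch)));      /* launch the coroutine with its own handle */
        int v = chr(ch, int);       /* channel receive; blocks until sent */
        printf("got %d\n", v);
        chclose(ch);
        return 0;
    }
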
09:30 overclk joined #gluster-dev
09:37 itisravi_ joined #gluster-dev
09:40 skoduri joined #gluster-dev
09:42 itisravi joined #gluster-dev
09:46 shubhendu joined #gluster-dev
09:47 schandra joined #gluster-dev
09:47 overclk joined #gluster-dev
09:52 ndarshan joined #gluster-dev
09:52 nishanth joined #gluster-dev
09:55 overclk joined #gluster-dev
10:04 krishnan_p joined #gluster-dev
10:17 atinm joined #gluster-dev
10:19 byreddy_ joined #gluster-dev
10:23 ggarg joined #gluster-dev
10:25 baojg joined #gluster-dev
10:33 Manikandan_ joined #gluster-dev
10:37 badone krishnan_p: all good :)
10:49 ira joined #gluster-dev
10:51 overclk joined #gluster-dev
10:56 kkeithley1 joined #gluster-dev
11:01 ndarshan joined #gluster-dev
11:04 shubhendu joined #gluster-dev
11:07 gem_ joined #gluster-dev
11:09 Manikandan_ joined #gluster-dev
11:12 anekkunt joined #gluster-dev
11:12 skoduri joined #gluster-dev
11:17 Saravana_ joined #gluster-dev
11:18 hchiramm_ joined #gluster-dev
11:19 kkeithley1 joined #gluster-dev
11:20 krishnan_p joined #gluster-dev
11:21 ggarg joined #gluster-dev
11:28 byreddy_ joined #gluster-dev
11:29 atinm joined #gluster-dev
11:39 firemanxbr joined #gluster-dev
11:39 shyam joined #gluster-dev
11:42 jrm16020 joined #gluster-dev
11:42 overclk joined #gluster-dev
11:48 Manikandan krishnan_p++
11:48 glusterbot Manikandan: krishnan_p's karma is now 11
11:51 ndevos atinm: do you know who would be interested in firewalls? http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/12243
11:53 overclk joined #gluster-dev
11:59 atinm ndevos, I am not aware of anyone who is interested atm
11:59 atinm ndevos, but we could encourage :)
12:00 atinm ndevos, how would option 2 solve the problem?
12:00 atinm ndevos, we can not open up 24007 at the time of volume creation
12:01 ndevos atinm: the rules for firewalld are stored in .xml files; we would need to create them for the bricks when a volume gets started
12:01 atinm ndevos, peer probe would need it and we could add a node even without a volume
12:01 ndevos atinm: for glusterd, a default glusterd.xml would be sufficient, at least for me
12:02 atinm ndevos, hmm, but that has to be done at init, not at hooks
12:02 ndevos atinm: other .xml files (one per brick, or per volume?) could probably be used in addition
12:02 ndevos atinm: the default glusterd.xml would just get shipped and installed by the rpms
12:03 atinm ndevos, but which part of the code will open up the port mentioned in that xml?
12:03 rafi1 joined #gluster-dev
12:04 ndevos atinm: one solution would be create the .xml for the bricks from the hook scripts, firewall-cmd will then read that .xml and open the port(s)
12:04 ndevos atinm: there would not be any need to modify the sources that way (well, maybe the hook scripts need to receive the port)
12:05 atinm ndevos, yup, that makes sense
12:06 rafi joined #gluster-dev
12:06 ndevos atinm: the other way would be to integrate with firewalld via dbus and have the glusterfs(d) binary open the ports - but I'm not sure how a sysadmin would overrule/tune that
12:07 atinm ndevos, I prefer the first option
12:07 ndevos atinm: I'm not sure how the firewalld api can be used, so it is difficult to comment on #1
12:08 atinm ndevos, I didn't mean option 1 from the mail :)
12:08 ndevos atinm: which option 1 do you mean then?
12:08 ndevos atinm: or, better yet, reply to the mail :D
12:08 atinm ndevos, my vote is for creating .xml
12:09 atinm ndevos, because we discussed it first and then jumped to the firewalld api :)
12:09 ndevos atinm: well, so do I, but Chris would like to do the other solution ;-)
12:09 rafi1 joined #gluster-dev
12:10 atinm ndevos, so can't we expect a patch from him ;)
12:10 ndevos atinm: I think we can, I just need someone else to state his/her preference :)
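
A sketch of the .xml approach being discussed; the service name, volume name, and brick port below are hypothetical examples. A hook script run when a volume starts would drop a firewalld service definition in place and enable it with firewall-cmd:

    # hypothetical hook-script fragment run on volume start
    cat > /etc/firewalld/services/glusterfs-testvol.xml <<'EOF'
    <?xml version="1.0" encoding="utf-8"?>
    <service>
      <short>glusterfs-testvol</short>
      <description>Brick port(s) for gluster volume testvol</description>
      <port protocol="tcp" port="49152"/>
    </service>
    EOF
    firewall-cmd --reload                                     # pick up the new service file
    firewall-cmd --permanent --add-service=glusterfs-testvol  # open the port(s)
    firewall-cmd --reload
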
12:13 nbalacha joined #gluster-dev
12:14 dlambrig joined #gluster-dev
12:16 Manikandan joined #gluster-dev
12:16 poornimag joined #gluster-dev
12:19 Saravana_ joined #gluster-dev
12:22 kkeithley1 joined #gluster-dev
12:28 overclk joined #gluster-dev
12:30 hchiramm_ joined #gluster-dev
12:44 hgowtham joined #gluster-dev
13:19 overclk joined #gluster-dev
13:27 _Bryan_ joined #gluster-dev
13:29 rafi joined #gluster-dev
13:32 shyam joined #gluster-dev
13:39 lpabon joined #gluster-dev
13:48 krishnan_p joined #gluster-dev
13:52 dlambrig joined #gluster-dev
14:15 lpabon joined #gluster-dev
14:18 vipul_ joined #gluster-dev
14:19 overclk joined #gluster-dev
14:24 JoeJulian_ joined #gluster-dev
14:30 lpabon joined #gluster-dev
14:41 cholcombe joined #gluster-dev
14:43 baojg joined #gluster-dev
14:49 overclk joined #gluster-dev
14:50 vipul_ left #gluster-dev
15:00 lpabon joined #gluster-dev
15:02 shaunm joined #gluster-dev
15:27 wushudoin| joined #gluster-dev
15:30 topshare joined #gluster-dev
16:05 overclk joined #gluster-dev
16:09 rafi joined #gluster-dev
16:32 dlambrig_ joined #gluster-dev
16:35 aravindavk joined #gluster-dev
16:41 overclk joined #gluster-dev
17:15 jrm16020 joined #gluster-dev
17:21 skoduri joined #gluster-dev
17:22 jrm16020 joined #gluster-dev
17:24 overclk joined #gluster-dev
17:32 jrm16020 joined #gluster-dev
17:35 jrm16020 joined #gluster-dev
17:50 lpabon joined #gluster-dev
17:54 cholcombe joined #gluster-dev
17:54 cholcombe kkeithley, so I found a bug in the RPC system for quotad. I can send it a packet and crash it every time
17:55 cholcombe ndevos, maybe you can help with this ^
18:02 lpabon joined #gluster-dev
18:28 lpabon joined #gluster-dev
19:19 lpabon joined #gluster-dev
20:48 badone_ joined #gluster-dev
21:10 [o__o] joined #gluster-dev
21:23 shyam joined #gluster-dev
22:44 lpabon joined #gluster-dev
