
IRC log for #gluster-dev, 2017-07-13


All times shown according to UTC.

Time Nick Message
00:05 shyam joined #gluster-dev
00:07 shyam left #gluster-dev
00:18 Alghost joined #gluster-dev
00:30 Alghost joined #gluster-dev
00:31 Alghost joined #gluster-dev
00:32 Alghost joined #gluster-dev
00:33 Alghost joined #gluster-dev
01:06 gyadav__ joined #gluster-dev
01:09 pranithk1 joined #gluster-dev
01:24 mchangir joined #gluster-dev
01:48 ilbot3 joined #gluster-dev
01:48 Topic for #gluster-dev is now Gluster Development Channel - https://www.gluster.org | For general chat go to #gluster | Patches - https://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:02 gyadav__ joined #gluster-dev
02:02 prasanth joined #gluster-dev
02:04 Alghost joined #gluster-dev
02:26 rraja joined #gluster-dev
02:29 apandey joined #gluster-dev
02:56 apandey joined #gluster-dev
03:03 apandey joined #gluster-dev
03:03 apandey joined #gluster-dev
03:09 poornima joined #gluster-dev
03:15 skoduri joined #gluster-dev
03:22 mchangir joined #gluster-dev
03:43 itisravi joined #gluster-dev
03:43 ppai joined #gluster-dev
03:48 riyas joined #gluster-dev
03:55 Shu6h3ndu joined #gluster-dev
03:59 nbalacha joined #gluster-dev
03:59 atinm joined #gluster-dev
03:59 rraja joined #gluster-dev
04:01 msvbhat joined #gluster-dev
04:10 apandey joined #gluster-dev
04:16 pkalever joined #gluster-dev
04:17 msvbhat joined #gluster-dev
04:18 sanoj joined #gluster-dev
04:33 rastar joined #gluster-dev
04:37 gyadav__ joined #gluster-dev
04:38 msvbhat joined #gluster-dev
04:57 susant joined #gluster-dev
05:03 sanoj joined #gluster-dev
05:07 gobindadas joined #gluster-dev
05:10 jiffin joined #gluster-dev
05:10 apandey_ joined #gluster-dev
05:11 Shu6h3ndu joined #gluster-dev
05:11 karthik_us joined #gluster-dev
05:12 ankitr joined #gluster-dev
05:23 amarts joined #gluster-dev
05:26 msvbhat joined #gluster-dev
05:28 rafi joined #gluster-dev
05:36 Saravanakmr joined #gluster-dev
05:37 rafi1 joined #gluster-dev
05:42 Saravanakmr joined #gluster-dev
06:00 skoduri joined #gluster-dev
06:02 kdhananjay joined #gluster-dev
06:07 hgowtham joined #gluster-dev
06:09 sanoj joined #gluster-dev
06:29 msvbhat joined #gluster-dev
06:36 sona joined #gluster-dev
06:44 skumar joined #gluster-dev
06:50 atinm joined #gluster-dev
06:58 psony joined #gluster-dev
07:17 Shu6h3ndu joined #gluster-dev
07:26 itisravi nigelb: centos regressions for https://review.gluster.org/#/c/17717/ seem to fail prematurely. Any ideas?
07:26 itisravi "Caused by: hudson.plugins.git.GitException: Command "git config remote.origin.url git://review.gluster.org/glusterfs.git" returned status code 4:" is what I see in the log.
07:29 msvbhat joined #gluster-dev
07:42 apandey joined #gluster-dev
08:08 nbalacha itisravi, nigel is on PTO. Try misc.
08:11 itisravi nbalacha: ah okay thanks
08:20 mchangir_lunch xavih, I probably missed your response, so I'm repeating my query: "why are there 256 gf_muladd_NN() functions w.r.t. the EC implementation?"
08:21 xavih mchangir: ec is using an 8 bits Galois Field, so there are 256 possible values
08:22 mchangir xavih, okay; looks like I need to read up on "Galois Field"
08:22 xavih mchangir: what do you want to do ?
08:23 mchangir xavih, understand things
08:24 atinm joined #gluster-dev
08:25 xavih mchangir: basically, a Galois Field is like a regular integer field, but with the addition operation replaced by an xor (i.e. there are no carries)
08:25 xavih mchangir: and it has a modulus to keep all operations inside a range. In this case 8 bits
08:27 xavih mchangir: it has very good properties for computing
08:27 xavih mchangir: in a Galois field, 5 + 1 = 4
08:28 nbalacha rastar, ping
08:28 rastar nbalacha pong
08:29 mchangir xavih, okay
08:30 nbalacha rastar, never mind. I found what I was looking for.
08:32 mchangir xavih, Rings, Fields, Equivalences, Functions ... I've kinda left them far behind :)
08:39 xavih mchangir: hehe :)
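
For readers unfamiliar with the notation, the following is a minimal sketch (in Python, not from the log or from Gluster's source) of the GF(2^8) arithmetic xavih describes: addition is a carry-less XOR, so 5 + 1 = 4, and a reducing polynomial keeps multiplication within 8 bits. Because every coefficient is one of only 256 possible values, EC can generate one specialised gf_muladd_NN() routine per value, which is the count mchangir asked about. The helper names and the 0x11d polynomial below are illustrative assumptions.

# Minimal GF(2^8) arithmetic sketch (illustrative only; Gluster's EC xlator
# implements this with generated C routines such as gf_muladd_NN()).

def gf_add(a, b):
    """Addition in an 8-bit Galois Field is a carry-less XOR, so 5 + 1 == 4."""
    return a ^ b

def gf_mul(a, b, poly=0x11d):
    """Carry-less multiplication, reduced by an irreducible polynomial so the
    result stays within 0..255 (0x11d is a common choice and an assumption
    here, not necessarily the polynomial Gluster uses)."""
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= poly
    return result & 0xff

assert gf_add(5, 1) == 4   # xavih's example above
assert gf_mul(3, 7) == 9   # multiplication also never leaves the 8-bit range
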
08:43 ndevos obnox, poornima, anoopcs, rastar: is this correct for the Samba VFS plugin? https://review.gluster.org/#/c/17583/4/MAINTAINERS@516
08:44 rastar ndevos: yes..
08:46 ndevos ok then, thanks!
08:50 ppai rastar++
08:50 glusterbot ppai: rastar's karma is now 52
08:55 apandey joined #gluster-dev
08:58 nbalacha anybody know how to debug a coredump generated by Ubuntu?
09:02 pkalever1 joined #gluster-dev
09:02 ndarshan joined #gluster-dev
09:03 pkalever joined #gluster-dev
09:08 Shu6h3ndu joined #gluster-dev
09:23 skumar joined #gluster-dev
09:29 pkalever joined #gluster-dev
10:10 itisravi nbalacha: normal gdb techniques don't work?
10:10 Saravanakmr joined #gluster-dev
10:11 nbalacha itisravi, I tried it out after I asked the question, but it looks like gdb doesn't recognise the file format, so I have asked Szymon to recheck.
10:11 itisravi nbalacha: okay
10:17 anoopcs nbalacha, Is it something named as _usr_sbin_glusterfsd.0.crash (assuming its a brick crash)?
10:22 nbalacha anoopcs, it is. It is supposed to be a crash dump of the rebalance process
10:25 anoopcs nbalacha, Then I think it's a plain text file explaining the details of the crash. What does `file <_usr_sbin_glusterfsd.0.crash>` say? If it says ASCII text, then we may need some tools to extract the coredump
10:26 sanoj joined #gluster-dev
10:27 nbalacha anoopcs, ah. I never checked to see if it was a text file
10:28 nbalacha anoopcs, you are right. :) It is a text file
10:28 anoopcs :-)
10:28 nbalacha anoopcs, any idea how I can get the core dump
10:28 nbalacha ?
10:30 msvbhat joined #gluster-dev
10:30 anoopcs nbalacha, Sorry.. I have not done anything on my own. I have heard about a tool called 'apport-unpack' which can do *some extraction* from such files
10:30 anoopcs https://wiki.ubuntu.com/Apport may have some details
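
As a rough illustration of the apport workflow mentioned above, here is a small Python sketch (not from the log) that unpacks an Ubuntu .crash report and locates the core so gdb can load it. It assumes the apport-unpack tool anoopcs mentions is installed; the unpacked 'CoreDump' file name is apport's usual convention, but should be verified locally.

#!/usr/bin/env python3
# Sketch: unpack an Ubuntu apport .crash report and locate the core dump.
# Assumes 'apport-unpack' (see the Apport wiki above) is available; the
# extracted field file is conventionally called 'CoreDump', but check locally.
import subprocess
import sys
from pathlib import Path

def extract_core(crash_report, out_dir="crash-unpacked"):
    # apport-unpack extracts every report field into separate files in out_dir
    subprocess.run(["apport-unpack", crash_report, out_dir], check=True)
    core = Path(out_dir) / "CoreDump"
    if not core.exists():
        sys.exit("no CoreDump field found in %s" % crash_report)
    return core

if __name__ == "__main__":
    core = extract_core("_usr_sbin_glusterfsd.0.crash")
    # load it in gdb together with the binary that actually crashed, e.g.:
    #   gdb /usr/sbin/glusterfsd <core>
    print("core dump extracted to:", core)
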
10:42 pkalever joined #gluster-dev
10:42 rastar joined #gluster-dev
10:49 kdhananjay joined #gluster-dev
10:50 apandey joined #gluster-dev
11:14 ndevos misc: is there spam detection in mailman that is not enabled for this list? this one came through: http://lists.gluster.org/pipermail/integration/2017-July/000033.html
11:32 apandey joined #gluster-dev
11:32 pkalever joined #gluster-dev
11:43 gyadav__ joined #gluster-dev
11:56 PotatoGim Hello, does GlusterFS have a feature plan for antivirus, something like samba-vscan? If it doesn't, I'm planning to write one :)
12:00 rastar joined #gluster-dev
12:00 vbellur PotatoGim: go ahead, not aware of anybody trying to do that :)
12:01 vbellur PotatoGim: ndevos did some work earlier to integrate with clamav IIRC
12:05 ndarshan joined #gluster-dev
12:09 nbalacha joined #gluster-dev
12:10 PotatoGim vbellur: Thanks a lot! I will go ahead with that! :)
12:18 Shu6h3ndu joined #gluster-dev
12:19 misc ndevos: we have no spam detection for mailman
12:19 misc ndevos: someone likely subscribed to the list
12:36 misc did people see lots of crashes on slave24?
12:36 misc (as the server is full of cores)
12:37 skumar joined #gluster-dev
13:10 deep-book-gk_ joined #gluster-dev
13:10 deep-book-gk_ left #gluster-dev
13:11 obnox ndevos: hi. could you add Günther to the samba glusterfs-vfs module?
13:11 obnox ndevos: in the maintainers patch
13:12 obnox ndevos: and what is "P:" for kkeithley at storhaug?
13:13 kkeithley obnox: there's a legend at the top of the file that tells what M, P, etc. mean.   M = maintainer, P = peer.
13:13 kkeithley peer = someone who can review, maybe merge, but is not the owner/maintainer
13:14 obnox kkeithley: at least in the current file, P does not exist.
13:15 ndevos obnox: P is new, it is a little like maintainer-in-training, or such
13:15 kkeithley right, not in the current file in git.  It's an added line (49) in the update
13:15 kkeithley https://review.gluster.org/#/c/17583/4/MAINTAINERS
13:16 kkeithley s/added line (49)/added line (no. 49)/
13:17 obnox kkeithley: thanks for the explanation!
13:17 ndevos obnox: we can add Guenther, could you leave a comment with his email?
13:17 obnox what's the relevance of samba's vfs-glusterfs in the maintainers file of gluster when these are not maintainers in the samba.git ?
13:18 kkeithley just a point of contact AFAIK
13:18 ndevos vbellur, PotatoGim: sorry, I dont think I ever did anything with clamav... maybe I once pointed out we could do it
13:19 ndevos PotatoGim: a clamav xlator would be nice to have, you could describe your ideas in a github issue at https://github.com/gluster/glusterfs/issues/
13:20 prasanth joined #gluster-dev
13:21 ndevos misc: its an open list, no subscription needed, and no moderation - makes it less problematic to CC in case other projects discuss gluster integration plans
13:23 misc ndevos: I guess I can try to setup some antispam
13:23 ndevos kkeithley: did you try to email justin on his gluster.org address?
13:23 kkeithley me? No.
13:24 ndevos misc: well, its the first time it happened, so not too annoying yet :)
13:25 ndevos kkeithley: you want to try? I thought it was setup to redirect to his real email, but if you suspect that it changed...
13:25 pkalever joined #gluster-dev
13:27 kkeithley misc, nigelb: is justin clift's gluster.org email addr still valid? IOW are we actually doing gluster.org email?
13:27 misc kkeithley: that's still valid
13:27 misc I am still waiting on a process to decide who can ask for one
13:27 misc but I am patient :)
13:27 kkeithley too patient
13:28 misc but if it was me, it would just be "people who got a commit accepted and ask for an alias"
13:29 misc (and we already have the technical process: https://github.com/gluster/infra-docs/blob/master/procedures/adding_alias.rst )
13:30 ndevos ndevos@gluster.org works, but I dont remember the reason someone created that for me...
13:31 misc it predates our current scm, and likely the older scm too, so I can't tell
13:45 amarts joined #gluster-dev
13:56 riyas joined #gluster-dev
14:06 pranithk1 joined #gluster-dev
14:07 ndevos kkeithley: /usr/lib64/glusterfs/3.10.4/xlator does not seem to be part of the (Fedora 26) package, have you noticed that before?
14:08 ndevos and that is a problem in the upstream package as well (just checked)
14:08 ndevos so, probably all RPMs...
14:20 kkeithley ndevos: not part of the package, as in `rpm -q --whatprovides /usr/lib64/glusterfs/NVR/xlator` doesn't show glusterfs?
14:30 Saravanakmr joined #gluster-dev
14:32 kkeithley ndevos: do you think it can be owned by glusterfs? Or do we have to have fine-grained ownership amongst glusterfs, glusterfs-client-xlators, and glusterfs-extra-xlators ?
14:35 ndevos kkeithley: from the master branch, I get this:
14:35 ndevos # rpm -qf /usr/lib64/glusterfs/3.12dev/xlator
14:35 ndevos file /usr/lib64/glusterfs/3.12dev/xlator is not owned by any package
14:35 kkeithley so that would be a yes to my question
14:36 kkeithley first question
14:36 ndevos it should be owned by all packages that have xlators, directories can be part of multiple packages
14:41 kkeithley related puzzle: /usr/lib64/glusterfs/$NVR/xlators/mgmt/{,glusterd.so} are owned by glusterfs-server. But neither the directory nor glusterd.so is listed in the %files server section.
14:41 kkeithley of the .spec
14:45 ndevos yeah, I did not look at it yet, but I expect more of those dirs to be missing
14:45 * ndevos drops off for a bit, will be back later
14:45 kkeithley /usr/lib64/glusterfs/$NVR/xlators/protocol/{client.so,server.so} are owned by -server and -client-xlators. Again, nothing in %files server, but %files client-xlators has client.so (and some others)
14:46 kkeithley so I'm curious about how those files managed to be owned by -server
14:48 kkeithley actually, no puzzle. I just need to read all of the %files server section
14:53 ankitr joined #gluster-dev
15:04 msvbhat joined #gluster-dev
15:06 sanoj joined #gluster-dev
15:25 adynamic_liu joined #gluster-dev
15:26 ndevos kkeithley: shall I file a bug  for the "file /usr/lib64/glusterfs/*/xlator is not owned by any package" problem?
15:26 kkeithley yup
15:26 ndevos do you want to send a patch for it, or shall I?
15:27 kkeithley I'm just testing my patch
15:27 ndevos cool!
15:27 ndevos kkeithley++
15:27 glusterbot ndevos: kkeithley's karma is now 194
15:30 sona joined #gluster-dev
15:30 kkeithley there's quite a bit more than just the xlators
15:32 ndevos :-/
15:33 kkeithley rpm -qf /usr/libexec/glusterfs/
15:33 kkeithley file /usr/libexec/glusterfs is not owned by any package
15:33 ndevos you can use https://bugzilla.redhat.com/show_bug.cgi?id=1470768 for it
15:33 glusterbot Bug 1470768: unspecified, unspecified, ---, kkeithle, ASSIGNED , file /usr/lib64/glusterfs/3.12dev/xlator is not owned by any package
15:48 * kkeithley doesn't want to look at nfs-ganesha rpms
16:06 skoduri joined #gluster-dev
16:11 ndevos kkeithley: did you only read the .spec, or did you use a script?
16:11 kkeithley read the spec
16:12 ndevos I just wrote http://termbin.com/x59b for it, you might like it
16:12 kkeithley oh nice
16:12 ndevos as long as you have the packages installed, its easy to run against nfs-ganesha or anything else
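
The termbin script itself is not preserved in the log; the sketch below is a guess at the general approach rather than ndevos' actual code: walk the installed glusterfs trees and flag every path that rpm -qf reports as not owned by any package.

#!/usr/bin/env python3
# Sketch of an "unowned files" check in the spirit of ndevos' termbin script
# (the real script is not preserved here; this is an assumption about its
# approach). It walks the installed trees and flags anything that rpm -qf
# reports as unowned.
import os
import subprocess

ROOTS = ["/usr/lib64/glusterfs", "/usr/libexec/glusterfs"]

def is_owned(path):
    # rpm -qf returns non-zero (and prints "file <path> is not owned by any
    # package") when no installed package owns the path
    res = subprocess.run(["rpm", "-qf", path], capture_output=True)
    return res.returncode == 0

for root in ROOTS:
    for dirpath, dirnames, filenames in os.walk(root):
        for entry in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
            if not is_owned(entry):
                print("file %s is not owned by any package" % entry)
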
16:14 ndevos I only have a limited set of packages installed now, but http://termbin.com/rqob still lists quite a few paths
16:15 kkeithley really? With every package installed I only have
16:15 kkeithley file /usr/libexec/glusterfs/python is not owned by any package
16:15 kkeithley file /usr/libexec/glusterfs/python/syncdaemon is not owned by any package
16:16 ndevos oh, but that is not with the patch that you just sent
16:17 kkeithley oh, yours is before the fix
16:17 adynamic_liu left #gluster-dev
16:17 kkeithley ?
16:17 ndevos yes, I'm testing some mem-pool free'ing and didnt include your patch on my test-system
16:37 mchangir kkeithley, can a directory be part of multiple sub-packages ?
16:37 kkeithley ndevos said it can
16:37 mchangir okay
16:37 kkeithley in the scrollback ^^^
16:38 mchangir found it
16:40 ndevos I thought they could be, yes - check what rpmlint says about the .spec
16:48 gyadav__ joined #gluster-dev
16:56 pkalever joined #gluster-dev
16:57 shyam joined #gluster-dev
17:09 rafi1 joined #gluster-dev
17:21 kkeithley rpmlint doesn't complain
17:31 gyadav_ joined #gluster-dev
17:46 gyadav joined #gluster-dev
17:49 rastar joined #gluster-dev
17:51 gyadav_ joined #gluster-dev
18:17 jiffin joined #gluster-dev
18:33 gyadav__ joined #gluster-dev
18:34 anoopcs joined #gluster-dev
18:49 misc rpmlint can't complain, it only takes 1 binary rpm into account
18:50 misc my ex-roomie did write code to look at that distro-wide however, but it was taking too much memory for the resources we had at that time
19:02 kkeithley rpmlint glusterfs.spec
19:05 msvbhat joined #gluster-dev
19:14 misc yeah, that's a part that can't be implemented without replicating fully the rpm spec parser, unfortunately :/
19:15 misc (because, everything can be a macro, some macros need to run software, etc)
20:23 vbellur joined #gluster-dev
22:56 Alghost joined #gluster-dev
