
IRC log for #gluster-dev, 2017-11-14


All times shown according to UTC.

Time Nick Message
00:04 rastar joined #gluster-dev
00:24 msvbhat joined #gluster-dev
00:29 timotheus1_ joined #gluster-dev
00:34 timotheus1_ joined #gluster-dev
02:20 mchangir joined #gluster-dev
02:28 shyam joined #gluster-dev
02:38 hgichon joined #gluster-dev
02:38 hgichon_ joined #gluster-dev
02:56 ilbot3 joined #gluster-dev
02:56 Topic for #gluster-dev is now Gluster Development Channel - https://www.gluster.org | For general chat go to #gluster | Patches - https://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
03:03 gyadav joined #gluster-dev
03:06 susant joined #gluster-dev
03:12 nbalacha joined #gluster-dev
03:24 rraja joined #gluster-dev
03:42 sunnyk joined #gluster-dev
03:57 rraja joined #gluster-dev
04:04 rraja_ joined #gluster-dev
04:05 psony joined #gluster-dev
04:08 itisravi joined #gluster-dev
04:20 sanoj joined #gluster-dev
04:26 Saravanakmr joined #gluster-dev
04:33 atinm joined #gluster-dev
04:37 skumar joined #gluster-dev
04:55 azhar joined #gluster-dev
04:57 azhar_ joined #gluster-dev
04:59 Saravanakmr joined #gluster-dev
05:04 rastar joined #gluster-dev
05:04 aravindavk joined #gluster-dev
05:12 PatNarciso joined #gluster-dev
05:15 humblec joined #gluster-dev
05:16 amarts joined #gluster-dev
05:19 vaibhav ndevos: bt log - http://termbin.com/y05h
05:29 rafi1 joined #gluster-dev
05:31 ppai joined #gluster-dev
05:34 gobindadas joined #gluster-dev
05:34 karthik_us joined #gluster-dev
05:37 uebera|| joined #gluster-dev
05:37 uebera|| joined #gluster-dev
05:44 dkhandel joined #gluster-dev
05:44 hgowtham joined #gluster-dev
05:46 apandey joined #gluster-dev
06:15 susant joined #gluster-dev
06:21 apandey_ joined #gluster-dev
06:27 kdhananjay joined #gluster-dev
06:28 pkalever joined #gluster-dev
06:28 apandey__ joined #gluster-dev
06:33 xavih joined #gluster-dev
06:42 karthik_us joined #gluster-dev
06:43 psony joined #gluster-dev
06:45 kotreshhr joined #gluster-dev
06:49 godas_ joined #gluster-dev
06:50 gobindadas joined #gluster-dev
06:52 gobindadas joined #gluster-dev
06:58 ppai amarts, https://review.gluster.org/#/c/18737/1 should get in before the rest
07:00 ppai amarts, and the string substitution isn't taken care of in glusterd1
07:04 amarts ppai, yes, i am aware, did the rebase in a way that this patch was sent first
07:04 ppai amarts, cool
07:04 amarts actually, found 2 conflicts.. 1 in posix
07:05 amarts 1 in EC -- apandey__ ^^
07:05 apandey__ amarts: checking
07:06 karthik_us joined #gluster-dev
07:18 poornima_ joined #gluster-dev
07:43 rastar joined #gluster-dev
08:04 amarts apandey__, (this is from experimental branch patch for options change)
08:04 amarts try applying it directly on master; if it applies, then good to send
08:05 amarts kshlm, ^^ you can review the patches (most of them have you as author)
08:06 amarts apandey__, I see that sunil sent the EC patch : https://review.gluster.org/18181
08:07 apandey__ amarts: ok. skumar ^^
08:16 jiffin1 joined #gluster-dev
08:17 jiffin1 nigelb: ping
08:21 amarts ppai, kshlm, aravindavk i am not sure if there are any other places outside of github where you are tracking GD2 tasks pending on others... let me know if any. I am planning to create component-specific issues for filesystem maintainers (i.e., xlators) to review if everything is working as expected
08:22 godas_ joined #gluster-dev
08:38 nigelb jiffin1: yep. Working on getting the job reviewed and merged.
08:38 nigelb jiffin1: Give me 15 mins?
08:38 jiffin1 nigelb: sure
08:38 jiffin1 no hurry
09:00 ndevos vaibhav: the only thing that looks weird in the bt is the address of fs=0x33efc0 that is passed to pub_glfs_set_volfile_server()
09:00 ndevos vaibhav: do you still have the gdb session open? could you "print *fs"?
09:00 nigelb jiffin1: all set.
09:00 nigelb jiffin1: release away :)
09:01 nigelb Still doesn't auto-copy to bits.gluster.org, but I'll do that soon-ish.
09:02 vaibhav ndevos: Cannot access memory at address 0x33efc0
09:02 ndevos vaibhav: ah, well, that explains the segfault then
09:02 ndevos vaibhav: now we only need to understand why an invalid address is passed...
09:03 vaibhav right
09:04 ndevos vaibhav: can you  edit tests/features/ipctest.py and below the "fs = api.glfs_new(...)" line, check "if fs is None" and print an error and exit?
09:05 vaibhav sure
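
A minimal sketch of the check suggested above, assuming ipctest.py loads libgfapi through ctypes as "api"; the soname and volume name here are placeholders. The restype/argtypes lines are a further, speculative guess at the root cause: without an explicit restype, ctypes assumes a function returns a C int, which truncates a 64-bit glfs_t pointer and can produce exactly the kind of bogus low address (0x33efc0) seen in the bt.

    import ctypes
    import sys

    api = ctypes.CDLL("libgfapi.so.0", use_errno=True)  # assumed soname

    # Guess: declare return/argument types so ctypes does not truncate
    # the returned 64-bit pointer to a C int.
    api.glfs_new.restype = ctypes.c_void_p
    api.glfs_new.argtypes = [ctypes.c_char_p]

    fs = api.glfs_new(b"testvol")  # hypothetical volume name
    if fs is None:
        print("glfs_new() failed, cannot continue")
        sys.exit(1)

ctypes maps a NULL c_void_p result to None, so the guard above catches an allocation failure before the pointer is passed on to the next glfs_*() call.
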
09:06 percevalbot joined #gluster-dev
09:09 xavih joined #gluster-dev
09:10 aravindavk joined #gluster-dev
09:11 ndevos PotatoGim: still waiting for a patch that adds your company to https://github.com/gluster/glusterfs/blob/master/extras/who-wrote-glusterfs/gitdm.domain-map and the gitdm.aliases files
09:16 vaibhav ndevos: it still throws segmentation fault
09:17 ndevos vaibhav: hmm... I'll have to think a little more about it then
* ndevos isn't much aware of how ctypes works in python
09:18 vaibhav ohh
09:22 ndevos vaibhav: on what distribution are you running the test?
09:22 vaibhav ndevos: I can print the value of fs in python script
09:22 vaibhav ndevos: ubuntu 16.04
09:23 poornima_ joined #gluster-dev
09:23 ndevos vaibhav: hmm, so glfs_new() was able to allocate something... even though the address looks suspicious
09:24 ndevos vaibhav: can you attach gdb and print the return value when glfs_new() exits?
09:26 sanoj joined #gluster-dev
09:26 vaibhav ndevos: ye
09:26 vaibhav ndevos: print glfs_new() [New Thread 0x3fff8a15910 (LWP 3974)] [New Thread 0x3fff8215910 (LWP 3975)] [New Thread 0x3fff7a15910 (LWP 3976)] $1 = 4103008
09:28 ndevos vaibhav: I mean set a breakpoint before glfs_new() returns, and then print *fs
09:29 ndevos vaibhav: depending on the version you are running, "b glfs.c:844" might be the right spot
09:30 vaibhav ohh
09:30 Humble joined #gluster-dev
09:31 pkalever joined #gluster-dev
09:36 skumar amarts, apandey__ : I will update the patch and send it to master
09:36 apandey__ skumar: ok
09:48 vaibhav ndevos: glfs_new() returns a structure
10:06 humblec joined #gluster-dev
10:08 ppai joined #gluster-dev
10:10 ppai csaba, hi
10:11 csaba ppai: hey
10:12 ppai csaba, I pulled in the latest code of the experimental branch. I see a crash of the glusterfsd process during mount. The stack trace (http://paste.openstack.org/show/626256/) led me back to this change (https://review.gluster.org/#/c/18665/)
10:16 ppai csaba, I can send you the core if you'd like that. I didn't file a BZ as the change is not in master yet. "prune_ops->prune_init" is NULL on inspecting the core.
10:17 csaba ppai: ok that sounds great. but I guess I also need the glusterfs binaries you use, what about them?
10:18 ndevos vaibhav: that's weird, if glfs_new() returns a structure, the next glfs_*() call in the python script should be able to pass that as a parameter
10:19 csaba ppai: prune_thing being 0 should not be an issue... see https://github.com/gluster/glusterfs/commit/4b54278f53525a8e227c4c1ce1efcbafe60f027d#diff-5c1ffb73c1712bdbcafcbde314386f79R1606
10:19 susant joined #gluster-dev
10:20 csaba ppai: so the stack trace still leaves me puzzled.
10:20 ppai csaba, prune_ops->prune_init is NULL
10:22 csaba ah
10:22 ppai csaba, I missed that in the paste output I sent you
10:22 csaba ppai: now that's a problem :)
10:24 csaba ppai: I don't understand why, it should be initialized to the correct value here: https://github.com/gluster/glusterfs/commit/4b54278f53525a8e227c4c1ce1efcbafe60f027d#diff-5c1ffb73c1712bdbcafcbde314386f79R54
10:26 csaba ppai: so how does the prune_ops_standard object look like?
10:29 gobindadas joined #gluster-dev
10:39 sankarshan joined #gluster-dev
10:44 poornima_ joined #gluster-dev
10:51 rafi joined #gluster-dev
11:12 rafi3 joined #gluster-dev
11:15 nigelb jiffin1: did you do a release?
11:15 jiffin1 nigelb: not yet
11:15 jiffin1 nigelb: I triggered now
11:16 nigelb cheers
11:17 nishanth joined #gluster-dev
11:19 nigelb sigh.
11:19 nigelb fixing.
11:20 nigelb jiffin1: can you retrigger
11:21 nigelb jiffin1: wait
11:21 nigelb jiffin1: right, retrigger now.
11:22 nigelb We changed the sha from 256 to 512
11:22 nigelb that broke parts of the script.
11:22 susant joined #gluster-dev
11:30 ppai csaba, hi, was afk. prune_ops_standard is empty
11:31 jiffin1 nigelb: i have retriggered
11:32 nigelb wtf
11:32 nigelb argh
11:35 nigelb misc: I got bitten by the centos6 vs centos7 problem.
11:36 nigelb Can I give you those two machines to re-image to centos7?
11:36 ppai csaba, what about this one ? https://github.com/gluster/glusterfs/blob/experimental/libglusterfs/src/inode.h#L145
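
The thread above boils down to a dispatch table whose members were never populated. A language-neutral sketch of that failure mode and the guard for it, in Python with made-up names (not gluster code):

    # Hypothetical ops table; mirrors a C struct of function pointers.
    class PruneOps:
        def __init__(self, prune_init=None):
            self.prune_init = prune_init

    def standard_prune_init():
        print("prune state initialized")

    empty_ops = PruneOps()                  # uninitialized table: entry is None/NULL
    standard_ops = PruneOps(standard_prune_init)

    for ops in (empty_ops, standard_ops):
        if ops.prune_init is None:          # guard before dispatching
            print("prune_init not set, skipping")
        else:
            ops.prune_init()

Calling empty_ops.prune_init() without the guard raises a TypeError here; in C the equivalent is a jump through a NULL function pointer, i.e. the crash in the paste.
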
11:37 kdhananjay left #gluster-dev
11:38 misc nigelb: sure, I can
11:39 nigelb jiffin1: haha, almost every time you triggered that job
11:39 nigelb jiffin1: it hit one of two machines where we had an older version of git.
11:39 misc nigelb: when do you want it ?
11:39 jiffin1 nigelb: I feel so lucky
11:40 nigelb misc: builder0 and builder1. I've removed them from doing regular Jenkins jobs.
11:40 nigelb misc: uhh... this week?
11:40 nigelb jiffin1: I got it to work!
11:40 nigelb And it should have emailed packaging@ and maintainers@
11:41 misc nigelb: ok
11:42 misc nigelb: you open a bug ? (I already killed the VM and am pushing the change so they get reinstalled)
11:43 nigelb misc: will do so now.
11:44 misc in the mean time, I am going to run ansible -i 'misc,' -m meal -a "status=prepared" all
11:44 nigelb misc: 1512877
11:45 nigelb misc: so something tells me supercolony may be having trouble again.
11:45 nigelb YEP.
11:46 misc mhh
11:46 misc I did disable the stuff on supercolony yesterday
11:46 misc let me dig
11:50 shyam joined #gluster-dev
11:52 misc mhhh
11:52 misc so seems my fix wasn't the right one ?
11:55 misc nigelb: so either I forgot something yesterday (like forgetting to restart syslog), or the fix is wrong
11:55 misc I will wait for tomorrow
11:56 bfoster joined #gluster-dev
11:56 jiffin1 nigelb: have you triggered the job?
11:56 jiffin1 or is it complete?
11:57 jiffin1 Okay I saw the mail
11:57 nigelb jiffin1: I restarted rsyslog
11:57 nigelb jiffin1: yeah, the delay was because our server had issues.
11:58 jiffin1 nigelb++ thanks
11:58 glusterbot jiffin1: nigelb's karma is now 73
11:58 misc how come supercolony impacts the release ?
12:01 nigelb misc: email.
12:01 nigelb misc: The release job sends an email to the lists.
12:01 nigelb We were looking for this email :)
12:02 vaibhav ndevos: in glfs_new(), structure fs has the following value: volname = 0x2aa00364820 <error: Cannot access memory at address 0x2aa00364820>,
12:23 karthik_us joined #gluster-dev
12:33 nbalacha joined #gluster-dev
12:42 karthik_us joined #gluster-dev
12:45 jiffin1 nigelb: I have raised a request on https://scan.coverity.com/ for the glusterfs project
12:48 pkalever joined #gluster-dev
12:53 kdhananjay joined #gluster-dev
12:54 kdhananjay left #gluster-dev
13:05 msvbhat joined #gluster-dev
13:05 amarts joined #gluster-dev
13:19 atinm joined #gluster-dev
13:38 pkalever joined #gluster-dev
13:55 psony joined #gluster-dev
13:58 nbalacha joined #gluster-dev
14:08 atinm joined #gluster-dev
14:19 skumar joined #gluster-dev
14:23 pkalever joined #gluster-dev
14:24 shyam joined #gluster-dev
14:28 jiffin joined #gluster-dev
14:34 kkeithley jiffin1: now you have two coverity accounts:  thotz (w/ admin) and jiffintt.  Shall I delete the jiffintt account?
14:35 kkeithley with gmail and redhat email addresses respectively.
14:36 kkeithley jiffin: ^^^
14:37 jiffin i tried to login with redhat email
14:37 jiffin it didn't work
14:37 jiffin anyway delete it
14:37 kkeithley okay
14:37 kkeithley jthottan at redhat ?
14:38 nigelb and he's left :)
14:40 gyadav joined #gluster-dev
14:52 kkeithley nigelb: this morning's release email for 3.12.3 says the build artifacts are a) the tar.gz, and b) the sha256.sum file.
14:52 kkeithley but when I look at job #21 in jenkins it says the artifacts are the tar.gz and the sha512sum file
14:53 kkeithley yesterday I took the tar.gz from job #21.
14:53 kkeithley today the tar.gz from job #21 is a different size than the one I dl'd yesterday
14:53 kkeithley although the contents are exactly the same.  I.e. untar and diff -r
15:00 nigelb kkeithley: I did wonder about the size of the tar file.
15:00 nigelb I was going to ask you :)
15:01 kkeithley dunno, maybe some randomness in inodes in the filesystem resulted in differences in the directory layout which compressed differently
15:03 kkeithley I was more puzzled by how yesterday's job #21 got replaced by a different job #21 today. And I presume the discrepancy between the email sha256sum is somehow due to the change between patch set 4 and patch set 5
15:05 nigelb kkeithley: wait, wait.
15:05 nigelb What?
15:06 kkeithley in release.sh in patch set 4, there's `sha512sum ... > glusterfs*.sha256sum`
15:07 kkeithley then in patch set 5 you changed/fixed it to `sha512sum ... > glusterfs*.sha512sum`
15:07 kkeithley yes?
15:07 nigelb Yes.
15:07 nigelb So when I proposed this patch, you said let's move to sha512sum.
15:07 kkeithley right
15:07 nigelb Which I did in the command.
15:07 nigelb Not anywhere else.
15:07 nigelb I caught most of them today.
15:07 kkeithley yup
15:08 nigelb Except in the yml, which I fixed and pushed directly to master.
15:08 kkeithley the email this morning for the 3.12.3 release has a link to the glusterfs-3.12.3.sha256sum artifact (which doesn't actually exist)
15:09 nigelb Oh.
15:09 nigelb AHem.
15:09 nigelb Yeah, the other bit I missed.
15:09 nigelb Fixing.
15:09 kkeithley and both the tar.gz and the sha*sum artifacts in _today's_ release job #21 are different than _yesterday's_ release job #21.
15:10 kkeithley I presume you did some behind the scenes skullduggery to make that happen
15:10 kkeithley anyway, doesn't matter
15:10 kkeithley what's done is done
15:10 nigelb so, the tar and sha512 sum will be different.
15:10 nigelb Because we do not yet have repeatable builds.
15:11 nigelb So lots of things will change the sha512sum.
15:11 nigelb But the size difference worries me.
15:11 nigelb I'll investigate that.
15:11 nigelb and I'll fix the email template
15:13 kkeithley I don't know how tar files get written. I believe you could untar+tar and end up with files in a different order in the tar file. Which could easily change how the tar file gets compressed.
15:14 nigelb ah.
15:14 xavih joined #gluster-dev
15:15 kkeithley man tar says you can do --sort=ORDER, and default is --sort=none
15:16 kkeithley If you want reproducibility we should change the tar to use '--sort=name'
15:16 glusterbot kkeithley: ''s karma is now -3
15:16 nigelb caught the email bug.
15:17 kkeithley nigelb++
15:17 glusterbot kkeithley: nigelb's karma is now 74
15:17 sanoj joined #gluster-dev
15:17 nigelb do you want to patch up our makefiles to do the sort=name?
15:17 nigelb Given that the move to bits.gluster.org isn't automated
15:18 nigelb this is a good time to test this stuff.
15:18 nigelb We can also control the emails to go to limited people.
15:18 kkeithley I'll submit a change if you file a BZ for it. ;-)
15:18 nigelb I wonder if I should submit two bugs.
15:18 nigelb One, a meta bug for reproducible builds.
15:19 nigelb And two, a dependent bug, as one of the things that could get us closer.
15:19 misc +1 for reproducible builds
15:19 kkeithley sure, as long as you don't assign the meta/tracker BZ to me
15:20 kkeithley ISTR we have some other BZ open for reproducible builds on debian. Or has that already been fixed and closed?
15:20 * nigelb looks
15:22 nigelb Can't find any.
15:22 * kkeithley wonders if it's a github issue instead
15:23 sankarshan The reproducible Debian build was a RHBZ I recall. Not that I have it handy
15:23 bfoster joined #gluster-dev
15:28 nigelb kkeithley: Looks like the tar command is even trickier - https://reproducible-builds.org/docs/stable-inputs/
15:31 kkeithley It's even trickier because the 'make dist' rule is something autoconf or automake adds to the makefile.
15:31 kkeithley I presume our jenkins jobs run in the C locale anyway
15:33 kkeithley autoconf+automake by way of the AM_INIT_AUTOMAKE(tar-pax) line in configure.ac
15:39 msvbhat joined #gluster-dev
15:54 nigelb kkeithley: Does `make dist` tar up everything after the ./configure?
15:54 nigelb could we override that with our own script?
15:55 kkeithley dist is a magick automake target in the Makefile
15:55 nigelb I know, I know. Could we choose not to use it?
15:56 nigelb Does it do anything magical?
15:57 kkeithley we could. I'm trying to untangle the magic bits
16:01 jstrunk joined #gluster-dev
16:03 msvbhat joined #gluster-dev
16:04 kkeithley But I'm not sure we should. (Just because you can do a thing doesn't mean you should.)
16:04 ndevos 'make dist' takes the directives of the Makefile.am into account; it is very different from a 'git archive ...' kind of call
16:04 wushudoin joined #gluster-dev
16:05 nigelb kkeithley: to answer our immediate questions, I'm going to use try.diffoscope.org to compare.
16:05 ndevos like, include the files that get generated with ./autogen.sh, but not the files that get generated with ./configure - the 'make dist' tarball should be ready to run ./configure with
16:06 amarts joined #gluster-dev
16:06 ndevos because the 'make dist' tarball includes generated files, the timestamps of those files will differ each time the tarball is generated, different checksum for sure (size changes would be rather unexpected)
16:07 nigelb so, the way to fix that is to fix the tar command with specific parameters
16:08 ndevos I'm not sure if it is worth spending time to fix that
16:11 kkeithley both yesterday's job #21 tar.gz and today's job #21 tar.gz, when uncompressed, are the same size.
16:11 nigelb kkeithley: can confirm it's only an ordering problem.
16:12 nigelb and timestamp
16:12 nigelb I took both of them, untarred and re-tar'd with predictable datetime and order
16:12 nigelb same hash.
16:12 kkeithley and when unpacked diff -r shows no diff between the two
16:12 nigelb maybe that's an option?
16:12 nigelb that we use `make dist` to create a tar, then untar and re-tar it?
16:13 kkeithley you lost me
16:13 nigelb I took both yesterday and today's tar.gz
16:13 nigelb I untarred them separately and retar'd them with this command
16:13 kkeithley ah, got it.
16:13 nigelb tar --mtime='2015-10-21 00:00Z' --clamp-mtime -czf glusterfs.tar.gz glusterfs
16:13 nigelb clamped down the mtime to a specific time.
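
The same normalization can be written with Python's tarfile module; a sketch for illustration only (the directory name, output name, and the fixed 2015-10-21 timestamp are carried over from the command above; nothing here is existing gluster tooling). It pins the three usual sources of run-to-run drift: member order, per-file metadata, and the gzip header timestamp.

    import gzip
    import os
    import tarfile

    SOURCE_DATE = 1445385600  # 2015-10-21 00:00Z as a Unix timestamp

    def normalize(info):
        # Clamp per-file metadata that varies between build runs.
        info.mtime = SOURCE_DATE
        info.uid = info.gid = 0
        info.uname = info.gname = ""
        return info

    def paths(top):
        # Walk in sorted order so member order is stable across runs.
        for root, dirs, files in os.walk(top):
            dirs.sort()
            yield root
            for name in sorted(files):
                yield os.path.join(root, name)

    # mtime=0 fixes the timestamp gzip writes into its own header.
    with gzip.GzipFile("glusterfs.tar.gz", "wb", mtime=0) as gz:
        with tarfile.open(fileobj=gz, mode="w") as tar:
            for p in paths("glusterfs"):
                tar.add(p, recursive=False, filter=normalize)

Two runs over an identical tree then produce byte-identical archives, so the sha512sum matches, which is the property being tested in this exchange.
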
16:14 nigelb man tar
16:14 nigelb bleh
16:16 kkeithley trying to add --sort=name to the tar command in the `make dist` is a Hard Problem®
16:17 kkeithley and I'm starting to think it's not a problem we need to solve
16:17 kkeithley We should not have that many files in the tar.gz that are generated by autogen.sh, configure, or `make dist`
16:18 nigelb kkeithley: My solution is to say, we can do whatever we want in the tar generated by `make dist`
16:18 nigelb let's untar and redo it correctly.
16:18 nigelb before shipping it as a "build".
16:18 nigelb I'm curious if a newer version of the toolchain has fixed this already.
16:19 nigelb because the deterministic build thing has been going on for a few years.
16:19 * nigelb -> bed
16:19 kkeithley we could untar and `tar --sort=name` followed by sha512sum. Are you thinking of something else beyond that?
16:20 nigelb and --clamp-mtime
16:20 nigelb so the timestamp is the same.
16:20 kkeithley I'm not sure I like that
16:22 kkeithley I don't think we have very many generated files.  pkgconfig files with version. What else?
16:22 pladd joined #gluster-dev
16:26 kkeithley we're not really planning to run a release(-new) job on the same tag over and over again. right now we're trying to understand why two tar.gz files of the same tag are different sizes.
16:26 kkeithley And I believe we've answered that.
16:30 pladd joined #gluster-dev
16:38 skumar joined #gluster-dev
16:56 gyadav joined #gluster-dev
16:56 timotheus1_ joined #gluster-dev
17:03 wushudoin joined #gluster-dev
19:02 gyadav joined #gluster-dev
19:13 shyam joined #gluster-dev
19:31 rraja_ joined #gluster-dev
19:37 kkeithley just out of curiosity, what does anyone think about the possibility of switching to glibc/rpc or libtirpc instead of embedded rpc for 4.0?
19:40 rafi1 joined #gluster-dev
20:32 wushudoin joined #gluster-dev
20:53 xavih_ joined #gluster-dev
21:15 major joined #gluster-dev
21:39 jiffin joined #gluster-dev
22:04 decayofmind joined #gluster-dev
22:04 owlbot joined #gluster-dev
23:14 major joined #gluster-dev
23:21 major joined #gluster-dev
23:37 msvbhat joined #gluster-dev
