
IRC log for #gluster-dev, 2016-06-23


All times shown according to UTC.

Time Nick Message
00:11 shyam joined #gluster-dev
01:48 ilbot3 joined #gluster-dev
01:48 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
01:58 spalai joined #gluster-dev
02:08 nishanth joined #gluster-dev
02:31 pkalever joined #gluster-dev
03:13 spalai left #gluster-dev
03:20 hagarth joined #gluster-dev
03:22 aravindavk joined #gluster-dev
03:23 spalai joined #gluster-dev
03:25 magrawal joined #gluster-dev
03:29 nbalacha joined #gluster-dev
03:43 nbalacha joined #gluster-dev
03:57 ppai joined #gluster-dev
03:58 jiffin joined #gluster-dev
04:10 atinm joined #gluster-dev
04:17 hchiramm joined #gluster-dev
04:17 gem joined #gluster-dev
04:18 aspandey joined #gluster-dev
04:22 shubhendu joined #gluster-dev
04:25 itisravi joined #gluster-dev
04:33 raghug joined #gluster-dev
04:43 spalai joined #gluster-dev
04:54 Manikandan joined #gluster-dev
04:58 aravindavk joined #gluster-dev
05:02 Manikandan joined #gluster-dev
05:04 raghug joined #gluster-dev
05:05 prasanth joined #gluster-dev
05:10 spalai joined #gluster-dev
05:14 pkalever joined #gluster-dev
05:14 ndarshan joined #gluster-dev
05:22 rafi joined #gluster-dev
05:22 jiffin joined #gluster-dev
05:28 hgowtham joined #gluster-dev
05:31 rafi1 joined #gluster-dev
05:33 ashiq joined #gluster-dev
05:35 rafi joined #gluster-dev
05:37 poornimag joined #gluster-dev
05:40 Bhaskarakiran joined #gluster-dev
05:40 luizcpg_ joined #gluster-dev
05:40 karthik___ joined #gluster-dev
05:42 Apeksha joined #gluster-dev
05:46 mchangir joined #gluster-dev
05:52 kdhananjay joined #gluster-dev
06:01 pur_ joined #gluster-dev
06:07 aspandey joined #gluster-dev
06:12 kotreshhr joined #gluster-dev
06:14 atalur joined #gluster-dev
06:16 Manikandan joined #gluster-dev
06:17 raghug joined #gluster-dev
06:27 asengupt joined #gluster-dev
06:36 skoduri joined #gluster-dev
06:46 msvbhat joined #gluster-dev
06:53 skoduri joined #gluster-dev
06:58 [o__o] joined #gluster-dev
07:00 gem joined #gluster-dev
07:05 gem joined #gluster-dev
07:07 atalur joined #gluster-dev
07:09 gem_ joined #gluster-dev
07:21 [o__o] joined #gluster-dev
07:24 post-factum hey, guys. got the following for replica 2 storage yesterday: http://termbin.com/9cdo also found this https://bugzilla.redhat.com/show_bug.cgi?id=1134305 i thought it was fixed
07:24 glusterbot Bug 1134305: medium, medium, ---, pkarampu, POST , rpc actor failed to complete successfully messages in Glusterd
07:24 post-factum any thoughts what could happen?
07:25 post-factum it seems after that we've got split-brain on several files
07:25 post-factum (3.7.11)
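For reference, a minimal sketch of how split-brain entries like these are usually listed and resolved from the CLI; the volume name, brick and file path below are placeholders, not taken from the log:

    # list files currently in split-brain
    gluster volume heal myvol info split-brain
    # resolve one entry by picking a brick as the source (one of several available policies)
    gluster volume heal myvol split-brain source-brick server1:/bricks/brick1 /path/to/file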
07:45 nigelb I'm testing jenkins job builder with this job -> https://build.gluster.org/job/glusterfs-rpms-test/
07:45 nigelb (FYI)
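For context, a small sketch of how a Jenkins Job Builder definition is typically validated locally before being pushed; the jobs/ directory and config file name are placeholders:

    # render the job XML locally without touching Jenkins
    jenkins-jobs test jobs/ -o output/
    # push the definition once it looks right
    jenkins-jobs --conf jenkins.ini update jobs/ glusterfs-rpms-test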
07:52 karthik___ joined #gluster-dev
08:05 baojg joined #gluster-dev
08:10 nigelb YESSSSSSSS
08:10 nigelb https://build.gluster.org/job/glusterfs-rpms-test/
08:20 mchangir nigelb, any clue why this one failed: https://build.gluster.org/job/glusterfs-devrpms-el6/17063/console
08:22 nigelb Not a clue.
08:22 nigelb Looks like an rpm build error though
08:22 nigelb ERROR: Exception(glusterfs-3.9dev-0.194.git28e323f.el6.src.rpm) Config(epel-6-x86_64) 0 minutes 8 seconds
08:23 nigelb which probably led to
08:23 nigelb ERROR: Command failed. See logs for output.
08:23 nigelb the chroot isn't clean for some reason.
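A possible way to recover from a dirty chroot like this, sketched with mock's standard options (the srpm name is the one from the error above; treat the exact invocation as an assumption):

    # throw away the suspect chroot and its caches for this config
    mock -r epel-6-x86_64 --scrub=all
    # retry the build by hand
    mock -r epel-6-x86_64 --rebuild glusterfs-3.9dev-0.194.git28e323f.el6.src.rpm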
08:23 nigelb mchangir: This machine was disconnected and I brought it back online just now.
08:23 nigelb I wonder if it's related to the node.
08:27 mchangir nigelb, okay
08:27 nigelb I just did a retrigger.
08:27 nigelb Let's see how that goes.
08:28 mchangir sure
08:29 itisravi joined #gluster-dev
08:37 nigelb mchangir: Retrigger seems to have worked okay.
08:37 nigelb And another build worked fine on that machine.
08:37 nigelb So I suspect something transient.
08:40 pranithk1 joined #gluster-dev
08:47 nigelb Anyone merging something to master today?
08:51 mchangir nigelb, yup, the build passed
09:02 jiffin rjoseph, asengupt:  can u please review http://review.gluster.org/#/c/13763/6 ?
09:02 jiffin and http://review.gluster.org/#/c/13764/
09:08 ndevos nigelb: maybe you can file a bug for http://www.gluster.org/pipermail/gluster-devel/2016-June/049876.html ?
09:12 gem joined #gluster-dev
09:13 skoduri raghug, pranithk1 ..have a question wrt unlink fop..do we remove/unlink a file in the posix layer if there are any open fds?
09:14 hchiramm joined #gluster-dev
09:21 nigelb ndevos: I've already asked in the RTD IRC channel.
09:22 nigelb There is a bug.
09:22 nigelb apparently rtd + mkdocs don't play nice.
09:22 pranithk1 skoduri: We move the gfid to <brick-path>/.glusterfs/unlink if there are openfds on the file and the linkcount is 0
09:22 pranithk1 skoduri: Once the inode is forgotten it will be removed
09:22 pranithk1 skoduri: Or brick restart will do the same
09:22 ppai nigelb, I've requested an update on https://github.com/rtfd/readthedocs.org/issues/2013
09:23 skoduri pranithk1, okay..so operations coming in on those open fds will continue till the inode is forgotten..all the rest shall receive ESTALE/ENOENT errors, is it?
09:23 pranithk1 skoduri: fd based fops will succeed. Even with anon-fd. But non-fd based fops will fail
09:23 skoduri pranithk1++ okay thanks
09:24 glusterbot skoduri: pranithk1's karma is now 5
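What pranithk1 describes can be seen directly on a brick; a small sketch, with the brick path as a placeholder:

    # a file unlinked while an fd is still open is parked here until the inode is forgotten
    ls -l /bricks/brick1/.glusterfs/unlink
    # fd-based fops on the already-open fd keep working; path-based fops now fail with ENOENT/ESTALE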
09:28 hchiramm ppai++ thanks ..
09:29 glusterbot hchiramm: ppai's karma is now 14
09:31 jiffin1 joined #gluster-dev
09:35 aspandey joined #gluster-dev
09:40 nishanth joined #gluster-dev
09:41 gem joined #gluster-dev
09:43 kdhananjay joined #gluster-dev
09:44 nigelb ppai: If we have to move to rst and sphinx
09:44 nigelb I'm all for it.
09:44 nigelb Do you want to start a thread on the list about potentially changing the markup?
09:45 ppai nigelb, I'd also prefer it. pandoc can do the conversion
09:45 nigelb exactly
09:45 ppai nigelb, It will require some content supervision though
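A rough sketch of the pandoc conversion ppai refers to, with file and directory names as placeholders; the output would still need the manual review mentioned above:

    # convert a single page from markdown to reStructuredText
    pandoc -f markdown -t rst admin-guide/setup.md -o admin-guide/setup.rst
    # bulk-convert a docs tree
    find docs -name '*.md' -exec sh -c 'pandoc -f markdown -t rst "$1" -o "${1%.md}.rst"' _ {} \;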
09:46 nigelb Let's start a thread on the mailing list and see where everyone stands
09:46 atalur joined #gluster-dev
09:46 ppai nigelb, if rtd folks reach out giving an estimated time for mkdocs+rtd fix, then we can decide. What do you think ?
09:46 ppai nigelb, contributors aren't familiar with .rst syntax yet
09:47 nigelb rst is the standard documentation syntax. It's only different in subtle ways.
09:47 nigelb I think sphinx is a better toolchain for documentation anyway.
09:47 * nigelb is more than slightly biased
09:47 ppai nigelb, yeah, most projects (especially python) use sphinx
09:49 ppai nigelb, on an unrelated note, there's some internal effort at red hat for converting downstream product doc into upstream. I don't have many details on that though
09:49 pranithk1 joined #gluster-dev
09:49 nigelb More reasons to start a discussion now.
09:49 ppai agreed
09:57 misc nigelb: rst is also standard, unlike markdown :)
09:58 misc http://zverovich.net/2016/06/16/rst-vs-markdown.html
09:58 misc you can just throw this one to people asking on why :p
10:04 jiffin1 joined #gluster-dev
10:12 s-kania joined #gluster-dev
10:25 gem joined #gluster-dev
10:25 msvbhat_ joined #gluster-dev
10:34 hchiramm joined #gluster-dev
10:37 itisravi joined #gluster-dev
10:39 atalur_ joined #gluster-dev
10:39 jiffin joined #gluster-dev
10:39 ndevos hi Apeksha, do you know if connectathon tests have been put in distaf yet?
10:41 kdhananjay joined #gluster-dev
10:47 luizcpg hi, quick question… by clicking on the link “GlusterFS version 3.7 is the latest version at the moment.” on the site… I can see version 3.8.0, which is weird… but what about gluster 3.7.12? Are you going to jump to 3.8.0, and is 3.7.x finished? Thanks
10:53 aspandey joined #gluster-dev
11:05 post-factum luizcpg: no, 3.7 is still maintained
11:09 Apeksha ndevos, hi niels
11:09 Apeksha ndevos, cthon tests are there in distaf downstream
11:10 Apeksha ndevos, they are not yet ported to distaf upstream
11:13 luizcpg Are we going to have 3.7.12 out shortly?
11:15 anoopcs luizcpg, Yes. 3.7.12 will be released shortly (hopefully)
11:16 jiffin kshlm (he is not online) is managing the release 3.7.12
11:17 luizcpg quick question… do we have issues with the shard feature on 3.7.11 ?
11:17 luizcpg features.shard-block-size: 512MB
11:17 luizcpg features.shard: on
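For reference, those options map to volume-set commands of this shape; the volume name is a placeholder:

    gluster volume set vmstore features.shard on
    gluster volume set vmstore features.shard-block-size 512MB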
11:17 prasanth joined #gluster-dev
11:17 luizcpg I’m using it in production, so I would like to know if it’s risky somehow…
11:18 atinm itisravi, ^^ since I don't find Krutika/Pranith around
11:19 atinm luizcpg, there are some fixes related to sharding which have gone in post 3.7.11
11:19 itisravi luizcpg: what issues specifically?
11:21 luizcpg http://picpaste.com/pics/RjvQrmOW.1466680896.png
11:22 luizcpg I’m not sure if it’s an issue or not...
11:22 itisravi luizcpg: Most of the fixes that have gone in for shard post 3.7.11 are related to O_DIRECT. Unless you plan to use that, 3.7.11 should be good I guess.
11:23 luizcpg I can see a “constant” traffic of ~8 mbps in and ~5 mbps out… I’m using it as the engine and VM storage domains for oVirt 3.6.6
11:23 luizcpg since I’m just using gluster to store the OS data, I don’t see any reason to have this traffic.
11:24 luizcpg there is no application data flowing on top of gluster
11:24 luizcpg I’m using gluster 3.7.11 as the engine and VM storage domains for oVirt 3.6.6
11:25 itisravi Are there any pending heals?
11:25 luizcpg https://bugzilla.redhat.com/show_bug.cgi?id=1298693#c26
11:25 glusterbot Bug 1298693: medium, medium, ovirt-3.6.7, stirabos, VERIFIED , [RFE] - Single point of failure on entry point server deploying hosted-engine over gluster FS
11:25 luizcpg ^ here is my gluster settings
11:26 poornimag joined #gluster-dev
11:27 luizcpg Do you see any issues with my gluster settings ?
11:28 luizcpg I’ve disabled some performance settings following recommendations from the oVirt list a while ago, but I’m not sure if they are completely correct.
11:30 itisravi nothing except ping timeout- 10 seconds might be too low. 30 seconds should be good.
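itisravi's suggestion would translate to something along these lines; the volume name is a placeholder:

    gluster volume set vmstore network.ping-timeout 30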
11:32 luizcpg no heals
11:32 luizcpg Status: Connected
11:32 luizcpg Number of entries: 0
11:32 luizcpg for all
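The output above is what the per-brick heal-info query prints; a sketch, with the volume name as a placeholder:

    # "Number of entries: 0" on every brick means nothing is pending heal
    gluster volume heal vmstore info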
11:32 luizcpg what about this traffic… do you think it’s normal?
11:36 mchangir joined #gluster-dev
11:37 itisravi If the VMs aren't running any workload on them, then maybe not. But I'm not sure how much of I/O an idle VM should take.
11:38 itisravi Then there's also ovirt orchestration which I think updates the sanlock file etc.
11:40 rastar joined #gluster-dev
11:41 lpabon joined #gluster-dev
11:56 nigelb misc: srpm = src.rpm file?
11:57 post-factum nigelb: should be
12:02 prasanth joined #gluster-dev
12:03 misc nigelb: yeah
12:03 misc nigelb: but mostly because in the rpm set of directory, there is SRPMS SOURCES RPMS , etc
12:04 luizcpg itisravi: I also don’t know exactly how much I/O “idle” vm’s consume on top of oVirt… ~8 mbps looks pretty high, but I’m not sure if the performance settings can mitigate this traffic somehow…
12:04 misc so not sure if RPMS/** was all rpms in the rpm way of seeing, or all rpms as "all files in the rpm format"
12:05 itisravi luizcpg: Assuming you can test, one way to find out would be to shutdown all VMs and see if the traffic comes down.
12:06 luizcpg they are in production… it’s not so easy
12:06 luizcpg but it’s an option for sure.
12:06 nigelb misc: you can see the output here: https://build.gluster.org/job/glusterfs-rpms-test/
12:06 kotreshhr left #gluster-dev
12:07 nigelb This is exactly as it used to be.
12:07 luizcpg anyway, do you think enabling some performance/caching feature might mitigate this traffic somehow?
12:07 nigelb Shouldn't be any different from before.
12:07 misc oh yeah, just that I wasn't sure how it was before :)
12:07 misc but yeah, there is src.rpm, so that's ok
12:08 pranithk1 joined #gluster-dev
12:08 itisravi right
12:08 nigelb I haven't done rpm packaging in the past, so this is new to me too :)
12:10 itisravi luizcpg: nothing I can think of. pranithk1 do you have any suggestions for volume options for luizcpg beyond what he's done in https://bugzilla.redhat.com/show_bug.cgi?id=1298693#c26 ?
12:10 glusterbot Bug 1298693: medium, medium, ovirt-3.6.7, stirabos, VERIFIED , [RFE] - Single point of failure on entry point server deploying hosted-engine over gluster FS
12:13 luizcpg itisravi:  http://pastebin.com/4SGS6HKd
12:13 luizcpg by the way, I have another gluster replica 3 + arbiter for the application, therefore it’s application workload…
12:14 pranithk1 luizcpg: hey! long time... how are you?
12:14 luizcpg I’ve enabled some performance settings and the volume behaves pretty good...
12:14 luizcpg good good man  and you ?
12:14 pranithk1 luizcpg: good. What problems are you running into?
12:14 itisravi pranithk1: it seems he's seeing 8mbps of traffic across the nodes even when there's no IO in the VMs and no heals pending.
12:15 pranithk1 itisravi: oh. luizcpg: Is it fuse mount?
12:15 luizcpg looks like a very “similar” traffic on the replica 3 nodes being used by oVirt 3.6.6
12:15 luizcpg yes, fuse mount
12:16 luizcpg http://picpaste.com/pics/RjvQrmOW.1466680896.png
12:16 luizcpg ^ this is the traffic
12:16 pranithk1 luizcpg: What is your setup, where are the mounts located?
12:17 luizcpg by disabling the performance settings, the traffic load is pretty similar and I’m not sure if gluster is exchanging too much data, due to the lack of caching features enabled.
12:17 luizcpg I have 3 oVirt 3.6.6 nodes on top of 3 external gluster replica 3 hosts
12:18 luizcpg the gluster mount is being used by VDSM
12:18 pranithk1 luizcpg: I want to understand if clients and servers reside on same machines or different
12:18 luizcpg no
12:19 luizcpg It’s not a hyperconverged setup with oVirt
12:19 pranithk1 luizcpg: okay
12:20 luizcpg I have six (6) different servers … 3 hosts for gluster replica 3 and another 3 hosts of oVirt hosted engines.
12:20 kdhananjay joined #gluster-dev
12:21 pranithk1 luizcpg: okay and fuse mounts are on these 3 hosts where oVirt is there?
12:21 luizcpg yes
12:21 luizcpg each server has one fuse mount ….
12:21 luizcpg being managed by VDSM
12:22 pranithk1 luizcpg: okay. Is it production setup?
12:22 luizcpg yes
12:22 luizcpg it’s running on production
12:22 luizcpg gluster 3.7.11 by the way
12:22 pranithk1 luizcpg: I was wondering if you see the same data transfer rates if you stop the volume....
12:22 pranithk1 luizcpg: Is there a way to find which process is sending data out the network?
12:23 itisravi asked him to do that, but not possible unfortunately.
12:23 luizcpg hold on
12:23 pranithk1 itisravi: yeah :-/
12:26 luizcpg http://picpaste.com/pics/wlBEWCPB.1466684758.png
12:27 pranithk1 luizcpg: I am wondering if you know of any command to see process wise breakup of bandwidth usage
12:27 pranithk1 luizcpg: I don't :-(, let me google a bit
12:27 luizcpg ^ I only have traffic from the oVirt nodes (10.1.50.4, 10.1.50.5 and 10.1.50.6) to the gluster node… in this case 10.1.50.7 (gluster1)
12:28 itisravi pranithk1: luizcpg nethogs
12:29 pranithk1 itisravi: yeah, google says the same :-)
12:29 luizcpg yep
12:29 luizcpg let me try
12:29 itisravi Trying it out now. it gives a nice output.
12:30 itisravi sent+recv bytes per process on each interface.
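A minimal sketch of the nethogs run being discussed; the interface name is an assumption:

    # per-process SENT/RECEIVED bandwidth on the interface carrying gluster traffic
    nethogs eth0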
12:31 luizcpg http://picpaste.com/pics/YZTQiDqe.1466685073.png
12:31 luizcpg ^ it’s gluster as expected
12:33 luizcpg I’m not saying it’s a problem necessarily, but the load looks pretty high for only the OS data of the vm’s…. I don’t have application load in this case… that’s why ~8 mbps looks weird to me.
12:34 pranithk1 luizcpg: Where is it sending to? Looking at the processes, it seems like the brick is serving reads...
12:34 kkeithley skoduri,jiffin,sraj: ping
12:34 glusterbot kkeithley: Please don't naked ping. http://blogs.gnome.org/mark​mc/2014/02/20/naked-pings/
12:34 kkeithley silly glusterbot
12:34 itisravi pranithk1: I was telling him earlier that sanlock files get updated frequently even when nothing is running. But not sure if 8mbps can be attributed to only that.
12:36 luizcpg I agree pranithk1 that gluster is serving reads, but it would be good to try to reduce the traffic load if possible… so, should I change my gluster settings to minimize the traffic by enabling the performance features I disabled previously?
12:36 pranithk1 luizcpg: wait..
12:37 pranithk1 luizcpg: It says it is receiving 1.5MB/sec
12:37 luizcpg on this specific moment ….
12:37 pranithk1 luizcpg: it also says it is sending only 236KB/sec
12:38 pranithk1 luizcpg: So it seems like all these are writes... not reads
12:40 luizcpg http://pastebin.com/15wTyNeK
12:40 luizcpg ^ look at this
12:41 kdhananjay1 joined #gluster-dev
12:41 luizcpg I’m also receiving some alerts for the monitoring system saying I’m having iowait’s …
12:41 pranithk1 luizcpg: 1 thing at a time :-)
12:42 luizcpg for sure… it might be related with what you said about being writes and not reads.
12:42 luizcpg :)
12:44 pranithk1 luizcpg: You were saying there is 8MB/sec right? We could only see 2MB worth of transfer in your nethogs output
12:44 pranithk1 luizcpg: Wondering where we are getting the extra 6MB/sec
12:45 mchangir joined #gluster-dev
12:46 ira joined #gluster-dev
12:48 luizcpg pranithk1: it’s ~8 Mbits/sec which is ~ 1MB/s
12:48 luizcpg the unit is mega bits per second.
12:48 pranithk1 luizcpg: ah! cool. Thanks!
12:48 luizcpg and not mega bytes per second
12:48 luizcpg cool
12:48 pranithk1 luizcpg: So why are the VMs writing data if they are not doing much?
12:49 pranithk1 luizcpg: Are you sure there are no applications in the images which keep writing to logs...?
12:49 luizcpg I have  ~20 vm’s running centos7 os data only.
12:49 pranithk1 luizcpg: How many VMs do you have?
12:49 luizcpg writing system logs and that’s it..
12:50 pranithk1 luizcpg: cool so around 75KB per VM per second?
12:50 pranithk1 luizcpg: Do you think it may be generating as much logs etc inside the VM?
12:51 pranithk1 luizcpg: not even logs actually, even if new inodes are created/deleted etc, everything will be a write on the VM harddisk which will be propagated to glusterfsds as writes...
12:53 pranithk1 luizcpg: One way to confirm is to do a tcpdump for some time and see what kind of operations are going on against the VM images stored on glusterfs....
12:53 pranithk1 luizcpg: That would give all the info you may need to find why so many writes are coming in...
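A hedged sketch of the capture pranithk1 suggests; the interface and port range are assumptions (24007 is glusterd, bricks normally listen on ports from 49152 upward):

    # capture gluster traffic on a brick node for a short while
    tcpdump -i eth0 -s 0 -w gluster.pcap port 24007 or portrange 49152-49251
    # inspect the capture afterwards (wireshark has a gluster dissector) to see which fops dominate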
12:53 luizcpg got it..
12:54 luizcpg I don’t think it’s an issue… just want to understand better how gluster behaves…
12:54 luizcpg do you think I can change the performance settings somehow ?
12:54 luizcpg is it recommended in my case?
12:55 pranithk1 luizcpg: I don't think any perf settings will change the writes IMO :-)
12:56 luizcpg another question is about sharding.
12:56 pranithk1 luizcpg: you should ask kdhananjay1 about it :-). We are joining a meeting in 4 minutes...
12:56 luizcpg Someone from the oVirt list said that in gluster 3.7.12 we’ll have more “patches”
12:56 luizcpg is the sharding stable on 3.7.11 ?
12:57 pranithk1 luizcpg: It is perfect in 3.7.12
12:57 kdhananjay1 luizcpg: i would say use 3.7.12
12:57 luizcpg I’m using sharding in this specific case…
12:57 luizcpg got it.
12:57 luizcpg When do you plan to release 3.7.12 ?
12:58 jiffin1 joined #gluster-dev
12:58 pranithk1 luizcpg: It is almost out the door. We are getting acks from maintainers to get it out
12:58 luizcpg awesome…
12:58 luizcpg thanks..
13:01 raghug joined #gluster-dev
13:18 nigelb ndevos: You probably want to exclude project-infrastructure from your bugzilla scripts.
13:21 misc or we should mark them all as mainline
13:25 msvbhat_ joined #gluster-dev
13:28 ndevos nigelb: it is already excluded? https://github.com/gluster/release-tools/blob/master/check-bugs.py
13:29 ndevos nigelb: I've closed bugs that got patches merged in the git repo, maybe some of those should have used a different component?
13:29 nigelb You set triaged for a few bugs
13:29 nigelb which had nothing to do with patches :)
13:30 ndevos nigelb: oh, right, we triage any bug against the Gluster project, I'm not sure if it makes sense to skip some?
13:31 misc well, it depend, what triaging would imply ?
13:31 misc (I skipped the meeting since I go to lunch usually)
13:31 ndevos nigelb: we use the bugzilla queries on https://public.pad.fsfe.org/p/gluster-bugs-to-triage to do the triaging
13:32 ndevos misc: triage basically means checking if the component is set correctly, sufficient information is there to fix/address the bug and if a suitable person is on CC
13:33 ndevos http://gluster.readthedocs.io/en/latest/Contributors-Guide/Bug-Triage/ contains more notes about it
13:33 misc ndevos: and so, people are able to do that for infra bug ?
13:33 misc (because for now, we are using that as todo list...)
13:35 ndevos misc: I guess we should be able to do it for most bugs, and for things we dont know, we check with the maintainers of the component(s)
13:35 ndevos misc, nigelb: if you guys file bugs for your own todo list, you should probably just add the "Triaged" keyword immediately
13:36 ndevos we only do the meeting to catch bugs that are filed by users, and maintainers or developers have not looked at yet, it is something like a safety net
13:37 misc yeah, that's a good process, but I do not think we want to add unneeded burden
13:38 nigelb If all you're going to add is "triaged" keyword, that's fine for now. I'll see if we can get you a better search query.
13:41 nigelb misc: updated that yaml file with a template :)
13:41 nigelb and added some documentation that's an utter guess :)
13:41 nigelb I welcome better suggestions.
13:41 ndevos nigelb: well, how would we find out that users report bugs against an incorrect component, like project-infrastructure, if we skip that in the listing?
13:42 nigelb every bug automatically CC's gluster-infra@
13:42 nigelb We'd know and probably ping you about that anyway.
13:47 luizcpg joined #gluster-dev
13:47 nbalacha joined #gluster-dev
13:47 rraja joined #gluster-dev
13:49 luizcpg joined #gluster-dev
13:50 nigelb Lovely, the triggering works beautifully :) https://build.gluster.org/job/glusterfs-rpms-test/
13:57 atinm nigelb, another instance of build failure - https://build.gluster.org/job/glusterfs-rpms/3493/console
13:58 atinm nigelb, however I get another notification of build success from http://build.gluster.org/job/glusterfs-rpms-el6/3419/
13:59 nigelb atinm: I think I know what's wrong.
13:59 nigelb for the glusterfs-rpm job, we have concurrence turned on.
14:00 nigelb So a package build might be attempted on the same machine at the same time.
14:00 nigelb My new job definitions turn it off.
14:17 misc nigelb: speaking of docs, would the docs for making a release go in the infra repo, or somewhere else ?
14:19 nigelb release management docs have been in the gluster.readthedocs.org site. I'm happy for it to stay there.
14:19 nigelb Unless you see a reason for it to move into infra?
14:20 hagarth joined #gluster-dev
14:20 misc well, same as we have separate infra stuff:
14:20 misc - separate from gluster versions
14:20 misc - not relevant to downstream (the day it will be applicable, if it will be applicable)
14:21 misc I am also wondering about the overlap between infra people and rel-eng, in terms of access, etc
14:21 nigelb That's something we have to nail down in the short-term.
14:21 nigelb In the long term, I want to ensure people can do their jobs without access.
14:22 skoduri joined #gluster-dev
14:22 misc let skynet handle stuff :)
14:23 misc nigelb: but people would still need different access, like access to say "this is the final build" vs not pushing the button
14:23 misc so we still need access, just to different things
14:23 nigelb yep
14:31 msvbhat_ joined #gluster-dev
14:36 pkalever left #gluster-dev
14:38 Manikandan joined #gluster-dev
14:57 nigelb (the rpm jobs are failing. I pushed a fix and triggered a new build)
15:05 wushudoin joined #gluster-dev
15:17 pranithk1 joined #gluster-dev
15:27 pkalever joined #gluster-dev
15:31 spalai joined #gluster-dev
15:45 rafi joined #gluster-dev
16:18 baojg joined #gluster-dev
16:19 nbalacha joined #gluster-dev
16:31 hagarth joined #gluster-dev
16:31 ira joined #gluster-dev
16:54 atinm joined #gluster-dev
16:56 kotreshhr joined #gluster-dev
16:57 kotreshhr left #gluster-dev
17:14 shubhendu joined #gluster-dev
17:49 JoeJulian Oh, hey, neat. Were you guys already aware that fio had a gfapi engine?
17:50 misc I am not sure what fio is
17:51 JoeJulian Performance testing tool
17:55 misc and gfapi means the gluster api, so we can use fio to test the perf of a gluster backend ?
17:56 JoeJulian Yep
17:57 JoeJulian I'm rather impressed with this little tool. I still won't recommend it generally since I believe that performance testing should simulate one's actual use case, but still it's pretty complete for what it is.
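For reference, a minimal fio job using the gfapi engine JoeJulian mentions; the volume and host names are placeholders and the workload parameters are only an example:

    cat > gfapi-test.fio <<'EOF'
    [global]
    ioengine=gfapi
    volume=vmstore
    brick=gluster1.example.com
    rw=randwrite
    bs=4k
    size=256m
    [job1]
    EOF
    fio gfapi-test.fio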
17:59 misc wonder if that could be used in CI
17:59 misc like, to track potential regressions in perf
18:00 JoeJulian I was wondering that too. I've got an intern working on a project to use ELK to log performance tests and then pull against them, checking for deviation from the normal standard deviation to look for anomalies. When he completes that, the work he's done might translate over to what you were just thinking.
18:02 nishanth joined #gluster-dev
18:09 pkalever joined #gluster-dev
18:12 hagarth joined #gluster-dev
18:27 gem joined #gluster-dev
18:32 jiffin joined #gluster-dev
18:33 Manikandan joined #gluster-dev
18:49 pkalever left #gluster-dev
20:00 pkalever joined #gluster-dev
20:01 pkalever left #gluster-dev
20:03 hagarth joined #gluster-dev
20:27 dblack joined #gluster-dev
20:28 _iwc joined #gluster-dev
20:38 esmiurium joined #gluster-dev
20:40 PotatoGim joined #gluster-dev
22:41 overclk joined #gluster-dev
23:51 hagarth joined #gluster-dev
