
IRC log for #gluster-dev, 2016-08-23


All times are shown in UTC.

Time Nick Message
01:16 jlrgraham joined #gluster-dev
01:45 spalai joined #gluster-dev
02:00 hagarth joined #gluster-dev
02:48 jlrgraham joined #gluster-dev
03:12 magrawal joined #gluster-dev
03:36 spalai joined #gluster-dev
03:54 kshlm joined #gluster-dev
04:01 atinm joined #gluster-dev
04:06 nigelb what the hell
04:09 nigelb kkeithley: oh man. I was thinking gerrit or jenkins was acting really strange.
04:10 mchangir joined #gluster-dev
04:17 itisravi joined #gluster-dev
04:19 shubhendu joined #gluster-dev
04:25 ramky joined #gluster-dev
04:26 jobewan joined #gluster-dev
04:26 jiffin joined #gluster-dev
04:31 nbalacha joined #gluster-dev
04:55 rafi joined #gluster-dev
05:02 Bhaskarakiran joined #gluster-dev
05:04 sanoj joined #gluster-dev
05:09 ndarshan joined #gluster-dev
05:11 spalai joined #gluster-dev
05:13 skoduri joined #gluster-dev
05:18 aspandey joined #gluster-dev
05:22 aravindavk joined #gluster-dev
05:35 nbalacha joined #gluster-dev
05:38 mchangir joined #gluster-dev
05:38 spalai joined #gluster-dev
05:43 asengupt joined #gluster-dev
05:43 atalur joined #gluster-dev
05:47 aravindavk joined #gluster-dev
05:48 poornima joined #gluster-dev
05:48 poornima_ joined #gluster-dev
05:49 poornima joined #gluster-dev
05:50 Muthu_ joined #gluster-dev
05:51 kdhananjay joined #gluster-dev
05:52 rastar joined #gluster-dev
05:55 hgowtham joined #gluster-dev
06:01 aravindavk joined #gluster-dev
06:02 hchiramm joined #gluster-dev
06:04 kotreshhr joined #gluster-dev
06:12 nigelb kshlm: Can I put the machines I loaned to you back in the pool? Have the fixes landed for the ssl issue?
06:13 kshlm nigelb, Yup. I had replied in the bug the same day.
06:14 nigelb Bah, sorry. I must have missed it in the noise.
06:14 nigelb Thanks
06:14 kshlm The fix is ready and should be merged soon, if not already merged.
06:19 anoopcs aravindavk, ndevos : It seems that systemd explicitly checks for unit configuration files under /usr/local/lib/systemd/system (See DIRECTORIES section from http://man7.org/linux/man-pages/man1/systemd.1.html).
06:19 post-factum nigelb: http://review.gluster.org/#/c/15233/ could you please take a look at netbsd here?
06:19 atinm joined #gluster-dev
06:19 anoopcs This means that even with the recently merged change we don't need to create any links to make systemd aware of glusterd service file installed under /usr/local/lib/systemd/system.
06:20 anoopcs https://review.gluster.org/#/c/14892/
06:20 nigelb post-factum: good chance it's the one I just aborted.
06:20 post-factum nigelb: it was aborted 8 hours ago
06:20 nigelb post-factum: Ah, no. atinm pointed an older aborted job out to me. Turns out my fix yesterday wasn't sufficient. I've fixed it again.
06:21 nigelb So we shouldn't have that sort of failures anymore.
06:21 post-factum nigelb: should i do another recheck?
06:21 anoopcs aravindavk, ndevos: We got lucky to have it placed under the right directory :-)
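For reference, systemd(1) does list /usr/local/lib/systemd/system among its unit search paths, so a quick sanity check after a source install could look like the sketch below; the glusterd.service name is assumed from the discussion, and the commands need to run as root:

    # Verify the unit file landed in a directory systemd actually searches,
    # then ask systemd to re-scan and report what it loaded.
    ls -l /usr/local/lib/systemd/system/glusterd.service
    systemctl daemon-reload
    systemctl status glusterd.service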
06:21 nigelb post-factum: yes (but be prepared to wait for a while)
06:22 post-factum nigelb: okay, triggered
06:22 Manikandan joined #gluster-dev
06:22 prasanth joined #gluster-dev
06:22 devyani7 joined #gluster-dev
06:23 nigelb Heads up everyone: I'm going to be policing netbsd jobs today for ones that are stuck.
06:23 nigelb If your job is manually aborted by me, please trigger a recheck.
06:23 Saravanakmr joined #gluster-dev
06:24 devyani7 joined #gluster-dev
06:24 post-factum nigelb: manual intervention sounds so unenterprise-ish
06:25 itisravi joined #gluster-dev
06:25 nigelb post-factum: temporary manual intervention :)
06:25 nigelb the fix is in (for reals this time)
06:25 nigelb I just need to make sure the existing jobs aren't stuck.
06:25 nigelb quite a few were stuck
06:26 nigelb Most times a pkill gluster gets them unstuck.
06:26 nigelb some machines hang and need a hard reboot.
06:30 rastar joined #gluster-dev
06:33 kdhananjay joined #gluster-dev
06:33 nigelb kshlm: thanks! queue down to 32 jobs :)
06:34 kotreshhr joined #gluster-dev
06:34 nigelb This is when I wish I had the ability to burst the number of nodes available.
06:36 skoduri joined #gluster-dev
06:38 spalai joined #gluster-dev
06:39 msvbhat joined #gluster-dev
06:41 pur joined #gluster-dev
06:41 ankitraj joined #gluster-dev
06:45 mchangir nigelb, somehow, the "some machines need reboot" thing converges around weekends and then nothing can be achieved over the weekend
06:46 ashiq joined #gluster-dev
06:46 mchangir nigelb, just an observation
06:46 nigelb mchangir: Mostly because I'm not around to do the manual intervention.
06:47 nigelb mchangir: Anyway, I may have actually fixed it up so I don't need to do manual intervention anymore.
06:48 mchangir nigelb, if it comes to it ... then a reboot per node every night could be a reasonable thing to do to reduce the plague that brings the whole farm down to its knees
06:49 nigelb that won't help.
06:49 nigelb This is what happens
06:49 nigelb 1. A job hangs because of a test. We abort the job, but the gluster processes remain.
06:50 nigelb 2. the next job cannot umount because of the existing gluster processes.
06:50 nigelb 3. Gets aborted.
06:50 nigelb 4. Goto 2
06:50 nigelb what I've done is add a `pkill gluster` just before umount, which fixes this in the majority of cases. I've only seen one case where it didn't work.
06:51 nigelb And that's in about 20 to 30 times I've had to do this.
06:52 atalur joined #gluster-dev
06:52 nigelb I added the pkill yesterday, but forgot to run it as root.
06:52 nigelb I fixed that up today.
06:53 mchangir "pkill gluster" is still a nice way to clean up things ... but a "kill -KILL $(pgrep gluster)" would be best
06:53 nigelb We already do that in cleanup.sh
06:53 nigelb that doesn't seem to be working.
06:53 mchangir ok
06:54 nigelb I want to figure out what's going on there, but I'm also blocked by the fact that the code in /opt/qa on netbsd machines is different from master.
06:54 nigelb It diverged in 2012, so getting any change pushed is currently a pain.
06:55 mchangir hmm
06:55 nigelb https://github.com/gluster/glusterfs-patch-acceptance-tests/pull/47/files <-- if you're curious
06:55 glusterbot nigelb: <'s karma is now -14
06:55 nigelb heh, I guess I do that pretty often :)
06:55 atinm joined #gluster-dev
07:01 kdhananjay joined #gluster-dev
07:10 nigelb mchangir: weird. We do a great kill -15 and then kill -9
07:10 nigelb I don't know why that doesn't work.
07:10 nigelb We even have code to abort the job when there are stuck processes.
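For context, the kind of pre-umount cleanup being described might look roughly like the following; this is a sketch only, not the actual /opt/qa scripts, and the mount point is illustrative:

    # Kill any leftover gluster processes before unmounting, since a busy
    # mount is what makes the umount hang. Must run as root.
    pkill gluster || true        # polite SIGTERM to anything matching "gluster"
    sleep 5
    pkill -9 gluster || true     # force-kill whatever is still around
    umount -f /mnt/glusterfs/0 || echo "still stuck; the node likely needs a hard reboot"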
07:14 hchiramm joined #gluster-dev
07:15 shubhendu joined #gluster-dev
07:20 aravindavk joined #gluster-dev
07:35 atalur joined #gluster-dev
07:37 kotreshhr joined #gluster-dev
07:47 post-factum could we require committers to put an additional tag like "Requires-backport-to: " into the commit message?
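No such check exists as far as this discussion goes; purely as an illustration, a local commit-msg hook for the proposed tag could be sketched like this:

    #!/bin/sh
    # Hypothetical .git/hooks/commit-msg sketch; not part of any gluster tooling.
    msg_file="$1"                 # git passes the path to the commit message file
    if ! grep -qi '^Requires-backport-to:' "$msg_file"; then
        echo "warning: commit message has no 'Requires-backport-to:' tag" >&2
        # exit 1                  # uncomment to make the tag mandatory
    fi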
07:49 msvbhat joined #gluster-dev
08:03 rastar joined #gluster-dev
08:25 spalai joined #gluster-dev
08:32 kdhananjay joined #gluster-dev
08:34 post-factum also, here is a bug report for the fuse memory leak: https://bugzilla.redhat.com/show_bug.cgi?id=1369364 - whom should i ping/add to it?
08:34 glusterbot Bug 1369364: medium, unspecified, ---, bugs, NEW , Huge memory usage of FUSE client
08:34 post-factum kkeithley: you might be interested in it too ^^
08:50 ndevos nigelb: btw, why not have a netbsd branch in the patch-acceptance-tests repository? and merge that branch step by step?
08:51 nigelb ndevos: That's what I've been trying to do without much success in terms of reviews.
08:51 ndevos nigelb: yeah, reviewing on github is a pita
08:54 ankitraj joined #gluster-dev
08:56 kdhananjay joined #gluster-dev
09:08 atalur_ joined #gluster-dev
09:25 EinstCra_ joined #gluster-dev
09:26 mchangir ndevos, is v3.8.3 stable and ready for use ... I'm presuming you would be able to answer that
09:26 EinstCra_ Hello, I found a bug (https://bugzilla.redhat.com/show_bug.cgi?id=1369382). Does anyone have a suggestion how to fix this?
09:26 glusterbot Bug 1369382: unspecified, unspecified, ---, amukherj, NEW , Glusterd mem leak when run command gluster volume status
09:27 ndevos mchangir: yes, it is
09:27 ndevos EinstCra_: a fix would be in the form of a patch, a workaround would be to restart glusterd
09:30 EinstCra_ Yes, I know how to work around it, but I want to fix it in a patch. I'm having some trouble locating where to fix it.
09:33 rafi1 joined #gluster-dev
09:43 ashiq joined #gluster-dev
09:47 itisravi_ joined #gluster-dev
09:48 nbalacha joined #gluster-dev
09:58 EinstCra_ joined #gluster-dev
10:07 kotreshhr joined #gluster-dev
10:09 Byreddy joined #gluster-dev
10:20 Muthu_ joined #gluster-dev
10:29 ankitraj joined #gluster-dev
10:30 nbalacha joined #gluster-dev
10:36 msvbhat joined #gluster-dev
10:47 msvbhat joined #gluster-dev
10:52 rafi1 joined #gluster-dev
10:56 nbalacha joined #gluster-dev
11:00 nigelb rastar: I need help. Can you look at bug 1369401? I'm trying to figure out why the umount fails after lock-revocation.t
11:00 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1369401 high, high, ---, bugs, NEW , NetBSD hangs at /tests/features/lock_revocation.t
11:00 nigelb I've added as much info as I have into the bug.
11:01 nigelb Let me know if you want me to gather more info.
11:01 nigelb Somehow, during cleanup, the umount process hangs.
11:01 nigelb this is the cause of all my netbsd worries right now.
11:04 misc yeah, live migration is not supported on centos 7
11:04 misc so if we want to migrate VMs, we have to ... shut them down
11:05 rastar nigelb: checking
11:07 Muthu_ joined #gluster-dev
11:07 ashiq joined #gluster-dev
11:10 atalur joined #gluster-dev
11:12 kotreshhr joined #gluster-dev
11:15 spalai joined #gluster-dev
11:30 post-factum misc: what do you mean by "live migration is not supported on centos 7"?
11:33 misc post-factum: for qemu
11:34 misc or just snapshotting
11:34 post-factum misc: i really doubt about that or i do not understand your idea
11:34 misc error: Operation not supported: live disk snapshot not supported with this QEMU binary
11:34 post-factum misc: ah, that is different
11:35 post-factum misc: live migration works well on centos 7
11:35 misc post-factum: using provided qemu, or taking another one ?
11:35 post-factum misc: provided qemu + shared storage (gluster or ceph, does not matter)
11:36 misc post-factum: yeah, in this case, I just want a snapshot of the disk to copy without having downtime
11:36 post-factum misc: live disk snapshotting is really unavailable
11:36 misc and even with pause, i can't do it :/
11:37 post-factum misc: i believe one can achieve live snapshots with the qemu guest agent calling fsfreeze and a patched qemu package, but not sure
11:37 post-factum misc: never tried that
11:37 misc post-factum: well, I translate that "do not trust RHEL and use Fedora or Ubuntu next time"
11:37 misc which is what I am gonna do
11:37 post-factum misc: tell that to Arch user (/me) again :D
11:38 post-factum misc: i believe RH had some reason for disabling that
11:38 misc post-factum: yeah, that's called "pushing people to use another more expensive product"
11:39 misc so for RHEL, they want people to use RHEV, which is fine
11:39 misc for centos however I am screwed
11:40 misc and since getting proper rhel subscription for community servers is such a pain, we use centos
11:41 post-factum misc: you can rebuild rpms ;)
11:45 misc post-factum: I am not really motivated to make my own distro again
11:45 misc forking 1 was already enough work
11:45 post-factum misc: which one?
11:45 misc post-factum: mandriva
11:45 post-factum misc: fork name?
11:46 misc I was part of the group who forked it to mageia
11:46 post-factum misc: oh, you are one of those men
11:46 misc post-factum: yeah
11:46 misc I was part of the packaging team and sysadmin too
11:47 post-factum misc: now i do not really know which mandrake fork survived if there is at least one
11:47 misc post-factum: both did survive
11:47 misc even if mageia struggles a bit to get enough volunteers in some parts of the project, like most projects
11:48 misc sysadmins have been crushed one by one by the sheer amount of work needed ::
11:48 post-factum misc: Mandriva Linux / Mageia / OpenMandriva Lx / ROSA Linux
11:48 post-factum misc: i thought mandriva is obsoleted by openmandriva
11:49 misc post-factum: mandriva as a company crashed
11:49 misc I think rosa linux is still financed by russians, but i stopped looking at that long ago :)
11:50 ndevos misc: for CentOS, you can probably use the updated qemu from the Virt SIG?
11:50 post-factum misc: yup, rosa in russian one, and i know senior dev personally ;)
11:50 post-factum *is
11:50 misc ndevos: but that likely requires a reboot and downtime
11:50 post-factum ndevos: could that qemu do blockcopy?
11:51 ndevos misc: updating the qemu package(s) will probably require a VM restart, yes
11:51 misc and the virt-sig does not publish any timeline of support or anything
11:51 misc so I have no idea of the support policy there
11:51 misc which is usually a red flag for me
11:52 post-factum misc: you need rh subscription then :)
11:52 ndevos post-factum: maybe, those are newer packages that oVirt users need
11:53 ndevos and I'm sure oVirt can do live-migrations, but not sure if they do live-disk-migrations too
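For anyone curious, the Virt SIG route on CentOS 7 would look roughly like the sketch below; the repo and package names (centos-release-qemu-ev, qemu-kvm-ev) are assumed here rather than confirmed in the discussion, and running VMs still need the restart mentioned above:

    # Enable the CentOS Virt SIG repository, then pull in its enhanced qemu build.
    yum install -y centos-release-qemu-ev   # assumed repo-release package name
    yum install -y qemu-kvm-ev              # assumed replacement for stock qemu-kvm
    # Running guests keep using the old binary until they are shut down and
    # started again, hence the downtime.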
11:53 misc the problem is that I just want to make a snapshot :(
11:53 Muthu_ joined #gluster-dev
11:53 misc something that virtualbox has done for as long as I can remember, something that the xen on our old virt box was doing perfectly
11:55 Muthu_ REMINDER: Gluster community bug triage meeting at #gluster-meeting (~ in 5 minutes)
11:57 ndevos misc: create a snapshot with qemu-img (after doing fsfreeze in the VM)?
11:57 misc ndevos: yeah, that's what I am gonna try once I finish ranting :)
11:58 misc now, of course, doing a fsfreeze on jenkins and gerrit is likely gonna cause trouble
12:00 post-factum misc: is it even possible to do fsfreeze on / without trouble?
12:00 misc post-factum: well, what could really go wrong :)
12:09 post-factum misc: i wish all linux fs were atomic
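A rough sketch of the freeze-then-snapshot approach being discussed; the domain name and image path below are invented for illustration, and it assumes qemu-guest-agent is running inside the guest:

    # Quiesce the guest filesystems via the guest agent, take an internal
    # qcow2 snapshot, then thaw.
    virsh domfsfreeze gerrit-vm
    qemu-img snapshot -c pre-migration /var/lib/libvirt/images/gerrit-vm.qcow2
    virsh domfsthaw gerrit-vm
    # Caveat: qemu-img warns against modifying an image that a running VM has
    # open, so pausing or shutting the guest down may still be the safer route.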
12:11 dlambrig joined #gluster-dev
12:17 itisravi_ amye: hello
12:19 skoduri joined #gluster-dev
12:20 itisravi amye: hagarth Did any of you guys get my CFP email for the summit? For some reason, the mail is not appearing on the ML. I sent it twice and I had cc'ed you in it.
12:23 pranithk1 joined #gluster-dev
12:29 shyam joined #gluster-dev
12:31 itisravi joined #gluster-dev
12:33 mchangir joined #gluster-dev
12:43 mchangir kkeithley, are you there?
12:43 kkeithley yup
12:44 mchangir kkeithley, please take a look at https://bugzilla.redhat.com/show_bug.cgi?id=1369391 and if possible pinpoint for me the place where the file modes change to +x
12:44 glusterbot Bug 1369391: urgent, unspecified, ---, mchangir, NEW , configuration file shouldn't be marked as executable and systemd complains for it
12:45 mchangir kkeithley, I've taken a look at glusterfs.spec.in, both upstream and downstream, and couldn't get a handle on it
12:46 mchangir kkeithley, I've taken a look at the Makefile.{am,in} under extras/systemd/ as well
12:47 kkeithley right, one sec
12:48 dlambrig joined #gluster-dev
12:49 ndevos mchangir: that was an issue in Makefile.am under extras/systemd/ - we fixed that recently
12:49 mchangir :D
12:49 kkeithley mchangir: the change is in extras/systemd/Makefile.am. It removes the install-exec-local hook, which installed the file as an executable
12:49 mchangir ah!
12:50 mchangir kkeithley, ndevos, so it's been taken care of then
12:50 kkeithley ndevos made that change, in BZ 1354489
12:50 mchangir ok
12:50 mchangir thanks a ton
12:50 kkeithley yw
12:51 * mchangir makes a note to look at git log next time
12:59 poornima joined #gluster-dev
13:04 * kkeithley wonders why (only) the dict_* functions use __attribute__((warn_unused_result))
13:07 kkeithley ankitraj++ Muthu_++
13:07 glusterbot kkeithley: ankitraj's karma is now 7
13:07 glusterbot kkeithley: Muthu_'s karma is now 9
13:08 Muthu_ kkeithley, ;)
13:13 nigelb misc: so we'll have a downtime for moving gerrit and jenkins onto new hosts?
13:14 misc nigelb: we would anyway, due to different ip address
13:14 dlambrig joined #gluster-dev
13:14 misc I was hoping to avoid that for the test migration
13:14 misc but I am still trying to do unclean hack
13:14 misc (I just got distracted by my own scream watching ovirt.org)
13:16 kkeithley oh, what happened on ovirt.org?
13:18 atinm joined #gluster-dev
13:19 kkeithley joined #gluster-dev
13:21 julim joined #gluster-dev
13:22 misc kkeithley: well, the download instructions recommend downloading an rpm over http, and the signature verification is buried at the end of the page
13:22 misc and the package installs repos that are over http and without gpg for deps
13:23 kkeithley oh, is that all. ;-)
13:24 misc then there is also broken link due to the wiki conversion
13:24 misc and the rpm also installs repos but never removes them
13:25 misc and the instructions to install are "run that script, answer questions, done"
13:25 nigelb hey, be glad it's not curl http:/something.com/install.sh | bash -
13:26 misc that's kinda equivalent :)
13:26 poornima joined #gluster-dev
13:26 rraja joined #gluster-dev
13:28 kshlm joined #gluster-dev
13:31 nigelb ouch
13:32 misc since someone could MITM the rpm over http, and the rpm is then run as root
13:33 jiffin1 joined #gluster-dev
13:40 shubhendu joined #gluster-dev
13:46 aravindavk joined #gluster-dev
13:46 raghu joined #gluster-dev
13:49 pranithk1 joined #gluster-dev
13:50 jiffin1 joined #gluster-dev
13:53 kotreshhr left #gluster-dev
14:13 pur joined #gluster-dev
14:13 pur https://www.quora.com/What-are-the-most-incorrect-things-taught-by-Indian-parents-to-their-children
14:16 shyam joined #gluster-dev
14:24 rafi ping pranithk1
14:31 rastar nigelb: update for the hang issue on NetBSD is that it looks like a gluster code bug
14:31 Bhaskarakiran joined #gluster-dev
14:31 rastar nigelb: I did not find anything wrong in the setup or the test file
14:31 nigelb rastar: gluster code rather than our test code?
14:31 rastar nigelb: yes,
14:31 nigelb rastar: Remember regression.sh and smoke.sh are different for netbsd.
14:31 rastar nigelb: not 100% sure, but we have seen such bugs before
14:32 nigelb and build.sh
14:32 rastar nigelb: you are right that umount hangs
14:32 rastar nigelb: umount hangs only if the corresponding client process or the server process is in a bad state
14:32 nigelb so we're doing something evil on ufs.
14:32 nigelb wow, I didn't think this would lead there.
14:32 rastar nigelb: which one? I would not be able to tell unless I gdb into both the processes
14:33 rastar nigelb: I wasn't able to do that today.
14:33 nigelb no worries.
14:33 rastar nigelb: one question though: are we now not using the rackspace-* dir and using the netbsd* dir instead?
14:33 nigelb Do you want me to ping you when I notice a hung job tomorrow?
14:34 nigelb rastar: yeah, I made the naming more logical. I brought it up on gluster-infra@
14:34 rastar nigelb: I would probably take a NetBSD system and run the test instead of waiting for it
14:34 rastar nigelb: I will ping you tomorrow morning
14:35 rastar nigelb: ok, thanks I had missed that..
14:35 nigelb sure
14:35 nigelb rastar: I'd be grateful if you can add a summary to the bug.
14:35 rastar Yes, makes sense
14:36 rastar I will do it now
14:38 prasanth joined #gluster-dev
14:39 anthony joined #gluster-dev
14:40 anthony joined #gluster-dev
14:41 anthony hola hola:-*
14:48 rafi shyam: ping
14:48 glusterbot rafi: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
14:48 rafi glusterbot: really sorry
14:48 shyam rafi: here...
14:48 rafi shyam: regarding your comments for http://www.gluster.org/pipermail/gluster-devel/2015-August/046448.html
14:49 rafi shyam: I guess your concerns are related to compound fops
14:49 rafi shyam: right ?
14:49 shyam Let me read that lengthy mail ;)
14:49 rafi shyam: sure :)
14:51 shyam rafi: I need some time, getting into a call, will ping you back. What is your question/concern/resolution exactly?
14:51 atalur joined #gluster-dev
14:51 rafi shyam: we have sent a patch for fop serialization
14:51 rafi shyam: I was wondering if you have any further comments or design issues that are not addressed so far
14:52 shyam Brick side FOP serialization, right. So how is order across 2 bricks ensured in a replica scenario?
14:52 rafi shyam: currently that is a responsibility of client
14:52 shyam What prevents FOPa from being executed on brick1 and FOPb on brick2, where brick1 and 2 are part of the same replica?
14:52 shyam How does the client ensure that?
14:52 atalur left #gluster-dev
14:53 shyam (or what am I missing)
14:53 rafi shyam: ohh got it , serialization of replica set, right
14:53 hagarth shyam, rafi: somehow I don't have this email thread in my devel folder of my mailbox :-/
14:53 hagarth ah this is from 2015
14:53 rafi hagarth: sometimes gluster-devel behaves strangely
14:54 hagarth never mind I was looking at 2016 ;)
14:54 rafi hagarth: :)
14:55 shyam rafi: Thanks for digging that out though...
14:56 shyam I forgot I had sent it O:-)
14:56 rafi shyam: basically it tries to solve issues like races or inconsistencies when dentry ops are executing in parallel
14:56 rafi shyam: I just started working on it, so was looking for devel discussions ;)
14:57 shyam rafi: There was a discussion led by rjoseph that talked about replica set coordination for leases or such, that may have some potential solutions or already-solved problems for this purpose (just maybe)
14:58 rafi shyam: great, I will check with him
14:58 rafi shyam: basically we had issues when mkdir and lookup are racing
14:58 shyam rafi: Let me see if I can find something in my history...
14:58 rafi shyam: races with gfid handles
14:58 rafi shyam: cool
14:59 shyam rafi: ok meeting starts, later... (if you are still around)
14:59 rafi shyam: so the concerns you raised in that threads are for compound fops , right
14:59 rafi shyam: cool
14:59 shyam BTW lookup and mkdir are possibly safe in a race; 2 change ops are not
14:59 rafi shyam: catch you later
14:59 shyam change = ops that change on-disk data
14:59 shyam rafi: yes
15:00 shyam ok later ;)
15:00 rafi hagarth: http://www.gluster.org/pipermail/gluster-devel/2016-August/050573.html
15:00 rafi hagarth: if you get some time, please take a look ;)
15:03 Manikandan joined #gluster-dev
15:04 atinm joined #gluster-dev
15:06 wushudoin joined #gluster-dev
15:08 ankitraj joined #gluster-dev
15:15 nbalacha joined #gluster-dev
15:27 hagarth joined #gluster-dev
15:29 Manikandan_ joined #gluster-dev
15:34 rafi nigelb: hi nigelb
15:34 rafi nigelb: it seems that mails to gluster-devel are not delivered on time, any idea ;) ?
15:36 hagarth rafi: misc might know better about this one
15:37 rafi hagarth, misc: okey ,
15:37 rafi hagarth, misc: I think itisravi also experienced the same problem
15:43 msvbhat joined #gluster-dev
15:52 magrawal joined #gluster-dev
15:53 magrawal ndevos: Please review this http://review.gluster.org/15086 when u have time
16:07 Javezim joined #gluster-dev
16:13 shyam joined #gluster-dev
16:16 jobewan joined #gluster-dev
16:24 atalur joined #gluster-dev
16:30 spalai joined #gluster-dev
16:41 rafi joined #gluster-dev
16:51 rafi joined #gluster-dev
16:54 mchangir \o/    I got a CodeReview+2 at this hour from ndevos!
16:56 shubhendu joined #gluster-dev
16:59 atinm joined #gluster-dev
17:02 Manikandan_ joined #gluster-dev
17:07 jiffin joined #gluster-dev
17:11 msvbhat joined #gluster-dev
17:19 dlambrig joined #gluster-dev
17:20 post-factum mchangir: lucky you, i want that too
17:21 justinclift joined #gluster-dev
17:22 justinclift misc: Mailing list server having issues?
17:22 misc justinclift: not that I am aware of, but I am in the train right now
17:22 misc so can you give more details ?
17:22 * justinclift tried to subscribe with various email addresses... nothing coming through.
17:23 justinclift Non-urgent, but figured worth asking about ;)
17:23 misc yeah, I can take a look
17:24 misc ping me again in 1h
17:24 justinclift misc: Will do :)
17:28 hchiramm joined #gluster-dev
17:32 rraja joined #gluster-dev
17:55 misc justinclift: so I see nothing suspicious
17:55 anthony joined #gluster-dev
17:55 misc I see you tried to subscribe with your postgres email
17:55 misc and I see that mail was sent to the postgres alias
17:56 justinclift misc: Yeah.  Also tried the justin@gluster.org one before that too.
17:57 justinclift Er... which postgres alias?
17:58 justinclift misc: Anyway, so it seems like a problem on the postgresql.org MX side then?
17:59 misc justinclift: justin@postgresql
18:00 justinclift Ahhh.  Yeah, that's a real email address.  But, no biggie. :)
18:00 misc so you got nothing ?
18:01 justinclift Not a thing.  I'll ping the PG sysadmins, and see if they're aware of anything.
18:01 justinclift misc: Thanks for checking. :)
18:02 misc justinclift: I do not discount that there is an issue on mailman :/
18:04 anthony joined #gluster-dev
18:06 justinclift Yeah, me neither.  I'll ping the PG sysadmin team and see if they know of anything weird happening.
18:15 anthony joined #gluster-dev
18:16 rastar joined #gluster-dev
18:17 post-factum yup, ML looks like slowpoke today
18:17 post-factum 3.8.3 announce hit my mailbox just now
18:18 post-factum sent 10h ago ;)
18:29 anthony joined #gluster-dev
18:30 jiffin joined #gluster-dev
18:32 justinclift Ahhh, so not PG infra then.  Thanks. :)
19:03 dlambrig joined #gluster-dev
19:10 anthony joined #gluster-dev
19:10 anthony holA
20:37 anthony joined #gluster-dev
20:42 anthony1 joined #gluster-dev
21:45 anthony joined #gluster-dev
21:48 dlambrig joined #gluster-dev
21:51 anthony joined #gluster-dev
22:02 anthony joined #gluster-dev
22:20 anthony joined #gluster-dev
22:23 anthony1 joined #gluster-dev
22:41 hagarth joined #gluster-dev
23:05 dlambrig joined #gluster-dev
