IRC log for #gluster, 2017-03-22


All times shown according to UTC.

Time Nick Message
00:02 Guest2812 joined #gluster
00:08 juhaj joined #gluster
00:10 JoeJulian s/better/different/
00:10 glusterbot What JoeJulian meant to say was: That's different.
00:10 * JoeJulian shrugs
00:31 shyam joined #gluster
00:38 ghenry joined #gluster
00:52 kramdoss_ joined #gluster
00:58 ankitr joined #gluster
01:09 shyam joined #gluster
01:11 vbellur joined #gluster
01:14 shdeng joined #gluster
01:14 pioto joined #gluster
01:14 arpu joined #gluster
01:16 gyadav joined #gluster
01:18 niknakpa1dywak joined #gluster
01:19 niknakpa1dywak left #gluster
01:20 gem joined #gluster
01:53 kramdoss_ joined #gluster
02:02 pioto_ joined #gluster
02:06 riyas joined #gluster
02:06 glisignoli joined #gluster
02:07 glisignoli anyone using the voxpupupli puppet gluster module? I'm having trouble where it won't create any volumes
02:17 prasanth joined #gluster
02:23 arpu joined #gluster
02:36 kramdoss_ joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:03 Gambit15 joined #gluster
03:19 riyas joined #gluster
03:23 baber joined #gluster
03:53 magrawal joined #gluster
03:58 gyadav joined #gluster
04:02 dominicpg joined #gluster
04:05 itisravi joined #gluster
04:12 msvbhat joined #gluster
04:16 sanoj joined #gluster
04:21 ashiq joined #gluster
04:23 RameshN joined #gluster
04:32 sbulage joined #gluster
04:32 sanoj joined #gluster
04:39 skumar joined #gluster
04:47 karthik_us joined #gluster
04:49 ppai joined #gluster
05:01 ankitr joined #gluster
05:03 kramdoss_ joined #gluster
05:04 susant joined #gluster
05:06 ankitr joined #gluster
05:06 Saravanakmr joined #gluster
05:08 Shu6h3ndu joined #gluster
05:08 skoduri joined #gluster
05:08 Philambdo joined #gluster
05:10 Prasad joined #gluster
05:14 ndarshan joined #gluster
05:19 Philambdo joined #gluster
05:24 susant joined #gluster
05:24 ashiq joined #gluster
05:31 gyadav joined #gluster
05:42 buvanesh_kumar joined #gluster
05:45 riyas joined #gluster
05:48 ndarshan joined #gluster
06:00 hgowtham joined #gluster
06:05 ndarshan joined #gluster
06:10 ahino joined #gluster
06:13 kdhananjay joined #gluster
06:15 Karan joined #gluster
06:24 prasanth joined #gluster
06:24 masber joined #gluster
06:28 tdasilva joined #gluster
06:34 nthomas joined #gluster
06:35 Humble joined #gluster
06:40 ashiq joined #gluster
06:40 sbulage joined #gluster
06:41 sona joined #gluster
06:48 ankush joined #gluster
06:56 apandey joined #gluster
06:56 msvbhat joined #gluster
06:57 foster joined #gluster
06:59 sona joined #gluster
07:04 anbehl joined #gluster
07:10 Wizek joined #gluster
07:11 mhulsman joined #gluster
07:13 sbulage joined #gluster
07:27 jtux joined #gluster
07:27 jtux left #gluster
07:36 jiffin joined #gluster
08:00 jwd joined #gluster
08:12 mbukatov joined #gluster
08:12 ashiq joined #gluster
08:28 [diablo] joined #gluster
08:31 fsimonce joined #gluster
08:38 msvbhat joined #gluster
08:39 sanoj joined #gluster
08:49 sona joined #gluster
08:57 sbulage joined #gluster
08:58 msvbhat joined #gluster
09:00 karthik_us joined #gluster
09:03 jwaibel joined #gluster
09:07 rastar joined #gluster
09:09 mk-fg Can't seem to "mount -t glusterfs" from a non-glusterd host - "username" and "password" parameters don't get sent in volfile (as seen on client strace), and one of the bricks says "no authentication module is interested in accepting remote-client (null)"
09:10 mk-fg Tried with auth.allow being '*' and host ip, doesn't seem to help
09:10 poornima_ joined #gluster
09:10 mk-fg Does it look like a client, glusterd or brick misconfiguration?
09:18 jiffin mk-fg: have u installed the required client packages?
09:18 mk-fg Yes, built and installed glusterfs if that's what you mean
09:19 mk-fg (3.10 btw)
09:20 mk-fg To be clear "mount -t glusterfs" gives "Mount failed. Please check the log file for more details.", and this log - https://gist.github.com/mk-fg/6141c56c64d3a9dff93cea278565d778
09:20 glusterbot Title: gist:6141c56c64d3a9dff93cea278565d778 · GitHub (at gist.github.com)
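
A minimal sketch of the failing mount and where its client log lands, assuming a hypothetical server "server1" exporting the volume "core" (the volume name is taken from the client log referenced above):

    # FUSE mount from a client that does not run glusterd
    mount -t glusterfs server1:/core /mnt/core
    # on failure, mount.glusterfs logs to a file named after the mountpoint,
    # typically under /var/log/glusterfs/
    less /var/log/glusterfs/mnt-core.log
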
09:21 jiffin mk-fg: have u used the auth.allow/auth.reject feature for the volume?
09:23 mk-fg Default auth.allow was set to '*', I've tried setting it to a comma-separated list of IPs and that didn't work either
09:24 mk-fg (what I meant above by "Tried with auth.allow being '*' and host ip, doesn't seem to help")
09:25 jiffin mk-fg: can try mounting without password and username with some other test on same machine
09:25 mk-fg Yeah, it totally works on another machine, which runs glusterd
09:26 mk-fg And the difference I see between the two clients is that "username" and "password" get sent in the volfile in the successful case, and don't get sent in the failed case
09:27 mk-fg Guess auth.allow should control that thing, right?
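
The auth.allow settings being discussed would look roughly like this (volume name "core" as above; the IPs are placeholders):

    # default: any address may connect
    gluster volume set core auth.allow '*'
    # or restrict to a comma-separated list of client IPs
    gluster volume set core auth.allow '192.168.0.10,192.168.0.11'
    # confirm the value currently in effect (volume get exists on recent releases)
    gluster volume get core auth.allow
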
09:27 sona joined #gluster
09:27 jiffin mk-fg: i meant can u try mounting another (test) volume without username/password on the same client machine
09:28 mk-fg Ah, yeah, would need to create another volume for that...
09:28 jiffin mk-fg: if possible
09:28 mk-fg It's probably something weird with this one, as it's been around from 3.3
09:29 mk-fg Possible, just not that easy ;)
09:29 jiffin mk-fg: okay np
09:29 mk-fg Thanks for the idea
09:29 mk-fg I'll try to find where in code these params get scrubbed first, I guess
09:30 jiffin mk-fg: ur clients and servers have the same version of glusterfs?
09:30 mk-fg Might be easier to just check what affects that and tweak it
09:30 mk-fg Same exact package and version, yeah
09:30 mk-fg 3.10
09:30 mk-fg *3.10.0
09:30 jiffin mk-fg: i saw the following log: [MSGID: 114057] [client-handshake.c:1451:select_server_supported_programs] 0-core-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
09:30 jiffin [2017-03-22 09:15:13.291117] I [rpc-clnt.c:1964:rpc_clnt_reconfig] 0-core-client-1: changing port to 49159 (from 0)
09:30 mk-fg Yeah, that actually didn't change from 3.3 and was puzzling to me as well
09:31 mk-fg But googling around, found that apparently it's a known thing, and is ok
09:31 mk-fg Not sure if reliable info, ofc
09:31 jiffin mk-fg: Yeah correct
09:32 jiffin mk-fg: any clues in the glusterd logs on the server from when the client tried to mount?
09:33 mk-fg Surprisingly little - https://gist.github.com/mk-fg/ce92ce0dd4e56c42803b2ee15d120c85
09:33 glusterbot Title: gist:ce92ce0dd4e56c42803b2ee15d120c85 · GitHub (at gist.github.com)
09:33 mk-fg That's the whole thing from the brick log for a mount attempt
09:34 mk-fg For successful attempt, it says "[2017-03-22 09:25:05.172066] I [login.c:76:gf_auth] 0-auth/login: allowed user names: ..." instead
09:35 mk-fg (successful - from same machine, that is)
09:35 mk-fg Which I guess means that either username/password should be sent to remote client as well, or auth.allow doesn't work somehow?
09:36 mk-fg ...if auth.allow module should work instead of auth/login, that is
09:58 mk-fg Looked at addr.c and found a bunch of debug logging there, then enabled it with "diagnostics.brick-log-level DEBUG"...
09:58 mk-fg ...and it looks like my problem is: received addr = "�l�"
09:59 mk-fg Should be 192.168.0.10, but guess it gets garbled somewhere
09:59 flying joined #gluster
09:59 mk-fg Also weird, don't see any exception for '*' in addr.c, maybe just missed it...
10:01 mk-fg Oh, right, compared via fnmatch
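
The per-volume debug logging mk-fg used can be sketched as follows (volume name "core" assumed as above; brick logs live under /var/log/glusterfs/bricks/ on each server):

    gluster volume set core diagnostics.brick-log-level DEBUG
    # ...reproduce the failing mount, read the brick log...
    # then drop the option back to its default
    gluster volume reset core diagnostics.brick-log-level
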
10:03 ashiq joined #gluster
10:26 jiffin mk-fg: u can take a packet trace using wireshark and see what information is sent from the client
10:28 mk-fg Suspect that address comes from accept() or whatever the socket-peer option is; it can't really be sent by the client, which might not know it due to NAT and such
10:29 mk-fg A bit weird that fnmatch() with * wouldn't work on that garbled stuff though, but maybe it's in the man
10:29 mk-fg Currently trying to check if bypassing auth there via patch will work, which should definitely tell that it's bad address being passed
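
For jiffin's packet-trace suggestion, a capture along these lines would cover the handshake (24007 is glusterd's management port; bricks listen from 49152 upward by default, so the exact range here is an assumption about this setup):

    tcpdump -i any -s 0 -w gluster-mount.pcap 'port 24007 or portrange 49152-49251'
    # open gluster-mount.pcap in wireshark, which ships GlusterFS dissectors
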
10:31 social joined #gluster
10:35 mk-fg Yup, patching auth/addr to accept that noise-address on that one host works, meaning that auth.allow works for all other hosts
10:35 mk-fg It happens to be old host with linux-3.14 and rather old userspace, so maybe not worth figuring out where that bug comes from in its case
10:36 mk-fg (which is why I did that test instead of trying to go for the cause)
10:37 mk-fg Though the gist is that "diagnostics.brick-log-level DEBUG" showed the issue clearly; I'd just never used it before, and thought glusterd --debug enabled that as well
10:43 mk-fg A bit weird how mount.glusterfs can apparently bypass auth.allow because it gets provided username/password
10:44 mk-fg ...when used on localhost
10:49 kshlm mk-fg, that is exactly the reason why the username/password auth was introduced. The auth.* parameters apply to all glusterfs clients, not just the fuse clients. So users could inadvertently block other glusterfs services (self-heal daemon, nfs server, quotad etc) by setting auth.allow.
10:51 mk-fg Ah, makes sense now; guess mount.glusterfs uses the same protocol and hence has to get creds, same as everything else on localhost
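
The creds kshlm mentions are embedded as plain options in the trusted client volfile that glusterd hands out; one way to see them on a server (the path follows the usual /var/lib/glusterd layout and is an assumption for this setup):

    grep -B4 -A1 'option username' /var/lib/glusterd/vols/core/trusted-core.tcp-fuse.vol
    # expected shape of the match, the values being per-volume generated secrets:
    #   volume core-client-0
    #       type protocol/client
    #       ...
    #       option username <uuid>
    #       option password <uuid>
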
10:58 vbellur joined #gluster
11:02 vbellur joined #gluster
11:13 msvbhat joined #gluster
11:30 jwd joined #gluster
11:35 ashiq joined #gluster
11:38 jkroon joined #gluster
11:43 ahino joined #gluster
11:47 itisravi|android joined #gluster
11:53 fubada joined #gluster
11:56 Prasad_ joined #gluster
11:59 itisravi|android joined #gluster
12:01 sanoj joined #gluster
12:03 shyam joined #gluster
12:10 baber joined #gluster
12:11 susant left #gluster
12:13 msvbhat joined #gluster
12:22 kpease joined #gluster
12:22 Prasad__ joined #gluster
12:29 AndChat|106409 joined #gluster
12:30 bluenemo joined #gluster
12:31 ahino joined #gluster
12:45 unclemarc joined #gluster
12:49 gyadav joined #gluster
13:01 Karan joined #gluster
13:22 msvbhat joined #gluster
13:27 vbellur joined #gluster
13:31 MrAbaddon joined #gluster
13:31 jwd joined #gluster
13:36 oajs joined #gluster
13:50 skylar joined #gluster
13:51 Skinny joined #gluster
13:53 Skinny hi all, I've just set up my first "real" gluster cluster on kubernetes based on https://github.com/gluster/gluster-kubernetes. I have a 3 node setup with a volume replicated 3 times. The read/write speed from even one of the nodes is horrible, but that must be a mistake somewhere on my side
13:54 Skinny https://gist.github.com/skinny/6ae713eac56e959d0bc98d621b6663b8
13:54 glusterbot Title: gist:6ae713eac56e959d0bc98d621b6663b8 · GitHub (at gist.github.com)
13:54 Skinny what's the easiest way to troubleshoot what the root cause of this is ?
13:56 mhulsman joined #gluster
14:00 ashiq joined #gluster
14:01 mhulsman joined #gluster
14:03 vide_ joined #gluster
14:05 skumar joined #gluster
14:14 ahino joined #gluster
14:24 plarsen joined #gluster
14:31 nirokato joined #gluster
14:35 gem joined #gluster
14:37 mhulsman joined #gluster
14:42 vbellur joined #gluster
14:47 XpineX joined #gluster
14:47 farhorizon joined #gluster
14:51 gyadav joined #gluster
15:04 vbellur joined #gluster
15:04 mb_ joined #gluster
15:05 jbrooks joined #gluster
15:05 squizzi joined #gluster
15:06 gyadav_ joined #gluster
15:13 baber joined #gluster
15:14 wushudoin joined #gluster
15:15 prasanth joined #gluster
15:23 mhulsman joined #gluster
15:32 jwd joined #gluster
15:34 jiffin joined #gluster
15:43 mk-fg joined #gluster
15:47 vbellur joined #gluster
16:02 baber joined #gluster
16:03 vbellur joined #gluster
16:07 jkroon joined #gluster
16:07 vbellur joined #gluster
16:08 farhorizon joined #gluster
16:09 vbellur joined #gluster
16:10 vbellur joined #gluster
16:11 oajs joined #gluster
16:16 farhorizon joined #gluster
16:17 mk-fg joined #gluster
16:18 mk-fg joined #gluster
16:27 vbellur joined #gluster
16:28 raghu joined #gluster
16:28 Gambit15 joined #gluster
16:38 JoeJulian major: You probably should weigh in on this: http://lists.gluster.org/pipermail/gluster-devel/2017-March/052286.html
16:38 glusterbot Title: [Gluster-devel] Gluster volume snapshot - Invitation to edit (at lists.gluster.org)
16:38 unclemarc joined #gluster
16:39 JoeJulian Skinny: If I were to guess, I'd say network latency.
16:44 vbellur joined #gluster
16:47 jwd joined #gluster
16:49 vbellur joined #gluster
16:49 jiffin joined #gluster
16:51 major sigh
16:51 major I still haven't subscribed
16:51 major does Sriram know that I rewrote all of his patches?
16:52 jiffin major: i don't think so
16:52 major well .. I didn't rewrite them .. I split them out between LVM and ZFS
16:52 major and left the ZFS portion on its own branch
16:52 major because it was pretty much incomplete
16:53 vbellur1 joined #gluster
16:53 major ...
16:53 major and .. this document looks disturbingly like the interfaces I already define :)
16:53 vbellur joined #gluster
16:54 major bleh .. I really should subscribe and chime in ...
16:54 jiffin major: wow
16:54 vbellur joined #gluster
16:54 major https://github.com/major0/glusterfs/blob/lvm-snapshot-cleanup/xlators/mgmt/glusterd/src/snapshot/glusterd-lvm-snapshot.h
16:54 glusterbot Title: glusterfs/glusterd-lvm-snapshot.h at lvm-snapshot-cleanup · major0/glusterfs · GitHub (at github.com)
16:54 JoeJulian major: I wish you would.
16:56 vbellur joined #gluster
16:57 Shu6h3ndu joined #gluster
16:57 vbellur joined #gluster
16:58 vbellur joined #gluster
17:00 vbellur joined #gluster
17:00 Skinny @JoeJulian thanks, my first guess as well. The three nodes can ping each other in <1 ms (max 1.2ms). I did read something about the 'Replicate' mode of gluster that requires all nodes to confirm the version of the file requested
17:00 major JoeJulian, subscribed ..
17:00 Skinny will another mode be better ?
17:00 vbellur joined #gluster
17:01 vbellur joined #gluster
17:02 vbellur joined #gluster
17:02 JoeJulian Skinny: Not so much version as that they all have to confirm that the others are healthy. That's done once at lookup().
17:03 vbellur joined #gluster
17:05 JoeJulian Since readdirp takes 2 seconds, you must have a huge directory.
17:06 Karan joined #gluster
17:07 Skinny not really,  2.5MB in 34 files
17:07 JoeJulian Well that is odd.
17:07 Skinny that's what I thought.. I must be missing something very obvious
17:08 Skinny I've updated the gist (https://gist.github.com/skinny/6ae713eac56e959d0bc98d621b6663b8) with the volume definition
17:08 glusterbot Title: gist:6ae713eac56e959d0bc98d621b6663b8 · GitHub (at gist.github.com)
17:09 JoeJulian disk io, network (mtu inconsistency, tcp retransmissions, etc), memory usage (in swap)...
17:13 Skinny hmm.. those VMs aren't doing anything really. This is a bit of new territory for me, so still finding my way in the metrics/settings etc..
17:14 Skinny on the VM level the brick is performing well imo, but mounted as a gluster volume, it suffers
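
Quick checks for each cause JoeJulian lists above, sketched with stock tools (volume and peer names are placeholders):

    # MTU consistency: don't-fragment pings at the largest payload the MTU
    # should carry (1472 for a 1500 MTU; 8972 if jumbo frames are expected)
    ping -M do -c 3 -s 1472 node2
    # TCP retransmissions
    netstat -s | grep -i retrans
    # swap pressure
    free -m
    # per-FOP latency as seen by the bricks
    gluster volume profile myvol start
    gluster volume profile myvol info
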
17:19 major and .. email sent
17:21 major now .. back to fixing up git-track so I can stop forgetting what I was working on and what is left to be done..
17:25 jiffin1 joined #gluster
17:27 nthomas joined #gluster
17:30 jiffin joined #gluster
17:31 rastar joined #gluster
17:32 gyadav joined #gluster
17:39 Skinny by far the slowest call in an strace of the 'ls':      0.843965 fstat(3, {st_mode=S_IFDIR|0755, st_size=64, ...}) = 0
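
The timing Skinny quotes can be reproduced with strace's timestamp flags; a sketch (mountpoint is a placeholder):

    # -tt: wall-clock timestamps, -T: time spent inside each syscall
    strace -ttT -o ls.trace ls -l /mnt/myvol
    # largest <...> durations first
    sort -t'<' -k2 -rn ls.trace | head
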
17:40 Shu6h3ndu joined #gluster
17:45 Humble joined #gluster
17:47 major sort of amazed .. gluster-devel is almost as quiet as the IRC channels :)
17:48 jiffin1 joined #gluster
17:55 amye major: a lot of the gluster developer group's at a conference this week, so that might be part of it
17:55 JoeJulian Yeah, it's a problem, imho. Most of the devs are in Bangalore and are able to talk face-to-face. Outside of that, you have a couple people in the EU, Massachusetts, and California, and since the people they work with aren't on gluster-dev, they're really slow to respond if they do at all.
17:55 JoeJulian And Vault.
17:55 amye Yeah Vault
17:55 amye So a lot of the devs from Bangalore are here, and the people from EU and Massachusetts ... all in Boston.
17:57 jiffin joined #gluster
18:05 major ...
18:05 major we need more devs near seattle
18:05 major cause like .. why not?
18:06 amye Recruit! Start a meetup? :D
18:11 sona joined #gluster
18:16 mlg9000 joined #gluster
18:19 baber joined #gluster
18:24 dayne joined #gluster
18:27 major lol
18:27 major sounds like a job for JoeJulian ;)
18:28 major can we open a bug on that and assign it to him?  "Start a Gluster meetup in Seattle" ;)
18:30 rastar joined #gluster
18:31 dayne is there a way to get the gluster volume status on NFS clients?
18:32 dayne mostly done with the conversion to direct gluster mounting of gluster volumes, but still have a few systems that need to use NFS.. wondering if there is an easy way from the gluster side of things to peek in and identify current NFS clients
18:38 arpu joined #gluster
18:38 jbrooks joined #gluster
18:38 sona joined #gluster
18:41 vbellur joined #gluster
18:55 ahino joined #gluster
18:55 vbellur joined #gluster
18:57 vbellur joined #gluster
19:01 vbellur joined #gluster
19:02 farhoriz_ joined #gluster
19:05 vbellur joined #gluster
19:10 rastar joined #gluster
19:40 vbellur joined #gluster
19:45 baber joined #gluster
19:46 vbellur joined #gluster
20:28 baber joined #gluster
20:48 vbellur joined #gluster
21:06 raghu joined #gluster
21:07 oajs joined #gluster
21:09 kettlewell joined #gluster
21:12 kettlewell On gluster 3.5.3, we have an 18-node (distributed/replicated) volume that is favoring the last 2 pairs (4 nodes) for writes... logs don't tell me much, and I can't seem to find a pattern as to why those 4 are so consistent, from the 50 or so clients...
21:13 kettlewell any thoughts / ideas on troubleshooting? using gfs fuse from the clients... no NFS in use (though they all seem to be listening for NFS)
21:19 tjyoung joined #gluster
21:20 tjyoung join
21:26 JoeJulian amye: $10/month for meetup.com... Is RH reimbursing me? I'm making it Software Defined Storage in general (even though there's already a ceph meetup downstairs).
21:30 sysanthrope joined #gluster
21:33 JoeJulian dayne: I think so... Check out "gluster volume status <VOLNAME|all> clients"
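
That subcommand, plus a socket-level cross-check, sketched (volume name is a placeholder; 2049 is the port gluster's built-in NFS server listens on):

    # connected clients per brick, with their source addresses
    gluster volume status myvol clients
    # NFS clients appear as established TCP connections to the NFS port
    ss -tn state established '( sport = :2049 )'
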
21:35 JoeJulian kettlewell: My guess would be something to do with either full disks or rebalance fix-layout. If it's neither of those, perhaps the same filename is always used for file creation and then renamed?
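
Sketches for checking JoeJulian's two guesses (volume name is a placeholder):

    # full or lopsided bricks: free space and inode counts per brick
    gluster volume status myvol detail
    # whether a rebalance/fix-layout ever ran or was left unfinished
    gluster volume rebalance myvol status
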
21:36 JoeJulian amye: You're also buying food and beverages, yes?
21:55 john51 joined #gluster
22:00 john51 joined #gluster
22:00 vbellur joined #gluster
22:01 vbellur joined #gluster
22:02 vbellur joined #gluster
22:02 baber joined #gluster
22:02 vbellur joined #gluster
22:15 major JoeJulian, thought RH was ;)
22:15 JoeJulian amye is Red Hat.
22:15 major oh .. well .. thats .. not different then ..
22:15 JoeJulian It's her budget I'm trying to spend.
22:15 major :)
22:16 vbellur joined #gluster
22:17 major oh .. thats easy .. just have to add servers/workstations for setting up a gluster cluster to the supplies
22:17 major couple of 10G switches maybe .. food .. coffee .. drinks .. maybe show off tiering .. so gonna need some SSD nodes ..
22:17 JoeJulian I've tried that before. I got stuff that wouldn't work for the purpose.
22:18 major I need to send you pictures of my gluster-in-a-box I built
22:18 major 19" wide, 9U tall, 18" deep
22:18 major holds 2 switches, a router, a UPS, and 4 nodes
22:19 major only needs 2 external cables .. 1 ethernet and 1 power
22:20 JoeJulian I tried to set up a build-your-own gluster where I had openstack on some machines and gave people 3 VMs to build a cluster with at a conference. Red Hat said they would take care of it (pre Amye) and they showed up with one day to configure the boxes, no time to test, and with less than half the horsepower we'd talked about. It was a sad workshop.
22:20 major :(
22:21 vbellur joined #gluster
22:22 major well .. I am heading back south tomorrow after work .. wont be back till Sunday evening...
22:22 masber joined #gluster
22:22 major but .. I almost finished git-track .. which is going to help me be 100% more distracted with tweaking it and adding new features
22:25 JoeJulian +1
22:25 vbellur joined #gluster
22:27 vbellur joined #gluster
22:28 vbellur joined #gluster
22:32 om2 joined #gluster
22:33 vbellur joined #gluster
22:37 oajs joined #gluster
22:38 vbellur joined #gluster
22:39 vbellur joined #gluster
22:45 vbellur joined #gluster
22:46 vbellur joined #gluster
22:50 vbellur joined #gluster
22:52 major https://github.com/major0/gitrack
22:52 glusterbot Title: GitHub - major0/gitrack: Distributed note/bug/issue tracker integrated with Git. (at github.com)
22:52 major I have this urge to get into a white lab coat and cackle at the ceiling screaming "IT LIVES!!"
22:53 major but like .. you can just add git-track to ~/bin/ or whatever and be all: git track new headaches
22:53 major and it will happily let you track things .. AND .. your entries can be safely cross-merged with other developers
22:54 vbellur joined #gluster
22:55 major I am hoping to add some git hooks so that my automated testing stuff will log test results via: git track new test; run tests; attach log; if tests pass, close the test entry, else leave it open
22:55 vbellur joined #gluster
22:56 major that way bugs and test results exist together on topic branches .. such that I can create a topic branch, open a bug report on it, apply a fix, execute some tests, and the topic branch carries the bug history and proof that the bug is fixed :)
22:56 major like .. proof carrying branches
22:56 major all before I publish them
22:56 vbellur joined #gluster
22:56 major thats the idea anyway ..
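
major's hook idea, roughly, as a test-runner script; git-track's "new" subcommand appears in the log above, while "attach" and "close" are assumed names based on his description:

    #!/bin/sh
    # open a tracking entry for this run on the current topic branch
    git track new test
    if make check > test.log 2>&1; then
        git track attach test.log   # assumed subcommand
        git track close test        # assumed subcommand
    else
        git track attach test.log   # keep the log, leave the entry open
    fi
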
22:57 major I suspect it will take more coffee to get it all cleaned up
22:57 vbellur joined #gluster
22:57 oajs_ joined #gluster
23:00 vbellur joined #gluster
23:11 cholcombe joined #gluster
23:11 vbellur joined #gluster
23:27 vinurs joined #gluster
23:49 MrAbaddon joined #gluster
