
IRC log for #gluster, 2017-03-29


All times shown according to UTC.

Time Nick Message
00:46 arpu joined #gluster
00:46 * major sighs.
00:47 major and .. something new to fix..
00:55 squeakyneb joined #gluster
01:12 csaba joined #gluster
01:13 lkoranda joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:54 arpu joined #gluster
02:16 kraynor5b joined #gluster
02:26 arpu joined #gluster
02:27 blu_ joined #gluster
02:44 moneylotion joined #gluster
02:47 Gambit15 joined #gluster
03:04 Gambit15 joined #gluster
03:17 zerick joined #gluster
03:21 raginbajin joined #gluster
03:26 wellr00t4d joined #gluster
03:45 plarsen joined #gluster
03:48 adathor joined #gluster
04:03 riyas joined #gluster
04:08 dominicpg joined #gluster
04:09 moneylotion joined #gluster
04:24 moneylotion joined #gluster
04:53 armyriad joined #gluster
04:57 Seth_Karlo joined #gluster
05:07 ankitr joined #gluster
05:07 legreffi1r joined #gluster
05:20 sbulage joined #gluster
05:20 shdeng joined #gluster
05:51 msvbhat joined #gluster
05:56 xMopxShell joined #gluster
06:19 jkroon joined #gluster
06:21 jtux joined #gluster
06:22 Shu6h3ndu joined #gluster
06:35 jtux left #gluster
06:37 Shu6h3ndu joined #gluster
06:39 ankitr joined #gluster
06:42 ankush joined #gluster
06:50 sanoj joined #gluster
06:59 ivan_rossi joined #gluster
07:10 ankitr joined #gluster
07:13 kramdoss_ joined #gluster
07:34 mbukatov joined #gluster
07:36 prasanth joined #gluster
07:42 Seth_Karlo joined #gluster
07:51 sona joined #gluster
08:18 armyriad joined #gluster
08:33 [diablo] joined #gluster
08:35 jiffin joined #gluster
08:35 [diablo] Good morning #gluster
08:36 [diablo] guys we have some machines that will connect via native client to 2 x gluster servers
08:36 [diablo] with an fstab entry specifying the IP address of one of the servers
08:37 [diablo] obviously if that machine is offline, I assume it's going to cause issues, so I proposed to my coworker we create a round robin DNS entry
08:37 [diablo] and change the fstab to connect to gluster on a FQDN
08:37 [diablo] however... would it be better to setup a VIP?
08:37 [diablo] or is RRDNS sufficient please?
08:39 [fre] joined #gluster
08:42 ankush joined #gluster
08:46 Wizek_ joined #gluster
08:49 bulde joined #gluster
09:09 MrAbaddon joined #gluster
09:13 ivan_rossi left #gluster
09:24 Seth_Karlo joined #gluster
09:25 Seth_Karlo joined #gluster
09:32 Seth_Kar_ joined #gluster
09:37 sona joined #gluster
09:50 jwd joined #gluster
09:57 kraynor5b_ joined #gluster
10:15 kraynor5b__ joined #gluster
10:23 kramdoss_ joined #gluster
10:41 sona joined #gluster
10:42 kramdoss_ joined #gluster
11:09 ghenry joined #gluster
11:29 ghenry joined #gluster
11:29 ghenry joined #gluster
11:35 sbulage joined #gluster
11:35 bwerthmann
11:45 musa22 joined #gluster
11:55 derjohn_mob joined #gluster
12:02 jtux joined #gluster
12:03 sanoj joined #gluster
12:05 jtux left #gluster
12:15 Philambdo joined #gluster
12:19 kpease joined #gluster
12:28 sbulage joined #gluster
12:28 ira joined #gluster
12:38 jiffin joined #gluster
12:43 kkeithley [diablo] VIP is probably overkill. Lots of people use RRDNS. You only need RRDNS for the initial mount. After mounting, gluster provides its own HA, so neither RRDNS nor a VIP is needed
12:43 kkeithley [diablo] Ask JoeJulian when he gets online in a little bit
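For reference, the native-client mount kkeithley describes only needs a reachable server at mount time to fetch the volfile; after that the client talks to every brick directly. A minimal fstab sketch (the RRDNS name "gluster-rr", the hostnames and the volume name "gv0" are placeholders, not anything from this channel):

    # /etc/fstab -- mount via the round-robin name; backup-volfile-servers only
    # matters for the initial volfile fetch if the first host happens to be down
    gluster-rr:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,backup-volfile-servers=server2:server3  0 0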
12:44 baber joined #gluster
12:56 unclemarc joined #gluster
13:04 jbrooks joined #gluster
13:05 shyam joined #gluster
13:05 jbrooks_ joined #gluster
13:13 msvbhat joined #gluster
13:17 sona joined #gluster
13:28 skylar joined #gluster
13:32 sona i didn't get the o/p of sprint_ubacktrace()
13:32 sona 0x7f1808e9996e : dict_ref+0xc/0xd6 [/usr/local/lib/libglusterfs.so.0.0.1]
13:32 sona 0x7f1808e9866b : dict_new+0x31/0x37 [/usr/local/lib/libglusterfs.so.0.0.1]
13:32 sona 0x7f17fb7303a8 : 0x7f17fb7303a8 [/usr/local/lib/glusterfs/3.10dev/xlator/protocol/client.so+0x513a8/0x262000]
13:33 sona in the last line, after client.so, 0x513a8/0x262000
13:33 sona is this the line number of the function, or something else?
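The trailing "0x513a8/0x262000" sona asks about is the return address expressed as an offset into client.so, followed by what appears to be the size of that mapping; SystemTap prints this form when it cannot resolve a symbol name. If the translator was built with debug info, the offset can usually be turned back into a source location offline (paths copied from the trace above; this assumes debug symbols are present in client.so):

    addr2line -f -e /usr/local/lib/glusterfs/3.10dev/xlator/protocol/client.so 0x513a8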
13:35 TBlaar joined #gluster
13:39 ankitr joined #gluster
13:44 q1x joined #gluster
13:45 thatgraemeguy joined #gluster
13:47 q1x hello, noob question if you please, how can I calculate the amount of gluster storage given the number of disks in the cluster? I can't seem to find a calculation tool or guideline for this.
13:48 squizzi joined #gluster
13:51 q1x we're planning on starting a cluster with 3 blades and might want to grow up to 8 blades but I have no clue what disk sizes to use because I don't know what storage space will be left after setting up gluster
13:54 MrAbaddon joined #gluster
14:17 MrAbaddon joined #gluster
14:24 major Depends on how you configure the storage.
14:27 q1x major: do you know if there is a calculator tool or smth?
14:27 q1x I cannot imagine I'm the first to ask :)
14:28 major Generally it is not much different than the storage tradeoffs for most other systems.  2 way replication divides total capacity by 2, 3 way by 3.
14:28 q1x major: total used capacity or total available capacity?
14:29 wushudoin joined #gluster
14:29 major I am not certain what sort of overhead is consumed by an arbiter node... I assume it's minimal.
14:30 major In replication you are mirroring data, so you have multiple copies across the network.
14:30 q1x major: could I use a raid 5 like mode?
14:31 wushudoin joined #gluster
14:31 major Yes, though I have no good input on the storage overhead there.
14:31 major Trying to find a link.. but on my phone :)
14:31 q1x major: awesome, thanks for the help
14:32 major Think you are looking for dispersed volumes.. not used them myself.
14:36 major Phone is not playing nice with urls.. Googling gluster dispersed has the links I would look at.
14:38 Philambdo joined #gluster
14:40 major Most of the system follows normal ideas.. striping is striping, replication is mirroring, dispersed is .. for basic comparisons kinda like raid.. but gluster operates on files instead of blocks.
14:40 major And you can combine the various storage schemes.
14:41 major Soo overhead is really a matter of how much redundancy you want and with which schemes.
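To put rough numbers on what major describes: usable capacity is just raw capacity divided by the redundancy of the chosen layout. A sketch assuming six equal bricks of size B (the hostnames, paths and volume names below are placeholders):

    #   distribute only            : 6 x B usable
    #   replica 2 (3 mirror pairs) : 3 x B
    #   replica 3 (2 triplets)     : 2 x B
    #   disperse 4+2 ("raid-like") : 4 x B, survives the loss of any 2 bricks
    gluster volume create gv-rep replica 3 srv{1..6}:/bricks/b1/rep
    gluster volume create gv-ec  disperse 6 redundancy 2 srv{1..6}:/bricks/b1/ec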
14:47 MrAbaddon joined #gluster
14:48 jbrooks joined #gluster
14:50 farhorizon joined #gluster
14:55 Asako joined #gluster
14:56 Asako Hello.  Is it recommended to have a gluster server be its own client?  I'm thinking about setting up gluster on one of our file servers but I need to be able to mount the file system locally.
15:01 raghu joined #gluster
15:04 Asako and can I create volumes using a disk that already has data on it?
15:05 farhorizon joined #gluster
15:10 programmerq left #gluster
15:12 Larsen_ joined #gluster
15:15 juhaj We want a half-way-house filesystem between a fast, distributed parallel lustre and a slow, reliable, backed-up, mirrored, snapshotted NFS (on ZFS). Glusterfs would be one option, but are there others? Reliability?
15:16 major Asako, I have a server node that is also a client, not certain there is any recommendation for or against doing such
15:19 major and yes .. you can "technically" point a gluster node at an empty directory and use that for your brick .. though I would treat that config with extra caution: depending on what else you are doing with that filesystem, you can run into performance issues, and there is the chance that something "else" (not gluster) might fill that partition...
15:19 susant joined #gluster
15:19 major still .. gluster will let you force the creation of the config
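A sketch of the safer layout major is hinting at: give each brick its own filesystem so nothing else can fill it, and only fall back to force when you genuinely must share a partition (the device names, hostnames and volume name are assumptions):

    mkfs.xfs -i size=512 /dev/sdb1                      # dedicated brick disk
    mkdir -p /bricks/b1 && mount /dev/sdb1 /bricks/b1
    mkdir -p /bricks/b1/brick
    gluster volume create gv0 replica 2 srv1:/bricks/b1/brick srv2:/bricks/b1/brick
    # only if the brick must live on a shared or root filesystem:
    # gluster volume create gv0 replica 2 srv1:/data/brick srv2:/data/brick force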
15:20 major juhaj, gluster's snapshot support is currently limited to LVM partitions, with initial support for btrfs having been started, and zfs "coming soon"
15:22 Asako major: thanks.  I just created the volume and it looks fine.
15:22 Asako the trick is going to be updating bind mounts to use the right volume, etc.
15:23 flying joined #gluster
15:23 cloph joined #gluster
15:23 juhaj major: This would not need snapshots, as the slow NFS system provides those
15:24 juhaj major: All we want is a price-point between the NFS system (which is very expensive/TB due to the mirroring and snapshotting) and the lustre (which is likewise expensive due to the high bandwidth)
15:24 q1x major, thanks for the help (sorry, was a bit busy)
15:24 juhaj No, that's not between. I mean pricepoint under
15:24 Asako lustre is pretty nice but in some ways I like gluster more.  No metadata or management nodes to worry about, replication is built-in, etc.
15:25 juhaj A big plus for glusterfs is one particular IRC channel on freenode ;)
15:25 Asako that too
15:25 Asako #lustre is like a ghost town
15:25 Asako mailing list is decent though
15:26 Asako with 10 or 40 gbit ethernet bandwidth shouldn't be an issue :D
15:29 juhaj Asako: You think so? 10 Gb ethernet on a 40 node cluster: 250 MB/s per node...
15:30 juhaj Mb/s I mean
15:30 Asako hmm
15:30 Asako I guess we're not moving that much data
15:30 juhaj Having said that, we could set a policy that one must not write to this from the compute nodes (or just simply not mount it there)
15:31 juhaj But if we were to do that, then I think another ZFS+NFS solution is probably going to win even though it suffers from split-brain problem once the first NFS server is full of discs
15:32 Asako gluster can do nfs
15:32 juhaj What other reasonable options are there, Cephfs? Another lustre? (Can I run another lustre from the same MDSs?)
15:32 Asako juhaj: yes, you can create new MDTs
15:33 Asako but that's more a topic for #lustre
15:33 juhaj Asako: I know, but if the speed is 1x10Gbps or even 1x40Gbps then gluster will likely look like too much extra hassle and lose
15:35 juhaj Gluster can spread the IO across several servers like lustre, and in this scenario we don't suffer from lustre's perpetual problem that increased bandwidth can only be achieved with an increasing number of OSSs. In a way that's true with gluster as well, but 2-4 servers should be enough for this "slow" option. If it is later expanded beyond those initial servers, the increase in bandwidth is not a problem of course
15:37 Asako you're always going to be limited by what the network can do.  Assuming you're maxing out disk throughput.
15:38 juhaj Yes, that's a given. But what I would not like to be limited by is the network of a single server
15:41 Asako I'm not really an expert but I thought I/O was distributed between bricks
15:45 juhaj Yes, that's gluster's/lustre's big point (well, one of them, anyway) and that's something NFS/ZFS cannot do
15:46 juhaj So an NFS-based solution is limited by the B/W of the NFS server, i.e. 10 or 40 Gbps, whereas gluster/lustre are limited by N(servers)*BW/server
15:47 juhaj We have lustre for high B/W stuff, so that limit might not be so important, but more important is the expansion limit that NFS brings: it's a single server and when that's full, that's it.
15:47 Asako I wouldn't want to manage multiple nfs servers either.  Kind of in that situation here right now.
15:48 Asako so I'm working on setting up gluster to provide redundancy and better performance
15:49 juhaj Oh, I didn't even get into the admin/management side yet! :) I was only considering users/clients for now. I feel just the user/client inconvenience of NFS warrants at least another OPTION even if NFS still wins in the end
15:55 Asako gluster also has an advantage since the client doesn't require kernel mods
15:55 Asako lnet requires a specific kernel on a specific distro (CentOS)
15:56 Asako kind of makes installing security updates a pain
15:56 Asako juhaj: nfs/cifs are probably the most convenient options for clients.  glusterfs-fuse also works great.
16:03 juhaj Clients do not (and cannot) worry about kernels, mounting etc, we do all that for them. But they WILL worry about "which NFS share my files were on again" if there were many
16:04 armyriad joined #gluster
16:05 ankitr joined #gluster
16:11 XpineX joined #gluster
16:14 kblin hi folks
16:15 kblin I've lost a disk containing my brick data on one of my gluster servers. All data is mirrored to another machine, so I figure self-healing should be able to just sort this out again, right?
16:15 kblin do I need to do anything more than just creating the brick directories again and bringing up gluster?
16:17 Asako juhaj: sounds like you need something like autofs with configuration management
16:17 Asako our users don't need to know, or care, about what server their data is on.  Puppet manages it.
16:21 Asako kblin: you should be able to just mount and start the volume.  self heal will replace the missing data
16:21 kblin ok, I'll give this a try.
16:22 decayofmind Hi! In both 3.7.* and 3.8.* I'm getting SEGFAULTS at "af_inet_bind_to_port_lt_ceiling" where  i = 32527
16:23 decayofmind Actually the i value is different every time, but always near 32500
16:23 decayofmind SEGFAULT is on attempt to mount a volume
16:24 Gambit15 joined #gluster
16:27 musa22 Hi All! I need to migrate glusterfs brick directories from old disks to newer disks. Can someone pls advise me on the best way to do this?
16:30 kblin hm, it's reporting a split-brain, which is curious, as the one copy should be gone...
16:32 kblin [xlator.c:403:xlator_init] 0-tue-posix: Initialization of volume 'tue-posix' failed, review your volfile again
16:32 saltsa joined #gluster
16:32 kblin hm, that looks like it's a bit unhappy
16:33 kblin I mean obviously the EAs for the volume are gone, I lost that disk and this is a new partition
16:34 Seth_Karlo joined #gluster
16:38 kblin is there any way of checking the progress of the self-heal?
16:39 kblin gluster volume heal $VOL info doesn't seem to be getting shorter
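For watching heal progress, the commonly used commands in this gluster generation are (the volume name is a placeholder):

    gluster volume heal $VOL info                    # entries still queued for heal
    gluster volume heal $VOL statistics heal-count   # just the per-brick pending counts
    gluster volume heal $VOL info split-brain        # entries self-heal cannot resolve on its own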
16:41 juhaj Asako: Hm... tell me? I don't see how autofs helps, it just mounts, no? My problem is that once there are two nfs servers, we would have /nfs/share1 and /nfs/share2 and users will start to worry about whether their data is on one or the other. Automatic migration from one to the other might be an option but that complicates pricing and creates a lot of extra admin work, so not sure (would have to cost it first
16:41 juhaj )
16:46 Seth_Karlo joined #gluster
16:50 sona joined #gluster
16:53 msvbhat joined #gluster
16:55 major And in today's episode of "How I pay my bills" we are going to draw pictures for our manager to "help them understand" what exactly it is you are doing and how configuration management works....
17:00 sona joined #gluster
17:05 raghu joined #gluster
17:06 moneylotion joined #gluster
17:08 Asako major: I've had to draw diagrams too
17:08 major yah .. think everyone has .. just being selfish as I have other things I would like to be working on
17:09 Asako juhaj: not sure exactly what your goals are but we have user home directories mounted using autofs.  Doesn't matter what nfs node they actually mount from.
17:10 Asako bind mounts and aufs can also make things appear as a single directory
17:10 major I really want to work on the subdir mount code
17:10 major then just autofs nodeN:home/${USER} /home/${USER}
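What Asako and major are describing would look roughly like a wildcard autofs map, where "*" is the username being looked up and "&" substitutes it back in (server names and export paths are placeholders; the gluster variant needs the per-subdirectory mount support major wants to work on, which is not in the 3.10-era releases discussed here):

    # /etc/auto.master
    /home  /etc/auto.home

    # /etc/auto.home
    *   -fstype=nfs,vers=3   nfssrv:/export/home/&
    #*  -fstype=glusterfs    glsrv:/homes/&          # once subdir mounts exist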
17:12 Asako how does gluster react if I change files directly on the brick?
17:12 major ...
17:12 baber joined #gluster
17:13 major the contents or the extended attributes?
17:13 Asako never mind
17:14 Asako right now I just have a volume with a single brick, there's other processes running on the original mount point though.
17:22 MrAbaddon joined #gluster
17:25 nishanth joined #gluster
17:26 unclemarc joined #gluster
17:46 TZaman joined #gluster
17:46 TZaman left #gluster
17:58 social joined #gluster
17:59 sona joined #gluster
18:07 raghu joined #gluster
18:11 vbellur joined #gluster
18:11 jiffin joined #gluster
18:17 juhaj Asako: Sure, there's no problem as long as all your files are on a single server, but as I said, that would unlikely be true in the medium term with NFS as people accumulate more files
18:17 MrAbaddon joined #gluster
18:18 MidlandTroy joined #gluster
18:18 juhaj aufs could be an option, but I'm unsure how reliable/fast/hassle-free and how much management it would imply
18:21 msvbhat joined #gluster
18:22 baber joined #gluster
18:23 jwd joined #gluster
18:27 msvbhat joined #gluster
18:40 sonal joined #gluster
19:09 kblin hi folks
19:10 legreffier joined #gluster
19:10 kblin I've got a gluster server that lost the drive the bricks were on
19:10 kblin I've replaced the drive, and started gluster again, but it looks like my bricks are not running, according to gluster volume status
19:11 kblin the gluster self-heal is also logging "failed to get the port number for remote subvolume."
19:11 major this might be a silly question, but did you go through the process of telling gluster that you replaced the drive?
19:12 JoeJulian yep, silly. ;)
19:12 major I am entitled to be silly now and then ..
19:12 major spent my whole morning drawing pictures...
19:13 JoeJulian There are safeguards to ensure the brick servers do not start just in case your brick isn't mounted (to keep you from filling up your root drive accidentally)...
19:13 major and I didn't have my favourite crayons
19:13 kblin oh, that makes sense, of course
19:13 JoeJulian To overcome those safeguards, kblin, you'll need to "start $volume force"
19:14 * misc ponder on obvious star wars joke
19:14 kblin JoeJulian: volume start $vol force, I assume?
19:14 major gluster volume start $vol force?
19:14 JoeJulian gluster volume start $vol force
19:14 major jinx
19:15 JoeJulian gluster volume list | xargs -I{} gluster volume start {} force
19:15 kblin JoeJulian, major: thanks
19:15 kblin I only have one :)
19:15 JoeJulian Do I win at being pedantic?
19:16 JoeJulian You're welcome kblin
19:16 baber joined #gluster
19:16 major for vol in $(gluster volume list); do gluster volume start "${vol}" force & done # :P
19:17 JoeJulian Just because it's more verbose doesn't make it more pedantic. :P
19:17 major but it starts faster ;)
19:17 major its a race to the bottom I think
19:18 major bah .. get to head home a day early so I can sit in the rain and change the brake caliper on the truck ... woohoo..
19:18 JoeJulian for vol in /var/lib/glusterd/vols/*; do gluster volume start $(basename $vol) force; done
19:19 JoeJulian There... even faster.
19:19 major you know you can s/force;/force\&/ right? start all the volumes in parallel and see if you can't locate race conditions and locking problems? :)
19:20 JoeJulian hehe
19:20 kblin ok, now I just need to wait for my 33k entries in the self-heal list to be fixed
19:20 JoeJulian I'm pretty sure there are actually locks that would prevent that.
19:20 major I should go spin up 100 1G bricks and run that
19:21 JoeJulian kblin: wheeeee
19:21 JoeJulian major: I received your document and passed it along appropriately.
19:22 major Understood
19:22 major you been following the discussion on the snapshot cleanup?
19:22 kblin and by wait, I actually mean "go home and check back tomorrow"
19:22 JoeJulian loosely
19:22 kblin which is among the better kind of waiting
19:22 kblin thanks again :_
19:22 kblin :)
19:22 major kblin, scotch helps
19:22 JoeJulian kblin: Just cut out early and make it a long weekend.
19:23 kblin Nah, need to update to a recent gluster version once that cluster is down anyway
19:23 JoeJulian +1
19:24 kblin just didn't want to do that with 33k outstanding self-heals
19:24 JoeJulian Of course... if you just stop the volume now, upgrade it, then let the new version do the heals... no need to come back.
19:24 JoeJulian I'd do it.
19:24 kblin I'll see what it looks like tomorrow :)
19:25 moneylotion joined #gluster
19:33 jbrooks joined #gluster
19:35 raghu joined #gluster
19:35 major I feel like all the current valgrind test targets are .. sorta dated
19:43 major bah .. brain
19:43 major vagrant targets
19:55 msvbhat joined #gluster
20:08 shyam joined #gluster
20:08 raghu joined #gluster
20:13 Asako hey, is there a way to fix my SELinux labels on a gluster mount?
20:13 Asako everything is labeled as system_u:object_r:fusefs_t:s0 which is wrong
20:18 jbrooks joined #gluster
20:27 baber joined #gluster
20:31 Asako there's always a wrench in the works, blah
20:36 misc what version of gluster, as I think selinux support was quite recent ?
20:38 Asako 3.10
20:38 misc oki, yeah, that's recent
20:41 Asako I really don't want to disable selinux but it looks like FUSE doesn't support it
20:43 MidlandTroy joined #gluster
20:44 major hmm .. it should...
20:44 major or .. it can...
20:48 P0w3r3d joined #gluster
20:49 Asako http://lists.gluster.org/pipermail/gluster-users/2016-March/025919.html found this thread which is a year old
20:49 glusterbot Title: [Gluster-users] SELinux support in the near future!!! (at lists.gluster.org)
20:49 major https://bugzilla.redhat.com/show_bug.cgi?id=1318100
20:49 glusterbot Bug 1318100: medium, medium, ---, manikandancs333, ASSIGNED , RFE : SELinux translator to support setting SELinux contexts on files in a glusterfs volume
20:49 derjohn_mob joined #gluster
20:50 major yah .. looks like the necessary full-feature set is not fully there yet
20:51 Asako ok, thanks
20:53 Asako guess my option is to disable it which leaves us no worse off than the old file server
21:02 major well .. there are apparently limited options for assigning a context, so long as it doesn't change
21:02 major and it looks like it is working towards support for 3.11
21:02 major but .. I dunno how useful either of those are for you in the here&now
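The "limited option" major refers to is most likely the generic FUSE context= mount option, which stamps one fixed label on everything under the mount point rather than storing per-file labels (per-file labels are what the SELinux translator tracked in the bug above is for). A hedged sketch, assuming a single label is acceptable and that the mount helper in your version passes the option through:

    mount -t glusterfs -o context="system_u:object_r:httpd_sys_content_t:s0" srv1:/gv0 /mnt/gv0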
21:20 musa22 joined #gluster
21:26 musa22 joined #gluster
21:29 musa22 joined #gluster
21:31 musa22 Hi All! I need to migrate glusterfs brick directories off the old disks onto newer disks. Can someone pls advise me on the best way to do this?
21:51 papna joined #gluster
22:02 JoeJulian According to the developers that have deprecated the use of "replace-brick" for this purpose (a change I disagree with fundamentally), the approved way is to either "replace-brick ... commit force" and let self-heal handle re-replicating or to add-brick while increasing the replica count, wait for the heals to finish, then remove-brick while decreasing the replica count.
22:03 JoeJulian Not happy with any of it.
22:03 JoeJulian musa22: ^
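Spelled out, the two approaches JoeJulian describes look roughly like this for a replica 2 volume (hostnames, brick paths and the volume name are placeholders):

    # 1) swap the brick in place and let self-heal repopulate it
    gluster volume replace-brick $VOL srv1:/bricks/old/brick srv1:/bricks/new/brick commit force
    gluster volume heal $VOL full

    # 2) temporarily raise the replica count, wait for heals, then drop the old brick
    gluster volume add-brick    $VOL replica 3 srv3:/bricks/new/brick
    gluster volume heal $VOL info                      # wait for this to drain
    gluster volume remove-brick $VOL replica 2 srv1:/bricks/old/brick force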
22:06 major ...
22:06 major I am having a time wrapping my head around the ins and outs of that
22:07 major is there a link for the reason for said deprecation?
22:07 farhoriz_ joined #gluster
22:07 JoeJulian There were failures to migrate all the files. Rather than figure out why and fix that, it was just deprecated.
22:07 major ...
22:08 JoeJulian As it sits now, remove-brick will still leave a notice in the log file stating that not all the files might have been moved and that it's your responsibility to verify. >:(
22:08 major https://i.ytimg.com/vi/6AuZdUGj1BU/hqdefault.jpg
22:11 musa22 Thanks JoeJulian. Currently we've got split-brain entries; do you advise not proceeding with "replace-brick" until the split-brain entries are resolved?
22:11 JoeJulian Yes
22:11 musa22 Many Thanks
22:21 MidlandTroy joined #gluster
22:22 shyam joined #gluster
22:49 crag joined #gluster
22:51 farhorizon joined #gluster
22:56 musa22 joined #gluster
22:58 musa22 JoeJulian: One more question :) - Can I use LVM to migrate the PEs to the newer disk instead of the glusterfs replace-brick command?
22:59 JoeJulian yep
22:59 JoeJulian That's what I would do given the chance.
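The LVM route musa22 asks about is an online pvmove of the brick's extents, which leaves the brick path and the volume definition untouched. A sketch assuming the brick LV lives in a volume group "gvg" on the old disk /dev/sdb1 and the new disk is /dev/sdc1 (all of these names are assumptions):

    pvcreate /dev/sdc1
    vgextend gvg /dev/sdc1
    pvmove /dev/sdb1 /dev/sdc1     # migrates the physical extents while the brick stays online
    vgreduce gvg /dev/sdb1
    pvremove /dev/sdb1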
22:59 kblin hm, it looks like my heal list is getting longer, not shorter..
23:00 musa22 JoeJulian: Many Thanks.
23:00 JoeJulian kblin: not unexpected. As it traverses the directories it will add files.
23:01 kblin but I don't see anything actually going on there
23:02 kblin there doesn't seem to be any IO on the server, disk use on the brick partition isn't going up
23:02 JoeJulian Are directories being created?
23:02 JoeJulian iirc, the directory tree gets recreated first.
23:03 kblin hm, I see one of 14 top level directories
23:03 kblin possibly the smallest one
23:03 kblin that one's 80 KB
23:04 JoeJulian subtrees maybe?
23:04 JoeJulian Maybe "gluster volume heal $vol full"
23:04 JoeJulian though if there's stuff in the heal queue it should be going.
23:05 kblin iotop claims there's literally nothing going on
23:06 MrAbaddon joined #gluster
23:06 kblin hmm, after the gluster volume heal $vol full, the heal list grew by another 1000 entries :)
23:07 kblin but now there
23:07 kblin 's IO at least
23:07 kblin there's my directory tree, that looks much better
23:07 kblin thanks yet again
23:11 Seth_Karlo joined #gluster
23:12 JoeJulian You're welcome. :)
23:16 major okay .. almost have a replacement ansible system that is generic and handles the current vagrant centos6 and fedora templates .. all w/in a single playbook
23:16 major really hate the vim highlights for yaml though ...
23:16 major no time to fix it
23:17 kblin hm? I'm more annoyed by the indenting
23:17 major ....
23:17 major I disabled ai
23:17 major before I threw the keyboard
23:18 major or .. so that I wouldn't throw the keyboard
23:18 kblin :)
23:21 Seth_Karlo joined #gluster
23:27 JoeJulian fyi... replacement *for* ansible and the next generation of config management, see #mgmt (right purpleidea?)
23:30 major I'm for whatever works this coming week so I can keep writing code ;)
23:30 major or rather .. whatever works and I can easily add to..
23:31 major always those self-centered qualifiers...
23:31 major all about me me me
23:31 major and .. 1 hr till amtrak time
23:55 raghu joined #gluster
23:57 purpleidea JoeJulian: almost... #mgmtconfig
23:57 purpleidea JoeJulian: but this is meant as an automation tool for things that are out of scope for Ansible. So Ansible is still good for many things.
23:58 JoeJulian Oh, right. You work for Red Hat. ;)
23:59 major hah
