We have spent time with many banks that are looking to improve the performance of their credit processes. Often, they explain that overall performance is sluggish: analysis, objections and approvals simply take too long, with no identifiable root cause. Some think it’s their staffing level, some suspect sub-standard performance in key positions or departments, and others suspect a chokepoint in the process where one largely manual step drags everything out. The bottom line is that they are not certain, because they don’t have metrics they can completely trust.
Our recommendation to these banks is to look at their credit processes from the top down, compartmentalizing each step. That compartmentalization exercise, undertaken so that each step can be measured and its efficiency maximized, often leads to a different way of thinking about the overall process.
The key objective for breaking the process down is to be able to establish metrics for each step. For example, if you believe that your existing staffing levels should be able to support the originating, decisioning and boarding of a loan within 18 days, but it generally seems to take 30 to 35 days – having each step identified and measurable will allow you to validate that assumption very quickly.
You can then dial in the efficiency of the steps that are causing the most significant delays in the overall process. If you choose to add to the resource pool, you can do it from a foundation of fact rather than speculation, and defend that decision. Or there might be a less costly remedy, such as changing the workflow, changing the completion criteria for a step, or introducing automation or deeper data integration into that step.
Robust workflow software will help compartmentalize the process for easy tracking. Even without workflow software, though, your bank can generate and track metrics by passing a virtual “token” between process steps that you define. Managers will feel the heat to move the token forward if they know it is being measured. This exercise may improve your metrics even before you set out to measure them.
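The token-passing idea above can be sketched in a few lines of code. This is a minimal illustration, not a prescription: the step names, dates and the bottleneck they reveal are all invented for the example.

```python
from datetime import datetime, timedelta

# Illustrative process steps -- your bank's own compartmentalization
# will differ. The "token" is simply a record stamped as a loan enters
# and leaves each step.
STEPS = ["origination", "underwriting", "approval", "boarding"]

class LoanToken:
    def __init__(self, loan_id):
        self.loan_id = loan_id
        self.entered = {}   # step -> datetime the token arrived
        self.exited = {}    # step -> datetime the token moved on

    def enter(self, step, when):
        self.entered[step] = when

    def exit(self, step, when):
        self.exited[step] = when

    def step_durations(self):
        """Days spent in each completed step -- the per-step metric."""
        return {s: (self.exited[s] - self.entered[s]).days
                for s in STEPS if s in self.exited}

# Example loan: 31 days end to end, with invented per-step times.
token = LoanToken("L-1001")
clock = datetime(2018, 1, 2)
for step, days in zip(STEPS, [4, 15, 5, 7]):
    token.enter(step, clock)
    clock += timedelta(days=days)
    token.exit(step, clock)

durations = token.step_durations()
bottleneck = max(durations, key=durations.get)
```

Even this toy version makes the point of the post: once each handoff is stamped, the step consuming most of the 30-plus days stops being a matter of opinion.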
Regardless of the path you take, a well-defined, measurable process gives you the tools you need to increase efficiency, profitability and support growth on demand.
You have lending and credit processes that have worked well for years. Growth has been gradual, so you have not noticed, nor been able to measure, the cracks that have appeared in the foundation of those processes. Now you’ve started an integration effort for an acquired bank, or the Board has tasked you with increasing loan volume by 50% over the next 18 months while maintaining your current cost structure.

Both events give you pause. Will your existing processes scale seamlessly? Or will they break down under the increased workload and cause significant processing delays, disrupting customer service and increasing your risk exposure? To potentially make matters worse, do you have a handle on how you will measure any performance degradation? Staff performance, monitoring and management will no longer be as simple as they were when the team was smaller.

What do you do? Immediately hire more staff? Or shift responsibilities so your higher-paid, client-facing staff take on more steps in the process, since they know the clients and opportunities best? We’ve seen this happen, and it’s probably one of the most expensive paths you can choose. While these steps might seem the easiest way to respond, they are not your best course of action for the long term.
We believe that now is the time, whether you are experiencing rapid growth, an efficiency crisis or a steady state, to evaluate and tune your credit processes from the top down. Our experience tells us that starting with well-defined goals and performance objectives is key. Creating processes that identify well-understood milestones, potential chokepoints, risk factors and key pressure points can give you and your team a firm, predictable set of metrics from which to manage your lending and credit processes. This creates clarity, so you can effectively communicate the processes and metrics both to the management team, for buy-in and support, and to the teams involved, so they can execute against a well-defined set of performance objectives.
Once solid metrics are in place, you can apply growth, contraction or other resource models with confidence to predict how the overall process will perform, measuring your level of responsiveness to your clients and accurately predicting your costs and profitability.
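Once per-step throughput is measured, a growth scenario like the 50% Board mandate becomes simple arithmetic. The sketch below assumes invented figures throughout (current volume, staffing, and loans-per-person rates) purely to show the shape of the calculation:

```python
import math

# Invented baseline: 40 loans/month today, with measured throughput
# per person at each step. A 50% growth target sets the new volume.
current_volume = 40
growth = 0.50
target_volume = current_volume * (1 + growth)   # 60 loans/month

# step -> (current staff, measured loans per person per month)
steps = {
    "origination":  (4, 12),
    "underwriting": (5, 10),
    "boarding":     (3, 25),
}

# Extra hires needed per step (0 means existing staff can absorb it).
plan = {}
for step, (staff, per_person) in steps.items():
    needed = math.ceil(target_volume / per_person)
    plan[step] = needed - staff
```

The useful output is not the exact numbers but the contrast: some steps need a hire to hit the target, while others already have slack, which is precisely the "foundation of fact" the post argues for.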
Global Wave Group is active in our local community, Orange County, California. Here is a great article from the University of California, Irvine about Global Wave’s sponsorship of a 2018 capstone project in the Department of Informatics. Happy Holidays!
After a new value proposition takes the market by storm, as with Salesforce’s SaaS model introduced in 1999, the value proposition will eventually be digested, taken apart and reconstituted.
Grant Miller, CEO of Southern California-based Replicated, rebuilds the SaaS value proposition in a surprising way, as detailed in a TechCrunch opinion piece.
Today, Miller argues, two of the original SaaS arguments are shopworn and no longer compelling: “Go multi-tenant to save costs” and “Centralize services to ease deployment, maintenance and upgrades.”
- With respect to multi-tenancy, the cost of computing is now low whatever setup you choose. Security and control are paramount; cost savings on computing resources matter less. Hence the rise of the private cloud and the hybrid cloud.
- With respect to centralized services, software buyers have never been comfortable with having customer data on a SaaS provider’s servers. Containerization is a new technology that makes centralized services optional.
Containerization is the future of on-prem. “Enterprise software packed as a set of Docker containers orchestrated by Kubernetes or Docker Swarm, for example, can be installed pretty much anywhere and be live in minutes,” Miller writes.
Bankers, consider that private cloud / on-premises software can be served to users in a browser application (just like SaaS), with deployment, updates and upgrades delivered in containers to your IT team. Private cloud / on-premises gains the ease of use of SaaS while retaining behind-the-firewall security and control.
The next time a SaaS vendor tells you that your ultra-sensitive bank customer data is safe with them, you could give them your best Office Space impression:
“Yeah, I’m gonna need you to containerize your SaaS, okay? Ah, I almost forgot, I’m also gonna need you to go ahead and deploy, maintain and upgrade behind our firewall. So, if you could do that, that would be great…”
As a fast-growing commercial and private bank with $4.5 billion in assets, First Foundation Bank was delighted with its procurement and implementation of Credit Track, Global Wave Group’s straight-through commercial loan origination system.
AuditOne, LLC Co-CEO Jeremy Taylor has prepared a summary of the proactive measures financial institutions need to consider now to better prepare for the new Current Expected Credit Loss (CECL) standard. We’re delighted to host this guest blog post.
A lot is being written these days about the new Current Expected Credit Loss (CECL) standard for the ALLL and what it’s going to do to bankers’ lives. There are plenty of summaries available out there. We’re going to stick here to two angles.
- What you need to do now to prepare. For many institutions (the non-public ones), there are still about three years before you need to report your loan loss reserving in accordance with CECL. That invites the temptation to postpone. But there are a couple of things all institutions should be doing right now to lay the groundwork, even if other matters (like considering alternative calculation methodologies and available vendor models) can wait. That’s because the data-collection effort described below will require a lot of planning to ensure those needs are fully anticipated and ready to go, and it will in turn become the top agenda item for the CECL Committee.
- Form a CECL Committee. At a smaller institution, the obvious participants are the CCO, CFO and COO/CIO (or their designees), all of them having direct interests in the process. At this earlier stage, the Committee will have an education role for the bank, and will need to be gathering information for future decisions on models, methodologies, et al. But its key near-term responsibility will be to:
- Identify and arrange for collection of all required data. This applies both in terms of time series (i.e., as far back as can reasonably be gathered) and cross-sectionally (i.e., a broader range of data series than currently required). It applies both to internal data (i.e., loss and other performance characteristics for the institution’s loan portfolio, down to the borrower and loan level) and external data (e.g., macroeconomic conditions in relevant markets, peer bank loan performance metrics). Note that identifying data needs will require at least some sense of how reserve requirements will be calculated (modeled).
- What may not have registered. The 2016 guidance on CECL was deliberately vague as to how to go about setting up a CECL-compliant approach. This was appropriate simply because of the vast differences across the US financial system in size, sophistication, data availability, MIS capabilities, in-house expertise/understanding, etc., etc. But there are some key features or characteristics of CECL whose significance and implications may not have fully registered, that we thought might be helpful to highlight.
- The general vs. specific reserving distinction (i.e., FAS 5 vs. FAS 114) is going away. That’s because CECL applies the same approach to every loan, whatever its quality – i.e., estimating potential loss over the remaining life of the loan. So the carve-out of impaired loans, with its own manual of requirements, will no longer be needed.
- But there will still be pooling. CECL envisages estimation of potential loss on the basis of pooling assets with similar (risk of loss) characteristics, similar to today’s approach. That could apply to impaired assets, such as mortgages or consumer loans with common borrower and structural features and common drivers of credit impairment. But it is likely that larger commercial loans that are adversely graded will continue to be handled and reported individually.
- CECL will apply not just to loans but also to securities. But not to a trading portfolio. For HTM securities, you’ll need to estimate a lifetime credit loss, just like for loans. For AFS, rather than the current requirement of (irreversible) OTTI assessment, there will be a valuation adjustment to reflect the difference between fair value and amortized cost. Estimation of lifetime expected loss can be done on a pooled basis for securities with similar risk characteristics.
- When you book a new loan or security, you book the expected credit loss as an expense right away. It’s no longer the incurred loss approach of booking when a loss is deemed probable. Rather, it’s an up-front estimation as to how much might be lost actuarially, given the mortality (i.e., default and recovery) characteristics of that type of borrower and loan. On average you’re going to lose a little making a given type of loan; recognizing this with a day one loss provision is entirely appropriate. Doing so will help remind us that our credit spread is intended to cover that expected loss amount (with capital there to protect against outlier (“unexpected”) losses).
- CECL’s impact on reserve levels may be material – but shouldn’t be excessive. Intuitively, moving from losses already incurred (which in practice is typically calculated based on a one-year loss horizon) to a life of loan should boost the required reserves; it means a longer period over which losses might occur. True, but there are offsetting effects. Most importantly, smaller financial institutions today are typically carrying booked reserves in excess of required (i.e., calculated) levels – and that’s after using Q-factors to push up the required levels. The move to CECL will push up required loss reserves, but for many institutions that may still lie below the current actual reserve level.
- Regulators recognize that CECL implementation will vary widely. For large institutions, splitting probability of default (PD) from loss given default (LGD) will be expected, along with more powerful migration or vintage analysis approaches. Smaller institutions, on the other hand, should be able to build on their current ALLL methodology in order to satisfy regulators – e.g., still starting with historic loss rates, but looking back over a longer time horizon; still adding on Q-factor adjustments, but looking out over a longer (remaining life) horizon. However:
- More institutions will find vendor software worth considering – as much for managing the more onerous data expectations as for increases in complexity of calculations required.
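The two calculation styles contrasted above can be sketched numerically. This is a toy illustration under invented assumptions (PD curve, LGD, loss rates, pool sizes); it is not a compliant methodology, only the shape of the arithmetic:

```python
# Larger-institution style: marginal probability of default (PD) per
# year x loss given default (LGD) x exposure, summed over the remaining
# life of the loan -- the amount booked as a day-one provision.
balance = 1_000_000                                # exposure (held flat)
lgd = 0.40                                         # loss given default
annual_pd = [0.010, 0.012, 0.012, 0.009, 0.007]    # marginal PD by year
day_one_ecl = sum(pd * lgd * balance for pd in annual_pd)

# Smaller-institution style, building on today's ALLL mechanics:
# a long look-back historic annual loss rate, scaled to the pool's
# weighted-average remaining life, plus a Q-factor add-on.
pool_balance = 50_000_000
annual_loss_rate = 0.0035          # long-horizon historic average
remaining_life_years = 3.2         # weighted-average remaining life
q_factor_addon = 0.0010            # qualitative adjustment over that life
lifetime_rate = annual_loss_rate * remaining_life_years + q_factor_addon
required_reserve = pool_balance * lifetime_rate
```

Note how both paths price the same thing, lifetime expected loss, which is why the credit spread on a new loan is meant to cover the day-one provision, with capital absorbing the outlier losses.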