Perspectives: Strategic Issues

November 18, 2019
Simple is Not Always Better: The Community Bank Leverage Ratio Playbook
By: Adam Mustafa, CEO – Invictus Group

In September, the Federal Deposit Insurance Corporation finalized the Community Bank Leverage Ratio (“CBLR”). Community banks with less than $10 billion in assets can opt into the new capital framework and forgo risk-based capital rules as long as they maintain at least a 9% Tier 1 leverage ratio. The rule is a byproduct of S.2155, adopted in 2018 to roll back much of the Dodd-Frank Act. The bill called for regulators to create a new simpler capital framework for community banks, with a CBLR between 8% and 10%. Predictably, the regulators settled on the midpoint of that range. The CBLR is on track to go into effect on January 1, 2020. Since banks will use their Call Reports to report their capital levels, the framework will first be available on March 31, 2020.

Banks that opt into the CBLR and remain above the 9% threshold would no longer be required to comply with the “Basel III” capital rules, or even calculate their risk-based capital ratios. Touted as easing the regulatory burden, the new framework will primarily free community banks from the paperwork hassle of calculating these ratios.

However, this will come at a severe – yet hidden – cost to shareholders. Invictus has calculated that 96% of community banks could justify a leverage ratio requirement of less than 9%. In other words, a $1-billion bank that can support a customized capital requirement of 8% would be burning $10 million of capital to the ground simply by opting into the CBLR.
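The arithmetic behind that example can be sketched directly. A minimal sketch, using the article's own figures (a $1-billion bank, a 9% CBLR versus an 8% customized requirement):

```python
# Illustrative: capital encumbered by opting into the CBLR when a bank
# can support a lower, customized capital requirement.
total_assets = 1_000_000_000       # a $1-billion bank
cblr_requirement = 0.09            # the 9% CBLR threshold
customized_requirement = 0.08      # requirement supported by the bank's own analysis

excess_capital = total_assets * (cblr_requirement - customized_requirement)
print(f"Capital tied up unnecessarily: ${excess_capital:,.0f}")  # $10,000,000
```

Every basis point between the CBLR and a defensible customized requirement is capital that cannot be deployed into loans, dividends or buybacks.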

Fiduciary Responsibility to Know Your Bank
Every community bank is unique. Each operates in a discrete footprint with its own strategy. The composition and risk characteristics of its assets are distinct, and its income streams, cost structures and efficiencies differ from its peers'. Yet any one-size-fits-all approach to capital adequacy such as the CBLR will be based upon the lowest common denominator bank. It’s no coincidence that examiners often tell banks they must hold a minimum Tier 1 capital ratio of at least 9% – that is the de facto fallback level.

The management team of every community bank must take it upon itself to calculate the minimum capital requirement commensurate with its risk profile. Blindly opting into the CBLR is a disservice to shareholders. At the same time, if a community bank decides NOT to opt into the CBLR, it must be prepared to support and defend its customized capital requirement to its board of directors and regulators. Declining the CBLR without a customized capital requirement backed by data and analytics is also unacceptable: it will give your regulator cause for concern, because it suggests you do not have strong command over how your capital is allocated.

To be frank, even if your regulator ignores or doesn’t accept your calculation, management’s job is to not go down without a fight. Simply bemoaning that your regulator won’t be willing to have such a conversation is no excuse for not having one. That’s weak, short-sighted and a disservice to your shareholders. And in our experience, regulators will listen – and they will look at documentation you provide to make your case.

Stress Testing – The Right Tool for the Job
So how does a community bank go about calculating its own capital requirement? The only answer is stress testing. Community banks must realize that stress testing is the perfect weapon to control their own capital requirements. Yes, the Comprehensive Capital Analysis and Review (CCAR) is designed for large banks, not community banks. However, community banks need to quit looking at stress testing as simply a check-the-box exercise to please their regulators or to adhere to the 2006 interagency guidance on managing CRE concentrations.

By taking this approach, community banks will be pleasantly surprised to discover that their regulators will often respond positively. For the regulators, it’s all about trust. Can they trust a given bank to properly manage its capital? In defense of regulators, how can the answer possibly be yes if a given bank can’t even estimate its own capital requirement or support it with data?

Most community bank regulators will be receptive to a customized capital requirement, even if they are not trained in stress testing, under three conditions:

1. The bank is genuinely using stress testing to help create and manage its strategic and capital plans. In other words, the bank is not just running a stress test to appease the regulators. The regulators’ conscious or subconscious litmus test will be “would this bank be using stress testing even if they were not regulated?” The answer needs to be yes. Banks cannot fake this. They need to embrace stress testing, or it will never work, no matter how much money, quants or data they throw at the exercise.

2. The stress tests CANNOT be a black box that management does not understand. Do not purchase a model from a vendor without understanding how it works or what the results mean, then hand a report to the regulators and say, “here is my stress test.” Managers do not need to be experts in stress testing, but they do need a strong understanding of how their stress test works, why the methodologies utilized are appropriate and how their inputs (loan-level information) translate into outputs (ultimately, their customized capital requirement).

3. The stress tests must be forward-looking, driven by loan-level information and able to be validated. Simply using your historical loss experience from the Great Recession or the 75th percentile worst bank is insufficient. The heart and soul of a bank’s vulnerability to stress will be credit risk embedded within the loan portfolio. Therefore, it is crucial to utilize loan-level information that contains these risk characteristics to drive the loan portfolio stress test component of your capital stress test. It is the only way to perform forward-looking analysis. Validation is also important to regulators, so make sure your model is designed in such a fashion that it’s easy to do so. Validation should also be important to you because if you are adhering to condition #1 above, you want to make sure you are relying on a model that you can trust.
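As a deliberately oversimplified sketch of what "loan-level and forward-looking" means in practice, consider the fragment below. The haircut, DSCR threshold and loan figures are all hypothetical assumptions for illustration, not a recommended methodology:

```python
# Hypothetical loan-level stress sketch: each loan's own risk
# characteristics (DSCR, collateral coverage) drive its stressed loss,
# rather than a single portfolio-wide historical loss rate.
loans = [
    {"balance": 2_000_000, "collateral": 2_500_000, "dscr": 1.40},
    {"balance": 1_500_000, "collateral": 1_600_000, "dscr": 1.10},
    {"balance": 3_000_000, "collateral": 3_200_000, "dscr": 0.95},
]

COLLATERAL_HAIRCUT = 0.25   # assumed stress decline in collateral values
DSCR_DEFAULT_LINE = 1.20    # assumed: loans below this DSCR default under stress

stressed_loss = 0.0
for loan in loans:
    if loan["dscr"] < DSCR_DEFAULT_LINE:
        # loss = balance minus stressed collateral value, floored at zero
        stressed_value = loan["collateral"] * (1 - COLLATERAL_HAIRCUT)
        stressed_loss += max(loan["balance"] - stressed_value, 0.0)

loss_rate = stressed_loss / sum(loan["balance"] for loan in loans)
print(f"Stressed loss: ${stressed_loss:,.0f} ({loss_rate:.1%} of the portfolio)")
```

The point of the sketch is the shape of the calculation: losses emerge from the risk characteristics of today's loans under stressed conditions, which a backward-looking loss rate cannot capture.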

A Decade of Community Bank Stress Testing Mistakes
Most community banks are using some form of stress testing. But while these tests may check the box with regulators, most are not rigorous enough to support the calculation of a capital requirement or the decision of whether to opt into the CBLR. Below is a list of the most common shortfalls we see in community bank stress tests:

- They are only stress testing their loans. They cannot connect the results to the impact on capital because they are not stressing the rest of their balance sheet or their earnings.

- Only the CRE loans are being stress tested – residential mortgages and consumer loans are not included. The irony is that many banks will find they are most vulnerable to losses within the C&I portfolio under stress due to the ‘soft’ nature of the collateral, but they are not stressing those loans, either.

- They are performing capital stress tests, but they are shortcutting the calculation for loan loss provisions and net charge-offs by using historical losses from their own bank or other banks, or by applying some multiple to their net charge-offs in “good times.” What they are missing is a forward-looking analysis using loan-level information. Today’s loans were originated under completely different economic and interest rate conditions than loans that were on the books in 2008. Underwriting philosophies and standards have also changed. This all gets missed with a “look back” approach that is used simply to plug in a number.

- They are sending “flat files” to a vendor and getting a report back in return, but don’t understand the report and aren’t using it to make any real decisions about strategic or capital plans.

- They are unable to incorporate planned actions such as loan growth, dividends, stock repurchases, mergers and acquisitions or investments in new business lines into their stress tests.

- Stress testing is being done in a vacuum as the sole responsibility of either the Chief Credit Officer or Chief Financial Officer, and there is little to no collaboration across various departments within the bank.

- Stress tests may be done on a recurring basis, but each stress test is its own mutually exclusive exercise with zero trend analysis. What community banks often miss is that the most valuable insights from stress testing are unlocked by performing trend analysis across previous stress tests.

Stress testing is an imperfect exercise. It is not and never will be a proverbial crystal ball. However, if the same general test is performed over multiple periods, then changes in the results from one period to the next are screaming to tell you a story. The most important result, perhaps, is the capital requirement estimated for a given bank. But this is not a static number. A given bank’s capital requirement will change over time as its loan portfolio turns over, as its earnings model changes, as its mix of assets and liabilities fluctuates, etc. Has the stress loss rate on the CRE non-owner-occupied portfolio increased or decreased versus the prior analysis? And why?
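To make the trend-analysis point concrete, here is a sketch of comparing the same test across periods. All figures are hypothetical:

```python
# Hypothetical: the same stress test run in successive periods. The
# period-over-period deltas, not any single number, tell the story.
stress_history = {
    "2018Q4": {"capital_requirement": 0.082, "cre_noo_loss_rate": 0.061},
    "2019Q2": {"capital_requirement": 0.084, "cre_noo_loss_rate": 0.068},
    "2019Q4": {"capital_requirement": 0.088, "cre_noo_loss_rate": 0.079},
}

periods = list(stress_history)
for prev, cur in zip(periods, periods[1:]):
    delta = (stress_history[cur]["cre_noo_loss_rate"]
             - stress_history[prev]["cre_noo_loss_rate"])
    print(f"{prev} -> {cur}: CRE non-owner-occupied stress loss rate {delta:+.1%}")
```

A steadily rising stress loss rate like the one above is exactly the kind of signal a single point-in-time test would miss; the follow-up question is always why the trend is moving.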

Community banks need to take the next step with respect to stress testing. They need to run CCAR-style stress tests and fill in the above-mentioned gaps, if applicable. Most community bankers brave enough to have read up to this point are gasping right now, asking themselves: How are we going to do something that is this complex? How much will it cost?

But this type of stress testing is not that difficult or expensive. Most community banks have much simpler business models than the large money center banks, which often include international operations, investment banking and massive off-balance-sheet derivative exposures. Most community banks gather deposits and make loans, and that is their primary business. Some may have additional revenue streams such as sales of mortgages and SBA loans, wealth management and loan servicing, but these are not overly complex business models, either. Most community banks have plain vanilla securities portfolios, so analyzing those isn’t too difficult.

Community banks are also in a better position today to ensure the stress testing of their loan portfolios is forward-looking and driven by loan-level information, because most are (hopefully) already somewhere in the process of preparing for CECL.

Don’t Miss the Opportunity
In many ways, the decision regarding the CBLR provides a tremendous opportunity for community banks. They can use this process to support and defend a customized capital requirement. It allows them to put a stake in the ground with their regulators, so they do not have to succumb to a rule-of-thumb that was ultimately based on the lowest common denominator bank.

Banks that do nothing or blindly opt into the new framework risk encumbering capital unnecessarily – capital that could otherwise be used to drive shareholder value and ensure their ongoing independence in a world where generating appropriate levels of ROE is becoming increasingly difficult.

By our calculations, $44 billion may be at stake.

Disclaimer: The views and opinions expressed in this article are those of the author(s) and do not necessarily reflect the official policy or position of the Financial Managers Society.

About the Author

As a co-founder and CEO of the Invictus Group, Adam Mustafa has overseen the design and implementation of fully customized capital stress testing, capital management, CECL and strategic planning systems for financial institutions. Prior to joining Invictus, he had senior-level experience as a banker, financial services consultant and corporate CFO.

December 24, 2018
Data Needed to Comply with CECL
By Toby Lawrence, President, Lawrence Advisory Services and Owner, Platinum Risk Advisors

Having attended more than 20 different CECL seminars offered by accounting firms, software companies and industry regulators, one common question continues to come up time and time again.

What data is needed to comply with CECL?

Some of the speakers respond by listing everything except the kitchen sink to ensure they don’t leave anything out, while others state that no additional data is needed because their solution relies only on the data within an institution’s Call Report. One concern with these so-called “simple models” is that when we experience another economic slowdown, their adequacy may come into question, or a small amount of losses may taint an institution’s entire portfolio – resulting in a higher provision for the allowance for loan and lease losses (ALLL) than is actually necessary.

Many institutions likely already have the data they need to calculate CECL in their current loan subsidiary ledgers (with the possible exception of the additional information needed to calculate prepayment percentages). For the actual CECL calculation, however, you need to be thinking about the following information.

Data needed for loans that are currently outstanding
- Customer / member number
- Loan number
- Loan type
- Ability to distinguish between term loans and line of credit loans
- Date the loan was originated
- Maturity date of the loan
- Original amount of the loan
- Current interest rate
- Unpaid balance at month-end
- Additional amount that can be drawn on the loan (for line of credit loans)

For CECL, you may want to use more loan types than what are currently in your loan subsidiary ledger. This will help prevent significant losses in one loan type from tainting a large portion of your loan portfolio, leading to your institution having to record a higher ALLL balance than necessary. Additionally, the more collateral types you use, the better your ability to segment the loan portfolio and truly analyze the opportunities and risks within.

Data needed to calculate prepayment percentages for term loans
- Amount of contractually due principal payments received by vintage or year of origination
- Amount of total principal payments received by vintage year
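One simple way to turn those two figures into a prepayment percentage for a vintage is shown below. The exact formula is institution-specific, and the dollar amounts here are hypothetical:

```python
# Hypothetical: principal received beyond the contractual schedule
# approximates prepayments for a given vintage year.
contractual_principal = 1_200_000   # scheduled principal received, one vintage
total_principal = 1_500_000         # all principal received, same vintage

prepaid_principal = total_principal - contractual_principal
prepayment_pct = prepaid_principal / total_principal
print(f"Prepayment share of principal received: {prepayment_pct:.0%}")  # 20%
```

Prepayment behavior shortens the expected life of the pool, which directly affects the lifetime-loss horizon CECL requires.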

Data needed for charge-offs
- Date of the charge-off or recovery
- Loan type
- Unpaid balance of the loan at the time of charge-off
- Estimated selling costs incurred to liquidate the related collateral
- Net proceeds received from the liquidation of the collateral
- Amount of the charge-off or recovery
- Year the loan was originated
- Amount of any remaining accrued interest
- If using migration analysis, the last risk rating (commercial loans) or FICO credit score (consumer loans) and the date the loan was assigned to that risk rating / FICO credit score
- Loan officer assigned to the loan
- If using the probability of default / severity of loss method, the number of net charge-offs and number of loans originated by each loan type and vintage year (year of origination)
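If the probability of default / severity of loss method is used, those counts and balances combine along the following lines. All figures here are hypothetical:

```python
# Hypothetical PD / severity sketch for one loan type and vintage year.
loans_originated = 400           # loans of this type originated in the vintage
loans_charged_off = 6            # of those, the number charged off
balance_at_chargeoff = 900_000   # unpaid balance at the time of charge-off
net_loss = 540_000               # charge-offs net of liquidation proceeds

probability_of_default = loans_charged_off / loans_originated
severity_of_loss = net_loss / balance_at_chargeoff
expected_loss_rate = probability_of_default * severity_of_loss
print(f"PD {probability_of_default:.2%} x severity {severity_of_loss:.0%} "
      f"= expected loss rate {expected_loss_rate:.3%}")
```

The data items listed above map directly onto the inputs: origination counts by type and vintage give the denominator for PD, while charge-off balances and liquidation proceeds give the severity.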

Additional data will be required to justify the subjective adjustments to the CECL historical charge-off percentages. To help with this, be prepared to segment your loan portfolio by:
- Collateral type
- Ranges of the loan-to-value ratio
- Ranges of the debt service coverage ratio for commercial loans and debt-to-income ratio for consumer loans
- Risk rating for commercial loans and FICO credit scores for consumer loans (assuming the institution doesn’t risk rate consumer loans)
- Location inside or outside of the normal trade area
- Loans acquired through participation
- Loan officer responsibility codes (to determine if there are any trends in loan officers’ individually-managed portfolios)
- Delinquency status
- Spec versus presold status for commercial construction and one-to-four family construction loans
- Level of policy and technical exceptions

In order to segment a loan portfolio as noted above, lenders will need additional data in their loan subsidiary ledgers.

Data needed to justify subjective adjustments
- Collateral type (to do this correctly most lenders will need to add significantly more loan types to their loan subsidiary ledgers)
- Risk ratings for commercial loans
- FICO credit scores for consumer loans
- Cash flow generated from on-going operations (commercial loans)
- Principal and interest payments due to the institution (commercial loans)
- Principal and interest payments due to other lenders (commercial loans)
- Estimated market value of collateral pledged against the loan
- Debt-to-income ratio for consumer loans
- Number and type of policy exceptions
- Number and type of technical exceptions
- Zip code for real estate loans (this information is already in the loan subsidiary ledger)
- Whether the loan is on nonaccrual status or a TDR (this information is likely already in the loan subsidiary ledger)

The good news for most institutions is that their data processing systems are already set up to store this additional data. Interagency guidance issued by the FDIC, the OCC and the Federal Reserve in 2006 called on banks to segment their loans to manage major loan concentrations. That guidance hasn’t been enforced strictly to date, but regulators will be expecting institutions to do a better job of segmenting their loan portfolios going forward. These same types of data will also be needed to properly stress test a loan portfolio.

Getting this data for the current year will take some effort and will require a data scrub of all the loans currently in the portfolio. However, after the initial data scrub, tracking this additional data should be relatively painless. The most challenging issue with implementing CECL will be obtaining this same level of data for prior years. To ensure you have enough data for your CECL calculation, it is strongly recommended that institutions implement whatever model they’re planning to use for CECL as soon as possible, since at least 3 to 5 years of verifiable data will be needed to perform a proper CECL-compliant ALLL calculation.


About the Author

Toby Lawrence is the president of Lawrence Advisory Services and the co-founder and owner of Platinum Risk Advisors.

December 3, 2018
Hedging to Cope with Interest Rate Uncertainty
By Ira Kawaller, Managing Director, HedgeStar

Most market observers face a conundrum. After seeing a change in prices in virtually any market, it’s difficult to discern whether said change reflects the beginning or continuation of a trend in that direction, or if the change is a temporary distortion soon to be reversed. With interest rates, however, we have a unique consideration – the Federal Reserve (the “Fed”).

The Fed has unparalleled influence in this sector, and seasoned forecasters know better than to ignore the Fed’s public statements. As of this writing, the Fed is unambiguously projecting interest rate increases. Of course, this projection rests on an expected continuation of the current economic expansion, as well as a sanguine outlook for inflation. While both of these forecasts will likely be tested at some point, the Fed can be expected to signal any revision of its outlook if and when conditions change. Until then, however, higher interest rates seem most likely.

The more relevant question, then, is not whether interest rates will rise, but rather how high they are likely to go. Answering this question requires at least enough humility to admit that nobody knows for sure – not even the Fed. That said, interest rate futures markets offer clues as to consensus expectations for a variety of benchmark interest rates. Three-month LIBOR, for example, underlies one of the most actively traded futures contracts, and those contracts effectively reveal where this key interest rate is expected to be at three-month intervals over the next 10 years. And while futures prices adjust with trading every day, they offer explicit, objective forecasts at any point in time.
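The quoting convention makes these implied forecasts easy to read off: Eurodollar-style futures are quoted as 100 minus the annualized rate. The price below is hypothetical:

```python
# Eurodollar-style futures quote -> implied three-month rate forecast.
# Quote convention: implied rate (in percent) = 100 minus the futures price.
futures_price = 98.25   # hypothetical quote for a given quarterly expiry
implied_rate_pct = 100 - futures_price
print(f"Market-implied 3-month rate for that expiry: {implied_rate_pct:.2f}%")  # 1.75%
```

Stringing these implied rates together across successive expiries traces out the market's consensus path for the benchmark.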

We can also look to bond and note futures, fed funds futures and swap futures for analogous forecasts of other benchmark interest rates. Besides offering rate-specific forecasts, these various futures prices serve as the foundation for pricing a broad array of over-the-counter interest rate derivatives.

Building a Hedge
While it’s generally understood that interest rate derivatives can protect against rising or falling interest rates, the starting point for the protection derives from futures pricing curves as of the date the derivative is transacted. Thus, if a hedger wanted to use a derivative to lock in an interest rate today, the rate that would be available to that firm would be consistent with the consensus forecast. In other words, the hedger seeking to lock in rates would have to accept the consensus forecast rate as its hedging objective – regardless of whether the spot interest rate happens to be higher or lower than that consensus forecast rate at that time.

Depending on the nature of the exposure, the difference between current spot interest rates and the implied forecasted rates underlying interest rate derivatives might be adverse or beneficial. These days, for instance, with consensus forecasts anticipating rate increases, hedging with derivatives tends to impose somewhat of a cost for hedging against rate increases, while at the same time offering a benefit to entities faced with the opposite risk of falling interest rates. (If you can borrow today at 5%, but the market offers the opportunity to lock up a future funding cost of 5.5%, you’re forced to accept a 50 basis point penalty; on the other hand, if you can invest at 5% today, that same derivative would let you invest in the future at 5.5%, thereby offering a 50 basis point benefit.)
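The parenthetical example above reduces to a one-line calculation, with the rates taken from the text:

```python
# The 50 bp penalty/benefit from the example: the gap between today's
# spot rate and the forward rate the derivative locks in.
spot_rate = 0.050            # rate available today
locked_forward_rate = 0.055  # rate the derivative locks in for the future period

gap_bp = (locked_forward_rate - spot_rate) * 10_000
print(f"Borrower's penalty / investor's benefit: {gap_bp:.0f} bp")
```

The same gap is a cost to the borrower locking in future funding and a benefit to the investor locking in a future reinvestment rate.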

Consider the case of a commercial entity that expects to issue three-year debt in the coming four months, where the prospect of higher interest rates has stimulated interest in entering into an interest rate swap to lock in the interest rate on an intended funding. Three critical questions would have to be asked:

1. What benchmark interest rate can be secured for the three-year period starting in four months? (This question distills to getting a quote for the fixed rate on a forward starting three-year swap.)
2. What is the credit spread that the firm would likely bear, relative to this benchmark interest rate?
3. Given the expected all-in rate (i.e., the swap’s fixed rate plus the expected credit spread), what portion of the interest rate exposure that the firm is facing should be hedged?

In the current environment, this all-in interest rate should be expected to come in higher than the cost of funds the company would bear if it were to issue debt today. This higher-than-today’s interest rate might discourage the company from hedging, but it shouldn’t preclude it. The appropriate question is how much of the exposure should be addressed with a derivative, given the fixed rate the derivative allows the firm to access.

Dealing with Uncertainty
Along with the implied fixed rate available with the derivative, a complementary consideration is the business judgment as to the probabilities associated with interest rates ultimately falling below, reaching or rising above the implied rates underlying the derivative. It should be clear that if the market for swaps allowed this prospective borrower to lock in an all-in cost of funds at, say, 5%, while at the same time the borrower expected rates to rise even higher, hedging would be particularly attractive. On the other hand, hedging would be less attractive if the firm didn’t expect market interest rates to rise above 5%. Extending this line of thinking further, it is worth realizing that if the consensus forecast reflected in the pricing of the derivative were actually realized (which shouldn’t be expected), the swap wouldn’t generate any payoff whatsoever – the company would realize identical earnings regardless of whether it hedged or not.

Unfortunately, the calculus becomes more complicated because we live in a world of uncertainty. The idea of not hedging at all because we don’t expect market rates to surpass the threshold of the derivative’s implied forecast is problematic because we might be wrong. Thus, even if we might not believe the rate will move beyond that critical value, it may still be reasonable to hedge some portion of an existing exposure. Put another way, even though market conditions force the hedging entity to lock in an implicit rate increase dictated by the price of the swap, it is the probability that interest rates could move even higher that justifies hedging, even at a seemingly elevated interest rate.

Employing the swap serves to eliminate the uncertainty that would otherwise prevail if the exposure were left unhedged. With the swap, the company should have a high degree of confidence that the anticipated all-in funding costs initially calculated would be realized (subject to accurately forecasting the credit spread) for the portion of the exposure that the company chooses to hedge.

Managing a Hedge
Thus far, the discussion has focused on how much to hedge at the start of the hedging process, but hedging deserves reconsideration both periodically and whenever economic circumstances change in material ways. Suppose, for example, an initial hedge was initiated to protect against a rate increase that ultimately materializes. But suppose further that with time remaining before the hedge expires, the market has evolved, and it now seems more likely that interest rates could retreat. Does it make sense to maintain the hedge in the face of these changed circumstances? Probably not. As time passes and perceptions change as to the probabilities associated with adverse price moves, or if the company’s risk tolerances change, the degree of hedge coverage could be adjusted – either up or down. Critically, just because a derivative contract hasn’t expired doesn’t necessarily mean it’s prudent to maintain hedge coverage.

Clearly, an orientation that favors a dynamic hedge adjustment process could open the door for abuse. Consider the case of the company that starts out with a hedge of 50% of some exposure. Assume that the firm perceives the risk as being more pressing, thus adjusting its hedge coverage to 75%. Later, the company reassesses conditions and decides that the expected adverse rate move has run its course such that rates now are expected to move beneficially. With this reassessment, the firm decides to reduce its hedge coverage down to 25%.

Throughout this adjustment process, this firm could represent that it is mitigating risk, albeit at varying degrees. Still, while it might be appropriate to observe these kinds of hedge adjustments over weeks or months, an objective observer would likely have a problem with these kinds of adjustments if they were made over the course of a single trading day! The moral here is that hedge adjustments should be implemented on the basis of some previously devised plan that reflects the company’s risk management orientation and policies. Thus, a mechanical rule that imposes an objective discipline on the hedge-adjustment process is preferable to ad hoc assessments relating to adjusting hedge positions. Unfortunately, it’s not clear that any single rules-based approach will be appropriate in all circumstances.

When considering an objective hedge management plan, it’s critical to be sensitive to two opposing concerns: if you’re starting with partial hedge coverage and interest rates move adversely, it’s natural to want to increase the degree of hedge coverage; on the other hand, at some point, the prospect of interest rates achieving a top (or bottom) might gain greater currency. Prudent managers will periodically review their hedge coverage and adjust their plans accordingly, reflecting a forward-looking orientation as to the changing probabilities associated with future interest rate changes.


About the Author

Ira Kawaller is a Managing Director of HedgeStar, a Minnesota-based consulting firm that specializes in derivatives strategies, valuations and hedge accounting services.

November 19, 2018
How to Determine Millennial Borrowers' Credit Worthiness
By Joseph Lowe, Marketing Manager, Sageworks

When assessing the potential risks a borrower presents to an institution’s portfolio, the typical starting point for most lenders is the “five Cs of credit” – capacity, character, capital, collateral and conditions. But as a younger generation, burdened with excess debt, becomes the prime demographic for commercial and consumer loans, community banks and credit unions may want to reconsider that approach if they want to capture this increasingly important segment.

Judging by the numbers, the American economy is on an uptick. The national unemployment rate sits at its lowest rate since 2000 (3.9%), the average FICO credit score is at its highest point ever (704) and median household income is at its highest mark in over 30 years ($61,372). In addition, young borrowers’ share of the lending market is growing.

Despite these positive figures, however, the financial outlook for young borrowers is not on par with the national averages. For example, the average FICO credit score for young borrowers (ages 21-34) is 638, while the average income for Millennials is $35,592.

Given these disparities, it will be difficult for community institutions to grow revenue if they choose not to factor in metrics other than the five Cs when analyzing young borrowers. Let’s take a look at the five Cs of credit in consideration with the young borrower market.

Capacity – Young borrowers earn an average salary of $35,592 and owe an average of $25,000 in student loan debt alone, making for a poor debt-to-income (DTI) ratio.

Character – Young borrowers’ average credit score of 638 is considered fair or poor for most financial institutions that rely on credit scores as the only gauge of character.

Capital – Young borrowers are spending more on bills than previous generations, leaving less money to put toward loan payments.

Collateral – Young borrowers are postponing major purchases such as homes and cars, opting instead for renting and public transportation.

Conditions – Young borrowers are starting new businesses, and loans to these ventures, given the owners’ limited credit histories and high debt burdens, can be too risky for community banks and credit unions to offer.
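Returning to the capacity figure above, servicing the average $25,000 of student debt alone consumes a meaningful slice of the average young borrower's income. The 5% rate and 10-year term below are assumptions, not figures from the text:

```python
# Hypothetical: student-debt-only DTI for the average young borrower.
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard amortizing-loan payment formula."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

monthly_income = 35_592 / 12                         # average income from the text
student_payment = monthly_payment(25_000, 0.05, 10)  # assumed rate and term
dti_student_only = student_payment / monthly_income
print(f"DTI from student debt alone: {dti_student_only:.1%}")
```

Under these assumptions, roughly 9% of gross income is committed before rent, car payments or any new loan is considered, which is why the capacity picture for this cohort looks strained.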

In light of these realities, community financial institutions looking for a share of the up-and-coming young borrower market may consider including supplemental factors within their credit analyses and implementing technology to better evaluate credit risk.

Analyzing a young borrower’s entire relationship through global cash flow
Global cash flow refers to a lender or credit analyst’s ability to review a borrower’s financial relationships with his or her peers in the community and, more importantly, the financial institution. Rather than solely focusing on the borrower’s financial history as a key determinant of creditworthiness, financial institutions can determine how businesses, properties and family members connected to the young borrower will affect credit risk for the institution.

For example, consider a loan application from a young borrower named Jack for a $5,000 commercial loan to cover equipment costs for a moving business. When analyzing his financial statements, you see that not only does Jack make a lower-than-average income of $29,000 per year, but he also owes a total of $25,000 in student loans. Your initial reaction is to deny the loan. However, upon reviewing the global cash flow analysis, you realize that his student loans have a guarantor on the account – his mother, Linda. Linda earns an income of $110,000 annually and has a credit score higher than 750. She co-owns two businesses with other prominent community members and has banked with your institution for 20 years.

By considering relationships through global cash flow, you have more evidence to potentially justify the loan and can offer it to Jack on conditions that mitigate his credit risk. Global cash flow analysis lets lenders identify opportunities, make loan decisions more defensible and take informed, calculated risks.
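The arithmetic behind the Jack and Linda example can be sketched in a few lines. This is an illustrative simplification: the annual payment figures and the 40% policy threshold below are assumptions for illustration, not actual underwriting guidance.

```python
# Sketch of a standalone vs. global debt-to-income (DTI) comparison for the
# Jack and Linda example. The annual payment figures and the 40% policy
# threshold are assumptions for illustration, not underwriting guidance.

DTI_LIMIT = 0.40  # assumed policy threshold

def dti(annual_income: float, annual_debt_payments: float) -> float:
    """Return total debt payments as a fraction of gross annual income."""
    return annual_debt_payments / annual_income

# Standalone view: Jack's $29,000 income against his assumed annual debt
# payments (student loans plus the proposed equipment loan).
jack_alone = dti(annual_income=29_000, annual_debt_payments=12_000)

# Global view: fold in guarantor Linda's $110,000 income and her assumed
# annual debt payments on the businesses she co-owns.
global_view = dti(annual_income=29_000 + 110_000,
                  annual_debt_payments=12_000 + 15_000)

for label, ratio in [("Jack standalone", jack_alone), ("Global", global_view)]:
    status = "exceeds limit" if ratio > DTI_LIMIT else "within limit"
    print(f"{label} DTI: {ratio:.1%} ({status})")
```

Under these assumed figures, Jack exceeds the threshold on his own but the combined relationship falls comfortably within it – the mechanical version of the judgment described above.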

Using technology to determine creditworthiness
In a recent article published by the Wharton School of the University of Pennsylvania, Benjamin Keys, Wharton professor of real estate, and Richard K. Green, director of the University of Southern California’s Lusk Center for Real Estate, both pointed to technology as a way for banks and credit unions to pull in other factors during credit analysis to provide supplemental evidence that borrowers can repay loans.

Implementing credit analysis technology allows lenders to identify portfolio risks based on both internal factors (such as probability of default) and external factors (such as data from other financial institutions) through automated credit risk models and APIs. APIs also let lenders layer a further source – third-party data – into their credit analysis.

An automated commercial credit risk model can determine creditworthiness using predictive financial factors and limited data entry from lenders or credit analysts. Furthermore, automated credit risk models can quickly compare probability of default with broader industry trends and examine the industry’s risk to the institution. For young adults with limited access to capital, a better understanding of industry trends can provide another factor to be taken into account when examining credit.
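To make the idea of an automated probability-of-default model concrete, here is a minimal logistic-scoring sketch. The factor names, weights and intercept are invented for illustration – a production model would be fitted to the institution’s historical loan performance data.

```python
import math

# Minimal sketch of an automated probability-of-default (PD) score using a
# logistic model. The factors, weights and intercept are invented for
# illustration; a real model would be fitted to historical loan data.

WEIGHTS = {
    "debt_service_coverage": -1.2,  # higher coverage -> lower default risk
    "leverage": 0.9,                # higher leverage -> higher risk
    "years_operating": -0.15,       # longer operating history -> lower risk
}
INTERCEPT = -1.0

def probability_of_default(borrower: dict) -> float:
    """Map borrower factors to a 0-1 default probability via a logistic link."""
    z = INTERCEPT + sum(WEIGHTS[k] * borrower[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# A hypothetical young business: thin coverage, high leverage, short history.
young_firm = {"debt_service_coverage": 1.1, "leverage": 2.0, "years_operating": 1}
print(f"Estimated PD: {probability_of_default(young_firm):.1%}")
```

The point of automating this step is that the same scoring function can be applied consistently across the portfolio and compared against industry default trends, rather than recalculated by hand for each file.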

As the demographics of community financial institutions’ customers shift toward younger borrowers with less credit history and higher debt-to-income (DTI) ratios than previous generations, it’s important for banks and credit unions to find more ways to identify good risks – and profitable growth – from a core of young borrowers.

Disclaimer: The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the Financial Managers Society.

About the Author

As a commercial lending marketing manager at Sageworks, Joseph Lowe helps educate bankers on ways to optimize their lending and credit risk processes.

July 2, 2018
Managing with a Forward View
By Mary Ellen Biery, Research Specialist – Sageworks

Dealing with the day-to-day challenges of operating a bank or credit union can keep top management “in the weeds” of lending or credit operations. This can leave little to no time for surveying the entire “field” of the portfolio, its risks and its impact on the institution’s financial results.

It’s the same challenge the institution’s small business customers often face. When owners’ days are filled with handling current-day issues and reviewing recent results, they end up with little time for big-picture planning. It’s not until these businesses begin forecasting sales and expenses, and managing with a forward-looking perspective, that they are able to generate meaningful growth.

Banks and credit unions, many of which are small businesses themselves, can also make more informed strategic decisions that aid growth when they manage with a forward view. Managers may currently rely solely on Excel-based reports of last month’s loan delinquencies, charge-offs and the like. But executives can quickly understand trends in the portfolio and use insights to inform strategic planning by incorporating forward-looking indicators, many of which can be generated automatically through technology.

Indeed, in a recent FDIC Supervisory Insights article, an analyst for the FDIC’s Division of Risk Management Supervision emphasized the importance of forward-looking risk indicators. Such indicators, senior analyst Michael McGarvey wrote, “can be indicative of future performance and should be the focus of a sound credit management information system program to proactively identify and mitigate risk exposure.” The article described a scenario where one bank relied heavily on lagging risk indicators, resulting in inadequate risk identification. Another bank, meanwhile, was able to be more proactive in risk management, thanks to forward-looking metrics.

According to the FDIC article, an example of incorporating forward-looking credit metrics would be monitoring concentrations in relation to capital so that the institution can establish strategies to decrease, maintain or increase exposure to a certain concentration or identify concentrations approaching or exceeding limits. Metrics to aid in this approach include data related to:

- Loan category (C&I, CRE, unsecured, auto, etc.)
- Industry breakouts on C&I loans
- Individual and related borrowers
- Geographic concentrations
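The concentration-versus-capital monitoring the FDIC article describes amounts to a simple recurring calculation. In this sketch, the capital base, category balances and policy limits are all hypothetical:

```python
# Sketch of monitoring loan concentrations against capital, per the FDIC
# example above. The capital base, balances and policy limits are all
# hypothetical figures for illustration.

CAPITAL = 50_000_000  # assumed capital base (e.g., Tier 1 capital plus ALLL)

balances = {           # outstanding balances by loan category (assumed)
    "CRE": 160_000_000,
    "C&I": 60_000_000,
    "Auto": 20_000_000,
}

# Assumed policy limits, expressed as a multiple of the capital base.
limits = {"CRE": 3.00, "C&I": 1.50, "Auto": 0.75}

for category, balance in balances.items():
    ratio = balance / CAPITAL
    status = "OVER LIMIT" if ratio > limits[category] else "within limit"
    print(f"{category:>4}: {ratio:.0%} of capital ({status})")
```

Running this each period makes it immediately visible which concentrations are approaching or exceeding limits, so management can decide whether to decrease, maintain or increase exposure.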

Another example of incorporating forward-looking data would be monitoring the institution’s performance and risk indicators against policy limits and the risk appetite statement. Tracking the volume of loan exceptions, underwriting trends, loan grade migrations and concentration risks would aid in developing this type of report, the FDIC analyst wrote.
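Loan grade migration, one of the metrics mentioned above, is straightforward to track from period-over-period risk ratings. A minimal sketch, with invented loans and ratings (lower grade number means stronger credit):

```python
# Sketch of tracking loan grade migration between two review periods.
# Loans and ratings are invented; a lower grade number = stronger credit.

prior   = {"loan_a": 3, "loan_b": 4, "loan_c": 2, "loan_d": 5}
current = {"loan_a": 4, "loan_b": 4, "loan_c": 3, "loan_d": 4}

downgrades = [loan for loan in prior if current[loan] > prior[loan]]
upgrades   = [loan for loan in prior if current[loan] < prior[loan]]

print(f"Downgrades: {sorted(downgrades)}")  # deteriorating credits to watch
print(f"Upgrades:   {sorted(upgrades)}")
```

A rising downgrade count is exactly the kind of forward-looking signal the FDIC article argues should feed the institution’s credit management information system before losses show up in lagging metrics.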

“The FDIC is absolutely right to focus on this issue,” says Neill LeCorgne, vice president of Sageworks and a former bank president. “What typically happens in the banking world is when the economy goes well and everybody’s doing well, there’s not a deal that a bank doesn’t want to take a look at. When the economy starts to turn down, everyone starts to pull back. Now is the time to start getting your management information systems established and working and following some of these practices.”

LeCorgne says a banking technology platform that has heavy analytical capabilities at the portfolio level makes it easier to slice and dice concentrations and global relationship exposures, and to provide custom visual summaries to share with the board, auditors and examiners.

Data generated using technology at the front end of the origination process – such as an online loan application – can interact with an automated tickler system to track correspondence and data requests on a go-forward basis. That way, banking staff don’t have to manage all of the quarterly or annual reports on borrowing-base analyses, quarterly/annual reviews or renewals. Instead, time previously spent on those tasks can be used to look at the big picture regarding the potential future impact of credit exposures and underwriting trends.

At the other end of the loan’s lifecycle, technology that helps banks leverage the results of calculations for the allowance for loan and lease losses (ALLL) – especially results under the upcoming current expected credit loss model, or CECL – can also provide forward-looking insight. Institutions can use the results of CECL calculations to back-test risk rating models and scorecards and develop sound risk-based pricing systems. In this way, executives can more effectively manage profit in a CECL world.

While historical performance metrics typically convey what has occurred in the past in the portfolio, forward-looking metrics throughout the life of the loan can help financial institutions identify underlying risks that could potentially affect not just future performance but also future strategic decisions. Banks and credit unions strengthening credit management information systems with the assistance of automated data generation and tracking, as well as sound governance, will be better able to respond to emerging risks in the years ahead. Like their business customers utilizing forward-looking information, these institutions can aid their growth when they rise above “the weeds” to survey the entire credit and business landscape.

Disclaimer: The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the Financial Managers Society.

About the Author

Mary Ellen Biery is a research specialist at Sageworks, a financial information company that provides lending, credit risk and portfolio risk solutions to over 1,200 financial institutions across the country.

April 9, 2018
Leadership For An Industry 4.0 World
By David E. Perry and Ron Wiens

The world is entering its fourth Industrial Revolution, often called Industry 4.0. While Western economies ruled the first three industrial revolutions, the economies that will dominate the 4.0 World have yet to be determined. With the future up for grabs, what will the differentiator be for winning organizations?

Ushered in by the steam engine, the first Industrial Revolution led to the mechanization of work. The second, led by the electrification of factories and machinery, enabled mass production on a grand scale. The third, occurring in the second half of the twentieth century, introduced computers to the workplace and led to the automation of everything from back-office administration to the teller’s window.

The common theme of these revolutions has been a decline in the dependence on human capital. But Industry 4.0 is about to change that.

Knowledge + Connectivity = Industry 4.0
Industry 4.0 is driven by an electronically connected world. In the emerging 4.0 World, people are connected not only to each other, but also to each other’s knowledge. The impact of this connectivity is best summed up by the following observation from Dr. Nick Bontis of McMaster University: “In the 1930s, the cumulative codified (i.e., written down) knowledge base of the world doubled every 30 years; in the 1970s… it doubled every 7 years.” Bontis predicted in 2000 that by 2010, the world’s codified knowledge would double every 11 hours.

Maybe we haven’t reached that fateful 11-hour figure, but we now live and work in a world in which knowledge is growing exponentially. Since knowledge equals opportunity, the opportunities available to organizations are also growing exponentially. And because everyone is connected to this knowledge, everyone is connected to these opportunities. Therefore, competitive advantage today lies in an organization’s ability to exploit this knowledge and spot opportunities before anyone else – companies that can consistently do this faster than their competition will thrive.

An interesting by-product of this knowledge explosion is that the days of the all-knowing, all-seeing manager are over. Knowledge workers today are often more aware of new knowledge than management is. It’s not that managers have gotten dumber, but rather that employees have gotten smarter – or at least better educated.

Organizations are rife with highly educated knowledge workers. That’s a key difference between now and the first Industrial Revolution, when our current management systems were invented. And there’s a nice bit of alignment here: an explosion of knowledge is arriving at the same time as growth in the capability of the organization’s employees to understand and make use of it. The continued prosperity of already successful organizations now depends directly on the ability of their workers to continuously generate new value. Winning organizations have awoken to this fact.

The Power of Leadership
What does ‘waking up’ mean? At its core, it means a fundamental shift in how people are managed and led. The 4.0 World is all about leadership.

The current approach to managing people tends to focus almost exclusively on maximizing the productivity of individuals. This is Leadership 1.0 – steam age leadership, in which the whole is viewed as the sum of its parts. Industry 1.0 leadership can be summed up by the following philosophy: “We all have a job, and if we each do our job we will be successful.”

In an Industry 4.0 World, the view is quite different – the whole can be much more than the sum of its parts. 4.0 leaders still work at maximizing the performance of the individual, but they also focus on maximizing the performance of the team. This means looking at recruiting leaders through a new lens. In a 4.0 World, the skills and behaviors needed in a leader have changed considerably.

Building an environment that facilitates the ongoing creation of new value means managing not only the individuals who make up a team, but also the interaction space between these individuals. A lesson learned from the IT industry – which was the forerunner to Industry 4.0 and provides insight into the 4.0 World – is that between any two individuals on a team there is a hidden creative force. When the interaction space between individuals is effectively managed, this force emerges and the creative impact of the team is multiplied. In a 4.0 World, an organization’s ongoing prosperity directly depends on its leaders’ ability to draw out this creative energy.

Building an organizational culture that facilitates the ongoing creation of new value is not rocket science. But it requires a fundamental change in perspective on the part of the organization’s managers – a change that will challenge current management practices, including how a manager’s performance is measured and evaluated. To be successful in a 4.0 World, organizations will now need to evaluate their managers not only on the basis of what they have delivered, but also by the readiness of their teams to deliver in an unknown future. Contrary to popular belief, winning in the fourth Industrial Revolution is not about speed – it’s about non-stop strategic change that constantly advances the organization toward its stated goals.

What does a 4.0 leader look like? 4.0 leaders manage the space between people while building a high-performance culture. They never rest. They never allow the organization to crest. They know success is not a sprint but a marathon. Change is ongoing in a 4.0 World, which is why the 4.0 leader is constantly developing and strengthening the organization’s change muscle. The successful organization in a 4.0 World reflects this kind of leader by constantly moving forward – never stopping, never resting.

Building a 4.0 Team
The goal in hiring isn’t – or at least shouldn’t be – to find the best talent looking for work; it’s to find the best talent, period. Today that means recruiting leaders who are comfortable in a 4.0 World, and therein lies the recruiting challenge. The best leaders – the 4.0 talent – already have good jobs. Recruiting successfully in a 4.0 World now means going after talent that isn’t looking for work.

But hiring the best is not about money – it never was. Surprisingly enough, the best will come to an organization not to make more money, but because of what the organization stands for and what it’s trying to achieve. Work is personal to 4.0 talent, which is why you have to first engage their heart. Once you’ve spoken to the heart, the next step is to speak to their head – the best will want to understand the organization’s business goals, its challenges, its assumptions and its blind spots. Once the head is engaged, you next have to address the feet – the best will want to understand the organizational culture that drives the way people interact and how things get done. To do all of this, you need a systematic approach to finding 4.0 talent by engaging their interest and assessing their alignment with your goals.

Two Alternatives
In the ‘Old West’, it was said that there were two kinds of people – the quick and the dead. In the Industry 4.0 world, there are just two types of banks and credit unions – the quick and the dying.

The quick embrace new ways of leading and creating value, while the dying hang on for dear life to what brought them success in the past. Which will you choose to be?

Disclaimer: The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the Financial Managers Society.

About the Authors

A well-known name in executive search circles with over 30 years of work as an executive recruiter under his belt, David Perry helps companies find and bring aboard Industry 4.0 leaders as the founder and managing partner of Perry-Martel International. A noted speaker on the topics of leadership and cultural change, Ron Wiens has spent the past 35 years helping organizations build high-performance cultures.

January 22, 2018
Kick-Start Your Institution’s Cybersecurity Awareness
By Emily Larkin, Chief Information Security Officer, Sageworks

Just as information security awareness programs are a regulatory requirement for many financial institutions, they likewise represent a major pain point for most. The value of a strong awareness program is often difficult to quantify and thus gets little funding or attention, but once implemented, it can be an invaluable defense against both internal and external cyber attacks.

There are countless options for those looking to pay for security awareness materials or consultants to deliver those materials, but these measures only cover part of the challenge. How do you make information security part of your institution’s culture? How do you get buy-in across departments and leadership?

Getting started is often the hardest part for financial institutions. Here are five proven ways to build buy-in and acceptance:

Start at the top
While board and executive buy-in is widely believed to be essential to a successful information security awareness program, getting to that point can be a challenge for some financial institutions. The key is to find what drives your leadership team – in most cases, it is revenue. Presenting the potential financial impact of a cybersecurity incident or breach will quickly get the board’s attention.

This is not a scare tactic, but rather an educational opportunity for those who focus on growth and financials. There is an assumption that information security lives with the IT team and that a strong firewall will protect the company, but an effective 15-minute presentation on the risks and vulnerabilities that exist at the employee level will quickly turn around executive and board perceptions. Such a presentation might highlight:

- The regulatory requirements for an information security program;
- The average cost of a breach;
- The potential for reputational risk; and
- Some examples of the current vulnerabilities within the institution

Make information security part of every employee’s orientation
A formal introduction to a member of the information security team and hands-on training in the information security program will go far with new employees, helping to demystify information security and make it part of the welcome package. Employees will appreciate meeting new people and gaining a better understanding of the importance of information security at the institution.

Make sure information security awareness is presented as part of the company culture. Encourage new employees to report any suspicious activity – assuring them that no question or incident is too minor to report, and outlining the protocol for reporting such potential incidents.

Make information security an agenda item at your institution’s staff meetings and individual team meetings
Give the institution’s information security team a captive audience and a high-profile platform from which to speak and share news to help create positive energy around cybersecurity awareness and encourage participation.

Topics can range from recent vulnerabilities and projects in process to new controls and, most importantly, a thank you to users for their ongoing input and vigilance. Users tend to respond to statistics and data, such as the number of threats detected or the number of phishing attempts blocked in a month, so be sure to include some numbers that will help employees understand that they are part of a company that is committed to protecting the overall business.

Exercise your information security program
One of the most effective ways to raise cyber awareness is to involve users, and phishing tests represent a great example of this effort.

There are a number of tools available that allow organizations to send a mock phishing email and track who opens the email, who clicks on the links or who opens the attachment and/or provides their credentials. The key is to pick an influential figure in the organization and have an email come from some variation of his or her email address. While some may argue that this type of exercise sets employees up for failure, in truth this is simply the reality of how attackers infiltrate institutions – since most organizations have leadership teams posted on their public websites, this information is all a potential attacker needs to launch an effective phishing campaign. Employees can benefit from seeing how easy it is to gain confidence with a short email from the right sender.

Once the data from this type of exercise is collected, it is critical to share it with employees. Of course, there’s little value to be had in shaming people by name, but certainly showing the percentage of users who bit on the phish and how they could have spotted it is extremely beneficial for everyone. Phishing tests also allow an institution to exercise its incident response plans and better understand its employees’ comfort level in reporting suspicious activity. With this type of test data, the institution can then tailor targeted training for teams that fell below the company average and improve the means for reporting incidents.
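Tailoring training to teams that fell below the company average, as described above, is a simple aggregation of the test results. A sketch with invented data (team names and outcomes are hypothetical):

```python
from collections import defaultdict

# Sketch of aggregating mock phishing test results by team to find teams
# that clicked at a higher rate than the company average. All data invented.

results = [
    # (team, clicked_link)
    ("Lending", True), ("Lending", False), ("Lending", True),
    ("Ops", False), ("Ops", False), ("Ops", True),
    ("IT", False), ("IT", False), ("IT", False),
]

clicks, totals = defaultdict(int), defaultdict(int)
for team, clicked in results:
    totals[team] += 1
    clicks[team] += clicked  # True counts as 1, False as 0

company_rate = sum(clicks.values()) / sum(totals.values())
needs_training = [t for t in totals if clicks[t] / totals[t] > company_rate]

print(f"Company click rate: {company_rate:.0%}")
print("Target extra training:", sorted(needs_training))
```

Reporting the aggregate percentages rather than names keeps the exercise educational instead of punitive, while still pinpointing which teams would benefit from targeted follow-up training.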

Require an annual acknowledgement of your information security awareness program
While this is a regulatory requirement for many companies, it is a best practice for all companies. The acknowledgement should apply to all employees, including executives and board members. An efficient way to do this is to make it part of the annual information security policy and program approval process – thus promoting buy-in at the top, while also receiving the required approvals.

There are countless ways to deliver and track awareness training, with online delivery that interacts with the user and allows the organization to reach remote employees being one of the most effective and efficient options. This can be accomplished through a company intranet or learning management system that provides short quizzes after the training, thus ensuring accountability and easy tracking.

Often, one of the greatest challenges in the annual training and acknowledgement process is getting full participation. Be sure to set expectations up front with the initial delivery of the annual training, then reach out to non-compliers with a friendly nudge or reminder when they miss the deadline. As a last resort, work with the IT team to have a non-responsive employee’s email and/or chat account suspended until he or she completes the annual training.

When it comes to cybersecurity, improved employee awareness is often an institution’s best defense – it just takes the right strategies and consistent and timely delivery to get your employees on board. They will appreciate your efforts, understand the importance of protecting the institution and its assets and recognize that doing so is part of everyone’s job.

Disclaimer: The views and opinions expressed in this article are those of the author(s) and do not necessarily reflect the official policy or position of the Financial Managers Society.

About the Author

Emily Larkin is the chief information security officer at Sageworks, where she helps manage corporate information security, business continuity, disaster recovery and technology-related audit and compliance activities.