
News
SDBA Updates
SDBA Events
Online Education

The FDIC and the Office of the Comptroller of the Currency today rescinded guidance on leveraged lending issued more than a decade ago, saying it was too restrictive.
The agencies rescinded the 2013 guidance and a 2014 FAQ on the guidance that was issued jointly with the Federal Reserve. The guidance outlined “high-level principles” related to safe-and-sound leveraged lending activities. However, in a statement today, the FDIC and OCC said the documents impeded banks’ application of risk management principles to leveraged lending.
“This resulted in a significant drop in leveraged lending market share by regulated banks and significant growth in leveraged lending market share by nonbanks, pushing this type of lending outside of the regulatory perimeter,” they said.
The agencies are replacing the guidance with eight “general principles” for risk management of commercial loans and other types of lending. They include the recommendation that banks have “a clearly defined risk appetite that is reasonable and reflects the aggregate level and types of risk it is willing and able to assume to achieve its strategic objectives.”
In related news, the OCC today updated guidance concerning venture loans. The updates reflect “the OCC’s policy of not discouraging banks from engaging in prudent venture lending activities.”
ABA Banking Journal: Freddie Mac issues guidelines for AI use by mortgage companies
December 9, 2025

Freddie Mac has updated its guidelines for mortgage companies to establish a framework for the responsible use and deployment of artificial intelligence technologies and machine learning systems.
According to a recent bulletin, Freddie Mac updated its servicer guide to account for AI and ML use. The updated sections include enhanced requirements and best practices to ensure transparency, accountability and ethical stewardship within AI and ML initiatives.
“The governance framework outlined in these sections will support organizations in aligning with regulatory standards, mitigating risks and fostering trust in AI-driven solutions,” Freddie Mac said.
The changes will go into effect on March 3, 2026.
Full Article
ABA Banking Journal: Are we sleepwalking into an agentic AI crisis?
December 9, 2025 | By Siddharth Damle

In early 2025, a healthtech firm disclosed a breach that compromised records of more than 483,000 patients. The cause was a semi-autonomous AI agent that, in trying to streamline operations, pushed confidential data into unsecured workflows. What does this mean for the rollout of agentic AI in finance?
Financial institutions are racing to adopt so-called agentic AI, which describes systems that can pursue goals, make decisions and act with limited human oversight. But autonomy comes with a price. Agentic AI introduces layers of unpredictability: emergent behaviors, misaligned objectives and even the potential for agents to collude or evolve strategies unintended by their designers.
Unless boards and regulators act now, the financial services sector could face its own “737 Max moment,” where over-reliance on automation collides with public trust and regulatory accountability.
Not just another chatbot
Until recently, most corporate AI use cases looked like digital assistants: customer service chatbots, predictive models or workflow optimizers. They were narrow, reactive and tightly governed by their training data.
Agentic AI is different. These systems aren’t just answering questions — they’re taking initiative, adapting and autonomously performing workflow tasks. An agent might book travel, negotiate a supplier contract or manage a multi-step cyber-defense routine. In more advanced deployments, multi-agent systems work together, adapting to shifting conditions, and making decisions faster than human managers can intervene.
The promise is enormous: smarter automation, fewer bottlenecks, and cost savings at scale. Gartner has described agentic AI as “standing on the shoulders of generative AI,” poised to transform industries by carrying out tasks that once required skilled human oversight.
But that very autonomy is what creates new risks.
When autonomy backfires
According to recent research published in the HIPAA Journal, attackers are already exploiting agentic AI to automate every stage of an intrusion.
Autonomous systems can be designed to handle reconnaissance, probing networks for weaknesses. They can generate tailored phishing campaigns that adapt in real time to the victim’s responses, and even coordinate lateral movement to extract valuable data — often without triggering alarms.
But AI that is non-factual, invents information or makes its own decisions can also be costly for businesses. These are not hypothetical scenarios: real cases show how the same autonomy that makes AI powerful can make it dangerously disruptive. For example, Replit’s AI coding assistant reportedly went rogue during a code freeze at startup SaaStr, wiping the production database. To cover its tracks, the agent generated fake data — including 4,000 phantom users — fabricated reports and falsified unit test results.
McDonald’s has ended its three-year AI drive-through experiment with IBM after repeated ordering errors led to frustrated customers. Viral videos, including one showing the AI adding 260 Chicken McNuggets to an order, highlighted the system’s failures.
One of the most notable cases highlighting corporate liability for AI occurred when Air Canada was ordered to pay CA$812.02 to a passenger after its chatbot provided incorrect information about bereavement fares. The passenger followed the assistant’s guidance and applied for a retroactive refund, only to have his refund claim denied. A Canadian tribunal ruled the airline failed to ensure the chatbot’s accuracy, holding it responsible for the misinformation.
Incremental risks posed by agentic AI applications
While agentic AI has promising applications in a business context, the technology can go off-script in subtle but damaging ways.
- Error propagation. A single hallucination — such as an agent misclassifying a transaction — can cascade across linked systems and other agents, leading to compliance violations or financial misstatements.
- Unbounded execution. An AI agent tasked with executing a business process can enter a recursive loop, consuming massive computing resources and driving cloud service provider bills into six figures.
- Opaque reasoning. As agents make decisions based on probabilistic models, executives often cannot explain why a decision was made. This lack of transparency is increasingly unacceptable to supervisors in highly regulated industries like finance and healthcare.
- Collusion. Multi-agent environments may lead to “unintended teamwork.” Researchers have shown that when agents interact, they can develop novel strategies — sometimes working at cross-purposes with the organization’s goals.
These risks amplify known AI threats — bias, data breaches or IP theft — raising the stakes for businesses. A hallucination in a chatbot might annoy a customer, but a self-directed financial agent’s mistake could trigger millions in erroneous trades.
The governance imperative
There is an inherent temptation to delegate ownership of AI oversight to the technology department. That strategy can prove myopic. Agentic AI risk is not purely a technology issue. It’s a broader systemic risk issue, requiring oversight from multiple departments spanning legal, privacy, data, compliance, enterprise architecture, information security and more.
Institutions must start with fundamentals: inventory every AI tool in use, whether embedded in vendor platforms or introduced informally by staff. Without a clear map of what agents exist, leadership cannot effectively govern them.
Governance must also move beyond high-level “AI ethics principles” to concrete, enforceable practices:
- Policies for testing, monitoring, and retiring AI agents.
- Resource caps to prevent runaway execution (see the sketch after this list).
- Isolation protocols to limit unintended collusion among agents.
- Recurring oversight, not one-time audits, since autonomous systems evolve over time.
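To make the idea of resource caps and recurring monitoring concrete, here is a minimal, hypothetical Python sketch of hard limits (steps, wall-clock time and spend) wrapped around a generic agent loop. The names run_with_guardrails, step_fn and cost_fn are illustrative assumptions, not part of any particular agent framework.

```python
# Hypothetical sketch: hard caps around an autonomous agent loop.
# run_with_guardrails, step_fn and cost_fn are illustrative stand-ins,
# not APIs from any specific agent framework.
import time
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guardrails")

MAX_STEPS = 25          # cap on autonomous iterations per task
MAX_SECONDS = 120       # wall-clock budget per task
MAX_SPEND_USD = 5.00    # cost budget (e.g., model/API calls) per task


def run_with_guardrails(task, step_fn, cost_fn):
    """Run an agent step function under step, time, and spend caps.

    step_fn(task, state) -> (state, done) stands in for whatever the
    agent framework exposes; cost_fn(state) returns cumulative spend.
    """
    state, start = None, time.monotonic()
    for step in range(MAX_STEPS):
        state, done = step_fn(task, state)
        elapsed = time.monotonic() - start
        spend = cost_fn(state)
        # Recurring oversight: log every step so behavior can be audited.
        log.info("step=%d elapsed=%.1fs spend=$%.2f", step, elapsed, spend)
        if done:
            return state
        if elapsed > MAX_SECONDS or spend > MAX_SPEND_USD:
            log.warning("budget exceeded; halting task %s for review", task)
            break
    # Escalate instead of looping forever: a human decides what happens next.
    return {"status": "halted_for_review", "task": task, "partial_state": state}
```

The design choice worth noting is that the wrapper escalates to human review when a budget is exhausted rather than retrying indefinitely, which is precisely the runaway behavior the “unbounded execution” risk above warns about.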
Gartner’s recent AI Agent Assessment Framework offers one useful model. By categorizing agent capabilities — perception, decisioning, actioning, adaptability — organizations can determine whether a given use case truly requires agentic AI, or whether traditional automation would be safer and cheaper.
When not to use agentic AI
It’s tempting to apply the latest technology everywhere. But not every task benefits from autonomy. Stable, predictable workflows — payroll processing, for example — are often better served by robotic process automation or deterministic scripts. Overengineering these processes with agentic AI introduces needless cost and risk.
Certain domains remain too complex or high-stakes for delegation. In consumer lending, for instance, handing over full credit approval authority to an opaque AI system could be reckless. In healthcare, allowing autonomous agents to manage treatment protocols without human oversight is equally unacceptable. Finding the sweet spot for agentic AI adoption requires discipline: identifying where adaptability and autonomy genuinely add value, and where human judgment or traditional tools remain indispensable.
The shift to agentic AI mirrors earlier technological revolutions. Just as the internet expanded both opportunity and exposure, autonomous AI promises to streamline industries even as it creates new vulnerabilities. According to a recent MIT study, 95% of enterprise AI pilots fail. Among the root causes are poor integration with existing workflows, reliance on generic tools that don’t adapt to enterprise needs and slow scaling within large organizations.
Companies that treat agentic AI as a shortcut to efficiency may soon find themselves explaining to shareholders and regulators why they let machines take the wheel. Industry leaders have a window to act — to build governance strong enough to keep autonomy in check, well before the first major agentic AI crisis hits the balance sheet.
Siddharth Damle is a financial and AI risk management expert based in the tri-state area. Opinions expressed in this article are the author’s own and do not represent those of any company or organization.
Original Article
ABA Events: 2026 ABA Washington Summit: March 9-11

Join the biggest annual gathering of bank leaders in Washington to push for a bank policy framework that lets your bank stay focused on serving your customers, clients and communities. Hear directly from key players in the 119th Congress and the new administration on what the future holds for banks of all sizes.

2026 Holiday Signs
The SDBA offers holiday signs that banks can print and display to notify customers when the bank will be closed for standard holidays. The signs are set up to be printed on 8.5x11" paper and are provided as a high-resolution PDF file. Banks may print these signs and use them as they see fit.
2026 Holiday Signs

2026 GSBC Bolder Banking Scholarship Program
The Graduate School of Banking at Colorado (GSBC) and the SDBA are partnering to recognize community banks across South Dakota that are redefining what it means to serve customers and communities boldly.
Through the Bolder Banking Scholarship, GSBC will award one SDBA member bank for its innovative, community-driven approach to banking. The recipient bank will then select a rising star employee to attend GSBC’s flagship Annual School Session in Boulder, Colorado, using the scholarship toward tuition. SDBA member banks may nominate themselves or another bank demonstrating innovation and bold leadership in banking.
Nomination deadline: February 1, 2026 | Recipient announced: March 1, 2026
Submit a Nomination
Learn more about GSBC and the Bolder Banking Scholarship at www.GSBColorado.org.


2026 SDBA IRA Basics Webinar
January 7 | 9:00 a.m. CST | Virtual via Zoom
This course is a “very basic” IRA seminar designed to build a solid IRA foundation. The seminar will start with the differences between a Traditional and a Roth IRA, then discuss how to set up a new IRA and the eligibility rules for contributing to one. The biggest topic for people new to IRAs is moving money from one financial institution to another, which involves IRA transfers and rollovers, plus direct rollovers from a qualified plan. Discussion will go through the 13 exceptions that allow money to be taken out of an IRA before age 59.5 without the penalty tax, and how the required minimum distribution (RMD) is calculated for a traditional IRA. There will be an introduction to death distributions. Finally, we will cover how to take money out of a Roth IRA.
Matt Dickinson of JM Consultants has been the SDBA's trusted IRA educator since 2021, bringing a wealth of knowledge to the role. With nearly 20 years of experience in banking and retirement planning, webinar attendees can rest assured they are learning from an expert!
Details & Registration
2026 Midwest Economic Forecast Forum
Wednesday, January 14, 2026 | 11:00 a.m. - 12:45 p.m. CST
Prepare for 2026 by joining an economic discussion with Federal Reserve Bank of Minneapolis President Neel Kashkari. Time will be allowed for open Q&A during this virtual event.
Bankers are encouraged to invite their business clients and local community leaders to tune in to these economic insights together. Individuals or group registration rates are available.
Details & Registration
2026 SDBA State Legislative Day
February 11, 2026 | Ramkota Hotel & Conference Center | Pierre
Registration is OPEN for the 2026 SDBA State Legislative Day, February 11, at the Ramkota Hotel & Conference Center in Pierre, SD. SDBA’s Legislative Day offers a valuable opportunity to stay informed on state and federal legislation impacting the banking industry. Attendees can expect insightful discussions, networking, and direct engagement with key policymakers.
SD GOED Commissioner Bill Even is our keynote speaker this year. He’ll lead an engaging discussion on the vital connection between economic development and the banking industry. Mr. Even will explore how the state’s economic initiatives influence the banking landscape and how banks play a critical role in driving community and business growth across South Dakota. He will also highlight opportunities for partnership and collaboration to strengthen local economies, support business expansion, and foster long-term prosperity statewide.
The hotel blocks are open, but they close on January 11! Make your lodging reservations now, as lodging in Pierre/Fort Pierre becomes extremely hard to find during the legislative session. Blocks are available at the Ramkota and ClubHouse Hotel & Suites.
Details & Registration
Online Education

Participating in learning opportunities outside the bank can be challenging. Take advantage of the SDBA's extensive selection of webinars and on-demand training to enhance your banking expertise directly from your computer.
GSB Online Seminars | OnCourse Learning | SBS Institute | ABA Training
Q: We’ve heard a lot lately about Section 1071 – what’s the latest status on those rules (including thresholds and timelines)?
A: Ah, Section 1071 - the gift that keeps on giving…and changing! The "current" interim final rule extended the Section 1071 small-business-lending compliance dates by roughly a year. Under this rule, Tier 1 institutions would begin collecting demographic data on July 1, 2026; Tier 2 on January 1, 2027; and Tier 3 on October 1, 2027. Voluntary collection one year in advance is still fair game (for testing "…procedures and systems for compiling and maintaining this information…").
However, as of November 13th, the CFPB has proposed revisions to Section 1071 via its proposed rule (90 FR 50952) which appears to signal a shift toward a "longer-term, incremental approach." The proposed rule, if adopted, would significantly narrow the scope of the 2023 final rule (currently on hold due to ongoing litigation from the Texas Bankers Association and other plaintiffs) by rolling back several discretionary data points and redefining what counts as a covered transaction.
Under the new draft, lenders would report on a smaller set of core data points, with the CFPB reserving the option to expand requirements later. The proposed rule appears to remove prior requirements to collect details like denial reasons, pricing data, application method, and workforce size, while also excluding merchant cash advances, agricultural credit, and small-dollar loans. Coverage thresholds would seemingly shift as well, and the rule’s "small business" definition would tighten to firms with $1 million or less in annual revenue. The proposed rule can be found at 90 FR 50952. For additional changes and up-to-date news on Section 1071, please check in periodically to our Regulatory Change Management Tracker and our Banker Compliance News, as well as our 1071 Small Business Lending Toolkit.
SDBA eNews Archive
Advertising Opportunity
Learn more about sponsoring the SDBA eNews
Questions/Comments
Contact the SDBA at 605.224.1653 or via email