ABA Banking Journal: Banking groups file motion to intervene in retailer lawsuit over debit interchange fees
The Bank Policy Institute and The Clearing House filed a motion to intervene in a retailer lawsuit seeking to invalidate Regulation II’s standard for setting debit interchange fees at levels higher than federal statute permits. It’s the latest turn in a case that made its way to the U.S. Supreme Court earlier this year.
A group of North Dakota retailer associations and the truck stop Corner Post sued the Federal Reserve in 2021, arguing that Reg II permits interchange fees higher than the statute allows, which requires that such fees be “reasonable and proportional to the cost incurred by the issuer with respect to the transaction.” A district court dismissed the lawsuit because the six-year statute of limitations for challenging the regulation, which took effect in 2011, had passed. However, the Supreme Court ruled that the limitations period does not begin to run until the plaintiff is injured by final agency action, allowing the case to move forward in district court.
In their motion to intervene filed in U.S. District Court for North Dakota, the BPI and TCH argue that vacating Reg II would severely harm their members by threatening their ability to recover a reasonable return on debit card transactions, as required by the Durbin Amendment.
In a statement, American Bankers Association Card Policy Council Executive Director Tom Rosenkoetter said ABA strongly supports the effort by BPI and TCH to give banks of all sizes a voice in the legal dispute over debit interchange.
“BPI and TCH’s intervention will ensure that our industry can challenge the merchants’ unjustified demands to slash debit interchange and avoid any further consumer harms on top of what has already resulted from the current rule,” Rosenkoetter said. “Recently, community advocates, academics and a host of stakeholders outside of banking have joined together to oppose similar efforts to reduce debit interchange given the critical role it plays in supporting financial inclusion, fraud protection and other important bank programs. We urge those voices to be heard in this case as well.”
CISA News: How to avoid scams before and after a weather emergency
Extreme weather and natural disasters can occur with little warning, leaving you to make critical decisions when you may feel rushed. The information here will help you spot, avoid, and report scams as you prepare for, deal with, and recover from extreme weather and natural disasters.
Bowman: Credit unions, nonbanks should be subject to same regulatory expectations as banks
Credit unions and other nonbanks should be subject to the same regulatory and supervisory expectations as banks if they are engaged in the same activities, Federal Reserve Governor Michelle Bowman said today. Speaking at the Community Bankers Symposium in Chicago, Bowman outlined several challenges facing community banks, including competition from nonbanks. She also criticized regulators for exacerbating some of the risk management challenges facing banks.
A core concept in financial regulation is to impose the same regulation on entities that are engaged in the same activities, Bowman said. She suggested policymakers expand that principle to include the same regulation, guidance and supervisory expectations.
Community banks often face disadvantages when competing with nonbank rivals, which may not be subject to taxes or regulations such as the Community Reinvestment Act, Bowman said. “[Banks] are also subject to a broader range of restrictions imposed by regulatory requirements or the ‘soft’ power of supervision. In all of these cases, the disparity in the legal framework can have a distortive effect on competition.
“In short, where the financial regulatory framework can provide for parity of treatment, it should do so,” she said. “The regulatory framework should not knowingly distort competition, or effectively impose a regulatory allocation of credit.”
At the same time, one of the greatest challenges facing community banks is not managing any particular risk but rather how to address all of the risks they face, and how to prioritize the approach to tackling those risks, Bowman said. Regulators have sometimes exacerbated these challenges through policy choices, she added.
“Both regulators and banks should be working toward a common goal — a banking system that supports economic activity throughout the country, in which banks operate in a safe and sound manner and in compliance with consumer laws and regulations,” she said.
ABA Banking Journal: FinCEN releases commercial on beneficial ownership information reporting
The Financial Crimes Enforcement Network this week released a new video and radio commercial to educate business owners on the new beneficial ownership information reporting requirements. It is part of a larger public outreach campaign by the agency, which includes a dedicated website and videos on BOI reporting.
FinCEN last month issued a notice to financial institution customers about BOI reporting, explaining why certain customers must report directly to the agency in addition to giving information to their banks, which are subject to the customer due diligence rule.
Finextra: The Evolution of Trust: How to Build Confidence in AI-Based Financial Services (A3)
October 10, 2024 | Dirk Emminger, Managing Director
Traditionally, trust in the financial sector was based on personal relationships and an institution’s track record. The direct contact with advisors in a branch office, the opportunity to ask questions, and making decisions together with a human created a sense of security. However, with the increasing integration of AI in financial services, this dynamic is changing fundamentally. Trust needs to be redefined: Instead of the human element, transparency, algorithm explainability, and data security come to the forefront. Companies must adjust to the fact that trust is no longer automatically established through the human factor, but through the ability to design AI systems that are understandable, explainable, and secure.
Another aspect of this development is the varying level of trust that customers place in different AI applications. While the automation of everyday financial processes – such as paying invoices in the B2B sector through AI-driven API requests – is likely to be more readily accepted, there is greater skepticism when it comes to far-reaching financial decisions affecting one’s personal financial future. The thought of giving a bot access to sensitive account information or even responsibility for investments and loans is still hard to imagine for many. Therefore, a differentiated approach is needed: Where can AI and automation foster trust, and where must clear boundaries be drawn to promote acceptance?
[Chart] Source: Capgemini Research Institute
Customers also have different perceptions when it comes to the decisions of AI systems. While people can generally understand why a human advisor makes a particular recommendation, it is often harder to grasp the background and processes behind AI systems. This leads to uncertainties: Is the AI truly neutral? How are data being used to make decisions? And what factors actually contribute to these decisions?
To strengthen trust in AI decisions (and naturally to meet regulatory requirements), it is crucial to educate customers and help them understand how AI works and which data are used for what purposes. A key obstacle in building trust in AI systems is the “black box” problem – complex algorithms that are difficult for the layperson to understand. Many AI systems make decisions whose logic is barely comprehensible from the outside, which can lead to customers feeling a loss of control. Therefore, transparency and explainability are essential to gain user trust.
Errors in AI-driven processes are not out of the question, which raises the issue of responsibility. What happens if an AI system makes erroneous credit approvals or authorizes transactions incorrectly? Such cases are not rare: AI-based credit scoring systems have rejected creditworthy customers, and insurance algorithms have unjustifiably increased premiums.
Companies must clearly communicate how they handle errors and who holds responsibility. Processes for monitoring and correcting AI decisions are crucial to maintain customer trust.
However, trust is not built on accountability alone, but primarily through transparency and ethics – two key factors that we will examine in the next section.
Transparency and Ethics: Building Trust Through Clear Communication
A central element for fostering trust in AI-based financial services is clear communication about how these systems work. Providers must – not only driven by regulatory requirements – be capable of making complex AI processes tangible and understandable for their customers. This also includes revealing the logic behind automated decisions. A McKinsey study highlights that “explainability” – the ability to make AI decisions comprehensible – is a critical factor for customer acceptance. Users want to understand why an algorithm provides a particular recommendation or makes a decision. Therefore, banks and fintech companies should aim to avoid technical jargon and communicate the benefits and functionality of AI in a way that is easy to understand. Tools like “Explainable AI” (XAI) can help provide insights into the decision-making processes of AI models and make the “black box” more transparent. Additionally, it is crucial to ensure that models, particularly in highly critical applications, are free of bias and hallucinations.
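The “explainability” the article describes can be illustrated with an intrinsically interpretable scoring model, where every feature’s contribution to the decision can be shown to the customer. This is a minimal sketch only: the feature names, weights, baseline, and threshold below are illustrative assumptions, not any real scoring system.

```python
# Minimal sketch of an intrinsically interpretable credit-scoring model.
# All weights and thresholds are illustrative assumptions.

WEIGHTS = {
    "income_to_debt_ratio": 40.0,
    "years_of_credit_history": 2.5,
    "recent_missed_payments": -15.0,
}
BASELINE = 50.0            # score every applicant starts from
APPROVAL_THRESHOLD = 100.0

def score_with_explanation(applicant: dict) -> dict:
    """Return the decision plus each feature's contribution to the score."""
    contributions = {
        feature: weight * applicant[feature]
        for feature, weight in WEIGHTS.items()
    }
    total = BASELINE + sum(contributions.values())
    return {
        "score": total,
        "approved": total >= APPROVAL_THRESHOLD,
        # The part a customer (or regulator) can actually inspect:
        "contributions": contributions,
    }

result = score_with_explanation({
    "income_to_debt_ratio": 2.0,
    "years_of_credit_history": 8,
    "recent_missed_payments": 1,
})
print(result["approved"], result["contributions"])
```

Because the decision decomposes into per-feature contributions, the “black box” concern largely disappears; for more complex models, post-hoc XAI techniques aim to approximate this kind of breakdown.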
Data privacy is another key element in building trust. Customers expect that their data are handled securely and ethically. In the financial sector, sensitive information such as account details, spending behavior, or credit limits are involved. Transparency regarding data flow is indispensable: How are customer data collected, stored, and used? Companies should openly communicate their data policies and make it clear that data privacy is a top priority. It’s advisable to go beyond mere compliance with regulatory requirements and develop ethical guidelines for handling customer data.
Additionally, company ethics play a crucial role in building customer trust. Companies that establish clear ethical standards and guidelines can set themselves apart positively from competitors and build trust. Customers perceive companies as more trustworthy when they feel that ethical considerations are a key part of developing and implementing AI solutions. A survey by PwC found that 87% of consumers consider a company “more trustworthy” when it publishes and actively implements ethical guidelines for handling AI.
Critical View: Do Transparency and Ethics Even Matter in the Age of AI Agents?
Of course, trust is crucial when dealing with AI – at least, that’s the prevailing assumption. But what if we consider the situation from a different perspective? What if “greed overrides reason”? This saying describes the tendency of people to set aside rational considerations in favor of personal gain and advantages. It raises the question: how important are ethical behavior, transparency, or data privacy really, when the personal benefits of AI are significant? There have been instances in the past where users ignored data privacy concerns as soon as the personal advantages – such as faster services or financial gains – became apparent. Think of the scandals around Facebook and Cambridge Analytica: despite significant data privacy violations, Facebook’s user base hardly shrank.
Another critical aspect is whether the aforementioned requirements for transparency, ethics, and data protection truly apply equally to all customer segments. A generation raised on social media, online data trade, and multinational data usage may have a different attitude toward their data than older generations. Younger customers are often used to exchanging personal information for service advantages – whether it be for personalized advertising on social media or sharing location data for real-time updates. Could it be that for these segments, the benefits of AI-driven financial services outweigh concerns about transparency and data privacy?
The issue of power dynamics between companies and customers also requires critical examination. While companies are expected to uphold principles like transparency and ethics, firms with significant market power might find it tempting to dilute these standards. A historical example is the introduction of “pay-to-win” models in video games: despite ethical concerns and criticism from consumer advocates, these models prevailed because they were profitable and enough users accepted the conditions.
Another point is the authority of AI. Once AI agents independently take on financial decisions in various areas – be it for credit, investments, or other services – power dynamics could shift even further. Customers might grow accustomed to AI systems making the “better” decisions and relinquish control without questioning decision pathways or ethical implications. The risk here is that technological progress takes center stage while ethical concerns and transparency fade into the background.
But how do customers experience all these aspects in practice? Ultimately, the daily user experience determines whether trust in AI is established or lost. This makes the question of “customer experience” a central challenge: how can the user experience with AI solutions be crafted in such a way that trust is not only built but also sustained?
Customer Experience: The Influence of User Experience on Trust in AI
One of the key factors for building trust in AI solutions is usability. Usability plays a crucial role in how quickly customers understand and use the available features. The more intuitive and simple a system is designed, the more likely customers are to integrate it into their daily routines. A strong user experience not only builds trust but also makes complex technologies more accessible. However, if customers face difficulties in handling the system, have to go through too many steps, or cannot understand how an AI feature works, it can lead to mistrust and uncertainty. Many customers expect AI assistants not only to provide precise answers but also to deliver contextually relevant information and anticipate their needs. Especially in the financial sector, it’s important to convey that AI systems understand the individual context of the customer – whether it’s recommending a suitable product or providing highly personalized, quick information. To meet and exceed these expectations, AI solutions should not just be reactive but should proactively offer helpful suggestions.
Emotional Component and Human-Machine Interaction
While speed, precision, and usability are technically driven, the emotional component of human-machine interaction should not be overlooked. AI systems, especially in the financial sector, operate in an area where customers often exhibit a high degree of trust and emotional attachment. Therefore, it is crucial to convey a sense of humanity even in digital interactions. An example includes AI systems that engage with customers through voice assistants or animated avatars. Here, the use of “empathetic AI” can help by responding to the user’s mood and communicating accordingly. Even non-AI-based “follow-up questions” can strengthen trust – such as when a system verifies critical decisions with the user before executing a transaction. Providers should ensure that the interaction with AI is perceived positively on an emotional level, incorporating elements of friendliness, reliability, and understanding into the user experience.
In conclusion, the user experience significantly influences the trust placed in AI systems. An intuitive design, rapid and precise responses, and a human interaction layer can make all the difference. Ultimately, it’s not just the technology itself that matters but how customers interact with it and how AI integrates into their daily lives. The challenge is to put the customer at the center and create an experience where the benefits of AI are clearly tangible – without losing the human aspects of the financial world.
While we have already established how crucial trust, transparency, and UX are in the context of AI in the financial sector, one key question remains: how much AI interaction do customers truly want? From the traditional personal advisor to digital AI agents – the transformation is deep yet challenging. How far are customers willing to embrace this new form of advisory? And at what point do they yearn for the “human touch” that even the most advanced AI cannot provide? In the next article, we will delve into the changing customer relationship, the balancing act between man and machine, and the crucial question of how much AI customers are willing to accept – and where the boundaries lie.
ABA Banking Journal: New cryptography guidance outlines process for banks to strengthen IT security
October 8, 2024
A new whitepaper provides guidance for banks and other financial institutions to help secure their computer systems as new threats emerge, such as those from quantum computing.
The paper by the Financial Services Information Sharing and Analysis Center, or FS-ISAC, provides financial firms a framework to improve their “cryptographic agility,” which measures an organization’s ability to adapt cryptographic solutions or algorithms quickly and efficiently in response to new developments and threats. FS-ISAC warns that the move to crypto agility must begin immediately as quantum computing is likely to make a commonly used class of cryptography algorithms insecure in the next few years.
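The core idea behind cryptographic agility is indirection: code names an algorithm abstractly, and the concrete choice lives in one swappable registry. As a hedged sketch (the tier names and registry contents are illustrative assumptions, not from the FS-ISAC paper), migrating away from a broken algorithm then becomes a configuration change rather than a codebase-wide rewrite:

```python
import hashlib

# Illustrative sketch of cryptographic agility for hashing: callers name
# an abstract tier, and the tier-to-algorithm mapping can be updated in
# one place when an algorithm must be retired.

ALGORITHM_REGISTRY = {
    "legacy": "sha1",        # slated for removal
    "current": "sha256",
    "next": "sha3_256",      # candidate replacement
}

def digest(data: bytes, tier: str = "current") -> str:
    """Hash `data` with whatever algorithm the tier currently maps to."""
    algo = ALGORITHM_REGISTRY[tier]
    return hashlib.new(algo, data).hexdigest()

# Migrating is a one-line registry change, not a hunt through the codebase:
ALGORITHM_REGISTRY["current"] = "sha3_256"
```

The same indirection applies to signing and key exchange, which is where post-quantum migration pressure is most acute; real deployments also need to track which stored artifacts were produced under which algorithm version.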
“The financial services sector cannot risk insecure data transmission or storage — it would break the way we conduct business today,” FS-ISAC says in the whitepaper. “And as the number of systems, dependencies between systems and overall technical complexity grow, the effort to update cryptographic assets has intensified.”
The first part of the paper describes what is new and distinctive about crypto agility, according to FS-ISAC. The second part offers more detailed information for technologists and other IT and security specialists at financial institutions. The paper seeks to help stakeholders across organizations “understand the problem space, grasp the necessity of crypto agility, and define an approach that works for their institutions.”
Vote No on IM-28 | South Dakota Retailers Association
South Dakota Retailers Association Executive Director Nathan Sanderson shares the impact of IM-28, a sweeping tax cut that would decrease state funding by up to $646 million, potentially resulting in an income tax, higher property taxes, or less funding for essential public services like schools and roadways.
Salary & Compensation Surveys have been sent to participants and non-participants who pre-ordered them. These surveys gathered salary and cash compensation (salary + annual cash incentive/bonus + commissions) for approximately 30 executive positions and over 150 middle management and staff level positions. Blanchard Consulting Group will provide the national and combined North Dakota and South Dakota surveys to all purchasers. Please be aware that the surveys should not be reproduced or distributed to anyone else. If you are interested in ordering, reach out to [email protected]. For any survey-related questions, please contact Elyse Hoffmann at [email protected] or 608-843-9672.
2024 SDBA Fall IRA Update - Virtual
November 14, 2024
The IRA Update builds on the attendees’ knowledge of IRA basics to address some of the more complex IRA issues their financial organizations may handle. This course includes how the SECURE Act really changes our two biggest topics: RMDs and death distributions and discusses any pending legislation. This is a specialty session; some previous IRA knowledge is assumed. The instructor uses real-world exercises to help participants apply information to job-related situations.
Q: Does Regulation F / Fair Debt Collection Practices Act (FDCPA) apply to banks that collect on their own debts?
A: While banks collecting their own debts are not strictly required to follow the FDCPA, financial institutions in these situations should nevertheless be mindful of the potential risks – namely UDAAP/UDAP – related to their internal collections. The FDCPA and UDAAP/UDAP are closely related because the FDCPA defines certain actions as unfair, deceptive, or abusive. Additionally, since the FDCPA has been around for a long time – and allows consumers to bring lawsuits against debt collectors directly – there is plentiful case law in which courts have held that various FDCPA violations constitute unfair, deceptive, or abusive acts or practices. For this reason, many institutions will endeavor to have their internal collections abide by most of the FDCPA's prohibitions, particularly those that the statute or the regulation defines as unfair, deceptive, or abusive; of particular note are Regulation F's sections 1006.14, 1006.18, and 1006.22.
So while the FDCPA doesn't require original creditors collecting their own debts to follow the initial validation notice and validation period process, its prohibitions – don't contact the consumer too frequently, don't disclose the debt to third parties, make sure you disclose to the consumer that you are attempting to collect a debt, and so on – are requirements a regulator may treat as UDAAP issues when it comes to first-party collections.
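One of those prohibitions, Regulation F's section 1006.14(b) call-frequency presumption (more than seven calls within seven consecutive days), lends itself to a simple guardrail in a collections system. This is a hedged sketch of the rolling-window check only; it simplifies the rule (which also restricts calls within seven days of a prior telephone conversation) and is not a compliance implementation.

```python
from datetime import datetime, timedelta

# Illustrative guardrail inspired by Reg F § 1006.14(b): presume a problem
# if placing this call would exceed 7 calls within a 7-day window. A real
# system would also track conversations and per-debt counting rules.

CALL_LIMIT = 7
WINDOW = timedelta(days=7)

def may_place_call(prior_calls: list[datetime], now: datetime) -> bool:
    """Allow a new call only if fewer than 7 calls fall in the trailing 7 days."""
    recent = [t for t in prior_calls if now - t < WINDOW]
    return len(recent) < CALL_LIMIT
```

A first-party collector adopting this limit voluntarily gains a concrete, auditable control to point to when regulators assess UDAAP risk in internal collections.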