'Fast and Furious': As AI Accelerates, So Does Corporate 'Risk Velocity'
The sudden collapse of Silicon Valley Bank is a textbook example of how quickly risks can skyrocket. Experts say that's why being proactive is so important. Dominique Shelton Leipzig is quoted.
“With great power comes great responsibility,” Uncle Ben once told Peter Parker.
What can happen when a company acquires too much strength and advances too quickly? Risk velocity—the speed at which risk can occur and cause damage to an organization—increases along with it.
Today’s society is living proof that the velocity of advancement is on a constantly accelerating trajectory. And while it’s true that the possibilities are endless, it’s also true that rapid technological innovation will inevitably be accompanied by a rapidly growing assortment of unexpected and unknown risks.
The rapid spread of information in the digital age has left companies more susceptible to risk than ever before.
The collapse of Silicon Valley Bank in March illustrates the point: easy withdrawal access and viral social media posts fueled the largest bank run in history within a matter of hours, before federal regulators stepped in.
Similarly, when CEO Elon Musk’s initial launch of Twitter Blue replaced the social media company’s longstanding verification system, companies found themselves indirectly exposed to misinformation and its financial side effects.
A Twitter account posing as the official Eli Lilly and Co. account tweeted that “insulin is free now” as a joke, but the impact on the company was anything but funny: its stock price fell, its market cap dropped and the company pulled its advertising from the site.
Artificial intelligence is another area in which the pace of evolution is matched only by the associated risk velocity.
The ability of machines to learn and improve without explicit instructions has the potential to revolutionize many industries, from health care to finance and up the courtroom steps and ultimately into the judge’s chambers. However, businesses that use AI must be aware of the legal and operational risks that come with it—and how quickly those risks can materialize.
The risk landscape can change on a dime, said Dominique Shelton Leipzig, a partner in Mayer Brown’s Los Angeles office and a member of the firm’s cybersecurity and data privacy practice. And the proliferation of new technologies, such as the AI advancements seen in 2023, was not on the radar for most companies, she added.
A proactive approach is vital to assessing potential threats to an organization, Shelton Leipzig said.
“Being able to see where trends are going really involves pulling people into the ecosystem and determining enterprise risk that will be useful to help the enterprise of general counsel, CEOs, board members, the chief compliance officer—everybody—to be able to look around corners and anticipate the next thing,” she said.
The rapid adoption and evolution of new platforms and technologies have created a critical moment, requiring legal, compliance, IT and security teams to reorient to an entirely new and constantly evolving data landscape. But the sheer velocity at which new applications emerge, existing applications are updated and usage patterns shift means the notion of a “developing risk” is becoming almost quaint.
In today’s technological landscape, a business crisis can occur literally at the click of a button.
Businesses that use AI must be aware of the legal and operational risks that come with it. By designing their AI systems with fairness, accountability and transparency in mind, complying with relevant laws and regulations, developing robust cybersecurity measures, and fairly compensating creators, businesses can mitigate these risks and reap the benefits of this powerful technology.
Collaboration among a business organization’s various departments is a necessity, but there is also a need to expand and develop resources outside the organization.
At Mayer Brown, the firm has initiated AI conferences with stakeholders and clients to discuss the legal, contractual and regulatory aspects of using AI to deliver financial services. It also held its first annual Digital Trust Summit this past March that brought together 60 CEOs and board members to discuss digital trust in the context of generative AI.
“What’s really important right now on the AI front: develop a governance structure with someone in [the] enterprise [being] responsible for AI and understanding what data is being used to train models,” she said. “This is going to be happening fast and furious for clients and companies in the foreseeable future.”
Reprinted with permission from the May 25, 2023 edition of Corporate Counsel © 2023 ALM Properties, Inc. All rights reserved. Further duplication without permission is prohibited.