New generative Artificial Intelligence (AI) systems have captivated the world’s imagination with promise and potential. AI’s ability to analyze vast amounts of data and make autonomous decisions is a source of both awe and anxiety. People worry about bias in decision-making, the invasion of privacy, job displacement, and even the existential fear of machines becoming uncontrollable. How can we make sure AI benefits society?
The National Telecommunications and Information Administration (NTIA) has responded by seeking input on how to ensure that AI companies are "accountable," asking how to "develop a productive AI accountability ecosystem." NTIA defines "AI" broadly enough to sweep in most significant software systems, including many that pre-date recent AI developments.
And yet, as we at the Center for Growth and Opportunity point out in our just-filed comments, NTIA’s broad sweep misses the primary way companies and technology are held accountable.
AI companies, like those in every industry, are held accountable first and foremost by their customers acting within a competitive market system.
Photo caption: The best way to rein in artificial intelligence is by market forces, not Big Government regulation. (JOSEP LAGO/AFP via Getty Images)
Business and consumer markets, reputational and financial markets, generally applicable laws, and societal norms – all create feedback loops that align the interests of producers with stakeholders. Competition for profit drives this alignment, sparking companies to innovate, develop valuable products and services, and thus benefit society.
Of course, this system isn’t perfect. Sometimes, market feedback mechanisms fail. When a producer’s actions affect third parties or information distribution is asymmetrical, we may need alternative mechanisms. But these alternatives should be the exception, not the rule. They need to support, not replace, market accountability mechanisms.
NTIA's request doesn't acknowledge market-based accountability or identify gaps in it. Yet many of its proposed AI accountability mechanisms, such as transparency, certifications, and third-party audits, can and already do function within the market, both in AI and in other areas.
Consider, for example, products that certify the integrity of their supply chain with seals, or the “UL” certification marks on a wide variety of electrical home devices, or Yelp ratings of local services and restaurants.
In AI, one recent example is the Center for Industry Self-Regulation's newly released Principles for Trustworthy AI in Recruiting and Hiring, along with its Independent Certification Protocols for AI-Enabled Hiring and Recruiting Technologies.
These were created to establish a global baseline standard for the use of AI applications in recruitment and hiring and to create a pathway to independent certification for such technology. Market-based approaches like these are vital to a thriving AI accountability ecosystem.
It’s important to remember that the NTIA, while influential in convening discussions, is not a regulatory body and lacks the power to impose binding rules. Therefore, the agency cannot afford to ignore market-based accountability, given its limited authority and the inherent complexity of AI.
AI accountability is not just about regulation. It’s about engaging with our existing market system, identifying gaps and cautiously implementing exceptions when necessary. It’s about preserving the market’s ability to self-correct and innovate.
Indeed, markets can help society weigh the inevitable trade-offs between different accountability goals, such as privacy and transparency, or accuracy and access.
NTIA’s discussion of AI accountability should be more than an exercise in identifying how governments might regulate companies or how companies might regulate themselves. It should also move beyond identifying hypothetical AI issues that might need regulation sometime in the future.
Instead, NTIA should take a holistic view, recognizing that accountability comes first from markets. It should focus on how to enhance these market-based accountability ecosystems in the era of AI rather than attempting to replace or undermine them.
If NTIA seeks to strengthen our oldest and strongest accountability mechanisms – those provided by markets – then it will be better able to address any potential harms while preserving the benefits of AI for society.
Neil Chilson is the former chief technologist for the FTC and is currently a senior research fellow with the Center for Growth and Opportunity.