
Let’s face it — Artificial Intelligence is just that: artificial.
Artificial intelligence is already reshaping our lives, and nowhere are the risks more serious than in our elections and in the lives of our children. From AI-generated political messages designed to manipulate voters, to the misuse of children’s images to create exploitative or harmful content, these technologies are being deployed faster than the law can respond.
As your state senator, my first responsibility is to protect Missourians — especially families and the integrity of our democratic process.
I also believe deeply in state sovereignty and local control, which is why I am troubled by recent efforts at the federal level to prevent states from acting where Washington has failed. Missouri should not be left defenseless while powerful technology outpaces accountability.
Here are clear, public examples of the harms unregulated AI has caused — the kind of real-world damage that demands action.
AI & Election Integrity
Artificial intelligence isn’t just a tool for convenience — it’s rapidly becoming a powerful weapon in shaping political opinion and influencing elections, often without voters’ awareness.
AI-generated robocalls mimicking public officials' voices have already appeared in the United States. In January 2024, thousands of New Hampshire voters received phone calls using an AI-generated voice resembling President Joe Biden, falsely urging them not to participate in the state's presidential primary — a tactic widely condemned as voter suppression and election interference. Criminal charges and federal fines have been pursued in connection with these calls under laws prohibiting impersonation and deceptive practices.
Further research shows that AI chatbots can significantly alter voter opinions with only a short interaction. A 2025 study from Cornell University found that conversational AI can shift voter support in either direction by producing large amounts of persuasive claims, some of which are incomplete, biased, or misleading — meaning generative AI could be harnessed to sway political views on a massive scale.
These incidents underscore how accessible AI tools have become and how easily they can be deployed to circulate misleading political content — potentially suppressing turnout, distorting public perception, affecting election outcomes, and severely damaging the character and reputations of innocent candidates.
The ability to fabricate credible-sounding statements and realistic video at scale poses a growing threat to our elections — and Missouri should not wait for federal action.
Exploitation of Children
One of the most alarming and underreported harms of unregulated AI is the exploitation of children’s images.
AI is trained on enormous datasets scraped from the internet that include identifiable photos of children — often without consent. These datasets empower generative models to create convincing AI-generated images or videos of minors that never existed, or to depict them in harmful or exploitative scenarios. Researchers warn that even personal photos parents post online can be reused in training sets and later manipulated without permission, creating risks of deepfake abuse and child exploitation.
Law enforcement has documented real cases where generative AI tools were used to create child sexual abuse material — in one instance, a medical professional received a decades-long prison sentence for generating and distributing AI-produced sexual images of minors. Moreover, AI is being used by private individuals — including teenagers — to sexualize images of peers, causing profound emotional trauma, reputational damage, and long-lasting harm to young people’s lives.
These are not hypothetical risks; they are occurring today in real communities, and they show that unchecked AI use can put children directly in harm's path.
Other Harms of Unregulated AI
AI doesn’t only threaten political integrity and children’s welfare. It carries real-life risks that have already caused harm, or could cause serious harm if left unchecked:
- Data privacy issues
- Intellectual property infringement
- Job loss
- Lack of transparency
- Unsafe decision-making by law enforcement, doctors, etc.
These are real harms, and real people have already experienced many of them due to the lack of adequate oversight of AI.
My Response: SB 1012
To confront these threats, I am proud to sponsor Senate Bill 1012. SB 1012 creates new state provisions relating to artificially generated content, especially where it intersects with elections and exploitation.
Key components of SB 1012 include:
- Election Transparency: Political ads, communications, and public messaging created or modified using AI must clearly disclose that they contain AI-generated elements, with penalties for violations.
- Criminal Penalties for Harmful Deepfakes: The bill establishes criminal offenses for creating, or threatening to publicly disclose, deepfakes of individuals under 18, with penalties ranging from class E to class B felonies depending on severity.
SB 1012 is common-sense legislation that protects elections and safeguards children while respecting innovation and political expression.
Why a Federal Clampdown on States Is No Solution
On December 11, 2025, the President signed an executive order attempting to centralize AI policy and restrict states from creating regulations of their own. Critics — including many state leaders — argue this overreaches and could prevent states from protecting their residents while Congress fails to act.
Executive orders are not legislation and cannot preempt state law absent specific congressional authorization. States have long exercised authority over consumer protection, child welfare, elections, and safety — areas where AI is already creating novel harms. Preempting state authority at this critical moment risks leaving citizens exposed and communities unprotected.
States must be able to act when technology outpaces federal law. Missouri will not shrug off that responsibility.
Welcome Innovation With Common Sense
We should welcome technology that improves lives. But welcome does not mean blind trust. The stories above are proof that when companies and governments move faster than common-sense protections, ordinary people are the ones who get hurt. I respect the principle of federal leadership, but not when it weakens the ability of states to defend their citizens.
If Washington insists on trying to preempt state action, we should not abandon our duty to protect our communities. I will keep fighting for common-sense rules that keep families safe, preserve our freedoms, and hold those who profit from technology accountable when their systems harm people.
Missouri can - and must - do better.