THE REPRO RUNDOWN | Inside the Science and Politics of In Vitro Fertilization

In the wake of President-elect Donald Trump’s victory, women and birthing people have been left to wonder how his return to power will impact their reproductive rights. Though his political record demonstrates an unwillingness to recognize key reproductive freedoms, including abortion, his avowed support for in vitro fertilization (IVF) breaks ranks with many Republican Party politicians who oppose the procedure. Trump, in an effort to appeal to the majority of U.S. voters who support IVF, even proclaimed himself the “father of IVF,” despite needing Senator Katie Britt (R-Ala.) to explain the procedure to him earlier this year.
While Democrats and Republicans debate the future of IVF, around 22% of U.S. adults remain unsure if they support the procedure, possibly owing to a lack of knowledge about what IVF actually entails. Moreover, cultural taboo contributes to misconceptions about the treatment — for example, the Catholic Church condemns IVF, decrying that it disassociates marriage from procreation. So, putting politics and culture aside, what is the science behind IVF?
The journey from lab to life begins when a person seeks to become pregnant. There are many reasons someone might seek the help of assisted reproductive technology: 1 in 8 U.S. couples struggle with infertility, same-sex partners can use IVF to start a family and cancer patients may freeze their eggs to protect their fertility from chemotherapy-induced damage. Regardless, the first step in IVF is an evaluation of the male and female partners’ eggs and sperm, ensuring that at least some of their reproductive cells, called gametes, are healthy. Then, the female partner receives hormone injections for 10 to 12 days, which allows multiple eggs to mature inside the ovary. 
The next step in IVF is egg retrieval, which occurs while the patient is under anesthesia. With the guidance of an ultrasound machine, a doctor inserts a thin needle through the vagina into the ovary and extracts 10 to 20 eggs. The doctor then places the eggs in a dish with sperm isolated from the male partner’s semen. 
The fertilized eggs grow in an incubator until they reach the blastocyst stage of development, typically after 3 to 7 days. This incubation period — during which growth occurs in an artificial environment instead of in a live organism — explains why the procedure is called “in vitro.” A blastocyst is defined by two layers of cells: one that becomes the fetus and one that becomes the placenta.
Finally, doctors either freeze the blastocyst embryos for future use or transfer one or more of them into the patient’s uterus. Herein lies the main point of contention for IVF’s opponents: If frozen embryos are accidentally destroyed, does that constitute murder? That is, should embryos enjoy the same rights as people? 
In February 2024, many in the United States reacted with shock and dismay at the Alabama Supreme Court’s answer to these questions. The court ruled that the accidental destruction of human embryos at IVF clinics constitutes child murder, spurring fertility clinics to cease operations. Though state lawmakers responded with a measure that protects IVF providers from wrongful death lawsuits, many clinicians remained worried that the court’s ruling could expose them to legal repercussions. 
Their legal concerns stem from the fact that it is common for fertilized eggs not to survive: 45% of lab-grown IVF embryos die before becoming blastocysts, and plenty of others die in the female body even when naturally conceived. Furthermore, when couples are left with extra embryos after a successful round of IVF, it is common practice for clinics to destroy them or donate them to research. 
Given that IVF enjoys widespread support throughout the United States, it is unlikely that concerns about fetal personhood will lead to significant bans, especially as the GOP remains split on the issue. However, fertility experts worry that increasing the legal risk of IVF could impose restrictions on the use of embryos, making the process less efficient and requiring couples to undergo more cycles. IVF is already cost-prohibitive, with one cycle costing up to $24,000 and lacking insurance coverage in 31 U.S. states.
In the United States, the debate about IVF is far from settled — as recently as September 2024, Senate Republicans blocked a bill that would have enacted federal protections for IVF, contradicting Trump’s assertion that he wants to expand access to the treatment. Nevertheless, the science underpinning the safety and efficacy of IVF remains steadfast, as does the fact that thousands of families could not have exercised their right to bear children in its absence.

Will AI Agents Replace Human Subjects in Social Science?

In a groundbreaking study that reads like a blueprint for the future of social science, researchers have unveiled an innovative use of artificial intelligence to simulate the behavior, attitudes, and decisions of over 1,000 people.

Drawing from detailed qualitative interviews, these “generative agents” replicated participants’ responses with uncanny accuracy, raising the question of whether AI might one day render human recruitment for psychological and social science research obsolete.

By embedding the findings of extensive interviews into a large language model, the researchers created AI-driven agents capable of emulating the responses of real people to various surveys and experiments.

These agents mirrored the responses of the 1,052 human participants with an accuracy of 85%.

One example is their answers to the General Social Survey (GSS), a widely used sociological survey that assesses attitudes and beliefs. Notably, 85% exceeds the consistency of human participants themselves when they retake the same tests several weeks apart.

The study, conducted by a consortium of scholars from Stanford, Northwestern, Google DeepMind, and the University of Washington, introduces a new paradigm for human behavioral simulation. It was published on Cornell University’s arXiv.org, an open-access repository for academic research, on November 15, 2024. ArXiv is widely used by researchers in fields like computer science, physics, mathematics, and the social sciences to share preprints of work that has not yet undergone peer review.

The Promise of Simulated Research

For decades, social scientists have relied on labor-intensive methods to recruit diverse participants, administer surveys, and run experiments. While these traditional approaches provide valuable insights, they also come with significant costs and logistical hurdles.

By offering a scalable, ethical alternative, generative agents could become a powerful tool for exploring human behavior on a massive scale.

Imagine testing public health messages, economic policies, marketing campaigns, or educational programs across thousands of simulated people representing diverse demographic groups, all without scheduling a single in-person session or paying hefty participation fees.

The authors describe this as creating “a laboratory for researchers to test a broad set of interventions and theories.”

Revolutionizing Psychology and Personality Research

Psychology and personality research have long relied on painstaking methods to gather data from human participants. Studies often involve surveys, interviews, or experiments conducted in laboratory or virtual settings, requiring significant investments of time, labor, and money.

These tasks typically involve many researchers and assistants. Participants, in turn, must dedicate their time, often over multiple sessions, leading to logistical challenges and high costs for compensation.

Participants also often receive compensation, and some studies offer additional payments for follow-up sessions or incentives for economic games. Including wages for researchers, software costs, and institutional overhead, a study of this scale could easily run into six figures and take months — or even years — to complete.

Using AI-driven generative agents offers a transformative alternative. Instead of recruiting and surveying people, researchers could program these agents with data from previous interviews or personality assessments.

These agents, trained on data from tools like the Big Five Inventory or General Social Survey, can simulate responses to hypothetical scenarios, yielding insights that closely mirror human behaviors.

By eliminating the need to recruit participants or administer surveys, a study replicating the scope of traditional research could be completed in days instead of months.

Researchers could simulate additional scenarios or experiments without incurring significant extra effort or expense.

Beyond the immediate savings, this approach opens new doors for smaller research teams or institutions with limited funding, enabling them to conduct large-scale studies that were previously out of reach.

By reshaping the logistical and economic landscape of social and psychological research, generative agents could significantly accelerate the pace of discovery in understanding human behavior and personality.

Methodology

The study recruited a sample of 1,052 U.S. participants via Bovitz, a study recruitment firm. They were diverse in terms of age, region, education, ethnicity, gender, income, political ideology, and sexual identity to represent the broader population. Ages ranged from 18 to 84, with a mean of 48 years.

The participants completed two-hour voice interviews conducted by an AI interviewer, which asked follow-up questions based on participants’ responses, ensuring depth and richness of data.

The interviews explored personal history, values, and opinions on societal topics, capturing an average of 6,491 words per participant. These transcripts formed the knowledge base for creating the generative agents.
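To give a rough sense of how such an interview might be automated, here is a minimal Python sketch of an interviewer loop that generates follow-up questions from prior answers. The helper names (call_llm, get_spoken_reply) are hypothetical stand-ins for an LLM endpoint and a speech interface, not the study's actual system.

def run_interview(topics, get_spoken_reply, call_llm, followups_per_topic=2):
    """Conduct a semi-structured interview, generating follow-up questions."""
    transcript = []
    for topic in topics:
        question = f"Tell me about {topic}."
        for i in range(followups_per_topic + 1):
            answer = get_spoken_reply(question)  # participant's spoken reply, transcribed
            transcript.append((question, answer))
            if i < followups_per_topic:
                # Ask the model for one short follow-up grounded in the last exchange.
                question = call_llm(
                    "You are an interviewer. Given this exchange, ask one short "
                    f"follow-up question.\nQ: {question}\nA: {answer}\nFollow-up:"
                )
    return transcript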

To evaluate the AI agents, the human participants completed the General Social Survey (GSS), the Big Five Personality Inventory (BFI-44), and behavioral economic games such as the Dictator Game and the Trust Game.

Two weeks after the initial interview, participants retook the surveys and experiments to provide a benchmark for their internal consistency.

The transcripts of the participant interviews were embedded into a large language model to create the AI agents, and these agents were evaluated by comparing their responses to human responses.
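As a rough illustration of that pipeline, the sketch below conditions a language model on a participant's transcript and asks it to answer a single survey item. The function names (simulate_answer, call_llm) are hypothetical placeholders rather than the authors' actual code.

def simulate_answer(interview_transcript, question, options, call_llm):
    """Ask an LLM to answer a survey question the way the interviewed person would."""
    prompt = (
        "Below is an interview transcript with a study participant.\n\n"
        f"{interview_transcript}\n\n"
        "Answer the following survey question as this participant would, "
        "choosing exactly one option.\n"
        f"Question: {question}\n"
        f"Options: {', '.join(options)}\n"
        "Answer:"
    )
    reply = call_llm(prompt)  # any chat/completion endpoint can be plugged in here
    # Keep only a valid option; fall back to the first option if parsing fails.
    for option in options:
        if option.lower() in reply.lower():
            return option
    return options[0]

Agreement between such simulated answers and the participants' real answers is what the 85% figure summarizes.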

But 85% is far from perfect, right?

While this may not seem perfect, it is remarkably close to human-like accuracy, particularly since human respondents themselves are inconsistent over time.

The researchers measured this inconsistency by asking participants to retake surveys and experiments two weeks after their initial responses. The human participants’ own replication accuracy — the rate at which they gave the same answers on both occasions — was 81% on average. This indicates that human responses naturally vary, influenced by factors like memory, mood, and context.

In other words, 85% accuracy effectively nears the ceiling of what could be expected from humans. As the researchers note, the generative agents “predict participants’ behavior and attitudes well, especially when compared to participants’ own rate of internal consistency.”
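For a concrete sense of that comparison, here is a small illustrative calculation of agent accuracy relative to a participant's own retest consistency; the responses and helper function below are hypothetical, not data from the study.

def agreement_rate(answers_a, answers_b):
    """Fraction of items on which two sets of answers match."""
    matches = sum(a == b for a, b in zip(answers_a, answers_b))
    return matches / len(answers_a)

# Hypothetical responses from one participant across five survey items.
human_week_0 = ["agree", "no", "yes", "often", "liberal"]    # initial session
human_week_2 = ["agree", "no", "yes", "rarely", "liberal"]   # retest two weeks later
agent        = ["agree", "no", "yes", "often", "moderate"]   # generative agent

raw_accuracy     = agreement_rate(agent, human_week_0)         # 0.8 in this toy example
self_consistency = agreement_rate(human_week_2, human_week_0)  # 0.8 as well
# Dividing by the participant's own consistency shows how close the agent comes
# to the practical ceiling set by human retest variability.
normalized_accuracy = raw_accuracy / self_consistency          # 1.0 here
print(raw_accuracy, self_consistency, normalized_accuracy)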

So while improvements are still possible, the agents are already effective enough to simulate behaviors and attitudes with a level of fidelity that mirrors real-world variability in human responses.

A New Era of Accelerated Social Science Discovery

This new methodology allows for rapid hypothesis testing, simultaneous exploration of multiple research questions, and instant availability of results, drastically shortening the time from concept to insight.

Meta-analyses, traditionally time-intensive, could become standard practice, allowing researchers to validate findings across large datasets quickly and systematically.

By testing complex interactions, exploring diverse scenarios, and developing personalized theories, the field could address long-standing challenges like the reproducibility crisis while advancing ethical and policy interventions at scale.

“Human behavioral simulation—general-purpose computational agents that replicate human behavior across domains—could enable broad applications in policymaking and social science,” the researchers write.

Study details:

Title: “Generative Agent Simulations of 1,000 People”

Date Submitted: 15 Nov 2024

Authors: Joon Sung Park (Stanford University), Carolyn Q. Zou (Stanford University/Northwestern University), Aaron Shaw (Northwestern University), Benjamin Mako Hill (University of Washington), Carrie Cai (Google DeepMind), Meredith Ringel Morris (Google DeepMind), Robb Willer (Stanford University), Percy Liang (Stanford University), Michael S. Bernstein (Stanford University)

Link: https://arxiv.org/abs/2411.10109

President Ramaphosa recognises small business entrepreneurs

JOHANNESBURG – President Cyril Ramaphosa has applauded small businesses that have contributed to growing the country’s economy. He delivered the keynote address at a National Presidential Awards ceremony, an event aimed at recognising outstanding Micro, Small and Medium Enterprises and co-operatives that drive growth and transformation across the nation. Ramaphosa said a well-regulated framework is needed for entrepreneurs to run successful businesses, and added that government has embarked on structural reforms to address the issue.

‘Science is a Drag’ fuses drag, science and humor at Oakland’s Chabot Center

The Chabot Space and Science Center in Oakland is holding “Science is a Drag,” where attendees can celebrate the fusion of science, humor and drag on Friday.

Drag superstar VERA is hosting the event at a time when many places, like the California Academy of Sciences, are embracing the science meets drag combination.

VERA said their favorite part of the science drag night is interacting with the audience.

“It’s education and entertainment combined. I learn things, the audience learns things and it’s just a pure good time,” VERA said. “It’s going to be a night packed with drag performances and science experiments, there’s going to be a plasma ball, colored flames, the list goes on.”

VERA said to expect drag numbers interspersed with science experiments and space exploration.

The event, which is open to everyone 21 years of age or older, will be held at the Chabot Space and Science Center on Friday.

Tickets can be purchased at the door or on Chabot’s site at https://chabotspace.org/calendar/science-is-a-drag/. Doors open at 6:45 p.m., and the show runs from 7:30 p.m. to 9 p.m.

Long An’s 2nd Culture, Sports, Tourism Week to kick off in late November

The Mekong Delta province of Long An will host the second Culture, Sports, and Tourism Week 2024 from November 28 to December 4, according to the provincial People’s Committee.

The 2nd Long An Culture, Sports, and Tourism Week will take place during the flood season and feature typical local cuisine. (Photo: VNA)

This week-long celebration will offer a wide range of activities, giving visitors unique experiences of Long An’s culture, people, and natural beauty.

The event will kick off with an opening ceremony themed “Long An – Aspiration of the Vam River” on November 28, featuring vibrant art performances promoting the province’s culture and image. A special highlight will be a spectacular drone light show. There will be food courts introducing traditional dishes, with stalls displaying specialties of Long An and neighbouring provinces.

The event will also comprise traditional music and dance shows featuring hundreds of performers from leading art troupes of Long An and other provinces in the Mekong Delta. Folk games, cooking contests and sports competitions will also be held.

A number of sports events will take place during the week, such as a marathon in the Waterpoint Ben Luc urban area in Ben Luc district, the Long An Open Bodybuilding Championship, martial arts performances, and the first-ever three-plank boat race on the Vam Co River, an iconic natural wonder in the province.

Other events will include the 2024 Long An Investment Promotion Conference and a forum on promoting tourism and One Commune, One Product (OCOP) products between Ho Chi Minh City and the Mekong Delta, providing opportunities for Long An to build partnerships, foster regional cooperation, and attract investment in industry, trade, and tourism.

Nguyen Thanh Thanh, director of the provincial Department of Culture, Sports and Tourism, noted that tourist arrivals in the province increased by 60% year-on-year during its first Culture, Sports and Tourism Week last year. The province expects to welcome 1 million visitors during this year’s edition, he said.

As a gateway to the Mekong Delta region that neighbours Ho Chi Minh City, the province has great potential for developing distinctive tourism products, such as exploring the lives of local people living along rivers, orchards, craft villages, and historical and cultural heritage sites. Long An has a rich history and traditions, with 21 national-level historical and cultural heritage sites and 105 provincial-level ones. It also boasts the Tan Lap Floating Village, the Endless Field Tourist Site, the My Quynh Zoo and Chavi Garden, which offer visitors the river scenery and typical culture of the Mekong Delta region.

Regarding sports tourism, Long An has two golf courses meeting international standards in Duc Hoa and Duc Hue districts; in the Mekong Delta, only Phu Quoc and Long An have this advantage.

In the first nine months of this year, Long An received more than 1.1 million visitors, up 64% compared to the same period last year. In 2025, it hopes to attract 2.5 million domestic visitors and 30,000 international tourists, with annual growth of 30% and revenues of more than 2 trillion VND (78.5 million USD).

Senators Hold Hearing on AI Fraud and Scams, Vow to Pass AI Bills in Coming Weeks — AI: The Washington Report

On November 19, the Senate Commerce Committee’s Subcommittee on Consumer Protection, Product Safety, and Data Security convened a hearing on “Protecting Consumers from Artificial Intelligence Enabled Fraud and Scams.”
The witnesses at the hearing testified about how AI technologies enable fraud and scams, while Senators from both parties asked questions that highlighted the need for federal laws to crack down on such activity.
The hearing comes as Senators try to pass AI legislation during the lame-duck session of this Congress. Subcommittee Chair Hickenlooper (D-CO) specifically discussed five bipartisan AI bills during the hearing that he vowed to get “across the finish line and passed into law in the coming weeks.”  

On November 19, the Senate Commerce Committee’s Subcommittee on Consumer Protection, Product Safety, and Data Security convened a hearing on “Protecting Consumers from Artificial Intelligence Enabled Fraud and Scams.” The Subcommittee heard from witnesses who testified about how AI technologies and tools enable fraud and scams, while Senators from both parties asked questions that highlighted the need for federal laws to crack down on such activity.
The hearing comes as Senators try to pass AI legislation during the lame-duck session of Congress. Subcommittee Chair Hickenlooper (D-CO) specifically discussed five AI bills during the hearing, which he noted have all received bipartisan support and vowed to get “across the finish line and passed into law in the coming weeks.”
Opening Statements
In his opening remarks, Subcommittee Chair Hickenlooper acknowledged the many benefits that AI brings but noted that “for all those benefits, we have to mitigate and anticipate the concurrent risks that this technology brings along with it.” To this end, he specifically discussed five AI bills that he believes have bipartisan support, which we cover later in this newsletter, and vowed to get them “across the finish line and passed into law in the coming weeks.”
Subcommittee Ranking Member Marsha Blackburn (R-TN) focused on AI scams and fraud and their now widespread impact. She noted that the FTC Consumer Sentinel Network Data Book found that “scams increased a billion dollars over the last 12 months, to $10 billion… And of course we know AI is what is driving a lot of this.” To combat the fraud and scams driven in part by AI, Senator Blackburn called for an “encompassing” and “comprehensive” policy approach, including “an actual online privacy standard, which we’ve never passed.”
The Chair and Ranking Member’s statements both highlight that AI-enabled scams and other harms are a bipartisan point of concern.
Expert Testimony: Shared Concerns and Solutions
The following experts testified at the hearing:

Dr. Hany Farid, an academic who studies deepfakes and other AI-generated or digitally manipulated images
Justin Brookman, the Director of Technology Policy at Consumer Reports
Mounir Ibrahim, the Chief Communications Officer and Head of Public Policy at Truepic, a company that provides digital authenticity technology
Dorota Mani, the mother of a victim of an AI-generated deepfake  

The experts’ testimonies and responses to the Senators’ questions focused on four main themes:

Content Provenance. Mr. Ibrahim as well as the other panelists noted that content provenance – metadata that is attached to content that reveals whether it is AI-generated or not – is one of the most promising solutions we currently have to help people differentiate AI-generated content from real content. Senator Hickenlooper asked Mr. Ibrahim about the incentives that exist to scale content provenance technologies and make them widely available and used. Mr. Ibrahim responded that “there have not been the financial incentives…or consequences for these platforms to better protect or at least give more transparency to their consumers.”  
Comprehensive Privacy Laws. Both the Subcommittee Chair and Ranking Member pointed to the need for comprehensive data privacy laws in their remarks. Dr. Farid stated, “It should be criminal that we don’t have a data privacy law in this country.”  
Holding Creators of AI Content and AI Tools Accountable. Several Senators, as well as the panelists, discussed the need to shift the burden from consumers to companies that produce AI content and AI tools to ensure that their content or tools are not misused or harming people. “If you’re an AI company,” testified Dr. Farid, “and you’re allowing anybody to clone anybody’s voice by simply clicking a box that says, ‘I have permission to use their voice,’ you’re the one who’s on the hook for this.” Senator Hickenlooper highlighted that the AI Research, Innovation, and Accountability Act, as we’ve covered, would create “a framework to hold AI developers accountable” for their content and AI tools.  
Stronger Enforcement. Mr. Brookman noted that while “fraud and scams are already illegal,” “because of insufficient enforcement – or consequences when caught – there is not enough deterrence against potential scammers.” He called on Congress to grant the FTC additional resources to hire more staff and expand its legal powers “to allow the agency to keep pace with the threats that plague the modern economy.”  

Legislation Discussed at the Hearing
Five AI bills, which we’ve covered before, were discussed during the hearing. None of these bills would constitute the comprehensive data privacy laws that the Senators and panelists called for, but they would lay the groundwork for increased transparency for AI developers and also protect consumers from deepfakes and other AI-generated harmful content:

The Future of Artificial Intelligence Innovation Act of 2024  

The Future of AI Innovation Act would permanently establish the Artificial Intelligence Safety Institute, which would create voluntary standards and guidelines for AI and conduct research on AI model safety and related issues. The Institute would also create a test program for vendors of foundation models to test their models “across a range of modalities.” The bill would also direct the National Institute of Standards and Technology (NIST) and the Department of Energy to establish a testbed for the discovery of new materials for AI systems.  

Validation and Evaluation for Trustworthy AI Act  

The VET Artificial Intelligence Act would require the director of the NIST to develop “voluntary guidelines and specifications for internal” and external artificial intelligence assurance, which is the impartial evaluation of an AI model by a third party to identify errors in the functioning and testing of the model and verify claims about the model’s functionality.  

Artificial Intelligence Research, Innovation, and Accountability Act  

Regarding research and innovation, the Artificial Intelligence Research, Innovation, and Accountability Act (AIRIA) would direct the Secretary of Commerce to conduct research on “content provenance and authentication for human and AI-generated works” and the Comptroller General to study “statutory, regulatory, and policy barriers to the use of AI” within the federal government. Regarding accountability, the bill would also create standardized definitions of common AI terms; transparency requirements for AI use, including disclosures that content is AI-generated and disclosure and reporting obligations for high-impact AI systems, including those involved in making decisions related to housing, employment, credit, education, health care, or insurance, among other requirements.  

The COPIED Act  

The Content Origin Protection and Integrity from Edited and Deepfaked Media Act (The COPIED Act) would aim to address the rise of deepfakes. The bill would direct federal agencies to develop standards for AI-generated content detection, establish AI disclosure requirements for developers and deployers of AI systems, and prohibit the unauthorized use of copyrighted content to train AI models.  

TAKE IT DOWN Act  

The Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act, known as the TAKE IT DOWN Act, would criminalize the publication of non-consensual intimate imagery, including certain AI-generated deepfakes, and also require social media platforms to establish processes by which they remove such content from their platforms.  

Lame-Duck Period: Potential AI Activity
Whether there is enough momentum to get AI legislation across the finish line during the lame-duck Congress is an open question. As we’ve noted, lame-duck periods are complicated, particularly where control of the Senate will shift in the next Congress. The final weeks of Democratic control may incentivize Democratic Senators to act on AI, but AI legislation will compete with other legislative priorities. Furthermore, while all of the AI bills discussed during the Subcommittee hearing have bipartisan support, Republicans may want to wait until the next Congress begins with them in the majority in both chambers to address AI. While any prospects for passing these bills remain uncertain, the three weeks remaining on the Senate’s calendar this Congress will bring clarity.
We will continue to monitor, analyze, and issue reports on AI legislation developments in the lame-duck Congress and the 119th Congress.
Matthew Tikhonovsky also contributed to this article.