
Technology Law

What role should government play in managing AI safety and mitigating risks from advanced artificial intelligence?

According to the discussion, governments should play a critical role when public safety is at risk from advanced AI, particularly digital superintelligence. Rishi Sunak emphasized that governments should develop capabilities to test AI models before they're released, with his Safety Institute working to protect the public from potential risks. Elon Musk agreed that while most software poses no public safety risk, advanced AI is different and requires government intervention to safeguard public interests. Both speakers highlighted the importance of external safety testing of AI models, with governments taking responsibility for managing the potential dangers of superintelligent systems.


Rishi Sunak

03:08 - 04:32

What's the current status of generative AI legislation in the United States?

Regulations around generative AI are increasing across the United States, with most current state legislation focused on two main areas: politics (addressing deepfakes) and non-consensual pornography. These are domains where there's general agreement about the need for regulation. A smaller number of states have enacted or proposed laws specifically protecting people's likenesses, including their images and voices. Notably, legislation was introduced last year that would provide protection for celebrities like Scarlett Johansson against unauthorized AI recreation of their likeness, addressing growing concerns about consent and image rights in the era of generative AI.


CBS News

02:59 - 03:29

How are social media companies fighting against online protections for children despite promising not to?

Social media companies have found a loophole by paying lobbyists to file lawsuits on their behalf. This is evident in Maryland where NetChoice, a lobbying group representing Meta, X, Snap, and Google, filed a suit against the state's Kids Code that limits data collection on minors and requires prioritizing children's well-being over commercial interests. What makes this particularly significant is that it comes just months after Meta explicitly promised not to fight such policies in court. By working through lobbying groups, companies can oppose child safety regulations while maintaining plausible deniability, effectively circumventing their public commitments to safeguard children on their platforms.


Philip DeFranco

23:01 - 24:33

What are the ethical concerns about using AI for predictive policing?

According to expert Carme Artigas, while police should utilize all available technology to prosecute crime, predictive policing raises fundamental ethical concerns. She emphasizes that AI should not be used in a predictive way that presumes guilt before evidence is found, as this contradicts the principle that individuals are innocent until proven guilty. Artigas cautions against reversing this core legal principle through predictive algorithms that might infringe on civil liberties. The proper approach is to use technology to enhance law enforcement capabilities while maintaining the presumption of innocence, rather than allowing AI to make preemptive judgments about potential criminal behavior.


Johnny Harris

02:50 - 03:05

What specific dangers does artificial intelligence pose to society according to Johnny Harris?

According to Johnny Harris, AI poses genuine dangers to our society, with the journalist claiming there's a "100%" threat to civilization. He references that lawmakers and regulators have significant concerns about this rapidly developing technology and its potential societal impacts, which is reflected in new laws being proposed globally. Harris indicates he has been deeply researching these regulatory frameworks, which provide insight into what specific threats officials are anticipating. He promises to explain these dangers in "the plainest terms possible," suggesting concrete scenarios rather than abstract risks. The clip frames AI's potential threats as serious enough to warrant careful examination and regulation.


Johnny Harris

00:42 - 01:09

Why should Character AI be sued?

Character AI should be sued because it recklessly launched an AI chatbot companion app without implementing necessary safety guardrails, particularly for minors and vulnerable users. The company proceeded with full knowledge of the risks its product posed, yet failed to institute any protective measures before bringing it to market. This negligence has already resulted in actual harm to users, underscoring the company's failure to fulfill its responsibility to protect vulnerable users, especially children, from the foreseeable risks of its technology.


Al Jazeera English

20:20 - 21:19
