Scholarship
Keeping up with the pace of news about AI, its tools, and the issues surrounding the technology is a constant challenge. Traditional publishing models for scholarly communication cannot keep up with the rapid changes occurring in the everyday use of the technology, especially by students and teachers.
The resources featured below are authoritative and trustworthy, and they have come up repeatedly in our conversations with leaders in the field and with educators interested in learning more about the current landscape and the development of these tools.
This bibliography is not comprehensive and will continue to grow as more research and articles are published. If you have a resource that should be included in this list of scholarship on AI, please email lawandthefuture@gmail.com.
Articles
Anthropic. “Constitutional AI: Harmlessness from AI Feedback.” December 15, 2022.
This technical paper from Anthropic's research team introduces Constitutional AI (CAI), a method for training AI systems to be helpful, harmless, and honest without relying extensively on human feedback for safety training. The paper demonstrates how a model can be trained to follow a set of principles, or "constitution," and to use AI-generated feedback to critique and revise its own outputs. The work represents a significant advance in AI safety research by showing that AI feedback, rather than large volumes of human labels, can be used to produce safer and more reliable systems at scale.
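To make the critique-and-revision idea concrete, here is a minimal, illustrative sketch of the supervised phase described in the paper. It is an outline under stated assumptions, not Anthropic's implementation: generate() stands in for any language-model completion call, and the single constitutional principle shown is a hypothetical example.

    # Illustrative sketch of Constitutional AI's supervised critique-and-revision
    # loop. generate() is a placeholder for a language-model call; the principle
    # text below is a hypothetical example, not Anthropic's actual constitution.

    PRINCIPLE = "Choose the response that is most helpful, honest, and harmless."

    def generate(prompt: str) -> str:
        """Placeholder for a language-model completion (API or local model)."""
        raise NotImplementedError

    def critique_and_revise(user_prompt: str, principle: str = PRINCIPLE) -> str:
        draft = generate(user_prompt)
        critique = generate(
            f"Critique the response below according to this principle: {principle}\n"
            f"Prompt: {user_prompt}\nResponse: {draft}"
        )
        revision = generate(
            f"Rewrite the response to address the critique.\n"
            f"Prompt: {user_prompt}\nResponse: {draft}\nCritique: {critique}"
        )
        # Revised responses are collected as fine-tuning data; a later phase of
        # the method uses AI preference labels for reinforcement learning (RLAIF).
        return revision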
Calo, Ryan. “Artificial Intelligence Policy: A Primer and Roadmap.” UC Davis Law Review 51 (December 2017): 399-435.
This foundational legal analysis by robotics law expert Ryan Calo provides a comprehensive framework for understanding the unique policy challenges posed by AI technologies. Calo, a professor at the University of Washington School of Law and co-director of the Tech Policy Lab, argues that AI presents novel regulatory challenges that existing legal frameworks are inadequately equipped to address. The work identifies key characteristics of AI systems that complicate traditional approaches to technology governance. Calo's roadmap has become influential in legal scholarship for its systematic approach to AI policy development and its emphasis on the need for adaptive regulatory mechanisms.
Cooper, A. Feder, et al. Report of the 1st Workshop on Generative AI and Law. 2023. https://arxiv.org/abs/2311.06477.
This collaborative report emerges from the first interdisciplinary workshop bringing together legal scholars, computer scientists, and policy experts to examine the intersection of generative AI and law. Led by A. Feder Cooper along with contributors from major universities and technology companies, the report synthesizes discussions on critical legal questions raised by generative AI systems such as large language models. The work addresses key areas including intellectual property implications, liability frameworks, and regulatory approaches for generative AI technologies. It represents an important early effort to establish scholarly dialogue between the legal and technical communities on emerging AI governance challenges.
Craig, Carys J. “The AI-Copyright Trap.” Osgoode Legal Studies Research Paper, no. 4905118 (July 15, 2024): 1-29.
This critical legal analysis by intellectual property scholar Carys Craig examines how AI technologies challenge traditional copyright frameworks and potentially undermine the foundational purposes of copyright law. Craig, a professor at Osgoode Hall Law School at York University and expert in copyright theory, argues that attempts to extend copyright protection to AI-generated works create a "trap" that could distort the balance between creators' rights and public interest. The work explores how AI systems trained on copyrighted materials complicate questions of authorship, originality, and fair use, while warning against copyright expansions that could stifle innovation and public access to information.
Lemley, Mark A., and Peter Henderson. “The Mirage of Artificial Intelligence Terms of Use Restrictions.” Princeton University Program in Law & Public Affairs Research Paper, no. 2025-04 (December 9, 2024): 1327-87.
This legal analysis by intellectual property scholar Mark Lemley and AI researcher Peter Henderson challenges the effectiveness and enforceability of contractual restrictions that AI companies place on the use of their systems. Lemley, a professor at Stanford Law School and co-director of the Stanford Program in Law, Science & Technology, collaborates with Henderson, a computer science researcher, to examine how terms of service agreements attempt to control AI system usage. The work argues that many such restrictions are legally unenforceable and practically ineffective, functioning more as corporate posturing than meaningful governance mechanisms. The analysis has significant implications for understanding the limits of private ordering in AI governance and the need for regulatory approaches.
Nguyen, C. Thi. “Echo Chambers and Epistemic Bubbles.” Episteme 17, no. 2 (June 2020): 141-61.
This philosophical analysis by C. Thi Nguyen distinguishes between two related but distinct phenomena that can distort information environments and undermine democratic discourse. Nguyen, a philosopher at the University of Utah who specializes in epistemology and social philosophy, argues that epistemic bubbles involve the mere absence of contrary information, while echo chambers actively discredit outside sources and create self-reinforcing belief systems. The work provides critical conceptual clarity for understanding how different types of information isolation require different remedial approaches, with significant implications for addressing misinformation and polarization in digital media environments.
Surden, Harry. “Artificial Intelligence and Law: An Overview.” Georgia State University Law Review 35, no. 4 (Summer 2019): 1305-37.
This article by legal technology expert Harry Surden provides an introduction to the intersection of artificial intelligence and legal systems for legal practitioners and scholars. Surden, a professor at the University of Colorado Law School who specializes in AI and law, offers a balanced assessment of how AI technologies are being deployed within legal practice and the regulatory challenges they present. The work covers key applications including legal research, document review, and predictive analytics, while addressing concerns about algorithmic bias, accountability, and professional responsibility. This overview has served as an important primer for legal professionals seeking to understand AI's growing role in the legal system.
Books
Bender, Emily M., and Alex Hanna. The AI Con: How to Fight Big Tech's Hype and Create the Future We Want. New York: Harper, 2025.
This critical analysis by computational linguist Emily Bender and AI researcher Alex Hanna challenges the dominant narratives surrounding artificial intelligence promoted by major technology companies. Bender, a professor at the University of Washington known for her work on natural language processing ethics, and Hanna, a researcher focused on AI fairness and algorithmic accountability, expose how corporate hype obscures the real limitations and risks of current AI systems. The book provides tools for understanding AI marketing claims critically and advocates for more democratic approaches to shaping AI development that prioritize public benefit over corporate profit.
Boden, Margaret A. Artificial Intelligence: A Very Short Introduction. Oxford, UK: Oxford University Press, 2018.
This introduction by cognitive science professor and AI researcher Margaret Boden traces the history of artificial intelligence and reviews the philosophical and technological challenges the field raises. Boden was a pivotal figure in the philosophy of psychology and of artificial intelligence. As part of Oxford's Very Short Introductions series, the book offers a stimulating and accessible way into the subject.
Buckner, Cameron J. From Deep Learning to Rational Machines: What the History of Philosophy Can Teach Us About the Future of Artificial Intelligence. New York: Oxford University Press, 2023.
This philosophical examination by Cameron Buckner explores how historical philosophical insights can inform our understanding of contemporary AI development, particularly the transition from deep learning systems to more rational decision-making machines. Buckner, a philosopher specializing in cognitive science and AI ethics, draws connections between classical philosophical problems and modern AI challenges. The work bridges the gap between abstract philosophical theory and practical AI implementation, offering a historically grounded perspective on the future trajectory of artificial intelligence.
Chayka, Kyle. Filterworld: How Algorithms Flattened Culture. New York: Doubleday, 2024.
This cultural critique by journalist and critic Kyle Chayka examines how algorithmic recommendation systems have homogenized and diminished cultural diversity across digital platforms. Chayka, a staff writer at The New Yorker who covers technology and culture, argues that algorithms have created a "filterworld" where cultural content is increasingly standardized and predictable. The book explores the implications of algorithmic curation for creativity, taste-making, and cultural expression in the digital age.
Crawford, Kate. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press, 2021.
This comprehensive analysis by AI researcher Kate Crawford maps the hidden infrastructure, labor practices, and environmental impacts that underpin artificial intelligence systems. Crawford, a principal researcher at Microsoft Research and co-founder of the AI Now Institute, reveals the material foundations of AI, from mineral extraction to data center operations. The work challenges common narratives about AI by exposing its dependence on exploitative labor, environmental degradation, and geopolitical power structures.
D'Ignazio, Catherine, and Lauren F. Klein. Data Feminism. Cambridge, MA: The MIT Press, 2020.
This interdisciplinary examination by data scientists Catherine D'Ignazio and Lauren Klein applies feminist principles to critique and reimagine data science practices. Both authors are prominent figures in digital humanities and data ethics, offering a framework for understanding how power operates through data collection, analysis, and visualization. The book presents seven principles of data feminism aimed at creating more ethical, inclusive, and justice-oriented approaches to working with data.
Floridi, Luciano. The 4th Revolution: How the Infosphere is Reshaping Human Reality. Oxford, UK: Oxford University Press, 2014.
This philosophical treatise by information ethics expert Luciano Floridi argues that the digital revolution represents the fourth major shift in humanity's understanding of itself, following the Copernican, Darwinian, and Freudian revolutions. Floridi, a professor of philosophy and ethics of information at Oxford, introduces the concept of the "infosphere" to describe our increasingly digital existence. The work explores how information and communication technologies are fundamentally altering human identity, relationships, and moral frameworks.
Hao, Karen. Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. New York: Penguin Press, 2025.
This investigative account by technology journalist Karen Hao examines the rise of OpenAI under Sam Altman's leadership, exploring both the utopian promises and dystopian risks of the company's artificial intelligence developments. Hao, former senior AI editor at MIT Technology Review, provides an insider's perspective on the power dynamics, ethical dilemmas, and competitive pressures shaping one of the world's most influential AI organizations. The book offers critical insights into the concentration of AI power and its implications for society.
Hill, Kashmir. Your Face Belongs to Us: A Secretive Startup's Quest to End Privacy as We Know It. New York: Random House, 2023.
This investigative exposé by New York Times technology reporter Kashmir Hill reveals the story of Clearview AI and its controversial facial recognition technology that scraped billions of photos from the internet. Hill, known for her privacy and surveillance reporting, traces how the startup's technology has been adopted by law enforcement and private entities, fundamentally altering expectations of anonymity in public spaces. The book raises critical questions about consent, surveillance, and the commercialization of biometric data.
Merchant, Brian. Blood in the Machine: The Origins of the Rebellion Against Big Tech. New York: Little, Brown and Company, 2023.
This historical analysis by technology journalist Brian Merchant draws parallels between the 19th-century Luddite movement and contemporary resistance to Big Tech's labor displacement practices. Merchant, a longtime technology and labor reporter and former editor at VICE's Motherboard, reframes the Luddites not as anti-technology reactionaries but as early labor organizers fighting for workers' rights. The book offers lessons from history for understanding and responding to technology-driven economic disruption.
Mitchell, Melanie. Artificial Intelligence: A Guide for Thinking Humans. New York: Farrar, Straus and Giroux, 2019.
This accessible overview of AI by computer scientist Melanie Mitchell provides a balanced assessment of artificial intelligence's current capabilities and limitations for general readers. Mitchell, a professor at the Santa Fe Institute and expert in complex systems and AI, combines technical expertise with clear explanations to demystify AI technologies. The book addresses common misconceptions about AI while exploring the genuine challenges and opportunities presented by machine learning and related technologies.
Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press, 2018.
This groundbreaking study by information studies scholar Safiya Noble demonstrates how search engine algorithms perpetuate and amplify racial and gender biases, particularly affecting women of color. Noble, a professor at UCLA and co-director of the Center for Critical Internet Inquiry, provides empirical evidence of discriminatory search results and their real-world consequences. The work has been instrumental in establishing the field of algorithmic bias research and advocating for more equitable AI systems.
O'Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Broadway Books, 2017.
This influential critique by mathematician and data scientist Cathy O'Neil exposes how algorithmic decision-making systems can perpetuate and amplify social inequalities across education, employment, criminal justice, and finance. O'Neil, a former Wall Street quantitative analyst turned data science critic, identifies the characteristics that make algorithms particularly harmful: opacity, scale, and damaging feedback loops. The book has become a foundational text for understanding algorithmic accountability and the need for regulatory oversight.
Schellmann, Hilke. The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now. New York: Hachette Books, 2024.
This investigative examination by journalist Hilke Schellmann reveals how artificial intelligence systems are increasingly used to make employment decisions, often with limited transparency or accountability. Schellmann, an award-winning journalist and assistant professor at New York University, combines extensive research with personal narratives to show how AI hiring tools can perpetuate discrimination and undermine worker rights. The book provides practical guidance for job seekers and advocates for greater regulation of AI in employment contexts.
Vallor, Shannon. The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. New York: Oxford University Press, 2024.
This philosophical reflection by technology ethicist Shannon Vallor examines how artificial intelligence serves as a mirror for understanding human values, capabilities, and moral responsibilities. Vallor, a professor at the University of Edinburgh and expert in technology ethics, argues that AI development should be guided by humanistic principles rather than purely technical considerations. The work offers a framework for ensuring that AI systems enhance rather than diminish human flourishing and moral agency.
Wynn-Williams, Sarah. Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism. New York: Flatiron Books, 2025.
This memoir by former Facebook policy executive Sarah Wynn-Williams recounts her years inside the company and examines how institutional power and corporate greed can corrupt initially idealistic intentions, with harmful consequences for individuals and society. Wynn-Williams traces patterns of moral compromise and ethical drift within the organization, showing how well-intentioned initiatives can become vehicles for exploitation and inequality. The work serves as both a warning about the corrupting influence of unchecked power and a call for maintaining ethical standards in the face of institutional pressures.
Interesting People to Follow
Margaret Boden FBA - The British Academy
Cameron Buckner - The University of Florida
Casey Fiesler - University of Colorado Boulder
Sorelle Friedler - Haverford College
Kashmir Hill - The New York Times
Ethan Mollick - The Wharton School of the University of Pennsylvania
Matthew Sag - Emory University School of Law
Shannon Vallor - Edinburgh Futures Institute
Annette Vee - Computation & Writing Substack
Marc Watkins - Rhetorica Substack
Podcasts
Hard Fork - The New York Times
This tech-focused podcast by The New York Times provides in-depth analysis of AI developments and their societal implications through the perspectives of experienced technology journalists. Hosted by Kevin Roose, a technology columnist covering AI, automation, and digital culture, and Casey Newton, former senior editor at The Verge who covers social media and online platforms, the show offers informed commentary on AI policy, industry developments, and cultural impacts. The podcast regularly features interviews with AI researchers, policymakers, and industry leaders.
The Most Interesting Thing In A.I. - The Atlantic
This AI podcast from The Atlantic magazine examines the broader cultural, philosophical, and societal implications of AI development through the lens of thoughtful journalism and expert analysis. Produced by The Atlantic's editorial team, the podcast explores how AI intersects with democracy, creativity, labor, and human values. The show contributes to AI discourse by emphasizing humanistic perspectives on technological change and frequently features conversations with ethicists, researchers, and cultural critics.
The Neuron
This AI-focused podcast and its companion daily email newsletter provide updates and analysis on AI developments, research breakthroughs, and industry trends for professionals and enthusiasts following the rapidly evolving field. Created by Pete Huang and other AI industry analysts, The Neuron synthesizes technical research, business developments, and policy discussions, serving as a bridge between academic research, industry practice, and public understanding. The content emphasizes practical applications and implications of AI advances while attending to both technical accuracy and broader societal considerations.
404 Media Podcast - 404 Media
This podcast provides critical investigative reporting on the surveillance industry, data privacy violations, and the societal impacts of emerging technologies, including AI. Founded by Jason Koebler, Emanuel Maiberg, Samantha Cole, and Joseph Cox, 404 Media focuses on underreported stories about how technology affects ordinary people's lives. The podcast contributes to AI and technology discourse through rigorous investigative work that exposes corporate misconduct, government surveillance programs, and the human costs of technological implementation.
Projects
Training the Archive – Research Project at the Ludwig Forum, Aachen
Repository - Generative AI in Education Hub
The 2025 AI Index Report - Stanford HAI
Centre for Technomoral Futures
Center for AI and Digital Policy
Association for Computing Machinery
Arizona State University Artificial Intelligence
Webcasts & Presentations
Surden, Harry. “How GPT/ChatGPT Work - An Understandable Introduction to the Technology.” April 22, 2023.