Open Publishing in the Age of AI

I had my first serendipitous experience with AI in 2018, when I was a graduate student in musicology. One of the composition faculty asked me to stand in as a test performer in an AI opera he had written, in which performers learned the general structure of the piece but relied on live staging instructions: spoken word delivered through earpieces and textual phrases on screens that both the audience and performers could see. I now understand that this opera was likely generated by a small AI model fed with information from the composer and librettist. This experience, which brought my musical skills together with composition and technology, got me completely hooked!

During that same time, I was also experimenting with technology and learning the basics of social media communication, blogging, public writing, and web development on platforms like WordPress through Project Vox, a digital humanities project that publishes open educational resources on women philosophers. That training taught me to think about audience, user experience, and reader comprehension. I started wondering: Looking at our analytics data, how can I tell who is engaging with the Project Vox website? How can I translate this tricky philosophical idea into a blog post for undergraduates? How can I enhance the metadata on the back end of that blog post so that it actually reaches high schoolers and their teachers?

All of these questions and a little tech savvy took me to the National Humanities Center, where I had an internship as a graduate student. I knew they were interested in the intersection of technology and the humanities, for an audience of scholars and teachers. When I started there, I took on two roles focused on academic publishing and pedagogy. As we thought about publishing humanities resources, I kept wondering: At a time when humanities courses are being pushed aside in favor of STEM fields, how can the humanities (in collaboration with the sciences) demonstrate that we have something to offer?

My first major project bridging the humanities and sciences was on the topic of AI. I led a Responsible AI curriculum development project with Google, which brought together 15 institutions to build innovative courses connecting humanities methodologies to computer science curricula. The project was a success: about 2,000 students took the courses that academic year, and in course surveys students told me directly that the courses felt essential to their career trajectories. It also helped me secure funding for a second Google project that connected Minority Serving Institutions and community colleges with other local universities to create courses taught by faculty from both the humanities and computer science on both campuses. In short, the paired faculty teach the same course on each campus, share resources, and eventually plan for dual enrollment and/or transfer credit.

Second, I was part of a team that revived our Open Educational Resources library of about 10,000 resources for K-12 teachers. There I worked closely with a team of librarians to relearn what we had been taught about metadata. It was no longer just Dublin Core, but a shift to long-tail keywords, schema.org structured data, and easily readable content on the front end of a webpage. We rethought how we published our pedagogical resources to make sure they landed at the top of every teacher's search.
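
As a concrete illustration of that shift, here's a minimal, hypothetical sketch of schema.org markup for a single lesson page; the resource name, URL, and values are invented, not records from our library. A block like this sits invisibly in the page source while the front end stays readable for teachers.

```html
<!-- Minimal, hypothetical schema.org markup for one OER lesson page.
     All names, URLs, and values are illustrative. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LearningResource",
  "name": "Close Reading a Primary Source: The Gettysburg Address",
  "description": "A one-class lesson plan guiding high school students through close reading of a primary source.",
  "learningResourceType": "LessonPlan",
  "educationalLevel": "High school",
  "audience": {
    "@type": "EducationalAudience",
    "educationalRole": "teacher"
  },
  "keywords": "primary source lesson plan, close reading, Gettysburg Address",
  "license": "https://creativecommons.org/licenses/by/4.0/",
  "url": "https://example.org/resources/close-reading-gettysburg"
}
</script>
```

Long-tail keywords like "primary source lesson plan" live in both the visible prose and this machine-readable layer, which is how a resource climbs to the top of a teacher's search.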

Both of these experiences made me want to invest fully in learning more about the technology we were so focused on using in digital humanities projects like Project Vox. I wanted to dive deeper! That career nudge led me to my current role at the Data Science and AI Academy at NC State University. The role of our academy in the Provost's Office is to transform the institution into an AI university, one where students from all 12 colleges can access AI training, both in technical courses and in ethics. While you may not agree with this approach, NC State believes it's the university's responsibility to ensure that all students are prepared for a new job market that requires a clear understanding of AI, whether that means navigating HR systems in hiring, completing daily workplace tasks, or making value judgments about the efficacy of AI search results.

Most relevant to this panel is my current research project funded by Google.org with two lawyers, Will Cross (Director of the Open Knowledge Center at North Carolina State University) and Meredith Jacob (Director of the Program on Information Justice and Intellectual Property's Project on Copyright and Open Licensing at American University Washington College of Law), and Sarah Harris, a librarian and our excellent project manager. Our goal is to provide educators with clear guidance and reliable training materials as they work to understand legal and policy issues in AI, so they can critically engage with (or, as appropriate, reject) AI as a pedagogical tool. We began by conducting interviews with faculty to better understand what's going on in classrooms across the country: news coverage tends to be sensational and publishing can lag, and we want to know what's happening now. This spring and summer we are developing common use cases and responding with a report and roadmap that will identify key areas for future development of responsible AI curriculum resources, so that we and other scholars can begin building the materials. Finally, we will convene experts in the fall to review the work and recommend how to move the project forward. One pitfall we've seen with lengthy legal guidance on fair use and copyright is that educators just don't have time to read it, so we're testing visualizations that are user friendly and can lead our audience to specific parts of our written guidance.

I want to end with my takeaways from working on all of these AI projects with faculty across the humanities and law:

  • This is the moment to work together. Working with 15 different campuses across the country, from large R1s to Minority Serving Institutions, I found that the most successful responsible AI courses were those co-taught by faculty from computer science and the humanities. I know co-teaching is costly, but it's important to have both sets of expertise in the room and to model a complex discussion for your students.

  • Build your digital humanities projects with AI in mind. Use your digital products to your advantage: Google's AI still prioritizes content credibility, authority, and structured data. You are all experts, credible and authoritative, with degrees, well-crafted prose, and institutions that can boost your content and make it more discoverable. I highly recommend looking into schema.org (see the sketch after this list)!

  • There is a lot for philosophers to say. This is the moment to flex your ethics training and research. Even if you want to be a complete skeptic of AI, your undergraduates and graduate students are using it whether they admit it or not. They may not like it as a tool, but they should understand it, and some of that understanding can happen in your philosophy classrooms.
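
To make the schema.org recommendation in the second takeaway concrete, here's a minimal sketch of what structured data for a digital humanities article page might look like. Everything in it is a placeholder, author, institution, date, and identifier included; the point is the credibility signals (credentials, affiliation, a persistent identifier) that AI-driven search can read.

```html
<!-- Hypothetical schema.org markup for a DH article page.
     All names, URLs, and identifiers below are placeholders. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ScholarlyArticle",
  "headline": "Margaret Cavendish on Matter and Motion",
  "author": {
    "@type": "Person",
    "name": "Jane Scholar",
    "honorificPrefix": "Dr.",
    "affiliation": {
      "@type": "CollegeOrUniversity",
      "name": "Example State University"
    },
    "sameAs": "https://orcid.org/0000-0000-0000-0000"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Example Digital Humanities Project"
  },
  "datePublished": "2025-03-01",
  "about": "early modern women philosophers"
}
</script>
```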

Everyone is still learning. One of the most important things I learned from working with so many faculty is that no one is an AI expert in everything. We're all trained to be experts in our own fields, but with the ever-changing technical landscape, there's always something new to learn or a new way to think about a technical question. This is the part of working in responsible AI that I like most: it's humbling to even the most confident scholar, and it practically begs for collaboration and interdisciplinary discussion.
