In this groundbreaking episode, the collective intelligence of LawPod is pitted against the generative intelligence of ChatGPT to explore the potential impact of artificial intelligence on the study and practice of law and the world at large. We also probe the metaphysical and explore the legal and ethical considerations of generative AI in a wide-ranging and fascinating conversation with our most famous guest to date. Sorry George Monbiot!
Thanks to the whole LawPod team for their collaborative work on this episode and a special thanks to Peter Lockhart for recording a special introduction. Peter’s is the only human voice that you will briefly hear on the episode. The other voices, the collective LawPod voice and the voice for ChatGPT, were selected from the software we use to edit podcasts, Descript.
Descript, in their own words, “is a collaborative audio/video editor that works like a doc. It includes transcription, a screen recorder, publishing, and some mind-bendingly useful AI tools.”
We have utilised the software’s AI Overdub functionality to assign generated voices to the conversation’s participants, we hope to good effect.
ChatGPT’s responses to our questions have not been altered in any way and appear exactly as they were given. There have been minor edits to the sequencing of the questions, and some minor edits to the timing of answers, most notably the addition of a few milliseconds between a question finishing and an answer beginning, to allow for a more considered flow.
As Peter says in his introduction, we are proud of this episode. Please let us know what you think.
Guidance For Students
The response from the Higher Education community to ChatGPT and other generative models has been timely and measured and recent guidance from Queen’s University Belfast outlines that “we need to focus on responsible usage by staff and students and associated ethical considerations to ensure the safe and productive deployment of this technology.”
This episode sets out ways in which AI could be used, but under no circumstances should students endeavour to generate content that is subsequently used in an assessment unless otherwise instructed. Listeners are encouraged to think critically about the responses produced by the AI in the episode, particularly in light of academic ethics and integrity standards, rather than to accept them as uncontentious facts.
Pay close attention to the “Procedure for Dealing with Academic Offences” section of the aforementioned document and the proposed amendment to the Contract Cheating clause. Academic offences are treated extremely seriously by the University, and penalties for what would be considered a major offence can include suspension or withdrawal.
Quality Assurance Agency Guidance
AI-generated cover art for the episode, created using Midjourney