- Convenor:
- Cari Hyde-Vaamonde (King’s College London)
- Format:
- Panel
- Sessions:
- Monday 6 June, -
Time zone: Europe/London
Short Abstract:
Involving AI in justice decisions represents a serious challenge, which will impact society as a whole. This panel includes voices from law, technology and the arts in an attempt to break down barriers between disciplines, focusing on the impact of AI on the way in which people experience justice.
Long Abstract:
Laws can seem impersonal, designed to judge on the facts rather than the individual. If so, is judging mechanical - something that can be automated? When AI is used to examine how justice is done by humans, it throws up challenging examples of bias and inconsistency, set against a backdrop of increasing delays in justice. Perhaps it would be wrong not to bring in the machines? To what extent can algorithms help, or is there an ethical barrier to involving them where human life is at stake? In examining these questions, the focus cannot rest only on the risk of algorithmic bias or the technical accuracy of the algorithms themselves. Already, the impact of AI on law and legal decisions is far-reaching, producing deep institutional effects and shaping the personal experience of justice. In an area prone to polarisation, we bring together a panel with diverse skills to explore the nuanced issues raised by the integration of AI into law and justice. How helpful is human discretion to justice? Can a common language be developed to realign the aims of the state, the technical expert, the individual and society?
In 2020 a distressed defendant's sobs were simply 'muted' in an online hearing by a UK judge, while in Singapore a man was sentenced to death over a Zoom call (Gibbs 2020; Ratcliffe 2020). Is anything less than in-person justice, in-human?
Accepted papers:
Session 1 Monday 6 June, 2022, -
Paper short abstract:
In Brazil, the use of AI in justice has recently increased. Professionals from different fields are now debating these systems and effectively making material changes. This paper addresses the lectures, courses, and regulations created through these discussions.
Paper long abstract:
Projects and actions to implement artificial intelligence and automation within the Brazilian judicial system have grown exponentially in the last decade. Since the General Data Protection Law (LGPD), inspired by the European General Data Protection Regulation, was enacted in 2018 and became fully effective in 2021, there has been a profusion of seminars, congresses, and lectures on these new mechanisms in the Judiciary and in law firms. Such events, notable for their heterogeneity (drawing members of academia, corporations, and institutions), mark the gradual insertion of these technologies and the criticism of how they could be applied, highlighting the infrastructures involved in modifying the legal universe. Furthermore, many of the speakers are themselves making these regulations and systems, almost simultaneously with the debates.
Currently, the Superior Courts already use automated systems, most of them to filter repetitive lawsuits and appeal requirements. In the Courts of First Instance, there are projects in progress ranging from the blocking of amounts to automated decisions in simple proceedings. Many law firms use mechanisms such as jurimetrics and predictive justice when preparing their petitions.
In this paper, I sketch a brief overview of the regulation, the creation of projects, and the roll-out of new systems by the judiciary, the legal profession, and other institutions, in addition to the legislative changes and debates in other countries that have been reflected in the discussions in Brazil. Then, as the core of this ethnography, I will describe the events that took place in 2020-2021, whether in person or online.
Paper short abstract:
Most criminal defendants in England and Wales don't meet anyone concerned with their case, in person or virtually. The single justice procedure processes "low level" offences online/on paper. Online pleas & convictions will soon follow. Are these fair & do they facilitate effective participation?
Paper long abstract:
Prosecuting people outside the court is cheap but not so cheerful. Most criminal defendants in England and Wales are sent their criminal charge in the post and expected to respond via an online or paper form. Most of those accused don't respond to the charge, and no plea of guilty or not guilty is entered. Any defendant who hasn't responded is convicted in their absence and fined. No one knows why so few defendants engage with this process, nor does anyone know whether defendants actually receive the postal charge. The single justice service (as it's known) excludes a crucial participant: the accused person. The government knows the service has flaws but is desperate to save more money on courts. So, in the Judicial Review and Courts Bill, it is proposing that all defendants should be encouraged to plead guilty or not guilty online, and that low-level offences should be dealt with via an end-to-end online court. This online court will treat all defendants the same, as if all had the same means and vulnerabilities. To what extent is online justice an inevitable development of austerity and managerialism? Can we make it sensitive to the reality of defendants' lives?
Paper short abstract:
There seems to be a widely shared intuition that human decision as such has value. Is this intuition justified, and could it persist even if AI outperformed humans on every measurable level? This talk aims to explore these questions with regard to the judge from an ethical and a legal perspective.
Paper long abstract:
The admissibility of AI making decisions about humans has been widely discussed. The answers vary but are mostly based on a comparison of the capabilities of humans and AI. The question of the value of human decisions is usually not addressed explicitly. However, it should be the starting point for all further considerations, especially when it comes to the use of AI in court, for law is made by humans for humans.
This human-focused concept of law and legal application, prevalent in modern democracies, implies certain mutual expectations and the responsibilities that come with them. One of the constitutive ideas in this context is the question of what we, as humans, owe one another. The notions of role reversibility and answerability pick up on this. Role reversibility follows the principle of inversion, i.e. the ability to put oneself into someone else’s shoes. Answerability underlines that a human is not only responsible for something, but also responsible towards something or someone. Combining these two ideas with the concept of a judicial judgement as an act of participating in a shared political morality of a community results in ascribing a value to the human judge and their decision that goes beyond particular abilities.
Now, what would happen if we replaced a human judge with AI? Could (and if so: should) we hold on to our current concept of law revolving around humans? Or do we need to fundamentally rethink and reform our legal system? This talk aims to explore these questions from an ethical and a legal perspective.
Paper short abstract:
Asked to define 'legal complexity', a prominent legal scholar answered by saying: "I know it when I read it". Can we do better? What are typical features of and possible remedies to the multi-faceted nature of how normed societies work? Complexity science, a modern branch of physics, can help!
Paper long abstract:
In recent years, a growing body of research has underpinned the idea that the interaction of law and society could be interpreted and analysed through the prism of the "complex adaptive system" paradigm. Both experts and laypeople may agree that recent times have witnessed a steady growth in social, political, and economic "complexity" - in turn manifested in "legal complexity" - without necessarily agreeing on an operational definition of complexity, let alone its defining features or possible remedies. A sensible approach is to leverage techniques and tools from statistical physics, complexity science, and computational social science to both characterise and predict the behaviour of various legal institutions, frameworks, and processes - with the ultimate goal of understanding and taming the complexity of the law.
How interactions between individuals are shaped by norms, and which ("collective") phenomena emerge in the highly interconnected "legal" landscape - interpreted in the broadest sense - are the core questions that I will try to address. From the scientific challenges posed by the use of technology as a tool to tame the complex web of interactions between individuals and norms in the "legal" domain, to the question of how complex network theory applies to items of legislation, the spectrum of real-life issues around how normed societies work is very broad and of paramount interest. My talk will hopefully shed some light on the interplay between statistical physics, computational social science, politics and legal studies, fields that have traditionally followed rather separate trajectories.