
Cari Hyde-Vaamonde on the future of AI and the law

Headline:

The use of AI and machine learning in the justice system is growing quickly, and it will require a complete redesign of the system


About the interview:

In March 2022, PRISM interviewed Cari Hyde-Vaamonde, a visiting lecturer at King's College London, lawyer, and Ph.D. candidate, about the intersection of technology and the law. Cari's research explores how decision-making algorithms, or "judge AI," impact the legitimacy of the legal system, and in the process she has spent significant time researching the broader ways that technology is being used to investigate and prosecute crimes, as well as to facilitate legal judgments. The conversation looks at the implications that different uses of AI and machine learning may have for various aspects of daily life, from our politics to how we interact with law enforcement. Notably, Cari's approach to our conversation was balanced and nuanced: she detailed the many potential benefits of using these technologies (some of them counterintuitive) and highlighted the risks that misusing them could bring.


Key Takeaways:

  • Technology – particularly artificial intelligence and machine learning – has the power to completely change how individuals interact with the law and the justice system. It will change everything from how penalties for speeding are assessed to how potential criminals are identified and how low-level offenses are adjudicated, and much more.

  • The major shifts stemming from the use of technology in applying the law will demand significant changes to how the justice system works, including new rules and protections so that justice functions better with technology rather than creating new harms.

  • Technology may bring new challenges if we are not attentive to the externalities that can arise from it. One notable example is that artificial intelligence and machine learning systems may learn existing biases and compound them when analyzing information and making decisions based on it.

  • We need to be critical of the decisions that algorithms make and understand why computers produce the outputs they do. Accepting those outputs as fact without understanding how they were generated may create new problems.

  • The applications of technology in law go beyond investigating and punishing crimes. Future laws may be written as code, which could allow them to be instantaneously applied and executed without the need for additional interpretation or leeway.


The interview:

PRISM: Hi, Cari. It’s great to talk and welcome to the Future of the World Project. We are excited to chat with you about the future of the justice system and the research that you are doing right now. Before we get started, would you share a bit about yourself, what you are doing, and how you got interested in these topics?


Cari Hyde-Vaamonde: Great! Yes, so, I studied law, as I think you are now aware, and after leaving university, I began to practice. My first role was as legal counsel to a technology company working with mobile technologies and location-based services, where I dealt with the regulator quite a lot. I then began to specialize more in advocacy, at which point I qualified for the Bar and went into litigation across many different areas of law: employment law, contracts, and some prosecutions. It was quite a range of different things that I worked on, and I really enjoyed being in the courtroom. At the same time, I also felt like I wanted to think a bit more strategically and that a more systematic analysis was needed of the topics that I was working on. I wanted to build something rather than churn through cases. There is a part of me that likes mathematics and scientific approaches, and that part of me wasn't really being fulfilled at that point, so I decided to move into research, which is basically where I am now: researching how AI and technology could impact the justice system and how they may change the way that people interact with justice.



PRISM: Super interesting. And all of this is the premise of your Ph.D. research at the moment, correct?


CHV: Yes. So, I am working on that now, and I am also a visiting lecturer at King's College London. Specifically, my research is on the impact of AI on the justice system and how the former is affecting the latter's legitimacy. More concretely, I'm looking at the use of AI in our justice system – its role in deciding low-level traffic offenses and things like that. I'm working with mathematicians and scientists to see how that could be automated, and with those insights in mind, investigating what impacts that might have on people and on civil processes that might be used to understand whether someone can afford a fine, et cetera.


My perspective is that there is so much pressure on the justice system, and there is potential for technology to bring massive changes. In a sense, some of these changes feel almost inevitable, and some may substantively change how the justice system works. For that reason, I think it's one of those areas where we just desperately need to build expertise and knowledge.



PRISM: Fascinating and very forward-thinking. When you talk about traffic offenses, are you talking about a sort of automated speed camera type thing?


CHV: Exactly. Cameras that track things like failing to stop at a stop sign, speeding, and so on. A lot of what I am looking at goes through an automated process, including COVID-19 offenses. It's a very low-level process, and it's partially automated already, in the sense that people are basically found guilty of these offenses if they do not object to them. That – in and of itself – is a massive change from how we used to deal with criminal offenses. So, what I see is a trend towards automation, and I think it's very important to research and understand it so that we are fully aware of the impacts, even if there are also great potential benefits.



PRISM: Do you think we should frame this as the future of the justice system or the future of AI and the justice system? I think the conversations here could take two different directions that are similar but maybe don't lead us to the same place, right?


CHV: I think that’s right. At its core, I am interested in the justice system as a whole. I came up through the justice system and that is where I have spent my career, and I am very interested in the way that things may develop there. I think that AI and other technologies have a massive role to play in how things develop and how the system is reformed. But then we can take it a step further to talk about how AI is used in policing, which is a really big topic and an area where there are a lot of developments, too. We can cover a lot.



PRISM: What are the things that you think are underappreciated right now? Where do you think that things will change in say five years and then maybe in ten years? I think that is an important distinction to make given the pace at which technology moves.


CHV: In terms of policing, things are already changing. A few years ago, we already had AI trying to recognize from video whether or not an individual was carrying a knife, all of which was being used to prevent risky scenarios. This is an area where private companies are doing massive amounts of work, because this is not necessarily something that's homegrown within police departments or the government. Private companies identify needs and indicate to police departments that they require these technologies and services. That's been happening more in recent years; I only see that trend accelerating.


I can see a situation where those private companies are going to be asked to demonstrate that they are trustworthy, transparent and whatnot to maintain the confidence of the police and other officials who are then commissioning these things, as well as the public. So, I think that one would then expect them to have quite robust ethical guidelines to demonstrate their skills and that they are trustworthy actors to take on these things. There may be oversight boards analyzing innovations on a rolling basis.



PRISM: Diving into that piece a little bit more, on the police side and their use of AI, what is the general thrust of what they are trying to achieve? It seems like there are different goals that they could have. One might be that there are things that we can do with technology that require fewer people, so that becomes attractive from a costs and manpower perspective. Another might be that it is assistive to what the police can do and just makes individual police officers more efficient. Or, is it more that these technologies give law enforcement capabilities that humans simply cannot possess?


CHV: Yes. You've summarized it really well, actually. Essentially, police forces across the world have limited budgets and resources, and therefore there is certainly an emphasis on making the work of the person on the ground easier. That part can be quite simple; it might just be an app that they're able to use while on the move, whereas before they had to fill in desk-based forms. So, you know, there's a lot of potential for speeding up processes with quite simple technology. That's certainly one aspect.


There is also the argument, of course, that some of this work being done by computers cannot be done by humans. There are situations where access to a vast amount of data was not possible previously, but now that data is accessible, it can be analyzed. A step beyond that, you have predictive technologies, where we might not know why the computer gives a certain answer, but we know it has crunched a large amount of information with machine-learning algorithms. Often, the argument is that it works, and it doesn't really matter how it works as long as it does. The counter-argument to that, of course, is: Where's the evidence? Where's the actual concrete evidence that this works? Statistically, it might look like it works, but there might be no empirical evidence because no one has actually gone and tested it. And that is an inherent issue in the justice system. For example, in many cases, who knows if someone is actually guilty of an offense? Judges and juries draw their conclusions from the evidence, but they can make mistakes. So, how do you test if the algorithm is getting it right? That is sometimes incredibly difficult.
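
To make that testing problem concrete, here is a minimal sketch, with all error rates invented for illustration, of why agreement with past judicial outcomes is not the same as true accuracy: if the labels used to validate a predictive tool are themselves fallible human judgments, the measured agreement mixes the model's errors with the humans'.

```python
import random

random.seed(0)

N = 100_000
JUDGE_ERROR = 0.10  # hypothetical rate at which human decisions are wrong
MODEL_ERROR = 0.05  # hypothetical rate at which the model is wrong

agree_with_judge = 0  # what an evaluation against past outcomes measures
agree_with_truth = 0  # what we actually care about, but cannot observe

for _ in range(N):
    truth = random.random() < 0.5  # actual guilt: unknowable in practice
    judge = truth if random.random() > JUDGE_ERROR else not truth
    model = truth if random.random() > MODEL_ERROR else not truth
    agree_with_judge += model == judge
    agree_with_truth += model == truth

print(f"measured agreement with judges: {agree_with_judge / N:.3f}")  # ~0.86
print(f"true (unobservable) accuracy:   {agree_with_truth / N:.3f}")  # ~0.95
```

Measured against fallible labels, the tool's apparent performance here understates the true one; if the model had instead learned the humans' mistakes, it would overstate it. That is exactly the gap between "statistically it looks like it works" and concrete evidence.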


I'm not sure if I've fully covered your question, but it’s definitely an amalgam. There's a strong argument that says we are failing by not using the technology, but equally, there's a debate as to whether that technology in all cases is beneficial.



PRISM: What is this more of? A cost-savings thing because police officers are expensive? Or is it something where we get a significantly safer society because we have the technology to prevent crime or to punish it in a more efficient manner?


CHV: I think that it's difficult to say. Essentially, a massive driver is resources. But safety is also a big factor, because there are areas like terrorism, violence against women, and other sorts of gender violence where these technologies could be useful, and it almost becomes a massive moral imperative to say that because we could use them, we should. That's one of the reasons why I feel there's a sort of inevitability in the progression of these technologies in this space; at the end of the day, all these factors are pushing in the same direction. What is slowing it down is the people who are asking, "Well, what about transparency?" "Is this really effective?" "Does it discriminate?" and all these other considerations that we can't ignore. We actually need quite a lot of expertise to analyze what is going on, and that's a lot of interdisciplinary work. It doesn't just end up being the lawyers or policing specialists who have to know about this stuff; you need people who understand how algorithms work, how they process information, how to unpack the results, and how to understand which directions are good to move in and which we may want to step back from.



PRISM: Yes. This strikes me a bit like the conversation that we are bound to have around self-driving cars. When they actually become self-driving, they will inevitably be involved in accidents, and some of those accidents will kill people. That's a problem from a regulatory perspective: even if, in aggregate, machines kill at lower rates than human drivers, the debate is still going to focus on the algorithms because we are so accustomed to human error. There is obviously human error in the justice system, too. Do you think that those are a bit analogous in that way?


CHV: Yes, I think you make a good point that there are always scenarios where something bad happens. It can't always be avoided. It often depends on the outcome. When it comes to self-driving cars, for example, some might say that there should be no tolerance for risk of life with these technologies, and those critics won’t want to hear any considerations that those vehicles may actually be safer.


In terms of the justice system, there is clear evidence of failures, of unintended prejudices affecting the way in which decisions are made, and so on. Humans are fallible. I'm really interested in investigating how we can work with algorithms to try to lessen the impacts of some of those failures rather than compound them. On balance, I think that with a full understanding of this technology, we can get to a better place.


So yes, I think that you're absolutely right. There are people in this space – established lawyers and other individuals – who speak about wanting to protect the rights of people and ensure that they are treated fairly. That's great, but it's very difficult to surgically separate that idea from the inherent inertia that makes people not want things to change, because there are potentially vested interests. Policymakers are really going to be driving through reforms, as they have in other areas, so it's imperative to extract the real concerns from the we-don't-want-change concerns. Those are the ideas that I am really trying to look at in my research.



PRISM: Stepping back a bit. Are we in a place where there's a ton of technology already and it's super powerful, and the law, norms, and ethical principles are trying to catch up? Or are we at a super early stage on the technology side, and the concerns that we are talking about today are really things that we need to be thinking about to prepare for the future?


CHV: Yes, I'd say that in the UK, we are at quite an early stage on this. We are certainly starting to hear more about automated judgments, but it's not something that's really happening right now, though it may in the near future. Meanwhile, in other parts of the world, like Brazil and China, using judicial algorithms is much more common. This is all complicated by the fact that legal systems are very location-specific, with jurisdictions that are kind of closed systems.


In terms of the UK, it's in its infancy, but I think there is plenty of technology out there. I mean, there are scenarios where you might have a company that can predict the potential outcomes of cases – based on the individual judge hearing the case – and give a percentage likelihood that a case will be won or lost. That is one type of predictive technology in the legal system that is actually quite useful for legal advocates in the commercial sector, who use it to advise their clients. But whether or not that takes a leap into the actual justice system is another thing. At the same time, maybe it doesn't need to be integrated to have a massive impact on how justice works: it affects lawyers and their clients, and in those situations the technology is being used to resolve cases without going through the court system at all. It's more on the side.


So, you know, it depends very much on the sector. The commercial sector is one example, but criminal cases would be completely different. It's sector- and jurisdiction-specific, so it's quite a complex scenario.



PRISM: Taking an even bigger step back, I am curious whether there is an opportunity here for us to become significantly safer by using this technology. China's a little bit of an interesting case, right? They use an incredible amount of surveillance, and crime is extremely low. Is there a reason to believe that using AI could make us significantly safer?


CHV: Okay, well, yes, I think if I were in policing, I would probably answer in the affirmative. Personally, I'm more on the fence. I would need to see what technologies were being used and understand them fully. There are, of course, things that AI can do that humans cannot, but equally, there are things that we can do that AI cannot. So, it's a tough question to really answer.


I think the trade-off is potentially that we end up with highly surveilled societies, and we do have to ask whether that is actually what we want. Is safety the objective? If it's the only objective, then we could all be perfectly safe in a cell, not seeing anyone or doing anything, but liberty is obviously one of those things that we need to balance. There is certainly massive potential associated with AI, but it is really a question of where it takes us. It's actually quite difficult to foresee, because it depends very much on the appetite within the country using it and how that country will want to use it. All of this will vary. In some jurisdictions – maybe not Europe, the UK, or the US – there might be more openness to making safety the top priority and using these technologies as much as possible.


On the other side, there are still concerns about whether these technologies are accurate or whether they are discriminatory. If those concerns are borne out, the technologies might not improve your safety at all.



PRISM: Everything sounds quite reasonable. It's clear that there is an opportunity here to use technology for our benefit, there's a civil debate about how to take it forward, and there is a lot of activity around making this go forward in a good way. I guess that makes me question what the divergent scenarios might be. In my head, when I think of AI and policing, I am thinking about Minority Report and pre-crime monitoring and things like that.


That’s all more on the crazier outcome side of things, but is it really a wild-card type of outcome where that is never going to happen? Or what’s the boundary of the conversation here?


CHV: Well, I don't think it's unhelpful to think about. Some people might say, "You shouldn't even think about that," but if you're predicting areas where crimes are likely to happen, which individuals are likely to commit crimes, and so on, that is something worth thinking about. There are scenarios where police may identify a person who, because they are with Person X in a certain place, may be statistically more likely to commit a crime, so it becomes attractive for police to intervene early, before a crime is committed.


I haven't watched Minority Report, but my understanding is, you know, that's kind of in the direction of what I am talking about here.



PRISM: It's a good movie. I recommend it.


CHV: I'll make sure I see it! In the scenario that I outlined, maybe some crime was prevented before it happened, but it is quite possible that the individual was not going to take the action that was predicted, and by intervening, you are disrupting and inhibiting their ability to be free. So to answer your question: I don't think it's unrealistic to think about these extreme scenarios because the technology is incredibly powerful. If we allow it and take it to the extreme, then it can get quite unnerving and move towards the Black Mirror kind of scenarios that we see on TV, where we say this is not where we want to be. Others in security might say that the level of risk that we experience every day is extremely high and that these interventions are worth it.



PRISM: On the other side of the coin, what's the discussion around just shutting all of this down and not using it? I feel like the conversations around facial recognition often reach this point. Is that a possibility in areas like AI and machine learning? Or is it that right now everyone is more or less on board with this technology as long as it is used correctly?


CHV: Well, I think that there is a strong suggestion that certain things should be shut down, like the prediction of court cases that I referred to previously. In France, that's already been outlawed. So, it is being pushed back against in certain areas, and certain governments are not interested. Personally, I don't see these technologies being completely blocked, though some might be restricted. I'm not saying this is a good or a bad thing, but in terms of the future, I think there's always going to be an appetite for risk-based analysis in policing – maybe less so in the courtroom, because there you have a more individualized scenario and process. What I potentially see is a shift in the line of what's considered a serious criminal offense, and a rethinking of which cases are considered minor – and therefore may not require the same type of full trial – in order to reduce the volume of cases before courts or the number of investigations that need to be carried out to their fullest extent. To that extent, you might end up with a sliding scale of what police need to show as evidence in certain scenarios, and because of these technologies, more cases might fall under the bar that determines which cases go to a full trial.



PRISM: That makes total sense that more stuff will be automatic, just like how traffic tickets work in the UK, where you just get a letter in the mail. When we think about where this goes, who are the different actors thinking about where this all goes, and what are their positions on these topics? I guess I am particularly interested in where these different actors diverge.


CHV: Yes, so you have justice campaigners – groups like Big Brother Watch that are very hot on these types of things, for example facial recognition. So, you have those voices that are trying to protect the public, essentially. They're quite a strong actor in the sense that they're trying to get research done and to get things out into the media when they feel like civil liberties are being restricted.


Then you have the justice system and the government, and, within the system, the legal professionals and judges, who have a very different perspective and are often thinking of their professional values. They want to protect the idea of justice, and they'd have a perspective on what that means. For them, there is also the fact that the further we move into automation, the more likely it is that jobs disappear or change materially. So, that can be a factor as well.


From the overarching justice system viewpoint, the leading people in that space seem to be very positive about technology and want to see more done and more stuff being taken out of the justice system so hearings do not need to be held and so on. So there is a positive attitude in that sense.


And then as you anticipated, the companies that are trying to provide services and different technologies to smooth the way are out there promoting themselves, but it’s really up to the government to select the ones that are actually going to help in these scenarios.



PRISM: Is this a trend that the Googles, the Facebooks, and the Microsofts of the world are building out, or is this a space that smaller startups are involved in?


CHV: It’s definitely the larger companies, because there is an issue around trust and the concern that smaller companies may not have the due diligence and robust security systems that the justice system would require. The justice system is this massive machine and for that reason, they are not going to take a chance on something not working or going wrong. So, they're definitely going with larger companies.


A few years back there was a report on where money was going for these kinds of technology contracts, and in fact it was very piecemeal and somewhat diverse, but the trend is towards the larger companies. At the same time, it is often very decentralized, with different police departments contracting out to different companies, but it seems like things are starting to converge now.



PRISM: Yes, so I wonder, if we look 10 years out, what uses of technology can you imagine occurring in that time period?


CHV: So I think one of these things is the rules-as-code movement, which is the idea that when governments write their legislation, they also write the code in parallel. So, instead of legal rules being retrospectively put into code, it's done simultaneously. In this scenario, you have legislators creating rules and code at the same time, so there really is no implementation period; the code takes effect the moment the law is made. A lot would flow from that. For instance, if we look at how laws are implemented now, there is a lot of discretion around what is prosecuted, and a lot of feedback about implementation. If you legislate code that automatically changes how the law operates, you might lose that.



PRISM: Just curious how that works? Like what kind of code are they writing?


CHV: So, one example would be the passing of a new social security law, where instead of having the regulations implemented by people, the new law and its code could be enacted together, and it would automatically change what benefits people are allocated. This type of thing could be implemented in various other places, like company law, where it would reduce friction and could improve efficiency.


There are benefits to doing that, but there are also big risks, because if you make a big mistake, it's a huge mistake, and one that impacts everybody from the moment it is made. So, that would be a big shift, changing how lawyers work and how companies operate.
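
As a minimal sketch of what rules-as-code could look like in practice, consider a hypothetical benefit rule expressed directly as executable logic; every name, threshold, and amount below is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Claimant:
    weekly_income: float  # assessed weekly income, in pounds
    dependants: int       # number of dependent children

# Hypothetical parameters that the (imagined) statute would define.
INCOME_THRESHOLD = 250.00  # weekly income ceiling for eligibility
BASE_BENEFIT = 80.00       # base weekly payment
PER_DEPENDANT = 25.00      # additional weekly payment per dependant

def weekly_benefit(c: Claimant) -> float:
    """Return the weekly benefit due under the illustrative rule.

    Because the rule *is* the implementation, amending a constant
    amends everyone's entitlement the moment it is enacted.
    """
    if c.weekly_income > INCOME_THRESHOLD:
        return 0.0
    return BASE_BENEFIT + PER_DEPENDANT * c.dependants

# A claimant earning 200 pounds a week with two children:
print(weekly_benefit(Claimant(weekly_income=200.0, dependants=2)))  # 130.0
```

The efficiency and the risk described here are two sides of the same property: there is no gap between the text of the rule and its application, so a correct change propagates instantly, and so does a mistaken one.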


Then, the other thing that I could see in the justice system is judges or even juries using technology to eliminate biases and aid their reasoning.



PRISM: That's very interesting, and it comes up in the book Noise, where they describe attempts to remove biases from judges in the US; it improved results for several decades, but eventually the judges rebelled and those requirements were removed. I wonder if this would have a similar trajectory.


CHV: I think that there are many fights to be had over how these things might be implemented, and this goes along with what I was saying before about the difficulty of making changes in situations where some concerns are warranted and some are actually more to do with self-protection. To that extent, I think the answer is some kind of co-development, so that the people making the technology and the ones using it are working together. At the moment, the problem with this approach is that many lawyers don't have an understanding of how algorithms work, and they are not necessarily scientifically minded. This is not all lawyers, but many, and as a result, they don't have an understanding of the benefits that these technologies can bring, or of the pitfalls.



PRISM: Yes, this is fascinating. I'm really interested in the coding-of-the-law idea, and it seems like it would have massive implications. In a UK context, I could imagine the civil service shrinking over time, because fewer people are required to implement things, while on the policymaking side, many more people are needed to make the rules and the code. Then you have to get everything done upfront, alongside the legislation, rather than post-passage, which also seems like it would completely change the politics of how laws are made and implemented.


CHV: Right, it front-loads a lot of the decision-making, and with it a lot of these issues, so that things that would normally only emerge way down the line are highlighted from the outset. Even if some ambiguity is purposefully engineered in, it will be obvious that you are doing that. Potentially, all of these issues need to be addressed from the start under this approach. It could make the political compromises that happen right now extremely difficult, because people can run the code and see how they might be impacted. It could end up being completely different, but also more transparent.


Some argue it could become extremely difficult to get things done under this approach, and either way, I think it could have a massive impact if it does happen. There's a lot of movement in that direction: New Zealand is moving that way, and the government in Wales is looking at this approach, so it's not just hypothetical. Legislators in the UK's central government are interested too. So, it's one of those movements that is worth being aware of.



PRISM: It seems like the transparency almost brings the philosophical questions to the fore in really narrow and specific ways. When you can get all the numbers upfront, it completely readjusts the political calculus of how decisions get made.


CHV: It'd be interesting to see how that would play out in political discussions, because if this is kept transparent, then the results can be quite shocking to those who have to make political decisions. It's all laid out in front of you. So if you make a decision about what is funded via the NHS or whatever, you would have the information about the difference it would make in people's lives right in front of you.


I think the other impact is that coders become more like legislators. If the law becomes code and code is law, then the people who are writing laws, at the end of the day, are the people doing the coding. There are questions about the democratic angle there, too, and where the accountability lies. I think there will be a lot more questions on that front.


And that comes back to what I was saying before about companies getting involved in government contracts. It all means that we’re going to see a lot more attention on their ethical background and how they’re approaching things.



PRISM: We're going to need some kind of Agile legal methodology.


CHV: Yes! I mean, there have been reforms in the civil justice system in the UK where they have literally used Agile methodology as they develop it. So, it's happening!



PRISM: Is there a big thing that we missed?


CHV: It's such a developing area, really, and it's not well-traveled ground. People have been working in AI and law for over 30 years, but this is all becoming more of a reality, or at least a possibility. I think the pressure is stronger now to say: before, we didn't really have the technology to do this right, but things are changing. We are confronting the fallibilities of both humans and computers, balancing these in policymaking and decision-making, and getting to the point where we can't just say that all computers are bad and humans are so much better. There is no clean-cut argument.


There are really so many nuances, and it's a developing area. I think that we're still finding our way on all of these issues. We are starting to get some ethical codes, and the European Union is planning to legislate on requirements for these kinds of things, but it's really going to be a case-by-case basis. A lot of this may feel like we're redesigning the whole system.



PRISM: That's a great ending: Redesign the whole system. We even got a great headline. Thank you for this super interesting chat!


