Design Leader Insights - Carol Smith on how to practice ethical AI in user experience design

January 16, 2023

Transcript

Alex Smith: Hi, Carol, thanks so much for joining the show today. 

Carol Smith: Thanks for having me.

Alex Smith: Of course. And yeah, to get started here, can you give the audience a little bit of background context on your history in UX?

Carol Smith: Yeah, certainly, I have been working in this field for a very long time. I have a master's degree from DePaul University that I got over 20 years ago. And since then, I've been working across a lot of different industries, and working with AI systems since about 2015, in a variety of different settings. Currently, I am working at the Software Engineering Institute at Carnegie Mellon University, where we have an AI division, which I'm a part of, and we look at all kinds of AI and emerging technologies, and really try to make sure that these systems are built with people in mind, that they're made to be systems that people are willing to be responsible for. I also teach in the Human-Computer Interaction Institute at Carnegie Mellon. Right now I'm teaching, or I will be teaching in the fall, Interaction Design Overview, which is a great course with people who are not majoring in HCI but who are interested in it. And it's just a fun way to get people interested in this work.

Alex Smith: No, I love that, a lot to unpack there. We'll definitely hop into AI in a second. But yeah, just a shout out to CMU, I know they're definitely one of the leading programs for human-computer interaction and UX, which is awesome. And I love the idea of teaching, I guess, UX or HCI to non-majors. Who attends that course? Is it so people in other fields can understand what designers are really, really up to? Or does that lead to people switching into UX?

Carol Smith: Both, actually, yeah. So we do have a fair number of people who do change careers, which is really exciting. And a lot of these people are in adjacent areas, so we end up with quite a few architecture students, students who are studying some aspect of computer science. Sometimes we'll have crossovers from more remote departments as well. So it's a really nice, broad set of people, particularly those in the business programs, who are studying product management, or some kind of technology from a management perspective, and who are interested in making sure that they understand what human-computer interaction is about.

Alex Smith: Awesome. Yeah, let's dive into AI. So tell me a little bit about what type of research you've been conducting. And, you know, what do you think about that field?

Carol Smith: This technology has been in the works for decades, many, many decades, but just now it's really getting to the point where it's useful. And we need to make sure it's usable, and designed in ways that augment the people who are working with it, so that they can be the best they can be at doing the work they're doing. And so a lot of the work that I'm doing is looking at what those practices are, how to really make sure that people are putting in the effort to reduce harm and prevent harm, that they really understand the environment they're putting the system into and the complexities that already exist there, and how the system may increase that complexity. And helping people by making tools to help them really investigate that. So looking at hazard analysis for systems that are in very high-risk situations, so autonomous vehicles and, you know, very critical systems, those types of things. Making sure that people are really doing the due diligence and understand that these systems are unlike previous types of software and other systems, where you build it once and you do that risk-assessment activity one time. With these systems, we have to continuously be monitoring them and assessing the risk at any given point, because as the context changes, as the data itself changes in the system, that is going to potentially introduce new risks and change how the system is used or change what the system is doing. So we need to be continuously doing this work. And that's new for a lot of people. It's new work, and it's new practices that we need to develop.

Alex Smith: Yeah, and when you talk about usability of AI, I think some people think, oh, it's automatic, it's automated. There's no interface; it's doing the work for me. Are you talking about, you know, the bridge to AI, like partially self-driving vehicles, which can actually cause a problem when people think they're already fully automated? And then, you know, we see Teslas getting into accidents because drivers think Autopilot is going to do everything for them. Or are you talking about the interfaces for the people designing the AI systems, or both?

Carol Smith: Both, really, yeah. So in that kind of mid range, how can an AI engineer, or even a UX person who's working on the system, understand what the system is doing and how it's doing it, and have, you know, an understanding that's appropriate for their level of use and how they're interacting with the system. And then also from the end user's perspective, the person who is potentially driving in that vehicle, or at least operating the vehicle. And with software as well, when you're interacting with these systems, you need to have some awareness of what the system's context is, what kind of information it has, how it is going to transition control to you, how it's going to take turns with you, when those interactions are going to occur, and just how much you should even be trusting the system. If the system is not operating as intended, or its confidence in the recommendations it's giving has dropped, then that's something that the person who's interacting with the system really needs to know. So figuring out what that looks like, and how to provide the appropriate information at the appropriate time, so that the system remains, you know, as contextually aware as possible, and the human is contextually aware of what the system is doing too.

Alex Smith: One of the things you mentioned was responsible AI; I often hear it called ethical AI. Tell us about that. And how should designers be thinking about it?

Carol Smith: Yeah, so much like with any other software system, we really need to understand the context: who is going to be interacting with the system, how it will be used, and what the implications are that may be negative, in some cases necessary, but really looking at who's affected by the system, both in the initial deployment, the people who are actually using it, and also the potential populations that are affected by the system that's being put in. And then making sure that we really understand how the system is intended to be used and what the data is that it's going to be built on. Without good data, AI systems are useless. They really are just a combination of data and some mathematical equations, computation, and programming. And that's it; it's based on what you put into it. And so really taking all those building blocks and having an understanding of those pieces, not necessarily understanding the math or being able to program, but looking at the context, looking at the content that's being used for that context, and making sure that it really is the appropriate solution for that situation.

Alex Smith: So Carol, when a UX designer is starting a position at a company like Google that has AI, and they're actually getting into AI design, what's the process for actually designing for AI?

Carol Smith: Yeah, so initially, we do our normal UX work: we have to figure out what is the problem we're trying to solve? What's the challenge? How are we approaching this? Who is going to use it? What are their needs? And making sure that AI is the appropriate option at that point. And then if it is, really looking at the potential harms and hazards that could be created with the system. So there are a couple of activities. Abusability testing is one that I've really been promoting more recently, which is an imaginative, speculative activity where the team gets together, not just the designers, although they're certainly a huge part of this, but everyone who's working on the team, ideally a pretty diverse group, and works together to really think through what the system is meant to do. The good parts about the system, the outcomes that we expect to be good, as well as the potential harms, and then really imaginatively going down a path to the worst possible outcome. So that we can think about, you know, if things really went wrong, if the system was hacked, what kind of data might be exposed? If the system was to be overtaken by someone else, what could happen? What kind of potential negative outcomes are possible? And then how do we ideally prevent those, or mitigate those situations, instead of waiting to see what happens, really doing that proactive thinking about the work. And the systems are dynamic, not always going to be the same day to day. And so we need to make sure that we put in the kinds of safeguards that we can, so that people are able to get done the things they need to do.

Alex Smith: Carol, let's switch gears here. What advice do you have for new designers entering the field today? I'm sure you come across a lot of new designers at Carnegie Mellon.

Carol Smith: Yeah, definitely look for a team that's established. When you're looking for a job, find organizations that really value this kind of work and that are doing interesting, challenging work. By joining a team that's established, you're going to be able to see what they do, see what kinds of decisions they're making, listen to their conversations, and really learn how to do the work. In school, we tend to learn, you know, a lot of patterns of work, processes and things that we can use, but actually being able to apply those in different situations is a skill you usually learn on the job. And by working with other people, you're gonna have more exposure to the different ways people do that work, because very few people do it exactly the same.

Alex Smith: Yeah, that's a great point. Find a team that's actually doing what you want to learn to do and want to be a part of. Where can people go? Or where would you point people looking to learn more about AI in design, specifically UX?

Carol Smith: The bigger organizations, so UXPA and IxDA, the information architecture conferences, those kinds of groups and events are really excellent sources of information and a great way to meet people in the field. Many of those organizations have online content, and then in-person or virtual conferences as well. Those are my go-tos, just to get the freshest information. And then I use various social media and other sources as well, to keep up on what's going on.

Alex Smith: Well, thanks so much for coming on the show today, Carol.

Carol Smith: Thank you. It's been a pleasure.