Professor of Computer Science Ian Davidson, left, sits with Dean Richard L. Corsi, center, and Professor of Electrical and Computer Engineering Chen-Nee Chuah to discuss the current and near-future applications of artificial intelligence. (Mario Rodriguez/UC Davis)

Dean’s Download: Discussing the Promise of AI

As a crosscutting theme of our strategic research vision, the College of Engineering has made a commitment to Engineering for All: designing engineering solutions and accessible technologies that leave no communities behind. In reflecting on Engineering for All, it’s hard to stop oneself from thinking of artificial intelligence. It promises so much good for so many sectors, from making research and education more accessible to expediting discoveries in human health and global sustainability. 

Yet, as we are in the early stages of this technology’s development, there is also a lot that causes pause. How do we ensure we are engineering AI models to be fair and just, and how do we ensure that we are building technologies now that are equitable and usable by all down the road? 

It’s a lot to chew on. Thankfully, I have two brilliant researchers — Professor of Computer Science Ian Davidson and Professor of Electrical and Computer Engineering Chen-Nee Chuah — to help me talk through AI’s current and near-future applications in the College of Engineering and society at large. 

Richard Corsi: I would like to start with the UC Davis AI Center in Engineering, which we just launched. It supports research of foundational AI and its translational applications with 10 key focus areas, ranging from climate to health to society. Why do you believe it is important to have a broad view of AI?  

Ian Davidson: I firmly believe we're in the middle of another industrial revolution. But the machines now are not going to be boilers and engines: they are going to be AIs. It will be unlike any other industrial revolution we've seen, because in previous industrial revolutions the technology has been a tool to help humans. This time, the tool can actually replace humans.  

So, I'm really happy to see these 10 areas in the center. Two areas in particular that we pushed for were the impact of AI on education and the impact of AI on society. We have to think about the ethics of AI.  

I'm also really happy to see AI is broadly viewed in the college, because I think it's extremely important that we don't just view our AI center as being a technology center. It has to span across society, different industries and underlying science, which is why interdisciplinary research in AI is so important. 

Ian Davidson (Mario Rodriguez/UC Davis)

Richard Corsi: The way I look at it is, foundational AI is critical to making translational applications of AI better. We want to make the use of AI much more effective in the future, and that requires us to look under the hood and improve the engine, which is just the sort of thing research in foundational AI does.  

Chen-Nee Chuah: In some cases, Ian, the use of AI isn't meant to replace humans. For example, in healthcare, I believe that the final diagnosis and decision should lie in the hands of the doctors. In that case, AI is just a tool — it's helping humans sift through the haystack of medical details to find the right needle. 

Richard Corsi: Shifting gears a little bit here. We've heard a lot in the media about the tremendous potential benefits of AI, but also a lot of concerns about AI gone awry. How can artificial intelligence be leveraged for the betterment of humanity and the betterment of the planet? 

Ian Davidson: I think we can go low and go high. You think about all the wasted time, or what my kids call “busy work.” Just look at this meeting, right? We spent 20 minutes scheduling this meeting. There's no reason why personal AIs can't take out all that busy work from our lives.  

Additionally, many of us, Chen-Nee and myself included, are working with various centers on campus, like the UC Davis Medical Center, to solve issues by working across disciplines. AIs could solve these grand challenges much more easily than humans.

Richard Corsi: How do we ensure that AI doesn't leave any communities behind, that it becomes a tool that benefits everybody, not just those who can afford it? 

Chen-Nee Chuah: For me, AI is like fire. If you use it cautiously, responsibly and in controllable amounts, it benefits a lot of sectors.   

We need to be mindful that the data we use to train AI models are representative of all populations. Again, I'm just using one example in health because that's the field that I apply AI to. A lot of the data that we use to train the models comes from big medical research centers. The population who receives care there may not be diverse depending on the location of the research center. Even for us, we serve the greater Sacramento region and many underserved communities. Some of those patients don't make it to the hospital for care because of various challenges, such as transportation, financial reasons or their health condition. That means we don't have their data.  

Richard Corsi: It sounds like what you're saying is that AI is going to be inherently biased if the data for underserved communities are sparse. 

Chen-Nee Chuah: Yes and no. That's why we need to look at it carefully, because if we are extracting features that are agnostic, like a smile is always a smile, and we have enough information about recognizing that, then maybe it's not an issue. However, pulse oximeters [which measure oxygen levels in blood] have been shown to be less accurate depending on the skin tone of the patient. If that is the input in our AI model, we may want to be careful and ensure that we incorporate that and factor in all of these pre-analytic variables.  

That also means we need to practice due diligence to involve all communities in the first place, in getting their data as well as interpreting the results of their data. For example, my lab and I are currently working on a project to see if you can use AI to understand if there are any neuropathology differences in Alzheimer’s disease across populations, specifically for patients who identify as Hispanic and Latino — a group of people we don’t have much medical data on. 

Chen-Nee Chuah (Mario Rodriguez/UC Davis)

Ian Davidson: I can add a different perspective. In Europe, they've taken the approach that AI has to be legislated. The EU has the AI Act, which says, amongst other things, that any time an AI makes a decision on a human, the human has the fundamental right to ask the AI to explain how it made that decision. If you want no people left behind, we have to legislate AI and introduce explainable AI in the U.S.  

Another requirement we need is that the machine has to be fair to humans. For example, if AI is used to select people to interview for a job, with half of the applicant pool being male and half being female, that should be represented in the selection. 

Richard Corsi:  I’d like to hear from each of you what you think are the big concerns. What are the things about AI that keep you up at night? 

Chen-Nee Chuah: I worry about AI burning out our planet because it costs so much computational power. I'm afraid that when the world is just waking up to AI, everyone is rushing to use AI as a solution without a proper, cautious exploration of the question, “Is it really the best tool for the job?” Maybe AI is too big of a hammer for some of the problems we’re facing; some problems don't need fancy, deep-learning models.  

The other concern is safe AI, because AI does run into hallucination problems, even for the latest foundational model from Meta, which is supposed to excel at generic learning tasks. We tested it with a histopathology [the study of tissue changes caused by disease] image analysis task, and it gave us erroneous results.  

Richard Corsi: I’ve heard people say, “Yeah, but AI will give us the solutions to AI burning the planet.” 

Chen-Nee Chuah: Some of the solutions that could make AI more efficient are the same ones that make it more accessible, and our colleagues are working on them by making AI models more compact. I know Hamed Pirsiavash is looking at computationally efficient models, Avesta Sasan is looking at hardware accelerators and Houman Homayoun is working to fit AIs on a small edge device the size of a credit card. I think edge intelligence, or edge AI, can help bring AI-based solutions to the wider community while addressing environmental and ecosystem concerns. 

Richard Corsi: I'd like to turn now to our college specifically, as opposed to the greater impacts of AI on society. How do you think the College of Engineering is preparing the next generation of engineers and computer scientists to be leaders in AI, to be individuals using AI and using it appropriately? 

Ian Davidson: In computer science, we have faculty who specialize in machine vision problems, large language models, foundational models and more. All these AI researchers teach large undergraduate classes, so undergraduates in our department, and other departments as well, can learn about AI from many different angles.  

In addition, for more than 15 years, we’ve taught an ethics and technology course. Students typically take it in their last year, and it’s like the golden swan of courses. The students love it, the faculty love it and it's really important. It's a win-win-win situation. My colleague Patrice Koehl introduced a companion course to be taught in a student’s earlier years. With these two courses, students finish with an understanding of the ethical ramifications of what they'll do as scientists and technologists.  

Chen-Nee Chuah: I think hands-on experience is very important as well. I'm teaching an undergraduate senior design and applied machine learning course, where students form teams to do capstone projects, working with faculty sponsors from different application domains, such as health and transportation. Students need to work with domain scientists to understand the data as more than just numbers. Learning how to extract important signals or features from the data can improve the performance and interpretability of machine learning models.  

Then, they touch upon all these other issues, such as ethical issues and security and privacy issues, in their capstone project. Over the last eight years, working on interdisciplinary projects, whether it's undergraduate, graduate students or postdocs, I have found we really need cross-disciplinary training. All my students who work on health projects go to the UC Davis Medical Center to receive cross-training from my collaborators and complete the basic training course for biomedical researchers. For example, one collaborator took them on ICU rounds. Seeing firsthand how their work may impact patients really motivates my students. 

The conversation carried over to the lounge. (Mario Rodriguez/UC Davis) 

Richard Corsi: I want to return to the new UC Davis AI Center in Engineering we spoke about earlier. Through it, we will teach a new course called “AI for All,” which will be open to students across UC Davis, not just our own College of Engineering. That seems pretty important to me. We tend to focus on our own students in the College of Engineering and giving them a good education, but now we're able to branch out a bit and provide education on AI, at least introductory concepts of AI, to students across campus. 

Chen-Nee Chuah: I think that's very important. I think it is going to be similar to “CS for All,” the “Beauty and Joy of Computing” class that UC Berkeley instructor Dan Garcia introduced, which has been shared with over 1,000 high school teachers in the nation and with instructors all over the world now. I think “AI for All” will be like that too. 

Ian Davidson: I'm trying to think of an area AI won't touch in the future. It's hard to think of one, right? It's important that everyone knows about it. 

Richard Corsi: Let's say we sit down and have this same conversation five years from now. What are the major topics you think we'll be talking about in five years? 

Ian Davidson: A lot of AI is linked to just one modality at a time. You have large language models, and you have other AI that is limited to seeing what's in an image. I can see these melding together very easily, which is what we call multi-modal AI. 

With multi-modal AI, you can show a machine a picture and ask it about what's in the picture. It'll describe what is in the picture. You can then ask it to reason about what's in the picture. This human-level intelligence that you and I do, that is going to occur. For example, what you and I are doing right now, we're looking around, we're accessing stuff, we're reasoning about it, AI will be able to do it. And that will almost certainly spur the need for regulations.  

It's going to be a fascinating five years. 

Chen-Nee Chuah: Two things. One is augmented reality and virtual reality. I mean, there's a lot of potential there. And, you know, five years from now, I'll probably be wondering what is real and what is not. I'm pretty sure it will be a mixed reality, holographic, not too far from a Star Trek-type thing.  

And then the second thing that I am thinking of is education. Really, what does it mean to be a university anymore, right? What is our role if we have AI tutoring the students? What does that mean in terms of knowledge?  

Ian Davidson: I think AI will push us to be better engineers. Even in computer science, ChatGPT is now taking over a lot of the basic jobs. It can write code. If you're a computer science programmer and all those simple tasks that you did before are now done by a machine, it elevates you because you have to be better. So, I think it will push us. 

Chen-Nee Chuah: That's my hope. It will push us. But there are so many different reactions to AI, just like everything else. 


This article was featured in our 2025 digital edition of Engineering Progress Magazine.
