Innovation Fueled by AI

March 4, 2024

UNT researchers are harnessing the power of AI to transform a variety of industries from healthcare and business to transportation and emergency management.

BY AMANDA LYONS

Cover illustration generated using Adobe Firefly

From ancient myths of automatons to the birth of modern artificial intelligence in the 1950s, the idea of AI has captured the human imagination for centuries. While interest has had its ups and downs, AI is seeing a resurgence, with soaring investments in tech and widespread public conversations like those about the popular language model-based chatbot ChatGPT.

By the end of 2024, the U.S. Department of Commerce’s International Trade Administration expects the global AI market to be worth $300 billion.

UNT is at the forefront of the AI revolution, finding new ways to harness the technology in research labs, studios and classrooms. Many researchers are building their own AI models: mathematical algorithms or computational structures designed to perform specific tasks. And as the first university in Texas to offer a Master of Science in Artificial Intelligence, UNT is leading the way in AI education.

Together, students and professors are shaping the future of AI use in multiple industries, from healthcare to emergency management. Through their coursework and research experiences, students are growing into AI leaders, better prepared for careers in a world where AI is becoming as common as electricity.

Faster Rehabilitation

When preparing for surgery, doctors consider their options, picking the procedure they think will have the fastest recovery time and the best long-term outcome. If they could better predict those outcomes, they could choose more confidently among the different surgical options. Mark Albert, UNT associate professor in computer science and engineering, wants to help medical professionals make the best predictions using not only their individual experience and training, but decades of data across many hospitals.

Illustration of a lightbulb with a healthcare symbol, generated using Adobe Firefly

“You can ask a person how they’re feeling after surgery, but their answers are going to vary wildly depending on things like personality and pain tolerance,” Albert says. “If you want a clear picture of outcomes to impact medical decision making, you don’t want to use just one measure or one data point. You want to use them all for a more holistic picture of their health.”

For example, Albert’s team is working with Shriners Children’s hospital to develop the Shriners Gait Index, which uses deep learning to combine over 100 measures to more holistically represent walking quality and better inform surgical decision making.

Albert has worked with Shriners, Lurie Children's Hospital in Chicago, the Shirley Ryan AbilityLab and others for the past decade, using AI with wearable devices to measure clinically relevant outcomes for patients with mobility impairments. The wearables record quality mobility data that can help determine how therapies are administered, and that information can feed prediction models that suggest therapies or variations likely to lead to better outcomes.

“At the end of the day, they’re the experts in the field, and they have the final decision,” Albert says. “But this system is built on decades of surgical data that one person may not be able to experience in a lifetime. The system can point out something they may not have considered, or they can dig into why the system chose a different procedure than the one they initially decided on. It’s almost like asking for a second opinion.”

Tracking Signals

A once-complicated process is becoming much easier through AI and the work of a UNT team led by Yan Huang, a Regents Professor in computer science and engineering. Huang's research involves tracking the source of signals, such as sound waves or radio waves.

“The traditional way of tracking these signals is to use mathematical modeling, but this can be difficult to do because of its complexity. It’s hard to account for changing environments,” Huang says.

Huang is leading a team that includes faculty members Heng Fan, Chenxi Qiu, Qing Yang and Asif Baba in computer science and engineering, and Xinrong Li, Hung Luyen, Yusheng Wei and Tom Derryberry in electrical engineering. Their work is part of a $13 million multi-institutional project funded by the U.S. Army Research Laboratory and led by the Kostas Research Institute in collaboration with five universities: UNT, Northeastern University, Northern Arizona University, the University of Houston and the University of Massachusetts Amherst. The overall goal is to further understanding of how to create a network of AI devices that can monitor and gather data from the surrounding environment.

Huang’s team will create sensors with built-in AI models that can track the signals, accounting for obstacles and situations like waves bouncing off buildings. Huang and her students are creating the models’ training dataset using simulated maps of real-world environments. When a sensor encounters a new environment, it can draw on what it learned from those simulations to judge how best to respond to an unfamiliar situation or obstacle.
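
As a rough illustration of that idea, the sketch below generates training data from a simple simulated propagation model (log-distance path loss plus noise) and fits a model that maps signal-strength readings back to a source location. The sensor layout, propagation model and learner are assumptions for illustration, not the team's actual system.

```python
# Illustrative sketch (all numbers assumed): learn to locate a signal source
# from simulated received-signal-strength readings at fixed sensors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
sensors = np.array([[0, 0], [100, 0], [0, 100], [100, 100]])   # receiver positions (m)

def simulate_rss(source):
    """Received signal strength at each sensor: log-distance path loss plus noise."""
    d = np.linalg.norm(sensors - source, axis=1) + 1e-6
    return -30.0 - 20.0 * np.log10(d) + rng.normal(0, 2, size=len(sensors))

# Training set: thousands of simulated source positions and their readings.
sources = rng.uniform(0, 100, size=(5000, 2))
readings = np.array([simulate_rss(s) for s in sources])

model = RandomForestRegressor(n_estimators=100).fit(readings, sources)

# Estimate an unseen source from its measured signal strengths.
true_source = np.array([37.0, 72.0])
print(model.predict([simulate_rss(true_source)]))               # roughly [37, 72]
```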

“We’re developing more efficient directional communication capability in small devices, which is crucial in high mobility environments such as public safety, emergency response and many other areas,” says Xinrong Li, associate professor of electrical engineering. “Our work will help multiple agents like robots or unmanned aerial vehicles to coordinate more seamlessly together and make decisions as one.”

Fighting Hate Speech

According to the United Nations, the scale and impact of hate speech have been amplified by online media and forums. Using AI technology that monitors language and user behavior, Lingzi Hong, an assistant professor of information science, is studying ways to contend with the rise of hate speech and misinformation online, along with collaborators at the University of Arizona and Peking University in China.

To start, she and her team of students train the AI model with real-world conversations happening on sites such as Facebook, X and Reddit. When feeding the interactions into the language model, they also train the AI to recognize sarcasm and key phrases or words that may seem safe on the surface but are actually problematic in some way. Misinformation is trickier for the model to recognize because the posts often look reliable and true.
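
A heavily simplified sketch of that kind of text classifier is below. The example posts, labels and the TF-IDF plus logistic regression pipeline are stand-ins for the much larger language models and datasets Hong's team works with.

```python
# Toy sketch: a classifier that flags problematic posts. The training posts,
# labels and the simple pipeline are placeholders for the team's real models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "What a 'brilliant' idea, just like the rest of your kind.",  # sarcastic, coded hostility
    "Hope everyone has a great weekend!",
    "People like that shouldn't be allowed here.",
    "Thanks for sharing, really interesting thread.",
]
labels = ["problematic", "safe", "problematic", "safe"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(posts, labels)

# The trained model can then score new comments, or a user's past posts, in bulk.
print(clf.predict(["What a warm welcome you people give."]))
```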

The AI model could even go through a user’s past posts and make a judgment faster than a human. Hong says volunteers, such as online moderators or nonprofit professionals who monitor online misbehavior, could use the AI as a guide as well.

“We can then tell the model how we want it to respond in these instances. To either encourage less hate and respond with positivity or to ignore the comments and carry on the conversation with others,” Hong says.

“Previously, they would need to follow a template, but not all templates work for every situation or they’re so generic it’s very impersonal. But a template that tries to respond to every situation would take too long to go through and find the response you need. This is a good collaboration between AI and people creating an instant guide for the workers.”

Hong’s team has created multiple models that can target specific instances of hate speech or are more tailored toward a certain topic. The plan is to make those models available to the public in the future.

Disaster Preparation

In times of disaster, delivering timely risk information is crucial to prevent loss of life and property. With the help of AI, Tristan Wu, an associate professor of emergency management and disaster science, is researching how people comprehend disaster risk information and how they seek and respond to such information during disasters.

Wu and his collaborators at Oklahoma State University and Jacksonville State University are placing couples in a simulation where a tornado is approaching their home. Using machine learning AI, the researchers analyze participants’ responses to alerts as well as how they gather information and discuss what actions they will take to protect themselves.

Individuals are shown a screen with a tornado warning alert at the top and multiple boxes of information blurred out. The AI tracks which boxes they choose to reveal, the order in which they reveal them, how long they look at each information box and how much information they reveal before discussing it with their partner.
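
The behavioral record this produces might look like the hypothetical event log below, from which reveal order and dwell time for each box can be computed; the box names and timestamps are invented.

```python
# Hypothetical event log from one participant: which blurred boxes were
# revealed, when, and for how long. Column names and values are invented.
import pandas as pd

events = pd.DataFrame([
    {"participant": 1, "box": "radar map",       "revealed_at": 2.1,  "closed_at": 9.8},
    {"participant": 1, "box": "text warning",    "revealed_at": 10.2, "closed_at": 14.0},
    {"participant": 1, "box": "damage estimate", "revealed_at": 15.5, "closed_at": 21.3},
])

events["dwell_s"] = events["closed_at"] - events["revealed_at"]
events["reveal_order"] = events.groupby("participant")["revealed_at"].rank().astype(int)

print(events[["box", "reveal_order", "dwell_s"]])
```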

Additionally, the AI can judge which information had the most impact on an individual. Wu has done studies like this over the past 20 years and says AI has been a major help to his research.

“Before, we were not able to link components of this information, preference and decision making together. AI can do this for us now,” he says.

Wu’s team is working to survey couples in the Dallas-Fort Worth area, but they’ve already found some interesting results from surveys in Seattle.

“Couples there actually spent more time on textual rather than visual information. Things that wrote when and where the tornado would go and how much damage it could do. In the past, when we surveyed couples in the South, we found they would focus on graphical images like weather radars.”

Gaining a better understanding of how people will respond to certain information can help officials tailor their emergency updates to their community’s preferences.

“I believe prediction, planning and mitigating risks and damage will play a big part in planning efforts in the future.”

Cars of the Future

In the future, Song Fu sees a world where streets are filled with electric vehicles capable of driving without a human behind the wheel. It’s not just a pipe dream. The UNT computer science and engineering professor is leading a team of researchers in developing a fully functional self-driving car powered by machine and deep learning programs.

To do this, researchers have installed multiple sensors on the car capable of taking 2D pictures and creating 3D point clouds, sets of points in space that form a 3D outline of an object like a car or house. The team doesn’t want the car to be solely self-reliant, though; they’re also working on how autonomous cars can share their sensor data to communicate with one another.
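
Conceptually, sharing sensor data means putting each car's point cloud into a common frame of reference and combining them. The sketch below illustrates that idea with invented vehicle poses and random points; it is not the team's actual pipeline.

```python
# Illustrative sketch: merge two vehicles' local point clouds into one shared
# view by rotating and translating each into a common world frame.
# Poses and points are made up for illustration.
import numpy as np

def to_world(points, yaw, position):
    """Rotate a local N x 3 point cloud by the vehicle's heading, then translate it."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    return points @ R.T + position

cloud_a = np.random.rand(1000, 3) * 20.0          # car A's local sensor points
cloud_b = np.random.rand(1000, 3) * 20.0          # car B's local sensor points

world_a = to_world(cloud_a, yaw=0.0, position=np.array([0.0, 0.0, 0.0]))
world_b = to_world(cloud_b, yaw=np.pi / 2, position=np.array([30.0, 5.0, 0.0]))

merged = np.vstack([world_a, world_b])            # one combined view of the scene
print(merged.shape)                               # (2000, 3)
```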

Professor Song Fu (pictured middle) is leading a team of researchers in developing a fully functional self-driving car powered by machine and deep learning programs.

“This way multiple cars can sense an object that a single car may not have picked up on. For example, a person using a crosswalk or an upcoming accident on the side of the road,” Fu says.

Information sharing brings up another area Fu and his team are addressing: privacy. Specialized code the team is developing will allow the cars to share object information while protecting both user data and pedestrians’ appearances. For example, one program could blur pedestrians’ faces or even remove pedestrians entirely, without affecting object detection, before the vehicles share images with each other. The third component of the project is infrastructure.
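
One simple way to do that kind of blurring, sketched below with a stock face detector rather than the team's actual code, is to find faces in each camera frame and blur them before the image leaves the vehicle; the detector, blur strength and file names are placeholders.

```python
# Illustrative sketch: blur detected faces in a camera frame before sharing it.
# Uses OpenCV's stock Haar-cascade detector; not the team's actual privacy code.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("camera_frame.jpg")            # hypothetical camera frame
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
    face = image[y:y + h, x:x + w]
    image[y:y + h, x:x + w] = cv2.GaussianBlur(face, (51, 51), 30)

cv2.imwrite("shared_frame.jpg", image)            # only the blurred frame is shared
```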

Along with communication between vehicles, Fu and his team are working on ways permanent structures, such as traffic lights, can send data to vehicles. Similar to communicating with other cars, this would allow a car to know of approaching objects a traffic light camera might pick up that the car can’t.

The car research is being conducted through the U.S. National Science Foundation-funded Center for Electric, Connected and Autonomous Technologies for Mobility (eCAT), a national effort to foster more collaboration in the development of emerging vehicle technologies. As the UNT lead for eCAT, Fu is working with his UNT colleagues — along with researchers at Wayne State University, Clarkson University and University of Delaware — to leverage research across academic disciplines and industry expertise to transform the future of mobility and train the next generation of the workforce in this area.

“AI is everywhere now. It would benefit not just our current students, but our future students to know what it is, what it can do and how we can use it for good,” Fu says.

Responsible Use

As some build AI and others learn to use it, still others want to understand more about the people working with the technology. Such is the case in Yunhe Feng’s Responsible AI Lab, where his team studies how people use the technology responsibly and fairly.

“AI is always changing, and new technologies are being developed every day, but sometimes those inventors aren’t thinking of the responsible use of AI. That’s where we come in,” says Feng, an assistant professor of computer science and engineering.

Feng’s lab gathered posts people made online about code they had ChatGPT write and studied the posts from multiple angles. For the most part, they found people expressed fear about what ChatGPT wrote far more than any other emotion. Their results also revealed that the code written by the AI had many errors or went against conventional coding practices, and that loopholes remain that users can exploit to make ChatGPT show implicit biases.

“If we slightly change the prompt, we can easily get around policies set in place to prevent these biases. It’s fixed now, but these issues can continue to occur. We need to be more careful about creating these large language models and think about what kind of impact AI may have on people, on society and on research and education,” Feng says.

Feng’s lab collaborates with other UNT professors on incorporating AI into their research, such as Huaxiao “Adam” Yang in biomedical engineering, who is using AI to study human organoids, simplified, microscopic versions of organs that can mimic their functionality.

“People know how powerful AI is, but how do they deploy it? With interdisciplinary research we’re looking at how we can adopt AI techniques and use them for other domain sciences,” Feng says.

Anna Sidorova is studying how others work with AI. Her research explores the point where humans and AI meet, from creation to use.

“We’re looking at the intersection where the technology is being shaped by its position in society and where society is being shaped by the qualities of the technology,” says Sidorova, the chair of the Department of Information Technology and Decision Sciences in the G. Brint Ryan College of Business.

Sidorova believes the intention behind AI creation will become an increasingly important facet when studying AI in the future. “Big companies want to make a point of having good intentions, but whether that translates into whatever outcome occurs when the technology is adopted is another thing.”

Specifically, she studies foundation models, the building blocks behind generative AI like ChatGPT, and the social and economic issues surrounding them.

“When a machine learning model is created, to what extent does it create a social relationship that becomes the structure that governs us? Really, all of us are using AI daily whether we realize it or not. It’s best that we start thinking more critically and understand more about its functionality and intent.”