More than 50 years ago, Alvin Toffler authored the book Future Shock, which described a frightening world disrupted by rapidly emerging technologies. Today, one of those technologies, artificial intelligence (AI), powers chatbots like ChatGPT, Microsoft’s Bing, and Google’s Bard. AI not only pulls information off the web but can also engage in human-like conversations, create essays, and perform more complex tasks, like writing computer code.
AI also poses threats to life as we know it. More than 1,000 technology leaders, researchers, and other pundits working in and around artificial intelligence signed an open letter in late March warning that AI technologies present “profound risks to society and humanity.”
We spoke with Gang “Gary” Hu, assistant professor of computer information systems (CIS) at Buffalo State University, who has researched AI. He explained some of the benefits and possible dangers of this evolving technology.
Name: Gang “Gary” Hu
Title: Buffalo State Assistant Professor of Computer Information Systems (CIS)
Hu joined the faculty in fall 2019. Previously, he served as an assistant professor of CIS at SUNY Fredonia.
Hu earned a Ph.D. in computer science from Dalhousie University in Nova Scotia, Canada, in 2012, and a master’s degree in computer science from the University of Windsor in Ontario in 2005. His primary research areas include image and natural language processing, computational models of visual perception, artificial intelligence, and machine learning. His ongoing projects include AI-based pharmacy error analytics, a BERT-based negotiation chatbot, content-based multimedia data retrieval, multisource audio separation and recognition, and deep learning networks.
What are the main benefits of AI?
AI can safely complete heavy and sometimes dangerous jobs, such as those found in manufacturing. It can also benefit white-collar professions like medicine and education. AI is increasingly handling paperwork for doctors, freeing them up to spend more time on patient care.
AI helps in educational settings by customizing learning. Based on students’ behaviors and personalities, AI can give teachers more precise reminders about how to respond to each student. This work has just begun, and there are few results yet. I believe that AI’s ability to come up with real-time solutions will be a huge benefit for teachers.
What do you perceive as the dangers of AI?
Generally speaking, AI was developed to help people. Right now, we’re still safe. The reason is that AI can do things comparable to human beings, not better, at least in terms of semantic understanding. It’s just a machine that is a helper. In the future, I think, AI will become smarter than we are. That’s when it will be dangerous.
We’re seeing small signs of trouble already. Six years ago, two chatbots at Facebook’s headquarters were trained to talk to each other. Gradually, the machines developed a new language only they could understand, which is a little scary.
Right now, it’s still called AI. The new term for the future, one that some people are already using, is artificial general intelligence, or AGI. It’s able to develop intelligence on its own, improve itself, and eventually, create something we don’t understand.
What is being done to prevent AI from reaching a dangerous level?
Right now, I’d say 99 percent of researchers’ effort goes toward creating AI models that can do something. Unfortunately, very little effort is going toward controlling what AI can do.
If we only specify the ultimate goal we’re trying to achieve with a project and don’t control the intermediate steps, or subgoals, AI will generate those steps on its own. We need to do more to make sure the subgoals aren’t going to hurt us. In order to fully control everything AI does, we have to understand and control all the numbers in the model. Currently, that’s an impossible mission because the number of parameters is huge. For example, the GPT-4 model has billions of parameters; therefore, more research is required.
Will AI replace workers?
In my view, robots are machines. As with any new technology, machines can replace human workers. Often, what robots or machines are doing now is handling workers’ burdens. Generations ago, machines helped farmers tend to their acres of crops. Assembly-line machines reduced the labor force in manufacturing. When ATMs first came out, people worried about the loss of bank tellers. Today, we still have bank tellers who are doing more complicated jobs.
AI will change some jobs and replace others. We, as workers, have to focus on being more creative and let the machines do the routine, more data-intensive work. Right now, AI is based on massive and inconsistent data. Sometimes, it gives you good answers; other times it does not. This could improve over time.
Can you explain your research on BERT-based negotiation?
BERT stands for bidirectional encoder representations from transformers. It belongs to the same family of transformer-based language models as the ones behind ChatGPT. BERT reads text in both directions at once and analyzes a document holistically. This enables it to better detect a document’s sentiment. For instance, a customer review may look positive word by word, but when you understand the relationship of every word, it’s actually a bad review.
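For readers who want to see what this looks like in practice, here is a minimal sketch of BERT-style sentiment analysis using the Hugging Face transformers library; the pretrained model name and the sample review are illustrative assumptions, not taken from Hu’s projects.

```python
# Minimal sketch of BERT-style sentiment analysis (illustrative only; not Hu's code).
# Assumes the Hugging Face "transformers" library and an off-the-shelf model are available.
from transformers import pipeline

# A distilled BERT variant fine-tuned for sentiment classification.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# A review that opens on a positive note but is negative overall. Keyword spotting might
# call it positive; a model that weighs every word in context should label it negative.
review = ("The packaging was beautiful and shipping was fast, "
          "but the product broke on the first day and support never replied.")

print(classifier(review))
# Expected output (scores will vary): [{'label': 'NEGATIVE', 'score': 0.99...}]
```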
I began working on BERT in 2021 through a collaboration with colleagues at my alma mater in Canada. Two of my students, CIS majors Dustin Doctor, ’22, and Matthew Clifford, ’23, were hired to do data preparation and coding for model training, testing, and result analysis in the spring and fall of 2022. They had good results; the trained language model was able to predict whether an online negotiation would end in a deal or no deal. Specifically, it reached 92 percent accuracy in predicting the outcomes of online-sales negotiations (bargaining over the price of something sold online, with back-and-forth messages from both parties). In addition, Matthew Clifford coauthored an article that was accepted for an international conference this summer in Japan. This research is very exciting.
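As a rough idea of how such a predictor could be built, the sketch below fine-tunes a BERT classifier on labeled negotiation transcripts (deal vs. no deal). It is a generic illustration, not the team’s actual code; the data file, column names, and hyperparameters are placeholders.

```python
# Generic sketch: fine-tune BERT to predict deal vs. no deal from negotiation text.
# Not the research team's code; "negotiations.csv", its columns, and the settings are placeholders.
from datasets import load_dataset
from transformers import (BertForSequenceClassification, BertTokenizerFast,
                          Trainer, TrainingArguments)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Expected input: a CSV with a "text" column (the concatenated back-and-forth messages)
# and a "label" column (1 = deal, 0 = no deal).
data = load_dataset("csv", data_files="negotiations.csv")["train"].train_test_split(test_size=0.2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-negotiation",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=data["train"],
    eval_dataset=data["test"],
)
trainer.train()
print(trainer.evaluate())  # Reports evaluation loss; add a compute_metrics function to get accuracy.
```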
Photos by Jesse Steffan-Colucci, Buffalo State photographer.