“I like to think of artificial intelligence (AI) like electricity because it’s being used everywhere–it’s essential,” says Karl Kuhnert, professor in the practice of organization and management. “Anything that can be digitized or automated will be. The end of routine work is near, and the cost of not using AI is obsolescence. You won’t be in business for long because all of your competitors are using AI to their advantage.”
Kuhnert has created a way to virtually unite students with the best minds in their respective fields. The technology is called Tacit Object Modeler, or TOM for short, and Kuhnert is using it in the classroom to teach business ethics. TOM replicates the decisions an expert makes by sampling how they respond to different scenarios. Based on those responses, TOM learns how the expert uses “tacit knowledge” to make critical decisions.
Kuhnert advised the creation of the AI technology TOM by Merlynn Intelligence Technologies. He says the idea behind TOM is to duplicate the decisions that experts make so that non-experts can extract value from them. Non-experts can input data–the circumstances of a given scenario–and TOM will render a decision intended to serve as a second opinion.
“For example, some physicians make better diagnoses than others,” Kuhnert says. “The idea with TOM is to digitize the very best physician making that particular diagnosis and share that with the world.”
When digitizing an expert, Kuhnert says the AI identifies the variables that contribute to a given expert’s decision about a given situation. The software collects this data–the different variables for different situations–and builds an algorithm. The expert can then test to ensure that the algorithm accurately replicates their decisions.
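The loop described above can be sketched in a few lines. This is a hypothetical, heavily simplified stand-in for TOM, not Merlynn's actual method: the scenario fields are invented, and the one-variable "stump" learner merely illustrates how sampled expert decisions can be turned into a rule the expert can then test against their own calls.

```python
# Illustrative sketch of digitizing an expert: collect scenario -> decision
# samples, fit a simple model, then verify the model reproduces the expert.
# The variables and the stump learner are invented for illustration only.
from collections import Counter, defaultdict

# Expert-labelled scenarios: situation variables -> the expert's decision.
scenarios = [
    ({"severity": "high", "prior_incident": "yes"}, "escalate"),
    ({"severity": "high", "prior_incident": "no"},  "escalate"),
    ({"severity": "low",  "prior_incident": "yes"}, "monitor"),
    ({"severity": "low",  "prior_incident": "no"},  "monitor"),
]

def fit_stump(data):
    """Find the single variable whose values best predict the expert's decisions."""
    best_var, best_rule, best_acc = None, None, -1.0
    for var in data[0][0]:
        votes = defaultdict(Counter)
        for features, decision in data:
            votes[features[var]][decision] += 1
        # Majority decision for each observed value of this variable.
        rule = {value: counts.most_common(1)[0][0] for value, counts in votes.items()}
        acc = sum(rule[f[var]] == d for f, d in data) / len(data)
        if acc > best_acc:
            best_var, best_rule, best_acc = var, rule, acc
    return best_var, best_rule

var, rule = fit_stump(scenarios)
# The "expert test" step: check the model against the sampled decisions.
agreement = sum(rule[f[var]] == d for f, d in scenarios) / len(scenarios)
```

Here the learner discovers that "severity" alone reproduces the expert's decisions on these samples; in a real system the expert would keep probing the model with fresh scenarios until satisfied it decides as they would.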
Jesse Bockstedt, professor of Information Systems & Operations Management, defines artificial intelligence models like TOM as technologies that use data to build statistical models that make predictions or perform human cognitive tasks, improving business performance and driving value.
Bockstedt explains that AI applications in business range from robotics used in manufacturing procedures, such as moving inventory around in a warehouse or scanning packages for defective items, to customer service chatbots and algorithms that recommend products to customers.
The benefits AI offers its user are expediency, efficiency, and the facilitation of personalization, which, in turn, allow businesses to cut costs and improve profits.
We might think of AI and machine learning as futuristic things, but they are not future technologies. They’re current technologies. In every industry and every facet of an organization, there are opportunities for finding value with AI and machine learning.
“Awareness of these applications in business is key, and understanding AI and machine learning is essential,” Bockstedt says.
Bockstedt says the prevalence of AI and machine learning in the past decade can be attributed to several factors, such as the accessibility and abundance of data that came with the dot-com boom, the rise of e-commerce, and the advancement of smartphones. Other factors include the increase in computing power and the research and development within the deep learning space, which enable technologies like automated tracking, speech recognition, and image recognition.
Introducing AI into Medical Care
Kuhnert says it is difficult to introduce AI technologies into the medical field because the standard of care is so high. Christopher Awad 23MBA 23MD is collaborating with Kuhnert on TOM to explore just how the software could be implemented in medicine.
For TOM specifically, Kuhnert says AI derived from data or models of experts will present serious problems if it deviates from accepted standards of care. Misconceptions surrounding what AI technologies are and how they work present another issue.
To combat these misconceptions and investigate how to apply TOM, Awad and Kuhnert surveyed healthcare professionals on their impressions of technologies like TOM and what they believe the role of advanced technologies should be in hospitals.
Among the advanced technologies intended for medical use are models like BoXHED, software that lets scientists render real-time predictions of when an adverse event is at high risk of occurring.
Donald Lee, associate professor of Information Systems & Operations Management at Goizueta and of biostatistics and bioinformatics at Rollins School of Public Health, is one of the creators behind BoXHED. He developed the algorithm for BoXHED to extract predictive signals from high-frequency health data, such as those logged by wearable devices.
“An Apple Watch monitors a number of vital signs and provides a streaming data set of health information, minute by minute, as long as you’re wearing your watch,” Lee says. “BoXHED can take that data and try to identify time-varying interactions among them that are indicative of an event happening, such as stroke. If your watch is monitoring 50 signals, maybe only 12 matter. BoXHED distills these into a single number called the hazard rate, which is the probability of experiencing a stroke in the next instant in time.”
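The "distill many signals into one number" idea can be illustrated with a toy discrete-time hazard. The signal names, weights, and bias below are invented for illustration; BoXHED itself learns these interactions from data with gradient boosting rather than using fixed weights.

```python
# Simplified, discrete-time illustration of a hazard rate: many streaming
# vitals combined into one number, the probability of an event in the next
# time step. All signal names and weights here are invented examples.
import math

def hazard(vitals, weights, bias=-8.0):
    """Combine the signals that matter into one risk number via a logistic link."""
    score = bias + sum(w * vitals.get(name, 0.0) for name, w in weights.items())
    return 1.0 / (1.0 + math.exp(-score))  # probability of an event next step

# Suppose only three of the watch's many signals carry predictive weight.
weights = {"resting_hr": 0.01, "systolic_bp": 0.03, "spo2_drop": 0.8}

stream = [  # minute-by-minute readings from a wearable
    {"resting_hr": 60, "systolic_bp": 120, "spo2_drop": 0.0},
    {"resting_hr": 95, "systolic_bp": 165, "spo2_drop": 3.0},
]
risks = [hazard(minute, weights) for minute in stream]  # one hazard per minute
```

The second minute, with elevated blood pressure and a drop in blood-oxygen, yields a much higher hazard than the first: that single rising number is what a clinician (or an alarm) would watch.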
Lee says BoXHED works like other machine learning models, in that it extracts information meaningful to humans from noisy data.
“The human mind simply cannot deal with high dimensional data sets that change constantly, so you need a way to summarize them intelligently,” Lee notes. “All machine learning models essentially use a branch of mathematics called approximation theory to recover some salient quantity from the data, for example the hazard rate of stroke.”
Presenting the Ethical Dilemma
Like all things, the wonders of AI don’t come without costs. The most prevalent, Bockstedt says, is the presence of biases in AI models, which make their way into algorithms when the data used to create them are biased. These biases can have serious implications.
“We have to think about not just the first order effects of using AI and machine learning, but the second, third, and fourth order effects,” Bockstedt says.
For instance, when using AI to render sentences for criminal cases, he shares, “The first effect of using a sentencing guideline software system is that we’re able to speed up the sentencing process. But the second order effect is that we realize, over time, that there’s bias, and we’re treating certain people more strictly than we are other people. The third order effect is that after this happens for long enough, we start to see impact on society directly–we see certain types of people being locked up more than others.”
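The feedback loop behind those second- and third-order effects can be made concrete with a toy simulation. The groups, rates, and drift factor below are purely illustrative numbers, not data from any real sentencing system.

```python
# Toy simulation of bias feedback: a system trained on biased historical
# decisions reproduces the disparity (second order), and each retraining
# cycle on its own outputs compounds it (third order). Numbers are invented.
history = {"group_a": 0.30, "group_b": 0.45}  # strict-sentence rate in the data

def retrain(rates, drift=1.1):
    """One retraining cycle: the model's decisions feed back into the training
    data, so the over-represented group's rate drifts further upward."""
    floor = min(rates.values())
    return {g: min(1.0, r * drift) if r > floor else r for g, r in rates.items()}

rates = dict(history)
for _ in range(5):  # five retraining cycles on the system's own outputs
    rates = retrain(rates)

gap_before = history["group_b"] - history["group_a"]  # disparity in the data
gap_after = rates["group_b"] - rates["group_a"]       # disparity after feedback
```

Even with a modest drift per cycle, the gap between groups widens every time the system is retrained on its own decisions, which is why catching bias early matters.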
Awad has committed much of his work to the interaction between AI technologies and population diversity. In addition to collaborating on TOM, he is exploring diversity issues in medicine. He is conducting research at Morehouse School of Medicine, where he is developing a data science curriculum for healthcare professionals that explores health-disparity use cases.
“The role of advanced technologies is to increase our agency, preserve our ethical beliefs, and promote equity in the dispensation of health care to everyone,” Awad says.
Bockstedt says that regardless of the field of application, it is the responsibility of technology creators to be aware of these biases, to use high-quality data when building these technologies, and to use explainable AI so that users can understand how predictions are made.
One way to address these ethical considerations is to diversify who is in the room when building and using new technologies. Emory has set out to do just that through its AI.Humanity initiative. The initiative includes hiring new faculty across departments, placing AI and machine learning experts across campus, and facilitating the exchange of ideas relating to advanced technologies.
Kuhnert weighs these considerations when using TOM as well. He says issues of bias in AI must be at the forefront of applications in any industry.
“There is always the potential of getting biased data, which creates biased algorithms and produces biased decisions,” Kuhnert says.
Artificial intelligence is a tool, and like any other tool, it can be used for good or for bad. Our hope is that we’re not giving this to people who have evil intentions and wish to duplicate the decision-making of someone who’s not ethical.
Karl Kuhnert
Bockstedt says that as more tasks become automated, he foresees a rise in explainable AI and AI ethics. Although we may expect some reduction of the workforce, we can also look forward to the creation of new jobs, such as AI trainer, and a period of coexistence in which technology like TOM helps workers perform better instead of replacing them.
“Some people see AI as an exciting future, while others feel it is part of a dystopian society,” Kuhnert says. “I don’t really hold either belief. It’s completely in our hands what we want to do here. We can use this technology to unburden the world. We have to decide together how we want to use this technology.”
Learn more about Emory Executive Education’s AI and Machine Learning for Business.