Artificial intelligence is an umbrella term for a variety of technologies that mimic human-like qualities such as perception, learning, and decision-making based on data or learned experiences.
One type of AI is symbolic, or logic-based, artificial intelligence, which follows pre-programmed rules, for example in basic chatbots with limited responses and early chess computers. A popular modern alternative is machine learning, a broad type of AI that can help detect patterns in large data sets, predict outcomes, and often perform tasks without explicit instructions.
Machine learning tends to be categorized as supervised learning, unsupervised learning, or reinforcement learning. With supervised learning, researchers provide models with labeled data—or data with a clear input and output—during training. For example, for a model learning to classify different types of fruits, researchers would provide images of fruits that are labeled as apples, oranges, etc. For a model learning to predict stock prices, researchers would provide variables of interest as well as an output, like the stock price at the end of each day or week. Supervised learning is particularly useful for classification, such as classifying images, and regression, or establishing the numerical relationship between variables, such as predicting stock prices.
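The fruit-classification example above can be sketched in a few lines of Python. This is a toy illustration, not a real model: the features (weight in grams, diameter in centimeters) and the measurements are hypothetical, and a simple nearest-neighbor rule stands in for the many algorithms researchers actually use. The key idea it shows is that each training example pairs an input with a label.

```python
import math

# Labeled training data: each example pairs input features with a label.
# Features are (weight in grams, diameter in cm) -- hypothetical values.
training_data = [
    ((150, 7.0), "apple"),
    ((170, 7.5), "apple"),
    ((130, 6.5), "orange"),
    ((140, 6.8), "orange"),
]

def classify(features):
    """Label a new fruit with the label of its closest training example."""
    nearest = min(training_data,
                  key=lambda pair: math.dist(pair[0], features))
    return nearest[1]

print(classify((160, 7.2)))  # closest to the apple examples, so: apple
```

Because the labels were supplied during training, this counts as supervised learning; the model never has to discover on its own what an "apple" is.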
Alternatively, machine learning can be unsupervised, where the model is left to form associations from the training data without any labels or assistance. These models are particularly good at clustering—finding similarities in data points—or association—finding relationships among variables. Machine learning can also be semi-supervised, in which some labeled data is provided.
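Clustering can also be sketched in miniature. Below is a bare-bones version of the k-means algorithm (one common clustering method, used here purely as an example) grouping unlabeled numbers into two clusters; the data values and the choice of two clusters are illustrative assumptions. Notice that no labels appear anywhere: the model discovers the groups itself.

```python
def kmeans_1d(points, k=2, iterations=10):
    """Group unlabeled 1-D points into k clusters (toy k-means)."""
    # Start the cluster centers at the first k distinct values.
    centers = sorted(set(points))[:k]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center.
            idx = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        # Move each center to the average of its assigned points.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two obvious groups, around 1 and around 10 -- found without any labels.
print(kmeans_1d([1.0, 1.2, 0.8, 10.0, 10.5, 9.5]))
```

On this data the centers settle near 1 and 10, recovering the two groups a person would see by eye.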
Finally, with reinforcement learning, models learn by trial and error to achieve a goal. For example, a chess bot learns from each move to ultimately maximize its gains and minimize losses.
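Trial-and-error learning can be sketched with an even simpler game than chess: an agent repeatedly chooses between two actions, observes a reward, and updates its estimate of how good each action is. The reward probabilities below are hypothetical and hidden from the agent, and this epsilon-greedy scheme is just one simple strategy, not the method any particular chess bot uses.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

reward_prob = {"A": 0.2, "B": 0.8}   # hidden from the agent
estimates = {"A": 0.0, "B": 0.0}     # the agent's learned value of each action
counts = {"A": 0, "B": 0}

for step in range(2000):
    # Explore a random action 10% of the time; otherwise exploit the best one.
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(estimates, key=estimates.get)
    reward = 1.0 if random.random() < reward_prob[action] else 0.0
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(max(estimates, key=estimates.get))  # the agent settles on "B"
```

No one ever tells the agent that "B" pays off more often; it learns that purely from the rewards its own trials produce, just as a chess bot learns from the outcomes of its moves.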
“If we think of machine learning as a dense forest, there are many vantage points, or problems we would like to solve, such as classification, regression, or clustering,” said Bharath Sriperumbudur, professor of statistics and mathematics. “There are many trails or methods we could take to get to each of these vantage points. These could include deep learning, the kernel method that I study, or many other algorithms.”
Deep learning is a type of machine learning inspired by the neural networks of the human brain (see deep learning Q&A for more information). Using artificial neural networks (ANNs), these models can analyze large amounts of structured data and have been used in voice recognition, chatbots, recommendation services in retail and streaming settings, as well as image recognition, like that used in self-driving cars. Generative AI is a subset of deep learning that results in the generation of text, images, or other content. Large language models (LLMs), for example, train on existing texts in order to predict logical sentences, while large physics models can train on high-quality physics data to predict shape-related properties for engineering purposes. These models can be highly specific, like those built by Associate Professor of Physics Dezhe Jin to understand the structure of birdsong, or incredibly large, like OpenAI's ChatGPT and Microsoft Copilot. These large models train on vast quantities of data and are also referred to as foundation models, which researchers can refine and build on for their own specialized purposes.
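An artificial neural network, at its smallest, is layers of simple "neurons": each one takes a weighted sum of its inputs and fires if the total is large enough. The sketch below wires three such neurons into two layers that compute XOR ("one input on, but not both"), a classic function a single neuron cannot compute alone. The weights here are chosen by hand for illustration; in deep learning, weights like these are learned automatically from vast amounts of data, and real networks have millions or billions of them.

```python
def step(x):
    """A simple activation function: fire (1) if the input exceeds 0."""
    return 1 if x > 0 else 0

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, then activation."""
    return step(sum(w * i for w, i in zip(weights, inputs)) + bias)

def xor_network(x1, x2):
    # Hidden layer: one neuron detects "either input on", another "both on".
    h1 = neuron([x1, x2], [1, 1], -0.5)   # acts like OR
    h2 = neuron([x1, x2], [1, 1], -1.5)   # acts like AND
    # Output layer combines them: on when OR fires but AND does not = XOR.
    return neuron([h1, h2], [1, -1], -0.5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_network(a, b))  # prints 0, 1, 1, 0
```

Stacking layers is what puts the "deep" in deep learning: each layer builds slightly more abstract features from the one below, whether the inputs are two bits, the pixels of a road scene, or the words of a sentence.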
Editor's Note: This story is part of a larger feature about artificial intelligence developed for the Winter 2026 issue of the Eberly College of Science Science Journal.