Science Journal Winter 2026 Artificial Intelligence.

AI: Opportunities and Challenges

23 January 2026

The various forms of AI each have their own strengths and weaknesses, which are worth considering when identifying the most appropriate tool to use. The large foundation models in particular bring unique challenges proportionate to their massive scale, as they require ever-increasing amounts of data. 

Data quality and bias: The quality of AI models is in many ways linked to the quality of the data they are trained on. If the training data is biased—for example, due to how scientific data is collected or the bias within images and opinions expressed on the internet—the model may perform poorly and perpetuate harmful stereotypes. For foundation models trained on the entirety of the internet, great effort must be taken to sanitize the data. 

Ethics of sourcing training data: While researchers are generally very intentional about sourcing publicly available data and/or data gathered with consent, there are concerns about how foundation models like ChatGPT have acquired their data. Few government regulations exist in the AI space, and several copyright-related lawsuits around training data are currently pending, alleging that the companies behind several foundation models gathered text, artwork, music, and other content, including copyrighted materials, for training without the consent or compensation of the authors and artists. 

Data privacy: Particularly with foundation models, there are also concerns about data privacy, especially if users are working with confidential, sensitive, or copyrighted information; data about students or medical patients; or intellectual property. Although not all user prompts are used in future training data, many workplaces, including Penn State, are building policies around how their employees should use foundation models and encourage employees to read user agreements carefully. 

Illusion of intelligence: AI models by definition are built to emulate human qualities, but they ultimately only provide the illusion of intelligence. Foundation models, for example, are designed to predict plausible-sounding sentences, but that does not mean those sentences are accurate. Although these models can provide a variety of services with varying degrees of quality, they are not writers or scientists, and their outputs should always be thoroughly vetted and interpreted by humans. Recent studies, including one by MIT researchers, have also suggested that outsourcing many tasks to foundation models is negatively impacting memory, attentiveness, and critical thinking skills. 

Black box nature: Because many AI models, including deep learning models, identify associations and patterns on their own, they can be difficult to understand, validate, and improve. Penn State researchers are working to elucidate this “black box” nature and help ensure that AI models are appropriately used. 

Sustainability: The data centers that power AI are collectively straining power grids, with their global power consumption growing 12 percent annually since 2017, according to a report by the International Energy Agency. Data centers also stress drinking water supplies, because they are often cooled with clean water, and raise concerns about carbon emissions and electronic waste. The scale of foundation models makes them particularly resource-intensive, especially during training. New data centers are increasingly being built; for example, the Three Mile Island nuclear power facility near Harrisburg, Pennsylvania, is slated to reopen in 2028 to power a Microsoft data center. 

How can we responsibly use and develop AI? The AI boom will continue to have a huge collective impact on our environment, our wellbeing, and our jobs, but the potential for substantial gains could help offset some costs. There is considerable interest in making AI models more efficient and more sustainable, so that they consume fewer resources. Smaller-scale specialized models also consume far fewer resources than large general-purpose models. Increased regulation could also help ensure ethical AI use and encourage responsible growth in the field. 

“At Penn State, our researchers are exploring not only how to harness AI, but how to do so responsibly,” said Aleksandra (Seša) Slavković, professor of statistics and associate dean for research in the Eberly College of Science. “Like other technologies, AI has an environmental footprint, and we are exploring how to reduce that footprint as well as trying to be thoughtful and intentional about when we use AI in research, teaching, and our everyday operations.” 

Editor's Note: This story is part of a larger feature about artificial intelligence developed for the Winter 2026 issue of the Eberly College of Science Science Journal.