Find answers to common questions about artificial intelligence (AI).
If you’re confused about what the term artificial intelligence or AI means, you’re not alone. AI is used in various contexts to refer to a wide variety of technologies, systems, and disciplines. In most everyday contexts, AI refers to specific AI applications, like ChatGPT, self-driving cars, or social media algorithms. In computer science, the term AI can refer to the behind-the-scenes computer programming that powers these AI tools. In the realm of research, AI is often used to refer to the entire field dedicated to theorizing and developing artificial intelligence technologies.
With all these different definitions, it can feel difficult to grasp what AI is and what it does. A helpful starting definition for AI, then, is a computer system that performs tasks that usually require human intelligence. Let’s break down this definition into its three component parts:
Despite the current buzz surrounding the technology, AI has actually been around for a long time. In fact, some of the theoretical underpinnings for AI date back to at least the 1600s. So why all the AI hype now? Two major changes laid the groundwork for our current AI revolution (see PBS’s Crash Course on Artificial Intelligence for more detail on this history):
Let’s start by defining what a model is. At their simplest, models are generalizations of the world around us.
For meteorologists, for example, weather models are fine-tuned generalizations about how we think the weather works, based on decades of observations of wind, clouds, rain, air pressure, and other atmospheric phenomena. While these models help us understand how the weather works in general, their real value lies in their ability to predict future weather behavior. Each time meteorologists’ forecasts prove inaccurate, they revise their models to account for the patterns they missed.
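If you’re curious what that looks like in practice, here’s a tiny, made-up sketch in Python of the same idea – learn a generalization from past observations, then use it to predict. The temperature readings are invented and stand in for real weather data:

```python
# A toy "weather model": generalize from past observations, then predict.
# The temperature readings below are made up for illustration.
past_temps = [61, 63, 64, 66, 65, 67, 68]  # daily highs over the past week

# "Training": learn a simple generalization -- the average day-to-day change.
changes = [past_temps[i + 1] - past_temps[i] for i in range(len(past_temps) - 1)]
avg_change = sum(changes) / len(changes)

# "Prediction": apply that generalization to forecast tomorrow's high.
forecast = past_temps[-1] + avg_change
print(f"Forecast for tomorrow: about {forecast:.1f} degrees")

# If the forecast turns out to be wrong, the model gets revised --
# for example, by adding more observations or new factors (wind, pressure, etc.).
```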
Weather models illustrate a few key features of models more generally:
The observations, examples, or cases used to train AI models are collectively referred to as training data. Training data can be anything, depending on what you want an AI model to predict. Let’s say a car insurance company wanted to make an AI model that calculates premiums. Its training data might include things like past customers’ driving records, credit histories, ages, and annual mileage. Once it detects patterns among these various factors, the AI model can start predicting who is at higher risk of filing insurance claims and, by extension, who needs to pay a higher premium.
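To make this concrete, here’s a rough sketch in Python of how that insurance example might work. The customer data and the similarity-based risk rule below are invented for illustration; they’re not how any real insurer actually calculates premiums:

```python
# Toy illustration of training data: each past customer is one "example".
# All numbers here are invented for illustration only.
training_data = [
    {"accidents": 0, "age": 45, "miles_per_year": 8000,  "filed_claim": False},
    {"accidents": 2, "age": 22, "miles_per_year": 20000, "filed_claim": True},
    {"accidents": 1, "age": 30, "miles_per_year": 15000, "filed_claim": True},
    {"accidents": 0, "age": 60, "miles_per_year": 5000,  "filed_claim": False},
]

def risk_score(customer):
    """Score a customer by how similar they look to past customers who filed claims."""
    score = 0.0
    for past in training_data:
        # Similarity: small differences in accidents, age, and mileage = more similar.
        distance = (abs(customer["accidents"] - past["accidents"]) * 10
                    + abs(customer["age"] - past["age"]) / 10
                    + abs(customer["miles_per_year"] - past["miles_per_year"]) / 5000)
        similarity = 1 / (1 + distance)
        # Similar past customers who filed claims push the risk score up.
        score += similarity if past["filed_claim"] else -similarity
    return score

new_customer = {"accidents": 1, "age": 25, "miles_per_year": 18000}
print("Higher premium" if risk_score(new_customer) > 0 else "Standard premium")
```

The model never gets told what makes a driver risky; it just generalizes from whatever patterns sit in the past examples it was given.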
At their most basic, algorithms are sets of step-by-step rules or instructions for accomplishing a goal. Recipes, for example, are technically a kind of algorithm, just not a computer-based one.
Algorithms are the mechanics that make AI models work. They are the tools that help models analyze data and identify complex patterns, which they then use to perform tasks. The algorithms that power AI models are extremely complex but, at their core, they are simply instructions that help computers spot patterns and accomplish tasks.
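To see what “step-by-step instructions” means in practice, here’s a very small, non-AI algorithm written in Python – a made-up spelling checker that follows four simple steps:

```python
# A tiny, non-AI algorithm: step-by-step instructions for flagging misspelled words.
# The word list below is deliberately tiny and made up for illustration.
known_words = {"the", "cat", "sat", "on", "mat"}

def check_spelling(sentence):
    flagged = []
    # Step 1: break the sentence into individual words.
    for word in sentence.lower().split():
        # Step 2: strip punctuation so "mat." matches "mat".
        word = word.strip(".,!?")
        # Step 3: flag any word that isn't in our list of known words.
        if word not in known_words:
            flagged.append(word)
    # Step 4: report the results.
    return flagged

print(check_spelling("The cat szt on the mat."))  # -> ['szt']
```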
Algorithms are everywhere around us and they orchestrate much of our everyday lives without us even knowing. They shape how we experience the internet by ranking, sorting, searching, and filtering the information we encounter online and the results we get on Google. They shape what we see on our social media feeds and customize the types of ads we see on the internet. They suggest the next best words or phrases when we’re typing in a document with tools like autocomplete and identify grammatical or spelling errors we’ve made with tools like grammar checkers.
But algorithms go well beyond our experience on the internet. In many cities, for example, algorithms help control traffic lights and respond to real-time traffic patterns. They power self-driving cars and help screen for concerning health conditions. More recently, organizations have begun using AI to make important decisions related to things like hiring, in a process known as automated decision-making. Using computer algorithms to make decisions may seem, at first, to solve many of the biases that arise when humans make decisions. In reality, however, many of these biases may be unintentionally amplified (see Can algorithms be harmful?).
Let’s take search engines as an example. Google uses a complex search algorithm that finds, sorts, and ranks relevant results based on what you type into the search bar. Here’s a simplified, behind-the-scenes look at what this algorithm – the step-by-step directions orchestrating Google’s search function – might do when you type in “cute cats”:
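The toy Python sketch below captures that general shape – find, score, and rank. The pages, scores, and scoring rule are all invented for illustration; Google’s real algorithm is vastly more complex and not public:

```python
# A toy search "algorithm": find, score, and rank pages for a query.
# The pages and scores are invented; a real search engine is far more complex.
pages = [
    {"title": "10 Adorable Kittens", "text": "cute cats and adorable kittens", "popularity": 90},
    {"title": "Cat Care Basics",     "text": "how to care for cats",           "popularity": 70},
    {"title": "Dog Training Tips",   "text": "train your dog in ten days",     "popularity": 80},
]

def search(query):
    query_words = query.lower().split()
    results = []
    for page in pages:
        # Step 1: find pages that mention the search words.
        matches = sum(page["text"].count(word) for word in query_words)
        if matches == 0:
            continue
        # Step 2: score each match (here: keyword matches plus popularity).
        score = matches * 10 + page["popularity"]
        results.append((score, page["title"]))
    # Step 3: rank results from highest to lowest score.
    results.sort(reverse=True)
    return [title for score, title in results]

print(search("cute cats"))  # -> ['10 Adorable Kittens', 'Cat Care Basics']
```

A real search engine folds your location, search history, and hundreds of other signals into that score, which is part of why your results can differ from a friend’s.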
Algorithms are powerful tools that help run many of the systems that make our lives easier. However, because they help computers spot subtle patterns in data, they are also extremely effective at detecting and amplifying patterns of human bias. These biases can have harmful consequences in the realms of automated decision-making, social media, and search engines:
Many people believe that computer algorithms are more objective and fair than humans and so should be used to reduce human bias in decision-making processes. Algorithms, however, are designed by humans and are trained on human data, meaning that human biases seep into the algorithms powering AI models. While some of the biases present in algorithms are harmless, others can have life-altering consequences, especially when these algorithms are used to make choices about who gets access to housing, jobs, and healthcare.
For example, imagine a company wanted to design an algorithm that helps rank job applicants for a tech position. They train the algorithm on hundreds of successful and unsuccessful job applications from the past two decades. As it combs through these applications, the algorithm might start to identify patterns about what makes a successful applicant – things like education, skills, and past work experience. It might also pick up on other, unintended traits present in the application data, like an applicant’s gender, particularly in a male-dominated industry like tech. The algorithm might learn these past hiring biases and reproduce them, recommending only male job candidates. This is an example of algorithmic bias – instances when algorithms produce discriminatory outcomes – and it is precisely what happened when Amazon attempted to design a similar hiring algorithm for the company in 2015.
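Here’s a deliberately simplified sketch, in Python, of how that can happen. The applications below are invented, and this is not Amazon’s actual system – it just shows how ranking new applicants by similarity to past hires can quietly reward whatever the historical data skews toward, including gender:

```python
# Invented historical data: past hires in this dataset skew heavily male.
past_successful = [
    {"years_experience": 5, "knows_python": 1, "gender": "male"},
    {"years_experience": 6, "knows_python": 1, "gender": "male"},
    {"years_experience": 4, "knows_python": 1, "gender": "male"},
    {"years_experience": 5, "knows_python": 1, "gender": "female"},
]

def similarity(a, b):
    """Count how many attributes two applications share (including gender)."""
    return sum(1 for key in a if key in b and a[key] == b[key])

def rank_score(applicant):
    """Score an applicant by average similarity to past successful applications."""
    return sum(similarity(applicant, past) for past in past_successful) / len(past_successful)

# Two new applicants with identical qualifications -- only gender differs.
applicant_a = {"years_experience": 5, "knows_python": 1, "gender": "male"}
applicant_b = {"years_experience": 5, "knows_python": 1, "gender": "female"}

print(rank_score(applicant_a))  # higher, because past hires were mostly male
print(rank_score(applicant_b))  # lower, despite identical qualifications
```

Notice that nothing in the code mentions bias – the skew comes entirely from the historical data the score is built on.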
The more organizations, like employers and schools, begin to rely on AI-powered software to identify qualified applicants, the higher the stakes of algorithmic bias become. And since many companies don’t disclose how their algorithms make decisions or even whether they use algorithms in the first place, there is often little recourse for those who feel they may be the victim of this type of bias.
Not only can algorithms amplify bias in automated decision-making, they can also amplify extreme messages on your social media feeds. Because social media algorithms are designed to promote posts based on how much engagement they receive, posts with more outrageous headlines, over-the-top language, and outlandish – and oftentimes prejudiced – claims are often the ones that go viral. This kind of sensationalized content is called rage bait, and it is deliberately provocative in order to keep you engaged. This type of media profits off your outrage and can leave you feeling lonely, isolated, and emotionally exhausted.
Social media algorithms also serve you hyper-individualized content based on the posts you like, share, comment on, and interact with. This personalization can connect us with communities based on our interests or recommend new content that we might like. But it can also lead to what are called filter bubbles, or personalized content on the internet that normalizes and reinforces a person’s existing beliefs and preferences. Filter bubbles drown out certain views and perspectives online and can keep you hooked on social media sites for long periods of time. In extreme cases, they can even push extreme views to try to keep you engaged. Like rage bait, filter bubbles can leave you feeling isolated and can make you less open to alternative perspectives.
If you and a friend googled “cute cats,” chances are you would each receive different results despite having searched for the same thing. This is because search engines like Google use algorithms that curate your search results based on your past search history, location, preferences, and online habits. While most of us are familiar with how our behavior on social media affects what we see on those platforms, the personalized nature of search engine algorithms – an important tool for accessing and verifying information – is far less commonly known.
As with social media, these search engine algorithms can shape the types of information we get online and can lead to filter bubbles. In most cases, this type of search engine personalization can be helpful. In other cases, this hyper-individualized design can lead to exposure to dangerous dis/misinformation and reinforce harmful beliefs.
Algorithms themselves aren’t inherently “good” or “bad,” but they can detect and reproduce harmful biases in the realms of automated decision-making, social media, and search engines:
Because of this, it is essential to always approach the outputs of AI tools and other algorithmically powered systems with a critical eye. This can seem overwhelming at first, but you can start by simply recognizing where and how algorithms shape your online experience:
Developing this awareness can give you enough time to pause, process information, decide on a course of action, and seek out any additional information and resources.
Some helpful resources for spotting and protecting against algorithmic bias:
As mentioned previously in our discussion of AI models, AI is an extremely effective pattern-spotting machine. Once it detects a pattern from a large collection of examples, it uses this pattern to accomplish tasks (e.g., sorting and ranking search results) or predict likely outcomes (e.g., forecasting weather) when it encounters a new example. Simpler AI models, like autocomplete, use algorithms to predict the next word or phrase in a sentence and suggest it to users as they’re typing. Models like ChatGPT, known as Large Language Models, do a much more complicated version of this word prediction on a much larger scale, generating the next most likely words and phrases in response to your input. AI image generators, like DALL-E, predict the next most likely pixels to create intricate graphics. Other AI models predict traffic patterns, spending habits, and even health outcomes.
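Here’s a minimal sketch of that idea in Python – a toy next-word predictor “trained” on a few made-up sentences. Real Large Language Models work on a vastly larger scale, but the underlying move is the same: count patterns, then predict the most likely next word:

```python
from collections import Counter, defaultdict

# A toy next-word predictor. The "training data" is a few made-up sentences;
# real large language models learn from vastly more text.
training_text = (
    "the cat sat on the mat . "
    "the cat chased the mouse . "
    "the dog sat on the rug ."
)

# "Training": count which word tends to follow each word.
followers = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the most likely next word based on the counted patterns."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' (it followed 'the' most often)
print(predict_next("cat"))  # -> 'sat' ('sat' and 'chased' tie; the first counted wins)
```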
At their core, AI models are pattern-spotting and probability machines. They identify patterns in vast amounts of data and then make predictions about aspects of the world based on those patterns. However, as we’re likely all too aware from rain forecasts or betting odds, probabilities aren’t always accurate. In fact, because AI models like ChatGPT use probabilities to predict the next best word in a sentence, they can sometimes produce results that, though sounding coherent or plausible, are factually incorrect. These inaccuracies are called hallucinations and, for some AI tools, they can appear in as much as 27% of responses.
Hallucinations can sound quite convincing, especially because tools like ChatGPT respond authoritatively and often don’t cite their sources. This isn’t because AI is intentionally trying to mislead you but because AI doesn’t possess human intelligence; it doesn’t “know” fact from fiction. It simply uses its complex pattern-spotting ability to generate responses that sound probable. The big takeaway here is that it’s always important to double-check the results of any AI system or tool, particularly if you’re using AI content in high-stakes situations.
Large Language Models, or LLMs for short, are AI models that generate human-like language. They include popular text-based AI tools like ChatGPT and Google Gemini. Because LLMs are one type of AI model, we can break them down the same way we can break down any model (see What Are AI Models?):
Artificial intelligence has been around in various forms for a long time. But when most people talk about AI these days, they are often referring to a specific type of AI known as generative AI. Generative AI models are tools like ChatGPT, Gemini, DeepSeek, Claude, or DALL-E that can generate original images, videos, or text based on instructions you give them. Generative AI differs from other AI models that are used to make predictions but that don’t necessarily generate new content. These non-generative AI models are often difficult to spot because of the way they are incorporated into the background of systems like traffic controls, job sites, credit score calculators, and search engines.
For children 13 years or older, generative AI technology can be a really powerful tool for fostering creativity or providing individualized learning. It also, however, poses challenging questions: How do these models encourage plagiarism? What do we do when they produce harmful biases? What impact might AI have on my child’s mental health? And how does generative AI threaten critical thinking skills?
Prompts are the instructions you give to a generative AI tool when you want it to do a task. When you type a question into ChatGPT, instruct Google Gemini to generate an image, or ask Alexa what the weather is, you are giving these AI tools a prompt. Prompts can be written (as they are with text-based AI tools like ChatGPT), spoken (as they are with voice assistants like Siri or Alexa), or uploaded as an image, sound, or text file.
Have you ever asked AI to do one thing, only to have it do something different? Or misunderstand what you’ve asked it to do? Sometimes it takes a little tinkering to develop just the right prompt to get AI to do what you want. Another term for this tweaking process is prompt engineering. Prompt engineering can get quite complicated, but for most everyday purposes, knowing a few basic prompting principles can be helpful (for more detail, see AI for Education’s Five “S” Model):
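As an illustration of that tweaking loop in code, here’s a minimal sketch that compares a vague prompt with a more specific one. It assumes the official openai Python package (version 1 or later) and an API key set in your environment, and the model name is just a placeholder – swap in whichever tool and model you actually use:

```python
# A minimal sketch of iterating on a prompt. Assumes the `openai` Python
# package (v1+) and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def ask(prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# A vague prompt often gets a vague answer.
print(ask("Tell me about volcanoes."))

# A more specific prompt states the audience, format, and length you want.
print(ask(
    "Explain how volcanoes erupt to a curious 10-year-old, "
    "in three short bullet points, using a simple everyday analogy."
))
```

The second prompt usually gets a more useful answer simply because it spells out who the answer is for, how it should be formatted, and how long it should be.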
AI literacy is an evolving term that describes a person’s ability to navigate AI. Just as traditional literacy encompasses both reading and writing, AI literacy describes both the ability to use AI to accomplish tasks and the ability to recognize and evaluate AI-created content. Being AI literate doesn’t require you to be an expert computer programmer or know the ins and outs of AI technology. It does, however, require that you know enough about how AI works in general to engage with this technology safely, responsibly, and ethically.
In this way, being AI literate is similar to learning how to drive: you don’t necessarily need to be a mechanic to drive a car, but you do need to know enough about how your car works – how to start the ignition, how to accelerate, how to brake – to get yourself where you want to go. Just as importantly, you also need to learn the rules of the road in order to keep yourself and others around you safe. Like driving, AI literacy requires practice.
In learning more about AI, you may come across concepts like machine learning (ML), deep learning (DL), or neural networks. You may also hear terms like natural language processing (NLP) and computer vision. For now, all you need to know about the first group of terms is that they describe the various complex processes through which AI models are developed. The second group of terms, on the other hand, describes different areas where AI models can be applied (e.g., image detection, voice recognition). To learn more about these terms, you can watch PBS’s Crash Course series on Artificial Intelligence.