Frequently Asked Questions about AI

Find answers to common questions about artificial intelligence (AI).


What is Artificial Intelligence (AI)?

If you’re confused about what the term artificial intelligence or AI means, you’re not alone. AI is used in various contexts to refer to a wide variety of technologies, systems, and disciplines. In most everyday contexts, AI refers to specific AI applications, like ChatGPT, self-driving cars, or social media algorithms. In computer science, the term AI can refer to the behind-the-scenes computer programming that powers these AI tools. In the realm of research, AI is often used to refer to the entire field dedicated to theorizing and developing artificial intelligence technologies.

With all these different definitions, it can feel difficult to grasp what AI is and what it does. A helpful starting definition for AI, then, is a computer system which performs tasks that usually require human intelligence. Let’s break down this definition into its three component parts:

  • A computer system: AI, at its core, is a computer-powered learning machine. It learns from millions of examples in order to perform tasks and make predictions.
  • Which performs tasks: AI can be used to perform countless tasks that previously would’ve required human skill, intelligence, and labor, like summarizing documents, sorting through job applications, identifying images, and driving cars.
  • That usually require human intelligence: While AI is programmed to replicate human thinking processes, it is not “intelligent” in the way humans are; it does not “think” or “understand.” Rather, AI learns rules and identifies patterns to perform specific tasks and make predictions based on the particular data it was trained on.

🠝 Back To Top


Why am I hearing so much about AI nowadays?

Despite the current buzz surrounding the technology, AI has actually been around for a long time. In fact, some of the theoretical underpinnings for AI date back to at least the 1600s. So why all the AI hype now? Two major changes laid the groundwork for our current AI revolution (see PBS’s Crash Course on Artificial Intelligence for more detail on this history):

  • Stronger Computing Power. In the 1960s, even the most advanced supercomputers could do only a fraction of what many of our modern smartphones can do in seconds – and that doesn’t even account for what many of today’s strongest computer systems are capable of. Greater computing power and storage capacity turned what had primarily been computational theories of AI into reality.
  • Data. Lots and Lots of Data. Nowadays, we do almost everything on the internet: we post on social media, search and apply for jobs, read articles, stream media, conduct financial transactions, access healthcare, and make purchases. All of this activity creates data. Such huge troves of readily available data matched with increasingly efficient computers paved the way for the powerful AI models we have today.

🠝 Back To Top


What are AI models?

To start, let’s first define what a model is. At their simplest, models are generalizations of the world around us.

For meteorologists, for example, weather models are fine-tuned generalizations about how we think the weather works, based on decades of observations of wind, clouds, rain, air pressure, and other atmospheric phenomena. While these models help us understand how the weather works in general, they are most useful for their ability to predict future weather behavior. Each time meteorologists’ forecasts prove inaccurate, they have to revise their models to account for the patterns they missed.

Weather models represent a few key features of models more generally:

  1. Models are generalizations based on lots of observations. The more examples a model has of a certain phenomenon – measurements of weather patterns in this case – the more detailed its generalizations about how that phenomenon works can become.
  2. Models are predictive. After learning how something works from a large number of examples (e.g., past hurricanes, wind patterns), models can then be used to make predictions about future cases or phenomena, like weather behavior.
  3. Models are constantly being fine-tuned. Models are continually being improved with new data or with changing methods for detecting patterns in the data. This revision allows for more accurate models with higher predictive power.
  4. Models are simplifications of reality. Even the most detailed models are simplifications, meaning that while models can be helpful, they are never 100% accurate. The real world is messy and doesn’t always follow clear patterns. It is also impossible to collect every possible observation of a given phenomenon, meaning models will always be somewhat shortsighted or biased.

AI models, like weather models, are also (1) generalizations based on large amounts of data that are (2) used to make predictions and (3) constantly being improved with new training data and new ways of detecting patterns in that data. And, though increasingly powerful and human-like, they are (4) vast simplifications of the human “intelligence” they claim to model.
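To make these four features concrete, here is a minimal sketch of the “generalize, then predict” idea in Python. The humidity and rainfall numbers are invented for illustration; real weather models learn from millions of observations and far more sophisticated math:

    # A tiny "model": fit a straight line to past observations (made-up data),
    # then use that generalization to predict a new, unseen case.
    humidity = [30, 50, 70, 90]        # past observations: % humidity
    rainfall = [0.0, 1.0, 3.0, 5.0]    # ...and the rain (in mm) that followed

    # The "generalization" step: a least-squares line through the data
    n = len(humidity)
    mean_x = sum(humidity) / n
    mean_y = sum(rainfall) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(humidity, rainfall))
    den = sum((x - mean_x) ** 2 for x in humidity)
    slope = num / den
    intercept = mean_y - slope * mean_x

    # The "prediction" step: apply the generalization to a new case
    print(f"Predicted rain at 80% humidity: {intercept + slope * 80:.1f} mm")

Refitting the line as new observations arrive is the fine-tuning described in point 3, and the fact that a straight line can never fully capture messy, real-world weather is the simplification described in point 4.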

🠝 Back To Top


What data is used to train AI models?

The observations, examples, or cases used to train AI models are collectively referred to as training data. Training data can be anything, depending on what you want an AI model to predict. Let’s say a car insurance company wanted to make an AI model that calculates premiums. Their training data might include things like past customers’ driving records, credit histories, ages, and mileages. Once it detects patterns between these various factors, the AI model can start predicting who is at higher risk of making insurance claims and, by extension, who needs to pay a higher premium.
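As an illustration only, here is a minimal sketch of what training such a model might look like in Python, assuming the scikit-learn library is installed. The customer records, features, and numbers are entirely invented; real premium models are far more complex:

    # A toy version of the insurance example: train a model on made-up
    # customer records, then estimate claim risk for a new applicant.
    from sklearn.linear_model import LogisticRegression

    # Training data (invented): [past accidents, age, annual mileage in 1000s]
    X = [
        [0, 45, 8],
        [2, 22, 20],
        [1, 30, 15],
        [0, 60, 5],
        [3, 25, 25],
        [0, 35, 10],
    ]
    # Labels (invented): 1 = filed a claim in the past, 0 = did not
    y = [0, 1, 1, 0, 1, 0]

    model = LogisticRegression().fit(X, y)   # the pattern-detecting step

    # Predict the claim risk for a new customer the model has never seen
    new_customer = [[1, 28, 18]]
    risk = model.predict_proba(new_customer)[0][1]
    print(f"Estimated claim risk: {risk:.0%}")

Once fitted, the same model can score any number of new applicants; that scoring is the “prediction” half of every AI model described above.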

🠝 Back To Top


What is an algorithm?

At their most basic, algorithms are sets of step-by-step rules or instructions for accomplishing a goal. Recipes, for example, are technically a kind of algorithm, just not a computer-based one.

Algorithms are the mechanics that make AI models work. They are the tools that help models analyze data and identify complex patterns, which they then use to perform tasks. The algorithms that power AI models are extremely complex but, at their core, they are simply instructions that help computers spot patterns and accomplish tasks.
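To see how literal “step-by-step instructions” can be, here is a classic beginner’s algorithm written out in Python: finding the largest number in a list. Each comment marks one step the computer follows in order:

    def largest(numbers):
        # Step 1: start by assuming the first number is the biggest
        biggest = numbers[0]
        # Step 2: walk through the rest of the list, one number at a time
        for n in numbers[1:]:
            # Step 3: whenever we see a bigger number, remember it instead
            if n > biggest:
                biggest = n
        # Step 4: report the answer
        return biggest

    print(largest([3, 41, 12, 9, 74, 15]))  # prints 74

The algorithms behind AI models follow the same basic idea, just with billions of steps instead of four.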

🠝 Back To Top


What do algorithms do?

Algorithms are all around us, and they orchestrate much of our everyday lives without our even knowing. They shape how we experience the internet by ranking, sorting, searching, and filtering the information we encounter online and the results we get on Google. They shape what we see on our social media feeds and customize the types of ads we see on the internet. They suggest the next best words or phrases when we’re typing in a document with tools like autocomplete and identify grammatical or spelling errors we’ve made with tools like grammar checkers.

But algorithms go well beyond our experience on the internet. In many cities, for example, algorithms help control traffic lights and respond to real-time traffic patterns. They power self-driving cars and help screen for concerning health conditions. More recently, organizations have begun using AI to make important decisions related to things like hiring in a process known as automated decision-making. Using computer algorithms to make decisions may seem, at first, to solve many of the biases that arise when humans make decisions. In reality, however, many of these biases may be unintentionally amplified (see Can algorithms be harmful?).

What do algorithms look like?

Let’s take search engines as an example. Google uses a complex search algorithm that finds, sorts, and ranks relevant results based on what you type into the search bar. Here’s a simplified, behind-the-scenes look at the step-by-step directions orchestrating Google’s search function when you type in “cute cats”:

  1. Break the user’s search terms into smaller parts: cute + cats
  2. Search for pages on the internet that are categorized with the terms “cute” and “cats”
  3. Rank the search results in the order that is most relevant based on page content, past user searches, sponsored ads, etc.
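Sketched as a toy Python program, those three steps might look something like the following. The pages, tags, and popularity scores are invented for illustration; a real search engine works at a vastly larger scale:

    # A toy search engine following the three steps above
    PAGES = {
        "kitten-photo-gallery": {"tags": {"cute", "cats", "kittens"}, "popularity": 95},
        "cat-care-basics":      {"tags": {"cats", "health"},          "popularity": 80},
        "cute-dog-videos":      {"tags": {"cute", "dogs"},            "popularity": 90},
    }

    def search(query):
        terms = set(query.lower().split())       # Step 1: break the query into parts
        results = [
            (page, info) for page, info in PAGES.items()
            if terms & info["tags"]              # Step 2: find pages tagged with a term
        ]
        # Step 3: rank by how many terms match, then by popularity
        results.sort(key=lambda r: (len(terms & r[1]["tags"]), r[1]["popularity"]),
                     reverse=True)
        return [page for page, _ in results]

    print(search("cute cats"))
    # ['kitten-photo-gallery', 'cute-dog-videos', 'cat-care-basics']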

🠝 Back To Top


Can algorithms be harmful?

Algorithms are powerful tools that help run many of the systems that make our lives easier. However, because they help computers spot subtle patterns in data, they are also extremely effective at detecting and amplifying patterns of human bias. These biases can have harmful consequences in the realms of automated decision-making, social media, and search engines:

Automated decision-making

Many people believe that computer algorithms are more objective and fair than humans and so should be used to reduce human bias in decision making processes. Algorithms, however, are designed by humans and are trained on human data, meaning that human biases seep into the algorithms powering AI models. While some of the biases present in algorithms are harmless, others can have life-altering consequences, especially when these algorithms are used to make choices about who gets access to housing, jobs, and healthcare.

For example, imagine a company wanted to design an algorithm that helps rank job applicants for a tech position. They train the algorithm on hundreds of successful and unsuccessful job applications from the past two decades. As it combs through these applications, the algorithm might start to identify patterns about what makes a successful applicant – things like education, skills, and past work experience. It might also pick up on other, unintended traits present in the application data, like an applicant’s gender, particularly in a male-dominated industry like tech. The algorithm might then reproduce these past hiring biases, recommending only male job candidates. This is an example of algorithmic bias – or instances when algorithms produce discriminatory outcomes – and it is precisely what happened when Amazon attempted to design a similar hiring algorithm for the company in 2015.
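To see this mechanism in miniature, here is a deliberately simplified Python sketch, assuming the scikit-learn library and entirely invented application records. This is not Amazon’s system, just an illustration of how a proxy trait can absorb past bias:

    # Toy demonstration of algorithmic bias: the historical decisions below
    # (invented) favored applicants without a gender-linked marker.
    from sklearn.linear_model import LogisticRegression

    # Features: [years of experience, 1 if resume mentions a women's organization]
    X = [
        [5, 0], [3, 0], [8, 0], [2, 0],   # past applicants without the marker...
        [5, 1], [7, 1], [8, 1], [4, 1],   # ...and comparably qualified ones with it
    ]
    # Past (biased) hiring decisions: 1 = hired, 0 = rejected
    y = [1, 1, 1, 0,
         0, 0, 1, 0]

    model = LogisticRegression().fit(X, y)

    # Two new applicants, identical experience, differing only in the marker
    print(model.predict_proba([[6, 0]])[0][1])  # noticeably higher "hire" score
    print(model.predict_proba([[6, 1]])[0][1])  # lower, through no fault of the applicant

No one told this model to discriminate; it simply learned that the marker predicted rejection in the past and carried that pattern forward.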

The more organizations, like employers and schools, begin to rely on AI-powered software to identify qualified applicants, the higher the stakes of algorithmic bias become. And since many companies don’t disclose how their algorithms make decisions or even whether they use algorithms in the first place, there is often little recourse for those who feel they may be the victim of this type of bias.

Social media algorithms

Not only can algorithms amplify bias in automated decision-making, they can also amplify extreme messages on your social media feeds. Because social media algorithms are designed to promote posts based on how much engagement they receive, posts with more outrageous headlines, over-the-top language, and outlandish – and oftentimes prejudiced – claims are often the ones that go viral. This kind of sensationalized content is called rage bait, and it is intentionally meant to be provocative in order to keep you engaged. This type of media profits off your outrage and can leave you feeling lonely, isolated, and emotionally exhausted.

Social media algorithms also provide hyper-individualized content for you based on which posts you like, share, comment on, and interact with. This personalization can connect us with communities based on our interests or recommend new content that we might like. But it can also lead to what are called filter bubbles, or personalized content on the internet that normalizes and reinforces a person’s existing beliefs and preferences. Filter bubbles drown out certain views and perspectives you see online and can keep you hooked on social media sites for long periods of time. In extreme cases, they can even push extreme views to try to keep you engaged. Like rage bait, filter bubbles can leave you feeling isolated and can make you less open to alternative perspectives.
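Here is a minimal sketch, in Python, of the engagement-first logic described above; the posts and weights are invented for illustration:

    # A toy feed-ranking algorithm: promote whatever gets the most engagement
    posts = [
        {"title": "Local library opens new wing",      "likes": 40,  "comments": 5,   "shares": 2},
        {"title": "You won't BELIEVE what they did!!!", "likes": 300, "comments": 250, "shares": 400},
        {"title": "How to start a vegetable garden",    "likes": 90,  "comments": 30,  "shares": 15},
    ]

    def engagement(post):
        # Comments and shares are weighted more heavily than likes,
        # because they signal stronger reactions
        return post["likes"] + 3 * post["comments"] + 5 * post["shares"]

    feed = sorted(posts, key=engagement, reverse=True)
    for post in feed:
        print(engagement(post), post["title"])

Notice that nothing in this ranking checks whether a post is accurate or good for you; it only measures reaction, which is why rage bait so often floats to the top.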

Search engine algorithms

If you and a friend googled “cute cats,” chances are you would each receive different results despite having searched for the same thing. This is because search engines like Google use algorithms that curate your search results based on your past search history, location, preferences, and online habits. Whereas most of us are familiar with how our behavior on social media affects what we see on those platforms, the personalized nature of search engine algorithms – an important tool for accessing and verifying information – is far less commonly known.

As with social media, these search engine algorithms can shape the types of information we get online and can lead to filter bubbles. In most cases, this type of search engine personalization can be helpful. In other cases, this hyper-individualized design can expose us to dangerous misinformation and disinformation and reinforce harmful beliefs.

🠝 Back To Top


What can we do about algorithmic bias?

Algorithms themselves aren’t inherently “good” or “bad,” but they can detect and reproduce harmful biases in the realms of automated decision-making, social media, and search engines:

  • When left unchecked in automated decision-making, algorithms can amplify sexism, racism, and many other forms of algorithmic bias.
  • Algorithms can also create echo chambers on social media and promote rage bait that keeps you engaged on platforms in ways that may negatively impact your mental health.
  • Algorithms actively shape the results you see on search engines and lead to filter bubbles that can amplify dangerous misinformation.

Because of this, it is essential to always approach the outputs of AI tools and other algorithmically-powered systems with a critical eye. This can seem overwhelming at first, but it can start by simply recognizing where and how algorithms shape your online experience:

  • Where and how, for example, might algorithms be used in a decision-making process?
  • How might your online behavior shape what content you see on social media or on search engines?
  • What viewpoints might be filtered out because of your social media algorithm?

Developing this awareness can give you enough time to pause, process information, decide on a course of action, and seek out any additional information and resources.

🠝 Back To Top


What do AI models predict?

As mentioned previously in our discussion of AI models, AI is an extremely effective pattern-spotting machine. Once it detects a pattern from a large collection of examples, it uses this pattern to accomplish tasks (e.g., sorting and ranking search results) or predict likely outcomes (e.g., forecasting weather) when it encounters a new example. Simpler AI models, like autocomplete, use algorithms to predict the next word or phrase in sentences and suggest them to users as they’re typing. Models like ChatGPT, known as Large Language Models, do a much more complicated version of this word prediction on a much larger scale to generate the next most likely words and phrases in response to your input. AI image generators, like DALL-E, predict the next most likely pixels to create intricate graphics. Other AI models predict traffic patterns, spending habits, and even health outcomes.
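For a feel of how word prediction works, here is a toy “autocomplete” in Python that simply counts which word followed which in a scrap of training text. Real language models are vastly more sophisticated, but the predict-the-next-word core is the same in spirit:

    # A toy autocomplete: count word pairs in training text,
    # then predict the most likely next word.
    from collections import Counter, defaultdict

    training_text = (
        "the cat sat on the mat "
        "the cat chased the dog "
        "the dog sat on the rug"
    )

    # "Training": count which word follows which (bigrams)
    counts = defaultdict(Counter)
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1

    def predict_next(word):
        # "Prediction": return the word that most often followed this one
        return counts[word].most_common(1)[0][0]

    print(predict_next("the"))  # 'cat'
    print(predict_next("sat"))  # 'on'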

🠝 Back To Top


How accurate are AI models?

At their core, AI models are pattern-spotting and probability machines. They identify patterns in vast amounts of data and then make predictions about aspects of the world based on those patterns. However, as we’re likely all too aware from rain forecasts or betting odds, probabilities aren’t always accurate. In fact, because AI models like ChatGPT use probabilities to predict the next best word in a sentence, they can sometimes produce results that, though sounding coherent or plausible, are factually incorrect. These inaccuracies are called hallucinations and, for some AI tools, they can appear in as much as 27% of responses.

Hallucinations can sound quite convincing, especially because tools like ChatGPT respond authoritatively and often don’t cite their sources. This isn’t because AI is intentionally trying to mislead you but because AI doesn’t possess human intelligence; it doesn’t “know” fact from fiction. It simply uses its complex pattern-spotting ability to generate responses that sound probable. The big takeaway here is that it’s always important to double-check the results of any AI system or tool, particularly if you’re using AI content in high-stakes situations.


🠝 Back To Top


What are large language models (LLMs)?

Large Language Models, or LLMs for short, are AI models that generate human-like language. They include popular text-based AI tools like ChatGPT and Google Gemini. Because LLMs are one type of AI model, we can break them down the same way we can break down any model (see What Are AI Models?):

  1. LLMs are generalizations of human language based on massive amounts of language data. With the rise of the internet, we now have more access to written texts – like websites, books, forums, and articles – than at any point in human history. LLMs use this vast storehouse of texts to make detailed models of how language works.
  2. LLMs are predictive. After identifying the rules and patterns of language from this massive collection of examples, LLMs can then generate human-like text. LLMs do this by breaking down a user’s input into smaller pieces and then predicting the next most likely words and phrases to follow (see the sketch after this list).
  3. LLMs are continually being fine-tuned. Large Language Models and their training data are regularly updated to make LLMs more efficient and accurate, reduce potential biases, and to work out bugs in the system.
  4. LLMs are simplifications of human language and human intelligence. It is important to acknowledge that while LLMs like ChatGPT may seem to possess human intelligence, they are really just generalized models of human language based on probability. In other words, LLMs are limited. There will always be limitations in the data used to train these models, and the predictions these models make will not always be accurate.
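As promised in point 2, here is a toy Python sketch of that “breaking down” step, often called tokenization. Real LLMs use learned sub-word tokens; splitting on words and punctuation is just a stand-in for illustration:

    # A toy look at how a user's input gets broken into smaller pieces
    # ("tokens"), the units an LLM actually predicts
    import re

    def toy_tokenize(text):
        # Split into words and individual punctuation marks
        return re.findall(r"\w+|[^\w\s]", text)

    prompt = "What's the weather like today?"
    print(toy_tokenize(prompt))
    # ['What', "'", 's', 'the', 'weather', 'like', 'today', '?']

From there, the model repeatedly predicts the most likely next token, appends it to the text, and predicts again, generating a response one small piece at a time.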

🠝 Back To Top


What is generative AI?

Artificial intelligence has been around in various forms for a long time. But when most people talk about AI these days, they are often referring to a specific type of AI known as generative AI. Generative AI models are tools like ChatGPT, Gemini, DeepSeek, Claude, or DALL-E that can generate original images, videos, or text based on instructions you give them. Generative AI differs from other AI models that are used to make predictions but that don’t necessarily generate new content. Non-generative AI models are often difficult to spot because of the way they are incorporated into the background of systems like traffic controls, job sites, credit score calculators, and search engines.

For children 13 years or older, generative AI technology can be a really powerful tool for fostering creativity or providing individualized learning. It also, however, poses challenging questions: How might these models encourage plagiarism? What do we do when these models produce harmful biases? What impact might AI have on my child’s mental health? And how might generative AI threaten critical thinking skills?

🠝 Back To Top


What is a prompt?

Prompts are the instructions you give to a generative AI tool when you want it to do a task. When you type a question into ChatGPT, instruct Google Gemini to generate an image, or ask Alexa what the weather is, you are giving these AI tools a prompt. Prompts can be written (as they are with text-based AI tools like ChatGPT), spoken (as they are with voice assistants like Siri or Alexa), or uploaded as an image, sound, or text file.

🠝 Back To Top


What is prompt engineering?

Have you ever asked AI to do one thing, only for it to do something different? Or to misunderstand what you’ve asked it to do? Sometimes, it takes a little tinkering to develop just the right prompt to get AI to do what you want it to do. Another word for this prompt-tweaking process is prompt engineering. Prompt engineering can get quite complicated, but for most everyday purposes, knowing a few basic prompting principles can be helpful (for more detail, see AI for Education’s Five “S” Model):

  • Be specific. Generative AI tools, like ChatGPT, are often more effective when you specify how you want them to respond in terms of style, format, and level of detail. Do you want a short bullet-point summary with minor detail or a long, thorough 5-paragraph response? Specify that in your prompt.
  • Try assigning roles. It can often be useful to assign AI a role – such as a teacher or critic – in a prompt. An example could be: “Pretend to be a tutor, explain this concept to me like I am a high schooler.”
  • Revise your approach. If you don’t get your desired outcome on your first shot with AI, revise your prompt and try again. The more you do this, the more you’ll start to notice what works for you.
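As a quick illustration, compare a vague prompt with one that applies these principles (both prompts are invented for this example):

  • Vague: “Tell me about the French Revolution.”
  • Specific: “Pretend to be a high school history tutor. In five short bullet points, summarize the main causes of the French Revolution in plain language.”

The second prompt assigns a role, specifies a format and level of detail, and leaves far less for the AI to guess at.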

🠝 Back To Top


What is AI Literacy?

AI literacy is an evolving term that describes a person’s ability to navigate AI. Just as traditional literacy encompasses both the skills of writing and reading, AI literacy describes both the ability to use AI to accomplish tasks and the capability to recognize and evaluate AI-created content. Being AI literate doesn’t require you to be an expert computer programmer or know the ins and outs of AI technology. It does, however, require that you know enough about how AI generally works to engage with this technology safely, responsibly, and ethically.

In this way, being AI literate is similar to learning how to drive: you don’t necessarily need to be a mechanic to drive a car, but you do need to know enough about how your car works – how to start the ignition, how to accelerate, how to brake – to get yourself where you want to go. Just as importantly, you also need to learn the rules of the road in order to keep yourself and the others around you safe. Like driving, AI literacy requires practice.

🠝 Back To Top


Some other AI terms

In learning more about AI, you may come across concepts like machine learning (ML), deep learning (DL), or neural networks. You may also hear terms like natural language processing (NLP) and computer vision. For now, all you need to know about the first group of terms is that they describe the various complex processes through which AI models are developed. The second group of terms, on the other hand, describe different areas where AI models can be applied (e.g., image detection, voice recognition). To learn more about these terms, you can watch PBS’s Crash Course series on Artificial Intelligence.

🠝 Back To Top