Artificial Intelligence A to Z: Building Your Essential AI Vocabulary
- Allison Higgins

- Sep 22
- 7 min read
“AI will not replace humans, but those who use AI will replace those who don’t.” – Ginni Rometty, Former CEO of IBM

Why It Matters
Demonstrating the ability to apply artificial intelligence to problem solving is quickly becoming an interview "must" for fields in, and adjacent to, technology. Artificial intelligence has impacted every sector of the economy. Familiarizing yourself with these terms, regardless of your background, will add value to your skill set. Understanding and using them correctly in your everyday work can also open doors to higher-paying, essential jobs in the artificial intelligence economy.
Where To Use These Terms
Core Artificial Intelligence Roles
Machine Learning Engineer
Artificial Intelligence Engineers
Robotics Engineer
Data Science and Analytics
Artificial Intelligence Adjacent Roles
Technical Sales
Workflow Development Specialist
Compliance Specialists
Cybersecurity
Roles Most Likely to be Replaced by Artificial Intelligence
Manufacturing
Data Entry and Visualization
Retail/Commerce
Clerical - Accounting, Bookkeeping, and Tax preparation
Where to Learn Artificial Intelligence
Who to Know in AI
A
Agent - A system that uses an LLM to decide the control flow of an application.
Citation - https://blog.langchain.com/what-is-an-agent/
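A minimal sketch of this control-flow idea, assuming a hypothetical `call_llm` stub in place of a real model API and an invented `calculator` tool:

```python
# Minimal agent loop sketch: the "LLM" decides the next step at each turn.
# `call_llm` is a hypothetical stub standing in for a real model API, and
# the calculator "tool" is invented for illustration.

def call_llm(prompt: str) -> str:
    # Stub: a real agent would send the prompt (task + tool results) to an LLM.
    if "calculator ->" in prompt:
        return "FINAL " + prompt.rsplit("-> ", 1)[1]  # "read" the tool result
    return "CALL calculator: 41 + 1"

def calculator(expression: str) -> str:
    return str(eval(expression))  # toy tool; never eval untrusted input

def run_agent(task: str, max_steps: int = 5) -> str:
    history = task
    for _ in range(max_steps):
        decision = call_llm(history)
        if decision.startswith("FINAL"):
            return decision.removeprefix("FINAL").strip()
        if decision.startswith("CALL calculator:"):
            result = calculator(decision.split(":", 1)[1])
            history += f"\ncalculator -> {result}"
    return "gave up"

print(run_agent("What is 41 + 1?"))  # -> 42
```

The key point is that the loop's control flow (call a tool, or finish) is chosen by the model's output, not hard-coded in advance.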
Agentic Artificial Intelligence (Agentic AI) - Autonomous system that observes its environment and takes actions to achieve goals.
Agentic Workflow - Series of tasks executed by AI agents to achieve complex goals.
Algorithm - A set of rules or instructions for solving a problem or completing a task.
Artificial Intelligence - Technology enabling machines to simulate human intelligence and perform tasks that typically require human cognition.
Automation in Artificial Intelligence - Using AI to perform simple, repetitive, and time-consuming tasks without human input.
B
Bias - Unfair or inaccurate AI decisions due to flawed data.
Citation - https://www.ibm.com/think/topics/ai-bias
BERT (Bidirectional Encoder Representations from Transformers) - This model digs deep into sentences, picking up on context from both directions, left to right and right to left. BERT learns bi-directional representations of text to significantly improve contextual understanding of unlabeled text across many different tasks.
Citation - https://www.nvidia.com/en-us/glossary/bert/
C
Chain-of-Thought Prompting - Asking AI to explain its reasoning step-by-step. This is used to help the AI analyze its decision making and increase its accuracy.
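In practice, a chain-of-thought prompt can be as simple as appending a step-by-step instruction to the question; the wording below is illustrative, not a fixed API:

```python
# Sketch of a chain-of-thought prompt. The question and instruction wording
# are illustrative; any LLM client could consume the resulting `prompt`.

question = "A ticket costs $12 and I buy 3 tickets. What is the total?"

prompt = (
    f"Question: {question}\n"
    "Answer: Let's think step by step, showing each step of the reasoning "
    "before giving the final answer."
)

print(prompt)
```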
Chatbot - A chatbot is a software application that is designed to imitate human conversation through text or voice commands.
Citation - https://www.coursera.org/resources/ai-terms
Cognitive Computing - AI that mimics human thought processes.
Computer Vision - Computer vision is an interdisciplinary field of science and technology that focuses on how computers can gain understanding from images and videos. For AI engineers, computer vision allows them to automate activities that the human visual system typically performs.
Citation - https://www.coursera.org/resources/ai-terms
Context Engineering - The practice of designing systems that decide what information an AI model sees before it generates a response.
Context Window - The maximum number of tokens (words or parts of words) that an AI model can process and consider simultaneously when generating a response. It is essentially the “memory” capacity of the model during an interaction or task. Models with larger context windows can handle larger attachments/prompts/inputs and sustain “memory” of a conversation for longer (Fogarty, 2023).
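A rough sketch of how an application might keep a conversation inside a fixed context window by dropping the oldest messages first; the whitespace-based `count_tokens` here is a naive stand-in for a real tokenizer:

```python
# Sketch: fitting a conversation into a fixed context window by dropping the
# oldest messages first. Counting whitespace-separated words is a naive
# stand-in for real tokenization; actual token counts will differ.

def count_tokens(text: str) -> int:
    return len(text.split())

def fit_to_window(messages: list[str], max_tokens: int) -> list[str]:
    kept: list[str] = []
    total = 0
    for msg in reversed(messages):       # walk newest-first
        cost = count_tokens(msg)
        if total + cost > max_tokens:
            break                        # oldest messages fall out of "memory"
        kept.append(msg)
        total += cost
    return list(reversed(kept))          # restore chronological order

history = ["hello there", "how are you today", "tell me about tokens please"]
print(fit_to_window(history, max_tokens=9))
```

With a budget of 9 "tokens," the oldest message ("hello there") no longer fits and is dropped, which is why long conversations lose their early context.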
Copilot - An intelligent virtual assistant, powered by generative AI and large language models, that helps users by performing tasks, automating processes, and providing contextual information to boost productivity.
D
Data Science - An interdisciplinary field of technology that uses algorithms and processes to gather and analyze large amounts of data to uncover patterns and insights that inform business decisions.
Citation - https://www.coursera.org/resources/ai-terms
Deep Learning - A machine learning technique that layers algorithms and computing units—or neurons —into what is called an artificial neural network (ANN). Unlike machine learning, deep learning algorithms can improve incorrect outcomes through repetition without human intervention. These deep neural networks take inspiration from the structure of the human brain.
Citation - https://www.coursera.org/resources/ai-terms
Docker - An open platform for developing, shipping, and running applications.
E
Emergent Behavior (Emergence) - When an AI system shows unpredictable or unintended capabilities that only occur when individual parts interact as a wider whole.
Citation - https://www.coursera.org/resources/ai-terms
F
Few Shot Learning - A machine learning framework in which an AI model learns to make accurate predictions by training on a very small number of labeled examples. It’s typically used to train models for classification tasks when suitable training data is scarce.
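At the prompt level, this often looks like a handful of labeled examples placed before the new input; the reviews and labels below are invented for illustration:

```python
# Sketch of a few-shot prompt: a few labeled examples followed by the new
# input. The reviews and sentiment labels are invented for illustration.

examples = [
    ("The battery died after one day.", "negative"),
    ("Setup took thirty seconds and it just works.", "positive"),
]

def few_shot_prompt(examples, query):
    lines = ["Classify the sentiment of each review."]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # The final entry has no label; the model is expected to fill it in.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

print(few_shot_prompt(examples, "Great value for the price."))
```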
Fine Tuned Model - A machine learning technique that adapts a pre-trained model to perform better on your specific task. Instead of training a model from scratch, you start with a model that already understands general patterns and adjust it to work with your data.
G
Generative Artificial Intelligence (Generative AI/ Gen AI) - a type of technology that uses AI to create content, including text, video, code and images. A generative AI system is trained using large amounts of data, so that it can find patterns for generating new content.
Citation - https://www.coursera.org/resources/ai-terms
Generative Pre-Trained Transformer (GPT) - Advanced language models known for human-like text, sound and image generation.
Citation - https://www.ibm.com/think/topics/gpt
H
Hallucination - Hallucination refers to an incorrect response from an AI system, or false information in an output that is presented as factual information.
Hyperparameter - Configuration variables that data scientists set ahead of time to manage the training process of a machine learning model.
I
Inference - the ability of trained AI models to recognize patterns and draw conclusions from information that they haven’t seen before.
L
Large Language Models (LLM) - a category of deep learning models trained on immense amounts of data, making them capable of understanding and generating natural language and other types of content to perform a wide range of tasks. LLMs are built on a type of neural network architecture called a transformer which excels at handling sequences of words and capturing patterns in text.
M
Machine Learning - a subset of AI in which algorithms mimic human learning while processing data. This field focuses on developing algorithms and models that help machines learn from data and predict trends and behaviors, without human assistance.
Citation - https://www.coursera.org/resources/ai-terms
Mechanistic Interpretability - An emerging subfield of AI research that focuses on understanding how neural networks process data by empirically probing, defining and verifying the internal mechanisms that produce their output.
Model - a mathematical representation of a task, created by applying a machine learning algorithm to a training data set.
Model Context Protocol (MCP) - serves as a standardization layer for AI applications to communicate effectively with external services such as tools, databases and predefined templates.
Multi Agent - A system of multiple agents that communicate with each other, each with its own goals or perspective.
N
Natural Language Processing (NLP) - a type of AI that enables computers to understand spoken and written human language. NLP enables features like text and speech recognition on devices.
Citation - https://www.coursera.org/resources/ai-terms
Neural Network - A neural network is a deep learning technique designed to resemble the structure of the human brain. It requires large data sets to perform calculations and create outputs, which enables features like speech and vision recognition.
Citation - https://www.coursera.org/resources/ai-terms
O
Overfitting - creating a model that matches (memorizes) the training set so closely that the model fails to make correct predictions on new data.
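A toy illustration of the idea: a "model" that simply memorizes the training pairs scores perfectly on them but fails on any unseen input, while a model that learned the underlying rule generalizes. The data and rule here are invented for the example:

```python
# Toy illustration of overfitting: memorizing the training set gives perfect
# training accuracy but no ability to handle new inputs. Data is invented.

train = {1: 2, 2: 4, 3: 6}           # inputs -> labels (underlying rule: y = 2x)

def memorizer(x):
    return train.get(x)              # perfect on train, useless elsewhere

def general_model(x):
    return 2 * x                     # learned the underlying rule

train_acc = sum(memorizer(x) == y for x, y in train.items()) / len(train)
print(train_acc)                           # 1.0 on training data
print(memorizer(10), general_model(10))    # None vs 20 on an unseen input
```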
P
Pattern Recognition - Pattern recognition is the method of using computer algorithms to analyze, detect, and label regularities in data. This informs how the data gets classified into different categories.
Citation - https://www.coursera.org/resources/ai-terms
Predictive Analytics - Predictive analytics is a type of analytics that uses technology to predict what will happen in a specific time frame based on historical data and patterns.
Citation - https://www.coursera.org/resources/ai-terms
Prompt - a natural language instruction that tells a large language model (LLM) to perform a task.
Prompt Engineering - The process of writing, refining, and optimizing inputs to encourage generative AI systems to create specific, high-quality outputs.
Q
Quantum Computing - Computing built on quantum bits, or qubits, which can store both zeros and ones. Qubits can represent any combination of both zero and one simultaneously; this is called superposition, and it is a basic feature of any quantum state. When a qubit’s subatomic particles are in a superposition state, each subatomic particle can interact with and influence others, a phenomenon called quantum interference. Quantum chips make up the physical hardware that stores qubits, similar to microchips in classical computers.
R
Reinforcement Learning - a type of machine learning process that focuses on decision making by autonomous agents. An autonomous agent is any system that can make decisions and act in response to its environment independent of direct instruction by a human user.
Retrieval-Augmented Generation (RAG) - combines AI models with external databases, allowing models to retrieve factual information to improve responses.
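A minimal sketch of the retrieve-then-generate pattern; the documents are invented, and the word-overlap scoring stands in for the vector search a real RAG system would use:

```python
# Minimal RAG sketch: retrieve the most relevant document, then splice it
# into the prompt. Documents are invented; word-overlap scoring stands in
# for the embedding-based vector search real systems use.

documents = [
    "The Eiffel Tower is located in Paris, France.",
    "Photosynthesis converts sunlight into chemical energy.",
]

def retrieve(query: str, docs: list[str]) -> str:
    q = set(query.lower().split())
    # Score each document by how many query words it shares.
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query: str) -> str:
    context = retrieve(query, documents)
    return f"Context: {context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("Where is the Eiffel Tower?"))
```

Because the retrieved fact is injected into the prompt, the model can ground its answer in it instead of relying only on what it memorized during training.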
S
Sub Agent - A specialized agent that receives delegated work from a "main" agent. Instead of doing everything itself, the main agent acts as a tech lead and hands pieces of the work to these specialists.
Supervised Learning - a type of machine learning that learns from labeled historical input and output data. It’s “supervised” because you are feeding it labeled information.
Citation - https://www.coursera.org/resources/ai-terms
T
Token - A token is a basic unit of text that an LLM uses to understand and generate language. A token may be an entire word or parts of a word.
Citation - https://www.coursera.org/resources/ai-terms
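A naive sketch of how text might break into word and subword pieces; real tokenizers (for example, byte-pair encoding) learn their splits from data, so actual token boundaries will differ:

```python
# Naive tokenization sketch: long words split into 4-character subword
# pieces. Real tokenizers (e.g. BPE) learn splits from data, so actual
# token boundaries will differ.

def toy_tokenize(text: str) -> list[str]:
    tokens = []
    for word in text.split():
        while len(word) > 4:         # pretend long words become subwords
            tokens.append(word[:4])
            word = word[4:]
        tokens.append(word)
    return tokens

print(toy_tokenize("transformers generate text"))
# -> ['tran', 'sfor', 'mers', 'gene', 'rate', 'text']
```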
Turing Test - The Turing test was created by computer scientist Alan Turing to evaluate a machine’s ability to exhibit intelligence equal to humans, especially in language and behavior. When facilitating the test, a human evaluator judges conversations between a human and machine. If the evaluator cannot distinguish between responses, then the machine passes the Turing test.
Citation - https://www.coursera.org/resources/ai-terms
U
Unsupervised Learning - Unsupervised learning is a machine learning type that looks for data patterns. Unlike supervised learning, unsupervised learning doesn’t learn from labeled data. This type of machine learning is often used to develop predictive models and to create clusters.
Citation - https://www.coursera.org/resources/ai-terms
V
Vibe Coding - The use of natural language tools that dramatically lower the technical barrier to software development: you describe what you want in plain language, and the AI writes the code, building your app before your eyes.
Z
Zero Shot Prompting - An application of zero-shot learning, a machine learning pattern that asks models to make predictions with zero training data.
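In practice, a zero-shot prompt describes the task in plain language without any labeled examples, in contrast to few-shot prompting; the wording below is illustrative:

```python
# Zero-shot prompt sketch: the task is described directly, with no labeled
# examples included. Task and wording are invented for illustration.

def zero_shot_prompt(text: str) -> str:
    return (
        "Classify the sentiment of the following review as positive or "
        f"negative.\n\nReview: {text}\nSentiment:"
    )

print(zero_shot_prompt("The screen cracked on day two."))
```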


