AI glossary

The pace of AI development and adoption is creating a brand-new dictionary of buzzwords. To help you stay ahead, here are clear, simple breakdowns of AI keywords, technical terms and phrases.

AI Article 7 min Thu, Aug 14, 2025

Artificial intelligence (AI): A broad field of computer science that simulates human intelligence in machines, enabling the technology to perform problem-solving, learning and decision-making tasks.

AI algorithm: Instructions that inform how an AI tool analyzes data, learns patterns and produces outputs.

AI bias: An AI system producing unfair or harmful results that reinforce existing prejudices, based on discrimination embedded in the large language model (LLM).

AI governance: Policies, processes and guardrails put in place to help ensure AI tools are used safely, responsibly and ethically.

AI neural network: A machine learning model inspired by the structure and function of the human brain. It consists of layers of artificial neurons that process and learn from data to make predictions or decisions.

AI poisoning: A type of cyber attack where threat actors manipulate training data used by AI systems, with the goal of compromising their accuracy and effectiveness

AI washing: The practice of exaggerating the value and volume of AI technology in use (e.g. Amazon Fresh ‘just walk out’ technology) - a significant transaction liability risk, especially for businesses selling and stating their technology values.

Black box: A system where internal workings are not easily understood by humans. This term is commonly associated with deep learning and complex machine learning, where decision-making processes are difficult to trace.

Chatbot: A type of conversational AI used to simulate human conversation and streamline customer support.

Data labelling: The process of identifying raw data such as images, text files and videos, and adding meaningful tags so a machine learning model can learn from them.

Data mining: The process of sorting and analyzing large volumes of data to extract patterns and insights, without extracting the data itself.

Deep learning: A type of machine learning where neural networks with multiple layers analyze complex patterns from data.

Deepfakes: A fake image, video or audio created using AI to convincingly mimic real people, often used by cybercriminals for deceptive purposes and by media and entertainment companies to avoid hiring an actor.

False positives/negatives: An AI system can incorrectly identify a positive/negative, perhaps leading to critical incidents being missed or unnecessary disruptions to normal operations.

Few-shot prompting: The act of giving a large language model a prompt containing only a few examples of desired output, leveraging the model’s ability to learn from a limited amount of data.

Generative AI: A type of AI able to create new content such as text, images, music, and more, based on existing data it’s learned from.

Ground truth: Verified, true data used to train, validate and test AI models. This is the gold standard for data, and is vital to develop accurate AI applications.

Hallucinations: When a generative AI model presents false, misleading or nonsensical information as true. This can occur due to overgeneralization, applying patterns outside of the data range or inconsistent training data

Knowledge mining: A process using intelligent services to quickly learn from vast amounts of information and transform raw patterns into useful knowledge. It allows organizations to understand and explore information, uncover hidden insights and find relationships and patterns at scale.

Large language models (LLM): A specific type of generative AI built to understand and generate human language. LLMs can also generate images and videos based off a prompt provided by the user.

Machine learning (ML): A subset of AI that enables machines to learn from data and improve their performance without explicit programming.

Natural language processing (NLP): A method used by computer programs to understand, interpret and generate human language.

Open source: Making underlying source code, models and training data freely available for anyone to use, modify and distribute.

Predictive analytics: AI tools can analyze vast volumes of historical data to identify patterns and predict future outcomes.

Prompting: When a user provides specific instructions to an LLM, guiding it to generate a specific output.

Prompt injections: When an attacker provides malicious inputs to an LLM, manipulating the AI system into performing an unintended action.  

Training data: Structured information used to teach ML models to recognize patterns, make predictions and perform tasks based on specific examples.

Turing test: The test proposed by Alan Turing to determine whether a machine can demonstrate human intelligence.

Zero-shot prompting: A type of prompt that doesn’t include examples or demonstrations for that task, making the LLM rely upon pre-trained knowledge to generate a response.

AI terminology and risks: How to stay ahead

The AI space may be packed with technical jargon, but many of the ideas behind the innovation are surprisingly simple. By understanding key terms, you’ll be better equipped to lead informed conversations with businesses - helping them grasp how AI works, where the risks lie and why protection is so important.

Ready for the latest insights on how AI is transforming risk, from regulatory shifts to real-world business impact? Sign up now to receive exclusive content and early access to expert analysis.