In our ‘New Beings’ series, we explore the vast and complex area of AI as it relates to software development. There are so many new words and terms used in relation to AI, so we’ve put together a handy glossary of the most commonly used ones.

I hope it’s helpful.

Generative AI: A subfield of artificial intelligence focused on creating new content or predictions. It uses algorithms and models, often based on machine learning techniques, to generate outputs such as text, images, music, and more.
Large Language Models (LLMs): AI models trained on vast amounts of text data. They predict or generate new text from an input by learning patterns in the data they’ve been trained on. Examples include GPT-3, GPT-4, and BERT.
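To make that concrete, here’s a minimal sketch of prompting an LLM from Python. It assumes the Hugging Face transformers library and the freely available GPT-2 checkpoint; both are illustrative choices, not something this glossary prescribes.

```python
# Minimal LLM prompting sketch (assumes: pip install transformers torch).
from transformers import pipeline

# GPT-2 is a small, freely available LLM; any text-generation model works here.
generator = pipeline("text-generation", model="gpt2")

# The model extends the prompt by repeatedly predicting the next token.
result = generator("Software development is", max_new_tokens=20)
print(result[0]["generated_text"])
```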
GPT (Generative Pre-trained Transformer): A type of LLM developed by OpenAI. It’s trained to predict the next word in a sentence and can generate human-like text from a given prompt.
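You can see the “predict the next word” idea directly by inspecting the model’s raw scores. This sketch again assumes the Hugging Face transformers library and the GPT-2 checkpoint.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("The cat sat on the", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # a score per vocabulary token, per position

# The scores at the last position rank every candidate next token.
next_token_id = logits[0, -1].argmax().item()
print(tokenizer.decode(next_token_id))
```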
BERT (Bidirectional Encoder Representations from Transformers): A model developed by Google that’s designed to understand the context of a word in a sentence by looking at the words before and after it.
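A quick way to see that bidirectional context in action is BERT’s fill-in-the-blank task. This sketch assumes the Hugging Face transformers library and the bert-base-uncased checkpoint.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT uses the words on BOTH sides of [MASK] to rank candidate fills.
for prediction in fill_mask("The developer fixed the [MASK] in the code."):
    print(prediction["token_str"], round(prediction["score"], 3))
```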
Transfer Learning: A machine learning method where a pre-trained model is adapted for a second, related task. This approach saves resources as it requires less data and computational time.
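In practice, transfer learning often means freezing a pre-trained network and bolting on a new output layer. Here’s a sketch assuming PyTorch and torchvision (neither is named in this glossary), reusing an ImageNet-trained ResNet-18 for a hypothetical 10-class task.

```python
import torch.nn as nn
from torchvision import models

# Load a network whose weights were already learned on ImageNet.
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the pre-trained layers so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer for the new, related task (10 classes here).
model.fc = nn.Linear(model.fc.in_features, 10)
```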
Fine-tuning: The process of tweaking a pre-trained model for a specific task. It involves adjusting the model’s parameters to optimise its performance on the new task.
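Fine-tuning differs from the frozen-backbone approach above in that all the weights stay trainable; a deliberately small learning rate keeps the pre-trained parameters from being overwritten. Again a PyTorch/torchvision sketch with purely illustrative numbers.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 10)  # new head for the new task

# Every parameter stays trainable; the small learning rate nudges the
# pre-trained weights towards the new task without erasing them.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
```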
Transformer Models: A type of deep learning model introduced in the paper “Attention is All You Need”. It utilises self-attention mechanisms and has been highly influential in NLP tasks.
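The self-attention mechanism from that paper fits in a few lines. This is a bare-bones, single-head sketch assuming PyTorch, with made-up dimensions.

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_k) projections."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / (k.shape[-1] ** 0.5)  # how strongly each token attends to every other
    weights = F.softmax(scores, dim=-1)
    return weights @ v                       # a context-aware representation of each token

d_model, d_k = 16, 8
x = torch.randn(5, d_model)  # a 5-token sequence
out = self_attention(x, *(torch.randn(d_model, d_k) for _ in range(3)))
print(out.shape)  # torch.Size([5, 8])
```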
Generative Adversarial Networks (GANs): A type of generative model consisting of two neural networks – a generator and a discriminator. The generator creates new data samples, such as images, by learning to mimic the distribution of the training data, while the discriminator learns to differentiate between real and fake samples. The two networks are trained together in a competitive process until the generator can create realistic samples that fool the discriminator.
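Here’s what one round of that competitive training looks like. This is a deliberately tiny sketch assuming PyTorch; the one-dimensional “data” and toy networks are placeholders so the adversarial loop stays readable.

```python
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real = torch.randn(32, 1) * 0.5 + 2.0  # stand-in for real training data
noise = torch.randn(32, 8)

# Discriminator step: label real samples 1 and fakes 0.
fake = generator(noise).detach()
d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
          + loss_fn(discriminator(fake), torch.zeros(32, 1)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: produce fakes the discriminator scores as real.
g_loss = loss_fn(discriminator(generator(noise)), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```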
Autoencoders: A type of neural network used for unsupervised learning. They are designed to encode input data into a lower-dimensional representation and then decode it back to its original form. Autoencoders can be used for tasks such as image compression, denoising, and image generation.
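The encode-then-decode shape is easiest to see in code. A minimal sketch assuming PyTorch; the 784 -> 32 -> 784 sizes echo the classic flattened-MNIST setup but are otherwise arbitrary.

```python
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(784, 32), nn.ReLU())     # compress to 32 dims
        self.decoder = nn.Sequential(nn.Linear(32, 784), nn.Sigmoid())  # reconstruct the input

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training simply minimises reconstruction error, e.g. nn.MSELoss()(model(x), x).
```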
Variational Autoencoders (VAEs): A type of autoencoder that takes a probabilistic approach to generating new data samples. They learn a probability distribution over the latent space and use it to generate new samples that are similar to the training data.
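The probabilistic part lives in the encoder: instead of a single latent vector, it outputs a mean and a (log-)variance, and sampling uses the reparameterisation trick so gradients can still flow. A PyTorch sketch with illustrative sizes:

```python
import torch
import torch.nn as nn

class VAEEncoder(nn.Module):
    def __init__(self, in_dim=784, latent_dim=16):
        super().__init__()
        self.hidden = nn.Linear(in_dim, 64)
        self.mu = nn.Linear(64, latent_dim)       # mean of the latent distribution
        self.log_var = nn.Linear(64, latent_dim)  # log-variance of the latent distribution

    def forward(self, x):
        h = torch.relu(self.hidden(x))
        mu, log_var = self.mu(h), self.log_var(h)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # sample a latent point
        return z, mu, log_var  # mu and log_var also feed the KL term of the loss
```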
Recurrent Neural Networks (RNNs): A type of neural network designed to process sequential data, such as text or speech. They use feedback loops to process the input one step at a time while maintaining a memory of the previous steps. RNNs can be used for tasks such as language translation, text generation, and speech recognition.
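Here’s the one-step-at-a-time idea in code, assuming PyTorch; the LSTM variant is used because it’s the common practical choice, and the sizes are made up.

```python
import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=10, hidden_size=20, batch_first=True)

sequence = torch.randn(1, 5, 10)        # batch of 1, 5 time steps, 10 features each
output, (hidden, cell) = rnn(sequence)  # the hidden state carries memory between steps
print(output.shape)  # torch.Size([1, 5, 20]): one output per time step
```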
Transformers: A type of neural network architecture designed to process sequences of data, such as text or speech. They use attention mechanisms to focus on the relevant parts of the input sequence and process them in parallel, making them more efficient than RNNs for long sequences.
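For contrast with the RNN sketch above, a transformer encoder takes in the whole sequence at once. A sketch using PyTorch’s built-in encoder, again with illustrative sizes.

```python
import torch
import torch.nn as nn

encoder_layer = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

tokens = torch.randn(1, 12, 32)  # all 12 token embeddings are processed in parallel,
output = encoder(tokens)         # rather than in an RNN-style step-by-step loop
print(output.shape)              # torch.Size([1, 12, 32])
```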