Code examples

Our code examples are short (fewer than 300 lines of code), focused demonstrations of vertical deep learning workflows.

All of our examples are written as Jupyter notebooks and can be run in one click in Google Colab, a hosted notebook environment that requires no setup and runs in the cloud. Google Colab includes GPU and TPU runtimes.


Computer Vision

Image classification

- Image classification from scratch
- Simple MNIST convnet
- Image classification via fine-tuning with EfficientNet
- Image classification with Vision Transformer
- Classification using Attention-based Deep Multiple Instance Learning
- Image classification with modern MLP models
- A mobile-friendly Transformer-based model for image classification
- Pneumonia Classification on TPU
- Compact Convolutional Transformers
- Image classification with ConvMixer
- Image classification with EANet (External Attention Transformer)
- Involutional neural networks
- Image classification with Perceiver
- Few-Shot learning with Reptile
- Semi-supervised image classification using contrastive pretraining with SimCLR
- Image classification with Swin Transformers
- Train a Vision Transformer on small datasets
- A Vision Transformer without Attention
- Image Classification using Global Context Vision Transformer
- Image Classification using BigTransfer (BiT)

Image segmentation

- Image segmentation with a U-Net-like architecture
- Multiclass semantic segmentation using DeepLabV3+
- Highly accurate boundaries segmentation using BASNet
- Image Segmentation using Composable Fully-Convolutional Networks

Object detection

- Object Detection with RetinaNet
- Keypoint Detection with Transfer Learning
- Object detection with Vision Transformers

3D

- 3D image classification from CT scans
- Monocular depth estimation
- 3D volumetric rendering with NeRF
- Point cloud segmentation with PointNet
- Point cloud classification

OCR

- OCR model for reading Captchas
- Handwriting recognition

Image enhancement

- Convolutional autoencoder for image denoising
- Low-light image enhancement using MIRNet
- Image Super-Resolution using an Efficient Sub-Pixel CNN
- Enhanced Deep Residual Networks for single-image super-resolution
- Zero-DCE for low-light image enhancement

Data augmentation

- CutMix data augmentation for image classification
- MixUp augmentation for image classification
- RandAugment for Image Classification for Improved Robustness

Image & Text

- Image captioning
- Natural language image search with a Dual Encoder

Vision models interpretability

- Visualizing what convnets learn
- Model interpretability with Integrated Gradients
- Investigating Vision Transformer representations
- Grad-CAM class activation visualization

Image similarity search

- Near-duplicate image search
- Semantic Image Clustering
- Image similarity estimation using a Siamese Network with a contrastive loss
- Image similarity estimation using a Siamese Network with a triplet loss
- Metric learning for image similarity search
- Metric learning for image similarity search using TensorFlow Similarity
- Self-supervised contrastive learning with NNCLR

Video

- Video Classification with a CNN-RNN Architecture
- Next-Frame Video Prediction with Convolutional LSTMs
- Video Classification with Transformers
- Video Vision Transformer

Performance recipes

- Gradient Centralization for Better Training Performance
- Learning to tokenize in Vision Transformers
- Knowledge Distillation
- FixRes: Fixing train-test resolution discrepancy
- Class Attention Image Transformers with LayerScale
- Augmenting convnets with aggregated attention
- Learning to Resize

Other

- Semi-supervision and domain adaptation with AdaMatch
- Barlow Twins for Contrastive SSL
- Consistency training with supervision
- Distilling Vision Transformers
- Focal Modulation: A replacement for Self-Attention
- Using the Forward-Forward Algorithm for Image Classification
- Masked image modeling with Autoencoders
- Segment Anything Model with 🤗Transformers
- Semantic segmentation with SegFormer and Hugging Face Transformers
- Self-supervised contrastive learning with SimSiam
- Supervised Contrastive Learning
- When Recurrence meets Transformers
- Efficient Object Detection with YOLOV8 and KerasCV

Natural Language Processing

Text classification

- Text classification from scratch
- Review Classification using Active Learning
- Text Classification using FNet
- Large-scale multi-label text classification
- Text classification with Transformer
- Text classification with Switch Transformer
- Text classification using Decision Forests and pretrained embeddings
- Using pre-trained word embeddings
- Bidirectional LSTM on IMDB
- Data Parallel Training with KerasNLP and tf.distribute

Machine translation

- English-to-Spanish translation with KerasNLP
- English-to-Spanish translation with a sequence-to-sequence Transformer
- Character-level recurrent sequence-to-sequence model

Entailment prediction

- Multimodal entailment

Named entity recognition

- Named Entity Recognition using Transformers

Sequence-to-sequence

- Text Extraction with BERT
- Sequence to sequence learning for performing number addition

Text similarity search

- Semantic Similarity with KerasNLP
- Semantic Similarity with BERT
- Sentence embeddings using Siamese RoBERTa-networks

Language modeling

- End-to-end Masked Language Modeling with BERT
- Pretraining BERT with Hugging Face Transformers

Parameter efficient fine-tuning

- Parameter-efficient fine-tuning of GPT-2 with LoRA

Other

- Abstractive Text Summarization with BART
- Training a language model from scratch with 🤗 Transformers and TPUs
- MultipleChoice Task with Transfer Learning
- Question Answering with Hugging Face Transformers
- Abstractive Summarization with Hugging Face Transformers

Structured Data

Structured data classification

- Structured data classification with FeatureSpace
- FeatureSpace advanced use cases
- Imbalanced classification: credit card fraud detection
- Structured data classification from scratch
- Structured data learning with Wide, Deep, and Cross networks
- Classification with Gated Residual and Variable Selection Networks
- Classification with TensorFlow Decision Forests
- Classification with Neural Decision Forests
- Structured data learning with TabTransformer

Recommendation

- Collaborative Filtering for Movie Recommendations
- A Transformer-based recommendation system

Timeseries

Timeseries classification

- Timeseries classification from scratch
- Timeseries classification with a Transformer model
- Electroencephalogram Signal Classification for action identification
- Event classification for payment card fraud detection

Anomaly detection

- Timeseries anomaly detection using an Autoencoder

Timeseries forecasting

- Traffic forecasting using graph neural networks and LSTM
- Timeseries forecasting for weather prediction

Generative Deep Learning

Image generation

- Denoising Diffusion Implicit Models
- A walk through latent space with Stable Diffusion
- DreamBooth
- Denoising Diffusion Probabilistic Models
- Teach StableDiffusion new concepts via Textual Inversion
- Fine-tuning Stable Diffusion
- Variational AutoEncoder
- GAN overriding Model.train_step
- WGAN-GP overriding Model.train_step
- Conditional GAN
- Data-efficient GANs with Adaptive Discriminator Augmentation
- Deep Dream
- GauGAN for conditional image generation
- Face image generation with StyleGAN
- Vector-Quantized Variational Autoencoders

Style transfer

- Neural style transfer
- Neural Style Transfer with AdaIN

Text generation

- GPT2 Text Generation with KerasNLP
- GPT text generation from scratch with KerasNLP
- Text generation with a miniature GPT
- Character-level text generation with LSTM
- Text Generation using FNet

Graph generation

- Drug Molecule Generation with VAE
- WGAN-GP with R-GCN for the generation of small molecular graphs

Other

- Density estimation using Real NVP

Audio Data

Speech recognition

- Automatic Speech Recognition with Transformer

Other

- Automatic Speech Recognition using CTC
- MelGAN-based spectrogram inversion using feature matching
- Speaker Recognition
- English speaker accent recognition using Transfer Learning
- Audio Classification with Hugging Face Transformers

Reinforcement Learning

- Actor Critic Method
- Proximal Policy Optimization
- Deep Q-Learning for Atari Breakout
- Deep Deterministic Policy Gradient (DDPG)

Graph Data

- Graph attention network (GAT) for node classification
- Node Classification with Graph Neural Networks
- Message-passing neural network (MPNN) for molecular property prediction
- Graph representation learning with node2vec

Quick Keras Recipes

Keras usage tips

- Parameter-efficient fine-tuning of Gemma with LoRA and QLoRA
- Float8 training and inference with a simple Transformer model
- Keras debugging tips
- Customizing the convolution operation of a Conv2D layer
- Trainer pattern
- Endpoint layer pattern
- Reproducibility in Keras Models
- Writing Keras Models With TensorFlow NumPy
- Simple custom layer example: Antirectifier
- Packaging Keras models for wide distribution using Functional Subclassing

Serving

- Serving TensorFlow models with TFServing

ML best practices

- Estimating required sample size for model training
- Memory-efficient embeddings for recommendation systems
- Creating TFRecords

Other

- Approximating non-Function Mappings with Mixture Density Networks
- Probabilistic Bayesian Neural Networks
- Knowledge distillation recipes
- Evaluating and exporting scikit-learn metrics in a Keras callback
- How to train a Keras model on TFRecord files

Adding a new code example

We welcome new code examples! Here are our rules:

New examples are added via Pull Requests to the keras.io repository. Each example must be submitted as a .py file that follows a specific format; these files are usually generated from Jupyter notebooks. See the tutobooks documentation for more details.
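As a rough illustration of the format described above, a tutobook-style .py file interleaves markdown cells (written as standalone string literals) with plain Python code cells. The header fields and titles shown here are illustrative placeholders; confirm the exact required fields in the tutobooks documentation.

```python
"""
Title: Simple MNIST convnet
Author: Jane Doe
Date created: 2024/01/01
Last modified: 2024/01/01
Description: Illustrative header block for a keras.io example.
"""

"""
## Introduction

Markdown cells are written as standalone string literals like this one;
the converter turns them into text cells in the generated notebook.
"""

# Plain Python between the string literals becomes a code cell.
values = [1, 2, 3]
print(sum(values))
```

The key structural idea is that a single .py file round-trips to a notebook: string literals become text cells, and everything else becomes code cells.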

If you would like to convert a Keras 2 example to Keras 3, please open a Pull Request to the keras.io repository.