Code examples
Our code examples are short (less than 300 lines of code), focused demonstrations of vertical deep learning workflows. All of our examples are written as Jupyter notebooks and can be run in one click in Google Colab, a hosted notebook environment that requires no setup and runs in the cloud. Google Colab includes GPU and TPU runtimes.
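If you want to verify that a Colab runtime actually exposes an accelerator before running an example, a quick check from inside any notebook looks like the sketch below. It assumes TensorFlow is available (as it is by default in Colab) and uses the standard device-listing API:

import tensorflow as tf

# An empty list means the runtime is CPU-only; switch the Colab runtime
# type (Runtime > Change runtime type) to enable hardware acceleration.
print("GPUs:", tf.config.list_physical_devices("GPU"))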
★ = Good starter example
Image classification
★ Image classification from scratch
★ Simple MNIST convnet
★ Image classification via fine-tuning with EfficientNet
Image classification with Vision Transformer
Image Classification using BigTransfer (BiT)
Classification using Attention-based Deep Multiple Instance Learning
Image classification with modern MLP models
A mobile-friendly Transformer-based model for image classification
Pneumonia Classification on TPU
Compact Convolutional Transformers
Image classification with ConvMixer
Image classification with EANet (External Attention Transformer)
Involutional neural networks
Image classification with Perceiver
Few-Shot learning with Reptile
Semi-supervised image classification using contrastive pretraining with SimCLR
Image classification with Swin Transformers
Train a Vision Transformer on small datasets
A Vision Transformer without Attention
Image segmentation
★ Image segmentation with a U-Net-like architecture
Multiclass semantic segmentation using DeepLabV3+
Object detection
Object Detection with RetinaNet
Keypoint Detection with Transfer Learning
Object detection with Vision Transformers
3D
3D image classification from CT scans
Monocular depth estimation
3D volumetric rendering with NeRF
Point cloud classification
OCR
★ OCR model for reading Captchas
Handwriting recognition
Image enhancement
Convolutional autoencoder for image denoising
Low-light image enhancement using MIRNet
Image Super-Resolution using an Efficient Sub-Pixel CNN
Enhanced Deep Residual Networks for single-image super-resolution
Zero-DCE for low-light image enhancement
Data augmentation
CutMix data augmentation for image classification
MixUp augmentation for image classification
RandAugment for Image Classification for Improved Robustness
Image & Text
Image captioning
Natural language image search with a Dual Encoder
Vision models interpretability
Visualizing what convnets learn
Model interpretability with Integrated Gradients
Investigating Vision Transformer representations
Grad-CAM class activation visualization
Image similarity search
Near-duplicate image search
Semantic Image Clustering
Image similarity estimation using a Siamese Network with a contrastive loss
Image similarity estimation using a Siamese Network with a triplet loss
Metric learning for image similarity search
Metric learning for image similarity search using TensorFlow Similarity
Video
Video Classification with a CNN-RNN Architecture
Next-Frame Video Prediction with Convolutional LSTMs
Video Classification with Transformers
Video Vision Transformer
Other
Semi-supervision and domain adaptation with AdaMatch
Barlow Twins for Contrastive SSL
Class Attention Image Transformers with LayerScale
Consistency training with supervision
Distilling Vision Transformers
FixRes: Fixing train-test resolution discrepancy
Focal Modulation: A replacement for Self-Attention
Using the Forward-Forward Algorithm for Image Classification
Gradient Centralization for Better Training Performance
Knowledge Distillation
Learning to Resize in Computer Vision
Masked image modeling with Autoencoders
Self-supervised contrastive learning with NNCLR
Augmenting convnets with aggregated attention
Point cloud segmentation with PointNet
Semantic segmentation with SegFormer and Hugging Face Transformers
Self-supervised contrastive learning with SimSiam
Supervised Contrastive Learning
Learning to tokenize in Vision Transformers
Text classification
★ Text classification from scratch
Review Classification using Active Learning
Text Classification using FNet
Large-scale multi-label text classification
Text classification with Transformer
Text classification with Switch Transformer
Text classification using Decision Forests and pretrained embeddings
Using pre-trained word embeddings
Bidirectional LSTM on IMDB
Machine translation
★ English-to-Spanish translation with KerasNLP
English-to-Spanish translation with a sequence-to-sequence Transformer
Character-level recurrent sequence-to-sequence model
Entailment prediction
Multimodal entailment
Named entity recognition
Named Entity Recognition using Transformers
Sequence-to-sequence
Text Extraction with BERT
Sequence to sequence learning for performing number addition
Text similarity search
Semantic Similarity with BERT
Language modeling
End-to-end Masked Language Modeling with BERT
Pretraining BERT with Hugging Face Transformers
Other
Question Answering with Hugging Face Transformers
Abstractive Summarization with Hugging Face Transformers
Structured data classification
★ Structured data classification with FeatureSpace
★ Imbalanced classification: credit card fraud detection
Structured data classification from scratch
Structured data learning with Wide, Deep, and Cross networks
Classification with Gated Residual and Variable Selection Networks
Classification with TensorFlow Decision Forests
Classification with Neural Decision Forests
Structured data learning with TabTransformer
Recommendation
Collaborative Filtering for Movie Recommendations
A Transformer-based recommendation system
Timeseries classification
★ Timeseries classification from scratch
Timeseries classification with a Transformer model
Electroencephalogram Signal Classification for action identification
Anomaly detection
Timeseries anomaly detection using an Autoencoder
Timeseries forecasting
Traffic forecasting using graph neural networks and LSTM
Timeseries forecasting for weather prediction
Image generation
★ Denoising Diffusion Implicit Models
★ A walk through latent space with Stable Diffusion
DreamBooth
Denoising Diffusion Probabilistic Models
Teach StableDiffusion new concepts via Textual Inversion
Fine-tuning Stable Diffusion
Variational AutoEncoder
GAN overriding Model.train_step
WGAN-GP overriding Model.train_step
Conditional GAN
CycleGAN
Data-efficient GANs with Adaptive Discriminator Augmentation
Deep Dream
GauGAN for conditional image generation
PixelCNN
Face image generation with StyleGAN
Vector-Quantized Variational Autoencoders
Style transfer
Neural style transfer
Neural Style Transfer with AdaIN
Text generation
★ GPT text generation with KerasNLP
Text generation with a miniature GPT
Character-level text generation with LSTM
Text Generation using FNet
Graph generation
Drug Molecule Generation with VAE
WGAN-GP with R-GCN for the generation of small molecular graphs
Other
Density estimation using Real NVP
Audio
Automatic Speech Recognition using CTC
MelGAN-based spectrogram inversion using feature matching
Speaker Recognition
Automatic Speech Recognition with Transformer
English speaker accent recognition using Transfer Learning
Audio Classification with Hugging Face Transformers
Reinforcement learning
Actor Critic Method
Deep Deterministic Policy Gradient (DDPG)
Deep Q-Learning for Atari Breakout
Proximal Policy Optimization
Graph data
Graph attention network (GAT) for node classification
Node Classification with Graph Neural Networks
Message-passing neural network (MPNN) for molecular property prediction
Graph representation learning with node2vec
Quick Keras recipes
Simple custom layer example: Antirectifier
Probabilistic Bayesian Neural Networks
Knowledge distillation recipes
Creating TFRecords
Keras debugging tips
Endpoint layer pattern
Memory-efficient embeddings for recommendation systems
A Quasi-SVM in Keras
Estimating required sample size for model training
Evaluating and exporting scikit-learn metrics in a Keras callback
Customizing the convolution operation of a Conv2D layer
Writing Keras Models With TensorFlow NumPy
Serving TensorFlow models with TFServing
How to train a Keras model on TFRecord files
Trainer pattern
Adding a new code example
We welcome new code examples! Here are our rules:
- They should be shorter than 300 lines of code (comments may be as long as you want).
- They should demonstrate modern Keras / TensorFlow 2 best practices.
- They should be substantially different in topic from all examples listed above.
- They should be extensively documented & commented.
New examples are added via Pull Requests to the keras.io repository. They must be submitted as a .py file that follows a specific format. They are usually generated from Jupyter notebooks. See the tutobooks documentation for more details.
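For illustration, a submission following the tutobooks convention looks roughly like the sketch below: Markdown text cells are written as module-level docstrings, and code cells are ordinary Python in between. The header fields mirror those used by existing keras.io examples; the title, author, and dates here are placeholders, and the tutobooks documentation remains the authoritative reference for the exact format.

"""
Title: My example title
Author: Your Name
Date created: 2023/01/01
Last modified: 2023/01/01
Description: One-sentence description of what the example demonstrates.
"""

"""
## Introduction

Text cells like this one are module-level docstrings containing Markdown.
"""

import tensorflow as tf
from tensorflow import keras

# Code cells are plain Python between the docstring text cells.
model = keras.Sequential([keras.layers.Dense(1)])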