
# MODNet: Is a Green Screen Really Necessary for Real-Time Portrait Matting?

[Arxiv Preprint](https://arxiv.org/pdf/2011.11961.pdf) | [Supplementary Video](https://youtu.be/PqJ3BRHX3Lc)

WebCam Video Demo [[Offline](demo/video_matting/webcam)][[Colab](https://colab.research.google.com/drive/1Pt3KDSc2q7WxFvekCnCLD8P0gBEbxm6J?usp=sharing)] | Custom Video Demo [[Offline](demo/video_matting/custom)] | Image Demo [[WebGUI](https://www.gradio.app/hub/aliabd/modnet)][[Colab](https://colab.research.google.com/drive/1GANpbKT06aEFiW-Ssx0DQnnEADcXwQG6?usp=sharing)]

This is the official project of our paper *Is a Green Screen Really Necessary for Real-Time Portrait Matting?*
MODNet is a trimap-free model for portrait matting in real time under changing scenes.
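
To make "trimap-free" concrete: the model takes a single RGB portrait image and directly predicts an alpha matte, with no auxiliary trimap input. Below is a minimal, hedged inference sketch, assuming the `MODNet` class from `src/models/modnet.py` and the preprocessing used by the official demos; the checkpoint and image paths are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms

from src.models.modnet import MODNet  # model definition from this repo

# The official demos normalize RGB inputs to [-1, 1].
to_tensor = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])

modnet = MODNet(backbone_pretrained=False)
# The released checkpoints were saved from an nn.DataParallel wrapper,
# so we wrap before loading; the checkpoint path is a placeholder.
modnet = nn.DataParallel(modnet)
modnet.load_state_dict(torch.load('modnet_photographic_portrait_matting.ckpt',
                                  map_location='cpu'))
modnet.eval()

im = to_tensor(Image.open('portrait.jpg').convert('RGB')).unsqueeze(0)
# The network's strides require input sides to be multiples of 32.
_, _, h, w = im.shape
im = F.interpolate(im, size=(h // 32 * 32, w // 32 * 32), mode='area')

with torch.no_grad():
    # The second argument toggles inference mode; the third output is the
    # predicted alpha matte with values in [0, 1].
    _, _, matte = modnet(im, True)
```

For the canonical preprocessing and checkpoint handling, defer to the demo scripts linked below.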
---

## News

- [Mar 12 2021] Support [TorchScript version](torchscript) of MODNet (from the community).
- [Feb 19 2021] Support [ONNX version](onnx) of MODNet (from the community).
- [Jan 28 2021] Release the [code](src/trainer.py) of the MODNet training iteration.
- [Dec 25 2020] ***Merry Christmas!*** :christmas_tree: Release Custom Video Matting Demo [[Offline](demo/video_matting/custom)] for user videos.
- [Dec 10 2020] Release WebCam Video Matting Demo [[Offline](demo/video_matting/webcam)][[Colab](https://colab.research.google.com/drive/1Pt3KDSc2q7WxFvekCnCLD8P0gBEbxm6J?usp=sharing)] and Image Matting Demo [[Colab](https://colab.research.google.com/drive/1GANpbKT06aEFiW-Ssx0DQnnEADcXwQG6?usp=sharing)].
- [Nov 24 2020] Release [Arxiv Preprint](https://arxiv.org/pdf/2011.11961.pdf) and [Supplementary Video](https://youtu.be/PqJ3BRHX3Lc).

## Demos

### Video Matting

We provide two real-time portrait video matting demos based on WebCam. When using the demo, you can move the WebCam around at will. If you have an Ubuntu system, we recommend trying the [offline demo](demo/video_matting/webcam) for a higher *fps*. Otherwise, you can access the [online Colab demo](https://colab.research.google.com/drive/1Pt3KDSc2q7WxFvekCnCLD8P0gBEbxm6J?usp=sharing). We also provide an [offline demo](demo/video_matting/custom) that allows you to process custom videos.

### Image Matting

We provide an [online Colab demo](https://colab.research.google.com/drive/1GANpbKT06aEFiW-Ssx0DQnnEADcXwQG6?usp=sharing) for portrait image matting. It allows you to upload portrait images and predict/visualize/download the alpha mattes.

### Community

Here we share some cool applications/extensions of MODNet built by the community.

- **WebGUI for Image Matting**: You can try [this WebGUI](https://www.gradio.app/hub/aliabd/modnet) (hosted on [Gradio](https://www.gradio.app/)) for portrait matting from your browser, without any code!
- **Colab Demo of Bokeh (Blur Background)**: You can try [this Colab demo](https://colab.research.google.com/github/eyaler/avatars4all/blob/master/yarok.ipynb) (built by [@eyaler](https://github.com/eyaler)) to blur the background based on MODNet!
- **ONNX Version of MODNet**: You can convert the pre-trained MODNet to an ONNX model by using [this code](onnx) (provided by [@manthan3C273](https://github.com/manthan3C273)). You can also try [this Colab demo](https://colab.research.google.com/drive/1P3cWtg8fnmu9karZHYDAtmm1vj1rgA-f?usp=sharing) for MODNet image matting (ONNX version).
- **TorchScript Version of MODNet**: You can convert the pre-trained MODNet to a TorchScript model by using [this code](torchscript) (provided by [@yarkable](https://github.com/yarkable)). For both conversions, see also the illustrative export sketch in the *Example Sketches* section below.

## Code

We provide the [code](src/trainer.py) of the MODNet training iteration, including:

- **Supervised Training**: Train MODNet on a labeled matting dataset.
- **SOC Adaptation**: Adapt a trained MODNet to an unlabeled dataset.

The function comments include examples of how to call each function; a minimal calling sketch is also given in the *Example Sketches* section below.

## TODO

- Release the code of One-Frame Delay
- Release the PPM-100 validation benchmark (**Delayed, But On The Way...**)

**NOTE**: PPM-100 is a **validation set**. Our training set will not be published.

## License

This project (**code, pre-trained models, demos, *etc.***) is released under the [Creative Commons Attribution NonCommercial ShareAlike 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode) license.

**NOTE**: The license will be changed to allow commercial use after this work is accepted by a conference or a journal.
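## Example Sketches

The snippets below are illustrative sketches only, not the official scripts; the reference implementations are the linked [onnx](onnx) and [torchscript](torchscript) folders and the docstrings in [src/trainer.py](src/trainer.py).

For the community ONNX/TorchScript conversions listed above, exporting a loaded MODNet with stock PyTorch utilities might look like this. The `MattePredictor` wrapper and the output file names are placeholders of our own; note that the community [onnx](onnx) code adapts the model's resizing ops for export, which this plain sketch does not.

```python
import torch

class MattePredictor(torch.nn.Module):
    """Hypothetical wrapper: fixes the inference flag and returns only the
    matte, giving the exporters a single-input, single-output graph."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, image):
        _, _, matte = self.model(image, True)
        return matte

# `modnet` is a loaded, eval-mode MODNet as in the inference sketch above;
# if it is wrapped in nn.DataParallel, unwrap it via `modnet.module` first.
wrapper = MattePredictor(modnet.module).eval()
dummy = torch.randn(1, 3, 512, 512)  # fixed-size stand-in for a real batch

# TorchScript export via tracing.
traced = torch.jit.trace(wrapper, dummy)
traced.save('modnet_traced.pt')

# ONNX export; dynamic_axes let batch and image size vary at runtime.
torch.onnx.export(
    wrapper, dummy, 'modnet.onnx',
    input_names=['image'], output_names=['matte'],
    dynamic_axes={'image': {0: 'batch', 2: 'height', 3: 'width'},
                  'matte': {0: 'batch', 2: 'height', 3: 'width'}},
    opset_version=11,
)
```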
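For the **Code** section above, a minimal calling sketch of the two training iterations follows. It assumes the function names exposed by [src/trainer.py](src/trainer.py) (`supervised_training_iter` and `soc_adaptation_iter`); `labeled_loader` and `unlabeled_loader` are placeholder dataloaders you must implement, and the docstrings in `src/trainer.py` remain the authoritative reference for signatures and hyperparameters.

```python
import copy
import torch
from src.models.modnet import MODNet
# Assumed function names, matching the iteration code in src/trainer.py.
from src.trainer import supervised_training_iter, soc_adaptation_iter

modnet = torch.nn.DataParallel(MODNet()).cuda()
optimizer = torch.optim.SGD(modnet.parameters(), lr=0.01, momentum=0.9)

# Supervised training: each batch provides an image, a trimap, and a
# ground-truth matte. `labeled_loader` is a placeholder.
for epoch in range(40):
    for image, trimap, gt_matte in labeled_loader:
        semantic_loss, detail_loss, matte_loss = supervised_training_iter(
            modnet, optimizer, image.cuda(), trimap.cuda(), gt_matte.cuda())

# SOC adaptation: fine-tune the trained model on unlabeled images, using a
# frozen copy of itself as the consistency target. `unlabeled_loader` is
# likewise a placeholder.
for epoch in range(10):
    backup_modnet = copy.deepcopy(modnet)
    for image in unlabeled_loader:
        soc_semantic_loss, soc_detail_loss = soc_adaptation_iter(
            modnet, backup_modnet, optimizer, image.cuda())
```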
## Acknowledgement

- We thank [City University of Hong Kong](https://www.cityu.edu.hk/) and [SenseTime](https://www.sensetime.com/) for their support of this project.
- We thank [the Gradio team](https://github.com/gradio-app/gradio), [@eyaler](https://github.com/eyaler), [@manthan3C273](https://github.com/manthan3C273), and [@yarkable](https://github.com/yarkable) for their contributions to this repository or their cool applications based on MODNet.

## Citation

If this work helps your research, please consider citing:

```bibtex
@article{MODNet,
  author  = {Zhanghan Ke and Kaican Li and Yurou Zhou and Qiuhua Wu and Xiangyu Mao and Qiong Yan and Rynson W.H. Lau},
  title   = {Is a Green Screen Really Necessary for Real-Time Portrait Matting?},
  journal = {ArXiv},
  volume  = {abs/2011.11961},
  year    = {2020},
}
```

## Contact

This project is currently maintained by Zhanghan Ke ([@ZHKKKe](https://github.com/ZHKKKe)). If you have any questions, please feel free to contact `kezhanghan@outlook.com`.