This model uses a novel data-driven approach to synthesize filamentary structured images, such as retinal fundus images and neuronal images, from a given ground-truth segmentation. The model is inspired by recent progress in generative adversarial networks (GANs) and image style transfer. It learns a direct mapping from a segmentation back to the raw filamentary structured image. The model has three components: a generator, a discriminator, and, in the Fila-sGAN variant, a feature network. The generator takes a segmentation map and a noise vector as input and produces a colour image. The discriminator's task is to distinguish real images from fake ones, i.e. the generated phantoms. The approach is evaluated on four standard benchmarks, namely DRIVE, STARE, High-Resolution Fundus (HRF), and a 2D neuronal image dataset, covering a wide variety of filamentary structured images including both retinal blood vessels and neurons. A minimal code sketch of this generator/discriminator setup is given below.
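The conditional GAN setup described above can be made concrete with a small sketch. The following PyTorch code is a hypothetical illustration, not the released Fila-(s)GAN implementation: the layer sizes, the way the noise vector is projected and tiled, and the patch-style discriminator are all assumptions, and the Fila-sGAN feature network (style/perceptual losses) is omitted for brevity.

```python
# Illustrative sketch only (assumed architecture, not the authors' code).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a binary segmentation map plus a noise vector to a colour image."""
    def __init__(self, noise_dim=64):
        super().__init__()
        self.noise_proj = nn.Linear(noise_dim, 1)  # project noise to one scalar per sample
        self.net = nn.Sequential(
            nn.Conv2d(1 + 1, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),  # 3-channel colour output
        )

    def forward(self, seg, z):
        # Tile the projected noise over the spatial dimensions of the segmentation.
        n, _, h, w = seg.shape
        z_map = self.noise_proj(z).view(n, 1, 1, 1).expand(n, 1, h, w)
        return self.net(torch.cat([seg, z_map], dim=1))

class Discriminator(nn.Module):
    """Scores (image, segmentation) pairs as real or synthetic, patch-wise."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, stride=2, padding=1),  # real/fake score map
        )

    def forward(self, img, seg):
        return self.net(torch.cat([img, seg], dim=1))

# Toy usage: one forward pass with a 64x64 segmentation map.
seg = torch.rand(2, 1, 64, 64).round()   # stand-in binary vessel mask
z = torch.randn(2, 64)                   # noise vector
G, D = Generator(), Discriminator()
fake = G(seg, z)                         # synthesized colour phantom
score = D(fake, seg)                     # discriminator response
print(fake.shape, score.shape)
```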
Input Variables : Ground Truth Image
Output Variables : Synthetic Image
Visit Model : web.bii.a-star.edu.sg
Additional links : web.bii.a-star.edu.sg
Model Category : Public
Date Published : June 2017
Healthcare Domain : Medical Technology
Code : web.bii.a-star.edu.sg
Tags : Medical Imaging, Data Privacy, Image Synthesis, Synthetic Data Generation