Brain Tumors

Project name: Brain Tumor Segmentation
Programming language: Keras/Python


Introduction:

Task 1: Segmentation of gliomas in pre-operative MRI scans.
       The participants are called to address this task by using the provided clinically-acquired training data to develop their method and produce segmentation labels of the different glioma sub-regions. The sub-regions considered for evaluation are: 1) the “enhancing tumor” (ET), 2) the “tumor core” (TC), and 3) the “whole tumor” (WT) [see figure below]. The ET is described by areas that show hyper-intensity in T1Gd when compared to T1, but also when compared to “healthy” white matter in T1Gd. The TC describes the bulk of the tumor, which is what is typically resected. The TC entails the ET, as well as the necrotic (fluid-filled) and the non-enhancing (solid) parts of the tumor. The appearance of the necrotic (NCR) and the non-enhancing (NET) tumor core is typically hypo-intense in T1-Gd when compared to T1. The WT describes the complete extent of the disease, as it entails the TC and the peritumoral edema (ED), which is typically depicted by hyper-intense signal in FLAIR.
       The labels in the provided data are: 1 for NCR & NET, 2 for ED, 4 for ET, and 0 for everything else.
       The participants are called to upload their segmentation labels into CBICA’s Image Processing Portal for evaluation.
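The evaluation regions are nested supersets of the raw labels. A minimal sketch (assuming NumPy label volumes; the helper name is mine, not part of the challenge code) of turning a label map into the three binary region masks:

```python
# Assumed helper: build the nested BraTS evaluation regions from a label volume
# (1 = NCR & NET, 2 = ED, 4 = ET, 0 = background).
import numpy as np

def brats_regions(label_volume):
    et = label_volume == 4                     # enhancing tumor
    tc = np.isin(label_volume, (1, 4))         # tumor core = NCR/NET + ET
    wt = np.isin(label_volume, (1, 2, 4))      # whole tumor = TC + peritumoral edema
    return wt, tc, et
```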

Task 2: Prediction of patient overall survival (OS) from pre-operative scans.
       Once the participants produce their segmentation labels in the pre-operative scans, they will be called to use these labels in combination with the provided multimodal MRI data to extract imaging/radiomic features that they consider appropriate, and analyze them through machine learning algorithms, in an attempt to predict patient OS. The participants do not need to be limited to volumetric parameters, but can also consider intensity, morphologic, histogram-based, and textural features, as well as spatial information, and glioma diffusion properties extracted from glioma growth models.
       Note that participants are expected to provide predicted survival status only for subjects with resection status of GTR (i.e., Gross Total Resection).
      The participants are called to upload a .csv file with the subject ids and the predicted survival values into CBICA’s Image Processing Portal for evaluation.
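One hedged way to set up Task 2 (not necessarily the pipeline used here): compute a few volumetric features from the predicted labels and fit an off-the-shelf regressor. The feature set and the scikit-learn model below are illustrative assumptions only.

```python
# Illustrative sketch: volumetric features + a generic regressor for OS prediction.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def volumetric_features(label_volume, voxel_volume_mm3=1.0):
    """Volumes (mm^3) of whole tumor, tumor core and enhancing tumor."""
    wt = np.sum(np.isin(label_volume, (1, 2, 4))) * voxel_volume_mm3
    tc = np.sum(np.isin(label_volume, (1, 4))) * voxel_volume_mm3
    et = np.sum(label_volume == 4) * voxel_volume_mm3
    return [wt, tc, et]

# X: one feature row per GTR subject, y: overall survival in days (assumed given).
# model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
# Predictions are then written as "subject_id,survival" rows to the .csv for upload.
```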

Feel free to send any communication related to the BraTS challenge to brats2018@cbica.upenn.edu


Process:


Model:

Paper: Multi-level Activation for Segmentation of Hierarchically-nested Classes on 3D-Unet
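For orientation, a minimal 3D U-Net sketch in Keras. The depth, filter counts, and sigmoid output head (one channel per nested region) are assumptions for illustration, not the exact architecture of the paper.

```python
# Minimal 3D U-Net sketch (channels-last), assuming 4 input modalities and 3 output regions.
from keras.layers import (Input, Conv3D, MaxPooling3D, UpSampling3D,
                          concatenate, BatchNormalization, Activation)
from keras.models import Model

def conv_block(x, filters):
    for _ in range(2):
        x = Conv3D(filters, (3, 3, 3), padding='same')(x)
        x = BatchNormalization()(x)
        x = Activation('relu')(x)
    return x

def unet_3d(input_shape=(112, 112, 112, 4), n_labels=3):
    inputs = Input(input_shape)
    c1 = conv_block(inputs, 16)
    p1 = MaxPooling3D((2, 2, 2))(c1)
    c2 = conv_block(p1, 32)
    p2 = MaxPooling3D((2, 2, 2))(c2)
    c3 = conv_block(p2, 64)                                        # bottleneck
    u2 = concatenate([UpSampling3D((2, 2, 2))(c3), c2])
    c4 = conv_block(u2, 32)
    u1 = concatenate([UpSampling3D((2, 2, 2))(c4), c1])
    c5 = conv_block(u1, 16)
    outputs = Conv3D(n_labels, (1, 1, 1), activation='sigmoid')(c5)  # one channel per region
    return Model(inputs, outputs)
```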


Tricks:

Name             Methods
Normalization    BN / LN / IN / SN
Dropout          Spatial dropout / DropBlock / Targeted dropout
Reduce dim       1×1 kernel
Augment          STN / Deformable ConvNets
Context          SPP / ParseNet / PSP / HDC / ASPP / FPA
Upsampling       deconvolution / bilinear / DUC / GAU
CRF              dense CRF
Domain           connected-component post-processing
Early stop       yes
Loss             dice loss / jaccard loss / focal loss
Regularization   l1_l2 / weight constraint
Activation       ReLU / PReLU / LeakyReLU / ELU
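The Dice and Jaccard losses listed above can be written directly against the Keras backend. The smoothing constant and the negative-Dice convention (which would explain the negative loss values in the training log below) are assumptions, not necessarily the exact implementation used here.

```python
# Hedged sketch of Dice / Jaccard losses with a smoothing term.
from keras import backend as K

def dice_coefficient(y_true, y_pred, smooth=1.0):
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_loss(y_true, y_pred):
    return -dice_coefficient(y_true, y_pred)     # minimizing the negative Dice score

def jaccard_loss(y_true, y_pred, smooth=1.0):
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    union = K.sum(y_true_f) + K.sum(y_pred_f) - intersection
    return 1.0 - (intersection + smooth) / (union + smooth)
```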

Results:

Setup: Nvidia 1060 GPU, input volumes of 112×112×112
Training: Epoch 175/500 - loss: -0.7303 - jaccard: 0.6066 - val_loss: -0.6615 - val_jaccard: 0.5421
Scores:
1. WT: 0.8886
2. TC: 0.8049
3. ET: 0.7772
Leaderboard 2019: leaderboard

Loss function and box plot:

Dice Loss
Box plot

Raw data:

Segmentation results:

3D rendering
Left: ground truth; right: predicted segmentation

Question:

1. Interface N4BiasFieldCorrection failed to run.
      I am really new to this area. When I import N4BiasFieldCorrection from nipype.interfaces.ants, the import works fine, but when I run the Python code I get: OSError: command 'N4BiasFieldCorrection' could not be found on host chenruideMacBook-Pro.local. Interface N4BiasFieldCorrection failed to run. Is there anything else I should import or download?
      Nipype only gives you the interface to third-party software such as FSL and ANTs; you do not get the software itself when you install nipype. If you try to run N4BiasFieldCorrection at your prompt, you will get the same message: command not found.
Please install ANTs on your local machine!
Setup reference: ANTs
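Once ANTs is installed and its binaries are on PATH, the nipype interface can drive it. A minimal sketch with placeholder file names:

```python
# Sketch: nipype only wraps the ANTs executable, which must already be installed.
from nipype.interfaces.ants import N4BiasFieldCorrection

n4 = N4BiasFieldCorrection()
n4.inputs.dimension = 3                          # 3D MRI volume
n4.inputs.input_image = 'subject_t1.nii.gz'      # placeholder path
n4.inputs.output_image = 'subject_t1_n4.nii.gz'  # placeholder path
n4.run()   # still fails with "command not found" unless ANTs is on PATH
```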

2. Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)
Cause: the machine is fairly low-spec (Nvidia 1060) and the data are stored in HDF5 (h5) format. When the validation data are fetched at the end of an epoch, reading from the h5 file is too slow relative to training, so the data are not ready in time, the validation array comes back empty, and a segmentation fault occurs.
Solutions:
a. Store the data as npz instead of h5. The drawback is a lower compression ratio and larger files, but training then runs.
b. Keep the h5 file as-is; after entering get_training_and_validation_generator, extract the data from the h5 file into plain arrays (see the sketch below).
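A hedged sketch of workaround (b), assuming the HDF5 file holds datasets named 'data' and 'truth' (file and dataset names are placeholders):

```python
# Load the HDF5 contents fully into memory so validation batches are not read lazily from disk.
import h5py
import numpy as np

with h5py.File('brats_data.h5', 'r') as f:   # placeholder file name
    images = np.asarray(f['data'])           # entire image array into RAM
    labels = np.asarray(f['truth'])

# Pass these plain numpy arrays into get_training_and_validation_generator
# instead of the open h5 datasets.
```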

References:

1. MICCAI 
2. MICCAI Brats Papers 2016-2018
3. U-Net
4. U-Net 3D
5. Batch Normalization
6. Layer Normalization
7. Instance Normalization
8. Group Normalization
9. Switchable Normalization
10. Targeted Dropout
11. DropBlock
12. Spatial Dropout
13. SPPNet
14. FPN
15. ParseNet
16. DenseNet
17. PSPNet
18. Deeplab V1
19. Deeplab V2
20. Deeplab V3
21. Deeplab V3 +
22. Focal Loss
23. HDC DUC
24. DRN
25. STN
26. Deformable ConvNets
27. PAN:FPA-GAU
28. In-Place ABN
29. N4ITKBiasFieldCorrection
30. N3 correction
31. Drawbacks of pooling
32. How Batch Normalization works
33. Model ensembling
34. Recommended blogs
35. Code for everything listed above
36. Collection of all papers mentioned above
37. MICCAI dataset (extraction code: s6lt)

This post is compiled from articles found online, with my own understanding added. It is intended for personal study and research only and must not be used for any other purpose. For copyright concerns, please contact 513403849@qq.com.
