HonestQiao posted on 2024-5-9 00:03

"Deep Learning and Medical Image Processing" [Study Notes 6]: Medical Image Classification and Model Training in Practice

<div class='showpostmsg'> This post was last edited by HonestQiao on 2024-5-10 08:01

<p>Chapter 5 of "Deep Learning and Medical Image Processing" focuses on the classification of medical images.</p>

<p>Classification here means using deep learning tools and algorithms to train a classifier on medical images, so that the trained model can then predict the class of new image data.</p>

<p>&nbsp;</p>

<p><strong>1. Content Mind Map</strong></p>

<p>The mind map for this chapter is shown below:</p>

<div style="text-align: center;"></div>

<p>&nbsp;</p>

<p>The figure above involves a great many concepts. Although the book provides code for each of them, the chapter is hard going without some background, and the code as printed may not run as-is.</p>

<p>&nbsp;</p>

<p>After reading it several times and experimenting, I finally managed to work through the hands-on exercise successfully.</p>

<p>&nbsp;</p>

<p><strong>2. Environment Setup</strong></p>

<p>First, create a dedicated Python environment:</p>

<pre>
<code class="language-bash">conda create -n doctor python==3.10
conda activate doctor</code></pre>

<p>Then install the required packages:</p>

<pre>
<code>pip install pydicom
pip install numpy
pip install scikit-image
pip install scikit-learn
pip install tensorflow==2.8.0
pip install kaggle</code></pre>

<p>Once the environment is ready, check whether TensorFlow can actually use the GPU; otherwise training will be drastically slower.</p>

<pre>
<code>import tensorflow as tf
tf.config.list_physical_devices('GPU')</code></pre>

<p>If the output looks like the following, TensorFlow can use the GPU normally:</p>
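<p>On a machine where TensorFlow can see the GPU, the call typically returns a non-empty device list along these lines (the exact text may differ slightly between TensorFlow versions):</p>

<pre>
<code class="language-python">&gt;&gt;&gt; tf.config.list_physical_devices('GPU')
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]</code></pre>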

<p> &nbsp;</p>

<p>If, however, the last step produces output like this:</p>
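<p>that is, the same call returns an empty device list:</p>

<pre>
<code class="language-python">&gt;&gt;&gt; tf.config.list_physical_devices('GPU')
[]</code></pre>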

<p>&nbsp; then the GPU cannot be used, everything falls back to the CPU, and training will be much slower.</p>

<p>&nbsp;</p>

<p><strong>3. Downloading the Test Dataset</strong></p>

<p>The dataset used here is the one from the <a href="https://www.rsna.org/rsnai/ai-image-challenge/rsna-intracranial-hemorrhage-detection-challenge-2019" target="_blank">&nbsp;RSNA Intracranial Hemorrhage Detection Challenge (2019) </a> competition, and it can be downloaded directly with kaggle:</p>

<pre>
<code class="language-bash">kaggle competitions download -c rsna-intracranial-hemorrhage-detection</code></pre>

<p>Before downloading, you first need to register an account at&nbsp;<a href="https://www.kaggle.com/competitions/rsna-intracranial-hemorrhage-detection" target="_blank">RSNA Intracranial Hemorrhage Detection</a>&nbsp;and accept the competition rules.</p>
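<p>One practical note: the kaggle command-line tool only works after a Kaggle API token has been installed. A minimal setup sketch (the token location is the CLI's default; the download path below simply matches the data_root used in the code later on):</p>

<pre>
<code class="language-bash"># create an API token on the Kaggle "Account" page, then put kaggle.json where the CLI expects it
mkdir -p ~/.kaggle
mv ~/Downloads/kaggle.json ~/.kaggle/kaggle.json
chmod 600 ~/.kaggle/kaggle.json

# download and unpack the competition data
kaggle competitions download -c rsna-intracranial-hemorrhage-detection
unzip -q rsna-intracranial-hemorrhage-detection.zip -d /workspace/rsna-intracranial-hemorrhage-detection</code></pre>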

<p>&nbsp;</p>

<p>The downloaded dataset is a single 182 GB archive:</p>

<p> &nbsp;</p>

<p>After extraction it takes up 431 GB:</p>

<p> &nbsp;</p>

<p>You will need enough free disk space to work through this exercise.</p>

<p>&nbsp;</p>

<p>With the test dataset in place, you can follow the steps in the book for data preprocessing, training, and testing.</p>

<p>&nbsp;</p>

<p><strong>4. Data Preprocessing</strong></p>

<p>The corresponding preprocessing code is as follows:</p>

<pre>
<code class="language-python">import os
import csv

import numpy as np
import pydicom as dicom
from skimage.transform import resize
from sklearn.metrics import roc_auc_score, precision_score, recall_score, f1_score
# import tensorflow as tf
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

def z_score_norm(img, mean=None, std=None):
    if mean is None:
      mean = np.mean(img)
    if std is None:
      std = np.std(img)
    return (img - mean) / (std + 1e-6)


#=============Process original data &amp; Save train and test npy files
def get_random_data(train_label_csv, pick_num):
    kaggle_list = {
      'epidural': 0,
      'intraparenchymal': 1,
      'intraventricular': 2,
      'subarachnoid': 3,
      'subdural': 4,
      'any': 5,
    }

    train_split_list = {
      'any': [],
      'epidural': [],
      'intraparenchymal': [],
      'intraventricular': [],
      'subarachnoid': [],
      'subdural': [],
    }
    f = csv.reader(open(train_label_csv))
    next(f)  # skip the CSV header row (ID,Label)
    train_dict = [0, 0, 0, 0, 0, 0]
    for ID_info, label in list(f):
      tok = ID_info.split('_')
      ID = '_'.join(tok[:2])
      ICH_name = tok[2]
      if int(label) == 1 and ICH_name != 'any':
            train_dict[kaggle_list[ICH_name]] = int(label)
      if ICH_name == 'any':
            key_candidate = np.where(np.array(train_dict) &gt; 0)[0]
            if len(key_candidate) == 0:
                key = 'any'
            else:
                key = list(kaggle_list.keys())[key_candidate[0]]
            train_split_list[key].append([ID, train_dict])
            train_dict = [0, 0, 0, 0, 0, 0]

    res_list = []
    for i, key in enumerate(train_split_list):
      np.random.shuffle(train_split_list[key])
      res_list.append(train_split_list[key][:pick_num[i]])
    return res_list

def dicom_2_npy(path_list):
    npy_list = []
    for path in path_list:
      try:
            ds = dicom.dcmread(path)  # dcmread replaces the deprecated read_file
      except:
            print('bad', path)
            continue
      pix = ds.pixel_array
      Intercept = ds.RescaleIntercept
      Slope = ds.RescaleSlope
      pix=pix*Slope + Intercept
      if pix.shape != (512,512):
            pix = resize(pix, (512,512))
      npy_list.append(np.reshape(pix, (1,512,512)))
    dicom_npy = np.concatenate(npy_list, axis=0)
    return dicom_npy

def save_train_test_npy(res_list, data_path, output_path, data_set_dict, read_count):
    for label, (train_num, test_num) in data_set_dict.items():
      res_info = res_list[label]
      path_list = []
      label_list = []
      for i in range(train_num):
            ID, la = res_info[read_count[label] + i]
            dcm_name = ID + '.dcm'
            path_list.append(os.path.join(data_path, dcm_name))
            label_list.append(la)
      read_count[label] += train_num
      dicom_npy = dicom_2_npy(path_list)
      np.save(os.path.join(output_path, 'trainval_img_%d_%d.npy'%(train_num, label)), dicom_npy)
      np.save(os.path.join(output_path, 'trainval_label_%d_%d.npy'%(train_num, label)), label_list)

      path_list = []
      label_list = []
      for i in range(test_num):
            ID, la = res_info[read_count[label] + i]
            dcm_name = ID + '.dcm'
            path_list.append(os.path.join(data_path, dcm_name))
            label_list.append(la)
      read_count[label] += test_num
      dicom_npy = dicom_2_npy(path_list)
      np.save(os.path.join(output_path, 'test_img_%d_%d.npy'%(test_num, label)), dicom_npy)
      np.save(os.path.join(output_path, 'test_label_%d_%d.npy'%(test_num, label)), label_list)

      print('save %d done'%label)
    return

# =============
# data_root = './data/'
data_root = '/workspace/rsna-intracranial-hemorrhage-detection'
train_img_path = os.path.join(data_root, 'stage_2_train')
train_label = os.path.join(data_root, 'stage_2_train.csv')
npy_path = os.path.join(data_root, 'stage_2_train_npy')
if not os.path.exists(npy_path):
    os.mkdir(npy_path)
# number of samples to pick per class (train + test)
res_list = get_random_data(train_label, pick_num=[6000, 1200, 1200, 1200, 1200, 1200])
read_count = {0:0, 1:0, 2:0, 3:0, 4:0, 5:0}
# (train_num, test_num) for each class bucket
data_set = {
    0: (5000, 1000),
    1: (1000, 200),
    2: (1000, 200),
    3: (1000, 200),
    4: (1000, 200),
    5: (1000, 200),
}
save_train_test_npy(res_list, train_img_path, npy_path, data_set, read_count)
</code></pre>

<p>&nbsp;</p>

<p>Run it directly with python; the output looks like this:</p>

<p> &nbsp;</p>

<p>&nbsp;</p>

<p>In the end you get .npy files containing the raw image and label data:</p>

<p> &nbsp;</p>

<p><strong>5. Model Training</strong></p>

<p>Following the example in the book, the training code is as follows:</p>

<pre>
<code class="language-python">import os
import csv

import numpy as np
import pydicom as dicom
from skimage.transform import resize
from sklearn.metrics import roc_auc_score, precision_score, recall_score, f1_score
# import tensorflow as tf
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
tf.logging.set_verbosity(tf.logging.ERROR)

def z_score_norm(img, mean=None, std=None):
    if mean is None:
      mean = np.mean(img)
    if std is None:
      std = np.std(img)
    return (img - mean) / (std + 1e-6)


#=============Process original data &amp; Save train and test npy files
def get_random_data(train_label_csv, pick_num):
    kaggle_list = {
      'epidural': 0,
      'intraparenchymal': 1,
      'intraventricular': 2,
      'subarachnoid': 3,
      'subdural': 4,
      'any': 5,
    }

    train_split_list = {
      'any': [],
      'epidural': [],
      'intraparenchymal': [],
      'intraventricular': [],
      'subarachnoid': [],
      'subdural': [],
    }
    f = csv.reader(open(train_label_csv))
    next(f)  # skip the CSV header row (ID,Label)
    train_dict = [0, 0, 0, 0, 0, 0]
    for ID_info, label in list(f):
      tok = ID_info.split('_')
      ID = '_'.join(tok[:2])
      ICH_name = tok[2]
      if int(label) == 1 and ICH_name != 'any':
            train_dict[kaggle_list[ICH_name]] = int(label)
      if ICH_name == 'any':
            key_candidate = np.where(np.array(train_dict) &gt; 0)[0]
            if len(key_candidate) == 0:
                key = 'any'
            else:
                key = list(kaggle_list.keys())[key_candidate[0]]
            train_split_list[key].append([ID, train_dict])
            train_dict = [0, 0, 0, 0, 0, 0]

    res_list = []
    for i, key in enumerate(train_split_list):
      np.random.shuffle(train_split_list[key])
      res_list.append(train_split_list[key][:pick_num[i]])
    return res_list

def dicom_2_npy(path_list):
    npy_list = []
    for path in path_list:
      try:
            ds = dicom.dcmread(path)  # dcmread replaces the deprecated read_file
      except:
            print('bad', path)
            continue
      pix = ds.pixel_array
      Intercept = ds.RescaleIntercept
      Slope = ds.RescaleSlope
      pix=pix*Slope + Intercept
      if pix.shape != (512,512):
            pix = resize(pix, (512,512))
      npy_list.append(np.reshape(pix, (1,512,512)))
    dicom_npy = np.concatenate(npy_list, axis=0)
    return dicom_npy

def save_train_test_npy(res_list, data_path, output_path, data_set_dict, read_count):
    for label, (train_num, test_num) in data_set_dict.items():
      res_info = res_list[label]
      path_list = []
      label_list = []
      for i in range(train_num):
            ID, la = res_info[read_count[label] + i]
            dcm_name = ID + '.dcm'
            path_list.append(os.path.join(data_path, dcm_name))
            label_list.append(la)
      read_count[label] += train_num
      dicom_npy = dicom_2_npy(path_list)
      np.save(os.path.join(output_path, 'trainval_img_%d_%d.npy'%(train_num, label)), dicom_npy)
      np.save(os.path.join(output_path, 'trainval_label_%d_%d.npy'%(train_num, label)), label_list)

      path_list = []
      label_list = []
      for i in range(test_num):
            ID, la = res_info[read_count[label] + i]
            dcm_name = ID + '.dcm'
            path_list.append(os.path.join(data_path, dcm_name))
            label_list.append(la)
      read_count[label] += test_num
      dicom_npy = dicom_2_npy(path_list)
      np.save(os.path.join(output_path, 'test_img_%d_%d.npy'%(test_num, label)), dicom_npy)
      np.save(os.path.join(output_path, 'test_label_%d_%d.npy'%(test_num, label)), label_list)

      print('save %d done'%label)
    return

#=============
# data_root = './data/'
# train_img_path = os.path.join(data_root, 'stage_2_train')
# train_label = os.path.join(data_root, 'stage_2_train.csv')
# npy_path = os.path.join(data_root, 'stage_2_train_npy')
# if not os.path.exists(npy_path):
#   os.mkdir(npy_path)
# res_list = get_random_data(train_label, pick_num=)
# read_count = {0:0, 1:0, 2:0, 3:0, 4:0, 5:0}
# data_set = {
#   0:,
#   1:,
#   2:,
#   3:,
#   4:,
#   5:,
# }
# save_train_test_npy(res_list, train_img_path, npy_path, data_set, read_count)



def data_generator(data, label, batch_size, is_training):
    data = np.array(data)
    label = np.array(label)
    n_step = len(data) // batch_size
    while True:
      if is_training:
            np.random.seed(123)
            np.random.shuffle(data)
            np.random.seed(123)
            np.random.shuffle(label)
      for step in range(n_step):
            x = data[step * batch_size:(step + 1) * batch_size]
            x = np.array(x)
            x = np.expand_dims(x, -1)
            y = label[step * batch_size:(step + 1) * batch_size]
            y = np.array(y)
            yield x,y

def _conv_bn_relu(x, filters, kernel_size=3, strides=1, padding='same'):
    x = tf.layers.conv2d(x, filters=filters, kernel_size=kernel_size, strides=strides, padding=padding)
    x = tf.layers.batch_normalization(x)
    x = tf.nn.relu(x)
    return x

def conv_res_block(x, filters, padding='same', num_layers=1):
    if num_layers == 1:
      x = _conv_bn_relu(x, filters, kernel_size=7, strides=2,)
      return x
    for _ in range(num_layers):
      input_shape = x.get_shape()
      if input_shape[-1] != filters:
            shortcut = _conv_bn_relu(x, filters, kernel_size=1, strides=2, padding=padding)
            x = _conv_bn_relu(x, filters//4, kernel_size=1, strides=2, padding=padding)
      else:
            shortcut = x
            x = _conv_bn_relu(x, filters//4, kernel_size=1, padding=padding)
      x = _conv_bn_relu(x, filters//4, kernel_size=3, padding=padding)
      x = _conv_bn_relu(x, filters, kernel_size=1, padding=padding)
      x = x + shortcut
    return x

def build_res_network(inputs, n_filters, n_class, rate=.2, is_training=True):
    l1 = conv_res_block(inputs, n_filters, num_layers=1)
    l2 = conv_res_block(l1, n_filters*4, num_layers=3)
    l3 = conv_res_block(l2, n_filters*8, num_layers=4)
    l4 = conv_res_block(l3, n_filters*16, num_layers=23)
    l5 = conv_res_block(l4, n_filters*32, num_layers=3)
    block_shape = l5.get_shape()
    pool2 = tf.layers.average_pooling2d(l5, pool_size=(block_shape[1], block_shape[2]),
                              strides=(1,1))
    fc = tf.layers.flatten(pool2)
    fc = tf.layers.dense(fc, units=512)
    fc = tf.layers.dropout(fc, rate=rate, training=is_training)
    fc = tf.layers.dense(fc, units=128)
    fc = tf.layers.dropout(fc, rate=rate, training=is_training)
    out = tf.layers.dense(fc, units=n_class)
    out = tf.nn.softmax(out)
    return out

#=============compute mean&amp;std
def compute_mean_std(npy_path):
    img_path = []
    for f in os.listdir(npy_path):
      if ".npy" in f and "img" in f:
            img_path.append(f)
    data = []
    for img_file in img_path:
      images = np.load(os.path.join(npy_path, img_file))
      data.extend(images)
    mean = np.mean(data)
    std = np.std(data)
    np.save('./mean_std.npy', [mean, std])

# compute_mean_std(npy_path)


def load_split_data(img_path, label_path, thre=.8):
    train_data = []
    valid_data = []
    train_label = []
    valid_label = []
    for img_file, label_file in zip(img_path, label_path):
      images = np.load(img_file)
      labels = np.load(label_file)
      images = images[:,64:448,64:448]
      labels = np.clip(labels.sum(axis=-1), 0, 1)
      labels = np.expand_dims(labels, -1)
      labels = np.concatenate([1 - labels, labels], axis=-1)

      split_num = int(len(images)*thre)
      train_data.extend(images[:split_num])
      valid_data.extend(images[split_num:])
      train_label.extend(labels[:split_num])
      valid_label.extend(labels[split_num:])
    (mean, std) = np.load('mean_std.npy')
    train_data = z_score_norm(train_data, mean, std)
    valid_data = z_score_norm(valid_data, mean, std)
    return train_data, train_label, valid_data, valid_label

def cross_entropy(labels, logits):
    return -tf.reduce_mean(labels*tf.math.log(tf.clip_by_value(logits,1e-10, 1.0-1e-7)), name='cross_entropy')

def train(train_img_npy, train_label_npy, save_model_dir, batch_size=32, epochs=100, \
    input_size=384, n_class=2, first_channels=64, lr=0.0001, display_step=100):
    print("start train")
    train_data, train_label, valid_data, valid_label = load_split_data(train_img_npy, train_label_npy, thre=0.9)
    train_dataset = data_generator(train_data, train_label, batch_size, True)
    val_dataset = data_generator(valid_data, valid_label, 1, False)
    step_train = len(train_data)//batch_size
    # step_train = 500
    step_valid = len(valid_data)
    is_training = tf.placeholder(tf.bool)

    input = tf.placeholder(dtype=tf.float32, shape=[None, input_size, input_size, 1])

    print('build network')
    logit = build_res_network(input, first_channels, n_class, rate=.2, is_training=is_training)
    label = tf.placeholder(dtype=tf.float32, shape=[None, n_class])
    loss_tf = cross_entropy(labels=label, logits=logit)
    global_step = tf.Variable(0, name='global_step', trainable=False)
    optimizer = tf.train.AdamOptimizer(learning_rate=lr)
    train_opt = optimizer.minimize(loss_tf, global_step=global_step)
    init_op = tf.global_variables_initializer()
    saver = tf.train.Saver(max_to_keep=epochs)

    print("start optimize")
    config = tf.compat.v1.ConfigProto()
    config.gpu_options.allow_growth=True
    with tf.compat.v1.Session(config=config) as sess:
      sess.run(init_op)
      min_loss = 10.0
      for epoch in range(epochs):
            total_loss = []
            print('*'*20, 'Train Epoch %d'%epoch, '*'*20)
            for step in range(step_train):
                x,y = next(train_dataset)
                _, loss, pred_logits = sess.run([train_opt, loss_tf, logit], feed_dict={input:x, label:y, is_training:True})
                total_loss.append(loss)
                if step % display_step==0:
                  print('Epoch {:}, train steps {:}, loss={:.4f}'.format(epoch, step, loss), flush=True)
            print('Epoch {:}, train Avg loss={:.4f}, lr={:.4f}'.format(epoch, np.mean(total_loss), lr))

            print('*'*20, 'Valid Epoch %d'%epoch, '*'*20)
            total_loss=[]
            all_num = step_valid
            TP = 0
            for step in range(step_valid):
                x,y = next(val_dataset)
                val_loss, pred_logits = sess.run([loss_tf, logit], feed_dict={input:x, label:y, is_training:False})
                y_pred = np.argmax(pred_logits, axis=-1)
                y = np.argmax(y, axis=-1)
                total_loss.append(val_loss)
                if y == y_pred:
                  TP += 1
                if step % display_step==0:
                  print('Epoch {:}, valid steps {:}, loss={:.4f}'.format(epoch, step, val_loss))
            val_loss_avg = np.mean(total_loss)
            print('Epoch {:}, valid Avg loss={:.4f}, acc={:.4f}'.format(epoch, val_loss_avg, TP*1.0/all_num))
            print('*'*20, 'Valid Epoch %d'%epoch, '*'*20)
            saver.save(sess, os.path.join(save_model_dir, 'epoch_%03d_%0.4f_model' % (epoch, val_loss_avg)), write_meta_graph=False)

            saver.save(sess, os.path.join(save_model_dir, 'last_weight_model'), write_meta_graph=False)
            if min_loss &gt; val_loss_avg:
                min_loss = val_loss_avg
                saver.save(sess, os.path.join(save_model_dir, 'best_weight_model'), write_meta_graph=False)

#============test
def test(test_img_npy, test_label_npy, save_model, batch_size=1, input_size=384, n_class=2, first_channels=64, display_step=100):
    test_data, test_label, _, _ = load_split_data(test_img_npy, test_label_npy, thre=1)
    test_dataset = data_generator(test_data, test_label, batch_size, False)
    input = tf.placeholder(dtype=tf.float32, shape=[None, input_size, input_size, 1])
    logit = build_res_network(input, first_channels, n_class, rate=.2, is_training=False)
    label = tf.placeholder(dtype=tf.float32, shape=[None, n_class])
    saver = tf.train.Saver()
    with tf.Session() as sess:
      saver.restore(sess, save_model)
      pred_label = []
      cls_label = []
      y_prob = []
      all_num = len(test_data)
      TP = 0
      for step in range(all_num):
            x, y = next(test_dataset)
            pred_logits = sess.run(logit, feed_dict={input:x, label:y})
            pred_y = np.argmax(pred_logits, 1).astype(np.int32)[0]
            label_y = np.argmax(y, 1).astype(np.int32)[0]
            pred_label.append(pred_y)
            cls_label.append(label_y)
            y_prob.append(pred_logits[0][1])   # probability of the positive class
            if label_y==pred_y:
                TP += 1
            if step%display_step == 0:
                print('Test steps {:} y true {:} y pred{:}'.format(step, label_y, pred_y))
      auc = roc_auc_score(cls_label, y_prob)
      precision = precision_score(cls_label, pred_label)
      recall = recall_score(cls_label, pred_label)
      f1 = f1_score(cls_label, pred_label)
      print('Test AUC={:4f}, Avg acc={:4f}, Precision={:4f}, Recall={:4f}, F1={:4f}'.format(auc, TP*1.0/all_num, precision, recall, f1))


# npy_path = '/Users/kai/Downloads/Chapter5/'
npy_path = '/workspace/rsna-intracranial-hemorrhage-detection/stage_2_train_npy/'
print("compute mean std")
compute_mean_std(npy_path)
train_img_npy = [
    os.path.join(npy_path, 'trainval_img_5000_0.npy'),
    os.path.join(npy_path, 'trainval_img_1000_1.npy'),
    os.path.join(npy_path, 'trainval_img_1000_2.npy'),
    os.path.join(npy_path, 'trainval_img_1000_3.npy'),
    os.path.join(npy_path, 'trainval_img_1000_4.npy'),
    os.path.join(npy_path, 'trainval_img_1000_5.npy'),
]
train_label_npy = [
    os.path.join(npy_path, 'trainval_label_5000_0.npy'),
    os.path.join(npy_path, 'trainval_label_1000_1.npy'),
    os.path.join(npy_path, 'trainval_label_1000_2.npy'),
    os.path.join(npy_path, 'trainval_label_1000_3.npy'),
    os.path.join(npy_path, 'trainval_label_1000_4.npy'),
    os.path.join(npy_path, 'trainval_label_1000_5.npy'),
]
test_img_npy = [
    os.path.join(npy_path, 'test_img_1000_0.npy'),
    os.path.join(npy_path, 'test_img_200_1.npy'),
    os.path.join(npy_path, 'test_img_200_2.npy'),
    os.path.join(npy_path, 'test_img_200_3.npy'),
    os.path.join(npy_path, 'test_img_200_4.npy'),
    os.path.join(npy_path, 'test_img_200_5.npy'),
]
test_label_npy = [
    os.path.join(npy_path, 'test_label_1000_0.npy'),
    os.path.join(npy_path, 'test_label_200_1.npy'),
    os.path.join(npy_path, 'test_label_200_2.npy'),
    os.path.join(npy_path, 'test_label_200_3.npy'),
    os.path.join(npy_path, 'test_label_200_4.npy'),
    os.path.join(npy_path, 'test_label_200_5.npy'),
]
'''
save_model_dir = './temp/saved_model'
if not os.path.exists(save_model_dir):
    os.makedirs(save_model_dir)
train(train_img_npy, train_label_npy, save_model_dir, display_step=1)
'''

if __name__ == "__main__":
    # train
    if True:
      save_model_dir = './saved_model'
      train(train_img_npy, train_label_npy, save_model_dir, display_step=10)

    # test
    if False:
      save_model = './saved_model/epoch_036_0.5757_model'
      test(test_img_npy, test_label_npy, save_model)
</code></pre>

<p>&nbsp;</p>

<p>After running the code above, the output is as follows:</p>

<p> &nbsp;</p>

<p>Once the warm-up work is done, the actual training begins:</p>

<p>This stage takes a long time; I suggest starting it before going to bed and checking progress in the morning. The data has to go through 100 epochs before training completes.</p>
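<p>If the machine is only reachable over SSH, it can help to run the script in the background so the session can be closed; a simple sketch, assuming the training code above has been saved as train_ich.py:</p>

<pre>
<code class="language-bash">nohup python train_ich.py &gt; train.log 2&gt;&amp;1 &amp;
tail -f train.log    # follow the per-step loss as it is printed</code></pre>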

<p> &nbsp;</p>

<p>&nbsp;</p>

<p>After training finishes, the generated model files look like this:</p>
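<p>Because the checkpoints are written with tf.train.Saver, every saved epoch shows up as an .index/.data file pair, with that epoch's validation loss embedded in the file name. An illustrative listing (names and loss values are examples, not the actual run):</p>

<pre>
<code class="language-bash">ls ./saved_model/
# checkpoint
# best_weight_model.data-00000-of-00001        best_weight_model.index
# epoch_000_0.6931_model.data-00000-of-00001   epoch_000_0.6931_model.index
# ...
# last_weight_model.data-00000-of-00001        last_weight_model.index</code></pre>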

<p> &nbsp;</p>

<p>&nbsp;</p>

<p> &nbsp;</p>

<p>&nbsp;</p>

<p> &nbsp;</p>

<p>&nbsp;</p>

<p><strong>6. Model Testing</strong></p>

<p>Now that the model has been trained, it can be evaluated on the held-out data to check how well it actually works.</p>

<p>The test data was already generated during the preprocessing step above, sized at 20% of the corresponding training split:</p>

<p> &nbsp;</p>

<p>The corresponding test code is as follows:</p>

<pre>
<code class="language-python">import os
import csv

import numpy as np
import pydicom as dicom
from skimage.transform import resize
from sklearn.metrics import roc_auc_score, precision_score, recall_score, f1_score
# import tensorflow as tf
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
tf.logging.set_verbosity(tf.logging.ERROR)

def z_score_norm(img, mean=None, std=None):
    if mean is None:
      mean = np.mean(img)
    if std is None:
      std = np.std(img)
    return (img - mean) / (std + 1e-6)


#=============Process original data &amp; Save train and test npy files
def get_random_data(train_label_csv, pick_num):
    kaggle_list = {
      'epidural': 0,
      'intraparenchymal': 1,
      'intraventricular': 2,
      'subarachnoid': 3,
      'subdural': 4,
      'any': 5,
    }

    train_split_list = {
      'any': [],
      'epidural': [],
      'intraparenchymal': [],
      'intraventricular': [],
      'subarachnoid': [],
      'subdural': [],
    }
    f = csv.reader(open(train_label_csv))
    next(f)  # skip the CSV header row (ID,Label)
    train_dict = [0, 0, 0, 0, 0, 0]
    for ID_info, label in list(f):
      tok = ID_info.split('_')
      ID = '_'.join(tok[:2])
      ICH_name = tok[2]
      if int(label) == 1 and ICH_name != 'any':
            train_dict[kaggle_list[ICH_name]] = int(label)
      if ICH_name == 'any':
            key_candidate = np.where(np.array(train_dict) &gt; 0)[0]
            if len(key_candidate) == 0:
                key = 'any'
            else:
                key = list(kaggle_list.keys())[key_candidate[0]]
            train_split_list[key].append([ID, train_dict])
            train_dict = [0, 0, 0, 0, 0, 0]

    res_list = []
    for i, key in enumerate(train_split_list):
      np.random.shuffle(train_split_list[key])
      res_list.append(train_split_list[key][:pick_num[i]])
    return res_list

def dicom_2_npy(path_list):
    npy_list = []
    for path in path_list:
      try:
            ds = dicom.dcmread(path)  # dcmread replaces the deprecated read_file
      except:
            print('bad', path)
            continue
      pix = ds.pixel_array
      Intercept = ds.RescaleIntercept
      Slope = ds.RescaleSlope
      pix=pix*Slope + Intercept
      if pix.shape != (512,512):
            pix = resize(pix, (512,512))
      npy_list.append(np.reshape(pix, (1,512,512)))
    dicom_npy = np.concatenate(npy_list, axis=0)
    return dicom_npy

def save_train_test_npy(res_list, data_path, output_path, data_set_dict, read_count):
    for label, (train_num, test_num) in data_set_dict.items():
      res_info = res_list[label]
      path_list = []
      label_list = []
      for i in range(train_num):
            ID, la = res_info[read_count[label] + i]
            dcm_name = ID + '.dcm'
            path_list.append(os.path.join(data_path, dcm_name))
            label_list.append(la)
      read_count[label] += train_num
      dicom_npy = dicom_2_npy(path_list)
      np.save(os.path.join(output_path, 'trainval_img_%d_%d.npy'%(train_num, label)), dicom_npy)
      np.save(os.path.join(output_path, 'trainval_label_%d_%d.npy'%(train_num, label)), label_list)

      path_list = []
      label_list = []
      for i in range(test_num):
            ID, la = res_info[read_count[label] + i]
            dcm_name = ID + '.dcm'
            path_list.append(os.path.join(data_path, dcm_name))
            label_list.append(la)
      read_count[label] += test_num
      dicom_npy = dicom_2_npy(path_list)
      np.save(os.path.join(output_path, 'test_img_%d_%d.npy'%(test_num, label)), dicom_npy)
      np.save(os.path.join(output_path, 'test_label_%d_%d.npy'%(test_num, label)), label_list)

      print('save %d done'%label)
    return

#=============
# data_root = './data/'
# train_img_path = os.path.join(data_root, 'stage_2_train')
# train_label = os.path.join(data_root, 'stage_2_train.csv')
# npy_path = os.path.join(data_root, 'stage_2_train_npy')
# if not os.path.exists(npy_path):
#   os.mkdir(npy_path)
# res_list = get_random_data(train_label, pick_num=)
# read_count = {0:0, 1:0, 2:0, 3:0, 4:0, 5:0}
# data_set = {
#   0:,
#   1:,
#   2:,
#   3:,
#   4:,
#   5:,
# }
# save_train_test_npy(res_list, train_img_path, npy_path, data_set, read_count)



def data_generator(data, label, batch_size, is_training):
    data = np.array(data)
    label = np.array(label)
    n_step = len(data) // batch_size
    while True:
      if is_training:
            np.random.seed(123)
            np.random.shuffle(data)
            np.random.seed(123)
            np.random.shuffle(label)
      for step in range(n_step):
            x = data[step * batch_size:(step + 1) * batch_size]
            x = np.array(x)
            x = np.expand_dims(x, -1)
            y = label[step * batch_size:(step + 1) * batch_size]
            y = np.array(y)
            yield x,y

def _conv_bn_relu(x, filters, kernel_size=3, strides=1, padding='same'):
    x = tf.layers.conv2d(x, filters=filters, kernel_size=kernel_size, strides=strides, padding=padding)
    x = tf.layers.batch_normalization(x)
    x = tf.nn.relu(x)
    return x

def conv_res_block(x, filters, padding='same', num_layers=1):
    if num_layers == 1:
      x = _conv_bn_relu(x, filters, kernel_size=7, strides=2,)
      return x
    for _ in range(num_layers):
      input_shape = x.get_shape()
      if input_shape[-1] != filters:
            shortcut = _conv_bn_relu(x, filters, kernel_size=1, strides=2, padding=padding)
            x = _conv_bn_relu(x, filters//4, kernel_size=1, strides=2, padding=padding)
      else:
            shortcut = x
            x = _conv_bn_relu(x, filters//4, kernel_size=1, padding=padding)
      x = _conv_bn_relu(x, filters//4, kernel_size=3, padding=padding)
      x = _conv_bn_relu(x, filters, kernel_size=1, padding=padding)
      x = x + shortcut
    return x

def build_res_network(inputs, n_filters, n_class, rate=.2, is_training=True):
    l1 = conv_res_block(inputs, n_filters, num_layers=1)
    l2 = conv_res_block(l1, n_filters*4, num_layers=3)
    l3 = conv_res_block(l2, n_filters*8, num_layers=4)
    l4 = conv_res_block(l3, n_filters*16, num_layers=23)
    l5 = conv_res_block(l4, n_filters*32, num_layers=3)
    block_shape = l5.get_shape()
    pool2 = tf.layers.average_pooling2d(l5, pool_size=(block_shape[1], block_shape[2]),
                              strides=(1,1))
    fc = tf.layers.flatten(pool2)
    fc = tf.layers.dense(fc, units=512)
    fc = tf.layers.dropout(fc, rate=rate, training=is_training)
    fc = tf.layers.dense(fc, units=128)
    fc = tf.layers.dropout(fc, rate=rate, training=is_training)
    out = tf.layers.dense(fc, units=n_class)
    out = tf.nn.softmax(out)
    return out

#=============compute mean&amp;std
def compute_mean_std(npy_path):
    img_path = []
    for f in os.listdir(npy_path):
      if ".npy" in f and "img" in f:
            img_path.append(f)
    data = []
    for img_file in img_path:
      images = np.load(os.path.join(npy_path, img_file))
      data.extend(images)
    mean = np.mean(data)
    std = np.std(data)
    np.save('./mean_std.npy', [mean, std])

# compute_mean_std(npy_path)


def load_split_data(img_path, label_path, thre=.8):
    train_data = []
    valid_data = []
    train_label = []
    valid_label = []
    for img_file, label_file in zip(img_path, label_path):
      images = np.load(img_file)
      labels = np.load(label_file)
      images = images[:,64:448,64:448]
      labels = np.clip(labels.sum(axis=-1), 0, 1)
      labels = np.expand_dims(labels, -1)
      labels = np.concatenate([1 - labels, labels], axis=-1)

      split_num = int(len(images)*thre)
      train_data.extend(images[:split_num])
      valid_data.extend(images[split_num:])
      train_label.extend(labels[:split_num])
      valid_label.extend(labels[split_num:])
    (mean, std) = np.load('mean_std.npy')
    train_data = z_score_norm(train_data, mean, std)
    valid_data = z_score_norm(valid_data, mean, std)
    return train_data, train_label, valid_data, valid_label

def cross_entropy(labels, logits):
    return -tf.reduce_mean(labels*tf.math.log(tf.clip_by_value(logits,1e-10, 1.0-1e-7)), name='cross_entropy')

def train(train_img_npy, train_label_npy, save_model_dir, batch_size=32, epochs=100, \
    input_size=384, n_class=2, first_channels=64, lr=0.0001, display_step=100):
    print("start train")
    train_data, train_label, valid_data, valid_label = load_split_data(train_img_npy, train_label_npy, thre=0.9)
    train_dataset = data_generator(train_data, train_label, batch_size, True)
    val_dataset = data_generator(valid_data, valid_label, 1, False)
    step_train = len(train_data)//batch_size
    # step_train = 500
    step_valid = len(valid_data)
    is_training = tf.placeholder(tf.bool)
    input = tf.placeholder(dtype=tf.float32, shape=[None, input_size, input_size, 1])
    print('build network')
    logit = build_res_network(input, first_channels, n_class, rate=.2, is_training=is_training)
    label = tf.placeholder(dtype=tf.float32, shape=[None, n_class])
    loss_tf = cross_entropy(labels=label, logits=logit)
    global_step = tf.Variable(0, name='global_step', trainable=False)
    optimizer = tf.train.AdamOptimizer(learning_rate=lr)
    train_opt = optimizer.minimize(loss_tf, global_step=global_step)
    init_op = tf.global_variables_initializer()
    saver = tf.train.Saver(max_to_keep=epochs)
    print("start optimize")
    config = tf.compat.v1.ConfigProto()
    config.gpu_options.allow_growth=True
    with tf.compat.v1.Session(config=config) as sess:
      sess.run(init_op)
      min_loss = 10.0
      for epoch in range(epochs):
            total_loss = []
            print('*'*20, 'Train Epoch %d'%epoch, '*'*20)
            for step in range(step_train):
                x,y = next(train_dataset)
                _, loss, pred_logits = sess.run([train_opt, loss_tf, logit], feed_dict={input:x, label:y, is_training:True})
                total_loss.append(loss)
                if step % display_step==0:
                  print('Epoch {:}, train steps {:}, loss={:.4f}'.format(epoch, step, loss), flush=True)
            print('Epoch {:}, train Avg loss={:.4f}, lr={:.4f}'.format(epoch, np.mean(total_loss), lr))

            print('*'*20, 'Valid Epoch %d'%epoch, '*'*20)
            total_loss=[]
            all_num = step_valid
            TP = 0
            for step in range(step_valid):
                x,y = next(val_dataset)
                val_loss, pred_logits = sess.run([loss_tf, logit], feed_dict={input:x, label:y, is_training:False})
                y_pred = np.argmax(pred_logits, axis=-1)
                y = np.argmax(y, axis=-1)
                total_loss.append(val_loss)
                if y == y_pred:
                  TP += 1
                if step % display_step==0:
                  print('Epoch {:}, valid steps {:}, loss={:.4f}'.format(epoch, step, val_loss))
            val_loss_avg = np.mean(total_loss)
            print('Epoch {:}, valid Avg loss={:.4f}, acc={:.4f}'.format(epoch, val_loss_avg, TP*1.0/all_num))
            print('*'*20, 'Valid Epoch %d'%epoch, '*'*20)
            saver.save(sess, os.path.join(save_model_dir, 'epoch_%03d_%0.4f_model' % (epoch, val_loss_avg)), write_meta_graph=False)

            saver.save(sess, os.path.join(save_model_dir, 'last_weight_model'), write_meta_graph=False)
            if min_loss &gt; val_loss_avg:
                min_loss = val_loss_avg
                saver.save(sess, os.path.join(save_model_dir, 'best_weight_model'), write_meta_graph=False)

#============test
def test(test_img_npy, test_label_npy, save_model, batch_size=1, input_size=384, n_class=2, first_channels=64, display_step=100):
    test_data, test_label, _, _ = load_split_data(test_img_npy, test_label_npy, thre=1)
    test_dataset = data_generator(test_data, test_label, batch_size, False)
    input = tf.placeholder(dtype=tf.float32, shape=[None, input_size, input_size, 1])
    logit = build_res_network(input, first_channels, n_class, rate=.2, is_training=False)
    label = tf.placeholder(dtype=tf.float32, shape=[None, n_class])
    saver = tf.train.Saver()
    with tf.Session() as sess:
      saver.restore(sess, save_model)
      pred_label = []
      cls_label = []
      y_prob = []
      all_num = len(test_data)
      TP = 0
      for step in range(all_num):
            x, y = next(test_dataset)
            pred_logits = sess.run(logit, feed_dict={input:x, label:y})
            pred_y = np.argmax(pred_logits, 1).astype(np.int32)[0]
            label_y = np.argmax(y, 1).astype(np.int32)[0]
            pred_label.append(pred_y)
            cls_label.append(label_y)
            y_prob.append(pred_logits[0][1])   # probability of the positive class
            if label_y==pred_y:
                TP += 1
            if step%display_step == 0:
                print('Test steps {:} y true {:} y pred{:}'.format(step, label_y, pred_y))
      auc = roc_auc_score(cls_label, y_prob)
      precision = precision_score(cls_label, pred_label)
      recall = recall_score(cls_label, pred_label)
      f1 = f1_score(cls_label, pred_label)
      print('Test AUC={:4f}, Avg acc={:4f}, Precision={:4f}, Recall={:4f}, F1={:4f}'.format(auc, TP*1.0/all_num, precision, recall, f1))


# npy_path = '/Users/kai/Downloads/Chapter5/'
# npy_path = '/data/projects/kaggle/rsna-intracranial-hemorrhage-detection/stage_2_train_npy/'
npy_path = '/workspace/rsna-intracranial-hemorrhage-detection/stage_2_train_npy/'
# print("compute mean std")
# compute_mean_std(npy_path)
train_img_npy = [
    os.path.join(npy_path, 'trainval_img_5000_0.npy'),
    os.path.join(npy_path, 'trainval_img_1000_1.npy'),
    os.path.join(npy_path, 'trainval_img_1000_2.npy'),
    os.path.join(npy_path, 'trainval_img_1000_3.npy'),
    os.path.join(npy_path, 'trainval_img_1000_4.npy'),
    os.path.join(npy_path, 'trainval_img_1000_5.npy'),
]
train_label_npy = [
    os.path.join(npy_path, 'trainval_label_5000_0.npy'),
    os.path.join(npy_path, 'trainval_label_1000_1.npy'),
    os.path.join(npy_path, 'trainval_label_1000_2.npy'),
    os.path.join(npy_path, 'trainval_label_1000_3.npy'),
    os.path.join(npy_path, 'trainval_label_1000_4.npy'),
    os.path.join(npy_path, 'trainval_label_1000_5.npy'),
]
test_img_npy = [
    os.path.join(npy_path, 'test_img_1000_0.npy'),
    os.path.join(npy_path, 'test_img_200_1.npy'),
    os.path.join(npy_path, 'test_img_200_2.npy'),
    os.path.join(npy_path, 'test_img_200_3.npy'),
    os.path.join(npy_path, 'test_img_200_4.npy'),
    os.path.join(npy_path, 'test_img_200_5.npy'),
]
test_label_npy = [
    os.path.join(npy_path, 'test_label_1000_0.npy'),
    os.path.join(npy_path, 'test_label_200_1.npy'),
    os.path.join(npy_path, 'test_label_200_2.npy'),
    os.path.join(npy_path, 'test_label_200_3.npy'),
    os.path.join(npy_path, 'test_label_200_4.npy'),
    os.path.join(npy_path, 'test_label_200_5.npy'),
]
'''
save_model_dir = './temp/saved_model'
if not os.path.exists(save_model_dir):
    os.makedirs(save_model_dir)
train(train_img_npy, train_label_npy, save_model_dir, display_step=1)
'''

if __name__ == "__main__":
    # train
    if False:
      save_model_dir = './saved_model'
      train(train_img_npy, train_label_npy, save_model_dir, display_step=10)

    # test
    if True:
      save_model = './saved_model/epoch_036_0.6214_model'
      test(test_img_npy, test_label_npy, save_model)
</code></pre>

<p>For save_model above, simply pick one of the checkpoints produced by the earlier training run:</p>

<p></p>

<p>Running with the epoch_036 checkpoint gives the following result:</p>

<p></p>

<p>Running with epoch_099 gives the following result:</p>

<p> &nbsp;&nbsp;</p>

<p>In the output above, the individual metrics mean the following (a tiny sklearn example follows the list):</p>

<ul>
        <li>AUC: the area under the ROC curve; the larger the AUC, the better the classifier separates the two classes.</li>
        <li>Avg acc: the fraction of all test samples that were classified correctly.</li>
        <li>Precision: of all samples predicted as positive, the fraction that are truly positive.</li>
        <li>Recall: of all truly positive samples, the fraction the model correctly identifies; higher recall means fewer missed positives.</li>
        <li>F1: the harmonic mean of precision and recall, useful when optimizing for precision or recall alone would push the model toward too many false positives or false negatives.</li>
</ul>
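<p>To make these definitions concrete, here is a tiny self-contained example on made-up labels (not data from this experiment), using the same sklearn functions that the test code imports:</p>

<pre>
<code class="language-python">from sklearn.metrics import roc_auc_score, precision_score, recall_score, f1_score

y_true = [0, 0, 1, 1, 1, 0]                  # ground-truth classes
y_prob = [0.2, 0.6, 0.8, 0.4, 0.9, 0.1]      # predicted probability of class 1
y_pred = [int(p &gt; 0.5) for p in y_prob]      # hard predictions at a 0.5 threshold

print('AUC       =', roc_auc_score(y_true, y_prob))
print('Precision =', precision_score(y_true, y_pred))
print('Recall    =', recall_score(y_true, y_pred))
print('F1        =', f1_score(y_true, y_pred))</code></pre>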

<p>&nbsp;</p>

<p><strong>7. Summary</strong></p>

<p>Honestly, getting the model training and testing in this chapter to run end to end, and verifying what the chapter teaches, is not easy; it takes careful study and hands-on practice.</p>

<p>Still, going through a complete training run on the test dataset made the related concepts much more concrete for me, and I benefited a great deal from it.</p>
</div>
