+
+
+
+
+ Given clean line drawings, rough sketches or photographs of arbitrary resolution as input, our framework directly generates the corresponding vector line drawings. As shown in (b), the framework models a virtual pen surrounded by a dynamic window (red boxes), which moves while drawing the strokes. It learns to move around by scaling the window and by sliding to an undrawn area to restart drawing (bottom example; sliding trajectory shown as blue arrows). With our proposed stroke regularization mechanism, the framework is able to enlarge the window and draw long strokes for simplicity (top example).
+
+
+
+
+
+
+
+
+
Abstract
+
+ Vector line art plays an important role in graphic design; however, it is tedious to create manually.
+ We introduce a general framework to produce line drawings from a wide variety of images,
+ by learning a mapping from raster image space to vector image space.
+ Our approach is based on a recurrent neural network that draws the lines one by one.
+ A differentiable rasterization module allows for training with only supervised raster data.
+ We use a dynamic window around a virtual pen while drawing lines,
+ implemented with a proposed aligned cropping and differentiable pasting modules.
+ Furthermore, we develop a stroke regularization loss
+ that encourages the model to use fewer and longer strokes to simplify the resulting vector image.
+ Ablation studies and comparisons with existing methods corroborate the efficiency of our approach,
+ which is able to generate visually better results in less computation time,
+ while generalizing better to a diversity of images and applications.
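+
+ As a rough illustration of the stroke regularization idea (a simplified sketch, not the exact loss used in the paper; the function name, probabilities and weight below are illustrative only): penalizing the expected number of pen-down steps pushes the model toward drawing the image with fewer, and therefore longer, strokes.
+
+ import numpy as np
+
+ def stroke_count_penalty(pen_down_probs, weight=0.02):
+     # Expected number of drawn strokes over the sequence, scaled by a small weight.
+     return weight * float(np.sum(pen_down_probs))
+
+ # e.g. a 48-step drawing sequence where most steps put ink on the canvas
+ pen_down_probs = np.random.uniform(0.7, 1.0, size=48)
+ reg_loss = stroke_count_penalty(pen_down_probs)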
+
+
+
+
+
+ Our framework generates the parametrized strokes step by step in a recurrent manner.
+ It uses a dynamic window (dashed red boxes) around a virtual pen to draw the strokes,
+ and can both move and change the size of the window.
+ (a) Four main modules at each time step: aligned cropping, stroke generation, differentiable rendering and differentiable pasting.
+ (b) Architecture of the stroke generation module.
+ (c) Structural strokes predicted at each step;
+ movement-only steps, during which no stroke is drawn on the canvas, are illustrated by blue arrows.
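+
+ The per-step behaviour can be summarized as: crop a window around the current pen position, predict the next stroke inside that window, render it, paste it back onto the full canvas, then move the pen and rescale the window. The snippet below is a minimal, non-learned sketch of this loop (plain numpy cropping/pasting and a dummy stroke stand in for the aligned cropping, stroke generation and differentiable rendering/pasting modules; all names are illustrative):
+
+ import numpy as np
+
+ def crop_window(canvas, cy, cx, win):
+     # take a win x win patch centred on the pen (clamped to the canvas)
+     y0, x0 = max(cy - win // 2, 0), max(cx - win // 2, 0)
+     return canvas[y0:y0 + win, x0:x0 + win], (y0, x0)
+
+ def paste_patch(canvas, patch, origin):
+     # paste the rendered patch back onto the full-resolution canvas
+     y0, x0 = origin
+     h, w = patch.shape
+     canvas[y0:y0 + h, x0:x0 + w] = np.maximum(canvas[y0:y0 + h, x0:x0 + w], patch)
+     return canvas
+
+ canvas, (cy, cx), win = np.zeros((256, 256), np.float32), (128, 128), 64
+ for _ in range(3):                                  # a few decoding steps
+     patch, origin = crop_window(canvas, cy, cx, win)
+     stroke = np.zeros_like(patch)                   # "stroke generation": a dummy horizontal stroke
+     stroke[stroke.shape[0] // 2, :] = 1.0
+     canvas = paste_patch(canvas, stroke, origin)
+     cy, cx = cy + 4, cx + win // 2                  # move the virtual pen
+     win = int(np.clip(win * 1.2, 32, 256))          # rescale the dynamic window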
+
+
+
+
+
+@article{mo2021virtualsketching,
+ title = {General Virtual Sketching Framework for Vector Line Art},
+ author = {Mo, Haoran and Simo-Serra, Edgar and Gao, Chengying and Zou, Changqing and Wang, Ruomei},
+ journal = {ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH 2021)},
+ year = {2021},
+ volume = {40},
+ number = {4},
+ pages = {51:1--51:14}
+}
+
+
+
+
Related Work
+
+
+ Jean-Dominique Favreau, Florent Lafarge and Adrien Bousseau.
+ Fidelity vs. Simplicity: a Global Approach to Line Drawing Vectorization. SIGGRAPH 2016.
+ [Paper]
+ [Webpage]
+
+
+
+
+ Mikhail Bessmeltsev and Justin Solomon.
+ Vectorization of Line Drawings via PolyVector Fields. SIGGRAPH 2019.
+ [Paper]
+ [Code]
+
+
+
+
+ Edgar Simo-Serra, Satoshi Iizuka and Hiroshi Ishikawa.
+ Mastering Sketching: Adversarial Augmentation for Structured Prediction. SIGGRAPH 2018.
+ [Paper]
+ [Webpage]
+ [Code]
+
+
+
+
+ Zhewei Huang, Wen Heng and Shuchang Zhou.
+ Learning to Paint With Model-based Deep Reinforcement Learning. ICCV 2019.
+ [Paper]
+ [Code]
+
+
+
+
+
+
+
+
+
diff --git a/hi-arm/qmupd_vs/draw_tools.py b/hi-arm/qmupd_vs/draw_tools.py
new file mode 100644
index 0000000000000000000000000000000000000000..fd699d1fb3ac09125fe060ae8565fb903e2837e5
--- /dev/null
+++ b/hi-arm/qmupd_vs/draw_tools.py
@@ -0,0 +1,657 @@
+import os
+import cv2
+from matplotlib import pyplot as plt
+import numpy as np
+from IPython.display import clear_output
+from scipy.interpolate import splprep, splev
+import shutil
+import glob
+import time
+import sys
+from PIL import Image
+import tensorflow as tf
+from utils import get_colors, draw, image_pasting_v3_testing
+from model_common_test import DiffPastingV3
+import random
+os.environ['CUDA_VISIBLE_DEVICES'] = '0'
+
+def fix_edge_contour(contour, im_shape):
+ """
+ Sometimes the extracted contour has head or tail points lying right on the image border; such points are unwanted and need to be filtered out.
+ If the head or tail of the contour touches the image border, trim those points off.
+ """
+ # Convert the contour to a list
+ contour = contour.tolist()
+
+ # Check the head points of the contour
+ while len(contour) > 0:  # guard against emptying the whole contour
+ x, y = contour[0][0]
+ if x == 0 or y == 0 or x == (im_shape[1] - 1) or y == (im_shape[0] - 1):
+ del contour[0]
+ else:
+ break
+
+ # Check the tail points of the contour
+ while len(contour) > 0:
+ x, y = contour[-1][0]
+ if x == 0 or y == 0 or x == (im_shape[1] - 1) or y == (im_shape[0] - 1):
+ del contour[-1]
+ else:
+ break
+
+ # Convert the contour back to a numpy array
+ contour = np.array(contour)
+ return contour
+
+def getContourList(image, pen_width: int = 3, min_contour_len: int = 30, is_show: bool = False):
+ """
+ 从图像中获取轮廓列表
+ :param image: 图像
+ :param pen_width: 笔的粗细
+ :param min_contour_len: 最短的轮廓长度
+ :param is_show: 是否显示图像
+ :return: 轮廓列表
+ """
+ # 读取图片
+ # im = cv2.imread("../data/1_fake.png",cv2.IMREAD_GRAYSCALE)
+ if image is None:
+ print("Can't read the image file.")
+ return
+ elif len(image.shape) == 3:
+ image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
+ elif len(image.shape) == 4:
+ image = cv2.cvtColor(image, cv2.COLOR_BGRA2GRAY)
+ # Binarize the image
+ image = cv2.threshold(image, 127, 255, cv2.THRESH_BINARY)[1]
+
+ # Determine the drawing order of the image lines so the robot can draw the image with continuous motions
+ # Create a copy of the original image to draw contours on
+ image_copy = image.copy()
+
+ image_with_contours = np.full_like(image_copy, 255)
+
+ # Initialize a list to store the contours
+ contour_list = []
+
+ directions = [(0, 1), (0, -1), (1, 0), (-1, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)]
+ sec0 = (0, image_copy.shape[0])
+ sec1 = (sec0[1]-1, sec0[1]+image_copy.shape[1]-1)
+ sec2 = (sec1[1]-1, sec1[1]+image_copy.shape[0]-1)
+ sec3 = (sec2[1]-1, sec2[1]+image_copy.shape[1]-2)
+ while True:
+ # Find contours in the image
+ # and make sure the found contours lie on black pixels
+ _, contours, _ = cv2.findContours(image_copy, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
+
+ # If no contours are found, break the loop
+ # Stop when there are no contours; when the image is all white a single (border) contour may still be detected, so stop in that case too
+ if len(contours) == 0 or (len(contours)==1 and np.all(image_copy == 255)):
+ break
+
+ # Remove the border contour
+ # contours = [cnt for cnt in contours if not np.any(cnt == 0) and not np.any(cnt == height-1) and not np.any(cnt == width-1)]
+ # `cv2.findContours` actually finds the boundary between the black objects (foreground) and the white background,
+ # which means the contour coordinates may not fall exactly on black pixels of the original image but between black and white pixels.
+ # If you want the contours to fall exactly on black pixels, the result of `cv2.findContours` needs some post-processing,
+ # e.g. iterating over every contour point and snapping its coordinates to the nearest black pixel.
+ # This avoids failing to erase the original black pixels in the erasing step below.
+ print(f"pen width: {pen_width}")
+ if pen_width == 1:
+ for contour in contours:
+ for point in contour:
+ x, y = point[0]
+ if image_copy[y, x] == 255:
+ for dx, dy in directions:
+ nx, ny = x + dx, y + dy
+ if nx >= 0 and ny >= 0 and nx < image_copy.shape[1] and ny < image_copy.shape[0]:
+ if image_copy[ny, nx] == 0:
+ point[0][0] = nx
+ point[0][1] = ny
+ break
+
+ cv2.drawContours(image_with_contours, contours, -1, 0, 1)
+ # erase the exist contours
+ cv2.drawContours(image_copy, contours, -1, 255, pen_width)
+ # add contours to list
+ # Sort the elements in contours according to the length of the elements.
+ # The longest contour is at the front, which is convenient for subsequent drawing and can be drawn first.
+
+ # remove the contour when the contour is the box of image
+ contours = list(contours)
+ max_len = 0
+ for i in reversed(range(len(contours))):
+ # Drop contours that are too short as well
+ if len(contours[i]) < min_contour_len:
+ contours.pop(i)
+ continue
+ # Remove the contour that traces the four image borders
+ if (len(contours[i]) >= ( image_with_contours.shape[0]*2 + image_with_contours.shape[1]*2 - 4) and \
+ (contours[i][sec0[0]:sec0[1], :, 0] == 0).all() and \
+ (contours[i][sec1[0]:sec1[1], :, 1] == image_with_contours.shape[0]-1).all() and \
+ (contours[i][sec2[0]:sec2[1], :, 0] == image_with_contours.shape[1]-1).all() and \
+ (contours[i][sec3[0]:sec3[1], :, 1] == 0).all()):
+ contours.pop(i)
+ continue
+ contours.sort(key=lambda x: x.shape[0], reverse=True)
+ contour_list.extend(contours)
+ if is_show:
+ # show the image with the drawn contours
+ # Clear the previous plot
+ clear_output(wait=True)
+
+ plt.subplot(1,3,1)
+ plt.imshow(image, cmap='gray', vmin=0, vmax=255)
+
+ plt.subplot(1,3,2)
+ plt.imshow(image_copy, cmap='gray', vmin=0, vmax=255)
+
+ plt.subplot(1,3,3)
+ # Show the image with the current contour
+ plt.imshow(image_with_contours, cmap='gray', vmin=0, vmax=255)
+ plt.show()
+ for i in reversed(range(len(contour_list))):
+ contour = contour_list[i]
+ contour = fix_edge_contour(contour=contour, im_shape=image.shape)
+ if len(contour) < min_contour_len:
+ contour_list.pop(i)
+ return contour_list
+
+def sortContoursList(contour_list):
+ """
+ 根据以下规则排序:
+ 1. 先从最长的1/3个轮廓中,挑选出最长的一些轮廓(大致1/5的轮廓)
+ 2. 以上一个轮廓的终点为准,找到剩下轮廓中,起点与该点位最近的距离排序
+ """
+ contour_list.sort(key=lambda x: x.shape[0], reverse=True)
+ # 数量太少,直接返回排序后的轮廓列表,不需要太多策略
+ if len(contour_list) <= 10:
+ return contour_list
+ origin_count = len(contour_list)
+ # 1. 先从最长的1/3个轮廓中,随机选出一些轮廓(大致1/2的轮廓),
+ # 这样画尝的轮廓容易先画出来图像的大体轮廓。另外,随机一下,是为了避免每次都是画同样或者相似的轮廓
+ tmp_contour_list = contour_list[:int(len(contour_list)/3)]
+ np.random.shuffle(tmp_contour_list)
+ tmp_contour_list = tmp_contour_list[:int(len(tmp_contour_list)/2)]
+ for contour in tmp_contour_list:
+ for i in reversed(range(len(contour_list))):
+ if contour_list[i] is contour:
+ contour_list.pop(i)
+ break
+ ret_contour_list = tmp_contour_list
+ # 2. Starting from the end point of the previous contour, repeatedly pick the remaining contour whose start point is closest to it
+ count = len(tmp_contour_list)
+ while (count < origin_count):
+ # Find the end point of the last sorted contour
+ last_contour = ret_contour_list[-1]
+ last_point = last_contour[-1][0]
+ # Among the remaining contours, find the one whose start point is closest to that end point
+ min_index = -1
+ min_distance = 999999999
+ for i in range(len(contour_list)):
+ # print(contour_list[i].shape)
+ first_point = contour_list[i][0][0]
+ distance = (first_point[0] - last_point[0])**2 + (first_point[1] - last_point[1])**2
+ if distance < min_distance:
+ min_distance = distance
+ min_index = i
+ ret_contour_list.append(contour_list[min_index])
+ contour_list.pop(min_index)
+ count += 1
+ return ret_contour_list
+
+def remove_overlap_and_near_contours(contours_list, image_size, extend_pixel , near_threshold=0.5, min_contour_length=10):
+ """
+ 移除重叠及过近的轮廓
+ :param contours_list: 轮廓列表
+ :param image_size: 图像大小
+ :param extend_pixel: 扩展像素
+ :param near_threshold: 过近阈值
+ """
+ # 思路:模拟画图,如果后面的轮廓与前面的轮廓重叠或者过近,那么就不画
+ circle_lookup = np.zeros((extend_pixel*2+1, extend_pixel*2+1), dtype=np.bool_)
+ for i in range(-extend_pixel, extend_pixel+1):
+ for j in range(-extend_pixel, extend_pixel+1):
+ if (i**2 + j**2) <= extend_pixel**2:
+ circle_lookup[i, j] = True
+ map = np.zeros((image_size[0], image_size[1]), dtype=np.bool_)
+ new_contours_list = []
+ for contour in contours_list:
+ # Skip trajectories that are too short
+ if len(contour) < min_contour_length:
+ continue
+ # Simulate drawing this contour
+ contour_length = len(contour)
+ overlap_length = 0
+ for point in contour:
+ x, y = int(point[0][0]),int(point[0][1])
+ # Count how many points overlap the already drawn region
+ if (map[x, y] == True):
+ overlap_length += 1
+ # If the overlap with the already drawn region is high, discard this trajectory and do not draw it
+ if overlap_length / contour_length >= near_threshold:
+ continue
+ else:
+ # Skip zero-length contours
+ if (len(contour) > 0):
+ new_contours_list.append(np.array(contour))
+ else:
+ print("==========contour length is 0, in position 3")
+ # new_contours_list.append(np.array(contour))
+ # Mark all pixels covered by the current trajectory (dilated by extend_pixel) in the map for later queries
+ for point in contour:
+ x, y = int(point[0][0]),int(point[0][1])
+ for i in range(-extend_pixel, extend_pixel+1):
+ for j in range(-extend_pixel, extend_pixel+1):
+ if circle_lookup[i, j]:
+ if x+i >= 0 and x+i < image_size[0] and y+j >= 0 and y+j < image_size[1]:
+ map[x+i, y+j] = True
+ return new_contours_list
+
+
+def sample_and_smooth_contours(contour_list, interval: int = 5):
+ """
+ 采样并平滑拟合轮廓
+ :param contour_list: 轮廓列表
+ :param interval: 采样间隔
+ :return: 平滑拟合并采样后的轮廓列表。注意为浮点的数组
+ """
+ f_contour_list = []
+ for contour in contour_list:
+ # Fit a B-spline to the contour points, then smooth and resample them
+ if (contour[0] == contour[-1]).all():
+ contour = contour.reshape(-1, 2)
+ tck, u = splprep(contour.T, w=None, u=None, ue=None, k=3, task=0, s=1.0, t=None, full_output=0, nest=None, per=1, quiet=1)
+ else:
+ contour = contour.reshape(-1, 2)
+ tck, u = splprep(contour.T, w=None, u=None, ue=None, k=3, task=0, s=1.0, t=None, full_output=0, nest=None, per=0, quiet=1)
+ # Set the number of resampled points
+ num = contour.shape[0] // interval
+ u_new = np.linspace(u.min(), u.max(), num)
+ x_new, y_new = splev(u_new, tck, der=0)
+ f_contour = np.array([x_new, y_new]).T.reshape(-1, 1, 2)
+ f_contour_list.append(f_contour)
+ return f_contour_list
+
+
+def save_contour_points(contour_list, filepath):
+ """
+ Save the contour points to a file, one contour per line; x and y coordinates are separated by commas, and so are consecutive points.
+ Usage:
+ save_contour_points(f_contour_list, "../data/1_fake_data.txt")
+ """
+ dirname = os.path.dirname(filepath)
+ if (not os.path.exists(dirname)):
+ os.makedirs(dirname)
+ with open(filepath, "w") as f:
+ for contour in contour_list:
+ for point in contour:
+ x, y = point[0]
+ f.write(f"{x},{y},")
+ f.write("\n")
+
+
+def load_contours_list(filename):
+ contours_list = []
+ with open(filename, "r") as f:
+ for line in f:
+ points = line.strip().split(",")
+ # Remove the trailing empty string
+ if points[-1] == '':
+ points = points[:-1]
+ contour = []
+ for i in range(0, len(points), 2):
+ x, y = float(points[i]), float(points[i+1])
+ contour.append(np.array([[x, y]]))
+ # Skip zero-length contours
+ if (len(contour) > 0):
+ contours_list.append(np.array(contour))
+ print(f"Load {len(contours_list)} contours.")
+ return contours_list
+
+def generate_style_image(image_name, dataroot, output_dir):
+ # plt.imsave("./data/input.jpg", image)
+ # shutil.copy("../data/input.jpg", "../../QMUPD/examples/input.jpg")
+ start_time = time.time()
+ # curr_path = os.getcwd()
+ #================== settings ==================
+ # style_root = "../../QMUPD/"
+ # os.chdir(style_root)
+
+ exp = 'QMUPD_model'
+ epoch='200'
+ gpu_id = '-1'
+ netga = 'resnet_style2_9blocks'
+ model0_res = 0
+ model1_res = 0
+ imgsize = 512
+ extraflag = ' --netga %s --model0_res %d --model1_res %d' % (netga, model0_res, model1_res)
+ base_image = os.path.splitext(os.path.basename(image_name))[0]
+ # Generate the style-transferred image
+ # im = draw_tools.generate_style_image(image)
+ # cv2.imshow('image', image)
+ # cv2.waitKey(0)
+ # cv2.destroyAllWindows()
+ # Temporary workaround: copy the image into the dataset folder
+ if not os.path.exists(dataroot):
+ os.makedirs(dataroot)
+ else:
+ # Clear the folder
+ files = glob.glob(f'%s*' % dataroot)
+ for f in files:
+ os.remove(f)
+ # copy
+ shutil.copy(image_name, dataroot)
+
+ # Clear previous results
+ if not os.path.exists(output_dir):
+ os.makedirs(output_dir)
+ else:
+ # Clear the folder
+ files = glob.glob(f'%s*' % output_dir)
+ for f in files:
+ os.remove(f)
+
+ #==================== command ==================
+ vec = [0,1,0]
+ svec = '%d,%d,%d' % (vec[0],vec[1],vec[2])
+ img1 = 'imagesstyle%d-%d-%d'%(vec[0],vec[1],vec[2])
+ print('results/%s/test_%s/index%s.html'%(exp,epoch,img1[6:]))
+ command = 'python3 qmupd_single_image.py --dataroot %s --name %s --model test --output_nc 1 --no_dropout --model_suffix _A %s --num_test 1000 --epoch %s --style_control 1 --imagefolder %s --sinput svec --svec %s --crop_size %d --load_size %d --gpu_ids %s' % (dataroot,exp,extraflag,epoch,img1,svec,imgsize,imgsize,gpu_id)
+ os.system(command)
+ return os.path.join(output_dir, f'{base_image}_fake.png')
+
+
+def display_strokes_final(sess, pasting_func, data, init_cursor, image_size, infer_lengths, init_width,
+ save_base,
+ cursor_type='next', min_window_size=32, raster_size=128):
+ """
+ :param data: (N_strokes, 9): flag, x0, y0, x1, y1, x2, y2, r0, r2
+ :return:
+ """
+ canvas = np.zeros((image_size, image_size), dtype=np.float32) # [0.0-BG, 1.0-stroke]
+ canvas2_temp = np.zeros((image_size, image_size), dtype=np.float32) # [0.0-BG, 1.0-stroke]
+ drawn_region = np.zeros_like(canvas)
+ overlap_region = np.zeros_like(canvas)
+ canvas_color_with_overlap = np.zeros((image_size, image_size, 3), dtype=np.float32)
+ canvas_color_wo_overlap = np.zeros((image_size, image_size, 3), dtype=np.float32)
+ canvas_color_with_moving = np.zeros((image_size, image_size, 3), dtype=np.float32)
+
+ cursor_idx = 0
+
+ if init_cursor.ndim == 1:
+ init_cursor = [init_cursor]
+
+ stroke_count = len(data)
+ color_rgb_set = get_colors(stroke_count) # list of (3,) in [0, 255]
+ color_idx = 0
+
+ valid_stroke_count = stroke_count - np.sum(data[:, 0]).astype(np.int32) + len(init_cursor)
+ valid_color_rgb_set = get_colors(valid_stroke_count) # list of (3,) in [0, 255]
+ valid_color_idx = -1
+ # print('Drawn stroke number', valid_stroke_count)
+ # print(' flag x1\t\t y1\t\t x2\t\t y2\t\t r2\t\t s2')
+
+ # tempimage = np.zeros((image_size, image_size, 3), dtype=np.uint8) + 255
+ # color = random.randint(50, 120)
+ # cv2.imshow('canvas_rgb', tempimage)
+ contours_list = []
+ for round_idx in range(len(infer_lengths)):
+ contour = []
+ round_length = infer_lengths[round_idx]
+
+ cursor_pos = init_cursor[cursor_idx] # (2)
+ cursor_idx += 1
+ prev_width = init_width
+ prev_scaling = 1.0
+ prev_window_size = float(raster_size) # (1)
+ # cv2.imshow('canvas_rgb', canvas_black)
+ # Process each stroke in this round
+ last_point = None
+ for round_inner_i in range(round_length):
+ stroke_idx = np.sum(infer_lengths[:round_idx]).astype(np.int32) + round_inner_i
+
+ curr_window_size_raw = prev_scaling * prev_window_size
+ curr_window_size_raw = np.maximum(curr_window_size_raw, min_window_size)
+ curr_window_size_raw = np.minimum(curr_window_size_raw, image_size)
+
+ pen_state = data[stroke_idx, 0]
+ stroke_params = data[stroke_idx, 1:] # (8)
+ x1y1, x2y2, width2, scaling2 = stroke_params[0:2], stroke_params[2:4], stroke_params[4], stroke_params[5]
+ x0y0 = np.zeros_like(x2y2) # (2), [-1.0, 1.0]
+ x0y0 = np.divide(np.add(x0y0, 1.0), 2.0) # (2), [0.0, 1.0]
+ x2y2 = np.divide(np.add(x2y2, 1.0), 2.0) # (2), [0.0, 1.0]
+ widths = np.stack([prev_width, width2], axis=0) # (2)
+ stroke_params_proc = np.concatenate([x0y0, x1y1, x2y2, widths], axis=-1) # (8)
+
+ next_width = stroke_params[4]
+ next_scaling = stroke_params[5]
+ next_window_size = next_scaling * curr_window_size_raw
+ next_window_size = np.maximum(next_window_size, min_window_size)
+ next_window_size = np.minimum(next_window_size, image_size)
+
+ prev_width = next_width * curr_window_size_raw / next_window_size
+ prev_scaling = next_scaling
+ prev_window_size = curr_window_size_raw
+
+ f = stroke_params_proc.tolist() # (8)
+ f += [1.0, 1.0]
+ gt_stroke_img, contour_detail = draw(f) # (H, W), [0.0-stroke, 1.0-BG]
+ # print("stroke image", contour)
+ # contour = cursor_pos * image_size + contour
+ # cv2.imshow('canvas_stroke', gt_stroke_img)
+ # print("gt_stroke_img shape:", gt_stroke_img.shape)
+ # cv2.waitKey(30)
+ gt_stroke_img_large = image_pasting_v3_testing(1.0 - gt_stroke_img, cursor_pos,
+ image_size,
+ curr_window_size_raw,
+ pasting_func, sess) # [0.0-BG, 1.0-stroke]
+ # print("gt_stroke_img_large shape:", gt_stroke_img_large.shape)
+ is_overlap = False
+
+ if pen_state == 0:
+ canvas += gt_stroke_img_large # [0.0-BG, 1.0-stroke]
+ # print("canvas shape:", canvas.shape)
+ # cv2.imshow('canvas_rgb_lager', canvas)
+ # cv2.waitKey(30)
+ curr_drawn_stroke_region = np.zeros_like(gt_stroke_img_large)
+ curr_drawn_stroke_region[gt_stroke_img_large > 0.5] = 1
+ intersection = drawn_region * curr_drawn_stroke_region
+ # regard stroke with >50% overlap area as overlaped stroke
+ if np.sum(intersection) / np.sum(curr_drawn_stroke_region) > 0.5:
+ # enlarge the stroke a bit for better visualization
+ overlap_region[gt_stroke_img_large > 0] += 1
+ is_overlap = True
+
+ drawn_region[gt_stroke_img_large > 0.5] = 1
+
+ color_rgb = color_rgb_set[color_idx] # (3) in [0, 255]
+ color_idx += 1
+
+ color_rgb = np.reshape(color_rgb, (1, 1, 3)).astype(np.float32)
+ color_stroke = np.expand_dims(gt_stroke_img_large, axis=-1) * (1.0 - color_rgb / 255.0)
+ canvas_color_with_moving = canvas_color_with_moving * np.expand_dims((1.0 - gt_stroke_img_large),
+ axis=-1) + color_stroke # (H, W, 3)
+ if pen_state == 0:
+ valid_color_idx += 1
+
+ if pen_state == 0:
+ valid_color_rgb = valid_color_rgb_set[valid_color_idx] # (3) in [0, 255]
+ # valid_color_idx += 1
+
+ valid_color_rgb = np.reshape(valid_color_rgb, (1, 1, 3)).astype(np.float32)
+ valid_color_stroke = np.expand_dims(gt_stroke_img_large, axis=-1) * (1.0 - valid_color_rgb / 255.0)
+ canvas_color_with_overlap = canvas_color_with_overlap * np.expand_dims((1.0 - gt_stroke_img_large),
+ axis=-1) + valid_color_stroke # (H, W, 3)
+ if not is_overlap:
+ canvas_color_wo_overlap = canvas_color_wo_overlap * np.expand_dims((1.0 - gt_stroke_img_large),
+ axis=-1) + valid_color_stroke # (H, W, 3)
+
+ # update cursor_pos based on hps.cursor_type
+ new_cursor_offsets = stroke_params[2:4] * (float(curr_window_size_raw) / 2.0) # (1, 6), patch-level
+ new_cursor_offset_next = new_cursor_offsets
+
+ # important!!!
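+ # (the x and y components of the offset are swapped here, presumably so the predicted offset matches the coordinate ordering used for the cursor position)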
+ new_cursor_offset_next = np.concatenate([new_cursor_offset_next[1:2], new_cursor_offset_next[0:1]], axis=-1)
+
+ cursor_pos_large = cursor_pos * float(image_size)
+
+ stroke_position_next = cursor_pos_large + new_cursor_offset_next # (2), large-level
+
+ if cursor_type == 'next':
+ cursor_pos_large = stroke_position_next # (2), large-level
+ else:
+ raise Exception('Unknown cursor_type')
+
+ cursor_pos_large = np.minimum(np.maximum(cursor_pos_large, 0.0), float(image_size - 1)) # (2), large-level
+ if (pen_state == 0):
+ # cursor_pos_fact = int(cursor_pos * float(image_size) + 0.5)
+ cursor_pos_fact = np.minimum(np.maximum(cursor_pos * float(image_size), 0.0), float(image_size - 1))
+ # In case the cursor goes out of bounds
+ # cv2.circle(canvas2_temp, (int(cursor_pos_fact[0]), int(cursor_pos_fact[1])), 2, (255, 0, 0), 1)
+ # cv2.line(canvas2_temp, (int(cursor_pos_fact[0]), int(cursor_pos_fact[1])), (int(cursor_pos_large[0]), int(cursor_pos_large[1])), (255, 0, 0), 1)
+ # We have a start point, an end point and a trajectory
+ if (last_point is not None):
+ # If the start point of this stroke is not at the same position as the previous stroke
+ if ((int(cursor_pos_fact[0]) != int(last_point[0]) or int(cursor_pos_fact[1]) != int(last_point[1]))):
+ # If the distance is small enough, the points still count as the same trajectory (reduces pen-lift motions of the robot arm)
+ if (np.linalg.norm(cursor_pos_fact - last_point) > 2):
+ # print("add contour and new one")
+ # Skip zero-length contours
+ if (len(contour) > 0):
+ contours_list.append(np.array(contour))
+ else:
+ print("==========contour length is 0, in position 1")
+ # contours_list.append(np.array(contour))
+ contour = []
+
+ for x in contour_detail:
+ # Convert x to an np.array
+ x = np.array(x)
+ point_pos = (x[0] - 128) * curr_window_size_raw / 256 + cursor_pos_fact
+ point_pos[0] = min(point_pos[0], image_size - 1)
+ point_pos[1] = min(point_pos[1], image_size - 1)
+ # Deduplicate: only append the point if it differs from the last one
+ if (last_point is not None):
+ if (int(point_pos[0]) != int(last_point[0]) or int(point_pos[1]) != int(last_point[1])):
+ contour.append(np.array([[point_pos[0], point_pos[1]]]))
+ last_point = point_pos
+ cv2.circle(canvas2_temp, (int(point_pos[0]), int(point_pos[1])), 1, (255, 255, 0), 1)
+ else:
+ contour.append(np.array([[point_pos[0], point_pos[1]]]))
+ last_point = point_pos
+ cv2.circle(canvas2_temp, (int(point_pos[0]), int(point_pos[1])), 1, (255, 255, 0), 1)
+
+ # print(len(contour))
+ # cv2.circle(canvas2_temp, (int(point_pos[0]), int(point_pos[1])), 1, (255, 255, 0), 1)
+ # break
+ # break
+ # print("cursor_pos_fact:", contour)
+ # cv2.imshow('canvas_rgb', canvas2_temp)
+ # cv2.waitKey(30)
+
+ cursor_pos = cursor_pos_large / float(image_size)
+
+ # print(int(cursor_pos[0] * image_size), int(cursor_pos[1] * image_size))
+ # Draw a dot at the corresponding position
+ # tempimage = cv2.circle(tempimage, (int(cursor_pos[0] * image_size), int(cursor_pos[1] * image_size)), 2, (color, color, color) , 1)
+ # cv2.imshow('canvas_rgb', tempimage)
+ # cv2.waitKey(30)
+ # if (pen_state == 0):
+ # contour.append([[cursor_pos[0] * image_size, cursor_pos[1] * image_size]])
+ # Skip zero-length contours
+ if (len(contour) > 0):
+ contours_list.append(np.array(contour))
+ # canvas_rgb = np.stack([np.clip(canvas, 0.0, 1.0) for _ in range(3)], axis=-1)
+ canvas_color_with_overlap = 255 - np.round(canvas_color_with_overlap * 255.0).astype(np.uint8)
+ canvas_color_wo_overlap = 255 - np.round(canvas_color_wo_overlap * 255.0).astype(np.uint8)
+ canvas_color_with_moving = 255 - np.round(canvas_color_with_moving * 255.0).astype(np.uint8)
+
+ canvas_color_png = Image.fromarray(canvas_color_with_overlap, 'RGB')
+ canvas_color_save_path = os.path.join(save_base, 'output_order_with_overlap.png')
+ canvas_color_png.save(canvas_color_save_path, 'PNG')
+ return contours_list
+
+def drawContours(contours_list, canvas_size):
+ image = np.zeros(canvas_size, dtype=np.uint8) + 255
+ for contour in contours_list:
+ # color = random.randint(0, 255), random.randint(0, 255), random.randint(0, 255)
+ color = (0, 0, 0)
+ for i in range(len(contour)):
+ point = contour[i]
+ if i < len(contour) - 1:
+ # cv2.line(image, tuple(contour[i][0]), tuple(contour[i+1][0]), color, 1)
+ cv2.circle(image, (int(point[0][0]), int(point[0][1])), 1, color, 1)
+ return image
+
+
+def getContourList_v2(npz_path):
+ assert npz_path != ''
+
+ min_window_size = 32
+ raster_size = 128
+
+ split_idx = npz_path.rfind('/')
+ if split_idx == -1:
+ file_base = './'
+ file_name = npz_path[:-4]
+ else:
+ file_base = npz_path[:npz_path.rfind('/')]
+ file_name = npz_path[npz_path.rfind('/') + 1: -4]
+
+ regenerate_base = os.path.join(file_base, file_name)
+ os.makedirs(regenerate_base, exist_ok=True)
+
+ # differentiable pasting graph
+ paste_v3_func = DiffPastingV3(raster_size)
+
+ tfconfig = tf.ConfigProto()
+ tfconfig.gpu_options.allow_growth = True
+ sess = tf.InteractiveSession(config=tfconfig)
+ sess.run(tf.global_variables_initializer())
+
+ data = np.load(npz_path, encoding='latin1', allow_pickle=True)
+ strokes_data = data['strokes_data']
+ init_cursors = data['init_cursors']
+ image_size = data['image_size']
+ round_length = data['round_length']
+ init_width = data['init_width']
+ if round_length.ndim == 0:
+ round_lengths = [round_length]
+ else:
+ round_lengths = round_length
+ print('Processing ...')
+ contours_list = display_strokes_final(sess, paste_v3_func,
+ strokes_data, init_cursors, image_size, round_lengths, init_width,
+ regenerate_base,
+ min_window_size=min_window_size, raster_size=raster_size)
+ return contours_list
+# # main
+# if __name__ == "__main__":
+# # Read the image
+# im = cv2.imread("../data/1_fake.png", cv2.IMREAD_GRAYSCALE)
+# # Extract the contour list
+# contour_list = getContourList(im, is_show=True)
+# # Sort the contour list
+# contour_list = sortContoursList(contour_list)
+# # Smooth and resample the contours
+# f_contour_list = sample_and_smooth_contours(contour_list)
+# # Save the contour points to a file, one contour per line, x and y coordinates separated by commas
+# save_contour_points(f_contour_list, "../data/1_fake_data.txt")
+
+
+
+
+if __name__ == '__main__':
+ file = "./robot_data/contour_points/image_e1b3f4a3-08f1-4d52-ab40-c5badf38b46e_fake_contour_points.txt"
+ contours_lists = load_contours_list(file)
+ contours_lists = sortContoursList(contours_lists)
+ cv2.imshow("sorted", drawContours(contours_lists, (512, 512,3)))
+ contours_lists = remove_overlap_and_near_contours(contours_lists, (512, 512), 3, 0.9, 5)
+ # contours_lists = remove_overlap_and_near_contours(contours_lists, (512, 512), 4, 0.7)
+ cv2.imshow("remove overlap", drawContours(contours_lists, (512, 512,3)))
+ # save_contour_points(contours_lists, "./image_e1b3f4a3-08f1-4d52-ab40-c5badf38b46e_fake_0_contour_points_sorted.txt")
+ #contours_lists = sample_and_smooth_contours(contours_lists, 10)
+ cv2.imshow("sample and smooth", drawContours(contours_lists, (512, 512,3)))
+ cv2.waitKey(0)
\ No newline at end of file
diff --git a/hi-arm/qmupd_vs/environment.yaml b/hi-arm/qmupd_vs/environment.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..c962535006e16726b9db9e434d26c8240b49700a
--- /dev/null
+++ b/hi-arm/qmupd_vs/environment.yaml
@@ -0,0 +1,115 @@
+name: vsketch
+channels:
+ - pytorch
+ - http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
+ - http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
+ - http://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
+ - http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
+ - http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/
+ - http://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge/
+dependencies:
+ - _libgcc_mutex=0.1=conda_forge
+ - _openmp_mutex=4.5=2_kmp_llvm
+ - blas=1.0=mkl
+ - ca-certificates=2024.3.11=h06a4308_0
+ - cairo=1.14.8=0
+ - certifi=2016.2.28=py36_0
+ - cpuonly=2.0=0
+ - cudatoolkit=10.0.130=0
+ - cycler=0.10.0=py36_0
+ - dbus=1.10.20=0
+ - dominate=2.4.0=py_0
+ - expat=2.1.0=0
+ - fftw=3.3.9=h5eee18b_2
+ - fontconfig=2.12.1=3
+ - freetype=2.5.5=2
+ - glib=2.50.2=1
+ - gst-plugins-base=1.8.0=0
+ - gstreamer=1.8.0=0
+ - hdf5=1.10.2=hc401514_3
+ - icu=54.1=0
+ - jbig=2.1=0
+ - jpeg=9b=0
+ - ld_impl_linux-64=2.38=h1181459_1
+ - libblas=3.9.0=1_h6e990d7_netlib
+ - libcblas=3.9.0=3_h893e4fe_netlib
+ - libffi=3.4.4=h6a678d5_0
+ - libgcc=7.2.0=h69d50b8_2
+ - libgcc-ng=13.2.0=h807b86a_5
+ - libgfortran=3.0.0=1
+ - libgfortran-ng=7.5.0=ha8ba4b0_17
+ - libgfortran4=7.5.0=ha8ba4b0_17
+ - libgomp=13.2.0=h807b86a_5
+ - libiconv=1.14=0
+ - liblapack=3.9.0=3_h893e4fe_netlib
+ - libopenblas=0.3.18=hf726d26_0
+ - libpng=1.6.39=h5eee18b_0
+ - libstdcxx-ng=11.2.0=h1234567_1
+ - libtiff=4.0.6=3
+ - libwebp-base=1.3.2=h5eee18b_0
+ - libxcb=1.12=1
+ - libxml2=2.9.4=0
+ - llvm-openmp=14.0.6=h9e868ea_0
+ - lz4-c=1.9.4=h6a678d5_0
+ - matplotlib=2.0.2=np113py36_0
+ - mkl=2017.0.3=0
+ - ncurses=6.4=h6a678d5_0
+ - olefile=0.46=pyhd3eb1b0_0
+ - opencv=3.4.1=py36h6fd60c2_1
+ - openssl=1.0.2l=0
+ - pip=21.3.1
+ - pcre=8.39=1
+ - pillow=4.2.1=py36_0
+ - pixman=0.34.0=0
+ - pyparsing=2.2.0=py36_0
+ - pyqt=5.6.0=py36_2
+ - python=3.6.2=0
+ - python-dateutil=2.6.1=py36_0
+ - python_abi=3.6=2_cp36m
+ - pytorch-mutex=1.0=cpu
+ - pytz=2017.2=py36_0
+ - qt=5.6.2=5
+ - readline=6.2=2
+ - scipy=0.19.1=np113py36_0
+ - setuptools=36.4.0=py36_1
+ - sip=4.18=py36_0
+ - sqlite=3.13.0=0
+ - tk=8.5.18=0
+ - wheel=0.29.0=py36_0
+ - xz=5.2.3=0
+ - zlib=1.2.13=h5eee18b_0
+ - zstd=1.3.3=h84994c4_0
+ - pip:
+ - absl-py==1.4.0
+ - astor==0.8.1
+ - cached-property==1.5.2
+ - cairocffi==1.0.0
+ - cffi==1.15.1
+ - dataclasses==0.8
+ - gast==0.5.4
+ - gizeh==0.1.11
+ - grpcio==1.48.2
+ - h5py==3.1.0
+ - importlib-metadata==4.8.3
+ - importlib-resources==5.4.0
+ - keras-applications==1.0.8
+ - keras-preprocessing==1.1.2
+ - markdown==3.3.7
+ - munch==4.0.0
+ - numpy==1.17.0
+ - opencv-python==3.4.2.16
+ - pip==21.3.1
+ - pretrainedmodels==0.7.4
+ - protobuf==3.19.6
+ - pycparser==2.21
+ - six==1.16.0
+ - tensorboard==1.12.2
+ - tensorflow==1.12.0
+ - termcolor==1.1.0
+ - torch==1.2.0+cpu
+ - torchvision==0.4.0+cpu
+ - tqdm==4.64.1
+ - typing-extensions==4.1.1
+ - werkzeug==2.0.3
+ - zipp==3.6.0
+prefix: /home/qian/anaconda3/envs/vsketch
diff --git a/hi-arm/qmupd_vs/examples/celebahq-11103.jpg b/hi-arm/qmupd_vs/examples/celebahq-11103.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..02c594956835579c76f00cf41dca9d803d56f4d4
Binary files /dev/null and b/hi-arm/qmupd_vs/examples/celebahq-11103.jpg differ
diff --git a/hi-arm/qmupd_vs/examples/celebahq-11918.jpg b/hi-arm/qmupd_vs/examples/celebahq-11918.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..21852a3f266515c979a2b74b3c52b17dbe341940
Binary files /dev/null and b/hi-arm/qmupd_vs/examples/celebahq-11918.jpg differ
diff --git a/hi-arm/qmupd_vs/examples/celebahq-15556.jpg b/hi-arm/qmupd_vs/examples/celebahq-15556.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..12503163767fefec021d37e593009992ad87799c
Binary files /dev/null and b/hi-arm/qmupd_vs/examples/celebahq-15556.jpg differ
diff --git a/hi-arm/qmupd_vs/examples/celebahq-25033.jpg b/hi-arm/qmupd_vs/examples/celebahq-25033.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..536e7fc045052f7a51e908b21d582485d8eda519
Binary files /dev/null and b/hi-arm/qmupd_vs/examples/celebahq-25033.jpg differ
diff --git a/hi-arm/qmupd_vs/examples/celebahq-2524.jpg b/hi-arm/qmupd_vs/examples/celebahq-2524.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..b145b96861f531eb6003e198445742694295ac73
Binary files /dev/null and b/hi-arm/qmupd_vs/examples/celebahq-2524.jpg differ
diff --git a/hi-arm/qmupd_vs/examples/celebahq-26036.jpg b/hi-arm/qmupd_vs/examples/celebahq-26036.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..1053b05098a6a66216ce14daedea0eee114737ae
Binary files /dev/null and b/hi-arm/qmupd_vs/examples/celebahq-26036.jpg differ
diff --git a/hi-arm/qmupd_vs/examples/celebahq-27799.jpg b/hi-arm/qmupd_vs/examples/celebahq-27799.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..bfc7c6467d9e13b7fbd69237326c150a2554323d
Binary files /dev/null and b/hi-arm/qmupd_vs/examples/celebahq-27799.jpg differ
diff --git a/hi-arm/qmupd_vs/examples/celebahq-4797.jpg b/hi-arm/qmupd_vs/examples/celebahq-4797.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..cf18592f8eb43ba7f80232291b4d3da51bda62b4
Binary files /dev/null and b/hi-arm/qmupd_vs/examples/celebahq-4797.jpg differ
diff --git a/hi-arm/qmupd_vs/examples/celebahq-7235.jpg b/hi-arm/qmupd_vs/examples/celebahq-7235.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..1f58117681d314434e9a0480afff9ab0a21e2839
Binary files /dev/null and b/hi-arm/qmupd_vs/examples/celebahq-7235.jpg differ
diff --git a/hi-arm/qmupd_vs/examples/celebahq-896.jpg b/hi-arm/qmupd_vs/examples/celebahq-896.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..cfa58028e8fca031925d8178a2751906b3d79fa2
Binary files /dev/null and b/hi-arm/qmupd_vs/examples/celebahq-896.jpg differ
diff --git a/hi-arm/qmupd_vs/hyper_parameters.py b/hi-arm/qmupd_vs/hyper_parameters.py
new file mode 100644
index 0000000000000000000000000000000000000000..66a3fa9f938d0b35f09a84811cb0058cda94e6a6
--- /dev/null
+++ b/hi-arm/qmupd_vs/hyper_parameters.py
@@ -0,0 +1,341 @@
+import tensorflow as tf
+
+
+#############################################
+# Common parameters
+#############################################
+
+FLAGS = tf.app.flags.FLAGS
+
+tf.app.flags.DEFINE_string(
+ 'dataset_dir',
+ 'datasets',
+ 'The directory of sketch data of the dataset.')
+tf.app.flags.DEFINE_string(
+ 'log_root',
+ 'outputs/log',
+ 'Directory to store tensorboard.')
+tf.app.flags.DEFINE_string(
+ 'log_img_root',
+ 'outputs/log_img',
+ 'Directory to store intermediate output images.')
+tf.app.flags.DEFINE_string(
+ 'snapshot_root',
+ 'outputs/snapshot',
+ 'Directory to store model checkpoints.')
+tf.app.flags.DEFINE_string(
+ 'neural_renderer_path',
+ 'outputs/snapshot/pretrain_neural_renderer/renderer_300000.tfmodel',
+ 'Path to the neural renderer model.')
+tf.app.flags.DEFINE_string(
+ 'perceptual_model_root',
+ 'outputs/snapshot/pretrain_perceptual_model',
+ 'Directory to store perceptual model.')
+tf.app.flags.DEFINE_string(
+ 'data',
+ '',
+ 'The dataset type.')
+
+
+def get_default_hparams_clean():
+ """Return default HParams for sketch-rnn."""
+ hparams = tf.contrib.training.HParams(
+ program_name='new_train_clean_line_drawings',
+ data_set='clean_line_drawings', # Our dataset.
+
+ input_channel=1,
+
+ num_steps=75040, # Total number of steps of training.
+ save_every=75000,
+ eval_every=5000,
+
+ max_seq_len=48,
+ batch_size=20,
+ gpus=[0, 1],
+ loop_per_gpu=1,
+
+ sn_loss_type='increasing', # ['decreasing', 'fixed', 'increasing']
+ stroke_num_loss_weight=0.02,
+ stroke_num_loss_weight_end=0.0,
+ increase_start_steps=25000,
+ decrease_stop_steps=40000,
+
+ perc_loss_layers=['ReLU1_2', 'ReLU2_2', 'ReLU3_3', 'ReLU5_1'],
+ perc_loss_fuse_type='add', # ['max', 'add', 'raw_add', 'weighted_sum']
+
+ init_cursor_on_undrawn_pixel=False,
+
+ early_pen_loss_type='move', # ['head', 'tail', 'move']
+ early_pen_loss_weight=0.1,
+ early_pen_length=7,
+
+ min_width=0.01,
+ min_window_size=32,
+ max_scaling=2.0,
+
+ encode_cursor_type='value',
+
+ image_size_small=128,
+ image_size_large=278,
+
+ cropping_type='v3', # ['v2', 'v3']
+ pasting_type='v3', # ['v2', 'v3']
+ pasting_diff=True,
+
+ concat_win_size=True,
+
+ encoder_type='conv13_c3',
+ # ['conv10', 'conv10_deep', 'conv13', 'conv10_c3', 'conv10_deep_c3', 'conv13_c3']
+ # ['conv13_c3_attn']
+ # ['combine33', 'combine43', 'combine53', 'combineFC']
+ vary_thickness=False,
+
+ outside_loss_weight=10.0,
+ win_size_outside_loss_weight=10.0,
+
+ resize_method='AREA', # ['BILINEAR', 'NEAREST_NEIGHBOR', 'BICUBIC', 'AREA']
+
+ concat_cursor=True,
+
+ use_softargmax=True,
+ soft_beta=10, # value for the soft argmax
+
+ raster_loss_weight=1.0,
+
+ dec_rnn_size=256, # Size of decoder.
+ dec_model='hyper', # Decoder: lstm, layer_norm or hyper.
+ # z_size=128, # Size of latent vector z. Recommend 32, 64 or 128.
+ bin_gt=True,
+
+ stop_accu_grad=True,
+
+ random_cursor=True,
+ cursor_type='next',
+
+ raster_size=128,
+
+ pix_drop_kp=1.0, # Dropout keep rate
+ add_coordconv=True,
+ position_format='abs',
+ raster_loss_base_type='perceptual', # [l1, mse, perceptual]
+
+ grad_clip=1.0, # Gradient clipping. Recommend leaving at 1.0.
+
+ learning_rate=0.0001, # Learning rate.
+ decay_rate=0.9999, # Learning rate decay per minibatch.
+ decay_power=0.9,
+ min_learning_rate=0.000001, # Minimum learning rate.
+
+ use_recurrent_dropout=True, # Dropout with memory loss. Recommended
+ recurrent_dropout_prob=0.90, # Probability of recurrent dropout keep.
+ use_input_dropout=False, # Input dropout. Recommend leaving False.
+ input_dropout_prob=0.90, # Probability of input dropout keep.
+ use_output_dropout=False, # Output dropout. Recommend leaving False.
+ output_dropout_prob=0.90, # Probability of output dropout keep.
+
+ model_mode='train' # ['train', 'eval', 'sample']
+ )
+ return hparams
+
+
+def get_default_hparams_rough():
+ """Return default HParams for sketch-rnn."""
+ hparams = tf.contrib.training.HParams(
+ program_name='new_train_rough_sketches',
+ data_set='rough_sketches', # ['rough_sketches', 'faces']
+
+ input_channel=3,
+
+ num_steps=90040, # Total number of steps of training.
+ save_every=90000,
+ eval_every=5000,
+
+ max_seq_len=48,
+ batch_size=20,
+ gpus=[0, 1],
+ loop_per_gpu=1,
+
+ sn_loss_type='increasing', # ['decreasing', 'fixed', 'increasing']
+ stroke_num_loss_weight=0.1,
+ stroke_num_loss_weight_end=0.0,
+ increase_start_steps=25000,
+ decrease_stop_steps=40000,
+
+ photo_prob_type='one', # ['increasing', 'zero', 'one']
+ photo_prob_start_step=35000,
+
+ perc_loss_layers=['ReLU2_2', 'ReLU3_3', 'ReLU5_1'],
+ perc_loss_fuse_type='add', # ['max', 'add', 'raw_add', 'weighted_sum']
+
+ early_pen_loss_type='move', # ['head', 'tail', 'move']
+ early_pen_loss_weight=0.2,
+ early_pen_length=7,
+
+ min_width=0.01,
+ min_window_size=32,
+ max_scaling=2.0,
+
+ encode_cursor_type='value',
+
+ image_size_small=128,
+ image_size_large=278,
+
+ cropping_type='v3', # ['v2', 'v3']
+ pasting_type='v3', # ['v2', 'v3']
+ pasting_diff=True,
+
+ concat_win_size=True,
+
+ encoder_type='conv13_c3',
+ # ['conv10', 'conv10_deep', 'conv13', 'conv10_c3', 'conv10_deep_c3', 'conv13_c3']
+ # ['conv13_c3_attn']
+ # ['combine33', 'combine43', 'combine53', 'combineFC']
+
+ outside_loss_weight=10.0,
+ win_size_outside_loss_weight=10.0,
+
+ resize_method='AREA', # ['BILINEAR', 'NEAREST_NEIGHBOR', 'BICUBIC', 'AREA']
+
+ concat_cursor=True,
+
+ use_softargmax=True,
+ soft_beta=10, # value for the soft argmax
+
+ raster_loss_weight=1.0,
+
+ dec_rnn_size=256, # Size of decoder.
+ dec_model='hyper', # Decoder: lstm, layer_norm or hyper.
+ # z_size=128, # Size of latent vector z. Recommend 32, 64 or 128.
+ bin_gt=True,
+
+ stop_accu_grad=True,
+
+ random_cursor=True,
+ cursor_type='next',
+
+ raster_size=128,
+
+ pix_drop_kp=1.0, # Dropout keep rate
+ add_coordconv=True,
+ position_format='abs',
+ raster_loss_base_type='perceptual', # [l1, mse, perceptual]
+
+ grad_clip=1.0, # Gradient clipping. Recommend leaving at 1.0.
+
+ learning_rate=0.0001, # Learning rate.
+ decay_rate=0.9999, # Learning rate decay per minibatch.
+ decay_power=0.9,
+ min_learning_rate=0.000001, # Minimum learning rate.
+
+ use_recurrent_dropout=True, # Dropout with memory loss. Recommended
+ recurrent_dropout_prob=0.90, # Probability of recurrent dropout keep.
+ use_input_dropout=False, # Input dropout. Recommend leaving False.
+ input_dropout_prob=0.90, # Probability of input dropout keep.
+ use_output_dropout=False, # Output dropout. Recommend leaving False.
+ output_dropout_prob=0.90, # Probability of output dropout keep.
+
+ model_mode='train' # ['train', 'eval', 'sample']
+ )
+ return hparams
+
+
+def get_default_hparams_normal():
+ """Return default HParams for sketch-rnn."""
+ hparams = tf.contrib.training.HParams(
+ program_name='new_train_faces',
+ data_set='faces', # ['rough_sketches', 'faces']
+
+ input_channel=3,
+
+ num_steps=90040, # Total number of steps of training.
+ save_every=90000,
+ eval_every=5000,
+
+ max_seq_len=48,
+ batch_size=20,
+ gpus=[0, 1],
+ loop_per_gpu=1,
+
+ sn_loss_type='fixed', # ['decreasing', 'fixed', 'increasing']
+ stroke_num_loss_weight=0.0,
+ stroke_num_loss_weight_end=0.0,
+ increase_start_steps=0,
+ decrease_stop_steps=40000,
+
+ photo_prob_type='interpolate', # ['increasing', 'zero', 'one', 'interpolate']
+ photo_prob_start_step=30000,
+ photo_prob_end_step=60000,
+
+ perc_loss_layers=['ReLU2_2', 'ReLU3_3', 'ReLU4_2', 'ReLU5_1'],
+ perc_loss_fuse_type='add', # ['max', 'add', 'raw_add', 'weighted_sum']
+
+ early_pen_loss_type='move', # ['head', 'tail', 'move']
+ early_pen_loss_weight=0.2,
+ early_pen_length=7,
+
+ min_width=0.01,
+ min_window_size=32,
+ max_scaling=2.0,
+
+ encode_cursor_type='value',
+
+ image_size_small=128,
+ image_size_large=256,
+
+ cropping_type='v3', # ['v2', 'v3']
+ pasting_type='v3', # ['v2', 'v3']
+ pasting_diff=True,
+
+ concat_win_size=True,
+
+ encoder_type='conv13_c3',
+ # ['conv10', 'conv10_deep', 'conv13', 'conv10_c3', 'conv10_deep_c3', 'conv13_c3']
+ # ['conv13_c3_attn']
+ # ['combine33', 'combine43', 'combine53', 'combineFC']
+
+ outside_loss_weight=10.0,
+ win_size_outside_loss_weight=10.0,
+
+ resize_method='AREA', # ['BILINEAR', 'NEAREST_NEIGHBOR', 'BICUBIC', 'AREA']
+
+ concat_cursor=True,
+
+ use_softargmax=True,
+ soft_beta=10, # value for the soft argmax
+
+ raster_loss_weight=1.0,
+
+ dec_rnn_size=256, # Size of decoder.
+ dec_model='hyper', # Decoder: lstm, layer_norm or hyper.
+ # z_size=128, # Size of latent vector z. Recommend 32, 64 or 128.
+ bin_gt=True,
+
+ stop_accu_grad=True,
+
+ random_cursor=True,
+ cursor_type='next',
+
+ raster_size=128,
+
+ pix_drop_kp=1.0, # Dropout keep rate
+ add_coordconv=True,
+ position_format='abs',
+ raster_loss_base_type='perceptual', # [l1, mse, perceptual]
+
+ grad_clip=1.0, # Gradient clipping. Recommend leaving at 1.0.
+
+ learning_rate=0.0001, # Learning rate.
+ decay_rate=0.9999, # Learning rate decay per minibatch.
+ decay_power=0.9,
+ min_learning_rate=0.000001, # Minimum learning rate.
+
+ use_recurrent_dropout=True, # Dropout with memory loss. Recommended
+ recurrent_dropout_prob=0.90, # Probability of recurrent dropout keep.
+ use_input_dropout=False, # Input dropout. Recommend leaving False.
+ input_dropout_prob=0.90, # Probability of input dropout keep.
+ use_output_dropout=False, # Output dropout. Recommend leaving False.
+ output_dropout_prob=0.90, # Probability of output dropout keep.
+
+ model_mode='train' # ['train', 'eval', 'sample']
+ )
+ return hparams
diff --git a/hi-arm/qmupd_vs/main.py b/hi-arm/qmupd_vs/main.py
new file mode 100644
index 0000000000000000000000000000000000000000..aaa3f12c95d784b4d7c31fe6cfa46de1c55ca3f4
--- /dev/null
+++ b/hi-arm/qmupd_vs/main.py
@@ -0,0 +1,574 @@
+
+from camera_tools import CameraApp
+import draw_tools
+import cv2
+import os
+from options.test_options import TestOptions
+from data import create_dataset
+from models import create_model
+from util.visualizer import save_images
+import shutil
+import glob
+import warnings
+import util
+import paramiko
+
+#================== settings ==================
+exp = 'QMUPD_model'
+epoch='200'
+dataroot = 'robot_data/dataset/'
+gpu_id = '-1'
+netga = 'resnet_style2_9blocks'
+model0_res = 0
+model1_res = 0
+imgsize = 512
+extraflag = ' --netga %s --model0_res %d --model1_res %d' % (netga, model0_res, model1_res)
+output_dir = 'robot_data/output/'
+
+import numpy as np
+import tensorflow as tf
+from six.moves import range
+from PIL import Image
+import argparse
+
+import hyper_parameters as hparams
+from model_common_test import DiffPastingV3, VirtualSketchingModel
+from utils import reset_graph, load_checkpoint, update_hyperparams, draw, \
+ save_seq_data, image_pasting_v3_testing, draw_strokes
+from dataset_utils import load_dataset_testing
+
+os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
+
+
+def move_cursor_to_undrawn(current_pos_list, input_image_, patch_size,
+ move_min_dist, move_max_dist, trial_times=20):
+ """
+ :param current_pos_list: (select_times, 1, 2), [0.0, 1.0)
+ :param input_image_: (1, image_size, image_size, 3), [0-stroke, 1-BG]
+ :return: new_cursor_pos: (select_times, 1, 2), [0.0, 1.0)
+ """
+
+ def crop_patch(image, center, image_size, crop_size):
+ x0 = center[0] - crop_size // 2
+ x1 = x0 + crop_size
+ y0 = center[1] - crop_size // 2
+ y1 = y0 + crop_size
+ x0 = max(0, min(x0, image_size))
+ y0 = max(0, min(y0, image_size))
+ x1 = max(0, min(x1, image_size))
+ y1 = max(0, min(y1, image_size))
+ patch = image[y0:y1, x0:x1]
+ return patch
+
+ def isvalid_cursor(input_img, cursor, raster_size, image_size):
+ # input_img: (image_size, image_size, 3), [0.0-BG, 1.0-stroke]
+ cursor_large = cursor * float(image_size)
+ cursor_large = np.round(cursor_large).astype(np.int32)
+ input_crop_patch = crop_patch(input_img, cursor_large, image_size, raster_size)
+ if np.sum(input_crop_patch) > 0.0:
+ return True
+ else:
+ return False
+
+ def randomly_move_cursor(cursor_position, img_size, min_dist_p, max_dist_p):
+ # cursor_position: (2), [0.0, 1.0)
+ cursor_pos_large = cursor_position * img_size
+ min_dist = int(min_dist_p / 2.0 * img_size)
+ max_dist = int(max_dist_p / 2.0 * img_size)
+ rand_cursor_offset = np.random.randint(min_dist, max_dist, size=cursor_pos_large.shape)
+ rand_cursor_offset_sign = np.random.randint(0, 1 + 1, size=cursor_pos_large.shape)
+ rand_cursor_offset_sign[rand_cursor_offset_sign == 0] = -1
+ rand_cursor_offset = rand_cursor_offset * rand_cursor_offset_sign
+
+ new_cursor_pos_large = cursor_pos_large + rand_cursor_offset
+ new_cursor_pos_large = np.minimum(np.maximum(new_cursor_pos_large, 0), img_size - 1) # (2), large-level
+ new_cursor_pos = new_cursor_pos_large.astype(np.float32) / float(img_size)
+ return new_cursor_pos
+
+ input_image = 1.0 - input_image_[0] # (image_size, image_size, 3), [0-BG, 1-stroke]
+ img_size = input_image.shape[0]
+
+ new_cursor_pos = []
+ for cursor_i in range(current_pos_list.shape[0]):
+ curr_cursor = current_pos_list[cursor_i][0]
+
+ for trial_i in range(trial_times):
+ new_cursor = randomly_move_cursor(curr_cursor, img_size, move_min_dist, move_max_dist) # (2), [0.0, 1.0)
+
+ if isvalid_cursor(input_image, new_cursor, patch_size, img_size) or trial_i == trial_times - 1:
+ new_cursor_pos.append(new_cursor)
+ break
+
+ assert len(new_cursor_pos) == current_pos_list.shape[0]
+ new_cursor_pos = np.expand_dims(np.stack(new_cursor_pos, axis=0), axis=1) # (select_times, 1, 2), [0.0, 1.0)
+ return new_cursor_pos
+
+
+def sample(sess, model, input_photos, init_cursor, image_size, init_len, seq_lens,
+ state_dependent, pasting_func, round_stop_state_num,
+ min_dist_p, max_dist_p):
+ """Samples a sequence from a pre-trained model."""
+ select_times = 1
+ curr_canvas = np.zeros(dtype=np.float32,
+ shape=(select_times, image_size, image_size)) # [0.0-BG, 1.0-stroke]
+
+ initial_state = sess.run(model.initial_state)
+
+ params_list = [[] for _ in range(select_times)]
+ state_raw_list = [[] for _ in range(select_times)]
+ state_soft_list = [[] for _ in range(select_times)]
+ window_size_list = [[] for _ in range(select_times)]
+
+ round_cursor_list = []
+ round_length_real_list = []
+
+ input_photos_tiles = np.tile(input_photos, (select_times, 1, 1, 1))
+
+ for cursor_i, seq_len in enumerate(seq_lens):
+ if cursor_i == 0:
+ cursor_pos = np.squeeze(init_cursor, axis=0) # (select_times, 1, 2)
+ else:
+ cursor_pos = move_cursor_to_undrawn(cursor_pos, input_photos, model.hps.raster_size,
+ min_dist_p, max_dist_p) # (select_times, 1, 2)
+ round_cursor_list.append(cursor_pos)
+
+ prev_state = initial_state
+ prev_width = np.stack([model.hps.min_width for _ in range(select_times)], axis=0)
+ prev_scaling = np.ones((select_times), dtype=np.float32) # (N)
+ prev_window_size = np.ones((select_times), dtype=np.float32) * model.hps.raster_size # (N)
+
+ continuous_one_state_num = 0
+
+ for i in range(seq_len):
+ if not state_dependent and i % init_len == 0:
+ prev_state = initial_state
+
+ curr_window_size = prev_scaling * prev_window_size # (N)
+ curr_window_size = np.maximum(curr_window_size, model.hps.min_window_size)
+ curr_window_size = np.minimum(curr_window_size, image_size)
+
+ feed = {
+ model.initial_state: prev_state,
+ model.input_photo: input_photos_tiles,
+ model.curr_canvas_hard: curr_canvas.copy(),
+ model.cursor_position: cursor_pos,
+ model.image_size: image_size,
+ model.init_width: prev_width,
+ model.init_scaling: prev_scaling,
+ model.init_window_size: prev_window_size,
+ }
+
+ o_other_params_list, o_pen_list, o_pred_params_list, next_state_list = \
+ sess.run([model.other_params, model.pen_ras, model.pred_params, model.final_state], feed_dict=feed)
+ # o_other_params: (N, 6), o_pen: (N, 2), pred_params: (N, 1, 7), next_state: (N, 1024)
+ # o_other_params: [tanh*2, sigmoid*2, tanh*2, sigmoid*2]
+
+ idx_eos_list = np.argmax(o_pen_list, axis=1) # (N)
+
+ output_i = 0
+ idx_eos = idx_eos_list[output_i]
+
+ eos = [0, 0]
+ eos[idx_eos] = 1
+
+ other_params = o_other_params_list[output_i].tolist() # (6)
+ params_list[output_i].append([eos[1]] + other_params)
+ state_raw_list[output_i].append(o_pen_list[output_i][1])
+ state_soft_list[output_i].append(o_pred_params_list[output_i, 0, 0])
+ window_size_list[output_i].append(curr_window_size[output_i])
+
+ # draw the stroke and add to the canvas
+ x1y1, x2y2, width2 = o_other_params_list[output_i, 0:2], o_other_params_list[output_i, 2:4], \
+ o_other_params_list[output_i, 4]
+ x0y0 = np.zeros_like(x2y2) # (2), [-1.0, 1.0]
+ x0y0 = np.divide(np.add(x0y0, 1.0), 2.0) # (2), [0.0, 1.0]
+ x2y2 = np.divide(np.add(x2y2, 1.0), 2.0) # (2), [0.0, 1.0]
+ widths = np.stack([prev_width[output_i], width2], axis=0) # (2)
+ o_other_params_proc = np.concatenate([x0y0, x1y1, x2y2, widths], axis=-1).tolist() # (8)
+
+ if idx_eos == 0:
+ f = o_other_params_proc + [1.0, 1.0]
+ pred_stroke_img, _ = draw(f) # (raster_size, raster_size), [0.0-stroke, 1.0-BG]
+ pred_stroke_img_large = image_pasting_v3_testing(1.0 - pred_stroke_img,
+ cursor_pos[output_i, 0],
+ image_size,
+ curr_window_size[output_i],
+ pasting_func, sess) # [0.0-BG, 1.0-stroke]
+ curr_canvas[output_i] += pred_stroke_img_large # [0.0-BG, 1.0-stroke]
+
+ continuous_one_state_num = 0
+ else:
+ continuous_one_state_num += 1
+
+ curr_canvas = np.clip(curr_canvas, 0.0, 1.0)
+
+ next_width = o_other_params_list[:, 4] # (N)
+ next_scaling = o_other_params_list[:, 5]
+ next_window_size = next_scaling * curr_window_size # (N)
+ next_window_size = np.maximum(next_window_size, model.hps.min_window_size)
+ next_window_size = np.minimum(next_window_size, image_size)
+
+ prev_state = next_state_list
+ prev_width = next_width * curr_window_size / next_window_size # (N,)
+ prev_scaling = next_scaling # (N)
+ prev_window_size = curr_window_size
+
+ # update cursor_pos based on hps.cursor_type
+ new_cursor_offsets = o_other_params_list[:, 2:4] * (
+ np.expand_dims(curr_window_size, axis=-1) / 2.0) # (N, 2), patch-level
+ new_cursor_offset_next = new_cursor_offsets
+
+ # important!!!
+ new_cursor_offset_next = np.concatenate([new_cursor_offset_next[:, 1:2], new_cursor_offset_next[:, 0:1]],
+ axis=-1)
+
+ cursor_pos_large = cursor_pos * float(image_size)
+ stroke_position_next = cursor_pos_large[:, 0, :] + new_cursor_offset_next # (N, 2), large-level
+
+ if model.hps.cursor_type == 'next':
+ cursor_pos_large = stroke_position_next # (N, 2), large-level
+ else:
+ raise Exception('Unknown cursor_type')
+
+ cursor_pos_large = np.minimum(np.maximum(cursor_pos_large, 0.0),
+ float(image_size - 1)) # (N, 2), large-level
+ cursor_pos_large = np.expand_dims(cursor_pos_large, axis=1) # (N, 1, 2)
+ cursor_pos = cursor_pos_large / float(image_size)
+
+ if continuous_one_state_num >= round_stop_state_num or i == seq_len - 1:
+ round_length_real_list.append(i + 1)
+ break
+
+ return params_list, state_raw_list, state_soft_list, curr_canvas, window_size_list, \
+ round_cursor_list, round_length_real_list
+
+
+def main_testing(test_image_base_dir, test_dataset, test_image_name,
+ sampling_base_dir, model_base_dir, model_name,
+ sampling_num,
+ min_dist_p, max_dist_p,
+ longer_infer_lens, round_stop_state_num,
+ draw_seq=False, draw_order=False,
+ state_dependent=True):
+ model_params_default = hparams.get_default_hparams_rough()
+ model_params = update_hyperparams(model_params_default, model_base_dir, model_name, infer_dataset=test_dataset)
+
+ [test_set, eval_hps_model, sample_hps_model] = \
+ load_dataset_testing(test_image_base_dir, test_dataset, test_image_name, model_params)
+
+ test_image_raw_name = test_image_name[:test_image_name.find('.')]
+ model_dir = os.path.join(model_base_dir, model_name)
+
+ reset_graph()
+ sampling_model = VirtualSketchingModel(sample_hps_model)
+
+ # differentiable pasting graph
+ paste_v3_func = DiffPastingV3(sample_hps_model.raster_size)
+
+ tfconfig = tf.ConfigProto()
+ tfconfig.gpu_options.allow_growth = True
+ sess = tf.InteractiveSession(config=tfconfig)
+ sess.run(tf.global_variables_initializer())
+
+ # loads the weights from checkpoint into our model
+ snapshot_step = load_checkpoint(sess, model_dir, gen_model_pretrain=True)
+ print('snapshot_step', snapshot_step)
+ sampling_dir = os.path.join(sampling_base_dir, test_dataset + '__' + model_name)
+ os.makedirs(sampling_dir, exist_ok=True)
+
+ for sampling_i in range(sampling_num):
+ input_photos, init_cursors, test_image_size = test_set.get_test_image()
+ # input_photos: (1, image_size, image_size, 3), [0-stroke, 1-BG]
+ # init_cursors: (N, 1, 2), in size [0.0, 1.0)
+
+ print()
+ print(test_image_name, ', image_size:', test_image_size, ', sampling_i:', sampling_i)
+ print('Processing ...')
+
+ if init_cursors.ndim == 3:
+ init_cursors = np.expand_dims(init_cursors, axis=0)
+
+ input_photos = input_photos[0:1, :, :, :]
+
+ ori_img = (input_photos.copy()[0] * 255.0).astype(np.uint8)
+ ori_img_png = Image.fromarray(ori_img, 'RGB')
+ ori_img_png.save(os.path.join(sampling_dir, test_image_raw_name + '_input.png'), 'PNG')
+
+ # decoding for sampling
+ strokes_raw_out_list, states_raw_out_list, states_soft_out_list, pred_imgs_out, \
+ window_size_out_list, round_new_cursors, round_new_lengths = sample(
+ sess, sampling_model, input_photos, init_cursors, test_image_size,
+ eval_hps_model.max_seq_len, longer_infer_lens, state_dependent, paste_v3_func,
+ round_stop_state_num, min_dist_p, max_dist_p)
+ # pred_imgs_out: (N, H, W), [0.0-BG, 1.0-stroke]
+
+ print('## round_lengths:', len(round_new_lengths), ':', round_new_lengths)
+
+ output_i = 0
+ strokes_raw_out = np.stack(strokes_raw_out_list[output_i], axis=0)
+ states_raw_out = states_raw_out_list[output_i]
+ states_soft_out = states_soft_out_list[output_i]
+ window_size_out = window_size_out_list[output_i]
+
+ multi_cursors = [init_cursors[0, output_i, 0]]
+ for c_i in range(len(round_new_cursors)):
+ best_cursor = round_new_cursors[c_i][output_i, 0] # (2)
+ multi_cursors.append(best_cursor)
+ assert len(multi_cursors) == len(round_new_lengths)
+
+ print('strokes_raw_out', strokes_raw_out.shape)
+
+ clean_states_soft_out = np.array(states_soft_out) # (N)
+
+ flag_list = strokes_raw_out[:, 0].astype(np.int32) # (N)
+ drawing_len = len(flag_list) - np.sum(flag_list)
+ assert drawing_len >= 0
+
+ # print(' flag raw\t soft\t x1\t\t y1\t\t x2\t\t y2\t\t r2\t\t s2')
+ for i in range(strokes_raw_out.shape[0]):
+ flag, x1, y1, x2, y2, r2, s2 = strokes_raw_out[i]
+ win_size = window_size_out[i]
+ out_format = '#%d: %d | %.4f, %.4f, %.4f, %.4f, %.4f, %.4f, %.4f, %.4f'
+ out_values = (i, flag, states_raw_out[i], clean_states_soft_out[i], x1, y1, x2, y2, r2, s2)
+ out_log = out_format % out_values
+ # print(out_log)
+
+ print('Saving results ...')
+ # Save the results
+ print("================", sampling_dir, test_image_raw_name + '_' + str(sampling_i))
+ save_seq_data(sampling_dir, test_image_raw_name + '_' + str(sampling_i),
+ strokes_raw_out, multi_cursors,
+ test_image_size, round_new_lengths, eval_hps_model.min_width)
+
+ draw_strokes(strokes_raw_out, sampling_dir, test_image_raw_name + '_' + str(sampling_i) + '_pred.png',
+ ori_img, test_image_size,
+ multi_cursors, round_new_lengths, eval_hps_model.min_width, eval_hps_model.cursor_type,
+ sample_hps_model.raster_size, sample_hps_model.min_window_size,
+ sess,
+ pasting_func=paste_v3_func,
+ save_seq=draw_seq, draw_order=draw_order)
+
+
+def generate_simple_order_line(model_name, test_image_name, sampling_num):
+ test_dataset = 'rough_sketches'
+ # test_image_base_dir = 'sample_inputs'
+ # test_image_base_dir = 'results/QMUPD_model/test_200/imagesstyle0-0-1'
+ test_image_base_dir = './'
+ sampling_base_dir = 'robot_data/sampling'
+ model_base_dir = 'outputs/snapshot'
+
+ state_dependent = False
+ longer_infer_lens = [128 for _ in range(10)]
+ round_stop_state_num = 12
+ min_dist_p = 0.3
+ max_dist_p = 0.9
+
+ draw_seq = False
+ draw_color_order = True
+
+ # set numpy output to something sensible
+ np.set_printoptions(precision=8, edgeitems=6, linewidth=200, suppress=True)
+
+ #main_testing(test_image_base_dir, test_dataset, test_image_name,
+ # sampling_base_dir, model_base_dir, model_name, sampling_num,
+ # min_dist_p=min_dist_p, max_dist_p=max_dist_p,
+ # draw_seq=draw_seq, draw_order=draw_color_order,
+ # state_dependent=state_dependent, longer_infer_lens=longer_infer_lens,
+ # round_stop_state_num=round_stop_state_num)
+ main_testing(output_dir, test_dataset, test_image_name,
+ sampling_base_dir, model_base_dir, model_name, sampling_num,
+ min_dist_p=min_dist_p, max_dist_p=max_dist_p,
+ draw_seq=draw_seq, draw_order=draw_color_order,
+ state_dependent=state_dependent, longer_infer_lens=longer_infer_lens,
+ round_stop_state_num=round_stop_state_num)
+
+
+def decode_npz_file(npz_file):
+ data = np.load(npz_file, encoding='latin1', allow_pickle=True)
+ strokes_data = data['strokes_data']
+ init_cursors = data['init_cursors']
+ image_size = data['image_size']
+ round_length = data['round_length']
+ init_width = data['init_width']
+ return strokes_data, init_cursors, image_size, round_length, init_width
+
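+# Minimal usage sketch for decode_npz_file (illustration only). The path below is
+# hypothetical; real .npz files are written by save_seq_data() under
+# robot_data/sampling/<dataset>__<model>/seq_data/.
+def _example_decode_npz(npz_path='robot_data/sampling/example/seq_data/example_0.npz'):
+    strokes, cursors, img_size, round_len, init_w = decode_npz_file(npz_path)
+    # strokes is expected to hold one row per step in the
+    # [flag, x1, y1, x2, y2, r2, s2] layout logged above
+    print('strokes:', strokes.shape, 'image_size:', img_size,
+          'rounds:', round_len, 'init_width:', init_w)
+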
+def scp_transfer(host, port, username, password, local_path, remote_path):
+    # create an SSH client
+    ssh = paramiko.SSHClient()
+
+    # allow connecting to hosts not listed in known_hosts
+    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
+
+    # connect to the SSH server
+    ssh.connect(host, port, username, password)
+
+    # open an SFTP session over the SSH connection
+    sftp = ssh.open_sftp()
+
+    # upload the file with SFTP put
+    sftp.put(local_path, remote_path)
+
+    # close the SFTP session and the SSH connection
+ sftp.close()
+ ssh.close()
+
+from flask import Flask, request, send_from_directory
+
+app = Flask(__name__)
+
+@app.route('/upload', methods=['GET', 'POST'])
+def create_upload_file():
+ if request.method == 'POST':
+ print("XXXXXXXXXXXXXXXXXXXXX")
+ file = request.files['file']
+ filename = file.filename
+ file.save(os.path.join(dataroot, filename))
+ return "OK"
+import time
+
+
+def transparence2white(img):
+    sp = img.shape  # image dimensions
+    width = sp[0]   # first dimension (rows)
+    height = sp[1]  # second dimension (columns)
+    for yh in range(height):
+        for xw in range(width):
+            color_d = img[xw, yh]  # read the pixel; 4 channels when an alpha channel is present
+            if color_d.size != 4:  # images with only 3 channels need no processing
+                continue
+            if color_d[3] == 0:  # the last channel is alpha; 0 means fully transparent
+                img[xw, yh] = [255, 255, 255, 255]  # set the pixel to opaque white
+ return img
+
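+# Vectorized alternative to transparence2white (a sketch only; not wired into the
+# pipeline). Assumes a BGRA image as returned by cv2.imread(..., cv2.IMREAD_UNCHANGED);
+# 3-channel images are returned unchanged.
+def transparence2white_fast(img):
+    if img.ndim == 3 and img.shape[2] == 4:
+        # pixels with zero alpha become opaque white
+        img[img[:, :, 3] == 0] = [255, 255, 255, 255]
+    return img
+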
+@app.route('/sketch', methods=['POST'])
+def sketch():
+    # save the uploaded file under the dataroot folder
+ # file = request.files['file']
+ # filename = file.filename
+ # print("XXXXXXXXXXXXXXXXXXXX")
+ # print(filename)
+ image_path = request.form.get("image_path")
+ print("image_path:", image_path)
+ matting_root = "/home/qian/projects/robot_sketch_draw/image-matting"
+ filename = image_path.split('/')[-1]
+    print("Copying the file into the input folder")
+    src_path = matting_root + image_path
+ print("src_path:", src_path)
+ filepath = os.path.join(dataroot,"../input", filename)
+ shutil.copyfile(src_path, filepath)
+
+ png_image = cv2.imread(filepath, cv2.IMREAD_UNCHANGED)
+ filepath=filepath.replace(".png", ".jpg")
+    # set transparent PNG background pixels to white
+ #png_image[np.where((png_image == [0, 0, 0, 0]).all(axis=2))] = [255, 255, 255, 255]
+ png_image = transparence2white(png_image)
+    # resize to 512x512
+ png_image = cv2.resize(png_image, (512, 512))
+ cv2.imwrite(filepath, png_image)
+
+
+ outimage_path = draw_tools.generate_style_image(filepath, dataroot, output_dir)
+ outimage_path = outimage_path.split('/')[-1]
+ # return {
+ # "sketch_image_url": "./robot_data/output/"+outimage_path,
+ # "seq_data_file": None
+ # }
+ # outimage_path = "robot_data/output/1714032527749_fake.png"
+ # print(data)
+ generate_simple_order_line("pretrain_rough_sketches", outimage_path, 1)
+
+ prx = outimage_path.split('.')[0]
+ # out_png_image = os.path.join("robot_data/sampling/rough_sketches__pretrain_rough_sketches/", f"{prx}_0_pred.png")
+ out_png_image = os.path.join("robot_data/contour_images/", f"{prx}.png")
+ seq_data_file = os.path.join("robot_data/sampling/rough_sketches__pretrain_rough_sketches/seq_data/", f"{prx}_0.npz")
+ # strokes_data, init_cursors, image_size, _, _ = decode_npz_file(seq_data_file)
+ contours_list = draw_tools.getContourList_v2(seq_data_file)
+ contours_list = draw_tools.sortContoursList(contours_list)
+    # Hyperparameters: neighborhood dilation radius (controls line sparsity), overlap/nearness ratio above which a curve is removed, and the minimum contour length to keep.
+ contours_list = draw_tools.remove_overlap_and_near_contours(contours_list, (512, 512), 3, 0.9, 5)
+    # render the contours into an image and save it
+ contour_image = draw_tools.drawContours(contours_list, (512, 512,3))
+ # util.mkdirs('robot_data/contour_images')
+    # smoothing and resampling (currently disabled)
+ #contours_lists = draw_tools.sample_and_smooth_contours(contours_list, 10)
+ # prx = seq_data_file.split('/')[-1].split('.')[0]
+ cv2.imwrite(f"robot_data/contour_images/{prx}.png", contour_image)
+ # prx = prx.split('_')[0:1]
+ draw_tools.save_contour_points(contours_list, f"robot_data/contour_points/{prx}_contour_points.txt")
+ return {
+ "sketch_image_url": out_png_image,
+ "seq_data_file": seq_data_file
+ }
+
+@app.route('/drawing', methods=['GET', 'POST'])
+def drawing():
+ seq_data_file = request.form.get("seq_data_file")
+ print("seq_data_file:", seq_data_file)
+    # TODO: temporary code; forcibly rewrite the seq_data path into the contour_points path
+    # e.g. seq_data_file = robot_data/sampling/rough_sketches__pretrain_rough_sketches/seq_data/{prx}_0.npz
+    # maps to the contours path f"robot_data/contour_points/{prx}_contour_points.txt"
+ prx = seq_data_file.split('/')[-1].split('_')[:-1]
+ prx = "_".join(prx)
+ contours_list_path = f"./robot_data/contour_points/{prx}_contour_points.txt"
+ print(contours_list_path)
+ scp_transfer('192.168.253.95', 22, "root", "root", contours_list_path, "/home/robot/Work/system/bspline.txt")
+ return "OK"
+
+@app.route('/')
+def hello():
+ return "hello"
+
+
+@app.route('/files/')
+def serve_file(filename):
+ return send_from_directory('', filename)
+
+if __name__ == '__main__':
+ warnings.filterwarnings("ignore", category=FutureWarning)
+ parser = argparse.ArgumentParser()
+ parser.add_argument('--sample', '-s', type=int, default=1, help="The number of outputs.")
+ parser.add_argument('--name', '-n', type=str, default="", help="The name of the image.")
+ args = parser.parse_args()
+ if args.name == "":
+        app = CameraApp()  # create the CameraApp object and start the capture program
+        # # # get the image name
+ # # # image_name = "./robot_data/input/1714032527749.jpg"
+ # # # image = cv2.imread(image_name, cv2.IMREAD_COLOR)
+ image = app.last_photo
+ image_name = app.last_photo_name
+ else:
+ image_name = args.name
+
+ filepath = image_name
+ outimage_path = draw_tools.generate_style_image(filepath, dataroot, output_dir)
+ outimage_path = outimage_path.split('/')[-1]
+ # outimage_path = "robot_data/output/1714032527749_fake.png"
+ # print(data)
+ generate_simple_order_line("pretrain_rough_sketches", outimage_path, 1)
+ prx = outimage_path.split('.')[0]
+ out_png_image = os.path.join("robot_data/sampling/rough_sketches__pretrain_rough_sketches/", f"{prx}_0_pred.png")
+ seq_data_file = os.path.join("robot_data/sampling/rough_sketches__pretrain_rough_sketches/seq_data/", f"{prx}_0.npz")
+ # strokes_data, init_cursors, image_size, _, _ = decode_npz_file(seq_data_file)
+ contours_list = draw_tools.getContourList_v2(seq_data_file)
+ cv2.imshow("origin contours", draw_tools.drawContours(contours_list, (512, 512,3)))
+ contours_list = draw_tools.sortContoursList(contours_list)
+ cv2.imshow("sorted contours", draw_tools.drawContours(contours_list, (512, 512,3)))
+    # Hyperparameters: neighborhood dilation radius (controls line sparsity), overlap/nearness ratio above which a curve is removed, and the minimum contour length to keep.
+ contours_list = draw_tools.remove_overlap_and_near_contours(contours_list, (512, 512), 4, 0.7, 10)
+ cv2.imshow("remove overlap contours", draw_tools.drawContours(contours_list, (512, 512,3)))
+    # smoothing and resampling (currently disabled)
+ #contours_lists = draw_tools.sample_and_smooth_contours(contours_list, 10)
+ # simple_image = cv2.imread(out_png_image)
+ # contours_list = draw_tools.getContourList(simple_image, 4, 100, 1)
+ draw_tools.save_contour_points(contours_list, f"robot_data/contour_points/{prx}_contour_points.txt")
+ cv2.waitKey(0)
+    # return "OK"
+    # print("image_name:", image_name)
+    # image_name = "robot_data/input/1714032527749.jpg"
+    # # generate the style image
+ # import uvicorn
+ # default_bind_host = "0.0.0.0"
+ # uvicorn.run(app, host=default_bind_host, port=8002)
+
+
diff --git a/hi-arm/qmupd_vs/model_common_test.py b/hi-arm/qmupd_vs/model_common_test.py
new file mode 100644
index 0000000000000000000000000000000000000000..ff5c40dfdf03363b9d14515fe96e5ed9bbd15ce2
--- /dev/null
+++ b/hi-arm/qmupd_vs/model_common_test.py
@@ -0,0 +1,607 @@
+import rnn
+import tensorflow as tf
+
+from subnet_tf_utils import generative_cnn_encoder, generative_cnn_encoder_deeper, generative_cnn_encoder_deeper13, \
+ generative_cnn_c3_encoder, generative_cnn_c3_encoder_deeper, generative_cnn_c3_encoder_deeper13, \
+ generative_cnn_c3_encoder_combine33, generative_cnn_c3_encoder_combine43, \
+ generative_cnn_c3_encoder_combine53, generative_cnn_c3_encoder_combineFC, \
+ generative_cnn_c3_encoder_deeper13_attn
+
+
+class DiffPastingV3(object):
+ def __init__(self, raster_size):
+ self.patch_canvas = tf.compat.v1.placeholder(dtype=tf.float32,
+ shape=(None, None, 1)) # (raster_size, raster_size, 1), [0.0-BG, 1.0-stroke]
+ self.cursor_pos_a = tf.compat.v1.placeholder(dtype=tf.float32, shape=(2)) # (2), float32, in large size
+ self.image_size_a = tf.compat.v1.placeholder(dtype=tf.int32, shape=()) # ()
+ self.window_size_a = tf.compat.v1.placeholder(dtype=tf.float32, shape=()) # (), float32, with grad
+ self.raster_size_a = float(raster_size)
+
+ self.pasted_image = self.image_pasting_sampling_v3()
+ # (image_size, image_size, 1), [0.0-BG, 1.0-stroke]
+
+ def image_pasting_sampling_v3(self):
+ padding_size = tf.cast(tf.ceil(self.window_size_a / 2.0), tf.int32)
+
+ x1y1_a = self.cursor_pos_a - self.window_size_a / 2.0 # (2), float32
+ x2y2_a = self.cursor_pos_a + self.window_size_a / 2.0 # (2), float32
+
+ x1y1_a_floor = tf.floor(x1y1_a) # (2)
+ x2y2_a_ceil = tf.ceil(x2y2_a) # (2)
+
+ cursor_pos_b_oricoord = (x1y1_a_floor + x2y2_a_ceil) / 2.0 # (2)
+ cursor_pos_b = (cursor_pos_b_oricoord - x1y1_a) / self.window_size_a * self.raster_size_a # (2)
+ raster_size_b = (x2y2_a_ceil - x1y1_a_floor) # (x, y)
+ image_size_b = self.raster_size_a
+ window_size_b = self.raster_size_a * (raster_size_b / self.window_size_a) # (x, y)
+
+ cursor_b_x, cursor_b_y = tf.split(cursor_pos_b, 2, axis=-1) # (1)
+
+ y1_b = cursor_b_y - (window_size_b[1] - 1.) / 2.
+ x1_b = cursor_b_x - (window_size_b[0] - 1.) / 2.
+ y2_b = y1_b + (window_size_b[1] - 1.)
+ x2_b = x1_b + (window_size_b[0] - 1.)
+ boxes_b = tf.concat([y1_b, x1_b, y2_b, x2_b], axis=-1) # (4)
+ boxes_b = boxes_b / tf.cast(image_size_b - 1, tf.float32) # with grad to window_size_a
+
+ box_ind_b = tf.ones((1), dtype=tf.int32) # (1)
+ box_ind_b = tf.cumsum(box_ind_b) - 1
+
+ patch_canvas = tf.expand_dims(self.patch_canvas,
+ axis=0) # (1, raster_size, raster_size, 1), [0.0-BG, 1.0-stroke]
+ boxes_b = tf.expand_dims(boxes_b, axis=0) # (1, 4)
+
+ valid_canvas = tf.image.crop_and_resize(patch_canvas, boxes_b, box_ind_b,
+ crop_size=[raster_size_b[1], raster_size_b[0]])
+ valid_canvas = valid_canvas[0] # (raster_size_b, raster_size_b, 1)
+
+ pad_up = tf.cast(x1y1_a_floor[1], tf.int32) + padding_size
+ pad_down = self.image_size_a + padding_size - tf.cast(x2y2_a_ceil[1], tf.int32)
+ pad_left = tf.cast(x1y1_a_floor[0], tf.int32) + padding_size
+ pad_right = self.image_size_a + padding_size - tf.cast(x2y2_a_ceil[0], tf.int32)
+
+ paddings = [[pad_up, pad_down],
+ [pad_left, pad_right],
+ [0, 0]]
+ pad_img = tf.pad(valid_canvas, paddings=paddings, mode='CONSTANT',
+ constant_values=0.0) # (H_p, W_p, 1), [0.0-BG, 1.0-stroke]
+
+ pasted_image = pad_img[padding_size: padding_size + self.image_size_a,
+ padding_size: padding_size + self.image_size_a, :]
+ # (image_size, image_size, 1), [0.0-BG, 1.0-stroke]
+ return pasted_image
+
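+# Minimal usage sketch for DiffPastingV3 (illustration only, not part of the model).
+# It assumes TF1-style graph mode with an active tf.compat.v1.Session `sess`, and uses
+# dummy values for the patch, cursor and window; it simply builds and evaluates the
+# pasting graph defined above.
+def _example_run_pasting(sess, raster_size=128, image_size=256):
+    import numpy as np
+    paster = DiffPastingV3(raster_size)  # builds a fresh pasting graph (fine for a one-off demo)
+    patch = np.zeros((raster_size, raster_size, 1), dtype=np.float32)  # [0.0-BG, 1.0-stroke]
+    pasted = sess.run(paster.pasted_image, feed_dict={
+        paster.patch_canvas: patch,
+        paster.cursor_pos_a: np.array([image_size / 2.0, image_size / 2.0], dtype=np.float32),
+        paster.image_size_a: image_size,
+        paster.window_size_a: float(raster_size),
+    })
+    return pasted  # (image_size, image_size, 1), [0.0-BG, 1.0-stroke]
+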
+
+class VirtualSketchingModel(object):
+ def __init__(self, hps, gpu_mode=True, reuse=False):
+ """Initializer for the model.
+
+ Args:
+ hps: a HParams object containing model hyperparameters
+ gpu_mode: a boolean that when True, uses GPU mode.
+          reuse: a boolean that when True, attempts to reuse variables.
+ """
+ self.hps = hps
+ assert hps.model_mode in ['train', 'eval', 'eval_sample', 'sample']
+ # with tf.variable_scope('SCC', reuse=reuse):
+ if not gpu_mode:
+ with tf.device('/cpu:0'):
+ print('Model using cpu.')
+ self.build_model()
+ else:
+ print('-' * 100)
+ print('model_mode:', hps.model_mode)
+ print('Model using gpu.')
+ self.build_model()
+
+ def build_model(self):
+ """Define model architecture."""
+ self.config_model()
+
+ initial_state = self.get_decoder_inputs()
+ self.initial_state = initial_state
+
+ ## use pred as the prev points
+ print(self.image_size)
+ other_params, pen_ras, final_state = self.get_points_and_raster_image(self.image_size)
+
+ # other_params: (N * max_seq_len, 6)
+ # pen_ras: (N * max_seq_len, 2), after softmax
+
+ self.other_params = other_params # (N * max_seq_len, 6)
+ self.pen_ras = pen_ras # (N * max_seq_len, 2), after softmax
+ self.final_state = final_state
+
+ if not self.hps.use_softargmax:
+ pen_state_soft = pen_ras[:, 1:2] # (N * max_seq_len, 1)
+ else:
+ pen_state_soft = self.differentiable_argmax(pen_ras, self.hps.soft_beta) # (N * max_seq_len, 1)
+
+ pred_params = tf.concat([pen_state_soft, other_params], axis=1) # (N * max_seq_len, 7)
+ self.pred_params = tf.reshape(pred_params, shape=[-1, self.hps.max_seq_len, 7]) # (N, max_seq_len, 7)
+ # pred_params: (N, max_seq_len, 7)
+
+ def config_model(self):
+ if self.hps.model_mode == 'train':
+ self.global_step = tf.Variable(0, name='global_step', trainable=False)
+
+ if self.hps.dec_model == 'lstm':
+ dec_cell_fn = rnn.LSTMCell
+ elif self.hps.dec_model == 'layer_norm':
+ dec_cell_fn = rnn.LayerNormLSTMCell
+ elif self.hps.dec_model == 'hyper':
+ dec_cell_fn = rnn.HyperLSTMCell
+ else:
+ assert False, 'please choose a respectable cell'
+
+ use_recurrent_dropout = self.hps.use_recurrent_dropout
+ use_input_dropout = self.hps.use_input_dropout
+ use_output_dropout = self.hps.use_output_dropout
+
+ dec_cell = dec_cell_fn(
+ self.hps.dec_rnn_size,
+ use_recurrent_dropout=use_recurrent_dropout,
+ dropout_keep_prob=self.hps.recurrent_dropout_prob)
+
+ # dropout:
+ # print('Input dropout mode = %s.' % use_input_dropout)
+ # print('Output dropout mode = %s.' % use_output_dropout)
+ # print('Recurrent dropout mode = %s.' % use_recurrent_dropout)
+ if use_input_dropout:
+ print('Dropout to input w/ keep_prob = %4.4f.' % self.hps.input_dropout_prob)
+ dec_cell = tf.contrib.rnn.DropoutWrapper(
+ dec_cell, input_keep_prob=self.hps.input_dropout_prob)
+ if use_output_dropout:
+ print('Dropout to output w/ keep_prob = %4.4f.' % self.hps.output_dropout_prob)
+ dec_cell = tf.contrib.rnn.DropoutWrapper(
+ dec_cell, output_keep_prob=self.hps.output_dropout_prob)
+ self.dec_cell = dec_cell
+
+ self.input_photo = tf.compat.v1.placeholder(dtype=tf.float32,
+ shape=[self.hps.batch_size, None, None, self.hps.input_channel]) # [0.0-stroke, 1.0-BG]
+ self.init_cursor = tf.compat.v1.placeholder(
+ dtype=tf.float32,
+ shape=[self.hps.batch_size, 1, 2]) # (N, 1, 2), in size [0.0, 1.0)
+ self.init_width = tf.compat.v1.placeholder(
+ dtype=tf.float32,
+            shape=[self.hps.batch_size]) # (N), in [0.0, 1.0]
+ self.init_scaling = tf.compat.v1.placeholder(
+ dtype=tf.float32,
+ shape=[self.hps.batch_size]) # (N), in [0.0, 1.0]
+ self.init_window_size = tf.compat.v1.placeholder(
+ dtype=tf.float32,
+ shape=[self.hps.batch_size]) # (N)
+ self.image_size = tf.compat.v1.placeholder(dtype=tf.int32, shape=()) # ()
+
+ ###########################
+
+ def normalize_image_m1to1(self, in_img_0to1):
+ norm_img_m1to1 = tf.multiply(in_img_0to1, 2.0)
+ norm_img_m1to1 = tf.subtract(norm_img_m1to1, 1.0)
+ return norm_img_m1to1
+
+ def add_coords(self, input_tensor):
+ batch_size_tensor = tf.shape(input_tensor)[0] # get N size
+
+ xx_ones = tf.ones([batch_size_tensor, self.hps.raster_size], dtype=tf.int32) # e.g. (N, raster_size)
+ xx_ones = tf.expand_dims(xx_ones, -1) # e.g. (N, raster_size, 1)
+ xx_range = tf.tile(tf.expand_dims(tf.range(self.hps.raster_size), 0),
+ [batch_size_tensor, 1]) # e.g. (N, raster_size)
+ xx_range = tf.expand_dims(xx_range, 1) # e.g. (N, 1, raster_size)
+
+ xx_channel = tf.matmul(xx_ones, xx_range) # e.g. (N, raster_size, raster_size)
+ xx_channel = tf.expand_dims(xx_channel, -1) # e.g. (N, raster_size, raster_size, 1)
+
+ yy_ones = tf.ones([batch_size_tensor, self.hps.raster_size], dtype=tf.int32) # e.g. (N, raster_size)
+ yy_ones = tf.expand_dims(yy_ones, 1) # e.g. (N, 1, raster_size)
+ yy_range = tf.tile(tf.expand_dims(tf.range(self.hps.raster_size), 0),
+ [batch_size_tensor, 1]) # (N, raster_size)
+ yy_range = tf.expand_dims(yy_range, -1) # e.g. (N, raster_size, 1)
+
+ yy_channel = tf.matmul(yy_range, yy_ones) # e.g. (N, raster_size, raster_size)
+ yy_channel = tf.expand_dims(yy_channel, -1) # e.g. (N, raster_size, raster_size, 1)
+
+ xx_channel = tf.cast(xx_channel, 'float32') / (self.hps.raster_size - 1)
+ yy_channel = tf.cast(yy_channel, 'float32') / (self.hps.raster_size - 1)
+ # xx_channel = xx_channel * 2 - 1 # [-1, 1]
+ # yy_channel = yy_channel * 2 - 1
+
+ ret = tf.concat([
+ input_tensor,
+ xx_channel,
+ yy_channel,
+ ], axis=-1) # e.g. (N, raster_size, raster_size, 4)
+
+ return ret
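+        # CoordConv sketch (comments only; raster_size = 128 assumed for the numbers):
+        # the two appended channels satisfy xx_channel[n, i, j] = j / 127 and
+        # yy_channel[n, i, j] = i / 127, i.e. normalized column / row coordinates
+        # in [0, 1] broadcast over the batch.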
+
+ def build_combined_encoder(self, patch_canvas, patch_photo, entire_canvas, entire_photo, cursor_pos,
+ image_size, window_size):
+ """
+ :param patch_canvas: (N, raster_size, raster_size, 1), [-1.0-stroke, 1.0-BG]
+ :param patch_photo: (N, raster_size, raster_size, 1/3), [-1.0-stroke, 1.0-BG]
+ :param entire_canvas: (N, image_size, image_size, 1), [0.0-stroke, 1.0-BG]
+ :param entire_photo: (N, image_size, image_size, 1/3), [0.0-stroke, 1.0-BG]
+ :param cursor_pos: (N, 1, 2), in size [0.0, 1.0)
+ :param window_size: (N, 1, 1), float, in large size
+ :return:
+ """
+ if self.hps.resize_method == 'BILINEAR':
+ resize_method = tf.image.ResizeMethod.BILINEAR
+ elif self.hps.resize_method == 'NEAREST_NEIGHBOR':
+ resize_method = tf.image.ResizeMethod.NEAREST_NEIGHBOR
+ elif self.hps.resize_method == 'BICUBIC':
+ resize_method = tf.image.ResizeMethod.BICUBIC
+ elif self.hps.resize_method == 'AREA':
+ resize_method = tf.image.ResizeMethod.AREA
+ else:
+ raise Exception('unknown resize_method', self.hps.resize_method)
+
+ patch_photo = tf.stop_gradient(patch_photo)
+ patch_canvas = tf.stop_gradient(patch_canvas)
+ cursor_pos = tf.stop_gradient(cursor_pos)
+ window_size = tf.stop_gradient(window_size)
+
+ entire_photo_small = tf.stop_gradient(tf.image.resize_images(entire_photo,
+ (self.hps.raster_size, self.hps.raster_size),
+ method=resize_method))
+ entire_canvas_small = tf.stop_gradient(tf.image.resize_images(entire_canvas,
+ (self.hps.raster_size, self.hps.raster_size),
+ method=resize_method))
+ entire_photo_small = self.normalize_image_m1to1(entire_photo_small) # [-1.0-stroke, 1.0-BG]
+ entire_canvas_small = self.normalize_image_m1to1(entire_canvas_small) # [-1.0-stroke, 1.0-BG]
+
+ if self.hps.encode_cursor_type == 'value':
+ cursor_pos_norm = tf.expand_dims(cursor_pos, axis=1) # (N, 1, 1, 2)
+ cursor_pos_norm = tf.tile(cursor_pos_norm, [1, self.hps.raster_size, self.hps.raster_size, 1])
+ cursor_info = cursor_pos_norm
+ else:
+ raise Exception('Unknown encode_cursor_type', self.hps.encode_cursor_type)
+
+ batch_input_combined = tf.concat([patch_photo, patch_canvas, entire_photo_small, entire_canvas_small, cursor_info],
+ axis=-1) # [N, raster_size, raster_size, 6/10]
+ batch_input_local = tf.concat([patch_photo, patch_canvas], axis=-1) # [N, raster_size, raster_size, 2/4]
+ batch_input_global = tf.concat([entire_photo_small, entire_canvas_small, cursor_info],
+ axis=-1) # [N, raster_size, raster_size, 4/6]
+
+ if self.hps.model_mode == 'train':
+ is_training = True
+ dropout_keep_prob = self.hps.pix_drop_kp
+ else:
+ is_training = False
+ dropout_keep_prob = 1.0
+
+ if self.hps.add_coordconv:
+ batch_input_combined = self.add_coords(batch_input_combined) # (N, in_H, in_W, in_dim + 2)
+ batch_input_local = self.add_coords(batch_input_local) # (N, in_H, in_W, in_dim + 2)
+ batch_input_global = self.add_coords(batch_input_global) # (N, in_H, in_W, in_dim + 2)
+
+ if 'combine' in self.hps.encoder_type:
+ if self.hps.encoder_type == 'combine33':
+ image_embedding, _ = generative_cnn_c3_encoder_combine33(batch_input_local, batch_input_global,
+ is_training, dropout_keep_prob) # (N, 128)
+ elif self.hps.encoder_type == 'combine43':
+ image_embedding, _ = generative_cnn_c3_encoder_combine43(batch_input_local, batch_input_global,
+ is_training, dropout_keep_prob) # (N, 128)
+ elif self.hps.encoder_type == 'combine53':
+ image_embedding, _ = generative_cnn_c3_encoder_combine53(batch_input_local, batch_input_global,
+ is_training, dropout_keep_prob) # (N, 128)
+ elif self.hps.encoder_type == 'combineFC':
+ image_embedding, _ = generative_cnn_c3_encoder_combineFC(batch_input_local, batch_input_global,
+ is_training, dropout_keep_prob) # (N, 256)
+ else:
+ raise Exception('Unknown encoder_type', self.hps.encoder_type)
+ else:
+ with tf.variable_scope('Combined_Encoder', reuse=tf.AUTO_REUSE):
+ if self.hps.encoder_type == 'conv10':
+ image_embedding, _ = generative_cnn_encoder(batch_input_combined, is_training, dropout_keep_prob) # (N, 128)
+ elif self.hps.encoder_type == 'conv10_deep':
+ image_embedding, _ = generative_cnn_encoder_deeper(batch_input_combined, is_training, dropout_keep_prob) # (N, 512)
+ elif self.hps.encoder_type == 'conv13':
+ image_embedding, _ = generative_cnn_encoder_deeper13(batch_input_combined, is_training, dropout_keep_prob) # (N, 128)
+ elif self.hps.encoder_type == 'conv10_c3':
+ image_embedding, _ = generative_cnn_c3_encoder(batch_input_combined, is_training, dropout_keep_prob) # (N, 128)
+ elif self.hps.encoder_type == 'conv10_deep_c3':
+ image_embedding, _ = generative_cnn_c3_encoder_deeper(batch_input_combined, is_training, dropout_keep_prob) # (N, 512)
+ elif self.hps.encoder_type == 'conv13_c3':
+ image_embedding, _ = generative_cnn_c3_encoder_deeper13(batch_input_combined, is_training, dropout_keep_prob) # (N, 128)
+ elif self.hps.encoder_type == 'conv13_c3_attn':
+ image_embedding, _ = generative_cnn_c3_encoder_deeper13_attn(batch_input_combined, is_training, dropout_keep_prob) # (N, 128)
+ else:
+ raise Exception('Unknown encoder_type', self.hps.encoder_type)
+ return image_embedding
+
+ def build_seq_decoder(self, dec_cell, actual_input_x, initial_state):
+ rnn_output, last_state = self.rnn_decoder(dec_cell, initial_state, actual_input_x)
+ rnn_output_flat = tf.reshape(rnn_output, [-1, self.hps.dec_rnn_size])
+
+ pen_n_out = 2
+ params_n_out = 6
+
+ with tf.variable_scope('DEC_RNN_out_pen', reuse=tf.AUTO_REUSE):
+ output_w_pen = tf.get_variable('output_w', [self.hps.dec_rnn_size, pen_n_out])
+ output_b_pen = tf.get_variable('output_b', [pen_n_out], initializer=tf.constant_initializer(0.0))
+ output_pen = tf.nn.xw_plus_b(rnn_output_flat, output_w_pen, output_b_pen) # (N, pen_n_out)
+
+ with tf.variable_scope('DEC_RNN_out_params', reuse=tf.AUTO_REUSE):
+ output_w_params = tf.get_variable('output_w', [self.hps.dec_rnn_size, params_n_out])
+ output_b_params = tf.get_variable('output_b', [params_n_out], initializer=tf.constant_initializer(0.0))
+ output_params = tf.nn.xw_plus_b(rnn_output_flat, output_w_params, output_b_params) # (N, params_n_out)
+
+ output = tf.concat([output_pen, output_params], axis=1) # (N, n_out)
+
+ return output, last_state
+
+ def get_mixture_coef(self, outputs):
+ z = outputs
+ z_pen_logits = z[:, 0:2] # (N, 2), pen states
+ z_other_params_logits = z[:, 2:] # (N, 6)
+
+ z_pen = tf.nn.softmax(z_pen_logits) # (N, 2)
+ if self.hps.position_format == 'abs':
+ x1y1 = tf.nn.sigmoid(z_other_params_logits[:, 0:2]) # (N, 2)
+ x2y2 = tf.tanh(z_other_params_logits[:, 2:4]) # (N, 2)
+ widths = tf.nn.sigmoid(z_other_params_logits[:, 4:5]) # (N, 1)
+ widths = tf.add(tf.multiply(widths, 1.0 - self.hps.min_width), self.hps.min_width)
+ scaling = tf.nn.sigmoid(z_other_params_logits[:, 5:6]) * self.hps.max_scaling # (N, 1), [0.0, max_scaling]
+ # scaling = tf.add(tf.multiply(scaling, (self.hps.max_scaling - self.hps.min_scaling) / self.hps.max_scaling),
+ # self.hps.min_scaling)
+ z_other_params = tf.concat([x1y1, x2y2, widths, scaling], axis=-1) # (N, 6)
+ else: # "rel"
+ raise Exception('Unknown position_format', self.hps.position_format)
+
+ r = [z_other_params, z_pen]
+ return r
+
+ ###########################
+
+ def get_decoder_inputs(self):
+ initial_state = self.dec_cell.zero_state(batch_size=self.hps.batch_size, dtype=tf.float32)
+ return initial_state
+
+ def rnn_decoder(self, dec_cell, initial_state, actual_input_x):
+ with tf.variable_scope("RNN_DEC", reuse=tf.AUTO_REUSE):
+ output, last_state = tf.nn.dynamic_rnn(
+ dec_cell,
+ actual_input_x,
+ initial_state=initial_state,
+ time_major=False,
+ swap_memory=True,
+ dtype=tf.float32)
+ return output, last_state
+
+ ###########################
+
+ def image_padding(self, ori_image, window_size, pad_value):
+ """
+        Pad the image with the background value on all four spatial sides.
+        :param ori_image: (N, H, W, k) image tensor
+        :param window_size: window_size // 2 pixels of padding are added to each side
+        :param pad_value: constant value used for the padding
+        :return: padded image, (N, H_p, W_p, k)
+ """
+ paddings = [[0, 0],
+ [window_size // 2, window_size // 2],
+ [window_size // 2, window_size // 2],
+ [0, 0]]
+ pad_img = tf.pad(ori_image, paddings=paddings, mode='CONSTANT', constant_values=pad_value) # (N, H_p, W_p, k)
+ return pad_img
+
+ def image_cropping_fn(self, fn_inputs):
+ """
+ crop the patch
+ :return:
+ """
+ index_offset = self.hps.input_channel - 1
+ input_image = fn_inputs[:, :, 0:2 + index_offset] # (image_size, image_size, -), [0.0-BG, 1.0-stroke]
+ cursor_pos = fn_inputs[0, 0, 2 + index_offset:4 + index_offset] # (2), in [0.0, 1.0)
+ image_size = fn_inputs[0, 0, 4 + index_offset] # (), float32
+ window_size = tf.cast(fn_inputs[0, 0, 5 + index_offset], tf.int32) # ()
+
+ input_img_reshape = tf.expand_dims(input_image, axis=0)
+ pad_img = self.image_padding(input_img_reshape, window_size, pad_value=0.0)
+
+ cursor_pos = tf.cast(tf.round(tf.multiply(cursor_pos, image_size)), dtype=tf.int32)
+ x0, x1 = cursor_pos[0], cursor_pos[0] + window_size # ()
+ y0, y1 = cursor_pos[1], cursor_pos[1] + window_size # ()
+ patch_image = pad_img[:, y0:y1, x0:x1, :] # (1, window_size, window_size, 2/4)
+
+ # resize to raster_size
+ patch_image_scaled = tf.image.resize_images(patch_image, (self.hps.raster_size, self.hps.raster_size),
+ method=tf.image.ResizeMethod.AREA)
+ patch_image_scaled = tf.squeeze(patch_image_scaled, axis=0)
+        # patch_image_scaled: (raster_size, raster_size, 2/4), [0.0-BG, 1.0-stroke]
+
+ return patch_image_scaled
+
+ def image_cropping(self, cursor_position, input_img, image_size, window_sizes):
+ """
+ :param cursor_position: (N, 1, 2), float type, in size [0.0, 1.0)
+ :param input_img: (N, image_size, image_size, 2/4), [0.0-BG, 1.0-stroke]
+ :param window_sizes: (N, 1, 1), float32, with grad
+ """
+ input_img_ = input_img
+ window_sizes_non_grad = tf.stop_gradient(tf.round(window_sizes)) # (N, 1, 1), no grad
+
+ cursor_position_ = tf.reshape(cursor_position, (-1, 1, 1, 2)) # (N, 1, 1, 2)
+ cursor_position_ = tf.tile(cursor_position_, [1, image_size, image_size, 1]) # (N, image_size, image_size, 2)
+
+ image_size_ = tf.reshape(tf.cast(image_size, tf.float32), (1, 1, 1, 1)) # (1, 1, 1, 1)
+ image_size_ = tf.tile(image_size_, [self.hps.batch_size, image_size, image_size, 1])
+
+ window_sizes_ = tf.reshape(window_sizes_non_grad, (-1, 1, 1, 1)) # (N, 1, 1, 1)
+ window_sizes_ = tf.tile(window_sizes_, [1, image_size, image_size, 1]) # (N, image_size, image_size, 1)
+
+ fn_inputs = tf.concat([input_img_, cursor_position_, image_size_, window_sizes_],
+ axis=-1) # (N, image_size, image_size, 2/4 + 4)
+ curr_patch_imgs = tf.map_fn(self.image_cropping_fn, fn_inputs, parallel_iterations=32) # (N, raster_size, raster_size, -)
+ return curr_patch_imgs
+
+ def image_cropping_v3(self, cursor_position, input_img, image_size, window_sizes):
+ """
+ :param cursor_position: (N, 1, 2), float type, in size [0.0, 1.0)
+ :param input_img: (N, image_size, image_size, k), [0.0-BG, 1.0-stroke]
+ :param window_sizes: (N, 1, 1), float32, with grad
+ """
+ window_sizes_non_grad = tf.stop_gradient(window_sizes) # (N, 1, 1), no grad
+
+ cursor_pos = tf.multiply(cursor_position, tf.cast(image_size, tf.float32))
+ print(cursor_pos)
+ cursor_x, cursor_y = tf.split(cursor_pos, 2, axis=-1) # (N, 1, 1)
+
+ y1 = cursor_y - (window_sizes_non_grad - 1.0) / 2
+ x1 = cursor_x - (window_sizes_non_grad - 1.0) / 2
+ y2 = y1 + (window_sizes_non_grad - 1.0)
+ x2 = x1 + (window_sizes_non_grad - 1.0)
+ boxes = tf.concat([y1, x1, y2, x2], axis=-1) # (N, 1, 4)
+ boxes = tf.squeeze(boxes, axis=1) # (N, 4)
+ boxes = boxes / tf.cast(image_size - 1, tf.float32)
+
+ box_ind = tf.ones_like(cursor_x)[:, 0, 0] # (N)
+ box_ind = tf.cast(box_ind, dtype=tf.int32)
+ box_ind = tf.cumsum(box_ind) - 1
+
+ curr_patch_imgs = tf.image.crop_and_resize(input_img, boxes, box_ind,
+ crop_size=[self.hps.raster_size, self.hps.raster_size])
+ # (N, raster_size, raster_size, k), [0.0-BG, 1.0-stroke]
+ return curr_patch_imgs
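+        # Worked numeric sketch of the crop box above (comments only; the values are
+        # assumed for illustration): with image_size = 256, raster_size = 128, a cursor
+        # at (0.5, 0.5) and a 128-px window, cursor_pos = 128 and
+        #   y1 = x1 = 128 - 63.5 = 64.5,   y2 = x2 = 64.5 + 127 = 191.5,
+        # so after dividing by (image_size - 1) = 255 the normalized box passed to
+        # tf.image.crop_and_resize is roughly [0.25, 0.25, 0.75, 0.75].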
+
+ def get_points_and_raster_image(self, image_size):
+ ## generate the other_params and pen_ras and raster image for raster loss
+ prev_state = self.initial_state # (N, dec_rnn_size * 3)
+
+ prev_width = self.init_width # (N)
+ prev_width = tf.expand_dims(tf.expand_dims(prev_width, axis=-1), axis=-1) # (N, 1, 1)
+
+ prev_scaling = self.init_scaling # (N)
+ prev_scaling = tf.reshape(prev_scaling, (-1, 1, 1)) # (N, 1, 1)
+
+ prev_window_size = self.init_window_size # (N)
+ prev_window_size = tf.reshape(prev_window_size, (-1, 1, 1)) # (N, 1, 1)
+
+ cursor_position_temp = self.init_cursor
+ self.cursor_position = cursor_position_temp # (N, 1, 2), in size [0.0, 1.0)
+ cursor_position_loop = self.cursor_position
+
+ other_params_list = []
+ pen_ras_list = []
+
+ curr_canvas_soft = tf.zeros_like(self.input_photo[:, :, :, 0]) # (N, image_size, image_size), [0.0-BG, 1.0-stroke]
+ curr_canvas_hard = tf.zeros_like(curr_canvas_soft) # [0.0-BG, 1.0-stroke]
+
+ #### sampling part - start ####
+ self.curr_canvas_hard = curr_canvas_hard
+
+ if self.hps.cropping_type == 'v3':
+ cropping_func = self.image_cropping_v3
+ # elif self.hps.cropping_type == 'v2':
+ # cropping_func = self.image_cropping
+ else:
+ raise Exception('Unknown cropping_type', self.hps.cropping_type)
+
+ for time_i in range(self.hps.max_seq_len):
+ cursor_position_non_grad = tf.stop_gradient(cursor_position_loop) # (N, 1, 2), in size [0.0, 1.0)
+
+ curr_window_size = tf.multiply(prev_scaling, tf.stop_gradient(prev_window_size)) # float, with grad
+ curr_window_size = tf.maximum(curr_window_size, tf.cast(self.hps.min_window_size, tf.float32))
+ curr_window_size = tf.minimum(curr_window_size, tf.cast(image_size, tf.float32))
+
+ ## patch-level encoding
+            # Stop the gradient from canvas_z back to curr_canvas_hard to avoid recurrent gradient propagation.
+ curr_canvas_hard_non_grad = tf.stop_gradient(self.curr_canvas_hard)
+ curr_canvas_hard_non_grad = tf.expand_dims(curr_canvas_hard_non_grad, axis=-1)
+
+ # input_photo: (N, image_size, image_size, 1/3), [0.0-stroke, 1.0-BG]
+ crop_inputs = tf.concat([1.0 - self.input_photo, curr_canvas_hard_non_grad], axis=-1) # (N, H_p, W_p, 1+1)
+
+ cropped_outputs = cropping_func(cursor_position_non_grad, crop_inputs, image_size, curr_window_size)
+ index_offset = self.hps.input_channel - 1
+ curr_patch_inputs = cropped_outputs[:, :, :, 0:1 + index_offset] # [0.0-BG, 1.0-stroke]
+ curr_patch_canvas_hard_non_grad = cropped_outputs[:, :, :, 1 + index_offset:2 + index_offset]
+ # (N, raster_size, raster_size, 1/3), [0.0-BG, 1.0-stroke]
+
+ curr_patch_inputs = 1.0 - curr_patch_inputs # [0.0-stroke, 1.0-BG]
+ curr_patch_inputs = self.normalize_image_m1to1(curr_patch_inputs)
+ # (N, raster_size, raster_size, 1/3), [-1.0-stroke, 1.0-BG]
+
+ # Normalizing image
+ curr_patch_canvas_hard_non_grad = 1.0 - curr_patch_canvas_hard_non_grad # [0.0-stroke, 1.0-BG]
+ curr_patch_canvas_hard_non_grad = self.normalize_image_m1to1(curr_patch_canvas_hard_non_grad) # [-1.0-stroke, 1.0-BG]
+
+ ## image-level encoding
+ combined_z = self.build_combined_encoder(
+ curr_patch_canvas_hard_non_grad,
+ curr_patch_inputs,
+ 1.0 - curr_canvas_hard_non_grad,
+ self.input_photo,
+ cursor_position_non_grad,
+ image_size,
+ curr_window_size) # (N, z_size)
+ combined_z = tf.expand_dims(combined_z, axis=1) # (N, 1, z_size)
+
+ curr_window_size_top_side_norm_non_grad = \
+ tf.stop_gradient(curr_window_size / tf.cast(image_size, tf.float32))
+ curr_window_size_bottom_side_norm_non_grad = \
+ tf.stop_gradient(curr_window_size / tf.cast(self.hps.min_window_size, tf.float32))
+ if not self.hps.concat_win_size:
+ combined_z = tf.concat([tf.stop_gradient(prev_width), combined_z], 2) # (N, 1, 2+z_size)
+ else:
+ combined_z = tf.concat([tf.stop_gradient(prev_width),
+ curr_window_size_top_side_norm_non_grad,
+ curr_window_size_bottom_side_norm_non_grad,
+ combined_z],
+ 2) # (N, 1, 2+z_size)
+
+ if self.hps.concat_cursor:
+ prev_input_x = tf.concat([cursor_position_non_grad, combined_z], 2) # (N, 1, 2+2+z_size)
+ else:
+ prev_input_x = combined_z # (N, 1, 2+z_size)
+
+ h_output, next_state = self.build_seq_decoder(self.dec_cell, prev_input_x, prev_state)
+ # h_output: (N * 1, n_out), next_state: (N, dec_rnn_size * 3)
+ [o_other_params, o_pen_ras] = self.get_mixture_coef(h_output)
+ # o_other_params: (N * 1, 6)
+ # o_pen_ras: (N * 1, 2), after softmax
+
+ o_other_params = tf.reshape(o_other_params, [-1, 1, 6]) # (N, 1, 6)
+ o_pen_ras_raw = tf.reshape(o_pen_ras, [-1, 1, 2]) # (N, 1, 2)
+
+ other_params_list.append(o_other_params)
+ pen_ras_list.append(o_pen_ras_raw)
+
+ #### sampling part - end ####
+
+ prev_state = next_state
+
+ other_params_ = tf.reshape(tf.concat(other_params_list, axis=1), [-1, 6]) # (N * max_seq_len, 6)
+ pen_ras_ = tf.reshape(tf.concat(pen_ras_list, axis=1), [-1, 2]) # (N * max_seq_len, 2)
+
+ return other_params_, pen_ras_, prev_state
+
+ def differentiable_argmax(self, input_pen, soft_beta):
+ """
+ Differentiable argmax trick.
+ :param input_pen: (N, n_class)
+ :return: pen_state: (N, 1)
+ """
+ def sign_onehot(x):
+ """
+ :param x: (N, n_class)
+ :return: (N, n_class)
+ """
+ y = tf.sign(tf.reduce_max(x, axis=-1, keepdims=True) - x)
+ y = (y - 1) * (-1)
+ return y
+
+ def softargmax(x, beta=1e2):
+ """
+ :param x: (N, n_class)
+ :param beta: 1e10 is the best. 1e2 is acceptable.
+ :return: (N)
+ """
+ x_range = tf.cumsum(tf.ones_like(x), axis=1) # (N, 2)
+ return tf.reduce_sum(tf.nn.softmax(x * beta) * x_range, axis=1) - 1
+
+ ## Better to use softargmax(beta=1e2). The sign_onehot's gradient is close to zero.
+ # pen_onehot = sign_onehot(input_pen) # one-hot form, (N * max_seq_len, 2)
+ # pen_state = pen_onehot[:, 1:2] # (N * max_seq_len, 1)
+ pen_state = softargmax(input_pen, soft_beta)
+ pen_state = tf.expand_dims(pen_state, axis=1) # (N * max_seq_len, 1)
+ return pen_state
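+
+
+# NumPy sketch of the soft-argmax trick used in differentiable_argmax above
+# (illustration only; not called by the model). softmax(beta * x) dotted with the
+# index range approaches the hard argmax index as beta grows, while remaining
+# differentiable with respect to x.
+def _softargmax_numpy_demo(beta=1e2):
+    import numpy as np
+    x = np.array([[0.2, 2.3], [1.7, 0.4]], dtype=np.float32)  # (N, n_class) logits
+    w = np.exp(beta * (x - x.max(axis=1, keepdims=True)))     # numerically stable softmax
+    w = w / w.sum(axis=1, keepdims=True)
+    soft_idx = (w * np.arange(x.shape[1])).sum(axis=1)        # approx. [1.0, 0.0]
+    return soft_idx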
diff --git a/hi-arm/qmupd_vs/model_common_train.py b/hi-arm/qmupd_vs/model_common_train.py
new file mode 100644
index 0000000000000000000000000000000000000000..a7c22b33f45a9cbd7e69c878866cb2fb6dd81f7c
--- /dev/null
+++ b/hi-arm/qmupd_vs/model_common_train.py
@@ -0,0 +1,1193 @@
+import rnn
+import tensorflow as tf
+
+from subnet_tf_utils import generative_cnn_encoder, generative_cnn_encoder_deeper, generative_cnn_encoder_deeper13, \
+ generative_cnn_c3_encoder, generative_cnn_c3_encoder_deeper, generative_cnn_c3_encoder_deeper13, \
+ generative_cnn_c3_encoder_combine33, generative_cnn_c3_encoder_combine43, \
+ generative_cnn_c3_encoder_combine53, generative_cnn_c3_encoder_combineFC, \
+ generative_cnn_c3_encoder_deeper13_attn
+from rasterization_utils.NeuralRenderer import NeuralRasterizorStep
+from vgg_utils.VGG16 import vgg_net_slim
+
+
+class VirtualSketchingModel(object):
+ def __init__(self, hps, gpu_mode=True, reuse=False):
+ """Initializer for the model.
+
+ Args:
+ hps: a HParams object containing model hyperparameters
+ gpu_mode: a boolean that when True, uses GPU mode.
+          reuse: a boolean that when True, attempts to reuse variables.
+ """
+ self.hps = hps
+ assert hps.model_mode in ['train', 'eval', 'eval_sample', 'sample']
+ # with tf.variable_scope('SCC', reuse=reuse):
+ if not gpu_mode:
+ with tf.device('/cpu:0'):
+ print('Model using cpu.')
+ self.build_model()
+ else:
+ print('-' * 100)
+ print('model_mode:', hps.model_mode)
+ print('Model using gpu.')
+ self.build_model()
+
+ def build_model(self):
+ """Define model architecture."""
+ self.config_model()
+
+ initial_state = self.get_decoder_inputs()
+ self.initial_state = initial_state
+ self.initial_state_list = tf.split(self.initial_state, self.total_loop, axis=0)
+
+ total_loss_list = []
+ ras_loss_list = []
+ perc_relu_raw_list = []
+ perc_relu_norm_list = []
+ sn_loss_list = []
+ cursor_outside_loss_list = []
+ win_size_outside_loss_list = []
+ early_state_loss_list = []
+
+ tower_grads = []
+
+ pred_raster_imgs_list = []
+ pred_raster_imgs_rgb_list = []
+
+ for t_i in range(self.total_loop):
+ gpu_idx = t_i // self.hps.loop_per_gpu
+ gpu_i = self.hps.gpus[gpu_idx]
+ print(self.hps.model_mode, 'model, gpu:', gpu_i, ', loop:', t_i % self.hps.loop_per_gpu)
+ with tf.device('/gpu:%d' % gpu_i):
+ with tf.name_scope('GPU_%d' % gpu_i) as scope:
+ if t_i > 0:
+ tf.get_variable_scope().reuse_variables()
+ else:
+ total_loss_list.clear()
+ ras_loss_list.clear()
+ perc_relu_raw_list.clear()
+ perc_relu_norm_list.clear()
+ sn_loss_list.clear()
+ cursor_outside_loss_list.clear()
+ win_size_outside_loss_list.clear()
+ early_state_loss_list.clear()
+ tower_grads.clear()
+ pred_raster_imgs_list.clear()
+ pred_raster_imgs_rgb_list.clear()
+
+ split_input_photo = self.input_photo_list[t_i]
+ split_image_size = self.image_size[t_i]
+ split_init_cursor = self.init_cursor_list[t_i]
+ split_initial_state = self.initial_state_list[t_i]
+ if self.hps.input_channel == 1:
+ split_target_sketch = split_input_photo
+ else:
+ split_target_sketch = self.target_sketch_list[t_i]
+
+ ## use pred as the prev points
+ other_params, pen_ras, final_state, pred_raster_images, pred_raster_images_rgb, \
+ pos_before_max_min, win_size_before_max_min \
+ = self.get_points_and_raster_image(split_initial_state, split_init_cursor, split_input_photo,
+ split_image_size)
+ # other_params: (N * max_seq_len, 6)
+ # pen_ras: (N * max_seq_len, 2), after softmax
+ # pos_before_max_min: (N, max_seq_len, 2), in image_size
+ # win_size_before_max_min: (N, max_seq_len, 1), in image_size
+
+ pred_raster_imgs = 1.0 - pred_raster_images # (N, image_size, image_size), [0.0-stroke, 1.0-BG]
+ pred_raster_imgs_rgb = 1.0 - pred_raster_images_rgb # (N, image_size, image_size, 3)
+ pred_raster_imgs_list.append(pred_raster_imgs)
+ pred_raster_imgs_rgb_list.append(pred_raster_imgs_rgb)
+
+ if not self.hps.use_softargmax:
+ pen_state_soft = pen_ras[:, 1:2] # (N * max_seq_len, 1)
+ else:
+ pen_state_soft = self.differentiable_argmax(pen_ras, self.hps.soft_beta) # (N * max_seq_len, 1)
+
+ pred_params = tf.concat([pen_state_soft, other_params], axis=1) # (N * max_seq_len, 7)
+ pred_params = tf.reshape(pred_params, shape=[-1, self.hps.max_seq_len, 7]) # (N, max_seq_len, 7)
+ # pred_params: (N, max_seq_len, 7)
+
+ if self.hps.model_mode == 'train' or self.hps.model_mode == 'eval':
+ raster_cost, sn_cost, cursor_outside_cost, winsize_outside_cost, \
+ early_pen_states_cost, \
+ perc_relu_loss_raw, perc_relu_loss_norm = \
+ self.build_losses(split_target_sketch, pred_raster_imgs, pred_params,
+ pos_before_max_min, win_size_before_max_min,
+ split_image_size)
+ # perc_relu_loss_raw, perc_relu_loss_norm: (n_layers)
+
+ ras_loss_list.append(raster_cost)
+ perc_relu_raw_list.append(perc_relu_loss_raw)
+ perc_relu_norm_list.append(perc_relu_loss_norm)
+ sn_loss_list.append(sn_cost)
+ cursor_outside_loss_list.append(cursor_outside_cost)
+ win_size_outside_loss_list.append(winsize_outside_cost)
+ early_state_loss_list.append(early_pen_states_cost)
+
+ if self.hps.model_mode == 'train':
+ total_cost_split, grads_and_vars_split = self.build_training_op_split(
+ raster_cost, sn_cost, cursor_outside_cost, winsize_outside_cost,
+ early_pen_states_cost)
+ total_loss_list.append(total_cost_split)
+ tower_grads.append(grads_and_vars_split)
+
+ self.raster_cost = tf.reduce_mean(tf.stack(ras_loss_list, axis=0))
+ self.perc_relu_losses_raw = tf.reduce_mean(tf.stack(perc_relu_raw_list, axis=0), axis=0) # (n_layers)
+ self.perc_relu_losses_norm = tf.reduce_mean(tf.stack(perc_relu_norm_list, axis=0), axis=0) # (n_layers)
+ self.stroke_num_cost = tf.reduce_mean(tf.stack(sn_loss_list, axis=0))
+ self.pos_outside_cost = tf.reduce_mean(tf.stack(cursor_outside_loss_list, axis=0))
+ self.win_size_outside_cost = tf.reduce_mean(tf.stack(win_size_outside_loss_list, axis=0))
+ self.early_pen_states_cost = tf.reduce_mean(tf.stack(early_state_loss_list, axis=0))
+ self.cost = tf.reduce_mean(tf.stack(total_loss_list, axis=0))
+
+ self.pred_raster_imgs = tf.concat(pred_raster_imgs_list, axis=0) # (N, image_size, image_size), [0.0-stroke, 1.0-BG]
+ self.pred_raster_imgs_rgb = tf.concat(pred_raster_imgs_rgb_list, axis=0) # (N, image_size, image_size, 3)
+
+ if self.hps.model_mode == 'train':
+ self.build_training_op(tower_grads)
+
+ def config_model(self):
+ if self.hps.model_mode == 'train':
+ self.global_step = tf.Variable(0, name='global_step', trainable=False)
+
+ if self.hps.dec_model == 'lstm':
+ dec_cell_fn = rnn.LSTMCell
+ elif self.hps.dec_model == 'layer_norm':
+ dec_cell_fn = rnn.LayerNormLSTMCell
+ elif self.hps.dec_model == 'hyper':
+ dec_cell_fn = rnn.HyperLSTMCell
+ else:
+ assert False, 'please choose a respectable cell'
+
+ use_recurrent_dropout = self.hps.use_recurrent_dropout
+ use_input_dropout = self.hps.use_input_dropout
+ use_output_dropout = self.hps.use_output_dropout
+
+ dec_cell = dec_cell_fn(
+ self.hps.dec_rnn_size,
+ use_recurrent_dropout=use_recurrent_dropout,
+ dropout_keep_prob=self.hps.recurrent_dropout_prob)
+
+ # dropout:
+ # print('Input dropout mode = %s.' % use_input_dropout)
+ # print('Output dropout mode = %s.' % use_output_dropout)
+ # print('Recurrent dropout mode = %s.' % use_recurrent_dropout)
+ if use_input_dropout:
+ print('Dropout to input w/ keep_prob = %4.4f.' % self.hps.input_dropout_prob)
+ dec_cell = tf.contrib.rnn.DropoutWrapper(
+ dec_cell, input_keep_prob=self.hps.input_dropout_prob)
+ if use_output_dropout:
+ print('Dropout to output w/ keep_prob = %4.4f.' % self.hps.output_dropout_prob)
+ dec_cell = tf.contrib.rnn.DropoutWrapper(
+ dec_cell, output_keep_prob=self.hps.output_dropout_prob)
+ self.dec_cell = dec_cell
+
+ self.total_loop = len(self.hps.gpus) * self.hps.loop_per_gpu
+
+ self.init_cursor = tf.placeholder(
+ dtype=tf.float32,
+ shape=[self.hps.batch_size, 1, 2]) # (N, 1, 2), in size [0.0, 1.0)
+ self.init_width = tf.placeholder(
+ dtype=tf.float32,
+ shape=[1]) # (1), in [0.0, 1.0]
+        self.image_size = tf.placeholder(dtype=tf.int32, shape=(self.total_loop)) # (total_loop)
+
+ self.init_cursor_list = tf.split(self.init_cursor, self.total_loop, axis=0)
+ self.input_photo_list = []
+ for loop_i in range(self.total_loop):
+ input_photo_i = tf.placeholder(dtype=tf.float32, shape=[None, None, None, self.hps.input_channel]) # [0.0-stroke, 1.0-BG]
+ self.input_photo_list.append(input_photo_i)
+
+ if self.hps.input_channel == 3:
+ self.target_sketch_list = []
+ for loop_i in range(self.total_loop):
+ target_sketch_i = tf.placeholder(dtype=tf.float32, shape=[None, None, None, 1]) # [0.0-stroke, 1.0-BG]
+ self.target_sketch_list.append(target_sketch_i)
+
+ if self.hps.model_mode == 'train' or self.hps.model_mode == 'eval':
+ self.stroke_num_loss_weight = tf.Variable(0.0, trainable=False)
+ self.early_pen_loss_start_idx = tf.Variable(0, dtype=tf.int32, trainable=False)
+ self.early_pen_loss_end_idx = tf.Variable(0, dtype=tf.int32, trainable=False)
+
+ if self.hps.model_mode == 'train':
+ self.perc_loss_mean_list = []
+ for loop_i in range(len(self.hps.perc_loss_layers)):
+ relu_loss_mean = tf.Variable(0.0, trainable=False)
+ self.perc_loss_mean_list.append(relu_loss_mean)
+ self.last_step_num = tf.Variable(0.0, trainable=False)
+
+ with tf.variable_scope('train_op', reuse=tf.AUTO_REUSE):
+ self.lr = tf.Variable(self.hps.learning_rate, trainable=False)
+ self.optimizer = tf.train.AdamOptimizer(self.lr)
+
+ ###########################
+
+ def normalize_image_m1to1(self, in_img_0to1):
+ norm_img_m1to1 = tf.multiply(in_img_0to1, 2.0)
+ norm_img_m1to1 = tf.subtract(norm_img_m1to1, 1.0)
+ return norm_img_m1to1
+
+ def add_coords(self, input_tensor):
+ batch_size_tensor = tf.shape(input_tensor)[0] # get N size
+
+ xx_ones = tf.ones([batch_size_tensor, self.hps.raster_size], dtype=tf.int32) # e.g. (N, raster_size)
+ xx_ones = tf.expand_dims(xx_ones, -1) # e.g. (N, raster_size, 1)
+ xx_range = tf.tile(tf.expand_dims(tf.range(self.hps.raster_size), 0),
+ [batch_size_tensor, 1]) # e.g. (N, raster_size)
+ xx_range = tf.expand_dims(xx_range, 1) # e.g. (N, 1, raster_size)
+
+ xx_channel = tf.matmul(xx_ones, xx_range) # e.g. (N, raster_size, raster_size)
+ xx_channel = tf.expand_dims(xx_channel, -1) # e.g. (N, raster_size, raster_size, 1)
+
+ yy_ones = tf.ones([batch_size_tensor, self.hps.raster_size], dtype=tf.int32) # e.g. (N, raster_size)
+ yy_ones = tf.expand_dims(yy_ones, 1) # e.g. (N, 1, raster_size)
+ yy_range = tf.tile(tf.expand_dims(tf.range(self.hps.raster_size), 0),
+ [batch_size_tensor, 1]) # (N, raster_size)
+ yy_range = tf.expand_dims(yy_range, -1) # e.g. (N, raster_size, 1)
+
+ yy_channel = tf.matmul(yy_range, yy_ones) # e.g. (N, raster_size, raster_size)
+ yy_channel = tf.expand_dims(yy_channel, -1) # e.g. (N, raster_size, raster_size, 1)
+
+ xx_channel = tf.cast(xx_channel, 'float32') / (self.hps.raster_size - 1)
+ yy_channel = tf.cast(yy_channel, 'float32') / (self.hps.raster_size - 1)
+ # xx_channel = xx_channel * 2 - 1 # [-1, 1]
+ # yy_channel = yy_channel * 2 - 1
+
+ ret = tf.concat([
+ input_tensor,
+ xx_channel,
+ yy_channel,
+ ], axis=-1) # e.g. (N, raster_size, raster_size, 4)
+
+ return ret
+
+ def build_combined_encoder(self, patch_canvas, patch_photo, entire_canvas, entire_photo, cursor_pos,
+ image_size, window_size):
+ """
+ :param patch_canvas: (N, raster_size, raster_size, 1), [-1.0-stroke, 1.0-BG]
+ :param patch_photo: (N, raster_size, raster_size, 1/3), [-1.0-stroke, 1.0-BG]
+ :param entire_canvas: (N, image_size, image_size, 1), [0.0-stroke, 1.0-BG]
+ :param entire_photo: (N, image_size, image_size, 1/3), [0.0-stroke, 1.0-BG]
+ :param cursor_pos: (N, 1, 2), in size [0.0, 1.0)
+ :param window_size: (N, 1, 1), float, in large size
+ :return:
+ """
+ if self.hps.resize_method == 'BILINEAR':
+ resize_method = tf.image.ResizeMethod.BILINEAR
+ elif self.hps.resize_method == 'NEAREST_NEIGHBOR':
+ resize_method = tf.image.ResizeMethod.NEAREST_NEIGHBOR
+ elif self.hps.resize_method == 'BICUBIC':
+ resize_method = tf.image.ResizeMethod.BICUBIC
+ elif self.hps.resize_method == 'AREA':
+ resize_method = tf.image.ResizeMethod.AREA
+ else:
+ raise Exception('unknown resize_method', self.hps.resize_method)
+
+ patch_photo = tf.stop_gradient(patch_photo)
+ patch_canvas = tf.stop_gradient(patch_canvas)
+ cursor_pos = tf.stop_gradient(cursor_pos)
+ window_size = tf.stop_gradient(window_size)
+
+ entire_photo_small = tf.stop_gradient(tf.image.resize_images(entire_photo,
+ (self.hps.raster_size, self.hps.raster_size),
+ method=resize_method))
+ entire_canvas_small = tf.stop_gradient(tf.image.resize_images(entire_canvas,
+ (self.hps.raster_size, self.hps.raster_size),
+ method=resize_method))
+ entire_photo_small = self.normalize_image_m1to1(entire_photo_small) # [-1.0-stroke, 1.0-BG]
+ entire_canvas_small = self.normalize_image_m1to1(entire_canvas_small) # [-1.0-stroke, 1.0-BG]
+
+ if self.hps.encode_cursor_type == 'value':
+ cursor_pos_norm = tf.expand_dims(cursor_pos, axis=1) # (N, 1, 1, 2)
+ cursor_pos_norm = tf.tile(cursor_pos_norm, [1, self.hps.raster_size, self.hps.raster_size, 1])
+ cursor_info = cursor_pos_norm
+ else:
+ raise Exception('Unknown encode_cursor_type', self.hps.encode_cursor_type)
+
+ batch_input_combined = tf.concat([patch_photo, patch_canvas, entire_photo_small, entire_canvas_small, cursor_info],
+ axis=-1) # [N, raster_size, raster_size, 6/10]
+ batch_input_local = tf.concat([patch_photo, patch_canvas], axis=-1) # [N, raster_size, raster_size, 2/4]
+ batch_input_global = tf.concat([entire_photo_small, entire_canvas_small, cursor_info],
+ axis=-1) # [N, raster_size, raster_size, 4/6]
+
+ if self.hps.model_mode == 'train':
+ is_training = True
+ dropout_keep_prob = self.hps.pix_drop_kp
+ else:
+ is_training = False
+ dropout_keep_prob = 1.0
+
+ if self.hps.add_coordconv:
+ batch_input_combined = self.add_coords(batch_input_combined) # (N, in_H, in_W, in_dim + 2)
+ batch_input_local = self.add_coords(batch_input_local) # (N, in_H, in_W, in_dim + 2)
+ batch_input_global = self.add_coords(batch_input_global) # (N, in_H, in_W, in_dim + 2)
+
+ if 'combine' in self.hps.encoder_type:
+ if self.hps.encoder_type == 'combine33':
+ image_embedding, _ = generative_cnn_c3_encoder_combine33(batch_input_local, batch_input_global,
+ is_training, dropout_keep_prob) # (N, 128)
+ elif self.hps.encoder_type == 'combine43':
+ image_embedding, _ = generative_cnn_c3_encoder_combine43(batch_input_local, batch_input_global,
+ is_training, dropout_keep_prob) # (N, 128)
+ elif self.hps.encoder_type == 'combine53':
+ image_embedding, _ = generative_cnn_c3_encoder_combine53(batch_input_local, batch_input_global,
+ is_training, dropout_keep_prob) # (N, 128)
+ elif self.hps.encoder_type == 'combineFC':
+ image_embedding, _ = generative_cnn_c3_encoder_combineFC(batch_input_local, batch_input_global,
+ is_training, dropout_keep_prob) # (N, 256)
+ else:
+ raise Exception('Unknown encoder_type', self.hps.encoder_type)
+ else:
+ with tf.variable_scope('Combined_Encoder', reuse=tf.AUTO_REUSE):
+ if self.hps.encoder_type == 'conv10':
+ image_embedding, _ = generative_cnn_encoder(batch_input_combined, is_training, dropout_keep_prob) # (N, 128)
+ elif self.hps.encoder_type == 'conv10_deep':
+ image_embedding, _ = generative_cnn_encoder_deeper(batch_input_combined, is_training, dropout_keep_prob) # (N, 512)
+ elif self.hps.encoder_type == 'conv13':
+ image_embedding, _ = generative_cnn_encoder_deeper13(batch_input_combined, is_training, dropout_keep_prob) # (N, 128)
+ elif self.hps.encoder_type == 'conv10_c3':
+ image_embedding, _ = generative_cnn_c3_encoder(batch_input_combined, is_training, dropout_keep_prob) # (N, 128)
+ elif self.hps.encoder_type == 'conv10_deep_c3':
+ image_embedding, _ = generative_cnn_c3_encoder_deeper(batch_input_combined, is_training, dropout_keep_prob) # (N, 512)
+ elif self.hps.encoder_type == 'conv13_c3':
+ image_embedding, _ = generative_cnn_c3_encoder_deeper13(batch_input_combined, is_training, dropout_keep_prob) # (N, 128)
+ elif self.hps.encoder_type == 'conv13_c3_attn':
+ image_embedding, _ = generative_cnn_c3_encoder_deeper13_attn(batch_input_combined, is_training, dropout_keep_prob) # (N, 128)
+ else:
+ raise Exception('Unknown encoder_type', self.hps.encoder_type)
+ return image_embedding
+
+ def build_seq_decoder(self, dec_cell, actual_input_x, initial_state):
+ rnn_output, last_state = self.rnn_decoder(dec_cell, initial_state, actual_input_x)
+ rnn_output_flat = tf.reshape(rnn_output, [-1, self.hps.dec_rnn_size])
+
+ pen_n_out = 2
+ params_n_out = 6
+
+ with tf.variable_scope('DEC_RNN_out_pen', reuse=tf.AUTO_REUSE):
+ output_w_pen = tf.get_variable('output_w', [self.hps.dec_rnn_size, pen_n_out])
+ output_b_pen = tf.get_variable('output_b', [pen_n_out], initializer=tf.constant_initializer(0.0))
+ output_pen = tf.nn.xw_plus_b(rnn_output_flat, output_w_pen, output_b_pen) # (N, pen_n_out)
+
+ with tf.variable_scope('DEC_RNN_out_params', reuse=tf.AUTO_REUSE):
+ output_w_params = tf.get_variable('output_w', [self.hps.dec_rnn_size, params_n_out])
+ output_b_params = tf.get_variable('output_b', [params_n_out], initializer=tf.constant_initializer(0.0))
+ output_params = tf.nn.xw_plus_b(rnn_output_flat, output_w_params, output_b_params) # (N, params_n_out)
+
+ output = tf.concat([output_pen, output_params], axis=1) # (N, n_out)
+
+ return output, last_state
+
+ def get_mixture_coef(self, outputs):
+ z = outputs
+ z_pen_logits = z[:, 0:2] # (N, 2), pen states
+ z_other_params_logits = z[:, 2:] # (N, 6)
+
+ z_pen = tf.nn.softmax(z_pen_logits) # (N, 2)
+ if self.hps.position_format == 'abs':
+ x1y1 = tf.nn.sigmoid(z_other_params_logits[:, 0:2]) # (N, 2)
+ x2y2 = tf.tanh(z_other_params_logits[:, 2:4]) # (N, 2)
+ widths = tf.nn.sigmoid(z_other_params_logits[:, 4:5]) # (N, 1)
+ widths = tf.add(tf.multiply(widths, 1.0 - self.hps.min_width), self.hps.min_width)
+ scaling = tf.nn.sigmoid(z_other_params_logits[:, 5:6]) * self.hps.max_scaling # (N, 1), [0.0, max_scaling]
+ # scaling = tf.add(tf.multiply(scaling, (self.hps.max_scaling - self.hps.min_scaling) / self.hps.max_scaling),
+ # self.hps.min_scaling)
+ z_other_params = tf.concat([x1y1, x2y2, widths, scaling], axis=-1) # (N, 6)
+ else: # "rel"
+ raise Exception('Unknown position_format', self.hps.position_format)
+
+ r = [z_other_params, z_pen]
+ return r
+
+ ###########################
+
+ def get_decoder_inputs(self):
+ initial_state = self.dec_cell.zero_state(batch_size=self.hps.batch_size, dtype=tf.float32)
+ return initial_state
+
+ def rnn_decoder(self, dec_cell, initial_state, actual_input_x):
+ with tf.variable_scope("RNN_DEC", reuse=tf.AUTO_REUSE):
+ output, last_state = tf.nn.dynamic_rnn(
+ dec_cell,
+ actual_input_x,
+ initial_state=initial_state,
+ time_major=False,
+ swap_memory=True,
+ dtype=tf.float32)
+ return output, last_state
+
+ ###########################
+
+ def image_padding(self, ori_image, window_size, pad_value):
+ """
+        Pad the image with the background value on all four spatial sides.
+        :param ori_image: (N, H, W, k) image tensor
+        :param window_size: window_size // 2 pixels of padding are added to each side
+        :param pad_value: constant value used for the padding
+        :return: padded image, (N, H_p, W_p, k)
+ """
+ paddings = [[0, 0],
+ [window_size // 2, window_size // 2],
+ [window_size // 2, window_size // 2],
+ [0, 0]]
+ pad_img = tf.pad(ori_image, paddings=paddings, mode='CONSTANT', constant_values=pad_value) # (N, H_p, W_p, k)
+ return pad_img
+
+ def image_cropping_fn(self, fn_inputs):
+ """
+ crop the patch
+ :return:
+ """
+ index_offset = self.hps.input_channel - 1
+ input_image = fn_inputs[:, :, 0:2 + index_offset] # (image_size, image_size, 2), [0.0-BG, 1.0-stroke]
+ cursor_pos = fn_inputs[0, 0, 2 + index_offset:4 + index_offset] # (2), in [0.0, 1.0)
+ image_size = fn_inputs[0, 0, 4 + index_offset] # (), float32
+ window_size = tf.cast(fn_inputs[0, 0, 5 + index_offset], tf.int32) # ()
+
+ input_img_reshape = tf.expand_dims(input_image, axis=0)
+ pad_img = self.image_padding(input_img_reshape, window_size, pad_value=0.0)
+
+ cursor_pos = tf.cast(tf.round(tf.multiply(cursor_pos, image_size)), dtype=tf.int32)
+ x0, x1 = cursor_pos[0], cursor_pos[0] + window_size # ()
+ y0, y1 = cursor_pos[1], cursor_pos[1] + window_size # ()
+ patch_image = pad_img[:, y0:y1, x0:x1, :] # (1, window_size, window_size, 2/4)
+
+ # resize to raster_size
+ patch_image_scaled = tf.image.resize_images(patch_image, (self.hps.raster_size, self.hps.raster_size),
+ method=tf.image.ResizeMethod.AREA)
+ patch_image_scaled = tf.squeeze(patch_image_scaled, axis=0)
+        # patch_image_scaled: (raster_size, raster_size, 2/4), [0.0-BG, 1.0-stroke]
+
+ return patch_image_scaled
+
+ def image_cropping(self, cursor_position, input_img, image_size, window_sizes):
+ """
+ :param cursor_position: (N, 1, 2), float type, in size [0.0, 1.0)
+ :param input_img: (N, image_size, image_size, 2/4), [0.0-BG, 1.0-stroke]
+ :param window_sizes: (N, 1, 1), float32, with grad
+ """
+ input_img_ = input_img
+ window_sizes_non_grad = tf.stop_gradient(tf.round(window_sizes)) # (N, 1, 1), no grad
+
+ cursor_position_ = tf.reshape(cursor_position, (-1, 1, 1, 2)) # (N, 1, 1, 2)
+ cursor_position_ = tf.tile(cursor_position_, [1, image_size, image_size, 1]) # (N, image_size, image_size, 2)
+
+ image_size_ = tf.reshape(tf.cast(image_size, tf.float32), (1, 1, 1, 1)) # (1, 1, 1, 1)
+ image_size_ = tf.tile(image_size_, [self.hps.batch_size // self.total_loop, image_size, image_size, 1])
+
+ window_sizes_ = tf.reshape(window_sizes_non_grad, (-1, 1, 1, 1)) # (N, 1, 1, 1)
+ window_sizes_ = tf.tile(window_sizes_, [1, image_size, image_size, 1]) # (N, image_size, image_size, 1)
+
+ fn_inputs = tf.concat([input_img_, cursor_position_, image_size_, window_sizes_],
+ axis=-1) # (N, image_size, image_size, 2/4 + 4)
+ curr_patch_imgs = tf.map_fn(self.image_cropping_fn, fn_inputs, parallel_iterations=32) # (N, raster_size, raster_size, -)
+ return curr_patch_imgs
+
+ def image_cropping_v3(self, cursor_position, input_img, image_size, window_sizes):
+ """
+ :param cursor_position: (N, 1, 2), float type, in size [0.0, 1.0)
+ :param input_img: (N, image_size, image_size, k), [0.0-BG, 1.0-stroke]
+ :param window_sizes: (N, 1, 1), float32, with grad
+ """
+ window_sizes_non_grad = tf.stop_gradient(window_sizes) # (N, 1, 1), no grad
+
+ cursor_pos = tf.multiply(cursor_position, tf.cast(image_size, tf.float32))
+ cursor_x, cursor_y = tf.split(cursor_pos, 2, axis=-1) # (N, 1, 1)
+
+ y1 = cursor_y - (window_sizes_non_grad - 1.0) / 2
+ x1 = cursor_x - (window_sizes_non_grad - 1.0) / 2
+ y2 = y1 + (window_sizes_non_grad - 1.0)
+ x2 = x1 + (window_sizes_non_grad - 1.0)
+ boxes = tf.concat([y1, x1, y2, x2], axis=-1) # (N, 1, 4)
+ boxes = tf.squeeze(boxes, axis=1) # (N, 4)
+ boxes = boxes / tf.cast(image_size - 1, tf.float32)
+
+ box_ind = tf.ones_like(cursor_x)[:, 0, 0] # (N)
+ box_ind = tf.cast(box_ind, dtype=tf.int32)
+ box_ind = tf.cumsum(box_ind) - 1
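+        # box_ind enumerates the batch ([0, 1, ..., N-1]) so each box is cropped from its own image.
+        # boxes are in crop_and_resize's normalized coordinates; regions that fall outside the image are
+        # filled with the default extrapolation value 0.0, which matches the background convention here.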
+
+ curr_patch_imgs = tf.image.crop_and_resize(input_img, boxes, box_ind,
+ crop_size=[self.hps.raster_size, self.hps.raster_size])
+ # (N, raster_size, raster_size, k), [0.0-BG, 1.0-stroke]
+ return curr_patch_imgs
+
+ def get_pixel_value(self, img, x, y):
+ """
+ Utility function to get pixel value for coordinate vectors x and y from a 4D tensor image.
+
+ Input
+ -----
+ - img: tensor of shape (B, H, W, C)
+ - x: flattened tensor of shape (B, H', W')
+ - y: flattened tensor of shape (B, H', W')
+
+ Returns
+ -------
+ - output: tensor of shape (B, H', W', C)
+ """
+ shape = tf.shape(x)
+ batch_size = shape[0]
+ height = shape[1]
+ width = shape[2]
+
+ batch_idx = tf.range(0, batch_size)
+ batch_idx = tf.reshape(batch_idx, (batch_size, 1, 1))
+ b = tf.tile(batch_idx, (1, height, width))
+
+ indices = tf.stack([b, y, x], 3)
+
+ return tf.gather_nd(img, indices)
+
+ def image_pasting_nondiff_single(self, fn_inputs):
+ patch_image = fn_inputs[:, :, 0:1] # (raster_size, raster_size, 1), [0.0-BG, 1.0-stroke]
+ cursor_pos = fn_inputs[0, 0, 1:3] # (2), in large size
+ image_size = tf.cast(fn_inputs[0, 0, 3], tf.int32) # ()
+ window_size = tf.cast(fn_inputs[0, 0, 4], tf.int32) # ()
+
+ patch_image_scaled = tf.expand_dims(patch_image, axis=0) # (1, raster_size, raster_size, 1)
+ patch_image_scaled = tf.image.resize_images(patch_image_scaled, (window_size, window_size),
+ method=tf.image.ResizeMethod.BILINEAR)
+ patch_image_scaled = tf.squeeze(patch_image_scaled, axis=0)
+        # patch_image_scaled: (window_size, window_size, 1)
+
+ cursor_pos = tf.cast(tf.round(cursor_pos), dtype=tf.int32) # (2)
+ cursor_x, cursor_y = cursor_pos[0], cursor_pos[1]
+
+ pad_up = cursor_y
+ pad_down = image_size - cursor_y
+ pad_left = cursor_x
+ pad_right = image_size - cursor_x
+
+ paddings = [[pad_up, pad_down],
+ [pad_left, pad_right],
+ [0, 0]]
+ pad_img = tf.pad(patch_image_scaled, paddings=paddings, mode='CONSTANT',
+ constant_values=0.0) # (H_p, W_p, 1), [0.0-BG, 1.0-stroke]
+
+ crop_start = window_size // 2
+ pasted_image = pad_img[crop_start: crop_start + image_size, crop_start: crop_start + image_size, :]
+ return pasted_image
+
+ def image_pasting_diff_single(self, fn_inputs):
+ patch_canvas = fn_inputs[:, :, 0:1] # (raster_size, raster_size, 1), [0.0-BG, 1.0-stroke]
+ cursor_pos = fn_inputs[0, 0, 1:3] # (2), in large size
+ image_size = tf.cast(fn_inputs[0, 0, 3], tf.int32) # ()
+ window_size = tf.cast(fn_inputs[0, 0, 4], tf.int32) # ()
+ cursor_x, cursor_y = cursor_pos[0], cursor_pos[1]
+
+ patch_canvas_scaled = tf.expand_dims(patch_canvas, axis=0) # (1, raster_size, raster_size, 1)
+ patch_canvas_scaled = tf.image.resize_images(patch_canvas_scaled, (window_size, window_size),
+ method=tf.image.ResizeMethod.BILINEAR)
+ # patch_canvas_scaled: (1, window_size, window_size, 1)
+
+ valid_canvas = self.image_pasting_diff_batch(patch_canvas_scaled,
+ tf.expand_dims(tf.expand_dims(cursor_pos, axis=0), axis=0),
+ window_size)
+ valid_canvas = tf.squeeze(valid_canvas, axis=0)
+ # (window_size + 1, window_size + 1, 1)
+
+ pad_up = tf.cast(tf.floor(cursor_y), tf.int32)
+ pad_down = image_size - 1 - tf.cast(tf.floor(cursor_y), tf.int32)
+ pad_left = tf.cast(tf.floor(cursor_x), tf.int32)
+ pad_right = image_size - 1 - tf.cast(tf.floor(cursor_x), tf.int32)
+
+ paddings = [[pad_up, pad_down],
+ [pad_left, pad_right],
+ [0, 0]]
+ pad_img = tf.pad(valid_canvas, paddings=paddings, mode='CONSTANT',
+ constant_values=0.0) # (H_p, W_p, 1), [0.0-BG, 1.0-stroke]
+
+ crop_start = window_size // 2
+ pasted_image = pad_img[crop_start: crop_start + image_size, crop_start: crop_start + image_size, :]
+ return pasted_image
+
+ def image_pasting_diff_single_v3(self, fn_inputs):
+ patch_canvas = fn_inputs[:, :, 0:1] # (raster_size, raster_size, 1), [0.0-BG, 1.0-stroke]
+ cursor_pos_a = fn_inputs[0, 0, 1:3] # (2), float32, in large size
+ image_size_a = tf.cast(fn_inputs[0, 0, 3], tf.int32) # ()
+ window_size_a = fn_inputs[0, 0, 4] # (), float32, with grad
+ raster_size_a = float(self.hps.raster_size)
+
+ padding_size = tf.cast(tf.ceil(window_size_a / 2.0), tf.int32)
+
+ x1y1_a = cursor_pos_a - window_size_a / 2.0 # (2), float32
+ x2y2_a = cursor_pos_a + window_size_a / 2.0 # (2), float32
+
+ x1y1_a_floor = tf.floor(x1y1_a) # (2)
+ x2y2_a_ceil = tf.ceil(x2y2_a) # (2)
+
+ cursor_pos_b_oricoord = (x1y1_a_floor + x2y2_a_ceil) / 2.0 # (2)
+ cursor_pos_b = (cursor_pos_b_oricoord - x1y1_a) / window_size_a * raster_size_a # (2)
+ raster_size_b = (x2y2_a_ceil - x1y1_a_floor) # (x, y)
+ image_size_b = raster_size_a
+ window_size_b = raster_size_a * (raster_size_b / window_size_a) # (x, y)
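+        # Two coordinate frames are used here: "_a" values live in the large output image, "_b" values in
+        # the raster_size patch. The window is expanded to its integer-aligned bounding box
+        # [x1y1_a_floor, x2y2_a_ceil] so it can be pasted with integer padding below, while boxes_b keeps
+        # the gradient with respect to window_size_a.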
+
+ cursor_b_x, cursor_b_y = tf.split(cursor_pos_b, 2, axis=-1) # (1)
+
+ y1_b = cursor_b_y - (window_size_b[1] - 1.) / 2.
+ x1_b = cursor_b_x - (window_size_b[0] - 1.) / 2.
+ y2_b = y1_b + (window_size_b[1] - 1.)
+ x2_b = x1_b + (window_size_b[0] - 1.)
+ boxes_b = tf.concat([y1_b, x1_b, y2_b, x2_b], axis=-1) # (4)
+ boxes_b = boxes_b / tf.cast(image_size_b - 1, tf.float32) # with grad to window_size_a
+
+ box_ind_b = tf.ones((1), dtype=tf.int32) # (1)
+ box_ind_b = tf.cumsum(box_ind_b) - 1
+
+ patch_canvas = tf.expand_dims(patch_canvas, axis=0) # (1, raster_size, raster_size, 1), [0.0-BG, 1.0-stroke]
+ boxes_b = tf.expand_dims(boxes_b, axis=0) # (1, 4)
+
+ valid_canvas = tf.image.crop_and_resize(patch_canvas, boxes_b, box_ind_b,
+ crop_size=[raster_size_b[1], raster_size_b[0]])
+ valid_canvas = valid_canvas[0] # (raster_size_b, raster_size_b, 1)
+
+ pad_up = tf.cast(x1y1_a_floor[1], tf.int32) + padding_size
+ pad_down = image_size_a + padding_size - tf.cast(x2y2_a_ceil[1], tf.int32)
+ pad_left = tf.cast(x1y1_a_floor[0], tf.int32) + padding_size
+ pad_right = image_size_a + padding_size - tf.cast(x2y2_a_ceil[0], tf.int32)
+
+ paddings = [[pad_up, pad_down],
+ [pad_left, pad_right],
+ [0, 0]]
+ pad_img = tf.pad(valid_canvas, paddings=paddings, mode='CONSTANT',
+ constant_values=0.0) # (H_p, W_p, 1), [0.0-BG, 1.0-stroke]
+
+ pasted_image = pad_img[padding_size: padding_size + image_size_a, padding_size: padding_size + image_size_a, :]
+ return pasted_image
+
+ def image_pasting_diff_batch(self, patch_image, cursor_position, window_size):
+ """
+ :param patch_img: (N, window_size, window_size, 1), [0.0-BG, 1.0-stroke]
+ :param cursor_position: (N, 1, 2), in large size
+ :return:
+ """
+ paddings1 = [[0, 0],
+ [1, 1],
+ [1, 1],
+ [0, 0]]
+ patch_image_pad1 = tf.pad(patch_image, paddings=paddings1, mode='CONSTANT',
+ constant_values=0.0) # (N, window_size+2, window_size+2, 1), [0.0-BG, 1.0-stroke]
+
+ cursor_x, cursor_y = cursor_position[:, :, 0:1], cursor_position[:, :, 1:2] # (N, 1, 1)
+ cursor_x_f, cursor_y_f = tf.floor(cursor_x), tf.floor(cursor_y)
+ patch_x, patch_y = 1.0 - (cursor_x - cursor_x_f), 1.0 - (cursor_y - cursor_y_f) # (N, 1, 1)
+
+ x_ones = tf.ones_like(patch_x, dtype=tf.float32) # (N, 1, 1)
+ x_ones = tf.tile(x_ones, [1, 1, window_size]) # (N, 1, window_size)
+ patch_x = tf.concat([patch_x, x_ones], axis=-1) # (N, 1, window_size + 1)
+ patch_x = tf.tile(patch_x, [1, window_size + 1, 1]) # (N, window_size + 1, window_size + 1)
+ patch_x = tf.cumsum(patch_x, axis=-1) # (N, window_size + 1, window_size + 1)
+ patch_x0 = tf.cast(tf.floor(patch_x), tf.int32) # (N, window_size + 1, window_size + 1)
+ patch_x1 = patch_x0 + 1 # (N, window_size + 1, window_size + 1)
+
+ y_ones = tf.ones_like(patch_y, dtype=tf.float32) # (N, 1, 1)
+ y_ones = tf.tile(y_ones, [1, window_size, 1]) # (N, window_size, 1)
+ patch_y = tf.concat([patch_y, y_ones], axis=1) # (N, window_size + 1, 1)
+ patch_y = tf.tile(patch_y, [1, 1, window_size + 1]) # (N, window_size + 1, window_size + 1)
+ patch_y = tf.cumsum(patch_y, axis=1) # (N, window_size + 1, window_size + 1)
+ patch_y0 = tf.cast(tf.floor(patch_y), tf.int32) # (N, window_size + 1, window_size + 1)
+ patch_y1 = patch_y0 + 1 # (N, window_size + 1, window_size + 1)
+
+ # get pixel value at corner coords
+ valid_canvas_patch_a = self.get_pixel_value(patch_image_pad1, patch_x0, patch_y0)
+ valid_canvas_patch_b = self.get_pixel_value(patch_image_pad1, patch_x0, patch_y1)
+ valid_canvas_patch_c = self.get_pixel_value(patch_image_pad1, patch_x1, patch_y0)
+ valid_canvas_patch_d = self.get_pixel_value(patch_image_pad1, patch_x1, patch_y1)
+ # (N, window_size + 1, window_size + 1, 1)
+
+ patch_x0 = tf.cast(patch_x0, tf.float32)
+ patch_x1 = tf.cast(patch_x1, tf.float32)
+ patch_y0 = tf.cast(patch_y0, tf.float32)
+ patch_y1 = tf.cast(patch_y1, tf.float32)
+
+ # calculate deltas
+ wa = (patch_x1 - patch_x) * (patch_y1 - patch_y)
+ wb = (patch_x1 - patch_x) * (patch_y - patch_y0)
+ wc = (patch_x - patch_x0) * (patch_y1 - patch_y)
+ wd = (patch_x - patch_x0) * (patch_y - patch_y0)
+ # (N, window_size + 1, window_size + 1)
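+        # Standard bilinear weights: wa + wb + wc + wd == 1 at every pixel, so the output is an exact
+        # sub-pixel translation of the patch by the fractional part of the cursor position.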
+
+ # add dimension for addition
+ wa = tf.expand_dims(wa, axis=3)
+ wb = tf.expand_dims(wb, axis=3)
+ wc = tf.expand_dims(wc, axis=3)
+ wd = tf.expand_dims(wd, axis=3)
+ # (N, window_size + 1, window_size + 1, 1)
+
+ # compute output
+ valid_canvas_patch_ = tf.add_n([wa * valid_canvas_patch_a,
+ wb * valid_canvas_patch_b,
+ wc * valid_canvas_patch_c,
+ wd * valid_canvas_patch_d]) # (N, window_size + 1, window_size + 1, 1)
+ return valid_canvas_patch_
+
+ def image_pasting(self, cursor_position_norm, patch_img, image_size, window_sizes, is_differentiable=False):
+ """
+ paste the patch_img to padded size based on cursor_position
+ :param cursor_position_norm: (N, 1, 2), float type, in size [0.0, 1.0)
+ :param patch_img: (N, raster_size, raster_size), [0.0-BG, 1.0-stroke]
+ :param window_sizes: (N, 1, 1), float32, with grad
+ :return:
+ """
+ cursor_position = tf.multiply(cursor_position_norm, tf.cast(image_size, tf.float32)) # in large size
+ window_sizes_r = tf.round(window_sizes) # (N, 1, 1), no grad
+
+ patch_img_ = tf.expand_dims(patch_img, axis=-1) # (N, raster_size, raster_size, 1)
+ cursor_position_step = tf.reshape(cursor_position, (-1, 1, 1, 2)) # (N, 1, 1, 2)
+ cursor_position_step = tf.tile(cursor_position_step, [1, self.hps.raster_size, self.hps.raster_size,
+ 1]) # (N, raster_size, raster_size, 2)
+ image_size_tile = tf.reshape(tf.cast(image_size, tf.float32), (1, 1, 1, 1)) # (N, 1, 1, 1)
+ image_size_tile = tf.tile(image_size_tile, [self.hps.batch_size // self.total_loop, self.hps.raster_size,
+ self.hps.raster_size, 1])
+ window_sizes_tile = tf.reshape(window_sizes_r, (-1, 1, 1, 1)) # (N, 1, 1, 1)
+ window_sizes_tile = tf.tile(window_sizes_tile, [1, self.hps.raster_size, self.hps.raster_size, 1])
+
+ pasting_inputs = tf.concat([patch_img_, cursor_position_step, image_size_tile, window_sizes_tile],
+ axis=-1) # (N, raster_size, raster_size, 5)
+
+ if is_differentiable:
+ curr_paste_imgs = tf.map_fn(self.image_pasting_diff_single, pasting_inputs,
+ parallel_iterations=32) # (N, image_size, image_size, 1)
+ else:
+ curr_paste_imgs = tf.map_fn(self.image_pasting_nondiff_single, pasting_inputs,
+ parallel_iterations=32) # (N, image_size, image_size, 1)
+ curr_paste_imgs = tf.squeeze(curr_paste_imgs, axis=-1) # (N, image_size, image_size)
+ return curr_paste_imgs
+
+ def image_pasting_v3(self, cursor_position_norm, patch_img, image_size, window_sizes, is_differentiable=False):
+ """
+ paste the patch_img to padded size based on cursor_position
+ :param cursor_position_norm: (N, 1, 2), float type, in size [0.0, 1.0)
+ :param patch_img: (N, raster_size, raster_size), [0.0-BG, 1.0-stroke]
+ :param window_sizes: (N, 1, 1), float32, with grad
+ :return:
+ """
+ cursor_position = tf.multiply(cursor_position_norm, tf.cast(image_size, tf.float32)) # in large size
+
+ if is_differentiable:
+ patch_img_ = tf.expand_dims(patch_img, axis=-1) # (N, raster_size, raster_size, 1)
+ cursor_position_step = tf.reshape(cursor_position, (-1, 1, 1, 2)) # (N, 1, 1, 2)
+ cursor_position_step = tf.tile(cursor_position_step, [1, self.hps.raster_size, self.hps.raster_size,
+ 1]) # (N, raster_size, raster_size, 2)
+ image_size_tile = tf.reshape(tf.cast(image_size, tf.float32), (1, 1, 1, 1)) # (N, 1, 1, 1)
+ image_size_tile = tf.tile(image_size_tile, [self.hps.batch_size // self.total_loop, self.hps.raster_size,
+ self.hps.raster_size, 1])
+ window_sizes_tile = tf.reshape(window_sizes, (-1, 1, 1, 1)) # (N, 1, 1, 1)
+ window_sizes_tile = tf.tile(window_sizes_tile, [1, self.hps.raster_size, self.hps.raster_size, 1])
+
+ pasting_inputs = tf.concat([patch_img_, cursor_position_step, image_size_tile, window_sizes_tile],
+ axis=-1) # (N, raster_size, raster_size, 5)
+ curr_paste_imgs = tf.map_fn(self.image_pasting_diff_single_v3, pasting_inputs,
+ parallel_iterations=32) # (N, image_size, image_size, 1)
+ else:
+ raise Exception('Unfinished...')
+ curr_paste_imgs = tf.squeeze(curr_paste_imgs, axis=-1) # (N, image_size, image_size)
+ return curr_paste_imgs
+
+ def get_points_and_raster_image(self, initial_state, init_cursor, input_photo, image_size):
+ ## generate the other_params and pen_ras and raster image for raster loss
+ prev_state = initial_state # (N, dec_rnn_size * 3)
+
+ prev_width = self.init_width # (1)
+ prev_width = tf.expand_dims(tf.expand_dims(prev_width, axis=0), axis=0) # (1, 1, 1)
+ prev_width = tf.tile(prev_width, [self.hps.batch_size // self.total_loop, 1, 1]) # (N, 1, 1)
+
+ prev_scaling = tf.ones((self.hps.batch_size // self.total_loop, 1, 1)) # (N, 1, 1)
+ prev_window_size = tf.ones((self.hps.batch_size // self.total_loop, 1, 1),
+ dtype=tf.float32) * float(self.hps.raster_size) # (N, 1, 1)
+
+ cursor_position_temp = init_cursor
+ self.cursor_position = cursor_position_temp # (N, 1, 2), in size [0.0, 1.0)
+ cursor_position_loop = self.cursor_position
+
+ other_params_list = []
+ pen_ras_list = []
+
+ pos_before_max_min_list = []
+ win_size_before_max_min_list = []
+
+ curr_canvas_soft = tf.zeros_like(input_photo[:, :, :, 0]) # (N, image_size, image_size), [0.0-BG, 1.0-stroke]
+ curr_canvas_soft_rgb = tf.tile(tf.zeros_like(input_photo[:, :, :, 0:1]), [1, 1, 1, 3]) # (N, image_size, image_size, 3), [0.0-BG, 1.0-stroke]
+ curr_canvas_hard = tf.zeros_like(curr_canvas_soft) # [0.0-BG, 1.0-stroke]
+
+ #### sampling part - start ####
+ self.curr_canvas_hard = curr_canvas_hard
+
+ rasterizor_st = NeuralRasterizorStep(
+ raster_size=self.hps.raster_size,
+ position_format=self.hps.position_format)
+
+ if self.hps.cropping_type == 'v3':
+ cropping_func = self.image_cropping_v3
+ # elif self.hps.cropping_type == 'v2':
+ # cropping_func = self.image_cropping
+ else:
+ raise Exception('Unknown cropping_type', self.hps.cropping_type)
+
+ if self.hps.pasting_type == 'v3':
+ pasting_func = self.image_pasting_v3
+ # elif self.hps.pasting_type == 'v2':
+ # pasting_func = self.image_pasting
+ else:
+ raise Exception('Unknown pasting_type', self.hps.pasting_type)
+
+ for time_i in range(self.hps.max_seq_len):
+ cursor_position_non_grad = tf.stop_gradient(cursor_position_loop) # (N, 1, 2), in size [0.0, 1.0)
+
+ curr_window_size = tf.multiply(prev_scaling, tf.stop_gradient(prev_window_size)) # float, with grad
+ curr_window_size = tf.maximum(curr_window_size, tf.cast(self.hps.min_window_size, tf.float32))
+ curr_window_size = tf.minimum(curr_window_size, tf.cast(image_size, tf.float32))
+
+ ## patch-level encoding
+            # Stop the gradient into curr_canvas_hard so that no recurrent gradient propagates through the accumulated canvas.
+ curr_canvas_hard_non_grad = tf.stop_gradient(self.curr_canvas_hard)
+ curr_canvas_hard_non_grad = tf.expand_dims(curr_canvas_hard_non_grad, axis=-1)
+
+ # input_photo: (N, image_size, image_size, 1/3), [0.0-stroke, 1.0-BG]
+ crop_inputs = tf.concat([1.0 - input_photo, curr_canvas_hard_non_grad], axis=-1) # (N, H_p, W_p, 1/3+1)
+
+ cropped_outputs = cropping_func(cursor_position_non_grad, crop_inputs, image_size, curr_window_size)
+ index_offset = self.hps.input_channel - 1
+ curr_patch_inputs = cropped_outputs[:, :, :, 0:1 + index_offset] # [0.0-BG, 1.0-stroke]
+ curr_patch_canvas_hard_non_grad = cropped_outputs[:, :, :, 1 + index_offset:2 + index_offset]
+ # (N, raster_size, raster_size, 1), [0.0-BG, 1.0-stroke]
+
+ curr_patch_inputs = 1.0 - curr_patch_inputs # [0.0-stroke, 1.0-BG]
+ curr_patch_inputs = self.normalize_image_m1to1(curr_patch_inputs)
+ # (N, raster_size, raster_size, 1/3), [-1.0-stroke, 1.0-BG]
+
+ # Normalizing image
+ curr_patch_canvas_hard_non_grad = 1.0 - curr_patch_canvas_hard_non_grad # [0.0-stroke, 1.0-BG]
+ curr_patch_canvas_hard_non_grad = self.normalize_image_m1to1(curr_patch_canvas_hard_non_grad) # [-1.0-stroke, 1.0-BG]
+
+ ## image-level encoding
+ combined_z = self.build_combined_encoder(
+ curr_patch_canvas_hard_non_grad,
+ curr_patch_inputs,
+ 1.0 - curr_canvas_hard_non_grad,
+ input_photo,
+ cursor_position_non_grad,
+ image_size,
+ curr_window_size) # (N, z_size)
+ combined_z = tf.expand_dims(combined_z, axis=1) # (N, 1, z_size)
+
+ curr_window_size_top_side_norm_non_grad = \
+ tf.stop_gradient(curr_window_size / tf.cast(image_size, tf.float32))
+ curr_window_size_bottom_side_norm_non_grad = \
+ tf.stop_gradient(curr_window_size / tf.cast(self.hps.min_window_size, tf.float32))
+ if not self.hps.concat_win_size:
+ combined_z = tf.concat([tf.stop_gradient(prev_width), combined_z], 2) # (N, 1, 2+z_size)
+ else:
+ combined_z = tf.concat([tf.stop_gradient(prev_width),
+ curr_window_size_top_side_norm_non_grad,
+ curr_window_size_bottom_side_norm_non_grad,
+ combined_z],
+ 2) # (N, 1, 2+z_size)
+
+ if self.hps.concat_cursor:
+ prev_input_x = tf.concat([cursor_position_non_grad, combined_z], 2) # (N, 1, 2+2+z_size)
+ else:
+ prev_input_x = combined_z # (N, 1, 2+z_size)
+
+ h_output, next_state = self.build_seq_decoder(self.dec_cell, prev_input_x, prev_state)
+ # h_output: (N * 1, n_out), next_state: (N, dec_rnn_size * 3)
+ [o_other_params, o_pen_ras] = self.get_mixture_coef(h_output)
+ # o_other_params: (N * 1, 6)
+ # o_pen_ras: (N * 1, 2), after softmax
+
+ o_other_params = tf.reshape(o_other_params, [-1, 1, 6]) # (N, 1, 6)
+ o_pen_ras_raw = tf.reshape(o_pen_ras, [-1, 1, 2]) # (N, 1, 2)
+
+ other_params_list.append(o_other_params)
+ pen_ras_list.append(o_pen_ras_raw)
+
+ #### sampling part - end ####
+
+ if self.hps.model_mode == 'train' or self.hps.model_mode == 'eval' or self.hps.model_mode == 'eval_sample':
+ # use renderer here to convert the strokes to image
+ curr_other_params = tf.squeeze(o_other_params, axis=1) # (N, 6), (x1, y1)=[0.0, 1.0], (x2, y2)=[-1.0, 1.0]
+ x1y1, x2y2, width2, scaling = curr_other_params[:, 0:2], curr_other_params[:, 2:4],\
+ curr_other_params[:, 4:5], curr_other_params[:, 5:6]
+ x0y0 = tf.zeros_like(x2y2) # (N, 2), [-1.0, 1.0]
+ x0y0 = tf.div(tf.add(x0y0, 1.0), 2.0) # (N, 2), [0.0, 1.0]
+ x2y2 = tf.div(tf.add(x2y2, 1.0), 2.0) # (N, 2), [0.0, 1.0]
+ widths = tf.concat([tf.squeeze(prev_width, axis=1), width2], axis=1) # (N, 2)
+ curr_other_params = tf.concat([x0y0, x1y1, x2y2, widths], axis=-1) # (N, 8), (x0, y0)&(x2, y2)=[0.0, 1.0]
+ curr_stroke_image = rasterizor_st.raster_func_stroke_abs(curr_other_params)
+ # (N, raster_size, raster_size), [0.0-BG, 1.0-stroke]
+
+ curr_stroke_image_large = pasting_func(cursor_position_loop, curr_stroke_image,
+ image_size, curr_window_size,
+ is_differentiable=self.hps.pasting_diff)
+ # (N, image_size, image_size), [0.0-BG, 1.0-stroke]
+
+ ## soft
+ if not self.hps.use_softargmax:
+ curr_state_soft = o_pen_ras[:, 1:2] # (N, 1)
+ else:
+ curr_state_soft = self.differentiable_argmax(o_pen_ras, self.hps.soft_beta) # (N, 1)
+
+ curr_state_soft = tf.expand_dims(curr_state_soft, axis=1) # (N, 1, 1)
+
+ filter_curr_stroke_image_soft = tf.multiply(tf.subtract(1.0, curr_state_soft), curr_stroke_image_large)
+ # (N, image_size, image_size), [0.0-BG, 1.0-stroke]
+ curr_canvas_soft = tf.add(curr_canvas_soft, filter_curr_stroke_image_soft) # [0.0-BG, 1.0-stroke]
+
+ ## hard
+ curr_state_hard = tf.expand_dims(tf.cast(tf.argmax(o_pen_ras_raw, axis=-1), dtype=tf.float32),
+ axis=-1) # (N, 1, 1)
+ filter_curr_stroke_image_hard = tf.multiply(tf.subtract(1.0, curr_state_hard), curr_stroke_image_large)
+ # (N, image_size, image_size), [0.0-BG, 1.0-stroke]
+ self.curr_canvas_hard = tf.add(self.curr_canvas_hard, filter_curr_stroke_image_hard) # [0.0-BG, 1.0-stroke]
+ self.curr_canvas_hard = tf.clip_by_value(self.curr_canvas_hard, 0.0, 1.0) # [0.0-BG, 1.0-stroke]
+
+ next_width = o_other_params[:, :, 4:5]
+ next_scaling = o_other_params[:, :, 5:6]
+ next_window_size = tf.multiply(next_scaling, tf.stop_gradient(curr_window_size)) # float, with grad
+ window_size_before_max_min = next_window_size # (N, 1, 1), large-level
+ win_size_before_max_min_list.append(window_size_before_max_min)
+ next_window_size = tf.maximum(next_window_size, tf.cast(self.hps.min_window_size, tf.float32))
+ next_window_size = tf.minimum(next_window_size, tf.cast(image_size, tf.float32))
+
+ prev_state = next_state
+ prev_width = next_width * curr_window_size / next_window_size # (N, 1, 1)
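+            # The predicted width is expressed relative to the window; rescaling by
+            # curr_window_size / next_window_size keeps the stroke width consistent in image space
+            # when it is fed back under the resized window at the next step.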
+ prev_scaling = next_scaling # (N, 1, 1))
+ prev_window_size = curr_window_size
+
+ # update the cursor position
+ new_cursor_offsets = tf.multiply(o_other_params[:, :, 2:4],
+ tf.divide(curr_window_size, 2.0)) # (N, 1, 2), window-level
+ new_cursor_offset_next = new_cursor_offsets
+ new_cursor_offset_next = tf.concat([new_cursor_offset_next[:, :, 1:2], new_cursor_offset_next[:, :, 0:1]], axis=-1)
+
+ cursor_position_loop_large = tf.multiply(cursor_position_loop, tf.cast(image_size, tf.float32))
+
+ if self.hps.stop_accu_grad:
+ stroke_position_next = tf.stop_gradient(cursor_position_loop_large) + new_cursor_offset_next # (N, 1, 2), large-level
+ else:
+ stroke_position_next = cursor_position_loop_large + new_cursor_offset_next # (N, 1, 2), large-level
+
+ stroke_position_before_max_min = stroke_position_next # (N, 1, 2), large-level
+ pos_before_max_min_list.append(stroke_position_before_max_min)
+
+ if self.hps.cursor_type == 'next':
+ cursor_position_loop_large = stroke_position_next # (N, 1, 2), large-level
+ else:
+ raise Exception('Unknown cursor_type')
+
+ cursor_position_loop_large = tf.maximum(cursor_position_loop_large, 0.0)
+ cursor_position_loop_large = tf.minimum(cursor_position_loop_large, tf.cast(image_size - 1, tf.float32))
+ cursor_position_loop = tf.div(cursor_position_loop_large, tf.cast(image_size, tf.float32))
+
+ curr_canvas_soft = tf.clip_by_value(curr_canvas_soft, 0.0, 1.0) # (N, raster_size, raster_size), [0.0-BG, 1.0-stroke]
+
+ other_params_ = tf.reshape(tf.concat(other_params_list, axis=1), [-1, 6]) # (N * max_seq_len, 6)
+ pen_ras_ = tf.reshape(tf.concat(pen_ras_list, axis=1), [-1, 2]) # (N * max_seq_len, 2)
+ pos_before_max_min_ = tf.concat(pos_before_max_min_list, axis=1) # (N, max_seq_len, 2)
+ win_size_before_max_min_ = tf.concat(win_size_before_max_min_list, axis=1) # (N, max_seq_len, 1)
+
+ return other_params_, pen_ras_, prev_state, curr_canvas_soft, curr_canvas_soft_rgb, \
+ pos_before_max_min_, win_size_before_max_min_
+
+ def differentiable_argmax(self, input_pen, soft_beta):
+ """
+ Differentiable argmax trick.
+ :param input_pen: (N, n_class)
+ :return: pen_state: (N, 1)
+ """
+ def sign_onehot(x):
+ """
+ :param x: (N, n_class)
+ :return: (N, n_class)
+ """
+ y = tf.sign(tf.reduce_max(x, axis=-1, keepdims=True) - x)
+ y = (y - 1) * (-1)
+ return y
+
+ def softargmax(x, beta=1e2):
+ """
+ :param x: (N, n_class)
+ :param beta: 1e10 is the best. 1e2 is acceptable.
+ :return: (N)
+ """
+ x_range = tf.cumsum(tf.ones_like(x), axis=1) # (N, 2)
+ return tf.reduce_sum(tf.nn.softmax(x * beta) * x_range, axis=1) - 1
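+        # softargmax returns the softmax-weighted average of the class indices (the cumsum builds 1-based
+        # indices and the final -1 makes them 0-based). It approaches a hard argmax as beta grows while
+        # remaining differentiable with respect to the logits, unlike tf.argmax.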
+
+ ## Better to use softargmax(beta=1e2). The sign_onehot's gradient is close to zero.
+ # pen_onehot = sign_onehot(input_pen) # one-hot form, (N * max_seq_len, 2)
+ # pen_state = pen_onehot[:, 1:2] # (N * max_seq_len, 1)
+ pen_state = softargmax(input_pen, soft_beta)
+ pen_state = tf.expand_dims(pen_state, axis=1) # (N * max_seq_len, 1)
+ return pen_state
+
+ def build_losses(self, target_sketch, pred_raster_imgs, pred_params,
+ pos_before_max_min, win_size_before_max_min, image_size):
+ def get_raster_loss(pred_imgs, gt_imgs, loss_type):
+ perc_layer_losses_raw = []
+ perc_layer_losses_weighted = []
+ perc_layer_losses_norm = []
+
+ if loss_type == 'l1':
+ ras_cost = tf.reduce_mean(tf.abs(tf.subtract(gt_imgs, pred_imgs))) # ()
+ elif loss_type == 'l1_small':
+ gt_imgs_small = tf.image.resize_images(tf.expand_dims(gt_imgs, axis=3), (32, 32))
+ pred_imgs_small = tf.image.resize_images(tf.expand_dims(pred_imgs, axis=3), (32, 32))
+ ras_cost = tf.reduce_mean(tf.abs(tf.subtract(gt_imgs_small, pred_imgs_small))) # ()
+ elif loss_type == 'mse':
+ ras_cost = tf.reduce_mean(tf.pow(tf.subtract(gt_imgs, pred_imgs), 2)) # ()
+ elif loss_type == 'perceptual':
+ return_map_pred = vgg_net_slim(pred_imgs, image_size)
+ return_map_gt = vgg_net_slim(gt_imgs, image_size)
+ perc_loss_type = 'l1' # [l1, mse]
+ weighted_map = {'ReLU1_1': 100.0, 'ReLU1_2': 100.0,
+ 'ReLU2_1': 100.0, 'ReLU2_2': 100.0,
+ 'ReLU3_1': 10.0, 'ReLU3_2': 10.0, 'ReLU3_3': 10.0,
+ 'ReLU4_1': 1.0, 'ReLU4_2': 1.0, 'ReLU4_3': 1.0,
+ 'ReLU5_1': 1.0, 'ReLU5_2': 1.0, 'ReLU5_3': 1.0}
+
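+                # Shallower VGG layers receive larger weights (used with the 'weighted_sum' fuse type),
+                # emphasizing fine, low-level stroke detail over higher-level semantics.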
+ for perc_layer in self.hps.perc_loss_layers:
+ if perc_loss_type == 'l1':
+ perc_layer_loss = tf.reduce_mean(tf.abs(tf.subtract(return_map_pred[perc_layer],
+ return_map_gt[perc_layer]))) # ()
+ elif perc_loss_type == 'mse':
+ perc_layer_loss = tf.reduce_mean(tf.pow(tf.subtract(return_map_pred[perc_layer],
+ return_map_gt[perc_layer]), 2)) # ()
+ else:
+ raise NameError('Unknown perceptual loss type:', perc_loss_type)
+ perc_layer_losses_raw.append(perc_layer_loss)
+
+ assert perc_layer in weighted_map
+ perc_layer_losses_weighted.append(perc_layer_loss * weighted_map[perc_layer])
+
+ for loop_i in range(len(self.hps.perc_loss_layers)):
+ perc_relu_loss_raw = perc_layer_losses_raw[loop_i] # ()
+
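+                    # During training, each layer's raw loss is normalized by its running mean
+                    # (perc_loss_mean_list, tracked on the model) so that layers with very different
+                    # magnitudes contribute comparably when fused below.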
+ if self.hps.model_mode == 'train':
+ curr_relu_mean = (self.perc_loss_mean_list[loop_i] * self.last_step_num + perc_relu_loss_raw) / (self.last_step_num + 1.0)
+ relu_cost_norm = perc_relu_loss_raw / curr_relu_mean
+ else:
+ relu_cost_norm = perc_relu_loss_raw
+ perc_layer_losses_norm.append(relu_cost_norm)
+
+ perc_layer_losses_raw = tf.stack(perc_layer_losses_raw, axis=0)
+ perc_layer_losses_norm = tf.stack(perc_layer_losses_norm, axis=0)
+
+ if self.hps.perc_loss_fuse_type == 'max':
+ ras_cost = tf.reduce_max(perc_layer_losses_norm)
+ elif self.hps.perc_loss_fuse_type == 'add':
+ ras_cost = tf.reduce_mean(perc_layer_losses_norm)
+ elif self.hps.perc_loss_fuse_type == 'raw_add':
+ ras_cost = tf.reduce_mean(perc_layer_losses_raw)
+ elif self.hps.perc_loss_fuse_type == 'weighted_sum':
+ ras_cost = tf.reduce_mean(perc_layer_losses_weighted)
+ else:
+ raise NameError('Unknown perc_loss_fuse_type:', self.hps.perc_loss_fuse_type)
+
+ elif loss_type == 'triplet':
+ raise Exception('Solution for triplet loss is coming soon.')
+ else:
+ raise NameError('Unknown loss type:', loss_type)
+
+ if loss_type != 'perceptual':
+ for perc_layer_i in self.hps.perc_loss_layers:
+ perc_layer_losses_raw.append(tf.constant(0.0))
+ perc_layer_losses_norm.append(tf.constant(0.0))
+
+ perc_layer_losses_raw = tf.stack(perc_layer_losses_raw, axis=0)
+ perc_layer_losses_norm = tf.stack(perc_layer_losses_norm, axis=0)
+
+ return ras_cost, perc_layer_losses_raw, perc_layer_losses_norm
+
+ gt_raster_images = tf.squeeze(target_sketch, axis=3) # (N, raster_h, raster_w), [0.0-stroke, 1.0-BG]
+ raster_cost, perc_relu_losses_raw, perc_relu_losses_norm = \
+ get_raster_loss(pred_raster_imgs, gt_raster_images, loss_type=self.hps.raster_loss_base_type)
+
+ def get_stroke_num_loss(input_strokes):
+ ending_state = input_strokes[:, :, 0] # (N, seq_len)
+ stroke_num_loss_pre = tf.reduce_mean(ending_state) # larger is better, [0.0, 1.0]
+ stroke_num_loss = 1.0 - stroke_num_loss_pre # lower is better, [0.0, 1.0]
+ return stroke_num_loss
+
+ stroke_num_cost = get_stroke_num_loss(pred_params) # lower is better
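+        # ending_state is the predicted pen-lift probability, so pushing its mean toward 1.0 penalizes
+        # unnecessary drawing steps; this acts as the stroke regularization that favors fewer strokes.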
+
+ def get_pos_outside_loss(pos_before_max_min_):
+ pos_after_max_min = tf.maximum(pos_before_max_min_, 0.0)
+ pos_after_max_min = tf.minimum(pos_after_max_min, tf.cast(image_size - 1, tf.float32)) # (N, max_seq_len, 2)
+ pos_outside_loss = tf.reduce_mean(tf.abs(pos_before_max_min_ - pos_after_max_min))
+ return pos_outside_loss
+
+ pos_outside_cost = get_pos_outside_loss(pos_before_max_min) # lower is better
+
+ def get_win_size_outside_loss(win_size_before_max_min_, min_window_size):
+ win_size_outside_top_loss = tf.divide(
+ tf.maximum(win_size_before_max_min_ - tf.cast(image_size, tf.float32), 0.0),
+ tf.cast(image_size, tf.float32)) # (N, max_seq_len, 1)
+ win_size_outside_bottom_loss = tf.divide(
+ tf.maximum(tf.cast(min_window_size, tf.float32) - win_size_before_max_min_, 0.0),
+ tf.cast(min_window_size, tf.float32)) # (N, max_seq_len, 1)
+ win_size_outside_loss = tf.reduce_mean(win_size_outside_top_loss + win_size_outside_bottom_loss)
+ return win_size_outside_loss
+
+ win_size_outside_cost = get_win_size_outside_loss(win_size_before_max_min, self.hps.min_window_size) # lower is better
+
+ def get_early_pen_states_loss(input_strokes, curr_start, curr_end):
+ # input_strokes: (N, max_seq_len, 7)
+ pred_early_pen_states = input_strokes[:, curr_start:curr_end, 0] # (N, curr_early_len)
+ pred_early_pen_states_min = tf.reduce_min(pred_early_pen_states, axis=1) # (N), should not be 1
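+            # The per-sample minimum over the early window is near 0 only if the pen is put down
+            # (drawing state) at least once in [curr_start, curr_end); minimizing its mean therefore
+            # discourages the model from idling before it starts drawing.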
+ early_pen_states_loss = tf.reduce_mean(pred_early_pen_states_min) # lower is better
+ return early_pen_states_loss
+
+ early_pen_states_cost = get_early_pen_states_loss(pred_params,
+ self.early_pen_loss_start_idx, self.early_pen_loss_end_idx)
+
+ return raster_cost, stroke_num_cost, pos_outside_cost, win_size_outside_cost, \
+ early_pen_states_cost, \
+ perc_relu_losses_raw, perc_relu_losses_norm
+
+ def build_training_op_split(self, raster_cost, sn_cost, cursor_outside_cost, win_size_outside_cost,
+ early_pen_states_cost):
+ total_cost = self.hps.raster_loss_weight * raster_cost + \
+ self.hps.early_pen_loss_weight * early_pen_states_cost + \
+ self.stroke_num_loss_weight * sn_cost + \
+ self.hps.outside_loss_weight * cursor_outside_cost + \
+ self.hps.win_size_outside_loss_weight * win_size_outside_cost
+
+ tvars = [var for var in tf.trainable_variables()
+ if 'raster_unit' not in var.op.name and 'VGG16' not in var.op.name]
+ gvs = self.optimizer.compute_gradients(total_cost, var_list=tvars)
+ return total_cost, gvs
+
+ def build_training_op(self, grad_list):
+ with tf.variable_scope('train_op', reuse=tf.AUTO_REUSE):
+ gvs = self.average_gradients(grad_list)
+ g = self.hps.grad_clip
+
+ for grad, var in gvs:
+ print('>>', var.op.name)
+ if grad is None:
+ print(' >> None value')
+
+ capped_gvs = [(tf.clip_by_value(grad, -g, g), var) for grad, var in gvs]
+
+ self.train_op = self.optimizer.apply_gradients(
+ capped_gvs, global_step=self.global_step, name='train_step')
+
+ def average_gradients(self, grads_list):
+ """
+ Compute the average gradients.
+ :param grads_list: list(of length N_GPU) of list(grad, var)
+ :return:
+ """
+ avg_grads = []
+ for grad_and_vars in zip(*grads_list):
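+            # grad_and_vars groups the same variable across towers: ((grad_gpu0, var), (grad_gpu1, var), ...)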
+ grads = []
+ for g, _ in grad_and_vars:
+ expanded_g = tf.expand_dims(g, 0)
+ grads.append(expanded_g)
+ grad = tf.concat(grads, axis=0)
+ grad = tf.reduce_mean(grad, axis=0)
+
+ v = grad_and_vars[0][1]
+ grad_and_var = (grad, v)
+ avg_grads.append(grad_and_var)
+
+ return avg_grads
\ No newline at end of file
diff --git a/hi-arm/qmupd_vs/models/__init__.py b/hi-arm/qmupd_vs/models/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..fc01113da66ff042bd1807b5bfdb70c4bce8d14c
--- /dev/null
+++ b/hi-arm/qmupd_vs/models/__init__.py
@@ -0,0 +1,67 @@
+"""This package contains modules related to objective functions, optimizations, and network architectures.
+
+To add a custom model class called 'dummy', you need to add a file called 'dummy_model.py' and define a subclass DummyModel inherited from BaseModel.
+You need to implement the following five functions:
+ -- <__init__>: initialize the class; first call BaseModel.__init__(self, opt).
+    -- <set_input>: unpack data from dataset and apply preprocessing.
+    -- <forward>: produce intermediate results.
+    -- <optimize_parameters>: calculate loss, gradients, and update network weights.
+    -- <modify_commandline_options>: (optionally) add model-specific options and set default options.
+
+In the function <__init__>, you need to define four lists:
+ -- self.loss_names (str list): specify the training losses that you want to plot and save.
+ -- self.model_names (str list): define networks used in our training.
+ -- self.visual_names (str list): specify the images that you want to display and save.
+    -- self.optimizers (optimizer list): define and initialize optimizers. You can define one optimizer for each network. If two networks are updated at the same time, you can use itertools.chain to group them. See cycle_gan_model.py for a usage example.
+
+Now you can use the model class by specifying flag '--model dummy'.
+See our template model class 'template_model.py' for more details.
+"""
+
+import importlib
+from models.base_model import BaseModel
+
+
+def find_model_using_name(model_name):
+ """Import the module "models/[model_name]_model.py".
+
+    In the file, the class whose name matches [model_name]Model (e.g. DummyModel for
+    'dummy') will be instantiated. It has to be a subclass of BaseModel, and the match
+    is case-insensitive.
+ """
+ model_filename = "models." + model_name + "_model"
+ modellib = importlib.import_module(model_filename)
+ model = None
+ target_model_name = model_name.replace('_', '') + 'model'
+ for name, cls in modellib.__dict__.items():
+ if name.lower() == target_model_name.lower() \
+ and issubclass(cls, BaseModel):
+ model = cls
+
+ if model is None:
+ print("In %s.py, there should be a subclass of BaseModel with class name that matches %s in lowercase." % (model_filename, target_model_name))
+ exit(0)
+
+ return model
+
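+# For example, find_model_using_name('cycle_gan_cls') imports models/cycle_gan_cls_model.py and
+# returns the CycleGANClsModel class defined there, since its lowercased name matches 'cycleganclsmodel'.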
+
+def get_option_setter(model_name):
+ """Return the static method of the model class."""
+ model_class = find_model_using_name(model_name)
+ return model_class.modify_commandline_options
+
+
+def create_model(opt):
+ """Create a model given the option.
+
+    This function wraps the model class found by find_model_using_name.
+    It is the main interface between this package and 'train.py'/'test.py'.
+
+ Example:
+ >>> from models import create_model
+ >>> model = create_model(opt)
+ """
+ model = find_model_using_name(opt.model)
+ instance = model(opt)
+ print("model [%s] was created" % type(instance).__name__)
+ return instance
diff --git a/hi-arm/qmupd_vs/models/base_model.py b/hi-arm/qmupd_vs/models/base_model.py
new file mode 100644
index 0000000000000000000000000000000000000000..d06337d4ee138db99a94032b40fe6ad9c8627f4b
--- /dev/null
+++ b/hi-arm/qmupd_vs/models/base_model.py
@@ -0,0 +1,248 @@
+import os
+import torch
+from collections import OrderedDict
+from abc import ABCMeta, abstractmethod
+from . import networks
+import pdb
+
+
+class BaseModel():
+ __metaclass__ = ABCMeta
+ """This class is an abstract base class (ABC) for models.
+ To create a subclass, you need to implement the following five functions:
+ -- <__init__>: initialize the class; first call BaseModel.__init__(self, opt).
+        -- <set_input>: unpack data from dataset and apply preprocessing.
+        -- <forward>: produce intermediate results.
+        -- <optimize_parameters>: calculate losses, gradients, and update network weights.
+        -- <modify_commandline_options>: (optionally) add model-specific options and set default options.
+ """
+
+ def __init__(self, opt):
+ """Initialize the BaseModel class.
+
+ Parameters:
+ opt (Option class)-- stores all the experiment flags; needs to be a subclass of BaseOptions
+
+ When creating your custom class, you need to implement your own initialization.
+        In this function, you should first call <BaseModel.__init__(self, opt)>.
+        Then, you need to define four lists:
+            -- self.loss_names (str list): specify the training losses that you want to plot and save.
+            -- self.model_names (str list): define networks used in our training.
+            -- self.visual_names (str list): specify the images that you want to display and save.
+ -- self.optimizers (optimizer list): define and initialize optimizers. You can define one optimizer for each network. If two networks are updated at the same time, you can use itertools.chain to group them. See cycle_gan_model.py for an example.
+ """
+ self.opt = opt
+ self.gpu_ids = opt.gpu_ids
+ self.isTrain = opt.isTrain
+ self.device = torch.device('cuda:{}'.format(self.gpu_ids[0])) if self.gpu_ids else torch.device('cpu') # get device name: CPU or GPU
+ self.save_dir = os.path.join(opt.checkpoints_dir, opt.name) # save all the checkpoints to save_dir
+ if opt.preprocess != 'scale_width': # with [scale_width], input images might have different sizes, which hurts the performance of cudnn.benchmark.
+ torch.backends.cudnn.benchmark = True
+ self.loss_names = []
+ self.model_names = []
+ self.visual_names = []
+ self.optimizers = []
+ self.image_paths = []
+ self.metric = 0 # used for learning rate policy 'plateau'
+
+ @staticmethod
+ def modify_commandline_options(parser, is_train):
+ """Add new model-specific options, and rewrite default values for existing options.
+
+ Parameters:
+ parser -- original option parser
+ is_train (bool) -- whether training phase or test phase. You can use this flag to add training-specific or test-specific options.
+
+ Returns:
+ the modified parser.
+ """
+ return parser
+
+ @abstractmethod
+ def set_input(self, input):
+ """Unpack input data from the dataloader and perform necessary pre-processing steps.
+
+ Parameters:
+ input (dict): includes the data itself and its metadata information.
+ """
+ pass
+
+ @abstractmethod
+ def forward(self):
+ """Run forward pass; called by both functions and ."""
+ pass
+
+ @abstractmethod
+ def optimize_parameters(self):
+ """Calculate losses, gradients, and update network weights; called in every training iteration"""
+ pass
+
+ def setup(self, opt):
+ """Load and print networks; create schedulers
+
+ Parameters:
+ opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions
+ """
+ if self.isTrain:
+ self.schedulers = [networks.get_scheduler(optimizer, opt) for optimizer in self.optimizers]
+ if not self.isTrain or opt.continue_train:
+ load_suffix = 'iter_%d' % opt.load_iter if opt.load_iter > 0 else opt.epoch
+ self.load_networks(load_suffix)
+ self.print_networks(opt.verbose)
+
+ def eval(self):
+ """Make models eval mode during test time"""
+ for name in self.model_names:
+ if isinstance(name, str):
+ net = getattr(self, 'net' + name)
+ net.eval()
+
+ def test(self):
+ """Forward function used in test time.
+
+        This function wraps <forward> in no_grad() so we don't save intermediate steps for backprop.
+        It also calls <compute_visuals> to produce additional visualization results.
+ """
+ with torch.no_grad():
+ self.forward()
+ self.compute_visuals()
+
+ def compute_visuals(self):
+ """Calculate additional output images for visdom and HTML visualization"""
+ pass
+
+ def get_image_paths(self):
+ """ Return image paths that are used to load current data"""
+ return self.image_paths
+
+ def update_learning_rate(self):
+ """Update learning rates for all the networks; called at the end of every epoch"""
+ for scheduler in self.schedulers:
+ if self.opt.lr_policy == 'plateau':
+ scheduler.step(self.metric)
+ else:
+ scheduler.step()
+
+ lr = self.optimizers[0].param_groups[0]['lr']
+ print('learning rate = %.7f' % lr)
+
+ def get_current_visuals(self):
+ """Return visualization images. train.py will display these images with visdom, and save the images to a HTML"""
+ visual_ret = OrderedDict()
+ for name in self.visual_names:
+ if isinstance(name, str):
+ visual_ret[name] = getattr(self, name)
+ return visual_ret
+
+ def get_current_losses(self):
+ """Return traning losses / errors. train.py will print out these errors on console, and save them to a file"""
+ errors_ret = OrderedDict()
+ for name in self.loss_names:
+ if isinstance(name, str):
+ errors_ret[name] = float(getattr(self, 'loss_' + name)) # float(...) works for both scalar tensor and float number
+ return errors_ret
+
+ def save_networks(self, epoch):
+ """Save all the networks to the disk.
+
+ Parameters:
+ epoch (int) -- current epoch; used in the file name '%s_net_%s.pth' % (epoch, name)
+ """
+ for name in self.model_names:
+ if isinstance(name, str):
+ save_filename = '%s_net_%s.pth' % (epoch, name)
+ save_path = os.path.join(self.save_dir, save_filename)
+ net = getattr(self, 'net' + name)
+
+ if len(self.gpu_ids) > 0 and torch.cuda.is_available():
+ torch.save(net.module.cpu().state_dict(), save_path)
+ net.cuda(self.gpu_ids[0])
+ else:
+ torch.save(net.cpu().state_dict(), save_path)
+
+ def __patch_instance_norm_state_dict(self, state_dict, module, keys, i=0):
+ """Fix InstanceNorm checkpoints incompatibility (prior to 0.4)"""
+ key = keys[i]
+ if i + 1 == len(keys): # at the end, pointing to a parameter/buffer
+ if module.__class__.__name__.startswith('InstanceNorm') and \
+ (key == 'running_mean' or key == 'running_var'):
+ if getattr(module, key) is None:
+ state_dict.pop('.'.join(keys))
+ if module.__class__.__name__.startswith('InstanceNorm') and \
+ (key == 'num_batches_tracked'):
+ state_dict.pop('.'.join(keys))
+ else:
+ self.__patch_instance_norm_state_dict(state_dict, getattr(module, key), keys, i + 1)
+
+ def load_networks(self, epoch):
+ """Load all the networks from the disk.
+
+ Parameters:
+ epoch (int) -- current epoch; used in the file name '%s_net_%s.pth' % (epoch, name)
+ """
+ for name in self.model_names:
+ if isinstance(name, str):
+ load_filename = '%s_net_%s.pth' % (epoch, name)
+ load_path = os.path.join(self.save_dir, load_filename)
+ net = getattr(self, 'net' + name)
+ if isinstance(net, torch.nn.DataParallel):
+ net = net.module
+ print('loading the model from %s' % load_path)
+ # if you are using PyTorch newer than 0.4 (e.g., built from
+ # GitHub source), you can remove str() on self.device
+ state_dict = torch.load(load_path, map_location=str(self.device))
+ if hasattr(state_dict, '_metadata'):
+ del state_dict._metadata
+
+ # patch InstanceNorm checkpoints prior to 0.4
+ for key in list(state_dict.keys()): # need to copy keys here because we mutate in loop
+ self.__patch_instance_norm_state_dict(state_dict, net, key.split('.'))
+ net.load_state_dict(state_dict)
+ #param1 = {}
+ #for name, parameters in net.named_parameters():
+ # print(name,',',parameters.size())
+ # param1[name] = parameters.detach().cpu().numpy()
+ #pdb.set_trace()
+
+ def print_networks(self, verbose):
+ """Print the total number of parameters in the network and (if verbose) network architecture
+
+ Parameters:
+ verbose (bool) -- if verbose: print the network architecture
+ """
+ print('---------- Networks initialized -------------')
+ for name in self.model_names:
+ if isinstance(name, str):
+ net = getattr(self, 'net' + name)
+ num_params = 0
+ for param in net.parameters():
+ num_params += param.numel()
+ if verbose:
+ print(net)
+ print('[Network %s] Total number of parameters : %.3f M' % (name, num_params / 1e6))
+ print('-----------------------------------------------')
+
+ def set_requires_grad(self, nets, requires_grad=False):
+ """Set requies_grad=Fasle for all the networks to avoid unnecessary computations
+ Parameters:
+ nets (network list) -- a list of networks
+ requires_grad (bool) -- whether the networks require gradients or not
+ """
+ if not isinstance(nets, list):
+ nets = [nets]
+ for net in nets:
+ if net is not None:
+ for param in net.parameters():
+ param.requires_grad = requires_grad
+
+ # ===========================================================================================================
+    def masked(self, A, mask):
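+        # mask_type 0: keep only the masked region (outside goes to -1, i.e. black in [-1, 1] space).
+        # mask_type 1: keep the masked region and fill the outside with +1 (white).
+        # mask_type 2: leave A unchanged and concatenate the mask as an extra channel.
+        # mask_type 3: combination of 1 and 2 (white-filled masked image concatenated with the mask).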
+ if self.opt.mask_type == 0:
+ return (A/2+0.5)*mask*2-1
+ elif self.opt.mask_type == 1:
+ return ((A/2+0.5)*mask+1-mask)*2-1
+ elif self.opt.mask_type == 2:
+ return torch.cat((A, mask), 1)
+ elif self.opt.mask_type == 3:
+ masked = ((A/2+0.5)*mask+1-mask)*2-1
+ return torch.cat((masked, mask), 1)
\ No newline at end of file
diff --git a/hi-arm/qmupd_vs/models/cycle_gan_cls_model.py b/hi-arm/qmupd_vs/models/cycle_gan_cls_model.py
new file mode 100644
index 0000000000000000000000000000000000000000..8883fcec78f150470728571ae2c1c6f9fbbd0346
--- /dev/null
+++ b/hi-arm/qmupd_vs/models/cycle_gan_cls_model.py
@@ -0,0 +1,565 @@
+import torch
+import itertools
+from util.image_pool import ImagePool
+from .base_model import BaseModel
+from . import networks
+import models.dist_model as dm # numpy==1.14.3
+import torchvision.transforms as transforms
+import os
+from util.util import tensor2im, tensor2im2, save_image
+
+def truncate(fake_B, a=127.5):  # input in [-1, 1]
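+    # Quantizes an image in [-1, 1] to the discrete levels of an 8-bit image (256 levels for the
+    # default a=127.5), emulating save/load precision loss; .int() truncates toward zero, while the
+    # commented-out variant would round to the nearest level instead.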
+ #return torch.round((fake_B+1)*a)/a-1
+ return ((fake_B+1)*a).int().float()/a-1
+
+class CycleGANClsModel(BaseModel):
+ """
+ This class implements the CycleGAN model, for learning image-to-image translation without paired data.
+
+ The model training requires '--dataset_mode unaligned' dataset.
+ By default, it uses a '--netG resnet_9blocks' ResNet generator,
+ a '--netD basic' discriminator (PatchGAN introduced by pix2pix),
+ and a least-square GANs objective ('--gan_mode lsgan').
+
+ CycleGAN paper: https://arxiv.org/pdf/1703.10593.pdf
+ """
+ @staticmethod
+ def modify_commandline_options(parser, is_train=True):
+ """Add new dataset-specific options, and rewrite default values for existing options.
+
+ Parameters:
+ parser -- original option parser
+ is_train (bool) -- whether training phase or test phase. You can use this flag to add training-specific or test-specific options.
+
+ Returns:
+ the modified parser.
+
+ For CycleGAN, in addition to GAN losses, we introduce lambda_A, lambda_B, and lambda_identity for the following losses.
+ A (source domain), B (target domain).
+ Generators: G_A: A -> B; G_B: B -> A.
+ Discriminators: D_A: G_A(A) vs. B; D_B: G_B(B) vs. A.
+ Forward cycle loss: lambda_A * ||G_B(G_A(A)) - A|| (Eqn. (2) in the paper)
+ Backward cycle loss: lambda_B * ||G_A(G_B(B)) - B|| (Eqn. (2) in the paper)
+ Identity loss (optional): lambda_identity * (||G_A(B) - B|| * lambda_B + ||G_B(A) - A|| * lambda_A) (Sec 5.2 "Photo generation from paintings" in the paper)
+ Dropout is not used in the original CycleGAN paper.
+ """
+ parser.set_defaults(no_dropout=True) # default CycleGAN did not use dropout
+ parser.set_defaults(dataset_mode='unaligned_mask_stylecls')
+ parser.add_argument('--netda', type=str, default='basic_cls') # discriminator has two branches
+        parser.add_argument('--truncate', type=float, default=0.0, help='whether to truncate in the forward pass')
+ if is_train:
+ parser.add_argument('--lambda_A', type=float, default=5.0, help='weight for cycle loss (A -> B -> A)')
+ parser.add_argument('--lambda_B', type=float, default=5.0, help='weight for cycle loss (B -> A -> B)')
+ parser.add_argument('--lambda_identity', type=float, default=0, help='use identity mapping. Setting lambda_identity other than 0 has an effect of scaling the weight of the identity mapping loss. For example, if the weight of the identity loss should be 10 times smaller than the weight of the reconstruction loss, please set lambda_identity = 0.1')
+            parser.add_argument('--perceptual_cycle', type=int, default=6, help='whether to use perceptual similarity for the cycle loss')
+            parser.add_argument('--use_hed', type=int, default=1, help='whether to use HED edge processing for the cycle loss')
+            parser.add_argument('--ntrunc_trunc', type=int, default=1, help='whether to use both the non-truncated and truncated versions')
+            parser.add_argument('--trunc_a', type=float, default=31.875, help='value to multiply by before rounding when truncating')
+            parser.add_argument('--lambda_A_trunc', type=float, default=5.0, help='weight for the cycle loss of the truncated branch')
+ parser.add_argument('--hed_pretrained_mode', type=str, default='./checkpoints/network-bsds500.pytorch', help='path to the pretrained hed model')
+ parser.add_argument('--vgg_pretrained_mode', type=str, default='./checkpoints/vgg19.pth', help='path to the pretrained vgg model')
+ parser.add_argument('--lambda_G_A_l', type=float, default=0.5, help='weight for local GAN loss in G')
+            parser.add_argument('--style_loss_with_weight', type=int, default=0, help='whether to multiply the probability in the style loss')
+            parser.add_argument('--metric', action='store_true', help='whether to use the metric loss for fake_B')
+            parser.add_argument('--metric_model_path', type=str, default='3/30_net_Regressor.pth', help='metric model path')
+            parser.add_argument('--lambda_metric', type=float, default=0.5, help='weight for metric loss')
+            parser.add_argument('--metricvec', action='store_true', help='whether to use the metric model with a vector input')
+            parser.add_argument('--metric_resnext', action='store_true', help='whether to use ResNeXt as the metric model')
+            parser.add_argument('--metric_resnet', action='store_true', help='whether to use ResNet as the metric model')
+            parser.add_argument('--metric_inception', action='store_true', help='whether to use Inception as the metric model')  # the Inception variant with transform_input=False
+            parser.add_argument('--metric_inmask', action='store_true', help='whether to use the input mask in the metric model')
+ else:
+            parser.add_argument('--check_D', action='store_true', help="whether to check the discriminators' outputs")
+ # for masks
+        parser.add_argument('--use_mask', type=int, default=1, help='whether to use a mask for the special face region')
+        parser.add_argument('--use_eye_mask', type=int, default=1, help='whether to use a mask for the eye region')
+        parser.add_argument('--use_lip_mask', type=int, default=1, help='whether to use a mask for the lip region')
+        parser.add_argument('--mask_type', type=int, default=3, help='mask type: 0 = fill outside with black, 1 = fill outside with white, 2 = concatenate the mask as a channel, 3 = white fill plus concatenated mask')
+ # for style control
+ parser.add_argument('--style_control', type=int, default=1, help='use style_control')
+ parser.add_argument('--sfeature_mode', type=str, default='1vgg19_softmax', help='vgg19 softmax as feature')
+ parser.add_argument('--netga', type=str, default='resnet_style_9blocks', help='net arch for netG_A')
+ parser.add_argument('--model0_res', type=int, default=0, help='number of resblocks in model0 (before insert style)')
+ parser.add_argument('--model1_res', type=int, default=0, help='number of resblocks in model1 (after insert style, before 2 column merge)')
+ parser.add_argument('--one_hot', type=int, default=0, help='use one-hot for style code')
+
+ return parser
+
+ def __init__(self, opt):
+ """Initialize the CycleGAN class.
+
+ Parameters:
+ opt (Option class)-- stores all the experiment flags; needs to be a subclass of BaseOptions
+ """
+ BaseModel.__init__(self, opt)
+        # specify the training losses you want to print out. The training/test scripts will call <BaseModel.get_current_losses>.
+ self.loss_names = ['D_A', 'G_A', 'cycle_A', 'idt_A', 'D_B', 'G_B', 'cycle_B', 'idt_B']
+        # specify the images you want to save/display. The training/test scripts will call <BaseModel.get_current_visuals>.
+ visual_names_A = ['real_A', 'fake_B', 'rec_A']
+ visual_names_B = ['real_B', 'fake_A', 'rec_B']
+        if self.isTrain and self.opt.lambda_identity > 0.0:  # if identity loss is used, we also visualize idt_A=G_A(B) and idt_B=G_B(A)
+ visual_names_A.append('idt_B')
+ visual_names_B.append('idt_A')
+ if self.isTrain and self.opt.use_hed:
+ visual_names_A.append('real_A_hed')
+ visual_names_A.append('rec_A_hed')
+ if self.isTrain and self.opt.ntrunc_trunc:
+ visual_names_A.append('rec_At')
+ if self.opt.use_hed:
+ visual_names_A.append('rec_At_hed')
+ self.loss_names = ['D_A', 'G_A', 'cycle_A', 'cycle_A2', 'idt_A', 'D_B', 'G_B', 'cycle_B', 'idt_B', 'G']
+ if self.isTrain and self.opt.use_mask:
+ visual_names_A.append('fake_B_l')
+ visual_names_A.append('real_B_l')
+ self.loss_names += ['D_A_l', 'G_A_l']
+ if self.isTrain and self.opt.use_eye_mask:
+ visual_names_A.append('fake_B_le')
+ visual_names_A.append('real_B_le')
+ self.loss_names += ['D_A_le', 'G_A_le']
+ if self.isTrain and self.opt.use_lip_mask:
+ visual_names_A.append('fake_B_ll')
+ visual_names_A.append('real_B_ll')
+ self.loss_names += ['D_A_ll', 'G_A_ll']
+ if self.isTrain and self.opt.metric:
+ self.loss_names += ['metric']
+ #visual_names_B += ['fake_B2']
+ if not self.isTrain and self.opt.use_mask:
+ visual_names_A.append('fake_B_l')
+ visual_names_A.append('real_B_l')
+ if not self.isTrain and self.opt.use_eye_mask:
+ visual_names_A.append('fake_B_le')
+ visual_names_A.append('real_B_le')
+ if not self.isTrain and self.opt.use_lip_mask:
+ visual_names_A.append('fake_B_ll')
+ visual_names_A.append('real_B_ll')
+ self.loss_names += ['D_A_cls','G_A_cls']
+
+ self.visual_names = visual_names_A + visual_names_B # combine visualizations for A and B
+ print(self.visual_names)
+        # specify the models you want to save to the disk. The training/test scripts will call <BaseModel.save_networks> and <BaseModel.load_networks>.
+ if self.isTrain:
+ self.model_names = ['G_A', 'G_B', 'D_A', 'D_B']
+ if self.opt.use_mask:
+ self.model_names += ['D_A_l']
+ if self.opt.use_eye_mask:
+ self.model_names += ['D_A_le']
+ if self.opt.use_lip_mask:
+ self.model_names += ['D_A_ll']
+ else: # during test time, only load Gs
+ self.model_names = ['G_A', 'G_B']
+ if self.opt.check_D:
+ self.model_names += ['D_A', 'D_B']
+
+ # define networks (both Generators and discriminators)
+ # The naming is different from those used in the paper.
+ # Code (vs. paper): G_A (G), G_B (F), D_A (D_Y), D_B (D_X)
+ if not self.opt.style_control:
+ self.netG_A = networks.define_G(opt.input_nc, opt.output_nc, opt.ngf, opt.netG, opt.norm,
+ not opt.no_dropout, opt.init_type, opt.init_gain, self.gpu_ids)
+ else:
+ print(opt.netga)
+ print('model0_res', opt.model0_res)
+ print('model1_res', opt.model1_res)
+ self.netG_A = networks.define_G(opt.input_nc, opt.output_nc, opt.ngf, opt.netga, opt.norm,
+ not opt.no_dropout, opt.init_type, opt.init_gain, self.gpu_ids, opt.model0_res, opt.model1_res)
+ self.netG_B = networks.define_G(opt.output_nc, opt.input_nc, opt.ngf, opt.netG, opt.norm,
+ not opt.no_dropout, opt.init_type, opt.init_gain, self.gpu_ids)
+
+ #if self.isTrain: # define discriminators
+ if self.isTrain or self.opt.check_D: # define discriminators
+ self.netD_A = networks.define_D(opt.output_nc, opt.ndf, opt.netda,
+ opt.n_layers_D, opt.norm, opt.init_type, opt.init_gain, self.gpu_ids, n_class=3)
+ self.netD_B = networks.define_D(opt.input_nc, opt.ndf, opt.netD,
+ opt.n_layers_D, opt.norm, opt.init_type, opt.init_gain, self.gpu_ids)
+ if self.opt.use_mask:
+ if self.opt.mask_type in [2, 3]:
+ output_nc = opt.output_nc + 1
+ else:
+ output_nc = opt.output_nc
+ self.netD_A_l = networks.define_D(output_nc, opt.ndf, opt.netD,
+ opt.n_layers_D, opt.norm, opt.init_type, opt.init_gain, self.gpu_ids)
+ if self.opt.use_eye_mask:
+ if self.opt.mask_type in [2, 3]:
+ output_nc = opt.output_nc + 1
+ else:
+ output_nc = opt.output_nc
+ self.netD_A_le = networks.define_D(output_nc, opt.ndf, opt.netD,
+ opt.n_layers_D, opt.norm, opt.init_type, opt.init_gain, self.gpu_ids)
+ if self.opt.use_lip_mask:
+ if self.opt.mask_type in [2, 3]:
+ output_nc = opt.output_nc + 1
+ else:
+ output_nc = opt.output_nc
+ self.netD_A_ll = networks.define_D(output_nc, opt.ndf, opt.netD,
+ opt.n_layers_D, opt.norm, opt.init_type, opt.init_gain, self.gpu_ids)
+
+ if self.isTrain and self.opt.metric:
+ if not opt.metric_resnext and not opt.metric_resnet and not opt.metric_inception:
+ self.metric = networks.define_inception_v3a(init_weights_='./checkpoints/metric/'+self.opt.metric_model_path,gpu_ids_ = self.gpu_ids,vec=self.opt.metricvec)
+ elif opt.metric_resnext:
+ self.metric = networks.define_resnext101a(init_weights_='./checkpoints/metric/'+self.opt.metric_model_path,gpu_ids_ = self.gpu_ids,vec=self.opt.metricvec)
+ elif opt.metric_resnet:
+ self.metric = networks.define_resnet101a(init_weights_='./checkpoints/metric/'+self.opt.metric_model_path,gpu_ids_ = self.gpu_ids,vec=self.opt.metricvec)
+ elif opt.metric_inception:
+ self.metric = networks.define_inception3a(init_weights_='./checkpoints/metric/'+self.opt.metric_model_path,gpu_ids_ = self.gpu_ids,vec=self.opt.metricvec)
+ self.metric.eval()
+ self.set_requires_grad(self.metric, False)
+
+ if not self.isTrain and self.opt.check_D:
+ self.criterionGAN = networks.GANLoss('lsgan').to(self.device)
+
+ if self.isTrain:
+ if opt.lambda_identity > 0.0: # only works when input and output images have the same number of channels
+ assert(opt.input_nc == opt.output_nc)
+ self.fake_A_pool = ImagePool(opt.pool_size) # create image buffer to store previously generated images
+ self.fake_B_pool = ImagePool(opt.pool_size) # create image buffer to store previously generated images
+ # define loss functions
+ self.criterionGAN = networks.GANLoss(opt.gan_mode).to(self.device) # define GAN loss.
+ self.criterionCycle = torch.nn.L1Loss()
+ self.criterionIdt = torch.nn.L1Loss()
+ self.criterionCls = torch.nn.CrossEntropyLoss()
+ self.criterionCls2 = torch.nn.CrossEntropyLoss(reduction='none')
+            # initialize optimizers; schedulers will be automatically created by function <BaseModel.setup>.
+ self.optimizer_G = torch.optim.Adam(itertools.chain(self.netG_A.parameters(), self.netG_B.parameters()), lr=opt.lr, betas=(opt.beta1, 0.999))
+ if not self.opt.use_mask:
+ self.optimizer_D = torch.optim.Adam(itertools.chain(self.netD_A.parameters(), self.netD_B.parameters()), lr=opt.lr, betas=(opt.beta1, 0.999))
+ elif not self.opt.use_eye_mask:
+ D_params = list(self.netD_A.parameters()) + list(self.netD_B.parameters()) + list(self.netD_A_l.parameters())
+ self.optimizer_D = torch.optim.Adam(D_params, lr=opt.lr, betas=(opt.beta1, 0.999))
+ elif not self.opt.use_lip_mask:
+ D_params = list(self.netD_A.parameters()) + list(self.netD_B.parameters()) + list(self.netD_A_l.parameters()) + list(self.netD_A_le.parameters())
+ self.optimizer_D = torch.optim.Adam(D_params, lr=opt.lr, betas=(opt.beta1, 0.999))
+ else:
+ D_params = list(self.netD_A.parameters()) + list(self.netD_B.parameters()) + list(self.netD_A_l.parameters()) + list(self.netD_A_le.parameters()) + list(self.netD_A_ll.parameters())
+ self.optimizer_D = torch.optim.Adam(D_params, lr=opt.lr, betas=(opt.beta1, 0.999))
+ self.optimizers.append(self.optimizer_G)
+ self.optimizers.append(self.optimizer_D)
+
+ if self.opt.perceptual_cycle:
+ if self.opt.perceptual_cycle in [1,2,3,6]:
+ self.lpips = dm.DistModel(opt,model='net-lin',net='alex',use_gpu=True)
+ elif self.opt.perceptual_cycle in [4,5,8]:
+ self.vgg = networks.define_VGG(init_weights_=opt.vgg_pretrained_mode, feature_mode_=True, gpu_ids_=self.gpu_ids) # using conv4_4 layer
+
+ if self.opt.use_hed:
+ #self.hed = networks.define_HED(init_weights_=opt.hed_pretrained_mode, gpu_ids_=self.gpu_ids)
+ self.hed = networks.define_HED(init_weights_=opt.hed_pretrained_mode, gpu_ids_=self.opt.gpu_ids_p)
+ self.set_requires_grad(self.hed, False)
+
+
+ def set_input(self, input):
+ """Unpack input data from the dataloader and perform necessary pre-processing steps.
+
+ Parameters:
+ input (dict): include the data itself and its metadata information.
+
+ The option 'direction' can be used to swap domain A and domain B.
+ """
+ AtoB = self.opt.direction == 'AtoB'
+ self.real_A = input['A' if AtoB else 'B'].to(self.device)
+ self.real_B = input['B' if AtoB else 'A'].to(self.device)
+ self.image_paths = input['A_paths' if AtoB else 'B_paths']
+ if self.opt.use_mask:
+ self.A_mask = input['A_mask'].to(self.device)
+ self.B_mask = input['B_mask'].to(self.device)
+ if self.opt.use_eye_mask:
+ self.A_maske = input['A_maske'].to(self.device)
+ self.B_maske = input['B_maske'].to(self.device)
+ if self.opt.use_lip_mask:
+ self.A_maskl = input['A_maskl'].to(self.device)
+ self.B_maskl = input['B_maskl'].to(self.device)
+ if self.opt.style_control:
+ self.real_B_style = input['B_style'].to(self.device)
+ self.real_B_label = input['B_label'].to(self.device)
+ if self.opt.isTrain and self.opt.style_loss_with_weight:
+ self.real_B_style0 = input['B_style0'].to(self.device)
+ self.zero = torch.zeros(self.real_B_label.size(),dtype=torch.int64).to(self.device)
+ self.one = torch.ones(self.real_B_label.size(),dtype=torch.int64).to(self.device)
+ self.two = 2*torch.ones(self.real_B_label.size(),dtype=torch.int64).to(self.device)
+ if self.opt.isTrain and self.opt.metricvec:
+ self.vec = input['vec'].to(self.device)
+ if self.opt.isTrain and self.opt.metric_inmask:
+ self.A_maskfg = input['A_maskfg'].to(self.device)
+
+ def forward(self):
+ """Run forward pass; called by both functions and ."""
+ if not self.opt.style_control:
+ self.fake_B = self.netG_A(self.real_A) # G_A(A)
+ else:
+ #print(torch.mean(self.real_B_style,(2,3)),'style_control')
+ #print(self.real_B_style,'style_control')
+ self.fake_B = self.netG_A(self.real_A, self.real_B_style)
+ self.rec_A = self.netG_B(self.fake_B) # G_B(G_A(A))
+ self.fake_A = self.netG_B(self.real_B) # G_B(B)
+ if not self.opt.style_control:
+ self.rec_B = self.netG_A(self.fake_A) # G_A(G_B(B))
+ else:
+ #print(torch.mean(self.real_B_style,(2,3)),'style_control')
+ self.rec_B = self.netG_A(self.fake_A, self.real_B_style) # -- cycle_B loss
+
+ if self.opt.use_mask:
+ self.fake_B_l = self.masked(self.fake_B,self.A_mask)
+ self.real_B_l = self.masked(self.real_B,self.B_mask)
+ if self.opt.use_eye_mask:
+ self.fake_B_le = self.masked(self.fake_B,self.A_maske)
+ self.real_B_le = self.masked(self.real_B,self.B_maske)
+ if self.opt.use_lip_mask:
+ self.fake_B_ll = self.masked(self.fake_B,self.A_maskl)
+ self.real_B_ll = self.masked(self.real_B,self.B_maskl)
+
+ def backward_D_basic(self, netD, real, fake):
+ """Calculate GAN loss for the discriminator
+
+ Parameters:
+ netD (network) -- the discriminator D
+ real (tensor array) -- real images
+ fake (tensor array) -- images generated by a generator
+
+ Return the discriminator loss.
+ We also call loss_D.backward() to calculate the gradients.
+ """
+ # Real
+ pred_real = netD(real)
+ loss_D_real = self.criterionGAN(pred_real, True)
+ # Fake
+ pred_fake = netD(fake.detach())
+ loss_D_fake = self.criterionGAN(pred_fake, False)
+ # Combined loss and calculate gradients
+ loss_D = (loss_D_real + loss_D_fake) * 0.5
+ loss_D.backward()
+ return loss_D
+
+ def backward_D_basic_cls(self, netD, real, fake):
+ # Real
+ pred_real, pred_real_cls = netD(real)
+ loss_D_real = self.criterionGAN(pred_real, True)
+ if not self.opt.style_loss_with_weight:
+ loss_D_real_cls = self.criterionCls(pred_real_cls, self.real_B_label)
+ else:
+ loss_D_real_cls = torch.mean(self.real_B_style0[:,0] * self.criterionCls2(pred_real_cls, self.zero) + self.real_B_style0[:,1] * self.criterionCls2(pred_real_cls, self.one) + self.real_B_style0[:,2] * self.criterionCls2(pred_real_cls, self.two))
+ # Fake
+ pred_fake, pred_fake_cls = netD(fake.detach())
+ loss_D_fake = self.criterionGAN(pred_fake, False)
+ if not self.opt.style_loss_with_weight:
+ loss_D_fake_cls = self.criterionCls(pred_fake_cls, self.real_B_label)
+ else:
+ loss_D_fake_cls = torch.mean(self.real_B_style0[:,0] * self.criterionCls2(pred_fake_cls, self.zero) + self.real_B_style0[:,1] * self.criterionCls2(pred_fake_cls, self.one) + self.real_B_style0[:,2] * self.criterionCls2(pred_fake_cls, self.two))
+ # Combined loss and calculate gradients
+ loss_D = (loss_D_real + loss_D_fake) * 0.5
+ loss_D_cls = (loss_D_real_cls + loss_D_fake_cls) * 0.5
+ loss_D_total = loss_D + loss_D_cls
+ loss_D_total.backward()
+ return loss_D, loss_D_cls
+
+ def backward_D_A(self):
+ """Calculate GAN loss for discriminator D_A"""
+ fake_B = self.fake_B_pool.query(self.fake_B)
+ self.loss_D_A, self.loss_D_A_cls = self.backward_D_basic_cls(self.netD_A, self.real_B, fake_B)
+
+ def backward_D_A_l(self):
+ """Calculate GAN loss for discriminator D_A_l"""
+ fake_B = self.fake_B_pool.query(self.fake_B)
+ self.loss_D_A_l = self.backward_D_basic(self.netD_A_l, self.masked(self.real_B,self.B_mask), self.masked(fake_B,self.A_mask))
+
+ def backward_D_A_le(self):
+ """Calculate GAN loss for discriminator D_A_le"""
+ fake_B = self.fake_B_pool.query(self.fake_B)
+ self.loss_D_A_le = self.backward_D_basic(self.netD_A_le, self.masked(self.real_B,self.B_maske), self.masked(fake_B,self.A_maske))
+
+ def backward_D_A_ll(self):
+ """Calculate GAN loss for discriminator D_A_ll"""
+ fake_B = self.fake_B_pool.query(self.fake_B)
+ self.loss_D_A_ll = self.backward_D_basic(self.netD_A_ll, self.masked(self.real_B,self.B_maskl), self.masked(fake_B,self.A_maskl))
+
+ def backward_D_B(self):
+ """Calculate GAN loss for discriminator D_B"""
+ fake_A = self.fake_A_pool.query(self.fake_A)
+ self.loss_D_B = self.backward_D_basic(self.netD_B, self.real_A, fake_A)
+
+ def update_process(self, epoch):
+ self.process = (epoch - 1) / float(self.opt.niter_decay + self.opt.niter)
+
+ def backward_G(self):
+ """Calculate the loss for generators G_A and G_B"""
+ lambda_idt = self.opt.lambda_identity
+ lambda_G_A_l = self.opt.lambda_G_A_l
+ lambda_A = self.opt.lambda_A
+ lambda_B = self.opt.lambda_B
+ lambda_A_trunc = self.opt.lambda_A_trunc
+ if self.opt.ntrunc_trunc:
+ lambda_A = lambda_A * (1 - self.process * 0.9)
+ lambda_A_trunc = lambda_A_trunc * self.process * 0.9
+ self.lambda_As = [lambda_A, lambda_A_trunc]
+ # Identity loss
+ if lambda_idt > 0:
+ # G_A should be identity if real_B is fed: ||G_A(B) - B||
+ self.idt_A = self.netG_A(self.real_B)
+ self.loss_idt_A = self.criterionIdt(self.idt_A, self.real_B) * lambda_B * lambda_idt
+ # G_B should be identity if real_A is fed: ||G_B(A) - A||
+ self.idt_B = self.netG_B(self.real_A)
+ self.loss_idt_B = self.criterionIdt(self.idt_B, self.real_A) * lambda_A * lambda_idt
+ else:
+ self.loss_idt_A = 0
+ self.loss_idt_B = 0
+
+ # GAN loss D_A(G_A(A))
+ pred_fake, pred_fake_cls = self.netD_A(self.fake_B)
+ self.loss_G_A = self.criterionGAN(pred_fake, True)
+ if not self.opt.style_loss_with_weight:
+ self.loss_G_A_cls = self.criterionCls(pred_fake_cls, self.real_B_label)
+ else:
+ self.loss_G_A_cls = torch.mean(self.real_B_style0[:,0] * self.criterionCls2(pred_fake_cls, self.zero) + self.real_B_style0[:,1] * self.criterionCls2(pred_fake_cls, self.one) + self.real_B_style0[:,2] * self.criterionCls2(pred_fake_cls, self.two))
+ if self.opt.use_mask:
+ self.loss_G_A_l = self.criterionGAN(self.netD_A_l(self.fake_B_l), True) * lambda_G_A_l
+ if self.opt.use_eye_mask:
+ self.loss_G_A_le = self.criterionGAN(self.netD_A_le(self.fake_B_le), True) * lambda_G_A_l
+ if self.opt.use_lip_mask:
+ self.loss_G_A_ll = self.criterionGAN(self.netD_A_ll(self.fake_B_ll), True) * lambda_G_A_l
+ # GAN loss D_B(G_B(B))
+ self.loss_G_B = self.criterionGAN(self.netD_B(self.fake_A), True)
+ # Forward cycle loss || G_B(G_A(A)) - A||
+ if self.opt.perceptual_cycle == 0:
+ self.loss_cycle_A = self.criterionCycle(self.rec_A, self.real_A) * lambda_A
+ if self.opt.ntrunc_trunc:
+ self.rec_At = self.netG_B(truncate(self.fake_B,self.opt.trunc_a))
+ self.loss_cycle_A2 = self.criterionCycle(self.rec_At, self.real_A) * lambda_A_trunc
+ else:
+ if self.opt.perceptual_cycle == 1:
+ self.loss_cycle_A = self.lpips.forward_pair(self.rec_A, self.real_A).mean() * lambda_A
+ if self.opt.ntrunc_trunc:
+ self.rec_At = self.netG_B(truncate(self.fake_B,self.opt.trunc_a))
+ self.loss_cycle_A2 = self.lpips.forward_pair(self.rec_At, self.real_A).mean() * lambda_A_trunc
+ elif self.opt.perceptual_cycle == 2:
+ ts = self.real_A.shape
+ rec_A = (self.rec_A[:,0,:,:]*0.299+self.rec_A[:,1,:,:]*0.587+self.rec_A[:,2,:,:]*0.114).unsqueeze(0)
+ real_A = (self.real_A[:,0,:,:]*0.299+self.real_A[:,1,:,:]*0.587+self.real_A[:,2,:,:]*0.114).unsqueeze(0)
+ self.loss_cycle_A = self.lpips.forward_pair(rec_A.expand(ts), real_A.expand(ts)).mean() * lambda_A
+ elif self.opt.perceptual_cycle == 3 and self.opt.use_hed:
+ ts = self.real_A.shape
+ #[-1,1]->[0,1]->[-1,1]
+ rec_A_hed = (self.hed(self.rec_A/2+0.5)-0.5)*2
+ real_A_hed = (self.hed(self.real_A/2+0.5)-0.5)*2
+ self.loss_cycle_A = self.lpips.forward_pair(rec_A_hed.expand(ts), real_A_hed.expand(ts)).mean() * lambda_A
+ self.rec_A_hed = rec_A_hed
+ self.real_A_hed = real_A_hed
+ print(lambda_A)
+ elif self.opt.perceptual_cycle == 4:
+ x_a_feature = self.vgg(self.real_A)
+ g_a_feature = self.vgg(self.rec_A)
+ self.loss_cycle_A = self.criterionCycle(g_a_feature, x_a_feature.detach()) * lambda_A
+ elif self.opt.perceptual_cycle == 5 and self.opt.use_hed:
+ ts = self.real_A.shape
+ rec_A_hed = (self.hed(self.rec_A/2+0.5)-0.5)*2
+ real_A_hed = (self.hed(self.real_A/2+0.5)-0.5)*2
+ x_a_feature = self.vgg(real_A_hed.expand(ts))
+ g_a_feature = self.vgg(rec_A_hed.expand(ts))
+ self.loss_cycle_A = self.criterionCycle(g_a_feature, x_a_feature.detach()) * lambda_A
+ self.rec_A_hed = rec_A_hed
+ self.real_A_hed = real_A_hed
+ elif self.opt.perceptual_cycle == 6 and self.opt.use_hed and self.opt.ntrunc_trunc:
+ ts = self.real_A.shape
+ gpu_p = self.opt.gpu_ids_p[0]
+ gpu = self.opt.gpu_ids[0]
+ rec_A_hed = (self.hed(self.rec_A.cuda(gpu_p)/2+0.5)-0.5)*2
+ real_A_hed = (self.hed(self.real_A.cuda(gpu_p)/2+0.5)-0.5)*2
+ self.rec_At = self.netG_B(truncate(self.fake_B,self.opt.trunc_a))
+ rec_At_hed = (self.hed(self.rec_At.cuda(gpu_p)/2+0.5)-0.5)*2
+ self.loss_cycle_A = (self.lpips.forward_pair(rec_A_hed.expand(ts), real_A_hed.expand(ts)).mean()).cuda(gpu) * lambda_A
+ self.loss_cycle_A2 = (self.lpips.forward_pair(rec_At_hed.expand(ts), real_A_hed.expand(ts)).mean()).cuda(gpu) * lambda_A_trunc
+ self.rec_A_hed = rec_A_hed
+ self.real_A_hed = real_A_hed
+ self.rec_At_hed = rec_At_hed
+ elif self.opt.perceptual_cycle == 8 and self.opt.use_hed and self.opt.ntrunc_trunc:
+ ts = self.real_A.shape
+ rec_A_hed = (self.hed(self.rec_A/2+0.5)-0.5)*2
+ real_A_hed = (self.hed(self.real_A/2+0.5)-0.5)*2
+ self.rec_At = self.netG_B(truncate(self.fake_B,self.opt.trunc_a))
+ rec_At_hed = (self.hed(self.rec_At/2+0.5)-0.5)*2
+ x_a_feature = self.vgg(real_A_hed.expand(ts))
+ g_a_feature = self.vgg(rec_A_hed.expand(ts))
+ gt_a_feature = self.vgg(rec_At_hed.expand(ts))
+ self.loss_cycle_A = self.criterionCycle(g_a_feature, x_a_feature.detach()) * lambda_A
+ self.loss_cycle_A2 = self.criterionCycle(gt_a_feature, x_a_feature.detach()) * lambda_A_trunc
+ self.rec_A_hed = rec_A_hed
+ self.real_A_hed = real_A_hed
+ self.rec_At_hed = rec_At_hed
+
+ # Backward cycle loss || G_A(G_B(B)) - B||
+ self.loss_cycle_B = self.criterionCycle(self.rec_B, self.real_B) * lambda_B
+
+ # Metric loss, metric higher better
+ if self.opt.metric:
+ self.fake_B2 = self.fake_B.clone()
+ if self.opt.metric_inmask:
+ # background black
+ #self.fake_B2 = (self.fake_B2/2+0.5)*self.A_maskfg*2-1
+ # background white
+ self.fake_B2 = ((self.fake_B2/2+0.5)*self.A_maskfg+1-self.A_maskfg)*2-1
+            if not self.opt.metric_resnext and not self.opt.metric_resnet: # for two versions of inception (during training input is [-1,1])
+ self.fake_B2 = torch.nn.functional.interpolate(input=self.fake_B2, size=(299, 299), mode='bilinear', align_corners=False)
+ self.fake_B2 = self.fake_B2.repeat(1,3,1,1)
+ else: # for resnet and resnext
+ self.fake_B2 = torch.nn.functional.interpolate(input=self.fake_B2, size=(224, 224), mode='bilinear', align_corners=False)
+ x = self.fake_B2.repeat(1,3,1,1)
+ # [-1,1] -> [0,1] -> mean [0.485,0.456,0.406], std [0.229,0.224,0.225]
+ x_ch0 = (torch.unsqueeze(x[:, 0],1)*0.5+0.5-0.485)/0.229
+ x_ch1 = (torch.unsqueeze(x[:, 1],1)*0.5+0.5-0.456)/0.224
+ x_ch2 = (torch.unsqueeze(x[:, 2],1)*0.5+0.5-0.406)/0.225
+ self.fake_B2 = torch.cat((x_ch0, x_ch1, x_ch2, x[:, 3:]), 1)
+
+
+ if not self.opt.metricvec:
+ pred = self.metric(self.fake_B2)
+ else:
+ pred = self.metric(torch.cat((self.fake_B2, self.vec),1))
+ self.loss_metric = torch.mean((1-pred)) * self.opt.lambda_metric
+
+ # combined loss and calculate gradients
+ self.loss_G = self.loss_G_A + self.loss_G_B + self.loss_cycle_A + self.loss_cycle_B + self.loss_idt_A + self.loss_idt_B
+ if getattr(self,'loss_cycle_A2',-1) != -1:
+ self.loss_G = self.loss_G + self.loss_cycle_A2
+ if getattr(self,'loss_G_A_l',-1) != -1:
+ self.loss_G = self.loss_G + self.loss_G_A_l
+ if getattr(self,'loss_G_A_le',-1) != -1:
+ self.loss_G = self.loss_G + self.loss_G_A_le
+ if getattr(self,'loss_G_A_ll',-1) != -1:
+ self.loss_G = self.loss_G + self.loss_G_A_ll
+ if getattr(self,'loss_G_A_cls',-1) != -1:
+ self.loss_G = self.loss_G + self.loss_G_A_cls
+ if getattr(self,'loss_metric',-1) != -1:
+ self.loss_G = self.loss_G + self.loss_metric
+ self.loss_G.backward()
+
+ def optimize_parameters(self):
+ """Calculate losses, gradients, and update network weights; called in every training iteration"""
+ # forward
+ self.forward() # compute fake images and reconstruction images.
+ # G_A and G_B
+ self.set_requires_grad([self.netD_A, self.netD_B], False) # Ds require no gradients when optimizing Gs
+ if self.opt.use_mask:
+ self.set_requires_grad([self.netD_A_l], False)
+ if self.opt.use_eye_mask:
+ self.set_requires_grad([self.netD_A_le], False)
+ if self.opt.use_lip_mask:
+ self.set_requires_grad([self.netD_A_ll], False)
+ self.optimizer_G.zero_grad() # set G_A and G_B's gradients to zero
+ self.backward_G() # calculate gradients for G_A and G_B
+ self.optimizer_G.step() # update G_A and G_B's weights
+ # D_A and D_B
+ self.set_requires_grad([self.netD_A, self.netD_B], True)
+ if self.opt.use_mask:
+ self.set_requires_grad([self.netD_A_l], True)
+ if self.opt.use_eye_mask:
+ self.set_requires_grad([self.netD_A_le], True)
+ if self.opt.use_lip_mask:
+ self.set_requires_grad([self.netD_A_ll], True)
+ self.optimizer_D.zero_grad() # set D_A and D_B's gradients to zero
+ self.backward_D_A() # calculate gradients for D_A
+ if self.opt.use_mask:
+ self.backward_D_A_l()# calculate gradients for D_A_l
+ if self.opt.use_eye_mask:
+ self.backward_D_A_le()# calculate gradients for D_A_le
+ if self.opt.use_lip_mask:
+ self.backward_D_A_ll()# calculate gradients for D_A_ll
+        self.backward_D_B()      # calculate gradients for D_B
+ self.optimizer_D.step() # update D_A and D_B's weights
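The optimize_parameters method above follows the usual CycleGAN recipe: one generator step with all discriminators frozen, then one discriminator step on pooled fakes. A minimal driver-loop sketch (not part of the repository; it assumes `model` is an instance of the model class defined in this file, `dataset` yields the dicts consumed by set_input, and `opt` is the parsed option object):

n_epochs = opt.niter + opt.niter_decay
for epoch in range(1, n_epochs + 1):
    model.update_process(epoch)       # anneals lambda_A / lambda_A_trunc when ntrunc_trunc is set
    for data in dataset:
        model.set_input(data)         # move inputs (and optional masks/styles) to the device
        model.optimize_parameters()   # forward, update G_A/G_B, then D_A/D_B and the mask discriminators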
diff --git a/hi-arm/qmupd_vs/models/dist_model.py b/hi-arm/qmupd_vs/models/dist_model.py
new file mode 100644
index 0000000000000000000000000000000000000000..e61d5de0214978ef071cb520dcbed77882c59836
--- /dev/null
+++ b/hi-arm/qmupd_vs/models/dist_model.py
@@ -0,0 +1,323 @@
+
+from __future__ import absolute_import
+
+import sys
+sys.path.append('..')
+sys.path.append('.')
+import numpy as np
+import torch
+from torch import nn
+from collections import OrderedDict
+from torch.autograd import Variable
+from .base_model import BaseModel
+from scipy.ndimage import zoom
+import skimage.transform
+
+from . import networks_basic as networks
+# from PerceptualSimilarity.util import util
+from util import util
+
+class DistModel(BaseModel):
+ def name(self):
+ return self.model_name
+
+ def __init__(self, opt, model='net-lin', net='alex', pnet_rand=False, pnet_tune=False, model_path=None, colorspace='Lab', use_gpu=True, printNet=False, spatial=False, spatial_shape=None, spatial_order=1, spatial_factor=None, is_train=False, lr=.0001, beta1=0.5, version='0.1'):
+ '''
+ INPUTS
+ model - ['net-lin'] for linearly calibrated network
+ ['net'] for off-the-shelf network
+ ['L2'] for L2 distance in Lab colorspace
+ ['SSIM'] for ssim in RGB colorspace
+ net - ['squeeze','alex','vgg']
+ model_path - if None, will look in weights/[NET_NAME].pth
+ colorspace - ['Lab','RGB'] colorspace to use for L2 and SSIM
+ use_gpu - bool - whether or not to use a GPU
+ printNet - bool - whether or not to print network architecture out
+ spatial - bool - whether to output an array containing varying distances across spatial dimensions
+ spatial_shape - if given, output spatial shape. if None then spatial shape is determined automatically via spatial_factor (see below).
+ spatial_factor - if given, specifies upsampling factor relative to the largest spatial extent of a convolutional layer. if None then resized to size of input images.
+ spatial_order - spline order of filter for upsampling in spatial mode, by default 1 (bilinear).
+ is_train - bool - [True] for training mode
+ lr - float - initial learning rate
+ beta1 - float - initial momentum term for adam
+ version - 0.1 for latest, 0.0 was original
+ '''
+ BaseModel.__init__(self, opt)
+
+ self.model = model
+ self.net = net
+ self.use_gpu = use_gpu
+ self.is_train = is_train
+ self.spatial = spatial
+ self.spatial_shape = spatial_shape
+ self.spatial_order = spatial_order
+ self.spatial_factor = spatial_factor
+
+ self.model_name = '%s [%s]'%(model,net)
+ if(self.model == 'net-lin'): # pretrained net + linear layer
+ #self.device = torch.device('cuda:{}'.format(opt.gpu_ids[0])) if opt.gpu_ids else torch.device('cpu')
+ self.device = torch.device('cuda:{}'.format(opt.gpu_ids_p[0])) if opt.gpu_ids_p else torch.device('cpu')
+ self.net = networks.PNetLin(pnet_rand=pnet_rand, pnet_tune=pnet_tune, pnet_type=net,use_dropout=True,spatial=spatial,version=version,lpips=True).to(self.device)
+ kw = {}
+
+ if not use_gpu:
+ kw['map_location'] = 'cpu'
+ if(model_path is None):
+ import inspect
+ #model_path = os.path.abspath(os.path.join(inspect.getfile(self.initialize), '..', '..', 'weights/v%s/%s.pth'%(version,net)))
+ model_path = './checkpoints/weights/v%s/%s.pth'%(version,net)
+
+ if(not is_train):
+ print('Loading model from: %s'%model_path)
+ #self.net.load_state_dict(torch.load(model_path, **kw))
+ state_dict = torch.load(model_path, map_location=str(self.device))
+ self.net.load_state_dict(state_dict, strict=False)
+
+ elif(self.model=='net'): # pretrained network
+ assert not self.spatial, 'spatial argument not supported yet for uncalibrated networks'
+ self.net = networks.PNet(use_gpu=use_gpu,pnet_type=net,device=self.device)
+ self.is_fake_net = True
+ elif(self.model in ['L2','l2']):
+ self.net = networks.L2(use_gpu=use_gpu,colorspace=colorspace,device=self.device) # not really a network, only for testing
+ self.model_name = 'L2'
+ elif(self.model in ['DSSIM','dssim','SSIM','ssim']):
+ self.net = networks.DSSIM(use_gpu=use_gpu,colorspace=colorspace,device=self.device)
+ self.model_name = 'SSIM'
+ else:
+ raise ValueError("Model [%s] not recognized." % self.model)
+
+ self.parameters = list(self.net.parameters())
+
+ if self.is_train: # training mode
+ # extra network on top to go from distances (d0,d1) => predicted human judgment (h*)
+ self.rankLoss = networks.BCERankingLoss(use_gpu=use_gpu,device=self.device)
+ self.parameters+=self.rankLoss.parameters
+ self.lr = lr
+ self.old_lr = lr
+ self.optimizer_net = torch.optim.Adam(self.parameters, lr=lr, betas=(beta1, 0.999))
+ else: # test mode
+ self.net.eval()
+
+ if(printNet):
+ print('---------- Networks initialized -------------')
+ networks.print_network(self.net)
+ print('-----------------------------------------------')
+
+ def forward_pair(self,in1,in2,retPerLayer=False):
+ if(retPerLayer):
+ return self.net.forward(in1,in2, retPerLayer=True)
+ else:
+ return self.net.forward(in1,in2)
+
+ def forward(self, in0, in1, retNumpy=False):
+ ''' Function computes the distance between image patches in0 and in1
+ INPUTS
+ in0, in1 - torch.Tensor object of shape Nx3xXxY - image patch scaled to [-1,1]
+ retNumpy - [False] to return as torch.Tensor, [True] to return as numpy array
+ OUTPUT
+ computed distances between in0 and in1
+ '''
+
+ self.input_ref = in0
+ self.input_p0 = in1
+
+ self.var_ref = Variable(self.input_ref,requires_grad=True)
+ self.var_p0 = Variable(self.input_p0,requires_grad=True)
+
+ self.d0 = self.forward_pair(self.var_ref, self.var_p0)
+ self.loss_total = self.d0
+
+ def convert_output(d0):
+ if(retNumpy):
+ ans = d0.cpu().data.numpy()
+ if not self.spatial:
+ ans = ans.flatten()
+ else:
+ assert(ans.shape[0] == 1 and len(ans.shape) == 4)
+ return ans[0,...].transpose([1, 2, 0]) # Reshape to usual numpy image format: (height, width, channels)
+ return ans
+ else:
+ return d0
+
+ if self.spatial:
+ L = [convert_output(x) for x in self.d0]
+ spatial_shape = self.spatial_shape
+ if spatial_shape is None:
+ if(self.spatial_factor is None):
+ spatial_shape = (in0.size()[2],in0.size()[3])
+ else:
+ spatial_shape = (max([x.shape[0] for x in L])*self.spatial_factor, max([x.shape[1] for x in L])*self.spatial_factor)
+
+ L = [skimage.transform.resize(x, spatial_shape, order=self.spatial_order, mode='edge') for x in L]
+
+ L = np.mean(np.concatenate(L, 2) * len(L), 2)
+ return L
+ else:
+ return convert_output(self.d0)
+
+ # ***** TRAINING FUNCTIONS *****
+ def optimize_parameters(self):
+ self.forward_train()
+ self.optimizer_net.zero_grad()
+ self.backward_train()
+ self.optimizer_net.step()
+ self.clamp_weights()
+
+ def clamp_weights(self):
+ for module in self.net.modules():
+ if(hasattr(module, 'weight') and module.kernel_size==(1,1)):
+ module.weight.data = torch.clamp(module.weight.data,min=0)
+
+ def set_input(self, data):
+ self.input_ref = data['ref']
+ self.input_p0 = data['p0']
+ self.input_p1 = data['p1']
+ self.input_judge = data['judge']
+
+ if(self.use_gpu):
+ self.input_ref = self.input_ref.cuda(self.device)
+ self.input_p0 = self.input_p0.cuda(self.device)
+ self.input_p1 = self.input_p1.cuda(self.device)
+ self.input_judge = self.input_judge.cuda(self.device)
+
+ self.var_ref = Variable(self.input_ref,requires_grad=True)
+ self.var_p0 = Variable(self.input_p0,requires_grad=True)
+ self.var_p1 = Variable(self.input_p1,requires_grad=True)
+
+ def forward_train(self): # run forward pass
+ self.d0 = self.forward_pair(self.var_ref, self.var_p0)
+ self.d1 = self.forward_pair(self.var_ref, self.var_p1)
+ self.acc_r = self.compute_accuracy(self.d0,self.d1,self.input_judge)
+
+ # var_judge
+ self.var_judge = Variable(1.*self.input_judge).view(self.d0.size())
+
+ self.loss_total = self.rankLoss.forward(self.d0, self.d1, self.var_judge*2.-1.)
+ return self.loss_total
+
+ def backward_train(self):
+ torch.mean(self.loss_total).backward()
+
+ def compute_accuracy(self,d0,d1,judge):
+ ''' d0, d1 are Variables, judge is a Tensor '''
+        d1_lt_d0 = (d1<d0).cpu().data.numpy().flatten()
+        judge_per = judge.cpu().numpy().flatten()
+        return d1_lt_d0*judge_per + (1-d1_lt_d0)*(1-judge_per)
+
+    def update_learning_rate(self,nepoch_decay):
+        lrd = self.lr / nepoch_decay
+        lr = self.old_lr - lrd
+        for param_group in self.optimizer_net.param_groups:
+            param_group['lr'] = lr
+        print('update lr [%s] decay: %f -> %f' % (type,self.old_lr, lr))
+        self.old_lr = lr
+
+
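For reference, the training code earlier in this diff consumes DistModel along these lines (see the perceptual_cycle branches). A CPU sketch, assuming the LPIPS weights are present at ./checkpoints/weights/v0.1/alex.pth and `opt` is the repository's parsed option object with an empty gpu_ids_p list:

import torch

lpips = DistModel(opt, model='net-lin', net='alex', use_gpu=False)
img0 = 2 * torch.rand(1, 3, 64, 64) - 1   # image batches scaled to [-1, 1]
img1 = 2 * torch.rand(1, 3, 64, 64) - 1
d = lpips.forward_pair(img0, img1)        # perceptual distance for each pair in the batch
print(float(d.mean()))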
+
+def score_2afc_dataset(data_loader,func):
+ ''' Function computes Two Alternative Forced Choice (2AFC) score using
+ distance function 'func' in dataset 'data_loader'
+ INPUTS
+ data_loader - CustomDatasetDataLoader object - contains a TwoAFCDataset inside
+ func - callable distance function - calling d=func(in0,in1) should take 2
+ pytorch tensors with shape Nx3xXxY, and return numpy array of length N
+ OUTPUTS
+ [0] - 2AFC score in [0,1], fraction of time func agrees with human evaluators
+ [1] - dictionary with following elements
+ d0s,d1s - N arrays containing distances between reference patch to perturbed patches
+ gts - N array in [0,1], preferred patch selected by human evaluators
+ (closer to "0" for left patch p0, "1" for right patch p1,
+ "0.6" means 60pct people preferred right patch, 40pct preferred left)
+ scores - N array in [0,1], corresponding to what percentage function agreed with humans
+ CONSTS
+ N - number of test triplets in data_loader
+ '''
+
+ d0s = []
+ d1s = []
+ gts = []
+
+ # bar = pb.ProgressBar(max_value=data_loader.load_data().__len__())
+ for (i,data) in enumerate(data_loader.load_data()):
+ d0s+=func(data['ref'],data['p0']).tolist()
+ d1s+=func(data['ref'],data['p1']).tolist()
+ gts+=data['judge'].cpu().numpy().flatten().tolist()
+ # bar.update(i)
+
+ d0s = np.array(d0s)
+ d1s = np.array(d1s)
+ gts = np.array(gts)
+    scores = (d0s<d1s)*(1.-gts) + (d1s<d0s)*gts + (d1s==d0s)*.5
+
+    return(np.mean(scores), dict(d0s=d0s,d1s=d1s,gts=gts,scores=scores))
diff --git a/hi-arm/qmupd_vs/models/networks.py b/hi-arm/qmupd_vs/models/networks.py
new file mode 100644
--- /dev/null
+++ b/hi-arm/qmupd_vs/models/networks.py
+import torch
+import torch.nn as nn
+from torch.nn import init
+import functools
+from torch.optim import lr_scheduler
+
+
+def get_scheduler(optimizer, opt):
+    """Return a learning rate scheduler
+
+    Parameters:
+        optimizer          -- the optimizer of the network
+        opt (option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions.
+                              opt.lr_policy is the name of learning rate policy: linear | step | plateau | cosine
+
+    For 'linear', we keep the same learning rate for the first <opt.niter> epochs
+    and linearly decay the rate to zero over the next <opt.niter_decay> epochs.
+    For other schedulers (step, plateau, and cosine), we use the default PyTorch schedulers.
+    See https://pytorch.org/docs/stable/optim.html for more details.
+    """
+ if opt.lr_policy == 'linear':
+ def lambda_rule(epoch):
+ lr_l = 1.0 - max(0, epoch + opt.epoch_count - opt.niter) / float(opt.niter_decay + 1)
+ return lr_l
+ scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_rule)
+ elif opt.lr_policy == 'step':
+ scheduler = lr_scheduler.StepLR(optimizer, step_size=opt.lr_decay_iters, gamma=0.1)
+ elif opt.lr_policy == 'plateau':
+ scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.2, threshold=0.01, patience=5)
+ elif opt.lr_policy == 'cosine':
+ scheduler = lr_scheduler.CosineAnnealingLR(optimizer, T_max=opt.niter, eta_min=0)
+ else:
+        raise NotImplementedError('learning rate policy [%s] is not implemented' % opt.lr_policy)
+ return scheduler
+
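A small worked example of the 'linear' policy above (the numbers are illustrative, not taken from the repository's configs): with niter = 100 constant-rate epochs, niter_decay = 100 and epoch_count = 1, the multiplier produced by lambda_rule stays at 1.0 for the first 100 epochs and then falls linearly towards zero.

niter, niter_decay, epoch_count = 100, 100, 1

def lambda_rule(epoch):
    return 1.0 - max(0, epoch + epoch_count - niter) / float(niter_decay + 1)

print(lambda_rule(0))     # 1.0   -> lr unchanged at the start
print(lambda_rule(99))    # 1.0   -> last constant-rate epoch
print(lambda_rule(149))   # ~0.50 -> halfway through the decay
print(lambda_rule(199))   # ~0.01 -> essentially zero at the end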
+
+def init_weights(net, init_type='normal', init_gain=0.02):
+ """Initialize network weights.
+
+ Parameters:
+ net (network) -- network to be initialized
+ init_type (str) -- the name of an initialization method: normal | xavier | kaiming | orthogonal
+ init_gain (float) -- scaling factor for normal, xavier and orthogonal.
+
+ We use 'normal' in the original pix2pix and CycleGAN paper. But xavier and kaiming might
+ work better for some applications. Feel free to try yourself.
+ """
+ def init_func(m): # define the initialization function
+ classname = m.__class__.__name__
+ if hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1):
+ if init_type == 'normal':
+ init.normal_(m.weight.data, 0.0, init_gain)
+ elif init_type == 'xavier':
+ init.xavier_normal_(m.weight.data, gain=init_gain)
+ elif init_type == 'kaiming':
+ init.kaiming_normal_(m.weight.data, a=0, mode='fan_in')
+ elif init_type == 'orthogonal':
+ init.orthogonal_(m.weight.data, gain=init_gain)
+ else:
+ raise NotImplementedError('initialization method [%s] is not implemented' % init_type)
+ if hasattr(m, 'bias') and m.bias is not None:
+ init.constant_(m.bias.data, 0.0)
+ elif classname.find('BatchNorm2d') != -1: # BatchNorm Layer's weight is not a matrix; only normal distribution applies.
+ init.normal_(m.weight.data, 1.0, init_gain)
+ init.constant_(m.bias.data, 0.0)
+
+ print('initialize network with %s' % init_type)
+ net.apply(init_func) # apply the initialization function
+
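A quick usage sketch for init_weights above on a throwaway module ('kaiming' is just one of the supported choices; Conv and Linear weights are re-drawn, BatchNorm weights are reset to N(1.0, init_gain)):

import torch.nn as nn

toy = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.BatchNorm2d(8),
    nn.ReLU(True),
    nn.Conv2d(8, 1, kernel_size=3, padding=1),
)
init_weights(toy, init_type='kaiming', init_gain=0.02)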
+
+def init_net(net, init_type='normal', init_gain=0.02, gpu_ids=[]):
+ """Initialize a network: 1. register CPU/GPU device (with multi-GPU support); 2. initialize the network weights
+ Parameters:
+ net (network) -- the network to be initialized
+ init_type (str) -- the name of an initialization method: normal | xavier | kaiming | orthogonal
+ gain (float) -- scaling factor for normal, xavier and orthogonal.
+ gpu_ids (int list) -- which GPUs the network runs on: e.g., 0,1,2
+
+ Return an initialized network.
+ """
+ if len(gpu_ids) > 0:
+ assert(torch.cuda.is_available())
+ net.to(gpu_ids[0])
+ net = torch.nn.DataParallel(net, gpu_ids) # multi-GPUs
+ init_weights(net, init_type, init_gain=init_gain)
+ return net
+
+
+def define_G(input_nc, output_nc, ngf, netG, norm='batch', use_dropout=False, init_type='normal', init_gain=0.02, gpu_ids=[], model0_res=0, model1_res=0, extra_channel=3):
+ """Create a generator
+
+ Parameters:
+ input_nc (int) -- the number of channels in input images
+ output_nc (int) -- the number of channels in output images
+ ngf (int) -- the number of filters in the last conv layer
+ netG (str) -- the architecture's name: resnet_9blocks | resnet_6blocks | unet_256 | unet_128
+ norm (str) -- the name of normalization layers used in the network: batch | instance | none
+ use_dropout (bool) -- if use dropout layers.
+ init_type (str) -- the name of our initialization method.
+ init_gain (float) -- scaling factor for normal, xavier and orthogonal.
+ gpu_ids (int list) -- which GPUs the network runs on: e.g., 0,1,2
+
+ Returns a generator
+
+ Our current implementation provides two types of generators:
+ U-Net: [unet_128] (for 128x128 input images) and [unet_256] (for 256x256 input images)
+ The original U-Net paper: https://arxiv.org/abs/1505.04597
+
+ Resnet-based generator: [resnet_6blocks] (with 6 Resnet blocks) and [resnet_9blocks] (with 9 Resnet blocks)
+ Resnet-based generator consists of several Resnet blocks between a few downsampling/upsampling operations.
+ We adapt Torch code from Justin Johnson's neural style transfer project (https://github.com/jcjohnson/fast-neural-style).
+
+
+    The generator has been initialized by <init_net>. It uses RELU for non-linearity.
+ """
+ net = None
+ norm_layer = get_norm_layer(norm_type=norm)
+
+ if netG == 'resnet_9blocks':
+ net = ResnetGenerator(input_nc, output_nc, ngf, norm_layer=norm_layer, use_dropout=use_dropout, n_blocks=9)
+ elif netG == 'resnet_8blocks':
+ net = ResnetGenerator(input_nc, output_nc, ngf, norm_layer=norm_layer, use_dropout=use_dropout, n_blocks=8)
+ elif netG == 'resnet_style_9blocks':
+ net = ResnetStyleGenerator(input_nc, output_nc, ngf, norm_layer=norm_layer, use_dropout=use_dropout, n_blocks=9, extra_channel=extra_channel)
+ elif netG == 'resnet_style2_9blocks':
+ net = ResnetStyle2Generator(input_nc, output_nc, ngf, norm_layer=norm_layer, use_dropout=use_dropout, n_blocks=9, model0_res=model0_res, extra_channel=extra_channel)
+ elif netG == 'resnet_style2_8blocks':
+ net = ResnetStyle2Generator(input_nc, output_nc, ngf, norm_layer=norm_layer, use_dropout=use_dropout, n_blocks=8, model0_res=model0_res, extra_channel=extra_channel)
+ elif netG == 'resnet_style2_10blocks':
+ net = ResnetStyle2Generator(input_nc, output_nc, ngf, norm_layer=norm_layer, use_dropout=use_dropout, n_blocks=10, model0_res=model0_res, extra_channel=extra_channel)
+ elif netG == 'resnet_style3decoder_9blocks':
+ net = ResnetStyle3DecoderGenerator(input_nc, output_nc, ngf, norm_layer=norm_layer, use_dropout=use_dropout, n_blocks=9, model0_res=model0_res)
+ elif netG == 'resnet_style2mc_9blocks':
+ net = ResnetStyle2MCGenerator(input_nc, output_nc, ngf, norm_layer=norm_layer, use_dropout=use_dropout, n_blocks=9, model0_res=model0_res, extra_channel=extra_channel)
+ elif netG == 'resnet_style2mc2_9blocks':
+ net = ResnetStyle2MC2Generator(input_nc, output_nc, ngf, norm_layer=norm_layer, use_dropout=use_dropout, n_blocks=9, model0_res=model0_res, model1_res=model1_res, extra_channel=extra_channel)
+ elif netG == 'resnet_6blocks':
+ net = ResnetGenerator(input_nc, output_nc, ngf, norm_layer=norm_layer, use_dropout=use_dropout, n_blocks=6)
+ elif netG == 'unet_128':
+ net = UnetGenerator(input_nc, output_nc, 7, ngf, norm_layer=norm_layer, use_dropout=use_dropout)
+ elif netG == 'unet_256':
+ net = UnetGenerator(input_nc, output_nc, 8, ngf, norm_layer=norm_layer, use_dropout=use_dropout)
+ else:
+ raise NotImplementedError('Generator model name [%s] is not recognized' % netG)
+ return init_net(net, init_type, init_gain, gpu_ids)
+
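A CPU usage sketch for define_G with the plain ResNet generator (the style-controlled variants additionally take a style tensor as a second input to forward); the channel counts here are illustrative:

import torch

netG = define_G(input_nc=3, output_nc=1, ngf=64, netG='resnet_9blocks',
                norm='instance', use_dropout=False, gpu_ids=[])
x = torch.randn(1, 3, 256, 256)
with torch.no_grad():
    y = netG(x)
print(y.shape)   # torch.Size([1, 1, 256, 256]): spatial size is preserved, output is in [-1, 1] (tanh)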
+
+def define_D(input_nc, ndf, netD, n_layers_D=3, norm='batch', init_type='normal', init_gain=0.02, gpu_ids=[], n_class=3):
+ """Create a discriminator
+
+ Parameters:
+ input_nc (int) -- the number of channels in input images
+ ndf (int) -- the number of filters in the first conv layer
+ netD (str) -- the architecture's name: basic | n_layers | pixel
+ n_layers_D (int) -- the number of conv layers in the discriminator; effective when netD=='n_layers'
+ norm (str) -- the type of normalization layers used in the network.
+ init_type (str) -- the name of the initialization method.
+ init_gain (float) -- scaling factor for normal, xavier and orthogonal.
+ gpu_ids (int list) -- which GPUs the network runs on: e.g., 0,1,2
+
+ Returns a discriminator
+
+ Our current implementation provides three types of discriminators:
+ [basic]: 'PatchGAN' classifier described in the original pix2pix paper.
+ It can classify whether 70×70 overlapping patches are real or fake.
+ Such a patch-level discriminator architecture has fewer parameters
+ than a full-image discriminator and can work on arbitrarily-sized images
+ in a fully convolutional fashion.
+
+        [n_layers]: With this mode, you can specify the number of conv layers in the discriminator
+        with the parameter <n_layers_D> (default=3 as used in [basic] (PatchGAN).)
+
+ [pixel]: 1x1 PixelGAN discriminator can classify whether a pixel is real or not.
+ It encourages greater color diversity but has no effect on spatial statistics.
+
+    The discriminator has been initialized by <init_net>. It uses Leaky RELU for non-linearity.
+ """
+ net = None
+ norm_layer = get_norm_layer(norm_type=norm)
+
+ if netD == 'basic': # default PatchGAN classifier
+ net = NLayerDiscriminator(input_nc, ndf, n_layers=3, norm_layer=norm_layer)
+ elif netD == 'basic_cls':
+ net = NLayerDiscriminatorCls(input_nc, ndf, n_layers=3, n_class=3, norm_layer=norm_layer)
+ elif netD == 'n_layers': # more options
+ net = NLayerDiscriminator(input_nc, ndf, n_layers_D, norm_layer=norm_layer)
+ elif netD == 'pixel': # classify if each pixel is real or fake
+ net = PixelDiscriminator(input_nc, ndf, norm_layer=norm_layer)
+ else:
+        raise NotImplementedError('Discriminator model name [%s] is not recognized' % netD)
+ return init_net(net, init_type, init_gain, gpu_ids)
+
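And a matching sketch for define_D: the 'basic' 70x70 PatchGAN returns a grid of real/fake scores rather than a single scalar (one score per receptive-field patch):

import torch

netD = define_D(input_nc=1, ndf=64, netD='basic', norm='instance', gpu_ids=[])
y = torch.randn(1, 1, 256, 256)        # e.g. a generated drawing
with torch.no_grad():
    scores = netD(y)
print(scores.shape)   # torch.Size([1, 1, 30, 30]) for a 256x256 input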
+
+def define_HED(init_weights_, gpu_ids_=[]):
+ net = HED()
+
+ if len(gpu_ids_) > 0:
+ assert(torch.cuda.is_available())
+ net.to(gpu_ids_[0])
+ net = torch.nn.DataParallel(net, gpu_ids_) # multi-GPUs
+
+ if not init_weights_ == None:
+ device = torch.device('cuda:{}'.format(gpu_ids_[0])) if gpu_ids_ else torch.device('cpu')
+ print('Loading model from: %s'%init_weights_)
+ state_dict = torch.load(init_weights_, map_location=str(device))
+ if isinstance(net, torch.nn.DataParallel):
+ net.module.load_state_dict(state_dict)
+ else:
+ net.load_state_dict(state_dict)
+ print('load the weights successfully')
+
+ return net
+
+def define_VGG(init_weights_, feature_mode_, batch_norm_=False, num_classes_=1000, gpu_ids_=[]):
+ net = VGG19(init_weights=init_weights_, feature_mode=feature_mode_, batch_norm=batch_norm_, num_classes=num_classes_)
+ # set the GPU
+ if len(gpu_ids_) > 0:
+ assert(torch.cuda.is_available())
+ net.cuda(gpu_ids_[0])
+ net = torch.nn.DataParallel(net, gpu_ids_) # multi-GPUs
+
+ if not init_weights_ == None:
+ device = torch.device('cuda:{}'.format(gpu_ids_[0])) if gpu_ids_ else torch.device('cpu')
+ print('Loading model from: %s'%init_weights_)
+ state_dict = torch.load(init_weights_, map_location=str(device))
+ if isinstance(net, torch.nn.DataParallel):
+ net.module.load_state_dict(state_dict)
+ else:
+ net.load_state_dict(state_dict)
+ print('load the weights successfully')
+ return net
+
+###################################################################################################################
+from torchvision.models import vgg11, vgg11_bn, vgg13, vgg13_bn, vgg16, vgg16_bn, vgg19, vgg19_bn
+def define_vgg11_bn(gpu_ids_=[],vec=0):
+ net = vgg11_bn(pretrained=True)
+ net.classifier[6] = nn.Linear(4096, 1) #LSGAN needs no sigmoid, LSGAN-nn.MSELoss()
+ if len(gpu_ids_) > 0:
+ assert(torch.cuda.is_available())
+ net.cuda(gpu_ids_[0])
+ net = torch.nn.DataParallel(net, gpu_ids_)
+ return net
+def define_vgg19_bn(gpu_ids_=[],vec=0):
+ net = vgg19_bn(pretrained=True)
+ net.classifier[6] = nn.Linear(4096, 1) #LSGAN needs no sigmoid, LSGAN-nn.MSELoss()
+ if len(gpu_ids_) > 0:
+ assert(torch.cuda.is_available())
+ net.cuda(gpu_ids_[0])
+ net = torch.nn.DataParallel(net, gpu_ids_)
+ return net
+def define_vgg19(gpu_ids_=[],vec=0):
+ net = vgg19(pretrained=True)
+ net.classifier[6] = nn.Linear(4096, 1) #LSGAN needs no sigmoid, LSGAN-nn.MSELoss()
+ if len(gpu_ids_) > 0:
+ assert(torch.cuda.is_available())
+ net.cuda(gpu_ids_[0])
+ net = torch.nn.DataParallel(net, gpu_ids_)
+ return net
+###################################################################################################################
+from torchvision.models import resnet18, resnet34, resnet50, resnet101, resnet152
+def define_resnet101(gpu_ids_=[],vec=0):
+ net = resnet101(pretrained=True)
+ num_ftrs = net.fc.in_features
+ net.fc = nn.Linear(num_ftrs, 1) #LSGAN needs no sigmoid, LSGAN-nn.MSELoss()
+ if len(gpu_ids_) > 0:
+ assert(torch.cuda.is_available())
+ net.cuda(gpu_ids_[0])
+ net = torch.nn.DataParallel(net, gpu_ids_)
+ return net
+def define_resnet101a(init_weights_,gpu_ids_=[],vec=0):
+ net = resnet101(pretrained=True)
+ num_ftrs = net.fc.in_features
+ net.fc = nn.Linear(num_ftrs, 1) #LSGAN needs no sigmoid, LSGAN-nn.MSELoss()
+ if not init_weights_ == None:
+ print('Loading model from: %s'%init_weights_)
+ state_dict = torch.load(init_weights_, map_location=str(torch.device('cpu')))
+ if isinstance(net, torch.nn.DataParallel):
+ net.module.load_state_dict(state_dict)
+ else:
+ net.load_state_dict(state_dict)
+ print('load the weights successfully')
+ if len(gpu_ids_) > 0:
+ assert(torch.cuda.is_available())
+ net.cuda(gpu_ids_[0])
+ net = torch.nn.DataParallel(net, gpu_ids_)
+ return net
+###################################################################################################################
+import pretrainedmodels.models.resnext as resnext
+def define_resnext101(gpu_ids_=[],vec=0):
+ net = resnext.resnext101_64x4d(num_classes=1000,pretrained='imagenet')
+ net.last_linear = nn.Linear(2048, 1) #LSGAN needs no sigmoid, LSGAN-nn.MSELoss()
+ if len(gpu_ids_) > 0:
+ assert(torch.cuda.is_available())
+ net.cuda(gpu_ids_[0])
+ net = torch.nn.DataParallel(net, gpu_ids_)
+ return net
+def define_resnext101a(init_weights_,gpu_ids_=[],vec=0):
+ net = resnext.resnext101_64x4d(num_classes=1000,pretrained='imagenet')
+ net.last_linear = nn.Linear(2048, 1) #LSGAN needs no sigmoid, LSGAN-nn.MSELoss()
+ if not init_weights_ == None:
+ print('Loading model from: %s'%init_weights_)
+ state_dict = torch.load(init_weights_, map_location=str(torch.device('cpu')))
+ if isinstance(net, torch.nn.DataParallel):
+ net.module.load_state_dict(state_dict)
+ else:
+ net.load_state_dict(state_dict)
+ print('load the weights successfully')
+ if len(gpu_ids_) > 0:
+ assert(torch.cuda.is_available())
+ net.cuda(gpu_ids_[0])
+ net = torch.nn.DataParallel(net, gpu_ids_)
+ return net
+###################################################################################################################
+from torchvision.models import Inception3, inception_v3
+def define_inception3(gpu_ids_=[],vec=0):
+ net = inception_v3(pretrained=True)
+ net.transform_input = False # assume [-1,1] input
+ net.fc = nn.Linear(2048, 1)
+ net.aux_logits = False
+ if len(gpu_ids_) > 0:
+ assert(torch.cuda.is_available())
+ net.cuda(gpu_ids_[0])
+ net = torch.nn.DataParallel(net, gpu_ids_)
+ return net
+def define_inception3a(init_weights_,gpu_ids_=[],vec=0):
+ net = inception_v3(pretrained=True)
+ net.transform_input = False # assume [-1,1] input
+ net.fc = nn.Linear(2048, 1)
+ net.aux_logits = False
+ if not init_weights_ == None:
+ print('Loading model from: ', init_weights_)
+ state_dict = torch.load(init_weights_, map_location=str(torch.device('cpu')))
+ if isinstance(net, torch.nn.DataParallel):
+ net.module.load_state_dict(state_dict)
+ else:
+ net.load_state_dict(state_dict)
+ print('load the weights successfully')
+ if len(gpu_ids_) > 0:
+ assert(torch.cuda.is_available())
+ net.cuda(gpu_ids_[0])
+ net = torch.nn.DataParallel(net, gpu_ids_)
+ return net
+###################################################################################################################
+from torchvision.models.inception import BasicConv2d
+def define_inception_v3(init_weights_,gpu_ids_=[],vec=0):
+
+ ## pretrained = True
+ kwargs = {}
+ if 'transform_input' not in kwargs:
+ kwargs['transform_input'] = True
+ if 'aux_logits' in kwargs:
+ original_aux_logits = kwargs['aux_logits']
+ kwargs['aux_logits'] = True
+ else:
+ original_aux_logits = True
+ print(kwargs)
+ net = Inception3(**kwargs)
+
+ if not init_weights_ == None:
+ print('Loading model from: %s'%init_weights_)
+ state_dict = torch.load(init_weights_, map_location=str(torch.device('cpu')))
+ if isinstance(net, torch.nn.DataParallel):
+ net.module.load_state_dict(state_dict)
+ else:
+ net.load_state_dict(state_dict)
+ print('load the weights successfully')
+
+ if not original_aux_logits:
+ net.aux_logits = False
+ del net.AuxLogits
+
+ net.fc = nn.Linear(2048, 1)
+ if vec == 1:
+ net.Conv2d_1a_3x3 = BasicConv2d(6, 32, kernel_size=3, stride=2)
+ net.aux_logits = False
+
+ if len(gpu_ids_) > 0:
+ assert(torch.cuda.is_available())
+ net.cuda(gpu_ids_[0])
+ net = torch.nn.DataParallel(net, gpu_ids_)
+
+ return net
+
+def define_inception_v3a(init_weights_,gpu_ids_=[],vec=0):
+
+ kwargs = {}
+ if 'transform_input' not in kwargs:
+ kwargs['transform_input'] = True
+ if 'aux_logits' in kwargs:
+ original_aux_logits = kwargs['aux_logits']
+ kwargs['aux_logits'] = True
+ else:
+ original_aux_logits = True
+ print(kwargs)
+ net = Inception3(**kwargs)
+
+ if not original_aux_logits:
+ net.aux_logits = False
+ del net.AuxLogits
+
+ net.fc = nn.Linear(2048, 1)
+ if vec == 1:
+ net.Conv2d_1a_3x3 = BasicConv2d(6, 32, kernel_size=3, stride=2)
+ net.aux_logits = False
+
+ if not init_weights_ == None:
+ print('Loading model from: %s'%init_weights_)
+ state_dict = torch.load(init_weights_, map_location=str(torch.device('cpu')))
+ if isinstance(net, torch.nn.DataParallel):
+ net.module.load_state_dict(state_dict)
+ else:
+ net.load_state_dict(state_dict)
+ print('load the weights successfully')
+
+ if len(gpu_ids_) > 0:
+ assert(torch.cuda.is_available())
+ net.cuda(gpu_ids_[0])
+ net = torch.nn.DataParallel(net, gpu_ids_)
+
+ return net
+
+def define_inception_ori(init_weights_,transform_input=False,gpu_ids_=[]):
+
+ ## pretrained = True
+ kwargs = {}
+ kwargs['transform_input'] = transform_input
+
+ if 'aux_logits' in kwargs:
+ original_aux_logits = kwargs['aux_logits']
+ kwargs['aux_logits'] = True
+ else:
+ original_aux_logits = True
+ print(kwargs)
+ net = Inception3(**kwargs)
+
+
+ if not init_weights_ == None:
+ print('Loading model from: %s'%init_weights_)
+ state_dict = torch.load(init_weights_, map_location=str(torch.device('cpu')))
+ if isinstance(net, torch.nn.DataParallel):
+ net.module.load_state_dict(state_dict)
+ else:
+ net.load_state_dict(state_dict)
+ print('load the weights successfully')
+ #for e in list(net.modules()):
+ # print(e)
+
+ if not original_aux_logits:
+ net.aux_logits = False
+ del net.AuxLogits
+
+
+ if len(gpu_ids_) > 0:
+ assert(torch.cuda.is_available())
+ net.cuda(gpu_ids_[0])
+
+ return net
+###################################################################################################################
+
+def define_DT(init_weights_, input_nc_, output_nc_, ngf_, netG_, norm_, use_dropout_, init_type_, init_gain_, gpu_ids_):
+ net = define_G(input_nc_, output_nc_, ngf_, netG_, norm_, use_dropout_, init_type_, init_gain_, gpu_ids_)
+
+ if not init_weights_ == None:
+ device = torch.device('cuda:{}'.format(gpu_ids_[0])) if gpu_ids_ else torch.device('cpu')
+ print('Loading model from: %s'%init_weights_)
+ state_dict = torch.load(init_weights_, map_location=str(device))
+ if isinstance(net, torch.nn.DataParallel):
+ net.module.load_state_dict(state_dict)
+ else:
+ net.load_state_dict(state_dict)
+ print('load the weights successfully')
+ return net
+
+def define_C(input_nc, classes, ngf, netG, norm='batch', use_dropout=False, init_type='normal', init_gain=0.02, gpu_ids=[], h=512, w=512, nnG=3, dim=4096):
+ net = None
+ norm_layer = get_norm_layer(norm_type=norm)
+ if netG == 'classifier':
+ net = Classifier(input_nc, classes, ngf, num_downs=nnG, norm_layer=norm_layer, use_dropout=use_dropout, h=h, w=w, dim=dim)
+ elif netG == 'vgg':
+ net = VGG19(init_weights=None, feature_mode=False, batch_norm=True, num_classes=classes)
+ return init_net(net, init_type, init_gain, gpu_ids)
+
+##############################################################################
+# Classes
+##############################################################################
+class GANLoss(nn.Module):
+ """Define different GAN objectives.
+
+ The GANLoss class abstracts away the need to create the target label tensor
+ that has the same size as the input.
+ """
+
+ def __init__(self, gan_mode, target_real_label=1.0, target_fake_label=0.0):
+ """ Initialize the GANLoss class.
+
+ Parameters:
+ gan_mode (str) - - the type of GAN objective. It currently supports vanilla, lsgan, and wgangp.
+ target_real_label (bool) - - label for a real image
+ target_fake_label (bool) - - label of a fake image
+
+ Note: Do not use sigmoid as the last layer of Discriminator.
+ LSGAN needs no sigmoid. vanilla GANs will handle it with BCEWithLogitsLoss.
+ """
+ super(GANLoss, self).__init__()
+ self.register_buffer('real_label', torch.tensor(target_real_label))
+ self.register_buffer('fake_label', torch.tensor(target_fake_label))
+ self.gan_mode = gan_mode
+ if gan_mode == 'lsgan':#cyclegan
+ self.loss = nn.MSELoss()
+ elif gan_mode == 'vanilla':
+ self.loss = nn.BCEWithLogitsLoss()
+ elif gan_mode in ['wgangp']:
+ self.loss = None
+ else:
+ raise NotImplementedError('gan mode %s not implemented' % gan_mode)
+
+ def get_target_tensor(self, prediction, target_is_real):
+ """Create label tensors with the same size as the input.
+
+ Parameters:
+            prediction (tensor) - - typically the prediction from a discriminator
+ target_is_real (bool) - - if the ground truth label is for real images or fake images
+
+ Returns:
+ A label tensor filled with ground truth label, and with the size of the input
+ """
+
+ if target_is_real:
+ target_tensor = self.real_label
+ else:
+ target_tensor = self.fake_label
+ return target_tensor.expand_as(prediction)
+
+ def __call__(self, prediction, target_is_real):
+ """Calculate loss given Discriminator's output and grount truth labels.
+
+ Parameters:
+ prediction (tensor) - - tpyically the prediction output from a discriminator
+ target_is_real (bool) - - if the ground truth label is for real images or fake images
+
+ Returns:
+ the calculated loss.
+ """
+ if self.gan_mode in ['lsgan', 'vanilla']:
+ target_tensor = self.get_target_tensor(prediction, target_is_real)
+ loss = self.loss(prediction, target_tensor)
+ elif self.gan_mode == 'wgangp':
+ if target_is_real:
+ loss = -prediction.mean()
+ else:
+ loss = prediction.mean()
+ return loss
+
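A short sketch of GANLoss in the 'lsgan' mode used above: the target tensor is built internally and expanded to match the discriminator's (patch) output, so the same criterion serves both generator and discriminator updates.

import torch

criterion = GANLoss('lsgan')                 # MSE against targets of 1.0 / 0.0
pred_fake = torch.randn(1, 1, 30, 30)        # e.g. a PatchGAN output on a generated image
loss_G = criterion(pred_fake, True)          # generator wants fakes to be scored as real
loss_D_fake = criterion(pred_fake, False)    # discriminator wants them scored as fake
print(float(loss_G), float(loss_D_fake))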
+
+def cal_gradient_penalty(netD, real_data, fake_data, device, type='mixed', constant=1.0, lambda_gp=10.0):
+ """Calculate the gradient penalty loss, used in WGAN-GP paper https://arxiv.org/abs/1704.00028
+
+ Arguments:
+ netD (network) -- discriminator network
+ real_data (tensor array) -- real images
+ fake_data (tensor array) -- generated images from the generator
+ device (str) -- GPU / CPU: from torch.device('cuda:{}'.format(self.gpu_ids[0])) if self.gpu_ids else torch.device('cpu')
+ type (str) -- if we mix real and fake data or not [real | fake | mixed].
+        constant (float)            -- the constant used in formula ( ||gradient||_2 - constant)^2
+ lambda_gp (float) -- weight for this loss
+
+ Returns the gradient penalty loss
+ """
+ if lambda_gp > 0.0:
+ if type == 'real': # either use real images, fake images, or a linear interpolation of two.
+ interpolatesv = real_data
+ elif type == 'fake':
+ interpolatesv = fake_data
+ elif type == 'mixed':
+ alpha = torch.rand(real_data.shape[0], 1, device=device)
+ alpha = alpha.expand(real_data.shape[0], real_data.nelement() // real_data.shape[0]).contiguous().view(*real_data.shape)
+ interpolatesv = alpha * real_data + ((1 - alpha) * fake_data)
+ else:
+ raise NotImplementedError('{} not implemented'.format(type))
+ interpolatesv.requires_grad_(True)
+ disc_interpolates = netD(interpolatesv)
+ gradients = torch.autograd.grad(outputs=disc_interpolates, inputs=interpolatesv,
+ grad_outputs=torch.ones(disc_interpolates.size()).to(device),
+ create_graph=True, retain_graph=True, only_inputs=True)
+        gradients = gradients[0].view(real_data.size(0), -1)  # flatten the data
+ gradient_penalty = (((gradients + 1e-16).norm(2, dim=1) - constant) ** 2).mean() * lambda_gp # added eps
+ return gradient_penalty, gradients
+ else:
+ return 0.0, None
+
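cal_gradient_penalty is only exercised when a WGAN-GP objective is selected; a toy sketch of the call pattern with a throwaway discriminator (the architecture below is made up purely for illustration):

import torch
import torch.nn as nn

toy_D = nn.Sequential(nn.Conv2d(1, 8, 4, 2, 1), nn.LeakyReLU(0.2), nn.Conv2d(8, 1, 4, 1, 1))
real = torch.randn(4, 1, 64, 64)
fake = torch.randn(4, 1, 64, 64)
gp, grads = cal_gradient_penalty(toy_D, real, fake, device='cpu',
                                 type='mixed', constant=1.0, lambda_gp=10.0)
gp.backward()   # the penalty is added to the discriminator loss before its optimizer step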
+
+class ResnetGenerator(nn.Module):
+ """Resnet-based generator that consists of Resnet blocks between a few downsampling/upsampling operations.
+
+ We adapt Torch code and idea from Justin Johnson's neural style transfer project(https://github.com/jcjohnson/fast-neural-style)
+ """
+
+ def __init__(self, input_nc, output_nc, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False, n_blocks=6, padding_type='reflect'):
+ """Construct a Resnet-based generator
+
+ Parameters:
+ input_nc (int) -- the number of channels in input images
+ output_nc (int) -- the number of channels in output images
+ ngf (int) -- the number of filters in the last conv layer
+ norm_layer -- normalization layer
+ use_dropout (bool) -- if use dropout layers
+ n_blocks (int) -- the number of ResNet blocks
+ padding_type (str) -- the name of padding layer in conv layers: reflect | replicate | zero
+ """
+ assert(n_blocks >= 0)
+ super(ResnetGenerator, self).__init__()
+ if type(norm_layer) == functools.partial:
+ use_bias = norm_layer.func == nn.InstanceNorm2d
+ else:
+ use_bias = norm_layer == nn.InstanceNorm2d
+
+ model = [nn.ReflectionPad2d(3),
+ nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0, bias=use_bias),
+ norm_layer(ngf),
+ nn.ReLU(True)]
+
+ n_downsampling = 2
+ for i in range(n_downsampling): # add downsampling layers
+ mult = 2 ** i
+ model += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1, bias=use_bias),
+ norm_layer(ngf * mult * 2),
+ nn.ReLU(True)]
+
+ mult = 2 ** n_downsampling
+ for i in range(n_blocks): # add ResNet blocks
+
+ model += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias)]
+
+ for i in range(n_downsampling): # add upsampling layers
+ mult = 2 ** (n_downsampling - i)
+ model += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2),
+ kernel_size=3, stride=2,
+ padding=1, output_padding=1,
+ bias=use_bias),
+ norm_layer(int(ngf * mult / 2)),
+ nn.ReLU(True)]
+ model += [nn.ReflectionPad2d(3)]
+ model += [nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)]
+ model += [nn.Tanh()]
+
+ self.model = nn.Sequential(*model)
+
+ def forward(self, input, feature_mode = False):
+ """Standard forward"""
+ if not feature_mode:
+ return self.model(input)
+ else:
+ module_list = list(self.model.modules())
+ x = input.clone()
+ indexes = list(range(1,11))+[11,20,29,38,47,56,65,74,83]+list(range(92,101))
+ for i in indexes:
+ x = module_list[i](x)
+ if i == 3:
+ x1 = x.clone()
+ elif i == 6:
+ x2 = x.clone()
+ elif i == 9:
+ x3 = x.clone()
+ elif i == 47:
+ y7 = x.clone()
+ elif i == 83:
+ y4 = x.clone()
+ elif i == 93:
+ y3 = x.clone()
+ elif i == 96:
+ y2 = x.clone()
+ #y = self.model(input)
+ #pdb.set_trace()
+ return x,x1,x2,x3,y4,y3,y2,y7
+
+class ResnetStyleGenerator(nn.Module):
+ """Resnet-based generator that consists of Resnet blocks between a few downsampling/upsampling operations.
+
+ We adapt Torch code and idea from Justin Johnson's neural style transfer project(https://github.com/jcjohnson/fast-neural-style)
+ """
+
+ def __init__(self, input_nc, output_nc, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False, n_blocks=6, padding_type='reflect'):
+ """Construct a Resnet-based generator
+
+ Parameters:
+ input_nc (int) -- the number of channels in input images
+ output_nc (int) -- the number of channels in output images
+ ngf (int) -- the number of filters in the last conv layer
+ norm_layer -- normalization layer
+ use_dropout (bool) -- if use dropout layers
+ n_blocks (int) -- the number of ResNet blocks
+ padding_type (str) -- the name of padding layer in conv layers: reflect | replicate | zero
+ """
+ assert(n_blocks >= 0)
+ super(ResnetStyleGenerator, self).__init__()
+ if type(norm_layer) == functools.partial:
+ use_bias = norm_layer.func == nn.InstanceNorm2d
+ else:
+ use_bias = norm_layer == nn.InstanceNorm2d
+
+ model0 = [nn.ReflectionPad2d(3),
+ nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0, bias=use_bias),
+ norm_layer(ngf),
+ nn.ReLU(True)]
+
+ n_downsampling = 2
+ for i in range(n_downsampling): # add downsampling layers
+ mult = 2 ** i
+ model0 += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1, bias=use_bias),
+ norm_layer(ngf * mult * 2),
+ nn.ReLU(True)]
+
+ mult = 2 ** n_downsampling
+ model1 = [nn.Conv2d(3, ngf * mult, kernel_size=3, stride=1, padding=1, bias=use_bias),
+ norm_layer(ngf * mult),
+ nn.ReLU(True)]
+
+ model = []
+ model += [nn.Conv2d(ngf * mult * 2, ngf * mult, kernel_size=3, stride=1, padding=1, bias=use_bias),
+ norm_layer(ngf * mult),
+ nn.ReLU(True)]
+ for i in range(n_blocks): # add ResNet blocks
+
+ model += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias)]
+
+ for i in range(n_downsampling): # add upsampling layers
+ mult = 2 ** (n_downsampling - i)
+ model += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2),
+ kernel_size=3, stride=2,
+ padding=1, output_padding=1,
+ bias=use_bias),
+ norm_layer(int(ngf * mult / 2)),
+ nn.ReLU(True)]
+ model += [nn.ReflectionPad2d(3)]
+ model += [nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)]
+ model += [nn.Tanh()]
+
+ self.model0 = nn.Sequential(*model0)
+ self.model1 = nn.Sequential(*model1)
+ self.model = nn.Sequential(*model)
+
+ def forward(self, input1, input2):
+ """Standard forward"""
+ f1 = self.model0(input1)
+ f2 = self.model1(input2)
+ #pdb.set_trace()
+ f1 = torch.cat((f1,f2), 1)
+ return self.model(f1)
+
+
+class ResnetStyle2Generator(nn.Module):
+ """Resnet-based generator that consists of Resnet blocks between a few downsampling/upsampling operations.
+
+    We adapt the Torch code and ideas from Justin Johnson's neural style transfer project (https://github.com/jcjohnson/fast-neural-style).
+ """
+
+ def __init__(self, input_nc, output_nc, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False, n_blocks=6, padding_type='reflect', extra_channel=3, model0_res=0):
+ """Construct a Resnet-based generator
+
+ Parameters:
+ input_nc (int) -- the number of channels in input images
+ output_nc (int) -- the number of channels in output images
+ ngf (int) -- the number of filters in the last conv layer
+ norm_layer -- normalization layer
+ use_dropout (bool) -- if use dropout layers
+ n_blocks (int) -- the number of ResNet blocks
+ padding_type (str) -- the name of padding layer in conv layers: reflect | replicate | zero
+ """
+ assert(n_blocks >= 0)
+ super(ResnetStyle2Generator, self).__init__()
+ self.n_blocks = n_blocks
+ if type(norm_layer) == functools.partial:
+ use_bias = norm_layer.func == nn.InstanceNorm2d
+ else:
+ use_bias = norm_layer == nn.InstanceNorm2d
+
+ model0 = [nn.ReflectionPad2d(3),
+ nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0, bias=use_bias),
+ norm_layer(ngf),
+ nn.ReLU(True)]
+
+ n_downsampling = 2
+ for i in range(n_downsampling): # add downsampling layers
+ mult = 2 ** i
+ model0 += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1, bias=use_bias),
+ norm_layer(ngf * mult * 2),
+ nn.ReLU(True)]
+
+ mult = 2 ** n_downsampling
+ for i in range(model0_res): # add ResNet blocks
+ model0 += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias)]
+
+ model = []
+ model += [nn.Conv2d(ngf * mult + extra_channel, ngf * mult, kernel_size=3, stride=1, padding=1, bias=use_bias),
+ norm_layer(ngf * mult),
+ nn.ReLU(True)]
+
+ for i in range(n_blocks-model0_res): # add ResNet blocks
+ model += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias)]
+
+ for i in range(n_downsampling): # add upsampling layers
+ mult = 2 ** (n_downsampling - i)
+ model += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2),
+ kernel_size=3, stride=2,
+ padding=1, output_padding=1,
+ bias=use_bias),
+ norm_layer(int(ngf * mult / 2)),
+ nn.ReLU(True)]
+ model += [nn.ReflectionPad2d(3)]
+ model += [nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)]
+ model += [nn.Tanh()]
+
+ self.model0 = nn.Sequential(*model0)
+ self.model = nn.Sequential(*model)
+ #print(list(self.modules()))
+
+ def forward(self, input1, input2, feature_mode=False, ablate_res=-1):
+ """Standard forward"""
+ if not feature_mode:
+ if ablate_res == -1:
+ f1 = self.model0(input1)
+ y1 = torch.cat([f1, input2], 1)
+ return self.model(y1)
+ else:
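+                # Ablation path: run the merge conv and each residual block manually so the block numbered
+                # 'ablate_res' (1-based) can be skipped; 'y = y1 + y' reproduces the skip connection that
+                # ResnetBlock.forward would normally apply (index arithmetic assumes the default 9-block,
+                # no-dropout layout).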
+ f1 = self.model0(input1)
+ y = torch.cat([f1, input2], 1)
+ module_list = list(self.model.modules())
+ for i in range(1, 4):#merge module
+ y = module_list[i](y)
+ for k in range(self.n_blocks):#resblocks
+ if k+1 == ablate_res:
+ print('skip resblock'+str(k+1))
+ continue
+ y1 = y.clone()
+ for i in range(6+9*k,13+9*k):
+ y = module_list[i](y)
+ y = y1 + y
+ for i in range(4+9*self.n_blocks,13+9*self.n_blocks):#up convs
+ y = module_list[i](y)
+ return y
+ else:
+ module_list0 = list(self.model0.modules())
+ x = input1.clone()
+ for i in range(1,11):
+ x = module_list0[i](x)
+ if i == 3:
+ x1 = x.clone()#[1,64,512,512]
+ elif i == 6:
+ x2 = x.clone()#[1,128,256,256]
+ elif i == 9:
+ x3 = x.clone()#[1,256,128,128]
+ #f1 = self.model0(input1)#[1,256,128,128]
+ #pdb.set_trace()
+ y1 = torch.cat([x, input2], 1)#[1,259,128,128]
+ module_list = list(self.model.modules())
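+            # As above, pick only top-level entries of self.model: the merge conv layers (1-3), each ResnetBlock
+            # as a whole (4, 13, ..., 76) and the decoder layers (85-93), caching selected intermediate
+            # activations for feature extraction (assumes the 9-block configuration with model0_res=0).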
+ indexes = list(range(1,4))+[4,13,22,31,40,49,58,67,76]+list(range(85,94))
+ y = y1.clone()
+ for i in indexes:
+ y = module_list[i](y)
+ if i == 76:
+ y4 = y.clone()#[1,256,128,128]
+ elif i == 86:
+ y3 = y.clone()#[1,128,256,256]
+ elif i == 89:
+ y2 = y.clone()#[1,64,512,512]
+ elif i == 40:
+ y7 = y.clone()
+ #out = self.model(y1)
+ #pdb.set_trace()
+ return y,x1,x2,x3,y4,y3,y2,y7
+
+class ResnetStyle3DecoderGenerator(nn.Module):
+ """Resnet-based generator that consists of Resnet blocks between a few downsampling/upsampling operations.
+
+    We adapt the Torch code and ideas from Justin Johnson's neural style transfer project (https://github.com/jcjohnson/fast-neural-style).
+ """
+
+ def __init__(self, input_nc, output_nc, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False, n_blocks=6, padding_type='reflect', model0_res=0):
+ """Construct a Resnet-based generator
+
+ Parameters:
+ input_nc (int) -- the number of channels in input images
+ output_nc (int) -- the number of channels in output images
+ ngf (int) -- the number of filters in the last conv layer
+ norm_layer -- normalization layer
+ use_dropout (bool) -- if use dropout layers
+ n_blocks (int) -- the number of ResNet blocks
+ padding_type (str) -- the name of padding layer in conv layers: reflect | replicate | zero
+ """
+ assert(n_blocks >= 0)
+ super(ResnetStyle3DecoderGenerator, self).__init__()
+ if type(norm_layer) == functools.partial:
+ use_bias = norm_layer.func == nn.InstanceNorm2d
+ else:
+ use_bias = norm_layer == nn.InstanceNorm2d
+
+ model0 = [nn.ReflectionPad2d(3),
+ nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0, bias=use_bias),
+ norm_layer(ngf),
+ nn.ReLU(True)]
+
+ n_downsampling = 2
+ for i in range(n_downsampling): # add downsampling layers
+ mult = 2 ** i
+ model0 += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1, bias=use_bias),
+ norm_layer(ngf * mult * 2),
+ nn.ReLU(True)]
+
+ mult = 2 ** n_downsampling
+ for i in range(model0_res): # add ResNet blocks
+ model0 += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias)]
+
+ model1 = []
+ model2 = []
+ model3 = []
+ for i in range(n_blocks-model0_res): # add ResNet blocks
+ model1 += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias)]
+ model2 += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias)]
+ model3 += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias)]
+
+ for i in range(n_downsampling): # add upsampling layers
+ mult = 2 ** (n_downsampling - i)
+ model1 += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2),
+ kernel_size=3, stride=2,
+ padding=1, output_padding=1,
+ bias=use_bias),
+ norm_layer(int(ngf * mult / 2)),
+ nn.ReLU(True)]
+ model2 += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2),
+ kernel_size=3, stride=2,
+ padding=1, output_padding=1,
+ bias=use_bias),
+ norm_layer(int(ngf * mult / 2)),
+ nn.ReLU(True)]
+ model3 += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2),
+ kernel_size=3, stride=2,
+ padding=1, output_padding=1,
+ bias=use_bias),
+ norm_layer(int(ngf * mult / 2)),
+ nn.ReLU(True)]
+ model1 += [nn.ReflectionPad2d(3)]
+ model1 += [nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)]
+ model1 += [nn.Tanh()]
+ model2 += [nn.ReflectionPad2d(3)]
+ model2 += [nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)]
+ model2 += [nn.Tanh()]
+ model3 += [nn.ReflectionPad2d(3)]
+ model3 += [nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)]
+ model3 += [nn.Tanh()]
+
+ self.model0 = nn.Sequential(*model0)
+ self.model1 = nn.Sequential(*model1)
+ self.model2 = nn.Sequential(*model2)
+ self.model3 = nn.Sequential(*model3)
+ print(list(self.modules()))
+
+ def forward(self, input, domain):
+ """Standard forward"""
+ f1 = self.model0(input)
+ if domain == 0:
+ y = self.model1(f1)
+ elif domain == 1:
+ y = self.model2(f1)
+ elif domain == 2:
+ y = self.model3(f1)
+ return y
+
+class ResnetStyle2MCGenerator(nn.Module):
+ # multi-column
+
+ def __init__(self, input_nc, output_nc, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False, n_blocks=6, padding_type='reflect', extra_channel=3, model0_res=0):
+ """Construct a Resnet-based generator
+
+ Parameters:
+ input_nc (int) -- the number of channels in input images
+ output_nc (int) -- the number of channels in output images
+ ngf (int) -- the number of filters in the last conv layer
+ norm_layer -- normalization layer
+ use_dropout (bool) -- if use dropout layers
+ n_blocks (int) -- the number of ResNet blocks
+ padding_type (str) -- the name of padding layer in conv layers: reflect | replicate | zero
+ """
+ assert(n_blocks >= 0)
+ super(ResnetStyle2MCGenerator, self).__init__()
+ if type(norm_layer) == functools.partial:
+ use_bias = norm_layer.func == nn.InstanceNorm2d
+ else:
+ use_bias = norm_layer == nn.InstanceNorm2d
+
+ model0 = [nn.ReflectionPad2d(3),
+ nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0, bias=use_bias),
+ norm_layer(ngf),
+ nn.ReLU(True)]
+
+ n_downsampling = 2
+ model1_3 = []
+ model1_5 = []
+ for i in range(n_downsampling): # add downsampling layers
+ mult = 2 ** i
+ model1_3 += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1, bias=use_bias),
+ norm_layer(ngf * mult * 2),
+ nn.ReLU(True)]
+ model1_5 += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=5, stride=2, padding=2, bias=use_bias),
+ norm_layer(ngf * mult * 2),
+ nn.ReLU(True)]
+
+ mult = 2 ** n_downsampling
+ for i in range(model0_res): # add ResNet blocks
+ model1_3 += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias)]
+ model1_5 += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias, kernel=5)]
+
+ model = []
+ model += [nn.Conv2d(ngf * mult * 2 + extra_channel, ngf * mult, kernel_size=3, stride=1, padding=1, bias=use_bias),
+ norm_layer(ngf * mult),
+ nn.ReLU(True)]
+
+ for i in range(n_blocks-model0_res): # add ResNet blocks
+ model += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias)]
+
+ for i in range(n_downsampling): # add upsampling layers
+ mult = 2 ** (n_downsampling - i)
+ model += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2),
+ kernel_size=3, stride=2,
+ padding=1, output_padding=1,
+ bias=use_bias),
+ norm_layer(int(ngf * mult / 2)),
+ nn.ReLU(True)]
+ model += [nn.ReflectionPad2d(3)]
+ model += [nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)]
+ model += [nn.Tanh()]
+
+ self.model0 = nn.Sequential(*model0)
+ self.model1_3 = nn.Sequential(*model1_3)
+ self.model1_5 = nn.Sequential(*model1_5)
+ self.model = nn.Sequential(*model)
+ print(list(self.modules()))
+
+ def forward(self, input1, input2):
+ """Standard forward"""
+ f0 = self.model0(input1)
+ f1 = self.model1_3(f0)
+ f2 = self.model1_5(f0)
+ y1 = torch.cat([f1, f2, input2], 1)
+ return self.model(y1)
+
+class ResnetStyle2MC2Generator(nn.Module):
+ # multi-column, need to insert style early
+
+ def __init__(self, input_nc, output_nc, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False, n_blocks=6, padding_type='reflect', extra_channel=3, model0_res=0, model1_res=0):
+ """Construct a Resnet-based generator
+
+ Parameters:
+ input_nc (int) -- the number of channels in input images
+ output_nc (int) -- the number of channels in output images
+ ngf (int) -- the number of filters in the last conv layer
+ norm_layer -- normalization layer
+ use_dropout (bool) -- if use dropout layers
+ n_blocks (int) -- the number of ResNet blocks
+ padding_type (str) -- the name of padding layer in conv layers: reflect | replicate | zero
+ """
+ assert(n_blocks >= 0)
+ super(ResnetStyle2MC2Generator, self).__init__()
+ if type(norm_layer) == functools.partial:
+ use_bias = norm_layer.func == nn.InstanceNorm2d
+ else:
+ use_bias = norm_layer == nn.InstanceNorm2d
+
+ model0 = [nn.ReflectionPad2d(3),
+ nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0, bias=use_bias),
+ norm_layer(ngf),
+ nn.ReLU(True)]
+
+ n_downsampling = 2
+ model1_3 = []
+ model1_5 = []
+ for i in range(n_downsampling): # add downsampling layers
+ mult = 2 ** i
+ model1_3 += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1, bias=use_bias),
+ norm_layer(ngf * mult * 2),
+ nn.ReLU(True)]
+ model1_5 += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=5, stride=2, padding=2, bias=use_bias),
+ norm_layer(ngf * mult * 2),
+ nn.ReLU(True)]
+
+ mult = 2 ** n_downsampling
+ for i in range(model0_res): # add ResNet blocks
+ model1_3 += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias)]
+ model1_5 += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias, kernel=5)]
+
+ model2_3 = []
+ model2_5 = []
+ model2_3 += [nn.Conv2d(ngf * mult + extra_channel, ngf * mult, kernel_size=3, stride=1, padding=1, bias=use_bias),
+ norm_layer(ngf * mult),
+ nn.ReLU(True)]
+ model2_5 += [nn.Conv2d(ngf * mult + extra_channel, ngf * mult, kernel_size=5, stride=1, padding=2, bias=use_bias),
+ norm_layer(ngf * mult),
+ nn.ReLU(True)]
+
+ for i in range(model1_res): # add ResNet blocks
+ model2_3 += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias)]
+ model2_5 += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias, kernel=5)]
+
+ model = []
+ model += [nn.Conv2d(ngf * mult * 2, ngf * mult, kernel_size=3, stride=1, padding=1, bias=use_bias),
+ norm_layer(ngf * mult),
+ nn.ReLU(True)]
+ for i in range(n_blocks-model0_res-model1_res): # add ResNet blocks
+ model += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias)]
+
+ for i in range(n_downsampling): # add upsampling layers
+ mult = 2 ** (n_downsampling - i)
+ model += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2),
+ kernel_size=3, stride=2,
+ padding=1, output_padding=1,
+ bias=use_bias),
+ norm_layer(int(ngf * mult / 2)),
+ nn.ReLU(True)]
+ model += [nn.ReflectionPad2d(3)]
+ model += [nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)]
+ model += [nn.Tanh()]
+
+ self.model0 = nn.Sequential(*model0)
+ self.model1_3 = nn.Sequential(*model1_3)
+ self.model1_5 = nn.Sequential(*model1_5)
+ self.model2_3 = nn.Sequential(*model2_3)
+ self.model2_5 = nn.Sequential(*model2_5)
+ self.model = nn.Sequential(*model)
+ print(list(self.modules()))
+
+ def forward(self, input1, input2):
+ """Standard forward"""
+ f0 = self.model0(input1)
+ f1 = self.model1_3(f0)
+ f2 = self.model1_5(f0)
+ f3 = self.model2_3(torch.cat([f1,input2],1))
+ f4 = self.model2_5(torch.cat([f2,input2],1))
+ #pdb.set_trace()
+ return self.model(torch.cat([f3,f4],1))
+
+class ResnetBlock(nn.Module):
+ """Define a Resnet block"""
+
+ def __init__(self, dim, padding_type, norm_layer, use_dropout, use_bias, kernel=3):
+ """Initialize the Resnet block
+
+        A ResNet block is a conv block with skip connections.
+        We construct the conv block with the build_conv_block function,
+        and implement the skip connections in the forward function.
+ Original Resnet paper: https://arxiv.org/pdf/1512.03385.pdf
+ """
+ super(ResnetBlock, self).__init__()
+ self.conv_block = self.build_conv_block(dim, padding_type, norm_layer, use_dropout, use_bias, kernel)
+
+ def build_conv_block(self, dim, padding_type, norm_layer, use_dropout, use_bias, kernel=3):
+ """Construct a convolutional block.
+
+ Parameters:
+ dim (int) -- the number of channels in the conv layer.
+ padding_type (str) -- the name of padding layer: reflect | replicate | zero
+ norm_layer -- normalization layer
+ use_dropout (bool) -- if use dropout layers.
+ use_bias (bool) -- if the conv layer uses bias or not
+
+ Returns a conv block (with a conv layer, a normalization layer, and a non-linearity layer (ReLU))
+ """
+ conv_block = []
+ p = 0
+ pad = int((kernel-1)/2)
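+        # 'same'-style padding for odd kernel sizes, applied either as an explicit padding layer
+        # (reflect/replicate) or as the conv layer's own zero padding.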
+ if padding_type == 'reflect':#by default
+ conv_block += [nn.ReflectionPad2d(pad)]
+ elif padding_type == 'replicate':
+ conv_block += [nn.ReplicationPad2d(pad)]
+ elif padding_type == 'zero':
+ p = pad
+ else:
+ raise NotImplementedError('padding [%s] is not implemented' % padding_type)
+
+ conv_block += [nn.Conv2d(dim, dim, kernel_size=kernel, padding=p, bias=use_bias), norm_layer(dim), nn.ReLU(True)]
+ if use_dropout:
+ conv_block += [nn.Dropout(0.5)]
+
+ p = 0
+ if padding_type == 'reflect':
+ conv_block += [nn.ReflectionPad2d(pad)]
+ elif padding_type == 'replicate':
+ conv_block += [nn.ReplicationPad2d(pad)]
+ elif padding_type == 'zero':
+ p = pad
+ else:
+ raise NotImplementedError('padding [%s] is not implemented' % padding_type)
+ conv_block += [nn.Conv2d(dim, dim, kernel_size=kernel, padding=p, bias=use_bias), norm_layer(dim)]
+
+ return nn.Sequential(*conv_block)
+
+ def forward(self, x):
+ """Forward function (with skip connections)"""
+ out = x + self.conv_block(x) # add skip connections
+ return out
+
+
+class UnetGenerator(nn.Module):
+ """Create a Unet-based generator"""
+
+ def __init__(self, input_nc, output_nc, num_downs, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False):
+ """Construct a Unet generator
+ Parameters:
+ input_nc (int) -- the number of channels in input images
+ output_nc (int) -- the number of channels in output images
+            num_downs (int) -- the number of downsamplings in UNet. For example, if |num_downs| == 7,
+                               an image of size 128x128 will become of size 1x1 at the bottleneck
+ ngf (int) -- the number of filters in the last conv layer
+ norm_layer -- normalization layer
+
+ We construct the U-Net from the innermost layer to the outermost layer.
+ It is a recursive process.
+ """
+ super(UnetGenerator, self).__init__()
+ # construct unet structure
+ unet_block = UnetSkipConnectionBlock(ngf * 8, ngf * 8, input_nc=None, submodule=None, norm_layer=norm_layer, innermost=True) # add the innermost layer
+ for i in range(num_downs - 5): # add intermediate layers with ngf * 8 filters
+ unet_block = UnetSkipConnectionBlock(ngf * 8, ngf * 8, input_nc=None, submodule=unet_block, norm_layer=norm_layer, use_dropout=use_dropout)
+ # gradually reduce the number of filters from ngf * 8 to ngf
+ unet_block = UnetSkipConnectionBlock(ngf * 4, ngf * 8, input_nc=None, submodule=unet_block, norm_layer=norm_layer)
+ unet_block = UnetSkipConnectionBlock(ngf * 2, ngf * 4, input_nc=None, submodule=unet_block, norm_layer=norm_layer)
+ unet_block = UnetSkipConnectionBlock(ngf, ngf * 2, input_nc=None, submodule=unet_block, norm_layer=norm_layer)
+ self.model = UnetSkipConnectionBlock(output_nc, ngf, input_nc=input_nc, submodule=unet_block, outermost=True, norm_layer=norm_layer) # add the outermost layer
+
+ def forward(self, input):
+ """Standard forward"""
+ return self.model(input)
+
+
+class UnetSkipConnectionBlock(nn.Module):
+ """Defines the Unet submodule with skip connection.
+ X -------------------identity----------------------
+ |-- downsampling -- |submodule| -- upsampling --|
+ """
+
+ def __init__(self, outer_nc, inner_nc, input_nc=None,
+ submodule=None, outermost=False, innermost=False, norm_layer=nn.BatchNorm2d, use_dropout=False):
+ """Construct a Unet submodule with skip connections.
+
+ Parameters:
+ outer_nc (int) -- the number of filters in the outer conv layer
+ inner_nc (int) -- the number of filters in the inner conv layer
+ input_nc (int) -- the number of channels in input images/features
+ submodule (UnetSkipConnectionBlock) -- previously defined submodules
+ outermost (bool) -- if this module is the outermost module
+ innermost (bool) -- if this module is the innermost module
+ norm_layer -- normalization layer
+            use_dropout (bool) -- if use dropout layers.
+ """
+ super(UnetSkipConnectionBlock, self).__init__()
+ self.outermost = outermost
+ if type(norm_layer) == functools.partial:
+ use_bias = norm_layer.func == nn.InstanceNorm2d
+ else:
+ use_bias = norm_layer == nn.InstanceNorm2d
+ if input_nc is None:
+ input_nc = outer_nc
+ downconv = nn.Conv2d(input_nc, inner_nc, kernel_size=4,
+ stride=2, padding=1, bias=use_bias)
+ downrelu = nn.LeakyReLU(0.2, True)
+ downnorm = norm_layer(inner_nc)
+ uprelu = nn.ReLU(True)
+ upnorm = norm_layer(outer_nc)
+
+ if outermost:
+ upconv = nn.ConvTranspose2d(inner_nc * 2, outer_nc,
+ kernel_size=4, stride=2,
+ padding=1)
+ down = [downconv]
+ up = [uprelu, upconv, nn.Tanh()]
+ model = down + [submodule] + up
+ elif innermost:
+ upconv = nn.ConvTranspose2d(inner_nc, outer_nc,
+ kernel_size=4, stride=2,
+ padding=1, bias=use_bias)
+ down = [downrelu, downconv]
+ up = [uprelu, upconv, upnorm]
+ model = down + up
+ else:
+ upconv = nn.ConvTranspose2d(inner_nc * 2, outer_nc,
+ kernel_size=4, stride=2,
+ padding=1, bias=use_bias)
+ down = [downrelu, downconv, downnorm]
+ up = [uprelu, upconv, upnorm]
+
+ if use_dropout:
+ model = down + [submodule] + up + [nn.Dropout(0.5)]
+ else:
+ model = down + [submodule] + up
+
+ self.model = nn.Sequential(*model)
+
+ def forward(self, x):
+ if self.outermost:
+ return self.model(x)
+ else: # add skip connections
+ return torch.cat([x, self.model(x)], 1)
+
+
+class NLayerDiscriminator(nn.Module):
+ """Defines a PatchGAN discriminator"""
+
+ def __init__(self, input_nc, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d):
+ """Construct a PatchGAN discriminator
+
+ Parameters:
+ input_nc (int) -- the number of channels in input images
+ ndf (int) -- the number of filters in the last conv layer
+ n_layers (int) -- the number of conv layers in the discriminator
+ norm_layer -- normalization layer
+ """
+ super(NLayerDiscriminator, self).__init__()
+ if type(norm_layer) == functools.partial: # no need to use bias as BatchNorm2d has affine parameters
+ use_bias = norm_layer.func != nn.BatchNorm2d
+ else:
+ use_bias = norm_layer != nn.BatchNorm2d
+
+ kw = 4
+ padw = 1
+ sequence = [nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=2, padding=padw), nn.LeakyReLU(0.2, True)]
+ nf_mult = 1
+ nf_mult_prev = 1
+ for n in range(1, n_layers): # gradually increase the number of filters
+ nf_mult_prev = nf_mult
+ nf_mult = min(2 ** n, 8)
+ sequence += [
+ nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=2, padding=padw, bias=use_bias),
+ norm_layer(ndf * nf_mult),
+ nn.LeakyReLU(0.2, True)
+ ]
+
+ nf_mult_prev = nf_mult
+ nf_mult = min(2 ** n_layers, 8)
+ sequence += [
+ nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=1, padding=padw, bias=use_bias),
+ norm_layer(ndf * nf_mult),
+ nn.LeakyReLU(0.2, True)
+ ]
+
+ sequence += [nn.Conv2d(ndf * nf_mult, 1, kernel_size=kw, stride=1, padding=padw)] # output 1 channel prediction map
+ self.model = nn.Sequential(*sequence)
+
+ def forward(self, input):
+ """Standard forward."""
+ return self.model(input)
+
+
+class NLayerDiscriminatorCls(nn.Module):
+ """Defines a PatchGAN discriminator"""
+
+ def __init__(self, input_nc, ndf=64, n_layers=3, n_class=3, norm_layer=nn.BatchNorm2d):
+ """Construct a PatchGAN discriminator
+
+ Parameters:
+ input_nc (int) -- the number of channels in input images
+ ndf (int) -- the number of filters in the last conv layer
+ n_layers (int) -- the number of conv layers in the discriminator
+ norm_layer -- normalization layer
+ """
+ super(NLayerDiscriminatorCls, self).__init__()
+ if type(norm_layer) == functools.partial: # no need to use bias as BatchNorm2d has affine parameters
+ use_bias = norm_layer.func != nn.BatchNorm2d
+ else:
+ use_bias = norm_layer != nn.BatchNorm2d
+
+ kw = 4
+ padw = 1
+ sequence = [nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=2, padding=padw), nn.LeakyReLU(0.2, True)]
+ nf_mult = 1
+ nf_mult_prev = 1
+ for n in range(1, n_layers): # gradually increase the number of filters
+ nf_mult_prev = nf_mult
+ nf_mult = min(2 ** n, 8)
+ sequence += [
+ nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=2, padding=padw, bias=use_bias),
+ norm_layer(ndf * nf_mult),
+ nn.LeakyReLU(0.2, True)
+ ]
+
+ nf_mult_prev = nf_mult
+ nf_mult = min(2 ** n_layers, 8)
+ sequence1 = [
+ nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=1, padding=padw, bias=use_bias),
+ norm_layer(ndf * nf_mult),
+ nn.LeakyReLU(0.2, True)
+ ]
+ sequence1 += [nn.Conv2d(ndf * nf_mult, 1, kernel_size=kw, stride=1, padding=padw)] # output 1 channel prediction map
+
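+        # Classification head: two extra stride-2 convs shrink the shared feature map, then a 16x16 convolution
+        # collapses it to a 1x1 per-class score (spatial sizes assume the 512x512 inputs used in this project,
+        # matching the output shapes noted in forward()).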
+ sequence2 = [
+ nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=2, padding=padw, bias=use_bias),
+ norm_layer(ndf * nf_mult),
+ nn.LeakyReLU(0.2, True)
+ ]
+ sequence2 += [
+ nn.Conv2d(ndf * nf_mult, ndf * nf_mult, kernel_size=kw, stride=2, padding=padw, bias=use_bias),
+ norm_layer(ndf * nf_mult),
+ nn.LeakyReLU(0.2, True)
+ ]
+ sequence2 += [
+ nn.Conv2d(ndf * nf_mult, n_class, kernel_size=16, stride=1, padding=0, bias=use_bias)]
+
+
+ self.model0 = nn.Sequential(*sequence)
+ self.model1 = nn.Sequential(*sequence1)
+ self.model2 = nn.Sequential(*sequence2)
+ print(list(self.modules()))
+
+ def forward(self, input):
+ """Standard forward."""
+ feat = self.model0(input)
+ # patchGAN output (1 * 62 * 62)
+ patch = self.model1(feat)
+ # class output (3 * 1 * 1)
+ classl = self.model2(feat)
+ return patch, classl.view(classl.size(0), -1)
+
+
+class PixelDiscriminator(nn.Module):
+ """Defines a 1x1 PatchGAN discriminator (pixelGAN)"""
+
+ def __init__(self, input_nc, ndf=64, norm_layer=nn.BatchNorm2d):
+ """Construct a 1x1 PatchGAN discriminator
+
+ Parameters:
+ input_nc (int) -- the number of channels in input images
+ ndf (int) -- the number of filters in the last conv layer
+ norm_layer -- normalization layer
+ """
+ super(PixelDiscriminator, self).__init__()
+ if type(norm_layer) == functools.partial: # no need to use bias as BatchNorm2d has affine parameters
+            use_bias = norm_layer.func != nn.BatchNorm2d
+        else:
+            use_bias = norm_layer != nn.BatchNorm2d
+
+ self.net = [
+ nn.Conv2d(input_nc, ndf, kernel_size=1, stride=1, padding=0),
+ nn.LeakyReLU(0.2, True),
+ nn.Conv2d(ndf, ndf * 2, kernel_size=1, stride=1, padding=0, bias=use_bias),
+ norm_layer(ndf * 2),
+ nn.LeakyReLU(0.2, True),
+ nn.Conv2d(ndf * 2, 1, kernel_size=1, stride=1, padding=0, bias=use_bias)]
+
+ self.net = nn.Sequential(*self.net)
+
+ def forward(self, input):
+ """Standard forward."""
+ return self.net(input)
+
+
+class HED(nn.Module):
+ def __init__(self):
+ super(HED, self).__init__()
+
+ self.moduleVggOne = nn.Sequential(
+ nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=1),
+ nn.ReLU(inplace=False),
+ nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1, padding=1),
+ nn.ReLU(inplace=False)
+ )
+
+ self.moduleVggTwo = nn.Sequential(
+ nn.MaxPool2d(kernel_size=2, stride=2),
+ nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, stride=1, padding=1),
+ nn.ReLU(inplace=False),
+ nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, stride=1, padding=1),
+ nn.ReLU(inplace=False)
+ )
+
+ self.moduleVggThr = nn.Sequential(
+ nn.MaxPool2d(kernel_size=2, stride=2),
+ nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, stride=1, padding=1),
+ nn.ReLU(inplace=False),
+ nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, stride=1, padding=1),
+ nn.ReLU(inplace=False),
+ nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, stride=1, padding=1),
+ nn.ReLU(inplace=False)
+ )
+
+ self.moduleVggFou = nn.Sequential(
+ nn.MaxPool2d(kernel_size=2, stride=2),
+ nn.Conv2d(in_channels=256, out_channels=512, kernel_size=3, stride=1, padding=1),
+ nn.ReLU(inplace=False),
+ nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1),
+ nn.ReLU(inplace=False),
+ nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1),
+ nn.ReLU(inplace=False)
+ )
+
+ self.moduleVggFiv = nn.Sequential(
+ nn.MaxPool2d(kernel_size=2, stride=2),
+ nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1),
+ nn.ReLU(inplace=False),
+ nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1),
+ nn.ReLU(inplace=False),
+ nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1),
+ nn.ReLU(inplace=False)
+ )
+
+ self.moduleScoreOne = nn.Conv2d(in_channels=64, out_channels=1, kernel_size=1, stride=1, padding=0)
+ self.moduleScoreTwo = nn.Conv2d(in_channels=128, out_channels=1, kernel_size=1, stride=1, padding=0)
+ self.moduleScoreThr = nn.Conv2d(in_channels=256, out_channels=1, kernel_size=1, stride=1, padding=0)
+ self.moduleScoreFou = nn.Conv2d(in_channels=512, out_channels=1, kernel_size=1, stride=1, padding=0)
+ self.moduleScoreFiv = nn.Conv2d(in_channels=512, out_channels=1, kernel_size=1, stride=1, padding=0)
+
+ self.moduleCombine = nn.Sequential(
+ nn.Conv2d(in_channels=5, out_channels=1, kernel_size=1, stride=1, padding=0),
+ nn.Sigmoid()
+ )
+
+ def forward(self, tensorInput):
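+        # Convert the RGB input in [0, 1] to BGR and subtract the Caffe/ImageNet channel means,
+        # the preprocessing expected by the original HED weights.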
+ tensorBlue = (tensorInput[:, 2:3, :, :] * 255.0) - 104.00698793
+ tensorGreen = (tensorInput[:, 1:2, :, :] * 255.0) - 116.66876762
+ tensorRed = (tensorInput[:, 0:1, :, :] * 255.0) - 122.67891434
+
+ tensorInput = torch.cat([ tensorBlue, tensorGreen, tensorRed ], 1)
+
+ tensorVggOne = self.moduleVggOne(tensorInput)
+ tensorVggTwo = self.moduleVggTwo(tensorVggOne)
+ tensorVggThr = self.moduleVggThr(tensorVggTwo)
+ tensorVggFou = self.moduleVggFou(tensorVggThr)
+ tensorVggFiv = self.moduleVggFiv(tensorVggFou)
+
+ tensorScoreOne = self.moduleScoreOne(tensorVggOne)
+ tensorScoreTwo = self.moduleScoreTwo(tensorVggTwo)
+ tensorScoreThr = self.moduleScoreThr(tensorVggThr)
+ tensorScoreFou = self.moduleScoreFou(tensorVggFou)
+ tensorScoreFiv = self.moduleScoreFiv(tensorVggFiv)
+
+ tensorScoreOne = nn.functional.interpolate(input=tensorScoreOne, size=(tensorInput.size(2), tensorInput.size(3)), mode='bilinear', align_corners=False)
+ tensorScoreTwo = nn.functional.interpolate(input=tensorScoreTwo, size=(tensorInput.size(2), tensorInput.size(3)), mode='bilinear', align_corners=False)
+ tensorScoreThr = nn.functional.interpolate(input=tensorScoreThr, size=(tensorInput.size(2), tensorInput.size(3)), mode='bilinear', align_corners=False)
+ tensorScoreFou = nn.functional.interpolate(input=tensorScoreFou, size=(tensorInput.size(2), tensorInput.size(3)), mode='bilinear', align_corners=False)
+ tensorScoreFiv = nn.functional.interpolate(input=tensorScoreFiv, size=(tensorInput.size(2), tensorInput.size(3)), mode='bilinear', align_corners=False)
+
+ return self.moduleCombine(torch.cat([ tensorScoreOne, tensorScoreTwo, tensorScoreThr, tensorScoreFou, tensorScoreFiv ], 1))
+
+# class for the VGG19 model
+# borrows largely from torchvision vgg
+class VGG19(nn.Module):
+ def __init__(self, init_weights=None, feature_mode=False, batch_norm=False, num_classes=1000):
+ super(VGG19, self).__init__()
+ self.cfg = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M']
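+        # Layer configuration of VGG-19 (torchvision's 'E' config): numbers are conv output channels, 'M' is max pooling.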
+ self.init_weights = init_weights
+ self.feature_mode = feature_mode
+ self.batch_norm = batch_norm
+        self.num_classes = num_classes
+ self.features = self.make_layers(self.cfg, batch_norm)
+ self.classifier = nn.Sequential(
+ nn.Linear(512 * 7 * 7, 4096),
+ nn.ReLU(True),
+ nn.Dropout(),
+ nn.Linear(4096, 4096),
+ nn.ReLU(True),
+ nn.Dropout(),
+ nn.Linear(4096, num_classes),
+ )
+ # print('----------load the pretrained vgg net---------')
+ # if not init_weights == None:
+ # print('load the weights')
+ # self.load_state_dict(torch.load(init_weights))
+
+
+ def make_layers(self, cfg, batch_norm=False):
+ layers = []
+ in_channels = 3
+ for v in cfg:
+ if v == 'M':
+ layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
+ else:
+ conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
+ if batch_norm:
+ layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
+ else:
+ layers += [conv2d, nn.ReLU(inplace=True)]
+ in_channels = v
+ return nn.Sequential(*layers)
+
+ def forward(self, x):
+ if self.feature_mode:
+ module_list = list(self.features.modules())
+ for l in module_list[1:27]: # conv4_4
+ x = l(x)
+ if not self.feature_mode:
+ x = self.features(x)
+ x = x.view(x.size(0), -1)
+ x = self.classifier(x)
+
+ return x
+
+class Classifier(nn.Module):
+ def __init__(self, input_nc, classes, ngf=64, num_downs=3, norm_layer=nn.BatchNorm2d, use_dropout=False, h=512, w=512, dim=4096):
+ super(Classifier, self).__init__()
+ self.input_nc = input_nc
+ self.ngf = ngf
+ if type(norm_layer) == functools.partial:
+ use_bias = norm_layer.func == nn.InstanceNorm2d
+ else:
+ use_bias = norm_layer == nn.InstanceNorm2d
+
+ model = [nn.Conv2d(input_nc, ngf, kernel_size=4, stride=2, padding=1, bias=use_bias), nn.LeakyReLU(0.2, True)]
+ nf_mult = 1
+ nf_mult_prev = 1
+ for n in range(1, num_downs):
+ nf_mult_prev = nf_mult
+ nf_mult = min(2 ** n, 8)
+ model += [
+ nn.Conv2d(int(ngf * nf_mult_prev), int(ngf * nf_mult), kernel_size=4, stride=2, padding=1, bias=use_bias),
+ norm_layer(int(ngf * nf_mult)),
+ nn.LeakyReLU(0.2, True)
+ ]
+ nf_mult_prev = nf_mult
+ nf_mult = min(2 ** num_downs, 8)
+ model += [
+ nn.Conv2d(ngf * nf_mult_prev, ngf * nf_mult, kernel_size=4, stride=1, padding=1, bias=use_bias),
+ norm_layer(ngf * nf_mult),
+ nn.LeakyReLU(0.2, True)
+ ]
+ self.encoder = nn.Sequential(*model)
+
+ self.classifier = nn.Sequential(
+ nn.Linear(512 * 7 * 7, dim),
+ nn.ReLU(True),
+ nn.Dropout(),
+ nn.Linear(dim, dim),
+ nn.ReLU(True),
+ nn.Dropout(),
+ nn.Linear(dim, classes),
+ )
+
+ def forward(self, x):
+ ax = self.encoder(x)
+ #print('ax',ax.shape) # (8, 512, 7, 7)
+ ax = ax.view(ax.size(0), -1) # view -- reshape
+ return self.classifier(ax)
diff --git a/hi-arm/qmupd_vs/models/networks_basic.py b/hi-arm/qmupd_vs/models/networks_basic.py
new file mode 100644
index 0000000000000000000000000000000000000000..d71d6b383b9763bce2c1c19ae703966d87ba8cdf
--- /dev/null
+++ b/hi-arm/qmupd_vs/models/networks_basic.py
@@ -0,0 +1,187 @@
+
+from __future__ import absolute_import
+
+import sys
+import torch
+import torch.nn as nn
+import torch.nn.init as init
+from torch.autograd import Variable
+import numpy as np
+from pdb import set_trace as st
+from skimage import color
+from IPython import embed
+from . import pretrained_networks as pn
+
+from util import util
+
+def spatial_average(in_tens, keepdim=True):
+ return in_tens.mean([2,3],keepdim=keepdim)
+
+def upsample(in_tens, out_H=64): # assumes scale factor is same for H and W
+ in_H = in_tens.shape[2]
+ scale_factor = 1.*out_H/in_H
+
+ return nn.Upsample(scale_factor=scale_factor, mode='bilinear', align_corners=False)(in_tens)
+
+# Learned perceptual metric
+class PNetLin(nn.Module):
+ def __init__(self, pnet_type='vgg', pnet_rand=False, pnet_tune=False, use_dropout=True, spatial=False, version='0.1', lpips=True):
+ super(PNetLin, self).__init__()
+
+ self.pnet_type = pnet_type
+ self.pnet_tune = pnet_tune
+ self.pnet_rand = pnet_rand
+ self.spatial = spatial
+ self.lpips = lpips
+ self.version = version
+ self.scaling_layer = ScalingLayer()
+
+ if(self.pnet_type in ['vgg','vgg16']):
+ net_type = pn.vgg16
+ self.chns = [64,128,256,512,512]
+ elif(self.pnet_type=='alex'):
+ net_type = pn.alexnet
+ self.chns = [64,192,384,256,256]
+ elif(self.pnet_type=='squeeze'):
+ net_type = pn.squeezenet
+ self.chns = [64,128,256,384,384,512,512]
+ self.L = len(self.chns)
+
+ self.net = net_type(pretrained=not self.pnet_rand, requires_grad=self.pnet_tune)
+
+ if(lpips):
+ self.lin0 = NetLinLayer(self.chns[0], use_dropout=use_dropout)
+ self.lin1 = NetLinLayer(self.chns[1], use_dropout=use_dropout)
+ self.lin2 = NetLinLayer(self.chns[2], use_dropout=use_dropout)
+ self.lin3 = NetLinLayer(self.chns[3], use_dropout=use_dropout)
+ self.lin4 = NetLinLayer(self.chns[4], use_dropout=use_dropout)
+ self.lins = [self.lin0,self.lin1,self.lin2,self.lin3,self.lin4]
+ if(self.pnet_type=='squeeze'): # 7 layers for squeezenet
+ self.lin5 = NetLinLayer(self.chns[5], use_dropout=use_dropout)
+ self.lin6 = NetLinLayer(self.chns[6], use_dropout=use_dropout)
+ self.lins+=[self.lin5,self.lin6]
+
+ def forward(self, in0, in1, retPerLayer=False):
+ # v0.0 - original release had a bug, where input was not scaled
+ in0_input, in1_input = (self.scaling_layer(in0), self.scaling_layer(in1)) if self.version=='0.1' else (in0, in1)
+ outs0, outs1 = self.net.forward(in0_input), self.net.forward(in1_input)
+ feats0, feats1, diffs = {}, {}, {}
+
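+        # Unit-normalize each layer's activations along the channel dimension, then store the squared
+        # differences per layer.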
+ for kk in range(self.L):
+ feats0[kk], feats1[kk] = util.normalize_tensor(outs0[kk]), util.normalize_tensor(outs1[kk])
+ diffs[kk] = (feats0[kk]-feats1[kk])**2
+
+ if(self.lpips):
+ if(self.spatial):
+ res = [upsample(self.lins[kk].model(diffs[kk]), out_H=in0.shape[2]) for kk in range(self.L)]
+ else:
+ res = [spatial_average(self.lins[kk].model(diffs[kk]), keepdim=True) for kk in range(self.L)]
+ else:
+ if(self.spatial):
+ res = [upsample(diffs[kk].sum(dim=1,keepdim=True), out_H=in0.shape[2]) for kk in range(self.L)]
+ else:
+ res = [spatial_average(diffs[kk].sum(dim=1,keepdim=True), keepdim=True) for kk in range(self.L)]
+
+ val = res[0]
+ for l in range(1,self.L):
+ val += res[l]
+
+ if(retPerLayer):
+ return (val, res)
+ else:
+ return val
+
+class ScalingLayer(nn.Module):
+ def __init__(self):
+ super(ScalingLayer, self).__init__()
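+        # Channel-wise shift and scale applied to the [-1, 1] inputs before the pretrained backbone
+        # (constants as in the LPIPS reference implementation).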
+ self.register_buffer('shift', torch.Tensor([-.030,-.088,-.188])[None,:,None,None])
+ self.register_buffer('scale', torch.Tensor([.458,.448,.450])[None,:,None,None])
+
+ def forward(self, inp):
+ return (inp - self.shift.to(inp.device)) / self.scale.to(inp.device)
+
+
+class NetLinLayer(nn.Module):
+ ''' A single linear layer which does a 1x1 conv '''
+ def __init__(self, chn_in, chn_out=1, use_dropout=False):
+ super(NetLinLayer, self).__init__()
+
+ layers = [nn.Dropout(),] if(use_dropout) else []
+ layers += [nn.Conv2d(chn_in, chn_out, 1, stride=1, padding=0, bias=False),]
+ self.model = nn.Sequential(*layers)
+
+
+class Dist2LogitLayer(nn.Module):
+ ''' takes 2 distances, puts through fc layers, spits out value between [0,1] (if use_sigmoid is True) '''
+ def __init__(self, chn_mid=32, use_sigmoid=True):
+ super(Dist2LogitLayer, self).__init__()
+
+ layers = [nn.Conv2d(5, chn_mid, 1, stride=1, padding=0, bias=True),]
+ layers += [nn.LeakyReLU(0.2,True),]
+ layers += [nn.Conv2d(chn_mid, chn_mid, 1, stride=1, padding=0, bias=True),]
+ layers += [nn.LeakyReLU(0.2,True),]
+ layers += [nn.Conv2d(chn_mid, 1, 1, stride=1, padding=0, bias=True),]
+ if(use_sigmoid):
+ layers += [nn.Sigmoid(),]
+ self.model = nn.Sequential(*layers)
+
+ def forward(self,d0,d1,eps=0.1):
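+        # Stack the two distances together with their difference and both ratios (eps avoids division by zero).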
+ return self.model.forward(torch.cat((d0,d1,d0-d1,d0/(d1+eps),d1/(d0+eps)),dim=1))
+
+class BCERankingLoss(nn.Module):
+ def __init__(self, chn_mid=32):
+ super(BCERankingLoss, self).__init__()
+ self.net = Dist2LogitLayer(chn_mid=chn_mid)
+ # self.parameters = list(self.net.parameters())
+ self.loss = torch.nn.BCELoss()
+
+ def forward(self, d0, d1, judge):
+ per = (judge+1.)/2.
+ self.logit = self.net.forward(d0,d1)
+ return self.loss(self.logit, per)
+
+# L2, DSSIM metrics
+class FakeNet(nn.Module):
+ def __init__(self, use_gpu=True, colorspace='Lab'):
+ super(FakeNet, self).__init__()
+ self.use_gpu = use_gpu
+ self.colorspace=colorspace
+
+class L2(FakeNet):
+
+ def forward(self, in0, in1, retPerLayer=None):
+ assert(in0.size()[0]==1) # currently only supports batchSize 1
+
+ if(self.colorspace=='RGB'):
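+            # Mean squared error averaged over channels and both spatial dimensions, one value per batch element.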
+ (N,C,X,Y) = in0.size()
+ value = torch.mean(torch.mean(torch.mean((in0-in1)**2,dim=1).view(N,1,X,Y),dim=2).view(N,1,1,Y),dim=3).view(N)
+ return value
+ elif(self.colorspace=='Lab'):
+ value = util.l2(util.tensor2np(util.tensor2tensorlab(in0.data,to_norm=False)),
+ util.tensor2np(util.tensor2tensorlab(in1.data,to_norm=False)), range=100.).astype('float')
+ ret_var = Variable( torch.Tensor((value,) ) )
+ if(self.use_gpu):
+ ret_var = ret_var.cuda()
+ return ret_var
+
+class DSSIM(FakeNet):
+
+ def forward(self, in0, in1, retPerLayer=None):
+ assert(in0.size()[0]==1) # currently only supports batchSize 1
+
+ if(self.colorspace=='RGB'):
+ value = util.dssim(1.*util.tensor2im(in0.data), 1.*util.tensor2im(in1.data), range=255.).astype('float')
+ elif(self.colorspace=='Lab'):
+ value = util.dssim(util.tensor2np(util.tensor2tensorlab(in0.data,to_norm=False)),
+ util.tensor2np(util.tensor2tensorlab(in1.data,to_norm=False)), range=100.).astype('float')
+ ret_var = Variable( torch.Tensor((value,) ) )
+ if(self.use_gpu):
+ ret_var = ret_var.cuda()
+ return ret_var
+
+def print_network(net):
+ num_params = 0
+ for param in net.parameters():
+ num_params += param.numel()
+ print('Network',net)
+ print('Total number of parameters: %d' % num_params)
diff --git a/hi-arm/qmupd_vs/models/pretrained_networks.py b/hi-arm/qmupd_vs/models/pretrained_networks.py
new file mode 100644
index 0000000000000000000000000000000000000000..b1329d64b798229bb16578f5bcaa1dff7d660a8e
--- /dev/null
+++ b/hi-arm/qmupd_vs/models/pretrained_networks.py
@@ -0,0 +1,181 @@
+from collections import namedtuple
+import torch
+from torchvision import models
+from IPython import embed
+
+class squeezenet(torch.nn.Module):
+ def __init__(self, requires_grad=False, pretrained=True):
+ super(squeezenet, self).__init__()
+ pretrained_features = models.squeezenet1_1(pretrained=pretrained).features
+ self.slice1 = torch.nn.Sequential()
+ self.slice2 = torch.nn.Sequential()
+ self.slice3 = torch.nn.Sequential()
+ self.slice4 = torch.nn.Sequential()
+ self.slice5 = torch.nn.Sequential()
+ self.slice6 = torch.nn.Sequential()
+ self.slice7 = torch.nn.Sequential()
+ self.N_slices = 7
+ for x in range(2):
+ self.slice1.add_module(str(x), pretrained_features[x])
+ for x in range(2,5):
+ self.slice2.add_module(str(x), pretrained_features[x])
+ for x in range(5, 8):
+ self.slice3.add_module(str(x), pretrained_features[x])
+ for x in range(8, 10):
+ self.slice4.add_module(str(x), pretrained_features[x])
+ for x in range(10, 11):
+ self.slice5.add_module(str(x), pretrained_features[x])
+ for x in range(11, 12):
+ self.slice6.add_module(str(x), pretrained_features[x])
+ for x in range(12, 13):
+ self.slice7.add_module(str(x), pretrained_features[x])
+ if not requires_grad:
+ for param in self.parameters():
+ param.requires_grad = False
+
+ def forward(self, X):
+ h = self.slice1(X)
+ h_relu1 = h
+ h = self.slice2(h)
+ h_relu2 = h
+ h = self.slice3(h)
+ h_relu3 = h
+ h = self.slice4(h)
+ h_relu4 = h
+ h = self.slice5(h)
+ h_relu5 = h
+ h = self.slice6(h)
+ h_relu6 = h
+ h = self.slice7(h)
+ h_relu7 = h
+ vgg_outputs = namedtuple("SqueezeOutputs", ['relu1','relu2','relu3','relu4','relu5','relu6','relu7'])
+ out = vgg_outputs(h_relu1,h_relu2,h_relu3,h_relu4,h_relu5,h_relu6,h_relu7)
+
+ return out
+
+
+class alexnet(torch.nn.Module):
+ def __init__(self, requires_grad=False, pretrained=True):
+ super(alexnet, self).__init__()
+ alexnet_pretrained_features = models.alexnet(pretrained=pretrained).features
+ self.slice1 = torch.nn.Sequential()
+ self.slice2 = torch.nn.Sequential()
+ self.slice3 = torch.nn.Sequential()
+ self.slice4 = torch.nn.Sequential()
+ self.slice5 = torch.nn.Sequential()
+ self.N_slices = 5
+ for x in range(2):
+ self.slice1.add_module(str(x), alexnet_pretrained_features[x])
+ for x in range(2, 5):
+ self.slice2.add_module(str(x), alexnet_pretrained_features[x])
+ for x in range(5, 8):
+ self.slice3.add_module(str(x), alexnet_pretrained_features[x])
+ for x in range(8, 10):
+ self.slice4.add_module(str(x), alexnet_pretrained_features[x])
+ for x in range(10, 12):
+ self.slice5.add_module(str(x), alexnet_pretrained_features[x])
+ if not requires_grad:
+ for param in self.parameters():
+ param.requires_grad = False
+
+ def forward(self, X):
+ h = self.slice1(X)
+ h_relu1 = h
+ h = self.slice2(h)
+ h_relu2 = h
+ h = self.slice3(h)
+ h_relu3 = h
+ h = self.slice4(h)
+ h_relu4 = h
+ h = self.slice5(h)
+ h_relu5 = h
+ alexnet_outputs = namedtuple("AlexnetOutputs", ['relu1', 'relu2', 'relu3', 'relu4', 'relu5'])
+ out = alexnet_outputs(h_relu1, h_relu2, h_relu3, h_relu4, h_relu5)
+
+ return out
+
+class vgg16(torch.nn.Module):
+ def __init__(self, requires_grad=False, pretrained=True):
+ super(vgg16, self).__init__()
+ vgg_pretrained_features = models.vgg16(pretrained=pretrained).features
+ self.slice1 = torch.nn.Sequential()
+ self.slice2 = torch.nn.Sequential()
+ self.slice3 = torch.nn.Sequential()
+ self.slice4 = torch.nn.Sequential()
+ self.slice5 = torch.nn.Sequential()
+ self.N_slices = 5
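+        # Split torchvision's VGG-16 feature extractor at relu1_2, relu2_2, relu3_3, relu4_3 and relu5_3,
+        # the feature maps returned below for perceptual comparison.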
+ for x in range(4):
+ self.slice1.add_module(str(x), vgg_pretrained_features[x])
+ for x in range(4, 9):
+ self.slice2.add_module(str(x), vgg_pretrained_features[x])
+ for x in range(9, 16):
+ self.slice3.add_module(str(x), vgg_pretrained_features[x])
+ for x in range(16, 23):
+ self.slice4.add_module(str(x), vgg_pretrained_features[x])
+ for x in range(23, 30):
+ self.slice5.add_module(str(x), vgg_pretrained_features[x])
+ if not requires_grad:
+ for param in self.parameters():
+ param.requires_grad = False
+
+ def forward(self, X):
+ h = self.slice1(X)
+ h_relu1_2 = h
+ h = self.slice2(h)
+ h_relu2_2 = h
+ h = self.slice3(h)
+ h_relu3_3 = h
+ h = self.slice4(h)
+ h_relu4_3 = h
+ h = self.slice5(h)
+ h_relu5_3 = h
+ vgg_outputs = namedtuple("VggOutputs", ['relu1_2', 'relu2_2', 'relu3_3', 'relu4_3', 'relu5_3'])
+ out = vgg_outputs(h_relu1_2, h_relu2_2, h_relu3_3, h_relu4_3, h_relu5_3)
+
+ return out
+
+
+
+class resnet(torch.nn.Module):
+ def __init__(self, requires_grad=False, pretrained=True, num=18):
+ super(resnet, self).__init__()
+ if(num==18):
+ self.net = models.resnet18(pretrained=pretrained)
+ elif(num==34):
+ self.net = models.resnet34(pretrained=pretrained)
+ elif(num==50):
+ self.net = models.resnet50(pretrained=pretrained)
+ elif(num==101):
+ self.net = models.resnet101(pretrained=pretrained)
+ elif(num==152):
+ self.net = models.resnet152(pretrained=pretrained)
+ self.N_slices = 5
+
+ self.conv1 = self.net.conv1
+ self.bn1 = self.net.bn1
+ self.relu = self.net.relu
+ self.maxpool = self.net.maxpool
+ self.layer1 = self.net.layer1
+ self.layer2 = self.net.layer2
+ self.layer3 = self.net.layer3
+ self.layer4 = self.net.layer4
+
+ def forward(self, X):
+ h = self.conv1(X)
+ h = self.bn1(h)
+ h = self.relu(h)
+ h_relu1 = h
+ h = self.maxpool(h)
+ h = self.layer1(h)
+ h_conv2 = h
+ h = self.layer2(h)
+ h_conv3 = h
+ h = self.layer3(h)
+ h_conv4 = h
+ h = self.layer4(h)
+ h_conv5 = h
+
+ outputs = namedtuple("Outputs", ['relu1','conv2','conv3','conv4','conv5'])
+ out = outputs(h_relu1, h_conv2, h_conv3, h_conv4, h_conv5)
+
+ return out
diff --git a/hi-arm/qmupd_vs/models/test_model.py b/hi-arm/qmupd_vs/models/test_model.py
new file mode 100644
index 0000000000000000000000000000000000000000..b86872218cbf60e61e76989649799adb993de3bf
--- /dev/null
+++ b/hi-arm/qmupd_vs/models/test_model.py
@@ -0,0 +1,96 @@
+from .base_model import BaseModel
+from . import networks
+import torch
+import pdb
+
+class TestModel(BaseModel):
+    """ This TestModel can be used to generate CycleGAN results for only one direction.
+ This model will automatically set '--dataset_mode single', which only loads the images from one collection.
+
+ See the test instruction for more details.
+ """
+ @staticmethod
+ def modify_commandline_options(parser, is_train=True):
+ """Add new dataset-specific options, and rewrite default values for existing options.
+
+ Parameters:
+ parser -- original option parser
+ is_train (bool) -- whether training phase or test phase. You can use this flag to add training-specific or test-specific options.
+
+ Returns:
+ the modified parser.
+
+ The model can only be used during test time. It requires '--dataset_mode single'.
+ You need to specify the network using the option '--model_suffix'.
+ """
+ assert not is_train, 'TestModel cannot be used during training time'
+ parser.set_defaults(dataset_mode='single')
+ parser.add_argument('--model_suffix', type=str, default='', help='In checkpoints_dir, [epoch]_net_G[model_suffix].pth will be loaded as the generator.')
+ parser.add_argument('--style_control', type=int, default=0, help='use style_control')
+ parser.add_argument('--sfeature_mode', type=str, default='vgg19_softmax', help='vgg19 softmax as feature')
+ parser.add_argument('--sinput', type=str, default='sind', help='use which one for style input')
+ parser.add_argument('--sind', type=int, default=0, help='one hot for sfeature')
+ parser.add_argument('--svec', type=str, default='1,0,0', help='3-dim vec')
+ parser.add_argument('--simg', type=str, default='Yann_Legendre-053', help='drawing example for style')
+ parser.add_argument('--netga', type=str, default='resnet_style_9blocks', help='net arch for netG_A')
+ parser.add_argument('--model0_res', type=int, default=0, help='number of resblocks in model0')
+ parser.add_argument('--model1_res', type=int, default=0, help='number of resblocks in model1 (after insert style, before 2 column merge)')
+
+ return parser
+
+ def __init__(self, opt):
+        """Initialize the TestModel class.
+
+ Parameters:
+ opt (Option class)-- stores all the experiment flags; needs to be a subclass of BaseOptions
+ """
+ assert(not opt.isTrain)
+ BaseModel.__init__(self, opt)
+        # specify the training losses you want to print out. The training/test scripts will call <BaseModel.get_current_losses>
+ self.loss_names = []
+        # specify the images you want to save/display. The training/test scripts will call <BaseModel.get_current_visuals>
+ #self.visual_names = ['real', 'fake', 'rec', 'fake_B']
+ self.visual_names = ['real', 'fake']
+        # specify the models you want to save to the disk. The training/test scripts will call <BaseModel.save_networks> and <BaseModel.load_networks>
+        self.model_names = ['G' + opt.model_suffix, 'G_B']  # only the generators are needed.
+ if not self.opt.style_control:
+ self.netG = networks.define_G(opt.input_nc, opt.output_nc, opt.ngf, opt.netG,
+ opt.norm, not opt.no_dropout, opt.init_type, opt.init_gain, self.gpu_ids)
+ else:
+ print(opt.netga)
+ print('model0_res', opt.model0_res)
+ print('model1_res', opt.model1_res)
+ self.netG = networks.define_G(opt.input_nc, opt.output_nc, opt.ngf, opt.netga, opt.norm,
+ not opt.no_dropout, opt.init_type, opt.init_gain, self.gpu_ids, opt.model0_res, opt.model1_res)
+
+ self.netGB = networks.define_G(opt.output_nc, opt.input_nc, opt.ngf, opt.netG,
+ opt.norm, not opt.no_dropout, opt.init_type, opt.init_gain, self.gpu_ids)
+ # assigns the model to self.netG_[suffix] so that it can be loaded
+        # please see <BaseModel.load_networks>
+ setattr(self, 'netG' + opt.model_suffix, self.netG) # store netG in self.
+ setattr(self, 'netG_B', self.netGB) # store netGB in self.
+
+ def set_input(self, input):
+ """Unpack input data from the dataloader and perform necessary pre-processing steps.
+
+ Parameters:
+ input: a dictionary that contains the data itself and its metadata information.
+
+        We need to use the 'single_dataset' dataset mode. It only loads images from one domain.
+ """
+ self.real = input['A'].to(self.device)
+ self.image_paths = input['A_paths']
+ if self.opt.style_control:
+ self.style = input['B_style']
+
+ def forward(self):
+ """Run forward pass."""
+ if not self.opt.style_control:
+ self.fake = self.netG(self.real) # G(real)
+ else:
+ #print(torch.mean(self.style,(2,3)),'style_control')
+ self.fake = self.netG(self.real, self.style)
+
+ def optimize_parameters(self):
+ """No optimization for test model."""
+ pass
diff --git a/hi-arm/qmupd_vs/operator_main.ipynb b/hi-arm/qmupd_vs/operator_main.ipynb
new file mode 100644
index 0000000000000000000000000000000000000000..b1fe6961d9f3e756531a67f649983428b10bff46
--- /dev/null
+++ b/hi-arm/qmupd_vs/operator_main.ipynb
@@ -0,0 +1,606 @@
+{
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "138ed57c06ca45c786e09bcf744f4d54",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "CameraStream(constraints={'facing_mode': 'user', 'audio': False, 'video': {'width': 512, 'height': 512, 'facin…"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "d0e4ef53014b4bbab34e6ba90336ad52",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "ImageRecorder(image=Image(value=b''), stream=CameraStream(constraints={'facing_mode': 'user', 'audio': False, …"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "from ipywebrtc import CameraStream, ImageRecorder\n",
+ "from IPython.display import display\n",
+ "import PIL.Image\n",
+ "import io\n",
+ "import numpy\n",
+ "import cv2\n",
+ "from ipywebrtc import CameraStream\n",
+ "camera = CameraStream.facing_user(audio=False, constraints={\n",
+ " 'facing_mode': 'user',\n",
+ " 'audio': False,\n",
+ " 'video': { 'width': 512, 'height': 512 }\n",
+ "})\n",
+ "display(camera)\n",
+ "recorder = ImageRecorder(stream=camera)\n",
+ "display(recorder)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAicAAADCCAYAAACSRmLFAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjUuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/NK7nSAAAACXBIWXMAAA9hAAAPYQGoP6dpAADBJklEQVR4nOydd3hUZfq/7+kzyWTSe28EQiCU0EITkCoWEHQtYFt3ddW1Ytvvquuuva5rXXVxERVlsaKAVOkt1JCQhIT0nkmbTJ85vz9yzfkxJoEEqXru6+LSmXPmnDOT97zneZ/yeWSCIAhISEhISEhISFwgyM/3BUhISEhISEhInIhknEhISEhISEhcUEjGiYSEhISEhMQFhWScSEhISEhISFxQSMaJhISEhISExAWFZJxISEhISEhIXFBIxomEhISEhITEBYVknEhISEhISEhcUEjGiYSEhISEhMQFhWScSEhISEhISFxQnDfj5K233iIhIQGtVsuoUaPYvXv3+boUCYk+IY1diYsVaexKXCycF+Pk888/54EHHuDJJ59k3759ZGZmMn36dOrr68/H5UhI9Bpp7EpcrEhjV+JiQnY+Gv+NGjWKESNG8OabbwLgdruJjY3lnnvu4dFHHz3l591uN9XV1fj5+SGTyc725Ur8ShEEgfb2dqKiopDLe2enS2NX4kJAGrsSFyu9HbvKc3hNANjtdnJycnjsscfE9+RyOZdeeik7duzo9jM2mw2bzSa+rqqqIj09/axfq8Rvg4qKCmJiYk65nzR2JS40pLErcbFyqrF7zo2TxsZGXC4X4eHhXu+Hh4dz9OjRbj/z3HPP8be//a3L+xUVFRgMhrNynRK/ftra2oiNjcXPz69X+0tjV+JCQRq7Ehcr33zzDQsXLjzl2D3nxsnp8Nhjj/HAAw+Irz03psFgkG4SiV/M2XRRS2NX4mwijV2Jiw0fHx/g1GP3nBsnISEhKBQK6urqvN6vq6sjIiKi289oNBo0Gs25uDwJiR6Rxq7ExYo0diUuNs55tY5arWb48OGsX79efM/tdrN+/XrGjBlzri9HQqLXSGNX4mJFGrsSFxvnJazzwAMPcNNNN5GVlcXIkSN5/fXX6ejo4JZbbjkflyMh0WuksStxsSKNXYmLifNinFx77bU0NDTwxBNPUFtby5AhQ1i9enWXZC0JiQsNaexKXKxIY1fiYuK86Jz8Utra2vD396e1tVVKzJI4bc7HOJLGrsSZQBq7EhcrK1asYN68eaccR1JvHQkJCQkJCYkLCsk4kZCQkJCQkLigkIwTCQkJCQkJiQuKi0KETeLc4HA4OHbsGIWFhQQGBpKenk5wcLDUR0NCQkLiLCAIAnV1deTm5lJVVYVerycpKYm0tDRRrOy3imSc/MawWq0oFApUKhWCIGCxWMjPz2fTpk389NNPbNq0ifb2dtRqNWFhYVx33XXMmzePiooKhg0bRmJi4vn+ChISEhIXDVarlerqarRaLQEBATgcDmpraykpKaGoqAiZTEZ8fDzJycmYTCYOHDjAtm3biI2NpbW1lbS0NEaMGNHrBo+/FiTj5FeKIAg4HA6MRiM1NTWUlpaye/duVq9eTXBwMMnJybS3t7N//36qqqpob28HIDQ0lAkTJqBQKMjJyeHll1/m9ddfx8fHh4iICP7v//6Pfv36kZGR8Zu37CUkJCQ8CIKA0WjEbDZTVVWF0Wikvr6e5uZmWltbvRaFdrud/v37ExUVRUZGBocPH+bbb7/F6XSSlZVFamoqLS0tVFZW0tLSQm5uLmFhYQwcOJCEhITfhKEiGSe/IhwOBwUFBezcuZOVK1fS0NBAQUEBFosFs9nste+JSpHh4eFERkYybdo0XC6X2AgsNDSUqVOn8tlnnxEbG0t+fj4LFixAq9UyZMgQ7rvvPi677DL0ev05/Z4SEhISFwp1dXUcPHiQsrIympqa0Ol0hIWF4ePjQ0xMDK2treh0OqxWK1arlbi4OKxWK3V1dbS2trJmzRrkcjk+Pj4IgkBxcTE7d+4kOzsbq9WK2+0WDZUDBw4QHx/P2LFjSUxM/FUbKZJx8ivAZDKRk5PD66+/ztq1azGbzQiCgEqlIiQkhMGDB6NQKNi0aRNOp9Prs2PHjmXo0KGsWrWKlStXUlFRgcvlErdXVVXhdDrJzc0V37NarezcuZMbb7yRrKws7r//fubMmYNKpTpn31lCQkLifFJWVsa+ffvIz8/H5XKh0+nQaDQMHjyYsrIyamtraWtrQ61WExoaiiAI+Pr6UltbS2NjI/X19cTGxhIaGopGo0EmkyGXywkLC8NkMlFVVUVkZCTt7e3Y7Xbkcjl+fn7U1dWxZMkSUlNTueKKK361mjOScXIR43A42LFjB08//TRbt27FZrMBoFAoGDduHA888ABpaWkcOnSITz75xMvoAIiKimL06NG89dZbWK3Wbs+RmJhIYGAghYWFXbYplUoSExO59dZbycvL4/7778ff3//Mf1EJCQmJC4S6ujp27NhBbm4uZrOZIUOGkJiYyJ49ezAajZSXlwOd87NnTtbr9YSEhLBt2zYUCgVtbW1ER0cDnQ0WPYvJ8PBwGhoakMvlNDU1YbFYiIiIwGKxoFAosNls2Gw2HA4HarWaFStWMGnSJBISEs7Xz3HWkIyTixCn08n27dt57bXX+PHHHzGbzajVapRKJYGBgTz66KNcf/31fP755zz88MMcO3asi2Gi1+uRyWT861//wm6393iu0tJSWltbu91ms9nQaDT84Q9/4IUXXiAnJ4dnn32WjIwMqcJHQkLiV0VLSwubNm0iLy8Ph8OB2+0WwzAFBQWEhYVxySWXcPjwYerr64HOhos6nY7AwEB27tyJWq3G399f9GDLZDJUKhUKhQKXy0VNTQ0ulwsfHx/8/PxobW2lurqaoKAgGhoaEAQBmUyGWq2mtLSUpKQk1q9fT3x8POPGjUOr1Z7Pn+iMIhknFxnNzc28+OKL/POf/xSt6WnTprFo0SI0Gg2RkZGEhoZyzz33sHTpUnrqTmCxWOjo6Ohxu4f6+voeE18FQWDJkiXMnTuXv/71r7z66qtMnDiRhx56iHvuuQc/P79f/H0lJCQkzjfFxcWsWrWKjo4OwsPDaWpqwul0iguxsLAwoqOj+fbbb2lpaUGhUOB2u8VQTUFBAS6XC7fbjVwuF/9FRUWhVqtpaWnB4XDQ1tYGdHq/9Xo9ra2tuN1umpqagE5vtWeh2dzczO7duxk/fjwmk4mPP/6YyZMn/2pyUSTj5CKivr6em266iTVr1iAIAqGhoTzyyCP88Y9/FJNSbTYbf/zjH09qmABdPCknw+12n3Tb//73P8rLy3nqqaf48MMP+cc//sG3337Lfffdx+zZs6WEWQkJiYuWwsJCvv76a4KCgsjIyGD//v0EBAQwffp04uPjgc5QzyeffILdbkelUiGXy9FoNCgUCpTKzsesp4zY6XQiCAIKhQKZTEZxcTHQ6c1OT0+nuLgYm81GVVWVeA0ej0lKSgrNzc3U19ej0WgA2LNnDyEhIUR
<base64-encoded PNG image data omitted>",
+ "text/plain": [
+ "
+