
HALCON 20.11: Deep Learning Notes (7)

 乘舟泛海賞雨 2021-03-27

In HALCON 20.11.0.0, deep learning methods are implemented. Below, we describe the most important terms used in the context of deep learning:

anchor (錨框)

Anchors are fixed bounding boxes. They serve as reference boxes (參考框), with the aid of which the network proposes bounding boxes for the objects to be localized (定位).

annotation (注釋)

An annotation is the ground truth information about what a given instance in the data represents, provided in a form the network can work with. This is, e.g., the bounding box and the corresponding label for an instance in object detection.

anomaly (異常)

An anomaly means something deviating from (偏離) the norm, something unknown.

backbone (骨干)

A backbone is a part of a pretrained classification network. Its task is to generate various feature maps (特征圖), which is why the classifying layer has been removed.

batch size (批大小) - hyperparameter 'batch_size'

The dataset is divided into smaller subsets of data, which are called batches. The batch size determines the number of images taken into a batch and thus processed simultaneously (同時).
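As a plain illustration of the batching idea (generic Python, not HALCON code; the file names are invented), a dataset can be split into batches of a fixed size like this:

```python
# Minimal sketch: group a list of samples into consecutive batches.
def make_batches(samples, batch_size):
    """Return consecutive slices of 'samples', each of length 'batch_size'."""
    return [samples[i:i + batch_size] for i in range(0, len(samples), batch_size)]

images = [f"image_{i:03d}.png" for i in range(10)]   # placeholder file names
for batch in make_batches(images, batch_size=4):
    print(batch)   # the last batch may contain fewer than batch_size images
```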

bounding box (邊界框)

Bounding boxes are rectangular boxes used to define a part within an image and to specify the localization of an object within an image.

class agnostic (類別無關)

Class agnostic means without knowledge of the different classes. In HALCON, we use it for the reduction of overlapping predicted bounding boxes. This means, in a class agnostic bounding box suppression, overlapping instances are suppressed while ignoring the knowledge of classes; thus strongly overlapping instances get suppressed independently of their class.

change strategy (改變策略)

A change strategy denotes when and how hyperparameters are changed during the training of a DL model.

class (類別)

Classes are discrete categories (離散類別) (e.g., 'apple', 'peach', 'pear') that the network distinguishes. In HALCON, the class of an instance is given by its appropriate annotation.

classifier (分類器)

In the context (上下文) of deep learning we refer to the term classifier as follows. The classifier takes an image as input and returns the inferred confidence values (推斷置信值), expressing how likely the image belongs to every distinguished class. E.g., the three classes 'apple', 'peach', and 'pear' are distinguished. Now we give an image of an apple to the classifier. As a result, the confidences 'apple': 0.92, 'peach': 0.07, and 'pear': 0.01 could be returned.
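To make the example above concrete, here is a small Python sketch (not HALCON code) showing how raw network scores can be turned into confidences that sum to one via a softmax; the raw scores are invented:

```python
import math

def softmax(scores):
    """Convert raw scores into confidences in [0, 1] that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

classes = ['apple', 'peach', 'pear']
raw_scores = [4.0, 1.4, -0.5]          # invented network outputs for an image of an apple
for name, conf in zip(classes, softmax(raw_scores)):
    print(f"{name}: {conf:.2f}")       # apple: 0.92, peach: 0.07, pear: 0.01
```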

COCO (上下文常見對象)

COCO is an abbreviation (縮寫) for "common objects in context", a large-scale object detection, segmentation, and captioning dataset. There is a common file format for each of the different annotation (注釋) types.

confidence (置信度)

Confidence is a number expressing (表示) the affinity (親緣關系) of an instance to a class. In HALCON the confidence is the probability, given in the range of [0,1]. Alternative name: score

confusion matrix (混淆矩陣)

A confusion matrix is a table which compares the classes predicted by the network (top-1) with the ground truth class affiliations (從屬關系). It is often used to visualize the performance of the network on a validation or test set.
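A confusion matrix can be computed directly from pairs of ground truth labels and top-1 predictions; the following Python sketch uses invented labels and is not HALCON code:

```python
# Rows correspond to the ground truth class, columns to the predicted class.
classes = ['apple', 'peach', 'pear']
ground_truth = ['apple', 'apple', 'peach', 'pear', 'pear', 'pear']
predicted    = ['apple', 'peach', 'peach', 'pear', 'apple', 'pear']

index = {name: i for i, name in enumerate(classes)}
matrix = [[0] * len(classes) for _ in classes]
for truth, pred in zip(ground_truth, predicted):
    matrix[index[truth]][index[pred]] += 1

for name, row in zip(classes, matrix):
    print(f"{name}: {row}")   # diagonal entries are correct predictions
```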

Convolutional Neural Networks (CNNs) (卷積神經網絡)

Convolutional Neural Networks are neural networks used in deep learning, characterized by the presence of at least one convolutional layer (卷積層) in the network. They are particularly successful for image classification.

data (數(shù)據(jù))

We use the term data in the context of deep learning for instances to be recognized (e.g., images) and their appropriate information concerning the predictable characteristics (可預測特征) (e.g., the labels in case of classification).

data augmentation (數(shù)據(jù)擴充)

Data augmentation is the generation of altered copies of samples within a dataset. This is done in order to augment the richness of the dataset, e.g., through flipping or rotating.
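As a simple sketch of such altered copies (using NumPy on a placeholder array instead of a real image; these are not HALCON's augmentation operators):

```python
import numpy as np

image = np.arange(12, dtype=np.uint8).reshape(3, 4)   # stands in for a grayscale image

augmented = [
    np.fliplr(image),       # horizontal flip
    np.flipud(image),       # vertical flip
    np.rot90(image, k=1),   # rotation by 90 degrees
]
for copy in augmented:
    print(copy.shape)       # each altered copy can serve as an additional training sample
```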

dataset (數(shù)據(jù)集): training (訓練集), validation (驗證集), and test set (測試集)

With dataset we refer to the complete set of data used for a training. The dataset is split into three, if possible disjoint, subsets:

  1. The training set contains the data on which the algorithm optimizes the network directly.
  2. The validation set contains the data to evaluate the network performance during training.
  3. The test set is used to test possible inferences (predictions), thus to test the performance on data without any influence on the network optimization.
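A minimal Python sketch of such a split (the 70/15/15 ratio and the file names are invented, not a HALCON default):

```python
import random

samples = [f"image_{i:03d}.png" for i in range(100)]   # placeholder file names
random.seed(0)
random.shuffle(samples)

n_train = int(0.70 * len(samples))
n_val   = int(0.15 * len(samples))
train_set = samples[:n_train]                 # used to optimize the network directly
val_set   = samples[n_train:n_train + n_val]  # used to evaluate performance during training
test_set  = samples[n_train + n_val:]         # used only for the final evaluation
print(len(train_set), len(val_set), len(test_set))   # 70 15 15
```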

deep learning (深度學習)

The term "deep learning" was originally used to describe the training of neural networks with multiple hidden layers. Today it is rather used as a generic term for several different concepts in machine learning. In HALCON, we use the term deep learning for methods using a neural network with multiple hidden layers.

epoch (輪次)

In the context of deep learning, an epoch is a single training iteration over the entire training data, i.e., over all batches. Iterations over epochs should not be confused with the iterations over single batches (e.g., within an epoch).

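The relation between epochs and batch iterations can be written out as two nested loops; this is a generic Python sketch, not HALCON code:

```python
samples = list(range(10))        # placeholder training samples
batch_size = 4
batches = [samples[i:i + batch_size] for i in range(0, len(samples), batch_size)]

num_epochs = 3
for epoch in range(num_epochs):              # iterations over epochs
    for step, batch in enumerate(batches):   # iterations over single batches within an epoch
        pass                                 # one optimization step per batch would go here
    print(f"epoch {epoch + 1}: {len(batches)} batch iterations")
```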

errors (錯誤)

In the context of deep learning, we speak of an error when the inferred class of an instance does not match the real class (e.g., the ground truth label in the case of classification). Within HALCON, the term error in deep learning refers to the top-1 error.

feature map (特征圖)

A feature map is the output of a given layer.

feature pyramid (特征金字塔)

A feature pyramid is simply a group of feature maps, whereby every feature map originates from a different level, i.e., it is smaller than the feature maps of the preceding levels.

head (頭)

Heads are subnetworks. For certain architectures they attach to selected pyramid levels. These subnetworks process information from previous parts of the total network in order to generate spatially resolved output, e.g., for the class predictions. From this they generate the output of the total network and thereby constitute the input of the losses.

hyperparameter (超參數(shù))

Like every machine learning model, CNNs contain many formulas with many parameters. During training the model learns from the data in the sense of optimizing the parameters. However, such models can have other, additional parameters, which are not directly learned during the regular training. These parameters have values set before starting the training. We refer to this last type of parameters as hyperparameters in order to distinguish them from the network parameters that are optimized during training. Or from another point of view, hyperparameters are solver-specific parameters. Prominent examples are the initial learning rate or the batch size.

inference phase (推理階段)

The inference phase is the stage when a trained network is applied to predict (infer) instances (which can be the total input image or just a part of it) and possibly their localization. Unlike during the training phase, the network is not changed anymore in the inference phase.

intersection over union (交並比)

The intersection over union (IoU) is a measure to quantify (程度) the overlap of two areas. We can determine the parts common to both areas, the intersection, as well as the united areas, the union. The IoU is the ratio of the intersection area to the union area. The application of this concept may differ between the methods.
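For axis-aligned bounding boxes the IoU can be computed as follows (a generic Python sketch with boxes given as (x1, y1, x2, y2); HALCON's operators may apply the concept differently):

```python
def intersection_over_union(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

print(intersection_over_union((0, 0, 2, 2), (1, 1, 3, 3)))   # 1/7 ≈ 0.143
```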

label (標簽)

Labels are arbitrary strings used to define the class of an image. In HALCON these labels are given by the image name (possibly followed by a combination of underscore and digits) or by the directory name, e.g., 'apple_01.png', 'pear.png', 'peach/01.png'.
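The naming convention above could be parsed like this (a simplified Python sketch for single-level paths; this is not the way HALCON actually reads datasets):

```python
import os
import re

def label_from_path(path):
    """Use the directory name if present, otherwise the file name without a
    trailing '_<digits>' part (e.g. 'apple_01.png' -> 'apple')."""
    directory = os.path.basename(os.path.dirname(path))
    if directory:
        return directory
    stem = os.path.splitext(os.path.basename(path))[0]
    return re.sub(r'_\d+$', '', stem)

print(label_from_path('apple_01.png'))   # apple
print(label_from_path('pear.png'))       # pear
print(label_from_path('peach/01.png'))   # peach
```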

layer and hidden layer (層和隱藏層)

A layer is a building block in a neural network, thus performing specific tasks (e.g., convolution (卷積), pooling (池化), etc., for further details we refer to the “Solution Guide on Classification”). It can be seen as a container, which receives weighted input, transforms it, and returns the output to the next layer. Input and output layers are connected to the dataset, i.e., the images or the labels, respectively. All layers in between are called hidden layers.

learning rate (學習率) - hyperparameter 'learning_rate'

The learning rate is the weighting (權重), with which the gradient (see the entry for the stochastic gradient descent SGD) is considered when updating the arguments of the loss function. In simple words, when we want to optimize a function, the gradient tells us the direction in which we shall optimize and the learning rate determines how far along this direction we step. Alternative names: step size

level (層次)

Within a feature pyramid network, the term level denotes the whole group of layers whose feature maps have the same width and height. Thereby the input image represents level 0.

loss (損失)

A loss function compares the prediction from the network with the given information, what it should find in the image (and, if applicable, also where), and penalizes deviations (懲罰偏差). This loss function is the function we optimize during the training process to adapt the network to a specific task. Alternative names: objective (目標) function, cost (成本) function, utility (效用) function

momentum (動量) - hyperparameter 'momentum'

The momentum μ is used for the optimization of the loss function arguments. When the loss function arguments are updated (after having calculated the gradient), a fraction μ of the previous update vector (of the past iteration step) is added. This has the effect of damping oscillations (阻尼振蕩). We refer to the hyperparameter μ as momentum. When μ is set to 0, the momentum method has no influence. In simple words, when we update the loss function arguments, we still remember the step we did for the last update. Now we go a step in the direction of the gradient with a length according to the learning rate, and additionally we repeat the step we did last time, but this time only μ times as long.
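The update rule described above can be sketched for a single scalar weight as follows (a generic illustration of the momentum idea, not HALCON's internal implementation; the toy function and values are invented):

```python
def momentum_step(weight, gradient, velocity, learning_rate, mu):
    # New update vector: mu times the previous update minus the gradient step.
    velocity = mu * velocity - learning_rate * gradient
    return weight + velocity, velocity

# Toy example: minimize f(w) = w^2, whose gradient is 2*w.
w, v = 5.0, 0.0
for _ in range(200):
    w, v = momentum_step(w, gradient=2 * w, velocity=v, learning_rate=0.1, mu=0.9)
print(round(w, 4))   # a value very close to the minimum at 0
```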

non-maximum suppression (非極大值抑制)

In object detection, non-maximum suppression is used to suppress (抑制) overlapping predicted bounding boxes. When different instances overlap more than a given threshold value, only the one with the highest confidence value is kept while the other instances, not having the maximum confidence value, are suppressed.
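A class agnostic variant of non-maximum suppression (ignoring class labels, cf. the entry 'class agnostic') can be sketched in plain Python like this; the boxes are (x1, y1, x2, y2) tuples and all values are invented:

```python
def iou(a, b):
    inter_w = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    inter_h = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = inter_w * inter_h
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def non_max_suppression(boxes, confidences, threshold=0.5):
    # Visit boxes in order of descending confidence.
    order = sorted(range(len(boxes)), key=lambda i: confidences[i], reverse=True)
    keep = []
    for i in order:
        # Keep a box only if it does not overlap an already kept box too strongly.
        if all(iou(boxes[i], boxes[j]) <= threshold for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
confs = [0.9, 0.8, 0.7]
print(non_max_suppression(boxes, confs))   # [0, 2]: the second box is suppressed
```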

overfitting (過擬合)

Overfitting happens when the network starts to 'memorize' training data instead of learning how to find general rules for the classification. This becomes visible when the model continues to minimize error on the training set but the error on the validation set increases. Since most neural networks have a huge amount of weights, these networks are particularly prone to overfitting.

regularization (正則化) - hyperparameter 'weight_prior'

Regularization is a technique to prevent neural networks from overfitting by adding an extra term to the loss function. It works by penalizing (懲罰) large weights, i.e., pushing the weights towards zero. Simply put, regularization favors (傾向于) simpler models that are less likely to fit to noise in the training data and generalize better. In HALCON, regularization is controlled via the parameter 'weight_prior'. Alternative names: regularization parameter, weight decay parameter, λ (note that in HALCON we use λ for the learning rate and within formulas the symbol α for the regularization parameter).
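Conceptually, the extra term is a penalty on the squared weights added to the data loss; the following Python sketch shows one common form of this (the names and numbers are invented, and it is not necessarily HALCON's exact formula):

```python
def regularized_loss(data_loss, weights, weight_prior):
    """Data loss plus an L2 penalty; 'weight_prior' plays the role of the
    regularization parameter."""
    penalty = weight_prior * sum(w * w for w in weights)
    return data_loss + penalty

weights = [0.5, -2.0, 1.5]
print(regularized_loss(data_loss=0.30, weights=weights, weight_prior=0.01))   # ≈ 0.365
```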

retraining (再訓練)

We define retraining as updating the weights of an already pretrained network, i.e., during retraining the network learns the specific task. Alternative names: fine-tuning (微調).

solver (求解器)

The solver optimizes the network by updating the weights in a way to optimize (i.e., minimize) the loss.

stochastic gradient descent (SGD) (隨機梯度下降法)

SGD is an iterative optimization algorithm for differentiable (可微) functions. In deep learning we use this algorithm to calculate the gradient to optimize (i.e., minimize) the loss function. A key feature of the SGD is to calculate the gradient only based on a single batch containing stochastically (隨機) sampled (采樣) data and not all data.
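A toy Python sketch of minibatch SGD (fitting a single parameter on invented data; not HALCON code) makes the "one gradient step per stochastically sampled batch" idea concrete:

```python
import random

# Toy task: fit w in y = w * x to data generated with w = 3.
data = [(x, 3.0 * x) for x in range(1, 21)]
w, learning_rate, batch_size = 0.0, 0.001, 4
random.seed(0)

for epoch in range(20):
    random.shuffle(data)                     # stochastic sampling of the batches
    for start in range(0, len(data), batch_size):
        batch = data[start:start + batch_size]
        # Gradient of the mean squared error over this batch only, w.r.t. w.
        grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
        w -= learning_rate * grad            # one update step per batch
print(round(w, 3))   # close to 3.0
```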

top-k error

The classifier infers for a given image class confidences of how likely the image belongs to every distinguished class. Thus, for an image we can sort the predicted classes according to the confidence value the classifier assigned. The top-k error tells the ratio of predictions where the ground truth class is not within the k predicted classes with highest probability. In the case of top-1 error, we check if the target label matches the prediction with the highest probability. In the case of top-3 error, we check if the target label matches one of the top 3 predictions (the 3 labels getting the highest probability for this image). Alternative names: top-k score.
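The top-k error can be computed from the per-image confidences as in this Python sketch (the confidence values are invented, matching the apple/peach/pear example):

```python
def top_k_error(confidences_per_image, ground_truth, k):
    errors = 0
    for confs, truth in zip(confidences_per_image, ground_truth):
        # Classes sorted by descending confidence; keep the k most confident ones.
        top_k = sorted(confs, key=confs.get, reverse=True)[:k]
        if truth not in top_k:
            errors += 1
    return errors / len(ground_truth)

confidences = [
    {'apple': 0.92, 'peach': 0.07, 'pear': 0.01},
    {'apple': 0.30, 'peach': 0.55, 'pear': 0.15},
]
truths = ['apple', 'pear']
print(top_k_error(confidences, truths, k=1))   # 0.5: 'pear' is not the top-1 prediction
print(top_k_error(confidences, truths, k=2))   # 0.5: 'pear' is not among the top 2 either
```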

transfer learning (遷移學習)

Transfer learning refers to the technique where a network is built upon the knowledge of an already existing network. In concrete terms this means taking an already (pre)trained network with its weights and adapting the output layer to the respective application to get your network. In HALCON, we also see the following retraining step as a part of transfer learning.

underfitting (欠擬合)

Underfitting occurs when the model over-generalizes (過度概括). In other words it is not able to describe the complexity of the task. This is directly reflected in the error on the training set, which does not decrease significantly.

weights (權重)

In general, weights are the free parameters of the network, which are altered during the training due to the optimization of the loss. A layer with weights multiplies its input values by them or adds them to its input values. In contrast to hyperparameters, weights are optimized and thus changed during the training.
