SSD Object Detection Method
SSD Network Structure
Object detection methods based on "Proposal + Classification", i.e. the R-CNN series (R-CNN, SPPnet, Fast R-CNN and Faster R-CNN), achieve high accuracy, but improving mAP while also taking speed into account has gradually become a trend in object detection. Although YOLO achieves real-time performance, its mAP is still far from state-of-the-art results. YOLO predicts only a small, fixed number of boxes per grid cell, so objects are easily missed.
In terms of recognition, YOLO is fairly sensitive to the scale of objects, and its generalization ability for objects with large scale changes is poor. In view of these deficiencies of YOLO, the SSD method proposed in this paper improves on both aspects while balancing the requirements of mAP and real-time performance. Under real-time constraints, its accuracy is close to the state of the art. On the VOC2007 test set with an input image size of 300x300, it achieves 58 frames per second (Titan X GPU) at 72.1% mAP; with an input size of 500x500, mAP reaches 75.1%. The authors' idea is essentially Faster R-CNN + YOLO: the regression idea of YOLO combined with the anchor boxes of Faster R-CNN.
This paper uses the basic network structure of VGG16, keeping its first 5 stages, and uses the atrous algorithm to convert the fc6 and fc7 layers into two convolutional layers. It then adds 3 more convolutional layers and a pooling layer. Feature maps at different levels are used to predict the offsets of the default boxes and the scores of the various categories (the general idea: use a common structure, such as the first 5 conv stages, as the base network, and add extra layers on top of it), and the final detection result is obtained through NMS.
The feature maps of these added convolutional layers differ greatly in size, allowing objects at different scales to be detected: on the low-level feature maps the receptive field is relatively small, while on the high-level feature maps it is relatively large, and convolution is applied on these different feature maps to achieve multi-scale detection. Compare this with YOLO, which has two fully connected layers at the back: after a fully connected layer, each output observes the entire image, which is not reasonable. SSD removes the fully connected layers, so each output only uses information around the target, such as its local context, which improves rationality. Moreover, different feature maps predict boxes with different aspect ratios, so SSD predicts boxes of more ratios than YOLO. (See the horizontal flow in the figure below.)
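To make the multi-scale idea concrete, here is a minimal sketch of the default-box scale schedule from the SSD paper: each of the m feature maps used for prediction gets a scale s_k = s_min + (s_max - s_min)(k - 1)/(m - 1), with s_min = 0.2 and s_max = 0.9 as reported by the authors. The function name is my own; only the formula and constants come from the paper.

```python
def default_box_scales(num_feature_maps=6, s_min=0.2, s_max=0.9):
    """Scale of the default boxes on each prediction feature map.

    s_k = s_min + (s_max - s_min) * (k - 1) / (m - 1),  k = 1..m
    The lowest (largest) feature map gets the smallest boxes, the
    highest (smallest) feature map the largest ones.
    """
    m = num_feature_maps
    return [s_min + (s_max - s_min) * (k - 1) / (m - 1) for k in range(1, m + 1)]

scales = default_box_scales()
```

The scales increase linearly from 0.2 on the first prediction map to 0.9 on the last, which is how low-level maps end up responsible for small objects and high-level maps for large ones.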
Key point 2: multi-level feature maps produce default boxes together with 4 position offsets and 21 class confidences
All the feature points on the feature maps of different scales (38x38x512, 19x19x512, 10x10x512, 5x5x512, 3x3x512, 1x1x256 in the picture above) are used. Take the 5x5 map as an example: its # default_boxes = 6.
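A sketch of how the 6 default boxes per feature-map point can be generated, following the SSD recipe: one box per aspect ratio in {1, 2, 3, 1/2, 1/3} at scale s_k, plus an extra square box at scale sqrt(s_k * s_{k+1}). Function and variable names are illustrative, not from the paper's code.

```python
import itertools
import math

def default_boxes_for_map(fmap_size, scale, next_scale,
                          aspect_ratios=(1.0, 2.0, 3.0, 0.5, 1.0 / 3.0)):
    """Generate (cx, cy, w, h) default boxes, normalized to [0, 1],
    for one square feature map of side fmap_size."""
    boxes = []
    for i, j in itertools.product(range(fmap_size), repeat=2):
        # Box centers sit at the middle of each feature-map cell.
        cx = (j + 0.5) / fmap_size
        cy = (i + 0.5) / fmap_size
        for ar in aspect_ratios:
            boxes.append((cx, cy, scale * math.sqrt(ar), scale / math.sqrt(ar)))
        # Sixth box: aspect ratio 1 at the geometric mean of adjacent scales.
        extra = math.sqrt(scale * next_scale)
        boxes.append((cx, cy, extra, extra))
    return boxes

boxes = default_boxes_for_map(fmap_size=5, scale=0.62, next_scale=0.76)
# 5 * 5 * 6 = 150 default boxes on the 5x5 map
```

Each of the 150 boxes then gets its own 4 offset regressions and 21 class confidences from the prediction convolutions.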
The key to training with supervised learning is the manually annotated labels. For network models that contain default boxes (called anchors in Faster R-CNN), such as YOLO, Faster R-CNN and MultiBox, the key point is how to map the annotation information (ground truth box, ground truth category) onto the default boxes.
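The mapping described above is done by overlap matching. Below is a minimal sketch, assuming the common IoU >= 0.5 rule (SSD matches each ground truth to the best default box and additionally to any default box with jaccard overlap above 0.5); the helper names are my own, and for brevity only the threshold part is shown.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes in (xmin, ymin, xmax, ymax) form."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def match(default_boxes, gt_boxes, gt_labels, threshold=0.5):
    """Assign each default box the class of the ground-truth box it best
    overlaps, if that overlap reaches the threshold; else 0 (background)."""
    labels = []
    for db in default_boxes:
        best_iou, best_label = 0.0, 0
        for gt, lab in zip(gt_boxes, gt_labels):
            ov = iou(db, gt)
            if ov > best_iou:
                best_iou, best_label = ov, lab
        labels.append(best_label if best_iou >= threshold else 0)
    return labels

labels = match([(0.0, 0.0, 1.0, 1.0), (0.5, 0.5, 0.6, 0.6)],
               gt_boxes=[(0.1, 0.1, 0.9, 0.9)], gt_labels=[7])
# first default box overlaps the ground truth well (IoU 0.64) -> class 7;
# the second barely overlaps -> background (0)
```

Matched default boxes become positive samples (and receive offset regression targets from their ground truth box); the rest become negatives.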
In the prediction stage, the offset of each default box and the corresponding score for each category are directly predicted. Finally, the detection result is obtained by NMS.
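The NMS step can be sketched as the standard greedy procedure (0.45 is a commonly used threshold for SSD-style detectors; this is a generic illustration, not the authors' exact code):

```python
def iou(box_a, box_b):
    """IoU of two boxes in (xmin, ymin, xmax, ymax) form."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def nms(boxes, scores, iou_threshold=0.45):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop every remaining box that overlaps it above the threshold,
    then repeat. Returns the indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= iou_threshold]
    return keep

keep = nms([(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)],
           scores=[0.9, 0.8, 0.7])
# the second box heavily overlaps the first and is suppressed
```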
In this paper, feature maps of different layers are used to realize the detection of objects at different scales.
Almost every point on the feature maps used for prediction has 6 different default boxes, and most of the default boxes are negative samples, leading to an imbalance between positive and negative samples. During training, the strategy of Hard Negative Mining is applied: sort all the obtained boxes by confidence loss and keep the ratio of positive to negative examples at 1:3, to balance positive and negative instances. This improves mAP by about 4%.
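The 1:3 selection described above can be sketched as follows (the function name and the toy loss values are illustrative):

```python
def hard_negative_mining(conf_losses, labels, neg_pos_ratio=3):
    """Select which default boxes contribute to the classification loss:
    all positives, plus the negatives with the highest confidence loss,
    capped at neg_pos_ratio times the number of positives."""
    pos = [i for i, lab in enumerate(labels) if lab > 0]
    neg = [i for i, lab in enumerate(labels) if lab == 0]
    # Hardest negatives first: sort by descending confidence loss.
    neg.sort(key=lambda i: conf_losses[i], reverse=True)
    return pos + neg[: neg_pos_ratio * len(pos)]

keep = hard_negative_mining(
    conf_losses=[0.1, 0.5, 0.9, 0.2, 0.8, 0.3],
    labels=[1, 0, 0, 0, 0, 0])
# 1 positive -> at most 3 negatives survive, the ones with highest loss
```

Only the kept indices enter the loss, so easy negatives (background boxes the classifier already handles) are discarded and the 1:3 ratio is maintained.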
To make the model more robust to inputs of different sizes and shapes, the authors randomly augment the data in the following ways:
When the center of the ground truth box is in the sampled patch, we keep the overlapped part. After these sampling steps, each sampled patch is resized to a fixed size and randomly flipped horizontally with probability 0.5. Experiments show that mAP can be increased by 8.8%.
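The two augmentation rules above, keeping a ground-truth box only if its center lies in the sampled patch, and flipping horizontally with probability 0.5, can be sketched as follows (a simplified illustration with hypothetical function names; boxes and patches are axis-aligned (xmin, ymin, xmax, ymax) tuples):

```python
import random

def keep_after_crop(gt_boxes, patch):
    """Keep only ground-truth boxes whose center falls inside the
    sampled patch, clipping them to the patch and shifting them into
    patch coordinates."""
    px1, py1, px2, py2 = patch
    kept = []
    for x1, y1, x2, y2 in gt_boxes:
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        if px1 <= cx <= px2 and py1 <= cy <= py2:
            kept.append((max(x1, px1) - px1, max(y1, py1) - py1,
                         min(x2, px2) - px1, min(y2, py2) - py1))
    return kept

def random_hflip(boxes, width, p=0.5):
    """Mirror the boxes horizontally with probability p."""
    if random.random() >= p:
        return boxes
    return [(width - x2, y1, width - x1, y2) for x1, y1, x2, y2 in boxes]
```

For example, cropping the patch (0, 0, 6, 6) out of an image keeps a box centered at (2, 2) but drops one centered at (9, 9); the image itself would be cropped and flipped with the same parameters.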