The popularization of intelligent consumer electronics plays a crucial role in strengthening the connection between the physical world and the networked world. Embedded devices such as intelligent containers have limited processor performance and memory resources. Traditional solutions require robotic arms, which are failure-prone and costly to maintain, whereas deep-learning solutions need only a camera and a development board, identifying retrieved goods through object detection. However, object detection methods have high computational complexity and large parameter counts, and are prone to false and missed detections of small and medium objects in complex scenes. This work therefore focuses on a vision-based solution for intelligent container commodities. The main contributions are as follows:

A lightweight commodity detection method based on fast and efficient feature-information perception and fusion is proposed. A lightweight network is combined with YOLOv5 to reconstruct an efficient information-perceiving lightweight backbone, which substantially reduces the number of network parameters and eases the tight memory budget of embedded devices. A multi-time-domain collaborative attention module processes channel and spatial information in parallel, reducing the feature loss caused by the sharp drop in parameter count and improving detection of small and medium objects in complex scenes. A two-stage fast feature aggregation network, with an activation function and attention mechanism better suited to embedded devices, fuses multi-scale features efficiently and quickly, further improving the representational capacity of the network. Compared with advanced detection methods, the proposed method reaches 98.6% mAP with about 41.2% fewer parameters, achieving both higher detection accuracy and a lighter model. An
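The two-stage feature aggregation described above fuses multi-scale features with learned weights. The abstract does not specify the exact fusion rule, so the following is only a minimal sketch of one common design assumption, fast normalized weighted fusion, where each feature map gets a non-negative learnable weight and the weights are normalized before summing:

```python
def fast_normalized_fusion(features, weights, eps=1e-4):
    """Fuse same-shaped feature maps with non-negative normalized weights.

    features: list of equal-length lists (feature maps already resized
              to a common scale); weights: one learnable scalar per map.
    """
    # Clamp weights to be non-negative (a ReLU) so the fusion stays stable,
    # then normalize so the effective weights sum to roughly one.
    w = [max(0.0, wi) for wi in weights]
    total = sum(w) + eps
    return [sum(wi * x for wi, x in zip(w, col)) / total
            for col in zip(*features)]
```

With equal weights this reduces to an (almost exact) element-wise average of the input maps; the `eps` term only guards against division by zero when all weights are clamped away.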
intelligent container commodity detection method integrating high-order interactive convolution and multi-dimensional attention is proposed, improving YOLOv7 to better suit dense, heavily occluded small-object detection. High-order interactive convolution reduces the computational complexity of the network and improves the detection of small, densely packed objects in intelligent containers. Multi-dimensional attention mines deep feature information and reduces the false-detection rate. A fast weighted feature pyramid network, combined with the Hard-Swish activation function, performs multi-scale feature fusion efficiently and quickly, strengthens feature learning for heavily occluded objects, and increases robustness. On the commodity detection task, this method reaches 98.8% mAP, with computational complexity reduced by about 60.3% and parameter count reduced by about 7.0%, outperforming mainstream object detectors.

An unmanned vending recognition and background management system is designed and implemented. The proposed detection network is trained on the commodity categories targeted by the recognition system, the trained model is migrated to a Jetson Nano development board, and inference is accelerated with TensorRT to detect the target goods in real time. At the same time, information on the goods a user takes is stored in the background management system, which manages vending machines and product information and provides visual reports such as orders and profits.
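The Hard-Swish activation mentioned above is a standard, well-documented function; a minimal sketch shows why it suits embedded boards, since it replaces the sigmoid in Swish with a cheap piecewise-linear clamp:

```python
def relu6(x):
    # ReLU6 clamps the activation to [0, 6]; only comparisons, no exp().
    return max(0.0, min(6.0, x))

def hard_swish(x):
    # Hard-Swish: x * ReLU6(x + 3) / 6 — a piecewise-linear approximation
    # of Swish (x * sigmoid(x)) that avoids the exponential, which is why
    # it is favored on low-power devices such as the Jetson Nano.
    return x * relu6(x + 3.0) / 6.0
```

For x >= 3 the function is the identity, for x <= -3 it is zero, and in between it interpolates smoothly enough to keep gradients useful during training.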