Music plays an important role in human development. Although music is an auditory phenomenon, it can also be expressed visually, and such representations have wide practical use. In this paper, we choose color, the most visually appealing medium, for visualization, and draw on psychology, audio signal processing, and deep learning to analyze the properties and characteristics of music in the time and frequency domains. By establishing a relationship model between music and color, we develop a color music visualization system on both an embedded platform and PCs.

On the embedded platform, the system samples and analyzes audio in real time using audio signal processing techniques. Based on the theory of time-frequency analysis, an improved algorithm identifies the pitch of the music; through an established mapping between pitch and color, the played music drives an RGB-LED color-mixing module via Pulse Width Modulation (PWM) to display the matching color in real time, visualizing sound as color. To ensure stable communication between modules and smooth, real-time playback of music and color, a dual-processor system was designed with attention to the work scheduling and timing arrangement of the system. On this basis, we completed the construction and design of each module as well as its functional debugging. Finally, extensive experimental testing and analysis confirmed the real-time performance of the system and the reliability of the algorithms, achieving the expected results. Because the system is developed and designed on an embedded platform, it has the significant advantages of portability and real-time operation, and it has practical application value in urban landscape lighting, automotive ambient lighting, and stage lighting.

In addition, music has broad prospects in psychotherapy and related areas, so this paper analyzes music further, not only in terms of its low-level features but also its high-level characteristic: emotion. Because emotion is complex, subjective, and ambiguous, the experiments and analyses for this part are conducted on a PC. Building on the Valence-Arousal dimensional emotion model, this paper analyzes the time-frequency features of music from different angles and proposes a CLSA model based on time-frequency images. The model recognizes the emotion of the music and outputs values in the two-dimensional Valence-Arousal space; through an established mapping between the emotion space and the color space, the trained model matches the played music with the corresponding color in real time. This achieves the matching of musical emotion and color, demonstrates the visualization of musical emotion as color, and offers application value for music therapy and other future fields.
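The embedded pipeline described above (time-frequency pitch identification, then a pitch-to-color mapping driven out as PWM duty cycles) can be sketched as follows. This is a minimal illustration only: it assumes a plain FFT peak picker for pitch and a hypothetical chromatic color wheel over the 12 pitch classes; the paper's improved recognition algorithm and its actual pitch-color table are not reproduced here.

```python
import colorsys
import numpy as np

def detect_pitch_class(samples, sample_rate):
    """Estimate the dominant pitch class (0-11, C = 0) of an audio frame
    via an FFT peak -- a simplified stand-in for the paper's improved
    time-frequency recognition algorithm."""
    window = np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(samples * window))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    peak_hz = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
    midi = 69 + 12 * np.log2(peak_hz / 440.0)      # A4 = 440 Hz = MIDI 69
    return int(round(midi)) % 12

def pitch_to_pwm_duty(pitch_class):
    """Map a pitch class onto a hue circle (an assumed convention) and
    return per-channel duty cycles (0.0-1.0) for the R, G, B PWM outputs
    of an RGB-LED mixing module."""
    r, g, b = colorsys.hsv_to_rgb(pitch_class / 12.0, 1.0, 1.0)
    return r, g, b

# Example: a pure 440 Hz tone should map to pitch class 9 (A).
sr = 8000
t = np.arange(2048) / sr
tone = np.sin(2 * np.pi * 440.0 * t)
print(detect_pitch_class(tone, sr))   # -> 9
```

On the actual hardware, the three returned duty cycles would be written to the microcontroller's PWM compare registers each frame, which is what produces the real-time color mixing.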
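The mapping from the Valence-Arousal emotion space to the color space can likewise be sketched. The conventions below (angle of the (V, A) vector selects hue, its length selects saturation, arousal lifts brightness) are illustrative assumptions, not the paper's actual mapping.

```python
import colorsys
import math

def va_to_rgb(valence, arousal):
    """Map a point in the Valence-Arousal plane (each in [-1, 1]) to an
    RGB color.  Assumed convention: the angle of the (V, A) vector picks
    the hue (positive valence / high arousal lands on warm hues), its
    length picks the saturation, and arousal raises the brightness."""
    angle = math.atan2(arousal, valence)            # -pi .. pi
    hue = (angle / (2 * math.pi)) % 1.0             # wrap onto [0, 1)
    saturation = min(1.0, math.hypot(valence, arousal))
    value = 0.5 + 0.5 * (arousal + 1) / 2           # 0.5 .. 1.0
    r, g, b = colorsys.hsv_to_rgb(hue, saturation, value)
    return int(r * 255), int(g * 255), int(b * 255)

# Happy/excited (V=0.8, A=0.6) vs. sad/calm (V=-0.7, A=-0.5)
print(va_to_rgb(0.8, 0.6))
print(va_to_rgb(-0.7, -0.5))
```

In the full system, the (V, A) pair would come from the CLSA model's output for the current audio segment, so the displayed color tracks the recognized emotion in real time.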