In this paper, a BERT model containing a large number of encoder layers is used to preprocess the literary text data and generate the corresponding word vectors. The term frequency-inverse document frequency (TF-IDF) algorithm and the SnowNLP sentiment computation model are introduced to count word frequencies and calculate sentiment polarity, and a metaphorical sentiment polarity computation framework is built on top of them. Culturally relevant attributes are embedded in the word vectors, contextual semantics are modeled with a long short-term memory (LSTM) network, and an attention mechanism is trained to obtain an attention weight matrix over the word items, allowing their sentiment polarity to be classified accurately. Taking the selected comparison works as an example, the average word length in classical Chinese literature is 1.08-1.20, and real (content) words account for 74.15% of the total. The proportion of negative sentiment for the 6 keywords was 20%, 15%, 15%, 21%, 15%, and 10%; five of the six representative works are dominated by negative emotions. The average word length in Western Romantic literature is 2.26-2.46, with real words accounting for 78.27%. The proportion of positive sentiment for its 6 keywords was 15%, 5%, 20%, 25%, 20%, and 15%; five of the six representative works are dominated by positive emotions.
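The word-frequency step above can be sketched as a plain TF-IDF computation. The snippet below is a minimal pure-Python illustration, not the paper's implementation (which pairs TF-IDF with BERT vectors and SnowNLP polarity scores); the toy corpus and its tokens are hypothetical.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF weights for each term in each tokenised document.

    TF is the term's relative frequency within a document; IDF is
    log(N / df), where df is the number of documents containing the term.
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # count each term once per document
    weights = []
    for doc in docs:
        counts = Counter(doc)
        total = len(doc)
        weights.append({
            term: (c / total) * math.log(n / df[term])
            for term, c in counts.items()
        })
    return weights

# Toy corpus of pre-tokenised "documents" (hypothetical tokens).
corpus = [["moon", "wine", "moon"], ["wine", "river"], ["moon", "sorrow"]]
w = tf_idf(corpus)
```

Terms shared by every document get an IDF of zero and drop out of the weighting, which is why TF-IDF is useful for surfacing the distinctive keywords compared across the two literary traditions.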
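The attention step can likewise be illustrated in miniature. The sketch below shows dot-product attention producing a normalised weight per word item; in the paper these weights are learned jointly with the LSTM, whereas here the hidden states and query are fixed toy values chosen only to make the arithmetic visible.

```python
import math

def attention_weights(scores):
    """Softmax over raw attention scores, giving weights that sum to 1."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(hidden_states, query):
    """Dot-product attention: score each state, softmax, weighted sum."""
    scores = [sum(h_i * q_i for h_i, q_i in zip(h, query))
              for h in hidden_states]
    weights = attention_weights(scores)
    dim = len(hidden_states[0])
    context = [sum(wt * h[d] for wt, h in zip(weights, hidden_states))
               for d in range(dim)]
    return weights, context

# Toy LSTM hidden states for three word items (hypothetical values).
H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
q = [1.0, 1.0]
attn_w, ctx = attend(H, q)
```

The word item whose hidden state aligns best with the query receives the largest weight, which is how the trained weight matrix lets the classifier focus on the sentiment-bearing terms.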