Good loss is flexible to samples

    2023-10-15

In NLP, embeddings are a key component for many downstream tasks such as classification, NLI, etc. A good loss function usually benefits embedding training. Recently, I have read two papers r...

    Read More

    Text semantic similarity

    2023-09-04

    Semantic Similarity

    Cosine similarity is commonly used for semantic similarity because it offers several advantages in measuring the similarity between vectors,...
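    As a quick illustration (not taken from the post itself), cosine similarity between two embedding vectors is simply their dot product divided by the product of their norms; the vectors below are made-up placeholders:

    ```python
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Dot product normalized by vector magnitudes; ranges from -1 to 1.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Hypothetical sentence embeddings for two similar sentences.
    u = np.array([0.2, 0.1, 0.4])
    v = np.array([0.3, 0.0, 0.5])
    print(cosine_similarity(u, v))  # close to 1.0 when the vectors point in similar directions
    ```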

    Read More