Posts
- RoFormer: Enhanced Transformer with Rotary Position Embedding
- Scalable Diffusion Models with Transformers
- Matryoshka Representation Learning
- BGE M3-Embedding: Multi-Lingual, Multi-Functionality, Multi-Granularity Text Embeddings Through Self-Knowledge Distillation
- Text Embeddings by Weakly-Supervised Contrastive Pre-training
- Bayesian vs. Frequentist Inference