Putting the reading passage before the question and answer choices (context-question-options, CQO) makes language models noticeably more accurate than putting it after them (question-options-context, QOC), by about 15 percentage points on average.
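To make the two orderings concrete, here is a minimal sketch in Python; the placeholder passage, question, and options are invented, and these templates are only an illustration of the idea, not the paper's exact prompts:

```python
# Illustrative only: the passage, question, and options are placeholders.
passage = "..."  # the reading passage (context)
question = "Which statement does the passage best support?"
options = ["(A) ...", "(B) ...", "(C) ...", "(D) ..."]

# CQO ordering: context first, then question, then answer choices.
cqo_prompt = "\n\n".join([passage, question, "\n".join(options)])

# QOC ordering: question and answer choices first, context last.
qoc_prompt = "\n\n".join([question, "\n".join(options), passage])
```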
C2LLM is a new family of code embedding models that makes code search faster and more accurate, matching a query to the right code snippets.
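As a rough sketch of how a code embedding model is used for search, the example below ranks snippets by cosine similarity to a query embedding; the `embed` function is a toy stand-in (a hashed character-trigram vector), not C2LLM's actual API or architecture:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy stand-in: hash character trigrams into a fixed-size vector.
    # A real system would call a code embedding model here instead.
    v = np.zeros(256)
    for i in range(len(text) - 2):
        v[hash(text[i:i + 3]) % 256] += 1.0
    return v

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def search(query: str, snippets: list[str]) -> list[tuple[float, str]]:
    # Rank candidate code snippets by similarity to the query embedding.
    q = embed(query)
    return sorted(((cosine(q, embed(s)), s) for s in snippets), reverse=True)
```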