Reciprocal Rank Fusion

Jaskamal Singh

Reciprocal Rank Fusion (RRF) in RAG (Retrieval-Augmented Generation) is a technique used to combine multiple ranked lists of documents from different retrieval systems or search modes into a single, more accurate ranking.
This is particularly useful when you have different search methods (like lexical and semantic search) that each return relevant results, which then need to be merged for better overall performance.
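
At its core, RRF gives each document a score based on its rank in every list and then re-sorts everything by the combined score. The standard formula is:

    RRF_score(d) = sum over all ranked lists of 1 / (k + rank(d))

Here rank(d) is the document's position in a list (1 = top) and k is a smoothing constant, most commonly set to 60, which stops a single first-place rank from dominating the fused result.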


Now let's understand, in simple terms, what actually happens in the Reciprocal Rank Fusion technique:

Imagine you are planning a wedding, and you ask 3 of your cousins to suggest good caterers.

  • Cousin 1 gives you a list: Sharma Caterers (rank 1), Gupta Foods (rank 2), and Tandoori Delights (rank 3).

  • Cousin 2 gives another list: Tandoori Delights (rank 1), Sharma Caterers (rank 2), Biryani House (rank 3).

  • Cousin 3 says: Gupta Foods (rank 1), Sharma Caterers (rank 2), Tandoori Delights (rank 3).

Now every cousin has their own opinion. For one, Sharma is the best; for another, Tandoori.
Total confusion! 🫠


What does RRF do?

  • The higher a caterer sits in a list, the more points it gets.

  • Sharma is in the top 2 of every list, so its total score ends up high.

  • Tandoori also shows up in every list, so it collects good points too.

  • Biryani House was suggested by only one cousin, and in third place at that, so it gets fewer points.


Final Result after RRF:

  1. Sharma Caterers (highest points, showed up everywhere)

  2. Tandoori Delights (performed well in every list)

  3. Gupta Foods (topped one list, but didn't appear in all of them)

  4. Biryani House (suggested by only one cousin)


Simple one-liner:

"Whoever is ranked near the top by more people, and whose name keeps coming up again and again, is the one RRF pushes to the very top."

Let's get into the code:

from pathlib import Path
from collections import Counter
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_qdrant import QdrantVectorStore
from dotenv import load_dotenv
from openai import OpenAI
import os
import ast


# Load environment variables from .env file
load_dotenv()
apikey = os.environ["OPENAI_API_KEY"]

# Initialize OpenAI client
client = OpenAI(api_key=apikey)

# 1. Load and split PDF
pdf_path = Path(__file__).parent / "node_js_sample.pdf"
loader = PyPDFLoader(pdf_path)
docs = loader.load()

# Split document into chunks
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=200,
)
split_docs = text_splitter.split_documents(documents=docs)

# 2. Create an embedder
embedder = OpenAIEmbeddings(
    model="text-embedding-3-large",
    api_key=apikey
)

# Run the block below only once, the first time, to insert data into Qdrant
# (from_documents creates the collection and inserts split_docs in one step,
#  so a separate add_documents call is not needed)
# vector_store = QdrantVectorStore.from_documents(
#     documents=split_docs,
#     embedding=embedder,
#     url="http://localhost:6333",
#     collection_name="learning_node_js",
# )

# Connect to existing Qdrant vector store
retriever = QdrantVectorStore.from_existing_collection(
    url="http://localhost:6333",
    collection_name="learning_node_js",
    embedding=embedder,
)

print("๐Ÿ“„ PDF Ingestion Complete!\n")

# 3. Take user question
user_query = input("Ask a question about Node.js: ")

# 4. Query Expansion Prompt
augmentation_prompt = f"""Generate 3 semantically different variations of this question for better retrieval:
"{user_query}"
Only return a Python list of 3 strings.

Example: ["hi", "hello", "how are you"]
"""

# Call OpenAI to expand query
query_expansion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": augmentation_prompt}]
)

# 5. Parse the string output into an actual Python list
raw_response = query_expansion.choices[0].message.content.strip()
# The model sometimes wraps the list in a ```python ... ``` fence; strip it before parsing
raw_response = raw_response.removeprefix("```python").removeprefix("```").removesuffix("```").strip()
similar_queries = ast.literal_eval(raw_response)

print("๐Ÿ” Expanded Queries:\n", similar_queries)

# 6. Search for relevant docs for each variation
all_relevant_docs = []

for q in similar_queries:
    docs = retriever.similarity_search(query=q, k=3)
    all_relevant_docs.extend(docs)

# 7. Find the page that occurs most often across all retrieved chunks (a simple frequency vote;
#    a rank-weighted RRF variant is sketched after this script)
pages_frequencies = Counter(doc.metadata['page'] for doc in all_relevant_docs)
print("\n", pages_frequencies)

# Page with the highest count (ties go to the page seen first)
page_num, page_freq = pages_frequencies.most_common(1)[0]
print("page:", page_num, "| page freq:", page_freq)

# Collect every retrieved chunk that belongs to that page
top_ranked_doc = [d for d in all_relevant_docs if d.metadata['page'] == page_num]

unique_docs = list({doc.page_content: doc for doc in top_ranked_doc}.values())
context = "\n\n".join(doc.page_content for doc in unique_docs)

# 8. Send to OpenAI for final answer generation
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant knowledgeable in Node.js."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {user_query}"}
    ]
)

# 9. Display response
answer = response.choices[0].message.content.replace("*", "").replace("`", "").replace("#", "")
print("\n๐Ÿ’ก Answer:\n", answer)


So that's all! 🙌 This was all about the Reciprocal Rank Fusion technique.
Hope I made it easier for you to understand 😊

If you've read this far,
🙏 thank you, friends!

See you in the next post with a new concept.
Happy learning! 🚀

#ChaiCode
#GenAI
