Predicting Continuous Data with Support Vector Regression (SVR) | My ML Project Breakdown

Lokesh Patidar
3 min read

Introduction:

Hello!
I recently completed a hands-on project using Support Vector Regression (SVR), one of the coolest and most powerful regression techniques in the ML toolkit. In this post, I’ll walk you through what SVR is, why it’s different from other models, how I implemented it in Python, and what I learned along the way.

Whether you're just starting your ML journey or exploring advanced regressors, I hope this gives you valuable insight!


→ What is Support Vector Regression (SVR)?

SVR is an extension of the Support Vector Machine (SVM) algorithm, adapted for regression tasks. While SVM is used to classify data points, SVR tries to fit the best possible function so that most data points lie within a margin of tolerance (called epsilon) around it; points inside that margin contribute no error at all.
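
To make the epsilon idea concrete, here is a minimal sketch of the epsilon-insensitive loss that SVR is built around (the helper function and its name are my own, not part of scikit-learn):

import numpy as np

def epsilon_insensitive_loss(y_true, y_pred, epsilon=0.1):
    # Errors inside the epsilon tube cost nothing; larger errors grow linearly.
    return np.maximum(0.0, np.abs(y_true - y_pred) - epsilon)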

It aims to:

  • Minimize prediction error

  • Fit within a certain acceptable range (epsilon)

  • Maintain model generalization (avoid overfitting)


→ Why Use SVR Over Linear or Polynomial Regression?

SVR is helpful when:

  • Your data has outliers or non-linearity

  • You want a model that balances bias and variance well

  • You need a robust model that doesn’t overfit easily

While linear and polynomial regression minimize the squared error of every point, SVR ignores errors smaller than epsilon and only penalizes the points that fall outside the margin.
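
A toy comparison with made-up residuals (prediction errors) and epsilon = 0.1 shows the difference:

import numpy as np

residuals = np.array([0.05, 0.2, 1.5])   # illustrative prediction errors
epsilon = 0.1

squared_loss = residuals ** 2                           # what least-squares regression penalizes
eps_loss = np.maximum(0, np.abs(residuals) - epsilon)   # what SVR penalizes
print(squared_loss)   # every residual counts, and the outlier (1.5 -> 2.25) dominates
print(eps_loss)       # 0.0, 0.1, 1.4: the 0.05 residual is ignored, the outlier grows only linearly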


🛠️ Tools & Libraries Used:

  • Python

  • NumPy

  • Matplotlib / Seaborn

  • Pandas

  • Scikit-learn (for SVR and scaling)


→ Dataset:

I used a dataset that relates position level to salary (same as used for linear and polynomial regression). The goal is to predict the salary based on the position level.

Position Level    Salary
1                 45000
2                 50000
3                 60000
...               ...
10                1000000

🔍 Step-by-Step Implementation:

1. Import Libraries

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

2. Load the Dataset

dataset = pd.read_csv('Position_Salaries.csv')
X = dataset.iloc[:, 1:2].values   # 1:2 keeps X as a 2-D column (the level feature)
y = dataset.iloc[:, 2].values
y = y.reshape(len(y), 1)          # StandardScaler expects a 2-D array

3. Feature Scaling (VERY IMPORTANT for SVR)

sc_X = StandardScaler()   # separate scalers for X and y so each can be inverse-transformed later
sc_y = StandardScaler()
X = sc_X.fit_transform(X)
y = sc_y.fit_transform(y)

4. Fit SVR Model

svr_regressor = SVR(kernel='rbf')  # RBF kernel works well for non-linear problems
svr_regressor.fit(X, y.ravel())    # ravel() flattens y back to 1-D, as fit() expects
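
One note on the line above: since no other arguments are passed, scikit-learn's defaults are used (C=1.0, epsilon=0.1, gamma='scale'). Writing them out explicitly makes them easier to tune later:

svr_regressor = SVR(kernel='rbf', C=1.0, epsilon=0.1, gamma='scale')  # same as the defaults, just explicit
svr_regressor.fit(X, y.ravel())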

5. Visualize the SVR Results

plt.scatter(X, y, color='red')                        # actual (scaled) data points
plt.plot(X, svr_regressor.predict(X), color='blue')   # SVR predictions
plt.title('Support Vector Regression')
plt.xlabel('Position Level')
plt.ylabel('Salary')
plt.show()
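
Note that this plot is drawn in scaled units, because X and y were transformed in step 3. To get the axes back in real position levels and salaries, one option is to inverse-transform everything before plotting (a sketch reusing the same scaler objects):

X_orig = sc_X.inverse_transform(X)   # back to position levels
y_orig = sc_y.inverse_transform(y)   # back to salaries
y_pred_orig = sc_y.inverse_transform(svr_regressor.predict(X).reshape(-1, 1))

plt.scatter(X_orig, y_orig, color='red')
plt.plot(X_orig, y_pred_orig, color='blue')
plt.title('Support Vector Regression (original units)')
plt.xlabel('Position Level')
plt.ylabel('Salary')
plt.show()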

6. Make a Prediction (e.g., for level 6.5)

scaled_pred = sc_X.transform([[6.5]])                          # scale the input the same way as the training data
predicted_scaled_salary = svr_regressor.predict(scaled_pred)
predicted_salary = sc_y.inverse_transform(predicted_scaled_salary.reshape(-1, 1))  # back to original salary units
print(predicted_salary)

📈 Output & Results:

The SVR model produced a smooth and flexible curve that captures the non-linear trends in the data better than basic linear regression.
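
If you want to reproduce that smooth curve rather than the connect-the-dots line from step 5, predicting on a denser grid of (scaled) levels helps (a small sketch; X_grid is my own name):

X_grid = np.arange(X.min(), X.max(), 0.01).reshape(-1, 1)   # fine-grained grid of scaled levels
plt.scatter(X, y, color='red')
plt.plot(X_grid, svr_regressor.predict(X_grid), color='blue')
plt.title('Support Vector Regression (smoother curve)')
plt.xlabel('Position Level (scaled)')
plt.ylabel('Salary (scaled)')
plt.show()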

Predicted salary for level 6.5 (example):
$170,000 (after inverse scaling)


→ Key Takeaways:

  • SVR needs feature scaling because it uses distances between data points.

  • Works well for non-linear problems and handles outliers better than basic models.

  • Choosing the right kernel (linear, poly, rbf) can make a big difference.

  • SVR is robust, but a bit more complex to set up due to scaling.


→ What I Learned:

  • I got comfortable using StandardScaler and transforming predictions back to original scale.

  • Visualizing regression results helped me compare models side-by-side.

  • Hyperparameter tuning (kernel, C, epsilon, gamma) is crucial for improving SVR performance; see the grid-search sketch below.
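
For example, a small grid search over those hyperparameters could look like this (a sketch using scikit-learn's GridSearchCV; the grid values are just illustrative, and with only ten samples the number of CV folds has to stay small):

from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

param_grid = {
    'kernel': ['linear', 'poly', 'rbf'],
    'C': [0.1, 1, 10, 100],
    'epsilon': [0.01, 0.1, 0.5],
    'gamma': ['scale', 'auto'],
}
grid = GridSearchCV(SVR(), param_grid, cv=3, scoring='neg_mean_squared_error')
grid.fit(X, y.ravel())   # X and y are the scaled arrays from step 3
print(grid.best_params_)
print(grid.best_score_)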


