A Comprehensive Guide to Binary Classification in Machine Learning
Table of contents
- What is Binary Classification?
- How Does Binary Classification Work?
- Common Algorithms for Binary Classification
- 1. Logistic Regression
- Example 1: Logistic Regression
- 2. Decision Trees
- Example 2: Decision Trees
- 3. Random Forest
- Example 3: Random Forest
- 4. Support Vector Machines (SVM)
- Example 4: Support Vector Machines (SVM)
- 5. Neural Networks
- Example 5: Neural Networks (MLP Classifier)
- Evaluation Metrics for Binary Classification
- Conclusion
Binary classification is a fundamental concept in machine learning where the goal is to classify data into one of two distinct classes or categories. It is widely used in various fields, including spam detection, medical diagnosis, customer churn prediction, and fraud detection. This blog will provide an in-depth explanation of binary classification, how it works, and some practical examples to make the concept clear and easy to understand.
What is Binary Classification?
In simple terms, binary classification is a type of supervised learning where the model predicts one of two possible outcomes. These outcomes are often represented as 0 and 1 (or "negative" and "positive", or "false" and "true"). For example:
Spam Detection: Classify emails as "Spam" or "Not Spam."
Medical Diagnosis: Predict whether a patient has a disease ("Has disease" or "No disease").
Credit Scoring: Predict whether a loan applicant will default ("Default" or "No Default").
Binary classification uses input data (features) to make predictions about the outcome class (target).
How Does Binary Classification Work?
The process of binary classification involves several steps:
Data Collection: Gather data that includes both the features (input variables) and the labels (output classes). For example, if you're building a spam detector, the features could be the content of an email, and the label could be whether the email is spam (1) or not (0).
Data Preprocessing: Clean the data by handling missing values, removing duplicates, and transforming the data into a suitable format for the algorithm (e.g., converting categorical data into numerical values).
Feature Selection: Identify the most important features that contribute to the prediction. This can involve scaling, normalizing, or selecting key variables that help the model perform better.
Model Selection: Choose an algorithm that is appropriate for binary classification. Common algorithms include Logistic Regression, Decision Trees, Random Forests, Support Vector Machines (SVM), and Neural Networks.
Training: Use historical data to train the model. The algorithm will learn the patterns in the data that can help distinguish between the two classes.
Evaluation: After training the model, evaluate its performance using metrics such as Accuracy, Precision, Recall, F1-Score, and ROC-AUC Curve to understand how well it is performing.
Prediction: Once trained, the model can predict the outcome class for new, unseen data.
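Taken together, these steps form a pipeline. Below is a minimal end-to-end sketch of that pipeline in scikit-learn; the synthetic dataset from make_classification is used purely for illustration, not as a real-world example.
# A minimal sketch of the workflow above, on a synthetic dataset
from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
# Steps 1-2: collect and preprocess (here: generate toy data, scale features)
X, y = make_classification(n_samples=200, n_features=4, random_state=42)
X = StandardScaler().fit_transform(X)
# Steps 3-4: select a model and train it on 70% of the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
model = LogisticRegression().fit(X_train, y_train)
# Steps 5-6: evaluate on held-out data, then predict for unseen samples
print(f"Accuracy: {accuracy_score(y_test, model.predict(X_test))}")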
Common Algorithms for Binary Classification
1. Logistic Regression
Logistic regression is one of the simplest and most commonly used algorithms for binary classification. Despite its name, it is a classification algorithm, not a regression algorithm. It works by modeling the probability of the binary outcome using a sigmoid function.
How it works: Logistic regression computes the weighted sum of the input features and applies the logistic function to map this sum to a value between 0 and 1. The model then classifies the data based on a threshold, typically 0.5. If the output is greater than 0.5, it predicts class 1; otherwise, it predicts class 0.
Example: In medical diagnosis, logistic regression can be used to predict whether a patient has heart disease based on input features such as age, cholesterol levels, and blood pressure.
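Before the full example, here is the core computation in isolation: a weighted sum passed through the sigmoid, then thresholded. The weights and bias below are made-up values chosen only to illustrate the mechanics, not learned from data.
import numpy as np
def sigmoid(z):
    # Logistic function: maps any real number to a probability in (0, 1)
    return 1 / (1 + np.exp(-z))
# Hypothetical weights and bias (illustrative values only)
weights, bias = np.array([0.04, 0.01]), -7.0
x = np.array([60, 280])  # e.g., a patient's age and cholesterol level
probability = sigmoid(np.dot(weights, x) + bias)
prediction = int(probability > 0.5)  # apply the 0.5 threshold
print(probability, prediction)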
Example 1: Logistic Regression
The model estimates the probability that a given input belongs to class 1 (the positive class), and this probability is thresholded to classify the data into either 0 or 1.
Example: Predicting whether a person has heart disease based on age and cholesterol levels.
# Import necessary libraries
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix
# Example dataset: Features - Age, Cholesterol Level; Label - 0 (No Disease) or 1 (Has Disease)
X = np.array([[25, 200], [30, 180], [35, 220], [40, 240], [45, 260],
[50, 250], [55, 270], [60, 300], [65, 310], [70, 280]])
y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1]) # 0: No disease, 1: Has disease
# Split dataset into training and test sets (70% train, 30% test)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Initialize and train the Logistic Regression model
log_reg = LogisticRegression()
log_reg.fit(X_train, y_train)
# Make predictions on the test data
y_pred = log_reg.predict(X_test)
# Evaluate the model
accuracy = accuracy_score(y_test, y_pred)
conf_matrix = confusion_matrix(y_test, y_pred)
print(f"Logistic Regression Accuracy: {accuracy}")
print(f"Confusion Matrix: \n{conf_matrix}")
Explanation:
We use a dataset of 10 people, with their ages and cholesterol levels as input features.
The target label is whether or not they have heart disease (0 for No, 1 for Yes).
Logistic Regression learns the relationship between the features and the target, then predicts for the test set.
The confusion matrix provides insight into how many predictions were correct/incorrect.
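Because logistic regression produces a probability before thresholding, you can inspect it directly with predict_proba and apply your own threshold. This short snippet continues from the fitted log_reg above:
# Probability of class 1 for each test sample (column 1 of predict_proba)
probabilities = log_reg.predict_proba(X_test)[:, 1]
print(probabilities)
# A custom threshold of 0.7 instead of the default 0.5
custom_predictions = (probabilities > 0.7).astype(int)
print(custom_predictions)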
2. Decision Trees
A decision tree algorithm splits the data into subsets based on the values of input features, and these subsets are further split recursively to form a tree structure. The leaves of the tree represent the final classification outcomes (0 or 1).
How it works: The algorithm chooses the best feature to split the data at each step based on metrics like Gini Impurity or Information Gain. It continues splitting until no further improvements can be made or a maximum depth is reached.
Example: In credit scoring, a decision tree can predict whether a customer will default on a loan based on their income, credit history, and loan amount.
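For reference, Gini impurity is simple to compute by hand. This small sketch shows the quantity the tree minimizes when comparing candidate splits: 1 minus the sum of squared class proportions.
import numpy as np
def gini_impurity(labels):
    # Gini = 1 - sum(p_k^2) over the class proportions p_k
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1 - np.sum(p ** 2)
print(gini_impurity([0, 0, 0, 1, 1, 1]))  # 0.5: maximally mixed node
print(gini_impurity([0, 0, 0, 0, 0, 0]))  # 0.0: pure node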
Example 2: Decision Trees
Decision Trees recursively split the data into subsets based on feature values. Each decision node splits the data on the feature that provides the most information (using Gini impurity or entropy). This results in a tree structure where leaves represent the final classification.
Example: Predicting customer churn (whether a customer will leave) based on monthly charges and tenure.
# Import necessary libraries
from sklearn.tree import DecisionTreeClassifier
# Example dataset: Features - Tenure (months), Monthly Charges; Label - 0 (Stays), 1 (Leaves)
X = np.array([[10, 70], [12, 80], [15, 90], [18, 60], [20, 100],
[22, 110], [25, 120], [28, 130], [30, 85], [35, 95]])
y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1]) # 0: Stays, 1: Leaves
# Split the dataset into training and test sets (70% train, 30% test)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Initialize and train the Decision Tree model
decision_tree = DecisionTreeClassifier(random_state=42)
decision_tree.fit(X_train, y_train)
# Make predictions on the test data
y_pred = decision_tree.predict(X_test)
# Evaluate the model
accuracy = accuracy_score(y_test, y_pred)
conf_matrix = confusion_matrix(y_test, y_pred)
print(f"Decision Tree Accuracy: {accuracy}")
print(f"Confusion Matrix: \n{conf_matrix}")
Explanation:
The dataset contains 10 customers with features such as tenure (how long they’ve been with the company) and monthly charges.
The target label is whether or not the customer leaves (0 for Stays, 1 for Leaves).
The Decision Tree algorithm constructs a tree to decide the best splits based on the data features.
3. Random Forest
Random Forest is an ensemble method that builds multiple decision trees and combines their predictions to improve accuracy and reduce overfitting. Each tree is trained on a random subset of the data and features, and the final prediction is based on the majority vote from all trees.
How it works: Random Forest works similarly to decision trees, but it builds many trees instead of just one. Each tree is trained on a different random subset of the data, and the final classification is made by taking the majority vote across all the trees (or, equivalently in scikit-learn, by averaging their predicted probabilities).
Example: In fraud detection, Random Forest can be used to identify fraudulent transactions by examining features like transaction amount, location, and time.
Example 3: Random Forest
Random Forest is an ensemble of decision trees. Each tree is built using a random subset of the data and features, and the final prediction is the majority vote of all the trees.
Example: Predicting loan default (whether a customer will default) based on credit score and income.
# Import necessary libraries
from sklearn.ensemble import RandomForestClassifier
# Example dataset: Features - Credit Score, Income; Label - 0 (No Default), 1 (Default)
X = np.array([[700, 50000], [650, 45000], [620, 40000], [610, 35000], [600, 30000],
[580, 28000], [570, 27000], [550, 25000], [530, 24000], [500, 22000]])
y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1]) # 0: No Default, 1: Default
# Split the dataset into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Initialize and train the Random Forest model
random_forest = RandomForestClassifier(n_estimators=100, random_state=42)
random_forest.fit(X_train, y_train)
# Make predictions on the test data
y_pred = random_forest.predict(X_test)
# Evaluate the model
accuracy = accuracy_score(y_test, y_pred)
conf_matrix = confusion_matrix(y_test, y_pred)
print(f"Random Forest Accuracy: {accuracy}")
print(f"Confusion Matrix: \n{conf_matrix}")
Explanation:
The dataset contains 10 customers, with features such as credit score and income.
The target label is whether the customer will default on a loan (0 for No, 1 for Yes).
Random Forest combines multiple decision trees and takes the majority vote to make predictions.
4. Support Vector Machines (SVM)
SVM is a powerful classification algorithm that aims to find the best hyperplane that separates the two classes. It works well in high-dimensional spaces and is effective when the classes are not linearly separable.
How it works: SVM tries to find the hyperplane that maximizes the margin between the two classes. In cases where the classes are not linearly separable, it uses kernel functions to transform the data into a higher-dimensional space where a hyperplane can separate the classes.
Example: SVM can be used for handwriting recognition, classifying whether a given character is "A" (class 0) or "B" (class 1).
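To make the kernel idea concrete, here is a minimal sketch on a tiny XOR-style dataset that no straight line can separate; the RBF kernel and the gamma value are illustrative choices, not the only option.
import numpy as np
from sklearn.svm import SVC
# A tiny XOR-style dataset: no linear boundary separates the classes
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])
# The RBF kernel implicitly maps the data into a higher-dimensional
# space where a separating hyperplane exists
rbf_svm = SVC(kernel='rbf', gamma=2.0)
rbf_svm.fit(X, y)
print(rbf_svm.predict(X))  # expected: [0 1 1 0] on this toy data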
Example 4: Support Vector Machines (SVM)
Support Vector Machines are powerful classifiers that work well with both linearly and non-linearly separable data. They try to find the hyperplane that maximally separates the two classes.
Example: Classifying whether a tumor is benign or malignant based on tumor size and texture.
from sklearn.svm import SVC
# Example dataset: Features - Tumor Size, Tumor Texture; Label - 0 (Benign), 1 (Malignant)
X = np.array([[1.0, 1.5], [1.3, 1.6], [1.5, 1.8], [1.7, 2.0], [2.0, 2.2],
[2.5, 2.6], [2.8, 3.0], [3.0, 3.2], [3.5, 3.6], [4.0, 4.0]])
y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1]) # 0: Benign, 1: Malignant
# Split the dataset into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Initialize and train the SVM model
svm_model = SVC(kernel='linear', random_state=42)
svm_model.fit(X_train, y_train)
# Make predictions on the test data
y_pred = svm_model.predict(X_test)
# Evaluate the model
accuracy = accuracy_score(y_test, y_pred)
conf_matrix = confusion_matrix(y_test, y_pred)
print(f"SVM Accuracy: {accuracy}")
print(f"Confusion Matrix: \n{conf_matrix}")
Explanation:
The dataset contains 10 samples of tumors, with features such as size and texture.
The target label is whether the tumor is benign or malignant (0 for Benign, 1 for Malignant).
SVM finds the best separating hyperplane between the two classes.
5. Neural Networks
Neural networks consist of layers of neurons that can learn complex patterns in data. For binary classification, the output layer typically contains a single neuron with a sigmoid activation function that outputs a probability between 0 and 1.
How it works: The network learns by adjusting the weights of the connections between neurons during training. The final output is a probability that the input belongs to one of the two classes, and the prediction is made by applying a threshold (e.g., 0.5).
Example: Neural networks can be used in image classification, such as predicting whether an image contains a cat or not.
Example 5: Neural Networks (MLP Classifier)
Neural Networks are powerful for capturing complex relationships in data. They are loosely inspired by the structure of the brain, consisting of layers of neurons that learn patterns in the input data.
Example: Predicting whether a student passes or fails based on study hours and class attendance.
from sklearn.neural_network import MLPClassifier
# Example dataset: Features - Study Hours, Attendance; Label - 0 (Fail), 1 (Pass)
X = np.array([[1, 50], [2, 60], [3, 70], [4, 80], [5, 90],
[6, 95], [7, 97], [8, 99], [9, 85], [10, 88]])
y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1]) # 0: Fail, 1: Pass
# Split the dataset into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Initialize and train the Neural Network model
mlp = MLPClassifier(hidden_layer_sizes=(5,), max_iter=1000, random_state=42)
mlp.fit(X_train, y_train)
# Make predictions on the test data
y_pred = mlp.predict(X_test)
# Evaluate the model
accuracy = accuracy_score(y_test, y_pred)
conf_matrix = confusion_matrix(y_test, y_pred)
print(f"Neural Network Accuracy: {accuracy}")
print(f"Confusion Matrix: \n{conf_matrix}")
Explanation:
The dataset contains 10 students, with features such as study hours and class attendance.
The target label is whether the student passes or fails (0 for Fail, 1 for Pass).
The Neural Network (MLP) learns the pattern and predicts the test data accordingly.
Evaluation Metrics for Binary Classification
After training a model, it's important to evaluate how well it performs. Here are some common metrics used in binary classification:
Accuracy: The ratio of correctly predicted instances to the total number of instances. While simple, accuracy may not always be a good measure, especially when the classes are imbalanced.
Precision: The ratio of true positive predictions to the total number of positive predictions. It measures the accuracy of positive predictions.
Recall: The ratio of true positive predictions to the total number of actual positives. It measures how well the model captures positive instances.
F1-Score: The harmonic mean of Precision and Recall. It is useful when there is an imbalance between Precision and Recall.
ROC-AUC: The Receiver Operating Characteristic curve plots the True Positive Rate (Recall) against the False Positive Rate. The Area Under the Curve (AUC) measures the model's ability to distinguish between the two classes.
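All of these metrics are available in sklearn.metrics. The sketch below computes each one on a small set of made-up labels and scores, purely for demonstration; note that ROC-AUC is computed from predicted probabilities rather than hard class predictions.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)
# Illustrative ground truth, hard predictions, and predicted probabilities
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])
y_scores = np.array([0.2, 0.6, 0.8, 0.9, 0.4, 0.1, 0.7, 0.3])
print(f"Accuracy:  {accuracy_score(y_true, y_pred):.2f}")
print(f"Precision: {precision_score(y_true, y_pred):.2f}")
print(f"Recall:    {recall_score(y_true, y_pred):.2f}")
print(f"F1-Score:  {f1_score(y_true, y_pred):.2f}")
print(f"ROC-AUC:   {roc_auc_score(y_true, y_scores):.2f}")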
Conclusion
Binary classification is one of the most widely used tasks in machine learning, powering applications from spam detection to medical diagnosis. This post covered what binary classification is, the typical workflow from data collection to prediction, five common algorithms (Logistic Regression, Decision Trees, Random Forest, SVM, and Neural Networks) with runnable examples, and the key metrics for evaluating a trained model. Choosing the right algorithm for your data, and the right metric when the classes are imbalanced, is what turns these building blocks into a reliable classifier.