How are Beauty Brands using AI to create better products?

Megha
7 min read

Imagine opening your vanity to find beauty products that suit you perfectly. AI in cosmetics makes this possible.

Skin analysis can now be done in moments: an algorithm reads your skin and recommends the perfect serum. No more regrets over buying the wrong shade of lipstick or foundation. Beauty regimens can now meet individual requirements, brands can speed up R&D with intelligent algorithms, formulas can be updated with real-time input, and skin issues can be diagnosed rapidly.

When intelligence (real or artificial) meets beauty, possibilities multiply.

Monetary Worth of Beauty

Market forecasts expect the beauty industry to generate revenue of US$677.19bn in 2025, with a 3.37% CAGR (2025–30). This growth is attributed to the use of AI, especially for features such as personalization and sustainability. In fact, 75% of shoppers are willing to pay more for AI-backed skincare, such as Olay’s Skin Advisor tool, which has doubled conversion rates through intelligent diagnostics.

[Image: worth of the beauty industry]

So, how is AI integration transforming our beauty regimes?

  1. Reducing formulation time,

  2. Attracting consumers with personalized recommendations,

  3. Focusing on sustainability through supply-chain models that decrease ingredient waste.

Bottleneck of the Beauty R&D

[Image: conventional in-lab testing of products]

Any beauty product’s journey starts with R&D, which involves methods such as HPLC (high-performance liquid chromatography) and GC-MS (gas chromatography-mass spectrometry) to determine ingredient purity, alongside other stability tests. However, these fall short in predicting biological interactions, and shelf-life tests take ~6–18 months per formulation. Zeta potential and droplet size analysis, crucial for testing emulsion stability, also need physical samples and lack real-time adaptability.

Furthermore, to ensure product efficacy, human clinical trials and sensory testing are conducted. Double-blind, randomized controlled trials (RCTs) with 100+ human subjects are the gold standard for measuring effects such as wrinkle depth, but they are costly ($500K–$2M) and lengthy (6–24 months). Meanwhile, sensory tests evaluate features such as texture and aroma, though results vary widely with personal preference.

Non-AI computational techniques, such as QSAR models, can predict chemical toxicity but miss intricate patterns, achieving only 60–75% accuracy. Finite Element Analysis (FEA) is another approach that simulates and predicts the rheology of creams, but it is limited to pre-specified material parameters.

These limitations drove a shift toward AI-centric approaches to product design and testing. Here’s where AI is currently used in the cosmetology sector:

[Image: use of AI approaches in beauty labs]

  1. Computer Vision & Convolutional Neural Networks (CNNs):

CNNs are deep learning algorithms that automate feature extraction from visual data using convolutional, pooling, and fully connected layers. They are applied for:

  • Skin Phenotype Analysis: Attention U-Net CNNs assist dermatologists in wrinkle and pore segmentation. Unlike static imaging, the model’s dynamic video evaluation captures biomechanical parameters, like elasticity decay.

  • Bias Mitigation: Generative adversarial networks (GANs) can synthesize varied skin images to augment underrepresented datasets.
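To make the convolution-and-pooling pipeline concrete, here is a minimal NumPy sketch of the two core CNN building blocks, run on a toy 6×6 "skin patch" containing a vertical wrinkle-like line. The patch, kernel, and sizes are illustrative assumptions, not taken from any production skin-analysis model:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling halves the spatial resolution."""
    H, W = fmap.shape
    H, W = H - H % size, W - W % size
    return fmap[:H, :W].reshape(H // size, size, W // size, size).max(axis=(1, 3))

# A vertical-edge kernel lights up wrinkle-like lines in a toy "skin patch".
patch = np.zeros((6, 6))
patch[:, 3] = 1.0                                  # a vertical "wrinkle"
edge_kernel = np.array([[-1.0, 0.0, 1.0]] * 3)
fmap = np.maximum(conv2d(patch, edge_kernel), 0)   # convolution + ReLU
pooled = max_pool(fmap)                            # 4x4 map pooled to 2x2
```

In a real model such as an Attention U-Net, many such kernels are learned from labeled images rather than hand-crafted, and stacked over dozens of layers.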

  2. Predictive Analytics & Recurrent Neural Networks (RNNs):

RNNs handle sequential data using feedback loops, enabling time-series prediction of skin responses and consumer behavior. Use cases include:

  • Demand forecasting: Using long short-term memory (LSTM) networks, brands can process multiple data streams (social media, search trends) and predict beauty trends ~6–10 months before their peak.

  • Personalized replenishment: Algorithms like reinforcement learning link usage patterns (app frequency, weather sensor data) to product consumption, helping to reduce stockouts.
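As an illustration of the feedback loop described above, here is a toy NumPy sketch of a vanilla RNN cell stepping over weekly trend signals. All weights and inputs are random placeholders; a production forecaster would use trained LSTM layers rather than this bare cell:

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_forward(xs, Wx, Wh, b):
    """Run a vanilla RNN cell over a sequence; the hidden state h carries
    a compressed memory of earlier time steps (the 'feedback loop')."""
    h = np.zeros(Wh.shape[0])
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h + b)
    return h

# Toy setup: 8 weekly observations of 3 signals
# (search volume, social mentions, sales); hidden size 4.
Wx = rng.normal(scale=0.5, size=(4, 3))
Wh = rng.normal(scale=0.5, size=(4, 4))
b = np.zeros(4)
W_out = rng.normal(scale=0.5, size=4)

weeks = rng.random((8, 3))
h_final = rnn_forward(weeks, Wx, Wh, b)
forecast = float(W_out @ h_final)   # scalar next-week demand score
```

The tanh squashes each hidden unit into (-1, 1), which keeps the recurrence numerically stable over long sequences; LSTMs add gates on top of this idea.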

  3. Transformer Architectures & Generative AI:

Transformers use self-attention mechanisms to develop new molecular designs or packaging concepts by identifying patterns within training data.

  • Formulation design: SkincareGPT at Perfect Corp uses transformer models to propose novel bioactive combos (like algae-derived ceramides) within ~72 hours versus the conventional 6-month screening cycle.

  • Innovative packaging: Diffusion models generate eco-friendly packaging designs directly from text inputs (like “lip balm tube, candy-shaped, biodegradable”).
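The self-attention mechanism at the heart of transformers can be sketched in a few lines of NumPy. Here each of five hypothetical "ingredient tokens" attends to every other token; the embeddings and projection matrices are random for illustration only:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention: every token attends to all others."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax (shifted by the max for numerical stability).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 8))             # 5 "ingredient tokens", 8-dim each
Wq, Wk, Wv = (rng.normal(scale=0.3, size=(8, 8)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
```

Each row of `attn` is a probability distribution over the five tokens, i.e. how much each ingredient "looks at" the others when building its output representation; stacking many such layers is what lets transformers spot co-occurrence patterns in formulation data.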

  4. Multimodal Sensor Fusion:

These ensemble models integrate computer vision, natural language processing (NLP), deep learning and IoT sensor inputs for end-to-end diagnostics.

  • L’Oréal’s Perso Ecosystem: Integrates smartphone-based facial CNN inspection, Breezometer environmental APIs (pollution/UV index), and usage-logging sensors. A reinforcement learning model then converts these inputs into suitable formulations in real time.

  • Neurocosmetic evaluation: CNNs also classify EEG frequency bands (alpha, gamma) during product application, correlating neural patterns with perceived sensory experience.
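A minimal late-fusion sketch of the idea: each modality is rescaled to [0, 1] and combined with fixed weights into one score. The function name, weights, and scaling caps below are illustrative assumptions, not L'Oréal's actual pipeline:

```python
def fuse_signals(vision_score, pollution_aqi, uv_index, uses_per_week,
                 weights=(0.5, 0.2, 0.15, 0.15)):
    """Late fusion: scale each modality to [0, 1], then combine with fixed
    weights into a single 'formulation intensity' score. The weights and
    caps are illustrative placeholders, not a real product's model."""
    feats = [
        vision_score,                 # CNN dryness estimate, already in [0, 1]
        min(pollution_aqi / 300, 1),  # AQI capped at the 'hazardous' band
        min(uv_index / 11, 1),        # UV index capped at 'extreme'
        min(uses_per_week / 14, 1),   # at most twice-daily application
    ]
    return sum(w * f for w, f in zip(weights, feats))

score = fuse_signals(vision_score=0.6, pollution_aqi=150,
                     uv_index=7, uses_per_week=7)
```

Real ensembles typically learn these weights (or replace the weighted sum with a small network), but the principle is the same: heterogeneous sensors are normalized into a shared scale before being combined.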

These technologies are validated using frameworks like:

  • Clinical concordance test: CNN-based skin classifiers must surpass 85% concordance with at least three board-certified dermatologists across several skin types.

  • GAN validation: Once trained, GANs should be able to generate counterfactual images (e.g., melasma on synthetic skin) to test diagnostic reliability.

  • Real-world performance monitoring: Autonomous devices should validate predictions through real-time selfie analysis and user response loops.
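The clinical concordance check above can be sketched as a simple majority-vote comparison between model predictions and a three-dermatologist panel. The labels and the 0.85 gate are illustrative; real protocols use larger panels and stratify by skin type:

```python
def concordance_rate(model_preds, derm_panels):
    """Fraction of cases where the model matches the dermatologists'
    majority label. A deployment gate might require >= 0.85."""
    agree = 0
    for pred, panel in zip(model_preds, derm_panels):
        majority = max(set(panel), key=panel.count)
        agree += (pred == majority)
    return agree / len(model_preds)

# Hypothetical labels for four cases, three dermatologists each.
preds  = ["acne", "melasma", "normal", "acne"]
panels = [["acne", "acne", "rosacea"],
          ["melasma", "melasma", "melasma"],
          ["normal", "dry", "normal"],
          ["rosacea", "rosacea", "acne"]]

rate = concordance_rate(preds, panels)   # 3 of 4 match -> 0.75, below the gate
```

Stratifying this rate across Fitzpatrick skin types (rather than reporting one global number) is what catches the dataset-bias failures mentioned earlier.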

Having briefly covered various AI-based approaches in cosmetology, let’s see how VAEs are implemented in the beauty world.

VAEs in Cosmetology Innovations

[Image: use of a VAE model in designing products]

Variational Autoencoders (VAEs) are deep generative models that learn low-dimensional representations of complex data using probability theory, then use those representations to generate novel outputs.

Unlike conventional autoencoders, which map each input to a fixed point, VAEs learn a latent space: a probabilistic structure defined by a mean (μ) and variance (σ²). This lets them both reconstruct original inputs and generate new samples beyond the training data.

Imagine a VAE as a digital beauty assistant. Trained on many skincare blends, it builds a recipe playground and remixes new formulations to individual needs. The result: faster, data-driven concoctions and customizable products. Three primary ways of achieving this are:

  • Formulation design: VAEs can explore large chemical datasets (like ChEMBL) to suggest novel ingredient combinations with sustainable profiles, shorter screening times, and fewer in-vitro tests.

  • Texture prediction: Their probabilistic nature allows VAEs to forecast sensory properties, like how smooth or grainy a formula will feel, reducing the need to build physical prototypes.

  • Hyper-personalization: It can also combine skin analysis, environmental conditions, and user preferences, making personalized formulations a ‘dream come true’.

This is how VAEs work:

  1. An encoder processes an input, say an HD face scan or an ingredient profile, and transforms it into a distribution in the latent space (defined by μ and σ²).

  2. The model subsequently samples a latent vector ‘z’ with some randomness (aka “Reparameterization”) from the latent distribution.

  3. A decoder uses ‘z’ to generate novel, valid outputs, like a tailored serum recipe.

  4. Training balances two objectives: reconstructing inputs accurately and organizing the latent space so that smooth, useful interpolations are possible.
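The four steps above can be sketched in NumPy: an encoder producing (μ, log σ²), the reparameterization trick, and a decoder emitting a normalized "recipe". All weights here are random placeholders, so the output is illustrative only, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(42)

def encode(x, W_mu, W_logvar):
    """Step 1: the encoder maps an input (e.g. an ingredient profile)
    to a latent Gaussian, parameterized by mean and log-variance."""
    return W_mu @ x, W_logvar @ x

def reparameterize(mu, logvar):
    """Step 2: z = mu + sigma * eps keeps the sampling step
    differentiable (the 'reparameterization trick')."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z, W_dec):
    """Step 3: the decoder turns the latent vector into a candidate
    formulation; here, non-negative proportions summing to 1."""
    raw = np.exp(W_dec @ z)
    return raw / raw.sum()

x = rng.random(6)                                   # 6-dim input profile
W_mu, W_logvar = rng.normal(scale=0.3, size=(2, 3, 6))
W_dec = rng.normal(scale=0.3, size=(6, 3))

mu, logvar = encode(x, W_mu, W_logvar)
z = reparameterize(mu, logvar)                      # 3-dim latent vector
recipe = decode(z, W_dec)                           # sums to 1, like proportions
```

Step 4 (training) would then compare `recipe` against the original input and add a KL-divergence penalty that keeps the latent distribution close to a standard Gaussian, which is what makes interpolation between latent points smooth.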

Limitations such as training instability and dataset bias still hinder adoption, but ongoing research into 3D-aware VAEs and reinforcement learning is gradually addressing them.

Study Spotlight- L’Oréal’s AI-powered Beauty Team

[AI-generated image: fictional representation of L’Oréal’s AI product]

L’Oréal’s Perso is an AI-powered device that creates customized skincare by analyzing multiple data points. It uses the ModiFace algorithm, reports a 97% success rate in identifying skin concerns such as wrinkles and pigmentation, and monitors changes over time to build a dynamic aging profile. Some of its unique capabilities include:

  • Environment assessment: Using Breezometer APIs, it adds geo-located inputs like UV index and pollution. This helps to reformulate based on varying environmental states.

  • Up-to-date choices: In makeup mode, Perso refers to current social media trends to serve users the latest options, like bestseller lipstick and blush shades. Its TrendSpotter function scans >3,500 online sources to catch trends early, reducing the time from idea to market availability.

  • Skin analysis: Uses computer vision for skin diagnosis, sensors to track air quality and UV exposure, and reinforcement learning for formulation changes. All this results in customized skincare catered to your unique needs.

L’Oréal’s HAPTA program also highlights inclusivity by uniting computer vision and robotic operation to aid those with mobility impairments in makeup application, with a 73% success rate during trials.

Overall, L’Oréal’s incorporation of AI technology sets a good example to connect scientific study with daily skincare, bringing complex diagnostics and customized options to the consumer.

Beyond the beauty

Blending AI with cosmetology is ushering in an age of personalized beauty, where data, algorithms, and human imagination merge. It breaks the illusion of a one-size-fits-all solution, making beauty more inclusive and tailored. It also reminds us that AI isn’t limited to industries like technology: from farming to skincare, AI is learning to help us, bringing innovative solutions and pushing our limits beyond expectations.


Suggestions:

  1. How Rare Beauty gave its ad strategy an AI glow-up

  2. AI in Cosmetics Drives New Standards in Beauty Innovation 2025

Disclaimer:

Backlinks provided within this blog are intended for the reader’s further understanding only. My personal experiences and research serve as the base for the content of this blog. Despite my best efforts to keep the content current and correct, not every situation may necessarily benefit from it. Images utilized in this blog are created using Canva and Copilot. While making any crucial life decisions, please consult professional advice or conduct independent research. This blog does not intend to be a substitute for expert guidance; it is solely meant to be informative.
