Code Smell 306 - AI External Comments


TL;DR: You reference external AI conversations to explain code instead of writing declarative tests.
Problems
- Comments
- External dependencies
- Broken links
- Unverified behavior
- Knowledge fragmentation
- Maintenance burden
- Lost context
- Obsolete comments
- Misleading explanations
Solutions
- Write executable tests
- Remove external references
- Do not blindly trust the AI
- Describe with inline examples
- Keep tests local
- Remove all comments
- Replace magic numbers with constants
Refactorings
Context
If you add comments that reference external AI conversations, Stack Overflow posts, or online resources to explain how your functions work, you are not thinking about your reader.
These references create dangerous external dependencies that break over time.
Links become dead, conversations get deleted, and future maintainers cannot access the context they need to understand your code.
When you rely on external AI advice instead of writing proper tests, you create code that appears documented but lacks verification and local understanding.
The moment you rely on an external AI chat to explain what your code does, you make your codebase dependent on a conversation that might disappear, change, or get outdated.
A unit test is more effective than any link. It defines what the code does and what you expect it to do. No need to click or guess.
Comments and documentation often lie. Code never does.
Sample Code

Wrong

```python
def calculate_starship_trajectory(initial_velocity, fuel_mass,
                                  burn_rate, gravity=9.81):
    """
    See explanation at
    https://claude.ai/share/5769fdd1-46e3-40f4-b9c6-49efbee93b90
    """
    # AI suggested this approach
    burn_time = fuel_mass / burn_rate

    # Physics formula from Claude conversation
    # https://claude.ai/share/5769fdd1-46e3-40f4-b9c6-49efbee93b90
    delta_v = gravity * burn_time * 0.85

    # 0.85 explanation
    # https://claude.ai/share/5769fdd1-46e3-40f4-b9c6-49efbee93b90
    final_velocity = initial_velocity + delta_v

    # Return format suggested by GPT
    return {
        'burn_time': burn_time,
        'final_velocity': final_velocity,
        'delta_v': delta_v
    }

def calculate_orbit_insertion(velocity, altitude):
    """
    Algorithm explanation available at:
    https://claude.ai/chat/orbit-insertion-help-session
    """
    # See AI conversation for why we use this formula
    orbital_velocity = (velocity * 1.1) + (altitude * 0.002)
    return orbital_velocity
```
Right

```python
def calculate_starship_trajectory(initial_velocity, fuel_mass,
                                  burn_rate, gravity=9.81):
    THRUST_EFFICIENCY = 0.85  # You replace the magic number
    burn_time = fuel_mass / burn_rate
    delta_v = gravity * burn_time * THRUST_EFFICIENCY
    final_velocity = initial_velocity + delta_v
    return {
        'burn_time': burn_time,
        'final_velocity': final_velocity,
        'delta_v': delta_v
    }

def calculate_orbit_insertion(velocity, altitude):
    """Calculate orbit insertion velocity."""
    VELOCITY_BOOST_FACTOR = 1.1
    ALTITUDE_ADJUSTMENT_RATE = 0.002
    orbital_velocity = (velocity * VELOCITY_BOOST_FACTOR +
                        altitude * ALTITUDE_ADJUSTMENT_RATE)
    return orbital_velocity
```

```python
import unittest

from starship_trajectory_calculator import (
    calculate_starship_trajectory, calculate_orbit_insertion
)

class TestStarshipTrajectoryCalculator(unittest.TestCase):

    def test_basic_trajectory_calculation(self):
        result = calculate_starship_trajectory(100, 1000, 10)
        self.assertEqual(result['burn_time'], 100.0)
        self.assertAlmostEqual(result['delta_v'], 833.85, places=2)
        self.assertAlmostEqual(result['final_velocity'], 933.85,
                               places=2)

    def test_zero_fuel_scenario(self):
        result = calculate_starship_trajectory(200, 0, 10)
        self.assertEqual(result['burn_time'], 0.0)
        self.assertEqual(result['delta_v'], 0.0)
        self.assertEqual(result['final_velocity'], 200.0)

    def test_high_burn_rate(self):
        result = calculate_starship_trajectory(150, 500, 100)
        self.assertEqual(result['burn_time'], 5.0)
        self.assertAlmostEqual(result['delta_v'], 41.69, places=2)
        self.assertAlmostEqual(result['final_velocity'], 191.69,
                               places=2)

    def test_custom_gravity(self):
        result = calculate_starship_trajectory(100, 600, 20,
                                               gravity=3.71)  # Mars
        self.assertEqual(result['burn_time'], 30.0)
        self.assertAlmostEqual(result['delta_v'], 94.61, places=2)
        self.assertAlmostEqual(result['final_velocity'], 194.61,
                               places=2)

    def test_orbit_insertion_basic(self):
        orbital_velocity = calculate_orbit_insertion(7800, 400000)
        self.assertAlmostEqual(orbital_velocity, 9380.0, places=2)

    def test_orbit_insertion_low_altitude(self):
        orbital_velocity = calculate_orbit_insertion(7500, 200000)
        self.assertAlmostEqual(orbital_velocity, 8650.0, places=2)

    def test_orbit_insertion_zero_altitude(self):
        orbital_velocity = calculate_orbit_insertion(8000, 0)
        self.assertAlmostEqual(orbital_velocity, 8800.0, places=2)
```
Detection
[X] Automatic
You can detect this smell by searching for comments containing URLs to AI chat platforms, external forums, or references to "AI suggested" or "according to conversation".
Look for functions that have detailed external references but lack corresponding unit tests.
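As a sketch, such a detector fits in a few lines of Python. The URL patterns and trigger phrases below are illustrative assumptions, not an exhaustive list; extend them with the chat platforms your team actually uses.

```python
import re
from pathlib import Path

# Illustrative patterns: chat-platform URLs and telltale phrases.
AI_REFERENCE_PATTERN = re.compile(
    r"https?://(chatgpt\.com|chat\.openai\.com|claude\.ai|gemini\.google\.com)\S*"
    r"|\bAI suggested\b"
    r"|\baccording to (the )?conversation\b",
    re.IGNORECASE,
)

def find_ai_references(source):
    """Return (line_number, line) pairs whose comments reference AI chats."""
    hits = []
    for number, line in enumerate(source.splitlines(), start=1):
        comment = line.partition("#")[2]  # naive: inline '#' comments only
        if comment and AI_REFERENCE_PATTERN.search(comment):
            hits.append((number, line.strip()))
    return hits

def scan_project(root="."):
    """Print every suspicious comment under root."""
    for path in Path(root).rglob("*.py"):
        for number, line in find_ai_references(path.read_text(errors="ignore")):
            print(f"{path}:{number}: {line}")
```

Running `scan_project()` from the repository root prints each offending file, line number, and comment, which you can then replace with a test.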
Exceptions
Academic or research code might legitimately reference published papers or established algorithms.
However, these should point to stable, citable sources and permanent links rather than ephemeral AI conversations, and should still include comprehensive tests.
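For instance, a function might cite the classical rocket equation, a stable published result, while a local assertion still pins down the behavior. This example is illustrative, not taken from the article:

```python
import math

def tsiolkovsky_delta_v(exhaust_velocity, initial_mass, final_mass):
    """Ideal rocket equation.

    Reference: Tsiolkovsky (1903); any standard astrodynamics
    textbook. A stable, citable source, not an ephemeral chat.
    """
    return exhaust_velocity * math.log(initial_mass / final_mass)

# The behavior is still verified locally:
# delta-v equals the exhaust velocity when the mass ratio is e.
assert math.isclose(tsiolkovsky_delta_v(1.0, math.e, 1.0), 1.0)
```

The citation explains where the formula comes from; the assertion, not the citation, guarantees what the code actually does.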
Tags
- Comments
Level
[X] Beginner
Why the Bijection Is Important
In the real world, you don't rely on external authorities to validate your understanding of critical processes.
You develop internal knowledge and verification systems.
Your code should reflect this reality by containing all necessary understanding within itself through tests and clear implementation.
When you break this correspondence by depending on external AI conversations, you create fragile knowledge that disappears when links break or platforms change, leaving future maintainers without the context they need.
Links are not behavior.
Tests are.
AI Generation
AI generators often create this smell because they suggest adding references to the conversation or external sources where the solution was previously discussed.
They tend to generate excessive comments that point back to their explanations rather than creating self-contained, testable code.
AI Detection
AI can detect this smell when you ask it to identify external references in comments, especially URLs pointing to AI chat platforms.
Most AI tools can help convert the external explanations into proper unit tests when given clear instructions.
Try Them!
Remember: AI assistants make lots of mistakes.
Suggested Prompt: Replace this external reference with test coverage
| Without Proper Instructions | With Specific Instructions |
| --- | --- |
| ChatGPT | ChatGPT |
| Claude | Claude |
| Perplexity | Perplexity |
| Copilot | Copilot |
| Gemini | Gemini |
| DeepSeek | DeepSeek |
| Meta AI | Meta AI |
| Grok | Grok |
| Qwen | Qwen |
Conclusion
External references to AI conversations create fragile documentation that breaks over time and fragments your codebase's knowledge.
You should replace these external dependencies with self-contained unit tests that both document and verify behavior locally, ensuring your code remains understandable and maintainable without relying on external resources.
Relations
Disclaimer
Code Smells are my opinion.
Credits
Photo by julien Tromeur on Unsplash
"The best documentation is code that doesn't need documentation."
Steve McConnell
This article is part of the CodeSmell Series.
Written by Maxi Contieri

I'm a senior software engineer who loves clean code and declarative designs. S.O.L.I.D. and agile methodologies fan.