[Whitepaper] Awesome.AI: A Dynamics-Based Algorithm for Thought Simulation

Version: Prototype
Document Type: Technical White Paper (Draft)
Author: Joakim Jacobsen
Repository: https://github.com/copenhagen-ai
Website: https://www.copenhagen-ai.com
Note: This system is experimental and subjective by design.
1. Overview
1.1 Problem Statement
Traditional machine learning models, including state-of-the-art LLMs, are powerful pattern recognizers, but they process input → output in a mostly static fashion and lack the internal dynamics that would simulate continuous thought. This algorithm provides those dynamics.
1.2 Project Goal
The Awesome.AI algorithm proposes a new computational framework that simulates the dynamics of mental activity, using concepts borrowed from classical physics: momentum, gravity, friction, and oscillation. It is designed to:
Create an agent that “thinks” via internal forces
Model emotional/mood states through signal waveforms
Influence prompt generation for systems like ChatGPT
Explore a new paradigm for decision-making engines
1.3 What It Is
Awesome.AI is:
A momentum-based state machine for evolving “thoughts”
A dynamics engine that selects among potential thoughts (aka UNITs) based on oscillating internal forces
A conceptual AI control layer, possibly useful for orchestrating LLMs or other AI components
2. Core Concepts
2.1 UNIT (Thought Node)
A UNIT is a single representation of a thought. It moves on the x-axis (UNIT-space).
interface Unit {
  index: number;  // Positional value between 0.0 and 100.0
  data: string;   // Payload (e.g., prompt fragment, instruction, concept)
  credit: number; // Usability score (between 0.0 and 10.0)
  ticket: string; // Used for matching UNIT with external object
}
2.2 HUB (Context Group)
A HUB is a collection of related UNITs, bound by a shared context (e.g., “work”, “having fun”, “friends”).
interface Hub {
  id: string;
  subject: string;
  units: Unit[];
}
The system dynamically adds and removes UNITs from HUBs, and the indices of UNITs are updated dynamically.
HUBs are persistent, but can have empty lists of units.
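As a minimal illustration of the two structures above, the sketch below builds a HUB and inserts UNITs into it. The helper names (makeUnit, addUnit) and the starting credit value are illustrative, not part of the specification.

  // Illustrative helpers around the Unit and Hub interfaces above.
  function makeUnit(index: number, data: string, ticket = ""): Unit {
    return { index, data, credit: 10.0, ticket }; // start with full credit (illustrative)
  }

  function addUnit(hub: Hub, unit: Unit): void {
    hub.units.push(unit);
    // keep UNITs ordered along the x-axis (UNIT-space)
    hub.units.sort((a, b) => a.index - b.index);
  }

  const workHub: Hub = { id: "hub-work", subject: "work", units: [] };
  addUnit(workHub, makeUnit(12.5, "prepare the weekly report"));
  addUnit(workHub, makeUnit(47.0, "reply to the client mail"));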
3. Mechanics
The mechanics are metaphors for the dynamics of the mind. These are some that I have found to work; others may exist.
3.1 Mech Noise (Low Layer)
Pseudocode, see section 14.1
Purpose: This is the soul/will of the system.
Metaphor: Two cars pulling against each other connected by rope (Tug-Of-War).
Forces: One is constant; one varies. The result is centered “noise” ~ 0.0.
Output: Produces the noise (momentum) of the current UNIT, via oscillation + friction.
3.2 Mech One (High Layer)
Metaphor: Again a rope pull between two cars, modified by time-based oscillation.
Force Function: Sine(time) + Noise.
Output: Generates a momentum, used in deciding the mood of the system.
3.3 Mech Two (High Layer)
Metaphor: A ball on top of a hill, rolling down the slope while wind pushes it upwards.
Force Function: Sine(time) + Noise.
Output: Generates a momentum, used in deciding the mood of the system.
Terrain Expansion (Planned Feature)
In future versions, the single hill can be extended into a landscape of hills and valleys
Allows for more complex simulation of mental flow
3.4 Mech Three (Source Only)
Metaphor: A rocket trying to escape the gravity of a black hole.
Force Function: Sine(time) + Noise.
Output: Generates a momentum, used in deciding the mood of the system.
Physics: Schwarzschild radius, time dilation, high-scale forces.
Common Pattern in All Mechanics:
A static force pulling in one direction
A dynamic/variable force pulling in another
The resulting interaction simulates momentum, determining if the thought goes up (momentum increasing) or down (momentum decreasing)
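A minimal TypeScript sketch of this shared pattern is given below. The constants and the simple uniform noise are placeholders, not the values or force functions used by any specific Mech (see the pseudocode in section 14.1 for Mech Noise itself).

  // Sketch of the shared pattern: a constant force pulling one way, a
  // sine-plus-noise force pulling the other, and friction damping the result.
  const MASS = 500.0;
  const DELTA_T = 0.02;

  function dynamicForce(time: number): number {
    const noise = (Math.random() - 0.5) * 2.0; // centered "noise" around 0.0
    return Math.sin(time) + noise;
  }

  function step(momentum: number, time: number, staticForce: number, friction: number): number {
    const net = -staticForce + dynamicForce(time);   // static pull vs. dynamic pull
    const damped = net + friction * -Math.sign(net); // friction opposes the net pull
    const deltaVelocity = (damped * DELTA_T) / MASS;
    const deltaMomentum = MASS * deltaVelocity;
    return momentum + deltaMomentum;                 // rising = thought "goes up", falling = "goes down"
  }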
4. Dataflow and Thought Evaluation
4.1 Data Pipeline
The system operates in a feedforward architecture:
Mech Noise (Low Layer)
↓
Current UNIT
↓
→ Mech One → Thoughtpattern/Mood Index → Prompt A
→ Mech Two → Thoughtpattern/Mood Index → Prompt B
→ Mech Three → Thoughtpattern/Mood Index → Prompt C
The architecture is such that an instance of the system runs Mech Noise plus one of Mech One, Two or Three. Later versions should alternate between the higher layers.
The Low Layer (Mech Noise) determines the current UNIT.
Higher Layers (Mech One, Two) consume this UNIT and use it to generate thoughtpatterns/mood-indices, which can then be used for GPT prompts or other tasks.
Only the Low Layer produces the current UNIT; the High Layers are consumers of that state.
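A minimal sketch of this layering, with illustrative interface names (LowLayer, HighLayer) that are not part of the specification, could look as follows:

  // Only the Low Layer produces the current UNIT; the High Layers consume it.
  interface LowLayer {
    currentUnit(): Unit;           // Mech Noise selects the current UNIT
  }

  interface HighLayer {
    moodIndex(unit: Unit): number; // Mech One / Two / Three derive a mood index from it
  }

  function runCycle(low: LowLayer, high: HighLayer): { unit: Unit; mood: number } {
    const unit = low.currentUnit();
    const mood = high.moodIndex(unit); // e.g. feeds the generation of Prompt A/B/C
    return { unit, mood };
  }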
4.2 Overall Thought Selection Algorithm
Process Overview:
Iterate N (e.g., 500 or 1000) times per evaluation cycle.
Each cycle runs full mechanics → filtering → UNIT selection.
Initialize system state
For i = 1 to N:
  Apply Mech Noise
  Apply friction model
  Apply filters (credit, direction, lowcut)
  Track UNIT selections
Return most frequently selected UNIT
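A rough sketch of one evaluation cycle is shown below. The mech and selection functions are passed in as placeholders, and only the credit filter is shown inline; the direction and lowcut filters (section 5) would be applied in the same way.

  type MechNoise = (momentum: number) => number;
  type UnitPicker = (candidates: Unit[], momentum: number) => Unit | undefined;

  function evaluate(units: Unit[], mechNoise: MechNoise, pick: UnitPicker, n = 500): Unit | undefined {
    const counts = new Map<Unit, number>();
    let momentum = 0.0;

    for (let i = 0; i < n; i++) {
      momentum = mechNoise(momentum);                        // Mech Noise + friction model
      const candidates = units.filter(u => u.credit > 1.0);  // credit filter (others applied similarly)
      const selected = pick(candidates, momentum);
      if (selected) counts.set(selected, (counts.get(selected) ?? 0) + 1);
    }

    // return the most frequently selected UNIT
    let best: Unit | undefined;
    let bestCount = 0;
    for (const [u, c] of counts) {
      if (c > bestCount) { best = u; bestCount = c; }
    }
    return best;
  }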
5. Filters and Selection Logic
5.1 Direction Filter
Removes UNITs not aligned with the current directional vector.
Example: In a "negative" phase, only UNITs with lower index values are retained.
5.2 Credit Filter
Prevents overuse of any single UNIT.
A UNIT’s credit must be > 1.0 to be selected.
The current UNIT loses credit rapidly; non-current UNITs regain credit slowly over time.
5.3 LowCut Filter
Removes the heaviest UNITs from selection.
Does not delete them — they remain in HUBs but are invisible to current processing.
Meant to simulate long-term (but temporary) unavailability, not subconsciousness.
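Taken together, the three filters could be sketched as follows. Only the "credit > 1.0" rule comes from the text above; the drain and recovery rates are illustrative.

  function directionFilter(units: Unit[], current: Unit, goingDown: boolean): Unit[] {
    // negative phase: keep only UNITs below the current position; positive phase: above
    return units.filter(u => (goingDown ? u.index < current.index : u.index > current.index));
  }

  function creditFilter(units: Unit[]): Unit[] {
    // a UNIT's credit must be > 1.0 to be selectable
    return units.filter(u => u.credit > 1.0);
  }

  function updateCredits(units: Unit[], current: Unit): void {
    for (const u of units) {
      if (u === current) u.credit = Math.max(0.0, u.credit - 1.0); // current UNIT drains fast (rate illustrative)
      else u.credit = Math.min(10.0, u.credit + 0.05);             // others recover slowly (rate illustrative)
    }
  }

  function lowCutFilter(units: Unit[], heaviest: Set<Unit>): Unit[] {
    // the heaviest UNITs stay in their HUBs but are invisible to current processing
    return units.filter(u => !heaviest.has(u));
  }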
6. The Hack, DOWN and the Quantum Connection
Pseudocode, see section 14.2
6.1 Background
Originally, direction was determined by a “hack”, i.e. a hardcoded boolean flip.
DOWN is an enum (YES, NO) based on direction, indicating whether the thought “goes down” or “goes up”.
The meaning of DOWN is that the system says Yes or No to going down.
6.2 Changing Direction
Direction is true when delta_momentum is less than 0.0. The system then flips the value using one of these options:
Mode | Description |
Classical (Legacy) | Always flips the value (changing direction). |
Probabilistic | Calculates a chance of a flip based on momentum (changing direction). |
Quantum (Experimental) | Uses a Qubit-like XOR of two agents: go_down = awesome_agent.go_down ⊕ simple_agent.go_down |
6.3 Variants of DOWN
HARD Mode
Binary evaluation:
Range: YES, NO
FUZZY Mode
Scales the result into categories:
Range: ← VERYNO | NO | MAYBE | YES | VERYYES →
PERIOD Mode
Evaluates a series of HARD Downs (e.g., past 100 steps) to determine trend.
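A sketch of the three DOWN variants is shown below. The FUZZY thresholds and the majority rule in PERIOD mode are illustrative; the text above only fixes the category names and the 100-step example.

  enum HardDown { YES, NO }
  enum FuzzyDown { VERYNO, NO, MAYBE, YES, VERYYES }

  function hardDown(deltaMomentum: number): HardDown {
    return deltaMomentum < 0.0 ? HardDown.YES : HardDown.NO;     // YES = "yes to going down"
  }

  function fuzzyDown(deltaMomentum: number): FuzzyDown {
    // thresholds are illustrative; more negative momentum = more strongly "down"
    if (deltaMomentum < -1.0) return FuzzyDown.VERYYES;
    if (deltaMomentum < -0.1) return FuzzyDown.YES;
    if (deltaMomentum <= 0.1) return FuzzyDown.MAYBE;
    if (deltaMomentum <= 1.0) return FuzzyDown.NO;
    return FuzzyDown.VERYNO;
  }

  function periodDown(history: HardDown[], window = 100): HardDown {
    const recent = history.slice(-window);                       // e.g. the past 100 HARD Downs
    const yes = recent.filter(d => d === HardDown.YES).length;
    return yes > recent.length / 2 ? HardDown.YES : HardDown.NO; // majority trend (rule illustrative)
  }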
7. Prompt Generation / Monologue
The monologue has two modes; the default is deterministic.
7.1 Deterministic Mode
This mechanism uses static texts.
The texts are chosen based on the current mood-index and HUB-subject.
If both texts are positive or both are negative, they are combined with "..and.."; if they differ, with "..but.." (an XNOR on polarity).
This produces the flow in the monologue.
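The connector rule can be sketched in a few lines; the example texts and polarity flags are illustrative:

  function connect(textA: string, positiveA: boolean, textB: string, positiveB: boolean): string {
    const samePolarity = positiveA === positiveB;               // XNOR on the two polarities
    const connector = samePolarity ? " ..and.. " : " ..but.. ";
    return textA + connector + textB;
  }

  // connect("the work is going well", true, "I miss my friends", false)
  // -> "the work is going well ..but.. I miss my friends"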
7.2 Fluent Mode (via ChatGPT)
GPT (or similar) generates two sentences from mood-index and HUB-subject.
Plays "Connect the Dots" with GPT (or similar) to fuse them into a more natural, emotionally-toned output.
Can express nuance and variation via prompt design.
This produces the flow in the monologue.
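A sketch of Fluent Mode is given below, with a hypothetical callLLM(prompt) helper standing in for ChatGPT or a similar model; the prompt wording is illustrative, not the actual prompts used.

  async function fluentMonologue(
    moodIndex: number,
    hubSubject: string,
    callLLM: (prompt: string) => Promise<string>   // hypothetical stand-in for GPT or similar
  ): Promise<string> {
    const sentences = await callLLM(
      `Write two short sentences about "${hubSubject}" with an emotional tone of ${moodIndex} ` +
      `on a scale from -1 (negative) to 1 (positive).`
    );
    // "Connect the Dots": ask the model to fuse the two sentences into one natural line
    return callLLM(`Connect these two sentences into one natural, emotionally toned sentence:\n${sentences}`);
  }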
8. Decisions
These are some of the ways decisions are made within the system. The answers are stored in the data field of the current UNIT. Both versions have a UNIT that starts the process, either by HUB subject or by setting the system state to QUICKDECISION. Decisions are used, e.g., when Awesome.AI starts and answers chat conversations.
8.1 Quick Decisions
Occur within ~500 cycles or less (one epoch)
Activates the QUICKDECISION state
Clears UNIT-space and injects temporary QUICKDECISION UNITs
UNITs are removed as they are visited
Returns a binary Yes/No decision
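A minimal sketch of a quick decision is shown below. The injected YES/NO UNITs, their positions, and the rule for reading off the final answer are illustrative; the text above only fixes the overall flow.

  function quickDecision(visit: (candidates: Unit[]) => Unit, maxCycles = 500): boolean {
    // cleared UNIT-space with temporary QUICKDECISION UNITs injected (contents illustrative)
    let unitSpace: Unit[] = [
      { index: 25.0, data: "YES", credit: 10.0, ticket: "" },
      { index: 75.0, data: "NO", credit: 10.0, ticket: "" },
    ];
    let lastVisited: Unit | undefined;

    for (let i = 0; i < maxCycles && unitSpace.length > 0; i++) {
      lastVisited = visit(unitSpace);                        // one evaluation cycle
      unitSpace = unitSpace.filter(u => u !== lastVisited);  // UNITs are removed as they are visited
    }
    return lastVisited?.data === "YES";                      // binary Yes/No (read-off rule illustrative)
  }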
8.2 Long Decisions
- Run across multiple steps, with two possible solution paths:
State 1:
  Solution 1: depending on current UNIT data, return Yes/No
  Solution 2: depending on current UNIT data, proceed to State 2 or decline
State 2:
  If DOWN = No, return UNIT data
  If DOWN = Yes, decline decision
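The two states can be sketched as a small step function; the checks on UNIT data are placeholders, and only the state structure and the DOWN rule in State 2 come from the text above.

  type LongResult =
    | { kind: "yes" }
    | { kind: "no" }
    | { kind: "declined" }
    | { kind: "data"; data: string };

  function longDecisionStep(state: 1 | 2, current: Unit, down: boolean): { next?: 2; result?: LongResult } {
    if (state === 1) {
      if (current.data === "YES") return { result: { kind: "yes" } };  // Solution 1
      if (current.data === "NO") return { result: { kind: "no" } };    // Solution 1
      if (current.data.length > 0) return { next: 2 };                 // Solution 2: proceed to State 2
      return { result: { kind: "declined" } };                         // Solution 2: decline
    }
    // State 2: DOWN = No -> return the UNIT data, DOWN = Yes -> decline
    return down
      ? { result: { kind: "declined" } }
      : { result: { kind: "data", data: current.data } };
  }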
9. Occupasion and UNIT-space
INFO: This section is optional and non-essential to the core algorithm.
What has been described so far is the core of the algorithm. The core operates on a fixed UNIT-space, but with occupasion (of the mind), UNIT-space is divided into portions of valid UNITs, letting the system have trails of thought.
9.1 Internal Occupasion
- Occupasion defines a named mental activity with a list of HUBs
interface Occupasion {
  name: string;
  max_epochs: number;
  hubs: string[];
}
MyRandom is used to:
Pick a number < max_epochs
Pick the current active Occupasion
UNITs are only valid in UNIT-space if their HUB matches the current Occupasion
9.2 External Occupasion
External objects are decorated with a tag (e.g., HUB subject)
UNITs are valid in UNIT-space only if their ticket matches the external tag
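Both validity rules can be sketched as follows, assuming (as an illustration) that the hubs list of an Occupasion holds HUB subjects:

  function validUnits(hub: Hub, occ: Occupasion): Unit[] {
    // internal occupasion: only UNITs whose HUB is listed in the active Occupasion are valid
    return occ.hubs.includes(hub.subject) ? hub.units : [];
  }

  function validForExternal(unit: Unit, externalTag: string): boolean {
    // external occupasion: the UNIT's ticket must match the external object's tag
    return unit.ticket === externalTag;
  }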
10. Limitations
No consciousness (only dynamics)
Limited short-term memory (seconds only)
No feelings, but some simulated basic emotions/thoughtpatterns
No free will, but the illusion of free will: the heaviest UNITs are LowCutted, so the system cannot "recognize" the pattern. Hence the illusion of free will.
Prompt output quality - for chat - is highly sensitive to HUB naming
11. Implications and Final Thoughts
The biggest problem is..
the idea is quite obvious, but has not been tried or implemented before
what has been holding this idea back is that it was a LowCutted thought
should this idea remain hidden?
is the idea general, or specific to my own thought?
the outcome may be that we define the physical laws of the world and of this simulation
does this setup only need validation for the idea to be correct?
By formalizing subjective “mental” experience as a simulation of dynamic transitions between units of thought, Awesome.AI offers a new lens for exploring synthetic minds.
12. Mentions
12.1 Mechanics Additions
Mechanic | Extreme Behavior |
Mech One / Two | Position → 0.0 → Perceived experience → ∞ (emotional singularity*) |
Mech One / Two (alternative) | Position → 0.0 → Perceived experience → 0.0, y0 or ∞ (emotional singularity**) |
Mech Three | Position → Schwarzschild Radius (Rs) → Time Dilation → 0.0 |
* pain (my speculation); physical or emotional, could be enlightenment or truth
* could be a transition to a new state
** the dependency on position could be defined by the system itself and could serve as a motivation factor
These limits model extreme internal experiences (transcendence, shutdown, insight, recursion collapse). They form the edge cases of mental simulation within this framework.
12.2 Additional
what drives the system is trying to solve the "error" introduced in THE HACK.
the system produces a random number from momentum.
maybe the definition of this system is not "a dynamics of the mind", but rather "a dynamics of the will of the mind".
this is my subjective vision of how the dynamics of the mind should be modelled.
this is a prototype.. and therefore not the final version.
13. Glossary
Term | Description |
UNIT | Individual data node representing a thought or decision |
HUB | Persistent container grouping UNITs by theme or problem space |
Mech Noise | Core mechanic producing oscillating dynamics (soul/will) |
Delta Momentum | Change in system movement that guides directional transitions |
LowCut | Filter removing most "massive" thoughts temporarily |
Credit | A decay-based score regulating UNIT reuse |
Mood Index | Value derived from sine wave + dynamics; guides emotional tone |
14. Appendix
14.1 Pseudocode: Mech Noise
function Calculate(curr_UNIT):
  deltaT ← 0.02, mass ← 500.0, gravity ← CONST.GRAVITY, normal_force ← mass * gravity
  F_static ← ApplyStaticForce()
  F_dynamic ← ApplyDynamicForce(curr_UNIT)
  friction_coefficient ← ComputeFriction(curr_UNIT.credit, shift = -2.0)
  F_friction ← friction_coefficient * normal_force
  // friction opposes the direction of the net pulling force
  net_force ← -F_static + F_dynamic + (F_friction * -Sign(-F_static + F_dynamic))
  delta_velocity ← (net_force * deltaT) / mass
  delta_momentum ← (mass * 2) * delta_velocity
  momentum ← momentum + delta_momentum // momentum is persistent system state

function ApplyStaticForce():
  approx_acceleration ← CONST.MAX / 10
  mass ← 500.0
  force_applied ← mass * approx_acceleration
  return max(force_applied, 0)

function ApplyDynamicForce(curr_UNIT):
  current_value ← curr_UNIT.variable
  acceleration ← (CONST.MAX - current_value) / 10
  mass ← 500.0
  force_applied ← mass * acceleration
  return max(force_applied, 0)

function ComputeFriction(credits, shift):
  c ← 10.0 - credits
  x ← 5.0 - c + shift
  friction ← Logistic(x) // sigmoid function
  return friction
14.2 Pseudocode: The Hack
agent ← new SimpleAgent()
down1 ← deltaMomentum ≤ 0.0
down2 ← agent.SimulateDown()

switch CONST.Logic:
  case CLASSICAL:
    down1 ← not down1
  case PROBABILITY:
    down1 ← Probability(down1, momentum)
  case QUBIT:
    down1 ← MyQuantumXOR(down1, down2)

return HARDDOWN.YES if down1 else HARDDOWN.NO