How Augmented Awareness Evolved Over Time


Augmented Awareness is the system I developed to manage my daily life during the pandemic. It evolved into an early-stage toolset for quantified-self experiments.
I will work on the next version of Augmented Awareness openly on GitHub instead of privately. The repository currently contains an Arduino pomodoro timer and a library to read Obsidian vaults.
AgAu
The new version is called Aww. The previous version was named AgAu and had a silver/gold Yin and Yang logo, but since crypto companies like that acronym too much, and I've discovered the benefits of meditation, I'm choosing cute meditating cats as the new theme.
The first version of Augmented Awareness wasn't a traditional application. It was a no-code system using Google Sheets, Nest devices, and IFTTT. It worked well until Google sunset the Assistant features I depended on.
The core of the current version, AgAu, is a traditional web application. I built a few prototypes in Python (Django, FastAPI) and TypeScript (Svelte, SvelteKit, and Prisma), but eventually chose to write it in C# as an ASP.NET Core Blazor application. The design is quite traditional, based on Entity Framework Core and Clean Architecture.
```mermaid
flowchart LR
    subgraph AgAu
        app[ASP.NET Core app] --> db[(database)]
    end
    subgraph obsidian
        notes
        schedule
    end
    daily[offline behavior and activities] -. manually copy information .-> AgAu
    wearables -. manually copy information .-> AgAu
    screen[ScreenTime and ActivityWatch] -. manually copy information .-> AgAu
    obsidian -. manually copy information .-> AgAu
    pomodoro[physical pomodoro timers] -. manually copy information .-> AgAu
```
AgAu lost an important property of the first version: it needs regular busywork to operate. I have to enter or copy over information about my actions and behavior by hand. This happened because I didn't have a clear understanding of my requirements: I didn't spell them out, so I didn't preserve them!
The original Augmented Awareness, through its Google Home interface, was easy to use during my daily activities, tasks, and routines. AgAu lost its ambient intelligence and pervasive computing capabilities.
I appreciated the original system's ability to work with minimal attention required:
$$\text{Value} = \frac{\text{Functionality}}{\text{Attention Required}}$$
Aww
Aww won’t initially replace AgAu but will complement it. The first step in developing Aww is to bring back the lost ambient computing features by replacing manual data entry with IoT devices and software integrations. I will create Aww components that connect to AgAu through REST services, and eventually I might retire or open-source AgAu. You will be able to use Aww as a toolkit of independent components: mix and match them to create your own quantified-self experiments or life-management systems.
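As a sketch of that integration style, an Aww component could push observations to AgAu roughly like this; the endpoint path, payload shape, and token handling are hypothetical illustrations, not AgAu's actual API:

```python
import os
from datetime import datetime, timezone

import requests  # pip install requests

# Hypothetical AgAu endpoint and token: placeholders, not the real API.
AGAU_URL = os.environ.get("AGAU_URL", "http://localhost:5000/api/events")
AGAU_TOKEN = os.environ.get("AGAU_TOKEN", "")


def push_event(source: str, kind: str, value: float | None = None) -> None:
    """Send one observation (e.g. a completed pomodoro) to AgAu over REST."""
    payload = {
        "source": source,                      # e.g. "pomodoro-timer"
        "kind": kind,                          # e.g. "pomodoro.completed"
        "value": value,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    response = requests.post(
        AGAU_URL,
        json=payload,
        headers={"Authorization": f"Bearer {AGAU_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()


if __name__ == "__main__":
    push_event("pomodoro-timer", "pomodoro.completed", 25.0)
```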
Aww will be a lot more than just a bunch of IoT devices. I want it to adapt to my behavior, be contextual, and fade into the background.
Core Principles and Design Philosophy
- **Ambient & Permacomputing:** The system operates in the background, minimizing direct user interaction. It leverages existing habits and routines, integrating seamlessly into the user's life rather than demanding constant attention. Data collection is largely passive. Computation is prioritized locally, using minimal resources.
- **Agentic AI (OODA Loop):** The core intelligence is built around the Observe-Orient-Decide-Act loop, creating a proactive and adaptive system.
- **Local-First & Privacy-Preserving:** All raw data processing happens locally. Cloud APIs are used judiciously and only for high-level, anonymized inferences.
- **Modularity & Extensibility:** The architecture is designed to be modular, allowing for easy swapping of components (different sensors, AI models, UI elements) and future expansion.
- **User Control & Transparency:** The user maintains full control over the system's actions and can override recommendations or adjust settings at any time. The reasoning behind decisions should be explainable (to the extent possible with AI).
- **Multi-Modal Data Fusion:** The system combines data from diverse sources to build a holistic understanding of the user's state and environment.
Data Sources
**Wearable Sensors (HealthConnect)**
- Source: Android wearable devices via the HealthConnect API.
- Data: Heart rate, sleep stages, activity levels, steps, etc. (timeseries data).
- Processing: Local aggregation and feature extraction (e.g., resting heart rate, sleep efficiency).
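To make that local feature extraction concrete, here is a minimal Python sketch of the two features mentioned above; the sample types and the lowest-10% heuristic are illustrative assumptions, not the HealthConnect data model:

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean


@dataclass
class HeartRateSample:
    timestamp: datetime
    bpm: float


@dataclass
class SleepStage:
    stage: str          # e.g. "light", "deep", "rem", "awake"
    minutes: float


def resting_heart_rate(samples: list[HeartRateSample]) -> float:
    """Approximate resting HR as the mean of the lowest 10% of readings."""
    lowest = sorted(s.bpm for s in samples)
    cutoff = max(1, len(lowest) // 10)
    return mean(lowest[:cutoff])


def sleep_efficiency(stages: list[SleepStage]) -> float:
    """Fraction of time in bed actually spent asleep."""
    in_bed = sum(s.minutes for s in stages)
    asleep = sum(s.minutes for s in stages if s.stage != "awake")
    return asleep / in_bed if in_bed else 0.0
```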
**Environmental Sensors**
- Source: RTL-SDR with rtl_433 (433 MHz) for door/cabinet openings (e.g., a repurposed door alarm to count fridge openings), temperature/humidity sensors, air quality sensors. Could also include a smart electricity meter, if available.
- Data: Event timestamps (door openings), environmental readings (timeseries).
- Processing: Anomaly detection (e.g., unusual fridge opening frequency), data smoothing, and aggregation.
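As a sketch of the event side, fridge openings could be counted from rtl_433's JSON output roughly like this; the sensor model string and the openings-per-hour threshold are assumptions for illustration:

```python
import json
import subprocess
from collections import deque
from datetime import datetime, timedelta

# Door/window sensor model name as reported by rtl_433; adjust to your device.
FRIDGE_SENSOR_MODEL = "Generic-DoorSensor"
recent_openings: deque[datetime] = deque()

# rtl_433 -F json prints one JSON object per decoded transmission.
proc = subprocess.Popen(["rtl_433", "-F", "json"], stdout=subprocess.PIPE, text=True)

for line in proc.stdout:
    try:
        event = json.loads(line)
    except json.JSONDecodeError:
        continue
    if event.get("model") != FRIDGE_SENSOR_MODEL:
        continue

    now = datetime.now()
    recent_openings.append(now)
    # Keep only the last hour of events.
    while recent_openings and now - recent_openings[0] > timedelta(hours=1):
        recent_openings.popleft()

    # Crude anomaly check: flag an unusually high opening frequency.
    if len(recent_openings) > 6:  # arbitrary illustrative threshold
        print(f"{now:%H:%M} unusually many fridge openings: {len(recent_openings)}/hour")
```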
**Home State Imagery**
- Source: Periodic snapshots from strategically placed low-resolution cameras (e.g., repurposed webcams, IoT cameras).
- Data: Images of key areas of the home (e.g., living room, kitchen).
- Processing: Local object detection and spatial analysis (using a small, efficient model like YOLO-NAS or MobileNet). Count objects, identify misplaced items, assess the overall "clutter" level. This data is never sent to the cloud in image form. Images are automatically deleted after analysis.
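A minimal sketch of that local step, using YOLOv8-nano via the ultralytics package as a stand-in for a small detector such as YOLO-NAS or MobileNet, with the snapshot deleted right after analysis:

```python
from collections import Counter
from pathlib import Path

from ultralytics import YOLO  # pip install ultralytics

model = YOLO("yolov8n.pt")  # small, CPU-friendly detector


def analyze_snapshot(image_path: Path) -> Counter:
    """Count detected objects in one snapshot, then delete the image."""
    results = model(str(image_path), verbose=False)
    counts = Counter(
        model.names[int(cls)] for cls in results[0].boxes.cls.tolist()
    )
    image_path.unlink()  # raw images never leave the machine and are not kept
    return counts


if __name__ == "__main__":
    print(analyze_snapshot(Path("kitchen_latest.jpg")))
```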
**Journal/Notes (Obsidian)**
- Source: The user's Obsidian vault (Markdown files).
- Data: Text entries, tags, timestamps.
- Processing: Local natural language processing (sentiment analysis, topic extraction, keyword detection) using small, efficient models (e.g., Sentence Transformers for embeddings, or smaller LLMs).
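For instance, notes could be scanned and embedded entirely on-device; the vault path and tag regex below are assumptions, and the embedding model is one common small choice:

```python
import re
from pathlib import Path

from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

VAULT = Path.home() / "Documents" / "vault"   # assumed vault location
TAG_RE = re.compile(r"(?<!\S)#([\w/-]+)")      # Obsidian-style inline #tags

model = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model

notes = []
for md_file in VAULT.rglob("*.md"):
    text = md_file.read_text(encoding="utf-8")
    notes.append({
        "path": md_file,
        "tags": TAG_RE.findall(text),
        "text": text,
    })

# Embeddings stay on the local machine; they can feed topic extraction or search.
embeddings = model.encode([n["text"] for n in notes])
for note, vector in zip(notes, embeddings):
    note["embedding"] = vector
```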
**User Feedback & Input**
- Source: Web UI, potentially voice input via a local speech-to-text model.
- Data: Explicit feedback on suggestions (thumbs up/down), manual goal setting, overrides.
- Processing: Direct incorporation into the decision-making process.
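One simple way to fold explicit feedback into later decisions is a running per-suggestion score; the data shape and threshold here are purely illustrative:

```python
from collections import defaultdict

# Running score per suggestion type, e.g. "evening-walk", "earlier-bedtime".
scores: defaultdict[str, float] = defaultdict(float)


def record_feedback(suggestion: str, thumbs_up: bool) -> None:
    """Nudge a suggestion's score up or down based on explicit feedback."""
    scores[suggestion] += 1.0 if thumbs_up else -1.0


def should_suggest(suggestion: str, threshold: float = -2.0) -> bool:
    """Stop offering suggestions the user has repeatedly rejected."""
    return scores[suggestion] > threshold
```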
**Weather and Indoor Environment Information**
- Source: A local weather station, or data from stations close to the user. Air quality monitors, environmental monitors.
- Data: Temperature, rain, cloud cover. Indoor humidity, temperature, air quality, radon levels.
- Processing: Aggregation and analysis.
Agentic OODA Loop
The system is based on agentic AI and robotics principles, themselves inspired by the OODA loop from US military doctrine:
- **Observe:** Gathers data from the Data Processing Layer. This is the input to the Orient phase.
- **Orient:**
  - Contextualization: Combines current observations with historical data, user preferences, and external knowledge (e.g., general health guidelines).
  - Pattern Recognition: Identifies patterns and anomalies in the user's behavior and environment. This might involve time-series analysis, clustering, or anomaly detection algorithms.
  - Learning: Updates internal models based on new data and user feedback. This could involve simple rule updates, reinforcement learning, or fine-tuning of pre-trained models. This step leverages local LLMs for tasks like summarizing journal entries, identifying recurring themes, and relating observations to potential underlying causes. Cloud LLMs (e.g., the OpenAI API) might be used judiciously for complex reasoning or knowledge retrieval, but only with anonymized, high-level summaries.
- **Decide:**
  - Goal Evaluation: Assesses progress towards user-defined goals (e.g., improve sleep quality, reduce stress, maintain a tidy home).
  - Planning: Generates potential actions (tips, reminders, schedule adjustments) to help the user achieve their goals. This could involve constraint satisfaction, optimization algorithms, or a rule-based system.
  - Risk Assessment: Evaluates the potential negative consequences of actions (e.g., suggesting a workout when the user is already fatigued).
- **Act:**
  - Action Execution: Delivers interventions to the user via appropriate channels:
    - Web UI: Detailed information, explanations, and options for customization.
    - Wearable Notifications: Brief, timely reminders or prompts.
    - IoT Dashboards (e-Paper): Visual summaries, schedules, or motivational messages.
  - Feedback Monitoring: Tracks user responses to interventions (explicit feedback, changes in behavior).
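To make the loop concrete, here is a minimal Python skeleton of a single iteration; the observation fields, rules, thresholds, and delivery channel are placeholders for illustration, not a finished design:

```python
from dataclasses import dataclass, field


@dataclass
class Observation:
    """A simplified snapshot from the data processing layer."""
    resting_heart_rate: float
    sleep_efficiency: float
    fridge_openings_last_hour: int
    journal_sentiment: float  # -1.0 (negative) .. 1.0 (positive)


@dataclass
class Context:
    """Orient: the observation enriched with history and simple pattern checks."""
    observation: Observation
    anomalies: list[str] = field(default_factory=list)


def orient(obs: Observation, baseline_rhr: float) -> Context:
    ctx = Context(observation=obs)
    if obs.resting_heart_rate > baseline_rhr * 1.15:
        ctx.anomalies.append("elevated resting heart rate")
    if obs.sleep_efficiency < 0.8:
        ctx.anomalies.append("poor sleep efficiency")
    if obs.fridge_openings_last_hour > 6:
        ctx.anomalies.append("frequent snacking")
    return ctx


def decide(ctx: Context) -> list[str]:
    """Rule-based planning with a crude risk check."""
    actions = []
    if "poor sleep efficiency" in ctx.anomalies:
        actions.append("suggest an earlier wind-down routine tonight")
    if "frequent snacking" in ctx.anomalies and ctx.observation.journal_sentiment < 0:
        actions.append("gently flag possible stress eating")
    # Risk assessment: avoid piling on demanding advice when the body is strained.
    if "elevated resting heart rate" in ctx.anomalies:
        actions = [a for a in actions if "workout" not in a]
    return actions


def act(actions: list[str]) -> None:
    for action in actions:
        print(f"[wearable notification] {action}")  # placeholder delivery channel


if __name__ == "__main__":
    obs = Observation(68.0, 0.74, 8, -0.3)
    act(decide(orient(obs, baseline_rhr=55.0)))
```

Keeping each phase a plain function over simple data makes the loop easy to test and lets individual phases be swapped out as the models improve.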