Augmented Awareness: A Good Start

I am quite pleased with the results from the first week of development of Augmented Awareness, my local-first/offline-first toolset for quantified-self experiments (GitHub). Keep in mind that this is a personal project: I can only work on it outside my regular working hours and on weekends.
I made quick progress because I had already experimented with various ideas and knew exactly what to implement. I used AI extensively to speed up coding and to prototype ideas. Since a large part of the system I envision is AI-based, it was very helpful to prototype interactions before fully developing them in code.
Design Decisions
This week I made significant progress in choosing my technology stack:
- Python for quick prototyping and fast software development. If performance-critical code is needed, I'll switch to Rust; porting to Julia is another option.
- No separate solutions for frontend development, because there are Python libraries that provide the support I need. If new requirements come up, I will use TypeScript with Svelte or React.
- C++ or Rust for embedded code, with MicroPython as an option for prototyping.
- Kotlin for mobile development, because my initial focus is primarily on Android.
- SQLite for metadata: provides lightweight, efficient storage for the system's metadata with excellent reliability and performance characteristics.
- Arrow as the internal data format: enables high-performance in-memory analytics with zero-copy reads and efficient data sharing between components.
- Parquet for the data lake: gives me columnar storage efficiency for temporal datasets while maintaining compatibility with my Arrow-based processing (the three storage layers are sketched after this list).
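To make the storage split concrete, here is a minimal sketch of how the three layers could interact. The event schema, file paths, and metadata fields are illustrative assumptions, not the project's actual layout:

```python
import sqlite3
from pathlib import Path

import pyarrow as pa
import pyarrow.parquet as pq

# An Arrow table of tracked events (hypothetical schema).
events = pa.table({
    "start": pa.array([1714980000, 1714983600], type=pa.timestamp("s")),
    "duration_s": pa.array([1800, 3600], type=pa.int32()),
    "tag": pa.array(["deep-work", "email"]),
})

# Persist the dataset as a Parquet file in the data lake...
Path("lake/events").mkdir(parents=True, exist_ok=True)
pq.write_table(events, "lake/events/2025-05-06.parquet")

# ...and record lightweight metadata about it in SQLite.
with sqlite3.connect("metadata.db") as db:
    db.execute("""CREATE TABLE IF NOT EXISTS datasets
                  (path TEXT PRIMARY KEY, rows INTEGER, source TEXT)""")
    db.execute("INSERT OR REPLACE INTO datasets VALUES (?, ?, ?)",
               ("lake/events/2025-05-06.parquet", events.num_rows, "obsidian"))
```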
Development will initially focus on offline-first on a single machine, and then shift to local-first, aiming for easy deployment on homelabs using Docker when I introduce multi-device support.
My design thinking has advanced through:
- OODA loop implementation: structured the codebase around the Observe-Orient-Decide-Act (OODA) cycle, so data flows from observation through interpretation to decisions and actions (sketched below).
- TAQS model adoption: defined the Temporally Aware Quantified Self (TAQS) model, inspired by the sequence of tenses in Latin (consecutio temporum), which provides comprehensive handling of (sketched below):
  - Temporal modalities (past, present, future, hypothetical)
  - Event hierarchies and concept relationships
  - Point and span events with duration
  - Data provenance through source tracking
- Concept hierarchy discovery: both top-down and bottom-up approaches are yielding interesting insights into the data structures. The system needs to support concepts expressed both in symbolic form, as identifiers and natural language, and in sub-symbolic form, as probability distributions or tensor embeddings (sketched below).
- Round-trip temporal encoding: a key technical achievement this week was figuring out round-trip temporal encoding, allowing seamless conversion between human-readable temporal expressions (like "around 6am" or "between 9am and 11am") and precise data structures (sketched below). This enables both intuitive user interactions and rigorous temporal calculations, while maintaining full data provenance through the TAQS model's temporal modalities. The system will handle complex temporal relationships while preserving the semantic meaning of time expressions across the entire OODA loop.
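For the OODA structure, here is a minimal sketch of how one pass of the cycle might be expressed in code; the class and method names are my own assumptions, not the project's actual API:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class OODALoop:
    # One pass of the Observe-Orient-Decide-Act cycle (illustrative only).
    observe: Callable[[], list[Any]]    # gather raw events from the sources
    orient: Callable[[list[Any]], Any]  # interpret them against the TAQS model
    decide: Callable[[Any], Any]        # choose an intervention, tip, or answer
    act: Callable[[Any], None]          # surface the result to the user

    def run_once(self) -> None:
        observations = self.observe()
        situation = self.orient(observations)
        decision = self.decide(situation)
        self.act(decision)
```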
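For the TAQS model, the four bullet points above might map onto a single event record along these lines; the field names and enum values are hypothetical, chosen only to show how modality, the point/span distinction, and provenance can coexist:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum

class Modality(Enum):
    PAST = "past"                   # it happened
    PRESENT = "present"             # it is happening
    FUTURE = "future"               # it is planned
    HYPOTHETICAL = "hypothetical"   # it might happen (what-if scenarios)

@dataclass
class Event:
    concept: str                        # links into the concept hierarchy
    modality: Modality
    start: datetime
    duration: timedelta | None = None   # None for point events, set for spans
    source: str = "unknown"             # provenance: which tracker/page produced it

# A span event in the past, traced back to the journal page that recorded it:
nap = Event("sleep", Modality.PAST, datetime(2025, 5, 6, 14, 0),
            duration=timedelta(minutes=30),
            source="obsidian:journal/2025-05-06")
```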
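For concept hierarchy discovery, each node plausibly needs to carry both representations at once; this Concept structure is purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str                             # symbolic: identifier or natural language
    embedding: list[float] | None = None  # sub-symbolic: tensor embedding
    children: list["Concept"] = field(default_factory=list)

# Top-down: start from a broad concept and refine it by hand.
work = Concept("work", children=[Concept("deep-work"), Concept("email")])
# Bottom-up: cluster events by embedding similarity, then name the clusters.
```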
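Finally, for round-trip temporal encoding, the idea in miniature: parse a fuzzy expression into a precise structure, then render it back unchanged. The grammar below is a toy subset covering only "around Xam/pm", and the Approx structure is an assumption about how fuzziness might be represented:

```python
import re
from dataclasses import dataclass
from datetime import time, timedelta

@dataclass
class Approx:
    # A fuzzy instant: a centre point plus an uncertainty window.
    center: time
    tolerance: timedelta

    def render(self) -> str:
        # Back to the human-readable form, closing the round trip.
        h = self.center.hour
        return f"around {h % 12 or 12}{'am' if h < 12 else 'pm'}"

def parse(expr: str) -> Approx:
    # Parse a tiny subset of expressions like "around 6am".
    m = re.fullmatch(r"around (\d{1,2})\s*(am|pm)", expr.strip().lower())
    if not m:
        raise ValueError(f"unsupported expression: {expr!r}")
    hour = int(m.group(1)) % 12 + (12 if m.group(2) == "pm" else 0)
    return Approx(center=time(hour), tolerance=timedelta(minutes=15))

assert parse("around 6am").render() == "around 6am"
```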
Implemented Features
- obsidian tasks: collects and displays tasks from dated (journal) pages in Obsidian.
- obsidian info: shows detailed information about an Obsidian page: tracked events, tasks, tags, and markdown content.
- obsidian busy: shows the total time spent in various activities by tag (sketched below).
- obsidian tips: uses local LLMs to provide tips for a more wholesome life and answer questions, using the schedule as context.
- obsidian web: a web interface presenting the same information as obsidian busy and obsidian tasks.
- ActivityWatch: library support to read from ActivityWatch and export afk/not-afk status, web history, and the current window as Arrow tables (sketched below).
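For obsidian busy, the per-tag totals could be as simple as a group-by over an Arrow table of tracked events; the schema is illustrative, not the tool's actual internals:

```python
import pyarrow as pa

# Tracked events as an Arrow table (hypothetical schema).
events = pa.table({
    "tag": pa.array(["deep-work", "email", "deep-work"]),
    "duration_s": pa.array([1800, 600, 3600], type=pa.int64()),
})

# Total time spent per tag, computed entirely in memory.
busy = events.group_by("tag").aggregate([("duration_s", "sum")])
print(busy.to_pydict())
# e.g. {'tag': ['deep-work', 'email'], 'duration_s_sum': [5400, 600]}
```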
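The ActivityWatch bridge might look something like this; it assumes the aw-client package and the get_buckets/get_events calls as I understand them, and the bucket matching and output schema are my own simplifications:

```python
import pyarrow as pa
from aw_client import ActivityWatchClient

client = ActivityWatchClient("augmented-awareness")

# Bucket ids look like "aw-watcher-afk_<hostname>"; pick the afk one.
afk_bucket = next(b for b in client.get_buckets()
                  if b.startswith("aw-watcher-afk"))
events = client.get_events(afk_bucket, limit=1000)

# Flatten the events into an Arrow table for downstream processing.
table = pa.table({
    "timestamp": pa.array([e.timestamp for e in events]),
    "duration_s": pa.array([e.duration.total_seconds() for e in events]),
    "status": pa.array([e.data.get("status") for e in events]),  # "afk"/"not-afk"
})
```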