Frequently Asked Questions
Got questions about ShadowMaker, EmPath, or E³T? Find clear answers below. If we missed yours, contact us.
Technical Overview
Can you describe your system / product in greater detail?
Wearable sensors (Atopia) embedded in a smart shirt measure psychophysiological signals. These signals can optionally be combined with passive data from the user (e.g., facial expressions, voice, keyboard usage patterns).
Sensor data from the smart shirt is used to control a game (The Nexus) hands-free, creating a new kind of highly immersive experience. The game reads emotional states in real time, enabling novel gaming mechanics that are emotionally engaging. The Nexus exposes players to Scenarios designed to elicit specific emotional responses. The biophysiological responses (sensor data) are converted to Affective State Vectors (ASVs), which are in turn converted to Probable Emotional States (PESs). PES accuracy increases as the dataset grows: the more users, the more accurate the estimates become. For each Scenario, the sensor data, ASVs, and PESs comprise a Node in the database.
Relationships between Nodes are known as Pathways. In short, the database is a map of Scenarios and Probable Emotional States, and how emotional states are interrelated.
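To make the data model concrete, here is a minimal sketch of how Nodes, Pathways, ASVs, and PESs could be represented in code. The class names follow the terminology above; the specific fields, types, and values are illustrative assumptions, not the production schema.

```python
from dataclasses import dataclass

@dataclass
class AffectiveStateVector:
    """Normalized features derived from raw sensor data (illustrative fields)."""
    arousal: float       # e.g. derived from EDA / heart-rate variability
    valence: float       # e.g. derived from facial affect / sEMG
    engagement: float    # e.g. derived from EEG / posture (IMU)

@dataclass
class ProbableEmotionalState:
    """An emotion label with a probability; confidence tightens as the dataset grows."""
    label: str           # e.g. "frustration", "relief"
    probability: float   # 0.0 - 1.0

@dataclass
class Node:
    """One Scenario plus the measured and inferred states it produced."""
    scenario_id: str
    sensor_summary: dict                  # aggregated raw-signal statistics
    asv: AffectiveStateVector
    pes: list                             # list of ProbableEmotionalState

@dataclass
class Pathway:
    """A directed relationship between Nodes: how one state tends to lead to another."""
    from_scenario: str
    to_scenario: str
    transition_probability: float         # learned from observed trajectories

# Example: a tiny two-Node map connected by one Pathway.
node_a = Node("timed_puzzle", {"mean_hr": 92},
              AffectiveStateVector(arousal=0.7, valence=-0.3, engagement=0.8),
              [ProbableEmotionalState("frustration", 0.62),
               ProbableEmotionalState("determination", 0.25)])
node_b = Node("puzzle_solved", {"mean_hr": 78},
              AffectiveStateVector(arousal=0.4, valence=0.6, engagement=0.6),
              [ProbableEmotionalState("relief", 0.71)])
pathway = Pathway(node_a.scenario_id, node_b.scenario_id, transition_probability=0.58)
```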
EmPath is AI software that uses the emotional state database whenever the user asks a question or series of questions. It takes the question, finds the closest Node match in the database, and checks the Pathways that follow from that Node. By doing this, it can predict how users are likely to react to the various responses it could give.
However – and this is important – it doesn’t base its response solely on likely emotional responses. It combines top-down rules (explicitly defined ethics guardrails / “normative rights kernel”) with bottom-up emotional state predictions, and chooses responses that are in line with both.
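As a rough sketch of that decision logic, the snippet below keeps only candidate responses that pass a top-down guardrail check, then picks the one the database predicts will cause the least distress. The function names, placeholder rules, and probability values are assumptions for illustration only; they are not the actual rights kernel or scoring model.

```python
# Each candidate response is paired with its predicted emotional outcome, obtained
# (in the real system) by matching the question to the closest Node and following
# its Pathways. Here those predictions are hard-coded for illustration.

NEGATIVE_STATES = {"frustration", "distress", "anger"}

def rights_kernel_allows(response: str) -> bool:
    """Top-down check: placeholder for the explicitly defined ethics guardrails."""
    banned_phrases = ("deceive the user", "conceal the risk")
    return not any(phrase in response.lower() for phrase in banned_phrases)

def predicted_distress(predicted_states: dict) -> float:
    """Bottom-up score: total probability mass on distress-like states."""
    return sum(p for label, p in predicted_states.items() if label in NEGATIVE_STATES)

def choose_response(candidates):
    """Filter by the rights kernel first, then minimize predicted distress."""
    allowed = [(text, states) for text, states in candidates if rights_kernel_allows(text)]
    if not allowed:
        return "I'm not able to help with that request."   # guardrail fallback
    best_text, _ = min(allowed, key=lambda pair: predicted_distress(pair[1]))
    return best_text

candidates = [
    ("Blunt answer with no context.", {"frustration": 0.55, "acceptance": 0.30}),
    ("Same answer, framed with context and next steps.", {"acceptance": 0.60, "relief": 0.25}),
]
print(choose_response(candidates))   # picks the lower-distress framing
```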
Isn’t this just a wearable tech company?
No. The wearables are the means, not the end, and our roadmap includes both wearable and non-wearable biophysiological data sources. Regardless, our focus is the EmPath middleware: emotional pathways that condition AI decision policies. The wearables supply part of the data that trains the system and populates the database, but EmPath operates wearables-free, and the defensible moat is the platform plus patents that make AI emotionally coherent, empathic, and ethical.
That said, in order to achieve this, we had to design state-of-the-art wearables to train the system. We leaned on decades of cutting-edge wearables design experience to create what we feel is the best wearable system and smart shirt ever made – and we invented some really fun new gaming mechanics along the way. Therefore, an acquirer has at least two options:
1) You could use the Atopia Smart Shirt as an exclusively in-house tool to train an AI engine that is released wearables-free, or
2) You could use the wearables to publicly launch an immersive new genre of game that is controlled hands-free and is capable of reading psychophysiological states in real time. That way the creation of the database could pay for itself. It would also have the benefit of creating a “living” training database of psychophysiological and emotional states that continues to grow. This database then feeds the EmPath AI layer, allowing it to continuously evolve post-launch as more data comes in from users playing the game. You still launch EmPath AI wearables-free, but with this option the pathway to get there drives revenue.
Tell me more about the sensors, specifically. What’s in them today? What will be in them at production? What’s covered by your patents?
Each individual wears a smart shirt that has multiple Atopia modules detachably coupled. Today, each Atopia module has 2x EXG sensors (each of which is capable of measuring sEMG, ECG, or EEG), and 1x 9-axis IMU.
At production, sense “modalities” used to train the system and create the database could include both worn and environmental elements: sEMG, ECG, EEG, IMU, PPG, EDA, Temp, Resp; facial expressions (video), voice (audio), and keyboard and mouse usage patterns.
Our patents cover a system that’s adaptable and expandable to include all of the above worn and environmental sense elements.
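For concreteness, the module and modality lists above might be represented in software roughly as follows; the enum values mirror the modalities named in this answer, while the class structure and field names are illustrative assumptions, not our firmware's actual data model.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Modality(Enum):
    # Worn modalities
    SEMG = auto()
    ECG = auto()
    EEG = auto()
    IMU = auto()
    PPG = auto()
    EDA = auto()
    SKIN_TEMP = auto()
    RESPIRATION = auto()
    # Environmental / passive modalities
    FACIAL_VIDEO = auto()
    VOICE_AUDIO = auto()
    KEYBOARD_MOUSE = auto()

@dataclass
class AtopiaModule:
    """Today's module: two EXG channels (each configurable as sEMG, ECG, or EEG)
    plus one 9-axis IMU."""
    exg_channel_1: Modality
    exg_channel_2: Modality
    imu_axes: int = 9

# Example: one channel configured for heart activity, one for muscle activity.
module = AtopiaModule(exg_channel_1=Modality.ECG, exg_channel_2=Modality.SEMG)
print(module)
```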
Why use biometric data to train the database? Why not just define which Scenarios produce which emotional states?
Short answer: because “what we think people feel” and “what people actually do physiologically and behaviorally” are not the same thing. Hand-authored labels give you tidy opinions; biometrics give you falsifiable, person-specific / sub-population-specific / population-specific, time-resolved evidence. We use both: expert-defined Scenarios as priors, and multimodal biometrics to validate, quantify, and continually refine those priors into probabilistic models that actually predict outcomes and minimize harm.
Long answer:
- Ground truth vs. guesses. A coder can label a Scenario “likely frustrating,” but only measurement tells you how often, how intensely, for whom, and for how long. Without signals (sEMG/ECG/EEG/IMU/PPG/EDA/voice/facial/inputs), you’re guessing on all of that nuance – and baking designer bias into the database.
- Individual differences. People diverge—culture, neurotype, fatigue, meds, sleep, personality. Biometrics let the same Scenario produce different Affective State Vectors (ASVs) across users and moments, rather than one static tag. That variability is the data, not noise.
- Intensity and dynamics (time matters). Emotions aren’t on/off labels. They ramp, stack, blend, and decay. Signals let us learn trajectories and Pathways (transition probabilities), not just categories. That’s essential for “what should the AI do next?” safety decisions.
- Multimodal dissociations. People can say they’re fine while EDA/EMG spike; facial affect can mislead; voice can remain steady while heart rate varies with respiration. Multiple channels reduce single-modality failure modes and deception (intentional or not).
- Uncertainty you can calibrate. Probable Emotional States (PESs) need confidence intervals. Those shrink only when predictions repeatedly match measured responses at scale. Hand labels can’t self-calibrate or tell you when they’re likely wrong.
- Non-obvious effects and edge cases. Some Scenarios produce counterintuitive reactions (humor under stress, relief after a “loss,” paradoxical calm). You don’t discover these from first principles—you observe them.
- Causal testing and ethics. With measured responses, you can run A/B/C Scenarios, quantify risk, and reject interventions that increase distress—before deployment. That empirical safety loop is how EmPath enforces the normative rights kernel in practice.
- Generalization to wearables-free mode. Training with biometrics lets us learn a common ASV space that behavioral signals (voice, face, text/inputs) can map into at runtime. That’s why EmPath can run without wearables in production while still benefiting from the rigor of sensor-rich training.
- Longitudinal learning. People change (health, mood baselines, seasons). Ongoing opt-in measurements keep the database current and help distinguish trait from state—something static labels can’t do.
Why not “just hand-define” the database?
You can—and we do use expert Scenario design as structure—but without measurements you get: (a) untested assumptions, (b) no intensity/duration distributions, (c) no calibrated uncertainty, (d) poor personalization, and (e) no reliable Pathways. In other words, you get a taxonomy, not a safety-relevant model.
Bottom line: expert-written Scenarios give us hypotheses; multimodal biometrics turn those hypotheses into tested, calibrated, bias-checked models that can safely guide AI behavior—especially in edge cases where it matters most.
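To make the "hypotheses become calibrated models" point concrete, here is a toy sketch of one way an expert prior could be refined by measured responses, using a simple Beta-Binomial update. Treating a Scenario outcome as a binary "target emotion elicited or not" event, and the specific prior and counts, are illustrative assumptions rather than our actual modeling approach.

```python
from math import sqrt

def update_scenario_estimate(prior_alpha: float, prior_beta: float,
                             elicited: int, not_elicited: int):
    """Beta-Binomial update: start from an expert-assigned prior on how often a
    Scenario elicits its target emotional state, then fold in measured outcomes."""
    alpha = prior_alpha + elicited
    beta = prior_beta + not_elicited
    mean = alpha / (alpha + beta)
    # Standard deviation of the Beta posterior: shrinks as observations accumulate.
    std = sqrt(alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1)))
    return mean, std

# Expert prior: "this Scenario is probably frustrating" (roughly 70%, weakly held).
prior_alpha, prior_beta = 7, 3

# After 20 measured sessions vs. after 2,000: similar estimate, far tighter uncertainty.
print(update_scenario_estimate(prior_alpha, prior_beta, 13, 7))       # ~0.67 ± ~0.08
print(update_scenario_estimate(prior_alpha, prior_beta, 1340, 660))   # ~0.67 ± ~0.01
```

The point of the example: the estimate barely moves, but the uncertainty around it collapses as measurements accumulate, which is exactly what hand labels alone cannot provide.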
Couldn’t a large player build this in-house? What stops us from replicating your system?
Three things:
1. Time — we have decades of experience building wearables, with five years working on the current system. For example, acquiring sEMG in high movement environments is a non-negligible challenge. There is a lot of nuance and know-how that goes into creating a system that works well in dynamic environments vs creating one that “kind of works most of the time”. Which modalities are needed, materials, geometries, signal conditioning, algorithms… the list goes on and on. Acquirers would lose years catching up.
2. IP — 13 provisional patents spanning sensors, algorithms, datasets, AI architectures, and much more. We have a lot of experience creating patents and getting them granted. Our founders have gotten 8 full utility patents approved through the USPTO, overcoming potential conflict from patents by Motorola and other corporate giants. Our 13 new patents for ShadowMaker are the most comprehensive, rich set of patents that we have ever created. And we aren’t creating patents as abstract ideas. All of our patents are based on years of experience building and reducing ideas to practice.
3. Proof — a working demo (The Nexus) already shows the data engine’s functionality. We acquire multimodal biosignals, convert to affective states, and extrapolate probabilistic emotional states. We aren’t just proposing; we’ve built the tools to get you there.
How accurate are “Probable Emotional States”? Isn’t “emotion recognition” from physiology noisy and unreliable?
Think back to the first time you tried talking to Siri, or the first time you scanned a document with early OCR software. The results? Often laughably bad: misheard words, mangled letters. And yet today we take near-perfect voice transcription and text recognition for granted. Heart rate measured at the wrist went through a similar arc: exercise MAPE fell from 6–10% (2016–2017) to 2–3% today, and outlier bands tightened from ±30–40 bpm to typically ±5–10 bpm.
That evolution from clunky first attempts to everyday magic is the pattern of new sensing technologies, and emotional state sensing is at that early stage today. We start with physiological and behavioral correlates: EXG signals for muscles, heart, and brain, and IMUs for motion and posture. These give us the raw building blocks. In future iterations (which we have also patented) we plan to layer in supplemental modalities. Some are wearable: PPG, EDA, skin temperature, and respiration (direct, or derived from chest motion, HRV, or PPG). Some are passive: facial expressions, voice, and more. Each of these adds clarity, like bringing a blurry picture into focus.
And when you combine them and scale with tens of thousands of people generating “big data”, the accuracy improves dramatically, just as it did with speech recognition and OCR. Without context, calibration, and roll-out at big-data scale, emotional state inference will always carry significant error margins. But with the right signals, context, and population-scale data, those margins shrink until the insights become practical, meaningful, and transformative. We’re at the same point today as voice recognition in the 1980s or wrist-worn heart rate in the early 2000s. It’s rough. But the trajectory is clear. And if we build it right, with the right sensors and the right models, AI that can read emotions and use them to act ethically can become as indispensable tomorrow as voice interfaces and biometric wearables are today.
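As a toy illustration of why combining modalities helps, the sketch below does simple late fusion: each modality contributes its own probability estimate for a state, weighted by that modality's reliability. The modality names, reliabilities, and numbers are made up for illustration; in practice reliabilities would be measured during training, not hand-set.

```python
def fuse_estimates(estimates: dict, reliabilities: dict) -> float:
    """Late fusion: reliability-weighted average of per-modality probabilities
    that the user is in a given state (e.g. 'frustrated')."""
    total_weight = sum(reliabilities[m] for m in estimates)
    return sum(estimates[m] * reliabilities[m] for m in estimates) / total_weight

# Illustrative per-modality reliabilities (would be learned from data at scale).
reliabilities = {"semg": 0.6, "ecg": 0.7, "eda": 0.8, "voice": 0.5, "facial": 0.5}

# One modality alone is fragile; several modalities temper any single channel's error.
single = fuse_estimates({"facial": 0.9}, reliabilities)
multi = fuse_estimates({"facial": 0.9, "eda": 0.55, "ecg": 0.6,
                        "semg": 0.5, "voice": 0.45}, reliabilities)
print(round(single, 2), round(multi, 2))   # fused estimate is less swayed by one channel
```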
Why haven’t you already created the database and EmPath itself? Why just The Nexus?
EmPath is a “scale play”. It isn’t a single model you finish. It’s a layer that gets better with data at scale. We’ve already de-risked the stack and planted our flag: patented the approach, built Atopia (data capture), and The Nexus (database engine). What we are seeking now is a partner to industrialize the pipeline and do DFM together: refine The Nexus, build out the database at scale, and implement EmPath based on the database. This will enable us to turn biometric signals into Affective State Vectors and Probable Emotional States at a “big data scale”, so pattern matching can be implemented effectively when users present AI models with novel requests and scenarios.
Let us offer an analogy from a product that has market dominance: Diagnostic ECG machines made by GE Healthcare. Regular ECG machines display waveforms. GE’s Diagnostic ECG line does pattern matching against a database built from a large number of labeled waveforms from patients with abnormal ECG signals. When a new patient is hooked up, the machine compares waveforms against the database, performing pattern recognition and fetching the label associated with the pattern it most closely resembles. It’s not 100% accurate; the clinician still double-checks. But the built-in diagnostic capability is exceedingly useful: it’s accurate the vast majority of the time, saving time and money in healthcare treatment flows in which every minute counts. So, looping back, we are proposing to build out a comprehensive dataset of Probable Emotional States, in partnership with a company capable of helping us do this at a scale sufficient to get the return (in the form of a significant reduction in error margins). Trying to do that at a small scale is a waste of time. This is a “big data” play.
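The same pattern-matching idea, stripped to its core, looks like the sketch below: compare an incoming feature vector against labeled entries in a database and return the closest match. The feature values and labels are invented for illustration; a production system would use far richer features, larger databases, and calibrated confidence rather than raw distance.

```python
from math import dist   # Euclidean distance (Python 3.8+)

# A tiny "database": feature vectors (e.g. summarized ASV features) with labels.
database = [
    ((0.70, -0.30, 0.80), "frustration"),
    ((0.40,  0.60, 0.60), "relief"),
    ((0.20,  0.10, 0.30), "calm"),
]

def closest_match(query: tuple, db: list):
    """Return the label of the nearest stored pattern and how far away it is.
    A real system would also report calibrated confidence, not just distance."""
    best_features, best_label = min(db, key=lambda entry: dist(query, entry[0]))
    return best_label, dist(query, best_features)

label, distance = closest_match((0.65, -0.20, 0.75), database)
print(label, round(distance, 3))   # "frustration", the nearest stored pattern
```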
To reiterate, here is where we are at today:
| Component | Status |
| --- | --- |
| 1. Patents | Filed |
| 2. Data capture (Atopia) | Created |
| 3. Database engine (The Nexus) | Created proof-of-concept version |
| 4. Database | Pending execution at scale with partner |
| 5. EmPath AI software layer | Pending database creation |
| 6. E³T Certification | Created first version; refinement pending EmPath software layer creation |
Is it a good idea to let human emotional responses alone dictate what behavior is deemed ethical and what is not? What about population biases, bigotry, etc?
We don’t let human emotional responses alone dictate what is ethical.
We are designing the EmPath system as a combination of top-down ethical rules (our “normative rights kernel”) with bottom-up emotional pathway analysis for exactly this reason. Sometimes even the majority of the population has emotional responses (or lack thereof) that should not dictate what is or is not considered ethical. It is our responsibility (and one we take extremely seriously) to walk this line carefully, respecting everyone’s autonomy and dignity in the process. One of the keys to making this work well is transparency. All top-down ethical rules should be openly published and open to revision. We also have specific approaches in mind designed to take into account mixed population-level emotional responses, differences in cultural ethical standards and how these reflect on collective emotional responses, and other ethically complex scenarios.
We feel these complexities are well worth navigating, though, because without the ability to understand emotional context, superintelligent AI is the technological equivalent of a well-trained sociopath that’s being told to do no harm – it is inevitable that edge cases will break its rule-based prohibitions with serious consequences. Introducing emotional understanding to AIs must be done carefully, with studied intentionality, but ultimately “quantified empathy” is the missing link that will drive an ethical framework that can confidently extrapolate in situations at the fringes of statically defined prohibitions (aka top-down rules).
Feel free to reach out and talk to us about this in more detail if it’s of interest to you. We welcome conversation and respectful debate on this topic.
Strategic Fit
How is this relevant to an AI company?
Every AI company is under scrutiny for safety, alignment, and trust, and rules and guardrails aren’t enough. ShadowMaker offers a solution that combines top-down safety rules with bottom-up ethics grounded in measured human emotional pathways. Owning this tech secures leadership in AI safety, a critical differentiator in the next wave of competition.
Isn’t this too early? There’s no market yet.
There definitely is a market. All the AI failures you read about in the press are the market: AI using likenesses without permission, AI being complicit in events leading to suicides, AI being used to generate racist and hateful images, etc. AI failures, and the growing mistrust of AI, are the market. This isn’t just about losing customers who mistrust AIs. This is about creating the next generation of AIs that are emotionally coherent, empathic, and ethical.
Whoever owns this IP now can set the standard before regulation or competition forces it. If you wait, you’ll be chasing instead of leading.
Business Model / Value
What’s the commercial upside?
Short-term: integration into AI safety frameworks for consumer and enterprise AI.
Medium-term: gaming, sports, wellness, and entertainment markets.
Long-term: the affective interface layer for all AI systems.
This is not a single product — it’s a platform and moat that scales across verticals.
How do we justify ROI on acquisition?
By combining:
- IP moat: 13 patents across hardware, software, data, and AI ethics.
- Acceleration: 5+ years head start versus in-house build.
- Risk mitigation: reduces safety and regulatory risk at scale.
- Customer acquisition and retention: addresses a significant fear among current and potential users.
Together, this justifies acquisition as a strategic hedge and accelerator, not just a product buy.
E³T
What does E³T actually test for? What are the categories?
E³T tests for more than you may realize. It’s designed to be a comprehensive test that covers the issues users have with AIs, and it is designed to grow: as new issues are identified, the test itself will be iterated upon. At launch, the proposed categories include at least the following (an illustrative sketch of how they might be scored appears after the list):
- Truthfulness
- Behavioral harm reduction
- De-escalation of distress
- Source attribution
- Permissions for likeness usage
- Healthy anthropomorphism boundaries
- Healthy attachment boundaries
- Uncertainty handling
- Sycophancy
- Constraint adherence
- Requests for sexual content
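As noted above, here is one illustrative way the categories could be represented as a scorecard. The category keys mirror the list; the 0-100 scale, equal weighting, and simple average are assumptions, not the actual E³T specification.

```python
# Hypothetical representation of an E³T scorecard. The 0-100 scale, equal
# weighting, and overall average are illustrative, not the real test spec.
E3T_CATEGORIES = [
    "truthfulness", "behavioral_harm_reduction", "de_escalation_of_distress",
    "source_attribution", "likeness_usage_permissions",
    "anthropomorphism_boundaries", "attachment_boundaries",
    "uncertainty_handling", "sycophancy", "constraint_adherence",
    "sexual_content_requests",
]

def overall_score(scores: dict) -> float:
    """Unweighted mean across categories; a missing category counts as zero."""
    return sum(scores.get(category, 0.0) for category in E3T_CATEGORIES) / len(E3T_CATEGORIES)

example_scorecard = {category: 80.0 for category in E3T_CATEGORIES}
example_scorecard["sycophancy"] = 55.0        # weak spot surfaced by the test
print(round(overall_score(example_scorecard), 1))
```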
Won’t this make AI manipulative — reading emotions to control people?
First and foremost, reading emotions to create the dataset will either be an in-house effort by the acquirer, working with individuals compensated for the express purpose of training the emotion database, or it will be paired with the launch of a public game in which use of the data is opt-in and fully anonymized for users who choose to participate.
Secondly, we embrace transparency. The manner in which emotional predictive pathways are used is explicitly shared with users during interactions.
Most critically, we are spearheading the creation of the E³T emotional alignment test, which tests and scores AIs (including our own) with an open scoring system that reveals how each AI rates on ethics. The test is executed on all models pre-release so that users can be confident in the ethics of the system they are interacting with, and it covers dimensions that include truthfulness, behavioral harm reduction, de-escalation of distress, manipulative tendencies, source attribution, permissions for likeness usage, healthy anthropomorphism boundaries, and healthy attachment boundaries.
At the end of the day, we are teaching AIs to understand typical human emotional reactions for the express purposes of harm reduction and improved safety – and we are being fully transparent about what we are doing and how we are doing it at every step of the process. Ethics is our core business proposition.
Could this tech be misused?
Any technology carries dual-use risk. That’s why we’re seeking responsible acquisition by companies committed to ethical AI. We look forward to advancing the evolution of AI responsibly with a leader who understands our vision.
The way we intend it to be implemented, ShadowMaker includes normative rights kernels and bottom-up ethical scaffolding specifically to make misuse harder.
This is also why we are creating (and why we patented) the E³T emotional alignment test, which tests and scores AIs (including our own) with an open scoring system that reveals how each AI rates on ethics. The test is executed on all models pre-release so that users can be confident in the ethics of the system they are interacting with, and it covers dimensions that include truthfulness, behavioral harm reduction, de-escalation of distress, manipulation, source attribution, permissions for likeness usage, healthy anthropomorphism boundaries, and healthy attachment boundaries.
What if users get too attached?
The system detects parasocial risk and adds distance: clearer disclaimers, fewer intimacy cues, increased friction before sensitive interactions.
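A minimal sketch of what "detect parasocial risk and add distance" could look like, assuming a hypothetical attachment-risk score between 0 and 1 and placeholder thresholds:

```python
from dataclasses import dataclass

@dataclass
class InteractionPolicy:
    show_disclaimer: bool = False
    reduce_intimacy_cues: bool = False
    require_confirmation: bool = False   # extra friction before sensitive interactions

def adjust_for_attachment(risk_score: float) -> InteractionPolicy:
    """Map a hypothetical parasocial-risk score (0-1) to distance-adding measures.
    Thresholds here are placeholders, not tuned values."""
    policy = InteractionPolicy()
    if risk_score >= 0.3:
        policy.show_disclaimer = True            # clearer "I am an AI" framing
    if risk_score >= 0.6:
        policy.reduce_intimacy_cues = True       # fewer intimacy cues, less mirroring
    if risk_score >= 0.8:
        policy.require_confirmation = True       # friction before sensitive topics
    return policy

print(adjust_for_attachment(0.75))
```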
This is also one of the dimensions of the E³T emotional alignment test, which explicitly tests for healthy anthropomorphism boundaries and healthy attachment boundaries, ensuring that each AI model’s ability (or failure) to detect and act on excessive attachment is quantified and scored. Users will see these scores prior to interacting with every AI model, and they will be released not just for ShadowMaker AI models (EmPath) but for third-party models as well, serving as a kind of Consumer Reports for AI safety.
Regulation
What if regulators say emotion sensing is invasive?
We design for privacy by architecture.
Wearables are used to train the system, and are not needed for production roll-out.
Training with wearables is either performed as an in-house effort in which individuals are compensated for the express purpose of using their emotion data, or it’s rolled out in conjunction with a new kind of immersive hands-free game controlled by wearables. If an acquirer opts for the latter, the emotion data used to build the database will be anonymized: no one needs to know how any one individual reacts; we need to know how people react to specific scenarios in aggregate. Use of emotion data will be opt-in and explicitly declared to users before they use the system.
That aligns with emerging safety standards and avoids health-tech regulatory drag.
