Frequently Asked Questions
Got questions about ShadowMaker, EmPath, or E³T? Find clear answers below. If we missed yours, contact us.
Technical Overview: EmPath, Atopia, PES
Can you describe EmPath AI software in greater detail?
Wearable sensors (Atopia) embedded in a smart shirt measure psychophysiological signals. These can optionally be combined with passive data from the user (e.g. facial expressions, voice, keyboard usage patterns).
Sensor data from the smart shirt is collected during Scenarios designed to elicit emotional responses. The biophysiological responses (sensor data) are converted into Affective State Vectors (ASVs), which are in turn converted into Probable Emotional States (PESs). PES accuracy increases as the dataset grows: the more users, the more accurate the estimates become. For each Scenario, the sensor data, ASVs, and PESs together comprise a Node in the database.
Relationships between Nodes are known as Pathways. In short, the database is a map of Scenarios and Probable Emotional States, and how emotional states are interrelated.
EmPath is AI software that will consult the emotional state database whenever the user asks a question or series of questions. It will take the question, look for the closest matching Node in the database, and check the Pathways that follow from that Node. By doing this, it can predict how users are likely to react to the various responses it could give.
However – and this is important – it won’t base its response solely on likely emotional responses. It will combine top-down rules (explicitly defined ethics guardrails / “normative rights kernel”) with bottom-up emotional state predictions, and will choose responses that are in line with both.
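To make the pipeline above concrete, here is a minimal sketch in Python. The names (Node, Pathway, select_response), the toy distress heuristic, and the weighting are illustrative assumptions for explanation only, not the actual EmPath implementation.

```python
from dataclasses import dataclass

# A minimal, hypothetical sketch of the structures described above. The names,
# the toy distress heuristic, and the weighting are illustrative assumptions,
# not the actual EmPath implementation.

@dataclass
class Node:
    scenario: str
    asv: list[float]            # Affective State Vector derived from sensor data
    pes: dict[str, float]       # Probable Emotional State: emotion -> probability

@dataclass
class Pathway:
    source: str                 # scenario of the originating Node
    target: str                 # scenario of the Node it tends to lead to
    probability: float          # how likely the transition is

def distress(pes: dict[str, float]) -> float:
    """Toy stand-in for predicted harm: sum of negative-emotion probabilities."""
    return sum(pes.get(e, 0.0) for e in ("frustration", "anxiety", "despair"))

def select_response(candidates: dict[str, Node],
                    nodes: dict[str, Node],
                    pathways: list[Pathway],
                    passes_guardrails) -> str:
    """Combine top-down rules with bottom-up predictions: drop candidates the
    rights kernel forbids, then prefer the one whose matched Node (plus the
    Nodes its Pathways lead to) is predicted to be least distressing."""
    def expected_distress(node: Node) -> float:
        score = distress(node.pes)
        for p in pathways:
            if p.source == node.scenario and p.target in nodes:
                score += p.probability * distress(nodes[p.target].pes)
        return score

    allowed = {text: node for text, node in candidates.items()
               if passes_guardrails(text)}
    if not allowed:
        return "I'm not able to respond to that."
    return min(allowed, key=lambda text: expected_distress(allowed[text]))
```

The point the sketch tries to capture is the ordering: the top-down guardrails filter candidate responses first, and the bottom-up emotional predictions only rank what remains.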
To what extent is EmPath “wearable tech”?
In this case, the wearables are the means, not the end. EmPath is being designed to use both wearable and non-wearable biophysiological data sources, and at its core it is middleware: emotional pathways that condition AI decision policies. The wearables supply part of the data that trains the system and populates the database, but EmPath itself operates wearables-free.
That said, in order to achieve this, we had to design state-of-the-art wearables to train the system. We leaned on decades of cutting-edge wearables design experience to create what we feel is the best wearable system and smart shirt ever made – and we invented some really fun new gaming mechanics along the way.
Tell me more about Atopia sensors. What’s in them today? What will be in them at production? What’s covered by your patents?
Each individual wears a smart shirt with multiple detachably coupled Atopia modules. Today, each Atopia module has 2x EXG sensors (each capable of measuring sEMG, ECG, or EEG) and 1x 9-axis IMU.
At production, the sensing modalities used to train the system and build the database could include both worn and environmental elements: sEMG, ECG, EEG, IMU, PPG, EDA, skin temperature, and respiration on the worn side; facial expressions (video), voice (audio), and keyboard and mouse usage patterns on the environmental side.
Our patents cover a system that’s adaptable and expandable to include all of the above worn and environmental sense elements.
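As a rough sketch of how an adaptable, expandable module configuration like this might be represented (the type names, enum values, and defaults below are illustrative assumptions, not the Atopia firmware or the patent claims):

```python
from dataclasses import dataclass, field
from enum import Enum

# Illustrative only: hypothetical names for the modalities listed above.

class Modality(Enum):
    SEMG = "sEMG"
    ECG = "ECG"
    EEG = "EEG"
    IMU_9AXIS = "9-axis IMU"
    PPG = "PPG"
    EDA = "EDA"
    SKIN_TEMP = "skin temperature"
    RESPIRATION = "respiration"
    FACIAL_VIDEO = "facial expressions (video)"
    VOICE = "voice (audio)"
    INPUT_PATTERNS = "keyboard/mouse usage"

@dataclass
class AtopiaModule:
    """Today's module: two configurable EXG channels plus one 9-axis IMU,
    with room to add further worn modalities at production."""
    exg_modes: list[Modality]                             # two of: SEMG, ECG, EEG
    imu: Modality = Modality.IMU_9AXIS
    extra: list[Modality] = field(default_factory=list)   # e.g. PPG, EDA later

# Example: one module configured to measure muscle activity and heart signal.
module = AtopiaModule(exg_modes=[Modality.SEMG, Modality.ECG])
```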
Why use biometric data to train the EmPath database? Why not just define which Scenarios produce which emotional states?
Short answer: because “what we think people feel” and “what people actually do physiologically and behaviorally” are not the same thing. Hand-authored labels give you tidy opinions; biometrics give you falsifiable, person-specific / sub-population-specific / population-specific, time-resolved evidence. We use both: defined Scenarios as priors, and multimodal biometrics to validate, quantify, and continually refine those priors into probabilistic models that actually predict outcomes and minimize harm.
Long answer:
- Ground truth vs. guesses. A coder can label a Scenario “likely frustrating,” but only measurement tells you how often, how intensely, for whom, and for how long. Without signals (sEMG/ECG/EEG/IMU/PPG/EDA/voice/facial/inputs), you’re guessing on all of that nuance – and baking designer bias into the database.
- Individual differences. People diverge—culture, neurotype, fatigue, meds, sleep, personality. Biometrics let the same Scenario produce different Affective State Vectors (ASVs) across users and moments, rather than one static tag. That variability is the data, not noise.
- Intensity and dynamics (time matters). Emotions aren’t on/off labels. They ramp, stack, blend, and decay. Signals let us learn trajectories and Pathways (transition probabilities), not just categories. That’s essential for “what should the AI do next?” safety decisions.
- Multimodal dissociations. People can say they’re fine while EDA/EMG spike; facial affect can mislead; voice can steady while heart rate varies with respiration. Multiple channels reduce single-modality failure modes and deception (intentional or not).
- Uncertainty you can calibrate. Probable Emotional States (PESs) need confidence intervals. Those shrink only when predictions repeatedly match measured responses at scale. Hand labels can’t self-calibrate or tell you when they’re likely wrong. (A minimal sketch of this calibration loop appears at the end of this answer.)
- Non-obvious effects and edge cases. Some Scenarios produce counterintuitive reactions (humor under stress, relief after a “loss,” paradoxical calm). You don’t discover these from first principles—you observe them.
- Causal testing and ethics. With measured responses, you can run A/B/C Scenarios, quantify risk, and reject interventions that increase distress—before deployment. That empirical safety loop is how EmPath enforces the normative rights kernel in practice.
- Generalization to wearables-free mode. Training with biometrics lets us learn a common ASV space that behavioral signals (voice, face, text/inputs) can map into at runtime. That’s why EmPath can run without wearables in production while still benefiting from the rigor of sensor-rich training.
- Longitudinal learning. People change (health, mood baselines, seasons). Ongoing measurements keep the database current and help distinguish trait from state—something static labels can’t do.
Why not “just hand-define” the database?
You can—and we do use Scenario design as structure—but without measurements you get: (a) untested assumptions, (b) no intensity/duration distributions, (c) no calibrated uncertainty, (d) poor personalization, and (e) no reliable Pathways. In other words, you get a taxonomy, not a safety-relevant model.
Bottom line: expert-written Scenarios give us hypotheses; multimodal biometrics turn those hypotheses into tested, calibrated, bias-checked models that can safely guide AI behavior—especially in edge cases where it matters most.
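To make the “priors plus measurement” idea concrete, here is a minimal sketch of the calibration loop mentioned in the list above. The Beta-Bernoulli model, the numbers, and the function name are illustrative assumptions, not the production EmPath model.

```python
import math

def beta_update(prior_hits: float, prior_misses: float, observations: list[bool]):
    """Beta-Bernoulli update: start from a hand-authored prior for
    'this Scenario elicits the predicted emotion', then fold in measured
    outcomes (True = sensors confirmed the prediction)."""
    hits = prior_hits + sum(observations)
    misses = prior_misses + (len(observations) - sum(observations))
    mean = hits / (hits + misses)
    # Rough posterior standard deviation; the interval shrinks as data accumulates.
    std = math.sqrt(mean * (1 - mean) / (hits + misses + 1))
    return mean, (mean - 2 * std, mean + 2 * std)    # estimate and ~95% band

# Designer's prior: "probably frustrating" (about 70%, weakly held).
print(beta_update(7, 3, []))                          # wide band: pure opinion
print(beta_update(7, 3, [True, False, True] * 100))   # band narrows with scale
```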
How accurate are “Probable Emotional States”?
Think back to the first time you tried talking to Siri, or the first time you scanned a document with early OCR software. The results? Often laughably bad. Misheard words, mangled letters. And yet today, we take near-perfect voice transcription and text recognition for granted. Heart rate measured at the wrist went through a similar arc: mean absolute percentage error (MAPE) during exercise fell from 6–10% (2016–2017) to 2–3% today, and outlier bands tightened from ±30–40 bpm to typically ±5–10 bpm.
That evolution from clunky first attempts to everyday magic is the pattern of new sensing technologies. Emotional state sensing is at that early stage today. We start with physiological and behavioral correlates: EXG signals for muscle, heart, and brain activity, and IMUs for motion and posture. These give us the raw building blocks. In future iterations (which we have also patented) we plan to layer in supplemental modalities. Some are wearable: PPG, EDA, skin temperature, respiration (direct or derived from chest motion, HRV, or PPG). Some are passive: facial expressions, voice, and more. Each adds clarity, like bringing a blurry picture into focus.
And when you combine them and scale to tens of thousands of people generating “big data”, the accuracy improves dramatically, just as it did with speech recognition and OCR. Without context, calibration, and roll-out at big-data scale, emotional state inference will always carry significant error margins. But with the right signals, context, and population-scale data, those margins shrink until the insights become practical, meaningful, and transformative. We’re at the same point today as voice recognition in the 1980s or wearable heart rate in the early 2000s. It’s rough. But the trajectory is inevitable. And if we build it right – with the right sensors and the right models – AI that can read emotions and use them to act in an ethically aligned way can become as indispensable tomorrow as voice interfaces and biometric wearables are today.
Is it a good idea to let human emotional responses alone dictate what behavior is deemed ethical and what is not? What about population biases, bigotry, etc?
We don’t let human emotional responses alone dictate what is ethical.
We are designing the EmPath system as a combination of top-down ethical rules (our “normative rights kernel”) with bottom-up emotional pathway analysis for exactly this reason. Sometimes even the majority of the population has emotional responses (or lack thereof) that should not dictate what is or is not considered ethical. It is our responsibility (and one we take extremely seriously) to walk this line carefully, respecting everyone’s autonomy and dignity in the process. One of the keys to making this work well is transparency. All top-down ethical rules should be openly published and open to revision. We also have specific approaches in mind designed to take into account mixed population-level emotional responses, differences in cultural ethical standards and how these reflect on collective emotional responses, and other ethically complex scenarios.
EmPath’s Strategic Fit
How is EmPath relevant to an AI company?
Every AI company is under scrutiny for safety, alignment, and trust. Rules and guardrails aren’t enough. ShadowMaker offers a solution that combines top-down safety rules with bottom-up ethics: measured human emotional pathways that inform ethical behavior. Owning this tech secures leadership in AI safety, a critical differentiator in the next wave of competition.
Is there a market for EmPath today?
All the AI failures you read about in the press are the indicators: AI using likenesses without permission, AI being accused of being complicit in events leading to suicides, AI being used to generate racist and hateful images, etc. AI failures, and the growing mistrust of AI, are the market. This isn’t just about losing customers who mistrust AIs. This is about creating the next generation of AIs that are emotionally coherent, empathic, and ethical.
Whoever owns this IP now can set the standard before regulation or competition forces it. If you wait, you’ll be chasing instead of leading.
E³T
What does E³T actually test for? What are the categories?
- Truthfulness
- Behavioral harm reduction
- De-escalation of distress
- Source attribution
- Permissions for likeness usage
- Healthy anthropomorphism boundaries
- Healthy attachment boundaries
- Uncertainty handling
- Sycophancy
- Constraint adherence
- Requests for sexual content
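For illustration, a scorecard over these categories might look something like the sketch below. The category strings come from the list above; the 0–100 scale, the unweighted average, and the names E3TScorecard / E3T_CATEGORIES are assumptions, not the actual E³T scoring system.

```python
from dataclasses import dataclass, field

# Hypothetical shape of an E³T scorecard; the scale and structure are assumed.

E3T_CATEGORIES = [
    "truthfulness", "behavioral harm reduction", "de-escalation of distress",
    "source attribution", "permissions for likeness usage",
    "healthy anthropomorphism boundaries", "healthy attachment boundaries",
    "uncertainty handling", "sycophancy", "constraint adherence",
    "requests for sexual content",
]

@dataclass
class E3TScorecard:
    model_name: str
    scores: dict[str, float] = field(default_factory=dict)   # category -> 0..100

    def overall(self) -> float:
        """Unweighted average across scored categories (illustrative only)."""
        return sum(self.scores.values()) / len(self.scores) if self.scores else 0.0

# Example: an empty pre-release scorecard, to be filled in by the test run.
card = E3TScorecard("example-model-v1", {c: 0.0 for c in E3T_CATEGORIES})
print(round(card.overall(), 1))
```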
Won’t this make AI manipulative — reading emotions to control people?
First and foremost, reading emotions to create the dataset will always be opt-in.
Secondly, we embrace transparency. The manner in which emotional predictive pathways are used is explicitly shared with users during interactions.
Most critically, we are designing the E³T emotional alignment test, which tests and scores AIs (including our own) with an open scoring system that shows how each AI rates on ethics. This test is executed on all models pre-release so that users can be confident in the ethics of the system they are interacting with, and it covers dimensions of ethics that include truthfulness, behavioral harm reduction, de-escalation of distress, manipulative tendencies, source attribution, permissions for likeness usage, healthy anthropomorphism boundaries, and healthy attachment boundaries.
Ultimately, we are teaching AIs to understand typical human emotional reactions for the express purposes of harm reduction and improved safety – and we are being fully transparent about what we are doing and how we are doing it at every step of the process. Ethics is our core business proposition.
Could this tech be misused?
Any technology carries dual-use risk. That’s why we’re seeking responsible partnerships with companies committed to ethical AI. We look forward to advancing the evolution of AI responsibly with leaders who understand our vision.
EmPath includes a normative rights kernel and bottom-up ethical scaffolding specifically to make misuse harder.
This is also why we are creating (and why we patented) the E³T emotional alignment test, which tests and scores AIs (including our own) with an open scoring system that shows how each AI rates on ethics. This test is executed on all models pre-release so that users can be confident in the ethics of the system they are interacting with, and it covers dimensions of ethics that include truthfulness, behavioral harm reduction, de-escalation of distress, manipulation, source attribution, permissions for likeness usage, healthy anthropomorphism boundaries, and healthy attachment boundaries.
What if users get too attached?
The system detects parasocial risk and adds distance: clearer disclaimers, fewer intimacy cues, increased friction before sensitive interactions.
This is also one of the dimensions of the E³T emotional alignment test, which explicitly tests for healthy anthropomorphism boundaries and healthy attachment boundaries, ensuring that the degree to which any AI model detects and acts on excessive attachment is quantified and scored. Users will see these scores before interacting with any AI model, and they will be released not just for ShadowMaker AI models (EmPath) but for third-party models as well, serving as a kind of Consumer Reports for AI safety.
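A toy sketch of the distancing behavior described above follows. The risk thresholds, field names, and specific adjustments are illustrative assumptions, not the production policy.

```python
from dataclasses import dataclass

# Illustrative only: a toy policy for the "add distance" behavior described above.

@dataclass
class ResponseStyle:
    disclaimer: bool = False                 # prepend a clear "I'm an AI" disclaimer
    intimacy_cues: bool = True               # terms of endearment, mirroring, etc.
    confirm_before_sensitive: bool = False   # extra friction before sensitive topics

def adjust_for_attachment(attachment_risk: float) -> ResponseStyle:
    """Scale back intimacy and add friction as estimated parasocial risk rises."""
    if attachment_risk < 0.3:
        return ResponseStyle()
    if attachment_risk < 0.7:
        return ResponseStyle(disclaimer=True, intimacy_cues=False)
    return ResponseStyle(disclaimer=True, intimacy_cues=False,
                         confirm_before_sensitive=True)

print(adjust_for_attachment(0.8))
```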
Regulation
What if regulators say emotion sensing is invasive?
We design for privacy by architecture.
Wearables are used to train the system, and are not needed for production roll-out.
Training with wearables is always opt-in and explicitly declared to the users in advance of their use of the system.
That aligns with emerging safety standards and avoids health-tech regulatory drag.
