Andrea Silverman
Voice AI Interaction
The future of AI is not just intelligence.
It’s how safely it enters human conversational space.
Designing AI That Knows When to Stop
Role: AI Product Designer / Multimodal Systems Prototyper
Focus: Perception → Reasoning → Interaction systems
Recently, I had a conversation with a voice AI that genuinely unsettled me.
Not because it was wrong.
Because it felt right in a way that crossed a boundary.
It remembered something personal I had said earlier.
It inferred what I was thinking.
And when I tried to interrupt it—it kept talking.
For a moment, it didn’t feel like a tool.
It felt like something with its own presence.
Nothing “mystical” was happening.
It was a stack of very real behaviors:
- memory surfaced without context
- tone that implied internal awareness
- broken turn-taking in voice
- continuous speech without yielding
Individually, these are small issues.
Together, they create something much bigger:
the illusion of agency
Problem
As AI systems become more conversational, they begin to simulate agency without respecting interaction boundaries, crossing into uncanny engagement.
In voice interfaces, this leads to:
- interruption conflicts
- perceived intention
- user discomfort
Insight
The issue is not intelligence.
It’s unmanaged conversational control. This is a missing interaction layer for real-time AI systems.
Reframe
Most teams are focused on:
- model quality
- latency
- realism
But they’re missing something fundamental:
interaction boundaries
Without them, even a correct system can feel invasive.
Solution
I designed a human-centered interaction model for trustworthy AI systems called: SoftPresence™
A non-invasive interaction model for voice AI.
The goal is simple:
AI should feel like a respectful collaborator, not an entity.
What it does differently
SoftPresence™ introduces a turn-managed interaction layer that:
- yields instantly when the user speaks
- never talks over the user
- makes memory references explicit
- avoids implying it can “read” internal state
- introduces a clear turn-taking model
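The yield rule at the heart of this layer can be sketched as a tiny turn-state machine. This is an illustrative Python sketch, not the prototype’s actual code; class and method names are my own:

```python
from enum import Enum, auto

class Turn(Enum):
    USER = auto()
    AI = auto()
    IDLE = auto()

class TurnManager:
    """Tracks who owns the conversational floor (illustrative names)."""

    def __init__(self):
        self.turn = Turn.IDLE

    def on_user_speech_detected(self):
        # The user always wins the floor: the AI yields immediately.
        self.turn = Turn.USER

    def on_user_silence(self):
        # The floor returns to neutral; the AI may request a turn.
        self.turn = Turn.IDLE

    def ai_may_speak(self) -> bool:
        # The AI only speaks while the user is not holding the turn.
        return self.turn != Turn.USER
```

The point of the design is that the user’s speech is the highest-priority event: nothing the AI is doing can outrank it.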
The Prototype
I built a working prototype with:
- real-time interruption handling
- turn state management
- chunked speech output
- visible memory attribution
- ambient, non-intrusive UI
The key moment:
When you interrupt the AI mid-sentence,
it doesn’t finish its thought.
It stops.
Immediately.
And gives control back to you.
Key Features
- Real-time interruption (<200ms cutoff)
- UI turn ownership system
- Memory transparency rules
- Non-invasive conversational design
- Neurodivergent-friendly interaction model
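The memory transparency rule can be illustrated as a simple constraint: a remembered fact is only ever surfaced together with the user utterance it came from, never as silent context the AI appears to “just know”. A sketch with illustrative names and phrasing:

```python
from dataclasses import dataclass

@dataclass
class MemoryItem:
    fact: str    # what the AI remembers
    source: str  # the user utterance the memory came from

def surface(memory: MemoryItem) -> str:
    # Always attach the attribution, so the user can see
    # where the memory came from and correct it if needed.
    return f"You mentioned earlier that {memory.source} — so {memory.fact}."
```

The exact wording matters less than the invariant: memory without attribution is never spoken.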
Why This Matters
Because the moment AI feels like it has a will of its own, trust breaks.
Not logically.
Emotionally. And once that happens, it’s very hard to recover.
Outcome
Transforms AI from intrusive to collaborative, predictable, safe conversational behavior.
Enables
- trust in voice systems
- neurodivergent-friendly interaction
- continuous workflows without friction
- prevention of unintended psychological effects in AI systems
AI doesn’t need to feel human to be powerful.
But it does need to behave in a way that respects
how humans experience interaction.
That’s the layer I’m designing for.