13 Comments
stephen.kang

Thank you for the excellent article. In Korea, this phenomenon is referred to as the "zero-UI" phenomenon. In fact, many large corporations are preparing for a situation where high-level decision-making is possible solely through speech and text. Although it was missing from this article, the core of the zero-UI phenomenon is "Conversational AI with argumentative capabilities."

Our team has successfully created a conversational engine based on Argumentational Technology that can handle deep and complex argumentation and dialectical conversations, and we've secured initial customers. To put it more technically, it's an AI that can produce strategic and structured intents, beyond simply understanding the customer's intent.

Although we weren't selected in the recent AIR startup program, we hope for a better opportunity next time. The essence of the screenless future is an AI conversational engine that can mimic the highest level of human abstract thought.

– By DeepSkill Team

Sophie Bakalar

I haven’t heard this phrase before! Thank you for sharing. And excited to learn more about DeepSkill. Send me a DM and we can find time to talk 1:1.

Hefin

Thank you for a really interesting read. Has there been any existing or emerging discourse about what this potential shift means from an accessibility perspective? For example, how usable are screenless AI devices for people who are deaf or hard of hearing?

Sophie Bakalar

This is a good point. A lot of screenless AI is voice-based, which can be restrictive for people who are deaf or hard of hearing. But there’s interesting work happening around things like real-time captions, haptic feedback, and visual alerts through wearables or AR glasses. Still early days, but accessibility needs to be baked in from the start, not added later. Would love to know if you’ve seen anything cool on this front too!

Hefin

I haven't (yet) but really appreciate the examples you've shared.

Prynce Karki

Feels like technology is moving closer and closer. There were computers, an arm's length away; then phones, near your face; glasses, on your face; and then Elon Musk plants a microchip in your brain.

Erica Kelly

This was exactly my first thought. Tech is becoming more and more embedded into our daily lives. When there isn't even a device to show us we're connected, how are we expected to ask people to log off and … live?

Prynce Karki

I mean, I think my takeaway from the essay is … well, if it's more seamlessly integrated into the human experience, will it help with that? We have to be online anyway; let's be cyborgs. Or will I sit in a park, like a zombie, muttering to my AI glasses to post this exact comment on Substack?

Erica Kelly

Our future will be more screen-less, sure, but will it actually mean we’re more present in reality? I fear not.

Prynce Karki

Yeah, there's too much money in keeping us in addictive habit loops.

Sophie Bakalar

You’re probably right. But my hope is that without all the physical devices to distract us, we might actually engage with the real world again. I like to imagine taking a hike without checking my phone for directions. Or cooking without my iPad. But still able to access all the auxiliary data (and more).

The attention economy is a relatively recent phenomenon. Maybe they’ll find other ways to monetize us going forward.

Prynce Karki

But it'll be so much easier when their attention-grabbing tool is literally strapped to our faces … with AI talking to us …

This is the Matrix dude!
