Thank you for the excellent article. In Korea, this is referred to as the "zero-UI" phenomenon. Many large corporations are already preparing for a future where high-level decision-making happens solely through speech and text. One thing missing from this article is what we see as the core of zero-UI: conversational AI with argumentative capabilities.
Our team has built a conversational engine based on Argumentational Technology that can handle deep, complex argumentation and dialectical conversations, and we've secured our first customers. To put it more technically, it's an AI that can produce strategic, structured intents rather than simply understanding the customer's intent.
Although we weren't selected for the recent AIR startup program, we hope for a better opportunity next time. The essence of the screenless future is an AI conversational engine that can mimic the highest level of human abstract thought.
– By DeepSkill Team
I haven’t heard this phrase before! Thank you for sharing. And excited to learn more about DeepSkill. Send me a DM and we can find time to talk 1:1.
Thank you for a really interesting read. Has there been any existing or emerging discourse about what this potential shift means from an accessibility perspective? For example, how usable are screenless AI devices for people who are deaf or hard of hearing?
This is a good point. A lot of screenless AI is voice-based, which can be restrictive for people who are deaf or hard of hearing. But there’s interesting work happening around things like real-time captions, haptic feedback, and visual alerts through wearables or AR glasses. Still early days, but accessibility needs to be baked in from the start, not added later. Would love to know if you’ve seen anything cool on this front too!
I haven't (yet) but really appreciate the examples you've shared.
This was exactly my first thought. Tech is becoming more and more embedded in our daily lives. When there isn't even a device to remind us that we're connected, how can we expect people to log off and… live?
Our future will be more screen-less, sure, but will it actually mean we’re more present in reality? I fear not.
You’re probably right. But my hope is that without all the physical devices to distract us, we might actually engage with the real world again. I like to imagine taking a hike without checking my phone for directions, or cooking without my iPad, while still being able to access all the auxiliary data (and more).
The attention economy is a relatively recent phenomenon. Maybe they’ll find other ways to monetize us going forward.