Reports of Amazon’s virtual artificial intelligence (AI) assistant Alexa behaving strangely have recently made the rounds on the news and social media. Several Alexa-enabled devices have reportedly started talking or laughing without being prompted, or doing so instead of performing a command. Naturally, Alexa owners who heard this freaked out, with many resorting to turning off the AI assistants or unplugging their devices.

These incidents didn’t go unnoticed by Amazon, which immediately set out to fix the bug. On March 7, Amazon released a statement explaining Alexa’s sudden gleeful outbursts: “In rare circumstances, Alexa can mistakenly hear the phrase ‘Alexa, laugh,’” the company said.

As a result, Amazon decided to change the phrase to “Alexa, can you laugh?” which they said would be “less likely to have false positives.” The phrase “Alexa, laugh” has also been disabled. Amazon also noted that they were “changing Alexa’s response from simply laughter to ‘Sure, I can laugh’ followed by laughter.”

First, let’s set a couple of things straight. Alexa laughing at seemingly random moments, coupled with little acts of defiance, sure sounds chillingly familiar — but this (probably) isn’t a sign of an AI takeover. What it is, rather, is a chance to reconsider some of the realities of living with virtual AI assistants today, and in the future.


This should probably go without saying. One of the most promising — but also, arguably, disconcerting — realities of AI in mobile devices and the Internet of Things is that these systems are always on. This is partly so AI assistants and other AI-powered applications can readily respond to a user's queries. The other purpose is to allow the AI's machine learning algorithms to learn continuously, improving their performance over time. Perhaps it's good to be reminded of this by bodiless laughter from time to time.


That point about AI needing to learn continually points to a second important reality: machines, no matter how intelligent they seem, are still far from perfect. For a virtual assistant like Alexa, human interaction boils down to receiving and processing a voice prompt, then echoing a pre-programmed response.

Somewhere in between, the device or the AI can still "mishear" the prompts, as Amazon said in its statement. What's more, mishearing voice commands is a fault not unique to Alexa. As AI assistants grow more sophisticated, they'll likely get better and better at understanding what we want — yet at the end of the day, there will always be a risk that something gets lost in translation.


Ok, let's admit it. Machines doing human-like activities can be creepy.

The uncanny valley is a well-documented human response to objects and behaviors that are almost human, but not quite. Who wasn't freaked out, even a bit, when Sophia the robot said she was going to end humankind? It wasn't even so much what she said, but the fact that she could say it — and as a response to a conversation, at that — that was unsettling. Not to mention, Sophia looks mostly, but not exactly, like a human being. (Having some hair might help, but it doesn't look like that's happening any time soon.)

The uncanny valley can be as real as it gets when it comes to AI-powered machines, and will likely factor into future design choices. For some, it’s no laughing matter.


It's a bit amusing that, despite the ubiquitous nature of devices like the Amazon Echo, it took unsolicited laughter to make users unplug. Given how much we depend on today's gadgets and devices, we tend to forget that it's alright to disconnect once in a while. Sure, technology is useful, but it doesn't hurt not to need it every so often.

At the very least, it’ll keep you from worrying about your virtual assistant laughing at a joke only it could hear.

This article originally appeared on Futurism.
