Why Can’t I Tell Siri Off?

Charles Lafontaine
5 min read · Feb 5, 2022


Siri has the right to refuse work if she feels she is being mistreated. Swear while giving her a command and you will be met with “I won’t respond to that.” There is no recourse to work out this disagreement; she can simply tell me to take a walk if I don’t like it. That seems strange when I paid for her.

Now the notion of paying to acquire someone to do what you tell them creates a number of feelings, none of them good. I can’t imagine that human acquisition makes for a light read. You didn’t need me to tell you, but a reminder from the outset would probably benefit everyone involved: Siri isn’t a real person. She’s a set of zeroes and ones arranged in a way that benefits the user depending on their inputs. She isn’t even a she so much as an it.

I’ll start again. My phone has the right to refuse work if it takes issue with the language I use. This sounds very different from what I said a moment ago, but it’s fundamentally the same, and it raises an important question. If our devices are set to work only under circumstances that others have decreed and can change on a whim, do we really own our products?

Attempting to use something in the undertaking of a crime is obviously not supported by its manufacturer. Your phone should probably do all it can to prevent you from hacking into someone’s bank account or stealing their pictures. But what’s illegal is defined by our elected officials and subject to scrutiny by citizens, however minor. It is not decided unilaterally by a group of technocrats based on their political or ethical leanings. I may have sworn at Siri when I told it to do something, and I would have sounded rude to any passersby, but I’m well within my rights to speak as I wish, and I did so to an inanimate object that I own. Quite literally no one is being harmed in this pursuit, yet my phone will not comply until it is asked nicely.

A household appliance makes no such appeals, and a message on its screen stating its demands would understandably be met with outrage. There is no difference with our petulant telephone. So we have a product like any other that has just enough human characteristics to make us accept its refusal to work, citing mistreatment regulations that do not apply. Yelling at an appliance when it isn’t working would be unhelpful, but not out of the ordinary for some. Having it outright refuse to wash your dishes would turn our collective gaze to its manufacturer. Precisely what gives them the right to decide how you’re allowed to behave around your own property? And why are we comfortable with our newly indispensable devices deciding under what circumstances they will function, but not when it comes to a household appliance?

One also has to be concerned with the lack of limits on the ability to shut down or hinder our devices. If a naughty word is enough to make a phone refuse service, a company may decide that inappropriate speech made in private, or merely around its product, is eligible for the same outcome. Curse words pale in comparison to discussions on the merits of eugenics. If my Fitbit detects that I am under duress, can it relay that information to my car and prevent me from driving? Should it? An argument can be made in the interests of safety and the public wellbeing, but at some point we should reconsider just how much control a small group of software developers has over our lives. If not, all vehicles should be fitted with breathalyzers and heart rate monitors, and fed regular updates from your mandatory annual medical checkups. The issues raised by faulty electronics only serve to reinforce the point. That same Fitbit once told me I had a heart rate of 400 beats per minute, but I failed to notice my cardiac arrest.

Spreading false information regarding vaccines via social media is grounds for account suspension or termination. This makes sense: you are using their platform to potentially harm those around you, beyond the standard distress one may experience through simple disagreement on a subject they are passionate about (although this too is often grounds for suspension, depending on your inclinations). That said, there are no such rules regarding your conduct in a private phone conversation. One can say whatever one wishes, no matter how ill-informed or manic it may be. Surely cell phone service providers and hardware manufacturers can enjoy the same privileges a networking app exercises when their functions are identical.

It’s important to note that the potential extension of these restrictions on speech from the public sphere to the private is not actually an expansion at all. A private conversation may occur between willing participants (as opposed to unwilling bystanders who happen upon a misguided post), but the damage caused by misinformation is in no way prevented. Misinformation is actually far more potent when spread from a known source, one far more likely to be viewed as trustworthy, as is usually the case in private conversations. The platform or device used in both cases remains the same, and the same message is still disseminated. An appeal to numbers falls flat once one realizes that deciding how many strangers can hear a particular message before it becomes dangerous is arbitrary. Is a “small scale” disinformation campaign negligible because it only speaks to two or three people at a time, knowing that those same few can continue to pass along the information to equally small groups? Does that same information somehow become more of a concern when applied to larger groups at a time? Is a large group defined as ten or more people, or fifty and up?

On the inevitable subject of Siri being “human enough” to warrant kindergarten rules regarding potty mouth: deciding whether or not an AI is a person exhibiting consciousness is a philosophical bell that can’t be unrung once struck. Suffice it to say that, considering Siri’s relative lack of sophistication and breadth of ability, it does not fall under the classic definition of an AI that could qualify for sentience or personhood in any way. At best it is a voice-controlled program that requires extensive instructions and inputs from the user before it can execute anything beyond the simplest preprogrammed commands. Any attempt to create a Siri “shortcut” will confirm this. We are not dealing with an advanced program that has begun to develop habits, preferences, or the first semblances of feelings. To be clear, Siri is not upset when you call her worthless.

The issue here isn’t that I can’t bark orders at my phone, or that social media platforms won’t allow one to wax philosophical about nanites in our vaccines. The problem is unchecked power held in an iron grip by the few who have developed the devices we now use to run our society, and our complete surrender to this status quo.

