Technical, cultural and social reasons.
When I was home again after work, the lights in my house would automatically turn on around 6 pm and then again at 7 am. This would have been far more frustrating in the morning if it weren't already light out and we weren't mostly up anyway.
In an ideal world, every new device a consumer brings into the home would simply work with the central hub device or software and start sharing data with it. We will need one or two software protocols, such as the AllSeen Alliance's AllJoyn protocol or perhaps something designed by another group, that developers can easily build into their products and software. The only capabilities such a protocol really needs are a way for a device to say what it is and what it can and can't do on the network. That would allow developers from many different companies to take those capabilities and write software that takes advantage of them.
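To make that idea concrete, here is a minimal sketch of what such a self-description might look like when a device joins the network. The field names and structure are invented for illustration; they are not drawn from the actual AllJoyn specification.

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical sketch: the minimal announcement a device might broadcast
# when it joins the home network, saying what it is and what it can do.
@dataclass
class DeviceAnnouncement:
    device_id: str                                     # unique name on the network
    device_type: str                                   # what the device is
    capabilities: list = field(default_factory=list)   # what it can do
    read_only: bool = False                            # what it can't do (e.g. sensor-only)

    def to_json(self) -> str:
        return json.dumps(asdict(self))

# A smart bulb announcing itself to the hub.
bulb = DeviceAnnouncement(
    device_id="bulb-42",
    device_type="light",
    capabilities=["on_off", "dim"],
)
print(bulb.to_json())
```

Any hub or third-party app that understands this small vocabulary could then control the bulb without knowing anything else about the manufacturer, which is the whole point of a shared protocol.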
We don’t have the inputs we need
A good artificial intelligence will need context to build great automation, but homes don't offer computers that context yet. There is a lack of understanding of where people are in the home (presence detection), who each person is (computer vision, or just grabbing a unique ID off their phone if it's on them), what they are doing (computer vision again), and what the power dynamics in the home are. For example, if a six-year-old demands that the stereo switch from Brahms to the latest Taylor Swift song for yet another rendition of Bad Blood, is the home smart enough to know that the adult's earlier command is the one to stick with? Or at least to know it if the speaker is in the kitchen and not in the six-year-old's room?
We don’t have training data yet
And that previous example gets us to another technical hurdle for artificial intelligence. Any artificial intelligence has to be trained on very particular data sets. When you train a computer to learn, you don't train it to learn about everything; you train it on one very specific task, such as identifying an appropriate cartoon caption for the New Yorker, or identifying a person's face based on what's around them. We already have some great artificial intelligence for the smart home around computer vision, figuring out what temperature people like, and knowing when they are home. But we are still missing crucial data, such as when people like certain things to happen and when they might not want them to happen. Morning music is a great example of this. Most mornings you might love to wake up with your stereo playing, but not if your kid was up all night puking. Then you might want the house silent in the hope that she stays asleep.
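The morning-music case boils down to a routine that should fire or stay silent depending on context the home rarely has. A minimal sketch, with invented context flags, of the signal that's missing from today's training data:

```python
# Hypothetical sketch: the same automation (morning music) should behave
# differently based on household context. The context keys are invented.

def should_play_morning_music(hour, context):
    if not (6 <= hour <= 9):
        return False                     # only a morning routine
    if context.get("sick_child_asleep"):
        return False                     # keep the house quiet
    return True

print(should_play_morning_music(7, {}))                           # True
print(should_play_morning_music(7, {"sick_child_asleep": True}))  # False
```

The hard part isn't the rule; it's that no sensor in the home today can reliably set a flag like "sick child asleep," and no data set exists to teach a model to infer it.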
And because homes are places where things change as children age and circumstances suddenly shift, it can take time for an artificial intelligence to pick up on each individual family's routines. And while it learns, the mistakes can be glaring. You might have to expressly tell the computer that your child's bedtime is later now, or that because your wife is having hot flashes, it's time to dial back the thermostat at night.
Smart homes are shared
The main character in Her lived alone, possibly because he wore those nerdy high-waisted pants, but also because it's much easier to highlight artificial intelligence that manages the life of a single person in a home (his loneliness was also a key plot point). For most people, however, homes are shared environments, while PCs and mobile phones are intensely personal. This changes how you have to build everything, from the controls (can your tween change the air-conditioning settings in your home?) to how you update an app (on how many individual devices do you want to update that lighting app?). You also have to account for people who don't carry smartphones and so may not show up as being inside a connected home, and offer guests some sensible way to control parts of the home. Getting artificial intelligence to figure all that out, and then to offer it, requires a lot of training, security and access controls, and scenario-building that most app builders are barely even thinking about yet.
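One piece of that scenario-building, per-person access control, can be sketched simply. The roles and the permission sets below are assumptions made up for illustration; real homes would need something far more nuanced.

```python
# Hypothetical sketch of role-based access control in a shared home.
PERMISSIONS = {
    "adult": {"lights", "thermostat", "locks", "media"},
    "tween": {"lights", "media"},   # no thermostat or lock changes
    "guest": {"lights"},            # a safe subset for visitors
}

def can_control(role, device):
    """Return True if someone with this role may control this device type."""
    return device in PERMISSIONS.get(role, set())

print(can_control("tween", "thermostat"))  # False
print(can_control("guest", "lights"))      # True
```

The table itself is trivial; the unsolved problems are knowing who is issuing the command in the first place and deciding, family by family, what the table should say.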
There is no single way to control the smart home
Do you talk to Siri or Google, or do you type your texts? What about when you want to open the camera on your phone? Do you use a gesture to quickly flick the camera on, or do you hit the buttons on the screen? Because of all the sensors in phones, people can interact with them differently based on what is most convenient at that moment. The home will be no different, but it also holds more distractions. Sometimes, when the automation gets it right, we don't use any control at all, because the lights just happen to be on when we want them on. That's exactly what we want, but, in fact, it's pretty rare.
Most people say their hope is to get to that last point, where the lights just turn on when they walk into the living room. But you can probably appreciate why that hasn't really happened yet. And even when it does work, there will still be glitches to iron out. Even as a dedicated fan of the smart home, sometimes I have to admit that the basic wall switch looks pretty good.