The future is disappearing: How humanity is falling short of its grand technological promise

by The Koyal Group Info Mag

What I find most interesting about typical visions of the future isn’t all the fanciful and borderline magical technology that hasn’t been invented yet, but rather how much of it actually already exists.

Consider something relatively straightforward, like a multi-touch interface on your closet door that allows you to easily browse and experiment with your wardrobe, offering suggestions based on prior behavior, your upcoming schedule and the weather in the locations where you are expected throughout the day. Or a car that, as it makes real-time navigational adjustments in order to compensate for traffic anomalies, also lets your co-workers know that you will be a few minutes late, and even takes the liberty of postponing the delivery of your regular triple-shot, lactose-free, synthetic vegan latte.

There’s very little about these types of scenarios that isn’t entirely possible right now using technology that either already exists, or that could be developed relatively easily. So if the future is possible today, why is it still the future? I believe there are two primary reasons.

The first is a decidedly inconvenient fact that futurists, pundits and science fiction writers have a tendency to ignore: Technology isn’t so much about what’s possible as it is about what’s profitable. The primary reason we haven’t landed a human on Mars yet has less to do with the technical challenges of the undertaking, and far more to do with the costs associated with solving them. And the only reason the entire sum of human knowledge and scientific, artistic and cultural endeavor isn’t instantly available at every single person’s fingertips anywhere on the planet isn’t because we can’t figure out how to do it; it’s because we haven’t yet figured out the business models to support it.
Technology and economics are so tightly intertwined, in fact, that it hardly even makes sense to consider them in isolation.

The second reason is the seemingly perpetual refusal of devices to play together nicely, or interoperate. Considering how much we still depend on sneakernets, cables and email attachments for something as simple as data dissemination, it will probably be a while before every single one of our devices is perpetually harmonized in a ceaseless chorus of digital kumbaya. Before our computers, phones, tablets, jewelry, accessories, appliances, cars, medical sensors, etc., can come together to form our own personal Voltrons, they all have to be able to detect each other’s presence, speak the same languages, and leverage the same services.

The two reasons I’ve just described as to why the future remains as such — profit motive and device isolation — are obviously not entirely unrelated. In fact, they could be considered two sides of the same Bitcoin. However, there’s still value in examining each individually before bringing them together into a unified theory of technological evolution.

Profitable, Not Possible

Even though manufacturing and distribution costs continue to come down, bringing a new and innovative product to market is still both expensive and surprisingly scary for publicly traded and historically risk-averse companies. Setting aside the occasional massively disruptive invention, the result is that the present continues to look suspiciously like a slightly enhanced or rehashed version of the past, rather than an entirely reimagined future. This dynamic is something we have mostly come to accept as a tenet of our present technology, but conveniently disregard when contemplating the world of tomorrow.
Inherent in our collective expectations of what lies ahead seems to be an emboldened corporate culture that has grown weary of conservative product iteration; R&D budgets unencumbered by intellectual property squabbles, investor demands, executive bonuses and golden parachutes; and massive investment in public infrastructure by municipalities that seem constantly on the verge of complete financial collapse — none of which, as we all know, are particularly reminiscent of the world we actually live in.

One of the staples of our collective vision of the future is various forms of implants: neurological enhancements to make us smarter, muscular augmentation to make us stronger, and subcutaneous sensors and transmitters to allow us to better integrate with and adapt to our environments. With every ocular implant that enables the blind to sense more light and higher resolution imagery; with every amputee who regains some independence through a fully articulated prosthetic; and with every rhesus monkey who learns to feed herself by controlling a robotic arm through a brain-computer interface, humanity seems to be nudging itself ever closer to its cybernetic destiny.

There’s no doubt in my mind that it is possible to continue implanting electronics inside of humans, and organics inside of machines, until both parties eventually emerge as new and exponentially more capable species. However, what I’m not sure of yet is who will pay for all of it outside of research laboratories. Many medical procedures don’t seem to be enjoying the same trends toward availability and affordability as manufacturing processes, and as far as I can tell, insurance companies aren’t exactly becoming increasingly lavish or generous.
As someone who is fortunate enough to have reasonably good benefits, but who still thinks long and hard about going to any kind of a doctor for any reason whatsoever due to perpetually increasing copays and deductibles (and perpetually decreasing quality of care), I can’t help regarding our future cybernetic selves with a touch of skepticism. The extent to which the common man will merge with machines in the foreseeable future will be influenced as much by economics and policy as by technological and medical breakthroughs. After all, almost a decade ago researchers had a vaccine that was 100 percent effective in preventing Ebola in monkeys, but until now, the profit motive wasn’t there to develop it further.

Let’s consider a more familiar and concrete data point: air travel. Growing up just a few miles from Dulles Airport outside of Washington, D.C., my friends and I frequently looked up to behold the sublime, delta-wing form of the Concorde as it passed overhead. I remember thinking that if one of the very first supersonic passenger jets entered service only three years after I was born, then surely by the time I grew up (and assuming the better part of the planet hadn’t been destroyed by a nuclear holocaust unleashed by itchy trigger fingers in the United States or Soviet Union), all consumer air travel would be supersonic.

Thirty-eight years after the Concorde was introduced — and 11 years after the retirement of the entire fleet — I think it’s fair to say that air travel has not only failed to advance from the perspective of passengers, but unless you can afford a first- or business-class ticket, it has in fact gotten significantly worse.
It would be unfair of me not to acknowledge that many of us do enjoy in-flight access to dozens of cable channels through a primitive LCD touchscreen (which encourages the passengers behind us to constantly poke at our seats, rudely dispelling any hope whatsoever of napping), as well as email-grade Wi-Fi (as opposed to a streaming-media-grade Internet connection). But somehow I’d hoped for a little more than the Food Network and the ability to send a tweet at 35,000 feet about how cool it is that I can send a tweet at 35,000 feet.