Anticipatory Recommendations: The Past, Present, and Future

Trend Analysis

Imagine a world where every action you take in the digital sphere – every search, social post, email message, calendar entry, even every conversation – is weighed and factored by an external entity tasked with providing you with a constant stream of contextually relevant, proactive guidance throughout your daily life. Such is the emerging reality of smartphone-enabled anticipatory recommendations: a world that is becoming less science fiction and more science fact with each passing day.

The Building Blocks of Anticipatory Recommendations

  1. Instantaneous Delivery

One could argue that the first seedlings of anticipatory recommendations were sown in September of 2010 when Google rolled out its Google Instant search engine functionality. The key technical insight stemmed from the fact that people can read much faster than they can type (approximately 10 times faster, according to Google). Instant search allowed users to get to the desired content much faster because they didn’t have to finish typing the full search term, or even press “search,” to begin receiving results. The ability to see the results as you type in the search pane also helped users formulate better search queries by providing instant feedback.

Instantaneous delivery provides the real-time context necessary for anticipatory recommendations to function.
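The as-you-type loop behind instant results is conceptually simple: on every keystroke, match the current prefix against an index and return suggestions immediately, with no “search” button required. Here is a minimal sketch in Python – the index, function name, and terms are illustrative assumptions, not Google’s implementation:

```python
# Hypothetical as-you-type suggester over a tiny in-memory index.
def instant_suggest(prefix, index, limit=3):
    """Return up to `limit` completions for the prefix typed so far."""
    p = prefix.lower().strip()
    if not p:
        return []
    return [term for term in index if term.startswith(p)][:limit]

index = [
    "anticipatory computing",
    "anticipatory recommendations",
    "apple pie recipe",
    "apple macbook pro",
]

# Suggestions refresh on every keystroke, before the query is finished:
for typed in ["a", "ap", "appl", "apple p"]:
    print(typed, "->", instant_suggest(typed, index))
```

A production engine would also debounce keystrokes and rank completions by query popularity and personal history rather than returning raw prefix matches.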

  2. Contextual Understanding

Thanks to advances in semantic recognition and machine learning (such as Google’s Hummingbird algorithm), search engines have evolved to be able to recognize and process basic conversational language. To offer an analogy to childhood development, semantic recognition advances the search engine’s “comprehension” level from that of a toddler, who can only respond to simple and direct (keyword) prompting, to that of a young child who is able to understand basic “conversational” language patterns and contextual nuances. Just like a young child, today’s semantic search engine is able to factor in context to decipher meaning – it can instantly recognize, for instance, whether the word “apple” is in reference to a computing device or a piece of fruit.

Contextual understanding provides the technological framework necessary for anticipatory recommendations to function.
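The “apple” example above can be approximated with a toy disambiguator that scores each candidate sense by how many of its cue words appear in the rest of the query. The senses and cue words below are invented for illustration; real semantic search uses far richer signals:

```python
# Toy word-sense disambiguation: score each sense by cue-word overlap
# with the rest of the query. Senses and cue words are invented here.
SENSES = {
    "apple": {
        "company": {"macbook", "iphone", "ipad", "stock", "store"},
        "fruit": {"pie", "orchard", "juice", "tree", "eat"},
    }
}

def disambiguate(word, query):
    """Pick the sense whose cue words overlap most with the query context."""
    context = set(query.lower().split())
    senses = SENSES.get(word, {})
    return max(senses, key=lambda s: len(senses[s] & context), default=None)

print(disambiguate("apple", "apple pie recipe"))     # fruit
print(disambiguate("apple", "apple iphone launch"))  # company
```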

  3. Unlimited Access

Today’s smartphones utilize two technologies, GPS and Wi-Fi, to calculate user location. This kind of proximity tracking offers the contextual relevance necessary to make useful predictive recommendations. When trying to anticipate what you want, it helps to know where you are.
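Once a device has a (latitude, longitude) fix, proximity ranking reduces to computing distances and sorting. A small sketch using the standard haversine great-circle formula – the places and coordinates are made up:

```python
import math

# Hypothetical proximity ranking from a device's (lat, lon) fix.
def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Made-up places and user location:
places = [("Cafe A", 37.7750, -122.4190), ("Cafe B", 37.8000, -122.4000)]
user_lat, user_lon = 37.7749, -122.4194

nearest = min(places, key=lambda p: haversine_km(user_lat, user_lon, p[1], p[2]))
print(nearest[0])  # Cafe A
```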

We have our smartphones on us wherever we go. They’re always with us because they’ve been designed to be always with us and always useful to us, whenever and wherever; they’re our pocket-sized personal assistants.

The anticipatory recommendations apps of today (such as Google Now) combine user geo-locational data with a surfeit of personal profile, search, and social data to serve up contextually relevant recommendations. The anticipatory recommendation engines of the near future will be able to factor in the exact phrasing of our questions along with our current location, past preferences, relevant comments and reviews from friends, family, and influencers on social media, as well as crowdsourced user-generated reviews from third party social-local review sites like Yelp, to provide us the most contextually relevant results instantaneously.

This kind of unlimited personal access provides the sheer amount of data necessary for anticipatory recommendations to function.
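One simple way to picture how such an engine blends these signals is a weighted score per candidate recommendation. The weights, fields, and candidates below are assumptions for illustration, not any real app’s formula:

```python
# Illustrative blending of the signal types described above: proximity,
# past preferences, friends' reviews, and crowdsourced ratings.
def score(candidate, weights):
    """Weighted sum of normalized (0-1) signal values."""
    return sum(weights[k] * candidate.get(k, 0.0) for k in weights)

weights = {"proximity": 0.4, "past_preference": 0.3,
           "friend_reviews": 0.2, "crowd_rating": 0.1}

candidates = [
    {"name": "Taqueria", "proximity": 0.9, "past_preference": 0.8,
     "friend_reviews": 0.6, "crowd_rating": 0.7},
    {"name": "Sushi Bar", "proximity": 0.5, "past_preference": 0.4,
     "friend_reviews": 0.9, "crowd_rating": 0.9},
]

best = max(candidates, key=lambda c: score(c, weights))
print(best["name"])  # Taqueria
```

In practice the weights themselves would be learned per user rather than fixed, which is where the personal data described above comes in.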

Digital Assistant vs Anticipatory Recommendations

In a fascinating post for Search Engine Land, digital marketing expert Danny Sullivan makes an important distinction between a digital assistant and a predictive search (aka anticipatory recommendation) function. He points out that, whereas Apple Siri, Google Now, and Microsoft Cortana are all digital assistants insofar as each can add reminders to your calendar or help you do things with your mobile device such as send a text, play a song, or set an alarm, none of these actions is necessarily predictive or anticipatory in nature.

Sullivan writes how, among the three top competitors in the mobile space, Apple Siri is a digital assistant that is largely lacking in predictive search capability: “it learns little about you, and has no memory of what you do.” Google Now, on the other hand, “will anticipate all types of information you may need, so much so that it can be downright scary.” Microsoft Cortana’s predictive search capability falls somewhere in between.

It is easy to understand how Google’s search engine benefits its anticipatory recommendations app. Google Now works so well chiefly because it can pull data from all elements of your Google profile – from the content of your emails and calendar entries to your past search history – as it works to proactively suggest future actions for you to take.

As Danny Sullivan observes, “…virtually none of this information is showing up because I deliberately arranged for it to happen…[Google has] decided to show this information based on searches that I’ve done, or something it has spotted in my email, or based on web sites I’ve visited, as well as places I’ve been to…”

Sullivan notes how the close interplay between predictive search and locational proximity helps explain why predictive search has emerged as a mobile feature rather than a desktop feature. To this end, he even predicts that in the near future, it may prove to be the essential mobile device feature we can’t live without.

For more insight into Cortana and the future of predictive search, check out this video of Danny Sullivan’s interview with Microsoft.

The Future: Deep Learning, Humanlike Robots, and Artificial General Intelligence

“This idea of anticipatory computing is going to be the next big change in our relationship with computers. And it’s coming more quickly than you realize.” – Om Malik, venture capitalist and founder of Gigaom.

San Francisco-based Expect Labs is taking the idea of anticipatory computing to a new level with its much-vaunted MindMeld iPad app, which represents the convergence of three emerging technologies: mobile, voice recognition, and big data. Expect Labs recognizes that the massive consumer shift to mobile has fundamentally changed information discovery; typing text-based queries is not as convenient on smartphones and tablets as it is on PCs and laptops.

To resolve this, Expect has focused on harnessing the ability of mobile devices to capture ambient audio, visual, and location-based information (so-called “soft signals”) to interpret “meaning and intent from multiple different streams of sensor data.”

Thanks to advancements in cloud computing, context-based search engines, and data aggregation, we now have access to unprecedented levels of raw information. The difficulty lies in drawing useful meaning out of it all; it’s like trying to understand what one person is saying when one billion other people are talking at the same time.

However, sophisticated deep learning algorithms are now enabling computer programs to analyze real-time conversations to derive context and intent. With time and enough listening, these programs can even anticipate information that may be relevant in the future.

A more technical term for this phenomenon is “continuous predictive modeling.” When programs can access streams of Internet, social, mobile, and geo-locational data (so-called “Proactive Information Discovery”) and then filter and categorize it, they can predict or anticipate future actions.

As Expect Labs puts it, “analyzing and understanding a conversation over time can sometimes make it possible to anticipate information that may be relevant in the future.”
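A bare-bones version of continuous predictive modeling can be sketched as a model that counts which action tends to follow which in an observed event stream, then anticipates the most likely next action. The class and event names here are invented for illustration; real systems use far more sophisticated sequence models:

```python
from collections import Counter, defaultdict

# Minimal sketch: learn "what usually follows what" from an event stream.
class NextActionModel:
    def __init__(self):
        self.follows = defaultdict(Counter)

    def observe(self, stream):
        """Count each consecutive (previous action, next action) pair."""
        for prev, nxt in zip(stream, stream[1:]):
            self.follows[prev][nxt] += 1

    def predict(self, last_action):
        """Anticipate the most frequently observed follow-up action."""
        counts = self.follows.get(last_action)
        return counts.most_common(1)[0][0] if counts else None

model = NextActionModel()
model.observe(["wake", "weather", "traffic", "wake", "weather", "news"])
print(model.predict("wake"))  # weather
```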

To make its anticipatory computing engine more accurate, in December of 2012 Expect Labs teamed up with voice recognition technology firm Nuance, whose software also powers the voice recognition systems behind Google Now and Siri. To make its engine smarter, Expect Labs also inked a deal with Factual, a big data darling that reportedly has access to data on 58 million local businesses and points of interest in 50 countries.

Expect Labs and companies like it represent the vanguard of what will likely become the dominant paradigm for anticipatory recommendations: mobile + voice recognition + big data.

Convergence with IoT

Expect Labs believes its anticipatory computing technology can even provide the structural interface needed to help enable the emerging Internet of Things (IoT). The company’s founder, Tim Tuttle, imagines a future where our homes are packed with tablets, smartphones, and built-in computer technologies that benefit from anticipatory computing. All these machines, notes Tuttle, “are going to listen to everything you say and be able to assist you with the right song, map, or recipe, without you even having to ask…In a couple years when you’re wearing a wristwatch that’s intelligent, Google Glass, have smart panels and a Nest thermostat on every wall, you’re going to need the technology to make everything work.”

As Kit Eaton writing for Fast Company muses, anticipatory computing could create a creepy future where devices are always listening to you.

The Race to Replicate the Human Brain

In early 2013, a team of over 200 top-level researchers from over 100 institutions worldwide secured $1.6 billion in funding for the Human Brain Project, a colossal effort to artificially re-create the human brain. Located in Lausanne, Switzerland, it is already being called the “CERN for the brain.” (Remember CERN, also located in Switzerland? It is home to the Large Hadron Collider, the last great international effort to discover something seemingly undiscoverable, in that case the Higgs boson, or “God particle.” They did it in two years, with the machine running at only half power.)

The group hopes that the project will also speed up advancements in supercomputing. With recent advancements in areas such as parallel processing and quantum computing, it is not inconceivable that the Human Brain Project will succeed sooner rather than later.

On a larger scale, the convergence of technologies like MindMeld and initiatives like the Human Brain Project may eventually conspire to bring forth the higher-level artificial general intelligence (AGI) that is the stuff of science fiction. Sooner than many realize or are prepared to accept, we may be able to create lifelike, robotic personal assistants that rival, and eventually even surpass, human intelligence. Indeed, the speed of advancement in human-like robot (android) technology, led in large part by Japanese researchers, is itself a cause for amazement.

Check out this video of very human-like Japanese robots, keeping in mind it was filmed in late 2011!

Interested in learning more? Watch this video of none other than Google Director of Engineering Ray Kurzweil at Google I/O in June of 2014 speaking on what he believes is the near future of artificial intelligence.

It’s also interesting to note that the two companies powering the largest search engines on Earth, namely Google and Microsoft, are in a bit of an artificial intelligence arms race, with Google’s secretive Google X lab and Google Brain initiative vying against Microsoft’s Project Adam.


  1. In what ways will anticipatory recommendations redefine the traditional search function?
  2. How can brands capitalize on anticipatory recommendations to enhance their digital marketing efforts?
  3. What are the ethical implications of creating an artificial general intelligence that rivals and even surpasses human intelligence?