
Context, Then Concepts, Words Last

The “five forces of context” (mobile, social media, data, sensors and location) have been called the future of computing. Why? Because they may finally give computers the ability to understand “your context”.

Analysts under time and deadline pressure need to know that the information distilled by an AI solution is relevant to their context and is not simply the result of keyword searches. Understanding context is foundational to the collaborative AI solution offered by our partner BEA.

In another guest article, Tom Marsh, CTO at Boulder Equity Analytics (BEA), talks about context and why, without it, computer algorithms will never be able to truly know “you”.

Since Robert Scoble and Shel Israel just released their new book “The Fourth Transformation”, I decided to revisit “Age of Context”, a global survey of the forces influencing technology. The five forces were mobile, social media, data, sensors and location. Scoble called these the “five forces of context” and the future of computing. The five forces are still with us, but they are hardly tamed or in the rear-view mirror.

Revisiting this topic three years later (see my ai-one post), there is still a lot of work to do in mainstream applications of cognitive computing. For our BEA clients, building analytics for analysts who are under time and deadline pressure and faced with exploding amounts of information remains a critical challenge.

Why is context so important?

First, context is fundamental to our ability to understand the text we’re reading and the world we live in. When reading a sentence, you draw on the semantics of the words, the sentence, the paragraph, and the context of the page, chapter, book and prior works or conversations. Added to that are your education and experience, and your reasons and objectives for reading it. This diagram from Chris Campion’s blog is instructive in this regard.

Second, broaden the challenge and it overlaps with personal intelligent agents (Siri, Alexa, Cortana, Google Now) and the bigger problem of complexity. The inability to provide context has always made it difficult for computers and people to understand each other. Three years ago Scoble felt Google Glass could be the enabling breakthrough; now maybe VR will be the answer.

People, and the language we use to describe the world, form a complex system. No matter how much data is crunched or how sophisticated the technology (e.g., new-generation convolutional neural nets), you can’t be reduced to an algorithm. Tools that claim personalization create approximations of you based on people like you, not the true you.

That difference matters, especially when automating the decisions of experts in investing, reinsurance contracts or roadmaps for new technologies. The intelligent agents that make those decisions need a complete model of a complex world.

The five forces of context as the foundation

Pete Mortensen addressed the problem of context around the same time in his article “The Future of Technology Isn’t Mobile, It’s Contextual.” Mortensen argues that the five forces are finally giving computers the foundational information needed to understand “your context”, and that context is expressed in four graphs. These data graphs are:
 
•    Social (friends, family and colleagues)
•    Interest (likes & purchases)
•    Behavior (what you do & where)
•    Personal (beliefs & values)
 
At BEA, we build the foundations for context by extracting Mortensen’s graphs from the complex data generated by your digital activity, your domain and a complete view of the material you’re researching. We’ve been using our technology to process and store unstructured and structured text in diverse domains, delivering that knowledge through powerful visualizations of models that until now existed only on the page and in your head.
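To make the four graphs concrete, here is a minimal sketch of how they might be represented in code. The class and field names are illustrative assumptions, not BEA’s actual data model.

```python
from dataclasses import dataclass, field

# Illustrative toy schema for Mortensen's four context graphs.
# All names and fields here are assumptions, not BEA's real schema.
@dataclass
class ContextGraphs:
    social: dict = field(default_factory=dict)    # person -> relationship (friends, family, colleagues)
    interest: list = field(default_factory=list)  # likes and purchases
    behavior: list = field(default_factory=list)  # (what you do, where) pairs
    personal: list = field(default_factory=list)  # beliefs and values

analyst = ContextGraphs(
    social={"J. Doe": "colleague"},
    interest=["reinsurance contracts", "convolutional neural nets"],
    behavior=[("reading filings", "Boulder, CO")],
    personal=["transparency over black-box results"],
)
print(analyst.interest)
```

In practice each graph would be a richer structure (nodes, edges, weights), but even this toy version shows how the four dimensions keep a person’s context separable and queryable.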

The five forces of context – and beyond

So first, we get the context right. Then we add metadata and attributes using NLP, entity extraction, sentiment analysis and other proprietary linguistic algorithms. The result is processed, indexed and stored via Python middleware that allows us to extract all possible attributes for each paragraph.
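Our linguistic algorithms are proprietary, but the general shape of paragraph-level enrichment can be sketched with open-source tools. The snippet below uses spaCy for entity extraction and a toy keyword lexicon standing in for a real sentiment component; everything in it is a generic assumption, not our production pipeline.

```python
import spacy

# Assumes spaCy and its small English model are installed:
#   pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

# Toy lexicon standing in for a real sentiment algorithm.
POSITIVE = {"growth", "gain", "strong"}
NEGATIVE = {"loss", "risk", "weak"}

def enrich(paragraph: str) -> dict:
    """Attach entity and naive sentiment attributes to one paragraph."""
    doc = nlp(paragraph)
    tokens = {t.lower_ for t in doc}
    return {
        "text": paragraph,
        "entities": [(ent.text, ent.label_) for ent in doc.ents],
        "sentiment": len(tokens & POSITIVE) - len(tokens & NEGATIVE),
    }

sample = ("Acme posted strong growth in Q3. "
          "Analysts in New York flagged currency risk.")
print(enrich(sample))  # one attribute record per paragraph
```

Each enriched record can then be indexed so that every attribute of every paragraph is available to downstream models and visualizations.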

Running in the cloud and integrated with big data sources and ecosystems of existing APIs and applications, we can quickly create and test your investment models or add intelligence to old ones. Our interactive Tableau visualizations respect the analyst’s expertise and neutralize human bias while acknowledging the limitations of the technology, never delivering “black box” results that aren’t transparent and verifiable.

You work with concepts and use context constantly in your offline life.  We work to bring the same intuitive and human experience to your work as an analyst, without compromising your privacy or creating an approximation of you from the data of others.
 

Tom

@tom_semantic

KDD Analytics and Boulder Equity Analytics are partnering to deliver collaborative artificial intelligence to the financial and competitive analysis industries.