
Artificial intelligence for data analytics

Kathy Gibson reports from MyWorld of Tomorrow – Artificial intelligence (AI) could finally be coming into its own as a means of analysing the vast quantities of data being produced by the Internet of Things (IoT).

Carl Wocke, founder of Merlynn, points out that one of the biggest challenges with the IoT is to build appropriate logic into these systems.

Traditional data mining is about building insights using historical data. “But there is a challenge around the scale and width of the data,” says Wocke. “Data mining is a massive challenge and you need an IBM Watson to cope with that. And even then hardware cannot cope with all the data.”

The other option – and usually the best solution, he says – is to ask a human.

“This is what we do. We believe the world will be a better place if we can get computers that think like key individuals.”

Wocke set up Merlynn to attempt to work out how people think, and to clone the way they make decisions. It’s what he calls the “grandfather dream”, where he imagines a world where you could talk to your grandparents even after they are gone.

“Knowledge and insight disappear when someone dies,” he says. “The concept was to build a framework that can house this intelligence. Merlynn is in the people-cloning business, with specific emphasis on decision cloning. We create virtual versions of your decisioning capabilities.”

Wocke points out that the IoT is about getting all the various data elements to talk to each other, and share what they are experiencing. Typically these data points would draw conclusions from various external factors.

“What I would like to see is that intelligence going into virtual experts,” Wocke says. “We would rather bring in the expertise of real people.”

Examples of systems that already have human-inspired thinking include a war room, where there is a layer of expertise making mission decisions all the time. In humans, the more decisions experts make, the more those decisions occur on a subconscious level.

“In a war room context you have key individuals making decisions. But they get stressed, then go home. We now have the ability to staff the war room with experts who can assess millions of transactions per second without ever getting tired.”

In the call centre space, a local company has built a system based on their top call centre agent. In fact, says Wocke, the virtual agent is more efficient.

Human capital management using artificial intelligence can help with staff retention by scanning the profiles of all staff members of an organisation every day.

A forex risk agent can mimic three traders, which helps to mitigate risk.

“We have the ability to create a virtual version of you that looks at how you operate and comments when you act in an atypical fashion,” Wocke says.

When the state of Utah identified a problem with recidivism, Wocke created four virtual sheriffs that can scan the profiles of all released offenders; each scans for different characteristics to predict behaviour and one of them recommends which action to take.
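The architecture described here, several cloned decision models scanning the same profile for different characteristics, with one producing a recommendation, can be sketched in a few lines. The scan functions, features, thresholds and actions below are entirely hypothetical illustrations, not Merlynn's actual models.

```python
# Hypothetical sketch of the "four virtual sheriffs" pattern: three
# scanners each score one risk characteristic of a released offender's
# profile, and a fourth turns the scores into a recommended action.
# All feature names, weights and thresholds are invented for illustration.

def scan_substance(profile):
    return 1.0 if profile.get("substance_history") else 0.0

def scan_employment(profile):
    return 0.0 if profile.get("employed") else 0.8

def scan_priors(profile):
    return min(profile.get("prior_offences", 0) / 5.0, 1.0)

def recommend_action(profile, scores):
    # The fourth "sheriff": aggregate the other scans into an action.
    risk = sum(scores) / len(scores)
    if risk > 0.6:
        return "schedule check-in"
    if risk > 0.3:
        return "monitor"
    return "no action"

profile = {"substance_history": True, "employed": False, "prior_offences": 2}
scores = [scan_substance(profile), scan_employment(profile), scan_priors(profile)]
print(recommend_action(profile, scores))  # risk ≈ 0.73 → "schedule check-in"
```

Keeping the scanners independent mirrors the article's point: each virtual sheriff encodes one expert's focus, and only the aggregator decides what to do.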

Stock control is a problem in many organisations that can be solved by virtual agents.

And even virtual doctors are making an appearance, helping to make key decisions that could save lives.

“Imagine an interconnected system acknowledging multiple tiers of inputs and creating insights as if domain specialists were commenting in real time.

“It’s about making millisecond decisions that require a range of inputs. It allows you to assess vast amounts of data and assess them through the eyes of experts.”

Merlynn’s technology is called Tacit Object Modeller (TOM). Tacit knowledge could be equated to “gut instinct”, says Wocke. “TOM uses the latest advances in machine learning to learn from your top experts, not from big data. It looks at how they make decisions – what they consider and what they ignore; what the sanity checks are; and what the decision rules are.”
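The idea of learning from an expert's past decisions rather than from big data can be illustrated with a minimal sketch: record cases an expert has already decided, then mimic the expert by recalling the most similar past decision. This is not TOM's actual method, and the cases, features and distance function are invented for illustration.

```python
# Minimal decision-cloning sketch: a "virtual expert" that reproduces
# an expert's decision on a new case by nearest-case recall over the
# expert's recorded decisions. All data here is hypothetical.

expert_cases = [
    # (amount, customer_years, flagged_before) -> the expert's decision
    ((100, 5, 0), "approve"),
    ((5000, 1, 1), "refer"),
    ((200, 3, 0), "approve"),
    ((8000, 0, 1), "decline"),
]

def distance(a, b):
    # Crude scaled distance; a real system would also learn which
    # features the expert considers and which they ignore.
    return abs(a[0] - b[0]) / 1000 + abs(a[1] - b[1]) + abs(a[2] - b[2])

def virtual_expert(case):
    # Mimic the expert by recalling the most similar decided case.
    _, decision = min(expert_cases, key=lambda kv: distance(kv[0], case))
    return decision

print(virtual_expert((150, 4, 0)))   # "approve"
print(virtual_expert((7000, 0, 1)))  # "decline"
```

The sketch captures the contrast Wocke draws: the model's knowledge comes from a handful of expert judgments, not from mining a large historical dataset.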

TOM has been field tested for over five years in a range of industries including healthcare, insurance, banking and recruitment.

The system can be made infinitely scalable as a decision support tool for field workers, as a virtual advisor, as a training application for less experienced workers, and as a way to ensure domain-specialist consistency.

Within the next few months, Merlynn plans to launch www.uptotom.com, where individuals can go online and create virtual clones of themselves.

  • petergkinnon

Most folk still seem unable to break free from traditional, science-fiction-based notions involving individual robots/computers, either as potential threats, beneficial aids or a serious basis for “artificial intelligence”.

    In actuality, the real next cognitive entity quietly self assembles in the background, mostly unrecognized for what it is. And, contrary to our usual conceits, is not stoppable or directly within our control.

    We are very prone to anthropocentric distortions of objective reality. This is perhaps not surprising, for to instead adopt the evidence based viewpoint now afforded by “big science” and “big history” takes us way outside our perceptive comfort zone.

The fact is that the evolution of the Internet (and, of course, major components such as Google) is actually an autonomous process. The difficulty in convincing people of this “inconvenient truth” seems to stem partly from our natural anthropocentric mind-sets and also the traditional illusion that in some way we are in control of, and distinct from, nature. Contemplation of the observed realities tends to be relegated to the emotional “too hard” bin.

    This evolution is not driven by any individual software company or team of researchers, but rather by the sum of many human requirements, whims and desires to which the current technologies react. Among the more significant motivators are such things as commerce, gaming, social interactions, education and sexual titillation.

Virtually all interests are catered for and, in toto, provide the impetus for the continued evolution of the Internet. Netty is still in her larval stage, but we “workers” scurry round mindlessly engaged in her nurture.

    By relinquishing our usual parochial approach to this issue in favor of the overall evolutionary “big picture” provided by many fields of science, the emergence of a new predominant cognitive entity (from the Internet, rather than individual machines) is seen to be not only feasible but inevitable.

The separate issue of whether it will be malignant, neutral or benign towards us snoutless apes is less certain, and this particular aspect I have explored elsewhere.

Stephen Hawking, for instance, is reported to have remarked: “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

    This statement reflects the narrow-minded approach that is so common-place among those who make public comment on this issue. In reality, as much as it may offend our human conceits, the march of technology and its latest spearhead, the Internet is, and always has been, an autonomous process over which we have very little real control.

    Seemingly unrelated disciplines such as geology, biology and “big history” actually have much to tell us about the machinery of nature (of which technology is necessarily a part) and the kind of outcome that is to be expected from the evolution of the Internet.

    This much broader “systems analysis” approach, freed from the anthropocentric notions usually promoted by the cult of the “Singularity”, provides a more objective vision that is consistent with the pattern of autonomous evolution of technology that is so evident today.

    Very real evidence indicates the rather imminent implementation of the next, (non-biological) phase of the on-going evolutionary “life” process from what we at present call the Internet. It is effectively evolving by a process of self-assembly. The “Internet of Things” is proceeding apace and pervading all aspects of our lives. We are increasingly, in a sense, “enslaved” by our PCs, mobile phones, their apps and many other trappings of the increasingly cloudy net. We are already largely dependent upon it for our commerce and industry and there is no turning back. What we perceive as a tool is well on its way to becoming an agent.

    There are at present an estimated 2 billion Internet users. There are an estimated 10 to 80 billion neurons in the human brain. On this basis of approximation, the Internet is even now only one order of magnitude below the human brain, and its growth is exponential.

    That is a simplification, of course. For example: not all users have their own computer, so perhaps we could reduce that figure, say, tenfold. The number of switching units – transistors, if you wish – contained by all the computers connecting to the Internet, which are more analogous to individual neurons, is many orders of magnitude greater than 2 billion. Then again, this is compensated for to some extent by the fact that neurons do not appear to be binary switching devices but instead can adopt multiple states.
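The commenter's back-of-envelope comparison is easy to check as arithmetic. Taking the figures as given in the comment (they are rough estimates, not measurements):

```python
import math

# Roughly 2 billion Internet users versus an estimated
# 10-80 billion neurons in the human brain.
users = 2e9
neurons_low, neurons_high = 1e10, 8e10

ratio_low = neurons_low / users    # 5x
ratio_high = neurons_high / users  # 40x

# Ratios of 5x and 40x: the user count sits within roughly one
# order of magnitude of the neuron count, as the comment claims.
print(math.log10(ratio_low), math.log10(ratio_high))
```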

    Without even crunching the numbers, however, we see that we must take seriously the possibility that even the present Internet may well be comparable to a human brain in processing power. And, of course, the degree of interconnection and cross-linking of networks within networks is also growing rapidly.

    The emergence of a new and predominant cognitive entity is a logical consequence of the evolutionary continuum that can be traced back at least as far as the formation of the chemical elements in stars.

    This is the main theme of my latest book “The Intricacy Generator: Pushing Chemistry and Geometry Uphill” which is now available as a 336 page illustrated paperback from Amazon, etc.

    Netty, as you may have guessed by now, is the name I choose to identify this emergent non-biological cognitive entity.