Enabling autonomous, real-time, embedded intelligence

The exponential growth of wireless infrastructure underpins today's always-connected lifestyle. The shift from isolated devices to always-connected networks has had a dramatic impact on application architectures: it enables devices and customers to seamlessly access vast amounts of data, while allowing back-office systems to derive intelligence about the connected population to tap additional sources of revenue and/or raise service levels.

Digesting all this new information requires unprecedented levels of computation, both at the device/client level and at the service/server level. At the device, this computation must be delivered at power levels measured in milliwatts; at the server, the computation must be elastic and scalable to cope with rapidly changing demands at potentially global, Internet scale.

The economic scale of this new dawn of intelligent devices is phenomenal. More than a billion smartphones are in operation today, and smart cities and supply chains are generating exabytes of information that can be mined for valuable insights to improve service levels and revenue.

Knowledge Processing

Creating intelligence, or knowledge, from raw information requires clever methods to extract the inherent relationships contained in that information. These relationships hold the gems of insight that lead to higher levels of abstraction and understanding. For example, a common approach to knowledge creation uses statistical regression methods, whose fundamental mathematical problem is a minimization. Constraint solvers of this type embody the core of knowledge creation in machine learning and business intelligence.
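As a concrete illustration (a minimal NumPy sketch, not Stillwater code), ordinary least-squares regression extracts a linear relationship from noisy observations by solving the minimization problem min over x of ||Ax - b||^2:

    # Least-squares regression: find x minimizing ||A x - b||^2.
    # Illustrative sketch of the minimization at the heart of
    # statistical regression; data and sizes are arbitrary.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 3))        # 100 observations, 3 features
    x_true = np.array([1.5, -2.0, 0.5])
    b = A @ x_true + 0.01 * rng.standard_normal(100)  # noisy measurements

    # Solve the minimization with a numerically stable solver.
    x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(x_hat)                             # approximately [1.5, -2.0, 0.5]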

Another area of knowledge creation involves the numerical solution of mathematical models that describe the phenomena of interest. This class of applications is characterized by a discretization of space and a discretization of the 'physics'. The latter term is a historical reference to the origin of many of these methods in describing the behavior of the physical world; the same techniques are now applied to non-physical domains such as econometrics, option trading, and even social network analysis. The solution strategies for these mathematical models also involve constraint solvers, but the additional structure of the underlying 'physics' provides mechanisms to improve the computational efficiency of the solver.
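For instance (a schematic example under simplified assumptions, not tied to any particular product), discretizing the one-dimensional Poisson equation -u''(x) = f(x) on a uniform grid turns the continuous 'physics' into a tridiagonal linear system whose structure a solver can exploit:

    # Finite-difference discretization of -u''(x) = f(x) on [0, 1]
    # with u(0) = u(1) = 0; the 'physics' becomes a tridiagonal system.
    import numpy as np

    n = 99                        # interior grid points
    h = 1.0 / (n + 1)             # grid spacing
    x = np.linspace(h, 1 - h, n)
    f = np.sin(np.pi * x)         # example source term

    # Tridiagonal matrix from the second-difference stencil [-1, 2, -1] / h^2
    main = 2.0 * np.ones(n)
    off = -1.0 * np.ones(n - 1)
    A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2

    u = np.linalg.solve(A, f)     # exact solution is sin(pi x) / pi^2
    print(np.max(np.abs(u - f / np.pi**2)))   # small discretization error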

Finally, embedded intelligence applications that take in observational data and need to compose a model, or comprehension, of their environment also depend on constraint solvers. The front-end of these systems requires signal processing operations such as filtering, peak detection, and spectral analysis. These signal processing operations are generally algebraic, with the unique requirement that results be delivered as fast as the data comes in; in other words, in real time. After the signal processing stage, intelligence, or knowledge, is created through sensor fusion, model building, parameter estimation, and image processing techniques such as optical flow.
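The sketch below (NumPy/SciPy, illustrative only; the sample rate, tone frequencies, and peak threshold are arbitrary choices for the example) shows one typical front-end step: computing the magnitude spectrum of a sampled signal and detecting its spectral peaks:

    # Front-end signal processing sketch: spectral analysis followed
    # by peak detection on a sampled two-tone signal.
    import numpy as np
    from scipy.signal import find_peaks

    fs = 1000.0                              # sample rate in Hz
    t = np.arange(0, 1.0, 1.0 / fs)
    signal = (np.sin(2 * np.pi * 50 * t)             # 50 Hz component
              + 0.5 * np.sin(2 * np.pi * 120 * t))   # 120 Hz component

    spectrum = np.abs(np.fft.rfft(signal))   # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)

    peaks, _ = find_peaks(spectrum, height=50)
    print(freqs[peaks])                      # approximately [50., 120.]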

In all three examples, machine learning, computational physics/chemistry/engineering, and embedded intelligence, knowledge creation is driven by constraint solvers that must operate on vast amounts of data, in real time, and frequently on very small power budgets.

Constraint Solver Dynamics

Intelligent systems depend on constraint solvers for their intelligence. Unfortunately, constraint solvers are complex algorithms with very strict numerical requirements. Research scientist Hans Moravec mapped out a path toward intelligent systems by estimating the equivalence between MIPS and levels of autonomous intelligence, shown in the following figure:

[Figure: Moravec's estimate of the MIPS required for successive levels of autonomous intelligence]
From this graphic it is clear that sequential processing based on the Random Access Machine model is not a good fit for delivering the performance, power, and cost required for embedded intelligence. This is easy to understand if we look at how biological systems are put together: all biological systems are highly distributed, highly redundant, and highly event driven.

To simplify the construction of intelligent systems, it is beneficial to adopt this distributed, event-driven model of computation. The Stillwater Knowledge Processing Unit™ is the industry's first distributed data flow machine that can directly execute these distributed, sensory-driven applications.
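To make the execution model concrete, the following toy interpreter (plain Python, purely illustrative; the KPU realizes this in hardware) demonstrates the classic data flow firing rule: a node executes as soon as all of its input tokens have arrived, so execution is driven by data availability rather than by a program counter:

    # Toy data flow interpreter: a node fires when all inputs are present.
    class Node:
        def __init__(self, name, func, arity):
            self.name, self.func, self.arity = name, func, arity
            self.tokens = []             # input tokens received so far
            self.consumers = []          # downstream nodes

        def receive(self, value, ready):
            self.tokens.append(value)
            if len(self.tokens) == self.arity:   # the firing rule
                ready.append(self)

        def fire(self, ready):
            result = self.func(*self.tokens)
            self.tokens = []
            for consumer in self.consumers:
                consumer.receive(result, ready)
            return result

    # Graph for (a + b) * (a - b), driven entirely by token arrival.
    add = Node("add", lambda x, y: x + y, 2)
    sub = Node("sub", lambda x, y: x - y, 2)
    mul = Node("mul", lambda x, y: x * y, 2)
    add.consumers.append(mul)
    sub.consumers.append(mul)

    ready = []
    for n, (x, y) in ((add, (3, 4)), (sub, (3, 4))):
        n.receive(x, ready)
        n.receive(y, ready)
    while ready:
        last = ready.pop(0).fire(ready)
    print(last)                          # (3+4) * (3-4) = -7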

Stillwater Knowledge Processing Unit™

The Stillwater Knowledge Processing Unit is an ideal solution to the problem of creating scalable, real-time intelligent systems, particularly where power constraints on the overall system are important. Its distributed data flow architecture provides a low-power, high-performance execution engine that is tightly matched to the characteristics of the constraint solvers that form the computational bottleneck of knowledge creation. This enables new levels of performance in embedded applications where high-performance processing imbues the device with autonomous behavior, or provides value-added services at the human-machine interface. Finally, a domain flow programming model offers a high-productivity software development environment in which fine-grain parallel algorithms can be described without the burden of expressing scheduling, resource contention resolution, and thread coordination. The Stillwater KPU provides the execution engine that takes care of all these constraints by directly executing the data flow graph of the algorithm.
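As a sketch of what this programming model affords (hypothetical notation, emulated sequentially here; the actual domain flow environment may differ), a fine-grain parallel matrix multiply can be stated as a recurrence over an index domain, with no explicit scheduling, resource arbitration, or thread coordination. The dependencies of the recurrence define the data flow graph that the KPU executes directly:

    # Matrix multiply expressed as a recurrence over an index domain:
    #   c[i, j, k] = c[i, j, k-1] + a[i, k] * b[k, j]
    # No schedule is specified; every (i, j, k) point may fire as soon
    # as its operands are available.
    import numpy as np

    N = 4
    a = np.arange(N * N, dtype=float).reshape(N, N)
    b = np.eye(N)

    c = np.zeros((N, N))
    # Sequential emulation of the recurrence; on a data flow machine
    # the N^3 index points execute concurrently as dependencies resolve.
    for i in range(N):
        for j in range(N):
            for k in range(N):
                c[i, j] += a[i, k] * b[k, j]

    assert np.allclose(c, a @ b)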




Leverage the Stillwater Knowledge Processing Platform to jumpstart your product development. Contact Stillwater to learn how to get started.