Stillwater Supercomputing, Inc. announces general availability of its posit arithmetic library. Deep Learning applications have demonstrated the inefficiencies of the IEEE floating point format. Both Google and Microsoft have moved away from IEEE floating point in their AI cloud services to gain up to two orders of magnitude better performance. Similarly, AI workloads on mobile and embedded devices are shifting away from IEEE floating point. Deep Learning is hardly the only domain that exposes the limitations of floating point, however: cloud-scale, IoT, embedded, control, and HPC applications are likewise constrained by the inefficiencies of the IEEE format. A simple change to a new number system can improve the scale and cost of these applications by orders of magnitude. When performance and/or power efficiency are differentiating attributes for an application, the complexity of IEEE floats simply cannot compete with number systems tailored to the needs of the application.
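For readers unfamiliar with the format: a posit encodes a sign bit, a run-length-encoded "regime", optional exponent bits, and a fraction, giving tapered precision across its dynamic range. As a rough illustration only (a standalone sketch that follows the published posit definition; it is not the Stillwater library's API, and the function name and defaults are ours), here is a decoder for an 8-bit posit with zero exponent bits:

```python
def posit_to_float(raw, nbits=8, es=0):
    """Decode an nbits-wide posit with es exponent bits into a Python float.

    Sketch for illustration; follows the standard posit decoding rules:
    sign bit, run-length-encoded regime, es exponent bits, fraction.
    """
    mask = (1 << nbits) - 1
    raw &= mask
    if raw == 0:
        return 0.0                        # the single zero encoding
    if raw == 1 << (nbits - 1):
        return float('nan')               # NaR (Not a Real)
    sign = -1.0 if raw >> (nbits - 1) else 1.0
    if sign < 0:
        raw = (-raw) & mask               # negative posits: two's complement
    s = format(raw, f'0{nbits}b')[1:]     # bits after the sign bit
    first = s[0]
    run = len(s) - len(s.lstrip(first))   # regime run length
    k = run - 1 if first == '1' else -run
    rest = s[run + 1:]                    # skip the regime terminator bit
    e = int(rest[:es], 2) if es and rest[:es] else 0
    frac = rest[es:]
    f = int(frac, 2) / (1 << len(frac)) if frac else 0.0
    useed = 1 << (1 << es)                # useed = 2^(2^es)
    return sign * (useed ** k) * (2 ** e) * (1 + f)


# A few posit<8,0> encodings and their real values:
print(posit_to_float(0x40))   # 1.0
print(posit_to_float(0x50))   # 1.5
print(posit_to_float(0x60))   # 2.0
print(posit_to_float(0x7F))   # 64.0, the largest posit<8,0>
```

Note how precision tapers: values near 1.0 get the most fraction bits, while the regime run trades fraction bits for dynamic range at the extremes. This is the property that lets posits match or beat wider IEEE floats at a smaller bit budget.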
Stillwater Supercomputing, Inc. was founded in 2006 and incorporated in 2007 to develop the next-generation platform for computational science and engineering. Focused on improving the efficiency of execution, the Stillwater Knowledge Processing framework facilitates the creation of best-in-class intelligent systems, from high-performance embedded and autonomous systems to large-scale cloud-based systems for bioinformatics, science, and engineering. For more information about Stillwater products and services, visit www.stillwater-sc.com, or contact firstname.lastname@example.org.