Distributed GPU-based platform. Local, cloud, or hybrid configuration. Sub-second query speeds across many tens of billions of records.
Direct integration with existing operations platforms deployed widely throughout municipalities and utilities globally.
AI-based algorithmic extraction and sophisticated image processing, trained using human-assisted machine learning.
Disaster Intelligence is the first company to envision the use of cognitive technologies applied purely to the challenge of Emergency Management. The platform can be utilized by municipalities, State & Federal agencies, major utilities & insurance underwriters to optimize operations & accountability at all phases of Emergency Management.
Our hybrid solutions are built on Open Source technologies like Kubernetes, RAPIDS, & Arrow. DISASTER INTELLIGENCE CORE™, and the complementary analytics suite DISASTER ANALYTICS™, are unmatched in their ability to deliver real-time situational awareness & analysis to emergency management professionals, with baseline capabilities well beyond all historical precedents.
By employing Augmented Intelligence, our platform extends human cognitive function by pairing people with the blistering computational performance of highly parallel, GPU-based distributed computing. Our platform is designed to be feedback-driven, self-learning, and self-assuring, emulating and extending human cognitive abilities, not replacing them.
Our hybrid GPU-based platform supports local, cloud, or hybrid configuration depending on the technical requirements of the moment, and delivers sub-second query speeds across many tens of billions of data points, combined with an immersive, real-time geospatial data visualization framework that allows contextually relevant data exploration and forecasting at the speed of thought. No other architecture provides this level of robust flexibility or performance to those working in a response & recovery context.
MEMORY BANDWIDTH MATTERS
GPU memory has always been faster than CPU memory, and with the release of Pascal, Nvidia upped the game even further by nearly tripling memory bandwidth.
Volta has now increased performance 150% above Pascal, leaving CPU memory in the dust.
Because big data analytics is typically I/O-bound, memory bandwidth is fundamental to performance.
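The bandwidth argument can be made concrete with back-of-the-envelope arithmetic: for an I/O-bound scan, the lower bound on query time is simply data volume divided by memory bandwidth. The bandwidth figures below are illustrative round numbers, not vendor specifications.

```python
# Rough lower bound on full-scan time for a bandwidth-bound query.
# Bandwidth numbers are illustrative assumptions, not vendor specs.

def scan_time_seconds(num_rows: int, bytes_per_row: int, bandwidth_gb_s: float) -> float:
    """Best-case scan time when the query is limited by memory bandwidth."""
    total_bytes = num_rows * bytes_per_row
    return total_bytes / (bandwidth_gb_s * 1e9)

rows = 10_000_000_000            # 10 billion records
row_size = 8                     # one 8-byte column scanned per record

cpu = scan_time_seconds(rows, row_size, 100.0)   # ~100 GB/s DDR4-class, assumed
gpu = scan_time_seconds(rows, row_size, 900.0)   # ~900 GB/s HBM2, V100-class, assumed

print(f"CPU scan: {cpu:.2f} s, GPU scan: {gpu:.3f} s")
```

Under these assumptions a single-column scan of 10 billion records takes roughly 0.8 s at CPU memory speeds but under 0.1 s at GPU memory speeds, which is why bandwidth, not core clock rate, dominates this class of workload.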
MASSIVE DENSITY YIELDS MASSIVE PERFORMANCE
While individual GPU cores do not match CPU cores in performance, the massive gap in core density makes per-core performance a moot point.
While individual CPUs may have 20 cores, a single Nvidia V100 GPU has 5,120 CUDA cores and 640 Tensor cores. This provides massive parallel computing capabilities that crush what is possible on CPU-based architectures.
Integrated Open Source & Technology Partners
With groundbreaking Open Source technologies like Arrow & Kubernetes at its core, Disaster Intelligence™ is unmatched in its ability to deliver real-time analysis for emergency management professionals.
DISASTER INTELLIGENCE™ has designed and built its entire software stack utilizing Apache Arrow. With Arrow, passing data between Arrow-compliant frameworks requires no data conversions. For framework developers, that means writing fewer connectors; for users, it means more interoperability at faster speeds. The Arrow memory format supports zero-copy reads for lightning-fast data access without any serialization overhead.
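The zero-copy idea behind Arrow can be illustrated with nothing but Python's standard library: a `memoryview` is a window onto an existing buffer rather than a copy of it, which is the same principle Arrow applies when multiple frameworks reference one shared columnar buffer instead of serializing data between them. This is an analogy, not Arrow's actual API.

```python
# Zero-copy reads, illustrated with stdlib memoryview. Arrow applies the
# same principle across frameworks: consumers reference one shared buffer
# instead of serializing and deserializing copies.

import array

buf = array.array("d", range(1_000_000))   # 1M float64 values in one buffer

view = memoryview(buf)        # no copy: a window onto buf's memory
slice_view = view[100:200]    # slicing a memoryview is also zero-copy

# Mutating the source is visible through the slice, proving shared memory.
buf[100] = -1.0
print(slice_view[0])          # -1.0
```

Because no bytes are duplicated, "reading" a slice costs the same regardless of how large the underlying buffer is, which is where the serialization savings come from.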
Apache Arrow is backed by key developers of 13 major open source projects, including Calcite, Cassandra, Drill, Hadoop, HBase, Ibis, Impala, Kudu, Pandas, Parquet, Phoenix, Spark, and Storm, making it the de facto standard for columnar in-memory analytics.
Best in Class Data Providers & Standards Support
...and many, many more
Context Is Everything
DISASTER INTELLIGENCE’s cross-filter paradigm meets the need for modern self-service data discovery, critical when working with today’s massive data sets. When users click on any dimension in a chart or graph, we simultaneously redraw every other visualization in a dashboard to reflect the new context. This is a transformative way to quickly find correlations and outliers in data.
Multiple analysts can simultaneously display visualizations with dozens of distinct datasets in their own dashboards, NEVER having to join underlying tables. This saves data preparation time and uncovers surprising multi-factor relationships that analysts may never consider looking for in a visualization system that can handle only one data source with fewer records.
Each chart (or group of charts) in a dashboard can now point to a different table, and filters are applied at the dataset level. These unique multisource dashboards expand an analyst's ability to compare across datasets in ways no competing solution can match.
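The cross-filter mechanic described above can be sketched in a few lines: selecting a value on one dimension filters the records, and every other dimension is then re-aggregated under that selection. The field names and records below are made up for illustration; the real platform does this on the GPU across billions of rows.

```python
# Minimal sketch of cross-filtering: a "click" on one dimension becomes a
# filter, and every other chart's counts are recomputed in that context.
# Dimension names and records are hypothetical.

from collections import Counter

records = [
    {"region": "north", "hazard": "flood", "severity": "high"},
    {"region": "north", "hazard": "fire",  "severity": "low"},
    {"region": "south", "hazard": "flood", "severity": "high"},
    {"region": "south", "hazard": "flood", "severity": "low"},
]

def crossfilter(rows, **selected):
    """Apply the current selections, then recount every other dimension."""
    filtered = [r for r in rows if all(r[k] == v for k, v in selected.items())]
    dims = {k for r in filtered for k in r} - selected.keys()
    return {d: Counter(r[d] for r in filtered) for d in dims}

# Clicking hazard=flood redraws the region and severity charts in context.
print(crossfilter(records, hazard="flood"))
```

Each simulated click is one filtered pass plus a recount per remaining dimension, which is exactly the kind of embarrassingly parallel scan-and-aggregate work that maps well onto GPU cores.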
Users can create geo charts with multiple layers of data and visualize the relationship between factors within any geographic area, with each layer representing a distinct metric overlaid on the same map. These metrics may come from the same or different underlying datasets. Analysts can freely add multiple layers, reorder them, show or hide visualizations by layer, or simply adjust opacity, and they refresh in milliseconds.
The platform redefines operational analytics by giving users the power to query and visually explore highly diverse, multi-billion row, high-velocity datasets on their own. With instantaneous query and visualization response, teams dramatically improve situational awareness and decision-making.
When developing an API, one of the most important considerations in the entire development cycle is the architecture upon which the system will be built.
We chose to build the DISASTER INTELLIGENCE & DISASTER ANALYTICS platform on Apache Thrift, and not only for performance. Thrift's forward-thinking architecture, a result of its adoption of soft versioning, allows RPC calls to be freely developed and implemented with a central library or repository functioning as a standard codebase. As future technologies emerge, many will require more complex, experimental, and forward-thinking features than REST or SOAP can provide.
But we didn't stop there. Recognizing the need to simplify service-level integration, we provide a robust API Proxy, offering native REST- and SOAP-based platforms a simplified integration path.
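Soft versioning is worth a concrete sketch. In Thrift, every struct field carries a numeric id, and readers skip ids they do not recognize, so an old client keeps working against a newer service. The toy wire format below is a plain list of (id, value) pairs, not Thrift's actual binary protocol, and the field names are hypothetical.

```python
# Sketch of Thrift-style soft versioning: fields are identified by numeric
# ids, and a decoder silently skips ids it does not know, so old clients
# tolerate messages from newer servers. Toy wire format, not real Thrift.

KNOWN_FIELDS_V1 = {1: "incident_id", 2: "latitude", 3: "longitude"}

def decode_v1(wire_fields):
    """Decode only the fields a v1 client knows; skip the rest."""
    record = {}
    for field_id, value in wire_fields:
        name = KNOWN_FIELDS_V1.get(field_id)
        if name is not None:
            record[name] = value
        # An unknown field_id (added by a newer server) is skipped, not an error.
    return record

# A v2 server added field 4 (wind speed); the v1 client still decodes cleanly.
message = [(1, "INC-042"), (2, 29.95), (3, -90.07), (4, 61.0)]
print(decode_v1(message))   # {'incident_id': 'INC-042', 'latitude': 29.95, 'longitude': -90.07}
```

This is why new RPC fields can be rolled out from a central schema repository without lock-step upgrades of every consumer.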
Benchmark: time to execute 1M service calls, in seconds
SCALABILITY: Supporting cross-language services seamlessly between C#, C++, Cocoa, Haskell, and more
SPEED: Thrift uses binary serialization to handle data, providing significant performance gains
EVOLUTION: Allowing for soft versioning, supporting third-party teams to develop RPC calls as needed
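The speed point comes largely from encoding: a binary layout packs the same record into far fewer bytes than a text format, with no parsing of digits and punctuation on the read side. A stdlib comparison makes the gap visible; the record fields are hypothetical, and `struct` here stands in for the spirit of Thrift's binary protocol, not its exact encoding.

```python
# Binary vs text serialization of the same record, using only the stdlib.
# struct is a stand-in for binary protocols like Thrift's, not its format.

import json
import struct

record = (42, 29.951, -90.071)                 # id, latitude, longitude

binary = struct.pack("<idd", *record)          # int32 + 2 x float64 = 20 bytes
text = json.dumps({"id": 42, "lat": 29.951, "lon": -90.071}).encode()

print(len(binary), len(text))                  # binary is roughly half the size
```

Fewer bytes on the wire and fixed-width fields to decode translate directly into the service-call throughput gains shown in the benchmark above.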
Today's large-scale natural disasters, whether hurricanes, wildfires, or major flooding events, all create data at a pace that outstrips the capability of existing CPU-based solutions. As a result, global leaders need the performance that extreme analytics provides when the problem is too big, too important, and too critical to human life to trust to platforms architected for an earlier era.
This compute and visualization inflection point has broad implications for operational & geospatial analytics in Emergency Management, not to mention the related data science, research, and discovery on existing big data sets.