MapR awarded additional patent for Converged Data Platform

Company furthers innovation in an optimal architecture for big data.

MapR Technologies has been granted a patent (US 9,501,483) by the United States Patent and Trademark Office. The awarded patent reflects the company's advances in architecting a modern data platform that reliably runs mission-critical applications. In particular, it covers the key technology underpinning components of the MapR Converged Data Platform, including the multi-modal NoSQL database (MapR-DB) and the global streaming engine (MapR Streams).

“Strengthening our growing IP portfolio, this patent reinforces our commitment to allowing our customers to uniquely run both operational and analytical processing on a single platform,” said Matt Mills, CEO, MapR Technologies. “Unlike Apache Hadoop or alternative big data technologies, the patented Converged Data Platform provides a unified and fast access layer to any type of data. We enable companies to take advantage of next generation applications, creating innovation and advancing their business through digital transformation.”

The key patent claims include protection for file, table, and stream processing through the following technology advances:

· Convergence – Fundamental integration of tables, files, and streams into a converged data platform

· Fast processing with low latency – Ability to open tables without having to replay a log

· High availability and strong consistency – Continuous access and fast recovery while ensuring strong consistency

· Security – Secure snapshots and mirrors of all kinds of persistent data, such as files, tables, and streams, which helps avoid data loss even in extreme cases such as ransomware

Leveraging these inventions, the MapR Converged Data Platform delivers a core architecture for data-centric businesses across four key areas:

Enterprise-grade reliability in one platform – Vast scale with mission-critical reliability, disaster recovery, end-to-end security, and multi-tenancy let customers run next-generation big data applications in a business-critical, 24x7 environment that must never lose data.

Global, real-time, continuous data processing – Full read-write capabilities, low administration, automated optimisations, and immediate access to data all enable an end-to-end real-time environment that lets analysts continuously leverage data to gain critical business insights.

Continuous innovation – A patented core with standard APIs drives greater value from Apache Hadoop, Spark, and other open source projects. Advanced technologies allow greater scale and performance, while compliance with community-driven open APIs such as the industry-standard POSIX, NFS, LDAP, ODBC, REST, and Kerberos lets the key open source big data systems work with existing systems.

Foundation for converged analytics – A platform that enables multiple workloads in a single cluster lets customers run continuous analytics on both data at rest and data in motion, without the delay of moving data to a task-specific cluster. A single cluster that handles converged workloads is also easier to manage and secure.
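The open-API compatibility noted above (POSIX, NFS, and others) can be illustrated with plain file I/O: because MapR-FS presents a standard POSIX filesystem interface, typically mounted under a /mapr path, ordinary file operations work unchanged against cluster data. The sketch below uses a local temporary directory as a stand-in for such a mount; the path and file name are illustrative assumptions, not part of any MapR API.

```python
# Sketch: MapR-FS exposes a POSIX interface, so standard file I/O
# works against cluster paths. A local temp directory stands in
# for a /mapr/<cluster> mount so the example is self-contained.
import os
import tempfile

mount = tempfile.mkdtemp()                # stand-in for /mapr/<cluster>
path = os.path.join(mount, "events.log")  # illustrative file name

# Append records with plain POSIX writes; no special client library.
with open(path, "a") as f:
    f.write("sensor-42,ok\n")
    f.write("sensor-43,ok\n")

# Read them back with plain POSIX reads.
with open(path) as f:
    records = f.read().splitlines()

print(records)  # ['sensor-42,ok', 'sensor-43,ok']
```

The same pattern is what lets existing POSIX- and NFS-based tools read and write big data in place, rather than through a separate task-specific access layer.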
