Sinequa supercharges intelligent search powered by NVIDIA

Sinequa adopts NVIDIA DGX systems to accelerate and scale the development and training of its Neural Search and Large Language Models.

  • Posted 4 months ago

Leading AI-powered enterprise search provider Sinequa will develop powerful, enterprise-grade AI capabilities using NVIDIA accelerated computing to transform the modern workplace. The collaboration enables organizations to make full use of generative AI in an enterprise setting by pairing it with Sinequa’s Neural Search, fundamentally changing the way work gets done.


Using NVIDIA DGX H100 systems, the world’s most advanced enterprise AI infrastructure, Sinequa will develop custom large language models (LLMs) trained to deliver unmatched AI-powered Neural Search. This will allow employees to securely find answers based on an organization’s own content, enabling faster processes and decision-making. 
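Sinequa has not published the internals of Neural Search, but the general pattern behind embedding-based retrieval can be sketched in a few lines. The bag-of-words `embed` function below is a deliberately simplified stand-in for the trained transformer encoder a production system would use; all names and documents here are illustrative.

```python
from math import sqrt

def embed(text, vocab):
    """Toy bag-of-words vector over a fixed vocabulary, L2-normalized.
    A real neural-search engine would instead use a trained transformer
    encoder that maps text to a dense semantic vector."""
    counts = [text.lower().split().count(term) for term in vocab]
    norm = sqrt(sum(c * c for c in counts)) or 1.0
    return [c / norm for c in counts]

def search(query, docs, k=2):
    """Rank documents by cosine similarity to the query embedding."""
    vocab = sorted({tok for d in docs + [query] for tok in d.lower().split()})
    qv = embed(query, vocab)
    scored = sorted(
        ((sum(q * d for q, d in zip(qv, embed(doc, vocab))), doc) for doc in docs),
        reverse=True,
    )
    return [doc for _, doc in scored[:k]]

# Illustrative corpus standing in for an organization's own content.
docs = [
    "vacation policy for full-time employees",
    "quarterly revenue report for finance",
    "how to request vacation time off",
]
print(search("request vacation time", docs))
```

The key design point, which carries over to real systems, is that both queries and documents are mapped into the same vector space, so retrieval becomes a nearest-neighbor lookup rather than keyword matching.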


Sinequa will leverage the power of NVIDIA systems and software to develop new LLMs, improve the accuracy of existing models, and quickly deploy these models into its platform. Its customers will continue to benefit from the most accurate and relevant search results possible. As more leaders look to capitalize on generative AI solutions, access to relevant, accurate, reliable, and traceable information has never been more important. Combining the most comprehensive Neural Search with the best LLMs means customers can converse with their content, giving them the easiest, most powerful, and most complete way to capitalize on their organization’s knowledge and drive the business forward.


Built for large-scale industrial data and content, Sinequa’s platform combines Neural Search and the Microsoft Azure OpenAI Service with any generative large language model. This expands Sinequa’s already extensive AI functionality with innovative use cases for generative AI across key enterprise verticals, including manufacturing and life sciences.
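Pairing a search engine with a generative model in this way is commonly done by grounding the model's prompt in retrieved passages. Sinequa's actual prompt format is not public, so the `build_prompt` helper below is a hypothetical sketch of that grounding step; the LLM call itself (for example, an Azure OpenAI chat completion) is deliberately left out.

```python
def build_prompt(question, passages):
    """Assemble a grounded prompt: retrieved passages first, then the
    question, with an instruction to cite sources so answers stay
    traceable back to the underlying documents."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using only the sources below, citing them as [n].\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Illustrative usage with made-up retrieved passages.
prompt = build_prompt(
    "Which trial phases are covered?",
    ["Phase II results for compound A.", "Phase III protocol amendments."],
)
print(prompt)
```

Constraining the model to cite numbered sources is one common way to deliver the traceability the article emphasizes: each claim in a generated answer can be checked against the passage it came from.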


Manufacturers can now deploy powerful AI-based search combined with automated summaries to help engineering and service teams find answers. The solution connects teams with information across the product lifecycle and enhances their work with fast, accurate, and unified search and insight generation across projects. Employees also gain easy visibility into all products and parts across an organization’s design, supply chain, manufacturing, and service processes.


For leading pharmaceutical organizations, Sinequa enables accurate, fast, traceable semantic search, insight generation, and summarization. Users can query and converse with a secure corpus of data, including proprietary life science systems, enterprise collaboration systems, and external data sources, to answer complex and nuanced questions. Comprehensive search results — with best-in-class relevance and the ability to generate concise summaries — enhance R&D intelligence, optimize clinical trials, and streamline regulatory workflows. 


Alexandre Bilger, Co-CEO at Sinequa, said: “As generative AI continues to gain momentum, leading organizations realize that the key to applying this technology to their enterprise data is to combine it with intelligent search. Companies that embrace this transformation early will gain a competitive edge by accelerating the application of their corporate knowledge, easily leveraging the golden information hidden in their applications. Using NVIDIA DGX H100 systems, we are able to accelerate our cycle of development to bring cutting-edge LLMs to our customers so that they can converse with their content.”


“Enterprises looking to us to leverage the transformational capabilities of generative AI need powerful infrastructure that can tackle the unique demands of today’s most complex AI models such as LLMs,” said Charlie Boyle, vice president of DGX systems at NVIDIA. “NVIDIA DGX H100 systems provide Sinequa the world’s most advanced AI platform to help them develop and deploy custom LLMs, paving the way for innovation across industries.”
