Aporia Expands Research and Development Lab to Combat Generative AI Risks
SAN JOSE, Calif., March 07, 2024 (GLOBE NEWSWIRE) -- Aporia, the leading AI control platform and sole provider of real-time hallucination mitigation, today announced the expansion of Aporia Labs, its innovative research and development center. The expansion will enable Aporia Labs' team of elite ML engineers, cybersecurity and AI experts to focus on identifying and implementing AI risk prevention policies to protect generative AI (GenAI) applications. The group is integral to the company's AI Guardrails solution, ensuring the highest standards of data leakage prevention and delivering defense against hallucinations and prompt attacks.
With an emphasis on security and control, Aporia Labs' policies will continuously refine the company's AI Guardrails solution, which is designed to ensure robust and unbiased AI product performance. By sitting between the large language model (LLM) and the end user, AI Guardrails ensures fair and responsible usage and blocks problematic outputs such as discriminatory, untrue, or NSFW responses from LLMs and chatbots. In addition to promoting responsible content, the solution plays a crucial role in managing data leakage risks and securing sensitive information, such as credit card or medical data, ensuring the privacy and safety of end users.
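The press release does not describe Aporia's implementation, but the idea of a guardrail layer sitting between an LLM and the end user can be sketched in a few lines. Everything below is hypothetical: the function names, the regex heuristic for card-like numbers, and the placeholder blocklist are illustrations, not Aporia's API.

```python
import re

# Hypothetical policy check: flag sequences that look like
# 13-16 digit payment card numbers (digits optionally separated
# by spaces or hyphens). A real system would use stronger detection.
def looks_like_card_number(text: str) -> bool:
    return re.search(r"\b(?:\d[ -]?){13,16}\b", text) is not None

# Placeholder content blocklist for illustration only.
BLOCKED_TERMS = {"example_blocked_term"}

def violates_content_policy(text: str) -> bool:
    return any(term in text.lower() for term in BLOCKED_TERMS)

def guarded_response(llm_response: str) -> str:
    """Sit between the LLM and the end user: pass safe responses
    through, replace violating ones with a refusal message."""
    if looks_like_card_number(llm_response):
        return "[Blocked: possible sensitive-data leakage]"
    if violates_content_policy(llm_response):
        return "[Blocked: content policy violation]"
    return llm_response
```

In this sketch the application calls `guarded_response()` on every model output before it reaches the user, so new policies can be added as additional checks without touching the LLM itself.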
This expansion comes at a pivotal moment in the AI landscape, as the fast-paced evolution of AI models brings challenges and risks, particularly the widespread occurrence of hallucinations and bias. Recent research revealed that 89% of ML engineers in organizations utilizing LLMs and generative AI technologies report instances of hallucination in their models. In response, Aporia Labs will focus on creating advanced policies to mitigate hallucinations, prevent prompt injections and data leakage, safeguard against personally identifiable information (PII) breaches, and ensure brand language integrity across communications.
“The growth of Aporia Labs is a testament to our unwavering dedication to leading AI security innovation,” said Alon Gubkin, CTO of Aporia and Head of Aporia Labs. “Our research efforts not only enhance Aporia’s Guardrails but also aim to reshape the future of AI by reducing risks and promoting system integrity. The goal is to enable businesses to confidently leverage AI, knowing that their systems are fortified against emerging threats.”
Prioritizing customer ease, AI Guardrails will automatically integrate new policies developed by the research center, allowing users to seamlessly benefit from the latest advancements. Aporia Labs' meticulously developed policies ensure that AI Guardrails is in a constant state of evolution, offering unparalleled protection and performance enhancements.
"As AI adoption broadens, so does the task of securing AI systems," said Liran Hason, CEO of Aporia. "Just like the evolution of cyber firewalls, AI security measures must rapidly advance to address the constant surfacing of hallucinations and bias. We recognize that with each day comes new vulnerabilities and manipulation tactics, demanding an agile and proactive approach to AI safety. Aporia Labs' mission is to continuously refine and enhance our Guardrails strategies, ensuring that customers' AI products are prepared for any risks."
In the coming months, Aporia Labs will release proprietary data and reports that shed light on the risks associated with AI.
To learn more about Aporia Labs and Aporia Guardrails, please visit: https://www.aporia.com/
About Aporia
Aporia is the leading AI control platform and the sole provider of real-time hallucination mitigation. The company is recognized as a Technology Pioneer by the World Economic Forum for its mission of driving Responsible AI. Trusted by Fortune 500 companies and industry leaders such as Bosch, Lemonade, Levi’s, Munich RE, and Sixt, Aporia empowers organizations to deliver AI apps that are reliable, responsible, and fair. Its platform offers real-time guardrails, enabling AI leaders and product teams to confidently control and create trust in their AI apps.
Contact:
Jay Smith
jay.smith@fusionpr.com