Ensuring Security, Scalability, and Privacy with Luci: Our GenAI Copilot
In this blog, Lucinity's technology team shares the measures taken to ensure the security of Luci, our GenAI Copilot for financial services. It highlights our use of Azure and Azure OpenAI for creating a secure, scalable, and privacy-focused infrastructure.
Introduction
Welcome to an in-depth look at Luci, our innovative GenAI Copilot designed to transform workflow automation and data analysis. Luci transcends the traditional chatbot role, offering a configurable workflow system capable of various tasks like Retrieval-Augmented Generation (RAG), text writing, text and image analysis, and generating data visualizations. This blog dives into the architectural design that makes Luci secure, scalable, and privacy-focused, leveraging the power of Azure and Azure OpenAI.
About Luci
Luci is built around the concept of "skills," each comprising multiple "capabilities." For instance, the "Negative News Search" skill aggregates news from various sources, crawls and ranks the content, and processes it through an LLM for analysis and content generation. Other skills include narrative writing for cases, financial flow visualizations, and financial report analysis. Luci's continuous evolution with new skills makes it more capable each day.
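As an illustration of the skills-and-capabilities model, a skill can be thought of as a pipeline of capabilities, each feeding its output to the next. The sketch below uses hypothetical capability names and stubbed logic, not Luci's actual implementation:

```python
from dataclasses import dataclass, field
from typing import Any, Callable

# A "capability" is a single processing step; a "skill" chains capabilities.
Capability = Callable[[Any], Any]

@dataclass
class Skill:
    name: str
    capabilities: list[Capability] = field(default_factory=list)

    def run(self, payload: Any) -> Any:
        # Feed the output of each capability into the next one.
        for capability in self.capabilities:
            payload = capability(payload)
        return payload

# Hypothetical capabilities for a "Negative News Search"-style skill.
aggregate = lambda query: [f"article about {query}"]              # fetch news
rank = lambda articles: sorted(articles)                          # rank content
summarize = lambda articles: f"{len(articles)} item(s) analyzed"  # LLM step (stubbed)

negative_news = Skill("Negative News Search", [aggregate, rank, summarize])
print(negative_news.run("ACME Corp"))  # → "1 item(s) analyzed"
```

A no-code Skill Studio, as described above, would then amount to composing such capability pipelines through configuration rather than code.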
At Lucinity, we are continuously looking for ways to optimize and simplify how our customers work with the system. The key reason for designing Luci as a generic system is to allow partners and customers to develop skills using a no-code Skill Studio. This enables customers to customize Luci to fit their business needs and workflows.
But enough of the high-level stuff, let's dive into some architecture.
Security
One of the key success factors for a business such as ours is an extremely high bar for security. This is vital not only to our customers but also to us. We are both SOC 2 and ISO 27001 certified, and we invest a great deal of effort in keeping our infrastructure secure.
When Microsoft announced the Azure OpenAI managed service, we were among the first companies to sign up and immediately started building Luci. Provisioning our own Azure OpenAI instances allowed us to launch an end-to-end GenAI solution for our enterprise customers, all managed by us in our Azure cloud.
Our customers can be sure that the data used in Luci is never sent outside of our virtual network, and the prompts provided to the OpenAI instances are not used to enrich OpenAI's existing models. As a managed service, we have access to the latest GPT models and versions as soon as they become available. This helps ensure that we always operate using the most up-to-date models.
Each Azure OpenAI instance is locked down at the network level, accessible only through a private link established between an API Management service (APIM) and the Azure OpenAI instances (AOAI). This setup prevents unauthorized access and ensures secure communication within the infrastructure.
Audit logging is also built into APIM via Log Analytics, capturing detailed logs for every prompt and response. This greatly helps us monitor performance and cost, ensure compliance, and fine-tune the system as needed.
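As a sketch of what such monitoring might look like, the query below summarizes request volume and latency from APIM gateway logs. It assumes APIM resource logs are routed to the `ApiManagementGatewayLogs` table; exact table and column names depend on the diagnostic settings in use:

```kusto
// Hypothetical sketch: hourly request volume and latency for OpenAI-bound
// calls, grouped by response code. Column names vary with diagnostic config.
ApiManagementGatewayLogs
| where TimeGenerated > ago(24h)
| where BackendUrl has "openai"
| summarize requests = count(), avgLatencyMs = avg(TotalTime)
    by ResponseCode, bin(TimeGenerated, 1h)
| order by TimeGenerated desc
```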
Lastly, all access keys to AOAI are stored in a secure Azure Key Vault, ensuring encrypted storage and access management.
To ensure our solutions meet the highest standards of security and compliance, we follow the Microsoft Azure security framework. For more details, you can refer to the Microsoft security and compliance framework and guidelines.
Scalability
Due to the nature of AOAI, a single instance will most likely not scale for production workloads. This is where the API Management service (APIM) comes to the rescue.
By effectively acting as a load balancer, we are able to have a single access point to our OpenAI infrastructure, replicating OpenAI API signatures and distributing the load across multiple AOAI instances. This approach allows horizontal scaling to meet increasing throughput demands.
Another benefit of this approach is the ability to mix and match provisioned throughput units (PTUs, paid up front) with consumption-based deployments. This helps optimize cost as traffic volume grows, while ensuring the best possible user experience by keeping latency (due to token constraints) at a minimum.
APIM can be configured to route requests intelligently between the underlying AOAI instances by continuously monitoring usage metrics on each instance and always choosing the least utilized AOAI instance at any given time.
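These two routing ideas, PTU-first spillover and least-utilized selection, can be sketched in a few lines of Python. The deployment names and the capacity model below are hypothetical; in practice this logic lives in APIM policies rather than application code:

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    name: str
    provisioned: bool   # True = provisioned throughput (PTU), paid up front
    in_flight: int      # requests currently being served
    capacity: int       # rough concurrent-request budget

def pick_deployment(deployments: list[Deployment]) -> Deployment:
    """Prefer pre-paid PTU capacity; spill over to consumption-based
    deployments only once the provisioned pool is saturated. Within each
    pool, pick the least utilized instance."""
    ptu = [d for d in deployments if d.provisioned and d.in_flight < d.capacity]
    if ptu:
        return min(ptu, key=lambda d: d.in_flight)
    pay_go = [d for d in deployments if not d.provisioned]
    return min(pay_go, key=lambda d: d.in_flight)

pool = [
    Deployment("ptu-eastus", provisioned=True, in_flight=8, capacity=8),     # full
    Deployment("paygo-eastus", provisioned=False, in_flight=1, capacity=100),
]
print(pick_deployment(pool).name)  # PTU pool is saturated → "paygo-eastus"
```

The same idea generalizes to weighting by token throughput instead of request counts, which is closer to how AOAI quotas are actually expressed.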
Last but not least, having APIM sit in front of our AOAI instances enables us to set and configure retry policies with exponential backoff in a single place. This removes much of the boilerplate code otherwise required in Luci to handle quota-related errors and retries.
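For illustration, such a retry might look roughly like the APIM policy fragment below. This is a sketch rather than our production policy, and the exact attribute semantics should be checked against the APIM policy reference:

```xml
<!-- Hypothetical sketch: retry the AOAI backend call with exponential
     backoff on 429 (quota) and 5xx responses. The interval, max-interval,
     and delta attributes are in seconds. -->
<backend>
    <retry condition="@(context.Response.StatusCode == 429 || context.Response.StatusCode >= 500)"
           count="3" interval="1" max-interval="30" delta="2" first-fast-retry="false">
        <forward-request buffer-request-body="true" />
    </retry>
</backend>
```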
Privacy
The last area I would like to briefly mention is how we ensure that our customers' data stays private. The Luci AOAI infrastructure is multi-tenant only at the APIM+AOAI layer; each customer's data remains isolated and private everywhere else. The models used by Luci are stateless, meaning they do not retain any information from the prompts they process, maintaining strict data privacy.
All data transmitted to and from Luci is encrypted, ensuring that data in transit remains secure and inaccessible to unauthorized parties. Access keys for each AOAI instance are securely stored in Azure Key Vault, providing encrypted storage and access management to protect data at rest.
Network security is also a top priority. Each AOAI instance is accessible only through a private link established between the API Management service (APIM) and the AOAI instances, preventing unauthorized access and ensuring secure communication within the infrastructure. We implement Role-Based Access Control (RBAC) to ensure that only authorized personnel have access to sensitive data and services, minimizing the risk of unauthorized access.
Additionally, all requests and replies are fully audit logged in our Log Analytics workspace, separated by customer. This detailed logging helps us monitor performance, ensure compliance, and fine-tune the system while keeping customer data secure.
Conclusion
Luci stands as a testament to our commitment to delivering a secure, scalable, and privacy-centric GenAI Copilot. By leveraging Azure's robust infrastructure and following best practices in architecture and security, Luci is poised to handle increasing workloads while ensuring customer data remains secure and isolated. Trusted by large financial enterprises and tier 1-3 banks, Lucinity continues to innovate and expand Luci's capabilities. Our focus on security, scalability, and privacy will remain paramount, ensuring a reliable and efficient tool for all our users.