
How Can You Use AI Securely, Reliably, and Consistently?

7 min read

If you work in the public sector or another space that handles sensitive data, you may be asking the same question we asked ourselves in 2023: can we use AI securely and reliably in our work? I’m happy to say the answer is yes. But saying it can be done is one thing; accomplishing it is another.

As a contractor for federal clients in the biomedical and life science spaces, we committed ourselves to exploring how to use AI securely, consistently, and reliably. In this post, I’d like to share some of the lessons we learned for organizations looking to tackle the same challenges.

Before I share those lessons, let me take a step back and address some of the security concerns you may have heard about AI.

What AI security risks should leaders be aware of?

We can explore these concerns by the area of information security each one affects.

AI Confidentiality

AI can expose the data it was trained on or given access to. For example:

  • an employee may unintentionally upload a meeting transcript containing sensitive employee information, unaware that those details are shared with the model.
  • an employee provides proprietary information to an AI tool to draft a requirements document, thinking the details will stay within their personal account.

AI Integrity

AI responses can be inaccurate, misleading, or biased, depending on the data the model was, and is still being, trained on. For example:

  • a model makes up (“hallucinates”) code unrelated to the prompt given by a software engineer, costing extra time to debug and fix.
  • a chatbot does not have the protection to prevent users from manipulating its model. One customer orders it to sell a product well below market value, and the chatbot does as directed.
  • a company has created a chat tool for its international customers that can respond in their preferred language. However, they were not aware that the model had only been trained on English-language data. International customers report confusion when the chatbot responds with idioms that do not make sense when translated to their preferred language.
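One common mitigation for the pricing example above is a deterministic business-rule check on the model’s output before it is acted on, so the model alone can never finalize a transaction. A minimal sketch, assuming a hypothetical price floor set outside the model:

```python
PRICE_FLOOR = 100.00  # hypothetical minimum sale price set by business rules

def validate_quote(model_quote: float) -> float:
    """Clamp an AI-proposed price so the chatbot cannot undersell the product,
    no matter how a customer manipulates the conversation."""
    return max(model_quote, PRICE_FLOOR)
```

The key design choice is that the guardrail is ordinary code, outside the model, so it cannot be talked out of its rules.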

AI Vulnerability

Just as with other technical systems, malicious actors can hijack access to AI models or to the systems that run them. For example:

  • a malicious actor embeds hidden instructions inside a normal prompt to manipulate an AI agent connected to a customer-order API, causing it to reveal or modify sensitive business data.
  • a malicious actor uses AI to rapidly identify vulnerabilities in a publicly available, open-source AI tool and finds an exploit that allows them to compromise organizational systems.
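Defenses against the prompt-injection scenario above usually start with input screening plus least-privilege tool access for the agent. The sketch below is illustrative only; the function names and deny-list patterns are hypothetical, and a real deployment would layer this with model-side guardrails rather than rely on a pattern list alone:

```python
import re

# Hypothetical deny-list of phrases that often signal injection attempts.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"system prompt",
    r"you are now",
]

def screen_prompt(user_input: str) -> bool:
    """Return True if the input looks safe to forward to the agent."""
    lowered = user_input.lower()
    return not any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

def call_agent(user_input: str, allowed_tools: set[str], requested_tool: str) -> str:
    """Forward a screened request, enforcing least-privilege tool access."""
    if not screen_prompt(user_input):
        return "REJECTED: possible prompt injection"
    if requested_tool not in allowed_tools:
        return f"REJECTED: tool '{requested_tool}' not permitted"
    return f"OK: forwarded to {requested_tool}"
```

Even when screening fails, the tool allow-list limits what a hijacked agent can actually do to the customer-order API.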

These examples show that AI faces the same information security challenges as any other digital technology.

Just as standards now exist to assess and vet the security of information systems, guiding principles are emerging for developing responsible AI models and tools.

How does "Responsible AI" reduce risk and build trust?

Responsible AI is not just a buzzword. It’s a mindset engineers can apply when developing AI models and tools. A model that follows responsible AI principles is:

  • secure and safe by design.
  • transparent and explainable.
  • accountable with human oversight.
  • continuously monitored and governed.
  • equitable, fair, and unbiased.

As engineers, we can keep these principles in mind as we develop, making each AI solution more secure, reliable, and consistent.

As leaders, we need to look at the larger picture and ask: how do we create the space where responsible AI practices can thrive?

For the last two years, we at ESI have experimented with this very question. From that work, we’ve learned three lessons that guide our infrastructure, processes, and people.

How can leaders encourage secure, responsible AI adoption?

Lesson Learned #1: Use established infrastructure and adjust it to your needs

You need the right infrastructure to run these models securely. You may ask, “doesn’t ChatGPT already do this?” While commercially available models are easy to use and offer a great deal of value, we have found limitations for the kind of work we do. Some models:

  • do not offer a fully closed, private space and are not transparent about how they store, recall, and train on data.
  • are not trained to inherently recognize, process, and de-identify data.
  • have trouble understanding our technical field (leading to more hallucinations).

Hosting our own servers is not cost-effective, which is why we use established cloud infrastructure such as AWS. The platform allows us to protect data in a private bucket and customize access, and through it we can integrate a variety of tools that help us safeguard data and supply the right context. This goes beyond large language models: it also gives us better visibility into, and control over, agentic systems, meaning AI that both interprets and acts.
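To make the “private bucket” idea concrete, here is a minimal sketch of one standard S3 control: a bucket policy that denies any request not made over TLS. The bucket name is hypothetical, and a real deployment would add IAM roles, encryption at rest, and Block Public Access settings on top of this:

```python
import json

BUCKET = "example-private-ai-data"  # hypothetical bucket name

# Deny every S3 action on the bucket and its objects when the request
# does not use secure transport (TLS).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Controls like this are declarative and auditable, which is part of what “secure by design” looks like in practice.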

 

You don’t need to reinvent the wheel. This approach ensures your AI solutions are secure and safe by design, giving you transparency and control over how models access sensitive data.

Lesson Learned #2: Create, socialize, and roll out an AI standard operating procedure (SOP)

As your organization integrates AI into its operations, align everyone to a standard operating procedure that incorporates responsible AI principles. Our SOP standardizes how our developers, DevOps, and quality assurance engineers apply AI tools and techniques in the software development lifecycle. From concept to deployment, we have outlined not only the standard practices for how we use AI, but also when a human must be looped into the process to verify AI outputs. Our SOP is not static but a living document that evolves with our company and with AI technology.

By embedding human oversight into standard practices, leaders can maintain accountability while ensuring models operate within secure, well-governed boundaries.
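A human-in-the-loop checkpoint can be as simple as a routing rule: AI output that touches sensitive data, or that the model reports low confidence in, is queued for review instead of being released automatically. A minimal sketch, with illustrative thresholds and field names (not our actual SOP logic):

```python
from dataclasses import dataclass

@dataclass
class AIOutput:
    text: str
    confidence: float           # model-reported confidence, 0.0-1.0
    touches_sensitive_data: bool

def route(output: AIOutput, confidence_floor: float = 0.9) -> str:
    """Decide whether an AI output ships automatically or goes to a human."""
    if output.touches_sensitive_data or output.confidence < confidence_floor:
        return "human_review"
    return "auto_release"
```

Codifying the rule this way makes the oversight requirement testable and auditable rather than a matter of individual judgment.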

 

Lesson Learned #3: Increase your staff’s AI literacy

Many of the risks I outlined above stem from human error. With AI tools so readily available, it makes sense that your staff would want to explore them. However, without knowing how AI works, how it uses and stores data, or what safeguards are available, your staff are at a disadvantage when trying to use AI securely and intentionally.

We realized we needed a baseline understanding of AI fundamentals so that all staff can answer these questions and guard against these risks as they use AI tools. All our engineers are required to earn an AWS AI Practitioner certification or a similar AI certification relevant to their role.

Because AI technology is developing rapidly, we have also set up a roadmap to encourage staff to continuously grow their AI skills and share their expertise with colleagues. As part of that effort, we created a centralized training portal curating a catalog of AI courses from established training platforms such as AWS Skill Builder, Udemy, and Pluralsight, so staff can quickly find courses that cover AI in their work and have been vetted by their AI-savvy teammates.

Building a strong foundation of AI literacy supports fairness, reduces human-driven risk, and empowers staff to use AI responsibly and securely.

What is the most important mindset for leaders adopting AI?

AI is here to stay. It is not a passing fad; it has revolutionized industries and will continue to do so. Like other technological revolutions, it brings great opportunities along with real risks. With an intentional, mindful approach that supports your infrastructure, processes, and people, however, leaders can use AI reliably, consistently, and securely, even in sensitive environments.

Author

Ye Wu
President and Chief Architect