This document provides information about Codeflash and how it handles your data.

What the Codeflash app does

Codeflash is a code performance optimization tool that automatically finds the most performant version of your code. To scan new code for performance optimizations, Codeflash requires a GitHub Actions workflow that runs the code optimization logic on every new pull request. If the workflow finds an optimization, it communicates with the Codeflash GitHub app through our secure servers and asks it to suggest the changes on the pull request.
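For reference, here is a minimal sketch of what such a workflow can look like. The actual workflow file is generated for your repository when you set up Codeflash, so the trigger, action versions, and the CODEFLASH_API_KEY secret name below should be read as illustrative assumptions, not the canonical configuration.

```yaml
# Illustrative sketch of a Codeflash optimization workflow.
# The real file is generated during Codeflash setup; the trigger,
# action versions, and secret name shown here are assumptions.
name: Codeflash

on:
  pull_request:  # run the optimizer on every new pull request

jobs:
  optimize:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4       # check out the pull request's code
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install codeflash      # install the Codeflash client
      - run: codeflash                  # scan the changed code for optimizations
        env:
          CODEFLASH_API_KEY: ${{ secrets.CODEFLASH_API_KEY }}
```

The optimization logic itself runs inside your CI environment; the workflow only relays results to the Codeflash GitHub app, which then posts the suggested changes on the pull request.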

Approving the GitHub App

The Codeflash app does not act on a user’s behalf. All actions are undertaken by the codeflash-ai bot account.

This GitHub doc describes the steps involved in installing and authorizing a GitHub App for your organization.

Codeflash GitHub App permissions

Security and Trust

Codeflash is building a long-term business, and your security is paramount to us. Our team is composed of experienced, security-conscious engineers who have built and secured infrastructure for large companies such as Meta and Airbnb, as well as for many early- and growth-stage startups.

“Sensitive Data” is defined as your code, any identifiable user data such as tokens, any information derived from your code content, any generated optimization code, any generated test code, and any generated optimization explanations.

Codeflash does not store or persist any Sensitive Data for Enterprise Customers.

Code scanning is done in your environment by the Codeflash client. The only Sensitive Data that is sent over the network to the Codeflash backend for the purposes of optimization is the code snippet under optimization.

Our backend is hosted on Azure. The results of a successful optimization are used by our GitHub app to create a code suggestion on GitHub pull requests. No Sensitive Data is saved by the GitHub app.

Sensitive Data is processed by an LLM through the enterprise-grade Azure OpenAI Service, so your code stays inside the Azure cloud. Azure does not send your data to OpenAI or use it to train models.

To detect and mitigate abuse, the Azure OpenAI Service stores all prompts and generated content securely for up to thirty (30) days. If you would like Azure OpenAI not to store any data, write to us and we will ask Azure to turn off abuse monitoring for your account. Microsoft's documentation has more details about Azure OpenAI's data privacy policies.