Results Insights AI
Summary
Results Insights AI is an optional, client-controlled, opt-in feature that uses Anthropic's Claude Haiku 4.5 model via Amazon Bedrock to generate contextual, AI-assisted outputs on the Result Details page of each Disclosure, Breach, and Warning. It is designed as an assistive capability within an existing human-led workflow.
Key Policy Points:
Technology & Security: The feature operates through a controlled backend and is subject to standard application security and access controls. Amazon Bedrock Guardrails are enabled for:
Harmful-content filtering
Profanity filtering
Sensitive information filtering
Data Privacy & IP: Contractual terms with both AWS and Anthropic ensure that user inputs and model outputs are not shared with model providers and are not used to train AWS or third-party models. The client retains all rights to its Client Data inputs.
Client Consent & Responsibility: The feature is opt-in, and by enabling it, the client confirms they have obtained necessary approvals and are satisfied that its use is appropriate within their own governance framework.
Data Retention: Prompts, inputs, and associated logs are retained for 12 months, or deleted upon termination in line with the Data Retention Policy. Outputs are part of a client’s environment and will be retained for the duration of the term of the agreement with the client.
Assurance Status: The feature is currently not included in the scope of the latest ISO 27001 or SOC 2 Type II audits but is intended to be brought into scope in the next audit cycle.
Limitations: AI-generated content may be inaccurate or incomplete and should be treated as assistive information and reviewed before use. It should not be relied upon as a substitute for professional judgment.
Charges: The feature is currently provided at no additional charge, though the right to introduce charges in the future is reserved.
What is Results Insights AI?
In the FundApps user interface, this feature appears on the Result Details page of each Disclosure, Breach, and Warning, offering smart context based on how you have interacted with similar results in the past, with the aim of reducing your manual research time. An example is shown below:

How does it work?
The Results Insights AI feature is an optional, client-controlled capability designed to assist users within an existing human-led workflow. This feature uses the Anthropic Claude Haiku 4.5 language model, which is accessed via Amazon Bedrock, to generate contextual summaries, classifications, recommendations, and explanatory observations on the Result Details page of each Disclosure, Breach, and Warning.
When a user invokes the feature, the application sends only the minimum necessary contextual information to the model through a controlled backend environment, and the model's output is then presented to the user. As an application-controlled and permissioned feature, access is governed by the platform's product controls; there is no direct end-user connection to the model provider.
It is strictly an assistive capability and is not intended to replace user judgment, provide legal or compliance advice, or make automated decisions on a client’s behalf.
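The "minimum necessary contextual information" pattern described above can be sketched as an explicit allow-list applied before anything leaves the backend. This is an illustrative sketch only: the field names, helper function, and allow-list below are hypothetical, not FundApps' actual implementation.

```python
# Hypothetical sketch of the controlled-backend pattern: only fields on an
# explicit allow-list are forwarded to the model; everything else is dropped.
ALLOWED_FIELDS = {"rule_name", "result_type", "jurisdiction", "threshold"}

def build_insight_prompt(result: dict) -> dict:
    """Build a model request containing only allow-listed context."""
    context = {k: v for k, v in result.items() if k in ALLOWED_FIELDS}
    return {
        "messages": [{
            "role": "user",
            "content": [{"text": f"Summarise this compliance result: {context}"}],
        }]
    }

result = {
    "rule_name": "Substantial Shareholding - DE",
    "result_type": "Disclosure",
    "jurisdiction": "Germany",
    "threshold": "3%",
    "portfolio_owner_email": "analyst@example.com",  # not allow-listed, never forwarded
}
request = build_insight_prompt(result)
```

In a live system the resulting request would then be passed to the model through the managed service (for example, Amazon Bedrock's Converse API) under the platform's own access controls, rather than by the end user directly.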
Security controls
The security of the Results Insights AI feature is managed through standard technical and organisational security measures, with AWS and Anthropic providing the underlying infrastructure and FundApps responsible for the configuration and governance.
To secure the service, we have enabled Amazon Bedrock Guardrails, which are configurable safeguards that evaluate both user inputs and model responses. The currently enabled controls are:
Content filters, which help detect and block harmful content in prompts and model responses across categories such as hate, insults, sexual content, violence, and misconduct. AWS states these filters can be configured at different strengths for prompts and responses. (AWS Documentation)
Word filters, including the managed profanity filter, which blocks profane words and phrases in prompts and model responses. AWS states the profanity list is based on conventional definitions and is continually updated. (AWS Documentation)
Sensitive information filters, to help detect and block or mask sensitive information. (AWS Documentation)
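The three guardrail controls above map onto the configuration accepted by Amazon Bedrock's `create_guardrail` API. The sketch below mirrors that structure using boto3 parameter names; the specific filter strengths, PII entity choices, and blocked-message text are assumptions for illustration, not FundApps' actual settings.

```python
# Illustrative Amazon Bedrock Guardrails configuration covering the three
# control types described above. Parameter names follow the boto3
# bedrock.create_guardrail API; the values chosen here are assumptions.
guardrail_config = {
    "name": "results-insights-guardrail",
    # 1. Content filters for harmful content in prompts and responses
    "contentPolicyConfig": {
        "filtersConfig": [
            {"type": t, "inputStrength": "HIGH", "outputStrength": "HIGH"}
            for t in ("HATE", "INSULTS", "SEXUAL", "VIOLENCE", "MISCONDUCT")
        ]
    },
    # 2. Word filters, including the AWS-managed profanity list
    "wordPolicyConfig": {
        "managedWordListsConfig": [{"type": "PROFANITY"}]
    },
    # 3. Sensitive information filters: block or mask detected PII
    "sensitiveInformationPolicyConfig": {
        "piiEntitiesConfig": [
            {"type": "EMAIL", "action": "BLOCK"},
            {"type": "NAME", "action": "ANONYMIZE"},
        ]
    },
    "blockedInputMessaging": "This request was blocked by content policy.",
    "blockedOutputsMessaging": "This response was blocked by content policy.",
}
# In a live environment this would be applied with:
#   boto3.client("bedrock").create_guardrail(**guardrail_config)
```

Once created, the guardrail is referenced at inference time (for example via the `guardrailConfig` parameter of the Bedrock Converse API), so both the user's input and the model's response are evaluated against these policies.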
In addition to the guardrails, AWS ensures customer content processed by Amazon Bedrock is encrypted and stored at rest in the AWS Region where it is used. AWS also states that Amazon Bedrock is in scope for compliance standards such as SOC and multiple ISO standards, is HIPAA eligible, and can be used in compliance with GDPR. We may also monitor use of the feature for security, support, abuse prevention, and compliance purposes.
AWS and Anthropic provider position
Amazon Bedrock provides the managed AWS service through which the model is accessed, and Anthropic provides the underlying Claude Haiku 4.5 model. The model is not used autonomously; it operates within the application workflow and within the controls we configure around it. We have enabled Bedrock guardrails so that both user inputs and model outputs are evaluated against configured safety policies before responses are returned.
AWS states that:
user inputs and model outputs in Amazon Bedrock are not shared with model providers;
customer content processed by Bedrock is encrypted and stored at rest in the Region where Bedrock is used; and
AWS and third-party model providers do not use Bedrock inputs or outputs to train AWS or third-party models.
Anthropic’s commercial terms state that:
the customer retains all rights to its inputs and owns its outputs;
Anthropic assigns to the customer any rights it may have in outputs;
Anthropic may not train models on customer content from the services; and
customer content is the customer’s confidential information.
AWS’s Customer Agreement states that AWS obtains no rights in customer content other than the rights needed to provide the services, and that AWS will not access or use customer content except as necessary to maintain or provide the services or comply with law. AWS also allows customers to specify the AWS Regions in which their content will be stored.
The AWS DPA provides the processor framework for AWS-hosted processing, including Region selection, documented processing instructions, Standard Contractual Clauses where applicable, and return or deletion controls following termination.
The feature is client-controlled and opt-in. It will only process client confidential data where the client has chosen to enable the feature and is satisfied that it is appropriate to do so within its own governance, consent, and internal review processes. AWS’s terms also make clear that the customer is responsible for providing any necessary notices and obtaining any required consents relating to its use of the service.
Client Data Retention
Results Insights AI is designed to process only the information needed to generate the requested output for the relevant workflow.
Prompts, inputs and associated logs are retained for 12 months, or deleted on termination in accordance with our usual offboarding and data retention processes, as set out in our Data Retention Policy. Outputs are part of a client’s environment and will be retained for the duration of the term of the agreement with the client and deleted once our agreement with the client is terminated.
Where personal data or confidential information is processed through the feature, the client remains responsible for determining that such use is appropriate in light of its own governance requirements, confidentiality obligations, notice requirements and consent framework.
Contractual terms with AWS and Anthropic ensure that customer content is treated as the client's confidential information, is not shared with model providers, and is not used to train vendor models. The client retains all rights to inputs and owns the outputs, with Anthropic assigning any rights in outputs to the customer, and AWS obtaining no rights except those needed to provide the services.
Privacy
We confirm that Data Processing Agreements (DPAs) are in place with both AWS (Amazon Bedrock) and Anthropic to govern the processing of any user personal data. Where personal data is transferred outside the UK or EEA, such transfers are subject to the transfer mechanisms incorporated into the relevant vendor terms, including the EU Standard Contractual Clauses, the UK Addendum where applicable, and other recognised mechanisms such as the EU-U.S. Data Privacy Framework, where applicable.
To ensure the highest level of data protection, no Personally Identifiable Information (PII) from the client's source data is passed to the Results Insights AI model itself. This is further enforced by the application of Amazon Bedrock Guardrails, which include sensitive information filters designed to detect and block or mask PII within both user inputs and model responses.
Any sub-processing of personal data is strictly limited to a user's name and email address for the purposes of tracking feature usage and for security monitoring.
For full details on our data handling commitments, please refer to our Privacy Policy.
Access and client consent
Results Insights AI is offered on an opt-in basis and is not enabled by default.
By enabling or using the feature, the client confirms that it:
has the right to provide the relevant content for processing through the feature;
has obtained any internal approvals, notices or consents it considers necessary; and
is satisfied that use of the feature is appropriate within its own governance, privacy and compliance framework.
Limitations of AI-generated output
AI-generated outputs may be inaccurate, incomplete, out of date or unsuitable for a particular purpose. Outputs should therefore be treated as assistive information and reviewed before use.
It is the client’s responsibility to evaluate whether outputs are appropriate for its use case, including determining where human review is required; factual assertions in outputs should not be relied upon without independent verification. The outputs are provided 'as is', and we disclaim all representations and warranties that the outputs are accurate, complete, or error-free.
For that reason, Results Insights AI should not be relied upon as a substitute for professional judgment, independent verification or internal review procedures.
Support model
Results Insights AI will be supported in accordance with the Support Plan that the client has subscribed to in their agreement with FundApps. Queries relating solely to Results Insights AI will generally be treated as Level 4 - Operational Enquiry, unless the issue independently affects broader platform security, service availability or core product performance.
Acceptable use and monitoring
We may monitor use of Results Insights AI for security, abuse prevention, compliance, support, service protection and cost-management purposes.
We reserve the right to suspend, restrict or revoke access where we reasonably believe the feature is being misused, used unlawfully, used in breach of applicable terms, or used in a way that threatens the security, integrity or intended operation of the service.
This is consistent with the model provider framework and with the requirement that client use complies with Anthropic's and AWS's usage policies.
Charges
Results Insights AI is currently provided at no additional charge.
We reserve the right to introduce charges for this feature in future. If charges are introduced, notice will be provided through our usual commercial or product communication channels.
Assurance status
Results Insights AI is not currently included within the scope of our latest ISO 27001 certification audit or SOC 2 Type II report, as it was developed after those audit periods. We intend to include the feature within the scope of the next relevant audit cycle.