Note: This document is a verbatim reproduction of content served at https://public-buyers-community.ec.europa.eu/communities/procurement-ai/resources/updated-eu-ai-model-contractual-clauses, reproduced here for usability.
📖 Related Article: EU Model Contractual Clauses for AI Procurement: A Resource Map for Legal Teams
DISCLAIMER: This working document is created by the European Commission to support the Community of Practice on the Procurement of AI in guiding public administrations in procuring AI solutions. It is a work in progress and does not reflect an official position of the European Commission. Actors that decide to make use of this working document carry full responsibility for its use in public procurement. These Model Clauses are without prejudice to the requirements stemming from the AI Act.
Section A – Definitions
Article 1
Definitions
Capitalised terms used in these MCC-AI-Light will have the meaning as defined in this article:
Agreement: the entire agreement
of which the MCC-AI-Light, as a schedule, are an integral part;
AI System: the AI system(s) as
referred to in Annex A, including any new versions thereof;
MCC-AI-Light: these standard
contractual clauses for the procurement of non-high-risk artificial
intelligence by public organisations;
Public Organisation Data Sets: the Data
Sets (or parts of) (i) provided by the Public Organisation to the Supplier
under the Agreement or (ii) to be created or collected as part of the
Agreement, including any modified or extended versions of the Data Sets
referred to under (i) and (ii) (for example due to annotation, labelling,
cleaning, enrichment or aggregation);
Data Sets: all data sets used in
the development of the AI System, including the data set or data sets as
described in Annex B;
Delivery: the time at which the
Supplier informs the Public Organisation that the AI System satisfies all
agreed conditions and is ready for use;
Intended Purpose: the use
for which an AI System is intended by the Public Organisation, including the
specific context and conditions of use, as specified in Annex B, the
information supplied by the Supplier in the instructions for use, promotional
or sales materials and statements as well as in the technical documentation;
Reasonably Foreseeable Misuse: the use of the AI System in a way that is not in accordance with its Intended Purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems;
Substantial Modification: a change
to the AI System following the Delivery which affects the compliance of the AI
System with the requirements set out in these MCC-AI-Light or results in a
modification to the Intended Purpose;
Supplier: the natural or legal person, public authority,
agency or other body that supplies the AI System to the Public Organisation
pursuant to the Agreement;
Supplier Data Sets and Third-Party Data Sets: the Data Sets (or parts of) that do not qualify as Public Organisation Data Sets.
Section B – Essential requirements in relation to the AI System
Article 2
Risk management system
2.1.
The Supplier ensures that, prior to the Delivery of the AI System, a risk management system is established, implemented, documented and maintained in relation to the AI System.
2.2.
The risk management system shall at least
comprise the following steps:
a.
identification, estimation and evaluation of
the known and reasonably foreseeable risks that the AI System can pose to
health, safety or fundamental rights when the AI System is used in accordance
with the Intended Purpose;
b.
the estimation and
evaluation of the risks that may emerge when the AI System is used in
accordance with the Intended Purpose and under conditions of Reasonably Foreseeable Misuse;
c.
evaluation of
other possibly arising risks, based on the analysis of data gathered from the
post-market monitoring system;
d.
adoption of
appropriate and targeted risk management measures designed to address the risks
identified pursuant to point (a) of this paragraph in accordance with the
provisions of the following paragraphs.
2.3.
The risks referred to in this Article shall
concern only those which may be reasonably
mitigated or eliminated through the development
or design of the AI System or the provision of adequate technical information.
2.4.
The risks referred to in this article shall
give due consideration to the effects and possible interaction resulting from
the combined application of the requirements set out in Section B, with a view
to minimising risks more effectively while achieving an appropriate balance in
implementing the measures to fulfil those requirements.
2.5.
The risk management measures referred to in paragraph 2.2, point (d) shall be such that the relevant residual risks associated with each hazard, as well as the overall residual risk of the AI System, are judged to be acceptable by the Supplier, provided that the AI System is used in accordance with the Intended Purpose or under conditions of Reasonably Foreseeable Misuse.
2.6.
In identifying the most appropriate risk
management measures referred to in paragraph 2.2, point (d), the following
shall be ensured:
a.
elimination or reduction of risks identified
and evaluated pursuant to paragraph 2.2 in as far as technically feasible
through adequate design and development of the AI System;
b.
where appropriate, implementation of adequate
mitigation and control measures addressing risks that cannot be eliminated;
c.
provision of adequate information to the Public Organisation and, if applicable, training to deployers.
2.7.
The Supplier ensures that, prior to the
Delivery of the AI System, the AI System is tested to identify whether the AI
System complies with the MCC-AI-Light and whether the risk management measures
referred to in paragraph 2.2, point (d) are effective in light of the Intended
Purpose and Reasonably
Foreseeable Misuse. If requested by the Public Organisation, the
Supplier is obliged to test the AI System in the environment of the Public
Organisation.
2.8.
The testing of the AI System shall be performed, as appropriate, at any time throughout the development process and, in any event, prior to the Delivery. Testing shall be carried out against metrics and probabilistic thresholds defined in advance that are appropriate to the Intended Purpose of the AI System.
2.9.
All risks identified, measures taken and tests
performed in the context of compliance with this article must be documented by
the Supplier. The Supplier must make this documentation available to the Public
Organisation at least at the time of the Delivery of the AI System. This
documentation can be part of the technical documentation and/or instructions
for use.
2.10.
The risk management system shall be understood
as a continuous and iterative process planned and run throughout the entire
duration of the Agreement. After the Delivery of the AI System the Supplier
must therefore:
a.
regularly review and update the risk
management process to ensure its continuing effectiveness;
b.
keep the documentation described in article 2.9 up to date; and
c.
make every new version of the documentation described in article 2.9 available to the Public Organisation without delay.
2.11.
If reasonably required for the proper
execution of the risk management system by the Supplier, the Public
Organisation will provide the Supplier, on request, with information insofar as
this is not of a confidential nature.
2.12.
<Optional> If the Public Organisation’s use of the AI System continues beyond the term of the Agreement, the Supplier shall, at the end of the term of the Agreement, provide the Public Organisation with the information necessary to maintain the risk management system by itself.
Article 3
<Article 3 is only relevant for AI Systems which make use of techniques involving the training of models with data. Article 3 presupposes that the Supplier (or its subcontractors) has full access to the Data Sets. If the Data Sets are exclusively held by the Public Organisation, it is necessary to make other arrangements.>
Data and data governance
3.1.
The Supplier ensures that the Data Sets used in
the development of the AI System, including training, validation and testing,
have been and shall be subject to data governance and management practices appropriate
for the Intended Purpose of the AI System. Those measures shall concern in
particular:
a.
the relevant design choices;
b.
data collection processes and the origin of
data, and in the case of personal data, the original purpose of the data
collection;
c.
relevant data preparation for processing
operations, such as annotation, labelling, cleaning, updating, enrichment and
aggregation;
d.
the formulation of assumptions, in particular
with respect to the information that the data are supposed to measure and
represent;
e.
an assessment of the availability, quantity and
suitability of the data sets that are needed;
f.
examination in view of possible biases that are
likely to affect the health and safety of persons, have a negative impact on
fundamental rights or lead to discrimination prohibited under Union law,
especially where data outputs influence inputs for future operations;
g.
appropriate measures to detect, prevent and
mitigate possible biases identified according to point (f);
h.
the identification of relevant data gaps or
shortcomings that prevent compliance with these MCC-AI-Light, and how those
gaps and shortcomings can be addressed.
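Points (f) and (g) above leave the bias-examination method to the parties. As a purely illustrative sketch (not part of the clauses), a simple disparity check over a Data Set might compare outcome rates across groups; the group labels, the favourable-outcome encoding and any tolerance applied to the resulting ratio are hypothetical assumptions to be settled per Intended Purpose.

```python
# Illustrative sketch only: a minimal disparity check of the kind a
# Supplier might run under article 3.1, points (f) and (g). Group
# labels and the outcome encoding (1 = favourable) are assumptions.

from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, outcome) pairs, outcome 1 = favourable.

    Returns the favourable-outcome rate per group.
    """
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        favourable[group] += outcome
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())
```

A low ratio would then trigger the mitigation measures under point (g); which ratio counts as acceptable is a matter for the parties, not something these clauses fix.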
3.2.
The Supplier ensures that the Data Sets used in
the development of the AI System are relevant, sufficiently
representative and, to the best extent possible, free of
errors and complete in view of the Intended Purpose. The Supplier ensures that
Data Sets have the appropriate statistical properties, including, where
applicable, as regards the persons or groups of persons in relation to whom the
AI System is intended to be used. These characteristics of the Data Sets shall
be met at the level of individual data sets or a combination thereof.
3.3.
The Supplier ensures that the Data Sets used in the development of the AI System take into account, to the extent required by the Intended Purpose or Reasonably Foreseeable Misuse, the characteristics or elements that are particular to the specific geographical, contextual, behavioural or functional setting within which the AI System is intended to be used.
3.4.
The obligations under this article apply not
only to the development of the AI System prior to Delivery, but also to any use
of Data Sets by the Supplier that may affect the functioning of the AI System
at any other time during the term of the Agreement.
Article 4
Technical documentation and instructions for use
4.1.
The Delivery of the AI
System by the Supplier includes the handover of the technical documentation and
instructions for use.
4.2.
The technical documentation must enable the Public Organisation or a third party to assess the compliance of the AI System with the requirements set out in these MCC-AI-Light and must at least satisfy the conditions described in Annex C.
4.3.
The instructions for use shall include concise, complete, correct and clear
information that is relevant, accessible and comprehensible to the Public
Organisation. The instructions for use must at least satisfy the conditions
described in Annex D.
4.4.
The Supplier must update this documentation at
least with
every Substantial Modification during the term of the Agreement and subsequently make it available to the Public Organisation.
4.5.
<Optional> The
technical documentation and instructions for use must be drawn up in English.
4.6.
<Optional> The Public
Organisation has the right to make copies of the technical documentation and
instructions for use to the extent necessary for internal use within the
organisation of the Public Organisation, without prejudice to the provisions of
article 6 and article 11.
Article 5
Record-keeping
5.1.
The Supplier ensures that the AI System shall
technically allow for the automatic recording of events ('logs') over the
lifetime of the AI System.
5.2.
The logging capabilities shall ensure a level
of traceability of the AI System that is appropriate to the Intended Purpose of
the system and Reasonably Foreseeable Misuse. They shall enable the
recording of events relevant for the identification of situations that may:
a.
result in the AI System presenting a risk to
the health or safety or to the protection of fundamental rights of persons; or
b.
lead to a Substantial Modification.
5.3.
<Optional> The Supplier will allow the Public Organisation to access the logs automatically generated by the AI System in real time.
5.4.
The Supplier shall keep the logs automatically
generated by the AI System, to the extent such logs are under its control based
on the Agreement, for the duration of the Agreement. At the end of the term of
the Agreement, the Supplier will provide these logs to the Public Organisation
without delay.
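Article 5 requires only that the AI System technically allows automatic event recording; it does not fix a log format. A minimal, purely illustrative sketch (not part of the clauses) of timestamped, machine-readable event records follows; the event fields shown are assumptions, and the actual fields should follow from the traceability needs of the Intended Purpose (article 5.2).

```python
# Illustrative sketch only: automatic recording of events ("logs") of
# the kind article 5 requires the AI System to support technically.
# The event fields are hypothetical assumptions, not requirements.

import json
import time

def record_event(log, event_type, details):
    """Append a timestamped, machine-readable event to the log."""
    entry = {
        "timestamp": time.time(),   # when the event occurred
        "event_type": event_type,   # e.g. "inference", "anomaly", "override"
        "details": details,         # input reference, output, scores, etc.
    }
    log.append(json.dumps(entry))   # serialised for durable storage
    return entry
```

Such records would support identifying the situations listed in article 5.2, points (a) and (b), and would be handed over under article 5.4.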
Article 6
Transparency of the AI System
6.1.
The Supplier
ensures that the AI System has been and shall be designed and developed in such
a way that the operation of the AI System is sufficiently transparent to enable
the Public Organisation to interpret the system’s output and use it
appropriately.
6.2.
The AI System
shall be accompanied by instructions for use in an appropriate digital format
or otherwise that include concise, complete, correct and clear information that
is relevant, accessible and comprehensible to the Public Organisation.
6.3.
The instructions for use shall contain at least
the following information:
a.
the identity and the contact details of the
Supplier and, where applicable, of its authorised representative;
b.
the characteristics, capabilities and limitations of performance of the AI System, including:
i.
its Intended Purpose;
ii.
the level of accuracy, including its metrics, robustness and cybersecurity against which the AI System has been tested and validated, and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity;
iii.
any known or foreseeable circumstance related to the use of the AI System in accordance with its Intended Purpose or under conditions of Reasonably Foreseeable Misuse, which may lead to risks to health and safety or fundamental rights;
iv.
where applicable, the technical capabilities and characteristics of the AI System to provide information that is relevant to explain its output;
v.
when appropriate, its performance regarding specific persons or groups of persons on which the AI System is intended to be used;
vi.
when appropriate, specifications for the input data or any other relevant information in terms of the training, validation and testing data sets used, taking into account the Intended Purpose of the AI System;
vii.
where applicable, information to enable the Public Organisation to interpret the output of the AI System and use it appropriately;
c.
the changes to the AI System and its performance which have been predetermined by the Supplier at the moment of the initial conformity assessment, if any;
d.
the human oversight measures, including the technical measures put in place to facilitate the interpretation of the outputs of the AI System by the deployers;
e.
the computational and hardware resources needed, the expected lifetime of the AI System and any necessary maintenance and care measures, including their frequency, to ensure the proper functioning of that AI System, including as regards software updates;
f.
where relevant, a description of the mechanisms included within the AI System that allow deployers to properly collect, store and interpret the logs.
6.4.
<Optional> To ensure appropriate transparency, the Supplier shall at least implement the technical and organisational measures described in Annex E before the Delivery of the AI System.
Article 7
Human oversight
7.1.
The Supplier ensures that the AI System has
been and shall be designed and developed in such a way, including with
appropriate human-machine interface tools, that it can be effectively overseen
by natural persons during the period in which it is in use.
7.2.
Human oversight shall aim to prevent or minimise the risks to health, safety or fundamental rights that may emerge when the AI System is used in accordance with its Intended Purpose or under conditions of Reasonably Foreseeable Misuse, where such risks persist despite the application of other requirements set out in this Section.
7.3.
The oversight measures shall be commensurate with the risks, level of autonomy and context of use of the AI System, and shall be ensured through measures embedded in the AI System by the Supplier and, where appropriate, measures implemented by the Public Organisation.
7.4.
The Supplier ensures that, prior to the
Delivery, appropriate measures shall be embedded in the AI System and taken to
ensure human oversight. These measures shall ensure that the natural persons,
to whom human oversight is assigned, are enabled as appropriate and
proportionate:
a.
to properly understand the relevant
capacities and limitations of the AI System and to be able to duly monitor its
operation, including in view of detecting and addressing anomalies,
dysfunctions and unexpected performance;
b.
to remain aware of the possible tendency of
automatically relying or over-relying on the output produced by the AI System (‘automation
bias’), in particular, if the AI System is used to provide information or
recommendations for decisions to be taken by natural persons;
c.
to correctly interpret the AI System's output, taking into account, in particular, the characteristics of the system and the interpretation tools and methods available;
d.
to decide, in any particular situation, not to
use the AI System or otherwise disregard, override or reverse the output of the
AI System;
e.
to intervene on the operation of the AI System
or interrupt the system through a ‘stop’ button or a similar procedure that
allows the system to come to a halt in a safe state.
7.5.
<Optional> To ensure
appropriate human oversight, the Supplier shall at least implement the
technical and organisational measures described in Annex F before the Delivery of the AI System.
Article 8
Accuracy, robustness and cybersecurity
8.1.
The Supplier ensures that the AI System has been and shall be designed and developed in such a way that it achieves an appropriate level of accuracy, robustness, safety and cybersecurity, and performs consistently in those respects.
8.2.
The levels of accuracy and the relevant
accuracy metrics of the AI System are described in Annex G.
8.3.
To ensure an appropriate level of robustness,
safety and cybersecurity, the Supplier shall at least implement the technical
and organisational measures described in Annex
H before the Delivery of the AI System.
Article 9
Compliance
9.1.
The Supplier must ensure that, from the
Delivery of the AI System until the end of the term of the Agreement, the AI System complies with these MCC-AI-Light.
9.2.
At the first request of the Public Organisation, the Supplier must make available to the Public Organisation all information necessary to demonstrate compliance with these MCC-AI-Light.
9.3.
If during the term of the Agreement the Supplier considers or has reason to consider that the AI System is not in conformity with these MCC-AI-Light, whether in response to a comment by the Public Organisation or not, it shall immediately take the necessary corrective actions to bring the system into conformity. The Supplier shall inform the Public Organisation accordingly.
Article 10
Fundamental rights impact assessment
On first request of the Public Organisation,
the Supplier shall cooperate in the Public Organisation’s performance of an
assessment of the impact on fundamental rights that the use of the AI System
may produce.
Article 11
Obligation to explain the functioning of the AI System on an individual level
11.1.
In addition to the obligations described in article 6, the Supplier is obliged to assist the Public Organisation, at the Public Organisation's first request, in providing a clear and meaningful explanation of the role of the AI System in the decision-making procedure. This meaningful explanation should particularly (but not exclusively) provide insight into the main elements of the decision(s) taken to any affected person who is subject to Public Organisation decision(s) based on the output of the AI System.
11.2.
The obligation as described in article 11.1
comprises the provision to the Public Organisation of all the technical and
other information required to explain how the AI System produced a particular
output and to offer the affected persons the opportunity to verify the way in
which the AI System produced a particular output. The Supplier hereby grants
the Public Organisation the right to use, share and disclose this information,
if and to the extent necessary, to inform the affected persons accordingly.
11.3.
<Optional> The
obligations referred to in article 11.1 and article 11.2 include the source
code of the AI System, the technical specifications used in developing the AI
System, the Data Sets, technical information on how the Data Sets used in
developing the AI System were obtained and edited, information on the method of
development used and the development process undertaken, substantiation of the
choice for a particular model and its parameters, and information on the
performance of the AI System.
Section C – Rights to use the Data Sets
Article 12
Rights to Public Organisation Data Sets
12.1.
All rights, including any intellectual property right, relating to Public Organisation Data Sets will accrue to the Public Organisation or a third party designated as such by the Public Organisation.
12.2.
The Supplier is not entitled to use Public Organisation Data Sets for any purpose other than the performance of the Agreement, except as otherwise provided in Annex B.
12.3.
On first request of the Public Organisation, the Supplier must destroy Public Organisation Data Sets, except as otherwise provided in Annex B. If the Public Organisation so demands, the Supplier must provide feasible evidence of the destruction of Public Organisation Data Sets.
Article 13
Rights to Supplier Data Sets and Third-Party Data Sets
13.1.
All rights, including any intellectual property
right, relating to Supplier Data Sets and Third-Party Data sets will accrue to
the Supplier or a third party.
13.2.
The Supplier grants the Public Organisation a
non-exclusive right to use Supplier Data Sets and Third-Party Data Sets that is
in any event sufficient for performance of the provisions of the Agreement,
including the MCC-AI-Light, except as otherwise provided in Annex B.
13.3.
<Optional>
The right of use described in article 13.2 includes the right to use Supplier Data Sets and Third-Party Data Sets for the further development of the AI System, including any new versions thereof, by the Public Organisation or a third party.
Article 14
Handover of the Data Sets
14.1.
On first request of the Public Organisation,
the Supplier will hand over the most recent version of Public Organisation Data
Sets to the Public Organisation.
14.2.
On first request of the Public Organisation,
the Supplier will hand over the most recent version of the Supplier Data Sets
and Third-Party Data Sets to the Public Organisation, except as otherwise
provided in Annex B.
14.3.
The Data Sets must be handed over to the Public
Organisation by the Supplier in a common file format to be designated by the
Public Organisation. <Optional>
The Data Sets will be returned as follows: [file format]
Article 15
Indemnifications
15.1.
The Supplier shall indemnify the Public
Organisation from all claims brought by third parties, including supervisors,
arising out of any infringement of intellectual property rights, data
protection rights or equivalent rights resulting from the use of the AI System,
the Supplier Data Sets and/or Third-Party Data Sets by the Public Organisation.
15.2.
The Public Organisation shall indemnify the
Supplier from all claims brought by third parties, including supervisors,
arising out of any infringement of their intellectual property rights, privacy
rights or equivalent rights resulting from the use of the Public Organisation
Data Sets.
Annex A – The AI System and the Intended Purpose
Description of the AI System
Within the scope of these MCC-AI-Light are the following systems or
components of systems:
Please
provide a description of the AI System(s). This can also be an algorithmic
system that does not qualify as an AI System under the AI Act.
Intended Purpose
Please
provide a description of the use for which the AI System is intended.
Annex B – The Data Sets
Please
provide a description of the Data Sets used for the training (if applicable), validation and testing of the AI
System. Distinguish between Public Organisation Data Sets and Supplier Data
Sets and Third-Party Data Sets. In the case of Public Organisation Data Sets,
describe the purposes for which the Supplier may use the Data Sets (other than
the performance of the Agreement) and whether the Supplier is required to
destroy the Data Set at the end of the term of the Agreement. In the case of
Supplier Data Sets and Third-Party Data Sets describe the purposes for which
the Public Organisation may use the Data Sets and whether the Supplier is
obliged to hand over the Data Sets.
The Public Organisation Data Sets
The
following Data Sets are provided by the Public Organisation to the Supplier
under the Agreement or to be created or collected as part of the Agreement:
| Description of the Data Set | Rights of use of the Supplier | Obligation to destroy the Data Set at the end of the term of the Agreement |
|---|---|---|
|  |  | Yes/No |
|  |  | Yes/No |
|  |  | Yes/No |
|  |  | Yes/No |
Supplier Data Sets and Third-Party
Data Sets
The
following Supplier Data Sets and Third-Party Data Sets will be or were used for
the training (if applicable), validation and testing of the AI System:
| Description of the Data Set | Rights of use of the Public Organisation | Obligation to hand over[1] |
|---|---|---|
|  |  | Yes/No |
|  |  | Yes/No |
|  |  | Yes/No |
|  |  | Yes/No |
Annex C – Technical documentation
The technical documentation shall contain at
least the following information, as applicable to the AI System:
1.
a general description of the AI System
including:
1.1.
its intended purpose, the name of the Supplier
and the version of the system reflecting its relation to previous versions;
1.2.
how the AI System can interact or can be used
to interact with hardware or software, including other AI systems, that is not
part of the AI System itself, where applicable;
1.3.
the versions of relevant software or firmware
and any requirement related to version update;
1.4.
the description of hardware on which the AI
System is intended to run;
1.5.
where the AI System is a component of products,
photographs or illustrations showing external features, marking and internal
layout of those products;
1.6.
a basic description of the user-interface
provided to the Public Organisation;
1.7.
instructions for use for the deployer and a
basic description of the user-interface, where applicable.
2.
a detailed description of the elements of the
AI System and of the process for its development, including:
2.1.
the methods and steps performed for the
development of the AI System, including, where relevant, recourse to
pre-trained systems or tools provided by third parties and how these have been
used, integrated or modified by the Supplier, including a description of any licensing or other contractual arrangements related to such third-party inputs;
2.2.
the design specifications of the system, namely
the general logic of the AI System and of the algorithms; the key design
choices including the rationale and assumptions made, also with regard to
persons or groups of persons on which the system is intended to be used; the
main classification choices; what the system is designed to optimise for and
the relevance of the different parameters; the description of the expected
output and output quality of the system; the decisions about any possible
trade-off made regarding the technical solutions adopted to comply with the
requirements set out in these MCC-AI-Light;
2.3.
the description of the system architecture
explaining how software components build on or feed into each other and
integrate into the overall processing; the computational resources used to
develop, train, test and validate the AI System;
2.4.
where relevant, the data requirements in terms
of data sheets describing the training methodologies and techniques and the
training data sets used, including a general description of these data sets,
information about their provenance, scope and main characteristics; how the
data was obtained and selected; labelling procedures (e.g. for supervised learning), data cleaning methodologies (e.g. outlier detection);
2.5.
assessment of the human oversight measures
needed in accordance with these MCC-AI-Light, including an assessment of the
technical measures needed to facilitate the interpretation of the outputs of AI
systems by the Public Organisation in accordance with these MCC-AI-Light;
2.6.
where applicable, a detailed description of
predetermined changes to the AI System and its performance, together with all
the relevant information related to the technical solutions adopted to ensure
continuous compliance of the AI System with the relevant requirements set out
in these MCC-AI-Light;
2.7.
the validation and testing procedures used,
including information about the validation and testing data used and their main
characteristics; metrics used to measure accuracy, robustness and compliance
with other relevant requirements set out in these MCC-AI-Light as well as
potentially discriminatory impacts; test logs and all test reports dated and
signed by the responsible persons, including with regard to predetermined changes as referred to under point 2.6;
2.8.
cybersecurity measures put in place.
3.
detailed information about the monitoring,
functioning and control of the AI System, in particular with regard to: its
capabilities and limitations in performance, including the degrees of accuracy
for specific persons or groups of persons on which the system is intended to be
used and the overall expected level of accuracy in relation to its intended
purpose; the foreseeable unintended outcomes and sources of risks to health and
safety, fundamental rights and discrimination in view of the intended purpose of
the AI System.
4.
a description of the appropriateness of the
performance metrics for the AI System;
5.
a detailed description of the risk management
system in accordance with article 2;
6.
a description of any relevant change made by
the Supplier to the system through its lifecycle.
Annex D – Instructions for use
The
instructions for use shall contain at least the following information, as
applicable to the AI System:
1.
the identity and the contact details of the Supplier
and, where applicable, of its authorised representatives;
2.
the characteristics, capabilities and
limitations of performance of the AI System, including where appropriate:
2.1.
the Intended Purpose;
2.2.
the level of accuracy, including its metrics, robustness
and cybersecurity referred to in article 8 against which the AI System has been
tested and validated and which can be expected, and any clearly known and
foreseeable circumstances that may have an impact on that expected level of
accuracy, robustness and cybersecurity;
2.3.
any known or foreseeable circumstance related
to the use of the AI System in accordance with the Intended Purpose or under
conditions of Reasonably Foreseeable Misuse, which may lead to risks to the
health and safety or fundamental rights;
2.4.
where applicable, the technical capabilities
and characteristics of the AI System to provide information that is relevant to
explain its output;
2.5.
when appropriate, its performance regarding
specific persons or groups of persons on which the AI System is intended to be
used;
2.6.
when appropriate, specifications for the input
data or any other relevant information in terms of the training, validation and
testing data sets used, taking into account the intended purpose of the AI
System;
2.7.
where applicable, information to enable the
Public Organisation to interpret the output of the AI System and use it
appropriately;
3.
the changes to the AI System and its
performance which have been predetermined by the Supplier, if any;
4.
the human oversight measures referred to in
article 7, including the technical measures put in place to facilitate the
interpretation of the outputs of the AI System by the Public Organisation;
5.
the computational and hardware resources
needed, the expected lifetime of the AI System and any necessary maintenance
and care measures, including their frequency to ensure the proper functioning
of that AI System, including as regards software updates;
6.
where relevant, a description of the mechanisms
included within the AI System that allows the Public Organisation to properly
collect, store and interpret the logs in accordance with article 5 of these MCC-AI-Light.
Annex E – Measures to ensure transparency
Please provide here a description of the
technical and organisational measures to be taken by the Supplier to ensure
transparency in accordance with article 6 of the MCC-AI-Light.
Annex F – Measures to ensure human oversight
Please provide here a description of the
technical and organisational measures to be taken by the Supplier to ensure
human oversight in accordance with article 7 of the MCC-AI-Light.
Annex G – Levels of accuracy
Describe here the required levels of accuracy.
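Annex G leaves the form of this description open. As a purely illustrative sketch (not part of the clauses), the required levels might be recorded as metric/level pairs, with an overall requirement and a floor that applies to every group of persons on which the AI System is used; the metric names and values below are hypothetical placeholders.

```python
# Illustrative sketch only: one hypothetical way to record the required
# levels of accuracy for Annex G. Metric names and values are
# placeholders to be agreed per Intended Purpose, not requirements.

REQUIRED_ACCURACY = {
    "overall": {"accuracy": 0.92, "f1_score": 0.88},
    "per_group_minimum": {"accuracy": 0.88},  # floor for every group
}

def meets_requirements(measured, required):
    """True if every required metric is met at or above its level."""
    return all(measured.get(metric, 0.0) >= level
               for metric, level in required.items())
```

Recording the levels in this machine-checkable form would also let the testing under article 2.8 and the compliance demonstration under article 9.2 reference the same agreed figures.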
Annex H – Measures to ensure an appropriate level of robustness, safety and cybersecurity
Please provide here a description of the
technical and organisational measures to be taken by the Supplier to ensure an
appropriate level of robustness, safety and cybersecurity in accordance with
article 8 of the MCC-AI-Light.
These measures must ensure that the AI System shall be as resilient as possible regarding errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to its interaction with natural persons or other systems.
The AI System shall be resilient against attempts by unauthorised third parties to alter its use, behaviour, outputs or performance by exploiting the system’s vulnerabilities. The technical solutions to address AI-specific vulnerabilities may include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training data set (‘data poisoning’) or pre-trained components used in training (‘model poisoning’), inputs designed to cause the model to make a mistake (‘adversarial examples’ or ‘model evasion’), confidentiality attacks or model flaws, which could lead to harmful decision-making.
[1] A limitation of the obligation to hand over Supplier Data Sets and Third-Party Data Sets does not limit the Supplier’s obligations described in article 6 and article 11.