commit 358471e349765354d6c4aa8172b94e27c2b2515c
from: Romain VINCENT
date: Sat Jan 17 09:49:00 2026 UTC

Improve Section model. Move HTML to a file for Section parser tests.

commit - 4563b43dc46cf12d4ea4e82321a75a037e81d5c9
commit + 358471e349765354d6c4aa8172b94e27c2b2515c
blob - /dev/null
blob + 76339a0ea7148fe6ca8a4c8332db6165261b1675 (mode 644)
--- /dev/null
+++ eur-lex-scraper/data/tests/parsers/section/section_test_1.html
@@ -0,0 +1,383 @@
+ SECTION 1
+ Classification of AI systems as high-risk
+
+ Article 6
+ Classification rules for high-risk AI systems
+
+ 1.   Irrespective of whether an AI system is placed on the market or put into service independently of the products referred to in points (a) and (b), that AI system shall be considered to be high-risk where both of the following conditions are fulfilled:
+ (a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I;
+ (b) the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I.
+ 2.   In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall be considered to be high-risk.
+ 3.   By derogation from paragraph 2, an AI system referred to in Annex III shall not be considered to be high-risk where it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making.
+ The first subparagraph shall apply where any of the following conditions is fulfilled:
+ (a) the AI system is intended to perform a narrow procedural task;
+ (b) the AI system is intended to improve the result of a previously completed human activity;
+ (c) the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or
+ (d) the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III.
+ Notwithstanding the first subparagraph, an AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons.
+ 4.   A provider who considers that an AI system referred to in Annex III is not high-risk shall document its assessment before that system is placed on the market or put into service. Such provider shall be subject to the registration obligation set out in Article 49(2). Upon request of national competent authorities, the provider shall provide the documentation of the assessment.
+ 5.   The Commission shall, after consulting the European Artificial Intelligence Board (the ‘Board’), and no later than 2 February 2026, provide guidelines specifying the practical implementation of this Article in line with Article 96 together with a comprehensive list of practical examples of use cases of AI systems that are high-risk and not high-risk.
+ 6.   The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend paragraph 3, second subparagraph, of this Article by adding new conditions to those laid down therein, or by modifying them, where there is concrete and reliable evidence of the existence of AI systems that fall under the scope of Annex III, but do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons.
+ 7.   The Commission shall adopt delegated acts in accordance with Article 97 in order to amend paragraph 3, second subparagraph, of this Article by deleting any of the conditions laid down therein, where there is concrete and reliable evidence that this is necessary to maintain the level of protection of health, safety and fundamental rights provided for by this Regulation.
+ 8.   Any amendment to the conditions laid down in paragraph 3, second subparagraph, adopted in accordance with paragraphs 6 and 7 of this Article shall not decrease the overall level of protection of health, safety and fundamental rights provided for by this Regulation and shall ensure consistency with the delegated acts adopted pursuant to Article 7(1), and take account of market and technological developments.
+
+ Article 7
+ Amendments to Annex III
+
+ 1.   The Commission is empowered to adopt delegated acts in accordance with Article 97 to amend Annex III by adding or modifying use-cases of high-risk AI systems where both of the following conditions are fulfilled:
+ (a) the AI systems are intended to be used in any of the areas listed in Annex III;
+ (b) the AI systems pose a risk of harm to health and safety, or an adverse impact on fundamental rights, and that risk is equivalent to, or greater than, the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.
+ 2.   When assessing the condition under paragraph 1, point (b), the Commission shall take into account the following criteria:
+ (a) the intended purpose of the AI system;
+ (b) the extent to which an AI system has been used or is likely to be used;
+ (c) the nature and amount of the data processed and used by the AI system, in particular whether special categories of personal data are processed;
+ (d) the extent to which the AI system acts autonomously and the possibility for a human to override a decision or recommendations that may lead to potential harm;
+ (e) the extent to which the use of an AI system has already caused harm to health and safety, has had an adverse impact on fundamental rights or has given rise to significant concerns in relation to the likelihood of such harm or adverse impact, as demonstrated, for example, by reports or documented allegations submitted to national competent authorities or by other reports, as appropriate;
+ (f) the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its ability to affect multiple persons or to disproportionately affect a particular group of persons;
+ (g) the extent to which persons who are potentially harmed or suffer an adverse impact are dependent on the outcome produced with an AI system, in particular because for practical or legal reasons it is not reasonably possible to opt-out from that outcome;
+ (h) the extent to which there is an imbalance of power, or the persons who are potentially harmed or suffer an adverse impact are in a vulnerable position in relation to the deployer of an AI system, in particular due to status, authority, knowledge, economic or social circumstances, or age;
+ (i) the extent to which the outcome produced involving an AI system is easily corrigible or reversible, taking into account the technical solutions available to correct or reverse it, whereby outcomes having an adverse impact on health, safety or fundamental rights, shall not be considered to be easily corrigible or reversible;
+ (j) the magnitude and likelihood of benefit of the deployment of the AI system for individuals, groups, or society at large, including possible improvements in product safety;
+ (k) the extent to which existing Union law provides for:
+ (i) effective measures of redress in relation to the risks posed by an AI system, with the exclusion of claims for damages;
+ (ii) effective measures to prevent or substantially minimise those risks.
+ 3.   The Commission is empowered to adopt delegated acts in accordance with Article 97 to amend the list in Annex III by removing high-risk AI systems where both of the following conditions are fulfilled:
+ (a) the high-risk AI system concerned no longer poses any significant risks to fundamental rights, health or safety, taking into account the criteria listed in paragraph 2;
+ (b) the deletion does not decrease the overall level of protection of health, safety and fundamental rights under Union law.
blob - be404e25c4d19e77c886af9625174c2655747147
blob + 6556a32b2e65c72dfb57bf037452ac78da3d0974
--- eur-lex-scraper/src/models/enacting_terms.rs
+++ eur-lex-scraper/src/models/enacting_terms.rs
@@ -1,4 +1,5 @@
 use crate::models::articles::Article;
+use crate::models::section::Section;
 
 #[derive(Clone, Debug, PartialEq, Eq)]
 pub enum Item {
@@ -61,20 +62,3 @@ impl Into<Item> for Chapter {
         Item::Chapter(self)
     }
 }
-
-#[derive(Clone, Debug, Default, PartialEq, Eq)]
-pub struct Section {
-    pub items: Vec<Article>,
-}
-
-impl Section {
-    pub fn push(&mut self, article: Article) {
-        self.items.push(article)
-    }
-}
-
-impl Into<Item> for Section {
-    fn into(self) -> Item {
-        Item::Section(self)
-    }
-}
blob - eecd48fcd1a04847f27fdc63f4f952ac137a2378
blob + 0510c51c0be4b7ca5953f59ac9c51390dff35a31
--- eur-lex-scraper/src/models/mod.rs
+++ eur-lex-scraper/src/models/mod.rs
@@ -2,3 +2,4 @@ pub mod acts;
 pub mod articles;
 pub mod enacting_terms;
 pub mod preambles;
+pub mod section;
blob - /dev/null
blob + 59939a3bd4f42b8d2691492a71e9f1e56e6fa925 (mode 644)
--- /dev/null
+++ eur-lex-scraper/src/models/section.rs
@@ -0,0 +1,36 @@
+use crate::models::{articles::Article, enacting_terms::Item};
+
+#[derive(Clone, Debug, Default, PartialEq, Eq)]
+pub struct Section {
+    items: Vec<Article>,
+}
+
+impl Into<Item> for Section {
+    fn into(self) -> Item {
+        Item::Section(self)
+    }
+}
+
+impl IntoIterator for Section {
+    type Item = Article;
+    type IntoIter = std::vec::IntoIter<Article>;
+
+    fn into_iter(self) -> Self::IntoIter {
+        self.items.into_iter()
+    }
+}
+
+impl Section {
+    pub fn push(&mut self, article: Article) {
+        self.items.push(article)
+    }
+    pub fn get(&self, index: usize) -> Option<&Article> {
+        self.items.get(index)
+    }
+    pub fn get_mut(&mut self, index: usize) -> Option<&mut Article> {
+        self.items.get_mut(index)
+    }
+    pub fn len(&self) -> usize {
+        self.items.len()
+    }
+}
blob - 25e44521de570bdb68bc955a0b7f188a2b149e4c
blob + 161624a33298dea9b4ce298f8fa8d7975108ed54
--- eur-lex-scraper/src/parsers/section.rs
+++ eur-lex-scraper/src/parsers/section.rs
@@ -1,7 +1,7 @@
 use scraper::{ElementRef, Selector};
 use thiserror::Error;
 
-use crate::models::enacting_terms::Section;
+use crate::models::section::Section;
 use crate::parsers::article::{ArticleParser, ArticleParserError};
 
 pub struct SectionParser {}
@@ -37,409 +37,27 @@ mod tests {
     use super::*;
     use scraper::{Html, Selector};
-
-    fn get_section_html() -> String {
-        let section_html = r#"
-            ... (383 lines of inline section HTML, identical to the new data/tests/parsers/section/section_test_1.html fixture above) ...
-        "#;
-        section_html.to_string()
+    use std::fs;
+    // Well formed section
+    fn get_test_section_1() -> String {
+        fs::read_to_string("data/tests/parsers/section/section_test_1.html").unwrap()
     }
 
     #[test]
     fn parsing_article() {
-        let html = Html::parse_fragment(&get_section_html());
+        let html = Html::parse_fragment(&get_test_section_1());
         let selector = Selector::parse(r#"[id*="sct_"]:not([id*=".tit_"])"#).unwrap();
         let element_ref = html.select(&selector).next().unwrap();
 
         let section_left = SectionParser::parse(element_ref).unwrap();
-        assert_eq!(section_left.items.len(), 2);
-        assert_eq!(section_left.items.get(0).unwrap().get_number(), 6);
+        assert_eq!(section_left.len(), 2);
+        assert_eq!(section_left.get(0).unwrap().get_number(), 6);
         assert_eq!(
-            section_left.items.get(0).unwrap().get_title(),
+            section_left.get(0).unwrap().get_title(),
             "Classification rules for high-risk AI systems"
        );
-        assert_eq!(section_left.items.get(1).unwrap().get_number(), 7);
+        assert_eq!(section_left.get(1).unwrap().get_number(), 7);
         assert_eq!(
-            section_left.items.get(1).unwrap().get_title(),
+            section_left.get(1).unwrap().get_title(),
             "Amendments to Annex III"
         );
     }
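
A minimal usage sketch, not part of the commit, of how the reworked Section API might be consumed downstream. It assumes SectionParser::parse is public and that Article::get_number/get_title return displayable values, as the test suggests; print_section_outline and its input are illustrative only.

use scraper::{Html, Selector};

use crate::models::section::Section;
use crate::parsers::section::SectionParser;

// Illustrative helper: `section_html` would be a EUR-Lex fragment such as the
// new section_test_1.html fixture.
fn print_section_outline(section_html: &str) {
    let html = Html::parse_fragment(section_html);
    // Same selector the test uses: the section node itself, excluding its title sub-node.
    let selector = Selector::parse(r#"[id*="sct_"]:not([id*=".tit_"])"#).unwrap();
    let element_ref = html.select(&selector).next().unwrap();

    let section: Section = SectionParser::parse(element_ref).unwrap();

    // `len` and `get` replace direct access to the now-private `items` field.
    println!("{} article(s) in section", section.len());

    // The new IntoIterator impl lets callers consume the articles directly.
    for article in section {
        println!("Article {}: {}", article.get_number(), article.get_title());
    }
}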