
Data Protection and Machine-Learning-Supported Decision-Making at the EU Border: ETIAS Profiling Under Scrutiny

  • Conference paper
  • Published in: Privacy Technologies and Policy (APF 2022)

Part of the book series: Lecture Notes in Computer Science (LNSC, volume 13279)

Abstract

ETIAS is an upcoming, largely automated IT system for identifying risks posed by visa-exempt Third Country Nationals (TCNs) travelling to the Schengen area. It is expected to become operational by the end of 2022. The largely automated ETIAS risk assessments include checks of traveller data against abstract, not yet defined risk indicators that might discriminate against certain groups of travellers. Moreover, there is evidence of the planned use of machine learning (ML) for risk assessments under the ETIAS framework. Risk assessments that can result in personal data being entered into terrorist watchlists, or in the refusal of a travel authorisation, strongly affect fundamental rights, in particular the right to data protection. The use of ML-trained models for such assessments raises concerns, since existing models lack transparency and, in some cases, have been found to be significantly biased. This paper discusses selected requirements under EU data protection law for ML-trained models, namely human oversight, information and access rights, accuracy, and supervision. The analysis considers provisions of the European Commission's AI Act Proposal, as the proposed regulation can provide guidance for applying existing data protection requirements to AI.


Notes

  1. Schengen Information System (SIS), Visa Information System (VIS), Entry-Exit System (EES), Eurodac, Europol data, and Interpol SLTD and TDAWN.

  2. The ETIAS screening rules are also planned to be used in the future when examining the risk of visa applicants (VIS Reg [84], Art. 9j), who will also be checked against the ETIAS watchlist (ibid., Art. 9a and 22b).

  3. See Sect. 3.2.

  4. The most prevalent approach to ML, supervised learning, refers to training models with labelled data, i.e. the training data consists of inputs and desired outputs [51].

  5. In this context see Art. 10 (3), Rec. 44 AI Act Proposal [28], which require data sets to be, among other things, sufficiently representative, accurate and complete.

  6. Schengen Information System (SIS), Visa Information System (VIS), Entry-Exit System (EES), Eurodac, Europol data, and Interpol SLTD and TDAWN.

  7. See Sect. 2.

  8. For example, the Commission AI report refers to misuse and bias in the context of the EU policy-making and enforcement process [27].

  9. The provision refers to the offences referred to in Art. 2 (1) and 2 (2) of the Framework Decision 2002/584/JHA [13].

  10. ‘[D]ata recorded in Europol data’.

  11. According to the Commission’s answer to a parliamentary question [41], Europol has used Palantir Gotham for the operational analysis of counter-terrorism-related data. The software is promoted as ‘AI-ready’ [73]. Some national law enforcement agencies already use AI tools, see, for example, [37].

  12. On the impossibility of travelling for medical purposes due to a UN blacklist.

  13. According to the ECtHR, professional relations can be considered private life under Art. 8 ECHR [31] (on the impossibility of travelling for medical and business purposes due to an entry ban in the SIS).

  14. See para 126 and the case-law cited there.

  15. On Art. 47 CFREU [11].

  16. The provision refers to the predecessor Regulation (EC) No 45/2001, which has been replaced by the EUDPR [82].

  17. For information and access rights see Sect. 5.3 with fn. 28, 31.

  18. Specifically, the AI Act Proposal [28] stipulates an exemption for AI systems that are components of the large-scale IT systems established by the legal acts listed in Annex IX and that have been placed on the market or put into service before 12 months after the date of application of the proposed AI Act (Art. 83 (1)). The AI Act shall apply 24 months after its entry into force (Art. 85 (2)). Annex IX lists Union legislation on large-scale IT systems in the area of freedom, security and justice, in particular the regulations concerning ETIAS (No. 5). Since ETIAS is expected to be operational by the end of 2022, it is highly likely to fall under this exemption. However, the AI Act Proposal [28] stipulates that the requirements of the proposed regulation shall be taken into account in the evaluation of the exempted AI systems (Art. 83 (1)). ETIAS will be evaluated three years after the start of operations and every four years thereafter (Art. 92 (5) ETIAS Regulation [81]).

  19. Cf. AI Act Proposal [28], Art. 3 (1), Annex I (a).

  20. AI Act Proposal [28], p. 7. The proposal classifies as high-risk, in particular, AI systems intended to be used for assessing risks posed by persons who intend to enter the territory of a Member State (e.g. further ETIAS risk assessments in case of a hit), and AI systems intended to be used for making individual risk assessments regarding the risk of future criminal offences (e.g. upstream risk assessments on which SIS or Europol entries are based); Art. 6 (2), Annex III No. 6 (a), No. 7 (b) AI Act Proposal [28].

  21. See Sects. 5.2 and 5.4.

  22. The LED provision explicitly covers only ‘adverse’ legal effects, Art. 11 (1) LED [24].

  23. The Art. 29 Working Party, WP251rev.01, p. 21 gives ‘refused admission to a country’ as an example of such a decision.

  24. In the context of American prosecution procedure law (‘totality-of-the-circumstances analysis’).

  25. See Sect. 3.1.

  26. See Sect. 5.1 on the applicability.

  27. On accuracy and non-discrimination see Sect. 5.4.

  28. No such explicit requirement exists under the LED, which demonstrates that the level of transparency provided for under the LED is lower than under the GDPR. This inconsistency could pose a problem where the determination of the applicable provisions is unclear (see Sect. 5.1).

  29. See Sect. 5.2.

  30. See Sect. 4.

  31. This requirement is not explicitly anchored in the LED [24].

  32. See Sect. 5.2.

  33. In the context of visa decisions.

  34. On the applicability of the AI Act Proposal [28] see Sect. 5.1.

  35. From a statistical point of view, discrimination is the objective of risk assessments. See, for example, [5] on the discriminatory power of credit scoring.

  36. See Sect. 3.1.
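Footnote 4’s definition of supervised learning — training a model on labelled data, i.e. inputs paired with desired outputs — can be sketched with a toy classifier. This is a hypothetical, stdlib-only illustration of the general technique; it bears no relation to any actual ETIAS screening logic, which remains undisclosed.

```python
# Supervised learning in miniature: learn from (input, label) pairs,
# then predict labels for unseen inputs. Here: a nearest-centroid
# classifier over two numeric features (a deliberately simple model).
from collections import defaultdict

def train(examples):
    """Learn one centroid (mean point) per label from labelled examples."""
    sums = defaultdict(lambda: [0.0, 0.0])
    counts = defaultdict(int)
    for (x, y), label in examples:
        sums[label][0] += x
        sums[label][1] += y
        counts[label] += 1
    return {label: (s[0] / counts[label], s[1] / counts[label])
            for label, s in sums.items()}

def predict(model, point):
    """Assign the label of the nearest learned centroid."""
    def dist2(centroid):
        return (centroid[0] - point[0]) ** 2 + (centroid[1] - point[1]) ** 2
    return min(model, key=lambda label: dist2(model[label]))

# Labelled training data: inputs (two features) with desired outputs (labels).
training_data = [((1.0, 1.0), "low"), ((1.2, 0.8), "low"),
                 ((5.0, 5.0), "high"), ((4.8, 5.2), "high")]
model = train(training_data)
print(predict(model, (1.1, 0.9)))  # -> low
print(predict(model, (5.1, 4.9)))  # -> high
```

The sketch also makes footnote 5’s concern concrete: the model’s behaviour is determined entirely by the training examples, so unrepresentative, inaccurate or incomplete labelled data directly shapes every subsequent prediction.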

References

  1. Angwin, J., Larson, J., Mattu, S., Kirchner, L.: Machine Bias. There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica (2016). https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. Accessed 04 Apr 2022

  2. Article 29 Working Party: Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, 17/EN WP251rev.01 (2018)

  3. Bäcker, in: Kühling, J., Buchner, B.: Datenschutz-Grundverordnung, Bundesdatenschutzgesetz: DS-GVO/BDSG, 3rd edn, C.H. Beck (2020). Art. 13

  4. Berk, R.: Criminal Justice Forecasts of Risks – A Machine Learning Approach. Springer, Berlin (2012). https://doi.org/10.1007/978-1-4614-3085-8

  5. Blöchlinger, A., Leippold, M.: Economic benefit of powerful credit scoring. J. Bank. Finance 30, 851–873 (2006)

  6. Brkan, M.: The essence of the fundamental rights to privacy and data protection: finding the way through the maze of the CJEU’s constitutional reasoning. German Law J. 20(6), 864–883 (2019)

  7. Brouwer, E.: Schengen and the administration of exclusion: legal remedies caught in between entry bans, risk assessment and artificial intelligence. Eur. J. Migr. Law 23, 485–507 (2021)

  8. Buchner, B., in: Kühling, J., Buchner, B.: Datenschutz-Grundverordnung, Bundesdatenschutzgesetz: DS-GVO/BDSG, 3rd edn, C.H. Beck (2020). Art. 22

  9. Bygrave, L.A.: Minding the machine: article 15 of the EC data protection directive and automated profiling. Comput. Law Secur. Report 17, 17–24 (2001)

  10. Bygrave, L.A.: Minding the machine v2.0: the EU general data protection regulation and automated decision making. In: Yeung, K., Lodge, M. (eds.) Oxford University Press, Oxford (2019)

  11. Charter of Fundamental Rights of the European Union, OJ C 326/391 (CFREU) (2012)

  12. Convention for the Protection of Human Rights and Fundamental Freedoms, Rome, 4 November 1950 (ECHR)

  13. Council Framework Decision of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States – Statements made by certain Member States on the adoption of the Framework Decision, OJ L 190/1 (2002)

  14. Court of Justice of the European Union: Case C-362/14 Maximilian Schrems v Data Protection Commissioner, ECLI:EU:C:2015:650 (Schrems I) (2015)

  15. Court of Justice of the European Union: Case C-293/12 Digital Rights Ireland and C-594/12 Seitlinger and Others, ECLI:EU:C:2014:238 (2014)

  16. Court of Justice of the European Union: Case C-673/17 Bundesverband der Verbraucherzentralen und Verbraucherverbände — Verbraucherzentrale Bundesverband eV v Planet49 GmbH, ECLI:EU:C:2019:801 (2019)

  17. Court of Justice of the European Union: Joined Cases C-225/19 and C-226/19 R.N.N.S. and K.A. v Minister van Buitenlandse Zaken, ECLI:EU:C:2020:951 (2020)

  18. Court of Justice of the European Union: Joined Cases C-511/18, C-512/18 and C-520/18 La Quadrature du Net, ECLI:EU:C:2020:791 (2020)

  19. Court of Justice of the European Union: Opinion 1/15 of the Court (Grand Chamber), ECLI:EU:C:2017:592 (EU – Canada PNR Opinion) (2017)

  20. Courtland: Bias detectives: the researchers striving to make algorithms fair. https://www.nature.com/articles/d41586-018-05469-3. Accessed 04 Apr 2022

  21. Dietterich, T.G.: Overfitting and undercomputing in machine learning. ACM Comput. Surv. 27, 326–327 (1995)

  22. Dimitrova, D.: Data subject rights: the rights to access and rectification in the area of freedom, security and justice. Doctoral Dissertation at the Vrije Universiteit Brussel (2021)

  23. Dimitrova, D.: The rise of the personal data quality principle: is it legal and does it have an impact on the right to rectification? EJLT 12(3) (2021)

  24. Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA, OJ L 119/89 (Law Enforcement Directive or LED) (2016)

  25. Doran, D., Schulz, S., Besold, T.R.: What does explainable AI really mean? A new conceptualization of perspectives (2017). https://arxiv.org/pdf/1710.00794.pdf. Accessed 04 Apr 2022

  26. EDPB-EDPS: Joint Opinion 5/2021 on the proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). 18 June 2021. https://edpb.europa.eu/system/files/2021-06/edpb-edps_joint_opinion_ai_regulation_en.pdf. Accessed 04 Apr 2022

  27. European Commission: Opportunities and Challenges for the Use of Artificial Intelligence in Border Control, Migration and Security. vol. 1: Main Report, written by Deloitte (2020)

  28. European Commission: Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. COM (2021) 206 final, Brussels, 21 April 2021

  29. European Court of Human Rights: Big Brother Watch and Others v the United Kingdom App nos. 58170/13, 62322/14 and 24960/15, 25 May 2021

  30. European Court of Human Rights: Centrum för rättvisa v Sweden App no. 35252/08, 25 May 2021

  31. European Court of Human Rights: Dalea v France App no. 964/07, 2 February 2010

  32. European Court of Human Rights: Nada v Switzerland App no. 10593/08, 12 September 2012

  33. European Court of Human Rights: Rotaru v Romania App no. 28341/95, 4 May 2000

  34. European Court of Human Rights: S. and Marper v the United Kingdom App nos. 30562/04 and 30566/04, 4 December 2008

  35. European Court of Human Rights: Weber and Saravia v Germany App no. 54934/00, 29 June 2006

  36. European Data Protection Board: 2019 Annual Report: Working Together for Stronger Rights (2020). https://edpb.europa.eu/sites/edpb/files/files/file1/edpb_annual_report_2019_en.pdf. Accessed 04 Apr 2022

  37. European Data Protection Board: Finnish SA: Police reprimanded for illegal processing of personal data with facial recognition software. 7 October 2021. https://edpb.europa.eu/news/national-news/2021/finnish-sa-police-reprimanded-illegal-processing-personal-data-facial_en. Accessed 04 Apr 2022

  38. European Data Protection Supervisor: Decision on the retention by Europol of datasets lacking Data Subject Categorisation (Cases 2019-0370 & 2021-0699). https://edps.europa.eu/system/files/2022-01/22-01-10-edps-decision-europol_en.pdf. Accessed 04 Apr 2022

  39. European Data Protection Supervisor: Opinion 3/2017. EDPS Opinion on the Proposal for a European Travel Information and Authorisation System (ETIAS) (2017). https://edps.europa.eu/sites/edp/files/publication/17-03-070_etias_opinion_en.pdf. Accessed 04 Apr 2022

  40. European Parliament: Artificial intelligence at EU borders – Overview of applications and key issues. July 2021. https://www.europarl.europa.eu/thinktank/en/document/EPRS_IDA(2021)690706. Accessed 04 Apr 2022

  41. European Parliament: Parliamentary Questions, Question reference: E-000173/2020. 9 June 2020. https://www.europarl.europa.eu/doceo/document/E-9-2020-000173-ASW_EN.html. Accessed 04 Apr 2022

  42. European Union Agency for Fundamental Rights and Council of Europe: Handbook on European data protection law (2018)

  43. European Union Agency for the Operational Management of Large-Scale IT Systems in the Area of Freedom, Security and Justice (eu-LISA): Artificial Intelligence in the Operational Management of Large-scale IT systems – Research and Technology Monitoring Report. July 2020. https://www.eulisa.europa.eu/Publications/Reports/AI%20in%20the%20OM%20of%20Large-scale%20IT%20Systems.pdf#search=AI%20in%20the%20operational%20management. Accessed 04 Apr 2022

  44. European Union Agency for the Operational Management of Large-Scale IT Systems in the Area of Freedom, Security and Justice (eu-LISA): Call for tender “Framework Contract for Implementation and Maintenance in Working Order of the Biometrics Part of the Entry Exit System and Future Shared Biometrics Matching System”. https://etendering.ted.europa.eu/cft/cft-display.html?cftId=4802. Accessed 04 Apr 2022

  45. European Union Agency for the Operational Management of Large-Scale IT Systems in the Area of Freedom, Security and Justice (eu-LISA): AI Initiatives at eu-LISA. https://eulisa.europa.eu/SiteAssets/Bits-and-Bytes/002.aspx. Accessed 04 Apr 2022

  46. Fotiadis, A., Stavinoha, L., Zandonini, G., Howden, D.: A data ‘black hole’: Europol ordered to delete vast store of personal data. https://www.theguardian.com/world/2022/jan/10/a-data-black-hole-europol-ordered-to-delete-vast-store-of-personal-data. Accessed 30 Mar 2022

  47. Frontex: ETIAS, what it means for travellers; what it means for Frontex. https://frontex.europa.eu/future-of-border-control/etias/. Accessed 04 Apr 2022

  48. Fröwis, M., Gottschalk, T., Haslhofer, B., Rückert, C., Pesch, P.: Safeguarding the evidential value of forensic cryptocurrency investigations. Forensic Sci. Int. Digit. Invest. 33, 200902 (2020)

  49. Galindo, J., Tamayo, P.: Credit risk assessment using statistical and machine learning: basic methodology and risk modeling applications. Comput. Econ. 15, 107–143 (2000)

  50. Geiger, R.S., et al.: Garbage in, garbage out? Do machine learning application papers in social computing report where human-labeled training data comes from? In: FAT* 2020: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 325–336 (2020)

  51. Ghahramani, Z.: Unsupervised learning. In: Bousquet, O., von Luxburg, U., Rätsch, G. (eds.) ML 2003. LNCS (LNAI), vol. 3176, pp. 72–112. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-28650-9_5

  52. Goddard, K., Roudsari, A., Wyatt, J.C.: Automation bias: a systematic review of frequency, effect mediators, and mitigators. JAMIA 19(1), 12–17 (2012)

  53. Gonzalez-Fuster, G.: Artificial Intelligence and Law Enforcement Impact on Fundamental Rights. Study requested by the LIBE Committee, European Parliament, July 2020. https://www.europarl.europa.eu/RegData/etudes/STUD/2020/656295/IPOL_STU(2020)656295_EN.pdf. Accessed 04 Apr 2022

  54. Google: Machine Learning Glossary. https://developers.google.com/machine-learning/glossary#bias-ethicsfairness. Accessed 04 Apr 2022

  55. Green, B., Chen, Y.: The principles and limits of algorithm-in-the-loop decision-making. In: Proceedings of the ACM on Human-Computer Interaction, vol. 3, Issue CSCW, pp. 1–24, November 2019, Article No. 50

  56. Green, B.: The flaws of policies requiring human oversight of government algorithms (2021). https://arxiv.org/abs/2109.05067. Accessed 04 Apr 2022

  57. Hao, K.: What is AI? We drew you a flowchart to work it out. MIT Technology Review, 10 November 2018. https://www.technologyreview.com/2018/11/10/139137/is-this-ai-we-drew-you-a-flowchart-to-work-it-out/. Accessed 04 Apr 2022

  58. Hildebrandt, M.: The dawn of a critical transparency right for the profiling era. In: Bus, J., et al. (eds.) Digital Enlightenment Yearbook 2012, pp. 41–56. IOS Press, Amsterdam (2012)

  59. Hittmeir, M., Ekelhart, A., Mayer, R.: On the utility of synthetic data: an empirical evaluation on machine learning tasks. In: ARES 2019: Proceedings of the 14th International Conference on Availability, Reliability and Security, pp. 1–6, August 2019, Article No. 29

  60. Idemia: Artificial Intelligence is all around us. https://www.idemia.com/news/artificial-intelligence-all-around-us-2018-02-27. Accessed 04 Apr 2022

  61. Incze, R.: The Cost of Machine Learning Projects (2019). https://medium.com/cognifeed/the-cost-of-machine-learning-projects-7ca3aea03a5c. Accessed 04 Apr 2022

  62. Ingleton, R.D.: Mission Incomprehensible: The Linguistic Barrier to Effective Police Cooperation in Europe (1994)

  63. Jacobs, M., Pradier, M.F., McCoy, T.H., Perlis, R.H., Doshi-Velez, F., Gajos, K.Z.: How machine-learning recommendations influence clinician treatment selections: the example of antidepressant selection. Transl. Psychiatry 11(108) (2021). https://www.nature.com/articles/s41398-021-01224-x. Accessed 04 Apr 2022

  64. Kaminski, M.: The right to explanation, explained. Berkeley Tech. Law J. 34, 189–218 (2019)

  65. Lee, M.S.A.: Risk identification questionnaire for detecting unintended bias in the machine learning development lifecycle. In: AIES 2021: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pp. 704–714, July 2021

  66. Legg, S., Hutter, M.: Universal intelligence: a definition of machine intelligence. Mind. Mach. 17, 391–444 (2007)

  67. Liao, Q.V., Gruen, D., Miller, S.: Questioning the AI: informing design practices for explainable AI user experiences. In: ACM CHI Conference on Human Factors in Computing Systems (CHI 2020) (2020)

  68. Lynskey, O.: Criminal justice profiling and EU data protection law: precarious protection from predictive policing. Int. J. Law Context 15(2), 162–176 (2019)

  69. Malgieri, G., Comande, G.: Why a right to legibility of automated decision-making exists in the general data protection regulation. Int. Data Priv. Law 7(4), 243–265 (2017)

  70. Malgieri, G.: Automated decision-making in the EU member states: the right to explanation and other ‘suitable safeguards’ in the national legislations. CSLR 35, 1–26 (2018)

  71. Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., Galstyan, A.: A survey on bias and fairness in machine learning. ACM Comput. Surv. 54(6), 1–35 (2021). https://doi.org/10.1145/3457607. Accessed 04 Apr 2022

  72. Ntoutsi, E., et al.: Bias in data-driven artificial intelligence systems – an introductory survey. WIREs 10(3), e1356 (2020)

  73. Palantir: Gotham. https://www.palantir.com/platforms/gotham/. Accessed 04 Apr 2022

  74. Paltrinieri, N., Comfort, L., Reniers, G.: Learning about risk: machine learning for risk assessment. Saf. Sci. 118, 475–486 (2019)

  75. Pasquini, C., Böhme, R.: Trembling triggers: exploring the sensitivity of backdoors in DNN-based face recognition. EURASIP J. Inf. Secur. 1, 1–15 (2020)

  76. Poursabzi-Sangdeh, F., Goldstein, D.G., Hofman, J.M., Vaughan, J.W., Wallach, H.: Manipulating and measuring model interpretability. In: CHI 2021: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1–52, May 2021

  77. Practitioner’s Guide to COMPAS Core. Equivant, 4 April 2019. https://www.equivant.com/wp-content/uploads/Practitioners-Guide-to-COMPAS-Core-040419.pdf. Accessed 04 Apr 2022

  78. Rai, A.: Explainable AI: from black box to glass box. J. Acad. Mark. Sci. 48(1), 137–141 (2019). https://doi.org/10.1007/s11747-019-00710-5

  79. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ L 119/1 (GDPR) (2016)

  80. Regulation (EU) 2016/794 of the European Parliament and of the Council of 11 May 2016 on the European Union Agency for Law Enforcement Cooperation (Europol) and replacing and repealing Council Decisions 2009/371/JHA, 2009/934/JHA, 2009/935/JHA, 2009/936/JHA and 2009/968/JHA, OJ L 135/53 (2016)

  81. Regulation (EU) 2018/1240 of the European Parliament and of the Council of 12 September 2018 establishing a European Travel Information and Authorisation System (ETIAS) and amending Regulations (EU) No 1077/2011, (EU) No 515/2014, (EU) 2016/399, (EU) 2016/1624 and (EU) 2017/2226, OJ L 236/1 (2018)

  82. Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001, OJ L 295/39 (2018)

  83. Regulation (EU) 2018/1862 of the European Parliament and of the Council of 28 November 2018 on the establishment, operation and use of the Schengen Information System (SIS) in the field of police cooperation and judicial cooperation in criminal matters, amending and repealing Council Decision 2007/533/JHA, and repealing Regulation (EC) No 1986/2006 of the European Parliament and of the Council and Commission Decision 2010/261/EU, OJ L 312/56 (2018)

  84. Regulation (EU) 2021/1134 of the European Parliament and of the Council of 7 July 2021 amending Regulations (EC) No 767/2008, (EC) No 810/2009, (EU) 2016/399, (EU) 2017/2226, (EU) 2018/1240, (EU) 2018/1860, (EU) 2018/1861, (EU) 2019/817 and (EU) 2019/1896 of the European Parliament and of the Council and repealing Council Decisions 2004/512/EC and 2008/633/JHA, for the purpose of reforming the Visa Information System, OJ L 248/11 (2021)

  85. Rich, M.L.: Machine learning, automated suspicion algorithms, and the fourth amendment. Univ. Pa. Law Rev. 164, 871–929 (2016)

  86. Selbst, A.D., Powles, J.: Meaningful information and the right to explanation. Int. Data Priv. Law 7(4), 233–242 (2017)

  87. Sidiroglou-Douskos, S., Misailovic, S., Hoffmann, H., Rinard, M.: Managing performance vs. accuracy trade-offs with loop perforation. In: Proceedings of the 19th ACM SIGSOFT Symposium and the 13th European Conference on Foundations of Software Engineering, pp. 124–134 (2011)

  88. Sopra Steria: Artificial Intelligence. https://www.soprasteria.de/services/technology-services/artificial-intelligence. Accessed 04 Apr 2022

  89. Sopra Steria: Press release “IDEMIA and Sopra Steria chosen by eu-LISA to build the new Shared Biometric Matching System (sBMS) for border protection of the Schengen Area”. https://www.soprasteria.com/newsroom/press-releases/details/idemia-and-sopra-steria-chosen-by-eu-lisa-to-build-the-new-shared-biometric-matching-system-(sbms)-for-border-protection-of-the-schengen-area. Accessed 04 Apr 2022

  90. Statewatch: EU: Legislators must put the brakes on big data plans for Europol (2022). https://www.statewatch.org/news/2022/february/eu-legislators-must-put-the-brakes-on-big-data-plans-for-europol/. Accessed 04 Apr 2022

  91. Teoh, E.R., Kidd, D.G.: Rage against the machine? Google’s self-driving cars versus human drivers. J. Saf. Res. 63, 57–60 (2017)

  92. Valverde-Albacete, F.J., Peláez-Moreno, C.: 100% classification accuracy considered harmful: the normalized information transfer factor explains the accuracy paradox. PLoS ONE 10 (2014). https://doi.org/10.1371/journal.pone.0084217. Accessed 04 Apr 2022

  93. Vavoula, N.: Artificial intelligence (AI) at Schengen borders: automated processing, algorithmic profiling and facial recognition in the era of techno-solutionism. EJML 23, 457–484 (2021)

  94. Veale, M., Zuiderveen Borgesius, F.: Demystifying the draft EU artificial intelligence act. Analysing the good, the bad, and the unclear elements of the proposed approach. Comput. Law Rev. Int. 4, 97–112 (2021)

  95. Wachter, S., Mittelstadt, B., Floridi, L.: Why a right to explanation of automated decision-making does not exist in the general data protection regulation. Int. Data Priv. Law 7(2), 76–99 (2017)

  96. Žliobaitė, I.: Learning under concept drift: an overview. Technical report 2009, Vilnius University (2010). https://arxiv.org/abs/1010.4784. Accessed 04 Apr 2022

  97. Zou, J., Schiebinger, L.: AI can be sexist and racist — it’s time to make it fair. Nature (2018). https://www.nature.com/articles/d41586-018-05707-8. Accessed 04 Apr 2022


Acknowledgements

The authors would like to thank Tobias Kupek, Rainer Böhme and our anonymous reviewers for valuable remarks.

Franziska Boehm and Paulina Jo Pesch participate in the project INDIGO (Information in the EU’s Digitalized Governance). The project is financially supported by the NORFACE Joint Research Programme on Democratic Governance in a Turbulent Age and co-funded by AEI, AKA, DFG and FNR, and the European Commission through Horizon 2020 under grant agreement No. 822166.


Author information

Correspondence to Paulina Jo Pesch.


Copyright information

© 2022 Springer Nature Switzerland AG

About this paper


Cite this paper

Jo Pesch, P., Dimitrova, D., Boehm, F. (2022). Data Protection and Machine-Learning-Supported Decision-Making at the EU Border: ETIAS Profiling Under Scrutiny. In: Gryszczyńska, A., Polański, P., Gruschka, N., Rannenberg, K., Adamczyk, M. (eds) Privacy Technologies and Policy. APF 2022. Lecture Notes in Computer Science(), vol 13279. Springer, Cham. https://doi.org/10.1007/978-3-031-07315-1_4

  • DOI: https://doi.org/10.1007/978-3-031-07315-1_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-07314-4

  • Online ISBN: 978-3-031-07315-1