How Democratic is the Implementation of Biometric Face Recognition Technology for Boarding Verification in Intercity Train Services?

An analysis of the case of PT KAI.


Land transportation, especially rail, has been a big part of Indonesia's development. The railroad is essential to Indonesia's industry, moving goods and raw materials, and it is also one of the cheapest inter-province modes of public transportation, carrying an average of 6,000 passengers per month (Badan Pusat Statistik, no date). Choosing trains over airplanes and private cars brings economic benefits, cost efficiency, and less environmental damage (Jones and Lucas, 2012). The increased demand for good services and for innovation inspired Indonesia's state-owned train company, Kereta Api Indonesia (KAI), to introduce many innovations (Sejumlah Inovasi KAI Hadirkan pada Usia ke-77 Tahun, no date), but one of them is raising questions and concerns: the use of biometric face recognition to verify passenger data at the boarding gates of intercity train services. Biometric face recognition increases the risk of passenger data violations, and its introduction lacked a democratic approach. This essay analyzes the implementation of the technology using the power over/power to framework, explores the incertitude surrounding the implementation, and identifies the best precautionary approach to make it more democratic.

The reasons behind the biometric technology implementation

Since KAI is a state-owned company and the only provider of inner- and inter-city train services, its actions and implementations must be accountable to the public. According to KAI's official website, there are two main reasons for the new biometric passenger verification technology:

First, to shorten passenger queues at the boarding gate. While this reason seems fair and a solution to a problem, a search of KAI's customer care account on X (formerly Twitter) shows no complaints about boarding-gate queues (Twitter search, 2023). The complaints sent to the official account (@KAI121) before the implementation of face recognition were mainly about narrow seating (Penumpang Kereta Jarak Jauh Keluhkan Kursi Sempit, no date), fully booked trains (Wanita Ini Keluhkan Kapasitas Penumpang KAI Tanpa Kursi yang Membludak Sampai Sulit Bergerak – Detik Sumsel – Halaman 2, no date), and ticket prices (Keluhkan Harga Tiket Kereta Api yang Tak Wajar, Pelanggan Ancam Beralih ke Bus, no date). This shows that queuing at boarding is not a top priority among passenger complaints. Moreover, some passengers are reluctant to register their data on the biometric system because of data security threats (Rahmada, 2023).

Second, KAI claimed that installing biometric face recognition would boost security inside stations and carriages. The technology could serve as a surveillance tool to identify thieves or abusers involved in incidents on trains (Teknologi Face Recognition KAI, no date). While this is a serious problem and the solution seems promising, face recognition accuracy remains a major concern for technology experts because of high false-positive rates and algorithmic racial bias (Dauvergne, no date; Why new facial-recognition airport screenings are raising concerns | CU Boulder Today | University of Colorado Boulder, no date; Etzioni, 2018). Moreover, biometric face recognition is not the primary tool for crime surveillance on trains: CCTV is more likely to help catch a culprit and to serve as the main source of evidence. However, a recent complaint from a passenger who lost their bag on a train shows that CCTV footage cannot be reviewed at the time of an incident; the passenger and the security team can access it only after the train arrives at its designated station, by which time the chance of the culprit escaping is higher (Ghaisani, 2023). In other words, biometric face recognition depends on other surveillance technologies; if those are unreliable, the goal of better passenger security is also hard to achieve.

KAI's decision-making process for the biometric technology is not available for the public to see. Its official website explains why KAI implemented face recognition, but offers no report on the urgency or the evidence, such as the number of queuing cases at the boarding gates or passenger complaints about them. This suggests that KAI adopted face recognition without reasonable justification or public demand. KAI asserts its power as a service provider to exert coercion and manipulation (power over) on its passengers (Etzioni, 2018). KAI used its capability to predetermine the technology without an open decision-making process with its most important stakeholders, the passengers; in other words, KAI exercises power over (Avelino, 2021) by knowing, or ignoring, the risks that can affect them. Moreover, although KAI still allows passengers who do not consent to register their biometric data to board, it also manipulates passengers through a nudge: those who register for face recognition use a nearby boarding gate, while those who decline must walk to a farther one (Gate Selatan Stasiun Gambir Hanya Layani Boarding Pengenalan Wajah, no date; Face Recognition Boarding Dinilai Ganggu Privasi, KAI: Bisa Boarding Manual, no date).

Although the intention is not bad, and KAI wants to digitalize the verification of its passengers, advances in technology come with new uncertainties and failures, and such enthusiasm can bring more harm than good for passengers (Jasanoff, 2003; Pestre, 2009). Next, we will analyze the aspects of incertitude in KAI's face recognition biometrics.

Aspects of incertitude of biometric face recognition technology

What can we identify as the incertitude of adopting biometric face recognition as a method of verification? In this second part of the essay, I will analyze the aspects of incertitude using the framework from Andy Stirling's mini lecture, shown in Figure 1 below.

Figure 1: Aspects of Incertitude (Stirling, 2019)

The use of biometric face recognition involves both possibilities that can be defined and possibilities that remain undefined. Engineering failures such as algorithm errors, database errors, or camera errors can be categorized as risks, because their possibilities are well defined. Meanwhile, data breaches, data leaks, and fraudulent use of data belong to uncertainty. Data breaches are an empirical concern: there were 79 data breach cases in Indonesia from 2019 to 2023, and 35 of those 79 occurred in 2023 alone (Deretan Kasus Kebocoran Data Pribadi di Indonesia Sepanjang 2022-2023, no date). On this matter, KAI claimed that passenger data is protected by good information security management and that the security will be updated gradually (Terapkan Face Recognition, Boarding Kini Cukup Pindai Wajah, no date). However, KAI did not explain how, or what the security system is. The public does not know whether they can delete their data from the KAI database, which means that KAI wholly owns the biometric data and passengers have no power over the use and protection of their own data.

The theoretical uncertainty of biometric face recognition is what experts have called the 'mounting evidence of privacy violation and rights abuses' (Dauvergne, no date). Stolen data can be sold by criminals to any individual or entity that would benefit from it, whether for surveillance, politics, or impersonation using a stolen identity. Moreover, Indonesia also has online loans (pinjol), through which people can obtain a loan from banks, cooperatives, or fintech start-ups just by submitting a photo of their ID and a selfie holding the ID (Apa Itu Pinjol: Arti, Jenis, no date). In theory, any individual or organization holding biometric data could use a stolen ID and its owner's face to apply for an online loan. Such misuse could harm passengers in the future, causing losses to their finances and well-being.

Other possibilities left undefined by KAI are ambiguity and ignorance; both cover things that are not yet on decision-makers' radar or are simply ignored. One ambiguity concerns the law: what constitution or legislation will govern the use of biometric face recognition in the future? We have laws and regulations on racial discrimination, transgender rights, and religious discrimination, but what if the perpetrator of such discrimination is a machine or an algorithm (Why new facial-recognition airport screenings are raising concerns | CU Boulder Today | University of Colorado Boulder, no date)? Because of the limits of this work and research, this essay has yet to explore other possibilities from the different viewpoints of experts, passengers, and vulnerable communities such as people with disabilities. Furthermore, in industrial technology and innovation today, the safety of new technologies is delegated chiefly to producers (Jasanoff, 2003), while the accountability of the implementer who adopts the technology is rarely discussed; this, too, is part of the undefined possibilities in the aspects of incertitude.

How to democratize KAI's new technology with a practical precautionary framework

In this section, I will explore what KAI can do to democratize the technology for its passengers using the 'Precaution in the Governance of Technology' framework (Stirling, 2016). This precautionary approach has two important promises: first, to 'broaden the attention to the diversity of options', and second, to 'open up' a 'vibrant, mature, and robust policy' over the implications of different interpretations and uncertainty (Stirling, 2016).

Figure 2: Precaution in the Governance of Technology (Stirling, 2016)

KAI, as a state-owned company, needs to prepare communication channels to support engagement with public stakeholders at every stage (Stirling, 2016). The first step is screening. Here, KAI explores the severe and unambiguous threats of biometric face recognition (Stirling, 2016) through open discussion with data security and engineering experts, public transportation experts, and other public stakeholders. It is necessary that the technical input to the problem be developed independently of political influence (Jasanoff, 2003). From the discussion, KAI can then create a 'presumption of prevention': restrictive management measures to mitigate all the risks and unambiguous threats.

In the second step, KAI opens up on the uncertain threats that may emerge. The first question is: is there any scientific uncertainty? If yes, KAI needs to conduct a 'precautionary appraisal' through 'transdisciplinary engagement, humility on science, targeted research, and pros and cons of the alternatives' (Stirling, 2016) with experts, scientists, the transportation community, and the general public, to see whether threats may arise in the future. The second question is: is there a socio-politically ambiguous threat? If yes, KAI should deliberate through 'citizen participation, stakeholder negotiation, inclusivity, accessibility, and representativeness' (Stirling, 2016). The deliberation process will be more pervasive and heterogeneous, increasing the demands for greater public involvement (Jasanoff, 2003). Participation also helps to open democratic doors, although it is not a panacea: participation is not immune from powerful interests. KAI's public participation needs to guard against top-down legitimation exercises, acknowledge the powers influencing the transparency of participation, and ensure that participatory discussion affects policy and implementation (Rowe and Frewer, 2000).

If there is no socio-politically ambiguous threat, KAI has to carry out a 'risk assessment' for 'rigorous, peer-reviewed, evidence-based, transparent, professional, and comprehensive' decision-making and implementation (Stirling, 2016). The technology must be based on actual evidence: KAI should show the number of complaints, the urgency of implementing biometric face recognition, and that no alternative offers more benefit and less risk for both the company and its passengers. The technology must be peer-reviewed before implementation, and all discussion results must be accessible. This practice is consistent with Harrison (1993), who suggests that a company can be a 'model of openness' and urges companies to submit their activities to public scrutiny and to remain in open dialogue with interest groups and the public in general (cited in Spiegelhalter and Riesch, 2011).

Each screening and appraisal step is communicated to the public and stakeholders. KAI then evaluates by weighing benefits against costs, setting tolerability thresholds, and adjusting purposes before finally choosing the technology, followed by setting a standard for monitoring the implementation.


The decision by the state-owned train company to install biometric face recognition for passenger verification, replacing manual verification by humans, raises the question of whether the company took a democratic approach to its chosen technology before making the decision. KAI claimed that the decision would alleviate boarding queues and increase security inside stations and trains. However, the decision is not based on evidence, and the implementation puts more risk on passengers than it delivers benefit.

The aspects of incertitude of KAI's biometric face recognition reveal many possible threats from the implementation that have been ignored and never communicated to or discussed with the public and interested stakeholders. To take a democratic approach to implementing the new boarding technology, KAI can practice precaution by opening up to multiple perspectives from various stakeholders and the public. Such precaution can help KAI make better and more democratic decisions for its services.


Apa Itu Pinjol: Arti, Jenis, Cara Membedakan Pinjol Legal dan Ilegal (no date). Available at: (Accessed: 22 October 2023).

Avelino, F. (2021) ‘Theories of power and social change. Power contestations and their implications for research on social change and innovation’, Journal of Political Power, 14(3), pp. 425–448. Available at:

Badan Pusat Statistik (no date). Available at: (Accessed: 22 October 2023).

Dauvergne, P. (no date) Identified, tracked, and profiled: the politics of resisting facial recognition technology.

Deretan Kasus Kebocoran Data Pribadi di Indonesia Sepanjang 2022-2023 (no date). Available at: (Accessed: 22 October 2023).

Etzioni, A. (2018) China and the Lessons of Modern Surveillance Technology. Available at:

Face Recognition Boarding Dinilai Ganggu Privasi, KAI: Bisa Boarding Manual (no date). Available at: (Accessed: 22 October 2023).

Gate Selatan Stasiun Gambir Hanya Layani Boarding Pengenalan Wajah (no date). Available at: (Accessed: 22 October 2023).

Jasanoff, S. (2003) ‘Technologies of Humility: Citizen Participation in Governing Science’, Minerva, 41(3), pp. 223–244. Available at:

Jones, P. and Lucas, K. (2012) ‘The social consequences of transport decision-making: Clarifying concepts, synthesising knowledge and assessing implications’, Journal of Transport Geography, 21, pp. 4–16. Available at:

Keluhkan Harga Tiket Kereta Api yang Tak Wajar, Pelanggan Ancam Beralih ke Bus (no date). Available at: (Accessed: 22 October 2023).

Penumpang Kereta Jarak Jauh Keluhkan Kursi Sempit, Sebut Tak Nyaman Dengkul Beradu dengan Orang Lain (no date). Available at: (Accessed: 22 October 2023).

Pestre, D. (2009) ‘Understanding the Forms of Government in Today’s Liberal and Democratic Societies: An Introduction’, Minerva, 47(3), pp. 243–260. Available at:

Rowe, G. and Frewer, L.J. (2000) ‘Public Participation Methods: A Framework for Evaluation’, Science, Technology, & Human Values, 25(1), pp. 3–29.

Sejumlah Inovasi KAI Hadirkan pada Usia ke-77 Tahun (no date). Available at: (Accessed: 22 October 2023).

Spiegelhalter, D.J. and Riesch, H. (2011) ‘Don’t know, can’t know: Embracing deeper uncertainties when analysing risks’, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. Royal Society, pp. 4730–4750. Available at:

Stirling, A. (2016) Precaution in the Governance of Technology. Available at:

Stirling, A. (2019) Lecture slides: ‘Towards innovation democracies’ as part of the module Democratising Science and Technology’, University of Sussex. 

Teknologi Face Recognition KAI, Manfaat atau Mudarat? (no date). Available at: (Accessed: 22 October 2023).

Terapkan Face Recognition, Boarding Kini Cukup Pindai Wajah (no date). Available at: (Accessed: 22 October 2023).

twitter.com/wrahmada/status/1708646472542286084?s=52&t=x0w2by8bydrmwfeaxg5lcw (no date) X (formerly Twitter). Available at: (Accessed: 22 October 2023).

twitter.com/goyaaeo/status/1713161202371895362?s=52&t=x0w2by8bydrmwfeaxg5lcw (no date) X (formerly Twitter). Available at: (Accessed: 22 October 2023).

twitter.com/search?q=antrian%20boarding%20%22boarding%22%20(antrian%20OR%20boarding%20OR%20stasiun)%20(to%3Akai121)%20(%40kai121)%20until%3A2020-08-01%20since%3A2022-08-01&src=typed_query (no date) X (formerly Twitter). Available at: (Accessed: 22 October 2023).

Wanita Ini Keluhkan Kapasitas Penumpang KAI Tanpa Kursi yang Membludak Sampai Sulit Bergerak, Gak Bahaya Ta? – Detik Sumsel – Halaman 2 (no date). Available at: (Accessed: 22 October 2023).

Why new facial-recognition airport screenings are raising concerns | CU Boulder Today | University of Colorado Boulder (no date). Available at: (Accessed: 22 October 2023).

Writer’s note:
This essay was written in October 2023; the technology and policy may have been updated since.

