Original Text

After reading "Excavating AI: The Politics of Images in Machine Learning Training Sets" and "Humans of AI," I have developed a rather pessimistic perspective on universal values. Specifically, I believe that the harms and biases described in these articles, those that arise during data collection and AI training, are in fact inherent to human nature and therefore unavoidable. These biases are the result of collective choices made by humanity as a whole, and in such choices the interests of individuals are often subordinated to the interests of the collective.

First, I would like to discuss my understanding of the history of Western physiognomy from the perspective of a photographer. The modern popularity of physiognomy can be traced to the Victorian era in Europe, when people made quick judgments about an individual's character and abilities based on facial features and bone structure. This period coincided with the emergence of large modern cities, such as Paris, and the advent of public transportation. As a result, many people migrated from rural communities, where they knew their neighbors intimately, to large urban centers composed of strangers. This transition created a pressing need for a quick, accessible knowledge system that could provide instant assessments of unfamiliar individuals. The accuracy of these judgments did not need to be particularly high: in an urban environment where people encounter large numbers of strangers daily, physiognomy only needed to maintain a certain level of effectiveness, say around 40% accuracy, to be considered useful.

For instance, if I were to see a person with a face full of scars, I might judge them to be a violent criminal and choose to keep my distance. Even if my judgment were incorrect, the personal cost to me would be minimal. Naturally, individuals with malicious intent could exploit the errors in such heuristic judgments for their own gain, but from a macro perspective this remains a low-probability scenario, as we do not live in a society composed entirely of fraudsters. Thus, the ability to make quick assessments of others has historically been more beneficial than harmful. Consequently, the biases generated by this practice have persisted into the present, manifesting as the same kind of imprecise errors in our contemporary datasets.

Second, I believe that with the emergence of large collectives there inevitably arises an organizational entity akin to a government, which regulates and manages affairs at the macro level. Assuming an ideal government, one free of corruption and abuse of power, I argue that the government functions as a machine designed to resolve trolley-problem dilemmas on behalf of the collective: it ensures that a given action or decision is sufficiently effective by balancing costs against benefits. Training a model is undeniably costly, and two observations follow. First, I find it logically impossible to train a model that is entirely non-harmful or unbiased. Second, for governments and large corporations, what matters most is that a model is sufficiently effective and ensures profitability. For example, during the Tokyo Olympics, numerous computer vision models were employed to flag potentially dangerous behavior. The labor costs these models saved substantially outweighed the costs incurred from misclassifying individual behaviors, as evidenced by the growing use of computer vision in security screening at large commercial events. Ideally, the savings generated by these models would be reallocated to more productive areas. Thus, the job positions lost in security screening, and the cases of misjudgment along the way, are in a sense inevitable.

Of course, I personally believe that the lingering influence of Victorian-era pseudoscience will have a profoundly negative impact on us. Ultimately, everything is a matter of probability. Yet this trajectory is one that we, as humanity, have collectively chosen. If one day we are replaced by machines, it will be a consequence of our own decisions, though not necessarily of mine.